* [PATCH v5 00/17] Imagination Technologies PowerVR DRM driver
@ 2023-08-16  8:25 ` Sarah Walker
  0 siblings, 0 replies; 77+ messages in thread
From: Sarah Walker @ 2023-08-16  8:25 UTC (permalink / raw)
  To: dri-devel
  Cc: matthew.brost, luben.tuikov, tzimmermann, linux-kernel, mripard,
	afd, boris.brezillon, dakr, donald.robson, hns, christian.koenig,
	faith.ekstrand

This patch series adds the initial DRM driver for Imagination Technologies PowerVR
GPUs, starting with those based on our Rogue architecture. It's worth pointing
out that this is a new driver, written from the ground up, rather than a
refactored version of our existing downstream driver (pvrsrvkm).

This new DRM driver supports:
- GEM shmem allocations
- dma-buf / PRIME
- Per-context userspace managed virtual address space
- DRM sync objects (binary and timeline)
- Power management suspend / resume
- GPU job submission (geometry, fragment, compute, transfer)
- META firmware processor
- MIPS firmware processor
- GPU hang detection and recovery

Currently our main focus is on the AXE-1-16M GPU. Testing so far has been done
using a TI SK-AM62 board (AXE-1-16M GPU). Firmware for the AXE-1-16M can be
found here:
https://gitlab.freedesktop.org/frankbinns/linux-firmware/-/tree/powervr

A Vulkan driver that works with our downstream kernel driver has already been
merged into Mesa [1][2]. Support for this new DRM driver is being maintained in
a merge request [3], with the branch located here:
https://gitlab.freedesktop.org/frankbinns/mesa/-/tree/powervr-winsys

Job stream formats are documented at:
https://gitlab.freedesktop.org/mesa/mesa/-/blob/f8d2b42ae65c2f16f36a43e0ae39d288431e4263/src/imagination/csbgen/rogue_kmd_stream.xml

The Vulkan driver is progressing towards Vulkan 1.0. We're code complete, and
are working towards passing conformance. The current combination of this kernel
driver with the Mesa Vulkan driver (powervr-mesa-next branch) achieves 88.3% conformance.

The code in this patch series, along with some of its history, can also be found here:
https://gitlab.freedesktop.org/frankbinns/powervr/-/tree/powervr-next

This patch series has dependencies on a number of patches not yet merged. They
are listed below:

drm/sched: Convert drm scheduler to use a work queue rather than kthread:
  https://lore.kernel.org/dri-devel/20230404002211.3611376-2-matthew.brost@intel.com/
drm/sched: Move schedule policy to scheduler / entity:
  https://lore.kernel.org/dri-devel/20230404002211.3611376-3-matthew.brost@intel.com/
drm/sched: Add DRM_SCHED_POLICY_SINGLE_ENTITY scheduling policy:
  https://lore.kernel.org/dri-devel/20230404002211.3611376-4-matthew.brost@intel.com/
drm/sched: Start run wq before TDR in drm_sched_start:
  https://lore.kernel.org/dri-devel/20230404002211.3611376-6-matthew.brost@intel.com/
drm/sched: Submit job before starting TDR:
  https://lore.kernel.org/dri-devel/20230404002211.3611376-7-matthew.brost@intel.com/
drm/sched: Add helper to set TDR timeout:
  https://lore.kernel.org/dri-devel/20230404002211.3611376-8-matthew.brost@intel.com/

[1] https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/15243
[2] https://gitlab.freedesktop.org/mesa/mesa/-/tree/main/src/imagination/vulkan
[3] https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/15507

High level summary of changes:

v5:
* Retrieve GPU device information from firmware image header
* Address issues with DT binding and example DTS
* Update VM code for upstream GPU VA manager
* BOs are always zeroed on allocation
* Update copyright

v4:
* Implement hang recovery via firmware hard reset
* Add support for partial render jobs
* Move to a threaded IRQ
* Remove unnecessary read/write and clock helpers
* Remove device tree elements not relevant to AXE-1-16M
* Clean up resource acquisition
* Remove unused DT binding attributes

v3:
* Use drm_sched for scheduling
* Use GPU VA manager
* Use runtime PM
* Use drm_gem_shmem
* GPU watchdog and device loss handling
* DT binding changes: remove unused attributes, add additionalProperties: false

v2:
* Redesigned and simplified UAPI based on RFC feedback from XDC 2022
* Support for transfer and partial render jobs
* Support for timeline sync objects

RFC v1: https://lore.kernel.org/dri-devel/20220815165156.118212-1-sarah.walker@imgtec.com/

RFC v2: https://lore.kernel.org/dri-devel/20230413103419.293493-1-sarah.walker@imgtec.com/

v3: https://lore.kernel.org/dri-devel/20230613144800.52657-1-sarah.walker@imgtec.com/

v4: https://lore.kernel.org/dri-devel/20230714142355.111382-1-sarah.walker@imgtec.com/

Matt Coster (1):
  sizes.h: Add entries between 32G and 64T

Sarah Walker (16):
  dt-bindings: gpu: Add Imagination Technologies PowerVR GPU
  drm/imagination/uapi: Add PowerVR driver UAPI
  drm/imagination: Add skeleton PowerVR driver
  drm/imagination: Get GPU resources
  drm/imagination: Add GPU register and FWIF headers
  drm/imagination: Add GPU ID parsing and firmware loading
  drm/imagination: Add GEM and VM related code
  drm/imagination: Implement power management
  drm/imagination: Implement firmware infrastructure and META FW support
  drm/imagination: Implement MIPS firmware processor and MMU support
  drm/imagination: Implement free list and HWRT create and destroy
    ioctls
  drm/imagination: Implement context creation/destruction ioctls
  drm/imagination: Implement job submission and scheduling
  drm/imagination: Add firmware trace to debugfs
  drm/imagination: Add driver documentation
  arm64: dts: ti: k3-am62-main: Add GPU device node [DO NOT MERGE]

 .../devicetree/bindings/gpu/img,powervr.yaml  |   75 +
 Documentation/gpu/drivers.rst                 |    2 +
 Documentation/gpu/imagination/index.rst       |   14 +
 Documentation/gpu/imagination/uapi.rst        |  174 +
 .../gpu/imagination/virtual_memory.rst        |  462 ++
 MAINTAINERS                                   |   10 +
 arch/arm64/boot/dts/ti/k3-am62-main.dtsi      |    9 +
 drivers/gpu/drm/Kconfig                       |    2 +
 drivers/gpu/drm/Makefile                      |    1 +
 drivers/gpu/drm/imagination/Kconfig           |   16 +
 drivers/gpu/drm/imagination/Makefile          |   35 +
 drivers/gpu/drm/imagination/pvr_ccb.c         |  641 ++
 drivers/gpu/drm/imagination/pvr_ccb.h         |   71 +
 drivers/gpu/drm/imagination/pvr_cccb.c        |  267 +
 drivers/gpu/drm/imagination/pvr_cccb.h        |  109 +
 drivers/gpu/drm/imagination/pvr_context.c     |  460 ++
 drivers/gpu/drm/imagination/pvr_context.h     |  205 +
 drivers/gpu/drm/imagination/pvr_debugfs.c     |   53 +
 drivers/gpu/drm/imagination/pvr_debugfs.h     |   29 +
 drivers/gpu/drm/imagination/pvr_device.c      |  651 ++
 drivers/gpu/drm/imagination/pvr_device.h      |  704 ++
 drivers/gpu/drm/imagination/pvr_device_info.c |  253 +
 drivers/gpu/drm/imagination/pvr_device_info.h |  185 +
 drivers/gpu/drm/imagination/pvr_drv.c         | 1515 ++++
 drivers/gpu/drm/imagination/pvr_drv.h         |  129 +
 drivers/gpu/drm/imagination/pvr_free_list.c   |  625 ++
 drivers/gpu/drm/imagination/pvr_free_list.h   |  195 +
 drivers/gpu/drm/imagination/pvr_fw.c          | 1470 ++++
 drivers/gpu/drm/imagination/pvr_fw.h          |  508 ++
 drivers/gpu/drm/imagination/pvr_fw_info.h     |  135 +
 drivers/gpu/drm/imagination/pvr_fw_meta.c     |  554 ++
 drivers/gpu/drm/imagination/pvr_fw_meta.h     |   14 +
 drivers/gpu/drm/imagination/pvr_fw_mips.c     |  250 +
 drivers/gpu/drm/imagination/pvr_fw_mips.h     |   38 +
 .../gpu/drm/imagination/pvr_fw_startstop.c    |  301 +
 .../gpu/drm/imagination/pvr_fw_startstop.h    |   13 +
 drivers/gpu/drm/imagination/pvr_fw_trace.c    |  515 ++
 drivers/gpu/drm/imagination/pvr_fw_trace.h    |   78 +
 drivers/gpu/drm/imagination/pvr_gem.c         |  396 ++
 drivers/gpu/drm/imagination/pvr_gem.h         |  184 +
 drivers/gpu/drm/imagination/pvr_hwrt.c        |  549 ++
 drivers/gpu/drm/imagination/pvr_hwrt.h        |  165 +
 drivers/gpu/drm/imagination/pvr_job.c         |  770 ++
 drivers/gpu/drm/imagination/pvr_job.h         |  161 +
 drivers/gpu/drm/imagination/pvr_mmu.c         | 2523 +++++++
 drivers/gpu/drm/imagination/pvr_mmu.h         |  108 +
 drivers/gpu/drm/imagination/pvr_params.c      |  147 +
 drivers/gpu/drm/imagination/pvr_params.h      |   72 +
 drivers/gpu/drm/imagination/pvr_power.c       |  421 ++
 drivers/gpu/drm/imagination/pvr_power.h       |   39 +
 drivers/gpu/drm/imagination/pvr_queue.c       | 1455 ++++
 drivers/gpu/drm/imagination/pvr_queue.h       |  179 +
 .../gpu/drm/imagination/pvr_rogue_cr_defs.h   | 6193 +++++++++++++++++
 .../imagination/pvr_rogue_cr_defs_client.h    |  159 +
 drivers/gpu/drm/imagination/pvr_rogue_defs.h  |  179 +
 drivers/gpu/drm/imagination/pvr_rogue_fwif.h  | 2208 ++++++
 .../drm/imagination/pvr_rogue_fwif_check.h    |  491 ++
 .../drm/imagination/pvr_rogue_fwif_client.h   |  371 +
 .../imagination/pvr_rogue_fwif_client_check.h |  133 +
 .../drm/imagination/pvr_rogue_fwif_common.h   |   60 +
 .../drm/imagination/pvr_rogue_fwif_dev_info.h |  112 +
 .../pvr_rogue_fwif_resetframework.h           |   28 +
 .../gpu/drm/imagination/pvr_rogue_fwif_sf.h   | 1648 +++++
 .../drm/imagination/pvr_rogue_fwif_shared.h   |  258 +
 .../imagination/pvr_rogue_fwif_shared_check.h |  108 +
 .../drm/imagination/pvr_rogue_fwif_stream.h   |   78 +
 .../drm/imagination/pvr_rogue_heap_config.h   |  113 +
 drivers/gpu/drm/imagination/pvr_rogue_meta.h  |  356 +
 drivers/gpu/drm/imagination/pvr_rogue_mips.h  |  335 +
 .../drm/imagination/pvr_rogue_mips_check.h    |   58 +
 .../gpu/drm/imagination/pvr_rogue_mmu_defs.h  |  136 +
 drivers/gpu/drm/imagination/pvr_stream.c      |  285 +
 drivers/gpu/drm/imagination/pvr_stream.h      |   75 +
 drivers/gpu/drm/imagination/pvr_stream_defs.c |  351 +
 drivers/gpu/drm/imagination/pvr_stream_defs.h |   16 +
 drivers/gpu/drm/imagination/pvr_sync.c        |  287 +
 drivers/gpu/drm/imagination/pvr_sync.h        |   84 +
 drivers/gpu/drm/imagination/pvr_vm.c          |  906 +++
 drivers/gpu/drm/imagination/pvr_vm.h          |   60 +
 drivers/gpu/drm/imagination/pvr_vm_mips.c     |  208 +
 drivers/gpu/drm/imagination/pvr_vm_mips.h     |   22 +
 include/linux/sizes.h                         |    9 +
 include/uapi/drm/pvr_drm.h                    | 1303 ++++
 83 files changed, 34567 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/gpu/img,powervr.yaml
 create mode 100644 Documentation/gpu/imagination/index.rst
 create mode 100644 Documentation/gpu/imagination/uapi.rst
 create mode 100644 Documentation/gpu/imagination/virtual_memory.rst
 create mode 100644 drivers/gpu/drm/imagination/Kconfig
 create mode 100644 drivers/gpu/drm/imagination/Makefile
 create mode 100644 drivers/gpu/drm/imagination/pvr_ccb.c
 create mode 100644 drivers/gpu/drm/imagination/pvr_ccb.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_cccb.c
 create mode 100644 drivers/gpu/drm/imagination/pvr_cccb.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_context.c
 create mode 100644 drivers/gpu/drm/imagination/pvr_context.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_debugfs.c
 create mode 100644 drivers/gpu/drm/imagination/pvr_debugfs.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_device.c
 create mode 100644 drivers/gpu/drm/imagination/pvr_device.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_device_info.c
 create mode 100644 drivers/gpu/drm/imagination/pvr_device_info.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_drv.c
 create mode 100644 drivers/gpu/drm/imagination/pvr_drv.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_free_list.c
 create mode 100644 drivers/gpu/drm/imagination/pvr_free_list.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_fw.c
 create mode 100644 drivers/gpu/drm/imagination/pvr_fw.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_fw_info.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_fw_meta.c
 create mode 100644 drivers/gpu/drm/imagination/pvr_fw_meta.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_fw_mips.c
 create mode 100644 drivers/gpu/drm/imagination/pvr_fw_mips.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_fw_startstop.c
 create mode 100644 drivers/gpu/drm/imagination/pvr_fw_startstop.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_fw_trace.c
 create mode 100644 drivers/gpu/drm/imagination/pvr_fw_trace.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_gem.c
 create mode 100644 drivers/gpu/drm/imagination/pvr_gem.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_hwrt.c
 create mode 100644 drivers/gpu/drm/imagination/pvr_hwrt.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_job.c
 create mode 100644 drivers/gpu/drm/imagination/pvr_job.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_mmu.c
 create mode 100644 drivers/gpu/drm/imagination/pvr_mmu.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_params.c
 create mode 100644 drivers/gpu/drm/imagination/pvr_params.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_power.c
 create mode 100644 drivers/gpu/drm/imagination/pvr_power.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_queue.c
 create mode 100644 drivers/gpu/drm/imagination/pvr_queue.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_rogue_cr_defs.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_rogue_cr_defs_client.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_rogue_defs.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_rogue_fwif.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_rogue_fwif_check.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_rogue_fwif_client.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_rogue_fwif_client_check.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_rogue_fwif_common.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_rogue_fwif_dev_info.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_rogue_fwif_resetframework.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_rogue_fwif_sf.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_rogue_fwif_shared.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_rogue_fwif_shared_check.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_rogue_fwif_stream.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_rogue_heap_config.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_rogue_meta.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_rogue_mips.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_rogue_mips_check.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_rogue_mmu_defs.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_stream.c
 create mode 100644 drivers/gpu/drm/imagination/pvr_stream.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_stream_defs.c
 create mode 100644 drivers/gpu/drm/imagination/pvr_stream_defs.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_sync.c
 create mode 100644 drivers/gpu/drm/imagination/pvr_sync.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_vm.c
 create mode 100644 drivers/gpu/drm/imagination/pvr_vm.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_vm_mips.c
 create mode 100644 drivers/gpu/drm/imagination/pvr_vm_mips.h
 create mode 100644 include/uapi/drm/pvr_drm.h

-- 
2.41.0


^ permalink raw reply	[flat|nested] 77+ messages in thread

* [PATCH v5 01/17] sizes.h: Add entries between 32G and 64T
  2023-08-16  8:25 ` Sarah Walker
@ 2023-08-16  8:25   ` Sarah Walker
  -1 siblings, 0 replies; 77+ messages in thread
From: Sarah Walker @ 2023-08-16  8:25 UTC (permalink / raw)
  To: dri-devel
  Cc: matthew.brost, luben.tuikov, tzimmermann, linux-kernel, mripard,
	afd, Matt Coster, boris.brezillon, dakr, donald.robson, hns,
	christian.koenig, faith.ekstrand

From: Matt Coster <matt.coster@imgtec.com>

Signed-off-by: Matt Coster <matt.coster@imgtec.com>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
---
 include/linux/sizes.h | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/include/linux/sizes.h b/include/linux/sizes.h
index 84aa448d8bb3..c3a00b967d18 100644
--- a/include/linux/sizes.h
+++ b/include/linux/sizes.h
@@ -47,8 +47,17 @@
 #define SZ_8G				_AC(0x200000000, ULL)
 #define SZ_16G				_AC(0x400000000, ULL)
 #define SZ_32G				_AC(0x800000000, ULL)
+#define SZ_64G				_AC(0x1000000000, ULL)
+#define SZ_128G				_AC(0x2000000000, ULL)
+#define SZ_256G				_AC(0x4000000000, ULL)
+#define SZ_512G				_AC(0x8000000000, ULL)
 
 #define SZ_1T				_AC(0x10000000000, ULL)
+#define SZ_2T				_AC(0x20000000000, ULL)
+#define SZ_4T				_AC(0x40000000000, ULL)
+#define SZ_8T				_AC(0x80000000000, ULL)
+#define SZ_16T				_AC(0x100000000000, ULL)
+#define SZ_32T				_AC(0x200000000000, ULL)
 #define SZ_64T				_AC(0x400000000000, ULL)
 
 #endif /* __LINUX_SIZES_H__ */
-- 
2.41.0


^ permalink raw reply related	[flat|nested] 77+ messages in thread

* [PATCH v5 02/17] dt-bindings: gpu: Add Imagination Technologies PowerVR GPU
  2023-08-16  8:25 ` Sarah Walker
@ 2023-08-16  8:25   ` Sarah Walker
  -1 siblings, 0 replies; 77+ messages in thread
From: Sarah Walker @ 2023-08-16  8:25 UTC (permalink / raw)
  To: dri-devel
  Cc: matthew.brost, luben.tuikov, tzimmermann, linux-kernel, mripard,
	afd, boris.brezillon, dakr, donald.robson, hns, christian.koenig,
	faith.ekstrand

Add the device tree binding documentation for the Series AXE GPU used in
TI AM62 SoCs.

Co-developed-by: Frank Binns <frank.binns@imgtec.com>
Signed-off-by: Frank Binns <frank.binns@imgtec.com>
Signed-off-by: Sarah Walker <sarah.walker@imgtec.com>
---
Changes since v4:
- Add clocks constraint for ti,am62-gpu
- Remove excess address and size cells in example
- Remove interrupt name and add maxItems
- Make property order consistent between dts and bindings doc
- Update example to match dts

Changes since v3:
- Remove oneOf in compatible property
- Remove power-supply (not used on AM62)

Changes since v2:
- Add commit message description
- Remove mt8173-gpu support (not currently supported)
- Drop quotes from $id and $schema
- Remove reg: minItems
- Drop _clk suffixes from clock-names
- Remove operating-points-v2 property and cooling-cells (not currently
  used)
- Add additionalProperties: false
- Remove stray blank line at the end of file

 .../devicetree/bindings/gpu/img,powervr.yaml  | 75 +++++++++++++++++++
 MAINTAINERS                                   |  7 ++
 2 files changed, 82 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/gpu/img,powervr.yaml

diff --git a/Documentation/devicetree/bindings/gpu/img,powervr.yaml b/Documentation/devicetree/bindings/gpu/img,powervr.yaml
new file mode 100644
index 000000000000..40ade5ceef6e
--- /dev/null
+++ b/Documentation/devicetree/bindings/gpu/img,powervr.yaml
@@ -0,0 +1,75 @@
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+# Copyright (c) 2023 Imagination Technologies Ltd.
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/gpu/img,powervr.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Imagination Technologies PowerVR GPU
+
+maintainers:
+  - Sarah Walker <sarah.walker@imgtec.com>
+
+properties:
+  compatible:
+    items:
+      - enum:
+          - ti,am62-gpu
+      - const: img,powervr-seriesaxe
+
+  reg:
+    maxItems: 1
+
+  clocks:
+    minItems: 1
+    maxItems: 3
+
+  clock-names:
+    items:
+      - const: core
+      - const: mem
+      - const: sys
+    minItems: 1
+
+  interrupts:
+    maxItems: 1
+
+  power-domains:
+    maxItems: 1
+
+required:
+  - compatible
+  - reg
+  - clocks
+  - clock-names
+  - interrupts
+
+additionalProperties: false
+
+allOf:
+  - if:
+      properties:
+        compatible:
+          contains:
+            const: ti,am62-gpu
+    then:
+      properties:
+        clocks:
+          maxItems: 1
+        clock-names:
+          const: core
+
+examples:
+  - |
+    #include <dt-bindings/interrupt-controller/irq.h>
+    #include <dt-bindings/interrupt-controller/arm-gic.h>
+    #include <dt-bindings/soc/ti,sci_pm_domain.h>
+
+    gpu: gpu@fd00000 {
+        compatible = "ti,am62-gpu", "img,powervr-seriesaxe";
+        reg = <0x0fd00000 0x20000>;
+        clocks = <&k3_clks 187 0>;
+        clock-names = "core";
+        interrupts = <GIC_SPI 86 IRQ_TYPE_LEVEL_HIGH>;
+        power-domains = <&k3_pds 187 TI_SCI_PD_EXCLUSIVE>;
+    };
diff --git a/MAINTAINERS b/MAINTAINERS
index cd882b87a3c6..f84390bb6cfe 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -10138,6 +10138,13 @@ IMGTEC IR DECODER DRIVER
 S:	Orphan
 F:	drivers/media/rc/img-ir/
 
+IMGTEC POWERVR DRM DRIVER
+M:	Frank Binns <frank.binns@imgtec.com>
+M:	Sarah Walker <sarah.walker@imgtec.com>
+M:	Donald Robson <donald.robson@imgtec.com>
+S:	Supported
+F:	Documentation/devicetree/bindings/gpu/img,powervr.yaml
+
 IMON SOUNDGRAPH USB IR RECEIVER
 M:	Sean Young <sean@mess.org>
 L:	linux-media@vger.kernel.org
-- 
2.41.0


^ permalink raw reply related	[flat|nested] 77+ messages in thread

* [PATCH v5 03/17] drm/imagination/uapi: Add PowerVR driver UAPI
  2023-08-16  8:25 ` Sarah Walker
@ 2023-08-16  8:25   ` Sarah Walker
  -1 siblings, 0 replies; 77+ messages in thread
From: Sarah Walker @ 2023-08-16  8:25 UTC (permalink / raw)
  To: dri-devel
  Cc: matthew.brost, luben.tuikov, tzimmermann, linux-kernel, mripard,
	afd, Matt Coster, boris.brezillon, dakr, donald.robson, hns,
	christian.koenig, faith.ekstrand

Add the UAPI implementation for the PowerVR driver.

Changes from v4:
- Remove CREATE_ZEROED flag for BO creation (all buffers are now zeroed)

Co-developed-by: Frank Binns <frank.binns@imgtec.com>
Signed-off-by: Frank Binns <frank.binns@imgtec.com>
Co-developed-by: Boris Brezillon <boris.brezillon@collabora.com>
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Co-developed-by: Matt Coster <matt.coster@imgtec.com>
Signed-off-by: Matt Coster <matt.coster@imgtec.com>
Co-developed-by: Donald Robson <donald.robson@imgtec.com>
Signed-off-by: Donald Robson <donald.robson@imgtec.com>
Signed-off-by: Sarah Walker <sarah.walker@imgtec.com>
---
 MAINTAINERS                |    1 +
 include/uapi/drm/pvr_drm.h | 1303 ++++++++++++++++++++++++++++++++++++
 2 files changed, 1304 insertions(+)
 create mode 100644 include/uapi/drm/pvr_drm.h

diff --git a/MAINTAINERS b/MAINTAINERS
index f84390bb6cfe..55e17f8aea91 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -10144,6 +10144,7 @@ M:	Sarah Walker <sarah.walker@imgtec.com>
 M:	Donald Robson <donald.robson@imgtec.com>
 S:	Supported
 F:	Documentation/devicetree/bindings/gpu/img,powervr.yaml
+F:	include/uapi/drm/pvr_drm.h
 
 IMON SOUNDGRAPH USB IR RECEIVER
 M:	Sean Young <sean@mess.org>
diff --git a/include/uapi/drm/pvr_drm.h b/include/uapi/drm/pvr_drm.h
new file mode 100644
index 000000000000..c0aac8b135ca
--- /dev/null
+++ b/include/uapi/drm/pvr_drm.h
@@ -0,0 +1,1303 @@
+/* SPDX-License-Identifier: (GPL-2.0 WITH Linux-syscall-note) OR MIT */
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#ifndef PVR_DRM_UAPI_H
+#define PVR_DRM_UAPI_H
+
+#include "drm.h"
+
+#include <linux/const.h>
+#include <linux/types.h>
+
+#if defined(__cplusplus)
+extern "C" {
+#endif
+
+/**
+ * DOC: PowerVR UAPI
+ *
+ * The PowerVR IOCTL argument structs have a few limitations in place, in
+ * addition to the standard kernel restrictions:
+ *
+ *  - All members must be type-aligned.
+ *  - The overall struct must be padded to 64-bit alignment.
+ *  - Explicit padding is almost always required. This takes the form of
+ *    ``_padding_[x]`` members of sufficient size to pad to the next power-of-two
+ *    alignment, where [x] is the offset into the struct in hexadecimal. Arrays
+ *    are never used for alignment. Padding fields must be zeroed; this is
+ *    always checked.
+ *  - Unions may only appear as the last member of a struct.
+ *  - Individual union members may grow in the future. The space between the
+ *    end of a union member and the end of its containing union is considered
+ *    "implicit padding" and must be zeroed. This is always checked.
+ *
+ * In addition to the IOCTL argument structs, the PowerVR UAPI makes use of
+ * DEV_QUERY argument structs. These are used to fetch information about the
+ * device and runtime. These structs are subject to the same rules set out
+ * above.
+ */
+
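/*
 * A worked illustration of the layout rules above, using a hypothetical
 * struct that is not part of this UAPI: a __u32 member followed by a __u64
 * member needs explicit padding at byte offset 0x4, the padding member is
 * named after that offset in hexadecimal, and the struct as a whole ends up
 * 64-bit aligned.
 *
 *	struct drm_pvr_example_args {
 *		__u32 value;
 *		__u32 _padding_4;	// must be zeroed; always checked
 *		__u64 handle;
 *	};
 */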
+/**
+ * struct drm_pvr_obj_array - Container used to pass arrays of objects
+ *
+ * It is not unusual to have to extend objects to pass new parameters, and the DRM
+ * ioctl infrastructure supports this by padding ioctl arguments with zeros
+ * when the data passed by userspace is smaller than the struct defined in the
+ * drm_ioctl_desc, thus keeping things backward compatible. This type applies
+ * the same concept to indirect objects passed through arrays referenced from
+ * the main ioctl arguments structure: the stride defines the size of the
+ * object passed by userspace, which allows the kernel driver to pad with
+ * zeros when it is smaller than the size of the object it expects.
+ *
+ * Use ``DRM_PVR_OBJ_ARRAY()`` to fill object array fields, unless you
+ * have a very good reason not to.
+ */
+struct drm_pvr_obj_array {
+	/** @stride: Stride of object struct. Used for versioning. */
+	__u32 stride;
+
+	/** @count: Number of objects in the array. */
+	__u32 count;
+
+	/** @array: User pointer to an array of objects. */
+	__u64 array;
+};
+
+/**
+ * DRM_PVR_OBJ_ARRAY() - Helper macro for filling &struct drm_pvr_obj_array.
+ * @cnt: Number of elements pointed to by @ptr.
+ * @ptr: Pointer to start of a C array.
+ *
+ * Return: Literal of type &struct drm_pvr_obj_array.
+ */
+#define DRM_PVR_OBJ_ARRAY(cnt, ptr) \
+	{ .stride = sizeof((ptr)[0]), .count = (cnt), .array = (__u64)(uintptr_t)(ptr) }
+
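/*
 * Illustrative use of DRM_PVR_OBJ_ARRAY(), with a hypothetical item type and
 * argument struct that are not defined by this header:
 *
 *	struct drm_pvr_example_item items[4];	// filled in by userspace
 *	struct drm_pvr_example_args {
 *		struct drm_pvr_obj_array items;
 *	} args = {
 *		.items = DRM_PVR_OBJ_ARRAY(4, items),
 *	};
 *
 * The stride recorded by the macro (sizeof(items[0])) is what lets the kernel
 * driver pad each element with zeros when userspace passes objects smaller
 * than the ones the kernel expects.
 */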
+/**
+ * DOC: PowerVR IOCTL interface
+ */
+
+/**
+ * PVR_IOCTL() - Build a PowerVR IOCTL number
+ * @_ioctl: An incrementing id for this IOCTL. Added to %DRM_COMMAND_BASE.
+ * @_mode: Must be one of %DRM_IOR, %DRM_IOW or %DRM_IOWR.
+ * @_data: The type of the args struct passed by this IOCTL.
+ *
+ * The struct referred to by @_data must have a ``drm_pvr_ioctl_`` prefix and an
+ * ``_args`` suffix. They are therefore omitted from @_data.
+ *
+ * This should only be used to build the constants described below; it should
+ * never be used to call an IOCTL directly.
+ *
+ * Return: An IOCTL number to be passed to ioctl() from userspace.
+ */
+#define PVR_IOCTL(_ioctl, _mode, _data) \
+	_mode(DRM_COMMAND_BASE + (_ioctl), struct drm_pvr_ioctl_##_data##_args)
+
+#define DRM_IOCTL_PVR_DEV_QUERY PVR_IOCTL(0x00, DRM_IOWR, dev_query)
+#define DRM_IOCTL_PVR_CREATE_BO PVR_IOCTL(0x01, DRM_IOWR, create_bo)
+#define DRM_IOCTL_PVR_GET_BO_MMAP_OFFSET PVR_IOCTL(0x02, DRM_IOWR, get_bo_mmap_offset)
+#define DRM_IOCTL_PVR_CREATE_VM_CONTEXT PVR_IOCTL(0x03, DRM_IOWR, create_vm_context)
+#define DRM_IOCTL_PVR_DESTROY_VM_CONTEXT PVR_IOCTL(0x04, DRM_IOW, destroy_vm_context)
+#define DRM_IOCTL_PVR_VM_MAP PVR_IOCTL(0x05, DRM_IOW, vm_map)
+#define DRM_IOCTL_PVR_VM_UNMAP PVR_IOCTL(0x06, DRM_IOW, vm_unmap)
+#define DRM_IOCTL_PVR_CREATE_CONTEXT PVR_IOCTL(0x07, DRM_IOWR, create_context)
+#define DRM_IOCTL_PVR_DESTROY_CONTEXT PVR_IOCTL(0x08, DRM_IOW, destroy_context)
+#define DRM_IOCTL_PVR_CREATE_FREE_LIST PVR_IOCTL(0x09, DRM_IOWR, create_free_list)
+#define DRM_IOCTL_PVR_DESTROY_FREE_LIST PVR_IOCTL(0x0a, DRM_IOW, destroy_free_list)
+#define DRM_IOCTL_PVR_CREATE_HWRT_DATASET PVR_IOCTL(0x0b, DRM_IOWR, create_hwrt_dataset)
+#define DRM_IOCTL_PVR_DESTROY_HWRT_DATASET PVR_IOCTL(0x0c, DRM_IOW, destroy_hwrt_dataset)
+#define DRM_IOCTL_PVR_SUBMIT_JOBS PVR_IOCTL(0x0d, DRM_IOW, submit_jobs)
+
+/**
+ * DOC: PowerVR IOCTL DEV_QUERY interface
+ */
+
+/**
+ * struct drm_pvr_dev_query_gpu_info - Container used to fetch information about
+ * the graphics processor.
+ *
+ * When fetching this type &struct drm_pvr_ioctl_dev_query_args.type must be set
+ * to %DRM_PVR_DEV_QUERY_GPU_INFO_GET.
+ */
+struct drm_pvr_dev_query_gpu_info {
+	/**
+	 * @gpu_id: GPU identifier.
+	 *
+	 * For all currently supported GPUs this is the BVNC encoded as a 64-bit
+	 * value as follows:
+	 *
+	 *    +--------+--------+--------+-------+
+	 *    | 63..48 | 47..32 | 31..16 | 15..0 |
+	 *    +========+========+========+=======+
+	 *    | B      | V      | N      | C     |
+	 *    +--------+--------+--------+-------+
+	 */
+	__u64 gpu_id;
+
+	/**
+	 * @num_phantoms: Number of Phantoms present.
+	 */
+	__u32 num_phantoms;
+};
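/*
 * Worked example of the BVNC encoding above (an arbitrary illustrative value,
 * not a real device ID): gpu_id = 0x0021000100190002 decodes to B = 0x0021
 * (33), V = 0x0001 (1), N = 0x0019 (25) and C = 0x0002 (2), i.e. a
 * "33.1.25.2" GPU.
 */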
+
+/**
+ * struct drm_pvr_dev_query_runtime_info - Container used to fetch information
+ * about the graphics runtime.
+ *
+ * When fetching this type &struct drm_pvr_ioctl_dev_query_args.type must be set
+ * to %DRM_PVR_DEV_QUERY_RUNTIME_INFO_GET.
+ */
+struct drm_pvr_dev_query_runtime_info {
+	/**
+	 * @free_list_min_pages: Minimum allowed free list size,
+	 * in PM physical pages.
+	 */
+	__u64 free_list_min_pages;
+
+	/**
+	 * @free_list_max_pages: Maximum allowed free list size,
+	 * in PM physical pages.
+	 */
+	__u64 free_list_max_pages;
+
+	/**
+	 * @common_store_alloc_region_size: Size of the Allocation
+	 * Region within the Common Store used for coefficient and shared
+	 * registers, in dwords.
+	 */
+	__u32 common_store_alloc_region_size;
+
+	/**
+	 * @common_store_partition_space_size: Size of the
+	 * Partition Space within the Common Store for output buffers, in
+	 * dwords.
+	 */
+	__u32 common_store_partition_space_size;
+
+	/**
+	 * @max_coeffs: Maximum coefficients, in dwords.
+	 */
+	__u32 max_coeffs;
+
+	/**
+	 * @cdm_max_local_mem_size_regs: Maximum amount of local
+	 * memory available to a compute kernel, in dwords.
+	 */
+	__u32 cdm_max_local_mem_size_regs;
+};
+
+/**
+ * struct drm_pvr_dev_query_quirks - Container used to fetch information about
+ * hardware fixes for which the device may require support in the user mode
+ * driver.
+ *
+ * When fetching this type &struct drm_pvr_ioctl_dev_query_args.type must be set
+ * to %DRM_PVR_DEV_QUERY_QUIRKS_GET.
+ */
+struct drm_pvr_dev_query_quirks {
+	/**
+	 * @quirks: A userspace address for the hardware quirks __u32 array.
+	 *
+	 * The first @musthave_count items in the list are quirks that the
+	 * client must support for this device. If userspace does not support
+	 * all these quirks then functionality is not guaranteed and client
+	 * initialisation must fail.
+	 * The remaining quirks in the list affect userspace and the kernel or
+	 * firmware. They are disabled by default and require userspace to
+	 * opt-in. The opt-in mechanism depends on the quirk.
+	 */
+	__u64 quirks;
+
+	/** @count: Length of @quirks (number of __u32). */
+	__u16 count;
+
+	/**
+	 * @musthave_count: The number of entries in @quirks that are
+	 * mandatory, starting at index 0.
+	 */
+	__u16 musthave_count;
+
+	/** @_padding_c: Reserved. This field must be zeroed. */
+	__u32 _padding_c;
+};
+
+/**
+ * struct drm_pvr_dev_query_enhancements - Container used to fetch information
+ * about optional enhancements supported by the device that require support in
+ * the user mode driver.
+ *
+ * When fetching this type &struct drm_pvr_ioctl_dev_query_args.type must be set
+ * to %DRM_PVR_DEV_ENHANCEMENTS_GET.
+ */
+struct drm_pvr_dev_query_enhancements {
+	/**
+	 * @enhancements: A userspace address for the hardware enhancements
+	 * __u32 array.
+	 *
+	 * These enhancements affect userspace and the kernel or firmware. They
+	 * are disabled by default and require userspace to opt-in. The opt-in
+	 * mechanism depends on the enhancement.
+	 */
+	__u64 enhancements;
+
+	/** @count: Length of @enhancements (number of __u32). */
+	__u16 count;
+
+	/** @_padding_a: Reserved. This field must be zeroed. */
+	__u16 _padding_a;
+
+	/** @_padding_c: Reserved. This field must be zeroed. */
+	__u32 _padding_c;
+};
+
+/**
+ * enum drm_pvr_heap_id - Array index for heap info data returned by
+ * %DRM_PVR_DEV_QUERY_HEAP_INFO_GET.
+ *
+ * For compatibility reasons all indices will be present in the returned array,
+ * however some heaps may not be present. These are indicated where
+ * &struct drm_pvr_heap.size is set to zero.
+ */
+enum drm_pvr_heap_id {
+	/** @DRM_PVR_HEAP_GENERAL: General purpose heap. */
+	DRM_PVR_HEAP_GENERAL = 0,
+	/** @DRM_PVR_HEAP_PDS_CODE_DATA: PDS code and data heap. */
+	DRM_PVR_HEAP_PDS_CODE_DATA,
+	/** @DRM_PVR_HEAP_USC_CODE: USC code heap. */
+	DRM_PVR_HEAP_USC_CODE,
+	/** @DRM_PVR_HEAP_RGNHDR: Region header heap. Only used if GPU has BRN63142. */
+	DRM_PVR_HEAP_RGNHDR,
+	/** @DRM_PVR_HEAP_VIS_TEST: Visibility test heap. */
+	DRM_PVR_HEAP_VIS_TEST,
+	/** @DRM_PVR_HEAP_TRANSFER_FRAG: Transfer fragment heap. */
+	DRM_PVR_HEAP_TRANSFER_FRAG,
+
+	/**
+	 * @DRM_PVR_HEAP_COUNT: The number of heaps returned by
+	 * %DRM_PVR_DEV_QUERY_HEAP_INFO_GET.
+	 *
+	 * More heaps may be added, so this also serves as the copy limit when
+	 * sent by the caller.
+	 */
+	DRM_PVR_HEAP_COUNT
+	/* Please only add additional heaps above DRM_PVR_HEAP_COUNT! */
+};
+
+/**
+ * DOC: Flags for DRM_PVR_DEV_QUERY_HEAP_INFO_GET.
+ *
+ * .. c:macro:: DRM_PVR_HEAP_FLAG_STATIC_CARVEOUT_AT_END
+ *
+ *    The static data area is at the end of the heap memory area, rather than
+ *    at the beginning.
+ *    The base address will be:
+ *        drm_pvr_heap::base +
+ *            (drm_pvr_heap::size - drm_pvr_heap::static_data_carveout_size)
+ */
+#define DRM_PVR_HEAP_FLAG_STATIC_CARVEOUT_AT_END _BITUL(0)
+
+/**
+ * struct drm_pvr_heap - Container holding information about a single heap.
+ *
+ * This will always be fetched as an array.
+ */
+struct drm_pvr_heap {
+	/** @base: Base address of heap. */
+	__u64 base;
+
+	/**
+	 * @size: Size of heap, in bytes. Will be 0 if the heap is not present.
+	 */
+	__u64 size;
+
+	/** @flags: Flags for this heap. See &enum drm_pvr_heap_flags. */
+	__u32 flags;
+
+	/** @page_size_log2: Log2 of page size. */
+	__u32 page_size_log2;
+};
+
+/**
+ * struct drm_pvr_dev_query_heap_info - Container used to fetch information
+ * about heaps supported by the device driver.
+ *
+ * Please note all driver-supported heaps will be returned up to &heaps.count.
+ * Some heaps will not be present in all devices, which will be indicated by
+ * &struct drm_pvr_heap.size being set to zero.
+ *
+ * When fetching this type &struct drm_pvr_ioctl_dev_query_args.type must be set
+ * to %DRM_PVR_DEV_QUERY_HEAP_INFO_GET.
+ */
+struct drm_pvr_dev_query_heap_info {
+	/**
+	 * @heaps: Array of &struct drm_pvr_heap. If pointer is NULL, the count
+	 * and stride will be updated with those known to the driver version, to
+	 * facilitate allocation by the caller.
+	 */
+	struct drm_pvr_obj_array heaps;
+};
+
+/**
+ * enum drm_pvr_static_data_area_usage - Array index for static data area info
+ * returned by %DRM_PVR_DEV_QUERY_STATIC_DATA_AREAS_GET.
+ *
+ * For compatibility reasons all indices will be present in the returned array,
+ * however some areas may not be present. These are indicated where
+ * &struct drm_pvr_static_data_area.size is set to zero.
+ */
+enum drm_pvr_static_data_area_usage {
+	/**
+	 * @DRM_PVR_STATIC_DATA_AREA_EOT: End of Tile USC program.
+	 *
+	 * The End of Tile task runs at completion of a tile, and is responsible for emitting the
+	 * tile to the Pixel Back End.
+	 */
+	DRM_PVR_STATIC_DATA_AREA_EOT = 0,
+
+	/**
+	 * @DRM_PVR_STATIC_DATA_AREA_FENCE: MCU fence area, used during cache flush and
+	 * invalidation.
+	 *
+	 * This must point to valid physical memory but the contents otherwise are not used.
+	 */
+	DRM_PVR_STATIC_DATA_AREA_FENCE,
+
+	/**
+	 * @DRM_PVR_STATIC_DATA_AREA_VDM_SYNC: VDM sync program.
+	 *
+	 * The VDM sync program is used to synchronise multiple areas of the GPU hardware.
+	 */
+	DRM_PVR_STATIC_DATA_AREA_VDM_SYNC,
+
+	/**
+	 * @DRM_PVR_STATIC_DATA_AREA_YUV_CSC: YUV coefficients.
+	 *
+	 * Area contains up to 16 slots with stride of 64 bytes. Each is a 3x4 matrix of u16 fixed
+	 * point numbers, with 1 sign bit, 2 integer bits and 13 fractional bits.
+	 *
+	 * The slots are:
+	 * 0 = VK_SAMPLER_YCBCR_MODEL_CONVERSION_RGB_IDENTITY_KHR
+	 * 1 = VK_SAMPLER_YCBCR_MODEL_CONVERSION_YCBCR_IDENTITY_KHR (full range)
+	 * 2 = VK_SAMPLER_YCBCR_MODEL_CONVERSION_YCBCR_IDENTITY_KHR (conformant range)
+	 * 3 = VK_SAMPLER_YCBCR_MODEL_CONVERSION_YCBCR_709_KHR (full range)
+	 * 4 = VK_SAMPLER_YCBCR_MODEL_CONVERSION_YCBCR_709_KHR (conformant range)
+	 * 5 = VK_SAMPLER_YCBCR_MODEL_CONVERSION_YCBCR_601_KHR (full range)
+	 * 6 = VK_SAMPLER_YCBCR_MODEL_CONVERSION_YCBCR_601_KHR (conformant range)
+	 * 7 = VK_SAMPLER_YCBCR_MODEL_CONVERSION_YCBCR_2020_KHR (full range)
+	 * 8 = VK_SAMPLER_YCBCR_MODEL_CONVERSION_YCBCR_2020_KHR (conformant range)
+	 * 9 = VK_SAMPLER_YCBCR_MODEL_CONVERSION_YCBCR_601_KHR (conformant range, 10 bit)
+	 * 10 = VK_SAMPLER_YCBCR_MODEL_CONVERSION_YCBCR_709_KHR (conformant range, 10 bit)
+	 * 11 = VK_SAMPLER_YCBCR_MODEL_CONVERSION_YCBCR_2020_KHR (conformant range, 10 bit)
+	 * 14 = Identity (biased)
+	 * 15 = Identity
+	 */
+	DRM_PVR_STATIC_DATA_AREA_YUV_CSC,
+};
+
+/**
+ * struct drm_pvr_static_data_area - Container holding information about a
+ * single static data area.
+ *
+ * This will always be fetched as an array.
+ */
+struct drm_pvr_static_data_area {
+	/**
+	 * @area_usage: Usage of static data area.
+	 * See &enum drm_pvr_static_data_area_usage.
+	 */
+	__u16 area_usage;
+
+	/**
+	 * @location_heap_id: Array index of heap where this static data
+	 * area is located. This array is fetched using
+	 * %DRM_PVR_DEV_QUERY_HEAP_INFO_GET.
+	 */
+	__u16 location_heap_id;
+
+	/** @size: Size of static data area. Not present if set to zero. */
+	__u32 size;
+
+	/**
+	 * @offset: Offset of static data area from start of static data
+	 * carveout.
+	 */
+	__u64 offset;
+};
+
+/**
+ * struct drm_pvr_dev_query_static_data_areas - Container used to fetch
+ * information about the static data areas in heaps supported by the device
+ * driver.
+ *
+ * Please note all driver-supported static data areas will be returned up to
+ * &static_data_areas.count. Some will not be present for all devices, which
+ * will be indicated by &struct drm_pvr_static_data_area.size being set to zero.
+ *
+ * Further, some heaps will not be present either. See &struct
+ * drm_pvr_dev_query_heap_info.
+ *
+ * When fetching this type &struct drm_pvr_ioctl_dev_query_args.type must be set
+ * to %DRM_PVR_DEV_QUERY_STATIC_DATA_AREAS_GET.
+ */
+struct drm_pvr_dev_query_static_data_areas {
+	/**
+	 * @static_data_areas: Array of &struct drm_pvr_static_data_area. If
+	 * pointer is NULL, the count and stride will be updated with those
+	 * known to the driver version, to facilitate allocation by the caller.
+	 */
+	struct drm_pvr_obj_array static_data_areas;
+};
+
+/**
+ * enum drm_pvr_dev_query - For use with &drm_pvr_ioctl_dev_query_args.type to
+ * indicate the type of the receiving container.
+ *
+ * Append only. Do not reorder.
+ */
+enum drm_pvr_dev_query {
+	/**
+	 * @DRM_PVR_DEV_QUERY_GPU_INFO_GET: The dev query args contain a pointer
+	 * to &struct drm_pvr_dev_query_gpu_info.
+	 */
+	DRM_PVR_DEV_QUERY_GPU_INFO_GET = 0,
+
+	/**
+	 * @DRM_PVR_DEV_QUERY_RUNTIME_INFO_GET: The dev query args contain a
+	 * pointer to &struct drm_pvr_dev_query_runtime_info.
+	 */
+	DRM_PVR_DEV_QUERY_RUNTIME_INFO_GET,
+
+	/**
+	 * @DRM_PVR_DEV_QUERY_QUIRKS_GET: The dev query args contain a pointer
+	 * to &struct drm_pvr_dev_query_quirks.
+	 */
+	DRM_PVR_DEV_QUERY_QUIRKS_GET,
+
+	/**
+	 * @DRM_PVR_DEV_QUERY_ENHANCEMENTS_GET: The dev query args contain a
+	 * pointer to &struct drm_pvr_dev_query_enhancements.
+	 */
+	DRM_PVR_DEV_QUERY_ENHANCEMENTS_GET,
+
+	/**
+	 * @DRM_PVR_DEV_QUERY_HEAP_INFO_GET: The dev query args contain a
+	 * pointer to &struct drm_pvr_dev_query_heap_info.
+	 */
+	DRM_PVR_DEV_QUERY_HEAP_INFO_GET,
+
+	/**
+	 * @DRM_PVR_DEV_QUERY_STATIC_DATA_AREAS_GET: The dev query args contain
+	 * a pointer to &struct drm_pvr_dev_query_static_data_areas.
+	 */
+	DRM_PVR_DEV_QUERY_STATIC_DATA_AREAS_GET,
+};
+
+/**
+ * struct drm_pvr_ioctl_dev_query_args - Arguments for %DRM_IOCTL_PVR_DEV_QUERY.
+ */
+struct drm_pvr_ioctl_dev_query_args {
+	/**
+	 * @type: Type of query and output struct. See &enum drm_pvr_dev_query.
+	 */
+	__u32 type;
+
+	/**
+	 * @size: Size of the receiving struct, see @type.
+	 *
+	 * After a successful call this will be updated to the written byte
+	 * length.
+	 * Can also be used to get the minimum byte length (see @pointer).
+	 * This allows additional fields to be appended to the structs in
+	 * future.
+	 */
+	__u32 size;
+
+	/**
+	 * @pointer: Pointer to struct @type.
+	 *
+	 * Must be large enough to contain @size bytes.
+	 * If pointer is NULL, the expected size will be returned in the @size
+	 * field, but no other data will be written.
+	 */
+	__u64 pointer;
+};
+
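A minimal sketch of the @size handshake described above, using the GPU information query as an example; `fd` is assumed to be an open DRM device file descriptor and error handling is kept to the bare minimum.

#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>

#include <drm/pvr_drm.h>

static int pvr_query_gpu_info(int fd, struct drm_pvr_dev_query_gpu_info *info)
{
	struct drm_pvr_ioctl_dev_query_args args = {
		.type = DRM_PVR_DEV_QUERY_GPU_INFO_GET,
	};

	/* @pointer is NULL: the kernel only reports the expected @size. */
	if (ioctl(fd, DRM_IOCTL_PVR_DEV_QUERY, &args))
		return -1;

	/* Never ask for more than the caller's struct can hold. */
	if (args.size > sizeof(*info))
		args.size = sizeof(*info);

	memset(info, 0, sizeof(*info));
	args.pointer = (__u64)(uintptr_t)info;
	return ioctl(fd, DRM_IOCTL_PVR_DEV_QUERY, &args);
}
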
+/**
+ * DOC: PowerVR IOCTL CREATE_BO interface
+ */
+
+/**
+ * DOC: Flags for CREATE_BO
+ *
+ * The &struct drm_pvr_ioctl_create_bo_args.flags field is 64 bits wide and consists
+ * of three groups of flags: creation, device mapping and CPU mapping.
+ *
+ * We use "device" to refer to the GPU here because of the ambiguity between
+ * CPU and GPU in some fonts.
+ *
+ * Creation options
+ *    These use the prefix ``DRM_PVR_BO_CREATE_``.
+ *
+ * Device mapping options
+ *    These use the prefix ``DRM_PVR_BO_DEVICE_``.
+ *
+ *    :BYPASS_CACHE: There are very few situations where this flag is useful.
+ *       By default, the device flushes its memory caches after every job.
+ *    :PM_FW_PROTECT: Specify that only the Parameter Manager (PM) and/or
+ *       firmware processor should be allowed to access this memory when mapped
+ *       to the device. It is not valid to specify this flag with
+ *       CPU_ALLOW_USERSPACE_ACCESS.
+ *
+ * CPU mapping options
+ *    These use the prefix ``DRM_PVR_BO_CPU_``.
+ *
+ *    :ALLOW_USERSPACE_ACCESS: Allow userspace to map and access the contents
+ *       of this memory. It is not valid to specify this flag with
+ *       DEVICE_PM_FW_PROTECT.
+ */
+#define DRM_PVR_BO_DEVICE_BYPASS_CACHE _BITULL(0)
+#define DRM_PVR_BO_DEVICE_PM_FW_PROTECT _BITULL(1)
+#define DRM_PVR_BO_CPU_ALLOW_USERSPACE_ACCESS _BITULL(2)
+/* Bits 3..63 are reserved. */
+
+/**
+ * struct drm_pvr_ioctl_create_bo_args - Arguments for %DRM_IOCTL_PVR_CREATE_BO
+ */
+struct drm_pvr_ioctl_create_bo_args {
+	/**
+	 * @size: [IN/OUT] Unaligned size of buffer object to create. On
+	 * return, this will be populated with the actual aligned size of the
+	 * new buffer.
+	 */
+	__u64 size;
+
+	/**
+	 * @handle: [OUT] GEM handle of the new buffer object for use in
+	 * userspace.
+	 */
+	__u32 handle;
+
+	/** @_padding_c: Reserved. This field must be zeroed. */
+	__u32 _padding_c;
+
+	/**
+	 * @flags: [IN] Options which will affect the behaviour of this
+	 * creation operation and future mapping operations on the created
+	 * object. This field must be a valid combination of ``DRM_PVR_BO_*``
+	 * values, with all bits marked as reserved set to zero.
+	 */
+	__u64 flags;
+};
+
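For illustration, a small sketch creating a 64 KiB buffer that the CPU will be allowed to map later; `fd` is assumed to be an open DRM device file descriptor and error handling is omitted.

struct drm_pvr_ioctl_create_bo_args bo_args = {
	.size = 64 * 1024,
	.flags = DRM_PVR_BO_CPU_ALLOW_USERSPACE_ACCESS,
};

/* On success, bo_args.handle is the GEM handle and bo_args.size the
 * aligned size that was actually allocated. */
ioctl(fd, DRM_IOCTL_PVR_CREATE_BO, &bo_args);
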
+/**
+ * DOC: PowerVR IOCTL GET_BO_MMAP_OFFSET interface
+ */
+
+/**
+ * struct drm_pvr_ioctl_get_bo_mmap_offset_args - Arguments for
+ * %DRM_IOCTL_PVR_GET_BO_MMAP_OFFSET
+ *
+ * As with other DRM drivers, the "mmap" IOCTL doesn't actually map any memory.
+ * Instead, it allocates a fake offset which refers to the specified buffer
+ * object. This offset can be used with a real mmap call on the DRM device
+ * itself.
+ */
+struct drm_pvr_ioctl_get_bo_mmap_offset_args {
+	/** @handle: [IN] GEM handle of the buffer object to be mapped. */
+	__u32 handle;
+
+	/** @_padding_4: Reserved. This field must be zeroed. */
+	__u32 _padding_4;
+
+	/** @offset: [OUT] Fake offset to use in the real mmap call. */
+	__u64 offset;
+};
+
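Continuing the CREATE_BO sketch above, the fake offset is looked up first and then handed to an ordinary mmap() on the same file descriptor (same assumptions as before, plus <sys/mman.h>).

struct drm_pvr_ioctl_get_bo_mmap_offset_args off_args = {
	.handle = bo_args.handle,
};

ioctl(fd, DRM_IOCTL_PVR_GET_BO_MMAP_OFFSET, &off_args);

void *cpu_ptr = mmap(NULL, bo_args.size, PROT_READ | PROT_WRITE,
		     MAP_SHARED, fd, off_args.offset);
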
+/**
+ * DOC: PowerVR IOCTL CREATE_VM_CONTEXT and DESTROY_VM_CONTEXT interfaces
+ */
+
+/**
+ * struct drm_pvr_ioctl_create_vm_context_args - Arguments for
+ * %DRM_IOCTL_PVR_CREATE_VM_CONTEXT
+ */
+struct drm_pvr_ioctl_create_vm_context_args {
+	/** @handle: [OUT] Handle for new VM context. */
+	__u32 handle;
+
+	/** @_padding_4: Reserved. This field must be zeroed. */
+	__u32 _padding_4;
+};
+
+/**
+ * struct drm_pvr_ioctl_destroy_vm_context_args - Arguments for
+ * %DRM_IOCTL_PVR_DESTROY_VM_CONTEXT
+ */
+struct drm_pvr_ioctl_destroy_vm_context_args {
+	/**
+	 * @handle: [IN] Handle for VM context to be destroyed.
+	 */
+	__u32 handle;
+
+	/** @_padding_4: Reserved. This field must be zeroed. */
+	__u32 _padding_4;
+};
+
+/**
+ * DOC: PowerVR IOCTL VM_MAP and VM_UNMAP interfaces
+ *
+ * The VM UAPI allows userspace to create buffer object mappings in GPU virtual address space.
+ *
+ * The client is responsible for managing GPU address space. It should allocate mappings within
+ * the heaps returned by %DRM_PVR_DEV_QUERY_HEAP_INFO_GET.
+ *
+ * %DRM_IOCTL_PVR_VM_MAP creates a new mapping. The client provides the target virtual address for
+ * the mapping. Size and offset within the mapped buffer object can be specified, so the client can
+ * partially map a buffer.
+ *
+ * %DRM_IOCTL_PVR_VM_UNMAP removes a mapping. The entire mapping will be removed from GPU address
+ * space. For this reason only the start address is provided by the client.
+ */
+
+/**
+ * struct drm_pvr_ioctl_vm_map_args - Arguments for %DRM_IOCTL_PVR_VM_MAP.
+ */
+struct drm_pvr_ioctl_vm_map_args {
+	/**
+	 * @vm_context_handle: [IN] Handle for VM context for this mapping to
+	 * exist in.
+	 */
+	__u32 vm_context_handle;
+
+	/** @flags: [IN] Flags which affect this mapping. Currently always 0. */
+	__u32 flags;
+
+	/**
+	 * @device_addr: [IN] Requested device-virtual address for the mapping.
+	 * This must be non-zero and aligned to the device page size for the
+	 * heap containing the requested address. It is an error to specify an
+	 * address which is not contained within one of the heaps returned by
+	 * %DRM_PVR_DEV_QUERY_HEAP_INFO_GET.
+	 */
+	__u64 device_addr;
+
+	/**
+	 * @handle: [IN] Handle of the target buffer object. This must be a
+	 * valid handle returned by %DRM_IOCTL_PVR_CREATE_BO.
+	 */
+	__u32 handle;
+
+	/** @_padding_14: Reserved. This field must be zeroed. */
+	__u32 _padding_14;
+
+	/**
+	 * @offset: [IN] Offset into the target bo from which to begin the
+	 * mapping.
+	 */
+	__u64 offset;
+
+	/**
+	 * @size: [IN] Size of the requested mapping. Must be aligned to
+	 * the device page size for the heap containing the requested address,
+	 * as well as the host page size. When added to @device_addr, the
+	 * result must not overflow the heap which contains @device_addr (i.e.
+	 * the range specified by @device_addr and @size must be completely
+	 * contained within a single heap specified by
+	 * %DRM_PVR_DEV_QUERY_HEAP_INFO_GET).
+	 */
+	__u64 size;
+};
+
+/**
+ * struct drm_pvr_ioctl_vm_unmap_args - Arguments for %DRM_IOCTL_PVR_VM_UNMAP.
+ */
+struct drm_pvr_ioctl_vm_unmap_args {
+	/**
+	 * @vm_context_handle: [IN] Handle for VM context that this mapping
+	 * exists in.
+	 */
+	__u32 vm_context_handle;
+
+	/** @_padding_4: Reserved. This field must be zeroed. */
+	__u32 _padding_4;
+
+	/**
+	 * @device_addr: [IN] Device-virtual address at the start of the target
+	 * mapping. This must be non-zero.
+	 */
+	__u64 device_addr;
+
+	/**
+	 * @size: Size in bytes of the target mapping. This must be non-zero.
+	 */
+	__u64 size;
+};
+
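A sketch of the mapping flow described above: choose a device-virtual address inside one of the reported heaps, aligned to that heap's page size, then map and later unmap the buffer. Here `heap` is a struct drm_pvr_heap from the earlier heap query, `vm_ctx_handle` is assumed to come from DRM_IOCTL_PVR_CREATE_VM_CONTEXT, `bo_args` from the CREATE_BO sketch, and error handling is omitted.

__u64 page_size = (__u64)1 << heap.page_size_log2;
__u64 dev_addr = (heap.base + page_size - 1) & ~(page_size - 1);

struct drm_pvr_ioctl_vm_map_args map_args = {
	.vm_context_handle = vm_ctx_handle,
	.device_addr = dev_addr,
	.handle = bo_args.handle,
	.offset = 0,
	/* bo_args.size must also satisfy the alignment rules for @size. */
	.size = bo_args.size,
};
ioctl(fd, DRM_IOCTL_PVR_VM_MAP, &map_args);

/* ... the GPU may now use dev_addr ... */

struct drm_pvr_ioctl_vm_unmap_args unmap_args = {
	.vm_context_handle = vm_ctx_handle,
	.device_addr = dev_addr,
	.size = bo_args.size,
};
ioctl(fd, DRM_IOCTL_PVR_VM_UNMAP, &unmap_args);
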
+/**
+ * DOC: PowerVR IOCTL CREATE_CONTEXT and DESTROY_CONTEXT interfaces
+ */
+
+/**
+ * enum drm_pvr_ctx_priority - Arguments for
+ * &drm_pvr_ioctl_create_context_args.priority
+ */
+enum drm_pvr_ctx_priority {
+	/** @DRM_PVR_CTX_PRIORITY_LOW: Priority below normal. */
+	DRM_PVR_CTX_PRIORITY_LOW = -512,
+
+	/** @DRM_PVR_CTX_PRIORITY_NORMAL: Normal priority. */
+	DRM_PVR_CTX_PRIORITY_NORMAL = 0,
+
+	/**
+	 * @DRM_PVR_CTX_PRIORITY_HIGH: Priority above normal.
+	 * Note this requires ``CAP_SYS_NICE`` or ``DRM_MASTER``.
+	 */
+	DRM_PVR_CTX_PRIORITY_HIGH = 512,
+};
+
+/**
+ * enum drm_pvr_ctx_type - Arguments for
+ * &struct drm_pvr_ioctl_create_context_args.type
+ */
+enum drm_pvr_ctx_type {
+	/**
+	 * @DRM_PVR_CTX_TYPE_RENDER: Render context. Use &struct
+	 * drm_pvr_ioctl_create_render_context_args for context creation arguments.
+	 */
+	DRM_PVR_CTX_TYPE_RENDER = 0,
+
+	/**
+	 * @DRM_PVR_CTX_TYPE_COMPUTE: Compute context. Use &struct
+	 * drm_pvr_ioctl_create_compute_context_args for context creation arguments.
+	 */
+	DRM_PVR_CTX_TYPE_COMPUTE,
+
+	/**
+	 * @DRM_PVR_CTX_TYPE_TRANSFER_FRAG: Transfer context for fragment data masters. Use
+	 * &struct drm_pvr_ioctl_create_transfer_context_args for context creation arguments.
+	 */
+	DRM_PVR_CTX_TYPE_TRANSFER_FRAG,
+};
+
+/**
+ * struct drm_pvr_ioctl_create_context_args - Arguments for
+ * %DRM_IOCTL_PVR_CREATE_CONTEXT
+ */
+struct drm_pvr_ioctl_create_context_args {
+	/**
+	 * @type: [IN] Type of context to create.
+	 *
+	 * This must be one of the values defined by &enum drm_pvr_ctx_type.
+	 */
+	__u32 type;
+
+	/** @flags: [IN] Flags for context. */
+	__u32 flags;
+
+	/**
+	 * @priority: [IN] Priority of new context.
+	 *
+	 * This must be one of the values defined by &enum drm_pvr_ctx_priority.
+	 */
+	__s32 priority;
+
+	/** @handle: [OUT] Handle for new context. */
+	__u32 handle;
+
+	/**
+	 * @static_context_state: [IN] Pointer to static context state stream.
+	 */
+	__u64 static_context_state;
+
+	/**
+	 * @static_context_state_len: [IN] Length of static context state, in bytes.
+	 */
+	__u32 static_context_state_len;
+
+	/**
+	 * @vm_context_handle: [IN] Handle for VM context that this context is
+	 * associated with.
+	 */
+	__u32 vm_context_handle;
+
+	/**
+	 * @callstack_addr: [IN] Address for initial call stack pointer. Only valid
+	 * if @type is %DRM_PVR_CTX_TYPE_RENDER, otherwise must be 0.
+	 */
+	__u64 callstack_addr;
+};
+
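A hedged sketch of creating a render context on an existing VM context. The static context state stream is encoded by the userspace driver and its format is not defined by this header, so `fw_stream`, `fw_stream_len` and `callstack_dev_addr` below are placeholders; `vm_ctx_handle` and `fd` are assumed as before.

struct drm_pvr_ioctl_create_context_args ctx_args = {
	.type = DRM_PVR_CTX_TYPE_RENDER,
	.priority = DRM_PVR_CTX_PRIORITY_NORMAL,
	.vm_context_handle = vm_ctx_handle,
	.static_context_state = (__u64)(uintptr_t)fw_stream,
	.static_context_state_len = fw_stream_len,
	.callstack_addr = callstack_dev_addr,	/* render contexts only */
};
ioctl(fd, DRM_IOCTL_PVR_CREATE_CONTEXT, &ctx_args);
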
+/**
+ * struct drm_pvr_ioctl_destroy_context_args - Arguments for
+ * %DRM_IOCTL_PVR_DESTROY_CONTEXT
+ */
+struct drm_pvr_ioctl_destroy_context_args {
+	/**
+	 * @handle: [IN] Handle for context to be destroyed.
+	 */
+	__u32 handle;
+
+	/** @_padding_4: Reserved. This field must be zeroed. */
+	__u32 _padding_4;
+};
+
+/**
+ * DOC: PowerVR IOCTL CREATE_FREE_LIST and DESTROY_FREE_LIST interfaces
+ */
+
+/**
+ * struct drm_pvr_ioctl_create_free_list_args - Arguments for
+ * %DRM_IOCTL_PVR_CREATE_FREE_LIST
+ *
+ * Free list arguments have the following constraints :
+ *
+ * - @max_num_pages must be greater than zero.
+ * - @grow_threshold must be between 0 and 100.
+ * - @grow_num_pages must be less than or equal to @max_num_pages.
+ * - @initial_num_pages, @max_num_pages and @grow_num_pages must be multiples
+ *   of 4.
+ * - When @grow_num_pages is 0, @initial_num_pages must be equal to
+ *   @max_num_pages.
+ * - When @grow_num_pages is non-zero, @initial_num_pages must be less than
+ *   @max_num_pages.
+ */
+struct drm_pvr_ioctl_create_free_list_args {
+	/**
+	 * @free_list_gpu_addr: [IN] Address of GPU mapping of buffer object
+	 * containing memory to be used by free list.
+	 *
+	 * The mapped region of the buffer object must be at least
+	 * @max_num_pages * ``sizeof(__u32)``.
+	 *
+	 * The buffer object must have been created with
+	 * %DRM_PVR_BO_DEVICE_PM_FW_PROTECT set and
+	 * %DRM_PVR_BO_CPU_ALLOW_USERSPACE_ACCESS not set.
+	 */
+	__u64 free_list_gpu_addr;
+
+	/** @initial_num_pages: [IN] Pages initially allocated to free list. */
+	__u32 initial_num_pages;
+
+	/** @max_num_pages: [IN] Maximum number of pages in free list. */
+	__u32 max_num_pages;
+
+	/** @grow_num_pages: [IN] Pages to grow free list by per request. */
+	__u32 grow_num_pages;
+
+	/**
+	 * @grow_threshold: [IN] Percentage of FL memory used that should
+	 * trigger a new grow request.
+	 */
+	__u32 grow_threshold;
+
+	/**
+	 * @vm_context_handle: [IN] Handle for VM context that the free list buffer
+	 * object is mapped in.
+	 */
+	__u32 vm_context_handle;
+
+	/**
+	 * @handle: [OUT] Handle for created free list.
+	 */
+	__u32 handle;
+};
+
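One set of values satisfying the constraints above: a free list starting at 1024 pages that may grow to 4096 in steps of 512 once 90% is in use (all multiples of 4, and initial < max since grow is non-zero). `fl_dev_addr` is assumed to be the device-virtual address of a DRM_PVR_BO_DEVICE_PM_FW_PROTECT mapping covering at least max_num_pages * sizeof(__u32) = 16 KiB, with `vm_ctx_handle` and `fd` as before.

struct drm_pvr_ioctl_create_free_list_args fl_args = {
	.free_list_gpu_addr = fl_dev_addr,
	.initial_num_pages = 1024,
	.max_num_pages = 4096,
	.grow_num_pages = 512,
	.grow_threshold = 90,
	.vm_context_handle = vm_ctx_handle,
};
ioctl(fd, DRM_IOCTL_PVR_CREATE_FREE_LIST, &fl_args);
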
+/**
+ * struct drm_pvr_ioctl_destroy_free_list_args - Arguments for
+ * %DRM_IOCTL_PVR_DESTROY_FREE_LIST
+ */
+struct drm_pvr_ioctl_destroy_free_list_args {
+	/**
+	 * @handle: [IN] Handle for free list to be destroyed.
+	 */
+	__u32 handle;
+
+	/** @_padding_4: Reserved. This field must be zeroed. */
+	__u32 _padding_4;
+};
+
+/**
+ * DOC: PowerVR IOCTL CREATE_HWRT_DATASET and DESTROY_HWRT_DATASET interfaces
+ */
+
+/**
+ * struct drm_pvr_create_hwrt_geom_data_args - Geometry data arguments used for
+ * &struct drm_pvr_ioctl_create_hwrt_dataset_args.geom_data_args.
+ */
+struct drm_pvr_create_hwrt_geom_data_args {
+	/** @tpc_dev_addr: [IN] Tail pointer cache GPU virtual address. */
+	__u64 tpc_dev_addr;
+
+	/** @tpc_size: [IN] Size of TPC, in bytes. */
+	__u32 tpc_size;
+
+	/** @tpc_stride: [IN] Stride between layers in TPC, in pages. */
+	__u32 tpc_stride;
+
+	/** @vheap_table_dev_addr: [IN] VHEAP table GPU virtual address. */
+	__u64 vheap_table_dev_addr;
+
+	/** @rtc_dev_addr: [IN] Render Target Cache virtual address. */
+	__u64 rtc_dev_addr;
+};
+
+/**
+ * struct drm_pvr_create_hwrt_rt_data_args - Render target arguments used for
+ * &struct drm_pvr_ioctl_create_hwrt_dataset_args.rt_data_args.
+ */
+struct drm_pvr_create_hwrt_rt_data_args {
+	/** @pm_mlist_dev_addr: [IN] PM MLIST GPU virtual address. */
+	__u64 pm_mlist_dev_addr;
+
+	/** @macrotile_array_dev_addr: [IN] Macrotile array GPU virtual address. */
+	__u64 macrotile_array_dev_addr;
+
+	/** @region_header_dev_addr: [IN] Region header array GPU virtual address. */
+	__u64 region_header_dev_addr;
+};
+
+/**
+ * struct drm_pvr_ioctl_create_hwrt_dataset_args - Arguments for
+ * %DRM_IOCTL_PVR_CREATE_HWRT_DATASET
+ */
+struct drm_pvr_ioctl_create_hwrt_dataset_args {
+	/** @geom_data_args: [IN] Geometry data arguments. */
+	struct drm_pvr_create_hwrt_geom_data_args geom_data_args;
+
+	/** @rt_data_args: [IN] Array of render target arguments. */
+	struct drm_pvr_create_hwrt_rt_data_args rt_data_args[2];
+
+	/**
+	 * @free_list_handles: [IN] Array of free list handles.
+	 *
+	 * free_list_handles[0] must have an initial size of at least that reported
+	 * by &drm_pvr_dev_query_runtime_info.free_list_min_pages.
+	 */
+	__u32 free_list_handles[2];
+
+	/** @width: [IN] Width in pixels. */
+	__u32 width;
+
+	/** @height: [IN] Height in pixels. */
+	__u32 height;
+
+	/** @samples: [IN] Number of samples. */
+	__u32 samples;
+
+	/** @layers: [IN] Number of layers. */
+	__u32 layers;
+
+	/** @isp_merge_lower_x: [IN] Lower X coefficient for triangle merging. */
+	__u32 isp_merge_lower_x;
+
+	/** @isp_merge_lower_y: [IN] Lower Y coefficient for triangle merging. */
+	__u32 isp_merge_lower_y;
+
+	/** @isp_merge_scale_x: [IN] Scale X coefficient for triangle merging. */
+	__u32 isp_merge_scale_x;
+
+	/** @isp_merge_scale_y: [IN] Scale Y coefficient for triangle merging. */
+	__u32 isp_merge_scale_y;
+
+	/** @isp_merge_upper_x: [IN] Upper X coefficient for triangle merging. */
+	__u32 isp_merge_upper_x;
+
+	/** @isp_merge_upper_y: [IN] Upper Y coefficient for triangle merging. */
+	__u32 isp_merge_upper_y;
+
+	/**
+	 * @region_header_size: [IN] Size of region header array. This common field is used by
+	 * both render targets in this data set.
+	 *
+	 * The units for this field differ depending on what version of the simple internal
+	 * parameter format the device uses. If format 2 is in use then this is interpreted as the
+	 * number of region headers. For other formats it is interpreted as the size in dwords.
+	 */
+	__u32 region_header_size;
+
+	/**
+	 * @handle: [OUT] Handle for created HWRT dataset.
+	 */
+	__u32 handle;
+};
+
+/**
+ * struct drm_pvr_ioctl_destroy_hwrt_dataset_args - Arguments for
+ * %DRM_IOCTL_PVR_DESTROY_HWRT_DATASET
+ */
+struct drm_pvr_ioctl_destroy_hwrt_dataset_args {
+	/**
+	 * @handle: [IN] Handle for HWRT dataset to be destroyed.
+	 */
+	__u32 handle;
+
+	/** @_padding_4: Reserved. This field must be zeroed. */
+	__u32 _padding_4;
+};
+
+/**
+ * DOC: PowerVR IOCTL SUBMIT_JOBS interface
+ */
+
+/**
+ * DOC: Flags for the drm_pvr_sync_op object.
+ *
+ * .. c:macro:: DRM_PVR_SYNC_OP_FLAG_HANDLE_TYPE_MASK
+ *
+ *    Handle type mask for the drm_pvr_sync_op::flags field.
+ *
+ * .. c:macro:: DRM_PVR_SYNC_OP_FLAG_HANDLE_TYPE_SYNCOBJ
+ *
+ *    Indicates the handle passed in drm_pvr_sync_op::handle is a syncobj handle.
+ *    This is the default type.
+ *
+ * .. c:macro:: DRM_PVR_SYNC_OP_FLAG_HANDLE_TYPE_TIMELINE_SYNCOBJ
+ *
+ *    Indicates the handle passed in drm_pvr_sync_op::handle is a timeline syncobj handle.
+ *
+ * .. c:macro:: DRM_PVR_SYNC_OP_FLAG_SIGNAL
+ *
+ *    Signal operation requested. The out-fence bound to the job will be attached to
+ *    the syncobj whose handle is passed in drm_pvr_sync_op::handle.
+ *
+ * .. c:macro:: DRM_PVR_SYNC_OP_FLAG_WAIT
+ *
+ *    Wait operation requested. The job will wait for this particular syncobj or syncobj
+ *    point to be signaled before being started.
+ *    This is the default operation.
+ */
+#define DRM_PVR_SYNC_OP_FLAG_HANDLE_TYPE_MASK 0xf
+#define DRM_PVR_SYNC_OP_FLAG_HANDLE_TYPE_SYNCOBJ 0
+#define DRM_PVR_SYNC_OP_FLAG_HANDLE_TYPE_TIMELINE_SYNCOBJ 1
+#define DRM_PVR_SYNC_OP_FLAG_SIGNAL _BITULL(31)
+#define DRM_PVR_SYNC_OP_FLAG_WAIT 0
+
+#define DRM_PVR_SYNC_OP_FLAGS_MASK (DRM_PVR_SYNC_OP_FLAG_HANDLE_TYPE_MASK | \
+				    DRM_PVR_SYNC_OP_FLAG_SIGNAL)
+
+/**
+ * struct drm_pvr_sync_op - Object describing a sync operation
+ */
+struct drm_pvr_sync_op {
+	/** @handle: Handle of sync object. */
+	__u32 handle;
+
+	/** @flags: Combination of ``DRM_PVR_SYNC_OP_FLAG_`` flags. */
+	__u32 flags;
+
+	/** @value: Timeline value for this drm_syncobj. MBZ for a binary syncobj. */
+	__u64 value;
+};
+
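For example, a job that waits on one binary syncobj and signals a point on a timeline syncobj would carry two entries; `wait_syncobj` and `timeline_syncobj` are hypothetical handles created through the generic DRM syncobj ioctls.

struct drm_pvr_sync_op sync_ops[] = {
	{
		/* Wait is the default operation, binary the default handle type. */
		.handle = wait_syncobj,
		.flags = DRM_PVR_SYNC_OP_FLAG_WAIT |
			 DRM_PVR_SYNC_OP_FLAG_HANDLE_TYPE_SYNCOBJ,
		.value = 0,
	},
	{
		.handle = timeline_syncobj,
		.flags = DRM_PVR_SYNC_OP_FLAG_SIGNAL |
			 DRM_PVR_SYNC_OP_FLAG_HANDLE_TYPE_TIMELINE_SYNCOBJ,
		.value = 42,	/* timeline point to attach the out-fence to */
	},
};
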
+/**
+ * DOC: Flags for SUBMIT_JOB ioctl geometry command.
+ *
+ * .. c:macro:: DRM_PVR_SUBMIT_JOB_GEOM_CMD_FIRST
+ *
+ *    Indicates if this is the first command to be issued for a render.
+ *
+ * .. c:macro:: DRM_PVR_SUBMIT_JOB_GEOM_CMD_LAST
+ *
+ *    Indicates if this is the last command to be issued for a render.
+ *
+ * .. c:macro:: DRM_PVR_SUBMIT_JOB_GEOM_CMD_SINGLE_CORE
+ *
+ *    Forces the job to use a single core in a multi core device.
+ *
+ * .. c:macro:: DRM_PVR_SUBMIT_JOB_GEOM_CMD_FLAGS_MASK
+ *
+ *    Logical OR of all the geometry cmd flags.
+ */
+#define DRM_PVR_SUBMIT_JOB_GEOM_CMD_FIRST _BITULL(0)
+#define DRM_PVR_SUBMIT_JOB_GEOM_CMD_LAST _BITULL(1)
+#define DRM_PVR_SUBMIT_JOB_GEOM_CMD_SINGLE_CORE _BITULL(2)
+#define DRM_PVR_SUBMIT_JOB_GEOM_CMD_FLAGS_MASK                                 \
+	(DRM_PVR_SUBMIT_JOB_GEOM_CMD_FIRST |                                   \
+	 DRM_PVR_SUBMIT_JOB_GEOM_CMD_LAST |                                    \
+	 DRM_PVR_SUBMIT_JOB_GEOM_CMD_SINGLE_CORE)
+
+/**
+ * DOC: Flags for SUBMIT_JOB ioctl fragment command.
+ *
+ * .. c:macro:: DRM_PVR_SUBMIT_JOB_FRAG_CMD_SINGLE_CORE
+ *
+ *    Use single core in a multi core setup.
+ *
+ * .. c:macro:: DRM_PVR_SUBMIT_JOB_FRAG_CMD_DEPTHBUFFER
+ *
+ *    Indicates whether a depth buffer is present.
+ *
+ * .. c:macro:: DRM_PVR_SUBMIT_JOB_FRAG_CMD_STENCILBUFFER
+ *
+ *    Indicates whether a stencil buffer is present.
+ *
+ * .. c:macro:: DRM_PVR_SUBMIT_JOB_FRAG_CMD_PREVENT_CDM_OVERLAP
+ *
+ *    Disallow compute overlapped with this render.
+ *
+ * .. c:macro:: DRM_PVR_SUBMIT_JOB_FRAG_CMD_GET_VIS_RESULTS
+ *
+ *    Indicates whether this render produces visibility results.
+ *
+ * .. c:macro:: DRM_PVR_SUBMIT_JOB_FRAG_CMD_SCRATCHBUFFER
+ *
+ *    Indicates whether partial renders write to a scratch buffer instead of
+ *    the final surface. It also forces the full screen copy expected to be
+ *    present on the last render after all partial renders have completed.
+ *
+ * .. c:macro:: DRM_PVR_SUBMIT_JOB_FRAG_CMD_FLAGS_MASK
+ *
+ *    Logical OR of all the fragment cmd flags.
+ */
+#define DRM_PVR_SUBMIT_JOB_FRAG_CMD_SINGLE_CORE _BITULL(0)
+#define DRM_PVR_SUBMIT_JOB_FRAG_CMD_DEPTHBUFFER _BITULL(1)
+#define DRM_PVR_SUBMIT_JOB_FRAG_CMD_STENCILBUFFER _BITULL(2)
+#define DRM_PVR_SUBMIT_JOB_FRAG_CMD_PREVENT_CDM_OVERLAP _BITULL(3)
+#define DRM_PVR_SUBMIT_JOB_FRAG_CMD_SCRATCHBUFFER _BITULL(4)
+#define DRM_PVR_SUBMIT_JOB_FRAG_CMD_GET_VIS_RESULTS _BITULL(5)
+#define DRM_PVR_SUBMIT_JOB_FRAG_CMD_PARTIAL_RENDER _BITULL(6)
+#define DRM_PVR_SUBMIT_JOB_FRAG_CMD_FLAGS_MASK                                 \
+	(DRM_PVR_SUBMIT_JOB_FRAG_CMD_SINGLE_CORE |                             \
+	 DRM_PVR_SUBMIT_JOB_FRAG_CMD_DEPTHBUFFER |                             \
+	 DRM_PVR_SUBMIT_JOB_FRAG_CMD_STENCILBUFFER |                           \
+	 DRM_PVR_SUBMIT_JOB_FRAG_CMD_PREVENT_CDM_OVERLAP |                     \
+	 DRM_PVR_SUBMIT_JOB_FRAG_CMD_SCRATCHBUFFER |                           \
+	 DRM_PVR_SUBMIT_JOB_FRAG_CMD_GET_VIS_RESULTS |                         \
+	 DRM_PVR_SUBMIT_JOB_FRAG_CMD_PARTIAL_RENDER)
+
+/**
+ * DOC: Flags for SUBMIT_JOB ioctl compute command.
+ *
+ * .. c:macro:: DRM_PVR_SUBMIT_JOB_COMPUTE_CMD_PREVENT_ALL_OVERLAP
+ *
+ *    Disallow other jobs overlapped with this compute.
+ *
+ * .. c:macro:: DRM_PVR_SUBMIT_JOB_COMPUTE_CMD_SINGLE_CORE
+ *
+ *    Forces the job to use a single core in a multi core device.
+ *
+ * .. c:macro:: DRM_PVR_SUBMIT_JOB_COMPUTE_CMD_FLAGS_MASK
+ *
+ *    Logical OR of all the compute cmd flags.
+ */
+#define DRM_PVR_SUBMIT_JOB_COMPUTE_CMD_PREVENT_ALL_OVERLAP _BITULL(0)
+#define DRM_PVR_SUBMIT_JOB_COMPUTE_CMD_SINGLE_CORE _BITULL(1)
+#define DRM_PVR_SUBMIT_JOB_COMPUTE_CMD_FLAGS_MASK         \
+	(DRM_PVR_SUBMIT_JOB_COMPUTE_CMD_PREVENT_ALL_OVERLAP | \
+	 DRM_PVR_SUBMIT_JOB_COMPUTE_CMD_SINGLE_CORE)
+
+/**
+ * DOC: Flags for SUBMIT_JOB ioctl transfer command.
+ *
+ * .. c:macro:: DRM_PVR_SUBMIT_JOB_TRANSFER_CMD_SINGLE_CORE
+ *
+ *    Forces job to use a single core in a multi core device.
+ *
+ * .. c:macro:: DRM_PVR_SUBMIT_JOB_TRANSFER_CMD_FLAGS_MASK
+ *
+ *    Logical OR of all the transfer cmd flags.
+ */
+#define DRM_PVR_SUBMIT_JOB_TRANSFER_CMD_SINGLE_CORE _BITULL(0)
+
+#define DRM_PVR_SUBMIT_JOB_TRANSFER_CMD_FLAGS_MASK \
+	DRM_PVR_SUBMIT_JOB_TRANSFER_CMD_SINGLE_CORE
+
+/**
+ * enum drm_pvr_job_type - Arguments for &struct drm_pvr_job.job_type
+ */
+enum drm_pvr_job_type {
+	/** @DRM_PVR_JOB_TYPE_GEOMETRY: Job type is geometry. */
+	DRM_PVR_JOB_TYPE_GEOMETRY = 0,
+
+	/** @DRM_PVR_JOB_TYPE_FRAGMENT: Job type is fragment. */
+	DRM_PVR_JOB_TYPE_FRAGMENT,
+
+	/** @DRM_PVR_JOB_TYPE_COMPUTE: Job type is compute. */
+	DRM_PVR_JOB_TYPE_COMPUTE,
+
+	/** @DRM_PVR_JOB_TYPE_TRANSFER_FRAG: Job type is a fragment transfer. */
+	DRM_PVR_JOB_TYPE_TRANSFER_FRAG,
+};
+
+/**
+ * struct drm_pvr_hwrt_data_ref - Reference HWRT data
+ */
+struct drm_pvr_hwrt_data_ref {
+	/** @set_handle: HWRT data set handle. */
+	__u32 set_handle;
+
+	/** @data_index: Index of the HWRT data inside the data set. */
+	__u32 data_index;
+};
+
+/**
+ * struct drm_pvr_job - Job arguments passed to the %DRM_IOCTL_PVR_SUBMIT_JOBS ioctl
+ */
+struct drm_pvr_job {
+	/**
+	 * @type: [IN] Type of job being submitted
+	 *
+	 * This must be one of the values defined by &enum drm_pvr_job_type.
+	 */
+	__u32 type;
+
+	/**
+	 * @context_handle: [IN] Context handle.
+	 *
+	 * This must be a valid handle returned by %DRM_IOCTL_PVR_CREATE_CONTEXT.
+	 * The context type must be compatible with the job type given in @type
+	 * (see &enum drm_pvr_ctx_type and &enum drm_pvr_job_type).
+	 */
+	__u32 context_handle;
+
+	/**
+	 * @flags: [IN] Flags for command.
+	 *
+	 * Those are job-dependent. See all ``DRM_PVR_SUBMIT_JOB_*``.
+	 */
+	__u32 flags;
+
+	/**
+	 * @cmd_stream_len: [IN] Length of command stream, in bytes.
+	 */
+	__u32 cmd_stream_len;
+
+	/**
+	 * @cmd_stream: [IN] Pointer to command stream for command.
+	 *
+	 * The command stream must be u64-aligned.
+	 */
+	__u64 cmd_stream;
+
+	/** @sync_ops: [IN] Sync operations for this job. */
+	struct drm_pvr_obj_array sync_ops;
+
+	/**
+	 * @hwrt: [IN] HWRT data used by render jobs (geometry or fragment).
+	 *
+	 * Must be zero for non-render jobs.
+	 */
+	struct drm_pvr_hwrt_data_ref hwrt;
+};
+
+/**
+ * struct drm_pvr_ioctl_submit_jobs_args - Arguments for %DRM_IOCTL_PVR_SUBMIT_JOBS
+ *
+ * If the syscall returns an error, it is important to check the value of
+ * @jobs.count. This indicates the index into @jobs.array where the
+ * error occurred.
+ */
+struct drm_pvr_ioctl_submit_jobs_args {
+	/** @jobs: [IN] Array of jobs to submit. */
+	struct drm_pvr_obj_array jobs;
+};
+
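Putting the pieces together, a single-job submission might look like the sketch below. The command stream is encoded by the userspace driver according to the job stream format, so `geom_stream`/`geom_stream_len` are placeholders; `ctx_args` and `sync_ops` come from the earlier sketches, `hwrt_args` is assumed to come from a prior DRM_IOCTL_PVR_CREATE_HWRT_DATASET call, and error handling is omitted.

struct drm_pvr_job job = {
	.type = DRM_PVR_JOB_TYPE_GEOMETRY,
	.context_handle = ctx_args.handle,
	.flags = DRM_PVR_SUBMIT_JOB_GEOM_CMD_FIRST |
		 DRM_PVR_SUBMIT_JOB_GEOM_CMD_LAST,
	.cmd_stream = (__u64)(uintptr_t)geom_stream,	/* must be u64-aligned */
	.cmd_stream_len = geom_stream_len,
	.sync_ops = DRM_PVR_OBJ_ARRAY(2, sync_ops),
	.hwrt = { .set_handle = hwrt_args.handle, .data_index = 0 },
};

struct drm_pvr_ioctl_submit_jobs_args submit = {
	.jobs = DRM_PVR_OBJ_ARRAY(1, &job),
};

ioctl(fd, DRM_IOCTL_PVR_SUBMIT_JOBS, &submit);
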
+#if defined(__cplusplus)
+}
+#endif
+
+#endif /* PVR_DRM_UAPI_H */
-- 
2.41.0


^ permalink raw reply related	[flat|nested] 77+ messages in thread

* [PATCH v5 03/17] drm/imagination/uapi: Add PowerVR driver UAPI
@ 2023-08-16  8:25   ` Sarah Walker
  0 siblings, 0 replies; 77+ messages in thread
From: Sarah Walker @ 2023-08-16  8:25 UTC (permalink / raw)
  To: dri-devel
  Cc: frank.binns, donald.robson, boris.brezillon, faith.ekstrand,
	airlied, daniel, maarten.lankhorst, mripard, tzimmermann, afd,
	hns, matthew.brost, christian.koenig, luben.tuikov, dakr,
	linux-kernel, Matt Coster

Add the UAPI implementation for the PowerVR driver.

Changes from v4 :
- Remove CREATE_ZEROED flag for BO creation (all buffers are now zeroed)

Co-developed-by: Frank Binns <frank.binns@imgtec.com>
Signed-off-by: Frank Binns <frank.binns@imgtec.com>
Co-developed-by: Boris Brezillon <boris.brezillon@collabora.com>
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Co-developed-by: Matt Coster <matt.coster@imgtec.com>
Signed-off-by: Matt Coster <matt.coster@imgtec.com>
Co-developed-by: Donald Robson <donald.robson@imgtec.com>
Signed-off-by: Donald Robson <donald.robson@imgtec.com>
Signed-off-by: Sarah Walker <sarah.walker@imgtec.com>
---
 MAINTAINERS                |    1 +
 include/uapi/drm/pvr_drm.h | 1303 ++++++++++++++++++++++++++++++++++++
 2 files changed, 1304 insertions(+)
 create mode 100644 include/uapi/drm/pvr_drm.h

diff --git a/MAINTAINERS b/MAINTAINERS
index f84390bb6cfe..55e17f8aea91 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -10144,6 +10144,7 @@ M:	Sarah Walker <sarah.walker@imgtec.com>
 M:	Donald Robson <donald.robson@imgtec.com>
 S:	Supported
 F:	Documentation/devicetree/bindings/gpu/img,powervr.yaml
+F:	include/uapi/drm/pvr_drm.h
 
 IMON SOUNDGRAPH USB IR RECEIVER
 M:	Sean Young <sean@mess.org>
diff --git a/include/uapi/drm/pvr_drm.h b/include/uapi/drm/pvr_drm.h
new file mode 100644
index 000000000000..c0aac8b135ca
--- /dev/null
+++ b/include/uapi/drm/pvr_drm.h
@@ -0,0 +1,1303 @@
+/* SPDX-License-Identifier: (GPL-2.0 WITH Linux-syscall-note) OR MIT */
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#ifndef PVR_DRM_UAPI_H
+#define PVR_DRM_UAPI_H
+
+#include "drm.h"
+
+#include <linux/const.h>
+#include <linux/types.h>
+
+#if defined(__cplusplus)
+extern "C" {
+#endif
+
+/**
+ * DOC: PowerVR UAPI
+ *
+ * The PowerVR IOCTL argument structs have a few limitations in place, in
+ * addition to the standard kernel restrictions:
+ *
+ *  - All members must be type-aligned.
+ *  - The overall struct must be padded to 64-bit alignment.
+ *  - Explicit padding is almost always required. This takes the form of
+ *    ``_padding_[x]`` members of sufficient size to pad to the next power-of-two
+ *    alignment, where [x] is the offset into the struct in hexadecimal. Arrays
+ *    are never used for alignment. Padding fields must be zeroed; this is
+ *    always checked.
+ *  - Unions may only appear as the last member of a struct.
+ *  - Individual union members may grow in the future. The space between the
+ *    end of a union member and the end of its containing union is considered
+ *    "implicit padding" and must be zeroed. This is always checked.
+ *
+ * In addition to the IOCTL argument structs, the PowerVR UAPI makes use of
+ * DEV_QUERY argument structs. These are used to fetch information about the
+ * device and runtime. These structs are subject to the same rules set out
+ * above.
+ */
+
+/**
+ * struct drm_pvr_obj_array - Container used to pass arrays of objects
+ *
+ * It is not unusual to have to extend objects to pass new parameters, and the DRM
+ * ioctl infrastructure supports this by padding ioctl arguments with zeros
+ * when the data passed by userspace is smaller than the struct defined in the
+ * drm_ioctl_desc, thus keeping things backward compatible. This type is just
+ * applying the same concepts to indirect objects passed through arrays referenced
+ * from the main ioctl arguments structure: the stride basically defines the size
+ * of the object passed by userspace, which allows the kernel driver to pad with
+ * zeros when it's smaller than the size of the object it expects.
+ *
+ * Use ``DRM_PVR_OBJ_ARRAY()`` to fill object array fields, unless you
+ * have a very good reason not to.
+ */
+struct drm_pvr_obj_array {
+	/** @stride: Stride of object struct. Used for versioning. */
+	__u32 stride;
+
+	/** @count: Number of objects in the array. */
+	__u32 count;
+
+	/** @array: User pointer to an array of objects. */
+	__u64 array;
+};
+
+/**
+ * DRM_PVR_OBJ_ARRAY() - Helper macro for filling &struct drm_pvr_obj_array.
+ * @cnt: Number of elements pointed to by @ptr.
+ * @ptr: Pointer to start of a C array.
+ *
+ * Return: Literal of type &struct drm_pvr_obj_array.
+ */
+#define DRM_PVR_OBJ_ARRAY(cnt, ptr) \
+	{ .stride = sizeof((ptr)[0]), .count = (cnt), .array = (__u64)(uintptr_t)(ptr) }
+
+/**
+ * DOC: PowerVR IOCTL interface
+ */
+
+/**
+ * PVR_IOCTL() - Build a PowerVR IOCTL number
+ * @_ioctl: An incrementing id for this IOCTL. Added to %DRM_COMMAND_BASE.
+ * @_mode: Must be one of %DRM_IOR, %DRM_IOW or %DRM_IOWR.
+ * @_data: The type of the args struct passed by this IOCTL.
+ *
+ * The struct referred to by @_data must have a ``drm_pvr_ioctl_`` prefix and an
+ * ``_args`` suffix. They are therefore omitted from @_data.
+ *
+ * This should only be used to build the constants described below; it should
+ * never be used to call an IOCTL directly.
+ *
+ * Return: An IOCTL number to be passed to ioctl() from userspace.
+ */
+#define PVR_IOCTL(_ioctl, _mode, _data) \
+	_mode(DRM_COMMAND_BASE + (_ioctl), struct drm_pvr_ioctl_##_data##_args)
+
+#define DRM_IOCTL_PVR_DEV_QUERY PVR_IOCTL(0x00, DRM_IOWR, dev_query)
+#define DRM_IOCTL_PVR_CREATE_BO PVR_IOCTL(0x01, DRM_IOWR, create_bo)
+#define DRM_IOCTL_PVR_GET_BO_MMAP_OFFSET PVR_IOCTL(0x02, DRM_IOWR, get_bo_mmap_offset)
+#define DRM_IOCTL_PVR_CREATE_VM_CONTEXT PVR_IOCTL(0x03, DRM_IOWR, create_vm_context)
+#define DRM_IOCTL_PVR_DESTROY_VM_CONTEXT PVR_IOCTL(0x04, DRM_IOW, destroy_vm_context)
+#define DRM_IOCTL_PVR_VM_MAP PVR_IOCTL(0x05, DRM_IOW, vm_map)
+#define DRM_IOCTL_PVR_VM_UNMAP PVR_IOCTL(0x06, DRM_IOW, vm_unmap)
+#define DRM_IOCTL_PVR_CREATE_CONTEXT PVR_IOCTL(0x07, DRM_IOWR, create_context)
+#define DRM_IOCTL_PVR_DESTROY_CONTEXT PVR_IOCTL(0x08, DRM_IOW, destroy_context)
+#define DRM_IOCTL_PVR_CREATE_FREE_LIST PVR_IOCTL(0x09, DRM_IOWR, create_free_list)
+#define DRM_IOCTL_PVR_DESTROY_FREE_LIST PVR_IOCTL(0x0a, DRM_IOW, destroy_free_list)
+#define DRM_IOCTL_PVR_CREATE_HWRT_DATASET PVR_IOCTL(0x0b, DRM_IOWR, create_hwrt_dataset)
+#define DRM_IOCTL_PVR_DESTROY_HWRT_DATASET PVR_IOCTL(0x0c, DRM_IOW, destroy_hwrt_dataset)
+#define DRM_IOCTL_PVR_SUBMIT_JOBS PVR_IOCTL(0x0d, DRM_IOW, submit_jobs)
+
+/**
+ * DOC: PowerVR IOCTL DEV_QUERY interface
+ */
+
+/**
+ * struct drm_pvr_dev_query_gpu_info - Container used to fetch information about
+ * the graphics processor.
+ *
+ * When fetching this type &struct drm_pvr_ioctl_dev_query_args.type must be set
+ * to %DRM_PVR_DEV_QUERY_GPU_INFO_GET.
+ */
+struct drm_pvr_dev_query_gpu_info {
+	/**
+	 * @gpu_id: GPU identifier.
+	 *
+	 * For all currently supported GPUs this is the BVNC encoded as a 64-bit
+	 * value as follows:
+	 *
+	 *    +--------+--------+--------+-------+
+	 *    | 63..48 | 47..32 | 31..16 | 15..0 |
+	 *    +========+========+========+=======+
+	 *    | B      | V      | N      | C     |
+	 *    +--------+--------+--------+-------+
+	 */
+	__u64 gpu_id;
+
+	/**
+	 * @num_phantoms: Number of Phantoms present.
+	 */
+	__u32 num_phantoms;
+};
+
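For example, assuming `gpu_info` is a struct drm_pvr_dev_query_gpu_info filled in by DRM_IOCTL_PVR_DEV_QUERY, the individual BVNC fields unpack as:

__u16 b = (gpu_info.gpu_id >> 48) & 0xffff;
__u16 v = (gpu_info.gpu_id >> 32) & 0xffff;
__u16 n = (gpu_info.gpu_id >> 16) & 0xffff;
__u16 c = gpu_info.gpu_id & 0xffff;
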
+/**
+ * struct drm_pvr_dev_query_runtime_info - Container used to fetch information
+ * about the graphics runtime.
+ *
+ * When fetching this type &struct drm_pvr_ioctl_dev_query_args.type must be set
+ * to %DRM_PVR_DEV_QUERY_RUNTIME_INFO_GET.
+ */
+struct drm_pvr_dev_query_runtime_info {
+	/**
+	 * @free_list_min_pages: Minimum allowed free list size,
+	 * in PM physical pages.
+	 */
+	__u64 free_list_min_pages;
+
+	/**
+	 * @free_list_max_pages: Maximum allowed free list size,
+	 * in PM physical pages.
+	 */
+	__u64 free_list_max_pages;
+
+	/**
+	 * @common_store_alloc_region_size: Size of the Allocation
+	 * Region within the Common Store used for coefficient and shared
+	 * registers, in dwords.
+	 */
+	__u32 common_store_alloc_region_size;
+
+	/**
+	 * @common_store_partition_space_size: Size of the
+	 * Partition Space within the Common Store for output buffers, in
+	 * dwords.
+	 */
+	__u32 common_store_partition_space_size;
+
+	/**
+	 * @max_coeffs: Maximum coefficients, in dwords.
+	 */
+	__u32 max_coeffs;
+
+	/**
+	 * @cdm_max_local_mem_size_regs: Maximum amount of local
+	 * memory available to a compute kernel, in dwords.
+	 */
+	__u32 cdm_max_local_mem_size_regs;
+};
+
+/**
+ * struct drm_pvr_dev_query_quirks - Container used to fetch information about
+ * hardware fixes for which the device may require support in the user mode
+ * driver.
+ *
+ * When fetching this type &struct drm_pvr_ioctl_dev_query_args.type must be set
+ * to %DRM_PVR_DEV_QUERY_QUIRKS_GET.
+ */
+struct drm_pvr_dev_query_quirks {
+	/**
+	 * @quirks: A userspace address for the hardware quirks __u32 array.
+	 *
+	 * The first @musthave_count items in the list are quirks that the
+	 * client must support for this device. If userspace does not support
+	 * all these quirks then functionality is not guaranteed and client
+	 * initialisation must fail.
+	 * The remaining quirks in the list affect userspace and the kernel or
+	 * firmware. They are disabled by default and require userspace to
+	 * opt-in. The opt-in mechanism depends on the quirk.
+	 */
+	__u64 quirks;
+
+	/** @count: Length of @quirks (number of __u32). */
+	__u16 count;
+
+	/**
+	 * @musthave_count: The number of entries in @quirks that are
+	 * mandatory, starting at index 0.
+	 */
+	__u16 musthave_count;
+
+	/** @_padding_c: Reserved. This field must be zeroed. */
+	__u32 _padding_c;
+};
+
+/**
+ * struct drm_pvr_dev_query_enhancements - Container used to fetch information
+ * about optional enhancements supported by the device that require support in
+ * the user mode driver.
+ *
+ * When fetching this type &struct drm_pvr_ioctl_dev_query_args.type must be set
+ * to %DRM_PVR_DEV_QUERY_ENHANCEMENTS_GET.
+ */
+struct drm_pvr_dev_query_enhancements {
+	/**
+	 * @enhancements: A userspace address for the hardware enhancements
+	 * __u32 array.
+	 *
+	 * These enhancements affect userspace and the kernel or firmware. They
+	 * are disabled by default and require userspace to opt-in. The opt-in
+	 * mechanism depends on the enhancement.
+	 */
+	__u64 enhancements;
+
+	/** @count: Length of @enhancements (number of __u32). */
+	__u16 count;
+
+	/** @_padding_a: Reserved. This field must be zeroed. */
+	__u16 _padding_a;
+
+	/** @_padding_c: Reserved. This field must be zeroed. */
+	__u32 _padding_c;
+};
+
+/**
+ * enum drm_pvr_heap_id - Array index for heap info data returned by
+ * %DRM_PVR_DEV_QUERY_HEAP_INFO_GET.
+ *
+ * For compatibility reasons all indices will be present in the returned array,
+ * however some heaps may not be present. These are indicated where
+ * &struct drm_pvr_heap.size is set to zero.
+ */
+enum drm_pvr_heap_id {
+	/** @DRM_PVR_HEAP_GENERAL: General purpose heap. */
+	DRM_PVR_HEAP_GENERAL = 0,
+	/** @DRM_PVR_HEAP_PDS_CODE_DATA: PDS code and data heap. */
+	DRM_PVR_HEAP_PDS_CODE_DATA,
+	/** @DRM_PVR_HEAP_USC_CODE: USC code heap. */
+	DRM_PVR_HEAP_USC_CODE,
+	/** @DRM_PVR_HEAP_RGNHDR: Region header heap. Only used if GPU has BRN63142. */
+	DRM_PVR_HEAP_RGNHDR,
+	/** @DRM_PVR_HEAP_VIS_TEST: Visibility test heap. */
+	DRM_PVR_HEAP_VIS_TEST,
+	/** @DRM_PVR_HEAP_TRANSFER_FRAG: Transfer fragment heap. */
+	DRM_PVR_HEAP_TRANSFER_FRAG,
+
+	/**
+	 * @DRM_PVR_HEAP_COUNT: The number of heaps returned by
+	 * %DRM_PVR_DEV_QUERY_HEAP_INFO_GET.
+	 *
+	 * More heaps may be added, so this also serves as the copy limit when
+	 * sent by the caller.
+	 */
+	DRM_PVR_HEAP_COUNT
+	/* Please only add additional heaps above DRM_PVR_HEAP_COUNT! */
+};
+
+/**
+ * DOC: Flags for DRM_PVR_DEV_QUERY_HEAP_INFO_GET.
+ *
+ * .. c:macro:: DRM_PVR_HEAP_FLAG_STATIC_CARVEOUT_AT_END
+ *
+ *    The static data area is at the end of the heap memory area, rather than
+ *    at the beginning.
+ *    The base address will be:
+ *        drm_pvr_heap::base +
+ *            (drm_pvr_heap::size - drm_pvr_heap::static_data_carveout_size)
+ */
+#define DRM_PVR_HEAP_FLAG_STATIC_CARVEOUT_AT_END _BITUL(0)
+
+/**
+ * struct drm_pvr_heap - Container holding information about a single heap.
+ *
+ * This will always be fetched as an array.
+ */
+struct drm_pvr_heap {
+	/** @base: Base address of heap. */
+	__u64 base;
+
+	/**
+	 * @size: Size of heap, in bytes. Will be 0 if the heap is not present.
+	 */
+	__u64 size;
+
+	/** @flags: Flags for this heap. Combination of ``DRM_PVR_HEAP_FLAG_*`` values. */
+	__u32 flags;
+
+	/** @page_size_log2: Log2 of page size. */
+	__u32 page_size_log2;
+};
+
+/**
+ * struct drm_pvr_dev_query_heap_info - Container used to fetch information
+ * about heaps supported by the device driver.
+ *
+ * Please note all driver-supported heaps will be returned up to &heaps.count.
+ * Some heaps will not be present in all devices, which will be indicated by
+ * &struct drm_pvr_heap.size being set to zero.
+ *
+ * When fetching this type &struct drm_pvr_ioctl_dev_query_args.type must be set
+ * to %DRM_PVR_DEV_QUERY_HEAP_INFO_GET.
+ */
+struct drm_pvr_dev_query_heap_info {
+	/**
+	 * @heaps: Array of &struct drm_pvr_heap. If pointer is NULL, the count
+	 * and stride will be updated with those known to the driver version, to
+	 * facilitate allocation by the caller.
+	 */
+	struct drm_pvr_obj_array heaps;
+};
+
+/**
+ * enum drm_pvr_static_data_area_usage - Array index for static data area info
+ * returned by %DRM_PVR_DEV_QUERY_STATIC_DATA_AREAS_GET.
+ *
+ * For compatibility reasons all indices will be present in the returned array,
+ * however some areas may not be present. These are indicated where
+ * &struct drm_pvr_static_data_area.size is set to zero.
+ */
+enum drm_pvr_static_data_area_usage {
+	/**
+	 * @DRM_PVR_STATIC_DATA_AREA_EOT: End of Tile USC program.
+	 *
+	 * The End of Tile task runs at completion of a tile, and is responsible for emitting the
+	 * tile to the Pixel Back End.
+	 */
+	DRM_PVR_STATIC_DATA_AREA_EOT = 0,
+
+	/**
+	 * @DRM_PVR_STATIC_DATA_AREA_FENCE: MCU fence area, used during cache flush and
+	 * invalidation.
+	 *
+	 * This must point to valid physical memory but the contents otherwise are not used.
+	 */
+	DRM_PVR_STATIC_DATA_AREA_FENCE,
+
+	/**
+	 * @DRM_PVR_STATIC_DATA_AREA_VDM_SYNC: VDM sync program.
+	 *
+	 * The VDM sync program is used to synchronise multiple areas of the GPU hardware.
+	 */
+	DRM_PVR_STATIC_DATA_AREA_VDM_SYNC,
+
+	/**
+	 * @DRM_PVR_STATIC_DATA_AREA_YUV_CSC: YUV coefficients.
+	 *
+	 * Area contains up to 16 slots with stride of 64 bytes. Each is a 3x4 matrix of u16 fixed
+	 * point numbers, with 1 sign bit, 2 integer bits and 13 fractional bits.
+	 *
+	 * The slots are :
+	 * 0 = VK_SAMPLER_YCBCR_MODEL_CONVERSION_RGB_IDENTITY_KHR
+	 * 1 = VK_SAMPLER_YCBCR_MODEL_CONVERSION_YCBCR_IDENTITY_KHR (full range)
+	 * 2 = VK_SAMPLER_YCBCR_MODEL_CONVERSION_YCBCR_IDENTITY_KHR (conformant range)
+	 * 3 = VK_SAMPLER_YCBCR_MODEL_CONVERSION_YCBCR_709_KHR (full range)
+	 * 4 = VK_SAMPLER_YCBCR_MODEL_CONVERSION_YCBCR_709_KHR (conformant range)
+	 * 5 = VK_SAMPLER_YCBCR_MODEL_CONVERSION_YCBCR_601_KHR (full range)
+	 * 6 = VK_SAMPLER_YCBCR_MODEL_CONVERSION_YCBCR_601_KHR (conformant range)
+	 * 7 = VK_SAMPLER_YCBCR_MODEL_CONVERSION_YCBCR_2020_KHR (full range)
+	 * 8 = VK_SAMPLER_YCBCR_MODEL_CONVERSION_YCBCR_2020_KHR (conformant range)
+	 * 9 = VK_SAMPLER_YCBCR_MODEL_CONVERSION_YCBCR_601_KHR (conformant range, 10 bit)
+	 * 10 = VK_SAMPLER_YCBCR_MODEL_CONVERSION_YCBCR_709_KHR (conformant range, 10 bit)
+	 * 11 = VK_SAMPLER_YCBCR_MODEL_CONVERSION_YCBCR_2020_KHR (conformant range, 10 bit)
+	 * 14 = Identity (biased)
+	 * 15 = Identity
+	 */
+	DRM_PVR_STATIC_DATA_AREA_YUV_CSC,
+};
+
+/**
+ * struct drm_pvr_static_data_area - Container holding information about a
+ * single static data area.
+ *
+ * This will always be fetched as an array.
+ */
+struct drm_pvr_static_data_area {
+	/**
+	 * @area_usage: Usage of static data area.
+	 * See &enum drm_pvr_static_data_area_usage.
+	 */
+	__u16 area_usage;
+
+	/**
+	 * @location_heap_id: Array index of heap where this static data
+	 * area is located. This array is fetched using
+	 * %DRM_PVR_DEV_QUERY_HEAP_INFO_GET.
+	 */
+	__u16 location_heap_id;
+
+	/** @size: Size of static data area. Not present if set to zero. */
+	__u32 size;
+
+	/**
+	 * @offset: Offset of static data area from start of static data
+	 * carveout.
+	 */
+	__u64 offset;
+};
+
+/**
+ * struct drm_pvr_dev_query_static_data_areas - Container used to fetch
+ * information about the static data areas in heaps supported by the device
+ * driver.
+ *
+ * Please note all driver-supported static data areas will be returned up to
+ * &static_data_areas.count. Some will not be present for all devices, which
+ * will be indicated by &struct drm_pvr_static_data_area.size being set to zero.
+ *
+ * Further, some heaps will not be present either. See &struct
+ * drm_pvr_dev_query_heap_info.
+ *
+ * When fetching this type &struct drm_pvr_ioctl_dev_query_args.type must be set
+ * to %DRM_PVR_DEV_QUERY_STATIC_DATA_AREAS_GET.
+ */
+struct drm_pvr_dev_query_static_data_areas {
+	/**
+	 * @static_data_areas: Array of &struct drm_pvr_static_data_area. If
+	 * pointer is NULL, the count and stride will be updated with those
+	 * known to the driver version, to facilitate allocation by the caller.
+	 */
+	struct drm_pvr_obj_array static_data_areas;
+};
+
+/**
+ * enum drm_pvr_dev_query - For use with &drm_pvr_ioctl_dev_query_args.type to
+ * indicate the type of the receiving container.
+ *
+ * Append only. Do not reorder.
+ */
+enum drm_pvr_dev_query {
+	/**
+	 * @DRM_PVR_DEV_QUERY_GPU_INFO_GET: The dev query args contain a pointer
+	 * to &struct drm_pvr_dev_query_gpu_info.
+	 */
+	DRM_PVR_DEV_QUERY_GPU_INFO_GET = 0,
+
+	/**
+	 * @DRM_PVR_DEV_QUERY_RUNTIME_INFO_GET: The dev query args contain a
+	 * pointer to &struct drm_pvr_dev_query_runtime_info.
+	 */
+	DRM_PVR_DEV_QUERY_RUNTIME_INFO_GET,
+
+	/**
+	 * @DRM_PVR_DEV_QUERY_QUIRKS_GET: The dev query args contain a pointer
+	 * to &struct drm_pvr_dev_query_quirks.
+	 */
+	DRM_PVR_DEV_QUERY_QUIRKS_GET,
+
+	/**
+	 * @DRM_PVR_DEV_QUERY_ENHANCEMENTS_GET: The dev query args contain a
+	 * pointer to &struct drm_pvr_dev_query_enhancements.
+	 */
+	DRM_PVR_DEV_QUERY_ENHANCEMENTS_GET,
+
+	/**
+	 * @DRM_PVR_DEV_QUERY_HEAP_INFO_GET: The dev query args contain a
+	 * pointer to &struct drm_pvr_dev_query_heap_info.
+	 */
+	DRM_PVR_DEV_QUERY_HEAP_INFO_GET,
+
+	/**
+	 * @DRM_PVR_DEV_QUERY_STATIC_DATA_AREAS_GET: The dev query args contain
+	 * a pointer to &struct drm_pvr_dev_query_static_data_areas.
+	 */
+	DRM_PVR_DEV_QUERY_STATIC_DATA_AREAS_GET,
+};
+
+/**
+ * struct drm_pvr_ioctl_dev_query_args - Arguments for %DRM_IOCTL_PVR_DEV_QUERY.
+ */
+struct drm_pvr_ioctl_dev_query_args {
+	/**
+	 * @type: Type of query and output struct. See &enum drm_pvr_dev_query.
+	 */
+	__u32 type;
+
+	/**
+	 * @size: Size of the receiving struct, see @type.
+	 *
+	 * After a successful call this will be updated to the written byte
+	 * length.
+	 * Can also be used to get the minimum byte length (see @pointer).
+	 * This allows additional fields to be appended to the structs in
+	 * future.
+	 */
+	__u32 size;
+
+	/**
+	 * @pointer: Pointer to struct @type.
+	 *
+	 * Must be large enough to contain @size bytes.
+	 * If pointer is NULL, the expected size will be returned in the @size
+	 * field, but no other data will be written.
+	 */
+	__u64 pointer;
+};
+
+/**
+ * DOC: PowerVR IOCTL CREATE_BO interface
+ */
+
+/**
+ * DOC: Flags for CREATE_BO
+ *
+ * The &struct drm_pvr_ioctl_create_bo_args.flags field is 64 bits wide and consists
+ * of three groups of flags: creation, device mapping and CPU mapping.
+ *
+ * We use "device" to refer to the GPU here because of the ambiguity between
+ * CPU and GPU in some fonts.
+ *
+ * Creation options
+ *    These use the prefix ``DRM_PVR_BO_CREATE_``.
+ *
+ * Device mapping options
+ *    These use the prefix ``DRM_PVR_BO_DEVICE_``.
+ *
+ *    :BYPASS_CACHE: There are very few situations where this flag is useful.
+ *       By default, the device flushes its memory caches after every job.
+ *    :PM_FW_PROTECT: Specify that only the Parameter Manager (PM) and/or
+ *       firmware processor should be allowed to access this memory when mapped
+ *       to the device. It is not valid to specify this flag with
+ *       CPU_ALLOW_USERSPACE_ACCESS.
+ *
+ * CPU mapping options
+ *    These use the prefix ``DRM_PVR_BO_CPU_``.
+ *
+ *    :ALLOW_USERSPACE_ACCESS: Allow userspace to map and access the contents
+ *       of this memory. It is not valid to specify this flag with
+ *       DEVICE_PM_FW_PROTECT.
+ */
+#define DRM_PVR_BO_DEVICE_BYPASS_CACHE _BITULL(0)
+#define DRM_PVR_BO_DEVICE_PM_FW_PROTECT _BITULL(1)
+#define DRM_PVR_BO_CPU_ALLOW_USERSPACE_ACCESS _BITULL(2)
+/* Bits 3..63 are reserved. */
+
+/**
+ * struct drm_pvr_ioctl_create_bo_args - Arguments for %DRM_IOCTL_PVR_CREATE_BO
+ */
+struct drm_pvr_ioctl_create_bo_args {
+	/**
+	 * @size: [IN/OUT] Unaligned size of buffer object to create. On
+	 * return, this will be populated with the actual aligned size of the
+	 * new buffer.
+	 */
+	__u64 size;
+
+	/**
+	 * @handle: [OUT] GEM handle of the new buffer object for use in
+	 * userspace.
+	 */
+	__u32 handle;
+
+	/** @_padding_c: Reserved. This field must be zeroed. */
+	__u32 _padding_c;
+
+	/**
+	 * @flags: [IN] Options which will affect the behaviour of this
+	 * creation operation and future mapping operations on the created
+	 * object. This field must be a valid combination of ``DRM_PVR_BO_*``
+	 * values, with all bits marked as reserved set to zero.
+	 */
+	__u64 flags;
+};
+
+/**
+ * DOC: PowerVR IOCTL GET_BO_MMAP_OFFSET interface
+ */
+
+/**
+ * struct drm_pvr_ioctl_get_bo_mmap_offset_args - Arguments for
+ * %DRM_IOCTL_PVR_GET_BO_MMAP_OFFSET
+ *
+ * Like other DRM drivers, the "mmap" IOCTL doesn't actually map any memory.
+ * Instead, it allocates a fake offset which refers to the specified buffer
+ * object. This offset can be used with a real mmap call on the DRM device
+ * itself.
+ */
+struct drm_pvr_ioctl_get_bo_mmap_offset_args {
+	/** @handle: [IN] GEM handle of the buffer object to be mapped. */
+	__u32 handle;
+
+	/** @_padding_4: Reserved. This field must be zeroed. */
+	__u32 _padding_4;
+
+	/** @offset: [OUT] Fake offset to use in the real mmap call. */
+	__u64 offset;
+};
+
+/**
+ * DOC: PowerVR IOCTL CREATE_VM_CONTEXT and DESTROY_VM_CONTEXT interfaces
+ */
+
+/**
+ * struct drm_pvr_ioctl_create_vm_context_args - Arguments for
+ * %DRM_IOCTL_PVR_CREATE_VM_CONTEXT
+ */
+struct drm_pvr_ioctl_create_vm_context_args {
+	/** @handle: [OUT] Handle for new VM context. */
+	__u32 handle;
+
+	/** @_padding_4: Reserved. This field must be zeroed. */
+	__u32 _padding_4;
+};
+
+/**
+ * struct drm_pvr_ioctl_destroy_vm_context_args - Arguments for
+ * %DRM_IOCTL_PVR_DESTROY_VM_CONTEXT
+ */
+struct drm_pvr_ioctl_destroy_vm_context_args {
+	/**
+	 * @handle: [IN] Handle for VM context to be destroyed.
+	 */
+	__u32 handle;
+
+	/** @_padding_4: Reserved. This field must be zeroed. */
+	__u32 _padding_4;
+};
+
+/**
+ * DOC: PowerVR IOCTL VM_MAP and VM_UNMAP interfaces
+ *
+ * The VM UAPI allows userspace to create buffer object mappings in GPU virtual address space.
+ *
+ * The client is responsible for managing GPU address space. It should allocate mappings within
+ * the heaps returned by %DRM_PVR_DEV_QUERY_HEAP_INFO_GET.
+ *
+ * %DRM_IOCTL_PVR_VM_MAP creates a new mapping. The client provides the target virtual address for
+ * the mapping. Size and offset within the mapped buffer object can be specified, so the client can
+ * partially map a buffer.
+ *
+ * %DRM_IOCTL_PVR_VM_UNMAP removes a mapping. The entire mapping will be removed from GPU address
+ * space. For this reason only the start address is provided by the client.
+ */
+
+/**
+ * struct drm_pvr_ioctl_vm_map_args - Arguments for %DRM_IOCTL_PVR_VM_MAP.
+ */
+struct drm_pvr_ioctl_vm_map_args {
+	/**
+	 * @vm_context_handle: [IN] Handle for VM context for this mapping to
+	 * exist in.
+	 */
+	__u32 vm_context_handle;
+
+	/** @flags: [IN] Flags which affect this mapping. Currently always 0. */
+	__u32 flags;
+
+	/**
+	 * @device_addr: [IN] Requested device-virtual address for the mapping.
+	 * This must be non-zero and aligned to the device page size for the
+	 * heap containing the requested address. It is an error to specify an
+	 * address which is not contained within one of the heaps returned by
+	 * %DRM_PVR_DEV_QUERY_HEAP_INFO_GET.
+	 */
+	__u64 device_addr;
+
+	/**
+	 * @handle: [IN] Handle of the target buffer object. This must be a
+	 * valid handle returned by %DRM_IOCTL_PVR_CREATE_BO.
+	 */
+	__u32 handle;
+
+	/** @_padding_14: Reserved. This field must be zeroed. */
+	__u32 _padding_14;
+
+	/**
+	 * @offset: [IN] Offset into the target bo from which to begin the
+	 * mapping.
+	 */
+	__u64 offset;
+
+	/**
+	 * @size: [IN] Size of the requested mapping. Must be aligned to
+	 * the device page size for the heap containing the requested address,
+	 * as well as the host page size. When added to @device_addr, the
+	 * result must not overflow the heap which contains @device_addr (i.e.
+	 * the range specified by @device_addr and @size must be completely
+	 * contained within a single heap specified by
+	 * %DRM_PVR_DEV_QUERY_HEAP_INFO_GET).
+	 */
+	__u64 size;
+};
+
+/**
+ * struct drm_pvr_ioctl_vm_unmap_args - Arguments for %DRM_IOCTL_PVR_VM_UNMAP.
+ */
+struct drm_pvr_ioctl_vm_unmap_args {
+	/**
+	 * @vm_context_handle: [IN] Handle for VM context that this mapping
+	 * exists in.
+	 */
+	__u32 vm_context_handle;
+
+	/** @_padding_4: Reserved. This field must be zeroed. */
+	__u32 _padding_4;
+
+	/**
+	 * @device_addr: [IN] Device-virtual address at the start of the target
+	 * mapping. This must be non-zero.
+	 */
+	__u64 device_addr;
+
+	/**
+	 * @size: Size in bytes of the target mapping. This must be non-zero.
+	 */
+	__u64 size;
+};
+
+/**
+ * DOC: PowerVR IOCTL CREATE_CONTEXT and DESTROY_CONTEXT interfaces
+ */
+
+/**
+ * enum drm_pvr_ctx_priority - Arguments for
+ * &drm_pvr_ioctl_create_context_args.priority
+ */
+enum drm_pvr_ctx_priority {
+	/** @DRM_PVR_CTX_PRIORITY_LOW: Priority below normal. */
+	DRM_PVR_CTX_PRIORITY_LOW = -512,
+
+	/** @DRM_PVR_CTX_PRIORITY_NORMAL: Normal priority. */
+	DRM_PVR_CTX_PRIORITY_NORMAL = 0,
+
+	/**
+	 * @DRM_PVR_CTX_PRIORITY_HIGH: Priority above normal.
+	 * Note this requires ``CAP_SYS_NICE`` or ``DRM_MASTER``.
+	 */
+	DRM_PVR_CTX_PRIORITY_HIGH = 512,
+};
+
+/**
+ * enum drm_pvr_ctx_type - Arguments for
+ * &struct drm_pvr_ioctl_create_context_args.type
+ */
+enum drm_pvr_ctx_type {
+	/**
+	 * @DRM_PVR_CTX_TYPE_RENDER: Render context. Use &struct
+	 * drm_pvr_ioctl_create_render_context_args for context creation arguments.
+	 */
+	DRM_PVR_CTX_TYPE_RENDER = 0,
+
+	/**
+	 * @DRM_PVR_CTX_TYPE_COMPUTE: Compute context. Use &struct
+	 * drm_pvr_ioctl_create_compute_context_args for context creation arguments.
+	 */
+	DRM_PVR_CTX_TYPE_COMPUTE,
+
+	/**
+	 * @DRM_PVR_CTX_TYPE_TRANSFER_FRAG: Transfer context for fragment data masters. Use
+	 * &struct drm_pvr_ioctl_create_transfer_context_args for context creation arguments.
+	 */
+	DRM_PVR_CTX_TYPE_TRANSFER_FRAG,
+};
+
+/**
+ * struct drm_pvr_ioctl_create_context_args - Arguments for
+ * %DRM_IOCTL_PVR_CREATE_CONTEXT
+ */
+struct drm_pvr_ioctl_create_context_args {
+	/**
+	 * @type: [IN] Type of context to create.
+	 *
+	 * This must be one of the values defined by &enum drm_pvr_ctx_type.
+	 */
+	__u32 type;
+
+	/** @flags: [IN] Flags for context. */
+	__u32 flags;
+
+	/**
+	 * @priority: [IN] Priority of new context.
+	 *
+	 * This must be one of the values defined by &enum drm_pvr_ctx_priority.
+	 */
+	__s32 priority;
+
+	/** @handle: [OUT] Handle for new context. */
+	__u32 handle;
+
+	/**
+	 * @static_context_state: [IN] Pointer to static context state stream.
+	 */
+	__u64 static_context_state;
+
+	/**
+	 * @static_context_state_len: [IN] Length of static context state, in bytes.
+	 */
+	__u32 static_context_state_len;
+
+	/**
+	 * @vm_context_handle: [IN] Handle for VM context that this context is
+	 * associated with.
+	 */
+	__u32 vm_context_handle;
+
+	/**
+	 * @callstack_addr: [IN] Address for initial call stack pointer. Only valid
+	 * if @type is %DRM_PVR_CTX_TYPE_RENDER, otherwise must be 0.
+	 */
+	__u64 callstack_addr;
+};
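+
+/*
+ * Editor's illustrative sketch (not part of this header): creating a render
+ * context. ``fd``, ``vm_ctx_handle``, ``state_stream`` and
+ * ``state_stream_len`` are hypothetical; the static context state stream
+ * format is defined elsewhere.
+ *
+ *    struct drm_pvr_ioctl_create_context_args ctx_args = {
+ *            .type = DRM_PVR_CTX_TYPE_RENDER,
+ *            .priority = DRM_PVR_CTX_PRIORITY_NORMAL,
+ *            .vm_context_handle = vm_ctx_handle,
+ *            .static_context_state = (__u64)(uintptr_t)state_stream,
+ *            .static_context_state_len = state_stream_len,
+ *    };
+ *
+ *    if (drmIoctl(fd, DRM_IOCTL_PVR_CREATE_CONTEXT, &ctx_args))
+ *            return -1;
+ *    render_ctx = ctx_args.handle;
+ */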
+
+/**
+ * struct drm_pvr_ioctl_destroy_context_args - Arguments for
+ * %DRM_IOCTL_PVR_DESTROY_CONTEXT
+ */
+struct drm_pvr_ioctl_destroy_context_args {
+	/**
+	 * @handle: [IN] Handle for context to be destroyed.
+	 */
+	__u32 handle;
+
+	/** @_padding_4: Reserved. This field must be zeroed. */
+	__u32 _padding_4;
+};
+
+/**
+ * DOC: PowerVR IOCTL CREATE_FREE_LIST and DESTROY_FREE_LIST interfaces
+ */
+
+/**
+ * struct drm_pvr_ioctl_create_free_list_args - Arguments for
+ * %DRM_IOCTL_PVR_CREATE_FREE_LIST
+ *
+ * Free list arguments have the following constraints:
+ *
+ * - @max_num_pages must be greater than zero.
+ * - @grow_threshold must be between 0 and 100.
+ * - @grow_num_pages must be less than or equal to @max_num_pages.
+ * - @initial_num_pages, @max_num_pages and @grow_num_pages must be multiples
+ *   of 4.
+ * - When @grow_num_pages is 0, @initial_num_pages must be equal to
+ *   @max_num_pages.
+ * - When @grow_num_pages is non-zero, @initial_num_pages must be less than
+ *   @max_num_pages.
+ */
+struct drm_pvr_ioctl_create_free_list_args {
+	/**
+	 * @free_list_gpu_addr: [IN] Address of GPU mapping of buffer object
+	 * containing memory to be used by free list.
+	 *
+	 * The mapped region of the buffer object must be at least
+	 * @max_num_pages * ``sizeof(__u32)``.
+	 *
+	 * The buffer object must have been created with
+	 * %DRM_PVR_BO_DEVICE_PM_FW_PROTECT set and
+	 * %DRM_PVR_BO_CPU_ALLOW_USERSPACE_ACCESS not set.
+	 */
+	__u64 free_list_gpu_addr;
+
+	/** @initial_num_pages: [IN] Pages initially allocated to free list. */
+	__u32 initial_num_pages;
+
+	/** @max_num_pages: [IN] Maximum number of pages in free list. */
+	__u32 max_num_pages;
+
+	/** @grow_num_pages: [IN] Pages to grow free list by per request. */
+	__u32 grow_num_pages;
+
+	/**
+	 * @grow_threshold: [IN] Percentage of free list memory used that
+	 * should trigger a new grow request.
+	 */
+	__u32 grow_threshold;
+
+	/**
+	 * @vm_context_handle: [IN] Handle for VM context that the free list buffer
+	 * object is mapped in.
+	 */
+	__u32 vm_context_handle;
+
+	/**
+	 * @handle: [OUT] Handle for created free list.
+	 */
+	__u32 handle;
+};
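+
+/*
+ * Editor's illustrative sketch (not part of this header): one set of values
+ * satisfying the constraints documented above. ``fd``, ``vm_ctx_handle`` and
+ * ``fl_base_addr`` (the device-virtual address of a suitably created and
+ * mapped buffer object) are hypothetical.
+ *
+ *    struct drm_pvr_ioctl_create_free_list_args fl_args = {
+ *            .free_list_gpu_addr = fl_base_addr,
+ *            .initial_num_pages = 512,
+ *            .max_num_pages = 4096,
+ *            .grow_num_pages = 256,
+ *            .grow_threshold = 90,
+ *            .vm_context_handle = vm_ctx_handle,
+ *    };
+ *
+ *    if (drmIoctl(fd, DRM_IOCTL_PVR_CREATE_FREE_LIST, &fl_args))
+ *            return -1;
+ */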
+
+/**
+ * struct drm_pvr_ioctl_destroy_free_list_args - Arguments for
+ * %DRM_IOCTL_PVR_DESTROY_FREE_LIST
+ */
+struct drm_pvr_ioctl_destroy_free_list_args {
+	/**
+	 * @handle: [IN] Handle for free list to be destroyed.
+	 */
+	__u32 handle;
+
+	/** @_padding_4: Reserved. This field must be zeroed. */
+	__u32 _padding_4;
+};
+
+/**
+ * DOC: PowerVR IOCTL CREATE_HWRT_DATASET and DESTROY_HWRT_DATASET interfaces
+ */
+
+/**
+ * struct drm_pvr_create_hwrt_geom_data_args - Geometry data arguments used for
+ * &struct drm_pvr_ioctl_create_hwrt_dataset_args.geom_data_args.
+ */
+struct drm_pvr_create_hwrt_geom_data_args {
+	/** @tpc_dev_addr: [IN] Tail pointer cache GPU virtual address. */
+	__u64 tpc_dev_addr;
+
+	/** @tpc_size: [IN] Size of TPC, in bytes. */
+	__u32 tpc_size;
+
+	/** @tpc_stride: [IN] Stride between layers in TPC, in pages. */
+	__u32 tpc_stride;
+
+	/** @vheap_table_dev_addr: [IN] VHEAP table GPU virtual address. */
+	__u64 vheap_table_dev_addr;
+
+	/** @rtc_dev_addr: [IN] Render Target Cache virtual address. */
+	__u64 rtc_dev_addr;
+};
+
+/**
+ * struct drm_pvr_create_hwrt_rt_data_args - Render target arguments used for
+ * &struct drm_pvr_ioctl_create_hwrt_dataset_args.rt_data_args.
+ */
+struct drm_pvr_create_hwrt_rt_data_args {
+	/** @pm_mlist_dev_addr: [IN] PM MLIST GPU virtual address. */
+	__u64 pm_mlist_dev_addr;
+
+	/** @macrotile_array_dev_addr: [IN] Macrotile array GPU virtual address. */
+	__u64 macrotile_array_dev_addr;
+
+	/** @region_header_dev_addr: [IN] Region header array GPU virtual address. */
+	__u64 region_header_dev_addr;
+};
+
+/**
+ * struct drm_pvr_ioctl_create_hwrt_dataset_args - Arguments for
+ * %DRM_IOCTL_PVR_CREATE_HWRT_DATASET
+ */
+struct drm_pvr_ioctl_create_hwrt_dataset_args {
+	/** @geom_data_args: [IN] Geometry data arguments. */
+	struct drm_pvr_create_hwrt_geom_data_args geom_data_args;
+
+	/** @rt_data_args: [IN] Array of render target arguments. */
+	struct drm_pvr_create_hwrt_rt_data_args rt_data_args[2];
+
+	/**
+	 * @free_list_handles: [IN] Array of free list handles.
+	 *
+	 * free_list_handles[0] must have an initial size of at least that
+	 * reported by &drm_pvr_dev_query_runtime_info.free_list_min_pages.
+	 */
+	__u32 free_list_handles[2];
+
+	/** @width: [IN] Width in pixels. */
+	__u32 width;
+
+	/** @height: [IN] Height in pixels. */
+	__u32 height;
+
+	/** @samples: [IN] Number of samples. */
+	__u32 samples;
+
+	/** @layers: [IN] Number of layers. */
+	__u32 layers;
+
+	/** @isp_merge_lower_x: [IN] Lower X coefficient for triangle merging. */
+	__u32 isp_merge_lower_x;
+
+	/** @isp_merge_lower_y: [IN] Lower Y coefficient for triangle merging. */
+	__u32 isp_merge_lower_y;
+
+	/** @isp_merge_scale_x: [IN] Scale X coefficient for triangle merging. */
+	__u32 isp_merge_scale_x;
+
+	/** @isp_merge_scale_y: [IN] Scale Y coefficient for triangle merging. */
+	__u32 isp_merge_scale_y;
+
+	/** @isp_merge_upper_x: [IN] Upper X coefficient for triangle merging. */
+	__u32 isp_merge_upper_x;
+
+	/** @isp_merge_upper_y: [IN] Upper Y coefficient for triangle merging. */
+	__u32 isp_merge_upper_y;
+
+	/**
+	 * @region_header_size: [IN] Size of region header array. This common field is used by
+	 * both render targets in this data set.
+	 *
+	 * The units for this field differ depending on what version of the simple internal
+	 * parameter format the device uses. If format 2 is in use then this is interpreted as the
+	 * number of region headers. For other formats it is interpreted as the size in dwords.
+	 */
+	__u32 region_header_size;
+
+	/**
+	 * @handle: [OUT] Handle for created HWRT dataset.
+	 */
+	__u32 handle;
+};
+
+/**
+ * struct drm_pvr_ioctl_destroy_hwrt_dataset_args - Arguments for
+ * %DRM_IOCTL_PVR_DESTROY_HWRT_DATASET
+ */
+struct drm_pvr_ioctl_destroy_hwrt_dataset_args {
+	/**
+	 * @handle: [IN] Handle for HWRT dataset to be destroyed.
+	 */
+	__u32 handle;
+
+	/** @_padding_4: Reserved. This field must be zeroed. */
+	__u32 _padding_4;
+};
+
+/**
+ * DOC: PowerVR IOCTL SUBMIT_JOBS interface
+ */
+
+/**
+ * DOC: Flags for the drm_pvr_sync_op object.
+ *
+ * .. c:macro:: DRM_PVR_SYNC_OP_FLAG_HANDLE_TYPE_MASK
+ *
+ *    Handle type mask for the drm_pvr_sync_op::flags field.
+ *
+ * .. c:macro:: DRM_PVR_SYNC_OP_FLAG_HANDLE_TYPE_SYNCOBJ
+ *
+ *    Indicates the handle passed in drm_pvr_sync_op::handle is a syncobj handle.
+ *    This is the default type.
+ *
+ * .. c:macro:: DRM_PVR_SYNC_OP_FLAG_HANDLE_TYPE_TIMELINE_SYNCOBJ
+ *
+ *    Indicates the handle passed in drm_pvr_sync_op::handle is a timeline syncobj handle.
+ *
+ * .. c:macro:: DRM_PVR_SYNC_OP_FLAG_SIGNAL
+ *
+ *    Signal operation requested. The out-fence bound to the job will be attached to
+ *    the syncobj whose handle is passed in drm_pvr_sync_op::handle.
+ *
+ * .. c:macro:: DRM_PVR_SYNC_OP_FLAG_WAIT
+ *
+ *    Wait operation requested. The job will wait for this particular syncobj or syncobj
+ *    point to be signaled before being started.
+ *    This is the default operation.
+ */
+#define DRM_PVR_SYNC_OP_FLAG_HANDLE_TYPE_MASK 0xf
+#define DRM_PVR_SYNC_OP_FLAG_HANDLE_TYPE_SYNCOBJ 0
+#define DRM_PVR_SYNC_OP_FLAG_HANDLE_TYPE_TIMELINE_SYNCOBJ 1
+#define DRM_PVR_SYNC_OP_FLAG_SIGNAL _BITULL(31)
+#define DRM_PVR_SYNC_OP_FLAG_WAIT 0
+
+#define DRM_PVR_SYNC_OP_FLAGS_MASK (DRM_PVR_SYNC_OP_FLAG_HANDLE_TYPE_MASK | \
+				    DRM_PVR_SYNC_OP_FLAG_SIGNAL)
+
+/**
+ * struct drm_pvr_sync_op - Object describing a sync operation
+ */
+struct drm_pvr_sync_op {
+	/** @handle: Handle of sync object. */
+	__u32 handle;
+
+	/** @flags: Combination of ``DRM_PVR_SYNC_OP_FLAG_`` flags. */
+	__u32 flags;
+
+	/** @value: Timeline value for this drm_syncobj. MBZ for a binary syncobj. */
+	__u64 value;
+};
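+
+/*
+ * Editor's illustrative sketch (not part of this header): one wait on a
+ * timeline syncobj point and one signal on a binary syncobj, as they might be
+ * attached to a job's sync_ops array. ``in_syncobj``, ``wait_point`` and
+ * ``out_syncobj`` are hypothetical.
+ *
+ *    struct drm_pvr_sync_op sync_ops[] = {
+ *            {
+ *                    .handle = in_syncobj,
+ *                    .flags = DRM_PVR_SYNC_OP_FLAG_WAIT |
+ *                             DRM_PVR_SYNC_OP_FLAG_HANDLE_TYPE_TIMELINE_SYNCOBJ,
+ *                    .value = wait_point,
+ *            },
+ *            {
+ *                    .handle = out_syncobj,
+ *                    .flags = DRM_PVR_SYNC_OP_FLAG_SIGNAL |
+ *                             DRM_PVR_SYNC_OP_FLAG_HANDLE_TYPE_SYNCOBJ,
+ *            },
+ *    };
+ */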
+
+/**
+ * DOC: Flags for SUBMIT_JOB ioctl geometry command.
+ *
+ * .. c:macro:: DRM_PVR_SUBMIT_JOB_GEOM_CMD_FIRST
+ *
+ *    Indicates if this is the first command to be issued for a render.
+ *
+ * .. c:macro:: DRM_PVR_SUBMIT_JOB_GEOM_CMD_LAST
+ *
+ *    Indicates if this is the last command to be issued for a render.
+ *
+ * .. c:macro:: DRM_PVR_SUBMIT_JOB_GEOM_CMD_SINGLE_CORE
+ *
+ *    Forces the use of a single core in a multi-core device.
+ *
+ * .. c:macro:: DRM_PVR_SUBMIT_JOB_GEOM_CMD_FLAGS_MASK
+ *
+ *    Logical OR of all the geometry cmd flags.
+ */
+#define DRM_PVR_SUBMIT_JOB_GEOM_CMD_FIRST _BITULL(0)
+#define DRM_PVR_SUBMIT_JOB_GEOM_CMD_LAST _BITULL(1)
+#define DRM_PVR_SUBMIT_JOB_GEOM_CMD_SINGLE_CORE _BITULL(2)
+#define DRM_PVR_SUBMIT_JOB_GEOM_CMD_FLAGS_MASK                                 \
+	(DRM_PVR_SUBMIT_JOB_GEOM_CMD_FIRST |                                   \
+	 DRM_PVR_SUBMIT_JOB_GEOM_CMD_LAST |                                    \
+	 DRM_PVR_SUBMIT_JOB_GEOM_CMD_SINGLE_CORE)
+
+/**
+ * DOC: Flags for SUBMIT_JOB ioctl fragment command.
+ *
+ * .. c:macro:: DRM_PVR_SUBMIT_JOB_FRAG_CMD_SINGLE_CORE
+ *
+ *    Use a single core in a multi-core setup.
+ *
+ * .. c:macro:: DRM_PVR_SUBMIT_JOB_FRAG_CMD_DEPTHBUFFER
+ *
+ *    Indicates whether a depth buffer is present.
+ *
+ * .. c:macro:: DRM_PVR_SUBMIT_JOB_FRAG_CMD_STENCILBUFFER
+ *
+ *    Indicates whether a stencil buffer is present.
+ *
+ * .. c:macro:: DRM_PVR_SUBMIT_JOB_FRAG_CMD_PREVENT_CDM_OVERLAP
+ *
+ *    Disallow compute overlapped with this render.
+ *
+ * .. c:macro:: DRM_PVR_SUBMIT_JOB_FRAG_CMD_GET_VIS_RESULTS
+ *
+ *    Indicates whether this render produces visibility results.
+ *
+ * .. c:macro:: DRM_PVR_SUBMIT_JOB_FRAG_CMD_SCRATCHBUFFER
+ *
+ *    Indicates whether partial renders write to a scratch buffer instead of
+ *    the final surface. It also forces the full screen copy expected to be
+ *    present on the last render after all partial renders have completed.
+ *
+ * .. c:macro:: DRM_PVR_SUBMIT_JOB_FRAG_CMD_FLAGS_MASK
+ *
+ *    Logical OR of all the fragment cmd flags.
+ */
+#define DRM_PVR_SUBMIT_JOB_FRAG_CMD_SINGLE_CORE _BITULL(0)
+#define DRM_PVR_SUBMIT_JOB_FRAG_CMD_DEPTHBUFFER _BITULL(1)
+#define DRM_PVR_SUBMIT_JOB_FRAG_CMD_STENCILBUFFER _BITULL(2)
+#define DRM_PVR_SUBMIT_JOB_FRAG_CMD_PREVENT_CDM_OVERLAP _BITULL(3)
+#define DRM_PVR_SUBMIT_JOB_FRAG_CMD_SCRATCHBUFFER _BITULL(4)
+#define DRM_PVR_SUBMIT_JOB_FRAG_CMD_GET_VIS_RESULTS _BITULL(5)
+#define DRM_PVR_SUBMIT_JOB_FRAG_CMD_PARTIAL_RENDER _BITULL(6)
+#define DRM_PVR_SUBMIT_JOB_FRAG_CMD_FLAGS_MASK                                 \
+	(DRM_PVR_SUBMIT_JOB_FRAG_CMD_SINGLE_CORE |                             \
+	 DRM_PVR_SUBMIT_JOB_FRAG_CMD_DEPTHBUFFER |                             \
+	 DRM_PVR_SUBMIT_JOB_FRAG_CMD_STENCILBUFFER |                           \
+	 DRM_PVR_SUBMIT_JOB_FRAG_CMD_PREVENT_CDM_OVERLAP |                     \
+	 DRM_PVR_SUBMIT_JOB_FRAG_CMD_SCRATCHBUFFER |                           \
+	 DRM_PVR_SUBMIT_JOB_FRAG_CMD_GET_VIS_RESULTS |                         \
+	 DRM_PVR_SUBMIT_JOB_FRAG_CMD_PARTIAL_RENDER)
+
+/**
+ * DOC: Flags for SUBMIT_JOB ioctl compute command.
+ *
+ * .. c:macro:: DRM_PVR_SUBMIT_JOB_COMPUTE_CMD_PREVENT_ALL_OVERLAP
+ *
+ *    Disallow other jobs overlapped with this compute.
+ *
+ * .. c:macro:: DRM_PVR_SUBMIT_JOB_COMPUTE_CMD_SINGLE_CORE
+ *
+ *    Forces the use of a single core in a multi-core device.
+ *
+ * .. c:macro:: DRM_PVR_SUBMIT_JOB_COMPUTE_CMD_FLAGS_MASK
+ *
+ *    Logical OR of all the compute cmd flags.
+ */
+#define DRM_PVR_SUBMIT_JOB_COMPUTE_CMD_PREVENT_ALL_OVERLAP _BITULL(0)
+#define DRM_PVR_SUBMIT_JOB_COMPUTE_CMD_SINGLE_CORE _BITULL(1)
+#define DRM_PVR_SUBMIT_JOB_COMPUTE_CMD_FLAGS_MASK         \
+	(DRM_PVR_SUBMIT_JOB_COMPUTE_CMD_PREVENT_ALL_OVERLAP | \
+	 DRM_PVR_SUBMIT_JOB_COMPUTE_CMD_SINGLE_CORE)
+
+/**
+ * DOC: Flags for SUBMIT_JOB ioctl transfer command.
+ *
+ * .. c:macro:: DRM_PVR_SUBMIT_JOB_TRANSFER_CMD_SINGLE_CORE
+ *
+ *    Forces the job to use a single core in a multi-core device.
+ *
+ * .. c:macro:: DRM_PVR_SUBMIT_JOB_TRANSFER_CMD_FLAGS_MASK
+ *
+ *    Logical OR of all the transfer cmd flags.
+ */
+#define DRM_PVR_SUBMIT_JOB_TRANSFER_CMD_SINGLE_CORE _BITULL(0)
+
+#define DRM_PVR_SUBMIT_JOB_TRANSFER_CMD_FLAGS_MASK \
+	DRM_PVR_SUBMIT_JOB_TRANSFER_CMD_SINGLE_CORE
+
+/**
+ * enum drm_pvr_job_type - Arguments for &struct drm_pvr_job.job_type
+ */
+enum drm_pvr_job_type {
+	/** @DRM_PVR_JOB_TYPE_GEOMETRY: Job type is geometry. */
+	DRM_PVR_JOB_TYPE_GEOMETRY = 0,
+
+	/** @DRM_PVR_JOB_TYPE_FRAGMENT: Job type is fragment. */
+	DRM_PVR_JOB_TYPE_FRAGMENT,
+
+	/** @DRM_PVR_JOB_TYPE_COMPUTE: Job type is compute. */
+	DRM_PVR_JOB_TYPE_COMPUTE,
+
+	/** @DRM_PVR_JOB_TYPE_TRANSFER_FRAG: Job type is a fragment transfer. */
+	DRM_PVR_JOB_TYPE_TRANSFER_FRAG,
+};
+
+/**
+ * struct drm_pvr_hwrt_data_ref - Reference HWRT data
+ */
+struct drm_pvr_hwrt_data_ref {
+	/** @set_handle: HWRT data set handle. */
+	__u32 set_handle;
+
+	/** @data_index: Index of the HWRT data inside the data set. */
+	__u32 data_index;
+};
+
+/**
+ * struct drm_pvr_job - Job arguments passed to the %DRM_IOCTL_PVR_SUBMIT_JOBS ioctl
+ */
+struct drm_pvr_job {
+	/**
+	 * @type: [IN] Type of job being submitted.
+	 *
+	 * This must be one of the values defined by &enum drm_pvr_job_type.
+	 */
+	__u32 type;
+
+	/**
+	 * @context_handle: [IN] Context handle.
+	 *
+	 * This must be a valid handle returned by %DRM_IOCTL_PVR_CREATE_CONTEXT.
+	 * The type of context must be compatible with the type of job being
+	 * submitted, as given by @type.
+	 */
+	__u32 context_handle;
+
+	/**
+	 * @flags: [IN] Flags for command.
+	 *
+	 * These are job-dependent. See the ``DRM_PVR_SUBMIT_JOB_*`` flags.
+	 */
+	__u32 flags;
+
+	/**
+	 * @cmd_stream_len: [IN] Length of command stream, in bytes.
+	 */
+	__u32 cmd_stream_len;
+
+	/**
+	 * @cmd_stream: [IN] Pointer to command stream for command.
+	 *
+	 * The command stream must be u64-aligned.
+	 */
+	__u64 cmd_stream;
+
+	/** @sync_ops: [IN] Sync operations for this job. */
+	struct drm_pvr_obj_array sync_ops;
+
+	/**
+	 * @hwrt: [IN] HWRT data used by render jobs (geometry or fragment).
+	 *
+	 * Must be zero for non-render jobs.
+	 */
+	struct drm_pvr_hwrt_data_ref hwrt;
+};
+
+/**
+ * struct drm_pvr_ioctl_submit_jobs_args - Arguments for %DRM_IOCTL_PVR_SUBMIT_JOBS
+ *
+ * If the syscall returns an error it is important to check the value of
+ * @jobs.count. This indicates the index into @jobs.array where the
+ * error occurred.
+ */
+struct drm_pvr_ioctl_submit_jobs_args {
+	/** @jobs: [IN] Array of jobs to submit. */
+	struct drm_pvr_obj_array jobs;
+};
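+
+/*
+ * Editor's illustrative sketch (not part of this header): submitting a single
+ * compute job. ``fd``, ``compute_ctx``, ``cmd_stream`` and ``cmd_stream_len``
+ * are hypothetical, and &struct drm_pvr_obj_array (defined earlier in this
+ * header) may require additional fields, such as a stride, to be populated.
+ *
+ *    struct drm_pvr_job job = {
+ *            .type = DRM_PVR_JOB_TYPE_COMPUTE,
+ *            .context_handle = compute_ctx,
+ *            .cmd_stream = (__u64)(uintptr_t)cmd_stream,
+ *            .cmd_stream_len = cmd_stream_len,
+ *    };
+ *    struct drm_pvr_ioctl_submit_jobs_args submit_args = { 0 };
+ *
+ *    submit_args.jobs.array = (__u64)(uintptr_t)&job;
+ *    submit_args.jobs.count = 1;
+ *
+ *    if (drmIoctl(fd, DRM_IOCTL_PVR_SUBMIT_JOBS, &submit_args))
+ *            return -1;
+ */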
+
+#if defined(__cplusplus)
+}
+#endif
+
+#endif /* PVR_DRM_UAPI_H */
-- 
2.41.0


^ permalink raw reply related	[flat|nested] 77+ messages in thread

* [PATCH v5 04/17] drm/imagination: Add skeleton PowerVR driver
  2023-08-16  8:25 ` Sarah Walker
@ 2023-08-16  8:25   ` Sarah Walker
  -1 siblings, 0 replies; 77+ messages in thread
From: Sarah Walker @ 2023-08-16  8:25 UTC (permalink / raw)
  To: dri-devel
  Cc: matthew.brost, luben.tuikov, tzimmermann, linux-kernel, mripard,
	afd, Matt Coster, boris.brezillon, dakr, donald.robson, hns,
	christian.koenig, faith.ekstrand

This adds the basic skeleton of the driver. The driver registers
itself with DRM on probe. Ioctl handlers are currently implemented
as stubs.

Changes since v3:
- Clarify supported GPU generations in driver description
- Use drm_dev_unplug() when removing device
- Change from_* and to_* functions to macros
- Fix IS_PTR/PTR_ERR confusion in pvr_probe()
- Remove err_out labels in favour of direct returning
- Remove specific am62 compatible match string
- Drop MODULE_FIRMWARE()

Co-developed-by: Frank Binns <frank.binns@imgtec.com>
Signed-off-by: Frank Binns <frank.binns@imgtec.com>
Co-developed-by: Matt Coster <matt.coster@imgtec.com>
Signed-off-by: Matt Coster <matt.coster@imgtec.com>
Signed-off-by: Sarah Walker <sarah.walker@imgtec.com>
Reviewed-by: Maxime Ripard <mripard@kernel.org>
---
 MAINTAINERS                              |   1 +
 drivers/gpu/drm/Kconfig                  |   2 +
 drivers/gpu/drm/Makefile                 |   1 +
 drivers/gpu/drm/imagination/Kconfig      |  15 +
 drivers/gpu/drm/imagination/Makefile     |   9 +
 drivers/gpu/drm/imagination/pvr_device.h | 153 +++++++
 drivers/gpu/drm/imagination/pvr_drv.c    | 510 +++++++++++++++++++++++
 drivers/gpu/drm/imagination/pvr_drv.h    |  22 +
 8 files changed, 713 insertions(+)
 create mode 100644 drivers/gpu/drm/imagination/Kconfig
 create mode 100644 drivers/gpu/drm/imagination/Makefile
 create mode 100644 drivers/gpu/drm/imagination/pvr_device.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_drv.c
 create mode 100644 drivers/gpu/drm/imagination/pvr_drv.h

diff --git a/MAINTAINERS b/MAINTAINERS
index 55e17f8aea91..82b82cbdb22a 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -10144,6 +10144,7 @@ M:	Sarah Walker <sarah.walker@imgtec.com>
 M:	Donald Robson <donald.robson@imgtec.com>
 S:	Supported
 F:	Documentation/devicetree/bindings/gpu/img,powervr.yaml
+F:	drivers/gpu/drm/imagination/
 F:	include/uapi/drm/pvr_drm.h
 
 IMON SOUNDGRAPH USB IR RECEIVER
diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig
index 9d1f0e04fd56..5ff12cdc94be 100644
--- a/drivers/gpu/drm/Kconfig
+++ b/drivers/gpu/drm/Kconfig
@@ -387,6 +387,8 @@ source "drivers/gpu/drm/solomon/Kconfig"
 
 source "drivers/gpu/drm/sprd/Kconfig"
 
+source "drivers/gpu/drm/imagination/Kconfig"
+
 config DRM_HYPERV
 	tristate "DRM Support for Hyper-V synthetic video device"
 	depends on DRM && PCI && MMU && HYPERV
diff --git a/drivers/gpu/drm/Makefile b/drivers/gpu/drm/Makefile
index 215e78e79125..3255e7d4504a 100644
--- a/drivers/gpu/drm/Makefile
+++ b/drivers/gpu/drm/Makefile
@@ -198,3 +198,4 @@ obj-$(CONFIG_DRM_HYPERV) += hyperv/
 obj-y			+= solomon/
 obj-$(CONFIG_DRM_SPRD) += sprd/
 obj-$(CONFIG_DRM_LOONGSON) += loongson/
+obj-$(CONFIG_DRM_POWERVR) += imagination/
diff --git a/drivers/gpu/drm/imagination/Kconfig b/drivers/gpu/drm/imagination/Kconfig
new file mode 100644
index 000000000000..b7cc82de53fc
--- /dev/null
+++ b/drivers/gpu/drm/imagination/Kconfig
@@ -0,0 +1,15 @@
+# SPDX-License-Identifier: GPL-2.0 OR MIT
+# Copyright (c) 2023 Imagination Technologies Ltd.
+
+config DRM_POWERVR
+	tristate "Imagination Technologies PowerVR Graphics (Series 6 and later)"
+	depends on ARM64
+	depends on DRM
+	select DRM_GEM_SHMEM_HELPER
+	select DRM_SCHED
+	select FW_LOADER
+	help
+	  Choose this option if you have a system that has an Imagination
+	  Technologies PowerVR GPU (Series 6 or later).
+
+	  If "M" is selected, the module will be called powervr.
diff --git a/drivers/gpu/drm/imagination/Makefile b/drivers/gpu/drm/imagination/Makefile
new file mode 100644
index 000000000000..19b40c2d7356
--- /dev/null
+++ b/drivers/gpu/drm/imagination/Makefile
@@ -0,0 +1,9 @@
+# SPDX-License-Identifier: GPL-2.0 OR MIT
+# Copyright (c) 2023 Imagination Technologies Ltd.
+
+subdir-ccflags-y := -I$(srctree)/$(src)
+
+powervr-y := \
+	pvr_drv.o \
+
+obj-$(CONFIG_DRM_POWERVR) += powervr.o
diff --git a/drivers/gpu/drm/imagination/pvr_device.h b/drivers/gpu/drm/imagination/pvr_device.h
new file mode 100644
index 000000000000..38e17448013f
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_device.h
@@ -0,0 +1,153 @@
+/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#ifndef PVR_DEVICE_H
+#define PVR_DEVICE_H
+
+#include <drm/drm_device.h>
+#include <drm/drm_file.h>
+#include <drm/drm_mm.h>
+
+#include <linux/bits.h>
+#include <linux/compiler_attributes.h>
+#include <linux/compiler_types.h>
+#include <linux/kernel.h>
+#include <linux/types.h>
+#include <linux/wait.h>
+
+/**
+ * struct pvr_device - powervr-specific wrapper for &struct drm_device
+ */
+struct pvr_device {
+	/**
+	 * @base: The underlying &struct drm_device.
+	 *
+	 * Do not access this member directly, instead call
+	 * from_pvr_device().
+	 */
+	struct drm_device base;
+};
+
+/**
+ * struct pvr_file - powervr-specific data to be assigned to &struct
+ * drm_file.driver_priv
+ */
+struct pvr_file {
+	/**
+	 * @file: A reference to the parent &struct drm_file.
+	 *
+	 * Do not access this member directly, instead call from_pvr_file().
+	 */
+	struct drm_file *file;
+
+	/**
+	 * @pvr_dev: A reference to the powervr-specific wrapper for the
+	 *           associated device. Saves on repeated calls to
+	 *           to_pvr_device().
+	 */
+	struct pvr_device *pvr_dev;
+};
+
+#define from_pvr_device(pvr_dev) (&pvr_dev->base)
+
+#define to_pvr_device(drm_dev) container_of_const(drm_dev, struct pvr_device, base)
+
+#define from_pvr_file(pvr_file) (pvr_file->file)
+
+#define to_pvr_file(file) (file->driver_priv)
+
+/**
+ * DOC: IOCTL validation helpers
+ *
+ * To validate the constraints imposed on IOCTL argument structs, a collection
+ * of macros and helper functions exist in ``pvr_device.h``.
+ *
+ * Of the current helpers, it should only be necessary to call
+ * PVR_IOCTL_UNION_PADDING_CHECK() directly. This macro should be used once in
+ * every code path which extracts a union member from a struct passed from
+ * userspace.
+ */
+
+/**
+ * pvr_ioctl_union_padding_check() - Validate that the implicit padding between
+ * the end of a union member and the end of the union itself is zeroed.
+ * @instance: Pointer to the instance of the struct to validate.
+ * @union_offset: Offset into the type of @instance of the target union. Must
+ * be 64-bit aligned.
+ * @union_size: Size of the target union in the type of @instance. Must be
+ * 64-bit aligned.
+ * @member_size: Size of the target member in the target union specified by
+ * @union_offset and @union_size. It is assumed that the offset of the target
+ * member is zero relative to @union_offset. Must be 64-bit aligned.
+ *
+ * You probably want to use PVR_IOCTL_UNION_PADDING_CHECK() instead of calling
+ * this function directly, since that macro abstracts away much of the setup,
+ * and also provides some static validation. See its docs for details.
+ *
+ * Return:
+ *  * %true if every byte between the end of the used member of the union and
+ *    the end of that union is zeroed, or
+ *  * %false otherwise.
+ */
+static __always_inline bool
+pvr_ioctl_union_padding_check(void *instance, size_t union_offset,
+			      size_t union_size, size_t member_size)
+{
+	/*
+	 * void pointer arithmetic is technically illegal - cast to a byte
+	 * pointer so this addition works safely.
+	 */
+	void *padding_start = ((u8 *)instance) + union_offset + member_size;
+	size_t padding_size = union_size - member_size;
+
+	return !memchr_inv(padding_start, 0, padding_size);
+}
+
+/**
+ * PVR_STATIC_ASSERT_64BIT_ALIGNED() - Inline assertion for 64-bit alignment.
+ * @static_expr_: Target expression to evaluate.
+ *
+ * If @static_expr_ does not evaluate to a constant integer which would be a
+ * 64-bit aligned address (i.e. a multiple of 8), compilation will fail.
+ *
+ * Return:
+ * The value of @static_expr_.
+ */
+#define PVR_STATIC_ASSERT_64BIT_ALIGNED(static_expr_)                     \
+	({                                                                \
+		static_assert(((static_expr_) & (sizeof(u64) - 1)) == 0); \
+		(static_expr_);                                           \
+	})
+
+/**
+ * PVR_IOCTL_UNION_PADDING_CHECK() - Validate that the implicit padding between
+ * the end of a union member and the end of the union itself is zeroed.
+ * @struct_instance_: An expression which evaluates to a pointer to a UAPI data
+ * struct.
+ * @union_: The name of the union member of @struct_instance_ to check. If the
+ * union member is nested within the type of @struct_instance_, this may
+ * contain the member access operator (".").
+ * @member_: The name of the member of @union_ to assess.
+ *
+ * This is a wrapper around pvr_ioctl_union_padding_check() which performs
+ * alignment checks and simplifies things for the caller.
+ *
+ * Return:
+ *  * %true if every byte in @struct_instance_ between the end of @member_ and
+ *    the end of @union_ is zeroed, or
+ *  * %false otherwise.
+ */
+#define PVR_IOCTL_UNION_PADDING_CHECK(struct_instance_, union_, member_)     \
+	({                                                                   \
+		typeof(struct_instance_) __instance = (struct_instance_);    \
+		size_t __union_offset = PVR_STATIC_ASSERT_64BIT_ALIGNED(     \
+			offsetof(typeof(*__instance), union_));              \
+		size_t __union_size = PVR_STATIC_ASSERT_64BIT_ALIGNED(       \
+			sizeof(__instance->union_));                         \
+		size_t __member_size = PVR_STATIC_ASSERT_64BIT_ALIGNED(      \
+			sizeof(__instance->union_.member_));                 \
+		pvr_ioctl_union_padding_check(__instance, __union_offset,    \
+					      __union_size, __member_size);  \
+	})
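+
+/*
+ * Editor's illustrative sketch: how an IOCTL handler might use the helper
+ * above when reading one member of a union embedded in a UAPI args struct.
+ * The names ``args``, ``data`` and ``geom_args`` are hypothetical.
+ *
+ *    if (!PVR_IOCTL_UNION_PADDING_CHECK(args, data, geom_args))
+ *            return -EINVAL;
+ */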
+
+#endif /* PVR_DEVICE_H */
diff --git a/drivers/gpu/drm/imagination/pvr_drv.c b/drivers/gpu/drm/imagination/pvr_drv.c
new file mode 100644
index 000000000000..87b5c34e67de
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_drv.c
@@ -0,0 +1,510 @@
+// SPDX-License-Identifier: GPL-2.0 OR MIT
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#include "pvr_device.h"
+#include "pvr_drv.h"
+
+#include <uapi/drm/pvr_drm.h>
+
+#include <drm/drm_device.h>
+#include <drm/drm_drv.h>
+#include <drm/drm_file.h>
+#include <drm/drm_gem.h>
+#include <drm/drm_ioctl.h>
+
+#include <linux/err.h>
+#include <linux/export.h>
+#include <linux/fs.h>
+#include <linux/kernel.h>
+#include <linux/mod_devicetable.h>
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+#include <linux/of_device.h>
+#include <linux/of_platform.h>
+#include <linux/platform_device.h>
+
+/**
+ * DOC: PowerVR (Series 6 and later) Graphics Driver
+ *
+ * This driver supports the following PowerVR graphics cores from Imagination
+ * Technologies:
+ *
+ * * AXE-1-16M (found in Texas Instruments AM62)
+ */
+
+/**
+ * pvr_ioctl_create_bo() - IOCTL to create a GEM buffer object.
+ * @drm_dev: [IN] Target DRM device.
+ * @raw_args: [IN/OUT] Arguments passed to this IOCTL. This must be of type
+ * &struct drm_pvr_ioctl_create_bo_args.
+ * @file: [IN] DRM file-private data.
+ *
+ * Called from userspace with %DRM_IOCTL_PVR_CREATE_BO.
+ *
+ * Return:
+ *  * 0 on success,
+ *  * -%EINVAL if the value of &drm_pvr_ioctl_create_bo_args.size is zero
+ *    or wider than &typedef size_t,
+ *  * -%EINVAL if any bits in &drm_pvr_ioctl_create_bo_args.flags that are
+ *    reserved or undefined are set,
+ *  * -%EINVAL if any padding fields in &drm_pvr_ioctl_create_bo_args are not
+ *    zero,
+ *  * Any error encountered while creating the object (see
+ *    pvr_gem_object_create()), or
+ *  * Any error encountered while transferring ownership of the object into a
+ *    userspace-accessible handle (see pvr_gem_object_into_handle()).
+ */
+static int
+pvr_ioctl_create_bo(struct drm_device *drm_dev, void *raw_args,
+		    struct drm_file *file)
+{
+	return -ENOTTY;
+}
+
+/**
+ * pvr_ioctl_get_bo_mmap_offset() - IOCTL to generate a "fake" offset to be
+ * used when calling mmap() from userspace to map the given GEM buffer object
+ * @drm_dev: [IN] DRM device (unused).
+ * @raw_args: [IN/OUT] Arguments passed to this IOCTL. This must be of type
+ *                     &struct drm_pvr_ioctl_get_bo_mmap_offset_args.
+ * @file: [IN] DRM file private data.
+ *
+ * Called from userspace with %DRM_IOCTL_PVR_GET_BO_MMAP_OFFSET.
+ *
+ * This IOCTL does *not* perform an mmap. See the docs on
+ * &struct drm_pvr_ioctl_get_bo_mmap_offset_args for details.
+ *
+ * Return:
+ *  * 0 on success,
+ *  * -%ENOENT if the handle does not reference a valid GEM buffer object,
+ *  * -%EINVAL if any padding fields in &struct
+ *    drm_pvr_ioctl_get_bo_mmap_offset_args are not zero, or
+ *  * Any error returned by drm_gem_create_mmap_offset().
+ */
+static int
+pvr_ioctl_get_bo_mmap_offset(struct drm_device *drm_dev, void *raw_args,
+			     struct drm_file *file)
+{
+	return -ENOTTY;
+}
+
+/**
+ * pvr_ioctl_dev_query() - IOCTL to copy information about a device
+ * @drm_dev: [IN] DRM device.
+ * @raw_args: [IN/OUT] Arguments passed to this IOCTL. This must be of type
+ *                     &struct drm_pvr_ioctl_dev_query_args.
+ * @file: [IN] DRM file private data.
+ *
+ * Called from userspace with %DRM_IOCTL_PVR_DEV_QUERY.
+ * If the given receiving struct pointer is NULL, or the indicated size is too
+ * small, the expected size of the struct type will be returned in the size
+ * argument field.
+ *
+ * Return:
+ *  * 0 on success or when fetching the size with args->pointer == NULL, or
+ *  * -%E2BIG if the indicated size of the receiving struct is less than is
+ *    required to contain the copied data, or
+ *  * -%EINVAL if the indicated struct type is unknown, or
+ *  * -%ENOMEM if local memory could not be allocated, or
+ *  * -%EFAULT if local memory could not be copied to userspace.
+ */
+static int
+pvr_ioctl_dev_query(struct drm_device *drm_dev, void *raw_args,
+		    struct drm_file *file)
+{
+	return -ENOTTY;
+}
+
+/**
+ * pvr_ioctl_create_context() - IOCTL to create a context
+ * @drm_dev: [IN] DRM device.
+ * @raw_args: [IN/OUT] Arguments passed to this IOCTL. This must be of type
+ *                     &struct drm_pvr_ioctl_create_context_args.
+ * @file: [IN] DRM file private data.
+ *
+ * Called from userspace with %DRM_IOCTL_PVR_CREATE_CONTEXT.
+ *
+ * Return:
+ *  * 0 on success, or
+ *  * -%EINVAL if provided arguments are invalid, or
+ *  * -%EFAULT if arguments can't be copied from userspace, or
+ *  * Any error returned by pvr_create_render_context().
+ */
+static int
+pvr_ioctl_create_context(struct drm_device *drm_dev, void *raw_args,
+			 struct drm_file *file)
+{
+	return -ENOTTY;
+}
+
+/**
+ * pvr_ioctl_destroy_context() - IOCTL to destroy a context
+ * @drm_dev: [IN] DRM device.
+ * @raw_args: [IN/OUT] Arguments passed to this IOCTL. This must be of type
+ *                     &struct drm_pvr_ioctl_destroy_context_args.
+ * @file: [IN] DRM file private data.
+ *
+ * Called from userspace with %DRM_IOCTL_PVR_DESTROY_CONTEXT.
+ *
+ * Return:
+ *  * 0 on success, or
+ *  * -%EINVAL if context not in context list.
+ */
+static int
+pvr_ioctl_destroy_context(struct drm_device *drm_dev, void *raw_args,
+			  struct drm_file *file)
+{
+	return -ENOTTY;
+}
+
+/**
+ * pvr_ioctl_create_free_list() - IOCTL to create a free list
+ * @drm_dev: [IN] DRM device.
+ * @raw_args: [IN/OUT] Arguments passed to this IOCTL. This must be of type
+ *                     &struct drm_pvr_ioctl_create_free_list_args.
+ * @file: [IN] DRM file private data.
+ *
+ * Called from userspace with %DRM_IOCTL_PVR_CREATE_FREE_LIST.
+ *
+ * Return:
+ *  * 0 on success, or
+ *  * Any error returned by pvr_free_list_create().
+ */
+static int
+pvr_ioctl_create_free_list(struct drm_device *drm_dev, void *raw_args,
+			   struct drm_file *file)
+{
+	return -ENOTTY;
+}
+
+/**
+ * pvr_ioctl_destroy_free_list() - IOCTL to destroy a free list
+ * @drm_dev: [IN] DRM device.
+ * @raw_args: [IN] Arguments passed to this IOCTL. This must be of type
+ *                 &struct drm_pvr_ioctl_destroy_free_list_args.
+ * @file: [IN] DRM file private data.
+ *
+ * Called from userspace with %DRM_IOCTL_PVR_DESTROY_FREE_LIST.
+ *
+ * Return:
+ *  * 0 on success, or
+ *  * -%EINVAL if free list not in object list.
+ */
+static int
+pvr_ioctl_destroy_free_list(struct drm_device *drm_dev, void *raw_args,
+			    struct drm_file *file)
+{
+	return -ENOTTY;
+}
+
+/**
+ * pvr_ioctl_create_hwrt_dataset() - IOCTL to create a HWRT dataset
+ * @drm_dev: [IN] DRM device.
+ * @raw_args: [IN/OUT] Arguments passed to this IOCTL. This must be of type
+ *                     &struct drm_pvr_ioctl_create_hwrt_dataset_args.
+ * @file: [IN] DRM file private data.
+ *
+ * Called from userspace with %DRM_IOCTL_PVR_CREATE_HWRT_DATASET.
+ *
+ * Return:
+ *  * 0 on success, or
+ *  * Any error returned by pvr_hwrt_dataset_create().
+ */
+static int
+pvr_ioctl_create_hwrt_dataset(struct drm_device *drm_dev, void *raw_args,
+			      struct drm_file *file)
+{
+	return -ENOTTY;
+}
+
+/**
+ * pvr_ioctl_destroy_hwrt_dataset() - IOCTL to destroy a HWRT dataset
+ * @drm_dev: [IN] DRM device.
+ * @raw_args: [IN] Arguments passed to this IOCTL. This must be of type
+ *                 &struct drm_pvr_ioctl_destroy_hwrt_dataset_args.
+ * @file: [IN] DRM file private data.
+ *
+ * Called from userspace with %DRM_IOCTL_PVR_DESTROY_HWRT_DATASET.
+ *
+ * Return:
+ *  * 0 on success, or
+ *  * -%EINVAL if HWRT dataset not in object list.
+ */
+static int
+pvr_ioctl_destroy_hwrt_dataset(struct drm_device *drm_dev, void *raw_args,
+			       struct drm_file *file)
+{
+	return -ENOTTY;
+}
+
+/**
+ * pvr_ioctl_create_vm_context() - IOCTL to create a VM context
+ * @drm_dev: [IN] DRM device.
+ * @raw_args: [IN/OUT] Arguments passed to this IOCTL. This must be of type
+ *                     &struct drm_pvr_ioctl_create_vm_context_args.
+ * @file: [IN] DRM file private data.
+ *
+ * Called from userspace with %DRM_IOCTL_PVR_CREATE_VM_CONTEXT.
+ *
+ * Return:
+ *  * 0 on success, or
+ *  * Any error returned by pvr_vm_create_context().
+ */
+static int
+pvr_ioctl_create_vm_context(struct drm_device *drm_dev, void *raw_args,
+			    struct drm_file *file)
+{
+	return -ENOTTY;
+}
+
+/**
+ * pvr_ioctl_destroy_vm_context() - IOCTL to destroy a VM context
+ * @drm_dev: [IN] DRM device.
+ * @raw_args: [IN] Arguments passed to this IOCTL. This must be of type
+ *                 &struct drm_pvr_ioctl_destroy_vm_context_args.
+ * @file: [IN] DRM file private data.
+ *
+ * Called from userspace with %DRM_IOCTL_PVR_DESTROY_VM_CONTEXT.
+ *
+ * Return:
+ *  * 0 on success, or
+ *  * -%EINVAL if object not in object list.
+ */
+static int
+pvr_ioctl_destroy_vm_context(struct drm_device *drm_dev, void *raw_args,
+			     struct drm_file *file)
+{
+	return -ENOTTY;
+}
+
+/**
+ * pvr_ioctl_vm_map() - IOCTL to map buffer to GPU address space.
+ * @drm_dev: [IN] DRM device.
+ * @raw_args: [IN] Arguments passed to this IOCTL. This must be of type
+ *                 &struct drm_pvr_ioctl_vm_map_args.
+ * @file: [IN] DRM file private data.
+ *
+ * Called from userspace with %DRM_IOCTL_PVR_VM_MAP.
+ *
+ * Return:
+ *  * 0 on success,
+ *  * -%EINVAL if &drm_pvr_ioctl_vm_map_args.flags is not zero,
+ *  * -%EINVAL if the bounds specified by &drm_pvr_ioctl_vm_map_args.offset
+ *    and &drm_pvr_ioctl_vm_map_args.size are not valid or do not fall
+ *    within the buffer object specified by
+ *    &drm_pvr_ioctl_vm_map_args.handle,
+ *  * -%EINVAL if the bounds specified by
+ *    &drm_pvr_ioctl_vm_map_args.device_addr and
+ *    &drm_pvr_ioctl_vm_map_args.size do not form a valid device-virtual
+ *    address range which falls entirely within a single heap, or
+ *  * -%ENOENT if &drm_pvr_ioctl_vm_map_args.handle does not refer to a
+ *    valid PowerVR buffer object.
+ */
+static int
+pvr_ioctl_vm_map(struct drm_device *drm_dev, void *raw_args,
+		 struct drm_file *file)
+{
+	return -ENOTTY;
+}
+
+/**
+ * pvr_ioctl_vm_unmap() - IOCTL to unmap buffer from GPU address space.
+ * @drm_dev: [IN] DRM device.
+ * @raw_args: [IN] Arguments passed to this IOCTL. This must be of type
+ *                 &struct drm_pvr_ioctl_vm_unmap_args.
+ * @file: [IN] DRM file private data.
+ *
+ * Called from userspace with %DRM_IOCTL_PVR_VM_UNMAP.
+ *
+ * Return:
+ *  * 0 on success,
+ *  * -%EINVAL if &drm_pvr_ioctl_vm_unmap_args.device_addr is not a valid
+ *    device page-aligned device-virtual address, or
+ *  * -%ENOENT if there is currently no PowerVR buffer object mapped at
+ *    &drm_pvr_ioctl_vm_unmap_args.device_addr.
+ */
+static int
+pvr_ioctl_vm_unmap(struct drm_device *drm_dev, void *raw_args,
+		   struct drm_file *file)
+{
+	return -ENOTTY;
+}
+
+/**
+ * pvr_ioctl_submit_jobs() - IOCTL to submit jobs to the GPU
+ * @drm_dev: [IN] DRM device.
+ * @raw_args: [IN] Arguments passed to this IOCTL. This must be of type
+ *                 &struct drm_pvr_ioctl_submit_jobs_args.
+ * @file: [IN] DRM file private data.
+ *
+ * Called from userspace with %DRM_IOCTL_PVR_SUBMIT_JOBS.
+ *
+ * Return:
+ *  * 0 on success, or
+ *  * -%EINVAL if arguments are invalid.
+ */
+static int
+pvr_ioctl_submit_jobs(struct drm_device *drm_dev, void *raw_args,
+		      struct drm_file *file)
+{
+	return -ENOTTY;
+}
+
+#define DRM_PVR_IOCTL(_name, _func, _flags) \
+	DRM_IOCTL_DEF_DRV(PVR_##_name, pvr_ioctl_##_func, _flags)
+
+/* clang-format off */
+
+static const struct drm_ioctl_desc pvr_drm_driver_ioctls[] = {
+	DRM_PVR_IOCTL(DEV_QUERY, dev_query, DRM_RENDER_ALLOW),
+	DRM_PVR_IOCTL(CREATE_BO, create_bo, DRM_RENDER_ALLOW),
+	DRM_PVR_IOCTL(GET_BO_MMAP_OFFSET, get_bo_mmap_offset, DRM_RENDER_ALLOW),
+	DRM_PVR_IOCTL(CREATE_VM_CONTEXT, create_vm_context, DRM_RENDER_ALLOW),
+	DRM_PVR_IOCTL(DESTROY_VM_CONTEXT, destroy_vm_context, DRM_RENDER_ALLOW),
+	DRM_PVR_IOCTL(VM_MAP, vm_map, DRM_RENDER_ALLOW),
+	DRM_PVR_IOCTL(VM_UNMAP, vm_unmap, DRM_RENDER_ALLOW),
+	DRM_PVR_IOCTL(CREATE_CONTEXT, create_context, DRM_RENDER_ALLOW),
+	DRM_PVR_IOCTL(DESTROY_CONTEXT, destroy_context, DRM_RENDER_ALLOW),
+	DRM_PVR_IOCTL(CREATE_FREE_LIST, create_free_list, DRM_RENDER_ALLOW),
+	DRM_PVR_IOCTL(DESTROY_FREE_LIST, destroy_free_list, DRM_RENDER_ALLOW),
+	DRM_PVR_IOCTL(CREATE_HWRT_DATASET, create_hwrt_dataset, DRM_RENDER_ALLOW),
+	DRM_PVR_IOCTL(DESTROY_HWRT_DATASET, destroy_hwrt_dataset, DRM_RENDER_ALLOW),
+	DRM_PVR_IOCTL(SUBMIT_JOBS, submit_jobs, DRM_RENDER_ALLOW),
+};
+
+/* clang-format on */
+
+#undef DRM_PVR_IOCTL
+
+/**
+ * pvr_drm_driver_open() - Driver callback when a new &struct drm_file is opened
+ * @drm_dev: [IN] DRM device.
+ * @file: [IN] DRM file private data.
+ *
+ * Allocates powervr-specific file private data (&struct pvr_file).
+ *
+ * Registered in &pvr_drm_driver.
+ *
+ * Return:
+ *  * 0 on success,
+ *  * -%ENOMEM if the allocation of a &struct pvr_file fails, or
+ *  * Any error returned by pvr_memory_context_init().
+ */
+static int
+pvr_drm_driver_open(struct drm_device *drm_dev, struct drm_file *file)
+{
+	struct pvr_device *pvr_dev = to_pvr_device(drm_dev);
+	struct pvr_file *pvr_file;
+
+	pvr_file = kzalloc(sizeof(*pvr_file), GFP_KERNEL);
+	if (!pvr_file)
+		return -ENOMEM;
+
+	/*
+	 * Store reference to base DRM file private data for use by
+	 * from_pvr_file.
+	 */
+	pvr_file->file = file;
+
+	/*
+	 * Store reference to powervr-specific outer device struct in file
+	 * private data for convenient access.
+	 */
+	pvr_file->pvr_dev = pvr_dev;
+
+	/*
+	 * Store reference to powervr-specific file private data in DRM file
+	 * private data.
+	 */
+	file->driver_priv = pvr_file;
+
+	return 0;
+}
+
+/**
+ * pvr_drm_driver_postclose() - One of the driver callbacks when a &struct
+ * drm_file is closed.
+ * @drm_dev: [IN] DRM device (unused).
+ * @file: [IN] DRM file private data.
+ *
+ * Frees powervr-specific file private data (&struct pvr_file).
+ *
+ * Registered in &pvr_drm_driver.
+ */
+static void
+pvr_drm_driver_postclose(__always_unused struct drm_device *drm_dev,
+			 struct drm_file *file)
+{
+	struct pvr_file *pvr_file = to_pvr_file(file);
+
+	kfree(pvr_file);
+	file->driver_priv = NULL;
+}
+
+DEFINE_DRM_GEM_FOPS(pvr_drm_driver_fops);
+
+static struct drm_driver pvr_drm_driver = {
+	.driver_features = DRIVER_RENDER,
+	.open = pvr_drm_driver_open,
+	.postclose = pvr_drm_driver_postclose,
+	.ioctls = pvr_drm_driver_ioctls,
+	.num_ioctls = ARRAY_SIZE(pvr_drm_driver_ioctls),
+	.fops = &pvr_drm_driver_fops,
+
+	.name = PVR_DRIVER_NAME,
+	.desc = PVR_DRIVER_DESC,
+	.date = PVR_DRIVER_DATE,
+	.major = PVR_DRIVER_MAJOR,
+	.minor = PVR_DRIVER_MINOR,
+	.patchlevel = PVR_DRIVER_PATCHLEVEL,
+
+};
+
+static int
+pvr_probe(struct platform_device *plat_dev)
+{
+	struct pvr_device *pvr_dev;
+	struct drm_device *drm_dev;
+
+	pvr_dev = devm_drm_dev_alloc(&plat_dev->dev, &pvr_drm_driver,
+				     struct pvr_device, base);
+	if (IS_ERR(pvr_dev))
+		return PTR_ERR(pvr_dev);
+
+	drm_dev = &pvr_dev->base;
+
+	platform_set_drvdata(plat_dev, drm_dev);
+
+	return drm_dev_register(drm_dev, 0);
+}
+
+static int
+pvr_remove(struct platform_device *plat_dev)
+{
+	struct drm_device *drm_dev = platform_get_drvdata(plat_dev);
+
+	drm_dev_unplug(drm_dev);
+
+	return 0;
+}
+
+static const struct of_device_id dt_match[] = {
+	{ .compatible = "img,powervr-seriesaxe", .data = NULL },
+	{}
+};
+MODULE_DEVICE_TABLE(of, dt_match);
+
+static struct platform_driver pvr_driver = {
+	.probe = pvr_probe,
+	.remove = pvr_remove,
+	.driver = {
+		.name = PVR_DRIVER_NAME,
+		.of_match_table = dt_match,
+	},
+};
+module_platform_driver(pvr_driver);
+
+MODULE_AUTHOR("Imagination Technologies Ltd.");
+MODULE_DESCRIPTION(PVR_DRIVER_DESC);
+MODULE_LICENSE("Dual MIT/GPL");
+MODULE_IMPORT_NS(DMA_BUF);
diff --git a/drivers/gpu/drm/imagination/pvr_drv.h b/drivers/gpu/drm/imagination/pvr_drv.h
new file mode 100644
index 000000000000..6075a993d922
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_drv.h
@@ -0,0 +1,22 @@
+/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#ifndef PVR_DRV_H
+#define PVR_DRV_H
+
+#include "linux/compiler_attributes.h"
+#include <uapi/drm/pvr_drm.h>
+
+#define PVR_DRIVER_NAME "powervr"
+#define PVR_DRIVER_DESC "Imagination PowerVR Graphics (Series 6 and later)"
+#define PVR_DRIVER_DATE "20230814"
+
+/*
+ * Driver interface version:
+ *  - 1.0: Initial interface
+ */
+#define PVR_DRIVER_MAJOR 1
+#define PVR_DRIVER_MINOR 0
+#define PVR_DRIVER_PATCHLEVEL 0
+
+#endif /* PVR_DRV_H */
-- 
2.41.0


^ permalink raw reply related	[flat|nested] 77+ messages in thread

+{
+	return -ENOTTY;
+}
+
+/**
+ * pvr_ioctl_destroy_vm_context() - IOCTL to destroy a VM context
+ * @drm_dev: [IN] DRM device.
+ * @raw_args: [IN] Arguments passed to this IOCTL. This must be of type
+ *                 &struct drm_pvr_ioctl_destroy_vm_context_args.
+ * @file: [IN] DRM file private data.
+ *
+ * Called from userspace with %DRM_IOCTL_PVR_DESTROY_VM_CONTEXT.
+ *
+ * Return:
+ *  * 0 on success, or
+ *  * -%EINVAL if object not in object list.
+ */
+static int
+pvr_ioctl_destroy_vm_context(struct drm_device *drm_dev, void *raw_args,
+			     struct drm_file *file)
+{
+	return -ENOTTY;
+}
+
+/**
+ * pvr_ioctl_vm_map() - IOCTL to map buffer to GPU address space.
+ * @drm_dev: [IN] DRM device.
+ * @raw_args: [IN] Arguments passed to this IOCTL. This must be of type
+ *                 &struct drm_pvr_ioctl_vm_map_args.
+ * @file: [IN] DRM file private data.
+ *
+ * Called from userspace with %DRM_IOCTL_PVR_VM_MAP.
+ *
+ * Return:
+ *  * 0 on success,
+ *  * -%EINVAL if &drm_pvr_ioctl_vm_map_args.flags is not zero,
+ *  * -%EINVAL if the bounds specified by &drm_pvr_ioctl_vm_map_args.offset
+ *    and &drm_pvr_ioctl_vm_map_args.size are not valid or do not fall
+ *    within the buffer object specified by
+ *    &drm_pvr_ioctl_vm_map_args.handle,
+ *  * -%EINVAL if the bounds specified by
+ *    &drm_pvr_ioctl_vm_map_args.device_addr and
+ *    &drm_pvr_ioctl_vm_map_args.size do not form a valid device-virtual
+ *    address range which falls entirely within a single heap, or
+ *  * -%ENOENT if &drm_pvr_ioctl_vm_map_args.handle does not refer to a
+ *    valid PowerVR buffer object.
+ */
+static int
+pvr_ioctl_vm_map(struct drm_device *drm_dev, void *raw_args,
+		 struct drm_file *file)
+{
+	return -ENOTTY;
+}
+
+/**
+ * pvr_ioctl_vm_unmap() - IOCTL to unmap buffer from GPU address space.
+ * @drm_dev: [IN] DRM device.
+ * @raw_args: [IN] Arguments passed to this IOCTL. This must be of type
+ *                 &struct drm_pvr_ioctl_vm_unmap_args.
+ * @file: [IN] DRM file private data.
+ *
+ * Called from userspace with %DRM_IOCTL_PVR_VM_UNMAP.
+ *
+ * Return:
+ *  * 0 on success,
+ *  * -%EINVAL if &drm_pvr_ioctl_vm_unmap_args.device_addr is not a valid
+ *    device page-aligned device-virtual address, or
+ *  * -%ENOENT if there is currently no PowerVR buffer object mapped at
+ *    &drm_pvr_ioctl_vm_unmap_args.device_addr.
+ */
+static int
+pvr_ioctl_vm_unmap(struct drm_device *drm_dev, void *raw_args,
+		   struct drm_file *file)
+{
+	return -ENOTTY;
+}
+
+/**
+ * pvr_ioctl_submit_jobs() - IOCTL to submit jobs to the GPU
+ * @drm_dev: [IN] DRM device.
+ * @raw_args: [IN] Arguments passed to this IOCTL. This must be of type
+ *                 &struct drm_pvr_ioctl_submit_job_args.
+ * @file: [IN] DRM file private data.
+ *
+ * Called from userspace with %DRM_IOCTL_PVR_SUBMIT_JOB.
+ *
+ * Return:
+ *  * 0 on success, or
+ *  * -%EINVAL if arguments are invalid.
+ */
+static int
+pvr_ioctl_submit_jobs(struct drm_device *drm_dev, void *raw_args,
+		      struct drm_file *file)
+{
+	return -ENOTTY;
+}
+
+#define DRM_PVR_IOCTL(_name, _func, _flags) \
+	DRM_IOCTL_DEF_DRV(PVR_##_name, pvr_ioctl_##_func, _flags)
+
+/* clang-format off */
+
+static const struct drm_ioctl_desc pvr_drm_driver_ioctls[] = {
+	DRM_PVR_IOCTL(DEV_QUERY, dev_query, DRM_RENDER_ALLOW),
+	DRM_PVR_IOCTL(CREATE_BO, create_bo, DRM_RENDER_ALLOW),
+	DRM_PVR_IOCTL(GET_BO_MMAP_OFFSET, get_bo_mmap_offset, DRM_RENDER_ALLOW),
+	DRM_PVR_IOCTL(CREATE_VM_CONTEXT, create_vm_context, DRM_RENDER_ALLOW),
+	DRM_PVR_IOCTL(DESTROY_VM_CONTEXT, destroy_vm_context, DRM_RENDER_ALLOW),
+	DRM_PVR_IOCTL(VM_MAP, vm_map, DRM_RENDER_ALLOW),
+	DRM_PVR_IOCTL(VM_UNMAP, vm_unmap, DRM_RENDER_ALLOW),
+	DRM_PVR_IOCTL(CREATE_CONTEXT, create_context, DRM_RENDER_ALLOW),
+	DRM_PVR_IOCTL(DESTROY_CONTEXT, destroy_context, DRM_RENDER_ALLOW),
+	DRM_PVR_IOCTL(CREATE_FREE_LIST, create_free_list, DRM_RENDER_ALLOW),
+	DRM_PVR_IOCTL(DESTROY_FREE_LIST, destroy_free_list, DRM_RENDER_ALLOW),
+	DRM_PVR_IOCTL(CREATE_HWRT_DATASET, create_hwrt_dataset, DRM_RENDER_ALLOW),
+	DRM_PVR_IOCTL(DESTROY_HWRT_DATASET, destroy_hwrt_dataset, DRM_RENDER_ALLOW),
+	DRM_PVR_IOCTL(SUBMIT_JOBS, submit_jobs, DRM_RENDER_ALLOW),
+};
+
+/* clang-format on */
+
+#undef DRM_PVR_IOCTL
+
+/**
+ * pvr_drm_driver_open() - Driver callback when a new &struct drm_file is opened
+ * @drm_dev: [IN] DRM device.
+ * @file: [IN] DRM file private data.
+ *
+ * Allocates powervr-specific file private data (&struct pvr_file).
+ *
+ * Registered in &pvr_drm_driver.
+ *
+ * Return:
+ *  * 0 on success,
+ *  * -%ENOMEM if the allocation of a &struct pvr_file fails, or
+ *  * Any error returned by pvr_memory_context_init().
+ */
+static int
+pvr_drm_driver_open(struct drm_device *drm_dev, struct drm_file *file)
+{
+	struct pvr_device *pvr_dev = to_pvr_device(drm_dev);
+	struct pvr_file *pvr_file;
+
+	pvr_file = kzalloc(sizeof(*pvr_file), GFP_KERNEL);
+	if (!pvr_file)
+		return -ENOMEM;
+
+	/*
+	 * Store reference to base DRM file private data for use by
+	 * from_pvr_file.
+	 */
+	pvr_file->file = file;
+
+	/*
+	 * Store reference to powervr-specific outer device struct in file
+	 * private data for convenient access.
+	 */
+	pvr_file->pvr_dev = pvr_dev;
+
+	/*
+	 * Store reference to powervr-specific file private data in DRM file
+	 * private data.
+	 */
+	file->driver_priv = pvr_file;
+
+	return 0;
+}
+
+/**
+ * pvr_drm_driver_postclose() - One of the driver callbacks when a &struct
+ * drm_file is closed.
+ * @drm_dev: [IN] DRM device (unused).
+ * @file: [IN] DRM file private data.
+ *
+ * Frees powervr-specific file private data (&struct pvr_file).
+ *
+ * Registered in &pvr_drm_driver.
+ */
+static void
+pvr_drm_driver_postclose(__always_unused struct drm_device *drm_dev,
+			 struct drm_file *file)
+{
+	struct pvr_file *pvr_file = to_pvr_file(file);
+
+	kfree(pvr_file);
+	file->driver_priv = NULL;
+}
+
+DEFINE_DRM_GEM_FOPS(pvr_drm_driver_fops);
+
+static struct drm_driver pvr_drm_driver = {
+	.driver_features = DRIVER_RENDER,
+	.open = pvr_drm_driver_open,
+	.postclose = pvr_drm_driver_postclose,
+	.ioctls = pvr_drm_driver_ioctls,
+	.num_ioctls = ARRAY_SIZE(pvr_drm_driver_ioctls),
+	.fops = &pvr_drm_driver_fops,
+
+	.name = PVR_DRIVER_NAME,
+	.desc = PVR_DRIVER_DESC,
+	.date = PVR_DRIVER_DATE,
+	.major = PVR_DRIVER_MAJOR,
+	.minor = PVR_DRIVER_MINOR,
+	.patchlevel = PVR_DRIVER_PATCHLEVEL,
+
+};
+
+static int
+pvr_probe(struct platform_device *plat_dev)
+{
+	struct pvr_device *pvr_dev;
+	struct drm_device *drm_dev;
+
+	pvr_dev = devm_drm_dev_alloc(&plat_dev->dev, &pvr_drm_driver,
+				     struct pvr_device, base);
+	if (IS_ERR(pvr_dev))
+		return PTR_ERR(pvr_dev);
+
+	drm_dev = &pvr_dev->base;
+
+	platform_set_drvdata(plat_dev, drm_dev);
+
+	return drm_dev_register(drm_dev, 0);
+}
+
+static int
+pvr_remove(struct platform_device *plat_dev)
+{
+	struct drm_device *drm_dev = platform_get_drvdata(plat_dev);
+
+	drm_dev_unplug(drm_dev);
+
+	return 0;
+}
+
+static const struct of_device_id dt_match[] = {
+	{ .compatible = "img,powervr-seriesaxe", .data = NULL },
+	{}
+};
+MODULE_DEVICE_TABLE(of, dt_match);
+
+static struct platform_driver pvr_driver = {
+	.probe = pvr_probe,
+	.remove = pvr_remove,
+	.driver = {
+		.name = PVR_DRIVER_NAME,
+		.of_match_table = dt_match,
+	},
+};
+module_platform_driver(pvr_driver);
+
+MODULE_AUTHOR("Imagination Technologies Ltd.");
+MODULE_DESCRIPTION(PVR_DRIVER_DESC);
+MODULE_LICENSE("Dual MIT/GPL");
+MODULE_IMPORT_NS(DMA_BUF);
diff --git a/drivers/gpu/drm/imagination/pvr_drv.h b/drivers/gpu/drm/imagination/pvr_drv.h
new file mode 100644
index 000000000000..6075a993d922
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_drv.h
@@ -0,0 +1,22 @@
+/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#ifndef PVR_DRV_H
+#define PVR_DRV_H
+
+#include <linux/compiler_attributes.h>
+#include <uapi/drm/pvr_drm.h>
+
+#define PVR_DRIVER_NAME "powervr"
+#define PVR_DRIVER_DESC "Imagination PowerVR Graphics (Series 6 and later)"
+#define PVR_DRIVER_DATE "20230814"
+
+/*
+ * Driver interface version:
+ *  - 1.0: Initial interface
+ */
+#define PVR_DRIVER_MAJOR 1
+#define PVR_DRIVER_MINOR 0
+#define PVR_DRIVER_PATCHLEVEL 0
+
+#endif /* PVR_DRV_H */
-- 
2.41.0


^ permalink raw reply related	[flat|nested] 77+ messages in thread

* [PATCH v5 05/17] drm/imagination: Get GPU resources
  2023-08-16  8:25 ` Sarah Walker
@ 2023-08-16  8:25   ` Sarah Walker
  -1 siblings, 0 replies; 77+ messages in thread
From: Sarah Walker @ 2023-08-16  8:25 UTC (permalink / raw)
  To: dri-devel
  Cc: matthew.brost, luben.tuikov, tzimmermann, linux-kernel, mripard,
	afd, Matt Coster, boris.brezillon, dakr, donald.robson, hns,
	christian.koenig, faith.ekstrand

Acquire clock and register resources, and enable/map as appropriate.

Changes since v3:
- Remove regulator resource (not used on supported platform)
- Use devm helpers
- Use devm_clk_get_optional() for optional clocks
- Don't prepare clocks on resource acquisition
- Drop pvr_device_clk_core_get_freq() helper
- Drop pvr_device_reg_fini()
- Drop NULLing of clocks in pvr_device_clk_init()
- Use dev_err_probe() on clock acquisition failure
- Remove PVR_CR_READ/WRITE helper macros
- Improve documentation for GPU clocks
- Remove regs resource (not used in this commit)

Co-developed-by: Frank Binns <frank.binns@imgtec.com>
Signed-off-by: Frank Binns <frank.binns@imgtec.com>
Co-developed-by: Matt Coster <matt.coster@imgtec.com>
Signed-off-by: Matt Coster <matt.coster@imgtec.com>
Signed-off-by: Sarah Walker <sarah.walker@imgtec.com>
Reviewed-by: Maxime Ripard <mripard@kernel.org>
---
 drivers/gpu/drm/imagination/Makefile     |   1 +
 drivers/gpu/drm/imagination/pvr_device.c | 147 ++++++++++++++++++++++
 drivers/gpu/drm/imagination/pvr_device.h | 152 +++++++++++++++++++++++
 drivers/gpu/drm/imagination/pvr_drv.c    |  18 ++-
 4 files changed, 317 insertions(+), 1 deletion(-)
 create mode 100644 drivers/gpu/drm/imagination/pvr_device.c

diff --git a/drivers/gpu/drm/imagination/Makefile b/drivers/gpu/drm/imagination/Makefile
index 19b40c2d7356..b4aa190c9d4a 100644
--- a/drivers/gpu/drm/imagination/Makefile
+++ b/drivers/gpu/drm/imagination/Makefile
@@ -4,6 +4,7 @@
 subdir-ccflags-y := -I$(srctree)/$(src)
 
 powervr-y := \
+	pvr_device.o \
 	pvr_drv.o \
 
 obj-$(CONFIG_DRM_POWERVR) += powervr.o
diff --git a/drivers/gpu/drm/imagination/pvr_device.c b/drivers/gpu/drm/imagination/pvr_device.c
new file mode 100644
index 000000000000..cef3511c0c42
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_device.c
@@ -0,0 +1,147 @@
+// SPDX-License-Identifier: GPL-2.0 OR MIT
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#include "pvr_device.h"
+
+#include <drm/drm_print.h>
+
+#include <linux/clk.h>
+#include <linux/compiler_attributes.h>
+#include <linux/compiler_types.h>
+#include <linux/dma-mapping.h>
+#include <linux/err.h>
+#include <linux/gfp.h>
+#include <linux/platform_device.h>
+#include <linux/slab.h>
+#include <linux/stddef.h>
+#include <linux/types.h>
+
+/**
+ * pvr_device_reg_init() - Initialize kernel access to a PowerVR device's
+ * control registers.
+ * @pvr_dev: Target PowerVR device.
+ *
+ * Sets struct pvr_device->regs.
+ *
+ * This method of mapping the device control registers into memory ensures that
+ * they are unmapped when the driver is detached (i.e. no explicit cleanup is
+ * required).
+ *
+ * Return:
+ *  * 0 on success, or
+ *  * Any error returned by devm_platform_ioremap_resource().
+ */
+static int
+pvr_device_reg_init(struct pvr_device *pvr_dev)
+{
+	struct drm_device *drm_dev = from_pvr_device(pvr_dev);
+	struct platform_device *plat_dev = to_platform_device(drm_dev->dev);
+	void __iomem *regs;
+
+	pvr_dev->regs = NULL;
+
+	regs = devm_platform_ioremap_resource(plat_dev, 0);
+	if (IS_ERR(regs))
+		return dev_err_probe(drm_dev->dev, PTR_ERR(regs),
+				     "failed to ioremap gpu registers\n");
+
+	pvr_dev->regs = regs;
+
+	return 0;
+}
+
+/**
+ * pvr_device_clk_init() - Initialize clocks required by a PowerVR device
+ * @pvr_dev: Target PowerVR device.
+ *
+ * Sets struct pvr_device->core_clk, struct pvr_device->sys_clk and
+ * struct pvr_device->mem_clk.
+ *
+ * Three clocks are required by the PowerVR device: core, sys and mem. On
+ * return, this function guarantees that the clocks are in one of the following
+ * states:
+ *
+ *  * All successfully initialized,
+ *  * Core errored, sys and mem uninitialized,
+ *  * Core deinitialized, sys errored, mem uninitialized, or
+ *  * Core and sys deinitialized, mem errored.
+ *
+ * Return:
+ *  * 0 on success,
+ *  * Any error returned by devm_clk_get(), or
+ *  * Any error returned by devm_clk_get_optional().
+ */
+static int pvr_device_clk_init(struct pvr_device *pvr_dev)
+{
+	struct drm_device *drm_dev = from_pvr_device(pvr_dev);
+	struct clk *core_clk;
+	struct clk *sys_clk;
+	struct clk *mem_clk;
+
+	core_clk = devm_clk_get(drm_dev->dev, "core");
+	if (IS_ERR(core_clk))
+		return dev_err_probe(drm_dev->dev, PTR_ERR(core_clk),
+				     "failed to get core clock\n");
+
+	sys_clk = devm_clk_get_optional(drm_dev->dev, "sys");
+	if (IS_ERR(sys_clk))
+		return dev_err_probe(drm_dev->dev, PTR_ERR(sys_clk),
+				     "failed to get sys clock\n");
+
+	mem_clk = devm_clk_get_optional(drm_dev->dev, "mem");
+	if (IS_ERR(mem_clk))
+		return dev_err_probe(drm_dev->dev, PTR_ERR(mem_clk),
+				     "failed to get mem clock\n");
+
+	pvr_dev->core_clk = core_clk;
+	pvr_dev->sys_clk = sys_clk;
+	pvr_dev->mem_clk = mem_clk;
+
+	return 0;
+}
+
+/**
+ * pvr_device_init() - Initialize a PowerVR device
+ * @pvr_dev: Target PowerVR device.
+ *
+ * If this function returns successfully, the device will have been fully
+ * initialized. Otherwise, any parts of the device initialized before an error
+ * occurs will be de-initialized before returning.
+ *
+ * NOTE: The initialization steps currently taken are the bare minimum required
+ *       to read from the control registers. The device is unlikely to function
+ *       until further initialization steps are added. [This note should be
+ *       removed when that happens.]
+ *
+ * Return:
+ *  * 0 on success,
+ *  * Any error returned by pvr_device_reg_init(),
+ *  * Any error returned by pvr_device_clk_init(), or
+ *  * Any error returned by pvr_device_gpu_init().
+ */
+int
+pvr_device_init(struct pvr_device *pvr_dev)
+{
+	int err;
+
+	/* Enable and initialize clocks required for the device to operate. */
+	err = pvr_device_clk_init(pvr_dev);
+	if (err)
+		return err;
+
+	/* Map the control registers into memory. */
+	return pvr_device_reg_init(pvr_dev);
+}
+
+/**
+ * pvr_device_fini() - Deinitialize a PowerVR device
+ * @pvr_dev: Target PowerVR device.
+ */
+void
+pvr_device_fini(struct pvr_device *pvr_dev)
+{
+	/*
+	 * Deinitialization stages are performed in reverse order compared to
+	 * the initialization stages in pvr_device_init().
+	 */
+}
diff --git a/drivers/gpu/drm/imagination/pvr_device.h b/drivers/gpu/drm/imagination/pvr_device.h
index 38e17448013f..41a00d3d8220 100644
--- a/drivers/gpu/drm/imagination/pvr_device.h
+++ b/drivers/gpu/drm/imagination/pvr_device.h
@@ -11,9 +11,22 @@
 #include <linux/bits.h>
 #include <linux/compiler_attributes.h>
 #include <linux/compiler_types.h>
+#include <linux/io.h>
+#include <linux/iopoll.h>
 #include <linux/kernel.h>
+#include <linux/math.h>
+#include <linux/mutex.h>
+#include <linux/timer.h>
 #include <linux/types.h>
 #include <linux/wait.h>
+#include <linux/workqueue.h>
+#include <linux/xarray.h>
+
+/* Forward declaration from <linux/clk.h>. */
+struct clk;
+
+/* Forward declaration from <linux/firmware.h>. */
+struct firmware;
 
 /**
  * struct pvr_device - powervr-specific wrapper for &struct drm_device
@@ -26,6 +39,37 @@ struct pvr_device {
 	 * from_pvr_device().
 	 */
 	struct drm_device base;
+
+	/**
+	 * @regs: Device control registers.
+	 *
+	 * These are mapped into memory when the device is initialized; that
+	 * location is where this pointer points.
+	 */
+	void __iomem *regs;
+
+	/**
+	 * @core_clk: General core clock.
+	 *
+	 * This is the primary clock used by the entire GPU core.
+	 */
+	struct clk *core_clk;
+
+	/**
+	 * @sys_clk: Optional system bus clock.
+	 *
+	 * This may be used on some platforms to provide an independent clock to the SoC Interface
+	 * (SOCIF). If present, this needs to be enabled/disabled together with @core_clk.
+	 */
+	struct clk *sys_clk;
+
+	/**
+	 * @mem_clk: Optional memory clock.
+	 *
+	 * This may be used on some platforms to provide an independent clock to the Memory
+	 * Interface (MEMIF). If present, this needs to be enabled/disabled together with @core_clk.
+	 */
+	struct clk *mem_clk;
 };
 
 /**
@@ -56,6 +100,114 @@ struct pvr_file {
 
 #define to_pvr_file(file) (file->driver_priv)
 
+int pvr_device_init(struct pvr_device *pvr_dev);
+void pvr_device_fini(struct pvr_device *pvr_dev);
+
+/**
+ * PVR_CR_FIELD_GET() - Extract a single field from a PowerVR control register
+ * @val: Value of the target register.
+ * @field: Field specifier, as defined in "pvr_rogue_cr_defs.h".
+ *
+ * Return: The extracted field.
+ */
+#define PVR_CR_FIELD_GET(val, field) FIELD_GET(~ROGUE_CR_##field##_CLRMSK, val)
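+
+/*
+ * Illustrative sketch: given a raw register value read with pvr_cr_read32()
+ * below, the major revision field of ROGUE_CR_CORE_REVISION (defined later
+ * in this series) could be extracted as:
+ *
+ *   u32 rev = pvr_cr_read32(pvr_dev, ROGUE_CR_CORE_REVISION);
+ *   u32 major = PVR_CR_FIELD_GET(rev, CORE_REVISION_MAJOR);
+ *
+ * The field argument expands to the matching ROGUE_CR_*_CLRMSK definition
+ * from "pvr_rogue_cr_defs.h".
+ */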
+
+/**
+ * pvr_cr_read32() - Read a 32-bit register from a PowerVR device
+ * @pvr_dev: Target PowerVR device.
+ * @reg: Target register.
+ *
+ * Return: The value of the requested register.
+ */
+static __always_inline u32
+pvr_cr_read32(struct pvr_device *pvr_dev, u32 reg)
+{
+	return ioread32(pvr_dev->regs + reg);
+}
+
+/**
+ * pvr_cr_read64() - Read a 64-bit register from a PowerVR device
+ * @pvr_dev: Target PowerVR device.
+ * @reg: Target register.
+ *
+ * Return: The value of the requested register.
+ */
+static __always_inline u64
+pvr_cr_read64(struct pvr_device *pvr_dev, u32 reg)
+{
+	return ioread64(pvr_dev->regs + reg);
+}
+
+/**
+ * pvr_cr_write32() - Write to a 32-bit register in a PowerVR device
+ * @pvr_dev: Target PowerVR device.
+ * @reg: Target register.
+ * @val: Value to write.
+ */
+static __always_inline void
+pvr_cr_write32(struct pvr_device *pvr_dev, u32 reg, u32 val)
+{
+	iowrite32(val, pvr_dev->regs + reg);
+}
+
+/**
+ * pvr_cr_write64() - Write to a 64-bit register in a PowerVR device
+ * @pvr_dev: Target PowerVR device.
+ * @reg: Target register.
+ * @val: Value to write.
+ */
+static __always_inline void
+pvr_cr_write64(struct pvr_device *pvr_dev, u32 reg, u64 val)
+{
+	iowrite64(val, pvr_dev->regs + reg);
+}
+
+/**
+ * pvr_cr_poll_reg32() - Wait for a 32-bit register to match a given value by
+ *                       polling
+ * @pvr_dev: Target PowerVR device.
+ * @reg_addr: Address of register.
+ * @reg_value: Expected register value (after masking).
+ * @reg_mask: Mask of bits valid for comparison with @reg_value.
+ * @timeout_usec: Timeout length, in us.
+ *
+ * Returns:
+ *  * 0 on success, or
+ *  * -%ETIMEDOUT on timeout.
+ */
+static __always_inline int
+pvr_cr_poll_reg32(struct pvr_device *pvr_dev, u32 reg_addr, u32 reg_value,
+		  u32 reg_mask, u64 timeout_usec)
+{
+	u32 value;
+
+	return readl_poll_timeout(pvr_dev->regs + reg_addr, value,
+		(value & reg_mask) == reg_value, 0, timeout_usec);
+}
+
+/**
+ * pvr_cr_poll_reg64() - Wait for a 64-bit register to match a given value by
+ *                       polling
+ * @pvr_dev: Target PowerVR device.
+ * @reg_addr: Address of register.
+ * @reg_value: Expected register value (after masking).
+ * @reg_mask: Mask of bits valid for comparison with @reg_value.
+ * @timeout_usec: Timeout length, in us.
+ *
+ * Returns:
+ *  * 0 on success, or
+ *  * -%ETIMEDOUT on timeout.
+ */
+static __always_inline int
+pvr_cr_poll_reg64(struct pvr_device *pvr_dev, u32 reg_addr, u64 reg_value,
+		  u64 reg_mask, u64 timeout_usec)
+{
+	u64 value;
+
+	return readq_poll_timeout(pvr_dev->regs + reg_addr, value,
+		(value & reg_mask) == reg_value, 0, timeout_usec);
+}
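+
+/*
+ * Illustrative sketch: polling for the SLC clock to report as running, with
+ * a 1000us timeout, using register definitions added later in this series:
+ *
+ *   err = pvr_cr_poll_reg32(pvr_dev, ROGUE_CR_CLK_STATUS,
+ *                           ROGUE_CR_CLK_STATUS_SLC_RUNNING,
+ *                           ~ROGUE_CR_CLK_STATUS_SLC_CLRMSK, 1000);
+ *
+ * This returns 0 once the masked register value matches, or -ETIMEDOUT
+ * otherwise.
+ */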
+
 /**
  * DOC: IOCTL validation helpers
  *
diff --git a/drivers/gpu/drm/imagination/pvr_drv.c b/drivers/gpu/drm/imagination/pvr_drv.c
index 87b5c34e67de..ebab8b477749 100644
--- a/drivers/gpu/drm/imagination/pvr_drv.c
+++ b/drivers/gpu/drm/imagination/pvr_drv.c
@@ -465,6 +465,7 @@ pvr_probe(struct platform_device *plat_dev)
 {
 	struct pvr_device *pvr_dev;
 	struct drm_device *drm_dev;
+	int err;
 
 	pvr_dev = devm_drm_dev_alloc(&plat_dev->dev, &pvr_drm_driver,
 				     struct pvr_device, base);
@@ -475,15 +476,30 @@ pvr_probe(struct platform_device *plat_dev)
 
 	platform_set_drvdata(plat_dev, drm_dev);
 
-	return drm_dev_register(drm_dev, 0);
+	err = pvr_device_init(pvr_dev);
+	if (err)
+		return err;
+
+	err = drm_dev_register(drm_dev, 0);
+	if (err)
+		goto err_device_fini;
+
+	return 0;
+
+err_device_fini:
+	pvr_device_fini(pvr_dev);
+
+	return err;
 }
 
 static int
 pvr_remove(struct platform_device *plat_dev)
 {
 	struct drm_device *drm_dev = platform_get_drvdata(plat_dev);
+	struct pvr_device *pvr_dev = to_pvr_device(drm_dev);
 
 	drm_dev_unplug(drm_dev);
+	pvr_device_fini(pvr_dev);
 
 	return 0;
 }
-- 
2.41.0


^ permalink raw reply related	[flat|nested] 77+ messages in thread

* [PATCH v5 06/17] drm/imagination: Add GPU register and FWIF headers
  2023-08-16  8:25 ` Sarah Walker
@ 2023-08-16  8:25   ` Sarah Walker
  -1 siblings, 0 replies; 77+ messages in thread
From: Sarah Walker @ 2023-08-16  8:25 UTC (permalink / raw)
  To: dri-devel
  Cc: frank.binns, donald.robson, boris.brezillon, faith.ekstrand,
	airlied, daniel, maarten.lankhorst, mripard, tzimmermann, afd,
	hns, matthew.brost, christian.koenig, luben.tuikov, dakr,
	linux-kernel

Changes since v4:
- Add FW header device info

Signed-off-by: Sarah Walker <sarah.walker@imgtec.com>
---
 .../gpu/drm/imagination/pvr_rogue_cr_defs.h   | 6193 +++++++++++++++++
 .../imagination/pvr_rogue_cr_defs_client.h    |  159 +
 drivers/gpu/drm/imagination/pvr_rogue_defs.h  |  179 +
 drivers/gpu/drm/imagination/pvr_rogue_fwif.h  | 2208 ++++++
 .../drm/imagination/pvr_rogue_fwif_check.h    |  491 ++
 .../drm/imagination/pvr_rogue_fwif_client.h   |  371 +
 .../imagination/pvr_rogue_fwif_client_check.h |  133 +
 .../drm/imagination/pvr_rogue_fwif_common.h   |   60 +
 .../drm/imagination/pvr_rogue_fwif_dev_info.h |  112 +
 .../pvr_rogue_fwif_resetframework.h           |   28 +
 .../gpu/drm/imagination/pvr_rogue_fwif_sf.h   | 1648 +++++
 .../drm/imagination/pvr_rogue_fwif_shared.h   |  258 +
 .../imagination/pvr_rogue_fwif_shared_check.h |  108 +
 .../drm/imagination/pvr_rogue_fwif_stream.h   |   78 +
 .../drm/imagination/pvr_rogue_heap_config.h   |  113 +
 drivers/gpu/drm/imagination/pvr_rogue_meta.h  |  356 +
 drivers/gpu/drm/imagination/pvr_rogue_mips.h  |  335 +
 .../drm/imagination/pvr_rogue_mips_check.h    |   58 +
 .../gpu/drm/imagination/pvr_rogue_mmu_defs.h  |  136 +
 19 files changed, 13024 insertions(+)
 create mode 100644 drivers/gpu/drm/imagination/pvr_rogue_cr_defs.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_rogue_cr_defs_client.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_rogue_defs.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_rogue_fwif.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_rogue_fwif_check.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_rogue_fwif_client.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_rogue_fwif_client_check.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_rogue_fwif_common.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_rogue_fwif_dev_info.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_rogue_fwif_resetframework.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_rogue_fwif_sf.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_rogue_fwif_shared.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_rogue_fwif_shared_check.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_rogue_fwif_stream.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_rogue_heap_config.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_rogue_meta.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_rogue_mips.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_rogue_mips_check.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_rogue_mmu_defs.h

diff --git a/drivers/gpu/drm/imagination/pvr_rogue_cr_defs.h b/drivers/gpu/drm/imagination/pvr_rogue_cr_defs.h
new file mode 100644
index 000000000000..23fc197c4f42
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_rogue_cr_defs.h
@@ -0,0 +1,6193 @@
+/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+/*  *** Autogenerated C -- do not edit ***  */
+
+#ifndef PVR_ROGUE_CR_DEFS_H
+#define PVR_ROGUE_CR_DEFS_H
+
+/* clang-format off */
+
+#define ROGUE_CR_DEFS_REVISION 1
+
+/* Register ROGUE_CR_RASTERISATION_INDIRECT */
+#define ROGUE_CR_RASTERISATION_INDIRECT 0x8238U
+#define ROGUE_CR_RASTERISATION_INDIRECT_MASKFULL 0x000000000000000FULL
+#define ROGUE_CR_RASTERISATION_INDIRECT_ADDRESS_SHIFT 0U
+#define ROGUE_CR_RASTERISATION_INDIRECT_ADDRESS_CLRMSK 0xFFFFFFF0U
+
+/* Register ROGUE_CR_PBE_INDIRECT */
+#define ROGUE_CR_PBE_INDIRECT 0x83E0U
+#define ROGUE_CR_PBE_INDIRECT_MASKFULL 0x000000000000000FULL
+#define ROGUE_CR_PBE_INDIRECT_ADDRESS_SHIFT 0U
+#define ROGUE_CR_PBE_INDIRECT_ADDRESS_CLRMSK 0xFFFFFFF0U
+
+/* Register ROGUE_CR_PBE_PERF_INDIRECT */
+#define ROGUE_CR_PBE_PERF_INDIRECT 0x83D8U
+#define ROGUE_CR_PBE_PERF_INDIRECT_MASKFULL 0x000000000000000FULL
+#define ROGUE_CR_PBE_PERF_INDIRECT_ADDRESS_SHIFT 0U
+#define ROGUE_CR_PBE_PERF_INDIRECT_ADDRESS_CLRMSK 0xFFFFFFF0U
+
+/* Register ROGUE_CR_TPU_PERF_INDIRECT */
+#define ROGUE_CR_TPU_PERF_INDIRECT 0x83F0U
+#define ROGUE_CR_TPU_PERF_INDIRECT_MASKFULL 0x0000000000000007ULL
+#define ROGUE_CR_TPU_PERF_INDIRECT_ADDRESS_SHIFT 0U
+#define ROGUE_CR_TPU_PERF_INDIRECT_ADDRESS_CLRMSK 0xFFFFFFF8U
+
+/* Register ROGUE_CR_RASTERISATION_PERF_INDIRECT */
+#define ROGUE_CR_RASTERISATION_PERF_INDIRECT 0x8318U
+#define ROGUE_CR_RASTERISATION_PERF_INDIRECT_MASKFULL 0x000000000000000FULL
+#define ROGUE_CR_RASTERISATION_PERF_INDIRECT_ADDRESS_SHIFT 0U
+#define ROGUE_CR_RASTERISATION_PERF_INDIRECT_ADDRESS_CLRMSK 0xFFFFFFF0U
+
+/* Register ROGUE_CR_TPU_MCU_L0_PERF_INDIRECT */
+#define ROGUE_CR_TPU_MCU_L0_PERF_INDIRECT 0x8028U
+#define ROGUE_CR_TPU_MCU_L0_PERF_INDIRECT_MASKFULL 0x0000000000000007ULL
+#define ROGUE_CR_TPU_MCU_L0_PERF_INDIRECT_ADDRESS_SHIFT 0U
+#define ROGUE_CR_TPU_MCU_L0_PERF_INDIRECT_ADDRESS_CLRMSK 0xFFFFFFF8U
+
+/* Register ROGUE_CR_USC_PERF_INDIRECT */
+#define ROGUE_CR_USC_PERF_INDIRECT 0x8030U
+#define ROGUE_CR_USC_PERF_INDIRECT_MASKFULL 0x000000000000000FULL
+#define ROGUE_CR_USC_PERF_INDIRECT_ADDRESS_SHIFT 0U
+#define ROGUE_CR_USC_PERF_INDIRECT_ADDRESS_CLRMSK 0xFFFFFFF0U
+
+/* Register ROGUE_CR_BLACKPEARL_INDIRECT */
+#define ROGUE_CR_BLACKPEARL_INDIRECT 0x8388U
+#define ROGUE_CR_BLACKPEARL_INDIRECT_MASKFULL 0x0000000000000003ULL
+#define ROGUE_CR_BLACKPEARL_INDIRECT_ADDRESS_SHIFT 0U
+#define ROGUE_CR_BLACKPEARL_INDIRECT_ADDRESS_CLRMSK 0xFFFFFFFCU
+
+/* Register ROGUE_CR_BLACKPEARL_PERF_INDIRECT */
+#define ROGUE_CR_BLACKPEARL_PERF_INDIRECT 0x83F8U
+#define ROGUE_CR_BLACKPEARL_PERF_INDIRECT_MASKFULL 0x0000000000000003ULL
+#define ROGUE_CR_BLACKPEARL_PERF_INDIRECT_ADDRESS_SHIFT 0U
+#define ROGUE_CR_BLACKPEARL_PERF_INDIRECT_ADDRESS_CLRMSK 0xFFFFFFFCU
+
+/* Register ROGUE_CR_TEXAS3_PERF_INDIRECT */
+#define ROGUE_CR_TEXAS3_PERF_INDIRECT 0x83D0U
+#define ROGUE_CR_TEXAS3_PERF_INDIRECT_MASKFULL 0x0000000000000007ULL
+#define ROGUE_CR_TEXAS3_PERF_INDIRECT_ADDRESS_SHIFT 0U
+#define ROGUE_CR_TEXAS3_PERF_INDIRECT_ADDRESS_CLRMSK 0xFFFFFFF8U
+
+/* Register ROGUE_CR_TEXAS_PERF_INDIRECT */
+#define ROGUE_CR_TEXAS_PERF_INDIRECT 0x8288U
+#define ROGUE_CR_TEXAS_PERF_INDIRECT_MASKFULL 0x0000000000000003ULL
+#define ROGUE_CR_TEXAS_PERF_INDIRECT_ADDRESS_SHIFT 0U
+#define ROGUE_CR_TEXAS_PERF_INDIRECT_ADDRESS_CLRMSK 0xFFFFFFFCU
+
+/* Register ROGUE_CR_BX_TU_PERF_INDIRECT */
+#define ROGUE_CR_BX_TU_PERF_INDIRECT 0xC900U
+#define ROGUE_CR_BX_TU_PERF_INDIRECT_MASKFULL 0x0000000000000003ULL
+#define ROGUE_CR_BX_TU_PERF_INDIRECT_ADDRESS_SHIFT 0U
+#define ROGUE_CR_BX_TU_PERF_INDIRECT_ADDRESS_CLRMSK 0xFFFFFFFCU
+
+/* Register ROGUE_CR_CLK_CTRL */
+#define ROGUE_CR_CLK_CTRL 0x0000U
+#define ROGUE_CR_CLK_CTRL__PBE2_XE__MASKFULL 0xFFFFFF003F3FFFFFULL
+#define ROGUE_CR_CLK_CTRL__S7_TOP__MASKFULL 0xCFCF03000F3F3F0FULL
+#define ROGUE_CR_CLK_CTRL_MASKFULL 0xFFFFFF003F3FFFFFULL
+#define ROGUE_CR_CLK_CTRL_BIF_TEXAS_SHIFT 62U
+#define ROGUE_CR_CLK_CTRL_BIF_TEXAS_CLRMSK 0x3FFFFFFFFFFFFFFFULL
+#define ROGUE_CR_CLK_CTRL_BIF_TEXAS_OFF 0x0000000000000000ULL
+#define ROGUE_CR_CLK_CTRL_BIF_TEXAS_ON 0x4000000000000000ULL
+#define ROGUE_CR_CLK_CTRL_BIF_TEXAS_AUTO 0x8000000000000000ULL
+#define ROGUE_CR_CLK_CTRL_IPP_SHIFT 60U
+#define ROGUE_CR_CLK_CTRL_IPP_CLRMSK 0xCFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_CLK_CTRL_IPP_OFF 0x0000000000000000ULL
+#define ROGUE_CR_CLK_CTRL_IPP_ON 0x1000000000000000ULL
+#define ROGUE_CR_CLK_CTRL_IPP_AUTO 0x2000000000000000ULL
+#define ROGUE_CR_CLK_CTRL_FBC_SHIFT 58U
+#define ROGUE_CR_CLK_CTRL_FBC_CLRMSK 0xF3FFFFFFFFFFFFFFULL
+#define ROGUE_CR_CLK_CTRL_FBC_OFF 0x0000000000000000ULL
+#define ROGUE_CR_CLK_CTRL_FBC_ON 0x0400000000000000ULL
+#define ROGUE_CR_CLK_CTRL_FBC_AUTO 0x0800000000000000ULL
+#define ROGUE_CR_CLK_CTRL_FBDC_SHIFT 56U
+#define ROGUE_CR_CLK_CTRL_FBDC_CLRMSK 0xFCFFFFFFFFFFFFFFULL
+#define ROGUE_CR_CLK_CTRL_FBDC_OFF 0x0000000000000000ULL
+#define ROGUE_CR_CLK_CTRL_FBDC_ON 0x0100000000000000ULL
+#define ROGUE_CR_CLK_CTRL_FBDC_AUTO 0x0200000000000000ULL
+#define ROGUE_CR_CLK_CTRL_FB_TLCACHE_SHIFT 54U
+#define ROGUE_CR_CLK_CTRL_FB_TLCACHE_CLRMSK 0xFF3FFFFFFFFFFFFFULL
+#define ROGUE_CR_CLK_CTRL_FB_TLCACHE_OFF 0x0000000000000000ULL
+#define ROGUE_CR_CLK_CTRL_FB_TLCACHE_ON 0x0040000000000000ULL
+#define ROGUE_CR_CLK_CTRL_FB_TLCACHE_AUTO 0x0080000000000000ULL
+#define ROGUE_CR_CLK_CTRL_USCS_SHIFT 52U
+#define ROGUE_CR_CLK_CTRL_USCS_CLRMSK 0xFFCFFFFFFFFFFFFFULL
+#define ROGUE_CR_CLK_CTRL_USCS_OFF 0x0000000000000000ULL
+#define ROGUE_CR_CLK_CTRL_USCS_ON 0x0010000000000000ULL
+#define ROGUE_CR_CLK_CTRL_USCS_AUTO 0x0020000000000000ULL
+#define ROGUE_CR_CLK_CTRL_PBE_SHIFT 50U
+#define ROGUE_CR_CLK_CTRL_PBE_CLRMSK 0xFFF3FFFFFFFFFFFFULL
+#define ROGUE_CR_CLK_CTRL_PBE_OFF 0x0000000000000000ULL
+#define ROGUE_CR_CLK_CTRL_PBE_ON 0x0004000000000000ULL
+#define ROGUE_CR_CLK_CTRL_PBE_AUTO 0x0008000000000000ULL
+#define ROGUE_CR_CLK_CTRL_MCU_L1_SHIFT 48U
+#define ROGUE_CR_CLK_CTRL_MCU_L1_CLRMSK 0xFFFCFFFFFFFFFFFFULL
+#define ROGUE_CR_CLK_CTRL_MCU_L1_OFF 0x0000000000000000ULL
+#define ROGUE_CR_CLK_CTRL_MCU_L1_ON 0x0001000000000000ULL
+#define ROGUE_CR_CLK_CTRL_MCU_L1_AUTO 0x0002000000000000ULL
+#define ROGUE_CR_CLK_CTRL_CDM_SHIFT 46U
+#define ROGUE_CR_CLK_CTRL_CDM_CLRMSK 0xFFFF3FFFFFFFFFFFULL
+#define ROGUE_CR_CLK_CTRL_CDM_OFF 0x0000000000000000ULL
+#define ROGUE_CR_CLK_CTRL_CDM_ON 0x0000400000000000ULL
+#define ROGUE_CR_CLK_CTRL_CDM_AUTO 0x0000800000000000ULL
+#define ROGUE_CR_CLK_CTRL_SIDEKICK_SHIFT 44U
+#define ROGUE_CR_CLK_CTRL_SIDEKICK_CLRMSK 0xFFFFCFFFFFFFFFFFULL
+#define ROGUE_CR_CLK_CTRL_SIDEKICK_OFF 0x0000000000000000ULL
+#define ROGUE_CR_CLK_CTRL_SIDEKICK_ON 0x0000100000000000ULL
+#define ROGUE_CR_CLK_CTRL_SIDEKICK_AUTO 0x0000200000000000ULL
+#define ROGUE_CR_CLK_CTRL_BIF_SIDEKICK_SHIFT 42U
+#define ROGUE_CR_CLK_CTRL_BIF_SIDEKICK_CLRMSK 0xFFFFF3FFFFFFFFFFULL
+#define ROGUE_CR_CLK_CTRL_BIF_SIDEKICK_OFF 0x0000000000000000ULL
+#define ROGUE_CR_CLK_CTRL_BIF_SIDEKICK_ON 0x0000040000000000ULL
+#define ROGUE_CR_CLK_CTRL_BIF_SIDEKICK_AUTO 0x0000080000000000ULL
+#define ROGUE_CR_CLK_CTRL_BIF_SHIFT 40U
+#define ROGUE_CR_CLK_CTRL_BIF_CLRMSK 0xFFFFFCFFFFFFFFFFULL
+#define ROGUE_CR_CLK_CTRL_BIF_OFF 0x0000000000000000ULL
+#define ROGUE_CR_CLK_CTRL_BIF_ON 0x0000010000000000ULL
+#define ROGUE_CR_CLK_CTRL_BIF_AUTO 0x0000020000000000ULL
+#define ROGUE_CR_CLK_CTRL_TPU_MCU_DEMUX_SHIFT 28U
+#define ROGUE_CR_CLK_CTRL_TPU_MCU_DEMUX_CLRMSK 0xFFFFFFFFCFFFFFFFULL
+#define ROGUE_CR_CLK_CTRL_TPU_MCU_DEMUX_OFF 0x0000000000000000ULL
+#define ROGUE_CR_CLK_CTRL_TPU_MCU_DEMUX_ON 0x0000000010000000ULL
+#define ROGUE_CR_CLK_CTRL_TPU_MCU_DEMUX_AUTO 0x0000000020000000ULL
+#define ROGUE_CR_CLK_CTRL_MCU_L0_SHIFT 26U
+#define ROGUE_CR_CLK_CTRL_MCU_L0_CLRMSK 0xFFFFFFFFF3FFFFFFULL
+#define ROGUE_CR_CLK_CTRL_MCU_L0_OFF 0x0000000000000000ULL
+#define ROGUE_CR_CLK_CTRL_MCU_L0_ON 0x0000000004000000ULL
+#define ROGUE_CR_CLK_CTRL_MCU_L0_AUTO 0x0000000008000000ULL
+#define ROGUE_CR_CLK_CTRL_TPU_SHIFT 24U
+#define ROGUE_CR_CLK_CTRL_TPU_CLRMSK 0xFFFFFFFFFCFFFFFFULL
+#define ROGUE_CR_CLK_CTRL_TPU_OFF 0x0000000000000000ULL
+#define ROGUE_CR_CLK_CTRL_TPU_ON 0x0000000001000000ULL
+#define ROGUE_CR_CLK_CTRL_TPU_AUTO 0x0000000002000000ULL
+#define ROGUE_CR_CLK_CTRL_USC_SHIFT 20U
+#define ROGUE_CR_CLK_CTRL_USC_CLRMSK 0xFFFFFFFFFFCFFFFFULL
+#define ROGUE_CR_CLK_CTRL_USC_OFF 0x0000000000000000ULL
+#define ROGUE_CR_CLK_CTRL_USC_ON 0x0000000000100000ULL
+#define ROGUE_CR_CLK_CTRL_USC_AUTO 0x0000000000200000ULL
+#define ROGUE_CR_CLK_CTRL_TLA_SHIFT 18U
+#define ROGUE_CR_CLK_CTRL_TLA_CLRMSK 0xFFFFFFFFFFF3FFFFULL
+#define ROGUE_CR_CLK_CTRL_TLA_OFF 0x0000000000000000ULL
+#define ROGUE_CR_CLK_CTRL_TLA_ON 0x0000000000040000ULL
+#define ROGUE_CR_CLK_CTRL_TLA_AUTO 0x0000000000080000ULL
+#define ROGUE_CR_CLK_CTRL_SLC_SHIFT 16U
+#define ROGUE_CR_CLK_CTRL_SLC_CLRMSK 0xFFFFFFFFFFFCFFFFULL
+#define ROGUE_CR_CLK_CTRL_SLC_OFF 0x0000000000000000ULL
+#define ROGUE_CR_CLK_CTRL_SLC_ON 0x0000000000010000ULL
+#define ROGUE_CR_CLK_CTRL_SLC_AUTO 0x0000000000020000ULL
+#define ROGUE_CR_CLK_CTRL_UVS_SHIFT 14U
+#define ROGUE_CR_CLK_CTRL_UVS_CLRMSK 0xFFFFFFFFFFFF3FFFULL
+#define ROGUE_CR_CLK_CTRL_UVS_OFF 0x0000000000000000ULL
+#define ROGUE_CR_CLK_CTRL_UVS_ON 0x0000000000004000ULL
+#define ROGUE_CR_CLK_CTRL_UVS_AUTO 0x0000000000008000ULL
+#define ROGUE_CR_CLK_CTRL_PDS_SHIFT 12U
+#define ROGUE_CR_CLK_CTRL_PDS_CLRMSK 0xFFFFFFFFFFFFCFFFULL
+#define ROGUE_CR_CLK_CTRL_PDS_OFF 0x0000000000000000ULL
+#define ROGUE_CR_CLK_CTRL_PDS_ON 0x0000000000001000ULL
+#define ROGUE_CR_CLK_CTRL_PDS_AUTO 0x0000000000002000ULL
+#define ROGUE_CR_CLK_CTRL_VDM_SHIFT 10U
+#define ROGUE_CR_CLK_CTRL_VDM_CLRMSK 0xFFFFFFFFFFFFF3FFULL
+#define ROGUE_CR_CLK_CTRL_VDM_OFF 0x0000000000000000ULL
+#define ROGUE_CR_CLK_CTRL_VDM_ON 0x0000000000000400ULL
+#define ROGUE_CR_CLK_CTRL_VDM_AUTO 0x0000000000000800ULL
+#define ROGUE_CR_CLK_CTRL_PM_SHIFT 8U
+#define ROGUE_CR_CLK_CTRL_PM_CLRMSK 0xFFFFFFFFFFFFFCFFULL
+#define ROGUE_CR_CLK_CTRL_PM_OFF 0x0000000000000000ULL
+#define ROGUE_CR_CLK_CTRL_PM_ON 0x0000000000000100ULL
+#define ROGUE_CR_CLK_CTRL_PM_AUTO 0x0000000000000200ULL
+#define ROGUE_CR_CLK_CTRL_GPP_SHIFT 6U
+#define ROGUE_CR_CLK_CTRL_GPP_CLRMSK 0xFFFFFFFFFFFFFF3FULL
+#define ROGUE_CR_CLK_CTRL_GPP_OFF 0x0000000000000000ULL
+#define ROGUE_CR_CLK_CTRL_GPP_ON 0x0000000000000040ULL
+#define ROGUE_CR_CLK_CTRL_GPP_AUTO 0x0000000000000080ULL
+#define ROGUE_CR_CLK_CTRL_TE_SHIFT 4U
+#define ROGUE_CR_CLK_CTRL_TE_CLRMSK 0xFFFFFFFFFFFFFFCFULL
+#define ROGUE_CR_CLK_CTRL_TE_OFF 0x0000000000000000ULL
+#define ROGUE_CR_CLK_CTRL_TE_ON 0x0000000000000010ULL
+#define ROGUE_CR_CLK_CTRL_TE_AUTO 0x0000000000000020ULL
+#define ROGUE_CR_CLK_CTRL_TSP_SHIFT 2U
+#define ROGUE_CR_CLK_CTRL_TSP_CLRMSK 0xFFFFFFFFFFFFFFF3ULL
+#define ROGUE_CR_CLK_CTRL_TSP_OFF 0x0000000000000000ULL
+#define ROGUE_CR_CLK_CTRL_TSP_ON 0x0000000000000004ULL
+#define ROGUE_CR_CLK_CTRL_TSP_AUTO 0x0000000000000008ULL
+#define ROGUE_CR_CLK_CTRL_ISP_SHIFT 0U
+#define ROGUE_CR_CLK_CTRL_ISP_CLRMSK 0xFFFFFFFFFFFFFFFCULL
+#define ROGUE_CR_CLK_CTRL_ISP_OFF 0x0000000000000000ULL
+#define ROGUE_CR_CLK_CTRL_ISP_ON 0x0000000000000001ULL
+#define ROGUE_CR_CLK_CTRL_ISP_AUTO 0x0000000000000002ULL
+
+/* Register ROGUE_CR_CLK_STATUS */
+#define ROGUE_CR_CLK_STATUS 0x0008U
+#define ROGUE_CR_CLK_STATUS__PBE2_XE__MASKFULL 0x00000001FFF077FFULL
+#define ROGUE_CR_CLK_STATUS__S7_TOP__MASKFULL 0x00000001B3101773ULL
+#define ROGUE_CR_CLK_STATUS_MASKFULL 0x00000001FFF077FFULL
+#define ROGUE_CR_CLK_STATUS_MCU_FBTC_SHIFT 32U
+#define ROGUE_CR_CLK_STATUS_MCU_FBTC_CLRMSK 0xFFFFFFFEFFFFFFFFULL
+#define ROGUE_CR_CLK_STATUS_MCU_FBTC_GATED 0x0000000000000000ULL
+#define ROGUE_CR_CLK_STATUS_MCU_FBTC_RUNNING 0x0000000100000000ULL
+#define ROGUE_CR_CLK_STATUS_BIF_TEXAS_SHIFT 31U
+#define ROGUE_CR_CLK_STATUS_BIF_TEXAS_CLRMSK 0xFFFFFFFF7FFFFFFFULL
+#define ROGUE_CR_CLK_STATUS_BIF_TEXAS_GATED 0x0000000000000000ULL
+#define ROGUE_CR_CLK_STATUS_BIF_TEXAS_RUNNING 0x0000000080000000ULL
+#define ROGUE_CR_CLK_STATUS_IPP_SHIFT 30U
+#define ROGUE_CR_CLK_STATUS_IPP_CLRMSK 0xFFFFFFFFBFFFFFFFULL
+#define ROGUE_CR_CLK_STATUS_IPP_GATED 0x0000000000000000ULL
+#define ROGUE_CR_CLK_STATUS_IPP_RUNNING 0x0000000040000000ULL
+#define ROGUE_CR_CLK_STATUS_FBC_SHIFT 29U
+#define ROGUE_CR_CLK_STATUS_FBC_CLRMSK 0xFFFFFFFFDFFFFFFFULL
+#define ROGUE_CR_CLK_STATUS_FBC_GATED 0x0000000000000000ULL
+#define ROGUE_CR_CLK_STATUS_FBC_RUNNING 0x0000000020000000ULL
+#define ROGUE_CR_CLK_STATUS_FBDC_SHIFT 28U
+#define ROGUE_CR_CLK_STATUS_FBDC_CLRMSK 0xFFFFFFFFEFFFFFFFULL
+#define ROGUE_CR_CLK_STATUS_FBDC_GATED 0x0000000000000000ULL
+#define ROGUE_CR_CLK_STATUS_FBDC_RUNNING 0x0000000010000000ULL
+#define ROGUE_CR_CLK_STATUS_FB_TLCACHE_SHIFT 27U
+#define ROGUE_CR_CLK_STATUS_FB_TLCACHE_CLRMSK 0xFFFFFFFFF7FFFFFFULL
+#define ROGUE_CR_CLK_STATUS_FB_TLCACHE_GATED 0x0000000000000000ULL
+#define ROGUE_CR_CLK_STATUS_FB_TLCACHE_RUNNING 0x0000000008000000ULL
+#define ROGUE_CR_CLK_STATUS_USCS_SHIFT 26U
+#define ROGUE_CR_CLK_STATUS_USCS_CLRMSK 0xFFFFFFFFFBFFFFFFULL
+#define ROGUE_CR_CLK_STATUS_USCS_GATED 0x0000000000000000ULL
+#define ROGUE_CR_CLK_STATUS_USCS_RUNNING 0x0000000004000000ULL
+#define ROGUE_CR_CLK_STATUS_PBE_SHIFT 25U
+#define ROGUE_CR_CLK_STATUS_PBE_CLRMSK 0xFFFFFFFFFDFFFFFFULL
+#define ROGUE_CR_CLK_STATUS_PBE_GATED 0x0000000000000000ULL
+#define ROGUE_CR_CLK_STATUS_PBE_RUNNING 0x0000000002000000ULL
+#define ROGUE_CR_CLK_STATUS_MCU_L1_SHIFT 24U
+#define ROGUE_CR_CLK_STATUS_MCU_L1_CLRMSK 0xFFFFFFFFFEFFFFFFULL
+#define ROGUE_CR_CLK_STATUS_MCU_L1_GATED 0x0000000000000000ULL
+#define ROGUE_CR_CLK_STATUS_MCU_L1_RUNNING 0x0000000001000000ULL
+#define ROGUE_CR_CLK_STATUS_CDM_SHIFT 23U
+#define ROGUE_CR_CLK_STATUS_CDM_CLRMSK 0xFFFFFFFFFF7FFFFFULL
+#define ROGUE_CR_CLK_STATUS_CDM_GATED 0x0000000000000000ULL
+#define ROGUE_CR_CLK_STATUS_CDM_RUNNING 0x0000000000800000ULL
+#define ROGUE_CR_CLK_STATUS_SIDEKICK_SHIFT 22U
+#define ROGUE_CR_CLK_STATUS_SIDEKICK_CLRMSK 0xFFFFFFFFFFBFFFFFULL
+#define ROGUE_CR_CLK_STATUS_SIDEKICK_GATED 0x0000000000000000ULL
+#define ROGUE_CR_CLK_STATUS_SIDEKICK_RUNNING 0x0000000000400000ULL
+#define ROGUE_CR_CLK_STATUS_BIF_SIDEKICK_SHIFT 21U
+#define ROGUE_CR_CLK_STATUS_BIF_SIDEKICK_CLRMSK 0xFFFFFFFFFFDFFFFFULL
+#define ROGUE_CR_CLK_STATUS_BIF_SIDEKICK_GATED 0x0000000000000000ULL
+#define ROGUE_CR_CLK_STATUS_BIF_SIDEKICK_RUNNING 0x0000000000200000ULL
+#define ROGUE_CR_CLK_STATUS_BIF_SHIFT 20U
+#define ROGUE_CR_CLK_STATUS_BIF_CLRMSK 0xFFFFFFFFFFEFFFFFULL
+#define ROGUE_CR_CLK_STATUS_BIF_GATED 0x0000000000000000ULL
+#define ROGUE_CR_CLK_STATUS_BIF_RUNNING 0x0000000000100000ULL
+#define ROGUE_CR_CLK_STATUS_TPU_MCU_DEMUX_SHIFT 14U
+#define ROGUE_CR_CLK_STATUS_TPU_MCU_DEMUX_CLRMSK 0xFFFFFFFFFFFFBFFFULL
+#define ROGUE_CR_CLK_STATUS_TPU_MCU_DEMUX_GATED 0x0000000000000000ULL
+#define ROGUE_CR_CLK_STATUS_TPU_MCU_DEMUX_RUNNING 0x0000000000004000ULL
+#define ROGUE_CR_CLK_STATUS_MCU_L0_SHIFT 13U
+#define ROGUE_CR_CLK_STATUS_MCU_L0_CLRMSK 0xFFFFFFFFFFFFDFFFULL
+#define ROGUE_CR_CLK_STATUS_MCU_L0_GATED 0x0000000000000000ULL
+#define ROGUE_CR_CLK_STATUS_MCU_L0_RUNNING 0x0000000000002000ULL
+#define ROGUE_CR_CLK_STATUS_TPU_SHIFT 12U
+#define ROGUE_CR_CLK_STATUS_TPU_CLRMSK 0xFFFFFFFFFFFFEFFFULL
+#define ROGUE_CR_CLK_STATUS_TPU_GATED 0x0000000000000000ULL
+#define ROGUE_CR_CLK_STATUS_TPU_RUNNING 0x0000000000001000ULL
+#define ROGUE_CR_CLK_STATUS_USC_SHIFT 10U
+#define ROGUE_CR_CLK_STATUS_USC_CLRMSK 0xFFFFFFFFFFFFFBFFULL
+#define ROGUE_CR_CLK_STATUS_USC_GATED 0x0000000000000000ULL
+#define ROGUE_CR_CLK_STATUS_USC_RUNNING 0x0000000000000400ULL
+#define ROGUE_CR_CLK_STATUS_TLA_SHIFT 9U
+#define ROGUE_CR_CLK_STATUS_TLA_CLRMSK 0xFFFFFFFFFFFFFDFFULL
+#define ROGUE_CR_CLK_STATUS_TLA_GATED 0x0000000000000000ULL
+#define ROGUE_CR_CLK_STATUS_TLA_RUNNING 0x0000000000000200ULL
+#define ROGUE_CR_CLK_STATUS_SLC_SHIFT 8U
+#define ROGUE_CR_CLK_STATUS_SLC_CLRMSK 0xFFFFFFFFFFFFFEFFULL
+#define ROGUE_CR_CLK_STATUS_SLC_GATED 0x0000000000000000ULL
+#define ROGUE_CR_CLK_STATUS_SLC_RUNNING 0x0000000000000100ULL
+#define ROGUE_CR_CLK_STATUS_UVS_SHIFT 7U
+#define ROGUE_CR_CLK_STATUS_UVS_CLRMSK 0xFFFFFFFFFFFFFF7FULL
+#define ROGUE_CR_CLK_STATUS_UVS_GATED 0x0000000000000000ULL
+#define ROGUE_CR_CLK_STATUS_UVS_RUNNING 0x0000000000000080ULL
+#define ROGUE_CR_CLK_STATUS_PDS_SHIFT 6U
+#define ROGUE_CR_CLK_STATUS_PDS_CLRMSK 0xFFFFFFFFFFFFFFBFULL
+#define ROGUE_CR_CLK_STATUS_PDS_GATED 0x0000000000000000ULL
+#define ROGUE_CR_CLK_STATUS_PDS_RUNNING 0x0000000000000040ULL
+#define ROGUE_CR_CLK_STATUS_VDM_SHIFT 5U
+#define ROGUE_CR_CLK_STATUS_VDM_CLRMSK 0xFFFFFFFFFFFFFFDFULL
+#define ROGUE_CR_CLK_STATUS_VDM_GATED 0x0000000000000000ULL
+#define ROGUE_CR_CLK_STATUS_VDM_RUNNING 0x0000000000000020ULL
+#define ROGUE_CR_CLK_STATUS_PM_SHIFT 4U
+#define ROGUE_CR_CLK_STATUS_PM_CLRMSK 0xFFFFFFFFFFFFFFEFULL
+#define ROGUE_CR_CLK_STATUS_PM_GATED 0x0000000000000000ULL
+#define ROGUE_CR_CLK_STATUS_PM_RUNNING 0x0000000000000010ULL
+#define ROGUE_CR_CLK_STATUS_GPP_SHIFT 3U
+#define ROGUE_CR_CLK_STATUS_GPP_CLRMSK 0xFFFFFFFFFFFFFFF7ULL
+#define ROGUE_CR_CLK_STATUS_GPP_GATED 0x0000000000000000ULL
+#define ROGUE_CR_CLK_STATUS_GPP_RUNNING 0x0000000000000008ULL
+#define ROGUE_CR_CLK_STATUS_TE_SHIFT 2U
+#define ROGUE_CR_CLK_STATUS_TE_CLRMSK 0xFFFFFFFFFFFFFFFBULL
+#define ROGUE_CR_CLK_STATUS_TE_GATED 0x0000000000000000ULL
+#define ROGUE_CR_CLK_STATUS_TE_RUNNING 0x0000000000000004ULL
+#define ROGUE_CR_CLK_STATUS_TSP_SHIFT 1U
+#define ROGUE_CR_CLK_STATUS_TSP_CLRMSK 0xFFFFFFFFFFFFFFFDULL
+#define ROGUE_CR_CLK_STATUS_TSP_GATED 0x0000000000000000ULL
+#define ROGUE_CR_CLK_STATUS_TSP_RUNNING 0x0000000000000002ULL
+#define ROGUE_CR_CLK_STATUS_ISP_SHIFT 0U
+#define ROGUE_CR_CLK_STATUS_ISP_CLRMSK 0xFFFFFFFFFFFFFFFEULL
+#define ROGUE_CR_CLK_STATUS_ISP_GATED 0x0000000000000000ULL
+#define ROGUE_CR_CLK_STATUS_ISP_RUNNING 0x0000000000000001ULL
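+/*
+ * Illustrative usage note (editorial; variable names are hypothetical): for
+ * every field, _CLRMSK is the register value with that field's bits cleared,
+ * so ~_CLRMSK selects the field. A sketch of checking whether the ISP clock
+ * is running, with clk_status holding a read of ROGUE_CR_CLK_STATUS:
+ *
+ *   u64 clk_status = ...; // value read from the register
+ *   bool isp_running = (clk_status & ~ROGUE_CR_CLK_STATUS_ISP_CLRMSK) ==
+ *                      ROGUE_CR_CLK_STATUS_ISP_RUNNING;
+ */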
+
+/* Register ROGUE_CR_CORE_ID (PBVNC layout) */
+#define ROGUE_CR_CORE_ID__PBVNC 0x0020U
+#define ROGUE_CR_CORE_ID__PBVNC__MASKFULL 0xFFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_CORE_ID__PBVNC__BRANCH_ID_SHIFT 48U
+#define ROGUE_CR_CORE_ID__PBVNC__BRANCH_ID_CLRMSK 0x0000FFFFFFFFFFFFULL
+#define ROGUE_CR_CORE_ID__PBVNC__VERSION_ID_SHIFT 32U
+#define ROGUE_CR_CORE_ID__PBVNC__VERSION_ID_CLRMSK 0xFFFF0000FFFFFFFFULL
+#define ROGUE_CR_CORE_ID__PBVNC__NUMBER_OF_SCALABLE_UNITS_SHIFT 16U
+#define ROGUE_CR_CORE_ID__PBVNC__NUMBER_OF_SCALABLE_UNITS_CLRMSK 0xFFFFFFFF0000FFFFULL
+#define ROGUE_CR_CORE_ID__PBVNC__CONFIG_ID_SHIFT 0U
+#define ROGUE_CR_CORE_ID__PBVNC__CONFIG_ID_CLRMSK 0xFFFFFFFFFFFF0000ULL
+
+/* Register ROGUE_CR_CORE_ID */
+#define ROGUE_CR_CORE_ID 0x0018U
+#define ROGUE_CR_CORE_ID_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_CORE_ID_ID_SHIFT 16U
+#define ROGUE_CR_CORE_ID_ID_CLRMSK 0x0000FFFFU
+#define ROGUE_CR_CORE_ID_CONFIG_SHIFT 0U
+#define ROGUE_CR_CORE_ID_CONFIG_CLRMSK 0xFFFF0000U
+
+/* Register ROGUE_CR_CORE_REVISION */
+#define ROGUE_CR_CORE_REVISION 0x0020U
+#define ROGUE_CR_CORE_REVISION_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_CORE_REVISION_DESIGNER_SHIFT 24U
+#define ROGUE_CR_CORE_REVISION_DESIGNER_CLRMSK 0x00FFFFFFU
+#define ROGUE_CR_CORE_REVISION_MAJOR_SHIFT 16U
+#define ROGUE_CR_CORE_REVISION_MAJOR_CLRMSK 0xFF00FFFFU
+#define ROGUE_CR_CORE_REVISION_MINOR_SHIFT 8U
+#define ROGUE_CR_CORE_REVISION_MINOR_CLRMSK 0xFFFF00FFU
+#define ROGUE_CR_CORE_REVISION_MAINTENANCE_SHIFT 0U
+#define ROGUE_CR_CORE_REVISION_MAINTENANCE_CLRMSK 0xFFFFFF00U
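+/*
+ * Illustrative usage note (editorial; rev is a hypothetical variable): a
+ * multi-bit field is extracted by masking with ~_CLRMSK and shifting down by
+ * _SHIFT, e.g. the major revision from a ROGUE_CR_CORE_REVISION read:
+ *
+ *   u32 major = (rev & ~ROGUE_CR_CORE_REVISION_MAJOR_CLRMSK) >>
+ *               ROGUE_CR_CORE_REVISION_MAJOR_SHIFT;
+ */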
+
+/* Register ROGUE_CR_DESIGNER_REV_FIELD1 */
+#define ROGUE_CR_DESIGNER_REV_FIELD1 0x0028U
+#define ROGUE_CR_DESIGNER_REV_FIELD1_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_DESIGNER_REV_FIELD1_DESIGNER_REV_FIELD1_SHIFT 0U
+#define ROGUE_CR_DESIGNER_REV_FIELD1_DESIGNER_REV_FIELD1_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_DESIGNER_REV_FIELD2 */
+#define ROGUE_CR_DESIGNER_REV_FIELD2 0x0030U
+#define ROGUE_CR_DESIGNER_REV_FIELD2_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_DESIGNER_REV_FIELD2_DESIGNER_REV_FIELD2_SHIFT 0U
+#define ROGUE_CR_DESIGNER_REV_FIELD2_DESIGNER_REV_FIELD2_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_CHANGESET_NUMBER */
+#define ROGUE_CR_CHANGESET_NUMBER 0x0040U
+#define ROGUE_CR_CHANGESET_NUMBER_MASKFULL 0xFFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_CHANGESET_NUMBER_CHANGESET_NUMBER_SHIFT 0U
+#define ROGUE_CR_CHANGESET_NUMBER_CHANGESET_NUMBER_CLRMSK 0x0000000000000000ULL
+
+/* Register ROGUE_CR_CLK_XTPLUS_CTRL */
+#define ROGUE_CR_CLK_XTPLUS_CTRL 0x0080U
+#define ROGUE_CR_CLK_XTPLUS_CTRL_MASKFULL 0x0000003FFFFF0000ULL
+#define ROGUE_CR_CLK_XTPLUS_CTRL_TDM_SHIFT 36U
+#define ROGUE_CR_CLK_XTPLUS_CTRL_TDM_CLRMSK 0xFFFFFFCFFFFFFFFFULL
+#define ROGUE_CR_CLK_XTPLUS_CTRL_TDM_OFF 0x0000000000000000ULL
+#define ROGUE_CR_CLK_XTPLUS_CTRL_TDM_ON 0x0000001000000000ULL
+#define ROGUE_CR_CLK_XTPLUS_CTRL_TDM_AUTO 0x0000002000000000ULL
+#define ROGUE_CR_CLK_XTPLUS_CTRL_ASTC_SHIFT 34U
+#define ROGUE_CR_CLK_XTPLUS_CTRL_ASTC_CLRMSK 0xFFFFFFF3FFFFFFFFULL
+#define ROGUE_CR_CLK_XTPLUS_CTRL_ASTC_OFF 0x0000000000000000ULL
+#define ROGUE_CR_CLK_XTPLUS_CTRL_ASTC_ON 0x0000000400000000ULL
+#define ROGUE_CR_CLK_XTPLUS_CTRL_ASTC_AUTO 0x0000000800000000ULL
+#define ROGUE_CR_CLK_XTPLUS_CTRL_IPF_SHIFT 32U
+#define ROGUE_CR_CLK_XTPLUS_CTRL_IPF_CLRMSK 0xFFFFFFFCFFFFFFFFULL
+#define ROGUE_CR_CLK_XTPLUS_CTRL_IPF_OFF 0x0000000000000000ULL
+#define ROGUE_CR_CLK_XTPLUS_CTRL_IPF_ON 0x0000000100000000ULL
+#define ROGUE_CR_CLK_XTPLUS_CTRL_IPF_AUTO 0x0000000200000000ULL
+#define ROGUE_CR_CLK_XTPLUS_CTRL_COMPUTE_SHIFT 30U
+#define ROGUE_CR_CLK_XTPLUS_CTRL_COMPUTE_CLRMSK 0xFFFFFFFF3FFFFFFFULL
+#define ROGUE_CR_CLK_XTPLUS_CTRL_COMPUTE_OFF 0x0000000000000000ULL
+#define ROGUE_CR_CLK_XTPLUS_CTRL_COMPUTE_ON 0x0000000040000000ULL
+#define ROGUE_CR_CLK_XTPLUS_CTRL_COMPUTE_AUTO 0x0000000080000000ULL
+#define ROGUE_CR_CLK_XTPLUS_CTRL_PIXEL_SHIFT 28U
+#define ROGUE_CR_CLK_XTPLUS_CTRL_PIXEL_CLRMSK 0xFFFFFFFFCFFFFFFFULL
+#define ROGUE_CR_CLK_XTPLUS_CTRL_PIXEL_OFF 0x0000000000000000ULL
+#define ROGUE_CR_CLK_XTPLUS_CTRL_PIXEL_ON 0x0000000010000000ULL
+#define ROGUE_CR_CLK_XTPLUS_CTRL_PIXEL_AUTO 0x0000000020000000ULL
+#define ROGUE_CR_CLK_XTPLUS_CTRL_VERTEX_SHIFT 26U
+#define ROGUE_CR_CLK_XTPLUS_CTRL_VERTEX_CLRMSK 0xFFFFFFFFF3FFFFFFULL
+#define ROGUE_CR_CLK_XTPLUS_CTRL_VERTEX_OFF 0x0000000000000000ULL
+#define ROGUE_CR_CLK_XTPLUS_CTRL_VERTEX_ON 0x0000000004000000ULL
+#define ROGUE_CR_CLK_XTPLUS_CTRL_VERTEX_AUTO 0x0000000008000000ULL
+#define ROGUE_CR_CLK_XTPLUS_CTRL_USCPS_SHIFT 24U
+#define ROGUE_CR_CLK_XTPLUS_CTRL_USCPS_CLRMSK 0xFFFFFFFFFCFFFFFFULL
+#define ROGUE_CR_CLK_XTPLUS_CTRL_USCPS_OFF 0x0000000000000000ULL
+#define ROGUE_CR_CLK_XTPLUS_CTRL_USCPS_ON 0x0000000001000000ULL
+#define ROGUE_CR_CLK_XTPLUS_CTRL_USCPS_AUTO 0x0000000002000000ULL
+#define ROGUE_CR_CLK_XTPLUS_CTRL_PDS_SHARED_SHIFT 22U
+#define ROGUE_CR_CLK_XTPLUS_CTRL_PDS_SHARED_CLRMSK 0xFFFFFFFFFF3FFFFFULL
+#define ROGUE_CR_CLK_XTPLUS_CTRL_PDS_SHARED_OFF 0x0000000000000000ULL
+#define ROGUE_CR_CLK_XTPLUS_CTRL_PDS_SHARED_ON 0x0000000000400000ULL
+#define ROGUE_CR_CLK_XTPLUS_CTRL_PDS_SHARED_AUTO 0x0000000000800000ULL
+#define ROGUE_CR_CLK_XTPLUS_CTRL_BIF_BLACKPEARL_SHIFT 20U
+#define ROGUE_CR_CLK_XTPLUS_CTRL_BIF_BLACKPEARL_CLRMSK 0xFFFFFFFFFFCFFFFFULL
+#define ROGUE_CR_CLK_XTPLUS_CTRL_BIF_BLACKPEARL_OFF 0x0000000000000000ULL
+#define ROGUE_CR_CLK_XTPLUS_CTRL_BIF_BLACKPEARL_ON 0x0000000000100000ULL
+#define ROGUE_CR_CLK_XTPLUS_CTRL_BIF_BLACKPEARL_AUTO 0x0000000000200000ULL
+#define ROGUE_CR_CLK_XTPLUS_CTRL_USC_SHARED_SHIFT 18U
+#define ROGUE_CR_CLK_XTPLUS_CTRL_USC_SHARED_CLRMSK 0xFFFFFFFFFFF3FFFFULL
+#define ROGUE_CR_CLK_XTPLUS_CTRL_USC_SHARED_OFF 0x0000000000000000ULL
+#define ROGUE_CR_CLK_XTPLUS_CTRL_USC_SHARED_ON 0x0000000000040000ULL
+#define ROGUE_CR_CLK_XTPLUS_CTRL_USC_SHARED_AUTO 0x0000000000080000ULL
+#define ROGUE_CR_CLK_XTPLUS_CTRL_GEOMETRY_SHIFT 16U
+#define ROGUE_CR_CLK_XTPLUS_CTRL_GEOMETRY_CLRMSK 0xFFFFFFFFFFFCFFFFULL
+#define ROGUE_CR_CLK_XTPLUS_CTRL_GEOMETRY_OFF 0x0000000000000000ULL
+#define ROGUE_CR_CLK_XTPLUS_CTRL_GEOMETRY_ON 0x0000000000010000ULL
+#define ROGUE_CR_CLK_XTPLUS_CTRL_GEOMETRY_AUTO 0x0000000000020000ULL
+
+/* Register ROGUE_CR_CLK_XTPLUS_STATUS */
+#define ROGUE_CR_CLK_XTPLUS_STATUS 0x0088U
+#define ROGUE_CR_CLK_XTPLUS_STATUS_MASKFULL 0x00000000000007FFULL
+#define ROGUE_CR_CLK_XTPLUS_STATUS_TDM_SHIFT 10U
+#define ROGUE_CR_CLK_XTPLUS_STATUS_TDM_CLRMSK 0xFFFFFFFFFFFFFBFFULL
+#define ROGUE_CR_CLK_XTPLUS_STATUS_TDM_GATED 0x0000000000000000ULL
+#define ROGUE_CR_CLK_XTPLUS_STATUS_TDM_RUNNING 0x0000000000000400ULL
+#define ROGUE_CR_CLK_XTPLUS_STATUS_IPF_SHIFT 9U
+#define ROGUE_CR_CLK_XTPLUS_STATUS_IPF_CLRMSK 0xFFFFFFFFFFFFFDFFULL
+#define ROGUE_CR_CLK_XTPLUS_STATUS_IPF_GATED 0x0000000000000000ULL
+#define ROGUE_CR_CLK_XTPLUS_STATUS_IPF_RUNNING 0x0000000000000200ULL
+#define ROGUE_CR_CLK_XTPLUS_STATUS_COMPUTE_SHIFT 8U
+#define ROGUE_CR_CLK_XTPLUS_STATUS_COMPUTE_CLRMSK 0xFFFFFFFFFFFFFEFFULL
+#define ROGUE_CR_CLK_XTPLUS_STATUS_COMPUTE_GATED 0x0000000000000000ULL
+#define ROGUE_CR_CLK_XTPLUS_STATUS_COMPUTE_RUNNING 0x0000000000000100ULL
+#define ROGUE_CR_CLK_XTPLUS_STATUS_ASTC_SHIFT 7U
+#define ROGUE_CR_CLK_XTPLUS_STATUS_ASTC_CLRMSK 0xFFFFFFFFFFFFFF7FULL
+#define ROGUE_CR_CLK_XTPLUS_STATUS_ASTC_GATED 0x0000000000000000ULL
+#define ROGUE_CR_CLK_XTPLUS_STATUS_ASTC_RUNNING 0x0000000000000080ULL
+#define ROGUE_CR_CLK_XTPLUS_STATUS_PIXEL_SHIFT 6U
+#define ROGUE_CR_CLK_XTPLUS_STATUS_PIXEL_CLRMSK 0xFFFFFFFFFFFFFFBFULL
+#define ROGUE_CR_CLK_XTPLUS_STATUS_PIXEL_GATED 0x0000000000000000ULL
+#define ROGUE_CR_CLK_XTPLUS_STATUS_PIXEL_RUNNING 0x0000000000000040ULL
+#define ROGUE_CR_CLK_XTPLUS_STATUS_VERTEX_SHIFT 5U
+#define ROGUE_CR_CLK_XTPLUS_STATUS_VERTEX_CLRMSK 0xFFFFFFFFFFFFFFDFULL
+#define ROGUE_CR_CLK_XTPLUS_STATUS_VERTEX_GATED 0x0000000000000000ULL
+#define ROGUE_CR_CLK_XTPLUS_STATUS_VERTEX_RUNNING 0x0000000000000020ULL
+#define ROGUE_CR_CLK_XTPLUS_STATUS_USCPS_SHIFT 4U
+#define ROGUE_CR_CLK_XTPLUS_STATUS_USCPS_CLRMSK 0xFFFFFFFFFFFFFFEFULL
+#define ROGUE_CR_CLK_XTPLUS_STATUS_USCPS_GATED 0x0000000000000000ULL
+#define ROGUE_CR_CLK_XTPLUS_STATUS_USCPS_RUNNING 0x0000000000000010ULL
+#define ROGUE_CR_CLK_XTPLUS_STATUS_PDS_SHARED_SHIFT 3U
+#define ROGUE_CR_CLK_XTPLUS_STATUS_PDS_SHARED_CLRMSK 0xFFFFFFFFFFFFFFF7ULL
+#define ROGUE_CR_CLK_XTPLUS_STATUS_PDS_SHARED_GATED 0x0000000000000000ULL
+#define ROGUE_CR_CLK_XTPLUS_STATUS_PDS_SHARED_RUNNING 0x0000000000000008ULL
+#define ROGUE_CR_CLK_XTPLUS_STATUS_BIF_BLACKPEARL_SHIFT 2U
+#define ROGUE_CR_CLK_XTPLUS_STATUS_BIF_BLACKPEARL_CLRMSK 0xFFFFFFFFFFFFFFFBULL
+#define ROGUE_CR_CLK_XTPLUS_STATUS_BIF_BLACKPEARL_GATED 0x0000000000000000ULL
+#define ROGUE_CR_CLK_XTPLUS_STATUS_BIF_BLACKPEARL_RUNNING 0x0000000000000004ULL
+#define ROGUE_CR_CLK_XTPLUS_STATUS_USC_SHARED_SHIFT 1U
+#define ROGUE_CR_CLK_XTPLUS_STATUS_USC_SHARED_CLRMSK 0xFFFFFFFFFFFFFFFDULL
+#define ROGUE_CR_CLK_XTPLUS_STATUS_USC_SHARED_GATED 0x0000000000000000ULL
+#define ROGUE_CR_CLK_XTPLUS_STATUS_USC_SHARED_RUNNING 0x0000000000000002ULL
+#define ROGUE_CR_CLK_XTPLUS_STATUS_GEOMETRY_SHIFT 0U
+#define ROGUE_CR_CLK_XTPLUS_STATUS_GEOMETRY_CLRMSK 0xFFFFFFFFFFFFFFFEULL
+#define ROGUE_CR_CLK_XTPLUS_STATUS_GEOMETRY_GATED 0x0000000000000000ULL
+#define ROGUE_CR_CLK_XTPLUS_STATUS_GEOMETRY_RUNNING 0x0000000000000001ULL
+
+/* Register ROGUE_CR_SOFT_RESET */
+#define ROGUE_CR_SOFT_RESET 0x0100U
+#define ROGUE_CR_SOFT_RESET__PBE2_XE__MASKFULL 0xFFEFFFFFFFFFFC3DULL
+#define ROGUE_CR_SOFT_RESET_MASKFULL 0x00E7FFFFFFFFFC3DULL
+#define ROGUE_CR_SOFT_RESET_PHANTOM3_CORE_SHIFT 63U
+#define ROGUE_CR_SOFT_RESET_PHANTOM3_CORE_CLRMSK 0x7FFFFFFFFFFFFFFFULL
+#define ROGUE_CR_SOFT_RESET_PHANTOM3_CORE_EN 0x8000000000000000ULL
+#define ROGUE_CR_SOFT_RESET_PHANTOM2_CORE_SHIFT 62U
+#define ROGUE_CR_SOFT_RESET_PHANTOM2_CORE_CLRMSK 0xBFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_SOFT_RESET_PHANTOM2_CORE_EN 0x4000000000000000ULL
+#define ROGUE_CR_SOFT_RESET_BERNADO2_CORE_SHIFT 61U
+#define ROGUE_CR_SOFT_RESET_BERNADO2_CORE_CLRMSK 0xDFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_SOFT_RESET_BERNADO2_CORE_EN 0x2000000000000000ULL
+#define ROGUE_CR_SOFT_RESET_JONES_CORE_SHIFT 60U
+#define ROGUE_CR_SOFT_RESET_JONES_CORE_CLRMSK 0xEFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_SOFT_RESET_JONES_CORE_EN 0x1000000000000000ULL
+#define ROGUE_CR_SOFT_RESET_TILING_CORE_SHIFT 59U
+#define ROGUE_CR_SOFT_RESET_TILING_CORE_CLRMSK 0xF7FFFFFFFFFFFFFFULL
+#define ROGUE_CR_SOFT_RESET_TILING_CORE_EN 0x0800000000000000ULL
+#define ROGUE_CR_SOFT_RESET_TE3_SHIFT 58U
+#define ROGUE_CR_SOFT_RESET_TE3_CLRMSK 0xFBFFFFFFFFFFFFFFULL
+#define ROGUE_CR_SOFT_RESET_TE3_EN 0x0400000000000000ULL
+#define ROGUE_CR_SOFT_RESET_VCE_SHIFT 57U
+#define ROGUE_CR_SOFT_RESET_VCE_CLRMSK 0xFDFFFFFFFFFFFFFFULL
+#define ROGUE_CR_SOFT_RESET_VCE_EN 0x0200000000000000ULL
+#define ROGUE_CR_SOFT_RESET_VBS_SHIFT 56U
+#define ROGUE_CR_SOFT_RESET_VBS_CLRMSK 0xFEFFFFFFFFFFFFFFULL
+#define ROGUE_CR_SOFT_RESET_VBS_EN 0x0100000000000000ULL
+#define ROGUE_CR_SOFT_RESET_DPX1_CORE_SHIFT 55U
+#define ROGUE_CR_SOFT_RESET_DPX1_CORE_CLRMSK 0xFF7FFFFFFFFFFFFFULL
+#define ROGUE_CR_SOFT_RESET_DPX1_CORE_EN 0x0080000000000000ULL
+#define ROGUE_CR_SOFT_RESET_DPX0_CORE_SHIFT 54U
+#define ROGUE_CR_SOFT_RESET_DPX0_CORE_CLRMSK 0xFFBFFFFFFFFFFFFFULL
+#define ROGUE_CR_SOFT_RESET_DPX0_CORE_EN 0x0040000000000000ULL
+#define ROGUE_CR_SOFT_RESET_FBA_SHIFT 53U
+#define ROGUE_CR_SOFT_RESET_FBA_CLRMSK 0xFFDFFFFFFFFFFFFFULL
+#define ROGUE_CR_SOFT_RESET_FBA_EN 0x0020000000000000ULL
+#define ROGUE_CR_SOFT_RESET_FB_CDC_SHIFT 51U
+#define ROGUE_CR_SOFT_RESET_FB_CDC_CLRMSK 0xFFF7FFFFFFFFFFFFULL
+#define ROGUE_CR_SOFT_RESET_FB_CDC_EN 0x0008000000000000ULL
+#define ROGUE_CR_SOFT_RESET_SH_SHIFT 50U
+#define ROGUE_CR_SOFT_RESET_SH_CLRMSK 0xFFFBFFFFFFFFFFFFULL
+#define ROGUE_CR_SOFT_RESET_SH_EN 0x0004000000000000ULL
+#define ROGUE_CR_SOFT_RESET_VRDM_SHIFT 49U
+#define ROGUE_CR_SOFT_RESET_VRDM_CLRMSK 0xFFFDFFFFFFFFFFFFULL
+#define ROGUE_CR_SOFT_RESET_VRDM_EN 0x0002000000000000ULL
+#define ROGUE_CR_SOFT_RESET_MCU_FBTC_SHIFT 48U
+#define ROGUE_CR_SOFT_RESET_MCU_FBTC_CLRMSK 0xFFFEFFFFFFFFFFFFULL
+#define ROGUE_CR_SOFT_RESET_MCU_FBTC_EN 0x0001000000000000ULL
+#define ROGUE_CR_SOFT_RESET_PHANTOM1_CORE_SHIFT 47U
+#define ROGUE_CR_SOFT_RESET_PHANTOM1_CORE_CLRMSK 0xFFFF7FFFFFFFFFFFULL
+#define ROGUE_CR_SOFT_RESET_PHANTOM1_CORE_EN 0x0000800000000000ULL
+#define ROGUE_CR_SOFT_RESET_PHANTOM0_CORE_SHIFT 46U
+#define ROGUE_CR_SOFT_RESET_PHANTOM0_CORE_CLRMSK 0xFFFFBFFFFFFFFFFFULL
+#define ROGUE_CR_SOFT_RESET_PHANTOM0_CORE_EN 0x0000400000000000ULL
+#define ROGUE_CR_SOFT_RESET_BERNADO1_CORE_SHIFT 45U
+#define ROGUE_CR_SOFT_RESET_BERNADO1_CORE_CLRMSK 0xFFFFDFFFFFFFFFFFULL
+#define ROGUE_CR_SOFT_RESET_BERNADO1_CORE_EN 0x0000200000000000ULL
+#define ROGUE_CR_SOFT_RESET_BERNADO0_CORE_SHIFT 44U
+#define ROGUE_CR_SOFT_RESET_BERNADO0_CORE_CLRMSK 0xFFFFEFFFFFFFFFFFULL
+#define ROGUE_CR_SOFT_RESET_BERNADO0_CORE_EN 0x0000100000000000ULL
+#define ROGUE_CR_SOFT_RESET_IPP_SHIFT 43U
+#define ROGUE_CR_SOFT_RESET_IPP_CLRMSK 0xFFFFF7FFFFFFFFFFULL
+#define ROGUE_CR_SOFT_RESET_IPP_EN 0x0000080000000000ULL
+#define ROGUE_CR_SOFT_RESET_BIF_TEXAS_SHIFT 42U
+#define ROGUE_CR_SOFT_RESET_BIF_TEXAS_CLRMSK 0xFFFFFBFFFFFFFFFFULL
+#define ROGUE_CR_SOFT_RESET_BIF_TEXAS_EN 0x0000040000000000ULL
+#define ROGUE_CR_SOFT_RESET_TORNADO_CORE_SHIFT 41U
+#define ROGUE_CR_SOFT_RESET_TORNADO_CORE_CLRMSK 0xFFFFFDFFFFFFFFFFULL
+#define ROGUE_CR_SOFT_RESET_TORNADO_CORE_EN 0x0000020000000000ULL
+#define ROGUE_CR_SOFT_RESET_DUST_H_CORE_SHIFT 40U
+#define ROGUE_CR_SOFT_RESET_DUST_H_CORE_CLRMSK 0xFFFFFEFFFFFFFFFFULL
+#define ROGUE_CR_SOFT_RESET_DUST_H_CORE_EN 0x0000010000000000ULL
+#define ROGUE_CR_SOFT_RESET_DUST_G_CORE_SHIFT 39U
+#define ROGUE_CR_SOFT_RESET_DUST_G_CORE_CLRMSK 0xFFFFFF7FFFFFFFFFULL
+#define ROGUE_CR_SOFT_RESET_DUST_G_CORE_EN 0x0000008000000000ULL
+#define ROGUE_CR_SOFT_RESET_DUST_F_CORE_SHIFT 38U
+#define ROGUE_CR_SOFT_RESET_DUST_F_CORE_CLRMSK 0xFFFFFFBFFFFFFFFFULL
+#define ROGUE_CR_SOFT_RESET_DUST_F_CORE_EN 0x0000004000000000ULL
+#define ROGUE_CR_SOFT_RESET_DUST_E_CORE_SHIFT 37U
+#define ROGUE_CR_SOFT_RESET_DUST_E_CORE_CLRMSK 0xFFFFFFDFFFFFFFFFULL
+#define ROGUE_CR_SOFT_RESET_DUST_E_CORE_EN 0x0000002000000000ULL
+#define ROGUE_CR_SOFT_RESET_DUST_D_CORE_SHIFT 36U
+#define ROGUE_CR_SOFT_RESET_DUST_D_CORE_CLRMSK 0xFFFFFFEFFFFFFFFFULL
+#define ROGUE_CR_SOFT_RESET_DUST_D_CORE_EN 0x0000001000000000ULL
+#define ROGUE_CR_SOFT_RESET_DUST_C_CORE_SHIFT 35U
+#define ROGUE_CR_SOFT_RESET_DUST_C_CORE_CLRMSK 0xFFFFFFF7FFFFFFFFULL
+#define ROGUE_CR_SOFT_RESET_DUST_C_CORE_EN 0x0000000800000000ULL
+#define ROGUE_CR_SOFT_RESET_MMU_SHIFT 34U
+#define ROGUE_CR_SOFT_RESET_MMU_CLRMSK 0xFFFFFFFBFFFFFFFFULL
+#define ROGUE_CR_SOFT_RESET_MMU_EN 0x0000000400000000ULL
+#define ROGUE_CR_SOFT_RESET_BIF1_SHIFT 33U
+#define ROGUE_CR_SOFT_RESET_BIF1_CLRMSK 0xFFFFFFFDFFFFFFFFULL
+#define ROGUE_CR_SOFT_RESET_BIF1_EN 0x0000000200000000ULL
+#define ROGUE_CR_SOFT_RESET_GARTEN_SHIFT 32U
+#define ROGUE_CR_SOFT_RESET_GARTEN_CLRMSK 0xFFFFFFFEFFFFFFFFULL
+#define ROGUE_CR_SOFT_RESET_GARTEN_EN 0x0000000100000000ULL
+#define ROGUE_CR_SOFT_RESET_CPU_SHIFT 32U
+#define ROGUE_CR_SOFT_RESET_CPU_CLRMSK 0xFFFFFFFEFFFFFFFFULL
+#define ROGUE_CR_SOFT_RESET_CPU_EN 0x0000000100000000ULL
+#define ROGUE_CR_SOFT_RESET_RASCAL_CORE_SHIFT 31U
+#define ROGUE_CR_SOFT_RESET_RASCAL_CORE_CLRMSK 0xFFFFFFFF7FFFFFFFULL
+#define ROGUE_CR_SOFT_RESET_RASCAL_CORE_EN 0x0000000080000000ULL
+#define ROGUE_CR_SOFT_RESET_DUST_B_CORE_SHIFT 30U
+#define ROGUE_CR_SOFT_RESET_DUST_B_CORE_CLRMSK 0xFFFFFFFFBFFFFFFFULL
+#define ROGUE_CR_SOFT_RESET_DUST_B_CORE_EN 0x0000000040000000ULL
+#define ROGUE_CR_SOFT_RESET_DUST_A_CORE_SHIFT 29U
+#define ROGUE_CR_SOFT_RESET_DUST_A_CORE_CLRMSK 0xFFFFFFFFDFFFFFFFULL
+#define ROGUE_CR_SOFT_RESET_DUST_A_CORE_EN 0x0000000020000000ULL
+#define ROGUE_CR_SOFT_RESET_FB_TLCACHE_SHIFT 28U
+#define ROGUE_CR_SOFT_RESET_FB_TLCACHE_CLRMSK 0xFFFFFFFFEFFFFFFFULL
+#define ROGUE_CR_SOFT_RESET_FB_TLCACHE_EN 0x0000000010000000ULL
+#define ROGUE_CR_SOFT_RESET_SLC_SHIFT 27U
+#define ROGUE_CR_SOFT_RESET_SLC_CLRMSK 0xFFFFFFFFF7FFFFFFULL
+#define ROGUE_CR_SOFT_RESET_SLC_EN 0x0000000008000000ULL
+#define ROGUE_CR_SOFT_RESET_TLA_SHIFT 26U
+#define ROGUE_CR_SOFT_RESET_TLA_CLRMSK 0xFFFFFFFFFBFFFFFFULL
+#define ROGUE_CR_SOFT_RESET_TLA_EN 0x0000000004000000ULL
+#define ROGUE_CR_SOFT_RESET_UVS_SHIFT 25U
+#define ROGUE_CR_SOFT_RESET_UVS_CLRMSK 0xFFFFFFFFFDFFFFFFULL
+#define ROGUE_CR_SOFT_RESET_UVS_EN 0x0000000002000000ULL
+#define ROGUE_CR_SOFT_RESET_TE_SHIFT 24U
+#define ROGUE_CR_SOFT_RESET_TE_CLRMSK 0xFFFFFFFFFEFFFFFFULL
+#define ROGUE_CR_SOFT_RESET_TE_EN 0x0000000001000000ULL
+#define ROGUE_CR_SOFT_RESET_GPP_SHIFT 23U
+#define ROGUE_CR_SOFT_RESET_GPP_CLRMSK 0xFFFFFFFFFF7FFFFFULL
+#define ROGUE_CR_SOFT_RESET_GPP_EN 0x0000000000800000ULL
+#define ROGUE_CR_SOFT_RESET_FBDC_SHIFT 22U
+#define ROGUE_CR_SOFT_RESET_FBDC_CLRMSK 0xFFFFFFFFFFBFFFFFULL
+#define ROGUE_CR_SOFT_RESET_FBDC_EN 0x0000000000400000ULL
+#define ROGUE_CR_SOFT_RESET_FBC_SHIFT 21U
+#define ROGUE_CR_SOFT_RESET_FBC_CLRMSK 0xFFFFFFFFFFDFFFFFULL
+#define ROGUE_CR_SOFT_RESET_FBC_EN 0x0000000000200000ULL
+#define ROGUE_CR_SOFT_RESET_PM_SHIFT 20U
+#define ROGUE_CR_SOFT_RESET_PM_CLRMSK 0xFFFFFFFFFFEFFFFFULL
+#define ROGUE_CR_SOFT_RESET_PM_EN 0x0000000000100000ULL
+#define ROGUE_CR_SOFT_RESET_PBE_SHIFT 19U
+#define ROGUE_CR_SOFT_RESET_PBE_CLRMSK 0xFFFFFFFFFFF7FFFFULL
+#define ROGUE_CR_SOFT_RESET_PBE_EN 0x0000000000080000ULL
+#define ROGUE_CR_SOFT_RESET_USC_SHARED_SHIFT 18U
+#define ROGUE_CR_SOFT_RESET_USC_SHARED_CLRMSK 0xFFFFFFFFFFFBFFFFULL
+#define ROGUE_CR_SOFT_RESET_USC_SHARED_EN 0x0000000000040000ULL
+#define ROGUE_CR_SOFT_RESET_MCU_L1_SHIFT 17U
+#define ROGUE_CR_SOFT_RESET_MCU_L1_CLRMSK 0xFFFFFFFFFFFDFFFFULL
+#define ROGUE_CR_SOFT_RESET_MCU_L1_EN 0x0000000000020000ULL
+#define ROGUE_CR_SOFT_RESET_BIF_SHIFT 16U
+#define ROGUE_CR_SOFT_RESET_BIF_CLRMSK 0xFFFFFFFFFFFEFFFFULL
+#define ROGUE_CR_SOFT_RESET_BIF_EN 0x0000000000010000ULL
+#define ROGUE_CR_SOFT_RESET_CDM_SHIFT 15U
+#define ROGUE_CR_SOFT_RESET_CDM_CLRMSK 0xFFFFFFFFFFFF7FFFULL
+#define ROGUE_CR_SOFT_RESET_CDM_EN 0x0000000000008000ULL
+#define ROGUE_CR_SOFT_RESET_VDM_SHIFT 14U
+#define ROGUE_CR_SOFT_RESET_VDM_CLRMSK 0xFFFFFFFFFFFFBFFFULL
+#define ROGUE_CR_SOFT_RESET_VDM_EN 0x0000000000004000ULL
+#define ROGUE_CR_SOFT_RESET_TESS_SHIFT 13U
+#define ROGUE_CR_SOFT_RESET_TESS_CLRMSK 0xFFFFFFFFFFFFDFFFULL
+#define ROGUE_CR_SOFT_RESET_TESS_EN 0x0000000000002000ULL
+#define ROGUE_CR_SOFT_RESET_PDS_SHIFT 12U
+#define ROGUE_CR_SOFT_RESET_PDS_CLRMSK 0xFFFFFFFFFFFFEFFFULL
+#define ROGUE_CR_SOFT_RESET_PDS_EN 0x0000000000001000ULL
+#define ROGUE_CR_SOFT_RESET_ISP_SHIFT 11U
+#define ROGUE_CR_SOFT_RESET_ISP_CLRMSK 0xFFFFFFFFFFFFF7FFULL
+#define ROGUE_CR_SOFT_RESET_ISP_EN 0x0000000000000800ULL
+#define ROGUE_CR_SOFT_RESET_TSP_SHIFT 10U
+#define ROGUE_CR_SOFT_RESET_TSP_CLRMSK 0xFFFFFFFFFFFFFBFFULL
+#define ROGUE_CR_SOFT_RESET_TSP_EN 0x0000000000000400ULL
+#define ROGUE_CR_SOFT_RESET_SYSARB_SHIFT 5U
+#define ROGUE_CR_SOFT_RESET_SYSARB_CLRMSK 0xFFFFFFFFFFFFFFDFULL
+#define ROGUE_CR_SOFT_RESET_SYSARB_EN 0x0000000000000020ULL
+#define ROGUE_CR_SOFT_RESET_TPU_MCU_DEMUX_SHIFT 4U
+#define ROGUE_CR_SOFT_RESET_TPU_MCU_DEMUX_CLRMSK 0xFFFFFFFFFFFFFFEFULL
+#define ROGUE_CR_SOFT_RESET_TPU_MCU_DEMUX_EN 0x0000000000000010ULL
+#define ROGUE_CR_SOFT_RESET_MCU_L0_SHIFT 3U
+#define ROGUE_CR_SOFT_RESET_MCU_L0_CLRMSK 0xFFFFFFFFFFFFFFF7ULL
+#define ROGUE_CR_SOFT_RESET_MCU_L0_EN 0x0000000000000008ULL
+#define ROGUE_CR_SOFT_RESET_TPU_SHIFT 2U
+#define ROGUE_CR_SOFT_RESET_TPU_CLRMSK 0xFFFFFFFFFFFFFFFBULL
+#define ROGUE_CR_SOFT_RESET_TPU_EN 0x0000000000000004ULL
+#define ROGUE_CR_SOFT_RESET_USC_SHIFT 0U
+#define ROGUE_CR_SOFT_RESET_USC_CLRMSK 0xFFFFFFFFFFFFFFFEULL
+#define ROGUE_CR_SOFT_RESET_USC_EN 0x0000000000000001ULL
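+/*
+ * Illustrative usage note (editorial): single-bit fields provide an _EN value
+ * equal to the bit at _SHIFT, so a hypothetical write value combining two
+ * reset bits could be composed as:
+ *
+ *   u64 soft_reset = ROGUE_CR_SOFT_RESET_MMU_EN | ROGUE_CR_SOFT_RESET_BIF_EN;
+ */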
+
+/* Register ROGUE_CR_SOFT_RESET2 */
+#define ROGUE_CR_SOFT_RESET2 0x0108U
+#define ROGUE_CR_SOFT_RESET2_MASKFULL 0x00000000001FFFFFULL
+#define ROGUE_CR_SOFT_RESET2_SPFILTER_SHIFT 12U
+#define ROGUE_CR_SOFT_RESET2_SPFILTER_CLRMSK 0xFFE00FFFU
+#define ROGUE_CR_SOFT_RESET2_TDM_SHIFT 11U
+#define ROGUE_CR_SOFT_RESET2_TDM_CLRMSK 0xFFFFF7FFU
+#define ROGUE_CR_SOFT_RESET2_TDM_EN 0x00000800U
+#define ROGUE_CR_SOFT_RESET2_ASTC_SHIFT 10U
+#define ROGUE_CR_SOFT_RESET2_ASTC_CLRMSK 0xFFFFFBFFU
+#define ROGUE_CR_SOFT_RESET2_ASTC_EN 0x00000400U
+#define ROGUE_CR_SOFT_RESET2_BLACKPEARL_SHIFT 9U
+#define ROGUE_CR_SOFT_RESET2_BLACKPEARL_CLRMSK 0xFFFFFDFFU
+#define ROGUE_CR_SOFT_RESET2_BLACKPEARL_EN 0x00000200U
+#define ROGUE_CR_SOFT_RESET2_USCPS_SHIFT 8U
+#define ROGUE_CR_SOFT_RESET2_USCPS_CLRMSK 0xFFFFFEFFU
+#define ROGUE_CR_SOFT_RESET2_USCPS_EN 0x00000100U
+#define ROGUE_CR_SOFT_RESET2_IPF_SHIFT 7U
+#define ROGUE_CR_SOFT_RESET2_IPF_CLRMSK 0xFFFFFF7FU
+#define ROGUE_CR_SOFT_RESET2_IPF_EN 0x00000080U
+#define ROGUE_CR_SOFT_RESET2_GEOMETRY_SHIFT 6U
+#define ROGUE_CR_SOFT_RESET2_GEOMETRY_CLRMSK 0xFFFFFFBFU
+#define ROGUE_CR_SOFT_RESET2_GEOMETRY_EN 0x00000040U
+#define ROGUE_CR_SOFT_RESET2_USC_SHARED_SHIFT 5U
+#define ROGUE_CR_SOFT_RESET2_USC_SHARED_CLRMSK 0xFFFFFFDFU
+#define ROGUE_CR_SOFT_RESET2_USC_SHARED_EN 0x00000020U
+#define ROGUE_CR_SOFT_RESET2_PDS_SHARED_SHIFT 4U
+#define ROGUE_CR_SOFT_RESET2_PDS_SHARED_CLRMSK 0xFFFFFFEFU
+#define ROGUE_CR_SOFT_RESET2_PDS_SHARED_EN 0x00000010U
+#define ROGUE_CR_SOFT_RESET2_BIF_BLACKPEARL_SHIFT 3U
+#define ROGUE_CR_SOFT_RESET2_BIF_BLACKPEARL_CLRMSK 0xFFFFFFF7U
+#define ROGUE_CR_SOFT_RESET2_BIF_BLACKPEARL_EN 0x00000008U
+#define ROGUE_CR_SOFT_RESET2_PIXEL_SHIFT 2U
+#define ROGUE_CR_SOFT_RESET2_PIXEL_CLRMSK 0xFFFFFFFBU
+#define ROGUE_CR_SOFT_RESET2_PIXEL_EN 0x00000004U
+#define ROGUE_CR_SOFT_RESET2_CDM_SHIFT 1U
+#define ROGUE_CR_SOFT_RESET2_CDM_CLRMSK 0xFFFFFFFDU
+#define ROGUE_CR_SOFT_RESET2_CDM_EN 0x00000002U
+#define ROGUE_CR_SOFT_RESET2_VERTEX_SHIFT 0U
+#define ROGUE_CR_SOFT_RESET2_VERTEX_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_SOFT_RESET2_VERTEX_EN 0x00000001U
+
+/* Register ROGUE_CR_EVENT_STATUS */
+#define ROGUE_CR_EVENT_STATUS 0x0130U
+#define ROGUE_CR_EVENT_STATUS__ROGUEXE__MASKFULL 0x00000000E01DFFFFULL
+#define ROGUE_CR_EVENT_STATUS__SIGNALS__MASKFULL 0x00000000E007FFFFULL
+#define ROGUE_CR_EVENT_STATUS_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_EVENT_STATUS_TDM_FENCE_FINISHED_SHIFT 31U
+#define ROGUE_CR_EVENT_STATUS_TDM_FENCE_FINISHED_CLRMSK 0x7FFFFFFFU
+#define ROGUE_CR_EVENT_STATUS_TDM_FENCE_FINISHED_EN 0x80000000U
+#define ROGUE_CR_EVENT_STATUS_TDM_BUFFER_STALL_SHIFT 30U
+#define ROGUE_CR_EVENT_STATUS_TDM_BUFFER_STALL_CLRMSK 0xBFFFFFFFU
+#define ROGUE_CR_EVENT_STATUS_TDM_BUFFER_STALL_EN 0x40000000U
+#define ROGUE_CR_EVENT_STATUS_COMPUTE_SIGNAL_FAILURE_SHIFT 29U
+#define ROGUE_CR_EVENT_STATUS_COMPUTE_SIGNAL_FAILURE_CLRMSK 0xDFFFFFFFU
+#define ROGUE_CR_EVENT_STATUS_COMPUTE_SIGNAL_FAILURE_EN 0x20000000U
+#define ROGUE_CR_EVENT_STATUS_DPX_OUT_OF_MEMORY_SHIFT 28U
+#define ROGUE_CR_EVENT_STATUS_DPX_OUT_OF_MEMORY_CLRMSK 0xEFFFFFFFU
+#define ROGUE_CR_EVENT_STATUS_DPX_OUT_OF_MEMORY_EN 0x10000000U
+#define ROGUE_CR_EVENT_STATUS_DPX_MMU_PAGE_FAULT_SHIFT 27U
+#define ROGUE_CR_EVENT_STATUS_DPX_MMU_PAGE_FAULT_CLRMSK 0xF7FFFFFFU
+#define ROGUE_CR_EVENT_STATUS_DPX_MMU_PAGE_FAULT_EN 0x08000000U
+#define ROGUE_CR_EVENT_STATUS_RPM_OUT_OF_MEMORY_SHIFT 26U
+#define ROGUE_CR_EVENT_STATUS_RPM_OUT_OF_MEMORY_CLRMSK 0xFBFFFFFFU
+#define ROGUE_CR_EVENT_STATUS_RPM_OUT_OF_MEMORY_EN 0x04000000U
+#define ROGUE_CR_EVENT_STATUS_FBA_FC3_FINISHED_SHIFT 25U
+#define ROGUE_CR_EVENT_STATUS_FBA_FC3_FINISHED_CLRMSK 0xFDFFFFFFU
+#define ROGUE_CR_EVENT_STATUS_FBA_FC3_FINISHED_EN 0x02000000U
+#define ROGUE_CR_EVENT_STATUS_FBA_FC2_FINISHED_SHIFT 24U
+#define ROGUE_CR_EVENT_STATUS_FBA_FC2_FINISHED_CLRMSK 0xFEFFFFFFU
+#define ROGUE_CR_EVENT_STATUS_FBA_FC2_FINISHED_EN 0x01000000U
+#define ROGUE_CR_EVENT_STATUS_FBA_FC1_FINISHED_SHIFT 23U
+#define ROGUE_CR_EVENT_STATUS_FBA_FC1_FINISHED_CLRMSK 0xFF7FFFFFU
+#define ROGUE_CR_EVENT_STATUS_FBA_FC1_FINISHED_EN 0x00800000U
+#define ROGUE_CR_EVENT_STATUS_FBA_FC0_FINISHED_SHIFT 22U
+#define ROGUE_CR_EVENT_STATUS_FBA_FC0_FINISHED_CLRMSK 0xFFBFFFFFU
+#define ROGUE_CR_EVENT_STATUS_FBA_FC0_FINISHED_EN 0x00400000U
+#define ROGUE_CR_EVENT_STATUS_RDM_FC3_FINISHED_SHIFT 21U
+#define ROGUE_CR_EVENT_STATUS_RDM_FC3_FINISHED_CLRMSK 0xFFDFFFFFU
+#define ROGUE_CR_EVENT_STATUS_RDM_FC3_FINISHED_EN 0x00200000U
+#define ROGUE_CR_EVENT_STATUS_RDM_FC2_FINISHED_SHIFT 20U
+#define ROGUE_CR_EVENT_STATUS_RDM_FC2_FINISHED_CLRMSK 0xFFEFFFFFU
+#define ROGUE_CR_EVENT_STATUS_RDM_FC2_FINISHED_EN 0x00100000U
+#define ROGUE_CR_EVENT_STATUS_SAFETY_SHIFT 20U
+#define ROGUE_CR_EVENT_STATUS_SAFETY_CLRMSK 0xFFEFFFFFU
+#define ROGUE_CR_EVENT_STATUS_SAFETY_EN 0x00100000U
+#define ROGUE_CR_EVENT_STATUS_RDM_FC1_FINISHED_SHIFT 19U
+#define ROGUE_CR_EVENT_STATUS_RDM_FC1_FINISHED_CLRMSK 0xFFF7FFFFU
+#define ROGUE_CR_EVENT_STATUS_RDM_FC1_FINISHED_EN 0x00080000U
+#define ROGUE_CR_EVENT_STATUS_SLAVE_REQ_SHIFT 19U
+#define ROGUE_CR_EVENT_STATUS_SLAVE_REQ_CLRMSK 0xFFF7FFFFU
+#define ROGUE_CR_EVENT_STATUS_SLAVE_REQ_EN 0x00080000U
+#define ROGUE_CR_EVENT_STATUS_RDM_FC0_FINISHED_SHIFT 18U
+#define ROGUE_CR_EVENT_STATUS_RDM_FC0_FINISHED_CLRMSK 0xFFFBFFFFU
+#define ROGUE_CR_EVENT_STATUS_RDM_FC0_FINISHED_EN 0x00040000U
+#define ROGUE_CR_EVENT_STATUS_TDM_CONTEXT_STORE_FINISHED_SHIFT 18U
+#define ROGUE_CR_EVENT_STATUS_TDM_CONTEXT_STORE_FINISHED_CLRMSK 0xFFFBFFFFU
+#define ROGUE_CR_EVENT_STATUS_TDM_CONTEXT_STORE_FINISHED_EN 0x00040000U
+#define ROGUE_CR_EVENT_STATUS_SHG_FINISHED_SHIFT 17U
+#define ROGUE_CR_EVENT_STATUS_SHG_FINISHED_CLRMSK 0xFFFDFFFFU
+#define ROGUE_CR_EVENT_STATUS_SHG_FINISHED_EN 0x00020000U
+#define ROGUE_CR_EVENT_STATUS_SPFILTER_SIGNAL_UPDATE_SHIFT 17U
+#define ROGUE_CR_EVENT_STATUS_SPFILTER_SIGNAL_UPDATE_CLRMSK 0xFFFDFFFFU
+#define ROGUE_CR_EVENT_STATUS_SPFILTER_SIGNAL_UPDATE_EN 0x00020000U
+#define ROGUE_CR_EVENT_STATUS_COMPUTE_BUFFER_STALL_SHIFT 16U
+#define ROGUE_CR_EVENT_STATUS_COMPUTE_BUFFER_STALL_CLRMSK 0xFFFEFFFFU
+#define ROGUE_CR_EVENT_STATUS_COMPUTE_BUFFER_STALL_EN 0x00010000U
+#define ROGUE_CR_EVENT_STATUS_USC_TRIGGER_SHIFT 15U
+#define ROGUE_CR_EVENT_STATUS_USC_TRIGGER_CLRMSK 0xFFFF7FFFU
+#define ROGUE_CR_EVENT_STATUS_USC_TRIGGER_EN 0x00008000U
+#define ROGUE_CR_EVENT_STATUS_ZLS_FINISHED_SHIFT 14U
+#define ROGUE_CR_EVENT_STATUS_ZLS_FINISHED_CLRMSK 0xFFFFBFFFU
+#define ROGUE_CR_EVENT_STATUS_ZLS_FINISHED_EN 0x00004000U
+#define ROGUE_CR_EVENT_STATUS_GPIO_ACK_SHIFT 13U
+#define ROGUE_CR_EVENT_STATUS_GPIO_ACK_CLRMSK 0xFFFFDFFFU
+#define ROGUE_CR_EVENT_STATUS_GPIO_ACK_EN 0x00002000U
+#define ROGUE_CR_EVENT_STATUS_GPIO_REQ_SHIFT 12U
+#define ROGUE_CR_EVENT_STATUS_GPIO_REQ_CLRMSK 0xFFFFEFFFU
+#define ROGUE_CR_EVENT_STATUS_GPIO_REQ_EN 0x00001000U
+#define ROGUE_CR_EVENT_STATUS_POWER_ABORT_SHIFT 11U
+#define ROGUE_CR_EVENT_STATUS_POWER_ABORT_CLRMSK 0xFFFFF7FFU
+#define ROGUE_CR_EVENT_STATUS_POWER_ABORT_EN 0x00000800U
+#define ROGUE_CR_EVENT_STATUS_POWER_COMPLETE_SHIFT 10U
+#define ROGUE_CR_EVENT_STATUS_POWER_COMPLETE_CLRMSK 0xFFFFFBFFU
+#define ROGUE_CR_EVENT_STATUS_POWER_COMPLETE_EN 0x00000400U
+#define ROGUE_CR_EVENT_STATUS_MMU_PAGE_FAULT_SHIFT 9U
+#define ROGUE_CR_EVENT_STATUS_MMU_PAGE_FAULT_CLRMSK 0xFFFFFDFFU
+#define ROGUE_CR_EVENT_STATUS_MMU_PAGE_FAULT_EN 0x00000200U
+#define ROGUE_CR_EVENT_STATUS_PM_3D_MEM_FREE_SHIFT 8U
+#define ROGUE_CR_EVENT_STATUS_PM_3D_MEM_FREE_CLRMSK 0xFFFFFEFFU
+#define ROGUE_CR_EVENT_STATUS_PM_3D_MEM_FREE_EN 0x00000100U
+#define ROGUE_CR_EVENT_STATUS_PM_OUT_OF_MEMORY_SHIFT 7U
+#define ROGUE_CR_EVENT_STATUS_PM_OUT_OF_MEMORY_CLRMSK 0xFFFFFF7FU
+#define ROGUE_CR_EVENT_STATUS_PM_OUT_OF_MEMORY_EN 0x00000080U
+#define ROGUE_CR_EVENT_STATUS_TA_TERMINATE_SHIFT 6U
+#define ROGUE_CR_EVENT_STATUS_TA_TERMINATE_CLRMSK 0xFFFFFFBFU
+#define ROGUE_CR_EVENT_STATUS_TA_TERMINATE_EN 0x00000040U
+#define ROGUE_CR_EVENT_STATUS_TA_FINISHED_SHIFT 5U
+#define ROGUE_CR_EVENT_STATUS_TA_FINISHED_CLRMSK 0xFFFFFFDFU
+#define ROGUE_CR_EVENT_STATUS_TA_FINISHED_EN 0x00000020U
+#define ROGUE_CR_EVENT_STATUS_ISP_END_MACROTILE_SHIFT 4U
+#define ROGUE_CR_EVENT_STATUS_ISP_END_MACROTILE_CLRMSK 0xFFFFFFEFU
+#define ROGUE_CR_EVENT_STATUS_ISP_END_MACROTILE_EN 0x00000010U
+#define ROGUE_CR_EVENT_STATUS_PIXELBE_END_RENDER_SHIFT 3U
+#define ROGUE_CR_EVENT_STATUS_PIXELBE_END_RENDER_CLRMSK 0xFFFFFFF7U
+#define ROGUE_CR_EVENT_STATUS_PIXELBE_END_RENDER_EN 0x00000008U
+#define ROGUE_CR_EVENT_STATUS_COMPUTE_FINISHED_SHIFT 2U
+#define ROGUE_CR_EVENT_STATUS_COMPUTE_FINISHED_CLRMSK 0xFFFFFFFBU
+#define ROGUE_CR_EVENT_STATUS_COMPUTE_FINISHED_EN 0x00000004U
+#define ROGUE_CR_EVENT_STATUS_KERNEL_FINISHED_SHIFT 1U
+#define ROGUE_CR_EVENT_STATUS_KERNEL_FINISHED_CLRMSK 0xFFFFFFFDU
+#define ROGUE_CR_EVENT_STATUS_KERNEL_FINISHED_EN 0x00000002U
+#define ROGUE_CR_EVENT_STATUS_TLA_COMPLETE_SHIFT 0U
+#define ROGUE_CR_EVENT_STATUS_TLA_COMPLETE_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_EVENT_STATUS_TLA_COMPLETE_EN 0x00000001U
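+/*
+ * Illustrative usage note (editorial; events is a hypothetical variable
+ * holding a ROGUE_CR_EVENT_STATUS read): an individual event bit can be
+ * tested with its _EN value, e.g.:
+ *
+ *   if (events & ROGUE_CR_EVENT_STATUS_MMU_PAGE_FAULT_EN)
+ *           ...; // handle the page fault
+ */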
+
+/* Register ROGUE_CR_TIMER */
+#define ROGUE_CR_TIMER 0x0160U
+#define ROGUE_CR_TIMER_MASKFULL 0x8000FFFFFFFFFFFFULL
+#define ROGUE_CR_TIMER_BIT31_SHIFT 63U
+#define ROGUE_CR_TIMER_BIT31_CLRMSK 0x7FFFFFFFFFFFFFFFULL
+#define ROGUE_CR_TIMER_BIT31_EN 0x8000000000000000ULL
+#define ROGUE_CR_TIMER_VALUE_SHIFT 0U
+#define ROGUE_CR_TIMER_VALUE_CLRMSK 0xFFFF000000000000ULL
+
+/* Register ROGUE_CR_TLA_STATUS */
+#define ROGUE_CR_TLA_STATUS 0x0178U
+#define ROGUE_CR_TLA_STATUS_MASKFULL 0xFFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_TLA_STATUS_BLIT_COUNT_SHIFT 39U
+#define ROGUE_CR_TLA_STATUS_BLIT_COUNT_CLRMSK 0x0000007FFFFFFFFFULL
+#define ROGUE_CR_TLA_STATUS_REQUEST_SHIFT 7U
+#define ROGUE_CR_TLA_STATUS_REQUEST_CLRMSK 0xFFFFFF800000007FULL
+#define ROGUE_CR_TLA_STATUS_FIFO_FULLNESS_SHIFT 1U
+#define ROGUE_CR_TLA_STATUS_FIFO_FULLNESS_CLRMSK 0xFFFFFFFFFFFFFF81ULL
+#define ROGUE_CR_TLA_STATUS_BUSY_SHIFT 0U
+#define ROGUE_CR_TLA_STATUS_BUSY_CLRMSK 0xFFFFFFFFFFFFFFFEULL
+#define ROGUE_CR_TLA_STATUS_BUSY_EN 0x0000000000000001ULL
+
+/* Register ROGUE_CR_PM_PARTIAL_RENDER_ENABLE */
+#define ROGUE_CR_PM_PARTIAL_RENDER_ENABLE 0x0338U
+#define ROGUE_CR_PM_PARTIAL_RENDER_ENABLE_MASKFULL 0x0000000000000001ULL
+#define ROGUE_CR_PM_PARTIAL_RENDER_ENABLE_OP_SHIFT 0U
+#define ROGUE_CR_PM_PARTIAL_RENDER_ENABLE_OP_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_PM_PARTIAL_RENDER_ENABLE_OP_EN 0x00000001U
+
+/* Register ROGUE_CR_SIDEKICK_IDLE */
+#define ROGUE_CR_SIDEKICK_IDLE 0x03C8U
+#define ROGUE_CR_SIDEKICK_IDLE_MASKFULL 0x000000000000007FULL
+#define ROGUE_CR_SIDEKICK_IDLE_FB_CDC_SHIFT 6U
+#define ROGUE_CR_SIDEKICK_IDLE_FB_CDC_CLRMSK 0xFFFFFFBFU
+#define ROGUE_CR_SIDEKICK_IDLE_FB_CDC_EN 0x00000040U
+#define ROGUE_CR_SIDEKICK_IDLE_MMU_SHIFT 5U
+#define ROGUE_CR_SIDEKICK_IDLE_MMU_CLRMSK 0xFFFFFFDFU
+#define ROGUE_CR_SIDEKICK_IDLE_MMU_EN 0x00000020U
+#define ROGUE_CR_SIDEKICK_IDLE_BIF128_SHIFT 4U
+#define ROGUE_CR_SIDEKICK_IDLE_BIF128_CLRMSK 0xFFFFFFEFU
+#define ROGUE_CR_SIDEKICK_IDLE_BIF128_EN 0x00000010U
+#define ROGUE_CR_SIDEKICK_IDLE_TLA_SHIFT 3U
+#define ROGUE_CR_SIDEKICK_IDLE_TLA_CLRMSK 0xFFFFFFF7U
+#define ROGUE_CR_SIDEKICK_IDLE_TLA_EN 0x00000008U
+#define ROGUE_CR_SIDEKICK_IDLE_GARTEN_SHIFT 2U
+#define ROGUE_CR_SIDEKICK_IDLE_GARTEN_CLRMSK 0xFFFFFFFBU
+#define ROGUE_CR_SIDEKICK_IDLE_GARTEN_EN 0x00000004U
+#define ROGUE_CR_SIDEKICK_IDLE_HOSTIF_SHIFT 1U
+#define ROGUE_CR_SIDEKICK_IDLE_HOSTIF_CLRMSK 0xFFFFFFFDU
+#define ROGUE_CR_SIDEKICK_IDLE_HOSTIF_EN 0x00000002U
+#define ROGUE_CR_SIDEKICK_IDLE_SOCIF_SHIFT 0U
+#define ROGUE_CR_SIDEKICK_IDLE_SOCIF_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_SIDEKICK_IDLE_SOCIF_EN 0x00000001U
+
+/* Register ROGUE_CR_MARS_IDLE */
+#define ROGUE_CR_MARS_IDLE 0x08F8U
+#define ROGUE_CR_MARS_IDLE_MASKFULL 0x0000000000000007ULL
+#define ROGUE_CR_MARS_IDLE_MH_SYSARB0_SHIFT 2U
+#define ROGUE_CR_MARS_IDLE_MH_SYSARB0_CLRMSK 0xFFFFFFFBU
+#define ROGUE_CR_MARS_IDLE_MH_SYSARB0_EN 0x00000004U
+#define ROGUE_CR_MARS_IDLE_CPU_SHIFT 1U
+#define ROGUE_CR_MARS_IDLE_CPU_CLRMSK 0xFFFFFFFDU
+#define ROGUE_CR_MARS_IDLE_CPU_EN 0x00000002U
+#define ROGUE_CR_MARS_IDLE_SOCIF_SHIFT 0U
+#define ROGUE_CR_MARS_IDLE_SOCIF_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_MARS_IDLE_SOCIF_EN 0x00000001U
+
+/* Register ROGUE_CR_VDM_CONTEXT_STORE_STATUS */
+#define ROGUE_CR_VDM_CONTEXT_STORE_STATUS 0x0430U
+#define ROGUE_CR_VDM_CONTEXT_STORE_STATUS_MASKFULL 0x00000000000000F3ULL
+#define ROGUE_CR_VDM_CONTEXT_STORE_STATUS_LAST_PIPE_SHIFT 4U
+#define ROGUE_CR_VDM_CONTEXT_STORE_STATUS_LAST_PIPE_CLRMSK 0xFFFFFF0FU
+#define ROGUE_CR_VDM_CONTEXT_STORE_STATUS_NEED_RESUME_SHIFT 1U
+#define ROGUE_CR_VDM_CONTEXT_STORE_STATUS_NEED_RESUME_CLRMSK 0xFFFFFFFDU
+#define ROGUE_CR_VDM_CONTEXT_STORE_STATUS_NEED_RESUME_EN 0x00000002U
+#define ROGUE_CR_VDM_CONTEXT_STORE_STATUS_COMPLETE_SHIFT 0U
+#define ROGUE_CR_VDM_CONTEXT_STORE_STATUS_COMPLETE_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_VDM_CONTEXT_STORE_STATUS_COMPLETE_EN 0x00000001U
+
+/* Register ROGUE_CR_VDM_CONTEXT_STORE_TASK0 */
+#define ROGUE_CR_VDM_CONTEXT_STORE_TASK0 0x0438U
+#define ROGUE_CR_VDM_CONTEXT_STORE_TASK0_MASKFULL 0xFFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_VDM_CONTEXT_STORE_TASK0_PDS_STATE1_SHIFT 32U
+#define ROGUE_CR_VDM_CONTEXT_STORE_TASK0_PDS_STATE1_CLRMSK 0x00000000FFFFFFFFULL
+#define ROGUE_CR_VDM_CONTEXT_STORE_TASK0_PDS_STATE0_SHIFT 0U
+#define ROGUE_CR_VDM_CONTEXT_STORE_TASK0_PDS_STATE0_CLRMSK 0xFFFFFFFF00000000ULL
+
+/* Register ROGUE_CR_VDM_CONTEXT_STORE_TASK1 */
+#define ROGUE_CR_VDM_CONTEXT_STORE_TASK1 0x0440U
+#define ROGUE_CR_VDM_CONTEXT_STORE_TASK1_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_VDM_CONTEXT_STORE_TASK1_PDS_STATE2_SHIFT 0U
+#define ROGUE_CR_VDM_CONTEXT_STORE_TASK1_PDS_STATE2_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_VDM_CONTEXT_STORE_TASK2 */
+#define ROGUE_CR_VDM_CONTEXT_STORE_TASK2 0x0448U
+#define ROGUE_CR_VDM_CONTEXT_STORE_TASK2_MASKFULL 0xFFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_VDM_CONTEXT_STORE_TASK2_STREAM_OUT2_SHIFT 32U
+#define ROGUE_CR_VDM_CONTEXT_STORE_TASK2_STREAM_OUT2_CLRMSK 0x00000000FFFFFFFFULL
+#define ROGUE_CR_VDM_CONTEXT_STORE_TASK2_STREAM_OUT1_SHIFT 0U
+#define ROGUE_CR_VDM_CONTEXT_STORE_TASK2_STREAM_OUT1_CLRMSK 0xFFFFFFFF00000000ULL
+
+/* Register ROGUE_CR_VDM_CONTEXT_RESUME_TASK0 */
+#define ROGUE_CR_VDM_CONTEXT_RESUME_TASK0 0x0450U
+#define ROGUE_CR_VDM_CONTEXT_RESUME_TASK0_MASKFULL 0xFFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_VDM_CONTEXT_RESUME_TASK0_PDS_STATE1_SHIFT 32U
+#define ROGUE_CR_VDM_CONTEXT_RESUME_TASK0_PDS_STATE1_CLRMSK 0x00000000FFFFFFFFULL
+#define ROGUE_CR_VDM_CONTEXT_RESUME_TASK0_PDS_STATE0_SHIFT 0U
+#define ROGUE_CR_VDM_CONTEXT_RESUME_TASK0_PDS_STATE0_CLRMSK 0xFFFFFFFF00000000ULL
+
+/* Register ROGUE_CR_VDM_CONTEXT_RESUME_TASK1 */
+#define ROGUE_CR_VDM_CONTEXT_RESUME_TASK1 0x0458U
+#define ROGUE_CR_VDM_CONTEXT_RESUME_TASK1_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_VDM_CONTEXT_RESUME_TASK1_PDS_STATE2_SHIFT 0U
+#define ROGUE_CR_VDM_CONTEXT_RESUME_TASK1_PDS_STATE2_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_VDM_CONTEXT_RESUME_TASK2 */
+#define ROGUE_CR_VDM_CONTEXT_RESUME_TASK2 0x0460U
+#define ROGUE_CR_VDM_CONTEXT_RESUME_TASK2_MASKFULL 0xFFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_VDM_CONTEXT_RESUME_TASK2_STREAM_OUT2_SHIFT 32U
+#define ROGUE_CR_VDM_CONTEXT_RESUME_TASK2_STREAM_OUT2_CLRMSK 0x00000000FFFFFFFFULL
+#define ROGUE_CR_VDM_CONTEXT_RESUME_TASK2_STREAM_OUT1_SHIFT 0U
+#define ROGUE_CR_VDM_CONTEXT_RESUME_TASK2_STREAM_OUT1_CLRMSK 0xFFFFFFFF00000000ULL
+
+/* Register ROGUE_CR_CDM_CONTEXT_STORE_STATUS */
+#define ROGUE_CR_CDM_CONTEXT_STORE_STATUS 0x04A0U
+#define ROGUE_CR_CDM_CONTEXT_STORE_STATUS_MASKFULL 0x0000000000000003ULL
+#define ROGUE_CR_CDM_CONTEXT_STORE_STATUS_NEED_RESUME_SHIFT 1U
+#define ROGUE_CR_CDM_CONTEXT_STORE_STATUS_NEED_RESUME_CLRMSK 0xFFFFFFFDU
+#define ROGUE_CR_CDM_CONTEXT_STORE_STATUS_NEED_RESUME_EN 0x00000002U
+#define ROGUE_CR_CDM_CONTEXT_STORE_STATUS_COMPLETE_SHIFT 0U
+#define ROGUE_CR_CDM_CONTEXT_STORE_STATUS_COMPLETE_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_CDM_CONTEXT_STORE_STATUS_COMPLETE_EN 0x00000001U
+
+/* Register ROGUE_CR_CDM_CONTEXT_PDS0 */
+#define ROGUE_CR_CDM_CONTEXT_PDS0 0x04A8U
+#define ROGUE_CR_CDM_CONTEXT_PDS0_MASKFULL 0xFFFFFFF0FFFFFFF0ULL
+#define ROGUE_CR_CDM_CONTEXT_PDS0_DATA_ADDR_SHIFT 36U
+#define ROGUE_CR_CDM_CONTEXT_PDS0_DATA_ADDR_CLRMSK 0x0000000FFFFFFFFFULL
+#define ROGUE_CR_CDM_CONTEXT_PDS0_DATA_ADDR_ALIGNSHIFT 4U
+#define ROGUE_CR_CDM_CONTEXT_PDS0_DATA_ADDR_ALIGNSIZE 16U
+#define ROGUE_CR_CDM_CONTEXT_PDS0_CODE_ADDR_SHIFT 4U
+#define ROGUE_CR_CDM_CONTEXT_PDS0_CODE_ADDR_CLRMSK 0xFFFFFFFF0000000FULL
+#define ROGUE_CR_CDM_CONTEXT_PDS0_CODE_ADDR_ALIGNSHIFT 4U
+#define ROGUE_CR_CDM_CONTEXT_PDS0_CODE_ADDR_ALIGNSIZE 16U
+
+/* Register ROGUE_CR_CDM_CONTEXT_PDS1 */
+#define ROGUE_CR_CDM_CONTEXT_PDS1 0x04B0U
+#define ROGUE_CR_CDM_CONTEXT_PDS1__TEMPSIZE8__MASKFULL 0x000000007FFFFFFFULL
+#define ROGUE_CR_CDM_CONTEXT_PDS1_MASKFULL 0x000000003FFFFFFFULL
+#define ROGUE_CR_CDM_CONTEXT_PDS1__TEMPSIZE8__PDS_SEQ_DEP_SHIFT 30U
+#define ROGUE_CR_CDM_CONTEXT_PDS1__TEMPSIZE8__PDS_SEQ_DEP_CLRMSK 0xBFFFFFFFU
+#define ROGUE_CR_CDM_CONTEXT_PDS1__TEMPSIZE8__PDS_SEQ_DEP_EN 0x40000000U
+#define ROGUE_CR_CDM_CONTEXT_PDS1_PDS_SEQ_DEP_SHIFT 29U
+#define ROGUE_CR_CDM_CONTEXT_PDS1_PDS_SEQ_DEP_CLRMSK 0xDFFFFFFFU
+#define ROGUE_CR_CDM_CONTEXT_PDS1_PDS_SEQ_DEP_EN 0x20000000U
+#define ROGUE_CR_CDM_CONTEXT_PDS1__TEMPSIZE8__USC_SEQ_DEP_SHIFT 29U
+#define ROGUE_CR_CDM_CONTEXT_PDS1__TEMPSIZE8__USC_SEQ_DEP_CLRMSK 0xDFFFFFFFU
+#define ROGUE_CR_CDM_CONTEXT_PDS1__TEMPSIZE8__USC_SEQ_DEP_EN 0x20000000U
+#define ROGUE_CR_CDM_CONTEXT_PDS1_USC_SEQ_DEP_SHIFT 28U
+#define ROGUE_CR_CDM_CONTEXT_PDS1_USC_SEQ_DEP_CLRMSK 0xEFFFFFFFU
+#define ROGUE_CR_CDM_CONTEXT_PDS1_USC_SEQ_DEP_EN 0x10000000U
+#define ROGUE_CR_CDM_CONTEXT_PDS1__TEMPSIZE8__TARGET_SHIFT 28U
+#define ROGUE_CR_CDM_CONTEXT_PDS1__TEMPSIZE8__TARGET_CLRMSK 0xEFFFFFFFU
+#define ROGUE_CR_CDM_CONTEXT_PDS1__TEMPSIZE8__TARGET_EN 0x10000000U
+#define ROGUE_CR_CDM_CONTEXT_PDS1_TARGET_SHIFT 27U
+#define ROGUE_CR_CDM_CONTEXT_PDS1_TARGET_CLRMSK 0xF7FFFFFFU
+#define ROGUE_CR_CDM_CONTEXT_PDS1_TARGET_EN 0x08000000U
+#define ROGUE_CR_CDM_CONTEXT_PDS1__TEMPSIZE8__UNIFIED_SIZE_SHIFT 22U
+#define ROGUE_CR_CDM_CONTEXT_PDS1__TEMPSIZE8__UNIFIED_SIZE_CLRMSK 0xF03FFFFFU
+#define ROGUE_CR_CDM_CONTEXT_PDS1_UNIFIED_SIZE_SHIFT 21U
+#define ROGUE_CR_CDM_CONTEXT_PDS1_UNIFIED_SIZE_CLRMSK 0xF81FFFFFU
+#define ROGUE_CR_CDM_CONTEXT_PDS1__TEMPSIZE8__COMMON_SHARED_SHIFT 21U
+#define ROGUE_CR_CDM_CONTEXT_PDS1__TEMPSIZE8__COMMON_SHARED_CLRMSK 0xFFDFFFFFU
+#define ROGUE_CR_CDM_CONTEXT_PDS1__TEMPSIZE8__COMMON_SHARED_EN 0x00200000U
+#define ROGUE_CR_CDM_CONTEXT_PDS1_COMMON_SHARED_SHIFT 20U
+#define ROGUE_CR_CDM_CONTEXT_PDS1_COMMON_SHARED_CLRMSK 0xFFEFFFFFU
+#define ROGUE_CR_CDM_CONTEXT_PDS1_COMMON_SHARED_EN 0x00100000U
+#define ROGUE_CR_CDM_CONTEXT_PDS1__TEMPSIZE8__COMMON_SIZE_SHIFT 12U
+#define ROGUE_CR_CDM_CONTEXT_PDS1__TEMPSIZE8__COMMON_SIZE_CLRMSK 0xFFE00FFFU
+#define ROGUE_CR_CDM_CONTEXT_PDS1_COMMON_SIZE_SHIFT 11U
+#define ROGUE_CR_CDM_CONTEXT_PDS1_COMMON_SIZE_CLRMSK 0xFFF007FFU
+#define ROGUE_CR_CDM_CONTEXT_PDS1_TEMP_SIZE_SHIFT 7U
+#define ROGUE_CR_CDM_CONTEXT_PDS1_TEMP_SIZE_CLRMSK 0xFFFFF87FU
+#define ROGUE_CR_CDM_CONTEXT_PDS1__TEMPSIZE8__TEMP_SIZE_SHIFT 7U
+#define ROGUE_CR_CDM_CONTEXT_PDS1__TEMPSIZE8__TEMP_SIZE_CLRMSK 0xFFFFF07FU
+#define ROGUE_CR_CDM_CONTEXT_PDS1_DATA_SIZE_SHIFT 1U
+#define ROGUE_CR_CDM_CONTEXT_PDS1_DATA_SIZE_CLRMSK 0xFFFFFF81U
+#define ROGUE_CR_CDM_CONTEXT_PDS1_FENCE_SHIFT 0U
+#define ROGUE_CR_CDM_CONTEXT_PDS1_FENCE_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_CDM_CONTEXT_PDS1_FENCE_EN 0x00000001U
+
+/* Register ROGUE_CR_CDM_TERMINATE_PDS */
+#define ROGUE_CR_CDM_TERMINATE_PDS 0x04B8U
+#define ROGUE_CR_CDM_TERMINATE_PDS_MASKFULL 0xFFFFFFF0FFFFFFF0ULL
+#define ROGUE_CR_CDM_TERMINATE_PDS_DATA_ADDR_SHIFT 36U
+#define ROGUE_CR_CDM_TERMINATE_PDS_DATA_ADDR_CLRMSK 0x0000000FFFFFFFFFULL
+#define ROGUE_CR_CDM_TERMINATE_PDS_DATA_ADDR_ALIGNSHIFT 4U
+#define ROGUE_CR_CDM_TERMINATE_PDS_DATA_ADDR_ALIGNSIZE 16U
+#define ROGUE_CR_CDM_TERMINATE_PDS_CODE_ADDR_SHIFT 4U
+#define ROGUE_CR_CDM_TERMINATE_PDS_CODE_ADDR_CLRMSK 0xFFFFFFFF0000000FULL
+#define ROGUE_CR_CDM_TERMINATE_PDS_CODE_ADDR_ALIGNSHIFT 4U
+#define ROGUE_CR_CDM_TERMINATE_PDS_CODE_ADDR_ALIGNSIZE 16U
+
+/* Register ROGUE_CR_CDM_TERMINATE_PDS1 */
+#define ROGUE_CR_CDM_TERMINATE_PDS1 0x04C0U
+#define ROGUE_CR_CDM_TERMINATE_PDS1__TEMPSIZE8__MASKFULL 0x000000007FFFFFFFULL
+#define ROGUE_CR_CDM_TERMINATE_PDS1_MASKFULL 0x000000003FFFFFFFULL
+#define ROGUE_CR_CDM_TERMINATE_PDS1__TEMPSIZE8__PDS_SEQ_DEP_SHIFT 30U
+#define ROGUE_CR_CDM_TERMINATE_PDS1__TEMPSIZE8__PDS_SEQ_DEP_CLRMSK 0xBFFFFFFFU
+#define ROGUE_CR_CDM_TERMINATE_PDS1__TEMPSIZE8__PDS_SEQ_DEP_EN 0x40000000U
+#define ROGUE_CR_CDM_TERMINATE_PDS1_PDS_SEQ_DEP_SHIFT 29U
+#define ROGUE_CR_CDM_TERMINATE_PDS1_PDS_SEQ_DEP_CLRMSK 0xDFFFFFFFU
+#define ROGUE_CR_CDM_TERMINATE_PDS1_PDS_SEQ_DEP_EN 0x20000000U
+#define ROGUE_CR_CDM_TERMINATE_PDS1__TEMPSIZE8__USC_SEQ_DEP_SHIFT 29U
+#define ROGUE_CR_CDM_TERMINATE_PDS1__TEMPSIZE8__USC_SEQ_DEP_CLRMSK 0xDFFFFFFFU
+#define ROGUE_CR_CDM_TERMINATE_PDS1__TEMPSIZE8__USC_SEQ_DEP_EN 0x20000000U
+#define ROGUE_CR_CDM_TERMINATE_PDS1_USC_SEQ_DEP_SHIFT 28U
+#define ROGUE_CR_CDM_TERMINATE_PDS1_USC_SEQ_DEP_CLRMSK 0xEFFFFFFFU
+#define ROGUE_CR_CDM_TERMINATE_PDS1_USC_SEQ_DEP_EN 0x10000000U
+#define ROGUE_CR_CDM_TERMINATE_PDS1__TEMPSIZE8__TARGET_SHIFT 28U
+#define ROGUE_CR_CDM_TERMINATE_PDS1__TEMPSIZE8__TARGET_CLRMSK 0xEFFFFFFFU
+#define ROGUE_CR_CDM_TERMINATE_PDS1__TEMPSIZE8__TARGET_EN 0x10000000U
+#define ROGUE_CR_CDM_TERMINATE_PDS1_TARGET_SHIFT 27U
+#define ROGUE_CR_CDM_TERMINATE_PDS1_TARGET_CLRMSK 0xF7FFFFFFU
+#define ROGUE_CR_CDM_TERMINATE_PDS1_TARGET_EN 0x08000000U
+#define ROGUE_CR_CDM_TERMINATE_PDS1__TEMPSIZE8__UNIFIED_SIZE_SHIFT 22U
+#define ROGUE_CR_CDM_TERMINATE_PDS1__TEMPSIZE8__UNIFIED_SIZE_CLRMSK 0xF03FFFFFU
+#define ROGUE_CR_CDM_TERMINATE_PDS1_UNIFIED_SIZE_SHIFT 21U
+#define ROGUE_CR_CDM_TERMINATE_PDS1_UNIFIED_SIZE_CLRMSK 0xF81FFFFFU
+#define ROGUE_CR_CDM_TERMINATE_PDS1__TEMPSIZE8__COMMON_SHARED_SHIFT 21U
+#define ROGUE_CR_CDM_TERMINATE_PDS1__TEMPSIZE8__COMMON_SHARED_CLRMSK 0xFFDFFFFFU
+#define ROGUE_CR_CDM_TERMINATE_PDS1__TEMPSIZE8__COMMON_SHARED_EN 0x00200000U
+#define ROGUE_CR_CDM_TERMINATE_PDS1_COMMON_SHARED_SHIFT 20U
+#define ROGUE_CR_CDM_TERMINATE_PDS1_COMMON_SHARED_CLRMSK 0xFFEFFFFFU
+#define ROGUE_CR_CDM_TERMINATE_PDS1_COMMON_SHARED_EN 0x00100000U
+#define ROGUE_CR_CDM_TERMINATE_PDS1__TEMPSIZE8__COMMON_SIZE_SHIFT 12U
+#define ROGUE_CR_CDM_TERMINATE_PDS1__TEMPSIZE8__COMMON_SIZE_CLRMSK 0xFFE00FFFU
+#define ROGUE_CR_CDM_TERMINATE_PDS1_COMMON_SIZE_SHIFT 11U
+#define ROGUE_CR_CDM_TERMINATE_PDS1_COMMON_SIZE_CLRMSK 0xFFF007FFU
+#define ROGUE_CR_CDM_TERMINATE_PDS1_TEMP_SIZE_SHIFT 7U
+#define ROGUE_CR_CDM_TERMINATE_PDS1_TEMP_SIZE_CLRMSK 0xFFFFF87FU
+#define ROGUE_CR_CDM_TERMINATE_PDS1__TEMPSIZE8__TEMP_SIZE_SHIFT 7U
+#define ROGUE_CR_CDM_TERMINATE_PDS1__TEMPSIZE8__TEMP_SIZE_CLRMSK 0xFFFFF07FU
+#define ROGUE_CR_CDM_TERMINATE_PDS1_DATA_SIZE_SHIFT 1U
+#define ROGUE_CR_CDM_TERMINATE_PDS1_DATA_SIZE_CLRMSK 0xFFFFFF81U
+#define ROGUE_CR_CDM_TERMINATE_PDS1_FENCE_SHIFT 0U
+#define ROGUE_CR_CDM_TERMINATE_PDS1_FENCE_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_CDM_TERMINATE_PDS1_FENCE_EN 0x00000001U
+
+/* Register ROGUE_CR_CDM_CONTEXT_LOAD_PDS0 */
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS0 0x04D8U
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS0_MASKFULL 0xFFFFFFF0FFFFFFF0ULL
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS0_DATA_ADDR_SHIFT 36U
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS0_DATA_ADDR_CLRMSK 0x0000000FFFFFFFFFULL
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS0_DATA_ADDR_ALIGNSHIFT 4U
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS0_DATA_ADDR_ALIGNSIZE 16U
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS0_CODE_ADDR_SHIFT 4U
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS0_CODE_ADDR_CLRMSK 0xFFFFFFFF0000000FULL
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS0_CODE_ADDR_ALIGNSHIFT 4U
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS0_CODE_ADDR_ALIGNSIZE 16U
+
+/* Register ROGUE_CR_CDM_CONTEXT_LOAD_PDS1 */
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS1 0x04E0U
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS1__TEMPSIZE8__MASKFULL 0x000000007FFFFFFFULL
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS1_MASKFULL 0x000000003FFFFFFFULL
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS1__TEMPSIZE8__PDS_SEQ_DEP_SHIFT 30U
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS1__TEMPSIZE8__PDS_SEQ_DEP_CLRMSK 0xBFFFFFFFU
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS1__TEMPSIZE8__PDS_SEQ_DEP_EN 0x40000000U
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS1_PDS_SEQ_DEP_SHIFT 29U
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS1_PDS_SEQ_DEP_CLRMSK 0xDFFFFFFFU
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS1_PDS_SEQ_DEP_EN 0x20000000U
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS1__TEMPSIZE8__USC_SEQ_DEP_SHIFT 29U
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS1__TEMPSIZE8__USC_SEQ_DEP_CLRMSK 0xDFFFFFFFU
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS1__TEMPSIZE8__USC_SEQ_DEP_EN 0x20000000U
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS1_USC_SEQ_DEP_SHIFT 28U
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS1_USC_SEQ_DEP_CLRMSK 0xEFFFFFFFU
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS1_USC_SEQ_DEP_EN 0x10000000U
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS1__TEMPSIZE8__TARGET_SHIFT 28U
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS1__TEMPSIZE8__TARGET_CLRMSK 0xEFFFFFFFU
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS1__TEMPSIZE8__TARGET_EN 0x10000000U
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS1_TARGET_SHIFT 27U
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS1_TARGET_CLRMSK 0xF7FFFFFFU
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS1_TARGET_EN 0x08000000U
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS1__TEMPSIZE8__UNIFIED_SIZE_SHIFT 22U
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS1__TEMPSIZE8__UNIFIED_SIZE_CLRMSK 0xF03FFFFFU
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS1_UNIFIED_SIZE_SHIFT 21U
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS1_UNIFIED_SIZE_CLRMSK 0xF81FFFFFU
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS1__TEMPSIZE8__COMMON_SHARED_SHIFT 21U
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS1__TEMPSIZE8__COMMON_SHARED_CLRMSK 0xFFDFFFFFU
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS1__TEMPSIZE8__COMMON_SHARED_EN 0x00200000U
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS1_COMMON_SHARED_SHIFT 20U
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS1_COMMON_SHARED_CLRMSK 0xFFEFFFFFU
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS1_COMMON_SHARED_EN 0x00100000U
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS1__TEMPSIZE8__COMMON_SIZE_SHIFT 12U
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS1__TEMPSIZE8__COMMON_SIZE_CLRMSK 0xFFE00FFFU
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS1_COMMON_SIZE_SHIFT 11U
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS1_COMMON_SIZE_CLRMSK 0xFFF007FFU
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS1_TEMP_SIZE_SHIFT 7U
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS1_TEMP_SIZE_CLRMSK 0xFFFFF87FU
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS1__TEMPSIZE8__TEMP_SIZE_SHIFT 7U
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS1__TEMPSIZE8__TEMP_SIZE_CLRMSK 0xFFFFF07FU
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS1_DATA_SIZE_SHIFT 1U
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS1_DATA_SIZE_CLRMSK 0xFFFFFF81U
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS1_FENCE_SHIFT 0U
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS1_FENCE_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS1_FENCE_EN 0x00000001U
+
+/* Register ROGUE_CR_MIPS_WRAPPER_CONFIG */
+#define ROGUE_CR_MIPS_WRAPPER_CONFIG 0x0810U
+#define ROGUE_CR_MIPS_WRAPPER_CONFIG_MASKFULL 0x000001030F01FFFFULL
+#define ROGUE_CR_MIPS_WRAPPER_CONFIG_FW_IDLE_ENABLE_SHIFT 40U
+#define ROGUE_CR_MIPS_WRAPPER_CONFIG_FW_IDLE_ENABLE_CLRMSK 0xFFFFFEFFFFFFFFFFULL
+#define ROGUE_CR_MIPS_WRAPPER_CONFIG_FW_IDLE_ENABLE_EN 0x0000010000000000ULL
+#define ROGUE_CR_MIPS_WRAPPER_CONFIG_DISABLE_BOOT_SHIFT 33U
+#define ROGUE_CR_MIPS_WRAPPER_CONFIG_DISABLE_BOOT_CLRMSK 0xFFFFFFFDFFFFFFFFULL
+#define ROGUE_CR_MIPS_WRAPPER_CONFIG_DISABLE_BOOT_EN 0x0000000200000000ULL
+#define ROGUE_CR_MIPS_WRAPPER_CONFIG_L2_CACHE_OFF_SHIFT 32U
+#define ROGUE_CR_MIPS_WRAPPER_CONFIG_L2_CACHE_OFF_CLRMSK 0xFFFFFFFEFFFFFFFFULL
+#define ROGUE_CR_MIPS_WRAPPER_CONFIG_L2_CACHE_OFF_EN 0x0000000100000000ULL
+#define ROGUE_CR_MIPS_WRAPPER_CONFIG_OS_ID_SHIFT 25U
+#define ROGUE_CR_MIPS_WRAPPER_CONFIG_OS_ID_CLRMSK 0xFFFFFFFFF1FFFFFFULL
+#define ROGUE_CR_MIPS_WRAPPER_CONFIG_TRUSTED_SHIFT 24U
+#define ROGUE_CR_MIPS_WRAPPER_CONFIG_TRUSTED_CLRMSK 0xFFFFFFFFFEFFFFFFULL
+#define ROGUE_CR_MIPS_WRAPPER_CONFIG_TRUSTED_EN 0x0000000001000000ULL
+#define ROGUE_CR_MIPS_WRAPPER_CONFIG_BOOT_ISA_MODE_SHIFT 16U
+#define ROGUE_CR_MIPS_WRAPPER_CONFIG_BOOT_ISA_MODE_CLRMSK 0xFFFFFFFFFFFEFFFFULL
+#define ROGUE_CR_MIPS_WRAPPER_CONFIG_BOOT_ISA_MODE_MIPS32 0x0000000000000000ULL
+#define ROGUE_CR_MIPS_WRAPPER_CONFIG_BOOT_ISA_MODE_MICROMIPS 0x0000000000010000ULL
+#define ROGUE_CR_MIPS_WRAPPER_CONFIG_REGBANK_BASE_ADDR_SHIFT 0U
+#define ROGUE_CR_MIPS_WRAPPER_CONFIG_REGBANK_BASE_ADDR_CLRMSK 0xFFFFFFFFFFFF0000ULL
+
+/* Register ROGUE_CR_MIPS_ADDR_REMAP1_CONFIG1 */
+#define ROGUE_CR_MIPS_ADDR_REMAP1_CONFIG1 0x0818U
+#define ROGUE_CR_MIPS_ADDR_REMAP1_CONFIG1_MASKFULL 0x00000000FFFFF001ULL
+#define ROGUE_CR_MIPS_ADDR_REMAP1_CONFIG1_BASE_ADDR_IN_SHIFT 12U
+#define ROGUE_CR_MIPS_ADDR_REMAP1_CONFIG1_BASE_ADDR_IN_CLRMSK 0xFFFFFFFF00000FFFULL
+#define ROGUE_CR_MIPS_ADDR_REMAP1_CONFIG1_MODE_ENABLE_SHIFT 0U
+#define ROGUE_CR_MIPS_ADDR_REMAP1_CONFIG1_MODE_ENABLE_CLRMSK 0xFFFFFFFFFFFFFFFEULL
+#define ROGUE_CR_MIPS_ADDR_REMAP1_CONFIG1_MODE_ENABLE_EN 0x0000000000000001ULL
+
+/* Register ROGUE_CR_MIPS_ADDR_REMAP1_CONFIG2 */
+#define ROGUE_CR_MIPS_ADDR_REMAP1_CONFIG2 0x0820U
+#define ROGUE_CR_MIPS_ADDR_REMAP1_CONFIG2_MASKFULL 0x000000FFFFFFF1FFULL
+#define ROGUE_CR_MIPS_ADDR_REMAP1_CONFIG2_ADDR_OUT_SHIFT 12U
+#define ROGUE_CR_MIPS_ADDR_REMAP1_CONFIG2_ADDR_OUT_CLRMSK 0xFFFFFF0000000FFFULL
+#define ROGUE_CR_MIPS_ADDR_REMAP1_CONFIG2_OS_ID_SHIFT 6U
+#define ROGUE_CR_MIPS_ADDR_REMAP1_CONFIG2_OS_ID_CLRMSK 0xFFFFFFFFFFFFFE3FULL
+#define ROGUE_CR_MIPS_ADDR_REMAP1_CONFIG2_TRUSTED_SHIFT 5U
+#define ROGUE_CR_MIPS_ADDR_REMAP1_CONFIG2_TRUSTED_CLRMSK 0xFFFFFFFFFFFFFFDFULL
+#define ROGUE_CR_MIPS_ADDR_REMAP1_CONFIG2_TRUSTED_EN 0x0000000000000020ULL
+#define ROGUE_CR_MIPS_ADDR_REMAP1_CONFIG2_REGION_SIZE_POW2_SHIFT 0U
+#define ROGUE_CR_MIPS_ADDR_REMAP1_CONFIG2_REGION_SIZE_POW2_CLRMSK 0xFFFFFFFFFFFFFFE0ULL
+
+/* Register ROGUE_CR_MIPS_ADDR_REMAP2_CONFIG1 */
+#define ROGUE_CR_MIPS_ADDR_REMAP2_CONFIG1 0x0828U
+#define ROGUE_CR_MIPS_ADDR_REMAP2_CONFIG1_MASKFULL 0x00000000FFFFF001ULL
+#define ROGUE_CR_MIPS_ADDR_REMAP2_CONFIG1_BASE_ADDR_IN_SHIFT 12U
+#define ROGUE_CR_MIPS_ADDR_REMAP2_CONFIG1_BASE_ADDR_IN_CLRMSK 0xFFFFFFFF00000FFFULL
+#define ROGUE_CR_MIPS_ADDR_REMAP2_CONFIG1_MODE_ENABLE_SHIFT 0U
+#define ROGUE_CR_MIPS_ADDR_REMAP2_CONFIG1_MODE_ENABLE_CLRMSK 0xFFFFFFFFFFFFFFFEULL
+#define ROGUE_CR_MIPS_ADDR_REMAP2_CONFIG1_MODE_ENABLE_EN 0x0000000000000001ULL
+
+/* Register ROGUE_CR_MIPS_ADDR_REMAP2_CONFIG2 */
+#define ROGUE_CR_MIPS_ADDR_REMAP2_CONFIG2 0x0830U
+#define ROGUE_CR_MIPS_ADDR_REMAP2_CONFIG2_MASKFULL 0x000000FFFFFFF1FFULL
+#define ROGUE_CR_MIPS_ADDR_REMAP2_CONFIG2_ADDR_OUT_SHIFT 12U
+#define ROGUE_CR_MIPS_ADDR_REMAP2_CONFIG2_ADDR_OUT_CLRMSK 0xFFFFFF0000000FFFULL
+#define ROGUE_CR_MIPS_ADDR_REMAP2_CONFIG2_OS_ID_SHIFT 6U
+#define ROGUE_CR_MIPS_ADDR_REMAP2_CONFIG2_OS_ID_CLRMSK 0xFFFFFFFFFFFFFE3FULL
+#define ROGUE_CR_MIPS_ADDR_REMAP2_CONFIG2_TRUSTED_SHIFT 5U
+#define ROGUE_CR_MIPS_ADDR_REMAP2_CONFIG2_TRUSTED_CLRMSK 0xFFFFFFFFFFFFFFDFULL
+#define ROGUE_CR_MIPS_ADDR_REMAP2_CONFIG2_TRUSTED_EN 0x0000000000000020ULL
+#define ROGUE_CR_MIPS_ADDR_REMAP2_CONFIG2_REGION_SIZE_POW2_SHIFT 0U
+#define ROGUE_CR_MIPS_ADDR_REMAP2_CONFIG2_REGION_SIZE_POW2_CLRMSK 0xFFFFFFFFFFFFFFE0ULL
+
+/* Register ROGUE_CR_MIPS_ADDR_REMAP3_CONFIG1 */
+#define ROGUE_CR_MIPS_ADDR_REMAP3_CONFIG1 0x0838U
+#define ROGUE_CR_MIPS_ADDR_REMAP3_CONFIG1_MASKFULL 0x00000000FFFFF001ULL
+#define ROGUE_CR_MIPS_ADDR_REMAP3_CONFIG1_BASE_ADDR_IN_SHIFT 12U
+#define ROGUE_CR_MIPS_ADDR_REMAP3_CONFIG1_BASE_ADDR_IN_CLRMSK 0xFFFFFFFF00000FFFULL
+#define ROGUE_CR_MIPS_ADDR_REMAP3_CONFIG1_MODE_ENABLE_SHIFT 0U
+#define ROGUE_CR_MIPS_ADDR_REMAP3_CONFIG1_MODE_ENABLE_CLRMSK 0xFFFFFFFFFFFFFFFEULL
+#define ROGUE_CR_MIPS_ADDR_REMAP3_CONFIG1_MODE_ENABLE_EN 0x0000000000000001ULL
+
+/* Register ROGUE_CR_MIPS_ADDR_REMAP3_CONFIG2 */
+#define ROGUE_CR_MIPS_ADDR_REMAP3_CONFIG2 0x0840U
+#define ROGUE_CR_MIPS_ADDR_REMAP3_CONFIG2_MASKFULL 0x000000FFFFFFF1FFULL
+#define ROGUE_CR_MIPS_ADDR_REMAP3_CONFIG2_ADDR_OUT_SHIFT 12U
+#define ROGUE_CR_MIPS_ADDR_REMAP3_CONFIG2_ADDR_OUT_CLRMSK 0xFFFFFF0000000FFFULL
+#define ROGUE_CR_MIPS_ADDR_REMAP3_CONFIG2_OS_ID_SHIFT 6U
+#define ROGUE_CR_MIPS_ADDR_REMAP3_CONFIG2_OS_ID_CLRMSK 0xFFFFFFFFFFFFFE3FULL
+#define ROGUE_CR_MIPS_ADDR_REMAP3_CONFIG2_TRUSTED_SHIFT 5U
+#define ROGUE_CR_MIPS_ADDR_REMAP3_CONFIG2_TRUSTED_CLRMSK 0xFFFFFFFFFFFFFFDFULL
+#define ROGUE_CR_MIPS_ADDR_REMAP3_CONFIG2_TRUSTED_EN 0x0000000000000020ULL
+#define ROGUE_CR_MIPS_ADDR_REMAP3_CONFIG2_REGION_SIZE_POW2_SHIFT 0U
+#define ROGUE_CR_MIPS_ADDR_REMAP3_CONFIG2_REGION_SIZE_POW2_CLRMSK 0xFFFFFFFFFFFFFFE0ULL
+
+/* Register ROGUE_CR_MIPS_ADDR_REMAP4_CONFIG1 */
+#define ROGUE_CR_MIPS_ADDR_REMAP4_CONFIG1 0x0848U
+#define ROGUE_CR_MIPS_ADDR_REMAP4_CONFIG1_MASKFULL 0x00000000FFFFF001ULL
+#define ROGUE_CR_MIPS_ADDR_REMAP4_CONFIG1_BASE_ADDR_IN_SHIFT 12U
+#define ROGUE_CR_MIPS_ADDR_REMAP4_CONFIG1_BASE_ADDR_IN_CLRMSK 0xFFFFFFFF00000FFFULL
+#define ROGUE_CR_MIPS_ADDR_REMAP4_CONFIG1_MODE_ENABLE_SHIFT 0U
+#define ROGUE_CR_MIPS_ADDR_REMAP4_CONFIG1_MODE_ENABLE_CLRMSK 0xFFFFFFFFFFFFFFFEULL
+#define ROGUE_CR_MIPS_ADDR_REMAP4_CONFIG1_MODE_ENABLE_EN 0x0000000000000001ULL
+
+/* Register ROGUE_CR_MIPS_ADDR_REMAP4_CONFIG2 */
+#define ROGUE_CR_MIPS_ADDR_REMAP4_CONFIG2 0x0850U
+#define ROGUE_CR_MIPS_ADDR_REMAP4_CONFIG2_MASKFULL 0x000000FFFFFFF1FFULL
+#define ROGUE_CR_MIPS_ADDR_REMAP4_CONFIG2_ADDR_OUT_SHIFT 12U
+#define ROGUE_CR_MIPS_ADDR_REMAP4_CONFIG2_ADDR_OUT_CLRMSK 0xFFFFFF0000000FFFULL
+#define ROGUE_CR_MIPS_ADDR_REMAP4_CONFIG2_OS_ID_SHIFT 6U
+#define ROGUE_CR_MIPS_ADDR_REMAP4_CONFIG2_OS_ID_CLRMSK 0xFFFFFFFFFFFFFE3FULL
+#define ROGUE_CR_MIPS_ADDR_REMAP4_CONFIG2_TRUSTED_SHIFT 5U
+#define ROGUE_CR_MIPS_ADDR_REMAP4_CONFIG2_TRUSTED_CLRMSK 0xFFFFFFFFFFFFFFDFULL
+#define ROGUE_CR_MIPS_ADDR_REMAP4_CONFIG2_TRUSTED_EN 0x0000000000000020ULL
+#define ROGUE_CR_MIPS_ADDR_REMAP4_CONFIG2_REGION_SIZE_POW2_SHIFT 0U
+#define ROGUE_CR_MIPS_ADDR_REMAP4_CONFIG2_REGION_SIZE_POW2_CLRMSK 0xFFFFFFFFFFFFFFE0ULL
+
+/* Register ROGUE_CR_MIPS_ADDR_REMAP5_CONFIG1 */
+#define ROGUE_CR_MIPS_ADDR_REMAP5_CONFIG1 0x0858U
+#define ROGUE_CR_MIPS_ADDR_REMAP5_CONFIG1_MASKFULL 0x00000000FFFFF001ULL
+#define ROGUE_CR_MIPS_ADDR_REMAP5_CONFIG1_BASE_ADDR_IN_SHIFT 12U
+#define ROGUE_CR_MIPS_ADDR_REMAP5_CONFIG1_BASE_ADDR_IN_CLRMSK 0xFFFFFFFF00000FFFULL
+#define ROGUE_CR_MIPS_ADDR_REMAP5_CONFIG1_MODE_ENABLE_SHIFT 0U
+#define ROGUE_CR_MIPS_ADDR_REMAP5_CONFIG1_MODE_ENABLE_CLRMSK 0xFFFFFFFFFFFFFFFEULL
+#define ROGUE_CR_MIPS_ADDR_REMAP5_CONFIG1_MODE_ENABLE_EN 0x0000000000000001ULL
+
+/* Register ROGUE_CR_MIPS_ADDR_REMAP5_CONFIG2 */
+#define ROGUE_CR_MIPS_ADDR_REMAP5_CONFIG2 0x0860U
+#define ROGUE_CR_MIPS_ADDR_REMAP5_CONFIG2_MASKFULL 0x000000FFFFFFF1FFULL
+#define ROGUE_CR_MIPS_ADDR_REMAP5_CONFIG2_ADDR_OUT_SHIFT 12U
+#define ROGUE_CR_MIPS_ADDR_REMAP5_CONFIG2_ADDR_OUT_CLRMSK 0xFFFFFF0000000FFFULL
+#define ROGUE_CR_MIPS_ADDR_REMAP5_CONFIG2_OS_ID_SHIFT 6U
+#define ROGUE_CR_MIPS_ADDR_REMAP5_CONFIG2_OS_ID_CLRMSK 0xFFFFFFFFFFFFFE3FULL
+#define ROGUE_CR_MIPS_ADDR_REMAP5_CONFIG2_TRUSTED_SHIFT 5U
+#define ROGUE_CR_MIPS_ADDR_REMAP5_CONFIG2_TRUSTED_CLRMSK 0xFFFFFFFFFFFFFFDFULL
+#define ROGUE_CR_MIPS_ADDR_REMAP5_CONFIG2_TRUSTED_EN 0x0000000000000020ULL
+#define ROGUE_CR_MIPS_ADDR_REMAP5_CONFIG2_REGION_SIZE_POW2_SHIFT 0U
+#define ROGUE_CR_MIPS_ADDR_REMAP5_CONFIG2_REGION_SIZE_POW2_CLRMSK 0xFFFFFFFFFFFFFFE0ULL
+
+/* Register ROGUE_CR_MIPS_ADDR_REMAP_UNMAPPED_STATUS */
+#define ROGUE_CR_MIPS_ADDR_REMAP_UNMAPPED_STATUS 0x0868U
+#define ROGUE_CR_MIPS_ADDR_REMAP_UNMAPPED_STATUS_MASKFULL 0x00000001FFFFFFFFULL
+#define ROGUE_CR_MIPS_ADDR_REMAP_UNMAPPED_STATUS_EVENT_SHIFT 32U
+#define ROGUE_CR_MIPS_ADDR_REMAP_UNMAPPED_STATUS_EVENT_CLRMSK 0xFFFFFFFEFFFFFFFFULL
+#define ROGUE_CR_MIPS_ADDR_REMAP_UNMAPPED_STATUS_EVENT_EN 0x0000000100000000ULL
+#define ROGUE_CR_MIPS_ADDR_REMAP_UNMAPPED_STATUS_ADDRESS_SHIFT 0U
+#define ROGUE_CR_MIPS_ADDR_REMAP_UNMAPPED_STATUS_ADDRESS_CLRMSK 0xFFFFFFFF00000000ULL
+
+/* Register ROGUE_CR_MIPS_ADDR_REMAP_UNMAPPED_CLEAR */
+#define ROGUE_CR_MIPS_ADDR_REMAP_UNMAPPED_CLEAR 0x0870U
+#define ROGUE_CR_MIPS_ADDR_REMAP_UNMAPPED_CLEAR_MASKFULL 0x0000000000000001ULL
+#define ROGUE_CR_MIPS_ADDR_REMAP_UNMAPPED_CLEAR_EVENT_SHIFT 0U
+#define ROGUE_CR_MIPS_ADDR_REMAP_UNMAPPED_CLEAR_EVENT_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_MIPS_ADDR_REMAP_UNMAPPED_CLEAR_EVENT_EN 0x00000001U
+
+/* Register ROGUE_CR_MIPS_ADDR_REMAP_RANGE_CONFIG */
+#define ROGUE_CR_MIPS_ADDR_REMAP_RANGE_CONFIG 0x0878U
+#define ROGUE_CR_MIPS_ADDR_REMAP_RANGE_CONFIG_MASKFULL 0xFFFFFFF7FFFFFFBFULL
+#define ROGUE_CR_MIPS_ADDR_REMAP_RANGE_CONFIG_ADDR_OUT_SHIFT 36U
+#define ROGUE_CR_MIPS_ADDR_REMAP_RANGE_CONFIG_ADDR_OUT_CLRMSK 0x0000000FFFFFFFFFULL
+#define ROGUE_CR_MIPS_ADDR_REMAP_RANGE_CONFIG_OS_ID_SHIFT 32U
+#define ROGUE_CR_MIPS_ADDR_REMAP_RANGE_CONFIG_OS_ID_CLRMSK 0xFFFFFFF8FFFFFFFFULL
+#define ROGUE_CR_MIPS_ADDR_REMAP_RANGE_CONFIG_BASE_ADDR_IN_SHIFT 12U
+#define ROGUE_CR_MIPS_ADDR_REMAP_RANGE_CONFIG_BASE_ADDR_IN_CLRMSK 0xFFFFFFFF00000FFFULL
+#define ROGUE_CR_MIPS_ADDR_REMAP_RANGE_CONFIG_TRUSTED_SHIFT 11U
+#define ROGUE_CR_MIPS_ADDR_REMAP_RANGE_CONFIG_TRUSTED_CLRMSK 0xFFFFFFFFFFFFF7FFULL
+#define ROGUE_CR_MIPS_ADDR_REMAP_RANGE_CONFIG_TRUSTED_EN 0x0000000000000800ULL
+#define ROGUE_CR_MIPS_ADDR_REMAP_RANGE_CONFIG_REGION_SIZE_SHIFT 7U
+#define ROGUE_CR_MIPS_ADDR_REMAP_RANGE_CONFIG_REGION_SIZE_CLRMSK 0xFFFFFFFFFFFFF87FULL
+#define ROGUE_CR_MIPS_ADDR_REMAP_RANGE_CONFIG_REGION_SIZE_4KB 0x0000000000000000ULL
+#define ROGUE_CR_MIPS_ADDR_REMAP_RANGE_CONFIG_REGION_SIZE_16KB 0x0000000000000080ULL
+#define ROGUE_CR_MIPS_ADDR_REMAP_RANGE_CONFIG_REGION_SIZE_64KB 0x0000000000000100ULL
+#define ROGUE_CR_MIPS_ADDR_REMAP_RANGE_CONFIG_REGION_SIZE_256KB 0x0000000000000180ULL
+#define ROGUE_CR_MIPS_ADDR_REMAP_RANGE_CONFIG_REGION_SIZE_1MB 0x0000000000000200ULL
+#define ROGUE_CR_MIPS_ADDR_REMAP_RANGE_CONFIG_REGION_SIZE_4MB 0x0000000000000280ULL
+#define ROGUE_CR_MIPS_ADDR_REMAP_RANGE_CONFIG_REGION_SIZE_16MB 0x0000000000000300ULL
+#define ROGUE_CR_MIPS_ADDR_REMAP_RANGE_CONFIG_REGION_SIZE_64MB 0x0000000000000380ULL
+#define ROGUE_CR_MIPS_ADDR_REMAP_RANGE_CONFIG_REGION_SIZE_256MB 0x0000000000000400ULL
+#define ROGUE_CR_MIPS_ADDR_REMAP_RANGE_CONFIG_ENTRY_SHIFT 1U
+#define ROGUE_CR_MIPS_ADDR_REMAP_RANGE_CONFIG_ENTRY_CLRMSK 0xFFFFFFFFFFFFFFC1ULL
+#define ROGUE_CR_MIPS_ADDR_REMAP_RANGE_CONFIG_MODE_ENABLE_SHIFT 0U
+#define ROGUE_CR_MIPS_ADDR_REMAP_RANGE_CONFIG_MODE_ENABLE_CLRMSK 0xFFFFFFFFFFFFFFFEULL
+#define ROGUE_CR_MIPS_ADDR_REMAP_RANGE_CONFIG_MODE_ENABLE_EN 0x0000000000000001ULL
+
+/* Register ROGUE_CR_MIPS_ADDR_REMAP_RANGE_READ */
+#define ROGUE_CR_MIPS_ADDR_REMAP_RANGE_READ 0x0880U
+#define ROGUE_CR_MIPS_ADDR_REMAP_RANGE_READ_MASKFULL 0x000000000000003FULL
+#define ROGUE_CR_MIPS_ADDR_REMAP_RANGE_READ_ENTRY_SHIFT 1U
+#define ROGUE_CR_MIPS_ADDR_REMAP_RANGE_READ_ENTRY_CLRMSK 0xFFFFFFC1U
+#define ROGUE_CR_MIPS_ADDR_REMAP_RANGE_READ_REQUEST_SHIFT 0U
+#define ROGUE_CR_MIPS_ADDR_REMAP_RANGE_READ_REQUEST_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_MIPS_ADDR_REMAP_RANGE_READ_REQUEST_EN 0x00000001U
+
+/* Register ROGUE_CR_MIPS_ADDR_REMAP_RANGE_DATA */
+#define ROGUE_CR_MIPS_ADDR_REMAP_RANGE_DATA 0x0888U
+#define ROGUE_CR_MIPS_ADDR_REMAP_RANGE_DATA_MASKFULL 0xFFFFFFF7FFFFFF81ULL
+#define ROGUE_CR_MIPS_ADDR_REMAP_RANGE_DATA_ADDR_OUT_SHIFT 36U
+#define ROGUE_CR_MIPS_ADDR_REMAP_RANGE_DATA_ADDR_OUT_CLRMSK 0x0000000FFFFFFFFFULL
+#define ROGUE_CR_MIPS_ADDR_REMAP_RANGE_DATA_OS_ID_SHIFT 32U
+#define ROGUE_CR_MIPS_ADDR_REMAP_RANGE_DATA_OS_ID_CLRMSK 0xFFFFFFF8FFFFFFFFULL
+#define ROGUE_CR_MIPS_ADDR_REMAP_RANGE_DATA_BASE_ADDR_IN_SHIFT 12U
+#define ROGUE_CR_MIPS_ADDR_REMAP_RANGE_DATA_BASE_ADDR_IN_CLRMSK 0xFFFFFFFF00000FFFULL
+#define ROGUE_CR_MIPS_ADDR_REMAP_RANGE_DATA_TRUSTED_SHIFT 11U
+#define ROGUE_CR_MIPS_ADDR_REMAP_RANGE_DATA_TRUSTED_CLRMSK 0xFFFFFFFFFFFFF7FFULL
+#define ROGUE_CR_MIPS_ADDR_REMAP_RANGE_DATA_TRUSTED_EN 0x0000000000000800ULL
+#define ROGUE_CR_MIPS_ADDR_REMAP_RANGE_DATA_REGION_SIZE_SHIFT 7U
+#define ROGUE_CR_MIPS_ADDR_REMAP_RANGE_DATA_REGION_SIZE_CLRMSK 0xFFFFFFFFFFFFF87FULL
+#define ROGUE_CR_MIPS_ADDR_REMAP_RANGE_DATA_MODE_ENABLE_SHIFT 0U
+#define ROGUE_CR_MIPS_ADDR_REMAP_RANGE_DATA_MODE_ENABLE_CLRMSK 0xFFFFFFFFFFFFFFFEULL
+#define ROGUE_CR_MIPS_ADDR_REMAP_RANGE_DATA_MODE_ENABLE_EN 0x0000000000000001ULL
+
+/* Register ROGUE_CR_MIPS_WRAPPER_IRQ_ENABLE */
+#define ROGUE_CR_MIPS_WRAPPER_IRQ_ENABLE 0x08A0U
+#define ROGUE_CR_MIPS_WRAPPER_IRQ_ENABLE_MASKFULL 0x0000000000000001ULL
+#define ROGUE_CR_MIPS_WRAPPER_IRQ_ENABLE_EVENT_SHIFT 0U
+#define ROGUE_CR_MIPS_WRAPPER_IRQ_ENABLE_EVENT_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_MIPS_WRAPPER_IRQ_ENABLE_EVENT_EN 0x00000001U
+
+/* Register ROGUE_CR_MIPS_WRAPPER_IRQ_STATUS */
+#define ROGUE_CR_MIPS_WRAPPER_IRQ_STATUS 0x08A8U
+#define ROGUE_CR_MIPS_WRAPPER_IRQ_STATUS_MASKFULL 0x0000000000000001ULL
+#define ROGUE_CR_MIPS_WRAPPER_IRQ_STATUS_EVENT_SHIFT 0U
+#define ROGUE_CR_MIPS_WRAPPER_IRQ_STATUS_EVENT_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_MIPS_WRAPPER_IRQ_STATUS_EVENT_EN 0x00000001U
+
+/* Register ROGUE_CR_MIPS_WRAPPER_IRQ_CLEAR */
+#define ROGUE_CR_MIPS_WRAPPER_IRQ_CLEAR 0x08B0U
+#define ROGUE_CR_MIPS_WRAPPER_IRQ_CLEAR_MASKFULL 0x0000000000000001ULL
+#define ROGUE_CR_MIPS_WRAPPER_IRQ_CLEAR_EVENT_SHIFT 0U
+#define ROGUE_CR_MIPS_WRAPPER_IRQ_CLEAR_EVENT_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_MIPS_WRAPPER_IRQ_CLEAR_EVENT_EN 0x00000001U
+
+/* Register ROGUE_CR_MIPS_WRAPPER_NMI_ENABLE */
+#define ROGUE_CR_MIPS_WRAPPER_NMI_ENABLE 0x08B8U
+#define ROGUE_CR_MIPS_WRAPPER_NMI_ENABLE_MASKFULL 0x0000000000000001ULL
+#define ROGUE_CR_MIPS_WRAPPER_NMI_ENABLE_EVENT_SHIFT 0U
+#define ROGUE_CR_MIPS_WRAPPER_NMI_ENABLE_EVENT_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_MIPS_WRAPPER_NMI_ENABLE_EVENT_EN 0x00000001U
+
+/* Register ROGUE_CR_MIPS_WRAPPER_NMI_EVENT */
+#define ROGUE_CR_MIPS_WRAPPER_NMI_EVENT 0x08C0U
+#define ROGUE_CR_MIPS_WRAPPER_NMI_EVENT_MASKFULL 0x0000000000000001ULL
+#define ROGUE_CR_MIPS_WRAPPER_NMI_EVENT_TRIGGER_SHIFT 0U
+#define ROGUE_CR_MIPS_WRAPPER_NMI_EVENT_TRIGGER_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_MIPS_WRAPPER_NMI_EVENT_TRIGGER_EN 0x00000001U
+
+/* Register ROGUE_CR_MIPS_DEBUG_CONFIG */
+#define ROGUE_CR_MIPS_DEBUG_CONFIG 0x08C8U
+#define ROGUE_CR_MIPS_DEBUG_CONFIG_MASKFULL 0x0000000000000001ULL
+#define ROGUE_CR_MIPS_DEBUG_CONFIG_DISABLE_PROBE_DEBUG_SHIFT 0U
+#define ROGUE_CR_MIPS_DEBUG_CONFIG_DISABLE_PROBE_DEBUG_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_MIPS_DEBUG_CONFIG_DISABLE_PROBE_DEBUG_EN 0x00000001U
+
+/* Register ROGUE_CR_MIPS_EXCEPTION_STATUS */
+#define ROGUE_CR_MIPS_EXCEPTION_STATUS 0x08D0U
+#define ROGUE_CR_MIPS_EXCEPTION_STATUS_MASKFULL 0x000000000000003FULL
+#define ROGUE_CR_MIPS_EXCEPTION_STATUS_SI_SLEEP_SHIFT 5U
+#define ROGUE_CR_MIPS_EXCEPTION_STATUS_SI_SLEEP_CLRMSK 0xFFFFFFDFU
+#define ROGUE_CR_MIPS_EXCEPTION_STATUS_SI_SLEEP_EN 0x00000020U
+#define ROGUE_CR_MIPS_EXCEPTION_STATUS_SI_NMI_TAKEN_SHIFT 4U
+#define ROGUE_CR_MIPS_EXCEPTION_STATUS_SI_NMI_TAKEN_CLRMSK 0xFFFFFFEFU
+#define ROGUE_CR_MIPS_EXCEPTION_STATUS_SI_NMI_TAKEN_EN 0x00000010U
+#define ROGUE_CR_MIPS_EXCEPTION_STATUS_SI_NEST_EXL_SHIFT 3U
+#define ROGUE_CR_MIPS_EXCEPTION_STATUS_SI_NEST_EXL_CLRMSK 0xFFFFFFF7U
+#define ROGUE_CR_MIPS_EXCEPTION_STATUS_SI_NEST_EXL_EN 0x00000008U
+#define ROGUE_CR_MIPS_EXCEPTION_STATUS_SI_NEST_ERL_SHIFT 2U
+#define ROGUE_CR_MIPS_EXCEPTION_STATUS_SI_NEST_ERL_CLRMSK 0xFFFFFFFBU
+#define ROGUE_CR_MIPS_EXCEPTION_STATUS_SI_NEST_ERL_EN 0x00000004U
+#define ROGUE_CR_MIPS_EXCEPTION_STATUS_SI_EXL_SHIFT 1U
+#define ROGUE_CR_MIPS_EXCEPTION_STATUS_SI_EXL_CLRMSK 0xFFFFFFFDU
+#define ROGUE_CR_MIPS_EXCEPTION_STATUS_SI_EXL_EN 0x00000002U
+#define ROGUE_CR_MIPS_EXCEPTION_STATUS_SI_ERL_SHIFT 0U
+#define ROGUE_CR_MIPS_EXCEPTION_STATUS_SI_ERL_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_MIPS_EXCEPTION_STATUS_SI_ERL_EN 0x00000001U
+
+/* Register ROGUE_CR_MIPS_WRAPPER_STATUS */
+#define ROGUE_CR_MIPS_WRAPPER_STATUS 0x08E8U
+#define ROGUE_CR_MIPS_WRAPPER_STATUS_MASKFULL 0x00000000000000FFULL
+#define ROGUE_CR_MIPS_WRAPPER_STATUS_OUTSTANDING_REQUESTS_SHIFT 0U
+#define ROGUE_CR_MIPS_WRAPPER_STATUS_OUTSTANDING_REQUESTS_CLRMSK 0xFFFFFF00U
+
+/* Register ROGUE_CR_XPU_BROADCAST */
+#define ROGUE_CR_XPU_BROADCAST 0x0890U
+#define ROGUE_CR_XPU_BROADCAST_MASKFULL 0x00000000000001FFULL
+#define ROGUE_CR_XPU_BROADCAST_MASK_SHIFT 0U
+#define ROGUE_CR_XPU_BROADCAST_MASK_CLRMSK 0xFFFFFE00U
+
+/* Register ROGUE_CR_META_SP_MSLVDATAX */
+#define ROGUE_CR_META_SP_MSLVDATAX 0x0A00U
+#define ROGUE_CR_META_SP_MSLVDATAX_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_META_SP_MSLVDATAX_MSLVDATAX_SHIFT 0U
+#define ROGUE_CR_META_SP_MSLVDATAX_MSLVDATAX_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_META_SP_MSLVDATAT */
+#define ROGUE_CR_META_SP_MSLVDATAT 0x0A08U
+#define ROGUE_CR_META_SP_MSLVDATAT_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_META_SP_MSLVDATAT_MSLVDATAT_SHIFT 0U
+#define ROGUE_CR_META_SP_MSLVDATAT_MSLVDATAT_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_META_SP_MSLVCTRL0 */
+#define ROGUE_CR_META_SP_MSLVCTRL0 0x0A10U
+#define ROGUE_CR_META_SP_MSLVCTRL0_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_META_SP_MSLVCTRL0_ADDR_SHIFT 2U
+#define ROGUE_CR_META_SP_MSLVCTRL0_ADDR_CLRMSK 0x00000003U
+#define ROGUE_CR_META_SP_MSLVCTRL0_AUTOINCR_SHIFT 1U
+#define ROGUE_CR_META_SP_MSLVCTRL0_AUTOINCR_CLRMSK 0xFFFFFFFDU
+#define ROGUE_CR_META_SP_MSLVCTRL0_AUTOINCR_EN 0x00000002U
+#define ROGUE_CR_META_SP_MSLVCTRL0_RD_SHIFT 0U
+#define ROGUE_CR_META_SP_MSLVCTRL0_RD_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_META_SP_MSLVCTRL0_RD_EN 0x00000001U
+
+/* Register ROGUE_CR_META_SP_MSLVCTRL1 */
+#define ROGUE_CR_META_SP_MSLVCTRL1 0x0A18U
+#define ROGUE_CR_META_SP_MSLVCTRL1_MASKFULL 0x00000000F7F4003FULL
+#define ROGUE_CR_META_SP_MSLVCTRL1_DEFERRTHREAD_SHIFT 30U
+#define ROGUE_CR_META_SP_MSLVCTRL1_DEFERRTHREAD_CLRMSK 0x3FFFFFFFU
+#define ROGUE_CR_META_SP_MSLVCTRL1_LOCK2_INTERLOCK_SHIFT 29U
+#define ROGUE_CR_META_SP_MSLVCTRL1_LOCK2_INTERLOCK_CLRMSK 0xDFFFFFFFU
+#define ROGUE_CR_META_SP_MSLVCTRL1_LOCK2_INTERLOCK_EN 0x20000000U
+#define ROGUE_CR_META_SP_MSLVCTRL1_ATOMIC_INTERLOCK_SHIFT 28U
+#define ROGUE_CR_META_SP_MSLVCTRL1_ATOMIC_INTERLOCK_CLRMSK 0xEFFFFFFFU
+#define ROGUE_CR_META_SP_MSLVCTRL1_ATOMIC_INTERLOCK_EN 0x10000000U
+#define ROGUE_CR_META_SP_MSLVCTRL1_GBLPORT_IDLE_SHIFT 26U
+#define ROGUE_CR_META_SP_MSLVCTRL1_GBLPORT_IDLE_CLRMSK 0xFBFFFFFFU
+#define ROGUE_CR_META_SP_MSLVCTRL1_GBLPORT_IDLE_EN 0x04000000U
+#define ROGUE_CR_META_SP_MSLVCTRL1_COREMEM_IDLE_SHIFT 25U
+#define ROGUE_CR_META_SP_MSLVCTRL1_COREMEM_IDLE_CLRMSK 0xFDFFFFFFU
+#define ROGUE_CR_META_SP_MSLVCTRL1_COREMEM_IDLE_EN 0x02000000U
+#define ROGUE_CR_META_SP_MSLVCTRL1_READY_SHIFT 24U
+#define ROGUE_CR_META_SP_MSLVCTRL1_READY_CLRMSK 0xFEFFFFFFU
+#define ROGUE_CR_META_SP_MSLVCTRL1_READY_EN 0x01000000U
+#define ROGUE_CR_META_SP_MSLVCTRL1_DEFERRID_SHIFT 21U
+#define ROGUE_CR_META_SP_MSLVCTRL1_DEFERRID_CLRMSK 0xFF1FFFFFU
+#define ROGUE_CR_META_SP_MSLVCTRL1_DEFERR_SHIFT 20U
+#define ROGUE_CR_META_SP_MSLVCTRL1_DEFERR_CLRMSK 0xFFEFFFFFU
+#define ROGUE_CR_META_SP_MSLVCTRL1_DEFERR_EN 0x00100000U
+#define ROGUE_CR_META_SP_MSLVCTRL1_WR_ACTIVE_SHIFT 18U
+#define ROGUE_CR_META_SP_MSLVCTRL1_WR_ACTIVE_CLRMSK 0xFFFBFFFFU
+#define ROGUE_CR_META_SP_MSLVCTRL1_WR_ACTIVE_EN 0x00040000U
+#define ROGUE_CR_META_SP_MSLVCTRL1_THREAD_SHIFT 4U
+#define ROGUE_CR_META_SP_MSLVCTRL1_THREAD_CLRMSK 0xFFFFFFCFU
+#define ROGUE_CR_META_SP_MSLVCTRL1_TRANS_SIZE_SHIFT 2U
+#define ROGUE_CR_META_SP_MSLVCTRL1_TRANS_SIZE_CLRMSK 0xFFFFFFF3U
+#define ROGUE_CR_META_SP_MSLVCTRL1_BYTE_ROUND_SHIFT 0U
+#define ROGUE_CR_META_SP_MSLVCTRL1_BYTE_ROUND_CLRMSK 0xFFFFFFFCU
+
+/* Register ROGUE_CR_META_SP_MSLVHANDSHKE */
+#define ROGUE_CR_META_SP_MSLVHANDSHKE 0x0A50U
+#define ROGUE_CR_META_SP_MSLVHANDSHKE_MASKFULL 0x000000000000000FULL
+#define ROGUE_CR_META_SP_MSLVHANDSHKE_INPUT_SHIFT 2U
+#define ROGUE_CR_META_SP_MSLVHANDSHKE_INPUT_CLRMSK 0xFFFFFFF3U
+#define ROGUE_CR_META_SP_MSLVHANDSHKE_OUTPUT_SHIFT 0U
+#define ROGUE_CR_META_SP_MSLVHANDSHKE_OUTPUT_CLRMSK 0xFFFFFFFCU
+
+/* Register ROGUE_CR_META_SP_MSLVT0KICK */
+#define ROGUE_CR_META_SP_MSLVT0KICK 0x0A80U
+#define ROGUE_CR_META_SP_MSLVT0KICK_MASKFULL 0x000000000000FFFFULL
+#define ROGUE_CR_META_SP_MSLVT0KICK_MSLVT0KICK_SHIFT 0U
+#define ROGUE_CR_META_SP_MSLVT0KICK_MSLVT0KICK_CLRMSK 0xFFFF0000U
+
+/* Register ROGUE_CR_META_SP_MSLVT0KICKI */
+#define ROGUE_CR_META_SP_MSLVT0KICKI 0x0A88U
+#define ROGUE_CR_META_SP_MSLVT0KICKI_MASKFULL 0x000000000000FFFFULL
+#define ROGUE_CR_META_SP_MSLVT0KICKI_MSLVT0KICKI_SHIFT 0U
+#define ROGUE_CR_META_SP_MSLVT0KICKI_MSLVT0KICKI_CLRMSK 0xFFFF0000U
+
+/* Register ROGUE_CR_META_SP_MSLVT1KICK */
+#define ROGUE_CR_META_SP_MSLVT1KICK 0x0A90U
+#define ROGUE_CR_META_SP_MSLVT1KICK_MASKFULL 0x000000000000FFFFULL
+#define ROGUE_CR_META_SP_MSLVT1KICK_MSLVT1KICK_SHIFT 0U
+#define ROGUE_CR_META_SP_MSLVT1KICK_MSLVT1KICK_CLRMSK 0xFFFF0000U
+
+/* Register ROGUE_CR_META_SP_MSLVT1KICKI */
+#define ROGUE_CR_META_SP_MSLVT1KICKI 0x0A98U
+#define ROGUE_CR_META_SP_MSLVT1KICKI_MASKFULL 0x000000000000FFFFULL
+#define ROGUE_CR_META_SP_MSLVT1KICKI_MSLVT1KICKI_SHIFT 0U
+#define ROGUE_CR_META_SP_MSLVT1KICKI_MSLVT1KICKI_CLRMSK 0xFFFF0000U
+
+/* Register ROGUE_CR_META_SP_MSLVT2KICK */
+#define ROGUE_CR_META_SP_MSLVT2KICK 0x0AA0U
+#define ROGUE_CR_META_SP_MSLVT2KICK_MASKFULL 0x000000000000FFFFULL
+#define ROGUE_CR_META_SP_MSLVT2KICK_MSLVT2KICK_SHIFT 0U
+#define ROGUE_CR_META_SP_MSLVT2KICK_MSLVT2KICK_CLRMSK 0xFFFF0000U
+
+/* Register ROGUE_CR_META_SP_MSLVT2KICKI */
+#define ROGUE_CR_META_SP_MSLVT2KICKI 0x0AA8U
+#define ROGUE_CR_META_SP_MSLVT2KICKI_MASKFULL 0x000000000000FFFFULL
+#define ROGUE_CR_META_SP_MSLVT2KICKI_MSLVT2KICKI_SHIFT 0U
+#define ROGUE_CR_META_SP_MSLVT2KICKI_MSLVT2KICKI_CLRMSK 0xFFFF0000U
+
+/* Register ROGUE_CR_META_SP_MSLVT3KICK */
+#define ROGUE_CR_META_SP_MSLVT3KICK 0x0AB0U
+#define ROGUE_CR_META_SP_MSLVT3KICK_MASKFULL 0x000000000000FFFFULL
+#define ROGUE_CR_META_SP_MSLVT3KICK_MSLVT3KICK_SHIFT 0U
+#define ROGUE_CR_META_SP_MSLVT3KICK_MSLVT3KICK_CLRMSK 0xFFFF0000U
+
+/* Register ROGUE_CR_META_SP_MSLVT3KICKI */
+#define ROGUE_CR_META_SP_MSLVT3KICKI 0x0AB8U
+#define ROGUE_CR_META_SP_MSLVT3KICKI_MASKFULL 0x000000000000FFFFULL
+#define ROGUE_CR_META_SP_MSLVT3KICKI_MSLVT3KICKI_SHIFT 0U
+#define ROGUE_CR_META_SP_MSLVT3KICKI_MSLVT3KICKI_CLRMSK 0xFFFF0000U
+
+/* Register ROGUE_CR_META_SP_MSLVRST */
+#define ROGUE_CR_META_SP_MSLVRST 0x0AC0U
+#define ROGUE_CR_META_SP_MSLVRST_MASKFULL 0x0000000000000001ULL
+#define ROGUE_CR_META_SP_MSLVRST_SOFTRESET_SHIFT 0U
+#define ROGUE_CR_META_SP_MSLVRST_SOFTRESET_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_META_SP_MSLVRST_SOFTRESET_EN 0x00000001U
+
+/* Register ROGUE_CR_META_SP_MSLVIRQSTATUS */
+#define ROGUE_CR_META_SP_MSLVIRQSTATUS 0x0AC8U
+#define ROGUE_CR_META_SP_MSLVIRQSTATUS_MASKFULL 0x000000000000000CULL
+#define ROGUE_CR_META_SP_MSLVIRQSTATUS_TRIGVECT3_SHIFT 3U
+#define ROGUE_CR_META_SP_MSLVIRQSTATUS_TRIGVECT3_CLRMSK 0xFFFFFFF7U
+#define ROGUE_CR_META_SP_MSLVIRQSTATUS_TRIGVECT3_EN 0x00000008U
+#define ROGUE_CR_META_SP_MSLVIRQSTATUS_TRIGVECT2_SHIFT 2U
+#define ROGUE_CR_META_SP_MSLVIRQSTATUS_TRIGVECT2_CLRMSK 0xFFFFFFFBU
+#define ROGUE_CR_META_SP_MSLVIRQSTATUS_TRIGVECT2_EN 0x00000004U
+
+/* Register ROGUE_CR_META_SP_MSLVIRQENABLE */
+#define ROGUE_CR_META_SP_MSLVIRQENABLE 0x0AD0U
+#define ROGUE_CR_META_SP_MSLVIRQENABLE_MASKFULL 0x000000000000000CULL
+#define ROGUE_CR_META_SP_MSLVIRQENABLE_EVENT1_SHIFT 3U
+#define ROGUE_CR_META_SP_MSLVIRQENABLE_EVENT1_CLRMSK 0xFFFFFFF7U
+#define ROGUE_CR_META_SP_MSLVIRQENABLE_EVENT1_EN 0x00000008U
+#define ROGUE_CR_META_SP_MSLVIRQENABLE_EVENT0_SHIFT 2U
+#define ROGUE_CR_META_SP_MSLVIRQENABLE_EVENT0_CLRMSK 0xFFFFFFFBU
+#define ROGUE_CR_META_SP_MSLVIRQENABLE_EVENT0_EN 0x00000004U
+
+/* Register ROGUE_CR_META_SP_MSLVIRQLEVEL */
+#define ROGUE_CR_META_SP_MSLVIRQLEVEL 0x0AD8U
+#define ROGUE_CR_META_SP_MSLVIRQLEVEL_MASKFULL 0x0000000000000001ULL
+#define ROGUE_CR_META_SP_MSLVIRQLEVEL_MODE_SHIFT 0U
+#define ROGUE_CR_META_SP_MSLVIRQLEVEL_MODE_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_META_SP_MSLVIRQLEVEL_MODE_EN 0x00000001U
+
+/* Register ROGUE_CR_MTS_SCHEDULE */
+#define ROGUE_CR_MTS_SCHEDULE 0x0B00U
+#define ROGUE_CR_MTS_SCHEDULE_MASKFULL 0x00000000000001FFULL
+#define ROGUE_CR_MTS_SCHEDULE_HOST_SHIFT 8U
+#define ROGUE_CR_MTS_SCHEDULE_HOST_CLRMSK 0xFFFFFEFFU
+#define ROGUE_CR_MTS_SCHEDULE_HOST_BG_TIMER 0x00000000U
+#define ROGUE_CR_MTS_SCHEDULE_HOST_HOST 0x00000100U
+#define ROGUE_CR_MTS_SCHEDULE_PRIORITY_SHIFT 6U
+#define ROGUE_CR_MTS_SCHEDULE_PRIORITY_CLRMSK 0xFFFFFF3FU
+#define ROGUE_CR_MTS_SCHEDULE_PRIORITY_PRT0 0x00000000U
+#define ROGUE_CR_MTS_SCHEDULE_PRIORITY_PRT1 0x00000040U
+#define ROGUE_CR_MTS_SCHEDULE_PRIORITY_PRT2 0x00000080U
+#define ROGUE_CR_MTS_SCHEDULE_PRIORITY_PRT3 0x000000C0U
+#define ROGUE_CR_MTS_SCHEDULE_CONTEXT_SHIFT 5U
+#define ROGUE_CR_MTS_SCHEDULE_CONTEXT_CLRMSK 0xFFFFFFDFU
+#define ROGUE_CR_MTS_SCHEDULE_CONTEXT_BGCTX 0x00000000U
+#define ROGUE_CR_MTS_SCHEDULE_CONTEXT_INTCTX 0x00000020U
+#define ROGUE_CR_MTS_SCHEDULE_TASK_SHIFT 4U
+#define ROGUE_CR_MTS_SCHEDULE_TASK_CLRMSK 0xFFFFFFEFU
+#define ROGUE_CR_MTS_SCHEDULE_TASK_NON_COUNTED 0x00000000U
+#define ROGUE_CR_MTS_SCHEDULE_TASK_COUNTED 0x00000010U
+#define ROGUE_CR_MTS_SCHEDULE_DM_SHIFT 0U
+#define ROGUE_CR_MTS_SCHEDULE_DM_CLRMSK 0xFFFFFFF0U
+#define ROGUE_CR_MTS_SCHEDULE_DM_DM0 0x00000000U
+#define ROGUE_CR_MTS_SCHEDULE_DM_DM1 0x00000001U
+#define ROGUE_CR_MTS_SCHEDULE_DM_DM2 0x00000002U
+#define ROGUE_CR_MTS_SCHEDULE_DM_DM3 0x00000003U
+#define ROGUE_CR_MTS_SCHEDULE_DM_DM4 0x00000004U
+#define ROGUE_CR_MTS_SCHEDULE_DM_DM5 0x00000005U
+#define ROGUE_CR_MTS_SCHEDULE_DM_DM6 0x00000006U
+#define ROGUE_CR_MTS_SCHEDULE_DM_DM7 0x00000007U
+#define ROGUE_CR_MTS_SCHEDULE_DM_DM_ALL 0x0000000FU
+
+/* Register ROGUE_CR_MTS_SCHEDULE1 */
+#define ROGUE_CR_MTS_SCHEDULE1 0x10B00U
+#define ROGUE_CR_MTS_SCHEDULE1_MASKFULL 0x00000000000001FFULL
+#define ROGUE_CR_MTS_SCHEDULE1_HOST_SHIFT 8U
+#define ROGUE_CR_MTS_SCHEDULE1_HOST_CLRMSK 0xFFFFFEFFU
+#define ROGUE_CR_MTS_SCHEDULE1_HOST_BG_TIMER 0x00000000U
+#define ROGUE_CR_MTS_SCHEDULE1_HOST_HOST 0x00000100U
+#define ROGUE_CR_MTS_SCHEDULE1_PRIORITY_SHIFT 6U
+#define ROGUE_CR_MTS_SCHEDULE1_PRIORITY_CLRMSK 0xFFFFFF3FU
+#define ROGUE_CR_MTS_SCHEDULE1_PRIORITY_PRT0 0x00000000U
+#define ROGUE_CR_MTS_SCHEDULE1_PRIORITY_PRT1 0x00000040U
+#define ROGUE_CR_MTS_SCHEDULE1_PRIORITY_PRT2 0x00000080U
+#define ROGUE_CR_MTS_SCHEDULE1_PRIORITY_PRT3 0x000000C0U
+#define ROGUE_CR_MTS_SCHEDULE1_CONTEXT_SHIFT 5U
+#define ROGUE_CR_MTS_SCHEDULE1_CONTEXT_CLRMSK 0xFFFFFFDFU
+#define ROGUE_CR_MTS_SCHEDULE1_CONTEXT_BGCTX 0x00000000U
+#define ROGUE_CR_MTS_SCHEDULE1_CONTEXT_INTCTX 0x00000020U
+#define ROGUE_CR_MTS_SCHEDULE1_TASK_SHIFT 4U
+#define ROGUE_CR_MTS_SCHEDULE1_TASK_CLRMSK 0xFFFFFFEFU
+#define ROGUE_CR_MTS_SCHEDULE1_TASK_NON_COUNTED 0x00000000U
+#define ROGUE_CR_MTS_SCHEDULE1_TASK_COUNTED 0x00000010U
+#define ROGUE_CR_MTS_SCHEDULE1_DM_SHIFT 0U
+#define ROGUE_CR_MTS_SCHEDULE1_DM_CLRMSK 0xFFFFFFF0U
+#define ROGUE_CR_MTS_SCHEDULE1_DM_DM0 0x00000000U
+#define ROGUE_CR_MTS_SCHEDULE1_DM_DM1 0x00000001U
+#define ROGUE_CR_MTS_SCHEDULE1_DM_DM2 0x00000002U
+#define ROGUE_CR_MTS_SCHEDULE1_DM_DM3 0x00000003U
+#define ROGUE_CR_MTS_SCHEDULE1_DM_DM4 0x00000004U
+#define ROGUE_CR_MTS_SCHEDULE1_DM_DM5 0x00000005U
+#define ROGUE_CR_MTS_SCHEDULE1_DM_DM6 0x00000006U
+#define ROGUE_CR_MTS_SCHEDULE1_DM_DM7 0x00000007U
+#define ROGUE_CR_MTS_SCHEDULE1_DM_DM_ALL 0x0000000FU
+
+/* Register ROGUE_CR_MTS_SCHEDULE2 */
+#define ROGUE_CR_MTS_SCHEDULE2 0x20B00U
+#define ROGUE_CR_MTS_SCHEDULE2_MASKFULL 0x00000000000001FFULL
+#define ROGUE_CR_MTS_SCHEDULE2_HOST_SHIFT 8U
+#define ROGUE_CR_MTS_SCHEDULE2_HOST_CLRMSK 0xFFFFFEFFU
+#define ROGUE_CR_MTS_SCHEDULE2_HOST_BG_TIMER 0x00000000U
+#define ROGUE_CR_MTS_SCHEDULE2_HOST_HOST 0x00000100U
+#define ROGUE_CR_MTS_SCHEDULE2_PRIORITY_SHIFT 6U
+#define ROGUE_CR_MTS_SCHEDULE2_PRIORITY_CLRMSK 0xFFFFFF3FU
+#define ROGUE_CR_MTS_SCHEDULE2_PRIORITY_PRT0 0x00000000U
+#define ROGUE_CR_MTS_SCHEDULE2_PRIORITY_PRT1 0x00000040U
+#define ROGUE_CR_MTS_SCHEDULE2_PRIORITY_PRT2 0x00000080U
+#define ROGUE_CR_MTS_SCHEDULE2_PRIORITY_PRT3 0x000000C0U
+#define ROGUE_CR_MTS_SCHEDULE2_CONTEXT_SHIFT 5U
+#define ROGUE_CR_MTS_SCHEDULE2_CONTEXT_CLRMSK 0xFFFFFFDFU
+#define ROGUE_CR_MTS_SCHEDULE2_CONTEXT_BGCTX 0x00000000U
+#define ROGUE_CR_MTS_SCHEDULE2_CONTEXT_INTCTX 0x00000020U
+#define ROGUE_CR_MTS_SCHEDULE2_TASK_SHIFT 4U
+#define ROGUE_CR_MTS_SCHEDULE2_TASK_CLRMSK 0xFFFFFFEFU
+#define ROGUE_CR_MTS_SCHEDULE2_TASK_NON_COUNTED 0x00000000U
+#define ROGUE_CR_MTS_SCHEDULE2_TASK_COUNTED 0x00000010U
+#define ROGUE_CR_MTS_SCHEDULE2_DM_SHIFT 0U
+#define ROGUE_CR_MTS_SCHEDULE2_DM_CLRMSK 0xFFFFFFF0U
+#define ROGUE_CR_MTS_SCHEDULE2_DM_DM0 0x00000000U
+#define ROGUE_CR_MTS_SCHEDULE2_DM_DM1 0x00000001U
+#define ROGUE_CR_MTS_SCHEDULE2_DM_DM2 0x00000002U
+#define ROGUE_CR_MTS_SCHEDULE2_DM_DM3 0x00000003U
+#define ROGUE_CR_MTS_SCHEDULE2_DM_DM4 0x00000004U
+#define ROGUE_CR_MTS_SCHEDULE2_DM_DM5 0x00000005U
+#define ROGUE_CR_MTS_SCHEDULE2_DM_DM6 0x00000006U
+#define ROGUE_CR_MTS_SCHEDULE2_DM_DM7 0x00000007U
+#define ROGUE_CR_MTS_SCHEDULE2_DM_DM_ALL 0x0000000FU
+
+/* Register ROGUE_CR_MTS_SCHEDULE3 */
+#define ROGUE_CR_MTS_SCHEDULE3 0x30B00U
+#define ROGUE_CR_MTS_SCHEDULE3_MASKFULL 0x00000000000001FFULL
+#define ROGUE_CR_MTS_SCHEDULE3_HOST_SHIFT 8U
+#define ROGUE_CR_MTS_SCHEDULE3_HOST_CLRMSK 0xFFFFFEFFU
+#define ROGUE_CR_MTS_SCHEDULE3_HOST_BG_TIMER 0x00000000U
+#define ROGUE_CR_MTS_SCHEDULE3_HOST_HOST 0x00000100U
+#define ROGUE_CR_MTS_SCHEDULE3_PRIORITY_SHIFT 6U
+#define ROGUE_CR_MTS_SCHEDULE3_PRIORITY_CLRMSK 0xFFFFFF3FU
+#define ROGUE_CR_MTS_SCHEDULE3_PRIORITY_PRT0 0x00000000U
+#define ROGUE_CR_MTS_SCHEDULE3_PRIORITY_PRT1 0x00000040U
+#define ROGUE_CR_MTS_SCHEDULE3_PRIORITY_PRT2 0x00000080U
+#define ROGUE_CR_MTS_SCHEDULE3_PRIORITY_PRT3 0x000000C0U
+#define ROGUE_CR_MTS_SCHEDULE3_CONTEXT_SHIFT 5U
+#define ROGUE_CR_MTS_SCHEDULE3_CONTEXT_CLRMSK 0xFFFFFFDFU
+#define ROGUE_CR_MTS_SCHEDULE3_CONTEXT_BGCTX 0x00000000U
+#define ROGUE_CR_MTS_SCHEDULE3_CONTEXT_INTCTX 0x00000020U
+#define ROGUE_CR_MTS_SCHEDULE3_TASK_SHIFT 4U
+#define ROGUE_CR_MTS_SCHEDULE3_TASK_CLRMSK 0xFFFFFFEFU
+#define ROGUE_CR_MTS_SCHEDULE3_TASK_NON_COUNTED 0x00000000U
+#define ROGUE_CR_MTS_SCHEDULE3_TASK_COUNTED 0x00000010U
+#define ROGUE_CR_MTS_SCHEDULE3_DM_SHIFT 0U
+#define ROGUE_CR_MTS_SCHEDULE3_DM_CLRMSK 0xFFFFFFF0U
+#define ROGUE_CR_MTS_SCHEDULE3_DM_DM0 0x00000000U
+#define ROGUE_CR_MTS_SCHEDULE3_DM_DM1 0x00000001U
+#define ROGUE_CR_MTS_SCHEDULE3_DM_DM2 0x00000002U
+#define ROGUE_CR_MTS_SCHEDULE3_DM_DM3 0x00000003U
+#define ROGUE_CR_MTS_SCHEDULE3_DM_DM4 0x00000004U
+#define ROGUE_CR_MTS_SCHEDULE3_DM_DM5 0x00000005U
+#define ROGUE_CR_MTS_SCHEDULE3_DM_DM6 0x00000006U
+#define ROGUE_CR_MTS_SCHEDULE3_DM_DM7 0x00000007U
+#define ROGUE_CR_MTS_SCHEDULE3_DM_DM_ALL 0x0000000FU
+
+/* Register ROGUE_CR_MTS_SCHEDULE4 */
+#define ROGUE_CR_MTS_SCHEDULE4 0x40B00U
+#define ROGUE_CR_MTS_SCHEDULE4_MASKFULL 0x00000000000001FFULL
+#define ROGUE_CR_MTS_SCHEDULE4_HOST_SHIFT 8U
+#define ROGUE_CR_MTS_SCHEDULE4_HOST_CLRMSK 0xFFFFFEFFU
+#define ROGUE_CR_MTS_SCHEDULE4_HOST_BG_TIMER 0x00000000U
+#define ROGUE_CR_MTS_SCHEDULE4_HOST_HOST 0x00000100U
+#define ROGUE_CR_MTS_SCHEDULE4_PRIORITY_SHIFT 6U
+#define ROGUE_CR_MTS_SCHEDULE4_PRIORITY_CLRMSK 0xFFFFFF3FU
+#define ROGUE_CR_MTS_SCHEDULE4_PRIORITY_PRT0 0x00000000U
+#define ROGUE_CR_MTS_SCHEDULE4_PRIORITY_PRT1 0x00000040U
+#define ROGUE_CR_MTS_SCHEDULE4_PRIORITY_PRT2 0x00000080U
+#define ROGUE_CR_MTS_SCHEDULE4_PRIORITY_PRT3 0x000000C0U
+#define ROGUE_CR_MTS_SCHEDULE4_CONTEXT_SHIFT 5U
+#define ROGUE_CR_MTS_SCHEDULE4_CONTEXT_CLRMSK 0xFFFFFFDFU
+#define ROGUE_CR_MTS_SCHEDULE4_CONTEXT_BGCTX 0x00000000U
+#define ROGUE_CR_MTS_SCHEDULE4_CONTEXT_INTCTX 0x00000020U
+#define ROGUE_CR_MTS_SCHEDULE4_TASK_SHIFT 4U
+#define ROGUE_CR_MTS_SCHEDULE4_TASK_CLRMSK 0xFFFFFFEFU
+#define ROGUE_CR_MTS_SCHEDULE4_TASK_NON_COUNTED 0x00000000U
+#define ROGUE_CR_MTS_SCHEDULE4_TASK_COUNTED 0x00000010U
+#define ROGUE_CR_MTS_SCHEDULE4_DM_SHIFT 0U
+#define ROGUE_CR_MTS_SCHEDULE4_DM_CLRMSK 0xFFFFFFF0U
+#define ROGUE_CR_MTS_SCHEDULE4_DM_DM0 0x00000000U
+#define ROGUE_CR_MTS_SCHEDULE4_DM_DM1 0x00000001U
+#define ROGUE_CR_MTS_SCHEDULE4_DM_DM2 0x00000002U
+#define ROGUE_CR_MTS_SCHEDULE4_DM_DM3 0x00000003U
+#define ROGUE_CR_MTS_SCHEDULE4_DM_DM4 0x00000004U
+#define ROGUE_CR_MTS_SCHEDULE4_DM_DM5 0x00000005U
+#define ROGUE_CR_MTS_SCHEDULE4_DM_DM6 0x00000006U
+#define ROGUE_CR_MTS_SCHEDULE4_DM_DM7 0x00000007U
+#define ROGUE_CR_MTS_SCHEDULE4_DM_DM_ALL 0x0000000FU
+
+/* Register ROGUE_CR_MTS_SCHEDULE5 */
+#define ROGUE_CR_MTS_SCHEDULE5 0x50B00U
+#define ROGUE_CR_MTS_SCHEDULE5_MASKFULL 0x00000000000001FFULL
+#define ROGUE_CR_MTS_SCHEDULE5_HOST_SHIFT 8U
+#define ROGUE_CR_MTS_SCHEDULE5_HOST_CLRMSK 0xFFFFFEFFU
+#define ROGUE_CR_MTS_SCHEDULE5_HOST_BG_TIMER 0x00000000U
+#define ROGUE_CR_MTS_SCHEDULE5_HOST_HOST 0x00000100U
+#define ROGUE_CR_MTS_SCHEDULE5_PRIORITY_SHIFT 6U
+#define ROGUE_CR_MTS_SCHEDULE5_PRIORITY_CLRMSK 0xFFFFFF3FU
+#define ROGUE_CR_MTS_SCHEDULE5_PRIORITY_PRT0 0x00000000U
+#define ROGUE_CR_MTS_SCHEDULE5_PRIORITY_PRT1 0x00000040U
+#define ROGUE_CR_MTS_SCHEDULE5_PRIORITY_PRT2 0x00000080U
+#define ROGUE_CR_MTS_SCHEDULE5_PRIORITY_PRT3 0x000000C0U
+#define ROGUE_CR_MTS_SCHEDULE5_CONTEXT_SHIFT 5U
+#define ROGUE_CR_MTS_SCHEDULE5_CONTEXT_CLRMSK 0xFFFFFFDFU
+#define ROGUE_CR_MTS_SCHEDULE5_CONTEXT_BGCTX 0x00000000U
+#define ROGUE_CR_MTS_SCHEDULE5_CONTEXT_INTCTX 0x00000020U
+#define ROGUE_CR_MTS_SCHEDULE5_TASK_SHIFT 4U
+#define ROGUE_CR_MTS_SCHEDULE5_TASK_CLRMSK 0xFFFFFFEFU
+#define ROGUE_CR_MTS_SCHEDULE5_TASK_NON_COUNTED 0x00000000U
+#define ROGUE_CR_MTS_SCHEDULE5_TASK_COUNTED 0x00000010U
+#define ROGUE_CR_MTS_SCHEDULE5_DM_SHIFT 0U
+#define ROGUE_CR_MTS_SCHEDULE5_DM_CLRMSK 0xFFFFFFF0U
+#define ROGUE_CR_MTS_SCHEDULE5_DM_DM0 0x00000000U
+#define ROGUE_CR_MTS_SCHEDULE5_DM_DM1 0x00000001U
+#define ROGUE_CR_MTS_SCHEDULE5_DM_DM2 0x00000002U
+#define ROGUE_CR_MTS_SCHEDULE5_DM_DM3 0x00000003U
+#define ROGUE_CR_MTS_SCHEDULE5_DM_DM4 0x00000004U
+#define ROGUE_CR_MTS_SCHEDULE5_DM_DM5 0x00000005U
+#define ROGUE_CR_MTS_SCHEDULE5_DM_DM6 0x00000006U
+#define ROGUE_CR_MTS_SCHEDULE5_DM_DM7 0x00000007U
+#define ROGUE_CR_MTS_SCHEDULE5_DM_DM_ALL 0x0000000FU
+
+/* Register ROGUE_CR_MTS_SCHEDULE6 */
+#define ROGUE_CR_MTS_SCHEDULE6 0x60B00U
+#define ROGUE_CR_MTS_SCHEDULE6_MASKFULL 0x00000000000001FFULL
+#define ROGUE_CR_MTS_SCHEDULE6_HOST_SHIFT 8U
+#define ROGUE_CR_MTS_SCHEDULE6_HOST_CLRMSK 0xFFFFFEFFU
+#define ROGUE_CR_MTS_SCHEDULE6_HOST_BG_TIMER 0x00000000U
+#define ROGUE_CR_MTS_SCHEDULE6_HOST_HOST 0x00000100U
+#define ROGUE_CR_MTS_SCHEDULE6_PRIORITY_SHIFT 6U
+#define ROGUE_CR_MTS_SCHEDULE6_PRIORITY_CLRMSK 0xFFFFFF3FU
+#define ROGUE_CR_MTS_SCHEDULE6_PRIORITY_PRT0 0x00000000U
+#define ROGUE_CR_MTS_SCHEDULE6_PRIORITY_PRT1 0x00000040U
+#define ROGUE_CR_MTS_SCHEDULE6_PRIORITY_PRT2 0x00000080U
+#define ROGUE_CR_MTS_SCHEDULE6_PRIORITY_PRT3 0x000000C0U
+#define ROGUE_CR_MTS_SCHEDULE6_CONTEXT_SHIFT 5U
+#define ROGUE_CR_MTS_SCHEDULE6_CONTEXT_CLRMSK 0xFFFFFFDFU
+#define ROGUE_CR_MTS_SCHEDULE6_CONTEXT_BGCTX 0x00000000U
+#define ROGUE_CR_MTS_SCHEDULE6_CONTEXT_INTCTX 0x00000020U
+#define ROGUE_CR_MTS_SCHEDULE6_TASK_SHIFT 4U
+#define ROGUE_CR_MTS_SCHEDULE6_TASK_CLRMSK 0xFFFFFFEFU
+#define ROGUE_CR_MTS_SCHEDULE6_TASK_NON_COUNTED 0x00000000U
+#define ROGUE_CR_MTS_SCHEDULE6_TASK_COUNTED 0x00000010U
+#define ROGUE_CR_MTS_SCHEDULE6_DM_SHIFT 0U
+#define ROGUE_CR_MTS_SCHEDULE6_DM_CLRMSK 0xFFFFFFF0U
+#define ROGUE_CR_MTS_SCHEDULE6_DM_DM0 0x00000000U
+#define ROGUE_CR_MTS_SCHEDULE6_DM_DM1 0x00000001U
+#define ROGUE_CR_MTS_SCHEDULE6_DM_DM2 0x00000002U
+#define ROGUE_CR_MTS_SCHEDULE6_DM_DM3 0x00000003U
+#define ROGUE_CR_MTS_SCHEDULE6_DM_DM4 0x00000004U
+#define ROGUE_CR_MTS_SCHEDULE6_DM_DM5 0x00000005U
+#define ROGUE_CR_MTS_SCHEDULE6_DM_DM6 0x00000006U
+#define ROGUE_CR_MTS_SCHEDULE6_DM_DM7 0x00000007U
+#define ROGUE_CR_MTS_SCHEDULE6_DM_DM_ALL 0x0000000FU
+
+/* Register ROGUE_CR_MTS_SCHEDULE7 */
+#define ROGUE_CR_MTS_SCHEDULE7 0x70B00U
+#define ROGUE_CR_MTS_SCHEDULE7_MASKFULL 0x00000000000001FFULL
+#define ROGUE_CR_MTS_SCHEDULE7_HOST_SHIFT 8U
+#define ROGUE_CR_MTS_SCHEDULE7_HOST_CLRMSK 0xFFFFFEFFU
+#define ROGUE_CR_MTS_SCHEDULE7_HOST_BG_TIMER 0x00000000U
+#define ROGUE_CR_MTS_SCHEDULE7_HOST_HOST 0x00000100U
+#define ROGUE_CR_MTS_SCHEDULE7_PRIORITY_SHIFT 6U
+#define ROGUE_CR_MTS_SCHEDULE7_PRIORITY_CLRMSK 0xFFFFFF3FU
+#define ROGUE_CR_MTS_SCHEDULE7_PRIORITY_PRT0 0x00000000U
+#define ROGUE_CR_MTS_SCHEDULE7_PRIORITY_PRT1 0x00000040U
+#define ROGUE_CR_MTS_SCHEDULE7_PRIORITY_PRT2 0x00000080U
+#define ROGUE_CR_MTS_SCHEDULE7_PRIORITY_PRT3 0x000000C0U
+#define ROGUE_CR_MTS_SCHEDULE7_CONTEXT_SHIFT 5U
+#define ROGUE_CR_MTS_SCHEDULE7_CONTEXT_CLRMSK 0xFFFFFFDFU
+#define ROGUE_CR_MTS_SCHEDULE7_CONTEXT_BGCTX 0x00000000U
+#define ROGUE_CR_MTS_SCHEDULE7_CONTEXT_INTCTX 0x00000020U
+#define ROGUE_CR_MTS_SCHEDULE7_TASK_SHIFT 4U
+#define ROGUE_CR_MTS_SCHEDULE7_TASK_CLRMSK 0xFFFFFFEFU
+#define ROGUE_CR_MTS_SCHEDULE7_TASK_NON_COUNTED 0x00000000U
+#define ROGUE_CR_MTS_SCHEDULE7_TASK_COUNTED 0x00000010U
+#define ROGUE_CR_MTS_SCHEDULE7_DM_SHIFT 0U
+#define ROGUE_CR_MTS_SCHEDULE7_DM_CLRMSK 0xFFFFFFF0U
+#define ROGUE_CR_MTS_SCHEDULE7_DM_DM0 0x00000000U
+#define ROGUE_CR_MTS_SCHEDULE7_DM_DM1 0x00000001U
+#define ROGUE_CR_MTS_SCHEDULE7_DM_DM2 0x00000002U
+#define ROGUE_CR_MTS_SCHEDULE7_DM_DM3 0x00000003U
+#define ROGUE_CR_MTS_SCHEDULE7_DM_DM4 0x00000004U
+#define ROGUE_CR_MTS_SCHEDULE7_DM_DM5 0x00000005U
+#define ROGUE_CR_MTS_SCHEDULE7_DM_DM6 0x00000006U
+#define ROGUE_CR_MTS_SCHEDULE7_DM_DM7 0x00000007U
+#define ROGUE_CR_MTS_SCHEDULE7_DM_DM_ALL 0x0000000FU
+
+/* Register ROGUE_CR_MTS_BGCTX_THREAD0_DM_ASSOC */
+#define ROGUE_CR_MTS_BGCTX_THREAD0_DM_ASSOC 0x0B30U
+#define ROGUE_CR_MTS_BGCTX_THREAD0_DM_ASSOC_MASKFULL 0x000000000000FFFFULL
+#define ROGUE_CR_MTS_BGCTX_THREAD0_DM_ASSOC_DM_ASSOC_SHIFT 0U
+#define ROGUE_CR_MTS_BGCTX_THREAD0_DM_ASSOC_DM_ASSOC_CLRMSK 0xFFFF0000U
+
+/* Register ROGUE_CR_MTS_BGCTX_THREAD1_DM_ASSOC */
+#define ROGUE_CR_MTS_BGCTX_THREAD1_DM_ASSOC 0x0B38U
+#define ROGUE_CR_MTS_BGCTX_THREAD1_DM_ASSOC_MASKFULL 0x000000000000FFFFULL
+#define ROGUE_CR_MTS_BGCTX_THREAD1_DM_ASSOC_DM_ASSOC_SHIFT 0U
+#define ROGUE_CR_MTS_BGCTX_THREAD1_DM_ASSOC_DM_ASSOC_CLRMSK 0xFFFF0000U
+
+/* Register ROGUE_CR_MTS_INTCTX_THREAD0_DM_ASSOC */
+#define ROGUE_CR_MTS_INTCTX_THREAD0_DM_ASSOC 0x0B40U
+#define ROGUE_CR_MTS_INTCTX_THREAD0_DM_ASSOC_MASKFULL 0x000000000000FFFFULL
+#define ROGUE_CR_MTS_INTCTX_THREAD0_DM_ASSOC_DM_ASSOC_SHIFT 0U
+#define ROGUE_CR_MTS_INTCTX_THREAD0_DM_ASSOC_DM_ASSOC_CLRMSK 0xFFFF0000U
+
+/* Register ROGUE_CR_MTS_INTCTX_THREAD1_DM_ASSOC */
+#define ROGUE_CR_MTS_INTCTX_THREAD1_DM_ASSOC 0x0B48U
+#define ROGUE_CR_MTS_INTCTX_THREAD1_DM_ASSOC_MASKFULL 0x000000000000FFFFULL
+#define ROGUE_CR_MTS_INTCTX_THREAD1_DM_ASSOC_DM_ASSOC_SHIFT 0U
+#define ROGUE_CR_MTS_INTCTX_THREAD1_DM_ASSOC_DM_ASSOC_CLRMSK 0xFFFF0000U
+
+/* Register ROGUE_CR_MTS_GARTEN_WRAPPER_CONFIG */
+#define ROGUE_CR_MTS_GARTEN_WRAPPER_CONFIG 0x0B50U
+#define ROGUE_CR_MTS_GARTEN_WRAPPER_CONFIG__S7_TOP__MASKFULL 0x000FF0FFFFFFF701ULL
+#define ROGUE_CR_MTS_GARTEN_WRAPPER_CONFIG_MASKFULL 0x0000FFFFFFFFF001ULL
+#define ROGUE_CR_MTS_GARTEN_WRAPPER_CONFIG_FENCE_PC_BASE_SHIFT 44U
+#define ROGUE_CR_MTS_GARTEN_WRAPPER_CONFIG_FENCE_PC_BASE_CLRMSK 0xFFFF0FFFFFFFFFFFULL
+#define ROGUE_CR_MTS_GARTEN_WRAPPER_CONFIG__S7_TOP__FENCE_PC_BASE_SHIFT 44U
+#define ROGUE_CR_MTS_GARTEN_WRAPPER_CONFIG__S7_TOP__FENCE_PC_BASE_CLRMSK 0xFFF00FFFFFFFFFFFULL
+#define ROGUE_CR_MTS_GARTEN_WRAPPER_CONFIG_FENCE_DM_SHIFT 40U
+#define ROGUE_CR_MTS_GARTEN_WRAPPER_CONFIG_FENCE_DM_CLRMSK 0xFFFFF0FFFFFFFFFFULL
+#define ROGUE_CR_MTS_GARTEN_WRAPPER_CONFIG_FENCE_ADDR_SHIFT 12U
+#define ROGUE_CR_MTS_GARTEN_WRAPPER_CONFIG_FENCE_ADDR_CLRMSK 0xFFFFFF0000000FFFULL
+#define ROGUE_CR_MTS_GARTEN_WRAPPER_CONFIG_FENCE_PERSISTENCE_SHIFT 9U
+#define ROGUE_CR_MTS_GARTEN_WRAPPER_CONFIG_FENCE_PERSISTENCE_CLRMSK 0xFFFFFFFFFFFFF9FFULL
+#define ROGUE_CR_MTS_GARTEN_WRAPPER_CONFIG_FENCE_SLC_COHERENT_SHIFT 8U
+#define ROGUE_CR_MTS_GARTEN_WRAPPER_CONFIG_FENCE_SLC_COHERENT_CLRMSK 0xFFFFFFFFFFFFFEFFULL
+#define ROGUE_CR_MTS_GARTEN_WRAPPER_CONFIG_FENCE_SLC_COHERENT_EN 0x0000000000000100ULL
+#define ROGUE_CR_MTS_GARTEN_WRAPPER_CONFIG_IDLE_CTRL_SHIFT 0U
+#define ROGUE_CR_MTS_GARTEN_WRAPPER_CONFIG_IDLE_CTRL_CLRMSK 0xFFFFFFFFFFFFFFFEULL
+#define ROGUE_CR_MTS_GARTEN_WRAPPER_CONFIG_IDLE_CTRL_META 0x0000000000000000ULL
+#define ROGUE_CR_MTS_GARTEN_WRAPPER_CONFIG_IDLE_CTRL_MTS 0x0000000000000001ULL
+
+/* Register ROGUE_CR_MTS_DM0_INTERRUPT_ENABLE */
+#define ROGUE_CR_MTS_DM0_INTERRUPT_ENABLE 0x0B58U
+#define ROGUE_CR_MTS_DM0_INTERRUPT_ENABLE_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_MTS_DM0_INTERRUPT_ENABLE_INT_ENABLE_SHIFT 0U
+#define ROGUE_CR_MTS_DM0_INTERRUPT_ENABLE_INT_ENABLE_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_MTS_DM1_INTERRUPT_ENABLE */
+#define ROGUE_CR_MTS_DM1_INTERRUPT_ENABLE 0x0B60U
+#define ROGUE_CR_MTS_DM1_INTERRUPT_ENABLE_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_MTS_DM1_INTERRUPT_ENABLE_INT_ENABLE_SHIFT 0U
+#define ROGUE_CR_MTS_DM1_INTERRUPT_ENABLE_INT_ENABLE_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_MTS_DM2_INTERRUPT_ENABLE */
+#define ROGUE_CR_MTS_DM2_INTERRUPT_ENABLE 0x0B68U
+#define ROGUE_CR_MTS_DM2_INTERRUPT_ENABLE_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_MTS_DM2_INTERRUPT_ENABLE_INT_ENABLE_SHIFT 0U
+#define ROGUE_CR_MTS_DM2_INTERRUPT_ENABLE_INT_ENABLE_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_MTS_DM3_INTERRUPT_ENABLE */
+#define ROGUE_CR_MTS_DM3_INTERRUPT_ENABLE 0x0B70U
+#define ROGUE_CR_MTS_DM3_INTERRUPT_ENABLE_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_MTS_DM3_INTERRUPT_ENABLE_INT_ENABLE_SHIFT 0U
+#define ROGUE_CR_MTS_DM3_INTERRUPT_ENABLE_INT_ENABLE_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_MTS_DM4_INTERRUPT_ENABLE */
+#define ROGUE_CR_MTS_DM4_INTERRUPT_ENABLE 0x0B78U
+#define ROGUE_CR_MTS_DM4_INTERRUPT_ENABLE_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_MTS_DM4_INTERRUPT_ENABLE_INT_ENABLE_SHIFT 0U
+#define ROGUE_CR_MTS_DM4_INTERRUPT_ENABLE_INT_ENABLE_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_MTS_DM5_INTERRUPT_ENABLE */
+#define ROGUE_CR_MTS_DM5_INTERRUPT_ENABLE 0x0B80U
+#define ROGUE_CR_MTS_DM5_INTERRUPT_ENABLE_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_MTS_DM5_INTERRUPT_ENABLE_INT_ENABLE_SHIFT 0U
+#define ROGUE_CR_MTS_DM5_INTERRUPT_ENABLE_INT_ENABLE_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_MTS_INTCTX */
+#define ROGUE_CR_MTS_INTCTX 0x0B98U
+#define ROGUE_CR_MTS_INTCTX_MASKFULL 0x000000003FFFFFFFULL
+#define ROGUE_CR_MTS_INTCTX_DM_HOST_SCHEDULE_SHIFT 22U
+#define ROGUE_CR_MTS_INTCTX_DM_HOST_SCHEDULE_CLRMSK 0xC03FFFFFU
+#define ROGUE_CR_MTS_INTCTX_DM_PTR_SHIFT 18U
+#define ROGUE_CR_MTS_INTCTX_DM_PTR_CLRMSK 0xFFC3FFFFU
+#define ROGUE_CR_MTS_INTCTX_THREAD_ACTIVE_SHIFT 16U
+#define ROGUE_CR_MTS_INTCTX_THREAD_ACTIVE_CLRMSK 0xFFFCFFFFU
+#define ROGUE_CR_MTS_INTCTX_DM_TIMER_SCHEDULE_SHIFT 8U
+#define ROGUE_CR_MTS_INTCTX_DM_TIMER_SCHEDULE_CLRMSK 0xFFFF00FFU
+#define ROGUE_CR_MTS_INTCTX_DM_INTERRUPT_SCHEDULE_SHIFT 0U
+#define ROGUE_CR_MTS_INTCTX_DM_INTERRUPT_SCHEDULE_CLRMSK 0xFFFFFF00U
+
+/* Register ROGUE_CR_MTS_BGCTX */
+#define ROGUE_CR_MTS_BGCTX 0x0BA0U
+#define ROGUE_CR_MTS_BGCTX_MASKFULL 0x0000000000003FFFULL
+#define ROGUE_CR_MTS_BGCTX_DM_PTR_SHIFT 10U
+#define ROGUE_CR_MTS_BGCTX_DM_PTR_CLRMSK 0xFFFFC3FFU
+#define ROGUE_CR_MTS_BGCTX_THREAD_ACTIVE_SHIFT 8U
+#define ROGUE_CR_MTS_BGCTX_THREAD_ACTIVE_CLRMSK 0xFFFFFCFFU
+#define ROGUE_CR_MTS_BGCTX_DM_NONCOUNTED_SCHEDULE_SHIFT 0U
+#define ROGUE_CR_MTS_BGCTX_DM_NONCOUNTED_SCHEDULE_CLRMSK 0xFFFFFF00U
+
+/* Register ROGUE_CR_MTS_BGCTX_COUNTED_SCHEDULE */
+#define ROGUE_CR_MTS_BGCTX_COUNTED_SCHEDULE 0x0BA8U
+#define ROGUE_CR_MTS_BGCTX_COUNTED_SCHEDULE_MASKFULL 0xFFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_MTS_BGCTX_COUNTED_SCHEDULE_DM7_SHIFT 56U
+#define ROGUE_CR_MTS_BGCTX_COUNTED_SCHEDULE_DM7_CLRMSK 0x00FFFFFFFFFFFFFFULL
+#define ROGUE_CR_MTS_BGCTX_COUNTED_SCHEDULE_DM6_SHIFT 48U
+#define ROGUE_CR_MTS_BGCTX_COUNTED_SCHEDULE_DM6_CLRMSK 0xFF00FFFFFFFFFFFFULL
+#define ROGUE_CR_MTS_BGCTX_COUNTED_SCHEDULE_DM5_SHIFT 40U
+#define ROGUE_CR_MTS_BGCTX_COUNTED_SCHEDULE_DM5_CLRMSK 0xFFFF00FFFFFFFFFFULL
+#define ROGUE_CR_MTS_BGCTX_COUNTED_SCHEDULE_DM4_SHIFT 32U
+#define ROGUE_CR_MTS_BGCTX_COUNTED_SCHEDULE_DM4_CLRMSK 0xFFFFFF00FFFFFFFFULL
+#define ROGUE_CR_MTS_BGCTX_COUNTED_SCHEDULE_DM3_SHIFT 24U
+#define ROGUE_CR_MTS_BGCTX_COUNTED_SCHEDULE_DM3_CLRMSK 0xFFFFFFFF00FFFFFFULL
+#define ROGUE_CR_MTS_BGCTX_COUNTED_SCHEDULE_DM2_SHIFT 16U
+#define ROGUE_CR_MTS_BGCTX_COUNTED_SCHEDULE_DM2_CLRMSK 0xFFFFFFFFFF00FFFFULL
+#define ROGUE_CR_MTS_BGCTX_COUNTED_SCHEDULE_DM1_SHIFT 8U
+#define ROGUE_CR_MTS_BGCTX_COUNTED_SCHEDULE_DM1_CLRMSK 0xFFFFFFFFFFFF00FFULL
+#define ROGUE_CR_MTS_BGCTX_COUNTED_SCHEDULE_DM0_SHIFT 0U
+#define ROGUE_CR_MTS_BGCTX_COUNTED_SCHEDULE_DM0_CLRMSK 0xFFFFFFFFFFFFFF00ULL
+
+/* Register ROGUE_CR_MTS_GPU_INT_STATUS */
+#define ROGUE_CR_MTS_GPU_INT_STATUS 0x0BB0U
+#define ROGUE_CR_MTS_GPU_INT_STATUS_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_MTS_GPU_INT_STATUS_STATUS_SHIFT 0U
+#define ROGUE_CR_MTS_GPU_INT_STATUS_STATUS_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_MTS_SCHEDULE_ENABLE */
+#define ROGUE_CR_MTS_SCHEDULE_ENABLE 0x0BC8U
+#define ROGUE_CR_MTS_SCHEDULE_ENABLE_MASKFULL 0x00000000000000FFULL
+#define ROGUE_CR_MTS_SCHEDULE_ENABLE_MASK_SHIFT 0U
+#define ROGUE_CR_MTS_SCHEDULE_ENABLE_MASK_CLRMSK 0xFFFFFF00U
+
+/* Register ROGUE_CR_IRQ_OS0_EVENT_STATUS */
+#define ROGUE_CR_IRQ_OS0_EVENT_STATUS 0x0BD8U
+#define ROGUE_CR_IRQ_OS0_EVENT_STATUS_MASKFULL 0x0000000000000001ULL
+#define ROGUE_CR_IRQ_OS0_EVENT_STATUS_SOURCE_SHIFT 0U
+#define ROGUE_CR_IRQ_OS0_EVENT_STATUS_SOURCE_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_IRQ_OS0_EVENT_STATUS_SOURCE_EN 0x00000001U
+
+/* Register ROGUE_CR_IRQ_OS0_EVENT_CLEAR */
+#define ROGUE_CR_IRQ_OS0_EVENT_CLEAR 0x0BE8U
+#define ROGUE_CR_IRQ_OS0_EVENT_CLEAR_MASKFULL 0x0000000000000001ULL
+#define ROGUE_CR_IRQ_OS0_EVENT_CLEAR_SOURCE_SHIFT 0U
+#define ROGUE_CR_IRQ_OS0_EVENT_CLEAR_SOURCE_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_IRQ_OS0_EVENT_CLEAR_SOURCE_EN 0x00000001U
+
+/* Register ROGUE_CR_IRQ_OS1_EVENT_STATUS */
+#define ROGUE_CR_IRQ_OS1_EVENT_STATUS 0x10BD8U
+#define ROGUE_CR_IRQ_OS1_EVENT_STATUS_MASKFULL 0x0000000000000001ULL
+#define ROGUE_CR_IRQ_OS1_EVENT_STATUS_SOURCE_SHIFT 0U
+#define ROGUE_CR_IRQ_OS1_EVENT_STATUS_SOURCE_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_IRQ_OS1_EVENT_STATUS_SOURCE_EN 0x00000001U
+
+/* Register ROGUE_CR_IRQ_OS1_EVENT_CLEAR */
+#define ROGUE_CR_IRQ_OS1_EVENT_CLEAR 0x10BE8U
+#define ROGUE_CR_IRQ_OS1_EVENT_CLEAR_MASKFULL 0x0000000000000001ULL
+#define ROGUE_CR_IRQ_OS1_EVENT_CLEAR_SOURCE_SHIFT 0U
+#define ROGUE_CR_IRQ_OS1_EVENT_CLEAR_SOURCE_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_IRQ_OS1_EVENT_CLEAR_SOURCE_EN 0x00000001U
+
+/* Register ROGUE_CR_IRQ_OS2_EVENT_STATUS */
+#define ROGUE_CR_IRQ_OS2_EVENT_STATUS 0x20BD8U
+#define ROGUE_CR_IRQ_OS2_EVENT_STATUS_MASKFULL 0x0000000000000001ULL
+#define ROGUE_CR_IRQ_OS2_EVENT_STATUS_SOURCE_SHIFT 0U
+#define ROGUE_CR_IRQ_OS2_EVENT_STATUS_SOURCE_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_IRQ_OS2_EVENT_STATUS_SOURCE_EN 0x00000001U
+
+/* Register ROGUE_CR_IRQ_OS2_EVENT_CLEAR */
+#define ROGUE_CR_IRQ_OS2_EVENT_CLEAR 0x20BE8U
+#define ROGUE_CR_IRQ_OS2_EVENT_CLEAR_MASKFULL 0x0000000000000001ULL
+#define ROGUE_CR_IRQ_OS2_EVENT_CLEAR_SOURCE_SHIFT 0U
+#define ROGUE_CR_IRQ_OS2_EVENT_CLEAR_SOURCE_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_IRQ_OS2_EVENT_CLEAR_SOURCE_EN 0x00000001U
+
+/* Register ROGUE_CR_IRQ_OS3_EVENT_STATUS */
+#define ROGUE_CR_IRQ_OS3_EVENT_STATUS 0x30BD8U
+#define ROGUE_CR_IRQ_OS3_EVENT_STATUS_MASKFULL 0x0000000000000001ULL
+#define ROGUE_CR_IRQ_OS3_EVENT_STATUS_SOURCE_SHIFT 0U
+#define ROGUE_CR_IRQ_OS3_EVENT_STATUS_SOURCE_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_IRQ_OS3_EVENT_STATUS_SOURCE_EN 0x00000001U
+
+/* Register ROGUE_CR_IRQ_OS3_EVENT_CLEAR */
+#define ROGUE_CR_IRQ_OS3_EVENT_CLEAR 0x30BE8U
+#define ROGUE_CR_IRQ_OS3_EVENT_CLEAR_MASKFULL 0x0000000000000001ULL
+#define ROGUE_CR_IRQ_OS3_EVENT_CLEAR_SOURCE_SHIFT 0U
+#define ROGUE_CR_IRQ_OS3_EVENT_CLEAR_SOURCE_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_IRQ_OS3_EVENT_CLEAR_SOURCE_EN 0x00000001U
+
+/* Register ROGUE_CR_IRQ_OS4_EVENT_STATUS */
+#define ROGUE_CR_IRQ_OS4_EVENT_STATUS 0x40BD8U
+#define ROGUE_CR_IRQ_OS4_EVENT_STATUS_MASKFULL 0x0000000000000001ULL
+#define ROGUE_CR_IRQ_OS4_EVENT_STATUS_SOURCE_SHIFT 0U
+#define ROGUE_CR_IRQ_OS4_EVENT_STATUS_SOURCE_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_IRQ_OS4_EVENT_STATUS_SOURCE_EN 0x00000001U
+
+/* Register ROGUE_CR_IRQ_OS4_EVENT_CLEAR */
+#define ROGUE_CR_IRQ_OS4_EVENT_CLEAR 0x40BE8U
+#define ROGUE_CR_IRQ_OS4_EVENT_CLEAR_MASKFULL 0x0000000000000001ULL
+#define ROGUE_CR_IRQ_OS4_EVENT_CLEAR_SOURCE_SHIFT 0U
+#define ROGUE_CR_IRQ_OS4_EVENT_CLEAR_SOURCE_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_IRQ_OS4_EVENT_CLEAR_SOURCE_EN 0x00000001U
+
+/* Register ROGUE_CR_IRQ_OS5_EVENT_STATUS */
+#define ROGUE_CR_IRQ_OS5_EVENT_STATUS 0x50BD8U
+#define ROGUE_CR_IRQ_OS5_EVENT_STATUS_MASKFULL 0x0000000000000001ULL
+#define ROGUE_CR_IRQ_OS5_EVENT_STATUS_SOURCE_SHIFT 0U
+#define ROGUE_CR_IRQ_OS5_EVENT_STATUS_SOURCE_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_IRQ_OS5_EVENT_STATUS_SOURCE_EN 0x00000001U
+
+/* Register ROGUE_CR_IRQ_OS5_EVENT_CLEAR */
+#define ROGUE_CR_IRQ_OS5_EVENT_CLEAR 0x50BE8U
+#define ROGUE_CR_IRQ_OS5_EVENT_CLEAR_MASKFULL 0x0000000000000001ULL
+#define ROGUE_CR_IRQ_OS5_EVENT_CLEAR_SOURCE_SHIFT 0U
+#define ROGUE_CR_IRQ_OS5_EVENT_CLEAR_SOURCE_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_IRQ_OS5_EVENT_CLEAR_SOURCE_EN 0x00000001U
+
+/* Register ROGUE_CR_IRQ_OS6_EVENT_STATUS */
+#define ROGUE_CR_IRQ_OS6_EVENT_STATUS 0x60BD8U
+#define ROGUE_CR_IRQ_OS6_EVENT_STATUS_MASKFULL 0x0000000000000001ULL
+#define ROGUE_CR_IRQ_OS6_EVENT_STATUS_SOURCE_SHIFT 0U
+#define ROGUE_CR_IRQ_OS6_EVENT_STATUS_SOURCE_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_IRQ_OS6_EVENT_STATUS_SOURCE_EN 0x00000001U
+
+/* Register ROGUE_CR_IRQ_OS6_EVENT_CLEAR */
+#define ROGUE_CR_IRQ_OS6_EVENT_CLEAR 0x60BE8U
+#define ROGUE_CR_IRQ_OS6_EVENT_CLEAR_MASKFULL 0x0000000000000001ULL
+#define ROGUE_CR_IRQ_OS6_EVENT_CLEAR_SOURCE_SHIFT 0U
+#define ROGUE_CR_IRQ_OS6_EVENT_CLEAR_SOURCE_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_IRQ_OS6_EVENT_CLEAR_SOURCE_EN 0x00000001U
+
+/* Register ROGUE_CR_IRQ_OS7_EVENT_STATUS */
+#define ROGUE_CR_IRQ_OS7_EVENT_STATUS 0x70BD8U
+#define ROGUE_CR_IRQ_OS7_EVENT_STATUS_MASKFULL 0x0000000000000001ULL
+#define ROGUE_CR_IRQ_OS7_EVENT_STATUS_SOURCE_SHIFT 0U
+#define ROGUE_CR_IRQ_OS7_EVENT_STATUS_SOURCE_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_IRQ_OS7_EVENT_STATUS_SOURCE_EN 0x00000001U
+
+/* Register ROGUE_CR_IRQ_OS7_EVENT_CLEAR */
+#define ROGUE_CR_IRQ_OS7_EVENT_CLEAR 0x70BE8U
+#define ROGUE_CR_IRQ_OS7_EVENT_CLEAR_MASKFULL 0x0000000000000001ULL
+#define ROGUE_CR_IRQ_OS7_EVENT_CLEAR_SOURCE_SHIFT 0U
+#define ROGUE_CR_IRQ_OS7_EVENT_CLEAR_SOURCE_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_IRQ_OS7_EVENT_CLEAR_SOURCE_EN 0x00000001U
+
+/* Register ROGUE_CR_META_BOOT */
+#define ROGUE_CR_META_BOOT 0x0BF8U
+#define ROGUE_CR_META_BOOT_MASKFULL 0x0000000000000001ULL
+#define ROGUE_CR_META_BOOT_MODE_SHIFT 0U
+#define ROGUE_CR_META_BOOT_MODE_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_META_BOOT_MODE_EN 0x00000001U
+
+/* Register ROGUE_CR_GARTEN_SLC */
+#define ROGUE_CR_GARTEN_SLC 0x0BB8U
+#define ROGUE_CR_GARTEN_SLC_MASKFULL 0x0000000000000001ULL
+#define ROGUE_CR_GARTEN_SLC_FORCE_COHERENCY_SHIFT 0U
+#define ROGUE_CR_GARTEN_SLC_FORCE_COHERENCY_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_GARTEN_SLC_FORCE_COHERENCY_EN 0x00000001U
+
+/* Register ROGUE_CR_PPP */
+#define ROGUE_CR_PPP 0x0CD0U
+#define ROGUE_CR_PPP_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_PPP_CHECKSUM_SHIFT 0U
+#define ROGUE_CR_PPP_CHECKSUM_CLRMSK 0x00000000U
+
+#define ROGUE_CR_ISP_RENDER_DIR_TYPE_MASK 0x00000003U
+/* Top-left to bottom-right */
+#define ROGUE_CR_ISP_RENDER_DIR_TYPE_TL2BR 0x00000000U
+/* Top-right to bottom-left */
+#define ROGUE_CR_ISP_RENDER_DIR_TYPE_TR2BL 0x00000001U
+/* Bottom-left to top-right */
+#define ROGUE_CR_ISP_RENDER_DIR_TYPE_BL2TR 0x00000002U
+/* Bottom-right to top-left */
+#define ROGUE_CR_ISP_RENDER_DIR_TYPE_BR2TL 0x00000003U
+
+#define ROGUE_CR_ISP_RENDER_MODE_TYPE_MASK 0x00000003U
+/* Normal render */
+#define ROGUE_CR_ISP_RENDER_MODE_TYPE_NORM 0x00000000U
+/* Fast 2D render */
+#define ROGUE_CR_ISP_RENDER_MODE_TYPE_FAST_2D 0x00000002U
+/* Fast scale render */
+#define ROGUE_CR_ISP_RENDER_MODE_TYPE_FAST_SCALE 0x00000003U
+
+/* Register ROGUE_CR_ISP_RENDER */
+#define ROGUE_CR_ISP_RENDER 0x0F08U
+#define ROGUE_CR_ISP_RENDER_MASKFULL 0x00000000000001FFULL
+#define ROGUE_CR_ISP_RENDER_FAST_RENDER_FORCE_PROTECT_SHIFT 8U
+#define ROGUE_CR_ISP_RENDER_FAST_RENDER_FORCE_PROTECT_CLRMSK 0xFFFFFEFFU
+#define ROGUE_CR_ISP_RENDER_FAST_RENDER_FORCE_PROTECT_EN 0x00000100U
+#define ROGUE_CR_ISP_RENDER_PROCESS_PROTECTED_TILES_SHIFT 7U
+#define ROGUE_CR_ISP_RENDER_PROCESS_PROTECTED_TILES_CLRMSK 0xFFFFFF7FU
+#define ROGUE_CR_ISP_RENDER_PROCESS_PROTECTED_TILES_EN 0x00000080U
+#define ROGUE_CR_ISP_RENDER_PROCESS_UNPROTECTED_TILES_SHIFT 6U
+#define ROGUE_CR_ISP_RENDER_PROCESS_UNPROTECTED_TILES_CLRMSK 0xFFFFFFBFU
+#define ROGUE_CR_ISP_RENDER_PROCESS_UNPROTECTED_TILES_EN 0x00000040U
+#define ROGUE_CR_ISP_RENDER_DISABLE_EOMT_SHIFT 5U
+#define ROGUE_CR_ISP_RENDER_DISABLE_EOMT_CLRMSK 0xFFFFFFDFU
+#define ROGUE_CR_ISP_RENDER_DISABLE_EOMT_EN 0x00000020U
+#define ROGUE_CR_ISP_RENDER_RESUME_SHIFT 4U
+#define ROGUE_CR_ISP_RENDER_RESUME_CLRMSK 0xFFFFFFEFU
+#define ROGUE_CR_ISP_RENDER_RESUME_EN 0x00000010U
+#define ROGUE_CR_ISP_RENDER_DIR_SHIFT 2U
+#define ROGUE_CR_ISP_RENDER_DIR_CLRMSK 0xFFFFFFF3U
+#define ROGUE_CR_ISP_RENDER_DIR_TL2BR 0x00000000U
+#define ROGUE_CR_ISP_RENDER_DIR_TR2BL 0x00000004U
+#define ROGUE_CR_ISP_RENDER_DIR_BL2TR 0x00000008U
+#define ROGUE_CR_ISP_RENDER_DIR_BR2TL 0x0000000CU
+#define ROGUE_CR_ISP_RENDER_MODE_SHIFT 0U
+#define ROGUE_CR_ISP_RENDER_MODE_CLRMSK 0xFFFFFFFCU
+#define ROGUE_CR_ISP_RENDER_MODE_NORM 0x00000000U
+#define ROGUE_CR_ISP_RENDER_MODE_FAST_2D 0x00000002U
+#define ROGUE_CR_ISP_RENDER_MODE_FAST_SCALE 0x00000003U
+
+/* Register ROGUE_CR_ISP_CTL */
+#define ROGUE_CR_ISP_CTL 0x0F38U
+#define ROGUE_CR_ISP_CTL_MASKFULL 0x00000000FFFFF3FFULL
+#define ROGUE_CR_ISP_CTL_SKIP_INIT_HDRS_SHIFT 31U
+#define ROGUE_CR_ISP_CTL_SKIP_INIT_HDRS_CLRMSK 0x7FFFFFFFU
+#define ROGUE_CR_ISP_CTL_SKIP_INIT_HDRS_EN 0x80000000U
+#define ROGUE_CR_ISP_CTL_LINE_STYLE_SHIFT 30U
+#define ROGUE_CR_ISP_CTL_LINE_STYLE_CLRMSK 0xBFFFFFFFU
+#define ROGUE_CR_ISP_CTL_LINE_STYLE_EN 0x40000000U
+#define ROGUE_CR_ISP_CTL_LINE_STYLE_PIX_SHIFT 29U
+#define ROGUE_CR_ISP_CTL_LINE_STYLE_PIX_CLRMSK 0xDFFFFFFFU
+#define ROGUE_CR_ISP_CTL_LINE_STYLE_PIX_EN 0x20000000U
+#define ROGUE_CR_ISP_CTL_PAIR_TILES_VERT_SHIFT 28U
+#define ROGUE_CR_ISP_CTL_PAIR_TILES_VERT_CLRMSK 0xEFFFFFFFU
+#define ROGUE_CR_ISP_CTL_PAIR_TILES_VERT_EN 0x10000000U
+#define ROGUE_CR_ISP_CTL_PAIR_TILES_SHIFT 27U
+#define ROGUE_CR_ISP_CTL_PAIR_TILES_CLRMSK 0xF7FFFFFFU
+#define ROGUE_CR_ISP_CTL_PAIR_TILES_EN 0x08000000U
+#define ROGUE_CR_ISP_CTL_CREQ_BUF_EN_SHIFT 26U
+#define ROGUE_CR_ISP_CTL_CREQ_BUF_EN_CLRMSK 0xFBFFFFFFU
+#define ROGUE_CR_ISP_CTL_CREQ_BUF_EN_EN 0x04000000U
+#define ROGUE_CR_ISP_CTL_TILE_AGE_EN_SHIFT 25U
+#define ROGUE_CR_ISP_CTL_TILE_AGE_EN_CLRMSK 0xFDFFFFFFU
+#define ROGUE_CR_ISP_CTL_TILE_AGE_EN_EN 0x02000000U
+#define ROGUE_CR_ISP_CTL_ISP_SAMPLE_POS_MODE_SHIFT 23U
+#define ROGUE_CR_ISP_CTL_ISP_SAMPLE_POS_MODE_CLRMSK 0xFE7FFFFFU
+#define ROGUE_CR_ISP_CTL_ISP_SAMPLE_POS_MODE_DX9 0x00000000U
+#define ROGUE_CR_ISP_CTL_ISP_SAMPLE_POS_MODE_DX10 0x00800000U
+#define ROGUE_CR_ISP_CTL_ISP_SAMPLE_POS_MODE_OGL 0x01000000U
+#define ROGUE_CR_ISP_CTL_NUM_TILES_PER_USC_SHIFT 21U
+#define ROGUE_CR_ISP_CTL_NUM_TILES_PER_USC_CLRMSK 0xFF9FFFFFU
+#define ROGUE_CR_ISP_CTL_DBIAS_IS_INT_SHIFT 20U
+#define ROGUE_CR_ISP_CTL_DBIAS_IS_INT_CLRMSK 0xFFEFFFFFU
+#define ROGUE_CR_ISP_CTL_DBIAS_IS_INT_EN 0x00100000U
+#define ROGUE_CR_ISP_CTL_OVERLAP_CHECK_MODE_SHIFT 19U
+#define ROGUE_CR_ISP_CTL_OVERLAP_CHECK_MODE_CLRMSK 0xFFF7FFFFU
+#define ROGUE_CR_ISP_CTL_OVERLAP_CHECK_MODE_EN 0x00080000U
+#define ROGUE_CR_ISP_CTL_PT_UPFRONT_DEPTH_DISABLE_SHIFT 18U
+#define ROGUE_CR_ISP_CTL_PT_UPFRONT_DEPTH_DISABLE_CLRMSK 0xFFFBFFFFU
+#define ROGUE_CR_ISP_CTL_PT_UPFRONT_DEPTH_DISABLE_EN 0x00040000U
+#define ROGUE_CR_ISP_CTL_PROCESS_EMPTY_TILES_SHIFT 17U
+#define ROGUE_CR_ISP_CTL_PROCESS_EMPTY_TILES_CLRMSK 0xFFFDFFFFU
+#define ROGUE_CR_ISP_CTL_PROCESS_EMPTY_TILES_EN 0x00020000U
+#define ROGUE_CR_ISP_CTL_SAMPLE_POS_SHIFT 16U
+#define ROGUE_CR_ISP_CTL_SAMPLE_POS_CLRMSK 0xFFFEFFFFU
+#define ROGUE_CR_ISP_CTL_SAMPLE_POS_EN 0x00010000U
+#define ROGUE_CR_ISP_CTL_PIPE_ENABLE_SHIFT 12U
+#define ROGUE_CR_ISP_CTL_PIPE_ENABLE_CLRMSK 0xFFFF0FFFU
+#define ROGUE_CR_ISP_CTL_PIPE_ENABLE_PIPE_ONE 0x00000000U
+#define ROGUE_CR_ISP_CTL_PIPE_ENABLE_PIPE_TWO 0x00001000U
+#define ROGUE_CR_ISP_CTL_PIPE_ENABLE_PIPE_THREE 0x00002000U
+#define ROGUE_CR_ISP_CTL_PIPE_ENABLE_PIPE_FOUR 0x00003000U
+#define ROGUE_CR_ISP_CTL_PIPE_ENABLE_PIPE_FIVE 0x00004000U
+#define ROGUE_CR_ISP_CTL_PIPE_ENABLE_PIPE_SIX 0x00005000U
+#define ROGUE_CR_ISP_CTL_PIPE_ENABLE_PIPE_SEVEN 0x00006000U
+#define ROGUE_CR_ISP_CTL_PIPE_ENABLE_PIPE_EIGHT 0x00007000U
+#define ROGUE_CR_ISP_CTL_PIPE_ENABLE_PIPE_NINE 0x00008000U
+#define ROGUE_CR_ISP_CTL_PIPE_ENABLE_PIPE_TEN 0x00009000U
+#define ROGUE_CR_ISP_CTL_PIPE_ENABLE_PIPE_ELEVEN 0x0000A000U
+#define ROGUE_CR_ISP_CTL_PIPE_ENABLE_PIPE_TWELVE 0x0000B000U
+#define ROGUE_CR_ISP_CTL_PIPE_ENABLE_PIPE_THIRTEEN 0x0000C000U
+#define ROGUE_CR_ISP_CTL_PIPE_ENABLE_PIPE_FOURTEEN 0x0000D000U
+#define ROGUE_CR_ISP_CTL_PIPE_ENABLE_PIPE_FIFTEEN 0x0000E000U
+#define ROGUE_CR_ISP_CTL_PIPE_ENABLE_PIPE_SIXTEEN 0x0000F000U
+#define ROGUE_CR_ISP_CTL_VALID_ID_SHIFT 4U
+#define ROGUE_CR_ISP_CTL_VALID_ID_CLRMSK 0xFFFFFC0FU
+#define ROGUE_CR_ISP_CTL_UPASS_START_SHIFT 0U
+#define ROGUE_CR_ISP_CTL_UPASS_START_CLRMSK 0xFFFFFFF0U
+
+/* Register ROGUE_CR_ISP_STATUS */
+#define ROGUE_CR_ISP_STATUS 0x1038U
+#define ROGUE_CR_ISP_STATUS_MASKFULL 0x0000000000000007ULL
+#define ROGUE_CR_ISP_STATUS_SPLIT_MAX_SHIFT 2U
+#define ROGUE_CR_ISP_STATUS_SPLIT_MAX_CLRMSK 0xFFFFFFFBU
+#define ROGUE_CR_ISP_STATUS_SPLIT_MAX_EN 0x00000004U
+#define ROGUE_CR_ISP_STATUS_ACTIVE_SHIFT 1U
+#define ROGUE_CR_ISP_STATUS_ACTIVE_CLRMSK 0xFFFFFFFDU
+#define ROGUE_CR_ISP_STATUS_ACTIVE_EN 0x00000002U
+#define ROGUE_CR_ISP_STATUS_EOR_SHIFT 0U
+#define ROGUE_CR_ISP_STATUS_EOR_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_ISP_STATUS_EOR_EN 0x00000001U
+
+/* Register group: ROGUE_CR_ISP_XTP_RESUME, with 64 repeats */
+#define ROGUE_CR_ISP_XTP_RESUME_REPEATCOUNT 64U
+/* Register ROGUE_CR_ISP_XTP_RESUME0 */
+#define ROGUE_CR_ISP_XTP_RESUME0 0x3A00U
+#define ROGUE_CR_ISP_XTP_RESUME0_MASKFULL 0x00000000003FF3FFULL
+#define ROGUE_CR_ISP_XTP_RESUME0_TILE_X_SHIFT 12U
+#define ROGUE_CR_ISP_XTP_RESUME0_TILE_X_CLRMSK 0xFFC00FFFU
+#define ROGUE_CR_ISP_XTP_RESUME0_TILE_Y_SHIFT 0U
+#define ROGUE_CR_ISP_XTP_RESUME0_TILE_Y_CLRMSK 0xFFFFFC00U
+
+/* Register group: ROGUE_CR_ISP_XTP_STORE, with 32 repeats */
+#define ROGUE_CR_ISP_XTP_STORE_REPEATCOUNT 32U
+/* Register ROGUE_CR_ISP_XTP_STORE0 */
+#define ROGUE_CR_ISP_XTP_STORE0 0x3C00U
+#define ROGUE_CR_ISP_XTP_STORE0_MASKFULL 0x000000007F3FF3FFULL
+#define ROGUE_CR_ISP_XTP_STORE0_ACTIVE_SHIFT 30U
+#define ROGUE_CR_ISP_XTP_STORE0_ACTIVE_CLRMSK 0xBFFFFFFFU
+#define ROGUE_CR_ISP_XTP_STORE0_ACTIVE_EN 0x40000000U
+#define ROGUE_CR_ISP_XTP_STORE0_EOR_SHIFT 29U
+#define ROGUE_CR_ISP_XTP_STORE0_EOR_CLRMSK 0xDFFFFFFFU
+#define ROGUE_CR_ISP_XTP_STORE0_EOR_EN 0x20000000U
+#define ROGUE_CR_ISP_XTP_STORE0_TILE_LAST_SHIFT 28U
+#define ROGUE_CR_ISP_XTP_STORE0_TILE_LAST_CLRMSK 0xEFFFFFFFU
+#define ROGUE_CR_ISP_XTP_STORE0_TILE_LAST_EN 0x10000000U
+#define ROGUE_CR_ISP_XTP_STORE0_MT_SHIFT 24U
+#define ROGUE_CR_ISP_XTP_STORE0_MT_CLRMSK 0xF0FFFFFFU
+#define ROGUE_CR_ISP_XTP_STORE0_TILE_X_SHIFT 12U
+#define ROGUE_CR_ISP_XTP_STORE0_TILE_X_CLRMSK 0xFFC00FFFU
+#define ROGUE_CR_ISP_XTP_STORE0_TILE_Y_SHIFT 0U
+#define ROGUE_CR_ISP_XTP_STORE0_TILE_Y_CLRMSK 0xFFFFFC00U
+
+/* Register group: ROGUE_CR_BIF_CAT_BASE, with 8 repeats */
+#define ROGUE_CR_BIF_CAT_BASE_REPEATCOUNT 8U
+/* Register ROGUE_CR_BIF_CAT_BASE0 */
+#define ROGUE_CR_BIF_CAT_BASE0 0x1200U
+#define ROGUE_CR_BIF_CAT_BASE0_MASKFULL 0x000000FFFFFFF000ULL
+#define ROGUE_CR_BIF_CAT_BASE0_ADDR_SHIFT 12U
+#define ROGUE_CR_BIF_CAT_BASE0_ADDR_CLRMSK 0xFFFFFF0000000FFFULL
+#define ROGUE_CR_BIF_CAT_BASE0_ADDR_ALIGNSHIFT 12U
+#define ROGUE_CR_BIF_CAT_BASE0_ADDR_ALIGNSIZE 4096U
+
+/* Register ROGUE_CR_BIF_CAT_BASE1 */
+#define ROGUE_CR_BIF_CAT_BASE1 0x1208U
+#define ROGUE_CR_BIF_CAT_BASE1_MASKFULL 0x000000FFFFFFF000ULL
+#define ROGUE_CR_BIF_CAT_BASE1_ADDR_SHIFT 12U
+#define ROGUE_CR_BIF_CAT_BASE1_ADDR_CLRMSK 0xFFFFFF0000000FFFULL
+#define ROGUE_CR_BIF_CAT_BASE1_ADDR_ALIGNSHIFT 12U
+#define ROGUE_CR_BIF_CAT_BASE1_ADDR_ALIGNSIZE 4096U
+
+/* Register ROGUE_CR_BIF_CAT_BASE2 */
+#define ROGUE_CR_BIF_CAT_BASE2 0x1210U
+#define ROGUE_CR_BIF_CAT_BASE2_MASKFULL 0x000000FFFFFFF000ULL
+#define ROGUE_CR_BIF_CAT_BASE2_ADDR_SHIFT 12U
+#define ROGUE_CR_BIF_CAT_BASE2_ADDR_CLRMSK 0xFFFFFF0000000FFFULL
+#define ROGUE_CR_BIF_CAT_BASE2_ADDR_ALIGNSHIFT 12U
+#define ROGUE_CR_BIF_CAT_BASE2_ADDR_ALIGNSIZE 4096U
+
+/* Register ROGUE_CR_BIF_CAT_BASE3 */
+#define ROGUE_CR_BIF_CAT_BASE3 0x1218U
+#define ROGUE_CR_BIF_CAT_BASE3_MASKFULL 0x000000FFFFFFF000ULL
+#define ROGUE_CR_BIF_CAT_BASE3_ADDR_SHIFT 12U
+#define ROGUE_CR_BIF_CAT_BASE3_ADDR_CLRMSK 0xFFFFFF0000000FFFULL
+#define ROGUE_CR_BIF_CAT_BASE3_ADDR_ALIGNSHIFT 12U
+#define ROGUE_CR_BIF_CAT_BASE3_ADDR_ALIGNSIZE 4096U
+
+/* Register ROGUE_CR_BIF_CAT_BASE4 */
+#define ROGUE_CR_BIF_CAT_BASE4 0x1220U
+#define ROGUE_CR_BIF_CAT_BASE4_MASKFULL 0x000000FFFFFFF000ULL
+#define ROGUE_CR_BIF_CAT_BASE4_ADDR_SHIFT 12U
+#define ROGUE_CR_BIF_CAT_BASE4_ADDR_CLRMSK 0xFFFFFF0000000FFFULL
+#define ROGUE_CR_BIF_CAT_BASE4_ADDR_ALIGNSHIFT 12U
+#define ROGUE_CR_BIF_CAT_BASE4_ADDR_ALIGNSIZE 4096U
+
+/* Register ROGUE_CR_BIF_CAT_BASE5 */
+#define ROGUE_CR_BIF_CAT_BASE5 0x1228U
+#define ROGUE_CR_BIF_CAT_BASE5_MASKFULL 0x000000FFFFFFF000ULL
+#define ROGUE_CR_BIF_CAT_BASE5_ADDR_SHIFT 12U
+#define ROGUE_CR_BIF_CAT_BASE5_ADDR_CLRMSK 0xFFFFFF0000000FFFULL
+#define ROGUE_CR_BIF_CAT_BASE5_ADDR_ALIGNSHIFT 12U
+#define ROGUE_CR_BIF_CAT_BASE5_ADDR_ALIGNSIZE 4096U
+
+/* Register ROGUE_CR_BIF_CAT_BASE6 */
+#define ROGUE_CR_BIF_CAT_BASE6 0x1230U
+#define ROGUE_CR_BIF_CAT_BASE6_MASKFULL 0x000000FFFFFFF000ULL
+#define ROGUE_CR_BIF_CAT_BASE6_ADDR_SHIFT 12U
+#define ROGUE_CR_BIF_CAT_BASE6_ADDR_CLRMSK 0xFFFFFF0000000FFFULL
+#define ROGUE_CR_BIF_CAT_BASE6_ADDR_ALIGNSHIFT 12U
+#define ROGUE_CR_BIF_CAT_BASE6_ADDR_ALIGNSIZE 4096U
+
+/* Register ROGUE_CR_BIF_CAT_BASE7 */
+#define ROGUE_CR_BIF_CAT_BASE7 0x1238U
+#define ROGUE_CR_BIF_CAT_BASE7_MASKFULL 0x000000FFFFFFF000ULL
+#define ROGUE_CR_BIF_CAT_BASE7_ADDR_SHIFT 12U
+#define ROGUE_CR_BIF_CAT_BASE7_ADDR_CLRMSK 0xFFFFFF0000000FFFULL
+#define ROGUE_CR_BIF_CAT_BASE7_ADDR_ALIGNSHIFT 12U
+#define ROGUE_CR_BIF_CAT_BASE7_ADDR_ALIGNSIZE 4096U
+
+/* Register ROGUE_CR_BIF_CAT_BASE_INDEX */
+#define ROGUE_CR_BIF_CAT_BASE_INDEX 0x1240U
+#define ROGUE_CR_BIF_CAT_BASE_INDEX_MASKFULL 0x00070707073F0707ULL
+#define ROGUE_CR_BIF_CAT_BASE_INDEX_RVTX_SHIFT 48U
+#define ROGUE_CR_BIF_CAT_BASE_INDEX_RVTX_CLRMSK 0xFFF8FFFFFFFFFFFFULL
+#define ROGUE_CR_BIF_CAT_BASE_INDEX_RAY_SHIFT 40U
+#define ROGUE_CR_BIF_CAT_BASE_INDEX_RAY_CLRMSK 0xFFFFF8FFFFFFFFFFULL
+#define ROGUE_CR_BIF_CAT_BASE_INDEX_HOST_SHIFT 32U
+#define ROGUE_CR_BIF_CAT_BASE_INDEX_HOST_CLRMSK 0xFFFFFFF8FFFFFFFFULL
+#define ROGUE_CR_BIF_CAT_BASE_INDEX_TLA_SHIFT 24U
+#define ROGUE_CR_BIF_CAT_BASE_INDEX_TLA_CLRMSK 0xFFFFFFFFF8FFFFFFULL
+#define ROGUE_CR_BIF_CAT_BASE_INDEX_TDM_SHIFT 19U
+#define ROGUE_CR_BIF_CAT_BASE_INDEX_TDM_CLRMSK 0xFFFFFFFFFFC7FFFFULL
+#define ROGUE_CR_BIF_CAT_BASE_INDEX_CDM_SHIFT 16U
+#define ROGUE_CR_BIF_CAT_BASE_INDEX_CDM_CLRMSK 0xFFFFFFFFFFF8FFFFULL
+#define ROGUE_CR_BIF_CAT_BASE_INDEX_PIXEL_SHIFT 8U
+#define ROGUE_CR_BIF_CAT_BASE_INDEX_PIXEL_CLRMSK 0xFFFFFFFFFFFFF8FFULL
+#define ROGUE_CR_BIF_CAT_BASE_INDEX_TA_SHIFT 0U
+#define ROGUE_CR_BIF_CAT_BASE_INDEX_TA_CLRMSK 0xFFFFFFFFFFFFFFF8ULL
+
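+/*
+ * Illustrative usage sketch (an assumption, not part of the generated
+ * register description): a field is typically updated by clearing it with
+ * its _CLRMSK constant and or-ing in the new value shifted by its _SHIFT
+ * constant. The variables below (reg, ta_base) are hypothetical.
+ *
+ *   reg &= ROGUE_CR_BIF_CAT_BASE_INDEX_TA_CLRMSK;
+ *   reg |= (u64)ta_base << ROGUE_CR_BIF_CAT_BASE_INDEX_TA_SHIFT;
+ */
+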
+/* Register ROGUE_CR_BIF_PM_CAT_BASE_VCE0 */
+#define ROGUE_CR_BIF_PM_CAT_BASE_VCE0 0x1248U
+#define ROGUE_CR_BIF_PM_CAT_BASE_VCE0_MASKFULL 0x0FFFFFFFFFFFF003ULL
+#define ROGUE_CR_BIF_PM_CAT_BASE_VCE0_INIT_PAGE_SHIFT 40U
+#define ROGUE_CR_BIF_PM_CAT_BASE_VCE0_INIT_PAGE_CLRMSK 0xF00000FFFFFFFFFFULL
+#define ROGUE_CR_BIF_PM_CAT_BASE_VCE0_ADDR_SHIFT 12U
+#define ROGUE_CR_BIF_PM_CAT_BASE_VCE0_ADDR_CLRMSK 0xFFFFFF0000000FFFULL
+#define ROGUE_CR_BIF_PM_CAT_BASE_VCE0_WRAP_SHIFT 1U
+#define ROGUE_CR_BIF_PM_CAT_BASE_VCE0_WRAP_CLRMSK 0xFFFFFFFFFFFFFFFDULL
+#define ROGUE_CR_BIF_PM_CAT_BASE_VCE0_WRAP_EN 0x0000000000000002ULL
+#define ROGUE_CR_BIF_PM_CAT_BASE_VCE0_VALID_SHIFT 0U
+#define ROGUE_CR_BIF_PM_CAT_BASE_VCE0_VALID_CLRMSK 0xFFFFFFFFFFFFFFFEULL
+#define ROGUE_CR_BIF_PM_CAT_BASE_VCE0_VALID_EN 0x0000000000000001ULL
+
+/* Register ROGUE_CR_BIF_PM_CAT_BASE_TE0 */
+#define ROGUE_CR_BIF_PM_CAT_BASE_TE0 0x1250U
+#define ROGUE_CR_BIF_PM_CAT_BASE_TE0_MASKFULL 0x0FFFFFFFFFFFF003ULL
+#define ROGUE_CR_BIF_PM_CAT_BASE_TE0_INIT_PAGE_SHIFT 40U
+#define ROGUE_CR_BIF_PM_CAT_BASE_TE0_INIT_PAGE_CLRMSK 0xF00000FFFFFFFFFFULL
+#define ROGUE_CR_BIF_PM_CAT_BASE_TE0_ADDR_SHIFT 12U
+#define ROGUE_CR_BIF_PM_CAT_BASE_TE0_ADDR_CLRMSK 0xFFFFFF0000000FFFULL
+#define ROGUE_CR_BIF_PM_CAT_BASE_TE0_WRAP_SHIFT 1U
+#define ROGUE_CR_BIF_PM_CAT_BASE_TE0_WRAP_CLRMSK 0xFFFFFFFFFFFFFFFDULL
+#define ROGUE_CR_BIF_PM_CAT_BASE_TE0_WRAP_EN 0x0000000000000002ULL
+#define ROGUE_CR_BIF_PM_CAT_BASE_TE0_VALID_SHIFT 0U
+#define ROGUE_CR_BIF_PM_CAT_BASE_TE0_VALID_CLRMSK 0xFFFFFFFFFFFFFFFEULL
+#define ROGUE_CR_BIF_PM_CAT_BASE_TE0_VALID_EN 0x0000000000000001ULL
+
+/* Register ROGUE_CR_BIF_PM_CAT_BASE_ALIST0 */
+#define ROGUE_CR_BIF_PM_CAT_BASE_ALIST0 0x1260U
+#define ROGUE_CR_BIF_PM_CAT_BASE_ALIST0_MASKFULL 0x0FFFFFFFFFFFF003ULL
+#define ROGUE_CR_BIF_PM_CAT_BASE_ALIST0_INIT_PAGE_SHIFT 40U
+#define ROGUE_CR_BIF_PM_CAT_BASE_ALIST0_INIT_PAGE_CLRMSK 0xF00000FFFFFFFFFFULL
+#define ROGUE_CR_BIF_PM_CAT_BASE_ALIST0_ADDR_SHIFT 12U
+#define ROGUE_CR_BIF_PM_CAT_BASE_ALIST0_ADDR_CLRMSK 0xFFFFFF0000000FFFULL
+#define ROGUE_CR_BIF_PM_CAT_BASE_ALIST0_WRAP_SHIFT 1U
+#define ROGUE_CR_BIF_PM_CAT_BASE_ALIST0_WRAP_CLRMSK 0xFFFFFFFFFFFFFFFDULL
+#define ROGUE_CR_BIF_PM_CAT_BASE_ALIST0_WRAP_EN 0x0000000000000002ULL
+#define ROGUE_CR_BIF_PM_CAT_BASE_ALIST0_VALID_SHIFT 0U
+#define ROGUE_CR_BIF_PM_CAT_BASE_ALIST0_VALID_CLRMSK 0xFFFFFFFFFFFFFFFEULL
+#define ROGUE_CR_BIF_PM_CAT_BASE_ALIST0_VALID_EN 0x0000000000000001ULL
+
+/* Register ROGUE_CR_BIF_PM_CAT_BASE_VCE1 */
+#define ROGUE_CR_BIF_PM_CAT_BASE_VCE1 0x1268U
+#define ROGUE_CR_BIF_PM_CAT_BASE_VCE1_MASKFULL 0x0FFFFFFFFFFFF003ULL
+#define ROGUE_CR_BIF_PM_CAT_BASE_VCE1_INIT_PAGE_SHIFT 40U
+#define ROGUE_CR_BIF_PM_CAT_BASE_VCE1_INIT_PAGE_CLRMSK 0xF00000FFFFFFFFFFULL
+#define ROGUE_CR_BIF_PM_CAT_BASE_VCE1_ADDR_SHIFT 12U
+#define ROGUE_CR_BIF_PM_CAT_BASE_VCE1_ADDR_CLRMSK 0xFFFFFF0000000FFFULL
+#define ROGUE_CR_BIF_PM_CAT_BASE_VCE1_WRAP_SHIFT 1U
+#define ROGUE_CR_BIF_PM_CAT_BASE_VCE1_WRAP_CLRMSK 0xFFFFFFFFFFFFFFFDULL
+#define ROGUE_CR_BIF_PM_CAT_BASE_VCE1_WRAP_EN 0x0000000000000002ULL
+#define ROGUE_CR_BIF_PM_CAT_BASE_VCE1_VALID_SHIFT 0U
+#define ROGUE_CR_BIF_PM_CAT_BASE_VCE1_VALID_CLRMSK 0xFFFFFFFFFFFFFFFEULL
+#define ROGUE_CR_BIF_PM_CAT_BASE_VCE1_VALID_EN 0x0000000000000001ULL
+
+/* Register ROGUE_CR_BIF_PM_CAT_BASE_TE1 */
+#define ROGUE_CR_BIF_PM_CAT_BASE_TE1 0x1270U
+#define ROGUE_CR_BIF_PM_CAT_BASE_TE1_MASKFULL 0x0FFFFFFFFFFFF003ULL
+#define ROGUE_CR_BIF_PM_CAT_BASE_TE1_INIT_PAGE_SHIFT 40U
+#define ROGUE_CR_BIF_PM_CAT_BASE_TE1_INIT_PAGE_CLRMSK 0xF00000FFFFFFFFFFULL
+#define ROGUE_CR_BIF_PM_CAT_BASE_TE1_ADDR_SHIFT 12U
+#define ROGUE_CR_BIF_PM_CAT_BASE_TE1_ADDR_CLRMSK 0xFFFFFF0000000FFFULL
+#define ROGUE_CR_BIF_PM_CAT_BASE_TE1_WRAP_SHIFT 1U
+#define ROGUE_CR_BIF_PM_CAT_BASE_TE1_WRAP_CLRMSK 0xFFFFFFFFFFFFFFFDULL
+#define ROGUE_CR_BIF_PM_CAT_BASE_TE1_WRAP_EN 0x0000000000000002ULL
+#define ROGUE_CR_BIF_PM_CAT_BASE_TE1_VALID_SHIFT 0U
+#define ROGUE_CR_BIF_PM_CAT_BASE_TE1_VALID_CLRMSK 0xFFFFFFFFFFFFFFFEULL
+#define ROGUE_CR_BIF_PM_CAT_BASE_TE1_VALID_EN 0x0000000000000001ULL
+
+/* Register ROGUE_CR_BIF_PM_CAT_BASE_ALIST1 */
+#define ROGUE_CR_BIF_PM_CAT_BASE_ALIST1 0x1280U
+#define ROGUE_CR_BIF_PM_CAT_BASE_ALIST1_MASKFULL 0x0FFFFFFFFFFFF003ULL
+#define ROGUE_CR_BIF_PM_CAT_BASE_ALIST1_INIT_PAGE_SHIFT 40U
+#define ROGUE_CR_BIF_PM_CAT_BASE_ALIST1_INIT_PAGE_CLRMSK 0xF00000FFFFFFFFFFULL
+#define ROGUE_CR_BIF_PM_CAT_BASE_ALIST1_ADDR_SHIFT 12U
+#define ROGUE_CR_BIF_PM_CAT_BASE_ALIST1_ADDR_CLRMSK 0xFFFFFF0000000FFFULL
+#define ROGUE_CR_BIF_PM_CAT_BASE_ALIST1_WRAP_SHIFT 1U
+#define ROGUE_CR_BIF_PM_CAT_BASE_ALIST1_WRAP_CLRMSK 0xFFFFFFFFFFFFFFFDULL
+#define ROGUE_CR_BIF_PM_CAT_BASE_ALIST1_WRAP_EN 0x0000000000000002ULL
+#define ROGUE_CR_BIF_PM_CAT_BASE_ALIST1_VALID_SHIFT 0U
+#define ROGUE_CR_BIF_PM_CAT_BASE_ALIST1_VALID_CLRMSK 0xFFFFFFFFFFFFFFFEULL
+#define ROGUE_CR_BIF_PM_CAT_BASE_ALIST1_VALID_EN 0x0000000000000001ULL
+
+/* Register ROGUE_CR_BIF_MMU_ENTRY_STATUS */
+#define ROGUE_CR_BIF_MMU_ENTRY_STATUS 0x1288U
+#define ROGUE_CR_BIF_MMU_ENTRY_STATUS_MASKFULL 0x000000FFFFFFF0F3ULL
+#define ROGUE_CR_BIF_MMU_ENTRY_STATUS_ADDRESS_SHIFT 12U
+#define ROGUE_CR_BIF_MMU_ENTRY_STATUS_ADDRESS_CLRMSK 0xFFFFFF0000000FFFULL
+#define ROGUE_CR_BIF_MMU_ENTRY_STATUS_CAT_BASE_SHIFT 4U
+#define ROGUE_CR_BIF_MMU_ENTRY_STATUS_CAT_BASE_CLRMSK 0xFFFFFFFFFFFFFF0FULL
+#define ROGUE_CR_BIF_MMU_ENTRY_STATUS_DATA_TYPE_SHIFT 0U
+#define ROGUE_CR_BIF_MMU_ENTRY_STATUS_DATA_TYPE_CLRMSK 0xFFFFFFFFFFFFFFFCULL
+
+/* Register ROGUE_CR_BIF_MMU_ENTRY */
+#define ROGUE_CR_BIF_MMU_ENTRY 0x1290U
+#define ROGUE_CR_BIF_MMU_ENTRY_MASKFULL 0x0000000000000003ULL
+#define ROGUE_CR_BIF_MMU_ENTRY_ENABLE_SHIFT 1U
+#define ROGUE_CR_BIF_MMU_ENTRY_ENABLE_CLRMSK 0xFFFFFFFDU
+#define ROGUE_CR_BIF_MMU_ENTRY_ENABLE_EN 0x00000002U
+#define ROGUE_CR_BIF_MMU_ENTRY_PENDING_SHIFT 0U
+#define ROGUE_CR_BIF_MMU_ENTRY_PENDING_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_BIF_MMU_ENTRY_PENDING_EN 0x00000001U
+
+/* Register ROGUE_CR_BIF_CTRL_INVAL */
+#define ROGUE_CR_BIF_CTRL_INVAL 0x12A0U
+#define ROGUE_CR_BIF_CTRL_INVAL_MASKFULL 0x000000000000000FULL
+#define ROGUE_CR_BIF_CTRL_INVAL_TLB1_SHIFT 3U
+#define ROGUE_CR_BIF_CTRL_INVAL_TLB1_CLRMSK 0xFFFFFFF7U
+#define ROGUE_CR_BIF_CTRL_INVAL_TLB1_EN 0x00000008U
+#define ROGUE_CR_BIF_CTRL_INVAL_PC_SHIFT 2U
+#define ROGUE_CR_BIF_CTRL_INVAL_PC_CLRMSK 0xFFFFFFFBU
+#define ROGUE_CR_BIF_CTRL_INVAL_PC_EN 0x00000004U
+#define ROGUE_CR_BIF_CTRL_INVAL_PD_SHIFT 1U
+#define ROGUE_CR_BIF_CTRL_INVAL_PD_CLRMSK 0xFFFFFFFDU
+#define ROGUE_CR_BIF_CTRL_INVAL_PD_EN 0x00000002U
+#define ROGUE_CR_BIF_CTRL_INVAL_PT_SHIFT 0U
+#define ROGUE_CR_BIF_CTRL_INVAL_PT_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_BIF_CTRL_INVAL_PT_EN 0x00000001U
+
+/* Register ROGUE_CR_BIF_CTRL */
+#define ROGUE_CR_BIF_CTRL 0x12A8U
+#define ROGUE_CR_BIF_CTRL__XE_MEM__MASKFULL 0x000000000000033FULL
+#define ROGUE_CR_BIF_CTRL_MASKFULL 0x00000000000000FFULL
+#define ROGUE_CR_BIF_CTRL_PAUSE_MMU_CPU_SHIFT 9U
+#define ROGUE_CR_BIF_CTRL_PAUSE_MMU_CPU_CLRMSK 0xFFFFFDFFU
+#define ROGUE_CR_BIF_CTRL_PAUSE_MMU_CPU_EN 0x00000200U
+#define ROGUE_CR_BIF_CTRL_PAUSE_MMU_BIF4_SHIFT 8U
+#define ROGUE_CR_BIF_CTRL_PAUSE_MMU_BIF4_CLRMSK 0xFFFFFEFFU
+#define ROGUE_CR_BIF_CTRL_PAUSE_MMU_BIF4_EN 0x00000100U
+#define ROGUE_CR_BIF_CTRL_ENABLE_MMU_QUEUE_BYPASS_SHIFT 7U
+#define ROGUE_CR_BIF_CTRL_ENABLE_MMU_QUEUE_BYPASS_CLRMSK 0xFFFFFF7FU
+#define ROGUE_CR_BIF_CTRL_ENABLE_MMU_QUEUE_BYPASS_EN 0x00000080U
+#define ROGUE_CR_BIF_CTRL_ENABLE_MMU_AUTO_PREFETCH_SHIFT 6U
+#define ROGUE_CR_BIF_CTRL_ENABLE_MMU_AUTO_PREFETCH_CLRMSK 0xFFFFFFBFU
+#define ROGUE_CR_BIF_CTRL_ENABLE_MMU_AUTO_PREFETCH_EN 0x00000040U
+#define ROGUE_CR_BIF_CTRL_PAUSE_MMU_BIF3_SHIFT 5U
+#define ROGUE_CR_BIF_CTRL_PAUSE_MMU_BIF3_CLRMSK 0xFFFFFFDFU
+#define ROGUE_CR_BIF_CTRL_PAUSE_MMU_BIF3_EN 0x00000020U
+#define ROGUE_CR_BIF_CTRL_PAUSE_MMU_BIF2_SHIFT 4U
+#define ROGUE_CR_BIF_CTRL_PAUSE_MMU_BIF2_CLRMSK 0xFFFFFFEFU
+#define ROGUE_CR_BIF_CTRL_PAUSE_MMU_BIF2_EN 0x00000010U
+#define ROGUE_CR_BIF_CTRL_PAUSE_BIF1_SHIFT 3U
+#define ROGUE_CR_BIF_CTRL_PAUSE_BIF1_CLRMSK 0xFFFFFFF7U
+#define ROGUE_CR_BIF_CTRL_PAUSE_BIF1_EN 0x00000008U
+#define ROGUE_CR_BIF_CTRL_PAUSE_MMU_PM_SHIFT 2U
+#define ROGUE_CR_BIF_CTRL_PAUSE_MMU_PM_CLRMSK 0xFFFFFFFBU
+#define ROGUE_CR_BIF_CTRL_PAUSE_MMU_PM_EN 0x00000004U
+#define ROGUE_CR_BIF_CTRL_PAUSE_MMU_BIF1_SHIFT 1U
+#define ROGUE_CR_BIF_CTRL_PAUSE_MMU_BIF1_CLRMSK 0xFFFFFFFDU
+#define ROGUE_CR_BIF_CTRL_PAUSE_MMU_BIF1_EN 0x00000002U
+#define ROGUE_CR_BIF_CTRL_PAUSE_MMU_BIF0_SHIFT 0U
+#define ROGUE_CR_BIF_CTRL_PAUSE_MMU_BIF0_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_BIF_CTRL_PAUSE_MMU_BIF0_EN 0x00000001U
+
+/* Register ROGUE_CR_BIF_FAULT_BANK0_MMU_STATUS */
+#define ROGUE_CR_BIF_FAULT_BANK0_MMU_STATUS 0x12B0U
+#define ROGUE_CR_BIF_FAULT_BANK0_MMU_STATUS_MASKFULL 0x000000000000F775ULL
+#define ROGUE_CR_BIF_FAULT_BANK0_MMU_STATUS_CAT_BASE_SHIFT 12U
+#define ROGUE_CR_BIF_FAULT_BANK0_MMU_STATUS_CAT_BASE_CLRMSK 0xFFFF0FFFU
+#define ROGUE_CR_BIF_FAULT_BANK0_MMU_STATUS_PAGE_SIZE_SHIFT 8U
+#define ROGUE_CR_BIF_FAULT_BANK0_MMU_STATUS_PAGE_SIZE_CLRMSK 0xFFFFF8FFU
+#define ROGUE_CR_BIF_FAULT_BANK0_MMU_STATUS_DATA_TYPE_SHIFT 5U
+#define ROGUE_CR_BIF_FAULT_BANK0_MMU_STATUS_DATA_TYPE_CLRMSK 0xFFFFFF9FU
+#define ROGUE_CR_BIF_FAULT_BANK0_MMU_STATUS_FAULT_RO_SHIFT 4U
+#define ROGUE_CR_BIF_FAULT_BANK0_MMU_STATUS_FAULT_RO_CLRMSK 0xFFFFFFEFU
+#define ROGUE_CR_BIF_FAULT_BANK0_MMU_STATUS_FAULT_RO_EN 0x00000010U
+#define ROGUE_CR_BIF_FAULT_BANK0_MMU_STATUS_FAULT_PM_META_RO_SHIFT 2U
+#define ROGUE_CR_BIF_FAULT_BANK0_MMU_STATUS_FAULT_PM_META_RO_CLRMSK 0xFFFFFFFBU
+#define ROGUE_CR_BIF_FAULT_BANK0_MMU_STATUS_FAULT_PM_META_RO_EN 0x00000004U
+#define ROGUE_CR_BIF_FAULT_BANK0_MMU_STATUS_FAULT_SHIFT 0U
+#define ROGUE_CR_BIF_FAULT_BANK0_MMU_STATUS_FAULT_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_BIF_FAULT_BANK0_MMU_STATUS_FAULT_EN 0x00000001U
+
+/* Register ROGUE_CR_BIF_FAULT_BANK0_REQ_STATUS */
+#define ROGUE_CR_BIF_FAULT_BANK0_REQ_STATUS 0x12B8U
+#define ROGUE_CR_BIF_FAULT_BANK0_REQ_STATUS__XE_MEM__MASKFULL 0x001FFFFFFFFFFFF0ULL
+#define ROGUE_CR_BIF_FAULT_BANK0_REQ_STATUS_MASKFULL 0x0007FFFFFFFFFFF0ULL
+#define ROGUE_CR_BIF_FAULT_BANK0_REQ_STATUS__XE_MEM__RNW_SHIFT 52U
+#define ROGUE_CR_BIF_FAULT_BANK0_REQ_STATUS__XE_MEM__RNW_CLRMSK 0xFFEFFFFFFFFFFFFFULL
+#define ROGUE_CR_BIF_FAULT_BANK0_REQ_STATUS__XE_MEM__RNW_EN 0x0010000000000000ULL
+#define ROGUE_CR_BIF_FAULT_BANK0_REQ_STATUS_RNW_SHIFT 50U
+#define ROGUE_CR_BIF_FAULT_BANK0_REQ_STATUS_RNW_CLRMSK 0xFFFBFFFFFFFFFFFFULL
+#define ROGUE_CR_BIF_FAULT_BANK0_REQ_STATUS_RNW_EN 0x0004000000000000ULL
+#define ROGUE_CR_BIF_FAULT_BANK0_REQ_STATUS__XE_MEM__TAG_SB_SHIFT 46U
+#define ROGUE_CR_BIF_FAULT_BANK0_REQ_STATUS__XE_MEM__TAG_SB_CLRMSK 0xFFF03FFFFFFFFFFFULL
+#define ROGUE_CR_BIF_FAULT_BANK0_REQ_STATUS_TAG_SB_SHIFT 44U
+#define ROGUE_CR_BIF_FAULT_BANK0_REQ_STATUS_TAG_SB_CLRMSK 0xFFFC0FFFFFFFFFFFULL
+#define ROGUE_CR_BIF_FAULT_BANK0_REQ_STATUS_TAG_ID_SHIFT 40U
+#define ROGUE_CR_BIF_FAULT_BANK0_REQ_STATUS_TAG_ID_CLRMSK 0xFFFFF0FFFFFFFFFFULL
+#define ROGUE_CR_BIF_FAULT_BANK0_REQ_STATUS__XE_MEM__TAG_ID_SHIFT 40U
+#define ROGUE_CR_BIF_FAULT_BANK0_REQ_STATUS__XE_MEM__TAG_ID_CLRMSK 0xFFFFC0FFFFFFFFFFULL
+#define ROGUE_CR_BIF_FAULT_BANK0_REQ_STATUS_ADDRESS_SHIFT 4U
+#define ROGUE_CR_BIF_FAULT_BANK0_REQ_STATUS_ADDRESS_CLRMSK 0xFFFFFF000000000FULL
+#define ROGUE_CR_BIF_FAULT_BANK0_REQ_STATUS_ADDRESS_ALIGNSHIFT 4U
+#define ROGUE_CR_BIF_FAULT_BANK0_REQ_STATUS_ADDRESS_ALIGNSIZE 16U
+
+/* Register ROGUE_CR_BIF_FAULT_BANK1_MMU_STATUS */
+#define ROGUE_CR_BIF_FAULT_BANK1_MMU_STATUS 0x12C0U
+#define ROGUE_CR_BIF_FAULT_BANK1_MMU_STATUS_MASKFULL 0x000000000000F775ULL
+#define ROGUE_CR_BIF_FAULT_BANK1_MMU_STATUS_CAT_BASE_SHIFT 12U
+#define ROGUE_CR_BIF_FAULT_BANK1_MMU_STATUS_CAT_BASE_CLRMSK 0xFFFF0FFFU
+#define ROGUE_CR_BIF_FAULT_BANK1_MMU_STATUS_PAGE_SIZE_SHIFT 8U
+#define ROGUE_CR_BIF_FAULT_BANK1_MMU_STATUS_PAGE_SIZE_CLRMSK 0xFFFFF8FFU
+#define ROGUE_CR_BIF_FAULT_BANK1_MMU_STATUS_DATA_TYPE_SHIFT 5U
+#define ROGUE_CR_BIF_FAULT_BANK1_MMU_STATUS_DATA_TYPE_CLRMSK 0xFFFFFF9FU
+#define ROGUE_CR_BIF_FAULT_BANK1_MMU_STATUS_FAULT_RO_SHIFT 4U
+#define ROGUE_CR_BIF_FAULT_BANK1_MMU_STATUS_FAULT_RO_CLRMSK 0xFFFFFFEFU
+#define ROGUE_CR_BIF_FAULT_BANK1_MMU_STATUS_FAULT_RO_EN 0x00000010U
+#define ROGUE_CR_BIF_FAULT_BANK1_MMU_STATUS_FAULT_PM_META_RO_SHIFT 2U
+#define ROGUE_CR_BIF_FAULT_BANK1_MMU_STATUS_FAULT_PM_META_RO_CLRMSK 0xFFFFFFFBU
+#define ROGUE_CR_BIF_FAULT_BANK1_MMU_STATUS_FAULT_PM_META_RO_EN 0x00000004U
+#define ROGUE_CR_BIF_FAULT_BANK1_MMU_STATUS_FAULT_SHIFT 0U
+#define ROGUE_CR_BIF_FAULT_BANK1_MMU_STATUS_FAULT_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_BIF_FAULT_BANK1_MMU_STATUS_FAULT_EN 0x00000001U
+
+/* Register ROGUE_CR_BIF_FAULT_BANK1_REQ_STATUS */
+#define ROGUE_CR_BIF_FAULT_BANK1_REQ_STATUS 0x12C8U
+#define ROGUE_CR_BIF_FAULT_BANK1_REQ_STATUS_MASKFULL 0x0007FFFFFFFFFFF0ULL
+#define ROGUE_CR_BIF_FAULT_BANK1_REQ_STATUS_RNW_SHIFT 50U
+#define ROGUE_CR_BIF_FAULT_BANK1_REQ_STATUS_RNW_CLRMSK 0xFFFBFFFFFFFFFFFFULL
+#define ROGUE_CR_BIF_FAULT_BANK1_REQ_STATUS_RNW_EN 0x0004000000000000ULL
+#define ROGUE_CR_BIF_FAULT_BANK1_REQ_STATUS_TAG_SB_SHIFT 44U
+#define ROGUE_CR_BIF_FAULT_BANK1_REQ_STATUS_TAG_SB_CLRMSK 0xFFFC0FFFFFFFFFFFULL
+#define ROGUE_CR_BIF_FAULT_BANK1_REQ_STATUS_TAG_ID_SHIFT 40U
+#define ROGUE_CR_BIF_FAULT_BANK1_REQ_STATUS_TAG_ID_CLRMSK 0xFFFFF0FFFFFFFFFFULL
+#define ROGUE_CR_BIF_FAULT_BANK1_REQ_STATUS_ADDRESS_SHIFT 4U
+#define ROGUE_CR_BIF_FAULT_BANK1_REQ_STATUS_ADDRESS_CLRMSK 0xFFFFFF000000000FULL
+#define ROGUE_CR_BIF_FAULT_BANK1_REQ_STATUS_ADDRESS_ALIGNSHIFT 4U
+#define ROGUE_CR_BIF_FAULT_BANK1_REQ_STATUS_ADDRESS_ALIGNSIZE 16U
+
+/* Register ROGUE_CR_BIF_MMU_STATUS */
+#define ROGUE_CR_BIF_MMU_STATUS 0x12D0U
+#define ROGUE_CR_BIF_MMU_STATUS__XE_MEM__MASKFULL 0x000000001FFFFFF7ULL
+#define ROGUE_CR_BIF_MMU_STATUS_MASKFULL 0x000000001FFFFFF7ULL
+#define ROGUE_CR_BIF_MMU_STATUS_PM_FAULT_SHIFT 28U
+#define ROGUE_CR_BIF_MMU_STATUS_PM_FAULT_CLRMSK 0xEFFFFFFFU
+#define ROGUE_CR_BIF_MMU_STATUS_PM_FAULT_EN 0x10000000U
+#define ROGUE_CR_BIF_MMU_STATUS_PC_DATA_SHIFT 20U
+#define ROGUE_CR_BIF_MMU_STATUS_PC_DATA_CLRMSK 0xF00FFFFFU
+#define ROGUE_CR_BIF_MMU_STATUS_PD_DATA_SHIFT 12U
+#define ROGUE_CR_BIF_MMU_STATUS_PD_DATA_CLRMSK 0xFFF00FFFU
+#define ROGUE_CR_BIF_MMU_STATUS_PT_DATA_SHIFT 4U
+#define ROGUE_CR_BIF_MMU_STATUS_PT_DATA_CLRMSK 0xFFFFF00FU
+#define ROGUE_CR_BIF_MMU_STATUS_STALLED_SHIFT 2U
+#define ROGUE_CR_BIF_MMU_STATUS_STALLED_CLRMSK 0xFFFFFFFBU
+#define ROGUE_CR_BIF_MMU_STATUS_STALLED_EN 0x00000004U
+#define ROGUE_CR_BIF_MMU_STATUS_PAUSED_SHIFT 1U
+#define ROGUE_CR_BIF_MMU_STATUS_PAUSED_CLRMSK 0xFFFFFFFDU
+#define ROGUE_CR_BIF_MMU_STATUS_PAUSED_EN 0x00000002U
+#define ROGUE_CR_BIF_MMU_STATUS_BUSY_SHIFT 0U
+#define ROGUE_CR_BIF_MMU_STATUS_BUSY_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_BIF_MMU_STATUS_BUSY_EN 0x00000001U
+
+/* Register group: ROGUE_CR_BIF_TILING_CFG, with 8 repeats */
+#define ROGUE_CR_BIF_TILING_CFG_REPEATCOUNT 8U
+/* Register ROGUE_CR_BIF_TILING_CFG0 */
+#define ROGUE_CR_BIF_TILING_CFG0 0x12D8U
+#define ROGUE_CR_BIF_TILING_CFG0_MASKFULL 0xFFFFFFFF0FFFFFFFULL
+#define ROGUE_CR_BIF_TILING_CFG0_XSTRIDE_SHIFT 61U
+#define ROGUE_CR_BIF_TILING_CFG0_XSTRIDE_CLRMSK 0x1FFFFFFFFFFFFFFFULL
+#define ROGUE_CR_BIF_TILING_CFG0_ENABLE_SHIFT 60U
+#define ROGUE_CR_BIF_TILING_CFG0_ENABLE_CLRMSK 0xEFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_BIF_TILING_CFG0_ENABLE_EN 0x1000000000000000ULL
+#define ROGUE_CR_BIF_TILING_CFG0_MAX_ADDRESS_SHIFT 32U
+#define ROGUE_CR_BIF_TILING_CFG0_MAX_ADDRESS_CLRMSK 0xF0000000FFFFFFFFULL
+#define ROGUE_CR_BIF_TILING_CFG0_MAX_ADDRESS_ALIGNSHIFT 12U
+#define ROGUE_CR_BIF_TILING_CFG0_MAX_ADDRESS_ALIGNSIZE 4096U
+#define ROGUE_CR_BIF_TILING_CFG0_MIN_ADDRESS_SHIFT 0U
+#define ROGUE_CR_BIF_TILING_CFG0_MIN_ADDRESS_CLRMSK 0xFFFFFFFFF0000000ULL
+#define ROGUE_CR_BIF_TILING_CFG0_MIN_ADDRESS_ALIGNSHIFT 12U
+#define ROGUE_CR_BIF_TILING_CFG0_MIN_ADDRESS_ALIGNSIZE 4096U
+
+/* Register ROGUE_CR_BIF_TILING_CFG1 */
+#define ROGUE_CR_BIF_TILING_CFG1 0x12E0U
+#define ROGUE_CR_BIF_TILING_CFG1_MASKFULL 0xFFFFFFFF0FFFFFFFULL
+#define ROGUE_CR_BIF_TILING_CFG1_XSTRIDE_SHIFT 61U
+#define ROGUE_CR_BIF_TILING_CFG1_XSTRIDE_CLRMSK 0x1FFFFFFFFFFFFFFFULL
+#define ROGUE_CR_BIF_TILING_CFG1_ENABLE_SHIFT 60U
+#define ROGUE_CR_BIF_TILING_CFG1_ENABLE_CLRMSK 0xEFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_BIF_TILING_CFG1_ENABLE_EN 0x1000000000000000ULL
+#define ROGUE_CR_BIF_TILING_CFG1_MAX_ADDRESS_SHIFT 32U
+#define ROGUE_CR_BIF_TILING_CFG1_MAX_ADDRESS_CLRMSK 0xF0000000FFFFFFFFULL
+#define ROGUE_CR_BIF_TILING_CFG1_MAX_ADDRESS_ALIGNSHIFT 12U
+#define ROGUE_CR_BIF_TILING_CFG1_MAX_ADDRESS_ALIGNSIZE 4096U
+#define ROGUE_CR_BIF_TILING_CFG1_MIN_ADDRESS_SHIFT 0U
+#define ROGUE_CR_BIF_TILING_CFG1_MIN_ADDRESS_CLRMSK 0xFFFFFFFFF0000000ULL
+#define ROGUE_CR_BIF_TILING_CFG1_MIN_ADDRESS_ALIGNSHIFT 12U
+#define ROGUE_CR_BIF_TILING_CFG1_MIN_ADDRESS_ALIGNSIZE 4096U
+
+/* Register ROGUE_CR_BIF_TILING_CFG2 */
+#define ROGUE_CR_BIF_TILING_CFG2 0x12E8U
+#define ROGUE_CR_BIF_TILING_CFG2_MASKFULL 0xFFFFFFFF0FFFFFFFULL
+#define ROGUE_CR_BIF_TILING_CFG2_XSTRIDE_SHIFT 61U
+#define ROGUE_CR_BIF_TILING_CFG2_XSTRIDE_CLRMSK 0x1FFFFFFFFFFFFFFFULL
+#define ROGUE_CR_BIF_TILING_CFG2_ENABLE_SHIFT 60U
+#define ROGUE_CR_BIF_TILING_CFG2_ENABLE_CLRMSK 0xEFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_BIF_TILING_CFG2_ENABLE_EN 0x1000000000000000ULL
+#define ROGUE_CR_BIF_TILING_CFG2_MAX_ADDRESS_SHIFT 32U
+#define ROGUE_CR_BIF_TILING_CFG2_MAX_ADDRESS_CLRMSK 0xF0000000FFFFFFFFULL
+#define ROGUE_CR_BIF_TILING_CFG2_MAX_ADDRESS_ALIGNSHIFT 12U
+#define ROGUE_CR_BIF_TILING_CFG2_MAX_ADDRESS_ALIGNSIZE 4096U
+#define ROGUE_CR_BIF_TILING_CFG2_MIN_ADDRESS_SHIFT 0U
+#define ROGUE_CR_BIF_TILING_CFG2_MIN_ADDRESS_CLRMSK 0xFFFFFFFFF0000000ULL
+#define ROGUE_CR_BIF_TILING_CFG2_MIN_ADDRESS_ALIGNSHIFT 12U
+#define ROGUE_CR_BIF_TILING_CFG2_MIN_ADDRESS_ALIGNSIZE 4096U
+
+/* Register ROGUE_CR_BIF_TILING_CFG3 */
+#define ROGUE_CR_BIF_TILING_CFG3 0x12F0U
+#define ROGUE_CR_BIF_TILING_CFG3_MASKFULL 0xFFFFFFFF0FFFFFFFULL
+#define ROGUE_CR_BIF_TILING_CFG3_XSTRIDE_SHIFT 61U
+#define ROGUE_CR_BIF_TILING_CFG3_XSTRIDE_CLRMSK 0x1FFFFFFFFFFFFFFFULL
+#define ROGUE_CR_BIF_TILING_CFG3_ENABLE_SHIFT 60U
+#define ROGUE_CR_BIF_TILING_CFG3_ENABLE_CLRMSK 0xEFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_BIF_TILING_CFG3_ENABLE_EN 0x1000000000000000ULL
+#define ROGUE_CR_BIF_TILING_CFG3_MAX_ADDRESS_SHIFT 32U
+#define ROGUE_CR_BIF_TILING_CFG3_MAX_ADDRESS_CLRMSK 0xF0000000FFFFFFFFULL
+#define ROGUE_CR_BIF_TILING_CFG3_MAX_ADDRESS_ALIGNSHIFT 12U
+#define ROGUE_CR_BIF_TILING_CFG3_MAX_ADDRESS_ALIGNSIZE 4096U
+#define ROGUE_CR_BIF_TILING_CFG3_MIN_ADDRESS_SHIFT 0U
+#define ROGUE_CR_BIF_TILING_CFG3_MIN_ADDRESS_CLRMSK 0xFFFFFFFFF0000000ULL
+#define ROGUE_CR_BIF_TILING_CFG3_MIN_ADDRESS_ALIGNSHIFT 12U
+#define ROGUE_CR_BIF_TILING_CFG3_MIN_ADDRESS_ALIGNSIZE 4096U
+
+/* Register ROGUE_CR_BIF_TILING_CFG4 */
+#define ROGUE_CR_BIF_TILING_CFG4 0x12F8U
+#define ROGUE_CR_BIF_TILING_CFG4_MASKFULL 0xFFFFFFFF0FFFFFFFULL
+#define ROGUE_CR_BIF_TILING_CFG4_XSTRIDE_SHIFT 61U
+#define ROGUE_CR_BIF_TILING_CFG4_XSTRIDE_CLRMSK 0x1FFFFFFFFFFFFFFFULL
+#define ROGUE_CR_BIF_TILING_CFG4_ENABLE_SHIFT 60U
+#define ROGUE_CR_BIF_TILING_CFG4_ENABLE_CLRMSK 0xEFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_BIF_TILING_CFG4_ENABLE_EN 0x1000000000000000ULL
+#define ROGUE_CR_BIF_TILING_CFG4_MAX_ADDRESS_SHIFT 32U
+#define ROGUE_CR_BIF_TILING_CFG4_MAX_ADDRESS_CLRMSK 0xF0000000FFFFFFFFULL
+#define ROGUE_CR_BIF_TILING_CFG4_MAX_ADDRESS_ALIGNSHIFT 12U
+#define ROGUE_CR_BIF_TILING_CFG4_MAX_ADDRESS_ALIGNSIZE 4096U
+#define ROGUE_CR_BIF_TILING_CFG4_MIN_ADDRESS_SHIFT 0U
+#define ROGUE_CR_BIF_TILING_CFG4_MIN_ADDRESS_CLRMSK 0xFFFFFFFFF0000000ULL
+#define ROGUE_CR_BIF_TILING_CFG4_MIN_ADDRESS_ALIGNSHIFT 12U
+#define ROGUE_CR_BIF_TILING_CFG4_MIN_ADDRESS_ALIGNSIZE 4096U
+
+/* Register ROGUE_CR_BIF_TILING_CFG5 */
+#define ROGUE_CR_BIF_TILING_CFG5 0x1300U
+#define ROGUE_CR_BIF_TILING_CFG5_MASKFULL 0xFFFFFFFF0FFFFFFFULL
+#define ROGUE_CR_BIF_TILING_CFG5_XSTRIDE_SHIFT 61U
+#define ROGUE_CR_BIF_TILING_CFG5_XSTRIDE_CLRMSK 0x1FFFFFFFFFFFFFFFULL
+#define ROGUE_CR_BIF_TILING_CFG5_ENABLE_SHIFT 60U
+#define ROGUE_CR_BIF_TILING_CFG5_ENABLE_CLRMSK 0xEFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_BIF_TILING_CFG5_ENABLE_EN 0x1000000000000000ULL
+#define ROGUE_CR_BIF_TILING_CFG5_MAX_ADDRESS_SHIFT 32U
+#define ROGUE_CR_BIF_TILING_CFG5_MAX_ADDRESS_CLRMSK 0xF0000000FFFFFFFFULL
+#define ROGUE_CR_BIF_TILING_CFG5_MAX_ADDRESS_ALIGNSHIFT 12U
+#define ROGUE_CR_BIF_TILING_CFG5_MAX_ADDRESS_ALIGNSIZE 4096U
+#define ROGUE_CR_BIF_TILING_CFG5_MIN_ADDRESS_SHIFT 0U
+#define ROGUE_CR_BIF_TILING_CFG5_MIN_ADDRESS_CLRMSK 0xFFFFFFFFF0000000ULL
+#define ROGUE_CR_BIF_TILING_CFG5_MIN_ADDRESS_ALIGNSHIFT 12U
+#define ROGUE_CR_BIF_TILING_CFG5_MIN_ADDRESS_ALIGNSIZE 4096U
+
+/* Register ROGUE_CR_BIF_TILING_CFG6 */
+#define ROGUE_CR_BIF_TILING_CFG6 0x1308U
+#define ROGUE_CR_BIF_TILING_CFG6_MASKFULL 0xFFFFFFFF0FFFFFFFULL
+#define ROGUE_CR_BIF_TILING_CFG6_XSTRIDE_SHIFT 61U
+#define ROGUE_CR_BIF_TILING_CFG6_XSTRIDE_CLRMSK 0x1FFFFFFFFFFFFFFFULL
+#define ROGUE_CR_BIF_TILING_CFG6_ENABLE_SHIFT 60U
+#define ROGUE_CR_BIF_TILING_CFG6_ENABLE_CLRMSK 0xEFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_BIF_TILING_CFG6_ENABLE_EN 0x1000000000000000ULL
+#define ROGUE_CR_BIF_TILING_CFG6_MAX_ADDRESS_SHIFT 32U
+#define ROGUE_CR_BIF_TILING_CFG6_MAX_ADDRESS_CLRMSK 0xF0000000FFFFFFFFULL
+#define ROGUE_CR_BIF_TILING_CFG6_MAX_ADDRESS_ALIGNSHIFT 12U
+#define ROGUE_CR_BIF_TILING_CFG6_MAX_ADDRESS_ALIGNSIZE 4096U
+#define ROGUE_CR_BIF_TILING_CFG6_MIN_ADDRESS_SHIFT 0U
+#define ROGUE_CR_BIF_TILING_CFG6_MIN_ADDRESS_CLRMSK 0xFFFFFFFFF0000000ULL
+#define ROGUE_CR_BIF_TILING_CFG6_MIN_ADDRESS_ALIGNSHIFT 12U
+#define ROGUE_CR_BIF_TILING_CFG6_MIN_ADDRESS_ALIGNSIZE 4096U
+
+/* Register ROGUE_CR_BIF_TILING_CFG7 */
+#define ROGUE_CR_BIF_TILING_CFG7 0x1310U
+#define ROGUE_CR_BIF_TILING_CFG7_MASKFULL 0xFFFFFFFF0FFFFFFFULL
+#define ROGUE_CR_BIF_TILING_CFG7_XSTRIDE_SHIFT 61U
+#define ROGUE_CR_BIF_TILING_CFG7_XSTRIDE_CLRMSK 0x1FFFFFFFFFFFFFFFULL
+#define ROGUE_CR_BIF_TILING_CFG7_ENABLE_SHIFT 60U
+#define ROGUE_CR_BIF_TILING_CFG7_ENABLE_CLRMSK 0xEFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_BIF_TILING_CFG7_ENABLE_EN 0x1000000000000000ULL
+#define ROGUE_CR_BIF_TILING_CFG7_MAX_ADDRESS_SHIFT 32U
+#define ROGUE_CR_BIF_TILING_CFG7_MAX_ADDRESS_CLRMSK 0xF0000000FFFFFFFFULL
+#define ROGUE_CR_BIF_TILING_CFG7_MAX_ADDRESS_ALIGNSHIFT 12U
+#define ROGUE_CR_BIF_TILING_CFG7_MAX_ADDRESS_ALIGNSIZE 4096U
+#define ROGUE_CR_BIF_TILING_CFG7_MIN_ADDRESS_SHIFT 0U
+#define ROGUE_CR_BIF_TILING_CFG7_MIN_ADDRESS_CLRMSK 0xFFFFFFFFF0000000ULL
+#define ROGUE_CR_BIF_TILING_CFG7_MIN_ADDRESS_ALIGNSHIFT 12U
+#define ROGUE_CR_BIF_TILING_CFG7_MIN_ADDRESS_ALIGNSIZE 4096U
+
+/* Register ROGUE_CR_BIF_READS_EXT_STATUS */
+#define ROGUE_CR_BIF_READS_EXT_STATUS 0x1320U
+#define ROGUE_CR_BIF_READS_EXT_STATUS_MASKFULL 0x000000000FFFFFFFULL
+#define ROGUE_CR_BIF_READS_EXT_STATUS_MMU_SHIFT 16U
+#define ROGUE_CR_BIF_READS_EXT_STATUS_MMU_CLRMSK 0xF000FFFFU
+#define ROGUE_CR_BIF_READS_EXT_STATUS_BANK1_SHIFT 0U
+#define ROGUE_CR_BIF_READS_EXT_STATUS_BANK1_CLRMSK 0xFFFF0000U
+
+/* Register ROGUE_CR_BIF_READS_INT_STATUS */
+#define ROGUE_CR_BIF_READS_INT_STATUS 0x1328U
+#define ROGUE_CR_BIF_READS_INT_STATUS_MASKFULL 0x0000000007FFFFFFULL
+#define ROGUE_CR_BIF_READS_INT_STATUS_MMU_SHIFT 16U
+#define ROGUE_CR_BIF_READS_INT_STATUS_MMU_CLRMSK 0xF800FFFFU
+#define ROGUE_CR_BIF_READS_INT_STATUS_BANK1_SHIFT 0U
+#define ROGUE_CR_BIF_READS_INT_STATUS_BANK1_CLRMSK 0xFFFF0000U
+
+/* Register ROGUE_CR_BIFPM_READS_INT_STATUS */
+#define ROGUE_CR_BIFPM_READS_INT_STATUS 0x1330U
+#define ROGUE_CR_BIFPM_READS_INT_STATUS_MASKFULL 0x000000000000FFFFULL
+#define ROGUE_CR_BIFPM_READS_INT_STATUS_BANK0_SHIFT 0U
+#define ROGUE_CR_BIFPM_READS_INT_STATUS_BANK0_CLRMSK 0xFFFF0000U
+
+/* Register ROGUE_CR_BIFPM_READS_EXT_STATUS */
+#define ROGUE_CR_BIFPM_READS_EXT_STATUS 0x1338U
+#define ROGUE_CR_BIFPM_READS_EXT_STATUS_MASKFULL 0x000000000000FFFFULL
+#define ROGUE_CR_BIFPM_READS_EXT_STATUS_BANK0_SHIFT 0U
+#define ROGUE_CR_BIFPM_READS_EXT_STATUS_BANK0_CLRMSK 0xFFFF0000U
+
+/* Register ROGUE_CR_BIFPM_STATUS_MMU */
+#define ROGUE_CR_BIFPM_STATUS_MMU 0x1350U
+#define ROGUE_CR_BIFPM_STATUS_MMU_MASKFULL 0x00000000000000FFULL
+#define ROGUE_CR_BIFPM_STATUS_MMU_REQUESTS_SHIFT 0U
+#define ROGUE_CR_BIFPM_STATUS_MMU_REQUESTS_CLRMSK 0xFFFFFF00U
+
+/* Register ROGUE_CR_BIF_STATUS_MMU */
+#define ROGUE_CR_BIF_STATUS_MMU 0x1358U
+#define ROGUE_CR_BIF_STATUS_MMU_MASKFULL 0x00000000000000FFULL
+#define ROGUE_CR_BIF_STATUS_MMU_REQUESTS_SHIFT 0U
+#define ROGUE_CR_BIF_STATUS_MMU_REQUESTS_CLRMSK 0xFFFFFF00U
+
+/* Register ROGUE_CR_BIF_FAULT_READ */
+#define ROGUE_CR_BIF_FAULT_READ 0x13E0U
+#define ROGUE_CR_BIF_FAULT_READ_MASKFULL 0x000000FFFFFFFFF0ULL
+#define ROGUE_CR_BIF_FAULT_READ_ADDRESS_SHIFT 4U
+#define ROGUE_CR_BIF_FAULT_READ_ADDRESS_CLRMSK 0xFFFFFF000000000FULL
+#define ROGUE_CR_BIF_FAULT_READ_ADDRESS_ALIGNSHIFT 4U
+#define ROGUE_CR_BIF_FAULT_READ_ADDRESS_ALIGNSIZE 16U
+
+/* Register ROGUE_CR_TEXAS_BIF_FAULT_BANK0_MMU_STATUS */
+#define ROGUE_CR_TEXAS_BIF_FAULT_BANK0_MMU_STATUS 0x1430U
+#define ROGUE_CR_TEXAS_BIF_FAULT_BANK0_MMU_STATUS_MASKFULL 0x000000000000F775ULL
+#define ROGUE_CR_TEXAS_BIF_FAULT_BANK0_MMU_STATUS_CAT_BASE_SHIFT 12U
+#define ROGUE_CR_TEXAS_BIF_FAULT_BANK0_MMU_STATUS_CAT_BASE_CLRMSK 0xFFFF0FFFU
+#define ROGUE_CR_TEXAS_BIF_FAULT_BANK0_MMU_STATUS_PAGE_SIZE_SHIFT 8U
+#define ROGUE_CR_TEXAS_BIF_FAULT_BANK0_MMU_STATUS_PAGE_SIZE_CLRMSK 0xFFFFF8FFU
+#define ROGUE_CR_TEXAS_BIF_FAULT_BANK0_MMU_STATUS_DATA_TYPE_SHIFT 5U
+#define ROGUE_CR_TEXAS_BIF_FAULT_BANK0_MMU_STATUS_DATA_TYPE_CLRMSK 0xFFFFFF9FU
+#define ROGUE_CR_TEXAS_BIF_FAULT_BANK0_MMU_STATUS_FAULT_RO_SHIFT 4U
+#define ROGUE_CR_TEXAS_BIF_FAULT_BANK0_MMU_STATUS_FAULT_RO_CLRMSK 0xFFFFFFEFU
+#define ROGUE_CR_TEXAS_BIF_FAULT_BANK0_MMU_STATUS_FAULT_RO_EN 0x00000010U
+#define ROGUE_CR_TEXAS_BIF_FAULT_BANK0_MMU_STATUS_FAULT_PM_META_RO_SHIFT 2U
+#define ROGUE_CR_TEXAS_BIF_FAULT_BANK0_MMU_STATUS_FAULT_PM_META_RO_CLRMSK 0xFFFFFFFBU
+#define ROGUE_CR_TEXAS_BIF_FAULT_BANK0_MMU_STATUS_FAULT_PM_META_RO_EN 0x00000004U
+#define ROGUE_CR_TEXAS_BIF_FAULT_BANK0_MMU_STATUS_FAULT_SHIFT 0U
+#define ROGUE_CR_TEXAS_BIF_FAULT_BANK0_MMU_STATUS_FAULT_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_TEXAS_BIF_FAULT_BANK0_MMU_STATUS_FAULT_EN 0x00000001U
+
+/* Register ROGUE_CR_TEXAS_BIF_FAULT_BANK0_REQ_STATUS */
+#define ROGUE_CR_TEXAS_BIF_FAULT_BANK0_REQ_STATUS 0x1438U
+#define ROGUE_CR_TEXAS_BIF_FAULT_BANK0_REQ_STATUS_MASKFULL 0x0007FFFFFFFFFFF0ULL
+#define ROGUE_CR_TEXAS_BIF_FAULT_BANK0_REQ_STATUS_RNW_SHIFT 50U
+#define ROGUE_CR_TEXAS_BIF_FAULT_BANK0_REQ_STATUS_RNW_CLRMSK 0xFFFBFFFFFFFFFFFFULL
+#define ROGUE_CR_TEXAS_BIF_FAULT_BANK0_REQ_STATUS_RNW_EN 0x0004000000000000ULL
+#define ROGUE_CR_TEXAS_BIF_FAULT_BANK0_REQ_STATUS_TAG_SB_SHIFT 44U
+#define ROGUE_CR_TEXAS_BIF_FAULT_BANK0_REQ_STATUS_TAG_SB_CLRMSK 0xFFFC0FFFFFFFFFFFULL
+#define ROGUE_CR_TEXAS_BIF_FAULT_BANK0_REQ_STATUS_TAG_ID_SHIFT 40U
+#define ROGUE_CR_TEXAS_BIF_FAULT_BANK0_REQ_STATUS_TAG_ID_CLRMSK 0xFFFFF0FFFFFFFFFFULL
+#define ROGUE_CR_TEXAS_BIF_FAULT_BANK0_REQ_STATUS_ADDRESS_SHIFT 4U
+#define ROGUE_CR_TEXAS_BIF_FAULT_BANK0_REQ_STATUS_ADDRESS_CLRMSK 0xFFFFFF000000000FULL
+#define ROGUE_CR_TEXAS_BIF_FAULT_BANK0_REQ_STATUS_ADDRESS_ALIGNSHIFT 4U
+#define ROGUE_CR_TEXAS_BIF_FAULT_BANK0_REQ_STATUS_ADDRESS_ALIGNSIZE 16U
+
+/* Register ROGUE_CR_MCU_FENCE */
+#define ROGUE_CR_MCU_FENCE 0x1740U
+#define ROGUE_CR_MCU_FENCE_MASKFULL 0x000007FFFFFFFFE0ULL
+#define ROGUE_CR_MCU_FENCE_DM_SHIFT 40U
+#define ROGUE_CR_MCU_FENCE_DM_CLRMSK 0xFFFFF8FFFFFFFFFFULL
+#define ROGUE_CR_MCU_FENCE_DM_VERTEX 0x0000000000000000ULL
+#define ROGUE_CR_MCU_FENCE_DM_PIXEL 0x0000010000000000ULL
+#define ROGUE_CR_MCU_FENCE_DM_COMPUTE 0x0000020000000000ULL
+#define ROGUE_CR_MCU_FENCE_DM_RAY_VERTEX 0x0000030000000000ULL
+#define ROGUE_CR_MCU_FENCE_DM_RAY 0x0000040000000000ULL
+#define ROGUE_CR_MCU_FENCE_DM_FASTRENDER 0x0000050000000000ULL
+#define ROGUE_CR_MCU_FENCE_ADDR_SHIFT 5U
+#define ROGUE_CR_MCU_FENCE_ADDR_CLRMSK 0xFFFFFF000000001FULL
+#define ROGUE_CR_MCU_FENCE_ADDR_ALIGNSHIFT 5U
+#define ROGUE_CR_MCU_FENCE_ADDR_ALIGNSIZE 32U
+
+/* Register group: ROGUE_CR_SCRATCH, with 16 repeats */
+#define ROGUE_CR_SCRATCH_REPEATCOUNT 16U
+/* Register ROGUE_CR_SCRATCH0 */
+#define ROGUE_CR_SCRATCH0 0x1A00U
+#define ROGUE_CR_SCRATCH0_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_SCRATCH0_DATA_SHIFT 0U
+#define ROGUE_CR_SCRATCH0_DATA_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_SCRATCH1 */
+#define ROGUE_CR_SCRATCH1 0x1A08U
+#define ROGUE_CR_SCRATCH1_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_SCRATCH1_DATA_SHIFT 0U
+#define ROGUE_CR_SCRATCH1_DATA_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_SCRATCH2 */
+#define ROGUE_CR_SCRATCH2 0x1A10U
+#define ROGUE_CR_SCRATCH2_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_SCRATCH2_DATA_SHIFT 0U
+#define ROGUE_CR_SCRATCH2_DATA_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_SCRATCH3 */
+#define ROGUE_CR_SCRATCH3 0x1A18U
+#define ROGUE_CR_SCRATCH3_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_SCRATCH3_DATA_SHIFT 0U
+#define ROGUE_CR_SCRATCH3_DATA_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_SCRATCH4 */
+#define ROGUE_CR_SCRATCH4 0x1A20U
+#define ROGUE_CR_SCRATCH4_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_SCRATCH4_DATA_SHIFT 0U
+#define ROGUE_CR_SCRATCH4_DATA_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_SCRATCH5 */
+#define ROGUE_CR_SCRATCH5 0x1A28U
+#define ROGUE_CR_SCRATCH5_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_SCRATCH5_DATA_SHIFT 0U
+#define ROGUE_CR_SCRATCH5_DATA_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_SCRATCH6 */
+#define ROGUE_CR_SCRATCH6 0x1A30U
+#define ROGUE_CR_SCRATCH6_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_SCRATCH6_DATA_SHIFT 0U
+#define ROGUE_CR_SCRATCH6_DATA_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_SCRATCH7 */
+#define ROGUE_CR_SCRATCH7 0x1A38U
+#define ROGUE_CR_SCRATCH7_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_SCRATCH7_DATA_SHIFT 0U
+#define ROGUE_CR_SCRATCH7_DATA_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_SCRATCH8 */
+#define ROGUE_CR_SCRATCH8 0x1A40U
+#define ROGUE_CR_SCRATCH8_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_SCRATCH8_DATA_SHIFT 0U
+#define ROGUE_CR_SCRATCH8_DATA_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_SCRATCH9 */
+#define ROGUE_CR_SCRATCH9 0x1A48U
+#define ROGUE_CR_SCRATCH9_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_SCRATCH9_DATA_SHIFT 0U
+#define ROGUE_CR_SCRATCH9_DATA_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_SCRATCH10 */
+#define ROGUE_CR_SCRATCH10 0x1A50U
+#define ROGUE_CR_SCRATCH10_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_SCRATCH10_DATA_SHIFT 0U
+#define ROGUE_CR_SCRATCH10_DATA_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_SCRATCH11 */
+#define ROGUE_CR_SCRATCH11 0x1A58U
+#define ROGUE_CR_SCRATCH11_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_SCRATCH11_DATA_SHIFT 0U
+#define ROGUE_CR_SCRATCH11_DATA_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_SCRATCH12 */
+#define ROGUE_CR_SCRATCH12 0x1A60U
+#define ROGUE_CR_SCRATCH12_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_SCRATCH12_DATA_SHIFT 0U
+#define ROGUE_CR_SCRATCH12_DATA_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_SCRATCH13 */
+#define ROGUE_CR_SCRATCH13 0x1A68U
+#define ROGUE_CR_SCRATCH13_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_SCRATCH13_DATA_SHIFT 0U
+#define ROGUE_CR_SCRATCH13_DATA_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_SCRATCH14 */
+#define ROGUE_CR_SCRATCH14 0x1A70U
+#define ROGUE_CR_SCRATCH14_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_SCRATCH14_DATA_SHIFT 0U
+#define ROGUE_CR_SCRATCH14_DATA_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_SCRATCH15 */
+#define ROGUE_CR_SCRATCH15 0x1A78U
+#define ROGUE_CR_SCRATCH15_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_SCRATCH15_DATA_SHIFT 0U
+#define ROGUE_CR_SCRATCH15_DATA_CLRMSK 0x00000000U
+
+/* Register group: ROGUE_CR_OS0_SCRATCH, with 2 repeats */
+#define ROGUE_CR_OS0_SCRATCH_REPEATCOUNT 2U
+/* Register ROGUE_CR_OS0_SCRATCH0 */
+#define ROGUE_CR_OS0_SCRATCH0 0x1A80U
+#define ROGUE_CR_OS0_SCRATCH0_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_OS0_SCRATCH0_DATA_SHIFT 0U
+#define ROGUE_CR_OS0_SCRATCH0_DATA_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_OS0_SCRATCH1 */
+#define ROGUE_CR_OS0_SCRATCH1 0x1A88U
+#define ROGUE_CR_OS0_SCRATCH1_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_OS0_SCRATCH1_DATA_SHIFT 0U
+#define ROGUE_CR_OS0_SCRATCH1_DATA_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_OS0_SCRATCH2 */
+#define ROGUE_CR_OS0_SCRATCH2 0x1A90U
+#define ROGUE_CR_OS0_SCRATCH2_MASKFULL 0x00000000000000FFULL
+#define ROGUE_CR_OS0_SCRATCH2_DATA_SHIFT 0U
+#define ROGUE_CR_OS0_SCRATCH2_DATA_CLRMSK 0xFFFFFF00U
+
+/* Register ROGUE_CR_OS0_SCRATCH3 */
+#define ROGUE_CR_OS0_SCRATCH3 0x1A98U
+#define ROGUE_CR_OS0_SCRATCH3_MASKFULL 0x00000000000000FFULL
+#define ROGUE_CR_OS0_SCRATCH3_DATA_SHIFT 0U
+#define ROGUE_CR_OS0_SCRATCH3_DATA_CLRMSK 0xFFFFFF00U
+
+/* Register group: ROGUE_CR_OS1_SCRATCH, with 2 repeats */
+#define ROGUE_CR_OS1_SCRATCH_REPEATCOUNT 2U
+/* Register ROGUE_CR_OS1_SCRATCH0 */
+#define ROGUE_CR_OS1_SCRATCH0 0x11A80U
+#define ROGUE_CR_OS1_SCRATCH0_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_OS1_SCRATCH0_DATA_SHIFT 0U
+#define ROGUE_CR_OS1_SCRATCH0_DATA_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_OS1_SCRATCH1 */
+#define ROGUE_CR_OS1_SCRATCH1 0x11A88U
+#define ROGUE_CR_OS1_SCRATCH1_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_OS1_SCRATCH1_DATA_SHIFT 0U
+#define ROGUE_CR_OS1_SCRATCH1_DATA_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_OS1_SCRATCH2 */
+#define ROGUE_CR_OS1_SCRATCH2 0x11A90U
+#define ROGUE_CR_OS1_SCRATCH2_MASKFULL 0x00000000000000FFULL
+#define ROGUE_CR_OS1_SCRATCH2_DATA_SHIFT 0U
+#define ROGUE_CR_OS1_SCRATCH2_DATA_CLRMSK 0xFFFFFF00U
+
+/* Register ROGUE_CR_OS1_SCRATCH3 */
+#define ROGUE_CR_OS1_SCRATCH3 0x11A98U
+#define ROGUE_CR_OS1_SCRATCH3_MASKFULL 0x00000000000000FFULL
+#define ROGUE_CR_OS1_SCRATCH3_DATA_SHIFT 0U
+#define ROGUE_CR_OS1_SCRATCH3_DATA_CLRMSK 0xFFFFFF00U
+
+/* Register group: ROGUE_CR_OS2_SCRATCH, with 2 repeats */
+#define ROGUE_CR_OS2_SCRATCH_REPEATCOUNT 2U
+/* Register ROGUE_CR_OS2_SCRATCH0 */
+#define ROGUE_CR_OS2_SCRATCH0 0x21A80U
+#define ROGUE_CR_OS2_SCRATCH0_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_OS2_SCRATCH0_DATA_SHIFT 0U
+#define ROGUE_CR_OS2_SCRATCH0_DATA_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_OS2_SCRATCH1 */
+#define ROGUE_CR_OS2_SCRATCH1 0x21A88U
+#define ROGUE_CR_OS2_SCRATCH1_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_OS2_SCRATCH1_DATA_SHIFT 0U
+#define ROGUE_CR_OS2_SCRATCH1_DATA_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_OS2_SCRATCH2 */
+#define ROGUE_CR_OS2_SCRATCH2 0x21A90U
+#define ROGUE_CR_OS2_SCRATCH2_MASKFULL 0x00000000000000FFULL
+#define ROGUE_CR_OS2_SCRATCH2_DATA_SHIFT 0U
+#define ROGUE_CR_OS2_SCRATCH2_DATA_CLRMSK 0xFFFFFF00U
+
+/* Register ROGUE_CR_OS2_SCRATCH3 */
+#define ROGUE_CR_OS2_SCRATCH3 0x21A98U
+#define ROGUE_CR_OS2_SCRATCH3_MASKFULL 0x00000000000000FFULL
+#define ROGUE_CR_OS2_SCRATCH3_DATA_SHIFT 0U
+#define ROGUE_CR_OS2_SCRATCH3_DATA_CLRMSK 0xFFFFFF00U
+
+/* Register group: ROGUE_CR_OS3_SCRATCH, with 2 repeats */
+#define ROGUE_CR_OS3_SCRATCH_REPEATCOUNT 2U
+/* Register ROGUE_CR_OS3_SCRATCH0 */
+#define ROGUE_CR_OS3_SCRATCH0 0x31A80U
+#define ROGUE_CR_OS3_SCRATCH0_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_OS3_SCRATCH0_DATA_SHIFT 0U
+#define ROGUE_CR_OS3_SCRATCH0_DATA_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_OS3_SCRATCH1 */
+#define ROGUE_CR_OS3_SCRATCH1 0x31A88U
+#define ROGUE_CR_OS3_SCRATCH1_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_OS3_SCRATCH1_DATA_SHIFT 0U
+#define ROGUE_CR_OS3_SCRATCH1_DATA_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_OS3_SCRATCH2 */
+#define ROGUE_CR_OS3_SCRATCH2 0x31A90U
+#define ROGUE_CR_OS3_SCRATCH2_MASKFULL 0x00000000000000FFULL
+#define ROGUE_CR_OS3_SCRATCH2_DATA_SHIFT 0U
+#define ROGUE_CR_OS3_SCRATCH2_DATA_CLRMSK 0xFFFFFF00U
+
+/* Register ROGUE_CR_OS3_SCRATCH3 */
+#define ROGUE_CR_OS3_SCRATCH3 0x31A98U
+#define ROGUE_CR_OS3_SCRATCH3_MASKFULL 0x00000000000000FFULL
+#define ROGUE_CR_OS3_SCRATCH3_DATA_SHIFT 0U
+#define ROGUE_CR_OS3_SCRATCH3_DATA_CLRMSK 0xFFFFFF00U
+
+/* Register group: ROGUE_CR_OS4_SCRATCH, with 2 repeats */
+#define ROGUE_CR_OS4_SCRATCH_REPEATCOUNT 2U
+/* Register ROGUE_CR_OS4_SCRATCH0 */
+#define ROGUE_CR_OS4_SCRATCH0 0x41A80U
+#define ROGUE_CR_OS4_SCRATCH0_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_OS4_SCRATCH0_DATA_SHIFT 0U
+#define ROGUE_CR_OS4_SCRATCH0_DATA_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_OS4_SCRATCH1 */
+#define ROGUE_CR_OS4_SCRATCH1 0x41A88U
+#define ROGUE_CR_OS4_SCRATCH1_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_OS4_SCRATCH1_DATA_SHIFT 0U
+#define ROGUE_CR_OS4_SCRATCH1_DATA_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_OS4_SCRATCH2 */
+#define ROGUE_CR_OS4_SCRATCH2 0x41A90U
+#define ROGUE_CR_OS4_SCRATCH2_MASKFULL 0x00000000000000FFULL
+#define ROGUE_CR_OS4_SCRATCH2_DATA_SHIFT 0U
+#define ROGUE_CR_OS4_SCRATCH2_DATA_CLRMSK 0xFFFFFF00U
+
+/* Register ROGUE_CR_OS4_SCRATCH3 */
+#define ROGUE_CR_OS4_SCRATCH3 0x41A98U
+#define ROGUE_CR_OS4_SCRATCH3_MASKFULL 0x00000000000000FFULL
+#define ROGUE_CR_OS4_SCRATCH3_DATA_SHIFT 0U
+#define ROGUE_CR_OS4_SCRATCH3_DATA_CLRMSK 0xFFFFFF00U
+
+/* Register group: ROGUE_CR_OS5_SCRATCH, with 2 repeats */
+#define ROGUE_CR_OS5_SCRATCH_REPEATCOUNT 2U
+/* Register ROGUE_CR_OS5_SCRATCH0 */
+#define ROGUE_CR_OS5_SCRATCH0 0x51A80U
+#define ROGUE_CR_OS5_SCRATCH0_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_OS5_SCRATCH0_DATA_SHIFT 0U
+#define ROGUE_CR_OS5_SCRATCH0_DATA_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_OS5_SCRATCH1 */
+#define ROGUE_CR_OS5_SCRATCH1 0x51A88U
+#define ROGUE_CR_OS5_SCRATCH1_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_OS5_SCRATCH1_DATA_SHIFT 0U
+#define ROGUE_CR_OS5_SCRATCH1_DATA_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_OS5_SCRATCH2 */
+#define ROGUE_CR_OS5_SCRATCH2 0x51A90U
+#define ROGUE_CR_OS5_SCRATCH2_MASKFULL 0x00000000000000FFULL
+#define ROGUE_CR_OS5_SCRATCH2_DATA_SHIFT 0U
+#define ROGUE_CR_OS5_SCRATCH2_DATA_CLRMSK 0xFFFFFF00U
+
+/* Register ROGUE_CR_OS5_SCRATCH3 */
+#define ROGUE_CR_OS5_SCRATCH3 0x51A98U
+#define ROGUE_CR_OS5_SCRATCH3_MASKFULL 0x00000000000000FFULL
+#define ROGUE_CR_OS5_SCRATCH3_DATA_SHIFT 0U
+#define ROGUE_CR_OS5_SCRATCH3_DATA_CLRMSK 0xFFFFFF00U
+
+/* Register group: ROGUE_CR_OS6_SCRATCH, with 2 repeats */
+#define ROGUE_CR_OS6_SCRATCH_REPEATCOUNT 2U
+/* Register ROGUE_CR_OS6_SCRATCH0 */
+#define ROGUE_CR_OS6_SCRATCH0 0x61A80U
+#define ROGUE_CR_OS6_SCRATCH0_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_OS6_SCRATCH0_DATA_SHIFT 0U
+#define ROGUE_CR_OS6_SCRATCH0_DATA_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_OS6_SCRATCH1 */
+#define ROGUE_CR_OS6_SCRATCH1 0x61A88U
+#define ROGUE_CR_OS6_SCRATCH1_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_OS6_SCRATCH1_DATA_SHIFT 0U
+#define ROGUE_CR_OS6_SCRATCH1_DATA_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_OS6_SCRATCH2 */
+#define ROGUE_CR_OS6_SCRATCH2 0x61A90U
+#define ROGUE_CR_OS6_SCRATCH2_MASKFULL 0x00000000000000FFULL
+#define ROGUE_CR_OS6_SCRATCH2_DATA_SHIFT 0U
+#define ROGUE_CR_OS6_SCRATCH2_DATA_CLRMSK 0xFFFFFF00U
+
+/* Register ROGUE_CR_OS6_SCRATCH3 */
+#define ROGUE_CR_OS6_SCRATCH3 0x61A98U
+#define ROGUE_CR_OS6_SCRATCH3_MASKFULL 0x00000000000000FFULL
+#define ROGUE_CR_OS6_SCRATCH3_DATA_SHIFT 0U
+#define ROGUE_CR_OS6_SCRATCH3_DATA_CLRMSK 0xFFFFFF00U
+
+/* Register group: ROGUE_CR_OS7_SCRATCH, with 2 repeats */
+#define ROGUE_CR_OS7_SCRATCH_REPEATCOUNT 2U
+/* Register ROGUE_CR_OS7_SCRATCH0 */
+#define ROGUE_CR_OS7_SCRATCH0 0x71A80U
+#define ROGUE_CR_OS7_SCRATCH0_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_OS7_SCRATCH0_DATA_SHIFT 0U
+#define ROGUE_CR_OS7_SCRATCH0_DATA_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_OS7_SCRATCH1 */
+#define ROGUE_CR_OS7_SCRATCH1 0x71A88U
+#define ROGUE_CR_OS7_SCRATCH1_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_OS7_SCRATCH1_DATA_SHIFT 0U
+#define ROGUE_CR_OS7_SCRATCH1_DATA_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_OS7_SCRATCH2 */
+#define ROGUE_CR_OS7_SCRATCH2 0x71A90U
+#define ROGUE_CR_OS7_SCRATCH2_MASKFULL 0x00000000000000FFULL
+#define ROGUE_CR_OS7_SCRATCH2_DATA_SHIFT 0U
+#define ROGUE_CR_OS7_SCRATCH2_DATA_CLRMSK 0xFFFFFF00U
+
+/* Register ROGUE_CR_OS7_SCRATCH3 */
+#define ROGUE_CR_OS7_SCRATCH3 0x71A98U
+#define ROGUE_CR_OS7_SCRATCH3_MASKFULL 0x00000000000000FFULL
+#define ROGUE_CR_OS7_SCRATCH3_DATA_SHIFT 0U
+#define ROGUE_CR_OS7_SCRATCH3_DATA_CLRMSK 0xFFFFFF00U
+
+/* Register ROGUE_CR_SPFILTER_SIGNAL_DESCR */
+#define ROGUE_CR_SPFILTER_SIGNAL_DESCR 0x2700U
+#define ROGUE_CR_SPFILTER_SIGNAL_DESCR_MASKFULL 0x000000000000FFFFULL
+#define ROGUE_CR_SPFILTER_SIGNAL_DESCR_SIZE_SHIFT 0U
+#define ROGUE_CR_SPFILTER_SIGNAL_DESCR_SIZE_CLRMSK 0xFFFF0000U
+#define ROGUE_CR_SPFILTER_SIGNAL_DESCR_SIZE_ALIGNSHIFT 4U
+#define ROGUE_CR_SPFILTER_SIGNAL_DESCR_SIZE_ALIGNSIZE 16U
+
+/* Register ROGUE_CR_SPFILTER_SIGNAL_DESCR_MIN */
+#define ROGUE_CR_SPFILTER_SIGNAL_DESCR_MIN 0x2708U
+#define ROGUE_CR_SPFILTER_SIGNAL_DESCR_MIN_MASKFULL 0x000000FFFFFFFFF0ULL
+#define ROGUE_CR_SPFILTER_SIGNAL_DESCR_MIN_ADDR_SHIFT 4U
+#define ROGUE_CR_SPFILTER_SIGNAL_DESCR_MIN_ADDR_CLRMSK 0xFFFFFF000000000FULL
+#define ROGUE_CR_SPFILTER_SIGNAL_DESCR_MIN_ADDR_ALIGNSHIFT 4U
+#define ROGUE_CR_SPFILTER_SIGNAL_DESCR_MIN_ADDR_ALIGNSIZE 16U
+
+/* Register group: ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG, with 16 repeats */
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG_REPEATCOUNT 16U
+/* Register ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG0 */
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG0 0x3000U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG0_MASKFULL 0x7FFFF7FFFFFFF000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG0_TRUSTED_SHIFT 62U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG0_TRUSTED_CLRMSK 0xBFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG0_TRUSTED_EN 0x4000000000000000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG0_LOAD_STORE_EN_SHIFT 61U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG0_LOAD_STORE_EN_CLRMSK 0xDFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG0_LOAD_STORE_EN_EN 0x2000000000000000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG0_FETCH_EN_SHIFT 60U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG0_FETCH_EN_CLRMSK 0xEFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG0_FETCH_EN_EN 0x1000000000000000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG0_SIZE_SHIFT 44U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG0_SIZE_CLRMSK 0xF0000FFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG0_CBASE_SHIFT 40U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG0_CBASE_CLRMSK 0xFFFFF8FFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG0_DEVVADDR_SHIFT 12U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG0_DEVVADDR_CLRMSK 0xFFFFFF0000000FFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG0_DEVVADDR_ALIGNSHIFT 12U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG0_DEVVADDR_ALIGNSIZE 4096U
+
+/* Register ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG1 */
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG1 0x3008U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG1_MASKFULL 0x7FFFF7FFFFFFF000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG1_TRUSTED_SHIFT 62U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG1_TRUSTED_CLRMSK 0xBFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG1_TRUSTED_EN 0x4000000000000000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG1_LOAD_STORE_EN_SHIFT 61U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG1_LOAD_STORE_EN_CLRMSK 0xDFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG1_LOAD_STORE_EN_EN 0x2000000000000000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG1_FETCH_EN_SHIFT 60U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG1_FETCH_EN_CLRMSK 0xEFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG1_FETCH_EN_EN 0x1000000000000000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG1_SIZE_SHIFT 44U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG1_SIZE_CLRMSK 0xF0000FFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG1_CBASE_SHIFT 40U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG1_CBASE_CLRMSK 0xFFFFF8FFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG1_DEVVADDR_SHIFT 12U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG1_DEVVADDR_CLRMSK 0xFFFFFF0000000FFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG1_DEVVADDR_ALIGNSHIFT 12U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG1_DEVVADDR_ALIGNSIZE 4096U
+
+/* Register ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG2 */
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG2 0x3010U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG2_MASKFULL 0x7FFFF7FFFFFFF000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG2_TRUSTED_SHIFT 62U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG2_TRUSTED_CLRMSK 0xBFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG2_TRUSTED_EN 0x4000000000000000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG2_LOAD_STORE_EN_SHIFT 61U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG2_LOAD_STORE_EN_CLRMSK 0xDFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG2_LOAD_STORE_EN_EN 0x2000000000000000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG2_FETCH_EN_SHIFT 60U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG2_FETCH_EN_CLRMSK 0xEFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG2_FETCH_EN_EN 0x1000000000000000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG2_SIZE_SHIFT 44U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG2_SIZE_CLRMSK 0xF0000FFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG2_CBASE_SHIFT 40U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG2_CBASE_CLRMSK 0xFFFFF8FFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG2_DEVVADDR_SHIFT 12U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG2_DEVVADDR_CLRMSK 0xFFFFFF0000000FFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG2_DEVVADDR_ALIGNSHIFT 12U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG2_DEVVADDR_ALIGNSIZE 4096U
+
+/* Register ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG3 */
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG3 0x3018U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG3_MASKFULL 0x7FFFF7FFFFFFF000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG3_TRUSTED_SHIFT 62U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG3_TRUSTED_CLRMSK 0xBFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG3_TRUSTED_EN 0x4000000000000000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG3_LOAD_STORE_EN_SHIFT 61U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG3_LOAD_STORE_EN_CLRMSK 0xDFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG3_LOAD_STORE_EN_EN 0x2000000000000000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG3_FETCH_EN_SHIFT 60U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG3_FETCH_EN_CLRMSK 0xEFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG3_FETCH_EN_EN 0x1000000000000000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG3_SIZE_SHIFT 44U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG3_SIZE_CLRMSK 0xF0000FFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG3_CBASE_SHIFT 40U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG3_CBASE_CLRMSK 0xFFFFF8FFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG3_DEVVADDR_SHIFT 12U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG3_DEVVADDR_CLRMSK 0xFFFFFF0000000FFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG3_DEVVADDR_ALIGNSHIFT 12U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG3_DEVVADDR_ALIGNSIZE 4096U
+
+/* Register ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG4 */
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG4 0x3020U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG4_MASKFULL 0x7FFFF7FFFFFFF000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG4_TRUSTED_SHIFT 62U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG4_TRUSTED_CLRMSK 0xBFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG4_TRUSTED_EN 0x4000000000000000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG4_LOAD_STORE_EN_SHIFT 61U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG4_LOAD_STORE_EN_CLRMSK 0xDFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG4_LOAD_STORE_EN_EN 0x2000000000000000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG4_FETCH_EN_SHIFT 60U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG4_FETCH_EN_CLRMSK 0xEFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG4_FETCH_EN_EN 0x1000000000000000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG4_SIZE_SHIFT 44U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG4_SIZE_CLRMSK 0xF0000FFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG4_CBASE_SHIFT 40U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG4_CBASE_CLRMSK 0xFFFFF8FFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG4_DEVVADDR_SHIFT 12U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG4_DEVVADDR_CLRMSK 0xFFFFFF0000000FFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG4_DEVVADDR_ALIGNSHIFT 12U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG4_DEVVADDR_ALIGNSIZE 4096U
+
+/* Register ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG5 */
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG5 0x3028U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG5_MASKFULL 0x7FFFF7FFFFFFF000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG5_TRUSTED_SHIFT 62U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG5_TRUSTED_CLRMSK 0xBFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG5_TRUSTED_EN 0x4000000000000000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG5_LOAD_STORE_EN_SHIFT 61U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG5_LOAD_STORE_EN_CLRMSK 0xDFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG5_LOAD_STORE_EN_EN 0x2000000000000000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG5_FETCH_EN_SHIFT 60U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG5_FETCH_EN_CLRMSK 0xEFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG5_FETCH_EN_EN 0x1000000000000000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG5_SIZE_SHIFT 44U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG5_SIZE_CLRMSK 0xF0000FFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG5_CBASE_SHIFT 40U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG5_CBASE_CLRMSK 0xFFFFF8FFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG5_DEVVADDR_SHIFT 12U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG5_DEVVADDR_CLRMSK 0xFFFFFF0000000FFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG5_DEVVADDR_ALIGNSHIFT 12U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG5_DEVVADDR_ALIGNSIZE 4096U
+
+/* Register ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG6 */
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG6 0x3030U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG6_MASKFULL 0x7FFFF7FFFFFFF000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG6_TRUSTED_SHIFT 62U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG6_TRUSTED_CLRMSK 0xBFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG6_TRUSTED_EN 0x4000000000000000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG6_LOAD_STORE_EN_SHIFT 61U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG6_LOAD_STORE_EN_CLRMSK 0xDFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG6_LOAD_STORE_EN_EN 0x2000000000000000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG6_FETCH_EN_SHIFT 60U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG6_FETCH_EN_CLRMSK 0xEFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG6_FETCH_EN_EN 0x1000000000000000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG6_SIZE_SHIFT 44U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG6_SIZE_CLRMSK 0xF0000FFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG6_CBASE_SHIFT 40U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG6_CBASE_CLRMSK 0xFFFFF8FFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG6_DEVVADDR_SHIFT 12U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG6_DEVVADDR_CLRMSK 0xFFFFFF0000000FFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG6_DEVVADDR_ALIGNSHIFT 12U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG6_DEVVADDR_ALIGNSIZE 4096U
+
+/* Register ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG7 */
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG7 0x3038U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG7_MASKFULL 0x7FFFF7FFFFFFF000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG7_TRUSTED_SHIFT 62U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG7_TRUSTED_CLRMSK 0xBFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG7_TRUSTED_EN 0x4000000000000000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG7_LOAD_STORE_EN_SHIFT 61U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG7_LOAD_STORE_EN_CLRMSK 0xDFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG7_LOAD_STORE_EN_EN 0x2000000000000000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG7_FETCH_EN_SHIFT 60U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG7_FETCH_EN_CLRMSK 0xEFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG7_FETCH_EN_EN 0x1000000000000000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG7_SIZE_SHIFT 44U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG7_SIZE_CLRMSK 0xF0000FFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG7_CBASE_SHIFT 40U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG7_CBASE_CLRMSK 0xFFFFF8FFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG7_DEVVADDR_SHIFT 12U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG7_DEVVADDR_CLRMSK 0xFFFFFF0000000FFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG7_DEVVADDR_ALIGNSHIFT 12U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG7_DEVVADDR_ALIGNSIZE 4096U
+
+/* Register ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG8 */
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG8 0x3040U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG8_MASKFULL 0x7FFFF7FFFFFFF000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG8_TRUSTED_SHIFT 62U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG8_TRUSTED_CLRMSK 0xBFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG8_TRUSTED_EN 0x4000000000000000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG8_LOAD_STORE_EN_SHIFT 61U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG8_LOAD_STORE_EN_CLRMSK 0xDFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG8_LOAD_STORE_EN_EN 0x2000000000000000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG8_FETCH_EN_SHIFT 60U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG8_FETCH_EN_CLRMSK 0xEFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG8_FETCH_EN_EN 0x1000000000000000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG8_SIZE_SHIFT 44U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG8_SIZE_CLRMSK 0xF0000FFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG8_CBASE_SHIFT 40U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG8_CBASE_CLRMSK 0xFFFFF8FFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG8_DEVVADDR_SHIFT 12U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG8_DEVVADDR_CLRMSK 0xFFFFFF0000000FFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG8_DEVVADDR_ALIGNSHIFT 12U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG8_DEVVADDR_ALIGNSIZE 4096U
+
+/* Register ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG9 */
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG9 0x3048U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG9_MASKFULL 0x7FFFF7FFFFFFF000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG9_TRUSTED_SHIFT 62U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG9_TRUSTED_CLRMSK 0xBFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG9_TRUSTED_EN 0x4000000000000000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG9_LOAD_STORE_EN_SHIFT 61U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG9_LOAD_STORE_EN_CLRMSK 0xDFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG9_LOAD_STORE_EN_EN 0x2000000000000000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG9_FETCH_EN_SHIFT 60U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG9_FETCH_EN_CLRMSK 0xEFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG9_FETCH_EN_EN 0x1000000000000000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG9_SIZE_SHIFT 44U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG9_SIZE_CLRMSK 0xF0000FFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG9_CBASE_SHIFT 40U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG9_CBASE_CLRMSK 0xFFFFF8FFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG9_DEVVADDR_SHIFT 12U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG9_DEVVADDR_CLRMSK 0xFFFFFF0000000FFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG9_DEVVADDR_ALIGNSHIFT 12U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG9_DEVVADDR_ALIGNSIZE 4096U
+
+/* Register ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG10 */
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG10 0x3050U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG10_MASKFULL 0x7FFFF7FFFFFFF000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG10_TRUSTED_SHIFT 62U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG10_TRUSTED_CLRMSK 0xBFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG10_TRUSTED_EN 0x4000000000000000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG10_LOAD_STORE_EN_SHIFT 61U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG10_LOAD_STORE_EN_CLRMSK 0xDFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG10_LOAD_STORE_EN_EN 0x2000000000000000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG10_FETCH_EN_SHIFT 60U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG10_FETCH_EN_CLRMSK 0xEFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG10_FETCH_EN_EN 0x1000000000000000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG10_SIZE_SHIFT 44U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG10_SIZE_CLRMSK 0xF0000FFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG10_CBASE_SHIFT 40U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG10_CBASE_CLRMSK 0xFFFFF8FFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG10_DEVVADDR_SHIFT 12U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG10_DEVVADDR_CLRMSK 0xFFFFFF0000000FFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG10_DEVVADDR_ALIGNSHIFT 12U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG10_DEVVADDR_ALIGNSIZE 4096U
+
+/* Register ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG11 */
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG11 0x3058U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG11_MASKFULL 0x7FFFF7FFFFFFF000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG11_TRUSTED_SHIFT 62U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG11_TRUSTED_CLRMSK 0xBFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG11_TRUSTED_EN 0x4000000000000000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG11_LOAD_STORE_EN_SHIFT 61U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG11_LOAD_STORE_EN_CLRMSK 0xDFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG11_LOAD_STORE_EN_EN 0x2000000000000000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG11_FETCH_EN_SHIFT 60U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG11_FETCH_EN_CLRMSK 0xEFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG11_FETCH_EN_EN 0x1000000000000000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG11_SIZE_SHIFT 44U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG11_SIZE_CLRMSK 0xF0000FFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG11_CBASE_SHIFT 40U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG11_CBASE_CLRMSK 0xFFFFF8FFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG11_DEVVADDR_SHIFT 12U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG11_DEVVADDR_CLRMSK 0xFFFFFF0000000FFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG11_DEVVADDR_ALIGNSHIFT 12U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG11_DEVVADDR_ALIGNSIZE 4096U
+
+/* Register ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG12 */
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG12 0x3060U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG12_MASKFULL 0x7FFFF7FFFFFFF000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG12_TRUSTED_SHIFT 62U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG12_TRUSTED_CLRMSK 0xBFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG12_TRUSTED_EN 0x4000000000000000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG12_LOAD_STORE_EN_SHIFT 61U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG12_LOAD_STORE_EN_CLRMSK 0xDFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG12_LOAD_STORE_EN_EN 0x2000000000000000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG12_FETCH_EN_SHIFT 60U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG12_FETCH_EN_CLRMSK 0xEFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG12_FETCH_EN_EN 0x1000000000000000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG12_SIZE_SHIFT 44U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG12_SIZE_CLRMSK 0xF0000FFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG12_CBASE_SHIFT 40U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG12_CBASE_CLRMSK 0xFFFFF8FFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG12_DEVVADDR_SHIFT 12U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG12_DEVVADDR_CLRMSK 0xFFFFFF0000000FFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG12_DEVVADDR_ALIGNSHIFT 12U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG12_DEVVADDR_ALIGNSIZE 4096U
+
+/* Register ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG13 */
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG13 0x3068U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG13_MASKFULL 0x7FFFF7FFFFFFF000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG13_TRUSTED_SHIFT 62U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG13_TRUSTED_CLRMSK 0xBFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG13_TRUSTED_EN 0x4000000000000000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG13_LOAD_STORE_EN_SHIFT 61U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG13_LOAD_STORE_EN_CLRMSK 0xDFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG13_LOAD_STORE_EN_EN 0x2000000000000000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG13_FETCH_EN_SHIFT 60U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG13_FETCH_EN_CLRMSK 0xEFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG13_FETCH_EN_EN 0x1000000000000000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG13_SIZE_SHIFT 44U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG13_SIZE_CLRMSK 0xF0000FFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG13_CBASE_SHIFT 40U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG13_CBASE_CLRMSK 0xFFFFF8FFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG13_DEVVADDR_SHIFT 12U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG13_DEVVADDR_CLRMSK 0xFFFFFF0000000FFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG13_DEVVADDR_ALIGNSHIFT 12U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG13_DEVVADDR_ALIGNSIZE 4096U
+
+/* Register ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG14 */
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG14 0x3070U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG14_MASKFULL 0x7FFFF7FFFFFFF000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG14_TRUSTED_SHIFT 62U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG14_TRUSTED_CLRMSK 0xBFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG14_TRUSTED_EN 0x4000000000000000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG14_LOAD_STORE_EN_SHIFT 61U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG14_LOAD_STORE_EN_CLRMSK 0xDFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG14_LOAD_STORE_EN_EN 0x2000000000000000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG14_FETCH_EN_SHIFT 60U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG14_FETCH_EN_CLRMSK 0xEFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG14_FETCH_EN_EN 0x1000000000000000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG14_SIZE_SHIFT 44U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG14_SIZE_CLRMSK 0xF0000FFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG14_CBASE_SHIFT 40U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG14_CBASE_CLRMSK 0xFFFFF8FFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG14_DEVVADDR_SHIFT 12U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG14_DEVVADDR_CLRMSK 0xFFFFFF0000000FFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG14_DEVVADDR_ALIGNSHIFT 12U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG14_DEVVADDR_ALIGNSIZE 4096U
+
+/* Register ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG15 */
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG15 0x3078U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG15_MASKFULL 0x7FFFF7FFFFFFF000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG15_TRUSTED_SHIFT 62U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG15_TRUSTED_CLRMSK 0xBFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG15_TRUSTED_EN 0x4000000000000000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG15_LOAD_STORE_EN_SHIFT 61U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG15_LOAD_STORE_EN_CLRMSK 0xDFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG15_LOAD_STORE_EN_EN 0x2000000000000000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG15_FETCH_EN_SHIFT 60U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG15_FETCH_EN_CLRMSK 0xEFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG15_FETCH_EN_EN 0x1000000000000000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG15_SIZE_SHIFT 44U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG15_SIZE_CLRMSK 0xF0000FFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG15_CBASE_SHIFT 40U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG15_CBASE_CLRMSK 0xFFFFF8FFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG15_DEVVADDR_SHIFT 12U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG15_DEVVADDR_CLRMSK 0xFFFFFF0000000FFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG15_DEVVADDR_ALIGNSHIFT 12U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG15_DEVVADDR_ALIGNSIZE 4096U
+
+/* Register ROGUE_CR_FWCORE_BOOT */
+#define ROGUE_CR_FWCORE_BOOT 0x3090U
+#define ROGUE_CR_FWCORE_BOOT_MASKFULL 0x0000000000000001ULL
+#define ROGUE_CR_FWCORE_BOOT_ENABLE_SHIFT 0U
+#define ROGUE_CR_FWCORE_BOOT_ENABLE_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_FWCORE_BOOT_ENABLE_EN 0x00000001U
+
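Each register above pairs a MASKFULL mask of its implemented bits with per-field _SHIFT, _CLRMSK (field cleared) and _EN (field set) constants. A minimal sketch of the read-modify-write pattern these macros support, assuming hypothetical rd()/wr() MMIO accessors rather than the driver's real helpers:

  #include <stdint.h>

  /* Set the FW core boot enable bit; rd()/wr() are placeholder accessors. */
  static void fwcore_boot_enable(uint32_t (*rd)(uint32_t offset),
                                 void (*wr)(uint32_t offset, uint32_t value))
  {
          uint32_t val = rd(ROGUE_CR_FWCORE_BOOT);

          val &= ROGUE_CR_FWCORE_BOOT_ENABLE_CLRMSK; /* clear the field */
          val |= ROGUE_CR_FWCORE_BOOT_ENABLE_EN;     /* ENABLE = 1 */
          wr(ROGUE_CR_FWCORE_BOOT, val);
  }
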
+/* Register ROGUE_CR_FWCORE_RESET_ADDR */
+#define ROGUE_CR_FWCORE_RESET_ADDR 0x3098U
+#define ROGUE_CR_FWCORE_RESET_ADDR_MASKFULL 0x00000000FFFFFFFEULL
+#define ROGUE_CR_FWCORE_RESET_ADDR_ADDR_SHIFT 1U
+#define ROGUE_CR_FWCORE_RESET_ADDR_ADDR_CLRMSK 0x00000001U
+#define ROGUE_CR_FWCORE_RESET_ADDR_ADDR_ALIGNSHIFT 1U
+#define ROGUE_CR_FWCORE_RESET_ADDR_ADDR_ALIGNSIZE 2U
+
+/* Register ROGUE_CR_FWCORE_WRAPPER_NMI_ADDR */
+#define ROGUE_CR_FWCORE_WRAPPER_NMI_ADDR 0x30A0U
+#define ROGUE_CR_FWCORE_WRAPPER_NMI_ADDR_MASKFULL 0x00000000FFFFFFFEULL
+#define ROGUE_CR_FWCORE_WRAPPER_NMI_ADDR_ADDR_SHIFT 1U
+#define ROGUE_CR_FWCORE_WRAPPER_NMI_ADDR_ADDR_CLRMSK 0x00000001U
+#define ROGUE_CR_FWCORE_WRAPPER_NMI_ADDR_ADDR_ALIGNSHIFT 1U
+#define ROGUE_CR_FWCORE_WRAPPER_NMI_ADDR_ADDR_ALIGNSIZE 2U
+
+/* Register ROGUE_CR_FWCORE_WRAPPER_NMI_EVENT */
+#define ROGUE_CR_FWCORE_WRAPPER_NMI_EVENT 0x30A8U
+#define ROGUE_CR_FWCORE_WRAPPER_NMI_EVENT_MASKFULL 0x0000000000000001ULL
+#define ROGUE_CR_FWCORE_WRAPPER_NMI_EVENT_TRIGGER_EN_SHIFT 0U
+#define ROGUE_CR_FWCORE_WRAPPER_NMI_EVENT_TRIGGER_EN_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_FWCORE_WRAPPER_NMI_EVENT_TRIGGER_EN_EN 0x00000001U
+
+/* Register ROGUE_CR_FWCORE_MEM_FAULT_MMU_STATUS */
+#define ROGUE_CR_FWCORE_MEM_FAULT_MMU_STATUS 0x30B0U
+#define ROGUE_CR_FWCORE_MEM_FAULT_MMU_STATUS_MASKFULL 0x000000000000F771ULL
+#define ROGUE_CR_FWCORE_MEM_FAULT_MMU_STATUS_CAT_BASE_SHIFT 12U
+#define ROGUE_CR_FWCORE_MEM_FAULT_MMU_STATUS_CAT_BASE_CLRMSK 0xFFFF0FFFU
+#define ROGUE_CR_FWCORE_MEM_FAULT_MMU_STATUS_PAGE_SIZE_SHIFT 8U
+#define ROGUE_CR_FWCORE_MEM_FAULT_MMU_STATUS_PAGE_SIZE_CLRMSK 0xFFFFF8FFU
+#define ROGUE_CR_FWCORE_MEM_FAULT_MMU_STATUS_DATA_TYPE_SHIFT 5U
+#define ROGUE_CR_FWCORE_MEM_FAULT_MMU_STATUS_DATA_TYPE_CLRMSK 0xFFFFFF9FU
+#define ROGUE_CR_FWCORE_MEM_FAULT_MMU_STATUS_FAULT_RO_SHIFT 4U
+#define ROGUE_CR_FWCORE_MEM_FAULT_MMU_STATUS_FAULT_RO_CLRMSK 0xFFFFFFEFU
+#define ROGUE_CR_FWCORE_MEM_FAULT_MMU_STATUS_FAULT_RO_EN 0x00000010U
+#define ROGUE_CR_FWCORE_MEM_FAULT_MMU_STATUS_FAULT_SHIFT 0U
+#define ROGUE_CR_FWCORE_MEM_FAULT_MMU_STATUS_FAULT_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_FWCORE_MEM_FAULT_MMU_STATUS_FAULT_EN 0x00000001U
+
+/* Register ROGUE_CR_FWCORE_MEM_FAULT_REQ_STATUS */
+#define ROGUE_CR_FWCORE_MEM_FAULT_REQ_STATUS 0x30B8U
+#define ROGUE_CR_FWCORE_MEM_FAULT_REQ_STATUS_MASKFULL 0x001FFFFFFFFFFFF0ULL
+#define ROGUE_CR_FWCORE_MEM_FAULT_REQ_STATUS_RNW_SHIFT 52U
+#define ROGUE_CR_FWCORE_MEM_FAULT_REQ_STATUS_RNW_CLRMSK 0xFFEFFFFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_MEM_FAULT_REQ_STATUS_RNW_EN 0x0010000000000000ULL
+#define ROGUE_CR_FWCORE_MEM_FAULT_REQ_STATUS_TAG_SB_SHIFT 46U
+#define ROGUE_CR_FWCORE_MEM_FAULT_REQ_STATUS_TAG_SB_CLRMSK 0xFFF03FFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_MEM_FAULT_REQ_STATUS_TAG_ID_SHIFT 40U
+#define ROGUE_CR_FWCORE_MEM_FAULT_REQ_STATUS_TAG_ID_CLRMSK 0xFFFFC0FFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_MEM_FAULT_REQ_STATUS_ADDRESS_SHIFT 4U
+#define ROGUE_CR_FWCORE_MEM_FAULT_REQ_STATUS_ADDRESS_CLRMSK 0xFFFFFF000000000FULL
+#define ROGUE_CR_FWCORE_MEM_FAULT_REQ_STATUS_ADDRESS_ALIGNSHIFT 4U
+#define ROGUE_CR_FWCORE_MEM_FAULT_REQ_STATUS_ADDRESS_ALIGNSIZE 16U
+
+/* Register ROGUE_CR_FWCORE_MEM_CTRL_INVAL */
+#define ROGUE_CR_FWCORE_MEM_CTRL_INVAL 0x30C0U
+#define ROGUE_CR_FWCORE_MEM_CTRL_INVAL_MASKFULL 0x000000000000000FULL
+#define ROGUE_CR_FWCORE_MEM_CTRL_INVAL_TLB_SHIFT 3U
+#define ROGUE_CR_FWCORE_MEM_CTRL_INVAL_TLB_CLRMSK 0xFFFFFFF7U
+#define ROGUE_CR_FWCORE_MEM_CTRL_INVAL_TLB_EN 0x00000008U
+#define ROGUE_CR_FWCORE_MEM_CTRL_INVAL_PC_SHIFT 2U
+#define ROGUE_CR_FWCORE_MEM_CTRL_INVAL_PC_CLRMSK 0xFFFFFFFBU
+#define ROGUE_CR_FWCORE_MEM_CTRL_INVAL_PC_EN 0x00000004U
+#define ROGUE_CR_FWCORE_MEM_CTRL_INVAL_PD_SHIFT 1U
+#define ROGUE_CR_FWCORE_MEM_CTRL_INVAL_PD_CLRMSK 0xFFFFFFFDU
+#define ROGUE_CR_FWCORE_MEM_CTRL_INVAL_PD_EN 0x00000002U
+#define ROGUE_CR_FWCORE_MEM_CTRL_INVAL_PT_SHIFT 0U
+#define ROGUE_CR_FWCORE_MEM_CTRL_INVAL_PT_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_FWCORE_MEM_CTRL_INVAL_PT_EN 0x00000001U
+
+/* Register ROGUE_CR_FWCORE_MEM_MMU_STATUS */
+#define ROGUE_CR_FWCORE_MEM_MMU_STATUS 0x30C8U
+#define ROGUE_CR_FWCORE_MEM_MMU_STATUS_MASKFULL 0x000000000FFFFFF7ULL
+#define ROGUE_CR_FWCORE_MEM_MMU_STATUS_PC_DATA_SHIFT 20U
+#define ROGUE_CR_FWCORE_MEM_MMU_STATUS_PC_DATA_CLRMSK 0xF00FFFFFU
+#define ROGUE_CR_FWCORE_MEM_MMU_STATUS_PD_DATA_SHIFT 12U
+#define ROGUE_CR_FWCORE_MEM_MMU_STATUS_PD_DATA_CLRMSK 0xFFF00FFFU
+#define ROGUE_CR_FWCORE_MEM_MMU_STATUS_PT_DATA_SHIFT 4U
+#define ROGUE_CR_FWCORE_MEM_MMU_STATUS_PT_DATA_CLRMSK 0xFFFFF00FU
+#define ROGUE_CR_FWCORE_MEM_MMU_STATUS_STALLED_SHIFT 2U
+#define ROGUE_CR_FWCORE_MEM_MMU_STATUS_STALLED_CLRMSK 0xFFFFFFFBU
+#define ROGUE_CR_FWCORE_MEM_MMU_STATUS_STALLED_EN 0x00000004U
+#define ROGUE_CR_FWCORE_MEM_MMU_STATUS_PAUSED_SHIFT 1U
+#define ROGUE_CR_FWCORE_MEM_MMU_STATUS_PAUSED_CLRMSK 0xFFFFFFFDU
+#define ROGUE_CR_FWCORE_MEM_MMU_STATUS_PAUSED_EN 0x00000002U
+#define ROGUE_CR_FWCORE_MEM_MMU_STATUS_BUSY_SHIFT 0U
+#define ROGUE_CR_FWCORE_MEM_MMU_STATUS_BUSY_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_FWCORE_MEM_MMU_STATUS_BUSY_EN 0x00000001U
+
+/* Register ROGUE_CR_FWCORE_MEM_READS_EXT_STATUS */
+#define ROGUE_CR_FWCORE_MEM_READS_EXT_STATUS 0x30D8U
+#define ROGUE_CR_FWCORE_MEM_READS_EXT_STATUS_MASKFULL 0x0000000000000FFFULL
+#define ROGUE_CR_FWCORE_MEM_READS_EXT_STATUS_MMU_SHIFT 0U
+#define ROGUE_CR_FWCORE_MEM_READS_EXT_STATUS_MMU_CLRMSK 0xFFFFF000U
+
+/* Register ROGUE_CR_FWCORE_MEM_READS_INT_STATUS */
+#define ROGUE_CR_FWCORE_MEM_READS_INT_STATUS 0x30E0U
+#define ROGUE_CR_FWCORE_MEM_READS_INT_STATUS_MASKFULL 0x00000000000007FFULL
+#define ROGUE_CR_FWCORE_MEM_READS_INT_STATUS_MMU_SHIFT 0U
+#define ROGUE_CR_FWCORE_MEM_READS_INT_STATUS_MMU_CLRMSK 0xFFFFF800U
+
+/* Register ROGUE_CR_FWCORE_WRAPPER_FENCE */
+#define ROGUE_CR_FWCORE_WRAPPER_FENCE 0x30E8U
+#define ROGUE_CR_FWCORE_WRAPPER_FENCE_MASKFULL 0x0000000000000001ULL
+#define ROGUE_CR_FWCORE_WRAPPER_FENCE_ID_SHIFT 0U
+#define ROGUE_CR_FWCORE_WRAPPER_FENCE_ID_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_FWCORE_WRAPPER_FENCE_ID_EN 0x00000001U
+
+/* Register group: ROGUE_CR_FWCORE_MEM_CAT_BASE, with 8 repeats */
+#define ROGUE_CR_FWCORE_MEM_CAT_BASE_REPEATCOUNT 8U
+/* Register ROGUE_CR_FWCORE_MEM_CAT_BASE0 */
+#define ROGUE_CR_FWCORE_MEM_CAT_BASE0 0x30F0U
+#define ROGUE_CR_FWCORE_MEM_CAT_BASE0_MASKFULL 0x000000FFFFFFF000ULL
+#define ROGUE_CR_FWCORE_MEM_CAT_BASE0_ADDR_SHIFT 12U
+#define ROGUE_CR_FWCORE_MEM_CAT_BASE0_ADDR_CLRMSK 0xFFFFFF0000000FFFULL
+#define ROGUE_CR_FWCORE_MEM_CAT_BASE0_ADDR_ALIGNSHIFT 12U
+#define ROGUE_CR_FWCORE_MEM_CAT_BASE0_ADDR_ALIGNSIZE 4096U
+
+/* Register ROGUE_CR_FWCORE_MEM_CAT_BASE1 */
+#define ROGUE_CR_FWCORE_MEM_CAT_BASE1 0x30F8U
+#define ROGUE_CR_FWCORE_MEM_CAT_BASE1_MASKFULL 0x000000FFFFFFF000ULL
+#define ROGUE_CR_FWCORE_MEM_CAT_BASE1_ADDR_SHIFT 12U
+#define ROGUE_CR_FWCORE_MEM_CAT_BASE1_ADDR_CLRMSK 0xFFFFFF0000000FFFULL
+#define ROGUE_CR_FWCORE_MEM_CAT_BASE1_ADDR_ALIGNSHIFT 12U
+#define ROGUE_CR_FWCORE_MEM_CAT_BASE1_ADDR_ALIGNSIZE 4096U
+
+/* Register ROGUE_CR_FWCORE_MEM_CAT_BASE2 */
+#define ROGUE_CR_FWCORE_MEM_CAT_BASE2 0x3100U
+#define ROGUE_CR_FWCORE_MEM_CAT_BASE2_MASKFULL 0x000000FFFFFFF000ULL
+#define ROGUE_CR_FWCORE_MEM_CAT_BASE2_ADDR_SHIFT 12U
+#define ROGUE_CR_FWCORE_MEM_CAT_BASE2_ADDR_CLRMSK 0xFFFFFF0000000FFFULL
+#define ROGUE_CR_FWCORE_MEM_CAT_BASE2_ADDR_ALIGNSHIFT 12U
+#define ROGUE_CR_FWCORE_MEM_CAT_BASE2_ADDR_ALIGNSIZE 4096U
+
+/* Register ROGUE_CR_FWCORE_MEM_CAT_BASE3 */
+#define ROGUE_CR_FWCORE_MEM_CAT_BASE3 0x3108U
+#define ROGUE_CR_FWCORE_MEM_CAT_BASE3_MASKFULL 0x000000FFFFFFF000ULL
+#define ROGUE_CR_FWCORE_MEM_CAT_BASE3_ADDR_SHIFT 12U
+#define ROGUE_CR_FWCORE_MEM_CAT_BASE3_ADDR_CLRMSK 0xFFFFFF0000000FFFULL
+#define ROGUE_CR_FWCORE_MEM_CAT_BASE3_ADDR_ALIGNSHIFT 12U
+#define ROGUE_CR_FWCORE_MEM_CAT_BASE3_ADDR_ALIGNSIZE 4096U
+
+/* Register ROGUE_CR_FWCORE_MEM_CAT_BASE4 */
+#define ROGUE_CR_FWCORE_MEM_CAT_BASE4 0x3110U
+#define ROGUE_CR_FWCORE_MEM_CAT_BASE4_MASKFULL 0x000000FFFFFFF000ULL
+#define ROGUE_CR_FWCORE_MEM_CAT_BASE4_ADDR_SHIFT 12U
+#define ROGUE_CR_FWCORE_MEM_CAT_BASE4_ADDR_CLRMSK 0xFFFFFF0000000FFFULL
+#define ROGUE_CR_FWCORE_MEM_CAT_BASE4_ADDR_ALIGNSHIFT 12U
+#define ROGUE_CR_FWCORE_MEM_CAT_BASE4_ADDR_ALIGNSIZE 4096U
+
+/* Register ROGUE_CR_FWCORE_MEM_CAT_BASE5 */
+#define ROGUE_CR_FWCORE_MEM_CAT_BASE5 0x3118U
+#define ROGUE_CR_FWCORE_MEM_CAT_BASE5_MASKFULL 0x000000FFFFFFF000ULL
+#define ROGUE_CR_FWCORE_MEM_CAT_BASE5_ADDR_SHIFT 12U
+#define ROGUE_CR_FWCORE_MEM_CAT_BASE5_ADDR_CLRMSK 0xFFFFFF0000000FFFULL
+#define ROGUE_CR_FWCORE_MEM_CAT_BASE5_ADDR_ALIGNSHIFT 12U
+#define ROGUE_CR_FWCORE_MEM_CAT_BASE5_ADDR_ALIGNSIZE 4096U
+
+/* Register ROGUE_CR_FWCORE_MEM_CAT_BASE6 */
+#define ROGUE_CR_FWCORE_MEM_CAT_BASE6 0x3120U
+#define ROGUE_CR_FWCORE_MEM_CAT_BASE6_MASKFULL 0x000000FFFFFFF000ULL
+#define ROGUE_CR_FWCORE_MEM_CAT_BASE6_ADDR_SHIFT 12U
+#define ROGUE_CR_FWCORE_MEM_CAT_BASE6_ADDR_CLRMSK 0xFFFFFF0000000FFFULL
+#define ROGUE_CR_FWCORE_MEM_CAT_BASE6_ADDR_ALIGNSHIFT 12U
+#define ROGUE_CR_FWCORE_MEM_CAT_BASE6_ADDR_ALIGNSIZE 4096U
+
+/* Register ROGUE_CR_FWCORE_MEM_CAT_BASE7 */
+#define ROGUE_CR_FWCORE_MEM_CAT_BASE7 0x3128U
+#define ROGUE_CR_FWCORE_MEM_CAT_BASE7_MASKFULL 0x000000FFFFFFF000ULL
+#define ROGUE_CR_FWCORE_MEM_CAT_BASE7_ADDR_SHIFT 12U
+#define ROGUE_CR_FWCORE_MEM_CAT_BASE7_ADDR_CLRMSK 0xFFFFFF0000000FFFULL
+#define ROGUE_CR_FWCORE_MEM_CAT_BASE7_ADDR_ALIGNSHIFT 12U
+#define ROGUE_CR_FWCORE_MEM_CAT_BASE7_ADDR_ALIGNSIZE 4096U
+
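Register groups such as ROGUE_CR_FWCORE_MEM_CAT_BASE above repeat at a fixed 8-byte stride (0x30F0, 0x30F8, ...), so the nth entry's offset can be derived from the first two definitions. A minimal sketch under that assumption, using a hypothetical helper name:

  #include <stdint.h>

  /* Offset of MEM_CAT_BASE entry n, for n < ROGUE_CR_FWCORE_MEM_CAT_BASE_REPEATCOUNT. */
  static uint32_t fwcore_mem_cat_base_offset(uint32_t n)
  {
          const uint32_t stride = ROGUE_CR_FWCORE_MEM_CAT_BASE1 -
                                  ROGUE_CR_FWCORE_MEM_CAT_BASE0; /* 8 bytes */

          return ROGUE_CR_FWCORE_MEM_CAT_BASE0 + n * stride;
  }
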
+/* Register ROGUE_CR_FWCORE_WDT_RESET */
+#define ROGUE_CR_FWCORE_WDT_RESET 0x3130U
+#define ROGUE_CR_FWCORE_WDT_RESET_MASKFULL 0x0000000000000001ULL
+#define ROGUE_CR_FWCORE_WDT_RESET_EN_SHIFT 0U
+#define ROGUE_CR_FWCORE_WDT_RESET_EN_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_FWCORE_WDT_RESET_EN_EN 0x00000001U
+
+/* Register ROGUE_CR_FWCORE_WDT_CTRL */
+#define ROGUE_CR_FWCORE_WDT_CTRL 0x3138U
+#define ROGUE_CR_FWCORE_WDT_CTRL_MASKFULL 0x00000000FFFF1F01ULL
+#define ROGUE_CR_FWCORE_WDT_CTRL_PROT_SHIFT 16U
+#define ROGUE_CR_FWCORE_WDT_CTRL_PROT_CLRMSK 0x0000FFFFU
+#define ROGUE_CR_FWCORE_WDT_CTRL_THRESHOLD_SHIFT 8U
+#define ROGUE_CR_FWCORE_WDT_CTRL_THRESHOLD_CLRMSK 0xFFFFE0FFU
+#define ROGUE_CR_FWCORE_WDT_CTRL_ENABLE_SHIFT 0U
+#define ROGUE_CR_FWCORE_WDT_CTRL_ENABLE_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_FWCORE_WDT_CTRL_ENABLE_EN 0x00000001U
+
+/* Register ROGUE_CR_FWCORE_WDT_COUNT */
+#define ROGUE_CR_FWCORE_WDT_COUNT 0x3140U
+#define ROGUE_CR_FWCORE_WDT_COUNT_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_FWCORE_WDT_COUNT_VALUE_SHIFT 0U
+#define ROGUE_CR_FWCORE_WDT_COUNT_VALUE_CLRMSK 0x00000000U
+
+/* Register group: ROGUE_CR_FWCORE_DMI_RESERVED0, with 4 repeats */
+#define ROGUE_CR_FWCORE_DMI_RESERVED0_REPEATCOUNT 4U
+/* Register ROGUE_CR_FWCORE_DMI_RESERVED00 */
+#define ROGUE_CR_FWCORE_DMI_RESERVED00 0x3400U
+#define ROGUE_CR_FWCORE_DMI_RESERVED00_MASKFULL 0x0000000000000000ULL
+
+/* Register ROGUE_CR_FWCORE_DMI_RESERVED01 */
+#define ROGUE_CR_FWCORE_DMI_RESERVED01 0x3408U
+#define ROGUE_CR_FWCORE_DMI_RESERVED01_MASKFULL 0x0000000000000000ULL
+
+/* Register ROGUE_CR_FWCORE_DMI_RESERVED02 */
+#define ROGUE_CR_FWCORE_DMI_RESERVED02 0x3410U
+#define ROGUE_CR_FWCORE_DMI_RESERVED02_MASKFULL 0x0000000000000000ULL
+
+/* Register ROGUE_CR_FWCORE_DMI_RESERVED03 */
+#define ROGUE_CR_FWCORE_DMI_RESERVED03 0x3418U
+#define ROGUE_CR_FWCORE_DMI_RESERVED03_MASKFULL 0x0000000000000000ULL
+
+/* Register ROGUE_CR_FWCORE_DMI_DATA0 */
+#define ROGUE_CR_FWCORE_DMI_DATA0 0x3420U
+#define ROGUE_CR_FWCORE_DMI_DATA0_MASKFULL 0x0000000000000000ULL
+
+/* Register ROGUE_CR_FWCORE_DMI_DATA1 */
+#define ROGUE_CR_FWCORE_DMI_DATA1 0x3428U
+#define ROGUE_CR_FWCORE_DMI_DATA1_MASKFULL 0x0000000000000000ULL
+
+/* Register group: ROGUE_CR_FWCORE_DMI_RESERVED1, with 5 repeats */
+#define ROGUE_CR_FWCORE_DMI_RESERVED1_REPEATCOUNT 5U
+/* Register ROGUE_CR_FWCORE_DMI_RESERVED10 */
+#define ROGUE_CR_FWCORE_DMI_RESERVED10 0x3430U
+#define ROGUE_CR_FWCORE_DMI_RESERVED10_MASKFULL 0x0000000000000000ULL
+
+/* Register ROGUE_CR_FWCORE_DMI_RESERVED11 */
+#define ROGUE_CR_FWCORE_DMI_RESERVED11 0x3438U
+#define ROGUE_CR_FWCORE_DMI_RESERVED11_MASKFULL 0x0000000000000000ULL
+
+/* Register ROGUE_CR_FWCORE_DMI_RESERVED12 */
+#define ROGUE_CR_FWCORE_DMI_RESERVED12 0x3440U
+#define ROGUE_CR_FWCORE_DMI_RESERVED12_MASKFULL 0x0000000000000000ULL
+
+/* Register ROGUE_CR_FWCORE_DMI_RESERVED13 */
+#define ROGUE_CR_FWCORE_DMI_RESERVED13 0x3448U
+#define ROGUE_CR_FWCORE_DMI_RESERVED13_MASKFULL 0x0000000000000000ULL
+
+/* Register ROGUE_CR_FWCORE_DMI_RESERVED14 */
+#define ROGUE_CR_FWCORE_DMI_RESERVED14 0x3450U
+#define ROGUE_CR_FWCORE_DMI_RESERVED14_MASKFULL 0x0000000000000000ULL
+
+/* Register ROGUE_CR_FWCORE_DMI_DMCONTROL */
+#define ROGUE_CR_FWCORE_DMI_DMCONTROL 0x3480U
+#define ROGUE_CR_FWCORE_DMI_DMCONTROL_MASKFULL 0x0000000000000000ULL
+
+/* Register ROGUE_CR_FWCORE_DMI_DMSTATUS */
+#define ROGUE_CR_FWCORE_DMI_DMSTATUS 0x3488U
+#define ROGUE_CR_FWCORE_DMI_DMSTATUS_MASKFULL 0x0000000000000000ULL
+
+/* Register group: ROGUE_CR_FWCORE_DMI_RESERVED2, with 4 repeats */
+#define ROGUE_CR_FWCORE_DMI_RESERVED2_REPEATCOUNT 4U
+/* Register ROGUE_CR_FWCORE_DMI_RESERVED20 */
+#define ROGUE_CR_FWCORE_DMI_RESERVED20 0x3490U
+#define ROGUE_CR_FWCORE_DMI_RESERVED20_MASKFULL 0x0000000000000000ULL
+
+/* Register ROGUE_CR_FWCORE_DMI_RESERVED21 */
+#define ROGUE_CR_FWCORE_DMI_RESERVED21 0x3498U
+#define ROGUE_CR_FWCORE_DMI_RESERVED21_MASKFULL 0x0000000000000000ULL
+
+/* Register ROGUE_CR_FWCORE_DMI_RESERVED22 */
+#define ROGUE_CR_FWCORE_DMI_RESERVED22 0x34A0U
+#define ROGUE_CR_FWCORE_DMI_RESERVED22_MASKFULL 0x0000000000000000ULL
+
+/* Register ROGUE_CR_FWCORE_DMI_RESERVED23 */
+#define ROGUE_CR_FWCORE_DMI_RESERVED23 0x34A8U
+#define ROGUE_CR_FWCORE_DMI_RESERVED23_MASKFULL 0x0000000000000000ULL
+
+/* Register ROGUE_CR_FWCORE_DMI_ABSTRACTCS */
+#define ROGUE_CR_FWCORE_DMI_ABSTRACTCS 0x34B0U
+#define ROGUE_CR_FWCORE_DMI_ABSTRACTCS_MASKFULL 0x0000000000000000ULL
+
+/* Register ROGUE_CR_FWCORE_DMI_COMMAND */
+#define ROGUE_CR_FWCORE_DMI_COMMAND 0x34B8U
+#define ROGUE_CR_FWCORE_DMI_COMMAND_MASKFULL 0x0000000000000000ULL
+
+/* Register ROGUE_CR_FWCORE_DMI_SBCS */
+#define ROGUE_CR_FWCORE_DMI_SBCS 0x35C0U
+#define ROGUE_CR_FWCORE_DMI_SBCS_MASKFULL 0x0000000000000000ULL
+
+/* Register ROGUE_CR_FWCORE_DMI_SBADDRESS0 */
+#define ROGUE_CR_FWCORE_DMI_SBADDRESS0 0x35C8U
+#define ROGUE_CR_FWCORE_DMI_SBADDRESS0_MASKFULL 0x0000000000000000ULL
+
+/* Register group: ROGUE_CR_FWCORE_DMI_RESERVED3, with 2 repeats */
+#define ROGUE_CR_FWCORE_DMI_RESERVED3_REPEATCOUNT 2U
+/* Register ROGUE_CR_FWCORE_DMI_RESERVED30 */
+#define ROGUE_CR_FWCORE_DMI_RESERVED30 0x34D0U
+#define ROGUE_CR_FWCORE_DMI_RESERVED30_MASKFULL 0x0000000000000000ULL
+
+/* Register ROGUE_CR_FWCORE_DMI_RESERVED31 */
+#define ROGUE_CR_FWCORE_DMI_RESERVED31 0x34D8U
+#define ROGUE_CR_FWCORE_DMI_RESERVED31_MASKFULL 0x0000000000000000ULL
+
+/* Register group: ROGUE_CR_FWCORE_DMI_SBDATA, with 4 repeats */
+#define ROGUE_CR_FWCORE_DMI_SBDATA_REPEATCOUNT 4U
+/* Register ROGUE_CR_FWCORE_DMI_SBDATA0 */
+#define ROGUE_CR_FWCORE_DMI_SBDATA0 0x35E0U
+#define ROGUE_CR_FWCORE_DMI_SBDATA0_MASKFULL 0x0000000000000000ULL
+
+/* Register ROGUE_CR_FWCORE_DMI_SBDATA1 */
+#define ROGUE_CR_FWCORE_DMI_SBDATA1 0x35E8U
+#define ROGUE_CR_FWCORE_DMI_SBDATA1_MASKFULL 0x0000000000000000ULL
+
+/* Register ROGUE_CR_FWCORE_DMI_SBDATA2 */
+#define ROGUE_CR_FWCORE_DMI_SBDATA2 0x35F0U
+#define ROGUE_CR_FWCORE_DMI_SBDATA2_MASKFULL 0x0000000000000000ULL
+
+/* Register ROGUE_CR_FWCORE_DMI_SBDATA3 */
+#define ROGUE_CR_FWCORE_DMI_SBDATA3 0x35F8U
+#define ROGUE_CR_FWCORE_DMI_SBDATA3_MASKFULL 0x0000000000000000ULL
+
+/* Register ROGUE_CR_FWCORE_DMI_HALTSUM0 */
+#define ROGUE_CR_FWCORE_DMI_HALTSUM0 0x3600U
+#define ROGUE_CR_FWCORE_DMI_HALTSUM0_MASKFULL 0x0000000000000000ULL
+
+/* Register ROGUE_CR_SLC_CTRL_MISC */
+#define ROGUE_CR_SLC_CTRL_MISC 0x3800U
+#define ROGUE_CR_SLC_CTRL_MISC_MASKFULL 0xFFFFFFFF01FF010FULL
+#define ROGUE_CR_SLC_CTRL_MISC_SCRAMBLE_BITS_SHIFT 32U
+#define ROGUE_CR_SLC_CTRL_MISC_SCRAMBLE_BITS_CLRMSK 0x00000000FFFFFFFFULL
+#define ROGUE_CR_SLC_CTRL_MISC_LAZYWB_OVERRIDE_SHIFT 24U
+#define ROGUE_CR_SLC_CTRL_MISC_LAZYWB_OVERRIDE_CLRMSK 0xFFFFFFFFFEFFFFFFULL
+#define ROGUE_CR_SLC_CTRL_MISC_LAZYWB_OVERRIDE_EN 0x0000000001000000ULL
+#define ROGUE_CR_SLC_CTRL_MISC_ADDR_DECODE_MODE_SHIFT 16U
+#define ROGUE_CR_SLC_CTRL_MISC_ADDR_DECODE_MODE_CLRMSK 0xFFFFFFFFFF00FFFFULL
+#define ROGUE_CR_SLC_CTRL_MISC_ADDR_DECODE_MODE_INTERLEAVED_64_BYTE 0x0000000000000000ULL
+#define ROGUE_CR_SLC_CTRL_MISC_ADDR_DECODE_MODE_INTERLEAVED_128_BYTE 0x0000000000010000ULL
+#define ROGUE_CR_SLC_CTRL_MISC_ADDR_DECODE_MODE_SIMPLE_HASH1 0x0000000000100000ULL
+#define ROGUE_CR_SLC_CTRL_MISC_ADDR_DECODE_MODE_SIMPLE_HASH2 0x0000000000110000ULL
+#define ROGUE_CR_SLC_CTRL_MISC_ADDR_DECODE_MODE_PVR_HASH1 0x0000000000200000ULL
+#define ROGUE_CR_SLC_CTRL_MISC_ADDR_DECODE_MODE_PVR_HASH2_SCRAMBLE 0x0000000000210000ULL
+#define ROGUE_CR_SLC_CTRL_MISC_PAUSE_SHIFT 8U
+#define ROGUE_CR_SLC_CTRL_MISC_PAUSE_CLRMSK 0xFFFFFFFFFFFFFEFFULL
+#define ROGUE_CR_SLC_CTRL_MISC_PAUSE_EN 0x0000000000000100ULL
+#define ROGUE_CR_SLC_CTRL_MISC_RESP_PRIORITY_SHIFT 3U
+#define ROGUE_CR_SLC_CTRL_MISC_RESP_PRIORITY_CLRMSK 0xFFFFFFFFFFFFFFF7ULL
+#define ROGUE_CR_SLC_CTRL_MISC_RESP_PRIORITY_EN 0x0000000000000008ULL
+#define ROGUE_CR_SLC_CTRL_MISC_ENABLE_LINE_USE_LIMIT_SHIFT 2U
+#define ROGUE_CR_SLC_CTRL_MISC_ENABLE_LINE_USE_LIMIT_CLRMSK 0xFFFFFFFFFFFFFFFBULL
+#define ROGUE_CR_SLC_CTRL_MISC_ENABLE_LINE_USE_LIMIT_EN 0x0000000000000004ULL
+#define ROGUE_CR_SLC_CTRL_MISC_ENABLE_PSG_HAZARD_CHECK_SHIFT 1U
+#define ROGUE_CR_SLC_CTRL_MISC_ENABLE_PSG_HAZARD_CHECK_CLRMSK 0xFFFFFFFFFFFFFFFDULL
+#define ROGUE_CR_SLC_CTRL_MISC_ENABLE_PSG_HAZARD_CHECK_EN 0x0000000000000002ULL
+#define ROGUE_CR_SLC_CTRL_MISC_BYPASS_BURST_COMBINER_SHIFT 0U
+#define ROGUE_CR_SLC_CTRL_MISC_BYPASS_BURST_COMBINER_CLRMSK 0xFFFFFFFFFFFFFFFEULL
+#define ROGUE_CR_SLC_CTRL_MISC_BYPASS_BURST_COMBINER_EN 0x0000000000000001ULL
+
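Multi-bit fields such as ADDR_DECODE_MODE follow the same pattern as single-bit ones: clear the field with its _CLRMSK, then OR in one of the named values. A minimal sketch, again assuming hypothetical 64-bit rd()/wr() accessors:

  #include <stdint.h>

  /* Select the PVR_HASH1 address decode mode; rd()/wr() are placeholders. */
  static void slc_select_pvr_hash1(uint64_t (*rd)(uint32_t offset),
                                   void (*wr)(uint32_t offset, uint64_t value))
  {
          uint64_t val = rd(ROGUE_CR_SLC_CTRL_MISC);

          val &= ROGUE_CR_SLC_CTRL_MISC_ADDR_DECODE_MODE_CLRMSK;
          val |= ROGUE_CR_SLC_CTRL_MISC_ADDR_DECODE_MODE_PVR_HASH1;
          wr(ROGUE_CR_SLC_CTRL_MISC, val);
  }
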
+/* Register ROGUE_CR_SLC_CTRL_FLUSH_INVAL */
+#define ROGUE_CR_SLC_CTRL_FLUSH_INVAL 0x3818U
+#define ROGUE_CR_SLC_CTRL_FLUSH_INVAL_MASKFULL 0x0000000080000FFFULL
+#define ROGUE_CR_SLC_CTRL_FLUSH_INVAL_LAZY_SHIFT 31U
+#define ROGUE_CR_SLC_CTRL_FLUSH_INVAL_LAZY_CLRMSK 0x7FFFFFFFU
+#define ROGUE_CR_SLC_CTRL_FLUSH_INVAL_LAZY_EN 0x80000000U
+#define ROGUE_CR_SLC_CTRL_FLUSH_INVAL_DM_FASTRENDER_SHIFT 11U
+#define ROGUE_CR_SLC_CTRL_FLUSH_INVAL_DM_FASTRENDER_CLRMSK 0xFFFFF7FFU
+#define ROGUE_CR_SLC_CTRL_FLUSH_INVAL_DM_FASTRENDER_EN 0x00000800U
+#define ROGUE_CR_SLC_CTRL_FLUSH_INVAL_DM_RAY_VERTEX_SHIFT 10U
+#define ROGUE_CR_SLC_CTRL_FLUSH_INVAL_DM_RAY_VERTEX_CLRMSK 0xFFFFFBFFU
+#define ROGUE_CR_SLC_CTRL_FLUSH_INVAL_DM_RAY_VERTEX_EN 0x00000400U
+#define ROGUE_CR_SLC_CTRL_FLUSH_INVAL_DM_RAY_SHIFT 9U
+#define ROGUE_CR_SLC_CTRL_FLUSH_INVAL_DM_RAY_CLRMSK 0xFFFFFDFFU
+#define ROGUE_CR_SLC_CTRL_FLUSH_INVAL_DM_RAY_EN 0x00000200U
+#define ROGUE_CR_SLC_CTRL_FLUSH_INVAL_DM_FRC_SHIFT 8U
+#define ROGUE_CR_SLC_CTRL_FLUSH_INVAL_DM_FRC_CLRMSK 0xFFFFFEFFU
+#define ROGUE_CR_SLC_CTRL_FLUSH_INVAL_DM_FRC_EN 0x00000100U
+#define ROGUE_CR_SLC_CTRL_FLUSH_INVAL_DM_VXE_SHIFT 7U
+#define ROGUE_CR_SLC_CTRL_FLUSH_INVAL_DM_VXE_CLRMSK 0xFFFFFF7FU
+#define ROGUE_CR_SLC_CTRL_FLUSH_INVAL_DM_VXE_EN 0x00000080U
+#define ROGUE_CR_SLC_CTRL_FLUSH_INVAL_DM_VXD_SHIFT 6U
+#define ROGUE_CR_SLC_CTRL_FLUSH_INVAL_DM_VXD_CLRMSK 0xFFFFFFBFU
+#define ROGUE_CR_SLC_CTRL_FLUSH_INVAL_DM_VXD_EN 0x00000040U
+#define ROGUE_CR_SLC_CTRL_FLUSH_INVAL_DM_HOST_META_SHIFT 5U
+#define ROGUE_CR_SLC_CTRL_FLUSH_INVAL_DM_HOST_META_CLRMSK 0xFFFFFFDFU
+#define ROGUE_CR_SLC_CTRL_FLUSH_INVAL_DM_HOST_META_EN 0x00000020U
+#define ROGUE_CR_SLC_CTRL_FLUSH_INVAL_DM_MMU_SHIFT 4U
+#define ROGUE_CR_SLC_CTRL_FLUSH_INVAL_DM_MMU_CLRMSK 0xFFFFFFEFU
+#define ROGUE_CR_SLC_CTRL_FLUSH_INVAL_DM_MMU_EN 0x00000010U
+#define ROGUE_CR_SLC_CTRL_FLUSH_INVAL_DM_COMPUTE_SHIFT 3U
+#define ROGUE_CR_SLC_CTRL_FLUSH_INVAL_DM_COMPUTE_CLRMSK 0xFFFFFFF7U
+#define ROGUE_CR_SLC_CTRL_FLUSH_INVAL_DM_COMPUTE_EN 0x00000008U
+#define ROGUE_CR_SLC_CTRL_FLUSH_INVAL_DM_PIXEL_SHIFT 2U
+#define ROGUE_CR_SLC_CTRL_FLUSH_INVAL_DM_PIXEL_CLRMSK 0xFFFFFFFBU
+#define ROGUE_CR_SLC_CTRL_FLUSH_INVAL_DM_PIXEL_EN 0x00000004U
+#define ROGUE_CR_SLC_CTRL_FLUSH_INVAL_DM_TA_SHIFT 1U
+#define ROGUE_CR_SLC_CTRL_FLUSH_INVAL_DM_TA_CLRMSK 0xFFFFFFFDU
+#define ROGUE_CR_SLC_CTRL_FLUSH_INVAL_DM_TA_EN 0x00000002U
+#define ROGUE_CR_SLC_CTRL_FLUSH_INVAL_ALL_SHIFT 0U
+#define ROGUE_CR_SLC_CTRL_FLUSH_INVAL_ALL_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_SLC_CTRL_FLUSH_INVAL_ALL_EN 0x00000001U
+
+/* Register ROGUE_CR_SLC_STATUS0 */
+#define ROGUE_CR_SLC_STATUS0 0x3820U
+#define ROGUE_CR_SLC_STATUS0_MASKFULL 0x0000000000000007ULL
+#define ROGUE_CR_SLC_STATUS0_FLUSH_INVAL_PENDING_SHIFT 2U
+#define ROGUE_CR_SLC_STATUS0_FLUSH_INVAL_PENDING_CLRMSK 0xFFFFFFFBU
+#define ROGUE_CR_SLC_STATUS0_FLUSH_INVAL_PENDING_EN 0x00000004U
+#define ROGUE_CR_SLC_STATUS0_INVAL_PENDING_SHIFT 1U
+#define ROGUE_CR_SLC_STATUS0_INVAL_PENDING_CLRMSK 0xFFFFFFFDU
+#define ROGUE_CR_SLC_STATUS0_INVAL_PENDING_EN 0x00000002U
+#define ROGUE_CR_SLC_STATUS0_FLUSH_PENDING_SHIFT 0U
+#define ROGUE_CR_SLC_STATUS0_FLUSH_PENDING_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_SLC_STATUS0_FLUSH_PENDING_EN 0x00000001U
+
+/* Register ROGUE_CR_SLC_CTRL_BYPASS */
+#define ROGUE_CR_SLC_CTRL_BYPASS 0x3828U
+#define ROGUE_CR_SLC_CTRL_BYPASS__XE_MEM__MASKFULL 0x0FFFFFFFFFFF7FFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_MASKFULL 0x000000000FFFFFFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TFBC_COMP_ZLS_SHIFT 59U
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TFBC_COMP_ZLS_CLRMSK 0xF7FFFFFFFFFFFFFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TFBC_COMP_ZLS_EN 0x0800000000000000ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TFBC_DECOMP_ZLS_HEADER_SHIFT 58U
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TFBC_DECOMP_ZLS_HEADER_CLRMSK 0xFBFFFFFFFFFFFFFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TFBC_DECOMP_ZLS_HEADER_EN 0x0400000000000000ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TFBC_DECOMP_TCU_HEADER_SHIFT 57U
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TFBC_DECOMP_TCU_HEADER_CLRMSK 0xFDFFFFFFFFFFFFFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TFBC_DECOMP_TCU_HEADER_EN 0x0200000000000000ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TFBC_DECOMP_ZLS_DATA_SHIFT 56U
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TFBC_DECOMP_ZLS_DATA_CLRMSK 0xFEFFFFFFFFFFFFFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TFBC_DECOMP_ZLS_DATA_EN 0x0100000000000000ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TFBC_DECOMP_TCU_DATA_SHIFT 55U
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TFBC_DECOMP_TCU_DATA_CLRMSK 0xFF7FFFFFFFFFFFFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TFBC_DECOMP_TCU_DATA_EN 0x0080000000000000ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TFBC_COMP_PBE_SHIFT 54U
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TFBC_COMP_PBE_CLRMSK 0xFFBFFFFFFFFFFFFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TFBC_COMP_PBE_EN 0x0040000000000000ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TCU_DM_COMPUTE_SHIFT 53U
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TCU_DM_COMPUTE_CLRMSK 0xFFDFFFFFFFFFFFFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TCU_DM_COMPUTE_EN 0x0020000000000000ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_PDSRW_NOLINEFILL_SHIFT 52U
+#define ROGUE_CR_SLC_CTRL_BYPASS_PDSRW_NOLINEFILL_CLRMSK 0xFFEFFFFFFFFFFFFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_PDSRW_NOLINEFILL_EN 0x0010000000000000ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_PBE_NOLINEFILL_SHIFT 51U
+#define ROGUE_CR_SLC_CTRL_BYPASS_PBE_NOLINEFILL_CLRMSK 0xFFF7FFFFFFFFFFFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_PBE_NOLINEFILL_EN 0x0008000000000000ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_FBC_SHIFT 50U
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_FBC_CLRMSK 0xFFFBFFFFFFFFFFFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_FBC_EN 0x0004000000000000ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_IPF_RREQ_SHIFT 49U
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_IPF_RREQ_CLRMSK 0xFFFDFFFFFFFFFFFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_IPF_RREQ_EN 0x0002000000000000ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_IPF_CREQ_SHIFT 48U
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_IPF_CREQ_CLRMSK 0xFFFEFFFFFFFFFFFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_IPF_CREQ_EN 0x0001000000000000ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_IPF_PREQ_SHIFT 47U
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_IPF_PREQ_CLRMSK 0xFFFF7FFFFFFFFFFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_IPF_PREQ_EN 0x0000800000000000ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_IPF_DBSC_SHIFT 46U
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_IPF_DBSC_CLRMSK 0xFFFFBFFFFFFFFFFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_IPF_DBSC_EN 0x0000400000000000ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TCU_SHIFT 45U
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TCU_CLRMSK 0xFFFFDFFFFFFFFFFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TCU_EN 0x0000200000000000ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_PBE_SHIFT 44U
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_PBE_CLRMSK 0xFFFFEFFFFFFFFFFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_PBE_EN 0x0000100000000000ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_ISP_SHIFT 43U
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_ISP_CLRMSK 0xFFFFF7FFFFFFFFFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_ISP_EN 0x0000080000000000ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_PM_SHIFT 42U
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_PM_CLRMSK 0xFFFFFBFFFFFFFFFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_PM_EN 0x0000040000000000ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TDM_SHIFT 41U
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TDM_CLRMSK 0xFFFFFDFFFFFFFFFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TDM_EN 0x0000020000000000ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_CDM_SHIFT 40U
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_CDM_CLRMSK 0xFFFFFEFFFFFFFFFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_CDM_EN 0x0000010000000000ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TSPF_PDS_STATE_SHIFT 39U
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TSPF_PDS_STATE_CLRMSK 0xFFFFFF7FFFFFFFFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TSPF_PDS_STATE_EN 0x0000008000000000ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TSPF_DB_SHIFT 38U
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TSPF_DB_CLRMSK 0xFFFFFFBFFFFFFFFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TSPF_DB_EN 0x0000004000000000ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TSPF_VTX_VAR_SHIFT 37U
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TSPF_VTX_VAR_CLRMSK 0xFFFFFFDFFFFFFFFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TSPF_VTX_VAR_EN 0x0000002000000000ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_VDM_SHIFT 36U
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_VDM_CLRMSK 0xFFFFFFEFFFFFFFFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_VDM_EN 0x0000001000000000ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TA_PSG_STREAM_SHIFT 35U
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TA_PSG_STREAM_CLRMSK 0xFFFFFFF7FFFFFFFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TA_PSG_STREAM_EN 0x0000000800000000ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TA_PSG_REGION_SHIFT 34U
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TA_PSG_REGION_CLRMSK 0xFFFFFFFBFFFFFFFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TA_PSG_REGION_EN 0x0000000400000000ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TA_VCE_SHIFT 33U
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TA_VCE_CLRMSK 0xFFFFFFFDFFFFFFFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TA_VCE_EN 0x0000000200000000ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TA_PPP_SHIFT 32U
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TA_PPP_CLRMSK 0xFFFFFFFEFFFFFFFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TA_PPP_EN 0x0000000100000000ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_DM_FASTRENDER_SHIFT 31U
+#define ROGUE_CR_SLC_CTRL_BYPASS_DM_FASTRENDER_CLRMSK 0xFFFFFFFF7FFFFFFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_DM_FASTRENDER_EN 0x0000000080000000ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_DM_PM_ALIST_SHIFT 30U
+#define ROGUE_CR_SLC_CTRL_BYPASS_DM_PM_ALIST_CLRMSK 0xFFFFFFFFBFFFFFFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_DM_PM_ALIST_EN 0x0000000040000000ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_DM_PB_TE_SHIFT 29U
+#define ROGUE_CR_SLC_CTRL_BYPASS_DM_PB_TE_CLRMSK 0xFFFFFFFFDFFFFFFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_DM_PB_TE_EN 0x0000000020000000ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_DM_PB_VCE_SHIFT 28U
+#define ROGUE_CR_SLC_CTRL_BYPASS_DM_PB_VCE_CLRMSK 0xFFFFFFFFEFFFFFFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_DM_PB_VCE_EN 0x0000000010000000ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_DM_RAY_VERTEX_SHIFT 27U
+#define ROGUE_CR_SLC_CTRL_BYPASS_DM_RAY_VERTEX_CLRMSK 0xFFFFFFFFF7FFFFFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_DM_RAY_VERTEX_EN 0x0000000008000000ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_DM_RAY_SHIFT 26U
+#define ROGUE_CR_SLC_CTRL_BYPASS_DM_RAY_CLRMSK 0xFFFFFFFFFBFFFFFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_DM_RAY_EN 0x0000000004000000ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_IPF_CPF_SHIFT 25U
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_IPF_CPF_CLRMSK 0xFFFFFFFFFDFFFFFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_IPF_CPF_EN 0x0000000002000000ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TPU_SHIFT 24U
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TPU_CLRMSK 0xFFFFFFFFFEFFFFFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TPU_EN 0x0000000001000000ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_FBDC_SHIFT 23U
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_FBDC_CLRMSK 0xFFFFFFFFFF7FFFFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_FBDC_EN 0x0000000000800000ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TLA_SHIFT 22U
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TLA_CLRMSK 0xFFFFFFFFFFBFFFFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TLA_EN 0x0000000000400000ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_BYP_CC_N_SHIFT 21U
+#define ROGUE_CR_SLC_CTRL_BYPASS_BYP_CC_N_CLRMSK 0xFFFFFFFFFFDFFFFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_BYP_CC_N_EN 0x0000000000200000ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_BYP_CC_SHIFT 20U
+#define ROGUE_CR_SLC_CTRL_BYPASS_BYP_CC_CLRMSK 0xFFFFFFFFFFEFFFFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_BYP_CC_EN 0x0000000000100000ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_MCU_SHIFT 19U
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_MCU_CLRMSK 0xFFFFFFFFFFF7FFFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_MCU_EN 0x0000000000080000ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_PDS_SHIFT 18U
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_PDS_CLRMSK 0xFFFFFFFFFFFBFFFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_PDS_EN 0x0000000000040000ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TPF_SHIFT 17U
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TPF_CLRMSK 0xFFFFFFFFFFFDFFFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TPF_EN 0x0000000000020000ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TA_TPC_SHIFT 16U
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TA_TPC_CLRMSK 0xFFFFFFFFFFFEFFFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TA_TPC_EN 0x0000000000010000ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_IPF_OBJ_SHIFT 15U
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_IPF_OBJ_CLRMSK 0xFFFFFFFFFFFF7FFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_IPF_OBJ_EN 0x0000000000008000ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_USC_SHIFT 14U
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_USC_CLRMSK 0xFFFFFFFFFFFFBFFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_USC_EN 0x0000000000004000ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_META_SHIFT 13U
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_META_CLRMSK 0xFFFFFFFFFFFFDFFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_META_EN 0x0000000000002000ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_HOST_SHIFT 12U
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_HOST_CLRMSK 0xFFFFFFFFFFFFEFFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_HOST_EN 0x0000000000001000ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_MMU_PT_SHIFT 11U
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_MMU_PT_CLRMSK 0xFFFFFFFFFFFFF7FFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_MMU_PT_EN 0x0000000000000800ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_MMU_PD_SHIFT 10U
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_MMU_PD_CLRMSK 0xFFFFFFFFFFFFFBFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_MMU_PD_EN 0x0000000000000400ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_MMU_PC_SHIFT 9U
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_MMU_PC_CLRMSK 0xFFFFFFFFFFFFFDFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_MMU_PC_EN 0x0000000000000200ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_DM_FRC_SHIFT 8U
+#define ROGUE_CR_SLC_CTRL_BYPASS_DM_FRC_CLRMSK 0xFFFFFFFFFFFFFEFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_DM_FRC_EN 0x0000000000000100ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_DM_VXE_SHIFT 7U
+#define ROGUE_CR_SLC_CTRL_BYPASS_DM_VXE_CLRMSK 0xFFFFFFFFFFFFFF7FULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_DM_VXE_EN 0x0000000000000080ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_DM_VXD_SHIFT 6U
+#define ROGUE_CR_SLC_CTRL_BYPASS_DM_VXD_CLRMSK 0xFFFFFFFFFFFFFFBFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_DM_VXD_EN 0x0000000000000040ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_DM_HOST_META_SHIFT 5U
+#define ROGUE_CR_SLC_CTRL_BYPASS_DM_HOST_META_CLRMSK 0xFFFFFFFFFFFFFFDFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_DM_HOST_META_EN 0x0000000000000020ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_DM_MMU_SHIFT 4U
+#define ROGUE_CR_SLC_CTRL_BYPASS_DM_MMU_CLRMSK 0xFFFFFFFFFFFFFFEFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_DM_MMU_EN 0x0000000000000010ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_DM_COMPUTE_SHIFT 3U
+#define ROGUE_CR_SLC_CTRL_BYPASS_DM_COMPUTE_CLRMSK 0xFFFFFFFFFFFFFFF7ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_DM_COMPUTE_EN 0x0000000000000008ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_DM_PIXEL_SHIFT 2U
+#define ROGUE_CR_SLC_CTRL_BYPASS_DM_PIXEL_CLRMSK 0xFFFFFFFFFFFFFFFBULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_DM_PIXEL_EN 0x0000000000000004ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_DM_TA_SHIFT 1U
+#define ROGUE_CR_SLC_CTRL_BYPASS_DM_TA_CLRMSK 0xFFFFFFFFFFFFFFFDULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_DM_TA_EN 0x0000000000000002ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_ALL_SHIFT 0U
+#define ROGUE_CR_SLC_CTRL_BYPASS_ALL_CLRMSK 0xFFFFFFFFFFFFFFFEULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_ALL_EN 0x0000000000000001ULL
+
+/* Register ROGUE_CR_SLC_STATUS1 */
+#define ROGUE_CR_SLC_STATUS1 0x3870U
+#define ROGUE_CR_SLC_STATUS1_MASKFULL 0x800003FF03FFFFFFULL
+#define ROGUE_CR_SLC_STATUS1_PAUSED_SHIFT 63U
+#define ROGUE_CR_SLC_STATUS1_PAUSED_CLRMSK 0x7FFFFFFFFFFFFFFFULL
+#define ROGUE_CR_SLC_STATUS1_PAUSED_EN 0x8000000000000000ULL
+#define ROGUE_CR_SLC_STATUS1_READS1_SHIFT 32U
+#define ROGUE_CR_SLC_STATUS1_READS1_CLRMSK 0xFFFFFC00FFFFFFFFULL
+#define ROGUE_CR_SLC_STATUS1_READS0_SHIFT 16U
+#define ROGUE_CR_SLC_STATUS1_READS0_CLRMSK 0xFFFFFFFFFC00FFFFULL
+#define ROGUE_CR_SLC_STATUS1_READS1_EXT_SHIFT 8U
+#define ROGUE_CR_SLC_STATUS1_READS1_EXT_CLRMSK 0xFFFFFFFFFFFF00FFULL
+#define ROGUE_CR_SLC_STATUS1_READS0_EXT_SHIFT 0U
+#define ROGUE_CR_SLC_STATUS1_READS0_EXT_CLRMSK 0xFFFFFFFFFFFFFF00ULL
+
+/* Register ROGUE_CR_SLC_IDLE */
+#define ROGUE_CR_SLC_IDLE 0x3898U
+#define ROGUE_CR_SLC_IDLE__XE_MEM__MASKFULL 0x00000000000003FFULL
+#define ROGUE_CR_SLC_IDLE_MASKFULL 0x00000000000000FFULL
+#define ROGUE_CR_SLC_IDLE_MH_SYSARB1_SHIFT 9U
+#define ROGUE_CR_SLC_IDLE_MH_SYSARB1_CLRMSK 0xFFFFFDFFU
+#define ROGUE_CR_SLC_IDLE_MH_SYSARB1_EN 0x00000200U
+#define ROGUE_CR_SLC_IDLE_MH_SYSARB0_SHIFT 8U
+#define ROGUE_CR_SLC_IDLE_MH_SYSARB0_CLRMSK 0xFFFFFEFFU
+#define ROGUE_CR_SLC_IDLE_MH_SYSARB0_EN 0x00000100U
+#define ROGUE_CR_SLC_IDLE_IMGBV4_SHIFT 7U
+#define ROGUE_CR_SLC_IDLE_IMGBV4_CLRMSK 0xFFFFFF7FU
+#define ROGUE_CR_SLC_IDLE_IMGBV4_EN 0x00000080U
+#define ROGUE_CR_SLC_IDLE_CACHE_BANKS_SHIFT 6U
+#define ROGUE_CR_SLC_IDLE_CACHE_BANKS_CLRMSK 0xFFFFFFBFU
+#define ROGUE_CR_SLC_IDLE_CACHE_BANKS_EN 0x00000040U
+#define ROGUE_CR_SLC_IDLE_RBOFIFO_SHIFT 5U
+#define ROGUE_CR_SLC_IDLE_RBOFIFO_CLRMSK 0xFFFFFFDFU
+#define ROGUE_CR_SLC_IDLE_RBOFIFO_EN 0x00000020U
+#define ROGUE_CR_SLC_IDLE_FRC_CONV_SHIFT 4U
+#define ROGUE_CR_SLC_IDLE_FRC_CONV_CLRMSK 0xFFFFFFEFU
+#define ROGUE_CR_SLC_IDLE_FRC_CONV_EN 0x00000010U
+#define ROGUE_CR_SLC_IDLE_VXE_CONV_SHIFT 3U
+#define ROGUE_CR_SLC_IDLE_VXE_CONV_CLRMSK 0xFFFFFFF7U
+#define ROGUE_CR_SLC_IDLE_VXE_CONV_EN 0x00000008U
+#define ROGUE_CR_SLC_IDLE_VXD_CONV_SHIFT 2U
+#define ROGUE_CR_SLC_IDLE_VXD_CONV_CLRMSK 0xFFFFFFFBU
+#define ROGUE_CR_SLC_IDLE_VXD_CONV_EN 0x00000004U
+#define ROGUE_CR_SLC_IDLE_BIF1_CONV_SHIFT 1U
+#define ROGUE_CR_SLC_IDLE_BIF1_CONV_CLRMSK 0xFFFFFFFDU
+#define ROGUE_CR_SLC_IDLE_BIF1_CONV_EN 0x00000002U
+#define ROGUE_CR_SLC_IDLE_CBAR_SHIFT 0U
+#define ROGUE_CR_SLC_IDLE_CBAR_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_SLC_IDLE_CBAR_EN 0x00000001U
+
+/* Register ROGUE_CR_SLC_STATUS2 */
+#define ROGUE_CR_SLC_STATUS2 0x3908U
+#define ROGUE_CR_SLC_STATUS2_MASKFULL 0x000003FF03FFFFFFULL
+#define ROGUE_CR_SLC_STATUS2_READS3_SHIFT 32U
+#define ROGUE_CR_SLC_STATUS2_READS3_CLRMSK 0xFFFFFC00FFFFFFFFULL
+#define ROGUE_CR_SLC_STATUS2_READS2_SHIFT 16U
+#define ROGUE_CR_SLC_STATUS2_READS2_CLRMSK 0xFFFFFFFFFC00FFFFULL
+#define ROGUE_CR_SLC_STATUS2_READS3_EXT_SHIFT 8U
+#define ROGUE_CR_SLC_STATUS2_READS3_EXT_CLRMSK 0xFFFFFFFFFFFF00FFULL
+#define ROGUE_CR_SLC_STATUS2_READS2_EXT_SHIFT 0U
+#define ROGUE_CR_SLC_STATUS2_READS2_EXT_CLRMSK 0xFFFFFFFFFFFFFF00ULL
+
+/* Register ROGUE_CR_SLC_CTRL_MISC2 */
+#define ROGUE_CR_SLC_CTRL_MISC2 0x3930U
+#define ROGUE_CR_SLC_CTRL_MISC2_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_SLC_CTRL_MISC2_SCRAMBLE_BITS_SHIFT 0U
+#define ROGUE_CR_SLC_CTRL_MISC2_SCRAMBLE_BITS_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_SLC_CROSSBAR_LOAD_BALANCE */
+#define ROGUE_CR_SLC_CROSSBAR_LOAD_BALANCE 0x3938U
+#define ROGUE_CR_SLC_CROSSBAR_LOAD_BALANCE_MASKFULL 0x0000000000000001ULL
+#define ROGUE_CR_SLC_CROSSBAR_LOAD_BALANCE_BYPASS_SHIFT 0U
+#define ROGUE_CR_SLC_CROSSBAR_LOAD_BALANCE_BYPASS_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_SLC_CROSSBAR_LOAD_BALANCE_BYPASS_EN 0x00000001U
+
+/* Register ROGUE_CR_USC_UVS0_CHECKSUM */
+#define ROGUE_CR_USC_UVS0_CHECKSUM 0x5000U
+#define ROGUE_CR_USC_UVS0_CHECKSUM_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_USC_UVS0_CHECKSUM_VALUE_SHIFT 0U
+#define ROGUE_CR_USC_UVS0_CHECKSUM_VALUE_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_USC_UVS1_CHECKSUM */
+#define ROGUE_CR_USC_UVS1_CHECKSUM 0x5008U
+#define ROGUE_CR_USC_UVS1_CHECKSUM_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_USC_UVS1_CHECKSUM_VALUE_SHIFT 0U
+#define ROGUE_CR_USC_UVS1_CHECKSUM_VALUE_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_USC_UVS2_CHECKSUM */
+#define ROGUE_CR_USC_UVS2_CHECKSUM 0x5010U
+#define ROGUE_CR_USC_UVS2_CHECKSUM_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_USC_UVS2_CHECKSUM_VALUE_SHIFT 0U
+#define ROGUE_CR_USC_UVS2_CHECKSUM_VALUE_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_USC_UVS3_CHECKSUM */
+#define ROGUE_CR_USC_UVS3_CHECKSUM 0x5018U
+#define ROGUE_CR_USC_UVS3_CHECKSUM_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_USC_UVS3_CHECKSUM_VALUE_SHIFT 0U
+#define ROGUE_CR_USC_UVS3_CHECKSUM_VALUE_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_PPP_SIGNATURE */
+#define ROGUE_CR_PPP_SIGNATURE 0x5020U
+#define ROGUE_CR_PPP_SIGNATURE_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_PPP_SIGNATURE_VALUE_SHIFT 0U
+#define ROGUE_CR_PPP_SIGNATURE_VALUE_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_TE_SIGNATURE */
+#define ROGUE_CR_TE_SIGNATURE 0x5028U
+#define ROGUE_CR_TE_SIGNATURE_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_TE_SIGNATURE_VALUE_SHIFT 0U
+#define ROGUE_CR_TE_SIGNATURE_VALUE_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_TE_CHECKSUM */
+#define ROGUE_CR_TE_CHECKSUM 0x5110U
+#define ROGUE_CR_TE_CHECKSUM_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_TE_CHECKSUM_VALUE_SHIFT 0U
+#define ROGUE_CR_TE_CHECKSUM_VALUE_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_USC_UVB_CHECKSUM */
+#define ROGUE_CR_USC_UVB_CHECKSUM 0x5118U
+#define ROGUE_CR_USC_UVB_CHECKSUM_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_USC_UVB_CHECKSUM_VALUE_SHIFT 0U
+#define ROGUE_CR_USC_UVB_CHECKSUM_VALUE_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_VCE_CHECKSUM */
+#define ROGUE_CR_VCE_CHECKSUM 0x5030U
+#define ROGUE_CR_VCE_CHECKSUM_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_VCE_CHECKSUM_VALUE_SHIFT 0U
+#define ROGUE_CR_VCE_CHECKSUM_VALUE_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_ISP_PDS_CHECKSUM */
+#define ROGUE_CR_ISP_PDS_CHECKSUM 0x5038U
+#define ROGUE_CR_ISP_PDS_CHECKSUM_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_ISP_PDS_CHECKSUM_VALUE_SHIFT 0U
+#define ROGUE_CR_ISP_PDS_CHECKSUM_VALUE_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_ISP_TPF_CHECKSUM */
+#define ROGUE_CR_ISP_TPF_CHECKSUM 0x5040U
+#define ROGUE_CR_ISP_TPF_CHECKSUM_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_ISP_TPF_CHECKSUM_VALUE_SHIFT 0U
+#define ROGUE_CR_ISP_TPF_CHECKSUM_VALUE_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_TFPU_PLANE0_CHECKSUM */
+#define ROGUE_CR_TFPU_PLANE0_CHECKSUM 0x5048U
+#define ROGUE_CR_TFPU_PLANE0_CHECKSUM_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_TFPU_PLANE0_CHECKSUM_VALUE_SHIFT 0U
+#define ROGUE_CR_TFPU_PLANE0_CHECKSUM_VALUE_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_TFPU_PLANE1_CHECKSUM */
+#define ROGUE_CR_TFPU_PLANE1_CHECKSUM 0x5050U
+#define ROGUE_CR_TFPU_PLANE1_CHECKSUM_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_TFPU_PLANE1_CHECKSUM_VALUE_SHIFT 0U
+#define ROGUE_CR_TFPU_PLANE1_CHECKSUM_VALUE_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_PBE_CHECKSUM */
+#define ROGUE_CR_PBE_CHECKSUM 0x5058U
+#define ROGUE_CR_PBE_CHECKSUM_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_PBE_CHECKSUM_VALUE_SHIFT 0U
+#define ROGUE_CR_PBE_CHECKSUM_VALUE_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_PDS_DOUTM_STM_SIGNATURE */
+#define ROGUE_CR_PDS_DOUTM_STM_SIGNATURE 0x5060U
+#define ROGUE_CR_PDS_DOUTM_STM_SIGNATURE_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_PDS_DOUTM_STM_SIGNATURE_VALUE_SHIFT 0U
+#define ROGUE_CR_PDS_DOUTM_STM_SIGNATURE_VALUE_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_IFPU_ISP_CHECKSUM */
+#define ROGUE_CR_IFPU_ISP_CHECKSUM 0x5068U
+#define ROGUE_CR_IFPU_ISP_CHECKSUM_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_IFPU_ISP_CHECKSUM_VALUE_SHIFT 0U
+#define ROGUE_CR_IFPU_ISP_CHECKSUM_VALUE_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_USC_UVS4_CHECKSUM */
+#define ROGUE_CR_USC_UVS4_CHECKSUM 0x5100U
+#define ROGUE_CR_USC_UVS4_CHECKSUM_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_USC_UVS4_CHECKSUM_VALUE_SHIFT 0U
+#define ROGUE_CR_USC_UVS4_CHECKSUM_VALUE_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_USC_UVS5_CHECKSUM */
+#define ROGUE_CR_USC_UVS5_CHECKSUM 0x5108U
+#define ROGUE_CR_USC_UVS5_CHECKSUM_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_USC_UVS5_CHECKSUM_VALUE_SHIFT 0U
+#define ROGUE_CR_USC_UVS5_CHECKSUM_VALUE_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_PPP_CLIP_CHECKSUM */
+#define ROGUE_CR_PPP_CLIP_CHECKSUM 0x5120U
+#define ROGUE_CR_PPP_CLIP_CHECKSUM_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_PPP_CLIP_CHECKSUM_VALUE_SHIFT 0U
+#define ROGUE_CR_PPP_CLIP_CHECKSUM_VALUE_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_PERF_TA_PHASE */
+#define ROGUE_CR_PERF_TA_PHASE 0x6008U
+#define ROGUE_CR_PERF_TA_PHASE_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_PERF_TA_PHASE_COUNT_SHIFT 0U
+#define ROGUE_CR_PERF_TA_PHASE_COUNT_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_PERF_3D_PHASE */
+#define ROGUE_CR_PERF_3D_PHASE 0x6010U
+#define ROGUE_CR_PERF_3D_PHASE_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_PERF_3D_PHASE_COUNT_SHIFT 0U
+#define ROGUE_CR_PERF_3D_PHASE_COUNT_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_PERF_COMPUTE_PHASE */
+#define ROGUE_CR_PERF_COMPUTE_PHASE 0x6018U
+#define ROGUE_CR_PERF_COMPUTE_PHASE_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_PERF_COMPUTE_PHASE_COUNT_SHIFT 0U
+#define ROGUE_CR_PERF_COMPUTE_PHASE_COUNT_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_PERF_TA_CYCLE */
+#define ROGUE_CR_PERF_TA_CYCLE 0x6020U
+#define ROGUE_CR_PERF_TA_CYCLE_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_PERF_TA_CYCLE_COUNT_SHIFT 0U
+#define ROGUE_CR_PERF_TA_CYCLE_COUNT_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_PERF_3D_CYCLE */
+#define ROGUE_CR_PERF_3D_CYCLE 0x6028U
+#define ROGUE_CR_PERF_3D_CYCLE_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_PERF_3D_CYCLE_COUNT_SHIFT 0U
+#define ROGUE_CR_PERF_3D_CYCLE_COUNT_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_PERF_COMPUTE_CYCLE */
+#define ROGUE_CR_PERF_COMPUTE_CYCLE 0x6030U
+#define ROGUE_CR_PERF_COMPUTE_CYCLE_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_PERF_COMPUTE_CYCLE_COUNT_SHIFT 0U
+#define ROGUE_CR_PERF_COMPUTE_CYCLE_COUNT_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_PERF_TA_OR_3D_CYCLE */
+#define ROGUE_CR_PERF_TA_OR_3D_CYCLE 0x6038U
+#define ROGUE_CR_PERF_TA_OR_3D_CYCLE_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_PERF_TA_OR_3D_CYCLE_COUNT_SHIFT 0U
+#define ROGUE_CR_PERF_TA_OR_3D_CYCLE_COUNT_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_PERF_INITIAL_TA_CYCLE */
+#define ROGUE_CR_PERF_INITIAL_TA_CYCLE 0x6040U
+#define ROGUE_CR_PERF_INITIAL_TA_CYCLE_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_PERF_INITIAL_TA_CYCLE_COUNT_SHIFT 0U
+#define ROGUE_CR_PERF_INITIAL_TA_CYCLE_COUNT_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_PERF_SLC0_READ_STALL */
+#define ROGUE_CR_PERF_SLC0_READ_STALL 0x60B8U
+#define ROGUE_CR_PERF_SLC0_READ_STALL_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_PERF_SLC0_READ_STALL_COUNT_SHIFT 0U
+#define ROGUE_CR_PERF_SLC0_READ_STALL_COUNT_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_PERF_SLC0_WRITE_STALL */
+#define ROGUE_CR_PERF_SLC0_WRITE_STALL 0x60C0U
+#define ROGUE_CR_PERF_SLC0_WRITE_STALL_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_PERF_SLC0_WRITE_STALL_COUNT_SHIFT 0U
+#define ROGUE_CR_PERF_SLC0_WRITE_STALL_COUNT_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_PERF_SLC1_READ_STALL */
+#define ROGUE_CR_PERF_SLC1_READ_STALL 0x60E0U
+#define ROGUE_CR_PERF_SLC1_READ_STALL_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_PERF_SLC1_READ_STALL_COUNT_SHIFT 0U
+#define ROGUE_CR_PERF_SLC1_READ_STALL_COUNT_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_PERF_SLC1_WRITE_STALL */
+#define ROGUE_CR_PERF_SLC1_WRITE_STALL 0x60E8U
+#define ROGUE_CR_PERF_SLC1_WRITE_STALL_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_PERF_SLC1_WRITE_STALL_COUNT_SHIFT 0U
+#define ROGUE_CR_PERF_SLC1_WRITE_STALL_COUNT_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_PERF_SLC2_READ_STALL */
+#define ROGUE_CR_PERF_SLC2_READ_STALL 0x6158U
+#define ROGUE_CR_PERF_SLC2_READ_STALL_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_PERF_SLC2_READ_STALL_COUNT_SHIFT 0U
+#define ROGUE_CR_PERF_SLC2_READ_STALL_COUNT_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_PERF_SLC2_WRITE_STALL */
+#define ROGUE_CR_PERF_SLC2_WRITE_STALL 0x6160U
+#define ROGUE_CR_PERF_SLC2_WRITE_STALL_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_PERF_SLC2_WRITE_STALL_COUNT_SHIFT 0U
+#define ROGUE_CR_PERF_SLC2_WRITE_STALL_COUNT_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_PERF_SLC3_READ_STALL */
+#define ROGUE_CR_PERF_SLC3_READ_STALL 0x6180U
+#define ROGUE_CR_PERF_SLC3_READ_STALL_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_PERF_SLC3_READ_STALL_COUNT_SHIFT 0U
+#define ROGUE_CR_PERF_SLC3_READ_STALL_COUNT_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_PERF_SLC3_WRITE_STALL */
+#define ROGUE_CR_PERF_SLC3_WRITE_STALL 0x6188U
+#define ROGUE_CR_PERF_SLC3_WRITE_STALL_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_PERF_SLC3_WRITE_STALL_COUNT_SHIFT 0U
+#define ROGUE_CR_PERF_SLC3_WRITE_STALL_COUNT_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_PERF_3D_SPINUP */
+#define ROGUE_CR_PERF_3D_SPINUP 0x6220U
+#define ROGUE_CR_PERF_3D_SPINUP_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_PERF_3D_SPINUP_CYCLES_SHIFT 0U
+#define ROGUE_CR_PERF_3D_SPINUP_CYCLES_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_AXI_ACE_LITE_CONFIGURATION */
+#define ROGUE_CR_AXI_ACE_LITE_CONFIGURATION 0x38C0U
+#define ROGUE_CR_AXI_ACE_LITE_CONFIGURATION_MASKFULL 0x00003FFFFFFFFFFFULL
+#define ROGUE_CR_AXI_ACE_LITE_CONFIGURATION_ENABLE_FENCE_OUT_SHIFT 45U
+#define ROGUE_CR_AXI_ACE_LITE_CONFIGURATION_ENABLE_FENCE_OUT_CLRMSK 0xFFFFDFFFFFFFFFFFULL
+#define ROGUE_CR_AXI_ACE_LITE_CONFIGURATION_ENABLE_FENCE_OUT_EN 0x0000200000000000ULL
+#define ROGUE_CR_AXI_ACE_LITE_CONFIGURATION_OSID_SECURITY_SHIFT 37U
+#define ROGUE_CR_AXI_ACE_LITE_CONFIGURATION_OSID_SECURITY_CLRMSK 0xFFFFE01FFFFFFFFFULL
+#define ROGUE_CR_AXI_ACE_LITE_CONFIGURATION_DISABLE_COHERENT_WRITELINEUNIQUE_SHIFT 36U
+#define ROGUE_CR_AXI_ACE_LITE_CONFIGURATION_DISABLE_COHERENT_WRITELINEUNIQUE_CLRMSK \
+	0xFFFFFFEFFFFFFFFFULL
+#define ROGUE_CR_AXI_ACE_LITE_CONFIGURATION_DISABLE_COHERENT_WRITELINEUNIQUE_EN \
+	0x0000001000000000ULL
+#define ROGUE_CR_AXI_ACE_LITE_CONFIGURATION_DISABLE_COHERENT_WRITE_SHIFT 35U
+#define ROGUE_CR_AXI_ACE_LITE_CONFIGURATION_DISABLE_COHERENT_WRITE_CLRMSK 0xFFFFFFF7FFFFFFFFULL
+#define ROGUE_CR_AXI_ACE_LITE_CONFIGURATION_DISABLE_COHERENT_WRITE_EN 0x0000000800000000ULL
+#define ROGUE_CR_AXI_ACE_LITE_CONFIGURATION_DISABLE_COHERENT_READ_SHIFT 34U
+#define ROGUE_CR_AXI_ACE_LITE_CONFIGURATION_DISABLE_COHERENT_READ_CLRMSK 0xFFFFFFFBFFFFFFFFULL
+#define ROGUE_CR_AXI_ACE_LITE_CONFIGURATION_DISABLE_COHERENT_READ_EN 0x0000000400000000ULL
+#define ROGUE_CR_AXI_ACE_LITE_CONFIGURATION_ARCACHE_CACHE_MAINTENANCE_SHIFT 30U
+#define ROGUE_CR_AXI_ACE_LITE_CONFIGURATION_ARCACHE_CACHE_MAINTENANCE_CLRMSK 0xFFFFFFFC3FFFFFFFULL
+#define ROGUE_CR_AXI_ACE_LITE_CONFIGURATION_ARCACHE_COHERENT_SHIFT 26U
+#define ROGUE_CR_AXI_ACE_LITE_CONFIGURATION_ARCACHE_COHERENT_CLRMSK 0xFFFFFFFFC3FFFFFFULL
+#define ROGUE_CR_AXI_ACE_LITE_CONFIGURATION_AWCACHE_COHERENT_SHIFT 22U
+#define ROGUE_CR_AXI_ACE_LITE_CONFIGURATION_AWCACHE_COHERENT_CLRMSK 0xFFFFFFFFFC3FFFFFULL
+#define ROGUE_CR_AXI_ACE_LITE_CONFIGURATION_ARDOMAIN_BARRIER_SHIFT 20U
+#define ROGUE_CR_AXI_ACE_LITE_CONFIGURATION_ARDOMAIN_BARRIER_CLRMSK 0xFFFFFFFFFFCFFFFFULL
+#define ROGUE_CR_AXI_ACE_LITE_CONFIGURATION_AWDOMAIN_BARRIER_SHIFT 18U
+#define ROGUE_CR_AXI_ACE_LITE_CONFIGURATION_AWDOMAIN_BARRIER_CLRMSK 0xFFFFFFFFFFF3FFFFULL
+#define ROGUE_CR_AXI_ACE_LITE_CONFIGURATION_ARDOMAIN_CACHE_MAINTENANCE_SHIFT 16U
+#define ROGUE_CR_AXI_ACE_LITE_CONFIGURATION_ARDOMAIN_CACHE_MAINTENANCE_CLRMSK 0xFFFFFFFFFFFCFFFFULL
+#define ROGUE_CR_AXI_ACE_LITE_CONFIGURATION_AWDOMAIN_COHERENT_SHIFT 14U
+#define ROGUE_CR_AXI_ACE_LITE_CONFIGURATION_AWDOMAIN_COHERENT_CLRMSK 0xFFFFFFFFFFFF3FFFULL
+#define ROGUE_CR_AXI_ACE_LITE_CONFIGURATION_ARDOMAIN_COHERENT_SHIFT 12U
+#define ROGUE_CR_AXI_ACE_LITE_CONFIGURATION_ARDOMAIN_COHERENT_CLRMSK 0xFFFFFFFFFFFFCFFFULL
+#define ROGUE_CR_AXI_ACE_LITE_CONFIGURATION_ARDOMAIN_NON_SNOOPING_SHIFT 10U
+#define ROGUE_CR_AXI_ACE_LITE_CONFIGURATION_ARDOMAIN_NON_SNOOPING_CLRMSK 0xFFFFFFFFFFFFF3FFULL
+#define ROGUE_CR_AXI_ACE_LITE_CONFIGURATION_AWDOMAIN_NON_SNOOPING_SHIFT 8U
+#define ROGUE_CR_AXI_ACE_LITE_CONFIGURATION_AWDOMAIN_NON_SNOOPING_CLRMSK 0xFFFFFFFFFFFFFCFFULL
+#define ROGUE_CR_AXI_ACE_LITE_CONFIGURATION_ARCACHE_NON_SNOOPING_SHIFT 4U
+#define ROGUE_CR_AXI_ACE_LITE_CONFIGURATION_ARCACHE_NON_SNOOPING_CLRMSK 0xFFFFFFFFFFFFFF0FULL
+#define ROGUE_CR_AXI_ACE_LITE_CONFIGURATION_AWCACHE_NON_SNOOPING_SHIFT 0U
+#define ROGUE_CR_AXI_ACE_LITE_CONFIGURATION_AWCACHE_NON_SNOOPING_CLRMSK 0xFFFFFFFFFFFFFFF0ULL
+
+/* Register ROGUE_CR_POWER_ESTIMATE_RESULT */
+#define ROGUE_CR_POWER_ESTIMATE_RESULT 0x6328U
+#define ROGUE_CR_POWER_ESTIMATE_RESULT_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_POWER_ESTIMATE_RESULT_VALUE_SHIFT 0U
+#define ROGUE_CR_POWER_ESTIMATE_RESULT_VALUE_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_TA_PERF */
+#define ROGUE_CR_TA_PERF 0x7600U
+#define ROGUE_CR_TA_PERF_MASKFULL 0x000000000000001FULL
+#define ROGUE_CR_TA_PERF_CLR_3_SHIFT 4U
+#define ROGUE_CR_TA_PERF_CLR_3_CLRMSK 0xFFFFFFEFU
+#define ROGUE_CR_TA_PERF_CLR_3_EN 0x00000010U
+#define ROGUE_CR_TA_PERF_CLR_2_SHIFT 3U
+#define ROGUE_CR_TA_PERF_CLR_2_CLRMSK 0xFFFFFFF7U
+#define ROGUE_CR_TA_PERF_CLR_2_EN 0x00000008U
+#define ROGUE_CR_TA_PERF_CLR_1_SHIFT 2U
+#define ROGUE_CR_TA_PERF_CLR_1_CLRMSK 0xFFFFFFFBU
+#define ROGUE_CR_TA_PERF_CLR_1_EN 0x00000004U
+#define ROGUE_CR_TA_PERF_CLR_0_SHIFT 1U
+#define ROGUE_CR_TA_PERF_CLR_0_CLRMSK 0xFFFFFFFDU
+#define ROGUE_CR_TA_PERF_CLR_0_EN 0x00000002U
+#define ROGUE_CR_TA_PERF_CTRL_ENABLE_SHIFT 0U
+#define ROGUE_CR_TA_PERF_CTRL_ENABLE_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_TA_PERF_CTRL_ENABLE_EN 0x00000001U
+
+/* Register ROGUE_CR_TA_PERF_SELECT0 */
+#define ROGUE_CR_TA_PERF_SELECT0 0x7608U
+#define ROGUE_CR_TA_PERF_SELECT0_MASKFULL 0x3FFF3FFF003FFFFFULL
+#define ROGUE_CR_TA_PERF_SELECT0_BATCH_MAX_SHIFT 48U
+#define ROGUE_CR_TA_PERF_SELECT0_BATCH_MAX_CLRMSK 0xC000FFFFFFFFFFFFULL
+#define ROGUE_CR_TA_PERF_SELECT0_BATCH_MIN_SHIFT 32U
+#define ROGUE_CR_TA_PERF_SELECT0_BATCH_MIN_CLRMSK 0xFFFFC000FFFFFFFFULL
+#define ROGUE_CR_TA_PERF_SELECT0_MODE_SHIFT 21U
+#define ROGUE_CR_TA_PERF_SELECT0_MODE_CLRMSK 0xFFFFFFFFFFDFFFFFULL
+#define ROGUE_CR_TA_PERF_SELECT0_MODE_EN 0x0000000000200000ULL
+#define ROGUE_CR_TA_PERF_SELECT0_GROUP_SELECT_SHIFT 16U
+#define ROGUE_CR_TA_PERF_SELECT0_GROUP_SELECT_CLRMSK 0xFFFFFFFFFFE0FFFFULL
+#define ROGUE_CR_TA_PERF_SELECT0_BIT_SELECT_SHIFT 0U
+#define ROGUE_CR_TA_PERF_SELECT0_BIT_SELECT_CLRMSK 0xFFFFFFFFFFFF0000ULL
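+/*
+ * Illustrative use of the _CLRMSK / _SHIFT pairs defined for the SELECTn
+ * registers: clear a field with its _CLRMSK, then OR in the new value
+ * shifted into place, e.g. (with a u64 val holding the register contents):
+ *
+ *   val &= ROGUE_CR_TA_PERF_SELECT0_GROUP_SELECT_CLRMSK;
+ *   val |= (u64)group << ROGUE_CR_TA_PERF_SELECT0_GROUP_SELECT_SHIFT;
+ *
+ * How val is read from and written back to the register is left to the
+ * driver code that uses these definitions.
+ */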
+
+/* Register ROGUE_CR_TA_PERF_SELECT1 */
+#define ROGUE_CR_TA_PERF_SELECT1 0x7610U
+#define ROGUE_CR_TA_PERF_SELECT1_MASKFULL 0x3FFF3FFF003FFFFFULL
+#define ROGUE_CR_TA_PERF_SELECT1_BATCH_MAX_SHIFT 48U
+#define ROGUE_CR_TA_PERF_SELECT1_BATCH_MAX_CLRMSK 0xC000FFFFFFFFFFFFULL
+#define ROGUE_CR_TA_PERF_SELECT1_BATCH_MIN_SHIFT 32U
+#define ROGUE_CR_TA_PERF_SELECT1_BATCH_MIN_CLRMSK 0xFFFFC000FFFFFFFFULL
+#define ROGUE_CR_TA_PERF_SELECT1_MODE_SHIFT 21U
+#define ROGUE_CR_TA_PERF_SELECT1_MODE_CLRMSK 0xFFFFFFFFFFDFFFFFULL
+#define ROGUE_CR_TA_PERF_SELECT1_MODE_EN 0x0000000000200000ULL
+#define ROGUE_CR_TA_PERF_SELECT1_GROUP_SELECT_SHIFT 16U
+#define ROGUE_CR_TA_PERF_SELECT1_GROUP_SELECT_CLRMSK 0xFFFFFFFFFFE0FFFFULL
+#define ROGUE_CR_TA_PERF_SELECT1_BIT_SELECT_SHIFT 0U
+#define ROGUE_CR_TA_PERF_SELECT1_BIT_SELECT_CLRMSK 0xFFFFFFFFFFFF0000ULL
+
+/* Register ROGUE_CR_TA_PERF_SELECT2 */
+#define ROGUE_CR_TA_PERF_SELECT2 0x7618U
+#define ROGUE_CR_TA_PERF_SELECT2_MASKFULL 0x3FFF3FFF003FFFFFULL
+#define ROGUE_CR_TA_PERF_SELECT2_BATCH_MAX_SHIFT 48U
+#define ROGUE_CR_TA_PERF_SELECT2_BATCH_MAX_CLRMSK 0xC000FFFFFFFFFFFFULL
+#define ROGUE_CR_TA_PERF_SELECT2_BATCH_MIN_SHIFT 32U
+#define ROGUE_CR_TA_PERF_SELECT2_BATCH_MIN_CLRMSK 0xFFFFC000FFFFFFFFULL
+#define ROGUE_CR_TA_PERF_SELECT2_MODE_SHIFT 21U
+#define ROGUE_CR_TA_PERF_SELECT2_MODE_CLRMSK 0xFFFFFFFFFFDFFFFFULL
+#define ROGUE_CR_TA_PERF_SELECT2_MODE_EN 0x0000000000200000ULL
+#define ROGUE_CR_TA_PERF_SELECT2_GROUP_SELECT_SHIFT 16U
+#define ROGUE_CR_TA_PERF_SELECT2_GROUP_SELECT_CLRMSK 0xFFFFFFFFFFE0FFFFULL
+#define ROGUE_CR_TA_PERF_SELECT2_BIT_SELECT_SHIFT 0U
+#define ROGUE_CR_TA_PERF_SELECT2_BIT_SELECT_CLRMSK 0xFFFFFFFFFFFF0000ULL
+
+/* Register ROGUE_CR_TA_PERF_SELECT3 */
+#define ROGUE_CR_TA_PERF_SELECT3 0x7620U
+#define ROGUE_CR_TA_PERF_SELECT3_MASKFULL 0x3FFF3FFF003FFFFFULL
+#define ROGUE_CR_TA_PERF_SELECT3_BATCH_MAX_SHIFT 48U
+#define ROGUE_CR_TA_PERF_SELECT3_BATCH_MAX_CLRMSK 0xC000FFFFFFFFFFFFULL
+#define ROGUE_CR_TA_PERF_SELECT3_BATCH_MIN_SHIFT 32U
+#define ROGUE_CR_TA_PERF_SELECT3_BATCH_MIN_CLRMSK 0xFFFFC000FFFFFFFFULL
+#define ROGUE_CR_TA_PERF_SELECT3_MODE_SHIFT 21U
+#define ROGUE_CR_TA_PERF_SELECT3_MODE_CLRMSK 0xFFFFFFFFFFDFFFFFULL
+#define ROGUE_CR_TA_PERF_SELECT3_MODE_EN 0x0000000000200000ULL
+#define ROGUE_CR_TA_PERF_SELECT3_GROUP_SELECT_SHIFT 16U
+#define ROGUE_CR_TA_PERF_SELECT3_GROUP_SELECT_CLRMSK 0xFFFFFFFFFFE0FFFFULL
+#define ROGUE_CR_TA_PERF_SELECT3_BIT_SELECT_SHIFT 0U
+#define ROGUE_CR_TA_PERF_SELECT3_BIT_SELECT_CLRMSK 0xFFFFFFFFFFFF0000ULL
+
+/* Register ROGUE_CR_TA_PERF_SELECTED_BITS */
+#define ROGUE_CR_TA_PERF_SELECTED_BITS 0x7648U
+#define ROGUE_CR_TA_PERF_SELECTED_BITS_MASKFULL 0xFFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_TA_PERF_SELECTED_BITS_REG3_SHIFT 48U
+#define ROGUE_CR_TA_PERF_SELECTED_BITS_REG3_CLRMSK 0x0000FFFFFFFFFFFFULL
+#define ROGUE_CR_TA_PERF_SELECTED_BITS_REG2_SHIFT 32U
+#define ROGUE_CR_TA_PERF_SELECTED_BITS_REG2_CLRMSK 0xFFFF0000FFFFFFFFULL
+#define ROGUE_CR_TA_PERF_SELECTED_BITS_REG1_SHIFT 16U
+#define ROGUE_CR_TA_PERF_SELECTED_BITS_REG1_CLRMSK 0xFFFFFFFF0000FFFFULL
+#define ROGUE_CR_TA_PERF_SELECTED_BITS_REG0_SHIFT 0U
+#define ROGUE_CR_TA_PERF_SELECTED_BITS_REG0_CLRMSK 0xFFFFFFFFFFFF0000ULL
+
+/* Register ROGUE_CR_TA_PERF_COUNTER_0 */
+#define ROGUE_CR_TA_PERF_COUNTER_0 0x7650U
+#define ROGUE_CR_TA_PERF_COUNTER_0_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_TA_PERF_COUNTER_0_REG_SHIFT 0U
+#define ROGUE_CR_TA_PERF_COUNTER_0_REG_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_TA_PERF_COUNTER_1 */
+#define ROGUE_CR_TA_PERF_COUNTER_1 0x7658U
+#define ROGUE_CR_TA_PERF_COUNTER_1_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_TA_PERF_COUNTER_1_REG_SHIFT 0U
+#define ROGUE_CR_TA_PERF_COUNTER_1_REG_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_TA_PERF_COUNTER_2 */
+#define ROGUE_CR_TA_PERF_COUNTER_2 0x7660U
+#define ROGUE_CR_TA_PERF_COUNTER_2_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_TA_PERF_COUNTER_2_REG_SHIFT 0U
+#define ROGUE_CR_TA_PERF_COUNTER_2_REG_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_TA_PERF_COUNTER_3 */
+#define ROGUE_CR_TA_PERF_COUNTER_3 0x7668U
+#define ROGUE_CR_TA_PERF_COUNTER_3_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_TA_PERF_COUNTER_3_REG_SHIFT 0U
+#define ROGUE_CR_TA_PERF_COUNTER_3_REG_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_RASTERISATION_PERF */
+#define ROGUE_CR_RASTERISATION_PERF 0x7700U
+#define ROGUE_CR_RASTERISATION_PERF_MASKFULL 0x000000000000001FULL
+#define ROGUE_CR_RASTERISATION_PERF_CLR_3_SHIFT 4U
+#define ROGUE_CR_RASTERISATION_PERF_CLR_3_CLRMSK 0xFFFFFFEFU
+#define ROGUE_CR_RASTERISATION_PERF_CLR_3_EN 0x00000010U
+#define ROGUE_CR_RASTERISATION_PERF_CLR_2_SHIFT 3U
+#define ROGUE_CR_RASTERISATION_PERF_CLR_2_CLRMSK 0xFFFFFFF7U
+#define ROGUE_CR_RASTERISATION_PERF_CLR_2_EN 0x00000008U
+#define ROGUE_CR_RASTERISATION_PERF_CLR_1_SHIFT 2U
+#define ROGUE_CR_RASTERISATION_PERF_CLR_1_CLRMSK 0xFFFFFFFBU
+#define ROGUE_CR_RASTERISATION_PERF_CLR_1_EN 0x00000004U
+#define ROGUE_CR_RASTERISATION_PERF_CLR_0_SHIFT 1U
+#define ROGUE_CR_RASTERISATION_PERF_CLR_0_CLRMSK 0xFFFFFFFDU
+#define ROGUE_CR_RASTERISATION_PERF_CLR_0_EN 0x00000002U
+#define ROGUE_CR_RASTERISATION_PERF_CTRL_ENABLE_SHIFT 0U
+#define ROGUE_CR_RASTERISATION_PERF_CTRL_ENABLE_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_RASTERISATION_PERF_CTRL_ENABLE_EN 0x00000001U
+
+/* Register ROGUE_CR_RASTERISATION_PERF_SELECT0 */
+#define ROGUE_CR_RASTERISATION_PERF_SELECT0 0x7708U
+#define ROGUE_CR_RASTERISATION_PERF_SELECT0_MASKFULL 0x3FFF3FFF003FFFFFULL
+#define ROGUE_CR_RASTERISATION_PERF_SELECT0_BATCH_MAX_SHIFT 48U
+#define ROGUE_CR_RASTERISATION_PERF_SELECT0_BATCH_MAX_CLRMSK 0xC000FFFFFFFFFFFFULL
+#define ROGUE_CR_RASTERISATION_PERF_SELECT0_BATCH_MIN_SHIFT 32U
+#define ROGUE_CR_RASTERISATION_PERF_SELECT0_BATCH_MIN_CLRMSK 0xFFFFC000FFFFFFFFULL
+#define ROGUE_CR_RASTERISATION_PERF_SELECT0_MODE_SHIFT 21U
+#define ROGUE_CR_RASTERISATION_PERF_SELECT0_MODE_CLRMSK 0xFFFFFFFFFFDFFFFFULL
+#define ROGUE_CR_RASTERISATION_PERF_SELECT0_MODE_EN 0x0000000000200000ULL
+#define ROGUE_CR_RASTERISATION_PERF_SELECT0_GROUP_SELECT_SHIFT 16U
+#define ROGUE_CR_RASTERISATION_PERF_SELECT0_GROUP_SELECT_CLRMSK 0xFFFFFFFFFFE0FFFFULL
+#define ROGUE_CR_RASTERISATION_PERF_SELECT0_BIT_SELECT_SHIFT 0U
+#define ROGUE_CR_RASTERISATION_PERF_SELECT0_BIT_SELECT_CLRMSK 0xFFFFFFFFFFFF0000ULL
+
+/* Register ROGUE_CR_RASTERISATION_PERF_COUNTER_0 */
+#define ROGUE_CR_RASTERISATION_PERF_COUNTER_0 0x7750U
+#define ROGUE_CR_RASTERISATION_PERF_COUNTER_0_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_RASTERISATION_PERF_COUNTER_0_REG_SHIFT 0U
+#define ROGUE_CR_RASTERISATION_PERF_COUNTER_0_REG_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_HUB_BIFPMCACHE_PERF */
+#define ROGUE_CR_HUB_BIFPMCACHE_PERF 0x7800U
+#define ROGUE_CR_HUB_BIFPMCACHE_PERF_MASKFULL 0x000000000000001FULL
+#define ROGUE_CR_HUB_BIFPMCACHE_PERF_CLR_3_SHIFT 4U
+#define ROGUE_CR_HUB_BIFPMCACHE_PERF_CLR_3_CLRMSK 0xFFFFFFEFU
+#define ROGUE_CR_HUB_BIFPMCACHE_PERF_CLR_3_EN 0x00000010U
+#define ROGUE_CR_HUB_BIFPMCACHE_PERF_CLR_2_SHIFT 3U
+#define ROGUE_CR_HUB_BIFPMCACHE_PERF_CLR_2_CLRMSK 0xFFFFFFF7U
+#define ROGUE_CR_HUB_BIFPMCACHE_PERF_CLR_2_EN 0x00000008U
+#define ROGUE_CR_HUB_BIFPMCACHE_PERF_CLR_1_SHIFT 2U
+#define ROGUE_CR_HUB_BIFPMCACHE_PERF_CLR_1_CLRMSK 0xFFFFFFFBU
+#define ROGUE_CR_HUB_BIFPMCACHE_PERF_CLR_1_EN 0x00000004U
+#define ROGUE_CR_HUB_BIFPMCACHE_PERF_CLR_0_SHIFT 1U
+#define ROGUE_CR_HUB_BIFPMCACHE_PERF_CLR_0_CLRMSK 0xFFFFFFFDU
+#define ROGUE_CR_HUB_BIFPMCACHE_PERF_CLR_0_EN 0x00000002U
+#define ROGUE_CR_HUB_BIFPMCACHE_PERF_CTRL_ENABLE_SHIFT 0U
+#define ROGUE_CR_HUB_BIFPMCACHE_PERF_CTRL_ENABLE_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_HUB_BIFPMCACHE_PERF_CTRL_ENABLE_EN 0x00000001U
+
+/* Register ROGUE_CR_HUB_BIFPMCACHE_PERF_SELECT0 */
+#define ROGUE_CR_HUB_BIFPMCACHE_PERF_SELECT0 0x7808U
+#define ROGUE_CR_HUB_BIFPMCACHE_PERF_SELECT0_MASKFULL 0x3FFF3FFF003FFFFFULL
+#define ROGUE_CR_HUB_BIFPMCACHE_PERF_SELECT0_BATCH_MAX_SHIFT 48U
+#define ROGUE_CR_HUB_BIFPMCACHE_PERF_SELECT0_BATCH_MAX_CLRMSK 0xC000FFFFFFFFFFFFULL
+#define ROGUE_CR_HUB_BIFPMCACHE_PERF_SELECT0_BATCH_MIN_SHIFT 32U
+#define ROGUE_CR_HUB_BIFPMCACHE_PERF_SELECT0_BATCH_MIN_CLRMSK 0xFFFFC000FFFFFFFFULL
+#define ROGUE_CR_HUB_BIFPMCACHE_PERF_SELECT0_MODE_SHIFT 21U
+#define ROGUE_CR_HUB_BIFPMCACHE_PERF_SELECT0_MODE_CLRMSK 0xFFFFFFFFFFDFFFFFULL
+#define ROGUE_CR_HUB_BIFPMCACHE_PERF_SELECT0_MODE_EN 0x0000000000200000ULL
+#define ROGUE_CR_HUB_BIFPMCACHE_PERF_SELECT0_GROUP_SELECT_SHIFT 16U
+#define ROGUE_CR_HUB_BIFPMCACHE_PERF_SELECT0_GROUP_SELECT_CLRMSK 0xFFFFFFFFFFE0FFFFULL
+#define ROGUE_CR_HUB_BIFPMCACHE_PERF_SELECT0_BIT_SELECT_SHIFT 0U
+#define ROGUE_CR_HUB_BIFPMCACHE_PERF_SELECT0_BIT_SELECT_CLRMSK 0xFFFFFFFFFFFF0000ULL
+
+/* Register ROGUE_CR_HUB_BIFPMCACHE_PERF_COUNTER_0 */
+#define ROGUE_CR_HUB_BIFPMCACHE_PERF_COUNTER_0 0x7850U
+#define ROGUE_CR_HUB_BIFPMCACHE_PERF_COUNTER_0_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_HUB_BIFPMCACHE_PERF_COUNTER_0_REG_SHIFT 0U
+#define ROGUE_CR_HUB_BIFPMCACHE_PERF_COUNTER_0_REG_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_TPU_MCU_L0_PERF */
+#define ROGUE_CR_TPU_MCU_L0_PERF 0x7900U
+#define ROGUE_CR_TPU_MCU_L0_PERF_MASKFULL 0x000000000000001FULL
+#define ROGUE_CR_TPU_MCU_L0_PERF_CLR_3_SHIFT 4U
+#define ROGUE_CR_TPU_MCU_L0_PERF_CLR_3_CLRMSK 0xFFFFFFEFU
+#define ROGUE_CR_TPU_MCU_L0_PERF_CLR_3_EN 0x00000010U
+#define ROGUE_CR_TPU_MCU_L0_PERF_CLR_2_SHIFT 3U
+#define ROGUE_CR_TPU_MCU_L0_PERF_CLR_2_CLRMSK 0xFFFFFFF7U
+#define ROGUE_CR_TPU_MCU_L0_PERF_CLR_2_EN 0x00000008U
+#define ROGUE_CR_TPU_MCU_L0_PERF_CLR_1_SHIFT 2U
+#define ROGUE_CR_TPU_MCU_L0_PERF_CLR_1_CLRMSK 0xFFFFFFFBU
+#define ROGUE_CR_TPU_MCU_L0_PERF_CLR_1_EN 0x00000004U
+#define ROGUE_CR_TPU_MCU_L0_PERF_CLR_0_SHIFT 1U
+#define ROGUE_CR_TPU_MCU_L0_PERF_CLR_0_CLRMSK 0xFFFFFFFDU
+#define ROGUE_CR_TPU_MCU_L0_PERF_CLR_0_EN 0x00000002U
+#define ROGUE_CR_TPU_MCU_L0_PERF_CTRL_ENABLE_SHIFT 0U
+#define ROGUE_CR_TPU_MCU_L0_PERF_CTRL_ENABLE_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_TPU_MCU_L0_PERF_CTRL_ENABLE_EN 0x00000001U
+
+/* Register ROGUE_CR_TPU_MCU_L0_PERF_SELECT0 */
+#define ROGUE_CR_TPU_MCU_L0_PERF_SELECT0 0x7908U
+#define ROGUE_CR_TPU_MCU_L0_PERF_SELECT0_MASKFULL 0x3FFF3FFF003FFFFFULL
+#define ROGUE_CR_TPU_MCU_L0_PERF_SELECT0_BATCH_MAX_SHIFT 48U
+#define ROGUE_CR_TPU_MCU_L0_PERF_SELECT0_BATCH_MAX_CLRMSK 0xC000FFFFFFFFFFFFULL
+#define ROGUE_CR_TPU_MCU_L0_PERF_SELECT0_BATCH_MIN_SHIFT 32U
+#define ROGUE_CR_TPU_MCU_L0_PERF_SELECT0_BATCH_MIN_CLRMSK 0xFFFFC000FFFFFFFFULL
+#define ROGUE_CR_TPU_MCU_L0_PERF_SELECT0_MODE_SHIFT 21U
+#define ROGUE_CR_TPU_MCU_L0_PERF_SELECT0_MODE_CLRMSK 0xFFFFFFFFFFDFFFFFULL
+#define ROGUE_CR_TPU_MCU_L0_PERF_SELECT0_MODE_EN 0x0000000000200000ULL
+#define ROGUE_CR_TPU_MCU_L0_PERF_SELECT0_GROUP_SELECT_SHIFT 16U
+#define ROGUE_CR_TPU_MCU_L0_PERF_SELECT0_GROUP_SELECT_CLRMSK 0xFFFFFFFFFFE0FFFFULL
+#define ROGUE_CR_TPU_MCU_L0_PERF_SELECT0_BIT_SELECT_SHIFT 0U
+#define ROGUE_CR_TPU_MCU_L0_PERF_SELECT0_BIT_SELECT_CLRMSK 0xFFFFFFFFFFFF0000ULL
+
+/* Register ROGUE_CR_TPU_MCU_L0_PERF_COUNTER_0 */
+#define ROGUE_CR_TPU_MCU_L0_PERF_COUNTER_0 0x7950U
+#define ROGUE_CR_TPU_MCU_L0_PERF_COUNTER_0_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_TPU_MCU_L0_PERF_COUNTER_0_REG_SHIFT 0U
+#define ROGUE_CR_TPU_MCU_L0_PERF_COUNTER_0_REG_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_USC_PERF */
+#define ROGUE_CR_USC_PERF 0x8100U
+#define ROGUE_CR_USC_PERF_MASKFULL 0x000000000000001FULL
+#define ROGUE_CR_USC_PERF_CLR_3_SHIFT 4U
+#define ROGUE_CR_USC_PERF_CLR_3_CLRMSK 0xFFFFFFEFU
+#define ROGUE_CR_USC_PERF_CLR_3_EN 0x00000010U
+#define ROGUE_CR_USC_PERF_CLR_2_SHIFT 3U
+#define ROGUE_CR_USC_PERF_CLR_2_CLRMSK 0xFFFFFFF7U
+#define ROGUE_CR_USC_PERF_CLR_2_EN 0x00000008U
+#define ROGUE_CR_USC_PERF_CLR_1_SHIFT 2U
+#define ROGUE_CR_USC_PERF_CLR_1_CLRMSK 0xFFFFFFFBU
+#define ROGUE_CR_USC_PERF_CLR_1_EN 0x00000004U
+#define ROGUE_CR_USC_PERF_CLR_0_SHIFT 1U
+#define ROGUE_CR_USC_PERF_CLR_0_CLRMSK 0xFFFFFFFDU
+#define ROGUE_CR_USC_PERF_CLR_0_EN 0x00000002U
+#define ROGUE_CR_USC_PERF_CTRL_ENABLE_SHIFT 0U
+#define ROGUE_CR_USC_PERF_CTRL_ENABLE_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_USC_PERF_CTRL_ENABLE_EN 0x00000001U
+
+/* Register ROGUE_CR_USC_PERF_SELECT0 */
+#define ROGUE_CR_USC_PERF_SELECT0 0x8108U
+#define ROGUE_CR_USC_PERF_SELECT0_MASKFULL 0x3FFF3FFF003FFFFFULL
+#define ROGUE_CR_USC_PERF_SELECT0_BATCH_MAX_SHIFT 48U
+#define ROGUE_CR_USC_PERF_SELECT0_BATCH_MAX_CLRMSK 0xC000FFFFFFFFFFFFULL
+#define ROGUE_CR_USC_PERF_SELECT0_BATCH_MIN_SHIFT 32U
+#define ROGUE_CR_USC_PERF_SELECT0_BATCH_MIN_CLRMSK 0xFFFFC000FFFFFFFFULL
+#define ROGUE_CR_USC_PERF_SELECT0_MODE_SHIFT 21U
+#define ROGUE_CR_USC_PERF_SELECT0_MODE_CLRMSK 0xFFFFFFFFFFDFFFFFULL
+#define ROGUE_CR_USC_PERF_SELECT0_MODE_EN 0x0000000000200000ULL
+#define ROGUE_CR_USC_PERF_SELECT0_GROUP_SELECT_SHIFT 16U
+#define ROGUE_CR_USC_PERF_SELECT0_GROUP_SELECT_CLRMSK 0xFFFFFFFFFFE0FFFFULL
+#define ROGUE_CR_USC_PERF_SELECT0_BIT_SELECT_SHIFT 0U
+#define ROGUE_CR_USC_PERF_SELECT0_BIT_SELECT_CLRMSK 0xFFFFFFFFFFFF0000ULL
+
+/* Register ROGUE_CR_USC_PERF_COUNTER_0 */
+#define ROGUE_CR_USC_PERF_COUNTER_0 0x8150U
+#define ROGUE_CR_USC_PERF_COUNTER_0_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_USC_PERF_COUNTER_0_REG_SHIFT 0U
+#define ROGUE_CR_USC_PERF_COUNTER_0_REG_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_JONES_IDLE */
+#define ROGUE_CR_JONES_IDLE 0x8328U
+#define ROGUE_CR_JONES_IDLE_MASKFULL 0x0000000000007FFFULL
+#define ROGUE_CR_JONES_IDLE_TDM_SHIFT 14U
+#define ROGUE_CR_JONES_IDLE_TDM_CLRMSK 0xFFFFBFFFU
+#define ROGUE_CR_JONES_IDLE_TDM_EN 0x00004000U
+#define ROGUE_CR_JONES_IDLE_FB_CDC_TLA_SHIFT 13U
+#define ROGUE_CR_JONES_IDLE_FB_CDC_TLA_CLRMSK 0xFFFFDFFFU
+#define ROGUE_CR_JONES_IDLE_FB_CDC_TLA_EN 0x00002000U
+#define ROGUE_CR_JONES_IDLE_FB_CDC_SHIFT 12U
+#define ROGUE_CR_JONES_IDLE_FB_CDC_CLRMSK 0xFFFFEFFFU
+#define ROGUE_CR_JONES_IDLE_FB_CDC_EN 0x00001000U
+#define ROGUE_CR_JONES_IDLE_MMU_SHIFT 11U
+#define ROGUE_CR_JONES_IDLE_MMU_CLRMSK 0xFFFFF7FFU
+#define ROGUE_CR_JONES_IDLE_MMU_EN 0x00000800U
+#define ROGUE_CR_JONES_IDLE_TLA_SHIFT 10U
+#define ROGUE_CR_JONES_IDLE_TLA_CLRMSK 0xFFFFFBFFU
+#define ROGUE_CR_JONES_IDLE_TLA_EN 0x00000400U
+#define ROGUE_CR_JONES_IDLE_GARTEN_SHIFT 9U
+#define ROGUE_CR_JONES_IDLE_GARTEN_CLRMSK 0xFFFFFDFFU
+#define ROGUE_CR_JONES_IDLE_GARTEN_EN 0x00000200U
+#define ROGUE_CR_JONES_IDLE_HOSTIF_SHIFT 8U
+#define ROGUE_CR_JONES_IDLE_HOSTIF_CLRMSK 0xFFFFFEFFU
+#define ROGUE_CR_JONES_IDLE_HOSTIF_EN 0x00000100U
+#define ROGUE_CR_JONES_IDLE_SOCIF_SHIFT 7U
+#define ROGUE_CR_JONES_IDLE_SOCIF_CLRMSK 0xFFFFFF7FU
+#define ROGUE_CR_JONES_IDLE_SOCIF_EN 0x00000080U
+#define ROGUE_CR_JONES_IDLE_TILING_SHIFT 6U
+#define ROGUE_CR_JONES_IDLE_TILING_CLRMSK 0xFFFFFFBFU
+#define ROGUE_CR_JONES_IDLE_TILING_EN 0x00000040U
+#define ROGUE_CR_JONES_IDLE_IPP_SHIFT 5U
+#define ROGUE_CR_JONES_IDLE_IPP_CLRMSK 0xFFFFFFDFU
+#define ROGUE_CR_JONES_IDLE_IPP_EN 0x00000020U
+#define ROGUE_CR_JONES_IDLE_USCS_SHIFT 4U
+#define ROGUE_CR_JONES_IDLE_USCS_CLRMSK 0xFFFFFFEFU
+#define ROGUE_CR_JONES_IDLE_USCS_EN 0x00000010U
+#define ROGUE_CR_JONES_IDLE_PM_SHIFT 3U
+#define ROGUE_CR_JONES_IDLE_PM_CLRMSK 0xFFFFFFF7U
+#define ROGUE_CR_JONES_IDLE_PM_EN 0x00000008U
+#define ROGUE_CR_JONES_IDLE_CDM_SHIFT 2U
+#define ROGUE_CR_JONES_IDLE_CDM_CLRMSK 0xFFFFFFFBU
+#define ROGUE_CR_JONES_IDLE_CDM_EN 0x00000004U
+#define ROGUE_CR_JONES_IDLE_VDM_SHIFT 1U
+#define ROGUE_CR_JONES_IDLE_VDM_CLRMSK 0xFFFFFFFDU
+#define ROGUE_CR_JONES_IDLE_VDM_EN 0x00000002U
+#define ROGUE_CR_JONES_IDLE_BIF_SHIFT 0U
+#define ROGUE_CR_JONES_IDLE_BIF_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_JONES_IDLE_BIF_EN 0x00000001U
+
+/* Register ROGUE_CR_TORNADO_PERF */
+#define ROGUE_CR_TORNADO_PERF 0x8228U
+#define ROGUE_CR_TORNADO_PERF_MASKFULL 0x000000000000001FULL
+#define ROGUE_CR_TORNADO_PERF_CLR_3_SHIFT 4U
+#define ROGUE_CR_TORNADO_PERF_CLR_3_CLRMSK 0xFFFFFFEFU
+#define ROGUE_CR_TORNADO_PERF_CLR_3_EN 0x00000010U
+#define ROGUE_CR_TORNADO_PERF_CLR_2_SHIFT 3U
+#define ROGUE_CR_TORNADO_PERF_CLR_2_CLRMSK 0xFFFFFFF7U
+#define ROGUE_CR_TORNADO_PERF_CLR_2_EN 0x00000008U
+#define ROGUE_CR_TORNADO_PERF_CLR_1_SHIFT 2U
+#define ROGUE_CR_TORNADO_PERF_CLR_1_CLRMSK 0xFFFFFFFBU
+#define ROGUE_CR_TORNADO_PERF_CLR_1_EN 0x00000004U
+#define ROGUE_CR_TORNADO_PERF_CLR_0_SHIFT 1U
+#define ROGUE_CR_TORNADO_PERF_CLR_0_CLRMSK 0xFFFFFFFDU
+#define ROGUE_CR_TORNADO_PERF_CLR_0_EN 0x00000002U
+#define ROGUE_CR_TORNADO_PERF_CTRL_ENABLE_SHIFT 0U
+#define ROGUE_CR_TORNADO_PERF_CTRL_ENABLE_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_TORNADO_PERF_CTRL_ENABLE_EN 0x00000001U
+
+/* Register ROGUE_CR_TORNADO_PERF_SELECT0 */
+#define ROGUE_CR_TORNADO_PERF_SELECT0 0x8230U
+#define ROGUE_CR_TORNADO_PERF_SELECT0_MASKFULL 0x3FFF3FFF003FFFFFULL
+#define ROGUE_CR_TORNADO_PERF_SELECT0_BATCH_MAX_SHIFT 48U
+#define ROGUE_CR_TORNADO_PERF_SELECT0_BATCH_MAX_CLRMSK 0xC000FFFFFFFFFFFFULL
+#define ROGUE_CR_TORNADO_PERF_SELECT0_BATCH_MIN_SHIFT 32U
+#define ROGUE_CR_TORNADO_PERF_SELECT0_BATCH_MIN_CLRMSK 0xFFFFC000FFFFFFFFULL
+#define ROGUE_CR_TORNADO_PERF_SELECT0_MODE_SHIFT 21U
+#define ROGUE_CR_TORNADO_PERF_SELECT0_MODE_CLRMSK 0xFFFFFFFFFFDFFFFFULL
+#define ROGUE_CR_TORNADO_PERF_SELECT0_MODE_EN 0x0000000000200000ULL
+#define ROGUE_CR_TORNADO_PERF_SELECT0_GROUP_SELECT_SHIFT 16U
+#define ROGUE_CR_TORNADO_PERF_SELECT0_GROUP_SELECT_CLRMSK 0xFFFFFFFFFFE0FFFFULL
+#define ROGUE_CR_TORNADO_PERF_SELECT0_BIT_SELECT_SHIFT 0U
+#define ROGUE_CR_TORNADO_PERF_SELECT0_BIT_SELECT_CLRMSK 0xFFFFFFFFFFFF0000ULL
+
+/* Register ROGUE_CR_TORNADO_PERF_COUNTER_0 */
+#define ROGUE_CR_TORNADO_PERF_COUNTER_0 0x8268U
+#define ROGUE_CR_TORNADO_PERF_COUNTER_0_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_TORNADO_PERF_COUNTER_0_REG_SHIFT 0U
+#define ROGUE_CR_TORNADO_PERF_COUNTER_0_REG_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_TEXAS_PERF */
+#define ROGUE_CR_TEXAS_PERF 0x8290U
+#define ROGUE_CR_TEXAS_PERF_MASKFULL 0x000000000000007FULL
+#define ROGUE_CR_TEXAS_PERF_CLR_5_SHIFT 6U
+#define ROGUE_CR_TEXAS_PERF_CLR_5_CLRMSK 0xFFFFFFBFU
+#define ROGUE_CR_TEXAS_PERF_CLR_5_EN 0x00000040U
+#define ROGUE_CR_TEXAS_PERF_CLR_4_SHIFT 5U
+#define ROGUE_CR_TEXAS_PERF_CLR_4_CLRMSK 0xFFFFFFDFU
+#define ROGUE_CR_TEXAS_PERF_CLR_4_EN 0x00000020U
+#define ROGUE_CR_TEXAS_PERF_CLR_3_SHIFT 4U
+#define ROGUE_CR_TEXAS_PERF_CLR_3_CLRMSK 0xFFFFFFEFU
+#define ROGUE_CR_TEXAS_PERF_CLR_3_EN 0x00000010U
+#define ROGUE_CR_TEXAS_PERF_CLR_2_SHIFT 3U
+#define ROGUE_CR_TEXAS_PERF_CLR_2_CLRMSK 0xFFFFFFF7U
+#define ROGUE_CR_TEXAS_PERF_CLR_2_EN 0x00000008U
+#define ROGUE_CR_TEXAS_PERF_CLR_1_SHIFT 2U
+#define ROGUE_CR_TEXAS_PERF_CLR_1_CLRMSK 0xFFFFFFFBU
+#define ROGUE_CR_TEXAS_PERF_CLR_1_EN 0x00000004U
+#define ROGUE_CR_TEXAS_PERF_CLR_0_SHIFT 1U
+#define ROGUE_CR_TEXAS_PERF_CLR_0_CLRMSK 0xFFFFFFFDU
+#define ROGUE_CR_TEXAS_PERF_CLR_0_EN 0x00000002U
+#define ROGUE_CR_TEXAS_PERF_CTRL_ENABLE_SHIFT 0U
+#define ROGUE_CR_TEXAS_PERF_CTRL_ENABLE_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_TEXAS_PERF_CTRL_ENABLE_EN 0x00000001U
+
+/* Register ROGUE_CR_TEXAS_PERF_SELECT0 */
+#define ROGUE_CR_TEXAS_PERF_SELECT0 0x8298U
+#define ROGUE_CR_TEXAS_PERF_SELECT0_MASKFULL 0x3FFF3FFF803FFFFFULL
+#define ROGUE_CR_TEXAS_PERF_SELECT0_BATCH_MAX_SHIFT 48U
+#define ROGUE_CR_TEXAS_PERF_SELECT0_BATCH_MAX_CLRMSK 0xC000FFFFFFFFFFFFULL
+#define ROGUE_CR_TEXAS_PERF_SELECT0_BATCH_MIN_SHIFT 32U
+#define ROGUE_CR_TEXAS_PERF_SELECT0_BATCH_MIN_CLRMSK 0xFFFFC000FFFFFFFFULL
+#define ROGUE_CR_TEXAS_PERF_SELECT0_MODE_SHIFT 31U
+#define ROGUE_CR_TEXAS_PERF_SELECT0_MODE_CLRMSK 0xFFFFFFFF7FFFFFFFULL
+#define ROGUE_CR_TEXAS_PERF_SELECT0_MODE_EN 0x0000000080000000ULL
+#define ROGUE_CR_TEXAS_PERF_SELECT0_GROUP_SELECT_SHIFT 16U
+#define ROGUE_CR_TEXAS_PERF_SELECT0_GROUP_SELECT_CLRMSK 0xFFFFFFFFFFC0FFFFULL
+#define ROGUE_CR_TEXAS_PERF_SELECT0_BIT_SELECT_SHIFT 0U
+#define ROGUE_CR_TEXAS_PERF_SELECT0_BIT_SELECT_CLRMSK 0xFFFFFFFFFFFF0000ULL
+
+/* Register ROGUE_CR_TEXAS_PERF_COUNTER_0 */
+#define ROGUE_CR_TEXAS_PERF_COUNTER_0 0x82D8U
+#define ROGUE_CR_TEXAS_PERF_COUNTER_0_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_TEXAS_PERF_COUNTER_0_REG_SHIFT 0U
+#define ROGUE_CR_TEXAS_PERF_COUNTER_0_REG_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_JONES_PERF */
+#define ROGUE_CR_JONES_PERF 0x8330U
+#define ROGUE_CR_JONES_PERF_MASKFULL 0x000000000000001FULL
+#define ROGUE_CR_JONES_PERF_CLR_3_SHIFT 4U
+#define ROGUE_CR_JONES_PERF_CLR_3_CLRMSK 0xFFFFFFEFU
+#define ROGUE_CR_JONES_PERF_CLR_3_EN 0x00000010U
+#define ROGUE_CR_JONES_PERF_CLR_2_SHIFT 3U
+#define ROGUE_CR_JONES_PERF_CLR_2_CLRMSK 0xFFFFFFF7U
+#define ROGUE_CR_JONES_PERF_CLR_2_EN 0x00000008U
+#define ROGUE_CR_JONES_PERF_CLR_1_SHIFT 2U
+#define ROGUE_CR_JONES_PERF_CLR_1_CLRMSK 0xFFFFFFFBU
+#define ROGUE_CR_JONES_PERF_CLR_1_EN 0x00000004U
+#define ROGUE_CR_JONES_PERF_CLR_0_SHIFT 1U
+#define ROGUE_CR_JONES_PERF_CLR_0_CLRMSK 0xFFFFFFFDU
+#define ROGUE_CR_JONES_PERF_CLR_0_EN 0x00000002U
+#define ROGUE_CR_JONES_PERF_CTRL_ENABLE_SHIFT 0U
+#define ROGUE_CR_JONES_PERF_CTRL_ENABLE_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_JONES_PERF_CTRL_ENABLE_EN 0x00000001U
+
+/* Register ROGUE_CR_JONES_PERF_SELECT0 */
+#define ROGUE_CR_JONES_PERF_SELECT0 0x8338U
+#define ROGUE_CR_JONES_PERF_SELECT0_MASKFULL 0x3FFF3FFF003FFFFFULL
+#define ROGUE_CR_JONES_PERF_SELECT0_BATCH_MAX_SHIFT 48U
+#define ROGUE_CR_JONES_PERF_SELECT0_BATCH_MAX_CLRMSK 0xC000FFFFFFFFFFFFULL
+#define ROGUE_CR_JONES_PERF_SELECT0_BATCH_MIN_SHIFT 32U
+#define ROGUE_CR_JONES_PERF_SELECT0_BATCH_MIN_CLRMSK 0xFFFFC000FFFFFFFFULL
+#define ROGUE_CR_JONES_PERF_SELECT0_MODE_SHIFT 21U
+#define ROGUE_CR_JONES_PERF_SELECT0_MODE_CLRMSK 0xFFFFFFFFFFDFFFFFULL
+#define ROGUE_CR_JONES_PERF_SELECT0_MODE_EN 0x0000000000200000ULL
+#define ROGUE_CR_JONES_PERF_SELECT0_GROUP_SELECT_SHIFT 16U
+#define ROGUE_CR_JONES_PERF_SELECT0_GROUP_SELECT_CLRMSK 0xFFFFFFFFFFE0FFFFULL
+#define ROGUE_CR_JONES_PERF_SELECT0_BIT_SELECT_SHIFT 0U
+#define ROGUE_CR_JONES_PERF_SELECT0_BIT_SELECT_CLRMSK 0xFFFFFFFFFFFF0000ULL
+
+/* Register ROGUE_CR_JONES_PERF_COUNTER_0 */
+#define ROGUE_CR_JONES_PERF_COUNTER_0 0x8368U
+#define ROGUE_CR_JONES_PERF_COUNTER_0_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_JONES_PERF_COUNTER_0_REG_SHIFT 0U
+#define ROGUE_CR_JONES_PERF_COUNTER_0_REG_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_BLACKPEARL_PERF */
+#define ROGUE_CR_BLACKPEARL_PERF 0x8400U
+#define ROGUE_CR_BLACKPEARL_PERF_MASKFULL 0x000000000000007FULL
+#define ROGUE_CR_BLACKPEARL_PERF_CLR_5_SHIFT 6U
+#define ROGUE_CR_BLACKPEARL_PERF_CLR_5_CLRMSK 0xFFFFFFBFU
+#define ROGUE_CR_BLACKPEARL_PERF_CLR_5_EN 0x00000040U
+#define ROGUE_CR_BLACKPEARL_PERF_CLR_4_SHIFT 5U
+#define ROGUE_CR_BLACKPEARL_PERF_CLR_4_CLRMSK 0xFFFFFFDFU
+#define ROGUE_CR_BLACKPEARL_PERF_CLR_4_EN 0x00000020U
+#define ROGUE_CR_BLACKPEARL_PERF_CLR_3_SHIFT 4U
+#define ROGUE_CR_BLACKPEARL_PERF_CLR_3_CLRMSK 0xFFFFFFEFU
+#define ROGUE_CR_BLACKPEARL_PERF_CLR_3_EN 0x00000010U
+#define ROGUE_CR_BLACKPEARL_PERF_CLR_2_SHIFT 3U
+#define ROGUE_CR_BLACKPEARL_PERF_CLR_2_CLRMSK 0xFFFFFFF7U
+#define ROGUE_CR_BLACKPEARL_PERF_CLR_2_EN 0x00000008U
+#define ROGUE_CR_BLACKPEARL_PERF_CLR_1_SHIFT 2U
+#define ROGUE_CR_BLACKPEARL_PERF_CLR_1_CLRMSK 0xFFFFFFFBU
+#define ROGUE_CR_BLACKPEARL_PERF_CLR_1_EN 0x00000004U
+#define ROGUE_CR_BLACKPEARL_PERF_CLR_0_SHIFT 1U
+#define ROGUE_CR_BLACKPEARL_PERF_CLR_0_CLRMSK 0xFFFFFFFDU
+#define ROGUE_CR_BLACKPEARL_PERF_CLR_0_EN 0x00000002U
+#define ROGUE_CR_BLACKPEARL_PERF_CTRL_ENABLE_SHIFT 0U
+#define ROGUE_CR_BLACKPEARL_PERF_CTRL_ENABLE_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_BLACKPEARL_PERF_CTRL_ENABLE_EN 0x00000001U
+
+/* Register ROGUE_CR_BLACKPEARL_PERF_SELECT0 */
+#define ROGUE_CR_BLACKPEARL_PERF_SELECT0 0x8408U
+#define ROGUE_CR_BLACKPEARL_PERF_SELECT0_MASKFULL 0x3FFF3FFF803FFFFFULL
+#define ROGUE_CR_BLACKPEARL_PERF_SELECT0_BATCH_MAX_SHIFT 48U
+#define ROGUE_CR_BLACKPEARL_PERF_SELECT0_BATCH_MAX_CLRMSK 0xC000FFFFFFFFFFFFULL
+#define ROGUE_CR_BLACKPEARL_PERF_SELECT0_BATCH_MIN_SHIFT 32U
+#define ROGUE_CR_BLACKPEARL_PERF_SELECT0_BATCH_MIN_CLRMSK 0xFFFFC000FFFFFFFFULL
+#define ROGUE_CR_BLACKPEARL_PERF_SELECT0_MODE_SHIFT 31U
+#define ROGUE_CR_BLACKPEARL_PERF_SELECT0_MODE_CLRMSK 0xFFFFFFFF7FFFFFFFULL
+#define ROGUE_CR_BLACKPEARL_PERF_SELECT0_MODE_EN 0x0000000080000000ULL
+#define ROGUE_CR_BLACKPEARL_PERF_SELECT0_GROUP_SELECT_SHIFT 16U
+#define ROGUE_CR_BLACKPEARL_PERF_SELECT0_GROUP_SELECT_CLRMSK 0xFFFFFFFFFFC0FFFFULL
+#define ROGUE_CR_BLACKPEARL_PERF_SELECT0_BIT_SELECT_SHIFT 0U
+#define ROGUE_CR_BLACKPEARL_PERF_SELECT0_BIT_SELECT_CLRMSK 0xFFFFFFFFFFFF0000ULL
+
+/* Register ROGUE_CR_BLACKPEARL_PERF_COUNTER_0 */
+#define ROGUE_CR_BLACKPEARL_PERF_COUNTER_0 0x8448U
+#define ROGUE_CR_BLACKPEARL_PERF_COUNTER_0_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_BLACKPEARL_PERF_COUNTER_0_REG_SHIFT 0U
+#define ROGUE_CR_BLACKPEARL_PERF_COUNTER_0_REG_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_PBE_PERF */
+#define ROGUE_CR_PBE_PERF 0x8478U
+#define ROGUE_CR_PBE_PERF_MASKFULL 0x000000000000001FULL
+#define ROGUE_CR_PBE_PERF_CLR_3_SHIFT 4U
+#define ROGUE_CR_PBE_PERF_CLR_3_CLRMSK 0xFFFFFFEFU
+#define ROGUE_CR_PBE_PERF_CLR_3_EN 0x00000010U
+#define ROGUE_CR_PBE_PERF_CLR_2_SHIFT 3U
+#define ROGUE_CR_PBE_PERF_CLR_2_CLRMSK 0xFFFFFFF7U
+#define ROGUE_CR_PBE_PERF_CLR_2_EN 0x00000008U
+#define ROGUE_CR_PBE_PERF_CLR_1_SHIFT 2U
+#define ROGUE_CR_PBE_PERF_CLR_1_CLRMSK 0xFFFFFFFBU
+#define ROGUE_CR_PBE_PERF_CLR_1_EN 0x00000004U
+#define ROGUE_CR_PBE_PERF_CLR_0_SHIFT 1U
+#define ROGUE_CR_PBE_PERF_CLR_0_CLRMSK 0xFFFFFFFDU
+#define ROGUE_CR_PBE_PERF_CLR_0_EN 0x00000002U
+#define ROGUE_CR_PBE_PERF_CTRL_ENABLE_SHIFT 0U
+#define ROGUE_CR_PBE_PERF_CTRL_ENABLE_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_PBE_PERF_CTRL_ENABLE_EN 0x00000001U
+
+/* Register ROGUE_CR_PBE_PERF_SELECT0 */
+#define ROGUE_CR_PBE_PERF_SELECT0 0x8480U
+#define ROGUE_CR_PBE_PERF_SELECT0_MASKFULL 0x3FFF3FFF003FFFFFULL
+#define ROGUE_CR_PBE_PERF_SELECT0_BATCH_MAX_SHIFT 48U
+#define ROGUE_CR_PBE_PERF_SELECT0_BATCH_MAX_CLRMSK 0xC000FFFFFFFFFFFFULL
+#define ROGUE_CR_PBE_PERF_SELECT0_BATCH_MIN_SHIFT 32U
+#define ROGUE_CR_PBE_PERF_SELECT0_BATCH_MIN_CLRMSK 0xFFFFC000FFFFFFFFULL
+#define ROGUE_CR_PBE_PERF_SELECT0_MODE_SHIFT 21U
+#define ROGUE_CR_PBE_PERF_SELECT0_MODE_CLRMSK 0xFFFFFFFFFFDFFFFFULL
+#define ROGUE_CR_PBE_PERF_SELECT0_MODE_EN 0x0000000000200000ULL
+#define ROGUE_CR_PBE_PERF_SELECT0_GROUP_SELECT_SHIFT 16U
+#define ROGUE_CR_PBE_PERF_SELECT0_GROUP_SELECT_CLRMSK 0xFFFFFFFFFFE0FFFFULL
+#define ROGUE_CR_PBE_PERF_SELECT0_BIT_SELECT_SHIFT 0U
+#define ROGUE_CR_PBE_PERF_SELECT0_BIT_SELECT_CLRMSK 0xFFFFFFFFFFFF0000ULL
+
+/* Register ROGUE_CR_PBE_PERF_COUNTER_0 */
+#define ROGUE_CR_PBE_PERF_COUNTER_0 0x84B0U
+#define ROGUE_CR_PBE_PERF_COUNTER_0_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_PBE_PERF_COUNTER_0_REG_SHIFT 0U
+#define ROGUE_CR_PBE_PERF_COUNTER_0_REG_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_OCP_REVINFO */
+#define ROGUE_CR_OCP_REVINFO 0x9000U
+#define ROGUE_CR_OCP_REVINFO_MASKFULL 0x00000007FFFFFFFFULL
+#define ROGUE_CR_OCP_REVINFO_HWINFO_SYSBUS_SHIFT 33U
+#define ROGUE_CR_OCP_REVINFO_HWINFO_SYSBUS_CLRMSK 0xFFFFFFF9FFFFFFFFULL
+#define ROGUE_CR_OCP_REVINFO_HWINFO_MEMBUS_SHIFT 32U
+#define ROGUE_CR_OCP_REVINFO_HWINFO_MEMBUS_CLRMSK 0xFFFFFFFEFFFFFFFFULL
+#define ROGUE_CR_OCP_REVINFO_HWINFO_MEMBUS_EN 0x0000000100000000ULL
+#define ROGUE_CR_OCP_REVINFO_REVISION_SHIFT 0U
+#define ROGUE_CR_OCP_REVINFO_REVISION_CLRMSK 0xFFFFFFFF00000000ULL
+
+/* Register ROGUE_CR_OCP_SYSCONFIG */
+#define ROGUE_CR_OCP_SYSCONFIG 0x9010U
+#define ROGUE_CR_OCP_SYSCONFIG_MASKFULL 0x0000000000000FFFULL
+#define ROGUE_CR_OCP_SYSCONFIG_DUST2_STANDBY_MODE_SHIFT 10U
+#define ROGUE_CR_OCP_SYSCONFIG_DUST2_STANDBY_MODE_CLRMSK 0xFFFFF3FFU
+#define ROGUE_CR_OCP_SYSCONFIG_DUST1_STANDBY_MODE_SHIFT 8U
+#define ROGUE_CR_OCP_SYSCONFIG_DUST1_STANDBY_MODE_CLRMSK 0xFFFFFCFFU
+#define ROGUE_CR_OCP_SYSCONFIG_DUST0_STANDBY_MODE_SHIFT 6U
+#define ROGUE_CR_OCP_SYSCONFIG_DUST0_STANDBY_MODE_CLRMSK 0xFFFFFF3FU
+#define ROGUE_CR_OCP_SYSCONFIG_RASCAL_STANDBYMODE_SHIFT 4U
+#define ROGUE_CR_OCP_SYSCONFIG_RASCAL_STANDBYMODE_CLRMSK 0xFFFFFFCFU
+#define ROGUE_CR_OCP_SYSCONFIG_STANDBY_MODE_SHIFT 2U
+#define ROGUE_CR_OCP_SYSCONFIG_STANDBY_MODE_CLRMSK 0xFFFFFFF3U
+#define ROGUE_CR_OCP_SYSCONFIG_IDLE_MODE_SHIFT 0U
+#define ROGUE_CR_OCP_SYSCONFIG_IDLE_MODE_CLRMSK 0xFFFFFFFCU
+
+/* Register ROGUE_CR_OCP_IRQSTATUS_RAW_0 */
+#define ROGUE_CR_OCP_IRQSTATUS_RAW_0 0x9020U
+#define ROGUE_CR_OCP_IRQSTATUS_RAW_0_MASKFULL 0x0000000000000001ULL
+#define ROGUE_CR_OCP_IRQSTATUS_RAW_0_INIT_MINTERRUPT_RAW_SHIFT 0U
+#define ROGUE_CR_OCP_IRQSTATUS_RAW_0_INIT_MINTERRUPT_RAW_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_OCP_IRQSTATUS_RAW_0_INIT_MINTERRUPT_RAW_EN 0x00000001U
+
+/* Register ROGUE_CR_OCP_IRQSTATUS_RAW_1 */
+#define ROGUE_CR_OCP_IRQSTATUS_RAW_1 0x9028U
+#define ROGUE_CR_OCP_IRQSTATUS_RAW_1_MASKFULL 0x0000000000000001ULL
+#define ROGUE_CR_OCP_IRQSTATUS_RAW_1_TARGET_SINTERRUPT_RAW_SHIFT 0U
+#define ROGUE_CR_OCP_IRQSTATUS_RAW_1_TARGET_SINTERRUPT_RAW_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_OCP_IRQSTATUS_RAW_1_TARGET_SINTERRUPT_RAW_EN 0x00000001U
+
+/* Register ROGUE_CR_OCP_IRQSTATUS_RAW_2 */
+#define ROGUE_CR_OCP_IRQSTATUS_RAW_2 0x9030U
+#define ROGUE_CR_OCP_IRQSTATUS_RAW_2_MASKFULL 0x0000000000000001ULL
+#define ROGUE_CR_OCP_IRQSTATUS_RAW_2_RGX_IRQ_RAW_SHIFT 0U
+#define ROGUE_CR_OCP_IRQSTATUS_RAW_2_RGX_IRQ_RAW_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_OCP_IRQSTATUS_RAW_2_RGX_IRQ_RAW_EN 0x00000001U
+
+/* Register ROGUE_CR_OCP_IRQSTATUS_0 */
+#define ROGUE_CR_OCP_IRQSTATUS_0 0x9038U
+#define ROGUE_CR_OCP_IRQSTATUS_0_MASKFULL 0x0000000000000001ULL
+#define ROGUE_CR_OCP_IRQSTATUS_0_INIT_MINTERRUPT_STATUS_SHIFT 0U
+#define ROGUE_CR_OCP_IRQSTATUS_0_INIT_MINTERRUPT_STATUS_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_OCP_IRQSTATUS_0_INIT_MINTERRUPT_STATUS_EN 0x00000001U
+
+/* Register ROGUE_CR_OCP_IRQSTATUS_1 */
+#define ROGUE_CR_OCP_IRQSTATUS_1 0x9040U
+#define ROGUE_CR_OCP_IRQSTATUS_1_MASKFULL 0x0000000000000001ULL
+#define ROGUE_CR_OCP_IRQSTATUS_1_TARGET_SINTERRUPT_STATUS_SHIFT 0U
+#define ROGUE_CR_OCP_IRQSTATUS_1_TARGET_SINTERRUPT_STATUS_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_OCP_IRQSTATUS_1_TARGET_SINTERRUPT_STATUS_EN 0x00000001U
+
+/* Register ROGUE_CR_OCP_IRQSTATUS_2 */
+#define ROGUE_CR_OCP_IRQSTATUS_2 0x9048U
+#define ROGUE_CR_OCP_IRQSTATUS_2_MASKFULL 0x0000000000000001ULL
+#define ROGUE_CR_OCP_IRQSTATUS_2_RGX_IRQ_STATUS_SHIFT 0U
+#define ROGUE_CR_OCP_IRQSTATUS_2_RGX_IRQ_STATUS_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_OCP_IRQSTATUS_2_RGX_IRQ_STATUS_EN 0x00000001U
+
+/* Register ROGUE_CR_OCP_IRQENABLE_SET_0 */
+#define ROGUE_CR_OCP_IRQENABLE_SET_0 0x9050U
+#define ROGUE_CR_OCP_IRQENABLE_SET_0_MASKFULL 0x0000000000000001ULL
+#define ROGUE_CR_OCP_IRQENABLE_SET_0_INIT_MINTERRUPT_ENABLE_SHIFT 0U
+#define ROGUE_CR_OCP_IRQENABLE_SET_0_INIT_MINTERRUPT_ENABLE_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_OCP_IRQENABLE_SET_0_INIT_MINTERRUPT_ENABLE_EN 0x00000001U
+
+/* Register ROGUE_CR_OCP_IRQENABLE_SET_1 */
+#define ROGUE_CR_OCP_IRQENABLE_SET_1 0x9058U
+#define ROGUE_CR_OCP_IRQENABLE_SET_1_MASKFULL 0x0000000000000001ULL
+#define ROGUE_CR_OCP_IRQENABLE_SET_1_TARGET_SINTERRUPT_ENABLE_SHIFT 0U
+#define ROGUE_CR_OCP_IRQENABLE_SET_1_TARGET_SINTERRUPT_ENABLE_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_OCP_IRQENABLE_SET_1_TARGET_SINTERRUPT_ENABLE_EN 0x00000001U
+
+/* Register ROGUE_CR_OCP_IRQENABLE_SET_2 */
+#define ROGUE_CR_OCP_IRQENABLE_SET_2 0x9060U
+#define ROGUE_CR_OCP_IRQENABLE_SET_2_MASKFULL 0x0000000000000001ULL
+#define ROGUE_CR_OCP_IRQENABLE_SET_2_RGX_IRQ_ENABLE_SHIFT 0U
+#define ROGUE_CR_OCP_IRQENABLE_SET_2_RGX_IRQ_ENABLE_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_OCP_IRQENABLE_SET_2_RGX_IRQ_ENABLE_EN 0x00000001U
+
+/* Register ROGUE_CR_OCP_IRQENABLE_CLR_0 */
+#define ROGUE_CR_OCP_IRQENABLE_CLR_0 0x9068U
+#define ROGUE_CR_OCP_IRQENABLE_CLR_0_MASKFULL 0x0000000000000001ULL
+#define ROGUE_CR_OCP_IRQENABLE_CLR_0_INIT_MINTERRUPT_DISABLE_SHIFT 0U
+#define ROGUE_CR_OCP_IRQENABLE_CLR_0_INIT_MINTERRUPT_DISABLE_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_OCP_IRQENABLE_CLR_0_INIT_MINTERRUPT_DISABLE_EN 0x00000001U
+
+/* Register ROGUE_CR_OCP_IRQENABLE_CLR_1 */
+#define ROGUE_CR_OCP_IRQENABLE_CLR_1 0x9070U
+#define ROGUE_CR_OCP_IRQENABLE_CLR_1_MASKFULL 0x0000000000000001ULL
+#define ROGUE_CR_OCP_IRQENABLE_CLR_1_TARGET_SINTERRUPT_DISABLE_SHIFT 0U
+#define ROGUE_CR_OCP_IRQENABLE_CLR_1_TARGET_SINTERRUPT_DISABLE_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_OCP_IRQENABLE_CLR_1_TARGET_SINTERRUPT_DISABLE_EN 0x00000001U
+
+/* Register ROGUE_CR_OCP_IRQENABLE_CLR_2 */
+#define ROGUE_CR_OCP_IRQENABLE_CLR_2 0x9078U
+#define ROGUE_CR_OCP_IRQENABLE_CLR_2_MASKFULL 0x0000000000000001ULL
+#define ROGUE_CR_OCP_IRQENABLE_CLR_2_RGX_IRQ_DISABLE_SHIFT 0U
+#define ROGUE_CR_OCP_IRQENABLE_CLR_2_RGX_IRQ_DISABLE_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_OCP_IRQENABLE_CLR_2_RGX_IRQ_DISABLE_EN 0x00000001U
+
+/* Register ROGUE_CR_OCP_IRQ_EVENT */
+#define ROGUE_CR_OCP_IRQ_EVENT 0x9080U
+#define ROGUE_CR_OCP_IRQ_EVENT_MASKFULL 0x00000000000FFFFFULL
+#define ROGUE_CR_OCP_IRQ_EVENT_TARGETH_RCVD_UNEXPECTED_RDATA_SHIFT 19U
+#define ROGUE_CR_OCP_IRQ_EVENT_TARGETH_RCVD_UNEXPECTED_RDATA_CLRMSK 0xFFFFFFFFFFF7FFFFULL
+#define ROGUE_CR_OCP_IRQ_EVENT_TARGETH_RCVD_UNEXPECTED_RDATA_EN 0x0000000000080000ULL
+#define ROGUE_CR_OCP_IRQ_EVENT_TARGETH_RCVD_UNSUPPORTED_MCMD_SHIFT 18U
+#define ROGUE_CR_OCP_IRQ_EVENT_TARGETH_RCVD_UNSUPPORTED_MCMD_CLRMSK 0xFFFFFFFFFFFBFFFFULL
+#define ROGUE_CR_OCP_IRQ_EVENT_TARGETH_RCVD_UNSUPPORTED_MCMD_EN 0x0000000000040000ULL
+#define ROGUE_CR_OCP_IRQ_EVENT_TARGETS_RCVD_UNEXPECTED_RDATA_SHIFT 17U
+#define ROGUE_CR_OCP_IRQ_EVENT_TARGETS_RCVD_UNEXPECTED_RDATA_CLRMSK 0xFFFFFFFFFFFDFFFFULL
+#define ROGUE_CR_OCP_IRQ_EVENT_TARGETS_RCVD_UNEXPECTED_RDATA_EN 0x0000000000020000ULL
+#define ROGUE_CR_OCP_IRQ_EVENT_TARGETS_RCVD_UNSUPPORTED_MCMD_SHIFT 16U
+#define ROGUE_CR_OCP_IRQ_EVENT_TARGETS_RCVD_UNSUPPORTED_MCMD_CLRMSK 0xFFFFFFFFFFFEFFFFULL
+#define ROGUE_CR_OCP_IRQ_EVENT_TARGETS_RCVD_UNSUPPORTED_MCMD_EN 0x0000000000010000ULL
+#define ROGUE_CR_OCP_IRQ_EVENT_INIT3_IMG_PAGE_BOUNDARY_CROSS_SHIFT 15U
+#define ROGUE_CR_OCP_IRQ_EVENT_INIT3_IMG_PAGE_BOUNDARY_CROSS_CLRMSK 0xFFFFFFFFFFFF7FFFULL
+#define ROGUE_CR_OCP_IRQ_EVENT_INIT3_IMG_PAGE_BOUNDARY_CROSS_EN 0x0000000000008000ULL
+#define ROGUE_CR_OCP_IRQ_EVENT_INIT3_RCVD_RESP_ERR_FAIL_SHIFT 14U
+#define ROGUE_CR_OCP_IRQ_EVENT_INIT3_RCVD_RESP_ERR_FAIL_CLRMSK 0xFFFFFFFFFFFFBFFFULL
+#define ROGUE_CR_OCP_IRQ_EVENT_INIT3_RCVD_RESP_ERR_FAIL_EN 0x0000000000004000ULL
+#define ROGUE_CR_OCP_IRQ_EVENT_INIT3_RCVD_UNUSED_TAGID_SHIFT 13U
+#define ROGUE_CR_OCP_IRQ_EVENT_INIT3_RCVD_UNUSED_TAGID_CLRMSK 0xFFFFFFFFFFFFDFFFULL
+#define ROGUE_CR_OCP_IRQ_EVENT_INIT3_RCVD_UNUSED_TAGID_EN 0x0000000000002000ULL
+#define ROGUE_CR_OCP_IRQ_EVENT_INIT3_RDATA_FIFO_OVERFILL_SHIFT 12U
+#define ROGUE_CR_OCP_IRQ_EVENT_INIT3_RDATA_FIFO_OVERFILL_CLRMSK 0xFFFFFFFFFFFFEFFFULL
+#define ROGUE_CR_OCP_IRQ_EVENT_INIT3_RDATA_FIFO_OVERFILL_EN 0x0000000000001000ULL
+#define ROGUE_CR_OCP_IRQ_EVENT_INIT2_IMG_PAGE_BOUNDARY_CROSS_SHIFT 11U
+#define ROGUE_CR_OCP_IRQ_EVENT_INIT2_IMG_PAGE_BOUNDARY_CROSS_CLRMSK 0xFFFFFFFFFFFFF7FFULL
+#define ROGUE_CR_OCP_IRQ_EVENT_INIT2_IMG_PAGE_BOUNDARY_CROSS_EN 0x0000000000000800ULL
+#define ROGUE_CR_OCP_IRQ_EVENT_INIT2_RCVD_RESP_ERR_FAIL_SHIFT 10U
+#define ROGUE_CR_OCP_IRQ_EVENT_INIT2_RCVD_RESP_ERR_FAIL_CLRMSK 0xFFFFFFFFFFFFFBFFULL
+#define ROGUE_CR_OCP_IRQ_EVENT_INIT2_RCVD_RESP_ERR_FAIL_EN 0x0000000000000400ULL
+#define ROGUE_CR_OCP_IRQ_EVENT_INIT2_RCVD_UNUSED_TAGID_SHIFT 9U
+#define ROGUE_CR_OCP_IRQ_EVENT_INIT2_RCVD_UNUSED_TAGID_CLRMSK 0xFFFFFFFFFFFFFDFFULL
+#define ROGUE_CR_OCP_IRQ_EVENT_INIT2_RCVD_UNUSED_TAGID_EN 0x0000000000000200ULL
+#define ROGUE_CR_OCP_IRQ_EVENT_INIT2_RDATA_FIFO_OVERFILL_SHIFT 8U
+#define ROGUE_CR_OCP_IRQ_EVENT_INIT2_RDATA_FIFO_OVERFILL_CLRMSK 0xFFFFFFFFFFFFFEFFULL
+#define ROGUE_CR_OCP_IRQ_EVENT_INIT2_RDATA_FIFO_OVERFILL_EN 0x0000000000000100ULL
+#define ROGUE_CR_OCP_IRQ_EVENT_INIT1_IMG_PAGE_BOUNDARY_CROSS_SHIFT 7U
+#define ROGUE_CR_OCP_IRQ_EVENT_INIT1_IMG_PAGE_BOUNDARY_CROSS_CLRMSK 0xFFFFFFFFFFFFFF7FULL
+#define ROGUE_CR_OCP_IRQ_EVENT_INIT1_IMG_PAGE_BOUNDARY_CROSS_EN 0x0000000000000080ULL
+#define ROGUE_CR_OCP_IRQ_EVENT_INIT1_RCVD_RESP_ERR_FAIL_SHIFT 6U
+#define ROGUE_CR_OCP_IRQ_EVENT_INIT1_RCVD_RESP_ERR_FAIL_CLRMSK 0xFFFFFFFFFFFFFFBFULL
+#define ROGUE_CR_OCP_IRQ_EVENT_INIT1_RCVD_RESP_ERR_FAIL_EN 0x0000000000000040ULL
+#define ROGUE_CR_OCP_IRQ_EVENT_INIT1_RCVD_UNUSED_TAGID_SHIFT 5U
+#define ROGUE_CR_OCP_IRQ_EVENT_INIT1_RCVD_UNUSED_TAGID_CLRMSK 0xFFFFFFFFFFFFFFDFULL
+#define ROGUE_CR_OCP_IRQ_EVENT_INIT1_RCVD_UNUSED_TAGID_EN 0x0000000000000020ULL
+#define ROGUE_CR_OCP_IRQ_EVENT_INIT1_RDATA_FIFO_OVERFILL_SHIFT 4U
+#define ROGUE_CR_OCP_IRQ_EVENT_INIT1_RDATA_FIFO_OVERFILL_CLRMSK 0xFFFFFFFFFFFFFFEFULL
+#define ROGUE_CR_OCP_IRQ_EVENT_INIT1_RDATA_FIFO_OVERFILL_EN 0x0000000000000010ULL
+#define ROGUE_CR_OCP_IRQ_EVENT_INIT0_IMG_PAGE_BOUNDARY_CROSS_SHIFT 3U
+#define ROGUE_CR_OCP_IRQ_EVENT_INIT0_IMG_PAGE_BOUNDARY_CROSS_CLRMSK 0xFFFFFFFFFFFFFFF7ULL
+#define ROGUE_CR_OCP_IRQ_EVENT_INIT0_IMG_PAGE_BOUNDARY_CROSS_EN 0x0000000000000008ULL
+#define ROGUE_CR_OCP_IRQ_EVENT_INIT0_RCVD_RESP_ERR_FAIL_SHIFT 2U
+#define ROGUE_CR_OCP_IRQ_EVENT_INIT0_RCVD_RESP_ERR_FAIL_CLRMSK 0xFFFFFFFFFFFFFFFBULL
+#define ROGUE_CR_OCP_IRQ_EVENT_INIT0_RCVD_RESP_ERR_FAIL_EN 0x0000000000000004ULL
+#define ROGUE_CR_OCP_IRQ_EVENT_INIT0_RCVD_UNUSED_TAGID_SHIFT 1U
+#define ROGUE_CR_OCP_IRQ_EVENT_INIT0_RCVD_UNUSED_TAGID_CLRMSK 0xFFFFFFFFFFFFFFFDULL
+#define ROGUE_CR_OCP_IRQ_EVENT_INIT0_RCVD_UNUSED_TAGID_EN 0x0000000000000002ULL
+#define ROGUE_CR_OCP_IRQ_EVENT_INIT0_RDATA_FIFO_OVERFILL_SHIFT 0U
+#define ROGUE_CR_OCP_IRQ_EVENT_INIT0_RDATA_FIFO_OVERFILL_CLRMSK 0xFFFFFFFFFFFFFFFEULL
+#define ROGUE_CR_OCP_IRQ_EVENT_INIT0_RDATA_FIFO_OVERFILL_EN 0x0000000000000001ULL
+
+/* Register ROGUE_CR_OCP_DEBUG_CONFIG */
+#define ROGUE_CR_OCP_DEBUG_CONFIG 0x9088U
+#define ROGUE_CR_OCP_DEBUG_CONFIG_MASKFULL 0x0000000000000001ULL
+#define ROGUE_CR_OCP_DEBUG_CONFIG_REG_SHIFT 0U
+#define ROGUE_CR_OCP_DEBUG_CONFIG_REG_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_OCP_DEBUG_CONFIG_REG_EN 0x00000001U
+
+/* Register ROGUE_CR_OCP_DEBUG_STATUS */
+#define ROGUE_CR_OCP_DEBUG_STATUS 0x9090U
+#define ROGUE_CR_OCP_DEBUG_STATUS_MASKFULL 0x001F1F77FFFFFFFFULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_TARGETH_SDISCACK_SHIFT 51U
+#define ROGUE_CR_OCP_DEBUG_STATUS_TARGETH_SDISCACK_CLRMSK 0xFFE7FFFFFFFFFFFFULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_TARGETH_SCONNECT_SHIFT 50U
+#define ROGUE_CR_OCP_DEBUG_STATUS_TARGETH_SCONNECT_CLRMSK 0xFFFBFFFFFFFFFFFFULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_TARGETH_SCONNECT_EN 0x0004000000000000ULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_TARGETH_MCONNECT_SHIFT 48U
+#define ROGUE_CR_OCP_DEBUG_STATUS_TARGETH_MCONNECT_CLRMSK 0xFFFCFFFFFFFFFFFFULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_TARGETS_SDISCACK_SHIFT 43U
+#define ROGUE_CR_OCP_DEBUG_STATUS_TARGETS_SDISCACK_CLRMSK 0xFFFFE7FFFFFFFFFFULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_TARGETS_SCONNECT_SHIFT 42U
+#define ROGUE_CR_OCP_DEBUG_STATUS_TARGETS_SCONNECT_CLRMSK 0xFFFFFBFFFFFFFFFFULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_TARGETS_SCONNECT_EN 0x0000040000000000ULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_TARGETS_MCONNECT_SHIFT 40U
+#define ROGUE_CR_OCP_DEBUG_STATUS_TARGETS_MCONNECT_CLRMSK 0xFFFFFCFFFFFFFFFFULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_TARGETH_BUSY_SHIFT 38U
+#define ROGUE_CR_OCP_DEBUG_STATUS_TARGETH_BUSY_CLRMSK 0xFFFFFFBFFFFFFFFFULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_TARGETH_BUSY_EN 0x0000004000000000ULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_TARGETH_CMD_FIFO_FULL_SHIFT 37U
+#define ROGUE_CR_OCP_DEBUG_STATUS_TARGETH_CMD_FIFO_FULL_CLRMSK 0xFFFFFFDFFFFFFFFFULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_TARGETH_CMD_FIFO_FULL_EN 0x0000002000000000ULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_TARGETH_SRESP_ERROR_SHIFT 36U
+#define ROGUE_CR_OCP_DEBUG_STATUS_TARGETH_SRESP_ERROR_CLRMSK 0xFFFFFFEFFFFFFFFFULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_TARGETH_SRESP_ERROR_EN 0x0000001000000000ULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_TARGETS_BUSY_SHIFT 34U
+#define ROGUE_CR_OCP_DEBUG_STATUS_TARGETS_BUSY_CLRMSK 0xFFFFFFFBFFFFFFFFULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_TARGETS_BUSY_EN 0x0000000400000000ULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_TARGETS_CMD_FIFO_FULL_SHIFT 33U
+#define ROGUE_CR_OCP_DEBUG_STATUS_TARGETS_CMD_FIFO_FULL_CLRMSK 0xFFFFFFFDFFFFFFFFULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_TARGETS_CMD_FIFO_FULL_EN 0x0000000200000000ULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_TARGETS_SRESP_ERROR_SHIFT 32U
+#define ROGUE_CR_OCP_DEBUG_STATUS_TARGETS_SRESP_ERROR_CLRMSK 0xFFFFFFFEFFFFFFFFULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_TARGETS_SRESP_ERROR_EN 0x0000000100000000ULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT3_RESERVED_SHIFT 31U
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT3_RESERVED_CLRMSK 0xFFFFFFFF7FFFFFFFULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT3_RESERVED_EN 0x0000000080000000ULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT3_SWAIT_SHIFT 30U
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT3_SWAIT_CLRMSK 0xFFFFFFFFBFFFFFFFULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT3_SWAIT_EN 0x0000000040000000ULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT3_MDISCREQ_SHIFT 29U
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT3_MDISCREQ_CLRMSK 0xFFFFFFFFDFFFFFFFULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT3_MDISCREQ_EN 0x0000000020000000ULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT3_MDISCACK_SHIFT 27U
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT3_MDISCACK_CLRMSK 0xFFFFFFFFE7FFFFFFULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT3_SCONNECT_SHIFT 26U
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT3_SCONNECT_CLRMSK 0xFFFFFFFFFBFFFFFFULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT3_SCONNECT_EN 0x0000000004000000ULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT3_MCONNECT_SHIFT 24U
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT3_MCONNECT_CLRMSK 0xFFFFFFFFFCFFFFFFULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT2_RESERVED_SHIFT 23U
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT2_RESERVED_CLRMSK 0xFFFFFFFFFF7FFFFFULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT2_RESERVED_EN 0x0000000000800000ULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT2_SWAIT_SHIFT 22U
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT2_SWAIT_CLRMSK 0xFFFFFFFFFFBFFFFFULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT2_SWAIT_EN 0x0000000000400000ULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT2_MDISCREQ_SHIFT 21U
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT2_MDISCREQ_CLRMSK 0xFFFFFFFFFFDFFFFFULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT2_MDISCREQ_EN 0x0000000000200000ULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT2_MDISCACK_SHIFT 19U
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT2_MDISCACK_CLRMSK 0xFFFFFFFFFFE7FFFFULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT2_SCONNECT_SHIFT 18U
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT2_SCONNECT_CLRMSK 0xFFFFFFFFFFFBFFFFULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT2_SCONNECT_EN 0x0000000000040000ULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT2_MCONNECT_SHIFT 16U
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT2_MCONNECT_CLRMSK 0xFFFFFFFFFFFCFFFFULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT1_RESERVED_SHIFT 15U
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT1_RESERVED_CLRMSK 0xFFFFFFFFFFFF7FFFULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT1_RESERVED_EN 0x0000000000008000ULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT1_SWAIT_SHIFT 14U
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT1_SWAIT_CLRMSK 0xFFFFFFFFFFFFBFFFULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT1_SWAIT_EN 0x0000000000004000ULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT1_MDISCREQ_SHIFT 13U
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT1_MDISCREQ_CLRMSK 0xFFFFFFFFFFFFDFFFULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT1_MDISCREQ_EN 0x0000000000002000ULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT1_MDISCACK_SHIFT 11U
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT1_MDISCACK_CLRMSK 0xFFFFFFFFFFFFE7FFULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT1_SCONNECT_SHIFT 10U
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT1_SCONNECT_CLRMSK 0xFFFFFFFFFFFFFBFFULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT1_SCONNECT_EN 0x0000000000000400ULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT1_MCONNECT_SHIFT 8U
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT1_MCONNECT_CLRMSK 0xFFFFFFFFFFFFFCFFULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT0_RESERVED_SHIFT 7U
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT0_RESERVED_CLRMSK 0xFFFFFFFFFFFFFF7FULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT0_RESERVED_EN 0x0000000000000080ULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT0_SWAIT_SHIFT 6U
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT0_SWAIT_CLRMSK 0xFFFFFFFFFFFFFFBFULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT0_SWAIT_EN 0x0000000000000040ULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT0_MDISCREQ_SHIFT 5U
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT0_MDISCREQ_CLRMSK 0xFFFFFFFFFFFFFFDFULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT0_MDISCREQ_EN 0x0000000000000020ULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT0_MDISCACK_SHIFT 3U
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT0_MDISCACK_CLRMSK 0xFFFFFFFFFFFFFFE7ULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT0_SCONNECT_SHIFT 2U
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT0_SCONNECT_CLRMSK 0xFFFFFFFFFFFFFFFBULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT0_SCONNECT_EN 0x0000000000000004ULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT0_MCONNECT_SHIFT 0U
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT0_MCONNECT_CLRMSK 0xFFFFFFFFFFFFFFFCULL
+
+#define ROGUE_CR_BIF_TRUST_DM_TYPE_PM_ALIST_SHIFT 6U
+#define ROGUE_CR_BIF_TRUST_DM_TYPE_PM_ALIST_CLRMSK 0xFFFFFFBFU
+#define ROGUE_CR_BIF_TRUST_DM_TYPE_PM_ALIST_EN 0x00000040U
+#define ROGUE_CR_BIF_TRUST_DM_TYPE_HOST_SHIFT 5U
+#define ROGUE_CR_BIF_TRUST_DM_TYPE_HOST_CLRMSK 0xFFFFFFDFU
+#define ROGUE_CR_BIF_TRUST_DM_TYPE_HOST_EN 0x00000020U
+#define ROGUE_CR_BIF_TRUST_DM_TYPE_META_SHIFT 4U
+#define ROGUE_CR_BIF_TRUST_DM_TYPE_META_CLRMSK 0xFFFFFFEFU
+#define ROGUE_CR_BIF_TRUST_DM_TYPE_META_EN 0x00000010U
+#define ROGUE_CR_BIF_TRUST_DM_TYPE_PB_ZLS_SHIFT 3U
+#define ROGUE_CR_BIF_TRUST_DM_TYPE_PB_ZLS_CLRMSK 0xFFFFFFF7U
+#define ROGUE_CR_BIF_TRUST_DM_TYPE_PB_ZLS_EN 0x00000008U
+#define ROGUE_CR_BIF_TRUST_DM_TYPE_PB_TE_SHIFT 2U
+#define ROGUE_CR_BIF_TRUST_DM_TYPE_PB_TE_CLRMSK 0xFFFFFFFBU
+#define ROGUE_CR_BIF_TRUST_DM_TYPE_PB_TE_EN 0x00000004U
+#define ROGUE_CR_BIF_TRUST_DM_TYPE_PB_VCE_SHIFT 1U
+#define ROGUE_CR_BIF_TRUST_DM_TYPE_PB_VCE_CLRMSK 0xFFFFFFFDU
+#define ROGUE_CR_BIF_TRUST_DM_TYPE_PB_VCE_EN 0x00000002U
+#define ROGUE_CR_BIF_TRUST_DM_TYPE_TLA_SHIFT 0U
+#define ROGUE_CR_BIF_TRUST_DM_TYPE_TLA_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_BIF_TRUST_DM_TYPE_TLA_EN 0x00000001U
+
+#define ROGUE_CR_BIF_TRUST_DM_MASK 0x0000007FU
+
+/* Register ROGUE_CR_BIF_TRUST */
+#define ROGUE_CR_BIF_TRUST 0xA000U
+#define ROGUE_CR_BIF_TRUST_MASKFULL 0x00000000001FFFFFULL
+#define ROGUE_CR_BIF_TRUST_OTHER_RAY_VERTEX_DM_TRUSTED_SHIFT 20U
+#define ROGUE_CR_BIF_TRUST_OTHER_RAY_VERTEX_DM_TRUSTED_CLRMSK 0xFFEFFFFFU
+#define ROGUE_CR_BIF_TRUST_OTHER_RAY_VERTEX_DM_TRUSTED_EN 0x00100000U
+#define ROGUE_CR_BIF_TRUST_MCU_RAY_VERTEX_DM_TRUSTED_SHIFT 19U
+#define ROGUE_CR_BIF_TRUST_MCU_RAY_VERTEX_DM_TRUSTED_CLRMSK 0xFFF7FFFFU
+#define ROGUE_CR_BIF_TRUST_MCU_RAY_VERTEX_DM_TRUSTED_EN 0x00080000U
+#define ROGUE_CR_BIF_TRUST_OTHER_RAY_DM_TRUSTED_SHIFT 18U
+#define ROGUE_CR_BIF_TRUST_OTHER_RAY_DM_TRUSTED_CLRMSK 0xFFFBFFFFU
+#define ROGUE_CR_BIF_TRUST_OTHER_RAY_DM_TRUSTED_EN 0x00040000U
+#define ROGUE_CR_BIF_TRUST_MCU_RAY_DM_TRUSTED_SHIFT 17U
+#define ROGUE_CR_BIF_TRUST_MCU_RAY_DM_TRUSTED_CLRMSK 0xFFFDFFFFU
+#define ROGUE_CR_BIF_TRUST_MCU_RAY_DM_TRUSTED_EN 0x00020000U
+#define ROGUE_CR_BIF_TRUST_ENABLE_SHIFT 16U
+#define ROGUE_CR_BIF_TRUST_ENABLE_CLRMSK 0xFFFEFFFFU
+#define ROGUE_CR_BIF_TRUST_ENABLE_EN 0x00010000U
+#define ROGUE_CR_BIF_TRUST_DM_TRUSTED_SHIFT 9U
+#define ROGUE_CR_BIF_TRUST_DM_TRUSTED_CLRMSK 0xFFFF01FFU
+#define ROGUE_CR_BIF_TRUST_OTHER_COMPUTE_DM_TRUSTED_SHIFT 8U
+#define ROGUE_CR_BIF_TRUST_OTHER_COMPUTE_DM_TRUSTED_CLRMSK 0xFFFFFEFFU
+#define ROGUE_CR_BIF_TRUST_OTHER_COMPUTE_DM_TRUSTED_EN 0x00000100U
+#define ROGUE_CR_BIF_TRUST_MCU_COMPUTE_DM_TRUSTED_SHIFT 7U
+#define ROGUE_CR_BIF_TRUST_MCU_COMPUTE_DM_TRUSTED_CLRMSK 0xFFFFFF7FU
+#define ROGUE_CR_BIF_TRUST_MCU_COMPUTE_DM_TRUSTED_EN 0x00000080U
+#define ROGUE_CR_BIF_TRUST_PBE_COMPUTE_DM_TRUSTED_SHIFT 6U
+#define ROGUE_CR_BIF_TRUST_PBE_COMPUTE_DM_TRUSTED_CLRMSK 0xFFFFFFBFU
+#define ROGUE_CR_BIF_TRUST_PBE_COMPUTE_DM_TRUSTED_EN 0x00000040U
+#define ROGUE_CR_BIF_TRUST_OTHER_PIXEL_DM_TRUSTED_SHIFT 5U
+#define ROGUE_CR_BIF_TRUST_OTHER_PIXEL_DM_TRUSTED_CLRMSK 0xFFFFFFDFU
+#define ROGUE_CR_BIF_TRUST_OTHER_PIXEL_DM_TRUSTED_EN 0x00000020U
+#define ROGUE_CR_BIF_TRUST_MCU_PIXEL_DM_TRUSTED_SHIFT 4U
+#define ROGUE_CR_BIF_TRUST_MCU_PIXEL_DM_TRUSTED_CLRMSK 0xFFFFFFEFU
+#define ROGUE_CR_BIF_TRUST_MCU_PIXEL_DM_TRUSTED_EN 0x00000010U
+#define ROGUE_CR_BIF_TRUST_PBE_PIXEL_DM_TRUSTED_SHIFT 3U
+#define ROGUE_CR_BIF_TRUST_PBE_PIXEL_DM_TRUSTED_CLRMSK 0xFFFFFFF7U
+#define ROGUE_CR_BIF_TRUST_PBE_PIXEL_DM_TRUSTED_EN 0x00000008U
+#define ROGUE_CR_BIF_TRUST_OTHER_VERTEX_DM_TRUSTED_SHIFT 2U
+#define ROGUE_CR_BIF_TRUST_OTHER_VERTEX_DM_TRUSTED_CLRMSK 0xFFFFFFFBU
+#define ROGUE_CR_BIF_TRUST_OTHER_VERTEX_DM_TRUSTED_EN 0x00000004U
+#define ROGUE_CR_BIF_TRUST_MCU_VERTEX_DM_TRUSTED_SHIFT 1U
+#define ROGUE_CR_BIF_TRUST_MCU_VERTEX_DM_TRUSTED_CLRMSK 0xFFFFFFFDU
+#define ROGUE_CR_BIF_TRUST_MCU_VERTEX_DM_TRUSTED_EN 0x00000002U
+#define ROGUE_CR_BIF_TRUST_PBE_VERTEX_DM_TRUSTED_SHIFT 0U
+#define ROGUE_CR_BIF_TRUST_PBE_VERTEX_DM_TRUSTED_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_BIF_TRUST_PBE_VERTEX_DM_TRUSTED_EN 0x00000001U
+
+/* Register ROGUE_CR_SYS_BUS_SECURE */
+#define ROGUE_CR_SYS_BUS_SECURE 0xA100U
+#define ROGUE_CR_SYS_BUS_SECURE__SECR__MASKFULL 0x0000000000000001ULL
+#define ROGUE_CR_SYS_BUS_SECURE_MASKFULL 0x0000000000000001ULL
+#define ROGUE_CR_SYS_BUS_SECURE_ENABLE_SHIFT 0U
+#define ROGUE_CR_SYS_BUS_SECURE_ENABLE_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_SYS_BUS_SECURE_ENABLE_EN 0x00000001U
+
+/* Register ROGUE_CR_FBA_FC0_CHECKSUM */
+#define ROGUE_CR_FBA_FC0_CHECKSUM 0xD170U
+#define ROGUE_CR_FBA_FC0_CHECKSUM_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_FBA_FC0_CHECKSUM_VALUE_SHIFT 0U
+#define ROGUE_CR_FBA_FC0_CHECKSUM_VALUE_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_FBA_FC1_CHECKSUM */
+#define ROGUE_CR_FBA_FC1_CHECKSUM 0xD178U
+#define ROGUE_CR_FBA_FC1_CHECKSUM_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_FBA_FC1_CHECKSUM_VALUE_SHIFT 0U
+#define ROGUE_CR_FBA_FC1_CHECKSUM_VALUE_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_FBA_FC2_CHECKSUM */
+#define ROGUE_CR_FBA_FC2_CHECKSUM 0xD180U
+#define ROGUE_CR_FBA_FC2_CHECKSUM_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_FBA_FC2_CHECKSUM_VALUE_SHIFT 0U
+#define ROGUE_CR_FBA_FC2_CHECKSUM_VALUE_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_FBA_FC3_CHECKSUM */
+#define ROGUE_CR_FBA_FC3_CHECKSUM 0xD188U
+#define ROGUE_CR_FBA_FC3_CHECKSUM_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_FBA_FC3_CHECKSUM_VALUE_SHIFT 0U
+#define ROGUE_CR_FBA_FC3_CHECKSUM_VALUE_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_CLK_CTRL2 */
+#define ROGUE_CR_CLK_CTRL2 0xD200U
+#define ROGUE_CR_CLK_CTRL2_MASKFULL 0x0000000000000F33ULL
+#define ROGUE_CR_CLK_CTRL2_MCU_FBTC_SHIFT 10U
+#define ROGUE_CR_CLK_CTRL2_MCU_FBTC_CLRMSK 0xFFFFFFFFFFFFF3FFULL
+#define ROGUE_CR_CLK_CTRL2_MCU_FBTC_OFF 0x0000000000000000ULL
+#define ROGUE_CR_CLK_CTRL2_MCU_FBTC_ON 0x0000000000000400ULL
+#define ROGUE_CR_CLK_CTRL2_MCU_FBTC_AUTO 0x0000000000000800ULL
+#define ROGUE_CR_CLK_CTRL2_VRDM_SHIFT 8U
+#define ROGUE_CR_CLK_CTRL2_VRDM_CLRMSK 0xFFFFFFFFFFFFFCFFULL
+#define ROGUE_CR_CLK_CTRL2_VRDM_OFF 0x0000000000000000ULL
+#define ROGUE_CR_CLK_CTRL2_VRDM_ON 0x0000000000000100ULL
+#define ROGUE_CR_CLK_CTRL2_VRDM_AUTO 0x0000000000000200ULL
+#define ROGUE_CR_CLK_CTRL2_SH_SHIFT 4U
+#define ROGUE_CR_CLK_CTRL2_SH_CLRMSK 0xFFFFFFFFFFFFFFCFULL
+#define ROGUE_CR_CLK_CTRL2_SH_OFF 0x0000000000000000ULL
+#define ROGUE_CR_CLK_CTRL2_SH_ON 0x0000000000000010ULL
+#define ROGUE_CR_CLK_CTRL2_SH_AUTO 0x0000000000000020ULL
+#define ROGUE_CR_CLK_CTRL2_FBA_SHIFT 0U
+#define ROGUE_CR_CLK_CTRL2_FBA_CLRMSK 0xFFFFFFFFFFFFFFFCULL
+#define ROGUE_CR_CLK_CTRL2_FBA_OFF 0x0000000000000000ULL
+#define ROGUE_CR_CLK_CTRL2_FBA_ON 0x0000000000000001ULL
+#define ROGUE_CR_CLK_CTRL2_FBA_AUTO 0x0000000000000002ULL
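+/*
+ * The _OFF / _ON / _AUTO values above are already positioned at their field
+ * offsets, so a clock domain can be reprogrammed by clearing the field and
+ * OR-ing in one of the enumerated values, e.g.:
+ *
+ *   val = (val & ROGUE_CR_CLK_CTRL2_SH_CLRMSK) | ROGUE_CR_CLK_CTRL2_SH_AUTO;
+ */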
+
+/* Register ROGUE_CR_CLK_STATUS2 */
+#define ROGUE_CR_CLK_STATUS2 0xD208U
+#define ROGUE_CR_CLK_STATUS2_MASKFULL 0x0000000000000015ULL
+#define ROGUE_CR_CLK_STATUS2_VRDM_SHIFT 4U
+#define ROGUE_CR_CLK_STATUS2_VRDM_CLRMSK 0xFFFFFFFFFFFFFFEFULL
+#define ROGUE_CR_CLK_STATUS2_VRDM_GATED 0x0000000000000000ULL
+#define ROGUE_CR_CLK_STATUS2_VRDM_RUNNING 0x0000000000000010ULL
+#define ROGUE_CR_CLK_STATUS2_SH_SHIFT 2U
+#define ROGUE_CR_CLK_STATUS2_SH_CLRMSK 0xFFFFFFFFFFFFFFFBULL
+#define ROGUE_CR_CLK_STATUS2_SH_GATED 0x0000000000000000ULL
+#define ROGUE_CR_CLK_STATUS2_SH_RUNNING 0x0000000000000004ULL
+#define ROGUE_CR_CLK_STATUS2_FBA_SHIFT 0U
+#define ROGUE_CR_CLK_STATUS2_FBA_CLRMSK 0xFFFFFFFFFFFFFFFEULL
+#define ROGUE_CR_CLK_STATUS2_FBA_GATED 0x0000000000000000ULL
+#define ROGUE_CR_CLK_STATUS2_FBA_RUNNING 0x0000000000000001ULL
+
+/* Register ROGUE_CR_RPM_SHF_FPL */
+#define ROGUE_CR_RPM_SHF_FPL 0xD520U
+#define ROGUE_CR_RPM_SHF_FPL_MASKFULL 0x3FFFFFFFFFFFFFFCULL
+#define ROGUE_CR_RPM_SHF_FPL_SIZE_SHIFT 40U
+#define ROGUE_CR_RPM_SHF_FPL_SIZE_CLRMSK 0xC00000FFFFFFFFFFULL
+#define ROGUE_CR_RPM_SHF_FPL_BASE_SHIFT 2U
+#define ROGUE_CR_RPM_SHF_FPL_BASE_CLRMSK 0xFFFFFF0000000003ULL
+#define ROGUE_CR_RPM_SHF_FPL_BASE_ALIGNSHIFT 2U
+#define ROGUE_CR_RPM_SHF_FPL_BASE_ALIGNSIZE 4U
+
+/* Register ROGUE_CR_RPM_SHF_FPL_READ */
+#define ROGUE_CR_RPM_SHF_FPL_READ 0xD528U
+#define ROGUE_CR_RPM_SHF_FPL_READ_MASKFULL 0x00000000007FFFFFULL
+#define ROGUE_CR_RPM_SHF_FPL_READ_TOGGLE_SHIFT 22U
+#define ROGUE_CR_RPM_SHF_FPL_READ_TOGGLE_CLRMSK 0xFFBFFFFFU
+#define ROGUE_CR_RPM_SHF_FPL_READ_TOGGLE_EN 0x00400000U
+#define ROGUE_CR_RPM_SHF_FPL_READ_OFFSET_SHIFT 0U
+#define ROGUE_CR_RPM_SHF_FPL_READ_OFFSET_CLRMSK 0xFFC00000U
+
+/* Register ROGUE_CR_RPM_SHF_FPL_WRITE */
+#define ROGUE_CR_RPM_SHF_FPL_WRITE 0xD530U
+#define ROGUE_CR_RPM_SHF_FPL_WRITE_MASKFULL 0x00000000007FFFFFULL
+#define ROGUE_CR_RPM_SHF_FPL_WRITE_TOGGLE_SHIFT 22U
+#define ROGUE_CR_RPM_SHF_FPL_WRITE_TOGGLE_CLRMSK 0xFFBFFFFFU
+#define ROGUE_CR_RPM_SHF_FPL_WRITE_TOGGLE_EN 0x00400000U
+#define ROGUE_CR_RPM_SHF_FPL_WRITE_OFFSET_SHIFT 0U
+#define ROGUE_CR_RPM_SHF_FPL_WRITE_OFFSET_CLRMSK 0xFFC00000U
+
+/* Register ROGUE_CR_RPM_SHG_FPL */
+#define ROGUE_CR_RPM_SHG_FPL 0xD538U
+#define ROGUE_CR_RPM_SHG_FPL_MASKFULL 0x3FFFFFFFFFFFFFFCULL
+#define ROGUE_CR_RPM_SHG_FPL_SIZE_SHIFT 40U
+#define ROGUE_CR_RPM_SHG_FPL_SIZE_CLRMSK 0xC00000FFFFFFFFFFULL
+#define ROGUE_CR_RPM_SHG_FPL_BASE_SHIFT 2U
+#define ROGUE_CR_RPM_SHG_FPL_BASE_CLRMSK 0xFFFFFF0000000003ULL
+#define ROGUE_CR_RPM_SHG_FPL_BASE_ALIGNSHIFT 2U
+#define ROGUE_CR_RPM_SHG_FPL_BASE_ALIGNSIZE 4U
+
+/* Register ROGUE_CR_RPM_SHG_FPL_READ */
+#define ROGUE_CR_RPM_SHG_FPL_READ 0xD540U
+#define ROGUE_CR_RPM_SHG_FPL_READ_MASKFULL 0x00000000007FFFFFULL
+#define ROGUE_CR_RPM_SHG_FPL_READ_TOGGLE_SHIFT 22U
+#define ROGUE_CR_RPM_SHG_FPL_READ_TOGGLE_CLRMSK 0xFFBFFFFFU
+#define ROGUE_CR_RPM_SHG_FPL_READ_TOGGLE_EN 0x00400000U
+#define ROGUE_CR_RPM_SHG_FPL_READ_OFFSET_SHIFT 0U
+#define ROGUE_CR_RPM_SHG_FPL_READ_OFFSET_CLRMSK 0xFFC00000U
+
+/* Register ROGUE_CR_RPM_SHG_FPL_WRITE */
+#define ROGUE_CR_RPM_SHG_FPL_WRITE 0xD548U
+#define ROGUE_CR_RPM_SHG_FPL_WRITE_MASKFULL 0x00000000007FFFFFULL
+#define ROGUE_CR_RPM_SHG_FPL_WRITE_TOGGLE_SHIFT 22U
+#define ROGUE_CR_RPM_SHG_FPL_WRITE_TOGGLE_CLRMSK 0xFFBFFFFFU
+#define ROGUE_CR_RPM_SHG_FPL_WRITE_TOGGLE_EN 0x00400000U
+#define ROGUE_CR_RPM_SHG_FPL_WRITE_OFFSET_SHIFT 0U
+#define ROGUE_CR_RPM_SHG_FPL_WRITE_OFFSET_CLRMSK 0xFFC00000U
+
+/* Register ROGUE_CR_SH_PERF */
+#define ROGUE_CR_SH_PERF 0xD5F8U
+#define ROGUE_CR_SH_PERF_MASKFULL 0x000000000000001FULL
+#define ROGUE_CR_SH_PERF_CLR_3_SHIFT 4U
+#define ROGUE_CR_SH_PERF_CLR_3_CLRMSK 0xFFFFFFEFU
+#define ROGUE_CR_SH_PERF_CLR_3_EN 0x00000010U
+#define ROGUE_CR_SH_PERF_CLR_2_SHIFT 3U
+#define ROGUE_CR_SH_PERF_CLR_2_CLRMSK 0xFFFFFFF7U
+#define ROGUE_CR_SH_PERF_CLR_2_EN 0x00000008U
+#define ROGUE_CR_SH_PERF_CLR_1_SHIFT 2U
+#define ROGUE_CR_SH_PERF_CLR_1_CLRMSK 0xFFFFFFFBU
+#define ROGUE_CR_SH_PERF_CLR_1_EN 0x00000004U
+#define ROGUE_CR_SH_PERF_CLR_0_SHIFT 1U
+#define ROGUE_CR_SH_PERF_CLR_0_CLRMSK 0xFFFFFFFDU
+#define ROGUE_CR_SH_PERF_CLR_0_EN 0x00000002U
+#define ROGUE_CR_SH_PERF_CTRL_ENABLE_SHIFT 0U
+#define ROGUE_CR_SH_PERF_CTRL_ENABLE_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_SH_PERF_CTRL_ENABLE_EN 0x00000001U
+
+/* Register ROGUE_CR_SH_PERF_SELECT0 */
+#define ROGUE_CR_SH_PERF_SELECT0 0xD600U
+#define ROGUE_CR_SH_PERF_SELECT0_MASKFULL 0x3FFF3FFF003FFFFFULL
+#define ROGUE_CR_SH_PERF_SELECT0_BATCH_MAX_SHIFT 48U
+#define ROGUE_CR_SH_PERF_SELECT0_BATCH_MAX_CLRMSK 0xC000FFFFFFFFFFFFULL
+#define ROGUE_CR_SH_PERF_SELECT0_BATCH_MIN_SHIFT 32U
+#define ROGUE_CR_SH_PERF_SELECT0_BATCH_MIN_CLRMSK 0xFFFFC000FFFFFFFFULL
+#define ROGUE_CR_SH_PERF_SELECT0_MODE_SHIFT 21U
+#define ROGUE_CR_SH_PERF_SELECT0_MODE_CLRMSK 0xFFFFFFFFFFDFFFFFULL
+#define ROGUE_CR_SH_PERF_SELECT0_MODE_EN 0x0000000000200000ULL
+#define ROGUE_CR_SH_PERF_SELECT0_GROUP_SELECT_SHIFT 16U
+#define ROGUE_CR_SH_PERF_SELECT0_GROUP_SELECT_CLRMSK 0xFFFFFFFFFFE0FFFFULL
+#define ROGUE_CR_SH_PERF_SELECT0_BIT_SELECT_SHIFT 0U
+#define ROGUE_CR_SH_PERF_SELECT0_BIT_SELECT_CLRMSK 0xFFFFFFFFFFFF0000ULL
+
+/* Register ROGUE_CR_SH_PERF_COUNTER_0 */
+#define ROGUE_CR_SH_PERF_COUNTER_0 0xD628U
+#define ROGUE_CR_SH_PERF_COUNTER_0_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_SH_PERF_COUNTER_0_REG_SHIFT 0U
+#define ROGUE_CR_SH_PERF_COUNTER_0_REG_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_SHF_SHG_CHECKSUM */
+#define ROGUE_CR_SHF_SHG_CHECKSUM 0xD1C0U
+#define ROGUE_CR_SHF_SHG_CHECKSUM_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_SHF_SHG_CHECKSUM_VALUE_SHIFT 0U
+#define ROGUE_CR_SHF_SHG_CHECKSUM_VALUE_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_SHF_VERTEX_BIF_CHECKSUM */
+#define ROGUE_CR_SHF_VERTEX_BIF_CHECKSUM 0xD1C8U
+#define ROGUE_CR_SHF_VERTEX_BIF_CHECKSUM_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_SHF_VERTEX_BIF_CHECKSUM_VALUE_SHIFT 0U
+#define ROGUE_CR_SHF_VERTEX_BIF_CHECKSUM_VALUE_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_SHF_VARY_BIF_CHECKSUM */
+#define ROGUE_CR_SHF_VARY_BIF_CHECKSUM 0xD1D0U
+#define ROGUE_CR_SHF_VARY_BIF_CHECKSUM_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_SHF_VARY_BIF_CHECKSUM_VALUE_SHIFT 0U
+#define ROGUE_CR_SHF_VARY_BIF_CHECKSUM_VALUE_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_RPM_BIF_CHECKSUM */
+#define ROGUE_CR_RPM_BIF_CHECKSUM 0xD1D8U
+#define ROGUE_CR_RPM_BIF_CHECKSUM_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_RPM_BIF_CHECKSUM_VALUE_SHIFT 0U
+#define ROGUE_CR_RPM_BIF_CHECKSUM_VALUE_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_SHG_BIF_CHECKSUM */
+#define ROGUE_CR_SHG_BIF_CHECKSUM 0xD1E0U
+#define ROGUE_CR_SHG_BIF_CHECKSUM_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_SHG_BIF_CHECKSUM_VALUE_SHIFT 0U
+#define ROGUE_CR_SHG_BIF_CHECKSUM_VALUE_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_SHG_FE_BE_CHECKSUM */
+#define ROGUE_CR_SHG_FE_BE_CHECKSUM 0xD1E8U
+#define ROGUE_CR_SHG_FE_BE_CHECKSUM_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_SHG_FE_BE_CHECKSUM_VALUE_SHIFT 0U
+#define ROGUE_CR_SHG_FE_BE_CHECKSUM_VALUE_CLRMSK 0x00000000U
+
+/* Register DPX_CR_BF_PERF */
+#define DPX_CR_BF_PERF 0xC458U
+#define DPX_CR_BF_PERF_MASKFULL 0x000000000000001FULL
+#define DPX_CR_BF_PERF_CLR_3_SHIFT 4U
+#define DPX_CR_BF_PERF_CLR_3_CLRMSK 0xFFFFFFEFU
+#define DPX_CR_BF_PERF_CLR_3_EN 0x00000010U
+#define DPX_CR_BF_PERF_CLR_2_SHIFT 3U
+#define DPX_CR_BF_PERF_CLR_2_CLRMSK 0xFFFFFFF7U
+#define DPX_CR_BF_PERF_CLR_2_EN 0x00000008U
+#define DPX_CR_BF_PERF_CLR_1_SHIFT 2U
+#define DPX_CR_BF_PERF_CLR_1_CLRMSK 0xFFFFFFFBU
+#define DPX_CR_BF_PERF_CLR_1_EN 0x00000004U
+#define DPX_CR_BF_PERF_CLR_0_SHIFT 1U
+#define DPX_CR_BF_PERF_CLR_0_CLRMSK 0xFFFFFFFDU
+#define DPX_CR_BF_PERF_CLR_0_EN 0x00000002U
+#define DPX_CR_BF_PERF_CTRL_ENABLE_SHIFT 0U
+#define DPX_CR_BF_PERF_CTRL_ENABLE_CLRMSK 0xFFFFFFFEU
+#define DPX_CR_BF_PERF_CTRL_ENABLE_EN 0x00000001U
+
+/* Register DPX_CR_BF_PERF_SELECT0 */
+#define DPX_CR_BF_PERF_SELECT0 0xC460U
+#define DPX_CR_BF_PERF_SELECT0_MASKFULL 0x3FFF3FFF003FFFFFULL
+#define DPX_CR_BF_PERF_SELECT0_BATCH_MAX_SHIFT 48U
+#define DPX_CR_BF_PERF_SELECT0_BATCH_MAX_CLRMSK 0xC000FFFFFFFFFFFFULL
+#define DPX_CR_BF_PERF_SELECT0_BATCH_MIN_SHIFT 32U
+#define DPX_CR_BF_PERF_SELECT0_BATCH_MIN_CLRMSK 0xFFFFC000FFFFFFFFULL
+#define DPX_CR_BF_PERF_SELECT0_MODE_SHIFT 21U
+#define DPX_CR_BF_PERF_SELECT0_MODE_CLRMSK 0xFFFFFFFFFFDFFFFFULL
+#define DPX_CR_BF_PERF_SELECT0_MODE_EN 0x0000000000200000ULL
+#define DPX_CR_BF_PERF_SELECT0_GROUP_SELECT_SHIFT 16U
+#define DPX_CR_BF_PERF_SELECT0_GROUP_SELECT_CLRMSK 0xFFFFFFFFFFE0FFFFULL
+#define DPX_CR_BF_PERF_SELECT0_BIT_SELECT_SHIFT 0U
+#define DPX_CR_BF_PERF_SELECT0_BIT_SELECT_CLRMSK 0xFFFFFFFFFFFF0000ULL
+
+/* Register DPX_CR_BF_PERF_COUNTER_0 */
+#define DPX_CR_BF_PERF_COUNTER_0 0xC488U
+#define DPX_CR_BF_PERF_COUNTER_0_MASKFULL 0x00000000FFFFFFFFULL
+#define DPX_CR_BF_PERF_COUNTER_0_REG_SHIFT 0U
+#define DPX_CR_BF_PERF_COUNTER_0_REG_CLRMSK 0x00000000U
+
+/* Register DPX_CR_BT_PERF */
+#define DPX_CR_BT_PERF 0xC3D0U
+#define DPX_CR_BT_PERF_MASKFULL 0x000000000000001FULL
+#define DPX_CR_BT_PERF_CLR_3_SHIFT 4U
+#define DPX_CR_BT_PERF_CLR_3_CLRMSK 0xFFFFFFEFU
+#define DPX_CR_BT_PERF_CLR_3_EN 0x00000010U
+#define DPX_CR_BT_PERF_CLR_2_SHIFT 3U
+#define DPX_CR_BT_PERF_CLR_2_CLRMSK 0xFFFFFFF7U
+#define DPX_CR_BT_PERF_CLR_2_EN 0x00000008U
+#define DPX_CR_BT_PERF_CLR_1_SHIFT 2U
+#define DPX_CR_BT_PERF_CLR_1_CLRMSK 0xFFFFFFFBU
+#define DPX_CR_BT_PERF_CLR_1_EN 0x00000004U
+#define DPX_CR_BT_PERF_CLR_0_SHIFT 1U
+#define DPX_CR_BT_PERF_CLR_0_CLRMSK 0xFFFFFFFDU
+#define DPX_CR_BT_PERF_CLR_0_EN 0x00000002U
+#define DPX_CR_BT_PERF_CTRL_ENABLE_SHIFT 0U
+#define DPX_CR_BT_PERF_CTRL_ENABLE_CLRMSK 0xFFFFFFFEU
+#define DPX_CR_BT_PERF_CTRL_ENABLE_EN 0x00000001U
+
+/* Register DPX_CR_BT_PERF_SELECT0 */
+#define DPX_CR_BT_PERF_SELECT0 0xC3D8U
+#define DPX_CR_BT_PERF_SELECT0_MASKFULL 0x3FFF3FFF003FFFFFULL
+#define DPX_CR_BT_PERF_SELECT0_BATCH_MAX_SHIFT 48U
+#define DPX_CR_BT_PERF_SELECT0_BATCH_MAX_CLRMSK 0xC000FFFFFFFFFFFFULL
+#define DPX_CR_BT_PERF_SELECT0_BATCH_MIN_SHIFT 32U
+#define DPX_CR_BT_PERF_SELECT0_BATCH_MIN_CLRMSK 0xFFFFC000FFFFFFFFULL
+#define DPX_CR_BT_PERF_SELECT0_MODE_SHIFT 21U
+#define DPX_CR_BT_PERF_SELECT0_MODE_CLRMSK 0xFFFFFFFFFFDFFFFFULL
+#define DPX_CR_BT_PERF_SELECT0_MODE_EN 0x0000000000200000ULL
+#define DPX_CR_BT_PERF_SELECT0_GROUP_SELECT_SHIFT 16U
+#define DPX_CR_BT_PERF_SELECT0_GROUP_SELECT_CLRMSK 0xFFFFFFFFFFE0FFFFULL
+#define DPX_CR_BT_PERF_SELECT0_BIT_SELECT_SHIFT 0U
+#define DPX_CR_BT_PERF_SELECT0_BIT_SELECT_CLRMSK 0xFFFFFFFFFFFF0000ULL
+
+/* Register DPX_CR_BT_PERF_COUNTER_0 */
+#define DPX_CR_BT_PERF_COUNTER_0 0xC420U
+#define DPX_CR_BT_PERF_COUNTER_0_MASKFULL 0x00000000FFFFFFFFULL
+#define DPX_CR_BT_PERF_COUNTER_0_REG_SHIFT 0U
+#define DPX_CR_BT_PERF_COUNTER_0_REG_CLRMSK 0x00000000U
+
+/* Register DPX_CR_RQ_USC_DEBUG */
+#define DPX_CR_RQ_USC_DEBUG 0xC110U
+#define DPX_CR_RQ_USC_DEBUG_MASKFULL 0x00000000FFFFFFFFULL
+#define DPX_CR_RQ_USC_DEBUG_CHECKSUM_SHIFT 0U
+#define DPX_CR_RQ_USC_DEBUG_CHECKSUM_CLRMSK 0xFFFFFFFF00000000ULL
+
+/* Register DPX_CR_BIF_FAULT_BANK_MMU_STATUS */
+#define DPX_CR_BIF_FAULT_BANK_MMU_STATUS 0xC5C8U
+#define DPX_CR_BIF_FAULT_BANK_MMU_STATUS_MASKFULL 0x000000000000F775ULL
+#define DPX_CR_BIF_FAULT_BANK_MMU_STATUS_CAT_BASE_SHIFT 12U
+#define DPX_CR_BIF_FAULT_BANK_MMU_STATUS_CAT_BASE_CLRMSK 0xFFFF0FFFU
+#define DPX_CR_BIF_FAULT_BANK_MMU_STATUS_PAGE_SIZE_SHIFT 8U
+#define DPX_CR_BIF_FAULT_BANK_MMU_STATUS_PAGE_SIZE_CLRMSK 0xFFFFF8FFU
+#define DPX_CR_BIF_FAULT_BANK_MMU_STATUS_DATA_TYPE_SHIFT 5U
+#define DPX_CR_BIF_FAULT_BANK_MMU_STATUS_DATA_TYPE_CLRMSK 0xFFFFFF9FU
+#define DPX_CR_BIF_FAULT_BANK_MMU_STATUS_FAULT_RO_SHIFT 4U
+#define DPX_CR_BIF_FAULT_BANK_MMU_STATUS_FAULT_RO_CLRMSK 0xFFFFFFEFU
+#define DPX_CR_BIF_FAULT_BANK_MMU_STATUS_FAULT_RO_EN 0x00000010U
+#define DPX_CR_BIF_FAULT_BANK_MMU_STATUS_FAULT_PM_META_RO_SHIFT 2U
+#define DPX_CR_BIF_FAULT_BANK_MMU_STATUS_FAULT_PM_META_RO_CLRMSK 0xFFFFFFFBU
+#define DPX_CR_BIF_FAULT_BANK_MMU_STATUS_FAULT_PM_META_RO_EN 0x00000004U
+#define DPX_CR_BIF_FAULT_BANK_MMU_STATUS_FAULT_SHIFT 0U
+#define DPX_CR_BIF_FAULT_BANK_MMU_STATUS_FAULT_CLRMSK 0xFFFFFFFEU
+#define DPX_CR_BIF_FAULT_BANK_MMU_STATUS_FAULT_EN 0x00000001U
+
+/* Register DPX_CR_BIF_FAULT_BANK_REQ_STATUS */
+#define DPX_CR_BIF_FAULT_BANK_REQ_STATUS 0xC5D0U
+#define DPX_CR_BIF_FAULT_BANK_REQ_STATUS_MASKFULL 0x03FFFFFFFFFFFFF0ULL
+#define DPX_CR_BIF_FAULT_BANK_REQ_STATUS_RNW_SHIFT 57U
+#define DPX_CR_BIF_FAULT_BANK_REQ_STATUS_RNW_CLRMSK 0xFDFFFFFFFFFFFFFFULL
+#define DPX_CR_BIF_FAULT_BANK_REQ_STATUS_RNW_EN 0x0200000000000000ULL
+#define DPX_CR_BIF_FAULT_BANK_REQ_STATUS_TAG_SB_SHIFT 44U
+#define DPX_CR_BIF_FAULT_BANK_REQ_STATUS_TAG_SB_CLRMSK 0xFE000FFFFFFFFFFFULL
+#define DPX_CR_BIF_FAULT_BANK_REQ_STATUS_TAG_ID_SHIFT 40U
+#define DPX_CR_BIF_FAULT_BANK_REQ_STATUS_TAG_ID_CLRMSK 0xFFFFF0FFFFFFFFFFULL
+#define DPX_CR_BIF_FAULT_BANK_REQ_STATUS_ADDRESS_SHIFT 4U
+#define DPX_CR_BIF_FAULT_BANK_REQ_STATUS_ADDRESS_CLRMSK 0xFFFFFF000000000FULL
+#define DPX_CR_BIF_FAULT_BANK_REQ_STATUS_ADDRESS_ALIGNSHIFT 4U
+#define DPX_CR_BIF_FAULT_BANK_REQ_STATUS_ADDRESS_ALIGNSIZE 16U
+
+/* Register DPX_CR_BIF_MMU_STATUS */
+#define DPX_CR_BIF_MMU_STATUS 0xC5D8U
+#define DPX_CR_BIF_MMU_STATUS_MASKFULL 0x000000000FFFFFF7ULL
+#define DPX_CR_BIF_MMU_STATUS_PC_DATA_SHIFT 20U
+#define DPX_CR_BIF_MMU_STATUS_PC_DATA_CLRMSK 0xF00FFFFFU
+#define DPX_CR_BIF_MMU_STATUS_PD_DATA_SHIFT 12U
+#define DPX_CR_BIF_MMU_STATUS_PD_DATA_CLRMSK 0xFFF00FFFU
+#define DPX_CR_BIF_MMU_STATUS_PT_DATA_SHIFT 4U
+#define DPX_CR_BIF_MMU_STATUS_PT_DATA_CLRMSK 0xFFFFF00FU
+#define DPX_CR_BIF_MMU_STATUS_STALLED_SHIFT 2U
+#define DPX_CR_BIF_MMU_STATUS_STALLED_CLRMSK 0xFFFFFFFBU
+#define DPX_CR_BIF_MMU_STATUS_STALLED_EN 0x00000004U
+#define DPX_CR_BIF_MMU_STATUS_PAUSED_SHIFT 1U
+#define DPX_CR_BIF_MMU_STATUS_PAUSED_CLRMSK 0xFFFFFFFDU
+#define DPX_CR_BIF_MMU_STATUS_PAUSED_EN 0x00000002U
+#define DPX_CR_BIF_MMU_STATUS_BUSY_SHIFT 0U
+#define DPX_CR_BIF_MMU_STATUS_BUSY_CLRMSK 0xFFFFFFFEU
+#define DPX_CR_BIF_MMU_STATUS_BUSY_EN 0x00000001U
+
+/* Register DPX_CR_RT_PERF */
+#define DPX_CR_RT_PERF 0xC700U
+#define DPX_CR_RT_PERF_MASKFULL 0x000000000000001FULL
+#define DPX_CR_RT_PERF_CLR_3_SHIFT 4U
+#define DPX_CR_RT_PERF_CLR_3_CLRMSK 0xFFFFFFEFU
+#define DPX_CR_RT_PERF_CLR_3_EN 0x00000010U
+#define DPX_CR_RT_PERF_CLR_2_SHIFT 3U
+#define DPX_CR_RT_PERF_CLR_2_CLRMSK 0xFFFFFFF7U
+#define DPX_CR_RT_PERF_CLR_2_EN 0x00000008U
+#define DPX_CR_RT_PERF_CLR_1_SHIFT 2U
+#define DPX_CR_RT_PERF_CLR_1_CLRMSK 0xFFFFFFFBU
+#define DPX_CR_RT_PERF_CLR_1_EN 0x00000004U
+#define DPX_CR_RT_PERF_CLR_0_SHIFT 1U
+#define DPX_CR_RT_PERF_CLR_0_CLRMSK 0xFFFFFFFDU
+#define DPX_CR_RT_PERF_CLR_0_EN 0x00000002U
+#define DPX_CR_RT_PERF_CTRL_ENABLE_SHIFT 0U
+#define DPX_CR_RT_PERF_CTRL_ENABLE_CLRMSK 0xFFFFFFFEU
+#define DPX_CR_RT_PERF_CTRL_ENABLE_EN 0x00000001U
+
+/* Register DPX_CR_RT_PERF_SELECT0 */
+#define DPX_CR_RT_PERF_SELECT0 0xC708U
+#define DPX_CR_RT_PERF_SELECT0_MASKFULL 0x3FFF3FFF003FFFFFULL
+#define DPX_CR_RT_PERF_SELECT0_BATCH_MAX_SHIFT 48U
+#define DPX_CR_RT_PERF_SELECT0_BATCH_MAX_CLRMSK 0xC000FFFFFFFFFFFFULL
+#define DPX_CR_RT_PERF_SELECT0_BATCH_MIN_SHIFT 32U
+#define DPX_CR_RT_PERF_SELECT0_BATCH_MIN_CLRMSK 0xFFFFC000FFFFFFFFULL
+#define DPX_CR_RT_PERF_SELECT0_MODE_SHIFT 21U
+#define DPX_CR_RT_PERF_SELECT0_MODE_CLRMSK 0xFFFFFFFFFFDFFFFFULL
+#define DPX_CR_RT_PERF_SELECT0_MODE_EN 0x0000000000200000ULL
+#define DPX_CR_RT_PERF_SELECT0_GROUP_SELECT_SHIFT 16U
+#define DPX_CR_RT_PERF_SELECT0_GROUP_SELECT_CLRMSK 0xFFFFFFFFFFE0FFFFULL
+#define DPX_CR_RT_PERF_SELECT0_BIT_SELECT_SHIFT 0U
+#define DPX_CR_RT_PERF_SELECT0_BIT_SELECT_CLRMSK 0xFFFFFFFFFFFF0000ULL
+
+/* Register DPX_CR_RT_PERF_COUNTER_0 */
+#define DPX_CR_RT_PERF_COUNTER_0 0xC730U
+#define DPX_CR_RT_PERF_COUNTER_0_MASKFULL 0x00000000FFFFFFFFULL
+#define DPX_CR_RT_PERF_COUNTER_0_REG_SHIFT 0U
+#define DPX_CR_RT_PERF_COUNTER_0_REG_CLRMSK 0x00000000U
+
+/* Register DPX_CR_BX_TU_PERF */
+#define DPX_CR_BX_TU_PERF 0xC908U
+#define DPX_CR_BX_TU_PERF_MASKFULL 0x000000000000001FULL
+#define DPX_CR_BX_TU_PERF_CLR_3_SHIFT 4U
+#define DPX_CR_BX_TU_PERF_CLR_3_CLRMSK 0xFFFFFFEFU
+#define DPX_CR_BX_TU_PERF_CLR_3_EN 0x00000010U
+#define DPX_CR_BX_TU_PERF_CLR_2_SHIFT 3U
+#define DPX_CR_BX_TU_PERF_CLR_2_CLRMSK 0xFFFFFFF7U
+#define DPX_CR_BX_TU_PERF_CLR_2_EN 0x00000008U
+#define DPX_CR_BX_TU_PERF_CLR_1_SHIFT 2U
+#define DPX_CR_BX_TU_PERF_CLR_1_CLRMSK 0xFFFFFFFBU
+#define DPX_CR_BX_TU_PERF_CLR_1_EN 0x00000004U
+#define DPX_CR_BX_TU_PERF_CLR_0_SHIFT 1U
+#define DPX_CR_BX_TU_PERF_CLR_0_CLRMSK 0xFFFFFFFDU
+#define DPX_CR_BX_TU_PERF_CLR_0_EN 0x00000002U
+#define DPX_CR_BX_TU_PERF_CTRL_ENABLE_SHIFT 0U
+#define DPX_CR_BX_TU_PERF_CTRL_ENABLE_CLRMSK 0xFFFFFFFEU
+#define DPX_CR_BX_TU_PERF_CTRL_ENABLE_EN 0x00000001U
+
+/* Register DPX_CR_BX_TU_PERF_SELECT0 */
+#define DPX_CR_BX_TU_PERF_SELECT0 0xC910U
+#define DPX_CR_BX_TU_PERF_SELECT0_MASKFULL 0x3FFF3FFF003FFFFFULL
+#define DPX_CR_BX_TU_PERF_SELECT0_BATCH_MAX_SHIFT 48U
+#define DPX_CR_BX_TU_PERF_SELECT0_BATCH_MAX_CLRMSK 0xC000FFFFFFFFFFFFULL
+#define DPX_CR_BX_TU_PERF_SELECT0_BATCH_MIN_SHIFT 32U
+#define DPX_CR_BX_TU_PERF_SELECT0_BATCH_MIN_CLRMSK 0xFFFFC000FFFFFFFFULL
+#define DPX_CR_BX_TU_PERF_SELECT0_MODE_SHIFT 21U
+#define DPX_CR_BX_TU_PERF_SELECT0_MODE_CLRMSK 0xFFFFFFFFFFDFFFFFULL
+#define DPX_CR_BX_TU_PERF_SELECT0_MODE_EN 0x0000000000200000ULL
+#define DPX_CR_BX_TU_PERF_SELECT0_GROUP_SELECT_SHIFT 16U
+#define DPX_CR_BX_TU_PERF_SELECT0_GROUP_SELECT_CLRMSK 0xFFFFFFFFFFE0FFFFULL
+#define DPX_CR_BX_TU_PERF_SELECT0_BIT_SELECT_SHIFT 0U
+#define DPX_CR_BX_TU_PERF_SELECT0_BIT_SELECT_CLRMSK 0xFFFFFFFFFFFF0000ULL
+
+/* Register DPX_CR_BX_TU_PERF_COUNTER_0 */
+#define DPX_CR_BX_TU_PERF_COUNTER_0 0xC938U
+#define DPX_CR_BX_TU_PERF_COUNTER_0_MASKFULL 0x00000000FFFFFFFFULL
+#define DPX_CR_BX_TU_PERF_COUNTER_0_REG_SHIFT 0U
+#define DPX_CR_BX_TU_PERF_COUNTER_0_REG_CLRMSK 0x00000000U
+
+/* Register DPX_CR_RS_PDS_RR_CHECKSUM */
+#define DPX_CR_RS_PDS_RR_CHECKSUM 0xC0F0U
+#define DPX_CR_RS_PDS_RR_CHECKSUM_MASKFULL 0x00000000FFFFFFFFULL
+#define DPX_CR_RS_PDS_RR_CHECKSUM_VALUE_SHIFT 0U
+#define DPX_CR_RS_PDS_RR_CHECKSUM_VALUE_CLRMSK 0xFFFFFFFF00000000ULL
+
+/* Register ROGUE_CR_MMU_CBASE_MAPPING_CONTEXT */
+#define ROGUE_CR_MMU_CBASE_MAPPING_CONTEXT 0xE140U
+#define ROGUE_CR_MMU_CBASE_MAPPING_CONTEXT_MASKFULL 0x00000000000000FFULL
+#define ROGUE_CR_MMU_CBASE_MAPPING_CONTEXT_ID_SHIFT 0U
+#define ROGUE_CR_MMU_CBASE_MAPPING_CONTEXT_ID_CLRMSK 0xFFFFFF00U
+
+/* Register ROGUE_CR_MMU_CBASE_MAPPING */
+#define ROGUE_CR_MMU_CBASE_MAPPING 0xE148U
+#define ROGUE_CR_MMU_CBASE_MAPPING_MASKFULL 0x000000000FFFFFFFULL
+#define ROGUE_CR_MMU_CBASE_MAPPING_BASE_ADDR_SHIFT 0U
+#define ROGUE_CR_MMU_CBASE_MAPPING_BASE_ADDR_CLRMSK 0xF0000000U
+#define ROGUE_CR_MMU_CBASE_MAPPING_BASE_ADDR_ALIGNSHIFT 12U
+#define ROGUE_CR_MMU_CBASE_MAPPING_BASE_ADDR_ALIGNSIZE 4096U
+
+/* Register ROGUE_CR_MMU_FAULT_STATUS */
+#define ROGUE_CR_MMU_FAULT_STATUS 0xE150U
+#define ROGUE_CR_MMU_FAULT_STATUS_MASKFULL 0xFFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_MMU_FAULT_STATUS_ADDRESS_SHIFT 28U
+#define ROGUE_CR_MMU_FAULT_STATUS_ADDRESS_CLRMSK 0x000000000FFFFFFFULL
+#define ROGUE_CR_MMU_FAULT_STATUS_CONTEXT_SHIFT 20U
+#define ROGUE_CR_MMU_FAULT_STATUS_CONTEXT_CLRMSK 0xFFFFFFFFF00FFFFFULL
+#define ROGUE_CR_MMU_FAULT_STATUS_TAG_SB_SHIFT 12U
+#define ROGUE_CR_MMU_FAULT_STATUS_TAG_SB_CLRMSK 0xFFFFFFFFFFF00FFFULL
+#define ROGUE_CR_MMU_FAULT_STATUS_REQ_ID_SHIFT 6U
+#define ROGUE_CR_MMU_FAULT_STATUS_REQ_ID_CLRMSK 0xFFFFFFFFFFFFF03FULL
+#define ROGUE_CR_MMU_FAULT_STATUS_LEVEL_SHIFT 4U
+#define ROGUE_CR_MMU_FAULT_STATUS_LEVEL_CLRMSK 0xFFFFFFFFFFFFFFCFULL
+#define ROGUE_CR_MMU_FAULT_STATUS_RNW_SHIFT 3U
+#define ROGUE_CR_MMU_FAULT_STATUS_RNW_CLRMSK 0xFFFFFFFFFFFFFFF7ULL
+#define ROGUE_CR_MMU_FAULT_STATUS_RNW_EN 0x0000000000000008ULL
+#define ROGUE_CR_MMU_FAULT_STATUS_TYPE_SHIFT 1U
+#define ROGUE_CR_MMU_FAULT_STATUS_TYPE_CLRMSK 0xFFFFFFFFFFFFFFF9ULL
+#define ROGUE_CR_MMU_FAULT_STATUS_FAULT_SHIFT 0U
+#define ROGUE_CR_MMU_FAULT_STATUS_FAULT_CLRMSK 0xFFFFFFFFFFFFFFFEULL
+#define ROGUE_CR_MMU_FAULT_STATUS_FAULT_EN 0x0000000000000001ULL
+
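+/*
+ * Example (illustrative, not part of the register map): a field is read by masking the register
+ * value with the inverse of its CLRMSK and shifting right by its SHIFT, e.g. the faulting address
+ * bits are (status & ~ROGUE_CR_MMU_FAULT_STATUS_ADDRESS_CLRMSK) >>
+ * ROGUE_CR_MMU_FAULT_STATUS_ADDRESS_SHIFT, while single-bit flags such as
+ * ROGUE_CR_MMU_FAULT_STATUS_FAULT_EN can be tested directly against the register value.
+ */
+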
+/* Register ROGUE_CR_MMU_FAULT_STATUS_META */
+#define ROGUE_CR_MMU_FAULT_STATUS_META 0xE158U
+#define ROGUE_CR_MMU_FAULT_STATUS_META_MASKFULL 0xFFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_MMU_FAULT_STATUS_META_ADDRESS_SHIFT 28U
+#define ROGUE_CR_MMU_FAULT_STATUS_META_ADDRESS_CLRMSK 0x000000000FFFFFFFULL
+#define ROGUE_CR_MMU_FAULT_STATUS_META_CONTEXT_SHIFT 20U
+#define ROGUE_CR_MMU_FAULT_STATUS_META_CONTEXT_CLRMSK 0xFFFFFFFFF00FFFFFULL
+#define ROGUE_CR_MMU_FAULT_STATUS_META_TAG_SB_SHIFT 12U
+#define ROGUE_CR_MMU_FAULT_STATUS_META_TAG_SB_CLRMSK 0xFFFFFFFFFFF00FFFULL
+#define ROGUE_CR_MMU_FAULT_STATUS_META_REQ_ID_SHIFT 6U
+#define ROGUE_CR_MMU_FAULT_STATUS_META_REQ_ID_CLRMSK 0xFFFFFFFFFFFFF03FULL
+#define ROGUE_CR_MMU_FAULT_STATUS_META_LEVEL_SHIFT 4U
+#define ROGUE_CR_MMU_FAULT_STATUS_META_LEVEL_CLRMSK 0xFFFFFFFFFFFFFFCFULL
+#define ROGUE_CR_MMU_FAULT_STATUS_META_RNW_SHIFT 3U
+#define ROGUE_CR_MMU_FAULT_STATUS_META_RNW_CLRMSK 0xFFFFFFFFFFFFFFF7ULL
+#define ROGUE_CR_MMU_FAULT_STATUS_META_RNW_EN 0x0000000000000008ULL
+#define ROGUE_CR_MMU_FAULT_STATUS_META_TYPE_SHIFT 1U
+#define ROGUE_CR_MMU_FAULT_STATUS_META_TYPE_CLRMSK 0xFFFFFFFFFFFFFFF9ULL
+#define ROGUE_CR_MMU_FAULT_STATUS_META_FAULT_SHIFT 0U
+#define ROGUE_CR_MMU_FAULT_STATUS_META_FAULT_CLRMSK 0xFFFFFFFFFFFFFFFEULL
+#define ROGUE_CR_MMU_FAULT_STATUS_META_FAULT_EN 0x0000000000000001ULL
+
+/* Register ROGUE_CR_SLC3_CTRL_MISC */
+#define ROGUE_CR_SLC3_CTRL_MISC 0xE200U
+#define ROGUE_CR_SLC3_CTRL_MISC_MASKFULL 0x0000000000000107ULL
+#define ROGUE_CR_SLC3_CTRL_MISC_WRITE_COMBINER_SHIFT 8U
+#define ROGUE_CR_SLC3_CTRL_MISC_WRITE_COMBINER_CLRMSK 0xFFFFFEFFU
+#define ROGUE_CR_SLC3_CTRL_MISC_WRITE_COMBINER_EN 0x00000100U
+#define ROGUE_CR_SLC3_CTRL_MISC_ADDR_DECODE_MODE_SHIFT 0U
+#define ROGUE_CR_SLC3_CTRL_MISC_ADDR_DECODE_MODE_CLRMSK 0xFFFFFFF8U
+#define ROGUE_CR_SLC3_CTRL_MISC_ADDR_DECODE_MODE_LINEAR 0x00000000U
+#define ROGUE_CR_SLC3_CTRL_MISC_ADDR_DECODE_MODE_IN_PAGE_HASH 0x00000001U
+#define ROGUE_CR_SLC3_CTRL_MISC_ADDR_DECODE_MODE_FIXED_PVR_HASH 0x00000002U
+#define ROGUE_CR_SLC3_CTRL_MISC_ADDR_DECODE_MODE_SCRAMBLE_PVR_HASH 0x00000003U
+#define ROGUE_CR_SLC3_CTRL_MISC_ADDR_DECODE_MODE_WEAVED_HASH 0x00000004U
+
+/* Register ROGUE_CR_SLC3_SCRAMBLE */
+#define ROGUE_CR_SLC3_SCRAMBLE 0xE208U
+#define ROGUE_CR_SLC3_SCRAMBLE_MASKFULL 0xFFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_SLC3_SCRAMBLE_BITS_SHIFT 0U
+#define ROGUE_CR_SLC3_SCRAMBLE_BITS_CLRMSK 0x0000000000000000ULL
+
+/* Register ROGUE_CR_SLC3_SCRAMBLE2 */
+#define ROGUE_CR_SLC3_SCRAMBLE2 0xE210U
+#define ROGUE_CR_SLC3_SCRAMBLE2_MASKFULL 0xFFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_SLC3_SCRAMBLE2_BITS_SHIFT 0U
+#define ROGUE_CR_SLC3_SCRAMBLE2_BITS_CLRMSK 0x0000000000000000ULL
+
+/* Register ROGUE_CR_SLC3_SCRAMBLE3 */
+#define ROGUE_CR_SLC3_SCRAMBLE3 0xE218U
+#define ROGUE_CR_SLC3_SCRAMBLE3_MASKFULL 0xFFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_SLC3_SCRAMBLE3_BITS_SHIFT 0U
+#define ROGUE_CR_SLC3_SCRAMBLE3_BITS_CLRMSK 0x0000000000000000ULL
+
+/* Register ROGUE_CR_SLC3_SCRAMBLE4 */
+#define ROGUE_CR_SLC3_SCRAMBLE4 0xE260U
+#define ROGUE_CR_SLC3_SCRAMBLE4_MASKFULL 0xFFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_SLC3_SCRAMBLE4_BITS_SHIFT 0U
+#define ROGUE_CR_SLC3_SCRAMBLE4_BITS_CLRMSK 0x0000000000000000ULL
+
+/* Register ROGUE_CR_SLC3_STATUS */
+#define ROGUE_CR_SLC3_STATUS 0xE220U
+#define ROGUE_CR_SLC3_STATUS_MASKFULL 0xFFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_SLC3_STATUS_WRITES1_SHIFT 48U
+#define ROGUE_CR_SLC3_STATUS_WRITES1_CLRMSK 0x0000FFFFFFFFFFFFULL
+#define ROGUE_CR_SLC3_STATUS_WRITES0_SHIFT 32U
+#define ROGUE_CR_SLC3_STATUS_WRITES0_CLRMSK 0xFFFF0000FFFFFFFFULL
+#define ROGUE_CR_SLC3_STATUS_READS1_SHIFT 16U
+#define ROGUE_CR_SLC3_STATUS_READS1_CLRMSK 0xFFFFFFFF0000FFFFULL
+#define ROGUE_CR_SLC3_STATUS_READS0_SHIFT 0U
+#define ROGUE_CR_SLC3_STATUS_READS0_CLRMSK 0xFFFFFFFFFFFF0000ULL
+
+/* Register ROGUE_CR_SLC3_IDLE */
+#define ROGUE_CR_SLC3_IDLE 0xE228U
+#define ROGUE_CR_SLC3_IDLE_MASKFULL 0x00000000000FFFFFULL
+#define ROGUE_CR_SLC3_IDLE_ORDERQ_DUST2_SHIFT 18U
+#define ROGUE_CR_SLC3_IDLE_ORDERQ_DUST2_CLRMSK 0xFFF3FFFFU
+#define ROGUE_CR_SLC3_IDLE_MMU_SHIFT 17U
+#define ROGUE_CR_SLC3_IDLE_MMU_CLRMSK 0xFFFDFFFFU
+#define ROGUE_CR_SLC3_IDLE_MMU_EN 0x00020000U
+#define ROGUE_CR_SLC3_IDLE_RDI_SHIFT 16U
+#define ROGUE_CR_SLC3_IDLE_RDI_CLRMSK 0xFFFEFFFFU
+#define ROGUE_CR_SLC3_IDLE_RDI_EN 0x00010000U
+#define ROGUE_CR_SLC3_IDLE_IMGBV4_SHIFT 12U
+#define ROGUE_CR_SLC3_IDLE_IMGBV4_CLRMSK 0xFFFF0FFFU
+#define ROGUE_CR_SLC3_IDLE_CACHE_BANKS_SHIFT 4U
+#define ROGUE_CR_SLC3_IDLE_CACHE_BANKS_CLRMSK 0xFFFFF00FU
+#define ROGUE_CR_SLC3_IDLE_ORDERQ_DUST_SHIFT 2U
+#define ROGUE_CR_SLC3_IDLE_ORDERQ_DUST_CLRMSK 0xFFFFFFF3U
+#define ROGUE_CR_SLC3_IDLE_ORDERQ_JONES_SHIFT 1U
+#define ROGUE_CR_SLC3_IDLE_ORDERQ_JONES_CLRMSK 0xFFFFFFFDU
+#define ROGUE_CR_SLC3_IDLE_ORDERQ_JONES_EN 0x00000002U
+#define ROGUE_CR_SLC3_IDLE_XBAR_SHIFT 0U
+#define ROGUE_CR_SLC3_IDLE_XBAR_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_SLC3_IDLE_XBAR_EN 0x00000001U
+
+/* Register ROGUE_CR_SLC3_FAULT_STOP_STATUS */
+#define ROGUE_CR_SLC3_FAULT_STOP_STATUS 0xE248U
+#define ROGUE_CR_SLC3_FAULT_STOP_STATUS_MASKFULL 0x0000000000001FFFULL
+#define ROGUE_CR_SLC3_FAULT_STOP_STATUS_BIF_SHIFT 0U
+#define ROGUE_CR_SLC3_FAULT_STOP_STATUS_BIF_CLRMSK 0xFFFFE000U
+
+/* Register ROGUE_CR_VDM_CONTEXT_STORE_MODE */
+#define ROGUE_CR_VDM_CONTEXT_STORE_MODE 0xF048U
+#define ROGUE_CR_VDM_CONTEXT_STORE_MODE_MASKFULL 0x0000000000000003ULL
+#define ROGUE_CR_VDM_CONTEXT_STORE_MODE_MODE_SHIFT 0U
+#define ROGUE_CR_VDM_CONTEXT_STORE_MODE_MODE_CLRMSK 0xFFFFFFFCU
+#define ROGUE_CR_VDM_CONTEXT_STORE_MODE_MODE_INDEX 0x00000000U
+#define ROGUE_CR_VDM_CONTEXT_STORE_MODE_MODE_INSTANCE 0x00000001U
+#define ROGUE_CR_VDM_CONTEXT_STORE_MODE_MODE_LIST 0x00000002U
+
+/* Register ROGUE_CR_CONTEXT_MAPPING0 */
+#define ROGUE_CR_CONTEXT_MAPPING0 0xF078U
+#define ROGUE_CR_CONTEXT_MAPPING0_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_CONTEXT_MAPPING0_2D_SHIFT 24U
+#define ROGUE_CR_CONTEXT_MAPPING0_2D_CLRMSK 0x00FFFFFFU
+#define ROGUE_CR_CONTEXT_MAPPING0_CDM_SHIFT 16U
+#define ROGUE_CR_CONTEXT_MAPPING0_CDM_CLRMSK 0xFF00FFFFU
+#define ROGUE_CR_CONTEXT_MAPPING0_3D_SHIFT 8U
+#define ROGUE_CR_CONTEXT_MAPPING0_3D_CLRMSK 0xFFFF00FFU
+#define ROGUE_CR_CONTEXT_MAPPING0_TA_SHIFT 0U
+#define ROGUE_CR_CONTEXT_MAPPING0_TA_CLRMSK 0xFFFFFF00U
+
+/* Register ROGUE_CR_CONTEXT_MAPPING1 */
+#define ROGUE_CR_CONTEXT_MAPPING1 0xF080U
+#define ROGUE_CR_CONTEXT_MAPPING1_MASKFULL 0x000000000000FFFFULL
+#define ROGUE_CR_CONTEXT_MAPPING1_HOST_SHIFT 8U
+#define ROGUE_CR_CONTEXT_MAPPING1_HOST_CLRMSK 0xFFFF00FFU
+#define ROGUE_CR_CONTEXT_MAPPING1_TLA_SHIFT 0U
+#define ROGUE_CR_CONTEXT_MAPPING1_TLA_CLRMSK 0xFFFFFF00U
+
+/* Register ROGUE_CR_CONTEXT_MAPPING2 */
+#define ROGUE_CR_CONTEXT_MAPPING2 0xF088U
+#define ROGUE_CR_CONTEXT_MAPPING2_MASKFULL 0x0000000000FFFFFFULL
+#define ROGUE_CR_CONTEXT_MAPPING2_ALIST0_SHIFT 16U
+#define ROGUE_CR_CONTEXT_MAPPING2_ALIST0_CLRMSK 0xFF00FFFFU
+#define ROGUE_CR_CONTEXT_MAPPING2_TE0_SHIFT 8U
+#define ROGUE_CR_CONTEXT_MAPPING2_TE0_CLRMSK 0xFFFF00FFU
+#define ROGUE_CR_CONTEXT_MAPPING2_VCE0_SHIFT 0U
+#define ROGUE_CR_CONTEXT_MAPPING2_VCE0_CLRMSK 0xFFFFFF00U
+
+/* Register ROGUE_CR_CONTEXT_MAPPING3 */
+#define ROGUE_CR_CONTEXT_MAPPING3 0xF090U
+#define ROGUE_CR_CONTEXT_MAPPING3_MASKFULL 0x0000000000FFFFFFULL
+#define ROGUE_CR_CONTEXT_MAPPING3_ALIST1_SHIFT 16U
+#define ROGUE_CR_CONTEXT_MAPPING3_ALIST1_CLRMSK 0xFF00FFFFU
+#define ROGUE_CR_CONTEXT_MAPPING3_TE1_SHIFT 8U
+#define ROGUE_CR_CONTEXT_MAPPING3_TE1_CLRMSK 0xFFFF00FFU
+#define ROGUE_CR_CONTEXT_MAPPING3_VCE1_SHIFT 0U
+#define ROGUE_CR_CONTEXT_MAPPING3_VCE1_CLRMSK 0xFFFFFF00U
+
+/* Register ROGUE_CR_BIF_JONES_OUTSTANDING_READ */
+#define ROGUE_CR_BIF_JONES_OUTSTANDING_READ 0xF098U
+#define ROGUE_CR_BIF_JONES_OUTSTANDING_READ_MASKFULL 0x000000000000FFFFULL
+#define ROGUE_CR_BIF_JONES_OUTSTANDING_READ_COUNTER_SHIFT 0U
+#define ROGUE_CR_BIF_JONES_OUTSTANDING_READ_COUNTER_CLRMSK 0xFFFF0000U
+
+/* Register ROGUE_CR_BIF_BLACKPEARL_OUTSTANDING_READ */
+#define ROGUE_CR_BIF_BLACKPEARL_OUTSTANDING_READ 0xF0A0U
+#define ROGUE_CR_BIF_BLACKPEARL_OUTSTANDING_READ_MASKFULL 0x000000000000FFFFULL
+#define ROGUE_CR_BIF_BLACKPEARL_OUTSTANDING_READ_COUNTER_SHIFT 0U
+#define ROGUE_CR_BIF_BLACKPEARL_OUTSTANDING_READ_COUNTER_CLRMSK 0xFFFF0000U
+
+/* Register ROGUE_CR_BIF_DUST_OUTSTANDING_READ */
+#define ROGUE_CR_BIF_DUST_OUTSTANDING_READ 0xF0A8U
+#define ROGUE_CR_BIF_DUST_OUTSTANDING_READ_MASKFULL 0x000000000000FFFFULL
+#define ROGUE_CR_BIF_DUST_OUTSTANDING_READ_COUNTER_SHIFT 0U
+#define ROGUE_CR_BIF_DUST_OUTSTANDING_READ_COUNTER_CLRMSK 0xFFFF0000U
+
+/* Register ROGUE_CR_CONTEXT_MAPPING4 */
+#define ROGUE_CR_CONTEXT_MAPPING4 0xF210U
+#define ROGUE_CR_CONTEXT_MAPPING4_MASKFULL 0x0000FFFFFFFFFFFFULL
+#define ROGUE_CR_CONTEXT_MAPPING4_3D_MMU_STACK_SHIFT 40U
+#define ROGUE_CR_CONTEXT_MAPPING4_3D_MMU_STACK_CLRMSK 0xFFFF00FFFFFFFFFFULL
+#define ROGUE_CR_CONTEXT_MAPPING4_3D_UFSTACK_SHIFT 32U
+#define ROGUE_CR_CONTEXT_MAPPING4_3D_UFSTACK_CLRMSK 0xFFFFFF00FFFFFFFFULL
+#define ROGUE_CR_CONTEXT_MAPPING4_3D_FSTACK_SHIFT 24U
+#define ROGUE_CR_CONTEXT_MAPPING4_3D_FSTACK_CLRMSK 0xFFFFFFFF00FFFFFFULL
+#define ROGUE_CR_CONTEXT_MAPPING4_TA_MMU_STACK_SHIFT 16U
+#define ROGUE_CR_CONTEXT_MAPPING4_TA_MMU_STACK_CLRMSK 0xFFFFFFFFFF00FFFFULL
+#define ROGUE_CR_CONTEXT_MAPPING4_TA_UFSTACK_SHIFT 8U
+#define ROGUE_CR_CONTEXT_MAPPING4_TA_UFSTACK_CLRMSK 0xFFFFFFFFFFFF00FFULL
+#define ROGUE_CR_CONTEXT_MAPPING4_TA_FSTACK_SHIFT 0U
+#define ROGUE_CR_CONTEXT_MAPPING4_TA_FSTACK_CLRMSK 0xFFFFFFFFFFFFFF00ULL
+
+/* Register ROGUE_CR_MULTICORE_GPU */
+#define ROGUE_CR_MULTICORE_GPU 0xF300U
+#define ROGUE_CR_MULTICORE_GPU_MASKFULL 0x000000000000007FULL
+#define ROGUE_CR_MULTICORE_GPU_CAPABILITY_FRAGMENT_SHIFT 6U
+#define ROGUE_CR_MULTICORE_GPU_CAPABILITY_FRAGMENT_CLRMSK 0xFFFFFFBFU
+#define ROGUE_CR_MULTICORE_GPU_CAPABILITY_FRAGMENT_EN 0x00000040U
+#define ROGUE_CR_MULTICORE_GPU_CAPABILITY_GEOMETRY_SHIFT 5U
+#define ROGUE_CR_MULTICORE_GPU_CAPABILITY_GEOMETRY_CLRMSK 0xFFFFFFDFU
+#define ROGUE_CR_MULTICORE_GPU_CAPABILITY_GEOMETRY_EN 0x00000020U
+#define ROGUE_CR_MULTICORE_GPU_CAPABILITY_COMPUTE_SHIFT 4U
+#define ROGUE_CR_MULTICORE_GPU_CAPABILITY_COMPUTE_CLRMSK 0xFFFFFFEFU
+#define ROGUE_CR_MULTICORE_GPU_CAPABILITY_COMPUTE_EN 0x00000010U
+#define ROGUE_CR_MULTICORE_GPU_CAPABILITY_PRIMARY_SHIFT 3U
+#define ROGUE_CR_MULTICORE_GPU_CAPABILITY_PRIMARY_CLRMSK 0xFFFFFFF7U
+#define ROGUE_CR_MULTICORE_GPU_CAPABILITY_PRIMARY_EN 0x00000008U
+#define ROGUE_CR_MULTICORE_GPU_ID_SHIFT 0U
+#define ROGUE_CR_MULTICORE_GPU_ID_CLRMSK 0xFFFFFFF8U
+
+/* Register ROGUE_CR_MULTICORE_SYSTEM */
+#define ROGUE_CR_MULTICORE_SYSTEM 0xF308U
+#define ROGUE_CR_MULTICORE_SYSTEM_MASKFULL 0x000000000000000FULL
+#define ROGUE_CR_MULTICORE_SYSTEM_GPU_COUNT_SHIFT 0U
+#define ROGUE_CR_MULTICORE_SYSTEM_GPU_COUNT_CLRMSK 0xFFFFFFF0U
+
+/* Register ROGUE_CR_MULTICORE_FRAGMENT_CTRL_COMMON */
+#define ROGUE_CR_MULTICORE_FRAGMENT_CTRL_COMMON 0xF310U
+#define ROGUE_CR_MULTICORE_FRAGMENT_CTRL_COMMON_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_MULTICORE_FRAGMENT_CTRL_COMMON_WORKLOAD_TYPE_SHIFT 30U
+#define ROGUE_CR_MULTICORE_FRAGMENT_CTRL_COMMON_WORKLOAD_TYPE_CLRMSK 0x3FFFFFFFU
+#define ROGUE_CR_MULTICORE_FRAGMENT_CTRL_COMMON_WORKLOAD_EXECUTE_COUNT_SHIFT 8U
+#define ROGUE_CR_MULTICORE_FRAGMENT_CTRL_COMMON_WORKLOAD_EXECUTE_COUNT_CLRMSK 0xC00000FFU
+#define ROGUE_CR_MULTICORE_FRAGMENT_CTRL_COMMON_GPU_ENABLE_SHIFT 0U
+#define ROGUE_CR_MULTICORE_FRAGMENT_CTRL_COMMON_GPU_ENABLE_CLRMSK 0xFFFFFF00U
+
+/* Register ROGUE_CR_MULTICORE_GEOMETRY_CTRL_COMMON */
+#define ROGUE_CR_MULTICORE_GEOMETRY_CTRL_COMMON 0xF320U
+#define ROGUE_CR_MULTICORE_GEOMETRY_CTRL_COMMON_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_MULTICORE_GEOMETRY_CTRL_COMMON_WORKLOAD_TYPE_SHIFT 30U
+#define ROGUE_CR_MULTICORE_GEOMETRY_CTRL_COMMON_WORKLOAD_TYPE_CLRMSK 0x3FFFFFFFU
+#define ROGUE_CR_MULTICORE_GEOMETRY_CTRL_COMMON_WORKLOAD_EXECUTE_COUNT_SHIFT 8U
+#define ROGUE_CR_MULTICORE_GEOMETRY_CTRL_COMMON_WORKLOAD_EXECUTE_COUNT_CLRMSK 0xC00000FFU
+#define ROGUE_CR_MULTICORE_GEOMETRY_CTRL_COMMON_GPU_ENABLE_SHIFT 0U
+#define ROGUE_CR_MULTICORE_GEOMETRY_CTRL_COMMON_GPU_ENABLE_CLRMSK 0xFFFFFF00U
+
+/* Register ROGUE_CR_MULTICORE_COMPUTE_CTRL_COMMON */
+#define ROGUE_CR_MULTICORE_COMPUTE_CTRL_COMMON 0xF330U
+#define ROGUE_CR_MULTICORE_COMPUTE_CTRL_COMMON_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_MULTICORE_COMPUTE_CTRL_COMMON_WORKLOAD_TYPE_SHIFT 30U
+#define ROGUE_CR_MULTICORE_COMPUTE_CTRL_COMMON_WORKLOAD_TYPE_CLRMSK 0x3FFFFFFFU
+#define ROGUE_CR_MULTICORE_COMPUTE_CTRL_COMMON_WORKLOAD_EXECUTE_COUNT_SHIFT 8U
+#define ROGUE_CR_MULTICORE_COMPUTE_CTRL_COMMON_WORKLOAD_EXECUTE_COUNT_CLRMSK 0xC00000FFU
+#define ROGUE_CR_MULTICORE_COMPUTE_CTRL_COMMON_GPU_ENABLE_SHIFT 0U
+#define ROGUE_CR_MULTICORE_COMPUTE_CTRL_COMMON_GPU_ENABLE_CLRMSK 0xFFFFFF00U
+
+/* Register ROGUE_CR_ECC_RAM_ERR_INJ */
+#define ROGUE_CR_ECC_RAM_ERR_INJ 0xF340U
+#define ROGUE_CR_ECC_RAM_ERR_INJ_MASKFULL 0x000000000000001FULL
+#define ROGUE_CR_ECC_RAM_ERR_INJ_SLC_SIDEKICK_SHIFT 4U
+#define ROGUE_CR_ECC_RAM_ERR_INJ_SLC_SIDEKICK_CLRMSK 0xFFFFFFEFU
+#define ROGUE_CR_ECC_RAM_ERR_INJ_SLC_SIDEKICK_EN 0x00000010U
+#define ROGUE_CR_ECC_RAM_ERR_INJ_USC_SHIFT 3U
+#define ROGUE_CR_ECC_RAM_ERR_INJ_USC_CLRMSK 0xFFFFFFF7U
+#define ROGUE_CR_ECC_RAM_ERR_INJ_USC_EN 0x00000008U
+#define ROGUE_CR_ECC_RAM_ERR_INJ_TPU_MCU_L0_SHIFT 2U
+#define ROGUE_CR_ECC_RAM_ERR_INJ_TPU_MCU_L0_CLRMSK 0xFFFFFFFBU
+#define ROGUE_CR_ECC_RAM_ERR_INJ_TPU_MCU_L0_EN 0x00000004U
+#define ROGUE_CR_ECC_RAM_ERR_INJ_RASCAL_SHIFT 1U
+#define ROGUE_CR_ECC_RAM_ERR_INJ_RASCAL_CLRMSK 0xFFFFFFFDU
+#define ROGUE_CR_ECC_RAM_ERR_INJ_RASCAL_EN 0x00000002U
+#define ROGUE_CR_ECC_RAM_ERR_INJ_MARS_SHIFT 0U
+#define ROGUE_CR_ECC_RAM_ERR_INJ_MARS_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_ECC_RAM_ERR_INJ_MARS_EN 0x00000001U
+
+/* Register ROGUE_CR_ECC_RAM_INIT_KICK */
+#define ROGUE_CR_ECC_RAM_INIT_KICK 0xF348U
+#define ROGUE_CR_ECC_RAM_INIT_KICK_MASKFULL 0x000000000000001FULL
+#define ROGUE_CR_ECC_RAM_INIT_KICK_SLC_SIDEKICK_SHIFT 4U
+#define ROGUE_CR_ECC_RAM_INIT_KICK_SLC_SIDEKICK_CLRMSK 0xFFFFFFEFU
+#define ROGUE_CR_ECC_RAM_INIT_KICK_SLC_SIDEKICK_EN 0x00000010U
+#define ROGUE_CR_ECC_RAM_INIT_KICK_USC_SHIFT 3U
+#define ROGUE_CR_ECC_RAM_INIT_KICK_USC_CLRMSK 0xFFFFFFF7U
+#define ROGUE_CR_ECC_RAM_INIT_KICK_USC_EN 0x00000008U
+#define ROGUE_CR_ECC_RAM_INIT_KICK_TPU_MCU_L0_SHIFT 2U
+#define ROGUE_CR_ECC_RAM_INIT_KICK_TPU_MCU_L0_CLRMSK 0xFFFFFFFBU
+#define ROGUE_CR_ECC_RAM_INIT_KICK_TPU_MCU_L0_EN 0x00000004U
+#define ROGUE_CR_ECC_RAM_INIT_KICK_RASCAL_SHIFT 1U
+#define ROGUE_CR_ECC_RAM_INIT_KICK_RASCAL_CLRMSK 0xFFFFFFFDU
+#define ROGUE_CR_ECC_RAM_INIT_KICK_RASCAL_EN 0x00000002U
+#define ROGUE_CR_ECC_RAM_INIT_KICK_MARS_SHIFT 0U
+#define ROGUE_CR_ECC_RAM_INIT_KICK_MARS_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_ECC_RAM_INIT_KICK_MARS_EN 0x00000001U
+
+/* Register ROGUE_CR_ECC_RAM_INIT_DONE */
+#define ROGUE_CR_ECC_RAM_INIT_DONE 0xF350U
+#define ROGUE_CR_ECC_RAM_INIT_DONE_MASKFULL 0x000000000000001FULL
+#define ROGUE_CR_ECC_RAM_INIT_DONE_SLC_SIDEKICK_SHIFT 4U
+#define ROGUE_CR_ECC_RAM_INIT_DONE_SLC_SIDEKICK_CLRMSK 0xFFFFFFEFU
+#define ROGUE_CR_ECC_RAM_INIT_DONE_SLC_SIDEKICK_EN 0x00000010U
+#define ROGUE_CR_ECC_RAM_INIT_DONE_USC_SHIFT 3U
+#define ROGUE_CR_ECC_RAM_INIT_DONE_USC_CLRMSK 0xFFFFFFF7U
+#define ROGUE_CR_ECC_RAM_INIT_DONE_USC_EN 0x00000008U
+#define ROGUE_CR_ECC_RAM_INIT_DONE_TPU_MCU_L0_SHIFT 2U
+#define ROGUE_CR_ECC_RAM_INIT_DONE_TPU_MCU_L0_CLRMSK 0xFFFFFFFBU
+#define ROGUE_CR_ECC_RAM_INIT_DONE_TPU_MCU_L0_EN 0x00000004U
+#define ROGUE_CR_ECC_RAM_INIT_DONE_RASCAL_SHIFT 1U
+#define ROGUE_CR_ECC_RAM_INIT_DONE_RASCAL_CLRMSK 0xFFFFFFFDU
+#define ROGUE_CR_ECC_RAM_INIT_DONE_RASCAL_EN 0x00000002U
+#define ROGUE_CR_ECC_RAM_INIT_DONE_MARS_SHIFT 0U
+#define ROGUE_CR_ECC_RAM_INIT_DONE_MARS_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_ECC_RAM_INIT_DONE_MARS_EN 0x00000001U
+
+/* Register ROGUE_CR_SAFETY_EVENT_ENABLE */
+#define ROGUE_CR_SAFETY_EVENT_ENABLE__ROGUEXE 0xF390U
+#define ROGUE_CR_SAFETY_EVENT_ENABLE__ROGUEXE__MASKFULL 0x000000000000007FULL
+#define ROGUE_CR_SAFETY_EVENT_ENABLE__ROGUEXE__CPU_PAGE_FAULT_SHIFT 6U
+#define ROGUE_CR_SAFETY_EVENT_ENABLE__ROGUEXE__CPU_PAGE_FAULT_CLRMSK 0xFFFFFFBFU
+#define ROGUE_CR_SAFETY_EVENT_ENABLE__ROGUEXE__CPU_PAGE_FAULT_EN 0x00000040U
+#define ROGUE_CR_SAFETY_EVENT_ENABLE__ROGUEXE__SAFE_COMPUTE_FAIL_SHIFT 5U
+#define ROGUE_CR_SAFETY_EVENT_ENABLE__ROGUEXE__SAFE_COMPUTE_FAIL_CLRMSK 0xFFFFFFDFU
+#define ROGUE_CR_SAFETY_EVENT_ENABLE__ROGUEXE__SAFE_COMPUTE_FAIL_EN 0x00000020U
+#define ROGUE_CR_SAFETY_EVENT_ENABLE__ROGUEXE__WATCHDOG_TIMEOUT_SHIFT 4U
+#define ROGUE_CR_SAFETY_EVENT_ENABLE__ROGUEXE__WATCHDOG_TIMEOUT_CLRMSK 0xFFFFFFEFU
+#define ROGUE_CR_SAFETY_EVENT_ENABLE__ROGUEXE__WATCHDOG_TIMEOUT_EN 0x00000010U
+#define ROGUE_CR_SAFETY_EVENT_ENABLE__ROGUEXE__TRP_FAIL_SHIFT 3U
+#define ROGUE_CR_SAFETY_EVENT_ENABLE__ROGUEXE__TRP_FAIL_CLRMSK 0xFFFFFFF7U
+#define ROGUE_CR_SAFETY_EVENT_ENABLE__ROGUEXE__TRP_FAIL_EN 0x00000008U
+#define ROGUE_CR_SAFETY_EVENT_ENABLE__ROGUEXE__FAULT_FW_SHIFT 2U
+#define ROGUE_CR_SAFETY_EVENT_ENABLE__ROGUEXE__FAULT_FW_CLRMSK 0xFFFFFFFBU
+#define ROGUE_CR_SAFETY_EVENT_ENABLE__ROGUEXE__FAULT_FW_EN 0x00000004U
+#define ROGUE_CR_SAFETY_EVENT_ENABLE__ROGUEXE__FAULT_GPU_SHIFT 1U
+#define ROGUE_CR_SAFETY_EVENT_ENABLE__ROGUEXE__FAULT_GPU_CLRMSK 0xFFFFFFFDU
+#define ROGUE_CR_SAFETY_EVENT_ENABLE__ROGUEXE__FAULT_GPU_EN 0x00000002U
+#define ROGUE_CR_SAFETY_EVENT_ENABLE__ROGUEXE__GPU_PAGE_FAULT_SHIFT 0U
+#define ROGUE_CR_SAFETY_EVENT_ENABLE__ROGUEXE__GPU_PAGE_FAULT_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_SAFETY_EVENT_ENABLE__ROGUEXE__GPU_PAGE_FAULT_EN 0x00000001U
+
+/* Register ROGUE_CR_SAFETY_EVENT_STATUS */
+#define ROGUE_CR_SAFETY_EVENT_STATUS__ROGUEXE 0xF398U
+#define ROGUE_CR_SAFETY_EVENT_STATUS__ROGUEXE__MASKFULL 0x000000000000007FULL
+#define ROGUE_CR_SAFETY_EVENT_STATUS__ROGUEXE__CPU_PAGE_FAULT_SHIFT 6U
+#define ROGUE_CR_SAFETY_EVENT_STATUS__ROGUEXE__CPU_PAGE_FAULT_CLRMSK 0xFFFFFFBFU
+#define ROGUE_CR_SAFETY_EVENT_STATUS__ROGUEXE__CPU_PAGE_FAULT_EN 0x00000040U
+#define ROGUE_CR_SAFETY_EVENT_STATUS__ROGUEXE__SAFE_COMPUTE_FAIL_SHIFT 5U
+#define ROGUE_CR_SAFETY_EVENT_STATUS__ROGUEXE__SAFE_COMPUTE_FAIL_CLRMSK 0xFFFFFFDFU
+#define ROGUE_CR_SAFETY_EVENT_STATUS__ROGUEXE__SAFE_COMPUTE_FAIL_EN 0x00000020U
+#define ROGUE_CR_SAFETY_EVENT_STATUS__ROGUEXE__WATCHDOG_TIMEOUT_SHIFT 4U
+#define ROGUE_CR_SAFETY_EVENT_STATUS__ROGUEXE__WATCHDOG_TIMEOUT_CLRMSK 0xFFFFFFEFU
+#define ROGUE_CR_SAFETY_EVENT_STATUS__ROGUEXE__WATCHDOG_TIMEOUT_EN 0x00000010U
+#define ROGUE_CR_SAFETY_EVENT_STATUS__ROGUEXE__TRP_FAIL_SHIFT 3U
+#define ROGUE_CR_SAFETY_EVENT_STATUS__ROGUEXE__TRP_FAIL_CLRMSK 0xFFFFFFF7U
+#define ROGUE_CR_SAFETY_EVENT_STATUS__ROGUEXE__TRP_FAIL_EN 0x00000008U
+#define ROGUE_CR_SAFETY_EVENT_STATUS__ROGUEXE__FAULT_FW_SHIFT 2U
+#define ROGUE_CR_SAFETY_EVENT_STATUS__ROGUEXE__FAULT_FW_CLRMSK 0xFFFFFFFBU
+#define ROGUE_CR_SAFETY_EVENT_STATUS__ROGUEXE__FAULT_FW_EN 0x00000004U
+#define ROGUE_CR_SAFETY_EVENT_STATUS__ROGUEXE__FAULT_GPU_SHIFT 1U
+#define ROGUE_CR_SAFETY_EVENT_STATUS__ROGUEXE__FAULT_GPU_CLRMSK 0xFFFFFFFDU
+#define ROGUE_CR_SAFETY_EVENT_STATUS__ROGUEXE__FAULT_GPU_EN 0x00000002U
+#define ROGUE_CR_SAFETY_EVENT_STATUS__ROGUEXE__GPU_PAGE_FAULT_SHIFT 0U
+#define ROGUE_CR_SAFETY_EVENT_STATUS__ROGUEXE__GPU_PAGE_FAULT_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_SAFETY_EVENT_STATUS__ROGUEXE__GPU_PAGE_FAULT_EN 0x00000001U
+
+/* Register ROGUE_CR_SAFETY_EVENT_CLEAR */
+#define ROGUE_CR_SAFETY_EVENT_CLEAR__ROGUEXE 0xF3A0U
+#define ROGUE_CR_SAFETY_EVENT_CLEAR__ROGUEXE__MASKFULL 0x000000000000007FULL
+#define ROGUE_CR_SAFETY_EVENT_CLEAR__ROGUEXE__CPU_PAGE_FAULT_SHIFT 6U
+#define ROGUE_CR_SAFETY_EVENT_CLEAR__ROGUEXE__CPU_PAGE_FAULT_CLRMSK 0xFFFFFFBFU
+#define ROGUE_CR_SAFETY_EVENT_CLEAR__ROGUEXE__CPU_PAGE_FAULT_EN 0x00000040U
+#define ROGUE_CR_SAFETY_EVENT_CLEAR__ROGUEXE__SAFE_COMPUTE_FAIL_SHIFT 5U
+#define ROGUE_CR_SAFETY_EVENT_CLEAR__ROGUEXE__SAFE_COMPUTE_FAIL_CLRMSK 0xFFFFFFDFU
+#define ROGUE_CR_SAFETY_EVENT_CLEAR__ROGUEXE__SAFE_COMPUTE_FAIL_EN 0x00000020U
+#define ROGUE_CR_SAFETY_EVENT_CLEAR__ROGUEXE__WATCHDOG_TIMEOUT_SHIFT 4U
+#define ROGUE_CR_SAFETY_EVENT_CLEAR__ROGUEXE__WATCHDOG_TIMEOUT_CLRMSK 0xFFFFFFEFU
+#define ROGUE_CR_SAFETY_EVENT_CLEAR__ROGUEXE__WATCHDOG_TIMEOUT_EN 0x00000010U
+#define ROGUE_CR_SAFETY_EVENT_CLEAR__ROGUEXE__TRP_FAIL_SHIFT 3U
+#define ROGUE_CR_SAFETY_EVENT_CLEAR__ROGUEXE__TRP_FAIL_CLRMSK 0xFFFFFFF7U
+#define ROGUE_CR_SAFETY_EVENT_CLEAR__ROGUEXE__TRP_FAIL_EN 0x00000008U
+#define ROGUE_CR_SAFETY_EVENT_CLEAR__ROGUEXE__FAULT_FW_SHIFT 2U
+#define ROGUE_CR_SAFETY_EVENT_CLEAR__ROGUEXE__FAULT_FW_CLRMSK 0xFFFFFFFBU
+#define ROGUE_CR_SAFETY_EVENT_CLEAR__ROGUEXE__FAULT_FW_EN 0x00000004U
+#define ROGUE_CR_SAFETY_EVENT_CLEAR__ROGUEXE__FAULT_GPU_SHIFT 1U
+#define ROGUE_CR_SAFETY_EVENT_CLEAR__ROGUEXE__FAULT_GPU_CLRMSK 0xFFFFFFFDU
+#define ROGUE_CR_SAFETY_EVENT_CLEAR__ROGUEXE__FAULT_GPU_EN 0x00000002U
+#define ROGUE_CR_SAFETY_EVENT_CLEAR__ROGUEXE__GPU_PAGE_FAULT_SHIFT 0U
+#define ROGUE_CR_SAFETY_EVENT_CLEAR__ROGUEXE__GPU_PAGE_FAULT_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_SAFETY_EVENT_CLEAR__ROGUEXE__GPU_PAGE_FAULT_EN 0x00000001U
+
+/* Register ROGUE_CR_MTS_SAFETY_EVENT_ENABLE */
+#define ROGUE_CR_MTS_SAFETY_EVENT_ENABLE__ROGUEXE 0xF3D8U
+#define ROGUE_CR_MTS_SAFETY_EVENT_ENABLE__ROGUEXE__MASKFULL 0x000000000000007FULL
+#define ROGUE_CR_MTS_SAFETY_EVENT_ENABLE__ROGUEXE__CPU_PAGE_FAULT_SHIFT 6U
+#define ROGUE_CR_MTS_SAFETY_EVENT_ENABLE__ROGUEXE__CPU_PAGE_FAULT_CLRMSK 0xFFFFFFBFU
+#define ROGUE_CR_MTS_SAFETY_EVENT_ENABLE__ROGUEXE__CPU_PAGE_FAULT_EN 0x00000040U
+#define ROGUE_CR_MTS_SAFETY_EVENT_ENABLE__ROGUEXE__SAFE_COMPUTE_FAIL_SHIFT 5U
+#define ROGUE_CR_MTS_SAFETY_EVENT_ENABLE__ROGUEXE__SAFE_COMPUTE_FAIL_CLRMSK 0xFFFFFFDFU
+#define ROGUE_CR_MTS_SAFETY_EVENT_ENABLE__ROGUEXE__SAFE_COMPUTE_FAIL_EN 0x00000020U
+#define ROGUE_CR_MTS_SAFETY_EVENT_ENABLE__ROGUEXE__WATCHDOG_TIMEOUT_SHIFT 4U
+#define ROGUE_CR_MTS_SAFETY_EVENT_ENABLE__ROGUEXE__WATCHDOG_TIMEOUT_CLRMSK 0xFFFFFFEFU
+#define ROGUE_CR_MTS_SAFETY_EVENT_ENABLE__ROGUEXE__WATCHDOG_TIMEOUT_EN 0x00000010U
+#define ROGUE_CR_MTS_SAFETY_EVENT_ENABLE__ROGUEXE__TRP_FAIL_SHIFT 3U
+#define ROGUE_CR_MTS_SAFETY_EVENT_ENABLE__ROGUEXE__TRP_FAIL_CLRMSK 0xFFFFFFF7U
+#define ROGUE_CR_MTS_SAFETY_EVENT_ENABLE__ROGUEXE__TRP_FAIL_EN 0x00000008U
+#define ROGUE_CR_MTS_SAFETY_EVENT_ENABLE__ROGUEXE__FAULT_FW_SHIFT 2U
+#define ROGUE_CR_MTS_SAFETY_EVENT_ENABLE__ROGUEXE__FAULT_FW_CLRMSK 0xFFFFFFFBU
+#define ROGUE_CR_MTS_SAFETY_EVENT_ENABLE__ROGUEXE__FAULT_FW_EN 0x00000004U
+#define ROGUE_CR_MTS_SAFETY_EVENT_ENABLE__ROGUEXE__FAULT_GPU_SHIFT 1U
+#define ROGUE_CR_MTS_SAFETY_EVENT_ENABLE__ROGUEXE__FAULT_GPU_CLRMSK 0xFFFFFFFDU
+#define ROGUE_CR_MTS_SAFETY_EVENT_ENABLE__ROGUEXE__FAULT_GPU_EN 0x00000002U
+#define ROGUE_CR_MTS_SAFETY_EVENT_ENABLE__ROGUEXE__GPU_PAGE_FAULT_SHIFT 0U
+#define ROGUE_CR_MTS_SAFETY_EVENT_ENABLE__ROGUEXE__GPU_PAGE_FAULT_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_MTS_SAFETY_EVENT_ENABLE__ROGUEXE__GPU_PAGE_FAULT_EN 0x00000001U
+
+/* clang-format on */
+
+#endif /* PVR_ROGUE_CR_DEFS_H */
diff --git a/drivers/gpu/drm/imagination/pvr_rogue_cr_defs_client.h b/drivers/gpu/drm/imagination/pvr_rogue_cr_defs_client.h
new file mode 100644
index 000000000000..9a07516e5337
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_rogue_cr_defs_client.h
@@ -0,0 +1,159 @@
+/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#ifndef PVR_ROGUE_CR_DEFS_CLIENT_H
+#define PVR_ROGUE_CR_DEFS_CLIENT_H
+
+/* clang-format off */
+
+/*
+ * This register controls the anti-aliasing mode of the Tiling Co-Processor; independent control is
+ * provided in both the X and Y axes.
+ * This register needs to be set based on the number of ISP Samples Per Pixel a core supports.
+ *
+ * When ISP Samples Per Pixel = 1:
+ * 2xmsaa is achieved by enabling Y - TE does AA on Y plane only
+ * 4xmsaa is achieved by enabling Y and X - TE does AA on X and Y plane
+ * 8xmsaa not supported by XE cores
+ *
+ * When ISP Samples Per Pixel = 2:
+ * 2xmsaa is achieved by enabling X2 - does not affect TE
+ * 4xmsaa is achieved by enabling Y and X2 - TE does AA on Y plane only
+ * 8xmsaa is achieved by enabling Y, X and X2 - TE does AA on X and Y plane
+ * 8xmsaa not supported by XE cores
+ *
+ * When ISP Samples Per Pixel = 4:
+ * 2xmsaa is achieved by enabling X2 - does not affect TE
+ * 4xmsaa is achieved by enabling Y2 and X2 - TE does AA on Y plane only
+ * 8xmsaa not supported by XE cores
+ */
+/* Register ROGUE_CR_TE_AA */
+#define ROGUE_CR_TE_AA 0x0C00U
+#define ROGUE_CR_TE_AA_MASKFULL 0x000000000000000Full
+/* Y2
+ * Indicates 4xmsaa when X2 and Y2 are set to 1. This does not affect TE and is only used within
+ * TPW.
+ */
+#define ROGUE_CR_TE_AA_Y2_SHIFT 3
+#define ROGUE_CR_TE_AA_Y2_CLRMSK 0xFFFFFFF7
+#define ROGUE_CR_TE_AA_Y2_EN 0x00000008
+/* Y
+ * Anti-Aliasing in Y Plane Enabled
+ */
+#define ROGUE_CR_TE_AA_Y_SHIFT 2
+#define ROGUE_CR_TE_AA_Y_CLRMSK 0xFFFFFFFB
+#define ROGUE_CR_TE_AA_Y_EN 0x00000004
+/* X
+ * Anti-Aliasing in X Plane Enabled
+ */
+#define ROGUE_CR_TE_AA_X_SHIFT 1
+#define ROGUE_CR_TE_AA_X_CLRMSK 0xFFFFFFFD
+#define ROGUE_CR_TE_AA_X_EN 0x00000002
+/* X2
+ * 2x Anti-Aliasing Enabled, affects PPP only
+ */
+#define ROGUE_CR_TE_AA_X2_SHIFT 0
+#define ROGUE_CR_TE_AA_X2_CLRMSK 0xFFFFFFFE
+#define ROGUE_CR_TE_AA_X2_EN 0x00000001
+
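+/*
+ * Example (illustrative): with one ISP sample per pixel, 4x MSAA is selected by writing
+ * (ROGUE_CR_TE_AA_Y_EN | ROGUE_CR_TE_AA_X_EN) to ROGUE_CR_TE_AA, while 2x MSAA only needs
+ * ROGUE_CR_TE_AA_Y_EN.
+ */
+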
+/* MacroTile Boundaries X Plane */
+/* Register ROGUE_CR_TE_MTILE1 */
+#define ROGUE_CR_TE_MTILE1 0x0C08
+#define ROGUE_CR_TE_MTILE1_MASKFULL 0x0000000007FFFFFFull
+/* X1 default: 0x00000004
+ * X1 MacroTile boundary, left tile X for second column of macrotiles (16MT mode) - 32 pixels across
+ * tile
+ */
+#define ROGUE_CR_TE_MTILE1_X1_SHIFT 18
+#define ROGUE_CR_TE_MTILE1_X1_CLRMSK 0xF803FFFF
+/* X2 default: 0x00000008
+ * X2 MacroTile boundary, left tile X for third (16MT) column of macrotiles - 32 pixels across tile
+ */
+#define ROGUE_CR_TE_MTILE1_X2_SHIFT 9U
+#define ROGUE_CR_TE_MTILE1_X2_CLRMSK 0xFFFC01FF
+/* X3 default: 0x0000000c
+ * X3 MacroTile boundary, left tile X for fourth column of macrotiles (16MT) - 32 pixels across tile
+ */
+#define ROGUE_CR_TE_MTILE1_X3_SHIFT 0
+#define ROGUE_CR_TE_MTILE1_X3_CLRMSK 0xFFFFFE00
+
+/* MacroTile Boundaries Y Plane. */
+/* Register ROGUE_CR_TE_MTILE2 */
+#define ROGUE_CR_TE_MTILE2 0x0C10
+#define ROGUE_CR_TE_MTILE2_MASKFULL 0x0000000007FFFFFFull
+/* Y1 default: 0x00000004
+ * Y1 MacroTile boundary, top tile Y for second column of macrotiles (16MT mode) - 32 pixels tile
+ * height
+ */
+#define ROGUE_CR_TE_MTILE2_Y1_SHIFT 18
+#define ROGUE_CR_TE_MTILE2_Y1_CLRMSK 0xF803FFFF
+/* Y2 default: 0x00000008
+ * Y2 MacroTile boundary, top tile Y for third (16MT) column of macrotiles - 32 pixels tile height
+ */
+#define ROGUE_CR_TE_MTILE2_Y2_SHIFT 9
+#define ROGUE_CR_TE_MTILE2_Y2_CLRMSK 0xFFFC01FF
+/* Y3 default: 0x0000000c
+ * Y3 MacroTile boundary, top tile Y for fourth column of macrotiles (16MT) - 32 pixels tile height
+ */
+#define ROGUE_CR_TE_MTILE2_Y3_SHIFT 0
+#define ROGUE_CR_TE_MTILE2_Y3_CLRMSK 0xFFFFFE00
+
+/*
+ * In order to perform the tiling operation and generate the display list, the maximum screen size
+ * must be configured in terms of the number of tiles in the X and Y axes.
+ */
+
+/* Register ROGUE_CR_TE_SCREEN */
+#define ROGUE_CR_TE_SCREEN 0x0C18U
+#define ROGUE_CR_TE_SCREEN_MASKFULL 0x00000000001FF1FFull
+/* YMAX default: 0x00000010
+ * Maximum Y tile address visible on screen, 32 pixel tile height, 16Kx16K max screen size
+ */
+#define ROGUE_CR_TE_SCREEN_YMAX_SHIFT 12
+#define ROGUE_CR_TE_SCREEN_YMAX_CLRMSK 0xFFE00FFF
+/* XMAX default: 0x00000010
+ * Maximum X tile address visible on screen, 32 pixel tile width, 16Kx16K max screen size
+ */
+#define ROGUE_CR_TE_SCREEN_XMAX_SHIFT 0
+#define ROGUE_CR_TE_SCREEN_XMAX_CLRMSK 0xFFFFFE00
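+
+/*
+ * Worked example (assuming the fields hold the last visible tile index): a 1920x1080 render
+ * target with 32x32 pixel tiles would use XMAX = 59 and YMAX = 33.
+ */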
+
+/*
+ * In order to perform the tiling operation and generate the display list, the maximum screen size
+ * must also be configured in terms of the number of pixels in the X and Y axes, since this may not
+ * be the same as the number of tiles defined in the ROGUE_CR_TE_SCREEN register.
+ */
+/* Register ROGUE_CR_PPP_SCREEN */
+#define ROGUE_CR_PPP_SCREEN 0x0C98
+#define ROGUE_CR_PPP_SCREEN_MASKFULL 0x000000007FFF7FFFull
+/* PIXYMAX
+ * Screen height in pixels. (16K x 16K max screen size)
+ */
+#define ROGUE_CR_PPP_SCREEN_PIXYMAX_SHIFT 16
+#define ROGUE_CR_PPP_SCREEN_PIXYMAX_CLRMSK 0x8000FFFF
+/* PIXXMAX
+ * Screen width in pixels. (16K x 16K max screen size)
+ */
+#define ROGUE_CR_PPP_SCREEN_PIXXMAX_SHIFT 0
+#define ROGUE_CR_PPP_SCREEN_PIXXMAX_CLRMSK 0xFFFF8000
+
+/* Register ROGUE_CR_ISP_MTILE_SIZE */
+#define ROGUE_CR_ISP_MTILE_SIZE 0x0F18
+#define ROGUE_CR_ISP_MTILE_SIZE_MASKFULL 0x0000000003FF03FFull
+/* X
+ * Macrotile width, in tiles. A value of zero corresponds to the maximum size
+ */
+#define ROGUE_CR_ISP_MTILE_SIZE_X_SHIFT 16
+#define ROGUE_CR_ISP_MTILE_SIZE_X_CLRMSK 0xFC00FFFF
+#define ROGUE_CR_ISP_MTILE_SIZE_X_ALIGNSHIFT 0
+#define ROGUE_CR_ISP_MTILE_SIZE_X_ALIGNSIZE 1
+/* Y
+ * Macrotile height, in tiles. A value of zero corresponds to the maximum size
+ */
+#define ROGUE_CR_ISP_MTILE_SIZE_Y_SHIFT 0
+#define ROGUE_CR_ISP_MTILE_SIZE_Y_CLRMSK 0xFFFFFC00
+#define ROGUE_CR_ISP_MTILE_SIZE_Y_ALIGNSHIFT 0
+#define ROGUE_CR_ISP_MTILE_SIZE_Y_ALIGNSIZE 1
+
+/* clang-format on */
+
+#endif /* PVR_ROGUE_CR_DEFS_CLIENT_H */
diff --git a/drivers/gpu/drm/imagination/pvr_rogue_defs.h b/drivers/gpu/drm/imagination/pvr_rogue_defs.h
new file mode 100644
index 000000000000..b0662656444b
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_rogue_defs.h
@@ -0,0 +1,179 @@
+/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#ifndef PVR_ROGUE_DEFS_H
+#define PVR_ROGUE_DEFS_H
+
+#include "pvr_rogue_cr_defs.h"
+
+#include <linux/bits.h>
+
+/*
+ ******************************************************************************
+ * ROGUE Defines
+ ******************************************************************************
+ */
+
+#define ROGUE_FW_MAX_NUM_OS (8U)
+#define ROGUE_FW_HOST_OS (0U)
+#define ROGUE_FW_GUEST_OSID_START (1U)
+
+#define ROGUE_FW_THREAD_0 (0U)
+#define ROGUE_FW_THREAD_1 (1U)
+
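+/* Presumably converts a cache line size in bits to bytes; non-positive inputs yield 0. */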
+#define GET_ROGUE_CACHE_LINE_SIZE(x) ((((s32)(x)) > 0) ? ((x) / 8) : (0))
+
+#define MAX_HW_GEOM_FRAG_CONTEXTS 2U
+
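+/*
+ * Each CLK_CTRL* field is two bits wide (OFF = 0b00, ON = 0b01, AUTO = 0b10), so the repeating
+ * 0x5 / 0xA nibble patterns below drive every field to ON or AUTO respectively, masked to the
+ * register's valid bits.
+ */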
+#define ROGUE_CR_CLK_CTRL_ALL_ON \
+	(0x5555555555555555ull & ROGUE_CR_CLK_CTRL_MASKFULL)
+#define ROGUE_CR_CLK_CTRL_ALL_AUTO \
+	(0xaaaaaaaaaaaaaaaaull & ROGUE_CR_CLK_CTRL_MASKFULL)
+#define ROGUE_CR_CLK_CTRL2_ALL_ON \
+	(0x5555555555555555ull & ROGUE_CR_CLK_CTRL2_MASKFULL)
+#define ROGUE_CR_CLK_CTRL2_ALL_AUTO \
+	(0xaaaaaaaaaaaaaaaaull & ROGUE_CR_CLK_CTRL2_MASKFULL)
+
+#define ROGUE_CR_SOFT_RESET_DUST_n_CORE_EN    \
+	(ROGUE_CR_SOFT_RESET_DUST_A_CORE_EN | \
+	 ROGUE_CR_SOFT_RESET_DUST_B_CORE_EN | \
+	 ROGUE_CR_SOFT_RESET_DUST_C_CORE_EN | \
+	 ROGUE_CR_SOFT_RESET_DUST_D_CORE_EN | \
+	 ROGUE_CR_SOFT_RESET_DUST_E_CORE_EN | \
+	 ROGUE_CR_SOFT_RESET_DUST_F_CORE_EN | \
+	 ROGUE_CR_SOFT_RESET_DUST_G_CORE_EN | \
+	 ROGUE_CR_SOFT_RESET_DUST_H_CORE_EN)
+
+/* SOFT_RESET Rascal and DUSTs bits */
+#define ROGUE_CR_SOFT_RESET_RASCALDUSTS_EN    \
+	(ROGUE_CR_SOFT_RESET_RASCAL_CORE_EN | \
+	 ROGUE_CR_SOFT_RESET_DUST_n_CORE_EN)
+
+/* SOFT_RESET steps as defined in the TRM */
+#define ROGUE_S7_SOFT_RESET_DUSTS (ROGUE_CR_SOFT_RESET_DUST_n_CORE_EN)
+
+#define ROGUE_S7_SOFT_RESET_JONES                                 \
+	(ROGUE_CR_SOFT_RESET_PM_EN | ROGUE_CR_SOFT_RESET_VDM_EN | \
+	 ROGUE_CR_SOFT_RESET_ISP_EN)
+
+#define ROGUE_S7_SOFT_RESET_JONES_ALL                             \
+	(ROGUE_S7_SOFT_RESET_JONES | ROGUE_CR_SOFT_RESET_BIF_EN | \
+	 ROGUE_CR_SOFT_RESET_SLC_EN | ROGUE_CR_SOFT_RESET_GARTEN_EN)
+
+#define ROGUE_S7_SOFT_RESET2                                                  \
+	(ROGUE_CR_SOFT_RESET2_BLACKPEARL_EN | ROGUE_CR_SOFT_RESET2_PIXEL_EN | \
+	 ROGUE_CR_SOFT_RESET2_CDM_EN | ROGUE_CR_SOFT_RESET2_VERTEX_EN)
+
+#define ROGUE_BIF_PM_PHYSICAL_PAGE_ALIGNSHIFT (12U)
+#define ROGUE_BIF_PM_PHYSICAL_PAGE_SIZE \
+	BIT(ROGUE_BIF_PM_PHYSICAL_PAGE_ALIGNSHIFT)
+
+#define ROGUE_BIF_PM_VIRTUAL_PAGE_ALIGNSHIFT (14U)
+#define ROGUE_BIF_PM_VIRTUAL_PAGE_SIZE BIT(ROGUE_BIF_PM_VIRTUAL_PAGE_ALIGNSHIFT)
+
+#define ROGUE_BIF_PM_FREELIST_BASE_ADDR_ALIGNSIZE (16U)
+
+/*
+ * To get the number of required Dusts, divide the number of
+ * clusters by 2 and round up
+ */
+#define ROGUE_REQ_NUM_DUSTS(CLUSTERS) (((CLUSTERS) + 1U) / 2U)
+
+/*
+ * To get the number of required Bernado/Phantom(s), divide
+ * the number of clusters by 4 and round up
+ */
+#define ROGUE_REQ_NUM_PHANTOMS(CLUSTERS) (((CLUSTERS) + 3U) / 4U)
+#define ROGUE_REQ_NUM_BERNADOS(CLUSTERS) (((CLUSTERS) + 3U) / 4U)
+#define ROGUE_REQ_NUM_BLACKPEARLS(CLUSTERS) (((CLUSTERS) + 3U) / 4U)
+
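+/*
+ * Worked examples: a 3-cluster configuration needs ROGUE_REQ_NUM_DUSTS(3) = 2 Dusts and
+ * ROGUE_REQ_NUM_PHANTOMS(3) = 1 Phantom; a 6-cluster configuration needs 3 Dusts and 2 Phantoms.
+ */
+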
+/*
+ * FW MMU contexts
+ */
+#define MMU_CONTEXT_MAPPING_FWPRIV (0x0) /* FW code/private data */
+#define MMU_CONTEXT_MAPPING_FWIF (0x0) /* Host/FW data */
+
+/*
+ * Utility macros to calculate CAT_BASE register addresses
+ */
+#define BIF_CAT_BASEX(n)          \
+	(ROGUE_CR_BIF_CAT_BASE0 + \
+	 (n) * (ROGUE_CR_BIF_CAT_BASE1 - ROGUE_CR_BIF_CAT_BASE0))
+
+#define FWCORE_MEM_CAT_BASEX(n)                 \
+	(ROGUE_CR_FWCORE_MEM_CAT_BASE0 +        \
+	 (n) * (ROGUE_CR_FWCORE_MEM_CAT_BASE1 - \
+		ROGUE_CR_FWCORE_MEM_CAT_BASE0))
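+
+/*
+ * Example: BIF_CAT_BASEX(2) and FWCORE_MEM_CAT_BASEX(2) evaluate to the address of the third
+ * catalogue base register in each bank, assuming the registers are evenly spaced.
+ */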
+
+/*
+ * FWCORE wrapper register defines
+ */
+#define FWCORE_ADDR_REMAP_CONFIG0_MMU_CONTEXT_SHIFT \
+	ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG0_CBASE_SHIFT
+#define FWCORE_ADDR_REMAP_CONFIG0_MMU_CONTEXT_CLRMSK \
+	ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG0_CBASE_CLRMSK
+#define FWCORE_ADDR_REMAP_CONFIG0_SIZE_ALIGNSHIFT (12U)
+
+#define ROGUE_MAX_COMPUTE_SHARED_REGISTERS (2 * 1024)
+#define ROGUE_MAX_VERTEX_SHARED_REGISTERS 1024
+#define ROGUE_MAX_PIXEL_SHARED_REGISTERS 1024
+#define ROGUE_CSRM_LINE_SIZE_IN_DWORDS (64 * 4 * 4)
+
+#define ROGUE_CDMCTRL_USC_COMMON_SIZE_ALIGNSIZE 64
+#define ROGUE_CDMCTRL_USC_COMMON_SIZE_UPPER 256
+
+/*
+ * The maximum amount of local memory which can be allocated by a single kernel
+ * (in dwords/32-bit registers).
+ *
+ * ROGUE_CDMCTRL_USC_COMMON_SIZE_ALIGNSIZE is in bytes so we divide by four.
+ */
+#define ROGUE_MAX_PER_KERNEL_LOCAL_MEM_SIZE_REGS ((ROGUE_CDMCTRL_USC_COMMON_SIZE_ALIGNSIZE * \
+						   ROGUE_CDMCTRL_USC_COMMON_SIZE_UPPER) >> 2)
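+/* Worked value: (64 * 256) bytes / 4 bytes per dword = 4096 dwords. */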
+
+/*
+ ******************************************************************************
+ * WA HWBRNs
+ ******************************************************************************
+ */
+
+/* GPU CR timer tick in GPU cycles */
+#define ROGUE_CRTIME_TICK_IN_CYCLES (256U)
+
+/* for nohw multicore return max cores possible to client */
+#define ROGUE_MULTICORE_MAX_NOHW_CORES (4U)
+
+/*
+ * If the size of the SLC is less than this value then the TPU bypasses the SLC.
+ */
+#define ROGUE_TPU_CACHED_SLC_SIZE_THRESHOLD (128U * 1024U)
+
+/*
+ * If the size of the SLC is bigger than this value then the TCU must not be
+ * bypassed in the SLC.
+ * In XE_MEMORY_HIERARCHY cores, the TCU is bypassed by default.
+ */
+#define ROGUE_TCU_CACHED_SLC_SIZE_THRESHOLD (32U * 1024U)
+
+/*
+ * Register used by the FW to track the current boot stage (not used in MIPS)
+ */
+#define ROGUE_FW_BOOT_STAGE_REGISTER (ROGUE_CR_POWER_ESTIMATE_RESULT)
+
+/*
+ * Virtualisation definitions
+ */
+#define ROGUE_VIRTUALISATION_REG_SIZE_PER_OS \
+	(ROGUE_CR_MTS_SCHEDULE1 - ROGUE_CR_MTS_SCHEDULE)
+
+/*
+ * Macro used to indicate which version of HWPerf is active
+ */
+#define ROGUE_FEATURE_HWPERF_ROGUE
+
+/*
+ * Maximum number of cores supported by TRP
+ */
+#define ROGUE_TRP_MAX_NUM_CORES (4U)
+
+#endif /* PVR_ROGUE_DEFS_H */
diff --git a/drivers/gpu/drm/imagination/pvr_rogue_fwif.h b/drivers/gpu/drm/imagination/pvr_rogue_fwif.h
new file mode 100644
index 000000000000..4f720de6d4ca
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_rogue_fwif.h
@@ -0,0 +1,2208 @@
+/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#ifndef PVR_ROGUE_FWIF_H
+#define PVR_ROGUE_FWIF_H
+
+#include <linux/bits.h>
+#include <linux/build_bug.h>
+#include <linux/compiler.h>
+#include <linux/kernel.h>
+#include <linux/types.h>
+
+#include "pvr_rogue_defs.h"
+#include "pvr_rogue_fwif_common.h"
+#include "pvr_rogue_fwif_shared.h"
+
+/*
+ ****************************************************************************
+ * Logging type
+ ****************************************************************************
+ */
+#define ROGUE_FWIF_LOG_TYPE_NONE 0x00000000U
+#define ROGUE_FWIF_LOG_TYPE_TRACE 0x00000001U
+#define ROGUE_FWIF_LOG_TYPE_GROUP_MAIN 0x00000002U
+#define ROGUE_FWIF_LOG_TYPE_GROUP_MTS 0x00000004U
+#define ROGUE_FWIF_LOG_TYPE_GROUP_CLEANUP 0x00000008U
+#define ROGUE_FWIF_LOG_TYPE_GROUP_CSW 0x00000010U
+#define ROGUE_FWIF_LOG_TYPE_GROUP_BIF 0x00000020U
+#define ROGUE_FWIF_LOG_TYPE_GROUP_PM 0x00000040U
+#define ROGUE_FWIF_LOG_TYPE_GROUP_RTD 0x00000080U
+#define ROGUE_FWIF_LOG_TYPE_GROUP_SPM 0x00000100U
+#define ROGUE_FWIF_LOG_TYPE_GROUP_POW 0x00000200U
+#define ROGUE_FWIF_LOG_TYPE_GROUP_HWR 0x00000400U
+#define ROGUE_FWIF_LOG_TYPE_GROUP_HWP 0x00000800U
+#define ROGUE_FWIF_LOG_TYPE_GROUP_RPM 0x00001000U
+#define ROGUE_FWIF_LOG_TYPE_GROUP_DMA 0x00002000U
+#define ROGUE_FWIF_LOG_TYPE_GROUP_MISC 0x00004000U
+#define ROGUE_FWIF_LOG_TYPE_GROUP_DEBUG 0x80000000U
+#define ROGUE_FWIF_LOG_TYPE_GROUP_MASK 0x80007FFEU
+#define ROGUE_FWIF_LOG_TYPE_MASK 0x80007FFFU
+
+/* String used in pvrdebug -h output */
+#define ROGUE_FWIF_LOG_GROUPS_STRING_LIST \
+	"main,mts,cleanup,csw,bif,pm,rtd,spm,pow,hwr,hwp,rpm,dma,misc,debug"
+
+/* Table entry to map log group strings to log type value */
+struct rogue_fwif_log_group_map_entry {
+	const char *log_group_name;
+	u32 log_group_type;
+};
+
+/*
+ * Macro for use with the ROGUE_FWIF_LOG_GROUP_MAP_ENTRY type to create a lookup
+ * table where needed. Keep log group names short, no more than 20 chars.
+ */
+static const struct rogue_fwif_log_group_map_entry rogue_fwif_log_group_name_value_map[] = {
+	{"none", ROGUE_FWIF_LOG_TYPE_NONE},
+	{"main", ROGUE_FWIF_LOG_TYPE_GROUP_MAIN},
+	{"mts", ROGUE_FWIF_LOG_TYPE_GROUP_MTS},
+	{"cleanup", ROGUE_FWIF_LOG_TYPE_GROUP_CLEANUP},
+	{"csw", ROGUE_FWIF_LOG_TYPE_GROUP_CSW},
+	{"bif", ROGUE_FWIF_LOG_TYPE_GROUP_BIF},
+	{"pm", ROGUE_FWIF_LOG_TYPE_GROUP_PM},
+	{"rtd", ROGUE_FWIF_LOG_TYPE_GROUP_RTD},
+	{"spm", ROGUE_FWIF_LOG_TYPE_GROUP_SPM},
+	{"pow", ROGUE_FWIF_LOG_TYPE_GROUP_POW},
+	{"hwr", ROGUE_FWIF_LOG_TYPE_GROUP_HWR},
+	{"hwp", ROGUE_FWIF_LOG_TYPE_GROUP_HWP},
+	{"rpm", ROGUE_FWIF_LOG_TYPE_GROUP_RPM},
+	{"dma", ROGUE_FWIF_LOG_TYPE_GROUP_DMA},
+	{"misc", ROGUE_FWIF_LOG_TYPE_GROUP_MISC},
+	{"debug", ROGUE_FWIF_LOG_TYPE_GROUP_DEBUG}
+};
+
+/*
+ ****************************************************************************
+ * ROGUE FW signature checks
+ ****************************************************************************
+ */
+#define ROGUE_FW_SIG_BUFFER_SIZE_MIN (8192)
+
+#define ROGUE_FWIF_TIMEDIFF_ID ((0x1UL << 28) | ROGUE_CR_TIMER)
+
+/*
+ ****************************************************************************
+ * Trace Buffer
+ ****************************************************************************
+ */
+
+/* Default size of ROGUE_FWIF_TRACEBUF_SPACE in DWords */
+#define ROGUE_FW_TRACE_BUF_DEFAULT_SIZE_IN_DWORDS 12000U
+#define ROGUE_FW_TRACE_BUFFER_ASSERT_SIZE 200U
+#define ROGUE_FW_THREAD_NUM 1U
+#define ROGUE_FW_THREAD_MAX 2U
+
+#define ROGUE_FW_POLL_TYPE_SET 0x80000000U
+
+struct rogue_fwif_file_info_buf {
+	char path[ROGUE_FW_TRACE_BUFFER_ASSERT_SIZE];
+	char info[ROGUE_FW_TRACE_BUFFER_ASSERT_SIZE];
+	u32 line_num;
+	u32 padding;
+} __aligned(8);
+
+struct rogue_fwif_tracebuf_space {
+	u32 trace_pointer;
+
+	u32 trace_buffer_fw_addr;
+
+	/* To be used by host when reading from trace buffer */
+	u32 *trace_buffer;
+
+	struct rogue_fwif_file_info_buf assert_buf;
+} __aligned(8);
+
+/* Total number of FW fault logs stored */
+#define ROGUE_FWIF_FWFAULTINFO_MAX (8U)
+
+struct rogue_fw_fault_info {
+	aligned_u64 cr_timer;
+	aligned_u64 os_timer;
+
+	u32 data __aligned(8);
+	u32 reserved;
+	struct rogue_fwif_file_info_buf fault_buf;
+} __aligned(8);
+
+enum rogue_fwif_pow_state {
+	ROGUE_FWIF_POW_OFF, /* idle and ready for full power down */
+	ROGUE_FWIF_POW_ON, /* running HW commands */
+	ROGUE_FWIF_POW_FORCED_IDLE, /* forced idle */
+	ROGUE_FWIF_POW_IDLE, /* idle waiting for host handshake */
+};
+
+/* Firmware HWR states */
+/* The HW state is ok or locked up */
+#define ROGUE_FWIF_HWR_HARDWARE_OK BIT(0)
+/* Tells if a HWR reset is in progress */
+#define ROGUE_FWIF_HWR_RESET_IN_PROGRESS BIT(1)
+/* A DM unrelated lockup has been detected */
+#define ROGUE_FWIF_HWR_GENERAL_LOCKUP BIT(3)
+/* At least one DM is running without being close to a lockup */
+#define ROGUE_FWIF_HWR_DM_RUNNING_OK BIT(4)
+/* At least one DM is close to lockup */
+#define ROGUE_FWIF_HWR_DM_STALLING BIT(5)
+/* The FW has faulted and needs to restart */
+#define ROGUE_FWIF_HWR_FW_FAULT BIT(6)
+/* The FW has requested the host to restart it */
+#define ROGUE_FWIF_HWR_RESTART_REQUESTED BIT(7)
+
+#define ROGUE_FWIF_PHR_STATE_SHIFT (8U)
+/* The FW has requested the host to restart it, per PHR configuration */
+#define ROGUE_FWIF_PHR_RESTART_REQUESTED ((1) << ROGUE_FWIF_PHR_STATE_SHIFT)
+/* A PHR triggered GPU reset has just finished */
+#define ROGUE_FWIF_PHR_RESTART_FINISHED ((2) << ROGUE_FWIF_PHR_STATE_SHIFT)
+#define ROGUE_FWIF_PHR_RESTART_MASK \
+	(ROGUE_FWIF_PHR_RESTART_REQUESTED | ROGUE_FWIF_PHR_RESTART_FINISHED)
+
+#define ROGUE_FWIF_PHR_MODE_OFF (0UL)
+#define ROGUE_FWIF_PHR_MODE_RD_RESET (1UL)
+#define ROGUE_FWIF_PHR_MODE_FULL_RESET (2UL)
+
+/* Firmware per-DM HWR states */
+/* DM is working if all flags are cleared */
+#define ROGUE_FWIF_DM_STATE_WORKING (0)
+/* DM is idle and ready for HWR */
+#define ROGUE_FWIF_DM_STATE_READY_FOR_HWR BIT(0)
+/* DM needs to skip to the next cmd before resuming processing */
+#define ROGUE_FWIF_DM_STATE_NEEDS_SKIP BIT(2)
+/* DM needs partial render cleanup before resuming processing */
+#define ROGUE_FWIF_DM_STATE_NEEDS_PR_CLEANUP BIT(3)
+/* DM needs to increment the Recovery Count once fully recovered */
+#define ROGUE_FWIF_DM_STATE_NEEDS_TRACE_CLEAR BIT(4)
+/* DM was identified as locking up and causing HWR */
+#define ROGUE_FWIF_DM_STATE_GUILTY_LOCKUP BIT(5)
+/* DM was innocently affected by another lockup which caused HWR */
+#define ROGUE_FWIF_DM_STATE_INNOCENT_LOCKUP BIT(6)
+/* DM was identified as over-running and causing HWR */
+#define ROGUE_FWIF_DM_STATE_GUILTY_OVERRUNING BIT(7)
+/* DM was innocently affected by another DM over-running which caused HWR */
+#define ROGUE_FWIF_DM_STATE_INNOCENT_OVERRUNING BIT(8)
+/* DM was forced into HWR as it delayed more important workloads */
+#define ROGUE_FWIF_DM_STATE_HARD_CONTEXT_SWITCH BIT(9)
+/* DM was forced into HWR due to an uncorrected GPU ECC error */
+#define ROGUE_FWIF_DM_STATE_GPU_ECC_HWR BIT(10)
+
+/* Firmware's connection state */
+enum rogue_fwif_connection_fw_state {
+	/* Firmware is offline */
+	ROGUE_FW_CONNECTION_FW_OFFLINE = 0,
+	/* Firmware is initialised */
+	ROGUE_FW_CONNECTION_FW_READY,
+	/* Firmware connection is fully established */
+	ROGUE_FW_CONNECTION_FW_ACTIVE,
+	/* Firmware is clearing up connection data */
+	ROGUE_FW_CONNECTION_FW_OFFLOADING,
+	ROGUE_FW_CONNECTION_FW_STATE_COUNT
+};
+
+/* OS' connection state */
+enum rogue_fwif_connection_os_state {
+	/* OS is offline */
+	ROGUE_FW_CONNECTION_OS_OFFLINE = 0,
+	/* OS's KM driver is set up and waiting */
+	ROGUE_FW_CONNECTION_OS_READY,
+	/* OS connection is fully established */
+	ROGUE_FW_CONNECTION_OS_ACTIVE,
+	ROGUE_FW_CONNECTION_OS_STATE_COUNT
+};
+
+struct rogue_fwif_os_runtime_flags {
+	unsigned int os_state : 3;
+	unsigned int fl_ok : 1;
+	unsigned int fl_grow_pending : 1;
+	unsigned int isolated_os : 1;
+	unsigned int reserved : 26;
+};
+
+#define PVR_SLR_LOG_ENTRIES 10
+/* MAX_CLIENT_CCB_NAME not visible to this header */
+#define PVR_SLR_LOG_STRLEN 30
+
+struct rogue_fwif_slr_entry {
+	aligned_u64 timestamp;
+	u32 fw_ctx_addr;
+	u32 num_ufos;
+	char ccb_name[PVR_SLR_LOG_STRLEN];
+	char padding[2];
+} __aligned(8);
+
+#define MAX_THREAD_NUM 2
+
+/* firmware trace control data */
+struct rogue_fwif_tracebuf {
+	u32 log_type;
+	struct rogue_fwif_tracebuf_space tracebuf[MAX_THREAD_NUM];
+	/*
+	 * Member initialised only when the trace buffer is actually allocated (in
+	 * ROGUETraceBufferInitOnDemandResources)
+	 */
+	u32 tracebuf_size_in_dwords;
+	/* Compatibility and other flags */
+	u32 tracebuf_flags;
+} __aligned(8);
+
+/* firmware system data shared with the Host driver */
+struct rogue_fwif_sysdata {
+	/* Configuration flags from host */
+	u32 config_flags;
+	/* Extended configuration flags from host */
+	u32 config_flags_ext;
+	enum rogue_fwif_pow_state pow_state;
+	u32 hw_perf_ridx;
+	u32 hw_perf_widx;
+	u32 hw_perf_wrap_count;
+	/* Constant after setup, needed in FW */
+	u32 hw_perf_size;
+	/* The number of times the FW drops a packet due to buffer full */
+	u32 hw_perf_drop_count;
+
+	/*
+	 * hw_perf_ut, first_drop_ordinal and last_drop_ordinal are only valid
+	 * when the FW is built with ROGUE_HWPERF_UTILIZATION &
+	 * ROGUE_HWPERF_DROP_TRACKING defined in rogue_fw_hwperf.c
+	 */
+	/* Buffer utilisation, high watermark of bytes in use */
+	u32 hw_perf_ut;
+	/* The ordinal of the first packet the FW dropped */
+	u32 first_drop_ordinal;
+	/* The ordinal of the last packet the FW dropped */
+	u32 last_drop_ordinal;
+	/* State flags for each Operating System mirrored from Fw coremem */
+	struct rogue_fwif_os_runtime_flags
+		os_runtime_flags_mirror[ROGUE_FW_MAX_NUM_OS];
+
+	struct rogue_fw_fault_info fault_info[ROGUE_FWIF_FWFAULTINFO_MAX];
+	u32 fw_faults;
+	u32 cr_poll_addr[MAX_THREAD_NUM];
+	u32 cr_poll_mask[MAX_THREAD_NUM];
+	u32 cr_poll_count[MAX_THREAD_NUM];
+	aligned_u64 start_idle_time;
+
+#if defined(SUPPORT_ROGUE_FW_STATS_FRAMEWORK)
+#	define ROGUE_FWIF_STATS_FRAMEWORK_LINESIZE (8)
+#	define ROGUE_FWIF_STATS_FRAMEWORK_MAX \
+		(2048 * ROGUE_FWIF_STATS_FRAMEWORK_LINESIZE)
+	u32 fw_stats_buf[ROGUE_FWIF_STATS_FRAMEWORK_MAX] __aligned(8);
+#endif
+	u32 hwr_state_flags;
+	u32 hwr_recovery_flags[PVR_FWIF_DM_MAX];
+	/* Compatibility and other flags */
+	u32 fw_sys_data_flags;
+	/* Identify whether MC config is P-P or P-S */
+	u32 mc_config;
+} __aligned(8);
+
+/* per-os firmware shared data */
+struct rogue_fwif_osdata {
+	/* Configuration flags from an OS */
+	u32 fw_os_config_flags;
+	/* Markers to signal that the host should perform a full sync check */
+	u32 fw_sync_check_mark;
+	u32 host_sync_check_mark;
+
+	u32 forced_updates_requested;
+	u8 slr_log_wp;
+	struct rogue_fwif_slr_entry slr_log_first;
+	struct rogue_fwif_slr_entry slr_log[PVR_SLR_LOG_ENTRIES];
+	aligned_u64 last_forced_update_time;
+
+	/* Interrupt count from FW threads */
+	u32 interrupt_count[MAX_THREAD_NUM];
+	u32 kccb_cmds_executed;
+	u32 power_sync_fw_addr;
+	/* Compatibility and other flags */
+	u32 fw_os_data_flags;
+	u32 padding;
+} __aligned(8);
+
+/* Firmware trace time-stamp field breakup */
+
+/* ROGUE_CR_TIMER register read (48 bits) value */
+#define ROGUE_FWT_TIMESTAMP_TIME_SHIFT (0U)
+#define ROGUE_FWT_TIMESTAMP_TIME_CLRMSK (0xFFFF000000000000ull)
+
+/* Extra debug-info (16 bits) */
+#define ROGUE_FWT_TIMESTAMP_DEBUG_INFO_SHIFT (48U)
+#define ROGUE_FWT_TIMESTAMP_DEBUG_INFO_CLRMSK ~ROGUE_FWT_TIMESTAMP_TIME_CLRMSK
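+
+/*
+ * For example, the two fields can be extracted from a raw timestamp with:
+ *   time       = timestamp & ~ROGUE_FWT_TIMESTAMP_TIME_CLRMSK;
+ *   debug_info = (timestamp & ~ROGUE_FWT_TIMESTAMP_DEBUG_INFO_CLRMSK) >>
+ *                ROGUE_FWT_TIMESTAMP_DEBUG_INFO_SHIFT;
+ */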
+
+/* Debug-info sub-fields */
+/*
+ * Bit 0: ROGUE_CR_EVENT_STATUS_MMU_PAGE_FAULT bit from ROGUE_CR_EVENT_STATUS
+ * register
+ */
+#define ROGUE_FWT_DEBUG_INFO_MMU_PAGE_FAULT_SHIFT (0U)
+#define ROGUE_FWT_DEBUG_INFO_MMU_PAGE_FAULT_SET \
+	BIT(ROGUE_FWT_DEBUG_INFO_MMU_PAGE_FAULT_SHIFT)
+
+/* Bit 1: ROGUE_CR_BIF_MMU_ENTRY_PENDING bit from ROGUE_CR_BIF_MMU_ENTRY register */
+#define ROGUE_FWT_DEBUG_INFO_MMU_ENTRY_PENDING_SHIFT (1U)
+#define ROGUE_FWT_DEBUG_INFO_MMU_ENTRY_PENDING_SET \
+	BIT(ROGUE_FWT_DEBUG_INFO_MMU_ENTRY_PENDING_SHIFT)
+
+/* Bit 2: ROGUE_CR_SLAVE_EVENT register is non-zero */
+#define ROGUE_FWT_DEBUG_INFO_SLAVE_EVENTS_SHIFT (2U)
+#define ROGUE_FWT_DEBUG_INFO_SLAVE_EVENTS_SET \
+	BIT(ROGUE_FWT_DEBUG_INFO_SLAVE_EVENTS_SHIFT)
+
+/* Bit 3-15: Unused bits */
+
+#define ROGUE_FWT_DEBUG_INFO_STR_MAXLEN 64
+#define ROGUE_FWT_DEBUG_INFO_STR_PREPEND " (debug info: "
+#define ROGUE_FWT_DEBUG_INFO_STR_APPEND ")"
+
+/*
+ ******************************************************************************
+ * HWR Data
+ ******************************************************************************
+ */
+enum rogue_hwrtype {
+	ROGUE_HWRTYPE_UNKNOWNFAILURE = 0,
+	ROGUE_HWRTYPE_OVERRUN = 1,
+	ROGUE_HWRTYPE_POLLFAILURE = 2,
+	ROGUE_HWRTYPE_BIF0FAULT = 3,
+	ROGUE_HWRTYPE_BIF1FAULT = 4,
+	ROGUE_HWRTYPE_TEXASBIF0FAULT = 5,
+	ROGUE_HWRTYPE_MMUFAULT = 6,
+	ROGUE_HWRTYPE_MMUMETAFAULT = 7,
+	ROGUE_HWRTYPE_MIPSTLBFAULT = 8,
+	ROGUE_HWRTYPE_ECCFAULT = 9,
+	ROGUE_HWRTYPE_MMURISCVFAULT = 10,
+};
+
+#define ROGUE_FWIF_HWRTYPE_BIF_BANK_GET(hwr_type) \
+	(((hwr_type) == ROGUE_HWRTYPE_BIF0FAULT) ? 0 : 1)
+
+#define ROGUE_FWIF_HWRTYPE_PAGE_FAULT_GET(hwr_type)       \
+	((((hwr_type) == ROGUE_HWRTYPE_BIF0FAULT) ||      \
+	  ((hwr_type) == ROGUE_HWRTYPE_BIF1FAULT) ||      \
+	  ((hwr_type) == ROGUE_HWRTYPE_TEXASBIF0FAULT) || \
+	  ((hwr_type) == ROGUE_HWRTYPE_MMUFAULT) ||       \
+	  ((hwr_type) == ROGUE_HWRTYPE_MMUMETAFAULT) ||   \
+	  ((hwr_type) == ROGUE_HWRTYPE_MIPSTLBFAULT) ||   \
+	  ((hwr_type) == ROGUE_HWRTYPE_MMURISCVFAULT))    \
+		 ? true                                   \
+		 : false)
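+
+/*
+ * For example, ROGUE_FWIF_HWRTYPE_PAGE_FAULT_GET() evaluates to true for
+ * ROGUE_HWRTYPE_MMUFAULT and to false for ROGUE_HWRTYPE_POLLFAILURE.
+ */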
+
+struct rogue_bifinfo {
+	aligned_u64 bif_req_status;
+	aligned_u64 bif_mmu_status;
+	aligned_u64 pc_address; /* phys address of the page catalogue */
+	aligned_u64 reserved;
+};
+
+struct rogue_eccinfo {
+	u32 fault_gpu;
+};
+
+struct rogue_mmuinfo {
+	aligned_u64 mmu_status[2];
+	aligned_u64 pc_address; /* phys address of the page catalogue */
+	aligned_u64 reserved;
+};
+
+struct rogue_pollinfo {
+	u32 thread_num;
+	u32 cr_poll_addr;
+	u32 cr_poll_mask;
+	u32 cr_poll_last_value;
+	aligned_u64 reserved;
+} __aligned(8);
+
+struct rogue_tlbinfo {
+	u32 bad_addr;
+	u32 entry_lo;
+};
+
+struct rogue_hwrinfo {
+	union {
+		struct rogue_bifinfo bif_info;
+		struct rogue_mmuinfo mmu_info;
+		struct rogue_pollinfo poll_info;
+		struct rogue_tlbinfo tlb_info;
+		struct rogue_eccinfo ecc_info;
+	} hwr_data;
+
+	aligned_u64 cr_timer;
+	aligned_u64 os_timer;
+	u32 frame_num;
+	u32 pid;
+	u32 active_hwrt_data;
+	u32 hwr_number;
+	u32 event_status;
+	u32 hwr_recovery_flags;
+	enum rogue_hwrtype hwr_type;
+	u32 dm;
+	u32 core_id;
+	aligned_u64 cr_time_of_kick;
+	aligned_u64 cr_time_hw_reset_start;
+	aligned_u64 cr_time_hw_reset_finish;
+	aligned_u64 cr_time_freelist_ready;
+	aligned_u64 reserved[2];
+} __aligned(8);
+
+/* Number of first HWR logs recorded (never overwritten by newer logs) */
+#define ROGUE_FWIF_HWINFO_MAX_FIRST 8U
+/* Number of latest HWR logs (older logs are overwritten by newer logs) */
+#define ROGUE_FWIF_HWINFO_MAX_LAST 8U
+/* Total number of HWR logs stored in a buffer */
+#define ROGUE_FWIF_HWINFO_MAX \
+	(ROGUE_FWIF_HWINFO_MAX_FIRST + ROGUE_FWIF_HWINFO_MAX_LAST)
+/* Index of the last log in the HWR log buffer */
+#define ROGUE_FWIF_HWINFO_LAST_INDEX (ROGUE_FWIF_HWINFO_MAX - 1U)
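+
+/*
+ * With both limits set to 8, the buffer holds ROGUE_FWIF_HWINFO_MAX (16)
+ * entries, so ROGUE_FWIF_HWINFO_LAST_INDEX is 15.
+ */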
+
+struct rogue_fwif_hwrinfobuf {
+	struct rogue_hwrinfo hwr_info[ROGUE_FWIF_HWINFO_MAX];
+	u32 hwr_counter;
+	u32 write_index;
+	u32 dd_req_count;
+	u32 hwr_info_buf_flags; /* Compatibility and other flags */
+	u32 hwr_dm_locked_up_count[PVR_FWIF_DM_MAX];
+	u32 hwr_dm_overran_count[PVR_FWIF_DM_MAX];
+	u32 hwr_dm_recovered_count[PVR_FWIF_DM_MAX];
+	u32 hwr_dm_false_detect_count[PVR_FWIF_DM_MAX];
+} __aligned(8);
+
+#define ROGUE_FWIF_CTXSWITCH_PROFILE_FAST_EN (1)
+#define ROGUE_FWIF_CTXSWITCH_PROFILE_MEDIUM_EN (2)
+#define ROGUE_FWIF_CTXSWITCH_PROFILE_SLOW_EN (3)
+#define ROGUE_FWIF_CTXSWITCH_PROFILE_NODELAY_EN (4)
+
+#define ROGUE_FWIF_CDM_ARBITRATION_TASK_DEMAND_EN (1)
+#define ROGUE_FWIF_CDM_ARBITRATION_ROUND_ROBIN_EN (2)
+
+#define ROGUE_FWIF_ISP_SCHEDMODE_VER1_IPP (1)
+#define ROGUE_FWIF_ISP_SCHEDMODE_VER2_ISP (2)
+/*
+ ******************************************************************************
+ * ROGUE firmware Init Config Data
+ ******************************************************************************
+ */
+
+/* Flag definitions affecting the firmware globally */
+#define ROGUE_FWIF_INICFG_CTXSWITCH_MODE_RAND BIT(0)
+#define ROGUE_FWIF_INICFG_CTXSWITCH_SRESET_EN BIT(1)
+#define ROGUE_FWIF_INICFG_HWPERF_EN BIT(2)
+#define ROGUE_FWIF_INICFG_DM_KILL_MODE_RAND_EN BIT(3)
+#define ROGUE_FWIF_INICFG_POW_RASCALDUST BIT(4)
+/* Bit 5 is reserved. */
+#define ROGUE_FWIF_INICFG_FBCDC_V3_1_EN BIT(6)
+#define ROGUE_FWIF_INICFG_CHECK_MLIST_EN BIT(7)
+#define ROGUE_FWIF_INICFG_DISABLE_CLKGATING_EN BIT(8)
+/* Bit 9 is reserved. */
+/* Bit 10 is reserved. */
+/* Bit 11 is reserved. */
+#define ROGUE_FWIF_INICFG_REGCONFIG_EN BIT(12)
+#define ROGUE_FWIF_INICFG_ASSERT_ON_OUTOFMEMORY BIT(13)
+#define ROGUE_FWIF_INICFG_HWP_DISABLE_FILTER BIT(14)
+/* Bit 15 is reserved. */
+#define ROGUE_FWIF_INICFG_CTXSWITCH_PROFILE_SHIFT (16)
+#define ROGUE_FWIF_INICFG_CTXSWITCH_PROFILE_FAST \
+	(ROGUE_FWIF_CTXSWITCH_PROFILE_FAST_EN    \
+	 << ROGUE_FWIF_INICFG_CTXSWITCH_PROFILE_SHIFT)
+#define ROGUE_FWIF_INICFG_CTXSWITCH_PROFILE_MEDIUM \
+	(ROGUE_FWIF_CTXSWITCH_PROFILE_MEDIUM_EN    \
+	 << ROGUE_FWIF_INICFG_CTXSWITCH_PROFILE_SHIFT)
+#define ROGUE_FWIF_INICFG_CTXSWITCH_PROFILE_SLOW \
+	(ROGUE_FWIF_CTXSWITCH_PROFILE_SLOW_EN    \
+	 << ROGUE_FWIF_INICFG_CTXSWITCH_PROFILE_SHIFT)
+#define ROGUE_FWIF_INICFG_CTXSWITCH_PROFILE_NODELAY \
+	(ROGUE_FWIF_CTXSWITCH_PROFILE_NODELAY_EN    \
+	 << ROGUE_FWIF_INICFG_CTXSWITCH_PROFILE_SHIFT)
+#define ROGUE_FWIF_INICFG_CTXSWITCH_PROFILE_MASK \
+	(7 << ROGUE_FWIF_INICFG_CTXSWITCH_PROFILE_SHIFT)
+#define ROGUE_FWIF_INICFG_DISABLE_DM_OVERLAP BIT(19)
+#define ROGUE_FWIF_INICFG_ASSERT_ON_HWR_TRIGGER BIT(20)
+#define ROGUE_FWIF_INICFG_FABRIC_COHERENCY_ENABLED BIT(21)
+#define ROGUE_FWIF_INICFG_VALIDATE_IRQ BIT(22)
+#define ROGUE_FWIF_INICFG_DISABLE_PDP_EN BIT(23)
+#define ROGUE_FWIF_INICFG_SPU_POWER_STATE_MASK_CHANGE_EN BIT(24)
+#define ROGUE_FWIF_INICFG_WORKEST BIT(25)
+#define ROGUE_FWIF_INICFG_PDVFS BIT(26)
+#define ROGUE_FWIF_INICFG_CDM_ARBITRATION_SHIFT (27)
+#define ROGUE_FWIF_INICFG_CDM_ARBITRATION_TASK_DEMAND \
+	(ROGUE_FWIF_CDM_ARBITRATION_TASK_DEMAND_EN    \
+	 << ROGUE_FWIF_INICFG_CDM_ARBITRATION_SHIFT)
+#define ROGUE_FWIF_INICFG_CDM_ARBITRATION_ROUND_ROBIN \
+	(ROGUE_FWIF_CDM_ARBITRATION_ROUND_ROBIN_EN    \
+	 << ROGUE_FWIF_INICFG_CDM_ARBITRATION_SHIFT)
+#define ROGUE_FWIF_INICFG_CDM_ARBITRATION_MASK \
+	(3 << ROGUE_FWIF_INICFG_CDM_ARBITRATION_SHIFT)
+#define ROGUE_FWIF_INICFG_ISPSCHEDMODE_SHIFT (29)
+#define ROGUE_FWIF_INICFG_ISPSCHEDMODE_NONE (0)
+#define ROGUE_FWIF_INICFG_ISPSCHEDMODE_VER1_IPP \
+	(ROGUE_FWIF_ISP_SCHEDMODE_VER1_IPP      \
+	 << ROGUE_FWIF_INICFG_ISPSCHEDMODE_SHIFT)
+#define ROGUE_FWIF_INICFG_ISPSCHEDMODE_VER2_ISP \
+	(ROGUE_FWIF_ISP_SCHEDMODE_VER2_ISP      \
+	 << ROGUE_FWIF_INICFG_ISPSCHEDMODE_SHIFT)
+#define ROGUE_FWIF_INICFG_ISPSCHEDMODE_MASK        \
+	(ROGUE_FWIF_INICFG_ISPSCHEDMODE_VER1_IPP | \
+	 ROGUE_FWIF_INICFG_ISPSCHEDMODE_VER2_ISP)
+#define ROGUE_FWIF_INICFG_VALIDATE_SOCUSC_TIMER BIT(31)
+
+#define ROGUE_FWIF_INICFG_ALL (0xFFFFFFFFU)
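+
+/*
+ * For example, enabling HWPerf with the medium context switch profile:
+ *   config_flags = ROGUE_FWIF_INICFG_HWPERF_EN |
+ *                  ROGUE_FWIF_INICFG_CTXSWITCH_PROFILE_MEDIUM;
+ * The profile can be read back with:
+ *   (config_flags & ROGUE_FWIF_INICFG_CTXSWITCH_PROFILE_MASK) >>
+ *           ROGUE_FWIF_INICFG_CTXSWITCH_PROFILE_SHIFT
+ */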
+
+/* Extended Flag definitions affecting the firmware globally */
+#define ROGUE_FWIF_INICFG_EXT_TFBC_CONTROL_SHIFT (0)
+/* [7]   YUV10 override
+ * [6:4] Quality
+ * [3]   Quality enable
+ * [2:1] Compression scheme
+ * [0]   Lossy group
+ */
+#define ROGUE_FWIF_INICFG_EXT_TFBC_CONTROL_MASK (0xFF)
+#define ROGUE_FWIF_INICFG_EXT_ALL (ROGUE_FWIF_INICFG_EXT_TFBC_CONTROL_MASK)
+
+/* Flag definitions affecting only workloads submitted by a particular OS */
+#define ROGUE_FWIF_INICFG_OS_CTXSWITCH_TDM_EN BIT(0)
+#define ROGUE_FWIF_INICFG_OS_CTXSWITCH_GEOM_EN BIT(1)
+#define ROGUE_FWIF_INICFG_OS_CTXSWITCH_FRAG_EN BIT(2)
+#define ROGUE_FWIF_INICFG_OS_CTXSWITCH_CDM_EN BIT(3)
+
+#define ROGUE_FWIF_INICFG_OS_LOW_PRIO_CS_TDM BIT(4)
+#define ROGUE_FWIF_INICFG_OS_LOW_PRIO_CS_GEOM BIT(5)
+#define ROGUE_FWIF_INICFG_OS_LOW_PRIO_CS_FRAG BIT(6)
+#define ROGUE_FWIF_INICFG_OS_LOW_PRIO_CS_CDM BIT(7)
+
+#define ROGUE_FWIF_INICFG_OS_ALL (0xFF)
+
+#define ROGUE_FWIF_INICFG_OS_CTXSWITCH_DM_ALL     \
+	(ROGUE_FWIF_INICFG_OS_CTXSWITCH_TDM_EN |  \
+	 ROGUE_FWIF_INICFG_OS_CTXSWITCH_GEOM_EN | \
+	 ROGUE_FWIF_INICFG_OS_CTXSWITCH_FRAG_EN |   \
+	 ROGUE_FWIF_INICFG_OS_CTXSWITCH_CDM_EN)
+
+#define ROGUE_FWIF_INICFG_OS_CTXSWITCH_CLRMSK \
+	~(ROGUE_FWIF_INICFG_OS_CTXSWITCH_DM_ALL)
+
+#define ROGUE_FWIF_FILTCFG_TRUNCATE_HALF BIT(3)
+#define ROGUE_FWIF_FILTCFG_TRUNCATE_INT BIT(2)
+#define ROGUE_FWIF_FILTCFG_NEW_FILTER_MODE BIT(1)
+
+enum rogue_activepm_conf {
+	ROGUE_ACTIVEPM_FORCE_OFF = 0,
+	ROGUE_ACTIVEPM_FORCE_ON = 1,
+	ROGUE_ACTIVEPM_DEFAULT = 2
+};
+
+enum rogue_rd_power_island_conf {
+	ROGUE_RD_POWER_ISLAND_FORCE_OFF = 0,
+	ROGUE_RD_POWER_ISLAND_FORCE_ON = 1,
+	ROGUE_RD_POWER_ISLAND_DEFAULT = 2
+};
+
+struct rogue_fw_register_list {
+	/* Register number */
+	u16 reg_num;
+	/* Indirect register number (or 0 if not used) */
+	u16 indirect_reg_num;
+	/* Start value for indirect register */
+	u16 indirect_start_val;
+	/* End value for indirect register */
+	u16 indirect_end_val;
+};
+
+struct rogue_fwif_dllist_node {
+	u32 p;
+	u32 n;
+};
+
+/*
+ * This number is used to represent an invalid page catalogue physical address
+ */
+#define ROGUE_FWIF_INVALID_PC_PHYADDR 0xFFFFFFFFFFFFFFFFLLU
+
+/* This number is used to represent unallocated page catalog base register */
+#define ROGUE_FW_BIF_INVALID_PCSET 0xFFFFFFFFU
+
+/* Firmware memory context. */
+struct rogue_fwif_fwmemcontext {
+	/* device physical address of context's page catalogue */
+	aligned_u64 pc_dev_paddr;
+	/*
+	 * associated page catalog base register (ROGUE_FW_BIF_INVALID_PCSET ==
+	 * unallocated)
+	 */
+	u32 page_cat_base_reg_set;
+	/* breakpoint address */
+	u32 breakpoint_addr;
+	/* breakpoint handler address */
+	u32 bp_handler_addr;
+	/* DM and enable control for BP */
+	u32 breakpoint_ctl;
+	/* Compatibility and other flags */
+	u32 fw_mem_ctx_flags;
+	u32 padding;
+} __aligned(8);
+
+/*
+ * FW context state flags
+ */
+#define ROGUE_FWIF_CONTEXT_FLAGS_NEED_RESUME (0x00000001U)
+#define ROGUE_FWIF_CONTEXT_FLAGS_MC_NEED_RESUME_MASKFULL (0x000000FFU)
+#define ROGUE_FWIF_CONTEXT_FLAGS_TDM_HEADER_STALE (0x00000100U)
+#define ROGUE_FWIF_CONTEXT_FLAGS_LAST_KICK_SECURE (0x00000200U)
+
+#define ROGUE_NUM_GEOM_CORES_MAX 4
+
+/*
+ * FW-accessible TA state which must be written out to memory on context store
+ */
+struct rogue_fwif_geom_ctx_state_per_geom {
+	/* To store in mid-TA */
+	aligned_u64 geom_reg_vdm_call_stack_pointer;
+	/* Initial value (in case it is 'lost' due to a lock-up) */
+	aligned_u64 geom_reg_vdm_call_stack_pointer_init;
+	u32 geom_reg_vbs_so_prim[4];
+	u16 geom_current_idx;
+	u16 padding[3];
+} __aligned(8);
+
+struct rogue_fwif_geom_ctx_state {
+	/* FW-accessible TA state which must be written out to memory on context store */
+	struct rogue_fwif_geom_ctx_state_per_geom geom_core[ROGUE_NUM_GEOM_CORES_MAX];
+} __aligned(8);
+
+/*
+ * FW-accessible ISP state which must be written out to memory on context store
+ */
+struct rogue_fwif_frag_ctx_state {
+	u32 frag_reg_pm_deallocated_mask_status;
+	u32 frag_reg_dm_pds_mtilefree_status;
+	/* Compatibility and other flags */
+	u32 ctx_state_flags;
+	/*
+	 * frag_reg_isp_store should be the last element of the structure as this
+	 * is an array whose size is determined at runtime after detecting the
+	 * ROGUE core
+	 */
+	u32 frag_reg_isp_store[];
+} __aligned(8);
+
+#define ROGUE_FWIF_CTX_USING_BUFFER_A (0)
+#define ROGUE_FWIF_CTX_USING_BUFFER_B (1U)
+
+struct rogue_fwif_compute_ctx_state {
+	u32 ctx_state_flags; /* Target buffer and other flags */
+};
+
+struct rogue_fwif_fwcommoncontext {
+	/* CCB details for this firmware context */
+	u32 ccbctl_fw_addr; /* CCB control */
+	u32 ccb_fw_addr; /* CCB base */
+	struct rogue_fwif_dma_addr ccb_meta_dma_addr;
+
+	/* Context suspend state */
+	/* geom/frag context suspend state, read/written by FW */
+	u32 context_state_addr __aligned(8);
+
+	/* Flags e.g. for context switching */
+	u32 fw_com_ctx_flags;
+	u32 priority;
+	u32 priority_seq_num;
+
+	/* Framework state */
+	/* Register updates for Framework */
+	u32 rf_cmd_addr __aligned(8);
+
+	/* Statistic updates waiting to be passed back to the host... */
+	/* True when some stats are pending */
+	bool stats_pending __aligned(4);
+	/* Number of stores on this context since last update */
+	s32 stats_num_stores;
+	/* Number of OOMs on this context since last update */
+	s32 stats_num_out_of_memory;
+	/* Number of PRs on this context since last update */
+	s32 stats_num_partial_renders;
+	/* Data Master type */
+	u32 dm;
+	/* Device Virtual Address of the signal the context is waiting on */
+	aligned_u64 wait_signal_address;
+	/* List entry for the wait-signal list */
+	struct rogue_fwif_dllist_node wait_signal_node __aligned(8);
+	/* List entry for the buffer stalled list */
+	struct rogue_fwif_dllist_node buf_stalled_node __aligned(8);
+	/* Address of the circular buffer queue pointers */
+	aligned_u64 cbuf_queue_ctrl_addr;
+
+	aligned_u64 robustness_address;
+	/* Max HWR deadline limit in ms */
+	u32 max_deadline_ms;
+	/* Following HWR circular buffer read-offset needs resetting */
+	bool read_offset_needs_reset;
+
+	/* List entry for the waiting list */
+	struct rogue_fwif_dllist_node waiting_node __aligned(8);
+	/* List entry for the run list */
+	struct rogue_fwif_dllist_node run_node __aligned(8);
+	/* UFO that last failed (or NULL) */
+	struct rogue_fwif_ufo last_failed_ufo;
+
+	/* Memory context */
+	u32 fw_mem_context_fw_addr;
+
+	/* References to the host side originators */
+	/* the Server Common Context */
+	u32 server_common_context_id;
+	/* associated process ID */
+	u32 pid;
+
+	/* True when Geom DM OOM is not allowed */
+	bool geom_oom_disabled __aligned(4);
+} __aligned(8);
+
+/* Firmware render context. */
+struct rogue_fwif_fwrendercontext {
+	/* Geometry firmware context. */
+	struct rogue_fwif_fwcommoncontext geom_context;
+	/* Fragment firmware context. */
+	struct rogue_fwif_fwcommoncontext frag_context;
+
+	struct rogue_fwif_static_rendercontext_state static_render_context_state;
+
+	/* Number of commands submitted to the WorkEst FW CCB */
+	u32 work_est_ccb_submitted;
+
+	/* Compatibility and other flags */
+	u32 fw_render_ctx_flags;
+} __aligned(8);
+
+/* Firmware compute context. */
+struct rogue_fwif_fwcomputecontext {
+	/* Firmware context for the CDM */
+	struct rogue_fwif_fwcommoncontext cdm_context;
+
+	struct rogue_fwif_static_computecontext_state
+		static_compute_context_state;
+
+	/* Number of commands submitted to the WorkEst FW CCB */
+	u32 work_est_ccb_submitted;
+
+	/* Compatibility and other flags */
+	u32 compute_ctx_flags;
+
+	u32 wgp_state;
+	u32 wgp_checksum;
+	u32 core_mask_a;
+	u32 core_mask_b;
+} __aligned(8);
+
+/* Firmware TDM context. */
+struct rogue_fwif_fwtdmcontext {
+	/* Firmware context for the TDM */
+	struct rogue_fwif_fwcommoncontext tdm_context;
+
+	/* Number of commands submitted to the WorkEst FW CCB */
+	u32 work_est_ccb_submitted;
+} __aligned(8);
+
+/* Firmware TQ3D context. */
+struct rogue_fwif_fwtransfercontext {
+	/* Firmware context for TQ3D. */
+	struct rogue_fwif_fwcommoncontext tq_context;
+} __aligned(8);
+
+/*
+ ******************************************************************************
+ * Defines for CMD_TYPE corruption detection and forward compatibility check
+ ******************************************************************************
+ */
+
+/*
+ * CMD_TYPE 32bit contains:
+ * 31:16	Reserved for magic value to detect corruption (16 bits)
+ * 15		Reserved for ROGUE_CCB_TYPE_TASK (1 bit)
+ * 14:0		Bits available for CMD_TYPEs (15 bits)
+ */
+
+/* Magic value to detect corruption */
+#define ROGUE_CMD_MAGIC_DWORD (0x2ABC)
+#define ROGUE_CMD_MAGIC_DWORD_MASK (0xFFFF0000U)
+#define ROGUE_CMD_MAGIC_DWORD_SHIFT (16U)
+#define ROGUE_CMD_MAGIC_DWORD_SHIFTED \
+	(ROGUE_CMD_MAGIC_DWORD << ROGUE_CMD_MAGIC_DWORD_SHIFT)
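+
+/*
+ * For example, ROGUE_FWIF_KCCB_CMD_KICK below is 101U |
+ * ROGUE_CMD_MAGIC_DWORD_SHIFTED (0x2ABC0065); a command type can be
+ * sanity-checked with:
+ *   (cmd_type & ROGUE_CMD_MAGIC_DWORD_MASK) == ROGUE_CMD_MAGIC_DWORD_SHIFTED
+ */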
+
+/* Kernel CCB control for ROGUE */
+struct rogue_fwif_ccb_ctl {
+	/* write offset into array of commands (MUST be aligned to 16 bytes!) */
+	u32 write_offset;
+	/* read offset into array of commands */
+	u32 read_offset;
+	/* Offset wrapping mask (Total capacity of the CCB - 1) */
+	u32 wrap_mask;
+	/* size of each command in bytes */
+	u32 cmd_size;
+} __aligned(8);
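+
+/*
+ * Assuming the CCB capacity is a power of two (wrap_mask = capacity - 1), the
+ * write offset can be advanced and wrapped with:
+ *   write_offset = (write_offset + cmd_size) & wrap_mask;
+ */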
+
+/* Kernel CCB command structure for ROGUE */
+
+#define ROGUE_FWIF_MMUCACHEDATA_FLAGS_PT (0x1U) /* MMU_CTRL_INVAL_PT_EN */
+#define ROGUE_FWIF_MMUCACHEDATA_FLAGS_PD (0x2U) /* MMU_CTRL_INVAL_PD_EN */
+#define ROGUE_FWIF_MMUCACHEDATA_FLAGS_PC (0x4U) /* MMU_CTRL_INVAL_PC_EN */
+
+/*
+ * can't use PM_TLB0 bit from BIFPM_CTRL reg because it collides with PT
+ * bit from BIF_CTRL reg
+ */
+#define ROGUE_FWIF_MMUCACHEDATA_FLAGS_PMTLB (0x10)
+/* BIF_CTRL_INVAL_TLB1_EN */
+#define ROGUE_FWIF_MMUCACHEDATA_FLAGS_TLB \
+	(ROGUE_FWIF_MMUCACHEDATA_FLAGS_PMTLB | 0x8)
+/* MMU_CTRL_INVAL_ALL_CONTEXTS_EN */
+#define ROGUE_FWIF_MMUCACHEDATA_FLAGS_CTX_ALL (0x800)
+
+/* indicates FW should interrupt the host */
+#define ROGUE_FWIF_MMUCACHEDATA_FLAGS_INTERRUPT (0x4000000U)
+
+struct rogue_fwif_mmucachedata {
+	u32 cache_flags;
+	u32 mmu_cache_sync_fw_addr;
+	u32 mmu_cache_sync_update_value;
+};
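+
+/*
+ * For example, an invalidation of page tables, directories and catalogues
+ * that interrupts the host on completion would set:
+ *   cache_flags = ROGUE_FWIF_MMUCACHEDATA_FLAGS_PT |
+ *                 ROGUE_FWIF_MMUCACHEDATA_FLAGS_PD |
+ *                 ROGUE_FWIF_MMUCACHEDATA_FLAGS_PC |
+ *                 ROGUE_FWIF_MMUCACHEDATA_FLAGS_INTERRUPT;
+ */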
+
+#define ROGUE_FWIF_BPDATA_FLAGS_ENABLE BIT(0)
+#define ROGUE_FWIF_BPDATA_FLAGS_WRITE BIT(1)
+#define ROGUE_FWIF_BPDATA_FLAGS_CTL BIT(2)
+#define ROGUE_FWIF_BPDATA_FLAGS_REGS BIT(3)
+
+struct rogue_fwif_bpdata {
+	/* Memory context */
+	u32 fw_mem_context_fw_addr;
+	/* Breakpoint address */
+	u32 bp_addr;
+	/* Breakpoint handler */
+	u32 bp_handler_addr;
+	/* Breakpoint control */
+	u32 bp_dm;
+	u32 bp_data_flags;
+	/* Number of temporary registers to overallocate */
+	u32 temp_regs;
+	/* Number of shared registers to overallocate */
+	u32 shared_regs;
+	/* DM associated with the breakpoint */
+	u32 dm;
+};
+
+#define ROGUE_FWIF_KCCB_CMD_KICK_DATA_MAX_NUM_CLEANUP_CTLS \
+	(ROGUE_FWIF_PRBUFFER_MAXSUPPORTED + 1U) /* +1 is RTDATASET cleanup */
+
+struct rogue_fwif_kccb_cmd_kick_data {
+	/* address of the firmware context */
+	u32 context_fw_addr;
+	/* Client CCB woff update */
+	u32 client_woff_update;
+	/* Client CCB wrap mask update after CCCB growth */
+	u32 client_wrap_mask_update;
+	/* number of CleanupCtl pointers attached */
+	u32 num_cleanup_ctl;
+	/* CleanupCtl structures associated with command */
+	u32 cleanup_ctl_fw_addr
+		[ROGUE_FWIF_KCCB_CMD_KICK_DATA_MAX_NUM_CLEANUP_CTLS];
+	/*
+	 * offset to the CmdHeader which houses the workload estimation kick
+	 * data.
+	 */
+	u32 work_est_cmd_header_offset;
+};
+
+struct rogue_fwif_kccb_cmd_combined_geom_frag_kick_data {
+	struct rogue_fwif_kccb_cmd_kick_data geom_cmd_kick_data;
+	struct rogue_fwif_kccb_cmd_kick_data frag_cmd_kick_data;
+};
+
+struct rogue_fwif_kccb_cmd_force_update_data {
+	/* address of the firmware context */
+	u32 context_fw_addr;
+	/* Client CCB fence offset */
+	u32 ccb_fence_offset;
+};
+
+enum rogue_fwif_cleanup_type {
+	/* FW common context cleanup */
+	ROGUE_FWIF_CLEANUP_FWCOMMONCONTEXT,
+	/* FW HW RT data cleanup */
+	ROGUE_FWIF_CLEANUP_HWRTDATA,
+	/* FW freelist cleanup */
+	ROGUE_FWIF_CLEANUP_FREELIST,
+	/* FW ZS Buffer cleanup */
+	ROGUE_FWIF_CLEANUP_ZSBUFFER,
+};
+
+struct rogue_fwif_cleanup_request {
+	/* Cleanup type */
+	enum rogue_fwif_cleanup_type cleanup_type;
+	union {
+		/* FW common context to cleanup */
+		u32 context_fw_addr;
+		/* HW RT to cleanup */
+		u32 hwrt_data_fw_addr;
+		/* Freelist to cleanup */
+		u32 freelist_fw_addr;
+		/* ZS Buffer to cleanup */
+		u32 zs_buffer_fw_addr;
+	} cleanup_data;
+};
+
+enum rogue_fwif_power_type {
+	ROGUE_FWIF_POW_OFF_REQ = 1,
+	ROGUE_FWIF_POW_FORCED_IDLE_REQ,
+	ROGUE_FWIF_POW_NUM_UNITS_CHANGE,
+	ROGUE_FWIF_POW_APM_LATENCY_CHANGE
+};
+
+enum rogue_fwif_power_force_idle_type {
+	ROGUE_FWIF_POWER_FORCE_IDLE = 1,
+	ROGUE_FWIF_POWER_CANCEL_FORCED_IDLE,
+	ROGUE_FWIF_POWER_HOST_TIMEOUT,
+};
+
+struct rogue_fwif_power_request {
+	/* Type of power request */
+	enum rogue_fwif_power_type pow_type;
+	union {
+		/* Number of active Dusts */
+		u32 num_of_dusts;
+		/* If the operation is mandatory */
+		bool forced __aligned(4);
+		/*
+		 * Type of Request. Consolidating Force Idle, Cancel Forced
+		 * Idle, Host Timeout
+		 */
+		enum rogue_fwif_power_force_idle_type pow_request_type;
+	} power_req_data;
+};
+
+struct rogue_fwif_slcflushinvaldata {
+	/* Context to fence on (only useful when dm_context is true) */
+	u32 context_fw_addr;
+	/* Invalidate the cache as well as flushing */
+	bool inval __aligned(4);
+	/* The data to flush/invalidate belongs to a specific DM context */
+	bool dm_context __aligned(4);
+	/* Optional address of range (only useful when dm_context is false) */
+	aligned_u64 address;
+	/* Optional size of range (only useful when dm_context is false) */
+	aligned_u64 size;
+};
+
+enum rogue_fwif_hwperf_update_config {
+	ROGUE_FWIF_HWPERF_CTRL_TOGGLE = 0,
+	ROGUE_FWIF_HWPERF_CTRL_SET = 1,
+	ROGUE_FWIF_HWPERF_CTRL_EMIT_FEATURES_EV = 2
+};
+
+struct rogue_fwif_hwperf_ctrl {
+	enum rogue_fwif_hwperf_update_config opcode; /* Control operation code */
+	aligned_u64 mask; /* Mask of events to toggle */
+};
+
+struct rogue_fwif_hwperf_config_enable_blks {
+	/* Number of ROGUE_HWPERF_CONFIG_MUX_CNTBLK in the array */
+	u32 num_blocks;
+	/* Address of the ROGUE_HWPERF_CONFIG_MUX_CNTBLK array */
+	u32 block_configs_fw_addr;
+};
+
+struct rogue_fwif_hwperf_config_da_blks {
+	/* Number of ROGUE_HWPERF_CONFIG_CNTBLK in the array */
+	u32 num_blocks;
+	/* Address of the ROGUE_HWPERF_CONFIG_CNTBLK array */
+	u32 block_configs_fw_addr;
+};
+
+struct rogue_fwif_coreclkspeedchange_data {
+	u32 new_clock_speed; /* New clock speed */
+};
+
+#define ROGUE_FWIF_HWPERF_CTRL_BLKS_MAX 16
+
+struct rogue_fwif_hwperf_ctrl_blks {
+	bool enable;
+	/* Number of block IDs in the array */
+	u32 num_blocks;
+	/* Array of ROGUE_HWPERF_CNTBLK_ID values */
+	u16 block_ids[ROGUE_FWIF_HWPERF_CTRL_BLKS_MAX];
+};
+
+struct rogue_fwif_hwperf_select_custom_cntrs {
+	u16 custom_block;
+	u16 num_counters;
+	u32 custom_counter_ids_fw_addr;
+};
+
+struct rogue_fwif_zsbuffer_backing_data {
+	u32 zs_buffer_fw_addr; /* ZS-Buffer FW address */
+
+	bool done __aligned(4); /* action backing/unbacking succeeded */
+};
+
+struct rogue_fwif_freelist_gs_data {
+	/* Freelist FW address */
+	u32 freelist_fw_addr;
+	/* Amount of the Freelist change */
+	u32 delta_pages;
+	/* New amount of pages on the freelist (including ready pages) */
+	u32 new_pages;
+	/* Number of ready pages to be held in reserve until OOM */
+	u32 ready_pages;
+};
+
+#define MAX_FREELISTS_SIZE 3
+#define MAX_HW_GEOM_FRAG_CONTEXTS_SIZE 3
+
+#define ROGUE_FWIF_MAX_FREELISTS_TO_RECONSTRUCT \
+	(MAX_HW_GEOM_FRAG_CONTEXTS_SIZE * MAX_FREELISTS_SIZE * 2U)
+#define ROGUE_FWIF_FREELISTS_RECONSTRUCTION_FAILED_FLAG 0x80000000U
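+
+/*
+ * With the sizes above, ROGUE_FWIF_MAX_FREELISTS_TO_RECONSTRUCT is
+ * 3 * 3 * 2 = 18 freelist IDs per reconstruction request.
+ */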
+
+struct rogue_fwif_freelists_reconstruction_data {
+	u32 freelist_count;
+	u32 freelist_ids[ROGUE_FWIF_MAX_FREELISTS_TO_RECONSTRUCT];
+};
+
+struct rogue_fwif_write_offset_update_data {
+	/*
+	 * Context that may need to be resumed following a write offset update
+	 */
+	u32 context_fw_addr;
+} __aligned(8);
+
+/*
+ ******************************************************************************
+ * Proactive DVFS Structures
+ ******************************************************************************
+ */
+#define NUM_OPP_VALUES 16
+
+struct pdvfs_opp {
+	u32 volt; /* V  */
+	u32 freq; /* Hz */
+} __aligned(8);
+
+struct rogue_fwif_pdvfs_opp {
+	struct pdvfs_opp opp_values[NUM_OPP_VALUES];
+	u32 min_opp_point;
+	u32 max_opp_point;
+} __aligned(8);
+
+struct rogue_fwif_pdvfs_max_freq_data {
+	u32 max_opp_point;
+} __aligned(8);
+
+struct rogue_fwif_pdvfs_min_freq_data {
+	u32 min_opp_point;
+} __aligned(8);
+
+/*
+ ******************************************************************************
+ * Register configuration structures
+ ******************************************************************************
+ */
+
+#define ROGUE_FWIF_REG_CFG_MAX_SIZE 512
+
+enum rogue_fwif_regdata_cmd_type {
+	ROGUE_FWIF_REGCFG_CMD_ADD = 101,
+	ROGUE_FWIF_REGCFG_CMD_CLEAR = 102,
+	ROGUE_FWIF_REGCFG_CMD_ENABLE = 103,
+	ROGUE_FWIF_REGCFG_CMD_DISABLE = 104
+};
+
+enum rogue_fwif_reg_cfg_type {
+	/* Sidekick power event */
+	ROGUE_FWIF_REG_CFG_TYPE_PWR_ON = 0,
+	/* Rascal / dust power event */
+	ROGUE_FWIF_REG_CFG_TYPE_DUST_CHANGE,
+	/* Geometry kick */
+	ROGUE_FWIF_REG_CFG_TYPE_GEOM,
+	/* Fragment kick */
+	ROGUE_FWIF_REG_CFG_TYPE_FRAG,
+	/* Compute kick */
+	ROGUE_FWIF_REG_CFG_TYPE_CDM,
+	/* TLA kick */
+	ROGUE_FWIF_REG_CFG_TYPE_TLA,
+	/* TDM kick */
+	ROGUE_FWIF_REG_CFG_TYPE_TDM,
+	/* Applies to all types. Keep as last element */
+	ROGUE_FWIF_REG_CFG_TYPE_ALL
+};
+
+struct rogue_fwif_reg_cfg_rec {
+	u64 sddr;
+	u64 mask;
+	u64 value;
+};
+
+struct rogue_fwif_regconfig_data {
+	enum rogue_fwif_regdata_cmd_type cmd_type;
+	enum rogue_fwif_reg_cfg_type reg_config_type;
+	struct rogue_fwif_reg_cfg_rec reg_config __aligned(8);
+};
+
+struct rogue_fwif_reg_cfg {
+	/*
+	 * PDump WRW command write granularity is 32 bits.
+	 * Add padding to ensure array size is 32 bit granular.
+	 */
+	u8 num_regs_type[ALIGN((u32)ROGUE_FWIF_REG_CFG_TYPE_ALL,
+			       sizeof(u32))] __aligned(8);
+	struct rogue_fwif_reg_cfg_rec
+		reg_configs[ROGUE_FWIF_REG_CFG_MAX_SIZE] __aligned(8);
+} __aligned(8);
+
+enum rogue_fwif_os_state_change {
+	ROGUE_FWIF_OS_ONLINE = 1,
+	ROGUE_FWIF_OS_OFFLINE
+};
+
+struct rogue_fwif_os_state_change_data {
+	u32 osid;
+	enum rogue_fwif_os_state_change new_os_state;
+} __aligned(8);
+
+enum rogue_fwif_counter_dump_request {
+	ROGUE_FWIF_PWR_COUNTER_DUMP_START = 1,
+	ROGUE_FWIF_PWR_COUNTER_DUMP_STOP,
+	ROGUE_FWIF_PWR_COUNTER_DUMP_SAMPLE,
+};
+
+struct rogue_fwif_counter_dump_data {
+	enum rogue_fwif_counter_dump_request counter_dump_request;
+} __aligned(8);
+
+enum rogue_fwif_kccb_cmd_type {
+	/* Common commands */
+	ROGUE_FWIF_KCCB_CMD_KICK = 101U | ROGUE_CMD_MAGIC_DWORD_SHIFTED,
+	ROGUE_FWIF_KCCB_CMD_MMUCACHE = 102U | ROGUE_CMD_MAGIC_DWORD_SHIFTED,
+	ROGUE_FWIF_KCCB_CMD_BP = 103U | ROGUE_CMD_MAGIC_DWORD_SHIFTED,
+	/* SLC flush and invalidation request */
+	ROGUE_FWIF_KCCB_CMD_SLCFLUSHINVAL = 105U |
+					    ROGUE_CMD_MAGIC_DWORD_SHIFTED,
+	/*
+	 * Requests cleanup of a FW resource (type specified in the command
+	 * data)
+	 */
+	ROGUE_FWIF_KCCB_CMD_CLEANUP = 106U | ROGUE_CMD_MAGIC_DWORD_SHIFTED,
+	/* Power request */
+	ROGUE_FWIF_KCCB_CMD_POW = 107U | ROGUE_CMD_MAGIC_DWORD_SHIFTED,
+	/* Backing for on-demand ZS-Buffer done */
+	ROGUE_FWIF_KCCB_CMD_ZSBUFFER_BACKING_UPDATE =
+		108U | ROGUE_CMD_MAGIC_DWORD_SHIFTED,
+	/* Unbacking for on-demand ZS-Buffer done */
+	ROGUE_FWIF_KCCB_CMD_ZSBUFFER_UNBACKING_UPDATE =
+		109U | ROGUE_CMD_MAGIC_DWORD_SHIFTED,
+	/* Freelist Grow done */
+	ROGUE_FWIF_KCCB_CMD_FREELIST_GROW_UPDATE =
+		110U | ROGUE_CMD_MAGIC_DWORD_SHIFTED,
+	/* Freelists Reconstruction done */
+	ROGUE_FWIF_KCCB_CMD_FREELISTS_RECONSTRUCTION_UPDATE =
+		112U | ROGUE_CMD_MAGIC_DWORD_SHIFTED,
+	/*
+	 * Informs the firmware that the host has added more data to a CDM2
+	 * Circular Buffer
+	 */
+	ROGUE_FWIF_KCCB_CMD_NOTIFY_WRITE_OFFSET_UPDATE =
+		114U | ROGUE_CMD_MAGIC_DWORD_SHIFTED,
+	/* Health check request */
+	ROGUE_FWIF_KCCB_CMD_HEALTH_CHECK = 115U | ROGUE_CMD_MAGIC_DWORD_SHIFTED,
+	/* Forcing signalling of all unmet UFOs for a given CCB offset */
+	ROGUE_FWIF_KCCB_CMD_FORCE_UPDATE = 116U | ROGUE_CMD_MAGIC_DWORD_SHIFTED,
+
+	/* There is a geometry and a fragment command in this single kick */
+	ROGUE_FWIF_KCCB_CMD_COMBINED_GEOM_FRAG_KICK = 117U | ROGUE_CMD_MAGIC_DWORD_SHIFTED,
+	/* Informs the FW that a Guest OS has come online / offline. */
+	ROGUE_FWIF_KCCB_CMD_OS_ONLINE_STATE_CONFIGURE	= 118U | ROGUE_CMD_MAGIC_DWORD_SHIFTED,
+
+	/* Commands only permitted to the native or host OS */
+	ROGUE_FWIF_KCCB_CMD_REGCONFIG = 200U | ROGUE_CMD_MAGIC_DWORD_SHIFTED,
+
+	/* Configure HWPerf events (to be generated) and HWPerf buffer address (if required) */
+	ROGUE_FWIF_KCCB_CMD_HWPERF_UPDATE_CONFIG = 201U | ROGUE_CMD_MAGIC_DWORD_SHIFTED,
+
+	/* Enable or disable multiple HWPerf blocks (reusing existing configuration) */
+	ROGUE_FWIF_KCCB_CMD_HWPERF_CTRL_BLKS = 203U | ROGUE_CMD_MAGIC_DWORD_SHIFTED,
+	/* Core clock speed change event */
+	ROGUE_FWIF_KCCB_CMD_CORECLKSPEEDCHANGE = 204U | ROGUE_CMD_MAGIC_DWORD_SHIFTED,
+
+	/*
+	 * Ask the firmware to update its cached ui32LogType value from the (shared)
+	 * tracebuf control structure
+	 */
+	ROGUE_FWIF_KCCB_CMD_LOGTYPE_UPDATE = 206U | ROGUE_CMD_MAGIC_DWORD_SHIFTED,
+	/* Set a maximum frequency/OPP point */
+	ROGUE_FWIF_KCCB_CMD_PDVFS_LIMIT_MAX_FREQ = 207U | ROGUE_CMD_MAGIC_DWORD_SHIFTED,
+	/*
+	 * Changes the relative scheduling priority for a particular OSid. It can
+	 * only be serviced for the Host DDK
+	 */
+	ROGUE_FWIF_KCCB_CMD_OSID_PRIORITY_CHANGE = 208U | ROGUE_CMD_MAGIC_DWORD_SHIFTED,
+	/* Set or clear firmware state flags */
+	ROGUE_FWIF_KCCB_CMD_STATEFLAGS_CTRL = 209U | ROGUE_CMD_MAGIC_DWORD_SHIFTED,
+
+	/* Set a minimum frequency/OPP point */
+	ROGUE_FWIF_KCCB_CMD_PDVFS_LIMIT_MIN_FREQ = 212U | ROGUE_CMD_MAGIC_DWORD_SHIFTED,
+	/* Configure Periodic Hardware Reset behaviour */
+	ROGUE_FWIF_KCCB_CMD_PHR_CFG = 213U | ROGUE_CMD_MAGIC_DWORD_SHIFTED,
+
+	/* Configure Safety Firmware Watchdog */
+	ROGUE_FWIF_KCCB_CMD_WDG_CFG = 215U | ROGUE_CMD_MAGIC_DWORD_SHIFTED,
+	/* Controls counter dumping in the FW */
+	ROGUE_FWIF_KCCB_CMD_COUNTER_DUMP = 216U | ROGUE_CMD_MAGIC_DWORD_SHIFTED,
+	/* Configure, clear and enable multiple HWPerf blocks */
+	ROGUE_FWIF_KCCB_CMD_HWPERF_CONFIG_ENABLE_BLKS = 217U | ROGUE_CMD_MAGIC_DWORD_SHIFTED,
+	/* Configure the custom counters for HWPerf */
+	ROGUE_FWIF_KCCB_CMD_HWPERF_SELECT_CUSTOM_CNTRS = 218U | ROGUE_CMD_MAGIC_DWORD_SHIFTED,
+
+	/* Configure directly addressable counters for HWPerf */
+	ROGUE_FWIF_KCCB_CMD_HWPERF_CONFIG_BLKS = 220U | ROGUE_CMD_MAGIC_DWORD_SHIFTED,
+};
+
+#define ROGUE_FWIF_LAST_ALLOWED_GUEST_KCCB_CMD \
+	(ROGUE_FWIF_KCCB_CMD_REGCONFIG - 1)
+
+/* Kernel CCB command packet */
+struct rogue_fwif_kccb_cmd {
+	/* Command type */
+	enum rogue_fwif_kccb_cmd_type cmd_type;
+	/* Compatibility and other flags */
+	u32 kccb_flags;
+
+	/*
+	 * NOTE: Make sure that cmd_data is the last member of this struct.
+	 * This is to calculate actual command size for device mem copy.
+	 * (Refer ROGUEGetCmdMemCopySize())
+	 */
+	union {
+		/* Data for Kick command */
+		struct rogue_fwif_kccb_cmd_kick_data cmd_kick_data;
+		/* Data for combined geom/frag Kick command */
+		struct rogue_fwif_kccb_cmd_combined_geom_frag_kick_data
+			combined_geom_frag_cmd_kick_data;
+		/* Data for MMU cache command */
+		struct rogue_fwif_mmucachedata mmu_cache_data;
+		/* Data for Breakpoint Commands */
+		struct rogue_fwif_bpdata bp_data;
+		/* Data for SLC Flush/Inval commands */
+		struct rogue_fwif_slcflushinvaldata slc_flush_inval_data;
+		/* Data for cleanup commands */
+		struct rogue_fwif_cleanup_request cleanup_data;
+		/* Data for power request commands */
+		struct rogue_fwif_power_request pow_data;
+		/* Data for HWPerf control command */
+		struct rogue_fwif_hwperf_ctrl hw_perf_ctrl;
+		/*
+		 * Data for HWPerf configure, clear and enable performance
+		 * counter block command
+		 */
+		struct rogue_fwif_hwperf_config_enable_blks
+			hw_perf_cfg_enable_blks;
+		/*
+		 * Data for HWPerf enable or disable performance counter block
+		 * commands
+		 */
+		struct rogue_fwif_hwperf_ctrl_blks hw_perf_ctrl_blks;
+		/* Data for HWPerf configure the custom counters to read */
+		struct rogue_fwif_hwperf_select_custom_cntrs
+			hw_perf_select_cstm_cntrs;
+		/* Data for HWPerf configure Directly Addressable blocks */
+		struct rogue_fwif_hwperf_config_da_blks hw_perf_cfg_da_blks;
+		/* Data for core clock speed change */
+		struct rogue_fwif_coreclkspeedchange_data
+			core_clk_speed_change_data;
+		/* Feedback for Z/S Buffer backing/unbacking */
+		struct rogue_fwif_zsbuffer_backing_data zs_buffer_backing_data;
+		/* Feedback for Freelist grow/shrink */
+		struct rogue_fwif_freelist_gs_data free_list_gs_data;
+		/* Feedback for Freelists reconstruction */
+		struct rogue_fwif_freelists_reconstruction_data
+			free_lists_reconstruction_data;
+		/* Data for custom register configuration */
+		struct rogue_fwif_regconfig_data reg_config_data;
+		/* Data for informing the FW about the write offset update */
+		struct rogue_fwif_write_offset_update_data
+			write_offset_update_data;
+		/* Data for setting the max frequency/OPP */
+		struct rogue_fwif_pdvfs_max_freq_data pdvfs_max_freq_data;
+		/* Data for setting the min frequency/OPP */
+		struct rogue_fwif_pdvfs_min_freq_data pdvfs_min_freq_data;
+		/* Data for updating the Guest Online states */
+		struct rogue_fwif_os_state_change_data cmd_os_online_state_data;
+		/* Dev address for TBI buffer allocated on demand */
+		u32 tbi_buffer_fw_addr;
+		/* Data for dumping of register ranges */
+		struct rogue_fwif_counter_dump_data counter_dump_config_data;
+		/* Data for signalling all unmet fences for a given CCB */
+		struct rogue_fwif_kccb_cmd_force_update_data force_update_data;
+	} cmd_data __aligned(8);
+} __aligned(8);
+
+PVR_FW_STRUCT_SIZE_ASSERT(struct rogue_fwif_kccb_cmd);
+
+/*
+ ******************************************************************************
+ * Firmware CCB command structure for ROGUE
+ ******************************************************************************
+ */
+
+struct rogue_fwif_fwccb_cmd_zsbuffer_backing_data {
+	u32 zs_buffer_id;
+};
+
+struct rogue_fwif_fwccb_cmd_freelist_gs_data {
+	u32 freelist_id;
+};
+
+struct rogue_fwif_fwccb_cmd_freelists_reconstruction_data {
+	u32 freelist_count;
+	u32 hwr_counter;
+	u32 freelist_ids[ROGUE_FWIF_MAX_FREELISTS_TO_RECONSTRUCT];
+};
+
+/* 1 if a page fault happened */
+#define ROGUE_FWIF_FWCCB_CMD_CONTEXT_RESET_FLAG_PF BIT(0)
+/* 1 if applicable to all contexts */
+#define ROGUE_FWIF_FWCCB_CMD_CONTEXT_RESET_FLAG_ALL_CTXS BIT(1)
+
+struct rogue_fwif_fwccb_cmd_context_reset_data {
+	/* Context affected by the reset */
+	u32 server_common_context_id;
+	/* Reason for reset */
+	enum rogue_context_reset_reason reset_reason;
+	/* Data Master affected by the reset */
+	u32 dm;
+	/* Job ref running at the time of reset */
+	u32 reset_job_ref;
+	/* ROGUE_FWIF_FWCCB_CMD_CONTEXT_RESET_FLAG bitfield */
+	u32 flags;
+	/* At what page catalog address */
+	aligned_u64 pc_address;
+	/* Page fault address (only when applicable) */
+	aligned_u64 fault_address;
+};
+
+struct rogue_fwif_fwccb_cmd_fw_pagefault_data {
+	/* Page fault address */
+	u64 fw_fault_addr;
+};
+
+enum rogue_fwif_fwccb_cmd_type {
+	/* Requests ZSBuffer to be backed with physical pages */
+	ROGUE_FWIF_FWCCB_CMD_ZSBUFFER_BACKING = 101U |
+						ROGUE_CMD_MAGIC_DWORD_SHIFTED,
+	/* Requests ZSBuffer to be unbacked */
+	ROGUE_FWIF_FWCCB_CMD_ZSBUFFER_UNBACKING = 102U |
+						  ROGUE_CMD_MAGIC_DWORD_SHIFTED,
+	/* Requests an on-demand freelist grow/shrink */
+	ROGUE_FWIF_FWCCB_CMD_FREELIST_GROW = 103U |
+					     ROGUE_CMD_MAGIC_DWORD_SHIFTED,
+	/* Requests freelists reconstruction */
+	ROGUE_FWIF_FWCCB_CMD_FREELISTS_RECONSTRUCTION =
+		104U | ROGUE_CMD_MAGIC_DWORD_SHIFTED,
+	/* Notifies host of a HWR event on a context */
+	ROGUE_FWIF_FWCCB_CMD_CONTEXT_RESET_NOTIFICATION =
+		105U | ROGUE_CMD_MAGIC_DWORD_SHIFTED,
+	/* Requests an on-demand debug dump */
+	ROGUE_FWIF_FWCCB_CMD_DEBUG_DUMP = 106U | ROGUE_CMD_MAGIC_DWORD_SHIFTED,
+	/* Requests an on-demand update on process stats */
+	ROGUE_FWIF_FWCCB_CMD_UPDATE_STATS = 107U |
+					    ROGUE_CMD_MAGIC_DWORD_SHIFTED,
+
+	ROGUE_FWIF_FWCCB_CMD_CORE_CLK_RATE_CHANGE =
+		108U | ROGUE_CMD_MAGIC_DWORD_SHIFTED,
+	ROGUE_FWIF_FWCCB_CMD_REQUEST_GPU_RESTART =
+		109U | ROGUE_CMD_MAGIC_DWORD_SHIFTED,
+
+	/* Notifies host of a FW pagefault */
+	ROGUE_FWIF_FWCCB_CMD_CONTEXT_FW_PF_NOTIFICATION =
+		112U | ROGUE_CMD_MAGIC_DWORD_SHIFTED,
+};
+
+enum rogue_fwif_fwccb_cmd_update_stats_type {
+	/*
+	 * PVRSRVStatsUpdateRenderContextStats should increase the value of the
+	 * ui32TotalNumPartialRenders stat
+	 */
+	ROGUE_FWIF_FWCCB_CMD_UPDATE_NUM_PARTIAL_RENDERS = 1,
+	/*
+	 * PVRSRVStatsUpdateRenderContextStats should increase the value of the
+	 * ui32TotalNumOutOfMemory stat
+	 */
+	ROGUE_FWIF_FWCCB_CMD_UPDATE_NUM_OUT_OF_MEMORY,
+	/*
+	 * PVRSRVStatsUpdateRenderContextStats should increase the value of the
+	 * ui32NumGeomStores stat
+	 */
+	ROGUE_FWIF_FWCCB_CMD_UPDATE_NUM_GEOM_STORES,
+	/*
+	 * PVRSRVStatsUpdateRenderContextStats should increase the value of the
+	 * ui32NumFragStores stat
+	 */
+	ROGUE_FWIF_FWCCB_CMD_UPDATE_NUM_FRAG_STORES,
+	/*
+	 * PVRSRVStatsUpdateRenderContextStats should increase the value of the
+	 * ui32NumCDMStores stat
+	 */
+	ROGUE_FWIF_FWCCB_CMD_UPDATE_NUM_CDM_STORES,
+	/*
+	 * PVRSRVStatsUpdateRenderContextStats should increase the value of the
+	 * ui32NumTDMStores stat
+	 */
+	ROGUE_FWIF_FWCCB_CMD_UPDATE_NUM_TDM_STORES
+};
+
+struct rogue_fwif_fwccb_cmd_update_stats_data {
+	/* Element to update */
+	enum rogue_fwif_fwccb_cmd_update_stats_type element_to_update;
+	/* The pid of the process whose stats are being updated */
+	u32 pid_owner;
+	/* Adjustment to be made to the statistic */
+	s32 adjustment_value;
+};
+
+struct rogue_fwif_fwccb_cmd_core_clk_rate_change_data {
+	u32 core_clk_rate;
+} __aligned(8);
+
+struct rogue_fwif_fwccb_cmd {
+	/* Command type */
+	enum rogue_fwif_fwccb_cmd_type cmd_type;
+	/* Compatibility and other flags */
+	u32 fwccb_flags;
+
+	union {
+		/* Data for Z/S-Buffer on-demand (un)backing */
+		struct rogue_fwif_fwccb_cmd_zsbuffer_backing_data
+			cmd_zs_buffer_backing;
+		/* Data for on-demand freelist grow/shrink */
+		struct rogue_fwif_fwccb_cmd_freelist_gs_data cmd_free_list_gs;
+		/* Data for freelists reconstruction */
+		struct rogue_fwif_fwccb_cmd_freelists_reconstruction_data
+			cmd_freelists_reconstruction;
+		/* Data for context reset notification */
+		struct rogue_fwif_fwccb_cmd_context_reset_data
+			cmd_context_reset_notification;
+		/* Data for updating process stats */
+		struct rogue_fwif_fwccb_cmd_update_stats_data
+			cmd_update_stats_data;
+		struct rogue_fwif_fwccb_cmd_core_clk_rate_change_data
+			cmd_core_clk_rate_change;
+		struct rogue_fwif_fwccb_cmd_fw_pagefault_data cmd_fw_pagefault;
+	} cmd_data __aligned(8);
+} __aligned(8);
+
+PVR_FW_STRUCT_SIZE_ASSERT(struct rogue_fwif_fwccb_cmd);
+
+/*
+ ******************************************************************************
+ * Workload estimation Firmware CCB command structure for ROGUE
+ ******************************************************************************
+ */
+struct rogue_fwif_workest_fwccb_cmd {
+	/* Index for return data array */
+	u16 return_data_index;
+	/* The cycles the workload took on the hardware */
+	u32 cycles_taken;
+};
+
+/*
+ ******************************************************************************
+ * Client CCB commands for ROGUE
+ ******************************************************************************
+ */
+
+/*
+ * Required memory alignment for 64-bit variables accessible by Meta
+ * (The Meta gcc toolchain aligns 64-bit variables to 64-bit boundaries;
+ * therefore, memory shared between the host and Meta that contains 64-bit
+ * variables has to maintain this alignment)
+ */
+#define ROGUE_FWIF_FWALLOC_ALIGN sizeof(u64)
+
+#define ROGUE_CCB_TYPE_TASK BIT(15)
+#define ROGUE_CCB_FWALLOC_ALIGN(size)                \
+	(((size) + (ROGUE_FWIF_FWALLOC_ALIGN - 1)) & \
+	 ~(ROGUE_FWIF_FWALLOC_ALIGN - 1))
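+
+/*
+ * For example, with ROGUE_FWIF_FWALLOC_ALIGN == 8,
+ * ROGUE_CCB_FWALLOC_ALIGN(13) rounds up to 16.
+ */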
+
+#define ROGUE_FWIF_CCB_CMD_TYPE_GEOM \
+	(201U | ROGUE_CMD_MAGIC_DWORD_SHIFTED | ROGUE_CCB_TYPE_TASK)
+#define ROGUE_FWIF_CCB_CMD_TYPE_TQ_3D \
+	(202U | ROGUE_CMD_MAGIC_DWORD_SHIFTED | ROGUE_CCB_TYPE_TASK)
+#define ROGUE_FWIF_CCB_CMD_TYPE_FRAG \
+	(203U | ROGUE_CMD_MAGIC_DWORD_SHIFTED | ROGUE_CCB_TYPE_TASK)
+#define ROGUE_FWIF_CCB_CMD_TYPE_FRAG_PR \
+	(204U | ROGUE_CMD_MAGIC_DWORD_SHIFTED | ROGUE_CCB_TYPE_TASK)
+#define ROGUE_FWIF_CCB_CMD_TYPE_CDM \
+	(205U | ROGUE_CMD_MAGIC_DWORD_SHIFTED | ROGUE_CCB_TYPE_TASK)
+#define ROGUE_FWIF_CCB_CMD_TYPE_TQ_TDM \
+	(206U | ROGUE_CMD_MAGIC_DWORD_SHIFTED | ROGUE_CCB_TYPE_TASK)
+#define ROGUE_FWIF_CCB_CMD_TYPE_FBSC_INVALIDATE \
+	(207U | ROGUE_CMD_MAGIC_DWORD_SHIFTED | ROGUE_CCB_TYPE_TASK)
+#define ROGUE_FWIF_CCB_CMD_TYPE_TQ_2D \
+	(208U | ROGUE_CMD_MAGIC_DWORD_SHIFTED | ROGUE_CCB_TYPE_TASK)
+#define ROGUE_FWIF_CCB_CMD_TYPE_PRE_TIMESTAMP \
+	(209U | ROGUE_CMD_MAGIC_DWORD_SHIFTED | ROGUE_CCB_TYPE_TASK)
+#define ROGUE_FWIF_CCB_CMD_TYPE_NULL \
+	(210U | ROGUE_CMD_MAGIC_DWORD_SHIFTED | ROGUE_CCB_TYPE_TASK)
+#define ROGUE_FWIF_CCB_CMD_TYPE_ABORT \
+	(211U | ROGUE_CMD_MAGIC_DWORD_SHIFTED | ROGUE_CCB_TYPE_TASK)
+
+/* Leave a gap between CCB specific commands and generic commands */
+#define ROGUE_FWIF_CCB_CMD_TYPE_FENCE (212U | ROGUE_CMD_MAGIC_DWORD_SHIFTED)
+#define ROGUE_FWIF_CCB_CMD_TYPE_UPDATE (213U | ROGUE_CMD_MAGIC_DWORD_SHIFTED)
+#define ROGUE_FWIF_CCB_CMD_TYPE_RMW_UPDATE \
+	(214U | ROGUE_CMD_MAGIC_DWORD_SHIFTED)
+#define ROGUE_FWIF_CCB_CMD_TYPE_FENCE_PR (215U | ROGUE_CMD_MAGIC_DWORD_SHIFTED)
+#define ROGUE_FWIF_CCB_CMD_TYPE_PRIORITY (216U | ROGUE_CMD_MAGIC_DWORD_SHIFTED)
+/*
+ * Pre and Post timestamp commands are supposed to sandwich the DM cmd. The
+ * padding code with the CCB wrap upsets the FW if we don't have the task type
+ * bit cleared for POST_TIMESTAMPs. That's why we have 2 different cmd types.
+ */
+#define ROGUE_FWIF_CCB_CMD_TYPE_POST_TIMESTAMP \
+	(217U | ROGUE_CMD_MAGIC_DWORD_SHIFTED)
+#define ROGUE_FWIF_CCB_CMD_TYPE_UNFENCED_UPDATE \
+	(218U | ROGUE_CMD_MAGIC_DWORD_SHIFTED)
+#define ROGUE_FWIF_CCB_CMD_TYPE_UNFENCED_RMW_UPDATE \
+	(219U | ROGUE_CMD_MAGIC_DWORD_SHIFTED)
+
+#define ROGUE_FWIF_CCB_CMD_TYPE_PADDING (221U | ROGUE_CMD_MAGIC_DWORD_SHIFTED)
+
+struct rogue_fwif_workest_kick_data {
+	/* Index for the KM Workload estimation return data array */
+	u16 return_data_index __aligned(8);
+	/* Predicted time taken to do the work in cycles */
+	u32 cycles_prediction __aligned(8);
+	/* Deadline for the workload */
+	aligned_u64 deadline;
+};
+
+struct rogue_fwif_ccb_cmd_header {
+	u32 cmd_type;
+	u32 cmd_size;
+	/*
+	 * external job reference - provided by client and used in debug for
+	 * tracking submitted work
+	 */
+	u32 ext_job_ref;
+	/*
+	 * internal job reference - generated by services and used in debug for
+	 * tracking submitted work
+	 */
+	u32 int_job_ref;
+	/* Workload Estimation - Workload Estimation Data */
+	struct rogue_fwif_workest_kick_data work_est_kick_data __aligned(8);
+};
+
+/*
+ ******************************************************************************
+ * Client CCB commands which are only required by the kernel
+ ******************************************************************************
+ */
+struct rogue_fwif_cmd_priority {
+	s32 priority;
+};
+
+/*
+ ******************************************************************************
+ * Signature and Checksums Buffer
+ ******************************************************************************
+ */
+struct rogue_fwif_sigbuf_ctl {
+	/* Ptr to Signature Buffer memory */
+	u32 buffer_fw_addr;
+	/* Amount of space left for storing regs in the buffer */
+	u32 left_size_in_regs;
+} __aligned(8);
+
+struct rogue_fwif_counter_dump_ctl {
+	/* Ptr to counter dump buffer */
+	u32 buffer_fw_addr;
+	/* Amount of space for storing in the buffer */
+	u32 size_in_dwords;
+} __aligned(8);
+
+struct rogue_fwif_firmware_gcov_ctl {
+	/* Ptr to firmware gcov buffer */
+	u32 buffer_fw_addr;
+	/* Amount of space for storing in the buffer */
+	u32 size;
+} __aligned(8);
+
+/*
+ *****************************************************************************
+ * ROGUE Compatibility checks
+ *****************************************************************************
+ */
+
+/*
+ * WARNING: Whenever the layout of ROGUE_FWIF_COMPCHECKS_BVNC changes, the
+ * following define should be increased by 1 to indicate to the compatibility
+ * logic that layout has changed.
+ */
+#define ROGUE_FWIF_COMPCHECKS_LAYOUT_VERSION 3
+
+struct rogue_fwif_compchecks_bvnc {
+	/* WARNING: This field must be the first one in this structure */
+	u32 layout_version;
+	aligned_u64 bvnc;
+} __aligned(8);
+
+struct rogue_fwif_init_options {
+	u8 os_count_support;
+	u8 padding[7];
+} __aligned(8);
+
+#define ROGUE_FWIF_COMPCHECKS_BVNC_DECLARE_AND_INIT(name) \
+	struct rogue_fwif_compchecks_bvnc(name) = {       \
+		ROGUE_FWIF_COMPCHECKS_LAYOUT_VERSION,     \
+		0,                                        \
+	}
+
+static inline void rogue_fwif_compchecks_bvnc_init(struct rogue_fwif_compchecks_bvnc *compchecks)
+{
+	compchecks->layout_version = ROGUE_FWIF_COMPCHECKS_LAYOUT_VERSION;
+	compchecks->bvnc = 0;
+}
+
+struct rogue_fwif_compchecks {
+	/* hardware BVNC (from the ROGUE registers) */
+	struct rogue_fwif_compchecks_bvnc hw_bvnc;
+	/* firmware BVNC */
+	struct rogue_fwif_compchecks_bvnc fw_bvnc;
+	/* identifier of the FW processor version */
+	u32 fw_processor_version;
+	/* software DDK version */
+	u32 ddk_version;
+	/* software DDK build no. */
+	u32 ddk_build;
+	/* build options bit-field */
+	u32 build_options;
+	/* initialisation options bit-field */
+	struct rogue_fwif_init_options init_options;
+	/* Information is valid */
+	bool updated __aligned(4);
+	u32 padding;
+} __aligned(8);
+
+/*
+ ******************************************************************************
+ * Updated configuration post FW data init.
+ ******************************************************************************
+ */
+struct rogue_fwif_runtime_cfg {
+	/* APM latency in ms before signalling IDLE to the host */
+	u32 active_pm_latency_ms;
+	/* Compatibility and other flags */
+	u32 runtime_cfg_flags;
+	/*
+	 * If set, the APM latency does not reset to the system default on each
+	 * GPU power transition
+	 */
+	bool active_pm_latency_persistant __aligned(4);
+	/* Core clock speed, currently only used to calculate timer ticks */
+	u32 core_clock_speed;
+	/* Last number of dusts change requested by the host */
+	u32 default_dusts_num_init;
+	/* Periodic Hardware Reset configuration values */
+	u32 phr_mode;
+	/* New number of milliseconds C/S is allowed to last */
+	u32 hcs_deadline_ms;
+	/* The watchdog period in microseconds */
+	u32 wdg_period_us;
+	/* Array of priorities per OS */
+	u32 osid_priority[ROGUE_FW_MAX_NUM_OS];
+	/* On-demand allocated HWPerf buffer address, to be passed to the FW */
+	u32 hwperf_buf_fw_addr;
+
+	bool padding __aligned(4);
+};
+
+/*
+ *****************************************************************************
+ * Control data for ROGUE
+ *****************************************************************************
+ */
+
+#define ROGUE_FWIF_HWR_DEBUG_DUMP_ALL (99999U)
+
+enum rogue_fwif_tpu_dm {
+	ROGUE_FWIF_TPU_DM_PDM = 0,
+	ROGUE_FWIF_TPU_DM_VDM = 1,
+	ROGUE_FWIF_TPU_DM_CDM = 2,
+	ROGUE_FWIF_TPU_DM_TDM = 3,
+	ROGUE_FWIF_TPU_DM_LAST
+};
+
+enum rogue_fwif_gpio_val_mode {
+	/* No GPIO validation */
+	ROGUE_FWIF_GPIO_VAL_OFF = 0,
+	/*
+	 * Simple test case that starts by sending data via the GPIO and then
+	 * sends back any data received over the GPIO
+	 */
+	ROGUE_FWIF_GPIO_VAL_GENERAL = 1,
+	/*
+	 * More complex test case that writes and reads data across the entire
+	 * GPIO AP address range.
+	 */
+	ROGUE_FWIF_GPIO_VAL_AP = 2,
+	/* Validates the GPIO Testbench. */
+	ROGUE_FWIF_GPIO_VAL_TESTBENCH = 5,
+	/* Send and then receive each byte in the range 0-255. */
+	ROGUE_FWIF_GPIO_VAL_LOOPBACK = 6,
+	/* Send and then receive each power-of-2 byte in the range 0-255. */
+	ROGUE_FWIF_GPIO_VAL_LOOPBACK_LITE = 7,
+	ROGUE_FWIF_GPIO_VAL_LAST
+};
+
+enum fw_perf_conf {
+	FW_PERF_CONF_NONE = 0,
+	FW_PERF_CONF_ICACHE = 1,
+	FW_PERF_CONF_DCACHE = 2,
+	FW_PERF_CONF_JTLB_INSTR = 5,
+	FW_PERF_CONF_INSTRUCTIONS = 6
+};
+
+enum fw_boot_stage {
+	FW_BOOT_STAGE_TLB_INIT_FAILURE = -2,
+	FW_BOOT_STAGE_NOT_AVAILABLE = -1,
+	FW_BOOT_NOT_STARTED = 0,
+	FW_BOOT_BLDR_STARTED = 1,
+	FW_BOOT_CACHE_DONE,
+	FW_BOOT_TLB_DONE,
+	FW_BOOT_MAIN_STARTED,
+	FW_BOOT_ALIGNCHECKS_DONE,
+	FW_BOOT_INIT_DONE,
+};
+
+/*
+ * Kernel CCB return slot responses. Using bit flags instead of bare integers
+ * allows the FW to pack several responses into the return slot of a single
+ * kCCB command.
+ */
+/* Command executed (return status from FW) */
+#define ROGUE_FWIF_KCCB_RTN_SLOT_CMD_EXECUTED BIT(0)
+/* A cleanup was requested but resource busy */
+#define ROGUE_FWIF_KCCB_RTN_SLOT_CLEANUP_BUSY BIT(1)
+/* Poll failed in FW for a HW operation to complete */
+#define ROGUE_FWIF_KCCB_RTN_SLOT_POLL_FAILURE BIT(2)
+/* Reset value of a kCCB return slot (set by host) */
+#define ROGUE_FWIF_KCCB_RTN_SLOT_NO_RESPONSE 0x0U
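+
+/*
+ * Illustrative decoding of a return slot (sketch only): since the responses
+ * are independent bits, one slot value can report command completion and a
+ * busy cleanup at the same time. 'kccb_rtn_slots' and 'slot' are assumed names
+ * for the mapped return slot array and the slot of interest:
+ *
+ *	u32 rtn = READ_ONCE(kccb_rtn_slots[slot]);
+ *	bool executed = rtn & ROGUE_FWIF_KCCB_RTN_SLOT_CMD_EXECUTED;
+ *	bool retry_cleanup = rtn & ROGUE_FWIF_KCCB_RTN_SLOT_CLEANUP_BUSY;
+ */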
+
+struct rogue_fwif_connection_ctl {
+	/* Fw-Os connection states */
+	enum rogue_fwif_connection_fw_state connection_fw_state;
+	enum rogue_fwif_connection_os_state connection_os_state;
+	u32 alive_fw_token;
+	u32 alive_os_token;
+} __aligned(8);
+
+struct rogue_fwif_osinit {
+	/* Kernel CCB */
+	u32 kernel_ccbctl_fw_addr;
+	u32 kernel_ccb_fw_addr;
+	u32 kernel_ccb_rtn_slots_fw_addr;
+
+	/* Firmware CCB */
+	u32 firmware_ccbctl_fw_addr;
+	u32 firmware_ccb_fw_addr;
+
+	/* Workload Estimation Firmware CCB */
+	u32 work_est_firmware_ccbctl_fw_addr;
+	u32 work_est_firmware_ccb_fw_addr;
+
+	u32 rogue_fwif_hwr_info_buf_ctl_fw_addr;
+
+	u32 hwr_debug_dump_limit;
+
+	u32 fw_os_data_fw_addr;
+
+	/* Compatibility checks to be populated by the Firmware */
+	struct rogue_fwif_compchecks rogue_comp_checks;
+} __aligned(8);
+
+/* BVNC Features */
+struct rogue_hwperf_bvnc_block {
+	/* Counter block ID, see ROGUE_HWPERF_CNTBLK_ID */
+	u16 block_id;
+
+	/* Number of counters in this block type */
+	u16 num_counters;
+
+	/* Number of blocks of this type */
+	u16 num_blocks;
+
+	u16 reserved;
+};
+
+#define ROGUE_HWPERF_MAX_BVNC_LEN (24)
+
+#define ROGUE_HWPERF_MAX_BVNC_BLOCK_LEN (16U)
+
+/* BVNC Features */
+struct rogue_hwperf_bvnc {
+	/* BVNC string */
+	char bvnc_string[ROGUE_HWPERF_MAX_BVNC_LEN];
+	/* See ROGUE_HWPERF_FEATURE_FLAGS */
+	u32 bvnc_km_feature_flags;
+	/* Number of blocks described in aBvncBlocks */
+	u16 num_bvnc_blocks;
+	/* Number of GPU cores present */
+	u16 bvnc_gpu_cores;
+	/* Supported Performance Blocks for BVNC */
+	struct rogue_hwperf_bvnc_block
+		bvnc_blocks[ROGUE_HWPERF_MAX_BVNC_BLOCK_LEN];
+};
+
+PVR_FW_STRUCT_SIZE_ASSERT(struct rogue_hwperf_bvnc);
+
+struct rogue_fwif_sysinit {
+	/* Fault read address */
+	aligned_u64 fault_phys_addr;
+
+	/* PDS execution base */
+	aligned_u64 pds_exec_base;
+	/* USC execution base */
+	aligned_u64 usc_exec_base;
+	/* FBCDC bindless texture state table base */
+	aligned_u64 fbcdc_state_table_base;
+	aligned_u64 fbcdc_large_state_table_base;
+	/* Texture state base */
+	aligned_u64 texture_heap_base;
+
+	/* Event filter for Firmware events */
+	u64 hw_perf_filter;
+
+	aligned_u64 slc3_fence_dev_addr;
+
+	u32 tpu_trilinear_frac_mask[ROGUE_FWIF_TPU_DM_LAST] __aligned(8);
+
+	/* Signature and Checksum Buffers for DMs */
+	struct rogue_fwif_sigbuf_ctl sigbuf_ctl[PVR_FWIF_DM_MAX];
+
+	struct rogue_fwif_pdvfs_opp pdvfs_opp_info;
+
+	struct rogue_fwif_dma_addr coremem_data_store;
+
+	struct rogue_fwif_counter_dump_ctl counter_dump_ctl;
+
+	u32 filter_flags;
+
+	u32 runtime_cfg_fw_addr;
+
+	u32 trace_buf_ctl_fw_addr;
+	u32 fw_sys_data_fw_addr;
+
+	u32 gpu_util_fw_cb_ctl_fw_addr;
+	u32 reg_cfg_fw_addr;
+	u32 hwperf_ctl_fw_addr;
+
+	u32 align_checks;
+
+	/* Core clock speed at FW boot time */
+	u32 initial_core_clock_speed;
+
+	/* APM latency in ms before signalling IDLE to the host */
+	u32 active_pm_latency_ms;
+
+	/* Flag to be set by the Firmware after successful start */
+	bool firmware_started __aligned(4);
+
+	/* Host/FW Trace synchronisation Partition Marker */
+	u32 marker_val;
+
+	/* Firmware initialization complete time */
+	u32 firmware_started_timestamp;
+
+	u32 jones_disable_mask;
+
+	/* Firmware performance counter config */
+	enum fw_perf_conf firmware_perf;
+
+	/*
+	 * FW pointer to memory containing the core clock rate in Hz.
+	 * The firmware (PDVFS) updates this memory when running on a
+	 * non-primary FW thread to communicate the rate to the host driver.
+	 */
+	u32 core_clock_rate_fw_addr;
+
+	enum rogue_fwif_gpio_val_mode gpio_validation_mode;
+
+	/* Used in HWPerf for decoding BVNC Features */
+	struct rogue_hwperf_bvnc bvnc_km_feature_flags;
+
+	/* Value to write into ROGUE_CR_TFBC_COMPRESSION_CONTROL */
+	u32 tfbc_compression_control;
+} __aligned(8);
+
+/*
+ *****************************************************************************
+ * Timer correlation shared data and defines
+ *****************************************************************************
+ */
+
+struct rogue_fwif_time_corr {
+	aligned_u64 os_timestamp;
+	aligned_u64 os_mono_timestamp;
+	aligned_u64 cr_timestamp;
+
+	/*
+	 * Utility variable used to convert CR timer deltas to OS timer deltas
+	 * (nS), where the deltas are relative to the timestamps above:
+	 * deltaOS = (deltaCR * K) >> decimal_shift, see full explanation below
+	 */
+	aligned_u64 cr_delta_to_os_delta_kns;
+
+	u32 core_clock_speed;
+	u32 reserved;
+} __aligned(8);
+
+/*
+ * The following macros are used to help converting FW timestamps to the Host
+ * time domain. On the FW the ROGUE_CR_TIMER counter is used to keep track of
+ * time; it increments by 1 every 256 GPU clock ticks, so the general
+ * formula to perform the conversion is:
+ *
+ * [ GPU clock speed in Hz, if (scale == 10^9) then deltaOS is in nS,
+ *   otherwise if (scale == 10^6) then deltaOS is in uS ]
+ *
+ *             deltaCR * 256                                   256 * scale
+ *  deltaOS = --------------- * scale = deltaCR * K    [ K = --------------- ]
+ *             GPUclockspeed                                  GPUclockspeed
+ *
+ * The actual K is multiplied by 2^20 (and deltaCR * K is divided by 2^20)
+ * to get some better accuracy and to avoid returning 0 in the integer
+ * division 256000000/GPUfreq if GPUfreq is greater than 256MHz.
+ * This is the same as keeping K as a decimal number.
+ *
+ * The maximum deltaOS is slightly more than 5hrs for all GPU frequencies
+ * (deltaCR * K is more or less a constant), and it's relative to the base
+ * OS timestamp sampled as a part of the timer correlation data.
+ * This base is refreshed on GPU power-on, DVFS transition and periodic
+ * frequency calibration (executed every few seconds if the FW is doing
+ * some work), so as long as the GPU is doing something and one of these
+ * events is triggered then deltaCR * K will not overflow and deltaOS will be
+ * correct.
+ */
+
+#define ROGUE_FWIF_CRDELTA_TO_OSDELTA_ACCURACY_SHIFT (20)
+
+#define ROGUE_FWIF_GET_DELTA_OSTIME_NS(delta_cr, k) \
+	(((delta_cr) * (k)) >> ROGUE_FWIF_CRDELTA_TO_OSDELTA_ACCURACY_SHIFT)
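+
+/*
+ * Illustrative use of the conversion above (sketch only). Assuming 'corr' is a
+ * sampled struct rogue_fwif_time_corr and 'cr_now' is a raw ROGUE_CR_TIMER
+ * reading, K is already held in cr_delta_to_os_delta_kns, so:
+ *
+ *	u64 delta_cr = cr_now - corr.cr_timestamp;
+ *	u64 delta_os_ns = ROGUE_FWIF_GET_DELTA_OSTIME_NS(delta_cr,
+ *							 corr.cr_delta_to_os_delta_kns);
+ *	u64 event_os_ns = corr.os_timestamp + delta_os_ns;
+ *
+ * with K itself derived as
+ * (256 * NSEC_PER_SEC << ROGUE_FWIF_CRDELTA_TO_OSDELTA_ACCURACY_SHIFT) divided
+ * by the GPU clock speed in Hz.
+ */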
+
+/*
+ ******************************************************************************
+ * GPU Utilisation
+ ******************************************************************************
+ */
+
+/* See rogue_common.h for a list of GPU states */
+#define ROGUE_FWIF_GPU_UTIL_TIME_MASK \
+	(0xFFFFFFFFFFFFFFFFull & ~ROGUE_FWIF_GPU_UTIL_STATE_MASK)
+
+#define ROGUE_FWIF_GPU_UTIL_GET_TIME(word) \
+	((word) & ROGUE_FWIF_GPU_UTIL_TIME_MASK)
+#define ROGUE_FWIF_GPU_UTIL_GET_STATE(word) \
+	((word) & ROGUE_FWIF_GPU_UTIL_STATE_MASK)
+
+/*
+ * The OS timestamps computed by the FW are approximations of the real time,
+ * which means they could be slightly behind or ahead of the real timer on the
+ * Host. In some cases we can perform subtractions between FW approximated
+ * timestamps and real OS timestamps, so we need a form of protection against
+ * negative results if for instance the FW one is a bit ahead of time.
+ */
+#define ROGUE_FWIF_GPU_UTIL_GET_PERIOD(newtime, oldtime) \
+	(((newtime) > (oldtime)) ? ((newtime) - (oldtime)) : 0U)
+
+#define ROGUE_FWIF_GPU_UTIL_MAKE_WORD(time, state) \
+	(ROGUE_FWIF_GPU_UTIL_GET_TIME(time) |      \
+	 ROGUE_FWIF_GPU_UTIL_GET_STATE(state))
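+
+/*
+ * Illustrative composition/decomposition of a utilisation word (sketch only,
+ * assuming 'os_time_ns' is the current OS timestamp and
+ * ROGUE_FWIF_GPU_UTIL_STATE_ACTIVE is one of the GPU state values referred to
+ * above): the timestamp occupies the bits left free by the state mask, so both
+ * can be recovered from a single 64-bit word:
+ *
+ *	u64 word = ROGUE_FWIF_GPU_UTIL_MAKE_WORD(os_time_ns,
+ *						 ROGUE_FWIF_GPU_UTIL_STATE_ACTIVE);
+ *	u64 time = ROGUE_FWIF_GPU_UTIL_GET_TIME(word);
+ *	u64 state = ROGUE_FWIF_GPU_UTIL_GET_STATE(word);
+ */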
+
+/*
+ * The timer correlation array must be big enough to ensure old entries won't be
+ * overwritten before all the HWPerf events linked to those entries are
+ * processed by the MISR. The update frequency of this array depends on how fast
+ * the system can change state (basically how small the APM latency is) and
+ * perform DVFS transitions.
+ *
+ * The minimum size is 2 (not 1) to avoid race conditions between the FW reading
+ * an entry while the Host is updating it. With 2 entries, in the worst case the
+ * FW will read stale data, which is still acceptable if the Host is updating
+ * the timer correlation at that time.
+ */
+#define ROGUE_FWIF_TIME_CORR_ARRAY_SIZE 256U
+#define ROGUE_FWIF_TIME_CORR_CURR_INDEX(seqcount) \
+	((seqcount) % ROGUE_FWIF_TIME_CORR_ARRAY_SIZE)
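+
+/*
+ * Illustrative host-side update sequence (sketch only, assuming 'fwcb' points
+ * at the FW-visible struct rogue_fwif_gpu_util_fwcb below and 'new_corr' is
+ * the freshly computed correlation): write the new entry first, then publish
+ * it by bumping the sequence count, so the index seen by the FW only moves
+ * once the entry is complete:
+ *
+ *	u32 new_seq = fwcb->time_corr_seq_count + 1;
+ *	u32 idx = ROGUE_FWIF_TIME_CORR_CURR_INDEX(new_seq);
+ *
+ *	fwcb->time_corr[idx] = new_corr;
+ *	WRITE_ONCE(fwcb->time_corr_seq_count, new_seq);
+ */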
+
+/* Make sure the timer correlation array size is a power of 2 */
+static_assert((ROGUE_FWIF_TIME_CORR_ARRAY_SIZE &
+	       (ROGUE_FWIF_TIME_CORR_ARRAY_SIZE - 1U)) == 0U,
+	      "ROGUE_FWIF_TIME_CORR_ARRAY_SIZE must be a power of two");
+
+struct rogue_fwif_gpu_util_fwcb {
+	struct rogue_fwif_time_corr time_corr[ROGUE_FWIF_TIME_CORR_ARRAY_SIZE];
+	u32 time_corr_seq_count;
+
+	/* Compatibility and other flags */
+	u32 gpu_util_flags;
+
+	/* Last GPU state + OS time of the last state update */
+	aligned_u64 last_word;
+
+	/* Counters for the amount of time the GPU was active/idle/blocked */
+	aligned_u64 stats_counters[PVR_FWIF_GPU_UTIL_STATE_NUM];
+} __aligned(8);
+
+struct rogue_fwif_rta_ctl {
+	/* Render number */
+	u32 render_target_index;
+	/* index in RTA */
+	u32 current_render_target;
+	/* total active RTs */
+	u32 active_render_targets;
+	/* total active RTs from the first TA kick, for OOM */
+	u32 cumul_active_render_targets;
+	/* Array of valid RT indices */
+	u32 valid_render_targets_fw_addr;
+	/* Array of partial render counts, one per render target */
+	u32 rta_num_partial_renders_fw_addr;
+	/* Number of render targets in the array */
+	u32 max_rts;
+	/* Compatibility and other flags */
+	u32 rta_ctl_flags;
+} __aligned(8);
+
+struct rogue_fwif_freelist {
+	aligned_u64 freelist_dev_addr;
+	aligned_u64 current_dev_addr;
+	u32 current_stack_top;
+	u32 max_pages;
+	u32 grow_pages;
+	/* HW pages */
+	u32 current_pages;
+	u32 allocated_page_count;
+	u32 allocated_mmu_page_count;
+	u32 freelist_id;
+
+	bool grow_pending __aligned(4);
+	/* Pages that should be used only when OOM is reached */
+	u32 ready_pages;
+	/* Compatibility and other flags */
+	u32 freelist_flags;
+	/* PM Global PB on which Freelist is loaded */
+	u32 pm_global_pb;
+	u32 padding;
+} __aligned(8);
+
+/*
+ ******************************************************************************
+ * HWRTData
+ ******************************************************************************
+ */
+
+/* HWRTData flags */
+/* Deprecated flags 1:0 */
+#define HWRTDATA_HAS_LAST_GEOM BIT(2)
+#define HWRTDATA_PARTIAL_RENDERED BIT(3)
+#define HWRTDATA_DISABLE_TILE_REORDERING BIT(4)
+#define HWRTDATA_NEED_BRN65101_BLIT BIT(5)
+#define HWRTDATA_FIRST_BRN65101_STRIP BIT(6)
+#define HWRTDATA_NEED_BRN67182_2ND_RENDER BIT(7)
+
+enum rogue_fwif_rtdata_state {
+	ROGUE_FWIF_RTDATA_STATE_NONE = 0,
+	ROGUE_FWIF_RTDATA_STATE_KICK_GEOM,
+	ROGUE_FWIF_RTDATA_STATE_KICK_GEOM_FIRST,
+	ROGUE_FWIF_RTDATA_STATE_GEOM_FINISHED,
+	ROGUE_FWIF_RTDATA_STATE_KICK_FRAG,
+	ROGUE_FWIF_RTDATA_STATE_FRAG_FINISHED,
+	ROGUE_FWIF_RTDATA_STATE_FRAG_CONTEXT_STORED,
+	ROGUE_FWIF_RTDATA_STATE_GEOM_OUTOFMEM,
+	ROGUE_FWIF_RTDATA_STATE_PARTIALRENDERFINISHED,
+	/*
+	 * In case of HWR, we can't set the RTDATA state to NONE, as this will
+	 * cause any TA to become a first TA. To ensure all related TAs are
+	 * skipped, we use the HWR state.
+	 */
+	ROGUE_FWIF_RTDATA_STATE_HWR,
+	ROGUE_FWIF_RTDATA_STATE_UNKNOWN = 0x7FFFFFFFU
+};
+
+struct rogue_fwif_hwrtdata_common {
+	bool geom_caches_need_zeroing __aligned(4);
+
+	u32 screen_pixel_max;
+	aligned_u64 multi_sample_ctl;
+	u64 flipped_multi_sample_ctl;
+	u32 tpc_stride;
+	u32 tpc_size;
+	u32 te_screen;
+	u32 mtile_stride;
+	u32 teaa;
+	u32 te_mtile1;
+	u32 te_mtile2;
+	u32 isp_merge_lower_x;
+	u32 isp_merge_lower_y;
+	u32 isp_merge_upper_x;
+	u32 isp_merge_upper_y;
+	u32 isp_merge_scale_x;
+	u32 isp_merge_scale_y;
+	u32 rgn_header_size;
+	u32 isp_mtile_size;
+	u32 padding;
+} __aligned(8);
+
+struct rogue_fwif_hwrtdata {
+	/* MList Data Store */
+	aligned_u64 pm_mlist_dev_addr;
+
+	aligned_u64 vce_cat_base[4];
+	aligned_u64 vce_last_cat_base[4];
+	aligned_u64 te_cat_base[4];
+	aligned_u64 te_last_cat_base[4];
+	aligned_u64 alist_cat_base;
+	aligned_u64 alist_last_cat_base;
+
+	aligned_u64 pm_alist_stack_pointer;
+	u32 pm_mlist_stack_pointer;
+
+	u32 hwrt_data_common_fw_addr;
+
+	u32 hwrt_data_flags;
+	enum rogue_fwif_rtdata_state state;
+
+	u32 freelists_fw_addr[MAX_FREELISTS_SIZE] __aligned(8);
+	u32 freelist_hwr_snapshot[MAX_FREELISTS_SIZE];
+
+	aligned_u64 vheap_table_dev_addr;
+
+	struct rogue_fwif_rta_ctl rta_ctl;
+
+	aligned_u64 tail_ptrs_dev_addr;
+	aligned_u64 macrotile_array_dev_addr;
+	aligned_u64 rgn_header_dev_addr;
+	aligned_u64 rtc_dev_addr;
+
+	u32 owner_geom_not_used_by_host __aligned(8);
+
+	bool geom_caches_need_zeroing __aligned(4);
+
+	struct rogue_fwif_cleanup_ctl cleanup_state __aligned(64);
+} __aligned(8);
+
+/*
+ ******************************************************************************
+ * Sync checkpoints
+ ******************************************************************************
+ */
+
+#define PVR_SYNC_CHECKPOINT_UNDEF 0x000
+#define PVR_SYNC_CHECKPOINT_ACTIVE 0xac1     /* Checkpoint has not signaled. */
+#define PVR_SYNC_CHECKPOINT_SIGNALED 0x519   /* Checkpoint has signaled. */
+#define PVR_SYNC_CHECKPOINT_ERRORED 0xeff    /* Checkpoint has been errored. */
+
+#include "pvr_rogue_fwif_check.h"
+
+#endif /* PVR_ROGUE_FWIF_H */
diff --git a/drivers/gpu/drm/imagination/pvr_rogue_fwif_check.h b/drivers/gpu/drm/imagination/pvr_rogue_fwif_check.h
new file mode 100644
index 000000000000..eb8f1ffe7373
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_rogue_fwif_check.h
@@ -0,0 +1,491 @@
+/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#ifndef PVR_ROGUE_FWIF_CHECK_H
+#define PVR_ROGUE_FWIF_CHECK_H
+
+#include <linux/build_bug.h>
+
+#define OFFSET_CHECK(type, member, offset) \
+	static_assert(offsetof(type, member) == (offset), \
+		      "offsetof(" #type ", " #member ") incorrect")
+
+#define SIZE_CHECK(type, size) \
+	static_assert(sizeof(type) == (size), #type " is incorrect size")
+
+OFFSET_CHECK(struct rogue_fwif_file_info_buf, path, 0);
+OFFSET_CHECK(struct rogue_fwif_file_info_buf, info, 200);
+OFFSET_CHECK(struct rogue_fwif_file_info_buf, line_num, 400);
+SIZE_CHECK(struct rogue_fwif_file_info_buf, 408);
+
+OFFSET_CHECK(struct rogue_fwif_tracebuf_space, trace_pointer, 0);
+OFFSET_CHECK(struct rogue_fwif_tracebuf_space, trace_buffer_fw_addr, 4);
+OFFSET_CHECK(struct rogue_fwif_tracebuf_space, trace_buffer, 8);
+OFFSET_CHECK(struct rogue_fwif_tracebuf_space, assert_buf, 16);
+SIZE_CHECK(struct rogue_fwif_tracebuf_space, 424);
+
+OFFSET_CHECK(struct rogue_fwif_tracebuf, log_type, 0);
+OFFSET_CHECK(struct rogue_fwif_tracebuf, tracebuf, 8);
+OFFSET_CHECK(struct rogue_fwif_tracebuf, tracebuf_size_in_dwords, 856);
+OFFSET_CHECK(struct rogue_fwif_tracebuf, tracebuf_flags, 860);
+SIZE_CHECK(struct rogue_fwif_tracebuf, 864);
+
+OFFSET_CHECK(struct rogue_fw_fault_info, cr_timer, 0);
+OFFSET_CHECK(struct rogue_fw_fault_info, os_timer, 8);
+OFFSET_CHECK(struct rogue_fw_fault_info, data, 16);
+OFFSET_CHECK(struct rogue_fw_fault_info, reserved, 20);
+OFFSET_CHECK(struct rogue_fw_fault_info, fault_buf, 24);
+SIZE_CHECK(struct rogue_fw_fault_info, 432);
+
+OFFSET_CHECK(struct rogue_fwif_sysdata, config_flags, 0);
+OFFSET_CHECK(struct rogue_fwif_sysdata, config_flags_ext, 4);
+OFFSET_CHECK(struct rogue_fwif_sysdata, pow_state, 8);
+OFFSET_CHECK(struct rogue_fwif_sysdata, hw_perf_ridx, 12);
+OFFSET_CHECK(struct rogue_fwif_sysdata, hw_perf_widx, 16);
+OFFSET_CHECK(struct rogue_fwif_sysdata, hw_perf_wrap_count, 20);
+OFFSET_CHECK(struct rogue_fwif_sysdata, hw_perf_size, 24);
+OFFSET_CHECK(struct rogue_fwif_sysdata, hw_perf_drop_count, 28);
+OFFSET_CHECK(struct rogue_fwif_sysdata, hw_perf_ut, 32);
+OFFSET_CHECK(struct rogue_fwif_sysdata, first_drop_ordinal, 36);
+OFFSET_CHECK(struct rogue_fwif_sysdata, last_drop_ordinal, 40);
+OFFSET_CHECK(struct rogue_fwif_sysdata, os_runtime_flags_mirror, 44);
+OFFSET_CHECK(struct rogue_fwif_sysdata, fault_info, 80);
+OFFSET_CHECK(struct rogue_fwif_sysdata, fw_faults, 3536);
+OFFSET_CHECK(struct rogue_fwif_sysdata, cr_poll_addr, 3540);
+OFFSET_CHECK(struct rogue_fwif_sysdata, cr_poll_mask, 3548);
+OFFSET_CHECK(struct rogue_fwif_sysdata, cr_poll_count, 3556);
+OFFSET_CHECK(struct rogue_fwif_sysdata, start_idle_time, 3568);
+OFFSET_CHECK(struct rogue_fwif_sysdata, hwr_state_flags, 3576);
+OFFSET_CHECK(struct rogue_fwif_sysdata, hwr_recovery_flags, 3580);
+OFFSET_CHECK(struct rogue_fwif_sysdata, fw_sys_data_flags, 3616);
+OFFSET_CHECK(struct rogue_fwif_sysdata, mc_config, 3620);
+SIZE_CHECK(struct rogue_fwif_sysdata, 3624);
+
+OFFSET_CHECK(struct rogue_fwif_slr_entry, timestamp, 0);
+OFFSET_CHECK(struct rogue_fwif_slr_entry, fw_ctx_addr, 8);
+OFFSET_CHECK(struct rogue_fwif_slr_entry, num_ufos, 12);
+OFFSET_CHECK(struct rogue_fwif_slr_entry, ccb_name, 16);
+SIZE_CHECK(struct rogue_fwif_slr_entry, 48);
+
+OFFSET_CHECK(struct rogue_fwif_osdata, fw_os_config_flags, 0);
+OFFSET_CHECK(struct rogue_fwif_osdata, fw_sync_check_mark, 4);
+OFFSET_CHECK(struct rogue_fwif_osdata, host_sync_check_mark, 8);
+OFFSET_CHECK(struct rogue_fwif_osdata, forced_updates_requested, 12);
+OFFSET_CHECK(struct rogue_fwif_osdata, slr_log_wp, 16);
+OFFSET_CHECK(struct rogue_fwif_osdata, slr_log_first, 24);
+OFFSET_CHECK(struct rogue_fwif_osdata, slr_log, 72);
+OFFSET_CHECK(struct rogue_fwif_osdata, last_forced_update_time, 552);
+OFFSET_CHECK(struct rogue_fwif_osdata, interrupt_count, 560);
+OFFSET_CHECK(struct rogue_fwif_osdata, kccb_cmds_executed, 568);
+OFFSET_CHECK(struct rogue_fwif_osdata, power_sync_fw_addr, 572);
+OFFSET_CHECK(struct rogue_fwif_osdata, fw_os_data_flags, 576);
+SIZE_CHECK(struct rogue_fwif_osdata, 584);
+
+OFFSET_CHECK(struct rogue_bifinfo, bif_req_status, 0);
+OFFSET_CHECK(struct rogue_bifinfo, bif_mmu_status, 8);
+OFFSET_CHECK(struct rogue_bifinfo, pc_address, 16);
+OFFSET_CHECK(struct rogue_bifinfo, reserved, 24);
+SIZE_CHECK(struct rogue_bifinfo, 32);
+
+OFFSET_CHECK(struct rogue_eccinfo, fault_gpu, 0);
+SIZE_CHECK(struct rogue_eccinfo, 4);
+
+OFFSET_CHECK(struct rogue_mmuinfo, mmu_status, 0);
+OFFSET_CHECK(struct rogue_mmuinfo, pc_address, 16);
+OFFSET_CHECK(struct rogue_mmuinfo, reserved, 24);
+SIZE_CHECK(struct rogue_mmuinfo, 32);
+
+OFFSET_CHECK(struct rogue_pollinfo, thread_num, 0);
+OFFSET_CHECK(struct rogue_pollinfo, cr_poll_addr, 4);
+OFFSET_CHECK(struct rogue_pollinfo, cr_poll_mask, 8);
+OFFSET_CHECK(struct rogue_pollinfo, cr_poll_last_value, 12);
+OFFSET_CHECK(struct rogue_pollinfo, reserved, 16);
+SIZE_CHECK(struct rogue_pollinfo, 24);
+
+OFFSET_CHECK(struct rogue_tlbinfo, bad_addr, 0);
+OFFSET_CHECK(struct rogue_tlbinfo, entry_lo, 4);
+SIZE_CHECK(struct rogue_tlbinfo, 8);
+
+OFFSET_CHECK(struct rogue_hwrinfo, hwr_data, 0);
+OFFSET_CHECK(struct rogue_hwrinfo, cr_timer, 32);
+OFFSET_CHECK(struct rogue_hwrinfo, os_timer, 40);
+OFFSET_CHECK(struct rogue_hwrinfo, frame_num, 48);
+OFFSET_CHECK(struct rogue_hwrinfo, pid, 52);
+OFFSET_CHECK(struct rogue_hwrinfo, active_hwrt_data, 56);
+OFFSET_CHECK(struct rogue_hwrinfo, hwr_number, 60);
+OFFSET_CHECK(struct rogue_hwrinfo, event_status, 64);
+OFFSET_CHECK(struct rogue_hwrinfo, hwr_recovery_flags, 68);
+OFFSET_CHECK(struct rogue_hwrinfo, hwr_type, 72);
+OFFSET_CHECK(struct rogue_hwrinfo, dm, 76);
+OFFSET_CHECK(struct rogue_hwrinfo, core_id, 80);
+OFFSET_CHECK(struct rogue_hwrinfo, cr_time_of_kick, 88);
+OFFSET_CHECK(struct rogue_hwrinfo, cr_time_hw_reset_start, 96);
+OFFSET_CHECK(struct rogue_hwrinfo, cr_time_hw_reset_finish, 104);
+OFFSET_CHECK(struct rogue_hwrinfo, cr_time_freelist_ready, 112);
+OFFSET_CHECK(struct rogue_hwrinfo, reserved, 120);
+SIZE_CHECK(struct rogue_hwrinfo, 136);
+
+OFFSET_CHECK(struct rogue_fwif_hwrinfobuf, hwr_info, 0);
+OFFSET_CHECK(struct rogue_fwif_hwrinfobuf, hwr_counter, 2176);
+OFFSET_CHECK(struct rogue_fwif_hwrinfobuf, write_index, 2180);
+OFFSET_CHECK(struct rogue_fwif_hwrinfobuf, dd_req_count, 2184);
+OFFSET_CHECK(struct rogue_fwif_hwrinfobuf, hwr_info_buf_flags, 2188);
+OFFSET_CHECK(struct rogue_fwif_hwrinfobuf, hwr_dm_locked_up_count, 2192);
+OFFSET_CHECK(struct rogue_fwif_hwrinfobuf, hwr_dm_overran_count, 2228);
+OFFSET_CHECK(struct rogue_fwif_hwrinfobuf, hwr_dm_recovered_count, 2264);
+OFFSET_CHECK(struct rogue_fwif_hwrinfobuf, hwr_dm_false_detect_count, 2300);
+SIZE_CHECK(struct rogue_fwif_hwrinfobuf, 2336);
+
+OFFSET_CHECK(struct rogue_fwif_fwmemcontext, pc_dev_paddr, 0);
+OFFSET_CHECK(struct rogue_fwif_fwmemcontext, page_cat_base_reg_set, 8);
+OFFSET_CHECK(struct rogue_fwif_fwmemcontext, breakpoint_addr, 12);
+OFFSET_CHECK(struct rogue_fwif_fwmemcontext, bp_handler_addr, 16);
+OFFSET_CHECK(struct rogue_fwif_fwmemcontext, breakpoint_ctl, 20);
+OFFSET_CHECK(struct rogue_fwif_fwmemcontext, fw_mem_ctx_flags, 24);
+SIZE_CHECK(struct rogue_fwif_fwmemcontext, 32);
+
+OFFSET_CHECK(struct rogue_fwif_geom_ctx_state_per_geom, geom_reg_vdm_call_stack_pointer, 0);
+OFFSET_CHECK(struct rogue_fwif_geom_ctx_state_per_geom, geom_reg_vdm_call_stack_pointer_init, 8);
+OFFSET_CHECK(struct rogue_fwif_geom_ctx_state_per_geom, geom_reg_vbs_so_prim, 16);
+OFFSET_CHECK(struct rogue_fwif_geom_ctx_state_per_geom, geom_current_idx, 32);
+SIZE_CHECK(struct rogue_fwif_geom_ctx_state_per_geom, 40);
+
+OFFSET_CHECK(struct rogue_fwif_geom_ctx_state, geom_core, 0);
+SIZE_CHECK(struct rogue_fwif_geom_ctx_state, 160);
+
+OFFSET_CHECK(struct rogue_fwif_frag_ctx_state, frag_reg_pm_deallocated_mask_status, 0);
+OFFSET_CHECK(struct rogue_fwif_frag_ctx_state, frag_reg_dm_pds_mtilefree_status, 4);
+OFFSET_CHECK(struct rogue_fwif_frag_ctx_state, ctx_state_flags, 8);
+OFFSET_CHECK(struct rogue_fwif_frag_ctx_state, frag_reg_isp_store, 12);
+SIZE_CHECK(struct rogue_fwif_frag_ctx_state, 16);
+
+OFFSET_CHECK(struct rogue_fwif_compute_ctx_state, ctx_state_flags, 0);
+SIZE_CHECK(struct rogue_fwif_compute_ctx_state, 4);
+
+OFFSET_CHECK(struct rogue_fwif_fwcommoncontext, ccbctl_fw_addr, 0);
+OFFSET_CHECK(struct rogue_fwif_fwcommoncontext, ccb_fw_addr, 4);
+OFFSET_CHECK(struct rogue_fwif_fwcommoncontext, ccb_meta_dma_addr, 8);
+OFFSET_CHECK(struct rogue_fwif_fwcommoncontext, context_state_addr, 24);
+OFFSET_CHECK(struct rogue_fwif_fwcommoncontext, fw_com_ctx_flags, 28);
+OFFSET_CHECK(struct rogue_fwif_fwcommoncontext, priority, 32);
+OFFSET_CHECK(struct rogue_fwif_fwcommoncontext, priority_seq_num, 36);
+OFFSET_CHECK(struct rogue_fwif_fwcommoncontext, rf_cmd_addr, 40);
+OFFSET_CHECK(struct rogue_fwif_fwcommoncontext, stats_pending, 44);
+OFFSET_CHECK(struct rogue_fwif_fwcommoncontext, stats_num_stores, 48);
+OFFSET_CHECK(struct rogue_fwif_fwcommoncontext, stats_num_out_of_memory, 52);
+OFFSET_CHECK(struct rogue_fwif_fwcommoncontext, stats_num_partial_renders, 56);
+OFFSET_CHECK(struct rogue_fwif_fwcommoncontext, dm, 60);
+OFFSET_CHECK(struct rogue_fwif_fwcommoncontext, wait_signal_address, 64);
+OFFSET_CHECK(struct rogue_fwif_fwcommoncontext, wait_signal_node, 72);
+OFFSET_CHECK(struct rogue_fwif_fwcommoncontext, buf_stalled_node, 80);
+OFFSET_CHECK(struct rogue_fwif_fwcommoncontext, cbuf_queue_ctrl_addr, 88);
+OFFSET_CHECK(struct rogue_fwif_fwcommoncontext, robustness_address, 96);
+OFFSET_CHECK(struct rogue_fwif_fwcommoncontext, max_deadline_ms, 104);
+OFFSET_CHECK(struct rogue_fwif_fwcommoncontext, read_offset_needs_reset, 108);
+OFFSET_CHECK(struct rogue_fwif_fwcommoncontext, waiting_node, 112);
+OFFSET_CHECK(struct rogue_fwif_fwcommoncontext, run_node, 120);
+OFFSET_CHECK(struct rogue_fwif_fwcommoncontext, last_failed_ufo, 128);
+OFFSET_CHECK(struct rogue_fwif_fwcommoncontext, fw_mem_context_fw_addr, 136);
+OFFSET_CHECK(struct rogue_fwif_fwcommoncontext, server_common_context_id, 140);
+OFFSET_CHECK(struct rogue_fwif_fwcommoncontext, pid, 144);
+OFFSET_CHECK(struct rogue_fwif_fwcommoncontext, geom_oom_disabled, 148);
+SIZE_CHECK(struct rogue_fwif_fwcommoncontext, 152);
+
+OFFSET_CHECK(struct rogue_fwif_ccb_ctl, write_offset, 0);
+OFFSET_CHECK(struct rogue_fwif_ccb_ctl, read_offset, 4);
+OFFSET_CHECK(struct rogue_fwif_ccb_ctl, wrap_mask, 8);
+OFFSET_CHECK(struct rogue_fwif_ccb_ctl, cmd_size, 12);
+SIZE_CHECK(struct rogue_fwif_ccb_ctl, 16);
+
+OFFSET_CHECK(struct rogue_fwif_kccb_cmd_kick_data, context_fw_addr, 0);
+OFFSET_CHECK(struct rogue_fwif_kccb_cmd_kick_data, client_woff_update, 4);
+OFFSET_CHECK(struct rogue_fwif_kccb_cmd_kick_data, client_wrap_mask_update, 8);
+OFFSET_CHECK(struct rogue_fwif_kccb_cmd_kick_data, num_cleanup_ctl, 12);
+OFFSET_CHECK(struct rogue_fwif_kccb_cmd_kick_data, cleanup_ctl_fw_addr, 16);
+OFFSET_CHECK(struct rogue_fwif_kccb_cmd_kick_data, work_est_cmd_header_offset, 28);
+SIZE_CHECK(struct rogue_fwif_kccb_cmd_kick_data, 32);
+
+OFFSET_CHECK(struct rogue_fwif_kccb_cmd_combined_geom_frag_kick_data, geom_cmd_kick_data, 0);
+OFFSET_CHECK(struct rogue_fwif_kccb_cmd_combined_geom_frag_kick_data, frag_cmd_kick_data, 32);
+SIZE_CHECK(struct rogue_fwif_kccb_cmd_combined_geom_frag_kick_data, 64);
+
+OFFSET_CHECK(struct rogue_fwif_kccb_cmd_force_update_data, context_fw_addr, 0);
+OFFSET_CHECK(struct rogue_fwif_kccb_cmd_force_update_data, ccb_fence_offset, 4);
+SIZE_CHECK(struct rogue_fwif_kccb_cmd_force_update_data, 8);
+
+OFFSET_CHECK(struct rogue_fwif_cleanup_request, cleanup_type, 0);
+OFFSET_CHECK(struct rogue_fwif_cleanup_request, cleanup_data, 4);
+SIZE_CHECK(struct rogue_fwif_cleanup_request, 8);
+
+OFFSET_CHECK(struct rogue_fwif_power_request, pow_type, 0);
+OFFSET_CHECK(struct rogue_fwif_power_request, power_req_data, 4);
+SIZE_CHECK(struct rogue_fwif_power_request, 8);
+
+OFFSET_CHECK(struct rogue_fwif_slcflushinvaldata, context_fw_addr, 0);
+OFFSET_CHECK(struct rogue_fwif_slcflushinvaldata, inval, 4);
+OFFSET_CHECK(struct rogue_fwif_slcflushinvaldata, dm_context, 8);
+OFFSET_CHECK(struct rogue_fwif_slcflushinvaldata, address, 16);
+OFFSET_CHECK(struct rogue_fwif_slcflushinvaldata, size, 24);
+SIZE_CHECK(struct rogue_fwif_slcflushinvaldata, 32);
+
+OFFSET_CHECK(struct rogue_fwif_hwperf_ctrl, opcode, 0);
+OFFSET_CHECK(struct rogue_fwif_hwperf_ctrl, mask, 8);
+SIZE_CHECK(struct rogue_fwif_hwperf_ctrl, 16);
+
+OFFSET_CHECK(struct rogue_fwif_hwperf_config_enable_blks, num_blocks, 0);
+OFFSET_CHECK(struct rogue_fwif_hwperf_config_enable_blks, block_configs_fw_addr, 4);
+SIZE_CHECK(struct rogue_fwif_hwperf_config_enable_blks, 8);
+
+OFFSET_CHECK(struct rogue_fwif_hwperf_config_da_blks, num_blocks, 0);
+OFFSET_CHECK(struct rogue_fwif_hwperf_config_da_blks, block_configs_fw_addr, 4);
+SIZE_CHECK(struct rogue_fwif_hwperf_config_da_blks, 8);
+
+OFFSET_CHECK(struct rogue_fwif_coreclkspeedchange_data, new_clock_speed, 0);
+SIZE_CHECK(struct rogue_fwif_coreclkspeedchange_data, 4);
+
+OFFSET_CHECK(struct rogue_fwif_hwperf_ctrl_blks, enable, 0);
+OFFSET_CHECK(struct rogue_fwif_hwperf_ctrl_blks, num_blocks, 4);
+OFFSET_CHECK(struct rogue_fwif_hwperf_ctrl_blks, block_ids, 8);
+SIZE_CHECK(struct rogue_fwif_hwperf_ctrl_blks, 40);
+
+OFFSET_CHECK(struct rogue_fwif_hwperf_select_custom_cntrs, custom_block, 0);
+OFFSET_CHECK(struct rogue_fwif_hwperf_select_custom_cntrs, num_counters, 2);
+OFFSET_CHECK(struct rogue_fwif_hwperf_select_custom_cntrs, custom_counter_ids_fw_addr, 4);
+SIZE_CHECK(struct rogue_fwif_hwperf_select_custom_cntrs, 8);
+
+OFFSET_CHECK(struct rogue_fwif_zsbuffer_backing_data, zs_buffer_fw_addr, 0);
+OFFSET_CHECK(struct rogue_fwif_zsbuffer_backing_data, done, 4);
+SIZE_CHECK(struct rogue_fwif_zsbuffer_backing_data, 8);
+
+OFFSET_CHECK(struct rogue_fwif_freelist_gs_data, freelist_fw_addr, 0);
+OFFSET_CHECK(struct rogue_fwif_freelist_gs_data, delta_pages, 4);
+OFFSET_CHECK(struct rogue_fwif_freelist_gs_data, new_pages, 8);
+OFFSET_CHECK(struct rogue_fwif_freelist_gs_data, ready_pages, 12);
+SIZE_CHECK(struct rogue_fwif_freelist_gs_data, 16);
+
+OFFSET_CHECK(struct rogue_fwif_freelists_reconstruction_data, freelist_count, 0);
+OFFSET_CHECK(struct rogue_fwif_freelists_reconstruction_data, freelist_ids, 4);
+SIZE_CHECK(struct rogue_fwif_freelists_reconstruction_data, 76);
+
+OFFSET_CHECK(struct rogue_fwif_write_offset_update_data, context_fw_addr, 0);
+SIZE_CHECK(struct rogue_fwif_write_offset_update_data, 8);
+
+OFFSET_CHECK(struct rogue_fwif_kccb_cmd, cmd_type, 0);
+OFFSET_CHECK(struct rogue_fwif_kccb_cmd, kccb_flags, 4);
+OFFSET_CHECK(struct rogue_fwif_kccb_cmd, cmd_data, 8);
+SIZE_CHECK(struct rogue_fwif_kccb_cmd, 88);
+
+OFFSET_CHECK(struct rogue_fwif_fwccb_cmd_context_reset_data, server_common_context_id, 0);
+OFFSET_CHECK(struct rogue_fwif_fwccb_cmd_context_reset_data, reset_reason, 4);
+OFFSET_CHECK(struct rogue_fwif_fwccb_cmd_context_reset_data, dm, 8);
+OFFSET_CHECK(struct rogue_fwif_fwccb_cmd_context_reset_data, reset_job_ref, 12);
+OFFSET_CHECK(struct rogue_fwif_fwccb_cmd_context_reset_data, flags, 16);
+OFFSET_CHECK(struct rogue_fwif_fwccb_cmd_context_reset_data, pc_address, 24);
+OFFSET_CHECK(struct rogue_fwif_fwccb_cmd_context_reset_data, fault_address, 32);
+SIZE_CHECK(struct rogue_fwif_fwccb_cmd_context_reset_data, 40);
+
+OFFSET_CHECK(struct rogue_fwif_fwccb_cmd_fw_pagefault_data, fw_fault_addr, 0);
+SIZE_CHECK(struct rogue_fwif_fwccb_cmd_fw_pagefault_data, 8);
+
+OFFSET_CHECK(struct rogue_fwif_fwccb_cmd, cmd_type, 0);
+OFFSET_CHECK(struct rogue_fwif_fwccb_cmd, fwccb_flags, 4);
+OFFSET_CHECK(struct rogue_fwif_fwccb_cmd, cmd_data, 8);
+SIZE_CHECK(struct rogue_fwif_fwccb_cmd, 88);
+
+OFFSET_CHECK(struct rogue_fwif_ccb_cmd_header, cmd_type, 0);
+OFFSET_CHECK(struct rogue_fwif_ccb_cmd_header, cmd_size, 4);
+OFFSET_CHECK(struct rogue_fwif_ccb_cmd_header, ext_job_ref, 8);
+OFFSET_CHECK(struct rogue_fwif_ccb_cmd_header, int_job_ref, 12);
+OFFSET_CHECK(struct rogue_fwif_ccb_cmd_header, work_est_kick_data, 16);
+SIZE_CHECK(struct rogue_fwif_ccb_cmd_header, 40);
+
+OFFSET_CHECK(struct rogue_fwif_runtime_cfg, active_pm_latency_ms, 0);
+OFFSET_CHECK(struct rogue_fwif_runtime_cfg, runtime_cfg_flags, 4);
+OFFSET_CHECK(struct rogue_fwif_runtime_cfg, active_pm_latency_persistant, 8);
+OFFSET_CHECK(struct rogue_fwif_runtime_cfg, core_clock_speed, 12);
+OFFSET_CHECK(struct rogue_fwif_runtime_cfg, default_dusts_num_init, 16);
+OFFSET_CHECK(struct rogue_fwif_runtime_cfg, phr_mode, 20);
+OFFSET_CHECK(struct rogue_fwif_runtime_cfg, hcs_deadline_ms, 24);
+OFFSET_CHECK(struct rogue_fwif_runtime_cfg, wdg_period_us, 28);
+OFFSET_CHECK(struct rogue_fwif_runtime_cfg, osid_priority, 32);
+OFFSET_CHECK(struct rogue_fwif_runtime_cfg, hwperf_buf_fw_addr, 64);
+OFFSET_CHECK(struct rogue_fwif_runtime_cfg, padding, 68);
+SIZE_CHECK(struct rogue_fwif_runtime_cfg, 72);
+
+OFFSET_CHECK(struct rogue_fwif_connection_ctl, connection_fw_state, 0);
+OFFSET_CHECK(struct rogue_fwif_connection_ctl, connection_os_state, 4);
+OFFSET_CHECK(struct rogue_fwif_connection_ctl, alive_fw_token, 8);
+OFFSET_CHECK(struct rogue_fwif_connection_ctl, alive_os_token, 12);
+SIZE_CHECK(struct rogue_fwif_connection_ctl, 16);
+
+OFFSET_CHECK(struct rogue_fwif_compchecks_bvnc, layout_version, 0);
+OFFSET_CHECK(struct rogue_fwif_compchecks_bvnc, bvnc, 8);
+SIZE_CHECK(struct rogue_fwif_compchecks_bvnc, 16);
+
+OFFSET_CHECK(struct rogue_fwif_init_options, os_count_support, 0);
+SIZE_CHECK(struct rogue_fwif_init_options, 8);
+
+OFFSET_CHECK(struct rogue_fwif_compchecks, hw_bvnc, 0);
+OFFSET_CHECK(struct rogue_fwif_compchecks, fw_bvnc, 16);
+OFFSET_CHECK(struct rogue_fwif_compchecks, fw_processor_version, 32);
+OFFSET_CHECK(struct rogue_fwif_compchecks, ddk_version, 36);
+OFFSET_CHECK(struct rogue_fwif_compchecks, ddk_build, 40);
+OFFSET_CHECK(struct rogue_fwif_compchecks, build_options, 44);
+OFFSET_CHECK(struct rogue_fwif_compchecks, init_options, 48);
+OFFSET_CHECK(struct rogue_fwif_compchecks, updated, 56);
+SIZE_CHECK(struct rogue_fwif_compchecks, 64);
+
+OFFSET_CHECK(struct rogue_fwif_osinit, kernel_ccbctl_fw_addr, 0);
+OFFSET_CHECK(struct rogue_fwif_osinit, kernel_ccb_fw_addr, 4);
+OFFSET_CHECK(struct rogue_fwif_osinit, kernel_ccb_rtn_slots_fw_addr, 8);
+OFFSET_CHECK(struct rogue_fwif_osinit, firmware_ccbctl_fw_addr, 12);
+OFFSET_CHECK(struct rogue_fwif_osinit, firmware_ccb_fw_addr, 16);
+OFFSET_CHECK(struct rogue_fwif_osinit, work_est_firmware_ccbctl_fw_addr, 20);
+OFFSET_CHECK(struct rogue_fwif_osinit, work_est_firmware_ccb_fw_addr, 24);
+OFFSET_CHECK(struct rogue_fwif_osinit, rogue_fwif_hwr_info_buf_ctl_fw_addr, 28);
+OFFSET_CHECK(struct rogue_fwif_osinit, hwr_debug_dump_limit, 32);
+OFFSET_CHECK(struct rogue_fwif_osinit, fw_os_data_fw_addr, 36);
+OFFSET_CHECK(struct rogue_fwif_osinit, rogue_comp_checks, 40);
+SIZE_CHECK(struct rogue_fwif_osinit, 104);
+
+OFFSET_CHECK(struct rogue_fwif_sigbuf_ctl, buffer_fw_addr, 0);
+OFFSET_CHECK(struct rogue_fwif_sigbuf_ctl, left_size_in_regs, 4);
+SIZE_CHECK(struct rogue_fwif_sigbuf_ctl, 8);
+
+OFFSET_CHECK(struct pdvfs_opp, volt, 0);
+OFFSET_CHECK(struct pdvfs_opp, freq, 4);
+SIZE_CHECK(struct pdvfs_opp, 8);
+
+OFFSET_CHECK(struct rogue_fwif_pdvfs_opp, opp_values, 0);
+OFFSET_CHECK(struct rogue_fwif_pdvfs_opp, min_opp_point, 128);
+OFFSET_CHECK(struct rogue_fwif_pdvfs_opp, max_opp_point, 132);
+SIZE_CHECK(struct rogue_fwif_pdvfs_opp, 136);
+
+OFFSET_CHECK(struct rogue_fwif_counter_dump_ctl, buffer_fw_addr, 0);
+OFFSET_CHECK(struct rogue_fwif_counter_dump_ctl, size_in_dwords, 4);
+SIZE_CHECK(struct rogue_fwif_counter_dump_ctl, 8);
+
+OFFSET_CHECK(struct rogue_hwperf_bvnc, bvnc_string, 0);
+OFFSET_CHECK(struct rogue_hwperf_bvnc, bvnc_km_feature_flags, 24);
+OFFSET_CHECK(struct rogue_hwperf_bvnc, num_bvnc_blocks, 28);
+OFFSET_CHECK(struct rogue_hwperf_bvnc, bvnc_gpu_cores, 30);
+OFFSET_CHECK(struct rogue_hwperf_bvnc, bvnc_blocks, 32);
+SIZE_CHECK(struct rogue_hwperf_bvnc, 160);
+
+OFFSET_CHECK(struct rogue_fwif_sysinit, fault_phys_addr, 0);
+OFFSET_CHECK(struct rogue_fwif_sysinit, pds_exec_base, 8);
+OFFSET_CHECK(struct rogue_fwif_sysinit, usc_exec_base, 16);
+OFFSET_CHECK(struct rogue_fwif_sysinit, fbcdc_state_table_base, 24);
+OFFSET_CHECK(struct rogue_fwif_sysinit, fbcdc_large_state_table_base, 32);
+OFFSET_CHECK(struct rogue_fwif_sysinit, texture_heap_base, 40);
+OFFSET_CHECK(struct rogue_fwif_sysinit, hw_perf_filter, 48);
+OFFSET_CHECK(struct rogue_fwif_sysinit, slc3_fence_dev_addr, 56);
+OFFSET_CHECK(struct rogue_fwif_sysinit, tpu_trilinear_frac_mask, 64);
+OFFSET_CHECK(struct rogue_fwif_sysinit, sigbuf_ctl, 80);
+OFFSET_CHECK(struct rogue_fwif_sysinit, pdvfs_opp_info, 152);
+OFFSET_CHECK(struct rogue_fwif_sysinit, coremem_data_store, 288);
+OFFSET_CHECK(struct rogue_fwif_sysinit, counter_dump_ctl, 304);
+OFFSET_CHECK(struct rogue_fwif_sysinit, filter_flags, 312);
+OFFSET_CHECK(struct rogue_fwif_sysinit, runtime_cfg_fw_addr, 316);
+OFFSET_CHECK(struct rogue_fwif_sysinit, trace_buf_ctl_fw_addr, 320);
+OFFSET_CHECK(struct rogue_fwif_sysinit, fw_sys_data_fw_addr, 324);
+OFFSET_CHECK(struct rogue_fwif_sysinit, gpu_util_fw_cb_ctl_fw_addr, 328);
+OFFSET_CHECK(struct rogue_fwif_sysinit, reg_cfg_fw_addr, 332);
+OFFSET_CHECK(struct rogue_fwif_sysinit, hwperf_ctl_fw_addr, 336);
+OFFSET_CHECK(struct rogue_fwif_sysinit, align_checks, 340);
+OFFSET_CHECK(struct rogue_fwif_sysinit, initial_core_clock_speed, 344);
+OFFSET_CHECK(struct rogue_fwif_sysinit, active_pm_latency_ms, 348);
+OFFSET_CHECK(struct rogue_fwif_sysinit, firmware_started, 352);
+OFFSET_CHECK(struct rogue_fwif_sysinit, marker_val, 356);
+OFFSET_CHECK(struct rogue_fwif_sysinit, firmware_started_timestamp, 360);
+OFFSET_CHECK(struct rogue_fwif_sysinit, jones_disable_mask, 364);
+OFFSET_CHECK(struct rogue_fwif_sysinit, firmware_perf, 368);
+OFFSET_CHECK(struct rogue_fwif_sysinit, core_clock_rate_fw_addr, 372);
+OFFSET_CHECK(struct rogue_fwif_sysinit, gpio_validation_mode, 376);
+OFFSET_CHECK(struct rogue_fwif_sysinit, bvnc_km_feature_flags, 380);
+OFFSET_CHECK(struct rogue_fwif_sysinit, tfbc_compression_control, 540);
+SIZE_CHECK(struct rogue_fwif_sysinit, 544);
+
+OFFSET_CHECK(struct rogue_fwif_gpu_util_fwcb, time_corr, 0);
+OFFSET_CHECK(struct rogue_fwif_gpu_util_fwcb, time_corr_seq_count, 10240);
+OFFSET_CHECK(struct rogue_fwif_gpu_util_fwcb, gpu_util_flags, 10244);
+OFFSET_CHECK(struct rogue_fwif_gpu_util_fwcb, last_word, 10248);
+OFFSET_CHECK(struct rogue_fwif_gpu_util_fwcb, stats_counters, 10256);
+SIZE_CHECK(struct rogue_fwif_gpu_util_fwcb, 10280);
+
+OFFSET_CHECK(struct rogue_fwif_rta_ctl, render_target_index, 0);
+OFFSET_CHECK(struct rogue_fwif_rta_ctl, current_render_target, 4);
+OFFSET_CHECK(struct rogue_fwif_rta_ctl, active_render_targets, 8);
+OFFSET_CHECK(struct rogue_fwif_rta_ctl, cumul_active_render_targets, 12);
+OFFSET_CHECK(struct rogue_fwif_rta_ctl, valid_render_targets_fw_addr, 16);
+OFFSET_CHECK(struct rogue_fwif_rta_ctl, rta_num_partial_renders_fw_addr, 20);
+OFFSET_CHECK(struct rogue_fwif_rta_ctl, max_rts, 24);
+OFFSET_CHECK(struct rogue_fwif_rta_ctl, rta_ctl_flags, 28);
+SIZE_CHECK(struct rogue_fwif_rta_ctl, 32);
+
+OFFSET_CHECK(struct rogue_fwif_freelist, freelist_dev_addr, 0);
+OFFSET_CHECK(struct rogue_fwif_freelist, current_dev_addr, 8);
+OFFSET_CHECK(struct rogue_fwif_freelist, current_stack_top, 16);
+OFFSET_CHECK(struct rogue_fwif_freelist, max_pages, 20);
+OFFSET_CHECK(struct rogue_fwif_freelist, grow_pages, 24);
+OFFSET_CHECK(struct rogue_fwif_freelist, current_pages, 28);
+OFFSET_CHECK(struct rogue_fwif_freelist, allocated_page_count, 32);
+OFFSET_CHECK(struct rogue_fwif_freelist, allocated_mmu_page_count, 36);
+OFFSET_CHECK(struct rogue_fwif_freelist, freelist_id, 40);
+OFFSET_CHECK(struct rogue_fwif_freelist, grow_pending, 44);
+OFFSET_CHECK(struct rogue_fwif_freelist, ready_pages, 48);
+OFFSET_CHECK(struct rogue_fwif_freelist, freelist_flags, 52);
+OFFSET_CHECK(struct rogue_fwif_freelist, pm_global_pb, 56);
+SIZE_CHECK(struct rogue_fwif_freelist, 64);
+
+OFFSET_CHECK(struct rogue_fwif_hwrtdata_common, geom_caches_need_zeroing, 0);
+OFFSET_CHECK(struct rogue_fwif_hwrtdata_common, screen_pixel_max, 4);
+OFFSET_CHECK(struct rogue_fwif_hwrtdata_common, multi_sample_ctl, 8);
+OFFSET_CHECK(struct rogue_fwif_hwrtdata_common, flipped_multi_sample_ctl, 16);
+OFFSET_CHECK(struct rogue_fwif_hwrtdata_common, tpc_stride, 24);
+OFFSET_CHECK(struct rogue_fwif_hwrtdata_common, tpc_size, 28);
+OFFSET_CHECK(struct rogue_fwif_hwrtdata_common, te_screen, 32);
+OFFSET_CHECK(struct rogue_fwif_hwrtdata_common, mtile_stride, 36);
+OFFSET_CHECK(struct rogue_fwif_hwrtdata_common, teaa, 40);
+OFFSET_CHECK(struct rogue_fwif_hwrtdata_common, te_mtile1, 44);
+OFFSET_CHECK(struct rogue_fwif_hwrtdata_common, te_mtile2, 48);
+OFFSET_CHECK(struct rogue_fwif_hwrtdata_common, isp_merge_lower_x, 52);
+OFFSET_CHECK(struct rogue_fwif_hwrtdata_common, isp_merge_lower_y, 56);
+OFFSET_CHECK(struct rogue_fwif_hwrtdata_common, isp_merge_upper_x, 60);
+OFFSET_CHECK(struct rogue_fwif_hwrtdata_common, isp_merge_upper_y, 64);
+OFFSET_CHECK(struct rogue_fwif_hwrtdata_common, isp_merge_scale_x, 68);
+OFFSET_CHECK(struct rogue_fwif_hwrtdata_common, isp_merge_scale_y, 72);
+OFFSET_CHECK(struct rogue_fwif_hwrtdata_common, rgn_header_size, 76);
+OFFSET_CHECK(struct rogue_fwif_hwrtdata_common, isp_mtile_size, 80);
+SIZE_CHECK(struct rogue_fwif_hwrtdata_common, 88);
+
+OFFSET_CHECK(struct rogue_fwif_hwrtdata, pm_mlist_dev_addr, 0);
+OFFSET_CHECK(struct rogue_fwif_hwrtdata, vce_cat_base, 8);
+OFFSET_CHECK(struct rogue_fwif_hwrtdata, vce_last_cat_base, 40);
+OFFSET_CHECK(struct rogue_fwif_hwrtdata, te_cat_base, 72);
+OFFSET_CHECK(struct rogue_fwif_hwrtdata, te_last_cat_base, 104);
+OFFSET_CHECK(struct rogue_fwif_hwrtdata, alist_cat_base, 136);
+OFFSET_CHECK(struct rogue_fwif_hwrtdata, alist_last_cat_base, 144);
+OFFSET_CHECK(struct rogue_fwif_hwrtdata, pm_alist_stack_pointer, 152);
+OFFSET_CHECK(struct rogue_fwif_hwrtdata, pm_mlist_stack_pointer, 160);
+OFFSET_CHECK(struct rogue_fwif_hwrtdata, hwrt_data_common_fw_addr, 164);
+OFFSET_CHECK(struct rogue_fwif_hwrtdata, hwrt_data_flags, 168);
+OFFSET_CHECK(struct rogue_fwif_hwrtdata, state, 172);
+OFFSET_CHECK(struct rogue_fwif_hwrtdata, freelists_fw_addr, 176);
+OFFSET_CHECK(struct rogue_fwif_hwrtdata, freelist_hwr_snapshot, 188);
+OFFSET_CHECK(struct rogue_fwif_hwrtdata, vheap_table_dev_addr, 200);
+OFFSET_CHECK(struct rogue_fwif_hwrtdata, rta_ctl, 208);
+OFFSET_CHECK(struct rogue_fwif_hwrtdata, tail_ptrs_dev_addr, 240);
+OFFSET_CHECK(struct rogue_fwif_hwrtdata, macrotile_array_dev_addr, 248);
+OFFSET_CHECK(struct rogue_fwif_hwrtdata, rgn_header_dev_addr, 256);
+OFFSET_CHECK(struct rogue_fwif_hwrtdata, rtc_dev_addr, 264);
+OFFSET_CHECK(struct rogue_fwif_hwrtdata, owner_geom_not_used_by_host, 272);
+OFFSET_CHECK(struct rogue_fwif_hwrtdata, geom_caches_need_zeroing, 276);
+OFFSET_CHECK(struct rogue_fwif_hwrtdata, cleanup_state, 320);
+SIZE_CHECK(struct rogue_fwif_hwrtdata, 384);
+
+OFFSET_CHECK(struct rogue_fwif_sync_checkpoint, state, 0);
+OFFSET_CHECK(struct rogue_fwif_sync_checkpoint, fw_ref_count, 4);
+SIZE_CHECK(struct rogue_fwif_sync_checkpoint, 8);
+
+#endif /* PVR_ROGUE_FWIF_CHECK_H */
diff --git a/drivers/gpu/drm/imagination/pvr_rogue_fwif_client.h b/drivers/gpu/drm/imagination/pvr_rogue_fwif_client.h
new file mode 100644
index 000000000000..abe166e02a73
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_rogue_fwif_client.h
@@ -0,0 +1,371 @@
+/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#ifndef PVR_ROGUE_FWIF_CLIENT_H
+#define PVR_ROGUE_FWIF_CLIENT_H
+
+#include <linux/bits.h>
+#include <linux/kernel.h>
+#include <linux/sizes.h>
+#include <linux/types.h>
+
+#include "pvr_rogue_fwif_shared.h"
+
+/*
+ * Page size used for Parameter Management.
+ */
+#define ROGUE_PM_PAGE_SIZE SZ_4K
+
+/*
+ * Minimum/Maximum PB size.
+ *
+ * Base page size is dependent on core:
+ *   S6/S6XT/S7               = 50 pages
+ *   S8XE                     = 40 pages
+ *   S8XE with BRN66011 fixed = 25 pages
+ *
+ * Minimum PB = Base Pages + (NUM_TE_PIPES-1)*16K + (NUM_VCE_PIPES-1)*64K +
+ *              IF_PM_PREALLOC(NUM_TE_PIPES*16K + NUM_VCE_PIPES*16K)
+ *
+ * Maximum PB size must ensure that no PM address space can be fully used,
+ * because if the full address space were used it would wrap and corrupt itself.
+ * Since there are two freelists (local is always minimum sized) this can be
+ * described as following three conditions being met:
+ *
+ *   (Minimum PB + Maximum PB)  <  ALIST PM address space size (16GB)
+ *   (Minimum PB + Maximum PB)  <  TE PM address space size (16GB) / NUM_TE_PIPES
+ *   (Minimum PB + Maximum PB)  <  VCE PM address space size (16GB) / NUM_VCE_PIPES
+ *
+ * Since the max of NUM_TE_PIPES and NUM_VCE_PIPES is 4, we have a hard limit
+ * of 4GB minus the Minimum PB. For convenience we take the smaller power-of-2
+ * value of 2GB. This is far more than any current applications use.
+ */
+#define ROGUE_PM_MAX_FREELIST_SIZE SZ_2G
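+
+/*
+ * Worked example (illustrative only, assuming an S8XE core with BRN66011
+ * fixed, one TE pipe, one VCE pipe and no PM preallocation):
+ *
+ *	Minimum PB = 25 pages = 25 * ROGUE_PM_PAGE_SIZE = 100KiB
+ *
+ * With the worst case of 4 TE/VCE pipes the conditions above reduce to
+ * (Minimum PB + Maximum PB) < 4GiB, which the 2GiB maximum comfortably
+ * satisfies.
+ */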
+
+/*
+ * Flags supported by the geometry DM command i.e. &struct rogue_fwif_cmd_geom.
+ */
+
+#define ROGUE_GEOM_FLAGS_FIRSTKICK BIT_MASK(0)
+#define ROGUE_GEOM_FLAGS_LASTKICK BIT_MASK(1)
+/* Use single core in a multi core setup. */
+#define ROGUE_GEOM_FLAGS_SINGLE_CORE BIT_MASK(3)
+
+/*
+ * Flags supported by the fragment DM command i.e. &struct rogue_fwif_cmd_frag.
+ */
+
+/* Use single core in a multi core setup. */
+#define ROGUE_FRAG_FLAGS_SINGLE_CORE BIT_MASK(3)
+/* Indicates whether this render produces visibility results. */
+#define ROGUE_FRAG_FLAGS_GET_VIS_RESULTS BIT_MASK(5)
+/* Indicates whether a depth buffer is present. */
+#define ROGUE_FRAG_FLAGS_DEPTHBUFFER BIT_MASK(7)
+/* Indicates whether a stencil buffer is present. */
+#define ROGUE_FRAG_FLAGS_STENCILBUFFER BIT_MASK(8)
+/* Indicates whether a scratch buffer is present. */
+#define ROGUE_FRAG_FLAGS_SCRATCHBUFFER BIT_MASK(19)
+/* Disallow compute overlapped with this render. */
+#define ROGUE_FRAG_FLAGS_PREVENT_CDM_OVERLAP BIT_MASK(26)
+
+/*
+ * Flags supported by the compute DM command i.e. &struct rogue_fwif_cmd_compute.
+ */
+
+#define ROGUE_COMPUTE_FLAG_PREVENT_ALL_OVERLAP BIT_MASK(2)
+/* Use single core in a multi core setup. */
+#define ROGUE_COMPUTE_FLAG_SINGLE_CORE BIT_MASK(5)
+
+/*
+ * Flags supported by the transfer DM command i.e. &struct rogue_fwif_cmd_transfer.
+ */
+
+/* Use single core in a multi core setup. */
+#define ROGUE_TRANSFER_FLAGS_SINGLE_CORE BIT_MASK(1)
+
+/*
+ ************************************************
+ * Parameter/HWRTData control structures.
+ ************************************************
+ */
+
+/*
+ * Configuration registers which need to be loaded by the firmware before a geometry
+ * job can be started.
+ */
+struct rogue_fwif_geom_regs {
+	u64 vdm_ctrl_stream_base;
+	u64 tpu_border_colour_table;
+
+	/* Only used when feature VDM_DRAWINDIRECT present. */
+	u64 vdm_draw_indirect0;
+	/* Only used when feature VDM_DRAWINDIRECT present. */
+	u32 vdm_draw_indirect1;
+
+	u32 ppp_ctrl;
+	u32 te_psg;
+	/* Only used when BRN 49927 present. */
+	u32 tpu;
+
+	u32 vdm_context_resume_task0_size;
+	/* Only used when feature VDM_OBJECT_LEVEL_LLS present. */
+	u32 vdm_context_resume_task3_size;
+
+	/* Only used when BRN 56279 or BRN 67381 present. */
+	u32 pds_ctrl;
+
+	u32 view_idx;
+
+	/* Only used when feature TESSELLATION present */
+	u32 pds_coeff_free_prog;
+
+	u32 padding;
+};
+
+/* Only used when BRN 44455 or BRN 63027 present. */
+struct rogue_fwif_dummy_rgnhdr_init_geom_regs {
+	u64 te_psgregion_addr;
+};
+
+/*
+ * Represents a geometry command that can be used to tile a whole scene's objects as
+ * per TA behavior.
+ */
+struct rogue_fwif_cmd_geom {
+	/*
+	 * rogue_fwif_cmd_geom_frag_shared field must always be at the beginning of the
+	 * struct.
+	 *
+	 * The command struct (rogue_fwif_cmd_geom) is shared between Client and
+	 * Firmware. The Kernel is unable to perform read/write operations on
+	 * the command struct; the SHARED region is the only exception to this
+	 * rule. This region must be the first member so that the Kernel can
+	 * easily access it.
+	 * For more info, see rogue_fwif_cmd_geom_frag_shared definition.
+	 */
+	struct rogue_fwif_cmd_geom_frag_shared cmd_shared;
+
+	struct rogue_fwif_geom_regs regs __aligned(8);
+	u32 flags __aligned(8);
+
+	/*
+	 * Holds the geometry/fragment fence value to allow the fragment partial render command
+	 * to go through.
+	 */
+	struct rogue_fwif_ufo partial_render_geom_frag_fence;
+
+	/* Only used when BRN 44455 or BRN 63027 present. */
+	struct rogue_fwif_dummy_rgnhdr_init_geom_regs dummy_rgnhdr_init_geom_regs __aligned(8);
+
+	/* Only used when BRN 61484 or BRN 66333 present. */
+	u32 brn61484_66333_live_rt;
+
+	u32 padding;
+};
+
+/*
+ * Configuration registers which need to be loaded by the firmware before ISP
+ * can be started.
+ */
+struct rogue_fwif_frag_regs {
+	u32 usc_pixel_output_ctrl;
+
+#define ROGUE_MAXIMUM_OUTPUT_REGISTERS_PER_PIXEL 8U
+	u32 usc_clear_register[ROGUE_MAXIMUM_OUTPUT_REGISTERS_PER_PIXEL];
+
+	u32 isp_bgobjdepth;
+	u32 isp_bgobjvals;
+	u32 isp_aa;
+	/* Only used when feature S7_TOP_INFRASTRUCTURE present. */
+	u32 isp_xtp_pipe_enable;
+
+	u32 isp_ctl;
+
+	/* Only used when BRN 49927 present. */
+	u32 tpu;
+
+	u32 event_pixel_pds_info;
+
+	/* Only used when feature CLUSTER_GROUPING present. */
+	u32 pixel_phantom;
+
+	u32 view_idx;
+
+	u32 event_pixel_pds_data;
+
+	/* Only used when BRN 65101 present. */
+	u32 brn65101_event_pixel_pds_data;
+
+	/* Only used when feature GPU_MULTICORE_SUPPORT or BRN 47217 present. */
+	u32 isp_oclqry_stride;
+
+	/* Only used when feature ZLS_SUBTILE present. */
+	u32 isp_zls_pixels;
+
+	/* Only used when feature ISP_ZLS_D24_S8_PACKING_OGL_MODE present. */
+	u32 rgx_cr_blackpearl_fix;
+
+	/* All values below the aligned_u64 must be 64 bit. */
+	aligned_u64 isp_scissor_base;
+	u64 isp_dbias_base;
+	u64 isp_oclqry_base;
+	u64 isp_zlsctl;
+	u64 isp_zload_store_base;
+	u64 isp_stencil_load_store_base;
+
+	/*
+	 * Only used when feature FBCDC_ALGORITHM present and value < 3 or feature
+	 * FB_CDC_V4 present. Additionally, BRNs 48754, 60227, 72310 and 72311 must
+	 * not be present.
+	 */
+	u64 fb_cdc_zls;
+
+#define ROGUE_PBE_WORDS_REQUIRED_FOR_RENDERS 3U
+	u64 pbe_word[8U][ROGUE_PBE_WORDS_REQUIRED_FOR_RENDERS];
+	u64 tpu_border_colour_table;
+	u64 pds_bgnd[3U];
+
+	/* Only used when BRN 65101 present. */
+	u64 pds_bgnd_brn65101[3U];
+
+	u64 pds_pr_bgnd[3U];
+
+	/* Only used when BRN 62850 or 62865 present. */
+	u64 isp_dummy_stencil_store_base;
+
+	/* Only used when BRN 66193 present. */
+	u64 isp_dummy_depth_store_base;
+
+	/* Only used when BRN 67182 present. */
+	u32 rgnhdr_single_rt_size;
+	/* Only used when BRN 67182 present. */
+	u32 rgnhdr_scratch_offset;
+};
+
+struct rogue_fwif_cmd_frag {
+	struct rogue_fwif_cmd_geom_frag_shared cmd_shared __aligned(8);
+
+	struct rogue_fwif_frag_regs regs __aligned(8);
+	/* command control flags. */
+	u32 flags;
+	/* Stride IN BYTES for Z-Buffer in case of RTAs. */
+	u32 zls_stride;
+	/* Stride IN BYTES for S-Buffer in case of RTAs. */
+	u32 sls_stride;
+
+	/* Only used if feature GPU_MULTICORE_SUPPORT present. */
+	u32 execute_count;
+};
+
+/*
+ * Configuration registers which need to be loaded by the firmware before CDM
+ * can be started.
+ */
+struct rogue_fwif_compute_regs {
+	u64 tpu_border_colour_table;
+
+	/* Only used when feature CDM_USER_MODE_QUEUE present. */
+	u64 cdm_cb_queue;
+
+	/* Only used when feature CDM_USER_MODE_QUEUE present. */
+	u64 cdm_cb_base;
+	/* Only used when feature CDM_USER_MODE_QUEUE present. */
+	u64 cdm_cb;
+
+	/* Only used when feature CDM_USER_MODE_QUEUE is not present. */
+	u64 cdm_ctrl_stream_base;
+
+	u64 cdm_context_state_base_addr;
+
+	/* Only used when BRN 49927 is present. */
+	u32 tpu;
+	u32 cdm_resume_pds1;
+
+	/* Only used when feature COMPUTE_MORTON_CAPABLE present. */
+	u32 cdm_item;
+
+	/* Only used when feature CLUSTER_GROUPING present. */
+	u32 compute_cluster;
+
+	/* Only used when feature TPU_DM_GLOBAL_REGISTERS present. */
+	u32 tpu_tag_cdm_ctrl;
+
+	u32 padding;
+};
+
+struct rogue_fwif_cmd_compute {
+	/* Common command attributes */
+	struct rogue_fwif_cmd_common common __aligned(8);
+
+	/* CDM registers */
+	struct rogue_fwif_compute_regs regs;
+
+	/* Control flags */
+	u32 flags __aligned(8);
+
+	/* Only used when feature UNIFIED_STORE_VIRTUAL_PARTITIONING present. */
+	u32 num_temp_regions;
+
+	/* Only used when feature CDM_USER_MODE_QUEUE present. */
+	u32 stream_start_offset;
+
+	/* Only used when feature GPU_MULTICORE_SUPPORT present. */
+	u32 execute_count;
+};
+
+struct rogue_fwif_transfer_regs {
+	/*
+	 * All 32 bit values should be added in the top section. This then
+	 * requires only a single aligned_u64 to align all the 64 bit values in
+	 * the second section.
+	 */
+	u32 isp_bgobjvals;
+
+	u32 usc_pixel_output_ctrl;
+	u32 usc_clear_register0;
+	u32 usc_clear_register1;
+	u32 usc_clear_register2;
+	u32 usc_clear_register3;
+
+	u32 isp_mtile_size;
+	u32 isp_render_origin;
+	u32 isp_ctl;
+
+	/* Only used when feature S7_TOP_INFRASTRUCTURE present. */
+	u32 isp_xtp_pipe_enable;
+	u32 isp_aa;
+
+	u32 event_pixel_pds_info;
+
+	u32 event_pixel_pds_code;
+	u32 event_pixel_pds_data;
+
+	u32 isp_render;
+	u32 isp_rgn;
+
+	/* Only used when feature GPU_MULTICORE_SUPPORT present. */
+	u32 frag_screen;
+
+	/* All values below the aligned_u64 must be 64 bit. */
+	aligned_u64 pds_bgnd0_base;
+	u64 pds_bgnd1_base;
+	u64 pds_bgnd3_sizeinfo;
+
+	u64 isp_mtile_base;
+#define ROGUE_PBE_WORDS_REQUIRED_FOR_TQS 3
+	/* TQ_MAX_RENDER_TARGETS * PBE_STATE_SIZE */
+	u64 pbe_wordx_mrty[3U * ROGUE_PBE_WORDS_REQUIRED_FOR_TQS];
+};
+
+struct rogue_fwif_cmd_transfer {
+	/* Common command attributes */
+	struct rogue_fwif_cmd_common common __aligned(8);
+
+	struct rogue_fwif_transfer_regs regs __aligned(8);
+
+	u32 flags;
+
+	u32 padding;
+};
+
+#include "pvr_rogue_fwif_client_check.h"
+
+#endif /* PVR_ROGUE_FWIF_CLIENT_H */
diff --git a/drivers/gpu/drm/imagination/pvr_rogue_fwif_client_check.h b/drivers/gpu/drm/imagination/pvr_rogue_fwif_client_check.h
new file mode 100644
index 000000000000..9fd1c5b826a5
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_rogue_fwif_client_check.h
@@ -0,0 +1,133 @@
+/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#ifndef PVR_ROGUE_FWIF_CLIENT_CHECK_H
+#define PVR_ROGUE_FWIF_CLIENT_CHECK_H
+
+#include <linux/build_bug.h>
+
+#define OFFSET_CHECK(type, member, offset) \
+	static_assert(offsetof(type, member) == (offset), \
+		      "offsetof(" #type ", " #member ") incorrect")
+
+#define SIZE_CHECK(type, size) \
+	static_assert(sizeof(type) == (size), #type " is incorrect size")
+
+OFFSET_CHECK(struct rogue_fwif_geom_regs, vdm_ctrl_stream_base, 0);
+OFFSET_CHECK(struct rogue_fwif_geom_regs, tpu_border_colour_table, 8);
+OFFSET_CHECK(struct rogue_fwif_geom_regs, vdm_draw_indirect0, 16);
+OFFSET_CHECK(struct rogue_fwif_geom_regs, vdm_draw_indirect1, 24);
+OFFSET_CHECK(struct rogue_fwif_geom_regs, ppp_ctrl, 28);
+OFFSET_CHECK(struct rogue_fwif_geom_regs, te_psg, 32);
+OFFSET_CHECK(struct rogue_fwif_geom_regs, tpu, 36);
+OFFSET_CHECK(struct rogue_fwif_geom_regs, vdm_context_resume_task0_size, 40);
+OFFSET_CHECK(struct rogue_fwif_geom_regs, vdm_context_resume_task3_size, 44);
+OFFSET_CHECK(struct rogue_fwif_geom_regs, pds_ctrl, 48);
+OFFSET_CHECK(struct rogue_fwif_geom_regs, view_idx, 52);
+OFFSET_CHECK(struct rogue_fwif_geom_regs, pds_coeff_free_prog, 56);
+SIZE_CHECK(struct rogue_fwif_geom_regs, 64);
+
+OFFSET_CHECK(struct rogue_fwif_dummy_rgnhdr_init_geom_regs, te_psgregion_addr, 0);
+SIZE_CHECK(struct rogue_fwif_dummy_rgnhdr_init_geom_regs, 8);
+
+OFFSET_CHECK(struct rogue_fwif_cmd_geom, cmd_shared, 0);
+OFFSET_CHECK(struct rogue_fwif_cmd_geom, regs, 16);
+OFFSET_CHECK(struct rogue_fwif_cmd_geom, flags, 80);
+OFFSET_CHECK(struct rogue_fwif_cmd_geom, partial_render_geom_frag_fence, 84);
+OFFSET_CHECK(struct rogue_fwif_cmd_geom, dummy_rgnhdr_init_geom_regs, 96);
+OFFSET_CHECK(struct rogue_fwif_cmd_geom, brn61484_66333_live_rt, 104);
+SIZE_CHECK(struct rogue_fwif_cmd_geom, 112);
+
+OFFSET_CHECK(struct rogue_fwif_frag_regs, usc_pixel_output_ctrl, 0);
+OFFSET_CHECK(struct rogue_fwif_frag_regs, usc_clear_register, 4);
+OFFSET_CHECK(struct rogue_fwif_frag_regs, isp_bgobjdepth, 36);
+OFFSET_CHECK(struct rogue_fwif_frag_regs, isp_bgobjvals, 40);
+OFFSET_CHECK(struct rogue_fwif_frag_regs, isp_aa, 44);
+OFFSET_CHECK(struct rogue_fwif_frag_regs, isp_xtp_pipe_enable, 48);
+OFFSET_CHECK(struct rogue_fwif_frag_regs, isp_ctl, 52);
+OFFSET_CHECK(struct rogue_fwif_frag_regs, tpu, 56);
+OFFSET_CHECK(struct rogue_fwif_frag_regs, event_pixel_pds_info, 60);
+OFFSET_CHECK(struct rogue_fwif_frag_regs, pixel_phantom, 64);
+OFFSET_CHECK(struct rogue_fwif_frag_regs, view_idx, 68);
+OFFSET_CHECK(struct rogue_fwif_frag_regs, event_pixel_pds_data, 72);
+OFFSET_CHECK(struct rogue_fwif_frag_regs, brn65101_event_pixel_pds_data, 76);
+OFFSET_CHECK(struct rogue_fwif_frag_regs, isp_oclqry_stride, 80);
+OFFSET_CHECK(struct rogue_fwif_frag_regs, isp_zls_pixels, 84);
+OFFSET_CHECK(struct rogue_fwif_frag_regs, rgx_cr_blackpearl_fix, 88);
+OFFSET_CHECK(struct rogue_fwif_frag_regs, isp_scissor_base, 96);
+OFFSET_CHECK(struct rogue_fwif_frag_regs, isp_dbias_base, 104);
+OFFSET_CHECK(struct rogue_fwif_frag_regs, isp_oclqry_base, 112);
+OFFSET_CHECK(struct rogue_fwif_frag_regs, isp_zlsctl, 120);
+OFFSET_CHECK(struct rogue_fwif_frag_regs, isp_zload_store_base, 128);
+OFFSET_CHECK(struct rogue_fwif_frag_regs, isp_stencil_load_store_base, 136);
+OFFSET_CHECK(struct rogue_fwif_frag_regs, fb_cdc_zls, 144);
+OFFSET_CHECK(struct rogue_fwif_frag_regs, pbe_word, 152);
+OFFSET_CHECK(struct rogue_fwif_frag_regs, tpu_border_colour_table, 344);
+OFFSET_CHECK(struct rogue_fwif_frag_regs, pds_bgnd, 352);
+OFFSET_CHECK(struct rogue_fwif_frag_regs, pds_bgnd_brn65101, 376);
+OFFSET_CHECK(struct rogue_fwif_frag_regs, pds_pr_bgnd, 400);
+OFFSET_CHECK(struct rogue_fwif_frag_regs, isp_dummy_stencil_store_base, 424);
+OFFSET_CHECK(struct rogue_fwif_frag_regs, isp_dummy_depth_store_base, 432);
+OFFSET_CHECK(struct rogue_fwif_frag_regs, rgnhdr_single_rt_size, 440);
+OFFSET_CHECK(struct rogue_fwif_frag_regs, rgnhdr_scratch_offset, 444);
+SIZE_CHECK(struct rogue_fwif_frag_regs, 448);
+
+OFFSET_CHECK(struct rogue_fwif_cmd_frag, cmd_shared, 0);
+OFFSET_CHECK(struct rogue_fwif_cmd_frag, regs, 16);
+OFFSET_CHECK(struct rogue_fwif_cmd_frag, flags, 464);
+OFFSET_CHECK(struct rogue_fwif_cmd_frag, zls_stride, 468);
+OFFSET_CHECK(struct rogue_fwif_cmd_frag, sls_stride, 472);
+OFFSET_CHECK(struct rogue_fwif_cmd_frag, execute_count, 476);
+SIZE_CHECK(struct rogue_fwif_cmd_frag, 480);
+
+OFFSET_CHECK(struct rogue_fwif_compute_regs, tpu_border_colour_table, 0);
+OFFSET_CHECK(struct rogue_fwif_compute_regs, cdm_cb_queue, 8);
+OFFSET_CHECK(struct rogue_fwif_compute_regs, cdm_cb_base, 16);
+OFFSET_CHECK(struct rogue_fwif_compute_regs, cdm_cb, 24);
+OFFSET_CHECK(struct rogue_fwif_compute_regs, cdm_ctrl_stream_base, 32);
+OFFSET_CHECK(struct rogue_fwif_compute_regs, cdm_context_state_base_addr, 40);
+OFFSET_CHECK(struct rogue_fwif_compute_regs, tpu, 48);
+OFFSET_CHECK(struct rogue_fwif_compute_regs, cdm_resume_pds1, 52);
+OFFSET_CHECK(struct rogue_fwif_compute_regs, cdm_item, 56);
+OFFSET_CHECK(struct rogue_fwif_compute_regs, compute_cluster, 60);
+OFFSET_CHECK(struct rogue_fwif_compute_regs, tpu_tag_cdm_ctrl, 64);
+SIZE_CHECK(struct rogue_fwif_compute_regs, 72);
+
+OFFSET_CHECK(struct rogue_fwif_cmd_compute, common, 0);
+OFFSET_CHECK(struct rogue_fwif_cmd_compute, regs, 8);
+OFFSET_CHECK(struct rogue_fwif_cmd_compute, flags, 80);
+OFFSET_CHECK(struct rogue_fwif_cmd_compute, num_temp_regions, 84);
+OFFSET_CHECK(struct rogue_fwif_cmd_compute, stream_start_offset, 88);
+OFFSET_CHECK(struct rogue_fwif_cmd_compute, execute_count, 92);
+SIZE_CHECK(struct rogue_fwif_cmd_compute, 96);
+
+OFFSET_CHECK(struct rogue_fwif_transfer_regs, isp_bgobjvals, 0);
+OFFSET_CHECK(struct rogue_fwif_transfer_regs, usc_pixel_output_ctrl, 4);
+OFFSET_CHECK(struct rogue_fwif_transfer_regs, usc_clear_register0, 8);
+OFFSET_CHECK(struct rogue_fwif_transfer_regs, usc_clear_register1, 12);
+OFFSET_CHECK(struct rogue_fwif_transfer_regs, usc_clear_register2, 16);
+OFFSET_CHECK(struct rogue_fwif_transfer_regs, usc_clear_register3, 20);
+OFFSET_CHECK(struct rogue_fwif_transfer_regs, isp_mtile_size, 24);
+OFFSET_CHECK(struct rogue_fwif_transfer_regs, isp_render_origin, 28);
+OFFSET_CHECK(struct rogue_fwif_transfer_regs, isp_ctl, 32);
+OFFSET_CHECK(struct rogue_fwif_transfer_regs, isp_xtp_pipe_enable, 36);
+OFFSET_CHECK(struct rogue_fwif_transfer_regs, isp_aa, 40);
+OFFSET_CHECK(struct rogue_fwif_transfer_regs, event_pixel_pds_info, 44);
+OFFSET_CHECK(struct rogue_fwif_transfer_regs, event_pixel_pds_code, 48);
+OFFSET_CHECK(struct rogue_fwif_transfer_regs, event_pixel_pds_data, 52);
+OFFSET_CHECK(struct rogue_fwif_transfer_regs, isp_render, 56);
+OFFSET_CHECK(struct rogue_fwif_transfer_regs, isp_rgn, 60);
+OFFSET_CHECK(struct rogue_fwif_transfer_regs, frag_screen, 64);
+OFFSET_CHECK(struct rogue_fwif_transfer_regs, pds_bgnd0_base, 72);
+OFFSET_CHECK(struct rogue_fwif_transfer_regs, pds_bgnd1_base, 80);
+OFFSET_CHECK(struct rogue_fwif_transfer_regs, pds_bgnd3_sizeinfo, 88);
+OFFSET_CHECK(struct rogue_fwif_transfer_regs, isp_mtile_base, 96);
+OFFSET_CHECK(struct rogue_fwif_transfer_regs, pbe_wordx_mrty, 104);
+SIZE_CHECK(struct rogue_fwif_transfer_regs, 176);
+
+OFFSET_CHECK(struct rogue_fwif_cmd_transfer, common, 0);
+OFFSET_CHECK(struct rogue_fwif_cmd_transfer, regs, 8);
+OFFSET_CHECK(struct rogue_fwif_cmd_transfer, flags, 184);
+SIZE_CHECK(struct rogue_fwif_cmd_transfer, 192);
+
+#endif /* PVR_ROGUE_FWIF_CLIENT_CHECK_H */
diff --git a/drivers/gpu/drm/imagination/pvr_rogue_fwif_common.h b/drivers/gpu/drm/imagination/pvr_rogue_fwif_common.h
new file mode 100644
index 000000000000..18b69500ba96
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_rogue_fwif_common.h
@@ -0,0 +1,60 @@
+/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#ifndef PVR_ROGUE_FWIF_COMMON_H
+#define PVR_ROGUE_FWIF_COMMON_H
+
+#include <linux/build_bug.h>
+
+/*
+ * Mask of LSBs that must be zero in data structure sizes and offsets, to
+ * ensure 8-byte granularity for types shared between the FW and the host
+ * driver.
+ */
+#define PVR_FW_ALIGNMENT_LSB 7U
+
+/* Macro to test structure size alignment. */
+#define PVR_FW_STRUCT_SIZE_ASSERT(_a)                            \
+	static_assert((sizeof(_a) & PVR_FW_ALIGNMENT_LSB) == 0U, \
+		      "Size of " #_a " is not properly aligned")
+
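
/*
 * Editor's note: a minimal, hypothetical illustration of how the size assert
 * above is meant to be used on a structure shared with the firmware. The
 * struct below is invented for this sketch (u64/u32 and __aligned() assumed
 * from the usual kernel headers) and is not part of this patch.
 */
struct example_fwif_shared {
	u64 base_addr;
	u32 flags;
	u32 reserved;
} __aligned(8);

/* Passes: sizeof() is 16, and (16 & PVR_FW_ALIGNMENT_LSB) == 0. */
PVR_FW_STRUCT_SIZE_ASSERT(struct example_fwif_shared);
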
+/* The master definition for data masters known to the firmware. */
+
+#define PVR_FWIF_DM_GP (0)
+/* Either TDM or 2D DM is present. */
+/* When the 'tla' feature is present in the hw (as per @pvr_device_features). */
+#define PVR_FWIF_DM_2D (1)
+/*
+ * When the 'fastrender_dm' feature is present in the hw (as per
+ * @pvr_device_features).
+ */
+#define PVR_FWIF_DM_TDM (1)
+
+#define PVR_FWIF_DM_GEOM (2)
+#define PVR_FWIF_DM_FRAG (3)
+#define PVR_FWIF_DM_CDM (4)
+#define PVR_FWIF_DM_RAY (5)
+#define PVR_FWIF_DM_GEOM2 (6)
+#define PVR_FWIF_DM_GEOM3 (7)
+#define PVR_FWIF_DM_GEOM4 (8)
+
+#define PVR_FWIF_DM_LAST PVR_FWIF_DM_GEOM4
+
+/* Maximum number of DMs in use: GP, 2D/TDM, GEOM, 3D, CDM, RAY, GEOM2, GEOM3, GEOM4 */
+#define PVR_FWIF_DM_MAX (PVR_FWIF_DM_LAST + 1U)
+
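
/*
 * Editor's note: illustrative sketch only, not part of this patch. It shows
 * the intended relationship between the PVR_FWIF_DM_* indices and
 * PVR_FWIF_DM_MAX when sizing a per-DM array (u32 assumed from
 * <linux/types.h>); any DM value not exceeding PVR_FWIF_DM_LAST is a valid
 * index.
 */
static u32 example_kicks_per_dm[PVR_FWIF_DM_MAX];

static inline void example_count_kick(u32 dm)
{
	if (dm <= PVR_FWIF_DM_LAST)
		example_kicks_per_dm[dm]++;
}
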
+/* GPU Utilisation states */
+#define PVR_FWIF_GPU_UTIL_STATE_IDLE 0U
+#define PVR_FWIF_GPU_UTIL_STATE_ACTIVE 1U
+#define PVR_FWIF_GPU_UTIL_STATE_BLOCKED 2U
+#define PVR_FWIF_GPU_UTIL_STATE_NUM 3U
+#define PVR_FWIF_GPU_UTIL_STATE_MASK 0x3ULL
+
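
/*
 * Editor's note: illustrative only. One plausible way the 2-bit state field
 * could be extracted from a 64-bit utilisation word using the mask above; the
 * packing assumed here is not taken from this patch (u64 assumed from
 * <linux/types.h>).
 */
static inline u64 example_gpu_util_state(u64 util_word)
{
	/* The state occupies the two least significant bits in this sketch. */
	return util_word & PVR_FWIF_GPU_UTIL_STATE_MASK;
}
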
+/*
+ * Maximum number of register writes that can be done by the register
+ * programmer (FW or META DMA). This is not a HW limitation; it is only
+ * a protection against malformed inputs to the register programmer.
+ */
+#define PVR_MAX_NUM_REGISTER_PROGRAMMER_WRITES 128U
+
+#endif /* PVR_ROGUE_FWIF_COMMON_H */
diff --git a/drivers/gpu/drm/imagination/pvr_rogue_fwif_dev_info.h b/drivers/gpu/drm/imagination/pvr_rogue_fwif_dev_info.h
new file mode 100644
index 000000000000..53e6b9d9de3e
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_rogue_fwif_dev_info.h
@@ -0,0 +1,112 @@
+/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#ifndef __PVR_ROGUE_FWIF_DEV_INFO_H__
+#define __PVR_ROGUE_FWIF_DEV_INFO_H__
+
+enum {
+	PVR_FW_HAS_BRN_44079 = 0,
+	PVR_FW_HAS_BRN_47217,
+	PVR_FW_HAS_BRN_48492,
+	PVR_FW_HAS_BRN_48545,
+	PVR_FW_HAS_BRN_49927,
+	PVR_FW_HAS_BRN_50767,
+	PVR_FW_HAS_BRN_51764,
+	PVR_FW_HAS_BRN_62269,
+	PVR_FW_HAS_BRN_63142,
+	PVR_FW_HAS_BRN_63553,
+	PVR_FW_HAS_BRN_66011,
+
+	PVR_FW_HAS_BRN_MAX
+};
+
+enum {
+	PVR_FW_HAS_ERN_35421 = 0,
+	PVR_FW_HAS_ERN_38020,
+	PVR_FW_HAS_ERN_38748,
+	PVR_FW_HAS_ERN_42064,
+	PVR_FW_HAS_ERN_42290,
+	PVR_FW_HAS_ERN_42606,
+	PVR_FW_HAS_ERN_47025,
+	PVR_FW_HAS_ERN_57596,
+
+	PVR_FW_HAS_ERN_MAX
+};
+
+enum {
+	PVR_FW_HAS_FEATURE_AXI_ACELITE = 0,
+	PVR_FW_HAS_FEATURE_CDM_CONTROL_STREAM_FORMAT,
+	PVR_FW_HAS_FEATURE_CLUSTER_GROUPING,
+	PVR_FW_HAS_FEATURE_COMMON_STORE_SIZE_IN_DWORDS,
+	PVR_FW_HAS_FEATURE_COMPUTE,
+	PVR_FW_HAS_FEATURE_COMPUTE_MORTON_CAPABLE,
+	PVR_FW_HAS_FEATURE_COMPUTE_OVERLAP,
+	PVR_FW_HAS_FEATURE_COREID_PER_OS,
+	PVR_FW_HAS_FEATURE_DYNAMIC_DUST_POWER,
+	PVR_FW_HAS_FEATURE_ECC_RAMS,
+	PVR_FW_HAS_FEATURE_FBCDC,
+	PVR_FW_HAS_FEATURE_FBCDC_ALGORITHM,
+	PVR_FW_HAS_FEATURE_FBCDC_ARCHITECTURE,
+	PVR_FW_HAS_FEATURE_FBC_MAX_DEFAULT_DESCRIPTORS,
+	PVR_FW_HAS_FEATURE_FBC_MAX_LARGE_DESCRIPTORS,
+	PVR_FW_HAS_FEATURE_FB_CDC_V4,
+	PVR_FW_HAS_FEATURE_GPU_MULTICORE_SUPPORT,
+	PVR_FW_HAS_FEATURE_GPU_VIRTUALISATION,
+	PVR_FW_HAS_FEATURE_GS_RTA_SUPPORT,
+	PVR_FW_HAS_FEATURE_IRQ_PER_OS,
+	PVR_FW_HAS_FEATURE_ISP_MAX_TILES_IN_FLIGHT,
+	PVR_FW_HAS_FEATURE_ISP_SAMPLES_PER_PIXEL,
+	PVR_FW_HAS_FEATURE_ISP_ZLS_D24_S8_PACKING_OGL_MODE,
+	PVR_FW_HAS_FEATURE_LAYOUT_MARS,
+	PVR_FW_HAS_FEATURE_MAX_PARTITIONS,
+	PVR_FW_HAS_FEATURE_META,
+	PVR_FW_HAS_FEATURE_META_COREMEM_SIZE,
+	PVR_FW_HAS_FEATURE_MIPS,
+	PVR_FW_HAS_FEATURE_NUM_CLUSTERS,
+	PVR_FW_HAS_FEATURE_NUM_ISP_IPP_PIPES,
+	PVR_FW_HAS_FEATURE_NUM_OSIDS,
+	PVR_FW_HAS_FEATURE_NUM_RASTER_PIPES,
+	PVR_FW_HAS_FEATURE_PBE2_IN_XE,
+	PVR_FW_HAS_FEATURE_PBVNC_COREID_REG,
+	PVR_FW_HAS_FEATURE_PERFBUS,
+	PVR_FW_HAS_FEATURE_PERF_COUNTER_BATCH,
+	PVR_FW_HAS_FEATURE_PHYS_BUS_WIDTH,
+	PVR_FW_HAS_FEATURE_RISCV_FW_PROCESSOR,
+	PVR_FW_HAS_FEATURE_ROGUEXE,
+	PVR_FW_HAS_FEATURE_S7_TOP_INFRASTRUCTURE,
+	PVR_FW_HAS_FEATURE_SIMPLE_INTERNAL_PARAMETER_FORMAT,
+	PVR_FW_HAS_FEATURE_SIMPLE_INTERNAL_PARAMETER_FORMAT_V2,
+	PVR_FW_HAS_FEATURE_SIMPLE_PARAMETER_FORMAT_VERSION,
+	PVR_FW_HAS_FEATURE_SLC_BANKS,
+	PVR_FW_HAS_FEATURE_SLC_CACHE_LINE_SIZE_BITS,
+	PVR_FW_HAS_FEATURE_SLC_SIZE_CONFIGURABLE,
+	PVR_FW_HAS_FEATURE_SLC_SIZE_IN_KILOBYTES,
+	PVR_FW_HAS_FEATURE_SOC_TIMER,
+	PVR_FW_HAS_FEATURE_SYS_BUS_SECURE_RESET,
+	PVR_FW_HAS_FEATURE_TESSELLATION,
+	PVR_FW_HAS_FEATURE_TILE_REGION_PROTECTION,
+	PVR_FW_HAS_FEATURE_TILE_SIZE_X,
+	PVR_FW_HAS_FEATURE_TILE_SIZE_Y,
+	PVR_FW_HAS_FEATURE_TLA,
+	PVR_FW_HAS_FEATURE_TPU_CEM_DATAMASTER_GLOBAL_REGISTERS,
+	PVR_FW_HAS_FEATURE_TPU_DM_GLOBAL_REGISTERS,
+	PVR_FW_HAS_FEATURE_TPU_FILTERING_MODE_CONTROL,
+	PVR_FW_HAS_FEATURE_USC_MIN_OUTPUT_REGISTERS_PER_PIX,
+	PVR_FW_HAS_FEATURE_VDM_DRAWINDIRECT,
+	PVR_FW_HAS_FEATURE_VDM_OBJECT_LEVEL_LLS,
+	PVR_FW_HAS_FEATURE_VIRTUAL_ADDRESS_SPACE_BITS,
+	PVR_FW_HAS_FEATURE_WATCHDOG_TIMER,
+	PVR_FW_HAS_FEATURE_WORKGROUP_PROTECTION,
+	PVR_FW_HAS_FEATURE_XE_ARCHITECTURE,
+	PVR_FW_HAS_FEATURE_XE_MEMORY_HIERARCHY,
+	PVR_FW_HAS_FEATURE_XE_TPU2,
+	PVR_FW_HAS_FEATURE_XPU_MAX_REGBANKS_ADDR_WIDTH,
+	PVR_FW_HAS_FEATURE_XPU_MAX_SLAVES,
+	PVR_FW_HAS_FEATURE_XPU_REGISTER_BROADCAST,
+	PVR_FW_HAS_FEATURE_XT_TOP_INFRASTRUCTURE,
+	PVR_FW_HAS_FEATURE_ZLS_SUBTILE,
+
+	PVR_FW_HAS_FEATURE_MAX
+};
+
+#endif /* __PVR_ROGUE_FWIF_DEV_INFO_H__ */
diff --git a/drivers/gpu/drm/imagination/pvr_rogue_fwif_resetframework.h b/drivers/gpu/drm/imagination/pvr_rogue_fwif_resetframework.h
new file mode 100644
index 000000000000..3d89524fe5d2
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_rogue_fwif_resetframework.h
@@ -0,0 +1,28 @@
+/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#ifndef PVR_ROGUE_FWIF_RESETFRAMEWORK_H
+#define PVR_ROGUE_FWIF_RESETFRAMEWORK_H
+
+#include <linux/bits.h>
+#include <linux/types.h>
+
+#include "pvr_rogue_fwif_shared.h"
+
+struct rogue_fwif_rf_registers {
+	union {
+		u64 cdmreg_cdm_cb_base;
+		u64 cdmreg_cdm_ctrl_stream_base;
+	};
+	u64 cdmreg_cdm_cb_queue;
+	u64 cdmreg_cdm_cb;
+};
+
+struct rogue_fwif_rf_cmd {
+	/* THIS MUST BE THE LAST MEMBER OF THE CONTAINING STRUCTURE */
+	struct rogue_fwif_rf_registers fw_registers __aligned(8);
+};
+
+#define ROGUE_FWIF_RF_CMD_SIZE sizeof(struct rogue_fwif_rf_cmd)
+
+#endif /* PVR_ROGUE_FWIF_RESETFRAMEWORK_H */
diff --git a/drivers/gpu/drm/imagination/pvr_rogue_fwif_sf.h b/drivers/gpu/drm/imagination/pvr_rogue_fwif_sf.h
new file mode 100644
index 000000000000..0fcc500fab21
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_rogue_fwif_sf.h
@@ -0,0 +1,1648 @@
+/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#ifndef PVR_ROGUE_FWIF_SF_H
+#define PVR_ROGUE_FWIF_SF_H
+
+/*
+ ******************************************************************************
+ * *DO*NOT* rearrange or delete lines in rogue_fw_log_sfgroups or stid_fmts;
+ *          doing so WILL BREAK fw tracing message compatibility with previous
+ *          fw versions. Only add new entries, if so required.
+ ******************************************************************************
+ */
+
+/* Available log groups. */
+enum rogue_fw_log_sfgroups {
+	ROGUE_FW_GROUP_NULL,
+	ROGUE_FW_GROUP_MAIN,
+	ROGUE_FW_GROUP_CLEANUP,
+	ROGUE_FW_GROUP_CSW,
+	ROGUE_FW_GROUP_PM,
+	ROGUE_FW_GROUP_RTD,
+	ROGUE_FW_GROUP_SPM,
+	ROGUE_FW_GROUP_MTS,
+	ROGUE_FW_GROUP_BIF,
+	ROGUE_FW_GROUP_MISC,
+	ROGUE_FW_GROUP_POW,
+	ROGUE_FW_GROUP_HWR,
+	ROGUE_FW_GROUP_HWP,
+	ROGUE_FW_GROUP_RPM,
+	ROGUE_FW_GROUP_DMA,
+	ROGUE_FW_GROUP_DBG,
+};
+
+#define PVR_SF_STRING_MAX_SIZE 256U
+
+/* pair of string format id and string formats */
+struct rogue_fw_stid_fmt {
+	u32 id;
+	char name[PVR_SF_STRING_MAX_SIZE];
+};
+
+/*
+ *  Each symbolic name in the stid_fmts table below is assigned a u32 value of
+ *  the following format:
+ *
+ *     bits: 31 | 30..28 | 27..20 | 19..16 | 15..12 | 11..0
+ *     0-11: id number
+ *    12-15: group id number
+ *    16-19: number of parameters
+ *    20-27: unused
+ *    28-30: active: identify SF packet, otherwise regular int32
+ *       31: reserved for signed/unsigned compatibility
+ *
+ *  The following macro assigns those values to the SF id of each table entry.
+ */
+#define ROGUE_FW_LOG_IDMARKER (0x70000000U)
+#define ROGUE_FW_LOG_CREATESFID(a, b, e) ((u32)(a) | ((u32)(b) << 12) | ((u32)(e) << 16) | \
+					  ROGUE_FW_LOG_IDMARKER)
+
+#define ROGUE_FW_LOG_IDMASK (0xFFF00000)
+#define ROGUE_FW_LOG_VALIDID(I) (((I) & ROGUE_FW_LOG_IDMASK) == ROGUE_FW_LOG_IDMARKER)
+
+/* Return the group id that the given (enum generated) id belongs to */
+#define ROGUE_FW_SF_GID(x) (((u32)(x) >> 12) & 0xfU)
+/* Returns how many arguments the SF (string format) for the given (enum generated) id requires */
+#define ROGUE_FW_SF_PARAMNUM(x) (((u32)(x) >> 16) & 0xfU)
+
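
/*
 * Editor's note: a minimal, hypothetical sketch of how a host-side consumer of
 * the FW trace buffer might use the helpers above to validate a word and
 * recover its group id and parameter count before formatting. The function is
 * not part of this patch (u32/bool assumed from <linux/types.h>).
 */
static inline bool example_decode_sf_id(u32 word, u32 *group, u32 *num_params)
{
	/* Not an SF packet id; treat the word as a regular int32 payload. */
	if (!ROGUE_FW_LOG_VALIDID(word))
		return false;

	*group = ROGUE_FW_SF_GID(word);
	*num_params = ROGUE_FW_SF_PARAMNUM(word);
	return true;
}
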
+/* pair of string format id and string formats */
+struct rogue_km_stid_fmt {
+	u32 id;
+	const char *name;
+};
+
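
/*
 * Editor's note: hypothetical helper, not part of this patch. It sketches how
 * an (id, format string) table of this shape, such as stid_fmts below, could
 * be searched for the format string matching a decoded SF id (u32/NULL assumed
 * from the usual kernel headers). A linear scan is shown; the table is keyed
 * by id and is not guaranteed to be densely indexed.
 */
static inline const char *example_stid_lookup(const struct rogue_km_stid_fmt *tbl,
					      unsigned int count, u32 id)
{
	unsigned int i;

	for (i = 0; i < count; i++) {
		if (tbl[i].id == id)
			return tbl[i].name;
	}

	return NULL;
}
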
+static const struct rogue_km_stid_fmt stid_fmts[] = {
+	{ ROGUE_FW_LOG_CREATESFID(0, ROGUE_FW_GROUP_NULL, 0),
+	  "You should not use this string" },
+
+	{ ROGUE_FW_LOG_CREATESFID(1, ROGUE_FW_GROUP_MAIN, 6),
+	  "Kick 3D: FWCtx 0x%08.8x @ %d, RTD 0x%08x. Partial render:%d, CSW resume:%d, prio:%d" },
+	{ ROGUE_FW_LOG_CREATESFID(2, ROGUE_FW_GROUP_MAIN, 2),
+	  "3D finished, HWRTData0State=%x, HWRTData1State=%x" },
+	{ ROGUE_FW_LOG_CREATESFID(3, ROGUE_FW_GROUP_MAIN, 4),
+	  "Kick 3D TQ: FWCtx 0x%08.8x @ %d, CSW resume:%d, prio: %d" },
+	{ ROGUE_FW_LOG_CREATESFID(4, ROGUE_FW_GROUP_MAIN, 0),
+	  "3D Transfer finished" },
+	{ ROGUE_FW_LOG_CREATESFID(5, ROGUE_FW_GROUP_MAIN, 3),
+	  "Kick Compute: FWCtx 0x%08.8x @ %d, prio: %d" },
+	{ ROGUE_FW_LOG_CREATESFID(6, ROGUE_FW_GROUP_MAIN, 0),
+	  "Compute finished" },
+	{ ROGUE_FW_LOG_CREATESFID(7, ROGUE_FW_GROUP_MAIN, 7),
+	  "Kick TA: FWCtx 0x%08.8x @ %d, RTD 0x%08x. First kick:%d, Last kick:%d, CSW resume:%d, prio:%d" },
+	{ ROGUE_FW_LOG_CREATESFID(8, ROGUE_FW_GROUP_MAIN, 0),
+	  "TA finished" },
+	{ ROGUE_FW_LOG_CREATESFID(9, ROGUE_FW_GROUP_MAIN, 0),
+	  "Restart TA after partial render" },
+	{ ROGUE_FW_LOG_CREATESFID(10, ROGUE_FW_GROUP_MAIN, 0),
+	  "Resume TA without partial render" },
+	{ ROGUE_FW_LOG_CREATESFID(11, ROGUE_FW_GROUP_MAIN, 2),
+	  "Out of memory! Context 0x%08x, HWRTData 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(12, ROGUE_FW_GROUP_MAIN, 3),
+	  "Kick TLA: FWCtx 0x%08.8x @ %d, prio:%d" },
+	{ ROGUE_FW_LOG_CREATESFID(13, ROGUE_FW_GROUP_MAIN, 0),
+	  "TLA finished" },
+	{ ROGUE_FW_LOG_CREATESFID(14, ROGUE_FW_GROUP_MAIN, 3),
+	  "cCCB Woff update = %d, DM = %d, FWCtx = 0x%08.8x" },
+	{ ROGUE_FW_LOG_CREATESFID(16, ROGUE_FW_GROUP_MAIN, 2),
+	  "UFO Checks for FWCtx 0x%08.8x @ %d" },
+	{ ROGUE_FW_LOG_CREATESFID(17, ROGUE_FW_GROUP_MAIN, 3),
+	  "UFO Check: [0x%08.8x] is 0x%08.8x requires 0x%08.8x" },
+	{ ROGUE_FW_LOG_CREATESFID(18, ROGUE_FW_GROUP_MAIN, 0),
+	  "UFO Checks succeeded" },
+	{ ROGUE_FW_LOG_CREATESFID(19, ROGUE_FW_GROUP_MAIN, 3),
+	  "UFO PR-Check: [0x%08.8x] is 0x%08.8x requires >= 0x%08.8x" },
+	{ ROGUE_FW_LOG_CREATESFID(20, ROGUE_FW_GROUP_MAIN, 1),
+	  "UFO SPM PR-Checks for FWCtx 0x%08.8x" },
+	{ ROGUE_FW_LOG_CREATESFID(21, ROGUE_FW_GROUP_MAIN, 4),
+	  "UFO SPM special PR-Check: [0x%08.8x] is 0x%08.8x requires >= ????????, [0x%08.8x] is ???????? requires 0x%08.8x" },
+	{ ROGUE_FW_LOG_CREATESFID(22, ROGUE_FW_GROUP_MAIN, 2),
+	  "UFO Updates for FWCtx 0x%08.8x @ %d" },
+	{ ROGUE_FW_LOG_CREATESFID(23, ROGUE_FW_GROUP_MAIN, 2),
+	  "UFO Update: [0x%08.8x] = 0x%08.8x" },
+	{ ROGUE_FW_LOG_CREATESFID(24, ROGUE_FW_GROUP_MAIN, 1),
+	  "ASSERT Failed: line %d of:" },
+	{ ROGUE_FW_LOG_CREATESFID(25, ROGUE_FW_GROUP_MAIN, 2),
+	  "HWR: Lockup detected on DM%d, FWCtx: 0x%08.8x" },
+	{ ROGUE_FW_LOG_CREATESFID(26, ROGUE_FW_GROUP_MAIN, 3),
+	  "HWR: Reset fw state for DM%d, FWCtx: 0x%08.8x, MemCtx: 0x%08.8x" },
+	{ ROGUE_FW_LOG_CREATESFID(27, ROGUE_FW_GROUP_MAIN, 0),
+	  "HWR: Reset HW" },
+	{ ROGUE_FW_LOG_CREATESFID(28, ROGUE_FW_GROUP_MAIN, 0),
+	  "HWR: Lockup recovered." },
+	{ ROGUE_FW_LOG_CREATESFID(29, ROGUE_FW_GROUP_MAIN, 1),
+	  "HWR: False lockup detected for DM%u" },
+	{ ROGUE_FW_LOG_CREATESFID(30, ROGUE_FW_GROUP_MAIN, 3),
+	  "Alignment check %d failed: host = 0x%x, fw = 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(31, ROGUE_FW_GROUP_MAIN, 0),
+	  "GP USC triggered" },
+	{ ROGUE_FW_LOG_CREATESFID(32, ROGUE_FW_GROUP_MAIN, 2),
+	  "Overallocating %u temporary registers and %u shared registers for breakpoint handler" },
+	{ ROGUE_FW_LOG_CREATESFID(33, ROGUE_FW_GROUP_MAIN, 1),
+	  "Setting breakpoint: Addr 0x%08.8x" },
+	{ ROGUE_FW_LOG_CREATESFID(34, ROGUE_FW_GROUP_MAIN, 0),
+	  "Store breakpoint state" },
+	{ ROGUE_FW_LOG_CREATESFID(35, ROGUE_FW_GROUP_MAIN, 0),
+	  "Unsetting BP Registers" },
+	{ ROGUE_FW_LOG_CREATESFID(36, ROGUE_FW_GROUP_MAIN, 1),
+	  "Active RTs expected to be zero, actually %u" },
+	{ ROGUE_FW_LOG_CREATESFID(37, ROGUE_FW_GROUP_MAIN, 1),
+	  "RTC present, %u active render targets" },
+	{ ROGUE_FW_LOG_CREATESFID(38, ROGUE_FW_GROUP_MAIN, 1),
+	  "Estimated Power 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(39, ROGUE_FW_GROUP_MAIN, 1),
+	  "RTA render target %u" },
+	{ ROGUE_FW_LOG_CREATESFID(40, ROGUE_FW_GROUP_MAIN, 2),
+	  "Kick RTA render %u of %u" },
+	{ ROGUE_FW_LOG_CREATESFID(41, ROGUE_FW_GROUP_MAIN, 3),
+	  "HWR sizes check %d failed: addresses = %d, sizes = %d" },
+	{ ROGUE_FW_LOG_CREATESFID(42, ROGUE_FW_GROUP_MAIN, 1),
+	  "Pow: DUSTS_ENABLE = 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(43, ROGUE_FW_GROUP_MAIN, 2),
+	  "Pow: On(1)/Off(0): %d, Units: 0x%08.8x" },
+	{ ROGUE_FW_LOG_CREATESFID(44, ROGUE_FW_GROUP_MAIN, 2),
+	  "Pow: Changing number of dusts from %d to %d" },
+	{ ROGUE_FW_LOG_CREATESFID(45, ROGUE_FW_GROUP_MAIN, 0),
+	  "Pow: Sidekick ready to be powered down" },
+	{ ROGUE_FW_LOG_CREATESFID(46, ROGUE_FW_GROUP_MAIN, 2),
+	  "Pow: Request to change num of dusts to %d (bPowRascalDust=%d)" },
+	{ ROGUE_FW_LOG_CREATESFID(47, ROGUE_FW_GROUP_MAIN, 0),
+	  "No ZS Buffer used for partial render (store)" },
+	{ ROGUE_FW_LOG_CREATESFID(48, ROGUE_FW_GROUP_MAIN, 0),
+	  "No Depth/Stencil Buffer used for partial render (load)" },
+	{ ROGUE_FW_LOG_CREATESFID(49, ROGUE_FW_GROUP_MAIN, 2),
+	  "HWR: Lock-up DM%d FWCtx: 0x%08.8x" },
+	{ ROGUE_FW_LOG_CREATESFID(50, ROGUE_FW_GROUP_MAIN, 7),
+	  "MLIST%d checker: CatBase TE=0x%08x (%d Pages), VCE=0x%08x (%d Pages), ALIST=0x%08x, IsTA=%d" },
+	{ ROGUE_FW_LOG_CREATESFID(51, ROGUE_FW_GROUP_MAIN, 3),
+	  "MLIST%d checker: MList[%d] = 0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(52, ROGUE_FW_GROUP_MAIN, 1),
+	  "MLIST%d OK" },
+	{ ROGUE_FW_LOG_CREATESFID(53, ROGUE_FW_GROUP_MAIN, 1),
+	  "MLIST%d is empty" },
+	{ ROGUE_FW_LOG_CREATESFID(54, ROGUE_FW_GROUP_MAIN, 8),
+	  "MLIST%d checker: CatBase TE=0x%08x%08x, VCE=0x%08x%08x, ALIST=0x%08x%08x, IsTA=%d" },
+	{ ROGUE_FW_LOG_CREATESFID(55, ROGUE_FW_GROUP_MAIN, 0),
+	  "3D OQ flush kick" },
+	{ ROGUE_FW_LOG_CREATESFID(56, ROGUE_FW_GROUP_MAIN, 1),
+	  "HWPerf block ID (0x%x) unsupported by device" },
+	{ ROGUE_FW_LOG_CREATESFID(57, ROGUE_FW_GROUP_MAIN, 2),
+	  "Setting breakpoint: Addr 0x%08.8x DM%u" },
+	{ ROGUE_FW_LOG_CREATESFID(58, ROGUE_FW_GROUP_MAIN, 3),
+	  "Kick RTU: FWCtx 0x%08.8x @ %d, prio: %d" },
+	{ ROGUE_FW_LOG_CREATESFID(59, ROGUE_FW_GROUP_MAIN, 1),
+	  "RDM finished on context %u" },
+	{ ROGUE_FW_LOG_CREATESFID(60, ROGUE_FW_GROUP_MAIN, 3),
+	  "Kick SHG: FWCtx 0x%08.8x @ %d, prio: %d" },
+	{ ROGUE_FW_LOG_CREATESFID(61, ROGUE_FW_GROUP_MAIN, 0),
+	  "SHG finished" },
+	{ ROGUE_FW_LOG_CREATESFID(62, ROGUE_FW_GROUP_MAIN, 1),
+	  "FBA finished on context %u" },
+	{ ROGUE_FW_LOG_CREATESFID(63, ROGUE_FW_GROUP_MAIN, 0),
+	  "UFO Checks failed" },
+	{ ROGUE_FW_LOG_CREATESFID(64, ROGUE_FW_GROUP_MAIN, 1),
+	  "Kill DM%d start" },
+	{ ROGUE_FW_LOG_CREATESFID(65, ROGUE_FW_GROUP_MAIN, 1),
+	  "Kill DM%d complete" },
+	{ ROGUE_FW_LOG_CREATESFID(66, ROGUE_FW_GROUP_MAIN, 2),
+	  "FC%u cCCB Woff update = %u" },
+	{ ROGUE_FW_LOG_CREATESFID(67, ROGUE_FW_GROUP_MAIN, 4),
+	  "Kick RTU: FWCtx 0x%08.8x @ %d, prio: %d, Frame Context: %d" },
+	{ ROGUE_FW_LOG_CREATESFID(68, ROGUE_FW_GROUP_MAIN, 0),
+	  "GPU init" },
+	{ ROGUE_FW_LOG_CREATESFID(69, ROGUE_FW_GROUP_MAIN, 1),
+	  "GPU Units init (# mask: 0x%x)" },
+	{ ROGUE_FW_LOG_CREATESFID(70, ROGUE_FW_GROUP_MAIN, 3),
+	  "Register access cycles: read: %d cycles, write: %d cycles, iterations: %d" },
+	{ ROGUE_FW_LOG_CREATESFID(71, ROGUE_FW_GROUP_MAIN, 3),
+	  "Register configuration added. Address: 0x%x Value: 0x%x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(72, ROGUE_FW_GROUP_MAIN, 1),
+	  "Register configuration applied to type %d. (0:pow on, 1:Rascal/dust init, 2-5: TA,3D,CDM,TLA, 6:All)" },
+	{ ROGUE_FW_LOG_CREATESFID(73, ROGUE_FW_GROUP_MAIN, 0),
+	  "Perform TPC flush." },
+	{ ROGUE_FW_LOG_CREATESFID(74, ROGUE_FW_GROUP_MAIN, 0),
+	  "GPU has locked up (see HWR logs for more info)" },
+	{ ROGUE_FW_LOG_CREATESFID(75, ROGUE_FW_GROUP_MAIN, 0),
+	  "HWR has been triggered - GPU has overrun its deadline (see HWR logs)" },
+	{ ROGUE_FW_LOG_CREATESFID(76, ROGUE_FW_GROUP_MAIN, 0),
+	  "HWR has been triggered - GPU has failed a poll (see HWR logs)" },
+	{ ROGUE_FW_LOG_CREATESFID(77, ROGUE_FW_GROUP_MAIN, 1),
+	  "Doppler out of memory event for FC %u" },
+	{ ROGUE_FW_LOG_CREATESFID(78, ROGUE_FW_GROUP_MAIN, 3),
+	  "UFO SPM special PR-Check: [0x%08.8x] is 0x%08.8x requires >= 0x%08.8x" },
+	{ ROGUE_FW_LOG_CREATESFID(79, ROGUE_FW_GROUP_MAIN, 3),
+	  "UFO SPM special PR-Check: [0x%08.8x] is 0x%08.8x requires 0x%08.8x" },
+	{ ROGUE_FW_LOG_CREATESFID(80, ROGUE_FW_GROUP_MAIN, 1),
+	  "TIMESTAMP -> [0x%08.8x]" },
+	{ ROGUE_FW_LOG_CREATESFID(81, ROGUE_FW_GROUP_MAIN, 2),
+	  "UFO RMW Updates for FWCtx 0x%08.8x @ %d" },
+	{ ROGUE_FW_LOG_CREATESFID(82, ROGUE_FW_GROUP_MAIN, 2),
+	  "UFO Update: [0x%08.8x] = 0x%08.8x" },
+	{ ROGUE_FW_LOG_CREATESFID(83, ROGUE_FW_GROUP_MAIN, 2),
+	  "Kick Null cmd: FWCtx 0x%08.8x @ %d" },
+	{ ROGUE_FW_LOG_CREATESFID(84, ROGUE_FW_GROUP_MAIN, 2),
+	  "RPM Out of memory! Context 0x%08x, SH requestor %d" },
+	{ ROGUE_FW_LOG_CREATESFID(85, ROGUE_FW_GROUP_MAIN, 4),
+	  "Discard RTU due to RPM abort: FWCtx 0x%08.8x @ %d, prio: %d, Frame Context: %d" },
+	{ ROGUE_FW_LOG_CREATESFID(86, ROGUE_FW_GROUP_MAIN, 4),
+	  "Deferring DM%u from running context 0x%08x @ %d (deferred DMs = 0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(87, ROGUE_FW_GROUP_MAIN, 4),
+	  "Deferring DM%u from running context 0x%08x @ %d to let other deferred DMs run (deferred DMs = 0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(88, ROGUE_FW_GROUP_MAIN, 4),
+	  "No longer deferring DM%u from running context = 0x%08x @ %d (deferred DMs = 0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(89, ROGUE_FW_GROUP_MAIN, 3),
+	  "FWCCB for DM%u is full, we will have to wait for space! (Roff = %u, Woff = %u)" },
+	{ ROGUE_FW_LOG_CREATESFID(90, ROGUE_FW_GROUP_MAIN, 3),
+	  "FWCCB for OSid %u is full, we will have to wait for space! (Roff = %u, Woff = %u)" },
+	{ ROGUE_FW_LOG_CREATESFID(91, ROGUE_FW_GROUP_MAIN, 1),
+	  "Host Sync Partition marker: %d" },
+	{ ROGUE_FW_LOG_CREATESFID(92, ROGUE_FW_GROUP_MAIN, 1),
+	  "Host Sync Partition repeat: %d" },
+	{ ROGUE_FW_LOG_CREATESFID(93, ROGUE_FW_GROUP_MAIN, 1),
+	  "Core clock set to %d Hz" },
+	{ ROGUE_FW_LOG_CREATESFID(94, ROGUE_FW_GROUP_MAIN, 7),
+	  "Compute Queue: FWCtx 0x%08.8x, prio: %d, queue: 0x%08x%08x (Roff = %u, Woff = %u, Size = %u)" },
+	{ ROGUE_FW_LOG_CREATESFID(95, ROGUE_FW_GROUP_MAIN, 3),
+	  "Signal check failed, Required Data: 0x%x, Address: 0x%08x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(96, ROGUE_FW_GROUP_MAIN, 5),
+	  "Signal update, Snoop Filter: %u, MMU Ctx: %u, Signal Id: %u, Signals Base: 0x%08x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(97, ROGUE_FW_GROUP_MAIN, 4),
+	  "Signalled the previously waiting FWCtx: 0x%08.8x, OSId: %u, Signal Address: 0x%08x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(98, ROGUE_FW_GROUP_MAIN, 0),
+	  "Compute stalled" },
+	{ ROGUE_FW_LOG_CREATESFID(99, ROGUE_FW_GROUP_MAIN, 3),
+	  "Compute stalled (Roff = %u, Woff = %u, Size = %u)" },
+	{ ROGUE_FW_LOG_CREATESFID(100, ROGUE_FW_GROUP_MAIN, 3),
+	  "Compute resumed (Roff = %u, Woff = %u, Size = %u)" },
+	{ ROGUE_FW_LOG_CREATESFID(101, ROGUE_FW_GROUP_MAIN, 4),
+	  "Signal update notification from the host, PC Physical Address: 0x%08x%08x, Signal Virtual Address: 0x%08x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(102, ROGUE_FW_GROUP_MAIN, 4),
+	  "Signal update from DM: %u, OSId: %u, PC Physical Address: 0x%08x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(103, ROGUE_FW_GROUP_MAIN, 1),
+	  "DM: %u signal check failed" },
+	{ ROGUE_FW_LOG_CREATESFID(104, ROGUE_FW_GROUP_MAIN, 3),
+	  "Kick TDM: FWCtx 0x%08.8x @ %d, prio:%d" },
+	{ ROGUE_FW_LOG_CREATESFID(105, ROGUE_FW_GROUP_MAIN, 0),
+	  "TDM finished" },
+	{ ROGUE_FW_LOG_CREATESFID(106, ROGUE_FW_GROUP_MAIN, 4),
+	  "MMU_PM_CAT_BASE_TE[%d]_PIPE[%d]:  0x%08x 0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(107, ROGUE_FW_GROUP_MAIN, 0),
+	  "BRN 54141 HIT" },
+	{ ROGUE_FW_LOG_CREATESFID(108, ROGUE_FW_GROUP_MAIN, 0),
+	  "BRN 54141 Dummy TA kicked" },
+	{ ROGUE_FW_LOG_CREATESFID(109, ROGUE_FW_GROUP_MAIN, 0),
+	  "BRN 54141 resume TA" },
+	{ ROGUE_FW_LOG_CREATESFID(110, ROGUE_FW_GROUP_MAIN, 0),
+	  "BRN 54141 double hit after applying WA" },
+	{ ROGUE_FW_LOG_CREATESFID(111, ROGUE_FW_GROUP_MAIN, 2),
+	  "BRN 54141 Dummy TA VDM base address: 0x%08x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(112, ROGUE_FW_GROUP_MAIN, 4),
+	  "Signal check failed, Required Data: 0x%x, Current Data: 0x%x, Address: 0x%08x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(113, ROGUE_FW_GROUP_MAIN, 2),
+	  "TDM stalled (Roff = %u, Woff = %u)" },
+	{ ROGUE_FW_LOG_CREATESFID(114, ROGUE_FW_GROUP_MAIN, 1),
+	  "Write Offset update notification for stalled FWCtx 0x%08.8x" },
+	{ ROGUE_FW_LOG_CREATESFID(115, ROGUE_FW_GROUP_MAIN, 3),
+	  "Changing OSid %d's priority from %u to %u" },
+	{ ROGUE_FW_LOG_CREATESFID(116, ROGUE_FW_GROUP_MAIN, 0),
+	  "Compute resumed" },
+	{ ROGUE_FW_LOG_CREATESFID(117, ROGUE_FW_GROUP_MAIN, 7),
+	  "Kick TLA: FWCtx 0x%08.8x @ %d. (PID:%d, prio:%d, frame:%d, ext:0x%08x, int:0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(118, ROGUE_FW_GROUP_MAIN, 7),
+	  "Kick TDM: FWCtx 0x%08.8x @ %d. (PID:%d, prio:%d, frame:%d, ext:0x%08x, int:0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(119, ROGUE_FW_GROUP_MAIN, 11),
+	  "Kick TA: FWCtx 0x%08.8x @ %d, RTD 0x%08x, First kick:%d, Last kick:%d, CSW resume:%d. (PID:%d, prio:%d, frame:%d, ext:0x%08x, int:0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(120, ROGUE_FW_GROUP_MAIN, 10),
+	  "Kick 3D: FWCtx 0x%08.8x @ %d, RTD 0x%08x, Partial render:%d, CSW resume:%d. (PID:%d, prio:%d, frame:%d, ext:0x%08x, int:0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(121, ROGUE_FW_GROUP_MAIN, 8),
+	  "Kick 3D TQ: FWCtx 0x%08.8x @ %d, CSW resume:%d. (PID:%d, prio:%d, frame:%d, ext:0x%08x, int:0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(122, ROGUE_FW_GROUP_MAIN, 6),
+	  "Kick Compute: FWCtx 0x%08.8x @ %d. (PID:%d, prio:%d, ext:0x%08x, int:0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(123, ROGUE_FW_GROUP_MAIN, 8),
+	  "Kick RTU: FWCtx 0x%08.8x @ %d, Frame Context:%d. (PID:%d, prio:%d, frame:%d, ext:0x%08x, int:0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(124, ROGUE_FW_GROUP_MAIN, 7),
+	  "Kick SHG: FWCtx 0x%08.8x @ %d. (PID:%d, prio:%d, frame:%d, ext:0x%08x, int:0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(125, ROGUE_FW_GROUP_MAIN, 1),
+	  "Reconfigure CSRM: special coeff support enable %d." },
+	{ ROGUE_FW_LOG_CREATESFID(127, ROGUE_FW_GROUP_MAIN, 1),
+	  "TA requires max coeff mode, deferring: %d." },
+	{ ROGUE_FW_LOG_CREATESFID(128, ROGUE_FW_GROUP_MAIN, 1),
+	  "3D requires max coeff mode, deferring: %d." },
+	{ ROGUE_FW_LOG_CREATESFID(129, ROGUE_FW_GROUP_MAIN, 1),
+	  "Kill DM%d failed" },
+	{ ROGUE_FW_LOG_CREATESFID(130, ROGUE_FW_GROUP_MAIN, 2),
+	  "Thread Queue is full, we will have to wait for space! (Roff = %u, Woff = %u)" },
+	{ ROGUE_FW_LOG_CREATESFID(131, ROGUE_FW_GROUP_MAIN, 3),
+	  "Thread Queue is fencing, we are waiting for Roff = %d (Roff = %u, Woff = %u)" },
+	{ ROGUE_FW_LOG_CREATESFID(132, ROGUE_FW_GROUP_MAIN, 1),
+	  "DM %d failed to Context Switch on time. Triggered HCS (see HWR logs)." },
+	{ ROGUE_FW_LOG_CREATESFID(133, ROGUE_FW_GROUP_MAIN, 1),
+	  "HCS changed to %d ms" },
+	{ ROGUE_FW_LOG_CREATESFID(134, ROGUE_FW_GROUP_MAIN, 4),
+	  "Updating Tiles In Flight (Dusts=%d, PartitionMask=0x%08x, ISPCtl=0x%08x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(135, ROGUE_FW_GROUP_MAIN, 2),
+	  "  Phantom %d: USCTiles=%d" },
+	{ ROGUE_FW_LOG_CREATESFID(136, ROGUE_FW_GROUP_MAIN, 0),
+	  "Isolation grouping is disabled" },
+	{ ROGUE_FW_LOG_CREATESFID(137, ROGUE_FW_GROUP_MAIN, 1),
+	  "Isolation group configured with a priority threshold of %d" },
+	{ ROGUE_FW_LOG_CREATESFID(138, ROGUE_FW_GROUP_MAIN, 1),
+	  "OS %d has come online" },
+	{ ROGUE_FW_LOG_CREATESFID(139, ROGUE_FW_GROUP_MAIN, 1),
+	  "OS %d has gone offline" },
+	{ ROGUE_FW_LOG_CREATESFID(140, ROGUE_FW_GROUP_MAIN, 4),
+	  "Signalled the previously stalled FWCtx: 0x%08.8x, OSId: %u, Signal Address: 0x%08x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(141, ROGUE_FW_GROUP_MAIN, 7),
+	  "TDM Queue: FWCtx 0x%08.8x, prio: %d, queue: 0x%08x%08x (Roff = %u, Woff = %u, Size = %u)" },
+	{ ROGUE_FW_LOG_CREATESFID(142, ROGUE_FW_GROUP_MAIN, 6),
+	  "Reset TDM Queue Read Offset: FWCtx 0x%08.8x, queue: 0x%08x%08x (Roff = %u becomes 0, Woff = %u, Size = %u)" },
+	{ ROGUE_FW_LOG_CREATESFID(143, ROGUE_FW_GROUP_MAIN, 5),
+	  "User Mode Queue mismatched stream start: FWCtx 0x%08.8x, queue: 0x%08x%08x (Roff = %u, StreamStartOffset = %u)" },
+	{ ROGUE_FW_LOG_CREATESFID(144, ROGUE_FW_GROUP_MAIN, 0),
+	  "GPU deinit" },
+	{ ROGUE_FW_LOG_CREATESFID(145, ROGUE_FW_GROUP_MAIN, 0),
+	  "GPU units deinit" },
+	{ ROGUE_FW_LOG_CREATESFID(146, ROGUE_FW_GROUP_MAIN, 2),
+	  "Initialised OS %d with config flags 0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(147, ROGUE_FW_GROUP_MAIN, 2),
+	  "UFO limit exceeded %d/%d" },
+	{ ROGUE_FW_LOG_CREATESFID(148, ROGUE_FW_GROUP_MAIN, 0),
+	  "3D Dummy stencil store" },
+	{ ROGUE_FW_LOG_CREATESFID(149, ROGUE_FW_GROUP_MAIN, 3),
+	  "Initialised OS %d with config flags 0x%08x and extended config flags 0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(150, ROGUE_FW_GROUP_MAIN, 1),
+	  "Unknown Command (eCmdType=0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(151, ROGUE_FW_GROUP_MAIN, 4),
+	  "UFO forced update: FWCtx 0x%08.8x @ %d [0x%08.8x] = 0x%08.8x" },
+	{ ROGUE_FW_LOG_CREATESFID(152, ROGUE_FW_GROUP_MAIN, 5),
+	  "UFO forced update NOP: FWCtx 0x%08.8x @ %d [0x%08.8x] = 0x%08.8x, reason %d" },
+	{ ROGUE_FW_LOG_CREATESFID(153, ROGUE_FW_GROUP_MAIN, 3),
+	  "TDM context switch check: Roff %u points to 0x%08x, Match=%u" },
+	{ ROGUE_FW_LOG_CREATESFID(154, ROGUE_FW_GROUP_MAIN, 6),
+	  "OSid %d CCB init status: %d (1-ok 0-fail): kCCBCtl@0x%x kCCB@0x%x fwCCBCtl@0x%x fwCCB@0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(155, ROGUE_FW_GROUP_MAIN, 2),
+	  "FW IRQ # %u @ %u" },
+	{ ROGUE_FW_LOG_CREATESFID(156, ROGUE_FW_GROUP_MAIN, 3),
+	  "Setting breakpoint: Addr 0x%08.8x DM%u usc_breakpoint_ctrl_dm = %u" },
+	{ ROGUE_FW_LOG_CREATESFID(157, ROGUE_FW_GROUP_MAIN, 3),
+	  "Invalid KCCB setup for OSid %u: KCCB 0x%08x, KCCB Ctrl 0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(158, ROGUE_FW_GROUP_MAIN, 3),
+	  "Invalid KCCB cmd (%u) for OSid %u @ KCCB 0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(159, ROGUE_FW_GROUP_MAIN, 4),
+	  "FW FAULT: At line %d in file 0x%08x%08x, additional data=0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(160, ROGUE_FW_GROUP_MAIN, 4),
+	  "Invalid breakpoint: MemCtx 0x%08x Addr 0x%08.8x DM%u usc_breakpoint_ctrl_dm = %u" },
+	{ ROGUE_FW_LOG_CREATESFID(161, ROGUE_FW_GROUP_MAIN, 3),
+	  "Discarding invalid SLC flushinval command for OSid %u: DM %u, FWCtx 0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(162, ROGUE_FW_GROUP_MAIN, 4),
+	  "Invalid Write Offset update notification from OSid %u to DM %u: FWCtx 0x%08x, MemCtx 0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(163, ROGUE_FW_GROUP_MAIN, 4),
+	  "Null FWCtx in KCCB kick cmd for OSid %u: KCCB 0x%08x, ROff %u, WOff %u" },
+	{ ROGUE_FW_LOG_CREATESFID(164, ROGUE_FW_GROUP_MAIN, 3),
+	  "Checkpoint CCB for OSid %u is full, signalling host for full check state (Roff = %u, Woff = %u)" },
+	{ ROGUE_FW_LOG_CREATESFID(165, ROGUE_FW_GROUP_MAIN, 8),
+	  "OSid %d CCB init status: %d (1-ok 0-fail): kCCBCtl@0x%x kCCB@0x%x fwCCBCtl@0x%x fwCCB@0x%x chptCCBCtl@0x%x chptCCB@0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(166, ROGUE_FW_GROUP_MAIN, 4),
+	  "OSid %d fw state transition request: from %d to %d (0-offline 1-ready 2-active 3-offloading). Status %d (1-ok 0-fail)" },
+	{ ROGUE_FW_LOG_CREATESFID(167, ROGUE_FW_GROUP_MAIN, 2),
+	  "OSid %u has %u stale commands in its KCCB" },
+	{ ROGUE_FW_LOG_CREATESFID(168, ROGUE_FW_GROUP_MAIN, 0),
+	  "Applying VCE pause" },
+	{ ROGUE_FW_LOG_CREATESFID(169, ROGUE_FW_GROUP_MAIN, 3),
+	  "OSid %u KCCB slot %u value updated to %u" },
+	{ ROGUE_FW_LOG_CREATESFID(170, ROGUE_FW_GROUP_MAIN, 7),
+	  "Unknown KCCB Command: KCCBCtl=0x%08x, KCCB=0x%08x, Roff=%u, Woff=%u, Wrap=%u, Cmd=0x%08x, CmdType=0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(171, ROGUE_FW_GROUP_MAIN, 10),
+	  "Unknown Client CCB Command processing fences: FWCtx=0x%08x, CCBCtl=0x%08x, CCB=0x%08x, Roff=%u, Doff=%u, Woff=%u, Wrap=%u, CmdHdr=0x%08x, CmdType=0x%08x, CmdSize=%u" },
+	{ ROGUE_FW_LOG_CREATESFID(172, ROGUE_FW_GROUP_MAIN, 10),
+	  "Unknown Client CCB Command executing kick: FWCtx=0x%08x, CCBCtl=0x%08x, CCB=0x%08x, Roff=%u, Doff=%u, Woff=%u, Wrap=%u, CmdHdr=0x%08x, CmdType=0x%08x, CmdSize=%u" },
+	{ ROGUE_FW_LOG_CREATESFID(173, ROGUE_FW_GROUP_MAIN, 2),
+	  "Null FWCtx in KCCB kick cmd for OSid %u with WOff %u" },
+	{ ROGUE_FW_LOG_CREATESFID(174, ROGUE_FW_GROUP_MAIN, 2),
+	  "Discarding invalid SLC flushinval command for OSid %u, FWCtx 0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(175, ROGUE_FW_GROUP_MAIN, 3),
+	  "Invalid Write Offset update notification from OSid %u: FWCtx 0x%08x, MemCtx 0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(176, ROGUE_FW_GROUP_MAIN, 2),
+	  "Initialised Firmware with config flags 0x%08x and extended config flags 0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(177, ROGUE_FW_GROUP_MAIN, 1),
+	  "Set Periodic Hardware Reset Mode: %d" },
+	{ ROGUE_FW_LOG_CREATESFID(179, ROGUE_FW_GROUP_MAIN, 3),
+	  "PHR mode %d, FW state: 0x%08x, HWR flags: 0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(180, ROGUE_FW_GROUP_MAIN, 1),
+	  "PHR mode %d triggered a reset" },
+	{ ROGUE_FW_LOG_CREATESFID(181, ROGUE_FW_GROUP_MAIN, 2),
+	  "Signal update, Snoop Filter: %u, Signal Id: %u" },
+	{ ROGUE_FW_LOG_CREATESFID(182, ROGUE_FW_GROUP_MAIN, 1),
+	  "WARNING: Skipping FW KCCB Cmd type %d which is not yet supported on Series8." },
+	{ ROGUE_FW_LOG_CREATESFID(183, ROGUE_FW_GROUP_MAIN, 4),
+	  "MMU context cache data NULL, but cache flags=0x%x (sync counter=%u, update value=%u) OSId=%u" },
+	{ ROGUE_FW_LOG_CREATESFID(184, ROGUE_FW_GROUP_MAIN, 5),
+	  "SLC range based flush: Context=%u VAddr=0x%02x%08x, Size=0x%08x, Invalidate=%d" },
+	{ ROGUE_FW_LOG_CREATESFID(185, ROGUE_FW_GROUP_MAIN, 3),
+	  "FBSC invalidate for Context Set [0x%08x]: Entry mask 0x%08x%08x." },
+	{ ROGUE_FW_LOG_CREATESFID(186, ROGUE_FW_GROUP_MAIN, 3),
+	  "TDM context switch check: Roff %u was not valid for kick starting at %u, moving back to %u" },
+	{ ROGUE_FW_LOG_CREATESFID(187, ROGUE_FW_GROUP_MAIN, 2),
+	  "Signal updates: FIFO: %u, Signals: 0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(188, ROGUE_FW_GROUP_MAIN, 2),
+	  "Invalid FBSC cmd: FWCtx 0x%08x, MemCtx 0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(189, ROGUE_FW_GROUP_MAIN, 0),
+	  "Insert BRN68497 WA blit after TDM Context store." },
+	{ ROGUE_FW_LOG_CREATESFID(190, ROGUE_FW_GROUP_MAIN, 1),
+	  "UFO Updates for previously finished FWCtx 0x%08.8x" },
+	{ ROGUE_FW_LOG_CREATESFID(191, ROGUE_FW_GROUP_MAIN, 1),
+	  "RTC with RTA present, %u active render targets" },
+	{ ROGUE_FW_LOG_CREATESFID(192, ROGUE_FW_GROUP_MAIN, 0),
+	  "Invalid RTA Set-up. The ValidRenderTargets array in RTACtl is Null!" },
+	{ ROGUE_FW_LOG_CREATESFID(193, ROGUE_FW_GROUP_MAIN, 2),
+	  "Block 0x%x / Counter 0x%x INVALID and ignored" },
+	{ ROGUE_FW_LOG_CREATESFID(194, ROGUE_FW_GROUP_MAIN, 2),
+	  "ECC fault GPU=0x%08x FW=0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(195, ROGUE_FW_GROUP_MAIN, 1),
+	  "Processing XPU event on DM = %d" },
+	{ ROGUE_FW_LOG_CREATESFID(196, ROGUE_FW_GROUP_MAIN, 2),
+	  "OSid %u failed to respond to the virtualisation watchdog in time. Timestamp of its last input = %u" },
+	{ ROGUE_FW_LOG_CREATESFID(197, ROGUE_FW_GROUP_MAIN, 1),
+	  "GPU-%u has locked up (see HWR logs for more info)" },
+	{ ROGUE_FW_LOG_CREATESFID(198, ROGUE_FW_GROUP_MAIN, 3),
+	  "Updating Tiles In Flight (Dusts=%d, PartitionMask=0x%08x, ISPCtl=0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(199, ROGUE_FW_GROUP_MAIN, 0),
+	  "GPU has locked up (see HWR logs for more info)" },
+	{ ROGUE_FW_LOG_CREATESFID(200, ROGUE_FW_GROUP_MAIN, 1),
+	  "Reprocessing outstanding XPU events from cores 0x%02x" },
+	{ ROGUE_FW_LOG_CREATESFID(201, ROGUE_FW_GROUP_MAIN, 3),
+	  "Secondary XPU event on DM=%d, CoreMask=0x%02x, Raised=0x%02x" },
+	{ ROGUE_FW_LOG_CREATESFID(202, ROGUE_FW_GROUP_MAIN, 8),
+	  "TDM Queue: Core %u, FWCtx 0x%08.8x, prio: %d, queue: 0x%08x%08x (Roff = %u, Woff = %u, Size = %u)" },
+	{ ROGUE_FW_LOG_CREATESFID(203, ROGUE_FW_GROUP_MAIN, 3),
+	  "TDM stalled Core %u (Roff = %u, Woff = %u)" },
+	{ ROGUE_FW_LOG_CREATESFID(204, ROGUE_FW_GROUP_MAIN, 8),
+	  "Compute Queue: Core %u, FWCtx 0x%08.8x, prio: %d, queue: 0x%08x%08x (Roff = %u, Woff = %u, Size = %u)" },
+	{ ROGUE_FW_LOG_CREATESFID(205, ROGUE_FW_GROUP_MAIN, 4),
+	  "Compute stalled core %u (Roff = %u, Woff = %u, Size = %u)" },
+	{ ROGUE_FW_LOG_CREATESFID(206, ROGUE_FW_GROUP_MAIN, 6),
+	  "User Mode Queue mismatched stream start: Core %u, FWCtx 0x%08.8x, queue: 0x%08x%08x (Roff = %u, StreamStartOffset = %u)" },
+	{ ROGUE_FW_LOG_CREATESFID(207, ROGUE_FW_GROUP_MAIN, 3),
+	  "TDM resumed core %u (Roff = %u, Woff = %u)" },
+	{ ROGUE_FW_LOG_CREATESFID(208, ROGUE_FW_GROUP_MAIN, 4),
+	  "Compute resumed core %u (Roff = %u, Woff = %u, Size = %u)" },
+	{ ROGUE_FW_LOG_CREATESFID(209, ROGUE_FW_GROUP_MAIN, 2),
+	  " Updated permission for OSid %u to perform MTS kicks: %u (1 = allowed, 0 = not allowed)" },
+	{ ROGUE_FW_LOG_CREATESFID(210, ROGUE_FW_GROUP_MAIN, 2),
+	  "Mask = 0x%X, mask2 = 0x%X" },
+	{ ROGUE_FW_LOG_CREATESFID(211, ROGUE_FW_GROUP_MAIN, 3),
+	  "  core %u, reg = %u, mask = 0x%X)" },
+	{ ROGUE_FW_LOG_CREATESFID(212, ROGUE_FW_GROUP_MAIN, 1),
+	  "ECC fault received from safety bus: 0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(213, ROGUE_FW_GROUP_MAIN, 1),
+	  "Safety Watchdog threshold period set to 0x%x clock cycles" },
+	{ ROGUE_FW_LOG_CREATESFID(214, ROGUE_FW_GROUP_MAIN, 0),
+	  "MTS Safety Event trigged by the safety watchdog." },
+	{ ROGUE_FW_LOG_CREATESFID(215, ROGUE_FW_GROUP_MAIN, 3),
+	  "DM%d USC tasks range limit 0 - %d, stride %d" },
+	{ ROGUE_FW_LOG_CREATESFID(216, ROGUE_FW_GROUP_MAIN, 1),
+	  "ECC fault GPU=0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(217, ROGUE_FW_GROUP_MAIN, 0),
+	  "GPU Hardware units reset to prevent transient faults." },
+	{ ROGUE_FW_LOG_CREATESFID(218, ROGUE_FW_GROUP_MAIN, 2),
+	  "Kick Abort cmd: FWCtx 0x%08.8x @ %d" },
+	{ ROGUE_FW_LOG_CREATESFID(219, ROGUE_FW_GROUP_MAIN, 7),
+	  "Kick Ray: FWCtx 0x%08.8x @ %d. (PID:%d, prio:%d, frame:%d, ext:0x%08x, int:0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(220, ROGUE_FW_GROUP_MAIN, 0),
+	  "Ray finished" },
+	{ ROGUE_FW_LOG_CREATESFID(221, ROGUE_FW_GROUP_MAIN, 2),
+	  "State of firmware's private data at boot time: %d (0 = uninitialised, 1 = initialised); Fw State Flags = 0x%08X" },
+	{ ROGUE_FW_LOG_CREATESFID(222, ROGUE_FW_GROUP_MAIN, 2),
+	  "CFI Timeout detected (%d increasing to %d)" },
+	{ ROGUE_FW_LOG_CREATESFID(223, ROGUE_FW_GROUP_MAIN, 2),
+	  "CFI Timeout detected for FBM (%d increasing to %d)" },
+	{ ROGUE_FW_LOG_CREATESFID(224, ROGUE_FW_GROUP_MAIN, 0),
+	  "Geom OOM event not allowed" },
+	{ ROGUE_FW_LOG_CREATESFID(225, ROGUE_FW_GROUP_MAIN, 4),
+	  "Changing OSid %d's priority from %u to %u; Isolation = %u (0 = off; 1 = on)" },
+	{ ROGUE_FW_LOG_CREATESFID(226, ROGUE_FW_GROUP_MAIN, 2),
+	  "Skipping already executed TA FWCtx 0x%08.8x @ %d" },
+	{ ROGUE_FW_LOG_CREATESFID(227, ROGUE_FW_GROUP_MAIN, 2),
+	  "Attempt to execute TA FWCtx 0x%08.8x @ %d ahead of time on other GEOM" },
+	{ ROGUE_FW_LOG_CREATESFID(228, ROGUE_FW_GROUP_MAIN, 8),
+	  "Kick TDM: Kick ID %u FWCtx 0x%08.8x @ %d. (PID:%d, prio:%d, frame:%d, ext:0x%08x, int:0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(229, ROGUE_FW_GROUP_MAIN, 12),
+	  "Kick TA: Kick ID %u FWCtx 0x%08.8x @ %d, RTD 0x%08x, First kick:%d, Last kick:%d, CSW resume:%d. (PID:%d, prio:%d, frame:%d, ext:0x%08x, int:0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(230, ROGUE_FW_GROUP_MAIN, 11),
+	  "Kick 3D: Kick ID %u FWCtx 0x%08.8x @ %d, RTD 0x%08x, Partial render:%d, CSW resume:%d. (PID:%d, prio:%d, frame:%d, ext:0x%08x, int:0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(231, ROGUE_FW_GROUP_MAIN, 7),
+	  "Kick Compute: Kick ID %u FWCtx 0x%08.8x @ %d. (PID:%d, prio:%d, ext:0x%08x, int:0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(232, ROGUE_FW_GROUP_MAIN, 1),
+	  "TDM finished: Kick ID %u " },
+	{ ROGUE_FW_LOG_CREATESFID(233, ROGUE_FW_GROUP_MAIN, 1),
+	  "TA finished: Kick ID %u " },
+	{ ROGUE_FW_LOG_CREATESFID(234, ROGUE_FW_GROUP_MAIN, 3),
+	  "3D finished: Kick ID %u , HWRTData0State=%x, HWRTData1State=%x" },
+	{ ROGUE_FW_LOG_CREATESFID(235, ROGUE_FW_GROUP_MAIN, 1),
+	  "Compute finished: Kick ID %u " },
+	{ ROGUE_FW_LOG_CREATESFID(236, ROGUE_FW_GROUP_MAIN, 10),
+	  "Kick TDM: Kick ID %u FWCtx 0x%08.8x @ %d, Base 0x%08x%08x. (PID:%d, prio:%d, frame:%d, ext:0x%08x, int:0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(237, ROGUE_FW_GROUP_MAIN, 8),
+	  "Kick Ray: Kick ID %u FWCtx 0x%08.8x @ %d. (PID:%d, prio:%d, frame:%d, ext:0x%08x, int:0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(238, ROGUE_FW_GROUP_MAIN, 1),
+	  "Ray finished: Kick ID %u " },
+
+	{ ROGUE_FW_LOG_CREATESFID(1, ROGUE_FW_GROUP_MTS, 2),
+	  "Bg Task DM = %u, counted = %d" },
+	{ ROGUE_FW_LOG_CREATESFID(2, ROGUE_FW_GROUP_MTS, 1),
+	  "Bg Task complete DM = %u" },
+	{ ROGUE_FW_LOG_CREATESFID(3, ROGUE_FW_GROUP_MTS, 3),
+	  "Irq Task DM = %u, Breq = %d, SBIrq = 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(4, ROGUE_FW_GROUP_MTS, 1),
+	  "Irq Task complete DM = %u" },
+	{ ROGUE_FW_LOG_CREATESFID(5, ROGUE_FW_GROUP_MTS, 0),
+	  "Kick MTS Bg task DM=All" },
+	{ ROGUE_FW_LOG_CREATESFID(6, ROGUE_FW_GROUP_MTS, 1),
+	  "Kick MTS Irq task DM=%d" },
+	{ ROGUE_FW_LOG_CREATESFID(7, ROGUE_FW_GROUP_MTS, 2),
+	  "Ready queue debug DM = %u, celltype = %d" },
+	{ ROGUE_FW_LOG_CREATESFID(8, ROGUE_FW_GROUP_MTS, 2),
+	  "Ready-to-run debug DM = %u, item = 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(9, ROGUE_FW_GROUP_MTS, 3),
+	  "Client command header DM = %u, client CCB = 0x%x, cmd = 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(10, ROGUE_FW_GROUP_MTS, 3),
+	  "Ready-to-run debug OSid = %u, DM = %u, item = 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(11, ROGUE_FW_GROUP_MTS, 3),
+	  "Ready queue debug DM = %u, celltype = %d, OSid = %u" },
+	{ ROGUE_FW_LOG_CREATESFID(12, ROGUE_FW_GROUP_MTS, 3),
+	  "Bg Task DM = %u, counted = %d, OSid = %u" },
+	{ ROGUE_FW_LOG_CREATESFID(13, ROGUE_FW_GROUP_MTS, 1),
+	  "Bg Task complete DM Bitfield: %u" },
+	{ ROGUE_FW_LOG_CREATESFID(14, ROGUE_FW_GROUP_MTS, 0),
+	  "Irq Task complete." },
+	{ ROGUE_FW_LOG_CREATESFID(15, ROGUE_FW_GROUP_MTS, 7),
+	  "Discarded Command Type: %d OS ID = %d PID = %d context = 0x%08x cccb ROff = 0x%x, due to USC breakpoint hit by OS ID = %d PID = %d." },
+	{ ROGUE_FW_LOG_CREATESFID(16, ROGUE_FW_GROUP_MTS, 4),
+	  "KCCB Slot %u: DM=%u, Cmd=0x%08x, OSid=%u" },
+	{ ROGUE_FW_LOG_CREATESFID(17, ROGUE_FW_GROUP_MTS, 2),
+	  "KCCB Slot %u: Return value %u" },
+	{ ROGUE_FW_LOG_CREATESFID(18, ROGUE_FW_GROUP_MTS, 1),
+	  "Bg Task OSid = %u" },
+	{ ROGUE_FW_LOG_CREATESFID(19, ROGUE_FW_GROUP_MTS, 3),
+	  "KCCB Slot %u: Cmd=0x%08x, OSid=%u" },
+	{ ROGUE_FW_LOG_CREATESFID(20, ROGUE_FW_GROUP_MTS, 1),
+	  "Irq Task (EVENT_STATUS=0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(21, ROGUE_FW_GROUP_MTS, 2),
+	  "VZ sideband test, kicked with OSid=%u from MTS, OSid for test=%u" },
+
+	{ ROGUE_FW_LOG_CREATESFID(1, ROGUE_FW_GROUP_CLEANUP, 1),
+	  "FwCommonContext [0x%08x] cleaned" },
+	{ ROGUE_FW_LOG_CREATESFID(2, ROGUE_FW_GROUP_CLEANUP, 3),
+	  "FwCommonContext [0x%08x] is busy: ReadOffset = %d, WriteOffset = %d" },
+	{ ROGUE_FW_LOG_CREATESFID(3, ROGUE_FW_GROUP_CLEANUP, 2),
+	  "HWRTData [0x%08x] for DM=%d, received cleanup request" },
+	{ ROGUE_FW_LOG_CREATESFID(4, ROGUE_FW_GROUP_CLEANUP, 3),
+	  "HWRTData [0x%08x] HW Context cleaned for DM%u, executed commands = %d" },
+	{ ROGUE_FW_LOG_CREATESFID(5, ROGUE_FW_GROUP_CLEANUP, 2),
+	  "HWRTData [0x%08x] HW Context for DM%u is busy" },
+	{ ROGUE_FW_LOG_CREATESFID(6, ROGUE_FW_GROUP_CLEANUP, 2),
+	  "HWRTData [0x%08x] HW Context %u cleaned" },
+	{ ROGUE_FW_LOG_CREATESFID(7, ROGUE_FW_GROUP_CLEANUP, 1),
+	  "Freelist [0x%08x] cleaned" },
+	{ ROGUE_FW_LOG_CREATESFID(8, ROGUE_FW_GROUP_CLEANUP, 1),
+	  "ZSBuffer [0x%08x] cleaned" },
+	{ ROGUE_FW_LOG_CREATESFID(9, ROGUE_FW_GROUP_CLEANUP, 3),
+	  "ZSBuffer [0x%08x] is busy: submitted = %d, executed = %d" },
+	{ ROGUE_FW_LOG_CREATESFID(10, ROGUE_FW_GROUP_CLEANUP, 4),
+	  "HWRTData [0x%08x] HW Context for DM%u is busy: submitted = %d, executed = %d" },
+	{ ROGUE_FW_LOG_CREATESFID(11, ROGUE_FW_GROUP_CLEANUP, 2),
+	  "HW Ray Frame data [0x%08x] for DM=%d, received cleanup request" },
+	{ ROGUE_FW_LOG_CREATESFID(12, ROGUE_FW_GROUP_CLEANUP, 3),
+	  "HW Ray Frame Data [0x%08x] cleaned for DM%u, executed commands = %d" },
+	{ ROGUE_FW_LOG_CREATESFID(13, ROGUE_FW_GROUP_CLEANUP, 4),
+	  "HW Ray Frame Data [0x%08x] for DM%u is busy: submitted = %d, executed = %d" },
+	{ ROGUE_FW_LOG_CREATESFID(14, ROGUE_FW_GROUP_CLEANUP, 2),
+	  "HW Ray Frame Data [0x%08x] HW Context %u cleaned" },
+	{ ROGUE_FW_LOG_CREATESFID(15, ROGUE_FW_GROUP_CLEANUP, 1),
+	  "Discarding invalid cleanup request of type 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(16, ROGUE_FW_GROUP_CLEANUP, 1),
+	  "Received cleanup request for HWRTData [0x%08x]" },
+	{ ROGUE_FW_LOG_CREATESFID(17, ROGUE_FW_GROUP_CLEANUP, 3),
+	  "HWRTData [0x%08x] HW Context is busy: submitted = %d, executed = %d" },
+	{ ROGUE_FW_LOG_CREATESFID(18, ROGUE_FW_GROUP_CLEANUP, 3),
+	  "HWRTData [0x%08x] HW Context %u cleaned, executed commands = %d" },
+
+	{ ROGUE_FW_LOG_CREATESFID(1, ROGUE_FW_GROUP_CSW, 1),
+	  "CDM FWCtx 0x%08.8x needs resume" },
+	{ ROGUE_FW_LOG_CREATESFID(2, ROGUE_FW_GROUP_CSW, 3),
+	  "*** CDM FWCtx 0x%08.8x resume from snapshot buffer 0x%08x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(3, ROGUE_FW_GROUP_CSW, 1),
+	  "CDM FWCtx shared alloc size load 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(4, ROGUE_FW_GROUP_CSW, 0),
+	  "*** CDM FWCtx store complete" },
+	{ ROGUE_FW_LOG_CREATESFID(5, ROGUE_FW_GROUP_CSW, 0),
+	  "*** CDM FWCtx store start" },
+	{ ROGUE_FW_LOG_CREATESFID(6, ROGUE_FW_GROUP_CSW, 0),
+	  "CDM Soft Reset" },
+	{ ROGUE_FW_LOG_CREATESFID(7, ROGUE_FW_GROUP_CSW, 1),
+	  "3D FWCtx 0x%08.8x needs resume" },
+	{ ROGUE_FW_LOG_CREATESFID(8, ROGUE_FW_GROUP_CSW, 1),
+	  "*** 3D FWCtx 0x%08.8x resume" },
+	{ ROGUE_FW_LOG_CREATESFID(9, ROGUE_FW_GROUP_CSW, 0),
+	  "*** 3D context store complete" },
+	{ ROGUE_FW_LOG_CREATESFID(10, ROGUE_FW_GROUP_CSW, 3),
+	  "3D context store pipe state: 0x%08.8x 0x%08.8x 0x%08.8x" },
+	{ ROGUE_FW_LOG_CREATESFID(11, ROGUE_FW_GROUP_CSW, 0),
+	  "*** 3D context store start" },
+	{ ROGUE_FW_LOG_CREATESFID(12, ROGUE_FW_GROUP_CSW, 1),
+	  "*** 3D TQ FWCtx 0x%08.8x resume" },
+	{ ROGUE_FW_LOG_CREATESFID(13, ROGUE_FW_GROUP_CSW, 1),
+	  "TA FWCtx 0x%08.8x needs resume" },
+	{ ROGUE_FW_LOG_CREATESFID(14, ROGUE_FW_GROUP_CSW, 3),
+	  "*** TA FWCtx 0x%08.8x resume from snapshot buffer 0x%08x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(15, ROGUE_FW_GROUP_CSW, 2),
+	  "TA context shared alloc size store 0x%x, load 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(16, ROGUE_FW_GROUP_CSW, 0),
+	  "*** TA context store complete" },
+	{ ROGUE_FW_LOG_CREATESFID(17, ROGUE_FW_GROUP_CSW, 0),
+	  "*** TA context store start" },
+	{ ROGUE_FW_LOG_CREATESFID(18, ROGUE_FW_GROUP_CSW, 3),
+	  "Higher priority context scheduled for DM %u, old prio:%d, new prio:%d" },
+	{ ROGUE_FW_LOG_CREATESFID(19, ROGUE_FW_GROUP_CSW, 2),
+	  "Set FWCtx 0x%x priority to %u" },
+	{ ROGUE_FW_LOG_CREATESFID(20, ROGUE_FW_GROUP_CSW, 2),
+	  "3D context store pipe%d state: 0x%08.8x" },
+	{ ROGUE_FW_LOG_CREATESFID(21, ROGUE_FW_GROUP_CSW, 2),
+	  "3D context resume pipe%d state: 0x%08.8x" },
+	{ ROGUE_FW_LOG_CREATESFID(22, ROGUE_FW_GROUP_CSW, 1),
+	  "SHG FWCtx 0x%08.8x needs resume" },
+	{ ROGUE_FW_LOG_CREATESFID(23, ROGUE_FW_GROUP_CSW, 3),
+	  "*** SHG FWCtx 0x%08.8x resume from snapshot buffer 0x%08x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(24, ROGUE_FW_GROUP_CSW, 2),
+	  "SHG context shared alloc size store 0x%x, load 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(25, ROGUE_FW_GROUP_CSW, 0),
+	  "*** SHG context store complete" },
+	{ ROGUE_FW_LOG_CREATESFID(26, ROGUE_FW_GROUP_CSW, 0),
+	  "*** SHG context store start" },
+	{ ROGUE_FW_LOG_CREATESFID(27, ROGUE_FW_GROUP_CSW, 1),
+	  "Performing TA indirection, last used pipe %d" },
+	{ ROGUE_FW_LOG_CREATESFID(28, ROGUE_FW_GROUP_CSW, 0),
+	  "CDM context store hit ctrl stream terminate. Skip resume." },
+	{ ROGUE_FW_LOG_CREATESFID(29, ROGUE_FW_GROUP_CSW, 4),
+	  "*** CDM FWCtx 0x%08.8x resume from snapshot buffer 0x%08x%08x, shader state %u" },
+	{ ROGUE_FW_LOG_CREATESFID(30, ROGUE_FW_GROUP_CSW, 2),
+	  "TA PDS/USC state buffer flip (%d->%d)" },
+	{ ROGUE_FW_LOG_CREATESFID(31, ROGUE_FW_GROUP_CSW, 0),
+	  "TA context store hit BRN 52563: vertex store tasks outstanding" },
+	{ ROGUE_FW_LOG_CREATESFID(32, ROGUE_FW_GROUP_CSW, 1),
+	  "TA USC poll failed (USC vertex task count: %d)" },
+	{ ROGUE_FW_LOG_CREATESFID(33, ROGUE_FW_GROUP_CSW, 0),
+	  "TA context store deferred due to BRN 54141." },
+	{ ROGUE_FW_LOG_CREATESFID(34, ROGUE_FW_GROUP_CSW, 7),
+	  "Higher priority context scheduled for DM %u. Prios (OSid, OSid Prio, Context Prio): Current: %u, %u, %u New: %u, %u, %u" },
+	{ ROGUE_FW_LOG_CREATESFID(35, ROGUE_FW_GROUP_CSW, 0),
+	  "*** TDM context store start" },
+	{ ROGUE_FW_LOG_CREATESFID(36, ROGUE_FW_GROUP_CSW, 0),
+	  "*** TDM context store complete" },
+	{ ROGUE_FW_LOG_CREATESFID(37, ROGUE_FW_GROUP_CSW, 2),
+	  "TDM context needs resume, header [0x%08.8x, 0x%08.8x]" },
+	{ ROGUE_FW_LOG_CREATESFID(38, ROGUE_FW_GROUP_CSW, 8),
+	  "Higher priority context scheduled for DM %u. Prios (OSid, OSid Prio, Context Prio): Current: %u, %u, %u New: %u, %u, %u. Hard Context Switching: %u" },
+	{ ROGUE_FW_LOG_CREATESFID(39, ROGUE_FW_GROUP_CSW, 3),
+	  "3D context store pipe %2d (%2d) state: 0x%08.8x" },
+	{ ROGUE_FW_LOG_CREATESFID(40, ROGUE_FW_GROUP_CSW, 3),
+	  "3D context resume pipe %2d (%2d) state: 0x%08.8x" },
+	{ ROGUE_FW_LOG_CREATESFID(41, ROGUE_FW_GROUP_CSW, 1),
+	  "*** 3D context store start version %d (1=IPP_TILE, 2=ISP_TILE)" },
+	{ ROGUE_FW_LOG_CREATESFID(42, ROGUE_FW_GROUP_CSW, 3),
+	  "3D context store pipe%d state: 0x%08.8x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(43, ROGUE_FW_GROUP_CSW, 3),
+	  "3D context resume pipe%d state: 0x%08.8x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(44, ROGUE_FW_GROUP_CSW, 2),
+	  "3D context resume IPP state: 0x%08.8x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(45, ROGUE_FW_GROUP_CSW, 1),
+	  "All 3D pipes empty after ISP tile mode store! IPP_status: 0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(46, ROGUE_FW_GROUP_CSW, 3),
+	  "TDM context resume pipe%d state: 0x%08.8x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(47, ROGUE_FW_GROUP_CSW, 0),
+	  "*** 3D context store start version 4" },
+	{ ROGUE_FW_LOG_CREATESFID(48, ROGUE_FW_GROUP_CSW, 2),
+	  "Multicore context resume on DM%d active core mask 0x%04.4x" },
+	{ ROGUE_FW_LOG_CREATESFID(49, ROGUE_FW_GROUP_CSW, 2),
+	  "Multicore context store on DM%d active core mask 0x%04.4x" },
+	{ ROGUE_FW_LOG_CREATESFID(50, ROGUE_FW_GROUP_CSW, 5),
+	  "TDM context resume Core %d, pipe%d state: 0x%08.8x%08x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(51, ROGUE_FW_GROUP_CSW, 0),
+	  "*** RDM FWCtx store complete" },
+	{ ROGUE_FW_LOG_CREATESFID(52, ROGUE_FW_GROUP_CSW, 0),
+	  "*** RDM FWCtx store start" },
+	{ ROGUE_FW_LOG_CREATESFID(53, ROGUE_FW_GROUP_CSW, 1),
+	  "RDM FWCtx 0x%08.8x needs resume" },
+	{ ROGUE_FW_LOG_CREATESFID(54, ROGUE_FW_GROUP_CSW, 1),
+	  "RDM FWCtx 0x%08.8x resume" },
+
+	{ ROGUE_FW_LOG_CREATESFID(1, ROGUE_FW_GROUP_BIF, 3),
+	  "Activate MemCtx=0x%08x BIFreq=%d secure=%d" },
+	{ ROGUE_FW_LOG_CREATESFID(2, ROGUE_FW_GROUP_BIF, 1),
+	  "Deactivate MemCtx=0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(3, ROGUE_FW_GROUP_BIF, 1),
+	  "Alloc PC reg %d" },
+	{ ROGUE_FW_LOG_CREATESFID(4, ROGUE_FW_GROUP_BIF, 2),
+	  "Grab reg set %d refcount now %d" },
+	{ ROGUE_FW_LOG_CREATESFID(5, ROGUE_FW_GROUP_BIF, 2),
+	  "Ungrab reg set %d refcount now %d" },
+	{ ROGUE_FW_LOG_CREATESFID(6, ROGUE_FW_GROUP_BIF, 6),
+	  "Setup reg=%d BIFreq=%d, expect=0x%08x%08x, actual=0x%08x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(7, ROGUE_FW_GROUP_BIF, 2),
+	  "Trust enabled:%d, for BIFreq=%d" },
+	{ ROGUE_FW_LOG_CREATESFID(8, ROGUE_FW_GROUP_BIF, 9),
+	  "BIF Tiling Cfg %d base 0x%08x%08x len 0x%08x%08x enable %d stride %d --> 0x%08x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(9, ROGUE_FW_GROUP_BIF, 4),
+	  "Wrote the Value %d to OSID0, Cat Base %d, Register's contents are now 0x%08x 0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(10, ROGUE_FW_GROUP_BIF, 3),
+	  "Wrote the Value %d to OSID1, Context  %d, Register's contents are now 0x%04x" },
+	{ ROGUE_FW_LOG_CREATESFID(11, ROGUE_FW_GROUP_BIF, 7),
+	  "ui32OSid = %u, Catbase = %u, Reg Address = 0x%x, Reg index = %u, Bitshift index = %u, Val = 0x%08x%08x" }, \
+	{ ROGUE_FW_LOG_CREATESFID(12, ROGUE_FW_GROUP_BIF, 5),
+	  "Map GPU memory DevVAddr 0x%x%08x, Size %u, Context ID %u, BIFREQ %u" },
+	{ ROGUE_FW_LOG_CREATESFID(13, ROGUE_FW_GROUP_BIF, 1),
+	  "Unmap GPU memory (event status 0x%x)" },
+	{ ROGUE_FW_LOG_CREATESFID(14, ROGUE_FW_GROUP_BIF, 3),
+	  "Activate MemCtx=0x%08x DM=%d secure=%d" },
+	{ ROGUE_FW_LOG_CREATESFID(15, ROGUE_FW_GROUP_BIF, 6),
+	  "Setup reg=%d DM=%d, expect=0x%08x%08x, actual=0x%08x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(16, ROGUE_FW_GROUP_BIF, 4),
+	  "Map GPU memory DevVAddr 0x%x%08x, Size %u, Context ID %u" },
+	{ ROGUE_FW_LOG_CREATESFID(17, ROGUE_FW_GROUP_BIF, 2),
+	  "Trust enabled:%d, for DM=%d" },
+	{ ROGUE_FW_LOG_CREATESFID(18, ROGUE_FW_GROUP_BIF, 5),
+	  "Map GPU memory DevVAddr 0x%x%08x, Size %u, Context ID %u, DM %u" },
+	{ ROGUE_FW_LOG_CREATESFID(19, ROGUE_FW_GROUP_BIF, 6),
+	  "Setup register set=%d DM=%d, PC address=0x%08x%08x, OSid=%u, NewPCRegRequired=%d" },
+	{ ROGUE_FW_LOG_CREATESFID(20, ROGUE_FW_GROUP_BIF, 3),
+	  "Alloc PC set %d as register range [%u - %u]" },
+
+	{ ROGUE_FW_LOG_CREATESFID(1, ROGUE_FW_GROUP_MISC, 1),
+	  "GPIO write 0x%02x" },
+	{ ROGUE_FW_LOG_CREATESFID(2, ROGUE_FW_GROUP_MISC, 1),
+	  "GPIO read 0x%02x" },
+	{ ROGUE_FW_LOG_CREATESFID(3, ROGUE_FW_GROUP_MISC, 0),
+	  "GPIO enabled" },
+	{ ROGUE_FW_LOG_CREATESFID(4, ROGUE_FW_GROUP_MISC, 0),
+	  "GPIO disabled" },
+	{ ROGUE_FW_LOG_CREATESFID(5, ROGUE_FW_GROUP_MISC, 1),
+	  "GPIO status=%d (0=OK, 1=Disabled)" },
+	{ ROGUE_FW_LOG_CREATESFID(6, ROGUE_FW_GROUP_MISC, 2),
+	  "GPIO_AP: Read address=0x%02x (%d byte(s))" },
+	{ ROGUE_FW_LOG_CREATESFID(7, ROGUE_FW_GROUP_MISC, 2),
+	  "GPIO_AP: Write address=0x%02x (%d byte(s))" },
+	{ ROGUE_FW_LOG_CREATESFID(8, ROGUE_FW_GROUP_MISC, 0),
+	  "GPIO_AP timeout!" },
+	{ ROGUE_FW_LOG_CREATESFID(9, ROGUE_FW_GROUP_MISC, 1),
+	  "GPIO_AP error. GPIO status=%d (0=OK, 1=Disabled)" },
+	{ ROGUE_FW_LOG_CREATESFID(10, ROGUE_FW_GROUP_MISC, 1),
+	  "GPIO already read 0x%02x" },
+	{ ROGUE_FW_LOG_CREATESFID(11, ROGUE_FW_GROUP_MISC, 2),
+	  "SR: Check buffer %d available returned %d" },
+	{ ROGUE_FW_LOG_CREATESFID(12, ROGUE_FW_GROUP_MISC, 1),
+	  "SR: Waiting for buffer %d" },
+	{ ROGUE_FW_LOG_CREATESFID(13, ROGUE_FW_GROUP_MISC, 2),
+	  "SR: Timeout waiting for buffer %d (after %d ticks)" },
+	{ ROGUE_FW_LOG_CREATESFID(14, ROGUE_FW_GROUP_MISC, 2),
+	  "SR: Skip frame check for strip %d returned %d (0=No skip, 1=Skip frame)" },
+	{ ROGUE_FW_LOG_CREATESFID(15, ROGUE_FW_GROUP_MISC, 1),
+	  "SR: Skip remaining strip %d in frame" },
+	{ ROGUE_FW_LOG_CREATESFID(16, ROGUE_FW_GROUP_MISC, 1),
+	  "SR: Inform HW that strip %d is a new frame" },
+	{ ROGUE_FW_LOG_CREATESFID(17, ROGUE_FW_GROUP_MISC, 1),
+	  "SR: Timeout waiting for INTERRUPT_FRAME_SKIP (after %d ticks)" },
+	{ ROGUE_FW_LOG_CREATESFID(18, ROGUE_FW_GROUP_MISC, 1),
+	  "SR: Strip mode is %d" },
+	{ ROGUE_FW_LOG_CREATESFID(19, ROGUE_FW_GROUP_MISC, 1),
+	  "SR: Strip Render start (strip %d)" },
+	{ ROGUE_FW_LOG_CREATESFID(20, ROGUE_FW_GROUP_MISC, 1),
+	  "SR: Strip Render complete (buffer %d)" },
+	{ ROGUE_FW_LOG_CREATESFID(21, ROGUE_FW_GROUP_MISC, 1),
+	  "SR: Strip Render fault (buffer %d)" },
+	{ ROGUE_FW_LOG_CREATESFID(22, ROGUE_FW_GROUP_MISC, 1),
+	  "TRP state: %d" },
+	{ ROGUE_FW_LOG_CREATESFID(23, ROGUE_FW_GROUP_MISC, 1),
+	  "TRP failure: %d" },
+	{ ROGUE_FW_LOG_CREATESFID(24, ROGUE_FW_GROUP_MISC, 1),
+	  "SW TRP State: %d" },
+	{ ROGUE_FW_LOG_CREATESFID(25, ROGUE_FW_GROUP_MISC, 1),
+	  "SW TRP failure: %d" },
+	{ ROGUE_FW_LOG_CREATESFID(26, ROGUE_FW_GROUP_MISC, 1),
+	  "HW kick event (%u)" },
+	{ ROGUE_FW_LOG_CREATESFID(27, ROGUE_FW_GROUP_MISC, 4),
+	  "GPU core (%u/%u): checksum 0x%08x vs. 0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(28, ROGUE_FW_GROUP_MISC, 6),
+	  "GPU core (%u/%u), unit (%u,%u): checksum 0x%08x vs. 0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(29, ROGUE_FW_GROUP_MISC, 6),
+	  "HWR: Core%u, Register=0x%08x, OldValue=0x%08x%08x, CurrValue=0x%08x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(30, ROGUE_FW_GROUP_MISC, 4),
+	  "HWR: USC Core%u, ui32TotalSlotsUsedByDM=0x%08x, psDMHWCtl->ui32USCSlotsUsedByDM=0x%08x, bHWRNeeded=%u" },
+	{ ROGUE_FW_LOG_CREATESFID(31, ROGUE_FW_GROUP_MISC, 6),
+	  "HWR: USC Core%u, Register=0x%08x, OldValue=0x%08x%08x, CurrValue=0x%08x%08x" },
+
+	{ ROGUE_FW_LOG_CREATESFID(1, ROGUE_FW_GROUP_PM, 10),
+	  "ALIST%d SP = %u, MLIST%d SP = %u (VCE 0x%08x%08x, TE 0x%08x%08x, ALIST 0x%08x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(2, ROGUE_FW_GROUP_PM, 8),
+	  "Is TA: %d, finished: %d on HW %u (HWRTData = 0x%08x, MemCtx = 0x%08x). FL different between TA/3D: global:%d, local:%d, mmu:%d" },
+	{ ROGUE_FW_LOG_CREATESFID(3, ROGUE_FW_GROUP_PM, 14),
+	  "UFL-3D-Base: 0x%08x%08x (SP = %u, 4PB = %u, 4PT = %u), FL-3D-Base: 0x%08x%08x (SP = %u, 4PB = %u, 4PT = %u), MFL-3D-Base: 0x%08x%08x (SP = %u, 4PT = %u)" },
+	{ ROGUE_FW_LOG_CREATESFID(4, ROGUE_FW_GROUP_PM, 14),
+	  "UFL-TA-Base: 0x%08x%08x (SP = %u, 4PB = %u, 4PT = %u), FL-TA-Base: 0x%08x%08x (SP = %u, 4PB = %u, 4PT = %u), MFL-TA-Base: 0x%08x%08x (SP = %u, 4PT = %u)" },
+	{ ROGUE_FW_LOG_CREATESFID(5, ROGUE_FW_GROUP_PM, 5),
+	  "Freelist grow completed [0x%08x]: added pages 0x%08x, total pages 0x%08x, new DevVirtAddr 0x%08x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(6, ROGUE_FW_GROUP_PM, 1),
+	  "Grow for freelist ID=0x%08x denied by host" },
+	{ ROGUE_FW_LOG_CREATESFID(7, ROGUE_FW_GROUP_PM, 5),
+	  "Freelist update completed [0x%08x]: old total pages 0x%08x, new total pages 0x%08x, new DevVirtAddr 0x%08x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(8, ROGUE_FW_GROUP_PM, 1),
+	  "Reconstruction of freelist ID=0x%08x failed" },
+	{ ROGUE_FW_LOG_CREATESFID(9, ROGUE_FW_GROUP_PM, 2),
+	  "Ignored attempt to pause or unpause the DM while there is no relevant operation in progress (0-TA,1-3D): %d, operation(0-unpause, 1-pause): %d" },
+	{ ROGUE_FW_LOG_CREATESFID(10, ROGUE_FW_GROUP_PM, 2),
+	  "Force free 3D Context memory, FWCtx: 0x%08x, status(1:success, 0:fail): %d" },
+	{ ROGUE_FW_LOG_CREATESFID(11, ROGUE_FW_GROUP_PM, 1),
+	  "PM pause TA ALLOC: PM_PAGE_MANAGEOP set to 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(12, ROGUE_FW_GROUP_PM, 1),
+	  "PM unpause TA ALLOC: PM_PAGE_MANAGEOP set to 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(13, ROGUE_FW_GROUP_PM, 1),
+	  "PM pause 3D DALLOC: PM_PAGE_MANAGEOP set to 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(14, ROGUE_FW_GROUP_PM, 1),
+	  "PM unpause 3D DALLOC: PM_PAGE_MANAGEOP set to 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(15, ROGUE_FW_GROUP_PM, 1),
+	  "PM ALLOC/DALLOC change was not actioned: PM_PAGE_MANAGEOP_STATUS=0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(16, ROGUE_FW_GROUP_PM, 7),
+	  "Is TA: %d, finished: %d on HW %u (HWRTData = 0x%08x, MemCtx = 0x%08x). FL different between TA/3D: global:%d, local:%d" },
+	{ ROGUE_FW_LOG_CREATESFID(17, ROGUE_FW_GROUP_PM, 10),
+	  "UFL-3D-Base: 0x%08x%08x (SP = %u, 4PB = %u, 4PT = %u), FL-3D-Base: 0x%08x%08x (SP = %u, 4PB = %u, 4PT = %u)" },
+	{ ROGUE_FW_LOG_CREATESFID(18, ROGUE_FW_GROUP_PM, 10),
+	  "UFL-TA-Base: 0x%08x%08x (SP = %u, 4PB = %u, 4PT = %u), FL-TA-Base: 0x%08x%08x (SP = %u, 4PB = %u, 4PT = %u)" },
+	{ ROGUE_FW_LOG_CREATESFID(19, ROGUE_FW_GROUP_PM, 7),
+	  "Freelist update completed [0x%08x / FL State 0x%08x%08x]: old total pages 0x%08x, new total pages 0x%08x, new DevVirtAddr 0x%08x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(20, ROGUE_FW_GROUP_PM, 7),
+	  "Freelist update failed [0x%08x / FL State 0x%08x%08x]: old total pages 0x%08x, new total pages 0x%08x, new DevVirtAddr 0x%08x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(21, ROGUE_FW_GROUP_PM, 10),
+	  "UFL-3D-State-Base: 0x%08x%08x (SP = %u, 4PB = %u, 4PT = %u), FL-3D-State-Base: 0x%08x%08x (SP = %u, 4PB = %u, 4PT = %u)" },
+	{ ROGUE_FW_LOG_CREATESFID(22, ROGUE_FW_GROUP_PM, 10),
+	  "UFL-TA-State-Base: 0x%08x%08x (SP = %u, 4PB = %u, 4PT = %u), FL-TA-State-Base: 0x%08x%08x (SP = %u, 4PB = %u, 4PT = %u)" },
+	{ ROGUE_FW_LOG_CREATESFID(23, ROGUE_FW_GROUP_PM, 5),
+	  "Freelist 0x%08x base address from HW: 0x%02x%08x (expected value: 0x%02x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(24, ROGUE_FW_GROUP_PM, 5),
+	  "Analysis of FL grow: Pause=(%u,%u) Paused+Valid(%u,%u) PMStateBuffer=0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(25, ROGUE_FW_GROUP_PM, 5),
+	  "Attempt FL grow for FL: 0x%08x, new dev address: 0x%02x%08x, new page count: %u, new ready count: %u" },
+	{ ROGUE_FW_LOG_CREATESFID(26, ROGUE_FW_GROUP_PM, 5),
+	  "Deferring FL grow for non-loaded FL: 0x%08x, new dev address: 0x%02x%08x, new page count: %u, new ready count: %u" },
+	{ ROGUE_FW_LOG_CREATESFID(27, ROGUE_FW_GROUP_PM, 4),
+	  "Is GEOM: %d, finished: %d (HWRTData = 0x%08x, MemCtx = 0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(28, ROGUE_FW_GROUP_PM, 1),
+	  "3D Timeout Now for FWCtx 0x%08.8x" },
+	{ ROGUE_FW_LOG_CREATESFID(29, ROGUE_FW_GROUP_PM, 1),
+	  "GEOM PM Recycle for FWCtx 0x%08.8x" },
+	{ ROGUE_FW_LOG_CREATESFID(30, ROGUE_FW_GROUP_PM, 1),
+	  "PM running primary config (Core %d)" },
+	{ ROGUE_FW_LOG_CREATESFID(31, ROGUE_FW_GROUP_PM, 1),
+	  "PM running secondary config (Core %d)" },
+	{ ROGUE_FW_LOG_CREATESFID(32, ROGUE_FW_GROUP_PM, 1),
+	  "PM running tertiary config (Core %d)" },
+	{ ROGUE_FW_LOG_CREATESFID(33, ROGUE_FW_GROUP_PM, 1),
+	  "PM running quaternary config (Core %d)" },
+
+	{ ROGUE_FW_LOG_CREATESFID(1, ROGUE_FW_GROUP_RPM, 3),
+	  "Global link list dynamic page count: vertex 0x%x, varying 0x%x, node 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(2, ROGUE_FW_GROUP_RPM, 3),
+	  "Global link list static page count: vertex 0x%x, varying 0x%x, node 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(3, ROGUE_FW_GROUP_RPM, 0),
+	  "RPM request failed. Waiting for freelist grow." },
+	{ ROGUE_FW_LOG_CREATESFID(4, ROGUE_FW_GROUP_RPM, 0),
+	  "RPM request failed. Aborting the current frame." },
+	{ ROGUE_FW_LOG_CREATESFID(5, ROGUE_FW_GROUP_RPM, 1),
+	  "RPM waiting for pending grow on freelist 0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(6, ROGUE_FW_GROUP_RPM, 3),
+	  "Request freelist grow [0x%08x] current pages %d, grow size %d" },
+	{ ROGUE_FW_LOG_CREATESFID(7, ROGUE_FW_GROUP_RPM, 2),
+	  "Freelist load: SHF = 0x%08x, SHG = 0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(8, ROGUE_FW_GROUP_RPM, 2),
+	  "SHF FPL register: 0x%08x.0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(9, ROGUE_FW_GROUP_RPM, 2),
+	  "SHG FPL register: 0x%08x.0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(10, ROGUE_FW_GROUP_RPM, 5),
+	  "Kernel requested RPM grow on freelist (type %d) at 0x%08x from current size %d to new size %d, RPM restart: %d (1=Yes)" },
+	{ ROGUE_FW_LOG_CREATESFID(11, ROGUE_FW_GROUP_RPM, 0),
+	  "Restarting SHG" },
+	{ ROGUE_FW_LOG_CREATESFID(12, ROGUE_FW_GROUP_RPM, 0),
+	  "Grow failed, aborting the current frame." },
+	{ ROGUE_FW_LOG_CREATESFID(13, ROGUE_FW_GROUP_RPM, 1),
+	  "RPM abort complete on HWFrameData [0x%08x]." },
+	{ ROGUE_FW_LOG_CREATESFID(14, ROGUE_FW_GROUP_RPM, 1),
+	  "RPM freelist cleanup [0x%08x] requires abort to proceed." },
+	{ ROGUE_FW_LOG_CREATESFID(15, ROGUE_FW_GROUP_RPM, 2),
+	  "RPM page table base register: 0x%08x.0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(16, ROGUE_FW_GROUP_RPM, 0),
+	  "Issuing RPM abort." },
+	{ ROGUE_FW_LOG_CREATESFID(17, ROGUE_FW_GROUP_RPM, 0),
+	  "RPM OOM received but toggle bits indicate free pages available" },
+	{ ROGUE_FW_LOG_CREATESFID(18, ROGUE_FW_GROUP_RPM, 0),
+	  "RPM hardware timeout. Unable to process OOM event." },
+	{ ROGUE_FW_LOG_CREATESFID(19, ROGUE_FW_GROUP_RPM, 5),
+	  "SHF FL (0x%08x) load, FPL: 0x%08x.0x%08x, roff: 0x%08x, woff: 0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(20, ROGUE_FW_GROUP_RPM, 5),
+	  "SHG FL (0x%08x) load, FPL: 0x%08x.0x%08x, roff: 0x%08x, woff: 0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(21, ROGUE_FW_GROUP_RPM, 3),
+	  "SHF FL (0x%08x) store, roff: 0x%08x, woff: 0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(22, ROGUE_FW_GROUP_RPM, 3),
+	  "SHG FL (0x%08x) store, roff: 0x%08x, woff: 0x%08x" },
+
+	{ ROGUE_FW_LOG_CREATESFID(1, ROGUE_FW_GROUP_RTD, 2),
+	  "3D RTData 0x%08x finished on HW context %u" },
+	{ ROGUE_FW_LOG_CREATESFID(2, ROGUE_FW_GROUP_RTD, 2),
+	  "3D RTData 0x%08x ready on HW context %u" },
+	{ ROGUE_FW_LOG_CREATESFID(3, ROGUE_FW_GROUP_RTD, 4),
+	  "CONTEXT_PB_BASE set to 0x%x, FL different between TA/3D: local: %d, global: %d, mmu: %d" },
+	{ ROGUE_FW_LOG_CREATESFID(4, ROGUE_FW_GROUP_RTD, 2),
+	  "Loading VFP table 0x%08x%08x for 3D" },
+	{ ROGUE_FW_LOG_CREATESFID(5, ROGUE_FW_GROUP_RTD, 2),
+	  "Loading VFP table 0x%08x%08x for TA" },
+	{ ROGUE_FW_LOG_CREATESFID(6, ROGUE_FW_GROUP_RTD, 10),
+	  "Load Freelist 0x%x type: %d (0:local,1:global,2:mmu) for DM%d: TotalPMPages = %d, FL-addr = 0x%08x%08x, stacktop = 0x%08x%08x, Alloc Page Count = %u, Alloc MMU Page Count = %u" },
+	{ ROGUE_FW_LOG_CREATESFID(7, ROGUE_FW_GROUP_RTD, 0),
+	  "Perform VHEAP table store" },
+	{ ROGUE_FW_LOG_CREATESFID(8, ROGUE_FW_GROUP_RTD, 2),
+	  "RTData 0x%08x: found match in Context=%d: Load=No, Store=No" },
+	{ ROGUE_FW_LOG_CREATESFID(9, ROGUE_FW_GROUP_RTD, 2),
+	  "RTData 0x%08x: found NULL in Context=%d: Load=Yes, Store=No" },
+	{ ROGUE_FW_LOG_CREATESFID(10, ROGUE_FW_GROUP_RTD, 3),
+	  "RTData 0x%08x: found state 3D finished (0x%08x) in Context=%d: Load=Yes, Store=Yes" },
+	{ ROGUE_FW_LOG_CREATESFID(11, ROGUE_FW_GROUP_RTD, 3),
+	  "RTData 0x%08x: found state TA finished (0x%08x) in Context=%d: Load=Yes, Store=Yes" },
+	{ ROGUE_FW_LOG_CREATESFID(12, ROGUE_FW_GROUP_RTD, 5),
+	  "Loading stack-pointers for %d (0:MidTA,1:3D) on context %d, MLIST = 0x%08x, ALIST = 0x%08x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(13, ROGUE_FW_GROUP_RTD, 10),
+	  "Store Freelist 0x%x type: %d (0:local,1:global,2:mmu) for DM%d: TotalPMPages = %d, FL-addr = 0x%08x%08x, stacktop = 0x%08x%08x, Alloc Page Count = %u, Alloc MMU Page Count = %u" },
+	{ ROGUE_FW_LOG_CREATESFID(14, ROGUE_FW_GROUP_RTD, 2),
+	  "TA RTData 0x%08x finished on HW context %u" },
+	{ ROGUE_FW_LOG_CREATESFID(15, ROGUE_FW_GROUP_RTD, 2),
+	  "TA RTData 0x%08x loaded on HW context %u" },
+	{ ROGUE_FW_LOG_CREATESFID(16, ROGUE_FW_GROUP_RTD, 12),
+	  "Store Freelist 0x%x type: %d (0:local,1:global,2:mmu) for DM%d: FL Total Pages %u (max=%u,grow size=%u), FL-addr = 0x%08x%08x, stacktop = 0x%08x%08x, Alloc Page Count = %u, Alloc MMU Page Count = %u" },
+	{ ROGUE_FW_LOG_CREATESFID(17, ROGUE_FW_GROUP_RTD, 12),
+	  "Load  Freelist 0x%x type: %d (0:local,1:global,2:mmu) for DM%d: FL Total Pages %u (max=%u,grow size=%u), FL-addr = 0x%08x%08x, stacktop = 0x%08x%08x, Alloc Page Count = %u, Alloc MMU Page Count = %u" },
+	{ ROGUE_FW_LOG_CREATESFID(18, ROGUE_FW_GROUP_RTD, 1),
+	  "Freelist 0x%x RESET!!!!!!!!" },
+	{ ROGUE_FW_LOG_CREATESFID(19, ROGUE_FW_GROUP_RTD, 5),
+	  "Freelist 0x%x stacktop = 0x%08x%08x, Alloc Page Count = %u, Alloc MMU Page Count = %u" },
+	{ ROGUE_FW_LOG_CREATESFID(20, ROGUE_FW_GROUP_RTD, 3),
+	  "Request reconstruction of Freelist 0x%x type: %d (0:local,1:global,2:mmu) on HW context %u" },
+	{ ROGUE_FW_LOG_CREATESFID(21, ROGUE_FW_GROUP_RTD, 1),
+	  "Freelist reconstruction ACK from host (HWR state :%u)" },
+	{ ROGUE_FW_LOG_CREATESFID(22, ROGUE_FW_GROUP_RTD, 0),
+	  "Freelist reconstruction completed" },
+	{ ROGUE_FW_LOG_CREATESFID(23, ROGUE_FW_GROUP_RTD, 3),
+	  "TA RTData 0x%08x loaded on HW context %u HWRTDataNeedsLoading=%d" },
+	{ ROGUE_FW_LOG_CREATESFID(24, ROGUE_FW_GROUP_RTD, 3),
+	  "TE Region headers base 0x%08x%08x (RGNHDR Init: %d)" },
+	{ ROGUE_FW_LOG_CREATESFID(25, ROGUE_FW_GROUP_RTD, 8),
+	  "TA Buffers: FWCtx 0x%08x, RT 0x%08x, RTData 0x%08x, VHeap 0x%08x%08x, TPC 0x%08x%08x (MemCtx 0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(26, ROGUE_FW_GROUP_RTD, 2),
+	  "3D RTData 0x%08x loaded on HW context %u" },
+	{ ROGUE_FW_LOG_CREATESFID(27, ROGUE_FW_GROUP_RTD, 4),
+	  "3D Buffers: FWCtx 0x%08x, RT 0x%08x, RTData 0x%08x (MemCtx 0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(28, ROGUE_FW_GROUP_RTD, 2),
+	  "Restarting TA after partial render, HWRTData0State=0x%x, HWRTData1State=0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(29, ROGUE_FW_GROUP_RTD, 3),
+	  "CONTEXT_PB_BASE set to 0x%x, FL different between TA/3D: local: %d, global: %d" },
+	{ ROGUE_FW_LOG_CREATESFID(30, ROGUE_FW_GROUP_RTD, 12),
+	  "Store Freelist 0x%x type: %d (0:local,1:global) for PMDM%d: FL Total Pages %u (max=%u,grow size=%u), FL-addr = 0x%08x%08x, stacktop = 0x%08x%08x, Alloc Page Count = %u, Alloc MMU Page Count = %u" },
+	{ ROGUE_FW_LOG_CREATESFID(31, ROGUE_FW_GROUP_RTD, 12),
+	  "Load  Freelist 0x%x type: %d (0:local,1:global) for PMDM%d: FL Total Pages %u (max=%u,grow size=%u), FL-addr = 0x%08x%08x, stacktop = 0x%08x%08x, Alloc Page Count = %u, Alloc MMU Page Count = %u" },
+	{ ROGUE_FW_LOG_CREATESFID(32, ROGUE_FW_GROUP_RTD, 5),
+	  "3D Buffers: FWCtx 0x%08x, parent RT 0x%08x, RTData 0x%08x on ctx %d, (MemCtx 0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(33, ROGUE_FW_GROUP_RTD, 7),
+	  "TA Buffers: FWCtx 0x%08x, RTData 0x%08x, VHeap 0x%08x%08x, TPC 0x%08x%08x (MemCtx 0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(34, ROGUE_FW_GROUP_RTD, 4),
+	  "3D Buffers: FWCtx 0x%08x, RTData 0x%08x on ctx %d, (MemCtx 0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(35, ROGUE_FW_GROUP_RTD, 6),
+	  "Load  Freelist 0x%x type: %d (0:local,1:global) for PMDM%d: FL Total Pages %u (max=%u,grow size=%u)" },
+	{ ROGUE_FW_LOG_CREATESFID(36, ROGUE_FW_GROUP_RTD, 1),
+	  "TA RTData 0x%08x marked as killed." },
+	{ ROGUE_FW_LOG_CREATESFID(37, ROGUE_FW_GROUP_RTD, 1),
+	  "3D RTData 0x%08x marked as killed." },
+	{ ROGUE_FW_LOG_CREATESFID(38, ROGUE_FW_GROUP_RTD, 1),
+	  "RTData 0x%08x will be killed after TA restart." },
+	{ ROGUE_FW_LOG_CREATESFID(39, ROGUE_FW_GROUP_RTD, 3),
+	  "RTData 0x%08x Render State Buffer 0x%02x%08x will be reset." },
+	{ ROGUE_FW_LOG_CREATESFID(40, ROGUE_FW_GROUP_RTD, 3),
+	  "GEOM RTData 0x%08x using Render State Buffer 0x%02x%08x." },
+	{ ROGUE_FW_LOG_CREATESFID(41, ROGUE_FW_GROUP_RTD, 3),
+	  "FRAG RTData 0x%08x using Render State Buffer 0x%02x%08x." },
+
+	{ ROGUE_FW_LOG_CREATESFID(1, ROGUE_FW_GROUP_SPM, 0),
+	  "Force Z-Load for partial render" },
+	{ ROGUE_FW_LOG_CREATESFID(2, ROGUE_FW_GROUP_SPM, 0),
+	  "Force Z-Store for partial render" },
+	{ ROGUE_FW_LOG_CREATESFID(3, ROGUE_FW_GROUP_SPM, 1),
+	  "3D MemFree: Local FL 0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(4, ROGUE_FW_GROUP_SPM, 1),
+	  "3D MemFree: MMU FL 0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(5, ROGUE_FW_GROUP_SPM, 1),
+	  "3D MemFree: Global FL 0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(6, ROGUE_FW_GROUP_SPM, 6),
+	  "OOM TA/3D PR Check: [0x%08.8x] is 0x%08.8x requires 0x%08.8x, HardwareSync Fence [0x%08.8x] is 0x%08.8x requires 0x%08.8x" },
+	{ ROGUE_FW_LOG_CREATESFID(7, ROGUE_FW_GROUP_SPM, 3),
+	  "OOM TA_cmd=0x%08x, U-FL 0x%08x, N-FL 0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(8, ROGUE_FW_GROUP_SPM, 5),
+	  "OOM TA_cmd=0x%08x, OOM MMU:%d, U-FL 0x%08x, N-FL 0x%08x, MMU-FL 0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(9, ROGUE_FW_GROUP_SPM, 0),
+	  "Partial render avoided" },
+	{ ROGUE_FW_LOG_CREATESFID(10, ROGUE_FW_GROUP_SPM, 0),
+	  "Partial render discarded" },
+	{ ROGUE_FW_LOG_CREATESFID(11, ROGUE_FW_GROUP_SPM, 0),
+	  "Partial Render finished" },
+	{ ROGUE_FW_LOG_CREATESFID(12, ROGUE_FW_GROUP_SPM, 0),
+	  "SPM Owner = 3D-BG" },
+	{ ROGUE_FW_LOG_CREATESFID(13, ROGUE_FW_GROUP_SPM, 0),
+	  "SPM Owner = 3D-IRQ" },
+	{ ROGUE_FW_LOG_CREATESFID(14, ROGUE_FW_GROUP_SPM, 0),
+	  "SPM Owner = NONE" },
+	{ ROGUE_FW_LOG_CREATESFID(15, ROGUE_FW_GROUP_SPM, 0),
+	  "SPM Owner = TA-BG" },
+	{ ROGUE_FW_LOG_CREATESFID(16, ROGUE_FW_GROUP_SPM, 0),
+	  "SPM Owner = TA-IRQ" },
+	{ ROGUE_FW_LOG_CREATESFID(17, ROGUE_FW_GROUP_SPM, 2),
+	  "ZStore address 0x%08x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(18, ROGUE_FW_GROUP_SPM, 2),
+	  "SStore address 0x%08x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(19, ROGUE_FW_GROUP_SPM, 2),
+	  "ZLoad address 0x%08x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(20, ROGUE_FW_GROUP_SPM, 2),
+	  "SLoad address 0x%08x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(21, ROGUE_FW_GROUP_SPM, 0),
+	  "No deferred ZS Buffer provided" },
+	{ ROGUE_FW_LOG_CREATESFID(22, ROGUE_FW_GROUP_SPM, 1),
+	  "ZS Buffer successfully populated (ID=0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(23, ROGUE_FW_GROUP_SPM, 1),
+	  "No need to populate ZS Buffer (ID=0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(24, ROGUE_FW_GROUP_SPM, 1),
+	  "ZS Buffer successfully unpopulated (ID=0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(25, ROGUE_FW_GROUP_SPM, 1),
+	  "No need to unpopulate ZS Buffer (ID=0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(26, ROGUE_FW_GROUP_SPM, 1),
+	  "Send ZS-Buffer backing request to host (ID=0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(27, ROGUE_FW_GROUP_SPM, 1),
+	  "Send ZS-Buffer unbacking request to host (ID=0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(28, ROGUE_FW_GROUP_SPM, 1),
+	  "Don't send ZS-Buffer backing request. Previous request still pending (ID=0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(29, ROGUE_FW_GROUP_SPM, 1),
+	  "Don't send ZS-Buffer unbacking request. Previous request still pending (ID=0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(30, ROGUE_FW_GROUP_SPM, 1),
+	  "Partial Render waiting for ZBuffer to be backed (ID=0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(31, ROGUE_FW_GROUP_SPM, 1),
+	  "Partial Render waiting for SBuffer to be backed (ID=0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(32, ROGUE_FW_GROUP_SPM, 0),
+	  "SPM State = none" },
+	{ ROGUE_FW_LOG_CREATESFID(33, ROGUE_FW_GROUP_SPM, 0),
+	  "SPM State = PR blocked" },
+	{ ROGUE_FW_LOG_CREATESFID(34, ROGUE_FW_GROUP_SPM, 0),
+	  "SPM State = wait for grow" },
+	{ ROGUE_FW_LOG_CREATESFID(35, ROGUE_FW_GROUP_SPM, 0),
+	  "SPM State = wait for HW" },
+	{ ROGUE_FW_LOG_CREATESFID(36, ROGUE_FW_GROUP_SPM, 0),
+	  "SPM State = PR running" },
+	{ ROGUE_FW_LOG_CREATESFID(37, ROGUE_FW_GROUP_SPM, 0),
+	  "SPM State = PR avoided" },
+	{ ROGUE_FW_LOG_CREATESFID(38, ROGUE_FW_GROUP_SPM, 0),
+	  "SPM State = PR executed" },
+	{ ROGUE_FW_LOG_CREATESFID(39, ROGUE_FW_GROUP_SPM, 2),
+	  "3DMemFree matches freelist 0x%08x (FL type = %u)" },
+	{ ROGUE_FW_LOG_CREATESFID(40, ROGUE_FW_GROUP_SPM, 0),
+	  "Raise the 3DMemFreeDedected flag" },
+	{ ROGUE_FW_LOG_CREATESFID(41, ROGUE_FW_GROUP_SPM, 1),
+	  "Wait for pending grow on Freelist 0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(42, ROGUE_FW_GROUP_SPM, 1),
+	  "ZS Buffer failed to be populated (ID=0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(43, ROGUE_FW_GROUP_SPM, 5),
+	  "Grow update inconsistency: FL addr: 0x%02x%08x, curr pages: %u, ready: %u, new: %u" },
+	{ ROGUE_FW_LOG_CREATESFID(44, ROGUE_FW_GROUP_SPM, 4),
+	  "OOM: Resumed TA with ready pages, FL addr: 0x%02x%08x, current pages: %u, SP : %u" },
+	{ ROGUE_FW_LOG_CREATESFID(45, ROGUE_FW_GROUP_SPM, 5),
+	  "Received grow update, FL addr: 0x%02x%08x, current pages: %u, ready pages: %u, threshold: %u" },
+	{ ROGUE_FW_LOG_CREATESFID(46, ROGUE_FW_GROUP_SPM, 1),
+	  "No deferred partial render FW (Type=%d) Buffer provided" },
+	{ ROGUE_FW_LOG_CREATESFID(47, ROGUE_FW_GROUP_SPM, 1),
+	  "No need to populate PR Buffer (ID=0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(48, ROGUE_FW_GROUP_SPM, 1),
+	  "No need to unpopulate PR Buffer (ID=0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(49, ROGUE_FW_GROUP_SPM, 1),
+	  "Send PR Buffer backing request to host (ID=0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(50, ROGUE_FW_GROUP_SPM, 1),
+	  "Send PR Buffer unbacking request to host (ID=0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(51, ROGUE_FW_GROUP_SPM, 1),
+	  "Don't send PR Buffer backing request. Previous request still pending (ID=0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(52, ROGUE_FW_GROUP_SPM, 1),
+	  "Don't send PR Buffer unbacking request. Previous request still pending (ID=0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(53, ROGUE_FW_GROUP_SPM, 2),
+	  "Partial Render waiting for Buffer %d type to be backed (ID=0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(54, ROGUE_FW_GROUP_SPM, 4),
+	  "Received grow update, FL addr: 0x%02x%08x, new pages: %u, ready pages: %u" },
+	{ ROGUE_FW_LOG_CREATESFID(66, ROGUE_FW_GROUP_SPM, 3),
+	  "OOM TA/3D PR Check: [0x%08.8x] is 0x%08.8x requires 0x%08.8x" },
+	{ ROGUE_FW_LOG_CREATESFID(67, ROGUE_FW_GROUP_SPM, 3),
+	  "OOM: Resumed TA with ready pages, FL addr: 0x%02x%08x, current pages: %u" },
+	{ ROGUE_FW_LOG_CREATESFID(68, ROGUE_FW_GROUP_SPM, 3),
+	  "OOM TA/3D PR deadlock unblocked reordering DM%d runlist head from Context 0x%08x to 0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(69, ROGUE_FW_GROUP_SPM, 0),
+	  "SPM State = PR force free" },
+
+	{ ROGUE_FW_LOG_CREATESFID(1, ROGUE_FW_GROUP_POW, 4),
+	  "Check Pow state DM%d int: 0x%x, ext: 0x%x, pow flags: 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(2, ROGUE_FW_GROUP_POW, 3),
+	  "GPU idle (might be powered down). Pow state int: 0x%x, ext: 0x%x, flags: 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(3, ROGUE_FW_GROUP_POW, 3),
+	  "OS requested pow off (forced = %d), DM%d, pow flags: 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(4, ROGUE_FW_GROUP_POW, 4),
+	  "Initiate powoff query. Inactive DMs: %d %d %d %d" },
+	{ ROGUE_FW_LOG_CREATESFID(5, ROGUE_FW_GROUP_POW, 2),
+	  "Any RD-DM pending? %d, Any RD-DM Active? %d" },
+	{ ROGUE_FW_LOG_CREATESFID(6, ROGUE_FW_GROUP_POW, 3),
+	  "GPU ready to be powered down. Pow state int: 0x%x, ext: 0x%x, flags: 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(7, ROGUE_FW_GROUP_POW, 2),
+	  "HW Request On(1)/Off(0): %d, Units: 0x%08.8x" },
+	{ ROGUE_FW_LOG_CREATESFID(8, ROGUE_FW_GROUP_POW, 2),
+	  "Request to change num of dusts to %d (Power flags=%d)" },
+	{ ROGUE_FW_LOG_CREATESFID(9, ROGUE_FW_GROUP_POW, 2),
+	  "Changing number of dusts from %d to %d" },
+	{ ROGUE_FW_LOG_CREATESFID(11, ROGUE_FW_GROUP_POW, 0),
+	  "Sidekick init" },
+	{ ROGUE_FW_LOG_CREATESFID(12, ROGUE_FW_GROUP_POW, 1),
+	  "Rascal+Dusts init (# dusts mask: 0x%x)" },
+	{ ROGUE_FW_LOG_CREATESFID(13, ROGUE_FW_GROUP_POW, 0),
+	  "Initiate powoff query for RD-DMs." },
+	{ ROGUE_FW_LOG_CREATESFID(14, ROGUE_FW_GROUP_POW, 0),
+	  "Initiate powoff query for TLA-DM." },
+	{ ROGUE_FW_LOG_CREATESFID(15, ROGUE_FW_GROUP_POW, 2),
+	  "Any RD-DM pending? %d, Any RD-DM Active? %d" },
+	{ ROGUE_FW_LOG_CREATESFID(16, ROGUE_FW_GROUP_POW, 2),
+	  "TLA-DM pending? %d, TLA-DM Active? %d" },
+	{ ROGUE_FW_LOG_CREATESFID(17, ROGUE_FW_GROUP_POW, 1),
+	  "Request power up due to BRN37270. Pow stat int: 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(18, ROGUE_FW_GROUP_POW, 3),
+	  "Cancel power off request int: 0x%x, ext: 0x%x, pow flags: 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(19, ROGUE_FW_GROUP_POW, 1),
+	  "OS requested forced IDLE, pow flags: 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(20, ROGUE_FW_GROUP_POW, 1),
+	  "OS cancelled forced IDLE, pow flags: 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(21, ROGUE_FW_GROUP_POW, 3),
+	  "Idle timer start. Pow state int: 0x%x, ext: 0x%x, flags: 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(22, ROGUE_FW_GROUP_POW, 3),
+	  "Cancel idle timer. Pow state int: 0x%x, ext: 0x%x, flags: 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(23, ROGUE_FW_GROUP_POW, 2),
+	  "Active PM latency set to %dms. Core clock: %d Hz" },
+	{ ROGUE_FW_LOG_CREATESFID(24, ROGUE_FW_GROUP_POW, 2),
+	  "Compute cluster mask change to 0x%x, %d dusts powered." },
+	{ ROGUE_FW_LOG_CREATESFID(25, ROGUE_FW_GROUP_POW, 0),
+	  "Null command executed, repeating initiate powoff query for RD-DMs." },
+	{ ROGUE_FW_LOG_CREATESFID(26, ROGUE_FW_GROUP_POW, 1),
+	  "Power monitor: Estimate of dynamic energy %u" },
+	{ ROGUE_FW_LOG_CREATESFID(27, ROGUE_FW_GROUP_POW, 3),
+	  "Check Pow state: Int: 0x%x, Ext: 0x%x, Pow flags: 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(28, ROGUE_FW_GROUP_POW, 2),
+	  "Proactive DVFS: New deadline, time = 0x%08x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(29, ROGUE_FW_GROUP_POW, 2),
+	  "Proactive DVFS: New workload, cycles = 0x%08x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(30, ROGUE_FW_GROUP_POW, 1),
+	  "Proactive DVFS: Proactive frequency calculated = %u" },
+	{ ROGUE_FW_LOG_CREATESFID(31, ROGUE_FW_GROUP_POW, 1),
+	  "Proactive DVFS: Reactive utilisation = %u percent" },
+	{ ROGUE_FW_LOG_CREATESFID(32, ROGUE_FW_GROUP_POW, 2),
+	  "Proactive DVFS: Reactive frequency calculated = %u.%u" },
+	{ ROGUE_FW_LOG_CREATESFID(33, ROGUE_FW_GROUP_POW, 1),
+	  "Proactive DVFS: OPP Point Sent = 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(34, ROGUE_FW_GROUP_POW, 2),
+	  "Proactive DVFS: Deadline removed = 0x%08x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(35, ROGUE_FW_GROUP_POW, 2),
+	  "Proactive DVFS: Workload removed = 0x%08x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(36, ROGUE_FW_GROUP_POW, 1),
+	  "Proactive DVFS: Throttle to a maximum = 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(37, ROGUE_FW_GROUP_POW, 0),
+	  "Proactive DVFS: Failed to pass OPP point via GPIO." },
+	{ ROGUE_FW_LOG_CREATESFID(38, ROGUE_FW_GROUP_POW, 0),
+	  "Proactive DVFS: Invalid node passed to function." },
+	{ ROGUE_FW_LOG_CREATESFID(39, ROGUE_FW_GROUP_POW, 1),
+	  "Proactive DVFS: Guest OS attempted to do a privileged action. OSid = %u" },
+	{ ROGUE_FW_LOG_CREATESFID(40, ROGUE_FW_GROUP_POW, 1),
+	  "Proactive DVFS: Unprofiled work started. Total unprofiled work present: %u" },
+	{ ROGUE_FW_LOG_CREATESFID(41, ROGUE_FW_GROUP_POW, 1),
+	  "Proactive DVFS: Unprofiled work finished. Total unprofiled work present: %u" },
+	{ ROGUE_FW_LOG_CREATESFID(42, ROGUE_FW_GROUP_POW, 0),
+	  "Proactive DVFS: Disabled: Not enabled by host." },
+	{ ROGUE_FW_LOG_CREATESFID(43, ROGUE_FW_GROUP_POW, 2),
+	  "HW Request Completed(1)/Aborted(0): %d, Ticks: %d" },
+	{ ROGUE_FW_LOG_CREATESFID(44, ROGUE_FW_GROUP_POW, 1),
+	  "Allowed number of dusts is %d due to BRN59042." },
+	{ ROGUE_FW_LOG_CREATESFID(45, ROGUE_FW_GROUP_POW, 3),
+	  "Host timed out while waiting for a forced idle state. Pow state int: 0x%x, ext: 0x%x, flags: 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(46, ROGUE_FW_GROUP_POW, 5),
+	  "Check Pow state: Int: 0x%x, Ext: 0x%x, Pow flags: 0x%x, Fence Counters: Check: %u - Update: %u" },
+	{ ROGUE_FW_LOG_CREATESFID(47, ROGUE_FW_GROUP_POW, 2),
+	  "Proactive DVFS: OPP Point Sent = 0x%x, Success = 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(48, ROGUE_FW_GROUP_POW, 0),
+	  "Proactive DVFS: GPU transitioned to idle" },
+	{ ROGUE_FW_LOG_CREATESFID(49, ROGUE_FW_GROUP_POW, 0),
+	  "Proactive DVFS: GPU transitioned to active" },
+	{ ROGUE_FW_LOG_CREATESFID(50, ROGUE_FW_GROUP_POW, 1),
+	  "Power counter dumping: Data truncated writing register %u. Buffer too small." },
+	{ ROGUE_FW_LOG_CREATESFID(51, ROGUE_FW_GROUP_POW, 0),
+	  "Power controller returned ABORT for last request so retrying." },
+	{ ROGUE_FW_LOG_CREATESFID(52, ROGUE_FW_GROUP_POW, 2),
+	  "Discarding invalid power request: type 0x%x, DM %u" },
+	{ ROGUE_FW_LOG_CREATESFID(53, ROGUE_FW_GROUP_POW, 2),
+	  "Detected attempt to cancel forced idle while not forced idle (pow state 0x%x, pow flags 0x%x)" },
+	{ ROGUE_FW_LOG_CREATESFID(54, ROGUE_FW_GROUP_POW, 2),
+	  "Detected attempt to force power off while not forced idle (pow state 0x%x, pow flags 0x%x)" },
+	{ ROGUE_FW_LOG_CREATESFID(55, ROGUE_FW_GROUP_POW, 1),
+	  "Detected attempt to change dust count while not forced idle (pow state 0x%x)" },
+	{ ROGUE_FW_LOG_CREATESFID(56, ROGUE_FW_GROUP_POW, 3),
+	  "Power monitor: Type = %d (0 = power, 1 = energy), Estimate result = 0x%08x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(57, ROGUE_FW_GROUP_POW, 2),
+	  "Conflicting clock frequency range: OPP min = %u, max = %u" },
+	{ ROGUE_FW_LOG_CREATESFID(58, ROGUE_FW_GROUP_POW, 1),
+	  "Proactive DVFS: Set floor to a minimum = 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(59, ROGUE_FW_GROUP_POW, 2),
+	  "OS requested pow off (forced = %d), pow flags: 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(60, ROGUE_FW_GROUP_POW, 1),
+	  "Discarding invalid power request: type 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(61, ROGUE_FW_GROUP_POW, 3),
+	  "Request to change SPU power state mask from 0x%x to 0x%x. Pow flags: 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(62, ROGUE_FW_GROUP_POW, 2),
+	  "Changing SPU power state mask from 0x%x to 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(63, ROGUE_FW_GROUP_POW, 1),
+	  "Detected attempt to change SPU power state mask while not forced idle (pow state 0x%x)" },
+	{ ROGUE_FW_LOG_CREATESFID(64, ROGUE_FW_GROUP_POW, 1),
+	  "Invalid SPU power mask 0x%x! Changing to 1" },
+	{ ROGUE_FW_LOG_CREATESFID(65, ROGUE_FW_GROUP_POW, 2),
+	  "Proactive DVFS: Send OPP %u with clock divider value %u" },
+	{ ROGUE_FW_LOG_CREATESFID(66, ROGUE_FW_GROUP_POW, 0),
+	  "PPA block started in perf validation mode." },
+	{ ROGUE_FW_LOG_CREATESFID(67, ROGUE_FW_GROUP_POW, 1),
+	  "Reset PPA block state %u (1=reset, 0=recalculate)." },
+	{ ROGUE_FW_LOG_CREATESFID(68, ROGUE_FW_GROUP_POW, 1),
+	  "Power controller returned ABORT for Core-%d last request so retrying." },
+	{ ROGUE_FW_LOG_CREATESFID(69, ROGUE_FW_GROUP_POW, 3),
+	  "HW Request On(1)/Off(0): %d, Units: 0x%08x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(70, ROGUE_FW_GROUP_POW, 5),
+	  "Request to change SPU power state mask from 0x%x to 0x%x and RAC from 0x%x to 0x%x. Pow flags: 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(71, ROGUE_FW_GROUP_POW, 4),
+	  "Changing SPU power state mask from 0x%x to 0x%x and RAC from 0x%x to 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(72, ROGUE_FW_GROUP_POW, 2),
+	  "RAC pending? %d, RAC Active? %d" },
+	{ ROGUE_FW_LOG_CREATESFID(73, ROGUE_FW_GROUP_POW, 0),
+	  "Initiate powoff query for RAC." },
+
+	{ ROGUE_FW_LOG_CREATESFID(1, ROGUE_FW_GROUP_HWR, 2),
+	  "Lockup detected on DM%d, FWCtx: 0x%08.8x" },
+	{ ROGUE_FW_LOG_CREATESFID(2, ROGUE_FW_GROUP_HWR, 3),
+	  "Reset fw state for DM%d, FWCtx: 0x%08.8x, MemCtx: 0x%08.8x" },
+	{ ROGUE_FW_LOG_CREATESFID(3, ROGUE_FW_GROUP_HWR, 0),
+	  "Reset HW" },
+	{ ROGUE_FW_LOG_CREATESFID(4, ROGUE_FW_GROUP_HWR, 0),
+	  "Lockup recovered." },
+	{ ROGUE_FW_LOG_CREATESFID(5, ROGUE_FW_GROUP_HWR, 2),
+	  "Lock-up DM%d FWCtx: 0x%08.8x" },
+	{ ROGUE_FW_LOG_CREATESFID(6, ROGUE_FW_GROUP_HWR, 4),
+	  "Lockup detected: GLB(%d->%d), PER-DM(0x%08x->0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(7, ROGUE_FW_GROUP_HWR, 3),
+	  "Early fault detection: GLB(%d->%d), PER-DM(0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(8, ROGUE_FW_GROUP_HWR, 3),
+	  "Hold scheduling due lockup: GLB(%d), PER-DM(0x%08x->0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(9, ROGUE_FW_GROUP_HWR, 4),
+	  "False lockup detected: GLB(%d->%d), PER-DM(0x%08x->0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(10, ROGUE_FW_GROUP_HWR, 4),
+	  "BRN37729: GLB(%d->%d), PER-DM(0x%08x->0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(11, ROGUE_FW_GROUP_HWR, 3),
+	  "Freelists reconstructed: GLB(%d->%d), PER-DM(0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(12, ROGUE_FW_GROUP_HWR, 4),
+	  "Reconstructing freelists: %u (0-No, 1-Yes): GLB(%d->%d), PER-DM(0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(13, ROGUE_FW_GROUP_HWR, 3),
+	  "HW poll %u (0-Unset 1-Set) failed (reg:0x%08x val:0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(14, ROGUE_FW_GROUP_HWR, 2),
+	  "Discarded cmd on DM%u FWCtx=0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(15, ROGUE_FW_GROUP_HWR, 6),
+	  "Discarded cmd on DM%u (reason=%u) HWRTData=0x%08x (st: %d), FWCtx 0x%08x @ %d" },
+	{ ROGUE_FW_LOG_CREATESFID(16, ROGUE_FW_GROUP_HWR, 2),
+	  "PM fence WA could not be applied, Valid TA Setup: %d, RD powered off: %d" },
+	{ ROGUE_FW_LOG_CREATESFID(17, ROGUE_FW_GROUP_HWR, 5),
+	  "FL snapshot RTD 0x%08.8x - local (0x%08.8x): %d, global (0x%08.8x): %d" },
+	{ ROGUE_FW_LOG_CREATESFID(18, ROGUE_FW_GROUP_HWR, 8),
+	  "FL check RTD 0x%08.8x, discard: %d - local (0x%08.8x): s%d?=c%d, global (0x%08.8x): s%d?=c%d" },
+	{ ROGUE_FW_LOG_CREATESFID(19, ROGUE_FW_GROUP_HWR, 2),
+	  "FL reconstruction 0x%08.8x c%d" },
+	{ ROGUE_FW_LOG_CREATESFID(20, ROGUE_FW_GROUP_HWR, 3),
+	  "3D check: missing TA FWCtx 0x%08.8x @ %d, RTD 0x%08x." },
+	{ ROGUE_FW_LOG_CREATESFID(21, ROGUE_FW_GROUP_HWR, 2),
+	  "Reset HW (mmu:%d, extmem: %d)" },
+	{ ROGUE_FW_LOG_CREATESFID(22, ROGUE_FW_GROUP_HWR, 4),
+	  "Zero TA caches for FWCtx: 0x%08.8x (TPC addr: 0x%08x%08x, size: %d bytes)" },
+	{ ROGUE_FW_LOG_CREATESFID(23, ROGUE_FW_GROUP_HWR, 2),
+	  "Recovery DM%u: Freelists reconstructed. New R-Flags=0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(24, ROGUE_FW_GROUP_HWR, 5),
+	  "Recovery DM%u: FWCtx 0x%08x skipped to command @ %u. PR=%u. New R-Flags=0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(25, ROGUE_FW_GROUP_HWR, 1),
+	  "Recovery DM%u: DM fully recovered" },
+	{ ROGUE_FW_LOG_CREATESFID(26, ROGUE_FW_GROUP_HWR, 2),
+	  "DM%u: Hold scheduling due to R-Flag = 0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(27, ROGUE_FW_GROUP_HWR, 0),
+	  "Analysis: Need freelist reconstruction" },
+	{ ROGUE_FW_LOG_CREATESFID(28, ROGUE_FW_GROUP_HWR, 2),
+	  "Analysis DM%u: Lockup FWCtx: 0x%08.8x. Need to skip to next command" },
+	{ ROGUE_FW_LOG_CREATESFID(29, ROGUE_FW_GROUP_HWR, 2),
+	  "Analysis DM%u: Lockup while TA is OOM FWCtx: 0x%08.8x. Need to skip to next command" },
+	{ ROGUE_FW_LOG_CREATESFID(30, ROGUE_FW_GROUP_HWR, 2),
+	  "Analysis DM%u: Lockup while partial render FWCtx: 0x%08.8x. Need PR cleanup" },
+	{ ROGUE_FW_LOG_CREATESFID(31, ROGUE_FW_GROUP_HWR, 0),
+	  "GPU has locked up" },
+	{ ROGUE_FW_LOG_CREATESFID(32, ROGUE_FW_GROUP_HWR, 1),
+	  "DM%u ready for HWR" },
+	{ ROGUE_FW_LOG_CREATESFID(33, ROGUE_FW_GROUP_HWR, 2),
+	  "Recovery DM%u: Updated Recovery counter. New R-Flags=0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(34, ROGUE_FW_GROUP_HWR, 1),
+	  "Analysis: BRN37729 detected, reset TA and re-kicked 0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(35, ROGUE_FW_GROUP_HWR, 1),
+	  "DM%u timed out" },
+	{ ROGUE_FW_LOG_CREATESFID(36, ROGUE_FW_GROUP_HWR, 1),
+	  "RGX_CR_EVENT_STATUS=0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(37, ROGUE_FW_GROUP_HWR, 2),
+	  "DM%u lockup falsely detected, R-Flags=0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(38, ROGUE_FW_GROUP_HWR, 0),
+	  "GPU has overrun its deadline" },
+	{ ROGUE_FW_LOG_CREATESFID(39, ROGUE_FW_GROUP_HWR, 0),
+	  "GPU has failed a poll" },
+	{ ROGUE_FW_LOG_CREATESFID(40, ROGUE_FW_GROUP_HWR, 2),
+	  "RGX DM%u phase count=0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(41, ROGUE_FW_GROUP_HWR, 2),
+	  "Reset HW (loop:%d, poll failures: 0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(42, ROGUE_FW_GROUP_HWR, 1),
+	  "MMU fault event: 0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(43, ROGUE_FW_GROUP_HWR, 1),
+	  "BIF1 page fault detected (Bank1 MMU Status: 0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(44, ROGUE_FW_GROUP_HWR, 1),
+	  "Fast CRC Failed. Proceeding to full register checking (DM: %u)." },
+	{ ROGUE_FW_LOG_CREATESFID(45, ROGUE_FW_GROUP_HWR, 2),
+	  "Meta MMU page fault detected (Meta MMU Status: 0x%08x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(46, ROGUE_FW_GROUP_HWR, 2),
+	  "Fast CRC Check result for DM%u is HWRNeeded=%u" },
+	{ ROGUE_FW_LOG_CREATESFID(47, ROGUE_FW_GROUP_HWR, 2),
+	  "Full Signature Check result for DM%u is HWRNeeded=%u" },
+	{ ROGUE_FW_LOG_CREATESFID(48, ROGUE_FW_GROUP_HWR, 3),
+	  "Final result for DM%u is HWRNeeded=%u with HWRChecksToGo=%u" },
+	{ ROGUE_FW_LOG_CREATESFID(49, ROGUE_FW_GROUP_HWR, 3),
+	  "USC Slots result for DM%u is HWRNeeded=%u USCSlotsUsedByDM=%d" },
+	{ ROGUE_FW_LOG_CREATESFID(50, ROGUE_FW_GROUP_HWR, 2),
+	  "Deadline counter for DM%u is HWRDeadline=%u" },
+	{ ROGUE_FW_LOG_CREATESFID(51, ROGUE_FW_GROUP_HWR, 1),
+	  "Holding Scheduling on OSid %u due to pending freelist reconstruction" },
+	{ ROGUE_FW_LOG_CREATESFID(52, ROGUE_FW_GROUP_HWR, 2),
+	  "Requesting reconstruction for freelist 0x%x (ID=%d)" },
+	{ ROGUE_FW_LOG_CREATESFID(53, ROGUE_FW_GROUP_HWR, 1),
+	  "Reconstruction of freelist ID=%d complete" },
+	{ ROGUE_FW_LOG_CREATESFID(54, ROGUE_FW_GROUP_HWR, 4),
+	  "Reconstruction needed for freelist 0x%x (ID=%d) type: %d (0:local,1:global,2:mmu) on HW context %u" },
+	{ ROGUE_FW_LOG_CREATESFID(55, ROGUE_FW_GROUP_HWR, 1),
+	  "Reconstruction of freelist ID=%d failed" },
+	{ ROGUE_FW_LOG_CREATESFID(56, ROGUE_FW_GROUP_HWR, 4),
+	  "Restricting PDS Tasks to help other stalling DMs (RunningMask=0x%02x, StallingMask=0x%02x, PDS_CTRL=0x%08x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(57, ROGUE_FW_GROUP_HWR, 4),
+	  "Unrestricting PDS Tasks again (RunningMask=0x%02x, StallingMask=0x%02x, PDS_CTRL=0x%08x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(58, ROGUE_FW_GROUP_HWR, 2),
+	  "USC slots: %u used by DM%u" },
+	{ ROGUE_FW_LOG_CREATESFID(59, ROGUE_FW_GROUP_HWR, 1),
+	  "USC slots: %u empty" },
+	{ ROGUE_FW_LOG_CREATESFID(60, ROGUE_FW_GROUP_HWR, 5),
+	  "HCS DM%d's Context Switch failed to meet deadline. Current time: 0x%08x%08x, deadline: 0x%08x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(61, ROGUE_FW_GROUP_HWR, 1),
+	  "Begin hardware reset (HWR Counter=%d)" },
+	{ ROGUE_FW_LOG_CREATESFID(62, ROGUE_FW_GROUP_HWR, 1),
+	  "Finished hardware reset (HWR Counter=%d)" },
+	{ ROGUE_FW_LOG_CREATESFID(63, ROGUE_FW_GROUP_HWR, 2),
+	  "Holding Scheduling on DM %u for OSid %u due to pending freelist reconstruction" },
+	{ ROGUE_FW_LOG_CREATESFID(64, ROGUE_FW_GROUP_HWR, 5),
+	  "User Mode Queue ROff reset: FWCtx 0x%08.8x, queue: 0x%08x%08x (Roff = %u becomes StreamStartOffset = %u)" },
+	{ ROGUE_FW_LOG_CREATESFID(65, ROGUE_FW_GROUP_HWR, 4),
+	  "Reconstruction needed for freelist 0x%x (ID=%d) type: %d (0:local,1:global) on HW context %u" },
+	{ ROGUE_FW_LOG_CREATESFID(66, ROGUE_FW_GROUP_HWR, 3),
+	  "Mips page fault detected (BadVAddr: 0x%08x, EntryLo0: 0x%08x, EntryLo1: 0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(67, ROGUE_FW_GROUP_HWR, 1),
+	  "At least one other DM is running okay so DM%u will get another chance" },
+	{ ROGUE_FW_LOG_CREATESFID(68, ROGUE_FW_GROUP_HWR, 2),
+	  "Reconstructing in FW, FL: 0x%x (ID=%d)" },
+	{ ROGUE_FW_LOG_CREATESFID(69, ROGUE_FW_GROUP_HWR, 4),
+	  "Zero RTC for FWCtx: 0x%08.8x (RTC addr: 0x%08x%08x, size: %d bytes)" },
+	{ ROGUE_FW_LOG_CREATESFID(70, ROGUE_FW_GROUP_HWR, 5),
+	  "Reconstruction needed for freelist 0x%x (ID=%d) type: %d (0:local,1:global) phase: %d (0:TA, 1:3D) on HW context %u" },
+	{ ROGUE_FW_LOG_CREATESFID(71, ROGUE_FW_GROUP_HWR, 3),
+	  "Start long HW poll %u (0-Unset 1-Set) for (reg:0x%08x val:0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(72, ROGUE_FW_GROUP_HWR, 1),
+	  "End long HW poll (result=%d)" },
+	{ ROGUE_FW_LOG_CREATESFID(73, ROGUE_FW_GROUP_HWR, 3),
+	  "DM%u has taken %d ticks and deadline is %d ticks" },
+	{ ROGUE_FW_LOG_CREATESFID(74, ROGUE_FW_GROUP_HWR, 5),
+	  "USC Watchdog result for DM%u is HWRNeeded=%u Status=%u USCs={0x%x} with HWRChecksToGo=%u" },
+	{ ROGUE_FW_LOG_CREATESFID(75, ROGUE_FW_GROUP_HWR, 6),
+	  "Reconstruction needed for freelist 0x%x (ID=%d) OSid: %d type: %d (0:local,1:global) phase: %d (0:TA, 1:3D) on HW context %u" },
+	{ ROGUE_FW_LOG_CREATESFID(76, ROGUE_FW_GROUP_HWR, 1),
+	  "GPU-%u has locked up" },
+	{ ROGUE_FW_LOG_CREATESFID(77, ROGUE_FW_GROUP_HWR, 1),
+	  "DM%u has locked up" },
+	{ ROGUE_FW_LOG_CREATESFID(78, ROGUE_FW_GROUP_HWR, 2),
+	  "Core %d RGX_CR_EVENT_STATUS=0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(79, ROGUE_FW_GROUP_HWR, 2),
+	  "RGX_CR_MULTICORE_EVENT_STATUS%u=0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(80, ROGUE_FW_GROUP_HWR, 5),
+	  "BIF0 page fault detected (Core %d MMU Status: 0x%08x%08x Req Status: 0x%08x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(81, ROGUE_FW_GROUP_HWR, 3),
+	  "MMU page fault detected (Core %d MMU Status: 0x%08x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(82, ROGUE_FW_GROUP_HWR, 4),
+	  "MMU page fault detected (Core %d MMU Status: 0x%08x%08x 0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(83, ROGUE_FW_GROUP_HWR, 4),
+	  "Reset HW (core:%d of %d, loop:%d, poll failures: 0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(84, ROGUE_FW_GROUP_HWR, 3),
+	  "Fast CRC Check result for Core%u, DM%u is HWRNeeded=%u" },
+	{ ROGUE_FW_LOG_CREATESFID(85, ROGUE_FW_GROUP_HWR, 3),
+	  "Full Signature Check result for Core%u, DM%u is HWRNeeded=%u" },
+	{ ROGUE_FW_LOG_CREATESFID(86, ROGUE_FW_GROUP_HWR, 4),
+	  "USC Slots result for Core%u, DM%u is HWRNeeded=%u USCSlotsUsedByDM=%d" },
+	{ ROGUE_FW_LOG_CREATESFID(87, ROGUE_FW_GROUP_HWR, 6),
+	  "USC Watchdog result for Core%u DM%u is HWRNeeded=%u Status=%u USCs={0x%x} with HWRChecksToGo=%u" },
+	{ ROGUE_FW_LOG_CREATESFID(88, ROGUE_FW_GROUP_HWR, 3),
+	  "RISC-V MMU page fault detected (FWCORE MMU Status 0x%08x Req Status 0x%08x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(89, ROGUE_FW_GROUP_HWR, 2),
+	  "TEXAS1_PFS poll failed on core %d with value 0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(90, ROGUE_FW_GROUP_HWR, 2),
+	  "BIF_PFS poll failed on core %d with value 0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(91, ROGUE_FW_GROUP_HWR, 2),
+	  "MMU_ABORT_PM_STATUS set poll failed on core %d with value 0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(92, ROGUE_FW_GROUP_HWR, 2),
+	  "MMU_ABORT_PM_STATUS unset poll failed on core %d with value 0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(93, ROGUE_FW_GROUP_HWR, 2),
+	  "MMU_CTRL_INVAL poll (all but fw) failed on core %d with value 0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(94, ROGUE_FW_GROUP_HWR, 2),
+	  "MMU_CTRL_INVAL poll (all) failed on core %d with value 0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(95, ROGUE_FW_GROUP_HWR, 3),
+	  "TEXAS%d_PFS poll failed on core %d with value 0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(96, ROGUE_FW_GROUP_HWR, 3),
+	  "Extra Registers Check result for Core%u, DM%u is HWRNeeded=%u" },
+	{ ROGUE_FW_LOG_CREATESFID(97, ROGUE_FW_GROUP_HWR, 1),
+	  "FW attempted to write to read-only GPU address 0x%08x" },
+
+	{ ROGUE_FW_LOG_CREATESFID(1, ROGUE_FW_GROUP_HWP, 2),
+	  "Block 0x%x mapped to Config Idx %u" },
+	{ ROGUE_FW_LOG_CREATESFID(2, ROGUE_FW_GROUP_HWP, 1),
+	  "Block 0x%x omitted from event - not enabled in HW" },
+	{ ROGUE_FW_LOG_CREATESFID(3, ROGUE_FW_GROUP_HWP, 1),
+	  "Block 0x%x included in event - enabled in HW" },
+	{ ROGUE_FW_LOG_CREATESFID(4, ROGUE_FW_GROUP_HWP, 2),
+	  "Select register state hi_0x%x lo_0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(5, ROGUE_FW_GROUP_HWP, 1),
+	  "Counter stream block header word 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(6, ROGUE_FW_GROUP_HWP, 1),
+	  "Counter register offset 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(7, ROGUE_FW_GROUP_HWP, 1),
+	  "Block 0x%x config unset, skipping" },
+	{ ROGUE_FW_LOG_CREATESFID(8, ROGUE_FW_GROUP_HWP, 1),
+	  "Accessing Indirect block 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(9, ROGUE_FW_GROUP_HWP, 1),
+	  "Accessing Direct block 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(10, ROGUE_FW_GROUP_HWP, 1),
+	  "Programmed counter select register at offset 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(11, ROGUE_FW_GROUP_HWP, 2),
+	  "Block register offset 0x%x and value 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(12, ROGUE_FW_GROUP_HWP, 1),
+	  "Reading config block from driver 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(13, ROGUE_FW_GROUP_HWP, 2),
+	  "Reading block range 0x%x to 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(14, ROGUE_FW_GROUP_HWP, 1),
+	  "Recording block 0x%x config from driver" },
+	{ ROGUE_FW_LOG_CREATESFID(15, ROGUE_FW_GROUP_HWP, 0),
+	  "Finished reading config block from driver" },
+	{ ROGUE_FW_LOG_CREATESFID(16, ROGUE_FW_GROUP_HWP, 2),
+	  "Custom Counter offset: 0x%x  value: 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(17, ROGUE_FW_GROUP_HWP, 2),
+	  "Select counter n:%u  ID:0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(18, ROGUE_FW_GROUP_HWP, 3),
+	  "The counter ID 0x%x is not allowed. The package [b:%u, n:%u] will be discarded" },
+	{ ROGUE_FW_LOG_CREATESFID(19, ROGUE_FW_GROUP_HWP, 1),
+	  "Custom Counters filter status %d" },
+	{ ROGUE_FW_LOG_CREATESFID(20, ROGUE_FW_GROUP_HWP, 2),
+	  "The Custom block %d is not allowed. Use only blocks lower than %d. The package will be discarded" },
+	{ ROGUE_FW_LOG_CREATESFID(21, ROGUE_FW_GROUP_HWP, 2),
+	  "The package will be discarded because it contains %d counters IDs while the upper limit is %d" },
+	{ ROGUE_FW_LOG_CREATESFID(22, ROGUE_FW_GROUP_HWP, 2),
+	  "Check Filter 0x%x is 0x%x ?" },
+	{ ROGUE_FW_LOG_CREATESFID(23, ROGUE_FW_GROUP_HWP, 1),
+	  "The custom block %u is reset" },
+	{ ROGUE_FW_LOG_CREATESFID(24, ROGUE_FW_GROUP_HWP, 1),
+	  "Encountered an invalid command (%d)" },
+	{ ROGUE_FW_LOG_CREATESFID(25, ROGUE_FW_GROUP_HWP, 2),
+	  "HWPerf Queue is full, we will have to wait for space! (Roff = %u, Woff = %u)" },
+	{ ROGUE_FW_LOG_CREATESFID(26, ROGUE_FW_GROUP_HWP, 3),
+	  "HWPerf Queue is fencing, we are waiting for Roff = %d (Roff = %u, Woff = %u)" },
+	{ ROGUE_FW_LOG_CREATESFID(27, ROGUE_FW_GROUP_HWP, 1),
+	  "Custom Counter block: %d" },
+	{ ROGUE_FW_LOG_CREATESFID(28, ROGUE_FW_GROUP_HWP, 1),
+	  "Block 0x%x ENABLED" },
+	{ ROGUE_FW_LOG_CREATESFID(29, ROGUE_FW_GROUP_HWP, 1),
+	  "Block 0x%x DISABLED" },
+	{ ROGUE_FW_LOG_CREATESFID(30, ROGUE_FW_GROUP_HWP, 2),
+	  "Accessing Indirect block 0x%x, instance %u" },
+	{ ROGUE_FW_LOG_CREATESFID(31, ROGUE_FW_GROUP_HWP, 2),
+	  "Counter register 0x%x, Value 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(32, ROGUE_FW_GROUP_HWP, 1),
+	  "Counters filter status %d" },
+	{ ROGUE_FW_LOG_CREATESFID(33, ROGUE_FW_GROUP_HWP, 2),
+	  "Block 0x%x mapped to Ctl Idx %u" },
+	{ ROGUE_FW_LOG_CREATESFID(34, ROGUE_FW_GROUP_HWP, 0),
+	  "Block(s) in use for workload estimation." },
+	{ ROGUE_FW_LOG_CREATESFID(35, ROGUE_FW_GROUP_HWP, 3),
+	  "GPU %u Cycle counter 0x%x, Value 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(36, ROGUE_FW_GROUP_HWP, 3),
+	  "GPU Mask 0x%x Cycle counter 0x%x, Value 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(37, ROGUE_FW_GROUP_HWP, 1),
+	  "Blocks IGNORED for GPU %u" },
+
+	{ ROGUE_FW_LOG_CREATESFID(1, ROGUE_FW_GROUP_DMA, 5),
+	  "Transfer 0x%02x request: 0x%02x%08x -> 0x%08x, size %u" },
+	{ ROGUE_FW_LOG_CREATESFID(2, ROGUE_FW_GROUP_DMA, 4),
+	  "Transfer of type 0x%02x expected on channel %u, 0x%02x found, status %u" },
+	{ ROGUE_FW_LOG_CREATESFID(3, ROGUE_FW_GROUP_DMA, 1),
+	  "DMA Interrupt register 0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(4, ROGUE_FW_GROUP_DMA, 1),
+	  "Waiting for transfer of type 0x%02x completion..." },
+	{ ROGUE_FW_LOG_CREATESFID(5, ROGUE_FW_GROUP_DMA, 3),
+	  "Loading of cCCB data from FW common context 0x%08x (offset: %u, size: %u) failed" },
+	{ ROGUE_FW_LOG_CREATESFID(6, ROGUE_FW_GROUP_DMA, 3),
+	  "Invalid load of cCCB data from FW common context 0x%08x (offset: %u, size: %u)" },
+	{ ROGUE_FW_LOG_CREATESFID(7, ROGUE_FW_GROUP_DMA, 1),
+	  "Transfer 0x%02x request poll failure" },
+	{ ROGUE_FW_LOG_CREATESFID(8, ROGUE_FW_GROUP_DMA, 2),
+	  "Boot transfer(s) failed (code? %u, data? %u), used slower memcpy instead" },
+	{ ROGUE_FW_LOG_CREATESFID(9, ROGUE_FW_GROUP_DMA, 7),
+	  "Transfer 0x%02x request on ch. %u: system 0x%02x%08x, coremem 0x%08x, flags 0x%x, size %u" },
+
+	{ ROGUE_FW_LOG_CREATESFID(1, ROGUE_FW_GROUP_DBG, 2),
+	  "0x%08x 0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(2, ROGUE_FW_GROUP_DBG, 1),
+	  "0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(3, ROGUE_FW_GROUP_DBG, 2),
+	  "0x%08x 0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(4, ROGUE_FW_GROUP_DBG, 3),
+	  "0x%08x 0x%08x 0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(5, ROGUE_FW_GROUP_DBG, 4),
+	  "0x%08x 0x%08x 0x%08x 0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(6, ROGUE_FW_GROUP_DBG, 5),
+	  "0x%08x 0x%08x 0x%08x 0x%08x 0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(7, ROGUE_FW_GROUP_DBG, 6),
+	  "0x%08x 0x%08x 0x%08x 0x%08x 0x%08x 0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(8, ROGUE_FW_GROUP_DBG, 7),
+	  "0x%08x 0x%08x 0x%08x 0x%08x 0x%08x 0x%08x 0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(9, ROGUE_FW_GROUP_DBG, 8),
+	  "0x%08x 0x%08x 0x%08x 0x%08x 0x%08x 0x%08x 0x%08x 0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(10, ROGUE_FW_GROUP_DBG, 1),
+	  "%d" },
+	{ ROGUE_FW_LOG_CREATESFID(11, ROGUE_FW_GROUP_DBG, 2),
+	  "%d %d" },
+	{ ROGUE_FW_LOG_CREATESFID(12, ROGUE_FW_GROUP_DBG, 3),
+	  "%d %d %d" },
+	{ ROGUE_FW_LOG_CREATESFID(13, ROGUE_FW_GROUP_DBG, 4),
+	  "%d %d %d %d" },
+	{ ROGUE_FW_LOG_CREATESFID(14, ROGUE_FW_GROUP_DBG, 5),
+	  "%d %d %d %d %d" },
+	{ ROGUE_FW_LOG_CREATESFID(15, ROGUE_FW_GROUP_DBG, 6),
+	  "%d %d %d %d %d %d" },
+	{ ROGUE_FW_LOG_CREATESFID(16, ROGUE_FW_GROUP_DBG, 7),
+	  "%d %d %d %d %d %d %d" },
+	{ ROGUE_FW_LOG_CREATESFID(17, ROGUE_FW_GROUP_DBG, 8),
+	  "%d %d %d %d %d %d %d %d" },
+	{ ROGUE_FW_LOG_CREATESFID(18, ROGUE_FW_GROUP_DBG, 1),
+	  "%u" },
+	{ ROGUE_FW_LOG_CREATESFID(19, ROGUE_FW_GROUP_DBG, 2),
+	  "%u %u" },
+	{ ROGUE_FW_LOG_CREATESFID(20, ROGUE_FW_GROUP_DBG, 3),
+	  "%u %u %u" },
+	{ ROGUE_FW_LOG_CREATESFID(21, ROGUE_FW_GROUP_DBG, 4),
+	  "%u %u %u %u" },
+	{ ROGUE_FW_LOG_CREATESFID(22, ROGUE_FW_GROUP_DBG, 5),
+	  "%u %u %u %u %u" },
+	{ ROGUE_FW_LOG_CREATESFID(23, ROGUE_FW_GROUP_DBG, 6),
+	  "%u %u %u %u %u %u" },
+	{ ROGUE_FW_LOG_CREATESFID(24, ROGUE_FW_GROUP_DBG, 7),
+	  "%u %u %u %u %u %u %u" },
+	{ ROGUE_FW_LOG_CREATESFID(25, ROGUE_FW_GROUP_DBG, 8),
+	  "%u %u %u %u %u %u %u %u" },
+
+	{ ROGUE_FW_LOG_CREATESFID(65535, ROGUE_FW_GROUP_NULL, 15),
+	  "You should not use this string" },
+};
+
+#define ROGUE_FW_SF_FIRST ROGUE_FW_LOG_CREATESFID(0, ROGUE_FW_GROUP_NULL, 0)
+#define ROGUE_FW_SF_MAIN_ASSERT_FAILED ROGUE_FW_LOG_CREATESFID(24, ROGUE_FW_GROUP_MAIN, 1)
+#define ROGUE_FW_SF_LAST ROGUE_FW_LOG_CREATESFID(65535, ROGUE_FW_GROUP_NULL, 15)
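+
+/*
+ * Illustrative sketch only (not part of this interface): a host-side trace
+ * decoder can recover the format string for a packed SF id found in the
+ * firmware trace buffer by scanning the table above. The array and struct
+ * names below ("sf_table", "sf_entry") and the two-member entry layout are
+ * assumptions made purely for illustration.
+ *
+ *	struct sf_entry { u32 id; const char *fmt; };
+ *
+ *	static const char *sf_fmt_lookup_example(const struct sf_entry *sf_table,
+ *						 size_t n_entries, u32 sf_id)
+ *	{
+ *		size_t i;
+ *
+ *		for (i = 0; i < n_entries; i++) {
+ *			if (sf_table[i].id == sf_id)
+ *				return sf_table[i].fmt;
+ *		}
+ *
+ *		return "Unknown SF id";
+ *	}
+ */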
+
+#endif /* PVR_ROGUE_FWIF_SF_H */
diff --git a/drivers/gpu/drm/imagination/pvr_rogue_fwif_shared.h b/drivers/gpu/drm/imagination/pvr_rogue_fwif_shared.h
new file mode 100644
index 000000000000..06c8012b31d4
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_rogue_fwif_shared.h
@@ -0,0 +1,258 @@
+/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#ifndef PVR_ROGUE_FWIF_SHARED_H
+#define PVR_ROGUE_FWIF_SHARED_H
+
+#include <linux/compiler.h>
+#include <linux/types.h>
+
+#define ROGUE_FWIF_NUM_RTDATAS 2U
+#define ROGUE_FWIF_NUM_GEOMDATAS 1U
+#define ROGUE_FWIF_NUM_RTDATA_FREELISTS 2U
+#define ROGUE_NUM_GEOM_CORES 1U
+
+#define ROGUE_NUM_GEOM_CORES_SIZE 2U
+
+/*
+ * Maximum number of UFOs in a CCB command.
+ * The number is based on having 32 sync prims (as originally), plus 32 sync
+ * checkpoints.
+ * Once the use of sync prims is no longer supported, the same total (64)
+ * will be retained, since the number of sync checkpoints supporting a
+ * fence is not visible to the client driver and has to allow for the
+ * number of different timelines involved in fence merges.
+ */
+#define ROGUE_FWIF_CCB_CMD_MAX_UFOS (32U + 32U)
+
+/*
+ * This is a generic limit imposed on any DM (GEOMETRY, FRAGMENT, CDM, TDM,
+ * 2D, TRANSFER) command passed through the bridge.
+ * On the server side of the bridge, the size of any incoming kick command is
+ * checked against this maximum limit.
+ * If the incoming command size exceeds this limit, the bridge call returns
+ * an error.
+ */
+#define ROGUE_FWIF_DM_INDEPENDENT_KICK_CMD_SIZE (1024U)
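+
+/*
+ * Illustrative sketch only (assumption, not taken from this patch): the
+ * server-side size validation described above amounts to a simple bounds
+ * check on the incoming kick command stream, e.g.:
+ *
+ *	if (cmd_size > ROGUE_FWIF_DM_INDEPENDENT_KICK_CMD_SIZE)
+ *		return -EINVAL;
+ *
+ * The specific error code is an assumption; the point is only that oversized
+ * commands are rejected before reaching the firmware.
+ */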
+
+#define ROGUE_FWIF_PRBUFFER_START (0)
+#define ROGUE_FWIF_PRBUFFER_ZSBUFFER (0)
+#define ROGUE_FWIF_PRBUFFER_MSAABUFFER (1)
+#define ROGUE_FWIF_PRBUFFER_MAXSUPPORTED (2)
+
+struct rogue_fwif_dma_addr {
+	aligned_u64 dev_addr;
+	u32 fw_addr;
+	u32 padding;
+} __aligned(8);
+
+struct rogue_fwif_ufo {
+	u32 addr;
+	u32 value;
+};
+
+#define ROGUE_FWIF_UFO_ADDR_IS_SYNC_CHECKPOINT (1)
+
+struct rogue_fwif_sync_checkpoint {
+	u32 state;
+	u32 fw_ref_count;
+};
+
+struct rogue_fwif_cleanup_ctl {
+	/* Number of commands received by the FW */
+	u32 submitted_commands;
+	/* Number of commands executed by the FW */
+	u32 executed_commands;
+} __aligned(8);
+
+/*
+ * Used to share frame numbers across UM-KM-FW. The frame number is set in
+ * UM and is required both in KM (for HTB) and in FW (for the FW trace).
+ *
+ * May be used to house Kick flags in the future.
+ */
+struct rogue_fwif_cmd_common {
+	/* associated frame number */
+	u32 frame_num;
+};
+
+/*
+ * Geometry and fragment commands require a set of firmware addresses that are stored in the Kernel.
+ * The Client has handle(s) to Kernel containers storing these addresses, instead of raw addresses. We
+ * have to patch/write these addresses in KM to prevent UM from controlling FW addresses directly.
+ * Typedefs for geometry and fragment commands are shared between Client and Firmware (both
+ * single-BVNC). Kernel is implemented in a multi-BVNC manner, so it can't use geometry|fragment
+ * CMD type definitions directly. Therefore we have a SHARED block that is shared between UM-KM-FW
+ * across all BVNC configurations.
+ */
+struct rogue_fwif_cmd_geom_frag_shared {
+	/* Common command attributes */
+	struct rogue_fwif_cmd_common cmn;
+
+	/*
+	 * RTData associated with this command. This is used for context
+	 * selection and for storing out the HW context when the TA is
+	 * switched out, so it can be continued later.
+	 */
+	u32 hwrt_data_fw_addr;
+
+	/* Supported PR Buffers like Z/S/MSAA Scratch */
+	u32 pr_buffer_fw_addr[ROGUE_FWIF_PRBUFFER_MAXSUPPORTED];
+};
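+
+/*
+ * Illustrative sketch only (hypothetical helper, not part of this
+ * interface): because UM only holds handles, the kernel resolves and patches
+ * the firmware addresses into the shared block before the command is passed
+ * on to the FW, along the lines of:
+ *
+ *	static void patch_geom_frag_shared_example(
+ *		struct rogue_fwif_cmd_geom_frag_shared *shared,
+ *		u32 hwrt_data_fw_addr,
+ *		const u32 pr_buf_fw_addr[ROGUE_FWIF_PRBUFFER_MAXSUPPORTED])
+ *	{
+ *		int i;
+ *
+ *		shared->hwrt_data_fw_addr = hwrt_data_fw_addr;
+ *		for (i = 0; i < ROGUE_FWIF_PRBUFFER_MAXSUPPORTED; i++)
+ *			shared->pr_buffer_fw_addr[i] = pr_buf_fw_addr[i];
+ *	}
+ */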
+
+/*
+ * Client Circular Command Buffer (CCCB) control structure.
+ * This is shared between the Server and the Firmware and holds byte offsets
+ * into the CCCB as well as the wrapping mask to aid wrap around. A given
+ * snapshot of this queue with Cmd 1 running on the GPU might be:
+ *
+ *          Roff                           Doff                 Woff
+ * [..........|-1----------|=2===|=3===|=4===|~5~~~~|~6~~~~|~7~~~~|..........]
+ *            <      runnable commands       ><   !ready to run   >
+ *
+ * Cmd 1    : Currently executing on the GPU data master.
+ * Cmd 2,3,4: Fence dependencies met, commands runnable.
+ * Cmd 5... : Fence dependency not met yet.
+ */
+struct rogue_fwif_cccb_ctl {
+	/* Host write offset into CCB. This must be aligned to 16 bytes. */
+	u32 write_offset;
+	/*
+	 * Firmware read offset into CCB. Points to the command that is runnable
+	 * on GPU, if R!=W
+	 */
+	u32 read_offset;
+	/*
+	 * Firmware fence dependency offset. Points to commands not ready, i.e.
+	 * fence dependencies are not met.
+	 */
+	u32 dep_offset;
+	/* Offset wrapping mask: total capacity of the CCB in bytes, minus 1 */
+	u32 wrap_mask;
+
+	/* Only used if SUPPORT_AGP is present. */
+	u32 read_offset2;
+
+	/* Only used if SUPPORT_AGP4 is present. */
+	u32 read_offset3;
+	/* Only used if SUPPORT_AGP4 is present. */
+	u32 read_offset4;
+
+	u32 padding;
+} __aligned(8);
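+
+/*
+ * Illustrative sketch only (assumption): with wrap_mask equal to the CCB
+ * capacity minus one (the capacity being a power of two), advancing any of
+ * the offsets by a command size reduces to a masked add, and the commands in
+ * [read_offset, write_offset) are the runnable ones:
+ *
+ *	static u32 cccb_advance_offset_example(u32 offset, u32 bytes,
+ *					       u32 wrap_mask)
+ *	{
+ *		return (offset + bytes) & wrap_mask;
+ *	}
+ */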
+
+#define ROGUE_FW_LOCAL_FREELIST (0)
+#define ROGUE_FW_GLOBAL_FREELIST (1)
+#define ROGUE_FW_FREELIST_TYPE_LAST ROGUE_FW_GLOBAL_FREELIST
+#define ROGUE_FW_MAX_FREELISTS (ROGUE_FW_FREELIST_TYPE_LAST + 1U)
+
+struct rogue_fwif_geom_registers_caswitch {
+	u64 geom_reg_vdm_context_state_base_addr;
+	u64 geom_reg_vdm_context_state_resume_addr;
+	u64 geom_reg_ta_context_state_base_addr;
+
+	struct {
+		u64 geom_reg_vdm_context_store_task0;
+		u64 geom_reg_vdm_context_store_task1;
+		u64 geom_reg_vdm_context_store_task2;
+
+		/* VDM resume state update controls */
+		u64 geom_reg_vdm_context_resume_task0;
+		u64 geom_reg_vdm_context_resume_task1;
+		u64 geom_reg_vdm_context_resume_task2;
+
+		u64 geom_reg_vdm_context_store_task3;
+		u64 geom_reg_vdm_context_store_task4;
+
+		u64 geom_reg_vdm_context_resume_task3;
+		u64 geom_reg_vdm_context_resume_task4;
+	} geom_state[2];
+};
+
+#define ROGUE_FWIF_GEOM_REGISTERS_CSWITCH_SIZE \
+	sizeof(struct rogue_fwif_geom_registers_caswitch)
+
+struct rogue_fwif_cdm_registers_cswitch {
+	u64 cdmreg_cdm_context_pds0;
+	u64 cdmreg_cdm_context_pds1;
+	u64 cdmreg_cdm_terminate_pds;
+	u64 cdmreg_cdm_terminate_pds1;
+
+	/* CDM resume controls */
+	u64 cdmreg_cdm_resume_pds0;
+	u64 cdmreg_cdm_context_pds0_b;
+	u64 cdmreg_cdm_resume_pds0_b;
+};
+
+struct rogue_fwif_static_rendercontext_state {
+	/* Geom registers for ctx switch */
+	struct rogue_fwif_geom_registers_caswitch ctxswitch_regs[ROGUE_NUM_GEOM_CORES_SIZE]
+		__aligned(8);
+};
+
+#define ROGUE_FWIF_STATIC_RENDERCONTEXT_SIZE \
+	sizeof(struct rogue_fwif_static_rendercontext_state)
+
+struct rogue_fwif_static_computecontext_state {
+	/* CDM registers for ctx switch */
+	struct rogue_fwif_cdm_registers_cswitch ctxswitch_regs __aligned(8);
+};
+
+#define ROGUE_FWIF_STATIC_COMPUTECONTEXT_SIZE \
+	sizeof(struct rogue_fwif_static_computecontext_state)
+
+enum rogue_fwif_prbuffer_state {
+	ROGUE_FWIF_PRBUFFER_UNBACKED = 0,
+	ROGUE_FWIF_PRBUFFER_BACKED,
+	ROGUE_FWIF_PRBUFFER_BACKING_PENDING,
+	ROGUE_FWIF_PRBUFFER_UNBACKING_PENDING,
+};
+
+struct rogue_fwif_prbuffer {
+	/* Buffer ID */
+	u32 buffer_id;
+	/* Needs On-demand Z/S/MSAA Buffer allocation */
+	bool on_demand __aligned(4);
+	/* Z/S/MSAA buffer state */
+	enum rogue_fwif_prbuffer_state state;
+	/* Cleanup state */
+	struct rogue_fwif_cleanup_ctl cleanup_sate;
+	/* Compatibility and other flags */
+	u32 prbuffer_flags;
+} __aligned(8);
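+
+/*
+ * Illustrative sketch only (assumption, inferred from the SPM trace strings
+ * in pvr_rogue_fwif_sf.h): a partial render buffer moves UNBACKED ->
+ * BACKING_PENDING when the FW sends a backing request to the host, and
+ * BACKING_PENDING -> BACKED once the host has populated it; the unbacking
+ * path mirrors this. A request is only sent when no previous request is
+ * still pending, e.g.:
+ *
+ *	if (buf->state == ROGUE_FWIF_PRBUFFER_UNBACKED) {
+ *		buf->state = ROGUE_FWIF_PRBUFFER_BACKING_PENDING;
+ *		send_backing_request_to_host(buf->buffer_id);
+ *	}
+ *
+ * where send_backing_request_to_host() is a hypothetical name for whatever
+ * mechanism delivers the request (e.g. the "Send ZS-Buffer backing request
+ * to host" path seen in the SPM trace strings).
+ */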
+
+/* Last reset reason for a context. */
+enum rogue_context_reset_reason {
+	/* No reset reason recorded */
+	ROGUE_CONTEXT_RESET_REASON_NONE = 0,
+	/* Caused a reset due to locking up */
+	ROGUE_CONTEXT_RESET_REASON_GUILTY_LOCKUP = 1,
+	/* Affected by another context locking up */
+	ROGUE_CONTEXT_RESET_REASON_INNOCENT_LOCKUP = 2,
+	/* Overran the global deadline */
+	ROGUE_CONTEXT_RESET_REASON_GUILTY_OVERRUNING = 3,
+	/* Affected by another context overrunning */
+	ROGUE_CONTEXT_RESET_REASON_INNOCENT_OVERRUNING = 4,
+	/* Forced reset to ensure scheduling requirements */
+	ROGUE_CONTEXT_RESET_REASON_HARD_CONTEXT_SWITCH = 5,
+	/* FW Safety watchdog triggered */
+	ROGUE_CONTEXT_RESET_REASON_FW_WATCHDOG = 12,
+	/* FW page fault (no HWR) */
+	ROGUE_CONTEXT_RESET_REASON_FW_PAGEFAULT = 13,
+	/* FW execution error (GPU reset requested) */
+	ROGUE_CONTEXT_RESET_REASON_FW_EXEC_ERR = 14,
+	/* Host watchdog detected FW error */
+	ROGUE_CONTEXT_RESET_REASON_HOST_WDG_FW_ERR = 15,
+	/* Geometry DM OOM event is not allowed */
+	ROGUE_CONTEXT_GEOM_OOM_DISABLED = 16,
+};
+
+struct rogue_context_reset_reason_data {
+	enum rogue_context_reset_reason reset_reason;
+	u32 reset_ext_job_ref;
+};
+
+#include "pvr_rogue_fwif_shared_check.h"
+
+#endif /* PVR_ROGUE_FWIF_SHARED_H */
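
The write_offset / read_offset / wrap_mask trio in struct rogue_fwif_cccb_ctl reads as a
conventional power-of-two ring buffer interface. As a sketch of how those fields combine under
the usual ring-buffer convention (wrap_mask == capacity - 1); this is not part of the patch and
the struct/helper names below are hypothetical stand-ins:

#include <linux/types.h>

/* Standalone stand-in for the three ring-buffer fields of
 * struct rogue_fwif_cccb_ctl (illustrative only).
 */
struct cccb_ctl_sketch {
	u32 write_offset; /* host-owned producer offset */
	u32 read_offset;  /* firmware-owned consumer offset */
	u32 wrap_mask;    /* CCB capacity in bytes, minus 1 (power of two) */
};

/* Free space in bytes, keeping one byte unused so "full" != "empty". */
static u32 cccb_free_space(const struct cccb_ctl_sketch *ctl)
{
	return (ctl->read_offset - ctl->write_offset - 1) & ctl->wrap_mask;
}

/* Advance an offset by cmd_size bytes, wrapping at the CCB capacity. */
static u32 cccb_advance(u32 offset, u32 cmd_size, u32 wrap_mask)
{
	return (offset + cmd_size) & wrap_mask;
}
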
diff --git a/drivers/gpu/drm/imagination/pvr_rogue_fwif_shared_check.h b/drivers/gpu/drm/imagination/pvr_rogue_fwif_shared_check.h
new file mode 100644
index 000000000000..4d6684474185
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_rogue_fwif_shared_check.h
@@ -0,0 +1,108 @@
+/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#ifndef PVR_ROGUE_FWIF_SHARED_CHECK_H
+#define PVR_ROGUE_FWIF_SHARED_CHECK_H
+
+#include <linux/build_bug.h>
+
+#define OFFSET_CHECK(type, member, offset) \
+	static_assert(offsetof(type, member) == (offset), \
+		      "offsetof(" #type ", " #member ") incorrect")
+
+#define SIZE_CHECK(type, size) \
+	static_assert(sizeof(type) == (size), #type " is incorrect size")
+
+OFFSET_CHECK(struct rogue_fwif_dma_addr, dev_addr, 0);
+OFFSET_CHECK(struct rogue_fwif_dma_addr, fw_addr, 8);
+SIZE_CHECK(struct rogue_fwif_dma_addr, 16);
+
+OFFSET_CHECK(struct rogue_fwif_ufo, addr, 0);
+OFFSET_CHECK(struct rogue_fwif_ufo, value, 4);
+SIZE_CHECK(struct rogue_fwif_ufo, 8);
+
+OFFSET_CHECK(struct rogue_fwif_cleanup_ctl, submitted_commands, 0);
+OFFSET_CHECK(struct rogue_fwif_cleanup_ctl, executed_commands, 4);
+SIZE_CHECK(struct rogue_fwif_cleanup_ctl, 8);
+
+OFFSET_CHECK(struct rogue_fwif_cccb_ctl, write_offset, 0);
+OFFSET_CHECK(struct rogue_fwif_cccb_ctl, read_offset, 4);
+OFFSET_CHECK(struct rogue_fwif_cccb_ctl, dep_offset, 8);
+OFFSET_CHECK(struct rogue_fwif_cccb_ctl, wrap_mask, 12);
+OFFSET_CHECK(struct rogue_fwif_cccb_ctl, read_offset2, 16);
+OFFSET_CHECK(struct rogue_fwif_cccb_ctl, read_offset3, 20);
+OFFSET_CHECK(struct rogue_fwif_cccb_ctl, read_offset4, 24);
+SIZE_CHECK(struct rogue_fwif_cccb_ctl, 32);
+
+OFFSET_CHECK(struct rogue_fwif_geom_registers_caswitch,
+	     geom_reg_vdm_context_state_base_addr, 0);
+OFFSET_CHECK(struct rogue_fwif_geom_registers_caswitch,
+	     geom_reg_vdm_context_state_resume_addr, 8);
+OFFSET_CHECK(struct rogue_fwif_geom_registers_caswitch,
+	     geom_reg_ta_context_state_base_addr, 16);
+OFFSET_CHECK(struct rogue_fwif_geom_registers_caswitch,
+	     geom_state[0].geom_reg_vdm_context_store_task0, 24);
+OFFSET_CHECK(struct rogue_fwif_geom_registers_caswitch,
+	     geom_state[0].geom_reg_vdm_context_store_task1, 32);
+OFFSET_CHECK(struct rogue_fwif_geom_registers_caswitch,
+	     geom_state[0].geom_reg_vdm_context_store_task2, 40);
+OFFSET_CHECK(struct rogue_fwif_geom_registers_caswitch,
+	     geom_state[0].geom_reg_vdm_context_resume_task0, 48);
+OFFSET_CHECK(struct rogue_fwif_geom_registers_caswitch,
+	     geom_state[0].geom_reg_vdm_context_resume_task1, 56);
+OFFSET_CHECK(struct rogue_fwif_geom_registers_caswitch,
+	     geom_state[0].geom_reg_vdm_context_resume_task2, 64);
+OFFSET_CHECK(struct rogue_fwif_geom_registers_caswitch,
+	     geom_state[0].geom_reg_vdm_context_store_task3, 72);
+OFFSET_CHECK(struct rogue_fwif_geom_registers_caswitch,
+	     geom_state[0].geom_reg_vdm_context_store_task4, 80);
+OFFSET_CHECK(struct rogue_fwif_geom_registers_caswitch,
+	     geom_state[0].geom_reg_vdm_context_resume_task3, 88);
+OFFSET_CHECK(struct rogue_fwif_geom_registers_caswitch,
+	     geom_state[0].geom_reg_vdm_context_resume_task4, 96);
+OFFSET_CHECK(struct rogue_fwif_geom_registers_caswitch,
+	     geom_state[1].geom_reg_vdm_context_store_task0, 104);
+OFFSET_CHECK(struct rogue_fwif_geom_registers_caswitch,
+	     geom_state[1].geom_reg_vdm_context_store_task1, 112);
+OFFSET_CHECK(struct rogue_fwif_geom_registers_caswitch,
+	     geom_state[1].geom_reg_vdm_context_store_task2, 120);
+OFFSET_CHECK(struct rogue_fwif_geom_registers_caswitch,
+	     geom_state[1].geom_reg_vdm_context_resume_task0, 128);
+OFFSET_CHECK(struct rogue_fwif_geom_registers_caswitch,
+	     geom_state[1].geom_reg_vdm_context_resume_task1, 136);
+OFFSET_CHECK(struct rogue_fwif_geom_registers_caswitch,
+	     geom_state[1].geom_reg_vdm_context_resume_task2, 144);
+OFFSET_CHECK(struct rogue_fwif_geom_registers_caswitch,
+	     geom_state[1].geom_reg_vdm_context_store_task3, 152);
+OFFSET_CHECK(struct rogue_fwif_geom_registers_caswitch,
+	     geom_state[1].geom_reg_vdm_context_store_task4, 160);
+OFFSET_CHECK(struct rogue_fwif_geom_registers_caswitch,
+	     geom_state[1].geom_reg_vdm_context_resume_task3, 168);
+OFFSET_CHECK(struct rogue_fwif_geom_registers_caswitch,
+	     geom_state[1].geom_reg_vdm_context_resume_task4, 176);
+SIZE_CHECK(struct rogue_fwif_geom_registers_caswitch, 184);
+
+OFFSET_CHECK(struct rogue_fwif_cdm_registers_cswitch, cdmreg_cdm_context_pds0, 0);
+OFFSET_CHECK(struct rogue_fwif_cdm_registers_cswitch, cdmreg_cdm_context_pds1, 8);
+OFFSET_CHECK(struct rogue_fwif_cdm_registers_cswitch, cdmreg_cdm_terminate_pds, 16);
+OFFSET_CHECK(struct rogue_fwif_cdm_registers_cswitch, cdmreg_cdm_terminate_pds1, 24);
+OFFSET_CHECK(struct rogue_fwif_cdm_registers_cswitch, cdmreg_cdm_resume_pds0, 32);
+OFFSET_CHECK(struct rogue_fwif_cdm_registers_cswitch, cdmreg_cdm_context_pds0_b, 40);
+OFFSET_CHECK(struct rogue_fwif_cdm_registers_cswitch, cdmreg_cdm_resume_pds0_b, 48);
+SIZE_CHECK(struct rogue_fwif_cdm_registers_cswitch, 56);
+
+OFFSET_CHECK(struct rogue_fwif_static_rendercontext_state, ctxswitch_regs, 0);
+SIZE_CHECK(struct rogue_fwif_static_rendercontext_state, 368);
+
+OFFSET_CHECK(struct rogue_fwif_static_computecontext_state, ctxswitch_regs, 0);
+SIZE_CHECK(struct rogue_fwif_static_computecontext_state, 56);
+
+OFFSET_CHECK(struct rogue_fwif_cmd_common, frame_num, 0);
+SIZE_CHECK(struct rogue_fwif_cmd_common, 4);
+
+OFFSET_CHECK(struct rogue_fwif_cmd_geom_frag_shared, cmn, 0);
+OFFSET_CHECK(struct rogue_fwif_cmd_geom_frag_shared, hwrt_data_fw_addr, 4);
+OFFSET_CHECK(struct rogue_fwif_cmd_geom_frag_shared, pr_buffer_fw_addr, 8);
+SIZE_CHECK(struct rogue_fwif_cmd_geom_frag_shared, 16);
+
+#endif /* PVR_ROGUE_FWIF_SHARED_CHECK_H */
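
The OFFSET_CHECK()/SIZE_CHECK() macros pin the host-side view of the shared structs to the
firmware ABI at build time, so any layout drift becomes a compile error rather than silent
corruption. A minimal illustration of the same pattern on a hypothetical struct (not part of
the patch):

#include <linux/build_bug.h>
#include <linux/stddef.h>
#include <linux/types.h>

/* Hypothetical two-field shared struct, purely for illustration. */
struct example_fw_pair {
	u32 first;
	u32 second;
};

/* Same pattern as above: a layout mismatch fails the build. */
static_assert(offsetof(struct example_fw_pair, first) == 0,
	      "offsetof(struct example_fw_pair, first) incorrect");
static_assert(offsetof(struct example_fw_pair, second) == 4,
	      "offsetof(struct example_fw_pair, second) incorrect");
static_assert(sizeof(struct example_fw_pair) == 8,
	      "struct example_fw_pair is incorrect size");
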
diff --git a/drivers/gpu/drm/imagination/pvr_rogue_fwif_stream.h b/drivers/gpu/drm/imagination/pvr_rogue_fwif_stream.h
new file mode 100644
index 000000000000..97859a41e677
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_rogue_fwif_stream.h
@@ -0,0 +1,78 @@
+/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#ifndef PVR_ROGUE_FWIF_STREAM_H
+#define PVR_ROGUE_FWIF_STREAM_H
+
+/**
+ * DOC: Streams
+ *
+ * Commands are submitted to the kernel driver in the form of streams.
+ *
+ * A command stream has the following layout:
+ *  - A 64-bit header containing:
+ *    * A u32 containing the length of the main stream inclusive of the length of the header.
+ *    * A u32 for padding.
+ *  - The main stream data.
+ *  - The extension stream (optional), which is composed of:
+ *    * One or more headers.
+ *    * The extension stream data, corresponding to the extension headers.
+ *
+ * The main stream provides the base command data. This has a fixed layout based on the features
+ * supported by a given GPU.
+ *
+ * The extension stream provides the command parameters that are required for BRNs & ERNs for the
+ * current GPU. This stream is composed of one or more headers, followed by data for each given
+ * BRN/ERN.
+ *
+ * Each header is a u32 containing a bitmask of quirks & enhancements in the extension stream, a
+ * "type" field determining the set of quirks & enhancements the bitmask represents, and a
+ * continuation bit determining whether any more headers are present. The headers are then followed
+ * by command data; this is specific to each quirk/enhancement. All unused / reserved bits in the
+ * header must be set to 0.
+ *
+ * All parameters and headers in the main and extension streams must be naturally aligned.
+ *
+ * If a parameter appears in both the main and extension streams, then the extension parameter is
+ * used.
+ */
+
+/*
+ * Stream extension header definition
+ */
+#define PVR_STREAM_EXTHDR_TYPE_SHIFT 29U
+#define PVR_STREAM_EXTHDR_TYPE_MASK (7U << PVR_STREAM_EXTHDR_TYPE_SHIFT)
+#define PVR_STREAM_EXTHDR_TYPE_MAX 8U
+#define PVR_STREAM_EXTHDR_CONTINUATION BIT(28U)
+
+#define PVR_STREAM_EXTHDR_DATA_MASK ~(PVR_STREAM_EXTHDR_TYPE_MASK | PVR_STREAM_EXTHDR_CONTINUATION)
+
+/*
+ * Stream extension header - Geometry 0
+ */
+#define PVR_STREAM_EXTHDR_TYPE_GEOM0 0U
+
+#define PVR_STREAM_EXTHDR_GEOM0_BRN49927 BIT(0U)
+
+#define PVR_STREAM_EXTHDR_GEOM0_VALID PVR_STREAM_EXTHDR_GEOM0_BRN49927
+
+/*
+ * Stream extension header - Fragment 0
+ */
+#define PVR_STREAM_EXTHDR_TYPE_FRAG0 0U
+
+#define PVR_STREAM_EXTHDR_FRAG0_BRN47217 BIT(0U)
+#define PVR_STREAM_EXTHDR_FRAG0_BRN49927 BIT(1U)
+
+#define PVR_STREAM_EXTHDR_FRAG0_VALID PVR_STREAM_EXTHDR_FRAG0_BRN49927
+
+/*
+ * Stream extension header - Compute 0
+ */
+#define PVR_STREAM_EXTHDR_TYPE_COMPUTE0 0U
+
+#define PVR_STREAM_EXTHDR_COMPUTE0_BRN49927 BIT(0U)
+
+#define PVR_STREAM_EXTHDR_COMPUTE0_VALID PVR_STREAM_EXTHDR_COMPUTE0_BRN49927
+
+#endif /* PVR_ROGUE_FWIF_STREAM_H */
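
To make the extension header encoding above concrete: the top three bits select the type,
bit 28 is the continuation flag, and the remaining bits form the per-type quirk/enhancement
bitmask. A hedged sketch of building and decoding one such word, assuming this header is
included; it is not part of the patch and the helper names are hypothetical:

#include <linux/bits.h>
#include <linux/types.h>

/* Build a geometry extension header flagging BRN 49927 data, with the
 * continuation bit clear (this is the last header in the stream).
 */
static u32 example_geom0_exthdr(void)
{
	return ((u32)PVR_STREAM_EXTHDR_TYPE_GEOM0 << PVR_STREAM_EXTHDR_TYPE_SHIFT) |
	       PVR_STREAM_EXTHDR_GEOM0_BRN49927;
}

/* Extract the type field from an extension header word. */
static u32 example_exthdr_type(u32 hdr)
{
	return (hdr & PVR_STREAM_EXTHDR_TYPE_MASK) >> PVR_STREAM_EXTHDR_TYPE_SHIFT;
}

/* Per-type quirk/enhancement bitmask carried by the header. */
static u32 example_exthdr_data(u32 hdr)
{
	return hdr & PVR_STREAM_EXTHDR_DATA_MASK;
}

/* True if another extension header follows this one. */
static bool example_exthdr_has_next(u32 hdr)
{
	return hdr & PVR_STREAM_EXTHDR_CONTINUATION;
}
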
diff --git a/drivers/gpu/drm/imagination/pvr_rogue_heap_config.h b/drivers/gpu/drm/imagination/pvr_rogue_heap_config.h
new file mode 100644
index 000000000000..632221b88281
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_rogue_heap_config.h
@@ -0,0 +1,113 @@
+/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#ifndef PVR_ROGUE_HEAP_CONFIG_H
+#define PVR_ROGUE_HEAP_CONFIG_H
+
+#include <linux/sizes.h>
+
+/*
+ * ROGUE Device Virtual Address Space Definitions
+ *
+ * This file defines the ROGUE virtual address heaps that are used in
+ * application memory contexts. It also shows where the Firmware memory heap
+ * fits into this, but the firmware heap is only ever created in the
+ * kernel driver and never exposed to userspace.
+ *
+ * ROGUE_PDSCODEDATA_HEAP_BASE and ROGUE_USCCODE_HEAP_BASE will be programmed,
+ * on a global basis, into ROGUE_CR_PDS_EXEC_BASE and ROGUE_CR_USC_CODE_BASE_*
+ * respectively. Therefore if client drivers use multiple configs they must
+ * still be consistent with their definitions for these heaps.
+ *
+ * Base addresses have to be a multiple of 4 MiB.
+ * Heaps must not start at 0x0000000000, as this is reserved for internal
+ * use within the driver.
+ * The range comments below (those starting in column 0) act as section
+ * headings and sit above the heaps in that range. Often they give the
+ * reserved size of the heap within the range.
+ */
+
+/* 0x00_0000_0000 ************************************************************/
+
+/* 0x00_0000_0000 - 0x00_0040_0000 */
+/* 0 MiB to 4 MiB, size of 4 MiB : RESERVED */
+
+/* 0x00_0040_0000 - 0x7F_FFC0_0000 **/
+/* 4 MiB to 512 GiB, size of 512 GiB less 4 MiB : RESERVED **/
+
+/* 0x80_0000_0000 ************************************************************/
+
+/* 0x80_0000_0000 - 0x9F_FFFF_FFFF **/
+/* 512 GiB to 640 GiB, size of 128 GiB : GENERAL_HEAP **/
+#define ROGUE_GENERAL_HEAP_BASE 0x8000000000ull
+#define ROGUE_GENERAL_HEAP_SIZE SZ_128G
+
+/* 0xA0_0000_0000 - 0xAF_FFFF_FFFF */
+/* 640 GiB to 704 GiB, size of 64 GiB : FREE */
+
+/* 0xB0_0000_0000 - 0xB7_FFFF_FFFF */
+/* 704 GiB to 736 GiB, size of 32 GiB : FREE */
+
+/* 0xB8_0000_0000 - 0xBF_FFFF_FFFF */
+/* 736 GiB to 768 GiB, size of 32 GiB : RESERVED */
+
+/* 0xC0_0000_0000 ************************************************************/
+
+/* 0xC0_0000_0000 - 0xD9_FFFF_FFFF */
+/* 768 GiB to 872 GiB, size of 104 GiB : FREE */
+
+/* 0xDA_0000_0000 - 0xDA_FFFF_FFFF */
+/* 872 GiB to 876 GiB, size of 4 GiB : PDSCODEDATA_HEAP */
+#define ROGUE_PDSCODEDATA_HEAP_BASE 0xDA00000000ull
+#define ROGUE_PDSCODEDATA_HEAP_SIZE SZ_4G
+
+/* 0xDB_0000_0000 - 0xDB_FFFF_FFFF */
+/* 876 GiB to 880 GiB, size of 256 MiB (reserved 4 GiB) : BRN **/
+/*
+ * The BRN63142 quirk workaround requires Region Header memory to be at the top
+ * of a 16GiB aligned range. This is so when masked with 0x03FFFFFFFF the
+ * address will avoid aliasing PB addresses. Start at 879.75 GiB. Size of 256 MiB.
+ */
+#define ROGUE_RGNHDR_HEAP_BASE 0xDBF0000000ull
+#define ROGUE_RGNHDR_HEAP_SIZE SZ_256M
+
+/* 0xDC_0000_0000 - 0xDF_FFFF_FFFF */
+/* 880 GiB to 896 GiB, size of 16 GiB : FREE */
+
+/* 0xE0_0000_0000 - 0xE0_FFFF_FFFF */
+/* 896 GiB to 900 GiB, size of 4 GiB : USCCODE_HEAP */
+#define ROGUE_USCCODE_HEAP_BASE 0xE000000000ull
+#define ROGUE_USCCODE_HEAP_SIZE SZ_4G
+
+/* 0xE1_0000_0000 - 0xE1_BFFF_FFFF */
+/* 900 GiB to 903 GiB, size of 3 GiB : RESERVED */
+
+/* 0xE1_C000_0000 - 0xE1_FFFF_FFFF */
+/* 903 GiB to 904 GiB, reserved 1 GiB : FIRMWARE_HEAP */
+#define ROGUE_FW_HEAP_BASE 0xE1C0000000ull
+
+/* 0xE2_0000_0000 - 0xE3_FFFF_FFFF */
+/* 904 GiB to 912 GiB, size of 8 GiB : FREE */
+
+/* 0xE4_0000_0000 - 0xE7_FFFF_FFFF */
+/* 912 GiB to 928 GiB, size of 16 GiB : TRANSFER_FRAG */
+#define ROGUE_TRANSFER_FRAG_HEAP_BASE 0xE400000000ull
+#define ROGUE_TRANSFER_FRAG_HEAP_SIZE SZ_16G
+
+/* 0xE8_0000_0000 - 0xF1_FFFF_FFFF */
+/* 928 GiB to 968 GiB, size of 40 GiB : RESERVED */
+
+/* 0xF2_0000_0000 - 0xF2_001F_FFFF **/
+/* 968 GiB to 969 GiB, size of 2 MiB : VISTEST_HEAP */
+#define ROGUE_VISTEST_HEAP_BASE 0xF200000000ull
+#define ROGUE_VISTEST_HEAP_SIZE SZ_2M
+
+/* 0xF2_4000_0000 - 0xF2_FFFF_FFFF */
+/* 969 GiB to 972 GiB, size of 3 GiB : FREE */
+
+/* 0xF3_0000_0000 - 0xFF_FFFF_FFFF */
+/* 972 GiB to 1024 GiB, size of 52 GiB : FREE */
+
+/* 0xFF_FFFF_FFFF ************************************************************/
+
+#endif /* PVR_ROGUE_HEAP_CONFIG_H */
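
Since the heap layout above is expressed as plain base/size constants, validating a
device-virtual address against a heap is simple arithmetic. A minimal sketch, assuming this
header is included; it is not part of the patch and the helper name is hypothetical:

#include <linux/sizes.h>
#include <linux/types.h>

/* True if dev_addr lies inside [GENERAL_HEAP_BASE, GENERAL_HEAP_BASE + SIZE). */
static bool example_in_general_heap(u64 dev_addr)
{
	return dev_addr >= ROGUE_GENERAL_HEAP_BASE &&
	       dev_addr < ROGUE_GENERAL_HEAP_BASE + ROGUE_GENERAL_HEAP_SIZE;
}
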
diff --git a/drivers/gpu/drm/imagination/pvr_rogue_meta.h b/drivers/gpu/drm/imagination/pvr_rogue_meta.h
new file mode 100644
index 000000000000..736e94618832
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_rogue_meta.h
@@ -0,0 +1,356 @@
+/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#ifndef PVR_ROGUE_META_H
+#define PVR_ROGUE_META_H
+
+/***** The META HW register definitions in the file are updated manually *****/
+
+#include <linux/bits.h>
+#include <linux/types.h>
+
+/*
+ ******************************************************************************
+ * META registers and MACROS
+ *****************************************************************************
+ */
+#define META_CR_CTRLREG_BASE(t) (0x04800000U + (0x1000U * (t)))
+
+#define META_CR_TXPRIVEXT (0x048000E8)
+#define META_CR_TXPRIVEXT_MINIM_EN BIT(7)
+
+#define META_CR_SYSC_JTAG_THREAD (0x04830030)
+#define META_CR_SYSC_JTAG_THREAD_PRIV_EN (0x00000004)
+
+#define META_CR_PERF_COUNT0 (0x0480FFE0)
+#define META_CR_PERF_COUNT1 (0x0480FFE8)
+#define META_CR_PERF_COUNT_CTRL_SHIFT (28)
+#define META_CR_PERF_COUNT_CTRL_MASK (0xF0000000)
+#define META_CR_PERF_COUNT_CTRL_DCACHEHITS (8 << META_CR_PERF_COUNT_CTRL_SHIFT)
+#define META_CR_PERF_COUNT_CTRL_ICACHEHITS (9 << META_CR_PERF_COUNT_CTRL_SHIFT)
+#define META_CR_PERF_COUNT_CTRL_ICACHEMISS \
+	(0xA << META_CR_PERF_COUNT_CTRL_SHIFT)
+#define META_CR_PERF_COUNT_CTRL_ICORE (0xD << META_CR_PERF_COUNT_CTRL_SHIFT)
+#define META_CR_PERF_COUNT_THR_SHIFT (24)
+#define META_CR_PERF_COUNT_THR_MASK (0x0F000000)
+#define META_CR_PERF_COUNT_THR_0 (0x1 << META_CR_PERF_COUNT_THR_SHIFT)
+#define META_CR_PERF_COUNT_THR_1 (0x2 << META_CR_PERF_COUNT_THR_SHIFT)
+
+#define META_CR_TxVECINT_BHALT (0x04820500)
+#define META_CR_PERF_ICORE0 (0x0480FFD0)
+#define META_CR_PERF_ICORE1 (0x0480FFD8)
+#define META_CR_PERF_ICORE_DCACHEMISS (0x8)
+
+#define META_CR_PERF_COUNT(ctrl, thr)                                        \
+	((META_CR_PERF_COUNT_CTRL_##ctrl << META_CR_PERF_COUNT_CTRL_SHIFT) | \
+	 ((thr) << META_CR_PERF_COUNT_THR_SHIFT))
+
+#define META_CR_TXUXXRXDT_OFFSET (META_CR_CTRLREG_BASE(0U) + 0x0000FFF0U)
+#define META_CR_TXUXXRXRQ_OFFSET (META_CR_CTRLREG_BASE(0U) + 0x0000FFF8U)
+
+/* Poll for done. */
+#define META_CR_TXUXXRXRQ_DREADY_BIT (0x80000000U)
+/* Set for read. */
+#define META_CR_TXUXXRXRQ_RDnWR_BIT (0x00010000U)
+#define META_CR_TXUXXRXRQ_TX_S (12)
+#define META_CR_TXUXXRXRQ_RX_S (4)
+#define META_CR_TXUXXRXRQ_UXX_S (0)
+
+/* Internal ctrl regs. */
+#define META_CR_TXUIN_ID (0x0)
+/* Data unit regs. */
+#define META_CR_TXUD0_ID (0x1)
+/* Data unit regs. */
+#define META_CR_TXUD1_ID (0x2)
+/* Address unit regs. */
+#define META_CR_TXUA0_ID (0x3)
+/* Address unit regs. */
+#define META_CR_TXUA1_ID (0x4)
+/* PC registers. */
+#define META_CR_TXUPC_ID (0x5)
+
+/* Macros to calculate register access values. */
+#define META_CR_CORE_REG(thr, reg_num, unit)          \
+	(((u32)(thr) << META_CR_TXUXXRXRQ_TX_S) |     \
+	 ((u32)(reg_num) << META_CR_TXUXXRXRQ_RX_S) | \
+	 ((u32)(unit) << META_CR_TXUXXRXRQ_UXX_S))
+
+#define META_CR_THR0_PC META_CR_CORE_REG(0, 0, META_CR_TXUPC_ID)
+#define META_CR_THR0_PCX META_CR_CORE_REG(0, 1, META_CR_TXUPC_ID)
+#define META_CR_THR0_SP META_CR_CORE_REG(0, 0, META_CR_TXUA0_ID)
+
+#define META_CR_THR1_PC META_CR_CORE_REG(1, 0, META_CR_TXUPC_ID)
+#define META_CR_THR1_PCX META_CR_CORE_REG(1, 1, META_CR_TXUPC_ID)
+#define META_CR_THR1_SP META_CR_CORE_REG(1, 0, META_CR_TXUA0_ID)
+
+#define SP_ACCESS(thread) META_CR_CORE_REG(thread, 0, META_CR_TXUA0_ID)
+#define PC_ACCESS(thread) META_CR_CORE_REG(thread, 0, META_CR_TXUPC_ID)
+
+#define META_CR_COREREG_ENABLE (0x0000000U)
+#define META_CR_COREREG_STATUS (0x0000010U)
+#define META_CR_COREREG_DEFR (0x00000A0U)
+#define META_CR_COREREG_PRIVEXT (0x00000E8U)
+
+#define META_CR_T0ENABLE_OFFSET \
+	(META_CR_CTRLREG_BASE(0U) + META_CR_COREREG_ENABLE)
+#define META_CR_T0STATUS_OFFSET \
+	(META_CR_CTRLREG_BASE(0U) + META_CR_COREREG_STATUS)
+#define META_CR_T0DEFR_OFFSET (META_CR_CTRLREG_BASE(0U) + META_CR_COREREG_DEFR)
+#define META_CR_T0PRIVEXT_OFFSET \
+	(META_CR_CTRLREG_BASE(0U) + META_CR_COREREG_PRIVEXT)
+
+#define META_CR_T1ENABLE_OFFSET \
+	(META_CR_CTRLREG_BASE(1U) + META_CR_COREREG_ENABLE)
+#define META_CR_T1STATUS_OFFSET \
+	(META_CR_CTRLREG_BASE(1U) + META_CR_COREREG_STATUS)
+#define META_CR_T1DEFR_OFFSET (META_CR_CTRLREG_BASE(1U) + META_CR_COREREG_DEFR)
+#define META_CR_T1PRIVEXT_OFFSET \
+	(META_CR_CTRLREG_BASE(1U) + META_CR_COREREG_PRIVEXT)
+
+#define META_CR_TXENABLE_ENABLE_BIT (0x00000001U) /* Set if running */
+#define META_CR_TXSTATUS_PRIV (0x00020000U)
+#define META_CR_TXPRIVEXT_MINIM (0x00000080U)
+
+#define META_MEM_GLOBAL_RANGE_BIT (0x80000000U)
+
+#define META_CR_TXCLKCTRL (0x048000B0)
+#define META_CR_TXCLKCTRL_ALL_ON (0x55111111)
+#define META_CR_TXCLKCTRL_ALL_AUTO (0xAA222222)
+
+#define META_CR_MMCU_LOCAL_EBCTRL (0x04830600)
+#define META_CR_MMCU_LOCAL_EBCTRL_ICWIN (0x3 << 14)
+#define META_CR_MMCU_LOCAL_EBCTRL_DCWIN (0x3 << 6)
+#define META_CR_SYSC_DCPART(n) (0x04830200 + (n) * 0x8)
+#define META_CR_SYSC_DCPARTX_CACHED_WRITE_ENABLE (0x1 << 31)
+#define META_CR_SYSC_ICPART(n) (0x04830220 + (n) * 0x8)
+#define META_CR_SYSC_XCPARTX_LOCAL_ADDR_OFFSET_TOP_HALF (0x8 << 16)
+#define META_CR_SYSC_XCPARTX_LOCAL_ADDR_FULL_CACHE (0xF)
+#define META_CR_SYSC_XCPARTX_LOCAL_ADDR_HALF_CACHE (0x7)
+#define META_CR_MMCU_DCACHE_CTRL (0x04830018)
+#define META_CR_MMCU_ICACHE_CTRL (0x04830020)
+#define META_CR_MMCU_XCACHE_CTRL_CACHE_HITS_EN (0x1)
+
+/*
+ ******************************************************************************
+ * META LDR Format
+ ******************************************************************************
+ */
+/* Block header structure. */
+struct rogue_meta_ldr_block_hdr {
+	u32 dev_id;
+	u32 sl_code;
+	u32 sl_data;
+	u16 pc_ctrl;
+	u16 crc;
+};
+
+/* High level data stream block structure. */
+struct rogue_meta_ldr_l1_data_blk {
+	u16 cmd;
+	u16 length;
+	u32 next;
+	u32 cmd_data[4];
+};
+
+/* High level data stream block structure. */
+struct rogue_meta_ldr_l2_data_blk {
+	u16 tag;
+	u16 length;
+	u32 block_data[4];
+};
+
+/* Config command structure. */
+struct rogue_meta_ldr_cfg_blk {
+	u32 type;
+	u32 block_data[4];
+};
+
+/* Block type definitions */
+#define ROGUE_META_LDR_COMMENT_TYPE_MASK (0x0010U)
+#define ROGUE_META_LDR_BLK_IS_COMMENT(x) (((x) & ROGUE_META_LDR_COMMENT_TYPE_MASK) != 0U)
+
+/*
+ * Command definitions
+ *  Value   Name            Description
+ *  0       LoadMem         Load memory with binary data.
+ *  1       LoadCore        Load a set of core registers.
+ *  2       LoadMMReg       Load a set of memory mapped registers.
+ *  3       StartThreads    Set each thread PC and SP, then enable threads.
+ *  4       ZeroMem         Zeros a memory region.
+ *  5       Config          Perform a configuration command.
+ */
+#define ROGUE_META_LDR_CMD_MASK (0x000FU)
+
+#define ROGUE_META_LDR_CMD_LOADMEM (0x0000U)
+#define ROGUE_META_LDR_CMD_LOADCORE (0x0001U)
+#define ROGUE_META_LDR_CMD_LOADMMREG (0x0002U)
+#define ROGUE_META_LDR_CMD_START_THREADS (0x0003U)
+#define ROGUE_META_LDR_CMD_ZEROMEM (0x0004U)
+#define ROGUE_META_LDR_CMD_CONFIG (0x0005U)
+
+/*
+ * Config Command definitions
+ *  Value   Name        Description
+ *  0       Pause       Pause for x times 100 instructions
+ *  1       Read        Read a value from register - No value return needed.
+ *                      Utilises effects of issuing reads to certain registers
+ *  2       Write       Write to mem location
+ *  3       MemSet      Set mem to value
+ *  4       MemCheck    check mem for specific value.
+ */
+#define ROGUE_META_LDR_CFG_PAUSE (0x0000)
+#define ROGUE_META_LDR_CFG_READ (0x0001)
+#define ROGUE_META_LDR_CFG_WRITE (0x0002)
+#define ROGUE_META_LDR_CFG_MEMSET (0x0003)
+#define ROGUE_META_LDR_CFG_MEMCHECK (0x0004)
+
+/*
+ ******************************************************************************
+ * ROGUE FW segmented MMU definitions
+ ******************************************************************************
+ */
+/* All threads can access the segment. */
+#define ROGUE_FW_SEGMMU_ALLTHRS (0xf << 8U)
+/* Writable. */
+#define ROGUE_FW_SEGMMU_WRITEABLE (0x1U << 1U)
+/* All threads can access and writable. */
+#define ROGUE_FW_SEGMMU_ALLTHRS_WRITEABLE \
+	(ROGUE_FW_SEGMMU_ALLTHRS | ROGUE_FW_SEGMMU_WRITEABLE)
+
+/* Direct map region 10 used for mapping GPU memory - max 8MB. */
+#define ROGUE_FW_SEGMMU_DMAP_GPU_ID (10U)
+#define ROGUE_FW_SEGMMU_DMAP_GPU_ADDR_START (0x07000000U)
+#define ROGUE_FW_SEGMMU_DMAP_GPU_MAX_SIZE (0x00800000U)
+
+/* Segment IDs. */
+#define ROGUE_FW_SEGMMU_DATA_ID (1U)
+#define ROGUE_FW_SEGMMU_BOOTLDR_ID (2U)
+#define ROGUE_FW_SEGMMU_TEXT_ID (ROGUE_FW_SEGMMU_BOOTLDR_ID)
+
+/*
+ * SLC caching strategy in S7 and volcanic is emitted through the segment MMU.
+ * All the segments configured through the macro ROGUE_FW_SEGMMU_OUTADDR_TOP are
+ * CACHED in the SLC.
+ * The interface has been kept the same to simplify the code changes.
+ * The bifdm argument is ignored (no longer relevant) in S7 and volcanic.
+ */
+#define ROGUE_FW_SEGMMU_OUTADDR_TOP_VIVT_SLC(pers, slc_policy, mmu_ctx)  \
+	((((u64)((pers) & 0x3)) << 52) | (((u64)((mmu_ctx) & 0xFF)) << 44) | \
+	 (((u64)((slc_policy) & 0x1)) << 40))
+#define ROGUE_FW_SEGMMU_OUTADDR_TOP_VIVT_SLC_CACHED(mmu_ctx) \
+	ROGUE_FW_SEGMMU_OUTADDR_TOP_VIVT_SLC(0x3, 0x0, mmu_ctx)
+#define ROGUE_FW_SEGMMU_OUTADDR_TOP_VIVT_SLC_UNCACHED(mmu_ctx) \
+	ROGUE_FW_SEGMMU_OUTADDR_TOP_VIVT_SLC(0x0, 0x1, mmu_ctx)
+
+/*
+ * To configure the Page Catalog and BIF-DM fed into the BIF for Garten
+ * accesses through this segment.
+ */
+#define ROGUE_FW_SEGMMU_OUTADDR_TOP_SLC(pc, bifdm) \
+	(((u64)((u64)(pc) & 0xFU) << 44U) | ((u64)((u64)(bifdm) & 0xFU) << 40U))
+
+#define ROGUE_FW_SEGMMU_META_BIFDM_ID (0x7U)
+
+/* META segments have 4kB minimum size. */
+#define ROGUE_FW_SEGMMU_ALIGN (0x1000U)
+
+/* Segmented MMU registers (n = segment id). */
+#define META_CR_MMCU_SEGMENT_N_BASE(n) (0x04850000U + ((n) * 0x10U))
+#define META_CR_MMCU_SEGMENT_N_LIMIT(n) (0x04850004U + ((n) * 0x10U))
+#define META_CR_MMCU_SEGMENT_N_OUTA0(n) (0x04850008U + ((n) * 0x10U))
+#define META_CR_MMCU_SEGMENT_N_OUTA1(n) (0x0485000CU + ((n) * 0x10U))
+
+/*
+ * The following defines must be recalculated if the Meta MMU segments used
+ * to access Host-FW data are changed
+ * Current combinations are:
+ * - SLC uncached, META cached,   FW base address 0x70000000
+ * - SLC uncached, META uncached, FW base address 0xF0000000
+ * - SLC cached,   META cached,   FW base address 0x10000000
+ * - SLC cached,   META uncached, FW base address 0x90000000
+ */
+#define ROGUE_FW_SEGMMU_DATA_BASE_ADDRESS (0x10000000U)
+#define ROGUE_FW_SEGMMU_DATA_META_CACHED (0x0U)
+#define ROGUE_FW_SEGMMU_DATA_META_UNCACHED (META_MEM_GLOBAL_RANGE_BIT)
+#define ROGUE_FW_SEGMMU_DATA_META_CACHE_MASK (META_MEM_GLOBAL_RANGE_BIT)
+/*
+ * For non-VIVT SLCs the cacheability of the FW data in the SLC is selected in
+ * the PTEs for the FW data, not in the Meta Segment MMU, which means these
+ * defines have no real effect in those cases.
+ */
+#define ROGUE_FW_SEGMMU_DATA_VIVT_SLC_CACHED (0x0U)
+#define ROGUE_FW_SEGMMU_DATA_VIVT_SLC_UNCACHED (0x60000000U)
+#define ROGUE_FW_SEGMMU_DATA_VIVT_SLC_CACHE_MASK (0x60000000U)
+
+/*
+ ******************************************************************************
+ * ROGUE FW Bootloader defaults
+ ******************************************************************************
+ */
+#define ROGUE_FW_BOOTLDR_META_ADDR (0x40000000U)
+#define ROGUE_FW_BOOTLDR_DEVV_ADDR_0 (0xC0000000U)
+#define ROGUE_FW_BOOTLDR_DEVV_ADDR_1 (0x000000E1)
+#define ROGUE_FW_BOOTLDR_DEVV_ADDR                     \
+	((((u64)ROGUE_FW_BOOTLDR_DEVV_ADDR_1) << 32) | \
+	 ROGUE_FW_BOOTLDR_DEVV_ADDR_0)
+#define ROGUE_FW_BOOTLDR_LIMIT (0x1FFFF000)
+#define ROGUE_FW_MAX_BOOTLDR_OFFSET (0x1000)
+
+/* Bootloader configuration offset is in dwords (512 bytes) */
+#define ROGUE_FW_BOOTLDR_CONF_OFFSET (0x80)
+
+/*
+ ******************************************************************************
+ * ROGUE META Stack
+ ******************************************************************************
+ */
+#define ROGUE_META_STACK_SIZE (0x1000U)
+
+/*
+ ******************************************************************************
+ * ROGUE META Core memory
+ ******************************************************************************
+ */
+/* Code and data both map to the same physical memory. */
+#define ROGUE_META_COREMEM_CODE_ADDR (0x80000000U)
+#define ROGUE_META_COREMEM_DATA_ADDR (0x82000000U)
+#define ROGUE_META_COREMEM_OFFSET_MASK (0x01ffffffU)
+
+#define ROGUE_META_IS_COREMEM_CODE(a, b)                                \
+	({                                                              \
+		u32 _a = (a), _b = (b);                                 \
+		((_a) >= ROGUE_META_COREMEM_CODE_ADDR) &&               \
+			((_a) < (ROGUE_META_COREMEM_CODE_ADDR + (_b))); \
+	})
+#define ROGUE_META_IS_COREMEM_DATA(a, b)                                \
+	({                                                              \
+		u32 _a = (a), _b = (b);                                 \
+		((_a) >= ROGUE_META_COREMEM_DATA_ADDR) &&               \
+			((_a) < (ROGUE_META_COREMEM_DATA_ADDR + (_b))); \
+	})
+/*
+ ******************************************************************************
+ * 2nd thread
+ ******************************************************************************
+ */
+#define ROGUE_FW_THR1_PC (0x18930000)
+#define ROGUE_FW_THR1_SP (0x78890000)
+
+/*
+ ******************************************************************************
+ * META compatibility
+ ******************************************************************************
+ */
+
+#define META_CR_CORE_ID (0x04831000)
+#define META_CR_CORE_ID_VER_SHIFT (16U)
+#define META_CR_CORE_ID_VER_CLRMSK (0XFF00FFFFU)
+
+#define ROGUE_CR_META_MTP218_CORE_ID_VALUE 0x19
+#define ROGUE_CR_META_MTP219_CORE_ID_VALUE 0x1E
+#define ROGUE_CR_META_LTP218_CORE_ID_VALUE 0x1C
+#define ROGUE_CR_META_LTP217_CORE_ID_VALUE 0x1F
+
+#define ROGUE_FW_PROCESSOR_META "META"
+
+#endif /* PVR_ROGUE_META_H */
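
The META_CR_CORE_REG() encoding together with the TXUXXRXRQ/TXUXXRXDT registers describes an
indirect core-register access path: write a request word, poll the DREADY bit, then read the
returned data. The sketch below is one plausible reading of those defines, not a statement of
how the driver itself performs the access; pvr_cr_read32()/pvr_cr_write32() are hypothetical
stand-ins for the driver's MMIO helpers:

#include <linux/processor.h>
#include <linux/types.h>

/* Hypothetical MMIO accessors standing in for the driver's real helpers. */
u32 pvr_cr_read32(u32 offset);
void pvr_cr_write32(u32 offset, u32 value);

/* Read the thread-0 program counter: issue a read request for the PC unit,
 * spin on DREADY, then fetch the value from the data register.
 */
static u32 example_read_thr0_pc(void)
{
	pvr_cr_write32(META_CR_TXUXXRXRQ_OFFSET,
		       META_CR_THR0_PC | META_CR_TXUXXRXRQ_RDnWR_BIT);

	while (!(pvr_cr_read32(META_CR_TXUXXRXRQ_OFFSET) &
		 META_CR_TXUXXRXRQ_DREADY_BIT))
		cpu_relax();

	return pvr_cr_read32(META_CR_TXUXXRXDT_OFFSET);
}
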
diff --git a/drivers/gpu/drm/imagination/pvr_rogue_mips.h b/drivers/gpu/drm/imagination/pvr_rogue_mips.h
new file mode 100644
index 000000000000..fe5167bf7fba
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_rogue_mips.h
@@ -0,0 +1,335 @@
+/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#ifndef PVR_ROGUE_MIPS_H
+#define PVR_ROGUE_MIPS_H
+
+#include <linux/bits.h>
+#include <linux/types.h>
+
+/* Utility defines for memory management. */
+#define ROGUE_MIPSFW_LOG2_PAGE_SIZE_4K (12)
+#define ROGUE_MIPSFW_PAGE_SIZE_4K (0x1 << ROGUE_MIPSFW_LOG2_PAGE_SIZE_4K)
+#define ROGUE_MIPSFW_PAGE_MASK_4K (ROGUE_MIPSFW_PAGE_SIZE_4K - 1)
+#define ROGUE_MIPSFW_LOG2_PAGE_SIZE_64K (16)
+#define ROGUE_MIPSFW_PAGE_SIZE_64K (0x1 << ROGUE_MIPSFW_LOG2_PAGE_SIZE_64K)
+#define ROGUE_MIPSFW_PAGE_MASK_64K (ROGUE_MIPSFW_PAGE_SIZE_64K - 1)
+#define ROGUE_MIPSFW_LOG2_PAGE_SIZE_256K (18)
+#define ROGUE_MIPSFW_PAGE_SIZE_256K (0x1 << ROGUE_MIPSFW_LOG2_PAGE_SIZE_256K)
+#define ROGUE_MIPSFW_PAGE_MASK_256K (ROGUE_MIPSFW_PAGE_SIZE_256K - 1)
+#define ROGUE_MIPSFW_LOG2_PAGE_SIZE_1MB (20)
+#define ROGUE_MIPSFW_PAGE_SIZE_1MB (0x1 << ROGUE_MIPSFW_LOG2_PAGE_SIZE_1MB)
+#define ROGUE_MIPSFW_PAGE_MASK_1MB (ROGUE_MIPSFW_PAGE_SIZE_1MB - 1)
+#define ROGUE_MIPSFW_LOG2_PAGE_SIZE_4MB (22)
+#define ROGUE_MIPSFW_PAGE_SIZE_4MB (0x1 << ROGUE_MIPSFW_LOG2_PAGE_SIZE_4MB)
+#define ROGUE_MIPSFW_PAGE_MASK_4MB (ROGUE_MIPSFW_PAGE_SIZE_4MB - 1)
+#define ROGUE_MIPSFW_LOG2_PTE_ENTRY_SIZE (2)
+/* log2 page table sizes dependent on FW heap size and page size (for each OS). */
+#define ROGUE_MIPSFW_LOG2_PAGETABLE_SIZE_4K(pvr_dev) ((pvr_dev)->fw_dev.fw_heap_info.log2_size - \
+						      ROGUE_MIPSFW_LOG2_PAGE_SIZE_4K +    \
+						      ROGUE_MIPSFW_LOG2_PTE_ENTRY_SIZE)
+#define ROGUE_MIPSFW_LOG2_PAGETABLE_SIZE_64K(pvr_dev) ((pvr_dev)->fw_dev.fw_heap_info.log2_size - \
+						       ROGUE_MIPSFW_LOG2_PAGE_SIZE_64K +   \
+						       ROGUE_MIPSFW_LOG2_PTE_ENTRY_SIZE)
+/* Maximum number of page table pages (both Host and MIPS pages). */
+#define ROGUE_MIPSFW_MAX_NUM_PAGETABLE_PAGES (4)
+/* Total number of TLB entries. */
+#define ROGUE_MIPSFW_NUMBER_OF_TLB_ENTRIES (16)
+/* "Uncached" caching policy. */
+#define ROGUE_MIPSFW_UNCACHED_CACHE_POLICY (2)
+/* "Write-back write-allocate" caching policy. */
+#define ROGUE_MIPSFW_WRITEBACK_CACHE_POLICY (3)
+/* "Write-through no write-allocate" caching policy. */
+#define ROGUE_MIPSFW_WRITETHROUGH_CACHE_POLICY (1)
+/* Cached policy used by MIPS in case of physical bus on 32 bit. */
+#define ROGUE_MIPSFW_CACHED_POLICY (ROGUE_MIPSFW_WRITEBACK_CACHE_POLICY)
+/* Cached policy used by MIPS in case of physical bus on more than 32 bit. */
+#define ROGUE_MIPSFW_CACHED_POLICY_ABOVE_32BIT (ROGUE_MIPSFW_WRITETHROUGH_CACHE_POLICY)
+/* Total number of Remap entries. */
+#define ROGUE_MIPSFW_NUMBER_OF_REMAP_ENTRIES (2 * ROGUE_MIPSFW_NUMBER_OF_TLB_ENTRIES)
+
+/* MIPS EntryLo/PTE format. */
+
+#define ROGUE_MIPSFW_ENTRYLO_READ_INHIBIT_SHIFT (31U)
+#define ROGUE_MIPSFW_ENTRYLO_READ_INHIBIT_CLRMSK (0X7FFFFFFF)
+#define ROGUE_MIPSFW_ENTRYLO_READ_INHIBIT_EN (0X80000000)
+
+#define ROGUE_MIPSFW_ENTRYLO_EXEC_INHIBIT_SHIFT (30U)
+#define ROGUE_MIPSFW_ENTRYLO_EXEC_INHIBIT_CLRMSK (0XBFFFFFFF)
+#define ROGUE_MIPSFW_ENTRYLO_EXEC_INHIBIT_EN (0X40000000)
+
+/* Page Frame Number */
+#define ROGUE_MIPSFW_ENTRYLO_PFN_SHIFT (6)
+#define ROGUE_MIPSFW_ENTRYLO_PFN_ALIGNSHIFT (12)
+/* Mask used for the MIPS Page Table in case of physical bus on 32 bit. */
+#define ROGUE_MIPSFW_ENTRYLO_PFN_MASK (0x03FFFFC0)
+#define ROGUE_MIPSFW_ENTRYLO_PFN_SIZE (20)
+/* Mask used for the MIPS Page Table in case of physical bus on more than 32 bit. */
+#define ROGUE_MIPSFW_ENTRYLO_PFN_MASK_ABOVE_32BIT (0x3FFFFFC0)
+#define ROGUE_MIPSFW_ENTRYLO_PFN_SIZE_ABOVE_32BIT (24)
+#define ROGUE_MIPSFW_ADDR_TO_ENTRYLO_PFN_RSHIFT (ROGUE_MIPSFW_ENTRYLO_PFN_ALIGNSHIFT - \
+						 ROGUE_MIPSFW_ENTRYLO_PFN_SHIFT)
+
+#define ROGUE_MIPSFW_ENTRYLO_CACHE_POLICY_SHIFT (3U)
+#define ROGUE_MIPSFW_ENTRYLO_CACHE_POLICY_CLRMSK (0XFFFFFFC7)
+
+#define ROGUE_MIPSFW_ENTRYLO_DIRTY_SHIFT (2U)
+#define ROGUE_MIPSFW_ENTRYLO_DIRTY_CLRMSK (0XFFFFFFFB)
+#define ROGUE_MIPSFW_ENTRYLO_DIRTY_EN (0X00000004)
+
+#define ROGUE_MIPSFW_ENTRYLO_VALID_SHIFT (1U)
+#define ROGUE_MIPSFW_ENTRYLO_VALID_CLRMSK (0XFFFFFFFD)
+#define ROGUE_MIPSFW_ENTRYLO_VALID_EN (0X00000002)
+
+#define ROGUE_MIPSFW_ENTRYLO_GLOBAL_SHIFT (0U)
+#define ROGUE_MIPSFW_ENTRYLO_GLOBAL_CLRMSK (0XFFFFFFFE)
+#define ROGUE_MIPSFW_ENTRYLO_GLOBAL_EN (0X00000001)
+
+#define ROGUE_MIPSFW_ENTRYLO_DVG (ROGUE_MIPSFW_ENTRYLO_DIRTY_EN | \
+				  ROGUE_MIPSFW_ENTRYLO_VALID_EN | \
+				  ROGUE_MIPSFW_ENTRYLO_GLOBAL_EN)
+#define ROGUE_MIPSFW_ENTRYLO_UNCACHED (ROGUE_MIPSFW_UNCACHED_CACHE_POLICY << \
+				       ROGUE_MIPSFW_ENTRYLO_CACHE_POLICY_SHIFT)
+#define ROGUE_MIPSFW_ENTRYLO_DVG_UNCACHED (ROGUE_MIPSFW_ENTRYLO_DVG | \
+					   ROGUE_MIPSFW_ENTRYLO_UNCACHED)
+
+/* Remap Range Config Addr Out. */
+/* These defines refer to the upper half of the Remap Range Config register. */
+#define ROGUE_MIPSFW_REMAP_RANGE_ADDR_OUT_MASK (0x0FFFFFF0)
+#define ROGUE_MIPSFW_REMAP_RANGE_ADDR_OUT_SHIFT (4) /* wrt upper half of the register. */
+#define ROGUE_MIPSFW_REMAP_RANGE_ADDR_OUT_ALIGNSHIFT (12)
+#define ROGUE_MIPSFW_ADDR_TO_RR_ADDR_OUT_RSHIFT (ROGUE_MIPSFW_REMAP_RANGE_ADDR_OUT_ALIGNSHIFT - \
+						 ROGUE_MIPSFW_REMAP_RANGE_ADDR_OUT_SHIFT)
+
+/*
+ * Pages to trampoline problematic physical addresses:
+ *   - ROGUE_MIPSFW_BOOT_REMAP_PHYS_ADDR_IN : 0x1FC0_0000
+ *   - ROGUE_MIPSFW_DATA_REMAP_PHYS_ADDR_IN : 0x1FC0_1000
+ *   - ROGUE_MIPSFW_CODE_REMAP_PHYS_ADDR_IN : 0x1FC0_2000
+ *   - (benign trampoline)               : 0x1FC0_3000
+ * that would otherwise be erroneously remapped by the MIPS wrapper.
+ * (see "Firmware virtual layout and remap configuration" section below)
+ */
+
+#define ROGUE_MIPSFW_TRAMPOLINE_LOG2_NUMPAGES (2)
+#define ROGUE_MIPSFW_TRAMPOLINE_NUMPAGES BIT(ROGUE_MIPSFW_TRAMPOLINE_LOG2_NUMPAGES)
+#define ROGUE_MIPSFW_TRAMPOLINE_SIZE (ROGUE_MIPSFW_TRAMPOLINE_NUMPAGES << \
+				      ROGUE_MIPSFW_LOG2_PAGE_SIZE_4K)
+#define ROGUE_MIPSFW_TRAMPOLINE_LOG2_SEGMENT_SIZE (ROGUE_MIPSFW_TRAMPOLINE_LOG2_NUMPAGES + \
+						   ROGUE_MIPSFW_LOG2_PAGE_SIZE_4K)
+
+#define ROGUE_MIPSFW_TRAMPOLINE_TARGET_PHYS_ADDR (ROGUE_MIPSFW_BOOT_REMAP_PHYS_ADDR_IN)
+#define ROGUE_MIPSFW_TRAMPOLINE_OFFSET(a) ((a) - ROGUE_MIPSFW_BOOT_REMAP_PHYS_ADDR_IN)
+
+#define ROGUE_MIPSFW_SENSITIVE_ADDR(a) (ROGUE_MIPSFW_BOOT_REMAP_PHYS_ADDR_IN == \
+					(~((1 << ROGUE_MIPSFW_TRAMPOLINE_LOG2_SEGMENT_SIZE) - 1) \
+					 & (a)))
+
+/* Firmware virtual layout and remap configuration. */
+/*
+ * For each remap region we define:
+ * - the virtual base used by the Firmware to access code/data through that region
+ * - the microAptivAP physical address corresponding to the virtual base address,
+ *   used as the input address and remapped to the actual physical address
+ * - log2 of size of the region remapped by the MIPS wrapper, i.e. number of bits from
+ *   the bottom of the base input address that survive onto the output address
+ *   (this defines both the alignment and the maximum size of the remapped region)
+ * - one or more code/data segments within the remapped region.
+ */
+
+/* Boot remap setup. */
+#define ROGUE_MIPSFW_BOOT_REMAP_VIRTUAL_BASE (0xBFC00000)
+#define ROGUE_MIPSFW_BOOT_REMAP_PHYS_ADDR_IN (0x1FC00000)
+#define ROGUE_MIPSFW_BOOT_REMAP_LOG2_SEGMENT_SIZE (12)
+#define ROGUE_MIPSFW_BOOT_NMI_CODE_VIRTUAL_BASE (ROGUE_MIPSFW_BOOT_REMAP_VIRTUAL_BASE)
+
+/* Data remap setup. */
+#define ROGUE_MIPSFW_DATA_REMAP_VIRTUAL_BASE (0xBFC01000)
+#define ROGUE_MIPSFW_DATA_CACHED_REMAP_VIRTUAL_BASE (0x9FC01000)
+#define ROGUE_MIPSFW_DATA_REMAP_PHYS_ADDR_IN (0x1FC01000)
+#define ROGUE_MIPSFW_DATA_REMAP_LOG2_SEGMENT_SIZE (12)
+#define ROGUE_MIPSFW_BOOT_NMI_DATA_VIRTUAL_BASE (ROGUE_MIPSFW_DATA_REMAP_VIRTUAL_BASE)
+
+/* Code remap setup. */
+#define ROGUE_MIPSFW_CODE_REMAP_VIRTUAL_BASE (0x9FC02000)
+#define ROGUE_MIPSFW_CODE_REMAP_PHYS_ADDR_IN (0x1FC02000)
+#define ROGUE_MIPSFW_CODE_REMAP_LOG2_SEGMENT_SIZE (12)
+#define ROGUE_MIPSFW_EXCEPTIONS_VIRTUAL_BASE (ROGUE_MIPSFW_CODE_REMAP_VIRTUAL_BASE)
+
+/* Permanent mappings setup. */
+#define ROGUE_MIPSFW_PT_VIRTUAL_BASE (0xCF000000)
+#define ROGUE_MIPSFW_REGISTERS_VIRTUAL_BASE (0xCF800000)
+#define ROGUE_MIPSFW_STACK_VIRTUAL_BASE (0xCF600000)
+
+/* Bootloader configuration data. */
+/*
+ * Bootloader configuration offset (where struct rogue_mipsfw_boot_data lives)
+ * within the bootloader/NMI data page.
+ */
+#define ROGUE_MIPSFW_BOOTLDR_CONF_OFFSET (0x0)
+
+/* NMI shared data. */
+/* Base address of the shared data within the bootloader/NMI data page. */
+#define ROGUE_MIPSFW_NMI_SHARED_DATA_BASE (0x100)
+/* Size used by Debug dump data. */
+#define ROGUE_MIPSFW_NMI_SHARED_SIZE (0x2B0)
+/* Offsets in the NMI shared area in 32-bit words. */
+#define ROGUE_MIPSFW_NMI_SYNC_FLAG_OFFSET (0x0)
+#define ROGUE_MIPSFW_NMI_STATE_OFFSET (0x1)
+#define ROGUE_MIPSFW_NMI_ERROR_STATE_SET (0x1)
+
+/* MIPS boot stage. */
+#define ROGUE_MIPSFW_BOOT_STAGE_OFFSET (0x400)
+
+/*
+ * MIPS private data in the bootloader data page.
+ * Memory below this offset is used by the FW only, no interface data allowed.
+ */
+#define ROGUE_MIPSFW_PRIVATE_DATA_OFFSET (0x800)
+
+struct rogue_mipsfw_boot_data {
+	u64 stack_phys_addr;
+	u64 reg_base;
+	u64 pt_phys_addr[ROGUE_MIPSFW_MAX_NUM_PAGETABLE_PAGES];
+	u32 pt_log2_page_size;
+	u32 pt_num_pages;
+	u32 reserved1;
+	u32 reserved2;
+};
+
+#define ROGUE_MIPSFW_GET_OFFSET_IN_DWORDS(offset) ((offset) / sizeof(u32))
+#define ROGUE_MIPSFW_GET_OFFSET_IN_QWORDS(offset) ((offset) / sizeof(u64))
+
+/* Used for compatibility checks. */
+#define ROGUE_MIPSFW_ARCHTYPE_VER_CLRMSK (0xFFFFE3FFU)
+#define ROGUE_MIPSFW_ARCHTYPE_VER_SHIFT (10U)
+#define ROGUE_MIPSFW_CORE_ID_VALUE (0x001U)
+#define ROGUE_FW_PROCESSOR_MIPS "MIPS"
+
+/* microAptivAP cache line size. */
+#define ROGUE_MIPSFW_MICROAPTIVEAP_CACHELINE_SIZE (16U)
+
+/*
+ * The SOCIF transactions are identified with the top 16 bits of the physical address emitted by
+ * the MIPS.
+ */
+#define ROGUE_MIPSFW_WRAPPER_CONFIG_REGBANK_ADDR_ALIGN (16U)
+
+/* Values to put in the MIPS selectors for performance counters. */
+/* Icache accesses in COUNTER0. */
+#define ROGUE_MIPSFW_PERF_COUNT_CTRL_ICACHE_ACCESSES_C0 (9U)
+/* Icache misses in COUNTER1. */
+#define ROGUE_MIPSFW_PERF_COUNT_CTRL_ICACHE_MISSES_C1 (9U)
+
+/* Dcache accesses in COUNTER0. */
+#define ROGUE_MIPSFW_PERF_COUNT_CTRL_DCACHE_ACCESSES_C0 (10U)
+/* Dcache misses in COUNTER1. */
+#define ROGUE_MIPSFW_PERF_COUNT_CTRL_DCACHE_MISSES_C1 (11U)
+
+/* ITLB instruction accesses in COUNTER0. */
+#define ROGUE_MIPSFW_PERF_COUNT_CTRL_ITLB_INSTR_ACCESSES_C0 (5U)
+/* JTLB instruction accesses misses in COUNTER1. */
+#define ROGUE_MIPSFW_PERF_COUNT_CTRL_JTLB_INSTR_MISSES_C1 (7U)
+
+/* Instructions completed in COUNTER0. */
+#define ROGUE_MIPSFW_PERF_COUNT_CTRL_INSTR_COMPLETED_C0 (1U)
+/* JTLB data misses in COUNTER1. */
+#define ROGUE_MIPSFW_PERF_COUNT_CTRL_JTLB_DATA_MISSES_C1 (8U)
+
+/* Shift for the Event field in the MIPS perf ctrl registers. */
+#define ROGUE_MIPSFW_PERF_COUNT_CTRL_EVENT_SHIFT (5U)
+
+/* Additional flags for performance counters. See MIPS manual for further reference. */
+#define ROGUE_MIPSFW_PERF_COUNT_CTRL_COUNT_USER_MODE (8U)
+#define ROGUE_MIPSFW_PERF_COUNT_CTRL_COUNT_KERNEL_MODE (2U)
+#define ROGUE_MIPSFW_PERF_COUNT_CTRL_COUNT_EXL (1U)
+
+#define ROGUE_MIPSFW_C0_NBHWIRQ	8
+
+/* Macros to decode C0_Cause register. */
+#define ROGUE_MIPSFW_C0_CAUSE_EXCCODE(cause) (((cause) & 0x7c) >> 2)
+#define ROGUE_MIPSFW_C0_CAUSE_EXCCODE_FWERROR 9
+/* Use only when Coprocessor Unusable exception. */
+#define ROGUE_MIPSFW_C0_CAUSE_UNUSABLE_UNIT(cause) (((cause) >> 28) & 0x3)
+#define ROGUE_MIPSFW_C0_CAUSE_PENDING_HWIRQ(cause) (((cause) & 0x3fc00) >> 10)
+#define ROGUE_MIPSFW_C0_CAUSE_FDCIPENDING BIT(21)
+#define ROGUE_MIPSFW_C0_CAUSE_IV BIT(23)
+#define ROGUE_MIPSFW_C0_CAUSE_IC BIT(25)
+#define ROGUE_MIPSFW_C0_CAUSE_PCIPENDING BIT(26)
+#define ROGUE_MIPSFW_C0_CAUSE_TIPENDING BIT(30)
+#define ROGUE_MIPSFW_C0_CAUSE_BRANCH_DELAY BIT(31)
+
+/* Macros to decode C0_Debug register. */
+#define ROGUE_MIPSFW_C0_DEBUG_EXCCODE(debug) (((debug) >> 10) & 0x1f)
+#define ROGUE_MIPSFW_C0_DEBUG_DSS BIT(0)
+#define ROGUE_MIPSFW_C0_DEBUG_DBP BIT(1)
+#define ROGUE_MIPSFW_C0_DEBUG_DDBL BIT(2)
+#define ROGUE_MIPSFW_C0_DEBUG_DDBS BIT(3)
+#define ROGUE_MIPSFW_C0_DEBUG_DIB BIT(4)
+#define ROGUE_MIPSFW_C0_DEBUG_DINT BIT(5)
+#define ROGUE_MIPSFW_C0_DEBUG_DIBIMPR BIT(6)
+#define ROGUE_MIPSFW_C0_DEBUG_DDBLIMPR BIT(18)
+#define ROGUE_MIPSFW_C0_DEBUG_DDBSIMPR BIT(19)
+#define ROGUE_MIPSFW_C0_DEBUG_IEXI BIT(20)
+#define ROGUE_MIPSFW_C0_DEBUG_DBUSEP BIT(21)
+#define ROGUE_MIPSFW_C0_DEBUG_CACHEEP BIT(22)
+#define ROGUE_MIPSFW_C0_DEBUG_MCHECKP BIT(23)
+#define ROGUE_MIPSFW_C0_DEBUG_IBUSEP BIT(24)
+#define ROGUE_MIPSFW_C0_DEBUG_DM BIT(30)
+#define ROGUE_MIPSFW_C0_DEBUG_DBD BIT(31)
+
+/* Macros to decode TLB entries. */
+#define ROGUE_MIPSFW_TLB_GET_MASK(page_mask) (((page_mask) >> 13) & 0XFFFFU)
+/* Page size in KB. */
+#define ROGUE_MIPSFW_TLB_GET_PAGE_SIZE(page_mask) ((((page_mask) | 0x1FFF) + 1) >> 11)
+/* Page mask from page size in KB. */
+#define ROGUE_MIPSFW_TLB_GET_PAGE_MASK(page_size) ((((page_size) << 11) - 1) & ~0x7FF)
+#define ROGUE_MIPSFW_TLB_GET_VPN2(entry_hi) ((entry_hi) >> 13)
+#define ROGUE_MIPSFW_TLB_GET_COHERENCY(entry_lo) (((entry_lo) >> 3) & 0x7U)
+#define ROGUE_MIPSFW_TLB_GET_PFN(entry_lo) (((entry_lo) >> 6) & 0XFFFFFU)
+/* GET_PA uses a non-standard PFN mask for 36 bit addresses. */
+#define ROGUE_MIPSFW_TLB_GET_PA(entry_lo) (((u64)(entry_lo) & \
+					    ROGUE_MIPSFW_ENTRYLO_PFN_MASK_ABOVE_32BIT) << 6)
+#define ROGUE_MIPSFW_TLB_GET_INHIBIT(entry_lo) (((entry_lo) >> 30) & 0x3U)
+#define ROGUE_MIPSFW_TLB_GET_DGV(entry_lo) ((entry_lo) & 0x7U)
+#define ROGUE_MIPSFW_TLB_GLOBAL BIT(0)
+#define ROGUE_MIPSFW_TLB_VALID BIT(1)
+#define ROGUE_MIPSFW_TLB_DIRTY BIT(2)
+#define ROGUE_MIPSFW_TLB_XI BIT(30)
+#define ROGUE_MIPSFW_TLB_RI BIT(31)
+
+#define ROGUE_MIPSFW_REMAP_GET_REGION_SIZE(region_size_encoding) (1 << (((region_size_encoding) \
+									+ 1) << 1))
+
+struct rogue_mips_tlb_entry {
+	u32 tlb_page_mask;
+	u32 tlb_hi;
+	u32 tlb_lo0;
+	u32 tlb_lo1;
+};
+
+struct rogue_mips_remap_entry {
+	u32 remap_addr_in;  /* Always 4k aligned. */
+	u32 remap_addr_out; /* Always 4k aligned. */
+	u32 remap_region_size;
+};
+
+struct rogue_mips_state {
+	u32 error_state; /* This must come first in the structure. */
+	u32 error_epc;
+	u32 status_register;
+	u32 cause_register;
+	u32 bad_register;
+	u32 epc;
+	u32 sp;
+	u32 debug;
+	u32 depc;
+	u32 bad_instr;
+	u32 unmapped_address;
+	struct rogue_mips_tlb_entry tlb[ROGUE_MIPSFW_NUMBER_OF_TLB_ENTRIES];
+	struct rogue_mips_remap_entry remap[ROGUE_MIPSFW_NUMBER_OF_REMAP_ENTRIES];
+};
+
+#include "pvr_rogue_mips_check.h"
+
+#endif /* PVR_ROGUE_MIPS_H */
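
The TLB decode macros above turn the raw EntryHi/EntryLo words captured in
struct rogue_mips_state into readable fields, which is what a debug dump consumer would do.
A small hedged sketch, assuming this header is included; the helper name is hypothetical and
this is not part of the patch:

#include <linux/types.h>

/* Decode one captured TLB entry into its page size (in KB), VPN2 and the
 * physical addresses of both entry halves. Illustrative only.
 */
static void example_decode_tlb_entry(const struct rogue_mips_tlb_entry *e,
				     u32 *page_size_kb, u32 *vpn2,
				     u64 *pa0, u64 *pa1)
{
	*page_size_kb = ROGUE_MIPSFW_TLB_GET_PAGE_SIZE(e->tlb_page_mask);
	*vpn2 = ROGUE_MIPSFW_TLB_GET_VPN2(e->tlb_hi);
	*pa0 = ROGUE_MIPSFW_TLB_GET_PA(e->tlb_lo0);
	*pa1 = ROGUE_MIPSFW_TLB_GET_PA(e->tlb_lo1);
}
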
diff --git a/drivers/gpu/drm/imagination/pvr_rogue_mips_check.h b/drivers/gpu/drm/imagination/pvr_rogue_mips_check.h
new file mode 100644
index 000000000000..efad38039cae
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_rogue_mips_check.h
@@ -0,0 +1,58 @@
+/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#ifndef PVR_ROGUE_MIPS_CHECK_H
+#define PVR_ROGUE_MIPS_CHECK_H
+
+#include <linux/build_bug.h>
+
+static_assert(offsetof(struct rogue_mips_tlb_entry, tlb_page_mask) == 0,
+	      "offsetof(struct rogue_mips_tlb_entry, tlb_page_mask) incorrect");
+static_assert(offsetof(struct rogue_mips_tlb_entry, tlb_hi) == 4,
+	      "offsetof(struct rogue_mips_tlb_entry, tlb_hi) incorrect");
+static_assert(offsetof(struct rogue_mips_tlb_entry, tlb_lo0) == 8,
+	      "offsetof(struct rogue_mips_tlb_entry, tlb_lo0) incorrect");
+static_assert(offsetof(struct rogue_mips_tlb_entry, tlb_lo1) == 12,
+	      "offsetof(struct rogue_mips_tlb_entry, tlb_lo1) incorrect");
+static_assert(sizeof(struct rogue_mips_tlb_entry) == 16,
+	      "struct rogue_mips_tlb_entry is incorrect size");
+
+static_assert(offsetof(struct rogue_mips_remap_entry, remap_addr_in) == 0,
+	      "offsetof(struct rogue_mips_remap_entry, remap_addr_in) incorrect");
+static_assert(offsetof(struct rogue_mips_remap_entry, remap_addr_out) == 4,
+	      "offsetof(struct rogue_mips_remap_entry, remap_addr_out) incorrect");
+static_assert(offsetof(struct rogue_mips_remap_entry, remap_region_size) == 8,
+	      "offsetof(struct rogue_mips_remap_entry, remap_region_size) incorrect");
+static_assert(sizeof(struct rogue_mips_remap_entry) == 12,
+	      "struct rogue_mips_remap_entry is incorrect size");
+
+static_assert(offsetof(struct rogue_mips_state, error_state) == 0,
+	      "offsetof(struct rogue_mips_state, error_state) incorrect");
+static_assert(offsetof(struct rogue_mips_state, error_epc) == 4,
+	      "offsetof(struct rogue_mips_state, error_epc) incorrect");
+static_assert(offsetof(struct rogue_mips_state, status_register) == 8,
+	      "offsetof(struct rogue_mips_state, status_register) incorrect");
+static_assert(offsetof(struct rogue_mips_state, cause_register) == 12,
+	      "offsetof(struct rogue_mips_state, cause_register) incorrect");
+static_assert(offsetof(struct rogue_mips_state, bad_register) == 16,
+	      "offsetof(struct rogue_mips_state, bad_register) incorrect");
+static_assert(offsetof(struct rogue_mips_state, epc) == 20,
+	      "offsetof(struct rogue_mips_state, epc) incorrect");
+static_assert(offsetof(struct rogue_mips_state, sp) == 24,
+	      "offsetof(struct rogue_mips_state, sp) incorrect");
+static_assert(offsetof(struct rogue_mips_state, debug) == 28,
+	      "offsetof(struct rogue_mips_state, debug) incorrect");
+static_assert(offsetof(struct rogue_mips_state, depc) == 32,
+	      "offsetof(struct rogue_mips_state, depc) incorrect");
+static_assert(offsetof(struct rogue_mips_state, bad_instr) == 36,
+	      "offsetof(struct rogue_mips_state, bad_instr) incorrect");
+static_assert(offsetof(struct rogue_mips_state, unmapped_address) == 40,
+	      "offsetof(struct rogue_mips_state, unmapped_address) incorrect");
+static_assert(offsetof(struct rogue_mips_state, tlb) == 44,
+	      "offsetof(struct rogue_mips_state, tlb) incorrect");
+static_assert(offsetof(struct rogue_mips_state, remap) == 300,
+	      "offsetof(struct rogue_mips_state, remap) incorrect");
+static_assert(sizeof(struct rogue_mips_state) == 684,
+	      "struct rogue_mips_state is incorrect size");
+
+#endif /* PVR_ROGUE_MIPS_CHECK_H */
diff --git a/drivers/gpu/drm/imagination/pvr_rogue_mmu_defs.h b/drivers/gpu/drm/imagination/pvr_rogue_mmu_defs.h
new file mode 100644
index 000000000000..cd28cded2741
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_rogue_mmu_defs.h
@@ -0,0 +1,136 @@
+/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+/*  *** Autogenerated C -- do not edit ***  */
+
+#ifndef PVR_ROGUE_MMU_DEFS_H
+#define PVR_ROGUE_MMU_DEFS_H
+
+#define ROGUE_MMU_DEFS_REVISION 0
+
+#define ROGUE_BIF_DM_ENCODING_VERTEX (0x00000000U)
+#define ROGUE_BIF_DM_ENCODING_PIXEL (0x00000001U)
+#define ROGUE_BIF_DM_ENCODING_COMPUTE (0x00000002U)
+#define ROGUE_BIF_DM_ENCODING_TLA (0x00000003U)
+#define ROGUE_BIF_DM_ENCODING_PB_VCE (0x00000004U)
+#define ROGUE_BIF_DM_ENCODING_PB_TE (0x00000005U)
+#define ROGUE_BIF_DM_ENCODING_META (0x00000007U)
+#define ROGUE_BIF_DM_ENCODING_HOST (0x00000008U)
+#define ROGUE_BIF_DM_ENCODING_PM_ALIST (0x00000009U)
+
+#define ROGUE_MMUCTRL_VADDR_PC_INDEX_SHIFT (30U)
+#define ROGUE_MMUCTRL_VADDR_PC_INDEX_CLRMSK (0xFFFFFF003FFFFFFFULL)
+#define ROGUE_MMUCTRL_VADDR_PD_INDEX_SHIFT (21U)
+#define ROGUE_MMUCTRL_VADDR_PD_INDEX_CLRMSK (0xFFFFFFFFC01FFFFFULL)
+#define ROGUE_MMUCTRL_VADDR_PT_INDEX_SHIFT (12U)
+#define ROGUE_MMUCTRL_VADDR_PT_INDEX_CLRMSK (0xFFFFFFFFFFE00FFFULL)
+
+#define ROGUE_MMUCTRL_ENTRIES_PC_VALUE (0x00000400U)
+#define ROGUE_MMUCTRL_ENTRIES_PD_VALUE (0x00000200U)
+#define ROGUE_MMUCTRL_ENTRIES_PT_VALUE (0x00000200U)
+
+#define ROGUE_MMUCTRL_ENTRY_SIZE_PC_VALUE (0x00000020U)
+#define ROGUE_MMUCTRL_ENTRY_SIZE_PD_VALUE (0x00000040U)
+#define ROGUE_MMUCTRL_ENTRY_SIZE_PT_VALUE (0x00000040U)
+
+#define ROGUE_MMUCTRL_PAGE_SIZE_MASK (0x00000007U)
+#define ROGUE_MMUCTRL_PAGE_SIZE_4KB (0x00000000U)
+#define ROGUE_MMUCTRL_PAGE_SIZE_16KB (0x00000001U)
+#define ROGUE_MMUCTRL_PAGE_SIZE_64KB (0x00000002U)
+#define ROGUE_MMUCTRL_PAGE_SIZE_256KB (0x00000003U)
+#define ROGUE_MMUCTRL_PAGE_SIZE_1MB (0x00000004U)
+#define ROGUE_MMUCTRL_PAGE_SIZE_2MB (0x00000005U)
+
+#define ROGUE_MMUCTRL_PAGE_4KB_RANGE_SHIFT (12U)
+#define ROGUE_MMUCTRL_PAGE_4KB_RANGE_CLRMSK (0xFFFFFF0000000FFFULL)
+
+#define ROGUE_MMUCTRL_PAGE_16KB_RANGE_SHIFT (14U)
+#define ROGUE_MMUCTRL_PAGE_16KB_RANGE_CLRMSK (0xFFFFFF0000003FFFULL)
+
+#define ROGUE_MMUCTRL_PAGE_64KB_RANGE_SHIFT (16U)
+#define ROGUE_MMUCTRL_PAGE_64KB_RANGE_CLRMSK (0xFFFFFF000000FFFFULL)
+
+#define ROGUE_MMUCTRL_PAGE_256KB_RANGE_SHIFT (18U)
+#define ROGUE_MMUCTRL_PAGE_256KB_RANGE_CLRMSK (0xFFFFFF000003FFFFULL)
+
+#define ROGUE_MMUCTRL_PAGE_1MB_RANGE_SHIFT (20U)
+#define ROGUE_MMUCTRL_PAGE_1MB_RANGE_CLRMSK (0xFFFFFF00000FFFFFULL)
+
+#define ROGUE_MMUCTRL_PAGE_2MB_RANGE_SHIFT (21U)
+#define ROGUE_MMUCTRL_PAGE_2MB_RANGE_CLRMSK (0xFFFFFF00001FFFFFULL)
+
+#define ROGUE_MMUCTRL_PT_BASE_4KB_RANGE_SHIFT (12U)
+#define ROGUE_MMUCTRL_PT_BASE_4KB_RANGE_CLRMSK (0xFFFFFF0000000FFFULL)
+
+#define ROGUE_MMUCTRL_PT_BASE_16KB_RANGE_SHIFT (10U)
+#define ROGUE_MMUCTRL_PT_BASE_16KB_RANGE_CLRMSK (0xFFFFFF00000003FFULL)
+
+#define ROGUE_MMUCTRL_PT_BASE_64KB_RANGE_SHIFT (8U)
+#define ROGUE_MMUCTRL_PT_BASE_64KB_RANGE_CLRMSK (0xFFFFFF00000000FFULL)
+
+#define ROGUE_MMUCTRL_PT_BASE_256KB_RANGE_SHIFT (6U)
+#define ROGUE_MMUCTRL_PT_BASE_256KB_RANGE_CLRMSK (0xFFFFFF000000003FULL)
+
+#define ROGUE_MMUCTRL_PT_BASE_1MB_RANGE_SHIFT (5U)
+#define ROGUE_MMUCTRL_PT_BASE_1MB_RANGE_CLRMSK (0xFFFFFF000000001FULL)
+
+#define ROGUE_MMUCTRL_PT_BASE_2MB_RANGE_SHIFT (5U)
+#define ROGUE_MMUCTRL_PT_BASE_2MB_RANGE_CLRMSK (0xFFFFFF000000001FULL)
+
+#define ROGUE_MMUCTRL_PT_DATA_PM_META_PROTECT_SHIFT (62U)
+#define ROGUE_MMUCTRL_PT_DATA_PM_META_PROTECT_CLRMSK (0xBFFFFFFFFFFFFFFFULL)
+#define ROGUE_MMUCTRL_PT_DATA_PM_META_PROTECT_EN (0x4000000000000000ULL)
+#define ROGUE_MMUCTRL_PT_DATA_VP_PAGE_HI_SHIFT (40U)
+#define ROGUE_MMUCTRL_PT_DATA_VP_PAGE_HI_CLRMSK (0xC00000FFFFFFFFFFULL)
+#define ROGUE_MMUCTRL_PT_DATA_PAGE_SHIFT (12U)
+#define ROGUE_MMUCTRL_PT_DATA_PAGE_CLRMSK (0xFFFFFF0000000FFFULL)
+#define ROGUE_MMUCTRL_PT_DATA_VP_PAGE_LO_SHIFT (6U)
+#define ROGUE_MMUCTRL_PT_DATA_VP_PAGE_LO_CLRMSK (0xFFFFFFFFFFFFF03FULL)
+#define ROGUE_MMUCTRL_PT_DATA_ENTRY_PENDING_SHIFT (5U)
+#define ROGUE_MMUCTRL_PT_DATA_ENTRY_PENDING_CLRMSK (0xFFFFFFFFFFFFFFDFULL)
+#define ROGUE_MMUCTRL_PT_DATA_ENTRY_PENDING_EN (0x0000000000000020ULL)
+#define ROGUE_MMUCTRL_PT_DATA_PM_SRC_SHIFT (4U)
+#define ROGUE_MMUCTRL_PT_DATA_PM_SRC_CLRMSK (0xFFFFFFFFFFFFFFEFULL)
+#define ROGUE_MMUCTRL_PT_DATA_PM_SRC_EN (0x0000000000000010ULL)
+#define ROGUE_MMUCTRL_PT_DATA_SLC_BYPASS_CTRL_SHIFT (3U)
+#define ROGUE_MMUCTRL_PT_DATA_SLC_BYPASS_CTRL_CLRMSK (0xFFFFFFFFFFFFFFF7ULL)
+#define ROGUE_MMUCTRL_PT_DATA_SLC_BYPASS_CTRL_EN (0x0000000000000008ULL)
+#define ROGUE_MMUCTRL_PT_DATA_CC_SHIFT (2U)
+#define ROGUE_MMUCTRL_PT_DATA_CC_CLRMSK (0xFFFFFFFFFFFFFFFBULL)
+#define ROGUE_MMUCTRL_PT_DATA_CC_EN (0x0000000000000004ULL)
+#define ROGUE_MMUCTRL_PT_DATA_READ_ONLY_SHIFT (1U)
+#define ROGUE_MMUCTRL_PT_DATA_READ_ONLY_CLRMSK (0xFFFFFFFFFFFFFFFDULL)
+#define ROGUE_MMUCTRL_PT_DATA_READ_ONLY_EN (0x0000000000000002ULL)
+#define ROGUE_MMUCTRL_PT_DATA_VALID_SHIFT (0U)
+#define ROGUE_MMUCTRL_PT_DATA_VALID_CLRMSK (0xFFFFFFFFFFFFFFFEULL)
+#define ROGUE_MMUCTRL_PT_DATA_VALID_EN (0x0000000000000001ULL)
+
+#define ROGUE_MMUCTRL_PD_DATA_ENTRY_PENDING_SHIFT (40U)
+#define ROGUE_MMUCTRL_PD_DATA_ENTRY_PENDING_CLRMSK (0xFFFFFEFFFFFFFFFFULL)
+#define ROGUE_MMUCTRL_PD_DATA_ENTRY_PENDING_EN (0x0000010000000000ULL)
+#define ROGUE_MMUCTRL_PD_DATA_PT_BASE_SHIFT (5U)
+#define ROGUE_MMUCTRL_PD_DATA_PT_BASE_CLRMSK (0xFFFFFF000000001FULL)
+#define ROGUE_MMUCTRL_PD_DATA_PAGE_SIZE_SHIFT (1U)
+#define ROGUE_MMUCTRL_PD_DATA_PAGE_SIZE_CLRMSK (0xFFFFFFFFFFFFFFF1ULL)
+#define ROGUE_MMUCTRL_PD_DATA_PAGE_SIZE_4KB (0x0000000000000000ULL)
+#define ROGUE_MMUCTRL_PD_DATA_PAGE_SIZE_16KB (0x0000000000000002ULL)
+#define ROGUE_MMUCTRL_PD_DATA_PAGE_SIZE_64KB (0x0000000000000004ULL)
+#define ROGUE_MMUCTRL_PD_DATA_PAGE_SIZE_256KB (0x0000000000000006ULL)
+#define ROGUE_MMUCTRL_PD_DATA_PAGE_SIZE_1MB (0x0000000000000008ULL)
+#define ROGUE_MMUCTRL_PD_DATA_PAGE_SIZE_2MB (0x000000000000000aULL)
+#define ROGUE_MMUCTRL_PD_DATA_VALID_SHIFT (0U)
+#define ROGUE_MMUCTRL_PD_DATA_VALID_CLRMSK (0xFFFFFFFFFFFFFFFEULL)
+#define ROGUE_MMUCTRL_PD_DATA_VALID_EN (0x0000000000000001ULL)
+
+#define ROGUE_MMUCTRL_PC_DATA_PD_BASE_SHIFT (4U)
+#define ROGUE_MMUCTRL_PC_DATA_PD_BASE_CLRMSK (0x0000000FU)
+#define ROGUE_MMUCTRL_PC_DATA_PD_BASE_ALIGNSHIFT (12U)
+#define ROGUE_MMUCTRL_PC_DATA_PD_BASE_ALIGNSIZE (4096U)
+#define ROGUE_MMUCTRL_PC_DATA_ENTRY_PENDING_SHIFT (1U)
+#define ROGUE_MMUCTRL_PC_DATA_ENTRY_PENDING_CLRMSK (0xFFFFFFFDU)
+#define ROGUE_MMUCTRL_PC_DATA_ENTRY_PENDING_EN (0x00000002U)
+#define ROGUE_MMUCTRL_PC_DATA_VALID_SHIFT (0U)
+#define ROGUE_MMUCTRL_PC_DATA_VALID_CLRMSK (0xFFFFFFFEU)
+#define ROGUE_MMUCTRL_PC_DATA_VALID_EN (0x00000001U)
+
+#endif /* PVR_ROGUE_MMU_DEFS_H */
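
The PT_DATA fields above combine into one 64-bit page table entry: a 4 KiB-aligned physical
page address in bits 12..39 plus valid/read-only/coherency flag bits. A hedged sketch of
assembling a minimal entry, assuming this header is included; this is not the driver's real
PTE builder and the helper name is hypothetical:

#include <linux/types.h>

/* Build a minimal page table entry mapping a 4 KiB-aligned physical page,
 * with the valid bit set and optionally marked read-only. Illustrative only.
 */
static u64 example_make_pte(u64 phys_addr, bool read_only)
{
	u64 pte = phys_addr & ~ROGUE_MMUCTRL_PT_DATA_PAGE_CLRMSK;

	pte |= ROGUE_MMUCTRL_PT_DATA_VALID_EN;
	if (read_only)
		pte |= ROGUE_MMUCTRL_PT_DATA_READ_ONLY_EN;

	return pte;
}
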
-- 
2.41.0


^ permalink raw reply related	[flat|nested] 77+ messages in thread

* [PATCH v5 06/17] drm/imagination: Add GPU register and FWIF headers
@ 2023-08-16  8:25   ` Sarah Walker
  0 siblings, 0 replies; 77+ messages in thread
From: Sarah Walker @ 2023-08-16  8:25 UTC (permalink / raw)
  To: dri-devel
  Cc: matthew.brost, luben.tuikov, tzimmermann, linux-kernel, mripard,
	afd, boris.brezillon, dakr, donald.robson, hns, christian.koenig,
	faith.ekstrand

Changes since v4:
- Add FW header device info

Signed-off-by: Sarah Walker <sarah.walker@imgtec.com>
---
 .../gpu/drm/imagination/pvr_rogue_cr_defs.h   | 6193 +++++++++++++++++
 .../imagination/pvr_rogue_cr_defs_client.h    |  159 +
 drivers/gpu/drm/imagination/pvr_rogue_defs.h  |  179 +
 drivers/gpu/drm/imagination/pvr_rogue_fwif.h  | 2208 ++++++
 .../drm/imagination/pvr_rogue_fwif_check.h    |  491 ++
 .../drm/imagination/pvr_rogue_fwif_client.h   |  371 +
 .../imagination/pvr_rogue_fwif_client_check.h |  133 +
 .../drm/imagination/pvr_rogue_fwif_common.h   |   60 +
 .../drm/imagination/pvr_rogue_fwif_dev_info.h |  112 +
 .../pvr_rogue_fwif_resetframework.h           |   28 +
 .../gpu/drm/imagination/pvr_rogue_fwif_sf.h   | 1648 +++++
 .../drm/imagination/pvr_rogue_fwif_shared.h   |  258 +
 .../imagination/pvr_rogue_fwif_shared_check.h |  108 +
 .../drm/imagination/pvr_rogue_fwif_stream.h   |   78 +
 .../drm/imagination/pvr_rogue_heap_config.h   |  113 +
 drivers/gpu/drm/imagination/pvr_rogue_meta.h  |  356 +
 drivers/gpu/drm/imagination/pvr_rogue_mips.h  |  335 +
 .../drm/imagination/pvr_rogue_mips_check.h    |   58 +
 .../gpu/drm/imagination/pvr_rogue_mmu_defs.h  |  136 +
 19 files changed, 13024 insertions(+)
 create mode 100644 drivers/gpu/drm/imagination/pvr_rogue_cr_defs.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_rogue_cr_defs_client.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_rogue_defs.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_rogue_fwif.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_rogue_fwif_check.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_rogue_fwif_client.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_rogue_fwif_client_check.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_rogue_fwif_common.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_rogue_fwif_dev_info.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_rogue_fwif_resetframework.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_rogue_fwif_sf.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_rogue_fwif_shared.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_rogue_fwif_shared_check.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_rogue_fwif_stream.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_rogue_heap_config.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_rogue_meta.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_rogue_mips.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_rogue_mips_check.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_rogue_mmu_defs.h
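
A usage sketch (not part of the patch; the helper names and the __iomem base
passed in are illustrative) showing how the *_SHIFT / *_CLRMSK / *_EN / *_AUTO
conventions in these headers are typically consumed with the standard kernel
MMIO accessors:

  #include <linux/io.h>

  /* Extract the MAJOR field of ROGUE_CR_CORE_REVISION. */
  static u32 pvr_core_major(void __iomem *regs)
  {
  	u32 rev = readl(regs + ROGUE_CR_CORE_REVISION);

  	/* CLRMSK is the inverse of the field mask, so AND with ~CLRMSK. */
  	return (rev & ~ROGUE_CR_CORE_REVISION_MAJOR_CLRMSK) >>
  	       ROGUE_CR_CORE_REVISION_MAJOR_SHIFT;
  }

  /* Read-modify-write: put the ISP clock into automatic gating mode. */
  static void pvr_isp_clk_auto(void __iomem *regs)
  {
  	u64 val = readq(regs + ROGUE_CR_CLK_CTRL);

  	val &= ROGUE_CR_CLK_CTRL_ISP_CLRMSK;	/* clear the 2-bit ISP field */
  	val |= ROGUE_CR_CLK_CTRL_ISP_AUTO;	/* select AUTO gating */
  	writeq(val, regs + ROGUE_CR_CLK_CTRL);
  }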

diff --git a/drivers/gpu/drm/imagination/pvr_rogue_cr_defs.h b/drivers/gpu/drm/imagination/pvr_rogue_cr_defs.h
new file mode 100644
index 000000000000..23fc197c4f42
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_rogue_cr_defs.h
@@ -0,0 +1,6193 @@
+/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+/*  *** Autogenerated C -- do not edit ***  */
+
+#ifndef PVR_ROGUE_CR_DEFS_H
+#define PVR_ROGUE_CR_DEFS_H
+
+/* clang-format off */
+
+#define ROGUE_CR_DEFS_REVISION 1
+
+/* Register ROGUE_CR_RASTERISATION_INDIRECT */
+#define ROGUE_CR_RASTERISATION_INDIRECT 0x8238U
+#define ROGUE_CR_RASTERISATION_INDIRECT_MASKFULL 0x000000000000000FULL
+#define ROGUE_CR_RASTERISATION_INDIRECT_ADDRESS_SHIFT 0U
+#define ROGUE_CR_RASTERISATION_INDIRECT_ADDRESS_CLRMSK 0xFFFFFFF0U
+
+/* Register ROGUE_CR_PBE_INDIRECT */
+#define ROGUE_CR_PBE_INDIRECT 0x83E0U
+#define ROGUE_CR_PBE_INDIRECT_MASKFULL 0x000000000000000FULL
+#define ROGUE_CR_PBE_INDIRECT_ADDRESS_SHIFT 0U
+#define ROGUE_CR_PBE_INDIRECT_ADDRESS_CLRMSK 0xFFFFFFF0U
+
+/* Register ROGUE_CR_PBE_PERF_INDIRECT */
+#define ROGUE_CR_PBE_PERF_INDIRECT 0x83D8U
+#define ROGUE_CR_PBE_PERF_INDIRECT_MASKFULL 0x000000000000000FULL
+#define ROGUE_CR_PBE_PERF_INDIRECT_ADDRESS_SHIFT 0U
+#define ROGUE_CR_PBE_PERF_INDIRECT_ADDRESS_CLRMSK 0xFFFFFFF0U
+
+/* Register ROGUE_CR_TPU_PERF_INDIRECT */
+#define ROGUE_CR_TPU_PERF_INDIRECT 0x83F0U
+#define ROGUE_CR_TPU_PERF_INDIRECT_MASKFULL 0x0000000000000007ULL
+#define ROGUE_CR_TPU_PERF_INDIRECT_ADDRESS_SHIFT 0U
+#define ROGUE_CR_TPU_PERF_INDIRECT_ADDRESS_CLRMSK 0xFFFFFFF8U
+
+/* Register ROGUE_CR_RASTERISATION_PERF_INDIRECT */
+#define ROGUE_CR_RASTERISATION_PERF_INDIRECT 0x8318U
+#define ROGUE_CR_RASTERISATION_PERF_INDIRECT_MASKFULL 0x000000000000000FULL
+#define ROGUE_CR_RASTERISATION_PERF_INDIRECT_ADDRESS_SHIFT 0U
+#define ROGUE_CR_RASTERISATION_PERF_INDIRECT_ADDRESS_CLRMSK 0xFFFFFFF0U
+
+/* Register ROGUE_CR_TPU_MCU_L0_PERF_INDIRECT */
+#define ROGUE_CR_TPU_MCU_L0_PERF_INDIRECT 0x8028U
+#define ROGUE_CR_TPU_MCU_L0_PERF_INDIRECT_MASKFULL 0x0000000000000007ULL
+#define ROGUE_CR_TPU_MCU_L0_PERF_INDIRECT_ADDRESS_SHIFT 0U
+#define ROGUE_CR_TPU_MCU_L0_PERF_INDIRECT_ADDRESS_CLRMSK 0xFFFFFFF8U
+
+/* Register ROGUE_CR_USC_PERF_INDIRECT */
+#define ROGUE_CR_USC_PERF_INDIRECT 0x8030U
+#define ROGUE_CR_USC_PERF_INDIRECT_MASKFULL 0x000000000000000FULL
+#define ROGUE_CR_USC_PERF_INDIRECT_ADDRESS_SHIFT 0U
+#define ROGUE_CR_USC_PERF_INDIRECT_ADDRESS_CLRMSK 0xFFFFFFF0U
+
+/* Register ROGUE_CR_BLACKPEARL_INDIRECT */
+#define ROGUE_CR_BLACKPEARL_INDIRECT 0x8388U
+#define ROGUE_CR_BLACKPEARL_INDIRECT_MASKFULL 0x0000000000000003ULL
+#define ROGUE_CR_BLACKPEARL_INDIRECT_ADDRESS_SHIFT 0U
+#define ROGUE_CR_BLACKPEARL_INDIRECT_ADDRESS_CLRMSK 0xFFFFFFFCU
+
+/* Register ROGUE_CR_BLACKPEARL_PERF_INDIRECT */
+#define ROGUE_CR_BLACKPEARL_PERF_INDIRECT 0x83F8U
+#define ROGUE_CR_BLACKPEARL_PERF_INDIRECT_MASKFULL 0x0000000000000003ULL
+#define ROGUE_CR_BLACKPEARL_PERF_INDIRECT_ADDRESS_SHIFT 0U
+#define ROGUE_CR_BLACKPEARL_PERF_INDIRECT_ADDRESS_CLRMSK 0xFFFFFFFCU
+
+/* Register ROGUE_CR_TEXAS3_PERF_INDIRECT */
+#define ROGUE_CR_TEXAS3_PERF_INDIRECT 0x83D0U
+#define ROGUE_CR_TEXAS3_PERF_INDIRECT_MASKFULL 0x0000000000000007ULL
+#define ROGUE_CR_TEXAS3_PERF_INDIRECT_ADDRESS_SHIFT 0U
+#define ROGUE_CR_TEXAS3_PERF_INDIRECT_ADDRESS_CLRMSK 0xFFFFFFF8U
+
+/* Register ROGUE_CR_TEXAS_PERF_INDIRECT */
+#define ROGUE_CR_TEXAS_PERF_INDIRECT 0x8288U
+#define ROGUE_CR_TEXAS_PERF_INDIRECT_MASKFULL 0x0000000000000003ULL
+#define ROGUE_CR_TEXAS_PERF_INDIRECT_ADDRESS_SHIFT 0U
+#define ROGUE_CR_TEXAS_PERF_INDIRECT_ADDRESS_CLRMSK 0xFFFFFFFCU
+
+/* Register ROGUE_CR_BX_TU_PERF_INDIRECT */
+#define ROGUE_CR_BX_TU_PERF_INDIRECT 0xC900U
+#define ROGUE_CR_BX_TU_PERF_INDIRECT_MASKFULL 0x0000000000000003ULL
+#define ROGUE_CR_BX_TU_PERF_INDIRECT_ADDRESS_SHIFT 0U
+#define ROGUE_CR_BX_TU_PERF_INDIRECT_ADDRESS_CLRMSK 0xFFFFFFFCU
+
+/* Register ROGUE_CR_CLK_CTRL */
+#define ROGUE_CR_CLK_CTRL 0x0000U
+#define ROGUE_CR_CLK_CTRL__PBE2_XE__MASKFULL 0xFFFFFF003F3FFFFFULL
+#define ROGUE_CR_CLK_CTRL__S7_TOP__MASKFULL 0xCFCF03000F3F3F0FULL
+#define ROGUE_CR_CLK_CTRL_MASKFULL 0xFFFFFF003F3FFFFFULL
+#define ROGUE_CR_CLK_CTRL_BIF_TEXAS_SHIFT 62U
+#define ROGUE_CR_CLK_CTRL_BIF_TEXAS_CLRMSK 0x3FFFFFFFFFFFFFFFULL
+#define ROGUE_CR_CLK_CTRL_BIF_TEXAS_OFF 0x0000000000000000ULL
+#define ROGUE_CR_CLK_CTRL_BIF_TEXAS_ON 0x4000000000000000ULL
+#define ROGUE_CR_CLK_CTRL_BIF_TEXAS_AUTO 0x8000000000000000ULL
+#define ROGUE_CR_CLK_CTRL_IPP_SHIFT 60U
+#define ROGUE_CR_CLK_CTRL_IPP_CLRMSK 0xCFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_CLK_CTRL_IPP_OFF 0x0000000000000000ULL
+#define ROGUE_CR_CLK_CTRL_IPP_ON 0x1000000000000000ULL
+#define ROGUE_CR_CLK_CTRL_IPP_AUTO 0x2000000000000000ULL
+#define ROGUE_CR_CLK_CTRL_FBC_SHIFT 58U
+#define ROGUE_CR_CLK_CTRL_FBC_CLRMSK 0xF3FFFFFFFFFFFFFFULL
+#define ROGUE_CR_CLK_CTRL_FBC_OFF 0x0000000000000000ULL
+#define ROGUE_CR_CLK_CTRL_FBC_ON 0x0400000000000000ULL
+#define ROGUE_CR_CLK_CTRL_FBC_AUTO 0x0800000000000000ULL
+#define ROGUE_CR_CLK_CTRL_FBDC_SHIFT 56U
+#define ROGUE_CR_CLK_CTRL_FBDC_CLRMSK 0xFCFFFFFFFFFFFFFFULL
+#define ROGUE_CR_CLK_CTRL_FBDC_OFF 0x0000000000000000ULL
+#define ROGUE_CR_CLK_CTRL_FBDC_ON 0x0100000000000000ULL
+#define ROGUE_CR_CLK_CTRL_FBDC_AUTO 0x0200000000000000ULL
+#define ROGUE_CR_CLK_CTRL_FB_TLCACHE_SHIFT 54U
+#define ROGUE_CR_CLK_CTRL_FB_TLCACHE_CLRMSK 0xFF3FFFFFFFFFFFFFULL
+#define ROGUE_CR_CLK_CTRL_FB_TLCACHE_OFF 0x0000000000000000ULL
+#define ROGUE_CR_CLK_CTRL_FB_TLCACHE_ON 0x0040000000000000ULL
+#define ROGUE_CR_CLK_CTRL_FB_TLCACHE_AUTO 0x0080000000000000ULL
+#define ROGUE_CR_CLK_CTRL_USCS_SHIFT 52U
+#define ROGUE_CR_CLK_CTRL_USCS_CLRMSK 0xFFCFFFFFFFFFFFFFULL
+#define ROGUE_CR_CLK_CTRL_USCS_OFF 0x0000000000000000ULL
+#define ROGUE_CR_CLK_CTRL_USCS_ON 0x0010000000000000ULL
+#define ROGUE_CR_CLK_CTRL_USCS_AUTO 0x0020000000000000ULL
+#define ROGUE_CR_CLK_CTRL_PBE_SHIFT 50U
+#define ROGUE_CR_CLK_CTRL_PBE_CLRMSK 0xFFF3FFFFFFFFFFFFULL
+#define ROGUE_CR_CLK_CTRL_PBE_OFF 0x0000000000000000ULL
+#define ROGUE_CR_CLK_CTRL_PBE_ON 0x0004000000000000ULL
+#define ROGUE_CR_CLK_CTRL_PBE_AUTO 0x0008000000000000ULL
+#define ROGUE_CR_CLK_CTRL_MCU_L1_SHIFT 48U
+#define ROGUE_CR_CLK_CTRL_MCU_L1_CLRMSK 0xFFFCFFFFFFFFFFFFULL
+#define ROGUE_CR_CLK_CTRL_MCU_L1_OFF 0x0000000000000000ULL
+#define ROGUE_CR_CLK_CTRL_MCU_L1_ON 0x0001000000000000ULL
+#define ROGUE_CR_CLK_CTRL_MCU_L1_AUTO 0x0002000000000000ULL
+#define ROGUE_CR_CLK_CTRL_CDM_SHIFT 46U
+#define ROGUE_CR_CLK_CTRL_CDM_CLRMSK 0xFFFF3FFFFFFFFFFFULL
+#define ROGUE_CR_CLK_CTRL_CDM_OFF 0x0000000000000000ULL
+#define ROGUE_CR_CLK_CTRL_CDM_ON 0x0000400000000000ULL
+#define ROGUE_CR_CLK_CTRL_CDM_AUTO 0x0000800000000000ULL
+#define ROGUE_CR_CLK_CTRL_SIDEKICK_SHIFT 44U
+#define ROGUE_CR_CLK_CTRL_SIDEKICK_CLRMSK 0xFFFFCFFFFFFFFFFFULL
+#define ROGUE_CR_CLK_CTRL_SIDEKICK_OFF 0x0000000000000000ULL
+#define ROGUE_CR_CLK_CTRL_SIDEKICK_ON 0x0000100000000000ULL
+#define ROGUE_CR_CLK_CTRL_SIDEKICK_AUTO 0x0000200000000000ULL
+#define ROGUE_CR_CLK_CTRL_BIF_SIDEKICK_SHIFT 42U
+#define ROGUE_CR_CLK_CTRL_BIF_SIDEKICK_CLRMSK 0xFFFFF3FFFFFFFFFFULL
+#define ROGUE_CR_CLK_CTRL_BIF_SIDEKICK_OFF 0x0000000000000000ULL
+#define ROGUE_CR_CLK_CTRL_BIF_SIDEKICK_ON 0x0000040000000000ULL
+#define ROGUE_CR_CLK_CTRL_BIF_SIDEKICK_AUTO 0x0000080000000000ULL
+#define ROGUE_CR_CLK_CTRL_BIF_SHIFT 40U
+#define ROGUE_CR_CLK_CTRL_BIF_CLRMSK 0xFFFFFCFFFFFFFFFFULL
+#define ROGUE_CR_CLK_CTRL_BIF_OFF 0x0000000000000000ULL
+#define ROGUE_CR_CLK_CTRL_BIF_ON 0x0000010000000000ULL
+#define ROGUE_CR_CLK_CTRL_BIF_AUTO 0x0000020000000000ULL
+#define ROGUE_CR_CLK_CTRL_TPU_MCU_DEMUX_SHIFT 28U
+#define ROGUE_CR_CLK_CTRL_TPU_MCU_DEMUX_CLRMSK 0xFFFFFFFFCFFFFFFFULL
+#define ROGUE_CR_CLK_CTRL_TPU_MCU_DEMUX_OFF 0x0000000000000000ULL
+#define ROGUE_CR_CLK_CTRL_TPU_MCU_DEMUX_ON 0x0000000010000000ULL
+#define ROGUE_CR_CLK_CTRL_TPU_MCU_DEMUX_AUTO 0x0000000020000000ULL
+#define ROGUE_CR_CLK_CTRL_MCU_L0_SHIFT 26U
+#define ROGUE_CR_CLK_CTRL_MCU_L0_CLRMSK 0xFFFFFFFFF3FFFFFFULL
+#define ROGUE_CR_CLK_CTRL_MCU_L0_OFF 0x0000000000000000ULL
+#define ROGUE_CR_CLK_CTRL_MCU_L0_ON 0x0000000004000000ULL
+#define ROGUE_CR_CLK_CTRL_MCU_L0_AUTO 0x0000000008000000ULL
+#define ROGUE_CR_CLK_CTRL_TPU_SHIFT 24U
+#define ROGUE_CR_CLK_CTRL_TPU_CLRMSK 0xFFFFFFFFFCFFFFFFULL
+#define ROGUE_CR_CLK_CTRL_TPU_OFF 0x0000000000000000ULL
+#define ROGUE_CR_CLK_CTRL_TPU_ON 0x0000000001000000ULL
+#define ROGUE_CR_CLK_CTRL_TPU_AUTO 0x0000000002000000ULL
+#define ROGUE_CR_CLK_CTRL_USC_SHIFT 20U
+#define ROGUE_CR_CLK_CTRL_USC_CLRMSK 0xFFFFFFFFFFCFFFFFULL
+#define ROGUE_CR_CLK_CTRL_USC_OFF 0x0000000000000000ULL
+#define ROGUE_CR_CLK_CTRL_USC_ON 0x0000000000100000ULL
+#define ROGUE_CR_CLK_CTRL_USC_AUTO 0x0000000000200000ULL
+#define ROGUE_CR_CLK_CTRL_TLA_SHIFT 18U
+#define ROGUE_CR_CLK_CTRL_TLA_CLRMSK 0xFFFFFFFFFFF3FFFFULL
+#define ROGUE_CR_CLK_CTRL_TLA_OFF 0x0000000000000000ULL
+#define ROGUE_CR_CLK_CTRL_TLA_ON 0x0000000000040000ULL
+#define ROGUE_CR_CLK_CTRL_TLA_AUTO 0x0000000000080000ULL
+#define ROGUE_CR_CLK_CTRL_SLC_SHIFT 16U
+#define ROGUE_CR_CLK_CTRL_SLC_CLRMSK 0xFFFFFFFFFFFCFFFFULL
+#define ROGUE_CR_CLK_CTRL_SLC_OFF 0x0000000000000000ULL
+#define ROGUE_CR_CLK_CTRL_SLC_ON 0x0000000000010000ULL
+#define ROGUE_CR_CLK_CTRL_SLC_AUTO 0x0000000000020000ULL
+#define ROGUE_CR_CLK_CTRL_UVS_SHIFT 14U
+#define ROGUE_CR_CLK_CTRL_UVS_CLRMSK 0xFFFFFFFFFFFF3FFFULL
+#define ROGUE_CR_CLK_CTRL_UVS_OFF 0x0000000000000000ULL
+#define ROGUE_CR_CLK_CTRL_UVS_ON 0x0000000000004000ULL
+#define ROGUE_CR_CLK_CTRL_UVS_AUTO 0x0000000000008000ULL
+#define ROGUE_CR_CLK_CTRL_PDS_SHIFT 12U
+#define ROGUE_CR_CLK_CTRL_PDS_CLRMSK 0xFFFFFFFFFFFFCFFFULL
+#define ROGUE_CR_CLK_CTRL_PDS_OFF 0x0000000000000000ULL
+#define ROGUE_CR_CLK_CTRL_PDS_ON 0x0000000000001000ULL
+#define ROGUE_CR_CLK_CTRL_PDS_AUTO 0x0000000000002000ULL
+#define ROGUE_CR_CLK_CTRL_VDM_SHIFT 10U
+#define ROGUE_CR_CLK_CTRL_VDM_CLRMSK 0xFFFFFFFFFFFFF3FFULL
+#define ROGUE_CR_CLK_CTRL_VDM_OFF 0x0000000000000000ULL
+#define ROGUE_CR_CLK_CTRL_VDM_ON 0x0000000000000400ULL
+#define ROGUE_CR_CLK_CTRL_VDM_AUTO 0x0000000000000800ULL
+#define ROGUE_CR_CLK_CTRL_PM_SHIFT 8U
+#define ROGUE_CR_CLK_CTRL_PM_CLRMSK 0xFFFFFFFFFFFFFCFFULL
+#define ROGUE_CR_CLK_CTRL_PM_OFF 0x0000000000000000ULL
+#define ROGUE_CR_CLK_CTRL_PM_ON 0x0000000000000100ULL
+#define ROGUE_CR_CLK_CTRL_PM_AUTO 0x0000000000000200ULL
+#define ROGUE_CR_CLK_CTRL_GPP_SHIFT 6U
+#define ROGUE_CR_CLK_CTRL_GPP_CLRMSK 0xFFFFFFFFFFFFFF3FULL
+#define ROGUE_CR_CLK_CTRL_GPP_OFF 0x0000000000000000ULL
+#define ROGUE_CR_CLK_CTRL_GPP_ON 0x0000000000000040ULL
+#define ROGUE_CR_CLK_CTRL_GPP_AUTO 0x0000000000000080ULL
+#define ROGUE_CR_CLK_CTRL_TE_SHIFT 4U
+#define ROGUE_CR_CLK_CTRL_TE_CLRMSK 0xFFFFFFFFFFFFFFCFULL
+#define ROGUE_CR_CLK_CTRL_TE_OFF 0x0000000000000000ULL
+#define ROGUE_CR_CLK_CTRL_TE_ON 0x0000000000000010ULL
+#define ROGUE_CR_CLK_CTRL_TE_AUTO 0x0000000000000020ULL
+#define ROGUE_CR_CLK_CTRL_TSP_SHIFT 2U
+#define ROGUE_CR_CLK_CTRL_TSP_CLRMSK 0xFFFFFFFFFFFFFFF3ULL
+#define ROGUE_CR_CLK_CTRL_TSP_OFF 0x0000000000000000ULL
+#define ROGUE_CR_CLK_CTRL_TSP_ON 0x0000000000000004ULL
+#define ROGUE_CR_CLK_CTRL_TSP_AUTO 0x0000000000000008ULL
+#define ROGUE_CR_CLK_CTRL_ISP_SHIFT 0U
+#define ROGUE_CR_CLK_CTRL_ISP_CLRMSK 0xFFFFFFFFFFFFFFFCULL
+#define ROGUE_CR_CLK_CTRL_ISP_OFF 0x0000000000000000ULL
+#define ROGUE_CR_CLK_CTRL_ISP_ON 0x0000000000000001ULL
+#define ROGUE_CR_CLK_CTRL_ISP_AUTO 0x0000000000000002ULL
+
+/* Register ROGUE_CR_CLK_STATUS */
+#define ROGUE_CR_CLK_STATUS 0x0008U
+#define ROGUE_CR_CLK_STATUS__PBE2_XE__MASKFULL 0x00000001FFF077FFULL
+#define ROGUE_CR_CLK_STATUS__S7_TOP__MASKFULL 0x00000001B3101773ULL
+#define ROGUE_CR_CLK_STATUS_MASKFULL 0x00000001FFF077FFULL
+#define ROGUE_CR_CLK_STATUS_MCU_FBTC_SHIFT 32U
+#define ROGUE_CR_CLK_STATUS_MCU_FBTC_CLRMSK 0xFFFFFFFEFFFFFFFFULL
+#define ROGUE_CR_CLK_STATUS_MCU_FBTC_GATED 0x0000000000000000ULL
+#define ROGUE_CR_CLK_STATUS_MCU_FBTC_RUNNING 0x0000000100000000ULL
+#define ROGUE_CR_CLK_STATUS_BIF_TEXAS_SHIFT 31U
+#define ROGUE_CR_CLK_STATUS_BIF_TEXAS_CLRMSK 0xFFFFFFFF7FFFFFFFULL
+#define ROGUE_CR_CLK_STATUS_BIF_TEXAS_GATED 0x0000000000000000ULL
+#define ROGUE_CR_CLK_STATUS_BIF_TEXAS_RUNNING 0x0000000080000000ULL
+#define ROGUE_CR_CLK_STATUS_IPP_SHIFT 30U
+#define ROGUE_CR_CLK_STATUS_IPP_CLRMSK 0xFFFFFFFFBFFFFFFFULL
+#define ROGUE_CR_CLK_STATUS_IPP_GATED 0x0000000000000000ULL
+#define ROGUE_CR_CLK_STATUS_IPP_RUNNING 0x0000000040000000ULL
+#define ROGUE_CR_CLK_STATUS_FBC_SHIFT 29U
+#define ROGUE_CR_CLK_STATUS_FBC_CLRMSK 0xFFFFFFFFDFFFFFFFULL
+#define ROGUE_CR_CLK_STATUS_FBC_GATED 0x0000000000000000ULL
+#define ROGUE_CR_CLK_STATUS_FBC_RUNNING 0x0000000020000000ULL
+#define ROGUE_CR_CLK_STATUS_FBDC_SHIFT 28U
+#define ROGUE_CR_CLK_STATUS_FBDC_CLRMSK 0xFFFFFFFFEFFFFFFFULL
+#define ROGUE_CR_CLK_STATUS_FBDC_GATED 0x0000000000000000ULL
+#define ROGUE_CR_CLK_STATUS_FBDC_RUNNING 0x0000000010000000ULL
+#define ROGUE_CR_CLK_STATUS_FB_TLCACHE_SHIFT 27U
+#define ROGUE_CR_CLK_STATUS_FB_TLCACHE_CLRMSK 0xFFFFFFFFF7FFFFFFULL
+#define ROGUE_CR_CLK_STATUS_FB_TLCACHE_GATED 0x0000000000000000ULL
+#define ROGUE_CR_CLK_STATUS_FB_TLCACHE_RUNNING 0x0000000008000000ULL
+#define ROGUE_CR_CLK_STATUS_USCS_SHIFT 26U
+#define ROGUE_CR_CLK_STATUS_USCS_CLRMSK 0xFFFFFFFFFBFFFFFFULL
+#define ROGUE_CR_CLK_STATUS_USCS_GATED 0x0000000000000000ULL
+#define ROGUE_CR_CLK_STATUS_USCS_RUNNING 0x0000000004000000ULL
+#define ROGUE_CR_CLK_STATUS_PBE_SHIFT 25U
+#define ROGUE_CR_CLK_STATUS_PBE_CLRMSK 0xFFFFFFFFFDFFFFFFULL
+#define ROGUE_CR_CLK_STATUS_PBE_GATED 0x0000000000000000ULL
+#define ROGUE_CR_CLK_STATUS_PBE_RUNNING 0x0000000002000000ULL
+#define ROGUE_CR_CLK_STATUS_MCU_L1_SHIFT 24U
+#define ROGUE_CR_CLK_STATUS_MCU_L1_CLRMSK 0xFFFFFFFFFEFFFFFFULL
+#define ROGUE_CR_CLK_STATUS_MCU_L1_GATED 0x0000000000000000ULL
+#define ROGUE_CR_CLK_STATUS_MCU_L1_RUNNING 0x0000000001000000ULL
+#define ROGUE_CR_CLK_STATUS_CDM_SHIFT 23U
+#define ROGUE_CR_CLK_STATUS_CDM_CLRMSK 0xFFFFFFFFFF7FFFFFULL
+#define ROGUE_CR_CLK_STATUS_CDM_GATED 0x0000000000000000ULL
+#define ROGUE_CR_CLK_STATUS_CDM_RUNNING 0x0000000000800000ULL
+#define ROGUE_CR_CLK_STATUS_SIDEKICK_SHIFT 22U
+#define ROGUE_CR_CLK_STATUS_SIDEKICK_CLRMSK 0xFFFFFFFFFFBFFFFFULL
+#define ROGUE_CR_CLK_STATUS_SIDEKICK_GATED 0x0000000000000000ULL
+#define ROGUE_CR_CLK_STATUS_SIDEKICK_RUNNING 0x0000000000400000ULL
+#define ROGUE_CR_CLK_STATUS_BIF_SIDEKICK_SHIFT 21U
+#define ROGUE_CR_CLK_STATUS_BIF_SIDEKICK_CLRMSK 0xFFFFFFFFFFDFFFFFULL
+#define ROGUE_CR_CLK_STATUS_BIF_SIDEKICK_GATED 0x0000000000000000ULL
+#define ROGUE_CR_CLK_STATUS_BIF_SIDEKICK_RUNNING 0x0000000000200000ULL
+#define ROGUE_CR_CLK_STATUS_BIF_SHIFT 20U
+#define ROGUE_CR_CLK_STATUS_BIF_CLRMSK 0xFFFFFFFFFFEFFFFFULL
+#define ROGUE_CR_CLK_STATUS_BIF_GATED 0x0000000000000000ULL
+#define ROGUE_CR_CLK_STATUS_BIF_RUNNING 0x0000000000100000ULL
+#define ROGUE_CR_CLK_STATUS_TPU_MCU_DEMUX_SHIFT 14U
+#define ROGUE_CR_CLK_STATUS_TPU_MCU_DEMUX_CLRMSK 0xFFFFFFFFFFFFBFFFULL
+#define ROGUE_CR_CLK_STATUS_TPU_MCU_DEMUX_GATED 0x0000000000000000ULL
+#define ROGUE_CR_CLK_STATUS_TPU_MCU_DEMUX_RUNNING 0x0000000000004000ULL
+#define ROGUE_CR_CLK_STATUS_MCU_L0_SHIFT 13U
+#define ROGUE_CR_CLK_STATUS_MCU_L0_CLRMSK 0xFFFFFFFFFFFFDFFFULL
+#define ROGUE_CR_CLK_STATUS_MCU_L0_GATED 0x0000000000000000ULL
+#define ROGUE_CR_CLK_STATUS_MCU_L0_RUNNING 0x0000000000002000ULL
+#define ROGUE_CR_CLK_STATUS_TPU_SHIFT 12U
+#define ROGUE_CR_CLK_STATUS_TPU_CLRMSK 0xFFFFFFFFFFFFEFFFULL
+#define ROGUE_CR_CLK_STATUS_TPU_GATED 0x0000000000000000ULL
+#define ROGUE_CR_CLK_STATUS_TPU_RUNNING 0x0000000000001000ULL
+#define ROGUE_CR_CLK_STATUS_USC_SHIFT 10U
+#define ROGUE_CR_CLK_STATUS_USC_CLRMSK 0xFFFFFFFFFFFFFBFFULL
+#define ROGUE_CR_CLK_STATUS_USC_GATED 0x0000000000000000ULL
+#define ROGUE_CR_CLK_STATUS_USC_RUNNING 0x0000000000000400ULL
+#define ROGUE_CR_CLK_STATUS_TLA_SHIFT 9U
+#define ROGUE_CR_CLK_STATUS_TLA_CLRMSK 0xFFFFFFFFFFFFFDFFULL
+#define ROGUE_CR_CLK_STATUS_TLA_GATED 0x0000000000000000ULL
+#define ROGUE_CR_CLK_STATUS_TLA_RUNNING 0x0000000000000200ULL
+#define ROGUE_CR_CLK_STATUS_SLC_SHIFT 8U
+#define ROGUE_CR_CLK_STATUS_SLC_CLRMSK 0xFFFFFFFFFFFFFEFFULL
+#define ROGUE_CR_CLK_STATUS_SLC_GATED 0x0000000000000000ULL
+#define ROGUE_CR_CLK_STATUS_SLC_RUNNING 0x0000000000000100ULL
+#define ROGUE_CR_CLK_STATUS_UVS_SHIFT 7U
+#define ROGUE_CR_CLK_STATUS_UVS_CLRMSK 0xFFFFFFFFFFFFFF7FULL
+#define ROGUE_CR_CLK_STATUS_UVS_GATED 0x0000000000000000ULL
+#define ROGUE_CR_CLK_STATUS_UVS_RUNNING 0x0000000000000080ULL
+#define ROGUE_CR_CLK_STATUS_PDS_SHIFT 6U
+#define ROGUE_CR_CLK_STATUS_PDS_CLRMSK 0xFFFFFFFFFFFFFFBFULL
+#define ROGUE_CR_CLK_STATUS_PDS_GATED 0x0000000000000000ULL
+#define ROGUE_CR_CLK_STATUS_PDS_RUNNING 0x0000000000000040ULL
+#define ROGUE_CR_CLK_STATUS_VDM_SHIFT 5U
+#define ROGUE_CR_CLK_STATUS_VDM_CLRMSK 0xFFFFFFFFFFFFFFDFULL
+#define ROGUE_CR_CLK_STATUS_VDM_GATED 0x0000000000000000ULL
+#define ROGUE_CR_CLK_STATUS_VDM_RUNNING 0x0000000000000020ULL
+#define ROGUE_CR_CLK_STATUS_PM_SHIFT 4U
+#define ROGUE_CR_CLK_STATUS_PM_CLRMSK 0xFFFFFFFFFFFFFFEFULL
+#define ROGUE_CR_CLK_STATUS_PM_GATED 0x0000000000000000ULL
+#define ROGUE_CR_CLK_STATUS_PM_RUNNING 0x0000000000000010ULL
+#define ROGUE_CR_CLK_STATUS_GPP_SHIFT 3U
+#define ROGUE_CR_CLK_STATUS_GPP_CLRMSK 0xFFFFFFFFFFFFFFF7ULL
+#define ROGUE_CR_CLK_STATUS_GPP_GATED 0x0000000000000000ULL
+#define ROGUE_CR_CLK_STATUS_GPP_RUNNING 0x0000000000000008ULL
+#define ROGUE_CR_CLK_STATUS_TE_SHIFT 2U
+#define ROGUE_CR_CLK_STATUS_TE_CLRMSK 0xFFFFFFFFFFFFFFFBULL
+#define ROGUE_CR_CLK_STATUS_TE_GATED 0x0000000000000000ULL
+#define ROGUE_CR_CLK_STATUS_TE_RUNNING 0x0000000000000004ULL
+#define ROGUE_CR_CLK_STATUS_TSP_SHIFT 1U
+#define ROGUE_CR_CLK_STATUS_TSP_CLRMSK 0xFFFFFFFFFFFFFFFDULL
+#define ROGUE_CR_CLK_STATUS_TSP_GATED 0x0000000000000000ULL
+#define ROGUE_CR_CLK_STATUS_TSP_RUNNING 0x0000000000000002ULL
+#define ROGUE_CR_CLK_STATUS_ISP_SHIFT 0U
+#define ROGUE_CR_CLK_STATUS_ISP_CLRMSK 0xFFFFFFFFFFFFFFFEULL
+#define ROGUE_CR_CLK_STATUS_ISP_GATED 0x0000000000000000ULL
+#define ROGUE_CR_CLK_STATUS_ISP_RUNNING 0x0000000000000001ULL
+
+/* Register ROGUE_CR_CORE_ID */
+#define ROGUE_CR_CORE_ID__PBVNC 0x0020U
+#define ROGUE_CR_CORE_ID__PBVNC__MASKFULL 0xFFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_CORE_ID__PBVNC__BRANCH_ID_SHIFT 48U
+#define ROGUE_CR_CORE_ID__PBVNC__BRANCH_ID_CLRMSK 0x0000FFFFFFFFFFFFULL
+#define ROGUE_CR_CORE_ID__PBVNC__VERSION_ID_SHIFT 32U
+#define ROGUE_CR_CORE_ID__PBVNC__VERSION_ID_CLRMSK 0xFFFF0000FFFFFFFFULL
+#define ROGUE_CR_CORE_ID__PBVNC__NUMBER_OF_SCALABLE_UNITS_SHIFT 16U
+#define ROGUE_CR_CORE_ID__PBVNC__NUMBER_OF_SCALABLE_UNITS_CLRMSK 0xFFFFFFFF0000FFFFULL
+#define ROGUE_CR_CORE_ID__PBVNC__CONFIG_ID_SHIFT 0U
+#define ROGUE_CR_CORE_ID__PBVNC__CONFIG_ID_CLRMSK 0xFFFFFFFFFFFF0000ULL
+
+/* Register ROGUE_CR_CORE_ID */
+#define ROGUE_CR_CORE_ID 0x0018U
+#define ROGUE_CR_CORE_ID_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_CORE_ID_ID_SHIFT 16U
+#define ROGUE_CR_CORE_ID_ID_CLRMSK 0x0000FFFFU
+#define ROGUE_CR_CORE_ID_CONFIG_SHIFT 0U
+#define ROGUE_CR_CORE_ID_CONFIG_CLRMSK 0xFFFF0000U
+
+/* Register ROGUE_CR_CORE_REVISION */
+#define ROGUE_CR_CORE_REVISION 0x0020U
+#define ROGUE_CR_CORE_REVISION_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_CORE_REVISION_DESIGNER_SHIFT 24U
+#define ROGUE_CR_CORE_REVISION_DESIGNER_CLRMSK 0x00FFFFFFU
+#define ROGUE_CR_CORE_REVISION_MAJOR_SHIFT 16U
+#define ROGUE_CR_CORE_REVISION_MAJOR_CLRMSK 0xFF00FFFFU
+#define ROGUE_CR_CORE_REVISION_MINOR_SHIFT 8U
+#define ROGUE_CR_CORE_REVISION_MINOR_CLRMSK 0xFFFF00FFU
+#define ROGUE_CR_CORE_REVISION_MAINTENANCE_SHIFT 0U
+#define ROGUE_CR_CORE_REVISION_MAINTENANCE_CLRMSK 0xFFFFFF00U
+
+/* Register ROGUE_CR_DESIGNER_REV_FIELD1 */
+#define ROGUE_CR_DESIGNER_REV_FIELD1 0x0028U
+#define ROGUE_CR_DESIGNER_REV_FIELD1_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_DESIGNER_REV_FIELD1_DESIGNER_REV_FIELD1_SHIFT 0U
+#define ROGUE_CR_DESIGNER_REV_FIELD1_DESIGNER_REV_FIELD1_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_DESIGNER_REV_FIELD2 */
+#define ROGUE_CR_DESIGNER_REV_FIELD2 0x0030U
+#define ROGUE_CR_DESIGNER_REV_FIELD2_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_DESIGNER_REV_FIELD2_DESIGNER_REV_FIELD2_SHIFT 0U
+#define ROGUE_CR_DESIGNER_REV_FIELD2_DESIGNER_REV_FIELD2_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_CHANGESET_NUMBER */
+#define ROGUE_CR_CHANGESET_NUMBER 0x0040U
+#define ROGUE_CR_CHANGESET_NUMBER_MASKFULL 0xFFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_CHANGESET_NUMBER_CHANGESET_NUMBER_SHIFT 0U
+#define ROGUE_CR_CHANGESET_NUMBER_CHANGESET_NUMBER_CLRMSK 0x0000000000000000ULL
+
+/* Register ROGUE_CR_CLK_XTPLUS_CTRL */
+#define ROGUE_CR_CLK_XTPLUS_CTRL 0x0080U
+#define ROGUE_CR_CLK_XTPLUS_CTRL_MASKFULL 0x0000003FFFFF0000ULL
+#define ROGUE_CR_CLK_XTPLUS_CTRL_TDM_SHIFT 36U
+#define ROGUE_CR_CLK_XTPLUS_CTRL_TDM_CLRMSK 0xFFFFFFCFFFFFFFFFULL
+#define ROGUE_CR_CLK_XTPLUS_CTRL_TDM_OFF 0x0000000000000000ULL
+#define ROGUE_CR_CLK_XTPLUS_CTRL_TDM_ON 0x0000001000000000ULL
+#define ROGUE_CR_CLK_XTPLUS_CTRL_TDM_AUTO 0x0000002000000000ULL
+#define ROGUE_CR_CLK_XTPLUS_CTRL_ASTC_SHIFT 34U
+#define ROGUE_CR_CLK_XTPLUS_CTRL_ASTC_CLRMSK 0xFFFFFFF3FFFFFFFFULL
+#define ROGUE_CR_CLK_XTPLUS_CTRL_ASTC_OFF 0x0000000000000000ULL
+#define ROGUE_CR_CLK_XTPLUS_CTRL_ASTC_ON 0x0000000400000000ULL
+#define ROGUE_CR_CLK_XTPLUS_CTRL_ASTC_AUTO 0x0000000800000000ULL
+#define ROGUE_CR_CLK_XTPLUS_CTRL_IPF_SHIFT 32U
+#define ROGUE_CR_CLK_XTPLUS_CTRL_IPF_CLRMSK 0xFFFFFFFCFFFFFFFFULL
+#define ROGUE_CR_CLK_XTPLUS_CTRL_IPF_OFF 0x0000000000000000ULL
+#define ROGUE_CR_CLK_XTPLUS_CTRL_IPF_ON 0x0000000100000000ULL
+#define ROGUE_CR_CLK_XTPLUS_CTRL_IPF_AUTO 0x0000000200000000ULL
+#define ROGUE_CR_CLK_XTPLUS_CTRL_COMPUTE_SHIFT 30U
+#define ROGUE_CR_CLK_XTPLUS_CTRL_COMPUTE_CLRMSK 0xFFFFFFFF3FFFFFFFULL
+#define ROGUE_CR_CLK_XTPLUS_CTRL_COMPUTE_OFF 0x0000000000000000ULL
+#define ROGUE_CR_CLK_XTPLUS_CTRL_COMPUTE_ON 0x0000000040000000ULL
+#define ROGUE_CR_CLK_XTPLUS_CTRL_COMPUTE_AUTO 0x0000000080000000ULL
+#define ROGUE_CR_CLK_XTPLUS_CTRL_PIXEL_SHIFT 28U
+#define ROGUE_CR_CLK_XTPLUS_CTRL_PIXEL_CLRMSK 0xFFFFFFFFCFFFFFFFULL
+#define ROGUE_CR_CLK_XTPLUS_CTRL_PIXEL_OFF 0x0000000000000000ULL
+#define ROGUE_CR_CLK_XTPLUS_CTRL_PIXEL_ON 0x0000000010000000ULL
+#define ROGUE_CR_CLK_XTPLUS_CTRL_PIXEL_AUTO 0x0000000020000000ULL
+#define ROGUE_CR_CLK_XTPLUS_CTRL_VERTEX_SHIFT 26U
+#define ROGUE_CR_CLK_XTPLUS_CTRL_VERTEX_CLRMSK 0xFFFFFFFFF3FFFFFFULL
+#define ROGUE_CR_CLK_XTPLUS_CTRL_VERTEX_OFF 0x0000000000000000ULL
+#define ROGUE_CR_CLK_XTPLUS_CTRL_VERTEX_ON 0x0000000004000000ULL
+#define ROGUE_CR_CLK_XTPLUS_CTRL_VERTEX_AUTO 0x0000000008000000ULL
+#define ROGUE_CR_CLK_XTPLUS_CTRL_USCPS_SHIFT 24U
+#define ROGUE_CR_CLK_XTPLUS_CTRL_USCPS_CLRMSK 0xFFFFFFFFFCFFFFFFULL
+#define ROGUE_CR_CLK_XTPLUS_CTRL_USCPS_OFF 0x0000000000000000ULL
+#define ROGUE_CR_CLK_XTPLUS_CTRL_USCPS_ON 0x0000000001000000ULL
+#define ROGUE_CR_CLK_XTPLUS_CTRL_USCPS_AUTO 0x0000000002000000ULL
+#define ROGUE_CR_CLK_XTPLUS_CTRL_PDS_SHARED_SHIFT 22U
+#define ROGUE_CR_CLK_XTPLUS_CTRL_PDS_SHARED_CLRMSK 0xFFFFFFFFFF3FFFFFULL
+#define ROGUE_CR_CLK_XTPLUS_CTRL_PDS_SHARED_OFF 0x0000000000000000ULL
+#define ROGUE_CR_CLK_XTPLUS_CTRL_PDS_SHARED_ON 0x0000000000400000ULL
+#define ROGUE_CR_CLK_XTPLUS_CTRL_PDS_SHARED_AUTO 0x0000000000800000ULL
+#define ROGUE_CR_CLK_XTPLUS_CTRL_BIF_BLACKPEARL_SHIFT 20U
+#define ROGUE_CR_CLK_XTPLUS_CTRL_BIF_BLACKPEARL_CLRMSK 0xFFFFFFFFFFCFFFFFULL
+#define ROGUE_CR_CLK_XTPLUS_CTRL_BIF_BLACKPEARL_OFF 0x0000000000000000ULL
+#define ROGUE_CR_CLK_XTPLUS_CTRL_BIF_BLACKPEARL_ON 0x0000000000100000ULL
+#define ROGUE_CR_CLK_XTPLUS_CTRL_BIF_BLACKPEARL_AUTO 0x0000000000200000ULL
+#define ROGUE_CR_CLK_XTPLUS_CTRL_USC_SHARED_SHIFT 18U
+#define ROGUE_CR_CLK_XTPLUS_CTRL_USC_SHARED_CLRMSK 0xFFFFFFFFFFF3FFFFULL
+#define ROGUE_CR_CLK_XTPLUS_CTRL_USC_SHARED_OFF 0x0000000000000000ULL
+#define ROGUE_CR_CLK_XTPLUS_CTRL_USC_SHARED_ON 0x0000000000040000ULL
+#define ROGUE_CR_CLK_XTPLUS_CTRL_USC_SHARED_AUTO 0x0000000000080000ULL
+#define ROGUE_CR_CLK_XTPLUS_CTRL_GEOMETRY_SHIFT 16U
+#define ROGUE_CR_CLK_XTPLUS_CTRL_GEOMETRY_CLRMSK 0xFFFFFFFFFFFCFFFFULL
+#define ROGUE_CR_CLK_XTPLUS_CTRL_GEOMETRY_OFF 0x0000000000000000ULL
+#define ROGUE_CR_CLK_XTPLUS_CTRL_GEOMETRY_ON 0x0000000000010000ULL
+#define ROGUE_CR_CLK_XTPLUS_CTRL_GEOMETRY_AUTO 0x0000000000020000ULL
+
+/* Register ROGUE_CR_CLK_XTPLUS_STATUS */
+#define ROGUE_CR_CLK_XTPLUS_STATUS 0x0088U
+#define ROGUE_CR_CLK_XTPLUS_STATUS_MASKFULL 0x00000000000007FFULL
+#define ROGUE_CR_CLK_XTPLUS_STATUS_TDM_SHIFT 10U
+#define ROGUE_CR_CLK_XTPLUS_STATUS_TDM_CLRMSK 0xFFFFFFFFFFFFFBFFULL
+#define ROGUE_CR_CLK_XTPLUS_STATUS_TDM_GATED 0x0000000000000000ULL
+#define ROGUE_CR_CLK_XTPLUS_STATUS_TDM_RUNNING 0x0000000000000400ULL
+#define ROGUE_CR_CLK_XTPLUS_STATUS_IPF_SHIFT 9U
+#define ROGUE_CR_CLK_XTPLUS_STATUS_IPF_CLRMSK 0xFFFFFFFFFFFFFDFFULL
+#define ROGUE_CR_CLK_XTPLUS_STATUS_IPF_GATED 0x0000000000000000ULL
+#define ROGUE_CR_CLK_XTPLUS_STATUS_IPF_RUNNING 0x0000000000000200ULL
+#define ROGUE_CR_CLK_XTPLUS_STATUS_COMPUTE_SHIFT 8U
+#define ROGUE_CR_CLK_XTPLUS_STATUS_COMPUTE_CLRMSK 0xFFFFFFFFFFFFFEFFULL
+#define ROGUE_CR_CLK_XTPLUS_STATUS_COMPUTE_GATED 0x0000000000000000ULL
+#define ROGUE_CR_CLK_XTPLUS_STATUS_COMPUTE_RUNNING 0x0000000000000100ULL
+#define ROGUE_CR_CLK_XTPLUS_STATUS_ASTC_SHIFT 7U
+#define ROGUE_CR_CLK_XTPLUS_STATUS_ASTC_CLRMSK 0xFFFFFFFFFFFFFF7FULL
+#define ROGUE_CR_CLK_XTPLUS_STATUS_ASTC_GATED 0x0000000000000000ULL
+#define ROGUE_CR_CLK_XTPLUS_STATUS_ASTC_RUNNING 0x0000000000000080ULL
+#define ROGUE_CR_CLK_XTPLUS_STATUS_PIXEL_SHIFT 6U
+#define ROGUE_CR_CLK_XTPLUS_STATUS_PIXEL_CLRMSK 0xFFFFFFFFFFFFFFBFULL
+#define ROGUE_CR_CLK_XTPLUS_STATUS_PIXEL_GATED 0x0000000000000000ULL
+#define ROGUE_CR_CLK_XTPLUS_STATUS_PIXEL_RUNNING 0x0000000000000040ULL
+#define ROGUE_CR_CLK_XTPLUS_STATUS_VERTEX_SHIFT 5U
+#define ROGUE_CR_CLK_XTPLUS_STATUS_VERTEX_CLRMSK 0xFFFFFFFFFFFFFFDFULL
+#define ROGUE_CR_CLK_XTPLUS_STATUS_VERTEX_GATED 0x0000000000000000ULL
+#define ROGUE_CR_CLK_XTPLUS_STATUS_VERTEX_RUNNING 0x0000000000000020ULL
+#define ROGUE_CR_CLK_XTPLUS_STATUS_USCPS_SHIFT 4U
+#define ROGUE_CR_CLK_XTPLUS_STATUS_USCPS_CLRMSK 0xFFFFFFFFFFFFFFEFULL
+#define ROGUE_CR_CLK_XTPLUS_STATUS_USCPS_GATED 0x0000000000000000ULL
+#define ROGUE_CR_CLK_XTPLUS_STATUS_USCPS_RUNNING 0x0000000000000010ULL
+#define ROGUE_CR_CLK_XTPLUS_STATUS_PDS_SHARED_SHIFT 3U
+#define ROGUE_CR_CLK_XTPLUS_STATUS_PDS_SHARED_CLRMSK 0xFFFFFFFFFFFFFFF7ULL
+#define ROGUE_CR_CLK_XTPLUS_STATUS_PDS_SHARED_GATED 0x0000000000000000ULL
+#define ROGUE_CR_CLK_XTPLUS_STATUS_PDS_SHARED_RUNNING 0x0000000000000008ULL
+#define ROGUE_CR_CLK_XTPLUS_STATUS_BIF_BLACKPEARL_SHIFT 2U
+#define ROGUE_CR_CLK_XTPLUS_STATUS_BIF_BLACKPEARL_CLRMSK 0xFFFFFFFFFFFFFFFBULL
+#define ROGUE_CR_CLK_XTPLUS_STATUS_BIF_BLACKPEARL_GATED 0x0000000000000000ULL
+#define ROGUE_CR_CLK_XTPLUS_STATUS_BIF_BLACKPEARL_RUNNING 0x0000000000000004ULL
+#define ROGUE_CR_CLK_XTPLUS_STATUS_USC_SHARED_SHIFT 1U
+#define ROGUE_CR_CLK_XTPLUS_STATUS_USC_SHARED_CLRMSK 0xFFFFFFFFFFFFFFFDULL
+#define ROGUE_CR_CLK_XTPLUS_STATUS_USC_SHARED_GATED 0x0000000000000000ULL
+#define ROGUE_CR_CLK_XTPLUS_STATUS_USC_SHARED_RUNNING 0x0000000000000002ULL
+#define ROGUE_CR_CLK_XTPLUS_STATUS_GEOMETRY_SHIFT 0U
+#define ROGUE_CR_CLK_XTPLUS_STATUS_GEOMETRY_CLRMSK 0xFFFFFFFFFFFFFFFEULL
+#define ROGUE_CR_CLK_XTPLUS_STATUS_GEOMETRY_GATED 0x0000000000000000ULL
+#define ROGUE_CR_CLK_XTPLUS_STATUS_GEOMETRY_RUNNING 0x0000000000000001ULL
+
+/* Register ROGUE_CR_SOFT_RESET */
+#define ROGUE_CR_SOFT_RESET 0x0100U
+#define ROGUE_CR_SOFT_RESET__PBE2_XE__MASKFULL 0xFFEFFFFFFFFFFC3DULL
+#define ROGUE_CR_SOFT_RESET_MASKFULL 0x00E7FFFFFFFFFC3DULL
+#define ROGUE_CR_SOFT_RESET_PHANTOM3_CORE_SHIFT 63U
+#define ROGUE_CR_SOFT_RESET_PHANTOM3_CORE_CLRMSK 0x7FFFFFFFFFFFFFFFULL
+#define ROGUE_CR_SOFT_RESET_PHANTOM3_CORE_EN 0x8000000000000000ULL
+#define ROGUE_CR_SOFT_RESET_PHANTOM2_CORE_SHIFT 62U
+#define ROGUE_CR_SOFT_RESET_PHANTOM2_CORE_CLRMSK 0xBFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_SOFT_RESET_PHANTOM2_CORE_EN 0x4000000000000000ULL
+#define ROGUE_CR_SOFT_RESET_BERNADO2_CORE_SHIFT 61U
+#define ROGUE_CR_SOFT_RESET_BERNADO2_CORE_CLRMSK 0xDFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_SOFT_RESET_BERNADO2_CORE_EN 0x2000000000000000ULL
+#define ROGUE_CR_SOFT_RESET_JONES_CORE_SHIFT 60U
+#define ROGUE_CR_SOFT_RESET_JONES_CORE_CLRMSK 0xEFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_SOFT_RESET_JONES_CORE_EN 0x1000000000000000ULL
+#define ROGUE_CR_SOFT_RESET_TILING_CORE_SHIFT 59U
+#define ROGUE_CR_SOFT_RESET_TILING_CORE_CLRMSK 0xF7FFFFFFFFFFFFFFULL
+#define ROGUE_CR_SOFT_RESET_TILING_CORE_EN 0x0800000000000000ULL
+#define ROGUE_CR_SOFT_RESET_TE3_SHIFT 58U
+#define ROGUE_CR_SOFT_RESET_TE3_CLRMSK 0xFBFFFFFFFFFFFFFFULL
+#define ROGUE_CR_SOFT_RESET_TE3_EN 0x0400000000000000ULL
+#define ROGUE_CR_SOFT_RESET_VCE_SHIFT 57U
+#define ROGUE_CR_SOFT_RESET_VCE_CLRMSK 0xFDFFFFFFFFFFFFFFULL
+#define ROGUE_CR_SOFT_RESET_VCE_EN 0x0200000000000000ULL
+#define ROGUE_CR_SOFT_RESET_VBS_SHIFT 56U
+#define ROGUE_CR_SOFT_RESET_VBS_CLRMSK 0xFEFFFFFFFFFFFFFFULL
+#define ROGUE_CR_SOFT_RESET_VBS_EN 0x0100000000000000ULL
+#define ROGUE_CR_SOFT_RESET_DPX1_CORE_SHIFT 55U
+#define ROGUE_CR_SOFT_RESET_DPX1_CORE_CLRMSK 0xFF7FFFFFFFFFFFFFULL
+#define ROGUE_CR_SOFT_RESET_DPX1_CORE_EN 0x0080000000000000ULL
+#define ROGUE_CR_SOFT_RESET_DPX0_CORE_SHIFT 54U
+#define ROGUE_CR_SOFT_RESET_DPX0_CORE_CLRMSK 0xFFBFFFFFFFFFFFFFULL
+#define ROGUE_CR_SOFT_RESET_DPX0_CORE_EN 0x0040000000000000ULL
+#define ROGUE_CR_SOFT_RESET_FBA_SHIFT 53U
+#define ROGUE_CR_SOFT_RESET_FBA_CLRMSK 0xFFDFFFFFFFFFFFFFULL
+#define ROGUE_CR_SOFT_RESET_FBA_EN 0x0020000000000000ULL
+#define ROGUE_CR_SOFT_RESET_FB_CDC_SHIFT 51U
+#define ROGUE_CR_SOFT_RESET_FB_CDC_CLRMSK 0xFFF7FFFFFFFFFFFFULL
+#define ROGUE_CR_SOFT_RESET_FB_CDC_EN 0x0008000000000000ULL
+#define ROGUE_CR_SOFT_RESET_SH_SHIFT 50U
+#define ROGUE_CR_SOFT_RESET_SH_CLRMSK 0xFFFBFFFFFFFFFFFFULL
+#define ROGUE_CR_SOFT_RESET_SH_EN 0x0004000000000000ULL
+#define ROGUE_CR_SOFT_RESET_VRDM_SHIFT 49U
+#define ROGUE_CR_SOFT_RESET_VRDM_CLRMSK 0xFFFDFFFFFFFFFFFFULL
+#define ROGUE_CR_SOFT_RESET_VRDM_EN 0x0002000000000000ULL
+#define ROGUE_CR_SOFT_RESET_MCU_FBTC_SHIFT 48U
+#define ROGUE_CR_SOFT_RESET_MCU_FBTC_CLRMSK 0xFFFEFFFFFFFFFFFFULL
+#define ROGUE_CR_SOFT_RESET_MCU_FBTC_EN 0x0001000000000000ULL
+#define ROGUE_CR_SOFT_RESET_PHANTOM1_CORE_SHIFT 47U
+#define ROGUE_CR_SOFT_RESET_PHANTOM1_CORE_CLRMSK 0xFFFF7FFFFFFFFFFFULL
+#define ROGUE_CR_SOFT_RESET_PHANTOM1_CORE_EN 0x0000800000000000ULL
+#define ROGUE_CR_SOFT_RESET_PHANTOM0_CORE_SHIFT 46U
+#define ROGUE_CR_SOFT_RESET_PHANTOM0_CORE_CLRMSK 0xFFFFBFFFFFFFFFFFULL
+#define ROGUE_CR_SOFT_RESET_PHANTOM0_CORE_EN 0x0000400000000000ULL
+#define ROGUE_CR_SOFT_RESET_BERNADO1_CORE_SHIFT 45U
+#define ROGUE_CR_SOFT_RESET_BERNADO1_CORE_CLRMSK 0xFFFFDFFFFFFFFFFFULL
+#define ROGUE_CR_SOFT_RESET_BERNADO1_CORE_EN 0x0000200000000000ULL
+#define ROGUE_CR_SOFT_RESET_BERNADO0_CORE_SHIFT 44U
+#define ROGUE_CR_SOFT_RESET_BERNADO0_CORE_CLRMSK 0xFFFFEFFFFFFFFFFFULL
+#define ROGUE_CR_SOFT_RESET_BERNADO0_CORE_EN 0x0000100000000000ULL
+#define ROGUE_CR_SOFT_RESET_IPP_SHIFT 43U
+#define ROGUE_CR_SOFT_RESET_IPP_CLRMSK 0xFFFFF7FFFFFFFFFFULL
+#define ROGUE_CR_SOFT_RESET_IPP_EN 0x0000080000000000ULL
+#define ROGUE_CR_SOFT_RESET_BIF_TEXAS_SHIFT 42U
+#define ROGUE_CR_SOFT_RESET_BIF_TEXAS_CLRMSK 0xFFFFFBFFFFFFFFFFULL
+#define ROGUE_CR_SOFT_RESET_BIF_TEXAS_EN 0x0000040000000000ULL
+#define ROGUE_CR_SOFT_RESET_TORNADO_CORE_SHIFT 41U
+#define ROGUE_CR_SOFT_RESET_TORNADO_CORE_CLRMSK 0xFFFFFDFFFFFFFFFFULL
+#define ROGUE_CR_SOFT_RESET_TORNADO_CORE_EN 0x0000020000000000ULL
+#define ROGUE_CR_SOFT_RESET_DUST_H_CORE_SHIFT 40U
+#define ROGUE_CR_SOFT_RESET_DUST_H_CORE_CLRMSK 0xFFFFFEFFFFFFFFFFULL
+#define ROGUE_CR_SOFT_RESET_DUST_H_CORE_EN 0x0000010000000000ULL
+#define ROGUE_CR_SOFT_RESET_DUST_G_CORE_SHIFT 39U
+#define ROGUE_CR_SOFT_RESET_DUST_G_CORE_CLRMSK 0xFFFFFF7FFFFFFFFFULL
+#define ROGUE_CR_SOFT_RESET_DUST_G_CORE_EN 0x0000008000000000ULL
+#define ROGUE_CR_SOFT_RESET_DUST_F_CORE_SHIFT 38U
+#define ROGUE_CR_SOFT_RESET_DUST_F_CORE_CLRMSK 0xFFFFFFBFFFFFFFFFULL
+#define ROGUE_CR_SOFT_RESET_DUST_F_CORE_EN 0x0000004000000000ULL
+#define ROGUE_CR_SOFT_RESET_DUST_E_CORE_SHIFT 37U
+#define ROGUE_CR_SOFT_RESET_DUST_E_CORE_CLRMSK 0xFFFFFFDFFFFFFFFFULL
+#define ROGUE_CR_SOFT_RESET_DUST_E_CORE_EN 0x0000002000000000ULL
+#define ROGUE_CR_SOFT_RESET_DUST_D_CORE_SHIFT 36U
+#define ROGUE_CR_SOFT_RESET_DUST_D_CORE_CLRMSK 0xFFFFFFEFFFFFFFFFULL
+#define ROGUE_CR_SOFT_RESET_DUST_D_CORE_EN 0x0000001000000000ULL
+#define ROGUE_CR_SOFT_RESET_DUST_C_CORE_SHIFT 35U
+#define ROGUE_CR_SOFT_RESET_DUST_C_CORE_CLRMSK 0xFFFFFFF7FFFFFFFFULL
+#define ROGUE_CR_SOFT_RESET_DUST_C_CORE_EN 0x0000000800000000ULL
+#define ROGUE_CR_SOFT_RESET_MMU_SHIFT 34U
+#define ROGUE_CR_SOFT_RESET_MMU_CLRMSK 0xFFFFFFFBFFFFFFFFULL
+#define ROGUE_CR_SOFT_RESET_MMU_EN 0x0000000400000000ULL
+#define ROGUE_CR_SOFT_RESET_BIF1_SHIFT 33U
+#define ROGUE_CR_SOFT_RESET_BIF1_CLRMSK 0xFFFFFFFDFFFFFFFFULL
+#define ROGUE_CR_SOFT_RESET_BIF1_EN 0x0000000200000000ULL
+#define ROGUE_CR_SOFT_RESET_GARTEN_SHIFT 32U
+#define ROGUE_CR_SOFT_RESET_GARTEN_CLRMSK 0xFFFFFFFEFFFFFFFFULL
+#define ROGUE_CR_SOFT_RESET_GARTEN_EN 0x0000000100000000ULL
+#define ROGUE_CR_SOFT_RESET_CPU_SHIFT 32U
+#define ROGUE_CR_SOFT_RESET_CPU_CLRMSK 0xFFFFFFFEFFFFFFFFULL
+#define ROGUE_CR_SOFT_RESET_CPU_EN 0x0000000100000000ULL
+#define ROGUE_CR_SOFT_RESET_RASCAL_CORE_SHIFT 31U
+#define ROGUE_CR_SOFT_RESET_RASCAL_CORE_CLRMSK 0xFFFFFFFF7FFFFFFFULL
+#define ROGUE_CR_SOFT_RESET_RASCAL_CORE_EN 0x0000000080000000ULL
+#define ROGUE_CR_SOFT_RESET_DUST_B_CORE_SHIFT 30U
+#define ROGUE_CR_SOFT_RESET_DUST_B_CORE_CLRMSK 0xFFFFFFFFBFFFFFFFULL
+#define ROGUE_CR_SOFT_RESET_DUST_B_CORE_EN 0x0000000040000000ULL
+#define ROGUE_CR_SOFT_RESET_DUST_A_CORE_SHIFT 29U
+#define ROGUE_CR_SOFT_RESET_DUST_A_CORE_CLRMSK 0xFFFFFFFFDFFFFFFFULL
+#define ROGUE_CR_SOFT_RESET_DUST_A_CORE_EN 0x0000000020000000ULL
+#define ROGUE_CR_SOFT_RESET_FB_TLCACHE_SHIFT 28U
+#define ROGUE_CR_SOFT_RESET_FB_TLCACHE_CLRMSK 0xFFFFFFFFEFFFFFFFULL
+#define ROGUE_CR_SOFT_RESET_FB_TLCACHE_EN 0x0000000010000000ULL
+#define ROGUE_CR_SOFT_RESET_SLC_SHIFT 27U
+#define ROGUE_CR_SOFT_RESET_SLC_CLRMSK 0xFFFFFFFFF7FFFFFFULL
+#define ROGUE_CR_SOFT_RESET_SLC_EN 0x0000000008000000ULL
+#define ROGUE_CR_SOFT_RESET_TLA_SHIFT 26U
+#define ROGUE_CR_SOFT_RESET_TLA_CLRMSK 0xFFFFFFFFFBFFFFFFULL
+#define ROGUE_CR_SOFT_RESET_TLA_EN 0x0000000004000000ULL
+#define ROGUE_CR_SOFT_RESET_UVS_SHIFT 25U
+#define ROGUE_CR_SOFT_RESET_UVS_CLRMSK 0xFFFFFFFFFDFFFFFFULL
+#define ROGUE_CR_SOFT_RESET_UVS_EN 0x0000000002000000ULL
+#define ROGUE_CR_SOFT_RESET_TE_SHIFT 24U
+#define ROGUE_CR_SOFT_RESET_TE_CLRMSK 0xFFFFFFFFFEFFFFFFULL
+#define ROGUE_CR_SOFT_RESET_TE_EN 0x0000000001000000ULL
+#define ROGUE_CR_SOFT_RESET_GPP_SHIFT 23U
+#define ROGUE_CR_SOFT_RESET_GPP_CLRMSK 0xFFFFFFFFFF7FFFFFULL
+#define ROGUE_CR_SOFT_RESET_GPP_EN 0x0000000000800000ULL
+#define ROGUE_CR_SOFT_RESET_FBDC_SHIFT 22U
+#define ROGUE_CR_SOFT_RESET_FBDC_CLRMSK 0xFFFFFFFFFFBFFFFFULL
+#define ROGUE_CR_SOFT_RESET_FBDC_EN 0x0000000000400000ULL
+#define ROGUE_CR_SOFT_RESET_FBC_SHIFT 21U
+#define ROGUE_CR_SOFT_RESET_FBC_CLRMSK 0xFFFFFFFFFFDFFFFFULL
+#define ROGUE_CR_SOFT_RESET_FBC_EN 0x0000000000200000ULL
+#define ROGUE_CR_SOFT_RESET_PM_SHIFT 20U
+#define ROGUE_CR_SOFT_RESET_PM_CLRMSK 0xFFFFFFFFFFEFFFFFULL
+#define ROGUE_CR_SOFT_RESET_PM_EN 0x0000000000100000ULL
+#define ROGUE_CR_SOFT_RESET_PBE_SHIFT 19U
+#define ROGUE_CR_SOFT_RESET_PBE_CLRMSK 0xFFFFFFFFFFF7FFFFULL
+#define ROGUE_CR_SOFT_RESET_PBE_EN 0x0000000000080000ULL
+#define ROGUE_CR_SOFT_RESET_USC_SHARED_SHIFT 18U
+#define ROGUE_CR_SOFT_RESET_USC_SHARED_CLRMSK 0xFFFFFFFFFFFBFFFFULL
+#define ROGUE_CR_SOFT_RESET_USC_SHARED_EN 0x0000000000040000ULL
+#define ROGUE_CR_SOFT_RESET_MCU_L1_SHIFT 17U
+#define ROGUE_CR_SOFT_RESET_MCU_L1_CLRMSK 0xFFFFFFFFFFFDFFFFULL
+#define ROGUE_CR_SOFT_RESET_MCU_L1_EN 0x0000000000020000ULL
+#define ROGUE_CR_SOFT_RESET_BIF_SHIFT 16U
+#define ROGUE_CR_SOFT_RESET_BIF_CLRMSK 0xFFFFFFFFFFFEFFFFULL
+#define ROGUE_CR_SOFT_RESET_BIF_EN 0x0000000000010000ULL
+#define ROGUE_CR_SOFT_RESET_CDM_SHIFT 15U
+#define ROGUE_CR_SOFT_RESET_CDM_CLRMSK 0xFFFFFFFFFFFF7FFFULL
+#define ROGUE_CR_SOFT_RESET_CDM_EN 0x0000000000008000ULL
+#define ROGUE_CR_SOFT_RESET_VDM_SHIFT 14U
+#define ROGUE_CR_SOFT_RESET_VDM_CLRMSK 0xFFFFFFFFFFFFBFFFULL
+#define ROGUE_CR_SOFT_RESET_VDM_EN 0x0000000000004000ULL
+#define ROGUE_CR_SOFT_RESET_TESS_SHIFT 13U
+#define ROGUE_CR_SOFT_RESET_TESS_CLRMSK 0xFFFFFFFFFFFFDFFFULL
+#define ROGUE_CR_SOFT_RESET_TESS_EN 0x0000000000002000ULL
+#define ROGUE_CR_SOFT_RESET_PDS_SHIFT 12U
+#define ROGUE_CR_SOFT_RESET_PDS_CLRMSK 0xFFFFFFFFFFFFEFFFULL
+#define ROGUE_CR_SOFT_RESET_PDS_EN 0x0000000000001000ULL
+#define ROGUE_CR_SOFT_RESET_ISP_SHIFT 11U
+#define ROGUE_CR_SOFT_RESET_ISP_CLRMSK 0xFFFFFFFFFFFFF7FFULL
+#define ROGUE_CR_SOFT_RESET_ISP_EN 0x0000000000000800ULL
+#define ROGUE_CR_SOFT_RESET_TSP_SHIFT 10U
+#define ROGUE_CR_SOFT_RESET_TSP_CLRMSK 0xFFFFFFFFFFFFFBFFULL
+#define ROGUE_CR_SOFT_RESET_TSP_EN 0x0000000000000400ULL
+#define ROGUE_CR_SOFT_RESET_SYSARB_SHIFT 5U
+#define ROGUE_CR_SOFT_RESET_SYSARB_CLRMSK 0xFFFFFFFFFFFFFFDFULL
+#define ROGUE_CR_SOFT_RESET_SYSARB_EN 0x0000000000000020ULL
+#define ROGUE_CR_SOFT_RESET_TPU_MCU_DEMUX_SHIFT 4U
+#define ROGUE_CR_SOFT_RESET_TPU_MCU_DEMUX_CLRMSK 0xFFFFFFFFFFFFFFEFULL
+#define ROGUE_CR_SOFT_RESET_TPU_MCU_DEMUX_EN 0x0000000000000010ULL
+#define ROGUE_CR_SOFT_RESET_MCU_L0_SHIFT 3U
+#define ROGUE_CR_SOFT_RESET_MCU_L0_CLRMSK 0xFFFFFFFFFFFFFFF7ULL
+#define ROGUE_CR_SOFT_RESET_MCU_L0_EN 0x0000000000000008ULL
+#define ROGUE_CR_SOFT_RESET_TPU_SHIFT 2U
+#define ROGUE_CR_SOFT_RESET_TPU_CLRMSK 0xFFFFFFFFFFFFFFFBULL
+#define ROGUE_CR_SOFT_RESET_TPU_EN 0x0000000000000004ULL
+#define ROGUE_CR_SOFT_RESET_USC_SHIFT 0U
+#define ROGUE_CR_SOFT_RESET_USC_CLRMSK 0xFFFFFFFFFFFFFFFEULL
+#define ROGUE_CR_SOFT_RESET_USC_EN 0x0000000000000001ULL
+
+/* Register ROGUE_CR_SOFT_RESET2 */
+#define ROGUE_CR_SOFT_RESET2 0x0108U
+#define ROGUE_CR_SOFT_RESET2_MASKFULL 0x00000000001FFFFFULL
+#define ROGUE_CR_SOFT_RESET2_SPFILTER_SHIFT 12U
+#define ROGUE_CR_SOFT_RESET2_SPFILTER_CLRMSK 0xFFE00FFFU
+#define ROGUE_CR_SOFT_RESET2_TDM_SHIFT 11U
+#define ROGUE_CR_SOFT_RESET2_TDM_CLRMSK 0xFFFFF7FFU
+#define ROGUE_CR_SOFT_RESET2_TDM_EN 0x00000800U
+#define ROGUE_CR_SOFT_RESET2_ASTC_SHIFT 10U
+#define ROGUE_CR_SOFT_RESET2_ASTC_CLRMSK 0xFFFFFBFFU
+#define ROGUE_CR_SOFT_RESET2_ASTC_EN 0x00000400U
+#define ROGUE_CR_SOFT_RESET2_BLACKPEARL_SHIFT 9U
+#define ROGUE_CR_SOFT_RESET2_BLACKPEARL_CLRMSK 0xFFFFFDFFU
+#define ROGUE_CR_SOFT_RESET2_BLACKPEARL_EN 0x00000200U
+#define ROGUE_CR_SOFT_RESET2_USCPS_SHIFT 8U
+#define ROGUE_CR_SOFT_RESET2_USCPS_CLRMSK 0xFFFFFEFFU
+#define ROGUE_CR_SOFT_RESET2_USCPS_EN 0x00000100U
+#define ROGUE_CR_SOFT_RESET2_IPF_SHIFT 7U
+#define ROGUE_CR_SOFT_RESET2_IPF_CLRMSK 0xFFFFFF7FU
+#define ROGUE_CR_SOFT_RESET2_IPF_EN 0x00000080U
+#define ROGUE_CR_SOFT_RESET2_GEOMETRY_SHIFT 6U
+#define ROGUE_CR_SOFT_RESET2_GEOMETRY_CLRMSK 0xFFFFFFBFU
+#define ROGUE_CR_SOFT_RESET2_GEOMETRY_EN 0x00000040U
+#define ROGUE_CR_SOFT_RESET2_USC_SHARED_SHIFT 5U
+#define ROGUE_CR_SOFT_RESET2_USC_SHARED_CLRMSK 0xFFFFFFDFU
+#define ROGUE_CR_SOFT_RESET2_USC_SHARED_EN 0x00000020U
+#define ROGUE_CR_SOFT_RESET2_PDS_SHARED_SHIFT 4U
+#define ROGUE_CR_SOFT_RESET2_PDS_SHARED_CLRMSK 0xFFFFFFEFU
+#define ROGUE_CR_SOFT_RESET2_PDS_SHARED_EN 0x00000010U
+#define ROGUE_CR_SOFT_RESET2_BIF_BLACKPEARL_SHIFT 3U
+#define ROGUE_CR_SOFT_RESET2_BIF_BLACKPEARL_CLRMSK 0xFFFFFFF7U
+#define ROGUE_CR_SOFT_RESET2_BIF_BLACKPEARL_EN 0x00000008U
+#define ROGUE_CR_SOFT_RESET2_PIXEL_SHIFT 2U
+#define ROGUE_CR_SOFT_RESET2_PIXEL_CLRMSK 0xFFFFFFFBU
+#define ROGUE_CR_SOFT_RESET2_PIXEL_EN 0x00000004U
+#define ROGUE_CR_SOFT_RESET2_CDM_SHIFT 1U
+#define ROGUE_CR_SOFT_RESET2_CDM_CLRMSK 0xFFFFFFFDU
+#define ROGUE_CR_SOFT_RESET2_CDM_EN 0x00000002U
+#define ROGUE_CR_SOFT_RESET2_VERTEX_SHIFT 0U
+#define ROGUE_CR_SOFT_RESET2_VERTEX_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_SOFT_RESET2_VERTEX_EN 0x00000001U
+
+/* Register ROGUE_CR_EVENT_STATUS */
+#define ROGUE_CR_EVENT_STATUS 0x0130U
+#define ROGUE_CR_EVENT_STATUS__ROGUEXE__MASKFULL 0x00000000E01DFFFFULL
+#define ROGUE_CR_EVENT_STATUS__SIGNALS__MASKFULL 0x00000000E007FFFFULL
+#define ROGUE_CR_EVENT_STATUS_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_EVENT_STATUS_TDM_FENCE_FINISHED_SHIFT 31U
+#define ROGUE_CR_EVENT_STATUS_TDM_FENCE_FINISHED_CLRMSK 0x7FFFFFFFU
+#define ROGUE_CR_EVENT_STATUS_TDM_FENCE_FINISHED_EN 0x80000000U
+#define ROGUE_CR_EVENT_STATUS_TDM_BUFFER_STALL_SHIFT 30U
+#define ROGUE_CR_EVENT_STATUS_TDM_BUFFER_STALL_CLRMSK 0xBFFFFFFFU
+#define ROGUE_CR_EVENT_STATUS_TDM_BUFFER_STALL_EN 0x40000000U
+#define ROGUE_CR_EVENT_STATUS_COMPUTE_SIGNAL_FAILURE_SHIFT 29U
+#define ROGUE_CR_EVENT_STATUS_COMPUTE_SIGNAL_FAILURE_CLRMSK 0xDFFFFFFFU
+#define ROGUE_CR_EVENT_STATUS_COMPUTE_SIGNAL_FAILURE_EN 0x20000000U
+#define ROGUE_CR_EVENT_STATUS_DPX_OUT_OF_MEMORY_SHIFT 28U
+#define ROGUE_CR_EVENT_STATUS_DPX_OUT_OF_MEMORY_CLRMSK 0xEFFFFFFFU
+#define ROGUE_CR_EVENT_STATUS_DPX_OUT_OF_MEMORY_EN 0x10000000U
+#define ROGUE_CR_EVENT_STATUS_DPX_MMU_PAGE_FAULT_SHIFT 27U
+#define ROGUE_CR_EVENT_STATUS_DPX_MMU_PAGE_FAULT_CLRMSK 0xF7FFFFFFU
+#define ROGUE_CR_EVENT_STATUS_DPX_MMU_PAGE_FAULT_EN 0x08000000U
+#define ROGUE_CR_EVENT_STATUS_RPM_OUT_OF_MEMORY_SHIFT 26U
+#define ROGUE_CR_EVENT_STATUS_RPM_OUT_OF_MEMORY_CLRMSK 0xFBFFFFFFU
+#define ROGUE_CR_EVENT_STATUS_RPM_OUT_OF_MEMORY_EN 0x04000000U
+#define ROGUE_CR_EVENT_STATUS_FBA_FC3_FINISHED_SHIFT 25U
+#define ROGUE_CR_EVENT_STATUS_FBA_FC3_FINISHED_CLRMSK 0xFDFFFFFFU
+#define ROGUE_CR_EVENT_STATUS_FBA_FC3_FINISHED_EN 0x02000000U
+#define ROGUE_CR_EVENT_STATUS_FBA_FC2_FINISHED_SHIFT 24U
+#define ROGUE_CR_EVENT_STATUS_FBA_FC2_FINISHED_CLRMSK 0xFEFFFFFFU
+#define ROGUE_CR_EVENT_STATUS_FBA_FC2_FINISHED_EN 0x01000000U
+#define ROGUE_CR_EVENT_STATUS_FBA_FC1_FINISHED_SHIFT 23U
+#define ROGUE_CR_EVENT_STATUS_FBA_FC1_FINISHED_CLRMSK 0xFF7FFFFFU
+#define ROGUE_CR_EVENT_STATUS_FBA_FC1_FINISHED_EN 0x00800000U
+#define ROGUE_CR_EVENT_STATUS_FBA_FC0_FINISHED_SHIFT 22U
+#define ROGUE_CR_EVENT_STATUS_FBA_FC0_FINISHED_CLRMSK 0xFFBFFFFFU
+#define ROGUE_CR_EVENT_STATUS_FBA_FC0_FINISHED_EN 0x00400000U
+#define ROGUE_CR_EVENT_STATUS_RDM_FC3_FINISHED_SHIFT 21U
+#define ROGUE_CR_EVENT_STATUS_RDM_FC3_FINISHED_CLRMSK 0xFFDFFFFFU
+#define ROGUE_CR_EVENT_STATUS_RDM_FC3_FINISHED_EN 0x00200000U
+#define ROGUE_CR_EVENT_STATUS_RDM_FC2_FINISHED_SHIFT 20U
+#define ROGUE_CR_EVENT_STATUS_RDM_FC2_FINISHED_CLRMSK 0xFFEFFFFFU
+#define ROGUE_CR_EVENT_STATUS_RDM_FC2_FINISHED_EN 0x00100000U
+#define ROGUE_CR_EVENT_STATUS_SAFETY_SHIFT 20U
+#define ROGUE_CR_EVENT_STATUS_SAFETY_CLRMSK 0xFFEFFFFFU
+#define ROGUE_CR_EVENT_STATUS_SAFETY_EN 0x00100000U
+#define ROGUE_CR_EVENT_STATUS_RDM_FC1_FINISHED_SHIFT 19U
+#define ROGUE_CR_EVENT_STATUS_RDM_FC1_FINISHED_CLRMSK 0xFFF7FFFFU
+#define ROGUE_CR_EVENT_STATUS_RDM_FC1_FINISHED_EN 0x00080000U
+#define ROGUE_CR_EVENT_STATUS_SLAVE_REQ_SHIFT 19U
+#define ROGUE_CR_EVENT_STATUS_SLAVE_REQ_CLRMSK 0xFFF7FFFFU
+#define ROGUE_CR_EVENT_STATUS_SLAVE_REQ_EN 0x00080000U
+#define ROGUE_CR_EVENT_STATUS_RDM_FC0_FINISHED_SHIFT 18U
+#define ROGUE_CR_EVENT_STATUS_RDM_FC0_FINISHED_CLRMSK 0xFFFBFFFFU
+#define ROGUE_CR_EVENT_STATUS_RDM_FC0_FINISHED_EN 0x00040000U
+#define ROGUE_CR_EVENT_STATUS_TDM_CONTEXT_STORE_FINISHED_SHIFT 18U
+#define ROGUE_CR_EVENT_STATUS_TDM_CONTEXT_STORE_FINISHED_CLRMSK 0xFFFBFFFFU
+#define ROGUE_CR_EVENT_STATUS_TDM_CONTEXT_STORE_FINISHED_EN 0x00040000U
+#define ROGUE_CR_EVENT_STATUS_SHG_FINISHED_SHIFT 17U
+#define ROGUE_CR_EVENT_STATUS_SHG_FINISHED_CLRMSK 0xFFFDFFFFU
+#define ROGUE_CR_EVENT_STATUS_SHG_FINISHED_EN 0x00020000U
+#define ROGUE_CR_EVENT_STATUS_SPFILTER_SIGNAL_UPDATE_SHIFT 17U
+#define ROGUE_CR_EVENT_STATUS_SPFILTER_SIGNAL_UPDATE_CLRMSK 0xFFFDFFFFU
+#define ROGUE_CR_EVENT_STATUS_SPFILTER_SIGNAL_UPDATE_EN 0x00020000U
+#define ROGUE_CR_EVENT_STATUS_COMPUTE_BUFFER_STALL_SHIFT 16U
+#define ROGUE_CR_EVENT_STATUS_COMPUTE_BUFFER_STALL_CLRMSK 0xFFFEFFFFU
+#define ROGUE_CR_EVENT_STATUS_COMPUTE_BUFFER_STALL_EN 0x00010000U
+#define ROGUE_CR_EVENT_STATUS_USC_TRIGGER_SHIFT 15U
+#define ROGUE_CR_EVENT_STATUS_USC_TRIGGER_CLRMSK 0xFFFF7FFFU
+#define ROGUE_CR_EVENT_STATUS_USC_TRIGGER_EN 0x00008000U
+#define ROGUE_CR_EVENT_STATUS_ZLS_FINISHED_SHIFT 14U
+#define ROGUE_CR_EVENT_STATUS_ZLS_FINISHED_CLRMSK 0xFFFFBFFFU
+#define ROGUE_CR_EVENT_STATUS_ZLS_FINISHED_EN 0x00004000U
+#define ROGUE_CR_EVENT_STATUS_GPIO_ACK_SHIFT 13U
+#define ROGUE_CR_EVENT_STATUS_GPIO_ACK_CLRMSK 0xFFFFDFFFU
+#define ROGUE_CR_EVENT_STATUS_GPIO_ACK_EN 0x00002000U
+#define ROGUE_CR_EVENT_STATUS_GPIO_REQ_SHIFT 12U
+#define ROGUE_CR_EVENT_STATUS_GPIO_REQ_CLRMSK 0xFFFFEFFFU
+#define ROGUE_CR_EVENT_STATUS_GPIO_REQ_EN 0x00001000U
+#define ROGUE_CR_EVENT_STATUS_POWER_ABORT_SHIFT 11U
+#define ROGUE_CR_EVENT_STATUS_POWER_ABORT_CLRMSK 0xFFFFF7FFU
+#define ROGUE_CR_EVENT_STATUS_POWER_ABORT_EN 0x00000800U
+#define ROGUE_CR_EVENT_STATUS_POWER_COMPLETE_SHIFT 10U
+#define ROGUE_CR_EVENT_STATUS_POWER_COMPLETE_CLRMSK 0xFFFFFBFFU
+#define ROGUE_CR_EVENT_STATUS_POWER_COMPLETE_EN 0x00000400U
+#define ROGUE_CR_EVENT_STATUS_MMU_PAGE_FAULT_SHIFT 9U
+#define ROGUE_CR_EVENT_STATUS_MMU_PAGE_FAULT_CLRMSK 0xFFFFFDFFU
+#define ROGUE_CR_EVENT_STATUS_MMU_PAGE_FAULT_EN 0x00000200U
+#define ROGUE_CR_EVENT_STATUS_PM_3D_MEM_FREE_SHIFT 8U
+#define ROGUE_CR_EVENT_STATUS_PM_3D_MEM_FREE_CLRMSK 0xFFFFFEFFU
+#define ROGUE_CR_EVENT_STATUS_PM_3D_MEM_FREE_EN 0x00000100U
+#define ROGUE_CR_EVENT_STATUS_PM_OUT_OF_MEMORY_SHIFT 7U
+#define ROGUE_CR_EVENT_STATUS_PM_OUT_OF_MEMORY_CLRMSK 0xFFFFFF7FU
+#define ROGUE_CR_EVENT_STATUS_PM_OUT_OF_MEMORY_EN 0x00000080U
+#define ROGUE_CR_EVENT_STATUS_TA_TERMINATE_SHIFT 6U
+#define ROGUE_CR_EVENT_STATUS_TA_TERMINATE_CLRMSK 0xFFFFFFBFU
+#define ROGUE_CR_EVENT_STATUS_TA_TERMINATE_EN 0x00000040U
+#define ROGUE_CR_EVENT_STATUS_TA_FINISHED_SHIFT 5U
+#define ROGUE_CR_EVENT_STATUS_TA_FINISHED_CLRMSK 0xFFFFFFDFU
+#define ROGUE_CR_EVENT_STATUS_TA_FINISHED_EN 0x00000020U
+#define ROGUE_CR_EVENT_STATUS_ISP_END_MACROTILE_SHIFT 4U
+#define ROGUE_CR_EVENT_STATUS_ISP_END_MACROTILE_CLRMSK 0xFFFFFFEFU
+#define ROGUE_CR_EVENT_STATUS_ISP_END_MACROTILE_EN 0x00000010U
+#define ROGUE_CR_EVENT_STATUS_PIXELBE_END_RENDER_SHIFT 3U
+#define ROGUE_CR_EVENT_STATUS_PIXELBE_END_RENDER_CLRMSK 0xFFFFFFF7U
+#define ROGUE_CR_EVENT_STATUS_PIXELBE_END_RENDER_EN 0x00000008U
+#define ROGUE_CR_EVENT_STATUS_COMPUTE_FINISHED_SHIFT 2U
+#define ROGUE_CR_EVENT_STATUS_COMPUTE_FINISHED_CLRMSK 0xFFFFFFFBU
+#define ROGUE_CR_EVENT_STATUS_COMPUTE_FINISHED_EN 0x00000004U
+#define ROGUE_CR_EVENT_STATUS_KERNEL_FINISHED_SHIFT 1U
+#define ROGUE_CR_EVENT_STATUS_KERNEL_FINISHED_CLRMSK 0xFFFFFFFDU
+#define ROGUE_CR_EVENT_STATUS_KERNEL_FINISHED_EN 0x00000002U
+#define ROGUE_CR_EVENT_STATUS_TLA_COMPLETE_SHIFT 0U
+#define ROGUE_CR_EVENT_STATUS_TLA_COMPLETE_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_EVENT_STATUS_TLA_COMPLETE_EN 0x00000001U
+
+/* Register ROGUE_CR_TIMER */
+#define ROGUE_CR_TIMER 0x0160U
+#define ROGUE_CR_TIMER_MASKFULL 0x8000FFFFFFFFFFFFULL
+#define ROGUE_CR_TIMER_BIT31_SHIFT 63U
+#define ROGUE_CR_TIMER_BIT31_CLRMSK 0x7FFFFFFFFFFFFFFFULL
+#define ROGUE_CR_TIMER_BIT31_EN 0x8000000000000000ULL
+#define ROGUE_CR_TIMER_VALUE_SHIFT 0U
+#define ROGUE_CR_TIMER_VALUE_CLRMSK 0xFFFF000000000000ULL
+
+/* Register ROGUE_CR_TLA_STATUS */
+#define ROGUE_CR_TLA_STATUS 0x0178U
+#define ROGUE_CR_TLA_STATUS_MASKFULL 0xFFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_TLA_STATUS_BLIT_COUNT_SHIFT 39U
+#define ROGUE_CR_TLA_STATUS_BLIT_COUNT_CLRMSK 0x0000007FFFFFFFFFULL
+#define ROGUE_CR_TLA_STATUS_REQUEST_SHIFT 7U
+#define ROGUE_CR_TLA_STATUS_REQUEST_CLRMSK 0xFFFFFF800000007FULL
+#define ROGUE_CR_TLA_STATUS_FIFO_FULLNESS_SHIFT 1U
+#define ROGUE_CR_TLA_STATUS_FIFO_FULLNESS_CLRMSK 0xFFFFFFFFFFFFFF81ULL
+#define ROGUE_CR_TLA_STATUS_BUSY_SHIFT 0U
+#define ROGUE_CR_TLA_STATUS_BUSY_CLRMSK 0xFFFFFFFFFFFFFFFEULL
+#define ROGUE_CR_TLA_STATUS_BUSY_EN 0x0000000000000001ULL
+
+/* Register ROGUE_CR_PM_PARTIAL_RENDER_ENABLE */
+#define ROGUE_CR_PM_PARTIAL_RENDER_ENABLE 0x0338U
+#define ROGUE_CR_PM_PARTIAL_RENDER_ENABLE_MASKFULL 0x0000000000000001ULL
+#define ROGUE_CR_PM_PARTIAL_RENDER_ENABLE_OP_SHIFT 0U
+#define ROGUE_CR_PM_PARTIAL_RENDER_ENABLE_OP_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_PM_PARTIAL_RENDER_ENABLE_OP_EN 0x00000001U
+
+/* Register ROGUE_CR_SIDEKICK_IDLE */
+#define ROGUE_CR_SIDEKICK_IDLE 0x03C8U
+#define ROGUE_CR_SIDEKICK_IDLE_MASKFULL 0x000000000000007FULL
+#define ROGUE_CR_SIDEKICK_IDLE_FB_CDC_SHIFT 6U
+#define ROGUE_CR_SIDEKICK_IDLE_FB_CDC_CLRMSK 0xFFFFFFBFU
+#define ROGUE_CR_SIDEKICK_IDLE_FB_CDC_EN 0x00000040U
+#define ROGUE_CR_SIDEKICK_IDLE_MMU_SHIFT 5U
+#define ROGUE_CR_SIDEKICK_IDLE_MMU_CLRMSK 0xFFFFFFDFU
+#define ROGUE_CR_SIDEKICK_IDLE_MMU_EN 0x00000020U
+#define ROGUE_CR_SIDEKICK_IDLE_BIF128_SHIFT 4U
+#define ROGUE_CR_SIDEKICK_IDLE_BIF128_CLRMSK 0xFFFFFFEFU
+#define ROGUE_CR_SIDEKICK_IDLE_BIF128_EN 0x00000010U
+#define ROGUE_CR_SIDEKICK_IDLE_TLA_SHIFT 3U
+#define ROGUE_CR_SIDEKICK_IDLE_TLA_CLRMSK 0xFFFFFFF7U
+#define ROGUE_CR_SIDEKICK_IDLE_TLA_EN 0x00000008U
+#define ROGUE_CR_SIDEKICK_IDLE_GARTEN_SHIFT 2U
+#define ROGUE_CR_SIDEKICK_IDLE_GARTEN_CLRMSK 0xFFFFFFFBU
+#define ROGUE_CR_SIDEKICK_IDLE_GARTEN_EN 0x00000004U
+#define ROGUE_CR_SIDEKICK_IDLE_HOSTIF_SHIFT 1U
+#define ROGUE_CR_SIDEKICK_IDLE_HOSTIF_CLRMSK 0xFFFFFFFDU
+#define ROGUE_CR_SIDEKICK_IDLE_HOSTIF_EN 0x00000002U
+#define ROGUE_CR_SIDEKICK_IDLE_SOCIF_SHIFT 0U
+#define ROGUE_CR_SIDEKICK_IDLE_SOCIF_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_SIDEKICK_IDLE_SOCIF_EN 0x00000001U
+
+/* Register ROGUE_CR_MARS_IDLE */
+#define ROGUE_CR_MARS_IDLE 0x08F8U
+#define ROGUE_CR_MARS_IDLE_MASKFULL 0x0000000000000007ULL
+#define ROGUE_CR_MARS_IDLE_MH_SYSARB0_SHIFT 2U
+#define ROGUE_CR_MARS_IDLE_MH_SYSARB0_CLRMSK 0xFFFFFFFBU
+#define ROGUE_CR_MARS_IDLE_MH_SYSARB0_EN 0x00000004U
+#define ROGUE_CR_MARS_IDLE_CPU_SHIFT 1U
+#define ROGUE_CR_MARS_IDLE_CPU_CLRMSK 0xFFFFFFFDU
+#define ROGUE_CR_MARS_IDLE_CPU_EN 0x00000002U
+#define ROGUE_CR_MARS_IDLE_SOCIF_SHIFT 0U
+#define ROGUE_CR_MARS_IDLE_SOCIF_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_MARS_IDLE_SOCIF_EN 0x00000001U
+
+/* Register ROGUE_CR_VDM_CONTEXT_STORE_STATUS */
+#define ROGUE_CR_VDM_CONTEXT_STORE_STATUS 0x0430U
+#define ROGUE_CR_VDM_CONTEXT_STORE_STATUS_MASKFULL 0x00000000000000F3ULL
+#define ROGUE_CR_VDM_CONTEXT_STORE_STATUS_LAST_PIPE_SHIFT 4U
+#define ROGUE_CR_VDM_CONTEXT_STORE_STATUS_LAST_PIPE_CLRMSK 0xFFFFFF0FU
+#define ROGUE_CR_VDM_CONTEXT_STORE_STATUS_NEED_RESUME_SHIFT 1U
+#define ROGUE_CR_VDM_CONTEXT_STORE_STATUS_NEED_RESUME_CLRMSK 0xFFFFFFFDU
+#define ROGUE_CR_VDM_CONTEXT_STORE_STATUS_NEED_RESUME_EN 0x00000002U
+#define ROGUE_CR_VDM_CONTEXT_STORE_STATUS_COMPLETE_SHIFT 0U
+#define ROGUE_CR_VDM_CONTEXT_STORE_STATUS_COMPLETE_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_VDM_CONTEXT_STORE_STATUS_COMPLETE_EN 0x00000001U
+
+/* Register ROGUE_CR_VDM_CONTEXT_STORE_TASK0 */
+#define ROGUE_CR_VDM_CONTEXT_STORE_TASK0 0x0438U
+#define ROGUE_CR_VDM_CONTEXT_STORE_TASK0_MASKFULL 0xFFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_VDM_CONTEXT_STORE_TASK0_PDS_STATE1_SHIFT 32U
+#define ROGUE_CR_VDM_CONTEXT_STORE_TASK0_PDS_STATE1_CLRMSK 0x00000000FFFFFFFFULL
+#define ROGUE_CR_VDM_CONTEXT_STORE_TASK0_PDS_STATE0_SHIFT 0U
+#define ROGUE_CR_VDM_CONTEXT_STORE_TASK0_PDS_STATE0_CLRMSK 0xFFFFFFFF00000000ULL
+
+/* Register ROGUE_CR_VDM_CONTEXT_STORE_TASK1 */
+#define ROGUE_CR_VDM_CONTEXT_STORE_TASK1 0x0440U
+#define ROGUE_CR_VDM_CONTEXT_STORE_TASK1_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_VDM_CONTEXT_STORE_TASK1_PDS_STATE2_SHIFT 0U
+#define ROGUE_CR_VDM_CONTEXT_STORE_TASK1_PDS_STATE2_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_VDM_CONTEXT_STORE_TASK2 */
+#define ROGUE_CR_VDM_CONTEXT_STORE_TASK2 0x0448U
+#define ROGUE_CR_VDM_CONTEXT_STORE_TASK2_MASKFULL 0xFFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_VDM_CONTEXT_STORE_TASK2_STREAM_OUT2_SHIFT 32U
+#define ROGUE_CR_VDM_CONTEXT_STORE_TASK2_STREAM_OUT2_CLRMSK 0x00000000FFFFFFFFULL
+#define ROGUE_CR_VDM_CONTEXT_STORE_TASK2_STREAM_OUT1_SHIFT 0U
+#define ROGUE_CR_VDM_CONTEXT_STORE_TASK2_STREAM_OUT1_CLRMSK 0xFFFFFFFF00000000ULL
+
+/* Register ROGUE_CR_VDM_CONTEXT_RESUME_TASK0 */
+#define ROGUE_CR_VDM_CONTEXT_RESUME_TASK0 0x0450U
+#define ROGUE_CR_VDM_CONTEXT_RESUME_TASK0_MASKFULL 0xFFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_VDM_CONTEXT_RESUME_TASK0_PDS_STATE1_SHIFT 32U
+#define ROGUE_CR_VDM_CONTEXT_RESUME_TASK0_PDS_STATE1_CLRMSK 0x00000000FFFFFFFFULL
+#define ROGUE_CR_VDM_CONTEXT_RESUME_TASK0_PDS_STATE0_SHIFT 0U
+#define ROGUE_CR_VDM_CONTEXT_RESUME_TASK0_PDS_STATE0_CLRMSK 0xFFFFFFFF00000000ULL
+
+/* Register ROGUE_CR_VDM_CONTEXT_RESUME_TASK1 */
+#define ROGUE_CR_VDM_CONTEXT_RESUME_TASK1 0x0458U
+#define ROGUE_CR_VDM_CONTEXT_RESUME_TASK1_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_VDM_CONTEXT_RESUME_TASK1_PDS_STATE2_SHIFT 0U
+#define ROGUE_CR_VDM_CONTEXT_RESUME_TASK1_PDS_STATE2_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_VDM_CONTEXT_RESUME_TASK2 */
+#define ROGUE_CR_VDM_CONTEXT_RESUME_TASK2 0x0460U
+#define ROGUE_CR_VDM_CONTEXT_RESUME_TASK2_MASKFULL 0xFFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_VDM_CONTEXT_RESUME_TASK2_STREAM_OUT2_SHIFT 32U
+#define ROGUE_CR_VDM_CONTEXT_RESUME_TASK2_STREAM_OUT2_CLRMSK 0x00000000FFFFFFFFULL
+#define ROGUE_CR_VDM_CONTEXT_RESUME_TASK2_STREAM_OUT1_SHIFT 0U
+#define ROGUE_CR_VDM_CONTEXT_RESUME_TASK2_STREAM_OUT1_CLRMSK 0xFFFFFFFF00000000ULL
+
+/* Register ROGUE_CR_CDM_CONTEXT_STORE_STATUS */
+#define ROGUE_CR_CDM_CONTEXT_STORE_STATUS 0x04A0U
+#define ROGUE_CR_CDM_CONTEXT_STORE_STATUS_MASKFULL 0x0000000000000003ULL
+#define ROGUE_CR_CDM_CONTEXT_STORE_STATUS_NEED_RESUME_SHIFT 1U
+#define ROGUE_CR_CDM_CONTEXT_STORE_STATUS_NEED_RESUME_CLRMSK 0xFFFFFFFDU
+#define ROGUE_CR_CDM_CONTEXT_STORE_STATUS_NEED_RESUME_EN 0x00000002U
+#define ROGUE_CR_CDM_CONTEXT_STORE_STATUS_COMPLETE_SHIFT 0U
+#define ROGUE_CR_CDM_CONTEXT_STORE_STATUS_COMPLETE_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_CDM_CONTEXT_STORE_STATUS_COMPLETE_EN 0x00000001U
+
+/* Register ROGUE_CR_CDM_CONTEXT_PDS0 */
+#define ROGUE_CR_CDM_CONTEXT_PDS0 0x04A8U
+#define ROGUE_CR_CDM_CONTEXT_PDS0_MASKFULL 0xFFFFFFF0FFFFFFF0ULL
+#define ROGUE_CR_CDM_CONTEXT_PDS0_DATA_ADDR_SHIFT 36U
+#define ROGUE_CR_CDM_CONTEXT_PDS0_DATA_ADDR_CLRMSK 0x0000000FFFFFFFFFULL
+#define ROGUE_CR_CDM_CONTEXT_PDS0_DATA_ADDR_ALIGNSHIFT 4U
+#define ROGUE_CR_CDM_CONTEXT_PDS0_DATA_ADDR_ALIGNSIZE 16U
+#define ROGUE_CR_CDM_CONTEXT_PDS0_CODE_ADDR_SHIFT 4U
+#define ROGUE_CR_CDM_CONTEXT_PDS0_CODE_ADDR_CLRMSK 0xFFFFFFFF0000000FULL
+#define ROGUE_CR_CDM_CONTEXT_PDS0_CODE_ADDR_ALIGNSHIFT 4U
+#define ROGUE_CR_CDM_CONTEXT_PDS0_CODE_ADDR_ALIGNSIZE 16U
+
+/* Register ROGUE_CR_CDM_CONTEXT_PDS1 */
+#define ROGUE_CR_CDM_CONTEXT_PDS1 0x04B0U
+#define ROGUE_CR_CDM_CONTEXT_PDS1__TEMPSIZE8__MASKFULL 0x000000007FFFFFFFULL
+#define ROGUE_CR_CDM_CONTEXT_PDS1_MASKFULL 0x000000003FFFFFFFULL
+#define ROGUE_CR_CDM_CONTEXT_PDS1__TEMPSIZE8__PDS_SEQ_DEP_SHIFT 30U
+#define ROGUE_CR_CDM_CONTEXT_PDS1__TEMPSIZE8__PDS_SEQ_DEP_CLRMSK 0xBFFFFFFFU
+#define ROGUE_CR_CDM_CONTEXT_PDS1__TEMPSIZE8__PDS_SEQ_DEP_EN 0x40000000U
+#define ROGUE_CR_CDM_CONTEXT_PDS1_PDS_SEQ_DEP_SHIFT 29U
+#define ROGUE_CR_CDM_CONTEXT_PDS1_PDS_SEQ_DEP_CLRMSK 0xDFFFFFFFU
+#define ROGUE_CR_CDM_CONTEXT_PDS1_PDS_SEQ_DEP_EN 0x20000000U
+#define ROGUE_CR_CDM_CONTEXT_PDS1__TEMPSIZE8__USC_SEQ_DEP_SHIFT 29U
+#define ROGUE_CR_CDM_CONTEXT_PDS1__TEMPSIZE8__USC_SEQ_DEP_CLRMSK 0xDFFFFFFFU
+#define ROGUE_CR_CDM_CONTEXT_PDS1__TEMPSIZE8__USC_SEQ_DEP_EN 0x20000000U
+#define ROGUE_CR_CDM_CONTEXT_PDS1_USC_SEQ_DEP_SHIFT 28U
+#define ROGUE_CR_CDM_CONTEXT_PDS1_USC_SEQ_DEP_CLRMSK 0xEFFFFFFFU
+#define ROGUE_CR_CDM_CONTEXT_PDS1_USC_SEQ_DEP_EN 0x10000000U
+#define ROGUE_CR_CDM_CONTEXT_PDS1__TEMPSIZE8__TARGET_SHIFT 28U
+#define ROGUE_CR_CDM_CONTEXT_PDS1__TEMPSIZE8__TARGET_CLRMSK 0xEFFFFFFFU
+#define ROGUE_CR_CDM_CONTEXT_PDS1__TEMPSIZE8__TARGET_EN 0x10000000U
+#define ROGUE_CR_CDM_CONTEXT_PDS1_TARGET_SHIFT 27U
+#define ROGUE_CR_CDM_CONTEXT_PDS1_TARGET_CLRMSK 0xF7FFFFFFU
+#define ROGUE_CR_CDM_CONTEXT_PDS1_TARGET_EN 0x08000000U
+#define ROGUE_CR_CDM_CONTEXT_PDS1__TEMPSIZE8__UNIFIED_SIZE_SHIFT 22U
+#define ROGUE_CR_CDM_CONTEXT_PDS1__TEMPSIZE8__UNIFIED_SIZE_CLRMSK 0xF03FFFFFU
+#define ROGUE_CR_CDM_CONTEXT_PDS1_UNIFIED_SIZE_SHIFT 21U
+#define ROGUE_CR_CDM_CONTEXT_PDS1_UNIFIED_SIZE_CLRMSK 0xF81FFFFFU
+#define ROGUE_CR_CDM_CONTEXT_PDS1__TEMPSIZE8__COMMON_SHARED_SHIFT 21U
+#define ROGUE_CR_CDM_CONTEXT_PDS1__TEMPSIZE8__COMMON_SHARED_CLRMSK 0xFFDFFFFFU
+#define ROGUE_CR_CDM_CONTEXT_PDS1__TEMPSIZE8__COMMON_SHARED_EN 0x00200000U
+#define ROGUE_CR_CDM_CONTEXT_PDS1_COMMON_SHARED_SHIFT 20U
+#define ROGUE_CR_CDM_CONTEXT_PDS1_COMMON_SHARED_CLRMSK 0xFFEFFFFFU
+#define ROGUE_CR_CDM_CONTEXT_PDS1_COMMON_SHARED_EN 0x00100000U
+#define ROGUE_CR_CDM_CONTEXT_PDS1__TEMPSIZE8__COMMON_SIZE_SHIFT 12U
+#define ROGUE_CR_CDM_CONTEXT_PDS1__TEMPSIZE8__COMMON_SIZE_CLRMSK 0xFFE00FFFU
+#define ROGUE_CR_CDM_CONTEXT_PDS1_COMMON_SIZE_SHIFT 11U
+#define ROGUE_CR_CDM_CONTEXT_PDS1_COMMON_SIZE_CLRMSK 0xFFF007FFU
+#define ROGUE_CR_CDM_CONTEXT_PDS1_TEMP_SIZE_SHIFT 7U
+#define ROGUE_CR_CDM_CONTEXT_PDS1_TEMP_SIZE_CLRMSK 0xFFFFF87FU
+#define ROGUE_CR_CDM_CONTEXT_PDS1__TEMPSIZE8__TEMP_SIZE_SHIFT 7U
+#define ROGUE_CR_CDM_CONTEXT_PDS1__TEMPSIZE8__TEMP_SIZE_CLRMSK 0xFFFFF07FU
+#define ROGUE_CR_CDM_CONTEXT_PDS1_DATA_SIZE_SHIFT 1U
+#define ROGUE_CR_CDM_CONTEXT_PDS1_DATA_SIZE_CLRMSK 0xFFFFFF81U
+#define ROGUE_CR_CDM_CONTEXT_PDS1_FENCE_SHIFT 0U
+#define ROGUE_CR_CDM_CONTEXT_PDS1_FENCE_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_CDM_CONTEXT_PDS1_FENCE_EN 0x00000001U
+
+/* Register ROGUE_CR_CDM_TERMINATE_PDS */
+#define ROGUE_CR_CDM_TERMINATE_PDS 0x04B8U
+#define ROGUE_CR_CDM_TERMINATE_PDS_MASKFULL 0xFFFFFFF0FFFFFFF0ULL
+#define ROGUE_CR_CDM_TERMINATE_PDS_DATA_ADDR_SHIFT 36U
+#define ROGUE_CR_CDM_TERMINATE_PDS_DATA_ADDR_CLRMSK 0x0000000FFFFFFFFFULL
+#define ROGUE_CR_CDM_TERMINATE_PDS_DATA_ADDR_ALIGNSHIFT 4U
+#define ROGUE_CR_CDM_TERMINATE_PDS_DATA_ADDR_ALIGNSIZE 16U
+#define ROGUE_CR_CDM_TERMINATE_PDS_CODE_ADDR_SHIFT 4U
+#define ROGUE_CR_CDM_TERMINATE_PDS_CODE_ADDR_CLRMSK 0xFFFFFFFF0000000FULL
+#define ROGUE_CR_CDM_TERMINATE_PDS_CODE_ADDR_ALIGNSHIFT 4U
+#define ROGUE_CR_CDM_TERMINATE_PDS_CODE_ADDR_ALIGNSIZE 16U
+
+/* Register ROGUE_CR_CDM_TERMINATE_PDS1 */
+#define ROGUE_CR_CDM_TERMINATE_PDS1 0x04C0U
+#define ROGUE_CR_CDM_TERMINATE_PDS1__TEMPSIZE8__MASKFULL 0x000000007FFFFFFFULL
+#define ROGUE_CR_CDM_TERMINATE_PDS1_MASKFULL 0x000000003FFFFFFFULL
+#define ROGUE_CR_CDM_TERMINATE_PDS1__TEMPSIZE8__PDS_SEQ_DEP_SHIFT 30U
+#define ROGUE_CR_CDM_TERMINATE_PDS1__TEMPSIZE8__PDS_SEQ_DEP_CLRMSK 0xBFFFFFFFU
+#define ROGUE_CR_CDM_TERMINATE_PDS1__TEMPSIZE8__PDS_SEQ_DEP_EN 0x40000000U
+#define ROGUE_CR_CDM_TERMINATE_PDS1_PDS_SEQ_DEP_SHIFT 29U
+#define ROGUE_CR_CDM_TERMINATE_PDS1_PDS_SEQ_DEP_CLRMSK 0xDFFFFFFFU
+#define ROGUE_CR_CDM_TERMINATE_PDS1_PDS_SEQ_DEP_EN 0x20000000U
+#define ROGUE_CR_CDM_TERMINATE_PDS1__TEMPSIZE8__USC_SEQ_DEP_SHIFT 29U
+#define ROGUE_CR_CDM_TERMINATE_PDS1__TEMPSIZE8__USC_SEQ_DEP_CLRMSK 0xDFFFFFFFU
+#define ROGUE_CR_CDM_TERMINATE_PDS1__TEMPSIZE8__USC_SEQ_DEP_EN 0x20000000U
+#define ROGUE_CR_CDM_TERMINATE_PDS1_USC_SEQ_DEP_SHIFT 28U
+#define ROGUE_CR_CDM_TERMINATE_PDS1_USC_SEQ_DEP_CLRMSK 0xEFFFFFFFU
+#define ROGUE_CR_CDM_TERMINATE_PDS1_USC_SEQ_DEP_EN 0x10000000U
+#define ROGUE_CR_CDM_TERMINATE_PDS1__TEMPSIZE8__TARGET_SHIFT 28U
+#define ROGUE_CR_CDM_TERMINATE_PDS1__TEMPSIZE8__TARGET_CLRMSK 0xEFFFFFFFU
+#define ROGUE_CR_CDM_TERMINATE_PDS1__TEMPSIZE8__TARGET_EN 0x10000000U
+#define ROGUE_CR_CDM_TERMINATE_PDS1_TARGET_SHIFT 27U
+#define ROGUE_CR_CDM_TERMINATE_PDS1_TARGET_CLRMSK 0xF7FFFFFFU
+#define ROGUE_CR_CDM_TERMINATE_PDS1_TARGET_EN 0x08000000U
+#define ROGUE_CR_CDM_TERMINATE_PDS1__TEMPSIZE8__UNIFIED_SIZE_SHIFT 22U
+#define ROGUE_CR_CDM_TERMINATE_PDS1__TEMPSIZE8__UNIFIED_SIZE_CLRMSK 0xF03FFFFFU
+#define ROGUE_CR_CDM_TERMINATE_PDS1_UNIFIED_SIZE_SHIFT 21U
+#define ROGUE_CR_CDM_TERMINATE_PDS1_UNIFIED_SIZE_CLRMSK 0xF81FFFFFU
+#define ROGUE_CR_CDM_TERMINATE_PDS1__TEMPSIZE8__COMMON_SHARED_SHIFT 21U
+#define ROGUE_CR_CDM_TERMINATE_PDS1__TEMPSIZE8__COMMON_SHARED_CLRMSK 0xFFDFFFFFU
+#define ROGUE_CR_CDM_TERMINATE_PDS1__TEMPSIZE8__COMMON_SHARED_EN 0x00200000U
+#define ROGUE_CR_CDM_TERMINATE_PDS1_COMMON_SHARED_SHIFT 20U
+#define ROGUE_CR_CDM_TERMINATE_PDS1_COMMON_SHARED_CLRMSK 0xFFEFFFFFU
+#define ROGUE_CR_CDM_TERMINATE_PDS1_COMMON_SHARED_EN 0x00100000U
+#define ROGUE_CR_CDM_TERMINATE_PDS1__TEMPSIZE8__COMMON_SIZE_SHIFT 12U
+#define ROGUE_CR_CDM_TERMINATE_PDS1__TEMPSIZE8__COMMON_SIZE_CLRMSK 0xFFE00FFFU
+#define ROGUE_CR_CDM_TERMINATE_PDS1_COMMON_SIZE_SHIFT 11U
+#define ROGUE_CR_CDM_TERMINATE_PDS1_COMMON_SIZE_CLRMSK 0xFFF007FFU
+#define ROGUE_CR_CDM_TERMINATE_PDS1_TEMP_SIZE_SHIFT 7U
+#define ROGUE_CR_CDM_TERMINATE_PDS1_TEMP_SIZE_CLRMSK 0xFFFFF87FU
+#define ROGUE_CR_CDM_TERMINATE_PDS1__TEMPSIZE8__TEMP_SIZE_SHIFT 7U
+#define ROGUE_CR_CDM_TERMINATE_PDS1__TEMPSIZE8__TEMP_SIZE_CLRMSK 0xFFFFF07FU
+#define ROGUE_CR_CDM_TERMINATE_PDS1_DATA_SIZE_SHIFT 1U
+#define ROGUE_CR_CDM_TERMINATE_PDS1_DATA_SIZE_CLRMSK 0xFFFFFF81U
+#define ROGUE_CR_CDM_TERMINATE_PDS1_FENCE_SHIFT 0U
+#define ROGUE_CR_CDM_TERMINATE_PDS1_FENCE_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_CDM_TERMINATE_PDS1_FENCE_EN 0x00000001U
+
+/* Register ROGUE_CR_CDM_CONTEXT_LOAD_PDS0 */
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS0 0x04D8U
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS0_MASKFULL 0xFFFFFFF0FFFFFFF0ULL
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS0_DATA_ADDR_SHIFT 36U
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS0_DATA_ADDR_CLRMSK 0x0000000FFFFFFFFFULL
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS0_DATA_ADDR_ALIGNSHIFT 4U
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS0_DATA_ADDR_ALIGNSIZE 16U
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS0_CODE_ADDR_SHIFT 4U
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS0_CODE_ADDR_CLRMSK 0xFFFFFFFF0000000FULL
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS0_CODE_ADDR_ALIGNSHIFT 4U
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS0_CODE_ADDR_ALIGNSIZE 16U
+
+/* Register ROGUE_CR_CDM_CONTEXT_LOAD_PDS1 */
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS1 0x04E0U
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS1__TEMPSIZE8__MASKFULL 0x000000007FFFFFFFULL
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS1_MASKFULL 0x000000003FFFFFFFULL
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS1__TEMPSIZE8__PDS_SEQ_DEP_SHIFT 30U
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS1__TEMPSIZE8__PDS_SEQ_DEP_CLRMSK 0xBFFFFFFFU
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS1__TEMPSIZE8__PDS_SEQ_DEP_EN 0x40000000U
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS1_PDS_SEQ_DEP_SHIFT 29U
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS1_PDS_SEQ_DEP_CLRMSK 0xDFFFFFFFU
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS1_PDS_SEQ_DEP_EN 0x20000000U
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS1__TEMPSIZE8__USC_SEQ_DEP_SHIFT 29U
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS1__TEMPSIZE8__USC_SEQ_DEP_CLRMSK 0xDFFFFFFFU
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS1__TEMPSIZE8__USC_SEQ_DEP_EN 0x20000000U
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS1_USC_SEQ_DEP_SHIFT 28U
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS1_USC_SEQ_DEP_CLRMSK 0xEFFFFFFFU
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS1_USC_SEQ_DEP_EN 0x10000000U
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS1__TEMPSIZE8__TARGET_SHIFT 28U
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS1__TEMPSIZE8__TARGET_CLRMSK 0xEFFFFFFFU
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS1__TEMPSIZE8__TARGET_EN 0x10000000U
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS1_TARGET_SHIFT 27U
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS1_TARGET_CLRMSK 0xF7FFFFFFU
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS1_TARGET_EN 0x08000000U
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS1__TEMPSIZE8__UNIFIED_SIZE_SHIFT 22U
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS1__TEMPSIZE8__UNIFIED_SIZE_CLRMSK 0xF03FFFFFU
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS1_UNIFIED_SIZE_SHIFT 21U
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS1_UNIFIED_SIZE_CLRMSK 0xF81FFFFFU
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS1__TEMPSIZE8__COMMON_SHARED_SHIFT 21U
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS1__TEMPSIZE8__COMMON_SHARED_CLRMSK 0xFFDFFFFFU
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS1__TEMPSIZE8__COMMON_SHARED_EN 0x00200000U
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS1_COMMON_SHARED_SHIFT 20U
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS1_COMMON_SHARED_CLRMSK 0xFFEFFFFFU
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS1_COMMON_SHARED_EN 0x00100000U
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS1__TEMPSIZE8__COMMON_SIZE_SHIFT 12U
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS1__TEMPSIZE8__COMMON_SIZE_CLRMSK 0xFFE00FFFU
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS1_COMMON_SIZE_SHIFT 11U
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS1_COMMON_SIZE_CLRMSK 0xFFF007FFU
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS1_TEMP_SIZE_SHIFT 7U
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS1_TEMP_SIZE_CLRMSK 0xFFFFF87FU
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS1__TEMPSIZE8__TEMP_SIZE_SHIFT 7U
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS1__TEMPSIZE8__TEMP_SIZE_CLRMSK 0xFFFFF07FU
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS1_DATA_SIZE_SHIFT 1U
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS1_DATA_SIZE_CLRMSK 0xFFFFFF81U
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS1_FENCE_SHIFT 0U
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS1_FENCE_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_CDM_CONTEXT_LOAD_PDS1_FENCE_EN 0x00000001U
+
+/* Register ROGUE_CR_MIPS_WRAPPER_CONFIG */
+#define ROGUE_CR_MIPS_WRAPPER_CONFIG 0x0810U
+#define ROGUE_CR_MIPS_WRAPPER_CONFIG_MASKFULL 0x000001030F01FFFFULL
+#define ROGUE_CR_MIPS_WRAPPER_CONFIG_FW_IDLE_ENABLE_SHIFT 40U
+#define ROGUE_CR_MIPS_WRAPPER_CONFIG_FW_IDLE_ENABLE_CLRMSK 0xFFFFFEFFFFFFFFFFULL
+#define ROGUE_CR_MIPS_WRAPPER_CONFIG_FW_IDLE_ENABLE_EN 0x0000010000000000ULL
+#define ROGUE_CR_MIPS_WRAPPER_CONFIG_DISABLE_BOOT_SHIFT 33U
+#define ROGUE_CR_MIPS_WRAPPER_CONFIG_DISABLE_BOOT_CLRMSK 0xFFFFFFFDFFFFFFFFULL
+#define ROGUE_CR_MIPS_WRAPPER_CONFIG_DISABLE_BOOT_EN 0x0000000200000000ULL
+#define ROGUE_CR_MIPS_WRAPPER_CONFIG_L2_CACHE_OFF_SHIFT 32U
+#define ROGUE_CR_MIPS_WRAPPER_CONFIG_L2_CACHE_OFF_CLRMSK 0xFFFFFFFEFFFFFFFFULL
+#define ROGUE_CR_MIPS_WRAPPER_CONFIG_L2_CACHE_OFF_EN 0x0000000100000000ULL
+#define ROGUE_CR_MIPS_WRAPPER_CONFIG_OS_ID_SHIFT 25U
+#define ROGUE_CR_MIPS_WRAPPER_CONFIG_OS_ID_CLRMSK 0xFFFFFFFFF1FFFFFFULL
+#define ROGUE_CR_MIPS_WRAPPER_CONFIG_TRUSTED_SHIFT 24U
+#define ROGUE_CR_MIPS_WRAPPER_CONFIG_TRUSTED_CLRMSK 0xFFFFFFFFFEFFFFFFULL
+#define ROGUE_CR_MIPS_WRAPPER_CONFIG_TRUSTED_EN 0x0000000001000000ULL
+#define ROGUE_CR_MIPS_WRAPPER_CONFIG_BOOT_ISA_MODE_SHIFT 16U
+#define ROGUE_CR_MIPS_WRAPPER_CONFIG_BOOT_ISA_MODE_CLRMSK 0xFFFFFFFFFFFEFFFFULL
+#define ROGUE_CR_MIPS_WRAPPER_CONFIG_BOOT_ISA_MODE_MIPS32 0x0000000000000000ULL
+#define ROGUE_CR_MIPS_WRAPPER_CONFIG_BOOT_ISA_MODE_MICROMIPS 0x0000000000010000ULL
+#define ROGUE_CR_MIPS_WRAPPER_CONFIG_REGBANK_BASE_ADDR_SHIFT 0U
+#define ROGUE_CR_MIPS_WRAPPER_CONFIG_REGBANK_BASE_ADDR_CLRMSK 0xFFFFFFFFFFFF0000ULL
+
+/* Register ROGUE_CR_MIPS_ADDR_REMAP1_CONFIG1 */
+#define ROGUE_CR_MIPS_ADDR_REMAP1_CONFIG1 0x0818U
+#define ROGUE_CR_MIPS_ADDR_REMAP1_CONFIG1_MASKFULL 0x00000000FFFFF001ULL
+#define ROGUE_CR_MIPS_ADDR_REMAP1_CONFIG1_BASE_ADDR_IN_SHIFT 12U
+#define ROGUE_CR_MIPS_ADDR_REMAP1_CONFIG1_BASE_ADDR_IN_CLRMSK 0xFFFFFFFF00000FFFULL
+#define ROGUE_CR_MIPS_ADDR_REMAP1_CONFIG1_MODE_ENABLE_SHIFT 0U
+#define ROGUE_CR_MIPS_ADDR_REMAP1_CONFIG1_MODE_ENABLE_CLRMSK 0xFFFFFFFFFFFFFFFEULL
+#define ROGUE_CR_MIPS_ADDR_REMAP1_CONFIG1_MODE_ENABLE_EN 0x0000000000000001ULL
+
+/* Register ROGUE_CR_MIPS_ADDR_REMAP1_CONFIG2 */
+#define ROGUE_CR_MIPS_ADDR_REMAP1_CONFIG2 0x0820U
+#define ROGUE_CR_MIPS_ADDR_REMAP1_CONFIG2_MASKFULL 0x000000FFFFFFF1FFULL
+#define ROGUE_CR_MIPS_ADDR_REMAP1_CONFIG2_ADDR_OUT_SHIFT 12U
+#define ROGUE_CR_MIPS_ADDR_REMAP1_CONFIG2_ADDR_OUT_CLRMSK 0xFFFFFF0000000FFFULL
+#define ROGUE_CR_MIPS_ADDR_REMAP1_CONFIG2_OS_ID_SHIFT 6U
+#define ROGUE_CR_MIPS_ADDR_REMAP1_CONFIG2_OS_ID_CLRMSK 0xFFFFFFFFFFFFFE3FULL
+#define ROGUE_CR_MIPS_ADDR_REMAP1_CONFIG2_TRUSTED_SHIFT 5U
+#define ROGUE_CR_MIPS_ADDR_REMAP1_CONFIG2_TRUSTED_CLRMSK 0xFFFFFFFFFFFFFFDFULL
+#define ROGUE_CR_MIPS_ADDR_REMAP1_CONFIG2_TRUSTED_EN 0x0000000000000020ULL
+#define ROGUE_CR_MIPS_ADDR_REMAP1_CONFIG2_REGION_SIZE_POW2_SHIFT 0U
+#define ROGUE_CR_MIPS_ADDR_REMAP1_CONFIG2_REGION_SIZE_POW2_CLRMSK 0xFFFFFFFFFFFFFFE0ULL
+
+/* Register ROGUE_CR_MIPS_ADDR_REMAP2_CONFIG1 */
+#define ROGUE_CR_MIPS_ADDR_REMAP2_CONFIG1 0x0828U
+#define ROGUE_CR_MIPS_ADDR_REMAP2_CONFIG1_MASKFULL 0x00000000FFFFF001ULL
+#define ROGUE_CR_MIPS_ADDR_REMAP2_CONFIG1_BASE_ADDR_IN_SHIFT 12U
+#define ROGUE_CR_MIPS_ADDR_REMAP2_CONFIG1_BASE_ADDR_IN_CLRMSK 0xFFFFFFFF00000FFFULL
+#define ROGUE_CR_MIPS_ADDR_REMAP2_CONFIG1_MODE_ENABLE_SHIFT 0U
+#define ROGUE_CR_MIPS_ADDR_REMAP2_CONFIG1_MODE_ENABLE_CLRMSK 0xFFFFFFFFFFFFFFFEULL
+#define ROGUE_CR_MIPS_ADDR_REMAP2_CONFIG1_MODE_ENABLE_EN 0x0000000000000001ULL
+
+/* Register ROGUE_CR_MIPS_ADDR_REMAP2_CONFIG2 */
+#define ROGUE_CR_MIPS_ADDR_REMAP2_CONFIG2 0x0830U
+#define ROGUE_CR_MIPS_ADDR_REMAP2_CONFIG2_MASKFULL 0x000000FFFFFFF1FFULL
+#define ROGUE_CR_MIPS_ADDR_REMAP2_CONFIG2_ADDR_OUT_SHIFT 12U
+#define ROGUE_CR_MIPS_ADDR_REMAP2_CONFIG2_ADDR_OUT_CLRMSK 0xFFFFFF0000000FFFULL
+#define ROGUE_CR_MIPS_ADDR_REMAP2_CONFIG2_OS_ID_SHIFT 6U
+#define ROGUE_CR_MIPS_ADDR_REMAP2_CONFIG2_OS_ID_CLRMSK 0xFFFFFFFFFFFFFE3FULL
+#define ROGUE_CR_MIPS_ADDR_REMAP2_CONFIG2_TRUSTED_SHIFT 5U
+#define ROGUE_CR_MIPS_ADDR_REMAP2_CONFIG2_TRUSTED_CLRMSK 0xFFFFFFFFFFFFFFDFULL
+#define ROGUE_CR_MIPS_ADDR_REMAP2_CONFIG2_TRUSTED_EN 0x0000000000000020ULL
+#define ROGUE_CR_MIPS_ADDR_REMAP2_CONFIG2_REGION_SIZE_POW2_SHIFT 0U
+#define ROGUE_CR_MIPS_ADDR_REMAP2_CONFIG2_REGION_SIZE_POW2_CLRMSK 0xFFFFFFFFFFFFFFE0ULL
+
+/* Register ROGUE_CR_MIPS_ADDR_REMAP3_CONFIG1 */
+#define ROGUE_CR_MIPS_ADDR_REMAP3_CONFIG1 0x0838U
+#define ROGUE_CR_MIPS_ADDR_REMAP3_CONFIG1_MASKFULL 0x00000000FFFFF001ULL
+#define ROGUE_CR_MIPS_ADDR_REMAP3_CONFIG1_BASE_ADDR_IN_SHIFT 12U
+#define ROGUE_CR_MIPS_ADDR_REMAP3_CONFIG1_BASE_ADDR_IN_CLRMSK 0xFFFFFFFF00000FFFULL
+#define ROGUE_CR_MIPS_ADDR_REMAP3_CONFIG1_MODE_ENABLE_SHIFT 0U
+#define ROGUE_CR_MIPS_ADDR_REMAP3_CONFIG1_MODE_ENABLE_CLRMSK 0xFFFFFFFFFFFFFFFEULL
+#define ROGUE_CR_MIPS_ADDR_REMAP3_CONFIG1_MODE_ENABLE_EN 0x0000000000000001ULL
+
+/* Register ROGUE_CR_MIPS_ADDR_REMAP3_CONFIG2 */
+#define ROGUE_CR_MIPS_ADDR_REMAP3_CONFIG2 0x0840U
+#define ROGUE_CR_MIPS_ADDR_REMAP3_CONFIG2_MASKFULL 0x000000FFFFFFF1FFULL
+#define ROGUE_CR_MIPS_ADDR_REMAP3_CONFIG2_ADDR_OUT_SHIFT 12U
+#define ROGUE_CR_MIPS_ADDR_REMAP3_CONFIG2_ADDR_OUT_CLRMSK 0xFFFFFF0000000FFFULL
+#define ROGUE_CR_MIPS_ADDR_REMAP3_CONFIG2_OS_ID_SHIFT 6U
+#define ROGUE_CR_MIPS_ADDR_REMAP3_CONFIG2_OS_ID_CLRMSK 0xFFFFFFFFFFFFFE3FULL
+#define ROGUE_CR_MIPS_ADDR_REMAP3_CONFIG2_TRUSTED_SHIFT 5U
+#define ROGUE_CR_MIPS_ADDR_REMAP3_CONFIG2_TRUSTED_CLRMSK 0xFFFFFFFFFFFFFFDFULL
+#define ROGUE_CR_MIPS_ADDR_REMAP3_CONFIG2_TRUSTED_EN 0x0000000000000020ULL
+#define ROGUE_CR_MIPS_ADDR_REMAP3_CONFIG2_REGION_SIZE_POW2_SHIFT 0U
+#define ROGUE_CR_MIPS_ADDR_REMAP3_CONFIG2_REGION_SIZE_POW2_CLRMSK 0xFFFFFFFFFFFFFFE0ULL
+
+/* Register ROGUE_CR_MIPS_ADDR_REMAP4_CONFIG1 */
+#define ROGUE_CR_MIPS_ADDR_REMAP4_CONFIG1 0x0848U
+#define ROGUE_CR_MIPS_ADDR_REMAP4_CONFIG1_MASKFULL 0x00000000FFFFF001ULL
+#define ROGUE_CR_MIPS_ADDR_REMAP4_CONFIG1_BASE_ADDR_IN_SHIFT 12U
+#define ROGUE_CR_MIPS_ADDR_REMAP4_CONFIG1_BASE_ADDR_IN_CLRMSK 0xFFFFFFFF00000FFFULL
+#define ROGUE_CR_MIPS_ADDR_REMAP4_CONFIG1_MODE_ENABLE_SHIFT 0U
+#define ROGUE_CR_MIPS_ADDR_REMAP4_CONFIG1_MODE_ENABLE_CLRMSK 0xFFFFFFFFFFFFFFFEULL
+#define ROGUE_CR_MIPS_ADDR_REMAP4_CONFIG1_MODE_ENABLE_EN 0x0000000000000001ULL
+
+/* Register ROGUE_CR_MIPS_ADDR_REMAP4_CONFIG2 */
+#define ROGUE_CR_MIPS_ADDR_REMAP4_CONFIG2 0x0850U
+#define ROGUE_CR_MIPS_ADDR_REMAP4_CONFIG2_MASKFULL 0x000000FFFFFFF1FFULL
+#define ROGUE_CR_MIPS_ADDR_REMAP4_CONFIG2_ADDR_OUT_SHIFT 12U
+#define ROGUE_CR_MIPS_ADDR_REMAP4_CONFIG2_ADDR_OUT_CLRMSK 0xFFFFFF0000000FFFULL
+#define ROGUE_CR_MIPS_ADDR_REMAP4_CONFIG2_OS_ID_SHIFT 6U
+#define ROGUE_CR_MIPS_ADDR_REMAP4_CONFIG2_OS_ID_CLRMSK 0xFFFFFFFFFFFFFE3FULL
+#define ROGUE_CR_MIPS_ADDR_REMAP4_CONFIG2_TRUSTED_SHIFT 5U
+#define ROGUE_CR_MIPS_ADDR_REMAP4_CONFIG2_TRUSTED_CLRMSK 0xFFFFFFFFFFFFFFDFULL
+#define ROGUE_CR_MIPS_ADDR_REMAP4_CONFIG2_TRUSTED_EN 0x0000000000000020ULL
+#define ROGUE_CR_MIPS_ADDR_REMAP4_CONFIG2_REGION_SIZE_POW2_SHIFT 0U
+#define ROGUE_CR_MIPS_ADDR_REMAP4_CONFIG2_REGION_SIZE_POW2_CLRMSK 0xFFFFFFFFFFFFFFE0ULL
+
+/* Register ROGUE_CR_MIPS_ADDR_REMAP5_CONFIG1 */
+#define ROGUE_CR_MIPS_ADDR_REMAP5_CONFIG1 0x0858U
+#define ROGUE_CR_MIPS_ADDR_REMAP5_CONFIG1_MASKFULL 0x00000000FFFFF001ULL
+#define ROGUE_CR_MIPS_ADDR_REMAP5_CONFIG1_BASE_ADDR_IN_SHIFT 12U
+#define ROGUE_CR_MIPS_ADDR_REMAP5_CONFIG1_BASE_ADDR_IN_CLRMSK 0xFFFFFFFF00000FFFULL
+#define ROGUE_CR_MIPS_ADDR_REMAP5_CONFIG1_MODE_ENABLE_SHIFT 0U
+#define ROGUE_CR_MIPS_ADDR_REMAP5_CONFIG1_MODE_ENABLE_CLRMSK 0xFFFFFFFFFFFFFFFEULL
+#define ROGUE_CR_MIPS_ADDR_REMAP5_CONFIG1_MODE_ENABLE_EN 0x0000000000000001ULL
+
+/* Register ROGUE_CR_MIPS_ADDR_REMAP5_CONFIG2 */
+#define ROGUE_CR_MIPS_ADDR_REMAP5_CONFIG2 0x0860U
+#define ROGUE_CR_MIPS_ADDR_REMAP5_CONFIG2_MASKFULL 0x000000FFFFFFF1FFULL
+#define ROGUE_CR_MIPS_ADDR_REMAP5_CONFIG2_ADDR_OUT_SHIFT 12U
+#define ROGUE_CR_MIPS_ADDR_REMAP5_CONFIG2_ADDR_OUT_CLRMSK 0xFFFFFF0000000FFFULL
+#define ROGUE_CR_MIPS_ADDR_REMAP5_CONFIG2_OS_ID_SHIFT 6U
+#define ROGUE_CR_MIPS_ADDR_REMAP5_CONFIG2_OS_ID_CLRMSK 0xFFFFFFFFFFFFFE3FULL
+#define ROGUE_CR_MIPS_ADDR_REMAP5_CONFIG2_TRUSTED_SHIFT 5U
+#define ROGUE_CR_MIPS_ADDR_REMAP5_CONFIG2_TRUSTED_CLRMSK 0xFFFFFFFFFFFFFFDFULL
+#define ROGUE_CR_MIPS_ADDR_REMAP5_CONFIG2_TRUSTED_EN 0x0000000000000020ULL
+#define ROGUE_CR_MIPS_ADDR_REMAP5_CONFIG2_REGION_SIZE_POW2_SHIFT 0U
+#define ROGUE_CR_MIPS_ADDR_REMAP5_CONFIG2_REGION_SIZE_POW2_CLRMSK 0xFFFFFFFFFFFFFFE0ULL
+
+/* Register ROGUE_CR_MIPS_ADDR_REMAP_UNMAPPED_STATUS */
+#define ROGUE_CR_MIPS_ADDR_REMAP_UNMAPPED_STATUS 0x0868U
+#define ROGUE_CR_MIPS_ADDR_REMAP_UNMAPPED_STATUS_MASKFULL 0x00000001FFFFFFFFULL
+#define ROGUE_CR_MIPS_ADDR_REMAP_UNMAPPED_STATUS_EVENT_SHIFT 32U
+#define ROGUE_CR_MIPS_ADDR_REMAP_UNMAPPED_STATUS_EVENT_CLRMSK 0xFFFFFFFEFFFFFFFFULL
+#define ROGUE_CR_MIPS_ADDR_REMAP_UNMAPPED_STATUS_EVENT_EN 0x0000000100000000ULL
+#define ROGUE_CR_MIPS_ADDR_REMAP_UNMAPPED_STATUS_ADDRESS_SHIFT 0U
+#define ROGUE_CR_MIPS_ADDR_REMAP_UNMAPPED_STATUS_ADDRESS_CLRMSK 0xFFFFFFFF00000000ULL
+
+/* Register ROGUE_CR_MIPS_ADDR_REMAP_UNMAPPED_CLEAR */
+#define ROGUE_CR_MIPS_ADDR_REMAP_UNMAPPED_CLEAR 0x0870U
+#define ROGUE_CR_MIPS_ADDR_REMAP_UNMAPPED_CLEAR_MASKFULL 0x0000000000000001ULL
+#define ROGUE_CR_MIPS_ADDR_REMAP_UNMAPPED_CLEAR_EVENT_SHIFT 0U
+#define ROGUE_CR_MIPS_ADDR_REMAP_UNMAPPED_CLEAR_EVENT_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_MIPS_ADDR_REMAP_UNMAPPED_CLEAR_EVENT_EN 0x00000001U
+
+/* Register ROGUE_CR_MIPS_ADDR_REMAP_RANGE_CONFIG */
+#define ROGUE_CR_MIPS_ADDR_REMAP_RANGE_CONFIG 0x0878U
+#define ROGUE_CR_MIPS_ADDR_REMAP_RANGE_CONFIG_MASKFULL 0xFFFFFFF7FFFFFFBFULL
+#define ROGUE_CR_MIPS_ADDR_REMAP_RANGE_CONFIG_ADDR_OUT_SHIFT 36U
+#define ROGUE_CR_MIPS_ADDR_REMAP_RANGE_CONFIG_ADDR_OUT_CLRMSK 0x0000000FFFFFFFFFULL
+#define ROGUE_CR_MIPS_ADDR_REMAP_RANGE_CONFIG_OS_ID_SHIFT 32U
+#define ROGUE_CR_MIPS_ADDR_REMAP_RANGE_CONFIG_OS_ID_CLRMSK 0xFFFFFFF8FFFFFFFFULL
+#define ROGUE_CR_MIPS_ADDR_REMAP_RANGE_CONFIG_BASE_ADDR_IN_SHIFT 12U
+#define ROGUE_CR_MIPS_ADDR_REMAP_RANGE_CONFIG_BASE_ADDR_IN_CLRMSK 0xFFFFFFFF00000FFFULL
+#define ROGUE_CR_MIPS_ADDR_REMAP_RANGE_CONFIG_TRUSTED_SHIFT 11U
+#define ROGUE_CR_MIPS_ADDR_REMAP_RANGE_CONFIG_TRUSTED_CLRMSK 0xFFFFFFFFFFFFF7FFULL
+#define ROGUE_CR_MIPS_ADDR_REMAP_RANGE_CONFIG_TRUSTED_EN 0x0000000000000800ULL
+#define ROGUE_CR_MIPS_ADDR_REMAP_RANGE_CONFIG_REGION_SIZE_SHIFT 7U
+#define ROGUE_CR_MIPS_ADDR_REMAP_RANGE_CONFIG_REGION_SIZE_CLRMSK 0xFFFFFFFFFFFFF87FULL
+#define ROGUE_CR_MIPS_ADDR_REMAP_RANGE_CONFIG_REGION_SIZE_4KB 0x0000000000000000ULL
+#define ROGUE_CR_MIPS_ADDR_REMAP_RANGE_CONFIG_REGION_SIZE_16KB 0x0000000000000080ULL
+#define ROGUE_CR_MIPS_ADDR_REMAP_RANGE_CONFIG_REGION_SIZE_64KB 0x0000000000000100ULL
+#define ROGUE_CR_MIPS_ADDR_REMAP_RANGE_CONFIG_REGION_SIZE_256KB 0x0000000000000180ULL
+#define ROGUE_CR_MIPS_ADDR_REMAP_RANGE_CONFIG_REGION_SIZE_1MB 0x0000000000000200ULL
+#define ROGUE_CR_MIPS_ADDR_REMAP_RANGE_CONFIG_REGION_SIZE_4MB 0x0000000000000280ULL
+#define ROGUE_CR_MIPS_ADDR_REMAP_RANGE_CONFIG_REGION_SIZE_16MB 0x0000000000000300ULL
+#define ROGUE_CR_MIPS_ADDR_REMAP_RANGE_CONFIG_REGION_SIZE_64MB 0x0000000000000380ULL
+#define ROGUE_CR_MIPS_ADDR_REMAP_RANGE_CONFIG_REGION_SIZE_256MB 0x0000000000000400ULL
+#define ROGUE_CR_MIPS_ADDR_REMAP_RANGE_CONFIG_ENTRY_SHIFT 1U
+#define ROGUE_CR_MIPS_ADDR_REMAP_RANGE_CONFIG_ENTRY_CLRMSK 0xFFFFFFFFFFFFFFC1ULL
+#define ROGUE_CR_MIPS_ADDR_REMAP_RANGE_CONFIG_MODE_ENABLE_SHIFT 0U
+#define ROGUE_CR_MIPS_ADDR_REMAP_RANGE_CONFIG_MODE_ENABLE_CLRMSK 0xFFFFFFFFFFFFFFFEULL
+#define ROGUE_CR_MIPS_ADDR_REMAP_RANGE_CONFIG_MODE_ENABLE_EN 0x0000000000000001ULL
+
+/* Register ROGUE_CR_MIPS_ADDR_REMAP_RANGE_READ */
+#define ROGUE_CR_MIPS_ADDR_REMAP_RANGE_READ 0x0880U
+#define ROGUE_CR_MIPS_ADDR_REMAP_RANGE_READ_MASKFULL 0x000000000000003FULL
+#define ROGUE_CR_MIPS_ADDR_REMAP_RANGE_READ_ENTRY_SHIFT 1U
+#define ROGUE_CR_MIPS_ADDR_REMAP_RANGE_READ_ENTRY_CLRMSK 0xFFFFFFC1U
+#define ROGUE_CR_MIPS_ADDR_REMAP_RANGE_READ_REQUEST_SHIFT 0U
+#define ROGUE_CR_MIPS_ADDR_REMAP_RANGE_READ_REQUEST_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_MIPS_ADDR_REMAP_RANGE_READ_REQUEST_EN 0x00000001U
+
+/* Register ROGUE_CR_MIPS_ADDR_REMAP_RANGE_DATA */
+#define ROGUE_CR_MIPS_ADDR_REMAP_RANGE_DATA 0x0888U
+#define ROGUE_CR_MIPS_ADDR_REMAP_RANGE_DATA_MASKFULL 0xFFFFFFF7FFFFFF81ULL
+#define ROGUE_CR_MIPS_ADDR_REMAP_RANGE_DATA_ADDR_OUT_SHIFT 36U
+#define ROGUE_CR_MIPS_ADDR_REMAP_RANGE_DATA_ADDR_OUT_CLRMSK 0x0000000FFFFFFFFFULL
+#define ROGUE_CR_MIPS_ADDR_REMAP_RANGE_DATA_OS_ID_SHIFT 32U
+#define ROGUE_CR_MIPS_ADDR_REMAP_RANGE_DATA_OS_ID_CLRMSK 0xFFFFFFF8FFFFFFFFULL
+#define ROGUE_CR_MIPS_ADDR_REMAP_RANGE_DATA_BASE_ADDR_IN_SHIFT 12U
+#define ROGUE_CR_MIPS_ADDR_REMAP_RANGE_DATA_BASE_ADDR_IN_CLRMSK 0xFFFFFFFF00000FFFULL
+#define ROGUE_CR_MIPS_ADDR_REMAP_RANGE_DATA_TRUSTED_SHIFT 11U
+#define ROGUE_CR_MIPS_ADDR_REMAP_RANGE_DATA_TRUSTED_CLRMSK 0xFFFFFFFFFFFFF7FFULL
+#define ROGUE_CR_MIPS_ADDR_REMAP_RANGE_DATA_TRUSTED_EN 0x0000000000000800ULL
+#define ROGUE_CR_MIPS_ADDR_REMAP_RANGE_DATA_REGION_SIZE_SHIFT 7U
+#define ROGUE_CR_MIPS_ADDR_REMAP_RANGE_DATA_REGION_SIZE_CLRMSK 0xFFFFFFFFFFFFF87FULL
+#define ROGUE_CR_MIPS_ADDR_REMAP_RANGE_DATA_MODE_ENABLE_SHIFT 0U
+#define ROGUE_CR_MIPS_ADDR_REMAP_RANGE_DATA_MODE_ENABLE_CLRMSK 0xFFFFFFFFFFFFFFFEULL
+#define ROGUE_CR_MIPS_ADDR_REMAP_RANGE_DATA_MODE_ENABLE_EN 0x0000000000000001ULL
+
+/* Register ROGUE_CR_MIPS_WRAPPER_IRQ_ENABLE */
+#define ROGUE_CR_MIPS_WRAPPER_IRQ_ENABLE 0x08A0U
+#define ROGUE_CR_MIPS_WRAPPER_IRQ_ENABLE_MASKFULL 0x0000000000000001ULL
+#define ROGUE_CR_MIPS_WRAPPER_IRQ_ENABLE_EVENT_SHIFT 0U
+#define ROGUE_CR_MIPS_WRAPPER_IRQ_ENABLE_EVENT_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_MIPS_WRAPPER_IRQ_ENABLE_EVENT_EN 0x00000001U
+
+/* Register ROGUE_CR_MIPS_WRAPPER_IRQ_STATUS */
+#define ROGUE_CR_MIPS_WRAPPER_IRQ_STATUS 0x08A8U
+#define ROGUE_CR_MIPS_WRAPPER_IRQ_STATUS_MASKFULL 0x0000000000000001ULL
+#define ROGUE_CR_MIPS_WRAPPER_IRQ_STATUS_EVENT_SHIFT 0U
+#define ROGUE_CR_MIPS_WRAPPER_IRQ_STATUS_EVENT_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_MIPS_WRAPPER_IRQ_STATUS_EVENT_EN 0x00000001U
+
+/* Register ROGUE_CR_MIPS_WRAPPER_IRQ_CLEAR */
+#define ROGUE_CR_MIPS_WRAPPER_IRQ_CLEAR 0x08B0U
+#define ROGUE_CR_MIPS_WRAPPER_IRQ_CLEAR_MASKFULL 0x0000000000000001ULL
+#define ROGUE_CR_MIPS_WRAPPER_IRQ_CLEAR_EVENT_SHIFT 0U
+#define ROGUE_CR_MIPS_WRAPPER_IRQ_CLEAR_EVENT_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_MIPS_WRAPPER_IRQ_CLEAR_EVENT_EN 0x00000001U
+
+/* Register ROGUE_CR_MIPS_WRAPPER_NMI_ENABLE */
+#define ROGUE_CR_MIPS_WRAPPER_NMI_ENABLE 0x08B8U
+#define ROGUE_CR_MIPS_WRAPPER_NMI_ENABLE_MASKFULL 0x0000000000000001ULL
+#define ROGUE_CR_MIPS_WRAPPER_NMI_ENABLE_EVENT_SHIFT 0U
+#define ROGUE_CR_MIPS_WRAPPER_NMI_ENABLE_EVENT_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_MIPS_WRAPPER_NMI_ENABLE_EVENT_EN 0x00000001U
+
+/* Register ROGUE_CR_MIPS_WRAPPER_NMI_EVENT */
+#define ROGUE_CR_MIPS_WRAPPER_NMI_EVENT 0x08C0U
+#define ROGUE_CR_MIPS_WRAPPER_NMI_EVENT_MASKFULL 0x0000000000000001ULL
+#define ROGUE_CR_MIPS_WRAPPER_NMI_EVENT_TRIGGER_SHIFT 0U
+#define ROGUE_CR_MIPS_WRAPPER_NMI_EVENT_TRIGGER_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_MIPS_WRAPPER_NMI_EVENT_TRIGGER_EN 0x00000001U
+
+/* Register ROGUE_CR_MIPS_DEBUG_CONFIG */
+#define ROGUE_CR_MIPS_DEBUG_CONFIG 0x08C8U
+#define ROGUE_CR_MIPS_DEBUG_CONFIG_MASKFULL 0x0000000000000001ULL
+#define ROGUE_CR_MIPS_DEBUG_CONFIG_DISABLE_PROBE_DEBUG_SHIFT 0U
+#define ROGUE_CR_MIPS_DEBUG_CONFIG_DISABLE_PROBE_DEBUG_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_MIPS_DEBUG_CONFIG_DISABLE_PROBE_DEBUG_EN 0x00000001U
+
+/* Register ROGUE_CR_MIPS_EXCEPTION_STATUS */
+#define ROGUE_CR_MIPS_EXCEPTION_STATUS 0x08D0U
+#define ROGUE_CR_MIPS_EXCEPTION_STATUS_MASKFULL 0x000000000000003FULL
+#define ROGUE_CR_MIPS_EXCEPTION_STATUS_SI_SLEEP_SHIFT 5U
+#define ROGUE_CR_MIPS_EXCEPTION_STATUS_SI_SLEEP_CLRMSK 0xFFFFFFDFU
+#define ROGUE_CR_MIPS_EXCEPTION_STATUS_SI_SLEEP_EN 0x00000020U
+#define ROGUE_CR_MIPS_EXCEPTION_STATUS_SI_NMI_TAKEN_SHIFT 4U
+#define ROGUE_CR_MIPS_EXCEPTION_STATUS_SI_NMI_TAKEN_CLRMSK 0xFFFFFFEFU
+#define ROGUE_CR_MIPS_EXCEPTION_STATUS_SI_NMI_TAKEN_EN 0x00000010U
+#define ROGUE_CR_MIPS_EXCEPTION_STATUS_SI_NEST_EXL_SHIFT 3U
+#define ROGUE_CR_MIPS_EXCEPTION_STATUS_SI_NEST_EXL_CLRMSK 0xFFFFFFF7U
+#define ROGUE_CR_MIPS_EXCEPTION_STATUS_SI_NEST_EXL_EN 0x00000008U
+#define ROGUE_CR_MIPS_EXCEPTION_STATUS_SI_NEST_ERL_SHIFT 2U
+#define ROGUE_CR_MIPS_EXCEPTION_STATUS_SI_NEST_ERL_CLRMSK 0xFFFFFFFBU
+#define ROGUE_CR_MIPS_EXCEPTION_STATUS_SI_NEST_ERL_EN 0x00000004U
+#define ROGUE_CR_MIPS_EXCEPTION_STATUS_SI_EXL_SHIFT 1U
+#define ROGUE_CR_MIPS_EXCEPTION_STATUS_SI_EXL_CLRMSK 0xFFFFFFFDU
+#define ROGUE_CR_MIPS_EXCEPTION_STATUS_SI_EXL_EN 0x00000002U
+#define ROGUE_CR_MIPS_EXCEPTION_STATUS_SI_ERL_SHIFT 0U
+#define ROGUE_CR_MIPS_EXCEPTION_STATUS_SI_ERL_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_MIPS_EXCEPTION_STATUS_SI_ERL_EN 0x00000001U
+
+/* Register ROGUE_CR_MIPS_WRAPPER_STATUS */
+#define ROGUE_CR_MIPS_WRAPPER_STATUS 0x08E8U
+#define ROGUE_CR_MIPS_WRAPPER_STATUS_MASKFULL 0x00000000000000FFULL
+#define ROGUE_CR_MIPS_WRAPPER_STATUS_OUTSTANDING_REQUESTS_SHIFT 0U
+#define ROGUE_CR_MIPS_WRAPPER_STATUS_OUTSTANDING_REQUESTS_CLRMSK 0xFFFFFF00U
+
+/* Register ROGUE_CR_XPU_BROADCAST */
+#define ROGUE_CR_XPU_BROADCAST 0x0890U
+#define ROGUE_CR_XPU_BROADCAST_MASKFULL 0x00000000000001FFULL
+#define ROGUE_CR_XPU_BROADCAST_MASK_SHIFT 0U
+#define ROGUE_CR_XPU_BROADCAST_MASK_CLRMSK 0xFFFFFE00U
+
+/* Register ROGUE_CR_META_SP_MSLVDATAX */
+#define ROGUE_CR_META_SP_MSLVDATAX 0x0A00U
+#define ROGUE_CR_META_SP_MSLVDATAX_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_META_SP_MSLVDATAX_MSLVDATAX_SHIFT 0U
+#define ROGUE_CR_META_SP_MSLVDATAX_MSLVDATAX_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_META_SP_MSLVDATAT */
+#define ROGUE_CR_META_SP_MSLVDATAT 0x0A08U
+#define ROGUE_CR_META_SP_MSLVDATAT_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_META_SP_MSLVDATAT_MSLVDATAT_SHIFT 0U
+#define ROGUE_CR_META_SP_MSLVDATAT_MSLVDATAT_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_META_SP_MSLVCTRL0 */
+#define ROGUE_CR_META_SP_MSLVCTRL0 0x0A10U
+#define ROGUE_CR_META_SP_MSLVCTRL0_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_META_SP_MSLVCTRL0_ADDR_SHIFT 2U
+#define ROGUE_CR_META_SP_MSLVCTRL0_ADDR_CLRMSK 0x00000003U
+#define ROGUE_CR_META_SP_MSLVCTRL0_AUTOINCR_SHIFT 1U
+#define ROGUE_CR_META_SP_MSLVCTRL0_AUTOINCR_CLRMSK 0xFFFFFFFDU
+#define ROGUE_CR_META_SP_MSLVCTRL0_AUTOINCR_EN 0x00000002U
+#define ROGUE_CR_META_SP_MSLVCTRL0_RD_SHIFT 0U
+#define ROGUE_CR_META_SP_MSLVCTRL0_RD_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_META_SP_MSLVCTRL0_RD_EN 0x00000001U
+
+/* Register ROGUE_CR_META_SP_MSLVCTRL1 */
+#define ROGUE_CR_META_SP_MSLVCTRL1 0x0A18U
+#define ROGUE_CR_META_SP_MSLVCTRL1_MASKFULL 0x00000000F7F4003FULL
+#define ROGUE_CR_META_SP_MSLVCTRL1_DEFERRTHREAD_SHIFT 30U
+#define ROGUE_CR_META_SP_MSLVCTRL1_DEFERRTHREAD_CLRMSK 0x3FFFFFFFU
+#define ROGUE_CR_META_SP_MSLVCTRL1_LOCK2_INTERLOCK_SHIFT 29U
+#define ROGUE_CR_META_SP_MSLVCTRL1_LOCK2_INTERLOCK_CLRMSK 0xDFFFFFFFU
+#define ROGUE_CR_META_SP_MSLVCTRL1_LOCK2_INTERLOCK_EN 0x20000000U
+#define ROGUE_CR_META_SP_MSLVCTRL1_ATOMIC_INTERLOCK_SHIFT 28U
+#define ROGUE_CR_META_SP_MSLVCTRL1_ATOMIC_INTERLOCK_CLRMSK 0xEFFFFFFFU
+#define ROGUE_CR_META_SP_MSLVCTRL1_ATOMIC_INTERLOCK_EN 0x10000000U
+#define ROGUE_CR_META_SP_MSLVCTRL1_GBLPORT_IDLE_SHIFT 26U
+#define ROGUE_CR_META_SP_MSLVCTRL1_GBLPORT_IDLE_CLRMSK 0xFBFFFFFFU
+#define ROGUE_CR_META_SP_MSLVCTRL1_GBLPORT_IDLE_EN 0x04000000U
+#define ROGUE_CR_META_SP_MSLVCTRL1_COREMEM_IDLE_SHIFT 25U
+#define ROGUE_CR_META_SP_MSLVCTRL1_COREMEM_IDLE_CLRMSK 0xFDFFFFFFU
+#define ROGUE_CR_META_SP_MSLVCTRL1_COREMEM_IDLE_EN 0x02000000U
+#define ROGUE_CR_META_SP_MSLVCTRL1_READY_SHIFT 24U
+#define ROGUE_CR_META_SP_MSLVCTRL1_READY_CLRMSK 0xFEFFFFFFU
+#define ROGUE_CR_META_SP_MSLVCTRL1_READY_EN 0x01000000U
+#define ROGUE_CR_META_SP_MSLVCTRL1_DEFERRID_SHIFT 21U
+#define ROGUE_CR_META_SP_MSLVCTRL1_DEFERRID_CLRMSK 0xFF1FFFFFU
+#define ROGUE_CR_META_SP_MSLVCTRL1_DEFERR_SHIFT 20U
+#define ROGUE_CR_META_SP_MSLVCTRL1_DEFERR_CLRMSK 0xFFEFFFFFU
+#define ROGUE_CR_META_SP_MSLVCTRL1_DEFERR_EN 0x00100000U
+#define ROGUE_CR_META_SP_MSLVCTRL1_WR_ACTIVE_SHIFT 18U
+#define ROGUE_CR_META_SP_MSLVCTRL1_WR_ACTIVE_CLRMSK 0xFFFBFFFFU
+#define ROGUE_CR_META_SP_MSLVCTRL1_WR_ACTIVE_EN 0x00040000U
+#define ROGUE_CR_META_SP_MSLVCTRL1_THREAD_SHIFT 4U
+#define ROGUE_CR_META_SP_MSLVCTRL1_THREAD_CLRMSK 0xFFFFFFCFU
+#define ROGUE_CR_META_SP_MSLVCTRL1_TRANS_SIZE_SHIFT 2U
+#define ROGUE_CR_META_SP_MSLVCTRL1_TRANS_SIZE_CLRMSK 0xFFFFFFF3U
+#define ROGUE_CR_META_SP_MSLVCTRL1_BYTE_ROUND_SHIFT 0U
+#define ROGUE_CR_META_SP_MSLVCTRL1_BYTE_ROUND_CLRMSK 0xFFFFFFFCU
+
+/* Register ROGUE_CR_META_SP_MSLVHANDSHKE */
+#define ROGUE_CR_META_SP_MSLVHANDSHKE 0x0A50U
+#define ROGUE_CR_META_SP_MSLVHANDSHKE_MASKFULL 0x000000000000000FULL
+#define ROGUE_CR_META_SP_MSLVHANDSHKE_INPUT_SHIFT 2U
+#define ROGUE_CR_META_SP_MSLVHANDSHKE_INPUT_CLRMSK 0xFFFFFFF3U
+#define ROGUE_CR_META_SP_MSLVHANDSHKE_OUTPUT_SHIFT 0U
+#define ROGUE_CR_META_SP_MSLVHANDSHKE_OUTPUT_CLRMSK 0xFFFFFFFCU
+
+/* Register ROGUE_CR_META_SP_MSLVT0KICK */
+#define ROGUE_CR_META_SP_MSLVT0KICK 0x0A80U
+#define ROGUE_CR_META_SP_MSLVT0KICK_MASKFULL 0x000000000000FFFFULL
+#define ROGUE_CR_META_SP_MSLVT0KICK_MSLVT0KICK_SHIFT 0U
+#define ROGUE_CR_META_SP_MSLVT0KICK_MSLVT0KICK_CLRMSK 0xFFFF0000U
+
+/* Register ROGUE_CR_META_SP_MSLVT0KICKI */
+#define ROGUE_CR_META_SP_MSLVT0KICKI 0x0A88U
+#define ROGUE_CR_META_SP_MSLVT0KICKI_MASKFULL 0x000000000000FFFFULL
+#define ROGUE_CR_META_SP_MSLVT0KICKI_MSLVT0KICKI_SHIFT 0U
+#define ROGUE_CR_META_SP_MSLVT0KICKI_MSLVT0KICKI_CLRMSK 0xFFFF0000U
+
+/* Register ROGUE_CR_META_SP_MSLVT1KICK */
+#define ROGUE_CR_META_SP_MSLVT1KICK 0x0A90U
+#define ROGUE_CR_META_SP_MSLVT1KICK_MASKFULL 0x000000000000FFFFULL
+#define ROGUE_CR_META_SP_MSLVT1KICK_MSLVT1KICK_SHIFT 0U
+#define ROGUE_CR_META_SP_MSLVT1KICK_MSLVT1KICK_CLRMSK 0xFFFF0000U
+
+/* Register ROGUE_CR_META_SP_MSLVT1KICKI */
+#define ROGUE_CR_META_SP_MSLVT1KICKI 0x0A98U
+#define ROGUE_CR_META_SP_MSLVT1KICKI_MASKFULL 0x000000000000FFFFULL
+#define ROGUE_CR_META_SP_MSLVT1KICKI_MSLVT1KICKI_SHIFT 0U
+#define ROGUE_CR_META_SP_MSLVT1KICKI_MSLVT1KICKI_CLRMSK 0xFFFF0000U
+
+/* Register ROGUE_CR_META_SP_MSLVT2KICK */
+#define ROGUE_CR_META_SP_MSLVT2KICK 0x0AA0U
+#define ROGUE_CR_META_SP_MSLVT2KICK_MASKFULL 0x000000000000FFFFULL
+#define ROGUE_CR_META_SP_MSLVT2KICK_MSLVT2KICK_SHIFT 0U
+#define ROGUE_CR_META_SP_MSLVT2KICK_MSLVT2KICK_CLRMSK 0xFFFF0000U
+
+/* Register ROGUE_CR_META_SP_MSLVT2KICKI */
+#define ROGUE_CR_META_SP_MSLVT2KICKI 0x0AA8U
+#define ROGUE_CR_META_SP_MSLVT2KICKI_MASKFULL 0x000000000000FFFFULL
+#define ROGUE_CR_META_SP_MSLVT2KICKI_MSLVT2KICKI_SHIFT 0U
+#define ROGUE_CR_META_SP_MSLVT2KICKI_MSLVT2KICKI_CLRMSK 0xFFFF0000U
+
+/* Register ROGUE_CR_META_SP_MSLVT3KICK */
+#define ROGUE_CR_META_SP_MSLVT3KICK 0x0AB0U
+#define ROGUE_CR_META_SP_MSLVT3KICK_MASKFULL 0x000000000000FFFFULL
+#define ROGUE_CR_META_SP_MSLVT3KICK_MSLVT3KICK_SHIFT 0U
+#define ROGUE_CR_META_SP_MSLVT3KICK_MSLVT3KICK_CLRMSK 0xFFFF0000U
+
+/* Register ROGUE_CR_META_SP_MSLVT3KICKI */
+#define ROGUE_CR_META_SP_MSLVT3KICKI 0x0AB8U
+#define ROGUE_CR_META_SP_MSLVT3KICKI_MASKFULL 0x000000000000FFFFULL
+#define ROGUE_CR_META_SP_MSLVT3KICKI_MSLVT3KICKI_SHIFT 0U
+#define ROGUE_CR_META_SP_MSLVT3KICKI_MSLVT3KICKI_CLRMSK 0xFFFF0000U
+
+/* Register ROGUE_CR_META_SP_MSLVRST */
+#define ROGUE_CR_META_SP_MSLVRST 0x0AC0U
+#define ROGUE_CR_META_SP_MSLVRST_MASKFULL 0x0000000000000001ULL
+#define ROGUE_CR_META_SP_MSLVRST_SOFTRESET_SHIFT 0U
+#define ROGUE_CR_META_SP_MSLVRST_SOFTRESET_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_META_SP_MSLVRST_SOFTRESET_EN 0x00000001U
+
+/* Register ROGUE_CR_META_SP_MSLVIRQSTATUS */
+#define ROGUE_CR_META_SP_MSLVIRQSTATUS 0x0AC8U
+#define ROGUE_CR_META_SP_MSLVIRQSTATUS_MASKFULL 0x000000000000000CULL
+#define ROGUE_CR_META_SP_MSLVIRQSTATUS_TRIGVECT3_SHIFT 3U
+#define ROGUE_CR_META_SP_MSLVIRQSTATUS_TRIGVECT3_CLRMSK 0xFFFFFFF7U
+#define ROGUE_CR_META_SP_MSLVIRQSTATUS_TRIGVECT3_EN 0x00000008U
+#define ROGUE_CR_META_SP_MSLVIRQSTATUS_TRIGVECT2_SHIFT 2U
+#define ROGUE_CR_META_SP_MSLVIRQSTATUS_TRIGVECT2_CLRMSK 0xFFFFFFFBU
+#define ROGUE_CR_META_SP_MSLVIRQSTATUS_TRIGVECT2_EN 0x00000004U
+
+/* Register ROGUE_CR_META_SP_MSLVIRQENABLE */
+#define ROGUE_CR_META_SP_MSLVIRQENABLE 0x0AD0U
+#define ROGUE_CR_META_SP_MSLVIRQENABLE_MASKFULL 0x000000000000000CULL
+#define ROGUE_CR_META_SP_MSLVIRQENABLE_EVENT1_SHIFT 3U
+#define ROGUE_CR_META_SP_MSLVIRQENABLE_EVENT1_CLRMSK 0xFFFFFFF7U
+#define ROGUE_CR_META_SP_MSLVIRQENABLE_EVENT1_EN 0x00000008U
+#define ROGUE_CR_META_SP_MSLVIRQENABLE_EVENT0_SHIFT 2U
+#define ROGUE_CR_META_SP_MSLVIRQENABLE_EVENT0_CLRMSK 0xFFFFFFFBU
+#define ROGUE_CR_META_SP_MSLVIRQENABLE_EVENT0_EN 0x00000004U
+
+/* Register ROGUE_CR_META_SP_MSLVIRQLEVEL */
+#define ROGUE_CR_META_SP_MSLVIRQLEVEL 0x0AD8U
+#define ROGUE_CR_META_SP_MSLVIRQLEVEL_MASKFULL 0x0000000000000001ULL
+#define ROGUE_CR_META_SP_MSLVIRQLEVEL_MODE_SHIFT 0U
+#define ROGUE_CR_META_SP_MSLVIRQLEVEL_MODE_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_META_SP_MSLVIRQLEVEL_MODE_EN 0x00000001U
+
+/* Register ROGUE_CR_MTS_SCHEDULE */
+#define ROGUE_CR_MTS_SCHEDULE 0x0B00U
+#define ROGUE_CR_MTS_SCHEDULE_MASKFULL 0x00000000000001FFULL
+#define ROGUE_CR_MTS_SCHEDULE_HOST_SHIFT 8U
+#define ROGUE_CR_MTS_SCHEDULE_HOST_CLRMSK 0xFFFFFEFFU
+#define ROGUE_CR_MTS_SCHEDULE_HOST_BG_TIMER 0x00000000U
+#define ROGUE_CR_MTS_SCHEDULE_HOST_HOST 0x00000100U
+#define ROGUE_CR_MTS_SCHEDULE_PRIORITY_SHIFT 6U
+#define ROGUE_CR_MTS_SCHEDULE_PRIORITY_CLRMSK 0xFFFFFF3FU
+#define ROGUE_CR_MTS_SCHEDULE_PRIORITY_PRT0 0x00000000U
+#define ROGUE_CR_MTS_SCHEDULE_PRIORITY_PRT1 0x00000040U
+#define ROGUE_CR_MTS_SCHEDULE_PRIORITY_PRT2 0x00000080U
+#define ROGUE_CR_MTS_SCHEDULE_PRIORITY_PRT3 0x000000C0U
+#define ROGUE_CR_MTS_SCHEDULE_CONTEXT_SHIFT 5U
+#define ROGUE_CR_MTS_SCHEDULE_CONTEXT_CLRMSK 0xFFFFFFDFU
+#define ROGUE_CR_MTS_SCHEDULE_CONTEXT_BGCTX 0x00000000U
+#define ROGUE_CR_MTS_SCHEDULE_CONTEXT_INTCTX 0x00000020U
+#define ROGUE_CR_MTS_SCHEDULE_TASK_SHIFT 4U
+#define ROGUE_CR_MTS_SCHEDULE_TASK_CLRMSK 0xFFFFFFEFU
+#define ROGUE_CR_MTS_SCHEDULE_TASK_NON_COUNTED 0x00000000U
+#define ROGUE_CR_MTS_SCHEDULE_TASK_COUNTED 0x00000010U
+#define ROGUE_CR_MTS_SCHEDULE_DM_SHIFT 0U
+#define ROGUE_CR_MTS_SCHEDULE_DM_CLRMSK 0xFFFFFFF0U
+#define ROGUE_CR_MTS_SCHEDULE_DM_DM0 0x00000000U
+#define ROGUE_CR_MTS_SCHEDULE_DM_DM1 0x00000001U
+#define ROGUE_CR_MTS_SCHEDULE_DM_DM2 0x00000002U
+#define ROGUE_CR_MTS_SCHEDULE_DM_DM3 0x00000003U
+#define ROGUE_CR_MTS_SCHEDULE_DM_DM4 0x00000004U
+#define ROGUE_CR_MTS_SCHEDULE_DM_DM5 0x00000005U
+#define ROGUE_CR_MTS_SCHEDULE_DM_DM6 0x00000006U
+#define ROGUE_CR_MTS_SCHEDULE_DM_DM7 0x00000007U
+#define ROGUE_CR_MTS_SCHEDULE_DM_DM_ALL 0x0000000FU
+
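+/*
+ * Illustrative sketch (hypothetical, not taken from the driver sources): a
+ * schedule command can be composed by OR-ing one value from each field group
+ * above, e.g. a kick of DM0 on the background context as a counted task:
+ *
+ *   u32 cmd = ROGUE_CR_MTS_SCHEDULE_HOST_HOST |
+ *             ROGUE_CR_MTS_SCHEDULE_CONTEXT_BGCTX |
+ *             ROGUE_CR_MTS_SCHEDULE_TASK_COUNTED |
+ *             ROGUE_CR_MTS_SCHEDULE_DM_DM0;
+ */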
+/* Register ROGUE_CR_MTS_SCHEDULE1 */
+#define ROGUE_CR_MTS_SCHEDULE1 0x10B00U
+#define ROGUE_CR_MTS_SCHEDULE1_MASKFULL 0x00000000000001FFULL
+#define ROGUE_CR_MTS_SCHEDULE1_HOST_SHIFT 8U
+#define ROGUE_CR_MTS_SCHEDULE1_HOST_CLRMSK 0xFFFFFEFFU
+#define ROGUE_CR_MTS_SCHEDULE1_HOST_BG_TIMER 0x00000000U
+#define ROGUE_CR_MTS_SCHEDULE1_HOST_HOST 0x00000100U
+#define ROGUE_CR_MTS_SCHEDULE1_PRIORITY_SHIFT 6U
+#define ROGUE_CR_MTS_SCHEDULE1_PRIORITY_CLRMSK 0xFFFFFF3FU
+#define ROGUE_CR_MTS_SCHEDULE1_PRIORITY_PRT0 0x00000000U
+#define ROGUE_CR_MTS_SCHEDULE1_PRIORITY_PRT1 0x00000040U
+#define ROGUE_CR_MTS_SCHEDULE1_PRIORITY_PRT2 0x00000080U
+#define ROGUE_CR_MTS_SCHEDULE1_PRIORITY_PRT3 0x000000C0U
+#define ROGUE_CR_MTS_SCHEDULE1_CONTEXT_SHIFT 5U
+#define ROGUE_CR_MTS_SCHEDULE1_CONTEXT_CLRMSK 0xFFFFFFDFU
+#define ROGUE_CR_MTS_SCHEDULE1_CONTEXT_BGCTX 0x00000000U
+#define ROGUE_CR_MTS_SCHEDULE1_CONTEXT_INTCTX 0x00000020U
+#define ROGUE_CR_MTS_SCHEDULE1_TASK_SHIFT 4U
+#define ROGUE_CR_MTS_SCHEDULE1_TASK_CLRMSK 0xFFFFFFEFU
+#define ROGUE_CR_MTS_SCHEDULE1_TASK_NON_COUNTED 0x00000000U
+#define ROGUE_CR_MTS_SCHEDULE1_TASK_COUNTED 0x00000010U
+#define ROGUE_CR_MTS_SCHEDULE1_DM_SHIFT 0U
+#define ROGUE_CR_MTS_SCHEDULE1_DM_CLRMSK 0xFFFFFFF0U
+#define ROGUE_CR_MTS_SCHEDULE1_DM_DM0 0x00000000U
+#define ROGUE_CR_MTS_SCHEDULE1_DM_DM1 0x00000001U
+#define ROGUE_CR_MTS_SCHEDULE1_DM_DM2 0x00000002U
+#define ROGUE_CR_MTS_SCHEDULE1_DM_DM3 0x00000003U
+#define ROGUE_CR_MTS_SCHEDULE1_DM_DM4 0x00000004U
+#define ROGUE_CR_MTS_SCHEDULE1_DM_DM5 0x00000005U
+#define ROGUE_CR_MTS_SCHEDULE1_DM_DM6 0x00000006U
+#define ROGUE_CR_MTS_SCHEDULE1_DM_DM7 0x00000007U
+#define ROGUE_CR_MTS_SCHEDULE1_DM_DM_ALL 0x0000000FU
+
+/* Register ROGUE_CR_MTS_SCHEDULE2 */
+#define ROGUE_CR_MTS_SCHEDULE2 0x20B00U
+#define ROGUE_CR_MTS_SCHEDULE2_MASKFULL 0x00000000000001FFULL
+#define ROGUE_CR_MTS_SCHEDULE2_HOST_SHIFT 8U
+#define ROGUE_CR_MTS_SCHEDULE2_HOST_CLRMSK 0xFFFFFEFFU
+#define ROGUE_CR_MTS_SCHEDULE2_HOST_BG_TIMER 0x00000000U
+#define ROGUE_CR_MTS_SCHEDULE2_HOST_HOST 0x00000100U
+#define ROGUE_CR_MTS_SCHEDULE2_PRIORITY_SHIFT 6U
+#define ROGUE_CR_MTS_SCHEDULE2_PRIORITY_CLRMSK 0xFFFFFF3FU
+#define ROGUE_CR_MTS_SCHEDULE2_PRIORITY_PRT0 0x00000000U
+#define ROGUE_CR_MTS_SCHEDULE2_PRIORITY_PRT1 0x00000040U
+#define ROGUE_CR_MTS_SCHEDULE2_PRIORITY_PRT2 0x00000080U
+#define ROGUE_CR_MTS_SCHEDULE2_PRIORITY_PRT3 0x000000C0U
+#define ROGUE_CR_MTS_SCHEDULE2_CONTEXT_SHIFT 5U
+#define ROGUE_CR_MTS_SCHEDULE2_CONTEXT_CLRMSK 0xFFFFFFDFU
+#define ROGUE_CR_MTS_SCHEDULE2_CONTEXT_BGCTX 0x00000000U
+#define ROGUE_CR_MTS_SCHEDULE2_CONTEXT_INTCTX 0x00000020U
+#define ROGUE_CR_MTS_SCHEDULE2_TASK_SHIFT 4U
+#define ROGUE_CR_MTS_SCHEDULE2_TASK_CLRMSK 0xFFFFFFEFU
+#define ROGUE_CR_MTS_SCHEDULE2_TASK_NON_COUNTED 0x00000000U
+#define ROGUE_CR_MTS_SCHEDULE2_TASK_COUNTED 0x00000010U
+#define ROGUE_CR_MTS_SCHEDULE2_DM_SHIFT 0U
+#define ROGUE_CR_MTS_SCHEDULE2_DM_CLRMSK 0xFFFFFFF0U
+#define ROGUE_CR_MTS_SCHEDULE2_DM_DM0 0x00000000U
+#define ROGUE_CR_MTS_SCHEDULE2_DM_DM1 0x00000001U
+#define ROGUE_CR_MTS_SCHEDULE2_DM_DM2 0x00000002U
+#define ROGUE_CR_MTS_SCHEDULE2_DM_DM3 0x00000003U
+#define ROGUE_CR_MTS_SCHEDULE2_DM_DM4 0x00000004U
+#define ROGUE_CR_MTS_SCHEDULE2_DM_DM5 0x00000005U
+#define ROGUE_CR_MTS_SCHEDULE2_DM_DM6 0x00000006U
+#define ROGUE_CR_MTS_SCHEDULE2_DM_DM7 0x00000007U
+#define ROGUE_CR_MTS_SCHEDULE2_DM_DM_ALL 0x0000000FU
+
+/* Register ROGUE_CR_MTS_SCHEDULE3 */
+#define ROGUE_CR_MTS_SCHEDULE3 0x30B00U
+#define ROGUE_CR_MTS_SCHEDULE3_MASKFULL 0x00000000000001FFULL
+#define ROGUE_CR_MTS_SCHEDULE3_HOST_SHIFT 8U
+#define ROGUE_CR_MTS_SCHEDULE3_HOST_CLRMSK 0xFFFFFEFFU
+#define ROGUE_CR_MTS_SCHEDULE3_HOST_BG_TIMER 0x00000000U
+#define ROGUE_CR_MTS_SCHEDULE3_HOST_HOST 0x00000100U
+#define ROGUE_CR_MTS_SCHEDULE3_PRIORITY_SHIFT 6U
+#define ROGUE_CR_MTS_SCHEDULE3_PRIORITY_CLRMSK 0xFFFFFF3FU
+#define ROGUE_CR_MTS_SCHEDULE3_PRIORITY_PRT0 0x00000000U
+#define ROGUE_CR_MTS_SCHEDULE3_PRIORITY_PRT1 0x00000040U
+#define ROGUE_CR_MTS_SCHEDULE3_PRIORITY_PRT2 0x00000080U
+#define ROGUE_CR_MTS_SCHEDULE3_PRIORITY_PRT3 0x000000C0U
+#define ROGUE_CR_MTS_SCHEDULE3_CONTEXT_SHIFT 5U
+#define ROGUE_CR_MTS_SCHEDULE3_CONTEXT_CLRMSK 0xFFFFFFDFU
+#define ROGUE_CR_MTS_SCHEDULE3_CONTEXT_BGCTX 0x00000000U
+#define ROGUE_CR_MTS_SCHEDULE3_CONTEXT_INTCTX 0x00000020U
+#define ROGUE_CR_MTS_SCHEDULE3_TASK_SHIFT 4U
+#define ROGUE_CR_MTS_SCHEDULE3_TASK_CLRMSK 0xFFFFFFEFU
+#define ROGUE_CR_MTS_SCHEDULE3_TASK_NON_COUNTED 0x00000000U
+#define ROGUE_CR_MTS_SCHEDULE3_TASK_COUNTED 0x00000010U
+#define ROGUE_CR_MTS_SCHEDULE3_DM_SHIFT 0U
+#define ROGUE_CR_MTS_SCHEDULE3_DM_CLRMSK 0xFFFFFFF0U
+#define ROGUE_CR_MTS_SCHEDULE3_DM_DM0 0x00000000U
+#define ROGUE_CR_MTS_SCHEDULE3_DM_DM1 0x00000001U
+#define ROGUE_CR_MTS_SCHEDULE3_DM_DM2 0x00000002U
+#define ROGUE_CR_MTS_SCHEDULE3_DM_DM3 0x00000003U
+#define ROGUE_CR_MTS_SCHEDULE3_DM_DM4 0x00000004U
+#define ROGUE_CR_MTS_SCHEDULE3_DM_DM5 0x00000005U
+#define ROGUE_CR_MTS_SCHEDULE3_DM_DM6 0x00000006U
+#define ROGUE_CR_MTS_SCHEDULE3_DM_DM7 0x00000007U
+#define ROGUE_CR_MTS_SCHEDULE3_DM_DM_ALL 0x0000000FU
+
+/* Register ROGUE_CR_MTS_SCHEDULE4 */
+#define ROGUE_CR_MTS_SCHEDULE4 0x40B00U
+#define ROGUE_CR_MTS_SCHEDULE4_MASKFULL 0x00000000000001FFULL
+#define ROGUE_CR_MTS_SCHEDULE4_HOST_SHIFT 8U
+#define ROGUE_CR_MTS_SCHEDULE4_HOST_CLRMSK 0xFFFFFEFFU
+#define ROGUE_CR_MTS_SCHEDULE4_HOST_BG_TIMER 0x00000000U
+#define ROGUE_CR_MTS_SCHEDULE4_HOST_HOST 0x00000100U
+#define ROGUE_CR_MTS_SCHEDULE4_PRIORITY_SHIFT 6U
+#define ROGUE_CR_MTS_SCHEDULE4_PRIORITY_CLRMSK 0xFFFFFF3FU
+#define ROGUE_CR_MTS_SCHEDULE4_PRIORITY_PRT0 0x00000000U
+#define ROGUE_CR_MTS_SCHEDULE4_PRIORITY_PRT1 0x00000040U
+#define ROGUE_CR_MTS_SCHEDULE4_PRIORITY_PRT2 0x00000080U
+#define ROGUE_CR_MTS_SCHEDULE4_PRIORITY_PRT3 0x000000C0U
+#define ROGUE_CR_MTS_SCHEDULE4_CONTEXT_SHIFT 5U
+#define ROGUE_CR_MTS_SCHEDULE4_CONTEXT_CLRMSK 0xFFFFFFDFU
+#define ROGUE_CR_MTS_SCHEDULE4_CONTEXT_BGCTX 0x00000000U
+#define ROGUE_CR_MTS_SCHEDULE4_CONTEXT_INTCTX 0x00000020U
+#define ROGUE_CR_MTS_SCHEDULE4_TASK_SHIFT 4U
+#define ROGUE_CR_MTS_SCHEDULE4_TASK_CLRMSK 0xFFFFFFEFU
+#define ROGUE_CR_MTS_SCHEDULE4_TASK_NON_COUNTED 0x00000000U
+#define ROGUE_CR_MTS_SCHEDULE4_TASK_COUNTED 0x00000010U
+#define ROGUE_CR_MTS_SCHEDULE4_DM_SHIFT 0U
+#define ROGUE_CR_MTS_SCHEDULE4_DM_CLRMSK 0xFFFFFFF0U
+#define ROGUE_CR_MTS_SCHEDULE4_DM_DM0 0x00000000U
+#define ROGUE_CR_MTS_SCHEDULE4_DM_DM1 0x00000001U
+#define ROGUE_CR_MTS_SCHEDULE4_DM_DM2 0x00000002U
+#define ROGUE_CR_MTS_SCHEDULE4_DM_DM3 0x00000003U
+#define ROGUE_CR_MTS_SCHEDULE4_DM_DM4 0x00000004U
+#define ROGUE_CR_MTS_SCHEDULE4_DM_DM5 0x00000005U
+#define ROGUE_CR_MTS_SCHEDULE4_DM_DM6 0x00000006U
+#define ROGUE_CR_MTS_SCHEDULE4_DM_DM7 0x00000007U
+#define ROGUE_CR_MTS_SCHEDULE4_DM_DM_ALL 0x0000000FU
+
+/* Register ROGUE_CR_MTS_SCHEDULE5 */
+#define ROGUE_CR_MTS_SCHEDULE5 0x50B00U
+#define ROGUE_CR_MTS_SCHEDULE5_MASKFULL 0x00000000000001FFULL
+#define ROGUE_CR_MTS_SCHEDULE5_HOST_SHIFT 8U
+#define ROGUE_CR_MTS_SCHEDULE5_HOST_CLRMSK 0xFFFFFEFFU
+#define ROGUE_CR_MTS_SCHEDULE5_HOST_BG_TIMER 0x00000000U
+#define ROGUE_CR_MTS_SCHEDULE5_HOST_HOST 0x00000100U
+#define ROGUE_CR_MTS_SCHEDULE5_PRIORITY_SHIFT 6U
+#define ROGUE_CR_MTS_SCHEDULE5_PRIORITY_CLRMSK 0xFFFFFF3FU
+#define ROGUE_CR_MTS_SCHEDULE5_PRIORITY_PRT0 0x00000000U
+#define ROGUE_CR_MTS_SCHEDULE5_PRIORITY_PRT1 0x00000040U
+#define ROGUE_CR_MTS_SCHEDULE5_PRIORITY_PRT2 0x00000080U
+#define ROGUE_CR_MTS_SCHEDULE5_PRIORITY_PRT3 0x000000C0U
+#define ROGUE_CR_MTS_SCHEDULE5_CONTEXT_SHIFT 5U
+#define ROGUE_CR_MTS_SCHEDULE5_CONTEXT_CLRMSK 0xFFFFFFDFU
+#define ROGUE_CR_MTS_SCHEDULE5_CONTEXT_BGCTX 0x00000000U
+#define ROGUE_CR_MTS_SCHEDULE5_CONTEXT_INTCTX 0x00000020U
+#define ROGUE_CR_MTS_SCHEDULE5_TASK_SHIFT 4U
+#define ROGUE_CR_MTS_SCHEDULE5_TASK_CLRMSK 0xFFFFFFEFU
+#define ROGUE_CR_MTS_SCHEDULE5_TASK_NON_COUNTED 0x00000000U
+#define ROGUE_CR_MTS_SCHEDULE5_TASK_COUNTED 0x00000010U
+#define ROGUE_CR_MTS_SCHEDULE5_DM_SHIFT 0U
+#define ROGUE_CR_MTS_SCHEDULE5_DM_CLRMSK 0xFFFFFFF0U
+#define ROGUE_CR_MTS_SCHEDULE5_DM_DM0 0x00000000U
+#define ROGUE_CR_MTS_SCHEDULE5_DM_DM1 0x00000001U
+#define ROGUE_CR_MTS_SCHEDULE5_DM_DM2 0x00000002U
+#define ROGUE_CR_MTS_SCHEDULE5_DM_DM3 0x00000003U
+#define ROGUE_CR_MTS_SCHEDULE5_DM_DM4 0x00000004U
+#define ROGUE_CR_MTS_SCHEDULE5_DM_DM5 0x00000005U
+#define ROGUE_CR_MTS_SCHEDULE5_DM_DM6 0x00000006U
+#define ROGUE_CR_MTS_SCHEDULE5_DM_DM7 0x00000007U
+#define ROGUE_CR_MTS_SCHEDULE5_DM_DM_ALL 0x0000000FU
+
+/* Register ROGUE_CR_MTS_SCHEDULE6 */
+#define ROGUE_CR_MTS_SCHEDULE6 0x60B00U
+#define ROGUE_CR_MTS_SCHEDULE6_MASKFULL 0x00000000000001FFULL
+#define ROGUE_CR_MTS_SCHEDULE6_HOST_SHIFT 8U
+#define ROGUE_CR_MTS_SCHEDULE6_HOST_CLRMSK 0xFFFFFEFFU
+#define ROGUE_CR_MTS_SCHEDULE6_HOST_BG_TIMER 0x00000000U
+#define ROGUE_CR_MTS_SCHEDULE6_HOST_HOST 0x00000100U
+#define ROGUE_CR_MTS_SCHEDULE6_PRIORITY_SHIFT 6U
+#define ROGUE_CR_MTS_SCHEDULE6_PRIORITY_CLRMSK 0xFFFFFF3FU
+#define ROGUE_CR_MTS_SCHEDULE6_PRIORITY_PRT0 0x00000000U
+#define ROGUE_CR_MTS_SCHEDULE6_PRIORITY_PRT1 0x00000040U
+#define ROGUE_CR_MTS_SCHEDULE6_PRIORITY_PRT2 0x00000080U
+#define ROGUE_CR_MTS_SCHEDULE6_PRIORITY_PRT3 0x000000C0U
+#define ROGUE_CR_MTS_SCHEDULE6_CONTEXT_SHIFT 5U
+#define ROGUE_CR_MTS_SCHEDULE6_CONTEXT_CLRMSK 0xFFFFFFDFU
+#define ROGUE_CR_MTS_SCHEDULE6_CONTEXT_BGCTX 0x00000000U
+#define ROGUE_CR_MTS_SCHEDULE6_CONTEXT_INTCTX 0x00000020U
+#define ROGUE_CR_MTS_SCHEDULE6_TASK_SHIFT 4U
+#define ROGUE_CR_MTS_SCHEDULE6_TASK_CLRMSK 0xFFFFFFEFU
+#define ROGUE_CR_MTS_SCHEDULE6_TASK_NON_COUNTED 0x00000000U
+#define ROGUE_CR_MTS_SCHEDULE6_TASK_COUNTED 0x00000010U
+#define ROGUE_CR_MTS_SCHEDULE6_DM_SHIFT 0U
+#define ROGUE_CR_MTS_SCHEDULE6_DM_CLRMSK 0xFFFFFFF0U
+#define ROGUE_CR_MTS_SCHEDULE6_DM_DM0 0x00000000U
+#define ROGUE_CR_MTS_SCHEDULE6_DM_DM1 0x00000001U
+#define ROGUE_CR_MTS_SCHEDULE6_DM_DM2 0x00000002U
+#define ROGUE_CR_MTS_SCHEDULE6_DM_DM3 0x00000003U
+#define ROGUE_CR_MTS_SCHEDULE6_DM_DM4 0x00000004U
+#define ROGUE_CR_MTS_SCHEDULE6_DM_DM5 0x00000005U
+#define ROGUE_CR_MTS_SCHEDULE6_DM_DM6 0x00000006U
+#define ROGUE_CR_MTS_SCHEDULE6_DM_DM7 0x00000007U
+#define ROGUE_CR_MTS_SCHEDULE6_DM_DM_ALL 0x0000000FU
+
+/* Register ROGUE_CR_MTS_SCHEDULE7 */
+#define ROGUE_CR_MTS_SCHEDULE7 0x70B00U
+#define ROGUE_CR_MTS_SCHEDULE7_MASKFULL 0x00000000000001FFULL
+#define ROGUE_CR_MTS_SCHEDULE7_HOST_SHIFT 8U
+#define ROGUE_CR_MTS_SCHEDULE7_HOST_CLRMSK 0xFFFFFEFFU
+#define ROGUE_CR_MTS_SCHEDULE7_HOST_BG_TIMER 0x00000000U
+#define ROGUE_CR_MTS_SCHEDULE7_HOST_HOST 0x00000100U
+#define ROGUE_CR_MTS_SCHEDULE7_PRIORITY_SHIFT 6U
+#define ROGUE_CR_MTS_SCHEDULE7_PRIORITY_CLRMSK 0xFFFFFF3FU
+#define ROGUE_CR_MTS_SCHEDULE7_PRIORITY_PRT0 0x00000000U
+#define ROGUE_CR_MTS_SCHEDULE7_PRIORITY_PRT1 0x00000040U
+#define ROGUE_CR_MTS_SCHEDULE7_PRIORITY_PRT2 0x00000080U
+#define ROGUE_CR_MTS_SCHEDULE7_PRIORITY_PRT3 0x000000C0U
+#define ROGUE_CR_MTS_SCHEDULE7_CONTEXT_SHIFT 5U
+#define ROGUE_CR_MTS_SCHEDULE7_CONTEXT_CLRMSK 0xFFFFFFDFU
+#define ROGUE_CR_MTS_SCHEDULE7_CONTEXT_BGCTX 0x00000000U
+#define ROGUE_CR_MTS_SCHEDULE7_CONTEXT_INTCTX 0x00000020U
+#define ROGUE_CR_MTS_SCHEDULE7_TASK_SHIFT 4U
+#define ROGUE_CR_MTS_SCHEDULE7_TASK_CLRMSK 0xFFFFFFEFU
+#define ROGUE_CR_MTS_SCHEDULE7_TASK_NON_COUNTED 0x00000000U
+#define ROGUE_CR_MTS_SCHEDULE7_TASK_COUNTED 0x00000010U
+#define ROGUE_CR_MTS_SCHEDULE7_DM_SHIFT 0U
+#define ROGUE_CR_MTS_SCHEDULE7_DM_CLRMSK 0xFFFFFFF0U
+#define ROGUE_CR_MTS_SCHEDULE7_DM_DM0 0x00000000U
+#define ROGUE_CR_MTS_SCHEDULE7_DM_DM1 0x00000001U
+#define ROGUE_CR_MTS_SCHEDULE7_DM_DM2 0x00000002U
+#define ROGUE_CR_MTS_SCHEDULE7_DM_DM3 0x00000003U
+#define ROGUE_CR_MTS_SCHEDULE7_DM_DM4 0x00000004U
+#define ROGUE_CR_MTS_SCHEDULE7_DM_DM5 0x00000005U
+#define ROGUE_CR_MTS_SCHEDULE7_DM_DM6 0x00000006U
+#define ROGUE_CR_MTS_SCHEDULE7_DM_DM7 0x00000007U
+#define ROGUE_CR_MTS_SCHEDULE7_DM_DM_ALL 0x0000000FU
+
+/* Register ROGUE_CR_MTS_BGCTX_THREAD0_DM_ASSOC */
+#define ROGUE_CR_MTS_BGCTX_THREAD0_DM_ASSOC 0x0B30U
+#define ROGUE_CR_MTS_BGCTX_THREAD0_DM_ASSOC_MASKFULL 0x000000000000FFFFULL
+#define ROGUE_CR_MTS_BGCTX_THREAD0_DM_ASSOC_DM_ASSOC_SHIFT 0U
+#define ROGUE_CR_MTS_BGCTX_THREAD0_DM_ASSOC_DM_ASSOC_CLRMSK 0xFFFF0000U
+
+/* Register ROGUE_CR_MTS_BGCTX_THREAD1_DM_ASSOC */
+#define ROGUE_CR_MTS_BGCTX_THREAD1_DM_ASSOC 0x0B38U
+#define ROGUE_CR_MTS_BGCTX_THREAD1_DM_ASSOC_MASKFULL 0x000000000000FFFFULL
+#define ROGUE_CR_MTS_BGCTX_THREAD1_DM_ASSOC_DM_ASSOC_SHIFT 0U
+#define ROGUE_CR_MTS_BGCTX_THREAD1_DM_ASSOC_DM_ASSOC_CLRMSK 0xFFFF0000U
+
+/* Register ROGUE_CR_MTS_INTCTX_THREAD0_DM_ASSOC */
+#define ROGUE_CR_MTS_INTCTX_THREAD0_DM_ASSOC 0x0B40U
+#define ROGUE_CR_MTS_INTCTX_THREAD0_DM_ASSOC_MASKFULL 0x000000000000FFFFULL
+#define ROGUE_CR_MTS_INTCTX_THREAD0_DM_ASSOC_DM_ASSOC_SHIFT 0U
+#define ROGUE_CR_MTS_INTCTX_THREAD0_DM_ASSOC_DM_ASSOC_CLRMSK 0xFFFF0000U
+
+/* Register ROGUE_CR_MTS_INTCTX_THREAD1_DM_ASSOC */
+#define ROGUE_CR_MTS_INTCTX_THREAD1_DM_ASSOC 0x0B48U
+#define ROGUE_CR_MTS_INTCTX_THREAD1_DM_ASSOC_MASKFULL 0x000000000000FFFFULL
+#define ROGUE_CR_MTS_INTCTX_THREAD1_DM_ASSOC_DM_ASSOC_SHIFT 0U
+#define ROGUE_CR_MTS_INTCTX_THREAD1_DM_ASSOC_DM_ASSOC_CLRMSK 0xFFFF0000U
+
+/* Register ROGUE_CR_MTS_GARTEN_WRAPPER_CONFIG */
+#define ROGUE_CR_MTS_GARTEN_WRAPPER_CONFIG 0x0B50U
+#define ROGUE_CR_MTS_GARTEN_WRAPPER_CONFIG__S7_TOP__MASKFULL 0x000FF0FFFFFFF701ULL
+#define ROGUE_CR_MTS_GARTEN_WRAPPER_CONFIG_MASKFULL 0x0000FFFFFFFFF001ULL
+#define ROGUE_CR_MTS_GARTEN_WRAPPER_CONFIG_FENCE_PC_BASE_SHIFT 44U
+#define ROGUE_CR_MTS_GARTEN_WRAPPER_CONFIG_FENCE_PC_BASE_CLRMSK 0xFFFF0FFFFFFFFFFFULL
+#define ROGUE_CR_MTS_GARTEN_WRAPPER_CONFIG__S7_TOP__FENCE_PC_BASE_SHIFT 44U
+#define ROGUE_CR_MTS_GARTEN_WRAPPER_CONFIG__S7_TOP__FENCE_PC_BASE_CLRMSK 0xFFF00FFFFFFFFFFFULL
+#define ROGUE_CR_MTS_GARTEN_WRAPPER_CONFIG_FENCE_DM_SHIFT 40U
+#define ROGUE_CR_MTS_GARTEN_WRAPPER_CONFIG_FENCE_DM_CLRMSK 0xFFFFF0FFFFFFFFFFULL
+#define ROGUE_CR_MTS_GARTEN_WRAPPER_CONFIG_FENCE_ADDR_SHIFT 12U
+#define ROGUE_CR_MTS_GARTEN_WRAPPER_CONFIG_FENCE_ADDR_CLRMSK 0xFFFFFF0000000FFFULL
+#define ROGUE_CR_MTS_GARTEN_WRAPPER_CONFIG_FENCE_PERSISTENCE_SHIFT 9U
+#define ROGUE_CR_MTS_GARTEN_WRAPPER_CONFIG_FENCE_PERSISTENCE_CLRMSK 0xFFFFFFFFFFFFF9FFULL
+#define ROGUE_CR_MTS_GARTEN_WRAPPER_CONFIG_FENCE_SLC_COHERENT_SHIFT 8U
+#define ROGUE_CR_MTS_GARTEN_WRAPPER_CONFIG_FENCE_SLC_COHERENT_CLRMSK 0xFFFFFFFFFFFFFEFFULL
+#define ROGUE_CR_MTS_GARTEN_WRAPPER_CONFIG_FENCE_SLC_COHERENT_EN 0x0000000000000100ULL
+#define ROGUE_CR_MTS_GARTEN_WRAPPER_CONFIG_IDLE_CTRL_SHIFT 0U
+#define ROGUE_CR_MTS_GARTEN_WRAPPER_CONFIG_IDLE_CTRL_CLRMSK 0xFFFFFFFFFFFFFFFEULL
+#define ROGUE_CR_MTS_GARTEN_WRAPPER_CONFIG_IDLE_CTRL_META 0x0000000000000000ULL
+#define ROGUE_CR_MTS_GARTEN_WRAPPER_CONFIG_IDLE_CTRL_MTS 0x0000000000000001ULL
+
+/* Register ROGUE_CR_MTS_DM0_INTERRUPT_ENABLE */
+#define ROGUE_CR_MTS_DM0_INTERRUPT_ENABLE 0x0B58U
+#define ROGUE_CR_MTS_DM0_INTERRUPT_ENABLE_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_MTS_DM0_INTERRUPT_ENABLE_INT_ENABLE_SHIFT 0U
+#define ROGUE_CR_MTS_DM0_INTERRUPT_ENABLE_INT_ENABLE_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_MTS_DM1_INTERRUPT_ENABLE */
+#define ROGUE_CR_MTS_DM1_INTERRUPT_ENABLE 0x0B60U
+#define ROGUE_CR_MTS_DM1_INTERRUPT_ENABLE_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_MTS_DM1_INTERRUPT_ENABLE_INT_ENABLE_SHIFT 0U
+#define ROGUE_CR_MTS_DM1_INTERRUPT_ENABLE_INT_ENABLE_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_MTS_DM2_INTERRUPT_ENABLE */
+#define ROGUE_CR_MTS_DM2_INTERRUPT_ENABLE 0x0B68U
+#define ROGUE_CR_MTS_DM2_INTERRUPT_ENABLE_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_MTS_DM2_INTERRUPT_ENABLE_INT_ENABLE_SHIFT 0U
+#define ROGUE_CR_MTS_DM2_INTERRUPT_ENABLE_INT_ENABLE_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_MTS_DM3_INTERRUPT_ENABLE */
+#define ROGUE_CR_MTS_DM3_INTERRUPT_ENABLE 0x0B70U
+#define ROGUE_CR_MTS_DM3_INTERRUPT_ENABLE_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_MTS_DM3_INTERRUPT_ENABLE_INT_ENABLE_SHIFT 0U
+#define ROGUE_CR_MTS_DM3_INTERRUPT_ENABLE_INT_ENABLE_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_MTS_DM4_INTERRUPT_ENABLE */
+#define ROGUE_CR_MTS_DM4_INTERRUPT_ENABLE 0x0B78U
+#define ROGUE_CR_MTS_DM4_INTERRUPT_ENABLE_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_MTS_DM4_INTERRUPT_ENABLE_INT_ENABLE_SHIFT 0U
+#define ROGUE_CR_MTS_DM4_INTERRUPT_ENABLE_INT_ENABLE_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_MTS_DM5_INTERRUPT_ENABLE */
+#define ROGUE_CR_MTS_DM5_INTERRUPT_ENABLE 0x0B80U
+#define ROGUE_CR_MTS_DM5_INTERRUPT_ENABLE_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_MTS_DM5_INTERRUPT_ENABLE_INT_ENABLE_SHIFT 0U
+#define ROGUE_CR_MTS_DM5_INTERRUPT_ENABLE_INT_ENABLE_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_MTS_INTCTX */
+#define ROGUE_CR_MTS_INTCTX 0x0B98U
+#define ROGUE_CR_MTS_INTCTX_MASKFULL 0x000000003FFFFFFFULL
+#define ROGUE_CR_MTS_INTCTX_DM_HOST_SCHEDULE_SHIFT 22U
+#define ROGUE_CR_MTS_INTCTX_DM_HOST_SCHEDULE_CLRMSK 0xC03FFFFFU
+#define ROGUE_CR_MTS_INTCTX_DM_PTR_SHIFT 18U
+#define ROGUE_CR_MTS_INTCTX_DM_PTR_CLRMSK 0xFFC3FFFFU
+#define ROGUE_CR_MTS_INTCTX_THREAD_ACTIVE_SHIFT 16U
+#define ROGUE_CR_MTS_INTCTX_THREAD_ACTIVE_CLRMSK 0xFFFCFFFFU
+#define ROGUE_CR_MTS_INTCTX_DM_TIMER_SCHEDULE_SHIFT 8U
+#define ROGUE_CR_MTS_INTCTX_DM_TIMER_SCHEDULE_CLRMSK 0xFFFF00FFU
+#define ROGUE_CR_MTS_INTCTX_DM_INTERRUPT_SCHEDULE_SHIFT 0U
+#define ROGUE_CR_MTS_INTCTX_DM_INTERRUPT_SCHEDULE_CLRMSK 0xFFFFFF00U
+
+/* Register ROGUE_CR_MTS_BGCTX */
+#define ROGUE_CR_MTS_BGCTX 0x0BA0U
+#define ROGUE_CR_MTS_BGCTX_MASKFULL 0x0000000000003FFFULL
+#define ROGUE_CR_MTS_BGCTX_DM_PTR_SHIFT 10U
+#define ROGUE_CR_MTS_BGCTX_DM_PTR_CLRMSK 0xFFFFC3FFU
+#define ROGUE_CR_MTS_BGCTX_THREAD_ACTIVE_SHIFT 8U
+#define ROGUE_CR_MTS_BGCTX_THREAD_ACTIVE_CLRMSK 0xFFFFFCFFU
+#define ROGUE_CR_MTS_BGCTX_DM_NONCOUNTED_SCHEDULE_SHIFT 0U
+#define ROGUE_CR_MTS_BGCTX_DM_NONCOUNTED_SCHEDULE_CLRMSK 0xFFFFFF00U
+
+/* Register ROGUE_CR_MTS_BGCTX_COUNTED_SCHEDULE */
+#define ROGUE_CR_MTS_BGCTX_COUNTED_SCHEDULE 0x0BA8U
+#define ROGUE_CR_MTS_BGCTX_COUNTED_SCHEDULE_MASKFULL 0xFFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_MTS_BGCTX_COUNTED_SCHEDULE_DM7_SHIFT 56U
+#define ROGUE_CR_MTS_BGCTX_COUNTED_SCHEDULE_DM7_CLRMSK 0x00FFFFFFFFFFFFFFULL
+#define ROGUE_CR_MTS_BGCTX_COUNTED_SCHEDULE_DM6_SHIFT 48U
+#define ROGUE_CR_MTS_BGCTX_COUNTED_SCHEDULE_DM6_CLRMSK 0xFF00FFFFFFFFFFFFULL
+#define ROGUE_CR_MTS_BGCTX_COUNTED_SCHEDULE_DM5_SHIFT 40U
+#define ROGUE_CR_MTS_BGCTX_COUNTED_SCHEDULE_DM5_CLRMSK 0xFFFF00FFFFFFFFFFULL
+#define ROGUE_CR_MTS_BGCTX_COUNTED_SCHEDULE_DM4_SHIFT 32U
+#define ROGUE_CR_MTS_BGCTX_COUNTED_SCHEDULE_DM4_CLRMSK 0xFFFFFF00FFFFFFFFULL
+#define ROGUE_CR_MTS_BGCTX_COUNTED_SCHEDULE_DM3_SHIFT 24U
+#define ROGUE_CR_MTS_BGCTX_COUNTED_SCHEDULE_DM3_CLRMSK 0xFFFFFFFF00FFFFFFULL
+#define ROGUE_CR_MTS_BGCTX_COUNTED_SCHEDULE_DM2_SHIFT 16U
+#define ROGUE_CR_MTS_BGCTX_COUNTED_SCHEDULE_DM2_CLRMSK 0xFFFFFFFFFF00FFFFULL
+#define ROGUE_CR_MTS_BGCTX_COUNTED_SCHEDULE_DM1_SHIFT 8U
+#define ROGUE_CR_MTS_BGCTX_COUNTED_SCHEDULE_DM1_CLRMSK 0xFFFFFFFFFFFF00FFULL
+#define ROGUE_CR_MTS_BGCTX_COUNTED_SCHEDULE_DM0_SHIFT 0U
+#define ROGUE_CR_MTS_BGCTX_COUNTED_SCHEDULE_DM0_CLRMSK 0xFFFFFFFFFFFFFF00ULL
+
+/* Register ROGUE_CR_MTS_GPU_INT_STATUS */
+#define ROGUE_CR_MTS_GPU_INT_STATUS 0x0BB0U
+#define ROGUE_CR_MTS_GPU_INT_STATUS_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_MTS_GPU_INT_STATUS_STATUS_SHIFT 0U
+#define ROGUE_CR_MTS_GPU_INT_STATUS_STATUS_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_MTS_SCHEDULE_ENABLE */
+#define ROGUE_CR_MTS_SCHEDULE_ENABLE 0x0BC8U
+#define ROGUE_CR_MTS_SCHEDULE_ENABLE_MASKFULL 0x00000000000000FFULL
+#define ROGUE_CR_MTS_SCHEDULE_ENABLE_MASK_SHIFT 0U
+#define ROGUE_CR_MTS_SCHEDULE_ENABLE_MASK_CLRMSK 0xFFFFFF00U
+
+/* Register ROGUE_CR_IRQ_OS0_EVENT_STATUS */
+#define ROGUE_CR_IRQ_OS0_EVENT_STATUS 0x0BD8U
+#define ROGUE_CR_IRQ_OS0_EVENT_STATUS_MASKFULL 0x0000000000000001ULL
+#define ROGUE_CR_IRQ_OS0_EVENT_STATUS_SOURCE_SHIFT 0U
+#define ROGUE_CR_IRQ_OS0_EVENT_STATUS_SOURCE_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_IRQ_OS0_EVENT_STATUS_SOURCE_EN 0x00000001U
+
+/* Register ROGUE_CR_IRQ_OS0_EVENT_CLEAR */
+#define ROGUE_CR_IRQ_OS0_EVENT_CLEAR 0x0BE8U
+#define ROGUE_CR_IRQ_OS0_EVENT_CLEAR_MASKFULL 0x0000000000000001ULL
+#define ROGUE_CR_IRQ_OS0_EVENT_CLEAR_SOURCE_SHIFT 0U
+#define ROGUE_CR_IRQ_OS0_EVENT_CLEAR_SOURCE_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_IRQ_OS0_EVENT_CLEAR_SOURCE_EN 0x00000001U
+
+/* Register ROGUE_CR_IRQ_OS1_EVENT_STATUS */
+#define ROGUE_CR_IRQ_OS1_EVENT_STATUS 0x10BD8U
+#define ROGUE_CR_IRQ_OS1_EVENT_STATUS_MASKFULL 0x0000000000000001ULL
+#define ROGUE_CR_IRQ_OS1_EVENT_STATUS_SOURCE_SHIFT 0U
+#define ROGUE_CR_IRQ_OS1_EVENT_STATUS_SOURCE_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_IRQ_OS1_EVENT_STATUS_SOURCE_EN 0x00000001U
+
+/* Register ROGUE_CR_IRQ_OS1_EVENT_CLEAR */
+#define ROGUE_CR_IRQ_OS1_EVENT_CLEAR 0x10BE8U
+#define ROGUE_CR_IRQ_OS1_EVENT_CLEAR_MASKFULL 0x0000000000000001ULL
+#define ROGUE_CR_IRQ_OS1_EVENT_CLEAR_SOURCE_SHIFT 0U
+#define ROGUE_CR_IRQ_OS1_EVENT_CLEAR_SOURCE_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_IRQ_OS1_EVENT_CLEAR_SOURCE_EN 0x00000001U
+
+/* Register ROGUE_CR_IRQ_OS2_EVENT_STATUS */
+#define ROGUE_CR_IRQ_OS2_EVENT_STATUS 0x20BD8U
+#define ROGUE_CR_IRQ_OS2_EVENT_STATUS_MASKFULL 0x0000000000000001ULL
+#define ROGUE_CR_IRQ_OS2_EVENT_STATUS_SOURCE_SHIFT 0U
+#define ROGUE_CR_IRQ_OS2_EVENT_STATUS_SOURCE_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_IRQ_OS2_EVENT_STATUS_SOURCE_EN 0x00000001U
+
+/* Register ROGUE_CR_IRQ_OS2_EVENT_CLEAR */
+#define ROGUE_CR_IRQ_OS2_EVENT_CLEAR 0x20BE8U
+#define ROGUE_CR_IRQ_OS2_EVENT_CLEAR_MASKFULL 0x0000000000000001ULL
+#define ROGUE_CR_IRQ_OS2_EVENT_CLEAR_SOURCE_SHIFT 0U
+#define ROGUE_CR_IRQ_OS2_EVENT_CLEAR_SOURCE_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_IRQ_OS2_EVENT_CLEAR_SOURCE_EN 0x00000001U
+
+/* Register ROGUE_CR_IRQ_OS3_EVENT_STATUS */
+#define ROGUE_CR_IRQ_OS3_EVENT_STATUS 0x30BD8U
+#define ROGUE_CR_IRQ_OS3_EVENT_STATUS_MASKFULL 0x0000000000000001ULL
+#define ROGUE_CR_IRQ_OS3_EVENT_STATUS_SOURCE_SHIFT 0U
+#define ROGUE_CR_IRQ_OS3_EVENT_STATUS_SOURCE_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_IRQ_OS3_EVENT_STATUS_SOURCE_EN 0x00000001U
+
+/* Register ROGUE_CR_IRQ_OS3_EVENT_CLEAR */
+#define ROGUE_CR_IRQ_OS3_EVENT_CLEAR 0x30BE8U
+#define ROGUE_CR_IRQ_OS3_EVENT_CLEAR_MASKFULL 0x0000000000000001ULL
+#define ROGUE_CR_IRQ_OS3_EVENT_CLEAR_SOURCE_SHIFT 0U
+#define ROGUE_CR_IRQ_OS3_EVENT_CLEAR_SOURCE_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_IRQ_OS3_EVENT_CLEAR_SOURCE_EN 0x00000001U
+
+/* Register ROGUE_CR_IRQ_OS4_EVENT_STATUS */
+#define ROGUE_CR_IRQ_OS4_EVENT_STATUS 0x40BD8U
+#define ROGUE_CR_IRQ_OS4_EVENT_STATUS_MASKFULL 0x0000000000000001ULL
+#define ROGUE_CR_IRQ_OS4_EVENT_STATUS_SOURCE_SHIFT 0U
+#define ROGUE_CR_IRQ_OS4_EVENT_STATUS_SOURCE_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_IRQ_OS4_EVENT_STATUS_SOURCE_EN 0x00000001U
+
+/* Register ROGUE_CR_IRQ_OS4_EVENT_CLEAR */
+#define ROGUE_CR_IRQ_OS4_EVENT_CLEAR 0x40BE8U
+#define ROGUE_CR_IRQ_OS4_EVENT_CLEAR_MASKFULL 0x0000000000000001ULL
+#define ROGUE_CR_IRQ_OS4_EVENT_CLEAR_SOURCE_SHIFT 0U
+#define ROGUE_CR_IRQ_OS4_EVENT_CLEAR_SOURCE_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_IRQ_OS4_EVENT_CLEAR_SOURCE_EN 0x00000001U
+
+/* Register ROGUE_CR_IRQ_OS5_EVENT_STATUS */
+#define ROGUE_CR_IRQ_OS5_EVENT_STATUS 0x50BD8U
+#define ROGUE_CR_IRQ_OS5_EVENT_STATUS_MASKFULL 0x0000000000000001ULL
+#define ROGUE_CR_IRQ_OS5_EVENT_STATUS_SOURCE_SHIFT 0U
+#define ROGUE_CR_IRQ_OS5_EVENT_STATUS_SOURCE_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_IRQ_OS5_EVENT_STATUS_SOURCE_EN 0x00000001U
+
+/* Register ROGUE_CR_IRQ_OS5_EVENT_CLEAR */
+#define ROGUE_CR_IRQ_OS5_EVENT_CLEAR 0x50BE8U
+#define ROGUE_CR_IRQ_OS5_EVENT_CLEAR_MASKFULL 0x0000000000000001ULL
+#define ROGUE_CR_IRQ_OS5_EVENT_CLEAR_SOURCE_SHIFT 0U
+#define ROGUE_CR_IRQ_OS5_EVENT_CLEAR_SOURCE_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_IRQ_OS5_EVENT_CLEAR_SOURCE_EN 0x00000001U
+
+/* Register ROGUE_CR_IRQ_OS6_EVENT_STATUS */
+#define ROGUE_CR_IRQ_OS6_EVENT_STATUS 0x60BD8U
+#define ROGUE_CR_IRQ_OS6_EVENT_STATUS_MASKFULL 0x0000000000000001ULL
+#define ROGUE_CR_IRQ_OS6_EVENT_STATUS_SOURCE_SHIFT 0U
+#define ROGUE_CR_IRQ_OS6_EVENT_STATUS_SOURCE_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_IRQ_OS6_EVENT_STATUS_SOURCE_EN 0x00000001U
+
+/* Register ROGUE_CR_IRQ_OS6_EVENT_CLEAR */
+#define ROGUE_CR_IRQ_OS6_EVENT_CLEAR 0x60BE8U
+#define ROGUE_CR_IRQ_OS6_EVENT_CLEAR_MASKFULL 0x0000000000000001ULL
+#define ROGUE_CR_IRQ_OS6_EVENT_CLEAR_SOURCE_SHIFT 0U
+#define ROGUE_CR_IRQ_OS6_EVENT_CLEAR_SOURCE_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_IRQ_OS6_EVENT_CLEAR_SOURCE_EN 0x00000001U
+
+/* Register ROGUE_CR_IRQ_OS7_EVENT_STATUS */
+#define ROGUE_CR_IRQ_OS7_EVENT_STATUS 0x70BD8U
+#define ROGUE_CR_IRQ_OS7_EVENT_STATUS_MASKFULL 0x0000000000000001ULL
+#define ROGUE_CR_IRQ_OS7_EVENT_STATUS_SOURCE_SHIFT 0U
+#define ROGUE_CR_IRQ_OS7_EVENT_STATUS_SOURCE_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_IRQ_OS7_EVENT_STATUS_SOURCE_EN 0x00000001U
+
+/* Register ROGUE_CR_IRQ_OS7_EVENT_CLEAR */
+#define ROGUE_CR_IRQ_OS7_EVENT_CLEAR 0x70BE8U
+#define ROGUE_CR_IRQ_OS7_EVENT_CLEAR_MASKFULL 0x0000000000000001ULL
+#define ROGUE_CR_IRQ_OS7_EVENT_CLEAR_SOURCE_SHIFT 0U
+#define ROGUE_CR_IRQ_OS7_EVENT_CLEAR_SOURCE_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_IRQ_OS7_EVENT_CLEAR_SOURCE_EN 0x00000001U
+
+/* Register ROGUE_CR_META_BOOT */
+#define ROGUE_CR_META_BOOT 0x0BF8U
+#define ROGUE_CR_META_BOOT_MASKFULL 0x0000000000000001ULL
+#define ROGUE_CR_META_BOOT_MODE_SHIFT 0U
+#define ROGUE_CR_META_BOOT_MODE_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_META_BOOT_MODE_EN 0x00000001U
+
+/* Register ROGUE_CR_GARTEN_SLC */
+#define ROGUE_CR_GARTEN_SLC 0x0BB8U
+#define ROGUE_CR_GARTEN_SLC_MASKFULL 0x0000000000000001ULL
+#define ROGUE_CR_GARTEN_SLC_FORCE_COHERENCY_SHIFT 0U
+#define ROGUE_CR_GARTEN_SLC_FORCE_COHERENCY_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_GARTEN_SLC_FORCE_COHERENCY_EN 0x00000001U
+
+/* Register ROGUE_CR_PPP */
+#define ROGUE_CR_PPP 0x0CD0U
+#define ROGUE_CR_PPP_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_PPP_CHECKSUM_SHIFT 0U
+#define ROGUE_CR_PPP_CHECKSUM_CLRMSK 0x00000000U
+
+#define ROGUE_CR_ISP_RENDER_DIR_TYPE_MASK 0x00000003U
+/* Top-left to bottom-right */
+#define ROGUE_CR_ISP_RENDER_DIR_TYPE_TL2BR 0x00000000U
+/* Top-right to bottom-left */
+#define ROGUE_CR_ISP_RENDER_DIR_TYPE_TR2BL 0x00000001U
+/* Bottom-left to top-right */
+#define ROGUE_CR_ISP_RENDER_DIR_TYPE_BL2TR 0x00000002U
+/* Bottom-right to top-left */
+#define ROGUE_CR_ISP_RENDER_DIR_TYPE_BR2TL 0x00000003U
+
+#define ROGUE_CR_ISP_RENDER_MODE_TYPE_MASK 0x00000003U
+/* Normal render */
+#define ROGUE_CR_ISP_RENDER_MODE_TYPE_NORM 0x00000000U
+/* Fast 2D render */
+#define ROGUE_CR_ISP_RENDER_MODE_TYPE_FAST_2D 0x00000002U
+/* Fast scale render */
+#define ROGUE_CR_ISP_RENDER_MODE_TYPE_FAST_SCALE 0x00000003U
+
+/* Register ROGUE_CR_ISP_RENDER */
+#define ROGUE_CR_ISP_RENDER 0x0F08U
+#define ROGUE_CR_ISP_RENDER_MASKFULL 0x00000000000001FFULL
+#define ROGUE_CR_ISP_RENDER_FAST_RENDER_FORCE_PROTECT_SHIFT 8U
+#define ROGUE_CR_ISP_RENDER_FAST_RENDER_FORCE_PROTECT_CLRMSK 0xFFFFFEFFU
+#define ROGUE_CR_ISP_RENDER_FAST_RENDER_FORCE_PROTECT_EN 0x00000100U
+#define ROGUE_CR_ISP_RENDER_PROCESS_PROTECTED_TILES_SHIFT 7U
+#define ROGUE_CR_ISP_RENDER_PROCESS_PROTECTED_TILES_CLRMSK 0xFFFFFF7FU
+#define ROGUE_CR_ISP_RENDER_PROCESS_PROTECTED_TILES_EN 0x00000080U
+#define ROGUE_CR_ISP_RENDER_PROCESS_UNPROTECTED_TILES_SHIFT 6U
+#define ROGUE_CR_ISP_RENDER_PROCESS_UNPROTECTED_TILES_CLRMSK 0xFFFFFFBFU
+#define ROGUE_CR_ISP_RENDER_PROCESS_UNPROTECTED_TILES_EN 0x00000040U
+#define ROGUE_CR_ISP_RENDER_DISABLE_EOMT_SHIFT 5U
+#define ROGUE_CR_ISP_RENDER_DISABLE_EOMT_CLRMSK 0xFFFFFFDFU
+#define ROGUE_CR_ISP_RENDER_DISABLE_EOMT_EN 0x00000020U
+#define ROGUE_CR_ISP_RENDER_RESUME_SHIFT 4U
+#define ROGUE_CR_ISP_RENDER_RESUME_CLRMSK 0xFFFFFFEFU
+#define ROGUE_CR_ISP_RENDER_RESUME_EN 0x00000010U
+#define ROGUE_CR_ISP_RENDER_DIR_SHIFT 2U
+#define ROGUE_CR_ISP_RENDER_DIR_CLRMSK 0xFFFFFFF3U
+#define ROGUE_CR_ISP_RENDER_DIR_TL2BR 0x00000000U
+#define ROGUE_CR_ISP_RENDER_DIR_TR2BL 0x00000004U
+#define ROGUE_CR_ISP_RENDER_DIR_BL2TR 0x00000008U
+#define ROGUE_CR_ISP_RENDER_DIR_BR2TL 0x0000000CU
+#define ROGUE_CR_ISP_RENDER_MODE_SHIFT 0U
+#define ROGUE_CR_ISP_RENDER_MODE_CLRMSK 0xFFFFFFFCU
+#define ROGUE_CR_ISP_RENDER_MODE_NORM 0x00000000U
+#define ROGUE_CR_ISP_RENDER_MODE_FAST_2D 0x00000002U
+#define ROGUE_CR_ISP_RENDER_MODE_FAST_SCALE 0x00000003U
+
+/* Register ROGUE_CR_ISP_CTL */
+#define ROGUE_CR_ISP_CTL 0x0F38U
+#define ROGUE_CR_ISP_CTL_MASKFULL 0x00000000FFFFF3FFULL
+#define ROGUE_CR_ISP_CTL_SKIP_INIT_HDRS_SHIFT 31U
+#define ROGUE_CR_ISP_CTL_SKIP_INIT_HDRS_CLRMSK 0x7FFFFFFFU
+#define ROGUE_CR_ISP_CTL_SKIP_INIT_HDRS_EN 0x80000000U
+#define ROGUE_CR_ISP_CTL_LINE_STYLE_SHIFT 30U
+#define ROGUE_CR_ISP_CTL_LINE_STYLE_CLRMSK 0xBFFFFFFFU
+#define ROGUE_CR_ISP_CTL_LINE_STYLE_EN 0x40000000U
+#define ROGUE_CR_ISP_CTL_LINE_STYLE_PIX_SHIFT 29U
+#define ROGUE_CR_ISP_CTL_LINE_STYLE_PIX_CLRMSK 0xDFFFFFFFU
+#define ROGUE_CR_ISP_CTL_LINE_STYLE_PIX_EN 0x20000000U
+#define ROGUE_CR_ISP_CTL_PAIR_TILES_VERT_SHIFT 28U
+#define ROGUE_CR_ISP_CTL_PAIR_TILES_VERT_CLRMSK 0xEFFFFFFFU
+#define ROGUE_CR_ISP_CTL_PAIR_TILES_VERT_EN 0x10000000U
+#define ROGUE_CR_ISP_CTL_PAIR_TILES_SHIFT 27U
+#define ROGUE_CR_ISP_CTL_PAIR_TILES_CLRMSK 0xF7FFFFFFU
+#define ROGUE_CR_ISP_CTL_PAIR_TILES_EN 0x08000000U
+#define ROGUE_CR_ISP_CTL_CREQ_BUF_EN_SHIFT 26U
+#define ROGUE_CR_ISP_CTL_CREQ_BUF_EN_CLRMSK 0xFBFFFFFFU
+#define ROGUE_CR_ISP_CTL_CREQ_BUF_EN_EN 0x04000000U
+#define ROGUE_CR_ISP_CTL_TILE_AGE_EN_SHIFT 25U
+#define ROGUE_CR_ISP_CTL_TILE_AGE_EN_CLRMSK 0xFDFFFFFFU
+#define ROGUE_CR_ISP_CTL_TILE_AGE_EN_EN 0x02000000U
+#define ROGUE_CR_ISP_CTL_ISP_SAMPLE_POS_MODE_SHIFT 23U
+#define ROGUE_CR_ISP_CTL_ISP_SAMPLE_POS_MODE_CLRMSK 0xFE7FFFFFU
+#define ROGUE_CR_ISP_CTL_ISP_SAMPLE_POS_MODE_DX9 0x00000000U
+#define ROGUE_CR_ISP_CTL_ISP_SAMPLE_POS_MODE_DX10 0x00800000U
+#define ROGUE_CR_ISP_CTL_ISP_SAMPLE_POS_MODE_OGL 0x01000000U
+#define ROGUE_CR_ISP_CTL_NUM_TILES_PER_USC_SHIFT 21U
+#define ROGUE_CR_ISP_CTL_NUM_TILES_PER_USC_CLRMSK 0xFF9FFFFFU
+#define ROGUE_CR_ISP_CTL_DBIAS_IS_INT_SHIFT 20U
+#define ROGUE_CR_ISP_CTL_DBIAS_IS_INT_CLRMSK 0xFFEFFFFFU
+#define ROGUE_CR_ISP_CTL_DBIAS_IS_INT_EN 0x00100000U
+#define ROGUE_CR_ISP_CTL_OVERLAP_CHECK_MODE_SHIFT 19U
+#define ROGUE_CR_ISP_CTL_OVERLAP_CHECK_MODE_CLRMSK 0xFFF7FFFFU
+#define ROGUE_CR_ISP_CTL_OVERLAP_CHECK_MODE_EN 0x00080000U
+#define ROGUE_CR_ISP_CTL_PT_UPFRONT_DEPTH_DISABLE_SHIFT 18U
+#define ROGUE_CR_ISP_CTL_PT_UPFRONT_DEPTH_DISABLE_CLRMSK 0xFFFBFFFFU
+#define ROGUE_CR_ISP_CTL_PT_UPFRONT_DEPTH_DISABLE_EN 0x00040000U
+#define ROGUE_CR_ISP_CTL_PROCESS_EMPTY_TILES_SHIFT 17U
+#define ROGUE_CR_ISP_CTL_PROCESS_EMPTY_TILES_CLRMSK 0xFFFDFFFFU
+#define ROGUE_CR_ISP_CTL_PROCESS_EMPTY_TILES_EN 0x00020000U
+#define ROGUE_CR_ISP_CTL_SAMPLE_POS_SHIFT 16U
+#define ROGUE_CR_ISP_CTL_SAMPLE_POS_CLRMSK 0xFFFEFFFFU
+#define ROGUE_CR_ISP_CTL_SAMPLE_POS_EN 0x00010000U
+#define ROGUE_CR_ISP_CTL_PIPE_ENABLE_SHIFT 12U
+#define ROGUE_CR_ISP_CTL_PIPE_ENABLE_CLRMSK 0xFFFF0FFFU
+#define ROGUE_CR_ISP_CTL_PIPE_ENABLE_PIPE_ONE 0x00000000U
+#define ROGUE_CR_ISP_CTL_PIPE_ENABLE_PIPE_TWO 0x00001000U
+#define ROGUE_CR_ISP_CTL_PIPE_ENABLE_PIPE_THREE 0x00002000U
+#define ROGUE_CR_ISP_CTL_PIPE_ENABLE_PIPE_FOUR 0x00003000U
+#define ROGUE_CR_ISP_CTL_PIPE_ENABLE_PIPE_FIVE 0x00004000U
+#define ROGUE_CR_ISP_CTL_PIPE_ENABLE_PIPE_SIX 0x00005000U
+#define ROGUE_CR_ISP_CTL_PIPE_ENABLE_PIPE_SEVEN 0x00006000U
+#define ROGUE_CR_ISP_CTL_PIPE_ENABLE_PIPE_EIGHT 0x00007000U
+#define ROGUE_CR_ISP_CTL_PIPE_ENABLE_PIPE_NINE 0x00008000U
+#define ROGUE_CR_ISP_CTL_PIPE_ENABLE_PIPE_TEN 0x00009000U
+#define ROGUE_CR_ISP_CTL_PIPE_ENABLE_PIPE_ELEVEN 0x0000A000U
+#define ROGUE_CR_ISP_CTL_PIPE_ENABLE_PIPE_TWELVE 0x0000B000U
+#define ROGUE_CR_ISP_CTL_PIPE_ENABLE_PIPE_THIRTEEN 0x0000C000U
+#define ROGUE_CR_ISP_CTL_PIPE_ENABLE_PIPE_FOURTEEN 0x0000D000U
+#define ROGUE_CR_ISP_CTL_PIPE_ENABLE_PIPE_FIFTEEN 0x0000E000U
+#define ROGUE_CR_ISP_CTL_PIPE_ENABLE_PIPE_SIXTEEN 0x0000F000U
+#define ROGUE_CR_ISP_CTL_VALID_ID_SHIFT 4U
+#define ROGUE_CR_ISP_CTL_VALID_ID_CLRMSK 0xFFFFFC0FU
+#define ROGUE_CR_ISP_CTL_UPASS_START_SHIFT 0U
+#define ROGUE_CR_ISP_CTL_UPASS_START_CLRMSK 0xFFFFFFF0U
+
+/* Register ROGUE_CR_ISP_STATUS */
+#define ROGUE_CR_ISP_STATUS 0x1038U
+#define ROGUE_CR_ISP_STATUS_MASKFULL 0x0000000000000007ULL
+#define ROGUE_CR_ISP_STATUS_SPLIT_MAX_SHIFT 2U
+#define ROGUE_CR_ISP_STATUS_SPLIT_MAX_CLRMSK 0xFFFFFFFBU
+#define ROGUE_CR_ISP_STATUS_SPLIT_MAX_EN 0x00000004U
+#define ROGUE_CR_ISP_STATUS_ACTIVE_SHIFT 1U
+#define ROGUE_CR_ISP_STATUS_ACTIVE_CLRMSK 0xFFFFFFFDU
+#define ROGUE_CR_ISP_STATUS_ACTIVE_EN 0x00000002U
+#define ROGUE_CR_ISP_STATUS_EOR_SHIFT 0U
+#define ROGUE_CR_ISP_STATUS_EOR_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_ISP_STATUS_EOR_EN 0x00000001U
+
+/* Register group: ROGUE_CR_ISP_XTP_RESUME, with 64 repeats */
+#define ROGUE_CR_ISP_XTP_RESUME_REPEATCOUNT 64U
+/* Register ROGUE_CR_ISP_XTP_RESUME0 */
+#define ROGUE_CR_ISP_XTP_RESUME0 0x3A00U
+#define ROGUE_CR_ISP_XTP_RESUME0_MASKFULL 0x00000000003FF3FFULL
+#define ROGUE_CR_ISP_XTP_RESUME0_TILE_X_SHIFT 12U
+#define ROGUE_CR_ISP_XTP_RESUME0_TILE_X_CLRMSK 0xFFC00FFFU
+#define ROGUE_CR_ISP_XTP_RESUME0_TILE_Y_SHIFT 0U
+#define ROGUE_CR_ISP_XTP_RESUME0_TILE_Y_CLRMSK 0xFFFFFC00U
+
+/* Register group: ROGUE_CR_ISP_XTP_STORE, with 32 repeats */
+#define ROGUE_CR_ISP_XTP_STORE_REPEATCOUNT 32U
+/* Register ROGUE_CR_ISP_XTP_STORE0 */
+#define ROGUE_CR_ISP_XTP_STORE0 0x3C00U
+#define ROGUE_CR_ISP_XTP_STORE0_MASKFULL 0x000000007F3FF3FFULL
+#define ROGUE_CR_ISP_XTP_STORE0_ACTIVE_SHIFT 30U
+#define ROGUE_CR_ISP_XTP_STORE0_ACTIVE_CLRMSK 0xBFFFFFFFU
+#define ROGUE_CR_ISP_XTP_STORE0_ACTIVE_EN 0x40000000U
+#define ROGUE_CR_ISP_XTP_STORE0_EOR_SHIFT 29U
+#define ROGUE_CR_ISP_XTP_STORE0_EOR_CLRMSK 0xDFFFFFFFU
+#define ROGUE_CR_ISP_XTP_STORE0_EOR_EN 0x20000000U
+#define ROGUE_CR_ISP_XTP_STORE0_TILE_LAST_SHIFT 28U
+#define ROGUE_CR_ISP_XTP_STORE0_TILE_LAST_CLRMSK 0xEFFFFFFFU
+#define ROGUE_CR_ISP_XTP_STORE0_TILE_LAST_EN 0x10000000U
+#define ROGUE_CR_ISP_XTP_STORE0_MT_SHIFT 24U
+#define ROGUE_CR_ISP_XTP_STORE0_MT_CLRMSK 0xF0FFFFFFU
+#define ROGUE_CR_ISP_XTP_STORE0_TILE_X_SHIFT 12U
+#define ROGUE_CR_ISP_XTP_STORE0_TILE_X_CLRMSK 0xFFC00FFFU
+#define ROGUE_CR_ISP_XTP_STORE0_TILE_Y_SHIFT 0U
+#define ROGUE_CR_ISP_XTP_STORE0_TILE_Y_CLRMSK 0xFFFFFC00U
+
+/* Register group: ROGUE_CR_BIF_CAT_BASE, with 8 repeats */
+#define ROGUE_CR_BIF_CAT_BASE_REPEATCOUNT 8U
+/* Register ROGUE_CR_BIF_CAT_BASE0 */
+#define ROGUE_CR_BIF_CAT_BASE0 0x1200U
+#define ROGUE_CR_BIF_CAT_BASE0_MASKFULL 0x000000FFFFFFF000ULL
+#define ROGUE_CR_BIF_CAT_BASE0_ADDR_SHIFT 12U
+#define ROGUE_CR_BIF_CAT_BASE0_ADDR_CLRMSK 0xFFFFFF0000000FFFULL
+#define ROGUE_CR_BIF_CAT_BASE0_ADDR_ALIGNSHIFT 12U
+#define ROGUE_CR_BIF_CAT_BASE0_ADDR_ALIGNSIZE 4096U
+
+/* Register ROGUE_CR_BIF_CAT_BASE1 */
+#define ROGUE_CR_BIF_CAT_BASE1 0x1208U
+#define ROGUE_CR_BIF_CAT_BASE1_MASKFULL 0x000000FFFFFFF000ULL
+#define ROGUE_CR_BIF_CAT_BASE1_ADDR_SHIFT 12U
+#define ROGUE_CR_BIF_CAT_BASE1_ADDR_CLRMSK 0xFFFFFF0000000FFFULL
+#define ROGUE_CR_BIF_CAT_BASE1_ADDR_ALIGNSHIFT 12U
+#define ROGUE_CR_BIF_CAT_BASE1_ADDR_ALIGNSIZE 4096U
+
+/* Register ROGUE_CR_BIF_CAT_BASE2 */
+#define ROGUE_CR_BIF_CAT_BASE2 0x1210U
+#define ROGUE_CR_BIF_CAT_BASE2_MASKFULL 0x000000FFFFFFF000ULL
+#define ROGUE_CR_BIF_CAT_BASE2_ADDR_SHIFT 12U
+#define ROGUE_CR_BIF_CAT_BASE2_ADDR_CLRMSK 0xFFFFFF0000000FFFULL
+#define ROGUE_CR_BIF_CAT_BASE2_ADDR_ALIGNSHIFT 12U
+#define ROGUE_CR_BIF_CAT_BASE2_ADDR_ALIGNSIZE 4096U
+
+/* Register ROGUE_CR_BIF_CAT_BASE3 */
+#define ROGUE_CR_BIF_CAT_BASE3 0x1218U
+#define ROGUE_CR_BIF_CAT_BASE3_MASKFULL 0x000000FFFFFFF000ULL
+#define ROGUE_CR_BIF_CAT_BASE3_ADDR_SHIFT 12U
+#define ROGUE_CR_BIF_CAT_BASE3_ADDR_CLRMSK 0xFFFFFF0000000FFFULL
+#define ROGUE_CR_BIF_CAT_BASE3_ADDR_ALIGNSHIFT 12U
+#define ROGUE_CR_BIF_CAT_BASE3_ADDR_ALIGNSIZE 4096U
+
+/* Register ROGUE_CR_BIF_CAT_BASE4 */
+#define ROGUE_CR_BIF_CAT_BASE4 0x1220U
+#define ROGUE_CR_BIF_CAT_BASE4_MASKFULL 0x000000FFFFFFF000ULL
+#define ROGUE_CR_BIF_CAT_BASE4_ADDR_SHIFT 12U
+#define ROGUE_CR_BIF_CAT_BASE4_ADDR_CLRMSK 0xFFFFFF0000000FFFULL
+#define ROGUE_CR_BIF_CAT_BASE4_ADDR_ALIGNSHIFT 12U
+#define ROGUE_CR_BIF_CAT_BASE4_ADDR_ALIGNSIZE 4096U
+
+/* Register ROGUE_CR_BIF_CAT_BASE5 */
+#define ROGUE_CR_BIF_CAT_BASE5 0x1228U
+#define ROGUE_CR_BIF_CAT_BASE5_MASKFULL 0x000000FFFFFFF000ULL
+#define ROGUE_CR_BIF_CAT_BASE5_ADDR_SHIFT 12U
+#define ROGUE_CR_BIF_CAT_BASE5_ADDR_CLRMSK 0xFFFFFF0000000FFFULL
+#define ROGUE_CR_BIF_CAT_BASE5_ADDR_ALIGNSHIFT 12U
+#define ROGUE_CR_BIF_CAT_BASE5_ADDR_ALIGNSIZE 4096U
+
+/* Register ROGUE_CR_BIF_CAT_BASE6 */
+#define ROGUE_CR_BIF_CAT_BASE6 0x1230U
+#define ROGUE_CR_BIF_CAT_BASE6_MASKFULL 0x000000FFFFFFF000ULL
+#define ROGUE_CR_BIF_CAT_BASE6_ADDR_SHIFT 12U
+#define ROGUE_CR_BIF_CAT_BASE6_ADDR_CLRMSK 0xFFFFFF0000000FFFULL
+#define ROGUE_CR_BIF_CAT_BASE6_ADDR_ALIGNSHIFT 12U
+#define ROGUE_CR_BIF_CAT_BASE6_ADDR_ALIGNSIZE 4096U
+
+/* Register ROGUE_CR_BIF_CAT_BASE7 */
+#define ROGUE_CR_BIF_CAT_BASE7 0x1238U
+#define ROGUE_CR_BIF_CAT_BASE7_MASKFULL 0x000000FFFFFFF000ULL
+#define ROGUE_CR_BIF_CAT_BASE7_ADDR_SHIFT 12U
+#define ROGUE_CR_BIF_CAT_BASE7_ADDR_CLRMSK 0xFFFFFF0000000FFFULL
+#define ROGUE_CR_BIF_CAT_BASE7_ADDR_ALIGNSHIFT 12U
+#define ROGUE_CR_BIF_CAT_BASE7_ADDR_ALIGNSIZE 4096U
+
+/* Register ROGUE_CR_BIF_CAT_BASE_INDEX */
+#define ROGUE_CR_BIF_CAT_BASE_INDEX 0x1240U
+#define ROGUE_CR_BIF_CAT_BASE_INDEX_MASKFULL 0x00070707073F0707ULL
+#define ROGUE_CR_BIF_CAT_BASE_INDEX_RVTX_SHIFT 48U
+#define ROGUE_CR_BIF_CAT_BASE_INDEX_RVTX_CLRMSK 0xFFF8FFFFFFFFFFFFULL
+#define ROGUE_CR_BIF_CAT_BASE_INDEX_RAY_SHIFT 40U
+#define ROGUE_CR_BIF_CAT_BASE_INDEX_RAY_CLRMSK 0xFFFFF8FFFFFFFFFFULL
+#define ROGUE_CR_BIF_CAT_BASE_INDEX_HOST_SHIFT 32U
+#define ROGUE_CR_BIF_CAT_BASE_INDEX_HOST_CLRMSK 0xFFFFFFF8FFFFFFFFULL
+#define ROGUE_CR_BIF_CAT_BASE_INDEX_TLA_SHIFT 24U
+#define ROGUE_CR_BIF_CAT_BASE_INDEX_TLA_CLRMSK 0xFFFFFFFFF8FFFFFFULL
+#define ROGUE_CR_BIF_CAT_BASE_INDEX_TDM_SHIFT 19U
+#define ROGUE_CR_BIF_CAT_BASE_INDEX_TDM_CLRMSK 0xFFFFFFFFFFC7FFFFULL
+#define ROGUE_CR_BIF_CAT_BASE_INDEX_CDM_SHIFT 16U
+#define ROGUE_CR_BIF_CAT_BASE_INDEX_CDM_CLRMSK 0xFFFFFFFFFFF8FFFFULL
+#define ROGUE_CR_BIF_CAT_BASE_INDEX_PIXEL_SHIFT 8U
+#define ROGUE_CR_BIF_CAT_BASE_INDEX_PIXEL_CLRMSK 0xFFFFFFFFFFFFF8FFULL
+#define ROGUE_CR_BIF_CAT_BASE_INDEX_TA_SHIFT 0U
+#define ROGUE_CR_BIF_CAT_BASE_INDEX_TA_CLRMSK 0xFFFFFFFFFFFFFFF8ULL
+
+/* Register ROGUE_CR_BIF_PM_CAT_BASE_VCE0 */
+#define ROGUE_CR_BIF_PM_CAT_BASE_VCE0 0x1248U
+#define ROGUE_CR_BIF_PM_CAT_BASE_VCE0_MASKFULL 0x0FFFFFFFFFFFF003ULL
+#define ROGUE_CR_BIF_PM_CAT_BASE_VCE0_INIT_PAGE_SHIFT 40U
+#define ROGUE_CR_BIF_PM_CAT_BASE_VCE0_INIT_PAGE_CLRMSK 0xF00000FFFFFFFFFFULL
+#define ROGUE_CR_BIF_PM_CAT_BASE_VCE0_ADDR_SHIFT 12U
+#define ROGUE_CR_BIF_PM_CAT_BASE_VCE0_ADDR_CLRMSK 0xFFFFFF0000000FFFULL
+#define ROGUE_CR_BIF_PM_CAT_BASE_VCE0_WRAP_SHIFT 1U
+#define ROGUE_CR_BIF_PM_CAT_BASE_VCE0_WRAP_CLRMSK 0xFFFFFFFFFFFFFFFDULL
+#define ROGUE_CR_BIF_PM_CAT_BASE_VCE0_WRAP_EN 0x0000000000000002ULL
+#define ROGUE_CR_BIF_PM_CAT_BASE_VCE0_VALID_SHIFT 0U
+#define ROGUE_CR_BIF_PM_CAT_BASE_VCE0_VALID_CLRMSK 0xFFFFFFFFFFFFFFFEULL
+#define ROGUE_CR_BIF_PM_CAT_BASE_VCE0_VALID_EN 0x0000000000000001ULL
+
+/* Register ROGUE_CR_BIF_PM_CAT_BASE_TE0 */
+#define ROGUE_CR_BIF_PM_CAT_BASE_TE0 0x1250U
+#define ROGUE_CR_BIF_PM_CAT_BASE_TE0_MASKFULL 0x0FFFFFFFFFFFF003ULL
+#define ROGUE_CR_BIF_PM_CAT_BASE_TE0_INIT_PAGE_SHIFT 40U
+#define ROGUE_CR_BIF_PM_CAT_BASE_TE0_INIT_PAGE_CLRMSK 0xF00000FFFFFFFFFFULL
+#define ROGUE_CR_BIF_PM_CAT_BASE_TE0_ADDR_SHIFT 12U
+#define ROGUE_CR_BIF_PM_CAT_BASE_TE0_ADDR_CLRMSK 0xFFFFFF0000000FFFULL
+#define ROGUE_CR_BIF_PM_CAT_BASE_TE0_WRAP_SHIFT 1U
+#define ROGUE_CR_BIF_PM_CAT_BASE_TE0_WRAP_CLRMSK 0xFFFFFFFFFFFFFFFDULL
+#define ROGUE_CR_BIF_PM_CAT_BASE_TE0_WRAP_EN 0x0000000000000002ULL
+#define ROGUE_CR_BIF_PM_CAT_BASE_TE0_VALID_SHIFT 0U
+#define ROGUE_CR_BIF_PM_CAT_BASE_TE0_VALID_CLRMSK 0xFFFFFFFFFFFFFFFEULL
+#define ROGUE_CR_BIF_PM_CAT_BASE_TE0_VALID_EN 0x0000000000000001ULL
+
+/* Register ROGUE_CR_BIF_PM_CAT_BASE_ALIST0 */
+#define ROGUE_CR_BIF_PM_CAT_BASE_ALIST0 0x1260U
+#define ROGUE_CR_BIF_PM_CAT_BASE_ALIST0_MASKFULL 0x0FFFFFFFFFFFF003ULL
+#define ROGUE_CR_BIF_PM_CAT_BASE_ALIST0_INIT_PAGE_SHIFT 40U
+#define ROGUE_CR_BIF_PM_CAT_BASE_ALIST0_INIT_PAGE_CLRMSK 0xF00000FFFFFFFFFFULL
+#define ROGUE_CR_BIF_PM_CAT_BASE_ALIST0_ADDR_SHIFT 12U
+#define ROGUE_CR_BIF_PM_CAT_BASE_ALIST0_ADDR_CLRMSK 0xFFFFFF0000000FFFULL
+#define ROGUE_CR_BIF_PM_CAT_BASE_ALIST0_WRAP_SHIFT 1U
+#define ROGUE_CR_BIF_PM_CAT_BASE_ALIST0_WRAP_CLRMSK 0xFFFFFFFFFFFFFFFDULL
+#define ROGUE_CR_BIF_PM_CAT_BASE_ALIST0_WRAP_EN 0x0000000000000002ULL
+#define ROGUE_CR_BIF_PM_CAT_BASE_ALIST0_VALID_SHIFT 0U
+#define ROGUE_CR_BIF_PM_CAT_BASE_ALIST0_VALID_CLRMSK 0xFFFFFFFFFFFFFFFEULL
+#define ROGUE_CR_BIF_PM_CAT_BASE_ALIST0_VALID_EN 0x0000000000000001ULL
+
+/* Register ROGUE_CR_BIF_PM_CAT_BASE_VCE1 */
+#define ROGUE_CR_BIF_PM_CAT_BASE_VCE1 0x1268U
+#define ROGUE_CR_BIF_PM_CAT_BASE_VCE1_MASKFULL 0x0FFFFFFFFFFFF003ULL
+#define ROGUE_CR_BIF_PM_CAT_BASE_VCE1_INIT_PAGE_SHIFT 40U
+#define ROGUE_CR_BIF_PM_CAT_BASE_VCE1_INIT_PAGE_CLRMSK 0xF00000FFFFFFFFFFULL
+#define ROGUE_CR_BIF_PM_CAT_BASE_VCE1_ADDR_SHIFT 12U
+#define ROGUE_CR_BIF_PM_CAT_BASE_VCE1_ADDR_CLRMSK 0xFFFFFF0000000FFFULL
+#define ROGUE_CR_BIF_PM_CAT_BASE_VCE1_WRAP_SHIFT 1U
+#define ROGUE_CR_BIF_PM_CAT_BASE_VCE1_WRAP_CLRMSK 0xFFFFFFFFFFFFFFFDULL
+#define ROGUE_CR_BIF_PM_CAT_BASE_VCE1_WRAP_EN 0x0000000000000002ULL
+#define ROGUE_CR_BIF_PM_CAT_BASE_VCE1_VALID_SHIFT 0U
+#define ROGUE_CR_BIF_PM_CAT_BASE_VCE1_VALID_CLRMSK 0xFFFFFFFFFFFFFFFEULL
+#define ROGUE_CR_BIF_PM_CAT_BASE_VCE1_VALID_EN 0x0000000000000001ULL
+
+/* Register ROGUE_CR_BIF_PM_CAT_BASE_TE1 */
+#define ROGUE_CR_BIF_PM_CAT_BASE_TE1 0x1270U
+#define ROGUE_CR_BIF_PM_CAT_BASE_TE1_MASKFULL 0x0FFFFFFFFFFFF003ULL
+#define ROGUE_CR_BIF_PM_CAT_BASE_TE1_INIT_PAGE_SHIFT 40U
+#define ROGUE_CR_BIF_PM_CAT_BASE_TE1_INIT_PAGE_CLRMSK 0xF00000FFFFFFFFFFULL
+#define ROGUE_CR_BIF_PM_CAT_BASE_TE1_ADDR_SHIFT 12U
+#define ROGUE_CR_BIF_PM_CAT_BASE_TE1_ADDR_CLRMSK 0xFFFFFF0000000FFFULL
+#define ROGUE_CR_BIF_PM_CAT_BASE_TE1_WRAP_SHIFT 1U
+#define ROGUE_CR_BIF_PM_CAT_BASE_TE1_WRAP_CLRMSK 0xFFFFFFFFFFFFFFFDULL
+#define ROGUE_CR_BIF_PM_CAT_BASE_TE1_WRAP_EN 0x0000000000000002ULL
+#define ROGUE_CR_BIF_PM_CAT_BASE_TE1_VALID_SHIFT 0U
+#define ROGUE_CR_BIF_PM_CAT_BASE_TE1_VALID_CLRMSK 0xFFFFFFFFFFFFFFFEULL
+#define ROGUE_CR_BIF_PM_CAT_BASE_TE1_VALID_EN 0x0000000000000001ULL
+
+/* Register ROGUE_CR_BIF_PM_CAT_BASE_ALIST1 */
+#define ROGUE_CR_BIF_PM_CAT_BASE_ALIST1 0x1280U
+#define ROGUE_CR_BIF_PM_CAT_BASE_ALIST1_MASKFULL 0x0FFFFFFFFFFFF003ULL
+#define ROGUE_CR_BIF_PM_CAT_BASE_ALIST1_INIT_PAGE_SHIFT 40U
+#define ROGUE_CR_BIF_PM_CAT_BASE_ALIST1_INIT_PAGE_CLRMSK 0xF00000FFFFFFFFFFULL
+#define ROGUE_CR_BIF_PM_CAT_BASE_ALIST1_ADDR_SHIFT 12U
+#define ROGUE_CR_BIF_PM_CAT_BASE_ALIST1_ADDR_CLRMSK 0xFFFFFF0000000FFFULL
+#define ROGUE_CR_BIF_PM_CAT_BASE_ALIST1_WRAP_SHIFT 1U
+#define ROGUE_CR_BIF_PM_CAT_BASE_ALIST1_WRAP_CLRMSK 0xFFFFFFFFFFFFFFFDULL
+#define ROGUE_CR_BIF_PM_CAT_BASE_ALIST1_WRAP_EN 0x0000000000000002ULL
+#define ROGUE_CR_BIF_PM_CAT_BASE_ALIST1_VALID_SHIFT 0U
+#define ROGUE_CR_BIF_PM_CAT_BASE_ALIST1_VALID_CLRMSK 0xFFFFFFFFFFFFFFFEULL
+#define ROGUE_CR_BIF_PM_CAT_BASE_ALIST1_VALID_EN 0x0000000000000001ULL
+
+/* Register ROGUE_CR_BIF_MMU_ENTRY_STATUS */
+#define ROGUE_CR_BIF_MMU_ENTRY_STATUS 0x1288U
+#define ROGUE_CR_BIF_MMU_ENTRY_STATUS_MASKFULL 0x000000FFFFFFF0F3ULL
+#define ROGUE_CR_BIF_MMU_ENTRY_STATUS_ADDRESS_SHIFT 12U
+#define ROGUE_CR_BIF_MMU_ENTRY_STATUS_ADDRESS_CLRMSK 0xFFFFFF0000000FFFULL
+#define ROGUE_CR_BIF_MMU_ENTRY_STATUS_CAT_BASE_SHIFT 4U
+#define ROGUE_CR_BIF_MMU_ENTRY_STATUS_CAT_BASE_CLRMSK 0xFFFFFFFFFFFFFF0FULL
+#define ROGUE_CR_BIF_MMU_ENTRY_STATUS_DATA_TYPE_SHIFT 0U
+#define ROGUE_CR_BIF_MMU_ENTRY_STATUS_DATA_TYPE_CLRMSK 0xFFFFFFFFFFFFFFFCULL
+
+/* Register ROGUE_CR_BIF_MMU_ENTRY */
+#define ROGUE_CR_BIF_MMU_ENTRY 0x1290U
+#define ROGUE_CR_BIF_MMU_ENTRY_MASKFULL 0x0000000000000003ULL
+#define ROGUE_CR_BIF_MMU_ENTRY_ENABLE_SHIFT 1U
+#define ROGUE_CR_BIF_MMU_ENTRY_ENABLE_CLRMSK 0xFFFFFFFDU
+#define ROGUE_CR_BIF_MMU_ENTRY_ENABLE_EN 0x00000002U
+#define ROGUE_CR_BIF_MMU_ENTRY_PENDING_SHIFT 0U
+#define ROGUE_CR_BIF_MMU_ENTRY_PENDING_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_BIF_MMU_ENTRY_PENDING_EN 0x00000001U
+
+/* Register ROGUE_CR_BIF_CTRL_INVAL */
+#define ROGUE_CR_BIF_CTRL_INVAL 0x12A0U
+#define ROGUE_CR_BIF_CTRL_INVAL_MASKFULL 0x000000000000000FULL
+#define ROGUE_CR_BIF_CTRL_INVAL_TLB1_SHIFT 3U
+#define ROGUE_CR_BIF_CTRL_INVAL_TLB1_CLRMSK 0xFFFFFFF7U
+#define ROGUE_CR_BIF_CTRL_INVAL_TLB1_EN 0x00000008U
+#define ROGUE_CR_BIF_CTRL_INVAL_PC_SHIFT 2U
+#define ROGUE_CR_BIF_CTRL_INVAL_PC_CLRMSK 0xFFFFFFFBU
+#define ROGUE_CR_BIF_CTRL_INVAL_PC_EN 0x00000004U
+#define ROGUE_CR_BIF_CTRL_INVAL_PD_SHIFT 1U
+#define ROGUE_CR_BIF_CTRL_INVAL_PD_CLRMSK 0xFFFFFFFDU
+#define ROGUE_CR_BIF_CTRL_INVAL_PD_EN 0x00000002U
+#define ROGUE_CR_BIF_CTRL_INVAL_PT_SHIFT 0U
+#define ROGUE_CR_BIF_CTRL_INVAL_PT_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_BIF_CTRL_INVAL_PT_EN 0x00000001U
+
+/* Register ROGUE_CR_BIF_CTRL */
+#define ROGUE_CR_BIF_CTRL 0x12A8U
+#define ROGUE_CR_BIF_CTRL__XE_MEM__MASKFULL 0x000000000000033FULL
+#define ROGUE_CR_BIF_CTRL_MASKFULL 0x00000000000000FFULL
+#define ROGUE_CR_BIF_CTRL_PAUSE_MMU_CPU_SHIFT 9U
+#define ROGUE_CR_BIF_CTRL_PAUSE_MMU_CPU_CLRMSK 0xFFFFFDFFU
+#define ROGUE_CR_BIF_CTRL_PAUSE_MMU_CPU_EN 0x00000200U
+#define ROGUE_CR_BIF_CTRL_PAUSE_MMU_BIF4_SHIFT 8U
+#define ROGUE_CR_BIF_CTRL_PAUSE_MMU_BIF4_CLRMSK 0xFFFFFEFFU
+#define ROGUE_CR_BIF_CTRL_PAUSE_MMU_BIF4_EN 0x00000100U
+#define ROGUE_CR_BIF_CTRL_ENABLE_MMU_QUEUE_BYPASS_SHIFT 7U
+#define ROGUE_CR_BIF_CTRL_ENABLE_MMU_QUEUE_BYPASS_CLRMSK 0xFFFFFF7FU
+#define ROGUE_CR_BIF_CTRL_ENABLE_MMU_QUEUE_BYPASS_EN 0x00000080U
+#define ROGUE_CR_BIF_CTRL_ENABLE_MMU_AUTO_PREFETCH_SHIFT 6U
+#define ROGUE_CR_BIF_CTRL_ENABLE_MMU_AUTO_PREFETCH_CLRMSK 0xFFFFFFBFU
+#define ROGUE_CR_BIF_CTRL_ENABLE_MMU_AUTO_PREFETCH_EN 0x00000040U
+#define ROGUE_CR_BIF_CTRL_PAUSE_MMU_BIF3_SHIFT 5U
+#define ROGUE_CR_BIF_CTRL_PAUSE_MMU_BIF3_CLRMSK 0xFFFFFFDFU
+#define ROGUE_CR_BIF_CTRL_PAUSE_MMU_BIF3_EN 0x00000020U
+#define ROGUE_CR_BIF_CTRL_PAUSE_MMU_BIF2_SHIFT 4U
+#define ROGUE_CR_BIF_CTRL_PAUSE_MMU_BIF2_CLRMSK 0xFFFFFFEFU
+#define ROGUE_CR_BIF_CTRL_PAUSE_MMU_BIF2_EN 0x00000010U
+#define ROGUE_CR_BIF_CTRL_PAUSE_BIF1_SHIFT 3U
+#define ROGUE_CR_BIF_CTRL_PAUSE_BIF1_CLRMSK 0xFFFFFFF7U
+#define ROGUE_CR_BIF_CTRL_PAUSE_BIF1_EN 0x00000008U
+#define ROGUE_CR_BIF_CTRL_PAUSE_MMU_PM_SHIFT 2U
+#define ROGUE_CR_BIF_CTRL_PAUSE_MMU_PM_CLRMSK 0xFFFFFFFBU
+#define ROGUE_CR_BIF_CTRL_PAUSE_MMU_PM_EN 0x00000004U
+#define ROGUE_CR_BIF_CTRL_PAUSE_MMU_BIF1_SHIFT 1U
+#define ROGUE_CR_BIF_CTRL_PAUSE_MMU_BIF1_CLRMSK 0xFFFFFFFDU
+#define ROGUE_CR_BIF_CTRL_PAUSE_MMU_BIF1_EN 0x00000002U
+#define ROGUE_CR_BIF_CTRL_PAUSE_MMU_BIF0_SHIFT 0U
+#define ROGUE_CR_BIF_CTRL_PAUSE_MMU_BIF0_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_BIF_CTRL_PAUSE_MMU_BIF0_EN 0x00000001U
+
+/* Register ROGUE_CR_BIF_FAULT_BANK0_MMU_STATUS */
+#define ROGUE_CR_BIF_FAULT_BANK0_MMU_STATUS 0x12B0U
+#define ROGUE_CR_BIF_FAULT_BANK0_MMU_STATUS_MASKFULL 0x000000000000F775ULL
+#define ROGUE_CR_BIF_FAULT_BANK0_MMU_STATUS_CAT_BASE_SHIFT 12U
+#define ROGUE_CR_BIF_FAULT_BANK0_MMU_STATUS_CAT_BASE_CLRMSK 0xFFFF0FFFU
+#define ROGUE_CR_BIF_FAULT_BANK0_MMU_STATUS_PAGE_SIZE_SHIFT 8U
+#define ROGUE_CR_BIF_FAULT_BANK0_MMU_STATUS_PAGE_SIZE_CLRMSK 0xFFFFF8FFU
+#define ROGUE_CR_BIF_FAULT_BANK0_MMU_STATUS_DATA_TYPE_SHIFT 5U
+#define ROGUE_CR_BIF_FAULT_BANK0_MMU_STATUS_DATA_TYPE_CLRMSK 0xFFFFFF9FU
+#define ROGUE_CR_BIF_FAULT_BANK0_MMU_STATUS_FAULT_RO_SHIFT 4U
+#define ROGUE_CR_BIF_FAULT_BANK0_MMU_STATUS_FAULT_RO_CLRMSK 0xFFFFFFEFU
+#define ROGUE_CR_BIF_FAULT_BANK0_MMU_STATUS_FAULT_RO_EN 0x00000010U
+#define ROGUE_CR_BIF_FAULT_BANK0_MMU_STATUS_FAULT_PM_META_RO_SHIFT 2U
+#define ROGUE_CR_BIF_FAULT_BANK0_MMU_STATUS_FAULT_PM_META_RO_CLRMSK 0xFFFFFFFBU
+#define ROGUE_CR_BIF_FAULT_BANK0_MMU_STATUS_FAULT_PM_META_RO_EN 0x00000004U
+#define ROGUE_CR_BIF_FAULT_BANK0_MMU_STATUS_FAULT_SHIFT 0U
+#define ROGUE_CR_BIF_FAULT_BANK0_MMU_STATUS_FAULT_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_BIF_FAULT_BANK0_MMU_STATUS_FAULT_EN 0x00000001U
+
+/* Register ROGUE_CR_BIF_FAULT_BANK0_REQ_STATUS */
+#define ROGUE_CR_BIF_FAULT_BANK0_REQ_STATUS 0x12B8U
+#define ROGUE_CR_BIF_FAULT_BANK0_REQ_STATUS__XE_MEM__MASKFULL 0x001FFFFFFFFFFFF0ULL
+#define ROGUE_CR_BIF_FAULT_BANK0_REQ_STATUS_MASKFULL 0x0007FFFFFFFFFFF0ULL
+#define ROGUE_CR_BIF_FAULT_BANK0_REQ_STATUS__XE_MEM__RNW_SHIFT 52U
+#define ROGUE_CR_BIF_FAULT_BANK0_REQ_STATUS__XE_MEM__RNW_CLRMSK 0xFFEFFFFFFFFFFFFFULL
+#define ROGUE_CR_BIF_FAULT_BANK0_REQ_STATUS__XE_MEM__RNW_EN 0x0010000000000000ULL
+#define ROGUE_CR_BIF_FAULT_BANK0_REQ_STATUS_RNW_SHIFT 50U
+#define ROGUE_CR_BIF_FAULT_BANK0_REQ_STATUS_RNW_CLRMSK 0xFFFBFFFFFFFFFFFFULL
+#define ROGUE_CR_BIF_FAULT_BANK0_REQ_STATUS_RNW_EN 0x0004000000000000ULL
+#define ROGUE_CR_BIF_FAULT_BANK0_REQ_STATUS__XE_MEM__TAG_SB_SHIFT 46U
+#define ROGUE_CR_BIF_FAULT_BANK0_REQ_STATUS__XE_MEM__TAG_SB_CLRMSK 0xFFF03FFFFFFFFFFFULL
+#define ROGUE_CR_BIF_FAULT_BANK0_REQ_STATUS_TAG_SB_SHIFT 44U
+#define ROGUE_CR_BIF_FAULT_BANK0_REQ_STATUS_TAG_SB_CLRMSK 0xFFFC0FFFFFFFFFFFULL
+#define ROGUE_CR_BIF_FAULT_BANK0_REQ_STATUS_TAG_ID_SHIFT 40U
+#define ROGUE_CR_BIF_FAULT_BANK0_REQ_STATUS_TAG_ID_CLRMSK 0xFFFFF0FFFFFFFFFFULL
+#define ROGUE_CR_BIF_FAULT_BANK0_REQ_STATUS__XE_MEM__TAG_ID_SHIFT 40U
+#define ROGUE_CR_BIF_FAULT_BANK0_REQ_STATUS__XE_MEM__TAG_ID_CLRMSK 0xFFFFC0FFFFFFFFFFULL
+#define ROGUE_CR_BIF_FAULT_BANK0_REQ_STATUS_ADDRESS_SHIFT 4U
+#define ROGUE_CR_BIF_FAULT_BANK0_REQ_STATUS_ADDRESS_CLRMSK 0xFFFFFF000000000FULL
+#define ROGUE_CR_BIF_FAULT_BANK0_REQ_STATUS_ADDRESS_ALIGNSHIFT 4U
+#define ROGUE_CR_BIF_FAULT_BANK0_REQ_STATUS_ADDRESS_ALIGNSIZE 16U
+
+/* Register ROGUE_CR_BIF_FAULT_BANK1_MMU_STATUS */
+#define ROGUE_CR_BIF_FAULT_BANK1_MMU_STATUS 0x12C0U
+#define ROGUE_CR_BIF_FAULT_BANK1_MMU_STATUS_MASKFULL 0x000000000000F775ULL
+#define ROGUE_CR_BIF_FAULT_BANK1_MMU_STATUS_CAT_BASE_SHIFT 12U
+#define ROGUE_CR_BIF_FAULT_BANK1_MMU_STATUS_CAT_BASE_CLRMSK 0xFFFF0FFFU
+#define ROGUE_CR_BIF_FAULT_BANK1_MMU_STATUS_PAGE_SIZE_SHIFT 8U
+#define ROGUE_CR_BIF_FAULT_BANK1_MMU_STATUS_PAGE_SIZE_CLRMSK 0xFFFFF8FFU
+#define ROGUE_CR_BIF_FAULT_BANK1_MMU_STATUS_DATA_TYPE_SHIFT 5U
+#define ROGUE_CR_BIF_FAULT_BANK1_MMU_STATUS_DATA_TYPE_CLRMSK 0xFFFFFF9FU
+#define ROGUE_CR_BIF_FAULT_BANK1_MMU_STATUS_FAULT_RO_SHIFT 4U
+#define ROGUE_CR_BIF_FAULT_BANK1_MMU_STATUS_FAULT_RO_CLRMSK 0xFFFFFFEFU
+#define ROGUE_CR_BIF_FAULT_BANK1_MMU_STATUS_FAULT_RO_EN 0x00000010U
+#define ROGUE_CR_BIF_FAULT_BANK1_MMU_STATUS_FAULT_PM_META_RO_SHIFT 2U
+#define ROGUE_CR_BIF_FAULT_BANK1_MMU_STATUS_FAULT_PM_META_RO_CLRMSK 0xFFFFFFFBU
+#define ROGUE_CR_BIF_FAULT_BANK1_MMU_STATUS_FAULT_PM_META_RO_EN 0x00000004U
+#define ROGUE_CR_BIF_FAULT_BANK1_MMU_STATUS_FAULT_SHIFT 0U
+#define ROGUE_CR_BIF_FAULT_BANK1_MMU_STATUS_FAULT_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_BIF_FAULT_BANK1_MMU_STATUS_FAULT_EN 0x00000001U
+
+/* Register ROGUE_CR_BIF_FAULT_BANK1_REQ_STATUS */
+#define ROGUE_CR_BIF_FAULT_BANK1_REQ_STATUS 0x12C8U
+#define ROGUE_CR_BIF_FAULT_BANK1_REQ_STATUS_MASKFULL 0x0007FFFFFFFFFFF0ULL
+#define ROGUE_CR_BIF_FAULT_BANK1_REQ_STATUS_RNW_SHIFT 50U
+#define ROGUE_CR_BIF_FAULT_BANK1_REQ_STATUS_RNW_CLRMSK 0xFFFBFFFFFFFFFFFFULL
+#define ROGUE_CR_BIF_FAULT_BANK1_REQ_STATUS_RNW_EN 0x0004000000000000ULL
+#define ROGUE_CR_BIF_FAULT_BANK1_REQ_STATUS_TAG_SB_SHIFT 44U
+#define ROGUE_CR_BIF_FAULT_BANK1_REQ_STATUS_TAG_SB_CLRMSK 0xFFFC0FFFFFFFFFFFULL
+#define ROGUE_CR_BIF_FAULT_BANK1_REQ_STATUS_TAG_ID_SHIFT 40U
+#define ROGUE_CR_BIF_FAULT_BANK1_REQ_STATUS_TAG_ID_CLRMSK 0xFFFFF0FFFFFFFFFFULL
+#define ROGUE_CR_BIF_FAULT_BANK1_REQ_STATUS_ADDRESS_SHIFT 4U
+#define ROGUE_CR_BIF_FAULT_BANK1_REQ_STATUS_ADDRESS_CLRMSK 0xFFFFFF000000000FULL
+#define ROGUE_CR_BIF_FAULT_BANK1_REQ_STATUS_ADDRESS_ALIGNSHIFT 4U
+#define ROGUE_CR_BIF_FAULT_BANK1_REQ_STATUS_ADDRESS_ALIGNSIZE 16U
+
+/* Register ROGUE_CR_BIF_MMU_STATUS */
+#define ROGUE_CR_BIF_MMU_STATUS 0x12D0U
+#define ROGUE_CR_BIF_MMU_STATUS__XE_MEM__MASKFULL 0x000000001FFFFFF7ULL
+#define ROGUE_CR_BIF_MMU_STATUS_MASKFULL 0x000000001FFFFFF7ULL
+#define ROGUE_CR_BIF_MMU_STATUS_PM_FAULT_SHIFT 28U
+#define ROGUE_CR_BIF_MMU_STATUS_PM_FAULT_CLRMSK 0xEFFFFFFFU
+#define ROGUE_CR_BIF_MMU_STATUS_PM_FAULT_EN 0x10000000U
+#define ROGUE_CR_BIF_MMU_STATUS_PC_DATA_SHIFT 20U
+#define ROGUE_CR_BIF_MMU_STATUS_PC_DATA_CLRMSK 0xF00FFFFFU
+#define ROGUE_CR_BIF_MMU_STATUS_PD_DATA_SHIFT 12U
+#define ROGUE_CR_BIF_MMU_STATUS_PD_DATA_CLRMSK 0xFFF00FFFU
+#define ROGUE_CR_BIF_MMU_STATUS_PT_DATA_SHIFT 4U
+#define ROGUE_CR_BIF_MMU_STATUS_PT_DATA_CLRMSK 0xFFFFF00FU
+#define ROGUE_CR_BIF_MMU_STATUS_STALLED_SHIFT 2U
+#define ROGUE_CR_BIF_MMU_STATUS_STALLED_CLRMSK 0xFFFFFFFBU
+#define ROGUE_CR_BIF_MMU_STATUS_STALLED_EN 0x00000004U
+#define ROGUE_CR_BIF_MMU_STATUS_PAUSED_SHIFT 1U
+#define ROGUE_CR_BIF_MMU_STATUS_PAUSED_CLRMSK 0xFFFFFFFDU
+#define ROGUE_CR_BIF_MMU_STATUS_PAUSED_EN 0x00000002U
+#define ROGUE_CR_BIF_MMU_STATUS_BUSY_SHIFT 0U
+#define ROGUE_CR_BIF_MMU_STATUS_BUSY_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_BIF_MMU_STATUS_BUSY_EN 0x00000001U
+
+/* Register group: ROGUE_CR_BIF_TILING_CFG, with 8 repeats */
+#define ROGUE_CR_BIF_TILING_CFG_REPEATCOUNT 8U
+/* Register ROGUE_CR_BIF_TILING_CFG0 */
+#define ROGUE_CR_BIF_TILING_CFG0 0x12D8U
+#define ROGUE_CR_BIF_TILING_CFG0_MASKFULL 0xFFFFFFFF0FFFFFFFULL
+#define ROGUE_CR_BIF_TILING_CFG0_XSTRIDE_SHIFT 61U
+#define ROGUE_CR_BIF_TILING_CFG0_XSTRIDE_CLRMSK 0x1FFFFFFFFFFFFFFFULL
+#define ROGUE_CR_BIF_TILING_CFG0_ENABLE_SHIFT 60U
+#define ROGUE_CR_BIF_TILING_CFG0_ENABLE_CLRMSK 0xEFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_BIF_TILING_CFG0_ENABLE_EN 0x1000000000000000ULL
+#define ROGUE_CR_BIF_TILING_CFG0_MAX_ADDRESS_SHIFT 32U
+#define ROGUE_CR_BIF_TILING_CFG0_MAX_ADDRESS_CLRMSK 0xF0000000FFFFFFFFULL
+#define ROGUE_CR_BIF_TILING_CFG0_MAX_ADDRESS_ALIGNSHIFT 12U
+#define ROGUE_CR_BIF_TILING_CFG0_MAX_ADDRESS_ALIGNSIZE 4096U
+#define ROGUE_CR_BIF_TILING_CFG0_MIN_ADDRESS_SHIFT 0U
+#define ROGUE_CR_BIF_TILING_CFG0_MIN_ADDRESS_CLRMSK 0xFFFFFFFFF0000000ULL
+#define ROGUE_CR_BIF_TILING_CFG0_MIN_ADDRESS_ALIGNSHIFT 12U
+#define ROGUE_CR_BIF_TILING_CFG0_MIN_ADDRESS_ALIGNSIZE 4096U
+
+/* Register ROGUE_CR_BIF_TILING_CFG1 */
+#define ROGUE_CR_BIF_TILING_CFG1 0x12E0U
+#define ROGUE_CR_BIF_TILING_CFG1_MASKFULL 0xFFFFFFFF0FFFFFFFULL
+#define ROGUE_CR_BIF_TILING_CFG1_XSTRIDE_SHIFT 61U
+#define ROGUE_CR_BIF_TILING_CFG1_XSTRIDE_CLRMSK 0x1FFFFFFFFFFFFFFFULL
+#define ROGUE_CR_BIF_TILING_CFG1_ENABLE_SHIFT 60U
+#define ROGUE_CR_BIF_TILING_CFG1_ENABLE_CLRMSK 0xEFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_BIF_TILING_CFG1_ENABLE_EN 0x1000000000000000ULL
+#define ROGUE_CR_BIF_TILING_CFG1_MAX_ADDRESS_SHIFT 32U
+#define ROGUE_CR_BIF_TILING_CFG1_MAX_ADDRESS_CLRMSK 0xF0000000FFFFFFFFULL
+#define ROGUE_CR_BIF_TILING_CFG1_MAX_ADDRESS_ALIGNSHIFT 12U
+#define ROGUE_CR_BIF_TILING_CFG1_MAX_ADDRESS_ALIGNSIZE 4096U
+#define ROGUE_CR_BIF_TILING_CFG1_MIN_ADDRESS_SHIFT 0U
+#define ROGUE_CR_BIF_TILING_CFG1_MIN_ADDRESS_CLRMSK 0xFFFFFFFFF0000000ULL
+#define ROGUE_CR_BIF_TILING_CFG1_MIN_ADDRESS_ALIGNSHIFT 12U
+#define ROGUE_CR_BIF_TILING_CFG1_MIN_ADDRESS_ALIGNSIZE 4096U
+
+/* Register ROGUE_CR_BIF_TILING_CFG2 */
+#define ROGUE_CR_BIF_TILING_CFG2 0x12E8U
+#define ROGUE_CR_BIF_TILING_CFG2_MASKFULL 0xFFFFFFFF0FFFFFFFULL
+#define ROGUE_CR_BIF_TILING_CFG2_XSTRIDE_SHIFT 61U
+#define ROGUE_CR_BIF_TILING_CFG2_XSTRIDE_CLRMSK 0x1FFFFFFFFFFFFFFFULL
+#define ROGUE_CR_BIF_TILING_CFG2_ENABLE_SHIFT 60U
+#define ROGUE_CR_BIF_TILING_CFG2_ENABLE_CLRMSK 0xEFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_BIF_TILING_CFG2_ENABLE_EN 0x1000000000000000ULL
+#define ROGUE_CR_BIF_TILING_CFG2_MAX_ADDRESS_SHIFT 32U
+#define ROGUE_CR_BIF_TILING_CFG2_MAX_ADDRESS_CLRMSK 0xF0000000FFFFFFFFULL
+#define ROGUE_CR_BIF_TILING_CFG2_MAX_ADDRESS_ALIGNSHIFT 12U
+#define ROGUE_CR_BIF_TILING_CFG2_MAX_ADDRESS_ALIGNSIZE 4096U
+#define ROGUE_CR_BIF_TILING_CFG2_MIN_ADDRESS_SHIFT 0U
+#define ROGUE_CR_BIF_TILING_CFG2_MIN_ADDRESS_CLRMSK 0xFFFFFFFFF0000000ULL
+#define ROGUE_CR_BIF_TILING_CFG2_MIN_ADDRESS_ALIGNSHIFT 12U
+#define ROGUE_CR_BIF_TILING_CFG2_MIN_ADDRESS_ALIGNSIZE 4096U
+
+/* Register ROGUE_CR_BIF_TILING_CFG3 */
+#define ROGUE_CR_BIF_TILING_CFG3 0x12F0U
+#define ROGUE_CR_BIF_TILING_CFG3_MASKFULL 0xFFFFFFFF0FFFFFFFULL
+#define ROGUE_CR_BIF_TILING_CFG3_XSTRIDE_SHIFT 61U
+#define ROGUE_CR_BIF_TILING_CFG3_XSTRIDE_CLRMSK 0x1FFFFFFFFFFFFFFFULL
+#define ROGUE_CR_BIF_TILING_CFG3_ENABLE_SHIFT 60U
+#define ROGUE_CR_BIF_TILING_CFG3_ENABLE_CLRMSK 0xEFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_BIF_TILING_CFG3_ENABLE_EN 0x1000000000000000ULL
+#define ROGUE_CR_BIF_TILING_CFG3_MAX_ADDRESS_SHIFT 32U
+#define ROGUE_CR_BIF_TILING_CFG3_MAX_ADDRESS_CLRMSK 0xF0000000FFFFFFFFULL
+#define ROGUE_CR_BIF_TILING_CFG3_MAX_ADDRESS_ALIGNSHIFT 12U
+#define ROGUE_CR_BIF_TILING_CFG3_MAX_ADDRESS_ALIGNSIZE 4096U
+#define ROGUE_CR_BIF_TILING_CFG3_MIN_ADDRESS_SHIFT 0U
+#define ROGUE_CR_BIF_TILING_CFG3_MIN_ADDRESS_CLRMSK 0xFFFFFFFFF0000000ULL
+#define ROGUE_CR_BIF_TILING_CFG3_MIN_ADDRESS_ALIGNSHIFT 12U
+#define ROGUE_CR_BIF_TILING_CFG3_MIN_ADDRESS_ALIGNSIZE 4096U
+
+/* Register ROGUE_CR_BIF_TILING_CFG4 */
+#define ROGUE_CR_BIF_TILING_CFG4 0x12F8U
+#define ROGUE_CR_BIF_TILING_CFG4_MASKFULL 0xFFFFFFFF0FFFFFFFULL
+#define ROGUE_CR_BIF_TILING_CFG4_XSTRIDE_SHIFT 61U
+#define ROGUE_CR_BIF_TILING_CFG4_XSTRIDE_CLRMSK 0x1FFFFFFFFFFFFFFFULL
+#define ROGUE_CR_BIF_TILING_CFG4_ENABLE_SHIFT 60U
+#define ROGUE_CR_BIF_TILING_CFG4_ENABLE_CLRMSK 0xEFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_BIF_TILING_CFG4_ENABLE_EN 0x1000000000000000ULL
+#define ROGUE_CR_BIF_TILING_CFG4_MAX_ADDRESS_SHIFT 32U
+#define ROGUE_CR_BIF_TILING_CFG4_MAX_ADDRESS_CLRMSK 0xF0000000FFFFFFFFULL
+#define ROGUE_CR_BIF_TILING_CFG4_MAX_ADDRESS_ALIGNSHIFT 12U
+#define ROGUE_CR_BIF_TILING_CFG4_MAX_ADDRESS_ALIGNSIZE 4096U
+#define ROGUE_CR_BIF_TILING_CFG4_MIN_ADDRESS_SHIFT 0U
+#define ROGUE_CR_BIF_TILING_CFG4_MIN_ADDRESS_CLRMSK 0xFFFFFFFFF0000000ULL
+#define ROGUE_CR_BIF_TILING_CFG4_MIN_ADDRESS_ALIGNSHIFT 12U
+#define ROGUE_CR_BIF_TILING_CFG4_MIN_ADDRESS_ALIGNSIZE 4096U
+
+/* Register ROGUE_CR_BIF_TILING_CFG5 */
+#define ROGUE_CR_BIF_TILING_CFG5 0x1300U
+#define ROGUE_CR_BIF_TILING_CFG5_MASKFULL 0xFFFFFFFF0FFFFFFFULL
+#define ROGUE_CR_BIF_TILING_CFG5_XSTRIDE_SHIFT 61U
+#define ROGUE_CR_BIF_TILING_CFG5_XSTRIDE_CLRMSK 0x1FFFFFFFFFFFFFFFULL
+#define ROGUE_CR_BIF_TILING_CFG5_ENABLE_SHIFT 60U
+#define ROGUE_CR_BIF_TILING_CFG5_ENABLE_CLRMSK 0xEFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_BIF_TILING_CFG5_ENABLE_EN 0x1000000000000000ULL
+#define ROGUE_CR_BIF_TILING_CFG5_MAX_ADDRESS_SHIFT 32U
+#define ROGUE_CR_BIF_TILING_CFG5_MAX_ADDRESS_CLRMSK 0xF0000000FFFFFFFFULL
+#define ROGUE_CR_BIF_TILING_CFG5_MAX_ADDRESS_ALIGNSHIFT 12U
+#define ROGUE_CR_BIF_TILING_CFG5_MAX_ADDRESS_ALIGNSIZE 4096U
+#define ROGUE_CR_BIF_TILING_CFG5_MIN_ADDRESS_SHIFT 0U
+#define ROGUE_CR_BIF_TILING_CFG5_MIN_ADDRESS_CLRMSK 0xFFFFFFFFF0000000ULL
+#define ROGUE_CR_BIF_TILING_CFG5_MIN_ADDRESS_ALIGNSHIFT 12U
+#define ROGUE_CR_BIF_TILING_CFG5_MIN_ADDRESS_ALIGNSIZE 4096U
+
+/* Register ROGUE_CR_BIF_TILING_CFG6 */
+#define ROGUE_CR_BIF_TILING_CFG6 0x1308U
+#define ROGUE_CR_BIF_TILING_CFG6_MASKFULL 0xFFFFFFFF0FFFFFFFULL
+#define ROGUE_CR_BIF_TILING_CFG6_XSTRIDE_SHIFT 61U
+#define ROGUE_CR_BIF_TILING_CFG6_XSTRIDE_CLRMSK 0x1FFFFFFFFFFFFFFFULL
+#define ROGUE_CR_BIF_TILING_CFG6_ENABLE_SHIFT 60U
+#define ROGUE_CR_BIF_TILING_CFG6_ENABLE_CLRMSK 0xEFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_BIF_TILING_CFG6_ENABLE_EN 0x1000000000000000ULL
+#define ROGUE_CR_BIF_TILING_CFG6_MAX_ADDRESS_SHIFT 32U
+#define ROGUE_CR_BIF_TILING_CFG6_MAX_ADDRESS_CLRMSK 0xF0000000FFFFFFFFULL
+#define ROGUE_CR_BIF_TILING_CFG6_MAX_ADDRESS_ALIGNSHIFT 12U
+#define ROGUE_CR_BIF_TILING_CFG6_MAX_ADDRESS_ALIGNSIZE 4096U
+#define ROGUE_CR_BIF_TILING_CFG6_MIN_ADDRESS_SHIFT 0U
+#define ROGUE_CR_BIF_TILING_CFG6_MIN_ADDRESS_CLRMSK 0xFFFFFFFFF0000000ULL
+#define ROGUE_CR_BIF_TILING_CFG6_MIN_ADDRESS_ALIGNSHIFT 12U
+#define ROGUE_CR_BIF_TILING_CFG6_MIN_ADDRESS_ALIGNSIZE 4096U
+
+/* Register ROGUE_CR_BIF_TILING_CFG7 */
+#define ROGUE_CR_BIF_TILING_CFG7 0x1310U
+#define ROGUE_CR_BIF_TILING_CFG7_MASKFULL 0xFFFFFFFF0FFFFFFFULL
+#define ROGUE_CR_BIF_TILING_CFG7_XSTRIDE_SHIFT 61U
+#define ROGUE_CR_BIF_TILING_CFG7_XSTRIDE_CLRMSK 0x1FFFFFFFFFFFFFFFULL
+#define ROGUE_CR_BIF_TILING_CFG7_ENABLE_SHIFT 60U
+#define ROGUE_CR_BIF_TILING_CFG7_ENABLE_CLRMSK 0xEFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_BIF_TILING_CFG7_ENABLE_EN 0x1000000000000000ULL
+#define ROGUE_CR_BIF_TILING_CFG7_MAX_ADDRESS_SHIFT 32U
+#define ROGUE_CR_BIF_TILING_CFG7_MAX_ADDRESS_CLRMSK 0xF0000000FFFFFFFFULL
+#define ROGUE_CR_BIF_TILING_CFG7_MAX_ADDRESS_ALIGNSHIFT 12U
+#define ROGUE_CR_BIF_TILING_CFG7_MAX_ADDRESS_ALIGNSIZE 4096U
+#define ROGUE_CR_BIF_TILING_CFG7_MIN_ADDRESS_SHIFT 0U
+#define ROGUE_CR_BIF_TILING_CFG7_MIN_ADDRESS_CLRMSK 0xFFFFFFFFF0000000ULL
+#define ROGUE_CR_BIF_TILING_CFG7_MIN_ADDRESS_ALIGNSHIFT 12U
+#define ROGUE_CR_BIF_TILING_CFG7_MIN_ADDRESS_ALIGNSIZE 4096U
+
+/* Register ROGUE_CR_BIF_READS_EXT_STATUS */
+#define ROGUE_CR_BIF_READS_EXT_STATUS 0x1320U
+#define ROGUE_CR_BIF_READS_EXT_STATUS_MASKFULL 0x000000000FFFFFFFULL
+#define ROGUE_CR_BIF_READS_EXT_STATUS_MMU_SHIFT 16U
+#define ROGUE_CR_BIF_READS_EXT_STATUS_MMU_CLRMSK 0xF000FFFFU
+#define ROGUE_CR_BIF_READS_EXT_STATUS_BANK1_SHIFT 0U
+#define ROGUE_CR_BIF_READS_EXT_STATUS_BANK1_CLRMSK 0xFFFF0000U
+
+/* Register ROGUE_CR_BIF_READS_INT_STATUS */
+#define ROGUE_CR_BIF_READS_INT_STATUS 0x1328U
+#define ROGUE_CR_BIF_READS_INT_STATUS_MASKFULL 0x0000000007FFFFFFULL
+#define ROGUE_CR_BIF_READS_INT_STATUS_MMU_SHIFT 16U
+#define ROGUE_CR_BIF_READS_INT_STATUS_MMU_CLRMSK 0xF800FFFFU
+#define ROGUE_CR_BIF_READS_INT_STATUS_BANK1_SHIFT 0U
+#define ROGUE_CR_BIF_READS_INT_STATUS_BANK1_CLRMSK 0xFFFF0000U
+
+/* Register ROGUE_CR_BIFPM_READS_INT_STATUS */
+#define ROGUE_CR_BIFPM_READS_INT_STATUS 0x1330U
+#define ROGUE_CR_BIFPM_READS_INT_STATUS_MASKFULL 0x000000000000FFFFULL
+#define ROGUE_CR_BIFPM_READS_INT_STATUS_BANK0_SHIFT 0U
+#define ROGUE_CR_BIFPM_READS_INT_STATUS_BANK0_CLRMSK 0xFFFF0000U
+
+/* Register ROGUE_CR_BIFPM_READS_EXT_STATUS */
+#define ROGUE_CR_BIFPM_READS_EXT_STATUS 0x1338U
+#define ROGUE_CR_BIFPM_READS_EXT_STATUS_MASKFULL 0x000000000000FFFFULL
+#define ROGUE_CR_BIFPM_READS_EXT_STATUS_BANK0_SHIFT 0U
+#define ROGUE_CR_BIFPM_READS_EXT_STATUS_BANK0_CLRMSK 0xFFFF0000U
+
+/* Register ROGUE_CR_BIFPM_STATUS_MMU */
+#define ROGUE_CR_BIFPM_STATUS_MMU 0x1350U
+#define ROGUE_CR_BIFPM_STATUS_MMU_MASKFULL 0x00000000000000FFULL
+#define ROGUE_CR_BIFPM_STATUS_MMU_REQUESTS_SHIFT 0U
+#define ROGUE_CR_BIFPM_STATUS_MMU_REQUESTS_CLRMSK 0xFFFFFF00U
+
+/* Register ROGUE_CR_BIF_STATUS_MMU */
+#define ROGUE_CR_BIF_STATUS_MMU 0x1358U
+#define ROGUE_CR_BIF_STATUS_MMU_MASKFULL 0x00000000000000FFULL
+#define ROGUE_CR_BIF_STATUS_MMU_REQUESTS_SHIFT 0U
+#define ROGUE_CR_BIF_STATUS_MMU_REQUESTS_CLRMSK 0xFFFFFF00U
+
+/* Register ROGUE_CR_BIF_FAULT_READ */
+#define ROGUE_CR_BIF_FAULT_READ 0x13E0U
+#define ROGUE_CR_BIF_FAULT_READ_MASKFULL 0x000000FFFFFFFFF0ULL
+#define ROGUE_CR_BIF_FAULT_READ_ADDRESS_SHIFT 4U
+#define ROGUE_CR_BIF_FAULT_READ_ADDRESS_CLRMSK 0xFFFFFF000000000FULL
+#define ROGUE_CR_BIF_FAULT_READ_ADDRESS_ALIGNSHIFT 4U
+#define ROGUE_CR_BIF_FAULT_READ_ADDRESS_ALIGNSIZE 16U
+
+/* Register ROGUE_CR_TEXAS_BIF_FAULT_BANK0_MMU_STATUS */
+#define ROGUE_CR_TEXAS_BIF_FAULT_BANK0_MMU_STATUS 0x1430U
+#define ROGUE_CR_TEXAS_BIF_FAULT_BANK0_MMU_STATUS_MASKFULL 0x000000000000F775ULL
+#define ROGUE_CR_TEXAS_BIF_FAULT_BANK0_MMU_STATUS_CAT_BASE_SHIFT 12U
+#define ROGUE_CR_TEXAS_BIF_FAULT_BANK0_MMU_STATUS_CAT_BASE_CLRMSK 0xFFFF0FFFU
+#define ROGUE_CR_TEXAS_BIF_FAULT_BANK0_MMU_STATUS_PAGE_SIZE_SHIFT 8U
+#define ROGUE_CR_TEXAS_BIF_FAULT_BANK0_MMU_STATUS_PAGE_SIZE_CLRMSK 0xFFFFF8FFU
+#define ROGUE_CR_TEXAS_BIF_FAULT_BANK0_MMU_STATUS_DATA_TYPE_SHIFT 5U
+#define ROGUE_CR_TEXAS_BIF_FAULT_BANK0_MMU_STATUS_DATA_TYPE_CLRMSK 0xFFFFFF9FU
+#define ROGUE_CR_TEXAS_BIF_FAULT_BANK0_MMU_STATUS_FAULT_RO_SHIFT 4U
+#define ROGUE_CR_TEXAS_BIF_FAULT_BANK0_MMU_STATUS_FAULT_RO_CLRMSK 0xFFFFFFEFU
+#define ROGUE_CR_TEXAS_BIF_FAULT_BANK0_MMU_STATUS_FAULT_RO_EN 0x00000010U
+#define ROGUE_CR_TEXAS_BIF_FAULT_BANK0_MMU_STATUS_FAULT_PM_META_RO_SHIFT 2U
+#define ROGUE_CR_TEXAS_BIF_FAULT_BANK0_MMU_STATUS_FAULT_PM_META_RO_CLRMSK 0xFFFFFFFBU
+#define ROGUE_CR_TEXAS_BIF_FAULT_BANK0_MMU_STATUS_FAULT_PM_META_RO_EN 0x00000004U
+#define ROGUE_CR_TEXAS_BIF_FAULT_BANK0_MMU_STATUS_FAULT_SHIFT 0U
+#define ROGUE_CR_TEXAS_BIF_FAULT_BANK0_MMU_STATUS_FAULT_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_TEXAS_BIF_FAULT_BANK0_MMU_STATUS_FAULT_EN 0x00000001U
+
+/* Register ROGUE_CR_TEXAS_BIF_FAULT_BANK0_REQ_STATUS */
+#define ROGUE_CR_TEXAS_BIF_FAULT_BANK0_REQ_STATUS 0x1438U
+#define ROGUE_CR_TEXAS_BIF_FAULT_BANK0_REQ_STATUS_MASKFULL 0x0007FFFFFFFFFFF0ULL
+#define ROGUE_CR_TEXAS_BIF_FAULT_BANK0_REQ_STATUS_RNW_SHIFT 50U
+#define ROGUE_CR_TEXAS_BIF_FAULT_BANK0_REQ_STATUS_RNW_CLRMSK 0xFFFBFFFFFFFFFFFFULL
+#define ROGUE_CR_TEXAS_BIF_FAULT_BANK0_REQ_STATUS_RNW_EN 0x0004000000000000ULL
+#define ROGUE_CR_TEXAS_BIF_FAULT_BANK0_REQ_STATUS_TAG_SB_SHIFT 44U
+#define ROGUE_CR_TEXAS_BIF_FAULT_BANK0_REQ_STATUS_TAG_SB_CLRMSK 0xFFFC0FFFFFFFFFFFULL
+#define ROGUE_CR_TEXAS_BIF_FAULT_BANK0_REQ_STATUS_TAG_ID_SHIFT 40U
+#define ROGUE_CR_TEXAS_BIF_FAULT_BANK0_REQ_STATUS_TAG_ID_CLRMSK 0xFFFFF0FFFFFFFFFFULL
+#define ROGUE_CR_TEXAS_BIF_FAULT_BANK0_REQ_STATUS_ADDRESS_SHIFT 4U
+#define ROGUE_CR_TEXAS_BIF_FAULT_BANK0_REQ_STATUS_ADDRESS_CLRMSK 0xFFFFFF000000000FULL
+#define ROGUE_CR_TEXAS_BIF_FAULT_BANK0_REQ_STATUS_ADDRESS_ALIGNSHIFT 4U
+#define ROGUE_CR_TEXAS_BIF_FAULT_BANK0_REQ_STATUS_ADDRESS_ALIGNSIZE 16U
+
+/* Register ROGUE_CR_MCU_FENCE */
+#define ROGUE_CR_MCU_FENCE 0x1740U
+#define ROGUE_CR_MCU_FENCE_MASKFULL 0x000007FFFFFFFFE0ULL
+#define ROGUE_CR_MCU_FENCE_DM_SHIFT 40U
+#define ROGUE_CR_MCU_FENCE_DM_CLRMSK 0xFFFFF8FFFFFFFFFFULL
+#define ROGUE_CR_MCU_FENCE_DM_VERTEX 0x0000000000000000ULL
+#define ROGUE_CR_MCU_FENCE_DM_PIXEL 0x0000010000000000ULL
+#define ROGUE_CR_MCU_FENCE_DM_COMPUTE 0x0000020000000000ULL
+#define ROGUE_CR_MCU_FENCE_DM_RAY_VERTEX 0x0000030000000000ULL
+#define ROGUE_CR_MCU_FENCE_DM_RAY 0x0000040000000000ULL
+#define ROGUE_CR_MCU_FENCE_DM_FASTRENDER 0x0000050000000000ULL
+#define ROGUE_CR_MCU_FENCE_ADDR_SHIFT 5U
+#define ROGUE_CR_MCU_FENCE_ADDR_CLRMSK 0xFFFFFF000000001FULL
+#define ROGUE_CR_MCU_FENCE_ADDR_ALIGNSHIFT 5U
+#define ROGUE_CR_MCU_FENCE_ADDR_ALIGNSIZE 32U
+
+/* Register group: ROGUE_CR_SCRATCH, with 16 repeats */
+#define ROGUE_CR_SCRATCH_REPEATCOUNT 16U
+/* Register ROGUE_CR_SCRATCH0 */
+#define ROGUE_CR_SCRATCH0 0x1A00U
+#define ROGUE_CR_SCRATCH0_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_SCRATCH0_DATA_SHIFT 0U
+#define ROGUE_CR_SCRATCH0_DATA_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_SCRATCH1 */
+#define ROGUE_CR_SCRATCH1 0x1A08U
+#define ROGUE_CR_SCRATCH1_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_SCRATCH1_DATA_SHIFT 0U
+#define ROGUE_CR_SCRATCH1_DATA_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_SCRATCH2 */
+#define ROGUE_CR_SCRATCH2 0x1A10U
+#define ROGUE_CR_SCRATCH2_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_SCRATCH2_DATA_SHIFT 0U
+#define ROGUE_CR_SCRATCH2_DATA_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_SCRATCH3 */
+#define ROGUE_CR_SCRATCH3 0x1A18U
+#define ROGUE_CR_SCRATCH3_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_SCRATCH3_DATA_SHIFT 0U
+#define ROGUE_CR_SCRATCH3_DATA_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_SCRATCH4 */
+#define ROGUE_CR_SCRATCH4 0x1A20U
+#define ROGUE_CR_SCRATCH4_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_SCRATCH4_DATA_SHIFT 0U
+#define ROGUE_CR_SCRATCH4_DATA_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_SCRATCH5 */
+#define ROGUE_CR_SCRATCH5 0x1A28U
+#define ROGUE_CR_SCRATCH5_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_SCRATCH5_DATA_SHIFT 0U
+#define ROGUE_CR_SCRATCH5_DATA_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_SCRATCH6 */
+#define ROGUE_CR_SCRATCH6 0x1A30U
+#define ROGUE_CR_SCRATCH6_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_SCRATCH6_DATA_SHIFT 0U
+#define ROGUE_CR_SCRATCH6_DATA_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_SCRATCH7 */
+#define ROGUE_CR_SCRATCH7 0x1A38U
+#define ROGUE_CR_SCRATCH7_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_SCRATCH7_DATA_SHIFT 0U
+#define ROGUE_CR_SCRATCH7_DATA_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_SCRATCH8 */
+#define ROGUE_CR_SCRATCH8 0x1A40U
+#define ROGUE_CR_SCRATCH8_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_SCRATCH8_DATA_SHIFT 0U
+#define ROGUE_CR_SCRATCH8_DATA_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_SCRATCH9 */
+#define ROGUE_CR_SCRATCH9 0x1A48U
+#define ROGUE_CR_SCRATCH9_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_SCRATCH9_DATA_SHIFT 0U
+#define ROGUE_CR_SCRATCH9_DATA_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_SCRATCH10 */
+#define ROGUE_CR_SCRATCH10 0x1A50U
+#define ROGUE_CR_SCRATCH10_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_SCRATCH10_DATA_SHIFT 0U
+#define ROGUE_CR_SCRATCH10_DATA_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_SCRATCH11 */
+#define ROGUE_CR_SCRATCH11 0x1A58U
+#define ROGUE_CR_SCRATCH11_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_SCRATCH11_DATA_SHIFT 0U
+#define ROGUE_CR_SCRATCH11_DATA_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_SCRATCH12 */
+#define ROGUE_CR_SCRATCH12 0x1A60U
+#define ROGUE_CR_SCRATCH12_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_SCRATCH12_DATA_SHIFT 0U
+#define ROGUE_CR_SCRATCH12_DATA_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_SCRATCH13 */
+#define ROGUE_CR_SCRATCH13 0x1A68U
+#define ROGUE_CR_SCRATCH13_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_SCRATCH13_DATA_SHIFT 0U
+#define ROGUE_CR_SCRATCH13_DATA_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_SCRATCH14 */
+#define ROGUE_CR_SCRATCH14 0x1A70U
+#define ROGUE_CR_SCRATCH14_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_SCRATCH14_DATA_SHIFT 0U
+#define ROGUE_CR_SCRATCH14_DATA_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_SCRATCH15 */
+#define ROGUE_CR_SCRATCH15 0x1A78U
+#define ROGUE_CR_SCRATCH15_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_SCRATCH15_DATA_SHIFT 0U
+#define ROGUE_CR_SCRATCH15_DATA_CLRMSK 0x00000000U
+
+/* Register group: ROGUE_CR_OS0_SCRATCH, with 2 repeats */
+#define ROGUE_CR_OS0_SCRATCH_REPEATCOUNT 2U
+/* Register ROGUE_CR_OS0_SCRATCH0 */
+#define ROGUE_CR_OS0_SCRATCH0 0x1A80U
+#define ROGUE_CR_OS0_SCRATCH0_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_OS0_SCRATCH0_DATA_SHIFT 0U
+#define ROGUE_CR_OS0_SCRATCH0_DATA_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_OS0_SCRATCH1 */
+#define ROGUE_CR_OS0_SCRATCH1 0x1A88U
+#define ROGUE_CR_OS0_SCRATCH1_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_OS0_SCRATCH1_DATA_SHIFT 0U
+#define ROGUE_CR_OS0_SCRATCH1_DATA_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_OS0_SCRATCH2 */
+#define ROGUE_CR_OS0_SCRATCH2 0x1A90U
+#define ROGUE_CR_OS0_SCRATCH2_MASKFULL 0x00000000000000FFULL
+#define ROGUE_CR_OS0_SCRATCH2_DATA_SHIFT 0U
+#define ROGUE_CR_OS0_SCRATCH2_DATA_CLRMSK 0xFFFFFF00U
+
+/* Register ROGUE_CR_OS0_SCRATCH3 */
+#define ROGUE_CR_OS0_SCRATCH3 0x1A98U
+#define ROGUE_CR_OS0_SCRATCH3_MASKFULL 0x00000000000000FFULL
+#define ROGUE_CR_OS0_SCRATCH3_DATA_SHIFT 0U
+#define ROGUE_CR_OS0_SCRATCH3_DATA_CLRMSK 0xFFFFFF00U
+
+/* Register group: ROGUE_CR_OS1_SCRATCH, with 2 repeats */
+#define ROGUE_CR_OS1_SCRATCH_REPEATCOUNT 2U
+/* Register ROGUE_CR_OS1_SCRATCH0 */
+#define ROGUE_CR_OS1_SCRATCH0 0x11A80U
+#define ROGUE_CR_OS1_SCRATCH0_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_OS1_SCRATCH0_DATA_SHIFT 0U
+#define ROGUE_CR_OS1_SCRATCH0_DATA_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_OS1_SCRATCH1 */
+#define ROGUE_CR_OS1_SCRATCH1 0x11A88U
+#define ROGUE_CR_OS1_SCRATCH1_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_OS1_SCRATCH1_DATA_SHIFT 0U
+#define ROGUE_CR_OS1_SCRATCH1_DATA_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_OS1_SCRATCH2 */
+#define ROGUE_CR_OS1_SCRATCH2 0x11A90U
+#define ROGUE_CR_OS1_SCRATCH2_MASKFULL 0x00000000000000FFULL
+#define ROGUE_CR_OS1_SCRATCH2_DATA_SHIFT 0U
+#define ROGUE_CR_OS1_SCRATCH2_DATA_CLRMSK 0xFFFFFF00U
+
+/* Register ROGUE_CR_OS1_SCRATCH3 */
+#define ROGUE_CR_OS1_SCRATCH3 0x11A98U
+#define ROGUE_CR_OS1_SCRATCH3_MASKFULL 0x00000000000000FFULL
+#define ROGUE_CR_OS1_SCRATCH3_DATA_SHIFT 0U
+#define ROGUE_CR_OS1_SCRATCH3_DATA_CLRMSK 0xFFFFFF00U
+
+/* Register group: ROGUE_CR_OS2_SCRATCH, with 2 repeats */
+#define ROGUE_CR_OS2_SCRATCH_REPEATCOUNT 2U
+/* Register ROGUE_CR_OS2_SCRATCH0 */
+#define ROGUE_CR_OS2_SCRATCH0 0x21A80U
+#define ROGUE_CR_OS2_SCRATCH0_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_OS2_SCRATCH0_DATA_SHIFT 0U
+#define ROGUE_CR_OS2_SCRATCH0_DATA_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_OS2_SCRATCH1 */
+#define ROGUE_CR_OS2_SCRATCH1 0x21A88U
+#define ROGUE_CR_OS2_SCRATCH1_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_OS2_SCRATCH1_DATA_SHIFT 0U
+#define ROGUE_CR_OS2_SCRATCH1_DATA_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_OS2_SCRATCH2 */
+#define ROGUE_CR_OS2_SCRATCH2 0x21A90U
+#define ROGUE_CR_OS2_SCRATCH2_MASKFULL 0x00000000000000FFULL
+#define ROGUE_CR_OS2_SCRATCH2_DATA_SHIFT 0U
+#define ROGUE_CR_OS2_SCRATCH2_DATA_CLRMSK 0xFFFFFF00U
+
+/* Register ROGUE_CR_OS2_SCRATCH3 */
+#define ROGUE_CR_OS2_SCRATCH3 0x21A98U
+#define ROGUE_CR_OS2_SCRATCH3_MASKFULL 0x00000000000000FFULL
+#define ROGUE_CR_OS2_SCRATCH3_DATA_SHIFT 0U
+#define ROGUE_CR_OS2_SCRATCH3_DATA_CLRMSK 0xFFFFFF00U
+
+/* Register group: ROGUE_CR_OS3_SCRATCH, with 2 repeats */
+#define ROGUE_CR_OS3_SCRATCH_REPEATCOUNT 2U
+/* Register ROGUE_CR_OS3_SCRATCH0 */
+#define ROGUE_CR_OS3_SCRATCH0 0x31A80U
+#define ROGUE_CR_OS3_SCRATCH0_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_OS3_SCRATCH0_DATA_SHIFT 0U
+#define ROGUE_CR_OS3_SCRATCH0_DATA_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_OS3_SCRATCH1 */
+#define ROGUE_CR_OS3_SCRATCH1 0x31A88U
+#define ROGUE_CR_OS3_SCRATCH1_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_OS3_SCRATCH1_DATA_SHIFT 0U
+#define ROGUE_CR_OS3_SCRATCH1_DATA_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_OS3_SCRATCH2 */
+#define ROGUE_CR_OS3_SCRATCH2 0x31A90U
+#define ROGUE_CR_OS3_SCRATCH2_MASKFULL 0x00000000000000FFULL
+#define ROGUE_CR_OS3_SCRATCH2_DATA_SHIFT 0U
+#define ROGUE_CR_OS3_SCRATCH2_DATA_CLRMSK 0xFFFFFF00U
+
+/* Register ROGUE_CR_OS3_SCRATCH3 */
+#define ROGUE_CR_OS3_SCRATCH3 0x31A98U
+#define ROGUE_CR_OS3_SCRATCH3_MASKFULL 0x00000000000000FFULL
+#define ROGUE_CR_OS3_SCRATCH3_DATA_SHIFT 0U
+#define ROGUE_CR_OS3_SCRATCH3_DATA_CLRMSK 0xFFFFFF00U
+
+/* Register group: ROGUE_CR_OS4_SCRATCH, with 2 repeats */
+#define ROGUE_CR_OS4_SCRATCH_REPEATCOUNT 2U
+/* Register ROGUE_CR_OS4_SCRATCH0 */
+#define ROGUE_CR_OS4_SCRATCH0 0x41A80U
+#define ROGUE_CR_OS4_SCRATCH0_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_OS4_SCRATCH0_DATA_SHIFT 0U
+#define ROGUE_CR_OS4_SCRATCH0_DATA_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_OS4_SCRATCH1 */
+#define ROGUE_CR_OS4_SCRATCH1 0x41A88U
+#define ROGUE_CR_OS4_SCRATCH1_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_OS4_SCRATCH1_DATA_SHIFT 0U
+#define ROGUE_CR_OS4_SCRATCH1_DATA_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_OS4_SCRATCH2 */
+#define ROGUE_CR_OS4_SCRATCH2 0x41A90U
+#define ROGUE_CR_OS4_SCRATCH2_MASKFULL 0x00000000000000FFULL
+#define ROGUE_CR_OS4_SCRATCH2_DATA_SHIFT 0U
+#define ROGUE_CR_OS4_SCRATCH2_DATA_CLRMSK 0xFFFFFF00U
+
+/* Register ROGUE_CR_OS4_SCRATCH3 */
+#define ROGUE_CR_OS4_SCRATCH3 0x41A98U
+#define ROGUE_CR_OS4_SCRATCH3_MASKFULL 0x00000000000000FFULL
+#define ROGUE_CR_OS4_SCRATCH3_DATA_SHIFT 0U
+#define ROGUE_CR_OS4_SCRATCH3_DATA_CLRMSK 0xFFFFFF00U
+
+/* Register group: ROGUE_CR_OS5_SCRATCH, with 2 repeats */
+#define ROGUE_CR_OS5_SCRATCH_REPEATCOUNT 2U
+/* Register ROGUE_CR_OS5_SCRATCH0 */
+#define ROGUE_CR_OS5_SCRATCH0 0x51A80U
+#define ROGUE_CR_OS5_SCRATCH0_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_OS5_SCRATCH0_DATA_SHIFT 0U
+#define ROGUE_CR_OS5_SCRATCH0_DATA_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_OS5_SCRATCH1 */
+#define ROGUE_CR_OS5_SCRATCH1 0x51A88U
+#define ROGUE_CR_OS5_SCRATCH1_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_OS5_SCRATCH1_DATA_SHIFT 0U
+#define ROGUE_CR_OS5_SCRATCH1_DATA_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_OS5_SCRATCH2 */
+#define ROGUE_CR_OS5_SCRATCH2 0x51A90U
+#define ROGUE_CR_OS5_SCRATCH2_MASKFULL 0x00000000000000FFULL
+#define ROGUE_CR_OS5_SCRATCH2_DATA_SHIFT 0U
+#define ROGUE_CR_OS5_SCRATCH2_DATA_CLRMSK 0xFFFFFF00U
+
+/* Register ROGUE_CR_OS5_SCRATCH3 */
+#define ROGUE_CR_OS5_SCRATCH3 0x51A98U
+#define ROGUE_CR_OS5_SCRATCH3_MASKFULL 0x00000000000000FFULL
+#define ROGUE_CR_OS5_SCRATCH3_DATA_SHIFT 0U
+#define ROGUE_CR_OS5_SCRATCH3_DATA_CLRMSK 0xFFFFFF00U
+
+/* Register group: ROGUE_CR_OS6_SCRATCH, with 2 repeats */
+#define ROGUE_CR_OS6_SCRATCH_REPEATCOUNT 2U
+/* Register ROGUE_CR_OS6_SCRATCH0 */
+#define ROGUE_CR_OS6_SCRATCH0 0x61A80U
+#define ROGUE_CR_OS6_SCRATCH0_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_OS6_SCRATCH0_DATA_SHIFT 0U
+#define ROGUE_CR_OS6_SCRATCH0_DATA_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_OS6_SCRATCH1 */
+#define ROGUE_CR_OS6_SCRATCH1 0x61A88U
+#define ROGUE_CR_OS6_SCRATCH1_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_OS6_SCRATCH1_DATA_SHIFT 0U
+#define ROGUE_CR_OS6_SCRATCH1_DATA_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_OS6_SCRATCH2 */
+#define ROGUE_CR_OS6_SCRATCH2 0x61A90U
+#define ROGUE_CR_OS6_SCRATCH2_MASKFULL 0x00000000000000FFULL
+#define ROGUE_CR_OS6_SCRATCH2_DATA_SHIFT 0U
+#define ROGUE_CR_OS6_SCRATCH2_DATA_CLRMSK 0xFFFFFF00U
+
+/* Register ROGUE_CR_OS6_SCRATCH3 */
+#define ROGUE_CR_OS6_SCRATCH3 0x61A98U
+#define ROGUE_CR_OS6_SCRATCH3_MASKFULL 0x00000000000000FFULL
+#define ROGUE_CR_OS6_SCRATCH3_DATA_SHIFT 0U
+#define ROGUE_CR_OS6_SCRATCH3_DATA_CLRMSK 0xFFFFFF00U
+
+/* Register group: ROGUE_CR_OS7_SCRATCH, with 2 repeats */
+#define ROGUE_CR_OS7_SCRATCH_REPEATCOUNT 2U
+/* Register ROGUE_CR_OS7_SCRATCH0 */
+#define ROGUE_CR_OS7_SCRATCH0 0x71A80U
+#define ROGUE_CR_OS7_SCRATCH0_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_OS7_SCRATCH0_DATA_SHIFT 0U
+#define ROGUE_CR_OS7_SCRATCH0_DATA_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_OS7_SCRATCH1 */
+#define ROGUE_CR_OS7_SCRATCH1 0x71A88U
+#define ROGUE_CR_OS7_SCRATCH1_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_OS7_SCRATCH1_DATA_SHIFT 0U
+#define ROGUE_CR_OS7_SCRATCH1_DATA_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_OS7_SCRATCH2 */
+#define ROGUE_CR_OS7_SCRATCH2 0x71A90U
+#define ROGUE_CR_OS7_SCRATCH2_MASKFULL 0x00000000000000FFULL
+#define ROGUE_CR_OS7_SCRATCH2_DATA_SHIFT 0U
+#define ROGUE_CR_OS7_SCRATCH2_DATA_CLRMSK 0xFFFFFF00U
+
+/* Register ROGUE_CR_OS7_SCRATCH3 */
+#define ROGUE_CR_OS7_SCRATCH3 0x71A98U
+#define ROGUE_CR_OS7_SCRATCH3_MASKFULL 0x00000000000000FFULL
+#define ROGUE_CR_OS7_SCRATCH3_DATA_SHIFT 0U
+#define ROGUE_CR_OS7_SCRATCH3_DATA_CLRMSK 0xFFFFFF00U
+
+/* Register ROGUE_CR_SPFILTER_SIGNAL_DESCR */
+#define ROGUE_CR_SPFILTER_SIGNAL_DESCR 0x2700U
+#define ROGUE_CR_SPFILTER_SIGNAL_DESCR_MASKFULL 0x000000000000FFFFULL
+#define ROGUE_CR_SPFILTER_SIGNAL_DESCR_SIZE_SHIFT 0U
+#define ROGUE_CR_SPFILTER_SIGNAL_DESCR_SIZE_CLRMSK 0xFFFF0000U
+#define ROGUE_CR_SPFILTER_SIGNAL_DESCR_SIZE_ALIGNSHIFT 4U
+#define ROGUE_CR_SPFILTER_SIGNAL_DESCR_SIZE_ALIGNSIZE 16U
+
+/* Register ROGUE_CR_SPFILTER_SIGNAL_DESCR_MIN */
+#define ROGUE_CR_SPFILTER_SIGNAL_DESCR_MIN 0x2708U
+#define ROGUE_CR_SPFILTER_SIGNAL_DESCR_MIN_MASKFULL 0x000000FFFFFFFFF0ULL
+#define ROGUE_CR_SPFILTER_SIGNAL_DESCR_MIN_ADDR_SHIFT 4U
+#define ROGUE_CR_SPFILTER_SIGNAL_DESCR_MIN_ADDR_CLRMSK 0xFFFFFF000000000FULL
+#define ROGUE_CR_SPFILTER_SIGNAL_DESCR_MIN_ADDR_ALIGNSHIFT 4U
+#define ROGUE_CR_SPFILTER_SIGNAL_DESCR_MIN_ADDR_ALIGNSIZE 16U
+
+/* Register group: ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG, with 16 repeats */
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG_REPEATCOUNT 16U
+/* Register ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG0 */
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG0 0x3000U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG0_MASKFULL 0x7FFFF7FFFFFFF000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG0_TRUSTED_SHIFT 62U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG0_TRUSTED_CLRMSK 0xBFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG0_TRUSTED_EN 0x4000000000000000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG0_LOAD_STORE_EN_SHIFT 61U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG0_LOAD_STORE_EN_CLRMSK 0xDFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG0_LOAD_STORE_EN_EN 0x2000000000000000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG0_FETCH_EN_SHIFT 60U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG0_FETCH_EN_CLRMSK 0xEFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG0_FETCH_EN_EN 0x1000000000000000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG0_SIZE_SHIFT 44U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG0_SIZE_CLRMSK 0xF0000FFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG0_CBASE_SHIFT 40U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG0_CBASE_CLRMSK 0xFFFFF8FFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG0_DEVVADDR_SHIFT 12U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG0_DEVVADDR_CLRMSK 0xFFFFFF0000000FFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG0_DEVVADDR_ALIGNSHIFT 12U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG0_DEVVADDR_ALIGNSIZE 4096U
+
+/* Register ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG1 */
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG1 0x3008U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG1_MASKFULL 0x7FFFF7FFFFFFF000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG1_TRUSTED_SHIFT 62U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG1_TRUSTED_CLRMSK 0xBFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG1_TRUSTED_EN 0x4000000000000000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG1_LOAD_STORE_EN_SHIFT 61U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG1_LOAD_STORE_EN_CLRMSK 0xDFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG1_LOAD_STORE_EN_EN 0x2000000000000000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG1_FETCH_EN_SHIFT 60U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG1_FETCH_EN_CLRMSK 0xEFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG1_FETCH_EN_EN 0x1000000000000000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG1_SIZE_SHIFT 44U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG1_SIZE_CLRMSK 0xF0000FFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG1_CBASE_SHIFT 40U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG1_CBASE_CLRMSK 0xFFFFF8FFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG1_DEVVADDR_SHIFT 12U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG1_DEVVADDR_CLRMSK 0xFFFFFF0000000FFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG1_DEVVADDR_ALIGNSHIFT 12U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG1_DEVVADDR_ALIGNSIZE 4096U
+
+/* Register ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG2 */
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG2 0x3010U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG2_MASKFULL 0x7FFFF7FFFFFFF000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG2_TRUSTED_SHIFT 62U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG2_TRUSTED_CLRMSK 0xBFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG2_TRUSTED_EN 0x4000000000000000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG2_LOAD_STORE_EN_SHIFT 61U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG2_LOAD_STORE_EN_CLRMSK 0xDFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG2_LOAD_STORE_EN_EN 0x2000000000000000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG2_FETCH_EN_SHIFT 60U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG2_FETCH_EN_CLRMSK 0xEFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG2_FETCH_EN_EN 0x1000000000000000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG2_SIZE_SHIFT 44U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG2_SIZE_CLRMSK 0xF0000FFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG2_CBASE_SHIFT 40U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG2_CBASE_CLRMSK 0xFFFFF8FFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG2_DEVVADDR_SHIFT 12U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG2_DEVVADDR_CLRMSK 0xFFFFFF0000000FFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG2_DEVVADDR_ALIGNSHIFT 12U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG2_DEVVADDR_ALIGNSIZE 4096U
+
+/* Register ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG3 */
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG3 0x3018U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG3_MASKFULL 0x7FFFF7FFFFFFF000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG3_TRUSTED_SHIFT 62U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG3_TRUSTED_CLRMSK 0xBFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG3_TRUSTED_EN 0x4000000000000000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG3_LOAD_STORE_EN_SHIFT 61U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG3_LOAD_STORE_EN_CLRMSK 0xDFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG3_LOAD_STORE_EN_EN 0x2000000000000000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG3_FETCH_EN_SHIFT 60U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG3_FETCH_EN_CLRMSK 0xEFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG3_FETCH_EN_EN 0x1000000000000000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG3_SIZE_SHIFT 44U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG3_SIZE_CLRMSK 0xF0000FFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG3_CBASE_SHIFT 40U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG3_CBASE_CLRMSK 0xFFFFF8FFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG3_DEVVADDR_SHIFT 12U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG3_DEVVADDR_CLRMSK 0xFFFFFF0000000FFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG3_DEVVADDR_ALIGNSHIFT 12U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG3_DEVVADDR_ALIGNSIZE 4096U
+
+/* Register ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG4 */
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG4 0x3020U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG4_MASKFULL 0x7FFFF7FFFFFFF000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG4_TRUSTED_SHIFT 62U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG4_TRUSTED_CLRMSK 0xBFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG4_TRUSTED_EN 0x4000000000000000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG4_LOAD_STORE_EN_SHIFT 61U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG4_LOAD_STORE_EN_CLRMSK 0xDFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG4_LOAD_STORE_EN_EN 0x2000000000000000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG4_FETCH_EN_SHIFT 60U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG4_FETCH_EN_CLRMSK 0xEFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG4_FETCH_EN_EN 0x1000000000000000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG4_SIZE_SHIFT 44U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG4_SIZE_CLRMSK 0xF0000FFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG4_CBASE_SHIFT 40U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG4_CBASE_CLRMSK 0xFFFFF8FFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG4_DEVVADDR_SHIFT 12U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG4_DEVVADDR_CLRMSK 0xFFFFFF0000000FFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG4_DEVVADDR_ALIGNSHIFT 12U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG4_DEVVADDR_ALIGNSIZE 4096U
+
+/* Register ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG5 */
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG5 0x3028U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG5_MASKFULL 0x7FFFF7FFFFFFF000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG5_TRUSTED_SHIFT 62U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG5_TRUSTED_CLRMSK 0xBFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG5_TRUSTED_EN 0x4000000000000000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG5_LOAD_STORE_EN_SHIFT 61U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG5_LOAD_STORE_EN_CLRMSK 0xDFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG5_LOAD_STORE_EN_EN 0x2000000000000000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG5_FETCH_EN_SHIFT 60U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG5_FETCH_EN_CLRMSK 0xEFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG5_FETCH_EN_EN 0x1000000000000000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG5_SIZE_SHIFT 44U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG5_SIZE_CLRMSK 0xF0000FFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG5_CBASE_SHIFT 40U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG5_CBASE_CLRMSK 0xFFFFF8FFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG5_DEVVADDR_SHIFT 12U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG5_DEVVADDR_CLRMSK 0xFFFFFF0000000FFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG5_DEVVADDR_ALIGNSHIFT 12U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG5_DEVVADDR_ALIGNSIZE 4096U
+
+/* Register ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG6 */
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG6 0x3030U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG6_MASKFULL 0x7FFFF7FFFFFFF000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG6_TRUSTED_SHIFT 62U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG6_TRUSTED_CLRMSK 0xBFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG6_TRUSTED_EN 0x4000000000000000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG6_LOAD_STORE_EN_SHIFT 61U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG6_LOAD_STORE_EN_CLRMSK 0xDFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG6_LOAD_STORE_EN_EN 0x2000000000000000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG6_FETCH_EN_SHIFT 60U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG6_FETCH_EN_CLRMSK 0xEFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG6_FETCH_EN_EN 0x1000000000000000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG6_SIZE_SHIFT 44U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG6_SIZE_CLRMSK 0xF0000FFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG6_CBASE_SHIFT 40U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG6_CBASE_CLRMSK 0xFFFFF8FFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG6_DEVVADDR_SHIFT 12U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG6_DEVVADDR_CLRMSK 0xFFFFFF0000000FFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG6_DEVVADDR_ALIGNSHIFT 12U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG6_DEVVADDR_ALIGNSIZE 4096U
+
+/* Register ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG7 */
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG7 0x3038U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG7_MASKFULL 0x7FFFF7FFFFFFF000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG7_TRUSTED_SHIFT 62U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG7_TRUSTED_CLRMSK 0xBFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG7_TRUSTED_EN 0x4000000000000000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG7_LOAD_STORE_EN_SHIFT 61U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG7_LOAD_STORE_EN_CLRMSK 0xDFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG7_LOAD_STORE_EN_EN 0x2000000000000000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG7_FETCH_EN_SHIFT 60U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG7_FETCH_EN_CLRMSK 0xEFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG7_FETCH_EN_EN 0x1000000000000000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG7_SIZE_SHIFT 44U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG7_SIZE_CLRMSK 0xF0000FFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG7_CBASE_SHIFT 40U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG7_CBASE_CLRMSK 0xFFFFF8FFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG7_DEVVADDR_SHIFT 12U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG7_DEVVADDR_CLRMSK 0xFFFFFF0000000FFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG7_DEVVADDR_ALIGNSHIFT 12U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG7_DEVVADDR_ALIGNSIZE 4096U
+
+/* Register ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG8 */
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG8 0x3040U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG8_MASKFULL 0x7FFFF7FFFFFFF000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG8_TRUSTED_SHIFT 62U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG8_TRUSTED_CLRMSK 0xBFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG8_TRUSTED_EN 0x4000000000000000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG8_LOAD_STORE_EN_SHIFT 61U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG8_LOAD_STORE_EN_CLRMSK 0xDFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG8_LOAD_STORE_EN_EN 0x2000000000000000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG8_FETCH_EN_SHIFT 60U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG8_FETCH_EN_CLRMSK 0xEFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG8_FETCH_EN_EN 0x1000000000000000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG8_SIZE_SHIFT 44U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG8_SIZE_CLRMSK 0xF0000FFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG8_CBASE_SHIFT 40U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG8_CBASE_CLRMSK 0xFFFFF8FFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG8_DEVVADDR_SHIFT 12U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG8_DEVVADDR_CLRMSK 0xFFFFFF0000000FFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG8_DEVVADDR_ALIGNSHIFT 12U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG8_DEVVADDR_ALIGNSIZE 4096U
+
+/* Register ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG9 */
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG9 0x3048U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG9_MASKFULL 0x7FFFF7FFFFFFF000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG9_TRUSTED_SHIFT 62U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG9_TRUSTED_CLRMSK 0xBFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG9_TRUSTED_EN 0x4000000000000000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG9_LOAD_STORE_EN_SHIFT 61U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG9_LOAD_STORE_EN_CLRMSK 0xDFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG9_LOAD_STORE_EN_EN 0x2000000000000000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG9_FETCH_EN_SHIFT 60U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG9_FETCH_EN_CLRMSK 0xEFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG9_FETCH_EN_EN 0x1000000000000000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG9_SIZE_SHIFT 44U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG9_SIZE_CLRMSK 0xF0000FFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG9_CBASE_SHIFT 40U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG9_CBASE_CLRMSK 0xFFFFF8FFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG9_DEVVADDR_SHIFT 12U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG9_DEVVADDR_CLRMSK 0xFFFFFF0000000FFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG9_DEVVADDR_ALIGNSHIFT 12U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG9_DEVVADDR_ALIGNSIZE 4096U
+
+/* Register ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG10 */
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG10 0x3050U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG10_MASKFULL 0x7FFFF7FFFFFFF000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG10_TRUSTED_SHIFT 62U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG10_TRUSTED_CLRMSK 0xBFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG10_TRUSTED_EN 0x4000000000000000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG10_LOAD_STORE_EN_SHIFT 61U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG10_LOAD_STORE_EN_CLRMSK 0xDFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG10_LOAD_STORE_EN_EN 0x2000000000000000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG10_FETCH_EN_SHIFT 60U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG10_FETCH_EN_CLRMSK 0xEFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG10_FETCH_EN_EN 0x1000000000000000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG10_SIZE_SHIFT 44U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG10_SIZE_CLRMSK 0xF0000FFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG10_CBASE_SHIFT 40U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG10_CBASE_CLRMSK 0xFFFFF8FFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG10_DEVVADDR_SHIFT 12U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG10_DEVVADDR_CLRMSK 0xFFFFFF0000000FFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG10_DEVVADDR_ALIGNSHIFT 12U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG10_DEVVADDR_ALIGNSIZE 4096U
+
+/* Register ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG11 */
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG11 0x3058U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG11_MASKFULL 0x7FFFF7FFFFFFF000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG11_TRUSTED_SHIFT 62U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG11_TRUSTED_CLRMSK 0xBFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG11_TRUSTED_EN 0x4000000000000000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG11_LOAD_STORE_EN_SHIFT 61U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG11_LOAD_STORE_EN_CLRMSK 0xDFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG11_LOAD_STORE_EN_EN 0x2000000000000000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG11_FETCH_EN_SHIFT 60U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG11_FETCH_EN_CLRMSK 0xEFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG11_FETCH_EN_EN 0x1000000000000000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG11_SIZE_SHIFT 44U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG11_SIZE_CLRMSK 0xF0000FFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG11_CBASE_SHIFT 40U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG11_CBASE_CLRMSK 0xFFFFF8FFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG11_DEVVADDR_SHIFT 12U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG11_DEVVADDR_CLRMSK 0xFFFFFF0000000FFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG11_DEVVADDR_ALIGNSHIFT 12U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG11_DEVVADDR_ALIGNSIZE 4096U
+
+/* Register ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG12 */
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG12 0x3060U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG12_MASKFULL 0x7FFFF7FFFFFFF000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG12_TRUSTED_SHIFT 62U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG12_TRUSTED_CLRMSK 0xBFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG12_TRUSTED_EN 0x4000000000000000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG12_LOAD_STORE_EN_SHIFT 61U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG12_LOAD_STORE_EN_CLRMSK 0xDFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG12_LOAD_STORE_EN_EN 0x2000000000000000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG12_FETCH_EN_SHIFT 60U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG12_FETCH_EN_CLRMSK 0xEFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG12_FETCH_EN_EN 0x1000000000000000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG12_SIZE_SHIFT 44U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG12_SIZE_CLRMSK 0xF0000FFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG12_CBASE_SHIFT 40U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG12_CBASE_CLRMSK 0xFFFFF8FFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG12_DEVVADDR_SHIFT 12U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG12_DEVVADDR_CLRMSK 0xFFFFFF0000000FFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG12_DEVVADDR_ALIGNSHIFT 12U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG12_DEVVADDR_ALIGNSIZE 4096U
+
+/* Register ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG13 */
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG13 0x3068U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG13_MASKFULL 0x7FFFF7FFFFFFF000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG13_TRUSTED_SHIFT 62U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG13_TRUSTED_CLRMSK 0xBFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG13_TRUSTED_EN 0x4000000000000000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG13_LOAD_STORE_EN_SHIFT 61U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG13_LOAD_STORE_EN_CLRMSK 0xDFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG13_LOAD_STORE_EN_EN 0x2000000000000000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG13_FETCH_EN_SHIFT 60U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG13_FETCH_EN_CLRMSK 0xEFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG13_FETCH_EN_EN 0x1000000000000000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG13_SIZE_SHIFT 44U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG13_SIZE_CLRMSK 0xF0000FFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG13_CBASE_SHIFT 40U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG13_CBASE_CLRMSK 0xFFFFF8FFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG13_DEVVADDR_SHIFT 12U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG13_DEVVADDR_CLRMSK 0xFFFFFF0000000FFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG13_DEVVADDR_ALIGNSHIFT 12U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG13_DEVVADDR_ALIGNSIZE 4096U
+
+/* Register ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG14 */
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG14 0x3070U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG14_MASKFULL 0x7FFFF7FFFFFFF000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG14_TRUSTED_SHIFT 62U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG14_TRUSTED_CLRMSK 0xBFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG14_TRUSTED_EN 0x4000000000000000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG14_LOAD_STORE_EN_SHIFT 61U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG14_LOAD_STORE_EN_CLRMSK 0xDFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG14_LOAD_STORE_EN_EN 0x2000000000000000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG14_FETCH_EN_SHIFT 60U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG14_FETCH_EN_CLRMSK 0xEFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG14_FETCH_EN_EN 0x1000000000000000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG14_SIZE_SHIFT 44U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG14_SIZE_CLRMSK 0xF0000FFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG14_CBASE_SHIFT 40U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG14_CBASE_CLRMSK 0xFFFFF8FFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG14_DEVVADDR_SHIFT 12U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG14_DEVVADDR_CLRMSK 0xFFFFFF0000000FFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG14_DEVVADDR_ALIGNSHIFT 12U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG14_DEVVADDR_ALIGNSIZE 4096U
+
+/* Register ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG15 */
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG15 0x3078U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG15_MASKFULL 0x7FFFF7FFFFFFF000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG15_TRUSTED_SHIFT 62U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG15_TRUSTED_CLRMSK 0xBFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG15_TRUSTED_EN 0x4000000000000000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG15_LOAD_STORE_EN_SHIFT 61U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG15_LOAD_STORE_EN_CLRMSK 0xDFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG15_LOAD_STORE_EN_EN 0x2000000000000000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG15_FETCH_EN_SHIFT 60U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG15_FETCH_EN_CLRMSK 0xEFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG15_FETCH_EN_EN 0x1000000000000000ULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG15_SIZE_SHIFT 44U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG15_SIZE_CLRMSK 0xF0000FFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG15_CBASE_SHIFT 40U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG15_CBASE_CLRMSK 0xFFFFF8FFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG15_DEVVADDR_SHIFT 12U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG15_DEVVADDR_CLRMSK 0xFFFFFF0000000FFFULL
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG15_DEVVADDR_ALIGNSHIFT 12U
+#define ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG15_DEVVADDR_ALIGNSIZE 4096U
+
+/* Register ROGUE_CR_FWCORE_BOOT */
+#define ROGUE_CR_FWCORE_BOOT 0x3090U
+#define ROGUE_CR_FWCORE_BOOT_MASKFULL 0x0000000000000001ULL
+#define ROGUE_CR_FWCORE_BOOT_ENABLE_SHIFT 0U
+#define ROGUE_CR_FWCORE_BOOT_ENABLE_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_FWCORE_BOOT_ENABLE_EN 0x00000001U
+
+/* Register ROGUE_CR_FWCORE_RESET_ADDR */
+#define ROGUE_CR_FWCORE_RESET_ADDR 0x3098U
+#define ROGUE_CR_FWCORE_RESET_ADDR_MASKFULL 0x00000000FFFFFFFEULL
+#define ROGUE_CR_FWCORE_RESET_ADDR_ADDR_SHIFT 1U
+#define ROGUE_CR_FWCORE_RESET_ADDR_ADDR_CLRMSK 0x00000001U
+#define ROGUE_CR_FWCORE_RESET_ADDR_ADDR_ALIGNSHIFT 1U
+#define ROGUE_CR_FWCORE_RESET_ADDR_ADDR_ALIGNSIZE 2U
+
+/* Register ROGUE_CR_FWCORE_WRAPPER_NMI_ADDR */
+#define ROGUE_CR_FWCORE_WRAPPER_NMI_ADDR 0x30A0U
+#define ROGUE_CR_FWCORE_WRAPPER_NMI_ADDR_MASKFULL 0x00000000FFFFFFFEULL
+#define ROGUE_CR_FWCORE_WRAPPER_NMI_ADDR_ADDR_SHIFT 1U
+#define ROGUE_CR_FWCORE_WRAPPER_NMI_ADDR_ADDR_CLRMSK 0x00000001U
+#define ROGUE_CR_FWCORE_WRAPPER_NMI_ADDR_ADDR_ALIGNSHIFT 1U
+#define ROGUE_CR_FWCORE_WRAPPER_NMI_ADDR_ADDR_ALIGNSIZE 2U
+
+/* Register ROGUE_CR_FWCORE_WRAPPER_NMI_EVENT */
+#define ROGUE_CR_FWCORE_WRAPPER_NMI_EVENT 0x30A8U
+#define ROGUE_CR_FWCORE_WRAPPER_NMI_EVENT_MASKFULL 0x0000000000000001ULL
+#define ROGUE_CR_FWCORE_WRAPPER_NMI_EVENT_TRIGGER_EN_SHIFT 0U
+#define ROGUE_CR_FWCORE_WRAPPER_NMI_EVENT_TRIGGER_EN_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_FWCORE_WRAPPER_NMI_EVENT_TRIGGER_EN_EN 0x00000001U
+
+/* Register ROGUE_CR_FWCORE_MEM_FAULT_MMU_STATUS */
+#define ROGUE_CR_FWCORE_MEM_FAULT_MMU_STATUS 0x30B0U
+#define ROGUE_CR_FWCORE_MEM_FAULT_MMU_STATUS_MASKFULL 0x000000000000F771ULL
+#define ROGUE_CR_FWCORE_MEM_FAULT_MMU_STATUS_CAT_BASE_SHIFT 12U
+#define ROGUE_CR_FWCORE_MEM_FAULT_MMU_STATUS_CAT_BASE_CLRMSK 0xFFFF0FFFU
+#define ROGUE_CR_FWCORE_MEM_FAULT_MMU_STATUS_PAGE_SIZE_SHIFT 8U
+#define ROGUE_CR_FWCORE_MEM_FAULT_MMU_STATUS_PAGE_SIZE_CLRMSK 0xFFFFF8FFU
+#define ROGUE_CR_FWCORE_MEM_FAULT_MMU_STATUS_DATA_TYPE_SHIFT 5U
+#define ROGUE_CR_FWCORE_MEM_FAULT_MMU_STATUS_DATA_TYPE_CLRMSK 0xFFFFFF9FU
+#define ROGUE_CR_FWCORE_MEM_FAULT_MMU_STATUS_FAULT_RO_SHIFT 4U
+#define ROGUE_CR_FWCORE_MEM_FAULT_MMU_STATUS_FAULT_RO_CLRMSK 0xFFFFFFEFU
+#define ROGUE_CR_FWCORE_MEM_FAULT_MMU_STATUS_FAULT_RO_EN 0x00000010U
+#define ROGUE_CR_FWCORE_MEM_FAULT_MMU_STATUS_FAULT_SHIFT 0U
+#define ROGUE_CR_FWCORE_MEM_FAULT_MMU_STATUS_FAULT_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_FWCORE_MEM_FAULT_MMU_STATUS_FAULT_EN 0x00000001U
+
+/* Register ROGUE_CR_FWCORE_MEM_FAULT_REQ_STATUS */
+#define ROGUE_CR_FWCORE_MEM_FAULT_REQ_STATUS 0x30B8U
+#define ROGUE_CR_FWCORE_MEM_FAULT_REQ_STATUS_MASKFULL 0x001FFFFFFFFFFFF0ULL
+#define ROGUE_CR_FWCORE_MEM_FAULT_REQ_STATUS_RNW_SHIFT 52U
+#define ROGUE_CR_FWCORE_MEM_FAULT_REQ_STATUS_RNW_CLRMSK 0xFFEFFFFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_MEM_FAULT_REQ_STATUS_RNW_EN 0x0010000000000000ULL
+#define ROGUE_CR_FWCORE_MEM_FAULT_REQ_STATUS_TAG_SB_SHIFT 46U
+#define ROGUE_CR_FWCORE_MEM_FAULT_REQ_STATUS_TAG_SB_CLRMSK 0xFFF03FFFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_MEM_FAULT_REQ_STATUS_TAG_ID_SHIFT 40U
+#define ROGUE_CR_FWCORE_MEM_FAULT_REQ_STATUS_TAG_ID_CLRMSK 0xFFFFC0FFFFFFFFFFULL
+#define ROGUE_CR_FWCORE_MEM_FAULT_REQ_STATUS_ADDRESS_SHIFT 4U
+#define ROGUE_CR_FWCORE_MEM_FAULT_REQ_STATUS_ADDRESS_CLRMSK 0xFFFFFF000000000FULL
+#define ROGUE_CR_FWCORE_MEM_FAULT_REQ_STATUS_ADDRESS_ALIGNSHIFT 4U
+#define ROGUE_CR_FWCORE_MEM_FAULT_REQ_STATUS_ADDRESS_ALIGNSIZE 16U
+
+/* Register ROGUE_CR_FWCORE_MEM_CTRL_INVAL */
+#define ROGUE_CR_FWCORE_MEM_CTRL_INVAL 0x30C0U
+#define ROGUE_CR_FWCORE_MEM_CTRL_INVAL_MASKFULL 0x000000000000000FULL
+#define ROGUE_CR_FWCORE_MEM_CTRL_INVAL_TLB_SHIFT 3U
+#define ROGUE_CR_FWCORE_MEM_CTRL_INVAL_TLB_CLRMSK 0xFFFFFFF7U
+#define ROGUE_CR_FWCORE_MEM_CTRL_INVAL_TLB_EN 0x00000008U
+#define ROGUE_CR_FWCORE_MEM_CTRL_INVAL_PC_SHIFT 2U
+#define ROGUE_CR_FWCORE_MEM_CTRL_INVAL_PC_CLRMSK 0xFFFFFFFBU
+#define ROGUE_CR_FWCORE_MEM_CTRL_INVAL_PC_EN 0x00000004U
+#define ROGUE_CR_FWCORE_MEM_CTRL_INVAL_PD_SHIFT 1U
+#define ROGUE_CR_FWCORE_MEM_CTRL_INVAL_PD_CLRMSK 0xFFFFFFFDU
+#define ROGUE_CR_FWCORE_MEM_CTRL_INVAL_PD_EN 0x00000002U
+#define ROGUE_CR_FWCORE_MEM_CTRL_INVAL_PT_SHIFT 0U
+#define ROGUE_CR_FWCORE_MEM_CTRL_INVAL_PT_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_FWCORE_MEM_CTRL_INVAL_PT_EN 0x00000001U
+
+/* Register ROGUE_CR_FWCORE_MEM_MMU_STATUS */
+#define ROGUE_CR_FWCORE_MEM_MMU_STATUS 0x30C8U
+#define ROGUE_CR_FWCORE_MEM_MMU_STATUS_MASKFULL 0x000000000FFFFFF7ULL
+#define ROGUE_CR_FWCORE_MEM_MMU_STATUS_PC_DATA_SHIFT 20U
+#define ROGUE_CR_FWCORE_MEM_MMU_STATUS_PC_DATA_CLRMSK 0xF00FFFFFU
+#define ROGUE_CR_FWCORE_MEM_MMU_STATUS_PD_DATA_SHIFT 12U
+#define ROGUE_CR_FWCORE_MEM_MMU_STATUS_PD_DATA_CLRMSK 0xFFF00FFFU
+#define ROGUE_CR_FWCORE_MEM_MMU_STATUS_PT_DATA_SHIFT 4U
+#define ROGUE_CR_FWCORE_MEM_MMU_STATUS_PT_DATA_CLRMSK 0xFFFFF00FU
+#define ROGUE_CR_FWCORE_MEM_MMU_STATUS_STALLED_SHIFT 2U
+#define ROGUE_CR_FWCORE_MEM_MMU_STATUS_STALLED_CLRMSK 0xFFFFFFFBU
+#define ROGUE_CR_FWCORE_MEM_MMU_STATUS_STALLED_EN 0x00000004U
+#define ROGUE_CR_FWCORE_MEM_MMU_STATUS_PAUSED_SHIFT 1U
+#define ROGUE_CR_FWCORE_MEM_MMU_STATUS_PAUSED_CLRMSK 0xFFFFFFFDU
+#define ROGUE_CR_FWCORE_MEM_MMU_STATUS_PAUSED_EN 0x00000002U
+#define ROGUE_CR_FWCORE_MEM_MMU_STATUS_BUSY_SHIFT 0U
+#define ROGUE_CR_FWCORE_MEM_MMU_STATUS_BUSY_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_FWCORE_MEM_MMU_STATUS_BUSY_EN 0x00000001U
+
+/* Register ROGUE_CR_FWCORE_MEM_READS_EXT_STATUS */
+#define ROGUE_CR_FWCORE_MEM_READS_EXT_STATUS 0x30D8U
+#define ROGUE_CR_FWCORE_MEM_READS_EXT_STATUS_MASKFULL 0x0000000000000FFFULL
+#define ROGUE_CR_FWCORE_MEM_READS_EXT_STATUS_MMU_SHIFT 0U
+#define ROGUE_CR_FWCORE_MEM_READS_EXT_STATUS_MMU_CLRMSK 0xFFFFF000U
+
+/* Register ROGUE_CR_FWCORE_MEM_READS_INT_STATUS */
+#define ROGUE_CR_FWCORE_MEM_READS_INT_STATUS 0x30E0U
+#define ROGUE_CR_FWCORE_MEM_READS_INT_STATUS_MASKFULL 0x00000000000007FFULL
+#define ROGUE_CR_FWCORE_MEM_READS_INT_STATUS_MMU_SHIFT 0U
+#define ROGUE_CR_FWCORE_MEM_READS_INT_STATUS_MMU_CLRMSK 0xFFFFF800U
+
+/* Register ROGUE_CR_FWCORE_WRAPPER_FENCE */
+#define ROGUE_CR_FWCORE_WRAPPER_FENCE 0x30E8U
+#define ROGUE_CR_FWCORE_WRAPPER_FENCE_MASKFULL 0x0000000000000001ULL
+#define ROGUE_CR_FWCORE_WRAPPER_FENCE_ID_SHIFT 0U
+#define ROGUE_CR_FWCORE_WRAPPER_FENCE_ID_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_FWCORE_WRAPPER_FENCE_ID_EN 0x00000001U
+
+/* Register group: ROGUE_CR_FWCORE_MEM_CAT_BASE, with 8 repeats */
+#define ROGUE_CR_FWCORE_MEM_CAT_BASE_REPEATCOUNT 8U
+/* Register ROGUE_CR_FWCORE_MEM_CAT_BASE0 */
+#define ROGUE_CR_FWCORE_MEM_CAT_BASE0 0x30F0U
+#define ROGUE_CR_FWCORE_MEM_CAT_BASE0_MASKFULL 0x000000FFFFFFF000ULL
+#define ROGUE_CR_FWCORE_MEM_CAT_BASE0_ADDR_SHIFT 12U
+#define ROGUE_CR_FWCORE_MEM_CAT_BASE0_ADDR_CLRMSK 0xFFFFFF0000000FFFULL
+#define ROGUE_CR_FWCORE_MEM_CAT_BASE0_ADDR_ALIGNSHIFT 12U
+#define ROGUE_CR_FWCORE_MEM_CAT_BASE0_ADDR_ALIGNSIZE 4096U
+
+/* Register ROGUE_CR_FWCORE_MEM_CAT_BASE1 */
+#define ROGUE_CR_FWCORE_MEM_CAT_BASE1 0x30F8U
+#define ROGUE_CR_FWCORE_MEM_CAT_BASE1_MASKFULL 0x000000FFFFFFF000ULL
+#define ROGUE_CR_FWCORE_MEM_CAT_BASE1_ADDR_SHIFT 12U
+#define ROGUE_CR_FWCORE_MEM_CAT_BASE1_ADDR_CLRMSK 0xFFFFFF0000000FFFULL
+#define ROGUE_CR_FWCORE_MEM_CAT_BASE1_ADDR_ALIGNSHIFT 12U
+#define ROGUE_CR_FWCORE_MEM_CAT_BASE1_ADDR_ALIGNSIZE 4096U
+
+/* Register ROGUE_CR_FWCORE_MEM_CAT_BASE2 */
+#define ROGUE_CR_FWCORE_MEM_CAT_BASE2 0x3100U
+#define ROGUE_CR_FWCORE_MEM_CAT_BASE2_MASKFULL 0x000000FFFFFFF000ULL
+#define ROGUE_CR_FWCORE_MEM_CAT_BASE2_ADDR_SHIFT 12U
+#define ROGUE_CR_FWCORE_MEM_CAT_BASE2_ADDR_CLRMSK 0xFFFFFF0000000FFFULL
+#define ROGUE_CR_FWCORE_MEM_CAT_BASE2_ADDR_ALIGNSHIFT 12U
+#define ROGUE_CR_FWCORE_MEM_CAT_BASE2_ADDR_ALIGNSIZE 4096U
+
+/* Register ROGUE_CR_FWCORE_MEM_CAT_BASE3 */
+#define ROGUE_CR_FWCORE_MEM_CAT_BASE3 0x3108U
+#define ROGUE_CR_FWCORE_MEM_CAT_BASE3_MASKFULL 0x000000FFFFFFF000ULL
+#define ROGUE_CR_FWCORE_MEM_CAT_BASE3_ADDR_SHIFT 12U
+#define ROGUE_CR_FWCORE_MEM_CAT_BASE3_ADDR_CLRMSK 0xFFFFFF0000000FFFULL
+#define ROGUE_CR_FWCORE_MEM_CAT_BASE3_ADDR_ALIGNSHIFT 12U
+#define ROGUE_CR_FWCORE_MEM_CAT_BASE3_ADDR_ALIGNSIZE 4096U
+
+/* Register ROGUE_CR_FWCORE_MEM_CAT_BASE4 */
+#define ROGUE_CR_FWCORE_MEM_CAT_BASE4 0x3110U
+#define ROGUE_CR_FWCORE_MEM_CAT_BASE4_MASKFULL 0x000000FFFFFFF000ULL
+#define ROGUE_CR_FWCORE_MEM_CAT_BASE4_ADDR_SHIFT 12U
+#define ROGUE_CR_FWCORE_MEM_CAT_BASE4_ADDR_CLRMSK 0xFFFFFF0000000FFFULL
+#define ROGUE_CR_FWCORE_MEM_CAT_BASE4_ADDR_ALIGNSHIFT 12U
+#define ROGUE_CR_FWCORE_MEM_CAT_BASE4_ADDR_ALIGNSIZE 4096U
+
+/* Register ROGUE_CR_FWCORE_MEM_CAT_BASE5 */
+#define ROGUE_CR_FWCORE_MEM_CAT_BASE5 0x3118U
+#define ROGUE_CR_FWCORE_MEM_CAT_BASE5_MASKFULL 0x000000FFFFFFF000ULL
+#define ROGUE_CR_FWCORE_MEM_CAT_BASE5_ADDR_SHIFT 12U
+#define ROGUE_CR_FWCORE_MEM_CAT_BASE5_ADDR_CLRMSK 0xFFFFFF0000000FFFULL
+#define ROGUE_CR_FWCORE_MEM_CAT_BASE5_ADDR_ALIGNSHIFT 12U
+#define ROGUE_CR_FWCORE_MEM_CAT_BASE5_ADDR_ALIGNSIZE 4096U
+
+/* Register ROGUE_CR_FWCORE_MEM_CAT_BASE6 */
+#define ROGUE_CR_FWCORE_MEM_CAT_BASE6 0x3120U
+#define ROGUE_CR_FWCORE_MEM_CAT_BASE6_MASKFULL 0x000000FFFFFFF000ULL
+#define ROGUE_CR_FWCORE_MEM_CAT_BASE6_ADDR_SHIFT 12U
+#define ROGUE_CR_FWCORE_MEM_CAT_BASE6_ADDR_CLRMSK 0xFFFFFF0000000FFFULL
+#define ROGUE_CR_FWCORE_MEM_CAT_BASE6_ADDR_ALIGNSHIFT 12U
+#define ROGUE_CR_FWCORE_MEM_CAT_BASE6_ADDR_ALIGNSIZE 4096U
+
+/* Register ROGUE_CR_FWCORE_MEM_CAT_BASE7 */
+#define ROGUE_CR_FWCORE_MEM_CAT_BASE7 0x3128U
+#define ROGUE_CR_FWCORE_MEM_CAT_BASE7_MASKFULL 0x000000FFFFFFF000ULL
+#define ROGUE_CR_FWCORE_MEM_CAT_BASE7_ADDR_SHIFT 12U
+#define ROGUE_CR_FWCORE_MEM_CAT_BASE7_ADDR_CLRMSK 0xFFFFFF0000000FFFULL
+#define ROGUE_CR_FWCORE_MEM_CAT_BASE7_ADDR_ALIGNSHIFT 12U
+#define ROGUE_CR_FWCORE_MEM_CAT_BASE7_ADDR_ALIGNSIZE 4096U
+
+/* Register ROGUE_CR_FWCORE_WDT_RESET */
+#define ROGUE_CR_FWCORE_WDT_RESET 0x3130U
+#define ROGUE_CR_FWCORE_WDT_RESET_MASKFULL 0x0000000000000001ULL
+#define ROGUE_CR_FWCORE_WDT_RESET_EN_SHIFT 0U
+#define ROGUE_CR_FWCORE_WDT_RESET_EN_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_FWCORE_WDT_RESET_EN_EN 0x00000001U
+
+/* Register ROGUE_CR_FWCORE_WDT_CTRL */
+#define ROGUE_CR_FWCORE_WDT_CTRL 0x3138U
+#define ROGUE_CR_FWCORE_WDT_CTRL_MASKFULL 0x00000000FFFF1F01ULL
+#define ROGUE_CR_FWCORE_WDT_CTRL_PROT_SHIFT 16U
+#define ROGUE_CR_FWCORE_WDT_CTRL_PROT_CLRMSK 0x0000FFFFU
+#define ROGUE_CR_FWCORE_WDT_CTRL_THRESHOLD_SHIFT 8U
+#define ROGUE_CR_FWCORE_WDT_CTRL_THRESHOLD_CLRMSK 0xFFFFE0FFU
+#define ROGUE_CR_FWCORE_WDT_CTRL_ENABLE_SHIFT 0U
+#define ROGUE_CR_FWCORE_WDT_CTRL_ENABLE_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_FWCORE_WDT_CTRL_ENABLE_EN 0x00000001U
+
+/* Register ROGUE_CR_FWCORE_WDT_COUNT */
+#define ROGUE_CR_FWCORE_WDT_COUNT 0x3140U
+#define ROGUE_CR_FWCORE_WDT_COUNT_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_FWCORE_WDT_COUNT_VALUE_SHIFT 0U
+#define ROGUE_CR_FWCORE_WDT_COUNT_VALUE_CLRMSK 0x00000000U
+
+/* Register group: ROGUE_CR_FWCORE_DMI_RESERVED0, with 4 repeats */
+#define ROGUE_CR_FWCORE_DMI_RESERVED0_REPEATCOUNT 4U
+/* Register ROGUE_CR_FWCORE_DMI_RESERVED00 */
+#define ROGUE_CR_FWCORE_DMI_RESERVED00 0x3400U
+#define ROGUE_CR_FWCORE_DMI_RESERVED00_MASKFULL 0x0000000000000000ULL
+
+/* Register ROGUE_CR_FWCORE_DMI_RESERVED01 */
+#define ROGUE_CR_FWCORE_DMI_RESERVED01 0x3408U
+#define ROGUE_CR_FWCORE_DMI_RESERVED01_MASKFULL 0x0000000000000000ULL
+
+/* Register ROGUE_CR_FWCORE_DMI_RESERVED02 */
+#define ROGUE_CR_FWCORE_DMI_RESERVED02 0x3410U
+#define ROGUE_CR_FWCORE_DMI_RESERVED02_MASKFULL 0x0000000000000000ULL
+
+/* Register ROGUE_CR_FWCORE_DMI_RESERVED03 */
+#define ROGUE_CR_FWCORE_DMI_RESERVED03 0x3418U
+#define ROGUE_CR_FWCORE_DMI_RESERVED03_MASKFULL 0x0000000000000000ULL
+
+/* Register ROGUE_CR_FWCORE_DMI_DATA0 */
+#define ROGUE_CR_FWCORE_DMI_DATA0 0x3420U
+#define ROGUE_CR_FWCORE_DMI_DATA0_MASKFULL 0x0000000000000000ULL
+
+/* Register ROGUE_CR_FWCORE_DMI_DATA1 */
+#define ROGUE_CR_FWCORE_DMI_DATA1 0x3428U
+#define ROGUE_CR_FWCORE_DMI_DATA1_MASKFULL 0x0000000000000000ULL
+
+/* Register group: ROGUE_CR_FWCORE_DMI_RESERVED1, with 5 repeats */
+#define ROGUE_CR_FWCORE_DMI_RESERVED1_REPEATCOUNT 5U
+/* Register ROGUE_CR_FWCORE_DMI_RESERVED10 */
+#define ROGUE_CR_FWCORE_DMI_RESERVED10 0x3430U
+#define ROGUE_CR_FWCORE_DMI_RESERVED10_MASKFULL 0x0000000000000000ULL
+
+/* Register ROGUE_CR_FWCORE_DMI_RESERVED11 */
+#define ROGUE_CR_FWCORE_DMI_RESERVED11 0x3438U
+#define ROGUE_CR_FWCORE_DMI_RESERVED11_MASKFULL 0x0000000000000000ULL
+
+/* Register ROGUE_CR_FWCORE_DMI_RESERVED12 */
+#define ROGUE_CR_FWCORE_DMI_RESERVED12 0x3440U
+#define ROGUE_CR_FWCORE_DMI_RESERVED12_MASKFULL 0x0000000000000000ULL
+
+/* Register ROGUE_CR_FWCORE_DMI_RESERVED13 */
+#define ROGUE_CR_FWCORE_DMI_RESERVED13 0x3448U
+#define ROGUE_CR_FWCORE_DMI_RESERVED13_MASKFULL 0x0000000000000000ULL
+
+/* Register ROGUE_CR_FWCORE_DMI_RESERVED14 */
+#define ROGUE_CR_FWCORE_DMI_RESERVED14 0x3450U
+#define ROGUE_CR_FWCORE_DMI_RESERVED14_MASKFULL 0x0000000000000000ULL
+
+/* Register ROGUE_CR_FWCORE_DMI_DMCONTROL */
+#define ROGUE_CR_FWCORE_DMI_DMCONTROL 0x3480U
+#define ROGUE_CR_FWCORE_DMI_DMCONTROL_MASKFULL 0x0000000000000000ULL
+
+/* Register ROGUE_CR_FWCORE_DMI_DMSTATUS */
+#define ROGUE_CR_FWCORE_DMI_DMSTATUS 0x3488U
+#define ROGUE_CR_FWCORE_DMI_DMSTATUS_MASKFULL 0x0000000000000000ULL
+
+/* Register group: ROGUE_CR_FWCORE_DMI_RESERVED2, with 4 repeats */
+#define ROGUE_CR_FWCORE_DMI_RESERVED2_REPEATCOUNT 4U
+/* Register ROGUE_CR_FWCORE_DMI_RESERVED20 */
+#define ROGUE_CR_FWCORE_DMI_RESERVED20 0x3490U
+#define ROGUE_CR_FWCORE_DMI_RESERVED20_MASKFULL 0x0000000000000000ULL
+
+/* Register ROGUE_CR_FWCORE_DMI_RESERVED21 */
+#define ROGUE_CR_FWCORE_DMI_RESERVED21 0x3498U
+#define ROGUE_CR_FWCORE_DMI_RESERVED21_MASKFULL 0x0000000000000000ULL
+
+/* Register ROGUE_CR_FWCORE_DMI_RESERVED22 */
+#define ROGUE_CR_FWCORE_DMI_RESERVED22 0x34A0U
+#define ROGUE_CR_FWCORE_DMI_RESERVED22_MASKFULL 0x0000000000000000ULL
+
+/* Register ROGUE_CR_FWCORE_DMI_RESERVED23 */
+#define ROGUE_CR_FWCORE_DMI_RESERVED23 0x34A8U
+#define ROGUE_CR_FWCORE_DMI_RESERVED23_MASKFULL 0x0000000000000000ULL
+
+/* Register ROGUE_CR_FWCORE_DMI_ABSTRACTCS */
+#define ROGUE_CR_FWCORE_DMI_ABSTRACTCS 0x34B0U
+#define ROGUE_CR_FWCORE_DMI_ABSTRACTCS_MASKFULL 0x0000000000000000ULL
+
+/* Register ROGUE_CR_FWCORE_DMI_COMMAND */
+#define ROGUE_CR_FWCORE_DMI_COMMAND 0x34B8U
+#define ROGUE_CR_FWCORE_DMI_COMMAND_MASKFULL 0x0000000000000000ULL
+
+/* Register ROGUE_CR_FWCORE_DMI_SBCS */
+#define ROGUE_CR_FWCORE_DMI_SBCS 0x35C0U
+#define ROGUE_CR_FWCORE_DMI_SBCS_MASKFULL 0x0000000000000000ULL
+
+/* Register ROGUE_CR_FWCORE_DMI_SBADDRESS0 */
+#define ROGUE_CR_FWCORE_DMI_SBADDRESS0 0x35C8U
+#define ROGUE_CR_FWCORE_DMI_SBADDRESS0_MASKFULL 0x0000000000000000ULL
+
+/* Register group: ROGUE_CR_FWCORE_DMI_RESERVED3, with 2 repeats */
+#define ROGUE_CR_FWCORE_DMI_RESERVED3_REPEATCOUNT 2U
+/* Register ROGUE_CR_FWCORE_DMI_RESERVED30 */
+#define ROGUE_CR_FWCORE_DMI_RESERVED30 0x34D0U
+#define ROGUE_CR_FWCORE_DMI_RESERVED30_MASKFULL 0x0000000000000000ULL
+
+/* Register ROGUE_CR_FWCORE_DMI_RESERVED31 */
+#define ROGUE_CR_FWCORE_DMI_RESERVED31 0x34D8U
+#define ROGUE_CR_FWCORE_DMI_RESERVED31_MASKFULL 0x0000000000000000ULL
+
+/* Register group: ROGUE_CR_FWCORE_DMI_SBDATA, with 4 repeats */
+#define ROGUE_CR_FWCORE_DMI_SBDATA_REPEATCOUNT 4U
+/* Register ROGUE_CR_FWCORE_DMI_SBDATA0 */
+#define ROGUE_CR_FWCORE_DMI_SBDATA0 0x35E0U
+#define ROGUE_CR_FWCORE_DMI_SBDATA0_MASKFULL 0x0000000000000000ULL
+
+/* Register ROGUE_CR_FWCORE_DMI_SBDATA1 */
+#define ROGUE_CR_FWCORE_DMI_SBDATA1 0x35E8U
+#define ROGUE_CR_FWCORE_DMI_SBDATA1_MASKFULL 0x0000000000000000ULL
+
+/* Register ROGUE_CR_FWCORE_DMI_SBDATA2 */
+#define ROGUE_CR_FWCORE_DMI_SBDATA2 0x35F0U
+#define ROGUE_CR_FWCORE_DMI_SBDATA2_MASKFULL 0x0000000000000000ULL
+
+/* Register ROGUE_CR_FWCORE_DMI_SBDATA3 */
+#define ROGUE_CR_FWCORE_DMI_SBDATA3 0x35F8U
+#define ROGUE_CR_FWCORE_DMI_SBDATA3_MASKFULL 0x0000000000000000ULL
+
+/* Register ROGUE_CR_FWCORE_DMI_HALTSUM0 */
+#define ROGUE_CR_FWCORE_DMI_HALTSUM0 0x3600U
+#define ROGUE_CR_FWCORE_DMI_HALTSUM0_MASKFULL 0x0000000000000000ULL
+
+/* Register ROGUE_CR_SLC_CTRL_MISC */
+#define ROGUE_CR_SLC_CTRL_MISC 0x3800U
+#define ROGUE_CR_SLC_CTRL_MISC_MASKFULL 0xFFFFFFFF01FF010FULL
+#define ROGUE_CR_SLC_CTRL_MISC_SCRAMBLE_BITS_SHIFT 32U
+#define ROGUE_CR_SLC_CTRL_MISC_SCRAMBLE_BITS_CLRMSK 0x00000000FFFFFFFFULL
+#define ROGUE_CR_SLC_CTRL_MISC_LAZYWB_OVERRIDE_SHIFT 24U
+#define ROGUE_CR_SLC_CTRL_MISC_LAZYWB_OVERRIDE_CLRMSK 0xFFFFFFFFFEFFFFFFULL
+#define ROGUE_CR_SLC_CTRL_MISC_LAZYWB_OVERRIDE_EN 0x0000000001000000ULL
+#define ROGUE_CR_SLC_CTRL_MISC_ADDR_DECODE_MODE_SHIFT 16U
+#define ROGUE_CR_SLC_CTRL_MISC_ADDR_DECODE_MODE_CLRMSK 0xFFFFFFFFFF00FFFFULL
+#define ROGUE_CR_SLC_CTRL_MISC_ADDR_DECODE_MODE_INTERLEAVED_64_BYTE 0x0000000000000000ULL
+#define ROGUE_CR_SLC_CTRL_MISC_ADDR_DECODE_MODE_INTERLEAVED_128_BYTE 0x0000000000010000ULL
+#define ROGUE_CR_SLC_CTRL_MISC_ADDR_DECODE_MODE_SIMPLE_HASH1 0x0000000000100000ULL
+#define ROGUE_CR_SLC_CTRL_MISC_ADDR_DECODE_MODE_SIMPLE_HASH2 0x0000000000110000ULL
+#define ROGUE_CR_SLC_CTRL_MISC_ADDR_DECODE_MODE_PVR_HASH1 0x0000000000200000ULL
+#define ROGUE_CR_SLC_CTRL_MISC_ADDR_DECODE_MODE_PVR_HASH2_SCRAMBLE 0x0000000000210000ULL
+#define ROGUE_CR_SLC_CTRL_MISC_PAUSE_SHIFT 8U
+#define ROGUE_CR_SLC_CTRL_MISC_PAUSE_CLRMSK 0xFFFFFFFFFFFFFEFFULL
+#define ROGUE_CR_SLC_CTRL_MISC_PAUSE_EN 0x0000000000000100ULL
+#define ROGUE_CR_SLC_CTRL_MISC_RESP_PRIORITY_SHIFT 3U
+#define ROGUE_CR_SLC_CTRL_MISC_RESP_PRIORITY_CLRMSK 0xFFFFFFFFFFFFFFF7ULL
+#define ROGUE_CR_SLC_CTRL_MISC_RESP_PRIORITY_EN 0x0000000000000008ULL
+#define ROGUE_CR_SLC_CTRL_MISC_ENABLE_LINE_USE_LIMIT_SHIFT 2U
+#define ROGUE_CR_SLC_CTRL_MISC_ENABLE_LINE_USE_LIMIT_CLRMSK 0xFFFFFFFFFFFFFFFBULL
+#define ROGUE_CR_SLC_CTRL_MISC_ENABLE_LINE_USE_LIMIT_EN 0x0000000000000004ULL
+#define ROGUE_CR_SLC_CTRL_MISC_ENABLE_PSG_HAZARD_CHECK_SHIFT 1U
+#define ROGUE_CR_SLC_CTRL_MISC_ENABLE_PSG_HAZARD_CHECK_CLRMSK 0xFFFFFFFFFFFFFFFDULL
+#define ROGUE_CR_SLC_CTRL_MISC_ENABLE_PSG_HAZARD_CHECK_EN 0x0000000000000002ULL
+#define ROGUE_CR_SLC_CTRL_MISC_BYPASS_BURST_COMBINER_SHIFT 0U
+#define ROGUE_CR_SLC_CTRL_MISC_BYPASS_BURST_COMBINER_CLRMSK 0xFFFFFFFFFFFFFFFEULL
+#define ROGUE_CR_SLC_CTRL_MISC_BYPASS_BURST_COMBINER_EN 0x0000000000000001ULL
+
+/* Register ROGUE_CR_SLC_CTRL_FLUSH_INVAL */
+#define ROGUE_CR_SLC_CTRL_FLUSH_INVAL 0x3818U
+#define ROGUE_CR_SLC_CTRL_FLUSH_INVAL_MASKFULL 0x0000000080000FFFULL
+#define ROGUE_CR_SLC_CTRL_FLUSH_INVAL_LAZY_SHIFT 31U
+#define ROGUE_CR_SLC_CTRL_FLUSH_INVAL_LAZY_CLRMSK 0x7FFFFFFFU
+#define ROGUE_CR_SLC_CTRL_FLUSH_INVAL_LAZY_EN 0x80000000U
+#define ROGUE_CR_SLC_CTRL_FLUSH_INVAL_DM_FASTRENDER_SHIFT 11U
+#define ROGUE_CR_SLC_CTRL_FLUSH_INVAL_DM_FASTRENDER_CLRMSK 0xFFFFF7FFU
+#define ROGUE_CR_SLC_CTRL_FLUSH_INVAL_DM_FASTRENDER_EN 0x00000800U
+#define ROGUE_CR_SLC_CTRL_FLUSH_INVAL_DM_RAY_VERTEX_SHIFT 10U
+#define ROGUE_CR_SLC_CTRL_FLUSH_INVAL_DM_RAY_VERTEX_CLRMSK 0xFFFFFBFFU
+#define ROGUE_CR_SLC_CTRL_FLUSH_INVAL_DM_RAY_VERTEX_EN 0x00000400U
+#define ROGUE_CR_SLC_CTRL_FLUSH_INVAL_DM_RAY_SHIFT 9U
+#define ROGUE_CR_SLC_CTRL_FLUSH_INVAL_DM_RAY_CLRMSK 0xFFFFFDFFU
+#define ROGUE_CR_SLC_CTRL_FLUSH_INVAL_DM_RAY_EN 0x00000200U
+#define ROGUE_CR_SLC_CTRL_FLUSH_INVAL_DM_FRC_SHIFT 8U
+#define ROGUE_CR_SLC_CTRL_FLUSH_INVAL_DM_FRC_CLRMSK 0xFFFFFEFFU
+#define ROGUE_CR_SLC_CTRL_FLUSH_INVAL_DM_FRC_EN 0x00000100U
+#define ROGUE_CR_SLC_CTRL_FLUSH_INVAL_DM_VXE_SHIFT 7U
+#define ROGUE_CR_SLC_CTRL_FLUSH_INVAL_DM_VXE_CLRMSK 0xFFFFFF7FU
+#define ROGUE_CR_SLC_CTRL_FLUSH_INVAL_DM_VXE_EN 0x00000080U
+#define ROGUE_CR_SLC_CTRL_FLUSH_INVAL_DM_VXD_SHIFT 6U
+#define ROGUE_CR_SLC_CTRL_FLUSH_INVAL_DM_VXD_CLRMSK 0xFFFFFFBFU
+#define ROGUE_CR_SLC_CTRL_FLUSH_INVAL_DM_VXD_EN 0x00000040U
+#define ROGUE_CR_SLC_CTRL_FLUSH_INVAL_DM_HOST_META_SHIFT 5U
+#define ROGUE_CR_SLC_CTRL_FLUSH_INVAL_DM_HOST_META_CLRMSK 0xFFFFFFDFU
+#define ROGUE_CR_SLC_CTRL_FLUSH_INVAL_DM_HOST_META_EN 0x00000020U
+#define ROGUE_CR_SLC_CTRL_FLUSH_INVAL_DM_MMU_SHIFT 4U
+#define ROGUE_CR_SLC_CTRL_FLUSH_INVAL_DM_MMU_CLRMSK 0xFFFFFFEFU
+#define ROGUE_CR_SLC_CTRL_FLUSH_INVAL_DM_MMU_EN 0x00000010U
+#define ROGUE_CR_SLC_CTRL_FLUSH_INVAL_DM_COMPUTE_SHIFT 3U
+#define ROGUE_CR_SLC_CTRL_FLUSH_INVAL_DM_COMPUTE_CLRMSK 0xFFFFFFF7U
+#define ROGUE_CR_SLC_CTRL_FLUSH_INVAL_DM_COMPUTE_EN 0x00000008U
+#define ROGUE_CR_SLC_CTRL_FLUSH_INVAL_DM_PIXEL_SHIFT 2U
+#define ROGUE_CR_SLC_CTRL_FLUSH_INVAL_DM_PIXEL_CLRMSK 0xFFFFFFFBU
+#define ROGUE_CR_SLC_CTRL_FLUSH_INVAL_DM_PIXEL_EN 0x00000004U
+#define ROGUE_CR_SLC_CTRL_FLUSH_INVAL_DM_TA_SHIFT 1U
+#define ROGUE_CR_SLC_CTRL_FLUSH_INVAL_DM_TA_CLRMSK 0xFFFFFFFDU
+#define ROGUE_CR_SLC_CTRL_FLUSH_INVAL_DM_TA_EN 0x00000002U
+#define ROGUE_CR_SLC_CTRL_FLUSH_INVAL_ALL_SHIFT 0U
+#define ROGUE_CR_SLC_CTRL_FLUSH_INVAL_ALL_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_SLC_CTRL_FLUSH_INVAL_ALL_EN 0x00000001U
+
+/* Register ROGUE_CR_SLC_STATUS0 */
+#define ROGUE_CR_SLC_STATUS0 0x3820U
+#define ROGUE_CR_SLC_STATUS0_MASKFULL 0x0000000000000007ULL
+#define ROGUE_CR_SLC_STATUS0_FLUSH_INVAL_PENDING_SHIFT 2U
+#define ROGUE_CR_SLC_STATUS0_FLUSH_INVAL_PENDING_CLRMSK 0xFFFFFFFBU
+#define ROGUE_CR_SLC_STATUS0_FLUSH_INVAL_PENDING_EN 0x00000004U
+#define ROGUE_CR_SLC_STATUS0_INVAL_PENDING_SHIFT 1U
+#define ROGUE_CR_SLC_STATUS0_INVAL_PENDING_CLRMSK 0xFFFFFFFDU
+#define ROGUE_CR_SLC_STATUS0_INVAL_PENDING_EN 0x00000002U
+#define ROGUE_CR_SLC_STATUS0_FLUSH_PENDING_SHIFT 0U
+#define ROGUE_CR_SLC_STATUS0_FLUSH_PENDING_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_SLC_STATUS0_FLUSH_PENDING_EN 0x00000001U
+
+/* Register ROGUE_CR_SLC_CTRL_BYPASS */
+#define ROGUE_CR_SLC_CTRL_BYPASS 0x3828U
+#define ROGUE_CR_SLC_CTRL_BYPASS__XE_MEM__MASKFULL 0x0FFFFFFFFFFF7FFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_MASKFULL 0x000000000FFFFFFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TFBC_COMP_ZLS_SHIFT 59U
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TFBC_COMP_ZLS_CLRMSK 0xF7FFFFFFFFFFFFFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TFBC_COMP_ZLS_EN 0x0800000000000000ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TFBC_DECOMP_ZLS_HEADER_SHIFT 58U
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TFBC_DECOMP_ZLS_HEADER_CLRMSK 0xFBFFFFFFFFFFFFFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TFBC_DECOMP_ZLS_HEADER_EN 0x0400000000000000ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TFBC_DECOMP_TCU_HEADER_SHIFT 57U
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TFBC_DECOMP_TCU_HEADER_CLRMSK 0xFDFFFFFFFFFFFFFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TFBC_DECOMP_TCU_HEADER_EN 0x0200000000000000ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TFBC_DECOMP_ZLS_DATA_SHIFT 56U
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TFBC_DECOMP_ZLS_DATA_CLRMSK 0xFEFFFFFFFFFFFFFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TFBC_DECOMP_ZLS_DATA_EN 0x0100000000000000ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TFBC_DECOMP_TCU_DATA_SHIFT 55U
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TFBC_DECOMP_TCU_DATA_CLRMSK 0xFF7FFFFFFFFFFFFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TFBC_DECOMP_TCU_DATA_EN 0x0080000000000000ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TFBC_COMP_PBE_SHIFT 54U
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TFBC_COMP_PBE_CLRMSK 0xFFBFFFFFFFFFFFFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TFBC_COMP_PBE_EN 0x0040000000000000ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TCU_DM_COMPUTE_SHIFT 53U
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TCU_DM_COMPUTE_CLRMSK 0xFFDFFFFFFFFFFFFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TCU_DM_COMPUTE_EN 0x0020000000000000ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_PDSRW_NOLINEFILL_SHIFT 52U
+#define ROGUE_CR_SLC_CTRL_BYPASS_PDSRW_NOLINEFILL_CLRMSK 0xFFEFFFFFFFFFFFFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_PDSRW_NOLINEFILL_EN 0x0010000000000000ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_PBE_NOLINEFILL_SHIFT 51U
+#define ROGUE_CR_SLC_CTRL_BYPASS_PBE_NOLINEFILL_CLRMSK 0xFFF7FFFFFFFFFFFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_PBE_NOLINEFILL_EN 0x0008000000000000ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_FBC_SHIFT 50U
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_FBC_CLRMSK 0xFFFBFFFFFFFFFFFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_FBC_EN 0x0004000000000000ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_IPF_RREQ_SHIFT 49U
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_IPF_RREQ_CLRMSK 0xFFFDFFFFFFFFFFFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_IPF_RREQ_EN 0x0002000000000000ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_IPF_CREQ_SHIFT 48U
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_IPF_CREQ_CLRMSK 0xFFFEFFFFFFFFFFFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_IPF_CREQ_EN 0x0001000000000000ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_IPF_PREQ_SHIFT 47U
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_IPF_PREQ_CLRMSK 0xFFFF7FFFFFFFFFFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_IPF_PREQ_EN 0x0000800000000000ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_IPF_DBSC_SHIFT 46U
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_IPF_DBSC_CLRMSK 0xFFFFBFFFFFFFFFFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_IPF_DBSC_EN 0x0000400000000000ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TCU_SHIFT 45U
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TCU_CLRMSK 0xFFFFDFFFFFFFFFFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TCU_EN 0x0000200000000000ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_PBE_SHIFT 44U
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_PBE_CLRMSK 0xFFFFEFFFFFFFFFFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_PBE_EN 0x0000100000000000ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_ISP_SHIFT 43U
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_ISP_CLRMSK 0xFFFFF7FFFFFFFFFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_ISP_EN 0x0000080000000000ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_PM_SHIFT 42U
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_PM_CLRMSK 0xFFFFFBFFFFFFFFFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_PM_EN 0x0000040000000000ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TDM_SHIFT 41U
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TDM_CLRMSK 0xFFFFFDFFFFFFFFFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TDM_EN 0x0000020000000000ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_CDM_SHIFT 40U
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_CDM_CLRMSK 0xFFFFFEFFFFFFFFFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_CDM_EN 0x0000010000000000ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TSPF_PDS_STATE_SHIFT 39U
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TSPF_PDS_STATE_CLRMSK 0xFFFFFF7FFFFFFFFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TSPF_PDS_STATE_EN 0x0000008000000000ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TSPF_DB_SHIFT 38U
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TSPF_DB_CLRMSK 0xFFFFFFBFFFFFFFFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TSPF_DB_EN 0x0000004000000000ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TSPF_VTX_VAR_SHIFT 37U
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TSPF_VTX_VAR_CLRMSK 0xFFFFFFDFFFFFFFFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TSPF_VTX_VAR_EN 0x0000002000000000ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_VDM_SHIFT 36U
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_VDM_CLRMSK 0xFFFFFFEFFFFFFFFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_VDM_EN 0x0000001000000000ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TA_PSG_STREAM_SHIFT 35U
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TA_PSG_STREAM_CLRMSK 0xFFFFFFF7FFFFFFFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TA_PSG_STREAM_EN 0x0000000800000000ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TA_PSG_REGION_SHIFT 34U
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TA_PSG_REGION_CLRMSK 0xFFFFFFFBFFFFFFFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TA_PSG_REGION_EN 0x0000000400000000ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TA_VCE_SHIFT 33U
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TA_VCE_CLRMSK 0xFFFFFFFDFFFFFFFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TA_VCE_EN 0x0000000200000000ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TA_PPP_SHIFT 32U
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TA_PPP_CLRMSK 0xFFFFFFFEFFFFFFFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TA_PPP_EN 0x0000000100000000ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_DM_FASTRENDER_SHIFT 31U
+#define ROGUE_CR_SLC_CTRL_BYPASS_DM_FASTRENDER_CLRMSK 0xFFFFFFFF7FFFFFFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_DM_FASTRENDER_EN 0x0000000080000000ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_DM_PM_ALIST_SHIFT 30U
+#define ROGUE_CR_SLC_CTRL_BYPASS_DM_PM_ALIST_CLRMSK 0xFFFFFFFFBFFFFFFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_DM_PM_ALIST_EN 0x0000000040000000ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_DM_PB_TE_SHIFT 29U
+#define ROGUE_CR_SLC_CTRL_BYPASS_DM_PB_TE_CLRMSK 0xFFFFFFFFDFFFFFFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_DM_PB_TE_EN 0x0000000020000000ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_DM_PB_VCE_SHIFT 28U
+#define ROGUE_CR_SLC_CTRL_BYPASS_DM_PB_VCE_CLRMSK 0xFFFFFFFFEFFFFFFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_DM_PB_VCE_EN 0x0000000010000000ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_DM_RAY_VERTEX_SHIFT 27U
+#define ROGUE_CR_SLC_CTRL_BYPASS_DM_RAY_VERTEX_CLRMSK 0xFFFFFFFFF7FFFFFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_DM_RAY_VERTEX_EN 0x0000000008000000ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_DM_RAY_SHIFT 26U
+#define ROGUE_CR_SLC_CTRL_BYPASS_DM_RAY_CLRMSK 0xFFFFFFFFFBFFFFFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_DM_RAY_EN 0x0000000004000000ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_IPF_CPF_SHIFT 25U
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_IPF_CPF_CLRMSK 0xFFFFFFFFFDFFFFFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_IPF_CPF_EN 0x0000000002000000ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TPU_SHIFT 24U
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TPU_CLRMSK 0xFFFFFFFFFEFFFFFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TPU_EN 0x0000000001000000ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_FBDC_SHIFT 23U
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_FBDC_CLRMSK 0xFFFFFFFFFF7FFFFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_FBDC_EN 0x0000000000800000ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TLA_SHIFT 22U
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TLA_CLRMSK 0xFFFFFFFFFFBFFFFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TLA_EN 0x0000000000400000ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_BYP_CC_N_SHIFT 21U
+#define ROGUE_CR_SLC_CTRL_BYPASS_BYP_CC_N_CLRMSK 0xFFFFFFFFFFDFFFFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_BYP_CC_N_EN 0x0000000000200000ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_BYP_CC_SHIFT 20U
+#define ROGUE_CR_SLC_CTRL_BYPASS_BYP_CC_CLRMSK 0xFFFFFFFFFFEFFFFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_BYP_CC_EN 0x0000000000100000ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_MCU_SHIFT 19U
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_MCU_CLRMSK 0xFFFFFFFFFFF7FFFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_MCU_EN 0x0000000000080000ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_PDS_SHIFT 18U
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_PDS_CLRMSK 0xFFFFFFFFFFFBFFFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_PDS_EN 0x0000000000040000ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TPF_SHIFT 17U
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TPF_CLRMSK 0xFFFFFFFFFFFDFFFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TPF_EN 0x0000000000020000ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TA_TPC_SHIFT 16U
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TA_TPC_CLRMSK 0xFFFFFFFFFFFEFFFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_TA_TPC_EN 0x0000000000010000ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_IPF_OBJ_SHIFT 15U
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_IPF_OBJ_CLRMSK 0xFFFFFFFFFFFF7FFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_IPF_OBJ_EN 0x0000000000008000ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_USC_SHIFT 14U
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_USC_CLRMSK 0xFFFFFFFFFFFFBFFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_USC_EN 0x0000000000004000ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_META_SHIFT 13U
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_META_CLRMSK 0xFFFFFFFFFFFFDFFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_META_EN 0x0000000000002000ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_HOST_SHIFT 12U
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_HOST_CLRMSK 0xFFFFFFFFFFFFEFFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_HOST_EN 0x0000000000001000ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_MMU_PT_SHIFT 11U
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_MMU_PT_CLRMSK 0xFFFFFFFFFFFFF7FFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_MMU_PT_EN 0x0000000000000800ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_MMU_PD_SHIFT 10U
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_MMU_PD_CLRMSK 0xFFFFFFFFFFFFFBFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_MMU_PD_EN 0x0000000000000400ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_MMU_PC_SHIFT 9U
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_MMU_PC_CLRMSK 0xFFFFFFFFFFFFFDFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_REQ_MMU_PC_EN 0x0000000000000200ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_DM_FRC_SHIFT 8U
+#define ROGUE_CR_SLC_CTRL_BYPASS_DM_FRC_CLRMSK 0xFFFFFFFFFFFFFEFFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_DM_FRC_EN 0x0000000000000100ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_DM_VXE_SHIFT 7U
+#define ROGUE_CR_SLC_CTRL_BYPASS_DM_VXE_CLRMSK 0xFFFFFFFFFFFFFF7FULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_DM_VXE_EN 0x0000000000000080ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_DM_VXD_SHIFT 6U
+#define ROGUE_CR_SLC_CTRL_BYPASS_DM_VXD_CLRMSK 0xFFFFFFFFFFFFFFBFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_DM_VXD_EN 0x0000000000000040ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_DM_HOST_META_SHIFT 5U
+#define ROGUE_CR_SLC_CTRL_BYPASS_DM_HOST_META_CLRMSK 0xFFFFFFFFFFFFFFDFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_DM_HOST_META_EN 0x0000000000000020ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_DM_MMU_SHIFT 4U
+#define ROGUE_CR_SLC_CTRL_BYPASS_DM_MMU_CLRMSK 0xFFFFFFFFFFFFFFEFULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_DM_MMU_EN 0x0000000000000010ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_DM_COMPUTE_SHIFT 3U
+#define ROGUE_CR_SLC_CTRL_BYPASS_DM_COMPUTE_CLRMSK 0xFFFFFFFFFFFFFFF7ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_DM_COMPUTE_EN 0x0000000000000008ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_DM_PIXEL_SHIFT 2U
+#define ROGUE_CR_SLC_CTRL_BYPASS_DM_PIXEL_CLRMSK 0xFFFFFFFFFFFFFFFBULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_DM_PIXEL_EN 0x0000000000000004ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_DM_TA_SHIFT 1U
+#define ROGUE_CR_SLC_CTRL_BYPASS_DM_TA_CLRMSK 0xFFFFFFFFFFFFFFFDULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_DM_TA_EN 0x0000000000000002ULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_ALL_SHIFT 0U
+#define ROGUE_CR_SLC_CTRL_BYPASS_ALL_CLRMSK 0xFFFFFFFFFFFFFFFEULL
+#define ROGUE_CR_SLC_CTRL_BYPASS_ALL_EN 0x0000000000000001ULL
+
+/* Register ROGUE_CR_SLC_STATUS1 */
+#define ROGUE_CR_SLC_STATUS1 0x3870U
+#define ROGUE_CR_SLC_STATUS1_MASKFULL 0x800003FF03FFFFFFULL
+#define ROGUE_CR_SLC_STATUS1_PAUSED_SHIFT 63U
+#define ROGUE_CR_SLC_STATUS1_PAUSED_CLRMSK 0x7FFFFFFFFFFFFFFFULL
+#define ROGUE_CR_SLC_STATUS1_PAUSED_EN 0x8000000000000000ULL
+#define ROGUE_CR_SLC_STATUS1_READS1_SHIFT 32U
+#define ROGUE_CR_SLC_STATUS1_READS1_CLRMSK 0xFFFFFC00FFFFFFFFULL
+#define ROGUE_CR_SLC_STATUS1_READS0_SHIFT 16U
+#define ROGUE_CR_SLC_STATUS1_READS0_CLRMSK 0xFFFFFFFFFC00FFFFULL
+#define ROGUE_CR_SLC_STATUS1_READS1_EXT_SHIFT 8U
+#define ROGUE_CR_SLC_STATUS1_READS1_EXT_CLRMSK 0xFFFFFFFFFFFF00FFULL
+#define ROGUE_CR_SLC_STATUS1_READS0_EXT_SHIFT 0U
+#define ROGUE_CR_SLC_STATUS1_READS0_EXT_CLRMSK 0xFFFFFFFFFFFFFF00ULL
+
+/* Register ROGUE_CR_SLC_IDLE */
+#define ROGUE_CR_SLC_IDLE 0x3898U
+#define ROGUE_CR_SLC_IDLE__XE_MEM__MASKFULL 0x00000000000003FFULL
+#define ROGUE_CR_SLC_IDLE_MASKFULL 0x00000000000000FFULL
+#define ROGUE_CR_SLC_IDLE_MH_SYSARB1_SHIFT 9U
+#define ROGUE_CR_SLC_IDLE_MH_SYSARB1_CLRMSK 0xFFFFFDFFU
+#define ROGUE_CR_SLC_IDLE_MH_SYSARB1_EN 0x00000200U
+#define ROGUE_CR_SLC_IDLE_MH_SYSARB0_SHIFT 8U
+#define ROGUE_CR_SLC_IDLE_MH_SYSARB0_CLRMSK 0xFFFFFEFFU
+#define ROGUE_CR_SLC_IDLE_MH_SYSARB0_EN 0x00000100U
+#define ROGUE_CR_SLC_IDLE_IMGBV4_SHIFT 7U
+#define ROGUE_CR_SLC_IDLE_IMGBV4_CLRMSK 0xFFFFFF7FU
+#define ROGUE_CR_SLC_IDLE_IMGBV4_EN 0x00000080U
+#define ROGUE_CR_SLC_IDLE_CACHE_BANKS_SHIFT 6U
+#define ROGUE_CR_SLC_IDLE_CACHE_BANKS_CLRMSK 0xFFFFFFBFU
+#define ROGUE_CR_SLC_IDLE_CACHE_BANKS_EN 0x00000040U
+#define ROGUE_CR_SLC_IDLE_RBOFIFO_SHIFT 5U
+#define ROGUE_CR_SLC_IDLE_RBOFIFO_CLRMSK 0xFFFFFFDFU
+#define ROGUE_CR_SLC_IDLE_RBOFIFO_EN 0x00000020U
+#define ROGUE_CR_SLC_IDLE_FRC_CONV_SHIFT 4U
+#define ROGUE_CR_SLC_IDLE_FRC_CONV_CLRMSK 0xFFFFFFEFU
+#define ROGUE_CR_SLC_IDLE_FRC_CONV_EN 0x00000010U
+#define ROGUE_CR_SLC_IDLE_VXE_CONV_SHIFT 3U
+#define ROGUE_CR_SLC_IDLE_VXE_CONV_CLRMSK 0xFFFFFFF7U
+#define ROGUE_CR_SLC_IDLE_VXE_CONV_EN 0x00000008U
+#define ROGUE_CR_SLC_IDLE_VXD_CONV_SHIFT 2U
+#define ROGUE_CR_SLC_IDLE_VXD_CONV_CLRMSK 0xFFFFFFFBU
+#define ROGUE_CR_SLC_IDLE_VXD_CONV_EN 0x00000004U
+#define ROGUE_CR_SLC_IDLE_BIF1_CONV_SHIFT 1U
+#define ROGUE_CR_SLC_IDLE_BIF1_CONV_CLRMSK 0xFFFFFFFDU
+#define ROGUE_CR_SLC_IDLE_BIF1_CONV_EN 0x00000002U
+#define ROGUE_CR_SLC_IDLE_CBAR_SHIFT 0U
+#define ROGUE_CR_SLC_IDLE_CBAR_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_SLC_IDLE_CBAR_EN 0x00000001U
+
+/* Register ROGUE_CR_SLC_STATUS2 */
+#define ROGUE_CR_SLC_STATUS2 0x3908U
+#define ROGUE_CR_SLC_STATUS2_MASKFULL 0x000003FF03FFFFFFULL
+#define ROGUE_CR_SLC_STATUS2_READS3_SHIFT 32U
+#define ROGUE_CR_SLC_STATUS2_READS3_CLRMSK 0xFFFFFC00FFFFFFFFULL
+#define ROGUE_CR_SLC_STATUS2_READS2_SHIFT 16U
+#define ROGUE_CR_SLC_STATUS2_READS2_CLRMSK 0xFFFFFFFFFC00FFFFULL
+#define ROGUE_CR_SLC_STATUS2_READS3_EXT_SHIFT 8U
+#define ROGUE_CR_SLC_STATUS2_READS3_EXT_CLRMSK 0xFFFFFFFFFFFF00FFULL
+#define ROGUE_CR_SLC_STATUS2_READS2_EXT_SHIFT 0U
+#define ROGUE_CR_SLC_STATUS2_READS2_EXT_CLRMSK 0xFFFFFFFFFFFFFF00ULL
+
+/* Register ROGUE_CR_SLC_CTRL_MISC2 */
+#define ROGUE_CR_SLC_CTRL_MISC2 0x3930U
+#define ROGUE_CR_SLC_CTRL_MISC2_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_SLC_CTRL_MISC2_SCRAMBLE_BITS_SHIFT 0U
+#define ROGUE_CR_SLC_CTRL_MISC2_SCRAMBLE_BITS_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_SLC_CROSSBAR_LOAD_BALANCE */
+#define ROGUE_CR_SLC_CROSSBAR_LOAD_BALANCE 0x3938U
+#define ROGUE_CR_SLC_CROSSBAR_LOAD_BALANCE_MASKFULL 0x0000000000000001ULL
+#define ROGUE_CR_SLC_CROSSBAR_LOAD_BALANCE_BYPASS_SHIFT 0U
+#define ROGUE_CR_SLC_CROSSBAR_LOAD_BALANCE_BYPASS_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_SLC_CROSSBAR_LOAD_BALANCE_BYPASS_EN 0x00000001U
+
+/* Register ROGUE_CR_USC_UVS0_CHECKSUM */
+#define ROGUE_CR_USC_UVS0_CHECKSUM 0x5000U
+#define ROGUE_CR_USC_UVS0_CHECKSUM_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_USC_UVS0_CHECKSUM_VALUE_SHIFT 0U
+#define ROGUE_CR_USC_UVS0_CHECKSUM_VALUE_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_USC_UVS1_CHECKSUM */
+#define ROGUE_CR_USC_UVS1_CHECKSUM 0x5008U
+#define ROGUE_CR_USC_UVS1_CHECKSUM_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_USC_UVS1_CHECKSUM_VALUE_SHIFT 0U
+#define ROGUE_CR_USC_UVS1_CHECKSUM_VALUE_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_USC_UVS2_CHECKSUM */
+#define ROGUE_CR_USC_UVS2_CHECKSUM 0x5010U
+#define ROGUE_CR_USC_UVS2_CHECKSUM_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_USC_UVS2_CHECKSUM_VALUE_SHIFT 0U
+#define ROGUE_CR_USC_UVS2_CHECKSUM_VALUE_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_USC_UVS3_CHECKSUM */
+#define ROGUE_CR_USC_UVS3_CHECKSUM 0x5018U
+#define ROGUE_CR_USC_UVS3_CHECKSUM_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_USC_UVS3_CHECKSUM_VALUE_SHIFT 0U
+#define ROGUE_CR_USC_UVS3_CHECKSUM_VALUE_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_PPP_SIGNATURE */
+#define ROGUE_CR_PPP_SIGNATURE 0x5020U
+#define ROGUE_CR_PPP_SIGNATURE_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_PPP_SIGNATURE_VALUE_SHIFT 0U
+#define ROGUE_CR_PPP_SIGNATURE_VALUE_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_TE_SIGNATURE */
+#define ROGUE_CR_TE_SIGNATURE 0x5028U
+#define ROGUE_CR_TE_SIGNATURE_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_TE_SIGNATURE_VALUE_SHIFT 0U
+#define ROGUE_CR_TE_SIGNATURE_VALUE_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_TE_CHECKSUM */
+#define ROGUE_CR_TE_CHECKSUM 0x5110U
+#define ROGUE_CR_TE_CHECKSUM_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_TE_CHECKSUM_VALUE_SHIFT 0U
+#define ROGUE_CR_TE_CHECKSUM_VALUE_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_USC_UVB_CHECKSUM */
+#define ROGUE_CR_USC_UVB_CHECKSUM 0x5118U
+#define ROGUE_CR_USC_UVB_CHECKSUM_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_USC_UVB_CHECKSUM_VALUE_SHIFT 0U
+#define ROGUE_CR_USC_UVB_CHECKSUM_VALUE_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_VCE_CHECKSUM */
+#define ROGUE_CR_VCE_CHECKSUM 0x5030U
+#define ROGUE_CR_VCE_CHECKSUM_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_VCE_CHECKSUM_VALUE_SHIFT 0U
+#define ROGUE_CR_VCE_CHECKSUM_VALUE_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_ISP_PDS_CHECKSUM */
+#define ROGUE_CR_ISP_PDS_CHECKSUM 0x5038U
+#define ROGUE_CR_ISP_PDS_CHECKSUM_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_ISP_PDS_CHECKSUM_VALUE_SHIFT 0U
+#define ROGUE_CR_ISP_PDS_CHECKSUM_VALUE_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_ISP_TPF_CHECKSUM */
+#define ROGUE_CR_ISP_TPF_CHECKSUM 0x5040U
+#define ROGUE_CR_ISP_TPF_CHECKSUM_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_ISP_TPF_CHECKSUM_VALUE_SHIFT 0U
+#define ROGUE_CR_ISP_TPF_CHECKSUM_VALUE_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_TFPU_PLANE0_CHECKSUM */
+#define ROGUE_CR_TFPU_PLANE0_CHECKSUM 0x5048U
+#define ROGUE_CR_TFPU_PLANE0_CHECKSUM_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_TFPU_PLANE0_CHECKSUM_VALUE_SHIFT 0U
+#define ROGUE_CR_TFPU_PLANE0_CHECKSUM_VALUE_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_TFPU_PLANE1_CHECKSUM */
+#define ROGUE_CR_TFPU_PLANE1_CHECKSUM 0x5050U
+#define ROGUE_CR_TFPU_PLANE1_CHECKSUM_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_TFPU_PLANE1_CHECKSUM_VALUE_SHIFT 0U
+#define ROGUE_CR_TFPU_PLANE1_CHECKSUM_VALUE_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_PBE_CHECKSUM */
+#define ROGUE_CR_PBE_CHECKSUM 0x5058U
+#define ROGUE_CR_PBE_CHECKSUM_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_PBE_CHECKSUM_VALUE_SHIFT 0U
+#define ROGUE_CR_PBE_CHECKSUM_VALUE_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_PDS_DOUTM_STM_SIGNATURE */
+#define ROGUE_CR_PDS_DOUTM_STM_SIGNATURE 0x5060U
+#define ROGUE_CR_PDS_DOUTM_STM_SIGNATURE_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_PDS_DOUTM_STM_SIGNATURE_VALUE_SHIFT 0U
+#define ROGUE_CR_PDS_DOUTM_STM_SIGNATURE_VALUE_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_IFPU_ISP_CHECKSUM */
+#define ROGUE_CR_IFPU_ISP_CHECKSUM 0x5068U
+#define ROGUE_CR_IFPU_ISP_CHECKSUM_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_IFPU_ISP_CHECKSUM_VALUE_SHIFT 0U
+#define ROGUE_CR_IFPU_ISP_CHECKSUM_VALUE_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_USC_UVS4_CHECKSUM */
+#define ROGUE_CR_USC_UVS4_CHECKSUM 0x5100U
+#define ROGUE_CR_USC_UVS4_CHECKSUM_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_USC_UVS4_CHECKSUM_VALUE_SHIFT 0U
+#define ROGUE_CR_USC_UVS4_CHECKSUM_VALUE_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_USC_UVS5_CHECKSUM */
+#define ROGUE_CR_USC_UVS5_CHECKSUM 0x5108U
+#define ROGUE_CR_USC_UVS5_CHECKSUM_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_USC_UVS5_CHECKSUM_VALUE_SHIFT 0U
+#define ROGUE_CR_USC_UVS5_CHECKSUM_VALUE_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_PPP_CLIP_CHECKSUM */
+#define ROGUE_CR_PPP_CLIP_CHECKSUM 0x5120U
+#define ROGUE_CR_PPP_CLIP_CHECKSUM_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_PPP_CLIP_CHECKSUM_VALUE_SHIFT 0U
+#define ROGUE_CR_PPP_CLIP_CHECKSUM_VALUE_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_PERF_TA_PHASE */
+#define ROGUE_CR_PERF_TA_PHASE 0x6008U
+#define ROGUE_CR_PERF_TA_PHASE_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_PERF_TA_PHASE_COUNT_SHIFT 0U
+#define ROGUE_CR_PERF_TA_PHASE_COUNT_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_PERF_3D_PHASE */
+#define ROGUE_CR_PERF_3D_PHASE 0x6010U
+#define ROGUE_CR_PERF_3D_PHASE_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_PERF_3D_PHASE_COUNT_SHIFT 0U
+#define ROGUE_CR_PERF_3D_PHASE_COUNT_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_PERF_COMPUTE_PHASE */
+#define ROGUE_CR_PERF_COMPUTE_PHASE 0x6018U
+#define ROGUE_CR_PERF_COMPUTE_PHASE_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_PERF_COMPUTE_PHASE_COUNT_SHIFT 0U
+#define ROGUE_CR_PERF_COMPUTE_PHASE_COUNT_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_PERF_TA_CYCLE */
+#define ROGUE_CR_PERF_TA_CYCLE 0x6020U
+#define ROGUE_CR_PERF_TA_CYCLE_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_PERF_TA_CYCLE_COUNT_SHIFT 0U
+#define ROGUE_CR_PERF_TA_CYCLE_COUNT_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_PERF_3D_CYCLE */
+#define ROGUE_CR_PERF_3D_CYCLE 0x6028U
+#define ROGUE_CR_PERF_3D_CYCLE_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_PERF_3D_CYCLE_COUNT_SHIFT 0U
+#define ROGUE_CR_PERF_3D_CYCLE_COUNT_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_PERF_COMPUTE_CYCLE */
+#define ROGUE_CR_PERF_COMPUTE_CYCLE 0x6030U
+#define ROGUE_CR_PERF_COMPUTE_CYCLE_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_PERF_COMPUTE_CYCLE_COUNT_SHIFT 0U
+#define ROGUE_CR_PERF_COMPUTE_CYCLE_COUNT_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_PERF_TA_OR_3D_CYCLE */
+#define ROGUE_CR_PERF_TA_OR_3D_CYCLE 0x6038U
+#define ROGUE_CR_PERF_TA_OR_3D_CYCLE_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_PERF_TA_OR_3D_CYCLE_COUNT_SHIFT 0U
+#define ROGUE_CR_PERF_TA_OR_3D_CYCLE_COUNT_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_PERF_INITIAL_TA_CYCLE */
+#define ROGUE_CR_PERF_INITIAL_TA_CYCLE 0x6040U
+#define ROGUE_CR_PERF_INITIAL_TA_CYCLE_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_PERF_INITIAL_TA_CYCLE_COUNT_SHIFT 0U
+#define ROGUE_CR_PERF_INITIAL_TA_CYCLE_COUNT_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_PERF_SLC0_READ_STALL */
+#define ROGUE_CR_PERF_SLC0_READ_STALL 0x60B8U
+#define ROGUE_CR_PERF_SLC0_READ_STALL_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_PERF_SLC0_READ_STALL_COUNT_SHIFT 0U
+#define ROGUE_CR_PERF_SLC0_READ_STALL_COUNT_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_PERF_SLC0_WRITE_STALL */
+#define ROGUE_CR_PERF_SLC0_WRITE_STALL 0x60C0U
+#define ROGUE_CR_PERF_SLC0_WRITE_STALL_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_PERF_SLC0_WRITE_STALL_COUNT_SHIFT 0U
+#define ROGUE_CR_PERF_SLC0_WRITE_STALL_COUNT_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_PERF_SLC1_READ_STALL */
+#define ROGUE_CR_PERF_SLC1_READ_STALL 0x60E0U
+#define ROGUE_CR_PERF_SLC1_READ_STALL_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_PERF_SLC1_READ_STALL_COUNT_SHIFT 0U
+#define ROGUE_CR_PERF_SLC1_READ_STALL_COUNT_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_PERF_SLC1_WRITE_STALL */
+#define ROGUE_CR_PERF_SLC1_WRITE_STALL 0x60E8U
+#define ROGUE_CR_PERF_SLC1_WRITE_STALL_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_PERF_SLC1_WRITE_STALL_COUNT_SHIFT 0U
+#define ROGUE_CR_PERF_SLC1_WRITE_STALL_COUNT_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_PERF_SLC2_READ_STALL */
+#define ROGUE_CR_PERF_SLC2_READ_STALL 0x6158U
+#define ROGUE_CR_PERF_SLC2_READ_STALL_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_PERF_SLC2_READ_STALL_COUNT_SHIFT 0U
+#define ROGUE_CR_PERF_SLC2_READ_STALL_COUNT_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_PERF_SLC2_WRITE_STALL */
+#define ROGUE_CR_PERF_SLC2_WRITE_STALL 0x6160U
+#define ROGUE_CR_PERF_SLC2_WRITE_STALL_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_PERF_SLC2_WRITE_STALL_COUNT_SHIFT 0U
+#define ROGUE_CR_PERF_SLC2_WRITE_STALL_COUNT_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_PERF_SLC3_READ_STALL */
+#define ROGUE_CR_PERF_SLC3_READ_STALL 0x6180U
+#define ROGUE_CR_PERF_SLC3_READ_STALL_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_PERF_SLC3_READ_STALL_COUNT_SHIFT 0U
+#define ROGUE_CR_PERF_SLC3_READ_STALL_COUNT_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_PERF_SLC3_WRITE_STALL */
+#define ROGUE_CR_PERF_SLC3_WRITE_STALL 0x6188U
+#define ROGUE_CR_PERF_SLC3_WRITE_STALL_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_PERF_SLC3_WRITE_STALL_COUNT_SHIFT 0U
+#define ROGUE_CR_PERF_SLC3_WRITE_STALL_COUNT_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_PERF_3D_SPINUP */
+#define ROGUE_CR_PERF_3D_SPINUP 0x6220U
+#define ROGUE_CR_PERF_3D_SPINUP_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_PERF_3D_SPINUP_CYCLES_SHIFT 0U
+#define ROGUE_CR_PERF_3D_SPINUP_CYCLES_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_AXI_ACE_LITE_CONFIGURATION */
+#define ROGUE_CR_AXI_ACE_LITE_CONFIGURATION 0x38C0U
+#define ROGUE_CR_AXI_ACE_LITE_CONFIGURATION_MASKFULL 0x00003FFFFFFFFFFFULL
+#define ROGUE_CR_AXI_ACE_LITE_CONFIGURATION_ENABLE_FENCE_OUT_SHIFT 45U
+#define ROGUE_CR_AXI_ACE_LITE_CONFIGURATION_ENABLE_FENCE_OUT_CLRMSK 0xFFFFDFFFFFFFFFFFULL
+#define ROGUE_CR_AXI_ACE_LITE_CONFIGURATION_ENABLE_FENCE_OUT_EN 0x0000200000000000ULL
+#define ROGUE_CR_AXI_ACE_LITE_CONFIGURATION_OSID_SECURITY_SHIFT 37U
+#define ROGUE_CR_AXI_ACE_LITE_CONFIGURATION_OSID_SECURITY_CLRMSK 0xFFFFE01FFFFFFFFFULL
+#define ROGUE_CR_AXI_ACE_LITE_CONFIGURATION_DISABLE_COHERENT_WRITELINEUNIQUE_SHIFT 36U
+#define ROGUE_CR_AXI_ACE_LITE_CONFIGURATION_DISABLE_COHERENT_WRITELINEUNIQUE_CLRMSK \
+	0xFFFFFFEFFFFFFFFFULL
+#define ROGUE_CR_AXI_ACE_LITE_CONFIGURATION_DISABLE_COHERENT_WRITELINEUNIQUE_EN \
+	0x0000001000000000ULL
+#define ROGUE_CR_AXI_ACE_LITE_CONFIGURATION_DISABLE_COHERENT_WRITE_SHIFT 35U
+#define ROGUE_CR_AXI_ACE_LITE_CONFIGURATION_DISABLE_COHERENT_WRITE_CLRMSK 0xFFFFFFF7FFFFFFFFULL
+#define ROGUE_CR_AXI_ACE_LITE_CONFIGURATION_DISABLE_COHERENT_WRITE_EN 0x0000000800000000ULL
+#define ROGUE_CR_AXI_ACE_LITE_CONFIGURATION_DISABLE_COHERENT_READ_SHIFT 34U
+#define ROGUE_CR_AXI_ACE_LITE_CONFIGURATION_DISABLE_COHERENT_READ_CLRMSK 0xFFFFFFFBFFFFFFFFULL
+#define ROGUE_CR_AXI_ACE_LITE_CONFIGURATION_DISABLE_COHERENT_READ_EN 0x0000000400000000ULL
+#define ROGUE_CR_AXI_ACE_LITE_CONFIGURATION_ARCACHE_CACHE_MAINTENANCE_SHIFT 30U
+#define ROGUE_CR_AXI_ACE_LITE_CONFIGURATION_ARCACHE_CACHE_MAINTENANCE_CLRMSK 0xFFFFFFFC3FFFFFFFULL
+#define ROGUE_CR_AXI_ACE_LITE_CONFIGURATION_ARCACHE_COHERENT_SHIFT 26U
+#define ROGUE_CR_AXI_ACE_LITE_CONFIGURATION_ARCACHE_COHERENT_CLRMSK 0xFFFFFFFFC3FFFFFFULL
+#define ROGUE_CR_AXI_ACE_LITE_CONFIGURATION_AWCACHE_COHERENT_SHIFT 22U
+#define ROGUE_CR_AXI_ACE_LITE_CONFIGURATION_AWCACHE_COHERENT_CLRMSK 0xFFFFFFFFFC3FFFFFULL
+#define ROGUE_CR_AXI_ACE_LITE_CONFIGURATION_ARDOMAIN_BARRIER_SHIFT 20U
+#define ROGUE_CR_AXI_ACE_LITE_CONFIGURATION_ARDOMAIN_BARRIER_CLRMSK 0xFFFFFFFFFFCFFFFFULL
+#define ROGUE_CR_AXI_ACE_LITE_CONFIGURATION_AWDOMAIN_BARRIER_SHIFT 18U
+#define ROGUE_CR_AXI_ACE_LITE_CONFIGURATION_AWDOMAIN_BARRIER_CLRMSK 0xFFFFFFFFFFF3FFFFULL
+#define ROGUE_CR_AXI_ACE_LITE_CONFIGURATION_ARDOMAIN_CACHE_MAINTENANCE_SHIFT 16U
+#define ROGUE_CR_AXI_ACE_LITE_CONFIGURATION_ARDOMAIN_CACHE_MAINTENANCE_CLRMSK 0xFFFFFFFFFFFCFFFFULL
+#define ROGUE_CR_AXI_ACE_LITE_CONFIGURATION_AWDOMAIN_COHERENT_SHIFT 14U
+#define ROGUE_CR_AXI_ACE_LITE_CONFIGURATION_AWDOMAIN_COHERENT_CLRMSK 0xFFFFFFFFFFFF3FFFULL
+#define ROGUE_CR_AXI_ACE_LITE_CONFIGURATION_ARDOMAIN_COHERENT_SHIFT 12U
+#define ROGUE_CR_AXI_ACE_LITE_CONFIGURATION_ARDOMAIN_COHERENT_CLRMSK 0xFFFFFFFFFFFFCFFFULL
+#define ROGUE_CR_AXI_ACE_LITE_CONFIGURATION_ARDOMAIN_NON_SNOOPING_SHIFT 10U
+#define ROGUE_CR_AXI_ACE_LITE_CONFIGURATION_ARDOMAIN_NON_SNOOPING_CLRMSK 0xFFFFFFFFFFFFF3FFULL
+#define ROGUE_CR_AXI_ACE_LITE_CONFIGURATION_AWDOMAIN_NON_SNOOPING_SHIFT 8U
+#define ROGUE_CR_AXI_ACE_LITE_CONFIGURATION_AWDOMAIN_NON_SNOOPING_CLRMSK 0xFFFFFFFFFFFFFCFFULL
+#define ROGUE_CR_AXI_ACE_LITE_CONFIGURATION_ARCACHE_NON_SNOOPING_SHIFT 4U
+#define ROGUE_CR_AXI_ACE_LITE_CONFIGURATION_ARCACHE_NON_SNOOPING_CLRMSK 0xFFFFFFFFFFFFFF0FULL
+#define ROGUE_CR_AXI_ACE_LITE_CONFIGURATION_AWCACHE_NON_SNOOPING_SHIFT 0U
+#define ROGUE_CR_AXI_ACE_LITE_CONFIGURATION_AWCACHE_NON_SNOOPING_CLRMSK 0xFFFFFFFFFFFFFFF0ULL
+
+/* Register ROGUE_CR_POWER_ESTIMATE_RESULT */
+#define ROGUE_CR_POWER_ESTIMATE_RESULT 0x6328U
+#define ROGUE_CR_POWER_ESTIMATE_RESULT_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_POWER_ESTIMATE_RESULT_VALUE_SHIFT 0U
+#define ROGUE_CR_POWER_ESTIMATE_RESULT_VALUE_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_TA_PERF */
+#define ROGUE_CR_TA_PERF 0x7600U
+#define ROGUE_CR_TA_PERF_MASKFULL 0x000000000000001FULL
+#define ROGUE_CR_TA_PERF_CLR_3_SHIFT 4U
+#define ROGUE_CR_TA_PERF_CLR_3_CLRMSK 0xFFFFFFEFU
+#define ROGUE_CR_TA_PERF_CLR_3_EN 0x00000010U
+#define ROGUE_CR_TA_PERF_CLR_2_SHIFT 3U
+#define ROGUE_CR_TA_PERF_CLR_2_CLRMSK 0xFFFFFFF7U
+#define ROGUE_CR_TA_PERF_CLR_2_EN 0x00000008U
+#define ROGUE_CR_TA_PERF_CLR_1_SHIFT 2U
+#define ROGUE_CR_TA_PERF_CLR_1_CLRMSK 0xFFFFFFFBU
+#define ROGUE_CR_TA_PERF_CLR_1_EN 0x00000004U
+#define ROGUE_CR_TA_PERF_CLR_0_SHIFT 1U
+#define ROGUE_CR_TA_PERF_CLR_0_CLRMSK 0xFFFFFFFDU
+#define ROGUE_CR_TA_PERF_CLR_0_EN 0x00000002U
+#define ROGUE_CR_TA_PERF_CTRL_ENABLE_SHIFT 0U
+#define ROGUE_CR_TA_PERF_CTRL_ENABLE_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_TA_PERF_CTRL_ENABLE_EN 0x00000001U
+
+/* Register ROGUE_CR_TA_PERF_SELECT0 */
+#define ROGUE_CR_TA_PERF_SELECT0 0x7608U
+#define ROGUE_CR_TA_PERF_SELECT0_MASKFULL 0x3FFF3FFF003FFFFFULL
+#define ROGUE_CR_TA_PERF_SELECT0_BATCH_MAX_SHIFT 48U
+#define ROGUE_CR_TA_PERF_SELECT0_BATCH_MAX_CLRMSK 0xC000FFFFFFFFFFFFULL
+#define ROGUE_CR_TA_PERF_SELECT0_BATCH_MIN_SHIFT 32U
+#define ROGUE_CR_TA_PERF_SELECT0_BATCH_MIN_CLRMSK 0xFFFFC000FFFFFFFFULL
+#define ROGUE_CR_TA_PERF_SELECT0_MODE_SHIFT 21U
+#define ROGUE_CR_TA_PERF_SELECT0_MODE_CLRMSK 0xFFFFFFFFFFDFFFFFULL
+#define ROGUE_CR_TA_PERF_SELECT0_MODE_EN 0x0000000000200000ULL
+#define ROGUE_CR_TA_PERF_SELECT0_GROUP_SELECT_SHIFT 16U
+#define ROGUE_CR_TA_PERF_SELECT0_GROUP_SELECT_CLRMSK 0xFFFFFFFFFFE0FFFFULL
+#define ROGUE_CR_TA_PERF_SELECT0_BIT_SELECT_SHIFT 0U
+#define ROGUE_CR_TA_PERF_SELECT0_BIT_SELECT_CLRMSK 0xFFFFFFFFFFFF0000ULL
+
+/* Register ROGUE_CR_TA_PERF_SELECT1 */
+#define ROGUE_CR_TA_PERF_SELECT1 0x7610U
+#define ROGUE_CR_TA_PERF_SELECT1_MASKFULL 0x3FFF3FFF003FFFFFULL
+#define ROGUE_CR_TA_PERF_SELECT1_BATCH_MAX_SHIFT 48U
+#define ROGUE_CR_TA_PERF_SELECT1_BATCH_MAX_CLRMSK 0xC000FFFFFFFFFFFFULL
+#define ROGUE_CR_TA_PERF_SELECT1_BATCH_MIN_SHIFT 32U
+#define ROGUE_CR_TA_PERF_SELECT1_BATCH_MIN_CLRMSK 0xFFFFC000FFFFFFFFULL
+#define ROGUE_CR_TA_PERF_SELECT1_MODE_SHIFT 21U
+#define ROGUE_CR_TA_PERF_SELECT1_MODE_CLRMSK 0xFFFFFFFFFFDFFFFFULL
+#define ROGUE_CR_TA_PERF_SELECT1_MODE_EN 0x0000000000200000ULL
+#define ROGUE_CR_TA_PERF_SELECT1_GROUP_SELECT_SHIFT 16U
+#define ROGUE_CR_TA_PERF_SELECT1_GROUP_SELECT_CLRMSK 0xFFFFFFFFFFE0FFFFULL
+#define ROGUE_CR_TA_PERF_SELECT1_BIT_SELECT_SHIFT 0U
+#define ROGUE_CR_TA_PERF_SELECT1_BIT_SELECT_CLRMSK 0xFFFFFFFFFFFF0000ULL
+
+/* Register ROGUE_CR_TA_PERF_SELECT2 */
+#define ROGUE_CR_TA_PERF_SELECT2 0x7618U
+#define ROGUE_CR_TA_PERF_SELECT2_MASKFULL 0x3FFF3FFF003FFFFFULL
+#define ROGUE_CR_TA_PERF_SELECT2_BATCH_MAX_SHIFT 48U
+#define ROGUE_CR_TA_PERF_SELECT2_BATCH_MAX_CLRMSK 0xC000FFFFFFFFFFFFULL
+#define ROGUE_CR_TA_PERF_SELECT2_BATCH_MIN_SHIFT 32U
+#define ROGUE_CR_TA_PERF_SELECT2_BATCH_MIN_CLRMSK 0xFFFFC000FFFFFFFFULL
+#define ROGUE_CR_TA_PERF_SELECT2_MODE_SHIFT 21U
+#define ROGUE_CR_TA_PERF_SELECT2_MODE_CLRMSK 0xFFFFFFFFFFDFFFFFULL
+#define ROGUE_CR_TA_PERF_SELECT2_MODE_EN 0x0000000000200000ULL
+#define ROGUE_CR_TA_PERF_SELECT2_GROUP_SELECT_SHIFT 16U
+#define ROGUE_CR_TA_PERF_SELECT2_GROUP_SELECT_CLRMSK 0xFFFFFFFFFFE0FFFFULL
+#define ROGUE_CR_TA_PERF_SELECT2_BIT_SELECT_SHIFT 0U
+#define ROGUE_CR_TA_PERF_SELECT2_BIT_SELECT_CLRMSK 0xFFFFFFFFFFFF0000ULL
+
+/* Register ROGUE_CR_TA_PERF_SELECT3 */
+#define ROGUE_CR_TA_PERF_SELECT3 0x7620U
+#define ROGUE_CR_TA_PERF_SELECT3_MASKFULL 0x3FFF3FFF003FFFFFULL
+#define ROGUE_CR_TA_PERF_SELECT3_BATCH_MAX_SHIFT 48U
+#define ROGUE_CR_TA_PERF_SELECT3_BATCH_MAX_CLRMSK 0xC000FFFFFFFFFFFFULL
+#define ROGUE_CR_TA_PERF_SELECT3_BATCH_MIN_SHIFT 32U
+#define ROGUE_CR_TA_PERF_SELECT3_BATCH_MIN_CLRMSK 0xFFFFC000FFFFFFFFULL
+#define ROGUE_CR_TA_PERF_SELECT3_MODE_SHIFT 21U
+#define ROGUE_CR_TA_PERF_SELECT3_MODE_CLRMSK 0xFFFFFFFFFFDFFFFFULL
+#define ROGUE_CR_TA_PERF_SELECT3_MODE_EN 0x0000000000200000ULL
+#define ROGUE_CR_TA_PERF_SELECT3_GROUP_SELECT_SHIFT 16U
+#define ROGUE_CR_TA_PERF_SELECT3_GROUP_SELECT_CLRMSK 0xFFFFFFFFFFE0FFFFULL
+#define ROGUE_CR_TA_PERF_SELECT3_BIT_SELECT_SHIFT 0U
+#define ROGUE_CR_TA_PERF_SELECT3_BIT_SELECT_CLRMSK 0xFFFFFFFFFFFF0000ULL
+
+/* Register ROGUE_CR_TA_PERF_SELECTED_BITS */
+#define ROGUE_CR_TA_PERF_SELECTED_BITS 0x7648U
+#define ROGUE_CR_TA_PERF_SELECTED_BITS_MASKFULL 0xFFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_TA_PERF_SELECTED_BITS_REG3_SHIFT 48U
+#define ROGUE_CR_TA_PERF_SELECTED_BITS_REG3_CLRMSK 0x0000FFFFFFFFFFFFULL
+#define ROGUE_CR_TA_PERF_SELECTED_BITS_REG2_SHIFT 32U
+#define ROGUE_CR_TA_PERF_SELECTED_BITS_REG2_CLRMSK 0xFFFF0000FFFFFFFFULL
+#define ROGUE_CR_TA_PERF_SELECTED_BITS_REG1_SHIFT 16U
+#define ROGUE_CR_TA_PERF_SELECTED_BITS_REG1_CLRMSK 0xFFFFFFFF0000FFFFULL
+#define ROGUE_CR_TA_PERF_SELECTED_BITS_REG0_SHIFT 0U
+#define ROGUE_CR_TA_PERF_SELECTED_BITS_REG0_CLRMSK 0xFFFFFFFFFFFF0000ULL
+
+/* Register ROGUE_CR_TA_PERF_COUNTER_0 */
+#define ROGUE_CR_TA_PERF_COUNTER_0 0x7650U
+#define ROGUE_CR_TA_PERF_COUNTER_0_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_TA_PERF_COUNTER_0_REG_SHIFT 0U
+#define ROGUE_CR_TA_PERF_COUNTER_0_REG_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_TA_PERF_COUNTER_1 */
+#define ROGUE_CR_TA_PERF_COUNTER_1 0x7658U
+#define ROGUE_CR_TA_PERF_COUNTER_1_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_TA_PERF_COUNTER_1_REG_SHIFT 0U
+#define ROGUE_CR_TA_PERF_COUNTER_1_REG_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_TA_PERF_COUNTER_2 */
+#define ROGUE_CR_TA_PERF_COUNTER_2 0x7660U
+#define ROGUE_CR_TA_PERF_COUNTER_2_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_TA_PERF_COUNTER_2_REG_SHIFT 0U
+#define ROGUE_CR_TA_PERF_COUNTER_2_REG_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_TA_PERF_COUNTER_3 */
+#define ROGUE_CR_TA_PERF_COUNTER_3 0x7668U
+#define ROGUE_CR_TA_PERF_COUNTER_3_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_TA_PERF_COUNTER_3_REG_SHIFT 0U
+#define ROGUE_CR_TA_PERF_COUNTER_3_REG_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_RASTERISATION_PERF */
+#define ROGUE_CR_RASTERISATION_PERF 0x7700U
+#define ROGUE_CR_RASTERISATION_PERF_MASKFULL 0x000000000000001FULL
+#define ROGUE_CR_RASTERISATION_PERF_CLR_3_SHIFT 4U
+#define ROGUE_CR_RASTERISATION_PERF_CLR_3_CLRMSK 0xFFFFFFEFU
+#define ROGUE_CR_RASTERISATION_PERF_CLR_3_EN 0x00000010U
+#define ROGUE_CR_RASTERISATION_PERF_CLR_2_SHIFT 3U
+#define ROGUE_CR_RASTERISATION_PERF_CLR_2_CLRMSK 0xFFFFFFF7U
+#define ROGUE_CR_RASTERISATION_PERF_CLR_2_EN 0x00000008U
+#define ROGUE_CR_RASTERISATION_PERF_CLR_1_SHIFT 2U
+#define ROGUE_CR_RASTERISATION_PERF_CLR_1_CLRMSK 0xFFFFFFFBU
+#define ROGUE_CR_RASTERISATION_PERF_CLR_1_EN 0x00000004U
+#define ROGUE_CR_RASTERISATION_PERF_CLR_0_SHIFT 1U
+#define ROGUE_CR_RASTERISATION_PERF_CLR_0_CLRMSK 0xFFFFFFFDU
+#define ROGUE_CR_RASTERISATION_PERF_CLR_0_EN 0x00000002U
+#define ROGUE_CR_RASTERISATION_PERF_CTRL_ENABLE_SHIFT 0U
+#define ROGUE_CR_RASTERISATION_PERF_CTRL_ENABLE_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_RASTERISATION_PERF_CTRL_ENABLE_EN 0x00000001U
+
+/* Register ROGUE_CR_RASTERISATION_PERF_SELECT0 */
+#define ROGUE_CR_RASTERISATION_PERF_SELECT0 0x7708U
+#define ROGUE_CR_RASTERISATION_PERF_SELECT0_MASKFULL 0x3FFF3FFF003FFFFFULL
+#define ROGUE_CR_RASTERISATION_PERF_SELECT0_BATCH_MAX_SHIFT 48U
+#define ROGUE_CR_RASTERISATION_PERF_SELECT0_BATCH_MAX_CLRMSK 0xC000FFFFFFFFFFFFULL
+#define ROGUE_CR_RASTERISATION_PERF_SELECT0_BATCH_MIN_SHIFT 32U
+#define ROGUE_CR_RASTERISATION_PERF_SELECT0_BATCH_MIN_CLRMSK 0xFFFFC000FFFFFFFFULL
+#define ROGUE_CR_RASTERISATION_PERF_SELECT0_MODE_SHIFT 21U
+#define ROGUE_CR_RASTERISATION_PERF_SELECT0_MODE_CLRMSK 0xFFFFFFFFFFDFFFFFULL
+#define ROGUE_CR_RASTERISATION_PERF_SELECT0_MODE_EN 0x0000000000200000ULL
+#define ROGUE_CR_RASTERISATION_PERF_SELECT0_GROUP_SELECT_SHIFT 16U
+#define ROGUE_CR_RASTERISATION_PERF_SELECT0_GROUP_SELECT_CLRMSK 0xFFFFFFFFFFE0FFFFULL
+#define ROGUE_CR_RASTERISATION_PERF_SELECT0_BIT_SELECT_SHIFT 0U
+#define ROGUE_CR_RASTERISATION_PERF_SELECT0_BIT_SELECT_CLRMSK 0xFFFFFFFFFFFF0000ULL
+
+/* Register ROGUE_CR_RASTERISATION_PERF_COUNTER_0 */
+#define ROGUE_CR_RASTERISATION_PERF_COUNTER_0 0x7750U
+#define ROGUE_CR_RASTERISATION_PERF_COUNTER_0_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_RASTERISATION_PERF_COUNTER_0_REG_SHIFT 0U
+#define ROGUE_CR_RASTERISATION_PERF_COUNTER_0_REG_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_HUB_BIFPMCACHE_PERF */
+#define ROGUE_CR_HUB_BIFPMCACHE_PERF 0x7800U
+#define ROGUE_CR_HUB_BIFPMCACHE_PERF_MASKFULL 0x000000000000001FULL
+#define ROGUE_CR_HUB_BIFPMCACHE_PERF_CLR_3_SHIFT 4U
+#define ROGUE_CR_HUB_BIFPMCACHE_PERF_CLR_3_CLRMSK 0xFFFFFFEFU
+#define ROGUE_CR_HUB_BIFPMCACHE_PERF_CLR_3_EN 0x00000010U
+#define ROGUE_CR_HUB_BIFPMCACHE_PERF_CLR_2_SHIFT 3U
+#define ROGUE_CR_HUB_BIFPMCACHE_PERF_CLR_2_CLRMSK 0xFFFFFFF7U
+#define ROGUE_CR_HUB_BIFPMCACHE_PERF_CLR_2_EN 0x00000008U
+#define ROGUE_CR_HUB_BIFPMCACHE_PERF_CLR_1_SHIFT 2U
+#define ROGUE_CR_HUB_BIFPMCACHE_PERF_CLR_1_CLRMSK 0xFFFFFFFBU
+#define ROGUE_CR_HUB_BIFPMCACHE_PERF_CLR_1_EN 0x00000004U
+#define ROGUE_CR_HUB_BIFPMCACHE_PERF_CLR_0_SHIFT 1U
+#define ROGUE_CR_HUB_BIFPMCACHE_PERF_CLR_0_CLRMSK 0xFFFFFFFDU
+#define ROGUE_CR_HUB_BIFPMCACHE_PERF_CLR_0_EN 0x00000002U
+#define ROGUE_CR_HUB_BIFPMCACHE_PERF_CTRL_ENABLE_SHIFT 0U
+#define ROGUE_CR_HUB_BIFPMCACHE_PERF_CTRL_ENABLE_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_HUB_BIFPMCACHE_PERF_CTRL_ENABLE_EN 0x00000001U
+
+/* Register ROGUE_CR_HUB_BIFPMCACHE_PERF_SELECT0 */
+#define ROGUE_CR_HUB_BIFPMCACHE_PERF_SELECT0 0x7808U
+#define ROGUE_CR_HUB_BIFPMCACHE_PERF_SELECT0_MASKFULL 0x3FFF3FFF003FFFFFULL
+#define ROGUE_CR_HUB_BIFPMCACHE_PERF_SELECT0_BATCH_MAX_SHIFT 48U
+#define ROGUE_CR_HUB_BIFPMCACHE_PERF_SELECT0_BATCH_MAX_CLRMSK 0xC000FFFFFFFFFFFFULL
+#define ROGUE_CR_HUB_BIFPMCACHE_PERF_SELECT0_BATCH_MIN_SHIFT 32U
+#define ROGUE_CR_HUB_BIFPMCACHE_PERF_SELECT0_BATCH_MIN_CLRMSK 0xFFFFC000FFFFFFFFULL
+#define ROGUE_CR_HUB_BIFPMCACHE_PERF_SELECT0_MODE_SHIFT 21U
+#define ROGUE_CR_HUB_BIFPMCACHE_PERF_SELECT0_MODE_CLRMSK 0xFFFFFFFFFFDFFFFFULL
+#define ROGUE_CR_HUB_BIFPMCACHE_PERF_SELECT0_MODE_EN 0x0000000000200000ULL
+#define ROGUE_CR_HUB_BIFPMCACHE_PERF_SELECT0_GROUP_SELECT_SHIFT 16U
+#define ROGUE_CR_HUB_BIFPMCACHE_PERF_SELECT0_GROUP_SELECT_CLRMSK 0xFFFFFFFFFFE0FFFFULL
+#define ROGUE_CR_HUB_BIFPMCACHE_PERF_SELECT0_BIT_SELECT_SHIFT 0U
+#define ROGUE_CR_HUB_BIFPMCACHE_PERF_SELECT0_BIT_SELECT_CLRMSK 0xFFFFFFFFFFFF0000ULL
+
+/* Register ROGUE_CR_HUB_BIFPMCACHE_PERF_COUNTER_0 */
+#define ROGUE_CR_HUB_BIFPMCACHE_PERF_COUNTER_0 0x7850U
+#define ROGUE_CR_HUB_BIFPMCACHE_PERF_COUNTER_0_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_HUB_BIFPMCACHE_PERF_COUNTER_0_REG_SHIFT 0U
+#define ROGUE_CR_HUB_BIFPMCACHE_PERF_COUNTER_0_REG_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_TPU_MCU_L0_PERF */
+#define ROGUE_CR_TPU_MCU_L0_PERF 0x7900U
+#define ROGUE_CR_TPU_MCU_L0_PERF_MASKFULL 0x000000000000001FULL
+#define ROGUE_CR_TPU_MCU_L0_PERF_CLR_3_SHIFT 4U
+#define ROGUE_CR_TPU_MCU_L0_PERF_CLR_3_CLRMSK 0xFFFFFFEFU
+#define ROGUE_CR_TPU_MCU_L0_PERF_CLR_3_EN 0x00000010U
+#define ROGUE_CR_TPU_MCU_L0_PERF_CLR_2_SHIFT 3U
+#define ROGUE_CR_TPU_MCU_L0_PERF_CLR_2_CLRMSK 0xFFFFFFF7U
+#define ROGUE_CR_TPU_MCU_L0_PERF_CLR_2_EN 0x00000008U
+#define ROGUE_CR_TPU_MCU_L0_PERF_CLR_1_SHIFT 2U
+#define ROGUE_CR_TPU_MCU_L0_PERF_CLR_1_CLRMSK 0xFFFFFFFBU
+#define ROGUE_CR_TPU_MCU_L0_PERF_CLR_1_EN 0x00000004U
+#define ROGUE_CR_TPU_MCU_L0_PERF_CLR_0_SHIFT 1U
+#define ROGUE_CR_TPU_MCU_L0_PERF_CLR_0_CLRMSK 0xFFFFFFFDU
+#define ROGUE_CR_TPU_MCU_L0_PERF_CLR_0_EN 0x00000002U
+#define ROGUE_CR_TPU_MCU_L0_PERF_CTRL_ENABLE_SHIFT 0U
+#define ROGUE_CR_TPU_MCU_L0_PERF_CTRL_ENABLE_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_TPU_MCU_L0_PERF_CTRL_ENABLE_EN 0x00000001U
+
+/* Register ROGUE_CR_TPU_MCU_L0_PERF_SELECT0 */
+#define ROGUE_CR_TPU_MCU_L0_PERF_SELECT0 0x7908U
+#define ROGUE_CR_TPU_MCU_L0_PERF_SELECT0_MASKFULL 0x3FFF3FFF003FFFFFULL
+#define ROGUE_CR_TPU_MCU_L0_PERF_SELECT0_BATCH_MAX_SHIFT 48U
+#define ROGUE_CR_TPU_MCU_L0_PERF_SELECT0_BATCH_MAX_CLRMSK 0xC000FFFFFFFFFFFFULL
+#define ROGUE_CR_TPU_MCU_L0_PERF_SELECT0_BATCH_MIN_SHIFT 32U
+#define ROGUE_CR_TPU_MCU_L0_PERF_SELECT0_BATCH_MIN_CLRMSK 0xFFFFC000FFFFFFFFULL
+#define ROGUE_CR_TPU_MCU_L0_PERF_SELECT0_MODE_SHIFT 21U
+#define ROGUE_CR_TPU_MCU_L0_PERF_SELECT0_MODE_CLRMSK 0xFFFFFFFFFFDFFFFFULL
+#define ROGUE_CR_TPU_MCU_L0_PERF_SELECT0_MODE_EN 0x0000000000200000ULL
+#define ROGUE_CR_TPU_MCU_L0_PERF_SELECT0_GROUP_SELECT_SHIFT 16U
+#define ROGUE_CR_TPU_MCU_L0_PERF_SELECT0_GROUP_SELECT_CLRMSK 0xFFFFFFFFFFE0FFFFULL
+#define ROGUE_CR_TPU_MCU_L0_PERF_SELECT0_BIT_SELECT_SHIFT 0U
+#define ROGUE_CR_TPU_MCU_L0_PERF_SELECT0_BIT_SELECT_CLRMSK 0xFFFFFFFFFFFF0000ULL
+
+/* Register ROGUE_CR_TPU_MCU_L0_PERF_COUNTER_0 */
+#define ROGUE_CR_TPU_MCU_L0_PERF_COUNTER_0 0x7950U
+#define ROGUE_CR_TPU_MCU_L0_PERF_COUNTER_0_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_TPU_MCU_L0_PERF_COUNTER_0_REG_SHIFT 0U
+#define ROGUE_CR_TPU_MCU_L0_PERF_COUNTER_0_REG_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_USC_PERF */
+#define ROGUE_CR_USC_PERF 0x8100U
+#define ROGUE_CR_USC_PERF_MASKFULL 0x000000000000001FULL
+#define ROGUE_CR_USC_PERF_CLR_3_SHIFT 4U
+#define ROGUE_CR_USC_PERF_CLR_3_CLRMSK 0xFFFFFFEFU
+#define ROGUE_CR_USC_PERF_CLR_3_EN 0x00000010U
+#define ROGUE_CR_USC_PERF_CLR_2_SHIFT 3U
+#define ROGUE_CR_USC_PERF_CLR_2_CLRMSK 0xFFFFFFF7U
+#define ROGUE_CR_USC_PERF_CLR_2_EN 0x00000008U
+#define ROGUE_CR_USC_PERF_CLR_1_SHIFT 2U
+#define ROGUE_CR_USC_PERF_CLR_1_CLRMSK 0xFFFFFFFBU
+#define ROGUE_CR_USC_PERF_CLR_1_EN 0x00000004U
+#define ROGUE_CR_USC_PERF_CLR_0_SHIFT 1U
+#define ROGUE_CR_USC_PERF_CLR_0_CLRMSK 0xFFFFFFFDU
+#define ROGUE_CR_USC_PERF_CLR_0_EN 0x00000002U
+#define ROGUE_CR_USC_PERF_CTRL_ENABLE_SHIFT 0U
+#define ROGUE_CR_USC_PERF_CTRL_ENABLE_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_USC_PERF_CTRL_ENABLE_EN 0x00000001U
+
+/* Register ROGUE_CR_USC_PERF_SELECT0 */
+#define ROGUE_CR_USC_PERF_SELECT0 0x8108U
+#define ROGUE_CR_USC_PERF_SELECT0_MASKFULL 0x3FFF3FFF003FFFFFULL
+#define ROGUE_CR_USC_PERF_SELECT0_BATCH_MAX_SHIFT 48U
+#define ROGUE_CR_USC_PERF_SELECT0_BATCH_MAX_CLRMSK 0xC000FFFFFFFFFFFFULL
+#define ROGUE_CR_USC_PERF_SELECT0_BATCH_MIN_SHIFT 32U
+#define ROGUE_CR_USC_PERF_SELECT0_BATCH_MIN_CLRMSK 0xFFFFC000FFFFFFFFULL
+#define ROGUE_CR_USC_PERF_SELECT0_MODE_SHIFT 21U
+#define ROGUE_CR_USC_PERF_SELECT0_MODE_CLRMSK 0xFFFFFFFFFFDFFFFFULL
+#define ROGUE_CR_USC_PERF_SELECT0_MODE_EN 0x0000000000200000ULL
+#define ROGUE_CR_USC_PERF_SELECT0_GROUP_SELECT_SHIFT 16U
+#define ROGUE_CR_USC_PERF_SELECT0_GROUP_SELECT_CLRMSK 0xFFFFFFFFFFE0FFFFULL
+#define ROGUE_CR_USC_PERF_SELECT0_BIT_SELECT_SHIFT 0U
+#define ROGUE_CR_USC_PERF_SELECT0_BIT_SELECT_CLRMSK 0xFFFFFFFFFFFF0000ULL
+
+/* Register ROGUE_CR_USC_PERF_COUNTER_0 */
+#define ROGUE_CR_USC_PERF_COUNTER_0 0x8150U
+#define ROGUE_CR_USC_PERF_COUNTER_0_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_USC_PERF_COUNTER_0_REG_SHIFT 0U
+#define ROGUE_CR_USC_PERF_COUNTER_0_REG_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_JONES_IDLE */
+#define ROGUE_CR_JONES_IDLE 0x8328U
+#define ROGUE_CR_JONES_IDLE_MASKFULL 0x0000000000007FFFULL
+#define ROGUE_CR_JONES_IDLE_TDM_SHIFT 14U
+#define ROGUE_CR_JONES_IDLE_TDM_CLRMSK 0xFFFFBFFFU
+#define ROGUE_CR_JONES_IDLE_TDM_EN 0x00004000U
+#define ROGUE_CR_JONES_IDLE_FB_CDC_TLA_SHIFT 13U
+#define ROGUE_CR_JONES_IDLE_FB_CDC_TLA_CLRMSK 0xFFFFDFFFU
+#define ROGUE_CR_JONES_IDLE_FB_CDC_TLA_EN 0x00002000U
+#define ROGUE_CR_JONES_IDLE_FB_CDC_SHIFT 12U
+#define ROGUE_CR_JONES_IDLE_FB_CDC_CLRMSK 0xFFFFEFFFU
+#define ROGUE_CR_JONES_IDLE_FB_CDC_EN 0x00001000U
+#define ROGUE_CR_JONES_IDLE_MMU_SHIFT 11U
+#define ROGUE_CR_JONES_IDLE_MMU_CLRMSK 0xFFFFF7FFU
+#define ROGUE_CR_JONES_IDLE_MMU_EN 0x00000800U
+#define ROGUE_CR_JONES_IDLE_TLA_SHIFT 10U
+#define ROGUE_CR_JONES_IDLE_TLA_CLRMSK 0xFFFFFBFFU
+#define ROGUE_CR_JONES_IDLE_TLA_EN 0x00000400U
+#define ROGUE_CR_JONES_IDLE_GARTEN_SHIFT 9U
+#define ROGUE_CR_JONES_IDLE_GARTEN_CLRMSK 0xFFFFFDFFU
+#define ROGUE_CR_JONES_IDLE_GARTEN_EN 0x00000200U
+#define ROGUE_CR_JONES_IDLE_HOSTIF_SHIFT 8U
+#define ROGUE_CR_JONES_IDLE_HOSTIF_CLRMSK 0xFFFFFEFFU
+#define ROGUE_CR_JONES_IDLE_HOSTIF_EN 0x00000100U
+#define ROGUE_CR_JONES_IDLE_SOCIF_SHIFT 7U
+#define ROGUE_CR_JONES_IDLE_SOCIF_CLRMSK 0xFFFFFF7FU
+#define ROGUE_CR_JONES_IDLE_SOCIF_EN 0x00000080U
+#define ROGUE_CR_JONES_IDLE_TILING_SHIFT 6U
+#define ROGUE_CR_JONES_IDLE_TILING_CLRMSK 0xFFFFFFBFU
+#define ROGUE_CR_JONES_IDLE_TILING_EN 0x00000040U
+#define ROGUE_CR_JONES_IDLE_IPP_SHIFT 5U
+#define ROGUE_CR_JONES_IDLE_IPP_CLRMSK 0xFFFFFFDFU
+#define ROGUE_CR_JONES_IDLE_IPP_EN 0x00000020U
+#define ROGUE_CR_JONES_IDLE_USCS_SHIFT 4U
+#define ROGUE_CR_JONES_IDLE_USCS_CLRMSK 0xFFFFFFEFU
+#define ROGUE_CR_JONES_IDLE_USCS_EN 0x00000010U
+#define ROGUE_CR_JONES_IDLE_PM_SHIFT 3U
+#define ROGUE_CR_JONES_IDLE_PM_CLRMSK 0xFFFFFFF7U
+#define ROGUE_CR_JONES_IDLE_PM_EN 0x00000008U
+#define ROGUE_CR_JONES_IDLE_CDM_SHIFT 2U
+#define ROGUE_CR_JONES_IDLE_CDM_CLRMSK 0xFFFFFFFBU
+#define ROGUE_CR_JONES_IDLE_CDM_EN 0x00000004U
+#define ROGUE_CR_JONES_IDLE_VDM_SHIFT 1U
+#define ROGUE_CR_JONES_IDLE_VDM_CLRMSK 0xFFFFFFFDU
+#define ROGUE_CR_JONES_IDLE_VDM_EN 0x00000002U
+#define ROGUE_CR_JONES_IDLE_BIF_SHIFT 0U
+#define ROGUE_CR_JONES_IDLE_BIF_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_JONES_IDLE_BIF_EN 0x00000001U
+
+/* Register ROGUE_CR_TORNADO_PERF */
+#define ROGUE_CR_TORNADO_PERF 0x8228U
+#define ROGUE_CR_TORNADO_PERF_MASKFULL 0x000000000000001FULL
+#define ROGUE_CR_TORNADO_PERF_CLR_3_SHIFT 4U
+#define ROGUE_CR_TORNADO_PERF_CLR_3_CLRMSK 0xFFFFFFEFU
+#define ROGUE_CR_TORNADO_PERF_CLR_3_EN 0x00000010U
+#define ROGUE_CR_TORNADO_PERF_CLR_2_SHIFT 3U
+#define ROGUE_CR_TORNADO_PERF_CLR_2_CLRMSK 0xFFFFFFF7U
+#define ROGUE_CR_TORNADO_PERF_CLR_2_EN 0x00000008U
+#define ROGUE_CR_TORNADO_PERF_CLR_1_SHIFT 2U
+#define ROGUE_CR_TORNADO_PERF_CLR_1_CLRMSK 0xFFFFFFFBU
+#define ROGUE_CR_TORNADO_PERF_CLR_1_EN 0x00000004U
+#define ROGUE_CR_TORNADO_PERF_CLR_0_SHIFT 1U
+#define ROGUE_CR_TORNADO_PERF_CLR_0_CLRMSK 0xFFFFFFFDU
+#define ROGUE_CR_TORNADO_PERF_CLR_0_EN 0x00000002U
+#define ROGUE_CR_TORNADO_PERF_CTRL_ENABLE_SHIFT 0U
+#define ROGUE_CR_TORNADO_PERF_CTRL_ENABLE_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_TORNADO_PERF_CTRL_ENABLE_EN 0x00000001U
+
+/* Register ROGUE_CR_TORNADO_PERF_SELECT0 */
+#define ROGUE_CR_TORNADO_PERF_SELECT0 0x8230U
+#define ROGUE_CR_TORNADO_PERF_SELECT0_MASKFULL 0x3FFF3FFF003FFFFFULL
+#define ROGUE_CR_TORNADO_PERF_SELECT0_BATCH_MAX_SHIFT 48U
+#define ROGUE_CR_TORNADO_PERF_SELECT0_BATCH_MAX_CLRMSK 0xC000FFFFFFFFFFFFULL
+#define ROGUE_CR_TORNADO_PERF_SELECT0_BATCH_MIN_SHIFT 32U
+#define ROGUE_CR_TORNADO_PERF_SELECT0_BATCH_MIN_CLRMSK 0xFFFFC000FFFFFFFFULL
+#define ROGUE_CR_TORNADO_PERF_SELECT0_MODE_SHIFT 21U
+#define ROGUE_CR_TORNADO_PERF_SELECT0_MODE_CLRMSK 0xFFFFFFFFFFDFFFFFULL
+#define ROGUE_CR_TORNADO_PERF_SELECT0_MODE_EN 0x0000000000200000ULL
+#define ROGUE_CR_TORNADO_PERF_SELECT0_GROUP_SELECT_SHIFT 16U
+#define ROGUE_CR_TORNADO_PERF_SELECT0_GROUP_SELECT_CLRMSK 0xFFFFFFFFFFE0FFFFULL
+#define ROGUE_CR_TORNADO_PERF_SELECT0_BIT_SELECT_SHIFT 0U
+#define ROGUE_CR_TORNADO_PERF_SELECT0_BIT_SELECT_CLRMSK 0xFFFFFFFFFFFF0000ULL
+
+/* Register ROGUE_CR_TORNADO_PERF_COUNTER_0 */
+#define ROGUE_CR_TORNADO_PERF_COUNTER_0 0x8268U
+#define ROGUE_CR_TORNADO_PERF_COUNTER_0_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_TORNADO_PERF_COUNTER_0_REG_SHIFT 0U
+#define ROGUE_CR_TORNADO_PERF_COUNTER_0_REG_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_TEXAS_PERF */
+#define ROGUE_CR_TEXAS_PERF 0x8290U
+#define ROGUE_CR_TEXAS_PERF_MASKFULL 0x000000000000007FULL
+#define ROGUE_CR_TEXAS_PERF_CLR_5_SHIFT 6U
+#define ROGUE_CR_TEXAS_PERF_CLR_5_CLRMSK 0xFFFFFFBFU
+#define ROGUE_CR_TEXAS_PERF_CLR_5_EN 0x00000040U
+#define ROGUE_CR_TEXAS_PERF_CLR_4_SHIFT 5U
+#define ROGUE_CR_TEXAS_PERF_CLR_4_CLRMSK 0xFFFFFFDFU
+#define ROGUE_CR_TEXAS_PERF_CLR_4_EN 0x00000020U
+#define ROGUE_CR_TEXAS_PERF_CLR_3_SHIFT 4U
+#define ROGUE_CR_TEXAS_PERF_CLR_3_CLRMSK 0xFFFFFFEFU
+#define ROGUE_CR_TEXAS_PERF_CLR_3_EN 0x00000010U
+#define ROGUE_CR_TEXAS_PERF_CLR_2_SHIFT 3U
+#define ROGUE_CR_TEXAS_PERF_CLR_2_CLRMSK 0xFFFFFFF7U
+#define ROGUE_CR_TEXAS_PERF_CLR_2_EN 0x00000008U
+#define ROGUE_CR_TEXAS_PERF_CLR_1_SHIFT 2U
+#define ROGUE_CR_TEXAS_PERF_CLR_1_CLRMSK 0xFFFFFFFBU
+#define ROGUE_CR_TEXAS_PERF_CLR_1_EN 0x00000004U
+#define ROGUE_CR_TEXAS_PERF_CLR_0_SHIFT 1U
+#define ROGUE_CR_TEXAS_PERF_CLR_0_CLRMSK 0xFFFFFFFDU
+#define ROGUE_CR_TEXAS_PERF_CLR_0_EN 0x00000002U
+#define ROGUE_CR_TEXAS_PERF_CTRL_ENABLE_SHIFT 0U
+#define ROGUE_CR_TEXAS_PERF_CTRL_ENABLE_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_TEXAS_PERF_CTRL_ENABLE_EN 0x00000001U
+
+/* Register ROGUE_CR_TEXAS_PERF_SELECT0 */
+#define ROGUE_CR_TEXAS_PERF_SELECT0 0x8298U
+#define ROGUE_CR_TEXAS_PERF_SELECT0_MASKFULL 0x3FFF3FFF803FFFFFULL
+#define ROGUE_CR_TEXAS_PERF_SELECT0_BATCH_MAX_SHIFT 48U
+#define ROGUE_CR_TEXAS_PERF_SELECT0_BATCH_MAX_CLRMSK 0xC000FFFFFFFFFFFFULL
+#define ROGUE_CR_TEXAS_PERF_SELECT0_BATCH_MIN_SHIFT 32U
+#define ROGUE_CR_TEXAS_PERF_SELECT0_BATCH_MIN_CLRMSK 0xFFFFC000FFFFFFFFULL
+#define ROGUE_CR_TEXAS_PERF_SELECT0_MODE_SHIFT 31U
+#define ROGUE_CR_TEXAS_PERF_SELECT0_MODE_CLRMSK 0xFFFFFFFF7FFFFFFFULL
+#define ROGUE_CR_TEXAS_PERF_SELECT0_MODE_EN 0x0000000080000000ULL
+#define ROGUE_CR_TEXAS_PERF_SELECT0_GROUP_SELECT_SHIFT 16U
+#define ROGUE_CR_TEXAS_PERF_SELECT0_GROUP_SELECT_CLRMSK 0xFFFFFFFFFFC0FFFFULL
+#define ROGUE_CR_TEXAS_PERF_SELECT0_BIT_SELECT_SHIFT 0U
+#define ROGUE_CR_TEXAS_PERF_SELECT0_BIT_SELECT_CLRMSK 0xFFFFFFFFFFFF0000ULL
+
+/* Register ROGUE_CR_TEXAS_PERF_COUNTER_0 */
+#define ROGUE_CR_TEXAS_PERF_COUNTER_0 0x82D8U
+#define ROGUE_CR_TEXAS_PERF_COUNTER_0_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_TEXAS_PERF_COUNTER_0_REG_SHIFT 0U
+#define ROGUE_CR_TEXAS_PERF_COUNTER_0_REG_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_JONES_PERF */
+#define ROGUE_CR_JONES_PERF 0x8330U
+#define ROGUE_CR_JONES_PERF_MASKFULL 0x000000000000001FULL
+#define ROGUE_CR_JONES_PERF_CLR_3_SHIFT 4U
+#define ROGUE_CR_JONES_PERF_CLR_3_CLRMSK 0xFFFFFFEFU
+#define ROGUE_CR_JONES_PERF_CLR_3_EN 0x00000010U
+#define ROGUE_CR_JONES_PERF_CLR_2_SHIFT 3U
+#define ROGUE_CR_JONES_PERF_CLR_2_CLRMSK 0xFFFFFFF7U
+#define ROGUE_CR_JONES_PERF_CLR_2_EN 0x00000008U
+#define ROGUE_CR_JONES_PERF_CLR_1_SHIFT 2U
+#define ROGUE_CR_JONES_PERF_CLR_1_CLRMSK 0xFFFFFFFBU
+#define ROGUE_CR_JONES_PERF_CLR_1_EN 0x00000004U
+#define ROGUE_CR_JONES_PERF_CLR_0_SHIFT 1U
+#define ROGUE_CR_JONES_PERF_CLR_0_CLRMSK 0xFFFFFFFDU
+#define ROGUE_CR_JONES_PERF_CLR_0_EN 0x00000002U
+#define ROGUE_CR_JONES_PERF_CTRL_ENABLE_SHIFT 0U
+#define ROGUE_CR_JONES_PERF_CTRL_ENABLE_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_JONES_PERF_CTRL_ENABLE_EN 0x00000001U
+
+/* Register ROGUE_CR_JONES_PERF_SELECT0 */
+#define ROGUE_CR_JONES_PERF_SELECT0 0x8338U
+#define ROGUE_CR_JONES_PERF_SELECT0_MASKFULL 0x3FFF3FFF003FFFFFULL
+#define ROGUE_CR_JONES_PERF_SELECT0_BATCH_MAX_SHIFT 48U
+#define ROGUE_CR_JONES_PERF_SELECT0_BATCH_MAX_CLRMSK 0xC000FFFFFFFFFFFFULL
+#define ROGUE_CR_JONES_PERF_SELECT0_BATCH_MIN_SHIFT 32U
+#define ROGUE_CR_JONES_PERF_SELECT0_BATCH_MIN_CLRMSK 0xFFFFC000FFFFFFFFULL
+#define ROGUE_CR_JONES_PERF_SELECT0_MODE_SHIFT 21U
+#define ROGUE_CR_JONES_PERF_SELECT0_MODE_CLRMSK 0xFFFFFFFFFFDFFFFFULL
+#define ROGUE_CR_JONES_PERF_SELECT0_MODE_EN 0x0000000000200000ULL
+#define ROGUE_CR_JONES_PERF_SELECT0_GROUP_SELECT_SHIFT 16U
+#define ROGUE_CR_JONES_PERF_SELECT0_GROUP_SELECT_CLRMSK 0xFFFFFFFFFFE0FFFFULL
+#define ROGUE_CR_JONES_PERF_SELECT0_BIT_SELECT_SHIFT 0U
+#define ROGUE_CR_JONES_PERF_SELECT0_BIT_SELECT_CLRMSK 0xFFFFFFFFFFFF0000ULL
+
+/* Register ROGUE_CR_JONES_PERF_COUNTER_0 */
+#define ROGUE_CR_JONES_PERF_COUNTER_0 0x8368U
+#define ROGUE_CR_JONES_PERF_COUNTER_0_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_JONES_PERF_COUNTER_0_REG_SHIFT 0U
+#define ROGUE_CR_JONES_PERF_COUNTER_0_REG_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_BLACKPEARL_PERF */
+#define ROGUE_CR_BLACKPEARL_PERF 0x8400U
+#define ROGUE_CR_BLACKPEARL_PERF_MASKFULL 0x000000000000007FULL
+#define ROGUE_CR_BLACKPEARL_PERF_CLR_5_SHIFT 6U
+#define ROGUE_CR_BLACKPEARL_PERF_CLR_5_CLRMSK 0xFFFFFFBFU
+#define ROGUE_CR_BLACKPEARL_PERF_CLR_5_EN 0x00000040U
+#define ROGUE_CR_BLACKPEARL_PERF_CLR_4_SHIFT 5U
+#define ROGUE_CR_BLACKPEARL_PERF_CLR_4_CLRMSK 0xFFFFFFDFU
+#define ROGUE_CR_BLACKPEARL_PERF_CLR_4_EN 0x00000020U
+#define ROGUE_CR_BLACKPEARL_PERF_CLR_3_SHIFT 4U
+#define ROGUE_CR_BLACKPEARL_PERF_CLR_3_CLRMSK 0xFFFFFFEFU
+#define ROGUE_CR_BLACKPEARL_PERF_CLR_3_EN 0x00000010U
+#define ROGUE_CR_BLACKPEARL_PERF_CLR_2_SHIFT 3U
+#define ROGUE_CR_BLACKPEARL_PERF_CLR_2_CLRMSK 0xFFFFFFF7U
+#define ROGUE_CR_BLACKPEARL_PERF_CLR_2_EN 0x00000008U
+#define ROGUE_CR_BLACKPEARL_PERF_CLR_1_SHIFT 2U
+#define ROGUE_CR_BLACKPEARL_PERF_CLR_1_CLRMSK 0xFFFFFFFBU
+#define ROGUE_CR_BLACKPEARL_PERF_CLR_1_EN 0x00000004U
+#define ROGUE_CR_BLACKPEARL_PERF_CLR_0_SHIFT 1U
+#define ROGUE_CR_BLACKPEARL_PERF_CLR_0_CLRMSK 0xFFFFFFFDU
+#define ROGUE_CR_BLACKPEARL_PERF_CLR_0_EN 0x00000002U
+#define ROGUE_CR_BLACKPEARL_PERF_CTRL_ENABLE_SHIFT 0U
+#define ROGUE_CR_BLACKPEARL_PERF_CTRL_ENABLE_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_BLACKPEARL_PERF_CTRL_ENABLE_EN 0x00000001U
+
+/* Register ROGUE_CR_BLACKPEARL_PERF_SELECT0 */
+#define ROGUE_CR_BLACKPEARL_PERF_SELECT0 0x8408U
+#define ROGUE_CR_BLACKPEARL_PERF_SELECT0_MASKFULL 0x3FFF3FFF803FFFFFULL
+#define ROGUE_CR_BLACKPEARL_PERF_SELECT0_BATCH_MAX_SHIFT 48U
+#define ROGUE_CR_BLACKPEARL_PERF_SELECT0_BATCH_MAX_CLRMSK 0xC000FFFFFFFFFFFFULL
+#define ROGUE_CR_BLACKPEARL_PERF_SELECT0_BATCH_MIN_SHIFT 32U
+#define ROGUE_CR_BLACKPEARL_PERF_SELECT0_BATCH_MIN_CLRMSK 0xFFFFC000FFFFFFFFULL
+#define ROGUE_CR_BLACKPEARL_PERF_SELECT0_MODE_SHIFT 31U
+#define ROGUE_CR_BLACKPEARL_PERF_SELECT0_MODE_CLRMSK 0xFFFFFFFF7FFFFFFFULL
+#define ROGUE_CR_BLACKPEARL_PERF_SELECT0_MODE_EN 0x0000000080000000ULL
+#define ROGUE_CR_BLACKPEARL_PERF_SELECT0_GROUP_SELECT_SHIFT 16U
+#define ROGUE_CR_BLACKPEARL_PERF_SELECT0_GROUP_SELECT_CLRMSK 0xFFFFFFFFFFC0FFFFULL
+#define ROGUE_CR_BLACKPEARL_PERF_SELECT0_BIT_SELECT_SHIFT 0U
+#define ROGUE_CR_BLACKPEARL_PERF_SELECT0_BIT_SELECT_CLRMSK 0xFFFFFFFFFFFF0000ULL
+
+/* Register ROGUE_CR_BLACKPEARL_PERF_COUNTER_0 */
+#define ROGUE_CR_BLACKPEARL_PERF_COUNTER_0 0x8448U
+#define ROGUE_CR_BLACKPEARL_PERF_COUNTER_0_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_BLACKPEARL_PERF_COUNTER_0_REG_SHIFT 0U
+#define ROGUE_CR_BLACKPEARL_PERF_COUNTER_0_REG_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_PBE_PERF */
+#define ROGUE_CR_PBE_PERF 0x8478U
+#define ROGUE_CR_PBE_PERF_MASKFULL 0x000000000000001FULL
+#define ROGUE_CR_PBE_PERF_CLR_3_SHIFT 4U
+#define ROGUE_CR_PBE_PERF_CLR_3_CLRMSK 0xFFFFFFEFU
+#define ROGUE_CR_PBE_PERF_CLR_3_EN 0x00000010U
+#define ROGUE_CR_PBE_PERF_CLR_2_SHIFT 3U
+#define ROGUE_CR_PBE_PERF_CLR_2_CLRMSK 0xFFFFFFF7U
+#define ROGUE_CR_PBE_PERF_CLR_2_EN 0x00000008U
+#define ROGUE_CR_PBE_PERF_CLR_1_SHIFT 2U
+#define ROGUE_CR_PBE_PERF_CLR_1_CLRMSK 0xFFFFFFFBU
+#define ROGUE_CR_PBE_PERF_CLR_1_EN 0x00000004U
+#define ROGUE_CR_PBE_PERF_CLR_0_SHIFT 1U
+#define ROGUE_CR_PBE_PERF_CLR_0_CLRMSK 0xFFFFFFFDU
+#define ROGUE_CR_PBE_PERF_CLR_0_EN 0x00000002U
+#define ROGUE_CR_PBE_PERF_CTRL_ENABLE_SHIFT 0U
+#define ROGUE_CR_PBE_PERF_CTRL_ENABLE_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_PBE_PERF_CTRL_ENABLE_EN 0x00000001U
+
+/* Register ROGUE_CR_PBE_PERF_SELECT0 */
+#define ROGUE_CR_PBE_PERF_SELECT0 0x8480U
+#define ROGUE_CR_PBE_PERF_SELECT0_MASKFULL 0x3FFF3FFF003FFFFFULL
+#define ROGUE_CR_PBE_PERF_SELECT0_BATCH_MAX_SHIFT 48U
+#define ROGUE_CR_PBE_PERF_SELECT0_BATCH_MAX_CLRMSK 0xC000FFFFFFFFFFFFULL
+#define ROGUE_CR_PBE_PERF_SELECT0_BATCH_MIN_SHIFT 32U
+#define ROGUE_CR_PBE_PERF_SELECT0_BATCH_MIN_CLRMSK 0xFFFFC000FFFFFFFFULL
+#define ROGUE_CR_PBE_PERF_SELECT0_MODE_SHIFT 21U
+#define ROGUE_CR_PBE_PERF_SELECT0_MODE_CLRMSK 0xFFFFFFFFFFDFFFFFULL
+#define ROGUE_CR_PBE_PERF_SELECT0_MODE_EN 0x0000000000200000ULL
+#define ROGUE_CR_PBE_PERF_SELECT0_GROUP_SELECT_SHIFT 16U
+#define ROGUE_CR_PBE_PERF_SELECT0_GROUP_SELECT_CLRMSK 0xFFFFFFFFFFE0FFFFULL
+#define ROGUE_CR_PBE_PERF_SELECT0_BIT_SELECT_SHIFT 0U
+#define ROGUE_CR_PBE_PERF_SELECT0_BIT_SELECT_CLRMSK 0xFFFFFFFFFFFF0000ULL
+
+/* Register ROGUE_CR_PBE_PERF_COUNTER_0 */
+#define ROGUE_CR_PBE_PERF_COUNTER_0 0x84B0U
+#define ROGUE_CR_PBE_PERF_COUNTER_0_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_PBE_PERF_COUNTER_0_REG_SHIFT 0U
+#define ROGUE_CR_PBE_PERF_COUNTER_0_REG_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_OCP_REVINFO */
+#define ROGUE_CR_OCP_REVINFO 0x9000U
+#define ROGUE_CR_OCP_REVINFO_MASKFULL 0x00000007FFFFFFFFULL
+#define ROGUE_CR_OCP_REVINFO_HWINFO_SYSBUS_SHIFT 33U
+#define ROGUE_CR_OCP_REVINFO_HWINFO_SYSBUS_CLRMSK 0xFFFFFFF9FFFFFFFFULL
+#define ROGUE_CR_OCP_REVINFO_HWINFO_MEMBUS_SHIFT 32U
+#define ROGUE_CR_OCP_REVINFO_HWINFO_MEMBUS_CLRMSK 0xFFFFFFFEFFFFFFFFULL
+#define ROGUE_CR_OCP_REVINFO_HWINFO_MEMBUS_EN 0x0000000100000000ULL
+#define ROGUE_CR_OCP_REVINFO_REVISION_SHIFT 0U
+#define ROGUE_CR_OCP_REVINFO_REVISION_CLRMSK 0xFFFFFFFF00000000ULL
+
+/* Register ROGUE_CR_OCP_SYSCONFIG */
+#define ROGUE_CR_OCP_SYSCONFIG 0x9010U
+#define ROGUE_CR_OCP_SYSCONFIG_MASKFULL 0x0000000000000FFFULL
+#define ROGUE_CR_OCP_SYSCONFIG_DUST2_STANDBY_MODE_SHIFT 10U
+#define ROGUE_CR_OCP_SYSCONFIG_DUST2_STANDBY_MODE_CLRMSK 0xFFFFF3FFU
+#define ROGUE_CR_OCP_SYSCONFIG_DUST1_STANDBY_MODE_SHIFT 8U
+#define ROGUE_CR_OCP_SYSCONFIG_DUST1_STANDBY_MODE_CLRMSK 0xFFFFFCFFU
+#define ROGUE_CR_OCP_SYSCONFIG_DUST0_STANDBY_MODE_SHIFT 6U
+#define ROGUE_CR_OCP_SYSCONFIG_DUST0_STANDBY_MODE_CLRMSK 0xFFFFFF3FU
+#define ROGUE_CR_OCP_SYSCONFIG_RASCAL_STANDBYMODE_SHIFT 4U
+#define ROGUE_CR_OCP_SYSCONFIG_RASCAL_STANDBYMODE_CLRMSK 0xFFFFFFCFU
+#define ROGUE_CR_OCP_SYSCONFIG_STANDBY_MODE_SHIFT 2U
+#define ROGUE_CR_OCP_SYSCONFIG_STANDBY_MODE_CLRMSK 0xFFFFFFF3U
+#define ROGUE_CR_OCP_SYSCONFIG_IDLE_MODE_SHIFT 0U
+#define ROGUE_CR_OCP_SYSCONFIG_IDLE_MODE_CLRMSK 0xFFFFFFFCU
+
+/* Register ROGUE_CR_OCP_IRQSTATUS_RAW_0 */
+#define ROGUE_CR_OCP_IRQSTATUS_RAW_0 0x9020U
+#define ROGUE_CR_OCP_IRQSTATUS_RAW_0_MASKFULL 0x0000000000000001ULL
+#define ROGUE_CR_OCP_IRQSTATUS_RAW_0_INIT_MINTERRUPT_RAW_SHIFT 0U
+#define ROGUE_CR_OCP_IRQSTATUS_RAW_0_INIT_MINTERRUPT_RAW_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_OCP_IRQSTATUS_RAW_0_INIT_MINTERRUPT_RAW_EN 0x00000001U
+
+/* Register ROGUE_CR_OCP_IRQSTATUS_RAW_1 */
+#define ROGUE_CR_OCP_IRQSTATUS_RAW_1 0x9028U
+#define ROGUE_CR_OCP_IRQSTATUS_RAW_1_MASKFULL 0x0000000000000001ULL
+#define ROGUE_CR_OCP_IRQSTATUS_RAW_1_TARGET_SINTERRUPT_RAW_SHIFT 0U
+#define ROGUE_CR_OCP_IRQSTATUS_RAW_1_TARGET_SINTERRUPT_RAW_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_OCP_IRQSTATUS_RAW_1_TARGET_SINTERRUPT_RAW_EN 0x00000001U
+
+/* Register ROGUE_CR_OCP_IRQSTATUS_RAW_2 */
+#define ROGUE_CR_OCP_IRQSTATUS_RAW_2 0x9030U
+#define ROGUE_CR_OCP_IRQSTATUS_RAW_2_MASKFULL 0x0000000000000001ULL
+#define ROGUE_CR_OCP_IRQSTATUS_RAW_2_RGX_IRQ_RAW_SHIFT 0U
+#define ROGUE_CR_OCP_IRQSTATUS_RAW_2_RGX_IRQ_RAW_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_OCP_IRQSTATUS_RAW_2_RGX_IRQ_RAW_EN 0x00000001U
+
+/* Register ROGUE_CR_OCP_IRQSTATUS_0 */
+#define ROGUE_CR_OCP_IRQSTATUS_0 0x9038U
+#define ROGUE_CR_OCP_IRQSTATUS_0_MASKFULL 0x0000000000000001ULL
+#define ROGUE_CR_OCP_IRQSTATUS_0_INIT_MINTERRUPT_STATUS_SHIFT 0U
+#define ROGUE_CR_OCP_IRQSTATUS_0_INIT_MINTERRUPT_STATUS_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_OCP_IRQSTATUS_0_INIT_MINTERRUPT_STATUS_EN 0x00000001U
+
+/* Register ROGUE_CR_OCP_IRQSTATUS_1 */
+#define ROGUE_CR_OCP_IRQSTATUS_1 0x9040U
+#define ROGUE_CR_OCP_IRQSTATUS_1_MASKFULL 0x0000000000000001ULL
+#define ROGUE_CR_OCP_IRQSTATUS_1_TARGET_SINTERRUPT_STATUS_SHIFT 0U
+#define ROGUE_CR_OCP_IRQSTATUS_1_TARGET_SINTERRUPT_STATUS_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_OCP_IRQSTATUS_1_TARGET_SINTERRUPT_STATUS_EN 0x00000001U
+
+/* Register ROGUE_CR_OCP_IRQSTATUS_2 */
+#define ROGUE_CR_OCP_IRQSTATUS_2 0x9048U
+#define ROGUE_CR_OCP_IRQSTATUS_2_MASKFULL 0x0000000000000001ULL
+#define ROGUE_CR_OCP_IRQSTATUS_2_RGX_IRQ_STATUS_SHIFT 0U
+#define ROGUE_CR_OCP_IRQSTATUS_2_RGX_IRQ_STATUS_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_OCP_IRQSTATUS_2_RGX_IRQ_STATUS_EN 0x00000001U
+
+/* Register ROGUE_CR_OCP_IRQENABLE_SET_0 */
+#define ROGUE_CR_OCP_IRQENABLE_SET_0 0x9050U
+#define ROGUE_CR_OCP_IRQENABLE_SET_0_MASKFULL 0x0000000000000001ULL
+#define ROGUE_CR_OCP_IRQENABLE_SET_0_INIT_MINTERRUPT_ENABLE_SHIFT 0U
+#define ROGUE_CR_OCP_IRQENABLE_SET_0_INIT_MINTERRUPT_ENABLE_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_OCP_IRQENABLE_SET_0_INIT_MINTERRUPT_ENABLE_EN 0x00000001U
+
+/* Register ROGUE_CR_OCP_IRQENABLE_SET_1 */
+#define ROGUE_CR_OCP_IRQENABLE_SET_1 0x9058U
+#define ROGUE_CR_OCP_IRQENABLE_SET_1_MASKFULL 0x0000000000000001ULL
+#define ROGUE_CR_OCP_IRQENABLE_SET_1_TARGET_SINTERRUPT_ENABLE_SHIFT 0U
+#define ROGUE_CR_OCP_IRQENABLE_SET_1_TARGET_SINTERRUPT_ENABLE_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_OCP_IRQENABLE_SET_1_TARGET_SINTERRUPT_ENABLE_EN 0x00000001U
+
+/* Register ROGUE_CR_OCP_IRQENABLE_SET_2 */
+#define ROGUE_CR_OCP_IRQENABLE_SET_2 0x9060U
+#define ROGUE_CR_OCP_IRQENABLE_SET_2_MASKFULL 0x0000000000000001ULL
+#define ROGUE_CR_OCP_IRQENABLE_SET_2_RGX_IRQ_ENABLE_SHIFT 0U
+#define ROGUE_CR_OCP_IRQENABLE_SET_2_RGX_IRQ_ENABLE_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_OCP_IRQENABLE_SET_2_RGX_IRQ_ENABLE_EN 0x00000001U
+
+/* Register ROGUE_CR_OCP_IRQENABLE_CLR_0 */
+#define ROGUE_CR_OCP_IRQENABLE_CLR_0 0x9068U
+#define ROGUE_CR_OCP_IRQENABLE_CLR_0_MASKFULL 0x0000000000000001ULL
+#define ROGUE_CR_OCP_IRQENABLE_CLR_0_INIT_MINTERRUPT_DISABLE_SHIFT 0U
+#define ROGUE_CR_OCP_IRQENABLE_CLR_0_INIT_MINTERRUPT_DISABLE_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_OCP_IRQENABLE_CLR_0_INIT_MINTERRUPT_DISABLE_EN 0x00000001U
+
+/* Register ROGUE_CR_OCP_IRQENABLE_CLR_1 */
+#define ROGUE_CR_OCP_IRQENABLE_CLR_1 0x9070U
+#define ROGUE_CR_OCP_IRQENABLE_CLR_1_MASKFULL 0x0000000000000001ULL
+#define ROGUE_CR_OCP_IRQENABLE_CLR_1_TARGET_SINTERRUPT_DISABLE_SHIFT 0U
+#define ROGUE_CR_OCP_IRQENABLE_CLR_1_TARGET_SINTERRUPT_DISABLE_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_OCP_IRQENABLE_CLR_1_TARGET_SINTERRUPT_DISABLE_EN 0x00000001U
+
+/* Register ROGUE_CR_OCP_IRQENABLE_CLR_2 */
+#define ROGUE_CR_OCP_IRQENABLE_CLR_2 0x9078U
+#define ROGUE_CR_OCP_IRQENABLE_CLR_2_MASKFULL 0x0000000000000001ULL
+#define ROGUE_CR_OCP_IRQENABLE_CLR_2_RGX_IRQ_DISABLE_SHIFT 0U
+#define ROGUE_CR_OCP_IRQENABLE_CLR_2_RGX_IRQ_DISABLE_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_OCP_IRQENABLE_CLR_2_RGX_IRQ_DISABLE_EN 0x00000001U
+
+/* Register ROGUE_CR_OCP_IRQ_EVENT */
+#define ROGUE_CR_OCP_IRQ_EVENT 0x9080U
+#define ROGUE_CR_OCP_IRQ_EVENT_MASKFULL 0x00000000000FFFFFULL
+#define ROGUE_CR_OCP_IRQ_EVENT_TARGETH_RCVD_UNEXPECTED_RDATA_SHIFT 19U
+#define ROGUE_CR_OCP_IRQ_EVENT_TARGETH_RCVD_UNEXPECTED_RDATA_CLRMSK 0xFFFFFFFFFFF7FFFFULL
+#define ROGUE_CR_OCP_IRQ_EVENT_TARGETH_RCVD_UNEXPECTED_RDATA_EN 0x0000000000080000ULL
+#define ROGUE_CR_OCP_IRQ_EVENT_TARGETH_RCVD_UNSUPPORTED_MCMD_SHIFT 18U
+#define ROGUE_CR_OCP_IRQ_EVENT_TARGETH_RCVD_UNSUPPORTED_MCMD_CLRMSK 0xFFFFFFFFFFFBFFFFULL
+#define ROGUE_CR_OCP_IRQ_EVENT_TARGETH_RCVD_UNSUPPORTED_MCMD_EN 0x0000000000040000ULL
+#define ROGUE_CR_OCP_IRQ_EVENT_TARGETS_RCVD_UNEXPECTED_RDATA_SHIFT 17U
+#define ROGUE_CR_OCP_IRQ_EVENT_TARGETS_RCVD_UNEXPECTED_RDATA_CLRMSK 0xFFFFFFFFFFFDFFFFULL
+#define ROGUE_CR_OCP_IRQ_EVENT_TARGETS_RCVD_UNEXPECTED_RDATA_EN 0x0000000000020000ULL
+#define ROGUE_CR_OCP_IRQ_EVENT_TARGETS_RCVD_UNSUPPORTED_MCMD_SHIFT 16U
+#define ROGUE_CR_OCP_IRQ_EVENT_TARGETS_RCVD_UNSUPPORTED_MCMD_CLRMSK 0xFFFFFFFFFFFEFFFFULL
+#define ROGUE_CR_OCP_IRQ_EVENT_TARGETS_RCVD_UNSUPPORTED_MCMD_EN 0x0000000000010000ULL
+#define ROGUE_CR_OCP_IRQ_EVENT_INIT3_IMG_PAGE_BOUNDARY_CROSS_SHIFT 15U
+#define ROGUE_CR_OCP_IRQ_EVENT_INIT3_IMG_PAGE_BOUNDARY_CROSS_CLRMSK 0xFFFFFFFFFFFF7FFFULL
+#define ROGUE_CR_OCP_IRQ_EVENT_INIT3_IMG_PAGE_BOUNDARY_CROSS_EN 0x0000000000008000ULL
+#define ROGUE_CR_OCP_IRQ_EVENT_INIT3_RCVD_RESP_ERR_FAIL_SHIFT 14U
+#define ROGUE_CR_OCP_IRQ_EVENT_INIT3_RCVD_RESP_ERR_FAIL_CLRMSK 0xFFFFFFFFFFFFBFFFULL
+#define ROGUE_CR_OCP_IRQ_EVENT_INIT3_RCVD_RESP_ERR_FAIL_EN 0x0000000000004000ULL
+#define ROGUE_CR_OCP_IRQ_EVENT_INIT3_RCVD_UNUSED_TAGID_SHIFT 13U
+#define ROGUE_CR_OCP_IRQ_EVENT_INIT3_RCVD_UNUSED_TAGID_CLRMSK 0xFFFFFFFFFFFFDFFFULL
+#define ROGUE_CR_OCP_IRQ_EVENT_INIT3_RCVD_UNUSED_TAGID_EN 0x0000000000002000ULL
+#define ROGUE_CR_OCP_IRQ_EVENT_INIT3_RDATA_FIFO_OVERFILL_SHIFT 12U
+#define ROGUE_CR_OCP_IRQ_EVENT_INIT3_RDATA_FIFO_OVERFILL_CLRMSK 0xFFFFFFFFFFFFEFFFULL
+#define ROGUE_CR_OCP_IRQ_EVENT_INIT3_RDATA_FIFO_OVERFILL_EN 0x0000000000001000ULL
+#define ROGUE_CR_OCP_IRQ_EVENT_INIT2_IMG_PAGE_BOUNDARY_CROSS_SHIFT 11U
+#define ROGUE_CR_OCP_IRQ_EVENT_INIT2_IMG_PAGE_BOUNDARY_CROSS_CLRMSK 0xFFFFFFFFFFFFF7FFULL
+#define ROGUE_CR_OCP_IRQ_EVENT_INIT2_IMG_PAGE_BOUNDARY_CROSS_EN 0x0000000000000800ULL
+#define ROGUE_CR_OCP_IRQ_EVENT_INIT2_RCVD_RESP_ERR_FAIL_SHIFT 10U
+#define ROGUE_CR_OCP_IRQ_EVENT_INIT2_RCVD_RESP_ERR_FAIL_CLRMSK 0xFFFFFFFFFFFFFBFFULL
+#define ROGUE_CR_OCP_IRQ_EVENT_INIT2_RCVD_RESP_ERR_FAIL_EN 0x0000000000000400ULL
+#define ROGUE_CR_OCP_IRQ_EVENT_INIT2_RCVD_UNUSED_TAGID_SHIFT 9U
+#define ROGUE_CR_OCP_IRQ_EVENT_INIT2_RCVD_UNUSED_TAGID_CLRMSK 0xFFFFFFFFFFFFFDFFULL
+#define ROGUE_CR_OCP_IRQ_EVENT_INIT2_RCVD_UNUSED_TAGID_EN 0x0000000000000200ULL
+#define ROGUE_CR_OCP_IRQ_EVENT_INIT2_RDATA_FIFO_OVERFILL_SHIFT 8U
+#define ROGUE_CR_OCP_IRQ_EVENT_INIT2_RDATA_FIFO_OVERFILL_CLRMSK 0xFFFFFFFFFFFFFEFFULL
+#define ROGUE_CR_OCP_IRQ_EVENT_INIT2_RDATA_FIFO_OVERFILL_EN 0x0000000000000100ULL
+#define ROGUE_CR_OCP_IRQ_EVENT_INIT1_IMG_PAGE_BOUNDARY_CROSS_SHIFT 7U
+#define ROGUE_CR_OCP_IRQ_EVENT_INIT1_IMG_PAGE_BOUNDARY_CROSS_CLRMSK 0xFFFFFFFFFFFFFF7FULL
+#define ROGUE_CR_OCP_IRQ_EVENT_INIT1_IMG_PAGE_BOUNDARY_CROSS_EN 0x0000000000000080ULL
+#define ROGUE_CR_OCP_IRQ_EVENT_INIT1_RCVD_RESP_ERR_FAIL_SHIFT 6U
+#define ROGUE_CR_OCP_IRQ_EVENT_INIT1_RCVD_RESP_ERR_FAIL_CLRMSK 0xFFFFFFFFFFFFFFBFULL
+#define ROGUE_CR_OCP_IRQ_EVENT_INIT1_RCVD_RESP_ERR_FAIL_EN 0x0000000000000040ULL
+#define ROGUE_CR_OCP_IRQ_EVENT_INIT1_RCVD_UNUSED_TAGID_SHIFT 5U
+#define ROGUE_CR_OCP_IRQ_EVENT_INIT1_RCVD_UNUSED_TAGID_CLRMSK 0xFFFFFFFFFFFFFFDFULL
+#define ROGUE_CR_OCP_IRQ_EVENT_INIT1_RCVD_UNUSED_TAGID_EN 0x0000000000000020ULL
+#define ROGUE_CR_OCP_IRQ_EVENT_INIT1_RDATA_FIFO_OVERFILL_SHIFT 4U
+#define ROGUE_CR_OCP_IRQ_EVENT_INIT1_RDATA_FIFO_OVERFILL_CLRMSK 0xFFFFFFFFFFFFFFEFULL
+#define ROGUE_CR_OCP_IRQ_EVENT_INIT1_RDATA_FIFO_OVERFILL_EN 0x0000000000000010ULL
+#define ROGUE_CR_OCP_IRQ_EVENT_INIT0_IMG_PAGE_BOUNDARY_CROSS_SHIFT 3U
+#define ROGUE_CR_OCP_IRQ_EVENT_INIT0_IMG_PAGE_BOUNDARY_CROSS_CLRMSK 0xFFFFFFFFFFFFFFF7ULL
+#define ROGUE_CR_OCP_IRQ_EVENT_INIT0_IMG_PAGE_BOUNDARY_CROSS_EN 0x0000000000000008ULL
+#define ROGUE_CR_OCP_IRQ_EVENT_INIT0_RCVD_RESP_ERR_FAIL_SHIFT 2U
+#define ROGUE_CR_OCP_IRQ_EVENT_INIT0_RCVD_RESP_ERR_FAIL_CLRMSK 0xFFFFFFFFFFFFFFFBULL
+#define ROGUE_CR_OCP_IRQ_EVENT_INIT0_RCVD_RESP_ERR_FAIL_EN 0x0000000000000004ULL
+#define ROGUE_CR_OCP_IRQ_EVENT_INIT0_RCVD_UNUSED_TAGID_SHIFT 1U
+#define ROGUE_CR_OCP_IRQ_EVENT_INIT0_RCVD_UNUSED_TAGID_CLRMSK 0xFFFFFFFFFFFFFFFDULL
+#define ROGUE_CR_OCP_IRQ_EVENT_INIT0_RCVD_UNUSED_TAGID_EN 0x0000000000000002ULL
+#define ROGUE_CR_OCP_IRQ_EVENT_INIT0_RDATA_FIFO_OVERFILL_SHIFT 0U
+#define ROGUE_CR_OCP_IRQ_EVENT_INIT0_RDATA_FIFO_OVERFILL_CLRMSK 0xFFFFFFFFFFFFFFFEULL
+#define ROGUE_CR_OCP_IRQ_EVENT_INIT0_RDATA_FIFO_OVERFILL_EN 0x0000000000000001ULL
+
+/* Register ROGUE_CR_OCP_DEBUG_CONFIG */
+#define ROGUE_CR_OCP_DEBUG_CONFIG 0x9088U
+#define ROGUE_CR_OCP_DEBUG_CONFIG_MASKFULL 0x0000000000000001ULL
+#define ROGUE_CR_OCP_DEBUG_CONFIG_REG_SHIFT 0U
+#define ROGUE_CR_OCP_DEBUG_CONFIG_REG_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_OCP_DEBUG_CONFIG_REG_EN 0x00000001U
+
+/* Register ROGUE_CR_OCP_DEBUG_STATUS */
+#define ROGUE_CR_OCP_DEBUG_STATUS 0x9090U
+#define ROGUE_CR_OCP_DEBUG_STATUS_MASKFULL 0x001F1F77FFFFFFFFULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_TARGETH_SDISCACK_SHIFT 51U
+#define ROGUE_CR_OCP_DEBUG_STATUS_TARGETH_SDISCACK_CLRMSK 0xFFE7FFFFFFFFFFFFULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_TARGETH_SCONNECT_SHIFT 50U
+#define ROGUE_CR_OCP_DEBUG_STATUS_TARGETH_SCONNECT_CLRMSK 0xFFFBFFFFFFFFFFFFULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_TARGETH_SCONNECT_EN 0x0004000000000000ULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_TARGETH_MCONNECT_SHIFT 48U
+#define ROGUE_CR_OCP_DEBUG_STATUS_TARGETH_MCONNECT_CLRMSK 0xFFFCFFFFFFFFFFFFULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_TARGETS_SDISCACK_SHIFT 43U
+#define ROGUE_CR_OCP_DEBUG_STATUS_TARGETS_SDISCACK_CLRMSK 0xFFFFE7FFFFFFFFFFULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_TARGETS_SCONNECT_SHIFT 42U
+#define ROGUE_CR_OCP_DEBUG_STATUS_TARGETS_SCONNECT_CLRMSK 0xFFFFFBFFFFFFFFFFULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_TARGETS_SCONNECT_EN 0x0000040000000000ULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_TARGETS_MCONNECT_SHIFT 40U
+#define ROGUE_CR_OCP_DEBUG_STATUS_TARGETS_MCONNECT_CLRMSK 0xFFFFFCFFFFFFFFFFULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_TARGETH_BUSY_SHIFT 38U
+#define ROGUE_CR_OCP_DEBUG_STATUS_TARGETH_BUSY_CLRMSK 0xFFFFFFBFFFFFFFFFULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_TARGETH_BUSY_EN 0x0000004000000000ULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_TARGETH_CMD_FIFO_FULL_SHIFT 37U
+#define ROGUE_CR_OCP_DEBUG_STATUS_TARGETH_CMD_FIFO_FULL_CLRMSK 0xFFFFFFDFFFFFFFFFULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_TARGETH_CMD_FIFO_FULL_EN 0x0000002000000000ULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_TARGETH_SRESP_ERROR_SHIFT 36U
+#define ROGUE_CR_OCP_DEBUG_STATUS_TARGETH_SRESP_ERROR_CLRMSK 0xFFFFFFEFFFFFFFFFULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_TARGETH_SRESP_ERROR_EN 0x0000001000000000ULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_TARGETS_BUSY_SHIFT 34U
+#define ROGUE_CR_OCP_DEBUG_STATUS_TARGETS_BUSY_CLRMSK 0xFFFFFFFBFFFFFFFFULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_TARGETS_BUSY_EN 0x0000000400000000ULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_TARGETS_CMD_FIFO_FULL_SHIFT 33U
+#define ROGUE_CR_OCP_DEBUG_STATUS_TARGETS_CMD_FIFO_FULL_CLRMSK 0xFFFFFFFDFFFFFFFFULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_TARGETS_CMD_FIFO_FULL_EN 0x0000000200000000ULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_TARGETS_SRESP_ERROR_SHIFT 32U
+#define ROGUE_CR_OCP_DEBUG_STATUS_TARGETS_SRESP_ERROR_CLRMSK 0xFFFFFFFEFFFFFFFFULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_TARGETS_SRESP_ERROR_EN 0x0000000100000000ULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT3_RESERVED_SHIFT 31U
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT3_RESERVED_CLRMSK 0xFFFFFFFF7FFFFFFFULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT3_RESERVED_EN 0x0000000080000000ULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT3_SWAIT_SHIFT 30U
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT3_SWAIT_CLRMSK 0xFFFFFFFFBFFFFFFFULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT3_SWAIT_EN 0x0000000040000000ULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT3_MDISCREQ_SHIFT 29U
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT3_MDISCREQ_CLRMSK 0xFFFFFFFFDFFFFFFFULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT3_MDISCREQ_EN 0x0000000020000000ULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT3_MDISCACK_SHIFT 27U
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT3_MDISCACK_CLRMSK 0xFFFFFFFFE7FFFFFFULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT3_SCONNECT_SHIFT 26U
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT3_SCONNECT_CLRMSK 0xFFFFFFFFFBFFFFFFULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT3_SCONNECT_EN 0x0000000004000000ULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT3_MCONNECT_SHIFT 24U
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT3_MCONNECT_CLRMSK 0xFFFFFFFFFCFFFFFFULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT2_RESERVED_SHIFT 23U
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT2_RESERVED_CLRMSK 0xFFFFFFFFFF7FFFFFULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT2_RESERVED_EN 0x0000000000800000ULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT2_SWAIT_SHIFT 22U
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT2_SWAIT_CLRMSK 0xFFFFFFFFFFBFFFFFULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT2_SWAIT_EN 0x0000000000400000ULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT2_MDISCREQ_SHIFT 21U
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT2_MDISCREQ_CLRMSK 0xFFFFFFFFFFDFFFFFULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT2_MDISCREQ_EN 0x0000000000200000ULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT2_MDISCACK_SHIFT 19U
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT2_MDISCACK_CLRMSK 0xFFFFFFFFFFE7FFFFULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT2_SCONNECT_SHIFT 18U
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT2_SCONNECT_CLRMSK 0xFFFFFFFFFFFBFFFFULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT2_SCONNECT_EN 0x0000000000040000ULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT2_MCONNECT_SHIFT 16U
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT2_MCONNECT_CLRMSK 0xFFFFFFFFFFFCFFFFULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT1_RESERVED_SHIFT 15U
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT1_RESERVED_CLRMSK 0xFFFFFFFFFFFF7FFFULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT1_RESERVED_EN 0x0000000000008000ULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT1_SWAIT_SHIFT 14U
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT1_SWAIT_CLRMSK 0xFFFFFFFFFFFFBFFFULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT1_SWAIT_EN 0x0000000000004000ULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT1_MDISCREQ_SHIFT 13U
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT1_MDISCREQ_CLRMSK 0xFFFFFFFFFFFFDFFFULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT1_MDISCREQ_EN 0x0000000000002000ULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT1_MDISCACK_SHIFT 11U
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT1_MDISCACK_CLRMSK 0xFFFFFFFFFFFFE7FFULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT1_SCONNECT_SHIFT 10U
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT1_SCONNECT_CLRMSK 0xFFFFFFFFFFFFFBFFULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT1_SCONNECT_EN 0x0000000000000400ULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT1_MCONNECT_SHIFT 8U
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT1_MCONNECT_CLRMSK 0xFFFFFFFFFFFFFCFFULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT0_RESERVED_SHIFT 7U
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT0_RESERVED_CLRMSK 0xFFFFFFFFFFFFFF7FULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT0_RESERVED_EN 0x0000000000000080ULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT0_SWAIT_SHIFT 6U
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT0_SWAIT_CLRMSK 0xFFFFFFFFFFFFFFBFULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT0_SWAIT_EN 0x0000000000000040ULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT0_MDISCREQ_SHIFT 5U
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT0_MDISCREQ_CLRMSK 0xFFFFFFFFFFFFFFDFULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT0_MDISCREQ_EN 0x0000000000000020ULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT0_MDISCACK_SHIFT 3U
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT0_MDISCACK_CLRMSK 0xFFFFFFFFFFFFFFE7ULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT0_SCONNECT_SHIFT 2U
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT0_SCONNECT_CLRMSK 0xFFFFFFFFFFFFFFFBULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT0_SCONNECT_EN 0x0000000000000004ULL
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT0_MCONNECT_SHIFT 0U
+#define ROGUE_CR_OCP_DEBUG_STATUS_INIT0_MCONNECT_CLRMSK 0xFFFFFFFFFFFFFFFCULL
+
+#define ROGUE_CR_BIF_TRUST_DM_TYPE_PM_ALIST_SHIFT 6U
+#define ROGUE_CR_BIF_TRUST_DM_TYPE_PM_ALIST_CLRMSK 0xFFFFFFBFU
+#define ROGUE_CR_BIF_TRUST_DM_TYPE_PM_ALIST_EN 0x00000040U
+#define ROGUE_CR_BIF_TRUST_DM_TYPE_HOST_SHIFT 5U
+#define ROGUE_CR_BIF_TRUST_DM_TYPE_HOST_CLRMSK 0xFFFFFFDFU
+#define ROGUE_CR_BIF_TRUST_DM_TYPE_HOST_EN 0x00000020U
+#define ROGUE_CR_BIF_TRUST_DM_TYPE_META_SHIFT 4U
+#define ROGUE_CR_BIF_TRUST_DM_TYPE_META_CLRMSK 0xFFFFFFEFU
+#define ROGUE_CR_BIF_TRUST_DM_TYPE_META_EN 0x00000010U
+#define ROGUE_CR_BIF_TRUST_DM_TYPE_PB_ZLS_SHIFT 3U
+#define ROGUE_CR_BIF_TRUST_DM_TYPE_PB_ZLS_CLRMSK 0xFFFFFFF7U
+#define ROGUE_CR_BIF_TRUST_DM_TYPE_PB_ZLS_EN 0x00000008U
+#define ROGUE_CR_BIF_TRUST_DM_TYPE_PB_TE_SHIFT 2U
+#define ROGUE_CR_BIF_TRUST_DM_TYPE_PB_TE_CLRMSK 0xFFFFFFFBU
+#define ROGUE_CR_BIF_TRUST_DM_TYPE_PB_TE_EN 0x00000004U
+#define ROGUE_CR_BIF_TRUST_DM_TYPE_PB_VCE_SHIFT 1U
+#define ROGUE_CR_BIF_TRUST_DM_TYPE_PB_VCE_CLRMSK 0xFFFFFFFDU
+#define ROGUE_CR_BIF_TRUST_DM_TYPE_PB_VCE_EN 0x00000002U
+#define ROGUE_CR_BIF_TRUST_DM_TYPE_TLA_SHIFT 0U
+#define ROGUE_CR_BIF_TRUST_DM_TYPE_TLA_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_BIF_TRUST_DM_TYPE_TLA_EN 0x00000001U
+
+#define ROGUE_CR_BIF_TRUST_DM_MASK 0x0000007FU
+
+/* Register ROGUE_CR_BIF_TRUST */
+#define ROGUE_CR_BIF_TRUST 0xA000U
+#define ROGUE_CR_BIF_TRUST_MASKFULL 0x00000000001FFFFFULL
+#define ROGUE_CR_BIF_TRUST_OTHER_RAY_VERTEX_DM_TRUSTED_SHIFT 20U
+#define ROGUE_CR_BIF_TRUST_OTHER_RAY_VERTEX_DM_TRUSTED_CLRMSK 0xFFEFFFFFU
+#define ROGUE_CR_BIF_TRUST_OTHER_RAY_VERTEX_DM_TRUSTED_EN 0x00100000U
+#define ROGUE_CR_BIF_TRUST_MCU_RAY_VERTEX_DM_TRUSTED_SHIFT 19U
+#define ROGUE_CR_BIF_TRUST_MCU_RAY_VERTEX_DM_TRUSTED_CLRMSK 0xFFF7FFFFU
+#define ROGUE_CR_BIF_TRUST_MCU_RAY_VERTEX_DM_TRUSTED_EN 0x00080000U
+#define ROGUE_CR_BIF_TRUST_OTHER_RAY_DM_TRUSTED_SHIFT 18U
+#define ROGUE_CR_BIF_TRUST_OTHER_RAY_DM_TRUSTED_CLRMSK 0xFFFBFFFFU
+#define ROGUE_CR_BIF_TRUST_OTHER_RAY_DM_TRUSTED_EN 0x00040000U
+#define ROGUE_CR_BIF_TRUST_MCU_RAY_DM_TRUSTED_SHIFT 17U
+#define ROGUE_CR_BIF_TRUST_MCU_RAY_DM_TRUSTED_CLRMSK 0xFFFDFFFFU
+#define ROGUE_CR_BIF_TRUST_MCU_RAY_DM_TRUSTED_EN 0x00020000U
+#define ROGUE_CR_BIF_TRUST_ENABLE_SHIFT 16U
+#define ROGUE_CR_BIF_TRUST_ENABLE_CLRMSK 0xFFFEFFFFU
+#define ROGUE_CR_BIF_TRUST_ENABLE_EN 0x00010000U
+#define ROGUE_CR_BIF_TRUST_DM_TRUSTED_SHIFT 9U
+#define ROGUE_CR_BIF_TRUST_DM_TRUSTED_CLRMSK 0xFFFF01FFU
+#define ROGUE_CR_BIF_TRUST_OTHER_COMPUTE_DM_TRUSTED_SHIFT 8U
+#define ROGUE_CR_BIF_TRUST_OTHER_COMPUTE_DM_TRUSTED_CLRMSK 0xFFFFFEFFU
+#define ROGUE_CR_BIF_TRUST_OTHER_COMPUTE_DM_TRUSTED_EN 0x00000100U
+#define ROGUE_CR_BIF_TRUST_MCU_COMPUTE_DM_TRUSTED_SHIFT 7U
+#define ROGUE_CR_BIF_TRUST_MCU_COMPUTE_DM_TRUSTED_CLRMSK 0xFFFFFF7FU
+#define ROGUE_CR_BIF_TRUST_MCU_COMPUTE_DM_TRUSTED_EN 0x00000080U
+#define ROGUE_CR_BIF_TRUST_PBE_COMPUTE_DM_TRUSTED_SHIFT 6U
+#define ROGUE_CR_BIF_TRUST_PBE_COMPUTE_DM_TRUSTED_CLRMSK 0xFFFFFFBFU
+#define ROGUE_CR_BIF_TRUST_PBE_COMPUTE_DM_TRUSTED_EN 0x00000040U
+#define ROGUE_CR_BIF_TRUST_OTHER_PIXEL_DM_TRUSTED_SHIFT 5U
+#define ROGUE_CR_BIF_TRUST_OTHER_PIXEL_DM_TRUSTED_CLRMSK 0xFFFFFFDFU
+#define ROGUE_CR_BIF_TRUST_OTHER_PIXEL_DM_TRUSTED_EN 0x00000020U
+#define ROGUE_CR_BIF_TRUST_MCU_PIXEL_DM_TRUSTED_SHIFT 4U
+#define ROGUE_CR_BIF_TRUST_MCU_PIXEL_DM_TRUSTED_CLRMSK 0xFFFFFFEFU
+#define ROGUE_CR_BIF_TRUST_MCU_PIXEL_DM_TRUSTED_EN 0x00000010U
+#define ROGUE_CR_BIF_TRUST_PBE_PIXEL_DM_TRUSTED_SHIFT 3U
+#define ROGUE_CR_BIF_TRUST_PBE_PIXEL_DM_TRUSTED_CLRMSK 0xFFFFFFF7U
+#define ROGUE_CR_BIF_TRUST_PBE_PIXEL_DM_TRUSTED_EN 0x00000008U
+#define ROGUE_CR_BIF_TRUST_OTHER_VERTEX_DM_TRUSTED_SHIFT 2U
+#define ROGUE_CR_BIF_TRUST_OTHER_VERTEX_DM_TRUSTED_CLRMSK 0xFFFFFFFBU
+#define ROGUE_CR_BIF_TRUST_OTHER_VERTEX_DM_TRUSTED_EN 0x00000004U
+#define ROGUE_CR_BIF_TRUST_MCU_VERTEX_DM_TRUSTED_SHIFT 1U
+#define ROGUE_CR_BIF_TRUST_MCU_VERTEX_DM_TRUSTED_CLRMSK 0xFFFFFFFDU
+#define ROGUE_CR_BIF_TRUST_MCU_VERTEX_DM_TRUSTED_EN 0x00000002U
+#define ROGUE_CR_BIF_TRUST_PBE_VERTEX_DM_TRUSTED_SHIFT 0U
+#define ROGUE_CR_BIF_TRUST_PBE_VERTEX_DM_TRUSTED_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_BIF_TRUST_PBE_VERTEX_DM_TRUSTED_EN 0x00000001U
+
+/* Register ROGUE_CR_SYS_BUS_SECURE */
+#define ROGUE_CR_SYS_BUS_SECURE 0xA100U
+#define ROGUE_CR_SYS_BUS_SECURE__SECR__MASKFULL 0x0000000000000001ULL
+#define ROGUE_CR_SYS_BUS_SECURE_MASKFULL 0x0000000000000001ULL
+#define ROGUE_CR_SYS_BUS_SECURE_ENABLE_SHIFT 0U
+#define ROGUE_CR_SYS_BUS_SECURE_ENABLE_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_SYS_BUS_SECURE_ENABLE_EN 0x00000001U
+
+/* Register ROGUE_CR_FBA_FC0_CHECKSUM */
+#define ROGUE_CR_FBA_FC0_CHECKSUM 0xD170U
+#define ROGUE_CR_FBA_FC0_CHECKSUM_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_FBA_FC0_CHECKSUM_VALUE_SHIFT 0U
+#define ROGUE_CR_FBA_FC0_CHECKSUM_VALUE_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_FBA_FC1_CHECKSUM */
+#define ROGUE_CR_FBA_FC1_CHECKSUM 0xD178U
+#define ROGUE_CR_FBA_FC1_CHECKSUM_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_FBA_FC1_CHECKSUM_VALUE_SHIFT 0U
+#define ROGUE_CR_FBA_FC1_CHECKSUM_VALUE_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_FBA_FC2_CHECKSUM */
+#define ROGUE_CR_FBA_FC2_CHECKSUM 0xD180U
+#define ROGUE_CR_FBA_FC2_CHECKSUM_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_FBA_FC2_CHECKSUM_VALUE_SHIFT 0U
+#define ROGUE_CR_FBA_FC2_CHECKSUM_VALUE_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_FBA_FC3_CHECKSUM */
+#define ROGUE_CR_FBA_FC3_CHECKSUM 0xD188U
+#define ROGUE_CR_FBA_FC3_CHECKSUM_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_FBA_FC3_CHECKSUM_VALUE_SHIFT 0U
+#define ROGUE_CR_FBA_FC3_CHECKSUM_VALUE_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_CLK_CTRL2 */
+#define ROGUE_CR_CLK_CTRL2 0xD200U
+#define ROGUE_CR_CLK_CTRL2_MASKFULL 0x0000000000000F33ULL
+#define ROGUE_CR_CLK_CTRL2_MCU_FBTC_SHIFT 10U
+#define ROGUE_CR_CLK_CTRL2_MCU_FBTC_CLRMSK 0xFFFFFFFFFFFFF3FFULL
+#define ROGUE_CR_CLK_CTRL2_MCU_FBTC_OFF 0x0000000000000000ULL
+#define ROGUE_CR_CLK_CTRL2_MCU_FBTC_ON 0x0000000000000400ULL
+#define ROGUE_CR_CLK_CTRL2_MCU_FBTC_AUTO 0x0000000000000800ULL
+#define ROGUE_CR_CLK_CTRL2_VRDM_SHIFT 8U
+#define ROGUE_CR_CLK_CTRL2_VRDM_CLRMSK 0xFFFFFFFFFFFFFCFFULL
+#define ROGUE_CR_CLK_CTRL2_VRDM_OFF 0x0000000000000000ULL
+#define ROGUE_CR_CLK_CTRL2_VRDM_ON 0x0000000000000100ULL
+#define ROGUE_CR_CLK_CTRL2_VRDM_AUTO 0x0000000000000200ULL
+#define ROGUE_CR_CLK_CTRL2_SH_SHIFT 4U
+#define ROGUE_CR_CLK_CTRL2_SH_CLRMSK 0xFFFFFFFFFFFFFFCFULL
+#define ROGUE_CR_CLK_CTRL2_SH_OFF 0x0000000000000000ULL
+#define ROGUE_CR_CLK_CTRL2_SH_ON 0x0000000000000010ULL
+#define ROGUE_CR_CLK_CTRL2_SH_AUTO 0x0000000000000020ULL
+#define ROGUE_CR_CLK_CTRL2_FBA_SHIFT 0U
+#define ROGUE_CR_CLK_CTRL2_FBA_CLRMSK 0xFFFFFFFFFFFFFFFCULL
+#define ROGUE_CR_CLK_CTRL2_FBA_OFF 0x0000000000000000ULL
+#define ROGUE_CR_CLK_CTRL2_FBA_ON 0x0000000000000001ULL
+#define ROGUE_CR_CLK_CTRL2_FBA_AUTO 0x0000000000000002ULL
+
+/* Register ROGUE_CR_CLK_STATUS2 */
+#define ROGUE_CR_CLK_STATUS2 0xD208U
+#define ROGUE_CR_CLK_STATUS2_MASKFULL 0x0000000000000015ULL
+#define ROGUE_CR_CLK_STATUS2_VRDM_SHIFT 4U
+#define ROGUE_CR_CLK_STATUS2_VRDM_CLRMSK 0xFFFFFFFFFFFFFFEFULL
+#define ROGUE_CR_CLK_STATUS2_VRDM_GATED 0x0000000000000000ULL
+#define ROGUE_CR_CLK_STATUS2_VRDM_RUNNING 0x0000000000000010ULL
+#define ROGUE_CR_CLK_STATUS2_SH_SHIFT 2U
+#define ROGUE_CR_CLK_STATUS2_SH_CLRMSK 0xFFFFFFFFFFFFFFFBULL
+#define ROGUE_CR_CLK_STATUS2_SH_GATED 0x0000000000000000ULL
+#define ROGUE_CR_CLK_STATUS2_SH_RUNNING 0x0000000000000004ULL
+#define ROGUE_CR_CLK_STATUS2_FBA_SHIFT 0U
+#define ROGUE_CR_CLK_STATUS2_FBA_CLRMSK 0xFFFFFFFFFFFFFFFEULL
+#define ROGUE_CR_CLK_STATUS2_FBA_GATED 0x0000000000000000ULL
+#define ROGUE_CR_CLK_STATUS2_FBA_RUNNING 0x0000000000000001ULL
+
+/* Register ROGUE_CR_RPM_SHF_FPL */
+#define ROGUE_CR_RPM_SHF_FPL 0xD520U
+#define ROGUE_CR_RPM_SHF_FPL_MASKFULL 0x3FFFFFFFFFFFFFFCULL
+#define ROGUE_CR_RPM_SHF_FPL_SIZE_SHIFT 40U
+#define ROGUE_CR_RPM_SHF_FPL_SIZE_CLRMSK 0xC00000FFFFFFFFFFULL
+#define ROGUE_CR_RPM_SHF_FPL_BASE_SHIFT 2U
+#define ROGUE_CR_RPM_SHF_FPL_BASE_CLRMSK 0xFFFFFF0000000003ULL
+#define ROGUE_CR_RPM_SHF_FPL_BASE_ALIGNSHIFT 2U
+#define ROGUE_CR_RPM_SHF_FPL_BASE_ALIGNSIZE 4U
+
+/* Register ROGUE_CR_RPM_SHF_FPL_READ */
+#define ROGUE_CR_RPM_SHF_FPL_READ 0xD528U
+#define ROGUE_CR_RPM_SHF_FPL_READ_MASKFULL 0x00000000007FFFFFULL
+#define ROGUE_CR_RPM_SHF_FPL_READ_TOGGLE_SHIFT 22U
+#define ROGUE_CR_RPM_SHF_FPL_READ_TOGGLE_CLRMSK 0xFFBFFFFFU
+#define ROGUE_CR_RPM_SHF_FPL_READ_TOGGLE_EN 0x00400000U
+#define ROGUE_CR_RPM_SHF_FPL_READ_OFFSET_SHIFT 0U
+#define ROGUE_CR_RPM_SHF_FPL_READ_OFFSET_CLRMSK 0xFFC00000U
+
+/* Register ROGUE_CR_RPM_SHF_FPL_WRITE */
+#define ROGUE_CR_RPM_SHF_FPL_WRITE 0xD530U
+#define ROGUE_CR_RPM_SHF_FPL_WRITE_MASKFULL 0x00000000007FFFFFULL
+#define ROGUE_CR_RPM_SHF_FPL_WRITE_TOGGLE_SHIFT 22U
+#define ROGUE_CR_RPM_SHF_FPL_WRITE_TOGGLE_CLRMSK 0xFFBFFFFFU
+#define ROGUE_CR_RPM_SHF_FPL_WRITE_TOGGLE_EN 0x00400000U
+#define ROGUE_CR_RPM_SHF_FPL_WRITE_OFFSET_SHIFT 0U
+#define ROGUE_CR_RPM_SHF_FPL_WRITE_OFFSET_CLRMSK 0xFFC00000U
+
+/* Register ROGUE_CR_RPM_SHG_FPL */
+#define ROGUE_CR_RPM_SHG_FPL 0xD538U
+#define ROGUE_CR_RPM_SHG_FPL_MASKFULL 0x3FFFFFFFFFFFFFFCULL
+#define ROGUE_CR_RPM_SHG_FPL_SIZE_SHIFT 40U
+#define ROGUE_CR_RPM_SHG_FPL_SIZE_CLRMSK 0xC00000FFFFFFFFFFULL
+#define ROGUE_CR_RPM_SHG_FPL_BASE_SHIFT 2U
+#define ROGUE_CR_RPM_SHG_FPL_BASE_CLRMSK 0xFFFFFF0000000003ULL
+#define ROGUE_CR_RPM_SHG_FPL_BASE_ALIGNSHIFT 2U
+#define ROGUE_CR_RPM_SHG_FPL_BASE_ALIGNSIZE 4U
+
+/* Register ROGUE_CR_RPM_SHG_FPL_READ */
+#define ROGUE_CR_RPM_SHG_FPL_READ 0xD540U
+#define ROGUE_CR_RPM_SHG_FPL_READ_MASKFULL 0x00000000007FFFFFULL
+#define ROGUE_CR_RPM_SHG_FPL_READ_TOGGLE_SHIFT 22U
+#define ROGUE_CR_RPM_SHG_FPL_READ_TOGGLE_CLRMSK 0xFFBFFFFFU
+#define ROGUE_CR_RPM_SHG_FPL_READ_TOGGLE_EN 0x00400000U
+#define ROGUE_CR_RPM_SHG_FPL_READ_OFFSET_SHIFT 0U
+#define ROGUE_CR_RPM_SHG_FPL_READ_OFFSET_CLRMSK 0xFFC00000U
+
+/* Register ROGUE_CR_RPM_SHG_FPL_WRITE */
+#define ROGUE_CR_RPM_SHG_FPL_WRITE 0xD548U
+#define ROGUE_CR_RPM_SHG_FPL_WRITE_MASKFULL 0x00000000007FFFFFULL
+#define ROGUE_CR_RPM_SHG_FPL_WRITE_TOGGLE_SHIFT 22U
+#define ROGUE_CR_RPM_SHG_FPL_WRITE_TOGGLE_CLRMSK 0xFFBFFFFFU
+#define ROGUE_CR_RPM_SHG_FPL_WRITE_TOGGLE_EN 0x00400000U
+#define ROGUE_CR_RPM_SHG_FPL_WRITE_OFFSET_SHIFT 0U
+#define ROGUE_CR_RPM_SHG_FPL_WRITE_OFFSET_CLRMSK 0xFFC00000U
+
+/* Register ROGUE_CR_SH_PERF */
+#define ROGUE_CR_SH_PERF 0xD5F8U
+#define ROGUE_CR_SH_PERF_MASKFULL 0x000000000000001FULL
+#define ROGUE_CR_SH_PERF_CLR_3_SHIFT 4U
+#define ROGUE_CR_SH_PERF_CLR_3_CLRMSK 0xFFFFFFEFU
+#define ROGUE_CR_SH_PERF_CLR_3_EN 0x00000010U
+#define ROGUE_CR_SH_PERF_CLR_2_SHIFT 3U
+#define ROGUE_CR_SH_PERF_CLR_2_CLRMSK 0xFFFFFFF7U
+#define ROGUE_CR_SH_PERF_CLR_2_EN 0x00000008U
+#define ROGUE_CR_SH_PERF_CLR_1_SHIFT 2U
+#define ROGUE_CR_SH_PERF_CLR_1_CLRMSK 0xFFFFFFFBU
+#define ROGUE_CR_SH_PERF_CLR_1_EN 0x00000004U
+#define ROGUE_CR_SH_PERF_CLR_0_SHIFT 1U
+#define ROGUE_CR_SH_PERF_CLR_0_CLRMSK 0xFFFFFFFDU
+#define ROGUE_CR_SH_PERF_CLR_0_EN 0x00000002U
+#define ROGUE_CR_SH_PERF_CTRL_ENABLE_SHIFT 0U
+#define ROGUE_CR_SH_PERF_CTRL_ENABLE_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_SH_PERF_CTRL_ENABLE_EN 0x00000001U
+
+/* Register ROGUE_CR_SH_PERF_SELECT0 */
+#define ROGUE_CR_SH_PERF_SELECT0 0xD600U
+#define ROGUE_CR_SH_PERF_SELECT0_MASKFULL 0x3FFF3FFF003FFFFFULL
+#define ROGUE_CR_SH_PERF_SELECT0_BATCH_MAX_SHIFT 48U
+#define ROGUE_CR_SH_PERF_SELECT0_BATCH_MAX_CLRMSK 0xC000FFFFFFFFFFFFULL
+#define ROGUE_CR_SH_PERF_SELECT0_BATCH_MIN_SHIFT 32U
+#define ROGUE_CR_SH_PERF_SELECT0_BATCH_MIN_CLRMSK 0xFFFFC000FFFFFFFFULL
+#define ROGUE_CR_SH_PERF_SELECT0_MODE_SHIFT 21U
+#define ROGUE_CR_SH_PERF_SELECT0_MODE_CLRMSK 0xFFFFFFFFFFDFFFFFULL
+#define ROGUE_CR_SH_PERF_SELECT0_MODE_EN 0x0000000000200000ULL
+#define ROGUE_CR_SH_PERF_SELECT0_GROUP_SELECT_SHIFT 16U
+#define ROGUE_CR_SH_PERF_SELECT0_GROUP_SELECT_CLRMSK 0xFFFFFFFFFFE0FFFFULL
+#define ROGUE_CR_SH_PERF_SELECT0_BIT_SELECT_SHIFT 0U
+#define ROGUE_CR_SH_PERF_SELECT0_BIT_SELECT_CLRMSK 0xFFFFFFFFFFFF0000ULL
+
+/* Register ROGUE_CR_SH_PERF_COUNTER_0 */
+#define ROGUE_CR_SH_PERF_COUNTER_0 0xD628U
+#define ROGUE_CR_SH_PERF_COUNTER_0_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_SH_PERF_COUNTER_0_REG_SHIFT 0U
+#define ROGUE_CR_SH_PERF_COUNTER_0_REG_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_SHF_SHG_CHECKSUM */
+#define ROGUE_CR_SHF_SHG_CHECKSUM 0xD1C0U
+#define ROGUE_CR_SHF_SHG_CHECKSUM_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_SHF_SHG_CHECKSUM_VALUE_SHIFT 0U
+#define ROGUE_CR_SHF_SHG_CHECKSUM_VALUE_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_SHF_VERTEX_BIF_CHECKSUM */
+#define ROGUE_CR_SHF_VERTEX_BIF_CHECKSUM 0xD1C8U
+#define ROGUE_CR_SHF_VERTEX_BIF_CHECKSUM_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_SHF_VERTEX_BIF_CHECKSUM_VALUE_SHIFT 0U
+#define ROGUE_CR_SHF_VERTEX_BIF_CHECKSUM_VALUE_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_SHF_VARY_BIF_CHECKSUM */
+#define ROGUE_CR_SHF_VARY_BIF_CHECKSUM 0xD1D0U
+#define ROGUE_CR_SHF_VARY_BIF_CHECKSUM_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_SHF_VARY_BIF_CHECKSUM_VALUE_SHIFT 0U
+#define ROGUE_CR_SHF_VARY_BIF_CHECKSUM_VALUE_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_RPM_BIF_CHECKSUM */
+#define ROGUE_CR_RPM_BIF_CHECKSUM 0xD1D8U
+#define ROGUE_CR_RPM_BIF_CHECKSUM_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_RPM_BIF_CHECKSUM_VALUE_SHIFT 0U
+#define ROGUE_CR_RPM_BIF_CHECKSUM_VALUE_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_SHG_BIF_CHECKSUM */
+#define ROGUE_CR_SHG_BIF_CHECKSUM 0xD1E0U
+#define ROGUE_CR_SHG_BIF_CHECKSUM_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_SHG_BIF_CHECKSUM_VALUE_SHIFT 0U
+#define ROGUE_CR_SHG_BIF_CHECKSUM_VALUE_CLRMSK 0x00000000U
+
+/* Register ROGUE_CR_SHG_FE_BE_CHECKSUM */
+#define ROGUE_CR_SHG_FE_BE_CHECKSUM 0xD1E8U
+#define ROGUE_CR_SHG_FE_BE_CHECKSUM_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_SHG_FE_BE_CHECKSUM_VALUE_SHIFT 0U
+#define ROGUE_CR_SHG_FE_BE_CHECKSUM_VALUE_CLRMSK 0x00000000U
+
+/* Register DPX_CR_BF_PERF */
+#define DPX_CR_BF_PERF 0xC458U
+#define DPX_CR_BF_PERF_MASKFULL 0x000000000000001FULL
+#define DPX_CR_BF_PERF_CLR_3_SHIFT 4U
+#define DPX_CR_BF_PERF_CLR_3_CLRMSK 0xFFFFFFEFU
+#define DPX_CR_BF_PERF_CLR_3_EN 0x00000010U
+#define DPX_CR_BF_PERF_CLR_2_SHIFT 3U
+#define DPX_CR_BF_PERF_CLR_2_CLRMSK 0xFFFFFFF7U
+#define DPX_CR_BF_PERF_CLR_2_EN 0x00000008U
+#define DPX_CR_BF_PERF_CLR_1_SHIFT 2U
+#define DPX_CR_BF_PERF_CLR_1_CLRMSK 0xFFFFFFFBU
+#define DPX_CR_BF_PERF_CLR_1_EN 0x00000004U
+#define DPX_CR_BF_PERF_CLR_0_SHIFT 1U
+#define DPX_CR_BF_PERF_CLR_0_CLRMSK 0xFFFFFFFDU
+#define DPX_CR_BF_PERF_CLR_0_EN 0x00000002U
+#define DPX_CR_BF_PERF_CTRL_ENABLE_SHIFT 0U
+#define DPX_CR_BF_PERF_CTRL_ENABLE_CLRMSK 0xFFFFFFFEU
+#define DPX_CR_BF_PERF_CTRL_ENABLE_EN 0x00000001U
+
+/* Register DPX_CR_BF_PERF_SELECT0 */
+#define DPX_CR_BF_PERF_SELECT0 0xC460U
+#define DPX_CR_BF_PERF_SELECT0_MASKFULL 0x3FFF3FFF003FFFFFULL
+#define DPX_CR_BF_PERF_SELECT0_BATCH_MAX_SHIFT 48U
+#define DPX_CR_BF_PERF_SELECT0_BATCH_MAX_CLRMSK 0xC000FFFFFFFFFFFFULL
+#define DPX_CR_BF_PERF_SELECT0_BATCH_MIN_SHIFT 32U
+#define DPX_CR_BF_PERF_SELECT0_BATCH_MIN_CLRMSK 0xFFFFC000FFFFFFFFULL
+#define DPX_CR_BF_PERF_SELECT0_MODE_SHIFT 21U
+#define DPX_CR_BF_PERF_SELECT0_MODE_CLRMSK 0xFFFFFFFFFFDFFFFFULL
+#define DPX_CR_BF_PERF_SELECT0_MODE_EN 0x0000000000200000ULL
+#define DPX_CR_BF_PERF_SELECT0_GROUP_SELECT_SHIFT 16U
+#define DPX_CR_BF_PERF_SELECT0_GROUP_SELECT_CLRMSK 0xFFFFFFFFFFE0FFFFULL
+#define DPX_CR_BF_PERF_SELECT0_BIT_SELECT_SHIFT 0U
+#define DPX_CR_BF_PERF_SELECT0_BIT_SELECT_CLRMSK 0xFFFFFFFFFFFF0000ULL
+
+/* Register DPX_CR_BF_PERF_COUNTER_0 */
+#define DPX_CR_BF_PERF_COUNTER_0 0xC488U
+#define DPX_CR_BF_PERF_COUNTER_0_MASKFULL 0x00000000FFFFFFFFULL
+#define DPX_CR_BF_PERF_COUNTER_0_REG_SHIFT 0U
+#define DPX_CR_BF_PERF_COUNTER_0_REG_CLRMSK 0x00000000U
+
+/* Register DPX_CR_BT_PERF */
+#define DPX_CR_BT_PERF 0xC3D0U
+#define DPX_CR_BT_PERF_MASKFULL 0x000000000000001FULL
+#define DPX_CR_BT_PERF_CLR_3_SHIFT 4U
+#define DPX_CR_BT_PERF_CLR_3_CLRMSK 0xFFFFFFEFU
+#define DPX_CR_BT_PERF_CLR_3_EN 0x00000010U
+#define DPX_CR_BT_PERF_CLR_2_SHIFT 3U
+#define DPX_CR_BT_PERF_CLR_2_CLRMSK 0xFFFFFFF7U
+#define DPX_CR_BT_PERF_CLR_2_EN 0x00000008U
+#define DPX_CR_BT_PERF_CLR_1_SHIFT 2U
+#define DPX_CR_BT_PERF_CLR_1_CLRMSK 0xFFFFFFFBU
+#define DPX_CR_BT_PERF_CLR_1_EN 0x00000004U
+#define DPX_CR_BT_PERF_CLR_0_SHIFT 1U
+#define DPX_CR_BT_PERF_CLR_0_CLRMSK 0xFFFFFFFDU
+#define DPX_CR_BT_PERF_CLR_0_EN 0x00000002U
+#define DPX_CR_BT_PERF_CTRL_ENABLE_SHIFT 0U
+#define DPX_CR_BT_PERF_CTRL_ENABLE_CLRMSK 0xFFFFFFFEU
+#define DPX_CR_BT_PERF_CTRL_ENABLE_EN 0x00000001U
+
+/* Register DPX_CR_BT_PERF_SELECT0 */
+#define DPX_CR_BT_PERF_SELECT0 0xC3D8U
+#define DPX_CR_BT_PERF_SELECT0_MASKFULL 0x3FFF3FFF003FFFFFULL
+#define DPX_CR_BT_PERF_SELECT0_BATCH_MAX_SHIFT 48U
+#define DPX_CR_BT_PERF_SELECT0_BATCH_MAX_CLRMSK 0xC000FFFFFFFFFFFFULL
+#define DPX_CR_BT_PERF_SELECT0_BATCH_MIN_SHIFT 32U
+#define DPX_CR_BT_PERF_SELECT0_BATCH_MIN_CLRMSK 0xFFFFC000FFFFFFFFULL
+#define DPX_CR_BT_PERF_SELECT0_MODE_SHIFT 21U
+#define DPX_CR_BT_PERF_SELECT0_MODE_CLRMSK 0xFFFFFFFFFFDFFFFFULL
+#define DPX_CR_BT_PERF_SELECT0_MODE_EN 0x0000000000200000ULL
+#define DPX_CR_BT_PERF_SELECT0_GROUP_SELECT_SHIFT 16U
+#define DPX_CR_BT_PERF_SELECT0_GROUP_SELECT_CLRMSK 0xFFFFFFFFFFE0FFFFULL
+#define DPX_CR_BT_PERF_SELECT0_BIT_SELECT_SHIFT 0U
+#define DPX_CR_BT_PERF_SELECT0_BIT_SELECT_CLRMSK 0xFFFFFFFFFFFF0000ULL
+
+/* Register DPX_CR_BT_PERF_COUNTER_0 */
+#define DPX_CR_BT_PERF_COUNTER_0 0xC420U
+#define DPX_CR_BT_PERF_COUNTER_0_MASKFULL 0x00000000FFFFFFFFULL
+#define DPX_CR_BT_PERF_COUNTER_0_REG_SHIFT 0U
+#define DPX_CR_BT_PERF_COUNTER_0_REG_CLRMSK 0x00000000U
+
+/* Register DPX_CR_RQ_USC_DEBUG */
+#define DPX_CR_RQ_USC_DEBUG 0xC110U
+#define DPX_CR_RQ_USC_DEBUG_MASKFULL 0x00000000FFFFFFFFULL
+#define DPX_CR_RQ_USC_DEBUG_CHECKSUM_SHIFT 0U
+#define DPX_CR_RQ_USC_DEBUG_CHECKSUM_CLRMSK 0xFFFFFFFF00000000ULL
+
+/* Register DPX_CR_BIF_FAULT_BANK_MMU_STATUS */
+#define DPX_CR_BIF_FAULT_BANK_MMU_STATUS 0xC5C8U
+#define DPX_CR_BIF_FAULT_BANK_MMU_STATUS_MASKFULL 0x000000000000F775ULL
+#define DPX_CR_BIF_FAULT_BANK_MMU_STATUS_CAT_BASE_SHIFT 12U
+#define DPX_CR_BIF_FAULT_BANK_MMU_STATUS_CAT_BASE_CLRMSK 0xFFFF0FFFU
+#define DPX_CR_BIF_FAULT_BANK_MMU_STATUS_PAGE_SIZE_SHIFT 8U
+#define DPX_CR_BIF_FAULT_BANK_MMU_STATUS_PAGE_SIZE_CLRMSK 0xFFFFF8FFU
+#define DPX_CR_BIF_FAULT_BANK_MMU_STATUS_DATA_TYPE_SHIFT 5U
+#define DPX_CR_BIF_FAULT_BANK_MMU_STATUS_DATA_TYPE_CLRMSK 0xFFFFFF9FU
+#define DPX_CR_BIF_FAULT_BANK_MMU_STATUS_FAULT_RO_SHIFT 4U
+#define DPX_CR_BIF_FAULT_BANK_MMU_STATUS_FAULT_RO_CLRMSK 0xFFFFFFEFU
+#define DPX_CR_BIF_FAULT_BANK_MMU_STATUS_FAULT_RO_EN 0x00000010U
+#define DPX_CR_BIF_FAULT_BANK_MMU_STATUS_FAULT_PM_META_RO_SHIFT 2U
+#define DPX_CR_BIF_FAULT_BANK_MMU_STATUS_FAULT_PM_META_RO_CLRMSK 0xFFFFFFFBU
+#define DPX_CR_BIF_FAULT_BANK_MMU_STATUS_FAULT_PM_META_RO_EN 0x00000004U
+#define DPX_CR_BIF_FAULT_BANK_MMU_STATUS_FAULT_SHIFT 0U
+#define DPX_CR_BIF_FAULT_BANK_MMU_STATUS_FAULT_CLRMSK 0xFFFFFFFEU
+#define DPX_CR_BIF_FAULT_BANK_MMU_STATUS_FAULT_EN 0x00000001U
+
+/* Register DPX_CR_BIF_FAULT_BANK_REQ_STATUS */
+#define DPX_CR_BIF_FAULT_BANK_REQ_STATUS 0xC5D0U
+#define DPX_CR_BIF_FAULT_BANK_REQ_STATUS_MASKFULL 0x03FFFFFFFFFFFFF0ULL
+#define DPX_CR_BIF_FAULT_BANK_REQ_STATUS_RNW_SHIFT 57U
+#define DPX_CR_BIF_FAULT_BANK_REQ_STATUS_RNW_CLRMSK 0xFDFFFFFFFFFFFFFFULL
+#define DPX_CR_BIF_FAULT_BANK_REQ_STATUS_RNW_EN 0x0200000000000000ULL
+#define DPX_CR_BIF_FAULT_BANK_REQ_STATUS_TAG_SB_SHIFT 44U
+#define DPX_CR_BIF_FAULT_BANK_REQ_STATUS_TAG_SB_CLRMSK 0xFE000FFFFFFFFFFFULL
+#define DPX_CR_BIF_FAULT_BANK_REQ_STATUS_TAG_ID_SHIFT 40U
+#define DPX_CR_BIF_FAULT_BANK_REQ_STATUS_TAG_ID_CLRMSK 0xFFFFF0FFFFFFFFFFULL
+#define DPX_CR_BIF_FAULT_BANK_REQ_STATUS_ADDRESS_SHIFT 4U
+#define DPX_CR_BIF_FAULT_BANK_REQ_STATUS_ADDRESS_CLRMSK 0xFFFFFF000000000FULL
+#define DPX_CR_BIF_FAULT_BANK_REQ_STATUS_ADDRESS_ALIGNSHIFT 4U
+#define DPX_CR_BIF_FAULT_BANK_REQ_STATUS_ADDRESS_ALIGNSIZE 16U
+
+/* Register DPX_CR_BIF_MMU_STATUS */
+#define DPX_CR_BIF_MMU_STATUS 0xC5D8U
+#define DPX_CR_BIF_MMU_STATUS_MASKFULL 0x000000000FFFFFF7ULL
+#define DPX_CR_BIF_MMU_STATUS_PC_DATA_SHIFT 20U
+#define DPX_CR_BIF_MMU_STATUS_PC_DATA_CLRMSK 0xF00FFFFFU
+#define DPX_CR_BIF_MMU_STATUS_PD_DATA_SHIFT 12U
+#define DPX_CR_BIF_MMU_STATUS_PD_DATA_CLRMSK 0xFFF00FFFU
+#define DPX_CR_BIF_MMU_STATUS_PT_DATA_SHIFT 4U
+#define DPX_CR_BIF_MMU_STATUS_PT_DATA_CLRMSK 0xFFFFF00FU
+#define DPX_CR_BIF_MMU_STATUS_STALLED_SHIFT 2U
+#define DPX_CR_BIF_MMU_STATUS_STALLED_CLRMSK 0xFFFFFFFBU
+#define DPX_CR_BIF_MMU_STATUS_STALLED_EN 0x00000004U
+#define DPX_CR_BIF_MMU_STATUS_PAUSED_SHIFT 1U
+#define DPX_CR_BIF_MMU_STATUS_PAUSED_CLRMSK 0xFFFFFFFDU
+#define DPX_CR_BIF_MMU_STATUS_PAUSED_EN 0x00000002U
+#define DPX_CR_BIF_MMU_STATUS_BUSY_SHIFT 0U
+#define DPX_CR_BIF_MMU_STATUS_BUSY_CLRMSK 0xFFFFFFFEU
+#define DPX_CR_BIF_MMU_STATUS_BUSY_EN 0x00000001U
+
+/* Register DPX_CR_RT_PERF */
+#define DPX_CR_RT_PERF 0xC700U
+#define DPX_CR_RT_PERF_MASKFULL 0x000000000000001FULL
+#define DPX_CR_RT_PERF_CLR_3_SHIFT 4U
+#define DPX_CR_RT_PERF_CLR_3_CLRMSK 0xFFFFFFEFU
+#define DPX_CR_RT_PERF_CLR_3_EN 0x00000010U
+#define DPX_CR_RT_PERF_CLR_2_SHIFT 3U
+#define DPX_CR_RT_PERF_CLR_2_CLRMSK 0xFFFFFFF7U
+#define DPX_CR_RT_PERF_CLR_2_EN 0x00000008U
+#define DPX_CR_RT_PERF_CLR_1_SHIFT 2U
+#define DPX_CR_RT_PERF_CLR_1_CLRMSK 0xFFFFFFFBU
+#define DPX_CR_RT_PERF_CLR_1_EN 0x00000004U
+#define DPX_CR_RT_PERF_CLR_0_SHIFT 1U
+#define DPX_CR_RT_PERF_CLR_0_CLRMSK 0xFFFFFFFDU
+#define DPX_CR_RT_PERF_CLR_0_EN 0x00000002U
+#define DPX_CR_RT_PERF_CTRL_ENABLE_SHIFT 0U
+#define DPX_CR_RT_PERF_CTRL_ENABLE_CLRMSK 0xFFFFFFFEU
+#define DPX_CR_RT_PERF_CTRL_ENABLE_EN 0x00000001U
+
+/* Register DPX_CR_RT_PERF_SELECT0 */
+#define DPX_CR_RT_PERF_SELECT0 0xC708U
+#define DPX_CR_RT_PERF_SELECT0_MASKFULL 0x3FFF3FFF003FFFFFULL
+#define DPX_CR_RT_PERF_SELECT0_BATCH_MAX_SHIFT 48U
+#define DPX_CR_RT_PERF_SELECT0_BATCH_MAX_CLRMSK 0xC000FFFFFFFFFFFFULL
+#define DPX_CR_RT_PERF_SELECT0_BATCH_MIN_SHIFT 32U
+#define DPX_CR_RT_PERF_SELECT0_BATCH_MIN_CLRMSK 0xFFFFC000FFFFFFFFULL
+#define DPX_CR_RT_PERF_SELECT0_MODE_SHIFT 21U
+#define DPX_CR_RT_PERF_SELECT0_MODE_CLRMSK 0xFFFFFFFFFFDFFFFFULL
+#define DPX_CR_RT_PERF_SELECT0_MODE_EN 0x0000000000200000ULL
+#define DPX_CR_RT_PERF_SELECT0_GROUP_SELECT_SHIFT 16U
+#define DPX_CR_RT_PERF_SELECT0_GROUP_SELECT_CLRMSK 0xFFFFFFFFFFE0FFFFULL
+#define DPX_CR_RT_PERF_SELECT0_BIT_SELECT_SHIFT 0U
+#define DPX_CR_RT_PERF_SELECT0_BIT_SELECT_CLRMSK 0xFFFFFFFFFFFF0000ULL
+
+/* Register DPX_CR_RT_PERF_COUNTER_0 */
+#define DPX_CR_RT_PERF_COUNTER_0 0xC730U
+#define DPX_CR_RT_PERF_COUNTER_0_MASKFULL 0x00000000FFFFFFFFULL
+#define DPX_CR_RT_PERF_COUNTER_0_REG_SHIFT 0U
+#define DPX_CR_RT_PERF_COUNTER_0_REG_CLRMSK 0x00000000U
+
+/* Register DPX_CR_BX_TU_PERF */
+#define DPX_CR_BX_TU_PERF 0xC908U
+#define DPX_CR_BX_TU_PERF_MASKFULL 0x000000000000001FULL
+#define DPX_CR_BX_TU_PERF_CLR_3_SHIFT 4U
+#define DPX_CR_BX_TU_PERF_CLR_3_CLRMSK 0xFFFFFFEFU
+#define DPX_CR_BX_TU_PERF_CLR_3_EN 0x00000010U
+#define DPX_CR_BX_TU_PERF_CLR_2_SHIFT 3U
+#define DPX_CR_BX_TU_PERF_CLR_2_CLRMSK 0xFFFFFFF7U
+#define DPX_CR_BX_TU_PERF_CLR_2_EN 0x00000008U
+#define DPX_CR_BX_TU_PERF_CLR_1_SHIFT 2U
+#define DPX_CR_BX_TU_PERF_CLR_1_CLRMSK 0xFFFFFFFBU
+#define DPX_CR_BX_TU_PERF_CLR_1_EN 0x00000004U
+#define DPX_CR_BX_TU_PERF_CLR_0_SHIFT 1U
+#define DPX_CR_BX_TU_PERF_CLR_0_CLRMSK 0xFFFFFFFDU
+#define DPX_CR_BX_TU_PERF_CLR_0_EN 0x00000002U
+#define DPX_CR_BX_TU_PERF_CTRL_ENABLE_SHIFT 0U
+#define DPX_CR_BX_TU_PERF_CTRL_ENABLE_CLRMSK 0xFFFFFFFEU
+#define DPX_CR_BX_TU_PERF_CTRL_ENABLE_EN 0x00000001U
+
+/* Register DPX_CR_BX_TU_PERF_SELECT0 */
+#define DPX_CR_BX_TU_PERF_SELECT0 0xC910U
+#define DPX_CR_BX_TU_PERF_SELECT0_MASKFULL 0x3FFF3FFF003FFFFFULL
+#define DPX_CR_BX_TU_PERF_SELECT0_BATCH_MAX_SHIFT 48U
+#define DPX_CR_BX_TU_PERF_SELECT0_BATCH_MAX_CLRMSK 0xC000FFFFFFFFFFFFULL
+#define DPX_CR_BX_TU_PERF_SELECT0_BATCH_MIN_SHIFT 32U
+#define DPX_CR_BX_TU_PERF_SELECT0_BATCH_MIN_CLRMSK 0xFFFFC000FFFFFFFFULL
+#define DPX_CR_BX_TU_PERF_SELECT0_MODE_SHIFT 21U
+#define DPX_CR_BX_TU_PERF_SELECT0_MODE_CLRMSK 0xFFFFFFFFFFDFFFFFULL
+#define DPX_CR_BX_TU_PERF_SELECT0_MODE_EN 0x0000000000200000ULL
+#define DPX_CR_BX_TU_PERF_SELECT0_GROUP_SELECT_SHIFT 16U
+#define DPX_CR_BX_TU_PERF_SELECT0_GROUP_SELECT_CLRMSK 0xFFFFFFFFFFE0FFFFULL
+#define DPX_CR_BX_TU_PERF_SELECT0_BIT_SELECT_SHIFT 0U
+#define DPX_CR_BX_TU_PERF_SELECT0_BIT_SELECT_CLRMSK 0xFFFFFFFFFFFF0000ULL
+
+/* Register DPX_CR_BX_TU_PERF_COUNTER_0 */
+#define DPX_CR_BX_TU_PERF_COUNTER_0 0xC938U
+#define DPX_CR_BX_TU_PERF_COUNTER_0_MASKFULL 0x00000000FFFFFFFFULL
+#define DPX_CR_BX_TU_PERF_COUNTER_0_REG_SHIFT 0U
+#define DPX_CR_BX_TU_PERF_COUNTER_0_REG_CLRMSK 0x00000000U
+
+/* Register DPX_CR_RS_PDS_RR_CHECKSUM */
+#define DPX_CR_RS_PDS_RR_CHECKSUM 0xC0F0U
+#define DPX_CR_RS_PDS_RR_CHECKSUM_MASKFULL 0x00000000FFFFFFFFULL
+#define DPX_CR_RS_PDS_RR_CHECKSUM_VALUE_SHIFT 0U
+#define DPX_CR_RS_PDS_RR_CHECKSUM_VALUE_CLRMSK 0xFFFFFFFF00000000ULL
+
+/* Register ROGUE_CR_MMU_CBASE_MAPPING_CONTEXT */
+#define ROGUE_CR_MMU_CBASE_MAPPING_CONTEXT 0xE140U
+#define ROGUE_CR_MMU_CBASE_MAPPING_CONTEXT_MASKFULL 0x00000000000000FFULL
+#define ROGUE_CR_MMU_CBASE_MAPPING_CONTEXT_ID_SHIFT 0U
+#define ROGUE_CR_MMU_CBASE_MAPPING_CONTEXT_ID_CLRMSK 0xFFFFFF00U
+
+/* Register ROGUE_CR_MMU_CBASE_MAPPING */
+#define ROGUE_CR_MMU_CBASE_MAPPING 0xE148U
+#define ROGUE_CR_MMU_CBASE_MAPPING_MASKFULL 0x000000000FFFFFFFULL
+#define ROGUE_CR_MMU_CBASE_MAPPING_BASE_ADDR_SHIFT 0U
+#define ROGUE_CR_MMU_CBASE_MAPPING_BASE_ADDR_CLRMSK 0xF0000000U
+#define ROGUE_CR_MMU_CBASE_MAPPING_BASE_ADDR_ALIGNSHIFT 12U
+#define ROGUE_CR_MMU_CBASE_MAPPING_BASE_ADDR_ALIGNSIZE 4096U
+
+/* Register ROGUE_CR_MMU_FAULT_STATUS */
+#define ROGUE_CR_MMU_FAULT_STATUS 0xE150U
+#define ROGUE_CR_MMU_FAULT_STATUS_MASKFULL 0xFFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_MMU_FAULT_STATUS_ADDRESS_SHIFT 28U
+#define ROGUE_CR_MMU_FAULT_STATUS_ADDRESS_CLRMSK 0x000000000FFFFFFFULL
+#define ROGUE_CR_MMU_FAULT_STATUS_CONTEXT_SHIFT 20U
+#define ROGUE_CR_MMU_FAULT_STATUS_CONTEXT_CLRMSK 0xFFFFFFFFF00FFFFFULL
+#define ROGUE_CR_MMU_FAULT_STATUS_TAG_SB_SHIFT 12U
+#define ROGUE_CR_MMU_FAULT_STATUS_TAG_SB_CLRMSK 0xFFFFFFFFFFF00FFFULL
+#define ROGUE_CR_MMU_FAULT_STATUS_REQ_ID_SHIFT 6U
+#define ROGUE_CR_MMU_FAULT_STATUS_REQ_ID_CLRMSK 0xFFFFFFFFFFFFF03FULL
+#define ROGUE_CR_MMU_FAULT_STATUS_LEVEL_SHIFT 4U
+#define ROGUE_CR_MMU_FAULT_STATUS_LEVEL_CLRMSK 0xFFFFFFFFFFFFFFCFULL
+#define ROGUE_CR_MMU_FAULT_STATUS_RNW_SHIFT 3U
+#define ROGUE_CR_MMU_FAULT_STATUS_RNW_CLRMSK 0xFFFFFFFFFFFFFFF7ULL
+#define ROGUE_CR_MMU_FAULT_STATUS_RNW_EN 0x0000000000000008ULL
+#define ROGUE_CR_MMU_FAULT_STATUS_TYPE_SHIFT 1U
+#define ROGUE_CR_MMU_FAULT_STATUS_TYPE_CLRMSK 0xFFFFFFFFFFFFFFF9ULL
+#define ROGUE_CR_MMU_FAULT_STATUS_FAULT_SHIFT 0U
+#define ROGUE_CR_MMU_FAULT_STATUS_FAULT_CLRMSK 0xFFFFFFFFFFFFFFFEULL
+#define ROGUE_CR_MMU_FAULT_STATUS_FAULT_EN 0x0000000000000001ULL
+
+/* Register ROGUE_CR_MMU_FAULT_STATUS_META */
+#define ROGUE_CR_MMU_FAULT_STATUS_META 0xE158U
+#define ROGUE_CR_MMU_FAULT_STATUS_META_MASKFULL 0xFFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_MMU_FAULT_STATUS_META_ADDRESS_SHIFT 28U
+#define ROGUE_CR_MMU_FAULT_STATUS_META_ADDRESS_CLRMSK 0x000000000FFFFFFFULL
+#define ROGUE_CR_MMU_FAULT_STATUS_META_CONTEXT_SHIFT 20U
+#define ROGUE_CR_MMU_FAULT_STATUS_META_CONTEXT_CLRMSK 0xFFFFFFFFF00FFFFFULL
+#define ROGUE_CR_MMU_FAULT_STATUS_META_TAG_SB_SHIFT 12U
+#define ROGUE_CR_MMU_FAULT_STATUS_META_TAG_SB_CLRMSK 0xFFFFFFFFFFF00FFFULL
+#define ROGUE_CR_MMU_FAULT_STATUS_META_REQ_ID_SHIFT 6U
+#define ROGUE_CR_MMU_FAULT_STATUS_META_REQ_ID_CLRMSK 0xFFFFFFFFFFFFF03FULL
+#define ROGUE_CR_MMU_FAULT_STATUS_META_LEVEL_SHIFT 4U
+#define ROGUE_CR_MMU_FAULT_STATUS_META_LEVEL_CLRMSK 0xFFFFFFFFFFFFFFCFULL
+#define ROGUE_CR_MMU_FAULT_STATUS_META_RNW_SHIFT 3U
+#define ROGUE_CR_MMU_FAULT_STATUS_META_RNW_CLRMSK 0xFFFFFFFFFFFFFFF7ULL
+#define ROGUE_CR_MMU_FAULT_STATUS_META_RNW_EN 0x0000000000000008ULL
+#define ROGUE_CR_MMU_FAULT_STATUS_META_TYPE_SHIFT 1U
+#define ROGUE_CR_MMU_FAULT_STATUS_META_TYPE_CLRMSK 0xFFFFFFFFFFFFFFF9ULL
+#define ROGUE_CR_MMU_FAULT_STATUS_META_FAULT_SHIFT 0U
+#define ROGUE_CR_MMU_FAULT_STATUS_META_FAULT_CLRMSK 0xFFFFFFFFFFFFFFFEULL
+#define ROGUE_CR_MMU_FAULT_STATUS_META_FAULT_EN 0x0000000000000001ULL
+
+/* Register ROGUE_CR_SLC3_CTRL_MISC */
+#define ROGUE_CR_SLC3_CTRL_MISC 0xE200U
+#define ROGUE_CR_SLC3_CTRL_MISC_MASKFULL 0x0000000000000107ULL
+#define ROGUE_CR_SLC3_CTRL_MISC_WRITE_COMBINER_SHIFT 8U
+#define ROGUE_CR_SLC3_CTRL_MISC_WRITE_COMBINER_CLRMSK 0xFFFFFEFFU
+#define ROGUE_CR_SLC3_CTRL_MISC_WRITE_COMBINER_EN 0x00000100U
+#define ROGUE_CR_SLC3_CTRL_MISC_ADDR_DECODE_MODE_SHIFT 0U
+#define ROGUE_CR_SLC3_CTRL_MISC_ADDR_DECODE_MODE_CLRMSK 0xFFFFFFF8U
+#define ROGUE_CR_SLC3_CTRL_MISC_ADDR_DECODE_MODE_LINEAR 0x00000000U
+#define ROGUE_CR_SLC3_CTRL_MISC_ADDR_DECODE_MODE_IN_PAGE_HASH 0x00000001U
+#define ROGUE_CR_SLC3_CTRL_MISC_ADDR_DECODE_MODE_FIXED_PVR_HASH 0x00000002U
+#define ROGUE_CR_SLC3_CTRL_MISC_ADDR_DECODE_MODE_SCRAMBLE_PVR_HASH 0x00000003U
+#define ROGUE_CR_SLC3_CTRL_MISC_ADDR_DECODE_MODE_WEAVED_HASH 0x00000004U
+
+/* Register ROGUE_CR_SLC3_SCRAMBLE */
+#define ROGUE_CR_SLC3_SCRAMBLE 0xE208U
+#define ROGUE_CR_SLC3_SCRAMBLE_MASKFULL 0xFFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_SLC3_SCRAMBLE_BITS_SHIFT 0U
+#define ROGUE_CR_SLC3_SCRAMBLE_BITS_CLRMSK 0x0000000000000000ULL
+
+/* Register ROGUE_CR_SLC3_SCRAMBLE2 */
+#define ROGUE_CR_SLC3_SCRAMBLE2 0xE210U
+#define ROGUE_CR_SLC3_SCRAMBLE2_MASKFULL 0xFFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_SLC3_SCRAMBLE2_BITS_SHIFT 0U
+#define ROGUE_CR_SLC3_SCRAMBLE2_BITS_CLRMSK 0x0000000000000000ULL
+
+/* Register ROGUE_CR_SLC3_SCRAMBLE3 */
+#define ROGUE_CR_SLC3_SCRAMBLE3 0xE218U
+#define ROGUE_CR_SLC3_SCRAMBLE3_MASKFULL 0xFFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_SLC3_SCRAMBLE3_BITS_SHIFT 0U
+#define ROGUE_CR_SLC3_SCRAMBLE3_BITS_CLRMSK 0x0000000000000000ULL
+
+/* Register ROGUE_CR_SLC3_SCRAMBLE4 */
+#define ROGUE_CR_SLC3_SCRAMBLE4 0xE260U
+#define ROGUE_CR_SLC3_SCRAMBLE4_MASKFULL 0xFFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_SLC3_SCRAMBLE4_BITS_SHIFT 0U
+#define ROGUE_CR_SLC3_SCRAMBLE4_BITS_CLRMSK 0x0000000000000000ULL
+
+/* Register ROGUE_CR_SLC3_STATUS */
+#define ROGUE_CR_SLC3_STATUS 0xE220U
+#define ROGUE_CR_SLC3_STATUS_MASKFULL 0xFFFFFFFFFFFFFFFFULL
+#define ROGUE_CR_SLC3_STATUS_WRITES1_SHIFT 48U
+#define ROGUE_CR_SLC3_STATUS_WRITES1_CLRMSK 0x0000FFFFFFFFFFFFULL
+#define ROGUE_CR_SLC3_STATUS_WRITES0_SHIFT 32U
+#define ROGUE_CR_SLC3_STATUS_WRITES0_CLRMSK 0xFFFF0000FFFFFFFFULL
+#define ROGUE_CR_SLC3_STATUS_READS1_SHIFT 16U
+#define ROGUE_CR_SLC3_STATUS_READS1_CLRMSK 0xFFFFFFFF0000FFFFULL
+#define ROGUE_CR_SLC3_STATUS_READS0_SHIFT 0U
+#define ROGUE_CR_SLC3_STATUS_READS0_CLRMSK 0xFFFFFFFFFFFF0000ULL
+
+/* Register ROGUE_CR_SLC3_IDLE */
+#define ROGUE_CR_SLC3_IDLE 0xE228U
+#define ROGUE_CR_SLC3_IDLE_MASKFULL 0x00000000000FFFFFULL
+#define ROGUE_CR_SLC3_IDLE_ORDERQ_DUST2_SHIFT 18U
+#define ROGUE_CR_SLC3_IDLE_ORDERQ_DUST2_CLRMSK 0xFFF3FFFFU
+#define ROGUE_CR_SLC3_IDLE_MMU_SHIFT 17U
+#define ROGUE_CR_SLC3_IDLE_MMU_CLRMSK 0xFFFDFFFFU
+#define ROGUE_CR_SLC3_IDLE_MMU_EN 0x00020000U
+#define ROGUE_CR_SLC3_IDLE_RDI_SHIFT 16U
+#define ROGUE_CR_SLC3_IDLE_RDI_CLRMSK 0xFFFEFFFFU
+#define ROGUE_CR_SLC3_IDLE_RDI_EN 0x00010000U
+#define ROGUE_CR_SLC3_IDLE_IMGBV4_SHIFT 12U
+#define ROGUE_CR_SLC3_IDLE_IMGBV4_CLRMSK 0xFFFF0FFFU
+#define ROGUE_CR_SLC3_IDLE_CACHE_BANKS_SHIFT 4U
+#define ROGUE_CR_SLC3_IDLE_CACHE_BANKS_CLRMSK 0xFFFFF00FU
+#define ROGUE_CR_SLC3_IDLE_ORDERQ_DUST_SHIFT 2U
+#define ROGUE_CR_SLC3_IDLE_ORDERQ_DUST_CLRMSK 0xFFFFFFF3U
+#define ROGUE_CR_SLC3_IDLE_ORDERQ_JONES_SHIFT 1U
+#define ROGUE_CR_SLC3_IDLE_ORDERQ_JONES_CLRMSK 0xFFFFFFFDU
+#define ROGUE_CR_SLC3_IDLE_ORDERQ_JONES_EN 0x00000002U
+#define ROGUE_CR_SLC3_IDLE_XBAR_SHIFT 0U
+#define ROGUE_CR_SLC3_IDLE_XBAR_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_SLC3_IDLE_XBAR_EN 0x00000001U
+
+/* Register ROGUE_CR_SLC3_FAULT_STOP_STATUS */
+#define ROGUE_CR_SLC3_FAULT_STOP_STATUS 0xE248U
+#define ROGUE_CR_SLC3_FAULT_STOP_STATUS_MASKFULL 0x0000000000001FFFULL
+#define ROGUE_CR_SLC3_FAULT_STOP_STATUS_BIF_SHIFT 0U
+#define ROGUE_CR_SLC3_FAULT_STOP_STATUS_BIF_CLRMSK 0xFFFFE000U
+
+/* Register ROGUE_CR_VDM_CONTEXT_STORE_MODE */
+#define ROGUE_CR_VDM_CONTEXT_STORE_MODE 0xF048U
+#define ROGUE_CR_VDM_CONTEXT_STORE_MODE_MASKFULL 0x0000000000000003ULL
+#define ROGUE_CR_VDM_CONTEXT_STORE_MODE_MODE_SHIFT 0U
+#define ROGUE_CR_VDM_CONTEXT_STORE_MODE_MODE_CLRMSK 0xFFFFFFFCU
+#define ROGUE_CR_VDM_CONTEXT_STORE_MODE_MODE_INDEX 0x00000000U
+#define ROGUE_CR_VDM_CONTEXT_STORE_MODE_MODE_INSTANCE 0x00000001U
+#define ROGUE_CR_VDM_CONTEXT_STORE_MODE_MODE_LIST 0x00000002U
+
+/* Register ROGUE_CR_CONTEXT_MAPPING0 */
+#define ROGUE_CR_CONTEXT_MAPPING0 0xF078U
+#define ROGUE_CR_CONTEXT_MAPPING0_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_CONTEXT_MAPPING0_2D_SHIFT 24U
+#define ROGUE_CR_CONTEXT_MAPPING0_2D_CLRMSK 0x00FFFFFFU
+#define ROGUE_CR_CONTEXT_MAPPING0_CDM_SHIFT 16U
+#define ROGUE_CR_CONTEXT_MAPPING0_CDM_CLRMSK 0xFF00FFFFU
+#define ROGUE_CR_CONTEXT_MAPPING0_3D_SHIFT 8U
+#define ROGUE_CR_CONTEXT_MAPPING0_3D_CLRMSK 0xFFFF00FFU
+#define ROGUE_CR_CONTEXT_MAPPING0_TA_SHIFT 0U
+#define ROGUE_CR_CONTEXT_MAPPING0_TA_CLRMSK 0xFFFFFF00U
+
+/* Register ROGUE_CR_CONTEXT_MAPPING1 */
+#define ROGUE_CR_CONTEXT_MAPPING1 0xF080U
+#define ROGUE_CR_CONTEXT_MAPPING1_MASKFULL 0x000000000000FFFFULL
+#define ROGUE_CR_CONTEXT_MAPPING1_HOST_SHIFT 8U
+#define ROGUE_CR_CONTEXT_MAPPING1_HOST_CLRMSK 0xFFFF00FFU
+#define ROGUE_CR_CONTEXT_MAPPING1_TLA_SHIFT 0U
+#define ROGUE_CR_CONTEXT_MAPPING1_TLA_CLRMSK 0xFFFFFF00U
+
+/* Register ROGUE_CR_CONTEXT_MAPPING2 */
+#define ROGUE_CR_CONTEXT_MAPPING2 0xF088U
+#define ROGUE_CR_CONTEXT_MAPPING2_MASKFULL 0x0000000000FFFFFFULL
+#define ROGUE_CR_CONTEXT_MAPPING2_ALIST0_SHIFT 16U
+#define ROGUE_CR_CONTEXT_MAPPING2_ALIST0_CLRMSK 0xFF00FFFFU
+#define ROGUE_CR_CONTEXT_MAPPING2_TE0_SHIFT 8U
+#define ROGUE_CR_CONTEXT_MAPPING2_TE0_CLRMSK 0xFFFF00FFU
+#define ROGUE_CR_CONTEXT_MAPPING2_VCE0_SHIFT 0U
+#define ROGUE_CR_CONTEXT_MAPPING2_VCE0_CLRMSK 0xFFFFFF00U
+
+/* Register ROGUE_CR_CONTEXT_MAPPING3 */
+#define ROGUE_CR_CONTEXT_MAPPING3 0xF090U
+#define ROGUE_CR_CONTEXT_MAPPING3_MASKFULL 0x0000000000FFFFFFULL
+#define ROGUE_CR_CONTEXT_MAPPING3_ALIST1_SHIFT 16U
+#define ROGUE_CR_CONTEXT_MAPPING3_ALIST1_CLRMSK 0xFF00FFFFU
+#define ROGUE_CR_CONTEXT_MAPPING3_TE1_SHIFT 8U
+#define ROGUE_CR_CONTEXT_MAPPING3_TE1_CLRMSK 0xFFFF00FFU
+#define ROGUE_CR_CONTEXT_MAPPING3_VCE1_SHIFT 0U
+#define ROGUE_CR_CONTEXT_MAPPING3_VCE1_CLRMSK 0xFFFFFF00U
+
+/* Register ROGUE_CR_BIF_JONES_OUTSTANDING_READ */
+#define ROGUE_CR_BIF_JONES_OUTSTANDING_READ 0xF098U
+#define ROGUE_CR_BIF_JONES_OUTSTANDING_READ_MASKFULL 0x000000000000FFFFULL
+#define ROGUE_CR_BIF_JONES_OUTSTANDING_READ_COUNTER_SHIFT 0U
+#define ROGUE_CR_BIF_JONES_OUTSTANDING_READ_COUNTER_CLRMSK 0xFFFF0000U
+
+/* Register ROGUE_CR_BIF_BLACKPEARL_OUTSTANDING_READ */
+#define ROGUE_CR_BIF_BLACKPEARL_OUTSTANDING_READ 0xF0A0U
+#define ROGUE_CR_BIF_BLACKPEARL_OUTSTANDING_READ_MASKFULL 0x000000000000FFFFULL
+#define ROGUE_CR_BIF_BLACKPEARL_OUTSTANDING_READ_COUNTER_SHIFT 0U
+#define ROGUE_CR_BIF_BLACKPEARL_OUTSTANDING_READ_COUNTER_CLRMSK 0xFFFF0000U
+
+/* Register ROGUE_CR_BIF_DUST_OUTSTANDING_READ */
+#define ROGUE_CR_BIF_DUST_OUTSTANDING_READ 0xF0A8U
+#define ROGUE_CR_BIF_DUST_OUTSTANDING_READ_MASKFULL 0x000000000000FFFFULL
+#define ROGUE_CR_BIF_DUST_OUTSTANDING_READ_COUNTER_SHIFT 0U
+#define ROGUE_CR_BIF_DUST_OUTSTANDING_READ_COUNTER_CLRMSK 0xFFFF0000U
+
+/* Register ROGUE_CR_CONTEXT_MAPPING4 */
+#define ROGUE_CR_CONTEXT_MAPPING4 0xF210U
+#define ROGUE_CR_CONTEXT_MAPPING4_MASKFULL 0x0000FFFFFFFFFFFFULL
+#define ROGUE_CR_CONTEXT_MAPPING4_3D_MMU_STACK_SHIFT 40U
+#define ROGUE_CR_CONTEXT_MAPPING4_3D_MMU_STACK_CLRMSK 0xFFFF00FFFFFFFFFFULL
+#define ROGUE_CR_CONTEXT_MAPPING4_3D_UFSTACK_SHIFT 32U
+#define ROGUE_CR_CONTEXT_MAPPING4_3D_UFSTACK_CLRMSK 0xFFFFFF00FFFFFFFFULL
+#define ROGUE_CR_CONTEXT_MAPPING4_3D_FSTACK_SHIFT 24U
+#define ROGUE_CR_CONTEXT_MAPPING4_3D_FSTACK_CLRMSK 0xFFFFFFFF00FFFFFFULL
+#define ROGUE_CR_CONTEXT_MAPPING4_TA_MMU_STACK_SHIFT 16U
+#define ROGUE_CR_CONTEXT_MAPPING4_TA_MMU_STACK_CLRMSK 0xFFFFFFFFFF00FFFFULL
+#define ROGUE_CR_CONTEXT_MAPPING4_TA_UFSTACK_SHIFT 8U
+#define ROGUE_CR_CONTEXT_MAPPING4_TA_UFSTACK_CLRMSK 0xFFFFFFFFFFFF00FFULL
+#define ROGUE_CR_CONTEXT_MAPPING4_TA_FSTACK_SHIFT 0U
+#define ROGUE_CR_CONTEXT_MAPPING4_TA_FSTACK_CLRMSK 0xFFFFFFFFFFFFFF00ULL
+
+/* Register ROGUE_CR_MULTICORE_GPU */
+#define ROGUE_CR_MULTICORE_GPU 0xF300U
+#define ROGUE_CR_MULTICORE_GPU_MASKFULL 0x000000000000007FULL
+#define ROGUE_CR_MULTICORE_GPU_CAPABILITY_FRAGMENT_SHIFT 6U
+#define ROGUE_CR_MULTICORE_GPU_CAPABILITY_FRAGMENT_CLRMSK 0xFFFFFFBFU
+#define ROGUE_CR_MULTICORE_GPU_CAPABILITY_FRAGMENT_EN 0x00000040U
+#define ROGUE_CR_MULTICORE_GPU_CAPABILITY_GEOMETRY_SHIFT 5U
+#define ROGUE_CR_MULTICORE_GPU_CAPABILITY_GEOMETRY_CLRMSK 0xFFFFFFDFU
+#define ROGUE_CR_MULTICORE_GPU_CAPABILITY_GEOMETRY_EN 0x00000020U
+#define ROGUE_CR_MULTICORE_GPU_CAPABILITY_COMPUTE_SHIFT 4U
+#define ROGUE_CR_MULTICORE_GPU_CAPABILITY_COMPUTE_CLRMSK 0xFFFFFFEFU
+#define ROGUE_CR_MULTICORE_GPU_CAPABILITY_COMPUTE_EN 0x00000010U
+#define ROGUE_CR_MULTICORE_GPU_CAPABILITY_PRIMARY_SHIFT 3U
+#define ROGUE_CR_MULTICORE_GPU_CAPABILITY_PRIMARY_CLRMSK 0xFFFFFFF7U
+#define ROGUE_CR_MULTICORE_GPU_CAPABILITY_PRIMARY_EN 0x00000008U
+#define ROGUE_CR_MULTICORE_GPU_ID_SHIFT 0U
+#define ROGUE_CR_MULTICORE_GPU_ID_CLRMSK 0xFFFFFFF8U
+
+/* Register ROGUE_CR_MULTICORE_SYSTEM */
+#define ROGUE_CR_MULTICORE_SYSTEM 0xF308U
+#define ROGUE_CR_MULTICORE_SYSTEM_MASKFULL 0x000000000000000FULL
+#define ROGUE_CR_MULTICORE_SYSTEM_GPU_COUNT_SHIFT 0U
+#define ROGUE_CR_MULTICORE_SYSTEM_GPU_COUNT_CLRMSK 0xFFFFFFF0U
+
+/* Register ROGUE_CR_MULTICORE_FRAGMENT_CTRL_COMMON */
+#define ROGUE_CR_MULTICORE_FRAGMENT_CTRL_COMMON 0xF310U
+#define ROGUE_CR_MULTICORE_FRAGMENT_CTRL_COMMON_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_MULTICORE_FRAGMENT_CTRL_COMMON_WORKLOAD_TYPE_SHIFT 30U
+#define ROGUE_CR_MULTICORE_FRAGMENT_CTRL_COMMON_WORKLOAD_TYPE_CLRMSK 0x3FFFFFFFU
+#define ROGUE_CR_MULTICORE_FRAGMENT_CTRL_COMMON_WORKLOAD_EXECUTE_COUNT_SHIFT 8U
+#define ROGUE_CR_MULTICORE_FRAGMENT_CTRL_COMMON_WORKLOAD_EXECUTE_COUNT_CLRMSK 0xC00000FFU
+#define ROGUE_CR_MULTICORE_FRAGMENT_CTRL_COMMON_GPU_ENABLE_SHIFT 0U
+#define ROGUE_CR_MULTICORE_FRAGMENT_CTRL_COMMON_GPU_ENABLE_CLRMSK 0xFFFFFF00U
+
+/* Register ROGUE_CR_MULTICORE_GEOMETRY_CTRL_COMMON */
+#define ROGUE_CR_MULTICORE_GEOMETRY_CTRL_COMMON 0xF320U
+#define ROGUE_CR_MULTICORE_GEOMETRY_CTRL_COMMON_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_MULTICORE_GEOMETRY_CTRL_COMMON_WORKLOAD_TYPE_SHIFT 30U
+#define ROGUE_CR_MULTICORE_GEOMETRY_CTRL_COMMON_WORKLOAD_TYPE_CLRMSK 0x3FFFFFFFU
+#define ROGUE_CR_MULTICORE_GEOMETRY_CTRL_COMMON_WORKLOAD_EXECUTE_COUNT_SHIFT 8U
+#define ROGUE_CR_MULTICORE_GEOMETRY_CTRL_COMMON_WORKLOAD_EXECUTE_COUNT_CLRMSK 0xC00000FFU
+#define ROGUE_CR_MULTICORE_GEOMETRY_CTRL_COMMON_GPU_ENABLE_SHIFT 0U
+#define ROGUE_CR_MULTICORE_GEOMETRY_CTRL_COMMON_GPU_ENABLE_CLRMSK 0xFFFFFF00U
+
+/* Register ROGUE_CR_MULTICORE_COMPUTE_CTRL_COMMON */
+#define ROGUE_CR_MULTICORE_COMPUTE_CTRL_COMMON 0xF330U
+#define ROGUE_CR_MULTICORE_COMPUTE_CTRL_COMMON_MASKFULL 0x00000000FFFFFFFFULL
+#define ROGUE_CR_MULTICORE_COMPUTE_CTRL_COMMON_WORKLOAD_TYPE_SHIFT 30U
+#define ROGUE_CR_MULTICORE_COMPUTE_CTRL_COMMON_WORKLOAD_TYPE_CLRMSK 0x3FFFFFFFU
+#define ROGUE_CR_MULTICORE_COMPUTE_CTRL_COMMON_WORKLOAD_EXECUTE_COUNT_SHIFT 8U
+#define ROGUE_CR_MULTICORE_COMPUTE_CTRL_COMMON_WORKLOAD_EXECUTE_COUNT_CLRMSK 0xC00000FFU
+#define ROGUE_CR_MULTICORE_COMPUTE_CTRL_COMMON_GPU_ENABLE_SHIFT 0U
+#define ROGUE_CR_MULTICORE_COMPUTE_CTRL_COMMON_GPU_ENABLE_CLRMSK 0xFFFFFF00U
+
+/* Register ROGUE_CR_ECC_RAM_ERR_INJ */
+#define ROGUE_CR_ECC_RAM_ERR_INJ 0xF340U
+#define ROGUE_CR_ECC_RAM_ERR_INJ_MASKFULL 0x000000000000001FULL
+#define ROGUE_CR_ECC_RAM_ERR_INJ_SLC_SIDEKICK_SHIFT 4U
+#define ROGUE_CR_ECC_RAM_ERR_INJ_SLC_SIDEKICK_CLRMSK 0xFFFFFFEFU
+#define ROGUE_CR_ECC_RAM_ERR_INJ_SLC_SIDEKICK_EN 0x00000010U
+#define ROGUE_CR_ECC_RAM_ERR_INJ_USC_SHIFT 3U
+#define ROGUE_CR_ECC_RAM_ERR_INJ_USC_CLRMSK 0xFFFFFFF7U
+#define ROGUE_CR_ECC_RAM_ERR_INJ_USC_EN 0x00000008U
+#define ROGUE_CR_ECC_RAM_ERR_INJ_TPU_MCU_L0_SHIFT 2U
+#define ROGUE_CR_ECC_RAM_ERR_INJ_TPU_MCU_L0_CLRMSK 0xFFFFFFFBU
+#define ROGUE_CR_ECC_RAM_ERR_INJ_TPU_MCU_L0_EN 0x00000004U
+#define ROGUE_CR_ECC_RAM_ERR_INJ_RASCAL_SHIFT 1U
+#define ROGUE_CR_ECC_RAM_ERR_INJ_RASCAL_CLRMSK 0xFFFFFFFDU
+#define ROGUE_CR_ECC_RAM_ERR_INJ_RASCAL_EN 0x00000002U
+#define ROGUE_CR_ECC_RAM_ERR_INJ_MARS_SHIFT 0U
+#define ROGUE_CR_ECC_RAM_ERR_INJ_MARS_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_ECC_RAM_ERR_INJ_MARS_EN 0x00000001U
+
+/* Register ROGUE_CR_ECC_RAM_INIT_KICK */
+#define ROGUE_CR_ECC_RAM_INIT_KICK 0xF348U
+#define ROGUE_CR_ECC_RAM_INIT_KICK_MASKFULL 0x000000000000001FULL
+#define ROGUE_CR_ECC_RAM_INIT_KICK_SLC_SIDEKICK_SHIFT 4U
+#define ROGUE_CR_ECC_RAM_INIT_KICK_SLC_SIDEKICK_CLRMSK 0xFFFFFFEFU
+#define ROGUE_CR_ECC_RAM_INIT_KICK_SLC_SIDEKICK_EN 0x00000010U
+#define ROGUE_CR_ECC_RAM_INIT_KICK_USC_SHIFT 3U
+#define ROGUE_CR_ECC_RAM_INIT_KICK_USC_CLRMSK 0xFFFFFFF7U
+#define ROGUE_CR_ECC_RAM_INIT_KICK_USC_EN 0x00000008U
+#define ROGUE_CR_ECC_RAM_INIT_KICK_TPU_MCU_L0_SHIFT 2U
+#define ROGUE_CR_ECC_RAM_INIT_KICK_TPU_MCU_L0_CLRMSK 0xFFFFFFFBU
+#define ROGUE_CR_ECC_RAM_INIT_KICK_TPU_MCU_L0_EN 0x00000004U
+#define ROGUE_CR_ECC_RAM_INIT_KICK_RASCAL_SHIFT 1U
+#define ROGUE_CR_ECC_RAM_INIT_KICK_RASCAL_CLRMSK 0xFFFFFFFDU
+#define ROGUE_CR_ECC_RAM_INIT_KICK_RASCAL_EN 0x00000002U
+#define ROGUE_CR_ECC_RAM_INIT_KICK_MARS_SHIFT 0U
+#define ROGUE_CR_ECC_RAM_INIT_KICK_MARS_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_ECC_RAM_INIT_KICK_MARS_EN 0x00000001U
+
+/* Register ROGUE_CR_ECC_RAM_INIT_DONE */
+#define ROGUE_CR_ECC_RAM_INIT_DONE 0xF350U
+#define ROGUE_CR_ECC_RAM_INIT_DONE_MASKFULL 0x000000000000001FULL
+#define ROGUE_CR_ECC_RAM_INIT_DONE_SLC_SIDEKICK_SHIFT 4U
+#define ROGUE_CR_ECC_RAM_INIT_DONE_SLC_SIDEKICK_CLRMSK 0xFFFFFFEFU
+#define ROGUE_CR_ECC_RAM_INIT_DONE_SLC_SIDEKICK_EN 0x00000010U
+#define ROGUE_CR_ECC_RAM_INIT_DONE_USC_SHIFT 3U
+#define ROGUE_CR_ECC_RAM_INIT_DONE_USC_CLRMSK 0xFFFFFFF7U
+#define ROGUE_CR_ECC_RAM_INIT_DONE_USC_EN 0x00000008U
+#define ROGUE_CR_ECC_RAM_INIT_DONE_TPU_MCU_L0_SHIFT 2U
+#define ROGUE_CR_ECC_RAM_INIT_DONE_TPU_MCU_L0_CLRMSK 0xFFFFFFFBU
+#define ROGUE_CR_ECC_RAM_INIT_DONE_TPU_MCU_L0_EN 0x00000004U
+#define ROGUE_CR_ECC_RAM_INIT_DONE_RASCAL_SHIFT 1U
+#define ROGUE_CR_ECC_RAM_INIT_DONE_RASCAL_CLRMSK 0xFFFFFFFDU
+#define ROGUE_CR_ECC_RAM_INIT_DONE_RASCAL_EN 0x00000002U
+#define ROGUE_CR_ECC_RAM_INIT_DONE_MARS_SHIFT 0U
+#define ROGUE_CR_ECC_RAM_INIT_DONE_MARS_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_ECC_RAM_INIT_DONE_MARS_EN 0x00000001U
+
+/* Register ROGUE_CR_SAFETY_EVENT_ENABLE */
+#define ROGUE_CR_SAFETY_EVENT_ENABLE__ROGUEXE 0xF390U
+#define ROGUE_CR_SAFETY_EVENT_ENABLE__ROGUEXE__MASKFULL 0x000000000000007FULL
+#define ROGUE_CR_SAFETY_EVENT_ENABLE__ROGUEXE__CPU_PAGE_FAULT_SHIFT 6U
+#define ROGUE_CR_SAFETY_EVENT_ENABLE__ROGUEXE__CPU_PAGE_FAULT_CLRMSK 0xFFFFFFBFU
+#define ROGUE_CR_SAFETY_EVENT_ENABLE__ROGUEXE__CPU_PAGE_FAULT_EN 0x00000040U
+#define ROGUE_CR_SAFETY_EVENT_ENABLE__ROGUEXE__SAFE_COMPUTE_FAIL_SHIFT 5U
+#define ROGUE_CR_SAFETY_EVENT_ENABLE__ROGUEXE__SAFE_COMPUTE_FAIL_CLRMSK 0xFFFFFFDFU
+#define ROGUE_CR_SAFETY_EVENT_ENABLE__ROGUEXE__SAFE_COMPUTE_FAIL_EN 0x00000020U
+#define ROGUE_CR_SAFETY_EVENT_ENABLE__ROGUEXE__WATCHDOG_TIMEOUT_SHIFT 4U
+#define ROGUE_CR_SAFETY_EVENT_ENABLE__ROGUEXE__WATCHDOG_TIMEOUT_CLRMSK 0xFFFFFFEFU
+#define ROGUE_CR_SAFETY_EVENT_ENABLE__ROGUEXE__WATCHDOG_TIMEOUT_EN 0x00000010U
+#define ROGUE_CR_SAFETY_EVENT_ENABLE__ROGUEXE__TRP_FAIL_SHIFT 3U
+#define ROGUE_CR_SAFETY_EVENT_ENABLE__ROGUEXE__TRP_FAIL_CLRMSK 0xFFFFFFF7U
+#define ROGUE_CR_SAFETY_EVENT_ENABLE__ROGUEXE__TRP_FAIL_EN 0x00000008U
+#define ROGUE_CR_SAFETY_EVENT_ENABLE__ROGUEXE__FAULT_FW_SHIFT 2U
+#define ROGUE_CR_SAFETY_EVENT_ENABLE__ROGUEXE__FAULT_FW_CLRMSK 0xFFFFFFFBU
+#define ROGUE_CR_SAFETY_EVENT_ENABLE__ROGUEXE__FAULT_FW_EN 0x00000004U
+#define ROGUE_CR_SAFETY_EVENT_ENABLE__ROGUEXE__FAULT_GPU_SHIFT 1U
+#define ROGUE_CR_SAFETY_EVENT_ENABLE__ROGUEXE__FAULT_GPU_CLRMSK 0xFFFFFFFDU
+#define ROGUE_CR_SAFETY_EVENT_ENABLE__ROGUEXE__FAULT_GPU_EN 0x00000002U
+#define ROGUE_CR_SAFETY_EVENT_ENABLE__ROGUEXE__GPU_PAGE_FAULT_SHIFT 0U
+#define ROGUE_CR_SAFETY_EVENT_ENABLE__ROGUEXE__GPU_PAGE_FAULT_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_SAFETY_EVENT_ENABLE__ROGUEXE__GPU_PAGE_FAULT_EN 0x00000001U
+
+/* Register ROGUE_CR_SAFETY_EVENT_STATUS */
+#define ROGUE_CR_SAFETY_EVENT_STATUS__ROGUEXE 0xF398U
+#define ROGUE_CR_SAFETY_EVENT_STATUS__ROGUEXE__MASKFULL 0x000000000000007FULL
+#define ROGUE_CR_SAFETY_EVENT_STATUS__ROGUEXE__CPU_PAGE_FAULT_SHIFT 6U
+#define ROGUE_CR_SAFETY_EVENT_STATUS__ROGUEXE__CPU_PAGE_FAULT_CLRMSK 0xFFFFFFBFU
+#define ROGUE_CR_SAFETY_EVENT_STATUS__ROGUEXE__CPU_PAGE_FAULT_EN 0x00000040U
+#define ROGUE_CR_SAFETY_EVENT_STATUS__ROGUEXE__SAFE_COMPUTE_FAIL_SHIFT 5U
+#define ROGUE_CR_SAFETY_EVENT_STATUS__ROGUEXE__SAFE_COMPUTE_FAIL_CLRMSK 0xFFFFFFDFU
+#define ROGUE_CR_SAFETY_EVENT_STATUS__ROGUEXE__SAFE_COMPUTE_FAIL_EN 0x00000020U
+#define ROGUE_CR_SAFETY_EVENT_STATUS__ROGUEXE__WATCHDOG_TIMEOUT_SHIFT 4U
+#define ROGUE_CR_SAFETY_EVENT_STATUS__ROGUEXE__WATCHDOG_TIMEOUT_CLRMSK 0xFFFFFFEFU
+#define ROGUE_CR_SAFETY_EVENT_STATUS__ROGUEXE__WATCHDOG_TIMEOUT_EN 0x00000010U
+#define ROGUE_CR_SAFETY_EVENT_STATUS__ROGUEXE__TRP_FAIL_SHIFT 3U
+#define ROGUE_CR_SAFETY_EVENT_STATUS__ROGUEXE__TRP_FAIL_CLRMSK 0xFFFFFFF7U
+#define ROGUE_CR_SAFETY_EVENT_STATUS__ROGUEXE__TRP_FAIL_EN 0x00000008U
+#define ROGUE_CR_SAFETY_EVENT_STATUS__ROGUEXE__FAULT_FW_SHIFT 2U
+#define ROGUE_CR_SAFETY_EVENT_STATUS__ROGUEXE__FAULT_FW_CLRMSK 0xFFFFFFFBU
+#define ROGUE_CR_SAFETY_EVENT_STATUS__ROGUEXE__FAULT_FW_EN 0x00000004U
+#define ROGUE_CR_SAFETY_EVENT_STATUS__ROGUEXE__FAULT_GPU_SHIFT 1U
+#define ROGUE_CR_SAFETY_EVENT_STATUS__ROGUEXE__FAULT_GPU_CLRMSK 0xFFFFFFFDU
+#define ROGUE_CR_SAFETY_EVENT_STATUS__ROGUEXE__FAULT_GPU_EN 0x00000002U
+#define ROGUE_CR_SAFETY_EVENT_STATUS__ROGUEXE__GPU_PAGE_FAULT_SHIFT 0U
+#define ROGUE_CR_SAFETY_EVENT_STATUS__ROGUEXE__GPU_PAGE_FAULT_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_SAFETY_EVENT_STATUS__ROGUEXE__GPU_PAGE_FAULT_EN 0x00000001U
+
+/* Register ROGUE_CR_SAFETY_EVENT_CLEAR */
+#define ROGUE_CR_SAFETY_EVENT_CLEAR__ROGUEXE 0xF3A0U
+#define ROGUE_CR_SAFETY_EVENT_CLEAR__ROGUEXE__MASKFULL 0x000000000000007FULL
+#define ROGUE_CR_SAFETY_EVENT_CLEAR__ROGUEXE__CPU_PAGE_FAULT_SHIFT 6U
+#define ROGUE_CR_SAFETY_EVENT_CLEAR__ROGUEXE__CPU_PAGE_FAULT_CLRMSK 0xFFFFFFBFU
+#define ROGUE_CR_SAFETY_EVENT_CLEAR__ROGUEXE__CPU_PAGE_FAULT_EN 0x00000040U
+#define ROGUE_CR_SAFETY_EVENT_CLEAR__ROGUEXE__SAFE_COMPUTE_FAIL_SHIFT 5U
+#define ROGUE_CR_SAFETY_EVENT_CLEAR__ROGUEXE__SAFE_COMPUTE_FAIL_CLRMSK 0xFFFFFFDFU
+#define ROGUE_CR_SAFETY_EVENT_CLEAR__ROGUEXE__SAFE_COMPUTE_FAIL_EN 0x00000020U
+#define ROGUE_CR_SAFETY_EVENT_CLEAR__ROGUEXE__WATCHDOG_TIMEOUT_SHIFT 4U
+#define ROGUE_CR_SAFETY_EVENT_CLEAR__ROGUEXE__WATCHDOG_TIMEOUT_CLRMSK 0xFFFFFFEFU
+#define ROGUE_CR_SAFETY_EVENT_CLEAR__ROGUEXE__WATCHDOG_TIMEOUT_EN 0x00000010U
+#define ROGUE_CR_SAFETY_EVENT_CLEAR__ROGUEXE__TRP_FAIL_SHIFT 3U
+#define ROGUE_CR_SAFETY_EVENT_CLEAR__ROGUEXE__TRP_FAIL_CLRMSK 0xFFFFFFF7U
+#define ROGUE_CR_SAFETY_EVENT_CLEAR__ROGUEXE__TRP_FAIL_EN 0x00000008U
+#define ROGUE_CR_SAFETY_EVENT_CLEAR__ROGUEXE__FAULT_FW_SHIFT 2U
+#define ROGUE_CR_SAFETY_EVENT_CLEAR__ROGUEXE__FAULT_FW_CLRMSK 0xFFFFFFFBU
+#define ROGUE_CR_SAFETY_EVENT_CLEAR__ROGUEXE__FAULT_FW_EN 0x00000004U
+#define ROGUE_CR_SAFETY_EVENT_CLEAR__ROGUEXE__FAULT_GPU_SHIFT 1U
+#define ROGUE_CR_SAFETY_EVENT_CLEAR__ROGUEXE__FAULT_GPU_CLRMSK 0xFFFFFFFDU
+#define ROGUE_CR_SAFETY_EVENT_CLEAR__ROGUEXE__FAULT_GPU_EN 0x00000002U
+#define ROGUE_CR_SAFETY_EVENT_CLEAR__ROGUEXE__GPU_PAGE_FAULT_SHIFT 0U
+#define ROGUE_CR_SAFETY_EVENT_CLEAR__ROGUEXE__GPU_PAGE_FAULT_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_SAFETY_EVENT_CLEAR__ROGUEXE__GPU_PAGE_FAULT_EN 0x00000001U
+
+/* Register ROGUE_CR_MTS_SAFETY_EVENT_ENABLE */
+#define ROGUE_CR_MTS_SAFETY_EVENT_ENABLE__ROGUEXE 0xF3D8U
+#define ROGUE_CR_MTS_SAFETY_EVENT_ENABLE__ROGUEXE__MASKFULL 0x000000000000007FULL
+#define ROGUE_CR_MTS_SAFETY_EVENT_ENABLE__ROGUEXE__CPU_PAGE_FAULT_SHIFT 6U
+#define ROGUE_CR_MTS_SAFETY_EVENT_ENABLE__ROGUEXE__CPU_PAGE_FAULT_CLRMSK 0xFFFFFFBFU
+#define ROGUE_CR_MTS_SAFETY_EVENT_ENABLE__ROGUEXE__CPU_PAGE_FAULT_EN 0x00000040U
+#define ROGUE_CR_MTS_SAFETY_EVENT_ENABLE__ROGUEXE__SAFE_COMPUTE_FAIL_SHIFT 5U
+#define ROGUE_CR_MTS_SAFETY_EVENT_ENABLE__ROGUEXE__SAFE_COMPUTE_FAIL_CLRMSK 0xFFFFFFDFU
+#define ROGUE_CR_MTS_SAFETY_EVENT_ENABLE__ROGUEXE__SAFE_COMPUTE_FAIL_EN 0x00000020U
+#define ROGUE_CR_MTS_SAFETY_EVENT_ENABLE__ROGUEXE__WATCHDOG_TIMEOUT_SHIFT 4U
+#define ROGUE_CR_MTS_SAFETY_EVENT_ENABLE__ROGUEXE__WATCHDOG_TIMEOUT_CLRMSK 0xFFFFFFEFU
+#define ROGUE_CR_MTS_SAFETY_EVENT_ENABLE__ROGUEXE__WATCHDOG_TIMEOUT_EN 0x00000010U
+#define ROGUE_CR_MTS_SAFETY_EVENT_ENABLE__ROGUEXE__TRP_FAIL_SHIFT 3U
+#define ROGUE_CR_MTS_SAFETY_EVENT_ENABLE__ROGUEXE__TRP_FAIL_CLRMSK 0xFFFFFFF7U
+#define ROGUE_CR_MTS_SAFETY_EVENT_ENABLE__ROGUEXE__TRP_FAIL_EN 0x00000008U
+#define ROGUE_CR_MTS_SAFETY_EVENT_ENABLE__ROGUEXE__FAULT_FW_SHIFT 2U
+#define ROGUE_CR_MTS_SAFETY_EVENT_ENABLE__ROGUEXE__FAULT_FW_CLRMSK 0xFFFFFFFBU
+#define ROGUE_CR_MTS_SAFETY_EVENT_ENABLE__ROGUEXE__FAULT_FW_EN 0x00000004U
+#define ROGUE_CR_MTS_SAFETY_EVENT_ENABLE__ROGUEXE__FAULT_GPU_SHIFT 1U
+#define ROGUE_CR_MTS_SAFETY_EVENT_ENABLE__ROGUEXE__FAULT_GPU_CLRMSK 0xFFFFFFFDU
+#define ROGUE_CR_MTS_SAFETY_EVENT_ENABLE__ROGUEXE__FAULT_GPU_EN 0x00000002U
+#define ROGUE_CR_MTS_SAFETY_EVENT_ENABLE__ROGUEXE__GPU_PAGE_FAULT_SHIFT 0U
+#define ROGUE_CR_MTS_SAFETY_EVENT_ENABLE__ROGUEXE__GPU_PAGE_FAULT_CLRMSK 0xFFFFFFFEU
+#define ROGUE_CR_MTS_SAFETY_EVENT_ENABLE__ROGUEXE__GPU_PAGE_FAULT_EN 0x00000001U
+
+/* clang-format on */
+
+#endif /* PVR_ROGUE_CR_DEFS_H */
diff --git a/drivers/gpu/drm/imagination/pvr_rogue_cr_defs_client.h b/drivers/gpu/drm/imagination/pvr_rogue_cr_defs_client.h
new file mode 100644
index 000000000000..9a07516e5337
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_rogue_cr_defs_client.h
@@ -0,0 +1,159 @@
+/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#ifndef PVR_ROGUE_CR_DEFS_CLIENT_H
+#define PVR_ROGUE_CR_DEFS_CLIENT_H
+
+/* clang-format off */
+
+/*
+ * This register controls the anti-aliasing mode of the Tiling Co-Processor; independent control is
+ * provided in both the X and Y axes.
+ * This register needs to be set based on the ISP Samples Per Pixel a core supports.
+ *
+ * When ISP Samples Per Pixel = 1:
+ * 2xmsaa is achieved by enabling Y - TE does AA on Y plane only
+ * 4xmsaa is achieved by enabling Y and X - TE does AA on X and Y planes
+ * 8xmsaa not supported by XE cores
+ *
+ * When ISP Samples Per Pixel = 2:
+ * 2xmsaa is achieved by enabling X2 - does not affect TE
+ * 4xmsaa is achieved by enabling Y and X2 - TE does AA on Y plane only
+ * 8xmsaa is achieved by enabling Y, X and X2 - TE does AA on X and Y planes
+ * 8xmsaa not supported by XE cores
+ *
+ * When ISP Samples Per Pixel = 4:
+ * 2xmsaa is achieved by enabling X2 - does not affect TE
+ * 4xmsaa is achieved by enabling Y2 and X2 - TE does AA on Y plane only
+ * 8xmsaa not supported by XE cores
+ */
+/* Register ROGUE_CR_TE_AA */
+#define ROGUE_CR_TE_AA 0x0C00U
+#define ROGUE_CR_TE_AA_MASKFULL 0x000000000000000Full
+/* Y2
+ * Indicates 4xmsaa when X2 and Y2 are set to 1. This does not affect TE and is only used within
+ * TPW.
+ */
+#define ROGUE_CR_TE_AA_Y2_SHIFT 3
+#define ROGUE_CR_TE_AA_Y2_CLRMSK 0xFFFFFFF7
+#define ROGUE_CR_TE_AA_Y2_EN 0x00000008
+/* Y
+ * Anti-Aliasing in Y Plane Enabled
+ */
+#define ROGUE_CR_TE_AA_Y_SHIFT 2
+#define ROGUE_CR_TE_AA_Y_CLRMSK 0xFFFFFFFB
+#define ROGUE_CR_TE_AA_Y_EN 0x00000004
+/* X
+ * Anti-Aliasing in X Plane Enabled
+ */
+#define ROGUE_CR_TE_AA_X_SHIFT 1
+#define ROGUE_CR_TE_AA_X_CLRMSK 0xFFFFFFFD
+#define ROGUE_CR_TE_AA_X_EN 0x00000002
+/* X2
+ * 2x Anti-Aliasing Enabled, affects PPP only
+ */
+#define ROGUE_CR_TE_AA_X2_SHIFT                             (0U)
+#define ROGUE_CR_TE_AA_X2_CLRMSK                            (0xFFFFFFFEU)
+#define ROGUE_CR_TE_AA_X2_EN                                (0x00000001U)
+
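+/*
+ * Illustrative sketch only (not part of the register interface above):
+ * composing a ROGUE_CR_TE_AA value for 4xmsaa when ISP Samples Per Pixel = 1,
+ * i.e. with anti-aliasing enabled on both the X and Y planes as described in
+ * the comment above. The te_aa variable and the pvr_cr_write() helper are
+ * assumptions made purely for the example.
+ *
+ *   u32 te_aa = 0;
+ *
+ *   te_aa |= ROGUE_CR_TE_AA_X_EN;    (AA on the X plane)
+ *   te_aa |= ROGUE_CR_TE_AA_Y_EN;    (AA on the Y plane)
+ *   pvr_cr_write(pvr_dev, ROGUE_CR_TE_AA, te_aa);
+ */
+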
+/* MacroTile Boundaries X Plane */
+/* Register ROGUE_CR_TE_MTILE1 */
+#define ROGUE_CR_TE_MTILE1 0x0C08
+#define ROGUE_CR_TE_MTILE1_MASKFULL 0x0000000007FFFFFFull
+/* X1 default: 0x00000004
+ * X1 MacroTile boundary, left tile X for second column of macrotiles (16MT mode) - 32 pixels across
+ * tile
+ */
+#define ROGUE_CR_TE_MTILE1_X1_SHIFT 18
+#define ROGUE_CR_TE_MTILE1_X1_CLRMSK 0xF803FFFF
+/* X2 default: 0x00000008
+ * X2 MacroTile boundary, left tile X for third (16MT) column of macrotiles - 32 pixels across tile
+ */
+#define ROGUE_CR_TE_MTILE1_X2_SHIFT 9U
+#define ROGUE_CR_TE_MTILE1_X2_CLRMSK 0xFFFC01FF
+/* X3 default: 0x0000000c
+ * X3 MacroTile boundary, left tile X for fourth column of macrotiles (16MT) - 32 pixels across tile
+ */
+#define ROGUE_CR_TE_MTILE1_X3_SHIFT 0
+#define ROGUE_CR_TE_MTILE1_X3_CLRMSK 0xFFFFFE00
+
+/* MacroTile Boundaries Y Plane. */
+/* Register ROGUE_CR_TE_MTILE2 */
+#define ROGUE_CR_TE_MTILE2 0x0C10
+#define ROGUE_CR_TE_MTILE2_MASKFULL 0x0000000007FFFFFFull
+/* Y1 default: 0x00000004
+ * Y1 MacroTile boundary, top tile Y for second column of macrotiles (16MT mode) - 32 pixels tile
+ * height
+ */
+#define ROGUE_CR_TE_MTILE2_Y1_SHIFT 18
+#define ROGUE_CR_TE_MTILE2_Y1_CLRMSK 0xF803FFFF
+/* Y2 default: 0x00000008
+ * Y2 MacroTile boundary, top tile Y for third (16MT) column of macrotiles - 32 pixels tile height
+ */
+#define ROGUE_CR_TE_MTILE2_Y2_SHIFT 9
+#define ROGUE_CR_TE_MTILE2_Y2_CLRMSK 0xFFFC01FF
+/* Y3 default: 0x0000000c
+ * Y3 MacroTile boundary, top tile Y for fourth column of macrotiles (16MT) - 32 pixels tile height
+ */
+#define ROGUE_CR_TE_MTILE2_Y3_SHIFT 0
+#define ROGUE_CR_TE_MTILE2_Y3_CLRMSK 0xFFFFFE00
+
+/*
+ * In order to perform the tiling operation and generate the display list the maximum screen size
+ * must be configured in terms of the number of tiles in X & Y axis.
+ */
+
+/* Register ROGUE_CR_TE_SCREEN */
+#define ROGUE_CR_TE_SCREEN 0x0C18U
+#define ROGUE_CR_TE_SCREEN_MASKFULL 0x00000000001FF1FFull
+/* YMAX default: 0x00000010
+ * Maximum Y tile address visible on screen, 32 pixel tile height, 16Kx16K max screen size
+ */
+#define ROGUE_CR_TE_SCREEN_YMAX_SHIFT 12
+#define ROGUE_CR_TE_SCREEN_YMAX_CLRMSK 0xFFE00FFF
+/* XMAX default: 0x00000010
+ * Maximum X tile address visible on screen, 32 pixel tile width, 16Kx16K max screen size
+ */
+#define ROGUE_CR_TE_SCREEN_XMAX_SHIFT 0
+#define ROGUE_CR_TE_SCREEN_XMAX_CLRMSK 0xFFFFFE00
+
+/*
+ * In order to perform the tiling operation and generate the display list the maximum screen size
+ * must be configured in terms of the number of pixels in X & Y axis since this may not be the same
+ * as the number of tiles defined in the RGX_CR_TE_SCREEN register.
+ */
+/* Register ROGUE_CR_PPP_SCREEN */
+#define ROGUE_CR_PPP_SCREEN 0x0C98
+#define ROGUE_CR_PPP_SCREEN_MASKFULL 0x000000007FFF7FFFull
+/* PIXYMAX
+ * Screen height in pixels. (16K x 16K max screen size)
+ */
+#define ROGUE_CR_PPP_SCREEN_PIXYMAX_SHIFT 16
+#define ROGUE_CR_PPP_SCREEN_PIXYMAX_CLRMSK 0x8000FFFF
+/* PIXXMAX
+ * Screen width in pixels. (16K x 16K max screen size)
+ */
+#define ROGUE_CR_PPP_SCREEN_PIXXMAX_SHIFT 0
+#define ROGUE_CR_PPP_SCREEN_PIXXMAX_CLRMSK 0xFFFF8000
+
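+/*
+ * Illustrative sketch only: deriving the tile-based and pixel-based screen
+ * limits for a 1920x1080 render target with 32x32 pixel tiles, per the two
+ * comments above. Treating the maxima as inclusive (hence the "- 1" terms) is
+ * an assumption here, as are the variable names.
+ *
+ *   u32 w = 1920, h = 1080;
+ *   u32 tiles_x = DIV_ROUND_UP(w, 32), tiles_y = DIV_ROUND_UP(h, 32);
+ *
+ *   u32 te_screen = ((tiles_x - 1) << ROGUE_CR_TE_SCREEN_XMAX_SHIFT) |
+ *                   ((tiles_y - 1) << ROGUE_CR_TE_SCREEN_YMAX_SHIFT);
+ *   u32 ppp_screen = ((w - 1) << ROGUE_CR_PPP_SCREEN_PIXXMAX_SHIFT) |
+ *                    ((h - 1) << ROGUE_CR_PPP_SCREEN_PIXYMAX_SHIFT);
+ */
+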
+/* Register ROGUE_CR_ISP_MTILE_SIZE */
+#define ROGUE_CR_ISP_MTILE_SIZE 0x0F18
+#define ROGUE_CR_ISP_MTILE_SIZE_MASKFULL 0x0000000003FF03FFull
+/* X
+ * Macrotile width, in tiles. A value of zero corresponds to the maximum size
+ */
+#define ROGUE_CR_ISP_MTILE_SIZE_X_SHIFT 16
+#define ROGUE_CR_ISP_MTILE_SIZE_X_CLRMSK 0xFC00FFFF
+#define ROGUE_CR_ISP_MTILE_SIZE_X_ALIGNSHIFT 0
+#define ROGUE_CR_ISP_MTILE_SIZE_X_ALIGNSIZE 1
+/* Y
+ * Macrotile height, in tiles. A value of zero corresponds to the maximum size
+ */
+#define ROGUE_CR_ISP_MTILE_SIZE_Y_SHIFT 0
+#define ROGUE_CR_ISP_MTILE_SIZE_Y_CLRMSK 0xFFFFFC00
+#define ROGUE_CR_ISP_MTILE_SIZE_Y_ALIGNSHIFT 0
+#define ROGUE_CR_ISP_MTILE_SIZE_Y_ALIGNSIZE 1
+
+/* clang-format on */
+
+#endif /* PVR_ROGUE_CR_DEFS_CLIENT_H */
diff --git a/drivers/gpu/drm/imagination/pvr_rogue_defs.h b/drivers/gpu/drm/imagination/pvr_rogue_defs.h
new file mode 100644
index 000000000000..b0662656444b
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_rogue_defs.h
@@ -0,0 +1,179 @@
+/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#ifndef PVR_ROGUE_DEFS_H
+#define PVR_ROGUE_DEFS_H
+
+#include "pvr_rogue_cr_defs.h"
+
+#include <linux/bits.h>
+
+/*
+ ******************************************************************************
+ * ROGUE Defines
+ ******************************************************************************
+ */
+
+#define ROGUE_FW_MAX_NUM_OS (8U)
+#define ROGUE_FW_HOST_OS (0U)
+#define ROGUE_FW_GUEST_OSID_START (1U)
+
+#define ROGUE_FW_THREAD_0 (0U)
+#define ROGUE_FW_THREAD_1 (1U)
+
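+/*
+ * Presumably converts a cache line size reported in bits to bytes (e.g. an
+ * input of 512 yields 64); non-positive inputs yield 0. The bits-to-bytes
+ * interpretation is an assumption based on the divide-by-8 below.
+ */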
+#define GET_ROGUE_CACHE_LINE_SIZE(x) ((((s32)(x)) > 0) ? ((x) / 8) : (0))
+
+#define MAX_HW_GEOM_FRAG_CONTEXTS 2U
+
+#define ROGUE_CR_CLK_CTRL_ALL_ON \
+	(0x5555555555555555ull & ROGUE_CR_CLK_CTRL_MASKFULL)
+#define ROGUE_CR_CLK_CTRL_ALL_AUTO \
+	(0xaaaaaaaaaaaaaaaaull & ROGUE_CR_CLK_CTRL_MASKFULL)
+#define ROGUE_CR_CLK_CTRL2_ALL_ON \
+	(0x5555555555555555ull & ROGUE_CR_CLK_CTRL2_MASKFULL)
+#define ROGUE_CR_CLK_CTRL2_ALL_AUTO \
+	(0xaaaaaaaaaaaaaaaaull & ROGUE_CR_CLK_CTRL2_MASKFULL)
+
+#define ROGUE_CR_SOFT_RESET_DUST_n_CORE_EN    \
+	(ROGUE_CR_SOFT_RESET_DUST_A_CORE_EN | \
+	 ROGUE_CR_SOFT_RESET_DUST_B_CORE_EN | \
+	 ROGUE_CR_SOFT_RESET_DUST_C_CORE_EN | \
+	 ROGUE_CR_SOFT_RESET_DUST_D_CORE_EN | \
+	 ROGUE_CR_SOFT_RESET_DUST_E_CORE_EN | \
+	 ROGUE_CR_SOFT_RESET_DUST_F_CORE_EN | \
+	 ROGUE_CR_SOFT_RESET_DUST_G_CORE_EN | \
+	 ROGUE_CR_SOFT_RESET_DUST_H_CORE_EN)
+
+/* SOFT_RESET Rascal and DUSTs bits */
+#define ROGUE_CR_SOFT_RESET_RASCALDUSTS_EN    \
+	(ROGUE_CR_SOFT_RESET_RASCAL_CORE_EN | \
+	 ROGUE_CR_SOFT_RESET_DUST_n_CORE_EN)
+
+/* SOFT_RESET steps as defined in the TRM */
+#define ROGUE_S7_SOFT_RESET_DUSTS (ROGUE_CR_SOFT_RESET_DUST_n_CORE_EN)
+
+#define ROGUE_S7_SOFT_RESET_JONES                                 \
+	(ROGUE_CR_SOFT_RESET_PM_EN | ROGUE_CR_SOFT_RESET_VDM_EN | \
+	 ROGUE_CR_SOFT_RESET_ISP_EN)
+
+#define ROGUE_S7_SOFT_RESET_JONES_ALL                             \
+	(ROGUE_S7_SOFT_RESET_JONES | ROGUE_CR_SOFT_RESET_BIF_EN | \
+	 ROGUE_CR_SOFT_RESET_SLC_EN | ROGUE_CR_SOFT_RESET_GARTEN_EN)
+
+#define ROGUE_S7_SOFT_RESET2                                                  \
+	(ROGUE_CR_SOFT_RESET2_BLACKPEARL_EN | ROGUE_CR_SOFT_RESET2_PIXEL_EN | \
+	 ROGUE_CR_SOFT_RESET2_CDM_EN | ROGUE_CR_SOFT_RESET2_VERTEX_EN)
+
+#define ROGUE_BIF_PM_PHYSICAL_PAGE_ALIGNSHIFT (12U)
+#define ROGUE_BIF_PM_PHYSICAL_PAGE_SIZE \
+	BIT(ROGUE_BIF_PM_PHYSICAL_PAGE_ALIGNSHIFT)
+
+#define ROGUE_BIF_PM_VIRTUAL_PAGE_ALIGNSHIFT (14U)
+#define ROGUE_BIF_PM_VIRTUAL_PAGE_SIZE BIT(ROGUE_BIF_PM_VIRTUAL_PAGE_ALIGNSHIFT)
+
+#define ROGUE_BIF_PM_FREELIST_BASE_ADDR_ALIGNSIZE (16U)
+
+/*
+ * To get the number of required Dusts, divide the number of
+ * clusters by 2 and round up
+ */
+#define ROGUE_REQ_NUM_DUSTS(CLUSTERS) (((CLUSTERS) + 1U) / 2U)
+
+/*
+ * To get the number of required Bernado/Phantom(s), divide
+ * the number of clusters by 4 and round up
+ */
+#define ROGUE_REQ_NUM_PHANTOMS(CLUSTERS) (((CLUSTERS) + 3U) / 4U)
+#define ROGUE_REQ_NUM_BERNADOS(CLUSTERS) (((CLUSTERS) + 3U) / 4U)
+#define ROGUE_REQ_NUM_BLACKPEARLS(CLUSTERS) (((CLUSTERS) + 3U) / 4U)
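+
+/*
+ * For example, ROGUE_REQ_NUM_DUSTS(1) == 1, ROGUE_REQ_NUM_DUSTS(2) == 1 and
+ * ROGUE_REQ_NUM_DUSTS(3) == 2; likewise ROGUE_REQ_NUM_PHANTOMS(5) == 2.
+ */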
+
+/*
+ * FW MMU contexts
+ */
+#define MMU_CONTEXT_MAPPING_FWPRIV (0x0) /* FW code/private data */
+#define MMU_CONTEXT_MAPPING_FWIF (0x0) /* Host/FW data */
+
+/*
+ * Utility macros to calculate CAT_BASE register addresses
+ */
+#define BIF_CAT_BASEX(n)          \
+	(ROGUE_CR_BIF_CAT_BASE0 + \
+	 (n) * (ROGUE_CR_BIF_CAT_BASE1 - ROGUE_CR_BIF_CAT_BASE0))
+
+#define FWCORE_MEM_CAT_BASEX(n)                 \
+	(ROGUE_CR_FWCORE_MEM_CAT_BASE0 +        \
+	 (n) * (ROGUE_CR_FWCORE_MEM_CAT_BASE1 - \
+		ROGUE_CR_FWCORE_MEM_CAT_BASE0))
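+
+/*
+ * For example, BIF_CAT_BASEX(0) expands to ROGUE_CR_BIF_CAT_BASE0, and each
+ * subsequent index advances the address by the fixed spacing between
+ * consecutive CAT_BASE registers, so BIF_CAT_BASEX(2) addresses the third
+ * page catalogue base register.
+ */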
+
+/*
+ * FWCORE wrapper register defines
+ */
+#define FWCORE_ADDR_REMAP_CONFIG0_MMU_CONTEXT_SHIFT \
+	ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG0_CBASE_SHIFT
+#define FWCORE_ADDR_REMAP_CONFIG0_MMU_CONTEXT_CLRMSK \
+	ROGUE_CR_FWCORE_ADDR_REMAP_CONFIG0_CBASE_CLRMSK
+#define FWCORE_ADDR_REMAP_CONFIG0_SIZE_ALIGNSHIFT (12U)
+
+#define ROGUE_MAX_COMPUTE_SHARED_REGISTERS (2 * 1024)
+#define ROGUE_MAX_VERTEX_SHARED_REGISTERS 1024
+#define ROGUE_MAX_PIXEL_SHARED_REGISTERS 1024
+#define ROGUE_CSRM_LINE_SIZE_IN_DWORDS (64 * 4 * 4)
+
+#define ROGUE_CDMCTRL_USC_COMMON_SIZE_ALIGNSIZE 64
+#define ROGUE_CDMCTRL_USC_COMMON_SIZE_UPPER 256
+
+/*
+ * The maximum amount of local memory which can be allocated by a single kernel
+ * (in dwords/32-bit registers).
+ *
+ * ROGUE_CDMCTRL_USC_COMMON_SIZE_ALIGNSIZE is in bytes so we divide by four.
+ */
+#define ROGUE_MAX_PER_KERNEL_LOCAL_MEM_SIZE_REGS ((ROGUE_CDMCTRL_USC_COMMON_SIZE_ALIGNSIZE * \
+						   ROGUE_CDMCTRL_USC_COMMON_SIZE_UPPER) >> 2)
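+/* With the values above this evaluates to (64 * 256) / 4 = 4096 dwords. */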
+
+/*
+ ******************************************************************************
+ * WA HWBRNs
+ ******************************************************************************
+ */
+
+/* GPU CR timer tick in GPU cycles */
+#define ROGUE_CRTIME_TICK_IN_CYCLES (256U)
+
+/* For no-HW multicore builds, the maximum number of cores reported to the client */
+#define ROGUE_MULTICORE_MAX_NOHW_CORES (4U)
+
+/*
+ * If the size of the SLC is less than this value then the TPU bypasses the SLC.
+ */
+#define ROGUE_TPU_CACHED_SLC_SIZE_THRESHOLD (128U * 1024U)
+
+/*
+ * If the size of the SLC is bigger than this value then the TCU must not be
+ * bypassed in the SLC.
+ * In XE_MEMORY_HIERARCHY cores, the TCU is bypassed by default.
+ */
+#define ROGUE_TCU_CACHED_SLC_SIZE_THRESHOLD (32U * 1024U)
+
+/*
+ * Register used by the FW to track the current boot stage (not used in MIPS)
+ */
+#define ROGUE_FW_BOOT_STAGE_REGISTER (ROGUE_CR_POWER_ESTIMATE_RESULT)
+
+/*
+ * Virtualisation definitions
+ */
+#define ROGUE_VIRTUALISATION_REG_SIZE_PER_OS \
+	(ROGUE_CR_MTS_SCHEDULE1 - ROGUE_CR_MTS_SCHEDULE)
+
+/*
+ * Macro used to indicate which version of HWPerf is active
+ */
+#define ROGUE_FEATURE_HWPERF_ROGUE
+
+/*
+ * Maximum number of cores supported by TRP
+ */
+#define ROGUE_TRP_MAX_NUM_CORES (4U)
+
+#endif /* PVR_ROGUE_DEFS_H */
diff --git a/drivers/gpu/drm/imagination/pvr_rogue_fwif.h b/drivers/gpu/drm/imagination/pvr_rogue_fwif.h
new file mode 100644
index 000000000000..4f720de6d4ca
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_rogue_fwif.h
@@ -0,0 +1,2208 @@
+/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#ifndef PVR_ROGUE_FWIF_H
+#define PVR_ROGUE_FWIF_H
+
+#include <linux/bits.h>
+#include <linux/build_bug.h>
+#include <linux/compiler.h>
+#include <linux/kernel.h>
+#include <linux/types.h>
+
+#include "pvr_rogue_defs.h"
+#include "pvr_rogue_fwif_common.h"
+#include "pvr_rogue_fwif_shared.h"
+
+/*
+ ****************************************************************************
+ * Logging type
+ ****************************************************************************
+ */
+#define ROGUE_FWIF_LOG_TYPE_NONE 0x00000000U
+#define ROGUE_FWIF_LOG_TYPE_TRACE 0x00000001U
+#define ROGUE_FWIF_LOG_TYPE_GROUP_MAIN 0x00000002U
+#define ROGUE_FWIF_LOG_TYPE_GROUP_MTS 0x00000004U
+#define ROGUE_FWIF_LOG_TYPE_GROUP_CLEANUP 0x00000008U
+#define ROGUE_FWIF_LOG_TYPE_GROUP_CSW 0x00000010U
+#define ROGUE_FWIF_LOG_TYPE_GROUP_BIF 0x00000020U
+#define ROGUE_FWIF_LOG_TYPE_GROUP_PM 0x00000040U
+#define ROGUE_FWIF_LOG_TYPE_GROUP_RTD 0x00000080U
+#define ROGUE_FWIF_LOG_TYPE_GROUP_SPM 0x00000100U
+#define ROGUE_FWIF_LOG_TYPE_GROUP_POW 0x00000200U
+#define ROGUE_FWIF_LOG_TYPE_GROUP_HWR 0x00000400U
+#define ROGUE_FWIF_LOG_TYPE_GROUP_HWP 0x00000800U
+#define ROGUE_FWIF_LOG_TYPE_GROUP_RPM 0x00001000U
+#define ROGUE_FWIF_LOG_TYPE_GROUP_DMA 0x00002000U
+#define ROGUE_FWIF_LOG_TYPE_GROUP_MISC 0x00004000U
+#define ROGUE_FWIF_LOG_TYPE_GROUP_DEBUG 0x80000000U
+#define ROGUE_FWIF_LOG_TYPE_GROUP_MASK 0x80007FFEU
+#define ROGUE_FWIF_LOG_TYPE_MASK 0x80007FFFU
+
+/* String used in pvrdebug -h output */
+#define ROGUE_FWIF_LOG_GROUPS_STRING_LIST \
+	"main,mts,cleanup,csw,bif,pm,rtd,spm,pow,hwr,hwp,rpm,dma,misc,debug"
+
+/* Table entry to map log group strings to log type value */
+struct rogue_fwif_log_group_map_entry {
+	const char *log_group_name;
+	u32 log_group_type;
+};
+
+/*
+ * Lookup table mapping log group names to log type values, for use wherever
+ * such a mapping is needed. Keep log group names short, no more than 20 chars.
+ */
+static const struct rogue_fwif_log_group_map_entry rogue_fwif_log_group_name_value_map[] = {
+	{"none", ROGUE_FWIF_LOG_TYPE_NONE},
+	{"main", ROGUE_FWIF_LOG_TYPE_GROUP_MAIN},
+	{"mts", ROGUE_FWIF_LOG_TYPE_GROUP_MTS},
+	{"cleanup", ROGUE_FWIF_LOG_TYPE_GROUP_CLEANUP},
+	{"csw", ROGUE_FWIF_LOG_TYPE_GROUP_CSW},
+	{"bif", ROGUE_FWIF_LOG_TYPE_GROUP_BIF},
+	{"pm", ROGUE_FWIF_LOG_TYPE_GROUP_PM},
+	{"rtd", ROGUE_FWIF_LOG_TYPE_GROUP_RTD},
+	{"spm", ROGUE_FWIF_LOG_TYPE_GROUP_SPM},
+	{"pow", ROGUE_FWIF_LOG_TYPE_GROUP_POW},
+	{"hwr", ROGUE_FWIF_LOG_TYPE_GROUP_HWR},
+	{"hwp", ROGUE_FWIF_LOG_TYPE_GROUP_HWP},
+	{"rpm", ROGUE_FWIF_LOG_TYPE_GROUP_RPM},
+	{"dma", ROGUE_FWIF_LOG_TYPE_GROUP_DMA},
+	{"misc", ROGUE_FWIF_LOG_TYPE_GROUP_MISC},
+	{"debug", ROGUE_FWIF_LOG_TYPE_GROUP_DEBUG}
+};
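+
+/*
+ * Illustrative sketch only: building a log_type value that enables tracing
+ * for the "main" and "pow" groups using the table above. The helper below is
+ * an assumption made for the example, not part of this interface.
+ *
+ *   static u32 pvr_log_type_for_group(const char *name)
+ *   {
+ *           size_t i;
+ *
+ *           for (i = 0; i < ARRAY_SIZE(rogue_fwif_log_group_name_value_map); i++)
+ *                   if (!strcmp(name, rogue_fwif_log_group_name_value_map[i].log_group_name))
+ *                           return rogue_fwif_log_group_name_value_map[i].log_group_type;
+ *
+ *           return ROGUE_FWIF_LOG_TYPE_NONE;
+ *   }
+ *
+ *   u32 log_type = ROGUE_FWIF_LOG_TYPE_TRACE |
+ *                  pvr_log_type_for_group("main") |
+ *                  pvr_log_type_for_group("pow");
+ */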
+
+/*
+ ****************************************************************************
+ * ROGUE FW signature checks
+ ****************************************************************************
+ */
+#define ROGUE_FW_SIG_BUFFER_SIZE_MIN (8192)
+
+#define ROGUE_FWIF_TIMEDIFF_ID ((0x1UL << 28) | ROGUE_CR_TIMER)
+
+/*
+ ****************************************************************************
+ * Trace Buffer
+ ****************************************************************************
+ */
+
+/* Default size of ROGUE_FWIF_TRACEBUF_SPACE in DWords */
+#define ROGUE_FW_TRACE_BUF_DEFAULT_SIZE_IN_DWORDS 12000U
+#define ROGUE_FW_TRACE_BUFFER_ASSERT_SIZE 200U
+#define ROGUE_FW_THREAD_NUM 1U
+#define ROGUE_FW_THREAD_MAX 2U
+
+#define ROGUE_FW_POLL_TYPE_SET 0x80000000U
+
+struct rogue_fwif_file_info_buf {
+	char path[ROGUE_FW_TRACE_BUFFER_ASSERT_SIZE];
+	char info[ROGUE_FW_TRACE_BUFFER_ASSERT_SIZE];
+	u32 line_num;
+	u32 padding;
+} __aligned(8);
+
+struct rogue_fwif_tracebuf_space {
+	u32 trace_pointer;
+
+	u32 trace_buffer_fw_addr;
+
+	/* To be used by host when reading from trace buffer */
+	u32 *trace_buffer;
+
+	struct rogue_fwif_file_info_buf assert_buf;
+} __aligned(8);
+
+/* Total number of FW fault logs stored */
+#define ROGUE_FWIF_FWFAULTINFO_MAX (8U)
+
+struct rogue_fw_fault_info {
+	aligned_u64 cr_timer;
+	aligned_u64 os_timer;
+
+	u32 data __aligned(8);
+	u32 reserved;
+	struct rogue_fwif_file_info_buf fault_buf;
+} __aligned(8);
+
+enum rogue_fwif_pow_state {
+	ROGUE_FWIF_POW_OFF, /* idle and ready to full power down */
+	ROGUE_FWIF_POW_ON, /* running HW commands */
+	ROGUE_FWIF_POW_FORCED_IDLE, /* forced idle */
+	ROGUE_FWIF_POW_IDLE, /* idle waiting for host handshake */
+};
+
+/* Firmware HWR states */
+/* The HW state is ok or locked up */
+#define ROGUE_FWIF_HWR_HARDWARE_OK BIT(0)
+/* Tells if a HWR reset is in progress */
+#define ROGUE_FWIF_HWR_RESET_IN_PROGRESS BIT(1)
+/* A DM unrelated lockup has been detected */
+#define ROGUE_FWIF_HWR_GENERAL_LOCKUP BIT(3)
+/* At least one DM is running without being close to a lockup */
+#define ROGUE_FWIF_HWR_DM_RUNNING_OK BIT(4)
+/* At least one DM is close to lockup */
+#define ROGUE_FWIF_HWR_DM_STALLING BIT(5)
+/* The FW has faulted and needs to restart */
+#define ROGUE_FWIF_HWR_FW_FAULT BIT(6)
+/* The FW has requested the host to restart it */
+#define ROGUE_FWIF_HWR_RESTART_REQUESTED BIT(7)
+
+#define ROGUE_FWIF_PHR_STATE_SHIFT (8U)
+/* The FW has requested the host to restart it, per PHR configuration */
+#define ROGUE_FWIF_PHR_RESTART_REQUESTED ((1) << ROGUE_FWIF_PHR_STATE_SHIFT)
+/* A PHR triggered GPU reset has just finished */
+#define ROGUE_FWIF_PHR_RESTART_FINISHED ((2) << ROGUE_FWIF_PHR_STATE_SHIFT)
+#define ROGUE_FWIF_PHR_RESTART_MASK \
+	(ROGUE_FWIF_PHR_RESTART_REQUESTED | ROGUE_FWIF_PHR_RESTART_FINISHED)
+
+#define ROGUE_FWIF_PHR_MODE_OFF (0UL)
+#define ROGUE_FWIF_PHR_MODE_RD_RESET (1UL)
+#define ROGUE_FWIF_PHR_MODE_FULL_RESET (2UL)
+
+/* Firmware per-DM HWR states */
+/* DM is working if all flags are cleared */
+#define ROGUE_FWIF_DM_STATE_WORKING (0)
+/* DM is idle and ready for HWR */
+#define ROGUE_FWIF_DM_STATE_READY_FOR_HWR BIT(0)
+/* DM needs to skip to next cmd before resuming processing */
+#define ROGUE_FWIF_DM_STATE_NEEDS_SKIP BIT(2)
+/* DM needs partial render cleanup before resuming processing */
+#define ROGUE_FWIF_DM_STATE_NEEDS_PR_CLEANUP BIT(3)
+/* DM needs to increment Recovery Count once fully recovered */
+#define ROGUE_FWIF_DM_STATE_NEEDS_TRACE_CLEAR BIT(4)
+/* DM was identified as locking up and causing HWR */
+#define ROGUE_FWIF_DM_STATE_GUILTY_LOCKUP BIT(5)
+/* DM was innocently affected by another lockup which caused HWR */
+#define ROGUE_FWIF_DM_STATE_INNOCENT_LOCKUP BIT(6)
+/* DM was identified as over-running and causing HWR */
+#define ROGUE_FWIF_DM_STATE_GUILTY_OVERRUNING BIT(7)
+/* DM was innocently affected by another DM over-running which caused HWR */
+#define ROGUE_FWIF_DM_STATE_INNOCENT_OVERRUNING BIT(8)
+/* DM was forced into HWR as it delayed more important workloads */
+#define ROGUE_FWIF_DM_STATE_HARD_CONTEXT_SWITCH BIT(9)
+/* DM was forced into HWR due to an uncorrected GPU ECC error */
+#define ROGUE_FWIF_DM_STATE_GPU_ECC_HWR BIT(10)
+
+/* Firmware's connection state */
+enum rogue_fwif_connection_fw_state {
+	/* Firmware is offline */
+	ROGUE_FW_CONNECTION_FW_OFFLINE = 0,
+	/* Firmware is initialised */
+	ROGUE_FW_CONNECTION_FW_READY,
+	/* Firmware connection is fully established */
+	ROGUE_FW_CONNECTION_FW_ACTIVE,
+	/* Firmware is clearing up connection data */
+	ROGUE_FW_CONNECTION_FW_OFFLOADING,
+	ROGUE_FW_CONNECTION_FW_STATE_COUNT
+};
+
+/* OS' connection state */
+enum rogue_fwif_connection_os_state {
+	/* OS is offline */
+	ROGUE_FW_CONNECTION_OS_OFFLINE = 0,
+	/* OS's KM driver is set up and waiting */
+	ROGUE_FW_CONNECTION_OS_READY,
+	/* OS connection is fully established */
+	ROGUE_FW_CONNECTION_OS_ACTIVE,
+	ROGUE_FW_CONNECTION_OS_STATE_COUNT
+};
+
+struct rogue_fwif_os_runtime_flags {
+	unsigned int os_state : 3;
+	unsigned int fl_ok : 1;
+	unsigned int fl_grow_pending : 1;
+	unsigned int isolated_os : 1;
+	unsigned int reserved : 26;
+};
+
+#define PVR_SLR_LOG_ENTRIES 10
+/* MAX_CLIENT_CCB_NAME not visible to this header */
+#define PVR_SLR_LOG_STRLEN 30
+
+struct rogue_fwif_slr_entry {
+	aligned_u64 timestamp;
+	u32 fw_ctx_addr;
+	u32 num_ufos;
+	char ccb_name[PVR_SLR_LOG_STRLEN];
+	char padding[2];
+} __aligned(8);
+
+#define MAX_THREAD_NUM 2
+
+/* firmware trace control data */
+struct rogue_fwif_tracebuf {
+	u32 log_type;
+	struct rogue_fwif_tracebuf_space tracebuf[MAX_THREAD_NUM];
+	/*
+	 * Member initialised only when the trace buffer is actually allocated
+	 * (in ROGUETraceBufferInitOnDemandResources)
+	 */
+	u32 tracebuf_size_in_dwords;
+	/* Compatibility and other flags */
+	u32 tracebuf_flags;
+} __aligned(8);
+
+/* firmware system data shared with the Host driver */
+struct rogue_fwif_sysdata {
+	/* Configuration flags from host */
+	u32 config_flags;
+	/* Extended configuration flags from host */
+	u32 config_flags_ext;
+	enum rogue_fwif_pow_state pow_state;
+	u32 hw_perf_ridx;
+	u32 hw_perf_widx;
+	u32 hw_perf_wrap_count;
+	/* Constant after setup, needed in FW */
+	u32 hw_perf_size;
+	/* The number of times the FW drops a packet due to buffer full */
+	u32 hw_perf_drop_count;
+
+	/*
+	 * hw_perf_ut, first_drop_ordinal and last_drop_ordinal are only valid
+	 * when the FW is built with ROGUE_HWPERF_UTILIZATION &
+	 * ROGUE_HWPERF_DROP_TRACKING defined in rogue_fw_hwperf.c
+	 */
+	/* Buffer utilisation, high watermark of bytes in use */
+	u32 hw_perf_ut;
+	/* The ordinal of the first packet the FW dropped */
+	u32 first_drop_ordinal;
+	/* The ordinal of the last packet the FW dropped */
+	u32 last_drop_ordinal;
+	/* State flags for each Operating System mirrored from Fw coremem */
+	struct rogue_fwif_os_runtime_flags
+		os_runtime_flags_mirror[ROGUE_FW_MAX_NUM_OS];
+
+	struct rogue_fw_fault_info fault_info[ROGUE_FWIF_FWFAULTINFO_MAX];
+	u32 fw_faults;
+	u32 cr_poll_addr[MAX_THREAD_NUM];
+	u32 cr_poll_mask[MAX_THREAD_NUM];
+	u32 cr_poll_count[MAX_THREAD_NUM];
+	aligned_u64 start_idle_time;
+
+#if defined(SUPPORT_ROGUE_FW_STATS_FRAMEWORK)
+#	define ROGUE_FWIF_STATS_FRAMEWORK_LINESIZE (8)
+#	define ROGUE_FWIF_STATS_FRAMEWORK_MAX \
+		(2048 * ROGUE_FWIF_STATS_FRAMEWORK_LINESIZE)
+	u32 fw_stats_buf[ROGUE_FWIF_STATS_FRAMEWORK_MAX] __aligned(8);
+#endif
+	u32 hwr_state_flags;
+	u32 hwr_recovery_flags[PVR_FWIF_DM_MAX];
+	/* Compatibility and other flags */
+	u32 fw_sys_data_flags;
+	/* Identify whether MC config is P-P or P-S */
+	u32 mc_config;
+} __aligned(8);
+
+/* per-os firmware shared data */
+struct rogue_fwif_osdata {
+	/* Configuration flags from an OS */
+	u32 fw_os_config_flags;
+	/* Markers to signal that the host should perform a full sync check */
+	u32 fw_sync_check_mark;
+	u32 host_sync_check_mark;
+
+	u32 forced_updates_requested;
+	u8 slr_log_wp;
+	struct rogue_fwif_slr_entry slr_log_first;
+	struct rogue_fwif_slr_entry slr_log[PVR_SLR_LOG_ENTRIES];
+	aligned_u64 last_forced_update_time;
+
+	/* Interrupt count from Threads */
+	u32 interrupt_count[MAX_THREAD_NUM];
+	u32 kccb_cmds_executed;
+	u32 power_sync_fw_addr;
+	/* Compatibility and other flags */
+	u32 fw_os_data_flags;
+	u32 padding;
+} __aligned(8);
+
+/* Firmware trace time-stamp field breakup */
+
+/* ROGUE_CR_TIMER register read (48 bits) value*/
+#define ROGUE_FWT_TIMESTAMP_TIME_SHIFT (0U)
+#define ROGUE_FWT_TIMESTAMP_TIME_CLRMSK (0xFFFF000000000000ull)
+
+/* Extra debug-info (16 bits) */
+#define ROGUE_FWT_TIMESTAMP_DEBUG_INFO_SHIFT (48U)
+#define ROGUE_FWT_TIMESTAMP_DEBUG_INFO_CLRMSK ~ROGUE_FWT_TIMESTAMP_TIME_CLRMSK
+
+/* Debug-info sub-fields */
+/*
+ * Bit 0: ROGUE_CR_EVENT_STATUS_MMU_PAGE_FAULT bit from ROGUE_CR_EVENT_STATUS
+ * register
+ */
+#define ROGUE_FWT_DEBUG_INFO_MMU_PAGE_FAULT_SHIFT (0U)
+#define ROGUE_FWT_DEBUG_INFO_MMU_PAGE_FAULT_SET \
+	BIT(ROGUE_FWT_DEBUG_INFO_MMU_PAGE_FAULT_SHIFT)
+
+/* Bit 1: ROGUE_CR_BIF_MMU_ENTRY_PENDING bit from ROGUE_CR_BIF_MMU_ENTRY register */
+#define ROGUE_FWT_DEBUG_INFO_MMU_ENTRY_PENDING_SHIFT (1U)
+#define ROGUE_FWT_DEBUG_INFO_MMU_ENTRY_PENDING_SET \
+	BIT(ROGUE_FWT_DEBUG_INFO_MMU_ENTRY_PENDING_SHIFT)
+
+/* Bit 2: ROGUE_CR_SLAVE_EVENT register is non-zero */
+#define ROGUE_FWT_DEBUG_INFO_SLAVE_EVENTS_SHIFT (2U)
+#define ROGUE_FWT_DEBUG_INFO_SLAVE_EVENTS_SET \
+	BIT(ROGUE_FWT_DEBUG_INFO_SLAVE_EVENTS_SHIFT)
+
+/* Bit 3-15: Unused bits */
+
+#define ROGUE_FWT_DEBUG_INFO_STR_MAXLEN 64
+#define ROGUE_FWT_DEBUG_INFO_STR_PREPEND " (debug info: "
+#define ROGUE_FWT_DEBUG_INFO_STR_APPEND ")"
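+
+/*
+ * Illustrative sketch only: splitting a raw firmware trace time-stamp into
+ * its 48-bit CR timer value and 16-bit debug-info field using the masks
+ * above. The variable names are assumptions made for the example.
+ *
+ *   u64 time = stamp & ~ROGUE_FWT_TIMESTAMP_TIME_CLRMSK;
+ *   u16 dbg_info = stamp >> ROGUE_FWT_TIMESTAMP_DEBUG_INFO_SHIFT;
+ *   bool mmu_fault = dbg_info & ROGUE_FWT_DEBUG_INFO_MMU_PAGE_FAULT_SET;
+ */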
+
+/*
+ ******************************************************************************
+ * HWR Data
+ ******************************************************************************
+ */
+enum rogue_hwrtype {
+	ROGUE_HWRTYPE_UNKNOWNFAILURE = 0,
+	ROGUE_HWRTYPE_OVERRUN = 1,
+	ROGUE_HWRTYPE_POLLFAILURE = 2,
+	ROGUE_HWRTYPE_BIF0FAULT = 3,
+	ROGUE_HWRTYPE_BIF1FAULT = 4,
+	ROGUE_HWRTYPE_TEXASBIF0FAULT = 5,
+	ROGUE_HWRTYPE_MMUFAULT = 6,
+	ROGUE_HWRTYPE_MMUMETAFAULT = 7,
+	ROGUE_HWRTYPE_MIPSTLBFAULT = 8,
+	ROGUE_HWRTYPE_ECCFAULT = 9,
+	ROGUE_HWRTYPE_MMURISCVFAULT = 10,
+};
+
+#define ROGUE_FWIF_HWRTYPE_BIF_BANK_GET(hwr_type) \
+	(((hwr_type) == ROGUE_HWRTYPE_BIF0FAULT) ? 0 : 1)
+
+#define ROGUE_FWIF_HWRTYPE_PAGE_FAULT_GET(hwr_type)       \
+	((((hwr_type) == ROGUE_HWRTYPE_BIF0FAULT) ||      \
+	  ((hwr_type) == ROGUE_HWRTYPE_BIF1FAULT) ||      \
+	  ((hwr_type) == ROGUE_HWRTYPE_TEXASBIF0FAULT) || \
+	  ((hwr_type) == ROGUE_HWRTYPE_MMUFAULT) ||       \
+	  ((hwr_type) == ROGUE_HWRTYPE_MMUMETAFAULT) ||   \
+	  ((hwr_type) == ROGUE_HWRTYPE_MIPSTLBFAULT) ||   \
+	  ((hwr_type) == ROGUE_HWRTYPE_MMURISCVFAULT))    \
+		 ? true                                   \
+		 : false)
+
+struct rogue_bifinfo {
+	aligned_u64 bif_req_status;
+	aligned_u64 bif_mmu_status;
+	aligned_u64 pc_address; /* phys address of the page catalogue */
+	aligned_u64 reserved;
+};
+
+struct rogue_eccinfo {
+	u32 fault_gpu;
+};
+
+struct rogue_mmuinfo {
+	aligned_u64 mmu_status[2];
+	aligned_u64 pc_address; /* phys address of the page catalogue */
+	aligned_u64 reserved;
+};
+
+struct rogue_pollinfo {
+	u32 thread_num;
+	u32 cr_poll_addr;
+	u32 cr_poll_mask;
+	u32 cr_poll_last_value;
+	aligned_u64 reserved;
+} __aligned(8);
+
+struct rogue_tlbinfo {
+	u32 bad_addr;
+	u32 entry_lo;
+};
+
+struct rogue_hwrinfo {
+	union {
+		struct rogue_bifinfo bif_info;
+		struct rogue_mmuinfo mmu_info;
+		struct rogue_pollinfo poll_info;
+		struct rogue_tlbinfo tlb_info;
+		struct rogue_eccinfo ecc_info;
+	} hwr_data;
+
+	aligned_u64 cr_timer;
+	aligned_u64 os_timer;
+	u32 frame_num;
+	u32 pid;
+	u32 active_hwrt_data;
+	u32 hwr_number;
+	u32 event_status;
+	u32 hwr_recovery_flags;
+	enum rogue_hwrtype hwr_type;
+	u32 dm;
+	u32 core_id;
+	aligned_u64 cr_time_of_kick;
+	aligned_u64 cr_time_hw_reset_start;
+	aligned_u64 cr_time_hw_reset_finish;
+	aligned_u64 cr_time_freelist_ready;
+	aligned_u64 reserved[2];
+} __aligned(8);
+
+/* Number of first HWR logs recorded (never overwritten by newer logs) */
+#define ROGUE_FWIF_HWINFO_MAX_FIRST 8U
+/* Number of latest HWR logs (older logs are overwritten by newer logs) */
+#define ROGUE_FWIF_HWINFO_MAX_LAST 8U
+/* Total number of HWR logs stored in a buffer */
+#define ROGUE_FWIF_HWINFO_MAX \
+	(ROGUE_FWIF_HWINFO_MAX_FIRST + ROGUE_FWIF_HWINFO_MAX_LAST)
+/* Index of the last log in the HWR log buffer */
+#define ROGUE_FWIF_HWINFO_LAST_INDEX (ROGUE_FWIF_HWINFO_MAX - 1U)
+
+struct rogue_fwif_hwrinfobuf {
+	struct rogue_hwrinfo hwr_info[ROGUE_FWIF_HWINFO_MAX];
+	u32 hwr_counter;
+	u32 write_index;
+	u32 dd_req_count;
+	u32 hwr_info_buf_flags; /* Compatibility and other flags */
+	u32 hwr_dm_locked_up_count[PVR_FWIF_DM_MAX];
+	u32 hwr_dm_overran_count[PVR_FWIF_DM_MAX];
+	u32 hwr_dm_recovered_count[PVR_FWIF_DM_MAX];
+	u32 hwr_dm_false_detect_count[PVR_FWIF_DM_MAX];
+} __aligned(8);
+
+#define ROGUE_FWIF_CTXSWITCH_PROFILE_FAST_EN (1)
+#define ROGUE_FWIF_CTXSWITCH_PROFILE_MEDIUM_EN (2)
+#define ROGUE_FWIF_CTXSWITCH_PROFILE_SLOW_EN (3)
+#define ROGUE_FWIF_CTXSWITCH_PROFILE_NODELAY_EN (4)
+
+#define ROGUE_FWIF_CDM_ARBITRATION_TASK_DEMAND_EN (1)
+#define ROGUE_FWIF_CDM_ARBITRATION_ROUND_ROBIN_EN (2)
+
+#define ROGUE_FWIF_ISP_SCHEDMODE_VER1_IPP (1)
+#define ROGUE_FWIF_ISP_SCHEDMODE_VER2_ISP (2)
+/*
+ ******************************************************************************
+ * ROGUE firmware Init Config Data
+ ******************************************************************************
+ */
+
+/* Flag definitions affecting the firmware globally */
+#define ROGUE_FWIF_INICFG_CTXSWITCH_MODE_RAND BIT(0)
+#define ROGUE_FWIF_INICFG_CTXSWITCH_SRESET_EN BIT(1)
+#define ROGUE_FWIF_INICFG_HWPERF_EN BIT(2)
+#define ROGUE_FWIF_INICFG_DM_KILL_MODE_RAND_EN BIT(3)
+#define ROGUE_FWIF_INICFG_POW_RASCALDUST BIT(4)
+/* Bit 5 is reserved. */
+#define ROGUE_FWIF_INICFG_FBCDC_V3_1_EN BIT(6)
+#define ROGUE_FWIF_INICFG_CHECK_MLIST_EN BIT(7)
+#define ROGUE_FWIF_INICFG_DISABLE_CLKGATING_EN BIT(8)
+/* Bit 9 is reserved. */
+/* Bit 10 is reserved. */
+/* Bit 11 is reserved. */
+#define ROGUE_FWIF_INICFG_REGCONFIG_EN BIT(12)
+#define ROGUE_FWIF_INICFG_ASSERT_ON_OUTOFMEMORY BIT(13)
+#define ROGUE_FWIF_INICFG_HWP_DISABLE_FILTER BIT(14)
+/* Bit 15 is reserved. */
+#define ROGUE_FWIF_INICFG_CTXSWITCH_PROFILE_SHIFT (16)
+#define ROGUE_FWIF_INICFG_CTXSWITCH_PROFILE_FAST \
+	(ROGUE_FWIF_CTXSWITCH_PROFILE_FAST_EN    \
+	 << ROGUE_FWIF_INICFG_CTXSWITCH_PROFILE_SHIFT)
+#define ROGUE_FWIF_INICFG_CTXSWITCH_PROFILE_MEDIUM \
+	(ROGUE_FWIF_CTXSWITCH_PROFILE_MEDIUM_EN    \
+	 << ROGUE_FWIF_INICFG_CTXSWITCH_PROFILE_SHIFT)
+#define ROGUE_FWIF_INICFG_CTXSWITCH_PROFILE_SLOW \
+	(ROGUE_FWIF_CTXSWITCH_PROFILE_SLOW_EN    \
+	 << ROGUE_FWIF_INICFG_CTXSWITCH_PROFILE_SHIFT)
+#define ROGUE_FWIF_INICFG_CTXSWITCH_PROFILE_NODELAY \
+	(ROGUE_FWIF_CTXSWITCH_PROFILE_NODELAY_EN    \
+	 << ROGUE_FWIF_INICFG_CTXSWITCH_PROFILE_SHIFT)
+#define ROGUE_FWIF_INICFG_CTXSWITCH_PROFILE_MASK \
+	(7 << ROGUE_FWIF_INICFG_CTXSWITCH_PROFILE_SHIFT)
+#define ROGUE_FWIF_INICFG_DISABLE_DM_OVERLAP BIT(19)
+#define ROGUE_FWIF_INICFG_ASSERT_ON_HWR_TRIGGER BIT(20)
+#define ROGUE_FWIF_INICFG_FABRIC_COHERENCY_ENABLED BIT(21)
+#define ROGUE_FWIF_INICFG_VALIDATE_IRQ BIT(22)
+#define ROGUE_FWIF_INICFG_DISABLE_PDP_EN BIT(23)
+#define ROGUE_FWIF_INICFG_SPU_POWER_STATE_MASK_CHANGE_EN BIT(24)
+#define ROGUE_FWIF_INICFG_WORKEST BIT(25)
+#define ROGUE_FWIF_INICFG_PDVFS BIT(26)
+#define ROGUE_FWIF_INICFG_CDM_ARBITRATION_SHIFT (27)
+#define ROGUE_FWIF_INICFG_CDM_ARBITRATION_TASK_DEMAND \
+	(ROGUE_FWIF_CDM_ARBITRATION_TASK_DEMAND_EN    \
+	 << ROGUE_FWIF_INICFG_CDM_ARBITRATION_SHIFT)
+#define ROGUE_FWIF_INICFG_CDM_ARBITRATION_ROUND_ROBIN \
+	(ROGUE_FWIF_CDM_ARBITRATION_ROUND_ROBIN_EN    \
+	 << ROGUE_FWIF_INICFG_CDM_ARBITRATION_SHIFT)
+#define ROGUE_FWIF_INICFG_CDM_ARBITRATION_MASK \
+	(3 << ROGUE_FWIF_INICFG_CDM_ARBITRATION_SHIFT)
+#define ROGUE_FWIF_INICFG_ISPSCHEDMODE_SHIFT (29)
+#define ROGUE_FWIF_INICFG_ISPSCHEDMODE_NONE (0)
+#define ROGUE_FWIF_INICFG_ISPSCHEDMODE_VER1_IPP \
+	(ROGUE_FWIF_ISP_SCHEDMODE_VER1_IPP      \
+	 << ROGUE_FWIF_INICFG_ISPSCHEDMODE_SHIFT)
+#define ROGUE_FWIF_INICFG_ISPSCHEDMODE_VER2_ISP \
+	(ROGUE_FWIF_ISP_SCHEDMODE_VER2_ISP      \
+	 << ROGUE_FWIF_INICFG_ISPSCHEDMODE_SHIFT)
+#define ROGUE_FWIF_INICFG_ISPSCHEDMODE_MASK        \
+	(ROGUE_FWIF_INICFG_ISPSCHEDMODE_VER1_IPP | \
+	 ROGUE_FWIF_INICFG_ISPSCHEDMODE_VER2_ISP)
+#define ROGUE_FWIF_INICFG_VALIDATE_SOCUSC_TIMER BIT(31)
+
+#define ROGUE_FWIF_INICFG_ALL (0xFFFFFFFFU)
+
+/* Extended Flag definitions affecting the firmware globally */
+#define ROGUE_FWIF_INICFG_EXT_TFBC_CONTROL_SHIFT (0)
+/* [7]   YUV10 override
+ * [6:4] Quality
+ * [3]   Quality enable
+ * [2:1] Compression scheme
+ * [0]   Lossy group
+ */
+#define ROGUE_FWIF_INICFG_EXT_TFBC_CONTROL_MASK (0xFF)
+#define ROGUE_FWIF_INICFG_EXT_ALL (ROGUE_FWIF_INICFG_EXT_TFBC_CONTROL_MASK)
+
+/* Flag definitions affecting only workloads submitted by a particular OS */
+#define ROGUE_FWIF_INICFG_OS_CTXSWITCH_TDM_EN BIT(0)
+#define ROGUE_FWIF_INICFG_OS_CTXSWITCH_GEOM_EN BIT(1)
+#define ROGUE_FWIF_INICFG_OS_CTXSWITCH_FRAG_EN BIT(2)
+#define ROGUE_FWIF_INICFG_OS_CTXSWITCH_CDM_EN BIT(3)
+
+#define ROGUE_FWIF_INICFG_OS_LOW_PRIO_CS_TDM BIT(4)
+#define ROGUE_FWIF_INICFG_OS_LOW_PRIO_CS_GEOM BIT(5)
+#define ROGUE_FWIF_INICFG_OS_LOW_PRIO_CS_FRAG BIT(6)
+#define ROGUE_FWIF_INICFG_OS_LOW_PRIO_CS_CDM BIT(7)
+
+#define ROGUE_FWIF_INICFG_OS_ALL (0xFF)
+
+#define ROGUE_FWIF_INICFG_OS_CTXSWITCH_DM_ALL     \
+	(ROGUE_FWIF_INICFG_OS_CTXSWITCH_TDM_EN |  \
+	 ROGUE_FWIF_INICFG_OS_CTXSWITCH_GEOM_EN | \
+	 ROGUE_FWIF_INICFG_OS_CTXSWITCH_FRAG_EN |   \
+	 ROGUE_FWIF_INICFG_OS_CTXSWITCH_CDM_EN)
+
+#define ROGUE_FWIF_INICFG_OS_CTXSWITCH_CLRMSK \
+	~(ROGUE_FWIF_INICFG_OS_CTXSWITCH_DM_ALL)
+
+#define ROGUE_FWIF_FILTCFG_TRUNCATE_HALF BIT(3)
+#define ROGUE_FWIF_FILTCFG_TRUNCATE_INT BIT(2)
+#define ROGUE_FWIF_FILTCFG_NEW_FILTER_MODE BIT(1)
+
+enum rogue_activepm_conf {
+	ROGUE_ACTIVEPM_FORCE_OFF = 0,
+	ROGUE_ACTIVEPM_FORCE_ON = 1,
+	ROGUE_ACTIVEPM_DEFAULT = 2
+};
+
+enum rogue_rd_power_island_conf {
+	ROGUE_RD_POWER_ISLAND_FORCE_OFF = 0,
+	ROGUE_RD_POWER_ISLAND_FORCE_ON = 1,
+	ROGUE_RD_POWER_ISLAND_DEFAULT = 2
+};
+
+struct rogue_fw_register_list {
+	/* Register number */
+	u16 reg_num;
+	/* Indirect register number (or 0 if not used) */
+	u16 indirect_reg_num;
+	/* Start value for indirect register */
+	u16 indirect_start_val;
+	/* End value for indirect register */
+	u16 indirect_end_val;
+};
+
+struct rogue_fwif_dllist_node {
+	u32 p;
+	u32 n;
+};
+
+/*
+ * This number is used to represent an invalid page catalogue physical address
+ */
+#define ROGUE_FWIF_INVALID_PC_PHYADDR 0xFFFFFFFFFFFFFFFFLLU
+
+/* This number is used to represent unallocated page catalog base register */
+#define ROGUE_FW_BIF_INVALID_PCSET 0xFFFFFFFFU
+
+/* Firmware memory context. */
+struct rogue_fwif_fwmemcontext {
+	/* device physical address of context's page catalogue */
+	aligned_u64 pc_dev_paddr;
+	/*
+	 * associated page catalog base register (ROGUE_FW_BIF_INVALID_PCSET ==
+	 * unallocated)
+	 */
+	u32 page_cat_base_reg_set;
+	/* breakpoint address */
+	u32 breakpoint_addr;
+	/* breakpoint handler address */
+	u32 bp_handler_addr;
+	/* DM and enable control for BP */
+	u32 breakpoint_ctl;
+	/* Compatibility and other flags */
+	u32 fw_mem_ctx_flags;
+	u32 padding;
+} __aligned(8);
+
+/*
+ * FW context state flags
+ */
+#define ROGUE_FWIF_CONTEXT_FLAGS_NEED_RESUME (0x00000001U)
+#define ROGUE_FWIF_CONTEXT_FLAGS_MC_NEED_RESUME_MASKFULL (0x000000FFU)
+#define ROGUE_FWIF_CONTEXT_FLAGS_TDM_HEADER_STALE (0x00000100U)
+#define ROGUE_FWIF_CONTEXT_FLAGS_LAST_KICK_SECURE (0x00000200U)
+
+#define ROGUE_NUM_GEOM_CORES_MAX 4
+
+/*
+ * FW-accessible TA state which must be written out to memory on context store
+ */
+struct rogue_fwif_geom_ctx_state_per_geom {
+	/* To store in mid-TA */
+	aligned_u64 geom_reg_vdm_call_stack_pointer;
+	/* Initial value (in case it is 'lost' due to a lock-up) */
+	aligned_u64 geom_reg_vdm_call_stack_pointer_init;
+	u32 geom_reg_vbs_so_prim[4];
+	u16 geom_current_idx;
+	u16 padding[3];
+} __aligned(8);
+
+struct rogue_fwif_geom_ctx_state {
+	/* FW-accessible TA state which must be written out to memory on context store */
+	struct rogue_fwif_geom_ctx_state_per_geom geom_core[ROGUE_NUM_GEOM_CORES_MAX];
+} __aligned(8);
+
+/*
+ * FW-accessible ISP state which must be written out to memory on context store
+ */
+struct rogue_fwif_frag_ctx_state {
+	u32 frag_reg_pm_deallocated_mask_status;
+	u32 frag_reg_dm_pds_mtilefree_status;
+	/* Compatibility and other flags */
+	u32 ctx_state_flags;
+	/*
+	 * frag_reg_isp_store should be the last element of the structure as this
+	 * is an array whose size is determined at runtime after detecting the
+	 * ROGUE core
+	 */
+	u32 frag_reg_isp_store[];
+} __aligned(8);
+
+#define ROGUE_FWIF_CTX_USING_BUFFER_A (0)
+#define ROGUE_FWIF_CTX_USING_BUFFER_B (1U)
+
+struct rogue_fwif_compute_ctx_state {
+	u32 ctx_state_flags; /* Target buffer and other flags */
+};
+
+struct rogue_fwif_fwcommoncontext {
+	/* CCB details for this firmware context */
+	u32 ccbctl_fw_addr; /* CCB control */
+	u32 ccb_fw_addr; /* CCB base */
+	struct rogue_fwif_dma_addr ccb_meta_dma_addr;
+
+	/* Context suspend state */
+	/* geom/frag context suspend state, read/written by FW */
+	u32 context_state_addr __aligned(8);
+
+	/* Flags e.g. for context switching */
+	u32 fw_com_ctx_flags;
+	u32 priority;
+	u32 priority_seq_num;
+
+	/* Framework state */
+	/* Register updates for Framework */
+	u32 rf_cmd_addr __aligned(8);
+
+	/* Statistic updates waiting to be passed back to the host... */
+	/* True when some stats are pending */
+	bool stats_pending __aligned(4);
+	/* Number of stores on this context since last update */
+	s32 stats_num_stores;
+	/* Number of OOMs on this context since last update */
+	s32 stats_num_out_of_memory;
+	/* Number of PRs on this context since last update */
+	s32 stats_num_partial_renders;
+	/* Data Master type */
+	u32 dm;
+	/* Device Virtual Address of the signal the context is waiting on */
+	aligned_u64 wait_signal_address;
+	/* List entry for the wait-signal list */
+	struct rogue_fwif_dllist_node wait_signal_node __aligned(8);
+	/* List entry for the buffer stalled list */
+	struct rogue_fwif_dllist_node buf_stalled_node __aligned(8);
+	/* Address of the circular buffer queue pointers */
+	aligned_u64 cbuf_queue_ctrl_addr;
+
+	aligned_u64 robustness_address;
+	/* Max HWR deadline limit in ms */
+	u32 max_deadline_ms;
+	/* Following HWR circular buffer read-offset needs resetting */
+	bool read_offset_needs_reset;
+
+	/* List entry for the waiting list */
+	struct rogue_fwif_dllist_node waiting_node __aligned(8);
+	/* List entry for the run list */
+	struct rogue_fwif_dllist_node run_node __aligned(8);
+	/* UFO that last failed (or NULL) */
+	struct rogue_fwif_ufo last_failed_ufo;
+
+	/* Memory context */
+	u32 fw_mem_context_fw_addr;
+
+	/* References to the host side originators */
+	/* the Server Common Context */
+	u32 server_common_context_id;
+	/* associated process ID */
+	u32 pid;
+
+	/* True when Geom DM OOM is not allowed */
+	bool geom_oom_disabled __aligned(4);
+} __aligned(8);
+
+/* Firmware render context. */
+struct rogue_fwif_fwrendercontext {
+	/* Geometry firmware context. */
+	struct rogue_fwif_fwcommoncontext geom_context;
+	/* Fragment firmware context. */
+	struct rogue_fwif_fwcommoncontext frag_context;
+
+	struct rogue_fwif_static_rendercontext_state static_render_context_state;
+
+	/* Number of commands submitted to the WorkEst FW CCB */
+	u32 work_est_ccb_submitted;
+
+	/* Compatibility and other flags */
+	u32 fw_render_ctx_flags;
+} __aligned(8);
+
+/* Firmware compute context. */
+struct rogue_fwif_fwcomputecontext {
+	/* Firmware context for the CDM */
+	struct rogue_fwif_fwcommoncontext cdm_context;
+
+	struct rogue_fwif_static_computecontext_state
+		static_compute_context_state;
+
+	/* Number of commands submitted to the WorkEst FW CCB */
+	u32 work_est_ccb_submitted;
+
+	/* Compatibility and other flags */
+	u32 compute_ctx_flags;
+
+	u32 wgp_state;
+	u32 wgp_checksum;
+	u32 core_mask_a;
+	u32 core_mask_b;
+} __aligned(8);
+
+/* Firmware TDM context. */
+struct rogue_fwif_fwtdmcontext {
+	/* Firmware context for the TDM */
+	struct rogue_fwif_fwcommoncontext tdm_context;
+
+	/* Number of commands submitted to the WorkEst FW CCB */
+	u32 work_est_ccb_submitted;
+} __aligned(8);
+
+/* Firmware TQ3D context. */
+struct rogue_fwif_fwtransfercontext {
+	/* Firmware context for TQ3D. */
+	struct rogue_fwif_fwcommoncontext tq_context;
+} __aligned(8);
+
+/*
+ ******************************************************************************
+ * Defines for CMD_TYPE corruption detection and forward compatibility check
+ ******************************************************************************
+ */
+
+/*
+ * CMD_TYPE 32bit contains:
+ * 31:16	Reserved for magic value to detect corruption (16 bits)
+ * 15		Reserved for ROGUE_CCB_TYPE_TASK (1 bit)
+ * 14:0		Bits available for CMD_TYPEs (15 bits)
+ */
+
+/* Magic value to detect corruption */
+#define ROGUE_CMD_MAGIC_DWORD (0x2ABC)
+#define ROGUE_CMD_MAGIC_DWORD_MASK (0xFFFF0000U)
+#define ROGUE_CMD_MAGIC_DWORD_SHIFT (16U)
+#define ROGUE_CMD_MAGIC_DWORD_SHIFTED \
+	(ROGUE_CMD_MAGIC_DWORD << ROGUE_CMD_MAGIC_DWORD_SHIFT)
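+
+/*
+ * Illustrative sketch only: a CMD_TYPE value can be checked for corruption by
+ * verifying its upper 16 bits against the magic value, e.g.:
+ *
+ *   bool valid = (cmd_type & ROGUE_CMD_MAGIC_DWORD_MASK) ==
+ *                ROGUE_CMD_MAGIC_DWORD_SHIFTED;
+ */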
+
+/* Kernel CCB control for ROGUE */
+struct rogue_fwif_ccb_ctl {
+	/* write offset into array of commands (MUST be aligned to 16 bytes!) */
+	u32 write_offset;
+	/* read offset into array of commands */
+	u32 read_offset;
+	/* Offset wrapping mask (Total capacity of the CCB - 1) */
+	u32 wrap_mask;
+	/* size of each command in bytes */
+	u32 cmd_size;
+} __aligned(8);
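+
+/*
+ * Illustrative sketch only: wrap_mask is the total CCB capacity minus one,
+ * which implies a power-of-two capacity, so an offset can be advanced with a
+ * simple mask:
+ *
+ *   next_write_offset = (ctl->write_offset + stride) & ctl->wrap_mask;
+ *
+ * where stride is one command slot (or cmd_size bytes, depending on the units
+ * the offsets are expressed in; that detail is not defined by this structure
+ * alone).
+ */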
+
+/* Kernel CCB command structure for ROGUE */
+
+#define ROGUE_FWIF_MMUCACHEDATA_FLAGS_PT (0x1U) /* MMU_CTRL_INVAL_PT_EN */
+#define ROGUE_FWIF_MMUCACHEDATA_FLAGS_PD (0x2U) /* MMU_CTRL_INVAL_PD_EN */
+#define ROGUE_FWIF_MMUCACHEDATA_FLAGS_PC (0x4U) /* MMU_CTRL_INVAL_PC_EN */
+
+/*
+ * can't use PM_TLB0 bit from BIFPM_CTRL reg because it collides with PT
+ * bit from BIF_CTRL reg
+ */
+#define ROGUE_FWIF_MMUCACHEDATA_FLAGS_PMTLB (0x10)
+/* BIF_CTRL_INVAL_TLB1_EN */
+#define ROGUE_FWIF_MMUCACHEDATA_FLAGS_TLB \
+	(ROGUE_FWIF_MMUCACHEDATA_FLAGS_PMTLB | 0x8)
+/* MMU_CTRL_INVAL_ALL_CONTEXTS_EN */
+#define ROGUE_FWIF_MMUCACHEDATA_FLAGS_CTX_ALL (0x800)
+
+/* indicates FW should interrupt the host */
+#define ROGUE_FWIF_MMUCACHEDATA_FLAGS_INTERRUPT (0x4000000U)
+
+struct rogue_fwif_mmucachedata {
+	u32 cache_flags;
+	u32 mmu_cache_sync_fw_addr;
+	u32 mmu_cache_sync_update_value;
+};
+
+#define ROGUE_FWIF_BPDATA_FLAGS_ENABLE BIT(0)
+#define ROGUE_FWIF_BPDATA_FLAGS_WRITE BIT(1)
+#define ROGUE_FWIF_BPDATA_FLAGS_CTL BIT(2)
+#define ROGUE_FWIF_BPDATA_FLAGS_REGS BIT(3)
+
+struct rogue_fwif_bpdata {
+	/* Memory context */
+	u32 fw_mem_context_fw_addr;
+	/* Breakpoint address */
+	u32 bp_addr;
+	/* Breakpoint handler */
+	u32 bp_handler_addr;
+	/* Breakpoint control */
+	u32 bp_dm;
+	u32 bp_data_flags;
+	/* Number of temporary registers to overallocate */
+	u32 temp_regs;
+	/* Number of shared registers to overallocate */
+	u32 shared_regs;
+	/* DM associated with the breakpoint */
+	u32 dm;
+};
+
+#define ROGUE_FWIF_KCCB_CMD_KICK_DATA_MAX_NUM_CLEANUP_CTLS \
+	(ROGUE_FWIF_PRBUFFER_MAXSUPPORTED + 1U) /* +1 is RTDATASET cleanup */
+
+struct rogue_fwif_kccb_cmd_kick_data {
+	/* address of the firmware context */
+	u32 context_fw_addr;
+	/* Client CCB woff update */
+	u32 client_woff_update;
+	/* Client CCB wrap mask update after CCCB growth */
+	u32 client_wrap_mask_update;
+	/* number of CleanupCtl pointers attached */
+	u32 num_cleanup_ctl;
+	/* CleanupCtl structures associated with command */
+	u32 cleanup_ctl_fw_addr
+		[ROGUE_FWIF_KCCB_CMD_KICK_DATA_MAX_NUM_CLEANUP_CTLS];
+	/*
+	 * offset to the CmdHeader which houses the workload estimation kick
+	 * data.
+	 */
+	u32 work_est_cmd_header_offset;
+};
+
+struct rogue_fwif_kccb_cmd_combined_geom_frag_kick_data {
+	struct rogue_fwif_kccb_cmd_kick_data geom_cmd_kick_data;
+	struct rogue_fwif_kccb_cmd_kick_data frag_cmd_kick_data;
+};
+
+struct rogue_fwif_kccb_cmd_force_update_data {
+	/* address of the firmware context */
+	u32 context_fw_addr;
+	/* Client CCB fence offset */
+	u32 ccb_fence_offset;
+};
+
+enum rogue_fwif_cleanup_type {
+	/* FW common context cleanup */
+	ROGUE_FWIF_CLEANUP_FWCOMMONCONTEXT,
+	/* FW HW RT data cleanup */
+	ROGUE_FWIF_CLEANUP_HWRTDATA,
+	/* FW freelist cleanup */
+	ROGUE_FWIF_CLEANUP_FREELIST,
+	/* FW ZS Buffer cleanup */
+	ROGUE_FWIF_CLEANUP_ZSBUFFER,
+};
+
+struct rogue_fwif_cleanup_request {
+	/* Cleanup type */
+	enum rogue_fwif_cleanup_type cleanup_type;
+	union {
+		/* FW common context to cleanup */
+		u32 context_fw_addr;
+		/* HW RT to cleanup */
+		u32 hwrt_data_fw_addr;
+		/* Freelist to cleanup */
+		u32 freelist_fw_addr;
+		/* ZS Buffer to cleanup */
+		u32 zs_buffer_fw_addr;
+	} cleanup_data;
+};
+
+enum rogue_fwif_power_type {
+	ROGUE_FWIF_POW_OFF_REQ = 1,
+	ROGUE_FWIF_POW_FORCED_IDLE_REQ,
+	ROGUE_FWIF_POW_NUM_UNITS_CHANGE,
+	ROGUE_FWIF_POW_APM_LATENCY_CHANGE
+};
+
+enum rogue_fwif_power_force_idle_type {
+	ROGUE_FWIF_POWER_FORCE_IDLE = 1,
+	ROGUE_FWIF_POWER_CANCEL_FORCED_IDLE,
+	ROGUE_FWIF_POWER_HOST_TIMEOUT,
+};
+
+struct rogue_fwif_power_request {
+	/* Type of power request */
+	enum rogue_fwif_power_type pow_type;
+	union {
+		/* Number of active Dusts */
+		u32 num_of_dusts;
+		/* If the operation is mandatory */
+		bool forced __aligned(4);
+		/*
+		 * Type of Request. Consolidating Force Idle, Cancel Forced
+		 * Idle, Host Timeout
+		 */
+		enum rogue_fwif_power_force_idle_type pow_request_type;
+	} power_req_data;
+};
+
+struct rogue_fwif_slcflushinvaldata {
+	/* Context to fence on (only useful when dm_context is true) */
+	u32 context_fw_addr;
+	/* Invalidate the cache as well as flushing */
+	bool inval __aligned(4);
+	/* The data to flush/invalidate belongs to a specific DM context */
+	bool dm_context __aligned(4);
+	/* Optional address of range (only useful when dm_context is false) */
+	aligned_u64 address;
+	/* Optional size of range (only useful when dm_context is false) */
+	aligned_u64 size;
+};
+
+enum rogue_fwif_hwperf_update_config {
+	ROGUE_FWIF_HWPERF_CTRL_TOGGLE = 0,
+	ROGUE_FWIF_HWPERF_CTRL_SET = 1,
+	ROGUE_FWIF_HWPERF_CTRL_EMIT_FEATURES_EV = 2
+};
+
+struct rogue_fwif_hwperf_ctrl {
+	enum rogue_fwif_hwperf_update_config opcode; /* Control operation code */
+	aligned_u64 mask; /* Mask of events to toggle */
+};
+
+struct rogue_fwif_hwperf_config_enable_blks {
+	/* Number of ROGUE_HWPERF_CONFIG_MUX_CNTBLK in the array */
+	u32 num_blocks;
+	/* Address of the ROGUE_HWPERF_CONFIG_MUX_CNTBLK array */
+	u32 block_configs_fw_addr;
+};
+
+struct rogue_fwif_hwperf_config_da_blks {
+	/* Number of ROGUE_HWPERF_CONFIG_CNTBLK in the array */
+	u32 num_blocks;
+	/* Address of the ROGUE_HWPERF_CONFIG_CNTBLK array */
+	u32 block_configs_fw_addr;
+};
+
+struct rogue_fwif_coreclkspeedchange_data {
+	u32 new_clock_speed; /* New clock speed */
+};
+
+#define ROGUE_FWIF_HWPERF_CTRL_BLKS_MAX 16
+
+struct rogue_fwif_hwperf_ctrl_blks {
+	bool enable;
+	/* Number of block IDs in the array */
+	u32 num_blocks;
+	/* Array of ROGUE_HWPERF_CNTBLK_ID values */
+	u16 block_ids[ROGUE_FWIF_HWPERF_CTRL_BLKS_MAX];
+};
+
+struct rogue_fwif_hwperf_select_custom_cntrs {
+	u16 custom_block;
+	u16 num_counters;
+	u32 custom_counter_ids_fw_addr;
+};
+
+struct rogue_fwif_zsbuffer_backing_data {
+	u32 zs_buffer_fw_addr; /* ZS-Buffer FW address */
+
+	bool done __aligned(4); /* action backing/unbacking succeeded */
+};
+
+struct rogue_fwif_freelist_gs_data {
+	/* Freelist FW address */
+	u32 freelist_fw_addr;
+	/* Amount of the Freelist change */
+	u32 delta_pages;
+	/* New amount of pages on the freelist (including ready pages) */
+	u32 new_pages;
+	/* Number of ready pages to be held in reserve until OOM */
+	u32 ready_pages;
+};
+
+#define MAX_FREELISTS_SIZE 3
+#define MAX_HW_GEOM_FRAG_CONTEXTS_SIZE 3
+
+#define ROGUE_FWIF_MAX_FREELISTS_TO_RECONSTRUCT \
+	(MAX_HW_GEOM_FRAG_CONTEXTS_SIZE * MAX_FREELISTS_SIZE * 2U)
+#define ROGUE_FWIF_FREELISTS_RECONSTRUCTION_FAILED_FLAG 0x80000000U
+
+struct rogue_fwif_freelists_reconstruction_data {
+	u32 freelist_count;
+	u32 freelist_ids[ROGUE_FWIF_MAX_FREELISTS_TO_RECONSTRUCT];
+};
+
+struct rogue_fwif_write_offset_update_data {
+	/*
+	 * Context that may need to be resumed following the write offset update
+	 */
+	u32 context_fw_addr;
+} __aligned(8);
+
+/*
+ ******************************************************************************
+ * Proactive DVFS Structures
+ ******************************************************************************
+ */
+#define NUM_OPP_VALUES 16
+
+struct pdvfs_opp {
+	u32 volt; /* V  */
+	u32 freq; /* Hz */
+} __aligned(8);
+
+struct rogue_fwif_pdvfs_opp {
+	struct pdvfs_opp opp_values[NUM_OPP_VALUES];
+	u32 min_opp_point;
+	u32 max_opp_point;
+} __aligned(8);
+
+struct rogue_fwif_pdvfs_max_freq_data {
+	u32 max_opp_point;
+} __aligned(8);
+
+struct rogue_fwif_pdvfs_min_freq_data {
+	u32 min_opp_point;
+} __aligned(8);
+
+/*
+ ******************************************************************************
+ * Register configuration structures
+ ******************************************************************************
+ */
+
+#define ROGUE_FWIF_REG_CFG_MAX_SIZE 512
+
+enum rogue_fwif_regdata_cmd_type {
+	ROGUE_FWIF_REGCFG_CMD_ADD = 101,
+	ROGUE_FWIF_REGCFG_CMD_CLEAR = 102,
+	ROGUE_FWIF_REGCFG_CMD_ENABLE = 103,
+	ROGUE_FWIF_REGCFG_CMD_DISABLE = 104
+};
+
+enum rogue_fwif_reg_cfg_type {
+	/* Sidekick power event */
+	ROGUE_FWIF_REG_CFG_TYPE_PWR_ON = 0,
+	/* Rascal / dust power event */
+	ROGUE_FWIF_REG_CFG_TYPE_DUST_CHANGE,
+	/* Geometry kick */
+	ROGUE_FWIF_REG_CFG_TYPE_GEOM,
+	/* Fragment kick */
+	ROGUE_FWIF_REG_CFG_TYPE_FRAG,
+	/* Compute kick */
+	ROGUE_FWIF_REG_CFG_TYPE_CDM,
+	/* TLA kick */
+	ROGUE_FWIF_REG_CFG_TYPE_TLA,
+	/* TDM kick */
+	ROGUE_FWIF_REG_CFG_TYPE_TDM,
+	/* Applies to all types. Keep as last element */
+	ROGUE_FWIF_REG_CFG_TYPE_ALL
+};
+
+struct rogue_fwif_reg_cfg_rec {
+	u64 sddr;
+	u64 mask;
+	u64 value;
+};
+
+struct rogue_fwif_regconfig_data {
+	enum rogue_fwif_regdata_cmd_type cmd_type;
+	enum rogue_fwif_reg_cfg_type reg_config_type;
+	struct rogue_fwif_reg_cfg_rec reg_config __aligned(8);
+};
+
+struct rogue_fwif_reg_cfg {
+	/*
+	 * PDump WRW command write granularity is 32 bits.
+	 * Add padding to ensure array size is 32 bit granular.
+	 */
+	u8 num_regs_type[ALIGN((u32)ROGUE_FWIF_REG_CFG_TYPE_ALL,
+			       sizeof(u32))] __aligned(8);
+	struct rogue_fwif_reg_cfg_rec
+		reg_configs[ROGUE_FWIF_REG_CFG_MAX_SIZE] __aligned(8);
+} __aligned(8);
+
+enum rogue_fwif_os_state_change {
+	ROGUE_FWIF_OS_ONLINE = 1,
+	ROGUE_FWIF_OS_OFFLINE
+};
+
+struct rogue_fwif_os_state_change_data {
+	u32 osid;
+	enum rogue_fwif_os_state_change new_os_state;
+} __aligned(8);
+
+enum rogue_fwif_counter_dump_request {
+	ROGUE_FWIF_PWR_COUNTER_DUMP_START = 1,
+	ROGUE_FWIF_PWR_COUNTER_DUMP_STOP,
+	ROGUE_FWIF_PWR_COUNTER_DUMP_SAMPLE,
+};
+
+struct rogue_fwif_counter_dump_data {
+	enum rogue_fwif_counter_dump_request counter_dump_request;
+} __aligned(8);
+
+enum rogue_fwif_kccb_cmd_type {
+	/* Common commands */
+	ROGUE_FWIF_KCCB_CMD_KICK = 101U | ROGUE_CMD_MAGIC_DWORD_SHIFTED,
+	ROGUE_FWIF_KCCB_CMD_MMUCACHE = 102U | ROGUE_CMD_MAGIC_DWORD_SHIFTED,
+	ROGUE_FWIF_KCCB_CMD_BP = 103U | ROGUE_CMD_MAGIC_DWORD_SHIFTED,
+	/* SLC flush and invalidation request */
+	ROGUE_FWIF_KCCB_CMD_SLCFLUSHINVAL = 105U |
+					    ROGUE_CMD_MAGIC_DWORD_SHIFTED,
+	/*
+	 * Requests cleanup of a FW resource (type specified in the command
+	 * data)
+	 */
+	ROGUE_FWIF_KCCB_CMD_CLEANUP = 106U | ROGUE_CMD_MAGIC_DWORD_SHIFTED,
+	/* Power request */
+	ROGUE_FWIF_KCCB_CMD_POW = 107U | ROGUE_CMD_MAGIC_DWORD_SHIFTED,
+	/* Backing for on-demand ZS-Buffer done */
+	ROGUE_FWIF_KCCB_CMD_ZSBUFFER_BACKING_UPDATE =
+		108U | ROGUE_CMD_MAGIC_DWORD_SHIFTED,
+	/* Unbacking for on-demand ZS-Buffer done */
+	ROGUE_FWIF_KCCB_CMD_ZSBUFFER_UNBACKING_UPDATE =
+		109U | ROGUE_CMD_MAGIC_DWORD_SHIFTED,
+	/* Freelist Grow done */
+	ROGUE_FWIF_KCCB_CMD_FREELIST_GROW_UPDATE =
+		110U | ROGUE_CMD_MAGIC_DWORD_SHIFTED,
+	/* Freelists Reconstruction done */
+	ROGUE_FWIF_KCCB_CMD_FREELISTS_RECONSTRUCTION_UPDATE =
+		112U | ROGUE_CMD_MAGIC_DWORD_SHIFTED,
+	/*
+	 * Informs the firmware that the host has added more data to a CDM2
+	 * Circular Buffer
+	 */
+	ROGUE_FWIF_KCCB_CMD_NOTIFY_WRITE_OFFSET_UPDATE =
+		114U | ROGUE_CMD_MAGIC_DWORD_SHIFTED,
+	/* Health check request */
+	ROGUE_FWIF_KCCB_CMD_HEALTH_CHECK = 115U | ROGUE_CMD_MAGIC_DWORD_SHIFTED,
+	/* Forcing signalling of all unmet UFOs for a given CCB offset */
+	ROGUE_FWIF_KCCB_CMD_FORCE_UPDATE = 116U | ROGUE_CMD_MAGIC_DWORD_SHIFTED,
+
+	/* There is a geometry and a fragment command in this single kick */
+	ROGUE_FWIF_KCCB_CMD_COMBINED_GEOM_FRAG_KICK = 117U | ROGUE_CMD_MAGIC_DWORD_SHIFTED,
+	/* Informs the FW that a Guest OS has come online / offline. */
+	ROGUE_FWIF_KCCB_CMD_OS_ONLINE_STATE_CONFIGURE	= 118U | ROGUE_CMD_MAGIC_DWORD_SHIFTED,
+
+	/* Commands only permitted to the native or host OS */
+	ROGUE_FWIF_KCCB_CMD_REGCONFIG = 200U | ROGUE_CMD_MAGIC_DWORD_SHIFTED,
+
+	/* Configure HWPerf events (to be generated) and HWPerf buffer address (if required) */
+	ROGUE_FWIF_KCCB_CMD_HWPERF_UPDATE_CONFIG = 201U | ROGUE_CMD_MAGIC_DWORD_SHIFTED,
+
+	/* Enable or disable multiple HWPerf blocks (reusing existing configuration) */
+	ROGUE_FWIF_KCCB_CMD_HWPERF_CTRL_BLKS = 203U | ROGUE_CMD_MAGIC_DWORD_SHIFTED,
+	/* Core clock speed change event */
+	ROGUE_FWIF_KCCB_CMD_CORECLKSPEEDCHANGE = 204U | ROGUE_CMD_MAGIC_DWORD_SHIFTED,
+
+	/*
+	 * Ask the firmware to update its cached log_type value from the (shared)
+	 * tracebuf control structure
+	 */
+	ROGUE_FWIF_KCCB_CMD_LOGTYPE_UPDATE = 206U | ROGUE_CMD_MAGIC_DWORD_SHIFTED,
+	/* Set a maximum frequency/OPP point */
+	ROGUE_FWIF_KCCB_CMD_PDVFS_LIMIT_MAX_FREQ = 207U | ROGUE_CMD_MAGIC_DWORD_SHIFTED,
+	/*
+	 * Changes the relative scheduling priority for a particular OSid. It can
+	 * only be serviced for the Host DDK
+	 */
+	ROGUE_FWIF_KCCB_CMD_OSID_PRIORITY_CHANGE = 208U | ROGUE_CMD_MAGIC_DWORD_SHIFTED,
+	/* Set or clear firmware state flags */
+	ROGUE_FWIF_KCCB_CMD_STATEFLAGS_CTRL = 209U | ROGUE_CMD_MAGIC_DWORD_SHIFTED,
+
+	/* Set a minimum frequency/OPP point */
+	ROGUE_FWIF_KCCB_CMD_PDVFS_LIMIT_MIN_FREQ = 212U | ROGUE_CMD_MAGIC_DWORD_SHIFTED,
+	/* Configure Periodic Hardware Reset behaviour */
+	ROGUE_FWIF_KCCB_CMD_PHR_CFG = 213U | ROGUE_CMD_MAGIC_DWORD_SHIFTED,
+
+	/* Configure Safety Firmware Watchdog */
+	ROGUE_FWIF_KCCB_CMD_WDG_CFG = 215U | ROGUE_CMD_MAGIC_DWORD_SHIFTED,
+	/* Controls counter dumping in the FW */
+	ROGUE_FWIF_KCCB_CMD_COUNTER_DUMP = 216U | ROGUE_CMD_MAGIC_DWORD_SHIFTED,
+	/* Configure, clear and enable multiple HWPerf blocks */
+	ROGUE_FWIF_KCCB_CMD_HWPERF_CONFIG_ENABLE_BLKS = 217U | ROGUE_CMD_MAGIC_DWORD_SHIFTED,
+	/* Configure the custom counters for HWPerf */
+	ROGUE_FWIF_KCCB_CMD_HWPERF_SELECT_CUSTOM_CNTRS = 218U | ROGUE_CMD_MAGIC_DWORD_SHIFTED,
+
+	/* Configure directly addressable counters for HWPerf */
+	ROGUE_FWIF_KCCB_CMD_HWPERF_CONFIG_BLKS = 220U | ROGUE_CMD_MAGIC_DWORD_SHIFTED,
+};
+
+#define ROGUE_FWIF_LAST_ALLOWED_GUEST_KCCB_CMD \
+	(ROGUE_FWIF_KCCB_CMD_REGCONFIG - 1)
+
+/* Kernel CCB command packet */
+struct rogue_fwif_kccb_cmd {
+	/* Command type */
+	enum rogue_fwif_kccb_cmd_type cmd_type;
+	/* Compatibility and other flags */
+	u32 kccb_flags;
+
+	/*
+	 * NOTE: Make sure that uCmdData is the last member of this struct
+	 * This is to calculate actual command size for device mem copy.
+	 * (Refer ROGUEGetCmdMemCopySize())
+	 */
+	union {
+		/* Data for Kick command */
+		struct rogue_fwif_kccb_cmd_kick_data cmd_kick_data;
+		/* Data for combined geom/frag Kick command */
+		struct rogue_fwif_kccb_cmd_combined_geom_frag_kick_data
+			combined_geom_frag_cmd_kick_data;
+		/* Data for MMU cache command */
+		struct rogue_fwif_mmucachedata mmu_cache_data;
+		/* Data for Breakpoint Commands */
+		struct rogue_fwif_bpdata bp_data;
+		/* Data for SLC Flush/Inval commands */
+		struct rogue_fwif_slcflushinvaldata slc_flush_inval_data;
+		/* Data for cleanup commands */
+		struct rogue_fwif_cleanup_request cleanup_data;
+		/* Data for power request commands */
+		struct rogue_fwif_power_request pow_data;
+		/* Data for HWPerf control command */
+		struct rogue_fwif_hwperf_ctrl hw_perf_ctrl;
+		/*
+		 * Data for HWPerf configure, clear and enable performance
+		 * counter block command
+		 */
+		struct rogue_fwif_hwperf_config_enable_blks
+			hw_perf_cfg_enable_blks;
+		/*
+		 * Data for HWPerf enable or disable performance counter block
+		 * commands
+		 */
+		struct rogue_fwif_hwperf_ctrl_blks hw_perf_ctrl_blks;
+		/* Data for HWPerf configure the custom counters to read */
+		struct rogue_fwif_hwperf_select_custom_cntrs
+			hw_perf_select_cstm_cntrs;
+		/* Data for HWPerf configure Directly Addressable blocks */
+		struct rogue_fwif_hwperf_config_da_blks hw_perf_cfg_da_blks;
+		/* Data for core clock speed change */
+		struct rogue_fwif_coreclkspeedchange_data
+			core_clk_speed_change_data;
+		/* Feedback for Z/S Buffer backing/unbacking */
+		struct rogue_fwif_zsbuffer_backing_data zs_buffer_backing_data;
+		/* Feedback for Freelist grow/shrink */
+		struct rogue_fwif_freelist_gs_data free_list_gs_data;
+		/* Feedback for Freelists reconstruction */
+		struct rogue_fwif_freelists_reconstruction_data
+			free_lists_reconstruction_data;
+		/* Data for custom register configuration */
+		struct rogue_fwif_regconfig_data reg_config_data;
+		/* Data for informing the FW about the write offset update */
+		struct rogue_fwif_write_offset_update_data
+			write_offset_update_data;
+		/* Data for setting the max frequency/OPP */
+		struct rogue_fwif_pdvfs_max_freq_data pdvfs_max_freq_data;
+		/* Data for setting the min frequency/OPP */
+		struct rogue_fwif_pdvfs_min_freq_data pdvfs_min_freq_data;
+		/* Data for updating the Guest Online states */
+		struct rogue_fwif_os_state_change_data cmd_os_online_state_data;
+		/* Dev address for TBI buffer allocated on demand */
+		u32 tbi_buffer_fw_addr;
+		/* Data for dumping of register ranges */
+		struct rogue_fwif_counter_dump_data counter_dump_config_data;
+		/* Data for signalling all unmet fences for a given CCB */
+		struct rogue_fwif_kccb_cmd_force_update_data force_update_data;
+	} cmd_data __aligned(8);
+} __aligned(8);
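+
+/*
+ * Illustrative sketch only (not part of the firmware interface): the "actual
+ * command size" referred to in the note above can be derived as the size of
+ * everything up to cmd_data plus the size of the union member in use, e.g.:
+ *
+ *   size_t cmd_size = offsetof(struct rogue_fwif_kccb_cmd, cmd_data) +
+ *                     sizeof(struct rogue_fwif_kccb_cmd_kick_data);
+ */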
+
+PVR_FW_STRUCT_SIZE_ASSERT(struct rogue_fwif_kccb_cmd);
+
+/*
+ ******************************************************************************
+ * Firmware CCB command structure for ROGUE
+ ******************************************************************************
+ */
+
+struct rogue_fwif_fwccb_cmd_zsbuffer_backing_data {
+	u32 zs_buffer_id;
+};
+
+struct rogue_fwif_fwccb_cmd_freelist_gs_data {
+	u32 freelist_id;
+};
+
+struct rogue_fwif_fwccb_cmd_freelists_reconstruction_data {
+	u32 freelist_count;
+	u32 hwr_counter;
+	u32 freelist_ids[ROGUE_FWIF_MAX_FREELISTS_TO_RECONSTRUCT];
+};
+
+/* 1 if a page fault happened */
+#define ROGUE_FWIF_FWCCB_CMD_CONTEXT_RESET_FLAG_PF BIT(0)
+/* 1 if applicable to all contexts */
+#define ROGUE_FWIF_FWCCB_CMD_CONTEXT_RESET_FLAG_ALL_CTXS BIT(1)
+
+struct rogue_fwif_fwccb_cmd_context_reset_data {
+	/* Context affected by the reset */
+	u32 server_common_context_id;
+	/* Reason for reset */
+	enum rogue_context_reset_reason reset_reason;
+	/* Data Master affected by the reset */
+	u32 dm;
+	/* Job ref running at the time of reset */
+	u32 reset_job_ref;
+	/* ROGUE_FWIF_FWCCB_CMD_CONTEXT_RESET_FLAG bitfield */
+	u32 flags;
+	/* At what page catalog address */
+	aligned_u64 pc_address;
+	/* Page fault address (only when applicable) */
+	aligned_u64 fault_address;
+};
+
+struct rogue_fwif_fwccb_cmd_fw_pagefault_data {
+	/* Page fault address */
+	u64 fw_fault_addr;
+};
+
+enum rogue_fwif_fwccb_cmd_type {
+	/* Requests ZSBuffer to be backed with physical pages */
+	ROGUE_FWIF_FWCCB_CMD_ZSBUFFER_BACKING = 101U |
+						ROGUE_CMD_MAGIC_DWORD_SHIFTED,
+	/* Requests ZSBuffer to be unbacked */
+	ROGUE_FWIF_FWCCB_CMD_ZSBUFFER_UNBACKING = 102U |
+						  ROGUE_CMD_MAGIC_DWORD_SHIFTED,
+	/* Requests an on-demand freelist grow/shrink */
+	ROGUE_FWIF_FWCCB_CMD_FREELIST_GROW = 103U |
+					     ROGUE_CMD_MAGIC_DWORD_SHIFTED,
+	/* Requests freelists reconstruction */
+	ROGUE_FWIF_FWCCB_CMD_FREELISTS_RECONSTRUCTION =
+		104U | ROGUE_CMD_MAGIC_DWORD_SHIFTED,
+	/* Notifies host of a HWR event on a context */
+	ROGUE_FWIF_FWCCB_CMD_CONTEXT_RESET_NOTIFICATION =
+		105U | ROGUE_CMD_MAGIC_DWORD_SHIFTED,
+	/* Requests an on-demand debug dump */
+	ROGUE_FWIF_FWCCB_CMD_DEBUG_DUMP = 106U | ROGUE_CMD_MAGIC_DWORD_SHIFTED,
+	/* Requests an on-demand update on process stats */
+	ROGUE_FWIF_FWCCB_CMD_UPDATE_STATS = 107U |
+					    ROGUE_CMD_MAGIC_DWORD_SHIFTED,
+
+	ROGUE_FWIF_FWCCB_CMD_CORE_CLK_RATE_CHANGE =
+		108U | ROGUE_CMD_MAGIC_DWORD_SHIFTED,
+	ROGUE_FWIF_FWCCB_CMD_REQUEST_GPU_RESTART =
+		109U | ROGUE_CMD_MAGIC_DWORD_SHIFTED,
+
+	/* Notifies host of a FW pagefault */
+	ROGUE_FWIF_FWCCB_CMD_CONTEXT_FW_PF_NOTIFICATION =
+		112U | ROGUE_CMD_MAGIC_DWORD_SHIFTED,
+};
+
+enum rogue_fwif_fwccb_cmd_update_stats_type {
+	/*
+	 * PVRSRVStatsUpdateRenderContextStats should increase the value of the
+	 * ui32TotalNumPartialRenders stat
+	 */
+	ROGUE_FWIF_FWCCB_CMD_UPDATE_NUM_PARTIAL_RENDERS = 1,
+	/*
+	 * PVRSRVStatsUpdateRenderContextStats should increase the value of the
+	 * ui32TotalNumOutOfMemory stat
+	 */
+	ROGUE_FWIF_FWCCB_CMD_UPDATE_NUM_OUT_OF_MEMORY,
+	/*
+	 * PVRSRVStatsUpdateRenderContextStats should increase the value of the
+	 * ui32NumGeomStores stat
+	 */
+	ROGUE_FWIF_FWCCB_CMD_UPDATE_NUM_GEOM_STORES,
+	/*
+	 * PVRSRVStatsUpdateRenderContextStats should increase the value of the
+	 * ui32NumFragStores stat
+	 */
+	ROGUE_FWIF_FWCCB_CMD_UPDATE_NUM_FRAG_STORES,
+	/*
+	 * PVRSRVStatsUpdateRenderContextStats should increase the value of the
+	 * ui32NumCDMStores stat
+	 */
+	ROGUE_FWIF_FWCCB_CMD_UPDATE_NUM_CDM_STORES,
+	/*
+	 * PVRSRVStatsUpdateRenderContextStats should increase the value of the
+	 * ui32NumTDMStores stat
+	 */
+	ROGUE_FWIF_FWCCB_CMD_UPDATE_NUM_TDM_STORES
+};
+
+struct rogue_fwif_fwccb_cmd_update_stats_data {
+	/* Element to update */
+	enum rogue_fwif_fwccb_cmd_update_stats_type element_to_update;
+	/* The pid of the process whose stats are being updated */
+	u32 pid_owner;
+	/* Adjustment to be made to the statistic */
+	s32 adjustment_value;
+};
+
+struct rogue_fwif_fwccb_cmd_core_clk_rate_change_data {
+	u32 core_clk_rate;
+} __aligned(8);
+
+struct rogue_fwif_fwccb_cmd {
+	/* Command type */
+	enum rogue_fwif_fwccb_cmd_type cmd_type;
+	/* Compatibility and other flags */
+	u32 fwccb_flags;
+
+	union {
+		/* Data for Z/S-Buffer on-demand (un)backing */
+		struct rogue_fwif_fwccb_cmd_zsbuffer_backing_data
+			cmd_zs_buffer_backing;
+		/* Data for on-demand freelist grow/shrink */
+		struct rogue_fwif_fwccb_cmd_freelist_gs_data cmd_free_list_gs;
+		/* Data for freelists reconstruction */
+		struct rogue_fwif_fwccb_cmd_freelists_reconstruction_data
+			cmd_freelists_reconstruction;
+		/* Data for context reset notification */
+		struct rogue_fwif_fwccb_cmd_context_reset_data
+			cmd_context_reset_notification;
+		/* Data for updating process stats */
+		struct rogue_fwif_fwccb_cmd_update_stats_data
+			cmd_update_stats_data;
+		struct rogue_fwif_fwccb_cmd_core_clk_rate_change_data
+			cmd_core_clk_rate_change;
+		struct rogue_fwif_fwccb_cmd_fw_pagefault_data cmd_fw_pagefault;
+	} cmd_data __aligned(8);
+} __aligned(8);
+
+PVR_FW_STRUCT_SIZE_ASSERT(struct rogue_fwif_fwccb_cmd);
+
+/*
+ ******************************************************************************
+ * Workload estimation Firmware CCB command structure for ROGUE
+ ******************************************************************************
+ */
+struct rogue_fwif_workest_fwccb_cmd {
+	/* Index for return data array */
+	u16 return_data_index;
+	/* The cycles the workload took on the hardware */
+	u32 cycles_taken;
+};
+
+/*
+ ******************************************************************************
+ * Client CCB commands for ROGUE
+ ******************************************************************************
+ */
+
+/*
+ * Required memory alignment for 64-bit variables accessible by Meta
+ * (the Meta gcc aligns 64-bit variables on 64-bit boundaries; therefore,
+ * memory shared between the host and Meta that contains 64-bit variables has
+ * to maintain this alignment).
+ */
+#define ROGUE_FWIF_FWALLOC_ALIGN sizeof(u64)
+
+#define ROGUE_CCB_TYPE_TASK BIT(15)
+#define ROGUE_CCB_FWALLOC_ALIGN(size)                \
+	(((size) + (ROGUE_FWIF_FWALLOC_ALIGN - 1)) & \
+	 ~(ROGUE_FWIF_FWALLOC_ALIGN - 1))
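+
+/*
+ * Illustrative example (not part of the interface): with
+ * ROGUE_FWIF_FWALLOC_ALIGN == sizeof(u64) == 8, ROGUE_CCB_FWALLOC_ALIGN(13)
+ * evaluates to 16, i.e. sizes are rounded up to the next multiple of 8 bytes.
+ */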
+
+#define ROGUE_FWIF_CCB_CMD_TYPE_GEOM \
+	(201U | ROGUE_CMD_MAGIC_DWORD_SHIFTED | ROGUE_CCB_TYPE_TASK)
+#define ROGUE_FWIF_CCB_CMD_TYPE_TQ_3D \
+	(202U | ROGUE_CMD_MAGIC_DWORD_SHIFTED | ROGUE_CCB_TYPE_TASK)
+#define ROGUE_FWIF_CCB_CMD_TYPE_FRAG \
+	(203U | ROGUE_CMD_MAGIC_DWORD_SHIFTED | ROGUE_CCB_TYPE_TASK)
+#define ROGUE_FWIF_CCB_CMD_TYPE_FRAG_PR \
+	(204U | ROGUE_CMD_MAGIC_DWORD_SHIFTED | ROGUE_CCB_TYPE_TASK)
+#define ROGUE_FWIF_CCB_CMD_TYPE_CDM \
+	(205U | ROGUE_CMD_MAGIC_DWORD_SHIFTED | ROGUE_CCB_TYPE_TASK)
+#define ROGUE_FWIF_CCB_CMD_TYPE_TQ_TDM \
+	(206U | ROGUE_CMD_MAGIC_DWORD_SHIFTED | ROGUE_CCB_TYPE_TASK)
+#define ROGUE_FWIF_CCB_CMD_TYPE_FBSC_INVALIDATE \
+	(207U | ROGUE_CMD_MAGIC_DWORD_SHIFTED | ROGUE_CCB_TYPE_TASK)
+#define ROGUE_FWIF_CCB_CMD_TYPE_TQ_2D \
+	(208U | ROGUE_CMD_MAGIC_DWORD_SHIFTED | ROGUE_CCB_TYPE_TASK)
+#define ROGUE_FWIF_CCB_CMD_TYPE_PRE_TIMESTAMP \
+	(209U | ROGUE_CMD_MAGIC_DWORD_SHIFTED | ROGUE_CCB_TYPE_TASK)
+#define ROGUE_FWIF_CCB_CMD_TYPE_NULL \
+	(210U | ROGUE_CMD_MAGIC_DWORD_SHIFTED | ROGUE_CCB_TYPE_TASK)
+#define ROGUE_FWIF_CCB_CMD_TYPE_ABORT \
+	(211U | ROGUE_CMD_MAGIC_DWORD_SHIFTED | ROGUE_CCB_TYPE_TASK)
+
+/* Leave a gap between CCB specific commands and generic commands */
+#define ROGUE_FWIF_CCB_CMD_TYPE_FENCE (212U | ROGUE_CMD_MAGIC_DWORD_SHIFTED)
+#define ROGUE_FWIF_CCB_CMD_TYPE_UPDATE (213U | ROGUE_CMD_MAGIC_DWORD_SHIFTED)
+#define ROGUE_FWIF_CCB_CMD_TYPE_RMW_UPDATE \
+	(214U | ROGUE_CMD_MAGIC_DWORD_SHIFTED)
+#define ROGUE_FWIF_CCB_CMD_TYPE_FENCE_PR (215U | ROGUE_CMD_MAGIC_DWORD_SHIFTED)
+#define ROGUE_FWIF_CCB_CMD_TYPE_PRIORITY (216U | ROGUE_CMD_MAGIC_DWORD_SHIFTED)
+/*
+ * Pre- and post-timestamp commands are supposed to sandwich the DM command.
+ * The padding code used on CCB wrap upsets the FW if the task type bit is not
+ * cleared for POST_TIMESTAMP commands, which is why two different command
+ * types are needed.
+ */
+#define ROGUE_FWIF_CCB_CMD_TYPE_POST_TIMESTAMP \
+	(217U | ROGUE_CMD_MAGIC_DWORD_SHIFTED)
+#define ROGUE_FWIF_CCB_CMD_TYPE_UNFENCED_UPDATE \
+	(218U | ROGUE_CMD_MAGIC_DWORD_SHIFTED)
+#define ROGUE_FWIF_CCB_CMD_TYPE_UNFENCED_RMW_UPDATE \
+	(219U | ROGUE_CMD_MAGIC_DWORD_SHIFTED)
+
+#define ROGUE_FWIF_CCB_CMD_TYPE_PADDING (221U | ROGUE_CMD_MAGIC_DWORD_SHIFTED)
+
+struct rogue_fwif_workest_kick_data {
+	/* Index for the KM Workload estimation return data array */
+	u16 return_data_index __aligned(8);
+	/* Predicted time taken to do the work in cycles */
+	u32 cycles_prediction __aligned(8);
+	/* Deadline for the workload */
+	aligned_u64 deadline;
+};
+
+struct rogue_fwif_ccb_cmd_header {
+	u32 cmd_type;
+	u32 cmd_size;
+	/*
+	 * external job reference - provided by client and used in debug for
+	 * tracking submitted work
+	 */
+	u32 ext_job_ref;
+	/*
+	 * internal job reference - generated by services and used in debug for
+	 * tracking submitted work
+	 */
+	u32 int_job_ref;
+	/* Workload Estimation - Workload Estimation Data */
+	struct rogue_fwif_workest_kick_data work_est_kick_data __aligned(8);
+};
+
+/*
+ ******************************************************************************
+ * Client CCB commands which are only required by the kernel
+ ******************************************************************************
+ */
+struct rogue_fwif_cmd_priority {
+	s32 priority;
+};
+
+/*
+ ******************************************************************************
+ * Signature and Checksums Buffer
+ ******************************************************************************
+ */
+struct rogue_fwif_sigbuf_ctl {
+	/* Ptr to Signature Buffer memory */
+	u32 buffer_fw_addr;
+	/* Amount of space left for storing regs in the buffer */
+	u32 left_size_in_regs;
+} __aligned(8);
+
+struct rogue_fwif_counter_dump_ctl {
+	/* Ptr to counter dump buffer */
+	u32 buffer_fw_addr;
+	/* Amount of space in the buffer, in dwords */
+	u32 size_in_dwords;
+} __aligned(8);
+
+struct rogue_fwif_firmware_gcov_ctl {
+	/* Ptr to firmware gcov buffer */
+	u32 buffer_fw_addr;
+	/* Amount of space in the buffer */
+	u32 size;
+} __aligned(8);
+
+/*
+ *****************************************************************************
+ * ROGUE Compatibility checks
+ *****************************************************************************
+ */
+
+/*
+ * WARNING: Whenever the layout of ROGUE_FWIF_COMPCHECKS_BVNC changes, the
+ * following define should be increased by 1 to indicate to the compatibility
+ * logic that the layout has changed.
+ */
+#define ROGUE_FWIF_COMPCHECKS_LAYOUT_VERSION 3
+
+struct rogue_fwif_compchecks_bvnc {
+	/* WARNING: This field must be defined as the first one in this structure */
+	u32 layout_version;
+	aligned_u64 bvnc;
+} __aligned(8);
+
+struct rogue_fwif_init_options {
+	u8 os_count_support;
+	u8 padding[7];
+} __aligned(8);
+
+#define ROGUE_FWIF_COMPCHECKS_BVNC_DECLARE_AND_INIT(name) \
+	struct rogue_fwif_compchecks_bvnc(name) = {       \
+		ROGUE_FWIF_COMPCHECKS_LAYOUT_VERSION,     \
+		0,                                        \
+	}
+
+static inline void rogue_fwif_compchecks_bvnc_init(struct rogue_fwif_compchecks_bvnc *compchecks)
+{
+	compchecks->layout_version = ROGUE_FWIF_COMPCHECKS_LAYOUT_VERSION;
+	compchecks->bvnc = 0;
+}
+
+struct rogue_fwif_compchecks {
+	/* hardware BVNC (from the ROGUE registers) */
+	struct rogue_fwif_compchecks_bvnc hw_bvnc;
+	/* firmware BVNC */
+	struct rogue_fwif_compchecks_bvnc fw_bvnc;
+	/* identifier of the FW processor version */
+	u32 fw_processor_version;
+	/* software DDK version */
+	u32 ddk_version;
+	/* software DDK build no. */
+	u32 ddk_build;
+	/* build options bit-field */
+	u32 build_options;
+	/* initialisation options bit-field */
+	struct rogue_fwif_init_options init_options;
+	/* Information is valid */
+	bool updated __aligned(4);
+	u32 padding;
+} __aligned(8);
+
+/*
+ ******************************************************************************
+ * Updated configuration post FW data init.
+ ******************************************************************************
+ */
+struct rogue_fwif_runtime_cfg {
+	/* APM latency in ms before signalling IDLE to the host */
+	u32 active_pm_latency_ms;
+	/* Compatibility and other flags */
+	u32 runtime_cfg_flags;
+	/*
+	 * If set, APM latency does not reset to the system default on each GPU
+	 * power transition
+	 */
+	bool active_pm_latency_persistant __aligned(4);
+	/* Core clock speed, currently only used to calculate timer ticks */
+	u32 core_clock_speed;
+	/* Last number of dusts change requested by the host */
+	u32 default_dusts_num_init;
+	/* Periodic Hardware Reset configuration values */
+	u32 phr_mode;
+	/* New number of milliseconds a hard context switch (C/S) is allowed to last */
+	u32 hcs_deadline_ms;
+	/* The watchdog period in microseconds */
+	u32 wdg_period_us;
+	/* Array of priorities per OS */
+	u32 osid_priority[ROGUE_FW_MAX_NUM_OS];
+	/* On-demand allocated HWPerf buffer address, to be passed to the FW */
+	u32 hwperf_buf_fw_addr;
+
+	bool padding __aligned(4);
+};
+
+/*
+ *****************************************************************************
+ * Control data for ROGUE
+ *****************************************************************************
+ */
+
+#define ROGUE_FWIF_HWR_DEBUG_DUMP_ALL (99999U)
+
+enum rogue_fwif_tpu_dm {
+	ROGUE_FWIF_TPU_DM_PDM = 0,
+	ROGUE_FWIF_TPU_DM_VDM = 1,
+	ROGUE_FWIF_TPU_DM_CDM = 2,
+	ROGUE_FWIF_TPU_DM_TDM = 3,
+	ROGUE_FWIF_TPU_DM_LAST
+};
+
+enum rogue_fwif_gpio_val_mode {
+	/* No GPIO validation */
+	ROGUE_FWIF_GPIO_VAL_OFF = 0,
+	/*
+	 * Simple test case that starts by sending data via the GPIO and then
+	 * echoes back any data received over the GPIO
+	 */
+	ROGUE_FWIF_GPIO_VAL_GENERAL = 1,
+	/*
+	 * More complex test case that writes and reads data across the entire
+	 * GPIO AP address range.
+	 */
+	ROGUE_FWIF_GPIO_VAL_AP = 2,
+	/* Validates the GPIO Testbench. */
+	ROGUE_FWIF_GPIO_VAL_TESTBENCH = 5,
+	/* Send and then receive each byte in the range 0-255. */
+	ROGUE_FWIF_GPIO_VAL_LOOPBACK = 6,
+	/* Send and then receive each power-of-2 byte in the range 0-255. */
+	ROGUE_FWIF_GPIO_VAL_LOOPBACK_LITE = 7,
+	ROGUE_FWIF_GPIO_VAL_LAST
+};
+
+enum fw_perf_conf {
+	FW_PERF_CONF_NONE = 0,
+	FW_PERF_CONF_ICACHE = 1,
+	FW_PERF_CONF_DCACHE = 2,
+	FW_PERF_CONF_JTLB_INSTR = 5,
+	FW_PERF_CONF_INSTRUCTIONS = 6
+};
+
+enum fw_boot_stage {
+	FW_BOOT_STAGE_TLB_INIT_FAILURE = -2,
+	FW_BOOT_STAGE_NOT_AVAILABLE = -1,
+	FW_BOOT_NOT_STARTED = 0,
+	FW_BOOT_BLDR_STARTED = 1,
+	FW_BOOT_CACHE_DONE,
+	FW_BOOT_TLB_DONE,
+	FW_BOOT_MAIN_STARTED,
+	FW_BOOT_ALIGNCHECKS_DONE,
+	FW_BOOT_INIT_DONE,
+};
+
+/*
+ * Kernel CCB return slot responses. Using bit flags instead of bare integers
+ * allows the FW to pack several responses into the return slot of a single
+ * kCCB command.
+ */
+/* Command executed (return status from FW) */
+#define ROGUE_FWIF_KCCB_RTN_SLOT_CMD_EXECUTED BIT(0)
+/* A cleanup was requested but resource busy */
+#define ROGUE_FWIF_KCCB_RTN_SLOT_CLEANUP_BUSY BIT(1)
+/* Poll failed in FW for a HW operation to complete */
+#define ROGUE_FWIF_KCCB_RTN_SLOT_POLL_FAILURE BIT(2)
+/* Reset value of a kCCB return slot (set by host) */
+#define ROGUE_FWIF_KCCB_RTN_SLOT_NO_RESPONSE 0x0U
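+
+/*
+ * Illustrative usage (not part of the interface): because the responses are
+ * bit flags, the host can test each one independently in a return slot, e.g.
+ *
+ *   if (slot_value & ROGUE_FWIF_KCCB_RTN_SLOT_CLEANUP_BUSY)
+ *           retry_cleanup_later();
+ *
+ * where slot_value and retry_cleanup_later() are hypothetical names.
+ */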
+
+struct rogue_fwif_connection_ctl {
+	/* FW-OS connection states */
+	enum rogue_fwif_connection_fw_state connection_fw_state;
+	enum rogue_fwif_connection_os_state connection_os_state;
+	u32 alive_fw_token;
+	u32 alive_os_token;
+} __aligned(8);
+
+struct rogue_fwif_osinit {
+	/* Kernel CCB */
+	u32 kernel_ccbctl_fw_addr;
+	u32 kernel_ccb_fw_addr;
+	u32 kernel_ccb_rtn_slots_fw_addr;
+
+	/* Firmware CCB */
+	u32 firmware_ccbctl_fw_addr;
+	u32 firmware_ccb_fw_addr;
+
+	/* Workload Estimation Firmware CCB */
+	u32 work_est_firmware_ccbctl_fw_addr;
+	u32 work_est_firmware_ccb_fw_addr;
+
+	u32 rogue_fwif_hwr_info_buf_ctl_fw_addr;
+
+	u32 hwr_debug_dump_limit;
+
+	u32 fw_os_data_fw_addr;
+
+	/* Compatibility checks to be populated by the Firmware */
+	struct rogue_fwif_compchecks rogue_comp_checks;
+} __aligned(8);
+
+/* BVNC Features */
+struct rogue_hwperf_bvnc_block {
+	/* Counter block ID, see ROGUE_HWPERF_CNTBLK_ID */
+	u16 block_id;
+
+	/* Number of counters in this block type */
+	u16 num_counters;
+
+	/* Number of blocks of this type */
+	u16 num_blocks;
+
+	u16 reserved;
+};
+
+#define ROGUE_HWPERF_MAX_BVNC_LEN (24)
+
+#define ROGUE_HWPERF_MAX_BVNC_BLOCK_LEN (16U)
+
+/* BVNC Features */
+struct rogue_hwperf_bvnc {
+	/* BVNC string */
+	char bvnc_string[ROGUE_HWPERF_MAX_BVNC_LEN];
+	/* See ROGUE_HWPERF_FEATURE_FLAGS */
+	u32 bvnc_km_feature_flags;
+	/* Number of blocks described in bvnc_blocks */
+	u16 num_bvnc_blocks;
+	/* Number of GPU cores present */
+	u16 bvnc_gpu_cores;
+	/* Supported Performance Blocks for BVNC */
+	struct rogue_hwperf_bvnc_block
+		bvnc_blocks[ROGUE_HWPERF_MAX_BVNC_BLOCK_LEN];
+};
+
+PVR_FW_STRUCT_SIZE_ASSERT(struct rogue_hwperf_bvnc);
+
+struct rogue_fwif_sysinit {
+	/* Fault read address */
+	aligned_u64 fault_phys_addr;
+
+	/* PDS execution base */
+	aligned_u64 pds_exec_base;
+	/* UCS execution base */
+	aligned_u64 usc_exec_base;
+	/* FBCDC bindless texture state table base */
+	aligned_u64 fbcdc_state_table_base;
+	aligned_u64 fbcdc_large_state_table_base;
+	/* Texture state base */
+	aligned_u64 texture_heap_base;
+
+	/* Event filter for Firmware events */
+	u64 hw_perf_filter;
+
+	aligned_u64 slc3_fence_dev_addr;
+
+	u32 tpu_trilinear_frac_mask[ROGUE_FWIF_TPU_DM_LAST] __aligned(8);
+
+	/* Signature and Checksum Buffers for DMs */
+	struct rogue_fwif_sigbuf_ctl sigbuf_ctl[PVR_FWIF_DM_MAX];
+
+	struct rogue_fwif_pdvfs_opp pdvfs_opp_info;
+
+	struct rogue_fwif_dma_addr coremem_data_store;
+
+	struct rogue_fwif_counter_dump_ctl counter_dump_ctl;
+
+	u32 filter_flags;
+
+	u32 runtime_cfg_fw_addr;
+
+	u32 trace_buf_ctl_fw_addr;
+	u32 fw_sys_data_fw_addr;
+
+	u32 gpu_util_fw_cb_ctl_fw_addr;
+	u32 reg_cfg_fw_addr;
+	u32 hwperf_ctl_fw_addr;
+
+	u32 align_checks;
+
+	/* Core clock speed at FW boot time */
+	u32 initial_core_clock_speed;
+
+	/* APM latency in ms before signalling IDLE to the host */
+	u32 active_pm_latency_ms;
+
+	/* Flag to be set by the Firmware after successful start */
+	bool firmware_started __aligned(4);
+
+	/* Host/FW Trace synchronisation Partition Marker */
+	u32 marker_val;
+
+	/* Firmware initialization complete time */
+	u32 firmware_started_timestamp;
+
+	u32 jones_disable_mask;
+
+	/* Firmware performance counter config */
+	enum fw_perf_conf firmware_perf;
+
+	/*
+	 * FW pointer to memory containing the core clock rate in Hz.
+	 * The firmware (PDVFS) updates this memory when running on a
+	 * non-primary FW thread, to communicate the rate to the host driver.
+	 */
+	u32 core_clock_rate_fw_addr;
+
+	enum rogue_fwif_gpio_val_mode gpio_validation_mode;
+
+	/* Used in HWPerf for decoding BVNC Features */
+	struct rogue_hwperf_bvnc bvnc_km_feature_flags;
+
+	/* Value to write into ROGUE_CR_TFBC_COMPRESSION_CONTROL */
+	u32 tfbc_compression_control;
+} __aligned(8);
+
+/*
+ *****************************************************************************
+ * Timer correlation shared data and defines
+ *****************************************************************************
+ */
+
+struct rogue_fwif_time_corr {
+	aligned_u64 os_timestamp;
+	aligned_u64 os_mono_timestamp;
+	aligned_u64 cr_timestamp;
+
+	/*
+	 * Utility variable used to convert CR timer deltas to OS timer deltas
+	 * (nS), where the deltas are relative to the timestamps above:
+	 * deltaOS = (deltaCR * K) >> decimal_shift, see full explanation below
+	 */
+	aligned_u64 cr_delta_to_os_delta_kns;
+
+	u32 core_clock_speed;
+	u32 reserved;
+} __aligned(8);
+
+/*
+ * The following macros are used to help convert FW timestamps to the Host
+ * time domain. On the FW the ROGUE_CR_TIMER counter is used to keep track of
+ * time; it increments by 1 every 256 GPU clock ticks, so the general
+ * formula to perform the conversion is:
+ *
+ * [ GPU clock speed in Hz, if (scale == 10^9) then deltaOS is in nS,
+ *   otherwise if (scale == 10^6) then deltaOS is in uS ]
+ *
+ *             deltaCR * 256                                   256 * scale
+ *  deltaOS = --------------- * scale = deltaCR * K    [ K = --------------- ]
+ *             GPUclockspeed                                  GPUclockspeed
+ *
+ * The actual K is multiplied by 2^20 (and deltaCR * K is divided by 2^20)
+ * to get better accuracy and to avoid returning 0 in the integer division
+ * 256000000/GPUfreq if GPUfreq is greater than 256MHz.
+ * This is equivalent to keeping K as a fixed-point fractional number.
+ *
+ * The maximum deltaOS is slightly more than 5hrs for all GPU frequencies
+ * (deltaCR * K is more or less a constant), and it's relative to the base
+ * OS timestamp sampled as a part of the timer correlation data.
+ * This base is refreshed on GPU power-on, DVFS transition and periodic
+ * frequency calibration (executed every few seconds if the FW is doing
+ * some work), so as long as the GPU is doing something and one of these
+ * events is triggered then deltaCR * K will not overflow and deltaOS will be
+ * correct.
+ */
+
+#define ROGUE_FWIF_CRDELTA_TO_OSDELTA_ACCURACY_SHIFT (20)
+
+#define ROGUE_FWIF_GET_DELTA_OSTIME_NS(delta_cr, k) \
+	(((delta_cr) * (k)) >> ROGUE_FWIF_CRDELTA_TO_OSDELTA_ACCURACY_SHIFT)
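+
+/*
+ * Worked example (illustrative only): for a 500 MHz GPU clock and
+ * scale = 10^9 (nanoseconds), K = ((256 * 10^9) / (500 * 10^6)) << 20 =
+ * 512 << 20. A deltaCR of 1000 then gives
+ * deltaOS = (1000 * (512 << 20)) >> 20 = 512000 ns, which matches
+ * 1000 * 256 = 256000 GPU clock ticks at 500 MHz (512 us).
+ */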
+
+/*
+ ******************************************************************************
+ * GPU Utilisation
+ ******************************************************************************
+ */
+
+/* See rogue_common.h for a list of GPU states */
+#define ROGUE_FWIF_GPU_UTIL_TIME_MASK \
+	(0xFFFFFFFFFFFFFFFFull & ~ROGUE_FWIF_GPU_UTIL_STATE_MASK)
+
+#define ROGUE_FWIF_GPU_UTIL_GET_TIME(word) \
+	((word) & ROGUE_FWIF_GPU_UTIL_TIME_MASK)
+#define ROGUE_FWIF_GPU_UTIL_GET_STATE(word) \
+	((word) & ROGUE_FWIF_GPU_UTIL_STATE_MASK)
+
+/*
+ * The OS timestamps computed by the FW are approximations of the real time,
+ * which means they could be slightly behind or ahead of the real timer on the
+ * Host. In some cases we perform subtractions between FW-approximated
+ * timestamps and real OS timestamps, so we need a form of protection against
+ * negative results if, for instance, the FW one is a bit ahead of time.
+ */
+#define ROGUE_FWIF_GPU_UTIL_GET_PERIOD(newtime, oldtime) \
+	(((newtime) > (oldtime)) ? ((newtime) - (oldtime)) : 0U)
+
+#define ROGUE_FWIF_GPU_UTIL_MAKE_WORD(time, state) \
+	(ROGUE_FWIF_GPU_UTIL_GET_TIME(time) |      \
+	 ROGUE_FWIF_GPU_UTIL_GET_STATE(state))
+
+/*
+ * The timer correlation array must be big enough to ensure old entries won't be
+ * overwritten before all the HWPerf events linked to those entries are
+ * processed by the MISR. The update frequency of this array depends on how fast
+ * the system can change state (basically how small the APM latency is) and
+ * perform DVFS transitions.
+ *
+ * The minimum size is 2 (not 1) to avoid race conditions between the FW reading
+ * an entry while the Host is updating it. With 2 entries in the worst case the
+ * FW will read old data, which is still quite ok if the Host is updating the
+ * timer correlation at that time.
+ */
+#define ROGUE_FWIF_TIME_CORR_ARRAY_SIZE 256U
+#define ROGUE_FWIF_TIME_CORR_CURR_INDEX(seqcount) \
+	((seqcount) % ROGUE_FWIF_TIME_CORR_ARRAY_SIZE)
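+
+/*
+ * Since ROGUE_FWIF_TIME_CORR_ARRAY_SIZE is a power of two (asserted below),
+ * the modulo above can be reduced to a bitwise AND with
+ * (ROGUE_FWIF_TIME_CORR_ARRAY_SIZE - 1).
+ */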
+
+/* Make sure the timer correlation array size is a power of 2 */
+static_assert((ROGUE_FWIF_TIME_CORR_ARRAY_SIZE &
+	       (ROGUE_FWIF_TIME_CORR_ARRAY_SIZE - 1U)) == 0U,
+	      "ROGUE_FWIF_TIME_CORR_ARRAY_SIZE must be a power of two");
+
+struct rogue_fwif_gpu_util_fwcb {
+	struct rogue_fwif_time_corr time_corr[ROGUE_FWIF_TIME_CORR_ARRAY_SIZE];
+	u32 time_corr_seq_count;
+
+	/* Compatibility and other flags */
+	u32 gpu_util_flags;
+
+	/* Last GPU state + OS time of the last state update */
+	aligned_u64 last_word;
+
+	/* Counters for the amount of time the GPU was active/idle/blocked */
+	aligned_u64 stats_counters[PVR_FWIF_GPU_UTIL_STATE_NUM];
+} __aligned(8);
+
+struct rogue_fwif_rta_ctl {
+	/* Render number */
+	u32 render_target_index;
+	/* index in RTA */
+	u32 current_render_target;
+	/* total active RTs */
+	u32 active_render_targets;
+	/* total active RTs from the first TA kick, for OOM */
+	u32 cumul_active_render_targets;
+	/* Array of valid RT indices */
+	u32 valid_render_targets_fw_addr;
+	/* Array of the number of partial renders that occurred per render target */
+	u32 rta_num_partial_renders_fw_addr;
+	/* Number of render targets in the array */
+	u32 max_rts;
+	/* Compatibility and other flags */
+	u32 rta_ctl_flags;
+} __aligned(8);
+
+struct rogue_fwif_freelist {
+	aligned_u64 freelist_dev_addr;
+	aligned_u64 current_dev_addr;
+	u32 current_stack_top;
+	u32 max_pages;
+	u32 grow_pages;
+	/* HW pages */
+	u32 current_pages;
+	u32 allocated_page_count;
+	u32 allocated_mmu_page_count;
+	u32 freelist_id;
+
+	bool grow_pending __aligned(4);
+	/* Pages that should be used only when OOM is reached */
+	u32 ready_pages;
+	/* Compatibility and other flags */
+	u32 freelist_flags;
+	/* PM Global PB on which Freelist is loaded */
+	u32 pm_global_pb;
+	u32 padding;
+} __aligned(8);
+
+/*
+ ******************************************************************************
+ * HWRTData
+ ******************************************************************************
+ */
+
+/* HWRTData flags */
+/* Deprecated flags 1:0 */
+#define HWRTDATA_HAS_LAST_GEOM BIT(2)
+#define HWRTDATA_PARTIAL_RENDERED BIT(3)
+#define HWRTDATA_DISABLE_TILE_REORDERING BIT(4)
+#define HWRTDATA_NEED_BRN65101_BLIT BIT(5)
+#define HWRTDATA_FIRST_BRN65101_STRIP BIT(6)
+#define HWRTDATA_NEED_BRN67182_2ND_RENDER BIT(7)
+
+enum rogue_fwif_rtdata_state {
+	ROGUE_FWIF_RTDATA_STATE_NONE = 0,
+	ROGUE_FWIF_RTDATA_STATE_KICK_GEOM,
+	ROGUE_FWIF_RTDATA_STATE_KICK_GEOM_FIRST,
+	ROGUE_FWIF_RTDATA_STATE_GEOM_FINISHED,
+	ROGUE_FWIF_RTDATA_STATE_KICK_FRAG,
+	ROGUE_FWIF_RTDATA_STATE_FRAG_FINISHED,
+	ROGUE_FWIF_RTDATA_STATE_FRAG_CONTEXT_STORED,
+	ROGUE_FWIF_RTDATA_STATE_GEOM_OUTOFMEM,
+	ROGUE_FWIF_RTDATA_STATE_PARTIALRENDERFINISHED,
+	/*
+	 * In case of HWR, we can't set the RTDATA state to NONE, as this will
+	 * cause any TA to become a first TA. To ensure all related TAs are
+	 * skipped, we use the HWR state.
+	 */
+	ROGUE_FWIF_RTDATA_STATE_HWR,
+	ROGUE_FWIF_RTDATA_STATE_UNKNOWN = 0x7FFFFFFFU
+};
+
+struct rogue_fwif_hwrtdata_common {
+	bool geom_caches_need_zeroing __aligned(4);
+
+	u32 screen_pixel_max;
+	aligned_u64 multi_sample_ctl;
+	u64 flipped_multi_sample_ctl;
+	u32 tpc_stride;
+	u32 tpc_size;
+	u32 te_screen;
+	u32 mtile_stride;
+	u32 teaa;
+	u32 te_mtile1;
+	u32 te_mtile2;
+	u32 isp_merge_lower_x;
+	u32 isp_merge_lower_y;
+	u32 isp_merge_upper_x;
+	u32 isp_merge_upper_y;
+	u32 isp_merge_scale_x;
+	u32 isp_merge_scale_y;
+	u32 rgn_header_size;
+	u32 isp_mtile_size;
+	u32 padding;
+} __aligned(8);
+
+struct rogue_fwif_hwrtdata {
+	/* MList Data Store */
+	aligned_u64 pm_mlist_dev_addr;
+
+	aligned_u64 vce_cat_base[4];
+	aligned_u64 vce_last_cat_base[4];
+	aligned_u64 te_cat_base[4];
+	aligned_u64 te_last_cat_base[4];
+	aligned_u64 alist_cat_base;
+	aligned_u64 alist_last_cat_base;
+
+	aligned_u64 pm_alist_stack_pointer;
+	u32 pm_mlist_stack_pointer;
+
+	u32 hwrt_data_common_fw_addr;
+
+	u32 hwrt_data_flags;
+	enum rogue_fwif_rtdata_state state;
+
+	u32 freelists_fw_addr[MAX_FREELISTS_SIZE] __aligned(8);
+	u32 freelist_hwr_snapshot[MAX_FREELISTS_SIZE];
+
+	aligned_u64 vheap_table_dev_addr;
+
+	struct rogue_fwif_rta_ctl rta_ctl;
+
+	aligned_u64 tail_ptrs_dev_addr;
+	aligned_u64 macrotile_array_dev_addr;
+	aligned_u64 rgn_header_dev_addr;
+	aligned_u64 rtc_dev_addr;
+
+	u32 owner_geom_not_used_by_host __aligned(8);
+
+	bool geom_caches_need_zeroing __aligned(4);
+
+	struct rogue_fwif_cleanup_ctl cleanup_state __aligned(64);
+} __aligned(8);
+
+/*
+ ******************************************************************************
+ * Sync checkpoints
+ ******************************************************************************
+ */
+
+#define PVR_SYNC_CHECKPOINT_UNDEF 0x000
+#define PVR_SYNC_CHECKPOINT_ACTIVE 0xac1     /* Checkpoint has not signaled. */
+#define PVR_SYNC_CHECKPOINT_SIGNALED 0x519   /* Checkpoint has signaled. */
+#define PVR_SYNC_CHECKPOINT_ERRORED 0xeff    /* Checkpoint has been errored. */
+
+#include "pvr_rogue_fwif_check.h"
+
+#endif /* PVR_ROGUE_FWIF_H */
diff --git a/drivers/gpu/drm/imagination/pvr_rogue_fwif_check.h b/drivers/gpu/drm/imagination/pvr_rogue_fwif_check.h
new file mode 100644
index 000000000000..eb8f1ffe7373
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_rogue_fwif_check.h
@@ -0,0 +1,491 @@
+/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#ifndef PVR_ROGUE_FWIF_CHECK_H
+#define PVR_ROGUE_FWIF_CHECK_H
+
+#include <linux/build_bug.h>
+
+#define OFFSET_CHECK(type, member, offset) \
+	static_assert(offsetof(type, member) == (offset), \
+		      "offsetof(" #type ", " #member ") incorrect")
+
+#define SIZE_CHECK(type, size) \
+	static_assert(sizeof(type) == (size), #type " is incorrect size")
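+
+/*
+ * The checks below pin the host-side view of the firmware interface layout:
+ * if a structure in the FW interface headers changes size or field offsets,
+ * the build fails here instead of silently diverging from the layout the
+ * firmware expects.
+ */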
+
+OFFSET_CHECK(struct rogue_fwif_file_info_buf, path, 0);
+OFFSET_CHECK(struct rogue_fwif_file_info_buf, info, 200);
+OFFSET_CHECK(struct rogue_fwif_file_info_buf, line_num, 400);
+SIZE_CHECK(struct rogue_fwif_file_info_buf, 408);
+
+OFFSET_CHECK(struct rogue_fwif_tracebuf_space, trace_pointer, 0);
+OFFSET_CHECK(struct rogue_fwif_tracebuf_space, trace_buffer_fw_addr, 4);
+OFFSET_CHECK(struct rogue_fwif_tracebuf_space, trace_buffer, 8);
+OFFSET_CHECK(struct rogue_fwif_tracebuf_space, assert_buf, 16);
+SIZE_CHECK(struct rogue_fwif_tracebuf_space, 424);
+
+OFFSET_CHECK(struct rogue_fwif_tracebuf, log_type, 0);
+OFFSET_CHECK(struct rogue_fwif_tracebuf, tracebuf, 8);
+OFFSET_CHECK(struct rogue_fwif_tracebuf, tracebuf_size_in_dwords, 856);
+OFFSET_CHECK(struct rogue_fwif_tracebuf, tracebuf_flags, 860);
+SIZE_CHECK(struct rogue_fwif_tracebuf, 864);
+
+OFFSET_CHECK(struct rogue_fw_fault_info, cr_timer, 0);
+OFFSET_CHECK(struct rogue_fw_fault_info, os_timer, 8);
+OFFSET_CHECK(struct rogue_fw_fault_info, data, 16);
+OFFSET_CHECK(struct rogue_fw_fault_info, reserved, 20);
+OFFSET_CHECK(struct rogue_fw_fault_info, fault_buf, 24);
+SIZE_CHECK(struct rogue_fw_fault_info, 432);
+
+OFFSET_CHECK(struct rogue_fwif_sysdata, config_flags, 0);
+OFFSET_CHECK(struct rogue_fwif_sysdata, config_flags_ext, 4);
+OFFSET_CHECK(struct rogue_fwif_sysdata, pow_state, 8);
+OFFSET_CHECK(struct rogue_fwif_sysdata, hw_perf_ridx, 12);
+OFFSET_CHECK(struct rogue_fwif_sysdata, hw_perf_widx, 16);
+OFFSET_CHECK(struct rogue_fwif_sysdata, hw_perf_wrap_count, 20);
+OFFSET_CHECK(struct rogue_fwif_sysdata, hw_perf_size, 24);
+OFFSET_CHECK(struct rogue_fwif_sysdata, hw_perf_drop_count, 28);
+OFFSET_CHECK(struct rogue_fwif_sysdata, hw_perf_ut, 32);
+OFFSET_CHECK(struct rogue_fwif_sysdata, first_drop_ordinal, 36);
+OFFSET_CHECK(struct rogue_fwif_sysdata, last_drop_ordinal, 40);
+OFFSET_CHECK(struct rogue_fwif_sysdata, os_runtime_flags_mirror, 44);
+OFFSET_CHECK(struct rogue_fwif_sysdata, fault_info, 80);
+OFFSET_CHECK(struct rogue_fwif_sysdata, fw_faults, 3536);
+OFFSET_CHECK(struct rogue_fwif_sysdata, cr_poll_addr, 3540);
+OFFSET_CHECK(struct rogue_fwif_sysdata, cr_poll_mask, 3548);
+OFFSET_CHECK(struct rogue_fwif_sysdata, cr_poll_count, 3556);
+OFFSET_CHECK(struct rogue_fwif_sysdata, start_idle_time, 3568);
+OFFSET_CHECK(struct rogue_fwif_sysdata, hwr_state_flags, 3576);
+OFFSET_CHECK(struct rogue_fwif_sysdata, hwr_recovery_flags, 3580);
+OFFSET_CHECK(struct rogue_fwif_sysdata, fw_sys_data_flags, 3616);
+OFFSET_CHECK(struct rogue_fwif_sysdata, mc_config, 3620);
+SIZE_CHECK(struct rogue_fwif_sysdata, 3624);
+
+OFFSET_CHECK(struct rogue_fwif_slr_entry, timestamp, 0);
+OFFSET_CHECK(struct rogue_fwif_slr_entry, fw_ctx_addr, 8);
+OFFSET_CHECK(struct rogue_fwif_slr_entry, num_ufos, 12);
+OFFSET_CHECK(struct rogue_fwif_slr_entry, ccb_name, 16);
+SIZE_CHECK(struct rogue_fwif_slr_entry, 48);
+
+OFFSET_CHECK(struct rogue_fwif_osdata, fw_os_config_flags, 0);
+OFFSET_CHECK(struct rogue_fwif_osdata, fw_sync_check_mark, 4);
+OFFSET_CHECK(struct rogue_fwif_osdata, host_sync_check_mark, 8);
+OFFSET_CHECK(struct rogue_fwif_osdata, forced_updates_requested, 12);
+OFFSET_CHECK(struct rogue_fwif_osdata, slr_log_wp, 16);
+OFFSET_CHECK(struct rogue_fwif_osdata, slr_log_first, 24);
+OFFSET_CHECK(struct rogue_fwif_osdata, slr_log, 72);
+OFFSET_CHECK(struct rogue_fwif_osdata, last_forced_update_time, 552);
+OFFSET_CHECK(struct rogue_fwif_osdata, interrupt_count, 560);
+OFFSET_CHECK(struct rogue_fwif_osdata, kccb_cmds_executed, 568);
+OFFSET_CHECK(struct rogue_fwif_osdata, power_sync_fw_addr, 572);
+OFFSET_CHECK(struct rogue_fwif_osdata, fw_os_data_flags, 576);
+SIZE_CHECK(struct rogue_fwif_osdata, 584);
+
+OFFSET_CHECK(struct rogue_bifinfo, bif_req_status, 0);
+OFFSET_CHECK(struct rogue_bifinfo, bif_mmu_status, 8);
+OFFSET_CHECK(struct rogue_bifinfo, pc_address, 16);
+OFFSET_CHECK(struct rogue_bifinfo, reserved, 24);
+SIZE_CHECK(struct rogue_bifinfo, 32);
+
+OFFSET_CHECK(struct rogue_eccinfo, fault_gpu, 0);
+SIZE_CHECK(struct rogue_eccinfo, 4);
+
+OFFSET_CHECK(struct rogue_mmuinfo, mmu_status, 0);
+OFFSET_CHECK(struct rogue_mmuinfo, pc_address, 16);
+OFFSET_CHECK(struct rogue_mmuinfo, reserved, 24);
+SIZE_CHECK(struct rogue_mmuinfo, 32);
+
+OFFSET_CHECK(struct rogue_pollinfo, thread_num, 0);
+OFFSET_CHECK(struct rogue_pollinfo, cr_poll_addr, 4);
+OFFSET_CHECK(struct rogue_pollinfo, cr_poll_mask, 8);
+OFFSET_CHECK(struct rogue_pollinfo, cr_poll_last_value, 12);
+OFFSET_CHECK(struct rogue_pollinfo, reserved, 16);
+SIZE_CHECK(struct rogue_pollinfo, 24);
+
+OFFSET_CHECK(struct rogue_tlbinfo, bad_addr, 0);
+OFFSET_CHECK(struct rogue_tlbinfo, entry_lo, 4);
+SIZE_CHECK(struct rogue_tlbinfo, 8);
+
+OFFSET_CHECK(struct rogue_hwrinfo, hwr_data, 0);
+OFFSET_CHECK(struct rogue_hwrinfo, cr_timer, 32);
+OFFSET_CHECK(struct rogue_hwrinfo, os_timer, 40);
+OFFSET_CHECK(struct rogue_hwrinfo, frame_num, 48);
+OFFSET_CHECK(struct rogue_hwrinfo, pid, 52);
+OFFSET_CHECK(struct rogue_hwrinfo, active_hwrt_data, 56);
+OFFSET_CHECK(struct rogue_hwrinfo, hwr_number, 60);
+OFFSET_CHECK(struct rogue_hwrinfo, event_status, 64);
+OFFSET_CHECK(struct rogue_hwrinfo, hwr_recovery_flags, 68);
+OFFSET_CHECK(struct rogue_hwrinfo, hwr_type, 72);
+OFFSET_CHECK(struct rogue_hwrinfo, dm, 76);
+OFFSET_CHECK(struct rogue_hwrinfo, core_id, 80);
+OFFSET_CHECK(struct rogue_hwrinfo, cr_time_of_kick, 88);
+OFFSET_CHECK(struct rogue_hwrinfo, cr_time_hw_reset_start, 96);
+OFFSET_CHECK(struct rogue_hwrinfo, cr_time_hw_reset_finish, 104);
+OFFSET_CHECK(struct rogue_hwrinfo, cr_time_freelist_ready, 112);
+OFFSET_CHECK(struct rogue_hwrinfo, reserved, 120);
+SIZE_CHECK(struct rogue_hwrinfo, 136);
+
+OFFSET_CHECK(struct rogue_fwif_hwrinfobuf, hwr_info, 0);
+OFFSET_CHECK(struct rogue_fwif_hwrinfobuf, hwr_counter, 2176);
+OFFSET_CHECK(struct rogue_fwif_hwrinfobuf, write_index, 2180);
+OFFSET_CHECK(struct rogue_fwif_hwrinfobuf, dd_req_count, 2184);
+OFFSET_CHECK(struct rogue_fwif_hwrinfobuf, hwr_info_buf_flags, 2188);
+OFFSET_CHECK(struct rogue_fwif_hwrinfobuf, hwr_dm_locked_up_count, 2192);
+OFFSET_CHECK(struct rogue_fwif_hwrinfobuf, hwr_dm_overran_count, 2228);
+OFFSET_CHECK(struct rogue_fwif_hwrinfobuf, hwr_dm_recovered_count, 2264);
+OFFSET_CHECK(struct rogue_fwif_hwrinfobuf, hwr_dm_false_detect_count, 2300);
+SIZE_CHECK(struct rogue_fwif_hwrinfobuf, 2336);
+
+OFFSET_CHECK(struct rogue_fwif_fwmemcontext, pc_dev_paddr, 0);
+OFFSET_CHECK(struct rogue_fwif_fwmemcontext, page_cat_base_reg_set, 8);
+OFFSET_CHECK(struct rogue_fwif_fwmemcontext, breakpoint_addr, 12);
+OFFSET_CHECK(struct rogue_fwif_fwmemcontext, bp_handler_addr, 16);
+OFFSET_CHECK(struct rogue_fwif_fwmemcontext, breakpoint_ctl, 20);
+OFFSET_CHECK(struct rogue_fwif_fwmemcontext, fw_mem_ctx_flags, 24);
+SIZE_CHECK(struct rogue_fwif_fwmemcontext, 32);
+
+OFFSET_CHECK(struct rogue_fwif_geom_ctx_state_per_geom, geom_reg_vdm_call_stack_pointer, 0);
+OFFSET_CHECK(struct rogue_fwif_geom_ctx_state_per_geom, geom_reg_vdm_call_stack_pointer_init, 8);
+OFFSET_CHECK(struct rogue_fwif_geom_ctx_state_per_geom, geom_reg_vbs_so_prim, 16);
+OFFSET_CHECK(struct rogue_fwif_geom_ctx_state_per_geom, geom_current_idx, 32);
+SIZE_CHECK(struct rogue_fwif_geom_ctx_state_per_geom, 40);
+
+OFFSET_CHECK(struct rogue_fwif_geom_ctx_state, geom_core, 0);
+SIZE_CHECK(struct rogue_fwif_geom_ctx_state, 160);
+
+OFFSET_CHECK(struct rogue_fwif_frag_ctx_state, frag_reg_pm_deallocated_mask_status, 0);
+OFFSET_CHECK(struct rogue_fwif_frag_ctx_state, frag_reg_dm_pds_mtilefree_status, 4);
+OFFSET_CHECK(struct rogue_fwif_frag_ctx_state, ctx_state_flags, 8);
+OFFSET_CHECK(struct rogue_fwif_frag_ctx_state, frag_reg_isp_store, 12);
+SIZE_CHECK(struct rogue_fwif_frag_ctx_state, 16);
+
+OFFSET_CHECK(struct rogue_fwif_compute_ctx_state, ctx_state_flags, 0);
+SIZE_CHECK(struct rogue_fwif_compute_ctx_state, 4);
+
+OFFSET_CHECK(struct rogue_fwif_fwcommoncontext, ccbctl_fw_addr, 0);
+OFFSET_CHECK(struct rogue_fwif_fwcommoncontext, ccb_fw_addr, 4);
+OFFSET_CHECK(struct rogue_fwif_fwcommoncontext, ccb_meta_dma_addr, 8);
+OFFSET_CHECK(struct rogue_fwif_fwcommoncontext, context_state_addr, 24);
+OFFSET_CHECK(struct rogue_fwif_fwcommoncontext, fw_com_ctx_flags, 28);
+OFFSET_CHECK(struct rogue_fwif_fwcommoncontext, priority, 32);
+OFFSET_CHECK(struct rogue_fwif_fwcommoncontext, priority_seq_num, 36);
+OFFSET_CHECK(struct rogue_fwif_fwcommoncontext, rf_cmd_addr, 40);
+OFFSET_CHECK(struct rogue_fwif_fwcommoncontext, stats_pending, 44);
+OFFSET_CHECK(struct rogue_fwif_fwcommoncontext, stats_num_stores, 48);
+OFFSET_CHECK(struct rogue_fwif_fwcommoncontext, stats_num_out_of_memory, 52);
+OFFSET_CHECK(struct rogue_fwif_fwcommoncontext, stats_num_partial_renders, 56);
+OFFSET_CHECK(struct rogue_fwif_fwcommoncontext, dm, 60);
+OFFSET_CHECK(struct rogue_fwif_fwcommoncontext, wait_signal_address, 64);
+OFFSET_CHECK(struct rogue_fwif_fwcommoncontext, wait_signal_node, 72);
+OFFSET_CHECK(struct rogue_fwif_fwcommoncontext, buf_stalled_node, 80);
+OFFSET_CHECK(struct rogue_fwif_fwcommoncontext, cbuf_queue_ctrl_addr, 88);
+OFFSET_CHECK(struct rogue_fwif_fwcommoncontext, robustness_address, 96);
+OFFSET_CHECK(struct rogue_fwif_fwcommoncontext, max_deadline_ms, 104);
+OFFSET_CHECK(struct rogue_fwif_fwcommoncontext, read_offset_needs_reset, 108);
+OFFSET_CHECK(struct rogue_fwif_fwcommoncontext, waiting_node, 112);
+OFFSET_CHECK(struct rogue_fwif_fwcommoncontext, run_node, 120);
+OFFSET_CHECK(struct rogue_fwif_fwcommoncontext, last_failed_ufo, 128);
+OFFSET_CHECK(struct rogue_fwif_fwcommoncontext, fw_mem_context_fw_addr, 136);
+OFFSET_CHECK(struct rogue_fwif_fwcommoncontext, server_common_context_id, 140);
+OFFSET_CHECK(struct rogue_fwif_fwcommoncontext, pid, 144);
+OFFSET_CHECK(struct rogue_fwif_fwcommoncontext, geom_oom_disabled, 148);
+SIZE_CHECK(struct rogue_fwif_fwcommoncontext, 152);
+
+OFFSET_CHECK(struct rogue_fwif_ccb_ctl, write_offset, 0);
+OFFSET_CHECK(struct rogue_fwif_ccb_ctl, read_offset, 4);
+OFFSET_CHECK(struct rogue_fwif_ccb_ctl, wrap_mask, 8);
+OFFSET_CHECK(struct rogue_fwif_ccb_ctl, cmd_size, 12);
+SIZE_CHECK(struct rogue_fwif_ccb_ctl, 16);
+
+OFFSET_CHECK(struct rogue_fwif_kccb_cmd_kick_data, context_fw_addr, 0);
+OFFSET_CHECK(struct rogue_fwif_kccb_cmd_kick_data, client_woff_update, 4);
+OFFSET_CHECK(struct rogue_fwif_kccb_cmd_kick_data, client_wrap_mask_update, 8);
+OFFSET_CHECK(struct rogue_fwif_kccb_cmd_kick_data, num_cleanup_ctl, 12);
+OFFSET_CHECK(struct rogue_fwif_kccb_cmd_kick_data, cleanup_ctl_fw_addr, 16);
+OFFSET_CHECK(struct rogue_fwif_kccb_cmd_kick_data, work_est_cmd_header_offset, 28);
+SIZE_CHECK(struct rogue_fwif_kccb_cmd_kick_data, 32);
+
+OFFSET_CHECK(struct rogue_fwif_kccb_cmd_combined_geom_frag_kick_data, geom_cmd_kick_data, 0);
+OFFSET_CHECK(struct rogue_fwif_kccb_cmd_combined_geom_frag_kick_data, frag_cmd_kick_data, 32);
+SIZE_CHECK(struct rogue_fwif_kccb_cmd_combined_geom_frag_kick_data, 64);
+
+OFFSET_CHECK(struct rogue_fwif_kccb_cmd_force_update_data, context_fw_addr, 0);
+OFFSET_CHECK(struct rogue_fwif_kccb_cmd_force_update_data, ccb_fence_offset, 4);
+SIZE_CHECK(struct rogue_fwif_kccb_cmd_force_update_data, 8);
+
+OFFSET_CHECK(struct rogue_fwif_cleanup_request, cleanup_type, 0);
+OFFSET_CHECK(struct rogue_fwif_cleanup_request, cleanup_data, 4);
+SIZE_CHECK(struct rogue_fwif_cleanup_request, 8);
+
+OFFSET_CHECK(struct rogue_fwif_power_request, pow_type, 0);
+OFFSET_CHECK(struct rogue_fwif_power_request, power_req_data, 4);
+SIZE_CHECK(struct rogue_fwif_power_request, 8);
+
+OFFSET_CHECK(struct rogue_fwif_slcflushinvaldata, context_fw_addr, 0);
+OFFSET_CHECK(struct rogue_fwif_slcflushinvaldata, inval, 4);
+OFFSET_CHECK(struct rogue_fwif_slcflushinvaldata, dm_context, 8);
+OFFSET_CHECK(struct rogue_fwif_slcflushinvaldata, address, 16);
+OFFSET_CHECK(struct rogue_fwif_slcflushinvaldata, size, 24);
+SIZE_CHECK(struct rogue_fwif_slcflushinvaldata, 32);
+
+OFFSET_CHECK(struct rogue_fwif_hwperf_ctrl, opcode, 0);
+OFFSET_CHECK(struct rogue_fwif_hwperf_ctrl, mask, 8);
+SIZE_CHECK(struct rogue_fwif_hwperf_ctrl, 16);
+
+OFFSET_CHECK(struct rogue_fwif_hwperf_config_enable_blks, num_blocks, 0);
+OFFSET_CHECK(struct rogue_fwif_hwperf_config_enable_blks, block_configs_fw_addr, 4);
+SIZE_CHECK(struct rogue_fwif_hwperf_config_enable_blks, 8);
+
+OFFSET_CHECK(struct rogue_fwif_hwperf_config_da_blks, num_blocks, 0);
+OFFSET_CHECK(struct rogue_fwif_hwperf_config_da_blks, block_configs_fw_addr, 4);
+SIZE_CHECK(struct rogue_fwif_hwperf_config_da_blks, 8);
+
+OFFSET_CHECK(struct rogue_fwif_coreclkspeedchange_data, new_clock_speed, 0);
+SIZE_CHECK(struct rogue_fwif_coreclkspeedchange_data, 4);
+
+OFFSET_CHECK(struct rogue_fwif_hwperf_ctrl_blks, enable, 0);
+OFFSET_CHECK(struct rogue_fwif_hwperf_ctrl_blks, num_blocks, 4);
+OFFSET_CHECK(struct rogue_fwif_hwperf_ctrl_blks, block_ids, 8);
+SIZE_CHECK(struct rogue_fwif_hwperf_ctrl_blks, 40);
+
+OFFSET_CHECK(struct rogue_fwif_hwperf_select_custom_cntrs, custom_block, 0);
+OFFSET_CHECK(struct rogue_fwif_hwperf_select_custom_cntrs, num_counters, 2);
+OFFSET_CHECK(struct rogue_fwif_hwperf_select_custom_cntrs, custom_counter_ids_fw_addr, 4);
+SIZE_CHECK(struct rogue_fwif_hwperf_select_custom_cntrs, 8);
+
+OFFSET_CHECK(struct rogue_fwif_zsbuffer_backing_data, zs_buffer_fw_addr, 0);
+OFFSET_CHECK(struct rogue_fwif_zsbuffer_backing_data, done, 4);
+SIZE_CHECK(struct rogue_fwif_zsbuffer_backing_data, 8);
+
+OFFSET_CHECK(struct rogue_fwif_freelist_gs_data, freelist_fw_addr, 0);
+OFFSET_CHECK(struct rogue_fwif_freelist_gs_data, delta_pages, 4);
+OFFSET_CHECK(struct rogue_fwif_freelist_gs_data, new_pages, 8);
+OFFSET_CHECK(struct rogue_fwif_freelist_gs_data, ready_pages, 12);
+SIZE_CHECK(struct rogue_fwif_freelist_gs_data, 16);
+
+OFFSET_CHECK(struct rogue_fwif_freelists_reconstruction_data, freelist_count, 0);
+OFFSET_CHECK(struct rogue_fwif_freelists_reconstruction_data, freelist_ids, 4);
+SIZE_CHECK(struct rogue_fwif_freelists_reconstruction_data, 76);
+
+OFFSET_CHECK(struct rogue_fwif_write_offset_update_data, context_fw_addr, 0);
+SIZE_CHECK(struct rogue_fwif_write_offset_update_data, 8);
+
+OFFSET_CHECK(struct rogue_fwif_kccb_cmd, cmd_type, 0);
+OFFSET_CHECK(struct rogue_fwif_kccb_cmd, kccb_flags, 4);
+OFFSET_CHECK(struct rogue_fwif_kccb_cmd, cmd_data, 8);
+SIZE_CHECK(struct rogue_fwif_kccb_cmd, 88);
+
+OFFSET_CHECK(struct rogue_fwif_fwccb_cmd_context_reset_data, server_common_context_id, 0);
+OFFSET_CHECK(struct rogue_fwif_fwccb_cmd_context_reset_data, reset_reason, 4);
+OFFSET_CHECK(struct rogue_fwif_fwccb_cmd_context_reset_data, dm, 8);
+OFFSET_CHECK(struct rogue_fwif_fwccb_cmd_context_reset_data, reset_job_ref, 12);
+OFFSET_CHECK(struct rogue_fwif_fwccb_cmd_context_reset_data, flags, 16);
+OFFSET_CHECK(struct rogue_fwif_fwccb_cmd_context_reset_data, pc_address, 24);
+OFFSET_CHECK(struct rogue_fwif_fwccb_cmd_context_reset_data, fault_address, 32);
+SIZE_CHECK(struct rogue_fwif_fwccb_cmd_context_reset_data, 40);
+
+OFFSET_CHECK(struct rogue_fwif_fwccb_cmd_fw_pagefault_data, fw_fault_addr, 0);
+SIZE_CHECK(struct rogue_fwif_fwccb_cmd_fw_pagefault_data, 8);
+
+OFFSET_CHECK(struct rogue_fwif_fwccb_cmd, cmd_type, 0);
+OFFSET_CHECK(struct rogue_fwif_fwccb_cmd, fwccb_flags, 4);
+OFFSET_CHECK(struct rogue_fwif_fwccb_cmd, cmd_data, 8);
+SIZE_CHECK(struct rogue_fwif_fwccb_cmd, 88);
+
+OFFSET_CHECK(struct rogue_fwif_ccb_cmd_header, cmd_type, 0);
+OFFSET_CHECK(struct rogue_fwif_ccb_cmd_header, cmd_size, 4);
+OFFSET_CHECK(struct rogue_fwif_ccb_cmd_header, ext_job_ref, 8);
+OFFSET_CHECK(struct rogue_fwif_ccb_cmd_header, int_job_ref, 12);
+OFFSET_CHECK(struct rogue_fwif_ccb_cmd_header, work_est_kick_data, 16);
+SIZE_CHECK(struct rogue_fwif_ccb_cmd_header, 40);
+
+OFFSET_CHECK(struct rogue_fwif_runtime_cfg, active_pm_latency_ms, 0);
+OFFSET_CHECK(struct rogue_fwif_runtime_cfg, runtime_cfg_flags, 4);
+OFFSET_CHECK(struct rogue_fwif_runtime_cfg, active_pm_latency_persistant, 8);
+OFFSET_CHECK(struct rogue_fwif_runtime_cfg, core_clock_speed, 12);
+OFFSET_CHECK(struct rogue_fwif_runtime_cfg, default_dusts_num_init, 16);
+OFFSET_CHECK(struct rogue_fwif_runtime_cfg, phr_mode, 20);
+OFFSET_CHECK(struct rogue_fwif_runtime_cfg, hcs_deadline_ms, 24);
+OFFSET_CHECK(struct rogue_fwif_runtime_cfg, wdg_period_us, 28);
+OFFSET_CHECK(struct rogue_fwif_runtime_cfg, osid_priority, 32);
+OFFSET_CHECK(struct rogue_fwif_runtime_cfg, hwperf_buf_fw_addr, 64);
+OFFSET_CHECK(struct rogue_fwif_runtime_cfg, padding, 68);
+SIZE_CHECK(struct rogue_fwif_runtime_cfg, 72);
+
+OFFSET_CHECK(struct rogue_fwif_connection_ctl, connection_fw_state, 0);
+OFFSET_CHECK(struct rogue_fwif_connection_ctl, connection_os_state, 4);
+OFFSET_CHECK(struct rogue_fwif_connection_ctl, alive_fw_token, 8);
+OFFSET_CHECK(struct rogue_fwif_connection_ctl, alive_os_token, 12);
+SIZE_CHECK(struct rogue_fwif_connection_ctl, 16);
+
+OFFSET_CHECK(struct rogue_fwif_compchecks_bvnc, layout_version, 0);
+OFFSET_CHECK(struct rogue_fwif_compchecks_bvnc, bvnc, 8);
+SIZE_CHECK(struct rogue_fwif_compchecks_bvnc, 16);
+
+OFFSET_CHECK(struct rogue_fwif_init_options, os_count_support, 0);
+SIZE_CHECK(struct rogue_fwif_init_options, 8);
+
+OFFSET_CHECK(struct rogue_fwif_compchecks, hw_bvnc, 0);
+OFFSET_CHECK(struct rogue_fwif_compchecks, fw_bvnc, 16);
+OFFSET_CHECK(struct rogue_fwif_compchecks, fw_processor_version, 32);
+OFFSET_CHECK(struct rogue_fwif_compchecks, ddk_version, 36);
+OFFSET_CHECK(struct rogue_fwif_compchecks, ddk_build, 40);
+OFFSET_CHECK(struct rogue_fwif_compchecks, build_options, 44);
+OFFSET_CHECK(struct rogue_fwif_compchecks, init_options, 48);
+OFFSET_CHECK(struct rogue_fwif_compchecks, updated, 56);
+SIZE_CHECK(struct rogue_fwif_compchecks, 64);
+
+OFFSET_CHECK(struct rogue_fwif_osinit, kernel_ccbctl_fw_addr, 0);
+OFFSET_CHECK(struct rogue_fwif_osinit, kernel_ccb_fw_addr, 4);
+OFFSET_CHECK(struct rogue_fwif_osinit, kernel_ccb_rtn_slots_fw_addr, 8);
+OFFSET_CHECK(struct rogue_fwif_osinit, firmware_ccbctl_fw_addr, 12);
+OFFSET_CHECK(struct rogue_fwif_osinit, firmware_ccb_fw_addr, 16);
+OFFSET_CHECK(struct rogue_fwif_osinit, work_est_firmware_ccbctl_fw_addr, 20);
+OFFSET_CHECK(struct rogue_fwif_osinit, work_est_firmware_ccb_fw_addr, 24);
+OFFSET_CHECK(struct rogue_fwif_osinit, rogue_fwif_hwr_info_buf_ctl_fw_addr, 28);
+OFFSET_CHECK(struct rogue_fwif_osinit, hwr_debug_dump_limit, 32);
+OFFSET_CHECK(struct rogue_fwif_osinit, fw_os_data_fw_addr, 36);
+OFFSET_CHECK(struct rogue_fwif_osinit, rogue_comp_checks, 40);
+SIZE_CHECK(struct rogue_fwif_osinit, 104);
+
+OFFSET_CHECK(struct rogue_fwif_sigbuf_ctl, buffer_fw_addr, 0);
+OFFSET_CHECK(struct rogue_fwif_sigbuf_ctl, left_size_in_regs, 4);
+SIZE_CHECK(struct rogue_fwif_sigbuf_ctl, 8);
+
+OFFSET_CHECK(struct pdvfs_opp, volt, 0);
+OFFSET_CHECK(struct pdvfs_opp, freq, 4);
+SIZE_CHECK(struct pdvfs_opp, 8);
+
+OFFSET_CHECK(struct rogue_fwif_pdvfs_opp, opp_values, 0);
+OFFSET_CHECK(struct rogue_fwif_pdvfs_opp, min_opp_point, 128);
+OFFSET_CHECK(struct rogue_fwif_pdvfs_opp, max_opp_point, 132);
+SIZE_CHECK(struct rogue_fwif_pdvfs_opp, 136);
+
+OFFSET_CHECK(struct rogue_fwif_counter_dump_ctl, buffer_fw_addr, 0);
+OFFSET_CHECK(struct rogue_fwif_counter_dump_ctl, size_in_dwords, 4);
+SIZE_CHECK(struct rogue_fwif_counter_dump_ctl, 8);
+
+OFFSET_CHECK(struct rogue_hwperf_bvnc, bvnc_string, 0);
+OFFSET_CHECK(struct rogue_hwperf_bvnc, bvnc_km_feature_flags, 24);
+OFFSET_CHECK(struct rogue_hwperf_bvnc, num_bvnc_blocks, 28);
+OFFSET_CHECK(struct rogue_hwperf_bvnc, bvnc_gpu_cores, 30);
+OFFSET_CHECK(struct rogue_hwperf_bvnc, bvnc_blocks, 32);
+SIZE_CHECK(struct rogue_hwperf_bvnc, 160);
+
+OFFSET_CHECK(struct rogue_fwif_sysinit, fault_phys_addr, 0);
+OFFSET_CHECK(struct rogue_fwif_sysinit, pds_exec_base, 8);
+OFFSET_CHECK(struct rogue_fwif_sysinit, usc_exec_base, 16);
+OFFSET_CHECK(struct rogue_fwif_sysinit, fbcdc_state_table_base, 24);
+OFFSET_CHECK(struct rogue_fwif_sysinit, fbcdc_large_state_table_base, 32);
+OFFSET_CHECK(struct rogue_fwif_sysinit, texture_heap_base, 40);
+OFFSET_CHECK(struct rogue_fwif_sysinit, hw_perf_filter, 48);
+OFFSET_CHECK(struct rogue_fwif_sysinit, slc3_fence_dev_addr, 56);
+OFFSET_CHECK(struct rogue_fwif_sysinit, tpu_trilinear_frac_mask, 64);
+OFFSET_CHECK(struct rogue_fwif_sysinit, sigbuf_ctl, 80);
+OFFSET_CHECK(struct rogue_fwif_sysinit, pdvfs_opp_info, 152);
+OFFSET_CHECK(struct rogue_fwif_sysinit, coremem_data_store, 288);
+OFFSET_CHECK(struct rogue_fwif_sysinit, counter_dump_ctl, 304);
+OFFSET_CHECK(struct rogue_fwif_sysinit, filter_flags, 312);
+OFFSET_CHECK(struct rogue_fwif_sysinit, runtime_cfg_fw_addr, 316);
+OFFSET_CHECK(struct rogue_fwif_sysinit, trace_buf_ctl_fw_addr, 320);
+OFFSET_CHECK(struct rogue_fwif_sysinit, fw_sys_data_fw_addr, 324);
+OFFSET_CHECK(struct rogue_fwif_sysinit, gpu_util_fw_cb_ctl_fw_addr, 328);
+OFFSET_CHECK(struct rogue_fwif_sysinit, reg_cfg_fw_addr, 332);
+OFFSET_CHECK(struct rogue_fwif_sysinit, hwperf_ctl_fw_addr, 336);
+OFFSET_CHECK(struct rogue_fwif_sysinit, align_checks, 340);
+OFFSET_CHECK(struct rogue_fwif_sysinit, initial_core_clock_speed, 344);
+OFFSET_CHECK(struct rogue_fwif_sysinit, active_pm_latency_ms, 348);
+OFFSET_CHECK(struct rogue_fwif_sysinit, firmware_started, 352);
+OFFSET_CHECK(struct rogue_fwif_sysinit, marker_val, 356);
+OFFSET_CHECK(struct rogue_fwif_sysinit, firmware_started_timestamp, 360);
+OFFSET_CHECK(struct rogue_fwif_sysinit, jones_disable_mask, 364);
+OFFSET_CHECK(struct rogue_fwif_sysinit, firmware_perf, 368);
+OFFSET_CHECK(struct rogue_fwif_sysinit, core_clock_rate_fw_addr, 372);
+OFFSET_CHECK(struct rogue_fwif_sysinit, gpio_validation_mode, 376);
+OFFSET_CHECK(struct rogue_fwif_sysinit, bvnc_km_feature_flags, 380);
+OFFSET_CHECK(struct rogue_fwif_sysinit, tfbc_compression_control, 540);
+SIZE_CHECK(struct rogue_fwif_sysinit, 544);
+
+OFFSET_CHECK(struct rogue_fwif_gpu_util_fwcb, time_corr, 0);
+OFFSET_CHECK(struct rogue_fwif_gpu_util_fwcb, time_corr_seq_count, 10240);
+OFFSET_CHECK(struct rogue_fwif_gpu_util_fwcb, gpu_util_flags, 10244);
+OFFSET_CHECK(struct rogue_fwif_gpu_util_fwcb, last_word, 10248);
+OFFSET_CHECK(struct rogue_fwif_gpu_util_fwcb, stats_counters, 10256);
+SIZE_CHECK(struct rogue_fwif_gpu_util_fwcb, 10280);
+
+OFFSET_CHECK(struct rogue_fwif_rta_ctl, render_target_index, 0);
+OFFSET_CHECK(struct rogue_fwif_rta_ctl, current_render_target, 4);
+OFFSET_CHECK(struct rogue_fwif_rta_ctl, active_render_targets, 8);
+OFFSET_CHECK(struct rogue_fwif_rta_ctl, cumul_active_render_targets, 12);
+OFFSET_CHECK(struct rogue_fwif_rta_ctl, valid_render_targets_fw_addr, 16);
+OFFSET_CHECK(struct rogue_fwif_rta_ctl, rta_num_partial_renders_fw_addr, 20);
+OFFSET_CHECK(struct rogue_fwif_rta_ctl, max_rts, 24);
+OFFSET_CHECK(struct rogue_fwif_rta_ctl, rta_ctl_flags, 28);
+SIZE_CHECK(struct rogue_fwif_rta_ctl, 32);
+
+OFFSET_CHECK(struct rogue_fwif_freelist, freelist_dev_addr, 0);
+OFFSET_CHECK(struct rogue_fwif_freelist, current_dev_addr, 8);
+OFFSET_CHECK(struct rogue_fwif_freelist, current_stack_top, 16);
+OFFSET_CHECK(struct rogue_fwif_freelist, max_pages, 20);
+OFFSET_CHECK(struct rogue_fwif_freelist, grow_pages, 24);
+OFFSET_CHECK(struct rogue_fwif_freelist, current_pages, 28);
+OFFSET_CHECK(struct rogue_fwif_freelist, allocated_page_count, 32);
+OFFSET_CHECK(struct rogue_fwif_freelist, allocated_mmu_page_count, 36);
+OFFSET_CHECK(struct rogue_fwif_freelist, freelist_id, 40);
+OFFSET_CHECK(struct rogue_fwif_freelist, grow_pending, 44);
+OFFSET_CHECK(struct rogue_fwif_freelist, ready_pages, 48);
+OFFSET_CHECK(struct rogue_fwif_freelist, freelist_flags, 52);
+OFFSET_CHECK(struct rogue_fwif_freelist, pm_global_pb, 56);
+SIZE_CHECK(struct rogue_fwif_freelist, 64);
+
+OFFSET_CHECK(struct rogue_fwif_hwrtdata_common, geom_caches_need_zeroing, 0);
+OFFSET_CHECK(struct rogue_fwif_hwrtdata_common, screen_pixel_max, 4);
+OFFSET_CHECK(struct rogue_fwif_hwrtdata_common, multi_sample_ctl, 8);
+OFFSET_CHECK(struct rogue_fwif_hwrtdata_common, flipped_multi_sample_ctl, 16);
+OFFSET_CHECK(struct rogue_fwif_hwrtdata_common, tpc_stride, 24);
+OFFSET_CHECK(struct rogue_fwif_hwrtdata_common, tpc_size, 28);
+OFFSET_CHECK(struct rogue_fwif_hwrtdata_common, te_screen, 32);
+OFFSET_CHECK(struct rogue_fwif_hwrtdata_common, mtile_stride, 36);
+OFFSET_CHECK(struct rogue_fwif_hwrtdata_common, teaa, 40);
+OFFSET_CHECK(struct rogue_fwif_hwrtdata_common, te_mtile1, 44);
+OFFSET_CHECK(struct rogue_fwif_hwrtdata_common, te_mtile2, 48);
+OFFSET_CHECK(struct rogue_fwif_hwrtdata_common, isp_merge_lower_x, 52);
+OFFSET_CHECK(struct rogue_fwif_hwrtdata_common, isp_merge_lower_y, 56);
+OFFSET_CHECK(struct rogue_fwif_hwrtdata_common, isp_merge_upper_x, 60);
+OFFSET_CHECK(struct rogue_fwif_hwrtdata_common, isp_merge_upper_y, 64);
+OFFSET_CHECK(struct rogue_fwif_hwrtdata_common, isp_merge_scale_x, 68);
+OFFSET_CHECK(struct rogue_fwif_hwrtdata_common, isp_merge_scale_y, 72);
+OFFSET_CHECK(struct rogue_fwif_hwrtdata_common, rgn_header_size, 76);
+OFFSET_CHECK(struct rogue_fwif_hwrtdata_common, isp_mtile_size, 80);
+SIZE_CHECK(struct rogue_fwif_hwrtdata_common, 88);
+
+OFFSET_CHECK(struct rogue_fwif_hwrtdata, pm_mlist_dev_addr, 0);
+OFFSET_CHECK(struct rogue_fwif_hwrtdata, vce_cat_base, 8);
+OFFSET_CHECK(struct rogue_fwif_hwrtdata, vce_last_cat_base, 40);
+OFFSET_CHECK(struct rogue_fwif_hwrtdata, te_cat_base, 72);
+OFFSET_CHECK(struct rogue_fwif_hwrtdata, te_last_cat_base, 104);
+OFFSET_CHECK(struct rogue_fwif_hwrtdata, alist_cat_base, 136);
+OFFSET_CHECK(struct rogue_fwif_hwrtdata, alist_last_cat_base, 144);
+OFFSET_CHECK(struct rogue_fwif_hwrtdata, pm_alist_stack_pointer, 152);
+OFFSET_CHECK(struct rogue_fwif_hwrtdata, pm_mlist_stack_pointer, 160);
+OFFSET_CHECK(struct rogue_fwif_hwrtdata, hwrt_data_common_fw_addr, 164);
+OFFSET_CHECK(struct rogue_fwif_hwrtdata, hwrt_data_flags, 168);
+OFFSET_CHECK(struct rogue_fwif_hwrtdata, state, 172);
+OFFSET_CHECK(struct rogue_fwif_hwrtdata, freelists_fw_addr, 176);
+OFFSET_CHECK(struct rogue_fwif_hwrtdata, freelist_hwr_snapshot, 188);
+OFFSET_CHECK(struct rogue_fwif_hwrtdata, vheap_table_dev_addr, 200);
+OFFSET_CHECK(struct rogue_fwif_hwrtdata, rta_ctl, 208);
+OFFSET_CHECK(struct rogue_fwif_hwrtdata, tail_ptrs_dev_addr, 240);
+OFFSET_CHECK(struct rogue_fwif_hwrtdata, macrotile_array_dev_addr, 248);
+OFFSET_CHECK(struct rogue_fwif_hwrtdata, rgn_header_dev_addr, 256);
+OFFSET_CHECK(struct rogue_fwif_hwrtdata, rtc_dev_addr, 264);
+OFFSET_CHECK(struct rogue_fwif_hwrtdata, owner_geom_not_used_by_host, 272);
+OFFSET_CHECK(struct rogue_fwif_hwrtdata, geom_caches_need_zeroing, 276);
+OFFSET_CHECK(struct rogue_fwif_hwrtdata, cleanup_state, 320);
+SIZE_CHECK(struct rogue_fwif_hwrtdata, 384);
+
+OFFSET_CHECK(struct rogue_fwif_sync_checkpoint, state, 0);
+OFFSET_CHECK(struct rogue_fwif_sync_checkpoint, fw_ref_count, 4);
+SIZE_CHECK(struct rogue_fwif_sync_checkpoint, 8);
+
+#endif /* PVR_ROGUE_FWIF_CHECK_H */
diff --git a/drivers/gpu/drm/imagination/pvr_rogue_fwif_client.h b/drivers/gpu/drm/imagination/pvr_rogue_fwif_client.h
new file mode 100644
index 000000000000..abe166e02a73
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_rogue_fwif_client.h
@@ -0,0 +1,371 @@
+/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#ifndef PVR_ROGUE_FWIF_CLIENT_H
+#define PVR_ROGUE_FWIF_CLIENT_H
+
+#include <linux/bits.h>
+#include <linux/kernel.h>
+#include <linux/sizes.h>
+#include <linux/types.h>
+
+#include "pvr_rogue_fwif_shared.h"
+
+/*
+ * Page size used for Parameter Management.
+ */
+#define ROGUE_PM_PAGE_SIZE SZ_4K
+
+/*
+ * Minimum/Maximum PB size.
+ *
+ * Base page size is dependent on core:
+ *   S6/S6XT/S7               = 50 pages
+ *   S8XE                     = 40 pages
+ *   S8XE with BRN66011 fixed = 25 pages
+ *
+ * Minimum PB = Base Pages + (NUM_TE_PIPES-1)*16K + (NUM_VCE_PIPES-1)*64K +
+ *              IF_PM_PREALLOC(NUM_TE_PIPES*16K + NUM_VCE_PIPES*16K)
+ *
+ * The maximum PB size must ensure that no PM address space can be fully used,
+ * because if the full address space were used it would wrap and corrupt itself.
+ * Since there are two freelists (the local one is always minimum sized), this
+ * can be described as the following three conditions being met:
+ *
+ *   (Minimum PB + Maximum PB)  <  ALIST PM address space size (16GB)
+ *   (Minimum PB + Maximum PB)  <  TE PM address space size (16GB) / NUM_TE_PIPES
+ *   (Minimum PB + Maximum PB)  <  VCE PM address space size (16GB) / NUM_VCE_PIPES
+ *
+ * Since the max of NUM_TE_PIPES and NUM_VCE_PIPES is 4, we have a hard limit
+ * of 4GB minus the Minimum PB. For convenience we take the smaller power-of-2
+ * value of 2GB. This is far more than any current applications use.
+ */
+#define ROGUE_PM_MAX_FREELIST_SIZE SZ_2G
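+
+/*
+ * Purely illustrative worked example of the formula above (the pipe counts
+ * are assumptions for illustration, not values taken from this header): for
+ * an S8XE core with BRN66011 fixed (25 base pages of ROGUE_PM_PAGE_SIZE, i.e.
+ * 100KiB), one TE pipe and one VCE pipe, with PM preallocation:
+ *
+ *   Minimum PB = 100KiB + (1 - 1) * 16KiB + (1 - 1) * 64KiB
+ *                + (1 * 16KiB + 1 * 16KiB)
+ *              = 132KiB
+ *
+ * which, together with ROGUE_PM_MAX_FREELIST_SIZE (2GiB), comfortably
+ * satisfies the three address space conditions listed above.
+ */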
+
+/*
+ * Flags supported by the geometry DM command i.e. &struct rogue_fwif_cmd_geom.
+ */
+
+#define ROGUE_GEOM_FLAGS_FIRSTKICK BIT_MASK(0)
+#define ROGUE_GEOM_FLAGS_LASTKICK BIT_MASK(1)
+/* Use single core in a multi core setup. */
+#define ROGUE_GEOM_FLAGS_SINGLE_CORE BIT_MASK(3)
+
+/*
+ * Flags supported by the fragment DM command i.e. &struct rogue_fwif_cmd_frag.
+ */
+
+/* Use single core in a multi core setup. */
+#define ROGUE_FRAG_FLAGS_SINGLE_CORE BIT_MASK(3)
+/* Indicates whether this render produces visibility results. */
+#define ROGUE_FRAG_FLAGS_GET_VIS_RESULTS BIT_MASK(5)
+/* Indicates whether a depth buffer is present. */
+#define ROGUE_FRAG_FLAGS_DEPTHBUFFER BIT_MASK(7)
+/* Indicates whether a stencil buffer is present. */
+#define ROGUE_FRAG_FLAGS_STENCILBUFFER BIT_MASK(8)
+/* Indicates whether a scratch buffer is present. */
+#define ROGUE_FRAG_FLAGS_SCRATCHBUFFER BIT_MASK(19)
+/* Disallow compute overlapped with this render. */
+#define ROGUE_FRAG_FLAGS_PREVENT_CDM_OVERLAP BIT_MASK(26)
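+
+/*
+ * The per-DM flags above are bit masks intended to be OR'd together into the
+ * 'flags' field of the corresponding command struct, e.g. (illustrative only)
+ * a fragment command with depth and stencil buffers would carry
+ * ROGUE_FRAG_FLAGS_DEPTHBUFFER | ROGUE_FRAG_FLAGS_STENCILBUFFER.
+ */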
+
+/*
+ * Flags supported by the compute DM command i.e. &struct rogue_fwif_cmd_compute.
+ */
+
+#define ROGUE_COMPUTE_FLAG_PREVENT_ALL_OVERLAP BIT_MASK(2)
+/* Use single core in a multi core setup. */
+#define ROGUE_COMPUTE_FLAG_SINGLE_CORE BIT_MASK(5)
+
+/*
+ * Flags supported by the transfer DM command i.e. &struct rogue_fwif_cmd_transfer.
+ */
+
+/* Use single core in a multi core setup. */
+#define ROGUE_TRANSFER_FLAGS_SINGLE_CORE BIT_MASK(1)
+
+/*
+ ************************************************
+ * Parameter/HWRTData control structures.
+ ************************************************
+ */
+
+/*
+ * Configuration registers which need to be loaded by the firmware before a geometry
+ * job can be started.
+ */
+struct rogue_fwif_geom_regs {
+	u64 vdm_ctrl_stream_base;
+	u64 tpu_border_colour_table;
+
+	/* Only used when feature VDM_DRAWINDIRECT present. */
+	u64 vdm_draw_indirect0;
+	/* Only used when feature VDM_DRAWINDIRECT present. */
+	u32 vdm_draw_indirect1;
+
+	u32 ppp_ctrl;
+	u32 te_psg;
+	/* Only used when BRN 49927 present. */
+	u32 tpu;
+
+	u32 vdm_context_resume_task0_size;
+	/* Only used when feature VDM_OBJECT_LEVEL_LLS present. */
+	u32 vdm_context_resume_task3_size;
+
+	/* Only used when BRN 56279 or BRN 67381 present. */
+	u32 pds_ctrl;
+
+	u32 view_idx;
+
+	/* Only used when feature TESSELLATION present */
+	u32 pds_coeff_free_prog;
+
+	u32 padding;
+};
+
+/* Only used when BRN 44455 or BRN 63027 present. */
+struct rogue_fwif_dummy_rgnhdr_init_geom_regs {
+	u64 te_psgregion_addr;
+};
+
+/*
+ * Represents a geometry command that can be used to tile a whole scene's objects as
+ * per TA behavior.
+ */
+struct rogue_fwif_cmd_geom {
+	/*
+	 * rogue_fwif_cmd_geom_frag_shared field must always be at the beginning of the
+	 * struct.
+	 *
+	 * The command struct (rogue_fwif_cmd_geom) is shared between Client and
+	 * Firmware. Kernel is unable to perform read/write operations on the
+	 * command struct; the SHARED region is the only exception to this rule.
+	 * This region must be the first member so that Kernel can easily access it.
+	 * For more info, see rogue_fwif_cmd_geom_frag_shared definition.
+	 */
+	struct rogue_fwif_cmd_geom_frag_shared cmd_shared;
+
+	struct rogue_fwif_geom_regs regs __aligned(8);
+	u32 flags __aligned(8);
+
+	/*
+	 * Holds the geometry/fragment fence value to allow the fragment partial render command
+	 * to go through.
+	 */
+	struct rogue_fwif_ufo partial_render_geom_frag_fence;
+
+	/* Only used when BRN 44455 or BRN 63027 present. */
+	struct rogue_fwif_dummy_rgnhdr_init_geom_regs dummy_rgnhdr_init_geom_regs __aligned(8);
+
+	/* Only used when BRN 61484 or BRN 66333 present. */
+	u32 brn61484_66333_live_rt;
+
+	u32 padding;
+};
+
+/*
+ * Configuration registers which need to be loaded by the firmware before ISP
+ * can be started.
+ */
+struct rogue_fwif_frag_regs {
+	u32 usc_pixel_output_ctrl;
+
+#define ROGUE_MAXIMUM_OUTPUT_REGISTERS_PER_PIXEL 8U
+	u32 usc_clear_register[ROGUE_MAXIMUM_OUTPUT_REGISTERS_PER_PIXEL];
+
+	u32 isp_bgobjdepth;
+	u32 isp_bgobjvals;
+	u32 isp_aa;
+	/* Only used when feature S7_TOP_INFRASTRUCTURE present. */
+	u32 isp_xtp_pipe_enable;
+
+	u32 isp_ctl;
+
+	/* Only used when BRN 49927 present. */
+	u32 tpu;
+
+	u32 event_pixel_pds_info;
+
+	/* Only used when feature CLUSTER_GROUPING present. */
+	u32 pixel_phantom;
+
+	u32 view_idx;
+
+	u32 event_pixel_pds_data;
+
+	/* Only used when BRN 65101 present. */
+	u32 brn65101_event_pixel_pds_data;
+
+	/* Only used when feature GPU_MULTICORE_SUPPORT or BRN 47217 present. */
+	u32 isp_oclqry_stride;
+
+	/* Only used when feature ZLS_SUBTILE present. */
+	u32 isp_zls_pixels;
+
+	/* Only used when feature ISP_ZLS_D24_S8_PACKING_OGL_MODE present. */
+	u32 rgx_cr_blackpearl_fix;
+
+	/* All values below the aligned_u64 must be 64 bit. */
+	aligned_u64 isp_scissor_base;
+	u64 isp_dbias_base;
+	u64 isp_oclqry_base;
+	u64 isp_zlsctl;
+	u64 isp_zload_store_base;
+	u64 isp_stencil_load_store_base;
+
+	/*
+	 * Only used when feature FBCDC_ALGORITHM present and value < 3 or feature
+	 * FB_CDC_V4 present. Additionally, BRNs 48754, 60227, 72310 and 72311 must
+	 * not be present.
+	 */
+	u64 fb_cdc_zls;
+
+#define ROGUE_PBE_WORDS_REQUIRED_FOR_RENDERS 3U
+	u64 pbe_word[8U][ROGUE_PBE_WORDS_REQUIRED_FOR_RENDERS];
+	u64 tpu_border_colour_table;
+	u64 pds_bgnd[3U];
+
+	/* Only used when BRN 65101 present. */
+	u64 pds_bgnd_brn65101[3U];
+
+	u64 pds_pr_bgnd[3U];
+
+	/* Only used when BRN 62850 or 62865 present. */
+	u64 isp_dummy_stencil_store_base;
+
+	/* Only used when BRN 66193 present. */
+	u64 isp_dummy_depth_store_base;
+
+	/* Only used when BRN 67182 present. */
+	u32 rgnhdr_single_rt_size;
+	/* Only used when BRN 67182 present. */
+	u32 rgnhdr_scratch_offset;
+};
+
+struct rogue_fwif_cmd_frag {
+	struct rogue_fwif_cmd_geom_frag_shared cmd_shared __aligned(8);
+
+	struct rogue_fwif_frag_regs regs __aligned(8);
+	/* command control flags. */
+	u32 flags;
+	/* Stride IN BYTES for Z-Buffer in case of RTAs. */
+	u32 zls_stride;
+	/* Stride IN BYTES for S-Buffer in case of RTAs. */
+	u32 sls_stride;
+
+	/* Only used if feature GPU_MULTICORE_SUPPORT present. */
+	u32 execute_count;
+};
+
+/*
+ * Configuration registers which need to be loaded by the firmware before CDM
+ * can be started.
+ */
+struct rogue_fwif_compute_regs {
+	u64 tpu_border_colour_table;
+
+	/* Only used when feature CDM_USER_MODE_QUEUE present. */
+	u64 cdm_cb_queue;
+
+	/* Only used when feature CDM_USER_MODE_QUEUE present. */
+	u64 cdm_cb_base;
+	/* Only used when feature CDM_USER_MODE_QUEUE present. */
+	u64 cdm_cb;
+
+	/* Only used when feature CDM_USER_MODE_QUEUE is not present. */
+	u64 cdm_ctrl_stream_base;
+
+	u64 cdm_context_state_base_addr;
+
+	/* Only used when BRN 49927 is present. */
+	u32 tpu;
+	u32 cdm_resume_pds1;
+
+	/* Only used when feature COMPUTE_MORTON_CAPABLE present. */
+	u32 cdm_item;
+
+	/* Only used when feature CLUSTER_GROUPING present. */
+	u32 compute_cluster;
+
+	/* Only used when feature TPU_DM_GLOBAL_REGISTERS present. */
+	u32 tpu_tag_cdm_ctrl;
+
+	u32 padding;
+};
+
+struct rogue_fwif_cmd_compute {
+	/* Common command attributes */
+	struct rogue_fwif_cmd_common common __aligned(8);
+
+	/* CDM registers */
+	struct rogue_fwif_compute_regs regs;
+
+	/* Control flags */
+	u32 flags __aligned(8);
+
+	/* Only used when feature UNIFIED_STORE_VIRTUAL_PARTITIONING present. */
+	u32 num_temp_regions;
+
+	/* Only used when feature CDM_USER_MODE_QUEUE present. */
+	u32 stream_start_offset;
+
+	/* Only used when feature GPU_MULTICORE_SUPPORT present. */
+	u32 execute_count;
+};
+
+struct rogue_fwif_transfer_regs {
+	/*
+	 * All 32 bit values should be added in the top section. This then requires only a
+	 * single aligned_u64 to align all the 64 bit values in the second section.
+	 */
+	u32 isp_bgobjvals;
+
+	u32 usc_pixel_output_ctrl;
+	u32 usc_clear_register0;
+	u32 usc_clear_register1;
+	u32 usc_clear_register2;
+	u32 usc_clear_register3;
+
+	u32 isp_mtile_size;
+	u32 isp_render_origin;
+	u32 isp_ctl;
+
+	/* Only used when feature S7_TOP_INFRASTRUCTURE present. */
+	u32 isp_xtp_pipe_enable;
+	u32 isp_aa;
+
+	u32 event_pixel_pds_info;
+
+	u32 event_pixel_pds_code;
+	u32 event_pixel_pds_data;
+
+	u32 isp_render;
+	u32 isp_rgn;
+
+	/* Only used when feature GPU_MULTICORE_SUPPORT present. */
+	u32 frag_screen;
+
+	/* All values below the aligned_u64 must be 64 bit. */
+	aligned_u64 pds_bgnd0_base;
+	u64 pds_bgnd1_base;
+	u64 pds_bgnd3_sizeinfo;
+
+	u64 isp_mtile_base;
+#define ROGUE_PBE_WORDS_REQUIRED_FOR_TQS 3
+	/* TQ_MAX_RENDER_TARGETS * PBE_STATE_SIZE */
+	u64 pbe_wordx_mrty[3U * ROGUE_PBE_WORDS_REQUIRED_FOR_TQS];
+};
+
+struct rogue_fwif_cmd_transfer {
+	/* Common command attributes */
+	struct rogue_fwif_cmd_common common __aligned(8);
+
+	struct rogue_fwif_transfer_regs regs __aligned(8);
+
+	u32 flags;
+
+	u32 padding;
+};
+
+#include "pvr_rogue_fwif_client_check.h"
+
+#endif /* PVR_ROGUE_FWIF_CLIENT_H */
diff --git a/drivers/gpu/drm/imagination/pvr_rogue_fwif_client_check.h b/drivers/gpu/drm/imagination/pvr_rogue_fwif_client_check.h
new file mode 100644
index 000000000000..9fd1c5b826a5
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_rogue_fwif_client_check.h
@@ -0,0 +1,133 @@
+/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#ifndef PVR_ROGUE_FWIF_CLIENT_CHECK_H
+#define PVR_ROGUE_FWIF_CLIENT_CHECK_H
+
+#include <linux/build_bug.h>
+
+#define OFFSET_CHECK(type, member, offset) \
+	static_assert(offsetof(type, member) == (offset), \
+		      "offsetof(" #type ", " #member ") incorrect")
+
+#define SIZE_CHECK(type, size) \
+	static_assert(sizeof(type) == (size), #type " is incorrect size")
+
+OFFSET_CHECK(struct rogue_fwif_geom_regs, vdm_ctrl_stream_base, 0);
+OFFSET_CHECK(struct rogue_fwif_geom_regs, tpu_border_colour_table, 8);
+OFFSET_CHECK(struct rogue_fwif_geom_regs, vdm_draw_indirect0, 16);
+OFFSET_CHECK(struct rogue_fwif_geom_regs, vdm_draw_indirect1, 24);
+OFFSET_CHECK(struct rogue_fwif_geom_regs, ppp_ctrl, 28);
+OFFSET_CHECK(struct rogue_fwif_geom_regs, te_psg, 32);
+OFFSET_CHECK(struct rogue_fwif_geom_regs, tpu, 36);
+OFFSET_CHECK(struct rogue_fwif_geom_regs, vdm_context_resume_task0_size, 40);
+OFFSET_CHECK(struct rogue_fwif_geom_regs, vdm_context_resume_task3_size, 44);
+OFFSET_CHECK(struct rogue_fwif_geom_regs, pds_ctrl, 48);
+OFFSET_CHECK(struct rogue_fwif_geom_regs, view_idx, 52);
+OFFSET_CHECK(struct rogue_fwif_geom_regs, pds_coeff_free_prog, 56);
+SIZE_CHECK(struct rogue_fwif_geom_regs, 64);
+
+OFFSET_CHECK(struct rogue_fwif_dummy_rgnhdr_init_geom_regs, te_psgregion_addr, 0);
+SIZE_CHECK(struct rogue_fwif_dummy_rgnhdr_init_geom_regs, 8);
+
+OFFSET_CHECK(struct rogue_fwif_cmd_geom, cmd_shared, 0);
+OFFSET_CHECK(struct rogue_fwif_cmd_geom, regs, 16);
+OFFSET_CHECK(struct rogue_fwif_cmd_geom, flags, 80);
+OFFSET_CHECK(struct rogue_fwif_cmd_geom, partial_render_geom_frag_fence, 84);
+OFFSET_CHECK(struct rogue_fwif_cmd_geom, dummy_rgnhdr_init_geom_regs, 96);
+OFFSET_CHECK(struct rogue_fwif_cmd_geom, brn61484_66333_live_rt, 104);
+SIZE_CHECK(struct rogue_fwif_cmd_geom, 112);
+
+OFFSET_CHECK(struct rogue_fwif_frag_regs, usc_pixel_output_ctrl, 0);
+OFFSET_CHECK(struct rogue_fwif_frag_regs, usc_clear_register, 4);
+OFFSET_CHECK(struct rogue_fwif_frag_regs, isp_bgobjdepth, 36);
+OFFSET_CHECK(struct rogue_fwif_frag_regs, isp_bgobjvals, 40);
+OFFSET_CHECK(struct rogue_fwif_frag_regs, isp_aa, 44);
+OFFSET_CHECK(struct rogue_fwif_frag_regs, isp_xtp_pipe_enable, 48);
+OFFSET_CHECK(struct rogue_fwif_frag_regs, isp_ctl, 52);
+OFFSET_CHECK(struct rogue_fwif_frag_regs, tpu, 56);
+OFFSET_CHECK(struct rogue_fwif_frag_regs, event_pixel_pds_info, 60);
+OFFSET_CHECK(struct rogue_fwif_frag_regs, pixel_phantom, 64);
+OFFSET_CHECK(struct rogue_fwif_frag_regs, view_idx, 68);
+OFFSET_CHECK(struct rogue_fwif_frag_regs, event_pixel_pds_data, 72);
+OFFSET_CHECK(struct rogue_fwif_frag_regs, brn65101_event_pixel_pds_data, 76);
+OFFSET_CHECK(struct rogue_fwif_frag_regs, isp_oclqry_stride, 80);
+OFFSET_CHECK(struct rogue_fwif_frag_regs, isp_zls_pixels, 84);
+OFFSET_CHECK(struct rogue_fwif_frag_regs, rgx_cr_blackpearl_fix, 88);
+OFFSET_CHECK(struct rogue_fwif_frag_regs, isp_scissor_base, 96);
+OFFSET_CHECK(struct rogue_fwif_frag_regs, isp_dbias_base, 104);
+OFFSET_CHECK(struct rogue_fwif_frag_regs, isp_oclqry_base, 112);
+OFFSET_CHECK(struct rogue_fwif_frag_regs, isp_zlsctl, 120);
+OFFSET_CHECK(struct rogue_fwif_frag_regs, isp_zload_store_base, 128);
+OFFSET_CHECK(struct rogue_fwif_frag_regs, isp_stencil_load_store_base, 136);
+OFFSET_CHECK(struct rogue_fwif_frag_regs, fb_cdc_zls, 144);
+OFFSET_CHECK(struct rogue_fwif_frag_regs, pbe_word, 152);
+OFFSET_CHECK(struct rogue_fwif_frag_regs, tpu_border_colour_table, 344);
+OFFSET_CHECK(struct rogue_fwif_frag_regs, pds_bgnd, 352);
+OFFSET_CHECK(struct rogue_fwif_frag_regs, pds_bgnd_brn65101, 376);
+OFFSET_CHECK(struct rogue_fwif_frag_regs, pds_pr_bgnd, 400);
+OFFSET_CHECK(struct rogue_fwif_frag_regs, isp_dummy_stencil_store_base, 424);
+OFFSET_CHECK(struct rogue_fwif_frag_regs, isp_dummy_depth_store_base, 432);
+OFFSET_CHECK(struct rogue_fwif_frag_regs, rgnhdr_single_rt_size, 440);
+OFFSET_CHECK(struct rogue_fwif_frag_regs, rgnhdr_scratch_offset, 444);
+SIZE_CHECK(struct rogue_fwif_frag_regs, 448);
+
+OFFSET_CHECK(struct rogue_fwif_cmd_frag, cmd_shared, 0);
+OFFSET_CHECK(struct rogue_fwif_cmd_frag, regs, 16);
+OFFSET_CHECK(struct rogue_fwif_cmd_frag, flags, 464);
+OFFSET_CHECK(struct rogue_fwif_cmd_frag, zls_stride, 468);
+OFFSET_CHECK(struct rogue_fwif_cmd_frag, sls_stride, 472);
+OFFSET_CHECK(struct rogue_fwif_cmd_frag, execute_count, 476);
+SIZE_CHECK(struct rogue_fwif_cmd_frag, 480);
+
+OFFSET_CHECK(struct rogue_fwif_compute_regs, tpu_border_colour_table, 0);
+OFFSET_CHECK(struct rogue_fwif_compute_regs, cdm_cb_queue, 8);
+OFFSET_CHECK(struct rogue_fwif_compute_regs, cdm_cb_base, 16);
+OFFSET_CHECK(struct rogue_fwif_compute_regs, cdm_cb, 24);
+OFFSET_CHECK(struct rogue_fwif_compute_regs, cdm_ctrl_stream_base, 32);
+OFFSET_CHECK(struct rogue_fwif_compute_regs, cdm_context_state_base_addr, 40);
+OFFSET_CHECK(struct rogue_fwif_compute_regs, tpu, 48);
+OFFSET_CHECK(struct rogue_fwif_compute_regs, cdm_resume_pds1, 52);
+OFFSET_CHECK(struct rogue_fwif_compute_regs, cdm_item, 56);
+OFFSET_CHECK(struct rogue_fwif_compute_regs, compute_cluster, 60);
+OFFSET_CHECK(struct rogue_fwif_compute_regs, tpu_tag_cdm_ctrl, 64);
+SIZE_CHECK(struct rogue_fwif_compute_regs, 72);
+
+OFFSET_CHECK(struct rogue_fwif_cmd_compute, common, 0);
+OFFSET_CHECK(struct rogue_fwif_cmd_compute, regs, 8);
+OFFSET_CHECK(struct rogue_fwif_cmd_compute, flags, 80);
+OFFSET_CHECK(struct rogue_fwif_cmd_compute, num_temp_regions, 84);
+OFFSET_CHECK(struct rogue_fwif_cmd_compute, stream_start_offset, 88);
+OFFSET_CHECK(struct rogue_fwif_cmd_compute, execute_count, 92);
+SIZE_CHECK(struct rogue_fwif_cmd_compute, 96);
+
+OFFSET_CHECK(struct rogue_fwif_transfer_regs, isp_bgobjvals, 0);
+OFFSET_CHECK(struct rogue_fwif_transfer_regs, usc_pixel_output_ctrl, 4);
+OFFSET_CHECK(struct rogue_fwif_transfer_regs, usc_clear_register0, 8);
+OFFSET_CHECK(struct rogue_fwif_transfer_regs, usc_clear_register1, 12);
+OFFSET_CHECK(struct rogue_fwif_transfer_regs, usc_clear_register2, 16);
+OFFSET_CHECK(struct rogue_fwif_transfer_regs, usc_clear_register3, 20);
+OFFSET_CHECK(struct rogue_fwif_transfer_regs, isp_mtile_size, 24);
+OFFSET_CHECK(struct rogue_fwif_transfer_regs, isp_render_origin, 28);
+OFFSET_CHECK(struct rogue_fwif_transfer_regs, isp_ctl, 32);
+OFFSET_CHECK(struct rogue_fwif_transfer_regs, isp_xtp_pipe_enable, 36);
+OFFSET_CHECK(struct rogue_fwif_transfer_regs, isp_aa, 40);
+OFFSET_CHECK(struct rogue_fwif_transfer_regs, event_pixel_pds_info, 44);
+OFFSET_CHECK(struct rogue_fwif_transfer_regs, event_pixel_pds_code, 48);
+OFFSET_CHECK(struct rogue_fwif_transfer_regs, event_pixel_pds_data, 52);
+OFFSET_CHECK(struct rogue_fwif_transfer_regs, isp_render, 56);
+OFFSET_CHECK(struct rogue_fwif_transfer_regs, isp_rgn, 60);
+OFFSET_CHECK(struct rogue_fwif_transfer_regs, frag_screen, 64);
+OFFSET_CHECK(struct rogue_fwif_transfer_regs, pds_bgnd0_base, 72);
+OFFSET_CHECK(struct rogue_fwif_transfer_regs, pds_bgnd1_base, 80);
+OFFSET_CHECK(struct rogue_fwif_transfer_regs, pds_bgnd3_sizeinfo, 88);
+OFFSET_CHECK(struct rogue_fwif_transfer_regs, isp_mtile_base, 96);
+OFFSET_CHECK(struct rogue_fwif_transfer_regs, pbe_wordx_mrty, 104);
+SIZE_CHECK(struct rogue_fwif_transfer_regs, 176);
+
+OFFSET_CHECK(struct rogue_fwif_cmd_transfer, common, 0);
+OFFSET_CHECK(struct rogue_fwif_cmd_transfer, regs, 8);
+OFFSET_CHECK(struct rogue_fwif_cmd_transfer, flags, 184);
+SIZE_CHECK(struct rogue_fwif_cmd_transfer, 192);
+
+#endif /* PVR_ROGUE_FWIF_CLIENT_CHECK_H */
diff --git a/drivers/gpu/drm/imagination/pvr_rogue_fwif_common.h b/drivers/gpu/drm/imagination/pvr_rogue_fwif_common.h
new file mode 100644
index 000000000000..18b69500ba96
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_rogue_fwif_common.h
@@ -0,0 +1,60 @@
+/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#ifndef PVR_ROGUE_FWIF_COMMON_H
+#define PVR_ROGUE_FWIF_COMMON_H
+
+#include <linux/build_bug.h>
+
+/*
+ * This macro represents a mask of LSBs that must be zero on data structure
+ * sizes and offsets to ensure they are 8-byte granular on types shared between
+ * the FW and host driver.
+ */
+#define PVR_FW_ALIGNMENT_LSB 7U
+
+/* Macro to test structure size alignment. */
+#define PVR_FW_STRUCT_SIZE_ASSERT(_a)                            \
+	static_assert((sizeof(_a) & PVR_FW_ALIGNMENT_LSB) == 0U, \
+		      "Size of " #_a " is not properly aligned")
+
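+/*
+ * Illustrative usage (hypothetical example, not an assertion present in this
+ * file): a structure shared with the firmware could be checked with
+ *
+ *   PVR_FW_STRUCT_SIZE_ASSERT(struct rogue_fwif_cmd_geom);
+ *
+ * which fails the build if the structure size is not a multiple of 8 bytes.
+ */
+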
+/* The master definition for data masters known to the firmware. */
+
+#define PVR_FWIF_DM_GP (0)
+/* Either TDM or 2D DM is present. */
+/* When the 'tla' feature is present in the hw (as per @pvr_device_features). */
+#define PVR_FWIF_DM_2D (1)
+/*
+ * When the 'fastrender_dm' feature is present in the hw (as per
+ * @pvr_device_features).
+ */
+#define PVR_FWIF_DM_TDM (1)
+
+#define PVR_FWIF_DM_GEOM (2)
+#define PVR_FWIF_DM_FRAG (3)
+#define PVR_FWIF_DM_CDM (4)
+#define PVR_FWIF_DM_RAY (5)
+#define PVR_FWIF_DM_GEOM2 (6)
+#define PVR_FWIF_DM_GEOM3 (7)
+#define PVR_FWIF_DM_GEOM4 (8)
+
+#define PVR_FWIF_DM_LAST PVR_FWIF_DM_GEOM4
+
+/* Maximum number of DMs in use: GP, 2D/TDM, GEOM, 3D, CDM, RAY, GEOM2, GEOM3, GEOM4 */
+#define PVR_FWIF_DM_MAX (PVR_FWIF_DM_LAST + 1U)
+
+/* GPU Utilisation states */
+#define PVR_FWIF_GPU_UTIL_STATE_IDLE 0U
+#define PVR_FWIF_GPU_UTIL_STATE_ACTIVE 1U
+#define PVR_FWIF_GPU_UTIL_STATE_BLOCKED 2U
+#define PVR_FWIF_GPU_UTIL_STATE_NUM 3U
+#define PVR_FWIF_GPU_UTIL_STATE_MASK 0x3ULL
+
+/*
+ * Maximum number of register writes that can be done by the register
+ * programmer (FW or META DMA). This is not a HW limitation; it is only
+ * a protection against malformed inputs to the register programmer.
+ */
+#define PVR_MAX_NUM_REGISTER_PROGRAMMER_WRITES 128U
+
+#endif /* PVR_ROGUE_FWIF_COMMON_H */
diff --git a/drivers/gpu/drm/imagination/pvr_rogue_fwif_dev_info.h b/drivers/gpu/drm/imagination/pvr_rogue_fwif_dev_info.h
new file mode 100644
index 000000000000..53e6b9d9de3e
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_rogue_fwif_dev_info.h
@@ -0,0 +1,112 @@
+/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#ifndef __PVR_ROGUE_FWIF_DEV_INFO_H__
+#define __PVR_ROGUE_FWIF_DEV_INFO_H__
+
+enum {
+	PVR_FW_HAS_BRN_44079 = 0,
+	PVR_FW_HAS_BRN_47217,
+	PVR_FW_HAS_BRN_48492,
+	PVR_FW_HAS_BRN_48545,
+	PVR_FW_HAS_BRN_49927,
+	PVR_FW_HAS_BRN_50767,
+	PVR_FW_HAS_BRN_51764,
+	PVR_FW_HAS_BRN_62269,
+	PVR_FW_HAS_BRN_63142,
+	PVR_FW_HAS_BRN_63553,
+	PVR_FW_HAS_BRN_66011,
+
+	PVR_FW_HAS_BRN_MAX
+};
+
+enum {
+	PVR_FW_HAS_ERN_35421 = 0,
+	PVR_FW_HAS_ERN_38020,
+	PVR_FW_HAS_ERN_38748,
+	PVR_FW_HAS_ERN_42064,
+	PVR_FW_HAS_ERN_42290,
+	PVR_FW_HAS_ERN_42606,
+	PVR_FW_HAS_ERN_47025,
+	PVR_FW_HAS_ERN_57596,
+
+	PVR_FW_HAS_ERN_MAX
+};
+
+enum {
+	PVR_FW_HAS_FEATURE_AXI_ACELITE = 0,
+	PVR_FW_HAS_FEATURE_CDM_CONTROL_STREAM_FORMAT,
+	PVR_FW_HAS_FEATURE_CLUSTER_GROUPING,
+	PVR_FW_HAS_FEATURE_COMMON_STORE_SIZE_IN_DWORDS,
+	PVR_FW_HAS_FEATURE_COMPUTE,
+	PVR_FW_HAS_FEATURE_COMPUTE_MORTON_CAPABLE,
+	PVR_FW_HAS_FEATURE_COMPUTE_OVERLAP,
+	PVR_FW_HAS_FEATURE_COREID_PER_OS,
+	PVR_FW_HAS_FEATURE_DYNAMIC_DUST_POWER,
+	PVR_FW_HAS_FEATURE_ECC_RAMS,
+	PVR_FW_HAS_FEATURE_FBCDC,
+	PVR_FW_HAS_FEATURE_FBCDC_ALGORITHM,
+	PVR_FW_HAS_FEATURE_FBCDC_ARCHITECTURE,
+	PVR_FW_HAS_FEATURE_FBC_MAX_DEFAULT_DESCRIPTORS,
+	PVR_FW_HAS_FEATURE_FBC_MAX_LARGE_DESCRIPTORS,
+	PVR_FW_HAS_FEATURE_FB_CDC_V4,
+	PVR_FW_HAS_FEATURE_GPU_MULTICORE_SUPPORT,
+	PVR_FW_HAS_FEATURE_GPU_VIRTUALISATION,
+	PVR_FW_HAS_FEATURE_GS_RTA_SUPPORT,
+	PVR_FW_HAS_FEATURE_IRQ_PER_OS,
+	PVR_FW_HAS_FEATURE_ISP_MAX_TILES_IN_FLIGHT,
+	PVR_FW_HAS_FEATURE_ISP_SAMPLES_PER_PIXEL,
+	PVR_FW_HAS_FEATURE_ISP_ZLS_D24_S8_PACKING_OGL_MODE,
+	PVR_FW_HAS_FEATURE_LAYOUT_MARS,
+	PVR_FW_HAS_FEATURE_MAX_PARTITIONS,
+	PVR_FW_HAS_FEATURE_META,
+	PVR_FW_HAS_FEATURE_META_COREMEM_SIZE,
+	PVR_FW_HAS_FEATURE_MIPS,
+	PVR_FW_HAS_FEATURE_NUM_CLUSTERS,
+	PVR_FW_HAS_FEATURE_NUM_ISP_IPP_PIPES,
+	PVR_FW_HAS_FEATURE_NUM_OSIDS,
+	PVR_FW_HAS_FEATURE_NUM_RASTER_PIPES,
+	PVR_FW_HAS_FEATURE_PBE2_IN_XE,
+	PVR_FW_HAS_FEATURE_PBVNC_COREID_REG,
+	PVR_FW_HAS_FEATURE_PERFBUS,
+	PVR_FW_HAS_FEATURE_PERF_COUNTER_BATCH,
+	PVR_FW_HAS_FEATURE_PHYS_BUS_WIDTH,
+	PVR_FW_HAS_FEATURE_RISCV_FW_PROCESSOR,
+	PVR_FW_HAS_FEATURE_ROGUEXE,
+	PVR_FW_HAS_FEATURE_S7_TOP_INFRASTRUCTURE,
+	PVR_FW_HAS_FEATURE_SIMPLE_INTERNAL_PARAMETER_FORMAT,
+	PVR_FW_HAS_FEATURE_SIMPLE_INTERNAL_PARAMETER_FORMAT_V2,
+	PVR_FW_HAS_FEATURE_SIMPLE_PARAMETER_FORMAT_VERSION,
+	PVR_FW_HAS_FEATURE_SLC_BANKS,
+	PVR_FW_HAS_FEATURE_SLC_CACHE_LINE_SIZE_BITS,
+	PVR_FW_HAS_FEATURE_SLC_SIZE_CONFIGURABLE,
+	PVR_FW_HAS_FEATURE_SLC_SIZE_IN_KILOBYTES,
+	PVR_FW_HAS_FEATURE_SOC_TIMER,
+	PVR_FW_HAS_FEATURE_SYS_BUS_SECURE_RESET,
+	PVR_FW_HAS_FEATURE_TESSELLATION,
+	PVR_FW_HAS_FEATURE_TILE_REGION_PROTECTION,
+	PVR_FW_HAS_FEATURE_TILE_SIZE_X,
+	PVR_FW_HAS_FEATURE_TILE_SIZE_Y,
+	PVR_FW_HAS_FEATURE_TLA,
+	PVR_FW_HAS_FEATURE_TPU_CEM_DATAMASTER_GLOBAL_REGISTERS,
+	PVR_FW_HAS_FEATURE_TPU_DM_GLOBAL_REGISTERS,
+	PVR_FW_HAS_FEATURE_TPU_FILTERING_MODE_CONTROL,
+	PVR_FW_HAS_FEATURE_USC_MIN_OUTPUT_REGISTERS_PER_PIX,
+	PVR_FW_HAS_FEATURE_VDM_DRAWINDIRECT,
+	PVR_FW_HAS_FEATURE_VDM_OBJECT_LEVEL_LLS,
+	PVR_FW_HAS_FEATURE_VIRTUAL_ADDRESS_SPACE_BITS,
+	PVR_FW_HAS_FEATURE_WATCHDOG_TIMER,
+	PVR_FW_HAS_FEATURE_WORKGROUP_PROTECTION,
+	PVR_FW_HAS_FEATURE_XE_ARCHITECTURE,
+	PVR_FW_HAS_FEATURE_XE_MEMORY_HIERARCHY,
+	PVR_FW_HAS_FEATURE_XE_TPU2,
+	PVR_FW_HAS_FEATURE_XPU_MAX_REGBANKS_ADDR_WIDTH,
+	PVR_FW_HAS_FEATURE_XPU_MAX_SLAVES,
+	PVR_FW_HAS_FEATURE_XPU_REGISTER_BROADCAST,
+	PVR_FW_HAS_FEATURE_XT_TOP_INFRASTRUCTURE,
+	PVR_FW_HAS_FEATURE_ZLS_SUBTILE,
+
+	PVR_FW_HAS_FEATURE_MAX
+};
+
+#endif /* __PVR_ROGUE_FWIF_DEV_INFO_H__ */
diff --git a/drivers/gpu/drm/imagination/pvr_rogue_fwif_resetframework.h b/drivers/gpu/drm/imagination/pvr_rogue_fwif_resetframework.h
new file mode 100644
index 000000000000..3d89524fe5d2
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_rogue_fwif_resetframework.h
@@ -0,0 +1,28 @@
+/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#ifndef PVR_ROGUE_FWIF_RESETFRAMEWORK_H
+#define PVR_ROGUE_FWIF_RESETFRAMEWORK_H
+
+#include <linux/bits.h>
+#include <linux/types.h>
+
+#include "pvr_rogue_fwif_shared.h"
+
+struct rogue_fwif_rf_registers {
+	union {
+		u64 cdmreg_cdm_cb_base;
+		u64 cdmreg_cdm_ctrl_stream_base;
+	};
+	u64 cdmreg_cdm_cb_queue;
+	u64 cdmreg_cdm_cb;
+};
+
+struct rogue_fwif_rf_cmd {
+	/* THIS MUST BE THE LAST MEMBER OF THE CONTAINING STRUCTURE */
+	struct rogue_fwif_rf_registers fw_registers __aligned(8);
+};
+
+#define ROGUE_FWIF_RF_CMD_SIZE sizeof(struct rogue_fwif_rf_cmd)
+
+#endif /* PVR_ROGUE_FWIF_RESETFRAMEWORK_H */
diff --git a/drivers/gpu/drm/imagination/pvr_rogue_fwif_sf.h b/drivers/gpu/drm/imagination/pvr_rogue_fwif_sf.h
new file mode 100644
index 000000000000..0fcc500fab21
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_rogue_fwif_sf.h
@@ -0,0 +1,1648 @@
+/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#ifndef PVR_ROGUE_FWIF_SF_H
+#define PVR_ROGUE_FWIF_SF_H
+
+/*
+ ******************************************************************************
+ * *DO*NOT* rearrange or delete lines in rogue_fw_log_sfgroups or stid_fmts;
+ *          doing so WILL BREAK fw tracing message compatibility with previous
+ *          fw versions. Only add new ones, if required.
+ ******************************************************************************
+ */
+
+/* Available log groups. */
+enum rogue_fw_log_sfgroups {
+	ROGUE_FW_GROUP_NULL,
+	ROGUE_FW_GROUP_MAIN,
+	ROGUE_FW_GROUP_CLEANUP,
+	ROGUE_FW_GROUP_CSW,
+	ROGUE_FW_GROUP_PM,
+	ROGUE_FW_GROUP_RTD,
+	ROGUE_FW_GROUP_SPM,
+	ROGUE_FW_GROUP_MTS,
+	ROGUE_FW_GROUP_BIF,
+	ROGUE_FW_GROUP_MISC,
+	ROGUE_FW_GROUP_POW,
+	ROGUE_FW_GROUP_HWR,
+	ROGUE_FW_GROUP_HWP,
+	ROGUE_FW_GROUP_RPM,
+	ROGUE_FW_GROUP_DMA,
+	ROGUE_FW_GROUP_DBG,
+};
+
+#define PVR_SF_STRING_MAX_SIZE 256U
+
+/* pair of string format id and string formats */
+struct rogue_fw_stid_fmt {
+	u32 id;
+	char name[PVR_SF_STRING_MAX_SIZE];
+};
+
+/*
+ *  Each string format ID in the stid_fmts table below is assigned a u32 value
+ *  of the following format:
+ *  31 30 28 27       20   19  16    15  12      11            0   bits
+ *  -   ---   ---- ----     ----      ----        ---- ---- ----
+ *     0-11: id number
+ *    12-15: group id number
+ *    16-19: number of parameters
+ *    20-27: unused
+ *    28-30: active: identify SF packet, otherwise regular int32
+ *       31: reserved for signed/unsigned compatibility
+ *
+ *   The ROGUE_FW_LOG_CREATESFID() macro below constructs these values.
+ */
+#define ROGUE_FW_LOG_IDMARKER (0x70000000U)
+#define ROGUE_FW_LOG_CREATESFID(a, b, e) ((u32)(a) | ((u32)(b) << 12) | ((u32)(e) << 16) | \
+					  ROGUE_FW_LOG_IDMARKER)
+
+#define ROGUE_FW_LOG_IDMASK (0xFFF00000)
+#define ROGUE_FW_LOG_VALIDID(I) (((I) & ROGUE_FW_LOG_IDMASK) == ROGUE_FW_LOG_IDMARKER)
+
+/* Return the group id that the given SF id belongs to. */
+#define ROGUE_FW_SF_GID(x) (((u32)(x) >> 12) & 0xfU)
+/* Return how many arguments the SF (string format) with the given id requires. */
+#define ROGUE_FW_SF_PARAMNUM(x) (((u32)(x) >> 16) & 0xfU)
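+
+/*
+ * Illustrative example (values follow from the macros above, nothing new is
+ * defined here): ROGUE_FW_LOG_CREATESFID(2, ROGUE_FW_GROUP_MAIN, 2) evaluates
+ * to 0x70021002, for which ROGUE_FW_LOG_VALIDID() is true, ROGUE_FW_SF_GID()
+ * returns ROGUE_FW_GROUP_MAIN (1) and ROGUE_FW_SF_PARAMNUM() returns 2.
+ */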
+
+/* pair of string format id and string formats */
+struct rogue_km_stid_fmt {
+	u32 id;
+	const char *name;
+};
+
+static const struct rogue_km_stid_fmt stid_fmts[] = {
+	{ ROGUE_FW_LOG_CREATESFID(0, ROGUE_FW_GROUP_NULL, 0),
+	  "You should not use this string" },
+
+	{ ROGUE_FW_LOG_CREATESFID(1, ROGUE_FW_GROUP_MAIN, 6),
+	  "Kick 3D: FWCtx 0x%08.8x @ %d, RTD 0x%08x. Partial render:%d, CSW resume:%d, prio:%d" },
+	{ ROGUE_FW_LOG_CREATESFID(2, ROGUE_FW_GROUP_MAIN, 2),
+	  "3D finished, HWRTData0State=%x, HWRTData1State=%x" },
+	{ ROGUE_FW_LOG_CREATESFID(3, ROGUE_FW_GROUP_MAIN, 4),
+	  "Kick 3D TQ: FWCtx 0x%08.8x @ %d, CSW resume:%d, prio: %d" },
+	{ ROGUE_FW_LOG_CREATESFID(4, ROGUE_FW_GROUP_MAIN, 0),
+	  "3D Transfer finished" },
+	{ ROGUE_FW_LOG_CREATESFID(5, ROGUE_FW_GROUP_MAIN, 3),
+	  "Kick Compute: FWCtx 0x%08.8x @ %d, prio: %d" },
+	{ ROGUE_FW_LOG_CREATESFID(6, ROGUE_FW_GROUP_MAIN, 0),
+	  "Compute finished" },
+	{ ROGUE_FW_LOG_CREATESFID(7, ROGUE_FW_GROUP_MAIN, 7),
+	  "Kick TA: FWCtx 0x%08.8x @ %d, RTD 0x%08x. First kick:%d, Last kick:%d, CSW resume:%d, prio:%d" },
+	{ ROGUE_FW_LOG_CREATESFID(8, ROGUE_FW_GROUP_MAIN, 0),
+	  "TA finished" },
+	{ ROGUE_FW_LOG_CREATESFID(9, ROGUE_FW_GROUP_MAIN, 0),
+	  "Restart TA after partial render" },
+	{ ROGUE_FW_LOG_CREATESFID(10, ROGUE_FW_GROUP_MAIN, 0),
+	  "Resume TA without partial render" },
+	{ ROGUE_FW_LOG_CREATESFID(11, ROGUE_FW_GROUP_MAIN, 2),
+	  "Out of memory! Context 0x%08x, HWRTData 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(12, ROGUE_FW_GROUP_MAIN, 3),
+	  "Kick TLA: FWCtx 0x%08.8x @ %d, prio:%d" },
+	{ ROGUE_FW_LOG_CREATESFID(13, ROGUE_FW_GROUP_MAIN, 0),
+	  "TLA finished" },
+	{ ROGUE_FW_LOG_CREATESFID(14, ROGUE_FW_GROUP_MAIN, 3),
+	  "cCCB Woff update = %d, DM = %d, FWCtx = 0x%08.8x" },
+	{ ROGUE_FW_LOG_CREATESFID(16, ROGUE_FW_GROUP_MAIN, 2),
+	  "UFO Checks for FWCtx 0x%08.8x @ %d" },
+	{ ROGUE_FW_LOG_CREATESFID(17, ROGUE_FW_GROUP_MAIN, 3),
+	  "UFO Check: [0x%08.8x] is 0x%08.8x requires 0x%08.8x" },
+	{ ROGUE_FW_LOG_CREATESFID(18, ROGUE_FW_GROUP_MAIN, 0),
+	  "UFO Checks succeeded" },
+	{ ROGUE_FW_LOG_CREATESFID(19, ROGUE_FW_GROUP_MAIN, 3),
+	  "UFO PR-Check: [0x%08.8x] is 0x%08.8x requires >= 0x%08.8x" },
+	{ ROGUE_FW_LOG_CREATESFID(20, ROGUE_FW_GROUP_MAIN, 1),
+	  "UFO SPM PR-Checks for FWCtx 0x%08.8x" },
+	{ ROGUE_FW_LOG_CREATESFID(21, ROGUE_FW_GROUP_MAIN, 4),
+	  "UFO SPM special PR-Check: [0x%08.8x] is 0x%08.8x requires >= ????????, [0x%08.8x] is ???????? requires 0x%08.8x" },
+	{ ROGUE_FW_LOG_CREATESFID(22, ROGUE_FW_GROUP_MAIN, 2),
+	  "UFO Updates for FWCtx 0x%08.8x @ %d" },
+	{ ROGUE_FW_LOG_CREATESFID(23, ROGUE_FW_GROUP_MAIN, 2),
+	  "UFO Update: [0x%08.8x] = 0x%08.8x" },
+	{ ROGUE_FW_LOG_CREATESFID(24, ROGUE_FW_GROUP_MAIN, 1),
+	  "ASSERT Failed: line %d of:" },
+	{ ROGUE_FW_LOG_CREATESFID(25, ROGUE_FW_GROUP_MAIN, 2),
+	  "HWR: Lockup detected on DM%d, FWCtx: 0x%08.8x" },
+	{ ROGUE_FW_LOG_CREATESFID(26, ROGUE_FW_GROUP_MAIN, 3),
+	  "HWR: Reset fw state for DM%d, FWCtx: 0x%08.8x, MemCtx: 0x%08.8x" },
+	{ ROGUE_FW_LOG_CREATESFID(27, ROGUE_FW_GROUP_MAIN, 0),
+	  "HWR: Reset HW" },
+	{ ROGUE_FW_LOG_CREATESFID(28, ROGUE_FW_GROUP_MAIN, 0),
+	  "HWR: Lockup recovered." },
+	{ ROGUE_FW_LOG_CREATESFID(29, ROGUE_FW_GROUP_MAIN, 1),
+	  "HWR: False lockup detected for DM%u" },
+	{ ROGUE_FW_LOG_CREATESFID(30, ROGUE_FW_GROUP_MAIN, 3),
+	  "Alignment check %d failed: host = 0x%x, fw = 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(31, ROGUE_FW_GROUP_MAIN, 0),
+	  "GP USC triggered" },
+	{ ROGUE_FW_LOG_CREATESFID(32, ROGUE_FW_GROUP_MAIN, 2),
+	  "Overallocating %u temporary registers and %u shared registers for breakpoint handler" },
+	{ ROGUE_FW_LOG_CREATESFID(33, ROGUE_FW_GROUP_MAIN, 1),
+	  "Setting breakpoint: Addr 0x%08.8x" },
+	{ ROGUE_FW_LOG_CREATESFID(34, ROGUE_FW_GROUP_MAIN, 0),
+	  "Store breakpoint state" },
+	{ ROGUE_FW_LOG_CREATESFID(35, ROGUE_FW_GROUP_MAIN, 0),
+	  "Unsetting BP Registers" },
+	{ ROGUE_FW_LOG_CREATESFID(36, ROGUE_FW_GROUP_MAIN, 1),
+	  "Active RTs expected to be zero, actually %u" },
+	{ ROGUE_FW_LOG_CREATESFID(37, ROGUE_FW_GROUP_MAIN, 1),
+	  "RTC present, %u active render targets" },
+	{ ROGUE_FW_LOG_CREATESFID(38, ROGUE_FW_GROUP_MAIN, 1),
+	  "Estimated Power 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(39, ROGUE_FW_GROUP_MAIN, 1),
+	  "RTA render target %u" },
+	{ ROGUE_FW_LOG_CREATESFID(40, ROGUE_FW_GROUP_MAIN, 2),
+	  "Kick RTA render %u of %u" },
+	{ ROGUE_FW_LOG_CREATESFID(41, ROGUE_FW_GROUP_MAIN, 3),
+	  "HWR sizes check %d failed: addresses = %d, sizes = %d" },
+	{ ROGUE_FW_LOG_CREATESFID(42, ROGUE_FW_GROUP_MAIN, 1),
+	  "Pow: DUSTS_ENABLE = 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(43, ROGUE_FW_GROUP_MAIN, 2),
+	  "Pow: On(1)/Off(0): %d, Units: 0x%08.8x" },
+	{ ROGUE_FW_LOG_CREATESFID(44, ROGUE_FW_GROUP_MAIN, 2),
+	  "Pow: Changing number of dusts from %d to %d" },
+	{ ROGUE_FW_LOG_CREATESFID(45, ROGUE_FW_GROUP_MAIN, 0),
+	  "Pow: Sidekick ready to be powered down" },
+	{ ROGUE_FW_LOG_CREATESFID(46, ROGUE_FW_GROUP_MAIN, 2),
+	  "Pow: Request to change num of dusts to %d (bPowRascalDust=%d)" },
+	{ ROGUE_FW_LOG_CREATESFID(47, ROGUE_FW_GROUP_MAIN, 0),
+	  "No ZS Buffer used for partial render (store)" },
+	{ ROGUE_FW_LOG_CREATESFID(48, ROGUE_FW_GROUP_MAIN, 0),
+	  "No Depth/Stencil Buffer used for partial render (load)" },
+	{ ROGUE_FW_LOG_CREATESFID(49, ROGUE_FW_GROUP_MAIN, 2),
+	  "HWR: Lock-up DM%d FWCtx: 0x%08.8x" },
+	{ ROGUE_FW_LOG_CREATESFID(50, ROGUE_FW_GROUP_MAIN, 7),
+	  "MLIST%d checker: CatBase TE=0x%08x (%d Pages), VCE=0x%08x (%d Pages), ALIST=0x%08x, IsTA=%d" },
+	{ ROGUE_FW_LOG_CREATESFID(51, ROGUE_FW_GROUP_MAIN, 3),
+	  "MLIST%d checker: MList[%d] = 0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(52, ROGUE_FW_GROUP_MAIN, 1),
+	  "MLIST%d OK" },
+	{ ROGUE_FW_LOG_CREATESFID(53, ROGUE_FW_GROUP_MAIN, 1),
+	  "MLIST%d is empty" },
+	{ ROGUE_FW_LOG_CREATESFID(54, ROGUE_FW_GROUP_MAIN, 8),
+	  "MLIST%d checker: CatBase TE=0x%08x%08x, VCE=0x%08x%08x, ALIST=0x%08x%08x, IsTA=%d" },
+	{ ROGUE_FW_LOG_CREATESFID(55, ROGUE_FW_GROUP_MAIN, 0),
+	  "3D OQ flush kick" },
+	{ ROGUE_FW_LOG_CREATESFID(56, ROGUE_FW_GROUP_MAIN, 1),
+	  "HWPerf block ID (0x%x) unsupported by device" },
+	{ ROGUE_FW_LOG_CREATESFID(57, ROGUE_FW_GROUP_MAIN, 2),
+	  "Setting breakpoint: Addr 0x%08.8x DM%u" },
+	{ ROGUE_FW_LOG_CREATESFID(58, ROGUE_FW_GROUP_MAIN, 3),
+	  "Kick RTU: FWCtx 0x%08.8x @ %d, prio: %d" },
+	{ ROGUE_FW_LOG_CREATESFID(59, ROGUE_FW_GROUP_MAIN, 1),
+	  "RDM finished on context %u" },
+	{ ROGUE_FW_LOG_CREATESFID(60, ROGUE_FW_GROUP_MAIN, 3),
+	  "Kick SHG: FWCtx 0x%08.8x @ %d, prio: %d" },
+	{ ROGUE_FW_LOG_CREATESFID(61, ROGUE_FW_GROUP_MAIN, 0),
+	  "SHG finished" },
+	{ ROGUE_FW_LOG_CREATESFID(62, ROGUE_FW_GROUP_MAIN, 1),
+	  "FBA finished on context %u" },
+	{ ROGUE_FW_LOG_CREATESFID(63, ROGUE_FW_GROUP_MAIN, 0),
+	  "UFO Checks failed" },
+	{ ROGUE_FW_LOG_CREATESFID(64, ROGUE_FW_GROUP_MAIN, 1),
+	  "Kill DM%d start" },
+	{ ROGUE_FW_LOG_CREATESFID(65, ROGUE_FW_GROUP_MAIN, 1),
+	  "Kill DM%d complete" },
+	{ ROGUE_FW_LOG_CREATESFID(66, ROGUE_FW_GROUP_MAIN, 2),
+	  "FC%u cCCB Woff update = %u" },
+	{ ROGUE_FW_LOG_CREATESFID(67, ROGUE_FW_GROUP_MAIN, 4),
+	  "Kick RTU: FWCtx 0x%08.8x @ %d, prio: %d, Frame Context: %d" },
+	{ ROGUE_FW_LOG_CREATESFID(68, ROGUE_FW_GROUP_MAIN, 0),
+	  "GPU init" },
+	{ ROGUE_FW_LOG_CREATESFID(69, ROGUE_FW_GROUP_MAIN, 1),
+	  "GPU Units init (# mask: 0x%x)" },
+	{ ROGUE_FW_LOG_CREATESFID(70, ROGUE_FW_GROUP_MAIN, 3),
+	  "Register access cycles: read: %d cycles, write: %d cycles, iterations: %d" },
+	{ ROGUE_FW_LOG_CREATESFID(71, ROGUE_FW_GROUP_MAIN, 3),
+	  "Register configuration added. Address: 0x%x Value: 0x%x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(72, ROGUE_FW_GROUP_MAIN, 1),
+	  "Register configuration applied to type %d. (0:pow on, 1:Rascal/dust init, 2-5: TA,3D,CDM,TLA, 6:All)" },
+	{ ROGUE_FW_LOG_CREATESFID(73, ROGUE_FW_GROUP_MAIN, 0),
+	  "Perform TPC flush." },
+	{ ROGUE_FW_LOG_CREATESFID(74, ROGUE_FW_GROUP_MAIN, 0),
+	  "GPU has locked up (see HWR logs for more info)" },
+	{ ROGUE_FW_LOG_CREATESFID(75, ROGUE_FW_GROUP_MAIN, 0),
+	  "HWR has been triggered - GPU has overrun its deadline (see HWR logs)" },
+	{ ROGUE_FW_LOG_CREATESFID(76, ROGUE_FW_GROUP_MAIN, 0),
+	  "HWR has been triggered - GPU has failed a poll (see HWR logs)" },
+	{ ROGUE_FW_LOG_CREATESFID(77, ROGUE_FW_GROUP_MAIN, 1),
+	  "Doppler out of memory event for FC %u" },
+	{ ROGUE_FW_LOG_CREATESFID(78, ROGUE_FW_GROUP_MAIN, 3),
+	  "UFO SPM special PR-Check: [0x%08.8x] is 0x%08.8x requires >= 0x%08.8x" },
+	{ ROGUE_FW_LOG_CREATESFID(79, ROGUE_FW_GROUP_MAIN, 3),
+	  "UFO SPM special PR-Check: [0x%08.8x] is 0x%08.8x requires 0x%08.8x" },
+	{ ROGUE_FW_LOG_CREATESFID(80, ROGUE_FW_GROUP_MAIN, 1),
+	  "TIMESTAMP -> [0x%08.8x]" },
+	{ ROGUE_FW_LOG_CREATESFID(81, ROGUE_FW_GROUP_MAIN, 2),
+	  "UFO RMW Updates for FWCtx 0x%08.8x @ %d" },
+	{ ROGUE_FW_LOG_CREATESFID(82, ROGUE_FW_GROUP_MAIN, 2),
+	  "UFO Update: [0x%08.8x] = 0x%08.8x" },
+	{ ROGUE_FW_LOG_CREATESFID(83, ROGUE_FW_GROUP_MAIN, 2),
+	  "Kick Null cmd: FWCtx 0x%08.8x @ %d" },
+	{ ROGUE_FW_LOG_CREATESFID(84, ROGUE_FW_GROUP_MAIN, 2),
+	  "RPM Out of memory! Context 0x%08x, SH requestor %d" },
+	{ ROGUE_FW_LOG_CREATESFID(85, ROGUE_FW_GROUP_MAIN, 4),
+	  "Discard RTU due to RPM abort: FWCtx 0x%08.8x @ %d, prio: %d, Frame Context: %d" },
+	{ ROGUE_FW_LOG_CREATESFID(86, ROGUE_FW_GROUP_MAIN, 4),
+	  "Deferring DM%u from running context 0x%08x @ %d (deferred DMs = 0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(87, ROGUE_FW_GROUP_MAIN, 4),
+	  "Deferring DM%u from running context 0x%08x @ %d to let other deferred DMs run (deferred DMs = 0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(88, ROGUE_FW_GROUP_MAIN, 4),
+	  "No longer deferring DM%u from running context = 0x%08x @ %d (deferred DMs = 0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(89, ROGUE_FW_GROUP_MAIN, 3),
+	  "FWCCB for DM%u is full, we will have to wait for space! (Roff = %u, Woff = %u)" },
+	{ ROGUE_FW_LOG_CREATESFID(90, ROGUE_FW_GROUP_MAIN, 3),
+	  "FWCCB for OSid %u is full, we will have to wait for space! (Roff = %u, Woff = %u)" },
+	{ ROGUE_FW_LOG_CREATESFID(91, ROGUE_FW_GROUP_MAIN, 1),
+	  "Host Sync Partition marker: %d" },
+	{ ROGUE_FW_LOG_CREATESFID(92, ROGUE_FW_GROUP_MAIN, 1),
+	  "Host Sync Partition repeat: %d" },
+	{ ROGUE_FW_LOG_CREATESFID(93, ROGUE_FW_GROUP_MAIN, 1),
+	  "Core clock set to %d Hz" },
+	{ ROGUE_FW_LOG_CREATESFID(94, ROGUE_FW_GROUP_MAIN, 7),
+	  "Compute Queue: FWCtx 0x%08.8x, prio: %d, queue: 0x%08x%08x (Roff = %u, Woff = %u, Size = %u)" },
+	{ ROGUE_FW_LOG_CREATESFID(95, ROGUE_FW_GROUP_MAIN, 3),
+	  "Signal check failed, Required Data: 0x%x, Address: 0x%08x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(96, ROGUE_FW_GROUP_MAIN, 5),
+	  "Signal update, Snoop Filter: %u, MMU Ctx: %u, Signal Id: %u, Signals Base: 0x%08x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(97, ROGUE_FW_GROUP_MAIN, 4),
+	  "Signalled the previously waiting FWCtx: 0x%08.8x, OSId: %u, Signal Address: 0x%08x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(98, ROGUE_FW_GROUP_MAIN, 0),
+	  "Compute stalled" },
+	{ ROGUE_FW_LOG_CREATESFID(99, ROGUE_FW_GROUP_MAIN, 3),
+	  "Compute stalled (Roff = %u, Woff = %u, Size = %u)" },
+	{ ROGUE_FW_LOG_CREATESFID(100, ROGUE_FW_GROUP_MAIN, 3),
+	  "Compute resumed (Roff = %u, Woff = %u, Size = %u)" },
+	{ ROGUE_FW_LOG_CREATESFID(101, ROGUE_FW_GROUP_MAIN, 4),
+	  "Signal update notification from the host, PC Physical Address: 0x%08x%08x, Signal Virtual Address: 0x%08x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(102, ROGUE_FW_GROUP_MAIN, 4),
+	  "Signal update from DM: %u, OSId: %u, PC Physical Address: 0x%08x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(103, ROGUE_FW_GROUP_MAIN, 1),
+	  "DM: %u signal check failed" },
+	{ ROGUE_FW_LOG_CREATESFID(104, ROGUE_FW_GROUP_MAIN, 3),
+	  "Kick TDM: FWCtx 0x%08.8x @ %d, prio:%d" },
+	{ ROGUE_FW_LOG_CREATESFID(105, ROGUE_FW_GROUP_MAIN, 0),
+	  "TDM finished" },
+	{ ROGUE_FW_LOG_CREATESFID(106, ROGUE_FW_GROUP_MAIN, 4),
+	  "MMU_PM_CAT_BASE_TE[%d]_PIPE[%d]:  0x%08x 0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(107, ROGUE_FW_GROUP_MAIN, 0),
+	  "BRN 54141 HIT" },
+	{ ROGUE_FW_LOG_CREATESFID(108, ROGUE_FW_GROUP_MAIN, 0),
+	  "BRN 54141 Dummy TA kicked" },
+	{ ROGUE_FW_LOG_CREATESFID(109, ROGUE_FW_GROUP_MAIN, 0),
+	  "BRN 54141 resume TA" },
+	{ ROGUE_FW_LOG_CREATESFID(110, ROGUE_FW_GROUP_MAIN, 0),
+	  "BRN 54141 double hit after applying WA" },
+	{ ROGUE_FW_LOG_CREATESFID(111, ROGUE_FW_GROUP_MAIN, 2),
+	  "BRN 54141 Dummy TA VDM base address: 0x%08x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(112, ROGUE_FW_GROUP_MAIN, 4),
+	  "Signal check failed, Required Data: 0x%x, Current Data: 0x%x, Address: 0x%08x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(113, ROGUE_FW_GROUP_MAIN, 2),
+	  "TDM stalled (Roff = %u, Woff = %u)" },
+	{ ROGUE_FW_LOG_CREATESFID(114, ROGUE_FW_GROUP_MAIN, 1),
+	  "Write Offset update notification for stalled FWCtx 0x%08.8x" },
+	{ ROGUE_FW_LOG_CREATESFID(115, ROGUE_FW_GROUP_MAIN, 3),
+	  "Changing OSid %d's priority from %u to %u" },
+	{ ROGUE_FW_LOG_CREATESFID(116, ROGUE_FW_GROUP_MAIN, 0),
+	  "Compute resumed" },
+	{ ROGUE_FW_LOG_CREATESFID(117, ROGUE_FW_GROUP_MAIN, 7),
+	  "Kick TLA: FWCtx 0x%08.8x @ %d. (PID:%d, prio:%d, frame:%d, ext:0x%08x, int:0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(118, ROGUE_FW_GROUP_MAIN, 7),
+	  "Kick TDM: FWCtx 0x%08.8x @ %d. (PID:%d, prio:%d, frame:%d, ext:0x%08x, int:0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(119, ROGUE_FW_GROUP_MAIN, 11),
+	  "Kick TA: FWCtx 0x%08.8x @ %d, RTD 0x%08x, First kick:%d, Last kick:%d, CSW resume:%d. (PID:%d, prio:%d, frame:%d, ext:0x%08x, int:0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(120, ROGUE_FW_GROUP_MAIN, 10),
+	  "Kick 3D: FWCtx 0x%08.8x @ %d, RTD 0x%08x, Partial render:%d, CSW resume:%d. (PID:%d, prio:%d, frame:%d, ext:0x%08x, int:0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(121, ROGUE_FW_GROUP_MAIN, 8),
+	  "Kick 3D TQ: FWCtx 0x%08.8x @ %d, CSW resume:%d. (PID:%d, prio:%d, frame:%d, ext:0x%08x, int:0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(122, ROGUE_FW_GROUP_MAIN, 6),
+	  "Kick Compute: FWCtx 0x%08.8x @ %d. (PID:%d, prio:%d, ext:0x%08x, int:0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(123, ROGUE_FW_GROUP_MAIN, 8),
+	  "Kick RTU: FWCtx 0x%08.8x @ %d, Frame Context:%d. (PID:%d, prio:%d, frame:%d, ext:0x%08x, int:0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(124, ROGUE_FW_GROUP_MAIN, 7),
+	  "Kick SHG: FWCtx 0x%08.8x @ %d. (PID:%d, prio:%d, frame:%d, ext:0x%08x, int:0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(125, ROGUE_FW_GROUP_MAIN, 1),
+	  "Reconfigure CSRM: special coeff support enable %d." },
+	{ ROGUE_FW_LOG_CREATESFID(127, ROGUE_FW_GROUP_MAIN, 1),
+	  "TA requires max coeff mode, deferring: %d." },
+	{ ROGUE_FW_LOG_CREATESFID(128, ROGUE_FW_GROUP_MAIN, 1),
+	  "3D requires max coeff mode, deferring: %d." },
+	{ ROGUE_FW_LOG_CREATESFID(129, ROGUE_FW_GROUP_MAIN, 1),
+	  "Kill DM%d failed" },
+	{ ROGUE_FW_LOG_CREATESFID(130, ROGUE_FW_GROUP_MAIN, 2),
+	  "Thread Queue is full, we will have to wait for space! (Roff = %u, Woff = %u)" },
+	{ ROGUE_FW_LOG_CREATESFID(131, ROGUE_FW_GROUP_MAIN, 3),
+	  "Thread Queue is fencing, we are waiting for Roff = %d (Roff = %u, Woff = %u)" },
+	{ ROGUE_FW_LOG_CREATESFID(132, ROGUE_FW_GROUP_MAIN, 1),
+	  "DM %d failed to Context Switch on time. Triggered HCS (see HWR logs)." },
+	{ ROGUE_FW_LOG_CREATESFID(133, ROGUE_FW_GROUP_MAIN, 1),
+	  "HCS changed to %d ms" },
+	{ ROGUE_FW_LOG_CREATESFID(134, ROGUE_FW_GROUP_MAIN, 4),
+	  "Updating Tiles In Flight (Dusts=%d, PartitionMask=0x%08x, ISPCtl=0x%08x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(135, ROGUE_FW_GROUP_MAIN, 2),
+	  "  Phantom %d: USCTiles=%d" },
+	{ ROGUE_FW_LOG_CREATESFID(136, ROGUE_FW_GROUP_MAIN, 0),
+	  "Isolation grouping is disabled" },
+	{ ROGUE_FW_LOG_CREATESFID(137, ROGUE_FW_GROUP_MAIN, 1),
+	  "Isolation group configured with a priority threshold of %d" },
+	{ ROGUE_FW_LOG_CREATESFID(138, ROGUE_FW_GROUP_MAIN, 1),
+	  "OS %d has come online" },
+	{ ROGUE_FW_LOG_CREATESFID(139, ROGUE_FW_GROUP_MAIN, 1),
+	  "OS %d has gone offline" },
+	{ ROGUE_FW_LOG_CREATESFID(140, ROGUE_FW_GROUP_MAIN, 4),
+	  "Signalled the previously stalled FWCtx: 0x%08.8x, OSId: %u, Signal Address: 0x%08x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(141, ROGUE_FW_GROUP_MAIN, 7),
+	  "TDM Queue: FWCtx 0x%08.8x, prio: %d, queue: 0x%08x%08x (Roff = %u, Woff = %u, Size = %u)" },
+	{ ROGUE_FW_LOG_CREATESFID(142, ROGUE_FW_GROUP_MAIN, 6),
+	  "Reset TDM Queue Read Offset: FWCtx 0x%08.8x, queue: 0x%08x%08x (Roff = %u becomes 0, Woff = %u, Size = %u)" },
+	{ ROGUE_FW_LOG_CREATESFID(143, ROGUE_FW_GROUP_MAIN, 5),
+	  "User Mode Queue mismatched stream start: FWCtx 0x%08.8x, queue: 0x%08x%08x (Roff = %u, StreamStartOffset = %u)" },
+	{ ROGUE_FW_LOG_CREATESFID(144, ROGUE_FW_GROUP_MAIN, 0),
+	  "GPU deinit" },
+	{ ROGUE_FW_LOG_CREATESFID(145, ROGUE_FW_GROUP_MAIN, 0),
+	  "GPU units deinit" },
+	{ ROGUE_FW_LOG_CREATESFID(146, ROGUE_FW_GROUP_MAIN, 2),
+	  "Initialised OS %d with config flags 0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(147, ROGUE_FW_GROUP_MAIN, 2),
+	  "UFO limit exceeded %d/%d" },
+	{ ROGUE_FW_LOG_CREATESFID(148, ROGUE_FW_GROUP_MAIN, 0),
+	  "3D Dummy stencil store" },
+	{ ROGUE_FW_LOG_CREATESFID(149, ROGUE_FW_GROUP_MAIN, 3),
+	  "Initialised OS %d with config flags 0x%08x and extended config flags 0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(150, ROGUE_FW_GROUP_MAIN, 1),
+	  "Unknown Command (eCmdType=0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(151, ROGUE_FW_GROUP_MAIN, 4),
+	  "UFO forced update: FWCtx 0x%08.8x @ %d [0x%08.8x] = 0x%08.8x" },
+	{ ROGUE_FW_LOG_CREATESFID(152, ROGUE_FW_GROUP_MAIN, 5),
+	  "UFO forced update NOP: FWCtx 0x%08.8x @ %d [0x%08.8x] = 0x%08.8x, reason %d" },
+	{ ROGUE_FW_LOG_CREATESFID(153, ROGUE_FW_GROUP_MAIN, 3),
+	  "TDM context switch check: Roff %u points to 0x%08x, Match=%u" },
+	{ ROGUE_FW_LOG_CREATESFID(154, ROGUE_FW_GROUP_MAIN, 6),
+	  "OSid %d CCB init status: %d (1-ok 0-fail): kCCBCtl@0x%x kCCB@0x%x fwCCBCtl@0x%x fwCCB@0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(155, ROGUE_FW_GROUP_MAIN, 2),
+	  "FW IRQ # %u @ %u" },
+	{ ROGUE_FW_LOG_CREATESFID(156, ROGUE_FW_GROUP_MAIN, 3),
+	  "Setting breakpoint: Addr 0x%08.8x DM%u usc_breakpoint_ctrl_dm = %u" },
+	{ ROGUE_FW_LOG_CREATESFID(157, ROGUE_FW_GROUP_MAIN, 3),
+	  "Invalid KCCB setup for OSid %u: KCCB 0x%08x, KCCB Ctrl 0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(158, ROGUE_FW_GROUP_MAIN, 3),
+	  "Invalid KCCB cmd (%u) for OSid %u @ KCCB 0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(159, ROGUE_FW_GROUP_MAIN, 4),
+	  "FW FAULT: At line %d in file 0x%08x%08x, additional data=0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(160, ROGUE_FW_GROUP_MAIN, 4),
+	  "Invalid breakpoint: MemCtx 0x%08x Addr 0x%08.8x DM%u usc_breakpoint_ctrl_dm = %u" },
+	{ ROGUE_FW_LOG_CREATESFID(161, ROGUE_FW_GROUP_MAIN, 3),
+	  "Discarding invalid SLC flushinval command for OSid %u: DM %u, FWCtx 0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(162, ROGUE_FW_GROUP_MAIN, 4),
+	  "Invalid Write Offset update notification from OSid %u to DM %u: FWCtx 0x%08x, MemCtx 0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(163, ROGUE_FW_GROUP_MAIN, 4),
+	  "Null FWCtx in KCCB kick cmd for OSid %u: KCCB 0x%08x, ROff %u, WOff %u" },
+	{ ROGUE_FW_LOG_CREATESFID(164, ROGUE_FW_GROUP_MAIN, 3),
+	  "Checkpoint CCB for OSid %u is full, signalling host for full check state (Roff = %u, Woff = %u)" },
+	{ ROGUE_FW_LOG_CREATESFID(165, ROGUE_FW_GROUP_MAIN, 8),
+	  "OSid %d CCB init status: %d (1-ok 0-fail): kCCBCtl@0x%x kCCB@0x%x fwCCBCtl@0x%x fwCCB@0x%x chptCCBCtl@0x%x chptCCB@0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(166, ROGUE_FW_GROUP_MAIN, 4),
+	  "OSid %d fw state transition request: from %d to %d (0-offline 1-ready 2-active 3-offloading). Status %d (1-ok 0-fail)" },
+	{ ROGUE_FW_LOG_CREATESFID(167, ROGUE_FW_GROUP_MAIN, 2),
+	  "OSid %u has %u stale commands in its KCCB" },
+	{ ROGUE_FW_LOG_CREATESFID(168, ROGUE_FW_GROUP_MAIN, 0),
+	  "Applying VCE pause" },
+	{ ROGUE_FW_LOG_CREATESFID(169, ROGUE_FW_GROUP_MAIN, 3),
+	  "OSid %u KCCB slot %u value updated to %u" },
+	{ ROGUE_FW_LOG_CREATESFID(170, ROGUE_FW_GROUP_MAIN, 7),
+	  "Unknown KCCB Command: KCCBCtl=0x%08x, KCCB=0x%08x, Roff=%u, Woff=%u, Wrap=%u, Cmd=0x%08x, CmdType=0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(171, ROGUE_FW_GROUP_MAIN, 10),
+	  "Unknown Client CCB Command processing fences: FWCtx=0x%08x, CCBCtl=0x%08x, CCB=0x%08x, Roff=%u, Doff=%u, Woff=%u, Wrap=%u, CmdHdr=0x%08x, CmdType=0x%08x, CmdSize=%u" },
+	{ ROGUE_FW_LOG_CREATESFID(172, ROGUE_FW_GROUP_MAIN, 10),
+	  "Unknown Client CCB Command executing kick: FWCtx=0x%08x, CCBCtl=0x%08x, CCB=0x%08x, Roff=%u, Doff=%u, Woff=%u, Wrap=%u, CmdHdr=0x%08x, CmdType=0x%08x, CmdSize=%u" },
+	{ ROGUE_FW_LOG_CREATESFID(173, ROGUE_FW_GROUP_MAIN, 2),
+	  "Null FWCtx in KCCB kick cmd for OSid %u with WOff %u" },
+	{ ROGUE_FW_LOG_CREATESFID(174, ROGUE_FW_GROUP_MAIN, 2),
+	  "Discarding invalid SLC flushinval command for OSid %u, FWCtx 0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(175, ROGUE_FW_GROUP_MAIN, 3),
+	  "Invalid Write Offset update notification from OSid %u: FWCtx 0x%08x, MemCtx 0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(176, ROGUE_FW_GROUP_MAIN, 2),
+	  "Initialised Firmware with config flags 0x%08x and extended config flags 0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(177, ROGUE_FW_GROUP_MAIN, 1),
+	  "Set Periodic Hardware Reset Mode: %d" },
+	{ ROGUE_FW_LOG_CREATESFID(179, ROGUE_FW_GROUP_MAIN, 3),
+	  "PHR mode %d, FW state: 0x%08x, HWR flags: 0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(180, ROGUE_FW_GROUP_MAIN, 1),
+	  "PHR mode %d triggered a reset" },
+	{ ROGUE_FW_LOG_CREATESFID(181, ROGUE_FW_GROUP_MAIN, 2),
+	  "Signal update, Snoop Filter: %u, Signal Id: %u" },
+	{ ROGUE_FW_LOG_CREATESFID(182, ROGUE_FW_GROUP_MAIN, 1),
+	  "WARNING: Skipping FW KCCB Cmd type %d which is not yet supported on Series8." },
+	{ ROGUE_FW_LOG_CREATESFID(183, ROGUE_FW_GROUP_MAIN, 4),
+	  "MMU context cache data NULL, but cache flags=0x%x (sync counter=%u, update value=%u) OSId=%u" },
+	{ ROGUE_FW_LOG_CREATESFID(184, ROGUE_FW_GROUP_MAIN, 5),
+	  "SLC range based flush: Context=%u VAddr=0x%02x%08x, Size=0x%08x, Invalidate=%d" },
+	{ ROGUE_FW_LOG_CREATESFID(185, ROGUE_FW_GROUP_MAIN, 3),
+	  "FBSC invalidate for Context Set [0x%08x]: Entry mask 0x%08x%08x." },
+	{ ROGUE_FW_LOG_CREATESFID(186, ROGUE_FW_GROUP_MAIN, 3),
+	  "TDM context switch check: Roff %u was not valid for kick starting at %u, moving back to %u" },
+	{ ROGUE_FW_LOG_CREATESFID(187, ROGUE_FW_GROUP_MAIN, 2),
+	  "Signal updates: FIFO: %u, Signals: 0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(188, ROGUE_FW_GROUP_MAIN, 2),
+	  "Invalid FBSC cmd: FWCtx 0x%08x, MemCtx 0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(189, ROGUE_FW_GROUP_MAIN, 0),
+	  "Insert BRN68497 WA blit after TDM Context store." },
+	{ ROGUE_FW_LOG_CREATESFID(190, ROGUE_FW_GROUP_MAIN, 1),
+	  "UFO Updates for previously finished FWCtx 0x%08.8x" },
+	{ ROGUE_FW_LOG_CREATESFID(191, ROGUE_FW_GROUP_MAIN, 1),
+	  "RTC with RTA present, %u active render targets" },
+	{ ROGUE_FW_LOG_CREATESFID(192, ROGUE_FW_GROUP_MAIN, 0),
+	  "Invalid RTA Set-up. The ValidRenderTargets array in RTACtl is Null!" },
+	{ ROGUE_FW_LOG_CREATESFID(193, ROGUE_FW_GROUP_MAIN, 2),
+	  "Block 0x%x / Counter 0x%x INVALID and ignored" },
+	{ ROGUE_FW_LOG_CREATESFID(194, ROGUE_FW_GROUP_MAIN, 2),
+	  "ECC fault GPU=0x%08x FW=0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(195, ROGUE_FW_GROUP_MAIN, 1),
+	  "Processing XPU event on DM = %d" },
+	{ ROGUE_FW_LOG_CREATESFID(196, ROGUE_FW_GROUP_MAIN, 2),
+	  "OSid %u failed to respond to the virtualisation watchdog in time. Timestamp of its last input = %u" },
+	{ ROGUE_FW_LOG_CREATESFID(197, ROGUE_FW_GROUP_MAIN, 1),
+	  "GPU-%u has locked up (see HWR logs for more info)" },
+	{ ROGUE_FW_LOG_CREATESFID(198, ROGUE_FW_GROUP_MAIN, 3),
+	  "Updating Tiles In Flight (Dusts=%d, PartitionMask=0x%08x, ISPCtl=0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(199, ROGUE_FW_GROUP_MAIN, 0),
+	  "GPU has locked up (see HWR logs for more info)" },
+	{ ROGUE_FW_LOG_CREATESFID(200, ROGUE_FW_GROUP_MAIN, 1),
+	  "Reprocessing outstanding XPU events from cores 0x%02x" },
+	{ ROGUE_FW_LOG_CREATESFID(201, ROGUE_FW_GROUP_MAIN, 3),
+	  "Secondary XPU event on DM=%d, CoreMask=0x%02x, Raised=0x%02x" },
+	{ ROGUE_FW_LOG_CREATESFID(202, ROGUE_FW_GROUP_MAIN, 8),
+	  "TDM Queue: Core %u, FWCtx 0x%08.8x, prio: %d, queue: 0x%08x%08x (Roff = %u, Woff = %u, Size = %u)" },
+	{ ROGUE_FW_LOG_CREATESFID(203, ROGUE_FW_GROUP_MAIN, 3),
+	  "TDM stalled Core %u (Roff = %u, Woff = %u)" },
+	{ ROGUE_FW_LOG_CREATESFID(204, ROGUE_FW_GROUP_MAIN, 8),
+	  "Compute Queue: Core %u, FWCtx 0x%08.8x, prio: %d, queue: 0x%08x%08x (Roff = %u, Woff = %u, Size = %u)" },
+	{ ROGUE_FW_LOG_CREATESFID(205, ROGUE_FW_GROUP_MAIN, 4),
+	  "Compute stalled core %u (Roff = %u, Woff = %u, Size = %u)" },
+	{ ROGUE_FW_LOG_CREATESFID(206, ROGUE_FW_GROUP_MAIN, 6),
+	  "User Mode Queue mismatched stream start: Core %u, FWCtx 0x%08.8x, queue: 0x%08x%08x (Roff = %u, StreamStartOffset = %u)" },
+	{ ROGUE_FW_LOG_CREATESFID(207, ROGUE_FW_GROUP_MAIN, 3),
+	  "TDM resumed core %u (Roff = %u, Woff = %u)" },
+	{ ROGUE_FW_LOG_CREATESFID(208, ROGUE_FW_GROUP_MAIN, 4),
+	  "Compute resumed core %u (Roff = %u, Woff = %u, Size = %u)" },
+	{ ROGUE_FW_LOG_CREATESFID(209, ROGUE_FW_GROUP_MAIN, 2),
+	  " Updated permission for OSid %u to perform MTS kicks: %u (1 = allowed, 0 = not allowed)" },
+	{ ROGUE_FW_LOG_CREATESFID(210, ROGUE_FW_GROUP_MAIN, 2),
+	  "Mask = 0x%X, mask2 = 0x%X" },
+	{ ROGUE_FW_LOG_CREATESFID(211, ROGUE_FW_GROUP_MAIN, 3),
+	  "  core %u, reg = %u, mask = 0x%X)" },
+	{ ROGUE_FW_LOG_CREATESFID(212, ROGUE_FW_GROUP_MAIN, 1),
+	  "ECC fault received from safety bus: 0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(213, ROGUE_FW_GROUP_MAIN, 1),
+	  "Safety Watchdog threshold period set to 0x%x clock cycles" },
+	{ ROGUE_FW_LOG_CREATESFID(214, ROGUE_FW_GROUP_MAIN, 0),
+	  "MTS Safety Event trigged by the safety watchdog." },
+	{ ROGUE_FW_LOG_CREATESFID(215, ROGUE_FW_GROUP_MAIN, 3),
+	  "DM%d USC tasks range limit 0 - %d, stride %d" },
+	{ ROGUE_FW_LOG_CREATESFID(216, ROGUE_FW_GROUP_MAIN, 1),
+	  "ECC fault GPU=0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(217, ROGUE_FW_GROUP_MAIN, 0),
+	  "GPU Hardware units reset to prevent transient faults." },
+	{ ROGUE_FW_LOG_CREATESFID(218, ROGUE_FW_GROUP_MAIN, 2),
+	  "Kick Abort cmd: FWCtx 0x%08.8x @ %d" },
+	{ ROGUE_FW_LOG_CREATESFID(219, ROGUE_FW_GROUP_MAIN, 7),
+	  "Kick Ray: FWCtx 0x%08.8x @ %d. (PID:%d, prio:%d, frame:%d, ext:0x%08x, int:0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(220, ROGUE_FW_GROUP_MAIN, 0),
+	  "Ray finished" },
+	{ ROGUE_FW_LOG_CREATESFID(221, ROGUE_FW_GROUP_MAIN, 2),
+	  "State of firmware's private data at boot time: %d (0 = uninitialised, 1 = initialised); Fw State Flags = 0x%08X" },
+	{ ROGUE_FW_LOG_CREATESFID(222, ROGUE_FW_GROUP_MAIN, 2),
+	  "CFI Timeout detected (%d increasing to %d)" },
+	{ ROGUE_FW_LOG_CREATESFID(223, ROGUE_FW_GROUP_MAIN, 2),
+	  "CFI Timeout detected for FBM (%d increasing to %d)" },
+	{ ROGUE_FW_LOG_CREATESFID(224, ROGUE_FW_GROUP_MAIN, 0),
+	  "Geom OOM event not allowed" },
+	{ ROGUE_FW_LOG_CREATESFID(225, ROGUE_FW_GROUP_MAIN, 4),
+	  "Changing OSid %d's priority from %u to %u; Isolation = %u (0 = off; 1 = on)" },
+	{ ROGUE_FW_LOG_CREATESFID(226, ROGUE_FW_GROUP_MAIN, 2),
+	  "Skipping already executed TA FWCtx 0x%08.8x @ %d" },
+	{ ROGUE_FW_LOG_CREATESFID(227, ROGUE_FW_GROUP_MAIN, 2),
+	  "Attempt to execute TA FWCtx 0x%08.8x @ %d ahead of time on other GEOM" },
+	{ ROGUE_FW_LOG_CREATESFID(228, ROGUE_FW_GROUP_MAIN, 8),
+	  "Kick TDM: Kick ID %u FWCtx 0x%08.8x @ %d. (PID:%d, prio:%d, frame:%d, ext:0x%08x, int:0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(229, ROGUE_FW_GROUP_MAIN, 12),
+	  "Kick TA: Kick ID %u FWCtx 0x%08.8x @ %d, RTD 0x%08x, First kick:%d, Last kick:%d, CSW resume:%d. (PID:%d, prio:%d, frame:%d, ext:0x%08x, int:0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(230, ROGUE_FW_GROUP_MAIN, 11),
+	  "Kick 3D: Kick ID %u FWCtx 0x%08.8x @ %d, RTD 0x%08x, Partial render:%d, CSW resume:%d. (PID:%d, prio:%d, frame:%d, ext:0x%08x, int:0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(231, ROGUE_FW_GROUP_MAIN, 7),
+	  "Kick Compute: Kick ID %u FWCtx 0x%08.8x @ %d. (PID:%d, prio:%d, ext:0x%08x, int:0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(232, ROGUE_FW_GROUP_MAIN, 1),
+	  "TDM finished: Kick ID %u " },
+	{ ROGUE_FW_LOG_CREATESFID(233, ROGUE_FW_GROUP_MAIN, 1),
+	  "TA finished: Kick ID %u " },
+	{ ROGUE_FW_LOG_CREATESFID(234, ROGUE_FW_GROUP_MAIN, 3),
+	  "3D finished: Kick ID %u , HWRTData0State=%x, HWRTData1State=%x" },
+	{ ROGUE_FW_LOG_CREATESFID(235, ROGUE_FW_GROUP_MAIN, 1),
+	  "Compute finished: Kick ID %u " },
+	{ ROGUE_FW_LOG_CREATESFID(236, ROGUE_FW_GROUP_MAIN, 10),
+	  "Kick TDM: Kick ID %u FWCtx 0x%08.8x @ %d, Base 0x%08x%08x. (PID:%d, prio:%d, frame:%d, ext:0x%08x, int:0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(237, ROGUE_FW_GROUP_MAIN, 8),
+	  "Kick Ray: Kick ID %u FWCtx 0x%08.8x @ %d. (PID:%d, prio:%d, frame:%d, ext:0x%08x, int:0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(238, ROGUE_FW_GROUP_MAIN, 1),
+	  "Ray finished: Kick ID %u " },
+
+	{ ROGUE_FW_LOG_CREATESFID(1, ROGUE_FW_GROUP_MTS, 2),
+	  "Bg Task DM = %u, counted = %d" },
+	{ ROGUE_FW_LOG_CREATESFID(2, ROGUE_FW_GROUP_MTS, 1),
+	  "Bg Task complete DM = %u" },
+	{ ROGUE_FW_LOG_CREATESFID(3, ROGUE_FW_GROUP_MTS, 3),
+	  "Irq Task DM = %u, Breq = %d, SBIrq = 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(4, ROGUE_FW_GROUP_MTS, 1),
+	  "Irq Task complete DM = %u" },
+	{ ROGUE_FW_LOG_CREATESFID(5, ROGUE_FW_GROUP_MTS, 0),
+	  "Kick MTS Bg task DM=All" },
+	{ ROGUE_FW_LOG_CREATESFID(6, ROGUE_FW_GROUP_MTS, 1),
+	  "Kick MTS Irq task DM=%d" },
+	{ ROGUE_FW_LOG_CREATESFID(7, ROGUE_FW_GROUP_MTS, 2),
+	  "Ready queue debug DM = %u, celltype = %d" },
+	{ ROGUE_FW_LOG_CREATESFID(8, ROGUE_FW_GROUP_MTS, 2),
+	  "Ready-to-run debug DM = %u, item = 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(9, ROGUE_FW_GROUP_MTS, 3),
+	  "Client command header DM = %u, client CCB = 0x%x, cmd = 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(10, ROGUE_FW_GROUP_MTS, 3),
+	  "Ready-to-run debug OSid = %u, DM = %u, item = 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(11, ROGUE_FW_GROUP_MTS, 3),
+	  "Ready queue debug DM = %u, celltype = %d, OSid = %u" },
+	{ ROGUE_FW_LOG_CREATESFID(12, ROGUE_FW_GROUP_MTS, 3),
+	  "Bg Task DM = %u, counted = %d, OSid = %u" },
+	{ ROGUE_FW_LOG_CREATESFID(13, ROGUE_FW_GROUP_MTS, 1),
+	  "Bg Task complete DM Bitfield: %u" },
+	{ ROGUE_FW_LOG_CREATESFID(14, ROGUE_FW_GROUP_MTS, 0),
+	  "Irq Task complete." },
+	{ ROGUE_FW_LOG_CREATESFID(15, ROGUE_FW_GROUP_MTS, 7),
+	  "Discarded Command Type: %d OS ID = %d PID = %d context = 0x%08x cccb ROff = 0x%x, due to USC breakpoint hit by OS ID = %d PID = %d." },
+	{ ROGUE_FW_LOG_CREATESFID(16, ROGUE_FW_GROUP_MTS, 4),
+	  "KCCB Slot %u: DM=%u, Cmd=0x%08x, OSid=%u" },
+	{ ROGUE_FW_LOG_CREATESFID(17, ROGUE_FW_GROUP_MTS, 2),
+	  "KCCB Slot %u: Return value %u" },
+	{ ROGUE_FW_LOG_CREATESFID(18, ROGUE_FW_GROUP_MTS, 1),
+	  "Bg Task OSid = %u" },
+	{ ROGUE_FW_LOG_CREATESFID(19, ROGUE_FW_GROUP_MTS, 3),
+	  "KCCB Slot %u: Cmd=0x%08x, OSid=%u" },
+	{ ROGUE_FW_LOG_CREATESFID(20, ROGUE_FW_GROUP_MTS, 1),
+	  "Irq Task (EVENT_STATUS=0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(21, ROGUE_FW_GROUP_MTS, 2),
+	  "VZ sideband test, kicked with OSid=%u from MTS, OSid for test=%u" },
+
+	{ ROGUE_FW_LOG_CREATESFID(1, ROGUE_FW_GROUP_CLEANUP, 1),
+	  "FwCommonContext [0x%08x] cleaned" },
+	{ ROGUE_FW_LOG_CREATESFID(2, ROGUE_FW_GROUP_CLEANUP, 3),
+	  "FwCommonContext [0x%08x] is busy: ReadOffset = %d, WriteOffset = %d" },
+	{ ROGUE_FW_LOG_CREATESFID(3, ROGUE_FW_GROUP_CLEANUP, 2),
+	  "HWRTData [0x%08x] for DM=%d, received cleanup request" },
+	{ ROGUE_FW_LOG_CREATESFID(4, ROGUE_FW_GROUP_CLEANUP, 3),
+	  "HWRTData [0x%08x] HW Context cleaned for DM%u, executed commands = %d" },
+	{ ROGUE_FW_LOG_CREATESFID(5, ROGUE_FW_GROUP_CLEANUP, 2),
+	  "HWRTData [0x%08x] HW Context for DM%u is busy" },
+	{ ROGUE_FW_LOG_CREATESFID(6, ROGUE_FW_GROUP_CLEANUP, 2),
+	  "HWRTData [0x%08x] HW Context %u cleaned" },
+	{ ROGUE_FW_LOG_CREATESFID(7, ROGUE_FW_GROUP_CLEANUP, 1),
+	  "Freelist [0x%08x] cleaned" },
+	{ ROGUE_FW_LOG_CREATESFID(8, ROGUE_FW_GROUP_CLEANUP, 1),
+	  "ZSBuffer [0x%08x] cleaned" },
+	{ ROGUE_FW_LOG_CREATESFID(9, ROGUE_FW_GROUP_CLEANUP, 3),
+	  "ZSBuffer [0x%08x] is busy: submitted = %d, executed = %d" },
+	{ ROGUE_FW_LOG_CREATESFID(10, ROGUE_FW_GROUP_CLEANUP, 4),
+	  "HWRTData [0x%08x] HW Context for DM%u is busy: submitted = %d, executed = %d" },
+	{ ROGUE_FW_LOG_CREATESFID(11, ROGUE_FW_GROUP_CLEANUP, 2),
+	  "HW Ray Frame data [0x%08x] for DM=%d, received cleanup request" },
+	{ ROGUE_FW_LOG_CREATESFID(12, ROGUE_FW_GROUP_CLEANUP, 3),
+	  "HW Ray Frame Data [0x%08x] cleaned for DM%u, executed commands = %d" },
+	{ ROGUE_FW_LOG_CREATESFID(13, ROGUE_FW_GROUP_CLEANUP, 4),
+	  "HW Ray Frame Data [0x%08x] for DM%u is busy: submitted = %d, executed = %d" },
+	{ ROGUE_FW_LOG_CREATESFID(14, ROGUE_FW_GROUP_CLEANUP, 2),
+	  "HW Ray Frame Data [0x%08x] HW Context %u cleaned" },
+	{ ROGUE_FW_LOG_CREATESFID(15, ROGUE_FW_GROUP_CLEANUP, 1),
+	  "Discarding invalid cleanup request of type 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(16, ROGUE_FW_GROUP_CLEANUP, 1),
+	  "Received cleanup request for HWRTData [0x%08x]" },
+	{ ROGUE_FW_LOG_CREATESFID(17, ROGUE_FW_GROUP_CLEANUP, 3),
+	  "HWRTData [0x%08x] HW Context is busy: submitted = %d, executed = %d" },
+	{ ROGUE_FW_LOG_CREATESFID(18, ROGUE_FW_GROUP_CLEANUP, 3),
+	  "HWRTData [0x%08x] HW Context %u cleaned, executed commands = %d" },
+
+	{ ROGUE_FW_LOG_CREATESFID(1, ROGUE_FW_GROUP_CSW, 1),
+	  "CDM FWCtx 0x%08.8x needs resume" },
+	{ ROGUE_FW_LOG_CREATESFID(2, ROGUE_FW_GROUP_CSW, 3),
+	  "*** CDM FWCtx 0x%08.8x resume from snapshot buffer 0x%08x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(3, ROGUE_FW_GROUP_CSW, 1),
+	  "CDM FWCtx shared alloc size load 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(4, ROGUE_FW_GROUP_CSW, 0),
+	  "*** CDM FWCtx store complete" },
+	{ ROGUE_FW_LOG_CREATESFID(5, ROGUE_FW_GROUP_CSW, 0),
+	  "*** CDM FWCtx store start" },
+	{ ROGUE_FW_LOG_CREATESFID(6, ROGUE_FW_GROUP_CSW, 0),
+	  "CDM Soft Reset" },
+	{ ROGUE_FW_LOG_CREATESFID(7, ROGUE_FW_GROUP_CSW, 1),
+	  "3D FWCtx 0x%08.8x needs resume" },
+	{ ROGUE_FW_LOG_CREATESFID(8, ROGUE_FW_GROUP_CSW, 1),
+	  "*** 3D FWCtx 0x%08.8x resume" },
+	{ ROGUE_FW_LOG_CREATESFID(9, ROGUE_FW_GROUP_CSW, 0),
+	  "*** 3D context store complete" },
+	{ ROGUE_FW_LOG_CREATESFID(10, ROGUE_FW_GROUP_CSW, 3),
+	  "3D context store pipe state: 0x%08.8x 0x%08.8x 0x%08.8x" },
+	{ ROGUE_FW_LOG_CREATESFID(11, ROGUE_FW_GROUP_CSW, 0),
+	  "*** 3D context store start" },
+	{ ROGUE_FW_LOG_CREATESFID(12, ROGUE_FW_GROUP_CSW, 1),
+	  "*** 3D TQ FWCtx 0x%08.8x resume" },
+	{ ROGUE_FW_LOG_CREATESFID(13, ROGUE_FW_GROUP_CSW, 1),
+	  "TA FWCtx 0x%08.8x needs resume" },
+	{ ROGUE_FW_LOG_CREATESFID(14, ROGUE_FW_GROUP_CSW, 3),
+	  "*** TA FWCtx 0x%08.8x resume from snapshot buffer 0x%08x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(15, ROGUE_FW_GROUP_CSW, 2),
+	  "TA context shared alloc size store 0x%x, load 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(16, ROGUE_FW_GROUP_CSW, 0),
+	  "*** TA context store complete" },
+	{ ROGUE_FW_LOG_CREATESFID(17, ROGUE_FW_GROUP_CSW, 0),
+	  "*** TA context store start" },
+	{ ROGUE_FW_LOG_CREATESFID(18, ROGUE_FW_GROUP_CSW, 3),
+	  "Higher priority context scheduled for DM %u, old prio:%d, new prio:%d" },
+	{ ROGUE_FW_LOG_CREATESFID(19, ROGUE_FW_GROUP_CSW, 2),
+	  "Set FWCtx 0x%x priority to %u" },
+	{ ROGUE_FW_LOG_CREATESFID(20, ROGUE_FW_GROUP_CSW, 2),
+	  "3D context store pipe%d state: 0x%08.8x" },
+	{ ROGUE_FW_LOG_CREATESFID(21, ROGUE_FW_GROUP_CSW, 2),
+	  "3D context resume pipe%d state: 0x%08.8x" },
+	{ ROGUE_FW_LOG_CREATESFID(22, ROGUE_FW_GROUP_CSW, 1),
+	  "SHG FWCtx 0x%08.8x needs resume" },
+	{ ROGUE_FW_LOG_CREATESFID(23, ROGUE_FW_GROUP_CSW, 3),
+	  "*** SHG FWCtx 0x%08.8x resume from snapshot buffer 0x%08x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(24, ROGUE_FW_GROUP_CSW, 2),
+	  "SHG context shared alloc size store 0x%x, load 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(25, ROGUE_FW_GROUP_CSW, 0),
+	  "*** SHG context store complete" },
+	{ ROGUE_FW_LOG_CREATESFID(26, ROGUE_FW_GROUP_CSW, 0),
+	  "*** SHG context store start" },
+	{ ROGUE_FW_LOG_CREATESFID(27, ROGUE_FW_GROUP_CSW, 1),
+	  "Performing TA indirection, last used pipe %d" },
+	{ ROGUE_FW_LOG_CREATESFID(28, ROGUE_FW_GROUP_CSW, 0),
+	  "CDM context store hit ctrl stream terminate. Skip resume." },
+	{ ROGUE_FW_LOG_CREATESFID(29, ROGUE_FW_GROUP_CSW, 4),
+	  "*** CDM FWCtx 0x%08.8x resume from snapshot buffer 0x%08x%08x, shader state %u" },
+	{ ROGUE_FW_LOG_CREATESFID(30, ROGUE_FW_GROUP_CSW, 2),
+	  "TA PDS/USC state buffer flip (%d->%d)" },
+	{ ROGUE_FW_LOG_CREATESFID(31, ROGUE_FW_GROUP_CSW, 0),
+	  "TA context store hit BRN 52563: vertex store tasks outstanding" },
+	{ ROGUE_FW_LOG_CREATESFID(32, ROGUE_FW_GROUP_CSW, 1),
+	  "TA USC poll failed (USC vertex task count: %d)" },
+	{ ROGUE_FW_LOG_CREATESFID(33, ROGUE_FW_GROUP_CSW, 0),
+	  "TA context store deferred due to BRN 54141." },
+	{ ROGUE_FW_LOG_CREATESFID(34, ROGUE_FW_GROUP_CSW, 7),
+	  "Higher priority context scheduled for DM %u. Prios (OSid, OSid Prio, Context Prio): Current: %u, %u, %u New: %u, %u, %u" },
+	{ ROGUE_FW_LOG_CREATESFID(35, ROGUE_FW_GROUP_CSW, 0),
+	  "*** TDM context store start" },
+	{ ROGUE_FW_LOG_CREATESFID(36, ROGUE_FW_GROUP_CSW, 0),
+	  "*** TDM context store complete" },
+	{ ROGUE_FW_LOG_CREATESFID(37, ROGUE_FW_GROUP_CSW, 2),
+	  "TDM context needs resume, header [0x%08.8x, 0x%08.8x]" },
+	{ ROGUE_FW_LOG_CREATESFID(38, ROGUE_FW_GROUP_CSW, 8),
+	  "Higher priority context scheduled for DM %u. Prios (OSid, OSid Prio, Context Prio): Current: %u, %u, %u New: %u, %u, %u. Hard Context Switching: %u" },
+	{ ROGUE_FW_LOG_CREATESFID(39, ROGUE_FW_GROUP_CSW, 3),
+	  "3D context store pipe %2d (%2d) state: 0x%08.8x" },
+	{ ROGUE_FW_LOG_CREATESFID(40, ROGUE_FW_GROUP_CSW, 3),
+	  "3D context resume pipe %2d (%2d) state: 0x%08.8x" },
+	{ ROGUE_FW_LOG_CREATESFID(41, ROGUE_FW_GROUP_CSW, 1),
+	  "*** 3D context store start version %d (1=IPP_TILE, 2=ISP_TILE)" },
+	{ ROGUE_FW_LOG_CREATESFID(42, ROGUE_FW_GROUP_CSW, 3),
+	  "3D context store pipe%d state: 0x%08.8x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(43, ROGUE_FW_GROUP_CSW, 3),
+	  "3D context resume pipe%d state: 0x%08.8x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(44, ROGUE_FW_GROUP_CSW, 2),
+	  "3D context resume IPP state: 0x%08.8x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(45, ROGUE_FW_GROUP_CSW, 1),
+	  "All 3D pipes empty after ISP tile mode store! IPP_status: 0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(46, ROGUE_FW_GROUP_CSW, 3),
+	  "TDM context resume pipe%d state: 0x%08.8x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(47, ROGUE_FW_GROUP_CSW, 0),
+	  "*** 3D context store start version 4" },
+	{ ROGUE_FW_LOG_CREATESFID(48, ROGUE_FW_GROUP_CSW, 2),
+	  "Multicore context resume on DM%d active core mask 0x%04.4x" },
+	{ ROGUE_FW_LOG_CREATESFID(49, ROGUE_FW_GROUP_CSW, 2),
+	  "Multicore context store on DM%d active core mask 0x%04.4x" },
+	{ ROGUE_FW_LOG_CREATESFID(50, ROGUE_FW_GROUP_CSW, 5),
+	  "TDM context resume Core %d, pipe%d state: 0x%08.8x%08x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(51, ROGUE_FW_GROUP_CSW, 0),
+	  "*** RDM FWCtx store complete" },
+	{ ROGUE_FW_LOG_CREATESFID(52, ROGUE_FW_GROUP_CSW, 0),
+	  "*** RDM FWCtx store start" },
+	{ ROGUE_FW_LOG_CREATESFID(53, ROGUE_FW_GROUP_CSW, 1),
+	  "RDM FWCtx 0x%08.8x needs resume" },
+	{ ROGUE_FW_LOG_CREATESFID(54, ROGUE_FW_GROUP_CSW, 1),
+	  "RDM FWCtx 0x%08.8x resume" },
+
+	{ ROGUE_FW_LOG_CREATESFID(1, ROGUE_FW_GROUP_BIF, 3),
+	  "Activate MemCtx=0x%08x BIFreq=%d secure=%d" },
+	{ ROGUE_FW_LOG_CREATESFID(2, ROGUE_FW_GROUP_BIF, 1),
+	  "Deactivate MemCtx=0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(3, ROGUE_FW_GROUP_BIF, 1),
+	  "Alloc PC reg %d" },
+	{ ROGUE_FW_LOG_CREATESFID(4, ROGUE_FW_GROUP_BIF, 2),
+	  "Grab reg set %d refcount now %d" },
+	{ ROGUE_FW_LOG_CREATESFID(5, ROGUE_FW_GROUP_BIF, 2),
+	  "Ungrab reg set %d refcount now %d" },
+	{ ROGUE_FW_LOG_CREATESFID(6, ROGUE_FW_GROUP_BIF, 6),
+	  "Setup reg=%d BIFreq=%d, expect=0x%08x%08x, actual=0x%08x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(7, ROGUE_FW_GROUP_BIF, 2),
+	  "Trust enabled:%d, for BIFreq=%d" },
+	{ ROGUE_FW_LOG_CREATESFID(8, ROGUE_FW_GROUP_BIF, 9),
+	  "BIF Tiling Cfg %d base 0x%08x%08x len 0x%08x%08x enable %d stride %d --> 0x%08x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(9, ROGUE_FW_GROUP_BIF, 4),
+	  "Wrote the Value %d to OSID0, Cat Base %d, Register's contents are now 0x%08x 0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(10, ROGUE_FW_GROUP_BIF, 3),
+	  "Wrote the Value %d to OSID1, Context  %d, Register's contents are now 0x%04x" },
+	{ ROGUE_FW_LOG_CREATESFID(11, ROGUE_FW_GROUP_BIF, 7),
+	  "ui32OSid = %u, Catbase = %u, Reg Address = 0x%x, Reg index = %u, Bitshift index = %u, Val = 0x%08x%08x" }, \
+	{ ROGUE_FW_LOG_CREATESFID(12, ROGUE_FW_GROUP_BIF, 5),
+	  "Map GPU memory DevVAddr 0x%x%08x, Size %u, Context ID %u, BIFREQ %u" },
+	{ ROGUE_FW_LOG_CREATESFID(13, ROGUE_FW_GROUP_BIF, 1),
+	  "Unmap GPU memory (event status 0x%x)" },
+	{ ROGUE_FW_LOG_CREATESFID(14, ROGUE_FW_GROUP_BIF, 3),
+	  "Activate MemCtx=0x%08x DM=%d secure=%d" },
+	{ ROGUE_FW_LOG_CREATESFID(15, ROGUE_FW_GROUP_BIF, 6),
+	  "Setup reg=%d DM=%d, expect=0x%08x%08x, actual=0x%08x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(16, ROGUE_FW_GROUP_BIF, 4),
+	  "Map GPU memory DevVAddr 0x%x%08x, Size %u, Context ID %u" },
+	{ ROGUE_FW_LOG_CREATESFID(17, ROGUE_FW_GROUP_BIF, 2),
+	  "Trust enabled:%d, for DM=%d" },
+	{ ROGUE_FW_LOG_CREATESFID(18, ROGUE_FW_GROUP_BIF, 5),
+	  "Map GPU memory DevVAddr 0x%x%08x, Size %u, Context ID %u, DM %u" },
+	{ ROGUE_FW_LOG_CREATESFID(19, ROGUE_FW_GROUP_BIF, 6),
+	  "Setup register set=%d DM=%d, PC address=0x%08x%08x, OSid=%u, NewPCRegRequired=%d" },
+	{ ROGUE_FW_LOG_CREATESFID(20, ROGUE_FW_GROUP_BIF, 3),
+	  "Alloc PC set %d as register range [%u - %u]" },
+
+	{ ROGUE_FW_LOG_CREATESFID(1, ROGUE_FW_GROUP_MISC, 1),
+	  "GPIO write 0x%02x" },
+	{ ROGUE_FW_LOG_CREATESFID(2, ROGUE_FW_GROUP_MISC, 1),
+	  "GPIO read 0x%02x" },
+	{ ROGUE_FW_LOG_CREATESFID(3, ROGUE_FW_GROUP_MISC, 0),
+	  "GPIO enabled" },
+	{ ROGUE_FW_LOG_CREATESFID(4, ROGUE_FW_GROUP_MISC, 0),
+	  "GPIO disabled" },
+	{ ROGUE_FW_LOG_CREATESFID(5, ROGUE_FW_GROUP_MISC, 1),
+	  "GPIO status=%d (0=OK, 1=Disabled)" },
+	{ ROGUE_FW_LOG_CREATESFID(6, ROGUE_FW_GROUP_MISC, 2),
+	  "GPIO_AP: Read address=0x%02x (%d byte(s))" },
+	{ ROGUE_FW_LOG_CREATESFID(7, ROGUE_FW_GROUP_MISC, 2),
+	  "GPIO_AP: Write address=0x%02x (%d byte(s))" },
+	{ ROGUE_FW_LOG_CREATESFID(8, ROGUE_FW_GROUP_MISC, 0),
+	  "GPIO_AP timeout!" },
+	{ ROGUE_FW_LOG_CREATESFID(9, ROGUE_FW_GROUP_MISC, 1),
+	  "GPIO_AP error. GPIO status=%d (0=OK, 1=Disabled)" },
+	{ ROGUE_FW_LOG_CREATESFID(10, ROGUE_FW_GROUP_MISC, 1),
+	  "GPIO already read 0x%02x" },
+	{ ROGUE_FW_LOG_CREATESFID(11, ROGUE_FW_GROUP_MISC, 2),
+	  "SR: Check buffer %d available returned %d" },
+	{ ROGUE_FW_LOG_CREATESFID(12, ROGUE_FW_GROUP_MISC, 1),
+	  "SR: Waiting for buffer %d" },
+	{ ROGUE_FW_LOG_CREATESFID(13, ROGUE_FW_GROUP_MISC, 2),
+	  "SR: Timeout waiting for buffer %d (after %d ticks)" },
+	{ ROGUE_FW_LOG_CREATESFID(14, ROGUE_FW_GROUP_MISC, 2),
+	  "SR: Skip frame check for strip %d returned %d (0=No skip, 1=Skip frame)" },
+	{ ROGUE_FW_LOG_CREATESFID(15, ROGUE_FW_GROUP_MISC, 1),
+	  "SR: Skip remaining strip %d in frame" },
+	{ ROGUE_FW_LOG_CREATESFID(16, ROGUE_FW_GROUP_MISC, 1),
+	  "SR: Inform HW that strip %d is a new frame" },
+	{ ROGUE_FW_LOG_CREATESFID(17, ROGUE_FW_GROUP_MISC, 1),
+	  "SR: Timeout waiting for INTERRUPT_FRAME_SKIP (after %d ticks)" },
+	{ ROGUE_FW_LOG_CREATESFID(18, ROGUE_FW_GROUP_MISC, 1),
+	  "SR: Strip mode is %d" },
+	{ ROGUE_FW_LOG_CREATESFID(19, ROGUE_FW_GROUP_MISC, 1),
+	  "SR: Strip Render start (strip %d)" },
+	{ ROGUE_FW_LOG_CREATESFID(20, ROGUE_FW_GROUP_MISC, 1),
+	  "SR: Strip Render complete (buffer %d)" },
+	{ ROGUE_FW_LOG_CREATESFID(21, ROGUE_FW_GROUP_MISC, 1),
+	  "SR: Strip Render fault (buffer %d)" },
+	{ ROGUE_FW_LOG_CREATESFID(22, ROGUE_FW_GROUP_MISC, 1),
+	  "TRP state: %d" },
+	{ ROGUE_FW_LOG_CREATESFID(23, ROGUE_FW_GROUP_MISC, 1),
+	  "TRP failure: %d" },
+	{ ROGUE_FW_LOG_CREATESFID(24, ROGUE_FW_GROUP_MISC, 1),
+	  "SW TRP State: %d" },
+	{ ROGUE_FW_LOG_CREATESFID(25, ROGUE_FW_GROUP_MISC, 1),
+	  "SW TRP failure: %d" },
+	{ ROGUE_FW_LOG_CREATESFID(26, ROGUE_FW_GROUP_MISC, 1),
+	  "HW kick event (%u)" },
+	{ ROGUE_FW_LOG_CREATESFID(27, ROGUE_FW_GROUP_MISC, 4),
+	  "GPU core (%u/%u): checksum 0x%08x vs. 0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(28, ROGUE_FW_GROUP_MISC, 6),
+	  "GPU core (%u/%u), unit (%u,%u): checksum 0x%08x vs. 0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(29, ROGUE_FW_GROUP_MISC, 6),
+	  "HWR: Core%u, Register=0x%08x, OldValue=0x%08x%08x, CurrValue=0x%08x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(30, ROGUE_FW_GROUP_MISC, 4),
+	  "HWR: USC Core%u, ui32TotalSlotsUsedByDM=0x%08x, psDMHWCtl->ui32USCSlotsUsedByDM=0x%08x, bHWRNeeded=%u" },
+	{ ROGUE_FW_LOG_CREATESFID(31, ROGUE_FW_GROUP_MISC, 6),
+	  "HWR: USC Core%u, Register=0x%08x, OldValue=0x%08x%08x, CurrValue=0x%08x%08x" },
+
+	{ ROGUE_FW_LOG_CREATESFID(1, ROGUE_FW_GROUP_PM, 10),
+	  "ALIST%d SP = %u, MLIST%d SP = %u (VCE 0x%08x%08x, TE 0x%08x%08x, ALIST 0x%08x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(2, ROGUE_FW_GROUP_PM, 8),
+	  "Is TA: %d, finished: %d on HW %u (HWRTData = 0x%08x, MemCtx = 0x%08x). FL different between TA/3D: global:%d, local:%d, mmu:%d" },
+	{ ROGUE_FW_LOG_CREATESFID(3, ROGUE_FW_GROUP_PM, 14),
+	  "UFL-3D-Base: 0x%08x%08x (SP = %u, 4PB = %u, 4PT = %u), FL-3D-Base: 0x%08x%08x (SP = %u, 4PB = %u, 4PT = %u), MFL-3D-Base: 0x%08x%08x (SP = %u, 4PT = %u)" },
+	{ ROGUE_FW_LOG_CREATESFID(4, ROGUE_FW_GROUP_PM, 14),
+	  "UFL-TA-Base: 0x%08x%08x (SP = %u, 4PB = %u, 4PT = %u), FL-TA-Base: 0x%08x%08x (SP = %u, 4PB = %u, 4PT = %u), MFL-TA-Base: 0x%08x%08x (SP = %u, 4PT = %u)" },
+	{ ROGUE_FW_LOG_CREATESFID(5, ROGUE_FW_GROUP_PM, 5),
+	  "Freelist grow completed [0x%08x]: added pages 0x%08x, total pages 0x%08x, new DevVirtAddr 0x%08x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(6, ROGUE_FW_GROUP_PM, 1),
+	  "Grow for freelist ID=0x%08x denied by host" },
+	{ ROGUE_FW_LOG_CREATESFID(7, ROGUE_FW_GROUP_PM, 5),
+	  "Freelist update completed [0x%08x]: old total pages 0x%08x, new total pages 0x%08x, new DevVirtAddr 0x%08x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(8, ROGUE_FW_GROUP_PM, 1),
+	  "Reconstruction of freelist ID=0x%08x failed" },
+	{ ROGUE_FW_LOG_CREATESFID(9, ROGUE_FW_GROUP_PM, 2),
+	  "Ignored attempt to pause or unpause the DM while there is no relevant operation in progress (0-TA,1-3D): %d, operation(0-unpause, 1-pause): %d" },
+	{ ROGUE_FW_LOG_CREATESFID(10, ROGUE_FW_GROUP_PM, 2),
+	  "Force free 3D Context memory, FWCtx: 0x%08x, status(1:success, 0:fail): %d" },
+	{ ROGUE_FW_LOG_CREATESFID(11, ROGUE_FW_GROUP_PM, 1),
+	  "PM pause TA ALLOC: PM_PAGE_MANAGEOP set to 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(12, ROGUE_FW_GROUP_PM, 1),
+	  "PM unpause TA ALLOC: PM_PAGE_MANAGEOP set to 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(13, ROGUE_FW_GROUP_PM, 1),
+	  "PM pause 3D DALLOC: PM_PAGE_MANAGEOP set to 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(14, ROGUE_FW_GROUP_PM, 1),
+	  "PM unpause 3D DALLOC: PM_PAGE_MANAGEOP set to 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(15, ROGUE_FW_GROUP_PM, 1),
+	  "PM ALLOC/DALLOC change was not actioned: PM_PAGE_MANAGEOP_STATUS=0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(16, ROGUE_FW_GROUP_PM, 7),
+	  "Is TA: %d, finished: %d on HW %u (HWRTData = 0x%08x, MemCtx = 0x%08x). FL different between TA/3D: global:%d, local:%d" },
+	{ ROGUE_FW_LOG_CREATESFID(17, ROGUE_FW_GROUP_PM, 10),
+	  "UFL-3D-Base: 0x%08x%08x (SP = %u, 4PB = %u, 4PT = %u), FL-3D-Base: 0x%08x%08x (SP = %u, 4PB = %u, 4PT = %u)" },
+	{ ROGUE_FW_LOG_CREATESFID(18, ROGUE_FW_GROUP_PM, 10),
+	  "UFL-TA-Base: 0x%08x%08x (SP = %u, 4PB = %u, 4PT = %u), FL-TA-Base: 0x%08x%08x (SP = %u, 4PB = %u, 4PT = %u)" },
+	{ ROGUE_FW_LOG_CREATESFID(19, ROGUE_FW_GROUP_PM, 7),
+	  "Freelist update completed [0x%08x / FL State 0x%08x%08x]: old total pages 0x%08x, new total pages 0x%08x, new DevVirtAddr 0x%08x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(20, ROGUE_FW_GROUP_PM, 7),
+	  "Freelist update failed [0x%08x / FL State 0x%08x%08x]: old total pages 0x%08x, new total pages 0x%08x, new DevVirtAddr 0x%08x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(21, ROGUE_FW_GROUP_PM, 10),
+	  "UFL-3D-State-Base: 0x%08x%08x (SP = %u, 4PB = %u, 4PT = %u), FL-3D-State-Base: 0x%08x%08x (SP = %u, 4PB = %u, 4PT = %u)" },
+	{ ROGUE_FW_LOG_CREATESFID(22, ROGUE_FW_GROUP_PM, 10),
+	  "UFL-TA-State-Base: 0x%08x%08x (SP = %u, 4PB = %u, 4PT = %u), FL-TA-State-Base: 0x%08x%08x (SP = %u, 4PB = %u, 4PT = %u)" },
+	{ ROGUE_FW_LOG_CREATESFID(23, ROGUE_FW_GROUP_PM, 5),
+	  "Freelist 0x%08x base address from HW: 0x%02x%08x (expected value: 0x%02x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(24, ROGUE_FW_GROUP_PM, 5),
+	  "Analysis of FL grow: Pause=(%u,%u) Paused+Valid(%u,%u) PMStateBuffer=0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(25, ROGUE_FW_GROUP_PM, 5),
+	  "Attempt FL grow for FL: 0x%08x, new dev address: 0x%02x%08x, new page count: %u, new ready count: %u" },
+	{ ROGUE_FW_LOG_CREATESFID(26, ROGUE_FW_GROUP_PM, 5),
+	  "Deferring FL grow for non-loaded FL: 0x%08x, new dev address: 0x%02x%08x, new page count: %u, new ready count: %u" },
+	{ ROGUE_FW_LOG_CREATESFID(27, ROGUE_FW_GROUP_PM, 4),
+	  "Is GEOM: %d, finished: %d (HWRTData = 0x%08x, MemCtx = 0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(28, ROGUE_FW_GROUP_PM, 1),
+	  "3D Timeout Now for FWCtx 0x%08.8x" },
+	{ ROGUE_FW_LOG_CREATESFID(29, ROGUE_FW_GROUP_PM, 1),
+	  "GEOM PM Recycle for FWCtx 0x%08.8x" },
+	{ ROGUE_FW_LOG_CREATESFID(30, ROGUE_FW_GROUP_PM, 1),
+	  "PM running primary config (Core %d)" },
+	{ ROGUE_FW_LOG_CREATESFID(31, ROGUE_FW_GROUP_PM, 1),
+	  "PM running secondary config (Core %d)" },
+	{ ROGUE_FW_LOG_CREATESFID(32, ROGUE_FW_GROUP_PM, 1),
+	  "PM running tertiary config (Core %d)" },
+	{ ROGUE_FW_LOG_CREATESFID(33, ROGUE_FW_GROUP_PM, 1),
+	  "PM running quaternary config (Core %d)" },
+
+	{ ROGUE_FW_LOG_CREATESFID(1, ROGUE_FW_GROUP_RPM, 3),
+	  "Global link list dynamic page count: vertex 0x%x, varying 0x%x, node 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(2, ROGUE_FW_GROUP_RPM, 3),
+	  "Global link list static page count: vertex 0x%x, varying 0x%x, node 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(3, ROGUE_FW_GROUP_RPM, 0),
+	  "RPM request failed. Waiting for freelist grow." },
+	{ ROGUE_FW_LOG_CREATESFID(4, ROGUE_FW_GROUP_RPM, 0),
+	  "RPM request failed. Aborting the current frame." },
+	{ ROGUE_FW_LOG_CREATESFID(5, ROGUE_FW_GROUP_RPM, 1),
+	  "RPM waiting for pending grow on freelist 0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(6, ROGUE_FW_GROUP_RPM, 3),
+	  "Request freelist grow [0x%08x] current pages %d, grow size %d" },
+	{ ROGUE_FW_LOG_CREATESFID(7, ROGUE_FW_GROUP_RPM, 2),
+	  "Freelist load: SHF = 0x%08x, SHG = 0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(8, ROGUE_FW_GROUP_RPM, 2),
+	  "SHF FPL register: 0x%08x.0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(9, ROGUE_FW_GROUP_RPM, 2),
+	  "SHG FPL register: 0x%08x.0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(10, ROGUE_FW_GROUP_RPM, 5),
+	  "Kernel requested RPM grow on freelist (type %d) at 0x%08x from current size %d to new size %d, RPM restart: %d (1=Yes)" },
+	{ ROGUE_FW_LOG_CREATESFID(11, ROGUE_FW_GROUP_RPM, 0),
+	  "Restarting SHG" },
+	{ ROGUE_FW_LOG_CREATESFID(12, ROGUE_FW_GROUP_RPM, 0),
+	  "Grow failed, aborting the current frame." },
+	{ ROGUE_FW_LOG_CREATESFID(13, ROGUE_FW_GROUP_RPM, 1),
+	  "RPM abort complete on HWFrameData [0x%08x]." },
+	{ ROGUE_FW_LOG_CREATESFID(14, ROGUE_FW_GROUP_RPM, 1),
+	  "RPM freelist cleanup [0x%08x] requires abort to proceed." },
+	{ ROGUE_FW_LOG_CREATESFID(15, ROGUE_FW_GROUP_RPM, 2),
+	  "RPM page table base register: 0x%08x.0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(16, ROGUE_FW_GROUP_RPM, 0),
+	  "Issuing RPM abort." },
+	{ ROGUE_FW_LOG_CREATESFID(17, ROGUE_FW_GROUP_RPM, 0),
+	  "RPM OOM received but toggle bits indicate free pages available" },
+	{ ROGUE_FW_LOG_CREATESFID(18, ROGUE_FW_GROUP_RPM, 0),
+	  "RPM hardware timeout. Unable to process OOM event." },
+	{ ROGUE_FW_LOG_CREATESFID(19, ROGUE_FW_GROUP_RPM, 5),
+	  "SHF FL (0x%08x) load, FPL: 0x%08x.0x%08x, roff: 0x%08x, woff: 0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(20, ROGUE_FW_GROUP_RPM, 5),
+	  "SHG FL (0x%08x) load, FPL: 0x%08x.0x%08x, roff: 0x%08x, woff: 0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(21, ROGUE_FW_GROUP_RPM, 3),
+	  "SHF FL (0x%08x) store, roff: 0x%08x, woff: 0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(22, ROGUE_FW_GROUP_RPM, 3),
+	  "SHG FL (0x%08x) store, roff: 0x%08x, woff: 0x%08x" },
+
+	{ ROGUE_FW_LOG_CREATESFID(1, ROGUE_FW_GROUP_RTD, 2),
+	  "3D RTData 0x%08x finished on HW context %u" },
+	{ ROGUE_FW_LOG_CREATESFID(2, ROGUE_FW_GROUP_RTD, 2),
+	  "3D RTData 0x%08x ready on HW context %u" },
+	{ ROGUE_FW_LOG_CREATESFID(3, ROGUE_FW_GROUP_RTD, 4),
+	  "CONTEXT_PB_BASE set to 0x%x, FL different between TA/3D: local: %d, global: %d, mmu: %d" },
+	{ ROGUE_FW_LOG_CREATESFID(4, ROGUE_FW_GROUP_RTD, 2),
+	  "Loading VFP table 0x%08x%08x for 3D" },
+	{ ROGUE_FW_LOG_CREATESFID(5, ROGUE_FW_GROUP_RTD, 2),
+	  "Loading VFP table 0x%08x%08x for TA" },
+	{ ROGUE_FW_LOG_CREATESFID(6, ROGUE_FW_GROUP_RTD, 10),
+	  "Load Freelist 0x%x type: %d (0:local,1:global,2:mmu) for DM%d: TotalPMPages = %d, FL-addr = 0x%08x%08x, stacktop = 0x%08x%08x, Alloc Page Count = %u, Alloc MMU Page Count = %u" },
+	{ ROGUE_FW_LOG_CREATESFID(7, ROGUE_FW_GROUP_RTD, 0),
+	  "Perform VHEAP table store" },
+	{ ROGUE_FW_LOG_CREATESFID(8, ROGUE_FW_GROUP_RTD, 2),
+	  "RTData 0x%08x: found match in Context=%d: Load=No, Store=No" },
+	{ ROGUE_FW_LOG_CREATESFID(9, ROGUE_FW_GROUP_RTD, 2),
+	  "RTData 0x%08x: found NULL in Context=%d: Load=Yes, Store=No" },
+	{ ROGUE_FW_LOG_CREATESFID(10, ROGUE_FW_GROUP_RTD, 3),
+	  "RTData 0x%08x: found state 3D finished (0x%08x) in Context=%d: Load=Yes, Store=Yes" },
+	{ ROGUE_FW_LOG_CREATESFID(11, ROGUE_FW_GROUP_RTD, 3),
+	  "RTData 0x%08x: found state TA finished (0x%08x) in Context=%d: Load=Yes, Store=Yes" },
+	{ ROGUE_FW_LOG_CREATESFID(12, ROGUE_FW_GROUP_RTD, 5),
+	  "Loading stack-pointers for %d (0:MidTA,1:3D) on context %d, MLIST = 0x%08x, ALIST = 0x%08x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(13, ROGUE_FW_GROUP_RTD, 10),
+	  "Store Freelist 0x%x type: %d (0:local,1:global,2:mmu) for DM%d: TotalPMPages = %d, FL-addr = 0x%08x%08x, stacktop = 0x%08x%08x, Alloc Page Count = %u, Alloc MMU Page Count = %u" },
+	{ ROGUE_FW_LOG_CREATESFID(14, ROGUE_FW_GROUP_RTD, 2),
+	  "TA RTData 0x%08x finished on HW context %u" },
+	{ ROGUE_FW_LOG_CREATESFID(15, ROGUE_FW_GROUP_RTD, 2),
+	  "TA RTData 0x%08x loaded on HW context %u" },
+	{ ROGUE_FW_LOG_CREATESFID(16, ROGUE_FW_GROUP_RTD, 12),
+	  "Store Freelist 0x%x type: %d (0:local,1:global,2:mmu) for DM%d: FL Total Pages %u (max=%u,grow size=%u), FL-addr = 0x%08x%08x, stacktop = 0x%08x%08x, Alloc Page Count = %u, Alloc MMU Page Count = %u" },
+	{ ROGUE_FW_LOG_CREATESFID(17, ROGUE_FW_GROUP_RTD, 12),
+	  "Load  Freelist 0x%x type: %d (0:local,1:global,2:mmu) for DM%d: FL Total Pages %u (max=%u,grow size=%u), FL-addr = 0x%08x%08x, stacktop = 0x%08x%08x, Alloc Page Count = %u, Alloc MMU Page Count = %u" },
+	{ ROGUE_FW_LOG_CREATESFID(18, ROGUE_FW_GROUP_RTD, 1),
+	  "Freelist 0x%x RESET!!!!!!!!" },
+	{ ROGUE_FW_LOG_CREATESFID(19, ROGUE_FW_GROUP_RTD, 5),
+	  "Freelist 0x%x stacktop = 0x%08x%08x, Alloc Page Count = %u, Alloc MMU Page Count = %u" },
+	{ ROGUE_FW_LOG_CREATESFID(20, ROGUE_FW_GROUP_RTD, 3),
+	  "Request reconstruction of Freelist 0x%x type: %d (0:local,1:global,2:mmu) on HW context %u" },
+	{ ROGUE_FW_LOG_CREATESFID(21, ROGUE_FW_GROUP_RTD, 1),
+	  "Freelist reconstruction ACK from host (HWR state :%u)" },
+	{ ROGUE_FW_LOG_CREATESFID(22, ROGUE_FW_GROUP_RTD, 0),
+	  "Freelist reconstruction completed" },
+	{ ROGUE_FW_LOG_CREATESFID(23, ROGUE_FW_GROUP_RTD, 3),
+	  "TA RTData 0x%08x loaded on HW context %u HWRTDataNeedsLoading=%d" },
+	{ ROGUE_FW_LOG_CREATESFID(24, ROGUE_FW_GROUP_RTD, 3),
+	  "TE Region headers base 0x%08x%08x (RGNHDR Init: %d)" },
+	{ ROGUE_FW_LOG_CREATESFID(25, ROGUE_FW_GROUP_RTD, 8),
+	  "TA Buffers: FWCtx 0x%08x, RT 0x%08x, RTData 0x%08x, VHeap 0x%08x%08x, TPC 0x%08x%08x (MemCtx 0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(26, ROGUE_FW_GROUP_RTD, 2),
+	  "3D RTData 0x%08x loaded on HW context %u" },
+	{ ROGUE_FW_LOG_CREATESFID(27, ROGUE_FW_GROUP_RTD, 4),
+	  "3D Buffers: FWCtx 0x%08x, RT 0x%08x, RTData 0x%08x (MemCtx 0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(28, ROGUE_FW_GROUP_RTD, 2),
+	  "Restarting TA after partial render, HWRTData0State=0x%x, HWRTData1State=0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(29, ROGUE_FW_GROUP_RTD, 3),
+	  "CONTEXT_PB_BASE set to 0x%x, FL different between TA/3D: local: %d, global: %d" },
+	{ ROGUE_FW_LOG_CREATESFID(30, ROGUE_FW_GROUP_RTD, 12),
+	  "Store Freelist 0x%x type: %d (0:local,1:global) for PMDM%d: FL Total Pages %u (max=%u,grow size=%u), FL-addr = 0x%08x%08x, stacktop = 0x%08x%08x, Alloc Page Count = %u, Alloc MMU Page Count = %u" },
+	{ ROGUE_FW_LOG_CREATESFID(31, ROGUE_FW_GROUP_RTD, 12),
+	  "Load  Freelist 0x%x type: %d (0:local,1:global) for PMDM%d: FL Total Pages %u (max=%u,grow size=%u), FL-addr = 0x%08x%08x, stacktop = 0x%08x%08x, Alloc Page Count = %u, Alloc MMU Page Count = %u" },
+	{ ROGUE_FW_LOG_CREATESFID(32, ROGUE_FW_GROUP_RTD, 5),
+	  "3D Buffers: FWCtx 0x%08x, parent RT 0x%08x, RTData 0x%08x on ctx %d, (MemCtx 0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(33, ROGUE_FW_GROUP_RTD, 7),
+	  "TA Buffers: FWCtx 0x%08x, RTData 0x%08x, VHeap 0x%08x%08x, TPC 0x%08x%08x (MemCtx 0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(34, ROGUE_FW_GROUP_RTD, 4),
+	  "3D Buffers: FWCtx 0x%08x, RTData 0x%08x on ctx %d, (MemCtx 0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(35, ROGUE_FW_GROUP_RTD, 6),
+	  "Load  Freelist 0x%x type: %d (0:local,1:global) for PMDM%d: FL Total Pages %u (max=%u,grow size=%u)" },
+	{ ROGUE_FW_LOG_CREATESFID(36, ROGUE_FW_GROUP_RTD, 1),
+	  "TA RTData 0x%08x marked as killed." },
+	{ ROGUE_FW_LOG_CREATESFID(37, ROGUE_FW_GROUP_RTD, 1),
+	  "3D RTData 0x%08x marked as killed." },
+	{ ROGUE_FW_LOG_CREATESFID(38, ROGUE_FW_GROUP_RTD, 1),
+	  "RTData 0x%08x will be killed after TA restart." },
+	{ ROGUE_FW_LOG_CREATESFID(39, ROGUE_FW_GROUP_RTD, 3),
+	  "RTData 0x%08x Render State Buffer 0x%02x%08x will be reset." },
+	{ ROGUE_FW_LOG_CREATESFID(40, ROGUE_FW_GROUP_RTD, 3),
+	  "GEOM RTData 0x%08x using Render State Buffer 0x%02x%08x." },
+	{ ROGUE_FW_LOG_CREATESFID(41, ROGUE_FW_GROUP_RTD, 3),
+	  "FRAG RTData 0x%08x using Render State Buffer 0x%02x%08x." },
+
+	{ ROGUE_FW_LOG_CREATESFID(1, ROGUE_FW_GROUP_SPM, 0),
+	  "Force Z-Load for partial render" },
+	{ ROGUE_FW_LOG_CREATESFID(2, ROGUE_FW_GROUP_SPM, 0),
+	  "Force Z-Store for partial render" },
+	{ ROGUE_FW_LOG_CREATESFID(3, ROGUE_FW_GROUP_SPM, 1),
+	  "3D MemFree: Local FL 0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(4, ROGUE_FW_GROUP_SPM, 1),
+	  "3D MemFree: MMU FL 0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(5, ROGUE_FW_GROUP_SPM, 1),
+	  "3D MemFree: Global FL 0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(6, ROGUE_FW_GROUP_SPM, 6),
+	  "OOM TA/3D PR Check: [0x%08.8x] is 0x%08.8x requires 0x%08.8x, HardwareSync Fence [0x%08.8x] is 0x%08.8x requires 0x%08.8x" },
+	{ ROGUE_FW_LOG_CREATESFID(7, ROGUE_FW_GROUP_SPM, 3),
+	  "OOM TA_cmd=0x%08x, U-FL 0x%08x, N-FL 0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(8, ROGUE_FW_GROUP_SPM, 5),
+	  "OOM TA_cmd=0x%08x, OOM MMU:%d, U-FL 0x%08x, N-FL 0x%08x, MMU-FL 0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(9, ROGUE_FW_GROUP_SPM, 0),
+	  "Partial render avoided" },
+	{ ROGUE_FW_LOG_CREATESFID(10, ROGUE_FW_GROUP_SPM, 0),
+	  "Partial render discarded" },
+	{ ROGUE_FW_LOG_CREATESFID(11, ROGUE_FW_GROUP_SPM, 0),
+	  "Partial Render finished" },
+	{ ROGUE_FW_LOG_CREATESFID(12, ROGUE_FW_GROUP_SPM, 0),
+	  "SPM Owner = 3D-BG" },
+	{ ROGUE_FW_LOG_CREATESFID(13, ROGUE_FW_GROUP_SPM, 0),
+	  "SPM Owner = 3D-IRQ" },
+	{ ROGUE_FW_LOG_CREATESFID(14, ROGUE_FW_GROUP_SPM, 0),
+	  "SPM Owner = NONE" },
+	{ ROGUE_FW_LOG_CREATESFID(15, ROGUE_FW_GROUP_SPM, 0),
+	  "SPM Owner = TA-BG" },
+	{ ROGUE_FW_LOG_CREATESFID(16, ROGUE_FW_GROUP_SPM, 0),
+	  "SPM Owner = TA-IRQ" },
+	{ ROGUE_FW_LOG_CREATESFID(17, ROGUE_FW_GROUP_SPM, 2),
+	  "ZStore address 0x%08x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(18, ROGUE_FW_GROUP_SPM, 2),
+	  "SStore address 0x%08x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(19, ROGUE_FW_GROUP_SPM, 2),
+	  "ZLoad address 0x%08x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(20, ROGUE_FW_GROUP_SPM, 2),
+	  "SLoad address 0x%08x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(21, ROGUE_FW_GROUP_SPM, 0),
+	  "No deferred ZS Buffer provided" },
+	{ ROGUE_FW_LOG_CREATESFID(22, ROGUE_FW_GROUP_SPM, 1),
+	  "ZS Buffer successfully populated (ID=0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(23, ROGUE_FW_GROUP_SPM, 1),
+	  "No need to populate ZS Buffer (ID=0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(24, ROGUE_FW_GROUP_SPM, 1),
+	  "ZS Buffer successfully unpopulated (ID=0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(25, ROGUE_FW_GROUP_SPM, 1),
+	  "No need to unpopulate ZS Buffer (ID=0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(26, ROGUE_FW_GROUP_SPM, 1),
+	  "Send ZS-Buffer backing request to host (ID=0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(27, ROGUE_FW_GROUP_SPM, 1),
+	  "Send ZS-Buffer unbacking request to host (ID=0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(28, ROGUE_FW_GROUP_SPM, 1),
+	  "Don't send ZS-Buffer backing request. Previous request still pending (ID=0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(29, ROGUE_FW_GROUP_SPM, 1),
+	  "Don't send ZS-Buffer unbacking request. Previous request still pending (ID=0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(30, ROGUE_FW_GROUP_SPM, 1),
+	  "Partial Render waiting for ZBuffer to be backed (ID=0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(31, ROGUE_FW_GROUP_SPM, 1),
+	  "Partial Render waiting for SBuffer to be backed (ID=0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(32, ROGUE_FW_GROUP_SPM, 0),
+	  "SPM State = none" },
+	{ ROGUE_FW_LOG_CREATESFID(33, ROGUE_FW_GROUP_SPM, 0),
+	  "SPM State = PR blocked" },
+	{ ROGUE_FW_LOG_CREATESFID(34, ROGUE_FW_GROUP_SPM, 0),
+	  "SPM State = wait for grow" },
+	{ ROGUE_FW_LOG_CREATESFID(35, ROGUE_FW_GROUP_SPM, 0),
+	  "SPM State = wait for HW" },
+	{ ROGUE_FW_LOG_CREATESFID(36, ROGUE_FW_GROUP_SPM, 0),
+	  "SPM State = PR running" },
+	{ ROGUE_FW_LOG_CREATESFID(37, ROGUE_FW_GROUP_SPM, 0),
+	  "SPM State = PR avoided" },
+	{ ROGUE_FW_LOG_CREATESFID(38, ROGUE_FW_GROUP_SPM, 0),
+	  "SPM State = PR executed" },
+	{ ROGUE_FW_LOG_CREATESFID(39, ROGUE_FW_GROUP_SPM, 2),
+	  "3DMemFree matches freelist 0x%08x (FL type = %u)" },
+	{ ROGUE_FW_LOG_CREATESFID(40, ROGUE_FW_GROUP_SPM, 0),
+	  "Raise the 3DMemFreeDedected flag" },
+	{ ROGUE_FW_LOG_CREATESFID(41, ROGUE_FW_GROUP_SPM, 1),
+	  "Wait for pending grow on Freelist 0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(42, ROGUE_FW_GROUP_SPM, 1),
+	  "ZS Buffer failed to be populated (ID=0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(43, ROGUE_FW_GROUP_SPM, 5),
+	  "Grow update inconsistency: FL addr: 0x%02x%08x, curr pages: %u, ready: %u, new: %u" },
+	{ ROGUE_FW_LOG_CREATESFID(44, ROGUE_FW_GROUP_SPM, 4),
+	  "OOM: Resumed TA with ready pages, FL addr: 0x%02x%08x, current pages: %u, SP : %u" },
+	{ ROGUE_FW_LOG_CREATESFID(45, ROGUE_FW_GROUP_SPM, 5),
+	  "Received grow update, FL addr: 0x%02x%08x, current pages: %u, ready pages: %u, threshold: %u" },
+	{ ROGUE_FW_LOG_CREATESFID(46, ROGUE_FW_GROUP_SPM, 1),
+	  "No deferred partial render FW (Type=%d) Buffer provided" },
+	{ ROGUE_FW_LOG_CREATESFID(47, ROGUE_FW_GROUP_SPM, 1),
+	  "No need to populate PR Buffer (ID=0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(48, ROGUE_FW_GROUP_SPM, 1),
+	  "No need to unpopulate PR Buffer (ID=0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(49, ROGUE_FW_GROUP_SPM, 1),
+	  "Send PR Buffer backing request to host (ID=0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(50, ROGUE_FW_GROUP_SPM, 1),
+	  "Send PR Buffer unbacking request to host (ID=0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(51, ROGUE_FW_GROUP_SPM, 1),
+	  "Don't send PR Buffer backing request. Previous request still pending (ID=0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(52, ROGUE_FW_GROUP_SPM, 1),
+	  "Don't send PR Buffer unbacking request. Previous request still pending (ID=0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(53, ROGUE_FW_GROUP_SPM, 2),
+	  "Partial Render waiting for Buffer %d type to be backed (ID=0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(54, ROGUE_FW_GROUP_SPM, 4),
+	  "Received grow update, FL addr: 0x%02x%08x, new pages: %u, ready pages: %u" },
+	{ ROGUE_FW_LOG_CREATESFID(66, ROGUE_FW_GROUP_SPM, 3),
+	  "OOM TA/3D PR Check: [0x%08.8x] is 0x%08.8x requires 0x%08.8x" },
+	{ ROGUE_FW_LOG_CREATESFID(67, ROGUE_FW_GROUP_SPM, 3),
+	  "OOM: Resumed TA with ready pages, FL addr: 0x%02x%08x, current pages: %u" },
+	{ ROGUE_FW_LOG_CREATESFID(68, ROGUE_FW_GROUP_SPM, 3),
+	  "OOM TA/3D PR deadlock unblocked reordering DM%d runlist head from Context 0x%08x to 0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(69, ROGUE_FW_GROUP_SPM, 0),
+	  "SPM State = PR force free" },
+
+	{ ROGUE_FW_LOG_CREATESFID(1, ROGUE_FW_GROUP_POW, 4),
+	  "Check Pow state DM%d int: 0x%x, ext: 0x%x, pow flags: 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(2, ROGUE_FW_GROUP_POW, 3),
+	  "GPU idle (might be powered down). Pow state int: 0x%x, ext: 0x%x, flags: 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(3, ROGUE_FW_GROUP_POW, 3),
+	  "OS requested pow off (forced = %d), DM%d, pow flags: 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(4, ROGUE_FW_GROUP_POW, 4),
+	  "Initiate powoff query. Inactive DMs: %d %d %d %d" },
+	{ ROGUE_FW_LOG_CREATESFID(5, ROGUE_FW_GROUP_POW, 2),
+	  "Any RD-DM pending? %d, Any RD-DM Active? %d" },
+	{ ROGUE_FW_LOG_CREATESFID(6, ROGUE_FW_GROUP_POW, 3),
+	  "GPU ready to be powered down. Pow state int: 0x%x, ext: 0x%x, flags: 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(7, ROGUE_FW_GROUP_POW, 2),
+	  "HW Request On(1)/Off(0): %d, Units: 0x%08.8x" },
+	{ ROGUE_FW_LOG_CREATESFID(8, ROGUE_FW_GROUP_POW, 2),
+	  "Request to change num of dusts to %d (Power flags=%d)" },
+	{ ROGUE_FW_LOG_CREATESFID(9, ROGUE_FW_GROUP_POW, 2),
+	  "Changing number of dusts from %d to %d" },
+	{ ROGUE_FW_LOG_CREATESFID(11, ROGUE_FW_GROUP_POW, 0),
+	  "Sidekick init" },
+	{ ROGUE_FW_LOG_CREATESFID(12, ROGUE_FW_GROUP_POW, 1),
+	  "Rascal+Dusts init (# dusts mask: 0x%x)" },
+	{ ROGUE_FW_LOG_CREATESFID(13, ROGUE_FW_GROUP_POW, 0),
+	  "Initiate powoff query for RD-DMs." },
+	{ ROGUE_FW_LOG_CREATESFID(14, ROGUE_FW_GROUP_POW, 0),
+	  "Initiate powoff query for TLA-DM." },
+	{ ROGUE_FW_LOG_CREATESFID(15, ROGUE_FW_GROUP_POW, 2),
+	  "Any RD-DM pending? %d, Any RD-DM Active? %d" },
+	{ ROGUE_FW_LOG_CREATESFID(16, ROGUE_FW_GROUP_POW, 2),
+	  "TLA-DM pending? %d, TLA-DM Active? %d" },
+	{ ROGUE_FW_LOG_CREATESFID(17, ROGUE_FW_GROUP_POW, 1),
+	  "Request power up due to BRN37270. Pow stat int: 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(18, ROGUE_FW_GROUP_POW, 3),
+	  "Cancel power off request int: 0x%x, ext: 0x%x, pow flags: 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(19, ROGUE_FW_GROUP_POW, 1),
+	  "OS requested forced IDLE, pow flags: 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(20, ROGUE_FW_GROUP_POW, 1),
+	  "OS cancelled forced IDLE, pow flags: 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(21, ROGUE_FW_GROUP_POW, 3),
+	  "Idle timer start. Pow state int: 0x%x, ext: 0x%x, flags: 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(22, ROGUE_FW_GROUP_POW, 3),
+	  "Cancel idle timer. Pow state int: 0x%x, ext: 0x%x, flags: 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(23, ROGUE_FW_GROUP_POW, 2),
+	  "Active PM latency set to %dms. Core clock: %d Hz" },
+	{ ROGUE_FW_LOG_CREATESFID(24, ROGUE_FW_GROUP_POW, 2),
+	  "Compute cluster mask change to 0x%x, %d dusts powered." },
+	{ ROGUE_FW_LOG_CREATESFID(25, ROGUE_FW_GROUP_POW, 0),
+	  "Null command executed, repeating initiate powoff query for RD-DMs." },
+	{ ROGUE_FW_LOG_CREATESFID(26, ROGUE_FW_GROUP_POW, 1),
+	  "Power monitor: Estimate of dynamic energy %u" },
+	{ ROGUE_FW_LOG_CREATESFID(27, ROGUE_FW_GROUP_POW, 3),
+	  "Check Pow state: Int: 0x%x, Ext: 0x%x, Pow flags: 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(28, ROGUE_FW_GROUP_POW, 2),
+	  "Proactive DVFS: New deadline, time = 0x%08x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(29, ROGUE_FW_GROUP_POW, 2),
+	  "Proactive DVFS: New workload, cycles = 0x%08x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(30, ROGUE_FW_GROUP_POW, 1),
+	  "Proactive DVFS: Proactive frequency calculated = %u" },
+	{ ROGUE_FW_LOG_CREATESFID(31, ROGUE_FW_GROUP_POW, 1),
+	  "Proactive DVFS: Reactive utilisation = %u percent" },
+	{ ROGUE_FW_LOG_CREATESFID(32, ROGUE_FW_GROUP_POW, 2),
+	  "Proactive DVFS: Reactive frequency calculated = %u.%u" },
+	{ ROGUE_FW_LOG_CREATESFID(33, ROGUE_FW_GROUP_POW, 1),
+	  "Proactive DVFS: OPP Point Sent = 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(34, ROGUE_FW_GROUP_POW, 2),
+	  "Proactive DVFS: Deadline removed = 0x%08x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(35, ROGUE_FW_GROUP_POW, 2),
+	  "Proactive DVFS: Workload removed = 0x%08x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(36, ROGUE_FW_GROUP_POW, 1),
+	  "Proactive DVFS: Throttle to a maximum = 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(37, ROGUE_FW_GROUP_POW, 0),
+	  "Proactive DVFS: Failed to pass OPP point via GPIO." },
+	{ ROGUE_FW_LOG_CREATESFID(38, ROGUE_FW_GROUP_POW, 0),
+	  "Proactive DVFS: Invalid node passed to function." },
+	{ ROGUE_FW_LOG_CREATESFID(39, ROGUE_FW_GROUP_POW, 1),
+	  "Proactive DVFS: Guest OS attempted to do a privileged action. OSid = %u" },
+	{ ROGUE_FW_LOG_CREATESFID(40, ROGUE_FW_GROUP_POW, 1),
+	  "Proactive DVFS: Unprofiled work started. Total unprofiled work present: %u" },
+	{ ROGUE_FW_LOG_CREATESFID(41, ROGUE_FW_GROUP_POW, 1),
+	  "Proactive DVFS: Unprofiled work finished. Total unprofiled work present: %u" },
+	{ ROGUE_FW_LOG_CREATESFID(42, ROGUE_FW_GROUP_POW, 0),
+	  "Proactive DVFS: Disabled: Not enabled by host." },
+	{ ROGUE_FW_LOG_CREATESFID(43, ROGUE_FW_GROUP_POW, 2),
+	  "HW Request Completed(1)/Aborted(0): %d, Ticks: %d" },
+	{ ROGUE_FW_LOG_CREATESFID(44, ROGUE_FW_GROUP_POW, 1),
+	  "Allowed number of dusts is %d due to BRN59042." },
+	{ ROGUE_FW_LOG_CREATESFID(45, ROGUE_FW_GROUP_POW, 3),
+	  "Host timed out while waiting for a forced idle state. Pow state int: 0x%x, ext: 0x%x, flags: 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(46, ROGUE_FW_GROUP_POW, 5),
+	  "Check Pow state: Int: 0x%x, Ext: 0x%x, Pow flags: 0x%x, Fence Counters: Check: %u - Update: %u" },
+	{ ROGUE_FW_LOG_CREATESFID(47, ROGUE_FW_GROUP_POW, 2),
+	  "Proactive DVFS: OPP Point Sent = 0x%x, Success = 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(48, ROGUE_FW_GROUP_POW, 0),
+	  "Proactive DVFS: GPU transitioned to idle" },
+	{ ROGUE_FW_LOG_CREATESFID(49, ROGUE_FW_GROUP_POW, 0),
+	  "Proactive DVFS: GPU transitioned to active" },
+	{ ROGUE_FW_LOG_CREATESFID(50, ROGUE_FW_GROUP_POW, 1),
+	  "Power counter dumping: Data truncated writing register %u. Buffer too small." },
+	{ ROGUE_FW_LOG_CREATESFID(51, ROGUE_FW_GROUP_POW, 0),
+	  "Power controller returned ABORT for last request so retrying." },
+	{ ROGUE_FW_LOG_CREATESFID(52, ROGUE_FW_GROUP_POW, 2),
+	  "Discarding invalid power request: type 0x%x, DM %u" },
+	{ ROGUE_FW_LOG_CREATESFID(53, ROGUE_FW_GROUP_POW, 2),
+	  "Detected attempt to cancel forced idle while not forced idle (pow state 0x%x, pow flags 0x%x)" },
+	{ ROGUE_FW_LOG_CREATESFID(54, ROGUE_FW_GROUP_POW, 2),
+	  "Detected attempt to force power off while not forced idle (pow state 0x%x, pow flags 0x%x)" },
+	{ ROGUE_FW_LOG_CREATESFID(55, ROGUE_FW_GROUP_POW, 1),
+	  "Detected attempt to change dust count while not forced idle (pow state 0x%x)" },
+	{ ROGUE_FW_LOG_CREATESFID(56, ROGUE_FW_GROUP_POW, 3),
+	  "Power monitor: Type = %d (0 = power, 1 = energy), Estimate result = 0x%08x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(57, ROGUE_FW_GROUP_POW, 2),
+	  "Conflicting clock frequency range: OPP min = %u, max = %u" },
+	{ ROGUE_FW_LOG_CREATESFID(58, ROGUE_FW_GROUP_POW, 1),
+	  "Proactive DVFS: Set floor to a minimum = 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(59, ROGUE_FW_GROUP_POW, 2),
+	  "OS requested pow off (forced = %d), pow flags: 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(60, ROGUE_FW_GROUP_POW, 1),
+	  "Discarding invalid power request: type 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(61, ROGUE_FW_GROUP_POW, 3),
+	  "Request to change SPU power state mask from 0x%x to 0x%x. Pow flags: 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(62, ROGUE_FW_GROUP_POW, 2),
+	  "Changing SPU power state mask from 0x%x to 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(63, ROGUE_FW_GROUP_POW, 1),
+	  "Detected attempt to change SPU power state mask while not forced idle (pow state 0x%x)" },
+	{ ROGUE_FW_LOG_CREATESFID(64, ROGUE_FW_GROUP_POW, 1),
+	  "Invalid SPU power mask 0x%x! Changing to 1" },
+	{ ROGUE_FW_LOG_CREATESFID(65, ROGUE_FW_GROUP_POW, 2),
+	  "Proactive DVFS: Send OPP %u with clock divider value %u" },
+	{ ROGUE_FW_LOG_CREATESFID(66, ROGUE_FW_GROUP_POW, 0),
+	  "PPA block started in perf validation mode." },
+	{ ROGUE_FW_LOG_CREATESFID(67, ROGUE_FW_GROUP_POW, 1),
+	  "Reset PPA block state %u (1=reset, 0=recalculate)." },
+	{ ROGUE_FW_LOG_CREATESFID(68, ROGUE_FW_GROUP_POW, 1),
+	  "Power controller returned ABORT for Core-%d last request so retrying." },
+	{ ROGUE_FW_LOG_CREATESFID(69, ROGUE_FW_GROUP_POW, 3),
+	  "HW Request On(1)/Off(0): %d, Units: 0x%08x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(70, ROGUE_FW_GROUP_POW, 5),
+	  "Request to change SPU power state mask from 0x%x to 0x%x and RAC from 0x%x to 0x%x. Pow flags: 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(71, ROGUE_FW_GROUP_POW, 4),
+	  "Changing SPU power state mask from 0x%x to 0x%x and RAC from 0x%x to 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(72, ROGUE_FW_GROUP_POW, 2),
+	  "RAC pending? %d, RAC Active? %d" },
+	{ ROGUE_FW_LOG_CREATESFID(73, ROGUE_FW_GROUP_POW, 0),
+	  "Initiate powoff query for RAC." },
+
+	{ ROGUE_FW_LOG_CREATESFID(1, ROGUE_FW_GROUP_HWR, 2),
+	  "Lockup detected on DM%d, FWCtx: 0x%08.8x" },
+	{ ROGUE_FW_LOG_CREATESFID(2, ROGUE_FW_GROUP_HWR, 3),
+	  "Reset fw state for DM%d, FWCtx: 0x%08.8x, MemCtx: 0x%08.8x" },
+	{ ROGUE_FW_LOG_CREATESFID(3, ROGUE_FW_GROUP_HWR, 0),
+	  "Reset HW" },
+	{ ROGUE_FW_LOG_CREATESFID(4, ROGUE_FW_GROUP_HWR, 0),
+	  "Lockup recovered." },
+	{ ROGUE_FW_LOG_CREATESFID(5, ROGUE_FW_GROUP_HWR, 2),
+	  "Lock-up DM%d FWCtx: 0x%08.8x" },
+	{ ROGUE_FW_LOG_CREATESFID(6, ROGUE_FW_GROUP_HWR, 4),
+	  "Lockup detected: GLB(%d->%d), PER-DM(0x%08x->0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(7, ROGUE_FW_GROUP_HWR, 3),
+	  "Early fault detection: GLB(%d->%d), PER-DM(0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(8, ROGUE_FW_GROUP_HWR, 3),
+	  "Hold scheduling due lockup: GLB(%d), PER-DM(0x%08x->0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(9, ROGUE_FW_GROUP_HWR, 4),
+	  "False lockup detected: GLB(%d->%d), PER-DM(0x%08x->0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(10, ROGUE_FW_GROUP_HWR, 4),
+	  "BRN37729: GLB(%d->%d), PER-DM(0x%08x->0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(11, ROGUE_FW_GROUP_HWR, 3),
+	  "Freelists reconstructed: GLB(%d->%d), PER-DM(0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(12, ROGUE_FW_GROUP_HWR, 4),
+	  "Reconstructing freelists: %u (0-No, 1-Yes): GLB(%d->%d), PER-DM(0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(13, ROGUE_FW_GROUP_HWR, 3),
+	  "HW poll %u (0-Unset 1-Set) failed (reg:0x%08x val:0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(14, ROGUE_FW_GROUP_HWR, 2),
+	  "Discarded cmd on DM%u FWCtx=0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(15, ROGUE_FW_GROUP_HWR, 6),
+	  "Discarded cmd on DM%u (reason=%u) HWRTData=0x%08x (st: %d), FWCtx 0x%08x @ %d" },
+	{ ROGUE_FW_LOG_CREATESFID(16, ROGUE_FW_GROUP_HWR, 2),
+	  "PM fence WA could not be applied, Valid TA Setup: %d, RD powered off: %d" },
+	{ ROGUE_FW_LOG_CREATESFID(17, ROGUE_FW_GROUP_HWR, 5),
+	  "FL snapshot RTD 0x%08.8x - local (0x%08.8x): %d, global (0x%08.8x): %d" },
+	{ ROGUE_FW_LOG_CREATESFID(18, ROGUE_FW_GROUP_HWR, 8),
+	  "FL check RTD 0x%08.8x, discard: %d - local (0x%08.8x): s%d?=c%d, global (0x%08.8x): s%d?=c%d" },
+	{ ROGUE_FW_LOG_CREATESFID(19, ROGUE_FW_GROUP_HWR, 2),
+	  "FL reconstruction 0x%08.8x c%d" },
+	{ ROGUE_FW_LOG_CREATESFID(20, ROGUE_FW_GROUP_HWR, 3),
+	  "3D check: missing TA FWCtx 0x%08.8x @ %d, RTD 0x%08x." },
+	{ ROGUE_FW_LOG_CREATESFID(21, ROGUE_FW_GROUP_HWR, 2),
+	  "Reset HW (mmu:%d, extmem: %d)" },
+	{ ROGUE_FW_LOG_CREATESFID(22, ROGUE_FW_GROUP_HWR, 4),
+	  "Zero TA caches for FWCtx: 0x%08.8x (TPC addr: 0x%08x%08x, size: %d bytes)" },
+	{ ROGUE_FW_LOG_CREATESFID(23, ROGUE_FW_GROUP_HWR, 2),
+	  "Recovery DM%u: Freelists reconstructed. New R-Flags=0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(24, ROGUE_FW_GROUP_HWR, 5),
+	  "Recovery DM%u: FWCtx 0x%08x skipped to command @ %u. PR=%u. New R-Flags=0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(25, ROGUE_FW_GROUP_HWR, 1),
+	  "Recovery DM%u: DM fully recovered" },
+	{ ROGUE_FW_LOG_CREATESFID(26, ROGUE_FW_GROUP_HWR, 2),
+	  "DM%u: Hold scheduling due to R-Flag = 0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(27, ROGUE_FW_GROUP_HWR, 0),
+	  "Analysis: Need freelist reconstruction" },
+	{ ROGUE_FW_LOG_CREATESFID(28, ROGUE_FW_GROUP_HWR, 2),
+	  "Analysis DM%u: Lockup FWCtx: 0x%08.8x. Need to skip to next command" },
+	{ ROGUE_FW_LOG_CREATESFID(29, ROGUE_FW_GROUP_HWR, 2),
+	  "Analysis DM%u: Lockup while TA is OOM FWCtx: 0x%08.8x. Need to skip to next command" },
+	{ ROGUE_FW_LOG_CREATESFID(30, ROGUE_FW_GROUP_HWR, 2),
+	  "Analysis DM%u: Lockup while partial render FWCtx: 0x%08.8x. Need PR cleanup" },
+	{ ROGUE_FW_LOG_CREATESFID(31, ROGUE_FW_GROUP_HWR, 0),
+	  "GPU has locked up" },
+	{ ROGUE_FW_LOG_CREATESFID(32, ROGUE_FW_GROUP_HWR, 1),
+	  "DM%u ready for HWR" },
+	{ ROGUE_FW_LOG_CREATESFID(33, ROGUE_FW_GROUP_HWR, 2),
+	  "Recovery DM%u: Updated Recovery counter. New R-Flags=0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(34, ROGUE_FW_GROUP_HWR, 1),
+	  "Analysis: BRN37729 detected, reset TA and re-kicked 0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(35, ROGUE_FW_GROUP_HWR, 1),
+	  "DM%u timed out" },
+	{ ROGUE_FW_LOG_CREATESFID(36, ROGUE_FW_GROUP_HWR, 1),
+	  "RGX_CR_EVENT_STATUS=0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(37, ROGUE_FW_GROUP_HWR, 2),
+	  "DM%u lockup falsely detected, R-Flags=0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(38, ROGUE_FW_GROUP_HWR, 0),
+	  "GPU has overrun its deadline" },
+	{ ROGUE_FW_LOG_CREATESFID(39, ROGUE_FW_GROUP_HWR, 0),
+	  "GPU has failed a poll" },
+	{ ROGUE_FW_LOG_CREATESFID(40, ROGUE_FW_GROUP_HWR, 2),
+	  "RGX DM%u phase count=0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(41, ROGUE_FW_GROUP_HWR, 2),
+	  "Reset HW (loop:%d, poll failures: 0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(42, ROGUE_FW_GROUP_HWR, 1),
+	  "MMU fault event: 0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(43, ROGUE_FW_GROUP_HWR, 1),
+	  "BIF1 page fault detected (Bank1 MMU Status: 0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(44, ROGUE_FW_GROUP_HWR, 1),
+	  "Fast CRC Failed. Proceeding to full register checking (DM: %u)." },
+	{ ROGUE_FW_LOG_CREATESFID(45, ROGUE_FW_GROUP_HWR, 2),
+	  "Meta MMU page fault detected (Meta MMU Status: 0x%08x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(46, ROGUE_FW_GROUP_HWR, 2),
+	  "Fast CRC Check result for DM%u is HWRNeeded=%u" },
+	{ ROGUE_FW_LOG_CREATESFID(47, ROGUE_FW_GROUP_HWR, 2),
+	  "Full Signature Check result for DM%u is HWRNeeded=%u" },
+	{ ROGUE_FW_LOG_CREATESFID(48, ROGUE_FW_GROUP_HWR, 3),
+	  "Final result for DM%u is HWRNeeded=%u with HWRChecksToGo=%u" },
+	{ ROGUE_FW_LOG_CREATESFID(49, ROGUE_FW_GROUP_HWR, 3),
+	  "USC Slots result for DM%u is HWRNeeded=%u USCSlotsUsedByDM=%d" },
+	{ ROGUE_FW_LOG_CREATESFID(50, ROGUE_FW_GROUP_HWR, 2),
+	  "Deadline counter for DM%u is HWRDeadline=%u" },
+	{ ROGUE_FW_LOG_CREATESFID(51, ROGUE_FW_GROUP_HWR, 1),
+	  "Holding Scheduling on OSid %u due to pending freelist reconstruction" },
+	{ ROGUE_FW_LOG_CREATESFID(52, ROGUE_FW_GROUP_HWR, 2),
+	  "Requesting reconstruction for freelist 0x%x (ID=%d)" },
+	{ ROGUE_FW_LOG_CREATESFID(53, ROGUE_FW_GROUP_HWR, 1),
+	  "Reconstruction of freelist ID=%d complete" },
+	{ ROGUE_FW_LOG_CREATESFID(54, ROGUE_FW_GROUP_HWR, 4),
+	  "Reconstruction needed for freelist 0x%x (ID=%d) type: %d (0:local,1:global,2:mmu) on HW context %u" },
+	{ ROGUE_FW_LOG_CREATESFID(55, ROGUE_FW_GROUP_HWR, 1),
+	  "Reconstruction of freelist ID=%d failed" },
+	{ ROGUE_FW_LOG_CREATESFID(56, ROGUE_FW_GROUP_HWR, 4),
+	  "Restricting PDS Tasks to help other stalling DMs (RunningMask=0x%02x, StallingMask=0x%02x, PDS_CTRL=0x%08x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(57, ROGUE_FW_GROUP_HWR, 4),
+	  "Unrestricting PDS Tasks again (RunningMask=0x%02x, StallingMask=0x%02x, PDS_CTRL=0x%08x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(58, ROGUE_FW_GROUP_HWR, 2),
+	  "USC slots: %u used by DM%u" },
+	{ ROGUE_FW_LOG_CREATESFID(59, ROGUE_FW_GROUP_HWR, 1),
+	  "USC slots: %u empty" },
+	{ ROGUE_FW_LOG_CREATESFID(60, ROGUE_FW_GROUP_HWR, 5),
+	  "HCS DM%d's Context Switch failed to meet deadline. Current time: 0x%08x%08x, deadline: 0x%08x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(61, ROGUE_FW_GROUP_HWR, 1),
+	  "Begin hardware reset (HWR Counter=%d)" },
+	{ ROGUE_FW_LOG_CREATESFID(62, ROGUE_FW_GROUP_HWR, 1),
+	  "Finished hardware reset (HWR Counter=%d)" },
+	{ ROGUE_FW_LOG_CREATESFID(63, ROGUE_FW_GROUP_HWR, 2),
+	  "Holding Scheduling on DM %u for OSid %u due to pending freelist reconstruction" },
+	{ ROGUE_FW_LOG_CREATESFID(64, ROGUE_FW_GROUP_HWR, 5),
+	  "User Mode Queue ROff reset: FWCtx 0x%08.8x, queue: 0x%08x%08x (Roff = %u becomes StreamStartOffset = %u)" },
+	{ ROGUE_FW_LOG_CREATESFID(65, ROGUE_FW_GROUP_HWR, 4),
+	  "Reconstruction needed for freelist 0x%x (ID=%d) type: %d (0:local,1:global) on HW context %u" },
+	{ ROGUE_FW_LOG_CREATESFID(66, ROGUE_FW_GROUP_HWR, 3),
+	  "Mips page fault detected (BadVAddr: 0x%08x, EntryLo0: 0x%08x, EntryLo1: 0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(67, ROGUE_FW_GROUP_HWR, 1),
+	  "At least one other DM is running okay so DM%u will get another chance" },
+	{ ROGUE_FW_LOG_CREATESFID(68, ROGUE_FW_GROUP_HWR, 2),
+	  "Reconstructing in FW, FL: 0x%x (ID=%d)" },
+	{ ROGUE_FW_LOG_CREATESFID(69, ROGUE_FW_GROUP_HWR, 4),
+	  "Zero RTC for FWCtx: 0x%08.8x (RTC addr: 0x%08x%08x, size: %d bytes)" },
+	{ ROGUE_FW_LOG_CREATESFID(70, ROGUE_FW_GROUP_HWR, 5),
+	  "Reconstruction needed for freelist 0x%x (ID=%d) type: %d (0:local,1:global) phase: %d (0:TA, 1:3D) on HW context %u" },
+	{ ROGUE_FW_LOG_CREATESFID(71, ROGUE_FW_GROUP_HWR, 3),
+	  "Start long HW poll %u (0-Unset 1-Set) for (reg:0x%08x val:0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(72, ROGUE_FW_GROUP_HWR, 1),
+	  "End long HW poll (result=%d)" },
+	{ ROGUE_FW_LOG_CREATESFID(73, ROGUE_FW_GROUP_HWR, 3),
+	  "DM%u has taken %d ticks and deadline is %d ticks" },
+	{ ROGUE_FW_LOG_CREATESFID(74, ROGUE_FW_GROUP_HWR, 5),
+	  "USC Watchdog result for DM%u is HWRNeeded=%u Status=%u USCs={0x%x} with HWRChecksToGo=%u" },
+	{ ROGUE_FW_LOG_CREATESFID(75, ROGUE_FW_GROUP_HWR, 6),
+	  "Reconstruction needed for freelist 0x%x (ID=%d) OSid: %d type: %d (0:local,1:global) phase: %d (0:TA, 1:3D) on HW context %u" },
+	{ ROGUE_FW_LOG_CREATESFID(76, ROGUE_FW_GROUP_HWR, 1),
+	  "GPU-%u has locked up" },
+	{ ROGUE_FW_LOG_CREATESFID(77, ROGUE_FW_GROUP_HWR, 1),
+	  "DM%u has locked up" },
+	{ ROGUE_FW_LOG_CREATESFID(78, ROGUE_FW_GROUP_HWR, 2),
+	  "Core %d RGX_CR_EVENT_STATUS=0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(79, ROGUE_FW_GROUP_HWR, 2),
+	  "RGX_CR_MULTICORE_EVENT_STATUS%u=0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(80, ROGUE_FW_GROUP_HWR, 5),
+	  "BIF0 page fault detected (Core %d MMU Status: 0x%08x%08x Req Status: 0x%08x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(81, ROGUE_FW_GROUP_HWR, 3),
+	  "MMU page fault detected (Core %d MMU Status: 0x%08x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(82, ROGUE_FW_GROUP_HWR, 4),
+	  "MMU page fault detected (Core %d MMU Status: 0x%08x%08x 0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(83, ROGUE_FW_GROUP_HWR, 4),
+	  "Reset HW (core:%d of %d, loop:%d, poll failures: 0x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(84, ROGUE_FW_GROUP_HWR, 3),
+	  "Fast CRC Check result for Core%u, DM%u is HWRNeeded=%u" },
+	{ ROGUE_FW_LOG_CREATESFID(85, ROGUE_FW_GROUP_HWR, 3),
+	  "Full Signature Check result for Core%u, DM%u is HWRNeeded=%u" },
+	{ ROGUE_FW_LOG_CREATESFID(86, ROGUE_FW_GROUP_HWR, 4),
+	  "USC Slots result for Core%u, DM%u is HWRNeeded=%u USCSlotsUsedByDM=%d" },
+	{ ROGUE_FW_LOG_CREATESFID(87, ROGUE_FW_GROUP_HWR, 6),
+	  "USC Watchdog result for Core%u DM%u is HWRNeeded=%u Status=%u USCs={0x%x} with HWRChecksToGo=%u" },
+	{ ROGUE_FW_LOG_CREATESFID(88, ROGUE_FW_GROUP_HWR, 3),
+	  "RISC-V MMU page fault detected (FWCORE MMU Status 0x%08x Req Status 0x%08x%08x)" },
+	{ ROGUE_FW_LOG_CREATESFID(89, ROGUE_FW_GROUP_HWR, 2),
+	  "TEXAS1_PFS poll failed on core %d with value 0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(90, ROGUE_FW_GROUP_HWR, 2),
+	  "BIF_PFS poll failed on core %d with value 0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(91, ROGUE_FW_GROUP_HWR, 2),
+	  "MMU_ABORT_PM_STATUS set poll failed on core %d with value 0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(92, ROGUE_FW_GROUP_HWR, 2),
+	  "MMU_ABORT_PM_STATUS unset poll failed on core %d with value 0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(93, ROGUE_FW_GROUP_HWR, 2),
+	  "MMU_CTRL_INVAL poll (all but fw) failed on core %d with value 0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(94, ROGUE_FW_GROUP_HWR, 2),
+	  "MMU_CTRL_INVAL poll (all) failed on core %d with value 0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(95, ROGUE_FW_GROUP_HWR, 3),
+	  "TEXAS%d_PFS poll failed on core %d with value 0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(96, ROGUE_FW_GROUP_HWR, 3),
+	  "Extra Registers Check result for Core%u, DM%u is HWRNeeded=%u" },
+	{ ROGUE_FW_LOG_CREATESFID(97, ROGUE_FW_GROUP_HWR, 1),
+	  "FW attempted to write to read-only GPU address 0x%08x" },
+
+	{ ROGUE_FW_LOG_CREATESFID(1, ROGUE_FW_GROUP_HWP, 2),
+	  "Block 0x%x mapped to Config Idx %u" },
+	{ ROGUE_FW_LOG_CREATESFID(2, ROGUE_FW_GROUP_HWP, 1),
+	  "Block 0x%x omitted from event - not enabled in HW" },
+	{ ROGUE_FW_LOG_CREATESFID(3, ROGUE_FW_GROUP_HWP, 1),
+	  "Block 0x%x included in event - enabled in HW" },
+	{ ROGUE_FW_LOG_CREATESFID(4, ROGUE_FW_GROUP_HWP, 2),
+	  "Select register state hi_0x%x lo_0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(5, ROGUE_FW_GROUP_HWP, 1),
+	  "Counter stream block header word 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(6, ROGUE_FW_GROUP_HWP, 1),
+	  "Counter register offset 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(7, ROGUE_FW_GROUP_HWP, 1),
+	  "Block 0x%x config unset, skipping" },
+	{ ROGUE_FW_LOG_CREATESFID(8, ROGUE_FW_GROUP_HWP, 1),
+	  "Accessing Indirect block 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(9, ROGUE_FW_GROUP_HWP, 1),
+	  "Accessing Direct block 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(10, ROGUE_FW_GROUP_HWP, 1),
+	  "Programmed counter select register at offset 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(11, ROGUE_FW_GROUP_HWP, 2),
+	  "Block register offset 0x%x and value 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(12, ROGUE_FW_GROUP_HWP, 1),
+	  "Reading config block from driver 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(13, ROGUE_FW_GROUP_HWP, 2),
+	  "Reading block range 0x%x to 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(14, ROGUE_FW_GROUP_HWP, 1),
+	  "Recording block 0x%x config from driver" },
+	{ ROGUE_FW_LOG_CREATESFID(15, ROGUE_FW_GROUP_HWP, 0),
+	  "Finished reading config block from driver" },
+	{ ROGUE_FW_LOG_CREATESFID(16, ROGUE_FW_GROUP_HWP, 2),
+	  "Custom Counter offset: 0x%x  value: 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(17, ROGUE_FW_GROUP_HWP, 2),
+	  "Select counter n:%u  ID:0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(18, ROGUE_FW_GROUP_HWP, 3),
+	  "The counter ID 0x%x is not allowed. The package [b:%u, n:%u] will be discarded" },
+	{ ROGUE_FW_LOG_CREATESFID(19, ROGUE_FW_GROUP_HWP, 1),
+	  "Custom Counters filter status %d" },
+	{ ROGUE_FW_LOG_CREATESFID(20, ROGUE_FW_GROUP_HWP, 2),
+	  "The Custom block %d is not allowed. Use only blocks lower than %d. The package will be discarded" },
+	{ ROGUE_FW_LOG_CREATESFID(21, ROGUE_FW_GROUP_HWP, 2),
+	  "The package will be discarded because it contains %d counters IDs while the upper limit is %d" },
+	{ ROGUE_FW_LOG_CREATESFID(22, ROGUE_FW_GROUP_HWP, 2),
+	  "Check Filter 0x%x is 0x%x ?" },
+	{ ROGUE_FW_LOG_CREATESFID(23, ROGUE_FW_GROUP_HWP, 1),
+	  "The custom block %u is reset" },
+	{ ROGUE_FW_LOG_CREATESFID(24, ROGUE_FW_GROUP_HWP, 1),
+	  "Encountered an invalid command (%d)" },
+	{ ROGUE_FW_LOG_CREATESFID(25, ROGUE_FW_GROUP_HWP, 2),
+	  "HWPerf Queue is full, we will have to wait for space! (Roff = %u, Woff = %u)" },
+	{ ROGUE_FW_LOG_CREATESFID(26, ROGUE_FW_GROUP_HWP, 3),
+	  "HWPerf Queue is fencing, we are waiting for Roff = %d (Roff = %u, Woff = %u)" },
+	{ ROGUE_FW_LOG_CREATESFID(27, ROGUE_FW_GROUP_HWP, 1),
+	  "Custom Counter block: %d" },
+	{ ROGUE_FW_LOG_CREATESFID(28, ROGUE_FW_GROUP_HWP, 1),
+	  "Block 0x%x ENABLED" },
+	{ ROGUE_FW_LOG_CREATESFID(29, ROGUE_FW_GROUP_HWP, 1),
+	  "Block 0x%x DISABLED" },
+	{ ROGUE_FW_LOG_CREATESFID(30, ROGUE_FW_GROUP_HWP, 2),
+	  "Accessing Indirect block 0x%x, instance %u" },
+	{ ROGUE_FW_LOG_CREATESFID(31, ROGUE_FW_GROUP_HWP, 2),
+	  "Counter register 0x%x, Value 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(32, ROGUE_FW_GROUP_HWP, 1),
+	  "Counters filter status %d" },
+	{ ROGUE_FW_LOG_CREATESFID(33, ROGUE_FW_GROUP_HWP, 2),
+	  "Block 0x%x mapped to Ctl Idx %u" },
+	{ ROGUE_FW_LOG_CREATESFID(34, ROGUE_FW_GROUP_HWP, 0),
+	  "Block(s) in use for workload estimation." },
+	{ ROGUE_FW_LOG_CREATESFID(35, ROGUE_FW_GROUP_HWP, 3),
+	  "GPU %u Cycle counter 0x%x, Value 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(36, ROGUE_FW_GROUP_HWP, 3),
+	  "GPU Mask 0x%x Cycle counter 0x%x, Value 0x%x" },
+	{ ROGUE_FW_LOG_CREATESFID(37, ROGUE_FW_GROUP_HWP, 1),
+	  "Blocks IGNORED for GPU %u" },
+
+	{ ROGUE_FW_LOG_CREATESFID(1, ROGUE_FW_GROUP_DMA, 5),
+	  "Transfer 0x%02x request: 0x%02x%08x -> 0x%08x, size %u" },
+	{ ROGUE_FW_LOG_CREATESFID(2, ROGUE_FW_GROUP_DMA, 4),
+	  "Transfer of type 0x%02x expected on channel %u, 0x%02x found, status %u" },
+	{ ROGUE_FW_LOG_CREATESFID(3, ROGUE_FW_GROUP_DMA, 1),
+	  "DMA Interrupt register 0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(4, ROGUE_FW_GROUP_DMA, 1),
+	  "Waiting for transfer of type 0x%02x completion..." },
+	{ ROGUE_FW_LOG_CREATESFID(5, ROGUE_FW_GROUP_DMA, 3),
+	  "Loading of cCCB data from FW common context 0x%08x (offset: %u, size: %u) failed" },
+	{ ROGUE_FW_LOG_CREATESFID(6, ROGUE_FW_GROUP_DMA, 3),
+	  "Invalid load of cCCB data from FW common context 0x%08x (offset: %u, size: %u)" },
+	{ ROGUE_FW_LOG_CREATESFID(7, ROGUE_FW_GROUP_DMA, 1),
+	  "Transfer 0x%02x request poll failure" },
+	{ ROGUE_FW_LOG_CREATESFID(8, ROGUE_FW_GROUP_DMA, 2),
+	  "Boot transfer(s) failed (code? %u, data? %u), used slower memcpy instead" },
+	{ ROGUE_FW_LOG_CREATESFID(9, ROGUE_FW_GROUP_DMA, 7),
+	  "Transfer 0x%02x request on ch. %u: system 0x%02x%08x, coremem 0x%08x, flags 0x%x, size %u" },
+
+	{ ROGUE_FW_LOG_CREATESFID(1, ROGUE_FW_GROUP_DBG, 2),
+	  "0x%08x 0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(2, ROGUE_FW_GROUP_DBG, 1),
+	  "0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(3, ROGUE_FW_GROUP_DBG, 2),
+	  "0x%08x 0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(4, ROGUE_FW_GROUP_DBG, 3),
+	  "0x%08x 0x%08x 0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(5, ROGUE_FW_GROUP_DBG, 4),
+	  "0x%08x 0x%08x 0x%08x 0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(6, ROGUE_FW_GROUP_DBG, 5),
+	  "0x%08x 0x%08x 0x%08x 0x%08x 0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(7, ROGUE_FW_GROUP_DBG, 6),
+	  "0x%08x 0x%08x 0x%08x 0x%08x 0x%08x 0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(8, ROGUE_FW_GROUP_DBG, 7),
+	  "0x%08x 0x%08x 0x%08x 0x%08x 0x%08x 0x%08x 0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(9, ROGUE_FW_GROUP_DBG, 8),
+	  "0x%08x 0x%08x 0x%08x 0x%08x 0x%08x 0x%08x 0x%08x 0x%08x" },
+	{ ROGUE_FW_LOG_CREATESFID(10, ROGUE_FW_GROUP_DBG, 1),
+	  "%d" },
+	{ ROGUE_FW_LOG_CREATESFID(11, ROGUE_FW_GROUP_DBG, 2),
+	  "%d %d" },
+	{ ROGUE_FW_LOG_CREATESFID(12, ROGUE_FW_GROUP_DBG, 3),
+	  "%d %d %d" },
+	{ ROGUE_FW_LOG_CREATESFID(13, ROGUE_FW_GROUP_DBG, 4),
+	  "%d %d %d %d" },
+	{ ROGUE_FW_LOG_CREATESFID(14, ROGUE_FW_GROUP_DBG, 5),
+	  "%d %d %d %d %d" },
+	{ ROGUE_FW_LOG_CREATESFID(15, ROGUE_FW_GROUP_DBG, 6),
+	  "%d %d %d %d %d %d" },
+	{ ROGUE_FW_LOG_CREATESFID(16, ROGUE_FW_GROUP_DBG, 7),
+	  "%d %d %d %d %d %d %d" },
+	{ ROGUE_FW_LOG_CREATESFID(17, ROGUE_FW_GROUP_DBG, 8),
+	  "%d %d %d %d %d %d %d %d" },
+	{ ROGUE_FW_LOG_CREATESFID(18, ROGUE_FW_GROUP_DBG, 1),
+	  "%u" },
+	{ ROGUE_FW_LOG_CREATESFID(19, ROGUE_FW_GROUP_DBG, 2),
+	  "%u %u" },
+	{ ROGUE_FW_LOG_CREATESFID(20, ROGUE_FW_GROUP_DBG, 3),
+	  "%u %u %u" },
+	{ ROGUE_FW_LOG_CREATESFID(21, ROGUE_FW_GROUP_DBG, 4),
+	  "%u %u %u %u" },
+	{ ROGUE_FW_LOG_CREATESFID(22, ROGUE_FW_GROUP_DBG, 5),
+	  "%u %u %u %u %u" },
+	{ ROGUE_FW_LOG_CREATESFID(23, ROGUE_FW_GROUP_DBG, 6),
+	  "%u %u %u %u %u %u" },
+	{ ROGUE_FW_LOG_CREATESFID(24, ROGUE_FW_GROUP_DBG, 7),
+	  "%u %u %u %u %u %u %u" },
+	{ ROGUE_FW_LOG_CREATESFID(25, ROGUE_FW_GROUP_DBG, 8),
+	  "%u %u %u %u %u %u %u %u" },
+
+	{ ROGUE_FW_LOG_CREATESFID(65535, ROGUE_FW_GROUP_NULL, 15),
+	  "You should not use this string" },
+};
+
+#define ROGUE_FW_SF_FIRST ROGUE_FW_LOG_CREATESFID(0, ROGUE_FW_GROUP_NULL, 0)
+#define ROGUE_FW_SF_MAIN_ASSERT_FAILED ROGUE_FW_LOG_CREATESFID(24, ROGUE_FW_GROUP_MAIN, 1)
+#define ROGUE_FW_SF_LAST ROGUE_FW_LOG_CREATESFID(65535, ROGUE_FW_GROUP_NULL, 15)
+
+#endif /* PVR_ROGUE_FWIF_SF_H */
diff --git a/drivers/gpu/drm/imagination/pvr_rogue_fwif_shared.h b/drivers/gpu/drm/imagination/pvr_rogue_fwif_shared.h
new file mode 100644
index 000000000000..06c8012b31d4
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_rogue_fwif_shared.h
@@ -0,0 +1,258 @@
+/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#ifndef PVR_ROGUE_FWIF_SHARED_H
+#define PVR_ROGUE_FWIF_SHARED_H
+
+#include <linux/compiler.h>
+#include <linux/types.h>
+
+#define ROGUE_FWIF_NUM_RTDATAS 2U
+#define ROGUE_FWIF_NUM_GEOMDATAS 1U
+#define ROGUE_FWIF_NUM_RTDATA_FREELISTS 2U
+#define ROGUE_NUM_GEOM_CORES 1U
+
+#define ROGUE_NUM_GEOM_CORES_SIZE 2U
+
+/*
+ * Maximum number of UFOs in a CCB command.
+ * The number is based on having 32 sync prims (as originally), plus 32 sync
+ * checkpoints.
+ * Once the use of sync prims is no longer supported, we will retain the
+ * same total (64), as the number of sync checkpoints that may be backing a
+ * fence is not visible to the client driver and has to allow for the number
+ * of different timelines involved in fence merges.
+ */
+#define ROGUE_FWIF_CCB_CMD_MAX_UFOS (32U + 32U)
+
+/*
+ * This is a generic limit imposed on any DM (GEOMETRY, FRAGMENT, CDM, TDM, 2D, TRANSFER)
+ * command passed through the bridge.
+ * Just across the bridge in the server, any incoming kick command size is
+ * checked against this maximum limit.
+ * If the incoming command size exceeds this limit, the bridge call is
+ * rejected with an error.
+ */
+#define ROGUE_FWIF_DM_INDEPENDENT_KICK_CMD_SIZE (1024U)
+
+#define ROGUE_FWIF_PRBUFFER_START (0)
+#define ROGUE_FWIF_PRBUFFER_ZSBUFFER (0)
+#define ROGUE_FWIF_PRBUFFER_MSAABUFFER (1)
+#define ROGUE_FWIF_PRBUFFER_MAXSUPPORTED (2)
+
+struct rogue_fwif_dma_addr {
+	aligned_u64 dev_addr;
+	u32 fw_addr;
+	u32 padding;
+} __aligned(8);
+
+struct rogue_fwif_ufo {
+	u32 addr;
+	u32 value;
+};
+
+#define ROGUE_FWIF_UFO_ADDR_IS_SYNC_CHECKPOINT (1)
+
+struct rogue_fwif_sync_checkpoint {
+	u32 state;
+	u32 fw_ref_count;
+};
+
+struct rogue_fwif_cleanup_ctl {
+	/* Number of commands received by the FW */
+	u32 submitted_commands;
+	/* Number of commands executed by the FW */
+	u32 executed_commands;
+} __aligned(8);
+
+/*
+ * Used to share frame numbers across UM-KM-FW.
+ * The frame number is set in UM and is required in both KM (for HTB) and
+ * FW (for the FW trace).
+ *
+ * May be used to house Kick flags in the future.
+ */
+struct rogue_fwif_cmd_common {
+	/* associated frame number */
+	u32 frame_num;
+};
+
+/*
+ * Geometry and fragment commands require a set of firmware addresses that are stored in the Kernel.
+ * Client has handle(s) to Kernel containers storing these addresses, instead of raw addresses. We
+ * have to patch/write these addresses in KM to prevent UM from controlling FW addresses directly.
+ * Typedefs for geometry and fragment commands are shared between Client and Firmware (both
+ * single-BVNC). Kernel is implemented in a multi-BVNC manner, so it can't use geometry|fragment
+ * CMD type definitions directly. Therefore we have a SHARED block that is shared between UM-KM-FW
+ * across all BVNC configurations.
+ */
+struct rogue_fwif_cmd_geom_frag_shared {
+	/* Common command attributes */
+	struct rogue_fwif_cmd_common cmn;
+
+	/*
+	 * RTData associated with this command. This is used for context
+	 * selection and for storing out the HW context when the TA is switched
+	 * out, so it can be continued later.
+	 */
+	u32 hwrt_data_fw_addr;
+
+	/* Supported PR Buffers like Z/S/MSAA Scratch */
+	u32 pr_buffer_fw_addr[ROGUE_FWIF_PRBUFFER_MAXSUPPORTED];
+};
+
+/*
+ * Client Circular Command Buffer (CCCB) control structure.
+ * This is shared between the Server and the Firmware and holds byte offsets
+ * into the CCCB as well as the wrapping mask to aid wrap around. A given
+ * snapshot of this queue with Cmd 1 running on the GPU might be:
+ *
+ *          Roff                           Doff                 Woff
+ * [..........|-1----------|=2===|=3===|=4===|~5~~~~|~6~~~~|~7~~~~|..........]
+ *            <      runnable commands       ><   !ready to run   >
+ *
+ * Cmd 1    : Currently executing on the GPU data master.
+ * Cmd 2,3,4: Fence dependencies met, commands runnable.
+ * Cmd 5... : Fence dependency not met yet.
+ */
+struct rogue_fwif_cccb_ctl {
+	/* Host write offset into CCB. This must be aligned to 16 bytes. */
+	u32 write_offset;
+	/*
+	 * Firmware read offset into CCB. Points to the command that is runnable
+	 * on GPU, if R!=W
+	 */
+	u32 read_offset;
+	/*
+	 * Firmware fence dependency offset. Points to commands not ready, i.e.
+	 * fence dependencies are not met.
+	 */
+	u32 dep_offset;
+	/* Offset wrapping mask (total capacity in bytes of the CCB, minus 1). */
+	u32 wrap_mask;
+
+	/* Only used if SUPPORT_AGP is present. */
+	u32 read_offset2;
+
+	/* Only used if SUPPORT_AGP4 is present. */
+	u32 read_offset3;
+	/* Only used if SUPPORT_AGP4 is present. */
+	u32 read_offset4;
+
+	u32 padding;
+} __aligned(8);
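+
+/*
+ * Illustrative sketch (an assumption, not part of this interface): with
+ * wrap_mask being the CCCB capacity in bytes minus one, offsets can be
+ * advanced with a simple mask rather than a modulo, e.g.:
+ *
+ *	new_write_offset = (write_offset + cmd_size) & wrap_mask;
+ *
+ * where cmd_size is kept 16-byte aligned so that write_offset stays aligned
+ * as required by the field comment above. The name cmd_size is hypothetical;
+ * the real CCCB management code lives elsewhere in the driver.
+ */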
+
+#define ROGUE_FW_LOCAL_FREELIST (0)
+#define ROGUE_FW_GLOBAL_FREELIST (1)
+#define ROGUE_FW_FREELIST_TYPE_LAST ROGUE_FW_GLOBAL_FREELIST
+#define ROGUE_FW_MAX_FREELISTS (ROGUE_FW_FREELIST_TYPE_LAST + 1U)
+
+struct rogue_fwif_geom_registers_caswitch {
+	u64 geom_reg_vdm_context_state_base_addr;
+	u64 geom_reg_vdm_context_state_resume_addr;
+	u64 geom_reg_ta_context_state_base_addr;
+
+	struct {
+		u64 geom_reg_vdm_context_store_task0;
+		u64 geom_reg_vdm_context_store_task1;
+		u64 geom_reg_vdm_context_store_task2;
+
+		/* VDM resume state update controls */
+		u64 geom_reg_vdm_context_resume_task0;
+		u64 geom_reg_vdm_context_resume_task1;
+		u64 geom_reg_vdm_context_resume_task2;
+
+		u64 geom_reg_vdm_context_store_task3;
+		u64 geom_reg_vdm_context_store_task4;
+
+		u64 geom_reg_vdm_context_resume_task3;
+		u64 geom_reg_vdm_context_resume_task4;
+	} geom_state[2];
+};
+
+#define ROGUE_FWIF_GEOM_REGISTERS_CSWITCH_SIZE \
+	sizeof(struct rogue_fwif_geom_registers_caswitch)
+
+struct rogue_fwif_cdm_registers_cswitch {
+	u64 cdmreg_cdm_context_pds0;
+	u64 cdmreg_cdm_context_pds1;
+	u64 cdmreg_cdm_terminate_pds;
+	u64 cdmreg_cdm_terminate_pds1;
+
+	/* CDM resume controls */
+	u64 cdmreg_cdm_resume_pds0;
+	u64 cdmreg_cdm_context_pds0_b;
+	u64 cdmreg_cdm_resume_pds0_b;
+};
+
+struct rogue_fwif_static_rendercontext_state {
+	/* Geom registers for ctx switch */
+	struct rogue_fwif_geom_registers_caswitch ctxswitch_regs[ROGUE_NUM_GEOM_CORES_SIZE]
+		__aligned(8);
+};
+
+#define ROGUE_FWIF_STATIC_RENDERCONTEXT_SIZE \
+	sizeof(struct rogue_fwif_static_rendercontext_state)
+
+struct rogue_fwif_static_computecontext_state {
+	/* CDM registers for ctx switch */
+	struct rogue_fwif_cdm_registers_cswitch ctxswitch_regs __aligned(8);
+};
+
+#define ROGUE_FWIF_STATIC_COMPUTECONTEXT_SIZE \
+	sizeof(struct rogue_fwif_static_computecontext_state)
+
+enum rogue_fwif_prbuffer_state {
+	ROGUE_FWIF_PRBUFFER_UNBACKED = 0,
+	ROGUE_FWIF_PRBUFFER_BACKED,
+	ROGUE_FWIF_PRBUFFER_BACKING_PENDING,
+	ROGUE_FWIF_PRBUFFER_UNBACKING_PENDING,
+};
+
+struct rogue_fwif_prbuffer {
+	/* Buffer ID */
+	u32 buffer_id;
+	/* Needs On-demand Z/S/MSAA Buffer allocation */
+	bool on_demand __aligned(4);
+	/* Z/S/MSAA buffer state */
+	enum rogue_fwif_prbuffer_state state;
+	/* Cleanup state */
+	struct rogue_fwif_cleanup_ctl cleanup_sate;
+	/* Compatibility and other flags */
+	u32 prbuffer_flags;
+} __aligned(8);
+
+/* Last reset reason for a context. */
+enum rogue_context_reset_reason {
+	/* No reset reason recorded */
+	ROGUE_CONTEXT_RESET_REASON_NONE = 0,
+	/* Caused a reset due to locking up */
+	ROGUE_CONTEXT_RESET_REASON_GUILTY_LOCKUP = 1,
+	/* Affected by another context locking up */
+	ROGUE_CONTEXT_RESET_REASON_INNOCENT_LOCKUP = 2,
+	/* Overran the global deadline */
+	ROGUE_CONTEXT_RESET_REASON_GUILTY_OVERRUNING = 3,
+	/* Affected by another context overrunning */
+	ROGUE_CONTEXT_RESET_REASON_INNOCENT_OVERRUNING = 4,
+	/* Forced reset to ensure scheduling requirements */
+	ROGUE_CONTEXT_RESET_REASON_HARD_CONTEXT_SWITCH = 5,
+	/* FW Safety watchdog triggered */
+	ROGUE_CONTEXT_RESET_REASON_FW_WATCHDOG = 12,
+	/* FW page fault (no HWR) */
+	ROGUE_CONTEXT_RESET_REASON_FW_PAGEFAULT = 13,
+	/* FW execution error (GPU reset requested) */
+	ROGUE_CONTEXT_RESET_REASON_FW_EXEC_ERR = 14,
+	/* Host watchdog detected FW error */
+	ROGUE_CONTEXT_RESET_REASON_HOST_WDG_FW_ERR = 15,
+	/* Geometry DM OOM event is not allowed */
+	ROGUE_CONTEXT_GEOM_OOM_DISABLED = 16,
+};
+
+struct rogue_context_reset_reason_data {
+	enum rogue_context_reset_reason reset_reason;
+	u32 reset_ext_job_ref;
+};
+
+#include "pvr_rogue_fwif_shared_check.h"
+
+#endif /* PVR_ROGUE_FWIF_SHARED_H */
diff --git a/drivers/gpu/drm/imagination/pvr_rogue_fwif_shared_check.h b/drivers/gpu/drm/imagination/pvr_rogue_fwif_shared_check.h
new file mode 100644
index 000000000000..4d6684474185
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_rogue_fwif_shared_check.h
@@ -0,0 +1,108 @@
+/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#ifndef PVR_ROGUE_FWIF_SHARED_CHECK_H
+#define PVR_ROGUE_FWIF_SHARED_CHECK_H
+
+#include <linux/build_bug.h>
+
+#define OFFSET_CHECK(type, member, offset) \
+	static_assert(offsetof(type, member) == (offset), \
+		      "offsetof(" #type ", " #member ") incorrect")
+
+#define SIZE_CHECK(type, size) \
+	static_assert(sizeof(type) == (size), #type " is incorrect size")
+
+OFFSET_CHECK(struct rogue_fwif_dma_addr, dev_addr, 0);
+OFFSET_CHECK(struct rogue_fwif_dma_addr, fw_addr, 8);
+SIZE_CHECK(struct rogue_fwif_dma_addr, 16);
+
+OFFSET_CHECK(struct rogue_fwif_ufo, addr, 0);
+OFFSET_CHECK(struct rogue_fwif_ufo, value, 4);
+SIZE_CHECK(struct rogue_fwif_ufo, 8);
+
+OFFSET_CHECK(struct rogue_fwif_cleanup_ctl, submitted_commands, 0);
+OFFSET_CHECK(struct rogue_fwif_cleanup_ctl, executed_commands, 4);
+SIZE_CHECK(struct rogue_fwif_cleanup_ctl, 8);
+
+OFFSET_CHECK(struct rogue_fwif_cccb_ctl, write_offset, 0);
+OFFSET_CHECK(struct rogue_fwif_cccb_ctl, read_offset, 4);
+OFFSET_CHECK(struct rogue_fwif_cccb_ctl, dep_offset, 8);
+OFFSET_CHECK(struct rogue_fwif_cccb_ctl, wrap_mask, 12);
+OFFSET_CHECK(struct rogue_fwif_cccb_ctl, read_offset2, 16);
+OFFSET_CHECK(struct rogue_fwif_cccb_ctl, read_offset3, 20);
+OFFSET_CHECK(struct rogue_fwif_cccb_ctl, read_offset4, 24);
+SIZE_CHECK(struct rogue_fwif_cccb_ctl, 32);
+
+OFFSET_CHECK(struct rogue_fwif_geom_registers_caswitch,
+	     geom_reg_vdm_context_state_base_addr, 0);
+OFFSET_CHECK(struct rogue_fwif_geom_registers_caswitch,
+	     geom_reg_vdm_context_state_resume_addr, 8);
+OFFSET_CHECK(struct rogue_fwif_geom_registers_caswitch,
+	     geom_reg_ta_context_state_base_addr, 16);
+OFFSET_CHECK(struct rogue_fwif_geom_registers_caswitch,
+	     geom_state[0].geom_reg_vdm_context_store_task0, 24);
+OFFSET_CHECK(struct rogue_fwif_geom_registers_caswitch,
+	     geom_state[0].geom_reg_vdm_context_store_task1, 32);
+OFFSET_CHECK(struct rogue_fwif_geom_registers_caswitch,
+	     geom_state[0].geom_reg_vdm_context_store_task2, 40);
+OFFSET_CHECK(struct rogue_fwif_geom_registers_caswitch,
+	     geom_state[0].geom_reg_vdm_context_resume_task0, 48);
+OFFSET_CHECK(struct rogue_fwif_geom_registers_caswitch,
+	     geom_state[0].geom_reg_vdm_context_resume_task1, 56);
+OFFSET_CHECK(struct rogue_fwif_geom_registers_caswitch,
+	     geom_state[0].geom_reg_vdm_context_resume_task2, 64);
+OFFSET_CHECK(struct rogue_fwif_geom_registers_caswitch,
+	     geom_state[0].geom_reg_vdm_context_store_task3, 72);
+OFFSET_CHECK(struct rogue_fwif_geom_registers_caswitch,
+	     geom_state[0].geom_reg_vdm_context_store_task4, 80);
+OFFSET_CHECK(struct rogue_fwif_geom_registers_caswitch,
+	     geom_state[0].geom_reg_vdm_context_resume_task3, 88);
+OFFSET_CHECK(struct rogue_fwif_geom_registers_caswitch,
+	     geom_state[0].geom_reg_vdm_context_resume_task4, 96);
+OFFSET_CHECK(struct rogue_fwif_geom_registers_caswitch,
+	     geom_state[1].geom_reg_vdm_context_store_task0, 104);
+OFFSET_CHECK(struct rogue_fwif_geom_registers_caswitch,
+	     geom_state[1].geom_reg_vdm_context_store_task1, 112);
+OFFSET_CHECK(struct rogue_fwif_geom_registers_caswitch,
+	     geom_state[1].geom_reg_vdm_context_store_task2, 120);
+OFFSET_CHECK(struct rogue_fwif_geom_registers_caswitch,
+	     geom_state[1].geom_reg_vdm_context_resume_task0, 128);
+OFFSET_CHECK(struct rogue_fwif_geom_registers_caswitch,
+	     geom_state[1].geom_reg_vdm_context_resume_task1, 136);
+OFFSET_CHECK(struct rogue_fwif_geom_registers_caswitch,
+	     geom_state[1].geom_reg_vdm_context_resume_task2, 144);
+OFFSET_CHECK(struct rogue_fwif_geom_registers_caswitch,
+	     geom_state[1].geom_reg_vdm_context_store_task3, 152);
+OFFSET_CHECK(struct rogue_fwif_geom_registers_caswitch,
+	     geom_state[1].geom_reg_vdm_context_store_task4, 160);
+OFFSET_CHECK(struct rogue_fwif_geom_registers_caswitch,
+	     geom_state[1].geom_reg_vdm_context_resume_task3, 168);
+OFFSET_CHECK(struct rogue_fwif_geom_registers_caswitch,
+	     geom_state[1].geom_reg_vdm_context_resume_task4, 176);
+SIZE_CHECK(struct rogue_fwif_geom_registers_caswitch, 184);
+
+OFFSET_CHECK(struct rogue_fwif_cdm_registers_cswitch, cdmreg_cdm_context_pds0, 0);
+OFFSET_CHECK(struct rogue_fwif_cdm_registers_cswitch, cdmreg_cdm_context_pds1, 8);
+OFFSET_CHECK(struct rogue_fwif_cdm_registers_cswitch, cdmreg_cdm_terminate_pds, 16);
+OFFSET_CHECK(struct rogue_fwif_cdm_registers_cswitch, cdmreg_cdm_terminate_pds1, 24);
+OFFSET_CHECK(struct rogue_fwif_cdm_registers_cswitch, cdmreg_cdm_resume_pds0, 32);
+OFFSET_CHECK(struct rogue_fwif_cdm_registers_cswitch, cdmreg_cdm_context_pds0_b, 40);
+OFFSET_CHECK(struct rogue_fwif_cdm_registers_cswitch, cdmreg_cdm_resume_pds0_b, 48);
+SIZE_CHECK(struct rogue_fwif_cdm_registers_cswitch, 56);
+
+OFFSET_CHECK(struct rogue_fwif_static_rendercontext_state, ctxswitch_regs, 0);
+SIZE_CHECK(struct rogue_fwif_static_rendercontext_state, 368);
+
+OFFSET_CHECK(struct rogue_fwif_static_computecontext_state, ctxswitch_regs, 0);
+SIZE_CHECK(struct rogue_fwif_static_computecontext_state, 56);
+
+OFFSET_CHECK(struct rogue_fwif_cmd_common, frame_num, 0);
+SIZE_CHECK(struct rogue_fwif_cmd_common, 4);
+
+OFFSET_CHECK(struct rogue_fwif_cmd_geom_frag_shared, cmn, 0);
+OFFSET_CHECK(struct rogue_fwif_cmd_geom_frag_shared, hwrt_data_fw_addr, 4);
+OFFSET_CHECK(struct rogue_fwif_cmd_geom_frag_shared, pr_buffer_fw_addr, 8);
+SIZE_CHECK(struct rogue_fwif_cmd_geom_frag_shared, 16);
+
+#endif /* PVR_ROGUE_FWIF_SHARED_CHECK_H */
diff --git a/drivers/gpu/drm/imagination/pvr_rogue_fwif_stream.h b/drivers/gpu/drm/imagination/pvr_rogue_fwif_stream.h
new file mode 100644
index 000000000000..97859a41e677
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_rogue_fwif_stream.h
@@ -0,0 +1,78 @@
+/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#ifndef PVR_ROGUE_FWIF_STREAM_H
+#define PVR_ROGUE_FWIF_STREAM_H
+
+/**
+ * DOC: Streams
+ *
+ * Commands are submitted to the kernel driver in the form of streams.
+ *
+ * A command stream has the following layout :
+ *  - A 64-bit header containing:
+ *    * A u32 containing the length of the main stream inclusive of the length of the header.
+ *    * A u32 for padding.
+ *  - The main stream data.
+ *  - The extension stream (optional), which is composed of:
+ *    * One or more headers.
+ *    * The extension stream data, corresponding to the extension headers.
+ *
+ * The main stream provides the base command data. This has a fixed layout based on the features
+ * supported by a given GPU.
+ *
+ * The extension stream provides the command parameters that are required for BRNs & ERNs for the
+ * current GPU. This stream is composed of one or more headers, followed by data for each given
+ * BRN/ERN.
+ *
+ * Each header is a u32 containing a bitmask of quirks & enhancements in the extension stream, a
+ * "type" field determining the set of quirks & enhancements the bitmask represents, and a
+ * continuation bit determining whether any more headers are present. The headers are then followed
+ * by command data; this is specific to each quirk/enhancement. All unused / reserved bits in the
+ * header must be set to 0.
+ *
+ * All parameters and headers in the main and extension streams must be naturally aligned.
+ *
+ * If a parameter appears in both the main and extension streams, then the extension parameter is
+ * used.
+ */
+
+/*
+ * Stream extension header definition
+ */
+#define PVR_STREAM_EXTHDR_TYPE_SHIFT 29U
+#define PVR_STREAM_EXTHDR_TYPE_MASK (7U << PVR_STREAM_EXTHDR_TYPE_SHIFT)
+#define PVR_STREAM_EXTHDR_TYPE_MAX 8U
+#define PVR_STREAM_EXTHDR_CONTINUATION BIT(28U)
+
+#define PVR_STREAM_EXTHDR_DATA_MASK ~(PVR_STREAM_EXTHDR_TYPE_MASK | PVR_STREAM_EXTHDR_CONTINUATION)
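+
+/*
+ * Illustrative sketch (not part of this interface): one way a consumer might
+ * decode the chain of extension headers described in the DOC comment above,
+ * using the masks defined here. The names "ext" (pointer to the first
+ * extension header word) and "count" are hypothetical and exist only for
+ * this example.
+ *
+ *	u32 i = 0, hdr, type, quirk_mask;
+ *
+ *	do {
+ *		hdr = ext[i++];
+ *		type = (hdr & PVR_STREAM_EXTHDR_TYPE_MASK) >>
+ *		       PVR_STREAM_EXTHDR_TYPE_SHIFT;
+ *		quirk_mask = hdr & PVR_STREAM_EXTHDR_DATA_MASK;
+ *		// Data words for each bit set in quirk_mask follow the headers.
+ *	} while ((hdr & PVR_STREAM_EXTHDR_CONTINUATION) && i < count);
+ */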
+
+/*
+ * Stream extension header - Geometry 0
+ */
+#define PVR_STREAM_EXTHDR_TYPE_GEOM0 0U
+
+#define PVR_STREAM_EXTHDR_GEOM0_BRN49927 BIT(0U)
+
+#define PVR_STREAM_EXTHDR_GEOM0_VALID PVR_STREAM_EXTHDR_GEOM0_BRN49927
+
+/*
+ * Stream extension header - Fragment 0
+ */
+#define PVR_STREAM_EXTHDR_TYPE_FRAG0 0U
+
+#define PVR_STREAM_EXTHDR_FRAG0_BRN47217 BIT(0U)
+#define PVR_STREAM_EXTHDR_FRAG0_BRN49927 BIT(1U)
+
+#define PVR_STREAM_EXTHDR_FRAG0_VALID PVR_STREAM_EXTHDR_FRAG0_BRN49927
+
+/*
+ * Stream extension header - Compute 0
+ */
+#define PVR_STREAM_EXTHDR_TYPE_COMPUTE0 0U
+
+#define PVR_STREAM_EXTHDR_COMPUTE0_BRN49927 BIT(0U)
+
+#define PVR_STREAM_EXTHDR_COMPUTE0_VALID PVR_STREAM_EXTHDR_COMPUTE0_BRN49927
+
+#endif /* PVR_ROGUE_FWIF_STREAM_H */
diff --git a/drivers/gpu/drm/imagination/pvr_rogue_heap_config.h b/drivers/gpu/drm/imagination/pvr_rogue_heap_config.h
new file mode 100644
index 000000000000..632221b88281
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_rogue_heap_config.h
@@ -0,0 +1,113 @@
+/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#ifndef PVR_ROGUE_HEAP_CONFIG_H
+#define PVR_ROGUE_HEAP_CONFIG_H
+
+#include <linux/sizes.h>
+
+/*
+ * ROGUE Device Virtual Address Space Definitions
+ *
+ * This file defines the ROGUE virtual address heaps that are used in
+ * application memory contexts. It also shows where the Firmware memory heap
+ * fits into this, but the firmware heap is only ever created in the
+ * kernel driver and never exposed to userspace.
+ *
+ * ROGUE_PDSCODEDATA_HEAP_BASE and ROGUE_USCCODE_HEAP_BASE will be programmed,
+ * on a global basis, into ROGUE_CR_PDS_EXEC_BASE and ROGUE_CR_USC_CODE_BASE_*
+ * respectively. Therefore if client drivers use multiple configs they must
+ * still be consistent with their definitions for these heaps.
+ *
+ * Base addresses have to be a multiple of 4MiB.
+ * Heaps must not start at 0x0000000000, as this is reserved for internal
+ * use within the driver.
+ * Range comments: those starting in column 0 below act as section headings
+ * and sit above the heaps in that range. Often this is the reserved size of
+ * the heap within the range.
+ */
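+
+/*
+ * Worked example of the address arithmetic used in the range comments below
+ * (added for illustration): 0x80_0000_0000 bytes = 0x80 * 2^32 bytes =
+ * 128 * 4 GiB = 512 GiB, which is why GENERAL_HEAP is described as starting
+ * at 512 GiB.
+ */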
+
+/* 0x00_0000_0000 ************************************************************/
+
+/* 0x00_0000_0000 - 0x00_0040_0000 */
+/* 0 MiB to 4 MiB, size of 4 MiB : RESERVED */
+
+/* 0x00_0040_0000 - 0x7F_FFC0_0000 **/
+/* 4 MiB to 512 GiB, size of 512 GiB less 4 MiB : RESERVED **/
+
+/* 0x80_0000_0000 ************************************************************/
+
+/* 0x80_0000_0000 - 0x9F_FFFF_FFFF **/
+/* 512 GiB to 640 GiB, size of 128 GiB : GENERAL_HEAP **/
+#define ROGUE_GENERAL_HEAP_BASE 0x8000000000ull
+#define ROGUE_GENERAL_HEAP_SIZE SZ_128G
+
+/* 0xA0_0000_0000 - 0xAF_FFFF_FFFF */
+/* 640 GiB to 704 GiB, size of 64 GiB : FREE */
+
+/* 0xB0_0000_0000 - 0xB7_FFFF_FFFF */
+/* 704 GiB to 736 GiB, size of 32 GiB : FREE */
+
+/* 0xB8_0000_0000 - 0xBF_FFFF_FFFF */
+/* 736 GiB to 768 GiB, size of 32 GiB : RESERVED */
+
+/* 0xC0_0000_0000 ************************************************************/
+
+/* 0xC0_0000_0000 - 0xD9_FFFF_FFFF */
+/* 768 GiB to 872 GiB, size of 104 GiB : FREE */
+
+/* 0xDA_0000_0000 - 0xDA_FFFF_FFFF */
+/* 872 GiB to 876 GiB, size of 4 GiB : PDSCODEDATA_HEAP */
+#define ROGUE_PDSCODEDATA_HEAP_BASE 0xDA00000000ull
+#define ROGUE_PDSCODEDATA_HEAP_SIZE SZ_4G
+
+/* 0xDB_0000_0000 - 0xDB_FFFF_FFFF */
+/* 876 GiB to 880 GiB, size of 256 MiB (reserved 4GiB) : BRN **/
+/*
+ * The BRN63142 quirk workaround requires Region Header memory to be at the top
+ * of a 16GiB aligned range. This is so that, when masked with 0x03FFFFFFFF, the
+ * address avoids aliasing PB addresses. Start at 879.75GiB. Size of 256MiB.
+ */
+#define ROGUE_RGNHDR_HEAP_BASE 0xDBF0000000ull
+#define ROGUE_RGNHDR_HEAP_SIZE SZ_256M
+
+/* 0xDC_0000_0000 - 0xDF_FFFF_FFFF */
+/* 880 GiB to 896 GiB, size of 16 GiB : FREE */
+
+/* 0xE0_0000_0000 - 0xE0_FFFF_FFFF */
+/* 896 GiB to 900 GiB, size of 4 GiB : USCCODE_HEAP */
+#define ROGUE_USCCODE_HEAP_BASE 0xE000000000ull
+#define ROGUE_USCCODE_HEAP_SIZE SZ_4G
+
+/* 0xE1_0000_0000 - 0xE1_BFFF_FFFF */
+/* 900 GiB to 903 GiB, size of 3 GiB : RESERVED */
+
+/* 0xE1_C000_0000 - 0xE1_FFFF_FFFF */
+/* 903 GiB to 904 GiB, reserved 1 GiB, : FIRMWARE_HEAP */
+#define ROGUE_FW_HEAP_BASE 0xE1C0000000ull
+
+/* 0xE2_0000_0000 - 0xE3_FFFF_FFFF */
+/* 904 GiB to 912 GiB, size of 8 GiB : FREE */
+
+/* 0xE4_0000_0000 - 0xE7_FFFF_FFFF */
+/* 912 GiB to 928 GiB, size of 16 GiB : TRANSFER_FRAG */
+#define ROGUE_TRANSFER_FRAG_HEAP_BASE 0xE400000000ull
+#define ROGUE_TRANSFER_FRAG_HEAP_SIZE SZ_16G
+
+/* 0xE8_0000_0000 - 0xF1_FFFF_FFFF */
+/* 928 GiB to 968 GiB, size of 40 GiB : RESERVED */
+
+/* 0xF2_0000_0000 - 0xF2_001F_FFFF **/
+/* 968 GiB to 969 GiB, size of 2 MiB : VISTEST_HEAP */
+#define ROGUE_VISTEST_HEAP_BASE 0xF200000000ull
+#define ROGUE_VISTEST_HEAP_SIZE SZ_2M
+
+/* 0xF2_4000_0000 - 0xF2_FFFF_FFFF */
+/* 969 GiB to 972 GiB, size of 3 GiB : FREE */
+
+/* 0xF3_0000_0000 - 0xFF_FFFF_FFFF */
+/* 972 GiB to 1024 GiB, size of 52 GiB : FREE */
+
+/* 0xFF_FFFF_FFFF ************************************************************/
+
+#endif /* PVR_ROGUE_HEAP_CONFIG_H */
diff --git a/drivers/gpu/drm/imagination/pvr_rogue_meta.h b/drivers/gpu/drm/imagination/pvr_rogue_meta.h
new file mode 100644
index 000000000000..736e94618832
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_rogue_meta.h
@@ -0,0 +1,356 @@
+/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#ifndef PVR_ROGUE_META_H
+#define PVR_ROGUE_META_H
+
+/***** The META HW register definitions in this file are updated manually *****/
+
+#include <linux/bits.h>
+#include <linux/types.h>
+
+/*
+ ******************************************************************************
+ * META registers and MACROS
+ *****************************************************************************
+ */
+#define META_CR_CTRLREG_BASE(t) (0x04800000U + (0x1000U * (t)))
+
+#define META_CR_TXPRIVEXT (0x048000E8)
+#define META_CR_TXPRIVEXT_MINIM_EN BIT(7)
+
+#define META_CR_SYSC_JTAG_THREAD (0x04830030)
+#define META_CR_SYSC_JTAG_THREAD_PRIV_EN (0x00000004)
+
+#define META_CR_PERF_COUNT0 (0x0480FFE0)
+#define META_CR_PERF_COUNT1 (0x0480FFE8)
+#define META_CR_PERF_COUNT_CTRL_SHIFT (28)
+#define META_CR_PERF_COUNT_CTRL_MASK (0xF0000000)
+#define META_CR_PERF_COUNT_CTRL_DCACHEHITS (8 << META_CR_PERF_COUNT_CTRL_SHIFT)
+#define META_CR_PERF_COUNT_CTRL_ICACHEHITS (9 << META_CR_PERF_COUNT_CTRL_SHIFT)
+#define META_CR_PERF_COUNT_CTRL_ICACHEMISS \
+	(0xA << META_CR_PERF_COUNT_CTRL_SHIFT)
+#define META_CR_PERF_COUNT_CTRL_ICORE (0xD << META_CR_PERF_COUNT_CTRL_SHIFT)
+#define META_CR_PERF_COUNT_THR_SHIFT (24)
+#define META_CR_PERF_COUNT_THR_MASK (0x0F000000)
+#define META_CR_PERF_COUNT_THR_0 (0x1 << META_CR_PERF_COUNT_THR_SHIFT)
+#define META_CR_PERF_COUNT_THR_1 (0x2 << META_CR_PERF_COUNT_THR_SHIFT)
+
+#define META_CR_TxVECINT_BHALT (0x04820500)
+#define META_CR_PERF_ICORE0 (0x0480FFD0)
+#define META_CR_PERF_ICORE1 (0x0480FFD8)
+#define META_CR_PERF_ICORE_DCACHEMISS (0x8)
+
+#define META_CR_PERF_COUNT(ctrl, thr)                                        \
+	((META_CR_PERF_COUNT_CTRL_##ctrl << META_CR_PERF_COUNT_CTRL_SHIFT) | \
+	 ((thr) << META_CR_PERF_COUNT_THR_SHIFT))
+
+#define META_CR_TXUXXRXDT_OFFSET (META_CR_CTRLREG_BASE(0U) + 0x0000FFF0U)
+#define META_CR_TXUXXRXRQ_OFFSET (META_CR_CTRLREG_BASE(0U) + 0x0000FFF8U)
+
+/* Poll for done. */
+#define META_CR_TXUXXRXRQ_DREADY_BIT (0x80000000U)
+/* Set for read. */
+#define META_CR_TXUXXRXRQ_RDnWR_BIT (0x00010000U)
+#define META_CR_TXUXXRXRQ_TX_S (12)
+#define META_CR_TXUXXRXRQ_RX_S (4)
+#define META_CR_TXUXXRXRQ_UXX_S (0)
+
+/* Internal ctrl regs. */
+#define META_CR_TXUIN_ID (0x0)
+/* Data unit regs. */
+#define META_CR_TXUD0_ID (0x1)
+/* Data unit regs. */
+#define META_CR_TXUD1_ID (0x2)
+/* Address unit regs. */
+#define META_CR_TXUA0_ID (0x3)
+/* Address unit regs. */
+#define META_CR_TXUA1_ID (0x4)
+/* PC registers. */
+#define META_CR_TXUPC_ID (0x5)
+
+/* Macros to calculate register access values. */
+#define META_CR_CORE_REG(thr, reg_num, unit)          \
+	(((u32)(thr) << META_CR_TXUXXRXRQ_TX_S) |     \
+	 ((u32)(reg_num) << META_CR_TXUXXRXRQ_RX_S) | \
+	 ((u32)(unit) << META_CR_TXUXXRXRQ_UXX_S))
+
+#define META_CR_THR0_PC META_CR_CORE_REG(0, 0, META_CR_TXUPC_ID)
+#define META_CR_THR0_PCX META_CR_CORE_REG(0, 1, META_CR_TXUPC_ID)
+#define META_CR_THR0_SP META_CR_CORE_REG(0, 0, META_CR_TXUA0_ID)
+
+#define META_CR_THR1_PC META_CR_CORE_REG(1, 0, META_CR_TXUPC_ID)
+#define META_CR_THR1_PCX META_CR_CORE_REG(1, 1, META_CR_TXUPC_ID)
+#define META_CR_THR1_SP META_CR_CORE_REG(1, 0, META_CR_TXUA0_ID)
+
+#define SP_ACCESS(thread) META_CR_CORE_REG(thread, 0, META_CR_TXUA0_ID)
+#define PC_ACCESS(thread) META_CR_CORE_REG(thread, 0, META_CR_TXUPC_ID)
+
+#define META_CR_COREREG_ENABLE (0x0000000U)
+#define META_CR_COREREG_STATUS (0x0000010U)
+#define META_CR_COREREG_DEFR (0x00000A0U)
+#define META_CR_COREREG_PRIVEXT (0x00000E8U)
+
+#define META_CR_T0ENABLE_OFFSET \
+	(META_CR_CTRLREG_BASE(0U) + META_CR_COREREG_ENABLE)
+#define META_CR_T0STATUS_OFFSET \
+	(META_CR_CTRLREG_BASE(0U) + META_CR_COREREG_STATUS)
+#define META_CR_T0DEFR_OFFSET (META_CR_CTRLREG_BASE(0U) + META_CR_COREREG_DEFR)
+#define META_CR_T0PRIVEXT_OFFSET \
+	(META_CR_CTRLREG_BASE(0U) + META_CR_COREREG_PRIVEXT)
+
+#define META_CR_T1ENABLE_OFFSET \
+	(META_CR_CTRLREG_BASE(1U) + META_CR_COREREG_ENABLE)
+#define META_CR_T1STATUS_OFFSET \
+	(META_CR_CTRLREG_BASE(1U) + META_CR_COREREG_STATUS)
+#define META_CR_T1DEFR_OFFSET (META_CR_CTRLREG_BASE(1U) + META_CR_COREREG_DEFR)
+#define META_CR_T1PRIVEXT_OFFSET \
+	(META_CR_CTRLREG_BASE(1U) + META_CR_COREREG_PRIVEXT)
+
+#define META_CR_TXENABLE_ENABLE_BIT (0x00000001U) /* Set if running */
+#define META_CR_TXSTATUS_PRIV (0x00020000U)
+#define META_CR_TXPRIVEXT_MINIM (0x00000080U)
+
+#define META_MEM_GLOBAL_RANGE_BIT (0x80000000U)
+
+#define META_CR_TXCLKCTRL (0x048000B0)
+#define META_CR_TXCLKCTRL_ALL_ON (0x55111111)
+#define META_CR_TXCLKCTRL_ALL_AUTO (0xAA222222)
+
+#define META_CR_MMCU_LOCAL_EBCTRL (0x04830600)
+#define META_CR_MMCU_LOCAL_EBCTRL_ICWIN (0x3 << 14)
+#define META_CR_MMCU_LOCAL_EBCTRL_DCWIN (0x3 << 6)
+#define META_CR_SYSC_DCPART(n) (0x04830200 + (n) * 0x8)
+#define META_CR_SYSC_DCPARTX_CACHED_WRITE_ENABLE (0x1 << 31)
+#define META_CR_SYSC_ICPART(n) (0x04830220 + (n) * 0x8)
+#define META_CR_SYSC_XCPARTX_LOCAL_ADDR_OFFSET_TOP_HALF (0x8 << 16)
+#define META_CR_SYSC_XCPARTX_LOCAL_ADDR_FULL_CACHE (0xF)
+#define META_CR_SYSC_XCPARTX_LOCAL_ADDR_HALF_CACHE (0x7)
+#define META_CR_MMCU_DCACHE_CTRL (0x04830018)
+#define META_CR_MMCU_ICACHE_CTRL (0x04830020)
+#define META_CR_MMCU_XCACHE_CTRL_CACHE_HITS_EN (0x1)
+
+/*
+ ******************************************************************************
+ * META LDR Format
+ ******************************************************************************
+ */
+/* Block header structure. */
+struct rogue_meta_ldr_block_hdr {
+	u32 dev_id;
+	u32 sl_code;
+	u32 sl_data;
+	u16 pc_ctrl;
+	u16 crc;
+};
+
+/* High level data stream block structure. */
+struct rogue_meta_ldr_l1_data_blk {
+	u16 cmd;
+	u16 length;
+	u32 next;
+	u32 cmd_data[4];
+};
+
+/* High level data stream block structure. */
+struct rogue_meta_ldr_l2_data_blk {
+	u16 tag;
+	u16 length;
+	u32 block_data[4];
+};
+
+/* Config command structure. */
+struct rogue_meta_ldr_cfg_blk {
+	u32 type;
+	u32 block_data[4];
+};
+
+/* Block type definitions */
+#define ROGUE_META_LDR_COMMENT_TYPE_MASK (0x0010U)
+#define ROGUE_META_LDR_BLK_IS_COMMENT(x) (((x) & ROGUE_META_LDR_COMMENT_TYPE_MASK) != 0U)
+
+/*
+ * Command definitions
+ *  Value   Name            Description
+ *  0       LoadMem         Load memory with binary data.
+ *  1       LoadCore        Load a set of core registers.
+ *  2       LoadMMReg       Load a set of memory mapped registers.
+ *  3       StartThreads    Set each thread PC and SP, then enable threads.
+ *  4       ZeroMem         Zeros a memory region.
+ *  5       Config          Perform a configuration command.
+ */
+#define ROGUE_META_LDR_CMD_MASK (0x000FU)
+
+#define ROGUE_META_LDR_CMD_LOADMEM (0x0000U)
+#define ROGUE_META_LDR_CMD_LOADCORE (0x0001U)
+#define ROGUE_META_LDR_CMD_LOADMMREG (0x0002U)
+#define ROGUE_META_LDR_CMD_START_THREADS (0x0003U)
+#define ROGUE_META_LDR_CMD_ZEROMEM (0x0004U)
+#define ROGUE_META_LDR_CMD_CONFIG (0x0005U)
+
+/*
+ * Config Command definitions
+ *  Value   Name        Description
+ *  0       Pause       Pause for x times 100 instructions
+ *  1       Read        Read a value from register - No value return needed.
+ *                      Utilises effects of issuing reads to certain registers
+ *  2       Write       Write to mem location
+ *  3       MemSet      Set mem to value
+ *  4       MemCheck    check mem for specific value.
+ */
+#define ROGUE_META_LDR_CFG_PAUSE (0x0000)
+#define ROGUE_META_LDR_CFG_READ (0x0001)
+#define ROGUE_META_LDR_CFG_WRITE (0x0002)
+#define ROGUE_META_LDR_CFG_MEMSET (0x0003)
+#define ROGUE_META_LDR_CFG_MEMCHECK (0x0004)
+
+/*
+ ******************************************************************************
+ * ROGUE FW segmented MMU definitions
+ ******************************************************************************
+ */
+/* All threads can access the segment. */
+#define ROGUE_FW_SEGMMU_ALLTHRS (0xf << 8U)
+/* Writable. */
+#define ROGUE_FW_SEGMMU_WRITEABLE (0x1U << 1U)
+/* All threads can access and writable. */
+#define ROGUE_FW_SEGMMU_ALLTHRS_WRITEABLE \
+	(ROGUE_FW_SEGMMU_ALLTHRS | ROGUE_FW_SEGMMU_WRITEABLE)
+
+/* Direct map region 10 used for mapping GPU memory - max 8MB. */
+#define ROGUE_FW_SEGMMU_DMAP_GPU_ID (10U)
+#define ROGUE_FW_SEGMMU_DMAP_GPU_ADDR_START (0x07000000U)
+#define ROGUE_FW_SEGMMU_DMAP_GPU_MAX_SIZE (0x00800000U)
+
+/* Segment IDs. */
+#define ROGUE_FW_SEGMMU_DATA_ID (1U)
+#define ROGUE_FW_SEGMMU_BOOTLDR_ID (2U)
+#define ROGUE_FW_SEGMMU_TEXT_ID (ROGUE_FW_SEGMMU_BOOTLDR_ID)
+
+/*
+ * SLC caching strategy in S7 and volcanic is emitted through the segment MMU.
+ * All the segments configured through the macro ROGUE_FW_SEGMMU_OUTADDR_TOP are
+ * CACHED in the SLC.
+ * The interface has been kept the same to simplify the code changes.
+ * The bifdm argument is ignored (no longer relevant) in S7 and volcanic.
+ */
+#define ROGUE_FW_SEGMMU_OUTADDR_TOP_VIVT_SLC(pers, slc_policy, mmu_ctx)  \
+	((((u64)((pers) & 0x3)) << 52) | (((u64)((mmu_ctx) & 0xFF)) << 44) | \
+	 (((u64)((slc_policy) & 0x1)) << 40))
+#define ROGUE_FW_SEGMMU_OUTADDR_TOP_VIVT_SLC_CACHED(mmu_ctx) \
+	ROGUE_FW_SEGMMU_OUTADDR_TOP_VIVT_SLC(0x3, 0x0, mmu_ctx)
+#define ROGUE_FW_SEGMMU_OUTADDR_TOP_VIVT_SLC_UNCACHED(mmu_ctx) \
+	ROGUE_FW_SEGMMU_OUTADDR_TOP_VIVT_SLC(0x0, 0x1, mmu_ctx)
+
+/*
+ * To configure the Page Catalog and BIF-DM fed into the BIF for Garten
+ * accesses through this segment.
+ */
+#define ROGUE_FW_SEGMMU_OUTADDR_TOP_SLC(pc, bifdm) \
+	(((u64)((u64)(pc) & 0xFU) << 44U) | ((u64)((u64)(bifdm) & 0xFU) << 40U))
+
+#define ROGUE_FW_SEGMMU_META_BIFDM_ID (0x7U)
+
+/* META segments have 4kB minimum size. */
+#define ROGUE_FW_SEGMMU_ALIGN (0x1000U)
+
+/* Segmented MMU registers (n = segment id). */
+#define META_CR_MMCU_SEGMENT_N_BASE(n) (0x04850000U + ((n) * 0x10U))
+#define META_CR_MMCU_SEGMENT_N_LIMIT(n) (0x04850004U + ((n) * 0x10U))
+#define META_CR_MMCU_SEGMENT_N_OUTA0(n) (0x04850008U + ((n) * 0x10U))
+#define META_CR_MMCU_SEGMENT_N_OUTA1(n) (0x0485000CU + ((n) * 0x10U))
+
+/*
+ * The following defines must be recalculated if the Meta MMU segments used
+ * to access Host-FW data are changed.
+ * Current combinations are:
+ * - SLC uncached, META cached,   FW base address 0x70000000
+ * - SLC uncached, META uncached, FW base address 0xF0000000
+ * - SLC cached,   META cached,   FW base address 0x10000000
+ * - SLC cached,   META uncached, FW base address 0x90000000
+ */
+#define ROGUE_FW_SEGMMU_DATA_BASE_ADDRESS (0x10000000U)
+#define ROGUE_FW_SEGMMU_DATA_META_CACHED (0x0U)
+#define ROGUE_FW_SEGMMU_DATA_META_UNCACHED (META_MEM_GLOBAL_RANGE_BIT)
+#define ROGUE_FW_SEGMMU_DATA_META_CACHE_MASK (META_MEM_GLOBAL_RANGE_BIT)
+/*
+ * For non-VIVT SLCs the cacheability of the FW data in the SLC is selected in
+ * the PTEs for the FW data, not in the Meta Segment MMU, which means these
+ * defines have no real effect in those cases.
+ */
+#define ROGUE_FW_SEGMMU_DATA_VIVT_SLC_CACHED (0x0U)
+#define ROGUE_FW_SEGMMU_DATA_VIVT_SLC_UNCACHED (0x60000000U)
+#define ROGUE_FW_SEGMMU_DATA_VIVT_SLC_CACHE_MASK (0x60000000U)
+
+/*
+ ******************************************************************************
+ * ROGUE FW Bootloader defaults
+ ******************************************************************************
+ */
+#define ROGUE_FW_BOOTLDR_META_ADDR (0x40000000U)
+#define ROGUE_FW_BOOTLDR_DEVV_ADDR_0 (0xC0000000U)
+#define ROGUE_FW_BOOTLDR_DEVV_ADDR_1 (0x000000E1)
+#define ROGUE_FW_BOOTLDR_DEVV_ADDR                     \
+	((((u64)ROGUE_FW_BOOTLDR_DEVV_ADDR_1) << 32) | \
+	 ROGUE_FW_BOOTLDR_DEVV_ADDR_0)
+#define ROGUE_FW_BOOTLDR_LIMIT (0x1FFFF000)
+#define ROGUE_FW_MAX_BOOTLDR_OFFSET (0x1000)
+
+/* Bootloader configuration offset is in dwords (512 bytes) */
+#define ROGUE_FW_BOOTLDR_CONF_OFFSET (0x80)
+
+/*
+ ******************************************************************************
+ * ROGUE META Stack
+ ******************************************************************************
+ */
+#define ROGUE_META_STACK_SIZE (0x1000U)
+
+/*
+ ******************************************************************************
+ * ROGUE META Core memory
+ ******************************************************************************
+ */
+/* Code and data both map to the same physical memory. */
+#define ROGUE_META_COREMEM_CODE_ADDR (0x80000000U)
+#define ROGUE_META_COREMEM_DATA_ADDR (0x82000000U)
+#define ROGUE_META_COREMEM_OFFSET_MASK (0x01ffffffU)
+
+#define ROGUE_META_IS_COREMEM_CODE(a, b)                                \
+	({                                                              \
+		u32 _a = (a), _b = (b);                                 \
+		((_a) >= ROGUE_META_COREMEM_CODE_ADDR) &&               \
+			((_a) < (ROGUE_META_COREMEM_CODE_ADDR + (_b))); \
+	})
+#define ROGUE_META_IS_COREMEM_DATA(a, b)                                \
+	({                                                              \
+		u32 _a = (a), _b = (b);                                 \
+		((_a) >= ROGUE_META_COREMEM_DATA_ADDR) &&               \
+			((_a) < (ROGUE_META_COREMEM_DATA_ADDR + (_b))); \
+	})
+/*
+ ******************************************************************************
+ * 2nd thread
+ ******************************************************************************
+ */
+#define ROGUE_FW_THR1_PC (0x18930000)
+#define ROGUE_FW_THR1_SP (0x78890000)
+
+/*
+ ******************************************************************************
+ * META compatibility
+ ******************************************************************************
+ */
+
+#define META_CR_CORE_ID (0x04831000)
+#define META_CR_CORE_ID_VER_SHIFT (16U)
+#define META_CR_CORE_ID_VER_CLRMSK (0XFF00FFFFU)
+
+#define ROGUE_CR_META_MTP218_CORE_ID_VALUE 0x19
+#define ROGUE_CR_META_MTP219_CORE_ID_VALUE 0x1E
+#define ROGUE_CR_META_LTP218_CORE_ID_VALUE 0x1C
+#define ROGUE_CR_META_LTP217_CORE_ID_VALUE 0x1F
+
+#define ROGUE_FW_PROCESSOR_META "META"
+
+#endif /* PVR_ROGUE_META_H */
diff --git a/drivers/gpu/drm/imagination/pvr_rogue_mips.h b/drivers/gpu/drm/imagination/pvr_rogue_mips.h
new file mode 100644
index 000000000000..fe5167bf7fba
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_rogue_mips.h
@@ -0,0 +1,335 @@
+/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#ifndef PVR_ROGUE_MIPS_H
+#define PVR_ROGUE_MIPS_H
+
+#include <linux/bits.h>
+#include <linux/types.h>
+
+/* Utility defines for memory management. */
+#define ROGUE_MIPSFW_LOG2_PAGE_SIZE_4K (12)
+#define ROGUE_MIPSFW_PAGE_SIZE_4K (0x1 << ROGUE_MIPSFW_LOG2_PAGE_SIZE_4K)
+#define ROGUE_MIPSFW_PAGE_MASK_4K (ROGUE_MIPSFW_PAGE_SIZE_4K - 1)
+#define ROGUE_MIPSFW_LOG2_PAGE_SIZE_64K (16)
+#define ROGUE_MIPSFW_PAGE_SIZE_64K (0x1 << ROGUE_MIPSFW_LOG2_PAGE_SIZE_64K)
+#define ROGUE_MIPSFW_PAGE_MASK_64K (ROGUE_MIPSFW_PAGE_SIZE_64K - 1)
+#define ROGUE_MIPSFW_LOG2_PAGE_SIZE_256K (18)
+#define ROGUE_MIPSFW_PAGE_SIZE_256K (0x1 << ROGUE_MIPSFW_LOG2_PAGE_SIZE_256K)
+#define ROGUE_MIPSFW_PAGE_MASK_256K (ROGUE_MIPSFW_PAGE_SIZE_256K - 1)
+#define ROGUE_MIPSFW_LOG2_PAGE_SIZE_1MB (20)
+#define ROGUE_MIPSFW_PAGE_SIZE_1MB (0x1 << ROGUE_MIPSFW_LOG2_PAGE_SIZE_1MB)
+#define ROGUE_MIPSFW_PAGE_MASK_1MB (ROGUE_MIPSFW_PAGE_SIZE_1MB - 1)
+#define ROGUE_MIPSFW_LOG2_PAGE_SIZE_4MB (22)
+#define ROGUE_MIPSFW_PAGE_SIZE_4MB (0x1 << ROGUE_MIPSFW_LOG2_PAGE_SIZE_4MB)
+#define ROGUE_MIPSFW_PAGE_MASK_4MB (ROGUE_MIPSFW_PAGE_SIZE_4MB - 1)
+#define ROGUE_MIPSFW_LOG2_PTE_ENTRY_SIZE (2)
+/* log2 page table sizes dependent on FW heap size and page size (for each OS). */
+#define ROGUE_MIPSFW_LOG2_PAGETABLE_SIZE_4K(pvr_dev) ((pvr_dev)->fw_dev.fw_heap_info.log2_size - \
+						      ROGUE_MIPSFW_LOG2_PAGE_SIZE_4K +    \
+						      ROGUE_MIPSFW_LOG2_PTE_ENTRY_SIZE)
+#define ROGUE_MIPSFW_LOG2_PAGETABLE_SIZE_64K(pvr_dev) ((pvr_dev)->fw_dev.fw_heap_info.log2_size - \
+						       ROGUE_MIPSFW_LOG2_PAGE_SIZE_64K +   \
+						       ROGUE_MIPSFW_LOG2_PTE_ENTRY_SIZE)
+/* Maximum number of page table pages (both Host and MIPS pages). */
+#define ROGUE_MIPSFW_MAX_NUM_PAGETABLE_PAGES (4)
+/* Total number of TLB entries. */
+#define ROGUE_MIPSFW_NUMBER_OF_TLB_ENTRIES (16)
+/* "Uncached" caching policy. */
+#define ROGUE_MIPSFW_UNCACHED_CACHE_POLICY (2)
+/* "Write-back write-allocate" caching policy. */
+#define ROGUE_MIPSFW_WRITEBACK_CACHE_POLICY (3)
+/* "Write-through no write-allocate" caching policy. */
+#define ROGUE_MIPSFW_WRITETHROUGH_CACHE_POLICY (1)
+/* Cached policy used by MIPS in case of physical bus on 32 bit. */
+#define ROGUE_MIPSFW_CACHED_POLICY (ROGUE_MIPSFW_WRITEBACK_CACHE_POLICY)
+/* Cached policy used by MIPS in case of physical bus on more than 32 bit. */
+#define ROGUE_MIPSFW_CACHED_POLICY_ABOVE_32BIT (ROGUE_MIPSFW_WRITETHROUGH_CACHE_POLICY)
+/* Total number of Remap entries. */
+#define ROGUE_MIPSFW_NUMBER_OF_REMAP_ENTRIES (2 * ROGUE_MIPSFW_NUMBER_OF_TLB_ENTRIES)
+
+/* MIPS EntryLo/PTE format. */
+
+#define ROGUE_MIPSFW_ENTRYLO_READ_INHIBIT_SHIFT (31U)
+#define ROGUE_MIPSFW_ENTRYLO_READ_INHIBIT_CLRMSK (0X7FFFFFFF)
+#define ROGUE_MIPSFW_ENTRYLO_READ_INHIBIT_EN (0X80000000)
+
+#define ROGUE_MIPSFW_ENTRYLO_EXEC_INHIBIT_SHIFT (30U)
+#define ROGUE_MIPSFW_ENTRYLO_EXEC_INHIBIT_CLRMSK (0XBFFFFFFF)
+#define ROGUE_MIPSFW_ENTRYLO_EXEC_INHIBIT_EN (0X40000000)
+
+/* Page Frame Number */
+#define ROGUE_MIPSFW_ENTRYLO_PFN_SHIFT (6)
+#define ROGUE_MIPSFW_ENTRYLO_PFN_ALIGNSHIFT (12)
+/* Mask used for the MIPS Page Table in case of physical bus on 32 bit. */
+#define ROGUE_MIPSFW_ENTRYLO_PFN_MASK (0x03FFFFC0)
+#define ROGUE_MIPSFW_ENTRYLO_PFN_SIZE (20)
+/* Mask used for the MIPS Page Table in case of physical bus on more than 32 bit. */
+#define ROGUE_MIPSFW_ENTRYLO_PFN_MASK_ABOVE_32BIT (0x3FFFFFC0)
+#define ROGUE_MIPSFW_ENTRYLO_PFN_SIZE_ABOVE_32BIT (24)
+#define ROGUE_MIPSFW_ADDR_TO_ENTRYLO_PFN_RSHIFT (ROGUE_MIPSFW_ENTRYLO_PFN_ALIGNSHIFT - \
+						 ROGUE_MIPSFW_ENTRYLO_PFN_SHIFT)
+
+#define ROGUE_MIPSFW_ENTRYLO_CACHE_POLICY_SHIFT (3U)
+#define ROGUE_MIPSFW_ENTRYLO_CACHE_POLICY_CLRMSK (0XFFFFFFC7)
+
+#define ROGUE_MIPSFW_ENTRYLO_DIRTY_SHIFT (2U)
+#define ROGUE_MIPSFW_ENTRYLO_DIRTY_CLRMSK (0XFFFFFFFB)
+#define ROGUE_MIPSFW_ENTRYLO_DIRTY_EN (0X00000004)
+
+#define ROGUE_MIPSFW_ENTRYLO_VALID_SHIFT (1U)
+#define ROGUE_MIPSFW_ENTRYLO_VALID_CLRMSK (0XFFFFFFFD)
+#define ROGUE_MIPSFW_ENTRYLO_VALID_EN (0X00000002)
+
+#define ROGUE_MIPSFW_ENTRYLO_GLOBAL_SHIFT (0U)
+#define ROGUE_MIPSFW_ENTRYLO_GLOBAL_CLRMSK (0XFFFFFFFE)
+#define ROGUE_MIPSFW_ENTRYLO_GLOBAL_EN (0X00000001)
+
+#define ROGUE_MIPSFW_ENTRYLO_DVG (ROGUE_MIPSFW_ENTRYLO_DIRTY_EN | \
+				  ROGUE_MIPSFW_ENTRYLO_VALID_EN | \
+				  ROGUE_MIPSFW_ENTRYLO_GLOBAL_EN)
+#define ROGUE_MIPSFW_ENTRYLO_UNCACHED (ROGUE_MIPSFW_UNCACHED_CACHE_POLICY << \
+				       ROGUE_MIPSFW_ENTRYLO_CACHE_POLICY_SHIFT)
+#define ROGUE_MIPSFW_ENTRYLO_DVG_UNCACHED (ROGUE_MIPSFW_ENTRYLO_DVG | \
+					   ROGUE_MIPSFW_ENTRYLO_UNCACHED)
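+
+/*
+ * Illustrative sketch (an assumption, not taken from this interface): for a
+ * 32-bit physical bus, a 4k-aligned physical address "phys" could be turned
+ * into an uncached, dirty/valid/global EntryLo value roughly as:
+ *
+ *	entry_lo = ((phys >> ROGUE_MIPSFW_ADDR_TO_ENTRYLO_PFN_RSHIFT) &
+ *		    ROGUE_MIPSFW_ENTRYLO_PFN_MASK) |
+ *		   ROGUE_MIPSFW_ENTRYLO_DVG_UNCACHED;
+ *
+ * The name "phys" is hypothetical; the actual MIPS page table code lives
+ * elsewhere in the driver.
+ */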
+
+/* Remap Range Config Addr Out. */
+/* These defines refer to the upper half of the Remap Range Config register. */
+#define ROGUE_MIPSFW_REMAP_RANGE_ADDR_OUT_MASK (0x0FFFFFF0)
+#define ROGUE_MIPSFW_REMAP_RANGE_ADDR_OUT_SHIFT (4) /* wrt upper half of the register. */
+#define ROGUE_MIPSFW_REMAP_RANGE_ADDR_OUT_ALIGNSHIFT (12)
+#define ROGUE_MIPSFW_ADDR_TO_RR_ADDR_OUT_RSHIFT (ROGUE_MIPSFW_REMAP_RANGE_ADDR_OUT_ALIGNSHIFT - \
+						 ROGUE_MIPSFW_REMAP_RANGE_ADDR_OUT_SHIFT)
+
+/*
+ * Pages to trampoline problematic physical addresses:
+ *   - ROGUE_MIPSFW_BOOT_REMAP_PHYS_ADDR_IN : 0x1FC0_0000
+ *   - ROGUE_MIPSFW_DATA_REMAP_PHYS_ADDR_IN : 0x1FC0_1000
+ *   - ROGUE_MIPSFW_CODE_REMAP_PHYS_ADDR_IN : 0x1FC0_2000
+ *   - (benign trampoline)               : 0x1FC0_3000
+ * that would otherwise be erroneously remapped by the MIPS wrapper.
+ * (see "Firmware virtual layout and remap configuration" section below)
+ */
+
+#define ROGUE_MIPSFW_TRAMPOLINE_LOG2_NUMPAGES (2)
+#define ROGUE_MIPSFW_TRAMPOLINE_NUMPAGES BIT(ROGUE_MIPSFW_TRAMPOLINE_LOG2_NUMPAGES)
+#define ROGUE_MIPSFW_TRAMPOLINE_SIZE (ROGUE_MIPSFW_TRAMPOLINE_NUMPAGES << \
+				      ROGUE_MIPSFW_LOG2_PAGE_SIZE_4K)
+#define ROGUE_MIPSFW_TRAMPOLINE_LOG2_SEGMENT_SIZE (ROGUE_MIPSFW_TRAMPOLINE_LOG2_NUMPAGES + \
+						   ROGUE_MIPSFW_LOG2_PAGE_SIZE_4K)
+
+#define ROGUE_MIPSFW_TRAMPOLINE_TARGET_PHYS_ADDR (ROGUE_MIPSFW_BOOT_REMAP_PHYS_ADDR_IN)
+#define ROGUE_MIPSFW_TRAMPOLINE_OFFSET(a) ((a) - ROGUE_MIPSFW_BOOT_REMAP_PHYS_ADDR_IN)
+
+#define ROGUE_MIPSFW_SENSITIVE_ADDR(a) (ROGUE_MIPSFW_BOOT_REMAP_PHYS_ADDR_IN == \
+					(~((1 << ROGUE_MIPSFW_TRAMPOLINE_LOG2_SEGMENT_SIZE) - 1) \
+					 & (a)))
+
+/* Firmware virtual layout and remap configuration. */
+/*
+ * For each remap region we define:
+ * - the virtual base used by the Firmware to access code/data through that region
+ * - the microAptivAP physical address correspondent to the virtual base address,
+ *   used as input address and remapped to the actual physical address
+ * - log2 of size of the region remapped by the MIPS wrapper, i.e. number of bits from
+ *   the bottom of the base input address that survive onto the output address
+ *   (this defines both the alignment and the maximum size of the remapped region)
+ * - one or more code/data segments within the remapped region.
+ */
+
+/* Boot remap setup. */
+#define ROGUE_MIPSFW_BOOT_REMAP_VIRTUAL_BASE (0xBFC00000)
+#define ROGUE_MIPSFW_BOOT_REMAP_PHYS_ADDR_IN (0x1FC00000)
+#define ROGUE_MIPSFW_BOOT_REMAP_LOG2_SEGMENT_SIZE (12)
+#define ROGUE_MIPSFW_BOOT_NMI_CODE_VIRTUAL_BASE (ROGUE_MIPSFW_BOOT_REMAP_VIRTUAL_BASE)
+
+/* Data remap setup. */
+#define ROGUE_MIPSFW_DATA_REMAP_VIRTUAL_BASE (0xBFC01000)
+#define ROGUE_MIPSFW_DATA_CACHED_REMAP_VIRTUAL_BASE (0x9FC01000)
+#define ROGUE_MIPSFW_DATA_REMAP_PHYS_ADDR_IN (0x1FC01000)
+#define ROGUE_MIPSFW_DATA_REMAP_LOG2_SEGMENT_SIZE (12)
+#define ROGUE_MIPSFW_BOOT_NMI_DATA_VIRTUAL_BASE (ROGUE_MIPSFW_DATA_REMAP_VIRTUAL_BASE)
+
+/* Code remap setup. */
+#define ROGUE_MIPSFW_CODE_REMAP_VIRTUAL_BASE (0x9FC02000)
+#define ROGUE_MIPSFW_CODE_REMAP_PHYS_ADDR_IN (0x1FC02000)
+#define ROGUE_MIPSFW_CODE_REMAP_LOG2_SEGMENT_SIZE (12)
+#define ROGUE_MIPSFW_EXCEPTIONS_VIRTUAL_BASE (ROGUE_MIPSFW_CODE_REMAP_VIRTUAL_BASE)
+
+/* Permanent mappings setup. */
+#define ROGUE_MIPSFW_PT_VIRTUAL_BASE (0xCF000000)
+#define ROGUE_MIPSFW_REGISTERS_VIRTUAL_BASE (0xCF800000)
+#define ROGUE_MIPSFW_STACK_VIRTUAL_BASE (0xCF600000)
+
+/* Bootloader configuration data. */
+/*
+ * Bootloader configuration offset (where ROGUE_MIPSFW_BOOT_DATA lives)
+ * within the bootloader/NMI data page.
+ */
+#define ROGUE_MIPSFW_BOOTLDR_CONF_OFFSET (0x0)
+
+/* NMI shared data. */
+/* Base address of the shared data within the bootloader/NMI data page. */
+#define ROGUE_MIPSFW_NMI_SHARED_DATA_BASE (0x100)
+/* Size used by Debug dump data. */
+#define ROGUE_MIPSFW_NMI_SHARED_SIZE (0x2B0)
+/* Offsets in the NMI shared area in 32-bit words. */
+#define ROGUE_MIPSFW_NMI_SYNC_FLAG_OFFSET (0x0)
+#define ROGUE_MIPSFW_NMI_STATE_OFFSET (0x1)
+#define ROGUE_MIPSFW_NMI_ERROR_STATE_SET (0x1)
+
+/* MIPS boot stage. */
+#define ROGUE_MIPSFW_BOOT_STAGE_OFFSET (0x400)
+
+/*
+ * MIPS private data in the bootloader data page.
+ * Memory below this offset is used by the FW only, no interface data allowed.
+ */
+#define ROGUE_MIPSFW_PRIVATE_DATA_OFFSET (0x800)
+
+struct rogue_mipsfw_boot_data {
+	u64 stack_phys_addr;
+	u64 reg_base;
+	u64 pt_phys_addr[ROGUE_MIPSFW_MAX_NUM_PAGETABLE_PAGES];
+	u32 pt_log2_page_size;
+	u32 pt_num_pages;
+	u32 reserved1;
+	u32 reserved2;
+};
+
+#define ROGUE_MIPSFW_GET_OFFSET_IN_DWORDS(offset) ((offset) / sizeof(u32))
+#define ROGUE_MIPSFW_GET_OFFSET_IN_QWORDS(offset) ((offset) / sizeof(u64))
+
+/* Used for compatibility checks. */
+#define ROGUE_MIPSFW_ARCHTYPE_VER_CLRMSK (0xFFFFE3FFU)
+#define ROGUE_MIPSFW_ARCHTYPE_VER_SHIFT (10U)
+#define ROGUE_MIPSFW_CORE_ID_VALUE (0x001U)
+#define ROGUE_FW_PROCESSOR_MIPS "MIPS"
+
+/* microAptivAP cache line size. */
+#define ROGUE_MIPSFW_MICROAPTIVEAP_CACHELINE_SIZE (16U)
+
+/*
+ * The SOCIF transactions are identified with the top 16 bits of the physical address emitted by
+ * the MIPS.
+ */
+#define ROGUE_MIPSFW_WRAPPER_CONFIG_REGBANK_ADDR_ALIGN (16U)
+
+/* Values to put in the MIPS selectors for performance counters. */
+/* Icache accesses in COUNTER0. */
+#define ROGUE_MIPSFW_PERF_COUNT_CTRL_ICACHE_ACCESSES_C0 (9U)
+/* Icache misses in COUNTER1. */
+#define ROGUE_MIPSFW_PERF_COUNT_CTRL_ICACHE_MISSES_C1 (9U)
+
+/* Dcache accesses in COUNTER0. */
+#define ROGUE_MIPSFW_PERF_COUNT_CTRL_DCACHE_ACCESSES_C0 (10U)
+/* Dcache misses in COUNTER1. */
+#define ROGUE_MIPSFW_PERF_COUNT_CTRL_DCACHE_MISSES_C1 (11U)
+
+/* ITLB instruction accesses in COUNTER0. */
+#define ROGUE_MIPSFW_PERF_COUNT_CTRL_ITLB_INSTR_ACCESSES_C0 (5U)
+/* JTLB instruction accesses misses in COUNTER1. */
+#define ROGUE_MIPSFW_PERF_COUNT_CTRL_JTLB_INSTR_MISSES_C1 (7U)
+
+/* Instructions completed in COUNTER0. */
+#define ROGUE_MIPSFW_PERF_COUNT_CTRL_INSTR_COMPLETED_C0 (1U)
+/* JTLB data misses in COUNTER1. */
+#define ROGUE_MIPSFW_PERF_COUNT_CTRL_JTLB_DATA_MISSES_C1 (8U)
+
+/* Shift for the Event field in the MIPS perf ctrl registers. */
+#define ROGUE_MIPSFW_PERF_COUNT_CTRL_EVENT_SHIFT (5U)
+
+/* Additional flags for performance counters. See MIPS manual for further reference. */
+#define ROGUE_MIPSFW_PERF_COUNT_CTRL_COUNT_USER_MODE (8U)
+#define ROGUE_MIPSFW_PERF_COUNT_CTRL_COUNT_KERNEL_MODE (2U)
+#define ROGUE_MIPSFW_PERF_COUNT_CTRL_COUNT_EXL (1U)
+
+#define ROGUE_MIPSFW_C0_NBHWIRQ	8
+
+/* Macros to decode C0_Cause register. */
+#define ROGUE_MIPSFW_C0_CAUSE_EXCCODE(cause) (((cause) & 0x7c) >> 2)
+#define ROGUE_MIPSFW_C0_CAUSE_EXCCODE_FWERROR 9
+/* Use only when Coprocessor Unusable exception. */
+#define ROGUE_MIPSFW_C0_CAUSE_UNUSABLE_UNIT(cause) (((cause) >> 28) & 0x3)
+#define ROGUE_MIPSFW_C0_CAUSE_PENDING_HWIRQ(cause) (((cause) & 0x3fc00) >> 10)
+#define ROGUE_MIPSFW_C0_CAUSE_FDCIPENDING BIT(21)
+#define ROGUE_MIPSFW_C0_CAUSE_IV BIT(23)
+#define ROGUE_MIPSFW_C0_CAUSE_IC BIT(25)
+#define ROGUE_MIPSFW_C0_CAUSE_PCIPENDING BIT(26)
+#define ROGUE_MIPSFW_C0_CAUSE_TIPENDING BIT(30)
+#define ROGUE_MIPSFW_C0_CAUSE_BRANCH_DELAY BIT(31)
+
+/* Macros to decode C0_Debug register. */
+#define ROGUE_MIPSFW_C0_DEBUG_EXCCODE(debug) (((debug) >> 10) & 0x1f)
+#define ROGUE_MIPSFW_C0_DEBUG_DSS BIT(0)
+#define ROGUE_MIPSFW_C0_DEBUG_DBP BIT(1)
+#define ROGUE_MIPSFW_C0_DEBUG_DDBL BIT(2)
+#define ROGUE_MIPSFW_C0_DEBUG_DDBS BIT(3)
+#define ROGUE_MIPSFW_C0_DEBUG_DIB BIT(4)
+#define ROGUE_MIPSFW_C0_DEBUG_DINT BIT(5)
+#define ROGUE_MIPSFW_C0_DEBUG_DIBIMPR BIT(6)
+#define ROGUE_MIPSFW_C0_DEBUG_DDBLIMPR BIT(18)
+#define ROGUE_MIPSFW_C0_DEBUG_DDBSIMPR BIT(19)
+#define ROGUE_MIPSFW_C0_DEBUG_IEXI BIT(20)
+#define ROGUE_MIPSFW_C0_DEBUG_DBUSEP BIT(21)
+#define ROGUE_MIPSFW_C0_DEBUG_CACHEEP BIT(22)
+#define ROGUE_MIPSFW_C0_DEBUG_MCHECKP BIT(23)
+#define ROGUE_MIPSFW_C0_DEBUG_IBUSEP BIT(24)
+#define ROGUE_MIPSFW_C0_DEBUG_DM BIT(30)
+#define ROGUE_MIPSFW_C0_DEBUG_DBD BIT(31)
+
+/* Macros to decode TLB entries. */
+#define ROGUE_MIPSFW_TLB_GET_MASK(page_mask) (((page_mask) >> 13) & 0XFFFFU)
+/* Page size in KB. */
+#define ROGUE_MIPSFW_TLB_GET_PAGE_SIZE(page_mask) ((((page_mask) | 0x1FFF) + 1) >> 11)
+/* Page mask from a page size in KB. */
+#define ROGUE_MIPSFW_TLB_GET_PAGE_MASK(page_size) ((((page_size) << 11) - 1) & ~0x7FF)
+#define ROGUE_MIPSFW_TLB_GET_VPN2(entry_hi) ((entry_hi) >> 13)
+#define ROGUE_MIPSFW_TLB_GET_COHERENCY(entry_lo) (((entry_lo) >> 3) & 0x7U)
+#define ROGUE_MIPSFW_TLB_GET_PFN(entry_lo) (((entry_lo) >> 6) & 0XFFFFFU)
+/* GET_PA uses a non-standard PFN mask for 36 bit addresses. */
+#define ROGUE_MIPSFW_TLB_GET_PA(entry_lo) (((u64)(entry_lo) & \
+					    ROGUE_MIPSFW_ENTRYLO_PFN_MASK_ABOVE_32BIT) << 6)
+#define ROGUE_MIPSFW_TLB_GET_INHIBIT(entry_lo) (((entry_lo) >> 30) & 0x3U)
+#define ROGUE_MIPSFW_TLB_GET_DGV(entry_lo) ((entry_lo) & 0x7U)
+#define ROGUE_MIPSFW_TLB_GLOBAL BIT(0)
+#define ROGUE_MIPSFW_TLB_VALID BIT(1)
+#define ROGUE_MIPSFW_TLB_DIRTY BIT(2)
+#define ROGUE_MIPSFW_TLB_XI BIT(30)
+#define ROGUE_MIPSFW_TLB_RI BIT(31)
+
+#define ROGUE_MIPSFW_REMAP_GET_REGION_SIZE(region_size_encoding) (1 << (((region_size_encoding) \
+									+ 1) << 1))
+
+struct rogue_mips_tlb_entry {
+	u32 tlb_page_mask;
+	u32 tlb_hi;
+	u32 tlb_lo0;
+	u32 tlb_lo1;
+};
+
+struct rogue_mips_remap_entry {
+	u32 remap_addr_in;  /* Always 4k aligned. */
+	u32 remap_addr_out; /* Always 4k aligned. */
+	u32 remap_region_size;
+};
+
+struct rogue_mips_state {
+	u32 error_state; /* This must come first in the structure. */
+	u32 error_epc;
+	u32 status_register;
+	u32 cause_register;
+	u32 bad_register;
+	u32 epc;
+	u32 sp;
+	u32 debug;
+	u32 depc;
+	u32 bad_instr;
+	u32 unmapped_address;
+	struct rogue_mips_tlb_entry tlb[ROGUE_MIPSFW_NUMBER_OF_TLB_ENTRIES];
+	struct rogue_mips_remap_entry remap[ROGUE_MIPSFW_NUMBER_OF_REMAP_ENTRIES];
+};
+
+#include "pvr_rogue_mips_check.h"
+
+#endif /* PVR_ROGUE_MIPS_H */
diff --git a/drivers/gpu/drm/imagination/pvr_rogue_mips_check.h b/drivers/gpu/drm/imagination/pvr_rogue_mips_check.h
new file mode 100644
index 000000000000..efad38039cae
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_rogue_mips_check.h
@@ -0,0 +1,58 @@
+/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#ifndef PVR_ROGUE_MIPS_CHECK_H
+#define PVR_ROGUE_MIPS_CHECK_H
+
+#include <linux/build_bug.h>
+
+static_assert(offsetof(struct rogue_mips_tlb_entry, tlb_page_mask) == 0,
+	      "offsetof(struct rogue_mips_tlb_entry, tlb_page_mask) incorrect");
+static_assert(offsetof(struct rogue_mips_tlb_entry, tlb_hi) == 4,
+	      "offsetof(struct rogue_mips_tlb_entry, tlb_hi) incorrect");
+static_assert(offsetof(struct rogue_mips_tlb_entry, tlb_lo0) == 8,
+	      "offsetof(struct rogue_mips_tlb_entry, tlb_lo0) incorrect");
+static_assert(offsetof(struct rogue_mips_tlb_entry, tlb_lo1) == 12,
+	      "offsetof(struct rogue_mips_tlb_entry, tlb_lo1) incorrect");
+static_assert(sizeof(struct rogue_mips_tlb_entry) == 16,
+	      "struct rogue_mips_tlb_entry is incorrect size");
+
+static_assert(offsetof(struct rogue_mips_remap_entry, remap_addr_in) == 0,
+	      "offsetof(struct rogue_mips_remap_entry, remap_addr_in) incorrect");
+static_assert(offsetof(struct rogue_mips_remap_entry, remap_addr_out) == 4,
+	      "offsetof(struct rogue_mips_remap_entry, remap_addr_out) incorrect");
+static_assert(offsetof(struct rogue_mips_remap_entry, remap_region_size) == 8,
+	      "offsetof(struct rogue_mips_remap_entry, remap_region_size) incorrect");
+static_assert(sizeof(struct rogue_mips_remap_entry) == 12,
+	      "struct rogue_mips_remap_entry is incorrect size");
+
+static_assert(offsetof(struct rogue_mips_state, error_state) == 0,
+	      "offsetof(struct rogue_mips_state, error_state) incorrect");
+static_assert(offsetof(struct rogue_mips_state, error_epc) == 4,
+	      "offsetof(struct rogue_mips_state, error_epc) incorrect");
+static_assert(offsetof(struct rogue_mips_state, status_register) == 8,
+	      "offsetof(struct rogue_mips_state, status_register) incorrect");
+static_assert(offsetof(struct rogue_mips_state, cause_register) == 12,
+	      "offsetof(struct rogue_mips_state, cause_register) incorrect");
+static_assert(offsetof(struct rogue_mips_state, bad_register) == 16,
+	      "offsetof(struct rogue_mips_state, bad_register) incorrect");
+static_assert(offsetof(struct rogue_mips_state, epc) == 20,
+	      "offsetof(struct rogue_mips_state, epc) incorrect");
+static_assert(offsetof(struct rogue_mips_state, sp) == 24,
+	      "offsetof(struct rogue_mips_state, sp) incorrect");
+static_assert(offsetof(struct rogue_mips_state, debug) == 28,
+	      "offsetof(struct rogue_mips_state, debug) incorrect");
+static_assert(offsetof(struct rogue_mips_state, depc) == 32,
+	      "offsetof(struct rogue_mips_state, depc) incorrect");
+static_assert(offsetof(struct rogue_mips_state, bad_instr) == 36,
+	      "offsetof(struct rogue_mips_state, bad_instr) incorrect");
+static_assert(offsetof(struct rogue_mips_state, unmapped_address) == 40,
+	      "offsetof(struct rogue_mips_state, unmapped_address) incorrect");
+static_assert(offsetof(struct rogue_mips_state, tlb) == 44,
+	      "offsetof(struct rogue_mips_state, tlb) incorrect");
+static_assert(offsetof(struct rogue_mips_state, remap) == 300,
+	      "offsetof(struct rogue_mips_state, remap) incorrect");
+static_assert(sizeof(struct rogue_mips_state) == 684,
+	      "struct rogue_mips_state is incorrect size");
+
+#endif /* PVR_ROGUE_MIPS_CHECK_H */
diff --git a/drivers/gpu/drm/imagination/pvr_rogue_mmu_defs.h b/drivers/gpu/drm/imagination/pvr_rogue_mmu_defs.h
new file mode 100644
index 000000000000..cd28cded2741
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_rogue_mmu_defs.h
@@ -0,0 +1,136 @@
+/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+/*  *** Autogenerated C -- do not edit ***  */
+
+#ifndef PVR_ROGUE_MMU_DEFS_H
+#define PVR_ROGUE_MMU_DEFS_H
+
+#define ROGUE_MMU_DEFS_REVISION 0
+
+#define ROGUE_BIF_DM_ENCODING_VERTEX (0x00000000U)
+#define ROGUE_BIF_DM_ENCODING_PIXEL (0x00000001U)
+#define ROGUE_BIF_DM_ENCODING_COMPUTE (0x00000002U)
+#define ROGUE_BIF_DM_ENCODING_TLA (0x00000003U)
+#define ROGUE_BIF_DM_ENCODING_PB_VCE (0x00000004U)
+#define ROGUE_BIF_DM_ENCODING_PB_TE (0x00000005U)
+#define ROGUE_BIF_DM_ENCODING_META (0x00000007U)
+#define ROGUE_BIF_DM_ENCODING_HOST (0x00000008U)
+#define ROGUE_BIF_DM_ENCODING_PM_ALIST (0x00000009U)
+
+#define ROGUE_MMUCTRL_VADDR_PC_INDEX_SHIFT (30U)
+#define ROGUE_MMUCTRL_VADDR_PC_INDEX_CLRMSK (0xFFFFFF003FFFFFFFULL)
+#define ROGUE_MMUCTRL_VADDR_PD_INDEX_SHIFT (21U)
+#define ROGUE_MMUCTRL_VADDR_PD_INDEX_CLRMSK (0xFFFFFFFFC01FFFFFULL)
+#define ROGUE_MMUCTRL_VADDR_PT_INDEX_SHIFT (12U)
+#define ROGUE_MMUCTRL_VADDR_PT_INDEX_CLRMSK (0xFFFFFFFFFFE00FFFULL)
+
+#define ROGUE_MMUCTRL_ENTRIES_PC_VALUE (0x00000400U)
+#define ROGUE_MMUCTRL_ENTRIES_PD_VALUE (0x00000200U)
+#define ROGUE_MMUCTRL_ENTRIES_PT_VALUE (0x00000200U)
+
+#define ROGUE_MMUCTRL_ENTRY_SIZE_PC_VALUE (0x00000020U)
+#define ROGUE_MMUCTRL_ENTRY_SIZE_PD_VALUE (0x00000040U)
+#define ROGUE_MMUCTRL_ENTRY_SIZE_PT_VALUE (0x00000040U)
+
+#define ROGUE_MMUCTRL_PAGE_SIZE_MASK (0x00000007U)
+#define ROGUE_MMUCTRL_PAGE_SIZE_4KB (0x00000000U)
+#define ROGUE_MMUCTRL_PAGE_SIZE_16KB (0x00000001U)
+#define ROGUE_MMUCTRL_PAGE_SIZE_64KB (0x00000002U)
+#define ROGUE_MMUCTRL_PAGE_SIZE_256KB (0x00000003U)
+#define ROGUE_MMUCTRL_PAGE_SIZE_1MB (0x00000004U)
+#define ROGUE_MMUCTRL_PAGE_SIZE_2MB (0x00000005U)
+
+#define ROGUE_MMUCTRL_PAGE_4KB_RANGE_SHIFT (12U)
+#define ROGUE_MMUCTRL_PAGE_4KB_RANGE_CLRMSK (0xFFFFFF0000000FFFULL)
+
+#define ROGUE_MMUCTRL_PAGE_16KB_RANGE_SHIFT (14U)
+#define ROGUE_MMUCTRL_PAGE_16KB_RANGE_CLRMSK (0xFFFFFF0000003FFFULL)
+
+#define ROGUE_MMUCTRL_PAGE_64KB_RANGE_SHIFT (16U)
+#define ROGUE_MMUCTRL_PAGE_64KB_RANGE_CLRMSK (0xFFFFFF000000FFFFULL)
+
+#define ROGUE_MMUCTRL_PAGE_256KB_RANGE_SHIFT (18U)
+#define ROGUE_MMUCTRL_PAGE_256KB_RANGE_CLRMSK (0xFFFFFF000003FFFFULL)
+
+#define ROGUE_MMUCTRL_PAGE_1MB_RANGE_SHIFT (20U)
+#define ROGUE_MMUCTRL_PAGE_1MB_RANGE_CLRMSK (0xFFFFFF00000FFFFFULL)
+
+#define ROGUE_MMUCTRL_PAGE_2MB_RANGE_SHIFT (21U)
+#define ROGUE_MMUCTRL_PAGE_2MB_RANGE_CLRMSK (0xFFFFFF00001FFFFFULL)
+
+#define ROGUE_MMUCTRL_PT_BASE_4KB_RANGE_SHIFT (12U)
+#define ROGUE_MMUCTRL_PT_BASE_4KB_RANGE_CLRMSK (0xFFFFFF0000000FFFULL)
+
+#define ROGUE_MMUCTRL_PT_BASE_16KB_RANGE_SHIFT (10U)
+#define ROGUE_MMUCTRL_PT_BASE_16KB_RANGE_CLRMSK (0xFFFFFF00000003FFULL)
+
+#define ROGUE_MMUCTRL_PT_BASE_64KB_RANGE_SHIFT (8U)
+#define ROGUE_MMUCTRL_PT_BASE_64KB_RANGE_CLRMSK (0xFFFFFF00000000FFULL)
+
+#define ROGUE_MMUCTRL_PT_BASE_256KB_RANGE_SHIFT (6U)
+#define ROGUE_MMUCTRL_PT_BASE_256KB_RANGE_CLRMSK (0xFFFFFF000000003FULL)
+
+#define ROGUE_MMUCTRL_PT_BASE_1MB_RANGE_SHIFT (5U)
+#define ROGUE_MMUCTRL_PT_BASE_1MB_RANGE_CLRMSK (0xFFFFFF000000001FULL)
+
+#define ROGUE_MMUCTRL_PT_BASE_2MB_RANGE_SHIFT (5U)
+#define ROGUE_MMUCTRL_PT_BASE_2MB_RANGE_CLRMSK (0xFFFFFF000000001FULL)
+
+#define ROGUE_MMUCTRL_PT_DATA_PM_META_PROTECT_SHIFT (62U)
+#define ROGUE_MMUCTRL_PT_DATA_PM_META_PROTECT_CLRMSK (0xBFFFFFFFFFFFFFFFULL)
+#define ROGUE_MMUCTRL_PT_DATA_PM_META_PROTECT_EN (0x4000000000000000ULL)
+#define ROGUE_MMUCTRL_PT_DATA_VP_PAGE_HI_SHIFT (40U)
+#define ROGUE_MMUCTRL_PT_DATA_VP_PAGE_HI_CLRMSK (0xC00000FFFFFFFFFFULL)
+#define ROGUE_MMUCTRL_PT_DATA_PAGE_SHIFT (12U)
+#define ROGUE_MMUCTRL_PT_DATA_PAGE_CLRMSK (0xFFFFFF0000000FFFULL)
+#define ROGUE_MMUCTRL_PT_DATA_VP_PAGE_LO_SHIFT (6U)
+#define ROGUE_MMUCTRL_PT_DATA_VP_PAGE_LO_CLRMSK (0xFFFFFFFFFFFFF03FULL)
+#define ROGUE_MMUCTRL_PT_DATA_ENTRY_PENDING_SHIFT (5U)
+#define ROGUE_MMUCTRL_PT_DATA_ENTRY_PENDING_CLRMSK (0xFFFFFFFFFFFFFFDFULL)
+#define ROGUE_MMUCTRL_PT_DATA_ENTRY_PENDING_EN (0x0000000000000020ULL)
+#define ROGUE_MMUCTRL_PT_DATA_PM_SRC_SHIFT (4U)
+#define ROGUE_MMUCTRL_PT_DATA_PM_SRC_CLRMSK (0xFFFFFFFFFFFFFFEFULL)
+#define ROGUE_MMUCTRL_PT_DATA_PM_SRC_EN (0x0000000000000010ULL)
+#define ROGUE_MMUCTRL_PT_DATA_SLC_BYPASS_CTRL_SHIFT (3U)
+#define ROGUE_MMUCTRL_PT_DATA_SLC_BYPASS_CTRL_CLRMSK (0xFFFFFFFFFFFFFFF7ULL)
+#define ROGUE_MMUCTRL_PT_DATA_SLC_BYPASS_CTRL_EN (0x0000000000000008ULL)
+#define ROGUE_MMUCTRL_PT_DATA_CC_SHIFT (2U)
+#define ROGUE_MMUCTRL_PT_DATA_CC_CLRMSK (0xFFFFFFFFFFFFFFFBULL)
+#define ROGUE_MMUCTRL_PT_DATA_CC_EN (0x0000000000000004ULL)
+#define ROGUE_MMUCTRL_PT_DATA_READ_ONLY_SHIFT (1U)
+#define ROGUE_MMUCTRL_PT_DATA_READ_ONLY_CLRMSK (0xFFFFFFFFFFFFFFFDULL)
+#define ROGUE_MMUCTRL_PT_DATA_READ_ONLY_EN (0x0000000000000002ULL)
+#define ROGUE_MMUCTRL_PT_DATA_VALID_SHIFT (0U)
+#define ROGUE_MMUCTRL_PT_DATA_VALID_CLRMSK (0xFFFFFFFFFFFFFFFEULL)
+#define ROGUE_MMUCTRL_PT_DATA_VALID_EN (0x0000000000000001ULL)
+
+#define ROGUE_MMUCTRL_PD_DATA_ENTRY_PENDING_SHIFT (40U)
+#define ROGUE_MMUCTRL_PD_DATA_ENTRY_PENDING_CLRMSK (0xFFFFFEFFFFFFFFFFULL)
+#define ROGUE_MMUCTRL_PD_DATA_ENTRY_PENDING_EN (0x0000010000000000ULL)
+#define ROGUE_MMUCTRL_PD_DATA_PT_BASE_SHIFT (5U)
+#define ROGUE_MMUCTRL_PD_DATA_PT_BASE_CLRMSK (0xFFFFFF000000001FULL)
+#define ROGUE_MMUCTRL_PD_DATA_PAGE_SIZE_SHIFT (1U)
+#define ROGUE_MMUCTRL_PD_DATA_PAGE_SIZE_CLRMSK (0xFFFFFFFFFFFFFFF1ULL)
+#define ROGUE_MMUCTRL_PD_DATA_PAGE_SIZE_4KB (0x0000000000000000ULL)
+#define ROGUE_MMUCTRL_PD_DATA_PAGE_SIZE_16KB (0x0000000000000002ULL)
+#define ROGUE_MMUCTRL_PD_DATA_PAGE_SIZE_64KB (0x0000000000000004ULL)
+#define ROGUE_MMUCTRL_PD_DATA_PAGE_SIZE_256KB (0x0000000000000006ULL)
+#define ROGUE_MMUCTRL_PD_DATA_PAGE_SIZE_1MB (0x0000000000000008ULL)
+#define ROGUE_MMUCTRL_PD_DATA_PAGE_SIZE_2MB (0x000000000000000aULL)
+#define ROGUE_MMUCTRL_PD_DATA_VALID_SHIFT (0U)
+#define ROGUE_MMUCTRL_PD_DATA_VALID_CLRMSK (0xFFFFFFFFFFFFFFFEULL)
+#define ROGUE_MMUCTRL_PD_DATA_VALID_EN (0x0000000000000001ULL)
+
+#define ROGUE_MMUCTRL_PC_DATA_PD_BASE_SHIFT (4U)
+#define ROGUE_MMUCTRL_PC_DATA_PD_BASE_CLRMSK (0x0000000FU)
+#define ROGUE_MMUCTRL_PC_DATA_PD_BASE_ALIGNSHIFT (12U)
+#define ROGUE_MMUCTRL_PC_DATA_PD_BASE_ALIGNSIZE (4096U)
+#define ROGUE_MMUCTRL_PC_DATA_ENTRY_PENDING_SHIFT (1U)
+#define ROGUE_MMUCTRL_PC_DATA_ENTRY_PENDING_CLRMSK (0xFFFFFFFDU)
+#define ROGUE_MMUCTRL_PC_DATA_ENTRY_PENDING_EN (0x00000002U)
+#define ROGUE_MMUCTRL_PC_DATA_VALID_SHIFT (0U)
+#define ROGUE_MMUCTRL_PC_DATA_VALID_CLRMSK (0xFFFFFFFEU)
+#define ROGUE_MMUCTRL_PC_DATA_VALID_EN (0x00000001U)
+
+#endif /* PVR_ROGUE_MMU_DEFS_H */
-- 
2.41.0


^ permalink raw reply related	[flat|nested] 77+ messages in thread

* [PATCH v5 07/17] drm/imagination: Add GPU ID parsing and firmware loading
  2023-08-16  8:25 ` Sarah Walker
@ 2023-08-16  8:25   ` Sarah Walker
  -1 siblings, 0 replies; 77+ messages in thread
From: Sarah Walker @ 2023-08-16  8:25 UTC (permalink / raw)
  To: dri-devel
  Cc: matthew.brost, luben.tuikov, tzimmermann, linux-kernel, mripard,
	afd, Matt Coster, boris.brezillon, dakr, donald.robson, hns,
	christian.koenig, faith.ekstrand

Read the GPU ID register at probe time and select the correct
features/quirks/enhancements. Use the GPU ID to form the firmware
file name and load the firmware.

The features/quirks/enhancements arrays are currently hardcoded in
the driver for the supported GPUs. We are looking at moving this
information to the firmware image.
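
As an example of the naming scheme, a GPU reporting BVNC 4.40.2.51 will
result in a request for "powervr/rogue_4.40.2.51_v1.fw", where the trailing
"v1" is the supported firmware major version (PVR_FW_VERSION_MAJOR).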

Changes since v4:
- Retrieve device information from firmware header
- Pull forward firmware header parsing from FW infrastructure patch
- Use devm_add_action_or_reset to release firmware

Changes since v3:
- Use drm_dev_{enter,exit}

Co-developed-by: Frank Binns <frank.binns@imgtec.com>
Signed-off-by: Frank Binns <frank.binns@imgtec.com>
Co-developed-by: Matt Coster <matt.coster@imgtec.com>
Signed-off-by: Matt Coster <matt.coster@imgtec.com>
Co-developed-by: Donald Robson <donald.robson@imgtec.com>
Signed-off-by: Donald Robson <donald.robson@imgtec.com>
Signed-off-by: Sarah Walker <sarah.walker@imgtec.com>
---
 drivers/gpu/drm/imagination/Makefile          |   2 +
 drivers/gpu/drm/imagination/pvr_device.c      | 323 ++++++++++-
 drivers/gpu/drm/imagination/pvr_device.h      | 220 ++++++++
 drivers/gpu/drm/imagination/pvr_device_info.c | 253 +++++++++
 drivers/gpu/drm/imagination/pvr_device_info.h | 185 +++++++
 drivers/gpu/drm/imagination/pvr_drv.c         | 521 +++++++++++++++++-
 drivers/gpu/drm/imagination/pvr_drv.h         | 107 ++++
 drivers/gpu/drm/imagination/pvr_fw.c          | 145 +++++
 drivers/gpu/drm/imagination/pvr_fw.h          |  34 ++
 drivers/gpu/drm/imagination/pvr_fw_info.h     | 135 +++++
 10 files changed, 1923 insertions(+), 2 deletions(-)
 create mode 100644 drivers/gpu/drm/imagination/pvr_device_info.c
 create mode 100644 drivers/gpu/drm/imagination/pvr_device_info.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_fw.c
 create mode 100644 drivers/gpu/drm/imagination/pvr_fw.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_fw_info.h

diff --git a/drivers/gpu/drm/imagination/Makefile b/drivers/gpu/drm/imagination/Makefile
index b4aa190c9d4a..9e144ff2742b 100644
--- a/drivers/gpu/drm/imagination/Makefile
+++ b/drivers/gpu/drm/imagination/Makefile
@@ -5,6 +5,8 @@ subdir-ccflags-y := -I$(srctree)/$(src)
 
 powervr-y := \
 	pvr_device.o \
+	pvr_device_info.o \
 	pvr_drv.o \
+	pvr_fw.o
 
 obj-$(CONFIG_DRM_POWERVR) += powervr.o
diff --git a/drivers/gpu/drm/imagination/pvr_device.c b/drivers/gpu/drm/imagination/pvr_device.c
index cef3511c0c42..b1fae182c4f6 100644
--- a/drivers/gpu/drm/imagination/pvr_device.c
+++ b/drivers/gpu/drm/imagination/pvr_device.c
@@ -2,19 +2,31 @@
 /* Copyright (c) 2023 Imagination Technologies Ltd. */
 
 #include "pvr_device.h"
+#include "pvr_device_info.h"
+
+#include "pvr_fw.h"
+#include "pvr_rogue_cr_defs.h"
 
 #include <drm/drm_print.h>
 
+#include <linux/bitfield.h>
 #include <linux/clk.h>
 #include <linux/compiler_attributes.h>
 #include <linux/compiler_types.h>
 #include <linux/dma-mapping.h>
 #include <linux/err.h>
+#include <linux/firmware.h>
 #include <linux/gfp.h>
+#include <linux/interrupt.h>
 #include <linux/platform_device.h>
+#include <linux/pm_runtime.h>
 #include <linux/slab.h>
 #include <linux/stddef.h>
 #include <linux/types.h>
+#include <linux/workqueue.h>
+
+/* Major number for the supported version of the firmware. */
+#define PVR_FW_VERSION_MAJOR 1
 
 /**
  * pvr_device_reg_init() - Initialize kernel access to a PowerVR device's
@@ -100,6 +112,209 @@ static int pvr_device_clk_init(struct pvr_device *pvr_dev)
 	return 0;
 }
 
+/**
+ * pvr_build_firmware_filename() - Construct a PowerVR firmware filename
+ * @pvr_dev: Target PowerVR device.
+ * @base: First part of the filename.
+ * @major: Major version number.
+ *
+ * A PowerVR firmware filename consists of three parts separated by underscores
+ * (``'_'``) along with a '.fw' file suffix. The first part is the exact value
+ * of @base, the second part is the hardware version string derived from @pvr_dev
+ * and the final part is the firmware version number constructed from @major with
+ * a 'v' prefix, e.g. powervr/rogue_4.40.2.51_v1.fw.
+ *
+ * The returned string will have been slab allocated and must be freed with
+ * kfree().
+ *
+ * Return:
+ *  * The constructed filename on success, or
+ *  * Any error returned by kasprintf().
+ */
+static char *
+pvr_build_firmware_filename(struct pvr_device *pvr_dev, const char *base,
+			    u8 major)
+{
+	struct pvr_gpu_id *gpu_id = &pvr_dev->gpu_id;
+
+	return kasprintf(GFP_KERNEL, "%s_%d.%d.%d.%d_v%d.fw", base, gpu_id->b,
+			 gpu_id->v, gpu_id->n, gpu_id->c, major);
+}
+
+static void
+pvr_release_firmware(void *data)
+{
+	struct pvr_device *pvr_dev = data;
+
+	release_firmware(pvr_dev->fw_dev.firmware);
+}
+
+/**
+ * pvr_request_firmware() - Load firmware for a PowerVR device
+ * @pvr_dev: Target PowerVR device.
+ *
+ * See pvr_build_firmware_filename() for details on firmware file naming.
+ *
+ * Return:
+ *  * 0 on success,
+ *  * Any error returned by pvr_build_firmware_filename(), or
+ *  * Any error returned by request_firmware().
+ */
+static int
+pvr_request_firmware(struct pvr_device *pvr_dev)
+{
+	struct drm_device *drm_dev = &pvr_dev->base;
+	char *filename;
+	const struct firmware *fw;
+	int err;
+
+	filename = pvr_build_firmware_filename(pvr_dev, "powervr/rogue",
+					       PVR_FW_VERSION_MAJOR);
+	if (IS_ERR(filename))
+		return PTR_ERR(filename);
+
+	/*
+	 * This function takes a copy of &filename, meaning we can free our
+	 * instance before returning.
+	 */
+	err = request_firmware(&fw, filename, pvr_dev->base.dev);
+	if (err) {
+		drm_err(drm_dev, "failed to load firmware %s (err=%d)\n",
+			filename, err);
+		goto err_free_filename;
+	}
+
+	drm_info(drm_dev, "loaded firmware %s\n", filename);
+	kfree(filename);
+
+	pvr_dev->fw_dev.firmware = fw;
+
+	return devm_add_action_or_reset(drm_dev->dev, pvr_release_firmware, pvr_dev);
+
+err_free_filename:
+	kfree(filename);
+
+	return err;
+}
+
+/**
+ * pvr_load_gpu_id() - Load a PowerVR device's GPU ID (BVNC) from control registers.
+ * @pvr_dev: Target PowerVR device.
+ *
+ * Sets &struct pvr_device.gpu_id from the values read from the control
+ * registers.
+ */
+static void
+pvr_load_gpu_id(struct pvr_device *pvr_dev)
+{
+	struct pvr_gpu_id *gpu_id = &pvr_dev->gpu_id;
+	u64 bvnc;
+
+	/*
+	 * Try reading the BVNC using the newer (cleaner) method first. If the
+	 * B value is zero, fall back to the older method.
+	 */
+	bvnc = pvr_cr_read64(pvr_dev, ROGUE_CR_CORE_ID__PBVNC);
+
+	gpu_id->b = PVR_CR_FIELD_GET(bvnc, CORE_ID__PBVNC__BRANCH_ID);
+	if (gpu_id->b != 0) {
+		gpu_id->v = PVR_CR_FIELD_GET(bvnc, CORE_ID__PBVNC__VERSION_ID);
+		gpu_id->n = PVR_CR_FIELD_GET(bvnc, CORE_ID__PBVNC__NUMBER_OF_SCALABLE_UNITS);
+		gpu_id->c = PVR_CR_FIELD_GET(bvnc, CORE_ID__PBVNC__CONFIG_ID);
+	} else {
+		u32 core_rev = pvr_cr_read32(pvr_dev, ROGUE_CR_CORE_REVISION);
+		u32 core_id = pvr_cr_read32(pvr_dev, ROGUE_CR_CORE_ID);
+		u16 core_id_config = PVR_CR_FIELD_GET(core_id, CORE_ID_CONFIG);
+
+		gpu_id->b = PVR_CR_FIELD_GET(core_rev, CORE_REVISION_MAJOR);
+		gpu_id->v = PVR_CR_FIELD_GET(core_rev, CORE_REVISION_MINOR);
+		gpu_id->n = FIELD_GET(0xFF00, core_id_config);
+		gpu_id->c = FIELD_GET(0x00FF, core_id_config);
+	}
+}
+
+/**
+ * pvr_set_dma_info() - Set PowerVR device DMA information
+ * @pvr_dev: Target PowerVR device.
+ *
+ * Sets the DMA mask and max segment size for the PowerVR device.
+ *
+ * Return:
+ *  * 0 on success,
+ *  * Any error returned by PVR_FEATURE_VALUE(), or
+ *  * Any error returned by dma_set_mask().
+ */
+static int
+pvr_set_dma_info(struct pvr_device *pvr_dev)
+{
+	struct drm_device *drm_dev = from_pvr_device(pvr_dev);
+	u16 phys_bus_width;
+	int err;
+
+	err = PVR_FEATURE_VALUE(pvr_dev, phys_bus_width, &phys_bus_width);
+	if (err) {
+		drm_err(drm_dev, "Failed to get device physical bus width\n");
+		return err;
+	}
+
+	err = dma_set_mask(drm_dev->dev, DMA_BIT_MASK(phys_bus_width));
+	if (err) {
+		drm_err(drm_dev, "Failed to set DMA mask (err=%d)\n", err);
+		return err;
+	}
+
+	dma_set_max_seg_size(drm_dev->dev, UINT_MAX);
+
+	return 0;
+}
+
+/**
+ * pvr_device_gpu_init() - GPU-specific initialization for a PowerVR device
+ * @pvr_dev: Target PowerVR device.
+ *
+ * The following steps are taken to ensure the device is ready:
+ *
+ *  1. Read the hardware version information from control registers,
+ *  2. Initialise the hardware feature information,
+ *  3. Setup the device DMA information,
+ *  4. Setup the device-scoped memory context, and
+ *  5. Load firmware into the device.
+ *
+ * Return:
+ *  * 0 on success,
+ *  * -%EINVAL if the GPU is not supported,
+ *  * Any error returned by pvr_set_dma_info(),
+ *  * Any error returned by pvr_memory_context_init(), or
+ *  * Any error returned by pvr_request_firmware().
+ */
+static int
+pvr_device_gpu_init(struct pvr_device *pvr_dev)
+{
+	int err;
+
+	pvr_load_gpu_id(pvr_dev);
+
+	err = pvr_request_firmware(pvr_dev);
+	if (err)
+		return err;
+
+	err = pvr_fw_validate_init_device_info(pvr_dev);
+	if (err)
+		return err;
+
+	if (PVR_HAS_FEATURE(pvr_dev, meta))
+		pvr_dev->fw_dev.processor_type = PVR_FW_PROCESSOR_TYPE_META;
+	else if (PVR_HAS_FEATURE(pvr_dev, mips))
+		pvr_dev->fw_dev.processor_type = PVR_FW_PROCESSOR_TYPE_MIPS;
+	else if (PVR_HAS_FEATURE(pvr_dev, riscv_fw_processor))
+		pvr_dev->fw_dev.processor_type = PVR_FW_PROCESSOR_TYPE_RISCV;
+	else
+		return -EINVAL;
+
+	return pvr_set_dma_info(pvr_dev);
+}
+
 /**
  * pvr_device_init() - Initialize a PowerVR device
  * @pvr_dev: Target PowerVR device.
@@ -130,7 +345,12 @@ pvr_device_init(struct pvr_device *pvr_dev)
 		return err;
 
 	/* Map the control registers into memory. */
-	return pvr_device_reg_init(pvr_dev);
+	err = pvr_device_reg_init(pvr_dev);
+	if (err)
+		return err;
+
+	/* Perform GPU-specific initialization steps. */
+	return pvr_device_gpu_init(pvr_dev);
 }
 
 /**
@@ -145,3 +365,104 @@ pvr_device_fini(struct pvr_device *pvr_dev)
 	 * the initialization stages in pvr_device_init().
 	 */
 }
+
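+/**
+ * pvr_device_has_uapi_quirk() - Look up whether a quirk exposed via the UAPI
+ *                               is present on a device
+ * @pvr_dev: Device pointer.
+ * @quirk: Quirk number to look up.
+ *
+ * Returns:
+ *  * %true if the quirk is present on the device, or
+ *  * %false if the quirk is not present, or is not exposed via the UAPI.
+ */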
+bool
+pvr_device_has_uapi_quirk(struct pvr_device *pvr_dev, u32 quirk)
+{
+	switch (quirk) {
+	case 47217:
+		return PVR_HAS_QUIRK(pvr_dev, 47217);
+	case 48545:
+		return PVR_HAS_QUIRK(pvr_dev, 48545);
+	case 49927:
+		return PVR_HAS_QUIRK(pvr_dev, 49927);
+	case 51764:
+		return PVR_HAS_QUIRK(pvr_dev, 51764);
+	case 62269:
+		return PVR_HAS_QUIRK(pvr_dev, 62269);
+	default:
+		return false;
+	}
+}
+
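+/**
+ * pvr_device_has_uapi_enhancement() - Look up whether an enhancement exposed
+ *                                     via the UAPI is present on a device
+ * @pvr_dev: Device pointer.
+ * @enhancement: Enhancement number to look up.
+ *
+ * Returns:
+ *  * %true if the enhancement is present on the device, or
+ *  * %false if the enhancement is not present, or is not exposed via the UAPI.
+ */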
+bool
+pvr_device_has_uapi_enhancement(struct pvr_device *pvr_dev, u32 enhancement)
+{
+	switch (enhancement) {
+	case 35421:
+		return PVR_HAS_ENHANCEMENT(pvr_dev, 35421);
+	case 42064:
+		return PVR_HAS_ENHANCEMENT(pvr_dev, 42064);
+	default:
+		return false;
+	}
+}
+
+/**
+ * pvr_device_has_feature() - Look up device feature based on feature definition
+ * @pvr_dev: Device pointer.
+ * @feature: Feature to look up. Should be one of %PVR_FEATURE_*.
+ *
+ * Returns:
+ *  * %true if feature is present on device, or
+ *  * %false if feature is not present on device.
+ */
+bool
+pvr_device_has_feature(struct pvr_device *pvr_dev, u32 feature)
+{
+	switch (feature) {
+	case PVR_FEATURE_CLUSTER_GROUPING:
+		return PVR_HAS_FEATURE(pvr_dev, cluster_grouping);
+
+	case PVR_FEATURE_COMPUTE_MORTON_CAPABLE:
+		return PVR_HAS_FEATURE(pvr_dev, compute_morton_capable);
+
+	case PVR_FEATURE_FB_CDC_V4:
+		return PVR_HAS_FEATURE(pvr_dev, fb_cdc_v4);
+
+	case PVR_FEATURE_GPU_MULTICORE_SUPPORT:
+		return PVR_HAS_FEATURE(pvr_dev, gpu_multicore_support);
+
+	case PVR_FEATURE_ISP_ZLS_D24_S8_PACKING_OGL_MODE:
+		return PVR_HAS_FEATURE(pvr_dev, isp_zls_d24_s8_packing_ogl_mode);
+
+	case PVR_FEATURE_S7_TOP_INFRASTRUCTURE:
+		return PVR_HAS_FEATURE(pvr_dev, s7_top_infrastructure);
+
+	case PVR_FEATURE_TESSELLATION:
+		return PVR_HAS_FEATURE(pvr_dev, tessellation);
+
+	case PVR_FEATURE_TPU_DM_GLOBAL_REGISTERS:
+		return PVR_HAS_FEATURE(pvr_dev, tpu_dm_global_registers);
+
+	case PVR_FEATURE_VDM_DRAWINDIRECT:
+		return PVR_HAS_FEATURE(pvr_dev, vdm_drawindirect);
+
+	case PVR_FEATURE_VDM_OBJECT_LEVEL_LLS:
+		return PVR_HAS_FEATURE(pvr_dev, vdm_object_level_lls);
+
+	case PVR_FEATURE_ZLS_SUBTILE:
+		return PVR_HAS_FEATURE(pvr_dev, zls_subtile);
+
+	/* Derived features. */
+	case PVR_FEATURE_CDM_USER_MODE_QUEUE: {
+		u8 cdm_control_stream_format = 0;
+
+		PVR_FEATURE_VALUE(pvr_dev, cdm_control_stream_format, &cdm_control_stream_format);
+		return (cdm_control_stream_format >= 2 && cdm_control_stream_format <= 4);
+	}
+
+	case PVR_FEATURE_REQUIRES_FB_CDC_ZLS_SETUP:
+		if (PVR_HAS_FEATURE(pvr_dev, fbcdc_algorithm)) {
+			u8 fbcdc_algorithm = 0;
+
+			PVR_FEATURE_VALUE(pvr_dev, fbcdc_algorithm, &fbcdc_algorithm);
+			return (fbcdc_algorithm < 3 || PVR_HAS_FEATURE(pvr_dev, fb_cdc_v4));
+		}
+		return false;
+
+	default:
+		WARN(true, "Looking up undefined feature %u\n", feature);
+		return false;
+	}
+}
diff --git a/drivers/gpu/drm/imagination/pvr_device.h b/drivers/gpu/drm/imagination/pvr_device.h
index 41a00d3d8220..65ece87e2405 100644
--- a/drivers/gpu/drm/imagination/pvr_device.h
+++ b/drivers/gpu/drm/imagination/pvr_device.h
@@ -4,6 +4,9 @@
 #ifndef PVR_DEVICE_H
 #define PVR_DEVICE_H
 
+#include "pvr_device_info.h"
+#include "pvr_fw.h"
+
 #include <drm/drm_device.h>
 #include <drm/drm_file.h>
 #include <drm/drm_mm.h>
@@ -28,6 +31,26 @@ struct clk;
 /* Forward declaration from <linux/firmware.h>. */
 struct firmware;
 
+/**
+ * struct pvr_gpu_id - Hardware GPU ID information for a PowerVR device
+ * @b: Branch ID.
+ * @v: Version ID.
+ * @n: Number of scalable units.
+ * @c: Config ID.
+ */
+struct pvr_gpu_id {
+	u16 b, v, n, c;
+};
+
+/**
+ * struct pvr_fw_version - Firmware version information
+ * @major: Major version number.
+ * @minor: Minor version number.
+ */
+struct pvr_fw_version {
+	u16 major, minor;
+};
+
 /**
  * struct pvr_device - powervr-specific wrapper for &struct drm_device
  */
@@ -40,6 +63,35 @@ struct pvr_device {
 	 */
 	struct drm_device base;
 
+	/** @gpu_id: GPU ID detected at runtime. */
+	struct pvr_gpu_id gpu_id;
+
+	/**
+	 * @features: Hardware feature information.
+	 *
+	 * Do not access this member directly, instead use PVR_HAS_FEATURE()
+	 * or PVR_FEATURE_VALUE() macros.
+	 */
+	struct pvr_device_features features;
+
+	/**
+	 * @quirks: Hardware quirk information.
+	 *
+	 * Do not access this member directly, instead use PVR_HAS_QUIRK().
+	 */
+	struct pvr_device_quirks quirks;
+
+	/**
+	 * @enhancements: Hardware enhancement information.
+	 *
+	 * Do not access this member directly, instead use
+	 * PVR_HAS_ENHANCEMENT().
+	 */
+	struct pvr_device_enhancements enhancements;
+
+	/** @fw_version: Firmware version detected at runtime. */
+	struct pvr_fw_version fw_version;
+
 	/**
 	 * @regs: Device control registers.
 	 *
@@ -70,6 +122,9 @@ struct pvr_device {
 	 * Interface (MEMIF). If present, this needs to be enabled/disabled together with @core_clk.
 	 */
 	struct clk *mem_clk;
+
+	/** @fw_dev: Firmware related data. */
+	struct pvr_fw_device fw_dev;
 };
 
 /**
@@ -92,6 +147,76 @@ struct pvr_file {
 	struct pvr_device *pvr_dev;
 };
 
+/**
+ * PVR_HAS_FEATURE() - Tests whether a PowerVR device has a given feature
+ * @pvr_dev: [IN] Target PowerVR device.
+ * @feature: [IN] Hardware feature name.
+ *
+ * Feature names are derived from those found in &struct pvr_device_features by
+ * dropping the 'has_' prefix, which is applied by this macro.
+ *
+ * Return:
+ *  * true if the named feature is present in the hardware
+ *  * false if the named feature is not present in the hardware
+ */
+#define PVR_HAS_FEATURE(pvr_dev, feature) ((pvr_dev)->features.has_##feature)
+
+/**
+ * PVR_FEATURE_VALUE() - Gets a PowerVR device feature value
+ * @pvr_dev: [IN] Target PowerVR device.
+ * @feature: [IN] Feature name.
+ * @value_out: [OUT] Feature value.
+ *
+ * This macro will get a feature value for those features that have values.
+ * If the feature is not present, nothing will be stored to @value_out.
+ *
+ * Feature names are derived from those found in &struct pvr_device_features by
+ * dropping the 'has_' prefix.
+ *
+ * Return:
+ *  * 0 on success, or
+ *  * -%EINVAL if the named feature is not present in the hardware
+ */
+#define PVR_FEATURE_VALUE(pvr_dev, feature, value_out)             \
+	({                                                         \
+		struct pvr_device *_pvr_dev = pvr_dev;             \
+		int _ret = -EINVAL;                                \
+		if (_pvr_dev->features.has_##feature) {            \
+			*(value_out) = _pvr_dev->features.feature; \
+			_ret = 0;                                  \
+		}                                                  \
+		_ret;                                              \
+	})
+
+/**
+ * PVR_HAS_QUIRK() - Tests whether a physical device has a given quirk
+ * @pvr_dev: [IN] Target PowerVR device.
+ * @quirk: [IN] Hardware quirk name.
+ *
+ * Quirk numbers are derived from those found in #pvr_device_quirks by
+ * dropping the 'has_brn' prefix, which is applied by this macro.
+ *
+ * Return:
+ *  * true if the quirk is present in the hardware, or
+ *  * false if the quirk is not present in the hardware.
+ */
+#define PVR_HAS_QUIRK(pvr_dev, quirk) ((pvr_dev)->quirks.has_brn##quirk)
+
+/**
+ * PVR_HAS_ENHANCEMENT() - Tests whether a physical device has a given
+ *                         enhancement
+ * @pvr_dev: [IN] Target PowerVR device.
+ * @enhancement: [IN] Hardware enhancement name.
+ *
+ * Enhancement numbers are derived from those found in #pvr_device_enhancements
+ * by dropping the 'has_ern' prefix, which is applied by this macro.
+ *
+ * Return:
+ *  * true if the enhancement is present in the hardware, or
+ *  * false if the enhancement is not present in the hardware.
+ */
+#define PVR_HAS_ENHANCEMENT(pvr_dev, enhancement) ((pvr_dev)->enhancements.has_ern##enhancement)
+
 #define from_pvr_device(pvr_dev) (&pvr_dev->base)
 
 #define to_pvr_device(drm_dev) container_of_const(drm_dev, struct pvr_device, base)
@@ -100,9 +225,77 @@ struct pvr_file {
 
 #define to_pvr_file(file) (file->driver_priv)
 
+/**
+ * PVR_PACKED_BVNC() - Packs B, V, N and C values into a 64-bit unsigned integer
+ * @b: Branch ID.
+ * @v: Version ID.
+ * @n: Number of scalable units.
+ * @c: Config ID.
+ *
+ * The packed layout is as follows:
+ *
+ *    +--------+--------+--------+-------+
+ *    | 63..48 | 47..32 | 31..16 | 15..0 |
+ *    +========+========+========+=======+
+ *    | B      | V      | N      | C     |
+ *    +--------+--------+--------+-------+
+ *
+ * pvr_gpu_id_to_packed_bvnc() should be used instead of this macro when a
+ * &struct pvr_gpu_id is available in order to ensure proper type checking.
+ *
+ * Return: Packed BVNC.
+ */
+/* clang-format off */
+#define PVR_PACKED_BVNC(b, v, n, c) \
+	((((u64)(b) & GENMASK_ULL(15, 0)) << 48) | \
+	 (((u64)(v) & GENMASK_ULL(15, 0)) << 32) | \
+	 (((u64)(n) & GENMASK_ULL(15, 0)) << 16) | \
+	 (((u64)(c) & GENMASK_ULL(15, 0)) <<  0))
+/* clang-format on */
+
+/**
+ * pvr_gpu_id_to_packed_bvnc() - Packs B, V, N and C values into a 64-bit
+ * unsigned integer
+ * @gpu_id: GPU ID.
+ *
+ * The packed layout is as follows:
+ *
+ *    +--------+--------+--------+-------+
+ *    | 63..48 | 47..32 | 31..16 | 15..0 |
+ *    +========+========+========+=======+
+ *    | B      | V      | N      | C     |
+ *    +--------+--------+--------+-------+
+ *
+ * This should be used in preference to PVR_PACKED_BVNC() when a &struct
+ * pvr_gpu_id is available in order to ensure proper type checking.
+ *
+ * Return: Packed BVNC.
+ */
+static __always_inline u64
+pvr_gpu_id_to_packed_bvnc(struct pvr_gpu_id *gpu_id)
+{
+	return PVR_PACKED_BVNC(gpu_id->b, gpu_id->v, gpu_id->n, gpu_id->c);
+}
+
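+/**
+ * packed_bvnc_to_pvr_gpu_id() - Unpacks a 64-bit packed BVNC into its
+ * individual B, V, N and C values
+ * @bvnc: Packed BVNC value, as produced by PVR_PACKED_BVNC().
+ * @gpu_id: Target GPU ID structure to populate.
+ */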
+static __always_inline void
+packed_bvnc_to_pvr_gpu_id(u64 bvnc, struct pvr_gpu_id *gpu_id)
+{
+	gpu_id->b = (bvnc & GENMASK_ULL(63, 48)) >> 48;
+	gpu_id->v = (bvnc & GENMASK_ULL(47, 32)) >> 32;
+	gpu_id->n = (bvnc & GENMASK_ULL(31, 16)) >> 16;
+	gpu_id->c = bvnc & GENMASK_ULL(15, 0);
+}
+
 int pvr_device_init(struct pvr_device *pvr_dev);
 void pvr_device_fini(struct pvr_device *pvr_dev);
 
+bool
+pvr_device_has_uapi_quirk(struct pvr_device *pvr_dev, u32 quirk);
+bool
+pvr_device_has_uapi_enhancement(struct pvr_device *pvr_dev, u32 enhancement);
+bool
+pvr_device_has_feature(struct pvr_device *pvr_dev, u32 feature);
+
 /**
  * PVR_CR_FIELD_GET() - Extract a single field from a PowerVR control register
  * @val: Value of the target register.
@@ -208,6 +401,29 @@ pvr_cr_poll_reg64(struct pvr_device *pvr_dev, u32 reg_addr, u64 reg_value,
 		(value & reg_mask) == reg_value, 0, timeout_usec);
 }
 
+/**
+ * pvr_round_up_to_cacheline_size() - Round up a provided size to be cacheline
+ *                                    aligned
+ * @pvr_dev: Target PowerVR device.
+ * @size: Initial size, in bytes.
+ *
+ * Returns:
+ *  * Size aligned to cacheline size.
+ */
+static __always_inline size_t
+pvr_round_up_to_cacheline_size(struct pvr_device *pvr_dev, size_t size)
+{
+	u16 slc_cacheline_size_bits = 0;
+	u16 slc_cacheline_size_bytes;
+
+	WARN_ON(!PVR_HAS_FEATURE(pvr_dev, slc_cache_line_size_bits));
+	PVR_FEATURE_VALUE(pvr_dev, slc_cache_line_size_bits,
+			  &slc_cacheline_size_bits);
+	slc_cacheline_size_bytes = slc_cacheline_size_bits / 8;
+
+	return round_up(size, slc_cacheline_size_bytes);
+}
+
 /**
  * DOC: IOCTL validation helpers
  *
@@ -302,4 +518,8 @@ pvr_ioctl_union_padding_check(void *instance, size_t union_offset,
 					      __union_size, __member_size);  \
 	})
 
+#define PVR_FW_PROCESSOR_TYPE_META  0
+#define PVR_FW_PROCESSOR_TYPE_MIPS  1
+#define PVR_FW_PROCESSOR_TYPE_RISCV 2
+
 #endif /* PVR_DEVICE_H */
diff --git a/drivers/gpu/drm/imagination/pvr_device_info.c b/drivers/gpu/drm/imagination/pvr_device_info.c
new file mode 100644
index 000000000000..7eabc13e20b7
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_device_info.c
@@ -0,0 +1,253 @@
+// SPDX-License-Identifier: GPL-2.0 OR MIT
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#include "pvr_device.h"
+#include "pvr_device_info.h"
+#include "pvr_rogue_fwif_dev_info.h"
+
+#include <drm/drm_print.h>
+
+#include <linux/bits.h>
+#include <linux/minmax.h>
+#include <linux/stddef.h>
+#include <linux/types.h>
+
+#define QUIRK_MAPPING(quirk) \
+	[PVR_FW_HAS_BRN_##quirk] = offsetof(struct pvr_device, quirks.has_brn##quirk)
+
+static const uintptr_t quirks_mapping[] = {
+	QUIRK_MAPPING(44079),
+	QUIRK_MAPPING(47217),
+	QUIRK_MAPPING(48492),
+	QUIRK_MAPPING(48545),
+	QUIRK_MAPPING(49927),
+	QUIRK_MAPPING(50767),
+	QUIRK_MAPPING(51764),
+	QUIRK_MAPPING(62269),
+	QUIRK_MAPPING(63142),
+	QUIRK_MAPPING(63553),
+	QUIRK_MAPPING(66011),
+};
+
+#undef QUIRK_MAPPING
+
+#define ENHANCEMENT_MAPPING(enhancement)                             \
+	[PVR_FW_HAS_ERN_##enhancement] = offsetof(struct pvr_device, \
+						  enhancements.has_ern##enhancement)
+
+static const uintptr_t enhancements_mapping[] = {
+	ENHANCEMENT_MAPPING(35421),
+	ENHANCEMENT_MAPPING(38020),
+	ENHANCEMENT_MAPPING(38748),
+	ENHANCEMENT_MAPPING(42064),
+	ENHANCEMENT_MAPPING(42290),
+	ENHANCEMENT_MAPPING(42606),
+	ENHANCEMENT_MAPPING(47025),
+	ENHANCEMENT_MAPPING(57596),
+};
+
+#undef ENHANCEMENT_MAPPING
+
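+/*
+ * Set the boolean quirk/enhancement flags in @pvr_dev for each bit set in
+ * @bitmask, using @mapping to translate bit positions into the offsets of the
+ * corresponding flags within &struct pvr_device. Bits beyond the supported
+ * mapping trigger a warning and are otherwise ignored.
+ */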
+static void pvr_device_info_set_common(struct pvr_device *pvr_dev, const u64 *bitmask,
+				       u32 bitmask_size, const uintptr_t *mapping, u32 mapping_max)
+{
+	const u32 mapping_max_size = (mapping_max + 63) >> 6;
+	const u32 nr_bits = min(bitmask_size * 64, mapping_max);
+
+	/* Warn if any unsupported values in the bitmask. */
+	if (bitmask_size > mapping_max_size) {
+		if (mapping == quirks_mapping)
+			drm_warn(from_pvr_device(pvr_dev), "Unsupported quirks in firmware image");
+		else
+			drm_warn(from_pvr_device(pvr_dev),
+				 "Unsupported enhancements in firmware image");
+	} else if (bitmask_size == mapping_max_size && (mapping_max & 63)) {
+		u64 invalid_mask = ~0ull << (mapping_max & 63);
+
+		if (bitmask[bitmask_size - 1] & invalid_mask) {
+			if (mapping == quirks_mapping)
+				drm_warn(from_pvr_device(pvr_dev),
+					 "Unsupported quirks in firmware image");
+			else
+				drm_warn(from_pvr_device(pvr_dev),
+					 "Unsupported enhancements in firmware image");
+		}
+	}
+
+	for (u32 i = 0; i < nr_bits; i++) {
+		if (bitmask[i >> 6] & BIT_ULL(i & 63))
+			*(bool *)((u8 *)pvr_dev + mapping[i]) = true;
+	}
+}
+
+/**
+ * pvr_device_info_set_quirks() - Set device quirks from device information in firmware
+ * @pvr_dev: Device pointer.
+ * @quirks: Pointer to quirks mask in device information.
+ * @quirks_size: Size of quirks mask, in u64s.
+ */
+void pvr_device_info_set_quirks(struct pvr_device *pvr_dev, const u64 *quirks, u32 quirks_size)
+{
+	BUILD_BUG_ON(ARRAY_SIZE(quirks_mapping) != PVR_FW_HAS_BRN_MAX);
+
+	pvr_device_info_set_common(pvr_dev, quirks, quirks_size, quirks_mapping,
+				   ARRAY_SIZE(quirks_mapping));
+}
+
+/**
+ * pvr_device_info_set_enhancements() - Set device enhancements from device information in firmware
+ * @pvr_dev: Device pointer.
+ * @enhancements: Pointer to enhancements mask in device information.
+ * @enhancements_size: Size of enhancements mask, in u64s.
+ */
+void pvr_device_info_set_enhancements(struct pvr_device *pvr_dev, const u64 *enhancements,
+				      u32 enhancements_size)
+{
+	BUILD_BUG_ON(ARRAY_SIZE(enhancements_mapping) != PVR_FW_HAS_ERN_MAX);
+
+	pvr_device_info_set_common(pvr_dev, enhancements, enhancements_size,
+				   enhancements_mapping, ARRAY_SIZE(enhancements_mapping));
+}
+
+#define FEATURE_MAPPING(fw_feature, feature)                                        \
+	[PVR_FW_HAS_FEATURE_##fw_feature] = {                                       \
+		.flag_offset = offsetof(struct pvr_device, features.has_##feature), \
+		.value_offset = 0                                                   \
+	}
+
+#define FEATURE_MAPPING_VALUE(fw_feature, feature)                                  \
+	[PVR_FW_HAS_FEATURE_##fw_feature] = {                                       \
+		.flag_offset = offsetof(struct pvr_device, features.has_##feature), \
+		.value_offset = offsetof(struct pvr_device, features.feature)       \
+	}
+
+static const struct {
+	uintptr_t flag_offset;
+	uintptr_t value_offset;
+} features_mapping[] = {
+	FEATURE_MAPPING(AXI_ACELITE, axi_acelite),
+	FEATURE_MAPPING_VALUE(CDM_CONTROL_STREAM_FORMAT, cdm_control_stream_format),
+	FEATURE_MAPPING(CLUSTER_GROUPING, cluster_grouping),
+	FEATURE_MAPPING_VALUE(COMMON_STORE_SIZE_IN_DWORDS, common_store_size_in_dwords),
+	FEATURE_MAPPING(COMPUTE, compute),
+	FEATURE_MAPPING(COMPUTE_MORTON_CAPABLE, compute_morton_capable),
+	FEATURE_MAPPING(COMPUTE_OVERLAP, compute_overlap),
+	FEATURE_MAPPING(COREID_PER_OS, coreid_per_os),
+	FEATURE_MAPPING(DYNAMIC_DUST_POWER, dynamic_dust_power),
+	FEATURE_MAPPING_VALUE(ECC_RAMS, ecc_rams),
+	FEATURE_MAPPING_VALUE(FBCDC, fbcdc),
+	FEATURE_MAPPING_VALUE(FBCDC_ALGORITHM, fbcdc_algorithm),
+	FEATURE_MAPPING_VALUE(FBCDC_ARCHITECTURE, fbcdc_architecture),
+	FEATURE_MAPPING_VALUE(FBC_MAX_DEFAULT_DESCRIPTORS, fbc_max_default_descriptors),
+	FEATURE_MAPPING_VALUE(FBC_MAX_LARGE_DESCRIPTORS, fbc_max_large_descriptors),
+	FEATURE_MAPPING(FB_CDC_V4, fb_cdc_v4),
+	FEATURE_MAPPING(GPU_MULTICORE_SUPPORT, gpu_multicore_support),
+	FEATURE_MAPPING(GPU_VIRTUALISATION, gpu_virtualisation),
+	FEATURE_MAPPING(GS_RTA_SUPPORT, gs_rta_support),
+	FEATURE_MAPPING(IRQ_PER_OS, irq_per_os),
+	FEATURE_MAPPING_VALUE(ISP_MAX_TILES_IN_FLIGHT, isp_max_tiles_in_flight),
+	FEATURE_MAPPING_VALUE(ISP_SAMPLES_PER_PIXEL, isp_samples_per_pixel),
+	FEATURE_MAPPING(ISP_ZLS_D24_S8_PACKING_OGL_MODE, isp_zls_d24_s8_packing_ogl_mode),
+	FEATURE_MAPPING_VALUE(LAYOUT_MARS, layout_mars),
+	FEATURE_MAPPING_VALUE(MAX_PARTITIONS, max_partitions),
+	FEATURE_MAPPING_VALUE(META, meta),
+	FEATURE_MAPPING_VALUE(META_COREMEM_SIZE, meta_coremem_size),
+	FEATURE_MAPPING(MIPS, mips),
+	FEATURE_MAPPING_VALUE(NUM_CLUSTERS, num_clusters),
+	FEATURE_MAPPING_VALUE(NUM_ISP_IPP_PIPES, num_isp_ipp_pipes),
+	FEATURE_MAPPING_VALUE(NUM_OSIDS, num_osids),
+	FEATURE_MAPPING_VALUE(NUM_RASTER_PIPES, num_raster_pipes),
+	FEATURE_MAPPING(PBE2_IN_XE, pbe2_in_xe),
+	FEATURE_MAPPING(PBVNC_COREID_REG, pbvnc_coreid_reg),
+	FEATURE_MAPPING(PERFBUS, perfbus),
+	FEATURE_MAPPING(PERF_COUNTER_BATCH, perf_counter_batch),
+	FEATURE_MAPPING_VALUE(PHYS_BUS_WIDTH, phys_bus_width),
+	FEATURE_MAPPING(RISCV_FW_PROCESSOR, riscv_fw_processor),
+	FEATURE_MAPPING(ROGUEXE, roguexe),
+	FEATURE_MAPPING(S7_TOP_INFRASTRUCTURE, s7_top_infrastructure),
+	FEATURE_MAPPING(SIMPLE_INTERNAL_PARAMETER_FORMAT, simple_internal_parameter_format),
+	FEATURE_MAPPING(SIMPLE_INTERNAL_PARAMETER_FORMAT_V2, simple_internal_parameter_format_v2),
+	FEATURE_MAPPING_VALUE(SIMPLE_PARAMETER_FORMAT_VERSION, simple_parameter_format_version),
+	FEATURE_MAPPING_VALUE(SLC_BANKS, slc_banks),
+	FEATURE_MAPPING_VALUE(SLC_CACHE_LINE_SIZE_BITS, slc_cache_line_size_bits),
+	FEATURE_MAPPING(SLC_SIZE_CONFIGURABLE, slc_size_configurable),
+	FEATURE_MAPPING_VALUE(SLC_SIZE_IN_KILOBYTES, slc_size_in_kilobytes),
+	FEATURE_MAPPING(SOC_TIMER, soc_timer),
+	FEATURE_MAPPING(SYS_BUS_SECURE_RESET, sys_bus_secure_reset),
+	FEATURE_MAPPING(TESSELLATION, tessellation),
+	FEATURE_MAPPING(TILE_REGION_PROTECTION, tile_region_protection),
+	FEATURE_MAPPING_VALUE(TILE_SIZE_X, tile_size_x),
+	FEATURE_MAPPING_VALUE(TILE_SIZE_Y, tile_size_y),
+	FEATURE_MAPPING(TLA, tla),
+	FEATURE_MAPPING(TPU_CEM_DATAMASTER_GLOBAL_REGISTERS, tpu_cem_datamaster_global_registers),
+	FEATURE_MAPPING(TPU_DM_GLOBAL_REGISTERS, tpu_dm_global_registers),
+	FEATURE_MAPPING(TPU_FILTERING_MODE_CONTROL, tpu_filtering_mode_control),
+	FEATURE_MAPPING_VALUE(USC_MIN_OUTPUT_REGISTERS_PER_PIX, usc_min_output_registers_per_pix),
+	FEATURE_MAPPING(VDM_DRAWINDIRECT, vdm_drawindirect),
+	FEATURE_MAPPING(VDM_OBJECT_LEVEL_LLS, vdm_object_level_lls),
+	FEATURE_MAPPING_VALUE(VIRTUAL_ADDRESS_SPACE_BITS, virtual_address_space_bits),
+	FEATURE_MAPPING(WATCHDOG_TIMER, watchdog_timer),
+	FEATURE_MAPPING(WORKGROUP_PROTECTION, workgroup_protection),
+	FEATURE_MAPPING_VALUE(XE_ARCHITECTURE, xe_architecture),
+	FEATURE_MAPPING(XE_MEMORY_HIERARCHY, xe_memory_hierarchy),
+	FEATURE_MAPPING(XE_TPU2, xe_tpu2),
+	FEATURE_MAPPING_VALUE(XPU_MAX_REGBANKS_ADDR_WIDTH, xpu_max_regbanks_addr_width),
+	FEATURE_MAPPING_VALUE(XPU_MAX_SLAVES, xpu_max_slaves),
+	FEATURE_MAPPING_VALUE(XPU_REGISTER_BROADCAST, xpu_register_broadcast),
+	FEATURE_MAPPING(XT_TOP_INFRASTRUCTURE, xt_top_infrastructure),
+	FEATURE_MAPPING(ZLS_SUBTILE, zls_subtile),
+};
+
+#undef FEATURE_MAPPING_VALUE
+#undef FEATURE_MAPPING
+
+/**
+ * pvr_device_info_set_features() - Set device features from device information in firmware
+ * @pvr_dev: Device pointer.
+ * @features: Pointer to features mask in device information.
+ * @features_size: Size of features mask, in u64s.
+ * @feature_param_size: Size of feature parameters, in u64s.
+ *
+ * Returns:
+ *  * 0 on success, or
+ *  * -%EINVAL on malformed stream.
+ */
+int pvr_device_info_set_features(struct pvr_device *pvr_dev, const u64 *features, u32 features_size,
+				 u32 feature_param_size)
+{
+	const u32 mapping_max = ARRAY_SIZE(features_mapping);
+	const u32 mapping_max_size = (mapping_max + 63) >> 6;
+	const u32 nr_bits = min(features_size * 64, mapping_max);
+	const u64 *feature_params = features + features_size;
+	u32 param_idx = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(features_mapping) != PVR_FW_HAS_FEATURE_MAX);
+
+	/* Verify no unsupported values in the bitmask. */
+	if (features_size > mapping_max_size) {
+		drm_warn(from_pvr_device(pvr_dev), "Unsupported features in firmware image");
+	} else if (features_size == mapping_max_size && (mapping_max & 63)) {
+		u64 invalid_mask = ~0ull << (mapping_max & 63);
+
+		if (features[features_size - 1] & invalid_mask)
+			drm_warn(from_pvr_device(pvr_dev),
+				 "Unsupported features in firmware image");
+	}
+
+	for (u32 i = 0; i < nr_bits; i++) {
+		if (features[i >> 6] & BIT_ULL(i & 63)) {
+			*(bool *)((u8 *)pvr_dev + features_mapping[i].flag_offset) = true;
+
+			if (features_mapping[i].value_offset) {
+				if (param_idx >= feature_param_size)
+					return -EINVAL;
+
+				*(u64 *)((u8 *)pvr_dev + features_mapping[i].value_offset) =
+					feature_params[param_idx];
+				param_idx++;
+			}
+		}
+	}
+
+	return 0;
+}
diff --git a/drivers/gpu/drm/imagination/pvr_device_info.h b/drivers/gpu/drm/imagination/pvr_device_info.h
new file mode 100644
index 000000000000..cfad708fc20a
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_device_info.h
@@ -0,0 +1,185 @@
+/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#ifndef PVR_DEVICE_INFO_H
+#define PVR_DEVICE_INFO_H
+
+#include <linux/types.h>
+
+struct pvr_device;
+
+/*
+ * struct pvr_device_features - Hardware feature information
+ */
+struct pvr_device_features {
+	bool has_axi_acelite;
+	bool has_cdm_control_stream_format;
+	bool has_cluster_grouping;
+	bool has_common_store_size_in_dwords;
+	bool has_compute;
+	bool has_compute_morton_capable;
+	bool has_compute_overlap;
+	bool has_coreid_per_os;
+	bool has_dynamic_dust_power;
+	bool has_ecc_rams;
+	bool has_fb_cdc_v4;
+	bool has_fbc_max_default_descriptors;
+	bool has_fbc_max_large_descriptors;
+	bool has_fbcdc;
+	bool has_fbcdc_algorithm;
+	bool has_fbcdc_architecture;
+	bool has_gpu_multicore_support;
+	bool has_gpu_virtualisation;
+	bool has_gs_rta_support;
+	bool has_irq_per_os;
+	bool has_isp_max_tiles_in_flight;
+	bool has_isp_samples_per_pixel;
+	bool has_isp_zls_d24_s8_packing_ogl_mode;
+	bool has_layout_mars;
+	bool has_max_partitions;
+	bool has_meta;
+	bool has_meta_coremem_size;
+	bool has_mips;
+	bool has_num_clusters;
+	bool has_num_isp_ipp_pipes;
+	bool has_num_osids;
+	bool has_num_raster_pipes;
+	bool has_pbe2_in_xe;
+	bool has_pbvnc_coreid_reg;
+	bool has_perfbus;
+	bool has_perf_counter_batch;
+	bool has_phys_bus_width;
+	bool has_riscv_fw_processor;
+	bool has_roguexe;
+	bool has_s7_top_infrastructure;
+	bool has_simple_internal_parameter_format;
+	bool has_simple_internal_parameter_format_v2;
+	bool has_simple_parameter_format_version;
+	bool has_slc_banks;
+	bool has_slc_cache_line_size_bits;
+	bool has_slc_size_configurable;
+	bool has_slc_size_in_kilobytes;
+	bool has_soc_timer;
+	bool has_sys_bus_secure_reset;
+	bool has_tessellation;
+	bool has_tile_region_protection;
+	bool has_tile_size_x;
+	bool has_tile_size_y;
+	bool has_tla;
+	bool has_tpu_cem_datamaster_global_registers;
+	bool has_tpu_dm_global_registers;
+	bool has_tpu_filtering_mode_control;
+	bool has_usc_min_output_registers_per_pix;
+	bool has_vdm_drawindirect;
+	bool has_vdm_object_level_lls;
+	bool has_virtual_address_space_bits;
+	bool has_watchdog_timer;
+	bool has_workgroup_protection;
+	bool has_xe_architecture;
+	bool has_xe_memory_hierarchy;
+	bool has_xe_tpu2;
+	bool has_xpu_max_regbanks_addr_width;
+	bool has_xpu_max_slaves;
+	bool has_xpu_register_broadcast;
+	bool has_xt_top_infrastructure;
+	bool has_zls_subtile;
+
+	u64 cdm_control_stream_format;
+	u64 common_store_size_in_dwords;
+	u64 ecc_rams;
+	u64 fbc_max_default_descriptors;
+	u64 fbc_max_large_descriptors;
+	u64 fbcdc;
+	u64 fbcdc_algorithm;
+	u64 fbcdc_architecture;
+	u64 isp_max_tiles_in_flight;
+	u64 isp_samples_per_pixel;
+	u64 layout_mars;
+	u64 max_partitions;
+	u64 meta;
+	u64 meta_coremem_size;
+	u64 num_clusters;
+	u64 num_isp_ipp_pipes;
+	u64 num_osids;
+	u64 num_raster_pipes;
+	u64 phys_bus_width;
+	u64 simple_parameter_format_version;
+	u64 slc_banks;
+	u64 slc_cache_line_size_bits;
+	u64 slc_size_in_kilobytes;
+	u64 tile_size_x;
+	u64 tile_size_y;
+	u64 usc_min_output_registers_per_pix;
+	u64 virtual_address_space_bits;
+	u64 xe_architecture;
+	u64 xpu_max_regbanks_addr_width;
+	u64 xpu_max_slaves;
+	u64 xpu_register_broadcast;
+};
+
+/*
+ * struct pvr_device_quirks - Hardware quirk information
+ */
+struct pvr_device_quirks {
+	bool has_brn44079;
+	bool has_brn47217;
+	bool has_brn48492;
+	bool has_brn48545;
+	bool has_brn49927;
+	bool has_brn50767;
+	bool has_brn51764;
+	bool has_brn62269;
+	bool has_brn63142;
+	bool has_brn63553;
+	bool has_brn66011;
+};
+
+/*
+ * struct pvr_device_enhancements - Hardware enhancement information
+ */
+struct pvr_device_enhancements {
+	bool has_ern35421;
+	bool has_ern38020;
+	bool has_ern38748;
+	bool has_ern42064;
+	bool has_ern42290;
+	bool has_ern42606;
+	bool has_ern47025;
+	bool has_ern57596;
+};
+
+void pvr_device_info_set_quirks(struct pvr_device *pvr_dev, const u64 *bitmask,
+				u32 bitmask_len);
+void pvr_device_info_set_enhancements(struct pvr_device *pvr_dev, const u64 *bitmask,
+				      u32 bitmask_len);
+int pvr_device_info_set_features(struct pvr_device *pvr_dev, const u64 *features, u32 features_size,
+				 u32 feature_param_size);
+
+/*
+ * Meta cores
+ *
+ * These are the values for the 'meta' feature when the feature is present
+ * (as per &struct pvr_device_features).
+ */
+#define PVR_META_MTP218 (1)
+#define PVR_META_MTP219 (2)
+#define PVR_META_LTP218 (3)
+#define PVR_META_LTP217 (4)
+
+enum {
+	PVR_FEATURE_CDM_USER_MODE_QUEUE,
+	PVR_FEATURE_CLUSTER_GROUPING,
+	PVR_FEATURE_COMPUTE_MORTON_CAPABLE,
+	PVR_FEATURE_FB_CDC_V4,
+	PVR_FEATURE_GPU_MULTICORE_SUPPORT,
+	PVR_FEATURE_ISP_ZLS_D24_S8_PACKING_OGL_MODE,
+	PVR_FEATURE_REQUIRES_FB_CDC_ZLS_SETUP,
+	PVR_FEATURE_S7_TOP_INFRASTRUCTURE,
+	PVR_FEATURE_TESSELLATION,
+	PVR_FEATURE_TPU_DM_GLOBAL_REGISTERS,
+	PVR_FEATURE_VDM_DRAWINDIRECT,
+	PVR_FEATURE_VDM_OBJECT_LEVEL_LLS,
+	PVR_FEATURE_ZLS_SUBTILE,
+};
+
+#endif /* PVR_DEVICE_INFO_H */
diff --git a/drivers/gpu/drm/imagination/pvr_drv.c b/drivers/gpu/drm/imagination/pvr_drv.c
index ebab8b477749..eb3018f94f7c 100644
--- a/drivers/gpu/drm/imagination/pvr_drv.c
+++ b/drivers/gpu/drm/imagination/pvr_drv.c
@@ -3,6 +3,9 @@
 
 #include "pvr_device.h"
 #include "pvr_drv.h"
+#include "pvr_rogue_defs.h"
+#include "pvr_rogue_fwif_client.h"
+#include "pvr_rogue_fwif_shared.h"
 
 #include <uapi/drm/pvr_drm.h>
 
@@ -88,6 +91,382 @@ pvr_ioctl_get_bo_mmap_offset(struct drm_device *drm_dev, void *raw_args,
 	return -ENOTTY;
 }
 
+static __always_inline u64
+pvr_fw_version_packed(u32 major, u32 minor)
+{
+	return ((u64)major << 32) | minor;
+}
+
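+/*
+ * Size of the common store space reserved for partitions. For 16x16 tiles
+ * this is derived from the tile dimensions, the maximum number of partitions
+ * and the minimum number of USC output registers per pixel; otherwise it
+ * scales directly with the maximum number of partitions.
+ */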
+static u32
+rogue_get_common_store_partition_space_size(struct pvr_device *pvr_dev)
+{
+	u32 max_partitions = 0;
+	u32 tile_size_x = 0;
+	u32 tile_size_y = 0;
+
+	PVR_FEATURE_VALUE(pvr_dev, tile_size_x, &tile_size_x);
+	PVR_FEATURE_VALUE(pvr_dev, tile_size_y, &tile_size_y);
+	PVR_FEATURE_VALUE(pvr_dev, max_partitions, &max_partitions);
+
+	if (tile_size_x == 16 && tile_size_y == 16) {
+		u32 usc_min_output_registers_per_pix = 0;
+
+		PVR_FEATURE_VALUE(pvr_dev, usc_min_output_registers_per_pix,
+				  &usc_min_output_registers_per_pix);
+
+		return tile_size_x * tile_size_y * max_partitions *
+		       usc_min_output_registers_per_pix;
+	}
+
+	return max_partitions * 1024;
+}
+
+static u32
+rogue_get_common_store_alloc_region_size(struct pvr_device *pvr_dev)
+{
+	u32 common_store_size_in_dwords = 512 * 4 * 4;
+	u32 alloc_region_size;
+
+	PVR_FEATURE_VALUE(pvr_dev, common_store_size_in_dwords, &common_store_size_in_dwords);
+
+	alloc_region_size = common_store_size_in_dwords - (256U * 4U) -
+			    rogue_get_common_store_partition_space_size(pvr_dev);
+
+	if (PVR_HAS_QUIRK(pvr_dev, 44079)) {
+		u32 common_store_split_point = (768U * 4U * 4U);
+
+		return min(common_store_split_point - (256U * 4U), alloc_region_size);
+	}
+
+	return alloc_region_size;
+}
+
+static inline u32
+rogue_get_num_phantoms(struct pvr_device *pvr_dev)
+{
+	u32 num_clusters = 1;
+
+	PVR_FEATURE_VALUE(pvr_dev, num_clusters, &num_clusters);
+
+	return ROGUE_REQ_NUM_PHANTOMS(num_clusters);
+}
+
+static inline u32
+rogue_get_max_coeffs(struct pvr_device *pvr_dev)
+{
+	u32 max_coeff_additional_portion = ROGUE_MAX_VERTEX_SHARED_REGISTERS;
+	u32 pending_allocation_shared_regs = 2U * 1024U;
+	u32 pending_allocation_coeff_regs = 0U;
+	u32 num_phantoms = rogue_get_num_phantoms(pvr_dev);
+	u32 tiles_in_flight = 0;
+	u32 max_coeff_pixel_portion;
+
+	PVR_FEATURE_VALUE(pvr_dev, isp_max_tiles_in_flight, &tiles_in_flight);
+	max_coeff_pixel_portion = DIV_ROUND_UP(tiles_in_flight, num_phantoms);
+	max_coeff_pixel_portion *= ROGUE_MAX_PIXEL_SHARED_REGISTERS;
+
+	/*
+	 * Compute tasks on cores with BRN48492 and without compute overlap may lock
+	 * up without two additional lines of coeffs.
+	 */
+	if (PVR_HAS_QUIRK(pvr_dev, 48492) && !PVR_HAS_FEATURE(pvr_dev, compute_overlap))
+		pending_allocation_coeff_regs = 2U * 1024U;
+
+	if (PVR_HAS_ENHANCEMENT(pvr_dev, 38748))
+		pending_allocation_shared_regs = 0;
+
+	if (PVR_HAS_ENHANCEMENT(pvr_dev, 38020))
+		max_coeff_additional_portion += ROGUE_MAX_COMPUTE_SHARED_REGISTERS;
+
+	return rogue_get_common_store_alloc_region_size(pvr_dev) + pending_allocation_coeff_regs -
+		(max_coeff_pixel_portion + max_coeff_additional_portion +
+		 pending_allocation_shared_regs);
+}
+
+static inline u32
+rogue_get_cdm_max_local_mem_size_regs(struct pvr_device *pvr_dev)
+{
+	u32 available_coeffs_in_dwords = rogue_get_max_coeffs(pvr_dev);
+
+	if (PVR_HAS_QUIRK(pvr_dev, 48492) && PVR_HAS_FEATURE(pvr_dev, roguexe) &&
+	    !PVR_HAS_FEATURE(pvr_dev, compute_overlap)) {
+		/* Driver must not use the 2 reserved lines. */
+		available_coeffs_in_dwords -= ROGUE_CSRM_LINE_SIZE_IN_DWORDS * 2;
+	}
+
+	/*
+	 * The maximum amount of local memory available to a kernel is the minimum
+	 * of the total number of coefficient registers available and the max common
+	 * store allocation size which can be made by the CDM.
+	 *
+	 * If any coeff lines are reserved for tessellation or pixel then we need to
+	 * subtract those too.
+	 */
+	return min(available_coeffs_in_dwords, (u32)ROGUE_MAX_PER_KERNEL_LOCAL_MEM_SIZE_REGS);
+}
+
+/**
+ * pvr_dev_query_gpu_info_get() - Copy GPU information out to userspace
+ * @pvr_dev: Device pointer.
+ * @args: [IN] Device query arguments containing a pointer to a userspace
+ *        struct drm_pvr_dev_query_gpu_info.
+ *
+ * If the query object pointer is NULL, the size field is updated with the
+ * expected size of the query object.
+ *
+ * Returns:
+ *  * 0 on success, or if size is requested using a NULL pointer, or
+ *  * -%E2BIG if the indicated length of the allocation is less than is
+ *    required to contain the copied data, or
+ *  * -%EFAULT if local memory could not be copied to userspace.
+ */
+static int
+pvr_dev_query_gpu_info_get(struct pvr_device *pvr_dev,
+			   struct drm_pvr_ioctl_dev_query_args *args)
+{
+	struct drm_pvr_dev_query_gpu_info gpu_info = {0};
+	int err;
+
+	if (!args->pointer) {
+		args->size = sizeof(struct drm_pvr_dev_query_gpu_info);
+		return 0;
+	}
+
+	gpu_info.gpu_id =
+		pvr_gpu_id_to_packed_bvnc(&pvr_dev->gpu_id);
+	gpu_info.num_phantoms = rogue_get_num_phantoms(pvr_dev);
+
+	err = PVR_UOBJ_SET(args->pointer, args->size, gpu_info);
+	if (err < 0)
+		return err;
+
+	if (args->size > sizeof(gpu_info))
+		args->size = sizeof(gpu_info);
+	return 0;
+}
+
+/**
+ * pvr_dev_query_runtime_info_get() - Copy runtime information out to userspace
+ * @pvr_dev: Device pointer.
+ * @args: [IN] Device query arguments containing a pointer to a userspace
+ *        struct drm_pvr_dev_query_runtime_info.
+ *
+ * If the query object pointer is NULL, the size field is updated with the
+ * expected size of the query object.
+ *
+ * Returns:
+ *  * 0 on success, or if size is requested using a NULL pointer, or
+ *  * -%E2BIG if the indicated length of the allocation is less than is
+ *    required to contain the copied data, or
+ *  * -%EFAULT if local memory could not be copied to userspace.
+ */
+static int
+pvr_dev_query_runtime_info_get(struct pvr_device *pvr_dev,
+			       struct drm_pvr_ioctl_dev_query_args *args)
+{
+	struct drm_pvr_dev_query_runtime_info runtime_info = {0};
+	int err;
+
+	if (!args->pointer) {
+		args->size = sizeof(struct drm_pvr_dev_query_runtime_info);
+		return 0;
+	}
+
+	runtime_info.free_list_min_pages = 0; /* FIXME */
+	runtime_info.free_list_max_pages =
+		ROGUE_PM_MAX_FREELIST_SIZE / ROGUE_PM_PAGE_SIZE;
+	runtime_info.common_store_alloc_region_size =
+		rogue_get_common_store_alloc_region_size(pvr_dev);
+	runtime_info.common_store_partition_space_size =
+		rogue_get_common_store_partition_space_size(pvr_dev);
+	runtime_info.max_coeffs = rogue_get_max_coeffs(pvr_dev);
+	runtime_info.cdm_max_local_mem_size_regs =
+		rogue_get_cdm_max_local_mem_size_regs(pvr_dev);
+
+	err = PVR_UOBJ_SET(args->pointer, args->size, runtime_info);
+	if (err < 0)
+		return err;
+
+	if (args->size > sizeof(runtime_info))
+		args->size = sizeof(runtime_info);
+	return 0;
+}
+
+/**
+ * pvr_dev_query_quirks_get() - Unpack the array of quirks at the address given
+ * in a struct drm_pvr_dev_query_quirks, or get the amount of space required
+ * for it.
+ * @pvr_dev: Device pointer.
+ * @args: [IN] Device query arguments containing a pointer to a userspace
+ *        struct drm_pvr_dev_query_quirks.
+ *
+ * If the query object pointer is NULL, the size field is updated with the
+ * expected size of the query object.
+ * If the userspace pointer in the query object is NULL, or the count is
+ * short, no data is copied.
+ * The count field will be updated to the number of quirks copied or, if
+ * either pointer is NULL, to the number that would have been copied.
+ * The size field in the query object will be updated to the size copied.
+ *
+ * Returns:
+ *  * 0 on success, or if size/count is requested using a NULL pointer, or
+ *  * -%EINVAL if args contained non-zero reserved fields, or
+ *  * -%E2BIG if the indicated length of the allocation is less than is
+ *    required to contain the copied data, or
+ *  * -%EFAULT if local memory could not be copied to userspace.
+ */
+static int
+pvr_dev_query_quirks_get(struct pvr_device *pvr_dev,
+			 struct drm_pvr_ioctl_dev_query_args *args)
+{
+	/*
+	 * @FIXME - hardcoding of numbers here is intended as an
+	 * intermediate step so the UAPI can be fixed, but requires a
+	 * refactor in the future to store them in a more appropriate
+	 * location.
+	 */
+	static const u32 umd_quirks_musthave[] = {
+		47217,
+		49927,
+		62269,
+	};
+	static const u32 umd_quirks[] = {
+		48545,
+		51764,
+	};
+	struct drm_pvr_dev_query_quirks query;
+	u32 out[ARRAY_SIZE(umd_quirks_musthave) + ARRAY_SIZE(umd_quirks)];
+	size_t out_musthave_count = 0;
+	size_t out_count = 0;
+	int err;
+
+	if (!args->pointer) {
+		args->size = sizeof(struct drm_pvr_dev_query_quirks);
+		return 0;
+	}
+
+	err = PVR_UOBJ_GET(query, args->size, args->pointer);
+
+	if (err < 0)
+		return err;
+	if (query._padding_c)
+		return -EINVAL;
+
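+	/*
+	 * Must-have quirks present on this device are written to the output
+	 * array first, so the first query.musthave_count entries are always
+	 * those userspace is required to handle.
+	 */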
+	for (int i = 0; i < ARRAY_SIZE(umd_quirks_musthave); i++) {
+		if (pvr_device_has_uapi_quirk(pvr_dev, umd_quirks_musthave[i])) {
+			out[out_count++] = umd_quirks_musthave[i];
+			out_musthave_count++;
+		}
+	}
+
+	for (int i = 0; i < ARRAY_SIZE(umd_quirks); i++) {
+		if (pvr_device_has_uapi_quirk(pvr_dev, umd_quirks[i]))
+			out[out_count++] = umd_quirks[i];
+	}
+
+	if (!query.quirks)
+		goto copy_out;
+	if (query.count < out_count)
+		return -E2BIG;
+
+	if (copy_to_user(u64_to_user_ptr(query.quirks), out,
+			 out_count * sizeof(u32))) {
+		return -EFAULT;
+	}
+
+	query.musthave_count = out_musthave_count;
+
+copy_out:
+	query.count = out_count;
+	err = PVR_UOBJ_SET(args->pointer, args->size, query);
+	if (err < 0)
+		return err;
+
+	args->size = sizeof(query);
+	return 0;
+}
+
+/**
+ * pvr_dev_query_enhancements_get() - Unpack the array of enhancements at the
+ * address given in a struct drm_pvr_dev_query_enhancements, or get the amount
+ * of space required for it.
+ * @pvr_dev: Device pointer.
+ * @args: [IN] Device query arguments containing a pointer to a userspace
+ *        struct drm_pvr_dev_query_enhancements.
+ *
+ * If the query object pointer is NULL, the size field is updated with the
+ * expected size of the query object.
+ * If the userspace pointer in the query object is NULL, or the count is
+ * short, no data is copied.
+ * The count field will be updated to the number of enhancements copied or, if
+ * either pointer is NULL, to the number that would have been copied.
+ * The size field in the query object will be updated to the size copied.
+ *
+ * Returns:
+ *  * 0 on success, or if size/count is requested using a NULL pointer, or
+ *  * -%EINVAL if args contained non-zero reserved fields, or
+ *  * -%E2BIG if the indicated length of the allocation is less than is
+ *    required to contain the copied data, or
+ *  * -%EFAULT if local memory could not be copied to userspace.
+ */
+static int
+pvr_dev_query_enhancements_get(struct pvr_device *pvr_dev,
+			       struct drm_pvr_ioctl_dev_query_args *args)
+{
+	/*
+	 * @FIXME - hardcoding of numbers here is intended as an
+	 * intermediate step so the UAPI can be fixed, but requires a
+	 * refactor in the future to store them in a more appropriate
+	 * location.
+	 */
+	const u32 umd_enhancements[] = {
+		35421,
+		42064,
+	};
+	struct drm_pvr_dev_query_enhancements query;
+	u32 out[ARRAY_SIZE(umd_enhancements)];
+	size_t out_idx = 0;
+	int err;
+
+	if (!args->pointer) {
+		args->size = sizeof(struct drm_pvr_dev_query_enhancements);
+		return 0;
+	}
+
+	err = PVR_UOBJ_GET(query, args->size, args->pointer);
+
+	if (err < 0)
+		return err;
+	if (query._padding_a)
+		return -EINVAL;
+	if (query._padding_c)
+		return -EINVAL;
+
+	for (int i = 0; i < ARRAY_SIZE(umd_enhancements); i++) {
+		if (pvr_device_has_uapi_enhancement(pvr_dev, umd_enhancements[i]))
+			out[out_idx++] = umd_enhancements[i];
+	}
+
+	if (!query.enhancements)
+		goto copy_out;
+	if (query.count < out_idx)
+		return -E2BIG;
+
+	if (copy_to_user(u64_to_user_ptr(query.enhancements), out,
+			 out_idx * sizeof(u32))) {
+		return -EFAULT;
+	}
+
+copy_out:
+	query.count = out_idx;
+	err = PVR_UOBJ_SET(args->pointer, args->size, query);
+	if (err < 0)
+		return err;
+
+	args->size = sizeof(query);
+	return 0;
+}
+
 /**
  * pvr_ioctl_dev_query() - IOCTL to copy information about a device
  * @drm_dev: [IN] DRM device.
@@ -112,7 +491,41 @@ static int
 pvr_ioctl_dev_query(struct drm_device *drm_dev, void *raw_args,
 		    struct drm_file *file)
 {
-	return -ENOTTY;
+	struct pvr_device *pvr_dev = to_pvr_device(drm_dev);
+	struct drm_pvr_ioctl_dev_query_args *args = raw_args;
+	int idx;
+	int ret = -EINVAL;
+
+	if (!drm_dev_enter(drm_dev, &idx))
+		return -EIO;
+
+	switch ((enum drm_pvr_dev_query)args->type) {
+	case DRM_PVR_DEV_QUERY_GPU_INFO_GET:
+		ret = pvr_dev_query_gpu_info_get(pvr_dev, args);
+		break;
+
+	case DRM_PVR_DEV_QUERY_RUNTIME_INFO_GET:
+		ret = pvr_dev_query_runtime_info_get(pvr_dev, args);
+		break;
+
+	case DRM_PVR_DEV_QUERY_QUIRKS_GET:
+		ret = pvr_dev_query_quirks_get(pvr_dev, args);
+		break;
+
+	case DRM_PVR_DEV_QUERY_ENHANCEMENTS_GET:
+		ret = pvr_dev_query_enhancements_get(pvr_dev, args);
+		break;
+
+	case DRM_PVR_DEV_QUERY_HEAP_INFO_GET:
+		ret = -EINVAL;
+		break;
+
+	case DRM_PVR_DEV_QUERY_STATIC_DATA_AREAS_GET:
+		ret = -EINVAL;
+		break;
+	}
+
+	drm_dev_exit(idx);
+
+	return ret;
 }
 
 /**
@@ -350,6 +763,112 @@ pvr_ioctl_submit_jobs(struct drm_device *drm_dev, void *raw_args,
 	return -ENOTTY;
 }
 
+int
+pvr_get_uobj(u64 usr_ptr, u32 usr_stride, u32 min_stride, u32 obj_size, void *out)
+{
+	if (usr_stride < min_stride)
+		return -EINVAL;
+
+	return copy_struct_from_user(out, obj_size, u64_to_user_ptr(usr_ptr), usr_stride);
+}
+
+int
+pvr_set_uobj(u64 usr_ptr, u32 usr_stride, u32 min_stride, u32 obj_size, const void *in)
+{
+	if (usr_stride < min_stride)
+		return -EINVAL;
+
+	if (copy_to_user(u64_to_user_ptr(usr_ptr), in, min_t(u32, usr_stride, obj_size)))
+		return -EFAULT;
+
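+	/* Zero any trailing bytes of a larger userspace object. */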
+	if (usr_stride > obj_size &&
+	    clear_user(u64_to_user_ptr(usr_ptr + obj_size), usr_stride - obj_size)) {
+		return -EFAULT;
+	}
+
+	return 0;
+}
+
+int
+pvr_get_uobj_array(const struct drm_pvr_obj_array *in, u32 min_stride, u32 obj_size, void **out)
+{
+	int ret = 0;
+	void *out_alloc;
+
+	if (in->stride < min_stride)
+		return -EINVAL;
+
+	if (!in->count)
+		return 0;
+
+	out_alloc = kvmalloc_array(in->count, obj_size, GFP_KERNEL);
+	if (!out_alloc)
+		return -ENOMEM;
+
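+	/*
+	 * If the userspace stride matches the kernel object size the whole
+	 * array can be copied in one go; otherwise each element is copied
+	 * individually with copy_struct_from_user().
+	 */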
+	if (obj_size == in->stride) {
+		if (copy_from_user(out_alloc, u64_to_user_ptr(in->array),
+				   (unsigned long)obj_size * in->count))
+			ret = -EFAULT;
+	} else {
+		void __user *in_ptr = u64_to_user_ptr(in->array);
+		void *out_ptr = out_alloc;
+
+		for (u32 i = 0; i < in->count; i++) {
+			ret = copy_struct_from_user(out_ptr, obj_size, in_ptr, in->stride);
+			if (ret)
+				break;
+
+			out_ptr += obj_size;
+			in_ptr += in->stride;
+		}
+	}
+
+	if (ret) {
+		kvfree(out_alloc);
+		return ret;
+	}
+
+	*out = out_alloc;
+	return 0;
+}
+
+int
+pvr_set_uobj_array(const struct drm_pvr_obj_array *out, u32 min_stride, u32 obj_size,
+		   const void *in)
+{
+	if (out->stride < min_stride)
+		return -EINVAL;
+
+	if (!out->count)
+		return 0;
+
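+	/* Fast path when the userspace stride matches the kernel object size. */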
+	if (obj_size == out->stride) {
+		if (copy_to_user(u64_to_user_ptr(out->array), in,
+				 (unsigned long)obj_size * out->count))
+			return -EFAULT;
+	} else {
+		u32 cpy_elem_size = min_t(u32, out->stride, obj_size);
+		void __user *out_ptr = u64_to_user_ptr(out->array);
+		const void *in_ptr = in;
+
+		for (u32 i = 0; i < out->count; i++) {
+			if (copy_to_user(out_ptr, in_ptr, cpy_elem_size))
+				return -EFAULT;
+
+			out_ptr += out->stride;
+			in_ptr += obj_size;
+		}
+
+		if (out->stride > obj_size &&
+		    clear_user(u64_to_user_ptr(out->array + obj_size),
+			       out->stride - obj_size)) {
+			return -EFAULT;
+		}
+	}
+
+	return 0;
+}
+
 #define DRM_PVR_IOCTL(_name, _func, _flags) \
 	DRM_IOCTL_DEF_DRV(PVR_##_name, pvr_ioctl_##_func, _flags)
 
diff --git a/drivers/gpu/drm/imagination/pvr_drv.h b/drivers/gpu/drm/imagination/pvr_drv.h
index 6075a993d922..c413176b8006 100644
--- a/drivers/gpu/drm/imagination/pvr_drv.h
+++ b/drivers/gpu/drm/imagination/pvr_drv.h
@@ -19,4 +19,111 @@
 #define PVR_DRIVER_MINOR 0
 #define PVR_DRIVER_PATCHLEVEL 0
 
+int pvr_get_uobj(u64 usr_ptr, u32 usr_size, u32 min_size, u32 obj_size, void *out);
+int pvr_set_uobj(u64 usr_ptr, u32 usr_size, u32 min_size, u32 obj_size, const void *in);
+int pvr_get_uobj_array(const struct drm_pvr_obj_array *in, u32 min_stride, u32 obj_size,
+		       void **out);
+int pvr_set_uobj_array(const struct drm_pvr_obj_array *out, u32 min_stride, u32 obj_size,
+		       const void *in);
+
+#define PVR_UOBJ_MIN_SIZE_INTERNAL(_typename, _last_mandatory_field) \
+	(offsetof(_typename, _last_mandatory_field) + \
+	 sizeof(((_typename *)NULL)->_last_mandatory_field))
+
+/* NOLINTBEGIN(bugprone-macro-parentheses) */
+#define PVR_UOBJ_DECL(_typename, _last_mandatory_field) \
+	, _typename : PVR_UOBJ_MIN_SIZE_INTERNAL(_typename, _last_mandatory_field)
+/* NOLINTEND(bugprone-macro-parentheses) */
+
+/**
+ * DOC: PVR user objects.
+ *
+ * Macros used to aid copying structured and array data to and from
+ * userspace. Objects can differ in size, provided the minimum size
+ * allowed is specified (using the last mandatory field in the struct).
+ * All types used with PVR_UOBJ_GET/SET macros must be listed here under
+ * PVR_UOBJ_MIN_SIZE, with the last mandatory struct field specified.
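+ *
+ * For example (mirroring the dev query handlers in pvr_drv.c), a handler
+ * holding a &struct drm_pvr_ioctl_dev_query_args pointer named ``args`` can
+ * fetch a userspace object with::
+ *
+ *     struct drm_pvr_dev_query_gpu_info info = {0};
+ *     int err = PVR_UOBJ_GET(info, args->size, args->pointer);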
+ */
+
+/**
+ * PVR_UOBJ_MIN_SIZE() - Fetch the minimum copy size of a compatible type object.
+ * @_obj_name: The name of the object. Cannot be a typename - this is deduced.
+ *
+ * This cannot fail. Using the macro with an incompatible type will result in a
+ * compiler error.
+ *
+ * To add compatibility for a type, list it within the macro in an orderly
+ * fashion. The second argument is the name of the last mandatory field of the
+ * struct type, which is used to calculate the size. See also PVR_UOBJ_DECL().
+ *
+ * Return: The minimum copy size.
+ */
+#define PVR_UOBJ_MIN_SIZE(_obj_name) _Generic(_obj_name \
+	PVR_UOBJ_DECL(struct drm_pvr_job, hwrt) \
+	PVR_UOBJ_DECL(struct drm_pvr_sync_op, value) \
+	PVR_UOBJ_DECL(struct drm_pvr_dev_query_gpu_info, num_phantoms) \
+	PVR_UOBJ_DECL(struct drm_pvr_dev_query_runtime_info, cdm_max_local_mem_size_regs) \
+	PVR_UOBJ_DECL(struct drm_pvr_dev_query_quirks, _padding_c) \
+	PVR_UOBJ_DECL(struct drm_pvr_dev_query_enhancements, _padding_c) \
+	PVR_UOBJ_DECL(struct drm_pvr_heap, page_size_log2) \
+	PVR_UOBJ_DECL(struct drm_pvr_dev_query_heap_info, heaps) \
+	PVR_UOBJ_DECL(struct drm_pvr_static_data_area, offset) \
+	PVR_UOBJ_DECL(struct drm_pvr_dev_query_static_data_areas, static_data_areas) \
+	)
+
+/**
+ * PVR_UOBJ_GET() - Copies from _src_usr_ptr to &_dest_obj.
+ * @_dest_obj: The destination container object in kernel space.
+ * @_usr_size: The size of the source container in user space.
+ * @_src_usr_ptr: __u64 raw pointer to the source container in user space.
+ *
+ * Return: Error code. See pvr_get_uobj().
+ */
+#define PVR_UOBJ_GET(_dest_obj, _usr_size, _src_usr_ptr) \
+	pvr_get_uobj(_src_usr_ptr, _usr_size, \
+		     PVR_UOBJ_MIN_SIZE(_dest_obj), \
+		     sizeof(_dest_obj), &(_dest_obj))
+
+/**
+ * PVR_UOBJ_SET() - Copies from &_src_obj to _dest_usr_ptr.
+ * @_dest_usr_ptr: __u64 raw pointer to the destination container in user space.
+ * @_usr_size: The size of the destination container in user space.
+ * @_src_obj: The source container object in kernel space.
+ *
+ * Return: Error code. See pvr_set_uobj().
+ */
+#define PVR_UOBJ_SET(_dest_usr_ptr, _usr_size, _src_obj) \
+	pvr_set_uobj(_dest_usr_ptr, _usr_size, \
+		     PVR_UOBJ_MIN_SIZE(_src_obj), \
+		     sizeof(_src_obj), &(_src_obj))
+
+/**
+ * PVR_UOBJ_GET_ARRAY() - Copies from @_src_drm_pvr_obj_array.array to
+ * allocated memory and returns a pointer in _dest_array.
+ * @_dest_array: The destination C array object in kernel space.
+ * @_src_drm_pvr_obj_array: The &struct drm_pvr_obj_array containing a __u64 raw
+ * pointer to the source C array in user space and the size of each array
+ * element in user space (the 'stride').
+ *
+ * Return: Error code. See pvr_get_uobj_array().
+ */
+#define PVR_UOBJ_GET_ARRAY(_dest_array, _src_drm_pvr_obj_array) \
+	pvr_get_uobj_array(_src_drm_pvr_obj_array, \
+			   PVR_UOBJ_MIN_SIZE((_dest_array)[0]), \
+			   sizeof((_dest_array)[0]), (void **)&(_dest_array))
+
+/**
+ * PVR_UOBJ_SET_ARRAY() - Copies from _src_array to @_dest_drm_pvr_obj_array.array.
+ * @_dest_drm_pvr_obj_array: The &struct drm_pvr_obj_array containing a __u64 raw
+ * pointer to the destination C array in user space and the size of each array
+ * element in user space (the 'stride').
+ * @_src_array: The source C array object in kernel space.
+ *
+ * Return: Error code. See pvr_set_uobj_array().
+ */
+#define PVR_UOBJ_SET_ARRAY(_dest_drm_pvr_obj_array, _src_array) \
+	pvr_set_uobj_array(_dest_drm_pvr_obj_array, \
+			   PVR_UOBJ_MIN_SIZE((_src_array)[0]), \
+			   sizeof((_src_array)[0]), _src_array)
+
 #endif /* PVR_DRV_H */
diff --git a/drivers/gpu/drm/imagination/pvr_fw.c b/drivers/gpu/drm/imagination/pvr_fw.c
new file mode 100644
index 000000000000..c48de4a3af46
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_fw.c
@@ -0,0 +1,145 @@
+// SPDX-License-Identifier: GPL-2.0 OR MIT
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#include "pvr_device.h"
+#include "pvr_device_info.h"
+#include "pvr_fw.h"
+
+#include <drm/drm_drv.h>
+#include <linux/firmware.h>
+#include <linux/sizes.h>
+
+#define FW_MAX_SUPPORTED_MAJOR_VERSION 1
+
+/**
+ * pvr_fw_validate() - Parse firmware header and check compatibility
+ * @pvr_dev: Device pointer.
+ *
+ * Returns:
+ *  * 0 on success, or
+ *  * -EINVAL if firmware is incompatible.
+ */
+static int
+pvr_fw_validate(struct pvr_device *pvr_dev)
+{
+	struct drm_device *drm_dev = from_pvr_device(pvr_dev);
+	const struct firmware *firmware = pvr_dev->fw_dev.firmware;
+	const struct pvr_fw_layout_entry *layout_entries;
+	const struct pvr_fw_info_header *header;
+	const u8 *fw = firmware->data;
+	u32 fw_offset = firmware->size - SZ_4K;
+	u32 layout_table_size;
+	u32 entry;
+
+	if (firmware->size < SZ_4K || (firmware->size % FW_BLOCK_SIZE))
+		return -EINVAL;
+
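+	/* The FW info header occupies the start of the final 4K block of the image. */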
+	header = (const struct pvr_fw_info_header *)&fw[fw_offset];
+
+	if (header->info_version != PVR_FW_INFO_VERSION) {
+		drm_err(drm_dev, "Unsupported fw info version %u\n",
+			header->info_version);
+		return -EINVAL;
+	}
+
+	if (header->header_len != sizeof(struct pvr_fw_info_header) ||
+	    header->layout_entry_size != sizeof(struct pvr_fw_layout_entry) ||
+	    header->layout_entry_num > PVR_FW_INFO_MAX_NUM_ENTRIES) {
+		drm_err(drm_dev, "FW info format mismatch\n");
+		return -EINVAL;
+	}
+
+	if (!(header->flags & PVR_FW_FLAGS_OPEN_SOURCE) ||
+	    header->fw_version_major > FW_MAX_SUPPORTED_MAJOR_VERSION ||
+	    header->fw_version_major == 0) {
+		drm_err(drm_dev, "Unsupported FW version %u.%u (build: %u%s)\n",
+			header->fw_version_major, header->fw_version_minor,
+			header->fw_version_build,
+			(header->flags & PVR_FW_FLAGS_OPEN_SOURCE) ? " OS" : "");
+		return -EINVAL;
+	}
+
+	if (pvr_gpu_id_to_packed_bvnc(&pvr_dev->gpu_id) != header->bvnc) {
+		struct pvr_gpu_id fw_gpu_id;
+
+		packed_bvnc_to_pvr_gpu_id(header->bvnc, &fw_gpu_id);
+		drm_err(drm_dev, "FW built for incorrect GPU ID %i.%i.%i.%i (expected %i.%i.%i.%i)\n",
+			fw_gpu_id.b, fw_gpu_id.v, fw_gpu_id.n, fw_gpu_id.c,
+			pvr_dev->gpu_id.b, pvr_dev->gpu_id.v, pvr_dev->gpu_id.n, pvr_dev->gpu_id.c);
+		return -EINVAL;
+	}
+
+	fw_offset += header->header_len;
+	layout_table_size =
+		header->layout_entry_size * header->layout_entry_num;
+	if ((fw_offset + layout_table_size) > firmware->size)
+		return -EINVAL;
+
+	layout_entries = (const struct pvr_fw_layout_entry *)&fw[fw_offset];
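+	/* Reject layout entries whose address ranges are empty or would wrap. */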
+	for (entry = 0; entry < header->layout_entry_num; entry++) {
+		u32 start_addr = layout_entries[entry].base_addr;
+		u32 end_addr = start_addr + layout_entries[entry].alloc_size;
+
+		if (start_addr >= end_addr)
+			return -EINVAL;
+	}
+
+	fw_offset = (firmware->size - SZ_4K) - header->device_info_size;
+
+	drm_info(drm_dev, "FW version v%u.%u (build %u OS)\n", header->fw_version_major,
+		 header->fw_version_minor, header->fw_version_build);
+
+	pvr_dev->fw_version.major = header->fw_version_major;
+	pvr_dev->fw_version.minor = header->fw_version_minor;
+
+	pvr_dev->fw_dev.header = header;
+	pvr_dev->fw_dev.layout_entries = layout_entries;
+
+	return 0;
+}
+
+static int
+pvr_fw_get_device_info(struct pvr_device *pvr_dev)
+{
+	const struct firmware *firmware = pvr_dev->fw_dev.firmware;
+	struct pvr_fw_device_info_header *header;
+	const u8 *fw = firmware->data;
+	const u64 *dev_info;
+	u32 fw_offset;
+
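+	/*
+	 * The device info block sits immediately before the FW info header
+	 * and consists of its own header followed by the BRN (quirk), ERN
+	 * (enhancement) and feature bitmasks, then the feature parameters.
+	 */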
+	fw_offset = (firmware->size - SZ_4K) - pvr_dev->fw_dev.header->device_info_size;
+
+	header = (struct pvr_fw_device_info_header *)&fw[fw_offset];
+	dev_info = (u64 *)(header + 1);
+
+	pvr_device_info_set_quirks(pvr_dev, dev_info, header->brn_mask_size);
+	dev_info += header->brn_mask_size;
+
+	pvr_device_info_set_enhancements(pvr_dev, dev_info, header->ern_mask_size);
+	dev_info += header->ern_mask_size;
+
+	return pvr_device_info_set_features(pvr_dev, dev_info, header->feature_mask_size,
+					    header->feature_param_size);
+}
+
+/**
+ * pvr_fw_validate_init_device_info() - Validate firmware and initialise device information
+ * @pvr_dev: Target PowerVR device.
+ *
+ * This function must be called before querying device information.
+ *
+ * Returns:
+ *  * 0 on success, or
+ *  * -%EINVAL if firmware validation fails.
+ */
+int
+pvr_fw_validate_init_device_info(struct pvr_device *pvr_dev)
+{
+	int err;
+
+	err = pvr_fw_validate(pvr_dev);
+	if (err)
+		return err;
+
+	return pvr_fw_get_device_info(pvr_dev);
+}
diff --git a/drivers/gpu/drm/imagination/pvr_fw.h b/drivers/gpu/drm/imagination/pvr_fw.h
new file mode 100644
index 000000000000..dca7fe5b3dd0
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_fw.h
@@ -0,0 +1,34 @@
+/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#ifndef PVR_FW_H
+#define PVR_FW_H
+
+#include "pvr_fw_info.h"
+
+#include <linux/types.h>
+
+/* Forward declarations from "pvr_device.h". */
+struct pvr_device;
+struct pvr_file;
+
+struct pvr_fw_device {
+	/** @firmware: Handle to the firmware loaded into the device. */
+	const struct firmware *firmware;
+
+	/** @header: Pointer to firmware header. */
+	const struct pvr_fw_info_header *header;
+
+	/** @layout_entries: Pointer to firmware layout. */
+	const struct pvr_fw_layout_entry *layout_entries;
+
+	/**
+	 * @processor_type: FW processor type for this device. Must be one of
+	 *                  %PVR_FW_PROCESSOR_TYPE_*.
+	 */
+	u16 processor_type;
+};
+
+int pvr_fw_validate_init_device_info(struct pvr_device *pvr_dev);
+
+#endif /* PVR_FW_H */
diff --git a/drivers/gpu/drm/imagination/pvr_fw_info.h b/drivers/gpu/drm/imagination/pvr_fw_info.h
new file mode 100644
index 000000000000..40bf66f1c4b6
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_fw_info.h
@@ -0,0 +1,135 @@
+/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#ifndef PVR_FW_INFO_H
+#define PVR_FW_INFO_H
+
+#include <linux/bits.h>
+#include <linux/sizes.h>
+#include <linux/types.h>
+
+/*
+ * Firmware binary block unit in bytes.
+ * Raw data stored in FW binary will be aligned to this size.
+ */
+#define FW_BLOCK_SIZE SZ_4K
+
+/* Maximum number of entries in firmware layout table. */
+#define PVR_FW_INFO_MAX_NUM_ENTRIES 8
+
+enum pvr_fw_section_id {
+	META_CODE = 0,
+	META_PRIVATE_DATA,
+	META_COREMEM_CODE,
+	META_COREMEM_DATA,
+	MIPS_CODE,
+	MIPS_EXCEPTIONS_CODE,
+	MIPS_BOOT_CODE,
+	MIPS_PRIVATE_DATA,
+	MIPS_BOOT_DATA,
+	MIPS_STACK,
+	RISCV_UNCACHED_CODE,
+	RISCV_CACHED_CODE,
+	RISCV_PRIVATE_DATA,
+	RISCV_COREMEM_CODE,
+	RISCV_COREMEM_DATA,
+};
+
+enum pvr_fw_section_type {
+	NONE = 0,
+	FW_CODE,
+	FW_DATA,
+	FW_COREMEM_CODE,
+	FW_COREMEM_DATA,
+};
+
+/*
+ * FW binary format with FW info attached:
+ *
+ *          Contents        Offset
+ *     +-----------------+
+ *     |                 |    0
+ *     |                 |
+ *     | Original binary |
+ *     |      file       |
+ *     |   (.ldr/.elf)   |
+ *     |                 |
+ *     |                 |
+ *     +-----------------+
+ *     |   Device info   |  FILE_SIZE - 4K - device_info_size
+ *     +-----------------+
+ *     | FW info header  |  FILE_SIZE - 4K
+ *     +-----------------+
+ *     |                 |
+ *     | FW layout table |
+ *     |                 |
+ *     +-----------------+
+ *                          FILE_SIZE
+ */
+
+#define PVR_FW_INFO_VERSION 3
+
+#define PVR_FW_FLAGS_OPEN_SOURCE BIT(0)
+
+/** struct pvr_fw_info_header - Firmware header */
+struct pvr_fw_info_header {
+	/** @info_version: FW info header version. */
+	u32 info_version;
+	/** @header_len: Header length. */
+	u32 header_len;
+	/** @layout_entry_num: Number of entries in the layout table. */
+	u32 layout_entry_num;
+	/** @layout_entry_size: Size of an entry in the layout table. */
+	u32 layout_entry_size;
+	/** @bvnc: GPU ID supported by firmware. */
+	aligned_u64 bvnc;
+	/** @fw_page_size: Page size of processor on which firmware executes. */
+	u32 fw_page_size;
+	/** @flags: Compatibility flags. */
+	u32 flags;
+	/** @fw_version_major: Firmware major version number. */
+	u16 fw_version_major;
+	/** @fw_version_minor: Firmware minor version number. */
+	u16 fw_version_minor;
+	/** @fw_version_build: Firmware build number. */
+	u32 fw_version_build;
+	/** @device_info_size: Size of device info structure. */
+	u32 device_info_size;
+	/** @padding: Padding. */
+	u32 padding;
+};
+
+/**
+ * struct pvr_fw_layout_entry - Entry in firmware layout table, describing a
+ *                              section of the firmware image
+ */
+struct pvr_fw_layout_entry {
+	/** @id: Section ID. */
+	enum pvr_fw_section_id id;
+	/** @type: Section type. */
+	enum pvr_fw_section_type type;
+	/** @base_addr: Base address of section in FW address space. */
+	u32 base_addr;
+	/** @max_size: Maximum size of section, in bytes. */
+	u32 max_size;
+	/** @alloc_size: Allocation size of section, in bytes. */
+	u32 alloc_size;
+	/** @alloc_offset: Allocation offset of section. */
+	u32 alloc_offset;
+};
+
+/**
+ * struct pvr_fw_device_info_header - Device information header.
+ */
+struct pvr_fw_device_info_header {
+	/* BRN Mask size (in u64s). */
+	u64 brn_mask_size;
+	/* ERN Mask size (in u64s). */
+	u64 ern_mask_size;
+	/* Feature Mask size (in u64s). */
+	u64 feature_mask_size;
+	/* Feature Parameter size (in u64s). */
+	u64 feature_param_size;
+};
+
+#endif /* PVR_FW_INFO_H */
-- 
2.41.0


^ permalink raw reply related	[flat|nested] 77+ messages in thread

* [PATCH v5 07/17] drm/imagination: Add GPU ID parsing and firmware loading
@ 2023-08-16  8:25   ` Sarah Walker
  0 siblings, 0 replies; 77+ messages in thread
From: Sarah Walker @ 2023-08-16  8:25 UTC (permalink / raw)
  To: dri-devel
  Cc: frank.binns, donald.robson, boris.brezillon, faith.ekstrand,
	airlied, daniel, maarten.lankhorst, mripard, tzimmermann, afd,
	hns, matthew.brost, christian.koenig, luben.tuikov, dakr,
	linux-kernel, Matt Coster

Read the GPU ID register at probe time and select the correct
features/quirks/enhancements. Use the GPU ID to form the firmware
file name and load the firmware.

The features/quirks/enhancements arrays are currently hardcoded in
the driver for the supported GPUs. We are looking at moving this
information to the firmware image.

Changes since v4:
- Retrieve device information from firmware header
- Pull forward firmware header parsing from FW infrastructure patch
- Use devm_add_action_or_reset to release firmware

Changes since v3:
- Use drm_dev_{enter,exit}

Co-developed-by: Frank Binns <frank.binns@imgtec.com>
Signed-off-by: Frank Binns <frank.binns@imgtec.com>
Co-developed-by: Matt Coster <matt.coster@imgtec.com>
Signed-off-by: Matt Coster <matt.coster@imgtec.com>
Co-developed-by: Donald Robson <donald.robson@imgtec.com>
Signed-off-by: Donald Robson <donald.robson@imgtec.com>
Signed-off-by: Sarah Walker <sarah.walker@imgtec.com>
---
 drivers/gpu/drm/imagination/Makefile          |   2 +
 drivers/gpu/drm/imagination/pvr_device.c      | 323 ++++++++++-
 drivers/gpu/drm/imagination/pvr_device.h      | 220 ++++++++
 drivers/gpu/drm/imagination/pvr_device_info.c | 253 +++++++++
 drivers/gpu/drm/imagination/pvr_device_info.h | 185 +++++++
 drivers/gpu/drm/imagination/pvr_drv.c         | 521 +++++++++++++++++-
 drivers/gpu/drm/imagination/pvr_drv.h         | 107 ++++
 drivers/gpu/drm/imagination/pvr_fw.c          | 145 +++++
 drivers/gpu/drm/imagination/pvr_fw.h          |  34 ++
 drivers/gpu/drm/imagination/pvr_fw_info.h     | 135 +++++
 10 files changed, 1923 insertions(+), 2 deletions(-)
 create mode 100644 drivers/gpu/drm/imagination/pvr_device_info.c
 create mode 100644 drivers/gpu/drm/imagination/pvr_device_info.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_fw.c
 create mode 100644 drivers/gpu/drm/imagination/pvr_fw.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_fw_info.h

diff --git a/drivers/gpu/drm/imagination/Makefile b/drivers/gpu/drm/imagination/Makefile
index b4aa190c9d4a..9e144ff2742b 100644
--- a/drivers/gpu/drm/imagination/Makefile
+++ b/drivers/gpu/drm/imagination/Makefile
@@ -5,6 +5,8 @@ subdir-ccflags-y := -I$(srctree)/$(src)
 
 powervr-y := \
 	pvr_device.o \
+	pvr_device_info.o \
 	pvr_drv.o \
+	pvr_fw.o
 
 obj-$(CONFIG_DRM_POWERVR) += powervr.o
diff --git a/drivers/gpu/drm/imagination/pvr_device.c b/drivers/gpu/drm/imagination/pvr_device.c
index cef3511c0c42..b1fae182c4f6 100644
--- a/drivers/gpu/drm/imagination/pvr_device.c
+++ b/drivers/gpu/drm/imagination/pvr_device.c
@@ -2,19 +2,31 @@
 /* Copyright (c) 2023 Imagination Technologies Ltd. */
 
 #include "pvr_device.h"
+#include "pvr_device_info.h"
+
+#include "pvr_fw.h"
+#include "pvr_rogue_cr_defs.h"
 
 #include <drm/drm_print.h>
 
+#include <linux/bitfield.h>
 #include <linux/clk.h>
 #include <linux/compiler_attributes.h>
 #include <linux/compiler_types.h>
 #include <linux/dma-mapping.h>
 #include <linux/err.h>
+#include <linux/firmware.h>
 #include <linux/gfp.h>
+#include <linux/interrupt.h>
 #include <linux/platform_device.h>
+#include <linux/pm_runtime.h>
 #include <linux/slab.h>
 #include <linux/stddef.h>
 #include <linux/types.h>
+#include <linux/workqueue.h>
+
+/* Major number for the supported version of the firmware. */
+#define PVR_FW_VERSION_MAJOR 1
 
 /**
  * pvr_device_reg_init() - Initialize kernel access to a PowerVR device's
@@ -100,6 +112,209 @@ static int pvr_device_clk_init(struct pvr_device *pvr_dev)
 	return 0;
 }
 
+/**
+ * pvr_build_firmware_filename() - Construct a PowerVR firmware filename
+ * @pvr_dev: Target PowerVR device.
+ * @base: First part of the filename.
+ * @major: Major version number.
+ *
+ * A PowerVR firmware filename consists of three parts separated by underscores
+ * (``'_'``) along with a '.fw' file suffix. The first part is the exact value
+ * of @base, the second part is the hardware version string derived from @pvr_dev
+ * and the final part is the firmware version number constructed from @major with
+ * a 'v' prefix, e.g. powervr/rogue_4.40.2.51_v1.fw.
+ *
+ * The returned string will have been slab allocated and must be freed with
+ * kfree().
+ *
+ * Return:
+ *  * The constructed filename on success, or
+ *  * Any error returned by kasprintf().
+ */
+static char *
+pvr_build_firmware_filename(struct pvr_device *pvr_dev, const char *base,
+			    u8 major)
+{
+	struct pvr_gpu_id *gpu_id = &pvr_dev->gpu_id;
+
+	return kasprintf(GFP_KERNEL, "%s_%d.%d.%d.%d_v%d.fw", base, gpu_id->b,
+			 gpu_id->v, gpu_id->n, gpu_id->c, major);
+}
+
+static void
+pvr_release_firmware(void *data)
+{
+	struct pvr_device *pvr_dev = data;
+
+	release_firmware(pvr_dev->fw_dev.firmware);
+}
+
+/**
+ * pvr_request_firmware() - Load firmware for a PowerVR device
+ * @pvr_dev: Target PowerVR device.
+ *
+ * See pvr_build_firmware_filename() for details on firmware file naming.
+ *
+ * Return:
+ *  * 0 on success,
+ *  * Any error returned by pvr_build_firmware_filename(), or
+ *  * Any error returned by request_firmware().
+ */
+static int
+pvr_request_firmware(struct pvr_device *pvr_dev)
+{
+	struct drm_device *drm_dev = &pvr_dev->base;
+	char *filename;
+	const struct firmware *fw;
+	int err;
+
+	filename = pvr_build_firmware_filename(pvr_dev, "powervr/rogue",
+					       PVR_FW_VERSION_MAJOR);
+	if (IS_ERR(filename))
+		return PTR_ERR(filename);
+
+	/*
+	 * request_firmware() takes its own copy of the filename string, so
+	 * ours can be freed once the call returns.
+	 */
+	err = request_firmware(&fw, filename, pvr_dev->base.dev);
+	if (err) {
+		drm_err(drm_dev, "failed to load firmware %s (err=%d)\n",
+			filename, err);
+		goto err_free_filename;
+	}
+
+	drm_info(drm_dev, "loaded firmware %s\n", filename);
+	kfree(filename);
+
+	pvr_dev->fw_dev.firmware = fw;
+
+	return devm_add_action_or_reset(drm_dev->dev, pvr_release_firmware, pvr_dev);
+
+err_free_filename:
+	kfree(filename);
+
+	return err;
+}
+
+/**
+ * pvr_load_gpu_id() - Load a PowerVR device's GPU ID (BVNC) from control registers.
+ * @pvr_dev: Target PowerVR device.
+ *
+ * Sets struct pvr_device.gpu_id.
+ */
+static void
+pvr_load_gpu_id(struct pvr_device *pvr_dev)
+{
+	struct pvr_gpu_id *gpu_id = &pvr_dev->gpu_id;
+	u64 bvnc;
+
+	/*
+	 * Try reading the BVNC using the newer (cleaner) method first. If the
+	 * B value is zero, fall back to the older method.
+	 */
+	bvnc = pvr_cr_read64(pvr_dev, ROGUE_CR_CORE_ID__PBVNC);
+
+	gpu_id->b = PVR_CR_FIELD_GET(bvnc, CORE_ID__PBVNC__BRANCH_ID);
+	if (gpu_id->b != 0) {
+		gpu_id->v = PVR_CR_FIELD_GET(bvnc, CORE_ID__PBVNC__VERSION_ID);
+		gpu_id->n = PVR_CR_FIELD_GET(bvnc, CORE_ID__PBVNC__NUMBER_OF_SCALABLE_UNITS);
+		gpu_id->c = PVR_CR_FIELD_GET(bvnc, CORE_ID__PBVNC__CONFIG_ID);
+	} else {
+		u32 core_rev = pvr_cr_read32(pvr_dev, ROGUE_CR_CORE_REVISION);
+		u32 core_id = pvr_cr_read32(pvr_dev, ROGUE_CR_CORE_ID);
+		u16 core_id_config = PVR_CR_FIELD_GET(core_id, CORE_ID_CONFIG);
+
+		gpu_id->b = PVR_CR_FIELD_GET(core_rev, CORE_REVISION_MAJOR);
+		gpu_id->v = PVR_CR_FIELD_GET(core_rev, CORE_REVISION_MINOR);
+		gpu_id->n = FIELD_GET(0xFF00, core_id_config);
+		gpu_id->c = FIELD_GET(0x00FF, core_id_config);
+	}
+}
+
+/**
+ * pvr_set_dma_info() - Set PowerVR device DMA information
+ * @pvr_dev: Target PowerVR device.
+ *
+ * Sets the DMA mask and max segment size for the PowerVR device.
+ *
+ * Return:
+ *  * 0 on success,
+ *  * Any error returned by PVR_FEATURE_VALUE(), or
+ *  * Any error returned by dma_set_mask().
+ */
+
+static int
+pvr_set_dma_info(struct pvr_device *pvr_dev)
+{
+	struct drm_device *drm_dev = from_pvr_device(pvr_dev);
+	u16 phys_bus_width;
+	int err;
+
+	err = PVR_FEATURE_VALUE(pvr_dev, phys_bus_width, &phys_bus_width);
+	if (err) {
+		drm_err(drm_dev, "Failed to get device physical bus width\n");
+		return err;
+	}
+
+	err = dma_set_mask(drm_dev->dev, DMA_BIT_MASK(phys_bus_width));
+	if (err) {
+		drm_err(drm_dev, "Failed to set DMA mask (err=%d)\n", err);
+		return err;
+	}
+
+	dma_set_max_seg_size(drm_dev->dev, UINT_MAX);
+
+	return 0;
+}
+
+/**
+ * pvr_device_gpu_init() - GPU-specific initialization for a PowerVR device
+ * @pvr_dev: Target PowerVR device.
+ *
+ * The following steps are taken to ensure the device is ready:
+ *
+ *  1. Read the hardware version information from control registers,
+ *  2. Initialise the hardware feature information,
+ *  3. Setup the device DMA information,
+ *  4. Setup the device-scoped memory context, and
+ *  5. Load firmware into the device.
+ *
+ * Return:
+ *  * 0 on success,
+ *  * -%ENODEV if the GPU is not supported,
+ *  * Any error returned by pvr_set_dma_info(),
+ *  * Any error returned by pvr_memory_context_init(), or
+ *  * Any error returned by pvr_request_firmware().
+ */
+static int
+pvr_device_gpu_init(struct pvr_device *pvr_dev)
+{
+	int err;
+
+	pvr_load_gpu_id(pvr_dev);
+
+	err = pvr_request_firmware(pvr_dev);
+	if (err)
+		return err;
+
+	err = pvr_fw_validate_init_device_info(pvr_dev);
+	if (err)
+		return err;
+
+	if (PVR_HAS_FEATURE(pvr_dev, meta))
+		pvr_dev->fw_dev.processor_type = PVR_FW_PROCESSOR_TYPE_META;
+	else if (PVR_HAS_FEATURE(pvr_dev, mips))
+		pvr_dev->fw_dev.processor_type = PVR_FW_PROCESSOR_TYPE_MIPS;
+	else if (PVR_HAS_FEATURE(pvr_dev, riscv_fw_processor))
+		pvr_dev->fw_dev.processor_type = PVR_FW_PROCESSOR_TYPE_RISCV;
+	else
+		return -EINVAL;
+
+	return pvr_set_dma_info(pvr_dev);
+}
+
 /**
  * pvr_device_init() - Initialize a PowerVR device
  * @pvr_dev: Target PowerVR device.
@@ -130,7 +345,12 @@ pvr_device_init(struct pvr_device *pvr_dev)
 		return err;
 
 	/* Map the control registers into memory. */
-	return pvr_device_reg_init(pvr_dev);
+	err = pvr_device_reg_init(pvr_dev);
+	if (err)
+		return err;
+
+	/* Perform GPU-specific initialization steps. */
+	return pvr_device_gpu_init(pvr_dev);
 }
 
 /**
@@ -145,3 +365,104 @@ pvr_device_fini(struct pvr_device *pvr_dev)
 	 * the initialization stages in pvr_device_init().
 	 */
 }
+
+bool
+pvr_device_has_uapi_quirk(struct pvr_device *pvr_dev, u32 quirk)
+{
+	switch (quirk) {
+	case 47217:
+		return PVR_HAS_QUIRK(pvr_dev, 47217);
+	case 48545:
+		return PVR_HAS_QUIRK(pvr_dev, 48545);
+	case 49927:
+		return PVR_HAS_QUIRK(pvr_dev, 49927);
+	case 51764:
+		return PVR_HAS_QUIRK(pvr_dev, 51764);
+	case 62269:
+		return PVR_HAS_QUIRK(pvr_dev, 62269);
+	default:
+		return false;
+	}
+}
+
+bool
+pvr_device_has_uapi_enhancement(struct pvr_device *pvr_dev, u32 enhancement)
+{
+	switch (enhancement) {
+	case 35421:
+		return PVR_HAS_ENHANCEMENT(pvr_dev, 35421);
+	case 42064:
+		return PVR_HAS_ENHANCEMENT(pvr_dev, 42064);
+	default:
+		return false;
+	}
+}
+
+/**
+ * pvr_device_has_feature() - Look up device feature based on feature definition
+ * @pvr_dev: Device pointer.
+ * @feature: Feature to look up. Should be one of %PVR_FEATURE_*.
+ *
+ * Returns:
+ *  * %true if feature is present on device, or
+ *  * %false if feature is not present on device.
+ */
+bool
+pvr_device_has_feature(struct pvr_device *pvr_dev, u32 feature)
+{
+	switch (feature) {
+	case PVR_FEATURE_CLUSTER_GROUPING:
+		return PVR_HAS_FEATURE(pvr_dev, cluster_grouping);
+
+	case PVR_FEATURE_COMPUTE_MORTON_CAPABLE:
+		return PVR_HAS_FEATURE(pvr_dev, compute_morton_capable);
+
+	case PVR_FEATURE_FB_CDC_V4:
+		return PVR_HAS_FEATURE(pvr_dev, fb_cdc_v4);
+
+	case PVR_FEATURE_GPU_MULTICORE_SUPPORT:
+		return PVR_HAS_FEATURE(pvr_dev, gpu_multicore_support);
+
+	case PVR_FEATURE_ISP_ZLS_D24_S8_PACKING_OGL_MODE:
+		return PVR_HAS_FEATURE(pvr_dev, isp_zls_d24_s8_packing_ogl_mode);
+
+	case PVR_FEATURE_S7_TOP_INFRASTRUCTURE:
+		return PVR_HAS_FEATURE(pvr_dev, s7_top_infrastructure);
+
+	case PVR_FEATURE_TESSELLATION:
+		return PVR_HAS_FEATURE(pvr_dev, tessellation);
+
+	case PVR_FEATURE_TPU_DM_GLOBAL_REGISTERS:
+		return PVR_HAS_FEATURE(pvr_dev, tpu_dm_global_registers);
+
+	case PVR_FEATURE_VDM_DRAWINDIRECT:
+		return PVR_HAS_FEATURE(pvr_dev, vdm_drawindirect);
+
+	case PVR_FEATURE_VDM_OBJECT_LEVEL_LLS:
+		return PVR_HAS_FEATURE(pvr_dev, vdm_object_level_lls);
+
+	case PVR_FEATURE_ZLS_SUBTILE:
+		return PVR_HAS_FEATURE(pvr_dev, zls_subtile);
+
+	/* Derived features. */
+	case PVR_FEATURE_CDM_USER_MODE_QUEUE: {
+		u8 cdm_control_stream_format = 0;
+
+		PVR_FEATURE_VALUE(pvr_dev, cdm_control_stream_format, &cdm_control_stream_format);
+		return (cdm_control_stream_format >= 2 && cdm_control_stream_format <= 4);
+	}
+
+	case PVR_FEATURE_REQUIRES_FB_CDC_ZLS_SETUP:
+		if (PVR_HAS_FEATURE(pvr_dev, fbcdc_algorithm)) {
+			u8 fbcdc_algorithm = 0;
+
+			PVR_FEATURE_VALUE(pvr_dev, fbcdc_algorithm, &fbcdc_algorithm);
+			return (fbcdc_algorithm < 3 || PVR_HAS_FEATURE(pvr_dev, fb_cdc_v4));
+		}
+		return false;
+
+	default:
+		WARN(true, "Looking up undefined feature %u\n", feature);
+		return false;
+	}
+}
diff --git a/drivers/gpu/drm/imagination/pvr_device.h b/drivers/gpu/drm/imagination/pvr_device.h
index 41a00d3d8220..65ece87e2405 100644
--- a/drivers/gpu/drm/imagination/pvr_device.h
+++ b/drivers/gpu/drm/imagination/pvr_device.h
@@ -4,6 +4,9 @@
 #ifndef PVR_DEVICE_H
 #define PVR_DEVICE_H
 
+#include "pvr_device_info.h"
+#include "pvr_fw.h"
+
 #include <drm/drm_device.h>
 #include <drm/drm_file.h>
 #include <drm/drm_mm.h>
@@ -28,6 +31,26 @@ struct clk;
 /* Forward declaration from <linux/firmware.h>. */
 struct firmware;
 
+/**
+ * struct pvr_gpu_id - Hardware GPU ID information for a PowerVR device
+ * @b: Branch ID.
+ * @v: Version ID.
+ * @n: Number of scalable units.
+ * @c: Config ID.
+ */
+struct pvr_gpu_id {
+	u16 b, v, n, c;
+};
+
+/**
+ * struct pvr_fw_version - Firmware version information
+ * @major: Major version number.
+ * @minor: Minor version number.
+ */
+struct pvr_fw_version {
+	u16 major, minor;
+};
+
 /**
  * struct pvr_device - powervr-specific wrapper for &struct drm_device
  */
@@ -40,6 +63,35 @@ struct pvr_device {
 	 */
 	struct drm_device base;
 
+	/** @gpu_id: GPU ID detected at runtime. */
+	struct pvr_gpu_id gpu_id;
+
+	/**
+	 * @features: Hardware feature information.
+	 *
+	 * Do not access this member directly, instead use PVR_HAS_FEATURE()
+	 * or PVR_FEATURE_VALUE() macros.
+	 */
+	struct pvr_device_features features;
+
+	/**
+	 * @quirks: Hardware quirk information.
+	 *
+	 * Do not access this member directly, instead use PVR_HAS_QUIRK().
+	 */
+	struct pvr_device_quirks quirks;
+
+	/**
+	 * @enhancements: Hardware enhancement information.
+	 *
+	 * Do not access this member directly, instead use
+	 * PVR_HAS_ENHANCEMENT().
+	 */
+	struct pvr_device_enhancements enhancements;
+
+	/** @fw_version: Firmware version detected at runtime. */
+	struct pvr_fw_version fw_version;
+
 	/**
 	 * @regs: Device control registers.
 	 *
@@ -70,6 +122,9 @@ struct pvr_device {
 	 * Interface (MEMIF). If present, this needs to be enabled/disabled together with @core_clk.
 	 */
 	struct clk *mem_clk;
+
+	/** @fw_dev: Firmware related data. */
+	struct pvr_fw_device fw_dev;
 };
 
 /**
@@ -92,6 +147,76 @@ struct pvr_file {
 	struct pvr_device *pvr_dev;
 };
 
+/**
+ * PVR_HAS_FEATURE() - Tests whether a PowerVR device has a given feature
+ * @pvr_dev: [IN] Target PowerVR device.
+ * @feature: [IN] Hardware feature name.
+ *
+ * Feature names are derived from those found in &struct pvr_device_features by
+ * dropping the 'has_' prefix, which is applied by this macro.
+ *
+ * Return:
+ *  * true if the named feature is present in the hardware
+ *  * false if the named feature is not present in the hardware
+ */
+#define PVR_HAS_FEATURE(pvr_dev, feature) ((pvr_dev)->features.has_##feature)
+
+/**
+ * PVR_FEATURE_VALUE() - Gets a PowerVR device feature value
+ * @pvr_dev: [IN] Target PowerVR device.
+ * @feature: [IN] Feature name.
+ * @value_out: [OUT] Feature value.
+ *
+ * This macro will get a feature value for those features that have values.
+ * If the feature is not present, nothing will be stored to @value_out.
+ *
+ * Feature names are derived from those found in &struct pvr_device_features by
+ * dropping the 'has_' prefix.
+ *
+ * Return:
+ *  * 0 on success, or
+ *  * -%EINVAL if the named feature is not present in the hardware
+ */
+#define PVR_FEATURE_VALUE(pvr_dev, feature, value_out)             \
+	({                                                         \
+		struct pvr_device *_pvr_dev = pvr_dev;             \
+		int _ret = -EINVAL;                                \
+		if (_pvr_dev->features.has_##feature) {            \
+			*(value_out) = _pvr_dev->features.feature; \
+			_ret = 0;                                  \
+		}                                                  \
+		_ret;                                              \
+	})
+
+/**
+ * PVR_HAS_QUIRK() - Tests whether a physical device has a given quirk
+ * @pvr_dev: [IN] Target PowerVR device.
+ * @quirk: [IN] Hardware quirk name.
+ *
+ * Quirk numbers are derived from those found in #pvr_device_quirks by
+ * dropping the 'has_brn' prefix, which is applied by this macro.
+ *
+ * Returns
+ *  * true if the quirk is present in the hardware, or
+ *  * false if the quirk is not present in the hardware.
+ */
+#define PVR_HAS_QUIRK(pvr_dev, quirk) ((pvr_dev)->quirks.has_brn##quirk)
+
+/**
+ * PVR_HAS_ENHANCEMENT() - Tests whether a physical device has a given
+ *                         enhancement
+ * @pvr_dev: [IN] Target PowerVR device.
+ * @enhancement: [IN] Hardware enhancement name.
+ *
+ * Enhancement numbers are derived from those found in #pvr_device_enhancements
+ * by dropping the 'has_ern' prefix, which is applied by this macro.
+ *
+ * Returns
+ *  * true if the enhancement is present in the hardware, or
+ *  * false if the enhancement is not present in the hardware.
+ */
+#define PVR_HAS_ENHANCEMENT(pvr_dev, enhancement) ((pvr_dev)->enhancements.has_ern##enhancement)
+
 #define from_pvr_device(pvr_dev) (&pvr_dev->base)
 
 #define to_pvr_device(drm_dev) container_of_const(drm_dev, struct pvr_device, base)
@@ -100,9 +225,77 @@ struct pvr_file {
 
 #define to_pvr_file(file) (file->driver_priv)
 
+/**
+ * PVR_PACKED_BVNC() - Packs B, V, N and C values into a 64-bit unsigned integer
+ * @b: Branch ID.
+ * @v: Version ID.
+ * @n: Number of scalable units.
+ * @c: Config ID.
+ *
+ * The packed layout is as follows:
+ *
+ *    +--------+--------+--------+-------+
+ *    | 63..48 | 47..32 | 31..16 | 15..0 |
+ *    +========+========+========+=======+
+ *    | B      | V      | N      | C     |
+ *    +--------+--------+--------+-------+
+ *
+ * pvr_gpu_id_to_packed_bvnc() should be used instead of this macro when a
+ * &struct pvr_gpu_id is available in order to ensure proper type checking.
+ *
+ * Return: Packed BVNC.
+ */
+/* clang-format off */
+#define PVR_PACKED_BVNC(b, v, n, c) \
+	((((u64)(b) & GENMASK_ULL(15, 0)) << 48) | \
+	 (((u64)(v) & GENMASK_ULL(15, 0)) << 32) | \
+	 (((u64)(n) & GENMASK_ULL(15, 0)) << 16) | \
+	 (((u64)(c) & GENMASK_ULL(15, 0)) <<  0))
+/* clang-format on */
+
+/**
+ * pvr_gpu_id_to_packed_bvnc() - Packs B, V, N and C values into a 64-bit
+ * unsigned integer
+ * @gpu_id: GPU ID.
+ *
+ * The packed layout is as follows:
+ *
+ *    +--------+--------+--------+-------+
+ *    | 63..48 | 47..32 | 31..16 | 15..0 |
+ *    +========+========+========+=======+
+ *    | B      | V      | N      | C     |
+ *    +--------+--------+--------+-------+
+ *
+ * This should be used in preference to PVR_PACKED_BVNC() when a &struct
+ * pvr_gpu_id is available in order to ensure proper type checking.
+ *
+ * Return: Packed BVNC.
+ */
+static __always_inline u64
+pvr_gpu_id_to_packed_bvnc(struct pvr_gpu_id *gpu_id)
+{
+	return PVR_PACKED_BVNC(gpu_id->b, gpu_id->v, gpu_id->n, gpu_id->c);
+}
+
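+/**
+ * packed_bvnc_to_pvr_gpu_id() - Unpacks a 64-bit packed BVNC value into its
+ * B, V, N and C components
+ * @bvnc: Packed BVNC value (layout as described for PVR_PACKED_BVNC()).
+ * @gpu_id: Target GPU ID structure to fill in.
+ */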
+static __always_inline void
+packed_bvnc_to_pvr_gpu_id(u64 bvnc, struct pvr_gpu_id *gpu_id)
+{
+	gpu_id->b = (bvnc & GENMASK_ULL(63, 48)) >> 48;
+	gpu_id->v = (bvnc & GENMASK_ULL(47, 32)) >> 32;
+	gpu_id->n = (bvnc & GENMASK_ULL(31, 16)) >> 16;
+	gpu_id->c = bvnc & GENMASK_ULL(15, 0);
+}
+
 int pvr_device_init(struct pvr_device *pvr_dev);
 void pvr_device_fini(struct pvr_device *pvr_dev);
 
+bool
+pvr_device_has_uapi_quirk(struct pvr_device *pvr_dev, u32 quirk);
+bool
+pvr_device_has_uapi_enhancement(struct pvr_device *pvr_dev, u32 enhancement);
+bool
+pvr_device_has_feature(struct pvr_device *pvr_dev, u32 feature);
+
 /**
  * PVR_CR_FIELD_GET() - Extract a single field from a PowerVR control register
  * @val: Value of the target register.
@@ -208,6 +401,29 @@ pvr_cr_poll_reg64(struct pvr_device *pvr_dev, u32 reg_addr, u64 reg_value,
 		(value & reg_mask) == reg_value, 0, timeout_usec);
 }
 
+/**
+ * pvr_round_up_to_cacheline_size() - Round up a provided size to be cacheline
+ *                                    aligned
+ * @pvr_dev: Target PowerVR device.
+ * @size: Initial size, in bytes.
+ *
+ * Returns:
+ *  * Size aligned to cacheline size.
+ */
+static __always_inline size_t
+pvr_round_up_to_cacheline_size(struct pvr_device *pvr_dev, size_t size)
+{
+	u16 slc_cacheline_size_bits = 0;
+	u16 slc_cacheline_size_bytes;
+
+	WARN_ON(!PVR_HAS_FEATURE(pvr_dev, slc_cache_line_size_bits));
+	PVR_FEATURE_VALUE(pvr_dev, slc_cache_line_size_bits,
+			  &slc_cacheline_size_bits);
+	slc_cacheline_size_bytes = slc_cacheline_size_bits / 8;
+
+	return round_up(size, slc_cacheline_size_bytes);
+}
+
 /**
  * DOC: IOCTL validation helpers
  *
@@ -302,4 +518,8 @@ pvr_ioctl_union_padding_check(void *instance, size_t union_offset,
 					      __union_size, __member_size);  \
 	})
 
+#define PVR_FW_PROCESSOR_TYPE_META  0
+#define PVR_FW_PROCESSOR_TYPE_MIPS  1
+#define PVR_FW_PROCESSOR_TYPE_RISCV 2
+
 #endif /* PVR_DEVICE_H */
diff --git a/drivers/gpu/drm/imagination/pvr_device_info.c b/drivers/gpu/drm/imagination/pvr_device_info.c
new file mode 100644
index 000000000000..7eabc13e20b7
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_device_info.c
@@ -0,0 +1,253 @@
+// SPDX-License-Identifier: GPL-2.0 OR MIT
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#include "pvr_device.h"
+#include "pvr_device_info.h"
+#include "pvr_rogue_fwif_dev_info.h"
+
+#include <drm/drm_print.h>
+
+#include <linux/bits.h>
+#include <linux/minmax.h>
+#include <linux/stddef.h>
+#include <linux/types.h>
+
+#define QUIRK_MAPPING(quirk) \
+	[PVR_FW_HAS_BRN_##quirk] = offsetof(struct pvr_device, quirks.has_brn##quirk)
+
+static const uintptr_t quirks_mapping[] = {
+	QUIRK_MAPPING(44079),
+	QUIRK_MAPPING(47217),
+	QUIRK_MAPPING(48492),
+	QUIRK_MAPPING(48545),
+	QUIRK_MAPPING(49927),
+	QUIRK_MAPPING(50767),
+	QUIRK_MAPPING(51764),
+	QUIRK_MAPPING(62269),
+	QUIRK_MAPPING(63142),
+	QUIRK_MAPPING(63553),
+	QUIRK_MAPPING(66011),
+};
+
+#undef QUIRK_MAPPING
+
+#define ENHANCEMENT_MAPPING(enhancement)                             \
+	[PVR_FW_HAS_ERN_##enhancement] = offsetof(struct pvr_device, \
+						  enhancements.has_ern##enhancement)
+
+static const uintptr_t enhancements_mapping[] = {
+	ENHANCEMENT_MAPPING(35421),
+	ENHANCEMENT_MAPPING(38020),
+	ENHANCEMENT_MAPPING(38748),
+	ENHANCEMENT_MAPPING(42064),
+	ENHANCEMENT_MAPPING(42290),
+	ENHANCEMENT_MAPPING(42606),
+	ENHANCEMENT_MAPPING(47025),
+	ENHANCEMENT_MAPPING(57596),
+};
+
+#undef ENHANCEMENT_MAPPING
+
+static void pvr_device_info_set_common(struct pvr_device *pvr_dev, const u64 *bitmask,
+				       u32 bitmask_size, const uintptr_t *mapping, u32 mapping_max)
+{
+	const u32 mapping_max_size = (mapping_max + 63) >> 6;
+	const u32 nr_bits = min(bitmask_size * 64, mapping_max);
+
+	/* Warn if any unsupported values in the bitmask. */
+	if (bitmask_size > mapping_max_size) {
+		if (mapping == quirks_mapping)
+			drm_warn(from_pvr_device(pvr_dev), "Unsupported quirks in firmware image");
+		else
+			drm_warn(from_pvr_device(pvr_dev),
+				 "Unsupported enhancements in firmware image");
+	} else if (bitmask_size == mapping_max_size && (mapping_max & 63)) {
+		u64 invalid_mask = ~0ull << (mapping_max & 63);
+
+		if (bitmask[bitmask_size - 1] & invalid_mask) {
+			if (mapping == quirks_mapping)
+				drm_warn(from_pvr_device(pvr_dev),
+					 "Unsupported quirks in firmware image");
+			else
+				drm_warn(from_pvr_device(pvr_dev),
+					 "Unsupported enhancements in firmware image");
+		}
+	}
+
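+	/*
+	 * Each set bit in the firmware bitmask is translated, via the offset
+	 * mapping table, into a bool flag inside struct pvr_device.
+	 */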
+	for (u32 i = 0; i < nr_bits; i++) {
+		if (bitmask[i >> 6] & BIT_ULL(i & 63))
+			*(bool *)((u8 *)pvr_dev + mapping[i]) = true;
+	}
+}
+
+/**
+ * pvr_device_info_set_quirks() - Set device quirks from device information in firmware
+ * @pvr_dev: Device pointer.
+ * @quirks: Pointer to quirks mask in device information.
+ * @quirks_size: Size of quirks mask, in u64s.
+ */
+void pvr_device_info_set_quirks(struct pvr_device *pvr_dev, const u64 *quirks, u32 quirks_size)
+{
+	BUILD_BUG_ON(ARRAY_SIZE(quirks_mapping) != PVR_FW_HAS_BRN_MAX);
+
+	pvr_device_info_set_common(pvr_dev, quirks, quirks_size, quirks_mapping,
+				   ARRAY_SIZE(quirks_mapping));
+}
+
+/**
+ * pvr_device_info_set_enhancements() - Set device enhancements from device information in firmware
+ * @pvr_dev: Device pointer.
+ * @quirks: Pointer to enhancements mask in device information.
+ * @quirks_size: Size of enhancements mask, in u64s.
+ */
+void pvr_device_info_set_enhancements(struct pvr_device *pvr_dev, const u64 *enhancements,
+				      u32 enhancements_size)
+{
+	BUILD_BUG_ON(ARRAY_SIZE(enhancements_mapping) != PVR_FW_HAS_ERN_MAX);
+
+	pvr_device_info_set_common(pvr_dev, enhancements, enhancements_size,
+				   enhancements_mapping, ARRAY_SIZE(enhancements_mapping));
+}
+
+#define FEATURE_MAPPING(fw_feature, feature)                                        \
+	[PVR_FW_HAS_FEATURE_##fw_feature] = {                                       \
+		.flag_offset = offsetof(struct pvr_device, features.has_##feature), \
+		.value_offset = 0                                                   \
+	}
+
+#define FEATURE_MAPPING_VALUE(fw_feature, feature)                                  \
+	[PVR_FW_HAS_FEATURE_##fw_feature] = {                                       \
+		.flag_offset = offsetof(struct pvr_device, features.has_##feature), \
+		.value_offset = offsetof(struct pvr_device, features.feature)       \
+	}
+
+static const struct {
+	uintptr_t flag_offset;
+	uintptr_t value_offset;
+} features_mapping[] = {
+	FEATURE_MAPPING(AXI_ACELITE, axi_acelite),
+	FEATURE_MAPPING_VALUE(CDM_CONTROL_STREAM_FORMAT, cdm_control_stream_format),
+	FEATURE_MAPPING(CLUSTER_GROUPING, cluster_grouping),
+	FEATURE_MAPPING_VALUE(COMMON_STORE_SIZE_IN_DWORDS, common_store_size_in_dwords),
+	FEATURE_MAPPING(COMPUTE, compute),
+	FEATURE_MAPPING(COMPUTE_MORTON_CAPABLE, compute_morton_capable),
+	FEATURE_MAPPING(COMPUTE_OVERLAP, compute_overlap),
+	FEATURE_MAPPING(COREID_PER_OS, coreid_per_os),
+	FEATURE_MAPPING(DYNAMIC_DUST_POWER, dynamic_dust_power),
+	FEATURE_MAPPING_VALUE(ECC_RAMS, ecc_rams),
+	FEATURE_MAPPING_VALUE(FBCDC, fbcdc),
+	FEATURE_MAPPING_VALUE(FBCDC_ALGORITHM, fbcdc_algorithm),
+	FEATURE_MAPPING_VALUE(FBCDC_ARCHITECTURE, fbcdc_architecture),
+	FEATURE_MAPPING_VALUE(FBC_MAX_DEFAULT_DESCRIPTORS, fbc_max_default_descriptors),
+	FEATURE_MAPPING_VALUE(FBC_MAX_LARGE_DESCRIPTORS, fbc_max_large_descriptors),
+	FEATURE_MAPPING(FB_CDC_V4, fb_cdc_v4),
+	FEATURE_MAPPING(GPU_MULTICORE_SUPPORT, gpu_multicore_support),
+	FEATURE_MAPPING(GPU_VIRTUALISATION, gpu_virtualisation),
+	FEATURE_MAPPING(GS_RTA_SUPPORT, gs_rta_support),
+	FEATURE_MAPPING(IRQ_PER_OS, irq_per_os),
+	FEATURE_MAPPING_VALUE(ISP_MAX_TILES_IN_FLIGHT, isp_max_tiles_in_flight),
+	FEATURE_MAPPING_VALUE(ISP_SAMPLES_PER_PIXEL, isp_samples_per_pixel),
+	FEATURE_MAPPING(ISP_ZLS_D24_S8_PACKING_OGL_MODE, isp_zls_d24_s8_packing_ogl_mode),
+	FEATURE_MAPPING_VALUE(LAYOUT_MARS, layout_mars),
+	FEATURE_MAPPING_VALUE(MAX_PARTITIONS, max_partitions),
+	FEATURE_MAPPING_VALUE(META, meta),
+	FEATURE_MAPPING_VALUE(META_COREMEM_SIZE, meta_coremem_size),
+	FEATURE_MAPPING(MIPS, mips),
+	FEATURE_MAPPING_VALUE(NUM_CLUSTERS, num_clusters),
+	FEATURE_MAPPING_VALUE(NUM_ISP_IPP_PIPES, num_isp_ipp_pipes),
+	FEATURE_MAPPING_VALUE(NUM_OSIDS, num_osids),
+	FEATURE_MAPPING_VALUE(NUM_RASTER_PIPES, num_raster_pipes),
+	FEATURE_MAPPING(PBE2_IN_XE, pbe2_in_xe),
+	FEATURE_MAPPING(PBVNC_COREID_REG, pbvnc_coreid_reg),
+	FEATURE_MAPPING(PERFBUS, perfbus),
+	FEATURE_MAPPING(PERF_COUNTER_BATCH, perf_counter_batch),
+	FEATURE_MAPPING_VALUE(PHYS_BUS_WIDTH, phys_bus_width),
+	FEATURE_MAPPING(RISCV_FW_PROCESSOR, riscv_fw_processor),
+	FEATURE_MAPPING(ROGUEXE, roguexe),
+	FEATURE_MAPPING(S7_TOP_INFRASTRUCTURE, s7_top_infrastructure),
+	FEATURE_MAPPING(SIMPLE_INTERNAL_PARAMETER_FORMAT, simple_internal_parameter_format),
+	FEATURE_MAPPING(SIMPLE_INTERNAL_PARAMETER_FORMAT_V2, simple_internal_parameter_format_v2),
+	FEATURE_MAPPING_VALUE(SIMPLE_PARAMETER_FORMAT_VERSION, simple_parameter_format_version),
+	FEATURE_MAPPING_VALUE(SLC_BANKS, slc_banks),
+	FEATURE_MAPPING_VALUE(SLC_CACHE_LINE_SIZE_BITS, slc_cache_line_size_bits),
+	FEATURE_MAPPING(SLC_SIZE_CONFIGURABLE, slc_size_configurable),
+	FEATURE_MAPPING_VALUE(SLC_SIZE_IN_KILOBYTES, slc_size_in_kilobytes),
+	FEATURE_MAPPING(SOC_TIMER, soc_timer),
+	FEATURE_MAPPING(SYS_BUS_SECURE_RESET, sys_bus_secure_reset),
+	FEATURE_MAPPING(TESSELLATION, tessellation),
+	FEATURE_MAPPING(TILE_REGION_PROTECTION, tile_region_protection),
+	FEATURE_MAPPING_VALUE(TILE_SIZE_X, tile_size_x),
+	FEATURE_MAPPING_VALUE(TILE_SIZE_Y, tile_size_y),
+	FEATURE_MAPPING(TLA, tla),
+	FEATURE_MAPPING(TPU_CEM_DATAMASTER_GLOBAL_REGISTERS, tpu_cem_datamaster_global_registers),
+	FEATURE_MAPPING(TPU_DM_GLOBAL_REGISTERS, tpu_dm_global_registers),
+	FEATURE_MAPPING(TPU_FILTERING_MODE_CONTROL, tpu_filtering_mode_control),
+	FEATURE_MAPPING_VALUE(USC_MIN_OUTPUT_REGISTERS_PER_PIX, usc_min_output_registers_per_pix),
+	FEATURE_MAPPING(VDM_DRAWINDIRECT, vdm_drawindirect),
+	FEATURE_MAPPING(VDM_OBJECT_LEVEL_LLS, vdm_object_level_lls),
+	FEATURE_MAPPING_VALUE(VIRTUAL_ADDRESS_SPACE_BITS, virtual_address_space_bits),
+	FEATURE_MAPPING(WATCHDOG_TIMER, watchdog_timer),
+	FEATURE_MAPPING(WORKGROUP_PROTECTION, workgroup_protection),
+	FEATURE_MAPPING_VALUE(XE_ARCHITECTURE, xe_architecture),
+	FEATURE_MAPPING(XE_MEMORY_HIERARCHY, xe_memory_hierarchy),
+	FEATURE_MAPPING(XE_TPU2, xe_tpu2),
+	FEATURE_MAPPING_VALUE(XPU_MAX_REGBANKS_ADDR_WIDTH, xpu_max_regbanks_addr_width),
+	FEATURE_MAPPING_VALUE(XPU_MAX_SLAVES, xpu_max_slaves),
+	FEATURE_MAPPING_VALUE(XPU_REGISTER_BROADCAST, xpu_register_broadcast),
+	FEATURE_MAPPING(XT_TOP_INFRASTRUCTURE, xt_top_infrastructure),
+	FEATURE_MAPPING(ZLS_SUBTILE, zls_subtile),
+};
+
+#undef FEATURE_MAPPING_VALUE
+#undef FEATURE_MAPPING
+
+/**
+ * pvr_device_info_set_features() - Set device features from device information in firmware
+ * @pvr_dev: Device pointer.
+ * @features: Pointer to features mask in device information.
+ * @features_size: Size of features mask, in u64s.
+ * @feature_param_size: Size of feature parameters, in u64s.
+ *
+ * Returns:
+ *  * 0 on success, or
+ *  * -%EINVAL on malformed stream.
+ */
+int pvr_device_info_set_features(struct pvr_device *pvr_dev, const u64 *features, u32 features_size,
+				 u32 feature_param_size)
+{
+	const u32 mapping_max = ARRAY_SIZE(features_mapping);
+	const u32 mapping_max_size = (mapping_max + 63) >> 6;
+	const u32 nr_bits = min(features_size * 64, mapping_max);
+	const u64 *feature_params = features + features_size;
+	u32 param_idx = 0;
+
+	BUILD_BUG_ON(ARRAY_SIZE(features_mapping) != PVR_FW_HAS_FEATURE_MAX);
+
+	/* Verify no unsupported values in the bitmask. */
+	if (features_size > mapping_max_size) {
+		drm_warn(from_pvr_device(pvr_dev), "Unsupported features in firmware image");
+	} else if (features_size == mapping_max_size && (mapping_max & 63)) {
+		u64 invalid_mask = ~0ull << (mapping_max & 63);
+
+		if (features[features_size - 1] & invalid_mask)
+			drm_warn(from_pvr_device(pvr_dev),
+				 "Unsupported features in firmware image");
+	}
+
+	for (u32 i = 0; i < nr_bits; i++) {
+		if (features[i >> 6] & BIT_ULL(i & 63)) {
+			*(bool *)((u8 *)pvr_dev + features_mapping[i].flag_offset) = true;
+
+			if (features_mapping[i].value_offset) {
+				if (param_idx >= feature_param_size)
+					return -EINVAL;
+
+				*(u64 *)((u8 *)pvr_dev + features_mapping[i].value_offset) =
+					feature_params[param_idx];
+				param_idx++;
+			}
+		}
+	}
+
+	return 0;
+}
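
As a worked illustration of the stream format parsed here (hypothetical values, not
taken from any real firmware image): the features bitmask is followed immediately by
one u64 parameter per set bit whose mapping carries a value, consumed in ascending
bit order.

	/*
	 * features[0]      = 0x5;      bits 0 and 2 set
	 * feature_params[] = { 16 };   consumed by the first set bit whose
	 *                              features_mapping[] entry has a
	 *                              value_offset
	 *
	 * Each set bit sets the corresponding has_* flag; running out of
	 * feature_params entries makes the function return -EINVAL.
	 */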
diff --git a/drivers/gpu/drm/imagination/pvr_device_info.h b/drivers/gpu/drm/imagination/pvr_device_info.h
new file mode 100644
index 000000000000..cfad708fc20a
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_device_info.h
@@ -0,0 +1,185 @@
+/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#ifndef PVR_DEVICE_INFO_H
+#define PVR_DEVICE_INFO_H
+
+#include <linux/types.h>
+
+struct pvr_device;
+
+/*
+ * struct pvr_device_features - Hardware feature information
+ */
+struct pvr_device_features {
+	bool has_axi_acelite;
+	bool has_cdm_control_stream_format;
+	bool has_cluster_grouping;
+	bool has_common_store_size_in_dwords;
+	bool has_compute;
+	bool has_compute_morton_capable;
+	bool has_compute_overlap;
+	bool has_coreid_per_os;
+	bool has_dynamic_dust_power;
+	bool has_ecc_rams;
+	bool has_fb_cdc_v4;
+	bool has_fbc_max_default_descriptors;
+	bool has_fbc_max_large_descriptors;
+	bool has_fbcdc;
+	bool has_fbcdc_algorithm;
+	bool has_fbcdc_architecture;
+	bool has_gpu_multicore_support;
+	bool has_gpu_virtualisation;
+	bool has_gs_rta_support;
+	bool has_irq_per_os;
+	bool has_isp_max_tiles_in_flight;
+	bool has_isp_samples_per_pixel;
+	bool has_isp_zls_d24_s8_packing_ogl_mode;
+	bool has_layout_mars;
+	bool has_max_partitions;
+	bool has_meta;
+	bool has_meta_coremem_size;
+	bool has_mips;
+	bool has_num_clusters;
+	bool has_num_isp_ipp_pipes;
+	bool has_num_osids;
+	bool has_num_raster_pipes;
+	bool has_pbe2_in_xe;
+	bool has_pbvnc_coreid_reg;
+	bool has_perfbus;
+	bool has_perf_counter_batch;
+	bool has_phys_bus_width;
+	bool has_riscv_fw_processor;
+	bool has_roguexe;
+	bool has_s7_top_infrastructure;
+	bool has_simple_internal_parameter_format;
+	bool has_simple_internal_parameter_format_v2;
+	bool has_simple_parameter_format_version;
+	bool has_slc_banks;
+	bool has_slc_cache_line_size_bits;
+	bool has_slc_size_configurable;
+	bool has_slc_size_in_kilobytes;
+	bool has_soc_timer;
+	bool has_sys_bus_secure_reset;
+	bool has_tessellation;
+	bool has_tile_region_protection;
+	bool has_tile_size_x;
+	bool has_tile_size_y;
+	bool has_tla;
+	bool has_tpu_cem_datamaster_global_registers;
+	bool has_tpu_dm_global_registers;
+	bool has_tpu_filtering_mode_control;
+	bool has_usc_min_output_registers_per_pix;
+	bool has_vdm_drawindirect;
+	bool has_vdm_object_level_lls;
+	bool has_virtual_address_space_bits;
+	bool has_watchdog_timer;
+	bool has_workgroup_protection;
+	bool has_xe_architecture;
+	bool has_xe_memory_hierarchy;
+	bool has_xe_tpu2;
+	bool has_xpu_max_regbanks_addr_width;
+	bool has_xpu_max_slaves;
+	bool has_xpu_register_broadcast;
+	bool has_xt_top_infrastructure;
+	bool has_zls_subtile;
+
+	u64 cdm_control_stream_format;
+	u64 common_store_size_in_dwords;
+	u64 ecc_rams;
+	u64 fbc_max_default_descriptors;
+	u64 fbc_max_large_descriptors;
+	u64 fbcdc;
+	u64 fbcdc_algorithm;
+	u64 fbcdc_architecture;
+	u64 isp_max_tiles_in_flight;
+	u64 isp_samples_per_pixel;
+	u64 layout_mars;
+	u64 max_partitions;
+	u64 meta;
+	u64 meta_coremem_size;
+	u64 num_clusters;
+	u64 num_isp_ipp_pipes;
+	u64 num_osids;
+	u64 num_raster_pipes;
+	u64 phys_bus_width;
+	u64 simple_parameter_format_version;
+	u64 slc_banks;
+	u64 slc_cache_line_size_bits;
+	u64 slc_size_in_kilobytes;
+	u64 tile_size_x;
+	u64 tile_size_y;
+	u64 usc_min_output_registers_per_pix;
+	u64 virtual_address_space_bits;
+	u64 xe_architecture;
+	u64 xpu_max_regbanks_addr_width;
+	u64 xpu_max_slaves;
+	u64 xpu_register_broadcast;
+};
+
+/*
+ * struct pvr_device_quirks - Hardware quirk information
+ */
+struct pvr_device_quirks {
+	bool has_brn44079;
+	bool has_brn47217;
+	bool has_brn48492;
+	bool has_brn48545;
+	bool has_brn49927;
+	bool has_brn50767;
+	bool has_brn51764;
+	bool has_brn62269;
+	bool has_brn63142;
+	bool has_brn63553;
+	bool has_brn66011;
+};
+
+/*
+ * struct pvr_device_enhancements - Hardware enhancement information
+ */
+struct pvr_device_enhancements {
+	bool has_ern35421;
+	bool has_ern38020;
+	bool has_ern38748;
+	bool has_ern42064;
+	bool has_ern42290;
+	bool has_ern42606;
+	bool has_ern47025;
+	bool has_ern57596;
+};
+
+void pvr_device_info_set_quirks(struct pvr_device *pvr_dev, const u64 *bitmask,
+				u32 bitmask_len);
+void pvr_device_info_set_enhancements(struct pvr_device *pvr_dev, const u64 *bitmask,
+				      u32 bitmask_len);
+int pvr_device_info_set_features(struct pvr_device *pvr_dev, const u64 *features, u32 features_size,
+				 u32 feature_param_size);
+
+/*
+ * Meta cores
+ *
+ * These are the values for the 'meta' feature when the feature is present
+ * (as per &struct pvr_device_features).
+ */
+#define PVR_META_MTP218 (1)
+#define PVR_META_MTP219 (2)
+#define PVR_META_LTP218 (3)
+#define PVR_META_LTP217 (4)
+
+enum {
+	PVR_FEATURE_CDM_USER_MODE_QUEUE,
+	PVR_FEATURE_CLUSTER_GROUPING,
+	PVR_FEATURE_COMPUTE_MORTON_CAPABLE,
+	PVR_FEATURE_FB_CDC_V4,
+	PVR_FEATURE_GPU_MULTICORE_SUPPORT,
+	PVR_FEATURE_ISP_ZLS_D24_S8_PACKING_OGL_MODE,
+	PVR_FEATURE_REQUIRES_FB_CDC_ZLS_SETUP,
+	PVR_FEATURE_S7_TOP_INFRASTRUCTURE,
+	PVR_FEATURE_TESSELLATION,
+	PVR_FEATURE_TPU_DM_GLOBAL_REGISTERS,
+	PVR_FEATURE_VDM_DRAWINDIRECT,
+	PVR_FEATURE_VDM_OBJECT_LEVEL_LLS,
+	PVR_FEATURE_ZLS_SUBTILE,
+};
+
+#endif /* PVR_DEVICE_INFO_H */
diff --git a/drivers/gpu/drm/imagination/pvr_drv.c b/drivers/gpu/drm/imagination/pvr_drv.c
index ebab8b477749..eb3018f94f7c 100644
--- a/drivers/gpu/drm/imagination/pvr_drv.c
+++ b/drivers/gpu/drm/imagination/pvr_drv.c
@@ -3,6 +3,9 @@
 
 #include "pvr_device.h"
 #include "pvr_drv.h"
+#include "pvr_rogue_defs.h"
+#include "pvr_rogue_fwif_client.h"
+#include "pvr_rogue_fwif_shared.h"
 
 #include <uapi/drm/pvr_drm.h>
 
@@ -88,6 +91,382 @@ pvr_ioctl_get_bo_mmap_offset(struct drm_device *drm_dev, void *raw_args,
 	return -ENOTTY;
 }
 
+static __always_inline u64
+pvr_fw_version_packed(u32 major, u32 minor)
+{
+	return ((u64)major << 32) | minor;
+}
+
+static u32
+rogue_get_common_store_partition_space_size(struct pvr_device *pvr_dev)
+{
+	u32 max_partitions = 0;
+	u32 tile_size_x = 0;
+	u32 tile_size_y = 0;
+
+	PVR_FEATURE_VALUE(pvr_dev, tile_size_x, &tile_size_x);
+	PVR_FEATURE_VALUE(pvr_dev, tile_size_y, &tile_size_y);
+	PVR_FEATURE_VALUE(pvr_dev, max_partitions, &max_partitions);
+
+	if (tile_size_x == 16 && tile_size_y == 16) {
+		u32 usc_min_output_registers_per_pix = 0;
+
+		PVR_FEATURE_VALUE(pvr_dev, usc_min_output_registers_per_pix,
+				  &usc_min_output_registers_per_pix);
+
+		return tile_size_x * tile_size_y * max_partitions *
+		       usc_min_output_registers_per_pix;
+	}
+
+	return max_partitions * 1024;
+}
+
+static u32
+rogue_get_common_store_alloc_region_size(struct pvr_device *pvr_dev)
+{
+	u32 common_store_size_in_dwords = 512 * 4 * 4;
+	u32 alloc_region_size;
+
+	PVR_FEATURE_VALUE(pvr_dev, common_store_size_in_dwords, &common_store_size_in_dwords);
+
+	alloc_region_size = common_store_size_in_dwords - (256U * 4U) -
+			    rogue_get_common_store_partition_space_size(pvr_dev);
+
+	if (PVR_HAS_QUIRK(pvr_dev, 44079)) {
+		u32 common_store_split_point = (768U * 4U * 4U);
+
+		return min(common_store_split_point - (256U * 4U), alloc_region_size);
+	}
+
+	return alloc_region_size;
+}
+
+static inline u32
+rogue_get_num_phantoms(struct pvr_device *pvr_dev)
+{
+	u32 num_clusters = 1;
+
+	PVR_FEATURE_VALUE(pvr_dev, num_clusters, &num_clusters);
+
+	return ROGUE_REQ_NUM_PHANTOMS(num_clusters);
+}
+
+static inline u32
+rogue_get_max_coeffs(struct pvr_device *pvr_dev)
+{
+	u32 max_coeff_additional_portion = ROGUE_MAX_VERTEX_SHARED_REGISTERS;
+	u32 pending_allocation_shared_regs = 2U * 1024U;
+	u32 pending_allocation_coeff_regs = 0U;
+	u32 num_phantoms = rogue_get_num_phantoms(pvr_dev);
+	u32 tiles_in_flight = 0;
+	u32 max_coeff_pixel_portion;
+
+	PVR_FEATURE_VALUE(pvr_dev, isp_max_tiles_in_flight, &tiles_in_flight);
+	max_coeff_pixel_portion = DIV_ROUND_UP(tiles_in_flight, num_phantoms);
+	max_coeff_pixel_portion *= ROGUE_MAX_PIXEL_SHARED_REGISTERS;
+
+	/*
+	 * Compute tasks on cores with BRN48492 and without compute overlap may lock
+	 * up without two additional lines of coeffs.
+	 */
+	if (PVR_HAS_QUIRK(pvr_dev, 48492) && !PVR_HAS_FEATURE(pvr_dev, compute_overlap))
+		pending_allocation_coeff_regs = 2U * 1024U;
+
+	if (PVR_HAS_ENHANCEMENT(pvr_dev, 38748))
+		pending_allocation_shared_regs = 0;
+
+	if (PVR_HAS_ENHANCEMENT(pvr_dev, 38020))
+		max_coeff_additional_portion += ROGUE_MAX_COMPUTE_SHARED_REGISTERS;
+
+	return rogue_get_common_store_alloc_region_size(pvr_dev) + pending_allocation_coeff_regs -
+		(max_coeff_pixel_portion + max_coeff_additional_portion +
+		 pending_allocation_shared_regs);
+}
+
+static inline u32
+rogue_get_cdm_max_local_mem_size_regs(struct pvr_device *pvr_dev)
+{
+	u32 available_coeffs_in_dwords = rogue_get_max_coeffs(pvr_dev);
+
+	if (PVR_HAS_QUIRK(pvr_dev, 48492) && PVR_HAS_FEATURE(pvr_dev, roguexe) &&
+	    !PVR_HAS_FEATURE(pvr_dev, compute_overlap)) {
+		/* Driver must not use the 2 reserved lines. */
+		available_coeffs_in_dwords -= ROGUE_CSRM_LINE_SIZE_IN_DWORDS * 2;
+	}
+
+	/*
+	 * The maximum amount of local memory available to a kernel is the minimum
+	 * of the total number of coefficient registers available and the max common
+	 * store allocation size which can be made by the CDM.
+	 *
+	 * If any coeff lines are reserved for tessellation or pixel then we need to
+	 * subtract those too.
+	 */
+	return min(available_coeffs_in_dwords, (u32)ROGUE_MAX_PER_KERNEL_LOCAL_MEM_SIZE_REGS);
+}
+
+/**
+ * pvr_dev_query_gpu_info_get() - Copy GPU information to userspace
+ * @pvr_dev: Device pointer.
+ * @args: [IN] Device query arguments containing a pointer to a userspace
+ *        struct drm_pvr_dev_query_gpu_info.
+ *
+ * If the query object pointer is NULL, the size field is updated with the
+ * expected size of the query object.
+ *
+ * Returns:
+ *  * 0 on success, or if size is requested using a NULL pointer, or
+ *  * -%E2BIG if the indicated length of the allocation is less than is
+ *    required to contain the copied data, or
+ *  * -%EFAULT if local memory could not be copied to userspace.
+ */
+static int
+pvr_dev_query_gpu_info_get(struct pvr_device *pvr_dev,
+			   struct drm_pvr_ioctl_dev_query_args *args)
+{
+	struct drm_pvr_dev_query_gpu_info gpu_info = {0};
+	int err;
+
+	if (!args->pointer) {
+		args->size = sizeof(struct drm_pvr_dev_query_gpu_info);
+		return 0;
+	}
+
+	gpu_info.gpu_id =
+		pvr_gpu_id_to_packed_bvnc(&pvr_dev->gpu_id);
+	gpu_info.num_phantoms = rogue_get_num_phantoms(pvr_dev);
+
+	err = PVR_UOBJ_SET(args->pointer, args->size, gpu_info);
+	if (err < 0)
+		return err;
+
+	if (args->size > sizeof(gpu_info))
+		args->size = sizeof(gpu_info);
+	return 0;
+}
+
+/**
+ * pvr_dev_query_runtime_info_get() - Copy runtime information to userspace
+ * @pvr_dev: Device pointer.
+ * @args: [IN] Device query arguments containing a pointer to a userspace
+ *        struct drm_pvr_dev_query_runtime_info.
+ *
+ * If the query object pointer is NULL, the size field is updated with the
+ * expected size of the query object.
+ *
+ * Returns:
+ *  * 0 on success, or if size is requested using a NULL pointer, or
+ *  * -%E2BIG if the indicated length of the allocation is less than is
+ *    required to contain the copied data, or
+ *  * -%EFAULT if local memory could not be copied to userspace.
+ */
+static int
+pvr_dev_query_runtime_info_get(struct pvr_device *pvr_dev,
+			       struct drm_pvr_ioctl_dev_query_args *args)
+{
+	struct drm_pvr_dev_query_runtime_info runtime_info = {0};
+	int err;
+
+	if (!args->pointer) {
+		args->size = sizeof(struct drm_pvr_dev_query_runtime_info);
+		return 0;
+	}
+
+	runtime_info.free_list_min_pages = 0; /* FIXME */
+	runtime_info.free_list_max_pages =
+		ROGUE_PM_MAX_FREELIST_SIZE / ROGUE_PM_PAGE_SIZE;
+	runtime_info.common_store_alloc_region_size =
+		rogue_get_common_store_alloc_region_size(pvr_dev);
+	runtime_info.common_store_partition_space_size =
+		rogue_get_common_store_partition_space_size(pvr_dev);
+	runtime_info.max_coeffs = rogue_get_max_coeffs(pvr_dev);
+	runtime_info.cdm_max_local_mem_size_regs =
+		rogue_get_cdm_max_local_mem_size_regs(pvr_dev);
+
+	err = PVR_UOBJ_SET(args->pointer, args->size, runtime_info);
+	if (err < 0)
+		return err;
+
+	if (args->size > sizeof(runtime_info))
+		args->size = sizeof(runtime_info);
+	return 0;
+}
+
+/**
+ * pvr_dev_query_quirks_get() - Unpack array of quirks at the address given
+ * in a struct drm_pvr_dev_query_quirks, or get the amount of space required
+ * for it.
+ * @pvr_dev: Device pointer.
+ * @args: [IN] Device query arguments containing a pointer to a userspace
+ *        struct drm_pvr_dev_query_quirks.
+ *
+ * If the query object pointer is NULL, the size field is updated with the
+ * expected size of the query object.
+ * If the userspace pointer in the query object is NULL, or the count is
+ * short, no data is copied.
+ * The count field will be updated to the number of quirks copied or, if
+ * either pointer is NULL, the number that would have been copied.
+ * The size field in the query object will be updated to the size copied.
+ *
+ * Returns:
+ *  * 0 on success, or if size/count is requested using a NULL pointer, or
+ *  * -%EINVAL if args contained non-zero reserved fields, or
+ *  * -%E2BIG if the indicated length of the allocation is less than is
+ *    required to contain the copied data, or
+ *  * -%EFAULT if local memory could not be copied to userspace.
+ */
+static int
+pvr_dev_query_quirks_get(struct pvr_device *pvr_dev,
+			 struct drm_pvr_ioctl_dev_query_args *args)
+{
+	/*
+	 * @FIXME - hardcoding of numbers here is intended as an
+	 * intermediate step so the UAPI can be fixed, but requires a
+	 * refactor in the future to store them in a more appropriate
+	 * location.
+	 */
+	static const u32 umd_quirks_musthave[] = {
+		47217,
+		49927,
+		62269,
+	};
+	static const u32 umd_quirks[] = {
+		48545,
+		51764,
+	};
+	struct drm_pvr_dev_query_quirks query;
+	u32 out[ARRAY_SIZE(umd_quirks_musthave) + ARRAY_SIZE(umd_quirks)];
+	size_t out_musthave_count = 0;
+	size_t out_count = 0;
+	int err;
+
+	if (!args->pointer) {
+		args->size = sizeof(struct drm_pvr_dev_query_quirks);
+		return 0;
+	}
+
+	err = PVR_UOBJ_GET(query, args->size, args->pointer);
+
+	if (err < 0)
+		return err;
+	if (query._padding_c)
+		return -EINVAL;
+
+	for (int i = 0; i < ARRAY_SIZE(umd_quirks_musthave); i++) {
+		if (pvr_device_has_uapi_quirk(pvr_dev, umd_quirks_musthave[i])) {
+			out[out_count++] = umd_quirks_musthave[i];
+			out_musthave_count++;
+		}
+	}
+
+	for (int i = 0; i < ARRAY_SIZE(umd_quirks); i++) {
+		if (pvr_device_has_uapi_quirk(pvr_dev, umd_quirks[i]))
+			out[out_count++] = umd_quirks[i];
+	}
+
+	if (!query.quirks)
+		goto copy_out;
+	if (query.count < out_count)
+		return -E2BIG;
+
+	if (copy_to_user(u64_to_user_ptr(query.quirks), out,
+			 out_count * sizeof(u32))) {
+		return -EFAULT;
+	}
+
+	query.musthave_count = out_musthave_count;
+
+copy_out:
+	query.count = out_count;
+	err = PVR_UOBJ_SET(args->pointer, args->size, query);
+	if (err < 0)
+		return err;
+
+	args->size = sizeof(query);
+	return 0;
+}
+
+/**
+ * pvr_dev_query_enhancements_get() - Unpack array of enhancements at the
+ * address given in a struct drm_pvr_dev_query_enhancements, or get the amount
+ * of space required for it.
+ * @pvr_dev: Device pointer.
+ * @args: [IN] Device query arguments containing a pointer to a userspace
+ *        struct drm_pvr_dev_query_enhancements.
+ *
+ * If the query object pointer is NULL, the size field is updated with the
+ * expected size of the query object.
+ * If the userspace pointer in the query object is NULL, or the count is
+ * short, no data is copied.
+ * The count field will be updated to the number of enhancements copied or,
+ * if either pointer is NULL, the number that would have been copied.
+ * The size field in the query object will be updated to the size copied.
+ *
+ * Returns:
+ *  * 0 on success, or if size/count is requested using a NULL pointer, or
+ *  * -%EINVAL if args contained non-zero reserved fields, or
+ *  * -%E2BIG if the indicated length of the allocation is less than is
+ *    required to contain the copied data, or
+ *  * -%EFAULT if local memory could not be copied to userspace.
+ */
+static int
+pvr_dev_query_enhancements_get(struct pvr_device *pvr_dev,
+			       struct drm_pvr_ioctl_dev_query_args *args)
+{
+	/*
+	 * @FIXME - hardcoding of numbers here is intended as an
+	 * intermediate step so the UAPI can be fixed, but requires a
+	 * refactor in the future to store them in a more appropriate
+	 * location.
+	 */
+	const u32 umd_enhancements[] = {
+		35421,
+		42064,
+	};
+	struct drm_pvr_dev_query_enhancements query;
+	u32 out[ARRAY_SIZE(umd_enhancements)];
+	size_t out_idx = 0;
+	int err;
+
+	if (!args->pointer) {
+		args->size = sizeof(struct drm_pvr_dev_query_enhancements);
+		return 0;
+	}
+
+	err = PVR_UOBJ_GET(query, args->size, args->pointer);
+
+	if (err < 0)
+		return err;
+	if (query._padding_a)
+		return -EINVAL;
+	if (query._padding_c)
+		return -EINVAL;
+
+	for (int i = 0; i < ARRAY_SIZE(umd_enhancements); i++) {
+		if (pvr_device_has_uapi_enhancement(pvr_dev, umd_enhancements[i]))
+			out[out_idx++] = umd_enhancements[i];
+	}
+
+	if (!query.enhancements)
+		goto copy_out;
+	if (query.count < out_idx)
+		return -E2BIG;
+
+	if (copy_to_user(u64_to_user_ptr(query.enhancements), out,
+			 out_idx * sizeof(u32))) {
+		return -EFAULT;
+	}
+
+copy_out:
+	query.count = out_idx;
+	err = PVR_UOBJ_SET(args->pointer, args->size, query);
+	if (err < 0)
+		return err;
+
+	args->size = sizeof(query);
+	return 0;
+}
+
 /**
  * pvr_ioctl_dev_query() - IOCTL to copy information about a device
  * @drm_dev: [IN] DRM device.
@@ -112,7 +491,41 @@ static int
 pvr_ioctl_dev_query(struct drm_device *drm_dev, void *raw_args,
 		    struct drm_file *file)
 {
-	return -ENOTTY;
+	struct pvr_device *pvr_dev = to_pvr_device(drm_dev);
+	struct drm_pvr_ioctl_dev_query_args *args = raw_args;
+	int idx;
+	int ret = -EINVAL;
+
+	if (!drm_dev_enter(drm_dev, &idx))
+		return -EIO;
+
+	switch ((enum drm_pvr_dev_query)args->type) {
+	case DRM_PVR_DEV_QUERY_GPU_INFO_GET:
+		ret = pvr_dev_query_gpu_info_get(pvr_dev, args);
+		break;
+
+	case DRM_PVR_DEV_QUERY_RUNTIME_INFO_GET:
+		ret = pvr_dev_query_runtime_info_get(pvr_dev, args);
+		break;
+
+	case DRM_PVR_DEV_QUERY_QUIRKS_GET:
+		ret = pvr_dev_query_quirks_get(pvr_dev, args);
+		break;
+
+	case DRM_PVR_DEV_QUERY_ENHANCEMENTS_GET:
+		ret = pvr_dev_query_enhancements_get(pvr_dev, args);
+		break;
+
+	case DRM_PVR_DEV_QUERY_HEAP_INFO_GET:
+		return -EINVAL;
+
+	case DRM_PVR_DEV_QUERY_STATIC_DATA_AREAS_GET:
+		return -EINVAL;
+	}
+
+	drm_dev_exit(idx);
+
+	return ret;
 }
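
The query types handled above all follow the same two-call pattern: issue the ioctl
with a NULL pointer to discover the object size, then again with a buffer to receive
the data. A minimal userspace sketch, assuming the uapi ioctl macro is named
DRM_IOCTL_PVR_DEV_QUERY and omitting error handling:

	struct drm_pvr_dev_query_gpu_info gpu_info = {0};
	struct drm_pvr_ioctl_dev_query_args args = {
		.type = DRM_PVR_DEV_QUERY_GPU_INFO_GET,
		.pointer = 0,			/* NULL: just report the size */
	};

	ioctl(fd, DRM_IOCTL_PVR_DEV_QUERY, &args);	/* args.size now valid */

	args.pointer = (__u64)(uintptr_t)&gpu_info;
	args.size = sizeof(gpu_info);
	ioctl(fd, DRM_IOCTL_PVR_DEV_QUERY, &args);	/* fills gpu_info */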
 
 /**
@@ -350,6 +763,112 @@ pvr_ioctl_submit_jobs(struct drm_device *drm_dev, void *raw_args,
 	return -ENOTTY;
 }
 
+int
+pvr_get_uobj(u64 usr_ptr, u32 usr_stride, u32 min_stride, u32 obj_size, void *out)
+{
+	if (usr_stride < min_stride)
+		return -EINVAL;
+
+	return copy_struct_from_user(out, obj_size, u64_to_user_ptr(usr_ptr), usr_stride);
+}
+
+int
+pvr_set_uobj(u64 usr_ptr, u32 usr_stride, u32 min_stride, u32 obj_size, const void *in)
+{
+	if (usr_stride < min_stride)
+		return -EINVAL;
+
+	if (copy_to_user(u64_to_user_ptr(usr_ptr), in, min_t(u32, usr_stride, obj_size)))
+		return -EFAULT;
+
+	if (usr_stride > obj_size &&
+	    clear_user(u64_to_user_ptr(usr_ptr + obj_size), usr_stride - obj_size)) {
+		return -EFAULT;
+	}
+
+	return 0;
+}
+
+int
+pvr_get_uobj_array(const struct drm_pvr_obj_array *in, u32 min_stride, u32 obj_size, void **out)
+{
+	int ret = 0;
+	void *out_alloc;
+
+	if (in->stride < min_stride)
+		return -EINVAL;
+
+	if (!in->count)
+		return 0;
+
+	out_alloc = kvmalloc_array(in->count, obj_size, GFP_KERNEL);
+	if (!out_alloc)
+		return -ENOMEM;
+
+	if (obj_size == in->stride) {
+		if (copy_from_user(out_alloc, u64_to_user_ptr(in->array),
+				   (unsigned long)obj_size * in->count))
+			ret = -EFAULT;
+	} else {
+		void __user *in_ptr = u64_to_user_ptr(in->array);
+		void *out_ptr = out_alloc;
+
+		for (u32 i = 0; i < in->count; i++) {
+			ret = copy_struct_from_user(out_ptr, obj_size, in_ptr, in->stride);
+			if (ret)
+				break;
+
+			out_ptr += obj_size;
+			in_ptr += in->stride;
+		}
+	}
+
+	if (ret) {
+		kvfree(out_alloc);
+		return ret;
+	}
+
+	*out = out_alloc;
+	return 0;
+}
+
+int
+pvr_set_uobj_array(const struct drm_pvr_obj_array *out, u32 min_stride, u32 obj_size,
+		   const void *in)
+{
+	if (out->stride < min_stride)
+		return -EINVAL;
+
+	if (!out->count)
+		return 0;
+
+	if (obj_size == out->stride) {
+		if (copy_to_user(u64_to_user_ptr(out->array), in,
+				 (unsigned long)obj_size * out->count))
+			return -EFAULT;
+	} else {
+		u32 cpy_elem_size = min_t(u32, out->stride, obj_size);
+		void __user *out_ptr = u64_to_user_ptr(out->array);
+		const void *in_ptr = in;
+
+		for (u32 i = 0; i < out->count; i++) {
+			if (copy_to_user(out_ptr, in_ptr, cpy_elem_size))
+				return -EFAULT;
+
+			/* Zero any trailing bytes of this element that the
+			 * (smaller) kernel object does not cover.
+			 */
+			if (out->stride > obj_size &&
+			    clear_user(out_ptr + obj_size, out->stride - obj_size))
+				return -EFAULT;
+
+			/* User elements are out->stride apart; kernel ones obj_size. */
+			out_ptr += out->stride;
+			in_ptr += obj_size;
+		}
+	}
+
+	return 0;
+}
+
 #define DRM_PVR_IOCTL(_name, _func, _flags) \
 	DRM_IOCTL_DEF_DRV(PVR_##_name, pvr_ioctl_##_func, _flags)
 
diff --git a/drivers/gpu/drm/imagination/pvr_drv.h b/drivers/gpu/drm/imagination/pvr_drv.h
index 6075a993d922..c413176b8006 100644
--- a/drivers/gpu/drm/imagination/pvr_drv.h
+++ b/drivers/gpu/drm/imagination/pvr_drv.h
@@ -19,4 +19,111 @@
 #define PVR_DRIVER_MINOR 0
 #define PVR_DRIVER_PATCHLEVEL 0
 
+int pvr_get_uobj(u64 usr_ptr, u32 usr_size, u32 min_size, u32 obj_size, void *out);
+int pvr_set_uobj(u64 usr_ptr, u32 usr_size, u32 min_size, u32 obj_size, const void *in);
+int pvr_get_uobj_array(const struct drm_pvr_obj_array *in, u32 min_stride, u32 obj_size,
+		       void **out);
+int pvr_set_uobj_array(const struct drm_pvr_obj_array *out, u32 min_stride, u32 obj_size,
+		       const void *in);
+
+#define PVR_UOBJ_MIN_SIZE_INTERNAL(_typename, _last_mandatory_field) \
+	(offsetof(_typename, _last_mandatory_field) + \
+	 sizeof(((_typename *)NULL)->_last_mandatory_field))
+
+/* NOLINTBEGIN(bugprone-macro-parentheses) */
+#define PVR_UOBJ_DECL(_typename, _last_mandatory_field) \
+	, _typename : PVR_UOBJ_MIN_SIZE_INTERNAL(_typename, _last_mandatory_field)
+/* NOLINTEND(bugprone-macro-parentheses) */
+
+/**
+ * DOC: PVR user objects.
+ *
+ * Macros used to aid copying structured and array data to and from
+ * userspace. Objects can differ in size, provided the minimum size
+ * allowed is specified (using the last mandatory field in the struct).
+ * All types used with PVR_UOBJ_GET/SET macros must be listed here under
+ * PVR_UOBJ_MIN_SIZE, with the last mandatory struct field specified.
+ */
+
+/**
+ * PVR_UOBJ_MIN_SIZE() - Fetch the minimum copy size of a compatible type object.
+ * @_obj_name: The name of the object. Cannot be a typename - this is deduced.
+ *
+ * This cannot fail. Using the macro with an incompatible type will result in a
+ * compiler error.
+ *
+ * To add compatibility for a type, list it within the macro in an orderly
+ * fashion. The second argument is the name of the last mandatory field of the
+ * struct type, which is used to calculate the size. See also PVR_UOBJ_DECL().
+ *
+ * Return: The minimum copy size.
+ */
+#define PVR_UOBJ_MIN_SIZE(_obj_name) _Generic(_obj_name \
+	PVR_UOBJ_DECL(struct drm_pvr_job, hwrt) \
+	PVR_UOBJ_DECL(struct drm_pvr_sync_op, value) \
+	PVR_UOBJ_DECL(struct drm_pvr_dev_query_gpu_info, num_phantoms) \
+	PVR_UOBJ_DECL(struct drm_pvr_dev_query_runtime_info, cdm_max_local_mem_size_regs) \
+	PVR_UOBJ_DECL(struct drm_pvr_dev_query_quirks, _padding_c) \
+	PVR_UOBJ_DECL(struct drm_pvr_dev_query_enhancements, _padding_c) \
+	PVR_UOBJ_DECL(struct drm_pvr_heap, page_size_log2) \
+	PVR_UOBJ_DECL(struct drm_pvr_dev_query_heap_info, heaps) \
+	PVR_UOBJ_DECL(struct drm_pvr_static_data_area, offset) \
+	PVR_UOBJ_DECL(struct drm_pvr_dev_query_static_data_areas, static_data_areas) \
+	)
+
+/**
+ * PVR_UOBJ_GET() - Copies from _src_usr_ptr to &_dest_obj.
+ * @_dest_obj: The destination container object in kernel space.
+ * @_usr_size: The size of the source container in user space.
+ * @_src_usr_ptr: __u64 raw pointer to the source container in user space.
+ *
+ * Return: Error code. See pvr_get_uobj().
+ */
+#define PVR_UOBJ_GET(_dest_obj, _usr_size, _src_usr_ptr) \
+	pvr_get_uobj(_src_usr_ptr, _usr_size, \
+		     PVR_UOBJ_MIN_SIZE(_dest_obj), \
+		     sizeof(_dest_obj), &(_dest_obj))
+
+/**
+ * PVR_UOBJ_SET() - Copies from &_src_obj to _dest_usr_ptr.
+ * @_dest_usr_ptr: __u64 raw pointer to the destination container in user space.
+ * @_usr_size: The size of the destination container in user space.
+ * @_src_obj: The source container object in kernel space.
+ *
+ * Return: Error code. See pvr_set_uobj().
+ */
+#define PVR_UOBJ_SET(_dest_usr_ptr, _usr_size, _src_obj) \
+	pvr_set_uobj(_dest_usr_ptr, _usr_size, \
+		     PVR_UOBJ_MIN_SIZE(_src_obj), \
+		     sizeof(_src_obj), &(_src_obj))
+
+/**
+ * PVR_UOBJ_GET_ARRAY() - Copies from @_src_drm_pvr_obj_array.array to
+ * allocated memory and returns a pointer to it in _dest_array.
+ * @_dest_array: The destination C array object in kernel space.
+ * @_src_drm_pvr_obj_array: The &struct drm_pvr_obj_array containing a __u64 raw
+ * pointer to the source C array in user space and the size of each array
+ * element in user space (the 'stride').
+ *
+ * Return: Error code. See pvr_get_uobj_array().
+ */
+#define PVR_UOBJ_GET_ARRAY(_dest_array, _src_drm_pvr_obj_array) \
+	pvr_get_uobj_array(_src_drm_pvr_obj_array, \
+			   PVR_UOBJ_MIN_SIZE((_dest_array)[0]), \
+			   sizeof((_dest_array)[0]), (void **)&(_dest_array))
+
+/**
+ * PVR_UOBJ_SET_ARRAY() - Copies from _src_array to @_dest_drm_pvr_obj_array.array.
+ * @_dest_drm_pvr_obj_array: The &struct drm_pvr_obj_array containing a __u64 raw
+ * pointer to the destination C array in user space and the size of each array
+ * element in user space (the 'stride').
+ * @_src_array: The source C array object in kernel space.
+ *
+ * Return: Error code. See pvr_set_uobj_array().
+ */
+#define PVR_UOBJ_SET_ARRAY(_dest_drm_pvr_obj_array, _src_array) \
+	pvr_set_uobj_array(_dest_drm_pvr_obj_array, \
+			   PVR_UOBJ_MIN_SIZE((_src_array)[0]), \
+			   sizeof((_src_array)[0]), _src_array)
+
 #endif /* PVR_DRV_H */
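
To make the size negotiation concrete, a hypothetical example (struct foo is
illustrative only, not part of the uapi) of how the helpers behave when kernel and
userspace disagree on the object size:

	/*
	 * Kernel object:  struct foo { __u64 a; __u64 b; };   sizeof == 16
	 * Old userspace:  struct foo { __u64 a; };            usr_size == 8
	 *
	 * PVR_UOBJ_GET(): provided usr_size >= PVR_UOBJ_MIN_SIZE() for the
	 * type, the first 8 bytes are copied in and 'b' is zero-filled
	 * (copy_struct_from_user()).
	 *
	 * PVR_UOBJ_SET(): min(usr_size, 16) bytes are copied out; if the
	 * userspace buffer is larger than 16 bytes, the trailing bytes are
	 * cleared with clear_user() so no stale data is exposed.
	 */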
diff --git a/drivers/gpu/drm/imagination/pvr_fw.c b/drivers/gpu/drm/imagination/pvr_fw.c
new file mode 100644
index 000000000000..c48de4a3af46
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_fw.c
@@ -0,0 +1,145 @@
+// SPDX-License-Identifier: GPL-2.0 OR MIT
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#include "pvr_device.h"
+#include "pvr_device_info.h"
+#include "pvr_fw.h"
+
+#include <drm/drm_drv.h>
+#include <linux/firmware.h>
+#include <linux/sizes.h>
+
+#define FW_MAX_SUPPORTED_MAJOR_VERSION 1
+
+/**
+ * pvr_fw_validate() - Parse firmware header and check compatibility
+ * @pvr_dev: Device pointer.
+ *
+ * Returns:
+ *  * 0 on success, or
+ *  * -%EINVAL if firmware is incompatible.
+ */
+static int
+pvr_fw_validate(struct pvr_device *pvr_dev)
+{
+	struct drm_device *drm_dev = from_pvr_device(pvr_dev);
+	const struct firmware *firmware = pvr_dev->fw_dev.firmware;
+	const struct pvr_fw_layout_entry *layout_entries;
+	const struct pvr_fw_info_header *header;
+	const u8 *fw = firmware->data;
+	u32 fw_offset = firmware->size - SZ_4K;
+	u32 layout_table_size;
+	u32 entry;
+
+	if (firmware->size < SZ_4K || (firmware->size % FW_BLOCK_SIZE))
+		return -EINVAL;
+
+	header = (const struct pvr_fw_info_header *)&fw[fw_offset];
+
+	if (header->info_version != PVR_FW_INFO_VERSION) {
+		drm_err(drm_dev, "Unsupported fw info version %u\n",
+			header->info_version);
+		return -EINVAL;
+	}
+
+	if (header->header_len != sizeof(struct pvr_fw_info_header) ||
+	    header->layout_entry_size != sizeof(struct pvr_fw_layout_entry) ||
+	    header->layout_entry_num > PVR_FW_INFO_MAX_NUM_ENTRIES) {
+		drm_err(drm_dev, "FW info format mismatch\n");
+		return -EINVAL;
+	}
+
+	if (!(header->flags & PVR_FW_FLAGS_OPEN_SOURCE) ||
+	    header->fw_version_major > FW_MAX_SUPPORTED_MAJOR_VERSION ||
+	    header->fw_version_major == 0) {
+		drm_err(drm_dev, "Unsupported FW version %u.%u (build: %u%s)\n",
+			header->fw_version_major, header->fw_version_minor,
+			header->fw_version_build,
+			(header->flags & PVR_FW_FLAGS_OPEN_SOURCE) ? " OS" : "");
+		return -EINVAL;
+	}
+
+	if (pvr_gpu_id_to_packed_bvnc(&pvr_dev->gpu_id) != header->bvnc) {
+		struct pvr_gpu_id fw_gpu_id;
+
+		packed_bvnc_to_pvr_gpu_id(header->bvnc, &fw_gpu_id);
+		drm_err(drm_dev, "FW built for incorrect GPU ID %i.%i.%i.%i (expected %i.%i.%i.%i)\n",
+			fw_gpu_id.b, fw_gpu_id.v, fw_gpu_id.n, fw_gpu_id.c,
+			pvr_dev->gpu_id.b, pvr_dev->gpu_id.v, pvr_dev->gpu_id.n, pvr_dev->gpu_id.c);
+		return -EINVAL;
+	}
+
+	fw_offset += header->header_len;
+	layout_table_size =
+		header->layout_entry_size * header->layout_entry_num;
+	if ((fw_offset + layout_table_size) > firmware->size)
+		return -EINVAL;
+
+	layout_entries = (const struct pvr_fw_layout_entry *)&fw[fw_offset];
+	for (entry = 0; entry < header->layout_entry_num; entry++) {
+		u32 start_addr = layout_entries[entry].base_addr;
+		u32 end_addr = start_addr + layout_entries[entry].alloc_size;
+
+		if (start_addr >= end_addr)
+			return -EINVAL;
+	}
+
+	fw_offset = (firmware->size - SZ_4K) - header->device_info_size;
+
+	drm_info(drm_dev, "FW version v%u.%u (build %u OS)\n", header->fw_version_major,
+		 header->fw_version_minor, header->fw_version_build);
+
+	pvr_dev->fw_version.major = header->fw_version_major;
+	pvr_dev->fw_version.minor = header->fw_version_minor;
+
+	pvr_dev->fw_dev.header = header;
+	pvr_dev->fw_dev.layout_entries = layout_entries;
+
+	return 0;
+}
+
+static int
+pvr_fw_get_device_info(struct pvr_device *pvr_dev)
+{
+	const struct firmware *firmware = pvr_dev->fw_dev.firmware;
+	struct pvr_fw_device_info_header *header;
+	const u8 *fw = firmware->data;
+	const u64 *dev_info;
+	u32 fw_offset;
+
+	fw_offset = (firmware->size - SZ_4K) - pvr_dev->fw_dev.header->device_info_size;
+
+	header = (struct pvr_fw_device_info_header *)&fw[fw_offset];
+	dev_info = (u64 *)(header + 1);
+
+	pvr_device_info_set_quirks(pvr_dev, dev_info, header->brn_mask_size);
+	dev_info += header->brn_mask_size;
+
+	pvr_device_info_set_enhancements(pvr_dev, dev_info, header->ern_mask_size);
+	dev_info += header->ern_mask_size;
+
+	return pvr_device_info_set_features(pvr_dev, dev_info, header->feature_mask_size,
+					    header->feature_param_size);
+}
+
+/**
+ * pvr_fw_validate_init_device_info() - Validate firmware and initialise device information
+ * @pvr_dev: Target PowerVR device.
+ *
+ * This function must be called before querying device information.
+ *
+ * Returns:
+ *  * 0 on success, or
+ *  * -%EINVAL if firmware validation fails.
+ */
+int
+pvr_fw_validate_init_device_info(struct pvr_device *pvr_dev)
+{
+	int err;
+
+	err = pvr_fw_validate(pvr_dev);
+	if (err)
+		return err;
+
+	return pvr_fw_get_device_info(pvr_dev);
+}
diff --git a/drivers/gpu/drm/imagination/pvr_fw.h b/drivers/gpu/drm/imagination/pvr_fw.h
new file mode 100644
index 000000000000..dca7fe5b3dd0
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_fw.h
@@ -0,0 +1,34 @@
+/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#ifndef PVR_FW_H
+#define PVR_FW_H
+
+#include "pvr_fw_info.h"
+
+#include <linux/types.h>
+
+/* Forward declarations from "pvr_device.h". */
+struct pvr_device;
+struct pvr_file;
+
+struct pvr_fw_device {
+	/** @firmware: Handle to the firmware loaded into the device. */
+	const struct firmware *firmware;
+
+	/** @header: Pointer to firmware header. */
+	const struct pvr_fw_info_header *header;
+
+	/** @layout_entries: Pointer to firmware layout. */
+	const struct pvr_fw_layout_entry *layout_entries;
+
+	/**
+	 * @processor_type: FW processor type for this device. Must be one of
+	 *                  %PVR_FW_PROCESSOR_TYPE_*.
+	 */
+	u16 processor_type;
+};
+
+int pvr_fw_validate_init_device_info(struct pvr_device *pvr_dev);
+
+#endif /* PVR_FW_H */
diff --git a/drivers/gpu/drm/imagination/pvr_fw_info.h b/drivers/gpu/drm/imagination/pvr_fw_info.h
new file mode 100644
index 000000000000..40bf66f1c4b6
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_fw_info.h
@@ -0,0 +1,135 @@
+/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#ifndef PVR_FW_INFO_H
+#define PVR_FW_INFO_H
+
+#include <linux/bits.h>
+#include <linux/sizes.h>
+#include <linux/types.h>
+
+/*
+ * Firmware binary block unit in bytes.
+ * Raw data stored in FW binary will be aligned to this size.
+ */
+#define FW_BLOCK_SIZE SZ_4K
+
+/* Maximum number of entries in firmware layout table. */
+#define PVR_FW_INFO_MAX_NUM_ENTRIES 8
+
+enum pvr_fw_section_id {
+	META_CODE = 0,
+	META_PRIVATE_DATA,
+	META_COREMEM_CODE,
+	META_COREMEM_DATA,
+	MIPS_CODE,
+	MIPS_EXCEPTIONS_CODE,
+	MIPS_BOOT_CODE,
+	MIPS_PRIVATE_DATA,
+	MIPS_BOOT_DATA,
+	MIPS_STACK,
+	RISCV_UNCACHED_CODE,
+	RISCV_CACHED_CODE,
+	RISCV_PRIVATE_DATA,
+	RISCV_COREMEM_CODE,
+	RISCV_COREMEM_DATA,
+};
+
+enum pvr_fw_section_type {
+	NONE = 0,
+	FW_CODE,
+	FW_DATA,
+	FW_COREMEM_CODE,
+	FW_COREMEM_DATA,
+};
+
+/*
+ * FW binary format with FW info attached:
+ *
+ *          Contents        Offset
+ *     +-----------------+
+ *     |                 |    0
+ *     |                 |
+ *     | Original binary |
+ *     |      file       |
+ *     |   (.ldr/.elf)   |
+ *     |                 |
+ *     |                 |
+ *     +-----------------+
+ *     |   Device info   |  FILE_SIZE - 4K - device_info_size
+ *     +-----------------+
+ *     | FW info header  |  FILE_SIZE - 4K
+ *     +-----------------+
+ *     |                 |
+ *     | FW layout table |
+ *     |                 |
+ *     +-----------------+
+ *                          FILE_SIZE
+ */
+
+#define PVR_FW_INFO_VERSION 3
+
+#define PVR_FW_FLAGS_OPEN_SOURCE BIT(0)
+
+/**
+ * struct pvr_fw_info_header - Firmware header
+ */
+struct pvr_fw_info_header {
+	/** @info_version: FW info header version. */
+	u32 info_version;
+	/** @header_len: Header length. */
+	u32 header_len;
+	/** @layout_entry_num: Number of entries in the layout table. */
+	u32 layout_entry_num;
+	/** @layout_entry_size: Size of an entry in the layout table. */
+	u32 layout_entry_size;
+	/** @bvnc: GPU ID supported by firmware. */
+	aligned_u64 bvnc;
+	/** @fw_page_size: Page size of processor on which firmware executes. */
+	u32 fw_page_size;
+	/** @flags: Compatibility flags. */
+	u32 flags;
+	/** @fw_version_major: Firmware major version number. */
+	u16 fw_version_major;
+	/** @fw_version_minor: Firmware minor version number. */
+	u16 fw_version_minor;
+	/** @fw_version_build: Firmware build number. */
+	u32 fw_version_build;
+	/** @device_info_size: Size of device info structure. */
+	u32 device_info_size;
+	/** @padding: Padding. */
+	u32 padding;
+};
+
+/**
+ * struct pvr_fw_layout_entry - Entry in firmware layout table, describing a
+ *                              section of the firmware image
+ */
+struct pvr_fw_layout_entry {
+	/** @id: Section ID. */
+	enum pvr_fw_section_id id;
+	/** @type: Section type. */
+	enum pvr_fw_section_type type;
+	/** @base_addr: Base address of section in FW address space. */
+	u32 base_addr;
+	/** @max_size: Maximum size of section, in bytes. */
+	u32 max_size;
+	/** @alloc_size: Allocation size of section, in bytes. */
+	u32 alloc_size;
+	/** @alloc_offset: Allocation offset of section. */
+	u32 alloc_offset;
+};
+
+/**
+ * struct pvr_fw_device_info_header - Device information header.
+ */
+struct pvr_fw_device_info_header {
+	/** @brn_mask_size: BRN mask size (in u64s). */
+	u64 brn_mask_size;
+	/** @ern_mask_size: ERN mask size (in u64s). */
+	u64 ern_mask_size;
+	/** @feature_mask_size: Feature mask size (in u64s). */
+	u64 feature_mask_size;
+	/** @feature_param_size: Feature parameter size (in u64s). */
+	u64 feature_param_size;
+};
+
+#endif /* PVR_FW_INFO_H */
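
For completeness, the device info block referenced in the diagram above is a
struct pvr_fw_device_info_header followed by four variable-length u64 arrays,
walked in this order by pvr_fw_get_device_info():

	/*
	 *   +----------------------------------+  FILE_SIZE - 4K - device_info_size
	 *   | struct pvr_fw_device_info_header |
	 *   +----------------------------------+
	 *   | BRN (quirk) bitmask              |  brn_mask_size u64s
	 *   +----------------------------------+
	 *   | ERN (enhancement) bitmask        |  ern_mask_size u64s
	 *   +----------------------------------+
	 *   | Feature bitmask                  |  feature_mask_size u64s
	 *   +----------------------------------+
	 *   | Feature parameters               |  feature_param_size u64s
	 *   +----------------------------------+
	 */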
-- 
2.41.0


^ permalink raw reply related	[flat|nested] 77+ messages in thread

* [PATCH v5 08/17] drm/imagination: Add GEM and VM related code
  2023-08-16  8:25 ` Sarah Walker
@ 2023-08-16  8:25   ` Sarah Walker
  -1 siblings, 0 replies; 77+ messages in thread
From: Sarah Walker @ 2023-08-16  8:25 UTC (permalink / raw)
  To: dri-devel
  Cc: matthew.brost, luben.tuikov, tzimmermann, linux-kernel, mripard,
	afd, Matt Coster, boris.brezillon, dakr, donald.robson, hns,
	christian.koenig, faith.ekstrand

Add a GEM implementation based on drm_gem_shmem, and support code for the
PowerVR GPU MMU. The GPU VA manager is used for address space management.

Changes since v4:
- Correct sync function in vmap/vunmap function documentation
- Update for upstream GPU VA manager
- Fix missing frees when unmapping drm_gpuva objects
- Always zero GEM BOs on creation

Changes since v3:
- Split MMU and VM code
- Register page table allocations with kmemleak
- Use drm_dev_{enter,exit}

Changes since v2:
- Use GPU VA manager
- Use drm_gem_shmem

Co-developed-by: Matt Coster <matt.coster@imgtec.com>
Signed-off-by: Matt Coster <matt.coster@imgtec.com>
Co-developed-by: Donald Robson <donald.robson@imgtec.com>
Signed-off-by: Donald Robson <donald.robson@imgtec.com>
Signed-off-by: Sarah Walker <sarah.walker@imgtec.com>
---
 drivers/gpu/drm/imagination/Makefile     |    5 +-
 drivers/gpu/drm/imagination/pvr_device.c |   23 +-
 drivers/gpu/drm/imagination/pvr_device.h |   18 +
 drivers/gpu/drm/imagination/pvr_drv.c    |  302 ++-
 drivers/gpu/drm/imagination/pvr_gem.c    |  396 ++++
 drivers/gpu/drm/imagination/pvr_gem.h    |  177 ++
 drivers/gpu/drm/imagination/pvr_mmu.c    | 2487 ++++++++++++++++++++++
 drivers/gpu/drm/imagination/pvr_mmu.h    |  108 +
 drivers/gpu/drm/imagination/pvr_vm.c     |  890 ++++++++
 drivers/gpu/drm/imagination/pvr_vm.h     |   60 +
 10 files changed, 4455 insertions(+), 11 deletions(-)
 create mode 100644 drivers/gpu/drm/imagination/pvr_gem.c
 create mode 100644 drivers/gpu/drm/imagination/pvr_gem.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_mmu.c
 create mode 100644 drivers/gpu/drm/imagination/pvr_mmu.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_vm.c
 create mode 100644 drivers/gpu/drm/imagination/pvr_vm.h

diff --git a/drivers/gpu/drm/imagination/Makefile b/drivers/gpu/drm/imagination/Makefile
index 9e144ff2742b..8fcabc1bea36 100644
--- a/drivers/gpu/drm/imagination/Makefile
+++ b/drivers/gpu/drm/imagination/Makefile
@@ -7,6 +7,9 @@ powervr-y := \
 	pvr_device.o \
 	pvr_device_info.o \
 	pvr_drv.o \
-	pvr_fw.o
+	pvr_fw.o \
+	pvr_gem.o \
+	pvr_mmu.o \
+	pvr_vm.o
 
 obj-$(CONFIG_DRM_POWERVR) += powervr.o
diff --git a/drivers/gpu/drm/imagination/pvr_device.c b/drivers/gpu/drm/imagination/pvr_device.c
index b1fae182c4f6..ef8f7a2ff1a9 100644
--- a/drivers/gpu/drm/imagination/pvr_device.c
+++ b/drivers/gpu/drm/imagination/pvr_device.c
@@ -6,6 +6,7 @@
 
 #include "pvr_fw.h"
 #include "pvr_rogue_cr_defs.h"
+#include "pvr_vm.h"
 
 #include <drm/drm_print.h>
 
@@ -312,7 +313,26 @@ pvr_device_gpu_init(struct pvr_device *pvr_dev)
 	else
 		return -EINVAL;
 
-	return pvr_set_dma_info(pvr_dev);
+	err = pvr_set_dma_info(pvr_dev);
+	if (err)
+		return err;
+
+	pvr_dev->kernel_vm_ctx = pvr_vm_create_context(pvr_dev, false);
+	if (IS_ERR(pvr_dev->kernel_vm_ctx))
+		return PTR_ERR(pvr_dev->kernel_vm_ctx);
+
+	return 0;
+}
+
+/**
+ * pvr_device_gpu_fini() - GPU-specific deinitialization for a PowerVR device
+ * @pvr_dev: Target PowerVR device.
+ */
+static void
+pvr_device_gpu_fini(struct pvr_device *pvr_dev)
+{
+	WARN_ON(!pvr_vm_context_put(pvr_dev->kernel_vm_ctx));
+	pvr_dev->kernel_vm_ctx = NULL;
 }
 
 /**
@@ -364,6 +384,7 @@ pvr_device_fini(struct pvr_device *pvr_dev)
 	 * Deinitialization stages are performed in reverse order compared to
 	 * the initialization stages in pvr_device_init().
 	 */
+	pvr_device_gpu_fini(pvr_dev);
 }
 
 bool
diff --git a/drivers/gpu/drm/imagination/pvr_device.h b/drivers/gpu/drm/imagination/pvr_device.h
index 65ece87e2405..990363e433d7 100644
--- a/drivers/gpu/drm/imagination/pvr_device.h
+++ b/drivers/gpu/drm/imagination/pvr_device.h
@@ -123,6 +123,16 @@ struct pvr_device {
 	 */
 	struct clk *mem_clk;
 
+	/**
+	 * @kernel_vm_ctx: Virtual memory context used for kernel mappings.
+	 *
+	 * This is used for mappings in the firmware address region when a META firmware processor
+	 * is in use.
+	 *
+	 * When a MIPS firmware processor is in use, this will be %NULL.
+	 */
+	struct pvr_vm_context *kernel_vm_ctx;
+
 	/** @fw_dev: Firmware related data. */
 	struct pvr_fw_device fw_dev;
 };
@@ -145,6 +155,14 @@ struct pvr_file {
 	 *           to_pvr_device().
 	 */
 	struct pvr_device *pvr_dev;
+
+	/**
+	 * @vm_ctx_handles: Array of VM contexts belonging to this file. Array
+	 * members are of type "struct pvr_vm_context *".
+	 *
+	 * This array is used to allocate handles returned to userspace.
+	 */
+	struct xarray vm_ctx_handles;
 };
 
 /**
diff --git a/drivers/gpu/drm/imagination/pvr_drv.c b/drivers/gpu/drm/imagination/pvr_drv.c
index eb3018f94f7c..0d51177b506c 100644
--- a/drivers/gpu/drm/imagination/pvr_drv.c
+++ b/drivers/gpu/drm/imagination/pvr_drv.c
@@ -3,9 +3,11 @@
 
 #include "pvr_device.h"
 #include "pvr_drv.h"
+#include "pvr_gem.h"
 #include "pvr_rogue_defs.h"
 #include "pvr_rogue_fwif_client.h"
 #include "pvr_rogue_fwif_shared.h"
+#include "pvr_vm.h"
 
 #include <uapi/drm/pvr_drm.h>
 
@@ -61,7 +63,86 @@ static int
 pvr_ioctl_create_bo(struct drm_device *drm_dev, void *raw_args,
 		    struct drm_file *file)
 {
-	return -ENOTTY;
+	struct drm_pvr_ioctl_create_bo_args *args = raw_args;
+	struct pvr_device *pvr_dev = to_pvr_device(drm_dev);
+	struct pvr_file *pvr_file = to_pvr_file(file);
+
+	struct pvr_gem_object *pvr_obj;
+	size_t sanitized_size;
+	size_t real_size;
+
+	int idx;
+	int err;
+
+	if (!drm_dev_enter(drm_dev, &idx))
+		return -EIO;
+
+	/* All padding fields must be zeroed. */
+	if (args->_padding_c != 0) {
+		err = -EINVAL;
+		goto err_drm_dev_exit;
+	}
+
+	/*
+	 * On 64-bit platforms (our primary target), size_t is a u64. However,
+	 * on other architectures we have to check for overflow when casting
+	 * down to size_t from u64.
+	 *
+	 * We also disallow zero-sized allocations, and reserved (kernel-only)
+	 * flags.
+	 */
+	if (args->size > SIZE_MAX || args->size == 0 ||
+	    args->flags & PVR_BO_RESERVED_MASK) {
+		err = -EINVAL;
+		goto err_drm_dev_exit;
+	}
+
+	sanitized_size = (size_t)args->size;
+
+	/*
+	 * Create a buffer object and transfer ownership to a userspace-
+	 * accessible handle.
+	 */
+	pvr_obj = pvr_gem_object_create(pvr_dev, sanitized_size, args->flags);
+	if (IS_ERR(pvr_obj)) {
+		err = PTR_ERR(pvr_obj);
+		goto err_drm_dev_exit;
+	}
+
+	/*
+	 * Store the actual size of the created buffer object. We can't fetch
+	 * this after this point because we will no longer have a reference to
+	 * &pvr_obj.
+	 */
+	real_size = pvr_gem_object_size(pvr_obj);
+
+	/* This function will not modify &args->handle unless it succeeds. */
+	err = pvr_gem_object_into_handle(pvr_obj, pvr_file, &args->handle);
+	if (err)
+		goto err_destroy_obj;
+
+	/*
+	 * Now write the real size back to the args struct, after no further
+	 * errors can occur.
+	 */
+	args->size = real_size;
+
+	drm_dev_exit(idx);
+
+	return 0;
+
+err_destroy_obj:
+	/*
+	 * GEM objects are refcounted, so there is no explicit destructor
+	 * function. Instead, we release the singular reference we currently
+	 * hold on the object and let GEM take care of the rest.
+	 */
+	pvr_gem_object_put(pvr_obj);
+
+err_drm_dev_exit:
+	drm_dev_exit(idx);
+
+	return err;
 }
 
 /**
@@ -88,7 +169,61 @@ static int
 pvr_ioctl_get_bo_mmap_offset(struct drm_device *drm_dev, void *raw_args,
 			     struct drm_file *file)
 {
-	return -ENOTTY;
+	struct drm_pvr_ioctl_get_bo_mmap_offset_args *args = raw_args;
+	struct pvr_file *pvr_file = to_pvr_file(file);
+	struct pvr_gem_object *pvr_obj;
+	struct drm_gem_object *gem_obj;
+	int idx;
+	int ret;
+
+	if (!drm_dev_enter(drm_dev, &idx))
+		return -EIO;
+
+	/* All padding fields must be zeroed. */
+	if (args->_padding_4 != 0) {
+		ret = -EINVAL;
+		goto err_drm_dev_exit;
+	}
+
+	/*
+	 * Obtain a kernel reference to the buffer object. This reference is
+	 * counted and must be manually dropped before returning. If a buffer
+	 * object cannot be found for the specified handle, return -%ENOENT (No
+	 * such file or directory).
+	 */
+	pvr_obj = pvr_gem_object_from_handle(pvr_file, args->handle);
+	if (!pvr_obj) {
+		ret = -ENOENT;
+		goto err_drm_dev_exit;
+	}
+
+	gem_obj = gem_from_pvr_gem(pvr_obj);
+
+	/*
+	 * Allocate a fake offset which can be used in userspace calls to mmap
+	 * on the DRM device file. If this fails, return the error code. This
+	 * operation is idempotent.
+	 */
+	ret = drm_gem_create_mmap_offset(gem_obj);
+	if (ret != 0) {
+		/* Drop our reference to the buffer object. */
+		drm_gem_object_put(gem_obj);
+		goto err_drm_dev_exit;
+	}
+
+	/*
+	 * Read out the fake offset allocated by the earlier call to
+	 * drm_gem_create_mmap_offset.
+	 */
+	args->offset = drm_vma_node_offset_addr(&gem_obj->vma_node);
+
+	/* Drop our reference to the buffer object. */
+	pvr_gem_object_put(pvr_obj);
+
+err_drm_dev_exit:
+	drm_dev_exit(idx);
+
+	return ret;
 }
 
 static __always_inline u64
@@ -517,10 +652,12 @@ pvr_ioctl_dev_query(struct drm_device *drm_dev, void *raw_args,
 		break;
 
 	case DRM_PVR_DEV_QUERY_HEAP_INFO_GET:
-		return -EINVAL;
+		ret = pvr_heap_info_get(pvr_dev, args);
+		break;
 
 	case DRM_PVR_DEV_QUERY_STATIC_DATA_AREAS_GET:
-		return -EINVAL;
+		ret = pvr_static_data_areas_get(pvr_dev, args);
+		break;
 	}
 
 	drm_dev_exit(idx);
@@ -667,7 +804,46 @@ static int
 pvr_ioctl_create_vm_context(struct drm_device *drm_dev, void *raw_args,
 			    struct drm_file *file)
 {
-	return -ENOTTY;
+	struct drm_pvr_ioctl_create_vm_context_args *args = raw_args;
+	struct pvr_file *pvr_file = to_pvr_file(file);
+	struct pvr_vm_context *vm_ctx;
+	int idx;
+	int err;
+
+	if (!drm_dev_enter(drm_dev, &idx))
+		return -EIO;
+
+	if (args->_padding_4) {
+		err = -EINVAL;
+		goto err_drm_dev_exit;
+	}
+
+	vm_ctx = pvr_vm_create_context(pvr_file->pvr_dev, true);
+	if (IS_ERR(vm_ctx)) {
+		err = PTR_ERR(vm_ctx);
+		goto err_drm_dev_exit;
+	}
+
+	/* Allocate object handle for userspace. */
+	err = xa_alloc(&pvr_file->vm_ctx_handles,
+		       &args->handle,
+		       vm_ctx,
+		       xa_limit_32b,
+		       GFP_KERNEL);
+	if (err < 0)
+		goto err_cleanup;
+
+	drm_dev_exit(idx);
+
+	return 0;
+
+err_cleanup:
+	pvr_vm_context_put(vm_ctx);
+
+err_drm_dev_exit:
+	drm_dev_exit(idx);
+
+	return err;
 }
 
 /**
@@ -687,7 +863,19 @@ static int
 pvr_ioctl_destroy_vm_context(struct drm_device *drm_dev, void *raw_args,
 			     struct drm_file *file)
 {
-	return -ENOTTY;
+	struct drm_pvr_ioctl_destroy_vm_context_args *args = raw_args;
+	struct pvr_file *pvr_file = to_pvr_file(file);
+	struct pvr_vm_context *vm_ctx;
+
+	if (args->_padding_4)
+		return -EINVAL;
+
+	vm_ctx = xa_erase(&pvr_file->vm_ctx_handles, args->handle);
+	if (!vm_ctx)
+		return -EINVAL;
+
+	pvr_vm_context_put(vm_ctx);
+	return 0;
 }
 
 /**
@@ -717,7 +905,79 @@ static int
 pvr_ioctl_vm_map(struct drm_device *drm_dev, void *raw_args,
 		 struct drm_file *file)
 {
-	return -ENOTTY;
+	struct pvr_device *pvr_dev = to_pvr_device(drm_dev);
+	struct drm_pvr_ioctl_vm_map_args *args = raw_args;
+	struct pvr_file *pvr_file = to_pvr_file(file);
+	struct pvr_vm_context *vm_ctx;
+
+	struct pvr_gem_object *pvr_obj;
+	size_t pvr_obj_size;
+
+	u64 offset_plus_size;
+	int idx;
+	int err;
+
+	if (!drm_dev_enter(drm_dev, &idx))
+		return -EIO;
+
+	/* Initial validation of args. */
+	if (args->_padding_14) {
+		err = -EINVAL;
+		goto err_drm_dev_exit;
+	}
+
+	if (args->flags != 0 ||
+	    check_add_overflow(args->offset, args->size, &offset_plus_size) ||
+	    !pvr_find_heap_containing(pvr_dev, args->device_addr, args->size)) {
+		err = -EINVAL;
+		goto err_drm_dev_exit;
+	}
+
+	vm_ctx = pvr_vm_context_lookup(pvr_file, args->vm_context_handle);
+	if (!vm_ctx) {
+		err = -EINVAL;
+		goto err_drm_dev_exit;
+	}
+
+	pvr_obj = pvr_gem_object_from_handle(pvr_file, args->handle);
+	if (!pvr_obj) {
+		err = -ENOENT;
+		goto err_put_vm_context;
+	}
+
+	pvr_obj_size = pvr_gem_object_size(pvr_obj);
+
+	/*
+	 * Validate offset and size args. The alignment of these will be
+	 * checked when mapping; for now just check that they're within valid
+	 * bounds.
+	 */
+	if (args->offset >= pvr_obj_size || offset_plus_size > pvr_obj_size) {
+		err = -EINVAL;
+		goto err_put_pvr_object;
+	}
+
+	err = pvr_vm_map(vm_ctx, pvr_obj, args->offset,
+			 args->device_addr, args->size);
+	if (err)
+		goto err_put_pvr_object;
+
+	/*
+	 * In order to set up the mapping, we needed a reference to &pvr_obj.
+	 * However, pvr_vm_map() obtains and stores its own reference, so we
+	 * must release ours before returning.
+	 */
+
+err_put_pvr_object:
+	pvr_gem_object_put(pvr_obj);
+
+err_put_vm_context:
+	pvr_vm_context_put(vm_ctx);
+
+err_drm_dev_exit:
+	drm_dev_exit(idx);
+
+	return err;
 }
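
Taken together with the create_bo ioctl above, a minimal userspace flow looks like
the sketch below (the ioctl macro names are assumed from the uapi header, vm_ctx
comes from a prior CREATE_VM_CONTEXT call, dev_va must lie inside a GPU heap, and
error handling is omitted):

	struct drm_pvr_ioctl_create_bo_args bo = {
		.size = 4096,
		.flags = 0,
	};
	ioctl(fd, DRM_IOCTL_PVR_CREATE_BO, &bo);	/* bo.handle and real bo.size written back */

	struct drm_pvr_ioctl_vm_map_args map = {
		.vm_context_handle = vm_ctx,
		.handle = bo.handle,
		.offset = 0,
		.size = bo.size,
		.device_addr = dev_va,
	};
	ioctl(fd, DRM_IOCTL_PVR_VM_MAP, &map);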
 
 /**
@@ -740,7 +1000,24 @@ static int
 pvr_ioctl_vm_unmap(struct drm_device *drm_dev, void *raw_args,
 		   struct drm_file *file)
 {
-	return -ENOTTY;
+	struct drm_pvr_ioctl_vm_unmap_args *args = raw_args;
+	struct pvr_file *pvr_file = to_pvr_file(file);
+	struct pvr_vm_context *vm_ctx;
+	int err;
+
+	/* Initial validation of args. */
+	if (args->_padding_4)
+		return -EINVAL;
+
+	vm_ctx = pvr_vm_context_lookup(pvr_file, args->vm_context_handle);
+	if (!vm_ctx)
+		return -EINVAL;
+
+	err = pvr_vm_unmap(vm_ctx, args->device_addr, args->size);
+
+	pvr_vm_context_put(vm_ctx);
+
+	return err;
 }
 
 /*
@@ -931,6 +1208,8 @@ pvr_drm_driver_open(struct drm_device *drm_dev, struct drm_file *file)
 	 */
 	pvr_file->pvr_dev = pvr_dev;
 
+	xa_init_flags(&pvr_file->vm_ctx_handles, XA_FLAGS_ALLOC1);
+
 	/*
 	 * Store reference to powervr-specific file private data in DRM file
 	 * private data.
@@ -956,6 +1235,9 @@ pvr_drm_driver_postclose(__always_unused struct drm_device *drm_dev,
 {
 	struct pvr_file *pvr_file = to_pvr_file(file);
 
+	/* Drop references on any remaining objects. */
+	pvr_destroy_vm_contexts_for_file(pvr_file);
+
 	kfree(pvr_file);
 	file->driver_priv = NULL;
 }
@@ -963,7 +1245,7 @@ pvr_drm_driver_postclose(__always_unused struct drm_device *drm_dev,
 DEFINE_DRM_GEM_FOPS(pvr_drm_driver_fops);
 
 static struct drm_driver pvr_drm_driver = {
-	.driver_features = DRIVER_RENDER,
+	.driver_features = DRIVER_GEM | DRIVER_GEM_GPUVA | DRIVER_RENDER,
 	.open = pvr_drm_driver_open,
 	.postclose = pvr_drm_driver_postclose,
 	.ioctls = pvr_drm_driver_ioctls,
@@ -977,6 +1259,8 @@ static struct drm_driver pvr_drm_driver = {
 	.minor = PVR_DRIVER_MINOR,
 	.patchlevel = PVR_DRIVER_PATCHLEVEL,
 
+	.gem_prime_import_sg_table = drm_gem_shmem_prime_import_sg_table,
+	.gem_create_object = pvr_gem_create_object,
 };
 
 static int
diff --git a/drivers/gpu/drm/imagination/pvr_gem.c b/drivers/gpu/drm/imagination/pvr_gem.c
new file mode 100644
index 000000000000..8a07bb4c38ac
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_gem.c
@@ -0,0 +1,396 @@
+// SPDX-License-Identifier: GPL-2.0 OR MIT
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#include "pvr_device.h"
+#include "pvr_gem.h"
+#include "pvr_vm.h"
+
+#include <drm/drm_gem.h>
+#include <drm/drm_prime.h>
+
+#include <linux/compiler.h>
+#include <linux/compiler_attributes.h>
+#include <linux/dma-buf.h>
+#include <linux/dma-direction.h>
+#include <linux/dma-mapping.h>
+#include <linux/err.h>
+#include <linux/gfp.h>
+#include <linux/iosys-map.h>
+#include <linux/log2.h>
+#include <linux/mutex.h>
+#include <linux/pagemap.h>
+#include <linux/refcount.h>
+#include <linux/scatterlist.h>
+
+static int pvr_gem_mmap(struct drm_gem_object *gem_obj, struct vm_area_struct *vma)
+{
+	struct pvr_gem_object *pvr_obj = gem_to_pvr_gem(gem_obj);
+	struct drm_gem_shmem_object *shmem_obj = shmem_gem_from_pvr_gem(pvr_obj);
+
+	if (!(pvr_obj->flags & DRM_PVR_BO_CPU_ALLOW_USERSPACE_ACCESS))
+		return -EINVAL;
+
+	return drm_gem_shmem_mmap(shmem_obj, vma);
+}
+
+static const struct drm_gem_object_funcs pvr_gem_object_funcs = {
+	.free = drm_gem_shmem_object_free,
+	.print_info = drm_gem_shmem_object_print_info,
+	.pin = drm_gem_shmem_object_pin,
+	.unpin = drm_gem_shmem_object_unpin,
+	.get_sg_table = drm_gem_shmem_object_get_sg_table,
+	.vmap = drm_gem_shmem_object_vmap,
+	.vunmap = drm_gem_shmem_object_vunmap,
+	.mmap = pvr_gem_mmap,
+	.vm_ops = &drm_gem_shmem_vm_ops,
+};
+
+/**
+ * pvr_gem_object_flags_validate() - Verify that a collection of PowerVR GEM
+ * mapping and/or creation flags form a valid combination.
+ * @flags: PowerVR GEM mapping/creation flags to validate.
+ *
+ * This function explicitly allows kernel-only flags. All ioctl entrypoints
+ * should do their own validation as well as relying on this function.
+ *
+ * Return:
+ *  * %true if @flags contains valid mapping and/or creation flags, or
+ *  * %false otherwise.
+ */
+static bool
+pvr_gem_object_flags_validate(u64 flags)
+{
+	static const u64 invalid_combinations[] = {
+		/*
+		 * Memory flagged as PM/FW-protected cannot be mapped to
+		 * userspace. To make this explicit, we require that the two
+		 * flags allowing each of these respective features are never
+		 * specified together.
+		 */
+		(DRM_PVR_BO_DEVICE_PM_FW_PROTECT |
+		 DRM_PVR_BO_CPU_ALLOW_USERSPACE_ACCESS),
+	};
+
+	int i;
+
+	/*
+	 * Check for bits set in undefined regions. Reserved regions refer to
+	 * options that can only be set by the kernel. These are explicitly
+	 * allowed in most cases, and must be checked specifically in IOCTL
+	 * callback code.
+	 */
+	if ((flags & PVR_BO_UNDEFINED_MASK) != 0)
+		return false;
+
+	/*
+	 * Check for all combinations of flags marked as invalid in the array
+	 * above.
+	 */
+	for (i = 0; i < ARRAY_SIZE(invalid_combinations); ++i) {
+		u64 combo = invalid_combinations[i];
+
+		if ((flags & combo) == combo)
+			return false;
+	}
+
+	return true;
+}
+
+/**
+ * pvr_gem_object_into_handle() - Convert a reference to an object into a
+ * userspace-accessible handle.
+ * @pvr_obj: [IN] Target PowerVR-specific object.
+ * @pvr_file: [IN] File to associate the handle with.
+ * @handle: [OUT] Pointer to store the created handle in. Remains unmodified if
+ * an error is encountered.
+ *
+ * If an error is encountered, ownership of @pvr_obj will not have been
+ * transferred. If this function succeeds, however, further use of @pvr_obj is
+ * considered undefined behaviour unless another reference to it is explicitly
+ * held.
+ *
+ * Return:
+ *  * 0 on success, or
+ *  * Any error encountered while attempting to allocate a handle on @pvr_file.
+ */
+int
+pvr_gem_object_into_handle(struct pvr_gem_object *pvr_obj,
+			   struct pvr_file *pvr_file, u32 *handle)
+{
+	struct drm_gem_object *gem_obj = gem_from_pvr_gem(pvr_obj);
+	struct drm_file *file = from_pvr_file(pvr_file);
+
+	u32 new_handle;
+	int err;
+
+	err = drm_gem_handle_create(file, gem_obj, &new_handle);
+	if (err)
+		return err;
+
+	/*
+	 * Release our reference to @pvr_obj, effectively transferring
+	 * ownership to the handle.
+	 */
+	pvr_gem_object_put(pvr_obj);
+
+	/*
+	 * Do not store the new handle in @handle until no more errors can
+	 * occur.
+	 */
+	*handle = new_handle;
+
+	return 0;
+}
+
+/**
+ * pvr_gem_object_from_handle() - Obtain a reference to an object from a
+ * userspace handle.
+ * @pvr_file: PowerVR-specific file to which @handle is associated.
+ * @handle: Userspace handle referencing the target object.
+ *
+ * On return, @handle always maintains its reference to the requested object
+ * (if it had one in the first place). If this function succeeds, the returned
+ * object will hold an additional reference. When the caller is finished with
+ * the returned object, they should call pvr_gem_object_put() on it to release
+ * this reference.
+ *
+ * Return:
+ *  * A pointer to the requested PowerVR-specific object on success, or
+ *  * %NULL otherwise.
+ */
+struct pvr_gem_object *
+pvr_gem_object_from_handle(struct pvr_file *pvr_file, u32 handle)
+{
+	struct drm_file *file = from_pvr_file(pvr_file);
+	struct drm_gem_object *gem_obj;
+
+	gem_obj = drm_gem_object_lookup(file, handle);
+	if (!gem_obj)
+		return NULL;
+
+	return gem_to_pvr_gem(gem_obj);
+}
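As an illustrative sketch (not part of this patch), a caller creating an object
purely to hand back to userspace would pair these helpers as follows;
example_expose_bo() is hypothetical:

static int example_expose_bo(struct pvr_device *pvr_dev,
			     struct pvr_file *pvr_file, u32 *handle_out)
{
	struct pvr_gem_object *pvr_obj;
	int err;

	pvr_obj = pvr_gem_object_create(pvr_dev, SZ_4K,
					DRM_PVR_BO_CPU_ALLOW_USERSPACE_ACCESS);
	if (IS_ERR(pvr_obj))
		return PTR_ERR(pvr_obj);

	/*
	 * On success our reference is transferred to the handle; on failure
	 * we still own it and must drop it ourselves.
	 */
	err = pvr_gem_object_into_handle(pvr_obj, pvr_file, handle_out);
	if (err)
		pvr_gem_object_put(pvr_obj);

	return err;
}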
+
+/**
+ * pvr_gem_object_vmap() - Map a PowerVR GEM object into CPU virtual address
+ * space.
+ * @pvr_obj: Target PowerVR GEM object.
+ *
+ * Once the caller is finished with the CPU mapping, they must call
+ * pvr_gem_object_vunmap() on @pvr_obj.
+ *
+ * If @pvr_obj is CPU-cached, dma_sync_sgtable_for_cpu() is called to make
+ * sure the CPU mapping is consistent.
+ *
+ * Return:
+ *  * A pointer to the CPU mapping on success,
+ *  * -%ENOMEM if the mapping fails, or
+ *  * Any error encountered while attempting to acquire a reference to the
+ *    backing pages for @pvr_obj.
+ */
+void *
+pvr_gem_object_vmap(struct pvr_gem_object *pvr_obj)
+{
+	struct drm_gem_shmem_object *shmem_obj = shmem_gem_from_pvr_gem(pvr_obj);
+	struct drm_gem_object *obj = gem_from_pvr_gem(pvr_obj);
+	struct iosys_map map;
+	int err;
+
+	dma_resv_lock(obj->resv, NULL);
+
+	err = drm_gem_shmem_vmap(shmem_obj, &map);
+	if (err)
+		goto err_unlock;
+
+	if (pvr_obj->flags & PVR_BO_CPU_CACHED) {
+		struct device *dev = shmem_obj->base.dev->dev;
+
+		/* If shmem_obj->sgt is NULL, that means the buffer hasn't been mapped
+		 * in GPU space yet.
+		 */
+		if (shmem_obj->sgt)
+			dma_sync_sgtable_for_cpu(dev, shmem_obj->sgt, DMA_BIDIRECTIONAL);
+	}
+
+	dma_resv_unlock(obj->resv);
+
+	return map.vaddr;
+
+err_unlock:
+	dma_resv_unlock(obj->resv);
+
+	return ERR_PTR(err);
+}
+
+/**
+ * pvr_gem_object_vunmap() - Unmap a PowerVR memory object from CPU virtual
+ * address space.
+ * @pvr_obj: Target PowerVR GEM object.
+ *
+ * If @pvr_obj is CPU-cached, dma_sync_sgtable_for_device() is called to make
+ * sure the GPU mapping is consistent.
+ */
+void
+pvr_gem_object_vunmap(struct pvr_gem_object *pvr_obj)
+{
+	struct drm_gem_shmem_object *shmem_obj = shmem_gem_from_pvr_gem(pvr_obj);
+	struct iosys_map map = IOSYS_MAP_INIT_VADDR(shmem_obj->vaddr);
+	struct drm_gem_object *obj = gem_from_pvr_gem(pvr_obj);
+
+	if (WARN_ON(!map.vaddr))
+		return;
+
+	dma_resv_lock(obj->resv, NULL);
+
+	if (pvr_obj->flags & PVR_BO_CPU_CACHED) {
+		struct device *dev = shmem_obj->base.dev->dev;
+
+		/* If shmem_obj->sgt is NULL, that means the buffer hasn't been mapped
+		 * in GPU space yet.
+		 */
+		if (shmem_obj->sgt)
+			dma_sync_sgtable_for_device(dev, shmem_obj->sgt, DMA_BIDIRECTIONAL);
+	}
+
+	drm_gem_shmem_vunmap(shmem_obj, &map);
+
+	dma_resv_unlock(obj->resv);
+}
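A minimal usage sketch (not part of this patch) of the vmap/vunmap pair, using
a hypothetical helper that CPU-writes a magic value into a buffer;
pvr_gem_object_zero() below follows the same pattern:

static int example_write_magic(struct pvr_gem_object *pvr_obj, u32 magic)
{
	u32 *cpu_ptr = pvr_gem_object_vmap(pvr_obj);

	if (IS_ERR(cpu_ptr))
		return PTR_ERR(cpu_ptr);

	cpu_ptr[0] = magic;
	pvr_gem_object_vunmap(pvr_obj);

	return 0;
}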
+
+/**
+ * pvr_gem_object_zero() - Zeroes the physical memory behind an object.
+ * @pvr_obj: Target PowerVR GEM object.
+ *
+ * Return:
+ *  * 0 on success, or
+ *  * Any error encountered while attempting to map @pvr_obj to the CPU (see
+ *    pvr_gem_object_vmap()).
+ */
+static int
+pvr_gem_object_zero(struct pvr_gem_object *pvr_obj)
+{
+	void *cpu_ptr;
+
+	cpu_ptr = pvr_gem_object_vmap(pvr_obj);
+	if (IS_ERR(cpu_ptr))
+		return PTR_ERR(cpu_ptr);
+
+	memset(cpu_ptr, 0, pvr_gem_object_size(pvr_obj));
+
+	/* Make sure the zeroing is done before vunmapping the object. */
+	wmb();
+
+	pvr_gem_object_vunmap(pvr_obj);
+
+	return 0;
+}
+
+/**
+ * pvr_gem_create_object() - Allocate and pre-initialize a pvr_gem_object
+ * @drm_dev: DRM device creating this object.
+ * @size: Size of the object to allocate in bytes.
+ *
+ * Return:
+ *  * The new pre-initialized GEM object on success, or
+ *  * -%ENOMEM if the allocation fails.
+ */
+struct drm_gem_object *pvr_gem_create_object(struct drm_device *drm_dev, size_t size)
+{
+	struct drm_gem_object *gem_obj;
+	struct pvr_gem_object *pvr_obj;
+
+	pvr_obj = kzalloc(sizeof(*pvr_obj), GFP_KERNEL);
+	if (!pvr_obj)
+		return ERR_PTR(-ENOMEM);
+
+	gem_obj = gem_from_pvr_gem(pvr_obj);
+	gem_obj->funcs = &pvr_gem_object_funcs;
+	return gem_obj;
+}
+
+/**
+ * pvr_gem_object_create() - Creates a PowerVR-specific buffer object.
+ * @pvr_dev: Target PowerVR device.
+ * @size: Size of the object to allocate in bytes. Must be greater than zero.
+ * Any value which is not an exact multiple of the system page size will be
+ * rounded up to satisfy this condition.
+ * @flags: Options which affect both this operation and future mapping
+ * operations performed on the returned object. Must be a combination of
+ * DRM_PVR_BO_* and/or PVR_BO_* flags.
+ *
+ * The created object may be larger than @size, but can never be smaller. To
+ * get the exact size, call pvr_gem_object_size() on the returned pointer.
+ *
+ * Return:
+ *  * The newly-minted PowerVR-specific buffer object on success,
+ *  * -%EINVAL if @size is zero or @flags is not valid,
+ *  * -%ENOMEM if sufficient physical memory cannot be allocated, or
+ *  * Any other error returned by drm_gem_create_mmap_offset().
+ */
+struct pvr_gem_object *
+pvr_gem_object_create(struct pvr_device *pvr_dev, size_t size, u64 flags)
+{
+	struct drm_gem_shmem_object *shmem_obj;
+	struct pvr_gem_object *pvr_obj;
+
+	/* Verify @size and @flags before continuing. */
+	if (size == 0 || !pvr_gem_object_flags_validate(flags))
+		return ERR_PTR(-EINVAL);
+
+	shmem_obj = drm_gem_shmem_create(from_pvr_device(pvr_dev), size);
+	if (IS_ERR(shmem_obj))
+		return ERR_CAST(shmem_obj);
+
+	shmem_obj->pages_mark_dirty_on_put = true;
+	shmem_obj->map_wc = !(flags & PVR_BO_CPU_CACHED);
+	pvr_obj = shmem_gem_to_pvr_gem(shmem_obj);
+	pvr_obj->flags = flags;
+
+	/*
+	 * Do this last because pvr_gem_object_zero() requires a fully
+	 * configured instance of struct pvr_gem_object.
+	 */
+	pvr_gem_object_zero(pvr_obj);
+
+	return pvr_obj;
+}
+
+/**
+ * pvr_gem_get_dma_addr() - Get DMA address for given offset in object
+ * @pvr_obj: Pointer to object to lookup address in.
+ * @offset: Offset within object to lookup address at.
+ * @dma_addr_out: Pointer to location to store DMA address.
+ *
+ * Returns:
+ *  * 0 on success, or
+ *  * -%EINVAL if object is not currently backed, or if @offset is out of valid
+ *    range for this object.
+ */
+int
+pvr_gem_get_dma_addr(struct pvr_gem_object *pvr_obj, u32 offset,
+		     dma_addr_t *dma_addr_out)
+{
+	struct drm_gem_shmem_object *shmem_obj = shmem_gem_from_pvr_gem(pvr_obj);
+	struct sg_table *sgt;
+	u32 accumulated_offset = 0;
+	struct scatterlist *sgl;
+	unsigned int sgt_idx;
+
+	sgt = drm_gem_shmem_get_pages_sgt(shmem_obj);
+	if (IS_ERR(sgt))
+		return PTR_ERR(sgt);
+
+	for_each_sgtable_dma_sg(sgt, sgl, sgt_idx) {
+		u32 new_offset = accumulated_offset + sg_dma_len(sgl);
+
+		if (offset >= accumulated_offset && offset < new_offset) {
+			*dma_addr_out = sg_dma_address(sgl) +
+					(offset - accumulated_offset);
+			return 0;
+		}
+
+		accumulated_offset = new_offset;
+	}
+
+	return -EINVAL;
+}
diff --git a/drivers/gpu/drm/imagination/pvr_gem.h b/drivers/gpu/drm/imagination/pvr_gem.h
new file mode 100644
index 000000000000..5b82cd67d83c
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_gem.h
@@ -0,0 +1,177 @@
+/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#ifndef PVR_GEM_H
+#define PVR_GEM_H
+
+#include "pvr_rogue_heap_config.h"
+#include "pvr_rogue_meta.h"
+
+#include <uapi/drm/pvr_drm.h>
+
+#include <drm/drm_gem.h>
+#include <drm/drm_gem_shmem_helper.h>
+#include <drm/drm_mm.h>
+
+#include <linux/bitfield.h>
+#include <linux/bits.h>
+#include <linux/const.h>
+#include <linux/compiler_attributes.h>
+#include <linux/kernel.h>
+#include <linux/mutex.h>
+#include <linux/refcount.h>
+#include <linux/scatterlist.h>
+#include <linux/sizes.h>
+#include <linux/types.h>
+
+/* Forward declaration from "pvr_device.h". */
+struct pvr_device;
+struct pvr_file;
+
+/**
+ * DOC: Flags for DRM_IOCTL_PVR_CREATE_BO (kernel-only)
+ *
+ * Kernel-only values allowed in &pvr_gem_object->flags. The majority of options
+ * for this field are specified in the UAPI header "pvr_drm.h" with a
+ * DRM_PVR_BO_ prefix. To distinguish these internal options (which must exist
+ * in ranges marked as "reserved" in the UAPI header), we drop the DRM prefix.
+ * The public options should be used directly, DRM prefix and all.
+ *
+ * To avoid potentially confusing gaps in the UAPI options, these kernel-only
+ * options are specified "in reverse", starting at bit 63.
+ *
+ * We use "reserved" to refer to bits defined here and not exposed in the UAPI.
+ * Bits not defined anywhere are "undefined".
+ *
+ * Creation options
+ *    These use the prefix PVR_BO_CREATE_.
+ *
+ *    *There are currently no kernel-only flags in this group.*
+ *
+ * Device mapping options
+ *    These use the prefix PVR_BO_DEVICE_.
+ *
+ *    *There are currently no kernel-only flags in this group.*
+ *
+ * CPU mapping options
+ *    These use the prefix PVR_BO_CPU_.
+ *
+ *    :CACHED: By default, all GEM objects are mapped write-combined on the
+ *       CPU. Set this flag to override this behaviour and map the object
+ *       cached.
+ */
+#define PVR_BO_CPU_CACHED BIT_ULL(63)
+
+#define PVR_BO_FW_NO_CLEAR_ON_RESET BIT_ULL(62)
+
+/* Bits 61..3 are undefined. */
+/* Bits 2..0 are defined in the UAPI. */
+
+/* Other utilities. */
+#define PVR_BO_UNDEFINED_MASK GENMASK_ULL(61, 3)
+#define PVR_BO_RESERVED_MASK (PVR_BO_UNDEFINED_MASK | GENMASK_ULL(63, 63))
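As an illustrative sanity check (not part of this patch), the kernel-only
flags defined above are expected to stay clear of the undefined range:

static_assert((PVR_BO_CPU_CACHED & PVR_BO_UNDEFINED_MASK) == 0);
static_assert((PVR_BO_FW_NO_CLEAR_ON_RESET & PVR_BO_UNDEFINED_MASK) == 0);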
+
+/*
+ * All firmware-mapped memory uses (mostly) the same flags. Specifically,
+ * firmware-mapped memory should be:
+ *  * Read/write on the device,
+ *  * Read/write on the CPU, and
+ *  * Write-combined on the CPU.
+ *
+ * The only variation is in caching on the device.
+ */
+#define PVR_BO_FW_FLAGS_DEVICE_CACHED (ULL(0))
+#define PVR_BO_FW_FLAGS_DEVICE_UNCACHED DRM_PVR_BO_DEVICE_BYPASS_CACHE
+
+/**
+ * struct pvr_gem_object - powervr-specific wrapper for &struct drm_gem_object
+ */
+struct pvr_gem_object {
+	/**
+	 * @base: The underlying &struct drm_gem_shmem_object.
+	 *
+	 * Do not access this member directly, instead call
+	 * shmem_gem_from_pvr_gem().
+	 */
+	struct drm_gem_shmem_object base;
+
+	/**
+	 * @flags: Options set at creation-time. Some of these options apply to
+	 * the creation operation itself (which are stored here for reference)
+	 * with the remainder used for mapping options to both the device and
+	 * CPU. These are used every time this object is mapped.
+	 *
+	 * Must be a combination of DRM_PVR_BO_* and/or PVR_BO_* flags.
+	 *
+	 * .. note::
+	 *
+	 *    This member is set once at object creation time and is not
+	 *    expected to change for the remainder of the object's lifetime.
+	 */
+	u64 flags;
+};
+
+static_assert(offsetof(struct pvr_gem_object, base) == 0,
+	      "offsetof(struct pvr_gem_object, base) not zero");
+
+#define shmem_gem_from_pvr_gem(pvr_obj) (&(pvr_obj)->base)
+
+#define shmem_gem_to_pvr_gem(shmem_obj) container_of_const(shmem_obj, struct pvr_gem_object, base)
+
+#define gem_from_pvr_gem(pvr_obj) (&(pvr_obj)->base.base)
+
+#define gem_to_pvr_gem(gem_obj) container_of_const(gem_obj, struct pvr_gem_object, base.base)
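A quick illustration (not part of this patch) of the conversion helpers above,
moving between the PowerVR and base GEM views of one allocation:

	struct drm_gem_object *obj = gem_from_pvr_gem(pvr_obj);  /* pvr -> GEM */
	struct pvr_gem_object *back = gem_to_pvr_gem(obj);       /* GEM -> pvr; back == pvr_obj */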
+
+/* Functions defined in pvr_gem.c */
+
+struct drm_gem_object *pvr_gem_create_object(struct drm_device *drm_dev, size_t size);
+
+struct pvr_gem_object *pvr_gem_object_create(struct pvr_device *pvr_dev,
+					     size_t size, u64 flags);
+
+int pvr_gem_object_into_handle(struct pvr_gem_object *pvr_obj,
+			       struct pvr_file *pvr_file, u32 *handle);
+struct pvr_gem_object *pvr_gem_object_from_handle(struct pvr_file *pvr_file,
+						  u32 handle);
+
+static __always_inline struct sg_table *
+pvr_gem_object_get_pages_sgt(struct pvr_gem_object *pvr_obj)
+{
+	return drm_gem_shmem_get_pages_sgt(shmem_gem_from_pvr_gem(pvr_obj));
+}
+
+void *pvr_gem_object_vmap(struct pvr_gem_object *pvr_obj);
+void pvr_gem_object_vunmap(struct pvr_gem_object *pvr_obj);
+
+int pvr_gem_get_dma_addr(struct pvr_gem_object *pvr_obj, u32 offset,
+			 dma_addr_t *dma_addr_out);
+
+/**
+ * pvr_gem_object_get() - Acquire reference on pvr_gem_object
+ * @pvr_obj: Pointer to object to acquire reference on.
+ */
+static __always_inline void
+pvr_gem_object_get(struct pvr_gem_object *pvr_obj)
+{
+	drm_gem_object_get(gem_from_pvr_gem(pvr_obj));
+}
+
+/**
+ * pvr_gem_object_put() - Release reference on pvr_gem_object
+ * @pvr_obj: Pointer to object to release reference on.
+ */
+static __always_inline void
+pvr_gem_object_put(struct pvr_gem_object *pvr_obj)
+{
+	drm_gem_object_put(gem_from_pvr_gem(pvr_obj));
+}
+
+static __always_inline size_t
+pvr_gem_object_size(struct pvr_gem_object *pvr_obj)
+{
+	return gem_from_pvr_gem(pvr_obj)->size;
+}
+
+#endif /* PVR_GEM_H */
diff --git a/drivers/gpu/drm/imagination/pvr_mmu.c b/drivers/gpu/drm/imagination/pvr_mmu.c
new file mode 100644
index 000000000000..3b48e1f77d09
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_mmu.c
@@ -0,0 +1,2487 @@
+// SPDX-License-Identifier: GPL-2.0 OR MIT
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#include "pvr_mmu.h"
+
+#include "pvr_device.h"
+#include "pvr_fw.h"
+#include "pvr_gem.h"
+#include "pvr_rogue_fwif.h"
+#include "pvr_rogue_mmu_defs.h"
+
+#include <drm/drm_drv.h>
+#include <linux/bitops.h>
+#include <linux/dma-mapping.h>
+#include <linux/kmemleak.h>
+#include <linux/minmax.h>
+#include <linux/sizes.h>
+
+#define PVR_SHIFT_FROM_SIZE(size_) (__builtin_ctzll(size_))
+#define PVR_MASK_FROM_SIZE(size_) (~((size_) - U64_C(1)))
+
+/*
+ * The value of the device page size (%PVR_DEVICE_PAGE_SIZE) is currently
+ * pegged to the host page size (%PAGE_SIZE). This chunk of macro goodness both
+ * ensures that the selected host page size corresponds to a valid device page
+ * size and sets up values needed by the MMU code below.
+ */
+#if (PVR_DEVICE_PAGE_SIZE == SZ_4K)
+# define ROGUE_MMUCTRL_PAGE_SIZE_X ROGUE_MMUCTRL_PAGE_SIZE_4KB
+# define ROGUE_MMUCTRL_PAGE_X_RANGE_SHIFT ROGUE_MMUCTRL_PAGE_4KB_RANGE_SHIFT
+# define ROGUE_MMUCTRL_PAGE_X_RANGE_CLRMSK ROGUE_MMUCTRL_PAGE_4KB_RANGE_CLRMSK
+#elif (PVR_DEVICE_PAGE_SIZE == SZ_16K)
+# define ROGUE_MMUCTRL_PAGE_SIZE_X ROGUE_MMUCTRL_PAGE_SIZE_16KB
+# define ROGUE_MMUCTRL_PAGE_X_RANGE_SHIFT ROGUE_MMUCTRL_PAGE_16KB_RANGE_SHIFT
+# define ROGUE_MMUCTRL_PAGE_X_RANGE_CLRMSK ROGUE_MMUCTRL_PAGE_16KB_RANGE_CLRMSK
+#elif (PVR_DEVICE_PAGE_SIZE == SZ_64K)
+# define ROGUE_MMUCTRL_PAGE_SIZE_X ROGUE_MMUCTRL_PAGE_SIZE_64KB
+# define ROGUE_MMUCTRL_PAGE_X_RANGE_SHIFT ROGUE_MMUCTRL_PAGE_64KB_RANGE_SHIFT
+# define ROGUE_MMUCTRL_PAGE_X_RANGE_CLRMSK ROGUE_MMUCTRL_PAGE_64KB_RANGE_CLRMSK
+#elif (PVR_DEVICE_PAGE_SIZE == SZ_256K)
+# define ROGUE_MMUCTRL_PAGE_SIZE_X ROGUE_MMUCTRL_PAGE_SIZE_256KB
+# define ROGUE_MMUCTRL_PAGE_X_RANGE_SHIFT ROGUE_MMUCTRL_PAGE_256KB_RANGE_SHIFT
+# define ROGUE_MMUCTRL_PAGE_X_RANGE_CLRMSK ROGUE_MMUCTRL_PAGE_256KB_RANGE_CLRMSK
+#elif (PVR_DEVICE_PAGE_SIZE == SZ_1M)
+# define ROGUE_MMUCTRL_PAGE_SIZE_X ROGUE_MMUCTRL_PAGE_SIZE_1MB
+# define ROGUE_MMUCTRL_PAGE_X_RANGE_SHIFT ROGUE_MMUCTRL_PAGE_1MB_RANGE_SHIFT
+# define ROGUE_MMUCTRL_PAGE_X_RANGE_CLRMSK ROGUE_MMUCTRL_PAGE_1MB_RANGE_CLRMSK
+#elif (PVR_DEVICE_PAGE_SIZE == SZ_2M)
+# define ROGUE_MMUCTRL_PAGE_SIZE_X ROGUE_MMUCTRL_PAGE_SIZE_2MB
+# define ROGUE_MMUCTRL_PAGE_X_RANGE_SHIFT ROGUE_MMUCTRL_PAGE_2MB_RANGE_SHIFT
+# define ROGUE_MMUCTRL_PAGE_X_RANGE_CLRMSK ROGUE_MMUCTRL_PAGE_2MB_RANGE_CLRMSK
+#else
+# error Unsupported device page size PVR_DEVICE_PAGE_SIZE
+#endif
+
+#define ROGUE_MMUCTRL_ENTRIES_PT_VALUE_X   \
+	(ROGUE_MMUCTRL_ENTRIES_PT_VALUE >> \
+	 (PVR_DEVICE_PAGE_SHIFT - PVR_SHIFT_FROM_SIZE(SZ_4K)))
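For example (an illustrative calculation using the figures from the level 1
entry layout documented later in this file): with PVR_DEVICE_PAGE_SIZE == SZ_16K
the device page shift is 14, so ROGUE_MMUCTRL_ENTRIES_PT_VALUE_X becomes
ROGUE_MMUCTRL_ENTRIES_PT_VALUE >> (14 - 12), i.e. a quarter of the 4KiB-page
value (512 -> 128), matching the 16KiB row of that table.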
+
+/**
+ * pvr_mmu_flush() - Request flush of all MMU caches.
+ * @pvr_dev: Target PowerVR device.
+ *
+ * This function must be called following any possible change to the MMU page
+ * tables.
+ *
+ * Returns:
+ *  * 0 on success, or
+ *  * Any error encountered while submitting the flush command via the KCCB.
+ */
+int
+pvr_mmu_flush(struct pvr_device *pvr_dev)
+{
+	/* TODO: implement */
+	return -ENODEV;
+}
+
+/**
+ * DOC: PowerVR Virtual Memory Handling
+ */
+/**
+ * DOC: PowerVR Virtual Memory Handling (constants)
+ *
+ * .. c:macro:: PVR_IDX_INVALID
+ *
+ *    Default value for a u16-based index.
+ *
+ *    This value cannot be zero, since zero is a valid index value.
+ */
+#define PVR_IDX_INVALID ((u16)(-1))
+
+/**
+ * DOC: MMU backing pages
+ */
+/**
+ * DOC: MMU backing pages (constants)
+ *
+ * .. c:macro:: PVR_MMU_BACKING_PAGE_SIZE
+ *
+ *    Page size of a PowerVR device's integrated MMU. The CPU page size must be
+ *    at least as large as this value for the current implementation; this is
+ *    checked at compile-time.
+ */
+#define PVR_MMU_BACKING_PAGE_SIZE SZ_4K
+static_assert(PAGE_SIZE >= PVR_MMU_BACKING_PAGE_SIZE);
+
+/**
+ * struct pvr_mmu_backing_page - Represents a single page used to back a page
+ *                              table of any level.
+ * @dma_addr: DMA address of this page.
+ * @host_ptr: CPU address of this page.
+ * @pvr_dev: The PowerVR device to which this page is associated. **For
+ *           internal use only.**
+ */
+struct pvr_mmu_backing_page {
+	dma_addr_t dma_addr;
+	void *host_ptr;
+/* private: internal use only */
+	struct page *raw_page;
+	struct pvr_device *pvr_dev;
+};
+
+/**
+ * pvr_mmu_backing_page_init() - Initialize an MMU backing page.
+ * @page: Target backing page.
+ * @pvr_dev: Target PowerVR device.
+ *
+ * This function performs three distinct operations:
+ *
+ * 1. Allocate a single page,
+ * 2. Map the page to the CPU, and
+ * 3. Map the page to DMA-space.
+ *
+ * It is expected that @page be zeroed (e.g. from kzalloc()) before calling
+ * this function.
+ *
+ * Return:
+ *  * 0 on success, or
+ *  * -%ENOMEM if allocation of the backing page or mapping of the backing
+ *    page to DMA fails.
+ */
+static int
+pvr_mmu_backing_page_init(struct pvr_mmu_backing_page *page,
+			  struct pvr_device *pvr_dev)
+{
+	struct device *dev = from_pvr_device(pvr_dev)->dev;
+
+	struct page *raw_page;
+	int err;
+
+	dma_addr_t dma_addr;
+	void *host_ptr;
+
+	raw_page = alloc_page(__GFP_ZERO | GFP_KERNEL);
+	if (!raw_page)
+		return -ENOMEM;
+
+	host_ptr = vmap(&raw_page, 1, VM_MAP, pgprot_writecombine(PAGE_KERNEL));
+	if (!host_ptr) {
+		err = -ENOMEM;
+		goto err_free_page;
+	}
+
+	dma_addr = dma_map_page(dev, raw_page, 0, PVR_MMU_BACKING_PAGE_SIZE,
+				DMA_TO_DEVICE);
+	if (dma_mapping_error(dev, dma_addr)) {
+		err = -ENOMEM;
+		goto err_unmap_page;
+	}
+
+	page->dma_addr = dma_addr;
+	page->host_ptr = host_ptr;
+	page->pvr_dev = pvr_dev;
+	page->raw_page = raw_page;
+	kmemleak_alloc(page->host_ptr, PAGE_SIZE, 1, GFP_KERNEL);
+
+	return 0;
+
+err_unmap_page:
+	vunmap(host_ptr);
+
+err_free_page:
+	__free_page(raw_page);
+
+	return err;
+}
+
+/**
+ * pvr_mmu_backing_page_fini() - Teardown an MMU backing page.
+ * @page: Target backing page.
+ *
+ * This function performs the mirror operations to pvr_mmu_backing_page_init(),
+ * in reverse order:
+ *
+ * 1. Unmap the page from DMA-space,
+ * 2. Unmap the page from the CPU, and
+ * 3. Free the page.
+ *
+ * It also zeros @page.
+ *
+ * It is a no-op to call this function a second (or further) time on any @page.
+ */
+static void
+pvr_mmu_backing_page_fini(struct pvr_mmu_backing_page *page)
+{
+	struct device *dev = from_pvr_device(page->pvr_dev)->dev;
+
+	/* Do nothing if no allocation is present. */
+	if (!page->pvr_dev)
+		return;
+
+	dma_unmap_page(dev, page->dma_addr, PVR_MMU_BACKING_PAGE_SIZE,
+		       DMA_TO_DEVICE);
+
+	kmemleak_free(page->host_ptr);
+	vunmap(page->host_ptr);
+
+	__free_page(page->raw_page);
+
+	memset(page, 0, sizeof(*page));
+}
+
+/**
+ * pvr_mmu_backing_page_sync() - Flush an MMU backing page from the CPU to the
+ *                              device.
+ * @page: Target backing page.
+ *
+ * .. caution::
+ *
+ *    **This is potentially an expensive function call.** Only call
+ *    pvr_mmu_backing_page_sync() once you're sure you have no more changes to
+ *    make to the backing page in the immediate future.
+ */
+static void
+pvr_mmu_backing_page_sync(struct pvr_mmu_backing_page *page)
+{
+	struct device *dev;
+
+	/*
+	 * Do nothing if no allocation is present. This may be the case if
+	 * we are unmapping pages.
+	 */
+	if (!page->pvr_dev)
+		return;
+
+	dev = from_pvr_device(page->pvr_dev)->dev;
+
+	dma_sync_single_for_device(dev, page->dma_addr,
+				   PVR_MMU_BACKING_PAGE_SIZE, DMA_TO_DEVICE);
+}
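A hypothetical lifecycle sketch (not part of this patch) tying the three
helpers above together; example_backing_page_cycle() is illustrative only:

static int example_backing_page_cycle(struct pvr_device *pvr_dev)
{
	struct pvr_mmu_backing_page page = {};
	int err;

	err = pvr_mmu_backing_page_init(&page, pvr_dev);
	if (err)
		return err;

	/* Write page table entries through the CPU mapping... */
	memset(page.host_ptr, 0, PVR_MMU_BACKING_PAGE_SIZE);

	/* ...then make them visible to the device before use. */
	pvr_mmu_backing_page_sync(&page);

	pvr_mmu_backing_page_fini(&page);

	return 0;
}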
+
+/**
+ * DOC: Raw page tables
+ */
+
+#define PVR_PAGE_TABLE_TYPEOF_ENTRY(level_) \
+	typeof_member(struct pvr_page_table_l##level_##_entry_raw, val)
+
+#define PVR_PAGE_TABLE_FIELD_GET(level_, name_, field_, entry_)           \
+	(((entry_).val &                                           \
+	  ~ROGUE_MMUCTRL_##name_##_DATA_##field_##_CLRMSK) >> \
+	 ROGUE_MMUCTRL_##name_##_DATA_##field_##_SHIFT)
+
+#define PVR_PAGE_TABLE_FIELD_PREP(level_, name_, field_, val_)            \
+	((((PVR_PAGE_TABLE_TYPEOF_ENTRY(level_))(val_))            \
+	  << ROGUE_MMUCTRL_##name_##_DATA_##field_##_SHIFT) & \
+	 ~ROGUE_MMUCTRL_##name_##_DATA_##field_##_CLRMSK)
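For instance (assuming the ROGUE_MMUCTRL_PC_DATA_VALID_* definitions from
pvr_rogue_mmu_defs.h), PVR_PAGE_TABLE_FIELD_PREP(2, PC, VALID, true) expands,
once typeof_member() is resolved, to

	(((u32)true << ROGUE_MMUCTRL_PC_DATA_VALID_SHIFT) &
	 ~ROGUE_MMUCTRL_PC_DATA_VALID_CLRMSK)

and PVR_PAGE_TABLE_FIELD_GET() performs the inverse mask-and-shift.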
+
+/**
+ * struct pvr_page_table_l2_entry_raw - A single entry in a level 2 page table.
+ * @val: The raw value of this entry.
+ *
+ * This type is a structure for type-checking purposes. At compile-time, its
+ * size is checked against %ROGUE_MMUCTRL_ENTRY_SIZE_PC_VALUE.
+ *
+ * The value stored in this structure can be decoded using the following bitmap:
+ *
+ * .. flat-table::
+ *    :widths: 1 5
+ *    :stub-columns: 1
+ *
+ *    * - 31..4
+ *      - **Level 1 Page Table Base Address:** Bits 39..12 of the L1
+ *        page table base address, which is 4KiB aligned.
+ *
+ *    * - 3..2
+ *      - *(reserved)*
+ *
+ *    * - 1
+ *      - **Pending:** When the valid bit is not set, this indicates that a
+ *        valid entry is pending and the MMU should wait for the driver to map
+ *        the entry. This is used to support page demand mapping of
+ *        memory.
+ *
+ *    * - 0
+ *      - **Valid:** Indicates that the entry contains a valid L1 page
+ *        table. If the valid bit is not set, then an attempted use of
+ *        the page would result in a page fault.
+ */
+struct pvr_page_table_l2_entry_raw {
+	u32 val;
+} __packed;
+static_assert(sizeof(struct pvr_page_table_l2_entry_raw) * 8 ==
+	      ROGUE_MMUCTRL_ENTRY_SIZE_PC_VALUE);
+
+static bool
+pvr_page_table_l2_entry_raw_is_valid(struct pvr_page_table_l2_entry_raw entry)
+{
+	return PVR_PAGE_TABLE_FIELD_GET(2, PC, VALID, entry);
+}
+
+/**
+ * pvr_page_table_l2_entry_raw_set() - Write a valid entry into a raw level 2
+ *                                     page table.
+ * @entry: Target raw level 2 page table entry.
+ * @child_table_dma_addr: DMA address of the level 1 page table to be
+ *                        associated with @entry.
+ *
+ * When calling this function, @child_table_dma_addr must be a valid DMA
+ * address and a multiple of %ROGUE_MMUCTRL_PC_DATA_PD_BASE_ALIGNSIZE.
+ */
+static void
+pvr_page_table_l2_entry_raw_set(struct pvr_page_table_l2_entry_raw *entry,
+				dma_addr_t child_table_dma_addr)
+{
+	child_table_dma_addr >>= ROGUE_MMUCTRL_PC_DATA_PD_BASE_ALIGNSHIFT;
+
+	entry->val =
+		PVR_PAGE_TABLE_FIELD_PREP(2, PC, VALID, true) |
+		PVR_PAGE_TABLE_FIELD_PREP(2, PC, ENTRY_PENDING, false) |
+		PVR_PAGE_TABLE_FIELD_PREP(2, PC, PD_BASE, child_table_dma_addr);
+}
+
+static void
+pvr_page_table_l2_entry_raw_clear(struct pvr_page_table_l2_entry_raw *entry)
+{
+	entry->val = 0;
+}
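A worked illustration (assuming the PD base alignment shift is 12 and the
PD_BASE field occupies bits 31..4, as described in the entry layout above):
for a level 1 table backed at DMA address 0x812345000,
pvr_page_table_l2_entry_raw_set() shifts the address down to 0x812345, places
it in bits 31..4 and sets the valid bit, so the stored entry value is
0x08123451 with the pending bit clear.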
+
+/**
+ * struct pvr_page_table_l1_entry_raw - A single entry in a level 1 page table.
+ * @val: The raw value of this entry.
+ *
+ * This type is a structure for type-checking purposes. At compile-time, its
+ * size is checked against %ROGUE_MMUCTRL_ENTRY_SIZE_PD_VALUE.
+ *
+ * The value stored in this structure can be decoded using the following bitmap:
+ *
+ * .. flat-table::
+ *    :widths: 1 5
+ *    :stub-columns: 1
+ *
+ *    * - 63..41
+ *      - *(reserved)*
+ *
+ *    * - 40
+ *      - **Pending:** When the valid bit is not set, this indicates that a
+ *        valid entry is pending and the MMU should wait for the driver to map
+ *        the entry.
+ *        This is used to support page demand mapping of memory.
+ *
+ *    * - 39..5
+ *      - **Level 0 Page Table Base Address:** The way this value is
+ *        interpreted depends on the page size. Bits not specified in the
+ *        table below (e.g. bits 11..5 for page size 4KiB) should be
+ *        considered reserved.
+ *
+ *        This table shows the bits used in an L1 page table entry to
+ *        represent the Physical Table Base Address for a given Page Size.
+ *        Since each L1 page table entry covers 2MiB of address space, the
+ *        maximum page size is 2MiB.
+ *
+ *        .. flat-table::
+ *           :widths: 1 1 1 1
+ *           :header-rows: 1
+ *           :stub-columns: 1
+ *
+ *           * - Page size
+ *             - L0 page table base address bits
+ *             - Number of L0 page table entries
+ *             - Size of L0 page table
+ *
+ *           * - 4KiB
+ *             - 39..12
+ *             - 512
+ *             - 4KiB
+ *
+ *           * - 16KiB
+ *             - 39..10
+ *             - 128
+ *             - 1KiB
+ *
+ *           * - 64KiB
+ *             - 39..8
+ *             - 32
+ *             - 256B
+ *
+ *           * - 256KiB
+ *             - 39..6
+ *             - 8
+ *             - 64B
+ *
+ *           * - 1MiB
+ *             - 39..5 (4 = '0')
+ *             - 2
+ *             - 16B
+ *
+ *           * - 2MiB
+ *             - 39..5 (4..3 = '00')
+ *             - 1
+ *             - 8B
+ *
+ *    * - 4
+ *      - *(reserved)*
+ *
+ *    * - 3..1
+ *      - **Page Size:** Sets the page size, from 4KiB to 2MiB.
+ *
+ *    * - 0
+ *      - **Valid:** Indicates that the entry contains a valid L0 page table.
+ *        If the valid bit is not set, then an attempted use of the page would
+ *        result in a page fault.
+ */
+struct pvr_page_table_l1_entry_raw {
+	u64 val;
+} __packed;
+static_assert(sizeof(struct pvr_page_table_l1_entry_raw) * 8 ==
+	      ROGUE_MMUCTRL_ENTRY_SIZE_PD_VALUE);
+
+static bool
+pvr_page_table_l1_entry_raw_is_valid(struct pvr_page_table_l1_entry_raw entry)
+{
+	return PVR_PAGE_TABLE_FIELD_GET(1, PD, VALID, entry);
+}
+
+/**
+ * pvr_page_table_l1_entry_raw_set() - Write a valid entry into a raw level 1
+ *                                     page table.
+ * @entry: Target raw level 1 page table entry.
+ * @child_table_dma_addr: DMA address of the level 0 page table to be
+ *                        associated with @entry.
+ *
+ * When calling this function, @child_table_dma_addr must be a valid DMA
+ * address and a multiple of 4 KiB.
+ */
+static void
+pvr_page_table_l1_entry_raw_set(struct pvr_page_table_l1_entry_raw *entry,
+				dma_addr_t child_table_dma_addr)
+{
+	entry->val = PVR_PAGE_TABLE_FIELD_PREP(1, PD, VALID, true) |
+		     PVR_PAGE_TABLE_FIELD_PREP(1, PD, ENTRY_PENDING, false) |
+		     PVR_PAGE_TABLE_FIELD_PREP(1, PD, PAGE_SIZE,
+					       ROGUE_MMUCTRL_PAGE_SIZE_X) |
+		     /*
+		      * The use of a 4K-specific macro here is correct. It is
+		      * a future optimization to allocate sub-host-page-sized
+		      * blocks for individual tables, so the condition that any
+		      * page table address is aligned to the size of the
+		      * largest (a 4KB) table currently holds.
+		      */
+		     (child_table_dma_addr &
+		      ~ROGUE_MMUCTRL_PT_BASE_4KB_RANGE_CLRMSK);
+}
+
+static void
+pvr_page_table_l1_entry_raw_clear(struct pvr_page_table_l1_entry_raw *entry)
+{
+	entry->val = 0;
+}
+
+/**
+ * struct pvr_page_table_l0_entry_raw - A single entry in a level 0 page table.
+ * @val: The raw value of this entry.
+ *
+ * This type is a structure for type-checking purposes. At compile-time, its
+ * size is checked against %ROGUE_MMUCTRL_ENTRY_SIZE_PT_VALUE.
+ *
+ * The value stored in this structure can be decoded using the following bitmap:
+ *
+ * .. flat-table::
+ *    :widths: 1 5
+ *    :stub-columns: 1
+ *
+ *    * - 63
+ *      - *(reserved)*
+ *
+ *    * - 62
+ *      - **PM/FW Protect:** Indicates a protected region which only the
+ *        Parameter Manager (PM) or firmware processor can write to.
+ *
+ *    * - 61..40
+ *      - **VP Page (High):** Virtual-physical page used for Parameter Manager
+ *        (PM) memory. This field is only used if the additional level of PB
+ *        virtualization is enabled. The VP Page field is needed by the PM in
+ *        order to correctly reconstitute the free lists after render
+ *        completion. This (High) field holds bits 39..18 of the value; the
+ *        Low field holds bits 17..12. Bits 11..0 are always zero because the
+ *        value is always aligned to the 4KiB page size.
+ *
+ *    * - 39..12
+ *      - **Physical Page Address:** The way this value is interpreted depends
+ *        on the page size. Bits not specified in the table below (e.g. bits
+ *        20..12 for page size 2MiB) should be considered reserved.
+ *
+ *        This table shows the bits used in an L0 page table entry to represent
+ *        the Physical Page Address for a given page size (as defined in the
+ *        associated L1 page table entry).
+ *
+ *        .. flat-table::
+ *           :widths: 1 1
+ *           :header-rows: 1
+ *           :stub-columns: 1
+ *
+ *           * - Page size
+ *             - Physical address bits
+ *
+ *           * - 4KiB
+ *             - 39..12
+ *
+ *           * - 16KiB
+ *             - 39..14
+ *
+ *           * - 64KiB
+ *             - 39..16
+ *
+ *           * - 256KiB
+ *             - 39..18
+ *
+ *           * - 1MiB
+ *             - 39..20
+ *
+ *           * - 2MiB
+ *             - 39..21
+ *
+ *    * - 11..6
+ *      - **VP Page (Low):** Continuation of VP Page (High).
+ *
+ *    * - 5
+ *      - **Pending:** When the valid bit is not set, this indicates that a
+ *        valid entry is pending and the MMU should wait for the driver to map
+ *        the entry.
+ *        This is used to support page demand mapping of memory.
+ *
+ *    * - 4
+ *      - **PM Src:** Set on Parameter Manager (PM) allocated page table
+ *        entries when indicated by the PM. Note that this bit will only be set
+ *        by the PM, not by the device driver.
+ *
+ *    * - 3
+ *      - **SLC Bypass Control:** Specifies requests to this page should bypass
+ *        the System Level Cache (SLC), if enabled in SLC configuration.
+ *
+ *    * - 2
+ *      - **Cache Coherency:** Indicates that the page is coherent (i.e. it
+ *        does not require a cache flush between operations on the CPU and the
+ *        device).
+ *
+ *    * - 1
+ *      - **Read Only:** If set, this bit indicates that the page is read only.
+ *        An attempted write to this page would result in a write-protection
+ *        fault.
+ *
+ *    * - 0
+ *      - **Valid:** Indicates that the entry contains a valid page. If the
+ *        valid bit is not set, then an attempted use of the page would result
+ *        in a page fault.
+ */
+struct pvr_page_table_l0_entry_raw {
+	u64 val;
+} __packed;
+static_assert(sizeof(struct pvr_page_table_l0_entry_raw) * 8 ==
+	      ROGUE_MMUCTRL_ENTRY_SIZE_PT_VALUE);
+
+/**
+ * struct pvr_page_flags_raw - The configurable flags from a single entry in a
+ *                             level 0 page table.
+ * @val: The raw value of these flags. Since these are a strict subset of
+ *       &struct pvr_page_table_l0_entry_raw, that type is reused for this member.
+ *
+ * The flags stored in this type are: PM/FW Protect; SLC Bypass Control; Cache
+ * Coherency, and Read Only (bits 62, 3, 2 and 1 respectively).
+ *
+ * This type should never be instantiated directly; instead use
+ * pvr_page_flags_raw_create() to ensure only valid bits of @val are set.
+ */
+struct pvr_page_flags_raw {
+	struct pvr_page_table_l0_entry_raw val;
+} __packed;
+static_assert(sizeof(struct pvr_page_flags_raw) ==
+	      sizeof(struct pvr_page_table_l0_entry_raw));
+
+static bool
+pvr_page_table_l0_entry_raw_is_valid(struct pvr_page_table_l0_entry_raw entry)
+{
+	return PVR_PAGE_TABLE_FIELD_GET(0, PT, VALID, entry);
+}
+
+/**
+ * pvr_page_table_l0_entry_raw_set() - Write a valid entry into a raw level 0
+ *                                     page table.
+ * @entry: Target raw level 0 page table entry.
+ * @dma_addr: DMA address of the physical page to be associated with @entry.
+ * @flags: Options to be set on @entry.
+ *
+ * When calling this function, @dma_addr must be a valid DMA address and a
+ * multiple of %PVR_DEVICE_PAGE_SIZE.
+ *
+ * The @flags parameter is directly assigned into @entry. It is the caller's
+ * responsibility to ensure that only bits specified in
+ * &struct pvr_page_flags_raw are set in @flags.
+ */
+static void
+pvr_page_table_l0_entry_raw_set(struct pvr_page_table_l0_entry_raw *entry,
+				dma_addr_t dma_addr,
+				struct pvr_page_flags_raw flags)
+{
+	entry->val = PVR_PAGE_TABLE_FIELD_PREP(0, PT, VALID, true) |
+		     PVR_PAGE_TABLE_FIELD_PREP(0, PT, ENTRY_PENDING, false) |
+		     (dma_addr & ~ROGUE_MMUCTRL_PAGE_X_RANGE_CLRMSK) |
+		     flags.val.val;
+}
+
+static void
+pvr_page_table_l0_entry_raw_clear(struct pvr_page_table_l0_entry_raw *entry)
+{
+	entry->val = 0;
+}
+
+/**
+ * pvr_page_flags_raw_create() - Initialize the flag bits of a raw level 0 page
+ *                               table entry.
+ * @read_only: This page is read-only (see: Read Only).
+ * @cache_coherent: This page does not require cache flushes (see: Cache
+ *                  Coherency).
+ * @slc_bypass: This page bypasses the device cache (see: SLC Bypass Control).
+ * @pm_fw_protect: This page is only for use by the firmware or Parameter
+ *                 Manager (see PM/FW Protect).
+ *
+ * For more details on the use of these four options, see their respective
+ * entries in the table under &struct pvr_page_table_l0_entry_raw.
+ *
+ * Return:
+ * A new &struct pvr_page_flags_raw instance which can be passed directly to
+ * pvr_page_table_l0_entry_raw_set() or pvr_page_table_l0_insert().
+ */
+static struct pvr_page_flags_raw
+pvr_page_flags_raw_create(bool read_only, bool cache_coherent, bool slc_bypass,
+			  bool pm_fw_protect)
+{
+	struct pvr_page_flags_raw flags;
+
+	flags.val.val =
+		PVR_PAGE_TABLE_FIELD_PREP(0, PT, READ_ONLY, read_only) |
+		PVR_PAGE_TABLE_FIELD_PREP(0, PT, CC, cache_coherent) |
+		PVR_PAGE_TABLE_FIELD_PREP(0, PT, SLC_BYPASS_CTRL, slc_bypass) |
+		PVR_PAGE_TABLE_FIELD_PREP(0, PT, PM_META_PROTECT, pm_fw_protect);
+
+	return flags;
+}
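A hypothetical usage sketch (not part of this patch) encoding a read-only,
firmware-protected page into a raw level 0 entry:

static void example_encode_l0_entry(struct pvr_page_table_l0_entry_raw *entry,
				    dma_addr_t page_dma_addr)
{
	struct pvr_page_flags_raw flags =
		pvr_page_flags_raw_create(true,   /* read_only */
					  false,  /* cache_coherent */
					  false,  /* slc_bypass */
					  true);  /* pm_fw_protect */

	pvr_page_table_l0_entry_raw_set(entry, page_dma_addr, flags);
}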
+
+/**
+ * struct pvr_page_table_l2_raw - The raw data of a level 2 page table.
+ *
+ * This type is a structure for type-checking purposes. At compile-time, its
+ * size is checked against %PVR_MMU_BACKING_PAGE_SIZE.
+ */
+struct pvr_page_table_l2_raw {
+	/** @entries: The raw values of this table. */
+	struct pvr_page_table_l2_entry_raw
+		entries[ROGUE_MMUCTRL_ENTRIES_PC_VALUE];
+} __packed;
+static_assert(sizeof(struct pvr_page_table_l2_raw) == PVR_MMU_BACKING_PAGE_SIZE);
+
+/**
+ * struct pvr_page_table_l1_raw - The raw data of a level 1 page table.
+ *
+ * This type is a structure for type-checking purposes. At compile-time, its
+ * size is checked against %PVR_MMU_BACKING_PAGE_SIZE.
+ */
+struct pvr_page_table_l1_raw {
+	/** @entries: The raw values of this table. */
+	struct pvr_page_table_l1_entry_raw
+		entries[ROGUE_MMUCTRL_ENTRIES_PD_VALUE];
+} __packed;
+static_assert(sizeof(struct pvr_page_table_l1_raw) == PVR_MMU_BACKING_PAGE_SIZE);
+
+/**
+ * struct pvr_page_table_l0_raw - The raw data of a level 0 page table.
+ *
+ * This type is a structure for type-checking purposes. At compile-time, its
+ * size is checked against %PVR_MMU_BACKING_PAGE_SIZE.
+ *
+ * .. caution::
+ *
+ *    The size of level 0 page tables is variable depending on the page size
+ *    specified in the associated level 1 page table entry. Since the device
+ *    page size in use is pegged to the host page size, it cannot vary at
+ *    runtime. This structure is therefore only defined to contain the required
+ *    number of entries for the current device page size. **You should never
+ *    read or write beyond the last supported entry.**
+ */
+struct pvr_page_table_l0_raw {
+	/** @entries: The raw values of this table. */
+	struct pvr_page_table_l0_entry_raw
+		entries[ROGUE_MMUCTRL_ENTRIES_PT_VALUE_X];
+} __packed;
+static_assert(sizeof(struct pvr_page_table_l0_raw) <= PVR_MMU_BACKING_PAGE_SIZE);
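To put numbers on the caution above (using the level 1 entry table earlier in
this file): with 4KiB device pages ROGUE_MMUCTRL_ENTRIES_PT_VALUE_X is 512
eight-byte entries, so the raw level 0 table fills exactly one 4KiB backing
page, whereas with 2MiB device pages it shrinks to a single 8-byte entry -
hence the assertion uses <= rather than ==.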
+
+/**
+ * DOC: Mirror page tables
+ */
+
+/*
+ * We pre-declare these types because they cross-depend on pointers to each
+ * other.
+ */
+struct pvr_page_table_l1;
+struct pvr_page_table_l0;
+
+/**
+ * struct pvr_page_table_l2 - A wrapped level 2 page table.
+ *
+ * To access the raw part of this table, use pvr_page_table_l2_get_raw().
+ * Alternatively to access a raw entry directly, use
+ * pvr_page_table_l2_get_entry_raw().
+ *
+ * A level 2 page table forms the root of the page table tree structure, so
+ * this type has no &parent or &parent_idx members.
+ */
+struct pvr_page_table_l2 {
+	/**
+	 * @entries: The children of this node in the page table tree
+	 * structure. These are also mirror tables. The indexing of this array
+	 * is identical to that of the raw equivalent
+	 * (&pvr_page_table_l2_raw.entries).
+	 */
+	struct pvr_page_table_l1 *entries[ROGUE_MMUCTRL_ENTRIES_PC_VALUE];
+
+	/**
+	 * @backing_page: A handle to the memory which holds the raw
+	 * equivalent of this table. **For internal use only.**
+	 */
+	struct pvr_mmu_backing_page backing_page;
+
+	/**
+	 * @entry_count: The current number of valid entries (that we know of)
+	 * in this table. This value is essentially a refcount - the table is
+	 * destroyed when this value is decremented to zero by
+	 * pvr_page_table_l2_remove().
+	 */
+	u16 entry_count;
+};
+
+/**
+ * pvr_page_table_l2_init() - Initialize a level 2 page table.
+ * @table: Target level 2 page table.
+ * @pvr_dev: Target PowerVR device
+ *
+ * It is expected that @table be zeroed (e.g. from kzalloc()) before calling
+ * this function.
+ *
+ * Return:
+ *  * 0 on success, or
+ *  * Any error encountered while initializing &table->backing_page using
+ *    pvr_mmu_backing_page_init().
+ */
+static int
+pvr_page_table_l2_init(struct pvr_page_table_l2 *table,
+		       struct pvr_device *pvr_dev)
+{
+	return pvr_mmu_backing_page_init(&table->backing_page, pvr_dev);
+}
+
+/**
+ * pvr_page_table_l2_fini() - Teardown a level 2 page table.
+ * @table: Target level 2 page table.
+ *
+ * It is an error to attempt to use @table after calling this function.
+ */
+static void
+pvr_page_table_l2_fini(struct pvr_page_table_l2 *table)
+{
+	pvr_mmu_backing_page_fini(&table->backing_page);
+}
+
+/**
+ * pvr_page_table_l2_sync() - Flush a level 2 page table from the CPU to the
+ *                            device.
+ * @table: Target level 2 page table.
+ *
+ * This is just a thin wrapper around pvr_mmu_backing_page_sync(), so the
+ * warning there applies here too: **Only call pvr_page_table_l2_sync() once
+ * you're sure you have no more changes to make to** @table **in the immediate
+ * future.**
+ *
+ * If child level 1 page tables of @table also need to be flushed, this should
+ * be done first using pvr_page_table_l1_sync() *before* calling this function.
+ */
+static void
+pvr_page_table_l2_sync(struct pvr_page_table_l2 *table)
+{
+	pvr_mmu_backing_page_sync(&table->backing_page);
+}
+
+/**
+ * pvr_page_table_l2_get_raw() - Access the raw equivalent of a mirror level 2
+ *                               page table.
+ * @table: Target level 2 page table.
+ *
+ * Essentially returns the CPU address of the raw equivalent of @table, cast to
+ * a &struct pvr_page_table_l2_raw pointer.
+ *
+ * You probably want to call pvr_page_table_l2_get_entry_raw() instead.
+ *
+ * Return:
+ * The raw equivalent of @table.
+ */
+static struct pvr_page_table_l2_raw *
+pvr_page_table_l2_get_raw(struct pvr_page_table_l2 *table)
+{
+	return table->backing_page.host_ptr;
+}
+
+/**
+ * pvr_page_table_l2_get_entry_raw() - Access an entry from the raw equivalent
+ *                                     of a mirror level 2 page table.
+ * @table: Target level 2 page table.
+ * @idx: Index of the entry to access.
+ *
+ * Technically this function returns a pointer to a slot in a raw level 2 page
+ * table, since the returned "entry" is not guaranteed to be valid. The caller
+ * must verify the validity of the entry at the returned address (perhaps using
+ * pvr_page_table_l2_entry_raw_is_valid()) before reading or overwriting it.
+ *
+ * The value of @idx is not checked here; it is the caller's responsibility to
+ * ensure @idx refers to a valid index within @table before dereferencing the
+ * returned pointer.
+ *
+ * Return:
+ * A pointer to the requested raw level 2 page table entry.
+ */
+static struct pvr_page_table_l2_entry_raw *
+pvr_page_table_l2_get_entry_raw(struct pvr_page_table_l2 *table, u16 idx)
+{
+	return &pvr_page_table_l2_get_raw(table)->entries[idx];
+}
+
+/**
+ * pvr_page_table_l2_entry_is_valid() - Check if a level 2 page table entry is
+ *                                      marked as valid.
+ * @table: Target level 2 page table.
+ * @idx: Index of the entry to check.
+ *
+ * The value of @idx is not checked here; it is the caller's responsibility to
+ * ensure @idx refers to a valid index within @table before calling this
+ * function.
+ */
+static bool
+pvr_page_table_l2_entry_is_valid(struct pvr_page_table_l2 *table, u16 idx)
+{
+	struct pvr_page_table_l2_entry_raw entry_raw =
+		*pvr_page_table_l2_get_entry_raw(table, idx);
+
+	return pvr_page_table_l2_entry_raw_is_valid(entry_raw);
+}
+
+/**
+ * struct pvr_page_table_l1 - A wrapped level 1 page table.
+ *
+ * To access the raw part of this table, use pvr_page_table_l1_get_raw().
+ * Alternatively to access a raw entry directly, use
+ * pvr_page_table_l1_get_entry_raw().
+ */
+struct pvr_page_table_l1 {
+	/**
+	 * @entries: The children of this node in the page table tree
+	 * structure. These are also mirror tables. The indexing of this array
+	 * is identical to that of the raw equivalent
+	 * (&pvr_page_table_l1_raw.entries).
+	 */
+	struct pvr_page_table_l0 *entries[ROGUE_MMUCTRL_ENTRIES_PD_VALUE];
+
+	/**
+	 * @backing_page: A handle to the memory which holds the raw
+	 * equivalent of this table. **For internal use only.**
+	 */
+	struct pvr_mmu_backing_page backing_page;
+
+	union {
+		/**
+		 * @parent: The parent of this node in the page table tree structure.
+		 *
+		 * This is also a mirror table.
+		 *
+		 * Only valid when the L1 page table is active. When the L1 page table
+		 * has been removed and queued for destruction, the next_free field
+		 * should be used instead.
+		 */
+		struct pvr_page_table_l2 *parent;
+
+		/**
+		 * @next_free: Pointer to the next L1 page table to take/free.
+		 *
+		 * Used to form a linked list of L1 page tables. This is used
+		 * when preallocating tables and when the page table has been
+		 * removed and queued for destruction.
+		 */
+		struct pvr_page_table_l1 *next_free;
+	};
+
+	/**
+	 * @parent_idx: The index of the entry in the parent table (see
+	 * @parent) which corresponds to this table.
+	 */
+	u16 parent_idx;
+
+	/**
+	 * @entry_count: The current number of valid entries (that we know of)
+	 * in this table. This value is essentially a refcount - the table is
+	 * destroyed when this value is decremented to zero by
+	 * pvr_page_table_l1_remove().
+	 */
+	u16 entry_count;
+};
+
+/**
+ * pvr_page_table_l1_init() - Initialize a level 1 page table.
+ * @table: Target level 1 page table.
+ * @pvr_dev: Target PowerVR device
+ *
+ * When this function returns successfully, @table is still not considered
+ * valid. It must be inserted into the page table tree structure with
+ * pvr_page_table_l2_insert() before it is ready for use.
+ *
+ * It is expected that @table be zeroed (e.g. from kzalloc()) before calling
+ * this function.
+ *
+ * Return:
+ *  * 0 on success, or
+ *  * Any error encountered while initializing &table->backing_page using
+ *    pvr_mmu_backing_page_init().
+ */
+static int
+pvr_page_table_l1_init(struct pvr_page_table_l1 *table,
+		       struct pvr_device *pvr_dev)
+{
+	table->parent_idx = PVR_IDX_INVALID;
+
+	return pvr_mmu_backing_page_init(&table->backing_page, pvr_dev);
+}
+
+/**
+ * pvr_page_table_l1_free() - Teardown a level 1 page table.
+ * @table: Target level 1 page table.
+ *
+ * It is an error to attempt to use @table after calling this function, even
+ * indirectly. This includes calling pvr_page_table_l2_remove(), which must
+ * be called *before* pvr_page_table_l1_free().
+ */
+static void
+pvr_page_table_l1_free(struct pvr_page_table_l1 *table)
+{
+	pvr_mmu_backing_page_fini(&table->backing_page);
+	kfree(table);
+}
+
+/**
+ * pvr_page_table_l1_sync() - Flush a level 1 page table from the CPU to the
+ *                            device.
+ * @table: Target level 1 page table.
+ *
+ * This is just a thin wrapper around pvr_mmu_backing_page_sync(), so the
+ * warning there applies here too: **Only call pvr_page_table_l1_sync() once
+ * you're sure you have no more changes to make to** @table **in the immediate
+ * future.**
+ *
+ * If child level 0 page tables of @table also need to be flushed, this should
+ * be done first using pvr_page_table_l0_sync() *before* calling this function.
+ */
+static void
+pvr_page_table_l1_sync(struct pvr_page_table_l1 *table)
+{
+	pvr_mmu_backing_page_sync(&table->backing_page);
+}
+
+/**
+ * pvr_page_table_l1_get_raw() - Access the raw equivalent of a mirror level 1
+ *                               page table.
+ * @table: Target level 1 page table.
+ *
+ * Essentially returns the CPU address of the raw equivalent of @table, cast to
+ * a &struct pvr_page_table_l1_raw pointer.
+ *
+ * You probably want to call pvr_page_table_l1_get_entry_raw() instead.
+ *
+ * Return:
+ * The raw equivalent of @table.
+ */
+static struct pvr_page_table_l1_raw *
+pvr_page_table_l1_get_raw(struct pvr_page_table_l1 *table)
+{
+	return table->backing_page.host_ptr;
+}
+
+/**
+ * pvr_page_table_l1_get_entry_raw() - Access an entry from the raw equivalent
+ *                                     of a mirror level 1 page table.
+ * @table: Target level 1 page table.
+ * @idx: Index of the entry to access.
+ *
+ * Technically this function returns a pointer to a slot in a raw level 1 page
+ * table, since the returned "entry" is not guaranteed to be valid. The caller
+ * must verify the validity of the entry at the returned address (perhaps using
+ * pvr_page_table_l1_entry_raw_is_valid()) before reading or overwriting it.
+ *
+ * The value of @idx is not checked here; it is the caller's responsibility to
+ * ensure @idx refers to a valid index within @table before dereferencing the
+ * returned pointer.
+ *
+ * Return:
+ * A pointer to the requested raw level 1 page table entry.
+ */
+static struct pvr_page_table_l1_entry_raw *
+pvr_page_table_l1_get_entry_raw(struct pvr_page_table_l1 *table, u16 idx)
+{
+	return &pvr_page_table_l1_get_raw(table)->entries[idx];
+}
+
+/**
+ * pvr_page_table_l1_entry_is_valid() - Check if a level 1 page table entry is
+ *                                      marked as valid.
+ * @table: Target level 1 page table.
+ * @idx: Index of the entry to check.
+ *
+ * The value of @idx is not checked here; it is the caller's responsibility to
+ * ensure @idx refers to a valid index within @table before calling this
+ * function.
+ */
+static bool
+pvr_page_table_l1_entry_is_valid(struct pvr_page_table_l1 *table, u16 idx)
+{
+	struct pvr_page_table_l1_entry_raw entry_raw =
+		*pvr_page_table_l1_get_entry_raw(table, idx);
+
+	return pvr_page_table_l1_entry_raw_is_valid(entry_raw);
+}
+
+/**
+ * struct pvr_page_table_l0 - A wrapped level 0 page table.
+ *
+ * To access the raw part of this table, use pvr_page_table_l0_get_raw().
+ * Alternatively to access a raw entry directly, use
+ * pvr_page_table_l0_get_entry_raw().
+ *
+ * There is no mirror representation of an individual page, so this type has no
+ * &entries member.
+ */
+struct pvr_page_table_l0 {
+	/**
+	 * @backing_page: A handle to the memory which holds the raw
+	 * equivalent of this table. **For internal use only.**
+	 */
+	struct pvr_mmu_backing_page backing_page;
+
+	union {
+		/**
+		 * @parent: The parent of this node in the page table tree structure.
+		 *
+		 * This is also a mirror table.
+		 *
+		 * Only valid when the L0 page table is active. When the L0 page table
+		 * has been removed and queued for destruction, the next_free field
+		 * should be used instead.
+		 */
+		struct pvr_page_table_l1 *parent;
+
+		/**
+		 * @next_free: Pointer to the next L0 page table to take/free.
+		 *
+		 * Used to form a linked list of L0 page tables. This is used
+		 * when preallocating tables and when the page table has been
+		 * removed and queued for destruction.
+		 */
+		struct pvr_page_table_l0 *next_free;
+	};
+
+	/**
+	 * @parent_idx: The index of the entry in the parent table (see
+	 * @parent) which corresponds to this table.
+	 */
+	u16 parent_idx;
+
+	/**
+	 * @entry_count: The current number of valid entries (that we know of)
+	 * in this table. This value is essentially a refcount - the table is
+	 * destroyed when this value is decremented to zero by
+	 * pvr_page_table_l0_remove().
+	 */
+	u16 entry_count;
+};
+
+/**
+ * pvr_page_table_l0_init() - Initialize a level 0 page table.
+ * @table: Target level 0 page table.
+ * @pvr_dev: Target PowerVR device
+ *
+ * When this function returns successfully, @table is still not considered
+ * valid. It must be inserted into the page table tree structure with
+ * pvr_page_table_l1_insert() before it is ready for use.
+ *
+ * It is expected that @table be zeroed (e.g. from kzalloc()) before calling
+ * this function.
+ *
+ * Return:
+ *  * 0 on success, or
+ *  * Any error encountered while initializing &table->backing_page using
+ *    pvr_mmu_backing_page_init().
+ */
+static int
+pvr_page_table_l0_init(struct pvr_page_table_l0 *table,
+		       struct pvr_device *pvr_dev)
+{
+	table->parent_idx = PVR_IDX_INVALID;
+
+	return pvr_mmu_backing_page_init(&table->backing_page, pvr_dev);
+}
+
+/**
+ * pvr_page_table_l0_free() - Teardown a level 0 page table.
+ * @table: Target level 0 page table.
+ *
+ * It is an error to attempt to use @table after calling this function, even
+ * indirectly. This includes calling pvr_page_table_l1_remove(), which must
+ * be called *before* pvr_page_table_l0_free().
+ */
+static void
+pvr_page_table_l0_free(struct pvr_page_table_l0 *table)
+{
+	pvr_mmu_backing_page_fini(&table->backing_page);
+	kfree(table);
+}
+
+/**
+ * pvr_page_table_l0_sync() - Flush a level 0 page table from the CPU to the
+ *                            device.
+ * @table: Target level 0 page table.
+ *
+ * This is just a thin wrapper around pvr_mmu_backing_page_sync(), so the
+ * warning there applies here too: **Only call pvr_page_table_l0_sync() once
+ * you're sure you have no more changes to make to** @table **in the immediate
+ * future.**
+ *
+ * If child pages of @table also need to be flushed, this should be done first
+ * using a DMA sync function (e.g. dma_sync_sg_for_device()) *before* calling
+ * this function.
+ */
+static void
+pvr_page_table_l0_sync(struct pvr_page_table_l0 *table)
+{
+	pvr_mmu_backing_page_sync(&table->backing_page);
+}
+
+/**
+ * pvr_page_table_l0_get_raw() - Access the raw equivalent of a mirror level 0
+ *                               page table.
+ * @table: Target level 0 page table.
+ *
+ * Essentially returns the CPU address of the raw equivalent of @table, cast to
+ * a &struct pvr_page_table_l0_raw pointer.
+ *
+ * You probably want to call pvr_page_table_l0_get_entry_raw() instead.
+ *
+ * Return:
+ * The raw equivalent of @table.
+ */
+static struct pvr_page_table_l0_raw *
+pvr_page_table_l0_get_raw(struct pvr_page_table_l0 *table)
+{
+	return table->backing_page.host_ptr;
+}
+
+/**
+ * pvr_page_table_l0_get_entry_raw() - Access an entry from the raw equivalent
+ *                                     of a mirror level 0 page table.
+ * @table: Target level 0 page table.
+ * @idx: Index of the entry to access.
+ *
+ * Technically this function returns a pointer to a slot in a raw level 0 page
+ * table, since the returned "entry" is not guaranteed to be valid. The caller
+ * must verify the validity of the entry at the returned address (perhaps using
+ * pvr_page_table_l0_entry_raw_is_valid()) before reading or overwriting it.
+ *
+ * The value of @idx is not checked here; it is the caller's responsibility to
+ * ensure @idx refers to a valid index within @table before dereferencing the
+ * returned pointer. This is especially important for level 0 page tables, which
+ * can have a variable number of entries.
+ *
+ * Return:
+ * A pointer to the requested raw level 0 page table entry.
+ */
+static struct pvr_page_table_l0_entry_raw *
+pvr_page_table_l0_get_entry_raw(struct pvr_page_table_l0 *table, u16 idx)
+{
+	return &pvr_page_table_l0_get_raw(table)->entries[idx];
+}
+
+/**
+ * pvr_page_table_l0_entry_is_valid() - Check if a level 0 page table entry is
+ *                                      marked as valid.
+ * @table: Target level 0 page table.
+ * @idx: Index of the entry to check.
+ *
+ * The value of @idx is not checked here; it is the caller's responsibility to
+ * ensure @idx refers to a valid index within @table before calling this
+ * function.
+ */
+static bool
+pvr_page_table_l0_entry_is_valid(struct pvr_page_table_l0 *table, u16 idx)
+{
+	struct pvr_page_table_l0_entry_raw entry_raw =
+		*pvr_page_table_l0_get_entry_raw(table, idx);
+
+	return pvr_page_table_l0_entry_raw_is_valid(entry_raw);
+}
+
+/**
+ * struct pvr_mmu_context - context holding data for operations at page
+ * catalogue level, intended for use with a VM context.
+ */
+struct pvr_mmu_context {
+	/** @pvr_dev: The PVR device associated with the owning VM context. */
+	struct pvr_device *pvr_dev;
+
+	/** @page_table_l2: The MMU table root. */
+	struct pvr_page_table_l2 page_table_l2;
+};
+
+enum pvr_mmu_sync_level {
+	PVR_MMU_SYNC_LEVEL_NONE = -1,
+	PVR_MMU_SYNC_LEVEL_0 = 0,
+	PVR_MMU_SYNC_LEVEL_1 = 1,
+	PVR_MMU_SYNC_LEVEL_2 = 2,
+};
+
+/**
+ * struct pvr_page_table_ptr - A reference to a single physical page as indexed
+ * by the page table structure.
+ *
+ * Intended for embedding in a &struct pvr_mmu_op_context.
+ */
+struct pvr_page_table_ptr {
+	/**
+	 * @l1_table: A cached handle to the level 1 page table the
+	 * context is currently traversing.
+	 */
+	struct pvr_page_table_l1 *l1_table;
+
+	/**
+	 * @l0_table: A cached handle to the level 0 page table the
+	 * context is currently traversing.
+	 */
+	struct pvr_page_table_l0 *l0_table;
+
+	/**
+	 * @l2_idx: Index into the level 2 page table the context is
+	 * currently referencing.
+	 */
+	u16 l2_idx;
+
+	/**
+	 * @l1_idx: Index into the level 1 page table the context is
+	 * currently referencing.
+	 */
+	u16 l1_idx;
+
+	/**
+	 * @l0_idx: Index into the level 0 page table the context is
+	 * currently referencing.
+	 */
+	u16 l0_idx;
+};
+
+/**
+ * struct pvr_mmu_op_context - context holding data for individual
+ * device-virtual mapping operations. Intended for use with a VM bind operation.
+ */
+struct pvr_mmu_op_context {
+	/** @mmu_ctx: The MMU context associated with the owning VM context. */
+	struct pvr_mmu_context *mmu_ctx;
+
+	/** @map: Data specifically for map operations. */
+	struct {
+		/**
+		 * @sgt: Scatter gather table containing pages pinned for use by
+		 * this context - these are currently pinned when initialising
+		 * the VM bind operation.
+		 */
+		struct sg_table *sgt;
+
+		/** @sgt_offset: Start address of the device-virtual mapping. */
+		u64 sgt_offset;
+	} map;
+
+	/**
+	 * @l1_free_tables: Preallocated l1 page table objects for use by this
+	 * context when creating a page mapping. Linked list created during
+	 * initialisation. Also used to collect page table objects freed by an
+	 * unmap.
+	 */
+	struct pvr_page_table_l1 *l1_free_tables;
+
+	/**
+	 * @l0_free_tables: Preallocated l0 page table objects for use by this
+	 * context when creating a page mapping. Linked list created during
+	 * initialisation. Also used to collect page table objects freed by an
+	 * unmap.
+	 */
+	struct pvr_page_table_l0 *l0_free_tables;
+
+	/**
+	 * @curr_page: A reference to a single physical page as indexed by
+	 * the page table structure.
+	 */
+	struct pvr_page_table_ptr curr_page;
+
+	/**
+	 * @sync_level_required: The maximum level of the page table tree
+	 * structure which has (possibly) been modified since it was last
+	 * flushed to the device.
+	 *
+	 * This field should only be set with pvr_mmu_op_context_require_sync()
+	 * or indirectly by pvr_mmu_op_context_sync_partial().
+	 */
+	enum pvr_mmu_sync_level sync_level_required;
+};
+
+/**
+ * pvr_page_table_l2_insert() - Insert an entry referring to a level 1 page
+ * table into a level 2 page table.
+ * @op_ctx: Target MMU op context pointing at the entry to insert the L1 page
+ * table into.
+ * @child_table: Target level 1 page table to be referenced by the new entry.
+ *
+ * It is the caller's responsibility to ensure @op_ctx.curr_page points to a
+ * valid L2 entry.
+ */
+static void
+pvr_page_table_l2_insert(struct pvr_mmu_op_context *op_ctx,
+			 struct pvr_page_table_l1 *child_table)
+{
+	struct pvr_page_table_l2 *l2_table =
+		&op_ctx->mmu_ctx->page_table_l2;
+	struct pvr_page_table_l2_entry_raw *entry_raw =
+		pvr_page_table_l2_get_entry_raw(l2_table,
+						op_ctx->curr_page.l2_idx);
+
+	pvr_page_table_l2_entry_raw_set(entry_raw,
+					child_table->backing_page.dma_addr);
+
+	child_table->parent = l2_table;
+	child_table->parent_idx = op_ctx->curr_page.l2_idx;
+	l2_table->entries[op_ctx->curr_page.l2_idx] = child_table;
+	++l2_table->entry_count;
+	op_ctx->curr_page.l1_table = child_table;
+}
+
+/**
+ * pvr_page_table_l2_remove() - Remove a level 1 page table from a level 2 page
+ * table.
+ * @op_ctx: Target MMU op context pointing at the L2 entry to remove.
+ *
+ * It is the caller's responsibility to ensure @op_ctx.curr_page points to a
+ * valid L2 entry.
+ */
+static void
+pvr_page_table_l2_remove(struct pvr_mmu_op_context *op_ctx)
+{
+	struct pvr_page_table_l2 *l2_table =
+		&op_ctx->mmu_ctx->page_table_l2;
+	struct pvr_page_table_l2_entry_raw *entry_raw =
+		pvr_page_table_l2_get_entry_raw(l2_table,
+						op_ctx->curr_page.l1_table->parent_idx);
+
+	WARN_ON(op_ctx->curr_page.l1_table->parent != l2_table);
+
+	pvr_page_table_l2_entry_raw_clear(entry_raw);
+
+	l2_table->entries[op_ctx->curr_page.l1_table->parent_idx] = NULL;
+	op_ctx->curr_page.l1_table->parent_idx = PVR_IDX_INVALID;
+	op_ctx->curr_page.l1_table->next_free = op_ctx->l1_free_tables;
+	op_ctx->l1_free_tables = op_ctx->curr_page.l1_table;
+	op_ctx->curr_page.l1_table = NULL;
+
+	--l2_table->entry_count;
+}
+
+/**
+ * pvr_page_table_l1_insert() - Insert an entry referring to a level 0 page
+ * table into a level 1 page table.
+ * @op_ctx: Target MMU op context pointing at the entry to insert the L0 page
+ * table into.
+ * @child_table: L0 page table to insert.
+ *
+ * It is the caller's responsibility to ensure @op_ctx.curr_page points to a
+ * valid L1 entry.
+ */
+static void
+pvr_page_table_l1_insert(struct pvr_mmu_op_context *op_ctx,
+			 struct pvr_page_table_l0 *child_table)
+{
+	struct pvr_page_table_l1_entry_raw *entry_raw =
+		pvr_page_table_l1_get_entry_raw(op_ctx->curr_page.l1_table,
+						op_ctx->curr_page.l1_idx);
+
+	pvr_page_table_l1_entry_raw_set(entry_raw,
+					child_table->backing_page.dma_addr);
+
+	child_table->parent = op_ctx->curr_page.l1_table;
+	child_table->parent_idx = op_ctx->curr_page.l1_idx;
+	op_ctx->curr_page.l1_table->entries[op_ctx->curr_page.l1_idx] = child_table;
+	++op_ctx->curr_page.l1_table->entry_count;
+	op_ctx->curr_page.l0_table = child_table;
+}
+
+/**
+ * pvr_page_table_l1_remove() - Remove a level 0 page table from a level 1 page
+ *                              table.
+ * @op_ctx: Target MMU op context pointing at the L1 entry to remove.
+ *
+ * If this function results in the L1 table becoming empty, it will be removed
+ * from its parent level 2 page table and destroyed.
+ *
+ * It is the caller's responsibility to ensure @op_ctx.curr_page points to a
+ * valid L1 entry.
+ */
+static void
+pvr_page_table_l1_remove(struct pvr_mmu_op_context *op_ctx)
+{
+	struct pvr_page_table_l1_entry_raw *entry_raw =
+		pvr_page_table_l1_get_entry_raw(op_ctx->curr_page.l0_table->parent,
+						op_ctx->curr_page.l0_table->parent_idx);
+
+	WARN_ON(op_ctx->curr_page.l0_table->parent !=
+		op_ctx->curr_page.l1_table);
+
+	pvr_page_table_l1_entry_raw_clear(entry_raw);
+
+	op_ctx->curr_page.l1_table->entries[op_ctx->curr_page.l0_table->parent_idx] = NULL;
+	op_ctx->curr_page.l0_table->parent_idx = PVR_IDX_INVALID;
+	op_ctx->curr_page.l0_table->next_free = op_ctx->l0_free_tables;
+	op_ctx->l0_free_tables = op_ctx->curr_page.l0_table;
+	op_ctx->curr_page.l0_table = NULL;
+
+	if (--op_ctx->curr_page.l1_table->entry_count == 0) {
+		/* Clear the parent L2 page table entry. */
+		if (op_ctx->curr_page.l1_table->parent_idx != PVR_IDX_INVALID)
+			pvr_page_table_l2_remove(op_ctx);
+	}
+}
+
+/**
+ * pvr_page_table_l0_insert() - Insert an entry referring to a physical page
+ * into a level 0 page table.
+ * @op_ctx: Target MMU op context pointing at the L0 entry to insert.
+ * @dma_addr: Target DMA address to be referenced by the new entry.
+ * @flags: Page options to be stored in the new entry.
+ *
+ * It is the caller's responsibility to ensure @op_ctx.curr_page points to a
+ * valid L0 entry.
+ */
+static void
+pvr_page_table_l0_insert(struct pvr_mmu_op_context *op_ctx,
+			 dma_addr_t dma_addr, struct pvr_page_flags_raw flags)
+{
+	struct pvr_page_table_l0_entry_raw *entry_raw =
+		pvr_page_table_l0_get_entry_raw(op_ctx->curr_page.l0_table,
+						op_ctx->curr_page.l0_idx);
+
+	pvr_page_table_l0_entry_raw_set(entry_raw, dma_addr, flags);
+
+	/*
+	 * There is no entry to set here - we don't keep a mirror of
+	 * individual pages.
+	 */
+
+	++op_ctx->curr_page.l0_table->entry_count;
+}
+
+/**
+ * pvr_page_table_l0_remove() - Remove a physical page from a level 0 page
+ * table.
+ * @op_ctx: Target MMU op context pointing at the L0 entry to remove.
+ *
+ * If this function results in the L0 table becoming empty, it will be removed
+ * from its parent L1 page table and destroyed.
+ *
+ * It is the caller's responsibility to ensure @op_ctx.curr_page points to a
+ * valid L0 entry.
+ */
+static void
+pvr_page_table_l0_remove(struct pvr_mmu_op_context *op_ctx)
+{
+	struct pvr_page_table_l0_entry_raw *entry_raw =
+		pvr_page_table_l0_get_entry_raw(op_ctx->curr_page.l0_table,
+						op_ctx->curr_page.l0_idx);
+
+	pvr_page_table_l0_entry_raw_clear(entry_raw);
+
+	/*
+	 * There is no entry to clear here - we don't keep a mirror of
+	 * individual pages.
+	 */
+
+	if (--op_ctx->curr_page.l0_table->entry_count == 0) {
+		/* Clear the parent L1 page table entry. */
+		if (op_ctx->curr_page.l0_table->parent_idx != PVR_IDX_INVALID)
+			pvr_page_table_l1_remove(op_ctx);
+	}
+}
+
+/**
+ * DOC: Page table index utilities
+ */
+
+/**
+ * pvr_page_table_l2_idx() - Calculate the level 2 page table index for a
+ *                           device-virtual address.
+ * @device_addr: Target device-virtual address.
+ *
+ * This function does not perform any bounds checking - it is the caller's
+ * responsibility to ensure that @device_addr is valid before interpreting
+ * the result.
+ *
+ * Return:
+ * The index into a level 2 page table corresponding to @device_addr.
+ */
+static u16
+pvr_page_table_l2_idx(u64 device_addr)
+{
+	return (device_addr & ~ROGUE_MMUCTRL_VADDR_PC_INDEX_CLRMSK) >>
+	       ROGUE_MMUCTRL_VADDR_PC_INDEX_SHIFT;
+}
+
+/**
+ * pvr_page_table_l1_idx() - Calculate the level 1 page table index for a
+ *                           device-virtual address.
+ * @device_addr: Target device-virtual address.
+ *
+ * This function does not perform any bounds checking - it is the caller's
+ * responsibility to ensure that @device_addr is valid before interpreting
+ * the result.
+ *
+ * Return:
+ * The index into a level 1 page table corresponding to @device_addr.
+ */
+static u16
+pvr_page_table_l1_idx(u64 device_addr)
+{
+	return (device_addr & ~ROGUE_MMUCTRL_VADDR_PD_INDEX_CLRMSK) >>
+	       ROGUE_MMUCTRL_VADDR_PD_INDEX_SHIFT;
+}
+
+/**
+ * pvr_page_table_l0_idx() - Calculate the level 0 page table index for a
+ *                           device-virtual address.
+ * @device_addr: Target device-virtual address.
+ *
+ * This function does not perform any bounds checking - it is the caller's
+ * responsibility to ensure that @device_addr is valid before interpreting
+ * the result.
+ *
+ * Return:
+ * The index into a level 0 page table corresponding to @device_addr.
+ */
+static u16
+pvr_page_table_l0_idx(u64 device_addr)
+{
+	return (device_addr & ~ROGUE_MMUCTRL_VADDR_PT_INDEX_CLRMSK) >>
+	       ROGUE_MMUCTRL_PAGE_X_RANGE_SHIFT;
+}
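+
+/*
+ * Illustrative note: the three helpers above together decompose a
+ * device-virtual address into its per-level table indices, e.g.
+ *
+ *	l2_idx = pvr_page_table_l2_idx(device_addr);
+ *	l1_idx = pvr_page_table_l1_idx(device_addr);
+ *	l0_idx = pvr_page_table_l0_idx(device_addr);
+ *
+ * which is exactly how pvr_mmu_op_context_set_curr_page() positions
+ * &struct pvr_page_table_ptr before a walk.
+ */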
+
+/**
+ * DOC: High-level page table operations
+ */
+
+/**
+ * pvr_page_table_l1_get_or_insert() - Retrieves (optionally inserting if
+ * necessary) a level 1 page table from the specified level 2 page table entry.
+ * @op_ctx: Target MMU op context.
+ * @should_insert: [IN] Specifies whether new page tables should be inserted
+ * when empty page table entries are encountered during traversal.
+ *
+ * Return:
+ *  * 0 on success, or
+ *
+ *    If @should_insert is %false:
+ *     * -%ENXIO if a level 1 page table would have been inserted.
+ *
+ *    If @should_insert is %true:
+ *     * Any error encountered while inserting the level 1 page table.
+ */
+static int
+pvr_page_table_l1_get_or_insert(struct pvr_mmu_op_context *op_ctx,
+				bool should_insert)
+{
+	struct pvr_page_table_l2 *l2_table =
+		&op_ctx->mmu_ctx->page_table_l2;
+	struct pvr_page_table_l1 *table;
+	int err;
+
+	if (pvr_page_table_l2_entry_is_valid(l2_table,
+					     op_ctx->curr_page.l2_idx)) {
+		op_ctx->curr_page.l1_table =
+			l2_table->entries[op_ctx->curr_page.l2_idx];
+		return 0;
+	}
+
+	if (!should_insert)
+		return -ENXIO;
+
+	/* Take a prealloced table. */
+	table = op_ctx->l1_free_tables;
+	if (!table)
+		return -ENOMEM;
+
+	err = pvr_page_table_l1_init(table, op_ctx->mmu_ctx->pvr_dev);
+	if (err)
+		return err;
+
+	/* Pop */
+	op_ctx->l1_free_tables = table->next_free;
+	table->next_free = NULL;
+
+	pvr_page_table_l2_insert(op_ctx, table);
+
+	return 0;
+}
+
+/**
+ * pvr_page_table_l0_get_or_insert() - Retrieves (optionally inserting if
+ * necessary) a level 0 page table from the specified level 1 page table entry.
+ * @op_ctx: Target MMU op context.
+ * @should_insert: [IN] Specifies whether new page tables should be inserted
+ * when empty page table entries are encountered during traversal.
+ *
+ * Return:
+ *  * 0 on success,
+ *
+ *    If @should_insert is %false:
+ *     * -%ENXIO if a level 0 page table would have been inserted.
+ *
+ *    If @should_insert is %true:
+ *     * Any error encountered while inserting the level 0 page table.
+ */
+static int
+pvr_page_table_l0_get_or_insert(struct pvr_mmu_op_context *op_ctx,
+				bool should_insert)
+{
+	struct pvr_page_table_l0 *table;
+	int err;
+
+	if (pvr_page_table_l1_entry_is_valid(op_ctx->curr_page.l1_table,
+					     op_ctx->curr_page.l1_idx)) {
+		op_ctx->curr_page.l0_table =
+			op_ctx->curr_page.l1_table->entries[op_ctx->curr_page.l1_idx];
+		return 0;
+	}
+
+	if (!should_insert)
+		return -ENXIO;
+
+	/* Take a prealloced table. */
+	table = op_ctx->l0_free_tables;
+	if (!table)
+		return -ENOMEM;
+
+	err = pvr_page_table_l0_init(table, op_ctx->mmu_ctx->pvr_dev);
+	if (err)
+		return err;
+
+	/* Pop */
+	op_ctx->l0_free_tables = table->next_free;
+	table->next_free = NULL;
+
+	pvr_page_table_l1_insert(op_ctx, table);
+
+	return 0;
+}
+
+/**
+ * pvr_mmu_context_create() - Create an MMU context.
+ * @pvr_dev: PVR device associated with owning VM context.
+ *
+ * Returns:
+ *  * Newly created MMU context object on success, or
+ *  * -%ENOMEM if no memory is available,
+ *  * Any error code returned by pvr_page_table_l2_init().
+ */
+struct pvr_mmu_context *pvr_mmu_context_create(struct pvr_device *pvr_dev)
+{
+	struct pvr_mmu_context *ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
+	int err;
+
+	if (!ctx)
+		return ERR_PTR(-ENOMEM);
+
+	err = pvr_page_table_l2_init(&ctx->page_table_l2, pvr_dev);
+	if (err)
+		return ERR_PTR(err);
+
+	ctx->pvr_dev = pvr_dev;
+
+	return ctx;
+}
+
+/**
+ * pvr_mmu_context_destroy() - Destroy an MMU context.
+ * @ctx: Target MMU context.
+ */
+void pvr_mmu_context_destroy(struct pvr_mmu_context *ctx)
+{
+	pvr_page_table_l2_fini(&ctx->page_table_l2);
+	kfree(ctx);
+}
+
+/**
+ * pvr_mmu_get_root_table_dma_addr() - Get the DMA address of the root of the
+ * page table structure behind a VM context.
+ * @ctx: Target MMU context.
+ */
+dma_addr_t pvr_mmu_get_root_table_dma_addr(struct pvr_mmu_context *ctx)
+{
+	return ctx->page_table_l2.backing_page.dma_addr;
+}
+
+/**
+ * pvr_page_table_l1_alloc() - Allocate an L1 page table object.
+ * @ctx: MMU context of owning VM context.
+ *
+ * Returns:
+ *  * Newly created page table object on success, or
+ *  * -%ENOMEM if no memory is available,
+ *  * Any error code returned by pvr_page_table_l1_init().
+ */
+static struct pvr_page_table_l1 *
+pvr_page_table_l1_alloc(struct pvr_mmu_context *ctx)
+{
+	int err;
+
+	struct pvr_page_table_l1 *table =
+		kzalloc(sizeof(*table), GFP_KERNEL);
+
+	if (!table)
+		return ERR_PTR(-ENOMEM);
+
+	err = pvr_page_table_l1_init(table, ctx->pvr_dev);
+	if (err) {
+		kfree(table);
+		return ERR_PTR(err);
+	}
+
+	return table;
+}
+
+/**
+ * pvr_page_table_l0_alloc() - Allocate an L0 page table object.
+ * @ctx: MMU context of owning VM context.
+ *
+ * Returns:
+ *  * Newly created page table object on success, or
+ *  * -%ENOMEM if no memory is available,
+ *  * Any error code returned by pvr_page_table_l0_init().
+ */
+static struct pvr_page_table_l0 *
+pvr_page_table_l0_alloc(struct pvr_mmu_context *ctx)
+{
+	int err;
+
+	struct pvr_page_table_l0 *table =
+		kzalloc(sizeof(*table), GFP_KERNEL);
+
+	if (!table)
+		return ERR_PTR(-ENOMEM);
+
+	err = pvr_page_table_l0_init(table, ctx->pvr_dev);
+	if (err) {
+		kfree(table);
+		return ERR_PTR(err);
+	}
+
+	return table;
+}
+
+/**
+ * pvr_mmu_op_context_require_sync() - Mark an MMU op context as requiring a
+ * sync operation for the referenced page tables up to a specified level.
+ * @op_ctx: Target MMU op context.
+ * @level: Maximum page table level for which a sync is required.
+ */
+static void
+pvr_mmu_op_context_require_sync(struct pvr_mmu_op_context *op_ctx,
+				enum pvr_mmu_sync_level level)
+{
+	if (op_ctx->sync_level_required < level)
+		op_ctx->sync_level_required = level;
+}
+
+/**
+ * pvr_mmu_op_context_sync_manual() - Trigger a sync of some or all of the
+ * page tables referenced by an MMU op context.
+ * @op_ctx: Target MMU op context.
+ * @level: Maximum page table level to sync.
+ *
+ * Do not call this function directly. Instead use
+ * pvr_mmu_op_context_sync_partial() which is checked against the current
+ * value of &op_ctx->sync_level_required as set by
+ * pvr_mmu_op_context_require_sync().
+ */
+static void
+pvr_mmu_op_context_sync_manual(struct pvr_mmu_op_context *op_ctx,
+			       enum pvr_mmu_sync_level level)
+{
+	/*
+	 * We sync the page table levels in ascending order (starting from the
+	 * leaf node) to ensure consistency.
+	 */
+
+	WARN_ON(level < PVR_MMU_SYNC_LEVEL_NONE);
+
+	if (level <= PVR_MMU_SYNC_LEVEL_NONE)
+		return;
+
+	if (op_ctx->curr_page.l0_table)
+		pvr_page_table_l0_sync(op_ctx->curr_page.l0_table);
+
+	if (level < PVR_MMU_SYNC_LEVEL_1)
+		return;
+
+	if (op_ctx->curr_page.l1_table)
+		pvr_page_table_l1_sync(op_ctx->curr_page.l1_table);
+
+	if (level < PVR_MMU_SYNC_LEVEL_2)
+		return;
+
+	pvr_page_table_l2_sync(&op_ctx->mmu_ctx->page_table_l2);
+}
+
+/**
+ * pvr_mmu_op_context_sync_partial() - Trigger a sync of some or all of the
+ * page tables referenced by an MMU op context.
+ * @op_ctx: Target MMU op context.
+ * @level: Requested page table level to sync up to (inclusive).
+ *
+ * If @level is greater than the maximum level recorded by @op_ctx as requiring
+ * a sync operation, only the previously recorded maximum will be used.
+ *
+ * Additionally, if @level is greater than or equal to the maximum level
+ * recorded by @op_ctx as requiring a sync operation, that maximum level will be
+ * reset as a full sync will be performed. This is equivalent to calling
+ * pvr_mmu_op_context_sync().
+ */
+static void
+pvr_mmu_op_context_sync_partial(struct pvr_mmu_op_context *op_ctx,
+				enum pvr_mmu_sync_level level)
+{
+	/*
+	 * If the requested sync level is greater than or equal to the
+	 * currently required sync level, we do two things:
+	 *  * Don't waste time syncing levels we haven't previously marked as
+	 *    requiring a sync, and
+	 *  * Reset the required sync level since we are about to sync
+	 *    everything that was previously marked as requiring a sync.
+	 */
+	if (level >= op_ctx->sync_level_required) {
+		level = op_ctx->sync_level_required;
+		op_ctx->sync_level_required = PVR_MMU_SYNC_LEVEL_NONE;
+	}
+
+	pvr_mmu_op_context_sync_manual(op_ctx, level);
+}
+
+/**
+ * pvr_mmu_op_context_sync() - Trigger a sync of every page table referenced by
+ * an MMU op context.
+ * @op_ctx: Target MMU op context.
+ *
+ * The maximum level marked internally as requiring a sync will be reset so
+ * that subsequent calls to this function will be no-ops unless @op_ctx is
+ * otherwise updated.
+ */
+static void
+pvr_mmu_op_context_sync(struct pvr_mmu_op_context *op_ctx)
+{
+	pvr_mmu_op_context_sync_manual(op_ctx, op_ctx->sync_level_required);
+
+	op_ctx->sync_level_required = PVR_MMU_SYNC_LEVEL_NONE;
+}
+
+/**
+ * pvr_mmu_op_context_load_tables() - Load pointers to tables in each level of
+ * the page table tree structure needed to reference the physical page
+ * referenced by an MMU op context.
+ * @op_ctx: Target MMU op context.
+ * @should_create: Specifies whether new page tables should be created when
+ * empty page table entries are encountered during traversal.
+ * @load_level_required: Maximum page table level to load.
+ *
+ * If @should_create is %true, this function may modify the stored required
+ * sync level of @op_ctx as new page tables are created and inserted into their
+ * respective parents.
+ *
+ * Since there is only one root page table, it is technically incorrect to call
+ * this function with a value of @load_level_required greater than or equal to
+ * the root level number. However, this is not explicitly disallowed here.
+ *
+ * Return:
+ *  * 0 on success,
+ *  * Any error returned by pvr_page_table_l1_get_or_insert() if
+ *    @load_level_required >= 1 except -%ENXIO, or
+ *  * Any error returned by pvr_page_table_l0_get_or_insert() if
+ *    @load_level_required >= 0 except -%ENXIO.
+ */
+static int
+pvr_mmu_op_context_load_tables(struct pvr_mmu_op_context *op_ctx,
+			       bool should_create,
+			       enum pvr_mmu_sync_level load_level_required)
+{
+	const struct pvr_page_table_l1 *l1_head_before = op_ctx->l1_free_tables;
+	const struct pvr_page_table_l0 *l0_head_before = op_ctx->l0_free_tables;
+	int err;
+
+	/* Clear tables we're about to fetch in case of error states. */
+	if (load_level_required >= PVR_MMU_SYNC_LEVEL_1)
+		op_ctx->curr_page.l1_table = NULL;
+
+	if (load_level_required >= PVR_MMU_SYNC_LEVEL_0)
+		op_ctx->curr_page.l0_table = NULL;
+
+	/* Get or create L1 page table. */
+	if (load_level_required >= PVR_MMU_SYNC_LEVEL_1) {
+		err = pvr_page_table_l1_get_or_insert(op_ctx, should_create);
+		if (err) {
+			/*
+			 * If @should_create is %false and no L1 page table was
+			 * found, return early but without an error. Since
+			 * pvr_page_table_l1_get_or_insert() can only return
+			 * -%ENXIO if @should_create is %false, there is no
+			 * need to check it here.
+			 */
+			if (err == -ENXIO)
+				err = 0;
+
+			return err;
+		}
+	}
+
+	/* Get or create L0 page table. */
+	if (load_level_required >= PVR_MMU_SYNC_LEVEL_0) {
+		err = pvr_page_table_l0_get_or_insert(op_ctx, should_create);
+		if (err) {
+			/*
+			 * If @should_create is %false and no L0 page table was
+			 * found, return early but without an error. Since
+			 * pvr_page_table_l0_get_or_insert() can only return
+			 * -%ENXIO if @should_create is %false, there is no
+			 * need to check it here.
+			 */
+			if (err == -ENXIO)
+				err = 0;
+
+			/*
+			 * At this point, an L1 page table could have been
+			 * inserted but is now empty due to the failed attempt
+			 * at inserting an L0 page table. In this instance, we
+			 * must remove the empty L1 page table ourselves as
+			 * pvr_page_table_l1_remove() is never called as part
+			 * of the error path in
+			 * pvr_page_table_l0_get_or_insert().
+			 */
+			if (l1_head_before != op_ctx->l1_free_tables) {
+				pvr_page_table_l2_remove(op_ctx);
+				pvr_mmu_op_context_require_sync(op_ctx, PVR_MMU_SYNC_LEVEL_2);
+			}
+
+			return err;
+		}
+	}
+
+	/*
+	 * A sync is only needed if table objects were inserted. This can be
+	 * inferred by checking if the pointer at the head of the linked list
+	 * has changed.
+	 */
+	if (l1_head_before != op_ctx->l1_free_tables)
+		pvr_mmu_op_context_require_sync(op_ctx, PVR_MMU_SYNC_LEVEL_2);
+	else if (l0_head_before != op_ctx->l0_free_tables)
+		pvr_mmu_op_context_require_sync(op_ctx, PVR_MMU_SYNC_LEVEL_1);
+
+	return 0;
+}
+
+/**
+ * pvr_mmu_op_context_set_curr_page() - Reassign the current page of an MMU op
+ * context, syncing any page tables previously assigned to it which are no
+ * longer relevant.
+ * @op_ctx: Target MMU op context.
+ * @device_addr: New pointer target.
+ * @should_create: Specify whether new page tables should be created when
+ * empty page table entries are encountered during traversal.
+ *
+ * This function performs a full sync on the pointer, regardless of which
+ * levels are modified.
+ *
+ * Return:
+ *  * 0 on success, or
+ *  * Any error returned by pvr_mmu_op_context_load_tables().
+ */
+static int
+pvr_mmu_op_context_set_curr_page(struct pvr_mmu_op_context *op_ctx,
+				 u64 device_addr, bool should_create)
+{
+	pvr_mmu_op_context_sync(op_ctx);
+
+	op_ctx->curr_page.l2_idx = pvr_page_table_l2_idx(device_addr);
+	op_ctx->curr_page.l1_idx = pvr_page_table_l1_idx(device_addr);
+	op_ctx->curr_page.l0_idx = pvr_page_table_l0_idx(device_addr);
+	op_ctx->curr_page.l1_table = NULL;
+	op_ctx->curr_page.l0_table = NULL;
+
+	return pvr_mmu_op_context_load_tables(op_ctx, should_create,
+					      PVR_MMU_SYNC_LEVEL_1);
+}
+
+/**
+ * pvr_mmu_op_context_next_page() - Advance the current page of an MMU op
+ * context.
+ * @op_ctx: Target MMU op context.
+ * @should_create: Specify whether new page tables should be created when
+ * empty page table entries are encountered during traversal.
+ *
+ * If @should_create is %false, it is the caller's responsibility to verify that
+ * the state of the table references in @op_ctx is valid on return. If -%ENXIO
+ * is returned, at least one of the table references is invalid. It should be
+ * noted that @op_ctx as a whole will be left in a valid state if -%ENXIO is
+ * returned, unlike other error codes. The caller should check which references
+ * are invalid by comparing them to %NULL. Only the L2 page table can be
+ * assumed valid, since it is embedded in the MMU context and forms the root
+ * of the page table tree structure.
+ *
+ * Return:
+ *  * 0 on success,
+ *  * -%EPERM if the operation would wrap at the top of the page table
+ *    hierarchy,
+ *  * -%ENXIO if @should_create is %false and a page table of any level would
+ *    have otherwise been created, or
+ *  * Any error returned while attempting to create missing page tables if
+ *    @should_create is %true.
+ */
+static int
+pvr_mmu_op_context_next_page(struct pvr_mmu_op_context *op_ctx,
+			     bool should_create)
+{
+	s8 load_level_required = PVR_MMU_SYNC_LEVEL_NONE;
+
+	if (++op_ctx->curr_page.l0_idx != ROGUE_MMUCTRL_ENTRIES_PT_VALUE_X)
+		goto load_tables;
+
+	op_ctx->curr_page.l0_idx = 0;
+	load_level_required = PVR_MMU_SYNC_LEVEL_0;
+
+	if (++op_ctx->curr_page.l1_idx != ROGUE_MMUCTRL_ENTRIES_PD_VALUE)
+		goto load_tables;
+
+	op_ctx->curr_page.l1_idx = 0;
+	load_level_required = PVR_MMU_SYNC_LEVEL_1;
+
+	if (++op_ctx->curr_page.l2_idx != ROGUE_MMUCTRL_ENTRIES_PC_VALUE)
+		goto load_tables;
+
+	/*
+	 * If the pattern continued, we would set &op_ctx->curr_page.l2_idx to
+	 * zero here. However, that would wrap the top layer of the page table
+	 * hierarchy which is not a valid operation. Instead, we warn and return
+	 * an error.
+	 */
+	WARN(true,
+	     "%s(%p) attempted to loop the top of the page table hierarchy",
+	     __func__, op_ctx);
+	return -EPERM;
+
+	/* If indices have wrapped, we need to load new tables. */
+load_tables:
+	/* First, flush tables which will be unloaded. */
+	pvr_mmu_op_context_sync_partial(op_ctx, load_level_required);
+
+	/* Then load tables from the required level down. */
+	return pvr_mmu_op_context_load_tables(op_ctx, should_create,
+					      load_level_required);
+}
+
+/**
+ * DOC: Single page operations
+ */
+
+/**
+ * pvr_page_create() - Create a device-virtual memory page and insert it into
+ * a level 0 page table.
+ * @op_ctx: Target MMU op context pointing at the device-virtual address of the
+ * target page.
+ * @dma_addr: DMA address of the physical page backing the created page.
+ * @flags: Page options saved on the level 0 page table entry for reading by
+ *         the device.
+ *
+ * Return:
+ *  * 0 on success, or
+ *  * -%EEXIST if the requested page already exists.
+ */
+static int
+pvr_page_create(struct pvr_mmu_op_context *op_ctx, dma_addr_t dma_addr,
+		struct pvr_page_flags_raw flags)
+{
+	/* Do not create a new page if one already exists. */
+	if (pvr_page_table_l0_entry_is_valid(op_ctx->curr_page.l0_table,
+					     op_ctx->curr_page.l0_idx)) {
+		return -EEXIST;
+	}
+
+	pvr_page_table_l0_insert(op_ctx, dma_addr, flags);
+
+	pvr_mmu_op_context_require_sync(op_ctx, PVR_MMU_SYNC_LEVEL_0);
+
+	return 0;
+}
+
+/**
+ * pvr_page_destroy() - Destroy a device page after removing it from its
+ * parent level 0 page table.
+ * @op_ctx: Target MMU op context pointing at the device-virtual address of the
+ * target page.
+ */
+static void
+pvr_page_destroy(struct pvr_mmu_op_context *op_ctx)
+{
+	/* Do nothing if the page does not exist. */
+	if (!pvr_page_table_l0_entry_is_valid(op_ctx->curr_page.l0_table,
+					      op_ctx->curr_page.l0_idx)) {
+		return;
+	}
+
+	/* Clear the parent L0 page table entry. */
+	pvr_page_table_l0_remove(op_ctx);
+
+	pvr_mmu_op_context_require_sync(op_ctx, PVR_MMU_SYNC_LEVEL_0);
+}
+
+/**
+ * pvr_mmu_op_context_destroy() - Destroy an MMU op context.
+ * @op_ctx: Target MMU op context.
+ */
+void pvr_mmu_op_context_destroy(struct pvr_mmu_op_context *op_ctx)
+{
+	const bool flush_caches =
+		op_ctx->sync_level_required != PVR_MMU_SYNC_LEVEL_NONE;
+
+	pvr_mmu_op_context_sync(op_ctx);
+
+	if (flush_caches)
+		WARN_ON(pvr_mmu_flush(op_ctx->mmu_ctx->pvr_dev));
+
+	while (op_ctx->l0_free_tables) {
+		struct pvr_page_table_l0 *tmp = op_ctx->l0_free_tables;
+
+		op_ctx->l0_free_tables = op_ctx->l0_free_tables->next_free;
+		pvr_page_table_l0_free(tmp);
+	}
+
+	while (op_ctx->l1_free_tables) {
+		struct pvr_page_table_l1 *tmp = op_ctx->l1_free_tables;
+
+		op_ctx->l1_free_tables = op_ctx->l1_free_tables->next_free;
+		pvr_page_table_l1_free(tmp);
+	}
+
+	kfree(op_ctx);
+}
+
+/**
+ * pvr_mmu_op_context_create() - Create an MMU op context.
+ * @ctx: MMU context associated with owning VM context.
+ * @sgt: Scatter gather table containing pages pinned for use by this context.
+ * @sgt_offset: Offset into @sgt at which the requested mapping starts.
+ * @size: Size in bytes of the requested device-virtual memory mapping. For an
+ * unmapping, this should be zero so that no page tables are allocated.
+ *
+ * Returns:
+ *  * Newly created MMU op context object on success, or
+ *  * -%ENOMEM if no memory is available,
+ *  * Any error code returned by pvr_page_table_l1_alloc() or
+ *    pvr_page_table_l0_alloc().
+ */
+struct pvr_mmu_op_context *
+pvr_mmu_op_context_create(struct pvr_mmu_context *ctx, struct sg_table *sgt,
+			  u64 sgt_offset, u64 size)
+{
+	int err;
+
+	struct pvr_mmu_op_context *op_ctx =
+		kzalloc(sizeof(*op_ctx), GFP_KERNEL);
+
+	if (!op_ctx)
+		return ERR_PTR(-ENOMEM);
+
+	op_ctx->mmu_ctx = ctx;
+	op_ctx->map.sgt = sgt;
+	op_ctx->map.sgt_offset = sgt_offset;
+	op_ctx->sync_level_required = PVR_MMU_SYNC_LEVEL_NONE;
+
+	if (size) {
+		const u32 l1_start_idx = pvr_page_table_l2_idx(sgt_offset);
+		const u32 l1_end_idx = pvr_page_table_l2_idx(sgt_offset + size);
+		const u32 l1_count = l1_end_idx - l1_start_idx + 1;
+		const u32 l0_start_idx = pvr_page_table_l1_idx(sgt_offset);
+		const u32 l0_end_idx = pvr_page_table_l1_idx(sgt_offset + size);
+		const u32 l0_count = l0_end_idx - l0_start_idx + 1;
+
+		/*
+		 * Alloc and push page table entries until we have enough of
+		 * each type, ending with linked lists of l0 and l1 entries in
+		 * reverse order.
+		 */
+		for (int i = 0; i < l1_count; i++) {
+			struct pvr_page_table_l1 *l1_tmp =
+				pvr_page_table_l1_alloc(ctx);
+
+			err = PTR_ERR_OR_ZERO(l1_tmp);
+			if (err)
+				goto err_cleanup;
+
+			l1_tmp->next_free = op_ctx->l1_free_tables;
+			op_ctx->l1_free_tables = l1_tmp;
+		}
+
+		for (int i = 0; i < l0_count; i++) {
+			struct pvr_page_table_l0 *l0_tmp =
+				pvr_page_table_l0_alloc(ctx);
+
+			err = PTR_ERR_OR_ZERO(l0_tmp);
+			if (err)
+				goto err_cleanup;
+
+			l0_tmp->next_free = op_ctx->l0_free_tables;
+			op_ctx->l0_free_tables = l0_tmp;
+		}
+	}
+
+	return op_ctx;
+
+err_cleanup:
+	pvr_mmu_op_context_destroy(op_ctx);
+
+	return ERR_PTR(err);
+}
+
+/**
+ * pvr_mmu_op_context_unmap_curr_page() - Unmap pages from a memory context
+ * starting from the current page of an MMU op context.
+ * @op_ctx: Target MMU op context pointing at the first page to unmap.
+ * @nr_pages: Number of pages to unmap.
+ *
+ * Return:
+ *  * 0 on success, or
+ *  * Any error encountered while advancing @op_ctx.curr_page with
+ *    pvr_mmu_op_context_next_page() (except -%ENXIO).
+ */
+static int
+pvr_mmu_op_context_unmap_curr_page(struct pvr_mmu_op_context *op_ctx,
+				   u64 nr_pages)
+{
+	int err;
+
+	if (nr_pages == 0)
+		return 0;
+
+	/*
+	 * Destroy first page outside loop, as it doesn't require a page
+	 * advance beforehand. If the L0 page table reference in
+	 * @op_ctx.curr_page is %NULL, there cannot be a mapped page at
+	 * @op_ctx.curr_page (so skip ahead).
+	 */
+	if (op_ctx->curr_page.l0_table)
+		pvr_page_destroy(op_ctx);
+
+	for (u64 page = 1; page < nr_pages; ++page) {
+		err = pvr_mmu_op_context_next_page(op_ctx, false);
+		/*
+		 * If the page table tree structure at @op_ctx.curr_page is
+		 * incomplete, skip ahead. We don't care about unmapping pages
+		 * that cannot exist.
+		 *
+		 * FIXME: This could be made more efficient by jumping ahead
+		 * using pvr_mmu_op_context_set_curr_page().
+		 */
+		if (err == -ENXIO)
+			continue;
+		else if (err)
+			return err;
+
+		pvr_page_destroy(op_ctx);
+	}
+
+	return 0;
+}
+
+/**
+ * pvr_mmu_unmap() - Unmap pages from a memory context.
+ * @op_ctx: Target MMU op context.
+ * @device_addr: First device-virtual address to unmap.
+ * @size: Size in bytes to unmap.
+ *
+ * The total number of pages unmapped is @size >> %PVR_DEVICE_PAGE_SHIFT.
+ *
+ * Returns:
+ *  * 0 on success, or
+ *  * Any error code returned by pvr_mmu_op_context_set_curr_page(), or
+ *  * Any error code returned by pvr_mmu_op_context_unmap_curr_page().
+ */
+int pvr_mmu_unmap(struct pvr_mmu_op_context *op_ctx, u64 device_addr, u64 size)
+{
+	int err = pvr_mmu_op_context_set_curr_page(op_ctx, device_addr, false);
+
+	if (err)
+		return err;
+
+	return pvr_mmu_op_context_unmap_curr_page(op_ctx,
+						  size >> PVR_DEVICE_PAGE_SHIFT);
+}
+
+/**
+ * pvr_mmu_map_sgl() - Map part of a scatter-gather table entry to
+ * device-virtual memory.
+ * @op_ctx: Target MMU op context pointing to the first page that should be
+ * mapped.
+ * @sgl: Target scatter-gather table entry.
+ * @offset: Offset into @sgl to map from. Must result in a starting address
+ * from @sgl which is CPU page-aligned.
+ * @size: Size of the memory to be mapped in bytes. Must be a non-zero multiple
+ * of the device page size.
+ * @page_flags: Page options to be applied to every device-virtual memory page
+ * in the created mapping.
+ *
+ * Return:
+ *  * 0 on success,
+ *  * -%EINVAL if the range specified by @offset and @size is not completely
+ *    within @sgl, or
+ *  * Any error encountered while creating a page with pvr_page_create(), or
+ *  * Any error encountered while advancing @op_ctx.curr_page with
+ *    pvr_mmu_op_context_next_page().
+ */
+static int
+pvr_mmu_map_sgl(struct pvr_mmu_op_context *op_ctx, struct scatterlist *sgl,
+		u64 offset, u64 size, struct pvr_page_flags_raw page_flags)
+{
+	const unsigned int pages = size >> PVR_DEVICE_PAGE_SHIFT;
+	dma_addr_t dma_addr = sg_dma_address(sgl) + offset;
+	const unsigned int dma_len = sg_dma_len(sgl);
+	struct pvr_page_table_ptr ptr_copy;
+	unsigned int page;
+	int err;
+
+	if (size > dma_len || offset > dma_len - size)
+		return -EINVAL;
+
+	/*
+	 * Before progressing, save a copy of the start pointer so we can use
+	 * it again if we enter an error state and have to destroy pages.
+	 */
+	memcpy(&ptr_copy, &op_ctx->curr_page, sizeof(ptr_copy));
+
+	/*
+	 * Create first page outside loop, as it doesn't require a page advance
+	 * beforehand.
+	 */
+	err = pvr_page_create(op_ctx, dma_addr, page_flags);
+	if (err)
+		return err;
+
+	for (page = 1; page < pages; ++page) {
+		err = pvr_mmu_op_context_next_page(op_ctx, true);
+		if (err)
+			goto err_destroy_pages;
+
+		dma_addr += PVR_DEVICE_PAGE_SIZE;
+
+		err = pvr_page_create(op_ctx, dma_addr, page_flags);
+		if (err)
+			goto err_destroy_pages;
+	}
+
+	return 0;
+
+err_destroy_pages:
+	memcpy(&op_ctx->curr_page, &ptr_copy, sizeof(op_ctx->curr_page));
+	err = pvr_mmu_op_context_unmap_curr_page(op_ctx, page);
+
+	return err;
+}
+
+/**
+ * pvr_mmu_map() - Map an object's virtual memory to physical memory.
+ * @op_ctx: Target MMU op context.
+ * @size: Size of memory to be mapped in bytes. Must be a non-zero multiple
+ * of the device page size.
+ * @flags: Flags from pvr_gem_object associated with the mapping.
+ * @device_addr: Virtual device address to map to. Must be device page-aligned.
+ *
+ * Returns:
+ *  * 0 on success, or
+ *  * -%EINVAL if @size or the mapping start offset is not device page-aligned,
+ *    or if the page tables for @device_addr could not be prepared,
+ *  * Any error code returned by pvr_mmu_map_sgl(), or
+ *  * Any error code returned by pvr_mmu_op_context_next_page().
+ */
+int pvr_mmu_map(struct pvr_mmu_op_context *op_ctx, u64 size, u64 flags,
+		u64 device_addr)
+{
+	struct pvr_page_table_ptr ptr_copy;
+	struct pvr_page_flags_raw flags_raw;
+	struct scatterlist *sgl;
+	u64 mapped_size = 0;
+	unsigned int count;
+	int err;
+
+	if (!size)
+		return 0;
+
+	if ((op_ctx->map.sgt_offset | size) & ~PVR_DEVICE_PAGE_MASK)
+		return -EINVAL;
+
+	err = pvr_mmu_op_context_set_curr_page(op_ctx, device_addr, true);
+	if (err)
+		return -EINVAL;
+
+	memcpy(&ptr_copy, &op_ctx->curr_page, sizeof(ptr_copy));
+
+	flags_raw = pvr_page_flags_raw_create(false, false,
+					      flags & DRM_PVR_BO_DEVICE_BYPASS_CACHE,
+					      flags & DRM_PVR_BO_DEVICE_PM_FW_PROTECT);
+
+	/* Map scatter gather table */
+	for_each_sgtable_dma_sg(op_ctx->map.sgt, sgl, count) {
+		const size_t sgl_len = sg_dma_len(sgl);
+		u64 sgl_offset, map_sgl_len;
+
+		if (sgl_len <= op_ctx->map.sgt_offset) {
+			op_ctx->map.sgt_offset -= sgl_len;
+			continue;
+		}
+
+		sgl_offset = op_ctx->map.sgt_offset;
+		map_sgl_len = min_t(u64, sgl_len - sgl_offset, size - mapped_size);
+
+		err = pvr_mmu_map_sgl(op_ctx, sgl, sgl_offset, map_sgl_len,
+				      flags_raw);
+		if (err)
+			break;
+
+		/*
+		 * Flag the L0 page table as requiring a flush when the MMU op
+		 * context is destroyed.
+		 */
+		pvr_mmu_op_context_require_sync(op_ctx, PVR_MMU_SYNC_LEVEL_0);
+
+		op_ctx->map.sgt_offset = 0;
+		mapped_size += map_sgl_len;
+
+		if (mapped_size >= size)
+			break;
+
+		err = pvr_mmu_op_context_next_page(op_ctx, true);
+		if (err)
+			break;
+	}
+
+	if (err && mapped_size) {
+		memcpy(&op_ctx->curr_page, &ptr_copy, sizeof(op_ctx->curr_page));
+		pvr_mmu_op_context_unmap_curr_page(op_ctx,
+						   mapped_size >> PVR_DEVICE_PAGE_SHIFT);
+	}
+
+	return err;
+}
diff --git a/drivers/gpu/drm/imagination/pvr_mmu.h b/drivers/gpu/drm/imagination/pvr_mmu.h
new file mode 100644
index 000000000000..bf93c5ffc86a
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_mmu.h
@@ -0,0 +1,108 @@
+/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#ifndef PVR_MMU_H
+#define PVR_MMU_H
+
+#include <linux/memory.h>
+#include <linux/types.h>
+
+/* Forward declaration from "pvr_device.h" */
+struct pvr_device;
+
+/* Forward declaration from "pvr_mmu.c" */
+struct pvr_mmu_context;
+struct pvr_mmu_op_context;
+
+/* Forward declaration from "pvr_vm.c" */
+struct pvr_vm_context;
+
+/* Forward declaration from <linux/scatterlist.h> */
+struct sg_table;
+
+/**
+ * DOC: Public API (constants)
+ *
+ * .. c:macro:: PVR_DEVICE_PAGE_SIZE
+ *
+ *    Fixed page size referenced by leaf nodes in the page table tree
+ *    structure. In the current implementation, this value is pegged to the
+ *    CPU page size (%PAGE_SIZE). It is therefore an error to specify a CPU
+ *    page size which is not also a supported device page size. The supported
+ *    device page sizes are: 4KiB, 16KiB, 64KiB, 256KiB, 1MiB and 2MiB.
+ *
+ * .. c:macro:: PVR_DEVICE_PAGE_SHIFT
+ *
+ *    Shift value used to efficiently multiply or divide by
+ *    %PVR_DEVICE_PAGE_SIZE.
+ *
+ *    This value is derived from %PVR_DEVICE_PAGE_SIZE.
+ *
+ * .. c:macro:: PVR_DEVICE_PAGE_MASK
+ *
+ *    Mask used to round a value down to the nearest multiple of
+ *    %PVR_DEVICE_PAGE_SIZE. When bitwise negated, it will indicate whether a
+ *    value is already a multiple of %PVR_DEVICE_PAGE_SIZE.
+ *
+ *    This value is derived from %PVR_DEVICE_PAGE_SIZE.
+ */
+
+/* PVR_DEVICE_PAGE_SIZE determines the page size */
+#define PVR_DEVICE_PAGE_SIZE (PAGE_SIZE)
+#define PVR_DEVICE_PAGE_SHIFT (PAGE_SHIFT)
+#define PVR_DEVICE_PAGE_MASK (PAGE_MASK)
+
+/**
+ * DOC: Page table index utilities (constants)
+ *
+ * .. c:macro:: PVR_PAGE_TABLE_ADDR_SPACE_SIZE
+ *
+ *    Size of device-virtual address space which can be represented in the page
+ *    table structure.
+ *
+ *    This value is checked at runtime against
+ *    &pvr_device_features.virtual_address_space_bits by
+ *    pvr_vm_create_context(), which will return an error if the feature value
+ *    does not match this constant.
+ *
+ *    .. admonition:: Future work
+ *
+ *       It should be possible to support other values of
+ *       &pvr_device_features.virtual_address_space_bits, but so far no
+ *       hardware has been created which advertises an unsupported value.
+ *
+ * .. c:macro:: PVR_PAGE_TABLE_ADDR_BITS
+ *
+ *    Number of bits needed to represent any value less than
+ *    %PVR_PAGE_TABLE_ADDR_SPACE_SIZE exactly.
+ *
+ * .. c:macro:: PVR_PAGE_TABLE_ADDR_MASK
+ *
+ *    Bitmask of device-virtual addresses which are valid in the page table
+ *    structure.
+ *
+ *    This value is derived from %PVR_PAGE_TABLE_ADDR_SPACE_SIZE, so the same
+ *    notes on that constant apply here.
+ */
+#define PVR_PAGE_TABLE_ADDR_SPACE_SIZE SZ_1T
+#define PVR_PAGE_TABLE_ADDR_BITS __ffs(PVR_PAGE_TABLE_ADDR_SPACE_SIZE)
+#define PVR_PAGE_TABLE_ADDR_MASK (PVR_PAGE_TABLE_ADDR_SPACE_SIZE - 1)
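+
+/*
+ * With the 1 TiB address space above, PVR_PAGE_TABLE_ADDR_BITS evaluates to
+ * 40 and PVR_PAGE_TABLE_ADDR_MASK covers bits [39:0].
+ */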
+
+int pvr_mmu_flush(struct pvr_device *pvr_dev);
+
+struct pvr_mmu_context *pvr_mmu_context_create(struct pvr_device *pvr_dev);
+void pvr_mmu_context_destroy(struct pvr_mmu_context *ctx);
+
+dma_addr_t pvr_mmu_get_root_table_dma_addr(struct pvr_mmu_context *ctx);
+
+void pvr_mmu_op_context_destroy(struct pvr_mmu_op_context *op_ctx);
+struct pvr_mmu_op_context *
+pvr_mmu_op_context_create(struct pvr_mmu_context *ctx,
+			  struct sg_table *sgt, u64 sgt_offset, u64 size);
+
+int pvr_mmu_map(struct pvr_mmu_op_context *op_ctx, u64 size, u64 flags,
+		u64 device_addr);
+int pvr_mmu_unmap(struct pvr_mmu_op_context *op_ctx, u64 device_addr, u64 size);
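+
+/*
+ * Typical call sequence (illustrative sketch only; pvr_vm.c contains the real
+ * callers and their error handling):
+ *
+ *	struct pvr_mmu_context *ctx = pvr_mmu_context_create(pvr_dev);
+ *	struct pvr_mmu_op_context *op_ctx =
+ *		pvr_mmu_op_context_create(ctx, sgt, offset, size);
+ *
+ *	err = pvr_mmu_map(op_ctx, size, flags, device_addr);
+ *
+ *	Destroying the op context also flushes the MMU when required:
+ *	pvr_mmu_op_context_destroy(op_ctx);
+ *	pvr_mmu_context_destroy(ctx);
+ */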
+
+#endif /* PVR_MMU_H */
+
diff --git a/drivers/gpu/drm/imagination/pvr_vm.c b/drivers/gpu/drm/imagination/pvr_vm.c
new file mode 100644
index 000000000000..616fad3a3325
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_vm.c
@@ -0,0 +1,890 @@
+// SPDX-License-Identifier: GPL-2.0 OR MIT
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#include "pvr_vm.h"
+
+#include "pvr_device.h"
+#include "pvr_drv.h"
+#include "pvr_gem.h"
+#include "pvr_mmu.h"
+#include "pvr_rogue_fwif.h"
+#include "pvr_rogue_heap_config.h"
+
+#include <drm/drm_gem.h>
+#include <drm/drm_gpuva_mgr.h>
+
+#include <linux/container_of.h>
+#include <linux/err.h>
+#include <linux/errno.h>
+#include <linux/gfp_types.h>
+#include <linux/kref.h>
+#include <linux/mutex.h>
+#include <linux/stddef.h>
+
+/**
+ * DOC: Memory context
+ *
+ * This is the "top level" datatype in the VM code. It's exposed in the public
+ * API as an opaque handle.
+ */
+
+/**
+ * struct pvr_vm_context - Context type which encapsulates an entire page table
+ * tree structure.
+ * @pvr_dev: The PowerVR device to which this context is bound.
+ *
+ * This binding is immutable for the life of the context.
+ * @mmu_ctx: The context for binding to physical memory.
+ * @gpuva_mgr: GPUVA manager object associated with this context.
+ * @lock: Global lock on this entire structure of page tables.
+ * @fw_mem_ctx_obj: Firmware object representing firmware memory context.
+ * @ref_count: Reference count of object.
+ */
+struct pvr_vm_context {
+	struct pvr_device *pvr_dev;
+	struct pvr_mmu_context *mmu_ctx;
+	struct drm_gpuva_manager gpuva_mgr;
+	struct mutex lock;
+	struct pvr_fw_object *fw_mem_ctx_obj;
+	struct kref ref_count;
+};
+
+/**
+ * pvr_vm_get_page_table_root_addr() - Get the DMA address of the root of the
+ *                                     page table structure behind a VM context.
+ * @vm_ctx: Target VM context.
+ */
+dma_addr_t pvr_vm_get_page_table_root_addr(struct pvr_vm_context *vm_ctx)
+{
+	return pvr_mmu_get_root_table_dma_addr(vm_ctx->mmu_ctx);
+}
+
+/**
+ * DOC: Memory mappings
+ */
+
+/**
+ * pvr_vm_gpuva_mapping_init() - Setup a mapping object with the specified
+ * parameters ready for mapping using pvr_vm_gpuva_mapping_map().
+ * @va: Pointer to drm_gpuva mapping object.
+ * @device_addr: Device-virtual address at the start of the mapping.
+ * @size: Size of the desired mapping.
+ * @pvr_obj: Target PowerVR memory object.
+ * @pvr_obj_offset: Offset into @pvr_obj to begin mapping from.
+ *
+ * Some parameters of this function are unchecked. It is therefore the caller's
+ * responsibility to ensure certain constraints are met. Specifically:
+ *
+ * * @pvr_obj_offset must be less than the size of @pvr_obj,
+ * * The sum of @pvr_obj_offset and @size must be less than or equal to the
+ *   size of @pvr_obj,
+ * * The range specified by @pvr_obj_offset and @size (the "CPU range") must be
+ *   CPU page-aligned both in start position and size, and
+ * * The range specified by @device_addr and @size (the "device range") must be
+ *   device page-aligned both in start position and size.
+ *
+ * Furthermore, it is up to the caller to make sure that a reference to @pvr_obj
+ * is taken prior to mapping @va with the drm_gpuva_manager.
+ */
+static void
+pvr_vm_gpuva_mapping_init(struct drm_gpuva *va, u64 device_addr, u64 size,
+			  struct pvr_gem_object *pvr_obj, u64 pvr_obj_offset)
+{
+	va->va.addr = device_addr;
+	va->va.range = size;
+	va->gem.obj = gem_from_pvr_gem(pvr_obj);
+	va->gem.offset = pvr_obj_offset;
+}
+
+struct pvr_vm_gpuva_op_ctx {
+	struct pvr_vm_context *vm_ctx;
+	struct pvr_mmu_op_context *mmu_op_ctx;
+	struct drm_gpuva *new_va, *prev_va, *next_va;
+};
+
+/**
+ * pvr_vm_gpuva_map() - Insert a mapping into a memory context.
+ * @op: gpuva op containing the map details.
+ * @op_ctx: Operation context.
+ *
+ * Context: Called by drm_gpuva_sm_map following a successful mapping while
+ * @op_ctx.vm_ctx mutex is held.
+ *
+ * Return:
+ *  * 0 on success, or
+ *  * Any error returned by pvr_mmu_map().
+ */
+static int
+pvr_vm_gpuva_map(struct drm_gpuva_op *op, void *op_ctx)
+{
+	struct pvr_gem_object *pvr_gem = gem_to_pvr_gem(op->map.gem.obj);
+	struct pvr_vm_gpuva_op_ctx *ctx = op_ctx;
+	int err;
+
+	if ((op->map.gem.offset | op->map.va.range) & ~PVR_DEVICE_PAGE_MASK)
+		return -EINVAL;
+
+	err = pvr_mmu_map(ctx->mmu_op_ctx, op->map.va.range, pvr_gem->flags,
+			  op->map.va.addr);
+	if (err)
+		return err;
+
+	pvr_vm_gpuva_mapping_init(ctx->new_va, op->map.va.addr,
+				  op->map.va.range, pvr_gem, op->map.gem.offset);
+
+	drm_gpuva_map(&ctx->vm_ctx->gpuva_mgr, ctx->new_va, &op->map);
+	drm_gpuva_link(ctx->new_va);
+	ctx->new_va = NULL;
+
+	/*
+	 * Increment the refcount on the underlying physical memory resource
+	 * to prevent de-allocation while the mapping exists.
+	 */
+	pvr_gem_object_get(pvr_gem);
+
+	return 0;
+}
+
+/**
+ * pvr_vm_gpuva_unmap() - Remove a mapping from a memory context.
+ * @op: gpuva op containing the unmap details.
+ * @op_ctx: Operation context.
+ *
+ * Context: Called by drm_gpuva_sm_unmap following a successful unmapping while
+ * @op_ctx.vm_ctx mutex is held.
+ *
+ * Return:
+ *  * 0 on success, or
+ *  * Any error returned by pvr_mmu_unmap().
+ */
+static int
+pvr_vm_gpuva_unmap(struct drm_gpuva_op *op, void *op_ctx)
+{
+	struct pvr_gem_object *pvr_gem = gem_to_pvr_gem(op->unmap.va->gem.obj);
+	struct pvr_vm_gpuva_op_ctx *ctx = op_ctx;
+
+	int err = pvr_mmu_unmap(ctx->mmu_op_ctx, op->unmap.va->va.addr,
+				op->unmap.va->va.range);
+
+	if (err)
+		return err;
+
+	drm_gpuva_unmap(&op->unmap);
+	drm_gpuva_unlink(op->unmap.va);
+	kfree(op->unmap.va);
+
+	pvr_gem_object_put(pvr_gem);
+
+	return 0;
+}
+
+/**
+ * pvr_vm_gpuva_remap() - Remap a mapping within a memory context.
+ * @op: gpuva op containing the remap details.
+ * @op_ctx: Operation context.
+ *
+ * Context: Called by either drm_gpuva_sm_map or drm_gpuva_sm_unmap when a
+ * mapping or unmapping operation causes a region to be split. The
+ * @op_ctx.vm_ctx mutex is held.
+ *
+ * Return:
+ *  * 0 on success, or
+ *  * Any error returned by pvr_mmu_unmap().
+ */
+static int
+pvr_vm_gpuva_remap(struct drm_gpuva_op *op, void *op_ctx)
+{
+	struct pvr_vm_gpuva_op_ctx *ctx = op_ctx;
+
+	if (op->remap.unmap) {
+		const u64 va_start = op->remap.prev ?
+				     op->remap.prev->va.addr + op->remap.prev->va.range :
+				     op->remap.unmap->va->va.addr;
+		const u64 va_end = op->remap.next ?
+				   op->remap.next->va.addr :
+				   op->remap.unmap->va->va.addr + op->remap.unmap->va->va.range;
+
+		int err = pvr_mmu_unmap(ctx->mmu_op_ctx, va_start,
+					va_end - va_start);
+
+		if (err)
+			return err;
+	}
+
+	if (op->remap.prev)
+		pvr_vm_gpuva_mapping_init(ctx->prev_va, op->remap.prev->va.addr,
+					  op->remap.prev->va.range,
+					  gem_to_pvr_gem(op->remap.prev->gem.obj),
+					  op->remap.prev->gem.offset);
+
+	if (op->remap.next)
+		pvr_vm_gpuva_mapping_init(ctx->next_va, op->remap.next->va.addr,
+					  op->remap.next->va.range,
+					  gem_to_pvr_gem(op->remap.next->gem.obj),
+					  op->remap.next->gem.offset);
+
+	/* No actual remap required: the page table tree depth is fixed to 3,
+	 * and we use 4k page table entries only for now.
+	 */
+	drm_gpuva_remap(ctx->prev_va, ctx->next_va, &op->remap);
+
+	if (op->remap.prev) {
+		pvr_gem_object_get(gem_to_pvr_gem(ctx->prev_va->gem.obj));
+		drm_gpuva_link(ctx->prev_va);
+		ctx->prev_va = NULL;
+	}
+
+	if (op->remap.next) {
+		pvr_gem_object_get(gem_to_pvr_gem(ctx->next_va->gem.obj));
+		drm_gpuva_link(ctx->next_va);
+		ctx->next_va = NULL;
+	}
+
+	if (op->remap.unmap) {
+		struct pvr_gem_object *pvr_gem = gem_to_pvr_gem(op->remap.unmap->va->gem.obj);
+
+		drm_gpuva_unlink(op->remap.unmap->va);
+		kfree(op->remap.unmap->va);
+
+		pvr_gem_object_put(pvr_gem);
+	}
+
+	return 0;
+}
+
+/*
+ * Public API
+ *
+ * For an overview of these functions, see *DOC: Public API* in "pvr_vm.h".
+ */
+
+/**
+ * pvr_device_addr_is_valid() - Tests whether a device-virtual address
+ *                              is valid.
+ * @device_addr: Virtual device address to test.
+ *
+ * Return:
+ *  * %true if @device_addr is within the valid range for a device page
+ *    table and is aligned to the device page size, or
+ *  * %false otherwise.
+ */
+bool
+pvr_device_addr_is_valid(u64 device_addr)
+{
+	return (device_addr & ~PVR_PAGE_TABLE_ADDR_MASK) == 0 &&
+	       (device_addr & ~PVR_DEVICE_PAGE_MASK) == 0;
+}
+
+/**
+ * pvr_device_addr_and_size_are_valid() - Tests whether a device-virtual
+ * address and associated size are both valid.
+ * @device_addr: Virtual device address to test.
+ * @size: Size of the range based at @device_addr to test.
+ *
+ * Calling pvr_device_addr_is_valid() twice (once on @size, and again on
+ * @device_addr + @size) to verify a device-virtual address range initially
+ * seems intuitive, but it produces a false-negative when the address range
+ * is right at the end of device-virtual address space.
+ *
+ * This function catches that corner case, as well as checking that
+ * @size is non-zero.
+ *
+ * Return:
+ *  * %true if @device_addr is device page aligned; @size is device page
+ *    aligned; the range specified by @device_addr and @size is within the
+ *    bounds of the device-virtual address space, and @size is non-zero, or
+ *  * %false otherwise.
+ */
+bool
+pvr_device_addr_and_size_are_valid(u64 device_addr, u64 size)
+{
+	return pvr_device_addr_is_valid(device_addr) &&
+	       size != 0 && (size & ~PVR_DEVICE_PAGE_MASK) == 0 &&
+	       (device_addr + size <= PVR_PAGE_TABLE_ADDR_SPACE_SIZE);
+}
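+
+/*
+ * Example of the corner case described above: with device_addr =
+ * PVR_PAGE_TABLE_ADDR_SPACE_SIZE - PVR_DEVICE_PAGE_SIZE and size =
+ * PVR_DEVICE_PAGE_SIZE, the range is accepted even though
+ * (device_addr + size) is not itself a valid device-virtual address.
+ */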
+
+static const struct drm_gpuva_fn_ops pvr_vm_gpuva_ops = {
+	.sm_step_map = pvr_vm_gpuva_map,
+	.sm_step_remap = pvr_vm_gpuva_remap,
+	.sm_step_unmap = pvr_vm_gpuva_unmap,
+};
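+
+/*
+ * These callbacks are invoked by drm_gpuva_sm_map() and drm_gpuva_sm_unmap()
+ * on behalf of pvr_vm_map() and pvr_vm_unmap() below, with the owning
+ * &pvr_vm_context.lock held for the duration of the operation.
+ */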
+
+/**
+ * pvr_vm_create_context() - Create a new VM context.
+ * @pvr_dev: Target PowerVR device.
+ * @is_userspace_context: %true if this context is for userspace. This will
+ *                        create a firmware memory context for the VM context
+ *                        and disable warnings when tearing down mappings.
+ *
+ * Return:
+ *  * A handle to the newly-minted VM context on success,
+ *  * -%EINVAL if the feature "virtual address space bits" on @pvr_dev is
+ *    missing or has an unsupported value,
+ *  * -%ENOMEM if allocation of the structure behind the opaque handle fails,
+ *    or
+ *  * Any error encountered while setting up internal structures.
+ */
+struct pvr_vm_context *
+pvr_vm_create_context(struct pvr_device *pvr_dev, bool is_userspace_context)
+{
+	struct drm_device *drm_dev = from_pvr_device(pvr_dev);
+
+	struct pvr_vm_context *vm_ctx;
+	u16 device_addr_bits;
+
+	int err;
+
+	err = PVR_FEATURE_VALUE(pvr_dev, virtual_address_space_bits,
+				&device_addr_bits);
+	if (err) {
+		drm_err(drm_dev,
+			"Failed to get device virtual address space bits\n");
+		return ERR_PTR(err);
+	}
+
+	if (device_addr_bits != PVR_PAGE_TABLE_ADDR_BITS) {
+		drm_err(drm_dev,
+			"Device has unsupported virtual address space size\n");
+		return ERR_PTR(-EINVAL);
+	}
+
+	vm_ctx = kzalloc(sizeof(*vm_ctx), GFP_KERNEL);
+	if (!vm_ctx)
+		return ERR_PTR(-ENOMEM);
+
+	vm_ctx->pvr_dev = pvr_dev;
+	kref_init(&vm_ctx->ref_count);
+	mutex_init(&vm_ctx->lock);
+
+	drm_gpuva_manager_init(&vm_ctx->gpuva_mgr,
+			       is_userspace_context ? "PowerVR-user-VM" : "PowerVR-FW-VM",
+			       0, 1ULL << device_addr_bits, 0, 0, &pvr_vm_gpuva_ops);
+
+	vm_ctx->mmu_ctx = pvr_mmu_context_create(pvr_dev);
+	err = PTR_ERR_OR_ZERO(vm_ctx->mmu_ctx);
+	if (err) {
+		vm_ctx->mmu_ctx = NULL;
+		goto err_put_ctx;
+	}
+
+	if (is_userspace_context) {
+		/* TODO: Create FW mem context */
+		err = -ENODEV;
+		goto err_put_ctx;
+	}
+
+	return vm_ctx;
+
+err_put_ctx:
+	pvr_vm_context_put(vm_ctx);
+
+	return ERR_PTR(err);
+}
+
+/**
+ * pvr_vm_context_release() - Teardown a VM context.
+ * @ref_count: Pointer to reference counter of the VM context.
+ *
+ * This function ensures that no mappings are left dangling by unmapping them
+ * all in order of ascending device-virtual address.
+ */
+static void
+pvr_vm_context_release(struct kref *ref_count)
+{
+	struct pvr_vm_context *vm_ctx =
+		container_of(ref_count, struct pvr_vm_context, ref_count);
+
+	/* TODO: Destroy FW mem context */
+	WARN_ON(vm_ctx->fw_mem_ctx_obj);
+
+	WARN_ON(pvr_vm_unmap(vm_ctx, vm_ctx->gpuva_mgr.mm_start,
+			     vm_ctx->gpuva_mgr.mm_range));
+
+	drm_gpuva_manager_destroy(&vm_ctx->gpuva_mgr);
+	pvr_mmu_context_destroy(vm_ctx->mmu_ctx);
+	mutex_destroy(&vm_ctx->lock);
+
+	kfree(vm_ctx);
+}
+
+/**
+ * pvr_vm_context_lookup() - Look up VM context from handle
+ * @pvr_file: Pointer to pvr_file structure.
+ * @handle: Object handle.
+ *
+ * Takes reference on VM context object. Call pvr_vm_context_put() to release.
+ *
+ * Returns:
+ *  * The requested object on success, or
+ *  * %NULL on failure (object does not exist in list, or is not a VM context)
+ */
+struct pvr_vm_context *
+pvr_vm_context_lookup(struct pvr_file *pvr_file, u32 handle)
+{
+	struct pvr_vm_context *vm_ctx;
+
+	xa_lock(&pvr_file->vm_ctx_handles);
+	vm_ctx = xa_load(&pvr_file->vm_ctx_handles, handle);
+	if (vm_ctx)
+		kref_get(&vm_ctx->ref_count);
+
+	xa_unlock(&pvr_file->vm_ctx_handles);
+
+	return vm_ctx;
+}
+
+/**
+ * pvr_vm_context_put() - Release a reference on a VM context
+ * @vm_ctx: Target VM context.
+ *
+ * Returns:
+ *  * %true if the VM context was destroyed, or
+ *  * %false if there are any references still remaining.
+ */
+bool
+pvr_vm_context_put(struct pvr_vm_context *vm_ctx)
+{
+	WARN_ON(!vm_ctx);
+
+	if (vm_ctx)
+		return kref_put(&vm_ctx->ref_count, pvr_vm_context_release);
+
+	return true;
+}
+
+/**
+ * pvr_destroy_vm_contexts_for_file() - Destroy any VM contexts associated with the
+ * given file.
+ * @pvr_file: Pointer to pvr_file structure.
+ *
+ * Removes all vm_contexts associated with @pvr_file from the device VM context
+ * list and drops initial references. vm_contexts will then be destroyed once
+ * all outstanding references are dropped.
+ */
+void pvr_destroy_vm_contexts_for_file(struct pvr_file *pvr_file)
+{
+	struct pvr_vm_context *vm_ctx;
+	unsigned long handle;
+
+	xa_for_each(&pvr_file->vm_ctx_handles, handle, vm_ctx) {
+		/* vm_ctx is not used here because that would create a race with xa_erase */
+		pvr_vm_context_put(xa_erase(&pvr_file->vm_ctx_handles, handle));
+	}
+}
+
+/**
+ * pvr_vm_map() - Map a section of physical memory into a section of device-virtual memory.
+ * @vm_ctx: Target VM context.
+ * @pvr_obj: Target PowerVR memory object.
+ * @pvr_obj_offset: Offset into @pvr_obj to map from.
+ * @device_addr: Virtual device address at the start of the requested mapping.
+ * @size: Size of the requested mapping.
+ *
+ * No handle is returned to represent the mapping. Instead, callers should
+ * remember @device_addr and use that as a handle.
+ *
+ * Return:
+ *  * 0 on success,
+ *  * -%EINVAL if @device_addr is not a valid page-aligned device-virtual
+ *    address; the region specified by @pvr_obj_offset and @size does not fall
+ *    entirely within @pvr_obj, or any part of the specified region of @pvr_obj
+ *    is not device-virtual page-aligned,
+ *  * Any error encountered while performing internal operations required to
+ *    create the mapping (returned from pvr_vm_gpuva_map or
+ *    pvr_vm_gpuva_remap).
+ */
+int
+pvr_vm_map(struct pvr_vm_context *vm_ctx,
+	   struct pvr_gem_object *pvr_obj, u64 pvr_obj_offset,
+	   u64 device_addr, u64 size)
+{
+	const size_t pvr_obj_size = pvr_gem_object_size(pvr_obj);
+	struct pvr_vm_gpuva_op_ctx op_ctx = { .vm_ctx = vm_ctx };
+	struct sg_table *sgt;
+	int err;
+
+	if (!pvr_device_addr_and_size_are_valid(device_addr, size) ||
+	    pvr_obj_offset & ~PAGE_MASK || size & ~PAGE_MASK ||
+	    pvr_obj_offset + size > pvr_obj_size ||
+	    pvr_obj_offset > pvr_obj_size) {
+		return -EINVAL;
+	}
+
+	op_ctx.new_va = kzalloc(sizeof(*op_ctx.new_va), GFP_KERNEL);
+	op_ctx.prev_va = kzalloc(sizeof(*op_ctx.prev_va), GFP_KERNEL);
+	op_ctx.next_va = kzalloc(sizeof(*op_ctx.next_va), GFP_KERNEL);
+	if (!op_ctx.new_va || !op_ctx.prev_va || !op_ctx.next_va) {
+		err = -ENOMEM;
+		goto out_free;
+	}
+
+	sgt = pvr_gem_object_get_pages_sgt(pvr_obj);
+	err = PTR_ERR_OR_ZERO(sgt);
+	if (err)
+		goto out_free;
+
+	op_ctx.mmu_op_ctx = pvr_mmu_op_context_create(vm_ctx->mmu_ctx, sgt,
+						      pvr_obj_offset, size);
+	err = PTR_ERR_OR_ZERO(op_ctx.mmu_op_ctx);
+	if (err) {
+		op_ctx.mmu_op_ctx = NULL;
+		goto out_mmu_op_ctx_destroy;
+	}
+
+	mutex_lock(&vm_ctx->lock);
+	err = drm_gpuva_sm_map(&vm_ctx->gpuva_mgr, &op_ctx, device_addr, size,
+			       gem_from_pvr_gem(pvr_obj), pvr_obj_offset);
+	mutex_unlock(&vm_ctx->lock);
+
+out_mmu_op_ctx_destroy:
+	if (op_ctx.mmu_op_ctx)
+		pvr_mmu_op_context_destroy(op_ctx.mmu_op_ctx);
+
+out_free:
+	kfree(op_ctx.next_va);
+	kfree(op_ctx.prev_va);
+	kfree(op_ctx.new_va);
+
+	return err;
+}
+
+/**
+ * pvr_vm_unmap() - Unmap an already mapped section of device-virtual memory.
+ * @vm_ctx: Target VM context.
+ * @device_addr: Virtual device address at the start of the target mapping.
+ * @size: Size of the target mapping.
+ *
+ * Return:
+ *  * 0 on success,
+ *  * -%EINVAL if @device_addr is not a valid page-aligned device-virtual
+ *    address,
+ *  * Any error encountered while performing internal operations required to
+ *    destroy the mapping (returned from pvr_vm_gpuva_unmap or
+ *    pvr_vm_gpuva_remap).
+ */
+int
+pvr_vm_unmap(struct pvr_vm_context *vm_ctx, u64 device_addr, u64 size)
+{
+	struct pvr_vm_gpuva_op_ctx op_ctx = { .vm_ctx = vm_ctx };
+	int err;
+
+	if (!pvr_device_addr_and_size_are_valid(device_addr, size))
+		return -EINVAL;
+
+	op_ctx.prev_va = kzalloc(sizeof(*op_ctx.prev_va), GFP_KERNEL);
+	op_ctx.next_va = kzalloc(sizeof(*op_ctx.next_va), GFP_KERNEL);
+	if (!op_ctx.prev_va || !op_ctx.next_va) {
+		err = -ENOMEM;
+		goto out;
+	}
+
+	op_ctx.mmu_op_ctx =
+		pvr_mmu_op_context_create(vm_ctx->mmu_ctx, NULL, 0, 0);
+	err = PTR_ERR_OR_ZERO(op_ctx.mmu_op_ctx);
+	if (err) {
+		op_ctx.mmu_op_ctx = NULL;
+		goto out;
+	}
+
+	mutex_lock(&vm_ctx->lock);
+	err = drm_gpuva_sm_unmap(&vm_ctx->gpuva_mgr, &op_ctx, device_addr, size);
+	mutex_unlock(&vm_ctx->lock);
+
+out:
+	pvr_mmu_op_context_destroy(op_ctx.mmu_op_ctx);
+	kfree(op_ctx.next_va);
+	kfree(op_ctx.prev_va);
+
+	return err;
+}
+
+/*
+ * Static data areas are determined by firmware.
+ *
+ * When adding a new static data area you will also need to update the reserved_size field for the
+ * heap in pvr_heaps[].
+ */
+static const struct drm_pvr_static_data_area static_data_areas[] = {
+	{
+		.area_usage = DRM_PVR_STATIC_DATA_AREA_FENCE,
+		.location_heap_id = DRM_PVR_HEAP_GENERAL,
+		.offset = 0,
+		.size = 128,
+	},
+	{
+		.area_usage = DRM_PVR_STATIC_DATA_AREA_YUV_CSC,
+		.location_heap_id = DRM_PVR_HEAP_GENERAL,
+		.offset = 128,
+		.size = 1024,
+	},
+	{
+		.area_usage = DRM_PVR_STATIC_DATA_AREA_VDM_SYNC,
+		.location_heap_id = DRM_PVR_HEAP_PDS_CODE_DATA,
+		.offset = 0,
+		.size = 128,
+	},
+	{
+		.area_usage = DRM_PVR_STATIC_DATA_AREA_EOT,
+		.location_heap_id = DRM_PVR_HEAP_PDS_CODE_DATA,
+		.offset = 128,
+		.size = 128,
+	},
+	{
+		.area_usage = DRM_PVR_STATIC_DATA_AREA_VDM_SYNC,
+		.location_heap_id = DRM_PVR_HEAP_USC_CODE,
+		.offset = 0,
+		.size = 128,
+	},
+};
+
+#define GET_RESERVED_SIZE(last_offset, last_size) round_up((last_offset) + (last_size), PAGE_SIZE)
+
+/*
+ * The values given to GET_RESERVED_SIZE() are taken from the last entry in the corresponding
+ * static data area for each heap.
+ */
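+/*
+ * For example, the last DRM_PVR_HEAP_GENERAL entry above has offset 128 and
+ * size 1024, so GET_RESERVED_SIZE(128, 1024) == round_up(1152, PAGE_SIZE).
+ */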
+static const struct drm_pvr_heap pvr_heaps[] = {
+	[DRM_PVR_HEAP_GENERAL] = {
+		.base = ROGUE_GENERAL_HEAP_BASE,
+		.size = ROGUE_GENERAL_HEAP_SIZE,
+		.flags = 0,
+		.page_size_log2 = PVR_DEVICE_PAGE_SHIFT,
+	},
+	[DRM_PVR_HEAP_PDS_CODE_DATA] = {
+		.base = ROGUE_PDSCODEDATA_HEAP_BASE,
+		.size = ROGUE_PDSCODEDATA_HEAP_SIZE,
+		.flags = 0,
+		.page_size_log2 = PVR_DEVICE_PAGE_SHIFT,
+	},
+	[DRM_PVR_HEAP_USC_CODE] = {
+		.base = ROGUE_USCCODE_HEAP_BASE,
+		.size = ROGUE_USCCODE_HEAP_SIZE,
+		.flags = 0,
+		.page_size_log2 = PVR_DEVICE_PAGE_SHIFT,
+	},
+	[DRM_PVR_HEAP_RGNHDR] = {
+		.base = ROGUE_RGNHDR_HEAP_BASE,
+		.size = ROGUE_RGNHDR_HEAP_SIZE,
+		.flags = 0,
+		.page_size_log2 = PVR_DEVICE_PAGE_SHIFT,
+	},
+	[DRM_PVR_HEAP_VIS_TEST] = {
+		.base = ROGUE_VISTEST_HEAP_BASE,
+		.size = ROGUE_VISTEST_HEAP_SIZE,
+		.flags = 0,
+		.page_size_log2 = PVR_DEVICE_PAGE_SHIFT,
+	},
+	[DRM_PVR_HEAP_TRANSFER_FRAG] = {
+		.base = ROGUE_TRANSFER_FRAG_HEAP_BASE,
+		.size = ROGUE_TRANSFER_FRAG_HEAP_SIZE,
+		.flags = 0,
+		.page_size_log2 = PVR_DEVICE_PAGE_SHIFT,
+	},
+};
+
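+/**
+ * pvr_static_data_areas_get() - Handle %DRM_PVR_DEV_QUERY_STATIC_DATA_AREAS_GET
+ * @pvr_dev: Target PowerVR device.
+ * @args: Device query ioctl arguments describing the userspace buffer to fill.
+ *
+ * When @args->pointer is %NULL, only the required structure size is written
+ * back to @args->size.
+ *
+ * Return:
+ *  * 0 on success, or
+ *  * Any error returned while copying the query to or from userspace.
+ */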
+int
+pvr_static_data_areas_get(const struct pvr_device *pvr_dev,
+			  struct drm_pvr_ioctl_dev_query_args *args)
+{
+	struct drm_pvr_dev_query_static_data_areas query = {0};
+	int err;
+
+	if (!args->pointer) {
+		args->size = sizeof(struct drm_pvr_dev_query_static_data_areas);
+		return 0;
+	}
+
+	err = PVR_UOBJ_GET(query, args->size, args->pointer);
+	if (err < 0)
+		return err;
+
+	if (!query.static_data_areas.array) {
+		query.static_data_areas.count = ARRAY_SIZE(static_data_areas);
+		query.static_data_areas.stride = sizeof(struct drm_pvr_static_data_area);
+		goto copy_out;
+	}
+
+	if (query.static_data_areas.count > ARRAY_SIZE(static_data_areas))
+		query.static_data_areas.count = ARRAY_SIZE(static_data_areas);
+
+	err = PVR_UOBJ_SET_ARRAY(&query.static_data_areas, static_data_areas);
+	if (err < 0)
+		return err;
+
+copy_out:
+	err = PVR_UOBJ_SET(args->pointer, args->size, query);
+	if (err < 0)
+		return err;
+
+	args->size = sizeof(query);
+	return 0;
+}
+
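+/**
+ * pvr_heap_info_get() - Handle %DRM_PVR_DEV_QUERY_HEAP_INFO_GET
+ * @pvr_dev: Target PowerVR device.
+ * @args: Device query ioctl arguments describing the userspace buffer to fill.
+ *
+ * When @args->pointer is %NULL, only the required structure size is written
+ * back to @args->size. The RGNHDR heap is reported with a size of zero when
+ * the BRN63142 quirk is not present.
+ *
+ * Return:
+ *  * 0 on success, or
+ *  * Any error returned while copying the query to or from userspace.
+ */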
+int
+pvr_heap_info_get(const struct pvr_device *pvr_dev,
+		  struct drm_pvr_ioctl_dev_query_args *args)
+{
+	struct drm_pvr_dev_query_heap_info query = {0};
+	u64 dest;
+	int err;
+
+	if (!args->pointer) {
+		args->size = sizeof(struct drm_pvr_dev_query_heap_info);
+		return 0;
+	}
+
+	err = PVR_UOBJ_GET(query, args->size, args->pointer);
+	if (err < 0)
+		return err;
+
+	if (!query.heaps.array) {
+		query.heaps.count = ARRAY_SIZE(pvr_heaps);
+		query.heaps.stride = sizeof(struct drm_pvr_heap);
+		goto copy_out;
+	}
+
+	if (query.heaps.count > ARRAY_SIZE(pvr_heaps))
+		query.heaps.count = ARRAY_SIZE(pvr_heaps);
+
+	/* The region header heap is only present if the BRN63142 quirk is present. */
+	dest = query.heaps.array;
+	for (size_t i = 0; i < query.heaps.count; i++) {
+		struct drm_pvr_heap heap = pvr_heaps[i];
+
+		if (i == DRM_PVR_HEAP_RGNHDR && !PVR_HAS_QUIRK(pvr_dev, 63142))
+			heap.size = 0;
+
+		err = PVR_UOBJ_SET(dest, query.heaps.stride, heap);
+		if (err < 0)
+			return err;
+
+		dest += query.heaps.stride;
+	}
+
+copy_out:
+	err = PVR_UOBJ_SET(args->pointer, args->size, query);
+	if (err < 0)
+		return err;
+
+	args->size = sizeof(query);
+	return 0;
+}
+
+/**
+ * pvr_heap_contains_range() - Determine if a given heap contains the specified
+ *                             device-virtual address range.
+ * @pvr_heap: Target heap.
+ * @start: Inclusive start of the target range.
+ * @end: Inclusive end of the target range.
+ *
+ * It is an error to call this function with values of @start and @end that do
+ * not satisfy the condition @start <= @end.
+ */
+static __always_inline bool
+pvr_heap_contains_range(const struct drm_pvr_heap *pvr_heap, u64 start, u64 end)
+{
+	return pvr_heap->base <= start && end < pvr_heap->base + pvr_heap->size;
+}
+
+/**
+ * pvr_find_heap_containing() - Find a heap which contains the specified
+ *                              device-virtual address range.
+ * @pvr_dev: Target PowerVR device.
+ * @start: Start of the target range.
+ * @size: Size of the target range.
+ *
+ * Return:
+ *  * A pointer to a constant instance of struct drm_pvr_heap representing the
+ *    heap containing the entire range specified by @start and @size on
+ *    success, or
+ *  * %NULL if no such heap exists.
+ */
+const struct drm_pvr_heap *
+pvr_find_heap_containing(struct pvr_device *pvr_dev, u64 start, u64 size)
+{
+	u64 end;
+
+	if (check_add_overflow(start, size - 1, &end))
+		return NULL;
+
+	/*
+	 * There are no guarantees about the order of address ranges in
+	 * &pvr_heaps, so iterate over the entire array for a heap whose
+	 * range completely encompasses the given range.
+	 */
+	for (u32 heap_id = 0; heap_id < ARRAY_SIZE(pvr_heaps); heap_id++) {
+		/* Skip heaps that are only present when an associated quirk applies. */
+		if (heap_id == DRM_PVR_HEAP_RGNHDR &&
+		    !PVR_HAS_QUIRK(pvr_dev, 63142)) {
+			continue;
+		}
+
+		if (pvr_heap_contains_range(&pvr_heaps[heap_id], start, end))
+			return &pvr_heaps[heap_id];
+	}
+
+	return NULL;
+}
+
+/**
+ * pvr_vm_find_gem_object() - Look up a buffer object from a given
+ *                            device-virtual address.
+ * @vm_ctx: [IN] Target VM context.
+ * @device_addr: [IN] Virtual device address at the start of the required
+ *               object.
+ * @mapped_offset_out: [OUT] Pointer to location to write offset of the start
+ *                     of the mapped region within the buffer object. May be
+ *                     %NULL if this information is not required.
+ * @mapped_size_out: [OUT] Pointer to location to write size of the mapped
+ *                   region. May be %NULL if this information is not required.
+ *
+ * If successful, a reference will be taken on the buffer object. The caller
+ * must drop the reference with pvr_gem_object_put().
+ *
+ * Return:
+ *  * The PowerVR buffer object mapped at @device_addr if one exists, or
+ *  * %NULL otherwise.
+ */
+struct pvr_gem_object *
+pvr_vm_find_gem_object(struct pvr_vm_context *vm_ctx, u64 device_addr,
+		       u64 *mapped_offset_out, u64 *mapped_size_out)
+{
+	struct pvr_gem_object *pvr_obj;
+	struct drm_gpuva *va;
+
+	mutex_lock(&vm_ctx->lock);
+
+	va = drm_gpuva_find_first(&vm_ctx->gpuva_mgr, device_addr, 1);
+	if (!va)
+		goto err_unlock;
+
+	pvr_obj = gem_to_pvr_gem(va->gem.obj);
+	pvr_gem_object_get(pvr_obj);
+
+	if (mapped_offset_out)
+		*mapped_offset_out = va->gem.offset;
+	if (mapped_size_out)
+		*mapped_size_out = va->va.range;
+
+	mutex_unlock(&vm_ctx->lock);
+
+	return pvr_obj;
+
+err_unlock:
+	mutex_unlock(&vm_ctx->lock);
+
+	return NULL;
+}
+
+/**
+ * pvr_vm_get_fw_mem_context() - Get object representing firmware memory context
+ * @vm_ctx: Target VM context.
+ *
+ * Returns:
+ *  * FW object representing firmware memory context, or
+ *  * %NULL if this VM context does not have a firmware memory context.
+ */
+struct pvr_fw_object *
+pvr_vm_get_fw_mem_context(struct pvr_vm_context *vm_ctx)
+{
+	return vm_ctx->fw_mem_ctx_obj;
+}
diff --git a/drivers/gpu/drm/imagination/pvr_vm.h b/drivers/gpu/drm/imagination/pvr_vm.h
new file mode 100644
index 000000000000..b98bc3981807
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_vm.h
@@ -0,0 +1,60 @@
+/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#ifndef PVR_VM_H
+#define PVR_VM_H
+
+#include "pvr_rogue_mmu_defs.h"
+
+#include <uapi/drm/pvr_drm.h>
+
+#include <linux/types.h>
+
+/* Forward declaration from "pvr_device.h" */
+struct pvr_device;
+struct pvr_file;
+
+/* Forward declaration from "pvr_gem.h" */
+struct pvr_gem_object;
+
+/* Forward declaration from "pvr_vm.c" */
+struct pvr_vm_context;
+
+/* Forward declaration from <uapi/drm/pvr_drm.h> */
+struct drm_pvr_ioctl_dev_query_args;
+
+/* Functions defined in pvr_vm.c */
+
+bool pvr_device_addr_is_valid(u64 device_addr);
+bool pvr_device_addr_and_size_are_valid(u64 device_addr, u64 size);
+
+struct pvr_vm_context *pvr_vm_create_context(struct pvr_device *pvr_dev,
+					     bool is_userspace_context);
+
+int pvr_vm_map(struct pvr_vm_context *vm_ctx,
+	       struct pvr_gem_object *pvr_obj, u64 pvr_obj_offset,
+	       u64 device_addr, u64 size);
+int pvr_vm_unmap(struct pvr_vm_context *vm_ctx, u64 device_addr, u64 size);
+
+dma_addr_t pvr_vm_get_page_table_root_addr(struct pvr_vm_context *vm_ctx);
+
+int pvr_static_data_areas_get(const struct pvr_device *pvr_dev,
+			      struct drm_pvr_ioctl_dev_query_args *args);
+int pvr_heap_info_get(const struct pvr_device *pvr_dev,
+		      struct drm_pvr_ioctl_dev_query_args *args);
+const struct drm_pvr_heap *pvr_find_heap_containing(struct pvr_device *pvr_dev,
+						    u64 addr, u64 size);
+
+struct pvr_gem_object *pvr_vm_find_gem_object(struct pvr_vm_context *vm_ctx,
+					      u64 device_addr,
+					      u64 *mapped_offset_out,
+					      u64 *mapped_size_out);
+
+struct pvr_fw_object *
+pvr_vm_get_fw_mem_context(struct pvr_vm_context *vm_ctx);
+
+struct pvr_vm_context *pvr_vm_context_lookup(struct pvr_file *pvr_file, u32 handle);
+bool pvr_vm_context_put(struct pvr_vm_context *vm_ctx);
+void pvr_destroy_vm_contexts_for_file(struct pvr_file *pvr_file);
+
+#endif /* PVR_VM_H */
-- 
2.41.0


^ permalink raw reply related	[flat|nested] 77+ messages in thread

* [PATCH v5 08/17] drm/imagination: Add GEM and VM related code
@ 2023-08-16  8:25   ` Sarah Walker
  0 siblings, 0 replies; 77+ messages in thread
From: Sarah Walker @ 2023-08-16  8:25 UTC (permalink / raw)
  To: dri-devel
  Cc: frank.binns, donald.robson, boris.brezillon, faith.ekstrand,
	airlied, daniel, maarten.lankhorst, mripard, tzimmermann, afd,
	hns, matthew.brost, christian.koenig, luben.tuikov, dakr,
	linux-kernel, Matt Coster

Add a GEM implementation based on drm_gem_shmem, and support code for the
PowerVR GPU MMU. The GPU VA manager is used for address space management.

Changes since v4:
- Correct sync function in vmap/vunmap function documentation
- Update for upstream GPU VA manager
- Fix missing frees when unmapping drm_gpuva objects
- Always zero GEM BOs on creation

Changes since v3:
- Split MMU and VM code
- Register page table allocations with kmemleak
- Use drm_dev_{enter,exit}

Changes since v2:
- Use GPU VA manager
- Use drm_gem_shmem

Co-developed-by: Matt Coster <matt.coster@imgtec.com>
Signed-off-by: Matt Coster <matt.coster@imgtec.com>
Co-developed-by: Donald Robson <donald.robson@imgtec.com>
Signed-off-by: Donald Robson <donald.robson@imgtec.com>
Signed-off-by: Sarah Walker <sarah.walker@imgtec.com>
---
 drivers/gpu/drm/imagination/Makefile     |    5 +-
 drivers/gpu/drm/imagination/pvr_device.c |   23 +-
 drivers/gpu/drm/imagination/pvr_device.h |   18 +
 drivers/gpu/drm/imagination/pvr_drv.c    |  302 ++-
 drivers/gpu/drm/imagination/pvr_gem.c    |  396 ++++
 drivers/gpu/drm/imagination/pvr_gem.h    |  177 ++
 drivers/gpu/drm/imagination/pvr_mmu.c    | 2487 ++++++++++++++++++++++
 drivers/gpu/drm/imagination/pvr_mmu.h    |  108 +
 drivers/gpu/drm/imagination/pvr_vm.c     |  890 ++++++++
 drivers/gpu/drm/imagination/pvr_vm.h     |   60 +
 10 files changed, 4455 insertions(+), 11 deletions(-)
 create mode 100644 drivers/gpu/drm/imagination/pvr_gem.c
 create mode 100644 drivers/gpu/drm/imagination/pvr_gem.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_mmu.c
 create mode 100644 drivers/gpu/drm/imagination/pvr_mmu.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_vm.c
 create mode 100644 drivers/gpu/drm/imagination/pvr_vm.h
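
For reference only (not part of this patch), the flow the new BO and VM
ioctls enable looks roughly like the sketch below. The args struct and field
names are the ones handled in pvr_drv.c in this patch; the DRM_IOCTL_PVR_*
request macros are assumed to match the UAPI header added earlier in this
series, and drmIoctl() is libdrm's ioctl wrapper.

#include <drm/pvr_drm.h>
#include <xf86drm.h>

static int pvr_example_map_one_bo(int fd, __u64 dev_addr, __u64 size)
{
	struct drm_pvr_ioctl_create_vm_context_args vm_args = { 0 };
	struct drm_pvr_ioctl_create_bo_args bo_args = {
		.size = size,
		/* Allows a later CPU mmap of the object. */
		.flags = DRM_PVR_BO_CPU_ALLOW_USERSPACE_ACCESS,
	};
	struct drm_pvr_ioctl_vm_map_args map_args;
	int err;

	/* One userspace-managed VM context per handle. */
	err = drmIoctl(fd, DRM_IOCTL_PVR_CREATE_VM_CONTEXT, &vm_args);
	if (err)
		return err;

	/* BOs are zeroed on allocation; size is written back page-aligned. */
	err = drmIoctl(fd, DRM_IOCTL_PVR_CREATE_BO, &bo_args);
	if (err)
		return err;

	/*
	 * dev_addr must be device-page aligned and fall entirely within one
	 * of the heaps reported by DRM_PVR_DEV_QUERY_HEAP_INFO_GET. No
	 * mapping handle is returned; dev_addr itself acts as the handle for
	 * a later unmap.
	 */
	map_args = (struct drm_pvr_ioctl_vm_map_args){
		.vm_context_handle = vm_args.handle,
		.handle = bo_args.handle,
		.offset = 0,
		.device_addr = dev_addr,
		.size = bo_args.size,
	};
	return drmIoctl(fd, DRM_IOCTL_PVR_VM_MAP, &map_args);
}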

diff --git a/drivers/gpu/drm/imagination/Makefile b/drivers/gpu/drm/imagination/Makefile
index 9e144ff2742b..8fcabc1bea36 100644
--- a/drivers/gpu/drm/imagination/Makefile
+++ b/drivers/gpu/drm/imagination/Makefile
@@ -7,6 +7,9 @@ powervr-y := \
 	pvr_device.o \
 	pvr_device_info.o \
 	pvr_drv.o \
-	pvr_fw.o
+	pvr_fw.o \
+	pvr_gem.o \
+	pvr_mmu.o \
+	pvr_vm.o
 
 obj-$(CONFIG_DRM_POWERVR) += powervr.o
diff --git a/drivers/gpu/drm/imagination/pvr_device.c b/drivers/gpu/drm/imagination/pvr_device.c
index b1fae182c4f6..ef8f7a2ff1a9 100644
--- a/drivers/gpu/drm/imagination/pvr_device.c
+++ b/drivers/gpu/drm/imagination/pvr_device.c
@@ -6,6 +6,7 @@
 
 #include "pvr_fw.h"
 #include "pvr_rogue_cr_defs.h"
+#include "pvr_vm.h"
 
 #include <drm/drm_print.h>
 
@@ -312,7 +313,26 @@ pvr_device_gpu_init(struct pvr_device *pvr_dev)
 	else
 		return -EINVAL;
 
-	return pvr_set_dma_info(pvr_dev);
+	err = pvr_set_dma_info(pvr_dev);
+	if (err)
+		return err;
+
+	pvr_dev->kernel_vm_ctx = pvr_vm_create_context(pvr_dev, false);
+	if (IS_ERR(pvr_dev->kernel_vm_ctx))
+		return PTR_ERR(pvr_dev->kernel_vm_ctx);
+
+	return 0;
+}
+
+/**
+ * pvr_device_gpu_fini() - GPU-specific deinitialization for a PowerVR device
+ * @pvr_dev: Target PowerVR device.
+ */
+static void
+pvr_device_gpu_fini(struct pvr_device *pvr_dev)
+{
+	WARN_ON(!pvr_vm_context_put(pvr_dev->kernel_vm_ctx));
+	pvr_dev->kernel_vm_ctx = NULL;
 }
 
 /**
@@ -364,6 +384,7 @@ pvr_device_fini(struct pvr_device *pvr_dev)
 	 * Deinitialization stages are performed in reverse order compared to
 	 * the initialization stages in pvr_device_init().
 	 */
+	pvr_device_gpu_fini(pvr_dev);
 }
 
 bool
diff --git a/drivers/gpu/drm/imagination/pvr_device.h b/drivers/gpu/drm/imagination/pvr_device.h
index 65ece87e2405..990363e433d7 100644
--- a/drivers/gpu/drm/imagination/pvr_device.h
+++ b/drivers/gpu/drm/imagination/pvr_device.h
@@ -123,6 +123,16 @@ struct pvr_device {
 	 */
 	struct clk *mem_clk;
 
+	/**
+	 * @kernel_vm_ctx: Virtual memory context used for kernel mappings.
+	 *
+	 * This is used for mappings in the firmware address region when a META firmware processor
+	 * is in use.
+	 *
+	 * When a MIPS firmware processor is in use, this will be %NULL.
+	 */
+	struct pvr_vm_context *kernel_vm_ctx;
+
 	/** @fw_dev: Firmware related data. */
 	struct pvr_fw_device fw_dev;
 };
@@ -145,6 +155,14 @@ struct pvr_file {
 	 *           to_pvr_device().
 	 */
 	struct pvr_device *pvr_dev;
+
+	/**
+	 * @vm_ctx_handles: Array of VM contexts belonging to this file. Array
+	 * members are of type "struct pvr_vm_context *".
+	 *
+	 * This array is used to allocate handles returned to userspace.
+	 */
+	struct xarray vm_ctx_handles;
 };
 
 /**
diff --git a/drivers/gpu/drm/imagination/pvr_drv.c b/drivers/gpu/drm/imagination/pvr_drv.c
index eb3018f94f7c..0d51177b506c 100644
--- a/drivers/gpu/drm/imagination/pvr_drv.c
+++ b/drivers/gpu/drm/imagination/pvr_drv.c
@@ -3,9 +3,11 @@
 
 #include "pvr_device.h"
 #include "pvr_drv.h"
+#include "pvr_gem.h"
 #include "pvr_rogue_defs.h"
 #include "pvr_rogue_fwif_client.h"
 #include "pvr_rogue_fwif_shared.h"
+#include "pvr_vm.h"
 
 #include <uapi/drm/pvr_drm.h>
 
@@ -61,7 +63,86 @@ static int
 pvr_ioctl_create_bo(struct drm_device *drm_dev, void *raw_args,
 		    struct drm_file *file)
 {
-	return -ENOTTY;
+	struct drm_pvr_ioctl_create_bo_args *args = raw_args;
+	struct pvr_device *pvr_dev = to_pvr_device(drm_dev);
+	struct pvr_file *pvr_file = to_pvr_file(file);
+
+	struct pvr_gem_object *pvr_obj;
+	size_t sanitized_size;
+	size_t real_size;
+
+	int idx;
+	int err;
+
+	if (!drm_dev_enter(drm_dev, &idx))
+		return -EIO;
+
+	/* All padding fields must be zeroed. */
+	if (args->_padding_c != 0) {
+		err = -EINVAL;
+		goto err_drm_dev_exit;
+	}
+
+	/*
+	 * On 64-bit platforms (our primary target), size_t is a u64. However,
+	 * on other architectures we have to check for overflow when casting
+	 * down to size_t from u64.
+	 *
+	 * We also disallow zero-sized allocations, and reserved (kernel-only)
+	 * flags.
+	 */
+	if (args->size > SIZE_MAX || args->size == 0 ||
+	    args->flags & PVR_BO_RESERVED_MASK) {
+		err = -EINVAL;
+		goto err_drm_dev_exit;
+	}
+
+	sanitized_size = (size_t)args->size;
+
+	/*
+	 * Create a buffer object and transfer ownership to a userspace-
+	 * accessible handle.
+	 */
+	pvr_obj = pvr_gem_object_create(pvr_dev, sanitized_size, args->flags);
+	if (IS_ERR(pvr_obj)) {
+		err = PTR_ERR(pvr_obj);
+		goto err_drm_dev_exit;
+	}
+
+	/*
+	 * Store the actual size of the created buffer object. We can't fetch
+	 * this after this point because we will no longer have a reference to
+	 * &pvr_obj.
+	 */
+	real_size = pvr_gem_object_size(pvr_obj);
+
+	/* This function will not modify &args->handle unless it succeeds. */
+	err = pvr_gem_object_into_handle(pvr_obj, pvr_file, &args->handle);
+	if (err)
+		goto err_destroy_obj;
+
+	/*
+	 * Now write the real size back to the args struct, after no further
+	 * errors can occur.
+	 */
+	args->size = real_size;
+
+	drm_dev_exit(idx);
+
+	return 0;
+
+err_destroy_obj:
+	/*
+	 * GEM objects are refcounted, so there is no explicit destructor
+	 * function. Instead, we release the singular reference we currently
+	 * hold on the object and let GEM take care of the rest.
+	 */
+	pvr_gem_object_put(pvr_obj);
+
+err_drm_dev_exit:
+	drm_dev_exit(idx);
+
+	return err;
 }
 
 /**
@@ -88,7 +169,61 @@ static int
 pvr_ioctl_get_bo_mmap_offset(struct drm_device *drm_dev, void *raw_args,
 			     struct drm_file *file)
 {
-	return -ENOTTY;
+	struct drm_pvr_ioctl_get_bo_mmap_offset_args *args = raw_args;
+	struct pvr_file *pvr_file = to_pvr_file(file);
+	struct pvr_gem_object *pvr_obj;
+	struct drm_gem_object *gem_obj;
+	int idx;
+	int ret;
+
+	if (!drm_dev_enter(drm_dev, &idx))
+		return -EIO;
+
+	/* All padding fields must be zeroed. */
+	if (args->_padding_4 != 0) {
+		ret = -EINVAL;
+		goto err_drm_dev_exit;
+	}
+
+	/*
+	 * Obtain a kernel reference to the buffer object. This reference is
+	 * counted and must be manually dropped before returning. If a buffer
+	 * object cannot be found for the specified handle, return -%ENOENT (No
+	 * such file or directory).
+	 */
+	pvr_obj = pvr_gem_object_from_handle(pvr_file, args->handle);
+	if (!pvr_obj) {
+		ret = -ENOENT;
+		goto err_drm_dev_exit;
+	}
+
+	gem_obj = gem_from_pvr_gem(pvr_obj);
+
+	/*
+	 * Allocate a fake offset which can be used in userspace calls to mmap
+	 * on the DRM device file. If this fails, return the error code. This
+	 * operation is idempotent.
+	 */
+	ret = drm_gem_create_mmap_offset(gem_obj);
+	if (ret != 0) {
+		/* Drop our reference to the buffer object. */
+		drm_gem_object_put(gem_obj);
+		goto err_drm_dev_exit;
+	}
+
+	/*
+	 * Read out the fake offset allocated by the earlier call to
+	 * drm_gem_create_mmap_offset.
+	 */
+	args->offset = drm_vma_node_offset_addr(&gem_obj->vma_node);
+
+	/* Drop our reference to the buffer object. */
+	pvr_gem_object_put(pvr_obj);
+
+err_drm_dev_exit:
+	drm_dev_exit(idx);
+
+	return ret;
 }
 
 static __always_inline u64
@@ -517,10 +652,12 @@ pvr_ioctl_dev_query(struct drm_device *drm_dev, void *raw_args,
 		break;
 
 	case DRM_PVR_DEV_QUERY_HEAP_INFO_GET:
-		return -EINVAL;
+		ret = pvr_heap_info_get(pvr_dev, args);
+		break;
 
 	case DRM_PVR_DEV_QUERY_STATIC_DATA_AREAS_GET:
-		return -EINVAL;
+		ret = pvr_static_data_areas_get(pvr_dev, args);
+		break;
 	}
 
 	drm_dev_exit(idx);
@@ -667,7 +804,46 @@ static int
 pvr_ioctl_create_vm_context(struct drm_device *drm_dev, void *raw_args,
 			    struct drm_file *file)
 {
-	return -ENOTTY;
+	struct drm_pvr_ioctl_create_vm_context_args *args = raw_args;
+	struct pvr_file *pvr_file = to_pvr_file(file);
+	struct pvr_vm_context *vm_ctx;
+	int idx;
+	int err;
+
+	if (!drm_dev_enter(drm_dev, &idx))
+		return -EIO;
+
+	if (args->_padding_4) {
+		err = -EINVAL;
+		goto err_drm_dev_exit;
+	}
+
+	vm_ctx = pvr_vm_create_context(pvr_file->pvr_dev, true);
+	if (IS_ERR(vm_ctx)) {
+		err = PTR_ERR(vm_ctx);
+		goto err_drm_dev_exit;
+	}
+
+	/* Allocate object handle for userspace. */
+	err = xa_alloc(&pvr_file->vm_ctx_handles,
+		       &args->handle,
+		       vm_ctx,
+		       xa_limit_32b,
+		       GFP_KERNEL);
+	if (err < 0)
+		goto err_cleanup;
+
+	drm_dev_exit(idx);
+
+	return 0;
+
+err_cleanup:
+	pvr_vm_context_put(vm_ctx);
+
+err_drm_dev_exit:
+	drm_dev_exit(idx);
+
+	return err;
 }
 
 /**
@@ -687,7 +863,19 @@ static int
 pvr_ioctl_destroy_vm_context(struct drm_device *drm_dev, void *raw_args,
 			     struct drm_file *file)
 {
-	return -ENOTTY;
+	struct drm_pvr_ioctl_destroy_vm_context_args *args = raw_args;
+	struct pvr_file *pvr_file = to_pvr_file(file);
+	struct pvr_vm_context *vm_ctx;
+
+	if (args->_padding_4)
+		return -EINVAL;
+
+	vm_ctx = xa_erase(&pvr_file->vm_ctx_handles, args->handle);
+	if (!vm_ctx)
+		return -EINVAL;
+
+	pvr_vm_context_put(vm_ctx);
+	return 0;
 }
 
 /**
@@ -717,7 +905,79 @@ static int
 pvr_ioctl_vm_map(struct drm_device *drm_dev, void *raw_args,
 		 struct drm_file *file)
 {
-	return -ENOTTY;
+	struct pvr_device *pvr_dev = to_pvr_device(drm_dev);
+	struct drm_pvr_ioctl_vm_map_args *args = raw_args;
+	struct pvr_file *pvr_file = to_pvr_file(file);
+	struct pvr_vm_context *vm_ctx;
+
+	struct pvr_gem_object *pvr_obj;
+	size_t pvr_obj_size;
+
+	u64 offset_plus_size;
+	int idx;
+	int err;
+
+	if (!drm_dev_enter(drm_dev, &idx))
+		return -EIO;
+
+	/* Initial validation of args. */
+	if (args->_padding_14) {
+		err = -EINVAL;
+		goto err_drm_dev_exit;
+	}
+
+	if (args->flags != 0 ||
+	    check_add_overflow(args->offset, args->size, &offset_plus_size) ||
+	    !pvr_find_heap_containing(pvr_dev, args->device_addr, args->size)) {
+		err = -EINVAL;
+		goto err_drm_dev_exit;
+	}
+
+	vm_ctx = pvr_vm_context_lookup(pvr_file, args->vm_context_handle);
+	if (!vm_ctx) {
+		err = -EINVAL;
+		goto err_drm_dev_exit;
+	}
+
+	pvr_obj = pvr_gem_object_from_handle(pvr_file, args->handle);
+	if (!pvr_obj) {
+		err = -ENOENT;
+		goto err_put_vm_context;
+	}
+
+	pvr_obj_size = pvr_gem_object_size(pvr_obj);
+
+	/*
+	 * Validate offset and size args. The alignment of these will be
+	 * checked when mapping; for now just check that they're within valid
+	 * bounds
+	 */
+	if (args->offset >= pvr_obj_size || offset_plus_size > pvr_obj_size) {
+		err = -EINVAL;
+		goto err_put_pvr_object;
+	}
+
+	err = pvr_vm_map(vm_ctx, pvr_obj, args->offset,
+			 args->device_addr, args->size);
+	if (err)
+		goto err_put_pvr_object;
+
+	/*
+	 * In order to set up the mapping, we needed a reference to &pvr_obj.
+	 * However, pvr_vm_map() obtains and stores its own reference, so we
+	 * must release ours before returning.
+	 */
+
+err_put_pvr_object:
+	pvr_gem_object_put(pvr_obj);
+
+err_put_vm_context:
+	pvr_vm_context_put(vm_ctx);
+
+err_drm_dev_exit:
+	drm_dev_exit(idx);
+
+	return err;
 }
 
 /**
@@ -740,7 +1000,24 @@ static int
 pvr_ioctl_vm_unmap(struct drm_device *drm_dev, void *raw_args,
 		   struct drm_file *file)
 {
-	return -ENOTTY;
+	struct drm_pvr_ioctl_vm_unmap_args *args = raw_args;
+	struct pvr_file *pvr_file = to_pvr_file(file);
+	struct pvr_vm_context *vm_ctx;
+	int err;
+
+	/* Initial validation of args. */
+	if (args->_padding_4)
+		return -EINVAL;
+
+	vm_ctx = pvr_vm_context_lookup(pvr_file, args->vm_context_handle);
+	if (!vm_ctx)
+		return -EINVAL;
+
+	err = pvr_vm_unmap(vm_ctx, args->device_addr, args->size);
+
+	pvr_vm_context_put(vm_ctx);
+
+	return err;
 }
 
 /*
@@ -931,6 +1208,8 @@ pvr_drm_driver_open(struct drm_device *drm_dev, struct drm_file *file)
 	 */
 	pvr_file->pvr_dev = pvr_dev;
 
+	xa_init_flags(&pvr_file->vm_ctx_handles, XA_FLAGS_ALLOC1);
+
 	/*
 	 * Store reference to powervr-specific file private data in DRM file
 	 * private data.
@@ -956,6 +1235,9 @@ pvr_drm_driver_postclose(__always_unused struct drm_device *drm_dev,
 {
 	struct pvr_file *pvr_file = to_pvr_file(file);
 
+	/* Drop references on any remaining objects. */
+	pvr_destroy_vm_contexts_for_file(pvr_file);
+
 	kfree(pvr_file);
 	file->driver_priv = NULL;
 }
@@ -963,7 +1245,7 @@ pvr_drm_driver_postclose(__always_unused struct drm_device *drm_dev,
 DEFINE_DRM_GEM_FOPS(pvr_drm_driver_fops);
 
 static struct drm_driver pvr_drm_driver = {
-	.driver_features = DRIVER_RENDER,
+	.driver_features = DRIVER_GEM | DRIVER_GEM_GPUVA | DRIVER_RENDER,
 	.open = pvr_drm_driver_open,
 	.postclose = pvr_drm_driver_postclose,
 	.ioctls = pvr_drm_driver_ioctls,
@@ -977,6 +1259,8 @@ static struct drm_driver pvr_drm_driver = {
 	.minor = PVR_DRIVER_MINOR,
 	.patchlevel = PVR_DRIVER_PATCHLEVEL,
 
+	.gem_prime_import_sg_table = drm_gem_shmem_prime_import_sg_table,
+	.gem_create_object = pvr_gem_create_object,
 };
 
 static int
diff --git a/drivers/gpu/drm/imagination/pvr_gem.c b/drivers/gpu/drm/imagination/pvr_gem.c
new file mode 100644
index 000000000000..8a07bb4c38ac
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_gem.c
@@ -0,0 +1,396 @@
+// SPDX-License-Identifier: GPL-2.0 OR MIT
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#include "pvr_device.h"
+#include "pvr_gem.h"
+#include "pvr_vm.h"
+
+#include <drm/drm_gem.h>
+#include <drm/drm_prime.h>
+
+#include <linux/compiler.h>
+#include <linux/compiler_attributes.h>
+#include <linux/dma-buf.h>
+#include <linux/dma-direction.h>
+#include <linux/dma-mapping.h>
+#include <linux/err.h>
+#include <linux/gfp.h>
+#include <linux/iosys-map.h>
+#include <linux/log2.h>
+#include <linux/mutex.h>
+#include <linux/pagemap.h>
+#include <linux/refcount.h>
+#include <linux/scatterlist.h>
+
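+/*
+ * CPU mappings are only permitted for objects created with
+ * DRM_PVR_BO_CPU_ALLOW_USERSPACE_ACCESS; everything else is handled by the
+ * generic shmem helper.
+ */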
+static int pvr_gem_mmap(struct drm_gem_object *gem_obj, struct vm_area_struct *vma)
+{
+	struct pvr_gem_object *pvr_obj = gem_to_pvr_gem(gem_obj);
+	struct drm_gem_shmem_object *shmem_obj = shmem_gem_from_pvr_gem(pvr_obj);
+
+	if (!(pvr_obj->flags & DRM_PVR_BO_CPU_ALLOW_USERSPACE_ACCESS))
+		return -EINVAL;
+
+	return drm_gem_shmem_mmap(shmem_obj, vma);
+}
+
+static const struct drm_gem_object_funcs pvr_gem_object_funcs = {
+	.free = drm_gem_shmem_object_free,
+	.print_info = drm_gem_shmem_object_print_info,
+	.pin = drm_gem_shmem_object_pin,
+	.unpin = drm_gem_shmem_object_unpin,
+	.get_sg_table = drm_gem_shmem_object_get_sg_table,
+	.vmap = drm_gem_shmem_object_vmap,
+	.vunmap = drm_gem_shmem_object_vunmap,
+	.mmap = pvr_gem_mmap,
+	.vm_ops = &drm_gem_shmem_vm_ops,
+};
+
+/**
+ * pvr_gem_object_flags_validate() - Verify that a collection of PowerVR GEM
+ * mapping and/or creation flags form a valid combination.
+ * @flags: PowerVR GEM mapping/creation flags to validate.
+ *
+ * This function explicitly allows kernel-only flags. All ioctl entrypoints
+ * should do their own validation as well as relying on this function.
+ *
+ * Return:
+ *  * %true if @flags contains valid mapping and/or creation flags, or
+ *  * %false otherwise.
+ */
+static bool
+pvr_gem_object_flags_validate(u64 flags)
+{
+	static const u64 invalid_combinations[] = {
+		/*
+		 * Memory flagged as PM/FW-protected cannot be mapped to
+		 * userspace. To make this explicit, we require that the two
+		 * flags allowing each of these respective features are never
+		 * specified together.
+		 */
+		(DRM_PVR_BO_DEVICE_PM_FW_PROTECT |
+		 DRM_PVR_BO_CPU_ALLOW_USERSPACE_ACCESS),
+	};
+
+	int i;
+
+	/*
+	 * Check for bits set in undefined regions. Reserved regions refer to
+	 * options that can only be set by the kernel. These are explicitly
+	 * allowed in most cases, and must be checked specifically in IOCTL
+	 * callback code.
+	 */
+	if ((flags & PVR_BO_UNDEFINED_MASK) != 0)
+		return false;
+
+	/*
+	 * Check for all combinations of flags marked as invalid in the array
+	 * above.
+	 */
+	for (i = 0; i < ARRAY_SIZE(invalid_combinations); ++i) {
+		u64 combo = invalid_combinations[i];
+
+		if ((flags & combo) == combo)
+			return false;
+	}
+
+	return true;
+}
+
+/**
+ * pvr_gem_object_into_handle() - Convert a reference to an object into a
+ * userspace-accessible handle.
+ * @pvr_obj: [IN] Target PowerVR-specific object.
+ * @pvr_file: [IN] File to associate the handle with.
+ * @handle: [OUT] Pointer to store the created handle in. Remains unmodified if
+ * an error is encountered.
+ *
+ * If an error is encountered, ownership of @pvr_obj will not have been
+ * transferred. If this function succeeds, however, further use of @pvr_obj is
+ * considered undefined behaviour unless another reference to it is explicitly
+ * held.
+ *
+ * Return:
+ *  * 0 on success, or
+ *  * Any error encountered while attempting to allocate a handle on @pvr_file.
+ */
+int
+pvr_gem_object_into_handle(struct pvr_gem_object *pvr_obj,
+			   struct pvr_file *pvr_file, u32 *handle)
+{
+	struct drm_gem_object *gem_obj = gem_from_pvr_gem(pvr_obj);
+	struct drm_file *file = from_pvr_file(pvr_file);
+
+	u32 new_handle;
+	int err;
+
+	err = drm_gem_handle_create(file, gem_obj, &new_handle);
+	if (err)
+		return err;
+
+	/*
+	 * Release our reference to @pvr_obj, effectively transferring
+	 * ownership to the handle.
+	 */
+	pvr_gem_object_put(pvr_obj);
+
+	/*
+	 * Do not store the new handle in @handle until no more errors can
+	 * occur.
+	 */
+	*handle = new_handle;
+
+	return 0;
+}
+
+/**
+ * pvr_gem_object_from_handle() - Obtain a reference to an object from a
+ * userspace handle.
+ * @pvr_file: PowerVR-specific file to which @handle is associated.
+ * @handle: Userspace handle referencing the target object.
+ *
+ * On return, @handle always maintains its reference to the requested object
+ * (if it had one in the first place). If this function succeeds, the returned
+ * object will hold an additional reference. When the caller is finished with
+ * the returned object, they should call pvr_gem_object_put() on it to release
+ * this reference.
+ *
+ * Return:
+ *  * A pointer to the requested PowerVR-specific object on success, or
+ *  * %NULL otherwise.
+ */
+struct pvr_gem_object *
+pvr_gem_object_from_handle(struct pvr_file *pvr_file, u32 handle)
+{
+	struct drm_file *file = from_pvr_file(pvr_file);
+	struct drm_gem_object *gem_obj;
+
+	gem_obj = drm_gem_object_lookup(file, handle);
+	if (!gem_obj)
+		return NULL;
+
+	return gem_to_pvr_gem(gem_obj);
+}
+
+/**
+ * pvr_gem_object_vmap() - Map a PowerVR GEM object into CPU virtual address
+ * space.
+ * @pvr_obj: Target PowerVR GEM object.
+ *
+ * Once the caller is finished with the CPU mapping, they must call
+ * pvr_gem_object_vunmap() on @pvr_obj.
+ *
+ * If @pvr_obj is CPU-cached, dma_sync_sgtable_for_cpu() is called to make
+ * sure the CPU mapping is consistent.
+ *
+ * Return:
+ *  * A pointer to the CPU mapping on success,
+ *  * -%ENOMEM if the mapping fails, or
+ *  * Any error encountered while attempting to acquire a reference to the
+ *    backing pages for @pvr_obj.
+ */
+void *
+pvr_gem_object_vmap(struct pvr_gem_object *pvr_obj)
+{
+	struct drm_gem_shmem_object *shmem_obj = shmem_gem_from_pvr_gem(pvr_obj);
+	struct drm_gem_object *obj = gem_from_pvr_gem(pvr_obj);
+	struct iosys_map map;
+	int err;
+
+	dma_resv_lock(obj->resv, NULL);
+
+	err = drm_gem_shmem_vmap(shmem_obj, &map);
+	if (err)
+		goto err_unlock;
+
+	if (pvr_obj->flags & PVR_BO_CPU_CACHED) {
+		struct device *dev = shmem_obj->base.dev->dev;
+
+		/* If shmem_obj->sgt is NULL, that means the buffer hasn't been mapped
+		 * in GPU space yet.
+		 */
+		if (shmem_obj->sgt)
+			dma_sync_sgtable_for_cpu(dev, shmem_obj->sgt, DMA_BIDIRECTIONAL);
+	}
+
+	dma_resv_unlock(obj->resv);
+
+	return map.vaddr;
+
+err_unlock:
+	dma_resv_unlock(obj->resv);
+
+	return ERR_PTR(err);
+}
+
+/**
+ * pvr_gem_object_vunmap() - Unmap a PowerVR memory object from CPU virtual
+ * address space.
+ * @pvr_obj: Target PowerVR GEM object.
+ *
+ * If @pvr_obj is CPU-cached, dma_sync_sgtable_for_device() is called to make
+ * sure the GPU mapping is consistent.
+ */
+void
+pvr_gem_object_vunmap(struct pvr_gem_object *pvr_obj)
+{
+	struct drm_gem_shmem_object *shmem_obj = shmem_gem_from_pvr_gem(pvr_obj);
+	struct iosys_map map = IOSYS_MAP_INIT_VADDR(shmem_obj->vaddr);
+	struct drm_gem_object *obj = gem_from_pvr_gem(pvr_obj);
+
+	if (WARN_ON(!map.vaddr))
+		return;
+
+	dma_resv_lock(obj->resv, NULL);
+
+	if (pvr_obj->flags & PVR_BO_CPU_CACHED) {
+		struct device *dev = shmem_obj->base.dev->dev;
+
+		/* If shmem_obj->sgt is NULL, that means the buffer hasn't been mapped
+		 * in GPU space yet.
+		 */
+		if (shmem_obj->sgt)
+			dma_sync_sgtable_for_device(dev, shmem_obj->sgt, DMA_BIDIRECTIONAL);
+	}
+
+	drm_gem_shmem_vunmap(shmem_obj, &map);
+
+	dma_resv_unlock(obj->resv);
+}
+
+/**
+ * pvr_gem_object_zero() - Zeroes the physical memory behind an object.
+ * @pvr_obj: Target PowerVR GEM object.
+ *
+ * Return:
+ *  * 0 on success, or
+ *  * Any error encountered while attempting to map @pvr_obj to the CPU (see
+ *    pvr_gem_object_vmap()).
+ */
+static int
+pvr_gem_object_zero(struct pvr_gem_object *pvr_obj)
+{
+	void *cpu_ptr;
+
+	cpu_ptr = pvr_gem_object_vmap(pvr_obj);
+	if (IS_ERR(cpu_ptr))
+		return PTR_ERR(cpu_ptr);
+
+	memset(cpu_ptr, 0, pvr_gem_object_size(pvr_obj));
+
+	/* Make sure the zeroing is done before vunmapping the object. */
+	wmb();
+
+	pvr_gem_object_vunmap(pvr_obj);
+
+	return 0;
+}
+
+/**
+ * pvr_gem_create_object() - Allocate and pre-initialize a pvr_gem_object
+ * @drm_dev: DRM device creating this object.
+ * @size: Size of the object to allocate in bytes.
+ *
+ * Return:
+ *  * The new pre-initialized GEM object on success,
+ *  * -%ENOMEM if the allocation failed.
+ */
+struct drm_gem_object *pvr_gem_create_object(struct drm_device *drm_dev, size_t size)
+{
+	struct drm_gem_object *gem_obj;
+	struct pvr_gem_object *pvr_obj;
+
+	pvr_obj = kzalloc(sizeof(*pvr_obj), GFP_KERNEL);
+	if (!pvr_obj)
+		return ERR_PTR(-ENOMEM);
+
+	gem_obj = gem_from_pvr_gem(pvr_obj);
+	gem_obj->funcs = &pvr_gem_object_funcs;
+	return gem_obj;
+}
+
+/**
+ * pvr_gem_object_create() - Creates a PowerVR-specific buffer object.
+ * @pvr_dev: Target PowerVR device.
+ * @size: Size of the object to allocate in bytes. Must be greater than zero.
+ * Any value which is not an exact multiple of the system page size will be
+ * rounded up to the next such multiple.
+ * @flags: Options which affect both this operation and future mapping
+ * operations performed on the returned object. Must be a combination of
+ * DRM_PVR_BO_* and/or PVR_BO_* flags.
+ *
+ * The created object may be larger than @size, but can never be smaller. To
+ * get the exact size, call pvr_gem_object_size() on the returned pointer.
+ *
+ * Return:
+ *  * The newly-minted PowerVR-specific buffer object on success,
+ *  * -%EINVAL if @size is zero or @flags is not valid,
+ *  * -%ENOMEM if sufficient physical memory cannot be allocated, or
+ *  * Any other error returned by drm_gem_shmem_create().
+ */
+struct pvr_gem_object *
+pvr_gem_object_create(struct pvr_device *pvr_dev, size_t size, u64 flags)
+{
+	struct drm_gem_shmem_object *shmem_obj;
+	struct pvr_gem_object *pvr_obj;
+
+	/* Verify @size and @flags before continuing. */
+	if (size == 0 || !pvr_gem_object_flags_validate(flags))
+		return ERR_PTR(-EINVAL);
+
+	shmem_obj = drm_gem_shmem_create(from_pvr_device(pvr_dev), size);
+	if (IS_ERR(shmem_obj))
+		return ERR_CAST(shmem_obj);
+
+	shmem_obj->pages_mark_dirty_on_put = true;
+	shmem_obj->map_wc = !(flags & PVR_BO_CPU_CACHED);
+	pvr_obj = shmem_gem_to_pvr_gem(shmem_obj);
+	pvr_obj->flags = flags;
+
+	/*
+	 * Do this last because pvr_gem_object_zero() requires a fully
+	 * configured instance of struct pvr_gem_object.
+	 */
+	pvr_gem_object_zero(pvr_obj);
+
+	return pvr_obj;
+}
+
+/**
+ * pvr_gem_get_dma_addr() - Get DMA address for given offset in object
+ * @pvr_obj: Pointer to object to lookup address in.
+ * @offset: Offset within object to lookup address at.
+ * @dma_addr_out: Pointer to location to store DMA address.
+ *
+ * Returns:
+ *  * 0 on success, or
+ *  * -%EINVAL if object is not currently backed, or if @offset is out of valid
+ *    range for this object.
+ */
+int
+pvr_gem_get_dma_addr(struct pvr_gem_object *pvr_obj, u32 offset,
+		     dma_addr_t *dma_addr_out)
+{
+	struct drm_gem_shmem_object *shmem_obj = shmem_gem_from_pvr_gem(pvr_obj);
+	struct sg_table *sgt;
+	u32 accumulated_offset = 0;
+	struct scatterlist *sgl;
+	unsigned int sgt_idx;
+
+	sgt = drm_gem_shmem_get_pages_sgt(shmem_obj);
+	if (IS_ERR(sgt))
+		return PTR_ERR(sgt);
+
+	for_each_sgtable_dma_sg(sgt, sgl, sgt_idx) {
+		u32 new_offset = accumulated_offset + sg_dma_len(sgl);
+
+		if (offset >= accumulated_offset && offset < new_offset) {
+			*dma_addr_out = sg_dma_address(sgl) +
+					(offset - accumulated_offset);
+			return 0;
+		}
+
+		accumulated_offset = new_offset;
+	}
+
+	return -EINVAL;
+}
diff --git a/drivers/gpu/drm/imagination/pvr_gem.h b/drivers/gpu/drm/imagination/pvr_gem.h
new file mode 100644
index 000000000000..5b82cd67d83c
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_gem.h
@@ -0,0 +1,177 @@
+/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#ifndef PVR_GEM_H
+#define PVR_GEM_H
+
+#include "pvr_rogue_heap_config.h"
+#include "pvr_rogue_meta.h"
+
+#include <uapi/drm/pvr_drm.h>
+
+#include <drm/drm_gem.h>
+#include <drm/drm_gem_shmem_helper.h>
+#include <drm/drm_mm.h>
+
+#include <linux/bitfield.h>
+#include <linux/bits.h>
+#include <linux/const.h>
+#include <linux/compiler_attributes.h>
+#include <linux/kernel.h>
+#include <linux/mutex.h>
+#include <linux/refcount.h>
+#include <linux/scatterlist.h>
+#include <linux/sizes.h>
+#include <linux/types.h>
+
+/* Forward declaration from "pvr_device.h". */
+struct pvr_device;
+struct pvr_file;
+
+/**
+ * DOC: Flags for DRM_IOCTL_PVR_CREATE_BO (kernel-only)
+ *
+ * Kernel-only values allowed in &pvr_gem_object->flags. The majority of options
+ * for this field are specified in the UAPI header "pvr_drm.h" with a
+ * DRM_PVR_BO_ prefix. To distinguish these internal options (which must exist
+ * in ranges marked as "reserved" in the UAPI header), we drop the DRM prefix.
+ * The public options should be used directly, DRM prefix and all.
+ *
+ * To avoid potentially confusing gaps in the UAPI options, these kernel-only
+ * options are specified "in reverse", starting at bit 63.
+ *
+ * We use "reserved" to refer to bits defined here and not exposed in the UAPI.
+ * Bits not defined anywhere are "undefined".
+ *
+ * Creation options
+ *    These use the prefix PVR_BO_CREATE_.
+ *
+ *    *There are currently no kernel-only flags in this group.*
+ *
+ * Device mapping options
+ *    These use the prefix PVR_BO_DEVICE_.
+ *
+ *    *There are currently no kernel-only flags in this group.*
+ *
+ * CPU mapping options
+ *    These use the prefix PVR_BO_CPU_.
+ *
+ *    :CACHED: By default, all GEM objects are mapped write-combined on the
+ *       CPU. Set this flag to override this behaviour and map the object
+ *       cached.
+ */
+#define PVR_BO_CPU_CACHED BIT_ULL(63)
+
+#define PVR_BO_FW_NO_CLEAR_ON_RESET BIT_ULL(62)
+
+/* Bits 61..3 are undefined. */
+/* Bits 2..0 are defined in the UAPI. */
+
+/* Other utilities. */
+#define PVR_BO_UNDEFINED_MASK GENMASK_ULL(61, 3)
+#define PVR_BO_RESERVED_MASK (PVR_BO_UNDEFINED_MASK | GENMASK_ULL(63, 63))
+
+/*
+ * All firmware-mapped memory uses (mostly) the same flags. Specifically,
+ * firmware-mapped memory should be:
+ *  * Read/write on the device,
+ *  * Read/write on the CPU, and
+ *  * Write-combined on the CPU.
+ *
+ * The only variation is in caching on the device.
+ */
+#define PVR_BO_FW_FLAGS_DEVICE_CACHED (ULL(0))
+#define PVR_BO_FW_FLAGS_DEVICE_UNCACHED DRM_PVR_BO_DEVICE_BYPASS_CACHE
+
+/**
+ * struct pvr_gem_object - powervr-specific wrapper for &struct drm_gem_object
+ */
+struct pvr_gem_object {
+	/**
+	 * @base: The underlying &struct drm_gem_shmem_object.
+	 *
+	 * Do not access this member directly; instead, call
+	 * shmem_gem_from_pvr_gem().
+	 */
+	struct drm_gem_shmem_object base;
+
+	/**
+	 * @flags: Options set at creation-time. Some of these options apply to
+	 * the creation operation itself (which are stored here for reference)
+	 * with the remainder used for mapping options to both the device and
+	 * CPU. These are used every time this object is mapped, but may not be
+	 * changed after creation.
+	 *
+	 * Must be a combination of DRM_PVR_BO_* and/or PVR_BO_* flags.
+	 *
+	 * .. note::
+	 *
+	 *    This member should be treated as const: none of these
+	 *    options may change or be changed throughout the object's
+	 *    lifetime.
+	 */
+	u64 flags;
+};
+
+static_assert(offsetof(struct pvr_gem_object, base) == 0,
+	      "offsetof(struct pvr_gem_object, base) not zero");
+
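+/* Helpers to convert between the PowerVR, shmem and base GEM object types. */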
+#define shmem_gem_from_pvr_gem(pvr_obj) (&(pvr_obj)->base)
+
+#define shmem_gem_to_pvr_gem(shmem_obj) container_of_const(shmem_obj, struct pvr_gem_object, base)
+
+#define gem_from_pvr_gem(pvr_obj) (&(pvr_obj)->base.base)
+
+#define gem_to_pvr_gem(gem_obj) container_of_const(gem_obj, struct pvr_gem_object, base.base)
+
+/* Functions defined in pvr_gem.c */
+
+struct drm_gem_object *pvr_gem_create_object(struct drm_device *drm_dev, size_t size);
+
+struct pvr_gem_object *pvr_gem_object_create(struct pvr_device *pvr_dev,
+					     size_t size, u64 flags);
+
+int pvr_gem_object_into_handle(struct pvr_gem_object *pvr_obj,
+			       struct pvr_file *pvr_file, u32 *handle);
+struct pvr_gem_object *pvr_gem_object_from_handle(struct pvr_file *pvr_file,
+						  u32 handle);
+
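+/**
+ * pvr_gem_object_get_pages_sgt() - Get the scatter-gather table backing a
+ * PowerVR GEM object.
+ * @pvr_obj: Target PowerVR GEM object.
+ *
+ * This is a thin wrapper around drm_gem_shmem_get_pages_sgt().
+ *
+ * Return: A pointer to the object's &struct sg_table on success, or an
+ * ERR_PTR()-encoded error propagated from drm_gem_shmem_get_pages_sgt().
+ */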
+static __always_inline struct sg_table *
+pvr_gem_object_get_pages_sgt(struct pvr_gem_object *pvr_obj)
+{
+	return drm_gem_shmem_get_pages_sgt(shmem_gem_from_pvr_gem(pvr_obj));
+}
+
+void *pvr_gem_object_vmap(struct pvr_gem_object *pvr_obj);
+void pvr_gem_object_vunmap(struct pvr_gem_object *pvr_obj);
+
+int pvr_gem_get_dma_addr(struct pvr_gem_object *pvr_obj, u32 offset,
+			 dma_addr_t *dma_addr_out);
+
+/**
+ * pvr_gem_object_get() - Acquire reference on pvr_gem_object
+ * @pvr_obj: Pointer to object to acquire reference on.
+ */
+static __always_inline void
+pvr_gem_object_get(struct pvr_gem_object *pvr_obj)
+{
+	drm_gem_object_get(gem_from_pvr_gem(pvr_obj));
+}
+
+/**
+ * pvr_gem_object_put() - Release reference on pvr_gem_object
+ * @pvr_obj: Pointer to object to release reference on.
+ */
+static __always_inline void
+pvr_gem_object_put(struct pvr_gem_object *pvr_obj)
+{
+	drm_gem_object_put(gem_from_pvr_gem(pvr_obj));
+}
+
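+/**
+ * pvr_gem_object_size() - Get the backing size of a PowerVR GEM object
+ * @pvr_obj: Target PowerVR GEM object.
+ *
+ * Return: The size, in bytes, of the underlying &struct drm_gem_object.
+ */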
+static __always_inline size_t
+pvr_gem_object_size(struct pvr_gem_object *pvr_obj)
+{
+	return gem_from_pvr_gem(pvr_obj)->size;
+}
+
+#endif /* PVR_GEM_H */
diff --git a/drivers/gpu/drm/imagination/pvr_mmu.c b/drivers/gpu/drm/imagination/pvr_mmu.c
new file mode 100644
index 000000000000..3b48e1f77d09
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_mmu.c
@@ -0,0 +1,2487 @@
+// SPDX-License-Identifier: GPL-2.0 OR MIT
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#include "pvr_mmu.h"
+
+#include "pvr_device.h"
+#include "pvr_fw.h"
+#include "pvr_gem.h"
+#include "pvr_rogue_fwif.h"
+#include "pvr_rogue_mmu_defs.h"
+
+#include <drm/drm_drv.h>
+#include <linux/bitops.h>
+#include <linux/dma-mapping.h>
+#include <linux/kmemleak.h>
+#include <linux/minmax.h>
+#include <linux/sizes.h>
+
+#define PVR_SHIFT_FROM_SIZE(size_) (__builtin_ctzll(size_))
+#define PVR_MASK_FROM_SIZE(size_) (~((size_) - U64_C(1)))
+
+/*
+ * The value of the device page size (%PVR_DEVICE_PAGE_SIZE) is currently
+ * pegged to the host page size (%PAGE_SIZE). This chunk of macro goodness both
+ * ensures that the selected host page size corresponds to a valid device page
+ * size and sets up values needed by the MMU code below.
+ */
+#if (PVR_DEVICE_PAGE_SIZE == SZ_4K)
+# define ROGUE_MMUCTRL_PAGE_SIZE_X ROGUE_MMUCTRL_PAGE_SIZE_4KB
+# define ROGUE_MMUCTRL_PAGE_X_RANGE_SHIFT ROGUE_MMUCTRL_PAGE_4KB_RANGE_SHIFT
+# define ROGUE_MMUCTRL_PAGE_X_RANGE_CLRMSK ROGUE_MMUCTRL_PAGE_4KB_RANGE_CLRMSK
+#elif (PVR_DEVICE_PAGE_SIZE == SZ_16K)
+# define ROGUE_MMUCTRL_PAGE_SIZE_X ROGUE_MMUCTRL_PAGE_SIZE_16KB
+# define ROGUE_MMUCTRL_PAGE_X_RANGE_SHIFT ROGUE_MMUCTRL_PAGE_16KB_RANGE_SHIFT
+# define ROGUE_MMUCTRL_PAGE_X_RANGE_CLRMSK ROGUE_MMUCTRL_PAGE_16KB_RANGE_CLRMSK
+#elif (PVR_DEVICE_PAGE_SIZE == SZ_64K)
+# define ROGUE_MMUCTRL_PAGE_SIZE_X ROGUE_MMUCTRL_PAGE_SIZE_64KB
+# define ROGUE_MMUCTRL_PAGE_X_RANGE_SHIFT ROGUE_MMUCTRL_PAGE_64KB_RANGE_SHIFT
+# define ROGUE_MMUCTRL_PAGE_X_RANGE_CLRMSK ROGUE_MMUCTRL_PAGE_64KB_RANGE_CLRMSK
+#elif (PVR_DEVICE_PAGE_SIZE == SZ_256K)
+# define ROGUE_MMUCTRL_PAGE_SIZE_X ROGUE_MMUCTRL_PAGE_SIZE_256KB
+# define ROGUE_MMUCTRL_PAGE_X_RANGE_SHIFT ROGUE_MMUCTRL_PAGE_256KB_RANGE_SHIFT
+# define ROGUE_MMUCTRL_PAGE_X_RANGE_CLRMSK ROGUE_MMUCTRL_PAGE_256KB_RANGE_CLRMSK
+#elif (PVR_DEVICE_PAGE_SIZE == SZ_1M)
+# define ROGUE_MMUCTRL_PAGE_SIZE_X ROGUE_MMUCTRL_PAGE_SIZE_1MB
+# define ROGUE_MMUCTRL_PAGE_X_RANGE_SHIFT ROGUE_MMUCTRL_PAGE_1MB_RANGE_SHIFT
+# define ROGUE_MMUCTRL_PAGE_X_RANGE_CLRMSK ROGUE_MMUCTRL_PAGE_1MB_RANGE_CLRMSK
+#elif (PVR_DEVICE_PAGE_SIZE == SZ_2M)
+# define ROGUE_MMUCTRL_PAGE_SIZE_X ROGUE_MMUCTRL_PAGE_SIZE_2MB
+# define ROGUE_MMUCTRL_PAGE_X_RANGE_SHIFT ROGUE_MMUCTRL_PAGE_2MB_RANGE_SHIFT
+# define ROGUE_MMUCTRL_PAGE_X_RANGE_CLRMSK ROGUE_MMUCTRL_PAGE_2MB_RANGE_CLRMSK
+#else
+# error Unsupported device page size PVR_DEVICE_PAGE_SIZE
+#endif
+
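+/*
+ * The number of entries in a level 0 page table scales inversely with the
+ * device page size: a 4KiB device page uses the full
+ * ROGUE_MMUCTRL_ENTRIES_PT_VALUE, while larger device pages use
+ * proportionally fewer entries.
+ */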
+#define ROGUE_MMUCTRL_ENTRIES_PT_VALUE_X   \
+	(ROGUE_MMUCTRL_ENTRIES_PT_VALUE >> \
+	 (PVR_DEVICE_PAGE_SHIFT - PVR_SHIFT_FROM_SIZE(SZ_4K)))
+
+/**
+ * pvr_mmu_flush() - Request flush of all MMU caches.
+ * @pvr_dev: Target PowerVR device.
+ *
+ * This function must be called following any possible change to the MMU page
+ * tables.
+ *
+ * Returns:
+ *  * 0 on success, or
+ *  * Any error encountered while submitting the flush command via the KCCB.
+ */
+int
+pvr_mmu_flush(struct pvr_device *pvr_dev)
+{
+	/* TODO: implement */
+	return -ENODEV;
+}
+
+/**
+ * DOC: PowerVR Virtual Memory Handling
+ */
+/**
+ * DOC: PowerVR Virtual Memory Handling (constants)
+ *
+ * .. c:macro:: PVR_IDX_INVALID
+ *
+ *    Default value for a u16-based index.
+ *
+ *    This value cannot be zero, since zero is a valid index value.
+ */
+#define PVR_IDX_INVALID ((u16)(-1))
+
+/**
+ * DOC: MMU backing pages
+ */
+/**
+ * DOC: MMU backing pages (constants)
+ *
+ * .. c:macro:: PVR_MMU_BACKING_PAGE_SIZE
+ *
+ *    Page size of a PowerVR device's integrated MMU. The CPU page size must be
+ *    at least as large as this value for the current implementation; this is
+ *    checked at compile-time.
+ */
+#define PVR_MMU_BACKING_PAGE_SIZE SZ_4K
+static_assert(PAGE_SIZE >= PVR_MMU_BACKING_PAGE_SIZE);
+
+/**
+ * struct pvr_mmu_backing_page - Represents a single page used to back a page
+ *                              table of any level.
+ * @dma_addr: DMA address of this page.
+ * @host_ptr: CPU address of this page.
+ * @pvr_dev: The PowerVR device to which this page is associated. **For
+ *           internal use only.**
+ */
+struct pvr_mmu_backing_page {
+	dma_addr_t dma_addr;
+	void *host_ptr;
+/* private: internal use only */
+	struct page *raw_page;
+	struct pvr_device *pvr_dev;
+};
+
+/**
+ * pvr_mmu_backing_page_init() - Initialize a MMU backing page.
+ * @page: Target backing page.
+ * @pvr_dev: Target PowerVR device.
+ *
+ * This function performs three distinct operations:
+ *
+ * 1. Allocate a single page,
+ * 2. Map the page to the CPU, and
+ * 3. Map the page to DMA-space.
+ *
+ * It is expected that @page be zeroed (e.g. from kzalloc()) before calling
+ * this function.
+ *
+ * Return:
+ *  * 0 on success, or
+ *  * -%ENOMEM if allocation of the backing page or mapping of the backing
+ *    page to DMA fails.
+ */
+static int
+pvr_mmu_backing_page_init(struct pvr_mmu_backing_page *page,
+			  struct pvr_device *pvr_dev)
+{
+	struct device *dev = from_pvr_device(pvr_dev)->dev;
+
+	struct page *raw_page;
+	int err;
+
+	dma_addr_t dma_addr;
+	void *host_ptr;
+
+	raw_page = alloc_page(__GFP_ZERO | GFP_KERNEL);
+	if (!raw_page)
+		return -ENOMEM;
+
+	host_ptr = vmap(&raw_page, 1, VM_MAP, pgprot_writecombine(PAGE_KERNEL));
+	if (!host_ptr) {
+		err = -ENOMEM;
+		goto err_free_page;
+	}
+
+	dma_addr = dma_map_page(dev, raw_page, 0, PVR_MMU_BACKING_PAGE_SIZE,
+				DMA_TO_DEVICE);
+	if (dma_mapping_error(dev, dma_addr)) {
+		err = -ENOMEM;
+		goto err_unmap_page;
+	}
+
+	page->dma_addr = dma_addr;
+	page->host_ptr = host_ptr;
+	page->pvr_dev = pvr_dev;
+	page->raw_page = raw_page;
+	kmemleak_alloc(page->host_ptr, PAGE_SIZE, 1, GFP_KERNEL);
+
+	return 0;
+
+err_unmap_page:
+	vunmap(host_ptr);
+
+err_free_page:
+	__free_page(raw_page);
+
+	return err;
+}
+
+/**
+ * pvr_mmu_backing_page_fini() - Tear down a MMU backing page.
+ * @page: Target backing page.
+ *
+ * This function performs the mirror operations to pvr_mmu_backing_page_init(),
+ * in reverse order:
+ *
+ * 1. Unmap the page from DMA-space,
+ * 2. Unmap the page from the CPU, and
+ * 3. Free the page.
+ *
+ * It also zeros @page.
+ *
+ * It is a no-op to call this function a second (or further) time on any @page.
+ */
+static void
+pvr_mmu_backing_page_fini(struct pvr_mmu_backing_page *page)
+{
+	struct device *dev = from_pvr_device(page->pvr_dev)->dev;
+
+	/* Do nothing if no allocation is present. */
+	if (!page->pvr_dev)
+		return;
+
+	dma_unmap_page(dev, page->dma_addr, PVR_MMU_BACKING_PAGE_SIZE,
+		       DMA_TO_DEVICE);
+
+	kmemleak_free(page->host_ptr);
+	vunmap(page->host_ptr);
+
+	__free_page(page->raw_page);
+
+	memset(page, 0, sizeof(*page));
+}
+
+/**
+ * pvr_mmu_backing_page_sync() - Flush a MMU backing page from the CPU to the
+ *                              device.
+ * @page: Target backing page.
+ *
+ * .. caution::
+ *
+ *    **This is potentially an expensive function call.** Only call
+ *    pvr_mmu_backing_page_sync() once you're sure you have no more changes to
+ *    make to the backing page in the immediate future.
+ */
+static void
+pvr_mmu_backing_page_sync(struct pvr_mmu_backing_page *page)
+{
+	struct device *dev;
+
+	/*
+	 * Do nothing if no allocation is present. This may be the case if
+	 * we are unmapping pages.
+	 */
+	if (!page->pvr_dev)
+		return;
+
+	dev = from_pvr_device(page->pvr_dev)->dev;
+
+	dma_sync_single_for_device(dev, page->dma_addr,
+				   PVR_MMU_BACKING_PAGE_SIZE, DMA_TO_DEVICE);
+}
+
+/**
+ * DOC: Raw page tables
+ */
+
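+/*
+ * Helpers for accessing fields of raw page table entries. They combine the
+ * per-level entry type with the ROGUE_MMUCTRL_<name>_DATA_<field>_SHIFT and
+ * _CLRMSK definitions to extract a field from, or pack a field into, a raw
+ * entry value.
+ */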
+#define PVR_PAGE_TABLE_TYPEOF_ENTRY(level_) \
+	typeof_member(struct pvr_page_table_l##level_##_entry_raw, val)
+
+#define PVR_PAGE_TABLE_FIELD_GET(level_, name_, field_, entry_)           \
+	(((entry_).val &                                           \
+	  ~ROGUE_MMUCTRL_##name_##_DATA_##field_##_CLRMSK) >> \
+	 ROGUE_MMUCTRL_##name_##_DATA_##field_##_SHIFT)
+
+#define PVR_PAGE_TABLE_FIELD_PREP(level_, name_, field_, val_)            \
+	((((PVR_PAGE_TABLE_TYPEOF_ENTRY(level_))(val_))            \
+	  << ROGUE_MMUCTRL_##name_##_DATA_##field_##_SHIFT) & \
+	 ~ROGUE_MMUCTRL_##name_##_DATA_##field_##_CLRMSK)
+
+/**
+ * struct pvr_page_table_l2_entry_raw - A single entry in a level 2 page table.
+ * @val: The raw value of this entry.
+ *
+ * This type is a structure for type-checking purposes. At compile-time, its
+ * size is checked against %ROGUE_MMUCTRL_ENTRY_SIZE_PC_VALUE.
+ *
+ * The value stored in this structure can be decoded using the following bitmap:
+ *
+ * .. flat-table::
+ *    :widths: 1 5
+ *    :stub-columns: 1
+ *
+ *    * - 31..4
+ *      - **Level 1 Page Table Base Address:** Bits 39..12 of the L1
+ *        page table base address, which is 4KiB aligned.
+ *
+ *    * - 3..2
+ *      - *(reserved)*
+ *
+ *    * - 1
+ *      - **Pending:** When valid bit is not set, indicates that a valid
+ *        entry is pending and the MMU should wait for the driver to map
+ *        the entry. This is used to support page demand mapping of
+ *        memory.
+ *
+ *    * - 0
+ *      - **Valid:** Indicates that the entry contains a valid L1 page
+ *        table. If the valid bit is not set, then an attempted use of
+ *        the page would result in a page fault.
+ */
+struct pvr_page_table_l2_entry_raw {
+	u32 val;
+} __packed;
+static_assert(sizeof(struct pvr_page_table_l2_entry_raw) * 8 ==
+	      ROGUE_MMUCTRL_ENTRY_SIZE_PC_VALUE);
+
+static bool
+pvr_page_table_l2_entry_raw_is_valid(struct pvr_page_table_l2_entry_raw entry)
+{
+	return PVR_PAGE_TABLE_FIELD_GET(2, PC, VALID, entry);
+}
+
+/**
+ * pvr_page_table_l2_entry_raw_set() - Write a valid entry into a raw level 2
+ *                                     page table.
+ * @entry: Target raw level 2 page table entry.
+ * @child_table_dma_addr: DMA address of the level 1 page table to be
+ *                        associated with @entry.
+ *
+ * When calling this function, @child_table_dma_addr must be a valid DMA
+ * address and a multiple of %ROGUE_MMUCTRL_PC_DATA_PD_BASE_ALIGNSIZE.
+ */
+static void
+pvr_page_table_l2_entry_raw_set(struct pvr_page_table_l2_entry_raw *entry,
+				dma_addr_t child_table_dma_addr)
+{
+	child_table_dma_addr >>= ROGUE_MMUCTRL_PC_DATA_PD_BASE_ALIGNSHIFT;
+
+	entry->val =
+		PVR_PAGE_TABLE_FIELD_PREP(2, PC, VALID, true) |
+		PVR_PAGE_TABLE_FIELD_PREP(2, PC, ENTRY_PENDING, false) |
+		PVR_PAGE_TABLE_FIELD_PREP(2, PC, PD_BASE, child_table_dma_addr);
+}
+
+static void
+pvr_page_table_l2_entry_raw_clear(struct pvr_page_table_l2_entry_raw *entry)
+{
+	entry->val = 0;
+}
+
+/**
+ * struct pvr_page_table_l1_entry_raw - A single entry in a level 1 page table.
+ * @val: The raw value of this entry.
+ *
+ * This type is a structure for type-checking purposes. At compile-time, its
+ * size is checked against %ROGUE_MMUCTRL_ENTRY_SIZE_PD_VALUE.
+ *
+ * The value stored in this structure can be decoded using the following bitmap:
+ *
+ * .. flat-table::
+ *    :widths: 1 5
+ *    :stub-columns: 1
+ *
+ *    * - 63..41
+ *      - *(reserved)*
+ *
+ *    * - 40
+ *      - **Pending:** When valid bit is not set, indicates that a valid entry
+ *        is pending and the MMU should wait for the driver to map the entry.
+ *        This is used to support page demand mapping of memory.
+ *
+ *    * - 39..5
+ *      - **Level 0 Page Table Base Address:** The way this value is
+ *        interpreted depends on the page size. Bits not specified in the
+ *        table below (e.g. bits 11..5 for page size 4KiB) should be
+ *        considered reserved.
+ *
+ *        This table shows the bits used in an L1 page table entry to
+ *        represent the Physical Table Base Address for a given Page Size.
+ *        Since each L1 page table entry covers 2MiB of address space, the
+ *        maximum page size is 2MiB.
+ *
+ *        .. flat-table::
+ *           :widths: 1 1 1 1
+ *           :header-rows: 1
+ *           :stub-columns: 1
+ *
+ *           * - Page size
+ *             - L0 page table base address bits
+ *             - Number of L0 page table entries
+ *             - Size of L0 page table
+ *
+ *           * - 4KiB
+ *             - 39..12
+ *             - 512
+ *             - 4KiB
+ *
+ *           * - 16KiB
+ *             - 39..10
+ *             - 128
+ *             - 1KiB
+ *
+ *           * - 64KiB
+ *             - 39..8
+ *             - 32
+ *             - 256B
+ *
+ *           * - 256KiB
+ *             - 39..6
+ *             - 8
+ *             - 64B
+ *
+ *           * - 1MiB
+ *             - 39..5 (4 = '0')
+ *             - 2
+ *             - 16B
+ *
+ *           * - 2MiB
+ *             - 39..5 (4..3 = '00')
+ *             - 1
+ *             - 8B
+ *
+ *    * - 4
+ *      - *(reserved)*
+ *
+ *    * - 3..1
+ *      - **Page Size:** Sets the page size, from 4KiB to 2MiB.
+ *
+ *    * - 0
+ *      - **Valid:** Indicates that the entry contains a valid L0 page table.
+ *        If the valid bit is not set, then an attempted use of the page would
+ *        result in a page fault.
+ */
+struct pvr_page_table_l1_entry_raw {
+	u64 val;
+} __packed;
+static_assert(sizeof(struct pvr_page_table_l1_entry_raw) * 8 ==
+	      ROGUE_MMUCTRL_ENTRY_SIZE_PD_VALUE);
+
+static bool
+pvr_page_table_l1_entry_raw_is_valid(struct pvr_page_table_l1_entry_raw entry)
+{
+	return PVR_PAGE_TABLE_FIELD_GET(1, PD, VALID, entry);
+}
+
+/**
+ * pvr_page_table_l1_entry_raw_set() - Write a valid entry into a raw level 1
+ *                                     page table.
+ * @entry: Target raw level 1 page table entry.
+ * @child_table_dma_addr: DMA address of the level 0 page table to be
+ *                        associated with @entry.
+ *
+ * When calling this function, @child_table_dma_addr must be a valid DMA
+ * address and a multiple of 4 KiB.
+ */
+static void
+pvr_page_table_l1_entry_raw_set(struct pvr_page_table_l1_entry_raw *entry,
+				dma_addr_t child_table_dma_addr)
+{
+	entry->val = PVR_PAGE_TABLE_FIELD_PREP(1, PD, VALID, true) |
+		     PVR_PAGE_TABLE_FIELD_PREP(1, PD, ENTRY_PENDING, false) |
+		     PVR_PAGE_TABLE_FIELD_PREP(1, PD, PAGE_SIZE,
+					       ROGUE_MMUCTRL_PAGE_SIZE_X) |
+		     /*
+		      * The use of a 4K-specific macro here is correct. It is
+		      * a future optimization to allocate sub-host-page-sized
+		      * blocks for individual tables, so the condition that any
+		      * page table address is aligned to the size of the
+		      * largest (a 4KB) table currently holds.
+		      */
+		     (child_table_dma_addr &
+		      ~ROGUE_MMUCTRL_PT_BASE_4KB_RANGE_CLRMSK);
+}
+
+static void
+pvr_page_table_l1_entry_raw_clear(struct pvr_page_table_l1_entry_raw *entry)
+{
+	entry->val = 0;
+}
+
+/**
+ * struct pvr_page_table_l0_entry_raw - A single entry in a level 0 page table.
+ * @val: The raw value of this entry.
+ *
+ * This type is a structure for type-checking purposes. At compile-time, its
+ * size is checked against %ROGUE_MMUCTRL_ENTRY_SIZE_PT_VALUE.
+ *
+ * The value stored in this structure can be decoded using the following bitmap:
+ *
+ * .. flat-table::
+ *    :widths: 1 5
+ *    :stub-columns: 1
+ *
+ *    * - 63
+ *      - *(reserved)*
+ *
+ *    * - 62
+ *      - **PM/FW Protect:** Indicates a protected region which only the
+ *        Parameter Manager (PM) or firmware processor can write to.
+ *
+ *    * - 61..40
+ *      - **VP Page (High):** Virtual-physical page used for Parameter Manager
+ *        (PM) memory. This field is only used if the additional level of PB
+ *        virtualization is enabled. The VP Page field is needed by the PM in
+ *        order to correctly reconstitute the free lists after render
+ *        completion. This (High) field holds bits 39..18 of the value; the
+ *        Low field holds bits 17..12. Bits 11..0 are always zero because the
+ *        value is always aligned to the 4KiB page size.
+ *
+ *    * - 39..12
+ *      - **Physical Page Address:** The way this value is interpreted depends
+ *        on the page size. Bits not specified in the table below (e.g. bits
+ *        20..12 for page size 2MiB) should be considered reserved.
+ *
+ *        This table shows the bits used in an L0 page table entry to represent
+ *        the Physical Page Address for a given page size (as defined in the
+ *        associated L1 page table entry).
+ *
+ *        .. flat-table::
+ *           :widths: 1 1
+ *           :header-rows: 1
+ *           :stub-columns: 1
+ *
+ *           * - Page size
+ *             - Physical address bits
+ *
+ *           * - 4KiB
+ *             - 39..12
+ *
+ *           * - 16KiB
+ *             - 39..14
+ *
+ *           * - 64KiB
+ *             - 39..16
+ *
+ *           * - 256KiB
+ *             - 39..18
+ *
+ *           * - 1MiB
+ *             - 39..20
+ *
+ *           * - 2MiB
+ *             - 39..21
+ *
+ *    * - 11..6
+ *      - **VP Page (Low):** Continuation of VP Page (High).
+ *
+ *    * - 5
+ *      - **Pending:** When valid bit is not set, indicates that a valid entry
+ *        is pending and the MMU should wait for the driver to map the entry.
+ *        This is used to support page demand mapping of memory.
+ *
+ *    * - 4
+ *      - **PM Src:** Set on Parameter Manager (PM) allocated page table
+ *        entries when indicated by the PM. Note that this bit will only be set
+ *        by the PM, not by the device driver.
+ *
+ *    * - 3
+ *      - **SLC Bypass Control:** Specifies requests to this page should bypass
+ *        the System Level Cache (SLC), if enabled in SLC configuration.
+ *
+ *    * - 2
+ *      - **Cache Coherency:** Indicates that the page is coherent (i.e. it
+ *        does not require a cache flush between operations on the CPU and the
+ *        device).
+ *
+ *    * - 1
+ *      - **Read Only:** If set, this bit indicates that the page is read only.
+ *        An attempted write to this page would result in a write-protection
+ *        fault.
+ *
+ *    * - 0
+ *      - **Valid:** Indicates that the entry contains a valid page. If the
+ *        valid bit is not set, then an attempted use of the page would result
+ *        in a page fault.
+ */
+struct pvr_page_table_l0_entry_raw {
+	u64 val;
+} __packed;
+static_assert(sizeof(struct pvr_page_table_l0_entry_raw) * 8 ==
+	      ROGUE_MMUCTRL_ENTRY_SIZE_PT_VALUE);
+
+/**
+ * struct pvr_page_flags_raw - The configurable flags from a single entry in a
+ *                             level 0 page table.
+ * @val: The raw value of these flags. Since these are a strict subset of
+ *       &struct pvr_page_table_l0_entry_raw, that type is reused for this member.
+ *
+ * The flags stored in this type are: PM/FW Protect; SLC Bypass Control; Cache
+ * Coherency, and Read Only (bits 62, 3, 2 and 1 respectively).
+ *
+ * This type should never be instantiated directly; instead use
+ * pvr_page_flags_raw_create() to ensure only valid bits of @val are set.
+ */
+struct pvr_page_flags_raw {
+	struct pvr_page_table_l0_entry_raw val;
+} __packed;
+static_assert(sizeof(struct pvr_page_flags_raw) ==
+	      sizeof(struct pvr_page_table_l0_entry_raw));
+
+static bool
+pvr_page_table_l0_entry_raw_is_valid(struct pvr_page_table_l0_entry_raw entry)
+{
+	return PVR_PAGE_TABLE_FIELD_GET(0, PT, VALID, entry);
+}
+
+/**
+ * pvr_page_table_l0_entry_raw_set() - Write a valid entry into a raw level 0
+ *                                     page table.
+ * @entry: Target raw level 0 page table entry.
+ * @dma_addr: DMA address of the physical page to be associated with @entry.
+ * @flags: Options to be set on @entry.
+ *
+ * When calling this function, @dma_addr must be a valid DMA address and a
+ * multiple of %PVR_DEVICE_PAGE_SIZE.
+ *
+ * The @flags parameter is directly assigned into @entry. It is the caller's
+ * responsibility to ensure that only bits specified in
+ * &struct pvr_page_flags_raw are set in @flags.
+ */
+static void
+pvr_page_table_l0_entry_raw_set(struct pvr_page_table_l0_entry_raw *entry,
+				dma_addr_t dma_addr,
+				struct pvr_page_flags_raw flags)
+{
+	entry->val = PVR_PAGE_TABLE_FIELD_PREP(0, PT, VALID, true) |
+		     PVR_PAGE_TABLE_FIELD_PREP(0, PT, ENTRY_PENDING, false) |
+		     (dma_addr & ~ROGUE_MMUCTRL_PAGE_X_RANGE_CLRMSK) |
+		     flags.val.val;
+}
+
+static void
+pvr_page_table_l0_entry_raw_clear(struct pvr_page_table_l0_entry_raw *entry)
+{
+	entry->val = 0;
+}
+
+/**
+ * pvr_page_flags_raw_create() - Initialize the flag bits of a raw level 0 page
+ *                               table entry.
+ * @read_only: This page is read-only (see: Read Only).
+ * @cache_coherent: This page does not require cache flushes (see: Cache
+ *                  Coherency).
+ * @slc_bypass: This page bypasses the device cache (see: SLC Bypass Control).
+ * @pm_fw_protect: This page is only for use by the firmware or Parameter
+ *                 Manager (see PM/FW Protect).
+ *
+ * For more details on the use of these four options, see their respective
+ * entries in the table under &struct pvr_page_table_l0_entry_raw.
+ *
+ * Return:
+ * A new &struct pvr_page_flags_raw instance which can be passed directly to
+ * pvr_page_table_l0_entry_raw_set() or pvr_page_table_l0_insert().
+ */
+static struct pvr_page_flags_raw
+pvr_page_flags_raw_create(bool read_only, bool cache_coherent, bool slc_bypass,
+			  bool pm_fw_protect)
+{
+	struct pvr_page_flags_raw flags;
+
+	flags.val.val =
+		PVR_PAGE_TABLE_FIELD_PREP(0, PT, READ_ONLY, read_only) |
+		PVR_PAGE_TABLE_FIELD_PREP(0, PT, CC, cache_coherent) |
+		PVR_PAGE_TABLE_FIELD_PREP(0, PT, SLC_BYPASS_CTRL, slc_bypass) |
+		PVR_PAGE_TABLE_FIELD_PREP(0, PT, PM_META_PROTECT, pm_fw_protect);
+
+	return flags;
+}
+
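+/*
+ * Illustrative sketch (documentation only): building flags and writing them
+ * into a raw L0 entry. page_dma_addr is a placeholder for a physical page
+ * address that is a multiple of PVR_DEVICE_PAGE_SIZE.
+ *
+ *	struct pvr_page_flags_raw flags =
+ *		pvr_page_flags_raw_create(false, true, false, false);
+ *	struct pvr_page_table_l0_entry_raw entry;
+ *
+ *	pvr_page_table_l0_entry_raw_set(&entry, page_dma_addr, flags);
+ */
+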
+/**
+ * struct pvr_page_table_l2_raw - The raw data of a level 2 page table.
+ *
+ * This type is a structure for type-checking purposes. At compile-time, its
+ * size is checked against %PVR_MMU_BACKING_PAGE_SIZE.
+ */
+struct pvr_page_table_l2_raw {
+	/** @entries: The raw values of this table. */
+	struct pvr_page_table_l2_entry_raw
+		entries[ROGUE_MMUCTRL_ENTRIES_PC_VALUE];
+} __packed;
+static_assert(sizeof(struct pvr_page_table_l2_raw) == PVR_MMU_BACKING_PAGE_SIZE);
+
+/**
+ * struct pvr_page_table_l1_raw - The raw data of a level 1 page table.
+ *
+ * This type is a structure for type-checking purposes. At compile-time, its
+ * size is checked against %PVR_MMU_BACKING_PAGE_SIZE.
+ */
+struct pvr_page_table_l1_raw {
+	/** @entries: The raw values of this table. */
+	struct pvr_page_table_l1_entry_raw
+		entries[ROGUE_MMUCTRL_ENTRIES_PD_VALUE];
+} __packed;
+static_assert(sizeof(struct pvr_page_table_l1_raw) == PVR_MMU_BACKING_PAGE_SIZE);
+
+/**
+ * struct pvr_page_table_l0_raw - The raw data of a level 0 page table.
+ *
+ * This type is a structure for type-checking purposes. At compile-time, its
+ * size is checked against %PVR_MMU_BACKING_PAGE_SIZE.
+ *
+ * .. caution::
+ *
+ *    The size of level 0 page tables is variable depending on the page size
+ *    specified in the associated level 1 page table entry. Since the device
+ *    page size in use is pegged to the host page size, it cannot vary at
+ *    runtime. This structure is therefore only defined to contain the required
+ *    number of entries for the current device page size. **You should never
+ *    read or write beyond the last supported entry.**
+ */
+struct pvr_page_table_l0_raw {
+	/** @entries: The raw values of this table. */
+	struct pvr_page_table_l0_entry_raw
+		entries[ROGUE_MMUCTRL_ENTRIES_PT_VALUE_X];
+} __packed;
+static_assert(sizeof(struct pvr_page_table_l0_raw) <= PVR_MMU_BACKING_PAGE_SIZE);
+
+/**
+ * DOC: Mirror page tables
+ */
+
+/*
+ * We pre-declare these types because they cross-depend on pointers to each
+ * other.
+ */
+struct pvr_page_table_l1;
+struct pvr_page_table_l0;
+
+/**
+ * struct pvr_page_table_l2 - A wrapped level 2 page table.
+ *
+ * To access the raw part of this table, use pvr_page_table_l2_get_raw().
+ * Alternatively to access a raw entry directly, use
+ * pvr_page_table_l2_get_entry_raw().
+ *
+ * A level 2 page table forms the root of the page table tree structure, so
+ * this type has no &parent or &parent_idx members.
+ */
+struct pvr_page_table_l2 {
+	/**
+	 * @entries: The children of this node in the page table tree
+	 * structure. These are also mirror tables. The indexing of this array
+	 * is identical to that of the raw equivalent
+	 * (&pvr_page_table_l2_raw.entries).
+	 */
+	struct pvr_page_table_l1 *entries[ROGUE_MMUCTRL_ENTRIES_PC_VALUE];
+
+	/**
+	 * @backing_page: A handle to the memory which holds the raw
+	 * equivalent of this table. **For internal use only.**
+	 */
+	struct pvr_mmu_backing_page backing_page;
+
+	/**
+	 * @entry_count: The current number of valid entries (that we know of)
+	 * in this table. This value is incremented by pvr_page_table_l2_insert()
+	 * and decremented by pvr_page_table_l2_remove(). As this table is the
+	 * root of the page table tree, it is only destroyed on MMU context
+	 * teardown rather than when this value reaches zero.
+	 */
+	u16 entry_count;
+};
+
+/**
+ * pvr_page_table_l2_init() - Initialize a level 2 page table.
+ * @table: Target level 2 page table.
+ * @pvr_dev: Target PowerVR device
+ *
+ * It is expected that @table be zeroed (e.g. from kzalloc()) before calling
+ * this function.
+ *
+ * Return:
+ *  * 0 on success, or
+ *  * Any error encountered while initializing &table->backing_page using
+ *    pvr_mmu_backing_page_init().
+ */
+static int
+pvr_page_table_l2_init(struct pvr_page_table_l2 *table,
+		       struct pvr_device *pvr_dev)
+{
+	return pvr_mmu_backing_page_init(&table->backing_page, pvr_dev);
+}
+
+/**
+ * pvr_page_table_l2_fini() - Teardown a level 2 page table.
+ * @table: Target level 2 page table.
+ *
+ * It is an error to attempt to use @table after calling this function.
+ */
+static void
+pvr_page_table_l2_fini(struct pvr_page_table_l2 *table)
+{
+	pvr_mmu_backing_page_fini(&table->backing_page);
+}
+
+/**
+ * pvr_page_table_l2_sync() - Flush a level 2 page table from the CPU to the
+ *                            device.
+ * @table: Target level 2 page table.
+ *
+ * This is just a thin wrapper around pvr_mmu_backing_page_sync(), so the
+ * warning there applies here too: **Only call pvr_page_table_l2_sync() once
+ * you're sure you have no more changes to make to** @table **in the immediate
+ * future.**
+ *
+ * If child level 1 page tables of @table also need to be flushed, this should
+ * be done first using pvr_page_table_l1_sync() *before* calling this function.
+ */
+static void
+pvr_page_table_l2_sync(struct pvr_page_table_l2 *table)
+{
+	pvr_mmu_backing_page_sync(&table->backing_page);
+}
+
+/**
+ * pvr_page_table_l2_get_raw() - Access the raw equivalent of a mirror level 2
+ *                               page table.
+ * @table: Target level 2 page table.
+ *
+ * Essentially returns the CPU address of the raw equivalent of @table, cast to
+ * a &struct pvr_page_table_l2_raw pointer.
+ *
+ * You probably want to call pvr_page_table_l2_get_entry_raw() instead.
+ *
+ * Return:
+ * The raw equivalent of @table.
+ */
+static struct pvr_page_table_l2_raw *
+pvr_page_table_l2_get_raw(struct pvr_page_table_l2 *table)
+{
+	return table->backing_page.host_ptr;
+}
+
+/**
+ * pvr_page_table_l2_get_entry_raw() - Access an entry from the raw equivalent
+ *                                     of a mirror level 2 page table.
+ * @table: Target level 2 page table.
+ * @idx: Index of the entry to access.
+ *
+ * Technically this function returns a pointer to a slot in a raw level 2 page
+ * table, since the returned "entry" is not guaranteed to be valid. The caller
+ * must verify the validity of the entry at the returned address (perhaps using
+ * pvr_page_table_l2_entry_raw_is_valid()) before reading or overwriting it.
+ *
+ * The value of @idx is not checked here; it is the caller's responsibility to
+ * ensure @idx refers to a valid index within @table before dereferencing the
+ * returned pointer.
+ *
+ * Return:
+ * A pointer to the requested raw level 2 page table entry.
+ */
+static struct pvr_page_table_l2_entry_raw *
+pvr_page_table_l2_get_entry_raw(struct pvr_page_table_l2 *table, u16 idx)
+{
+	return &pvr_page_table_l2_get_raw(table)->entries[idx];
+}
+
+/**
+ * pvr_page_table_l2_entry_is_valid() - Check if a level 2 page table entry is
+ *                                      marked as valid.
+ * @table: Target level 2 page table.
+ * @idx: Index of the entry to check.
+ *
+ * The value of @idx is not checked here; it is the caller's responsibility to
+ * ensure @idx refers to a valid index within @table before calling this
+ * function.
+ */
+static bool
+pvr_page_table_l2_entry_is_valid(struct pvr_page_table_l2 *table, u16 idx)
+{
+	struct pvr_page_table_l2_entry_raw entry_raw =
+		*pvr_page_table_l2_get_entry_raw(table, idx);
+
+	return pvr_page_table_l2_entry_raw_is_valid(entry_raw);
+}
+
+/**
+ * struct pvr_page_table_l1 - A wrapped level 1 page table.
+ *
+ * To access the raw part of this table, use pvr_page_table_l1_get_raw().
+ * Alternatively to access a raw entry directly, use
+ * pvr_page_table_l1_get_entry_raw().
+ */
+struct pvr_page_table_l1 {
+	/**
+	 * @entries: The children of this node in the page table tree
+	 * structure. These are also mirror tables. The indexing of this array
+	 * is identical to that of the raw equivalent
+	 * (&pvr_page_table_l1_raw.entries).
+	 */
+	struct pvr_page_table_l0 *entries[ROGUE_MMUCTRL_ENTRIES_PD_VALUE];
+
+	/**
+	 * @backing_page: A handle to the memory which holds the raw
+	 * equivalent of this table. **For internal use only.**
+	 */
+	struct pvr_mmu_backing_page backing_page;
+
+	union {
+		/**
+		 * @parent: The parent of this node in the page table tree structure.
+		 *
+		 * This is also a mirror table.
+		 *
+		 * Only valid when the L1 page table is active. When the L1 page table
+		 * has been removed and queued for destruction, the next_free field
+		 * should be used instead.
+		 */
+		struct pvr_page_table_l2 *parent;
+
+		/**
+		 * @next_free: Pointer to the next L1 page table to take/free.
+		 *
+		 * Used to form a linked list of L1 page tables. This is used
+		 * when preallocating tables and when the page table has been
+		 * removed and queued for destruction.
+		 */
+		struct pvr_page_table_l1 *next_free;
+	};
+
+	/**
+	 * @parent_idx: The index of the entry in the parent table (see
+	 * @parent) which corresponds to this table.
+	 */
+	u16 parent_idx;
+
+	/**
+	 * @entry_count: The current number of valid entries (that we know of)
+	 * in this table. This value is essentially a refcount - when it is
+	 * decremented to zero by pvr_page_table_l1_remove(), the table is
+	 * removed from its parent and queued for destruction.
+	 */
+	u16 entry_count;
+};
+
+/**
+ * pvr_page_table_l1_init() - Initialize a level 1 page table.
+ * @table: Target level 1 page table.
+ * @pvr_dev: Target PowerVR device
+ *
+ * When this function returns successfully, @table is still not considered
+ * valid. It must be inserted into the page table tree structure with
+ * pvr_page_table_l2_insert() before it is ready for use.
+ *
+ * It is expected that @table be zeroed (e.g. from kzalloc()) before calling
+ * this function.
+ *
+ * Return:
+ *  * 0 on success, or
+ *  * Any error encountered while initializing &table->backing_page using
+ *    pvr_mmu_backing_page_init().
+ */
+static int
+pvr_page_table_l1_init(struct pvr_page_table_l1 *table,
+		       struct pvr_device *pvr_dev)
+{
+	table->parent_idx = PVR_IDX_INVALID;
+
+	return pvr_mmu_backing_page_init(&table->backing_page, pvr_dev);
+}
+
+/**
+ * pvr_page_table_l1_free() - Teardown a level 1 page table.
+ * @table: Target level 1 page table.
+ *
+ * It is an error to attempt to use @table after calling this function, even
+ * indirectly. This includes calling pvr_page_table_l2_remove(), which must
+ * be called *before* pvr_page_table_l1_free().
+ */
+static void
+pvr_page_table_l1_free(struct pvr_page_table_l1 *table)
+{
+	pvr_mmu_backing_page_fini(&table->backing_page);
+	kfree(table);
+}
+
+/**
+ * pvr_page_table_l1_sync() - Flush a level 1 page table from the CPU to the
+ *                            device.
+ * @table: Target level 1 page table.
+ *
+ * This is just a thin wrapper around pvr_mmu_backing_page_sync(), so the
+ * warning there applies here too: **Only call pvr_page_table_l1_sync() once
+ * you're sure you have no more changes to make to** @table **in the immediate
+ * future.**
+ *
+ * If child level 0 page tables of @table also need to be flushed, this should
+ * be done first using pvr_page_table_l0_sync() *before* calling this function.
+ */
+static void
+pvr_page_table_l1_sync(struct pvr_page_table_l1 *table)
+{
+	pvr_mmu_backing_page_sync(&table->backing_page);
+}
+
+/**
+ * pvr_page_table_l1_get_raw() - Access the raw equivalent of a mirror level 1
+ *                               page table.
+ * @table: Target level 1 page table.
+ *
+ * Essentially returns the CPU address of the raw equivalent of @table, cast to
+ * a &struct pvr_page_table_l1_raw pointer.
+ *
+ * You probably want to call pvr_page_table_l1_get_entry_raw() instead.
+ *
+ * Return:
+ * The raw equivalent of @table.
+ */
+static struct pvr_page_table_l1_raw *
+pvr_page_table_l1_get_raw(struct pvr_page_table_l1 *table)
+{
+	return table->backing_page.host_ptr;
+}
+
+/**
+ * pvr_page_table_l1_get_entry_raw() - Access an entry from the raw equivalent
+ *                                     of a mirror level 1 page table.
+ * @table: Target level 1 page table.
+ * @idx: Index of the entry to access.
+ *
+ * Technically this function returns a pointer to a slot in a raw level 1 page
+ * table, since the returned "entry" is not guaranteed to be valid. The caller
+ * must verify the validity of the entry at the returned address (perhaps using
+ * pvr_page_table_l1_entry_raw_is_valid()) before reading or overwriting it.
+ *
+ * The value of @idx is not checked here; it is the caller's responsibility to
+ * ensure @idx refers to a valid index within @table before dereferencing the
+ * returned pointer.
+ *
+ * Return:
+ * A pointer to the requested raw level 1 page table entry.
+ */
+static struct pvr_page_table_l1_entry_raw *
+pvr_page_table_l1_get_entry_raw(struct pvr_page_table_l1 *table, u16 idx)
+{
+	return &pvr_page_table_l1_get_raw(table)->entries[idx];
+}
+
+/**
+ * pvr_page_table_l1_entry_is_valid() - Check if a level 1 page table entry is
+ *                                      marked as valid.
+ * @table: Target level 1 page table.
+ * @idx: Index of the entry to check.
+ *
+ * The value of @idx is not checked here; it is the caller's responsibility to
+ * ensure @idx refers to a valid index within @table before calling this
+ * function.
+ */
+static bool
+pvr_page_table_l1_entry_is_valid(struct pvr_page_table_l1 *table, u16 idx)
+{
+	struct pvr_page_table_l1_entry_raw entry_raw =
+		*pvr_page_table_l1_get_entry_raw(table, idx);
+
+	return pvr_page_table_l1_entry_raw_is_valid(entry_raw);
+}
+
+/**
+ * struct pvr_page_table_l0 - A wrapped level 0 page table.
+ *
+ * To access the raw part of this table, use pvr_page_table_l0_get_raw().
+ * Alternatively to access a raw entry directly, use
+ * pvr_page_table_l0_get_entry_raw().
+ *
+ * There is no mirror representation of an individual page, so this type has no
+ * &entries member.
+ */
+struct pvr_page_table_l0 {
+	/**
+	 * @backing_page: A handle to the memory which holds the raw
+	 * equivalent of this table. **For internal use only.**
+	 */
+	struct pvr_mmu_backing_page backing_page;
+
+	union {
+		/**
+		 * @parent: The parent of this node in the page table tree structure.
+		 *
+		 * This is also a mirror table.
+		 *
+		 * Only valid when the L0 page table is active. When the L0 page table
+		 * has been removed and queued for destruction, the next_free field
+		 * should be used instead.
+		 */
+		struct pvr_page_table_l1 *parent;
+
+		/**
+		 * @next_free: Pointer to the next L0 page table to take/free.
+		 *
+		 * Used to form a linked list of L0 page tables. This is used
+		 * when preallocating tables and when the page table has been
+		 * removed and queued for destruction.
+		 */
+		struct pvr_page_table_l0 *next_free;
+	};
+
+	/**
+	 * @parent_idx: The index of the entry in the parent table (see
+	 * @parent) which corresponds to this table.
+	 */
+	u16 parent_idx;
+
+	/**
+	 * @entry_count: The current number of valid entries (that we know of)
+	 * in this table. This value is essentially a refcount - when it is
+	 * decremented to zero by pvr_page_table_l0_remove(), the table is
+	 * removed from its parent and queued for destruction.
+	 */
+	u16 entry_count;
+};
+
+/**
+ * pvr_page_table_l0_init() - Initialize a level 0 page table.
+ * @table: Target level 0 page table.
+ * @pvr_dev: Target PowerVR device
+ *
+ * When this function returns successfully, @table is still not considered
+ * valid. It must be inserted into the page table tree structure with
+ * pvr_page_table_l1_insert() before it is ready for use.
+ *
+ * It is expected that @table be zeroed (e.g. from kzalloc()) before calling
+ * this function.
+ *
+ * Return:
+ *  * 0 on success, or
+ *  * Any error encountered while initializing &table->backing_page using
+ *    pvr_mmu_backing_page_init().
+ */
+static int
+pvr_page_table_l0_init(struct pvr_page_table_l0 *table,
+		       struct pvr_device *pvr_dev)
+{
+	table->parent_idx = PVR_IDX_INVALID;
+
+	return pvr_mmu_backing_page_init(&table->backing_page, pvr_dev);
+}
+
+/**
+ * pvr_page_table_l0_free() - Teardown a level 0 page table.
+ * @table: Target level 0 page table.
+ *
+ * It is an error to attempt to use @table after calling this function, even
+ * indirectly. This includes calling pvr_page_table_l1_remove(), which must
+ * be called *before* pvr_page_table_l0_free().
+ */
+static void
+pvr_page_table_l0_free(struct pvr_page_table_l0 *table)
+{
+	pvr_mmu_backing_page_fini(&table->backing_page);
+	kfree(table);
+}
+
+/**
+ * pvr_page_table_l0_sync() - Flush a level 0 page table from the CPU to the
+ *                            device.
+ * @table: Target level 0 page table.
+ *
+ * This is just a thin wrapper around pvr_mmu_backing_page_sync(), so the
+ * warning there applies here too: **Only call pvr_page_table_l0_sync() once
+ * you're sure you have no more changes to make to** @table **in the immediate
+ * future.**
+ *
+ * If child pages of @table also need to be flushed, this should be done first
+ * using a DMA sync function (e.g. dma_sync_sg_for_device()) *before* calling
+ * this function.
+ */
+static void
+pvr_page_table_l0_sync(struct pvr_page_table_l0 *table)
+{
+	pvr_mmu_backing_page_sync(&table->backing_page);
+}
+
+/**
+ * pvr_page_table_l0_get_raw() - Access the raw equivalent of a mirror level 0
+ *                               page table.
+ * @table: Target level 0 page table.
+ *
+ * Essentially returns the CPU address of the raw equivalent of @table, cast to
+ * a &struct pvr_page_table_l0_raw pointer.
+ *
+ * You probably want to call pvr_page_table_l0_get_entry_raw() instead.
+ *
+ * Return:
+ * The raw equivalent of @table.
+ */
+static struct pvr_page_table_l0_raw *
+pvr_page_table_l0_get_raw(struct pvr_page_table_l0 *table)
+{
+	return table->backing_page.host_ptr;
+}
+
+/**
+ * pvr_page_table_l0_get_entry_raw() - Access an entry from the raw equivalent
+ *                                     of a mirror level 0 page table.
+ * @table: Target level 0 page table.
+ * @idx: Index of the entry to access.
+ *
+ * Technically this function returns a pointer to a slot in a raw level 0 page
+ * table, since the returned "entry" is not guaranteed to be valid. The caller
+ * must verify the validity of the entry at the returned address (perhaps using
+ * pvr_page_table_l0_entry_raw_is_valid()) before reading or overwriting it.
+ *
+ * The value of @idx is not checked here; it is the caller's responsibility to
+ * ensure @idx refers to a valid index within @table before dereferencing the
+ * returned pointer. This is especially important for level 0 page tables, which
+ * can have a variable number of entries.
+ *
+ * Return:
+ * A pointer to the requested raw level 0 page table entry.
+ */
+static struct pvr_page_table_l0_entry_raw *
+pvr_page_table_l0_get_entry_raw(struct pvr_page_table_l0 *table, u16 idx)
+{
+	return &pvr_page_table_l0_get_raw(table)->entries[idx];
+}
+
+/**
+ * pvr_page_table_l0_entry_is_valid() - Check if a level 0 page table entry is
+ *                                      marked as valid.
+ * @table: Target level 0 page table.
+ * @idx: Index of the entry to check.
+ *
+ * The value of @idx is not checked here; it is the caller's responsibility to
+ * ensure @idx refers to a valid index within @table before calling this
+ * function.
+ */
+static bool
+pvr_page_table_l0_entry_is_valid(struct pvr_page_table_l0 *table, u16 idx)
+{
+	struct pvr_page_table_l0_entry_raw entry_raw =
+		*pvr_page_table_l0_get_entry_raw(table, idx);
+
+	return pvr_page_table_l0_entry_raw_is_valid(entry_raw);
+}
+
+/**
+ * struct pvr_mmu_context - context holding data for operations at page
+ * catalogue level, intended for use with a VM context.
+ */
+struct pvr_mmu_context {
+	/** @pvr_dev: The PVR device associated with the owning VM context. */
+	struct pvr_device *pvr_dev;
+
+	/** @page_table_l2: The MMU table root. */
+	struct pvr_page_table_l2 page_table_l2;
+};
+
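+/**
+ * enum pvr_mmu_sync_level - Highest level of the page table tree structure
+ * for which a CPU-to-device flush is required.
+ * @PVR_MMU_SYNC_LEVEL_NONE: No flush is currently required.
+ * @PVR_MMU_SYNC_LEVEL_0: The level 0 page table requires a flush.
+ * @PVR_MMU_SYNC_LEVEL_1: Page tables up to and including level 1 require a flush.
+ * @PVR_MMU_SYNC_LEVEL_2: Page tables up to and including level 2 require a flush.
+ */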
+enum pvr_mmu_sync_level {
+	PVR_MMU_SYNC_LEVEL_NONE = -1,
+	PVR_MMU_SYNC_LEVEL_0 = 0,
+	PVR_MMU_SYNC_LEVEL_1 = 1,
+	PVR_MMU_SYNC_LEVEL_2 = 2,
+};
+
+/**
+ * struct pvr_page_table_ptr - A reference to a single physical page as indexed
+ * by the page table structure.
+ *
+ * Intended for embedding in a &struct pvr_mmu_op_context.
+ */
+struct pvr_page_table_ptr {
+	/**
+	 * @l1_table: A cached handle to the level 1 page table the
+	 * context is currently traversing.
+	 */
+	struct pvr_page_table_l1 *l1_table;
+
+	/**
+	 * @l0_table: A cached handle to the level 0 page table the
+	 * context is currently traversing.
+	 */
+	struct pvr_page_table_l0 *l0_table;
+
+	/**
+	 * @l2_idx: Index into the level 2 page table the context is
+	 * currently referencing.
+	 */
+	u16 l2_idx;
+
+	/**
+	 * @l1_idx: Index into the level 1 page table the context is
+	 * currently referencing.
+	 */
+	u16 l1_idx;
+
+	/**
+	 * @l0_idx: Index into the level 0 page table the context is
+	 * currently referencing.
+	 */
+	u16 l0_idx;
+};
+
+/**
+ * struct pvr_mmu_op_context - context holding data for individual
+ * device-virtual mapping operations. Intended for use with a VM bind operation.
+ */
+struct pvr_mmu_op_context {
+	/** @mmu_ctx: The MMU context associated with the owning VM context. */
+	struct pvr_mmu_context *mmu_ctx;
+
+	/** @map: Data specifically for map operations. */
+	struct {
+		/**
+		 * @sgt: Scatter gather table containing pages pinned for use by
+		 * this context - these are currently pinned when initialising
+		 * the VM bind operation.
+		 */
+		struct sg_table *sgt;
+
+		/** @sgt_offset: Start address of the device-virtual mapping. */
+		u64 sgt_offset;
+	} map;
+
+	/**
+	 * @l1_free_tables: Preallocated l1 page table objects for use by this
+	 * context when creating a page mapping. Linked list created during
+	 * initialisation. Also used to collect page table objects freed by an
+	 * unmap.
+	 */
+	struct pvr_page_table_l1 *l1_free_tables;
+
+	/**
+	 * @l0_free_tables: Preallocated l0 page table objects for use by this
+	 * context when creating a page mapping. Linked list created during
+	 * initialisation. Also used to collect page table objects freed by an
+	 * unmap.
+	 */
+	struct pvr_page_table_l0 *l0_free_tables;
+
+	/**
+	 * @curr_page: A reference to a single physical page as indexed by
+	 * the page table structure.
+	 */
+	struct pvr_page_table_ptr curr_page;
+
+	/**
+	 * @sync_level_required: The maximum level of the page table tree
+	 * structure which has (possibly) been modified since it was last
+	 * flushed to the device.
+	 *
+	 * This field should only be set with pvr_mmu_op_context_require_sync()
+	 * or indirectly by pvr_mmu_op_context_sync_partial().
+	 */
+	enum pvr_mmu_sync_level sync_level_required;
+};
+
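+/*
+ * Typical lifecycle of a &struct pvr_mmu_op_context (illustrative sketch,
+ * documentation only, error handling omitted):
+ *
+ *	op_ctx = pvr_mmu_op_context_create(mmu_ctx, sgt, sgt_offset, size);
+ *	... map or unmap pages using the helpers below ...
+ *	pvr_mmu_op_context_destroy(op_ctx);
+ *
+ * All page table memory a map operation may need is preallocated by
+ * pvr_mmu_op_context_create(), so the helpers below never allocate; they only
+ * take tables from (and return them to) the free lists in this structure.
+ */
+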
+/**
+ * pvr_page_table_l2_insert() - Insert an entry referring to a level 1 page
+ * table into a level 2 page table.
+ * @op_ctx: Target MMU op context pointing at the entry to insert the L1 page
+ * table into.
+ * @child_table: Target level 1 page table to be referenced by the new entry.
+ *
+ * It is the caller's responsibility to ensure @op_ctx.curr_page points to a
+ * valid L2 entry.
+ */
+static void
+pvr_page_table_l2_insert(struct pvr_mmu_op_context *op_ctx,
+			 struct pvr_page_table_l1 *child_table)
+{
+	struct pvr_page_table_l2 *l2_table =
+		&op_ctx->mmu_ctx->page_table_l2;
+	struct pvr_page_table_l2_entry_raw *entry_raw =
+		pvr_page_table_l2_get_entry_raw(l2_table,
+						op_ctx->curr_page.l2_idx);
+
+	pvr_page_table_l2_entry_raw_set(entry_raw,
+					child_table->backing_page.dma_addr);
+
+	child_table->parent = l2_table;
+	child_table->parent_idx = op_ctx->curr_page.l2_idx;
+	l2_table->entries[op_ctx->curr_page.l2_idx] = child_table;
+	++l2_table->entry_count;
+	op_ctx->curr_page.l1_table = child_table;
+}
+
+/**
+ * pvr_page_table_l2_remove() - Remove a level 1 page table from a level 2 page
+ * table.
+ * @op_ctx: Target MMU op context pointing at the L2 entry to remove.
+ *
+ * It is the caller's responsibility to ensure @op_ctx.curr_page points to a
+ * valid L2 entry.
+ */
+static void
+pvr_page_table_l2_remove(struct pvr_mmu_op_context *op_ctx)
+{
+	struct pvr_page_table_l2 *l2_table =
+		&op_ctx->mmu_ctx->page_table_l2;
+	struct pvr_page_table_l2_entry_raw *entry_raw =
+		pvr_page_table_l2_get_entry_raw(l2_table,
+						op_ctx->curr_page.l1_table->parent_idx);
+
+	WARN_ON(op_ctx->curr_page.l1_table->parent != l2_table);
+
+	pvr_page_table_l2_entry_raw_clear(entry_raw);
+
+	l2_table->entries[op_ctx->curr_page.l1_table->parent_idx] = NULL;
+	op_ctx->curr_page.l1_table->parent_idx = PVR_IDX_INVALID;
+	op_ctx->curr_page.l1_table->next_free = op_ctx->l1_free_tables;
+	op_ctx->l1_free_tables = op_ctx->curr_page.l1_table;
+	op_ctx->curr_page.l1_table = NULL;
+
+	--l2_table->entry_count;
+}
+
+/**
+ * pvr_page_table_l1_insert() - Insert an entry referring to a level 0 page
+ * table into a level 1 page table.
+ * @op_ctx: Target MMU op context pointing at the entry to insert the L0 page
+ * table into.
+ * @child_table: L0 page table to insert.
+ *
+ * It is the caller's responsibility to ensure @op_ctx.curr_page points to a
+ * valid L1 entry.
+ */
+static void
+pvr_page_table_l1_insert(struct pvr_mmu_op_context *op_ctx,
+			 struct pvr_page_table_l0 *child_table)
+{
+	struct pvr_page_table_l1_entry_raw *entry_raw =
+		pvr_page_table_l1_get_entry_raw(op_ctx->curr_page.l1_table,
+						op_ctx->curr_page.l1_idx);
+
+	pvr_page_table_l1_entry_raw_set(entry_raw,
+					child_table->backing_page.dma_addr);
+
+	child_table->parent = op_ctx->curr_page.l1_table;
+	child_table->parent_idx = op_ctx->curr_page.l1_idx;
+	op_ctx->curr_page.l1_table->entries[op_ctx->curr_page.l1_idx] = child_table;
+	++op_ctx->curr_page.l1_table->entry_count;
+	op_ctx->curr_page.l0_table = child_table;
+}
+
+/**
+ * pvr_page_table_l1_remove() - Remove a level 0 page table from a level 1 page
+ *                              table.
+ * @op_ctx: Target MMU op context pointing at the L1 entry to remove.
+ *
+ * If this function results in the L1 table becoming empty, it will be removed
+ * from its parent level 2 page table and destroyed.
+ *
+ * It is the caller's responsibility to ensure @op_ctx.curr_page points to a
+ * valid L1 entry.
+ */
+static void
+pvr_page_table_l1_remove(struct pvr_mmu_op_context *op_ctx)
+{
+	struct pvr_page_table_l1_entry_raw *entry_raw =
+		pvr_page_table_l1_get_entry_raw(op_ctx->curr_page.l0_table->parent,
+						op_ctx->curr_page.l0_table->parent_idx);
+
+	WARN_ON(op_ctx->curr_page.l0_table->parent !=
+		op_ctx->curr_page.l1_table);
+
+	pvr_page_table_l1_entry_raw_clear(entry_raw);
+
+	op_ctx->curr_page.l1_table->entries[op_ctx->curr_page.l0_table->parent_idx] = NULL;
+	op_ctx->curr_page.l0_table->parent_idx = PVR_IDX_INVALID;
+	op_ctx->curr_page.l0_table->next_free = op_ctx->l0_free_tables;
+	op_ctx->l0_free_tables = op_ctx->curr_page.l0_table;
+	op_ctx->curr_page.l0_table = NULL;
+
+	if (--op_ctx->curr_page.l1_table->entry_count == 0) {
+		/* Clear the parent L2 page table entry. */
+		if (op_ctx->curr_page.l1_table->parent_idx != PVR_IDX_INVALID)
+			pvr_page_table_l2_remove(op_ctx);
+	}
+}
+
+/**
+ * pvr_page_table_l0_insert() - Insert an entry referring to a physical page
+ * into a level 0 page table.
+ * @op_ctx: Target MMU op context pointing at the L0 entry to insert.
+ * @dma_addr: Target DMA address to be referenced by the new entry.
+ * @flags: Page options to be stored in the new entry.
+ *
+ * It is the caller's responsibility to ensure @op_ctx.curr_page points to a
+ * valid L0 entry.
+ */
+static void
+pvr_page_table_l0_insert(struct pvr_mmu_op_context *op_ctx,
+			 dma_addr_t dma_addr, struct pvr_page_flags_raw flags)
+{
+	struct pvr_page_table_l0_entry_raw *entry_raw =
+		pvr_page_table_l0_get_entry_raw(op_ctx->curr_page.l0_table,
+						op_ctx->curr_page.l0_idx);
+
+	pvr_page_table_l0_entry_raw_set(entry_raw, dma_addr, flags);
+
+	/*
+	 * There is no entry to set here - we don't keep a mirror of
+	 * individual pages.
+	 */
+
+	++op_ctx->curr_page.l0_table->entry_count;
+}
+
+/**
+ * pvr_page_table_l0_remove() - Remove a physical page from a level 0 page
+ * table.
+ * @op_ctx: Target MMU op context pointing at the L0 entry to remove.
+ *
+ * If this function results in the L0 table becoming empty, it will be removed
+ * from its parent L1 page table and destroyed.
+ *
+ * It is the caller's responsibility to ensure @op_ctx.curr_page points to a
+ * valid L0 entry.
+ */
+static void
+pvr_page_table_l0_remove(struct pvr_mmu_op_context *op_ctx)
+{
+	struct pvr_page_table_l0_entry_raw *entry_raw =
+		pvr_page_table_l0_get_entry_raw(op_ctx->curr_page.l0_table,
+						op_ctx->curr_page.l0_idx);
+
+	pvr_page_table_l0_entry_raw_clear(entry_raw);
+
+	/*
+	 * There is no entry to clear here - we don't keep a mirror of
+	 * individual pages.
+	 */
+
+	if (--op_ctx->curr_page.l0_table->entry_count == 0) {
+		/* Clear the parent L1 page table entry. */
+		if (op_ctx->curr_page.l0_table->parent_idx != PVR_IDX_INVALID)
+			pvr_page_table_l1_remove(op_ctx);
+	}
+}
+
+/**
+ * DOC: Page table index utilities
+ */
+
+/**
+ * pvr_page_table_l2_idx() - Calculate the level 2 page table index for a
+ *                           device-virtual address.
+ * @device_addr: Target device-virtual address.
+ *
+ * This function does not perform any bounds checking - it is the caller's
+ * responsibility to ensure that @device_addr is valid before interpreting
+ * the result.
+ *
+ * Return:
+ * The index into a level 2 page table corresponding to @device_addr.
+ */
+static u16
+pvr_page_table_l2_idx(u64 device_addr)
+{
+	return (device_addr & ~ROGUE_MMUCTRL_VADDR_PC_INDEX_CLRMSK) >>
+	       ROGUE_MMUCTRL_VADDR_PC_INDEX_SHIFT;
+}
+
+/**
+ * pvr_page_table_l1_idx() - Calculate the level 1 page table index for a
+ *                           device-virtual address.
+ * @device_addr: Target device-virtual address.
+ *
+ * This function does not perform any bounds checking - it is the caller's
+ * responsibility to ensure that @device_addr is valid before interpreting
+ * the result.
+ *
+ * Return:
+ * The index into a level 1 page table corresponding to @device_addr.
+ */
+static u16
+pvr_page_table_l1_idx(u64 device_addr)
+{
+	return (device_addr & ~ROGUE_MMUCTRL_VADDR_PD_INDEX_CLRMSK) >>
+	       ROGUE_MMUCTRL_VADDR_PD_INDEX_SHIFT;
+}
+
+/**
+ * pvr_page_table_l0_idx() - Calculate the level 0 page table index for a
+ *                           device-virtual address.
+ * @device_addr: Target device-virtual address.
+ *
+ * This function does not perform any bounds checking - it is the caller's
+ * responsibility to ensure that @device_addr is valid before interpreting
+ * the result.
+ *
+ * Return:
+ * The index into a level 0 page table corresponding to @device_addr.
+ */
+static u16
+pvr_page_table_l0_idx(u64 device_addr)
+{
+	return (device_addr & ~ROGUE_MMUCTRL_VADDR_PT_INDEX_CLRMSK) >>
+	       ROGUE_MMUCTRL_PAGE_X_RANGE_SHIFT;
+}
+
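+/*
+ * Illustrative sketch (documentation only): the helpers above decompose a
+ * device-virtual address into one index per page table level. Passing the
+ * same address to all three yields the entry coordinates that
+ * pvr_mmu_op_context_set_curr_page() caches in &struct pvr_page_table_ptr:
+ *
+ *	u16 l2_idx = pvr_page_table_l2_idx(device_addr);
+ *	u16 l1_idx = pvr_page_table_l1_idx(device_addr);
+ *	u16 l0_idx = pvr_page_table_l0_idx(device_addr);
+ */
+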
+/**
+ * DOC: High-level page table operations
+ */
+
+/**
+ * pvr_page_table_l1_get_or_insert() - Retrieves (optionally inserting if
+ * necessary) a level 1 page table from the specified level 2 page table entry.
+ * @op_ctx: Target MMU op context.
+ * @should_insert: [IN] Specifies whether new page tables should be inserted
+ * when empty page table entries are encountered during traversal.
+ *
+ * Return:
+ *  * 0 on success, or
+ *
+ *    If @should_insert is %false:
+ *     * -%ENXIO if a level 1 page table would have been inserted.
+ *
+ *    If @should_insert is %true:
+ *     * Any error encountered while inserting the level 1 page table.
+ */
+static int
+pvr_page_table_l1_get_or_insert(struct pvr_mmu_op_context *op_ctx,
+				bool should_insert)
+{
+	struct pvr_page_table_l2 *l2_table =
+		&op_ctx->mmu_ctx->page_table_l2;
+	struct pvr_page_table_l1 *table;
+	int err;
+
+	if (pvr_page_table_l2_entry_is_valid(l2_table,
+					     op_ctx->curr_page.l2_idx)) {
+		op_ctx->curr_page.l1_table =
+			l2_table->entries[op_ctx->curr_page.l2_idx];
+		return 0;
+	}
+
+	if (!should_insert)
+		return -ENXIO;
+
+	/* Take a preallocated table. */
+	table = op_ctx->l1_free_tables;
+	if (!table)
+		return -ENOMEM;
+
+	err = pvr_page_table_l1_init(table, op_ctx->mmu_ctx->pvr_dev);
+	if (err)
+		return err;
+
+	/* Pop */
+	op_ctx->l1_free_tables = table->next_free;
+	table->next_free = NULL;
+
+	pvr_page_table_l2_insert(op_ctx, table);
+
+	return 0;
+}
+
+/**
+ * pvr_page_table_l0_get_or_insert() - Retrieves (optionally inserting if
+ * necessary) a level 0 page table from the specified level 1 page table entry.
+ * @op_ctx: Target MMU op context.
+ * @should_insert: [IN] Specifies whether new page tables should be inserted
+ * when empty page table entries are encountered during traversal.
+ *
+ * Return:
+ *  * 0 on success,
+ *
+ *    If @should_insert is %false:
+ *     * -%ENXIO if a level 0 page table would have been inserted.
+ *
+ *    If @should_insert is %true:
+ *     * Any error encountered while inserting the level 0 page table.
+ */
+static int
+pvr_page_table_l0_get_or_insert(struct pvr_mmu_op_context *op_ctx,
+				bool should_insert)
+{
+	struct pvr_page_table_l0 *table;
+	int err;
+
+	if (pvr_page_table_l1_entry_is_valid(op_ctx->curr_page.l1_table,
+					     op_ctx->curr_page.l1_idx)) {
+		op_ctx->curr_page.l0_table =
+			op_ctx->curr_page.l1_table->entries[op_ctx->curr_page.l1_idx];
+		return 0;
+	}
+
+	if (!should_insert)
+		return -ENXIO;
+
+	/* Take a preallocated table. */
+	table = op_ctx->l0_free_tables;
+	if (!table)
+		return -ENOMEM;
+
+	err = pvr_page_table_l0_init(table, op_ctx->mmu_ctx->pvr_dev);
+	if (err)
+		return err;
+
+	/* Pop */
+	op_ctx->l0_free_tables = table->next_free;
+	table->next_free = NULL;
+
+	pvr_page_table_l1_insert(op_ctx, table);
+
+	return err;
+}
+
+/**
+ * pvr_mmu_context_create() - Create an MMU context.
+ * @pvr_dev: PVR device associated with owning VM context.
+ *
+ * Return:
+ *  * Newly created MMU context object on success, or
+ *  * -%ENOMEM if no memory is available,
+ *  * Any error code returned by pvr_page_table_l2_init().
+ */
+struct pvr_mmu_context *pvr_mmu_context_create(struct pvr_device *pvr_dev)
+{
+	struct pvr_mmu_context *ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
+	int err;
+
+	if (!ctx)
+		return ERR_PTR(-ENOMEM);
+
+	err = pvr_page_table_l2_init(&ctx->page_table_l2, pvr_dev);
+	if (err) {
+		kfree(ctx);
+		return ERR_PTR(err);
+	}
+
+	ctx->pvr_dev = pvr_dev;
+
+	return ctx;
+}
+
+/**
+ * pvr_mmu_context_destroy() - Destroy an MMU context.
+ * @ctx: Target MMU context.
+ */
+void pvr_mmu_context_destroy(struct pvr_mmu_context *ctx)
+{
+	pvr_page_table_l2_fini(&ctx->page_table_l2);
+	kfree(ctx);
+}
+
+/**
+ * pvr_mmu_get_root_table_dma_addr() - Get the DMA address of the root of the
+ * page table structure behind a VM context.
+ * @ctx: Target MMU context.
+ */
+dma_addr_t pvr_mmu_get_root_table_dma_addr(struct pvr_mmu_context *ctx)
+{
+	return ctx->page_table_l2.backing_page.dma_addr;
+}
+
+/**
+ * pvr_page_table_l1_alloc() - Allocate an L1 page table object.
+ * @ctx: MMU context of owning VM context.
+ *
+ * Return:
+ *  * Newly created page table object on success, or
+ *  * -%ENOMEM if no memory is available,
+ *  * Any error code returned by pvr_page_table_l1_init().
+ */
+static struct pvr_page_table_l1 *
+pvr_page_table_l1_alloc(struct pvr_mmu_context *ctx)
+{
+	int err;
+
+	struct pvr_page_table_l1 *table =
+		kzalloc(sizeof(*table), GFP_KERNEL);
+
+	if (!table)
+		return ERR_PTR(-ENOMEM);
+
+	err = pvr_page_table_l1_init(table, ctx->pvr_dev);
+	if (err) {
+		kfree(table);
+		return ERR_PTR(err);
+	}
+
+	return table;
+}
+
+/**
+ * pvr_page_table_l0_alloc() - Allocate an L0 page table object.
+ * @ctx: MMU context of owning VM context.
+ *
+ * Return:
+ *  * Newly created page table object on success, or
+ *  * -%ENOMEM if no memory is available,
+ *  * Any error code returned by pvr_page_table_l0_init().
+ */
+static struct pvr_page_table_l0 *
+pvr_page_table_l0_alloc(struct pvr_mmu_context *ctx)
+{
+	int err;
+
+	struct pvr_page_table_l0 *table =
+		kzalloc(sizeof(*table), GFP_KERNEL);
+
+	if (!table)
+		return ERR_PTR(-ENOMEM);
+
+	err = pvr_page_table_l0_init(table, ctx->pvr_dev);
+	if (err) {
+		kfree(table);
+		return ERR_PTR(err);
+	}
+
+	return table;
+}
+
+/**
+ * pvr_mmu_op_context_require_sync() - Mark an MMU op context as requiring a
+ * sync operation for the referenced page tables up to a specified level.
+ * @op_ctx: Target MMU op context.
+ * @level: Maximum page table level for which a sync is required.
+ */
+static void
+pvr_mmu_op_context_require_sync(struct pvr_mmu_op_context *op_ctx,
+				enum pvr_mmu_sync_level level)
+{
+	if (op_ctx->sync_level_required < level)
+		op_ctx->sync_level_required = level;
+}
+
+/**
+ * pvr_mmu_op_context_sync_manual() - Trigger a sync of some or all of the
+ * page tables referenced by a MMU op context.
+ * @op_ctx: Target MMU op context.
+ * @level: Maximum page table level to sync.
+ *
+ * Do not call this function directly. Instead use
+ * pvr_mmu_op_context_sync_partial() which is checked against the current
+ * value of &op_ctx->sync_level_required as set by
+ * pvr_mmu_op_context_require_sync().
+ */
+static void
+pvr_mmu_op_context_sync_manual(struct pvr_mmu_op_context *op_ctx,
+			       enum pvr_mmu_sync_level level)
+{
+	/*
+	 * We sync the page table levels in ascending order (starting from the
+	 * leaf node) to ensure consistency.
+	 */
+
+	WARN_ON(level < PVR_MMU_SYNC_LEVEL_NONE);
+
+	if (level <= PVR_MMU_SYNC_LEVEL_NONE)
+		return;
+
+	if (op_ctx->curr_page.l0_table)
+		pvr_page_table_l0_sync(op_ctx->curr_page.l0_table);
+
+	if (level < PVR_MMU_SYNC_LEVEL_1)
+		return;
+
+	if (op_ctx->curr_page.l1_table)
+		pvr_page_table_l1_sync(op_ctx->curr_page.l1_table);
+
+	if (level < PVR_MMU_SYNC_LEVEL_2)
+		return;
+
+	pvr_page_table_l2_sync(&op_ctx->mmu_ctx->page_table_l2);
+}
+
+/**
+ * pvr_mmu_op_context_sync_partial() - Trigger a sync of some or all of the
+ * page tables referenced by a MMU op context.
+ * @op_ctx: Target MMU op context.
+ * @level: Requested page table level to sync up to (inclusive).
+ *
+ * If @level is greater than the maximum level recorded by @op_ctx as requiring
+ * a sync operation, only the previously recorded maximum will be used.
+ *
+ * Additionally, if @level is greater than or equal to the maximum level
+ * recorded by @op_ctx as requiring a sync operation, that maximum level will be
+ * reset as a full sync will be performed. This is equivalent to calling
+ * pvr_mmu_op_context_sync().
+ */
+static void
+pvr_mmu_op_context_sync_partial(struct pvr_mmu_op_context *op_ctx,
+				enum pvr_mmu_sync_level level)
+{
+	/*
+	 * If the requested sync level is greater than or equal to the
+	 * currently required sync level, we do two things:
+	 *  * Don't waste time syncing levels we haven't previously marked as
+	 *    requiring a sync, and
+	 *  * Reset the required sync level since we are about to sync
+	 *    everything that was previously marked as requiring a sync.
+	 */
+	if (level >= op_ctx->sync_level_required) {
+		level = op_ctx->sync_level_required;
+		op_ctx->sync_level_required = PVR_MMU_SYNC_LEVEL_NONE;
+	}
+
+	pvr_mmu_op_context_sync_manual(op_ctx, level);
+}
+
+/**
+ * pvr_mmu_op_context_sync() - Trigger a sync of every page table referenced by
+ * a MMU op context.
+ * @op_ctx: Target MMU op context.
+ *
+ * The maximum level marked internally as requiring a sync will be reset so
+ * that subsequent calls to this function will be no-ops unless @op_ctx is
+ * otherwise updated.
+ */
+static void
+pvr_mmu_op_context_sync(struct pvr_mmu_op_context *op_ctx)
+{
+	pvr_mmu_op_context_sync_manual(op_ctx, op_ctx->sync_level_required);
+
+	op_ctx->sync_level_required = PVR_MMU_SYNC_LEVEL_NONE;
+}
+
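+/*
+ * Illustrative sketch (documentation only) of how the three sync helpers
+ * above interact:
+ *
+ *	pvr_mmu_op_context_require_sync(op_ctx, PVR_MMU_SYNC_LEVEL_1);
+ *
+ *	pvr_mmu_op_context_sync_partial(op_ctx, PVR_MMU_SYNC_LEVEL_0);
+ *		(flushes only the current L0 table; level 1 stays recorded
+ *		 as requiring a sync)
+ *
+ *	pvr_mmu_op_context_sync(op_ctx);
+ *		(flushes the current L0 and L1 tables and resets the recorded
+ *		 level to PVR_MMU_SYNC_LEVEL_NONE)
+ */
+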
+/**
+ * pvr_mmu_op_context_load_tables() - Load pointers to tables in each level of
+ * the page table tree structure needed to reference the physical page
+ * referenced by a MMU op context.
+ * @op_ctx: Target MMU op context.
+ * @should_create: Specifies whether new page tables should be created when
+ * empty page table entries are encountered during traversal.
+ * @load_level_required: Maximum page table level to load.
+ *
+ * If @should_create is %true, this function may modify the stored required
+ * sync level of @op_ctx as new page tables are created and inserted into their
+ * respective parents.
+ *
+ * Since there is only one root page table, it is technically incorrect to call
+ * this function with a value of @load_level_required greater than or equal to
+ * the root level number. However, this is not explicitly disallowed here.
+ *
+ * Return:
+ *  * 0 on success,
+ *  * Any error returned by pvr_page_table_l1_get_or_insert() if
+ *    @load_level_required >= 1 except -%ENXIO, or
+ *  * Any error returned by pvr_page_table_l0_get_or_insert() if
+ *    @load_level_required >= 0 except -%ENXIO.
+ */
+static int
+pvr_mmu_op_context_load_tables(struct pvr_mmu_op_context *op_ctx,
+			       bool should_create,
+			       enum pvr_mmu_sync_level load_level_required)
+{
+	const struct pvr_page_table_l1 *l1_head_before = op_ctx->l1_free_tables;
+	const struct pvr_page_table_l0 *l0_head_before = op_ctx->l0_free_tables;
+	int err;
+
+	/* Clear tables we're about to fetch in case of error states. */
+	if (load_level_required >= PVR_MMU_SYNC_LEVEL_1)
+		op_ctx->curr_page.l1_table = NULL;
+
+	if (load_level_required >= PVR_MMU_SYNC_LEVEL_0)
+		op_ctx->curr_page.l0_table = NULL;
+
+	/* Get or create L1 page table. */
+	if (load_level_required >= PVR_MMU_SYNC_LEVEL_1) {
+		err = pvr_page_table_l1_get_or_insert(op_ctx, should_create);
+		if (err) {
+			/*
+			 * If @should_create is %false and no L1 page table was
+			 * found, return early but without an error. Since
+			 * pvr_page_table_l1_get_or_insert() can only return
+			 * -%ENXIO if @should_create is %false, there is no
+			 * need to check it here.
+			 */
+			if (err == -ENXIO)
+				err = 0;
+
+			return err;
+		}
+	}
+
+	/* Get or create L0 page table. */
+	if (load_level_required >= PVR_MMU_SYNC_LEVEL_0) {
+		err = pvr_page_table_l0_get_or_insert(op_ctx, should_create);
+		if (err) {
+			/*
+			 * If @should_create is %false and no L0 page table was
+			 * found, return early but without an error. Since
+			 * pvr_page_table_l0_get_or_insert() can only return
+			 * -%ENXIO if @should_create is %false, there is no
+			 * need to check it here.
+			 */
+			if (err == -ENXIO)
+				err = 0;
+
+			/*
+			 * At this point, an L1 page table could have been
+			 * inserted but is now empty due to the failed attempt
+			 * at inserting an L0 page table. In this instance, we
+			 * must remove the empty L1 page table ourselves as
+			 * pvr_page_table_l1_remove() is never called as part
+			 * of the error path in
+			 * pvr_page_table_l0_get_or_insert().
+			 */
+			if (l1_head_before != op_ctx->l1_free_tables) {
+				pvr_page_table_l2_remove(op_ctx);
+				pvr_mmu_op_context_require_sync(op_ctx, PVR_MMU_SYNC_LEVEL_2);
+			}
+
+			return err;
+		}
+	}
+
+	/*
+	 * A sync is only needed if table objects were inserted. This can be
+	 * inferred by checking if the pointer at the head of the linked list
+	 * has changed.
+	 */
+	if (l1_head_before != op_ctx->l1_free_tables)
+		pvr_mmu_op_context_require_sync(op_ctx, PVR_MMU_SYNC_LEVEL_2);
+	else if (l0_head_before != op_ctx->l0_free_tables)
+		pvr_mmu_op_context_require_sync(op_ctx, PVR_MMU_SYNC_LEVEL_1);
+
+	return 0;
+}
+
+/**
+ * pvr_mmu_op_context_set_curr_page() - Reassign the current page of an MMU op
+ * context, syncing any page tables previously assigned to it which are no
+ * longer relevant.
+ * @op_ctx: Target MMU op context.
+ * @device_addr: New pointer target.
+ * @should_create: Specify whether new page tables should be created when
+ * empty page table entries are encountered during traversal.
+ *
+ * This function performs a full sync on the pointer, regardless of which
+ * levels are modified.
+ *
+ * Return:
+ *  * 0 on success, or
+ *  * Any error returned by pvr_mmu_op_context_load_tables().
+ */
+static int
+pvr_mmu_op_context_set_curr_page(struct pvr_mmu_op_context *op_ctx,
+				 u64 device_addr, bool should_create)
+{
+	pvr_mmu_op_context_sync(op_ctx);
+
+	op_ctx->curr_page.l2_idx = pvr_page_table_l2_idx(device_addr);
+	op_ctx->curr_page.l1_idx = pvr_page_table_l1_idx(device_addr);
+	op_ctx->curr_page.l0_idx = pvr_page_table_l0_idx(device_addr);
+	op_ctx->curr_page.l1_table = NULL;
+	op_ctx->curr_page.l0_table = NULL;
+
+	return pvr_mmu_op_context_load_tables(op_ctx, should_create,
+					      PVR_MMU_SYNC_LEVEL_1);
+}
+
+/**
+ * pvr_mmu_op_context_next_page() - Advance the current page of an MMU op
+ * context.
+ * @op_ctx: Target MMU op context.
+ * @should_create: Specify whether new page tables should be created when
+ * empty page table entries are encountered during traversal.
+ *
+ * If @should_create is %false, it is the caller's responsibility to verify that
+ * the state of the table references in @op_ctx is valid on return. If -%ENXIO
+ * is returned, at least one of the table references is invalid. It should be
+ * noted that @op_ctx as a whole will be left in a valid state if -%ENXIO is
+ * returned, unlike other error codes. The caller should check which references
+ * are invalid by comparing them to %NULL. Only the level 2 page table, which
+ * is held in the owning MMU context, is guaranteed to be valid, since it is
+ * the root of the page table tree structure.
+ *
+ * Return:
+ *  * 0 on success,
+ *  * -%EPERM if the operation would wrap at the top of the page table
+ *    hierarchy,
+ *  * -%ENXIO if @should_create is %false and a page table of any level would
+ *    have otherwise been created, or
+ *  * Any error returned while attempting to create missing page tables if
+ *    @should_create is %true.
+ */
+static int
+pvr_mmu_op_context_next_page(struct pvr_mmu_op_context *op_ctx,
+			     bool should_create)
+{
+	s8 load_level_required = PVR_MMU_SYNC_LEVEL_NONE;
+
+	if (++op_ctx->curr_page.l0_idx != ROGUE_MMUCTRL_ENTRIES_PT_VALUE_X)
+		goto load_tables;
+
+	op_ctx->curr_page.l0_idx = 0;
+	load_level_required = PVR_MMU_SYNC_LEVEL_0;
+
+	if (++op_ctx->curr_page.l1_idx != ROGUE_MMUCTRL_ENTRIES_PD_VALUE)
+		goto load_tables;
+
+	op_ctx->curr_page.l1_idx = 0;
+	load_level_required = PVR_MMU_SYNC_LEVEL_1;
+
+	if (++op_ctx->curr_page.l2_idx != ROGUE_MMUCTRL_ENTRIES_PC_VALUE)
+		goto load_tables;
+
+	/*
+	 * If the pattern continued, we would set &op_ctx->curr_page.l2_idx to
+	 * zero here. However, that would wrap the top layer of the page table
+	 * hierarchy which is not a valid operation. Instead, we warn and return
+	 * an error.
+	 */
+	WARN(true,
+	     "%s(%p) attempted to loop the top of the page table hierarchy",
+	     __func__, op_ctx);
+	return -EPERM;
+
+	/* If indices have wrapped, we need to load new tables. */
+load_tables:
+	/* First, flush tables which will be unloaded. */
+	pvr_mmu_op_context_sync_partial(op_ctx, load_level_required);
+
+	/* Then load tables from the required level down. */
+	return pvr_mmu_op_context_load_tables(op_ctx, should_create,
+					      load_level_required);
+}
+
+/**
+ * DOC: Single page operations
+ */
+
+/**
+ * pvr_page_create() - Create a device-virtual memory page and insert it into
+ * a level 0 page table.
+ * @op_ctx: Target MMU op context pointing at the device-virtual address of the
+ * target page.
+ * @dma_addr: DMA address of the physical page backing the created page.
+ * @flags: Page options saved on the level 0 page table entry for reading by
+ *         the device.
+ *
+ * Return:
+ *  * 0 on success, or
+ *  * -%EEXIST if the requested page already exists.
+ */
+static int
+pvr_page_create(struct pvr_mmu_op_context *op_ctx, dma_addr_t dma_addr,
+		struct pvr_page_flags_raw flags)
+{
+	/* Do not create a new page if one already exists. */
+	if (pvr_page_table_l0_entry_is_valid(op_ctx->curr_page.l0_table,
+					     op_ctx->curr_page.l0_idx)) {
+		return -EEXIST;
+	}
+
+	pvr_page_table_l0_insert(op_ctx, dma_addr, flags);
+
+	pvr_mmu_op_context_require_sync(op_ctx, PVR_MMU_SYNC_LEVEL_0);
+
+	return 0;
+}
+
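+/*
+ * Illustrative sketch (documentation only, error handling omitted): mapping a
+ * physically contiguous run of pages starting at device_addr. device_addr,
+ * dma_addr, page_count and flags are placeholders; in the real map path the
+ * addresses come from the op context's sg_table.
+ *
+ *	err = pvr_mmu_op_context_set_curr_page(op_ctx, device_addr, true);
+ *
+ *	for (u64 i = 0; i < page_count; ++i) {
+ *		if (i > 0)
+ *			err = pvr_mmu_op_context_next_page(op_ctx, true);
+ *
+ *		err = pvr_page_create(op_ctx,
+ *				      dma_addr + i * PVR_DEVICE_PAGE_SIZE,
+ *				      flags);
+ *	}
+ */
+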
+/**
+ * pvr_page_destroy() - Destroy a device page after removing it from its
+ * parent level 0 page table.
+ * @op_ctx: Target MMU op context pointing at the device-virtual address of the
+ *          target page.
+ */
+static void
+pvr_page_destroy(struct pvr_mmu_op_context *op_ctx)
+{
+	/* Do nothing if the page does not exist. */
+	if (!pvr_page_table_l0_entry_is_valid(op_ctx->curr_page.l0_table,
+					      op_ctx->curr_page.l0_idx)) {
+		return;
+	}
+
+	/* Clear the parent L0 page table entry. */
+	pvr_page_table_l0_remove(op_ctx);
+
+	pvr_mmu_op_context_require_sync(op_ctx, PVR_MMU_SYNC_LEVEL_0);
+}
+
+/**
+ * pvr_mmu_op_context_destroy() - Destroy an MMU op context.
+ * @op_ctx: Target MMU op context.
+ */
+void pvr_mmu_op_context_destroy(struct pvr_mmu_op_context *op_ctx)
+{
+	const bool flush_caches =
+		op_ctx->sync_level_required != PVR_MMU_SYNC_LEVEL_NONE;
+
+	pvr_mmu_op_context_sync(op_ctx);
+
+	if (flush_caches)
+		WARN_ON(pvr_mmu_flush(op_ctx->mmu_ctx->pvr_dev));
+
+	while (op_ctx->l0_free_tables) {
+		struct pvr_page_table_l0 *tmp = op_ctx->l0_free_tables;
+
+		op_ctx->l0_free_tables = op_ctx->l0_free_tables->next_free;
+		pvr_page_table_l0_free(tmp);
+	}
+
+	while (op_ctx->l1_free_tables) {
+		struct pvr_page_table_l1 *tmp = op_ctx->l1_free_tables;
+
+		op_ctx->l1_free_tables = op_ctx->l1_free_tables->next_free;
+		pvr_page_table_l1_free(tmp);
+	}
+
+	kfree(op_ctx);
+}
+
+/**
+ * pvr_mmu_op_context_create() - Create an MMU op context.
+ * @ctx: MMU context associated with owning VM context.
+ * @sgt: Scatter gather table containing pages pinned for use by this context.
+ * @sgt_offset: Start offset of the requested device-virtual memory mapping.
+ * @size: Size in bytes of the requested device-virtual memory mapping. For an
+ * unmapping, this should be zero so that no page tables are allocated.
+ *
+ * Returns:
+ *  * Newly created MMU op context object on success, or
+ *  * -%ENOMEM if no memory is available,
+ *  * Any error code returned by pvr_page_table_l1_alloc() or
+ *    pvr_page_table_l0_alloc().
+ */
+struct pvr_mmu_op_context *
+pvr_mmu_op_context_create(struct pvr_mmu_context *ctx, struct sg_table *sgt,
+			  u64 sgt_offset, u64 size)
+{
+	int err;
+
+	struct pvr_mmu_op_context *op_ctx =
+		kzalloc(sizeof(*op_ctx), GFP_KERNEL);
+
+	if (!op_ctx)
+		return ERR_PTR(-ENOMEM);
+
+	op_ctx->mmu_ctx = ctx;
+	op_ctx->map.sgt = sgt;
+	op_ctx->map.sgt_offset = sgt_offset;
+	op_ctx->sync_level_required = PVR_MMU_SYNC_LEVEL_NONE;
+
+	if (size) {
+		const u32 l1_start_idx = pvr_page_table_l2_idx(sgt_offset);
+		const u32 l1_end_idx = pvr_page_table_l2_idx(sgt_offset + size);
+		const u32 l1_count = l1_end_idx - l1_start_idx + 1;
+		const u32 l0_start_idx = pvr_page_table_l1_idx(sgt_offset);
+		const u32 l0_end_idx = pvr_page_table_l1_idx(sgt_offset + size);
+		const u32 l0_count = l0_end_idx - l0_start_idx + 1;
+
+		/*
+		 * Alloc and push page table entries until we have enough of
+		 * each type, ending with linked lists of l0 and l1 entries in
+		 * reverse order.
+		 */
+		for (int i = 0; i < l1_count; i++) {
+			struct pvr_page_table_l1 *l1_tmp =
+				pvr_page_table_l1_alloc(ctx);
+
+			err = PTR_ERR_OR_ZERO(l1_tmp);
+			if (err)
+				goto err_cleanup;
+
+			l1_tmp->next_free = op_ctx->l1_free_tables;
+			op_ctx->l1_free_tables = l1_tmp;
+		}
+
+		for (int i = 0; i < l0_count; i++) {
+			struct pvr_page_table_l0 *l0_tmp =
+				pvr_page_table_l0_alloc(ctx);
+
+			err = PTR_ERR_OR_ZERO(l0_tmp);
+			if (err)
+				goto err_cleanup;
+
+			l0_tmp->next_free = op_ctx->l0_free_tables;
+			op_ctx->l0_free_tables = l0_tmp;
+		}
+	}
+
+	return op_ctx;
+
+err_cleanup:
+	pvr_mmu_op_context_destroy(op_ctx);
+
+	return ERR_PTR(err);
+}
+
+/**
+ * pvr_mmu_op_context_unmap_curr_page() - Unmap pages from a memory context
+ * starting from the current page of an MMU op context.
+ * @op_ctx: Target MMU op context pointing at the first page to unmap.
+ * @nr_pages: Number of pages to unmap.
+ *
+ * Return:
+ *  * 0 on success, or
+ *  * Any error encountered while advancing @op_ctx.curr_page with
+ *    pvr_mmu_op_context_next_page() (except -%ENXIO).
+ */
+static int
+pvr_mmu_op_context_unmap_curr_page(struct pvr_mmu_op_context *op_ctx,
+				   u64 nr_pages)
+{
+	int err;
+
+	if (nr_pages == 0)
+		return 0;
+
+	/*
+	 * Destroy first page outside loop, as it doesn't require a page
+	 * advance beforehand. If the L0 page table reference in
+	 * @op_ctx.curr_page is %NULL, there cannot be a mapped page at
+	 * @op_ctx.curr_page (so skip ahead).
+	 */
+	if (op_ctx->curr_page.l0_table)
+		pvr_page_destroy(op_ctx);
+
+	for (u64 page = 1; page < nr_pages; ++page) {
+		err = pvr_mmu_op_context_next_page(op_ctx, false);
+		/*
+		 * If the page table tree structure at @op_ctx.curr_page is
+		 * incomplete, skip ahead. We don't care about unmapping pages
+		 * that cannot exist.
+		 *
+		 * FIXME: This could be made more efficient by jumping ahead
+		 * using pvr_mmu_op_context_set_curr_page().
+		 */
+		if (err == -ENXIO)
+			continue;
+		else if (err)
+			return err;
+
+		pvr_page_destroy(op_ctx);
+	}
+
+	return 0;
+}
+
+/**
+ * pvr_mmu_unmap() - Unmap pages from a memory context.
+ * @op_ctx: Target MMU op context.
+ * @device_addr: First device-virtual address to unmap.
+ * @size: Size in bytes to unmap.
+ *
+ * The total amount of device-virtual memory unmapped is @size rounded down to
+ * a multiple of %PVR_DEVICE_PAGE_SIZE.
+ *
+ * Returns:
+ *  * 0 on success, or
+ *  * Any error code returned by pvr_mmu_op_context_set_curr_page(), or
+ *  * Any error code returned by pvr_mmu_op_context_unmap_curr_page().
+ */
+int pvr_mmu_unmap(struct pvr_mmu_op_context *op_ctx, u64 device_addr, u64 size)
+{
+	int err = pvr_mmu_op_context_set_curr_page(op_ctx, device_addr, false);
+
+	if (err)
+		return err;
+
+	return pvr_mmu_op_context_unmap_curr_page(op_ctx,
+						  size >> PVR_DEVICE_PAGE_SHIFT);
+}
+
+/**
+ * pvr_mmu_map_sgl() - Map part of a scatter-gather table entry to
+ * device-virtual memory.
+ * @op_ctx: Target MMU op context pointing to the first page that should be
+ * mapped.
+ * @sgl: Target scatter-gather table entry.
+ * @offset: Offset into @sgl to map from. Must result in a starting address
+ * from @sgl which is CPU page-aligned.
+ * @size: Size of the memory to be mapped in bytes. Must be a non-zero multiple
+ * of the device page size.
+ * @page_flags: Page options to be applied to every device-virtual memory page
+ * in the created mapping.
+ *
+ * Return:
+ *  * 0 on success,
+ *  * -%EINVAL if the range specified by @offset and @size is not completely
+ *    within @sgl, or
+ *  * Any error encountered while creating a page with pvr_page_create(), or
+ *  * Any error encountered while advancing @op_ctx.curr_page with
+ *    pvr_mmu_op_context_next_page().
+ */
+static int
+pvr_mmu_map_sgl(struct pvr_mmu_op_context *op_ctx, struct scatterlist *sgl,
+		u64 offset, u64 size, struct pvr_page_flags_raw page_flags)
+{
+	const unsigned int pages = size >> PVR_DEVICE_PAGE_SHIFT;
+	dma_addr_t dma_addr = sg_dma_address(sgl) + offset;
+	const unsigned int dma_len = sg_dma_len(sgl);
+	struct pvr_page_table_ptr ptr_copy;
+	unsigned int page;
+	int err;
+
+	if (size > dma_len || offset > dma_len - size)
+		return -EINVAL;
+
+	/*
+	 * Before progressing, save a copy of the start pointer so we can use
+	 * it again if we enter an error state and have to destroy pages.
+	 */
+	memcpy(&ptr_copy, &op_ctx->curr_page, sizeof(ptr_copy));
+
+	/*
+	 * Create first page outside loop, as it doesn't require a page advance
+	 * beforehand.
+	 */
+	err = pvr_page_create(op_ctx, dma_addr, page_flags);
+	if (err)
+		return err;
+
+	for (page = 1; page < pages; ++page) {
+		err = pvr_mmu_op_context_next_page(op_ctx, true);
+		if (err)
+			goto err_destroy_pages;
+
+		dma_addr += PVR_DEVICE_PAGE_SIZE;
+
+		err = pvr_page_create(op_ctx, dma_addr, page_flags);
+		if (err)
+			goto err_destroy_pages;
+	}
+
+	return 0;
+
+err_destroy_pages:
+	memcpy(&op_ctx->curr_page, &ptr_copy, sizeof(op_ctx->curr_page));
+	err = pvr_mmu_op_context_unmap_curr_page(op_ctx, page);
+
+	return err;
+}
+
+/**
+ * pvr_mmu_map() - Map an object's virtual memory to physical memory.
+ * @op_ctx: Target MMU op context.
+ * @size: Size of memory to be mapped in bytes. Must be a non-zero multiple
+ * of the device page size.
+ * @flags: Flags from pvr_gem_object associated with the mapping.
+ * @device_addr: Virtual device address to map to. Must be device page-aligned.
+ *
+ * Returns:
+ *  * 0 on success, or
+ *  * -%EINVAL if @size or @op_ctx->map.sgt_offset is not device page-aligned,
+ *    or if setting the initial page of the mapping fails, or
+ *  * Any error code returned by pvr_mmu_map_sgl(), or
+ *  * Any error code returned by pvr_mmu_op_context_next_page().
+ */
+int pvr_mmu_map(struct pvr_mmu_op_context *op_ctx, u64 size, u64 flags,
+		u64 device_addr)
+{
+	struct pvr_page_table_ptr ptr_copy;
+	struct pvr_page_flags_raw flags_raw;
+	struct scatterlist *sgl;
+	u64 mapped_size = 0;
+	unsigned int count;
+	int err;
+
+	if (!size)
+		return 0;
+
+	if ((op_ctx->map.sgt_offset | size) & ~PVR_DEVICE_PAGE_MASK)
+		return -EINVAL;
+
+	err = pvr_mmu_op_context_set_curr_page(op_ctx, device_addr, true);
+	if (err)
+		return -EINVAL;
+
+	memcpy(&ptr_copy, &op_ctx->curr_page, sizeof(ptr_copy));
+
+	flags_raw = pvr_page_flags_raw_create(false, false,
+					      flags & DRM_PVR_BO_DEVICE_BYPASS_CACHE,
+					      flags & DRM_PVR_BO_DEVICE_PM_FW_PROTECT);
+
+	/* Map scatter gather table */
+	for_each_sgtable_dma_sg(op_ctx->map.sgt, sgl, count) {
+		const size_t sgl_len = sg_dma_len(sgl);
+		u64 sgl_offset, map_sgl_len;
+
+		if (sgl_len <= op_ctx->map.sgt_offset) {
+			op_ctx->map.sgt_offset -= sgl_len;
+			continue;
+		}
+
+		sgl_offset = op_ctx->map.sgt_offset;
+		map_sgl_len = min_t(u64, sgl_len - sgl_offset, size - mapped_size);
+
+		err = pvr_mmu_map_sgl(op_ctx, sgl, sgl_offset, map_sgl_len,
+				      flags_raw);
+		if (err)
+			break;
+
+		/*
+		 * Flag the L0 page table as requiring a flush when the MMU op
+		 * context is destroyed.
+		 */
+		pvr_mmu_op_context_require_sync(op_ctx, PVR_MMU_SYNC_LEVEL_0);
+
+		op_ctx->map.sgt_offset = 0;
+		mapped_size += map_sgl_len;
+
+		if (mapped_size >= size)
+			break;
+
+		err = pvr_mmu_op_context_next_page(op_ctx, true);
+		if (err)
+			break;
+	}
+
+	if (err && mapped_size) {
+		memcpy(&op_ctx->curr_page, &ptr_copy, sizeof(op_ctx->curr_page));
+		pvr_mmu_op_context_unmap_curr_page(op_ctx,
+						   mapped_size >> PVR_DEVICE_PAGE_SHIFT);
+	}
+
+	return err;
+}
diff --git a/drivers/gpu/drm/imagination/pvr_mmu.h b/drivers/gpu/drm/imagination/pvr_mmu.h
new file mode 100644
index 000000000000..bf93c5ffc86a
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_mmu.h
@@ -0,0 +1,108 @@
+/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#ifndef PVR_MMU_H
+#define PVR_MMU_H
+
+#include <linux/memory.h>
+#include <linux/types.h>
+
+/* Forward declaration from "pvr_device.h" */
+struct pvr_device;
+
+/* Forward declaration from "pvr_mmu.c" */
+struct pvr_mmu_context;
+struct pvr_mmu_op_context;
+
+/* Forward declaration from "pvr_vm.c" */
+struct pvr_vm_context;
+
+/* Forward declaration from <linux/scatterlist.h> */
+struct sg_table;
+
+/**
+ * DOC: Public API (constants)
+ *
+ * .. c:macro:: PVR_DEVICE_PAGE_SIZE
+ *
+ *    Fixed page size referenced by leaf nodes in the page table tree
+ *    structure. In the current implementation, this value is pegged to the
+ *    CPU page size (%PAGE_SIZE). It is therefore an error to specify a CPU
+ *    page size which is not also a supported device page size. The supported
+ *    device page sizes are: 4KiB, 16KiB, 64KiB, 256KiB, 1MiB and 2MiB.
+ *
+ * .. c:macro:: PVR_DEVICE_PAGE_SHIFT
+ *
+ *    Shift value used to efficiently multiply or divide by
+ *    %PVR_DEVICE_PAGE_SIZE.
+ *
+ *    This value is derived from %PVR_DEVICE_PAGE_SIZE.
+ *
+ * .. c:macro:: PVR_DEVICE_PAGE_MASK
+ *
+ *    Mask used to round a value down to the nearest multiple of
+ *    %PVR_DEVICE_PAGE_SIZE. When bitwise negated, it will indicate whether a
+ *    value is already a multiple of %PVR_DEVICE_PAGE_SIZE.
+ *
+ *    This value is derived from %PVR_DEVICE_PAGE_SIZE.
+ */
+
+/* PVR_DEVICE_PAGE_SIZE determines the page size */
+#define PVR_DEVICE_PAGE_SIZE (PAGE_SIZE)
+#define PVR_DEVICE_PAGE_SHIFT (PAGE_SHIFT)
+#define PVR_DEVICE_PAGE_MASK (PAGE_MASK)
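+
+/*
+ * Usage sketch (illustrative only; mirrors how these macros are used in
+ * pvr_mmu.c): converting a byte count to a number of device pages, and
+ * checking that an offset/size pair is device page-aligned.
+ *
+ *	u64 nr_pages = size >> PVR_DEVICE_PAGE_SHIFT;
+ *
+ *	if ((offset | size) & ~PVR_DEVICE_PAGE_MASK)
+ *		return -EINVAL;
+ */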
+
+/**
+ * DOC: Page table index utilities (constants)
+ *
+ * .. c:macro:: PVR_PAGE_TABLE_ADDR_SPACE_SIZE
+ *
+ *    Size of device-virtual address space which can be represented in the page
+ *    table structure.
+ *
+ *    This value is checked at runtime against
+ *    &pvr_device_features.virtual_address_space_bits by
+ *    pvr_vm_create_context(), which will return an error if the feature value
+ *    does not match this constant.
+ *
+ *    .. admonition:: Future work
+ *
+ *       It should be possible to support other values of
+ *       &pvr_device_features.virtual_address_space_bits, but so far no
+ *       hardware has been created which advertises an unsupported value.
+ *
+ * .. c:macro:: PVR_PAGE_TABLE_ADDR_BITS
+ *
+ *    Number of bits needed to represent any value less than
+ *    %PVR_PAGE_TABLE_ADDR_SPACE_SIZE exactly.
+ *
+ * .. c:macro:: PVR_PAGE_TABLE_ADDR_MASK
+ *
+ *    Bitmask of device-virtual addresses which are valid in the page table
+ *    structure.
+ *
+ *    This value is derived from %PVR_PAGE_TABLE_ADDR_SPACE_SIZE, so the same
+ *    notes on that constant apply here.
+ */
+#define PVR_PAGE_TABLE_ADDR_SPACE_SIZE SZ_1T
+#define PVR_PAGE_TABLE_ADDR_BITS __ffs(PVR_PAGE_TABLE_ADDR_SPACE_SIZE)
+#define PVR_PAGE_TABLE_ADDR_MASK (PVR_PAGE_TABLE_ADDR_SPACE_SIZE - 1)
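+
+/*
+ * For reference (illustrative): with PVR_PAGE_TABLE_ADDR_SPACE_SIZE == SZ_1T,
+ * PVR_PAGE_TABLE_ADDR_BITS evaluates to 40 and PVR_PAGE_TABLE_ADDR_MASK to
+ * 0xffffffffff, i.e. valid device-virtual addresses occupy bits [39:0].
+ */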
+
+int pvr_mmu_flush(struct pvr_device *pvr_dev);
+
+struct pvr_mmu_context *pvr_mmu_context_create(struct pvr_device *pvr_dev);
+void pvr_mmu_context_destroy(struct pvr_mmu_context *ctx);
+
+dma_addr_t pvr_mmu_get_root_table_dma_addr(struct pvr_mmu_context *ctx);
+
+void pvr_mmu_op_context_destroy(struct pvr_mmu_op_context *op_ctx);
+struct pvr_mmu_op_context *
+pvr_mmu_op_context_create(struct pvr_mmu_context *ctx,
+			  struct sg_table *sgt, u64 sgt_offset, u64 size);
+
+int pvr_mmu_map(struct pvr_mmu_op_context *op_ctx, u64 size, u64 flags,
+		u64 device_addr);
+int pvr_mmu_unmap(struct pvr_mmu_op_context *op_ctx, u64 device_addr, u64 size);
+
+#endif /* PVR_MMU_H */
+
diff --git a/drivers/gpu/drm/imagination/pvr_vm.c b/drivers/gpu/drm/imagination/pvr_vm.c
new file mode 100644
index 000000000000..616fad3a3325
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_vm.c
@@ -0,0 +1,890 @@
+// SPDX-License-Identifier: GPL-2.0 OR MIT
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#include "pvr_vm.h"
+
+#include "pvr_device.h"
+#include "pvr_drv.h"
+#include "pvr_gem.h"
+#include "pvr_mmu.h"
+#include "pvr_rogue_fwif.h"
+#include "pvr_rogue_heap_config.h"
+
+#include <drm/drm_gem.h>
+#include <drm/drm_gpuva_mgr.h>
+
+#include <linux/container_of.h>
+#include <linux/err.h>
+#include <linux/errno.h>
+#include <linux/gfp_types.h>
+#include <linux/kref.h>
+#include <linux/mutex.h>
+#include <linux/stddef.h>
+
+/**
+ * DOC: Memory context
+ *
+ * This is the "top level" datatype in the VM code. It's exposed in the public
+ * API as an opaque handle.
+ */
+
+/**
+ * struct pvr_vm_context - Context type which encapsulates an entire page table
+ * tree structure.
+ * @pvr_dev: The PowerVR device to which this context is bound. This binding is
+ *           immutable for the life of the context.
+ * @mmu_ctx: The context for binding to physical memory.
+ * @gpuva_mgr: GPUVA manager object associated with this context.
+ * @lock: Global lock on this entire structure of page tables.
+ * @fw_mem_ctx_obj: Firmware object representing firmware memory context.
+ * @ref_count: Reference count of object.
+ */
+struct pvr_vm_context {
+	struct pvr_device *pvr_dev;
+	struct pvr_mmu_context *mmu_ctx;
+	struct drm_gpuva_manager gpuva_mgr;
+	struct mutex lock;
+	struct pvr_fw_object *fw_mem_ctx_obj;
+	struct kref ref_count;
+};
+
+/**
+ * pvr_vm_get_page_table_root_addr() - Get the DMA address of the root of the
+ *                                     page table structure behind a VM context.
+ * @vm_ctx: Target VM context.
+ */
+dma_addr_t pvr_vm_get_page_table_root_addr(struct pvr_vm_context *vm_ctx)
+{
+	return pvr_mmu_get_root_table_dma_addr(vm_ctx->mmu_ctx);
+}
+
+/**
+ * DOC: Memory mappings
+ */
+
+/**
+ * pvr_vm_gpuva_mapping_init() - Setup a mapping object with the specified
+ * parameters ready for mapping using pvr_vm_gpuva_mapping_map().
+ * @va: Pointer to drm_gpuva mapping object.
+ * @device_addr: Device-virtual address at the start of the mapping.
+ * @size: Size of the desired mapping.
+ * @pvr_obj: Target PowerVR memory object.
+ * @pvr_obj_offset: Offset into @pvr_obj to begin mapping from.
+ *
+ * Some parameters of this function are unchecked. It is therefore the caller's
+ * responsibility to ensure certain constraints are met. Specifically:
+ *
+ * * @pvr_obj_offset must be less than the size of @pvr_obj,
+ * * The sum of @pvr_obj_offset and @size must be less than or equal to the
+ *   size of @pvr_obj,
+ * * The range specified by @pvr_obj_offset and @size (the "CPU range") must be
+ *   CPU page-aligned both in start position and size, and
+ * * The range specified by @device_addr and @size (the "device range") must be
+ *   device page-aligned both in start position and size.
+ *
+ * Furthermore, it is up to the caller to make sure that a reference to @pvr_obj
+ * is taken prior to mapping @va with the drm_gpuva_manager.
+ */
+static void
+pvr_vm_gpuva_mapping_init(struct drm_gpuva *va, u64 device_addr, u64 size,
+			  struct pvr_gem_object *pvr_obj, u64 pvr_obj_offset)
+{
+	va->va.addr = device_addr;
+	va->va.range = size;
+	va->gem.obj = gem_from_pvr_gem(pvr_obj);
+	va->gem.offset = pvr_obj_offset;
+}
+
+struct pvr_vm_gpuva_op_ctx {
+	struct pvr_vm_context *vm_ctx;
+	struct pvr_mmu_op_context *mmu_op_ctx;
+	struct drm_gpuva *new_va, *prev_va, *next_va;
+};
+
+/**
+ * pvr_vm_gpuva_map() - Insert a mapping into a memory context.
+ * @op: gpuva op containing the mapping details.
+ * @op_ctx: Operation context.
+ *
+ * Context: Called by drm_gpuva_sm_map following a successful mapping while
+ * @op_ctx.vm_ctx mutex is held.
+ *
+ * Return:
+ *  * 0 on success, or
+ *  * Any error returned by pvr_mmu_map().
+ */
+static int
+pvr_vm_gpuva_map(struct drm_gpuva_op *op, void *op_ctx)
+{
+	struct pvr_gem_object *pvr_gem = gem_to_pvr_gem(op->map.gem.obj);
+	struct pvr_vm_gpuva_op_ctx *ctx = op_ctx;
+	int err;
+
+	if ((op->map.gem.offset | op->map.va.range) & ~PVR_DEVICE_PAGE_MASK)
+		return -EINVAL;
+
+	err = pvr_mmu_map(ctx->mmu_op_ctx, op->map.va.range, pvr_gem->flags,
+			  op->map.va.addr);
+	if (err)
+		return err;
+
+	pvr_vm_gpuva_mapping_init(ctx->new_va, op->map.va.addr,
+				  op->map.va.range, pvr_gem, op->map.gem.offset);
+
+	drm_gpuva_map(&ctx->vm_ctx->gpuva_mgr, ctx->new_va, &op->map);
+	drm_gpuva_link(ctx->new_va);
+	ctx->new_va = NULL;
+
+	/*
+	 * Increment the refcount on the underlying physical memory resource
+	 * to prevent de-allocation while the mapping exists.
+	 */
+	pvr_gem_object_get(pvr_gem);
+
+	return 0;
+}
+
+/**
+ * pvr_vm_gpuva_unmap() - Remove a mapping from a memory context.
+ * @op: gpuva op containing the unmap details.
+ * @op_ctx: Operation context.
+ *
+ * Context: Called by drm_gpuva_sm_unmap following a successful unmapping while
+ * @op_ctx.vm_ctx mutex is held.
+ *
+ * Return:
+ *  * 0 on success, or
+ *  * Any error returned by pvr_mmu_unmap().
+ */
+static int
+pvr_vm_gpuva_unmap(struct drm_gpuva_op *op, void *op_ctx)
+{
+	struct pvr_gem_object *pvr_gem = gem_to_pvr_gem(op->unmap.va->gem.obj);
+	struct pvr_vm_gpuva_op_ctx *ctx = op_ctx;
+
+	int err = pvr_mmu_unmap(ctx->mmu_op_ctx, op->unmap.va->va.addr,
+				op->unmap.va->va.range);
+
+	if (err)
+		return err;
+
+	drm_gpuva_unmap(&op->unmap);
+	drm_gpuva_unlink(op->unmap.va);
+	kfree(op->unmap.va);
+
+	pvr_gem_object_put(pvr_gem);
+
+	return 0;
+}
+
+/**
+ * pvr_vm_gpuva_remap() - Remap a mapping within a memory context.
+ * @op: gpuva op containing the remap details.
+ * @op_ctx: Operation context.
+ *
+ * Context: Called by either drm_gpuva_sm_map or drm_gpuva_sm_unmap when a
+ * mapping or unmapping operation causes a region to be split. The
+ * @op_ctx.vm_ctx mutex is held.
+ *
+ * Return:
+ *  * 0 on success, or
+ *  * Any error returned by pvr_mmu_unmap().
+ */
+static int
+pvr_vm_gpuva_remap(struct drm_gpuva_op *op, void *op_ctx)
+{
+	struct pvr_vm_gpuva_op_ctx *ctx = op_ctx;
+
+	if (op->remap.unmap) {
+		const u64 va_start = op->remap.prev ?
+				     op->remap.prev->va.addr + op->remap.prev->va.range :
+				     op->remap.unmap->va->va.addr;
+		const u64 va_end = op->remap.next ?
+				   op->remap.next->va.addr :
+				   op->remap.unmap->va->va.addr + op->remap.unmap->va->va.range;
+
+		int err = pvr_mmu_unmap(ctx->mmu_op_ctx, va_start,
+					va_end - va_start);
+
+		if (err)
+			return err;
+	}
+
+	if (op->remap.prev)
+		pvr_vm_gpuva_mapping_init(ctx->prev_va, op->remap.prev->va.addr,
+					  op->remap.prev->va.range,
+					  gem_to_pvr_gem(op->remap.prev->gem.obj),
+					  op->remap.prev->gem.offset);
+
+	if (op->remap.next)
+		pvr_vm_gpuva_mapping_init(ctx->next_va, op->remap.next->va.addr,
+					  op->remap.next->va.range,
+					  gem_to_pvr_gem(op->remap.next->gem.obj),
+					  op->remap.next->gem.offset);
+
+	/*
+	 * No actual remap required: the page table tree depth is fixed to 3,
+	 * and we use 4k page table entries only for now.
+	 */
+	drm_gpuva_remap(ctx->prev_va, ctx->next_va, &op->remap);
+
+	if (op->remap.prev) {
+		pvr_gem_object_get(gem_to_pvr_gem(ctx->prev_va->gem.obj));
+		drm_gpuva_link(ctx->prev_va);
+		ctx->prev_va = NULL;
+	}
+
+	if (op->remap.next) {
+		pvr_gem_object_get(gem_to_pvr_gem(ctx->next_va->gem.obj));
+		drm_gpuva_link(ctx->next_va);
+		ctx->next_va = NULL;
+	}
+
+	if (op->remap.unmap) {
+		struct pvr_gem_object *pvr_gem = gem_to_pvr_gem(op->remap.unmap->va->gem.obj);
+
+		drm_gpuva_unlink(op->remap.unmap->va);
+		kfree(op->remap.unmap->va);
+
+		pvr_gem_object_put(pvr_gem);
+	}
+
+	return 0;
+}
+
+/*
+ * Public API
+ *
+ * For an overview of these functions, see *DOC: Public API* in "pvr_vm.h".
+ */
+
+/**
+ * pvr_device_addr_is_valid() - Tests whether a device-virtual address
+ *                              is valid.
+ * @device_addr: Virtual device address to test.
+ *
+ * Return:
+ *  * %true if @device_addr is within the valid range for a device page
+ *    table and is aligned to the device page size, or
+ *  * %false otherwise.
+ */
+bool
+pvr_device_addr_is_valid(u64 device_addr)
+{
+	return (device_addr & ~PVR_PAGE_TABLE_ADDR_MASK) == 0 &&
+	       (device_addr & ~PVR_DEVICE_PAGE_MASK) == 0;
+}
+
+/**
+ * pvr_device_addr_and_size_are_valid() - Tests whether a device-virtual
+ * address and associated size are both valid.
+ * @device_addr: Virtual device address to test.
+ * @size: Size of the range based at @device_addr to test.
+ *
+ * Calling pvr_device_addr_is_valid() twice (once on @size, and again on
+ * @device_addr + @size) to verify a device-virtual address range initially
+ * seems intuitive, but it produces a false-negative when the address range
+ * is right at the end of device-virtual address space.
+ *
+ * This function catches that corner case, as well as checking that
+ * @size is non-zero.
+ *
+ * Return:
+ *  * %true if @device_addr is device page aligned; @size is device page
+ *    aligned; the range specified by @device_addr and @size is within the
+ *    bounds of the device-virtual address space, and @size is non-zero, or
+ *  * %false otherwise.
+ */
+bool
+pvr_device_addr_and_size_are_valid(u64 device_addr, u64 size)
+{
+	return pvr_device_addr_is_valid(device_addr) &&
+	       size != 0 && (size & ~PVR_DEVICE_PAGE_MASK) == 0 &&
+	       (device_addr + size <= PVR_PAGE_TABLE_ADDR_SPACE_SIZE);
+}
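+
+/*
+ * Worked example of the corner case described above (illustrative): with a
+ * 1TiB address space and 4KiB device pages, the range starting at
+ * (SZ_1T - SZ_4K) with size SZ_4K is valid, yet its end address equals SZ_1T,
+ * which pvr_device_addr_is_valid() alone would reject. Hence the explicit
+ * "device_addr + size <= PVR_PAGE_TABLE_ADDR_SPACE_SIZE" check above.
+ */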
+
+static const struct drm_gpuva_fn_ops pvr_vm_gpuva_ops = {
+	.sm_step_map = pvr_vm_gpuva_map,
+	.sm_step_remap = pvr_vm_gpuva_remap,
+	.sm_step_unmap = pvr_vm_gpuva_unmap,
+};
+
+/**
+ * pvr_vm_create_context() - Create a new VM context.
+ * @pvr_dev: Target PowerVR device.
+ * @is_userspace_context: %true if this context is for userspace. This will
+ *                        create a firmware memory context for the VM context
+ *                        and disable warnings when tearing down mappings.
+ *
+ * Return:
+ *  * A handle to the newly-minted VM context on success,
+ *  * -%EINVAL if the feature "virtual address space bits" on @pvr_dev is
+ *    missing or has an unsupported value,
+ *  * -%ENOMEM if allocation of the structure behind the opaque handle fails,
+ *    or
+ *  * Any error encountered while setting up internal structures.
+ */
+struct pvr_vm_context *
+pvr_vm_create_context(struct pvr_device *pvr_dev, bool is_userspace_context)
+{
+	struct drm_device *drm_dev = from_pvr_device(pvr_dev);
+
+	struct pvr_vm_context *vm_ctx;
+	u16 device_addr_bits;
+
+	int err;
+
+	err = PVR_FEATURE_VALUE(pvr_dev, virtual_address_space_bits,
+				&device_addr_bits);
+	if (err) {
+		drm_err(drm_dev,
+			"Failed to get device virtual address space bits\n");
+		return ERR_PTR(err);
+	}
+
+	if (device_addr_bits != PVR_PAGE_TABLE_ADDR_BITS) {
+		drm_err(drm_dev,
+			"Device has unsupported virtual address space size\n");
+		return ERR_PTR(-EINVAL);
+	}
+
+	vm_ctx = kzalloc(sizeof(*vm_ctx), GFP_KERNEL);
+	if (!vm_ctx)
+		return ERR_PTR(-ENOMEM);
+
+	vm_ctx->pvr_dev = pvr_dev;
+	kref_init(&vm_ctx->ref_count);
+	mutex_init(&vm_ctx->lock);
+
+	drm_gpuva_manager_init(&vm_ctx->gpuva_mgr,
+			       is_userspace_context ? "PowerVR-user-VM" : "PowerVR-FW-VM",
+			       0, 1ULL << device_addr_bits, 0, 0, &pvr_vm_gpuva_ops);
+
+	vm_ctx->mmu_ctx = pvr_mmu_context_create(pvr_dev);
+	err = PTR_ERR_OR_ZERO(vm_ctx->mmu_ctx);
+	if (err) {
+		vm_ctx->mmu_ctx = NULL;
+		goto err_put_ctx;
+	}
+
+	if (is_userspace_context) {
+		/* TODO: Create FW mem context */
+		err = -ENODEV;
+		goto err_put_ctx;
+	}
+
+	return vm_ctx;
+
+err_put_ctx:
+	pvr_vm_context_put(vm_ctx);
+
+	return ERR_PTR(err);
+}
+
+/**
+ * pvr_vm_context_release() - Teardown a VM context.
+ * @ref_count: Pointer to reference counter of the VM context.
+ *
+ * This function ensures that no mappings are left dangling by unmapping them
+ * all in order of ascending device-virtual address.
+ */
+static void
+pvr_vm_context_release(struct kref *ref_count)
+{
+	struct pvr_vm_context *vm_ctx =
+		container_of(ref_count, struct pvr_vm_context, ref_count);
+
+	/* TODO: Destroy FW mem context */
+	WARN_ON(vm_ctx->fw_mem_ctx_obj);
+
+	WARN_ON(pvr_vm_unmap(vm_ctx, vm_ctx->gpuva_mgr.mm_start,
+			     vm_ctx->gpuva_mgr.mm_range));
+
+	drm_gpuva_manager_destroy(&vm_ctx->gpuva_mgr);
+	pvr_mmu_context_destroy(vm_ctx->mmu_ctx);
+	mutex_destroy(&vm_ctx->lock);
+
+	kfree(vm_ctx);
+}
+
+/**
+ * pvr_vm_context_lookup() - Look up VM context from handle
+ * @pvr_file: Pointer to pvr_file structure.
+ * @handle: Object handle.
+ *
+ * Takes reference on VM context object. Call pvr_vm_context_put() to release.
+ *
+ * Returns:
+ *  * The requested object on success, or
+ *  * %NULL on failure (object does not exist in list, or is not a VM context)
+ */
+struct pvr_vm_context *
+pvr_vm_context_lookup(struct pvr_file *pvr_file, u32 handle)
+{
+	struct pvr_vm_context *vm_ctx;
+
+	xa_lock(&pvr_file->vm_ctx_handles);
+	vm_ctx = xa_load(&pvr_file->vm_ctx_handles, handle);
+	if (vm_ctx)
+		kref_get(&vm_ctx->ref_count);
+
+	xa_unlock(&pvr_file->vm_ctx_handles);
+
+	return vm_ctx;
+}
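+
+/*
+ * Usage sketch (illustrative only): a typical lookup/put pairing around use
+ * of a VM context referenced by a userspace handle.
+ *
+ *	struct pvr_vm_context *vm_ctx = pvr_vm_context_lookup(pvr_file, handle);
+ *
+ *	if (!vm_ctx)
+ *		return -EINVAL;
+ *	...
+ *	pvr_vm_context_put(vm_ctx);
+ */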
+
+/**
+ * pvr_vm_context_put() - Release a reference on a VM context
+ * @vm_ctx: Target VM context.
+ *
+ * Returns:
+ *  * %true if the VM context was destroyed, or
+ *  * %false if there are any references still remaining.
+ */
+bool
+pvr_vm_context_put(struct pvr_vm_context *vm_ctx)
+{
+	WARN_ON(!vm_ctx);
+
+	if (vm_ctx)
+		return kref_put(&vm_ctx->ref_count, pvr_vm_context_release);
+
+	return true;
+}
+
+/**
+ * pvr_destroy_vm_contexts_for_file() - Destroy any VM contexts associated with
+ * the given file.
+ * @pvr_file: Pointer to pvr_file structure.
+ *
+ * Removes all vm_contexts associated with @pvr_file from the device VM context
+ * list and drops initial references. vm_contexts will then be destroyed once
+ * all outstanding references are dropped.
+ */
+void pvr_destroy_vm_contexts_for_file(struct pvr_file *pvr_file)
+{
+	struct pvr_vm_context *vm_ctx;
+	unsigned long handle;
+
+	xa_for_each(&pvr_file->vm_ctx_handles, handle, vm_ctx) {
+		/* vm_ctx is not used here because that would create a race with xa_erase */
+		pvr_vm_context_put(xa_erase(&pvr_file->vm_ctx_handles, handle));
+	}
+}
+
+/**
+ * pvr_vm_map() - Map a section of physical memory into a section of device-virtual memory.
+ * @vm_ctx: Target VM context.
+ * @pvr_obj: Target PowerVR memory object.
+ * @pvr_obj_offset: Offset into @pvr_obj to map from.
+ * @device_addr: Virtual device address at the start of the requested mapping.
+ * @size: Size of the requested mapping.
+ *
+ * No handle is returned to represent the mapping. Instead, callers should
+ * remember @device_addr and use that as a handle.
+ *
+ * Return:
+ *  * 0 on success,
+ *  * -%EINVAL if @device_addr is not a valid page-aligned device-virtual
+ *    address; the region specified by @pvr_obj_offset and @size does not fall
+ *    entirely within @pvr_obj, or any part of the specified region of @pvr_obj
+ *    is not device-virtual page-aligned,
+ *  * -%ENOMEM if allocation of internal memory required for the mapping fails,
+ *    or
+ *  * Any error encountered while performing internal operations required to
+ *    create the mapping (returned from pvr_vm_gpuva_map or
+ *    pvr_vm_gpuva_remap).
+ */
+int
+pvr_vm_map(struct pvr_vm_context *vm_ctx,
+	   struct pvr_gem_object *pvr_obj, u64 pvr_obj_offset,
+	   u64 device_addr, u64 size)
+{
+	const size_t pvr_obj_size = pvr_gem_object_size(pvr_obj);
+	struct pvr_vm_gpuva_op_ctx op_ctx = { .vm_ctx = vm_ctx };
+	struct sg_table *sgt;
+	int err;
+
+	if (!pvr_device_addr_and_size_are_valid(device_addr, size) ||
+	    pvr_obj_offset & ~PAGE_MASK || size & ~PAGE_MASK ||
+	    pvr_obj_offset + size > pvr_obj_size ||
+	    pvr_obj_offset > pvr_obj_size) {
+		return -EINVAL;
+	}
+
+	op_ctx.new_va = kzalloc(sizeof(*op_ctx.new_va), GFP_KERNEL);
+	op_ctx.prev_va = kzalloc(sizeof(*op_ctx.prev_va), GFP_KERNEL);
+	op_ctx.next_va = kzalloc(sizeof(*op_ctx.next_va), GFP_KERNEL);
+	if (!op_ctx.new_va || !op_ctx.prev_va || !op_ctx.next_va) {
+		err = -ENOMEM;
+		goto out_free;
+	}
+
+	sgt = pvr_gem_object_get_pages_sgt(pvr_obj);
+	err = PTR_ERR_OR_ZERO(sgt);
+	if (err)
+		goto out_free;
+
+	op_ctx.mmu_op_ctx = pvr_mmu_op_context_create(vm_ctx->mmu_ctx, sgt,
+						      pvr_obj_offset, size);
+	err = PTR_ERR_OR_ZERO(op_ctx.mmu_op_ctx);
+	if (err)
+		goto out_free;
+
+	mutex_lock(&vm_ctx->lock);
+	err = drm_gpuva_sm_map(&vm_ctx->gpuva_mgr, &op_ctx, device_addr, size,
+			       gem_from_pvr_gem(pvr_obj), pvr_obj_offset);
+	mutex_unlock(&vm_ctx->lock);
+
+out_mmu_op_ctx_destroy:
+	pvr_mmu_op_context_destroy(op_ctx.mmu_op_ctx);
+
+out_free:
+	kfree(op_ctx.next_va);
+	kfree(op_ctx.prev_va);
+	kfree(op_ctx.new_va);
+
+	return err;
+}
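+
+/*
+ * Usage sketch (illustrative only): no handle is returned for a mapping, so
+ * callers reuse the device-virtual address to tear it down again:
+ *
+ *	err = pvr_vm_map(vm_ctx, pvr_obj, 0, device_addr, size);
+ *	if (err)
+ *		return err;
+ *	...
+ *	err = pvr_vm_unmap(vm_ctx, device_addr, size);
+ */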
+
+/**
+ * pvr_vm_unmap() - Unmap an already mapped section of device-virtual memory.
+ * @vm_ctx: Target VM context.
+ * @device_addr: Virtual device address at the start of the target mapping.
+ * @size: Size of the target mapping.
+ *
+ * Return:
+ *  * 0 on success,
+ *  * -%EINVAL if @device_addr is not a valid page-aligned device-virtual
+ *    address,
+ *  * Any error encountered while performing internal operations required to
+ *    destroy the mapping (returned from pvr_vm_gpuva_unmap or
+ *    pvr_vm_gpuva_remap).
+ */
+int
+pvr_vm_unmap(struct pvr_vm_context *vm_ctx, u64 device_addr, u64 size)
+{
+	struct pvr_vm_gpuva_op_ctx op_ctx = { .vm_ctx = vm_ctx };
+	int err;
+
+	if (!pvr_device_addr_and_size_are_valid(device_addr, size))
+		return -EINVAL;
+
+	op_ctx.prev_va = kzalloc(sizeof(*op_ctx.prev_va), GFP_KERNEL);
+	op_ctx.next_va = kzalloc(sizeof(*op_ctx.next_va), GFP_KERNEL);
+	if (!op_ctx.prev_va || !op_ctx.next_va) {
+		err = -ENOMEM;
+		goto out;
+	}
+
+	op_ctx.mmu_op_ctx =
+		pvr_mmu_op_context_create(vm_ctx->mmu_ctx, NULL, 0, 0);
+	err = PTR_ERR_OR_ZERO(op_ctx.mmu_op_ctx);
+	if (err) {
+		op_ctx.mmu_op_ctx = NULL;
+		goto out;
+	}
+
+	mutex_lock(&vm_ctx->lock);
+	err = drm_gpuva_sm_unmap(&vm_ctx->gpuva_mgr, &op_ctx, device_addr, size);
+	mutex_unlock(&vm_ctx->lock);
+
+out:
+	if (op_ctx.mmu_op_ctx)
+		pvr_mmu_op_context_destroy(op_ctx.mmu_op_ctx);
+	kfree(op_ctx.next_va);
+	kfree(op_ctx.prev_va);
+
+	return err;
+}
+
+/*
+ * Static data areas are determined by firmware.
+ *
+ * When adding a new static data area you will also need to update the reserved_size field for the
+ * heap in pvr_heaps[].
+ */
+static const struct drm_pvr_static_data_area static_data_areas[] = {
+	{
+		.area_usage = DRM_PVR_STATIC_DATA_AREA_FENCE,
+		.location_heap_id = DRM_PVR_HEAP_GENERAL,
+		.offset = 0,
+		.size = 128,
+	},
+	{
+		.area_usage = DRM_PVR_STATIC_DATA_AREA_YUV_CSC,
+		.location_heap_id = DRM_PVR_HEAP_GENERAL,
+		.offset = 128,
+		.size = 1024,
+	},
+	{
+		.area_usage = DRM_PVR_STATIC_DATA_AREA_VDM_SYNC,
+		.location_heap_id = DRM_PVR_HEAP_PDS_CODE_DATA,
+		.offset = 0,
+		.size = 128,
+	},
+	{
+		.area_usage = DRM_PVR_STATIC_DATA_AREA_EOT,
+		.location_heap_id = DRM_PVR_HEAP_PDS_CODE_DATA,
+		.offset = 128,
+		.size = 128,
+	},
+	{
+		.area_usage = DRM_PVR_STATIC_DATA_AREA_VDM_SYNC,
+		.location_heap_id = DRM_PVR_HEAP_USC_CODE,
+		.offset = 0,
+		.size = 128,
+	},
+};
+
+#define GET_RESERVED_SIZE(last_offset, last_size) round_up((last_offset) + (last_size), PAGE_SIZE)
+
+/*
+ * The values given to GET_RESERVED_SIZE() are taken from the last entry in the corresponding
+ * static data area for each heap.
+ */
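+/*
+ * Worked example (illustrative): for DRM_PVR_HEAP_GENERAL the last static
+ * data area above is the YUV CSC area at offset 128 with size 1024, so its
+ * reserved size would be GET_RESERVED_SIZE(128, 1024) ==
+ * round_up(1152, PAGE_SIZE), i.e. a single page on a 4KiB CPU page system.
+ */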
+static const struct drm_pvr_heap pvr_heaps[] = {
+	[DRM_PVR_HEAP_GENERAL] = {
+		.base = ROGUE_GENERAL_HEAP_BASE,
+		.size = ROGUE_GENERAL_HEAP_SIZE,
+		.flags = 0,
+		.page_size_log2 = PVR_DEVICE_PAGE_SHIFT,
+	},
+	[DRM_PVR_HEAP_PDS_CODE_DATA] = {
+		.base = ROGUE_PDSCODEDATA_HEAP_BASE,
+		.size = ROGUE_PDSCODEDATA_HEAP_SIZE,
+		.flags = 0,
+		.page_size_log2 = PVR_DEVICE_PAGE_SHIFT,
+	},
+	[DRM_PVR_HEAP_USC_CODE] = {
+		.base = ROGUE_USCCODE_HEAP_BASE,
+		.size = ROGUE_USCCODE_HEAP_SIZE,
+		.flags = 0,
+		.page_size_log2 = PVR_DEVICE_PAGE_SHIFT,
+	},
+	[DRM_PVR_HEAP_RGNHDR] = {
+		.base = ROGUE_RGNHDR_HEAP_BASE,
+		.size = ROGUE_RGNHDR_HEAP_SIZE,
+		.flags = 0,
+		.page_size_log2 = PVR_DEVICE_PAGE_SHIFT,
+	},
+	[DRM_PVR_HEAP_VIS_TEST] = {
+		.base = ROGUE_VISTEST_HEAP_BASE,
+		.size = ROGUE_VISTEST_HEAP_SIZE,
+		.flags = 0,
+		.page_size_log2 = PVR_DEVICE_PAGE_SHIFT,
+	},
+	[DRM_PVR_HEAP_TRANSFER_FRAG] = {
+		.base = ROGUE_TRANSFER_FRAG_HEAP_BASE,
+		.size = ROGUE_TRANSFER_FRAG_HEAP_SIZE,
+		.flags = 0,
+		.page_size_log2 = PVR_DEVICE_PAGE_SHIFT,
+	},
+};
+
+int
+pvr_static_data_areas_get(const struct pvr_device *pvr_dev,
+			  struct drm_pvr_ioctl_dev_query_args *args)
+{
+	struct drm_pvr_dev_query_static_data_areas query = {0};
+	int err;
+
+	if (!args->pointer) {
+		args->size = sizeof(struct drm_pvr_dev_query_static_data_areas);
+		return 0;
+	}
+
+	err = PVR_UOBJ_GET(query, args->size, args->pointer);
+	if (err < 0)
+		return err;
+
+	if (!query.static_data_areas.array) {
+		query.static_data_areas.count = ARRAY_SIZE(static_data_areas);
+		query.static_data_areas.stride = sizeof(struct drm_pvr_static_data_area);
+		goto copy_out;
+	}
+
+	if (query.static_data_areas.count > ARRAY_SIZE(static_data_areas))
+		query.static_data_areas.count = ARRAY_SIZE(static_data_areas);
+
+	err = PVR_UOBJ_SET_ARRAY(&query.static_data_areas, static_data_areas);
+	if (err < 0)
+		return err;
+
+copy_out:
+	err = PVR_UOBJ_SET(args->pointer, args->size, query);
+	if (err < 0)
+		return err;
+
+	args->size = sizeof(query);
+	return 0;
+}
+
+int
+pvr_heap_info_get(const struct pvr_device *pvr_dev,
+		  struct drm_pvr_ioctl_dev_query_args *args)
+{
+	struct drm_pvr_dev_query_heap_info query = {0};
+	u64 dest;
+	int err;
+
+	if (!args->pointer) {
+		args->size = sizeof(struct drm_pvr_dev_query_heap_info);
+		return 0;
+	}
+
+	err = PVR_UOBJ_GET(query, args->size, args->pointer);
+	if (err < 0)
+		return err;
+
+	if (!query.heaps.array) {
+		query.heaps.count = ARRAY_SIZE(pvr_heaps);
+		query.heaps.stride = sizeof(struct drm_pvr_heap);
+		goto copy_out;
+	}
+
+	if (query.heaps.count > ARRAY_SIZE(pvr_heaps))
+		query.heaps.count = ARRAY_SIZE(pvr_heaps);
+
+	/* Region header heap is only present if BRN63142 is present. */
+	dest = query.heaps.array;
+	for (size_t i = 0; i < query.heaps.count; i++) {
+		struct drm_pvr_heap heap = pvr_heaps[i];
+
+		if (i == DRM_PVR_HEAP_RGNHDR && !PVR_HAS_QUIRK(pvr_dev, 63142))
+			heap.size = 0;
+
+		err = PVR_UOBJ_SET(dest, query.heaps.stride, heap);
+		if (err < 0)
+			return err;
+
+		dest += query.heaps.stride;
+	}
+
+copy_out:
+	err = PVR_UOBJ_SET(args->pointer, args->size, query);
+	if (err < 0)
+		return err;
+
+	args->size = sizeof(query);
+	return 0;
+}
+
+/**
+ * pvr_heap_contains_range() - Determine if a given heap contains the specified
+ *                             device-virtual address range.
+ * @pvr_heap: Target heap.
+ * @start: Inclusive start of the target range.
+ * @end: Inclusive end of the target range.
+ *
+ * It is an error to call this function with values of @start and @end that do
+ * not satisfy the condition @start <= @end.
+ */
+static __always_inline bool
+pvr_heap_contains_range(const struct drm_pvr_heap *pvr_heap, u64 start, u64 end)
+{
+	return pvr_heap->base <= start && end < pvr_heap->base + pvr_heap->size;
+}
+
+/**
+ * pvr_find_heap_containing() - Find a heap which contains the specified
+ *                              device-virtual address range.
+ * @pvr_dev: Target PowerVR device.
+ * @start: Start of the target range.
+ * @size: Size of the target range.
+ *
+ * Return:
+ *  * A pointer to a constant instance of struct drm_pvr_heap representing the
+ *    heap containing the entire range specified by @start and @size on
+ *    success, or
+ *  * %NULL if no such heap exists.
+ */
+const struct drm_pvr_heap *
+pvr_find_heap_containing(struct pvr_device *pvr_dev, u64 start, u64 size)
+{
+	u64 end;
+
+	if (check_add_overflow(start, size - 1, &end))
+		return NULL;
+
+	/*
+	 * There are no guarantees about the order of address ranges in
+	 * &pvr_heaps, so iterate over the entire array for a heap whose
+	 * range completely encompasses the given range.
+	 */
+	for (u32 heap_id = 0; heap_id < ARRAY_SIZE(pvr_heaps); heap_id++) {
+		/* Filter heaps that present only with an associated quirk */
+		if (heap_id == DRM_PVR_HEAP_RGNHDR &&
+		    !PVR_HAS_QUIRK(pvr_dev, 63142)) {
+			continue;
+		}
+
+		if (pvr_heap_contains_range(&pvr_heaps[heap_id], start, end))
+			return &pvr_heaps[heap_id];
+	}
+
+	return NULL;
+}
+
+/**
+ * pvr_vm_find_gem_object() - Look up a buffer object from a given
+ *                            device-virtual address.
+ * @vm_ctx: [IN] Target VM context.
+ * @device_addr: [IN] Virtual device address at the start of the required
+ *               object.
+ * @mapped_offset_out: [OUT] Pointer to location to write offset of the start
+ *                     of the mapped region within the buffer object. May be
+ *                     %NULL if this information is not required.
+ * @mapped_size_out: [OUT] Pointer to location to write size of the mapped
+ *                   region. May be %NULL if this information is not required.
+ *
+ * If successful, a reference will be taken on the buffer object. The caller
+ * must drop the reference with pvr_gem_object_put().
+ *
+ * Return:
+ *  * The PowerVR buffer object mapped at @device_addr if one exists, or
+ *  * %NULL otherwise.
+ */
+struct pvr_gem_object *
+pvr_vm_find_gem_object(struct pvr_vm_context *vm_ctx, u64 device_addr,
+		       u64 *mapped_offset_out, u64 *mapped_size_out)
+{
+	struct pvr_gem_object *pvr_obj;
+	struct drm_gpuva *va;
+
+	mutex_lock(&vm_ctx->lock);
+
+	va = drm_gpuva_find_first(&vm_ctx->gpuva_mgr, device_addr, 1);
+	if (!va)
+		goto err_unlock;
+
+	pvr_obj = gem_to_pvr_gem(va->gem.obj);
+	pvr_gem_object_get(pvr_obj);
+
+	if (mapped_offset_out)
+		*mapped_offset_out = va->gem.offset;
+	if (mapped_size_out)
+		*mapped_size_out = va->va.range;
+
+	mutex_unlock(&vm_ctx->lock);
+
+	return pvr_obj;
+
+err_unlock:
+	mutex_unlock(&vm_ctx->lock);
+
+	return NULL;
+}
+
+/**
+ * pvr_vm_get_fw_mem_context() - Get object representing firmware memory context
+ * @vm_ctx: Target VM context.
+ *
+ * Returns:
+ *  * FW object representing firmware memory context, or
+ *  * %NULL if this VM context does not have a firmware memory context.
+ */
+struct pvr_fw_object *
+pvr_vm_get_fw_mem_context(struct pvr_vm_context *vm_ctx)
+{
+	return vm_ctx->fw_mem_ctx_obj;
+}
diff --git a/drivers/gpu/drm/imagination/pvr_vm.h b/drivers/gpu/drm/imagination/pvr_vm.h
new file mode 100644
index 000000000000..b98bc3981807
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_vm.h
@@ -0,0 +1,60 @@
+/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#ifndef PVR_VM_H
+#define PVR_VM_H
+
+#include "pvr_rogue_mmu_defs.h"
+
+#include <uapi/drm/pvr_drm.h>
+
+#include <linux/types.h>
+
+/* Forward declaration from "pvr_device.h" */
+struct pvr_device;
+struct pvr_file;
+
+/* Forward declaration from "pvr_gem.h" */
+struct pvr_gem_object;
+
+/* Forward declaration from "pvr_vm.c" */
+struct pvr_vm_context;
+
+/* Forward declaration from <uapi/drm/pvr_drm.h> */
+struct drm_pvr_ioctl_dev_query_args;
+
+/* Functions defined in pvr_vm.c */
+
+bool pvr_device_addr_is_valid(u64 device_addr);
+bool pvr_device_addr_and_size_are_valid(u64 device_addr, u64 size);
+
+struct pvr_vm_context *pvr_vm_create_context(struct pvr_device *pvr_dev,
+					     bool is_userspace_context);
+
+int pvr_vm_map(struct pvr_vm_context *vm_ctx,
+	       struct pvr_gem_object *pvr_obj, u64 pvr_obj_offset,
+	       u64 device_addr, u64 size);
+int pvr_vm_unmap(struct pvr_vm_context *vm_ctx, u64 device_addr, u64 size);
+
+dma_addr_t pvr_vm_get_page_table_root_addr(struct pvr_vm_context *vm_ctx);
+
+int pvr_static_data_areas_get(const struct pvr_device *pvr_dev,
+			      struct drm_pvr_ioctl_dev_query_args *args);
+int pvr_heap_info_get(const struct pvr_device *pvr_dev,
+		      struct drm_pvr_ioctl_dev_query_args *args);
+const struct drm_pvr_heap *pvr_find_heap_containing(struct pvr_device *pvr_dev,
+						    u64 addr, u64 size);
+
+struct pvr_gem_object *pvr_vm_find_gem_object(struct pvr_vm_context *vm_ctx,
+					      u64 device_addr,
+					      u64 *mapped_offset_out,
+					      u64 *mapped_size_out);
+
+struct pvr_fw_object *
+pvr_vm_get_fw_mem_context(struct pvr_vm_context *vm_ctx);
+
+struct pvr_vm_context *pvr_vm_context_lookup(struct pvr_file *pvr_file, u32 handle);
+bool pvr_vm_context_put(struct pvr_vm_context *vm_ctx);
+void pvr_destroy_vm_contexts_for_file(struct pvr_file *pvr_file);
+
+#endif /* PVR_VM_H */
-- 
2.41.0


^ permalink raw reply related	[flat|nested] 77+ messages in thread

* [PATCH v5 09/17] drm/imagination: Implement power management
  2023-08-16  8:25 ` Sarah Walker
@ 2023-08-16  8:25   ` Sarah Walker
  -1 siblings, 0 replies; 77+ messages in thread
From: Sarah Walker @ 2023-08-16  8:25 UTC (permalink / raw)
  To: dri-devel
  Cc: matthew.brost, luben.tuikov, tzimmermann, linux-kernel, mripard,
	afd, boris.brezillon, dakr, donald.robson, hns, christian.koenig,
	faith.ekstrand

Add power management to the driver, using runtime pm. The power off
sequence depends on firmware commands which are not implemented in this
patch.
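
For context, a minimal sketch of how callers are expected to bracket GPU access
with a runtime PM reference, using the pvr_power_get()/pvr_power_put() helpers
introduced in this patch (illustrative only):

	int err = pvr_power_get(pvr_dev);

	if (err)
		return err;

	/* ... access GPU registers / submit firmware commands ... */

	pvr_power_put(pvr_dev);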

Changes since v4:
- Suspend runtime PM before unplugging device on rmmod

Changes since v3:
- Don't power device when calling pvr_device_gpu_fini()
- Documentation for pvr_dev->lost has been improved
- pvr_power_init() renamed to pvr_watchdog_init()
- Use drm_dev_{enter,exit}

Changes since v2:
- Use runtime PM
- Implement watchdog

Signed-off-by: Sarah Walker <sarah.walker@imgtec.com>
---
 drivers/gpu/drm/imagination/Makefile     |   1 +
 drivers/gpu/drm/imagination/pvr_device.c |  23 +-
 drivers/gpu/drm/imagination/pvr_device.h |  22 ++
 drivers/gpu/drm/imagination/pvr_drv.c    |  20 +-
 drivers/gpu/drm/imagination/pvr_power.c  | 271 +++++++++++++++++++++++
 drivers/gpu/drm/imagination/pvr_power.h  |  39 ++++
 6 files changed, 373 insertions(+), 3 deletions(-)
 create mode 100644 drivers/gpu/drm/imagination/pvr_power.c
 create mode 100644 drivers/gpu/drm/imagination/pvr_power.h

diff --git a/drivers/gpu/drm/imagination/Makefile b/drivers/gpu/drm/imagination/Makefile
index 8fcabc1bea36..235e2d329e29 100644
--- a/drivers/gpu/drm/imagination/Makefile
+++ b/drivers/gpu/drm/imagination/Makefile
@@ -10,6 +10,7 @@ powervr-y := \
 	pvr_fw.o \
 	pvr_gem.o \
 	pvr_mmu.o \
+	pvr_power.o \
 	pvr_vm.o
 
 obj-$(CONFIG_DRM_POWERVR) += powervr.o
diff --git a/drivers/gpu/drm/imagination/pvr_device.c b/drivers/gpu/drm/imagination/pvr_device.c
index ef8f7a2ff1a9..5dbd05f21238 100644
--- a/drivers/gpu/drm/imagination/pvr_device.c
+++ b/drivers/gpu/drm/imagination/pvr_device.c
@@ -5,6 +5,7 @@
 #include "pvr_device_info.h"
 
 #include "pvr_fw.h"
+#include "pvr_power.h"
 #include "pvr_rogue_cr_defs.h"
 #include "pvr_vm.h"
 
@@ -357,6 +358,8 @@ pvr_device_gpu_fini(struct pvr_device *pvr_dev)
 int
 pvr_device_init(struct pvr_device *pvr_dev)
 {
+	struct drm_device *drm_dev = from_pvr_device(pvr_dev);
+	struct device *dev = drm_dev->dev;
 	int err;
 
 	/* Enable and initialize clocks required for the device to operate. */
@@ -364,13 +367,29 @@ pvr_device_init(struct pvr_device *pvr_dev)
 	if (err)
 		return err;
 
+	/* Explicitly power the GPU so we can access control registers before the FW is booted. */
+	err = pm_runtime_resume_and_get(dev);
+	if (err)
+		return err;
+
 	/* Map the control registers into memory. */
 	err = pvr_device_reg_init(pvr_dev);
 	if (err)
-		return err;
+		goto err_pm_runtime_put;
 
 	/* Perform GPU-specific initialization steps. */
-	return pvr_device_gpu_init(pvr_dev);
+	err = pvr_device_gpu_init(pvr_dev);
+	if (err)
+		goto err_pm_runtime_put;
+
+	pm_runtime_put(dev);
+
+	return 0;
+
+err_pm_runtime_put:
+	pm_runtime_put_sync_suspend(dev);
+
+	return err;
 }
 
 /**
diff --git a/drivers/gpu/drm/imagination/pvr_device.h b/drivers/gpu/drm/imagination/pvr_device.h
index 990363e433d7..6d58ecfbc838 100644
--- a/drivers/gpu/drm/imagination/pvr_device.h
+++ b/drivers/gpu/drm/imagination/pvr_device.h
@@ -135,6 +135,28 @@ struct pvr_device {
 
 	/** @fw_dev: Firmware related data. */
 	struct pvr_fw_device fw_dev;
+
+	struct {
+		/** @work: Work item for watchdog callback. */
+		struct delayed_work work;
+
+		/** @old_kccb_cmds_executed: KCCB command execution count at last watchdog poll. */
+		u32 old_kccb_cmds_executed;
+
+		/** @kccb_stall_count: Number of watchdog polls KCCB has been stalled for. */
+		u32 kccb_stall_count;
+	} watchdog;
+
+	/**
+	 * @lost: %true if the device has been lost.
+	 *
+	 * This variable is set if the device has become irretrievably unavailable, e.g. if the
+	 * firmware processor has stopped responding and can not be revived via a hard reset.
+	 */
+	bool lost;
+
+	/** @sched_wq: Workqueue for schedulers. */
+	struct workqueue_struct *sched_wq;
 };
 
 /**
diff --git a/drivers/gpu/drm/imagination/pvr_drv.c b/drivers/gpu/drm/imagination/pvr_drv.c
index 0d51177b506c..0331dc549116 100644
--- a/drivers/gpu/drm/imagination/pvr_drv.c
+++ b/drivers/gpu/drm/imagination/pvr_drv.c
@@ -4,6 +4,7 @@
 #include "pvr_device.h"
 #include "pvr_drv.h"
 #include "pvr_gem.h"
+#include "pvr_power.h"
 #include "pvr_rogue_defs.h"
 #include "pvr_rogue_fwif_client.h"
 #include "pvr_rogue_fwif_shared.h"
@@ -1279,9 +1280,16 @@ pvr_probe(struct platform_device *plat_dev)
 
 	platform_set_drvdata(plat_dev, drm_dev);
 
+	devm_pm_runtime_enable(&plat_dev->dev);
+	pm_runtime_mark_last_busy(&plat_dev->dev);
+
+	pm_runtime_set_autosuspend_delay(&plat_dev->dev, 50);
+	pm_runtime_use_autosuspend(&plat_dev->dev);
+	pvr_watchdog_init(pvr_dev);
+
 	err = pvr_device_init(pvr_dev);
 	if (err)
-		return err;
+		goto err_watchdog_fini;
 
 	err = drm_dev_register(drm_dev, 0);
 	if (err)
@@ -1292,6 +1300,9 @@ pvr_probe(struct platform_device *plat_dev)
 err_device_fini:
 	pvr_device_fini(pvr_dev);
 
+err_watchdog_fini:
+	pvr_watchdog_fini(pvr_dev);
+
 	return err;
 }
 
@@ -1301,8 +1312,10 @@ pvr_remove(struct platform_device *plat_dev)
 	struct drm_device *drm_dev = platform_get_drvdata(plat_dev);
 	struct pvr_device *pvr_dev = to_pvr_device(drm_dev);
 
+	pm_runtime_suspend(drm_dev->dev);
 	drm_dev_unplug(drm_dev);
 	pvr_device_fini(pvr_dev);
+	pvr_watchdog_fini(pvr_dev);
 
 	return 0;
 }
@@ -1313,11 +1326,16 @@ static const struct of_device_id dt_match[] = {
 };
 MODULE_DEVICE_TABLE(of, dt_match);
 
+static const struct dev_pm_ops pvr_pm_ops = {
+	SET_RUNTIME_PM_OPS(pvr_power_device_suspend, pvr_power_device_resume, pvr_power_device_idle)
+};
+
 static struct platform_driver pvr_driver = {
 	.probe = pvr_probe,
 	.remove = pvr_remove,
 	.driver = {
 		.name = PVR_DRIVER_NAME,
+		.pm = &pvr_pm_ops,
 		.of_match_table = dt_match,
 	},
 };
diff --git a/drivers/gpu/drm/imagination/pvr_power.c b/drivers/gpu/drm/imagination/pvr_power.c
new file mode 100644
index 000000000000..a494fed92e81
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_power.c
@@ -0,0 +1,271 @@
+// SPDX-License-Identifier: GPL-2.0 OR MIT
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#include "pvr_device.h"
+#include "pvr_fw.h"
+#include "pvr_power.h"
+#include "pvr_rogue_fwif.h"
+
+#include <drm/drm_drv.h>
+#include <drm/drm_managed.h>
+#include <linux/clk.h>
+#include <linux/interrupt.h>
+#include <linux/mutex.h>
+#include <linux/platform_device.h>
+#include <linux/pm_runtime.h>
+#include <linux/timer.h>
+#include <linux/types.h>
+#include <linux/workqueue.h>
+
+#define POWER_SYNC_TIMEOUT_US (1000000) /* 1s */
+
+#define WATCHDOG_TIME_MS (500)
+
+static int
+pvr_power_send_command(struct pvr_device *pvr_dev, struct rogue_fwif_kccb_cmd *pow_cmd)
+{
+	/* TODO: implement */
+	return -ENODEV;
+}
+
+static int
+pvr_power_request_idle(struct pvr_device *pvr_dev)
+{
+	struct rogue_fwif_kccb_cmd pow_cmd;
+
+	/* Send FORCED_IDLE request to FW. */
+	pow_cmd.cmd_type = ROGUE_FWIF_KCCB_CMD_POW;
+	pow_cmd.cmd_data.pow_data.pow_type = ROGUE_FWIF_POW_FORCED_IDLE_REQ;
+	pow_cmd.cmd_data.pow_data.power_req_data.pow_request_type = ROGUE_FWIF_POWER_FORCE_IDLE;
+
+	return pvr_power_send_command(pvr_dev, &pow_cmd);
+}
+
+static int
+pvr_power_request_pwr_off(struct pvr_device *pvr_dev)
+{
+	struct rogue_fwif_kccb_cmd pow_cmd;
+
+	/* Send POW_OFF request to firmware. */
+	pow_cmd.cmd_type = ROGUE_FWIF_KCCB_CMD_POW;
+	pow_cmd.cmd_data.pow_data.pow_type = ROGUE_FWIF_POW_OFF_REQ;
+	pow_cmd.cmd_data.pow_data.power_req_data.forced = true;
+
+	return pvr_power_send_command(pvr_dev, &pow_cmd);
+}
+
+static int
+pvr_power_fw_disable(struct pvr_device *pvr_dev, bool hard_reset)
+{
+	if (!hard_reset) {
+		int err;
+
+		cancel_delayed_work_sync(&pvr_dev->watchdog.work);
+
+		err = pvr_power_request_idle(pvr_dev);
+		if (err)
+			return err;
+
+		err = pvr_power_request_pwr_off(pvr_dev);
+		if (err)
+			return err;
+	}
+
+	/* TODO: stop firmware */
+	return -ENODEV;
+}
+
+static int
+pvr_power_fw_enable(struct pvr_device *pvr_dev)
+{
+	int err;
+
+	/* TODO: start firmware */
+	err = -ENODEV;
+	if (err)
+		return err;
+
+	queue_delayed_work(pvr_dev->sched_wq, &pvr_dev->watchdog.work,
+			   msecs_to_jiffies(WATCHDOG_TIME_MS));
+
+	return 0;
+}
+
+bool
+pvr_power_is_idle(struct pvr_device *pvr_dev)
+{
+	/* TODO: implement */
+	return true;
+}
+
+static bool
+pvr_watchdog_kccb_stalled(struct pvr_device *pvr_dev)
+{
+	/* TODO: implement */
+	return false;
+}
+
+static void
+pvr_watchdog_worker(struct work_struct *work)
+{
+	struct pvr_device *pvr_dev = container_of(work, struct pvr_device,
+						  watchdog.work.work);
+	bool stalled;
+
+	if (pvr_dev->lost)
+		return;
+
+	if (pm_runtime_get_if_in_use(from_pvr_device(pvr_dev)->dev) <= 0)
+		goto out_requeue;
+
+	stalled = pvr_watchdog_kccb_stalled(pvr_dev);
+
+	if (stalled) {
+		drm_err(from_pvr_device(pvr_dev), "FW stalled, trying hard reset");
+
+		pvr_power_reset(pvr_dev, true);
+		/* Device may be lost at this point. */
+	}
+
+	pm_runtime_put(from_pvr_device(pvr_dev)->dev);
+
+out_requeue:
+	if (!pvr_dev->lost) {
+		queue_delayed_work(pvr_dev->sched_wq, &pvr_dev->watchdog.work,
+				   msecs_to_jiffies(WATCHDOG_TIME_MS));
+	}
+}
+
+/**
+ * pvr_watchdog_init() - Initialise watchdog for device
+ * @pvr_dev: Target PowerVR device.
+ *
+ * Returns:
+ *  * 0 on success, or
+ *  * -%ENOMEM on out of memory.
+ */
+int
+pvr_watchdog_init(struct pvr_device *pvr_dev)
+{
+	INIT_DELAYED_WORK(&pvr_dev->watchdog.work, pvr_watchdog_worker);
+
+	return 0;
+}
+
+int
+pvr_power_device_suspend(struct device *dev)
+{
+	struct platform_device *plat_dev = to_platform_device(dev);
+	struct drm_device *drm_dev = platform_get_drvdata(plat_dev);
+	struct pvr_device *pvr_dev = to_pvr_device(drm_dev);
+	int idx;
+
+	if (!drm_dev_enter(drm_dev, &idx))
+		return -EIO;
+
+	clk_disable_unprepare(pvr_dev->mem_clk);
+	clk_disable_unprepare(pvr_dev->sys_clk);
+	clk_disable_unprepare(pvr_dev->core_clk);
+
+	drm_dev_exit(idx);
+
+	return 0;
+}
+
+int
+pvr_power_device_resume(struct device *dev)
+{
+	struct platform_device *plat_dev = to_platform_device(dev);
+	struct drm_device *drm_dev = platform_get_drvdata(plat_dev);
+	struct pvr_device *pvr_dev = to_pvr_device(drm_dev);
+	int idx;
+	int err;
+
+	if (!drm_dev_enter(drm_dev, &idx))
+		return -EIO;
+
+	err = clk_prepare_enable(pvr_dev->core_clk);
+	if (err)
+		goto err_drm_dev_exit;
+
+	err = clk_prepare_enable(pvr_dev->sys_clk);
+	if (err)
+		goto err_core_clk_disable;
+
+	err = clk_prepare_enable(pvr_dev->mem_clk);
+	if (err)
+		goto err_sys_clk_disable;
+
+	drm_dev_exit(idx);
+
+	return 0;
+
+err_sys_clk_disable:
+	clk_disable_unprepare(pvr_dev->sys_clk);
+
+err_core_clk_disable:
+	clk_disable_unprepare(pvr_dev->core_clk);
+
+err_drm_dev_exit:
+	drm_dev_exit(idx);
+
+	return err;
+}
+
+int
+pvr_power_device_idle(struct device *dev)
+{
+	struct platform_device *plat_dev = to_platform_device(dev);
+	struct drm_device *drm_dev = platform_get_drvdata(plat_dev);
+	struct pvr_device *pvr_dev = to_pvr_device(drm_dev);
+
+	return pvr_power_is_idle(pvr_dev) ? 0 : -EBUSY;
+}
+
+/**
+ * pvr_power_reset() - Reset the GPU
+ * @pvr_dev: Device pointer
+ * @hard_reset: %true for hard reset, %false for soft reset
+ *
+ * If @hard_reset is %false and the FW processor fails to respond during the reset process, this
+ * function will attempt a hard reset.
+ *
+ * If a hard reset fails then the GPU device is reported as lost.
+ *
+ * Returns:
+ *  * 0 on success, or
+ *  * Any error code returned by pvr_power_get(), pvr_power_fw_disable() or pvr_power_fw_enable().
+ */
+int
+pvr_power_reset(struct pvr_device *pvr_dev, bool hard_reset)
+{
+	/* TODO: Implement hard reset. */
+	int err;
+
+	/*
+	 * Take a power reference during the reset. This should prevent any interference with the
+	 * power state during reset.
+	 */
+	WARN_ON(pvr_power_get(pvr_dev));
+
+	err = pvr_power_fw_disable(pvr_dev, false);
+	if (err)
+		goto err_power_put;
+
+	err = pvr_power_fw_enable(pvr_dev);
+
+err_power_put:
+	pvr_power_put(pvr_dev);
+
+	return err;
+}
+
+/**
+ * pvr_watchdog_fini() - Shutdown watchdog for device
+ * @pvr_dev: Target PowerVR device.
+ */
+void
+pvr_watchdog_fini(struct pvr_device *pvr_dev)
+{
+	cancel_delayed_work_sync(&pvr_dev->watchdog.work);
+}
diff --git a/drivers/gpu/drm/imagination/pvr_power.h b/drivers/gpu/drm/imagination/pvr_power.h
new file mode 100644
index 000000000000..439f08d13655
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_power.h
@@ -0,0 +1,39 @@
+/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#ifndef PVR_POWER_H
+#define PVR_POWER_H
+
+#include "pvr_device.h"
+
+#include <linux/mutex.h>
+#include <linux/pm_runtime.h>
+
+int pvr_watchdog_init(struct pvr_device *pvr_dev);
+void pvr_watchdog_fini(struct pvr_device *pvr_dev);
+
+bool pvr_power_is_idle(struct pvr_device *pvr_dev);
+
+int pvr_power_device_suspend(struct device *dev);
+int pvr_power_device_resume(struct device *dev);
+int pvr_power_device_idle(struct device *dev);
+
+int pvr_power_reset(struct pvr_device *pvr_dev, bool hard_reset);
+
+static __always_inline int
+pvr_power_get(struct pvr_device *pvr_dev)
+{
+	struct drm_device *drm_dev = from_pvr_device(pvr_dev);
+
+	return pm_runtime_resume_and_get(drm_dev->dev);
+}
+
+static __always_inline int
+pvr_power_put(struct pvr_device *pvr_dev)
+{
+	struct drm_device *drm_dev = from_pvr_device(pvr_dev);
+
+	return pm_runtime_put(drm_dev->dev);
+}
+
+#endif /* PVR_POWER_H */
-- 
2.41.0


^ permalink raw reply related	[flat|nested] 77+ messages in thread

* [PATCH v5 09/17] drm/imagination: Implement power management
@ 2023-08-16  8:25   ` Sarah Walker
  0 siblings, 0 replies; 77+ messages in thread
From: Sarah Walker @ 2023-08-16  8:25 UTC (permalink / raw)
  To: dri-devel
  Cc: frank.binns, donald.robson, boris.brezillon, faith.ekstrand,
	airlied, daniel, maarten.lankhorst, mripard, tzimmermann, afd,
	hns, matthew.brost, christian.koenig, luben.tuikov, dakr,
	linux-kernel

Add power management to the driver, using runtime pm. The power off
sequence depends on firmware commands which are not implemented in this
patch.
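
For illustration, a caller that needs the GPU powered up would bracket its
work with the helpers added in pvr_power.h; a minimal sketch (hypothetical
caller, not part of this patch):

	int pvr_do_powered_work(struct pvr_device *pvr_dev)
	{
		int err = pvr_power_get(pvr_dev); /* pm_runtime_resume_and_get() */

		if (err)
			return err;

		/* ... access registers or submit firmware commands ... */

		pvr_power_put(pvr_dev); /* pm_runtime_put() */
		return 0;
	}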

Changes since v4:
- Suspend runtime PM before unplugging device on rmmod

Changes since v3:
- Don't power device when calling pvr_device_gpu_fini()
- Documentation for pvr_dev->lost has been improved
- pvr_power_init() renamed to pvr_watchdog_init()
- Use drm_dev_{enter,exit}

Changes since v2:
- Use runtime PM
- Implement watchdog

Signed-off-by: Sarah Walker <sarah.walker@imgtec.com>
---
 drivers/gpu/drm/imagination/Makefile     |   1 +
 drivers/gpu/drm/imagination/pvr_device.c |  23 +-
 drivers/gpu/drm/imagination/pvr_device.h |  22 ++
 drivers/gpu/drm/imagination/pvr_drv.c    |  20 +-
 drivers/gpu/drm/imagination/pvr_power.c  | 271 +++++++++++++++++++++++
 drivers/gpu/drm/imagination/pvr_power.h  |  39 ++++
 6 files changed, 373 insertions(+), 3 deletions(-)
 create mode 100644 drivers/gpu/drm/imagination/pvr_power.c
 create mode 100644 drivers/gpu/drm/imagination/pvr_power.h

diff --git a/drivers/gpu/drm/imagination/Makefile b/drivers/gpu/drm/imagination/Makefile
index 8fcabc1bea36..235e2d329e29 100644
--- a/drivers/gpu/drm/imagination/Makefile
+++ b/drivers/gpu/drm/imagination/Makefile
@@ -10,6 +10,7 @@ powervr-y := \
 	pvr_fw.o \
 	pvr_gem.o \
 	pvr_mmu.o \
+	pvr_power.o \
 	pvr_vm.o
 
 obj-$(CONFIG_DRM_POWERVR) += powervr.o
diff --git a/drivers/gpu/drm/imagination/pvr_device.c b/drivers/gpu/drm/imagination/pvr_device.c
index ef8f7a2ff1a9..5dbd05f21238 100644
--- a/drivers/gpu/drm/imagination/pvr_device.c
+++ b/drivers/gpu/drm/imagination/pvr_device.c
@@ -5,6 +5,7 @@
 #include "pvr_device_info.h"
 
 #include "pvr_fw.h"
+#include "pvr_power.h"
 #include "pvr_rogue_cr_defs.h"
 #include "pvr_vm.h"
 
@@ -357,6 +358,8 @@ pvr_device_gpu_fini(struct pvr_device *pvr_dev)
 int
 pvr_device_init(struct pvr_device *pvr_dev)
 {
+	struct drm_device *drm_dev = from_pvr_device(pvr_dev);
+	struct device *dev = drm_dev->dev;
 	int err;
 
 	/* Enable and initialize clocks required for the device to operate. */
@@ -364,13 +367,29 @@ pvr_device_init(struct pvr_device *pvr_dev)
 	if (err)
 		return err;
 
+	/* Explicitly power the GPU so we can access control registers before the FW is booted. */
+	err = pm_runtime_resume_and_get(dev);
+	if (err)
+		return err;
+
 	/* Map the control registers into memory. */
 	err = pvr_device_reg_init(pvr_dev);
 	if (err)
-		return err;
+		goto err_pm_runtime_put;
 
 	/* Perform GPU-specific initialization steps. */
-	return pvr_device_gpu_init(pvr_dev);
+	err = pvr_device_gpu_init(pvr_dev);
+	if (err)
+		goto err_pm_runtime_put;
+
+	pm_runtime_put(dev);
+
+	return 0;
+
+err_pm_runtime_put:
+	pm_runtime_put_sync_suspend(dev);
+
+	return err;
 }
 
 /**
diff --git a/drivers/gpu/drm/imagination/pvr_device.h b/drivers/gpu/drm/imagination/pvr_device.h
index 990363e433d7..6d58ecfbc838 100644
--- a/drivers/gpu/drm/imagination/pvr_device.h
+++ b/drivers/gpu/drm/imagination/pvr_device.h
@@ -135,6 +135,28 @@ struct pvr_device {
 
 	/** @fw_dev: Firmware related data. */
 	struct pvr_fw_device fw_dev;
+
+	struct {
+		/** @work: Work item for watchdog callback. */
+		struct delayed_work work;
+
+		/** @old_kccb_cmds_executed: KCCB command execution count at last watchdog poll. */
+		u32 old_kccb_cmds_executed;
+
+		/** @kccb_stall_count: Number of watchdog polls KCCB has been stalled for. */
+		u32 kccb_stall_count;
+	} watchdog;
+
+	/**
+	 * @lost: %true if the device has been lost.
+	 *
+	 * This variable is set if the device has become irretrievably unavailable, e.g. if the
+	 * firmware processor has stopped responding and can not be revived via a hard reset.
+	 */
+	bool lost;
+
+	/** @sched_wq: Workqueue for schedulers. */
+	struct workqueue_struct *sched_wq;
 };
 
 /**
diff --git a/drivers/gpu/drm/imagination/pvr_drv.c b/drivers/gpu/drm/imagination/pvr_drv.c
index 0d51177b506c..0331dc549116 100644
--- a/drivers/gpu/drm/imagination/pvr_drv.c
+++ b/drivers/gpu/drm/imagination/pvr_drv.c
@@ -4,6 +4,7 @@
 #include "pvr_device.h"
 #include "pvr_drv.h"
 #include "pvr_gem.h"
+#include "pvr_power.h"
 #include "pvr_rogue_defs.h"
 #include "pvr_rogue_fwif_client.h"
 #include "pvr_rogue_fwif_shared.h"
@@ -1279,9 +1280,16 @@ pvr_probe(struct platform_device *plat_dev)
 
 	platform_set_drvdata(plat_dev, drm_dev);
 
+	devm_pm_runtime_enable(&plat_dev->dev);
+	pm_runtime_mark_last_busy(&plat_dev->dev);
+
+	pm_runtime_set_autosuspend_delay(&plat_dev->dev, 50);
+	pm_runtime_use_autosuspend(&plat_dev->dev);
+	pvr_watchdog_init(pvr_dev);
+
 	err = pvr_device_init(pvr_dev);
 	if (err)
-		return err;
+		goto err_watchdog_fini;
 
 	err = drm_dev_register(drm_dev, 0);
 	if (err)
@@ -1292,6 +1300,9 @@ pvr_probe(struct platform_device *plat_dev)
 err_device_fini:
 	pvr_device_fini(pvr_dev);
 
+err_watchdog_fini:
+	pvr_watchdog_fini(pvr_dev);
+
 	return err;
 }
 
@@ -1301,8 +1312,10 @@ pvr_remove(struct platform_device *plat_dev)
 	struct drm_device *drm_dev = platform_get_drvdata(plat_dev);
 	struct pvr_device *pvr_dev = to_pvr_device(drm_dev);
 
+	pm_runtime_suspend(drm_dev->dev);
 	drm_dev_unplug(drm_dev);
 	pvr_device_fini(pvr_dev);
+	pvr_watchdog_fini(pvr_dev);
 
 	return 0;
 }
@@ -1313,11 +1326,16 @@ static const struct of_device_id dt_match[] = {
 };
 MODULE_DEVICE_TABLE(of, dt_match);
 
+static const struct dev_pm_ops pvr_pm_ops = {
+	SET_RUNTIME_PM_OPS(pvr_power_device_suspend, pvr_power_device_resume, pvr_power_device_idle)
+};
+
 static struct platform_driver pvr_driver = {
 	.probe = pvr_probe,
 	.remove = pvr_remove,
 	.driver = {
 		.name = PVR_DRIVER_NAME,
+		.pm = &pvr_pm_ops,
 		.of_match_table = dt_match,
 	},
 };
diff --git a/drivers/gpu/drm/imagination/pvr_power.c b/drivers/gpu/drm/imagination/pvr_power.c
new file mode 100644
index 000000000000..a494fed92e81
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_power.c
@@ -0,0 +1,271 @@
+// SPDX-License-Identifier: GPL-2.0 OR MIT
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#include "pvr_device.h"
+#include "pvr_fw.h"
+#include "pvr_power.h"
+#include "pvr_rogue_fwif.h"
+
+#include <drm/drm_drv.h>
+#include <drm/drm_managed.h>
+#include <linux/clk.h>
+#include <linux/interrupt.h>
+#include <linux/mutex.h>
+#include <linux/platform_device.h>
+#include <linux/pm_runtime.h>
+#include <linux/timer.h>
+#include <linux/types.h>
+#include <linux/workqueue.h>
+
+#define POWER_SYNC_TIMEOUT_US (1000000) /* 1s */
+
+#define WATCHDOG_TIME_MS (500)
+
+static int
+pvr_power_send_command(struct pvr_device *pvr_dev, struct rogue_fwif_kccb_cmd *pow_cmd)
+{
+	/* TODO: implement */
+	return -ENODEV;
+}
+
+static int
+pvr_power_request_idle(struct pvr_device *pvr_dev)
+{
+	struct rogue_fwif_kccb_cmd pow_cmd;
+
+	/* Send FORCED_IDLE request to FW. */
+	pow_cmd.cmd_type = ROGUE_FWIF_KCCB_CMD_POW;
+	pow_cmd.cmd_data.pow_data.pow_type = ROGUE_FWIF_POW_FORCED_IDLE_REQ;
+	pow_cmd.cmd_data.pow_data.power_req_data.pow_request_type = ROGUE_FWIF_POWER_FORCE_IDLE;
+
+	return pvr_power_send_command(pvr_dev, &pow_cmd);
+}
+
+static int
+pvr_power_request_pwr_off(struct pvr_device *pvr_dev)
+{
+	struct rogue_fwif_kccb_cmd pow_cmd;
+
+	/* Send POW_OFF request to firmware. */
+	pow_cmd.cmd_type = ROGUE_FWIF_KCCB_CMD_POW;
+	pow_cmd.cmd_data.pow_data.pow_type = ROGUE_FWIF_POW_OFF_REQ;
+	pow_cmd.cmd_data.pow_data.power_req_data.forced = true;
+
+	return pvr_power_send_command(pvr_dev, &pow_cmd);
+}
+
+static int
+pvr_power_fw_disable(struct pvr_device *pvr_dev, bool hard_reset)
+{
+	if (!hard_reset) {
+		int err;
+
+		cancel_delayed_work_sync(&pvr_dev->watchdog.work);
+
+		err = pvr_power_request_idle(pvr_dev);
+		if (err)
+			return err;
+
+		err = pvr_power_request_pwr_off(pvr_dev);
+		if (err)
+			return err;
+	}
+
+	/* TODO: stop firmware */
+	return -ENODEV;
+}
+
+static int
+pvr_power_fw_enable(struct pvr_device *pvr_dev)
+{
+	int err;
+
+	/* TODO: start firmware */
+	err = -ENODEV;
+	if (err)
+		return err;
+
+	queue_delayed_work(pvr_dev->sched_wq, &pvr_dev->watchdog.work,
+			   msecs_to_jiffies(WATCHDOG_TIME_MS));
+
+	return 0;
+}
+
+bool
+pvr_power_is_idle(struct pvr_device *pvr_dev)
+{
+	/* TODO: implement */
+	return true;
+}
+
+static bool
+pvr_watchdog_kccb_stalled(struct pvr_device *pvr_dev)
+{
+	/* TODO: implement */
+	return false;
+}
+
+static void
+pvr_watchdog_worker(struct work_struct *work)
+{
+	struct pvr_device *pvr_dev = container_of(work, struct pvr_device,
+						  watchdog.work.work);
+	bool stalled;
+
+	if (pvr_dev->lost)
+		return;
+
+	if (pm_runtime_get_if_in_use(from_pvr_device(pvr_dev)->dev) <= 0)
+		goto out_requeue;
+
+	stalled = pvr_watchdog_kccb_stalled(pvr_dev);
+
+	if (stalled) {
+		drm_err(from_pvr_device(pvr_dev), "FW stalled, trying hard reset");
+
+		pvr_power_reset(pvr_dev, true);
+		/* Device may be lost at this point. */
+	}
+
+	pm_runtime_put(from_pvr_device(pvr_dev)->dev);
+
+out_requeue:
+	if (!pvr_dev->lost) {
+		queue_delayed_work(pvr_dev->sched_wq, &pvr_dev->watchdog.work,
+				   msecs_to_jiffies(WATCHDOG_TIME_MS));
+	}
+}
+
+/**
+ * pvr_watchdog_init() - Initialise watchdog for device
+ * @pvr_dev: Target PowerVR device.
+ *
+ * Returns:
+ *  * 0 on success, or
+ *  * -%ENOMEM if out of memory.
+ */
+int
+pvr_watchdog_init(struct pvr_device *pvr_dev)
+{
+	INIT_DELAYED_WORK(&pvr_dev->watchdog.work, pvr_watchdog_worker);
+
+	return 0;
+}
+
+int
+pvr_power_device_suspend(struct device *dev)
+{
+	struct platform_device *plat_dev = to_platform_device(dev);
+	struct drm_device *drm_dev = platform_get_drvdata(plat_dev);
+	struct pvr_device *pvr_dev = to_pvr_device(drm_dev);
+	int idx;
+
+	if (!drm_dev_enter(drm_dev, &idx))
+		return -EIO;
+
+	clk_disable_unprepare(pvr_dev->mem_clk);
+	clk_disable_unprepare(pvr_dev->sys_clk);
+	clk_disable_unprepare(pvr_dev->core_clk);
+
+	drm_dev_exit(idx);
+
+	return 0;
+}
+
+int
+pvr_power_device_resume(struct device *dev)
+{
+	struct platform_device *plat_dev = to_platform_device(dev);
+	struct drm_device *drm_dev = platform_get_drvdata(plat_dev);
+	struct pvr_device *pvr_dev = to_pvr_device(drm_dev);
+	int idx;
+	int err;
+
+	if (!drm_dev_enter(drm_dev, &idx))
+		return -EIO;
+
+	err = clk_prepare_enable(pvr_dev->core_clk);
+	if (err)
+		goto err_drm_dev_exit;
+
+	err = clk_prepare_enable(pvr_dev->sys_clk);
+	if (err)
+		goto err_core_clk_disable;
+
+	err = clk_prepare_enable(pvr_dev->mem_clk);
+	if (err)
+		goto err_sys_clk_disable;
+
+	drm_dev_exit(idx);
+
+	return 0;
+
+err_sys_clk_disable:
+	clk_disable_unprepare(pvr_dev->sys_clk);
+
+err_core_clk_disable:
+	clk_disable_unprepare(pvr_dev->core_clk);
+
+err_drm_dev_exit:
+	drm_dev_exit(idx);
+
+	return err;
+}
+
+int
+pvr_power_device_idle(struct device *dev)
+{
+	struct platform_device *plat_dev = to_platform_device(dev);
+	struct drm_device *drm_dev = platform_get_drvdata(plat_dev);
+	struct pvr_device *pvr_dev = to_pvr_device(drm_dev);
+
+	return pvr_power_is_idle(pvr_dev) ? 0 : -EBUSY;
+}
+
+/**
+ * pvr_power_reset() - Reset the GPU
+ * @pvr_dev: Device pointer
+ * @hard_reset: %true for hard reset, %false for soft reset
+ *
+ * If @hard_reset is %false and the FW processor fails to respond during the reset process, this
+ * function will attempt a hard reset.
+ *
+ * If a hard reset fails then the GPU device is reported as lost.
+ *
+ * Returns:
+ *  * 0 on success, or
+ *  * Any error code returned by pvr_power_get(), pvr_power_fw_disable() or pvr_power_fw_enable().
+ */
+int
+pvr_power_reset(struct pvr_device *pvr_dev, bool hard_reset)
+{
+	/* TODO: Implement hard reset. */
+	int err;
+
+	/*
+	 * Take a power reference during the reset. This should prevent any interference with the
+	 * power state during reset.
+	 */
+	WARN_ON(pvr_power_get(pvr_dev));
+
+	err = pvr_power_fw_disable(pvr_dev, false);
+	if (err)
+		goto err_power_put;
+
+	err = pvr_power_fw_enable(pvr_dev);
+
+err_power_put:
+	pvr_power_put(pvr_dev);
+
+	return err;
+}
+
+/**
+ * pvr_watchdog_fini() - Shutdown watchdog for device
+ * @pvr_dev: Target PowerVR device.
+ */
+void
+pvr_watchdog_fini(struct pvr_device *pvr_dev)
+{
+	cancel_delayed_work_sync(&pvr_dev->watchdog.work);
+}
diff --git a/drivers/gpu/drm/imagination/pvr_power.h b/drivers/gpu/drm/imagination/pvr_power.h
new file mode 100644
index 000000000000..439f08d13655
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_power.h
@@ -0,0 +1,39 @@
+/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#ifndef PVR_POWER_H
+#define PVR_POWER_H
+
+#include "pvr_device.h"
+
+#include <linux/mutex.h>
+#include <linux/pm_runtime.h>
+
+int pvr_watchdog_init(struct pvr_device *pvr_dev);
+void pvr_watchdog_fini(struct pvr_device *pvr_dev);
+
+bool pvr_power_is_idle(struct pvr_device *pvr_dev);
+
+int pvr_power_device_suspend(struct device *dev);
+int pvr_power_device_resume(struct device *dev);
+int pvr_power_device_idle(struct device *dev);
+
+int pvr_power_reset(struct pvr_device *pvr_dev, bool hard_reset);
+
+static __always_inline int
+pvr_power_get(struct pvr_device *pvr_dev)
+{
+	struct drm_device *drm_dev = from_pvr_device(pvr_dev);
+
+	return pm_runtime_resume_and_get(drm_dev->dev);
+}
+
+static __always_inline int
+pvr_power_put(struct pvr_device *pvr_dev)
+{
+	struct drm_device *drm_dev = from_pvr_device(pvr_dev);
+
+	return pm_runtime_put(drm_dev->dev);
+}
+
+#endif /* PVR_POWER_H */
-- 
2.41.0



* [PATCH v5 10/17] drm/imagination: Implement firmware infrastructure and META FW support
  2023-08-16  8:25 ` Sarah Walker
@ 2023-08-16  8:25   ` Sarah Walker
  -1 siblings, 0 replies; 77+ messages in thread
From: Sarah Walker @ 2023-08-16  8:25 UTC (permalink / raw)
  To: dri-devel
  Cc: frank.binns, donald.robson, boris.brezillon, faith.ekstrand,
	airlied, daniel, maarten.lankhorst, mripard, tzimmermann, afd,
	hns, matthew.brost, christian.koenig, luben.tuikov, dakr,
	linux-kernel

The infrastructure includes parsing of the firmware image, initialising
FW-side structures, handling the kernel and firmware command
ringbuffers and starting & stopping the firmware processor.

This patch also adds the necessary support code for the META firmware
processor.
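
The kernel CCB (KCCB) and firmware CCB (FWCCB) are circular buffers shared
with the firmware. As a rough illustration of the producer-side accounting
used below (a trimmed version of pvr_ccb_slot_available_locked(); one slot
is always left unused so that read_offset == write_offset means "empty"):

	static bool slot_available(struct rogue_fwif_ccb_ctl *ctrl, u32 *next)
	{
		u32 next_write = (READ_ONCE(ctrl->write_offset) + 1) &
				 READ_ONCE(ctrl->wrap_mask);

		if (READ_ONCE(ctrl->read_offset) == next_write)
			return false;

		*next = next_write;
		return true;
	}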

Changes since v4:
- Remove use of drm_gem_shmem_get_pages()
- Remove interrupt resource name

Changes since v3:
- Hard reset FW processor on watchdog timeout
- Switch to threaded IRQ
- Rework FW object creation/initialisation to aid hard reset
- Added MODULE_FIRMWARE()
- Use drm_dev_{enter,exit}

Signed-off-by: Sarah Walker <sarah.walker@imgtec.com>
---
 drivers/gpu/drm/imagination/Makefile          |    4 +
 drivers/gpu/drm/imagination/pvr_ccb.c         |  631 ++++++++
 drivers/gpu/drm/imagination/pvr_ccb.h         |   71 +
 drivers/gpu/drm/imagination/pvr_device.c      |  100 ++
 drivers/gpu/drm/imagination/pvr_device.h      |   60 +
 drivers/gpu/drm/imagination/pvr_drv.c         |    1 +
 drivers/gpu/drm/imagination/pvr_fw.c          | 1323 +++++++++++++++++
 drivers/gpu/drm/imagination/pvr_fw.h          |  474 ++++++
 drivers/gpu/drm/imagination/pvr_fw_meta.c     |  554 +++++++
 drivers/gpu/drm/imagination/pvr_fw_meta.h     |   14 +
 .../gpu/drm/imagination/pvr_fw_startstop.c    |  301 ++++
 .../gpu/drm/imagination/pvr_fw_startstop.h    |   13 +
 drivers/gpu/drm/imagination/pvr_fw_trace.c    |  121 ++
 drivers/gpu/drm/imagination/pvr_fw_trace.h    |   78 +
 drivers/gpu/drm/imagination/pvr_gem.h         |    7 +
 drivers/gpu/drm/imagination/pvr_mmu.c         |   40 +-
 drivers/gpu/drm/imagination/pvr_power.c       |  154 +-
 drivers/gpu/drm/imagination/pvr_vm.c          |   26 +-
 18 files changed, 3949 insertions(+), 23 deletions(-)
 create mode 100644 drivers/gpu/drm/imagination/pvr_ccb.c
 create mode 100644 drivers/gpu/drm/imagination/pvr_ccb.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_fw_meta.c
 create mode 100644 drivers/gpu/drm/imagination/pvr_fw_meta.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_fw_startstop.c
 create mode 100644 drivers/gpu/drm/imagination/pvr_fw_startstop.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_fw_trace.c
 create mode 100644 drivers/gpu/drm/imagination/pvr_fw_trace.h

diff --git a/drivers/gpu/drm/imagination/Makefile b/drivers/gpu/drm/imagination/Makefile
index 235e2d329e29..5b02440841be 100644
--- a/drivers/gpu/drm/imagination/Makefile
+++ b/drivers/gpu/drm/imagination/Makefile
@@ -4,10 +4,14 @@
 subdir-ccflags-y := -I$(srctree)/$(src)
 
 powervr-y := \
+	pvr_ccb.o \
 	pvr_device.o \
 	pvr_device_info.o \
 	pvr_drv.o \
 	pvr_fw.o \
+	pvr_fw_meta.o \
+	pvr_fw_startstop.o \
+	pvr_fw_trace.o \
 	pvr_gem.o \
 	pvr_mmu.o \
 	pvr_power.o \
diff --git a/drivers/gpu/drm/imagination/pvr_ccb.c b/drivers/gpu/drm/imagination/pvr_ccb.c
new file mode 100644
index 000000000000..1dfcce963e67
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_ccb.c
@@ -0,0 +1,631 @@
+// SPDX-License-Identifier: GPL-2.0 OR MIT
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#include "pvr_ccb.h"
+#include "pvr_device.h"
+#include "pvr_drv.h"
+#include "pvr_fw.h"
+#include "pvr_gem.h"
+#include "pvr_power.h"
+
+#include <drm/drm_managed.h>
+#include <linux/compiler.h>
+#include <linux/delay.h>
+#include <linux/jiffies.h>
+#include <linux/kernel.h>
+#include <linux/mutex.h>
+#include <linux/types.h>
+#include <linux/workqueue.h>
+
+#define RESERVE_SLOT_TIMEOUT (1 * HZ) /* 1s */
+
+static void
+ccb_ctrl_init(void *cpu_ptr, void *priv)
+{
+	struct rogue_fwif_ccb_ctl *ctrl = cpu_ptr;
+	struct pvr_ccb *pvr_ccb = priv;
+
+	ctrl->write_offset = 0;
+	ctrl->read_offset = 0;
+	ctrl->wrap_mask = pvr_ccb->num_cmds - 1;
+	ctrl->cmd_size = pvr_ccb->cmd_size;
+}
+
+/**
+ * pvr_ccb_init() - Initialise a CCB
+ * @pvr_dev: Device pointer.
+ * @pvr_ccb: Pointer to CCB structure to initialise.
+ * @num_cmds_log2: Log2 of number of commands in this CCB.
+ * @cmd_size: Command size for this CCB.
+ *
+ * Return:
+ *  * Zero on success, or
+ *  * Any error code returned by pvr_fw_object_create_and_map().
+ */
+static int
+pvr_ccb_init(struct pvr_device *pvr_dev, struct pvr_ccb *pvr_ccb,
+	     u32 num_cmds_log2, size_t cmd_size)
+{
+	u32 num_cmds = 1 << num_cmds_log2;
+	u32 ccb_size = num_cmds * cmd_size;
+	int err;
+
+	pvr_ccb->num_cmds = num_cmds;
+	pvr_ccb->cmd_size = cmd_size;
+
+	err = drmm_mutex_init(from_pvr_device(pvr_dev), &pvr_ccb->lock);
+	if (err)
+		return err;
+
+	/*
+	 * Map CCB and control structure as uncached, so we don't have to flush
+	 * CPU cache repeatedly when polling for space.
+	 */
+	pvr_ccb->ctrl = pvr_fw_object_create_and_map(pvr_dev, sizeof(*pvr_ccb->ctrl),
+						     PVR_BO_FW_FLAGS_DEVICE_UNCACHED,
+						     ccb_ctrl_init, pvr_ccb, &pvr_ccb->ctrl_obj);
+	if (IS_ERR(pvr_ccb->ctrl))
+		return PTR_ERR(pvr_ccb->ctrl);
+
+	pvr_ccb->ccb = pvr_fw_object_create_and_map(pvr_dev, ccb_size,
+						    PVR_BO_FW_FLAGS_DEVICE_UNCACHED,
+						    NULL, NULL, &pvr_ccb->ccb_obj);
+	if (IS_ERR(pvr_ccb->ccb)) {
+		err = PTR_ERR(pvr_ccb->ccb);
+		goto err_free_ctrl;
+	}
+
+	pvr_fw_object_get_fw_addr(pvr_ccb->ctrl_obj, &pvr_ccb->ctrl_fw_addr);
+	pvr_fw_object_get_fw_addr(pvr_ccb->ccb_obj, &pvr_ccb->ccb_fw_addr);
+
+	WRITE_ONCE(pvr_ccb->ctrl->write_offset, 0);
+	WRITE_ONCE(pvr_ccb->ctrl->read_offset, 0);
+	WRITE_ONCE(pvr_ccb->ctrl->wrap_mask, num_cmds - 1);
+	WRITE_ONCE(pvr_ccb->ctrl->cmd_size, cmd_size);
+
+	return 0;
+
+err_free_ctrl:
+	pvr_fw_object_unmap_and_destroy(pvr_ccb->ctrl_obj);
+
+	return err;
+}
+
+/**
+ * pvr_ccb_fini() - Release CCB structure
+ * @pvr_ccb: CCB to release.
+ */
+void
+pvr_ccb_fini(struct pvr_ccb *pvr_ccb)
+{
+	pvr_fw_object_unmap_and_destroy(pvr_ccb->ccb_obj);
+	pvr_fw_object_unmap_and_destroy(pvr_ccb->ctrl_obj);
+}
+
+/**
+ * pvr_ccb_slot_available_locked() - Test whether any slots are available in CCB
+ * @pvr_ccb: CCB to test.
+ * @write_offset: Address to store number of next available slot. May be %NULL.
+ *
+ * Caller must hold @pvr_ccb->lock.
+ *
+ * Return:
+ *  * %true if a slot is available, or
+ *  * %false if no slot is available.
+ */
+static __always_inline bool
+pvr_ccb_slot_available_locked(struct pvr_ccb *pvr_ccb, u32 *write_offset)
+{
+	struct rogue_fwif_ccb_ctl *ctrl = pvr_ccb->ctrl;
+	u32 next_write_offset = (READ_ONCE(ctrl->write_offset) + 1) & READ_ONCE(ctrl->wrap_mask);
+
+	lockdep_assert_held(&pvr_ccb->lock);
+
+	if (READ_ONCE(ctrl->read_offset) != next_write_offset) {
+		if (write_offset)
+			*write_offset = next_write_offset;
+		return true;
+	}
+
+	return false;
+}
+
+static void
+process_fwccb_command(struct pvr_device *pvr_dev, struct rogue_fwif_fwccb_cmd *cmd)
+{
+	switch (cmd->cmd_type) {
+	case ROGUE_FWIF_FWCCB_CMD_REQUEST_GPU_RESTART:
+		pvr_power_reset(pvr_dev, false);
+		break;
+
+	default:
+		drm_info(from_pvr_device(pvr_dev), "Received unknown FWCCB command %x\n",
+			 cmd->cmd_type);
+		break;
+	}
+}
+
+/**
+ * pvr_fwccb_process() - Process any pending FWCCB commands
+ * @pvr_dev: Target PowerVR device.
+ *
+ * The only FWCCB command currently handled is a firmware request for a GPU restart, which
+ * triggers a soft reset via pvr_power_reset(); any other command is logged and otherwise ignored.
+ */
+void pvr_fwccb_process(struct pvr_device *pvr_dev)
+{
+	struct rogue_fwif_fwccb_cmd *fwccb = pvr_dev->fwccb.ccb;
+	struct rogue_fwif_ccb_ctl *ctrl = pvr_dev->fwccb.ctrl;
+	u32 read_offset;
+
+	mutex_lock(&pvr_dev->fwccb.lock);
+
+	while ((read_offset = READ_ONCE(ctrl->read_offset)) != READ_ONCE(ctrl->write_offset)) {
+		struct rogue_fwif_fwccb_cmd cmd = fwccb[read_offset];
+
+		WRITE_ONCE(ctrl->read_offset, (read_offset + 1) & READ_ONCE(ctrl->wrap_mask));
+
+		/* Drop FWCCB lock while we process command. */
+		mutex_unlock(&pvr_dev->fwccb.lock);
+
+		process_fwccb_command(pvr_dev, &cmd);
+
+		mutex_lock(&pvr_dev->fwccb.lock);
+	}
+
+	mutex_unlock(&pvr_dev->fwccb.lock);
+}
+
+/**
+ * pvr_kccb_capacity() - Returns the maximum number of usable KCCB slots.
+ * @pvr_dev: Target PowerVR device
+ *
+ * Return:
+ *  * The maximum number of active slots.
+ */
+static u32 pvr_kccb_capacity(struct pvr_device *pvr_dev)
+{
+	/* Capacity is the number of slots minus one, to cope with the wrapping
+	 * mechanism. If we were to use all slots, we could end up with
+	 * read_offset == write_offset, which the FW treats as a KCCB-is-empty
+	 * condition.
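+	 * For example, with 16 slots at most 15 commands can be in flight at once.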
+	 */
+	return pvr_dev->kccb.slot_count - 1;
+}
+
+/**
+ * pvr_kccb_used_slot_count_locked() - Get the number of used slots
+ * @pvr_dev: Device pointer.
+ *
+ * KCCB lock must be held.
+ *
+ * Return:
+ *  * The number of slots currently used.
+ */
+static u32
+pvr_kccb_used_slot_count_locked(struct pvr_device *pvr_dev)
+{
+	struct pvr_ccb *pvr_ccb = &pvr_dev->kccb.ccb;
+	struct rogue_fwif_ccb_ctl *ctrl = pvr_ccb->ctrl;
+	u32 wr_offset = READ_ONCE(ctrl->write_offset);
+	u32 rd_offset = READ_ONCE(ctrl->read_offset);
+	u32 used_count;
+
+	lockdep_assert_held(&pvr_ccb->lock);
+
+	if (wr_offset >= rd_offset)
+		used_count = wr_offset - rd_offset;
+	else
+		used_count = wr_offset + pvr_dev->kccb.slot_count - rd_offset;
+
+	return used_count;
+}
+
+/**
+ * pvr_kccb_send_cmd_reserved_powered() - Send command to the KCCB, with the PM ref
+ * held and a slot pre-reserved
+ * @pvr_dev: Device pointer.
+ * @cmd: Command to send.
+ * @kccb_slot: Address to store the KCCB slot for this command. May be %NULL.
+ */
+void
+pvr_kccb_send_cmd_reserved_powered(struct pvr_device *pvr_dev,
+				   struct rogue_fwif_kccb_cmd *cmd,
+				   u32 *kccb_slot)
+{
+	struct pvr_ccb *pvr_ccb = &pvr_dev->kccb.ccb;
+	struct rogue_fwif_kccb_cmd *kccb = pvr_ccb->ccb;
+	struct rogue_fwif_ccb_ctl *ctrl = pvr_ccb->ctrl;
+	u32 old_write_offset;
+	u32 new_write_offset;
+
+	WARN_ON(pvr_dev->lost);
+
+	mutex_lock(&pvr_ccb->lock);
+
+	if (WARN_ON(!pvr_dev->kccb.reserved_count))
+		goto out_unlock;
+
+	old_write_offset = READ_ONCE(ctrl->write_offset);
+
+	/* We reserved the slot, we should have one available. */
+	if (WARN_ON(!pvr_ccb_slot_available_locked(pvr_ccb, &new_write_offset)))
+		goto out_unlock;
+
+	memcpy(&kccb[old_write_offset], cmd,
+	       sizeof(struct rogue_fwif_kccb_cmd));
+	if (kccb_slot) {
+		*kccb_slot = old_write_offset;
+		/* Clear return status for this slot. */
+		WRITE_ONCE(pvr_dev->kccb.rtn[old_write_offset],
+			   ROGUE_FWIF_KCCB_RTN_SLOT_NO_RESPONSE);
+	}
+	mb(); /* Ensure the command is written before the write offset is updated. */
+	WRITE_ONCE(ctrl->write_offset, new_write_offset);
+	pvr_dev->kccb.reserved_count--;
+
+	/* Kick MTS */
+	pvr_fw_mts_schedule(pvr_dev,
+			    PVR_FWIF_DM_GP & ~ROGUE_CR_MTS_SCHEDULE_DM_CLRMSK);
+
+out_unlock:
+	mutex_unlock(&pvr_ccb->lock);
+}
+
+/**
+ * pvr_kccb_try_reserve_slot() - Try to reserve a KCCB slot
+ * @pvr_dev: Device pointer.
+ *
+ * Return:
+ *  * true if a KCCB slot was reserved, or
+ *  * false otherwise.
+ */
+static bool pvr_kccb_try_reserve_slot(struct pvr_device *pvr_dev)
+{
+	bool reserved = false;
+	u32 used_count;
+
+	mutex_lock(&pvr_dev->kccb.ccb.lock);
+
+	used_count = pvr_kccb_used_slot_count_locked(pvr_dev);
+	if (pvr_dev->kccb.reserved_count < pvr_kccb_capacity(pvr_dev) - used_count) {
+		pvr_dev->kccb.reserved_count++;
+		reserved = true;
+	}
+
+	mutex_unlock(&pvr_dev->kccb.ccb.lock);
+
+	return reserved;
+}
+
+/**
+ * pvr_kccb_reserve_slot_sync() - Try to reserve a slot synchronously
+ * @pvr_dev: Device pointer.
+ *
+ * Return:
+ *  * 0 on success, or
+ *  * -EBUSY if no slot could be reserved within %RESERVE_SLOT_TIMEOUT.
+ */
+static int pvr_kccb_reserve_slot_sync(struct pvr_device *pvr_dev)
+{
+	unsigned long start_timestamp = jiffies;
+	bool reserved = false;
+
+	while ((jiffies - start_timestamp) < (u32)RESERVE_SLOT_TIMEOUT) {
+		reserved = pvr_kccb_try_reserve_slot(pvr_dev);
+		if (reserved)
+			break;
+
+		usleep_range(1, 50);
+	}
+
+	return reserved ? 0 : -EBUSY;
+}
+
+/**
+ * pvr_kccb_send_cmd_powered() - Send command to the KCCB, with a PM ref held
+ * @pvr_dev: Device pointer.
+ * @cmd: Command to send.
+ * @kccb_slot: Address to store the KCCB slot for this command. May be %NULL.
+ *
+ * Returns:
+ *  * Zero on success, or
+ *  * -EBUSY on timeout while waiting for a free KCCB slot.
+ */
+int
+pvr_kccb_send_cmd_powered(struct pvr_device *pvr_dev, struct rogue_fwif_kccb_cmd *cmd,
+			  u32 *kccb_slot)
+{
+	int err;
+
+	err = pvr_kccb_reserve_slot_sync(pvr_dev);
+	if (err)
+		return err;
+
+	pvr_kccb_send_cmd_reserved_powered(pvr_dev, cmd, kccb_slot);
+	return 0;
+}
+
+/**
+ * pvr_kccb_send_cmd() - Send command to the KCCB
+ * @pvr_dev: Device pointer.
+ * @cmd: Command to send.
+ * @kccb_slot: Address to store the KCCB slot for this command. May be %NULL.
+ *
+ * Returns:
+ *  * Zero on success, or
+ *  * -EBUSY on timeout while waiting for a free KCCB slot.
+ */
+int
+pvr_kccb_send_cmd(struct pvr_device *pvr_dev, struct rogue_fwif_kccb_cmd *cmd,
+		  u32 *kccb_slot)
+{
+	int err;
+
+	err = pvr_power_get(pvr_dev);
+	if (err)
+		return err;
+
+	err = pvr_kccb_send_cmd_powered(pvr_dev, cmd, kccb_slot);
+
+	pvr_power_put(pvr_dev);
+
+	return err;
+}
+
+/**
+ * pvr_kccb_wait_for_completion() - Wait for a KCCB command to complete
+ * @pvr_dev: Device pointer.
+ * @slot_nr: KCCB slot to wait on.
+ * @timeout: Timeout length (in jiffies).
+ * @rtn_out: Location to store KCCB command result. May be %NULL.
+ *
+ * Returns:
+ *  * Zero on success, or
+ *  * -ETIMEDOUT on timeout.
+ */
+int
+pvr_kccb_wait_for_completion(struct pvr_device *pvr_dev, u32 slot_nr,
+			     u32 timeout, u32 *rtn_out)
+{
+	int ret = wait_event_timeout(pvr_dev->kccb.rtn_q, READ_ONCE(pvr_dev->kccb.rtn[slot_nr]) &
+				     ROGUE_FWIF_KCCB_RTN_SLOT_CMD_EXECUTED, timeout);
+
+	if (ret && rtn_out)
+		*rtn_out = READ_ONCE(pvr_dev->kccb.rtn[slot_nr]);
+
+	return ret ? 0 : -ETIMEDOUT;
+}
+
+/**
+ * pvr_kccb_is_idle() - Returns whether the device's KCCB is idle
+ * @pvr_dev: Device pointer
+ *
+ * Returns:
+ *  * %true if the KCCB is idle (contains no commands), or
+ *  * %false if the KCCB contains pending commands.
+ */
+bool
+pvr_kccb_is_idle(struct pvr_device *pvr_dev)
+{
+	struct rogue_fwif_ccb_ctl *ctrl = pvr_dev->kccb.ccb.ctrl;
+	bool idle;
+
+	mutex_lock(&pvr_dev->kccb.ccb.lock);
+
+	idle = (READ_ONCE(ctrl->write_offset) == READ_ONCE(ctrl->read_offset));
+
+	mutex_unlock(&pvr_dev->kccb.ccb.lock);
+
+	return idle;
+}
+
+static const char *
+pvr_kccb_fence_get_driver_name(struct dma_fence *f)
+{
+	return PVR_DRIVER_NAME;
+}
+
+static const char *
+pvr_kccb_fence_get_timeline_name(struct dma_fence *f)
+{
+	return "kccb";
+}
+
+static const struct dma_fence_ops pvr_kccb_fence_ops = {
+	.get_driver_name = pvr_kccb_fence_get_driver_name,
+	.get_timeline_name = pvr_kccb_fence_get_timeline_name,
+};
+
+/**
+ * struct pvr_kccb_fence - Fence object used to wait for a KCCB slot
+ */
+struct pvr_kccb_fence {
+	/** @base: Base dma_fence object. */
+	struct dma_fence base;
+
+	/** @node: Node used to insert the fence in the pvr_device::kccb::waiters list. */
+	struct list_head node;
+};
+
+/**
+ * pvr_kccb_wake_up_waiters() - Check the KCCB waiters
+ * @pvr_dev: Target PowerVR device
+ *
+ * Signal as many KCCB fences as we have slots available.
+ */
+void pvr_kccb_wake_up_waiters(struct pvr_device *pvr_dev)
+{
+	struct pvr_kccb_fence *fence, *tmp_fence;
+	u32 used_count, available_count;
+
+	/* Wake up those waiting for KCCB slot execution. */
+	wake_up_all(&pvr_dev->kccb.rtn_q);
+
+	/* Then iterate over all KCCB fences and signal as many as we can. */
+	mutex_lock(&pvr_dev->kccb.ccb.lock);
+	used_count = pvr_kccb_used_slot_count_locked(pvr_dev);
+
+	if (WARN_ON(used_count + pvr_dev->kccb.reserved_count > pvr_kccb_capacity(pvr_dev)))
+		goto out_unlock;
+
+	available_count = pvr_kccb_capacity(pvr_dev) - used_count - pvr_dev->kccb.reserved_count;
+	list_for_each_entry_safe(fence, tmp_fence, &pvr_dev->kccb.waiters, node) {
+		if (!available_count)
+			break;
+
+		list_del(&fence->node);
+		pvr_dev->kccb.reserved_count++;
+		available_count--;
+		dma_fence_signal(&fence->base);
+		dma_fence_put(&fence->base);
+	}
+
+out_unlock:
+	mutex_unlock(&pvr_dev->kccb.ccb.lock);
+}
+
+/**
+ * pvr_kccb_fini() - Cleanup device KCCB
+ * @pvr_dev: Target PowerVR device
+ */
+void pvr_kccb_fini(struct pvr_device *pvr_dev)
+{
+	pvr_ccb_fini(&pvr_dev->kccb.ccb);
+	WARN_ON(!list_empty(&pvr_dev->kccb.waiters));
+	WARN_ON(pvr_dev->kccb.reserved_count);
+}
+
+/**
+ * pvr_kccb_init() - Initialise device KCCB
+ * @pvr_dev: Target PowerVR device
+ *
+ * Returns:
+ *  * 0 on success, or
+ *  * Any error returned by pvr_ccb_init().
+ */
+int
+pvr_kccb_init(struct pvr_device *pvr_dev)
+{
+	pvr_dev->kccb.slot_count = 1 << ROGUE_FWIF_KCCB_NUMCMDS_LOG2_DEFAULT;
+	INIT_LIST_HEAD(&pvr_dev->kccb.waiters);
+	pvr_dev->kccb.fence_ctx.id = dma_fence_context_alloc(1);
+	spin_lock_init(&pvr_dev->kccb.fence_ctx.lock);
+
+	return pvr_ccb_init(pvr_dev, &pvr_dev->kccb.ccb,
+			    ROGUE_FWIF_KCCB_NUMCMDS_LOG2_DEFAULT,
+			    sizeof(struct rogue_fwif_kccb_cmd));
+}
+
+/**
+ * pvr_kccb_fence_alloc() - Allocate a pvr_kccb_fence object
+ *
+ * Return:
+ *  * NULL if the allocation fails, or
+ *  * A valid dma_fence pointer otherwise.
+ */
+struct dma_fence *pvr_kccb_fence_alloc(void)
+{
+	struct pvr_kccb_fence *kccb_fence;
+
+	kccb_fence = kzalloc(sizeof(*kccb_fence), GFP_KERNEL);
+	if (!kccb_fence)
+		return NULL;
+
+	return &kccb_fence->base;
+}
+
+/**
+ * pvr_kccb_fence_put() - Drop a KCCB fence reference
+ * @fence: The fence to drop the reference on.
+ *
+ * If the fence hasn't been initialized yet, dma_fence_free() is called. This
+ * way we have a single function taking care of both cases.
+ */
+void pvr_kccb_fence_put(struct dma_fence *fence)
+{
+	if (!fence)
+		return;
+
+	if (!fence->ops) {
+		dma_fence_free(fence);
+	} else {
+		WARN_ON(fence->ops != &pvr_kccb_fence_ops);
+		dma_fence_put(fence);
+	}
+}
+
+/**
+ * pvr_kccb_reserve_slot() - Reserve a KCCB slot for later use
+ * @pvr_dev: Target PowerVR device
+ * @f: KCCB fence object previously allocated with pvr_kccb_fence_alloc()
+ *
+ * Try to reserve a KCCB slot. If no slot is available, initialize the fence
+ * object and queue it to the waiters list.
+ *
+ * If NULL is returned, the slot has been reserved. In that case, @f is freed
+ * and must not be accessed after that point.
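+ *
+ * Typical usage is roughly the following (a sketch; error handling and the
+ * required PM reference are omitted):
+ *
+ *	f = pvr_kccb_fence_alloc();
+ *	f = pvr_kccb_reserve_slot(pvr_dev, f);
+ *	if (f) {
+ *		dma_fence_wait(f, false);
+ *		pvr_kccb_fence_put(f);
+ *	}
+ *	pvr_kccb_send_cmd_reserved_powered(pvr_dev, &cmd, &slot);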
+ *
+ * Return:
+ *  * NULL if a slot was available directly, or
+ *  * A valid dma_fence object to wait on if no slot was available.
+ */
+struct dma_fence *
+pvr_kccb_reserve_slot(struct pvr_device *pvr_dev, struct dma_fence *f)
+{
+	struct pvr_kccb_fence *fence = container_of(f, struct pvr_kccb_fence, base);
+	struct dma_fence *out_fence = NULL;
+	u32 used_count;
+
+	mutex_lock(&pvr_dev->kccb.ccb.lock);
+
+	used_count = pvr_kccb_used_slot_count_locked(pvr_dev);
+	if (pvr_dev->kccb.reserved_count >= pvr_kccb_capacity(pvr_dev) - used_count) {
+		dma_fence_init(&fence->base, &pvr_kccb_fence_ops,
+			       &pvr_dev->kccb.fence_ctx.lock,
+			       pvr_dev->kccb.fence_ctx.id,
+			       atomic_inc_return(&pvr_dev->kccb.fence_ctx.seqno));
+		out_fence = dma_fence_get(&fence->base);
+		list_add_tail(&fence->node, &pvr_dev->kccb.waiters);
+	} else {
+		pvr_kccb_fence_put(f);
+		pvr_dev->kccb.reserved_count++;
+	}
+
+	mutex_unlock(&pvr_dev->kccb.ccb.lock);
+
+	return out_fence;
+}
+
+/**
+ * pvr_kccb_release_slot() - Release a KCCB slot reserved with
+ * pvr_kccb_reserve_slot()
+ * @pvr_dev: Target PowerVR device
+ *
+ * Should only be called if something failed after the
+ * pvr_kccb_reserve_slot() call and you know you won't call
+ * pvr_kccb_send_cmd_reserved_powered().
+ */
+void pvr_kccb_release_slot(struct pvr_device *pvr_dev)
+{
+	mutex_lock(&pvr_dev->kccb.ccb.lock);
+	if (!WARN_ON(!pvr_dev->kccb.reserved_count))
+		pvr_dev->kccb.reserved_count--;
+	mutex_unlock(&pvr_dev->kccb.ccb.lock);
+}
+
+/**
+ * pvr_fwccb_init() - Initialise device FWCCB
+ * @pvr_dev: Target PowerVR device
+ *
+ * Returns:
+ *  * 0 on success, or
+ *  * Any error returned by pvr_ccb_init().
+ */
+int
+pvr_fwccb_init(struct pvr_device *pvr_dev)
+{
+	return pvr_ccb_init(pvr_dev, &pvr_dev->fwccb,
+			    ROGUE_FWIF_FWCCB_NUMCMDS_LOG2,
+			    sizeof(struct rogue_fwif_fwccb_cmd));
+}
diff --git a/drivers/gpu/drm/imagination/pvr_ccb.h b/drivers/gpu/drm/imagination/pvr_ccb.h
new file mode 100644
index 000000000000..60ebaa7dca63
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_ccb.h
@@ -0,0 +1,71 @@
+/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#ifndef PVR_CCB_H
+#define PVR_CCB_H
+
+#include "pvr_rogue_fwif.h"
+
+#include <linux/mutex.h>
+#include <linux/types.h>
+
+/* Forward declaration from pvr_device.h. */
+struct pvr_device;
+
+/* Forward declaration from pvr_gem.h. */
+struct pvr_fw_object;
+
+struct pvr_ccb {
+	/** @ctrl_obj: FW object representing CCB control structure. */
+	struct pvr_fw_object *ctrl_obj;
+	/** @ccb_obj: FW object representing CCB. */
+	struct pvr_fw_object *ccb_obj;
+
+	/** @ctrl_fw_addr: FW virtual address of CCB control structure. */
+	u32 ctrl_fw_addr;
+	/** @ccb_fw_addr: FW virtual address of CCB. */
+	u32 ccb_fw_addr;
+
+	/** @num_cmds: Number of commands in this CCB. */
+	u32 num_cmds;
+
+	/** @cmd_size: Size of each command in this CCB, in bytes. */
+	u32 cmd_size;
+
+	/** @lock: Mutex protecting @ctrl and @ccb. */
+	struct mutex lock;
+	/**
+	 * @ctrl: Kernel mapping of CCB control structure. @lock must be held
+	 *        when accessing.
+	 */
+	struct rogue_fwif_ccb_ctl *ctrl;
+	/** @ccb: Kernel mapping of CCB. @lock must be held when accessing. */
+	void *ccb;
+};
+
+int pvr_kccb_init(struct pvr_device *pvr_dev);
+void pvr_kccb_fini(struct pvr_device *pvr_dev);
+int pvr_fwccb_init(struct pvr_device *pvr_dev);
+void pvr_ccb_fini(struct pvr_ccb *ccb);
+
+void pvr_fwccb_process(struct pvr_device *pvr_dev);
+
+struct dma_fence *pvr_kccb_fence_alloc(void);
+void pvr_kccb_fence_put(struct dma_fence *fence);
+struct dma_fence *
+pvr_kccb_reserve_slot(struct pvr_device *pvr_dev, struct dma_fence *f);
+void pvr_kccb_release_slot(struct pvr_device *pvr_dev);
+int pvr_kccb_send_cmd(struct pvr_device *pvr_dev,
+		      struct rogue_fwif_kccb_cmd *cmd, u32 *kccb_slot);
+int pvr_kccb_send_cmd_powered(struct pvr_device *pvr_dev,
+			      struct rogue_fwif_kccb_cmd *cmd,
+			      u32 *kccb_slot);
+void pvr_kccb_send_cmd_reserved_powered(struct pvr_device *pvr_dev,
+					struct rogue_fwif_kccb_cmd *cmd,
+					u32 *kccb_slot);
+int pvr_kccb_wait_for_completion(struct pvr_device *pvr_dev, u32 slot_nr, u32 timeout,
+				 u32 *rtn_out);
+bool pvr_kccb_is_idle(struct pvr_device *pvr_dev);
+void pvr_kccb_wake_up_waiters(struct pvr_device *pvr_dev);
+
+#endif /* PVR_CCB_H */
diff --git a/drivers/gpu/drm/imagination/pvr_device.c b/drivers/gpu/drm/imagination/pvr_device.c
index 5dbd05f21238..ad198ed432fe 100644
--- a/drivers/gpu/drm/imagination/pvr_device.c
+++ b/drivers/gpu/drm/imagination/pvr_device.c
@@ -114,6 +114,87 @@ static int pvr_device_clk_init(struct pvr_device *pvr_dev)
 	return 0;
 }
 
+static irqreturn_t pvr_device_irq_thread_handler(int irq, void *data)
+{
+	struct pvr_device *pvr_dev = data;
+	irqreturn_t ret = IRQ_NONE;
+
+	/* We are in the threaded handler, so we can keep dequeuing events until we
+	 * don't see any more. This should reduce the number of interrupts when the
+	 * GPU is receiving a large number of short jobs.
+	 */
+	while (pvr_fw_irq_pending(pvr_dev)) {
+		pvr_fw_irq_clear(pvr_dev);
+
+		if (pvr_dev->fw_dev.booted) {
+			pvr_fwccb_process(pvr_dev);
+			pvr_kccb_wake_up_waiters(pvr_dev);
+		}
+
+		pm_runtime_mark_last_busy(from_pvr_device(pvr_dev)->dev);
+
+		ret = IRQ_HANDLED;
+	}
+
+	/* Unmask FW irqs before returning, so new interrupts can be received. */
+	pvr_fw_irq_enable(pvr_dev);
+	return ret;
+}
+
+static irqreturn_t pvr_device_irq_handler(int irq, void *data)
+{
+	struct pvr_device *pvr_dev = data;
+
+	if (!pvr_fw_irq_pending(pvr_dev))
+		return IRQ_NONE; /* Spurious IRQ - ignore. */
+
+	/* Mask the FW interrupts before waking up the thread. Will be unmasked
+	 * when the thread handler is done processing events.
+	 */
+	pvr_fw_irq_disable(pvr_dev);
+	return IRQ_WAKE_THREAD;
+}
+
+/**
+ * pvr_device_irq_init() - Initialise IRQ required by a PowerVR device
+ * @pvr_dev: Target PowerVR device.
+ *
+ * Returns:
+ *  * 0 on success,
+ *  * Any error returned by platform_get_irq(), or
+ *  * Any error returned by request_threaded_irq().
+ */
+static int
+pvr_device_irq_init(struct pvr_device *pvr_dev)
+{
+	struct drm_device *drm_dev = from_pvr_device(pvr_dev);
+	struct platform_device *plat_dev = to_platform_device(drm_dev->dev);
+
+	init_waitqueue_head(&pvr_dev->kccb.rtn_q);
+
+	pvr_dev->irq = platform_get_irq(plat_dev, 0);
+	if (pvr_dev->irq < 0)
+		return pvr_dev->irq;
+
+	/* Clear any pending events before requesting the IRQ line. */
+	pvr_fw_irq_clear(pvr_dev);
+	pvr_fw_irq_enable(pvr_dev);
+
+	return request_threaded_irq(pvr_dev->irq, pvr_device_irq_handler,
+				    pvr_device_irq_thread_handler,
+				    IRQF_SHARED, "gpu", pvr_dev);
+}
+
+/**
+ * pvr_device_irq_fini() - Deinitialise IRQ required by a PowerVR device
+ * @pvr_dev: Target PowerVR device.
+ */
+static void
+pvr_device_irq_fini(struct pvr_device *pvr_dev)
+{
+	free_irq(pvr_dev->irq, pvr_dev);
+}
+
 /**
  * pvr_build_firmware_filename() - Construct a PowerVR firmware filename
  * @pvr_dev: Target PowerVR device.
@@ -322,7 +403,17 @@ pvr_device_gpu_init(struct pvr_device *pvr_dev)
 	if (IS_ERR(pvr_dev->kernel_vm_ctx))
 		return PTR_ERR(pvr_dev->kernel_vm_ctx);
 
+	err = pvr_fw_init(pvr_dev);
+	if (err)
+		goto err_vm_ctx_put;
+
 	return 0;
+
+err_vm_ctx_put:
+	pvr_vm_context_put(pvr_dev->kernel_vm_ctx);
+	pvr_dev->kernel_vm_ctx = NULL;
+
+	return err;
 }
 
 /**
@@ -332,6 +423,7 @@ pvr_device_gpu_init(struct pvr_device *pvr_dev)
 static void
 pvr_device_gpu_fini(struct pvr_device *pvr_dev)
 {
+	pvr_fw_fini(pvr_dev);
 	WARN_ON(!pvr_vm_context_put(pvr_dev->kernel_vm_ctx));
 	pvr_dev->kernel_vm_ctx = NULL;
 }
@@ -382,10 +474,17 @@ pvr_device_init(struct pvr_device *pvr_dev)
 	if (err)
 		goto err_pm_runtime_put;
 
+	err = pvr_device_irq_init(pvr_dev);
+	if (err)
+		goto err_device_gpu_fini;
+
 	pm_runtime_put(dev);
 
 	return 0;
 
+err_device_gpu_fini:
+	pvr_device_gpu_fini(pvr_dev);
+
 err_pm_runtime_put:
 	pm_runtime_put_sync_suspend(dev);
 
@@ -403,6 +502,7 @@ pvr_device_fini(struct pvr_device *pvr_dev)
 	 * Deinitialization stages are performed in reverse order compared to
 	 * the initialization stages in pvr_device_init().
 	 */
+	pvr_device_irq_fini(pvr_dev);
 	pvr_device_gpu_fini(pvr_dev);
 }
 
diff --git a/drivers/gpu/drm/imagination/pvr_device.h b/drivers/gpu/drm/imagination/pvr_device.h
index 6d58ecfbc838..d6feac60834b 100644
--- a/drivers/gpu/drm/imagination/pvr_device.h
+++ b/drivers/gpu/drm/imagination/pvr_device.h
@@ -4,6 +4,7 @@
 #ifndef PVR_DEVICE_H
 #define PVR_DEVICE_H
 
+#include "pvr_ccb.h"
 #include "pvr_device_info.h"
 #include "pvr_fw.h"
 
@@ -123,6 +124,12 @@ struct pvr_device {
 	 */
 	struct clk *mem_clk;
 
+	/** @irq: IRQ number. */
+	int irq;
+
+	/** @fwccb: Firmware CCB. */
+	struct pvr_ccb fwccb;
+
 	/**
 	 * @kernel_vm_ctx: Virtual memory context used for kernel mappings.
 	 *
@@ -147,6 +154,49 @@ struct pvr_device {
 		u32 kccb_stall_count;
 	} watchdog;
 
+	struct {
+		/** @ccb: Kernel CCB. */
+		struct pvr_ccb ccb;
+
+		/** @rtn_q: Waitqueue for KCCB command return waiters. */
+		wait_queue_head_t rtn_q;
+
+		/** @rtn_obj: Object representing KCCB return slots. */
+		struct pvr_fw_object *rtn_obj;
+
+		/**
+		 * @rtn: Pointer to CPU mapping of KCCB return slots. Must be accessed by
+		 *       READ_ONCE()/WRITE_ONCE().
+		 */
+		u32 *rtn;
+
+		/** @slot_count: Total number of KCCB slots available. */
+		u32 slot_count;
+
+		/** @reserved_count: Number of KCCB slots reserved for future use. */
+		u32 reserved_count;
+
+		/**
+		 * @waiters: List of KCCB slot waiters.
+		 */
+		struct list_head waiters;
+
+		/** @fence_ctx: KCCB fence context. */
+		struct {
+			/** @id: KCCB fence context ID allocated with dma_fence_context_alloc(). */
+			u64 id;
+
+			/** @seqno: Sequence number incremented each time a fence is created. */
+			atomic_t seqno;
+
+			/**
+			 * @lock: Lock used to synchronize access to fences allocated by this
+			 * context.
+			 */
+			spinlock_t lock;
+		} fence_ctx;
+	} kccb;
+
 	/**
 	 * @lost: %true if the device has been lost.
 	 *
@@ -155,6 +205,16 @@ struct pvr_device {
 	 */
 	bool lost;
 
+	/**
+	 * @reset_sem: Reset semaphore.
+	 *
+	 * GPU reset code will lock this for writing. Any code that submits commands to the firmware
+	 * that isn't in an IRQ handler or on the scheduler workqueue must lock this for reading.
+	 * Once this has been successfully locked, &pvr_dev->lost _must_ be checked, and -%EIO must
+	 * be returned if it is set.
+	 */
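+	 *
+	 * A typical submission path would follow this pattern (sketch):
+	 *
+	 *	down_read(&pvr_dev->reset_sem);
+	 *	if (pvr_dev->lost) {
+	 *		up_read(&pvr_dev->reset_sem);
+	 *		return -EIO;
+	 *	}
+	 *	... send commands to the firmware ...
+	 *	up_read(&pvr_dev->reset_sem);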
+	struct rw_semaphore reset_sem;
+
 	/** @sched_wq: Workqueue for schedulers. */
 	struct workqueue_struct *sched_wq;
 };
diff --git a/drivers/gpu/drm/imagination/pvr_drv.c b/drivers/gpu/drm/imagination/pvr_drv.c
index 0331dc549116..af12f87cef97 100644
--- a/drivers/gpu/drm/imagination/pvr_drv.c
+++ b/drivers/gpu/drm/imagination/pvr_drv.c
@@ -1345,3 +1345,4 @@ MODULE_AUTHOR("Imagination Technologies Ltd.");
 MODULE_DESCRIPTION(PVR_DRIVER_DESC);
 MODULE_LICENSE("Dual MIT/GPL");
 MODULE_IMPORT_NS(DMA_BUF);
+MODULE_FIRMWARE("powervr/rogue_33.15.11.3_v1.fw");
diff --git a/drivers/gpu/drm/imagination/pvr_fw.c b/drivers/gpu/drm/imagination/pvr_fw.c
index c48de4a3af46..449984c0f233 100644
--- a/drivers/gpu/drm/imagination/pvr_fw.c
+++ b/drivers/gpu/drm/imagination/pvr_fw.c
@@ -1,16 +1,84 @@
 // SPDX-License-Identifier: GPL-2.0 OR MIT
 /* Copyright (c) 2023 Imagination Technologies Ltd. */
 
+#include "pvr_ccb.h"
 #include "pvr_device.h"
 #include "pvr_device_info.h"
 #include "pvr_fw.h"
+#include "pvr_fw_info.h"
+#include "pvr_fw_startstop.h"
+#include "pvr_fw_trace.h"
+#include "pvr_gem.h"
+#include "pvr_power.h"
+#include "pvr_rogue_fwif_dev_info.h"
+#include "pvr_rogue_heap_config.h"
+#include "pvr_vm.h"
 
 #include <drm/drm_drv.h>
+#include <drm/drm_managed.h>
+#include <drm/drm_mm.h>
+#include <linux/clk.h>
 #include <linux/firmware.h>
+#include <linux/math.h>
+#include <linux/minmax.h>
 #include <linux/sizes.h>
 
 #define FW_MAX_SUPPORTED_MAJOR_VERSION 1
 
+#define FW_BOOT_TIMEOUT_USEC 5000000
+
+/* Config heap occupies top 192k of the firmware heap. */
+#define PVR_ROGUE_FW_CONFIG_HEAP_GRANULARITY SZ_64K
+#define PVR_ROGUE_FW_CONFIG_HEAP_SIZE (3 * PVR_ROGUE_FW_CONFIG_HEAP_GRANULARITY)
+
+/* Main firmware allocations should come from the remainder of the heap. */
+#define PVR_ROGUE_FW_MAIN_HEAP_BASE ROGUE_FW_HEAP_BASE
+
+/* Offsets from start of configuration area of FW heap. */
+#define PVR_ROGUE_FWIF_CONNECTION_CTL_OFFSET 0
+#define PVR_ROGUE_FWIF_OSINIT_OFFSET \
+	(PVR_ROGUE_FWIF_CONNECTION_CTL_OFFSET + PVR_ROGUE_FW_CONFIG_HEAP_GRANULARITY)
+#define PVR_ROGUE_FWIF_SYSINIT_OFFSET \
+	(PVR_ROGUE_FWIF_OSINIT_OFFSET + PVR_ROGUE_FW_CONFIG_HEAP_GRANULARITY)
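+/* I.e. within the config area: CONNECTION_CTL at offset 0, OSINIT at 64K, SYSINIT at 128K. */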
+
+#define PVR_ROGUE_FAULT_PAGE_SIZE SZ_4K
+
+#define PVR_SYNC_OBJ_SIZE sizeof(u32)
+
+const struct pvr_fw_layout_entry *
+pvr_fw_find_layout_entry(struct pvr_device *pvr_dev, enum pvr_fw_section_id id)
+{
+	const struct pvr_fw_layout_entry *layout_entries = pvr_dev->fw_dev.layout_entries;
+	u32 num_layout_entries = pvr_dev->fw_dev.header->layout_entry_num;
+	u32 entry;
+
+	for (entry = 0; entry < num_layout_entries; entry++) {
+		if (layout_entries[entry].id == id)
+			return &layout_entries[entry];
+	}
+
+	return NULL;
+}
+
+static const struct pvr_fw_layout_entry *
+pvr_fw_find_private_data(struct pvr_device *pvr_dev)
+{
+	const struct pvr_fw_layout_entry *layout_entries = pvr_dev->fw_dev.layout_entries;
+	u32 num_layout_entries = pvr_dev->fw_dev.header->layout_entry_num;
+	u32 entry;
+
+	for (entry = 0; entry < num_layout_entries; entry++) {
+		if (layout_entries[entry].id == META_PRIVATE_DATA ||
+		    layout_entries[entry].id == MIPS_PRIVATE_DATA ||
+		    layout_entries[entry].id == RISCV_PRIVATE_DATA)
+			return &layout_entries[entry];
+	}
+
+	return NULL;
+}
+
+#define DEV_INFO_MASK_SIZE(x) DIV_ROUND_UP(x, 64)
+
 /**
  * pvr_fw_validate() - Parse firmware header and check compatibility
  * @pvr_dev: Device pointer.
@@ -122,6 +190,685 @@ pvr_fw_get_device_info(struct pvr_device *pvr_dev)
 					    header->feature_param_size);
 }
 
+static void
+layout_get_sizes(struct pvr_device *pvr_dev)
+{
+	const struct pvr_fw_layout_entry *layout_entries = pvr_dev->fw_dev.layout_entries;
+	u32 num_layout_entries = pvr_dev->fw_dev.header->layout_entry_num;
+	struct pvr_fw_mem *fw_mem = &pvr_dev->fw_dev.mem;
+
+	fw_mem->code_alloc_size = 0;
+	fw_mem->data_alloc_size = 0;
+	fw_mem->core_code_alloc_size = 0;
+	fw_mem->core_data_alloc_size = 0;
+
+	/* Extract section sizes from FW layout table. */
+	for (u32 entry = 0; entry < num_layout_entries; entry++) {
+		switch (layout_entries[entry].type) {
+		case FW_CODE:
+			fw_mem->code_alloc_size += layout_entries[entry].alloc_size;
+			break;
+		case FW_DATA:
+			fw_mem->data_alloc_size += layout_entries[entry].alloc_size;
+			break;
+		case FW_COREMEM_CODE:
+			fw_mem->core_code_alloc_size +=
+				layout_entries[entry].alloc_size;
+			break;
+		case FW_COREMEM_DATA:
+			fw_mem->core_data_alloc_size +=
+				layout_entries[entry].alloc_size;
+			break;
+		case NONE:
+			break;
+		}
+	}
+}
+
+int
+pvr_fw_find_mmu_segment(struct pvr_device *pvr_dev, u32 addr, u32 size, void *fw_code_ptr,
+			void *fw_data_ptr, void *fw_core_code_ptr, void *fw_core_data_ptr,
+			void **host_addr_out)
+{
+	const struct pvr_fw_layout_entry *layout_entries = pvr_dev->fw_dev.layout_entries;
+	u32 num_layout_entries = pvr_dev->fw_dev.header->layout_entry_num;
+	u32 end_addr = addr + size;
+	int entry = 0;
+
+	/* Ensure the requested range is not zero and that size does not cause addr to overflow. */
+	if (end_addr <= addr)
+		return -EINVAL;
+
+	for (entry = 0; entry < num_layout_entries; entry++) {
+		u32 entry_start_addr = layout_entries[entry].base_addr;
+		u32 entry_end_addr = entry_start_addr + layout_entries[entry].alloc_size;
+
+		if (addr >= entry_start_addr && addr < entry_end_addr &&
+		    end_addr > entry_start_addr && end_addr <= entry_end_addr) {
+			switch (layout_entries[entry].type) {
+			case FW_CODE:
+				*host_addr_out = fw_code_ptr;
+				break;
+
+			case FW_DATA:
+				*host_addr_out = fw_data_ptr;
+				break;
+
+			case FW_COREMEM_CODE:
+				*host_addr_out = fw_core_code_ptr;
+				break;
+
+			case FW_COREMEM_DATA:
+				*host_addr_out = fw_core_data_ptr;
+				break;
+
+			default:
+				return -EINVAL;
+			}
+			/* Direct Mem write to mapped memory */
+			addr -= layout_entries[entry].base_addr;
+			addr += layout_entries[entry].alloc_offset;
+
+			/*
+			 * Add the offset to the pointer to the FW allocation only if
+			 * that allocation is available.
+			 */
+			*(u8 **)host_addr_out += addr;
+			return 0;
+		}
+	}
+
+	return -EINVAL;
+}
+
+static int
+pvr_fw_create_fwif_connection_ctl(struct pvr_device *pvr_dev)
+{
+	struct drm_device *drm_dev = from_pvr_device(pvr_dev);
+	struct pvr_fw_device *fw_dev = &pvr_dev->fw_dev;
+
+	fw_dev->fwif_connection_ctl =
+		pvr_fw_object_create_and_map_offset(pvr_dev,
+						    fw_dev->fw_heap_info.config_offset +
+						    PVR_ROGUE_FWIF_CONNECTION_CTL_OFFSET,
+						    sizeof(*fw_dev->fwif_connection_ctl),
+						    PVR_BO_FW_FLAGS_DEVICE_UNCACHED,
+						    NULL, NULL,
+						    &fw_dev->mem.fwif_connection_ctl_obj);
+	if (IS_ERR(fw_dev->fwif_connection_ctl)) {
+		drm_err(drm_dev,
+			"Unable to allocate FWIF connection control memory\n");
+		return PTR_ERR(fw_dev->fwif_connection_ctl);
+	}
+
+	return 0;
+}
+
+static void
+pvr_fw_fini_fwif_connection_ctl(struct pvr_device *pvr_dev)
+{
+	struct pvr_fw_device *fw_dev = &pvr_dev->fw_dev;
+
+	pvr_fw_object_unmap_and_destroy(fw_dev->mem.fwif_connection_ctl_obj);
+}
+
+static void
+fw_osinit_init(void *cpu_ptr, void *priv)
+{
+	struct rogue_fwif_osinit *fwif_osinit = cpu_ptr;
+	struct pvr_device *pvr_dev = priv;
+	struct pvr_fw_device *fw_dev = &pvr_dev->fw_dev;
+	struct pvr_fw_mem *fw_mem = &fw_dev->mem;
+
+	fwif_osinit->kernel_ccbctl_fw_addr = pvr_dev->kccb.ccb.ctrl_fw_addr;
+	fwif_osinit->kernel_ccb_fw_addr = pvr_dev->kccb.ccb.ccb_fw_addr;
+	pvr_fw_object_get_fw_addr(pvr_dev->kccb.rtn_obj,
+				  &fwif_osinit->kernel_ccb_rtn_slots_fw_addr);
+
+	fwif_osinit->firmware_ccbctl_fw_addr = pvr_dev->fwccb.ctrl_fw_addr;
+	fwif_osinit->firmware_ccb_fw_addr = pvr_dev->fwccb.ccb_fw_addr;
+
+	fwif_osinit->work_est_firmware_ccbctl_fw_addr = 0;
+	fwif_osinit->work_est_firmware_ccb_fw_addr = 0;
+
+	pvr_fw_object_get_fw_addr(fw_mem->hwrinfobuf_obj,
+				  &fwif_osinit->rogue_fwif_hwr_info_buf_ctl_fw_addr);
+	pvr_fw_object_get_fw_addr(fw_mem->osdata_obj, &fwif_osinit->fw_os_data_fw_addr);
+
+	fwif_osinit->hwr_debug_dump_limit = 0;
+
+	rogue_fwif_compchecks_bvnc_init(&fwif_osinit->rogue_comp_checks.hw_bvnc);
+	rogue_fwif_compchecks_bvnc_init(&fwif_osinit->rogue_comp_checks.fw_bvnc);
+}
+
+static void
+fw_osdata_init(void *cpu_ptr, void *priv)
+{
+	struct rogue_fwif_osdata *fwif_osdata = cpu_ptr;
+	struct pvr_device *pvr_dev = priv;
+	struct pvr_fw_mem *fw_mem = &pvr_dev->fw_dev.mem;
+
+	pvr_fw_object_get_fw_addr(fw_mem->power_sync_obj, &fwif_osdata->power_sync_fw_addr);
+}
+
+static void
+fw_fault_page_init(void *cpu_ptr, void *priv)
+{
+	u32 *fault_page = cpu_ptr;
+
+	for (int i = 0; i < PVR_ROGUE_FAULT_PAGE_SIZE / sizeof(*fault_page); i++)
+		fault_page[i] = 0xdeadbee0;
+}
+
+static void
+fw_sysinit_init(void *cpu_ptr, void *priv)
+{
+	struct rogue_fwif_sysinit *fwif_sysinit = cpu_ptr;
+	struct pvr_device *pvr_dev = priv;
+	struct pvr_fw_device *fw_dev = &pvr_dev->fw_dev;
+	struct pvr_fw_mem *fw_mem = &fw_dev->mem;
+	dma_addr_t fault_dma_addr = 0;
+	u32 clock_speed_hz = clk_get_rate(pvr_dev->core_clk);
+
+	WARN_ON(!clock_speed_hz);
+
+	WARN_ON(pvr_fw_object_get_dma_addr(fw_mem->fault_page_obj, 0, &fault_dma_addr));
+	fwif_sysinit->fault_phys_addr = (u64)fault_dma_addr;
+
+	fwif_sysinit->pds_exec_base = ROGUE_PDSCODEDATA_HEAP_BASE;
+	fwif_sysinit->usc_exec_base = ROGUE_USCCODE_HEAP_BASE;
+
+	pvr_fw_object_get_fw_addr(fw_mem->runtime_cfg_obj, &fwif_sysinit->runtime_cfg_fw_addr);
+	pvr_fw_object_get_fw_addr(fw_dev->fw_trace.tracebuf_ctrl_obj,
+				  &fwif_sysinit->trace_buf_ctl_fw_addr);
+	pvr_fw_object_get_fw_addr(fw_mem->sysdata_obj, &fwif_sysinit->fw_sys_data_fw_addr);
+	pvr_fw_object_get_fw_addr(fw_mem->gpu_util_fwcb_obj,
+				  &fwif_sysinit->gpu_util_fw_cb_ctl_fw_addr);
+	if (fw_mem->core_data_obj) {
+		pvr_fw_object_get_fw_addr(fw_mem->core_data_obj,
+					  &fwif_sysinit->coremem_data_store.fw_addr);
+	}
+
+	/* Currently unsupported. */
+	fwif_sysinit->counter_dump_ctl.buffer_fw_addr = 0;
+	fwif_sysinit->counter_dump_ctl.size_in_dwords = 0;
+
+	/* Skip alignment checks. */
+	fwif_sysinit->align_checks = 0;
+
+	fwif_sysinit->filter_flags = 0;
+	fwif_sysinit->hw_perf_filter = 0;
+	fwif_sysinit->firmware_perf = FW_PERF_CONF_NONE;
+	fwif_sysinit->initial_core_clock_speed = clock_speed_hz;
+	fwif_sysinit->active_pm_latency_ms = 0;
+	fwif_sysinit->gpio_validation_mode = ROGUE_FWIF_GPIO_VAL_OFF;
+	fwif_sysinit->firmware_started = false;
+	fwif_sysinit->marker_val = 1;
+
+	memset(&fwif_sysinit->bvnc_km_feature_flags, 0,
+	       sizeof(fwif_sysinit->bvnc_km_feature_flags));
+}
+
+static void
+fw_runtime_cfg_init(void *cpu_ptr, void *priv)
+{
+	struct rogue_fwif_runtime_cfg *runtime_cfg = cpu_ptr;
+	struct pvr_device *pvr_dev = priv;
+	u32 clock_speed_hz = clk_get_rate(pvr_dev->core_clk);
+
+	WARN_ON(!clock_speed_hz);
+
+	runtime_cfg->core_clock_speed = clock_speed_hz;
+	runtime_cfg->active_pm_latency_ms = 0;
+	runtime_cfg->active_pm_latency_persistant = true;
+	WARN_ON(PVR_FEATURE_VALUE(pvr_dev, num_clusters,
+				  &runtime_cfg->default_dusts_num_init) != 0);
+}
+
+static void
+fw_gpu_util_fwcb_init(void *cpu_ptr, void *priv)
+{
+	struct rogue_fwif_gpu_util_fwcb *gpu_util_fwcb = cpu_ptr;
+
+	gpu_util_fwcb->last_word = PVR_FWIF_GPU_UTIL_STATE_IDLE;
+}
+
+static int
+pvr_fw_create_structures(struct pvr_device *pvr_dev)
+{
+	struct drm_device *drm_dev = from_pvr_device(pvr_dev);
+	struct pvr_fw_device *fw_dev = &pvr_dev->fw_dev;
+	struct pvr_fw_mem *fw_mem = &fw_dev->mem;
+	int err;
+
+	fw_dev->power_sync = pvr_fw_object_create_and_map(pvr_dev, sizeof(*fw_dev->power_sync),
+							  PVR_BO_FW_FLAGS_DEVICE_UNCACHED,
+							  NULL, NULL, &fw_mem->power_sync_obj);
+	if (IS_ERR(fw_dev->power_sync)) {
+		drm_err(drm_dev, "Unable to allocate FW power_sync structure\n");
+		return PTR_ERR(fw_dev->power_sync);
+	}
+
+	fw_dev->hwrinfobuf = pvr_fw_object_create_and_map(pvr_dev, sizeof(*fw_dev->hwrinfobuf),
+							  PVR_BO_FW_FLAGS_DEVICE_UNCACHED,
+							  NULL, NULL, &fw_mem->hwrinfobuf_obj);
+	if (IS_ERR(fw_dev->hwrinfobuf)) {
+		drm_err(drm_dev,
+			"Unable to allocate FW hwrinfobuf structure\n");
+		err = PTR_ERR(fw_dev->hwrinfobuf);
+		goto err_release_power_sync;
+	}
+
+	err = pvr_fw_object_create(pvr_dev, PVR_SYNC_OBJ_SIZE,
+				   PVR_BO_FW_FLAGS_DEVICE_UNCACHED,
+				   NULL, NULL, &fw_mem->mmucache_sync_obj);
+	if (err) {
+		drm_err(drm_dev,
+			"Unable to allocate MMU cache sync object\n");
+		goto err_release_hwrinfobuf;
+	}
+
+	fw_dev->fwif_sysdata = pvr_fw_object_create_and_map(pvr_dev,
+							    sizeof(*fw_dev->fwif_sysdata),
+							    PVR_BO_FW_FLAGS_DEVICE_UNCACHED,
+							    NULL, NULL, &fw_mem->sysdata_obj);
+	if (IS_ERR(fw_dev->fwif_sysdata)) {
+		drm_err(drm_dev, "Unable to allocate FW SYSDATA structure\n");
+		err = PTR_ERR(fw_dev->fwif_sysdata);
+		goto err_release_mmucache_sync_obj;
+	}
+
+	err = pvr_fw_object_create(pvr_dev, PVR_ROGUE_FAULT_PAGE_SIZE,
+				   PVR_BO_FW_FLAGS_DEVICE_UNCACHED,
+				   fw_fault_page_init, NULL, &fw_mem->fault_page_obj);
+	if (err) {
+		drm_err(drm_dev, "Unable to allocate FW fault page\n");
+		goto err_release_sysdata;
+	}
+
+	err = pvr_fw_object_create(pvr_dev, sizeof(struct rogue_fwif_gpu_util_fwcb),
+				   PVR_BO_FW_FLAGS_DEVICE_UNCACHED,
+				   fw_gpu_util_fwcb_init, pvr_dev, &fw_mem->gpu_util_fwcb_obj);
+	if (err) {
+		drm_err(drm_dev, "Unable to allocate GPU util FWCB\n");
+		goto err_release_fault_page;
+	}
+
+	err = pvr_fw_object_create(pvr_dev, sizeof(struct rogue_fwif_runtime_cfg),
+				   PVR_BO_FW_FLAGS_DEVICE_UNCACHED,
+				   fw_runtime_cfg_init, pvr_dev, &fw_mem->runtime_cfg_obj);
+	if (err) {
+		drm_err(drm_dev, "Unable to allocate FW runtime config\n");
+		goto err_release_gpu_util_fwcb;
+	}
+
+	err = pvr_fw_trace_init(pvr_dev);
+	if (err)
+		goto err_release_runtime_cfg;
+
+	fw_dev->fwif_osdata = pvr_fw_object_create_and_map(pvr_dev,
+							   sizeof(*fw_dev->fwif_osdata),
+							   PVR_BO_FW_FLAGS_DEVICE_UNCACHED,
+							   fw_osdata_init, pvr_dev,
+							   &fw_mem->osdata_obj);
+	if (IS_ERR(fw_dev->fwif_osdata)) {
+		drm_err(drm_dev, "Unable to allocate FW OSDATA structure\n");
+		err = PTR_ERR(fw_dev->fwif_osdata);
+		goto err_fw_trace_fini;
+	}
+
+	fw_dev->fwif_osinit =
+		pvr_fw_object_create_and_map_offset(pvr_dev,
+						    fw_dev->fw_heap_info.config_offset +
+						    PVR_ROGUE_FWIF_OSINIT_OFFSET,
+						    sizeof(*fw_dev->fwif_osinit),
+						    PVR_BO_FW_FLAGS_DEVICE_UNCACHED,
+						    fw_osinit_init, pvr_dev, &fw_mem->osinit_obj);
+	if (IS_ERR(fw_dev->fwif_osinit)) {
+		drm_err(drm_dev, "Unable to allocate FW OSINIT structure\n");
+		err = PTR_ERR(fw_dev->fwif_osinit);
+		goto err_release_osdata;
+	}
+
+	fw_dev->fwif_sysinit =
+		pvr_fw_object_create_and_map_offset(pvr_dev,
+						    fw_dev->fw_heap_info.config_offset +
+						    PVR_ROGUE_FWIF_SYSINIT_OFFSET,
+						    sizeof(*fw_dev->fwif_sysinit),
+						    PVR_BO_FW_FLAGS_DEVICE_UNCACHED,
+						    fw_sysinit_init, pvr_dev, &fw_mem->sysinit_obj);
+	if (IS_ERR(fw_dev->fwif_sysinit)) {
+		drm_err(drm_dev, "Unable to allocate FW SYSINIT structure\n");
+		err = PTR_ERR(fw_dev->fwif_sysinit);
+		goto err_release_osinit;
+	}
+
+	return 0;
+
+err_release_osinit:
+	pvr_fw_object_unmap_and_destroy(fw_mem->osinit_obj);
+
+err_release_osdata:
+	pvr_fw_object_unmap_and_destroy(fw_mem->osdata_obj);
+
+err_fw_trace_fini:
+	pvr_fw_trace_fini(pvr_dev);
+
+err_release_runtime_cfg:
+	pvr_fw_object_destroy(fw_mem->runtime_cfg_obj);
+
+err_release_gpu_util_fwcb:
+	pvr_fw_object_destroy(fw_mem->gpu_util_fwcb_obj);
+
+err_release_fault_page:
+	pvr_fw_object_destroy(fw_mem->fault_page_obj);
+
+err_release_sysdata:
+	pvr_fw_object_unmap_and_destroy(fw_mem->sysdata_obj);
+
+err_release_mmucache_sync_obj:
+	pvr_fw_object_destroy(fw_mem->mmucache_sync_obj);
+
+err_release_hwrinfobuf:
+	pvr_fw_object_unmap_and_destroy(fw_mem->hwrinfobuf_obj);
+
+err_release_power_sync:
+	pvr_fw_object_unmap_and_destroy(fw_mem->power_sync_obj);
+
+	return err;
+}
+
+static void
+pvr_fw_destroy_structures(struct pvr_device *pvr_dev)
+{
+	struct pvr_fw_device *fw_dev = &pvr_dev->fw_dev;
+	struct pvr_fw_mem *fw_mem = &fw_dev->mem;
+
+	pvr_fw_trace_fini(pvr_dev);
+	pvr_fw_object_destroy(fw_mem->runtime_cfg_obj);
+	pvr_fw_object_destroy(fw_mem->gpu_util_fwcb_obj);
+	pvr_fw_object_destroy(fw_mem->fault_page_obj);
+	pvr_fw_object_unmap_and_destroy(fw_mem->sysdata_obj);
+	pvr_fw_object_unmap_and_destroy(fw_mem->sysinit_obj);
+
+	pvr_fw_object_destroy(fw_mem->mmucache_sync_obj);
+	pvr_fw_object_unmap_and_destroy(fw_mem->hwrinfobuf_obj);
+	pvr_fw_object_unmap_and_destroy(fw_mem->power_sync_obj);
+	pvr_fw_object_unmap_and_destroy(fw_mem->osdata_obj);
+	pvr_fw_object_unmap_and_destroy(fw_mem->osinit_obj);
+}
+
+/**
+ * pvr_fw_process() - Process firmware image, allocate FW memory and create boot
+ *                    arguments
+ * @pvr_dev: Device pointer.
+ *
+ * Returns:
+ *  * 0 on success,
+ *  * -%EINVAL if the firmware private data layout entry cannot be found,
+ *  * -%ENOMEM on out of memory,
+ *  * Any error returned by pvr_fw_object_create_and_map_offset(), or
+ *  * Any error returned by pvr_fw_object_create_and_map().
+ */
+static int
+pvr_fw_process(struct pvr_device *pvr_dev)
+{
+	struct drm_device *drm_dev = from_pvr_device(pvr_dev);
+	struct pvr_fw_mem *fw_mem = &pvr_dev->fw_dev.mem;
+	const u8 *fw = pvr_dev->fw_dev.firmware->data;
+	const struct pvr_fw_layout_entry *private_data;
+	u8 *fw_code_ptr;
+	u8 *fw_data_ptr;
+	u8 *fw_core_code_ptr;
+	u8 *fw_core_data_ptr;
+	int err;
+
+	layout_get_sizes(pvr_dev);
+
+	private_data = pvr_fw_find_private_data(pvr_dev);
+	if (!private_data)
+		return -EINVAL;
+
+	/* Allocate and map memory for firmware sections. */
+
+	/*
+	 * Code allocation must be at the start of the firmware heap, otherwise
+	 * firmware processor will be unable to boot.
+	 *
+	 * This has the useful side-effect that for every other object in the
+	 * driver, a firmware address of 0 is invalid.
+	 */
+	fw_code_ptr = pvr_fw_object_create_and_map_offset(pvr_dev, 0, fw_mem->code_alloc_size,
+							  PVR_BO_FW_FLAGS_DEVICE_UNCACHED,
+							  NULL, NULL, &fw_mem->code_obj);
+	if (IS_ERR(fw_code_ptr)) {
+		drm_err(drm_dev, "Unable to allocate FW code memory\n");
+		return PTR_ERR(fw_code_ptr);
+	}
+
+	if (pvr_dev->fw_dev.defs->has_fixed_data_addr()) {
+		u32 base_addr = private_data->base_addr & pvr_dev->fw_dev.fw_heap_info.offset_mask;
+
+		fw_data_ptr =
+			pvr_fw_object_create_and_map_offset(pvr_dev, base_addr,
+							    fw_mem->data_alloc_size,
+							    PVR_BO_FW_FLAGS_DEVICE_UNCACHED,
+							    NULL, NULL, &fw_mem->data_obj);
+	} else {
+		fw_data_ptr = pvr_fw_object_create_and_map(pvr_dev, fw_mem->data_alloc_size,
+							   PVR_BO_FW_FLAGS_DEVICE_UNCACHED,
+							   NULL, NULL, &fw_mem->data_obj);
+	}
+	if (IS_ERR(fw_data_ptr)) {
+		drm_err(drm_dev, "Unable to allocate FW data memory\n");
+		err = PTR_ERR(fw_data_ptr);
+		goto err_free_fw_code_obj;
+	}
+
+	/* Core code and data sections are optional. */
+	if (fw_mem->core_code_alloc_size) {
+		fw_core_code_ptr =
+			pvr_fw_object_create_and_map(pvr_dev, fw_mem->core_code_alloc_size,
+						     PVR_BO_FW_FLAGS_DEVICE_UNCACHED,
+						     NULL, NULL, &fw_mem->core_code_obj);
+		if (IS_ERR(fw_core_code_ptr)) {
+			drm_err(drm_dev,
+				"Unable to allocate FW core code memory\n");
+			err = PTR_ERR(fw_core_code_ptr);
+			goto err_free_fw_data_obj;
+		}
+	} else {
+		fw_core_code_ptr = NULL;
+	}
+
+	if (fw_mem->core_data_alloc_size) {
+		fw_core_data_ptr =
+			pvr_fw_object_create_and_map(pvr_dev, fw_mem->core_data_alloc_size,
+						     PVR_BO_FW_FLAGS_DEVICE_UNCACHED,
+						     NULL, NULL, &fw_mem->core_data_obj);
+		if (IS_ERR(fw_core_data_ptr)) {
+			drm_err(drm_dev,
+				"Unable to allocate FW core data memory\n");
+			err = PTR_ERR(fw_core_data_ptr);
+			goto err_free_fw_core_code_obj;
+		}
+	} else {
+		fw_core_data_ptr = NULL;
+	}
+
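+	/*
+	 * Keep driver-side copies of the processed firmware sections so they
+	 * can be restored by pvr_fw_reinit_code_data() on a FW hard reset
+	 * without reprocessing the firmware image.
+	 */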
+	fw_mem->code = kzalloc(fw_mem->code_alloc_size, GFP_KERNEL);
+	fw_mem->data = kzalloc(fw_mem->data_alloc_size, GFP_KERNEL);
+	if (fw_mem->core_code_alloc_size)
+		fw_mem->core_code = kzalloc(fw_mem->core_code_alloc_size, GFP_KERNEL);
+	if (fw_mem->core_data_alloc_size)
+		fw_mem->core_data = kzalloc(fw_mem->core_data_alloc_size, GFP_KERNEL);
+
+	if (!fw_mem->code || !fw_mem->data ||
+	    (!fw_mem->core_code && fw_mem->core_code_alloc_size) ||
+	    (!fw_mem->core_data && fw_mem->core_data_alloc_size)) {
+		err = -ENOMEM;
+		goto err_free_kdata;
+	}
+
+	err = pvr_dev->fw_dev.defs->fw_process(pvr_dev, fw,
+					       fw_mem->code, fw_mem->data, fw_mem->core_code,
+					       fw_mem->core_data, fw_mem->core_code_alloc_size);
+
+	if (err)
+		goto err_free_fw_core_data_obj;
+
+	memcpy(fw_code_ptr, fw_mem->code, fw_mem->code_alloc_size);
+	memcpy(fw_data_ptr, fw_mem->data, fw_mem->data_alloc_size);
+	if (fw_mem->core_code)
+		memcpy(fw_core_code_ptr, fw_mem->core_code, fw_mem->core_code_alloc_size);
+	if (fw_mem->core_data)
+		memcpy(fw_core_data_ptr, fw_mem->core_data, fw_mem->core_data_alloc_size);
+
+	/* We're finished with the firmware section memory on the CPU, unmap. */
+	if (fw_core_data_ptr)
+		pvr_fw_object_vunmap(fw_mem->core_data_obj);
+	if (fw_core_code_ptr)
+		pvr_fw_object_vunmap(fw_mem->core_code_obj);
+	pvr_fw_object_vunmap(fw_mem->data_obj);
+	fw_data_ptr = NULL;
+	pvr_fw_object_vunmap(fw_mem->code_obj);
+	fw_code_ptr = NULL;
+
+	err = pvr_fw_create_fwif_connection_ctl(pvr_dev);
+	if (err)
+		goto err_free_fw_core_data_obj;
+
+	return 0;
+
+err_free_kdata:
+	kfree(fw_mem->core_data);
+	kfree(fw_mem->core_code);
+	kfree(fw_mem->data);
+	kfree(fw_mem->code);
+
+err_free_fw_core_data_obj:
+	if (fw_core_data_ptr)
+		pvr_fw_object_unmap_and_destroy(fw_mem->core_data_obj);
+
+err_free_fw_core_code_obj:
+	if (fw_core_code_ptr)
+		pvr_fw_object_unmap_and_destroy(fw_mem->core_code_obj);
+
+err_free_fw_data_obj:
+	if (fw_data_ptr)
+		pvr_fw_object_vunmap(fw_mem->data_obj);
+	pvr_fw_object_destroy(fw_mem->data_obj);
+
+err_free_fw_code_obj:
+	if (fw_code_ptr)
+		pvr_fw_object_vunmap(fw_mem->code_obj);
+	pvr_fw_object_destroy(fw_mem->code_obj);
+
+	return err;
+}
+
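+/* Copy a driver-side section copy into its FW object via a temporary CPU mapping. */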
+static int
+pvr_copy_to_fw(struct pvr_fw_object *dest_obj, u8 *src_ptr, u32 size)
+{
+	u8 *dest_ptr = pvr_fw_object_vmap(dest_obj);
+
+	if (IS_ERR(dest_ptr))
+		return PTR_ERR(dest_ptr);
+
+	memcpy(dest_ptr, src_ptr, size);
+
+	pvr_fw_object_vunmap(dest_obj);
+
+	return 0;
+}
+
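+/* Restore the FW code and data sections from the driver-side copies made in pvr_fw_process(). */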
+static int
+pvr_fw_reinit_code_data(struct pvr_device *pvr_dev)
+{
+	struct pvr_fw_mem *fw_mem = &pvr_dev->fw_dev.mem;
+	int err;
+
+	err = pvr_copy_to_fw(fw_mem->code_obj, fw_mem->code, fw_mem->code_alloc_size);
+	if (err)
+		return err;
+
+	err = pvr_copy_to_fw(fw_mem->data_obj, fw_mem->data, fw_mem->data_alloc_size);
+	if (err)
+		return err;
+
+	if (fw_mem->core_code) {
+		err = pvr_copy_to_fw(fw_mem->core_code_obj, fw_mem->core_code,
+				     fw_mem->core_code_alloc_size);
+		if (err)
+			return err;
+	}
+
+	if (fw_mem->core_data) {
+		err = pvr_copy_to_fw(fw_mem->core_data_obj, fw_mem->core_data,
+				     fw_mem->core_data_alloc_size);
+		if (err)
+			return err;
+	}
+
+	return 0;
+}
+
+static void
+pvr_fw_cleanup(struct pvr_device *pvr_dev)
+{
+	struct pvr_fw_mem *fw_mem = &pvr_dev->fw_dev.mem;
+
+	pvr_fw_fini_fwif_connection_ctl(pvr_dev);
+	if (fw_mem->core_code_obj)
+		pvr_fw_object_destroy(fw_mem->core_code_obj);
+	if (fw_mem->core_data_obj)
+		pvr_fw_object_destroy(fw_mem->core_data_obj);
+	pvr_fw_object_destroy(fw_mem->code_obj);
+	pvr_fw_object_destroy(fw_mem->data_obj);
+}
+
+/**
+ * pvr_wait_for_fw_boot() - Wait for firmware to finish booting
+ * @pvr_dev: Target PowerVR device.
+ *
+ * Returns:
+ *  * 0 on success, or
+ *  * -%ETIMEDOUT if firmware fails to boot within timeout.
+ */
+int
+pvr_wait_for_fw_boot(struct pvr_device *pvr_dev)
+{
+	ktime_t deadline = ktime_add_us(ktime_get(), FW_BOOT_TIMEOUT_USEC);
+	struct pvr_fw_device *fw_dev = &pvr_dev->fw_dev;
+
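+	/* Busy-poll the firmware started flag until it is set or the deadline expires. */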
+	while (ktime_to_ns(ktime_sub(deadline, ktime_get())) > 0) {
+		if (READ_ONCE(fw_dev->fwif_sysinit->firmware_started))
+			return 0;
+	}
+
+	return -ETIMEDOUT;
+}
+
+/**
+ * pvr_fw_heap_info_init() - Calculate size and masks for FW heap
+ * @pvr_dev: Target PowerVR device.
+ * @log2_size: Log2 of raw heap size.
+ * @reserved_size: Size of reserved area of heap, in bytes. May be zero.
+ */
+void
+pvr_fw_heap_info_init(struct pvr_device *pvr_dev, u32 log2_size, u32 reserved_size)
+{
+	struct pvr_fw_device *fw_dev = &pvr_dev->fw_dev;
+
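+	/*
+	 * The config area occupies the top PVR_ROGUE_FW_CONFIG_HEAP_SIZE bytes
+	 * of the raw heap; the usable main area is what remains after the
+	 * config area and any reserved area are removed.
+	 */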
+	fw_dev->fw_heap_info.gpu_addr = PVR_ROGUE_FW_MAIN_HEAP_BASE;
+	fw_dev->fw_heap_info.log2_size = log2_size;
+	fw_dev->fw_heap_info.reserved_size = reserved_size;
+	fw_dev->fw_heap_info.raw_size = 1 << fw_dev->fw_heap_info.log2_size;
+	fw_dev->fw_heap_info.offset_mask = fw_dev->fw_heap_info.raw_size - 1;
+	fw_dev->fw_heap_info.config_offset = fw_dev->fw_heap_info.raw_size -
+					     PVR_ROGUE_FW_CONFIG_HEAP_SIZE;
+	fw_dev->fw_heap_info.size = fw_dev->fw_heap_info.raw_size -
+				    (PVR_ROGUE_FW_CONFIG_HEAP_SIZE + reserved_size);
+}
+
 /**
  * pvr_fw_validate_init_device_info() - Validate firmware and initialise device information
  * @pvr_dev: Target PowerVR device.
@@ -143,3 +890,579 @@ pvr_fw_validate_init_device_info(struct pvr_device *pvr_dev)
 
 	return pvr_fw_get_device_info(pvr_dev);
 }
+
+/**
+ * pvr_fw_init() - Initialise and boot firmware
+ * @pvr_dev: Target PowerVR device
+ *
+ * On successful completion of the function the PowerVR device will be
+ * initialised and ready to use.
+ *
+ * Returns:
+ *  * 0 on success,
+ *  * -%EINVAL on invalid firmware image,
+ *  * -%ENOMEM on out of memory, or
+ *  * -%ETIMEDOUT if firmware processor fails to boot or on register poll timeout.
+ */
+int
+pvr_fw_init(struct pvr_device *pvr_dev)
+{
+	u32 kccb_size_log2 = ROGUE_FWIF_KCCB_NUMCMDS_LOG2_DEFAULT;
+	u32 kccb_rtn_size = (1 << kccb_size_log2) * sizeof(*pvr_dev->kccb.rtn);
+	struct pvr_fw_device *fw_dev = &pvr_dev->fw_dev;
+	int err;
+
+	if (fw_dev->processor_type == PVR_FW_PROCESSOR_TYPE_META)
+		fw_dev->defs = &pvr_fw_defs_meta;
+	else
+		return -EINVAL;
+
+	err = fw_dev->defs->init(pvr_dev);
+	if (err)
+		return err;
+
+	drm_mm_init(&fw_dev->fw_mm, ROGUE_FW_HEAP_BASE, fw_dev->fw_heap_info.raw_size);
+	fw_dev->fw_mm_base = ROGUE_FW_HEAP_BASE;
+	spin_lock_init(&fw_dev->fw_mm_lock);
+
+	INIT_LIST_HEAD(&fw_dev->fw_objs.list);
+	err = drmm_mutex_init(from_pvr_device(pvr_dev), &fw_dev->fw_objs.lock);
+	if (err)
+		goto err_mm_takedown;
+
+	err = pvr_fw_process(pvr_dev);
+	if (err)
+		goto err_mm_takedown;
+
+	/* Initialise KCCB and FWCCB. */
+	err = pvr_kccb_init(pvr_dev);
+	if (err)
+		goto err_fw_cleanup;
+
+	err = pvr_fwccb_init(pvr_dev);
+	if (err)
+		goto err_kccb_fini;
+
+	/* Allocate memory for KCCB return slots. */
+	pvr_dev->kccb.rtn = pvr_fw_object_create_and_map(pvr_dev, kccb_rtn_size,
+							 PVR_BO_FW_FLAGS_DEVICE_UNCACHED,
+							 NULL, NULL, &pvr_dev->kccb.rtn_obj);
+	if (IS_ERR(pvr_dev->kccb.rtn)) {
+		err = PTR_ERR(pvr_dev->kccb.rtn);
+		goto err_fwccb_fini;
+	}
+
+	err = pvr_fw_create_structures(pvr_dev);
+	if (err)
+		goto err_kccb_rtn_release;
+
+	err = pvr_fw_start(pvr_dev);
+	if (err)
+		goto err_destroy_structures;
+
+	err = pvr_wait_for_fw_boot(pvr_dev);
+	if (err) {
+		drm_err(from_pvr_device(pvr_dev), "Firmware failed to boot\n");
+		goto err_fw_stop;
+	}
+
+	fw_dev->booted = true;
+
+	return 0;
+
+err_fw_stop:
+	pvr_fw_stop(pvr_dev);
+
+err_destroy_structures:
+	pvr_fw_destroy_structures(pvr_dev);
+
+err_kccb_rtn_release:
+	pvr_fw_object_unmap_and_destroy(pvr_dev->kccb.rtn_obj);
+
+err_fwccb_fini:
+	pvr_ccb_fini(&pvr_dev->fwccb);
+
+err_kccb_fini:
+	pvr_kccb_fini(pvr_dev);
+
+err_fw_cleanup:
+	pvr_fw_cleanup(pvr_dev);
+
+err_mm_takedown:
+	drm_mm_takedown(&fw_dev->fw_mm);
+
+	if (fw_dev->defs->fini)
+		fw_dev->defs->fini(pvr_dev);
+
+	return err;
+}
+
+/**
+ * pvr_fw_fini() - Shutdown firmware processor and free associated memory
+ * @pvr_dev: Target PowerVR device
+ */
+void
+pvr_fw_fini(struct pvr_device *pvr_dev)
+{
+	struct pvr_fw_device *fw_dev = &pvr_dev->fw_dev;
+
+	fw_dev->booted = false;
+
+	pvr_fw_destroy_structures(pvr_dev);
+	pvr_fw_object_unmap_and_destroy(pvr_dev->kccb.rtn_obj);
+
+	/*
+	 * Ensure FWCCB worker has finished executing before destroying FWCCB. The IRQ handler has
+	 * been unregistered at this point, so no new work should be submitted.
+	 */
+	pvr_ccb_fini(&pvr_dev->fwccb);
+	pvr_kccb_fini(pvr_dev);
+	pvr_fw_cleanup(pvr_dev);
+
+	mutex_lock(&pvr_dev->fw_dev.fw_objs.lock);
+	WARN_ON(!list_empty(&pvr_dev->fw_dev.fw_objs.list));
+	mutex_unlock(&pvr_dev->fw_dev.fw_objs.lock);
+
+	drm_mm_takedown(&fw_dev->fw_mm);
+
+	if (fw_dev->defs->fini)
+		fw_dev->defs->fini(pvr_dev);
+}
+
+/**
+ * pvr_fw_mts_schedule() - Schedule work via an MTS kick
+ * @pvr_dev: Target PowerVR device
+ * @val: Kick mask. Should be a combination of %ROGUE_CR_MTS_SCHEDULE_*
+ */
+void
+pvr_fw_mts_schedule(struct pvr_device *pvr_dev, u32 val)
+{
+	/* Ensure memory is flushed before kicking MTS. */
+	wmb();
+
+	pvr_cr_write32(pvr_dev, ROGUE_CR_MTS_SCHEDULE, val);
+
+	/* Ensure the MTS kick goes through before continuing. */
+	mb();
+}
+
+/**
+ * pvr_fw_structure_cleanup() - Send FW cleanup request for an object
+ * @pvr_dev: Target PowerVR device.
+ * @type: Type of object to cleanup. Must be one of &enum rogue_fwif_cleanup_type.
+ * @fw_obj: Pointer to FW object containing object to cleanup.
+ * @offset: Offset within FW object of object to cleanup.
+ *
+ * Returns:
+ *  * 0 on success,
+ *  * -%EBUSY if object is busy,
+ *  * -%ETIMEDOUT on timeout, or
+ *  * -%EIO if device is lost.
+ */
+int
+pvr_fw_structure_cleanup(struct pvr_device *pvr_dev, u32 type, struct pvr_fw_object *fw_obj,
+			 u32 offset)
+{
+	struct rogue_fwif_kccb_cmd cmd;
+	int slot_nr;
+	int idx;
+	int err;
+	u32 rtn;
+
+	struct rogue_fwif_cleanup_request *cleanup_req = &cmd.cmd_data.cleanup_data;
+
+	down_read(&pvr_dev->reset_sem);
+
+	if (!drm_dev_enter(from_pvr_device(pvr_dev), &idx)) {
+		err = -EIO;
+		goto err_up_read;
+	}
+
+	cmd.cmd_type = ROGUE_FWIF_KCCB_CMD_CLEANUP;
+	cmd.kccb_flags = 0;
+	cleanup_req->cleanup_type = type;
+
+	switch (type) {
+	case ROGUE_FWIF_CLEANUP_FWCOMMONCONTEXT:
+		pvr_fw_object_get_fw_addr_offset(fw_obj, offset,
+						 &cleanup_req->cleanup_data.context_fw_addr);
+		break;
+	case ROGUE_FWIF_CLEANUP_HWRTDATA:
+		pvr_fw_object_get_fw_addr_offset(fw_obj, offset,
+						 &cleanup_req->cleanup_data.hwrt_data_fw_addr);
+		break;
+	case ROGUE_FWIF_CLEANUP_FREELIST:
+		pvr_fw_object_get_fw_addr_offset(fw_obj, offset,
+						 &cleanup_req->cleanup_data.freelist_fw_addr);
+		break;
+	default:
+		err = -EINVAL;
+		goto err_drm_dev_exit;
+	}
+
+	err = pvr_kccb_send_cmd(pvr_dev, &cmd, &slot_nr);
+	if (err)
+		goto err_drm_dev_exit;
+
+	err = pvr_kccb_wait_for_completion(pvr_dev, slot_nr, HZ, &rtn);
+	if (err)
+		goto err_drm_dev_exit;
+
+	if (rtn & ROGUE_FWIF_KCCB_RTN_SLOT_CLEANUP_BUSY)
+		err = -EBUSY;
+
+err_drm_dev_exit:
+	drm_dev_exit(idx);
+
+err_up_read:
+	up_read(&pvr_dev->reset_sem);
+
+	return err;
+}
+
+/**
+ * pvr_fw_object_fw_map() - Map a FW object in firmware address space
+ * @pvr_dev: Device pointer.
+ * @fw_obj: FW object to map.
+ * @dev_addr: Desired address in device space, if a specific address is
+ *            required. 0 otherwise.
+ *
+ * Returns:
+ *  * 0 on success, or
+ *  * -%EINVAL if @fw_obj is already mapped, or
+ *  * Any error returned by DRM.
+ */
+static int
+pvr_fw_object_fw_map(struct pvr_device *pvr_dev, struct pvr_fw_object *fw_obj, u64 dev_addr)
+{
+	struct pvr_gem_object *pvr_obj = fw_obj->gem;
+	struct drm_gem_object *gem_obj = gem_from_pvr_gem(pvr_obj);
+	struct pvr_fw_device *fw_dev = &pvr_dev->fw_dev;
+
+	int err;
+
+	spin_lock(&fw_dev->fw_mm_lock);
+
+	if (drm_mm_node_allocated(&fw_obj->fw_mm_node)) {
+		err = -EINVAL;
+		goto err_unlock;
+	}
+
+	if (!dev_addr) {
+		/*
+		 * Allocate from the main heap only (firmware heap minus
+		 * config space).
+		 */
+		err = drm_mm_insert_node_in_range(&fw_dev->fw_mm, &fw_obj->fw_mm_node,
+						  gem_obj->size, 0, 0,
+						  fw_dev->fw_heap_info.gpu_addr,
+						  fw_dev->fw_heap_info.gpu_addr +
+						  fw_dev->fw_heap_info.size, 0);
+		if (err)
+			goto err_unlock;
+	} else {
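+		/* A specific device address was requested; reserve that exact range. */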
+		fw_obj->fw_mm_node.start = dev_addr;
+		fw_obj->fw_mm_node.size = gem_obj->size;
+		err = drm_mm_reserve_node(&fw_dev->fw_mm, &fw_obj->fw_mm_node);
+		if (err)
+			goto err_unlock;
+	}
+
+	spin_unlock(&fw_dev->fw_mm_lock);
+
+	/* Map object on GPU. */
+	err = fw_dev->defs->vm_map(pvr_dev, fw_obj);
+	if (err)
+		goto err_remove_node;
+
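+	/* Record the offset from the FW heap base; FW addresses are derived from this. */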
+	fw_obj->fw_addr_offset = (u32)(fw_obj->fw_mm_node.start - fw_dev->fw_mm_base);
+
+	return 0;
+
+err_remove_node:
+	spin_lock(&fw_dev->fw_mm_lock);
+	drm_mm_remove_node(&fw_obj->fw_mm_node);
+
+err_unlock:
+	spin_unlock(&fw_dev->fw_mm_lock);
+
+	return err;
+}
+
+/**
+ * pvr_fw_object_fw_unmap() - Unmap a previously mapped FW object
+ * @fw_obj: FW object to unmap.
+ *
+ * Returns:
+ *  * 0 on success, or
+ *  * -%EINVAL if object is not currently mapped.
+ */
+static int
+pvr_fw_object_fw_unmap(struct pvr_fw_object *fw_obj)
+{
+	struct pvr_gem_object *pvr_obj = fw_obj->gem;
+	struct drm_gem_object *gem_obj = gem_from_pvr_gem(pvr_obj);
+	struct pvr_device *pvr_dev = to_pvr_device(gem_obj->dev);
+	struct pvr_fw_device *fw_dev = &pvr_dev->fw_dev;
+
+	fw_dev->defs->vm_unmap(pvr_dev, fw_obj);
+
+	spin_lock(&fw_dev->fw_mm_lock);
+
+	if (!drm_mm_node_allocated(&fw_obj->fw_mm_node)) {
+		spin_unlock(&fw_dev->fw_mm_lock);
+		return -EINVAL;
+	}
+
+	drm_mm_remove_node(&fw_obj->fw_mm_node);
+
+	spin_unlock(&fw_dev->fw_mm_lock);
+
+	return 0;
+}
+
+static void *
+pvr_fw_object_create_and_map_common(struct pvr_device *pvr_dev, size_t size,
+				    u64 flags, u64 dev_addr,
+				    void (*init)(void *cpu_ptr, void *priv),
+				    void *init_priv, struct pvr_fw_object **fw_obj_out)
+{
+	struct pvr_fw_object *fw_obj;
+	void *cpu_ptr;
+	int err;
+
+	/* %DRM_PVR_BO_DEVICE_PM_FW_PROTECT is implicit for FW objects. */
+	flags |= DRM_PVR_BO_DEVICE_PM_FW_PROTECT;
+
+	fw_obj = kzalloc(sizeof(*fw_obj), GFP_KERNEL);
+	if (!fw_obj)
+		return ERR_PTR(-ENOMEM);
+
+	INIT_LIST_HEAD(&fw_obj->node);
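+	/* Kept so pvr_fw_hard_reset() can reinitialise the object after zeroing it. */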
+	fw_obj->init = init;
+	fw_obj->init_priv = init_priv;
+
+	fw_obj->gem = pvr_gem_object_create(pvr_dev, size, flags);
+	if (IS_ERR(fw_obj->gem)) {
+		err = PTR_ERR(fw_obj->gem);
+		fw_obj->gem = NULL;
+		goto err_put_object;
+	}
+
+	err = pvr_fw_object_fw_map(pvr_dev, fw_obj, dev_addr);
+	if (err)
+		goto err_put_object;
+
+	cpu_ptr = pvr_fw_object_vmap(fw_obj);
+	if (IS_ERR(cpu_ptr)) {
+		err = PTR_ERR(cpu_ptr);
+		goto err_put_object;
+	}
+
+	*fw_obj_out = fw_obj;
+
+	if (fw_obj->init)
+		fw_obj->init(cpu_ptr, fw_obj->init_priv);
+
+	mutex_lock(&pvr_dev->fw_dev.fw_objs.lock);
+	list_add_tail(&fw_obj->node, &pvr_dev->fw_dev.fw_objs.list);
+	mutex_unlock(&pvr_dev->fw_dev.fw_objs.lock);
+
+	return cpu_ptr;
+
+err_put_object:
+	pvr_fw_object_destroy(fw_obj);
+
+	return ERR_PTR(err);
+}
+
+/**
+ * pvr_fw_object_create() - Create a FW object and map to firmware
+ * @pvr_dev: PowerVR device pointer.
+ * @size: Size of object, in bytes.
+ * @flags: Options which affect both this operation and future mapping
+ * operations performed on the returned object. Must be a combination of
+ * DRM_PVR_BO_* and/or PVR_BO_* flags.
+ * @init: Initialisation callback.
+ * @init_priv: Private pointer to pass to initialisation callback.
+ * @fw_obj_out: Pointer to location to store created object pointer.
+ *
+ * %DRM_PVR_BO_DEVICE_PM_FW_PROTECT is implied for all FW objects. Consequently,
+ * this function will fail if @flags has %DRM_PVR_BO_CPU_ALLOW_USERSPACE_ACCESS
+ * set.
+ *
+ * Returns:
+ *  * 0 on success, or
+ *  * Any error returned by pvr_fw_object_create_and_map_common().
+ */
+int
+pvr_fw_object_create(struct pvr_device *pvr_dev, size_t size, u64 flags,
+		     void (*init)(void *cpu_ptr, void *priv), void *init_priv,
+		     struct pvr_fw_object **fw_obj_out)
+{
+	void *cpu_ptr;
+
+	cpu_ptr = pvr_fw_object_create_and_map_common(pvr_dev, size, flags, 0, init, init_priv,
+						      fw_obj_out);
+	if (IS_ERR(cpu_ptr))
+		return PTR_ERR(cpu_ptr);
+
+	pvr_fw_object_vunmap(*fw_obj_out);
+
+	return 0;
+}
+
+/**
+ * pvr_fw_object_create_and_map() - Create a FW object and map to firmware and CPU
+ * @pvr_dev: PowerVR device pointer.
+ * @size: Size of object, in bytes.
+ * @flags: Options which affect both this operation and future mapping
+ * operations performed on the returned object. Must be a combination of
+ * DRM_PVR_BO_* and/or PVR_BO_* flags.
+ * @init: Initialisation callback.
+ * @init_priv: Private pointer to pass to initialisation callback.
+ * @fw_obj_out: Pointer to location to store created object pointer.
+ *
+ * %DRM_PVR_BO_DEVICE_PM_FW_PROTECT is implied for all FW objects. Consequently,
+ * this function will fail if @flags has %DRM_PVR_BO_CPU_ALLOW_USERSPACE_ACCESS
+ * set.
+ *
+ * Caller is responsible for calling pvr_fw_object_vunmap() to release the CPU
+ * mapping.
+ *
+ * Returns:
+ *  * Pointer to CPU mapping of newly created object, or
+ *  * Any error returned by pvr_fw_object_create(), or
+ *  * Any error returned by pvr_fw_object_vmap().
+ */
+void *
+pvr_fw_object_create_and_map(struct pvr_device *pvr_dev, size_t size, u64 flags,
+			     void (*init)(void *cpu_ptr, void *priv),
+			     void *init_priv, struct pvr_fw_object **fw_obj_out)
+{
+	return pvr_fw_object_create_and_map_common(pvr_dev, size, flags, 0, init, init_priv,
+						   fw_obj_out);
+}
+
+/**
+ * pvr_fw_object_create_and_map_offset() - Create a FW object and map to
+ * firmware at the provided offset and to the CPU.
+ * @pvr_dev: PowerVR device pointer.
+ * @dev_offset: Base address of desired FW mapping, offset from start of FW heap.
+ * @size: Size of object, in bytes.
+ * @flags: Options which affect both this operation and future mapping
+ * operations performed on the returned object. Must be a combination of
+ * DRM_PVR_BO_* and/or PVR_BO_* flags.
+ * @init: Initialisation callback.
+ * @init_priv: Private pointer to pass to initialisation callback.
+ * @fw_obj_out: Pointer to location to store created object pointer.
+ *
+ * %DRM_PVR_BO_DEVICE_PM_FW_PROTECT is implied for all FW objects. Consequently,
+ * this function will fail if @flags has %DRM_PVR_BO_CPU_ALLOW_USERSPACE_ACCESS
+ * set.
+ *
+ * Caller is responsible for calling pvr_fw_object_vunmap() to release the CPU
+ * mapping.
+ *
+ * Returns:
+ *  * Pointer to CPU mapping of newly created object, or
+ *  * Any error returned by pvr_fw_object_create(), or
+ *  * Any error returned by pvr_fw_object_vmap().
+ */
+void *
+pvr_fw_object_create_and_map_offset(struct pvr_device *pvr_dev,
+				    u32 dev_offset, size_t size, u64 flags,
+				    void (*init)(void *cpu_ptr, void *priv),
+				    void *init_priv, struct pvr_fw_object **fw_obj_out)
+{
+	u64 dev_addr = pvr_dev->fw_dev.fw_mm_base + dev_offset;
+
+	return pvr_fw_object_create_and_map_common(pvr_dev, size, flags, dev_addr, init, init_priv,
+						   fw_obj_out);
+}
+
+/**
+ * pvr_fw_object_destroy() - Destroy a pvr_fw_object
+ * @fw_obj: Pointer to object to destroy.
+ */
+void pvr_fw_object_destroy(struct pvr_fw_object *fw_obj)
+{
+	struct pvr_gem_object *pvr_obj = fw_obj->gem;
+	struct drm_gem_object *gem_obj = gem_from_pvr_gem(pvr_obj);
+	struct pvr_device *pvr_dev = to_pvr_device(gem_obj->dev);
+
+	mutex_lock(&pvr_dev->fw_dev.fw_objs.lock);
+	list_del(&fw_obj->node);
+	mutex_unlock(&pvr_dev->fw_dev.fw_objs.lock);
+
+	if (drm_mm_node_allocated(&fw_obj->fw_mm_node)) {
+		/* If we can't unmap, leak the memory. */
+		if (WARN_ON(pvr_fw_object_fw_unmap(fw_obj)))
+			return;
+	}
+
+	if (fw_obj->gem)
+		pvr_gem_object_put(fw_obj->gem);
+
+	kfree(fw_obj);
+}
+
+/**
+ * pvr_fw_object_get_fw_addr_offset() - Return address of object in firmware address space, with
+ * given offset.
+ * @fw_obj: Pointer to object.
+ * @offset: Desired offset from start of object.
+ * @fw_addr_out: Location to store address to.
+ */
+void pvr_fw_object_get_fw_addr_offset(struct pvr_fw_object *fw_obj, u32 offset, u32 *fw_addr_out)
+{
+	struct pvr_gem_object *pvr_obj = fw_obj->gem;
+	struct pvr_device *pvr_dev = to_pvr_device(gem_from_pvr_gem(pvr_obj)->dev);
+
+	*fw_addr_out = pvr_dev->fw_dev.defs->get_fw_addr_with_offset(fw_obj, offset);
+}
+
+/**
+ * pvr_fw_hard_reset() - Re-initialise the FW code and data segments, and reset all global FW
+ *                       structures
+ * @pvr_dev: Device pointer
+ *
+ * If this function returns an error then the caller must regard the device as lost.
+ *
+ * Returns:
+ *  * 0 on success, or
+ *  * Any error returned by pvr_fw_reinit_code_data().
+ */
+int
+pvr_fw_hard_reset(struct pvr_device *pvr_dev)
+{
+	struct list_head *pos;
+	int err;
+
+	/* Reset all FW objects */
+	mutex_lock(&pvr_dev->fw_dev.fw_objs.lock);
+
+	list_for_each(pos, &pvr_dev->fw_dev.fw_objs.list) {
+		struct pvr_fw_object *fw_obj = container_of(pos, struct pvr_fw_object, node);
+		void *cpu_ptr = pvr_fw_object_vmap(fw_obj);
+
+		WARN_ON(IS_ERR(cpu_ptr));
+
+		if (!(fw_obj->gem->flags & PVR_BO_FW_NO_CLEAR_ON_RESET)) {
+			memset(cpu_ptr, 0, pvr_gem_object_size(fw_obj->gem));
+
+			if (fw_obj->init)
+				fw_obj->init(cpu_ptr, fw_obj->init_priv);
+		}
+
+		pvr_fw_object_vunmap(fw_obj);
+	}
+
+	mutex_unlock(&pvr_dev->fw_dev.fw_objs.lock);
+
+	err = pvr_fw_reinit_code_data(pvr_dev);
+	if (err)
+		return err;
+
+	return 0;
+}
diff --git a/drivers/gpu/drm/imagination/pvr_fw.h b/drivers/gpu/drm/imagination/pvr_fw.h
index dca7fe5b3dd0..8b36fe6d85e3 100644
--- a/drivers/gpu/drm/imagination/pvr_fw.h
+++ b/drivers/gpu/drm/imagination/pvr_fw.h
@@ -5,6 +5,10 @@
 #define PVR_FW_H
 
 #include "pvr_fw_info.h"
+#include "pvr_fw_trace.h"
+#include "pvr_gem.h"
+
+#include <drm/drm_mm.h>
 
 #include <linux/types.h>
 
@@ -12,6 +16,289 @@
 struct pvr_device;
 struct pvr_file;
 
+/* Forward declaration from "pvr_vm.h". */
+struct pvr_vm_context;
+
+#define ROGUE_FWIF_FWCCB_NUMCMDS_LOG2 5
+
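+/* Default KCCB depth: 1 << 7 = 128 command slots. */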
+#define ROGUE_FWIF_KCCB_NUMCMDS_LOG2_DEFAULT 7
+
+/**
+ * struct pvr_fw_object - container for firmware memory allocations
+ */
+struct pvr_fw_object {
+	/** @ref_count: FW object reference counter. */
+	struct kref ref_count;
+
+	/** @gem: GEM object backing the FW object. */
+	struct pvr_gem_object *gem;
+
+	/**
+	 * @fw_mm_node: Node representing mapping in FW address space.
+	 *              &struct pvr_fw_device.fw_mm_lock must be held when writing.
+	 */
+	struct drm_mm_node fw_mm_node;
+
+	/**
+	 * @fw_addr_offset: Virtual address offset of firmware mapping. Only
+	 *                  valid once the object has been mapped into the
+	 *                  firmware address space.
+	 */
+	u32 fw_addr_offset;
+
+	/**
+	 * @init: Initialisation callback. Will be called on object creation and FW hard reset.
+	 *        Object will have been zeroed before this is called.
+	 */
+	void (*init)(void *cpu_ptr, void *priv);
+
+	/** @init_priv: Private data for initialisation callback. */
+	void *init_priv;
+
+	/** @node: Node for firmware object list. */
+	struct list_head node;
+};
+
+/**
+ * struct pvr_fw_defs - FW processor function table and static definitions
+ */
+struct pvr_fw_defs {
+	/**
+	 * @init:
+	 *
+	 * FW processor specific initialisation.
+	 * @pvr_dev: Target PowerVR device.
+	 *
+	 * This function must call pvr_fw_heap_info_init() to initialise the firmware heap for this
+	 * FW processor.
+	 *
+	 * This function is mandatory.
+	 *
+	 * Returns:
+	 *  * 0 on success, or
+	 *  * Any appropriate error on failure.
+	 */
+	int (*init)(struct pvr_device *pvr_dev);
+
+	/**
+	 * @fini:
+	 *
+	 * FW processor specific finalisation.
+	 * @pvr_dev: Target PowerVR device.
+	 *
+	 * This function is optional.
+	 */
+	void (*fini)(struct pvr_device *pvr_dev);
+
+	/**
+	 * @fw_process:
+	 *
+	 * Load and process firmware image.
+	 * @pvr_dev: Target PowerVR device.
+	 * @fw: Pointer to firmware image.
+	 * @fw_code_ptr: Pointer to firmware code section.
+	 * @fw_data_ptr: Pointer to firmware data section.
+	 * @fw_core_code_ptr: Pointer to firmware core code section. May be %NULL.
+	 * @fw_core_data_ptr: Pointer to firmware core data section. May be %NULL.
+	 * @core_code_alloc_size: Total allocation size of core code section.
+	 *
+	 * This function is mandatory.
+	 *
+	 * Returns:
+	 *  * 0 on success, or
+	 *  * Any appropriate error on failure.
+	 */
+	int (*fw_process)(struct pvr_device *pvr_dev, const u8 *fw,
+			  u8 *fw_code_ptr, u8 *fw_data_ptr, u8 *fw_core_code_ptr,
+			  u8 *fw_core_data_ptr, u32 core_code_alloc_size);
+
+	/**
+	 * @vm_map:
+	 *
+	 * Map FW object into FW processor address space.
+	 * @pvr_dev: Target PowerVR device.
+	 * @fw_obj: FW object to map.
+	 *
+	 * This function is mandatory.
+	 *
+	 * Returns:
+	 *  * 0 on success, or
+	 *  * Any appropriate error on failure.
+	 */
+	int (*vm_map)(struct pvr_device *pvr_dev, struct pvr_fw_object *fw_obj);
+
+	/**
+	 * @vm_unmap:
+	 *
+	 * Unmap FW object from FW processor address space.
+	 * @pvr_dev: Target PowerVR device.
+	 * @fw_obj: FW object to unmap.
+	 *
+	 * This function is mandatory.
+	 */
+	void (*vm_unmap)(struct pvr_device *pvr_dev, struct pvr_fw_object *fw_obj);
+
+	/**
+	 * @get_fw_addr_with_offset:
+	 *
+	 * Called to get address of object in firmware address space, with offset.
+	 * @fw_obj: Pointer to object.
+	 * @offset: Desired offset from start of object.
+	 *
+	 * This function is mandatory.
+	 *
+	 * Returns:
+	 *  * Address in firmware address space.
+	 */
+	u32 (*get_fw_addr_with_offset)(struct pvr_fw_object *fw_obj, u32 offset);
+
+	/**
+	 * @wrapper_init:
+	 *
+	 * Called to initialise FW wrapper.
+	 * @pvr_dev: Target PowerVR device.
+	 *
+	 * This function is mandatory.
+	 *
+	 * Returns:
+	 *  * 0 on success, or
+	 *  * Any appropriate error on failure.
+	 */
+	int (*wrapper_init)(struct pvr_device *pvr_dev);
+
+	/**
+	 * @has_fixed_data_addr:
+	 *
+	 * Called to check if firmware fixed data must be loaded at the address given by the
+	 * firmware layout table.
+	 *
+	 * This function is mandatory.
+	 *
+	 * Returns:
+	 *  * %true if firmware fixed data must be loaded at the address given by the firmware
+	 *    layout table.
+	 *  * %false otherwise.
+	 */
+	bool (*has_fixed_data_addr)(void);
+
+	/**
+	 * @irq: FW Interrupt information.
+	 *
+	 * These are processor dependent and should be initialised by the
+	 * processor backend in pvr_fw_defs::init().
+	 */
+	struct {
+		/** @enable_reg: FW interrupt enable register. */
+		u32 enable_reg;
+
+		/** @status_reg: FW interrupt status register. */
+		u32 status_reg;
+
+		/**
+		 * @clear_reg: FW interrupt clear register.
+		 *
+		 * If @status_reg == @clear_reg, we clear by writing a bit to zero;
+		 * otherwise we clear by writing a bit to one.
+		 */
+		u32 clear_reg;
+
+		/** @event_mask: Bitmask of events to listen for. */
+		u32 event_mask;
+
+		/** @clear_mask: Value to write to the clear_reg in order to clear FW IRQs. */
+		u32 clear_mask;
+	} irq;
+};
+
+/**
+ * struct pvr_fw_mem - FW memory allocations
+ */
+struct pvr_fw_mem {
+	/** @code_obj: Object representing firmware code. */
+	struct pvr_fw_object *code_obj;
+
+	/** @data_obj: Object representing firmware data. */
+	struct pvr_fw_object *data_obj;
+
+	/**
+	 * @core_code_obj: Object representing firmware core code. May be
+	 *                 %NULL if firmware does not contain this section.
+	 */
+	struct pvr_fw_object *core_code_obj;
+
+	/**
+	 * @core_data_obj: Object representing firmware core data. May be
+	 *                 %NULL if firmware does not contain this section.
+	 */
+	struct pvr_fw_object *core_data_obj;
+
+	/** @code: Driver-side copy of firmware code. */
+	u8 *code;
+
+	/** @data: Driver-side copy of firmware data. */
+	u8 *data;
+
+	/**
+	 * @core_code: Driver-side copy of firmware core code. May be %NULL if firmware does not
+	 *             contain this section.
+	 */
+	u8 *core_code;
+
+	/**
+	 * @core_data: Driver-side copy of firmware core data. May be %NULL if firmware does not
+	 *             contain this section.
+	 */
+	u8 *core_data;
+
+	/** @code_alloc_size: Allocation size of firmware code section. */
+	u32 code_alloc_size;
+
+	/** @data_alloc_size: Allocation size of firmware data section. */
+	u32 data_alloc_size;
+
+	/** @core_code_alloc_size: Allocation size of firmware core code section. */
+	u32 core_code_alloc_size;
+
+	/** @core_data_alloc_size: Allocation size of firmware core data section. */
+	u32 core_data_alloc_size;
+
+	/**
+	 * @fwif_connection_ctl_obj: Object representing FWIF connection control
+	 *                           structure.
+	 */
+	struct pvr_fw_object *fwif_connection_ctl_obj;
+
+	/** @osinit_obj: Object representing FW OSINIT structure. */
+	struct pvr_fw_object *osinit_obj;
+
+	/** @sysinit_obj: Object representing FW SYSINIT structure. */
+	struct pvr_fw_object *sysinit_obj;
+
+	/** @osdata_obj: Object representing FW OSDATA structure. */
+	struct pvr_fw_object *osdata_obj;
+
+	/** @hwrinfobuf_obj: Object representing FW hwrinfobuf structure. */
+	struct pvr_fw_object *hwrinfobuf_obj;
+
+	/** @sysdata_obj: Object representing FW SYSDATA structure. */
+	struct pvr_fw_object *sysdata_obj;
+
+	/** @power_sync_obj: Object representing power sync state. */
+	struct pvr_fw_object *power_sync_obj;
+
+	/** @fault_page_obj: Object representing FW fault page. */
+	struct pvr_fw_object *fault_page_obj;
+
+	/** @gpu_util_fwcb_obj: Object representing FW GPU utilisation control structure. */
+	struct pvr_fw_object *gpu_util_fwcb_obj;
+
+	/** @runtime_cfg_obj: Object representing FW runtime config structure. */
+	struct pvr_fw_object *runtime_cfg_obj;
+
+	/** @mmucache_sync_obj: Object used as the sync parameter in an MMU cache operation. */
+	struct pvr_fw_object *mmucache_sync_obj;
+};
+
 struct pvr_fw_device {
 	/** @firmware: Handle to the firmware loaded into the device. */
 	const struct firmware *firmware;
@@ -22,13 +309,200 @@ struct pvr_fw_device {
 	/** @layout_entries: Pointer to firmware layout. */
 	const struct pvr_fw_layout_entry *layout_entries;
 
+	/** @mem: Structure containing objects representing firmware memory allocations. */
+	struct pvr_fw_mem mem;
+
+	/** @booted: %true if the firmware has been booted, %false otherwise. */
+	bool booted;
+
 	/**
 	 * @processor_type: FW processor type for this device. Must be one of
 	 *                  %PVR_FW_PROCESSOR_TYPE_*.
 	 */
 	u16 processor_type;
+
+	/** @defs: FW processor function table and static definitions for this device. */
+	const struct pvr_fw_defs *defs;
+
+	/** @processor_data: Pointer to data specific to FW processor. */
+	union {
+		/** @mips_data: Pointer to MIPS-specific data. */
+		struct pvr_fw_mips_data *mips_data;
+	} processor_data;
+
+	/** @fw_heap_info: Firmware heap information. */
+	struct {
+		/** @gpu_addr: Base address of firmware heap in GPU address space. */
+		u64 gpu_addr;
+
+		/** @size: Size of main area of heap. */
+		u32 size;
+
+		/** @offset_mask: Mask for offsets within FW heap. */
+		u32 offset_mask;
+
+		/** @raw_size: Raw size of heap, including reserved areas. */
+		u32 raw_size;
+
+		/** @log2_size: Log2 of raw size of heap. */
+		u32 log2_size;
+
+		/** @config_offset: Offset of config area within heap. */
+		u32 config_offset;
+
+		/** @reserved_size: Size of reserved area in heap. */
+		u32 reserved_size;
+	} fw_heap_info;
+
+	/** @fw_mm: Firmware address space allocator. */
+	struct drm_mm fw_mm;
+
+	/** @fw_mm_lock: Lock protecting access to &fw_mm. */
+	spinlock_t fw_mm_lock;
+
+	/** @fw_mm_base: Base address of address space managed by @fw_mm. */
+	u64 fw_mm_base;
+
+	/**
+	 * @fwif_connection_ctl: Pointer to CPU mapping of FWIF connection
+	 *                       control structure.
+	 */
+	struct rogue_fwif_connection_ctl *fwif_connection_ctl;
+
+	/** @fwif_sysinit: Pointer to CPU mapping of FW SYSINIT structure. */
+	struct rogue_fwif_sysinit *fwif_sysinit;
+
+	/** @fwif_sysdata: Pointer to CPU mapping of FW SYSDATA structure. */
+	struct rogue_fwif_sysdata *fwif_sysdata;
+
+	/** @fwif_osinit: Pointer to CPU mapping of FW OSINIT structure. */
+	struct rogue_fwif_osinit *fwif_osinit;
+
+	/** @fwif_osdata: Pointer to CPU mapping of FW OSDATA structure. */
+	struct rogue_fwif_osdata *fwif_osdata;
+
+	/** @power_sync: Pointer to CPU mapping of power sync state. */
+	u32 *power_sync;
+
+	/** @hwrinfobuf: Pointer to CPU mapping of FW HWR info buffer. */
+	struct rogue_fwif_hwrinfobuf *hwrinfobuf;
+
+	/** @fw_trace: Device firmware trace buffer state. */
+	struct pvr_fw_trace fw_trace;
+
+	/** @fw_objs: Structure tracking FW objects. */
+	struct {
+		/** @list: Head of FW object list. */
+		struct list_head list;
+
+		/** @lock: Lock protecting access to FW object list. */
+		struct mutex lock;
+	} fw_objs;
 };
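+
+/*
+ * Accessors for the FW processor IRQ registers. These resolve to the
+ * processor-specific registers described by pvr_fw_defs.irq.
+ */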
 
+#define pvr_fw_irq_read_reg(pvr_dev, name) \
+	pvr_cr_read32((pvr_dev), (pvr_dev)->fw_dev.defs->irq.name ## _reg)
+
+#define pvr_fw_irq_write_reg(pvr_dev, name, value) \
+	pvr_cr_write32((pvr_dev), (pvr_dev)->fw_dev.defs->irq.name ## _reg, value)
+
+#define pvr_fw_irq_pending(pvr_dev) \
+	(pvr_fw_irq_read_reg(pvr_dev, status) & (pvr_dev)->fw_dev.defs->irq.event_mask)
+
+#define pvr_fw_irq_clear(pvr_dev) \
+	pvr_fw_irq_write_reg(pvr_dev, clear, (pvr_dev)->fw_dev.defs->irq.clear_mask)
+
+#define pvr_fw_irq_enable(pvr_dev) \
+	pvr_fw_irq_write_reg(pvr_dev, enable, (pvr_dev)->fw_dev.defs->irq.event_mask)
+
+#define pvr_fw_irq_disable(pvr_dev) \
+	pvr_fw_irq_write_reg(pvr_dev, enable, 0)
+
+extern const struct pvr_fw_defs pvr_fw_defs_meta;
+extern const struct pvr_fw_defs pvr_fw_defs_mips;
+
 int pvr_fw_validate_init_device_info(struct pvr_device *pvr_dev);
+int pvr_fw_init(struct pvr_device *pvr_dev);
+void pvr_fw_fini(struct pvr_device *pvr_dev);
+
+int pvr_wait_for_fw_boot(struct pvr_device *pvr_dev);
+
+int
+pvr_fw_hard_reset(struct pvr_device *pvr_dev);
+
+void pvr_fw_mts_schedule(struct pvr_device *pvr_dev, u32 val);
+
+void
+pvr_fw_heap_info_init(struct pvr_device *pvr_dev, u32 log2_size, u32 reserved_size);
+
+const struct pvr_fw_layout_entry *
+pvr_fw_find_layout_entry(struct pvr_device *pvr_dev, enum pvr_fw_section_id id);
+int
+pvr_fw_find_mmu_segment(struct pvr_device *pvr_dev, u32 addr, u32 size, void *fw_code_ptr,
+			void *fw_data_ptr, void *fw_core_code_ptr, void *fw_core_data_ptr,
+			void **host_addr_out);
+
+int
+pvr_fw_structure_cleanup(struct pvr_device *pvr_dev, u32 type, struct pvr_fw_object *fw_obj,
+			 u32 offset);
+
+int pvr_fw_object_create(struct pvr_device *pvr_dev, size_t size, u64 flags,
+			 void (*init)(void *cpu_ptr, void *priv), void *init_priv,
+			 struct pvr_fw_object **pvr_obj_out);
+
+void *pvr_fw_object_create_and_map(struct pvr_device *pvr_dev, size_t size, u64 flags,
+				   void (*init)(void *cpu_ptr, void *priv),
+				   void *init_priv, struct pvr_fw_object **pvr_obj_out);
+
+void *
+pvr_fw_object_create_and_map_offset(struct pvr_device *pvr_dev, u32 dev_offset, size_t size,
+				    u64 flags, void (*init)(void *cpu_ptr, void *priv),
+				    void *init_priv, struct pvr_fw_object **pvr_obj_out);
+
+static __always_inline void *
+pvr_fw_object_vmap(struct pvr_fw_object *fw_obj)
+{
+	return pvr_gem_object_vmap(fw_obj->gem);
+}
+
+static __always_inline void
+pvr_fw_object_vunmap(struct pvr_fw_object *fw_obj)
+{
+	pvr_gem_object_vunmap(fw_obj->gem);
+}
+
+void pvr_fw_object_destroy(struct pvr_fw_object *fw_obj);
+
+static __always_inline void
+pvr_fw_object_unmap_and_destroy(struct pvr_fw_object *fw_obj)
+{
+	pvr_fw_object_vunmap(fw_obj);
+	pvr_fw_object_destroy(fw_obj);
+}
+
+/**
+ * pvr_fw_object_get_dma_addr() - Get DMA address for given offset in firmware object
+ * @fw_obj: Pointer to object to lookup address in.
+ * @offset: Offset within object to lookup address at.
+ * @dma_addr_out: Pointer to location to store DMA address.
+ *
+ * Returns:
+ *  * 0 on success, or
+ *  * -%EINVAL if object is not currently backed, or if @offset is out of valid
+ *    range for this object.
+ */
+static __always_inline int
+pvr_fw_object_get_dma_addr(struct pvr_fw_object *fw_obj, u32 offset, dma_addr_t *dma_addr_out)
+{
+	return pvr_gem_get_dma_addr(fw_obj->gem, offset, dma_addr_out);
+}
+
+void pvr_fw_object_get_fw_addr_offset(struct pvr_fw_object *fw_obj, u32 offset, u32 *fw_addr_out);
+
+static __always_inline void
+pvr_fw_object_get_fw_addr(struct pvr_fw_object *fw_obj, u32 *fw_addr_out)
+{
+	pvr_fw_object_get_fw_addr_offset(fw_obj, 0, fw_addr_out);
+}
 
 #endif /* PVR_FW_H */
diff --git a/drivers/gpu/drm/imagination/pvr_fw_meta.c b/drivers/gpu/drm/imagination/pvr_fw_meta.c
new file mode 100644
index 000000000000..7a5d26c0b9e0
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_fw_meta.c
@@ -0,0 +1,554 @@
+// SPDX-License-Identifier: GPL-2.0 OR MIT
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#include "pvr_device.h"
+#include "pvr_fw.h"
+#include "pvr_fw_info.h"
+#include "pvr_gem.h"
+#include "pvr_rogue_cr_defs.h"
+#include "pvr_rogue_meta.h"
+#include "pvr_vm.h"
+
+#include <linux/compiler.h>
+#include <linux/delay.h>
+#include <linux/firmware.h>
+#include <linux/ktime.h>
+#include <linux/types.h>
+
+#define ROGUE_FW_HEAP_META_SHIFT 25 /* 32 MB */
+
+#define POLL_TIMEOUT_USEC 1000000
+
+/**
+ * pvr_meta_cr_read32() - Read a META register via the Slave Port
+ * @pvr_dev: Device pointer.
+ * @reg_addr: Address of register to read.
+ * @reg_value_out: Pointer to location to store register value.
+ *
+ * Returns:
+ *  * 0 on success, or
+ *  * Any error returned by pvr_cr_poll_reg32().
+ */
+int
+pvr_meta_cr_read32(struct pvr_device *pvr_dev, u32 reg_addr, u32 *reg_value_out)
+{
+	int err;
+
+	/* Wait for Slave Port to be Ready. */
+	err = pvr_cr_poll_reg32(pvr_dev, ROGUE_CR_META_SP_MSLVCTRL1,
+				ROGUE_CR_META_SP_MSLVCTRL1_READY_EN |
+					ROGUE_CR_META_SP_MSLVCTRL1_GBLPORT_IDLE_EN,
+				ROGUE_CR_META_SP_MSLVCTRL1_READY_EN |
+					ROGUE_CR_META_SP_MSLVCTRL1_GBLPORT_IDLE_EN,
+				POLL_TIMEOUT_USEC);
+	if (err)
+		return err;
+
+	/* Issue a Read. */
+	pvr_cr_write32(pvr_dev, ROGUE_CR_META_SP_MSLVCTRL0,
+		       reg_addr | ROGUE_CR_META_SP_MSLVCTRL0_RD_EN);
+	(void)pvr_cr_read32(pvr_dev, ROGUE_CR_META_SP_MSLVCTRL0); /* Fence write. */
+
+	/* Wait for Slave Port to be Ready. */
+	err = pvr_cr_poll_reg32(pvr_dev, ROGUE_CR_META_SP_MSLVCTRL1,
+				ROGUE_CR_META_SP_MSLVCTRL1_READY_EN |
+					ROGUE_CR_META_SP_MSLVCTRL1_GBLPORT_IDLE_EN,
+				ROGUE_CR_META_SP_MSLVCTRL1_READY_EN |
+					ROGUE_CR_META_SP_MSLVCTRL1_GBLPORT_IDLE_EN,
+				POLL_TIMEOUT_USEC);
+	if (err)
+		return err;
+
+	*reg_value_out = pvr_cr_read32(pvr_dev, ROGUE_CR_META_SP_MSLVDATAX);
+
+	return 0;
+}
+
+static int
+pvr_meta_wrapper_init(struct pvr_device *pvr_dev)
+{
+	u64 garten_config;
+
+	/* Configure META to Master boot. */
+	pvr_cr_write64(pvr_dev, ROGUE_CR_META_BOOT, ROGUE_CR_META_BOOT_MODE_EN);
+
+	/* Set Garten IDLE to META idle and Set the Garten Wrapper BIF Fence address. */
+
+	/* Garten IDLE bit controlled by META. */
+	garten_config = ROGUE_CR_MTS_GARTEN_WRAPPER_CONFIG_IDLE_CTRL_META;
+
+	/* The fence addr is set during the fw init sequence. */
+
+	/* Set PC = 0 for fences. */
+	garten_config &=
+		ROGUE_CR_MTS_GARTEN_WRAPPER_CONFIG_FENCE_PC_BASE_CLRMSK;
+	garten_config |=
+		(u64)MMU_CONTEXT_MAPPING_FWPRIV
+		<< ROGUE_CR_MTS_GARTEN_WRAPPER_CONFIG_FENCE_PC_BASE_SHIFT;
+
+	/* Set SLC DM=META. */
+	garten_config |= ((u64)ROGUE_FW_SEGMMU_META_BIFDM_ID)
+			 << ROGUE_CR_MTS_GARTEN_WRAPPER_CONFIG_FENCE_DM_SHIFT;
+
+	pvr_cr_write64(pvr_dev, ROGUE_CR_MTS_GARTEN_WRAPPER_CONFIG, garten_config);
+
+	return 0;
+}
+
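+/* Append a (parameter, value) pair to the META bootloader configuration stream. */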
+static __always_inline void
+add_boot_arg(u32 **boot_conf, u32 param, u32 data)
+{
+	*(*boot_conf)++ = param;
+	*(*boot_conf)++ = data;
+}
+
+static int
+meta_ldr_cmd_loadmem(struct drm_device *drm_dev, const u8 *fw,
+		     struct rogue_meta_ldr_l1_data_blk *l1_data, u32 coremem_size, u8 *fw_code_ptr,
+		     u8 *fw_data_ptr, u8 *fw_core_code_ptr, u8 *fw_core_data_ptr, const u32 fw_size)
+{
+	struct rogue_meta_ldr_l2_data_blk *l2_block =
+		(struct rogue_meta_ldr_l2_data_blk *)(fw +
+						      l1_data->cmd_data[1]);
+	struct pvr_device *pvr_dev = to_pvr_device(drm_dev);
+	u32 offset = l1_data->cmd_data[0];
+	u32 data_size;
+	void *write_addr;
+	int err;
+
+	/* Verify header is within bounds. */
+	if (((u8 *)l2_block - fw) >= fw_size || ((u8 *)(l2_block + 1) - fw) >= fw_size)
+		return -EINVAL;
+
+	data_size = l2_block->length - 6 /* L2 Tag length and checksum */;
+
+	/* Verify data is within bounds. */
+	if (((u8 *)l2_block->block_data - fw) >= fw_size ||
+	    ((((u8 *)l2_block->block_data) + data_size) - fw) >= fw_size)
+		return -EINVAL;
+
+	if (!ROGUE_META_IS_COREMEM_CODE(offset, coremem_size) &&
+	    !ROGUE_META_IS_COREMEM_DATA(offset, coremem_size)) {
+		/* Global range is aliased to local range */
+		offset &= ~META_MEM_GLOBAL_RANGE_BIT;
+	}
+
+	err = pvr_fw_find_mmu_segment(pvr_dev, offset, data_size, fw_code_ptr, fw_data_ptr,
+				      fw_core_code_ptr, fw_core_data_ptr, &write_addr);
+	if (err) {
+		drm_err(drm_dev,
+			"Addr 0x%x (size: %d) not found in any firmware segment",
+			offset, data_size);
+		return err;
+	}
+
+	memcpy(write_addr, l2_block->block_data, data_size);
+
+	return 0;
+}
+
+static int
+meta_ldr_cmd_zeromem(struct drm_device *drm_dev,
+		     struct rogue_meta_ldr_l1_data_blk *l1_data, u32 coremem_size,
+		     u8 *fw_code_ptr, u8 *fw_data_ptr, u8 *fw_core_code_ptr, u8 *fw_core_data_ptr)
+{
+	struct pvr_device *pvr_dev = to_pvr_device(drm_dev);
+	u32 offset = l1_data->cmd_data[0];
+	u32 byte_count = l1_data->cmd_data[1];
+	void *write_addr;
+	int err;
+
+	if (ROGUE_META_IS_COREMEM_DATA(offset, coremem_size)) {
+		/* cannot zero coremem directly */
+		return 0;
+	}
+
+	/* Global range is aliased to local range */
+	offset &= ~META_MEM_GLOBAL_RANGE_BIT;
+
+	err = pvr_fw_find_mmu_segment(pvr_dev, offset, byte_count, fw_code_ptr, fw_data_ptr,
+				      fw_core_code_ptr, fw_core_data_ptr, &write_addr);
+	if (err) {
+		drm_err(drm_dev,
+			"Addr 0x%x (size: %d) not found in any firmware segment",
+			offset, byte_count);
+		return err;
+	}
+
+	memset(write_addr, 0, byte_count);
+
+	return 0;
+}
+
+static int
+meta_ldr_cmd_config(struct drm_device *drm_dev, const u8 *fw,
+		    struct rogue_meta_ldr_l1_data_blk *l1_data,
+		    const u32 fw_size, u32 **boot_conf_ptr)
+{
+	struct rogue_meta_ldr_l2_data_blk *l2_block =
+		(struct rogue_meta_ldr_l2_data_blk *)(fw +
+						      l1_data->cmd_data[0]);
+	struct rogue_meta_ldr_cfg_blk *config_command;
+	u32 l2_block_size;
+	u32 curr_block_size = 0;
+	u32 *boot_conf = boot_conf_ptr ? *boot_conf_ptr : NULL;
+
+	/* Verify block header is within bounds. */
+	if (((u8 *)l2_block - fw) >= fw_size || ((u8 *)(l2_block + 1) - fw) >= fw_size)
+		return -EINVAL;
+
+	l2_block_size = l2_block->length - 6 /* L2 Tag length and checksum */;
+	config_command = (struct rogue_meta_ldr_cfg_blk *)l2_block->block_data;
+
+	if (((u8 *)config_command - fw) >= fw_size ||
+	    ((((u8 *)config_command) + l2_block_size) - fw) >= fw_size)
+		return -EINVAL;
+
+	while (l2_block_size >= 12) {
+		if (config_command->type != ROGUE_META_LDR_CFG_WRITE)
+			return -EINVAL;
+
+		/*
+		 * Only write to bootloader if we got a valid pointer to the FW
+		 * code allocation.
+		 */
+		if (boot_conf) {
+			u32 register_offset = config_command->block_data[0];
+			u32 register_value = config_command->block_data[1];
+
+			/* Do register write */
+			add_boot_arg(&boot_conf, register_offset,
+				     register_value);
+		}
+
+		curr_block_size = 12;
+		l2_block_size -= curr_block_size;
+		config_command = (struct rogue_meta_ldr_cfg_blk
+					  *)((uintptr_t)config_command +
+					     curr_block_size);
+	}
+
+	if (boot_conf_ptr)
+		*boot_conf_ptr = boot_conf;
+
+	return 0;
+}
+
+/**
+ * process_ldr_command_stream() - Process LDR firmware image and populate
+ *                                firmware sections
+ * @pvr_dev: Device pointer.
+ * @fw: Pointer to firmware image.
+ * @fw_code_ptr: Pointer to FW code section.
+ * @fw_data_ptr: Pointer to FW data section.
+ * @fw_core_code_ptr: Pointer to FW coremem code section.
+ * @fw_core_data_ptr: Pointer to FW coremem data section.
+ * @boot_conf_ptr: Pointer to boot config argument pointer.
+ *
+ * Returns:
+ *  * 0 on success, or
+ *  * -EINVAL on any error in LDR command stream.
+ */
+static int
+process_ldr_command_stream(struct pvr_device *pvr_dev, const u8 *fw, u8 *fw_code_ptr,
+			   u8 *fw_data_ptr, u8 *fw_core_code_ptr,
+			   u8 *fw_core_data_ptr, u32 **boot_conf_ptr)
+{
+	struct drm_device *drm_dev = from_pvr_device(pvr_dev);
+	struct rogue_meta_ldr_block_hdr *ldr_header =
+		(struct rogue_meta_ldr_block_hdr *)fw;
+	struct rogue_meta_ldr_l1_data_blk *l1_data =
+		(struct rogue_meta_ldr_l1_data_blk *)(fw + ldr_header->sl_data);
+	const u32 fw_size = pvr_dev->fw_dev.firmware->size;
+	int err;
+
+	u32 *boot_conf = boot_conf_ptr ? *boot_conf_ptr : NULL;
+	u32 coremem_size;
+
+	err = PVR_FEATURE_VALUE(pvr_dev, meta_coremem_size, &coremem_size);
+	if (err)
+		return err;
+
+	coremem_size *= SZ_1K;
+
+	while (l1_data) {
+		/* Verify block header is within bounds. */
+		if (((u8 *)l1_data - fw) >= fw_size || ((u8 *)(l1_data + 1) - fw) >= fw_size)
+			return -EINVAL;
+
+		if (ROGUE_META_LDR_BLK_IS_COMMENT(l1_data->cmd)) {
+			/* Don't process comment blocks */
+			goto next_block;
+		}
+
+		switch (l1_data->cmd & ROGUE_META_LDR_CMD_MASK) {
+		case ROGUE_META_LDR_CMD_LOADMEM:
+			err = meta_ldr_cmd_loadmem(drm_dev, fw, l1_data,
+						   coremem_size,
+						   fw_code_ptr, fw_data_ptr,
+						   fw_core_code_ptr,
+						   fw_core_data_ptr, fw_size);
+			if (err)
+				return err;
+			break;
+
+		case ROGUE_META_LDR_CMD_START_THREADS:
+			/* Don't process this block */
+			break;
+
+		case ROGUE_META_LDR_CMD_ZEROMEM:
+			err = meta_ldr_cmd_zeromem(drm_dev, l1_data,
+						   coremem_size,
+						   fw_code_ptr, fw_data_ptr,
+						   fw_core_code_ptr,
+						   fw_core_data_ptr);
+			if (err)
+				return err;
+			break;
+
+		case ROGUE_META_LDR_CMD_CONFIG:
+			err = meta_ldr_cmd_config(drm_dev, fw, l1_data, fw_size,
+						  &boot_conf);
+			if (err)
+				return err;
+			break;
+
+		default:
+			return -EINVAL;
+		}
+
+next_block:
+		if (l1_data->next == 0xFFFFFFFF)
+			break;
+
+		l1_data = (struct rogue_meta_ldr_l1_data_blk *)(fw +
+								l1_data->next);
+	}
+
+	if (boot_conf_ptr)
+		*boot_conf_ptr = boot_conf;
+
+	return 0;
+}
+
+static void
+configure_seg_id(u64 seg_out_addr, u32 seg_base, u32 seg_limit, u32 seg_id,
+		 u32 **boot_conf_ptr)
+{
+	u32 seg_out_addr0 = seg_out_addr & 0x00000000FFFFFFFFUL;
+	u32 seg_out_addr1 = (seg_out_addr >> 32) & 0x00000000FFFFFFFFUL;
+	u32 *boot_conf = *boot_conf_ptr;
+
+	/* META segments have a minimum size. */
+	u32 limit_off = max(seg_limit, ROGUE_FW_SEGMMU_ALIGN);
+
+	/* The limit is an offset, therefore off = size - 1. */
+	limit_off -= 1;
+
+	seg_base |= ROGUE_FW_SEGMMU_ALLTHRS_WRITEABLE;
+
+	add_boot_arg(&boot_conf, META_CR_MMCU_SEGMENT_N_BASE(seg_id), seg_base);
+	add_boot_arg(&boot_conf, META_CR_MMCU_SEGMENT_N_LIMIT(seg_id), limit_off);
+	add_boot_arg(&boot_conf, META_CR_MMCU_SEGMENT_N_OUTA0(seg_id), seg_out_addr0);
+	add_boot_arg(&boot_conf, META_CR_MMCU_SEGMENT_N_OUTA1(seg_id), seg_out_addr1);
+
+	*boot_conf_ptr = boot_conf;
+}
+
+static u64 get_fw_obj_gpu_addr(struct pvr_fw_object *fw_obj)
+{
+	struct pvr_device *pvr_dev = to_pvr_device(gem_from_pvr_gem(fw_obj->gem)->dev);
+	struct pvr_fw_device *fw_dev = &pvr_dev->fw_dev;
+
+	return fw_obj->fw_addr_offset + fw_dev->fw_heap_info.gpu_addr;
+}
+
+static void
+configure_seg_mmu(struct pvr_device *pvr_dev, u32 **boot_conf_ptr)
+{
+	const struct pvr_fw_layout_entry *layout_entries = pvr_dev->fw_dev.layout_entries;
+	u32 num_layout_entries = pvr_dev->fw_dev.header->layout_entry_num;
+	u64 seg_out_addr_top;
+	u32 i;
+
+	seg_out_addr_top =
+		ROGUE_FW_SEGMMU_OUTADDR_TOP_SLC(MMU_CONTEXT_MAPPING_FWPRIV,
+						ROGUE_FW_SEGMMU_META_BIFDM_ID);
+
+	for (i = 0; i < num_layout_entries; i++) {
+		/*
+		 * FW code is using the bootloader segment which is already
+		 * configured on boot. FW coremem code and data don't use the
+		 * segment MMU. Only the FW data segment needs to be configured.
+		 */
+		if (layout_entries[i].type == FW_DATA) {
+			u32 seg_id = ROGUE_FW_SEGMMU_DATA_ID;
+			u64 seg_out_addr = get_fw_obj_gpu_addr(pvr_dev->fw_dev.mem.data_obj);
+
+			seg_out_addr += layout_entries[i].alloc_offset;
+			seg_out_addr |= seg_out_addr_top;
+
+			/* Write the sequence to the bootldr. */
+			configure_seg_id(seg_out_addr,
+					 layout_entries[i].base_addr,
+					 layout_entries[i].alloc_size, seg_id,
+					 boot_conf_ptr);
+
+			break;
+		}
+	}
+}
+
+static void
+configure_meta_caches(u32 **boot_conf_ptr)
+{
+	u32 *boot_conf = *boot_conf_ptr;
+	u32 d_cache_t0, i_cache_t0;
+	u32 d_cache_t1, i_cache_t1;
+	u32 d_cache_t2, i_cache_t2;
+	u32 d_cache_t3, i_cache_t3;
+
+	/* Initialise I/Dcache settings */
+	d_cache_t0 = META_CR_SYSC_DCPARTX_CACHED_WRITE_ENABLE;
+	d_cache_t1 = META_CR_SYSC_DCPARTX_CACHED_WRITE_ENABLE;
+	d_cache_t2 = META_CR_SYSC_DCPARTX_CACHED_WRITE_ENABLE;
+	d_cache_t3 = META_CR_SYSC_DCPARTX_CACHED_WRITE_ENABLE;
+	i_cache_t0 = 0;
+	i_cache_t1 = 0;
+	i_cache_t2 = 0;
+	i_cache_t3 = 0;
+
+	d_cache_t0 |= META_CR_SYSC_XCPARTX_LOCAL_ADDR_FULL_CACHE;
+	i_cache_t0 |= META_CR_SYSC_XCPARTX_LOCAL_ADDR_FULL_CACHE;
+
+	/* Local region MMU enhanced bypass: WIN-3 mode for code and data caches */
+	add_boot_arg(&boot_conf, META_CR_MMCU_LOCAL_EBCTRL,
+		     META_CR_MMCU_LOCAL_EBCTRL_ICWIN |
+			     META_CR_MMCU_LOCAL_EBCTRL_DCWIN);
+
+	/* Data cache partitioning thread 0 to 3 */
+	add_boot_arg(&boot_conf, META_CR_SYSC_DCPART(0), d_cache_t0);
+	add_boot_arg(&boot_conf, META_CR_SYSC_DCPART(1), d_cache_t1);
+	add_boot_arg(&boot_conf, META_CR_SYSC_DCPART(2), d_cache_t2);
+	add_boot_arg(&boot_conf, META_CR_SYSC_DCPART(3), d_cache_t3);
+
+	/* Enable data cache hits */
+	add_boot_arg(&boot_conf, META_CR_MMCU_DCACHE_CTRL,
+		     META_CR_MMCU_XCACHE_CTRL_CACHE_HITS_EN);
+
+	/* Instruction cache partitioning thread 0 to 3 */
+	add_boot_arg(&boot_conf, META_CR_SYSC_ICPART(0), i_cache_t0);
+	add_boot_arg(&boot_conf, META_CR_SYSC_ICPART(1), i_cache_t1);
+	add_boot_arg(&boot_conf, META_CR_SYSC_ICPART(2), i_cache_t2);
+	add_boot_arg(&boot_conf, META_CR_SYSC_ICPART(3), i_cache_t3);
+
+	/* Enable instruction cache hits */
+	add_boot_arg(&boot_conf, META_CR_MMCU_ICACHE_CTRL,
+		     META_CR_MMCU_XCACHE_CTRL_CACHE_HITS_EN);
+
+	add_boot_arg(&boot_conf, 0x040000C0, 0);
+
+	*boot_conf_ptr = boot_conf;
+}
+
+static int
+pvr_meta_fw_process(struct pvr_device *pvr_dev, const u8 *fw,
+		    u8 *fw_code_ptr, u8 *fw_data_ptr, u8 *fw_core_code_ptr, u8 *fw_core_data_ptr,
+		    u32 core_code_alloc_size)
+{
+	struct pvr_fw_device *fw_dev = &pvr_dev->fw_dev;
+	u32 *boot_conf;
+	int err;
+
+	boot_conf = ((u32 *)fw_code_ptr) + ROGUE_FW_BOOTLDR_CONF_OFFSET;
+
+	/* Slave port and JTAG accesses are privileged. */
+	add_boot_arg(&boot_conf, META_CR_SYSC_JTAG_THREAD,
+		     META_CR_SYSC_JTAG_THREAD_PRIV_EN);
+
+	configure_seg_mmu(pvr_dev, &boot_conf);
+
+	/* Populate FW sections from LDR image. */
+	err = process_ldr_command_stream(pvr_dev, fw, fw_code_ptr, fw_data_ptr, fw_core_code_ptr,
+					 fw_core_data_ptr, &boot_conf);
+	if (err)
+		return err;
+
+	configure_meta_caches(&boot_conf);
+
+	/* End argument list. */
+	add_boot_arg(&boot_conf, 0, 0);
+
+	if (fw_dev->mem.core_code_obj) {
+		u32 core_code_fw_addr;
+
+		pvr_fw_object_get_fw_addr(fw_dev->mem.core_code_obj, &core_code_fw_addr);
+		add_boot_arg(&boot_conf, core_code_fw_addr, core_code_alloc_size);
+	} else {
+		add_boot_arg(&boot_conf, 0, 0);
+	}
+	/* None of the cores supported by this driver have META DMA. */
+	add_boot_arg(&boot_conf, 0, 0);
+
+	return 0;
+}
+
+static int
+pvr_meta_init(struct pvr_device *pvr_dev)
+{
+	pvr_fw_heap_info_init(pvr_dev, ROGUE_FW_HEAP_META_SHIFT, 0);
+
+	return 0;
+}
+
+static u32
+pvr_meta_get_fw_addr_with_offset(struct pvr_fw_object *fw_obj, u32 offset)
+{
+	u32 fw_addr = fw_obj->fw_addr_offset + offset + ROGUE_FW_SEGMMU_DATA_BASE_ADDRESS;
+
+	/* META cacheability is determined by address. */
+	if (fw_obj->gem->flags & PVR_BO_FW_FLAGS_DEVICE_UNCACHED)
+		fw_addr |= ROGUE_FW_SEGMMU_DATA_META_UNCACHED |
+			   ROGUE_FW_SEGMMU_DATA_VIVT_SLC_UNCACHED;
+
+	return fw_addr;
+}
+
+static int
+pvr_meta_vm_map(struct pvr_device *pvr_dev, struct pvr_fw_object *fw_obj)
+{
+	struct pvr_gem_object *pvr_obj = fw_obj->gem;
+
+	return pvr_vm_map(pvr_dev->kernel_vm_ctx, pvr_obj, 0, fw_obj->fw_mm_node.start,
+			  pvr_gem_object_size(pvr_obj));
+}
+
+static void
+pvr_meta_vm_unmap(struct pvr_device *pvr_dev, struct pvr_fw_object *fw_obj)
+{
+	pvr_vm_unmap(pvr_dev->kernel_vm_ctx, fw_obj->fw_mm_node.start,
+		     fw_obj->fw_mm_node.size);
+}
+
+static bool
+pvr_meta_has_fixed_data_addr(void)
+{
+	return false;
+}
+
+const struct pvr_fw_defs pvr_fw_defs_meta = {
+	.init = pvr_meta_init,
+	.fw_process = pvr_meta_fw_process,
+	.vm_map = pvr_meta_vm_map,
+	.vm_unmap = pvr_meta_vm_unmap,
+	.get_fw_addr_with_offset = pvr_meta_get_fw_addr_with_offset,
+	.wrapper_init = pvr_meta_wrapper_init,
+	.has_fixed_data_addr = pvr_meta_has_fixed_data_addr,
+	.irq = {
+		.enable_reg = ROGUE_CR_META_SP_MSLVIRQENABLE,
+		.status_reg = ROGUE_CR_META_SP_MSLVIRQSTATUS,
+		.clear_reg = ROGUE_CR_META_SP_MSLVIRQSTATUS,
+		.event_mask = ROGUE_CR_META_SP_MSLVIRQSTATUS_TRIGVECT2_EN,
+		.clear_mask = ROGUE_CR_META_SP_MSLVIRQSTATUS_TRIGVECT2_CLRMSK,
+	},
+};
diff --git a/drivers/gpu/drm/imagination/pvr_fw_meta.h b/drivers/gpu/drm/imagination/pvr_fw_meta.h
new file mode 100644
index 000000000000..d88970e82334
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_fw_meta.h
@@ -0,0 +1,14 @@
+/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#ifndef PVR_FW_META_H
+#define PVR_FW_META_H
+
+#include <linux/types.h>
+
+/* Forward declaration from pvr_device.h */
+struct pvr_device;
+
+int pvr_meta_cr_read32(struct pvr_device *pvr_dev, u32 reg_addr, u32 *reg_value_out);
+
+#endif /* PVR_FW_META_H */
diff --git a/drivers/gpu/drm/imagination/pvr_fw_startstop.c b/drivers/gpu/drm/imagination/pvr_fw_startstop.c
new file mode 100644
index 000000000000..a9db0c60e57f
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_fw_startstop.c
@@ -0,0 +1,301 @@
+// SPDX-License-Identifier: GPL-2.0 OR MIT
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#include "pvr_device.h"
+#include "pvr_fw.h"
+#include "pvr_fw_meta.h"
+#include "pvr_fw_startstop.h"
+#include "pvr_rogue_cr_defs.h"
+#include "pvr_rogue_meta.h"
+#include "pvr_vm.h"
+
+#include <linux/compiler.h>
+#include <linux/delay.h>
+#include <linux/ktime.h>
+#include <linux/types.h>
+
+#define POLL_TIMEOUT_USEC 1000000
+
+static void
+rogue_axi_ace_list_init(struct pvr_device *pvr_dev)
+{
+	/* Setup AXI-ACE config. Set everything to outer cache. */
+	u64 reg_val =
+		(3U << ROGUE_CR_AXI_ACE_LITE_CONFIGURATION_AWDOMAIN_NON_SNOOPING_SHIFT) |
+		(3U << ROGUE_CR_AXI_ACE_LITE_CONFIGURATION_ARDOMAIN_NON_SNOOPING_SHIFT) |
+		(2U << ROGUE_CR_AXI_ACE_LITE_CONFIGURATION_ARDOMAIN_CACHE_MAINTENANCE_SHIFT) |
+		(2U << ROGUE_CR_AXI_ACE_LITE_CONFIGURATION_AWDOMAIN_COHERENT_SHIFT) |
+		(2U << ROGUE_CR_AXI_ACE_LITE_CONFIGURATION_ARDOMAIN_COHERENT_SHIFT) |
+		(2U << ROGUE_CR_AXI_ACE_LITE_CONFIGURATION_AWCACHE_COHERENT_SHIFT) |
+		(2U << ROGUE_CR_AXI_ACE_LITE_CONFIGURATION_ARCACHE_COHERENT_SHIFT) |
+		(2U << ROGUE_CR_AXI_ACE_LITE_CONFIGURATION_ARCACHE_CACHE_MAINTENANCE_SHIFT);
+
+	pvr_cr_write64(pvr_dev, ROGUE_CR_AXI_ACE_LITE_CONFIGURATION, reg_val);
+}
+
+static void
+rogue_bif_init(struct pvr_device *pvr_dev)
+{
+	dma_addr_t pc_dma_addr;
+	u64 pc_addr;
+
+	/* Acquire the address of the Kernel Page Catalogue. */
+	pc_dma_addr = pvr_vm_get_page_table_root_addr(pvr_dev->kernel_vm_ctx);
+
+	/* Write the kernel catalogue base. */
+	pc_addr = ((((u64)pc_dma_addr >> ROGUE_CR_BIF_CAT_BASE0_ADDR_ALIGNSHIFT)
+		    << ROGUE_CR_BIF_CAT_BASE0_ADDR_SHIFT) &
+		   ~ROGUE_CR_BIF_CAT_BASE0_ADDR_CLRMSK);
+
+	pvr_cr_write64(pvr_dev, BIF_CAT_BASEX(MMU_CONTEXT_MAPPING_FWPRIV),
+		       pc_addr);
+}
+
+static int
+rogue_slc_init(struct pvr_device *pvr_dev)
+{
+	u16 slc_cache_line_size_bits;
+	u32 reg_val;
+	int err;
+
+	/*
+	 * SLC Misc control.
+	 *
+	 * Note: This is a 64bit register and we set only the lower 32bits
+	 *       leaving the top 32bits (ROGUE_CR_SLC_CTRL_MISC_SCRAMBLE_BITS)
+	 *       unchanged from the HW default.
+	 */
+	reg_val = (pvr_cr_read32(pvr_dev, ROGUE_CR_SLC_CTRL_MISC) &
+		      ROGUE_CR_SLC_CTRL_MISC_ENABLE_PSG_HAZARD_CHECK_EN) |
+		     ROGUE_CR_SLC_CTRL_MISC_ADDR_DECODE_MODE_PVR_HASH1;
+
+	err = PVR_FEATURE_VALUE(pvr_dev, slc_cache_line_size_bits, &slc_cache_line_size_bits);
+	if (err)
+		return err;
+
+	/* Bypass burst combiner if SLC line size is smaller than 1024 bits. */
+	if (slc_cache_line_size_bits < 1024)
+		reg_val |= ROGUE_CR_SLC_CTRL_MISC_BYPASS_BURST_COMBINER_EN;
+
+	pvr_cr_write32(pvr_dev, ROGUE_CR_SLC_CTRL_MISC, reg_val);
+
+	return 0;
+}
+
+/**
+ * pvr_fw_start() - Start FW processor and boot firmware
+ * @pvr_dev: Target PowerVR device.
+ *
+ * Returns:
+ *  * 0 on success, or
+ *  * Any error returned by rogue_slc_init().
+ */
+int
+pvr_fw_start(struct pvr_device *pvr_dev)
+{
+	bool has_reset2 = PVR_HAS_FEATURE(pvr_dev, xe_tpu2);
+	u64 soft_reset_mask;
+	int err;
+
+	if (PVR_HAS_FEATURE(pvr_dev, pbe2_in_xe))
+		soft_reset_mask = ROGUE_CR_SOFT_RESET__PBE2_XE__MASKFULL;
+	else
+		soft_reset_mask = ROGUE_CR_SOFT_RESET_MASKFULL;
+
+	if (PVR_HAS_FEATURE(pvr_dev, sys_bus_secure_reset)) {
+		/*
+		 * Disable the default sys_bus_secure protection to perform
+		 * minimal setup.
+		 */
+		pvr_cr_write32(pvr_dev, ROGUE_CR_SYS_BUS_SECURE, 0);
+		(void)pvr_cr_read32(pvr_dev, ROGUE_CR_SYS_BUS_SECURE); /* Fence write */
+	}
+
+	/* Set Rogue in soft-reset. */
+	pvr_cr_write64(pvr_dev, ROGUE_CR_SOFT_RESET, soft_reset_mask);
+	if (has_reset2)
+		pvr_cr_write64(pvr_dev, ROGUE_CR_SOFT_RESET2, ROGUE_CR_SOFT_RESET2_MASKFULL);
+
+	/* Read soft-reset to fence previous write in order to clear the SOCIF pipeline. */
+	(void)pvr_cr_read64(pvr_dev, ROGUE_CR_SOFT_RESET);
+	if (has_reset2)
+		(void)pvr_cr_read64(pvr_dev, ROGUE_CR_SOFT_RESET2);
+
+	/* Take Rascal and Dust out of reset. */
+	pvr_cr_write64(pvr_dev, ROGUE_CR_SOFT_RESET,
+		       soft_reset_mask ^ ROGUE_CR_SOFT_RESET_RASCALDUSTS_EN);
+	if (has_reset2)
+		pvr_cr_write64(pvr_dev, ROGUE_CR_SOFT_RESET2, 0);
+
+	(void)pvr_cr_read64(pvr_dev, ROGUE_CR_SOFT_RESET);
+	if (has_reset2)
+		(void)pvr_cr_read64(pvr_dev, ROGUE_CR_SOFT_RESET2);
+
+	/* Take everything out of reset but the FW processor. */
+	pvr_cr_write64(pvr_dev, ROGUE_CR_SOFT_RESET, ROGUE_CR_SOFT_RESET_GARTEN_EN);
+	if (has_reset2)
+		pvr_cr_write64(pvr_dev, ROGUE_CR_SOFT_RESET2, 0);
+
+	(void)pvr_cr_read64(pvr_dev, ROGUE_CR_SOFT_RESET);
+	if (has_reset2)
+		(void)pvr_cr_read64(pvr_dev, ROGUE_CR_SOFT_RESET2);
+
+	err = rogue_slc_init(pvr_dev);
+	if (err)
+		goto err_reset;
+
+	/* Initialise Firmware wrapper. */
+	pvr_dev->fw_dev.defs->wrapper_init(pvr_dev);
+
+	/* We must init the AXI-ACE interface before the first BIF transaction. */
+	rogue_axi_ace_list_init(pvr_dev);
+
+	/* Initialise BIF. */
+	rogue_bif_init(pvr_dev);
+
+	/* Need to wait for at least 16 cycles before taking the FW processor out of reset ... */
+	udelay(3);
+
+	pvr_cr_write64(pvr_dev, ROGUE_CR_SOFT_RESET, 0x0);
+	(void)pvr_cr_read64(pvr_dev, ROGUE_CR_SOFT_RESET);
+
+	/* ... and afterwards. */
+	udelay(3);
+
+	return 0;
+
+err_reset:
+	/* Put everything back into soft-reset. */
+	pvr_cr_write64(pvr_dev, ROGUE_CR_SOFT_RESET, soft_reset_mask);
+
+	return err;
+}
+
+/**
+ * pvr_fw_stop() - Stop FW processor
+ * @pvr_dev: Target PowerVR device.
+ *
+ * Returns:
+ *  * 0 on success, or
+ *  * Any error returned by pvr_cr_poll_reg32().
+ */
+int
+pvr_fw_stop(struct pvr_device *pvr_dev)
+{
+	const u32 sidekick_idle_mask = ROGUE_CR_SIDEKICK_IDLE_MASKFULL &
+				       ~(ROGUE_CR_SIDEKICK_IDLE_GARTEN_EN |
+					 ROGUE_CR_SIDEKICK_IDLE_SOCIF_EN |
+					 ROGUE_CR_SIDEKICK_IDLE_HOSTIF_EN);
+	bool skip_garten_idle = false;
+	u32 reg_value;
+	int err;
+
+	/*
+	 * Wait for Sidekick/Jones to signal IDLE except for the Garten Wrapper.
+	 * For cores with the LAYOUT_MARS feature, SIDEKICK would have been
+	 * powered down by the FW.
+	 */
+	err = pvr_cr_poll_reg32(pvr_dev, ROGUE_CR_SIDEKICK_IDLE, sidekick_idle_mask,
+				sidekick_idle_mask, POLL_TIMEOUT_USEC);
+	if (err)
+		return err;
+
+	/* Unset MTS DM association with threads. */
+	pvr_cr_write32(pvr_dev, ROGUE_CR_MTS_INTCTX_THREAD0_DM_ASSOC,
+		       ROGUE_CR_MTS_INTCTX_THREAD0_DM_ASSOC_MASKFULL &
+		       ROGUE_CR_MTS_INTCTX_THREAD0_DM_ASSOC_DM_ASSOC_CLRMSK);
+	pvr_cr_write32(pvr_dev, ROGUE_CR_MTS_BGCTX_THREAD0_DM_ASSOC,
+		       ROGUE_CR_MTS_BGCTX_THREAD0_DM_ASSOC_MASKFULL &
+		       ROGUE_CR_MTS_BGCTX_THREAD0_DM_ASSOC_DM_ASSOC_CLRMSK);
+	pvr_cr_write32(pvr_dev, ROGUE_CR_MTS_INTCTX_THREAD1_DM_ASSOC,
+		       ROGUE_CR_MTS_INTCTX_THREAD1_DM_ASSOC_MASKFULL &
+		       ROGUE_CR_MTS_INTCTX_THREAD1_DM_ASSOC_DM_ASSOC_CLRMSK);
+	pvr_cr_write32(pvr_dev, ROGUE_CR_MTS_BGCTX_THREAD1_DM_ASSOC,
+		       ROGUE_CR_MTS_BGCTX_THREAD1_DM_ASSOC_MASKFULL &
+		       ROGUE_CR_MTS_BGCTX_THREAD1_DM_ASSOC_DM_ASSOC_CLRMSK);
+
+	/* Extra Idle checks. */
+	err = pvr_cr_poll_reg32(pvr_dev, ROGUE_CR_BIF_STATUS_MMU, 0,
+				ROGUE_CR_BIF_STATUS_MMU_MASKFULL,
+				POLL_TIMEOUT_USEC);
+	if (err)
+		return err;
+
+	err = pvr_cr_poll_reg32(pvr_dev, ROGUE_CR_BIFPM_STATUS_MMU, 0,
+				ROGUE_CR_BIFPM_STATUS_MMU_MASKFULL,
+				POLL_TIMEOUT_USEC);
+	if (err)
+		return err;
+
+	if (!PVR_HAS_FEATURE(pvr_dev, xt_top_infrastructure)) {
+		err = pvr_cr_poll_reg32(pvr_dev, ROGUE_CR_BIF_READS_EXT_STATUS, 0,
+					ROGUE_CR_BIF_READS_EXT_STATUS_MASKFULL,
+					POLL_TIMEOUT_USEC);
+		if (err)
+			return err;
+	}
+
+	err = pvr_cr_poll_reg32(pvr_dev, ROGUE_CR_BIFPM_READS_EXT_STATUS, 0,
+				ROGUE_CR_BIFPM_READS_EXT_STATUS_MASKFULL,
+				POLL_TIMEOUT_USEC);
+	if (err)
+		return err;
+
+	err = pvr_cr_poll_reg64(pvr_dev, ROGUE_CR_SLC_STATUS1, 0,
+				ROGUE_CR_SLC_STATUS1_MASKFULL,
+				POLL_TIMEOUT_USEC);
+	if (err)
+		return err;
+
+	/*
+	 * Wait for SLC to signal IDLE.
+	 * For cores with the LAYOUT_MARS feature, SLC would have been powered
+	 * down by the FW.
+	 */
+	err = pvr_cr_poll_reg32(pvr_dev, ROGUE_CR_SLC_IDLE,
+				ROGUE_CR_SLC_IDLE_MASKFULL,
+				ROGUE_CR_SLC_IDLE_MASKFULL, POLL_TIMEOUT_USEC);
+	if (err)
+		return err;
+
+	/*
+	 * Wait for Sidekick/Jones to signal IDLE except for the Garten Wrapper.
+	 * For cores with the LAYOUT_MARS feature, SIDEKICK would have been powered
+	 * down by the FW.
+	 */
+	err = pvr_cr_poll_reg32(pvr_dev, ROGUE_CR_SIDEKICK_IDLE, sidekick_idle_mask,
+				sidekick_idle_mask, POLL_TIMEOUT_USEC);
+	if (err)
+		return err;
+
+	if (pvr_dev->fw_dev.processor_type == PVR_FW_PROCESSOR_TYPE_META) {
+		err = pvr_meta_cr_read32(pvr_dev, META_CR_TxVECINT_BHALT, &reg_value);
+		if (err)
+			return err;
+
+		/*
+		 * Wait for Sidekick/Jones to signal IDLE including the Garten
+		 * Wrapper if there is no debugger attached (TxVECINT_BHALT =
+		 * 0x0).
+		 */
+		if (reg_value)
+			skip_garten_idle = true;
+	}
+
+	if (!skip_garten_idle) {
+		err = pvr_cr_poll_reg32(pvr_dev, ROGUE_CR_SIDEKICK_IDLE,
+					ROGUE_CR_SIDEKICK_IDLE_GARTEN_EN,
+					ROGUE_CR_SIDEKICK_IDLE_GARTEN_EN,
+					POLL_TIMEOUT_USEC);
+		if (err)
+			return err;
+	}
+
+	if (PVR_HAS_FEATURE(pvr_dev, pbe2_in_xe))
+		pvr_cr_write64(pvr_dev, ROGUE_CR_SOFT_RESET,
+			       ROGUE_CR_SOFT_RESET__PBE2_XE__MASKFULL);
+	else
+		pvr_cr_write64(pvr_dev, ROGUE_CR_SOFT_RESET, ROGUE_CR_SOFT_RESET_MASKFULL);
+
+	return 0;
+}
diff --git a/drivers/gpu/drm/imagination/pvr_fw_startstop.h b/drivers/gpu/drm/imagination/pvr_fw_startstop.h
new file mode 100644
index 000000000000..ab84f14596c7
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_fw_startstop.h
@@ -0,0 +1,13 @@
+/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#ifndef PVR_FW_STARTSTOP_H
+#define PVR_FW_STARTSTOP_H
+
+/* Forward declaration from pvr_device.h. */
+struct pvr_device;
+
+int pvr_fw_start(struct pvr_device *pvr_dev);
+int pvr_fw_stop(struct pvr_device *pvr_dev);
+
+#endif /* PVR_FW_STARTSTOP_H */
diff --git a/drivers/gpu/drm/imagination/pvr_fw_trace.c b/drivers/gpu/drm/imagination/pvr_fw_trace.c
new file mode 100644
index 000000000000..11f10946aeb1
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_fw_trace.c
@@ -0,0 +1,121 @@
+// SPDX-License-Identifier: GPL-2.0 OR MIT
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#include "pvr_device.h"
+#include "pvr_gem.h"
+#include "pvr_rogue_fwif.h"
+#include "pvr_rogue_fwif_sf.h"
+#include "pvr_fw_trace.h"
+
+#include <drm/drm_file.h>
+
+#include <linux/build_bug.h>
+#include <linux/dcache.h>
+#include <linux/sysfs.h>
+#include <linux/types.h>
+
+static void
+tracebuf_ctrl_init(void *cpu_ptr, void *priv)
+{
+	struct rogue_fwif_tracebuf *tracebuf_ctrl = cpu_ptr;
+	struct pvr_fw_trace *fw_trace = priv;
+	u32 thread_nr;
+
+	tracebuf_ctrl->tracebuf_size_in_dwords = ROGUE_FW_TRACE_BUF_DEFAULT_SIZE_IN_DWORDS;
+	tracebuf_ctrl->tracebuf_flags = 0;
+
+	if (fw_trace->group_mask)
+		tracebuf_ctrl->log_type = fw_trace->group_mask | ROGUE_FWIF_LOG_TYPE_TRACE;
+	else
+		tracebuf_ctrl->log_type = ROGUE_FWIF_LOG_TYPE_NONE;
+
+	for (thread_nr = 0; thread_nr < ARRAY_SIZE(fw_trace->buffers); thread_nr++) {
+		struct rogue_fwif_tracebuf_space *tracebuf_space =
+			&tracebuf_ctrl->tracebuf[thread_nr];
+		struct pvr_fw_trace_buffer *trace_buffer = &fw_trace->buffers[thread_nr];
+
+		pvr_fw_object_get_fw_addr(trace_buffer->buf_obj,
+					  &tracebuf_space->trace_buffer_fw_addr);
+
+		tracebuf_space->trace_buffer = trace_buffer->buf;
+		tracebuf_space->trace_pointer = 0;
+	}
+}
+
+int pvr_fw_trace_init(struct pvr_device *pvr_dev)
+{
+	struct pvr_fw_trace *fw_trace = &pvr_dev->fw_dev.fw_trace;
+	struct drm_device *drm_dev = from_pvr_device(pvr_dev);
+	u32 thread_nr;
+	int err;
+
+	for (thread_nr = 0; thread_nr < ARRAY_SIZE(fw_trace->buffers); thread_nr++) {
+		struct pvr_fw_trace_buffer *trace_buffer = &fw_trace->buffers[thread_nr];
+
+		trace_buffer->buf =
+			pvr_fw_object_create_and_map(pvr_dev,
+						     ROGUE_FW_TRACE_BUF_DEFAULT_SIZE_IN_DWORDS *
+						     sizeof(*trace_buffer->buf),
+						     PVR_BO_FW_FLAGS_DEVICE_UNCACHED |
+						     PVR_BO_FW_NO_CLEAR_ON_RESET,
+						     NULL, NULL, &trace_buffer->buf_obj);
+		if (IS_ERR(trace_buffer->buf)) {
+			drm_err(drm_dev, "Unable to allocate trace buffer\n");
+			err = PTR_ERR(trace_buffer->buf);
+			trace_buffer->buf = NULL;
+			goto err_free_buf;
+		}
+	}
+
+	/* TODO: Provide control of group mask. */
+	fw_trace->group_mask = 0;
+
+	fw_trace->tracebuf_ctrl =
+		pvr_fw_object_create_and_map(pvr_dev,
+					     sizeof(*fw_trace->tracebuf_ctrl),
+					     PVR_BO_FW_FLAGS_DEVICE_UNCACHED |
+					     PVR_BO_FW_NO_CLEAR_ON_RESET,
+					     tracebuf_ctrl_init, fw_trace,
+					     &fw_trace->tracebuf_ctrl_obj);
+	if (IS_ERR(fw_trace->tracebuf_ctrl)) {
+		drm_err(drm_dev, "Unable to allocate trace buffer control structure\n");
+		err = PTR_ERR(fw_trace->tracebuf_ctrl);
+		goto err_free_buf;
+	}
+
+	BUILD_BUG_ON(ARRAY_SIZE(fw_trace->tracebuf_ctrl->tracebuf) !=
+		     ARRAY_SIZE(fw_trace->buffers));
+
+	for (thread_nr = 0; thread_nr < ARRAY_SIZE(fw_trace->buffers); thread_nr++) {
+		struct rogue_fwif_tracebuf_space *tracebuf_space =
+			&fw_trace->tracebuf_ctrl->tracebuf[thread_nr];
+		struct pvr_fw_trace_buffer *trace_buffer = &fw_trace->buffers[thread_nr];
+
+		trace_buffer->tracebuf_space = tracebuf_space;
+	}
+
+	return 0;
+
+err_free_buf:
+	for (thread_nr = 0; thread_nr < ARRAY_SIZE(fw_trace->buffers); thread_nr++) {
+		struct pvr_fw_trace_buffer *trace_buffer = &fw_trace->buffers[thread_nr];
+
+		if (trace_buffer->buf)
+			pvr_fw_object_unmap_and_destroy(trace_buffer->buf_obj);
+	}
+
+	return err;
+}
+
+void pvr_fw_trace_fini(struct pvr_device *pvr_dev)
+{
+	struct pvr_fw_trace *fw_trace = &pvr_dev->fw_dev.fw_trace;
+	u32 thread_nr;
+
+	for (thread_nr = 0; thread_nr < ARRAY_SIZE(fw_trace->buffers); thread_nr++) {
+		struct pvr_fw_trace_buffer *trace_buffer = &fw_trace->buffers[thread_nr];
+
+		pvr_fw_object_unmap_and_destroy(trace_buffer->buf_obj);
+	}
+	pvr_fw_object_unmap_and_destroy(fw_trace->tracebuf_ctrl_obj);
+}
diff --git a/drivers/gpu/drm/imagination/pvr_fw_trace.h b/drivers/gpu/drm/imagination/pvr_fw_trace.h
new file mode 100644
index 000000000000..e637bb9de2b4
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_fw_trace.h
@@ -0,0 +1,78 @@
+/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#ifndef PVR_FW_TRACE_H
+#define PVR_FW_TRACE_H
+
+#include <drm/drm_file.h>
+#include <linux/types.h>
+
+#include "pvr_rogue_fwif.h"
+
+/* Forward declaration from pvr_device.h. */
+struct pvr_device;
+
+/* Forward declaration from pvr_gem.h. */
+struct pvr_fw_object;
+
+/* Forward declarations from pvr_rogue_fwif.h */
+struct rogue_fwif_tracebuf;
+struct rogue_fwif_tracebuf_space;
+
+/**
+ * struct pvr_fw_trace_buffer - Structure representing a trace buffer
+ */
+struct pvr_fw_trace_buffer {
+	/** @buf_obj: FW buffer object representing trace buffer. */
+	struct pvr_fw_object *buf_obj;
+
+	/** @buf: Pointer to CPU mapping of trace buffer. */
+	u32 *buf;
+
+	/**
+	 * @tracebuf_space: Pointer to FW tracebuf_space structure for this
+	 *                  trace buffer.
+	 */
+	struct rogue_fwif_tracebuf_space *tracebuf_space;
+};
+
+/**
+ * struct pvr_fw_trace - Device firmware trace data
+ */
+struct pvr_fw_trace {
+	/**
+	 * @tracebuf_ctrl_obj: Object representing FW trace buffer control
+	 *                     structure.
+	 */
+	struct pvr_fw_object *tracebuf_ctrl_obj;
+
+	/**
+	 * @tracebuf_ctrl: Pointer to CPU mapping of FW trace buffer control
+	 *                 structure.
+	 */
+	struct rogue_fwif_tracebuf *tracebuf_ctrl;
+
+	/**
+	 * @buffers: Array representing the actual trace buffers owned by this
+	 *           device.
+	 */
+	struct pvr_fw_trace_buffer buffers[ROGUE_FW_THREAD_MAX];
+
+	/** @group_mask: Mask of enabled trace groups. */
+	u32 group_mask;
+};
+
+int pvr_fw_trace_init(struct pvr_device *pvr_dev);
+void pvr_fw_trace_fini(struct pvr_device *pvr_dev);
+
+#if defined(CONFIG_DEBUG_FS)
+/* Forward declaration from <linux/dcache.h>. */
+struct dentry;
+
+void pvr_fw_trace_mask_update(struct pvr_device *pvr_dev, u32 old_mask,
+			      u32 new_mask);
+
+void pvr_fw_trace_debugfs_init(struct pvr_device *pvr_dev, struct dentry *dir);
+#endif /* defined(CONFIG_DEBUG_FS) */
+
+#endif /* PVR_FW_TRACE_H */
diff --git a/drivers/gpu/drm/imagination/pvr_gem.h b/drivers/gpu/drm/imagination/pvr_gem.h
index 5b82cd67d83c..11b7692dae56 100644
--- a/drivers/gpu/drm/imagination/pvr_gem.h
+++ b/drivers/gpu/drm/imagination/pvr_gem.h
@@ -59,6 +59,13 @@ struct pvr_file;
  *    :CACHED: By default, all GEM objects are mapped write-combined on the
  *       CPU. Set this flag to override this behaviour and map the object
  *       cached.
+ *
+ * Firmware options
+ *    These use the prefix PVR_BO_FW_.
+ *
+ *    :NO_CLEAR_ON_RESET: By default, all FW objects are cleared and
+ *       reinitialised on hard reset. Set this flag to override this behaviour
+ *       and preserve buffer contents on reset.
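+ *       (The firmware trace buffers created in pvr_fw_trace.c are one
+ *       example of objects allocated with this flag.)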
  */
 #define PVR_BO_CPU_CACHED BIT_ULL(63)
 
diff --git a/drivers/gpu/drm/imagination/pvr_mmu.c b/drivers/gpu/drm/imagination/pvr_mmu.c
index 3b48e1f77d09..bee357588ade 100644
--- a/drivers/gpu/drm/imagination/pvr_mmu.c
+++ b/drivers/gpu/drm/imagination/pvr_mmu.c
@@ -3,6 +3,7 @@
 
 #include "pvr_mmu.h"
 
+#include "pvr_ccb.h"
 #include "pvr_device.h"
 #include "pvr_fw.h"
 #include "pvr_gem.h"
@@ -71,8 +72,43 @@
 int
 pvr_mmu_flush(struct pvr_device *pvr_dev)
 {
-	/* TODO: implement */
-	return -ENODEV;
+	struct rogue_fwif_kccb_cmd cmd_mmu_cache;
+	struct rogue_fwif_mmucachedata *cmd_mmu_cache_data =
+		&cmd_mmu_cache.cmd_data.mmu_cache_data;
+	u32 slot;
+	int idx;
+	int err;
+
+	if (!drm_dev_enter(from_pvr_device(pvr_dev), &idx))
+		return -EIO;
+
+	/* Can't flush MMU if the firmware hasn't booted yet. */
+	if (!pvr_dev->fw_dev.booted) {
+		err = 0;
+		goto err_drm_dev_exit;
+	}
+
+	cmd_mmu_cache.cmd_type = ROGUE_FWIF_KCCB_CMD_MMUCACHE;
+	/* Request a complete MMU flush, across all pagetable levels, TLBs and contexts. */
+	cmd_mmu_cache_data->cache_flags = ROGUE_FWIF_MMUCACHEDATA_FLAGS_PT |
+					  ROGUE_FWIF_MMUCACHEDATA_FLAGS_PD |
+					  ROGUE_FWIF_MMUCACHEDATA_FLAGS_PC |
+					  ROGUE_FWIF_MMUCACHEDATA_FLAGS_TLB |
+					  ROGUE_FWIF_MMUCACHEDATA_FLAGS_INTERRUPT;
+	pvr_fw_object_get_fw_addr(pvr_dev->fw_dev.mem.mmucache_sync_obj,
+				  &cmd_mmu_cache_data->mmu_cache_sync_fw_addr);
+	cmd_mmu_cache_data->mmu_cache_sync_update_value = 0;
+
+	err = pvr_kccb_send_cmd(pvr_dev, &cmd_mmu_cache, &slot);
+	if (err)
+		goto err_drm_dev_exit;
+
+	err = pvr_kccb_wait_for_completion(pvr_dev, slot, HZ, NULL);
+
+err_drm_dev_exit:
+	drm_dev_exit(idx);
+
+	return err;
 }
 
 /**
diff --git a/drivers/gpu/drm/imagination/pvr_power.c b/drivers/gpu/drm/imagination/pvr_power.c
index a494fed92e81..90dfdf22044c 100644
--- a/drivers/gpu/drm/imagination/pvr_power.c
+++ b/drivers/gpu/drm/imagination/pvr_power.c
@@ -3,6 +3,7 @@
 
 #include "pvr_device.h"
 #include "pvr_fw.h"
+#include "pvr_fw_startstop.h"
 #include "pvr_power.h"
 #include "pvr_rogue_fwif.h"
 
@@ -21,11 +22,32 @@
 
 #define WATCHDOG_TIME_MS (500)
 
+static void
+pvr_device_lost(struct pvr_device *pvr_dev)
+{
+	if (!pvr_dev->lost) {
+		pvr_dev->lost = true;
+		drm_dev_unplug(from_pvr_device(pvr_dev));
+	}
+}
+
 static int
 pvr_power_send_command(struct pvr_device *pvr_dev, struct rogue_fwif_kccb_cmd *pow_cmd)
 {
-	/* TODO: implement */
-	return -ENODEV;
+	struct pvr_fw_device *fw_dev = &pvr_dev->fw_dev;
+	u32 slot_nr;
+	u32 value;
+	int err;
+
+	WRITE_ONCE(*fw_dev->power_sync, 0);
+
+	err = pvr_kccb_send_cmd_powered(pvr_dev, pow_cmd, &slot_nr);
+	if (err)
+		return err;
+
+	/* Wait for FW to acknowledge. */
+	return readl_poll_timeout(pvr_dev->fw_dev.power_sync, value, value != 0, 100,
+				  POWER_SYNC_TIMEOUT_US);
 }
 
 static int
@@ -71,8 +93,7 @@ pvr_power_fw_disable(struct pvr_device *pvr_dev, bool hard_reset)
 			return err;
 	}
 
-	/* TODO: stop firmware */
-	return -ENODEV;
+	return pvr_fw_stop(pvr_dev);
 }
 
 static int
@@ -80,11 +101,17 @@ pvr_power_fw_enable(struct pvr_device *pvr_dev)
 {
 	int err;
 
-	/* TODO: start firmware */
-	err = -ENODEV;
+	err = pvr_fw_start(pvr_dev);
 	if (err)
 		return err;
 
+	err = pvr_wait_for_fw_boot(pvr_dev);
+	if (err) {
+		drm_err(from_pvr_device(pvr_dev), "Firmware failed to boot\n");
+		pvr_fw_stop(pvr_dev);
+		return err;
+	}
+
 	queue_delayed_work(pvr_dev->sched_wq, &pvr_dev->watchdog.work,
 			   msecs_to_jiffies(WATCHDOG_TIME_MS));
 
@@ -94,14 +121,39 @@ pvr_power_fw_enable(struct pvr_device *pvr_dev)
 bool
 pvr_power_is_idle(struct pvr_device *pvr_dev)
 {
-	/* TODO: implement */
-	return true;
+	/*
+	 * FW power state can be out of date if a KCCB command has been submitted but the FW hasn't
+	 * started processing it yet. So also check the KCCB status.
+	 */
+	enum rogue_fwif_pow_state pow_state = READ_ONCE(pvr_dev->fw_dev.fwif_sysdata->pow_state);
+	bool kccb_idle = pvr_kccb_is_idle(pvr_dev);
+
+	return (pow_state == ROGUE_FWIF_POW_IDLE) && kccb_idle;
 }
 
 static bool
 pvr_watchdog_kccb_stalled(struct pvr_device *pvr_dev)
 {
-	/* TODO: implement */
+	/* Check KCCB commands are progressing. */
+	u32 kccb_cmds_executed = pvr_dev->fw_dev.fwif_osdata->kccb_cmds_executed;
+	bool kccb_is_idle = pvr_kccb_is_idle(pvr_dev);
+
+	if (pvr_dev->watchdog.old_kccb_cmds_executed == kccb_cmds_executed && !kccb_is_idle) {
+		pvr_dev->watchdog.kccb_stall_count++;
+
+		/*
+		 * If we have commands pending with no progress for 2 consecutive polls then
+		 * consider KCCB command processing stalled.
+		 */
+		if (pvr_dev->watchdog.kccb_stall_count == 2) {
+			pvr_dev->watchdog.kccb_stall_count = 0;
+			return true;
+		}
+	} else {
+		pvr_dev->watchdog.old_kccb_cmds_executed = kccb_cmds_executed;
+		pvr_dev->watchdog.kccb_stall_count = 0;
+	}
+
 	return false;
 }
 
@@ -118,6 +170,9 @@ pvr_watchdog_worker(struct work_struct *work)
 	if (pm_runtime_get_if_in_use(from_pvr_device(pvr_dev)->dev) <= 0)
 		goto out_requeue;
 
+	if (!pvr_dev->fw_dev.booted)
+		goto out_pm_runtime_put;
+
 	stalled = pvr_watchdog_kccb_stalled(pvr_dev);
 
 	if (stalled) {
@@ -127,6 +182,7 @@ pvr_watchdog_worker(struct work_struct *work)
 		/* Device may be lost at this point. */
 	}
 
+out_pm_runtime_put:
 	pm_runtime_put(from_pvr_device(pvr_dev)->dev);
 
 out_requeue:
@@ -158,18 +214,26 @@ pvr_power_device_suspend(struct device *dev)
 	struct platform_device *plat_dev = to_platform_device(dev);
 	struct drm_device *drm_dev = platform_get_drvdata(plat_dev);
 	struct pvr_device *pvr_dev = to_pvr_device(drm_dev);
+	int err = 0;
 	int idx;
 
 	if (!drm_dev_enter(drm_dev, &idx))
 		return -EIO;
 
+	if (pvr_dev->fw_dev.booted) {
+		err = pvr_power_fw_disable(pvr_dev, false);
+		if (err)
+			goto err_drm_dev_exit;
+	}
+
 	clk_disable_unprepare(pvr_dev->mem_clk);
 	clk_disable_unprepare(pvr_dev->sys_clk);
 	clk_disable_unprepare(pvr_dev->core_clk);
 
+err_drm_dev_exit:
 	drm_dev_exit(idx);
 
-	return 0;
+	return err;
 }
 
 int
@@ -196,10 +260,19 @@ pvr_power_device_resume(struct device *dev)
 	if (err)
 		goto err_sys_clk_disable;
 
+	if (pvr_dev->fw_dev.booted) {
+		err = pvr_power_fw_enable(pvr_dev);
+		if (err)
+			goto err_mem_clk_disable;
+	}
+
 	drm_dev_exit(idx);
 
 	return 0;
 
+err_mem_clk_disable:
+	clk_disable_unprepare(pvr_dev->mem_clk);
+
 err_sys_clk_disable:
 	clk_disable_unprepare(pvr_dev->sys_clk);
 
@@ -239,7 +312,6 @@ pvr_power_device_idle(struct device *dev)
 int
 pvr_power_reset(struct pvr_device *pvr_dev, bool hard_reset)
 {
-	/* TODO: Implement hard reset. */
 	int err;
 
 	/*
@@ -248,13 +320,63 @@ pvr_power_reset(struct pvr_device *pvr_dev, bool hard_reset)
 	 */
 	WARN_ON(pvr_power_get(pvr_dev));
 
-	err = pvr_power_fw_disable(pvr_dev, false);
-	if (err)
-		goto err_power_put;
+	down_write(&pvr_dev->reset_sem);
+
+	/* Disable IRQs for the duration of the reset. */
+	disable_irq(pvr_dev->irq);
+
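+	/*
+	 * Attempt the reset as requested. If a soft reset fails, retry with a
+	 * hard reset; if a hard reset fails, the device is considered lost.
+	 */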
+	do {
+		err = pvr_power_fw_disable(pvr_dev, hard_reset);
+		if (!err) {
+			if (hard_reset) {
+				pvr_dev->fw_dev.booted = false;
+				WARN_ON(pm_runtime_force_suspend(from_pvr_device(pvr_dev)->dev));
+
+				err = pvr_fw_hard_reset(pvr_dev);
+				if (err)
+					goto err_device_lost;
+
+				err = pm_runtime_force_resume(from_pvr_device(pvr_dev)->dev);
+				pvr_dev->fw_dev.booted = true;
+				if (err)
+					goto err_device_lost;
+			} else {
+				/* Clear the FW faulted flags. */
+				pvr_dev->fw_dev.fwif_sysdata->hwr_state_flags &=
+					~(ROGUE_FWIF_HWR_FW_FAULT |
+					  ROGUE_FWIF_HWR_RESTART_REQUESTED);
+			}
+
+			pvr_fw_irq_clear(pvr_dev);
+
+			err = pvr_power_fw_enable(pvr_dev);
+		}
+
+		if (err && hard_reset)
+			goto err_device_lost;
+
+		if (err && !hard_reset) {
+			drm_err(from_pvr_device(pvr_dev), "FW stalled, trying hard reset");
+			hard_reset = true;
+		}
+	} while (err);
+
+	enable_irq(pvr_dev->irq);
+
+	up_write(&pvr_dev->reset_sem);
+
+	pvr_power_put(pvr_dev);
+
+	return 0;
+
+err_device_lost:
+	drm_err(from_pvr_device(pvr_dev), "GPU device lost");
+	pvr_device_lost(pvr_dev);
+
+	/* Leave IRQs disabled if the device is lost. */
 
-	err = pvr_power_fw_enable(pvr_dev);
+	up_write(&pvr_dev->reset_sem);
 
-err_power_put:
 	pvr_power_put(pvr_dev);
 
 	return err;
diff --git a/drivers/gpu/drm/imagination/pvr_vm.c b/drivers/gpu/drm/imagination/pvr_vm.c
index 616fad3a3325..047844eba416 100644
--- a/drivers/gpu/drm/imagination/pvr_vm.c
+++ b/drivers/gpu/drm/imagination/pvr_vm.c
@@ -309,6 +309,16 @@ static const struct drm_gpuva_fn_ops pvr_vm_gpuva_ops = {
 	.sm_step_unmap = pvr_vm_gpuva_unmap,
 };
 
+static void
+fw_mem_context_init(void *cpu_ptr, void *priv)
+{
+	struct rogue_fwif_fwmemcontext *fw_mem_ctx = cpu_ptr;
+	struct pvr_vm_context *vm_ctx = priv;
+
+	fw_mem_ctx->pc_dev_paddr = pvr_vm_get_page_table_root_addr(vm_ctx);
+	fw_mem_ctx->page_cat_base_reg_set = ROGUE_FW_BIF_INVALID_PCSET;
+}
+
 /**
  * pvr_vm_create_context() - Create a new VM context.
  * @pvr_dev: Target PowerVR device.
@@ -368,13 +378,19 @@ pvr_vm_create_context(struct pvr_device *pvr_dev, bool is_userspace_context)
 	}
 
 	if (is_userspace_context) {
-		/* TODO: Create FW mem context */
-		err = -ENODEV;
-		goto err_put_ctx;
+		err = pvr_fw_object_create(pvr_dev, sizeof(struct rogue_fwif_fwmemcontext),
+					   PVR_BO_FW_FLAGS_DEVICE_UNCACHED,
+					   fw_mem_context_init, vm_ctx, &vm_ctx->fw_mem_ctx_obj);
+
+		if (err)
+			goto err_page_table_destroy;
 	}
 
 	return vm_ctx;
 
+err_page_table_destroy:
+	pvr_mmu_context_destroy(vm_ctx->mmu_ctx);
+
 err_put_ctx:
 	pvr_vm_context_put(vm_ctx);
 
@@ -394,8 +410,8 @@ pvr_vm_context_release(struct kref *ref_count)
 	struct pvr_vm_context *vm_ctx =
 		container_of(ref_count, struct pvr_vm_context, ref_count);
 
-	/* TODO: Destroy FW mem context */
-	WARN_ON(vm_ctx->fw_mem_ctx_obj);
+	if (vm_ctx->fw_mem_ctx_obj)
+		pvr_fw_object_destroy(vm_ctx->fw_mem_ctx_obj);
 
 	WARN_ON(pvr_vm_unmap(vm_ctx, vm_ctx->gpuva_mgr.mm_start,
 			     vm_ctx->gpuva_mgr.mm_range));
-- 
2.41.0


^ permalink raw reply related	[flat|nested] 77+ messages in thread

* [PATCH v5 10/17] drm/imagination: Implement firmware infrastructure and META FW support
@ 2023-08-16  8:25   ` Sarah Walker
  0 siblings, 0 replies; 77+ messages in thread
From: Sarah Walker @ 2023-08-16  8:25 UTC (permalink / raw)
  To: dri-devel
  Cc: matthew.brost, luben.tuikov, tzimmermann, linux-kernel, mripard,
	afd, boris.brezillon, dakr, donald.robson, hns, christian.koenig,
	faith.ekstrand

The infrastructure includes parsing of the firmware image, initialising
FW-side structures, handling the kernel and firmware command ringbuffers,
and starting and stopping the firmware processor.

This patch also adds the necessary support code for the META firmware
processor.
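
As an illustration of how the new kernel CCB (KCCB) interface is used (a
sketch only, mirroring the MMU flush path added by this patch rather than
extra driver code; it assumes a pvr_dev pointer is in scope), a caller fills
in a rogue_fwif_kccb_cmd, queues it, and waits for the firmware to mark the
slot as executed:

  struct rogue_fwif_kccb_cmd cmd = {};
  u32 slot;
  int err;

  cmd.cmd_type = ROGUE_FWIF_KCCB_CMD_MMUCACHE;
  /* ... fill in cmd.cmd_data for the chosen command type ... */

  err = pvr_kccb_send_cmd(pvr_dev, &cmd, &slot); /* takes a PM reference */
  if (!err)
          err = pvr_kccb_wait_for_completion(pvr_dev, slot, HZ, NULL);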

Changes since v4:
- Remove use of drm_gem_shmem_get_pages()
- Remove interrupt resource name

Changes since v3:
- Hard reset FW processor on watchdog timeout
- Switch to threaded IRQ
- Rework FW object creation/initialisation to aid hard reset
- Added MODULE_FIRMWARE()
- Use drm_dev_{enter,exit}

Signed-off-by: Sarah Walker <sarah.walker@imgtec.com>
---
 drivers/gpu/drm/imagination/Makefile          |    4 +
 drivers/gpu/drm/imagination/pvr_ccb.c         |  631 ++++++++
 drivers/gpu/drm/imagination/pvr_ccb.h         |   71 +
 drivers/gpu/drm/imagination/pvr_device.c      |  100 ++
 drivers/gpu/drm/imagination/pvr_device.h      |   60 +
 drivers/gpu/drm/imagination/pvr_drv.c         |    1 +
 drivers/gpu/drm/imagination/pvr_fw.c          | 1323 +++++++++++++++++
 drivers/gpu/drm/imagination/pvr_fw.h          |  474 ++++++
 drivers/gpu/drm/imagination/pvr_fw_meta.c     |  554 +++++++
 drivers/gpu/drm/imagination/pvr_fw_meta.h     |   14 +
 .../gpu/drm/imagination/pvr_fw_startstop.c    |  301 ++++
 .../gpu/drm/imagination/pvr_fw_startstop.h    |   13 +
 drivers/gpu/drm/imagination/pvr_fw_trace.c    |  121 ++
 drivers/gpu/drm/imagination/pvr_fw_trace.h    |   78 +
 drivers/gpu/drm/imagination/pvr_gem.h         |    7 +
 drivers/gpu/drm/imagination/pvr_mmu.c         |   40 +-
 drivers/gpu/drm/imagination/pvr_power.c       |  154 +-
 drivers/gpu/drm/imagination/pvr_vm.c          |   26 +-
 18 files changed, 3949 insertions(+), 23 deletions(-)
 create mode 100644 drivers/gpu/drm/imagination/pvr_ccb.c
 create mode 100644 drivers/gpu/drm/imagination/pvr_ccb.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_fw_meta.c
 create mode 100644 drivers/gpu/drm/imagination/pvr_fw_meta.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_fw_startstop.c
 create mode 100644 drivers/gpu/drm/imagination/pvr_fw_startstop.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_fw_trace.c
 create mode 100644 drivers/gpu/drm/imagination/pvr_fw_trace.h

diff --git a/drivers/gpu/drm/imagination/Makefile b/drivers/gpu/drm/imagination/Makefile
index 235e2d329e29..5b02440841be 100644
--- a/drivers/gpu/drm/imagination/Makefile
+++ b/drivers/gpu/drm/imagination/Makefile
@@ -4,10 +4,14 @@
 subdir-ccflags-y := -I$(srctree)/$(src)
 
 powervr-y := \
+	pvr_ccb.o \
 	pvr_device.o \
 	pvr_device_info.o \
 	pvr_drv.o \
 	pvr_fw.o \
+	pvr_fw_meta.o \
+	pvr_fw_startstop.o \
+	pvr_fw_trace.o \
 	pvr_gem.o \
 	pvr_mmu.o \
 	pvr_power.o \
diff --git a/drivers/gpu/drm/imagination/pvr_ccb.c b/drivers/gpu/drm/imagination/pvr_ccb.c
new file mode 100644
index 000000000000..1dfcce963e67
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_ccb.c
@@ -0,0 +1,631 @@
+// SPDX-License-Identifier: GPL-2.0 OR MIT
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#include "pvr_ccb.h"
+#include "pvr_device.h"
+#include "pvr_drv.h"
+#include "pvr_fw.h"
+#include "pvr_gem.h"
+#include "pvr_power.h"
+
+#include <drm/drm_managed.h>
+#include <linux/compiler.h>
+#include <linux/delay.h>
+#include <linux/jiffies.h>
+#include <linux/kernel.h>
+#include <linux/mutex.h>
+#include <linux/types.h>
+#include <linux/workqueue.h>
+
+#define RESERVE_SLOT_TIMEOUT (1 * HZ) /* 1s */
+
+static void
+ccb_ctrl_init(void *cpu_ptr, void *priv)
+{
+	struct rogue_fwif_ccb_ctl *ctrl = cpu_ptr;
+	struct pvr_ccb *pvr_ccb = priv;
+
+	ctrl->write_offset = 0;
+	ctrl->read_offset = 0;
+	ctrl->wrap_mask = pvr_ccb->num_cmds - 1;
+	ctrl->cmd_size = pvr_ccb->cmd_size;
+}
+
+/**
+ * pvr_ccb_init() - Initialise a CCB
+ * @pvr_dev: Device pointer.
+ * @pvr_ccb: Pointer to CCB structure to initialise.
+ * @num_cmds_log2: Log2 of number of commands in this CCB.
+ * @cmd_size: Command size for this CCB.
+ *
+ * Return:
+ *  * Zero on success, or
+ *  * Any error code returned by pvr_fw_object_create_and_map().
+ */
+static int
+pvr_ccb_init(struct pvr_device *pvr_dev, struct pvr_ccb *pvr_ccb,
+	     u32 num_cmds_log2, size_t cmd_size)
+{
+	u32 num_cmds = 1 << num_cmds_log2;
+	u32 ccb_size = num_cmds * cmd_size;
+	int err;
+
+	pvr_ccb->num_cmds = num_cmds;
+	pvr_ccb->cmd_size = cmd_size;
+
+	err = drmm_mutex_init(from_pvr_device(pvr_dev), &pvr_ccb->lock);
+	if (err)
+		return err;
+
+	/*
+	 * Map CCB and control structure as uncached, so we don't have to flush
+	 * CPU cache repeatedly when polling for space.
+	 */
+	pvr_ccb->ctrl = pvr_fw_object_create_and_map(pvr_dev, sizeof(*pvr_ccb->ctrl),
+						     PVR_BO_FW_FLAGS_DEVICE_UNCACHED,
+						     ccb_ctrl_init, pvr_ccb, &pvr_ccb->ctrl_obj);
+	if (IS_ERR(pvr_ccb->ctrl))
+		return PTR_ERR(pvr_ccb->ctrl);
+
+	pvr_ccb->ccb = pvr_fw_object_create_and_map(pvr_dev, ccb_size,
+						    PVR_BO_FW_FLAGS_DEVICE_UNCACHED,
+						    NULL, NULL, &pvr_ccb->ccb_obj);
+	if (IS_ERR(pvr_ccb->ccb)) {
+		err = PTR_ERR(pvr_ccb->ccb);
+		goto err_free_ctrl;
+	}
+
+	pvr_fw_object_get_fw_addr(pvr_ccb->ctrl_obj, &pvr_ccb->ctrl_fw_addr);
+	pvr_fw_object_get_fw_addr(pvr_ccb->ccb_obj, &pvr_ccb->ccb_fw_addr);
+
+	WRITE_ONCE(pvr_ccb->ctrl->write_offset, 0);
+	WRITE_ONCE(pvr_ccb->ctrl->read_offset, 0);
+	WRITE_ONCE(pvr_ccb->ctrl->wrap_mask, num_cmds - 1);
+	WRITE_ONCE(pvr_ccb->ctrl->cmd_size, cmd_size);
+
+	return 0;
+
+err_free_ctrl:
+	pvr_fw_object_unmap_and_destroy(pvr_ccb->ctrl_obj);
+
+	return err;
+}
+
+/**
+ * pvr_ccb_fini() - Release CCB structure
+ * @pvr_ccb: CCB to release.
+ */
+void
+pvr_ccb_fini(struct pvr_ccb *pvr_ccb)
+{
+	pvr_fw_object_unmap_and_destroy(pvr_ccb->ccb_obj);
+	pvr_fw_object_unmap_and_destroy(pvr_ccb->ctrl_obj);
+}
+
+/**
+ * pvr_ccb_slot_available_locked() - Test whether any slots are available in CCB
+ * @pvr_ccb: CCB to test.
+ * @write_offset: Address to store the offset of the next available slot. May be %NULL.
+ *
+ * Caller must hold @pvr_ccb->lock.
+ *
+ * Return:
+ *  * %true if a slot is available, or
+ *  * %false if no slot is available.
+ */
+static __always_inline bool
+pvr_ccb_slot_available_locked(struct pvr_ccb *pvr_ccb, u32 *write_offset)
+{
+	struct rogue_fwif_ccb_ctl *ctrl = pvr_ccb->ctrl;
+	u32 next_write_offset = (READ_ONCE(ctrl->write_offset) + 1) & READ_ONCE(ctrl->wrap_mask);
+
+	lockdep_assert_held(&pvr_ccb->lock);
+
+	if (READ_ONCE(ctrl->read_offset) != next_write_offset) {
+		if (write_offset)
+			*write_offset = next_write_offset;
+		return true;
+	}
+
+	return false;
+}
+
+static void
+process_fwccb_command(struct pvr_device *pvr_dev, struct rogue_fwif_fwccb_cmd *cmd)
+{
+	switch (cmd->cmd_type) {
+	case ROGUE_FWIF_FWCCB_CMD_REQUEST_GPU_RESTART:
+		pvr_power_reset(pvr_dev, false);
+		break;
+
+	default:
+		drm_info(from_pvr_device(pvr_dev), "Received unknown FWCCB command %x\n",
+			 cmd->cmd_type);
+		break;
+	}
+}
+
+/**
+ * pvr_fwccb_process() - Process any pending FWCCB commands
+ * @pvr_dev: Target PowerVR device.
+ *
+ * GPU restart requests from the firmware are acted upon here; any other FWCCB
+ * command is logged and otherwise ignored.
+ */
+void pvr_fwccb_process(struct pvr_device *pvr_dev)
+{
+	struct rogue_fwif_fwccb_cmd *fwccb = pvr_dev->fwccb.ccb;
+	struct rogue_fwif_ccb_ctl *ctrl = pvr_dev->fwccb.ctrl;
+	u32 read_offset;
+
+	mutex_lock(&pvr_dev->fwccb.lock);
+
+	while ((read_offset = READ_ONCE(ctrl->read_offset)) != READ_ONCE(ctrl->write_offset)) {
+		struct rogue_fwif_fwccb_cmd cmd = fwccb[read_offset];
+
+		WRITE_ONCE(ctrl->read_offset, (read_offset + 1) & READ_ONCE(ctrl->wrap_mask));
+
+		/* Drop FWCCB lock while we process command. */
+		mutex_unlock(&pvr_dev->fwccb.lock);
+
+		process_fwccb_command(pvr_dev, &cmd);
+
+		mutex_lock(&pvr_dev->fwccb.lock);
+	}
+
+	mutex_unlock(&pvr_dev->fwccb.lock);
+}
+
+/**
+ * pvr_kccb_capacity() - Returns the maximum number of usable KCCB slots.
+ * @pvr_dev: Target PowerVR device
+ *
+ * Return:
+ *  * The maximum number of active slots.
+ */
+static u32 pvr_kccb_capacity(struct pvr_device *pvr_dev)
+{
+	/* Capacity is the number of slots minus one to cope with the wrapping
+	 * mechanism. If we were to use all slots, we might end up with
+	 * read_offset == write_offset, which the FW considers as a KCCB-is-empty
+	 * condition.
+	 */
+	return pvr_dev->kccb.slot_count - 1;
+}
+
+/**
+ * pvr_kccb_used_slot_count_locked() - Get the number of used slots
+ * @pvr_dev: Device pointer.
+ *
+ * KCCB lock must be held.
+ *
+ * Return:
+ *  * The number of slots currently used.
+ */
+static u32
+pvr_kccb_used_slot_count_locked(struct pvr_device *pvr_dev)
+{
+	struct pvr_ccb *pvr_ccb = &pvr_dev->kccb.ccb;
+	struct rogue_fwif_ccb_ctl *ctrl = pvr_ccb->ctrl;
+	u32 wr_offset = READ_ONCE(ctrl->write_offset);
+	u32 rd_offset = READ_ONCE(ctrl->read_offset);
+	u32 used_count;
+
+	lockdep_assert_held(&pvr_ccb->lock);
+
+	if (wr_offset >= rd_offset)
+		used_count = wr_offset - rd_offset;
+	else
+		used_count = wr_offset + pvr_dev->kccb.slot_count - rd_offset;
+
+	return used_count;
+}
+
+/**
+ * pvr_kccb_send_cmd_reserved_powered() - Send command to the KCCB, with the PM ref
+ * held and a slot pre-reserved
+ * @pvr_dev: Device pointer.
+ * @cmd: Command to send.
+ * @kccb_slot: Address to store the KCCB slot for this command. May be %NULL.
+ */
+void
+pvr_kccb_send_cmd_reserved_powered(struct pvr_device *pvr_dev,
+				   struct rogue_fwif_kccb_cmd *cmd,
+				   u32 *kccb_slot)
+{
+	struct pvr_ccb *pvr_ccb = &pvr_dev->kccb.ccb;
+	struct rogue_fwif_kccb_cmd *kccb = pvr_ccb->ccb;
+	struct rogue_fwif_ccb_ctl *ctrl = pvr_ccb->ctrl;
+	u32 old_write_offset;
+	u32 new_write_offset;
+
+	WARN_ON(pvr_dev->lost);
+
+	mutex_lock(&pvr_ccb->lock);
+
+	if (WARN_ON(!pvr_dev->kccb.reserved_count))
+		goto out_unlock;
+
+	old_write_offset = READ_ONCE(ctrl->write_offset);
+
+	/* We reserved the slot, we should have one available. */
+	if (WARN_ON(!pvr_ccb_slot_available_locked(pvr_ccb, &new_write_offset)))
+		goto out_unlock;
+
+	memcpy(&kccb[old_write_offset], cmd,
+	       sizeof(struct rogue_fwif_kccb_cmd));
+	if (kccb_slot) {
+		*kccb_slot = old_write_offset;
+		/* Clear return status for this slot. */
+		WRITE_ONCE(pvr_dev->kccb.rtn[old_write_offset],
+			   ROGUE_FWIF_KCCB_RTN_SLOT_NO_RESPONSE);
+	}
+	mb(); /* memory barrier */
+	WRITE_ONCE(ctrl->write_offset, new_write_offset);
+	pvr_dev->kccb.reserved_count--;
+
+	/* Kick MTS */
+	pvr_fw_mts_schedule(pvr_dev,
+			    PVR_FWIF_DM_GP & ~ROGUE_CR_MTS_SCHEDULE_DM_CLRMSK);
+
+out_unlock:
+	mutex_unlock(&pvr_ccb->lock);
+}
+
+/**
+ * pvr_kccb_try_reserve_slot() - Try to reserve a KCCB slot
+ * @pvr_dev: Device pointer.
+ *
+ * Return:
+ *  * true if a KCCB slot was reserved, or
+ *  * false otherwise.
+ */
+static bool pvr_kccb_try_reserve_slot(struct pvr_device *pvr_dev)
+{
+	bool reserved = false;
+	u32 used_count;
+
+	mutex_lock(&pvr_dev->kccb.ccb.lock);
+
+	used_count = pvr_kccb_used_slot_count_locked(pvr_dev);
+	if (pvr_dev->kccb.reserved_count < pvr_kccb_capacity(pvr_dev) - used_count) {
+		pvr_dev->kccb.reserved_count++;
+		reserved = true;
+	}
+
+	mutex_unlock(&pvr_dev->kccb.ccb.lock);
+
+	return reserved;
+}
+
+/**
+ * pvr_kccb_reserve_slot_sync() - Try to reserve a slot synchronously
+ * @pvr_dev: Device pointer.
+ *
+ * Return:
+ *  * 0 on success, or
+ *  * -EBUSY if no slot could be reserved within %RESERVE_SLOT_TIMEOUT.
+ */
+static int pvr_kccb_reserve_slot_sync(struct pvr_device *pvr_dev)
+{
+	unsigned long start_timestamp = jiffies;
+	bool reserved = false;
+
+	while ((jiffies - start_timestamp) < (u32)RESERVE_SLOT_TIMEOUT) {
+		reserved = pvr_kccb_try_reserve_slot(pvr_dev);
+		if (reserved)
+			break;
+
+		usleep_range(1, 50);
+	}
+
+	return reserved ? 0 : -EBUSY;
+}
+
+/**
+ * pvr_kccb_send_cmd_powered() - Send command to the KCCB, with a PM ref held
+ * @pvr_dev: Device pointer.
+ * @cmd: Command to send.
+ * @kccb_slot: Address to store the KCCB slot for this command. May be %NULL.
+ *
+ * Returns:
+ *  * Zero on success, or
+ *  * -EBUSY if timeout while waiting for a free KCCB slot.
+ */
+int
+pvr_kccb_send_cmd_powered(struct pvr_device *pvr_dev, struct rogue_fwif_kccb_cmd *cmd,
+			  u32 *kccb_slot)
+{
+	int err;
+
+	err = pvr_kccb_reserve_slot_sync(pvr_dev);
+	if (err)
+		return err;
+
+	pvr_kccb_send_cmd_reserved_powered(pvr_dev, cmd, kccb_slot);
+	return 0;
+}
+
+/**
+ * pvr_kccb_send_cmd() - Send command to the KCCB
+ * @pvr_dev: Device pointer.
+ * @cmd: Command to send.
+ * @kccb_slot: Address to store the KCCB slot for this command. May be %NULL.
+ *
+ * Returns:
+ *  * Zero on success, or
+ *  * -EBUSY if timeout while waiting for a free KCCB slot.
+ */
+int
+pvr_kccb_send_cmd(struct pvr_device *pvr_dev, struct rogue_fwif_kccb_cmd *cmd,
+		  u32 *kccb_slot)
+{
+	int err;
+
+	err = pvr_power_get(pvr_dev);
+	if (err)
+		return err;
+
+	err = pvr_kccb_send_cmd_powered(pvr_dev, cmd, kccb_slot);
+
+	pvr_power_put(pvr_dev);
+
+	return err;
+}
+
+/**
+ * pvr_kccb_wait_for_completion() - Wait for a KCCB command to complete
+ * @pvr_dev: Device pointer.
+ * @slot_nr: KCCB slot to wait on.
+ * @timeout: Timeout length (in jiffies).
+ * @rtn_out: Location to store KCCB command result. May be %NULL.
+ *
+ * Returns:
+ *  * Zero on success, or
+ *  * -ETIMEDOUT on timeout.
+ */
+int
+pvr_kccb_wait_for_completion(struct pvr_device *pvr_dev, u32 slot_nr,
+			     u32 timeout, u32 *rtn_out)
+{
+	int ret = wait_event_timeout(pvr_dev->kccb.rtn_q, READ_ONCE(pvr_dev->kccb.rtn[slot_nr]) &
+				     ROGUE_FWIF_KCCB_RTN_SLOT_CMD_EXECUTED, timeout);
+
+	if (ret && rtn_out)
+		*rtn_out = READ_ONCE(pvr_dev->kccb.rtn[slot_nr]);
+
+	return ret ? 0 : -ETIMEDOUT;
+}
+
+/**
+ * pvr_kccb_is_idle() - Returns whether the device's KCCB is idle
+ * @pvr_dev: Device pointer
+ *
+ * Returns:
+ *  * %true if the KCCB is idle (contains no commands), or
+ *  * %false if the KCCB contains pending commands.
+ */
+bool
+pvr_kccb_is_idle(struct pvr_device *pvr_dev)
+{
+	struct rogue_fwif_ccb_ctl *ctrl = pvr_dev->kccb.ccb.ctrl;
+	bool idle;
+
+	mutex_lock(&pvr_dev->kccb.ccb.lock);
+
+	idle = (READ_ONCE(ctrl->write_offset) == READ_ONCE(ctrl->read_offset));
+
+	mutex_unlock(&pvr_dev->kccb.ccb.lock);
+
+	return idle;
+}
+
+static const char *
+pvr_kccb_fence_get_driver_name(struct dma_fence *f)
+{
+	return PVR_DRIVER_NAME;
+}
+
+static const char *
+pvr_kccb_fence_get_timeline_name(struct dma_fence *f)
+{
+	return "kccb";
+}
+
+static const struct dma_fence_ops pvr_kccb_fence_ops = {
+	.get_driver_name = pvr_kccb_fence_get_driver_name,
+	.get_timeline_name = pvr_kccb_fence_get_timeline_name,
+};
+
+/**
+ * struct pvr_kccb_fence - Fence object used to wait for a KCCB slot
+ */
+struct pvr_kccb_fence {
+	/** @base: Base dma_fence object. */
+	struct dma_fence base;
+
+	/** @node: Node used to insert the fence in the pvr_device::kccb::waiters list. */
+	struct list_head node;
+};
+
+/**
+ * pvr_kccb_wake_up_waiters() - Check the KCCB waiters
+ * @pvr_dev: Target PowerVR device
+ *
+ * Signal as many KCCB fences as we have slots available.
+ */
+void pvr_kccb_wake_up_waiters(struct pvr_device *pvr_dev)
+{
+	struct pvr_kccb_fence *fence, *tmp_fence;
+	u32 used_count, available_count;
+
+	/* Wake up those waiting for KCCB slot execution. */
+	wake_up_all(&pvr_dev->kccb.rtn_q);
+
+	/* Then iterate over all KCCB fences and signal as many as we can. */
+	mutex_lock(&pvr_dev->kccb.ccb.lock);
+	used_count = pvr_kccb_used_slot_count_locked(pvr_dev);
+
+	if (WARN_ON(used_count + pvr_dev->kccb.reserved_count > pvr_kccb_capacity(pvr_dev)))
+		goto out_unlock;
+
+	available_count = pvr_kccb_capacity(pvr_dev) - used_count - pvr_dev->kccb.reserved_count;
+	list_for_each_entry_safe(fence, tmp_fence, &pvr_dev->kccb.waiters, node) {
+		if (!available_count)
+			break;
+
+		list_del(&fence->node);
+		pvr_dev->kccb.reserved_count++;
+		available_count--;
+		dma_fence_signal(&fence->base);
+		dma_fence_put(&fence->base);
+	}
+
+out_unlock:
+	mutex_unlock(&pvr_dev->kccb.ccb.lock);
+}
+
+/**
+ * pvr_kccb_fini() - Cleanup device KCCB
+ * @pvr_dev: Target PowerVR device
+ */
+void pvr_kccb_fini(struct pvr_device *pvr_dev)
+{
+	pvr_ccb_fini(&pvr_dev->kccb.ccb);
+	WARN_ON(!list_empty(&pvr_dev->kccb.waiters));
+	WARN_ON(pvr_dev->kccb.reserved_count);
+}
+
+/**
+ * pvr_kccb_init() - Initialise device KCCB
+ * @pvr_dev: Target PowerVR device
+ *
+ * Returns:
+ *  * 0 on success, or
+ *  * Any error returned by pvr_ccb_init().
+ */
+int
+pvr_kccb_init(struct pvr_device *pvr_dev)
+{
+	pvr_dev->kccb.slot_count = 1 << ROGUE_FWIF_KCCB_NUMCMDS_LOG2_DEFAULT;
+	INIT_LIST_HEAD(&pvr_dev->kccb.waiters);
+	pvr_dev->kccb.fence_ctx.id = dma_fence_context_alloc(1);
+	spin_lock_init(&pvr_dev->kccb.fence_ctx.lock);
+
+	return pvr_ccb_init(pvr_dev, &pvr_dev->kccb.ccb,
+			    ROGUE_FWIF_KCCB_NUMCMDS_LOG2_DEFAULT,
+			    sizeof(struct rogue_fwif_kccb_cmd));
+}
+
+/**
+ * pvr_kccb_fence_alloc() - Allocate a pvr_kccb_fence object
+ *
+ * Return:
+ *  * NULL if the allocation fails, or
+ *  * A valid dma_fence pointer otherwise.
+ */
+struct dma_fence *pvr_kccb_fence_alloc(void)
+{
+	struct pvr_kccb_fence *kccb_fence;
+
+	kccb_fence = kzalloc(sizeof(*kccb_fence), GFP_KERNEL);
+	if (!kccb_fence)
+		return NULL;
+
+	return &kccb_fence->base;
+}
+
+/**
+ * pvr_kccb_fence_put() - Drop a KCCB fence reference
+ * @fence: The fence to drop the reference on.
+ *
+ * If the fence hasn't been initialized yet, dma_fence_free() is called. This
+ * way we have a single function taking care of both cases.
+ */
+void pvr_kccb_fence_put(struct dma_fence *fence)
+{
+	if (!fence)
+		return;
+
+	if (!fence->ops) {
+		dma_fence_free(fence);
+	} else {
+		WARN_ON(fence->ops != &pvr_kccb_fence_ops);
+		dma_fence_put(fence);
+	}
+}
+
+/**
+ * pvr_kccb_reserve_slot() - Reserve a KCCB slot for later use
+ * @pvr_dev: Target PowerVR device
+ * @f: KCCB fence object previously allocated with pvr_kccb_fence_alloc()
+ *
+ * Try to reserve a KCCB slot. If no slot is available, the fence object is
+ * initialised and queued to the waiters list.
+ *
+ * If NULL is returned, a slot has been reserved. In that case @f is freed and
+ * must not be accessed after this point.
+ *
+ * Return:
+ *  * NULL if a slot was available directly, or
+ *  * A valid dma_fence object to wait on if no slot was available.
+ */
+struct dma_fence *
+pvr_kccb_reserve_slot(struct pvr_device *pvr_dev, struct dma_fence *f)
+{
+	struct pvr_kccb_fence *fence = container_of(f, struct pvr_kccb_fence, base);
+	struct dma_fence *out_fence = NULL;
+	u32 used_count;
+
+	mutex_lock(&pvr_dev->kccb.ccb.lock);
+
+	used_count = pvr_kccb_used_slot_count_locked(pvr_dev);
+	if (pvr_dev->kccb.reserved_count >= pvr_kccb_capacity(pvr_dev) - used_count) {
+		dma_fence_init(&fence->base, &pvr_kccb_fence_ops,
+			       &pvr_dev->kccb.fence_ctx.lock,
+			       pvr_dev->kccb.fence_ctx.id,
+			       atomic_inc_return(&pvr_dev->kccb.fence_ctx.seqno));
+		out_fence = dma_fence_get(&fence->base);
+		list_add_tail(&fence->node, &pvr_dev->kccb.waiters);
+	} else {
+		pvr_kccb_fence_put(f);
+		pvr_dev->kccb.reserved_count++;
+	}
+
+	mutex_unlock(&pvr_dev->kccb.ccb.lock);
+
+	return out_fence;
+}
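+
+/*
+ * Illustrative sketch of the reservation flow described above (not called
+ * anywhere in this patch); "cmd" and "slot" are hypothetical locals of the
+ * caller, and a PM reference is assumed to be held when the reserved slot is
+ * consumed:
+ *
+ *	struct dma_fence *f = pvr_kccb_fence_alloc();
+ *	struct dma_fence *done;
+ *
+ *	if (!f)
+ *		return -ENOMEM;
+ *
+ *	done = pvr_kccb_reserve_slot(pvr_dev, f);
+ *	if (done) {
+ *		dma_fence_wait(done, false);
+ *		dma_fence_put(done);
+ *	}
+ *
+ *	pvr_kccb_send_cmd_reserved_powered(pvr_dev, &cmd, &slot);
+ *
+ * On an error before submission the caller would instead give the slot back
+ * with pvr_kccb_release_slot().
+ */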
+
+/**
+ * pvr_kccb_release_slot() - Release a KCCB slot reserved with
+ * pvr_kccb_reserve_slot()
+ * @pvr_dev: Target PowerVR device
+ *
+ * Should only be called if something failed after the
+ * pvr_kccb_reserve_slot() call and you know you won't call
+ * pvr_kccb_send_cmd_reserved_powered().
+ */
+void pvr_kccb_release_slot(struct pvr_device *pvr_dev)
+{
+	mutex_lock(&pvr_dev->kccb.ccb.lock);
+	if (!WARN_ON(!pvr_dev->kccb.reserved_count))
+		pvr_dev->kccb.reserved_count--;
+	mutex_unlock(&pvr_dev->kccb.ccb.lock);
+}
+
+/**
+ * pvr_fwccb_init() - Initialise device FWCCB
+ * @pvr_dev: Target PowerVR device
+ *
+ * Returns:
+ *  * 0 on success, or
+ *  * Any error returned by pvr_ccb_init().
+ */
+int
+pvr_fwccb_init(struct pvr_device *pvr_dev)
+{
+	return pvr_ccb_init(pvr_dev, &pvr_dev->fwccb,
+			    ROGUE_FWIF_FWCCB_NUMCMDS_LOG2,
+			    sizeof(struct rogue_fwif_fwccb_cmd));
+}
diff --git a/drivers/gpu/drm/imagination/pvr_ccb.h b/drivers/gpu/drm/imagination/pvr_ccb.h
new file mode 100644
index 000000000000..60ebaa7dca63
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_ccb.h
@@ -0,0 +1,71 @@
+/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#ifndef PVR_CCB_H
+#define PVR_CCB_H
+
+#include "pvr_rogue_fwif.h"
+
+#include <linux/mutex.h>
+#include <linux/types.h>
+
+/* Forward declaration from pvr_device.h. */
+struct pvr_device;
+
+/* Forward declaration from pvr_gem.h. */
+struct pvr_fw_object;
+
+struct pvr_ccb {
+	/** @ctrl_obj: FW object representing CCB control structure. */
+	struct pvr_fw_object *ctrl_obj;
+	/** @ccb_obj: FW object representing CCB. */
+	struct pvr_fw_object *ccb_obj;
+
+	/** @ctrl_fw_addr: FW virtual address of CCB control structure. */
+	u32 ctrl_fw_addr;
+	/** @ccb_fw_addr: FW virtual address of CCB. */
+	u32 ccb_fw_addr;
+
+	/** @num_cmds: Number of commands in this CCB. */
+	u32 num_cmds;
+
+	/** @cmd_size: Size of each command in this CCB, in bytes. */
+	u32 cmd_size;
+
+	/** @lock: Mutex protecting @ctrl and @ccb. */
+	struct mutex lock;
+	/**
+	 * @ctrl: Kernel mapping of CCB control structure. @lock must be held
+	 *        when accessing.
+	 */
+	struct rogue_fwif_ccb_ctl *ctrl;
+	/** @ccb: Kernel mapping of CCB. @lock must be held when accessing. */
+	void *ccb;
+};
+
+int pvr_kccb_init(struct pvr_device *pvr_dev);
+void pvr_kccb_fini(struct pvr_device *pvr_dev);
+int pvr_fwccb_init(struct pvr_device *pvr_dev);
+void pvr_ccb_fini(struct pvr_ccb *ccb);
+
+void pvr_fwccb_process(struct pvr_device *pvr_dev);
+
+struct dma_fence *pvr_kccb_fence_alloc(void);
+void pvr_kccb_fence_put(struct dma_fence *fence);
+struct dma_fence *
+pvr_kccb_reserve_slot(struct pvr_device *pvr_dev, struct dma_fence *f);
+void pvr_kccb_release_slot(struct pvr_device *pvr_dev);
+int pvr_kccb_send_cmd(struct pvr_device *pvr_dev,
+		      struct rogue_fwif_kccb_cmd *cmd, u32 *kccb_slot);
+int pvr_kccb_send_cmd_powered(struct pvr_device *pvr_dev,
+			      struct rogue_fwif_kccb_cmd *cmd,
+			      u32 *kccb_slot);
+void pvr_kccb_send_cmd_reserved_powered(struct pvr_device *pvr_dev,
+					struct rogue_fwif_kccb_cmd *cmd,
+					u32 *kccb_slot);
+int pvr_kccb_wait_for_completion(struct pvr_device *pvr_dev, u32 slot_nr, u32 timeout,
+				 u32 *rtn_out);
+bool pvr_kccb_is_idle(struct pvr_device *pvr_dev);
+void pvr_kccb_wake_up_waiters(struct pvr_device *pvr_dev);
+
+#endif /* PVR_CCB_H */
diff --git a/drivers/gpu/drm/imagination/pvr_device.c b/drivers/gpu/drm/imagination/pvr_device.c
index 5dbd05f21238..ad198ed432fe 100644
--- a/drivers/gpu/drm/imagination/pvr_device.c
+++ b/drivers/gpu/drm/imagination/pvr_device.c
@@ -114,6 +114,87 @@ static int pvr_device_clk_init(struct pvr_device *pvr_dev)
 	return 0;
 }
 
+static irqreturn_t pvr_device_irq_thread_handler(int irq, void *data)
+{
+	struct pvr_device *pvr_dev = data;
+	irqreturn_t ret = IRQ_NONE;
+
+	/* We are in the threaded handler, so we can keep dequeuing events
+	 * until we don't see any. This should reduce the number of interrupts
+	 * when the GPU is receiving a large number of short jobs.
+	 */
+	while (pvr_fw_irq_pending(pvr_dev)) {
+		pvr_fw_irq_clear(pvr_dev);
+
+		if (pvr_dev->fw_dev.booted) {
+			pvr_fwccb_process(pvr_dev);
+			pvr_kccb_wake_up_waiters(pvr_dev);
+		}
+
+		pm_runtime_mark_last_busy(from_pvr_device(pvr_dev)->dev);
+
+		ret = IRQ_HANDLED;
+	}
+
+	/* Unmask FW irqs before returning, so new interrupts can be received. */
+	pvr_fw_irq_enable(pvr_dev);
+	return ret;
+}
+
+static irqreturn_t pvr_device_irq_handler(int irq, void *data)
+{
+	struct pvr_device *pvr_dev = data;
+
+	if (!pvr_fw_irq_pending(pvr_dev))
+		return IRQ_NONE; /* Spurious IRQ - ignore. */
+
+	/* Mask the FW interrupts before waking up the thread. Will be unmasked
+	 * when the thread handler is done processing events.
+	 */
+	pvr_fw_irq_disable(pvr_dev);
+	return IRQ_WAKE_THREAD;
+}
+
+/**
+ * pvr_device_irq_init() - Initialise IRQ required by a PowerVR device
+ * @pvr_dev: Target PowerVR device.
+ *
+ * Returns:
+ *  * 0 on success,
+ *  * Any error returned by platform_get_irq(), or
+ *  * Any error returned by request_threaded_irq().
+ */
+static int
+pvr_device_irq_init(struct pvr_device *pvr_dev)
+{
+	struct drm_device *drm_dev = from_pvr_device(pvr_dev);
+	struct platform_device *plat_dev = to_platform_device(drm_dev->dev);
+
+	init_waitqueue_head(&pvr_dev->kccb.rtn_q);
+
+	pvr_dev->irq = platform_get_irq(plat_dev, 0);
+	if (pvr_dev->irq < 0)
+		return pvr_dev->irq;
+
+	/* Clear any pending events before requesting the IRQ line. */
+	pvr_fw_irq_clear(pvr_dev);
+	pvr_fw_irq_enable(pvr_dev);
+
+	return request_threaded_irq(pvr_dev->irq, pvr_device_irq_handler,
+				    pvr_device_irq_thread_handler,
+				    IRQF_SHARED, "gpu", pvr_dev);
+}
+
+/**
+ * pvr_device_irq_fini() - Deinitialise IRQ required by a PowerVR device
+ * @pvr_dev: Target PowerVR device.
+ */
+static void
+pvr_device_irq_fini(struct pvr_device *pvr_dev)
+{
+	free_irq(pvr_dev->irq, pvr_dev);
+}
+
 /**
  * pvr_build_firmware_filename() - Construct a PowerVR firmware filename
  * @pvr_dev: Target PowerVR device.
@@ -322,7 +403,17 @@ pvr_device_gpu_init(struct pvr_device *pvr_dev)
 	if (IS_ERR(pvr_dev->kernel_vm_ctx))
 		return PTR_ERR(pvr_dev->kernel_vm_ctx);
 
+	err = pvr_fw_init(pvr_dev);
+	if (err)
+		goto err_vm_ctx_put;
+
 	return 0;
+
+err_vm_ctx_put:
+	pvr_vm_context_put(pvr_dev->kernel_vm_ctx);
+	pvr_dev->kernel_vm_ctx = NULL;
+
+	return err;
 }
 
 /**
@@ -332,6 +423,7 @@ pvr_device_gpu_init(struct pvr_device *pvr_dev)
 static void
 pvr_device_gpu_fini(struct pvr_device *pvr_dev)
 {
+	pvr_fw_fini(pvr_dev);
 	WARN_ON(!pvr_vm_context_put(pvr_dev->kernel_vm_ctx));
 	pvr_dev->kernel_vm_ctx = NULL;
 }
@@ -382,10 +474,17 @@ pvr_device_init(struct pvr_device *pvr_dev)
 	if (err)
 		goto err_pm_runtime_put;
 
+	err = pvr_device_irq_init(pvr_dev);
+	if (err)
+		goto err_device_gpu_fini;
+
 	pm_runtime_put(dev);
 
 	return 0;
 
+err_device_gpu_fini:
+	pvr_device_gpu_fini(pvr_dev);
+
 err_pm_runtime_put:
 	pm_runtime_put_sync_suspend(dev);
 
@@ -403,6 +502,7 @@ pvr_device_fini(struct pvr_device *pvr_dev)
 	 * Deinitialization stages are performed in reverse order compared to
 	 * the initialization stages in pvr_device_init().
 	 */
+	pvr_device_irq_fini(pvr_dev);
 	pvr_device_gpu_fini(pvr_dev);
 }
 
diff --git a/drivers/gpu/drm/imagination/pvr_device.h b/drivers/gpu/drm/imagination/pvr_device.h
index 6d58ecfbc838..d6feac60834b 100644
--- a/drivers/gpu/drm/imagination/pvr_device.h
+++ b/drivers/gpu/drm/imagination/pvr_device.h
@@ -4,6 +4,7 @@
 #ifndef PVR_DEVICE_H
 #define PVR_DEVICE_H
 
+#include "pvr_ccb.h"
 #include "pvr_device_info.h"
 #include "pvr_fw.h"
 
@@ -123,6 +124,12 @@ struct pvr_device {
 	 */
 	struct clk *mem_clk;
 
+	/** @irq: IRQ number. */
+	int irq;
+
+	/** @fwccb: Firmware CCB. */
+	struct pvr_ccb fwccb;
+
 	/**
 	 * @kernel_vm_ctx: Virtual memory context used for kernel mappings.
 	 *
@@ -147,6 +154,49 @@ struct pvr_device {
 		u32 kccb_stall_count;
 	} watchdog;
 
+	/** @kccb: Kernel CCB (KCCB) state. */
+	struct {
+		/** @ccb: Kernel CCB. */
+		struct pvr_ccb ccb;
+
+		/** @rtn_q: Waitqueue for KCCB command return waiters. */
+		wait_queue_head_t rtn_q;
+
+		/** @rtn_obj: Object representing KCCB return slots. */
+		struct pvr_fw_object *rtn_obj;
+
+		/**
+		 * @rtn: Pointer to CPU mapping of KCCB return slots. Must be accessed by
+		 *       READ_ONCE()/WRITE_ONCE().
+		 */
+		u32 *rtn;
+
+		/** @slot_count: Total number of KCCB slots available. */
+		u32 slot_count;
+
+		/** @reserved_count: Number of KCCB slots reserved for future use. */
+		u32 reserved_count;
+
+		/**
+		 * @waiters: List of KCCB slot waiters.
+		 */
+		struct list_head waiters;
+
+		/** @fence_ctx: KCCB fence context. */
+		struct {
+			/** @id: KCCB fence context ID allocated with dma_fence_context_alloc(). */
+			u64 id;
+
+			/** @seqno: Sequence number incremented each time a fence is created. */
+			atomic_t seqno;
+
+			/**
+			 * @lock: Lock used to synchronize access to fences allocated by this
+			 * context.
+			 */
+			spinlock_t lock;
+		} fence_ctx;
+	} kccb;
+
 	/**
 	 * @lost: %true if the device has been lost.
 	 *
@@ -155,6 +205,16 @@ struct pvr_device {
 	 */
 	bool lost;
 
+	/**
+	 * @reset_sem: Reset semaphore.
+	 *
+	 * GPU reset code will lock this for writing. Any code that submits commands to the firmware
+	 * that isn't in an IRQ handler or on the scheduler workqueue must lock this for reading.
+	 * Once this has been successfully locked, @lost _must_ be checked, and -%EIO must
+	 * be returned if it is set.
+	 */
+	struct rw_semaphore reset_sem;
+
 	/** @sched_wq: Workqueue for schedulers. */
 	struct workqueue_struct *sched_wq;
 };
diff --git a/drivers/gpu/drm/imagination/pvr_drv.c b/drivers/gpu/drm/imagination/pvr_drv.c
index 0331dc549116..af12f87cef97 100644
--- a/drivers/gpu/drm/imagination/pvr_drv.c
+++ b/drivers/gpu/drm/imagination/pvr_drv.c
@@ -1345,3 +1345,4 @@ MODULE_AUTHOR("Imagination Technologies Ltd.");
 MODULE_DESCRIPTION(PVR_DRIVER_DESC);
 MODULE_LICENSE("Dual MIT/GPL");
 MODULE_IMPORT_NS(DMA_BUF);
+MODULE_FIRMWARE("powervr/rogue_33.15.11.3_v1.fw");
diff --git a/drivers/gpu/drm/imagination/pvr_fw.c b/drivers/gpu/drm/imagination/pvr_fw.c
index c48de4a3af46..449984c0f233 100644
--- a/drivers/gpu/drm/imagination/pvr_fw.c
+++ b/drivers/gpu/drm/imagination/pvr_fw.c
@@ -1,16 +1,84 @@
 // SPDX-License-Identifier: GPL-2.0 OR MIT
 /* Copyright (c) 2023 Imagination Technologies Ltd. */
 
+#include "pvr_ccb.h"
 #include "pvr_device.h"
 #include "pvr_device_info.h"
 #include "pvr_fw.h"
+#include "pvr_fw_info.h"
+#include "pvr_fw_startstop.h"
+#include "pvr_fw_trace.h"
+#include "pvr_gem.h"
+#include "pvr_power.h"
+#include "pvr_rogue_fwif_dev_info.h"
+#include "pvr_rogue_heap_config.h"
+#include "pvr_vm.h"
 
 #include <drm/drm_drv.h>
+#include <drm/drm_managed.h>
+#include <drm/drm_mm.h>
+#include <linux/clk.h>
 #include <linux/firmware.h>
+#include <linux/math.h>
+#include <linux/minmax.h>
 #include <linux/sizes.h>
 
 #define FW_MAX_SUPPORTED_MAJOR_VERSION 1
 
+#define FW_BOOT_TIMEOUT_USEC 5000000
+
+/* Config heap occupies top 192k of the firmware heap. */
+#define PVR_ROGUE_FW_CONFIG_HEAP_GRANULARITY SZ_64K
+#define PVR_ROGUE_FW_CONFIG_HEAP_SIZE (3 * PVR_ROGUE_FW_CONFIG_HEAP_GRANULARITY)
+
+/* Main firmware allocations should come from the remainder of the heap. */
+#define PVR_ROGUE_FW_MAIN_HEAP_BASE ROGUE_FW_HEAP_BASE
+
+/* Offsets from start of configuration area of FW heap. */
+#define PVR_ROGUE_FWIF_CONNECTION_CTL_OFFSET 0
+#define PVR_ROGUE_FWIF_OSINIT_OFFSET \
+	(PVR_ROGUE_FWIF_CONNECTION_CTL_OFFSET + PVR_ROGUE_FW_CONFIG_HEAP_GRANULARITY)
+#define PVR_ROGUE_FWIF_SYSINIT_OFFSET \
+	(PVR_ROGUE_FWIF_OSINIT_OFFSET + PVR_ROGUE_FW_CONFIG_HEAP_GRANULARITY)
+
+#define PVR_ROGUE_FAULT_PAGE_SIZE SZ_4K
+
+#define PVR_SYNC_OBJ_SIZE sizeof(u32)
+
+const struct pvr_fw_layout_entry *
+pvr_fw_find_layout_entry(struct pvr_device *pvr_dev, enum pvr_fw_section_id id)
+{
+	const struct pvr_fw_layout_entry *layout_entries = pvr_dev->fw_dev.layout_entries;
+	u32 num_layout_entries = pvr_dev->fw_dev.header->layout_entry_num;
+	u32 entry;
+
+	for (entry = 0; entry < num_layout_entries; entry++) {
+		if (layout_entries[entry].id == id)
+			return &layout_entries[entry];
+	}
+
+	return NULL;
+}
+
+static const struct pvr_fw_layout_entry *
+pvr_fw_find_private_data(struct pvr_device *pvr_dev)
+{
+	const struct pvr_fw_layout_entry *layout_entries = pvr_dev->fw_dev.layout_entries;
+	u32 num_layout_entries = pvr_dev->fw_dev.header->layout_entry_num;
+	u32 entry;
+
+	for (entry = 0; entry < num_layout_entries; entry++) {
+		if (layout_entries[entry].id == META_PRIVATE_DATA ||
+		    layout_entries[entry].id == MIPS_PRIVATE_DATA ||
+		    layout_entries[entry].id == RISCV_PRIVATE_DATA)
+			return &layout_entries[entry];
+	}
+
+	return NULL;
+}
+
+#define DEV_INFO_MASK_SIZE(x) DIV_ROUND_UP(x, 64)
+
 /**
  * pvr_fw_validate() - Parse firmware header and check compatibility
  * @pvr_dev: Device pointer.
@@ -122,6 +190,685 @@ pvr_fw_get_device_info(struct pvr_device *pvr_dev)
 					    header->feature_param_size);
 }
 
+static void
+layout_get_sizes(struct pvr_device *pvr_dev)
+{
+	const struct pvr_fw_layout_entry *layout_entries = pvr_dev->fw_dev.layout_entries;
+	u32 num_layout_entries = pvr_dev->fw_dev.header->layout_entry_num;
+	struct pvr_fw_mem *fw_mem = &pvr_dev->fw_dev.mem;
+
+	fw_mem->code_alloc_size = 0;
+	fw_mem->data_alloc_size = 0;
+	fw_mem->core_code_alloc_size = 0;
+	fw_mem->core_data_alloc_size = 0;
+
+	/* Extract section sizes from FW layout table. */
+	for (u32 entry = 0; entry < num_layout_entries; entry++) {
+		switch (layout_entries[entry].type) {
+		case FW_CODE:
+			fw_mem->code_alloc_size += layout_entries[entry].alloc_size;
+			break;
+		case FW_DATA:
+			fw_mem->data_alloc_size += layout_entries[entry].alloc_size;
+			break;
+		case FW_COREMEM_CODE:
+			fw_mem->core_code_alloc_size +=
+				layout_entries[entry].alloc_size;
+			break;
+		case FW_COREMEM_DATA:
+			fw_mem->core_data_alloc_size +=
+				layout_entries[entry].alloc_size;
+			break;
+		case NONE:
+			break;
+		}
+	}
+}
+
+int
+pvr_fw_find_mmu_segment(struct pvr_device *pvr_dev, u32 addr, u32 size, void *fw_code_ptr,
+			void *fw_data_ptr, void *fw_core_code_ptr, void *fw_core_data_ptr,
+			void **host_addr_out)
+{
+	const struct pvr_fw_layout_entry *layout_entries = pvr_dev->fw_dev.layout_entries;
+	u32 num_layout_entries = pvr_dev->fw_dev.header->layout_entry_num;
+	u32 end_addr = addr + size;
+	int entry = 0;
+
+	/* Ensure requested range is not zero, and size is not causing addr to overflow. */
+	if (end_addr <= addr)
+		return -EINVAL;
+
+	for (entry = 0; entry < num_layout_entries; entry++) {
+		u32 entry_start_addr = layout_entries[entry].base_addr;
+		u32 entry_end_addr = entry_start_addr + layout_entries[entry].alloc_size;
+
+		if (addr >= entry_start_addr && addr < entry_end_addr &&
+		    end_addr > entry_start_addr && end_addr <= entry_end_addr) {
+			switch (layout_entries[entry].type) {
+			case FW_CODE:
+				*host_addr_out = fw_code_ptr;
+				break;
+
+			case FW_DATA:
+				*host_addr_out = fw_data_ptr;
+				break;
+
+			case FW_COREMEM_CODE:
+				*host_addr_out = fw_core_code_ptr;
+				break;
+
+			case FW_COREMEM_DATA:
+				*host_addr_out = fw_core_data_ptr;
+				break;
+
+			default:
+				return -EINVAL;
+			}
+			/* Convert the FW address into an offset within the FW allocation. */
+			addr -= layout_entries[entry].base_addr;
+			addr += layout_entries[entry].alloc_offset;
+
+			/* Apply the offset to the host mapping of that allocation. */
+			*(u8 **)host_addr_out += addr;
+			return 0;
+		}
+	}
+
+	return -EINVAL;
+}
+
+static int
+pvr_fw_create_fwif_connection_ctl(struct pvr_device *pvr_dev)
+{
+	struct drm_device *drm_dev = from_pvr_device(pvr_dev);
+	struct pvr_fw_device *fw_dev = &pvr_dev->fw_dev;
+
+	fw_dev->fwif_connection_ctl =
+		pvr_fw_object_create_and_map_offset(pvr_dev,
+						    fw_dev->fw_heap_info.config_offset +
+						    PVR_ROGUE_FWIF_CONNECTION_CTL_OFFSET,
+						    sizeof(*fw_dev->fwif_connection_ctl),
+						    PVR_BO_FW_FLAGS_DEVICE_UNCACHED,
+						    NULL, NULL,
+						    &fw_dev->mem.fwif_connection_ctl_obj);
+	if (IS_ERR(fw_dev->fwif_connection_ctl)) {
+		drm_err(drm_dev,
+			"Unable to allocate FWIF connection control memory\n");
+		return PTR_ERR(fw_dev->fwif_connection_ctl);
+	}
+
+	return 0;
+}
+
+static void
+pvr_fw_fini_fwif_connection_ctl(struct pvr_device *pvr_dev)
+{
+	struct pvr_fw_device *fw_dev = &pvr_dev->fw_dev;
+
+	pvr_fw_object_unmap_and_destroy(fw_dev->mem.fwif_connection_ctl_obj);
+}
+
+static void
+fw_osinit_init(void *cpu_ptr, void *priv)
+{
+	struct rogue_fwif_osinit *fwif_osinit = cpu_ptr;
+	struct pvr_device *pvr_dev = priv;
+	struct pvr_fw_device *fw_dev = &pvr_dev->fw_dev;
+	struct pvr_fw_mem *fw_mem = &fw_dev->mem;
+
+	fwif_osinit->kernel_ccbctl_fw_addr = pvr_dev->kccb.ccb.ctrl_fw_addr;
+	fwif_osinit->kernel_ccb_fw_addr = pvr_dev->kccb.ccb.ccb_fw_addr;
+	pvr_fw_object_get_fw_addr(pvr_dev->kccb.rtn_obj,
+				  &fwif_osinit->kernel_ccb_rtn_slots_fw_addr);
+
+	fwif_osinit->firmware_ccbctl_fw_addr = pvr_dev->fwccb.ctrl_fw_addr;
+	fwif_osinit->firmware_ccb_fw_addr = pvr_dev->fwccb.ccb_fw_addr;
+
+	fwif_osinit->work_est_firmware_ccbctl_fw_addr = 0;
+	fwif_osinit->work_est_firmware_ccb_fw_addr = 0;
+
+	pvr_fw_object_get_fw_addr(fw_mem->hwrinfobuf_obj,
+				  &fwif_osinit->rogue_fwif_hwr_info_buf_ctl_fw_addr);
+	pvr_fw_object_get_fw_addr(fw_mem->osdata_obj, &fwif_osinit->fw_os_data_fw_addr);
+
+	fwif_osinit->hwr_debug_dump_limit = 0;
+
+	rogue_fwif_compchecks_bvnc_init(&fwif_osinit->rogue_comp_checks.hw_bvnc);
+	rogue_fwif_compchecks_bvnc_init(&fwif_osinit->rogue_comp_checks.fw_bvnc);
+}
+
+static void
+fw_osdata_init(void *cpu_ptr, void *priv)
+{
+	struct rogue_fwif_osdata *fwif_osdata = cpu_ptr;
+	struct pvr_device *pvr_dev = priv;
+	struct pvr_fw_mem *fw_mem = &pvr_dev->fw_dev.mem;
+
+	pvr_fw_object_get_fw_addr(fw_mem->power_sync_obj, &fwif_osdata->power_sync_fw_addr);
+}
+
+static void
+fw_fault_page_init(void *cpu_ptr, void *priv)
+{
+	u32 *fault_page = cpu_ptr;
+
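+	/* Fill the fault page with a recognisable poison value. */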
+	for (int i = 0; i < PVR_ROGUE_FAULT_PAGE_SIZE / sizeof(*fault_page); i++)
+		fault_page[i] = 0xdeadbee0;
+}
+
+static void
+fw_sysinit_init(void *cpu_ptr, void *priv)
+{
+	struct rogue_fwif_sysinit *fwif_sysinit = cpu_ptr;
+	struct pvr_device *pvr_dev = priv;
+	struct pvr_fw_device *fw_dev = &pvr_dev->fw_dev;
+	struct pvr_fw_mem *fw_mem = &fw_dev->mem;
+	dma_addr_t fault_dma_addr = 0;
+	u32 clock_speed_hz = clk_get_rate(pvr_dev->core_clk);
+
+	WARN_ON(!clock_speed_hz);
+
+	WARN_ON(pvr_fw_object_get_dma_addr(fw_mem->fault_page_obj, 0, &fault_dma_addr));
+	fwif_sysinit->fault_phys_addr = (u64)fault_dma_addr;
+
+	fwif_sysinit->pds_exec_base = ROGUE_PDSCODEDATA_HEAP_BASE;
+	fwif_sysinit->usc_exec_base = ROGUE_USCCODE_HEAP_BASE;
+
+	pvr_fw_object_get_fw_addr(fw_mem->runtime_cfg_obj, &fwif_sysinit->runtime_cfg_fw_addr);
+	pvr_fw_object_get_fw_addr(fw_dev->fw_trace.tracebuf_ctrl_obj,
+				  &fwif_sysinit->trace_buf_ctl_fw_addr);
+	pvr_fw_object_get_fw_addr(fw_mem->sysdata_obj, &fwif_sysinit->fw_sys_data_fw_addr);
+	pvr_fw_object_get_fw_addr(fw_mem->gpu_util_fwcb_obj,
+				  &fwif_sysinit->gpu_util_fw_cb_ctl_fw_addr);
+	if (fw_mem->core_data_obj) {
+		pvr_fw_object_get_fw_addr(fw_mem->core_data_obj,
+					  &fwif_sysinit->coremem_data_store.fw_addr);
+	}
+
+	/* Currently unsupported. */
+	fwif_sysinit->counter_dump_ctl.buffer_fw_addr = 0;
+	fwif_sysinit->counter_dump_ctl.size_in_dwords = 0;
+
+	/* Skip alignment checks. */
+	fwif_sysinit->align_checks = 0;
+
+	fwif_sysinit->filter_flags = 0;
+	fwif_sysinit->hw_perf_filter = 0;
+	fwif_sysinit->firmware_perf = FW_PERF_CONF_NONE;
+	fwif_sysinit->initial_core_clock_speed = clock_speed_hz;
+	fwif_sysinit->active_pm_latency_ms = 0;
+	fwif_sysinit->gpio_validation_mode = ROGUE_FWIF_GPIO_VAL_OFF;
+	fwif_sysinit->firmware_started = false;
+	fwif_sysinit->marker_val = 1;
+
+	memset(&fwif_sysinit->bvnc_km_feature_flags, 0,
+	       sizeof(fwif_sysinit->bvnc_km_feature_flags));
+}
+
+static void
+fw_runtime_cfg_init(void *cpu_ptr, void *priv)
+{
+	struct rogue_fwif_runtime_cfg *runtime_cfg = cpu_ptr;
+	struct pvr_device *pvr_dev = priv;
+	u32 clock_speed_hz = clk_get_rate(pvr_dev->core_clk);
+
+	WARN_ON(!clock_speed_hz);
+
+	runtime_cfg->core_clock_speed = clock_speed_hz;
+	runtime_cfg->active_pm_latency_ms = 0;
+	runtime_cfg->active_pm_latency_persistant = true;
+	WARN_ON(PVR_FEATURE_VALUE(pvr_dev, num_clusters,
+				  &runtime_cfg->default_dusts_num_init) != 0);
+}
+
+static void
+fw_gpu_util_fwcb_init(void *cpu_ptr, void *priv)
+{
+	struct rogue_fwif_gpu_util_fwcb *gpu_util_fwcb = cpu_ptr;
+
+	gpu_util_fwcb->last_word = PVR_FWIF_GPU_UTIL_STATE_IDLE;
+}
+
+static int
+pvr_fw_create_structures(struct pvr_device *pvr_dev)
+{
+	struct drm_device *drm_dev = from_pvr_device(pvr_dev);
+	struct pvr_fw_device *fw_dev = &pvr_dev->fw_dev;
+	struct pvr_fw_mem *fw_mem = &fw_dev->mem;
+	int err;
+
+	fw_dev->power_sync = pvr_fw_object_create_and_map(pvr_dev, sizeof(*fw_dev->power_sync),
+							  PVR_BO_FW_FLAGS_DEVICE_UNCACHED,
+							  NULL, NULL, &fw_mem->power_sync_obj);
+	if (IS_ERR(fw_dev->power_sync)) {
+		drm_err(drm_dev, "Unable to allocate FW power_sync structure\n");
+		return PTR_ERR(fw_dev->power_sync);
+	}
+
+	fw_dev->hwrinfobuf = pvr_fw_object_create_and_map(pvr_dev, sizeof(*fw_dev->hwrinfobuf),
+							  PVR_BO_FW_FLAGS_DEVICE_UNCACHED,
+							  NULL, NULL, &fw_mem->hwrinfobuf_obj);
+	if (IS_ERR(fw_dev->hwrinfobuf)) {
+		drm_err(drm_dev,
+			"Unable to allocate FW hwrinfobuf structure\n");
+		err = PTR_ERR(fw_dev->hwrinfobuf);
+		goto err_release_power_sync;
+	}
+
+	err = pvr_fw_object_create(pvr_dev, PVR_SYNC_OBJ_SIZE,
+				   PVR_BO_FW_FLAGS_DEVICE_UNCACHED,
+				   NULL, NULL, &fw_mem->mmucache_sync_obj);
+	if (err) {
+		drm_err(drm_dev,
+			"Unable to allocate MMU cache sync object\n");
+		goto err_release_hwrinfobuf;
+	}
+
+	fw_dev->fwif_sysdata = pvr_fw_object_create_and_map(pvr_dev,
+							    sizeof(*fw_dev->fwif_sysdata),
+							    PVR_BO_FW_FLAGS_DEVICE_UNCACHED,
+							    NULL, NULL, &fw_mem->sysdata_obj);
+	if (IS_ERR(fw_dev->fwif_sysdata)) {
+		drm_err(drm_dev, "Unable to allocate FW SYSDATA structure\n");
+		err = PTR_ERR(fw_dev->fwif_sysdata);
+		goto err_release_mmucache_sync_obj;
+	}
+
+	err = pvr_fw_object_create(pvr_dev, PVR_ROGUE_FAULT_PAGE_SIZE,
+				   PVR_BO_FW_FLAGS_DEVICE_UNCACHED,
+				   fw_fault_page_init, NULL, &fw_mem->fault_page_obj);
+	if (err) {
+		drm_err(drm_dev, "Unable to allocate FW fault page\n");
+		goto err_release_sysdata;
+	}
+
+	err = pvr_fw_object_create(pvr_dev, sizeof(struct rogue_fwif_gpu_util_fwcb),
+				   PVR_BO_FW_FLAGS_DEVICE_UNCACHED,
+				   fw_gpu_util_fwcb_init, pvr_dev, &fw_mem->gpu_util_fwcb_obj);
+	if (err) {
+		drm_err(drm_dev, "Unable to allocate GPU util FWCB\n");
+		goto err_release_fault_page;
+	}
+
+	err = pvr_fw_object_create(pvr_dev, sizeof(struct rogue_fwif_runtime_cfg),
+				   PVR_BO_FW_FLAGS_DEVICE_UNCACHED,
+				   fw_runtime_cfg_init, pvr_dev, &fw_mem->runtime_cfg_obj);
+	if (err) {
+		drm_err(drm_dev, "Unable to allocate FW runtime config\n");
+		goto err_release_gpu_util_fwcb;
+	}
+
+	err = pvr_fw_trace_init(pvr_dev);
+	if (err)
+		goto err_release_runtime_cfg;
+
+	fw_dev->fwif_osdata = pvr_fw_object_create_and_map(pvr_dev,
+							   sizeof(*fw_dev->fwif_osdata),
+							   PVR_BO_FW_FLAGS_DEVICE_UNCACHED,
+							   fw_osdata_init, pvr_dev,
+							   &fw_mem->osdata_obj);
+	if (IS_ERR(fw_dev->fwif_osdata)) {
+		drm_err(drm_dev, "Unable to allocate FW OSDATA structure\n");
+		err = PTR_ERR(fw_dev->fwif_osdata);
+		goto err_fw_trace_fini;
+	}
+
+	fw_dev->fwif_osinit =
+		pvr_fw_object_create_and_map_offset(pvr_dev,
+						    fw_dev->fw_heap_info.config_offset +
+						    PVR_ROGUE_FWIF_OSINIT_OFFSET,
+						    sizeof(*fw_dev->fwif_osinit),
+						    PVR_BO_FW_FLAGS_DEVICE_UNCACHED,
+						    fw_osinit_init, pvr_dev, &fw_mem->osinit_obj);
+	if (IS_ERR(fw_dev->fwif_osinit)) {
+		drm_err(drm_dev, "Unable to allocate FW OSINIT structure\n");
+		err = PTR_ERR(fw_dev->fwif_osinit);
+		goto err_release_osdata;
+	}
+
+	fw_dev->fwif_sysinit =
+		pvr_fw_object_create_and_map_offset(pvr_dev,
+						    fw_dev->fw_heap_info.config_offset +
+						    PVR_ROGUE_FWIF_SYSINIT_OFFSET,
+						    sizeof(*fw_dev->fwif_sysinit),
+						    PVR_BO_FW_FLAGS_DEVICE_UNCACHED,
+						    fw_sysinit_init, pvr_dev, &fw_mem->sysinit_obj);
+	if (IS_ERR(fw_dev->fwif_sysinit)) {
+		drm_err(drm_dev, "Unable to allocate FW SYSINIT structure\n");
+		err = PTR_ERR(fw_dev->fwif_sysinit);
+		goto err_release_osinit;
+	}
+
+	return 0;
+
+err_release_osinit:
+	pvr_fw_object_unmap_and_destroy(fw_mem->osinit_obj);
+
+err_release_osdata:
+	pvr_fw_object_unmap_and_destroy(fw_mem->osdata_obj);
+
+err_fw_trace_fini:
+	pvr_fw_trace_fini(pvr_dev);
+
+err_release_runtime_cfg:
+	pvr_fw_object_destroy(fw_mem->runtime_cfg_obj);
+
+err_release_gpu_util_fwcb:
+	pvr_fw_object_destroy(fw_mem->gpu_util_fwcb_obj);
+
+err_release_fault_page:
+	pvr_fw_object_destroy(fw_mem->fault_page_obj);
+
+err_release_sysdata:
+	pvr_fw_object_unmap_and_destroy(fw_mem->sysdata_obj);
+
+err_release_mmucache_sync_obj:
+	pvr_fw_object_destroy(fw_mem->mmucache_sync_obj);
+
+err_release_hwrinfobuf:
+	pvr_fw_object_unmap_and_destroy(fw_mem->hwrinfobuf_obj);
+
+err_release_power_sync:
+	pvr_fw_object_unmap_and_destroy(fw_mem->power_sync_obj);
+
+	return err;
+}
+
+static void
+pvr_fw_destroy_structures(struct pvr_device *pvr_dev)
+{
+	struct pvr_fw_device *fw_dev = &pvr_dev->fw_dev;
+	struct pvr_fw_mem *fw_mem = &fw_dev->mem;
+
+	pvr_fw_trace_fini(pvr_dev);
+	pvr_fw_object_destroy(fw_mem->runtime_cfg_obj);
+	pvr_fw_object_destroy(fw_mem->gpu_util_fwcb_obj);
+	pvr_fw_object_destroy(fw_mem->fault_page_obj);
+	pvr_fw_object_unmap_and_destroy(fw_mem->sysdata_obj);
+	pvr_fw_object_unmap_and_destroy(fw_mem->sysinit_obj);
+
+	pvr_fw_object_destroy(fw_mem->mmucache_sync_obj);
+	pvr_fw_object_unmap_and_destroy(fw_mem->hwrinfobuf_obj);
+	pvr_fw_object_unmap_and_destroy(fw_mem->power_sync_obj);
+	pvr_fw_object_unmap_and_destroy(fw_mem->osdata_obj);
+	pvr_fw_object_unmap_and_destroy(fw_mem->osinit_obj);
+}
+
+/**
+ * pvr_fw_process() - Process firmware image, allocate FW memory and create boot
+ *                    arguments
+ * @pvr_dev: Device pointer.
+ *
+ * Returns:
+ *  * 0 on success,
+ *  * -%EINVAL if the firmware layout table has no private data section,
+ *  * -%ENOMEM on out of memory, or
+ *  * Any error returned by pvr_fw_object_create_and_map_offset() or
+ *    pvr_fw_object_create_and_map().
+ */
+static int
+pvr_fw_process(struct pvr_device *pvr_dev)
+{
+	struct drm_device *drm_dev = from_pvr_device(pvr_dev);
+	struct pvr_fw_mem *fw_mem = &pvr_dev->fw_dev.mem;
+	const u8 *fw = pvr_dev->fw_dev.firmware->data;
+	const struct pvr_fw_layout_entry *private_data;
+	u8 *fw_code_ptr;
+	u8 *fw_data_ptr;
+	u8 *fw_core_code_ptr;
+	u8 *fw_core_data_ptr;
+	int err;
+
+	layout_get_sizes(pvr_dev);
+
+	private_data = pvr_fw_find_private_data(pvr_dev);
+	if (!private_data)
+		return -EINVAL;
+
+	/* Allocate and map memory for firmware sections. */
+
+	/*
+	 * Code allocation must be at the start of the firmware heap, otherwise
+	 * firmware processor will be unable to boot.
+	 *
+	 * This has the useful side-effect that for every other object in the
+	 * driver, a firmware address of 0 is invalid.
+	 */
+	fw_code_ptr = pvr_fw_object_create_and_map_offset(pvr_dev, 0, fw_mem->code_alloc_size,
+							  PVR_BO_FW_FLAGS_DEVICE_UNCACHED,
+							  NULL, NULL, &fw_mem->code_obj);
+	if (IS_ERR(fw_code_ptr)) {
+		drm_err(drm_dev, "Unable to allocate FW code memory\n");
+		return PTR_ERR(fw_code_ptr);
+	}
+
+	if (pvr_dev->fw_dev.defs->has_fixed_data_addr()) {
+		u32 base_addr = private_data->base_addr & pvr_dev->fw_dev.fw_heap_info.offset_mask;
+
+		fw_data_ptr =
+			pvr_fw_object_create_and_map_offset(pvr_dev, base_addr,
+							    fw_mem->data_alloc_size,
+							    PVR_BO_FW_FLAGS_DEVICE_UNCACHED,
+							    NULL, NULL, &fw_mem->data_obj);
+	} else {
+		fw_data_ptr = pvr_fw_object_create_and_map(pvr_dev, fw_mem->data_alloc_size,
+							   PVR_BO_FW_FLAGS_DEVICE_UNCACHED,
+							   NULL, NULL, &fw_mem->data_obj);
+	}
+	if (IS_ERR(fw_data_ptr)) {
+		drm_err(drm_dev, "Unable to allocate FW data memory\n");
+		err = PTR_ERR(fw_data_ptr);
+		goto err_free_fw_code_obj;
+	}
+
+	/* Core code and data sections are optional. */
+	if (fw_mem->core_code_alloc_size) {
+		fw_core_code_ptr =
+			pvr_fw_object_create_and_map(pvr_dev, fw_mem->core_code_alloc_size,
+						     PVR_BO_FW_FLAGS_DEVICE_UNCACHED,
+						     NULL, NULL, &fw_mem->core_code_obj);
+		if (IS_ERR(fw_core_code_ptr)) {
+			drm_err(drm_dev,
+				"Unable to allocate FW core code memory\n");
+			err = PTR_ERR(fw_core_code_ptr);
+			goto err_free_fw_data_obj;
+		}
+	} else {
+		fw_core_code_ptr = NULL;
+	}
+
+	if (fw_mem->core_data_alloc_size) {
+		fw_core_data_ptr =
+			pvr_fw_object_create_and_map(pvr_dev, fw_mem->core_data_alloc_size,
+						     PVR_BO_FW_FLAGS_DEVICE_UNCACHED,
+						     NULL, NULL, &fw_mem->core_data_obj);
+		if (IS_ERR(fw_core_data_ptr)) {
+			drm_err(drm_dev,
+				"Unable to allocate FW core data memory\n");
+			err = PTR_ERR(fw_core_data_ptr);
+			goto err_free_fw_core_code_obj;
+		}
+	} else {
+		fw_core_data_ptr = NULL;
+	}
+
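+	/*
+	 * Keep driver-side copies of the FW sections so the device mappings can
+	 * be repopulated on hard reset (see pvr_fw_reinit_code_data()).
+	 */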
+	fw_mem->code = kzalloc(fw_mem->code_alloc_size, GFP_KERNEL);
+	fw_mem->data = kzalloc(fw_mem->data_alloc_size, GFP_KERNEL);
+	if (fw_mem->core_code_alloc_size)
+		fw_mem->core_code = kzalloc(fw_mem->core_code_alloc_size, GFP_KERNEL);
+	if (fw_mem->core_data_alloc_size)
+		fw_mem->core_data = kzalloc(fw_mem->core_data_alloc_size, GFP_KERNEL);
+
+	if (!fw_mem->code || !fw_mem->data ||
+	    (!fw_mem->core_code && fw_mem->core_code_alloc_size) ||
+	    (!fw_mem->core_data && fw_mem->core_data_alloc_size)) {
+		err = -ENOMEM;
+		goto err_free_kdata;
+	}
+
+	err = pvr_dev->fw_dev.defs->fw_process(pvr_dev, fw,
+					       fw_mem->code, fw_mem->data, fw_mem->core_code,
+					       fw_mem->core_data, fw_mem->core_code_alloc_size);
+	if (err)
+		goto err_free_fw_core_data_obj;
+
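+	/* Copy the processed sections into their firmware-mapped objects. */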
+	memcpy(fw_code_ptr, fw_mem->code, fw_mem->code_alloc_size);
+	memcpy(fw_data_ptr, fw_mem->data, fw_mem->data_alloc_size);
+	if (fw_mem->core_code)
+		memcpy(fw_core_code_ptr, fw_mem->core_code, fw_mem->core_code_alloc_size);
+	if (fw_mem->core_data)
+		memcpy(fw_core_data_ptr, fw_mem->core_data, fw_mem->core_data_alloc_size);
+
+	/* We're finished with the firmware section memory on the CPU, unmap. */
+	if (fw_core_data_ptr)
+		pvr_fw_object_vunmap(fw_mem->core_data_obj);
+	if (fw_core_code_ptr)
+		pvr_fw_object_vunmap(fw_mem->core_code_obj);
+	pvr_fw_object_vunmap(fw_mem->data_obj);
+	fw_data_ptr = NULL;
+	pvr_fw_object_vunmap(fw_mem->code_obj);
+	fw_code_ptr = NULL;
+
+	err = pvr_fw_create_fwif_connection_ctl(pvr_dev);
+	if (err)
+		goto err_free_fw_core_data_obj;
+
+	return 0;
+
+err_free_kdata:
+	kfree(fw_mem->core_data);
+	kfree(fw_mem->core_code);
+	kfree(fw_mem->data);
+	kfree(fw_mem->code);
+
+err_free_fw_core_data_obj:
+	if (fw_core_data_ptr)
+		pvr_fw_object_unmap_and_destroy(fw_mem->core_data_obj);
+
+err_free_fw_core_code_obj:
+	if (fw_core_code_ptr)
+		pvr_fw_object_unmap_and_destroy(fw_mem->core_code_obj);
+
+err_free_fw_data_obj:
+	if (fw_data_ptr)
+		pvr_fw_object_vunmap(fw_mem->data_obj);
+	pvr_fw_object_destroy(fw_mem->data_obj);
+
+err_free_fw_code_obj:
+	if (fw_code_ptr)
+		pvr_fw_object_vunmap(fw_mem->code_obj);
+	pvr_fw_object_destroy(fw_mem->code_obj);
+
+	return err;
+}
+
+static int
+pvr_copy_to_fw(struct pvr_fw_object *dest_obj, u8 *src_ptr, u32 size)
+{
+	u8 *dest_ptr = pvr_fw_object_vmap(dest_obj);
+
+	if (IS_ERR(dest_ptr))
+		return PTR_ERR(dest_ptr);
+
+	memcpy(dest_ptr, src_ptr, size);
+
+	pvr_fw_object_vunmap(dest_obj);
+
+	return 0;
+}
+
+static int
+pvr_fw_reinit_code_data(struct pvr_device *pvr_dev)
+{
+	struct pvr_fw_mem *fw_mem = &pvr_dev->fw_dev.mem;
+	int err;
+
+	err = pvr_copy_to_fw(fw_mem->code_obj, fw_mem->code, fw_mem->code_alloc_size);
+	if (err)
+		return err;
+
+	err = pvr_copy_to_fw(fw_mem->data_obj, fw_mem->data, fw_mem->data_alloc_size);
+	if (err)
+		return err;
+
+	if (fw_mem->core_code) {
+		err = pvr_copy_to_fw(fw_mem->core_code_obj, fw_mem->core_code,
+				     fw_mem->core_code_alloc_size);
+		if (err)
+			return err;
+	}
+
+	if (fw_mem->core_data) {
+		err = pvr_copy_to_fw(fw_mem->core_data_obj, fw_mem->core_data,
+				     fw_mem->core_data_alloc_size);
+		if (err)
+			return err;
+	}
+
+	return 0;
+}
+
+static void
+pvr_fw_cleanup(struct pvr_device *pvr_dev)
+{
+	struct pvr_fw_mem *fw_mem = &pvr_dev->fw_dev.mem;
+
+	pvr_fw_fini_fwif_connection_ctl(pvr_dev);
+	if (fw_mem->core_code_obj)
+		pvr_fw_object_destroy(fw_mem->core_code_obj);
+	if (fw_mem->core_data_obj)
+		pvr_fw_object_destroy(fw_mem->core_data_obj);
+	pvr_fw_object_destroy(fw_mem->code_obj);
+	pvr_fw_object_destroy(fw_mem->data_obj);
+}
+
+/**
+ * pvr_wait_for_fw_boot() - Wait for firmware to finish booting
+ * @pvr_dev: Target PowerVR device.
+ *
+ * Returns:
+ *  * 0 on success, or
+ *  * -%ETIMEDOUT if firmware fails to boot within timeout.
+ */
+int
+pvr_wait_for_fw_boot(struct pvr_device *pvr_dev)
+{
+	ktime_t deadline = ktime_add_us(ktime_get(), FW_BOOT_TIMEOUT_USEC);
+	struct pvr_fw_device *fw_dev = &pvr_dev->fw_dev;
+
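+	/* Busy-wait on firmware_started, which the firmware sets once it has booted. */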
+	while (ktime_to_ns(ktime_sub(deadline, ktime_get())) > 0) {
+		if (READ_ONCE(fw_dev->fwif_sysinit->firmware_started))
+			return 0;
+	}
+
+	return -ETIMEDOUT;
+}
+
+/**
+ * pvr_fw_heap_info_init() - Calculate size and masks for FW heap
+ * @pvr_dev: Target PowerVR device.
+ * @log2_size: Log2 of raw heap size.
+ * @reserved_size: Size of reserved area of heap, in bytes. May be zero.
+ */
+void
+pvr_fw_heap_info_init(struct pvr_device *pvr_dev, u32 log2_size, u32 reserved_size)
+{
+	struct pvr_fw_device *fw_dev = &pvr_dev->fw_dev;
+
+	fw_dev->fw_heap_info.gpu_addr = PVR_ROGUE_FW_MAIN_HEAP_BASE;
+	fw_dev->fw_heap_info.log2_size = log2_size;
+	fw_dev->fw_heap_info.reserved_size = reserved_size;
+	fw_dev->fw_heap_info.raw_size = 1 << fw_dev->fw_heap_info.log2_size;
+	fw_dev->fw_heap_info.offset_mask = fw_dev->fw_heap_info.raw_size - 1;
+	fw_dev->fw_heap_info.config_offset = fw_dev->fw_heap_info.raw_size -
+					     PVR_ROGUE_FW_CONFIG_HEAP_SIZE;
+	fw_dev->fw_heap_info.size = fw_dev->fw_heap_info.raw_size -
+				    (PVR_ROGUE_FW_CONFIG_HEAP_SIZE + reserved_size);
+}
+
 /**
  * pvr_fw_validate_init_device_info() - Validate firmware and initialise device information
  * @pvr_dev: Target PowerVR device.
@@ -143,3 +890,579 @@ pvr_fw_validate_init_device_info(struct pvr_device *pvr_dev)
 
 	return pvr_fw_get_device_info(pvr_dev);
 }
+
+/**
+ * pvr_fw_init() - Initialise and boot firmware
+ * @pvr_dev: Target PowerVR device
+ *
+ * On successful completion of the function the PowerVR device will be
+ * initialised and ready to use.
+ *
+ * Returns:
+ *  * 0 on success,
+ *  * -%EINVAL on invalid firmware image,
+ *  * -%ENOMEM on out of memory, or
+ *  * -%ETIMEDOUT if firmware processor fails to boot or on register poll timeout.
+ */
+int
+pvr_fw_init(struct pvr_device *pvr_dev)
+{
+	u32 kccb_size_log2 = ROGUE_FWIF_KCCB_NUMCMDS_LOG2_DEFAULT;
+	u32 kccb_rtn_size = (1 << kccb_size_log2) * sizeof(*pvr_dev->kccb.rtn);
+	struct pvr_fw_device *fw_dev = &pvr_dev->fw_dev;
+	int err;
+
+	if (fw_dev->processor_type == PVR_FW_PROCESSOR_TYPE_META)
+		fw_dev->defs = &pvr_fw_defs_meta;
+	else
+		return -EINVAL;
+
+	err = fw_dev->defs->init(pvr_dev);
+	if (err)
+		return err;
+
+	drm_mm_init(&fw_dev->fw_mm, ROGUE_FW_HEAP_BASE, fw_dev->fw_heap_info.raw_size);
+	fw_dev->fw_mm_base = ROGUE_FW_HEAP_BASE;
+	spin_lock_init(&fw_dev->fw_mm_lock);
+
+	INIT_LIST_HEAD(&fw_dev->fw_objs.list);
+	err = drmm_mutex_init(from_pvr_device(pvr_dev), &fw_dev->fw_objs.lock);
+	if (err)
+		goto err_mm_takedown;
+
+	err = pvr_fw_process(pvr_dev);
+	if (err)
+		goto err_mm_takedown;
+
+	/* Initialise KCCB and FWCCB. */
+	err = pvr_kccb_init(pvr_dev);
+	if (err)
+		goto err_fw_cleanup;
+
+	err = pvr_fwccb_init(pvr_dev);
+	if (err)
+		goto err_kccb_fini;
+
+	/* Allocate memory for KCCB return slots. */
+	pvr_dev->kccb.rtn = pvr_fw_object_create_and_map(pvr_dev, kccb_rtn_size,
+							 PVR_BO_FW_FLAGS_DEVICE_UNCACHED,
+							 NULL, NULL, &pvr_dev->kccb.rtn_obj);
+	if (IS_ERR(pvr_dev->kccb.rtn)) {
+		err = PTR_ERR(pvr_dev->kccb.rtn);
+		goto err_fwccb_fini;
+	}
+
+	err = pvr_fw_create_structures(pvr_dev);
+	if (err)
+		goto err_kccb_rtn_release;
+
+	err = pvr_fw_start(pvr_dev);
+	if (err)
+		goto err_destroy_structures;
+
+	err = pvr_wait_for_fw_boot(pvr_dev);
+	if (err) {
+		drm_err(from_pvr_device(pvr_dev), "Firmware failed to boot\n");
+		goto err_fw_stop;
+	}
+
+	fw_dev->booted = true;
+
+	return 0;
+
+err_fw_stop:
+	pvr_fw_stop(pvr_dev);
+
+err_destroy_structures:
+	pvr_fw_destroy_structures(pvr_dev);
+
+err_kccb_rtn_release:
+	pvr_fw_object_unmap_and_destroy(pvr_dev->kccb.rtn_obj);
+
+err_fwccb_fini:
+	pvr_ccb_fini(&pvr_dev->fwccb);
+
+err_kccb_fini:
+	pvr_kccb_fini(pvr_dev);
+
+err_fw_cleanup:
+	pvr_fw_cleanup(pvr_dev);
+
+err_mm_takedown:
+	drm_mm_takedown(&fw_dev->fw_mm);
+
+	if (fw_dev->defs->fini)
+		fw_dev->defs->fini(pvr_dev);
+
+	return err;
+}
+
+/**
+ * pvr_fw_fini() - Shutdown firmware processor and free associated memory
+ * @pvr_dev: Target PowerVR device
+ */
+void
+pvr_fw_fini(struct pvr_device *pvr_dev)
+{
+	struct pvr_fw_device *fw_dev = &pvr_dev->fw_dev;
+
+	fw_dev->booted = false;
+
+	pvr_fw_destroy_structures(pvr_dev);
+	pvr_fw_object_unmap_and_destroy(pvr_dev->kccb.rtn_obj);
+
+	/*
+	 * Ensure the FWCCB worker has finished executing before destroying the FWCCB. The IRQ
+	 * handler has been unregistered at this point, so no new work can be submitted.
+	 */
+	pvr_ccb_fini(&pvr_dev->fwccb);
+	pvr_kccb_fini(pvr_dev);
+	pvr_fw_cleanup(pvr_dev);
+
+	mutex_lock(&pvr_dev->fw_dev.fw_objs.lock);
+	WARN_ON(!list_empty(&pvr_dev->fw_dev.fw_objs.list));
+	mutex_unlock(&pvr_dev->fw_dev.fw_objs.lock);
+
+	drm_mm_takedown(&fw_dev->fw_mm);
+
+	if (fw_dev->defs->fini)
+		fw_dev->defs->fini(pvr_dev);
+}
+
+/**
+ * pvr_fw_mts_schedule() - Schedule work via an MTS kick
+ * @pvr_dev: Target PowerVR device
+ * @val: Kick mask. Should be a combination of %ROGUE_CR_MTS_SCHEDULE_*
+ */
+void
+pvr_fw_mts_schedule(struct pvr_device *pvr_dev, u32 val)
+{
+	/* Ensure memory is flushed before kicking MTS. */
+	wmb();
+
+	pvr_cr_write32(pvr_dev, ROGUE_CR_MTS_SCHEDULE, val);
+
+	/* Ensure the MTS kick goes through before continuing. */
+	mb();
+}
+
+/**
+ * pvr_fw_structure_cleanup() - Send FW cleanup request for an object
+ * @pvr_dev: Target PowerVR device.
+ * @type: Type of object to cleanup. Must be one of &enum rogue_fwif_cleanup_type.
+ * @fw_obj: Pointer to FW object containing object to cleanup.
+ * @offset: Offset within FW object of object to cleanup.
+ *
+ * Returns:
+ *  * 0 on success,
+ *  * -%EBUSY if object is busy,
+ *  * -%ETIMEDOUT on timeout, or
+ *  * -%EIO if device is lost.
+ */
+int
+pvr_fw_structure_cleanup(struct pvr_device *pvr_dev, u32 type, struct pvr_fw_object *fw_obj,
+			 u32 offset)
+{
+	struct rogue_fwif_kccb_cmd cmd;
+	int slot_nr;
+	int idx;
+	int err;
+	u32 rtn;
+
+	struct rogue_fwif_cleanup_request *cleanup_req = &cmd.cmd_data.cleanup_data;
+
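+	/* Take the reset semaphore for reading: KCCB commands must not race with a GPU reset. */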
+	down_read(&pvr_dev->reset_sem);
+
+	if (!drm_dev_enter(from_pvr_device(pvr_dev), &idx)) {
+		err = -EIO;
+		goto err_up_read;
+	}
+
+	cmd.cmd_type = ROGUE_FWIF_KCCB_CMD_CLEANUP;
+	cmd.kccb_flags = 0;
+	cleanup_req->cleanup_type = type;
+
+	switch (type) {
+	case ROGUE_FWIF_CLEANUP_FWCOMMONCONTEXT:
+		pvr_fw_object_get_fw_addr_offset(fw_obj, offset,
+						 &cleanup_req->cleanup_data.context_fw_addr);
+		break;
+	case ROGUE_FWIF_CLEANUP_HWRTDATA:
+		pvr_fw_object_get_fw_addr_offset(fw_obj, offset,
+						 &cleanup_req->cleanup_data.hwrt_data_fw_addr);
+		break;
+	case ROGUE_FWIF_CLEANUP_FREELIST:
+		pvr_fw_object_get_fw_addr_offset(fw_obj, offset,
+						 &cleanup_req->cleanup_data.freelist_fw_addr);
+		break;
+	default:
+		err = -EINVAL;
+		goto err_drm_dev_exit;
+	}
+
+	err = pvr_kccb_send_cmd(pvr_dev, &cmd, &slot_nr);
+	if (err)
+		goto err_drm_dev_exit;
+
+	err = pvr_kccb_wait_for_completion(pvr_dev, slot_nr, HZ, &rtn);
+	if (err)
+		goto err_drm_dev_exit;
+
+	if (rtn & ROGUE_FWIF_KCCB_RTN_SLOT_CLEANUP_BUSY)
+		err = -EBUSY;
+
+err_drm_dev_exit:
+	drm_dev_exit(idx);
+
+err_up_read:
+	up_read(&pvr_dev->reset_sem);
+
+	return err;
+}
+
+/**
+ * pvr_fw_object_fw_map() - Map a FW object in firmware address space
+ * @pvr_dev: Device pointer.
+ * @fw_obj: FW object to map.
+ * @dev_addr: Desired address in device space, if a specific address is
+ *            required. 0 otherwise.
+ *
+ * Returns:
+ *  * 0 on success, or
+ *  * -%EINVAL if @fw_obj is already mapped, or
+ *  * Any error returned by DRM.
+ */
+static int
+pvr_fw_object_fw_map(struct pvr_device *pvr_dev, struct pvr_fw_object *fw_obj, u64 dev_addr)
+{
+	struct pvr_gem_object *pvr_obj = fw_obj->gem;
+	struct drm_gem_object *gem_obj = gem_from_pvr_gem(pvr_obj);
+	struct pvr_fw_device *fw_dev = &pvr_dev->fw_dev;
+
+	int err;
+
+	spin_lock(&fw_dev->fw_mm_lock);
+
+	if (drm_mm_node_allocated(&fw_obj->fw_mm_node)) {
+		err = -EINVAL;
+		goto err_unlock;
+	}
+
+	if (!dev_addr) {
+		/*
+		 * Allocate from the main heap only (firmware heap minus
+		 * config space).
+		 */
+		err = drm_mm_insert_node_in_range(&fw_dev->fw_mm, &fw_obj->fw_mm_node,
+						  gem_obj->size, 0, 0,
+						  fw_dev->fw_heap_info.gpu_addr,
+						  fw_dev->fw_heap_info.gpu_addr +
+						  fw_dev->fw_heap_info.size, 0);
+		if (err)
+			goto err_unlock;
+	} else {
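+		/* A specific device address was requested: reserve that exact range. */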
+		fw_obj->fw_mm_node.start = dev_addr;
+		fw_obj->fw_mm_node.size = gem_obj->size;
+		err = drm_mm_reserve_node(&fw_dev->fw_mm, &fw_obj->fw_mm_node);
+		if (err)
+			goto err_unlock;
+	}
+
+	spin_unlock(&fw_dev->fw_mm_lock);
+
+	/* Map object on GPU. */
+	err = fw_dev->defs->vm_map(pvr_dev, fw_obj);
+	if (err)
+		goto err_remove_node;
+
+	fw_obj->fw_addr_offset = (u32)(fw_obj->fw_mm_node.start - fw_dev->fw_mm_base);
+
+	return 0;
+
+err_remove_node:
+	spin_lock(&fw_dev->fw_mm_lock);
+	drm_mm_remove_node(&fw_obj->fw_mm_node);
+
+err_unlock:
+	spin_unlock(&fw_dev->fw_mm_lock);
+
+	return err;
+}
+
+/**
+ * pvr_fw_object_fw_unmap() - Unmap a previously mapped FW object
+ * @fw_obj: FW object to unmap.
+ *
+ * Returns:
+ *  * 0 on success, or
+ *  * -%EINVAL if object is not currently mapped.
+ */
+static int
+pvr_fw_object_fw_unmap(struct pvr_fw_object *fw_obj)
+{
+	struct pvr_gem_object *pvr_obj = fw_obj->gem;
+	struct drm_gem_object *gem_obj = gem_from_pvr_gem(pvr_obj);
+	struct pvr_device *pvr_dev = to_pvr_device(gem_obj->dev);
+	struct pvr_fw_device *fw_dev = &pvr_dev->fw_dev;
+
+	fw_dev->defs->vm_unmap(pvr_dev, fw_obj);
+
+	spin_lock(&fw_dev->fw_mm_lock);
+
+	if (!drm_mm_node_allocated(&fw_obj->fw_mm_node)) {
+		spin_unlock(&fw_dev->fw_mm_lock);
+		return -EINVAL;
+	}
+
+	drm_mm_remove_node(&fw_obj->fw_mm_node);
+
+	spin_unlock(&fw_dev->fw_mm_lock);
+
+	return 0;
+}
+
+static void *
+pvr_fw_object_create_and_map_common(struct pvr_device *pvr_dev, size_t size,
+				    u64 flags, u64 dev_addr,
+				    void (*init)(void *cpu_ptr, void *priv),
+				    void *init_priv, struct pvr_fw_object **fw_obj_out)
+{
+	struct pvr_fw_object *fw_obj;
+	void *cpu_ptr;
+	int err;
+
+	/* %DRM_PVR_BO_DEVICE_PM_FW_PROTECT is implicit for FW objects. */
+	flags |= DRM_PVR_BO_DEVICE_PM_FW_PROTECT;
+
+	fw_obj = kzalloc(sizeof(*fw_obj), GFP_KERNEL);
+	if (!fw_obj)
+		return ERR_PTR(-ENOMEM);
+
+	INIT_LIST_HEAD(&fw_obj->node);
+	fw_obj->init = init;
+	fw_obj->init_priv = init_priv;
+
+	fw_obj->gem = pvr_gem_object_create(pvr_dev, size, flags);
+	if (IS_ERR(fw_obj->gem)) {
+		err = PTR_ERR(fw_obj->gem);
+		fw_obj->gem = NULL;
+		goto err_put_object;
+	}
+
+	err = pvr_fw_object_fw_map(pvr_dev, fw_obj, dev_addr);
+	if (err)
+		goto err_put_object;
+
+	cpu_ptr = pvr_fw_object_vmap(fw_obj);
+	if (IS_ERR(cpu_ptr)) {
+		err = PTR_ERR(cpu_ptr);
+		goto err_put_object;
+	}
+
+	*fw_obj_out = fw_obj;
+
+	if (fw_obj->init)
+		fw_obj->init(cpu_ptr, fw_obj->init_priv);
+
+	mutex_lock(&pvr_dev->fw_dev.fw_objs.lock);
+	list_add_tail(&fw_obj->node, &pvr_dev->fw_dev.fw_objs.list);
+	mutex_unlock(&pvr_dev->fw_dev.fw_objs.lock);
+
+	return cpu_ptr;
+
+err_put_object:
+	pvr_fw_object_destroy(fw_obj);
+
+	return ERR_PTR(err);
+}
+
+/**
+ * pvr_fw_object_create() - Create a FW object and map to firmware
+ * @pvr_dev: PowerVR device pointer.
+ * @size: Size of object, in bytes.
+ * @flags: Options which affect both this operation and future mapping
+ * operations performed on the returned object. Must be a combination of
+ * DRM_PVR_BO_* and/or PVR_BO_* flags.
+ * @init: Initialisation callback.
+ * @init_priv: Private pointer to pass to initialisation callback.
+ * @fw_obj_out: Pointer to location to store created object pointer.
+ *
+ * %DRM_PVR_BO_DEVICE_PM_FW_PROTECT is implied for all FW objects. Consequently,
+ * this function will fail if @flags has %DRM_PVR_BO_CPU_ALLOW_USERSPACE_ACCESS
+ * set.
+ *
+ * Returns:
+ *  * 0 on success, or
+ *  * Any error returned by pvr_fw_object_create_and_map_common().
+ */
+int
+pvr_fw_object_create(struct pvr_device *pvr_dev, size_t size, u64 flags,
+		     void (*init)(void *cpu_ptr, void *priv), void *init_priv,
+		     struct pvr_fw_object **fw_obj_out)
+{
+	void *cpu_ptr;
+
+	cpu_ptr = pvr_fw_object_create_and_map_common(pvr_dev, size, flags, 0, init, init_priv,
+						      fw_obj_out);
+	if (IS_ERR(cpu_ptr))
+		return PTR_ERR(cpu_ptr);
+
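+	/* The caller doesn't need a CPU mapping, so drop the one taken by the common helper. */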
+	pvr_fw_object_vunmap(*fw_obj_out);
+
+	return 0;
+}
+
+/**
+ * pvr_fw_object_create_and_map() - Create a FW object and map to firmware and CPU
+ * @pvr_dev: PowerVR device pointer.
+ * @size: Size of object, in bytes.
+ * @flags: Options which affect both this operation and future mapping
+ * operations performed on the returned object. Must be a combination of
+ * DRM_PVR_BO_* and/or PVR_BO_* flags.
+ * @init: Initialisation callback.
+ * @init_priv: Private pointer to pass to initialisation callback.
+ * @fw_obj_out: Pointer to location to store created object pointer.
+ *
+ * %DRM_PVR_BO_DEVICE_PM_FW_PROTECT is implied for all FW objects. Consequently,
+ * this function will fail if @flags has %DRM_PVR_BO_CPU_ALLOW_USERSPACE_ACCESS
+ * set.
+ *
+ * Caller is responsible for calling pvr_fw_object_vunmap() to release the CPU
+ * mapping.
+ *
+ * Returns:
+ *  * Pointer to CPU mapping of newly created object, or
+ *  * Any error returned by pvr_fw_object_create(), or
+ *  * Any error returned by pvr_fw_object_vmap().
+ */
+void *
+pvr_fw_object_create_and_map(struct pvr_device *pvr_dev, size_t size, u64 flags,
+			     void (*init)(void *cpu_ptr, void *priv),
+			     void *init_priv, struct pvr_fw_object **fw_obj_out)
+{
+	return pvr_fw_object_create_and_map_common(pvr_dev, size, flags, 0, init, init_priv,
+						   fw_obj_out);
+}
+
+/**
+ * pvr_fw_object_create_and_map_offset() - Create a FW object and map to
+ * firmware at the provided offset and to the CPU.
+ * @pvr_dev: PowerVR device pointer.
+ * @dev_offset: Base address of desired FW mapping, offset from start of FW heap.
+ * @size: Size of object, in bytes.
+ * @flags: Options which affect both this operation and future mapping
+ * operations performed on the returned object. Must be a combination of
+ * DRM_PVR_BO_* and/or PVR_BO_* flags.
+ * @init: Initialisation callback.
+ * @init_priv: Private pointer to pass to initialisation callback.
+ * @fw_obj_out: Pointer to location to store created object pointer.
+ *
+ * %DRM_PVR_BO_DEVICE_PM_FW_PROTECT is implied for all FW objects. Consequently,
+ * this function will fail if @flags has %DRM_PVR_BO_CPU_ALLOW_USERSPACE_ACCESS
+ * set.
+ *
+ * Caller is responsible for calling pvr_fw_object_vunmap() to release the CPU
+ * mapping.
+ *
+ * Returns:
+ *  * Pointer to CPU mapping of newly created object, or
+ *  * Any error returned by pvr_fw_object_create(), or
+ *  * Any error returned by pvr_fw_object_vmap().
+ */
+void *
+pvr_fw_object_create_and_map_offset(struct pvr_device *pvr_dev,
+				    u32 dev_offset, size_t size, u64 flags,
+				    void (*init)(void *cpu_ptr, void *priv),
+				    void *init_priv, struct pvr_fw_object **fw_obj_out)
+{
+	u64 dev_addr = pvr_dev->fw_dev.fw_mm_base + dev_offset;
+
+	return pvr_fw_object_create_and_map_common(pvr_dev, size, flags, dev_addr, init, init_priv,
+						   fw_obj_out);
+}
+
+/**
+ * pvr_fw_object_destroy() - Destroy a pvr_fw_object
+ * @fw_obj: Pointer to object to destroy.
+ */
+void pvr_fw_object_destroy(struct pvr_fw_object *fw_obj)
+{
+	struct pvr_gem_object *pvr_obj = fw_obj->gem;
+	struct drm_gem_object *gem_obj = gem_from_pvr_gem(pvr_obj);
+	struct pvr_device *pvr_dev = to_pvr_device(gem_obj->dev);
+
+	mutex_lock(&pvr_dev->fw_dev.fw_objs.lock);
+	list_del(&fw_obj->node);
+	mutex_unlock(&pvr_dev->fw_dev.fw_objs.lock);
+
+	if (drm_mm_node_allocated(&fw_obj->fw_mm_node)) {
+		/* If we can't unmap, leak the memory. */
+		if (WARN_ON(pvr_fw_object_fw_unmap(fw_obj)))
+			return;
+	}
+
+	if (fw_obj->gem)
+		pvr_gem_object_put(fw_obj->gem);
+
+	kfree(fw_obj);
+}
+
+/**
+ * pvr_fw_object_get_fw_addr_offset() - Return address of object in firmware address space, with
+ * given offset.
+ * @fw_obj: Pointer to object.
+ * @offset: Desired offset from start of object.
+ * @fw_addr_out: Location to store address to.
+ */
+void pvr_fw_object_get_fw_addr_offset(struct pvr_fw_object *fw_obj, u32 offset, u32 *fw_addr_out)
+{
+	struct pvr_gem_object *pvr_obj = fw_obj->gem;
+	struct pvr_device *pvr_dev = to_pvr_device(gem_from_pvr_gem(pvr_obj)->dev);
+
+	*fw_addr_out = pvr_dev->fw_dev.defs->get_fw_addr_with_offset(fw_obj, offset);
+}
+
+/**
+ * pvr_fw_hard_reset() - Re-initialise the FW code and data segments, and reset all global FW
+ *                       structures
+ * @pvr_dev: Device pointer
+ *
+ * If this function returns an error then the caller must regard the device as lost.
+ *
+ * Returns:
+ *  * 0 on success, or
+ *  * Any error returned by pvr_fw_reinit_code_data().
+ */
+int
+pvr_fw_hard_reset(struct pvr_device *pvr_dev)
+{
+	struct list_head *pos;
+	int err;
+
+	/* Reset all FW objects */
+	mutex_lock(&pvr_dev->fw_dev.fw_objs.lock);
+
+	list_for_each(pos, &pvr_dev->fw_dev.fw_objs.list) {
+		struct pvr_fw_object *fw_obj = container_of(pos, struct pvr_fw_object, node);
+		void *cpu_ptr = pvr_fw_object_vmap(fw_obj);
+
+		WARN_ON(IS_ERR(cpu_ptr));
+
+		if (!(fw_obj->gem->flags & PVR_BO_FW_NO_CLEAR_ON_RESET)) {
+			memset(cpu_ptr, 0, pvr_gem_object_size(fw_obj->gem));
+
+			if (fw_obj->init)
+				fw_obj->init(cpu_ptr, fw_obj->init_priv);
+		}
+
+		pvr_fw_object_vunmap(fw_obj);
+	}
+
+	mutex_unlock(&pvr_dev->fw_dev.fw_objs.lock);
+
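+	/* Repopulate the FW code/data sections from the driver-side copies. */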
+	err = pvr_fw_reinit_code_data(pvr_dev);
+	if (err)
+		return err;
+
+	return 0;
+}
diff --git a/drivers/gpu/drm/imagination/pvr_fw.h b/drivers/gpu/drm/imagination/pvr_fw.h
index dca7fe5b3dd0..8b36fe6d85e3 100644
--- a/drivers/gpu/drm/imagination/pvr_fw.h
+++ b/drivers/gpu/drm/imagination/pvr_fw.h
@@ -5,6 +5,10 @@
 #define PVR_FW_H
 
 #include "pvr_fw_info.h"
+#include "pvr_fw_trace.h"
+#include "pvr_gem.h"
+
+#include <drm/drm_mm.h>
 
 #include <linux/types.h>
 
@@ -12,6 +16,289 @@
 struct pvr_device;
 struct pvr_file;
 
+/* Forward declaration from "pvr_vm.h". */
+struct pvr_vm_context;
+
+#define ROGUE_FWIF_FWCCB_NUMCMDS_LOG2 5
+
+#define ROGUE_FWIF_KCCB_NUMCMDS_LOG2_DEFAULT 7
+
+/**
+ * struct pvr_fw_object - container for firmware memory allocations
+ */
+struct pvr_fw_object {
+	/** @ref_count: FW object reference counter. */
+	struct kref ref_count;
+
+	/** @gem: GEM object backing the FW object. */
+	struct pvr_gem_object *gem;
+
+	/**
+	 * @fw_mm_node: Node representing mapping in FW address space. @pvr_obj->lock must
+	 *              be held when writing.
+	 */
+	struct drm_mm_node fw_mm_node;
+
+	/**
+	 * @fw_addr_offset: Virtual address offset of firmware mapping. Only
+	 *                  valid while the object is mapped in the FW address
+	 *                  space (i.e. while @fw_mm_node is allocated).
+	 */
+	u32 fw_addr_offset;
+
+	/**
+	 * @init: Initialisation callback. Will be called on object creation and FW hard reset.
+	 *        Object will have been zeroed before this is called.
+	 */
+	void (*init)(void *cpu_ptr, void *priv);
+
+	/** @init_priv: Private data for initialisation callback. */
+	void *init_priv;
+
+	/** @node: Node for firmware object list. */
+	struct list_head node;
+};
+
+/**
+ * struct pvr_fw_defs - FW processor function table and static definitions
+ */
+struct pvr_fw_defs {
+	/**
+	 * @init:
+	 *
+	 * FW processor specific initialisation.
+	 * @pvr_dev: Target PowerVR device.
+	 *
+	 * This function must call pvr_fw_heap_calculate() to initialise the firmware heap for this
+	 * FW processor.
+	 *
+	 * This function is mandatory.
+	 *
+	 * Returns:
+	 *  * 0 on success, or
+	 *  * Any appropriate error on failure.
+	 */
+	int (*init)(struct pvr_device *pvr_dev);
+
+	/**
+	 * @fini:
+	 *
+	 * FW processor specific finalisation.
+	 * @pvr_dev: Target PowerVR device.
+	 *
+	 * This function is optional.
+	 */
+	void (*fini)(struct pvr_device *pvr_dev);
+
+	/**
+	 * @fw_process:
+	 *
+	 * Load and process firmware image.
+	 * @pvr_dev: Target PowerVR device.
+	 * @fw: Pointer to firmware image.
+	 * @fw_code_ptr: Pointer to firmware code section.
+	 * @fw_data_ptr: Pointer to firmware data section.
+	 * @fw_core_code_ptr: Pointer to firmware core code section. May be %NULL.
+	 * @fw_core_data_ptr: Pointer to firmware core data section. May be %NULL.
+	 * @core_code_alloc_size: Total allocation size of core code section.
+	 *
+	 * This function is mandatory.
+	 *
+	 * Returns:
+	 *  * 0 on success, or
+	 *  * Any appropriate error on failure.
+	 */
+	int (*fw_process)(struct pvr_device *pvr_dev, const u8 *fw,
+			  u8 *fw_code_ptr, u8 *fw_data_ptr, u8 *fw_core_code_ptr,
+			  u8 *fw_core_data_ptr, u32 core_code_alloc_size);
+
+	/**
+	 * @vm_map:
+	 *
+	 * Map FW object into FW processor address space.
+	 * @pvr_dev: Target PowerVR device.
+	 * @fw_obj: FW object to map.
+	 *
+	 * This function is mandatory.
+	 *
+	 * Returns:
+	 *  * 0 on success, or
+	 *  * Any appropriate error on failure.
+	 */
+	int (*vm_map)(struct pvr_device *pvr_dev, struct pvr_fw_object *fw_obj);
+
+	/**
+	 * @vm_unmap:
+	 *
+	 * Unmap FW object from FW processor address space.
+	 * @pvr_dev: Target PowerVR device.
+	 * @fw_obj: FW object to unmap.
+	 *
+	 * This function is mandatory.
+	 */
+	void (*vm_unmap)(struct pvr_device *pvr_dev, struct pvr_fw_object *fw_obj);
+
+	/**
+	 * @get_fw_addr_with_offset:
+	 *
+	 * Called to get address of object in firmware address space, with offset.
+	 * @fw_obj: Pointer to object.
+	 * @offset: Desired offset from start of object.
+	 *
+	 * This function is mandatory.
+	 *
+	 * Returns:
+	 *  * Address in firmware address space.
+	 */
+	u32 (*get_fw_addr_with_offset)(struct pvr_fw_object *fw_obj, u32 offset);
+
+	/**
+	 * @wrapper_init:
+	 *
+	 * Called to initialise FW wrapper.
+	 * @pvr_dev: Target PowerVR device.
+	 *
+	 * This function is mandatory.
+	 *
+	 * Returns:
+	 *  * 0 on success, or
+	 *  * Any appropriate error on failure.
+	 */
+	int (*wrapper_init)(struct pvr_device *pvr_dev);
+
+	/**
+	 * @has_fixed_data_addr:
+	 *
+	 * Called to check if firmware fixed data must be loaded at the address given by the
+	 * firmware layout table.
+	 *
+	 * This function is mandatory.
+	 *
+	 * Returns:
+	 *  * %true if firmware fixed data must be loaded at the address given by the firmware
+	 *    layout table.
+	 *  * %false otherwise.
+	 */
+	bool (*has_fixed_data_addr)(void);
+
+	/**
+	 * @irq: FW Interrupt information.
+	 *
+	 * These are processor-dependent, and should be initialised by the
+	 * processor backend in pvr_fw_defs::init().
+	 */
+	struct {
+		/** @enable_reg: FW interrupt enable register. */
+		u32 enable_reg;
+
+		/** @status_reg: FW interrupt status register. */
+		u32 status_reg;
+
+		/**
+		 * @clear_reg: FW interrupt clear register.
+		 *
+		 * If @status_reg == @clear_reg, we clear by writing a bit to zero,
+		 * otherwise we clear by writing a bit to one.
+		 */
+		u32 clear_reg;
+
+		/** @event_mask: Bitmask of events to listen for. */
+		u32 event_mask;
+
+		/** @clear_mask: Value to write to the clear_reg in order to clear FW IRQs. */
+		u32 clear_mask;
+	} irq;
+};
+
+/**
+ * struct pvr_fw_mem - FW memory allocations
+ */
+struct pvr_fw_mem {
+	/** @code_obj: Object representing firmware code. */
+	struct pvr_fw_object *code_obj;
+
+	/** @data_obj: Object representing firmware data. */
+	struct pvr_fw_object *data_obj;
+
+	/**
+	 * @core_code_obj: Object representing firmware core code. May be
+	 *                 %NULL if firmware does not contain this section.
+	 */
+	struct pvr_fw_object *core_code_obj;
+
+	/**
+	 * @core_data_obj: Object representing firmware core data. May be
+	 *                 %NULL if firmware does not contain this section.
+	 */
+	struct pvr_fw_object *core_data_obj;
+
+	/** @code: Driver-side copy of firmware code. */
+	u8 *code;
+
+	/** @data: Driver-side copy of firmware data. */
+	u8 *data;
+
+	/**
+	 * @core_code: Driver-side copy of firmware core code. May be %NULL if firmware does not
+	 *             contain this section.
+	 */
+	u8 *core_code;
+
+	/**
+	 * @core_data: Driver-side copy of firmware core data. May be %NULL if firmware does not
+	 *             contain this section.
+	 */
+	u8 *core_data;
+
+	/** @code_alloc_size: Allocation size of firmware code section. */
+	u32 code_alloc_size;
+
+	/** @data_alloc_size: Allocation size of firmware data section. */
+	u32 data_alloc_size;
+
+	/** @core_code_alloc_size: Allocation size of firmware core code section. */
+	u32 core_code_alloc_size;
+
+	/** @core_data_alloc_size: Allocation size of firmware core data section. */
+	u32 core_data_alloc_size;
+
+	/**
+	 * @fwif_connection_ctl_obj: Object representing FWIF connection control
+	 *                           structure.
+	 */
+	struct pvr_fw_object *fwif_connection_ctl_obj;
+
+	/** @osinit_obj: Object representing FW OSINIT structure. */
+	struct pvr_fw_object *osinit_obj;
+
+	/** @sysinit_obj: Object representing FW SYSINIT structure. */
+	struct pvr_fw_object *sysinit_obj;
+
+	/** @osdata_obj: Object representing FW OSDATA structure. */
+	struct pvr_fw_object *osdata_obj;
+
+	/** @hwrinfobuf_obj: Object representing FW hwrinfobuf structure. */
+	struct pvr_fw_object *hwrinfobuf_obj;
+
+	/** @sysdata_obj: Object representing FW SYSDATA structure. */
+	struct pvr_fw_object *sysdata_obj;
+
+	/** @power_sync_obj: Object representing power sync state. */
+	struct pvr_fw_object *power_sync_obj;
+
+	/** @fault_page_obj: Object representing FW fault page. */
+	struct pvr_fw_object *fault_page_obj;
+
+	/** @gpu_util_fwcb_obj: Object representing FW GPU utilisation control structure. */
+	struct pvr_fw_object *gpu_util_fwcb_obj;
+
+	/** @runtime_cfg_obj: Object representing FW runtime config structure. */
+	struct pvr_fw_object *runtime_cfg_obj;
+
+	/** @mmucache_sync_obj: Object used as the sync parameter in an MMU cache operation. */
+	struct pvr_fw_object *mmucache_sync_obj;
+};
+
 struct pvr_fw_device {
 	/** @firmware: Handle to the firmware loaded into the device. */
 	const struct firmware *firmware;
@@ -22,13 +309,200 @@ struct pvr_fw_device {
 	/** @layout_entries: Pointer to firmware layout. */
 	const struct pvr_fw_layout_entry *layout_entries;
 
+	/** @mem: Structure containing objects representing firmware memory allocations. */
+	struct pvr_fw_mem mem;
+
+	/** @booted: %true if the firmware has been booted, %false otherwise. */
+	bool booted;
+
 	/**
 	 * @processor_type: FW processor type for this device. Must be one of
 	 *                  %PVR_FW_PROCESSOR_TYPE_*.
 	 */
 	u16 processor_type;
+
+	/** @defs: FW processor function table and static definitions for this device. */
+	const struct pvr_fw_defs *defs;
+
+	/** @processor_data: Pointer to data specific to FW processor. */
+	union {
+		/** @mips_data: Pointer to MIPS-specific data. */
+		struct pvr_fw_mips_data *mips_data;
+	} processor_data;
+
+	/** @fw_heap_info: Firmware heap information. */
+	struct {
+		/** @gpu_addr: Base address of firmware heap in GPU address space. */
+		u64 gpu_addr;
+
+		/** @size: Size of main area of heap. */
+		u32 size;
+
+		/** @offset_mask: Mask for offsets within FW heap. */
+		u32 offset_mask;
+
+		/** @raw_size: Raw size of heap, including reserved areas. */
+		u32 raw_size;
+
+		/** @log2_size: Log2 of raw size of heap. */
+		u32 log2_size;
+
+		/** @config_offset: Offset of config area within heap. */
+		u32 config_offset;
+
+		/** @reserved_size: Size of reserved area in heap. */
+		u32 reserved_size;
+	} fw_heap_info;
+
+	/** @fw_mm: Firmware address space allocator. */
+	struct drm_mm fw_mm;
+
+	/** @fw_mm_lock: Lock protecting access to &fw_mm. */
+	spinlock_t fw_mm_lock;
+
+	/** @fw_mm_base: Base address of address space managed by @fw_mm. */
+	u64 fw_mm_base;
+
+	/**
+	 * @fwif_connection_ctl: Pointer to CPU mapping of FWIF connection
+	 *                       control structure.
+	 */
+	struct rogue_fwif_connection_ctl *fwif_connection_ctl;
+
+	/** @fwif_sysinit: Pointer to CPU mapping of FW SYSINIT structure. */
+	struct rogue_fwif_sysinit *fwif_sysinit;
+
+	/** @fwif_sysdata: Pointer to CPU mapping of FW SYSDATA structure. */
+	struct rogue_fwif_sysdata *fwif_sysdata;
+
+	/** @fwif_osinit: Pointer to CPU mapping of FW OSINIT structure. */
+	struct rogue_fwif_osinit *fwif_osinit;
+
+	/** @fwif_osdata: Pointer to CPU mapping of FW OSDATA structure. */
+	struct rogue_fwif_osdata *fwif_osdata;
+
+	/** @power_sync: Pointer to CPU mapping of power sync state. */
+	u32 *power_sync;
+
+	/** @hwrinfobuf: Pointer to CPU mapping of FW HWR info buffer. */
+	struct rogue_fwif_hwrinfobuf *hwrinfobuf;
+
+	/** @fw_trace: Device firmware trace buffer state. */
+	struct pvr_fw_trace fw_trace;
+
+	/** @fw_objs: Structure tracking FW objects. */
+	struct {
+		/** @list: Head of FW object list. */
+		struct list_head list;
+
+		/** @lock: Lock protecting access to FW object list. */
+		struct mutex lock;
+	} fw_objs;
 };
 
+#define pvr_fw_irq_read_reg(pvr_dev, name) \
+	pvr_cr_read32((pvr_dev), (pvr_dev)->fw_dev.defs->irq.name ## _reg)
+
+#define pvr_fw_irq_write_reg(pvr_dev, name, value) \
+	pvr_cr_write32((pvr_dev), (pvr_dev)->fw_dev.defs->irq.name ## _reg, value)
+
+#define pvr_fw_irq_pending(pvr_dev) \
+	(pvr_fw_irq_read_reg(pvr_dev, status) & (pvr_dev)->fw_dev.defs->irq.event_mask)
+
+#define pvr_fw_irq_clear(pvr_dev) \
+	pvr_fw_irq_write_reg(pvr_dev, clear, (pvr_dev)->fw_dev.defs->irq.clear_mask)
+
+#define pvr_fw_irq_enable(pvr_dev) \
+	pvr_fw_irq_write_reg(pvr_dev, enable, (pvr_dev)->fw_dev.defs->irq.event_mask)
+
+#define pvr_fw_irq_disable(pvr_dev) \
+	pvr_fw_irq_write_reg(pvr_dev, enable, 0)
+
+extern const struct pvr_fw_defs pvr_fw_defs_meta;
+extern const struct pvr_fw_defs pvr_fw_defs_mips;
+
 int pvr_fw_validate_init_device_info(struct pvr_device *pvr_dev);
+int pvr_fw_init(struct pvr_device *pvr_dev);
+void pvr_fw_fini(struct pvr_device *pvr_dev);
+
+int pvr_wait_for_fw_boot(struct pvr_device *pvr_dev);
+
+int
+pvr_fw_hard_reset(struct pvr_device *pvr_dev);
+
+void pvr_fw_mts_schedule(struct pvr_device *pvr_dev, u32 val);
+
+void
+pvr_fw_heap_info_init(struct pvr_device *pvr_dev, u32 log2_size, u32 reserved_size);
+
+const struct pvr_fw_layout_entry *
+pvr_fw_find_layout_entry(struct pvr_device *pvr_dev, enum pvr_fw_section_id id);
+int
+pvr_fw_find_mmu_segment(struct pvr_device *pvr_dev, u32 addr, u32 size, void *fw_code_ptr,
+			void *fw_data_ptr, void *fw_core_code_ptr, void *fw_core_data_ptr,
+			void **host_addr_out);
+
+int
+pvr_fw_structure_cleanup(struct pvr_device *pvr_dev, u32 type, struct pvr_fw_object *fw_obj,
+			 u32 offset);
+
+int pvr_fw_object_create(struct pvr_device *pvr_dev, size_t size, u64 flags,
+			 void (*init)(void *cpu_ptr, void *priv), void *init_priv,
+			 struct pvr_fw_object **pvr_obj_out);
+
+void *pvr_fw_object_create_and_map(struct pvr_device *pvr_dev, size_t size, u64 flags,
+				   void (*init)(void *cpu_ptr, void *priv),
+				   void *init_priv, struct pvr_fw_object **pvr_obj_out);
+
+void *
+pvr_fw_object_create_and_map_offset(struct pvr_device *pvr_dev, u32 dev_offset, size_t size,
+				    u64 flags, void (*init)(void *cpu_ptr, void *priv),
+				    void *init_priv, struct pvr_fw_object **pvr_obj_out);
+
+static __always_inline void *
+pvr_fw_object_vmap(struct pvr_fw_object *fw_obj)
+{
+	return pvr_gem_object_vmap(fw_obj->gem);
+}
+
+static __always_inline void
+pvr_fw_object_vunmap(struct pvr_fw_object *fw_obj)
+{
+	pvr_gem_object_vunmap(fw_obj->gem);
+}
+
+void pvr_fw_object_destroy(struct pvr_fw_object *fw_obj);
+
+static __always_inline void
+pvr_fw_object_unmap_and_destroy(struct pvr_fw_object *fw_obj)
+{
+	pvr_fw_object_vunmap(fw_obj);
+	pvr_fw_object_destroy(fw_obj);
+}
+
+/**
+ * pvr_fw_object_get_dma_addr() - Get DMA address for given offset in firmware object
+ * @fw_obj: Pointer to object to lookup address in.
+ * @offset: Offset within object to lookup address at.
+ * @dma_addr_out: Pointer to location to store DMA address.
+ *
+ * Returns:
+ *  * 0 on success, or
+ *  * -%EINVAL if object is not currently backed, or if @offset is out of valid
+ *    range for this object.
+ */
+static __always_inline int
+pvr_fw_object_get_dma_addr(struct pvr_fw_object *fw_obj, u32 offset, dma_addr_t *dma_addr_out)
+{
+	return pvr_gem_get_dma_addr(fw_obj->gem, offset, dma_addr_out);
+}
+
+void pvr_fw_object_get_fw_addr_offset(struct pvr_fw_object *fw_obj, u32 offset, u32 *fw_addr_out);
+
+static __always_inline void
+pvr_fw_object_get_fw_addr(struct pvr_fw_object *fw_obj, u32 *fw_addr_out)
+{
+	pvr_fw_object_get_fw_addr_offset(fw_obj, 0, fw_addr_out);
+}
 
 #endif /* PVR_FW_H */
diff --git a/drivers/gpu/drm/imagination/pvr_fw_meta.c b/drivers/gpu/drm/imagination/pvr_fw_meta.c
new file mode 100644
index 000000000000..7a5d26c0b9e0
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_fw_meta.c
@@ -0,0 +1,554 @@
+// SPDX-License-Identifier: GPL-2.0 OR MIT
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#include "pvr_device.h"
+#include "pvr_fw.h"
+#include "pvr_fw_info.h"
+#include "pvr_gem.h"
+#include "pvr_rogue_cr_defs.h"
+#include "pvr_rogue_meta.h"
+#include "pvr_vm.h"
+
+#include <linux/compiler.h>
+#include <linux/delay.h>
+#include <linux/firmware.h>
+#include <linux/ktime.h>
+#include <linux/types.h>
+
+#define ROGUE_FW_HEAP_META_SHIFT 25 /* 32 MB */
+
+#define POLL_TIMEOUT_USEC 1000000
+
+/**
+ * pvr_meta_cr_read32() - Read a META register via the Slave Port
+ * @pvr_dev: Device pointer.
+ * @reg_addr: Address of register to read.
+ * @reg_value_out: Pointer to location to store register value.
+ *
+ * Returns:
+ *  * 0 on success, or
+ *  * Any error returned by pvr_cr_poll_reg32().
+ */
+int
+pvr_meta_cr_read32(struct pvr_device *pvr_dev, u32 reg_addr, u32 *reg_value_out)
+{
+	int err;
+
+	/* Wait for Slave Port to be Ready. */
+	err = pvr_cr_poll_reg32(pvr_dev, ROGUE_CR_META_SP_MSLVCTRL1,
+				ROGUE_CR_META_SP_MSLVCTRL1_READY_EN |
+					ROGUE_CR_META_SP_MSLVCTRL1_GBLPORT_IDLE_EN,
+				ROGUE_CR_META_SP_MSLVCTRL1_READY_EN |
+					ROGUE_CR_META_SP_MSLVCTRL1_GBLPORT_IDLE_EN,
+				POLL_TIMEOUT_USEC);
+	if (err)
+		return err;
+
+	/* Issue a Read. */
+	pvr_cr_write32(pvr_dev, ROGUE_CR_META_SP_MSLVCTRL0,
+		       reg_addr | ROGUE_CR_META_SP_MSLVCTRL0_RD_EN);
+	(void)pvr_cr_read32(pvr_dev, ROGUE_CR_META_SP_MSLVCTRL0); /* Fence write. */
+
+	/* Wait for Slave Port to be Ready. */
+	err = pvr_cr_poll_reg32(pvr_dev, ROGUE_CR_META_SP_MSLVCTRL1,
+				ROGUE_CR_META_SP_MSLVCTRL1_READY_EN |
+					ROGUE_CR_META_SP_MSLVCTRL1_GBLPORT_IDLE_EN,
+				ROGUE_CR_META_SP_MSLVCTRL1_READY_EN |
+					ROGUE_CR_META_SP_MSLVCTRL1_GBLPORT_IDLE_EN,
+				POLL_TIMEOUT_USEC);
+	if (err)
+		return err;
+
+	*reg_value_out = pvr_cr_read32(pvr_dev, ROGUE_CR_META_SP_MSLVDATAX);
+
+	return 0;
+}
+
+static int
+pvr_meta_wrapper_init(struct pvr_device *pvr_dev)
+{
+	u64 garten_config;
+
+	/* Configure META to Master boot. */
+	pvr_cr_write64(pvr_dev, ROGUE_CR_META_BOOT, ROGUE_CR_META_BOOT_MODE_EN);
+
+	/* Set Garten IDLE to META idle and set the Garten Wrapper BIF Fence address. */
+
+	/* Garten IDLE bit controlled by META. */
+	garten_config = ROGUE_CR_MTS_GARTEN_WRAPPER_CONFIG_IDLE_CTRL_META;
+
+	/* The fence addr is set during the fw init sequence. */
+
+	/* Set PC = 0 for fences. */
+	garten_config &=
+		ROGUE_CR_MTS_GARTEN_WRAPPER_CONFIG_FENCE_PC_BASE_CLRMSK;
+	garten_config |=
+		(u64)MMU_CONTEXT_MAPPING_FWPRIV
+		<< ROGUE_CR_MTS_GARTEN_WRAPPER_CONFIG_FENCE_PC_BASE_SHIFT;
+
+	/* Set SLC DM=META. */
+	garten_config |= ((u64)ROGUE_FW_SEGMMU_META_BIFDM_ID)
+			 << ROGUE_CR_MTS_GARTEN_WRAPPER_CONFIG_FENCE_DM_SHIFT;
+
+	pvr_cr_write64(pvr_dev, ROGUE_CR_MTS_GARTEN_WRAPPER_CONFIG, garten_config);
+
+	return 0;
+}
+
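+/*
+ * Append a (register, value) pair to the bootloader configuration stream and
+ * advance the write pointer.
+ */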
+static __always_inline void
+add_boot_arg(u32 **boot_conf, u32 param, u32 data)
+{
+	*(*boot_conf)++ = param;
+	*(*boot_conf)++ = data;
+}
+
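+/*
+ * LDR LOADMEM command: bounds-check an L2 data block against the firmware
+ * image, then copy it into the matching FW memory segment.
+ */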
+static int
+meta_ldr_cmd_loadmem(struct drm_device *drm_dev, const u8 *fw,
+		     struct rogue_meta_ldr_l1_data_blk *l1_data, u32 coremem_size, u8 *fw_code_ptr,
+		     u8 *fw_data_ptr, u8 *fw_core_code_ptr, u8 *fw_core_data_ptr, const u32 fw_size)
+{
+	struct rogue_meta_ldr_l2_data_blk *l2_block =
+		(struct rogue_meta_ldr_l2_data_blk *)(fw +
+						      l1_data->cmd_data[1]);
+	struct pvr_device *pvr_dev = to_pvr_device(drm_dev);
+	u32 offset = l1_data->cmd_data[0];
+	u32 data_size;
+	void *write_addr;
+	int err;
+
+	/* Verify header is within bounds. */
+	if (((u8 *)l2_block - fw) >= fw_size || ((u8 *)(l2_block + 1) - fw) >= fw_size)
+		return -EINVAL;
+
+	data_size = l2_block->length - 6 /* L2 Tag length and checksum */;
+
+	/* Verify data is within bounds. */
+	if (((u8 *)l2_block->block_data - fw) >= fw_size ||
+	    ((((u8 *)l2_block->block_data) + data_size) - fw) >= fw_size)
+		return -EINVAL;
+
+	if (!ROGUE_META_IS_COREMEM_CODE(offset, coremem_size) &&
+	    !ROGUE_META_IS_COREMEM_DATA(offset, coremem_size)) {
+		/* Global range is aliased to local range */
+		offset &= ~META_MEM_GLOBAL_RANGE_BIT;
+	}
+
+	err = pvr_fw_find_mmu_segment(pvr_dev, offset, data_size, fw_code_ptr, fw_data_ptr,
+				      fw_core_code_ptr, fw_core_data_ptr, &write_addr);
+	if (err) {
+		drm_err(drm_dev,
+			"Addr 0x%x (size: %d) not found in any firmware segment",
+			offset, data_size);
+		return err;
+	}
+
+	memcpy(write_addr, l2_block->block_data, data_size);
+
+	return 0;
+}
+
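+/*
+ * LDR ZEROMEM command: zero a range within a FW memory segment. Coremem data
+ * cannot be zeroed directly, so such ranges are skipped.
+ */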
+static int
+meta_ldr_cmd_zeromem(struct drm_device *drm_dev,
+		     struct rogue_meta_ldr_l1_data_blk *l1_data, u32 coremem_size,
+		     u8 *fw_code_ptr, u8 *fw_data_ptr, u8 *fw_core_code_ptr, u8 *fw_core_data_ptr)
+{
+	struct pvr_device *pvr_dev = to_pvr_device(drm_dev);
+	u32 offset = l1_data->cmd_data[0];
+	u32 byte_count = l1_data->cmd_data[1];
+	void *write_addr;
+	int err;
+
+	if (ROGUE_META_IS_COREMEM_DATA(offset, coremem_size)) {
+		/* cannot zero coremem directly */
+		return 0;
+	}
+
+	/* Global range is aliased to local range */
+	offset &= ~META_MEM_GLOBAL_RANGE_BIT;
+
+	err = pvr_fw_find_mmu_segment(pvr_dev, offset, byte_count, fw_code_ptr, fw_data_ptr,
+				      fw_core_code_ptr, fw_core_data_ptr, &write_addr);
+	if (err) {
+		drm_err(drm_dev,
+			"Addr 0x%x (size: %d) not found in any firmware segment",
+			offset, byte_count);
+		return err;
+	}
+
+	memset(write_addr, 0, byte_count);
+
+	return 0;
+}
+
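+/*
+ * LDR CONFIG command: translate register write entries from the config block
+ * into bootloader configuration arguments.
+ */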
+static int
+meta_ldr_cmd_config(struct drm_device *drm_dev, const u8 *fw,
+		    struct rogue_meta_ldr_l1_data_blk *l1_data,
+		    const u32 fw_size, u32 **boot_conf_ptr)
+{
+	struct rogue_meta_ldr_l2_data_blk *l2_block =
+		(struct rogue_meta_ldr_l2_data_blk *)(fw +
+						      l1_data->cmd_data[0]);
+	struct rogue_meta_ldr_cfg_blk *config_command;
+	u32 l2_block_size;
+	u32 curr_block_size = 0;
+	u32 *boot_conf = boot_conf_ptr ? *boot_conf_ptr : NULL;
+
+	/* Verify block header is within bounds. */
+	if (((u8 *)l2_block - fw) >= fw_size || ((u8 *)(l2_block + 1) - fw) >= fw_size)
+		return -EINVAL;
+
+	l2_block_size = l2_block->length - 6 /* L2 Tag length and checksum */;
+	config_command = (struct rogue_meta_ldr_cfg_blk *)l2_block->block_data;
+
+	if (((u8 *)config_command - fw) >= fw_size ||
+	    ((((u8 *)config_command) + l2_block_size) - fw) >= fw_size)
+		return -EINVAL;
+
+	while (l2_block_size >= 12) {
+		if (config_command->type != ROGUE_META_LDR_CFG_WRITE)
+			return -EINVAL;
+
+		/*
+		 * Only write to bootloader if we got a valid pointer to the FW
+		 * code allocation.
+		 */
+		if (boot_conf) {
+			u32 register_offset = config_command->block_data[0];
+			u32 register_value = config_command->block_data[1];
+
+			/* Do register write */
+			add_boot_arg(&boot_conf, register_offset,
+				     register_value);
+		}
+
+		curr_block_size = 12;
+		l2_block_size -= curr_block_size;
+		config_command = (struct rogue_meta_ldr_cfg_blk
+					  *)((uintptr_t)config_command +
+					     curr_block_size);
+	}
+
+	if (boot_conf_ptr)
+		*boot_conf_ptr = boot_conf;
+
+	return 0;
+}
+
+/**
+ * process_ldr_command_stream() - Process LDR firmware image and populate
+ *                                firmware sections
+ * @pvr_dev: Device pointer.
+ * @fw: Pointer to firmware image.
+ * @fw_code_ptr: Pointer to FW code section.
+ * @fw_data_ptr: Pointer to FW data section.
+ * @fw_core_code_ptr: Pointer to FW coremem code section.
+ * @fw_core_data_ptr: Pointer to FW coremem data section.
+ * @boot_conf_ptr: Pointer to boot config argument pointer.
+ *
+ * Returns:
+ *  * 0 on success, or
+ *  * -EINVAL on any error in LDR command stream.
+ */
+static int
+process_ldr_command_stream(struct pvr_device *pvr_dev, const u8 *fw, u8 *fw_code_ptr,
+			   u8 *fw_data_ptr, u8 *fw_core_code_ptr,
+			   u8 *fw_core_data_ptr, u32 **boot_conf_ptr)
+{
+	struct drm_device *drm_dev = from_pvr_device(pvr_dev);
+	struct rogue_meta_ldr_block_hdr *ldr_header =
+		(struct rogue_meta_ldr_block_hdr *)fw;
+	struct rogue_meta_ldr_l1_data_blk *l1_data =
+		(struct rogue_meta_ldr_l1_data_blk *)(fw + ldr_header->sl_data);
+	const u32 fw_size = pvr_dev->fw_dev.firmware->size;
+	int err;
+
+	u32 *boot_conf = boot_conf_ptr ? *boot_conf_ptr : NULL;
+	u32 coremem_size;
+
+	err = PVR_FEATURE_VALUE(pvr_dev, meta_coremem_size, &coremem_size);
+	if (err)
+		return err;
+
+	coremem_size *= SZ_1K;
+
+	while (l1_data) {
+		/* Verify block header is within bounds. */
+		if (((u8 *)l1_data - fw) >= fw_size || ((u8 *)(l1_data + 1) - fw) >= fw_size)
+			return -EINVAL;
+
+		if (ROGUE_META_LDR_BLK_IS_COMMENT(l1_data->cmd)) {
+			/* Don't process comment blocks */
+			goto next_block;
+		}
+
+		switch (l1_data->cmd & ROGUE_META_LDR_CMD_MASK) {
+		case ROGUE_META_LDR_CMD_LOADMEM:
+			err = meta_ldr_cmd_loadmem(drm_dev, fw, l1_data,
+						   coremem_size,
+						   fw_code_ptr, fw_data_ptr,
+						   fw_core_code_ptr,
+						   fw_core_data_ptr, fw_size);
+			if (err)
+				return err;
+			break;
+
+		case ROGUE_META_LDR_CMD_START_THREADS:
+			/* Don't process this block */
+			break;
+
+		case ROGUE_META_LDR_CMD_ZEROMEM:
+			err = meta_ldr_cmd_zeromem(drm_dev, l1_data,
+						   coremem_size,
+						   fw_code_ptr, fw_data_ptr,
+						   fw_core_code_ptr,
+						   fw_core_data_ptr);
+			if (err)
+				return err;
+			break;
+
+		case ROGUE_META_LDR_CMD_CONFIG:
+			err = meta_ldr_cmd_config(drm_dev, fw, l1_data, fw_size,
+						  &boot_conf);
+			if (err)
+				return err;
+			break;
+
+		default:
+			return -EINVAL;
+		}
+
+next_block:
+		if (l1_data->next == 0xFFFFFFFF)
+			break;
+
+		l1_data = (struct rogue_meta_ldr_l1_data_blk *)(fw +
+								l1_data->next);
+	}
+
+	if (boot_conf_ptr)
+		*boot_conf_ptr = boot_conf;
+
+	return 0;
+}
+
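+/*
+ * Emit the bootloader arguments that program a single META segment MMU entry:
+ * base, limit (expressed as size - 1) and the 64-bit output address.
+ */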
+static void
+configure_seg_id(u64 seg_out_addr, u32 seg_base, u32 seg_limit, u32 seg_id,
+		 u32 **boot_conf_ptr)
+{
+	u32 seg_out_addr0 = seg_out_addr & 0x00000000FFFFFFFFUL;
+	u32 seg_out_addr1 = (seg_out_addr >> 32) & 0x00000000FFFFFFFFUL;
+	u32 *boot_conf = *boot_conf_ptr;
+
+	/* META segments have a minimum size. */
+	u32 limit_off = max(seg_limit, ROGUE_FW_SEGMMU_ALIGN);
+
+	/* The limit is an offset, therefore off = size - 1. */
+	limit_off -= 1;
+
+	seg_base |= ROGUE_FW_SEGMMU_ALLTHRS_WRITEABLE;
+
+	add_boot_arg(&boot_conf, META_CR_MMCU_SEGMENT_N_BASE(seg_id), seg_base);
+	add_boot_arg(&boot_conf, META_CR_MMCU_SEGMENT_N_LIMIT(seg_id), limit_off);
+	add_boot_arg(&boot_conf, META_CR_MMCU_SEGMENT_N_OUTA0(seg_id), seg_out_addr0);
+	add_boot_arg(&boot_conf, META_CR_MMCU_SEGMENT_N_OUTA1(seg_id), seg_out_addr1);
+
+	*boot_conf_ptr = boot_conf;
+}
+
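+/* Return the GPU address of a FW object within the firmware heap. */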
+static u64 get_fw_obj_gpu_addr(struct pvr_fw_object *fw_obj)
+{
+	struct pvr_device *pvr_dev = to_pvr_device(gem_from_pvr_gem(fw_obj->gem)->dev);
+	struct pvr_fw_device *fw_dev = &pvr_dev->fw_dev;
+
+	return fw_obj->fw_addr_offset + fw_dev->fw_heap_info.gpu_addr;
+}
+
+static void
+configure_seg_mmu(struct pvr_device *pvr_dev, u32 **boot_conf_ptr)
+{
+	const struct pvr_fw_layout_entry *layout_entries = pvr_dev->fw_dev.layout_entries;
+	u32 num_layout_entries = pvr_dev->fw_dev.header->layout_entry_num;
+	u64 seg_out_addr_top;
+	u32 i;
+
+	seg_out_addr_top =
+		ROGUE_FW_SEGMMU_OUTADDR_TOP_SLC(MMU_CONTEXT_MAPPING_FWPRIV,
+						ROGUE_FW_SEGMMU_META_BIFDM_ID);
+
+	for (i = 0; i < num_layout_entries; i++) {
+		/*
+		 * FW code is using the bootloader segment which is already
+		 * configured on boot. FW coremem code and data don't use the
+		 * segment MMU. Only the FW data segment needs to be configured.
+		 */
+		if (layout_entries[i].type == FW_DATA) {
+			u32 seg_id = ROGUE_FW_SEGMMU_DATA_ID;
+			u64 seg_out_addr = get_fw_obj_gpu_addr(pvr_dev->fw_dev.mem.data_obj);
+
+			seg_out_addr += layout_entries[i].alloc_offset;
+			seg_out_addr |= seg_out_addr_top;
+
+			/* Write the sequence to the bootldr. */
+			configure_seg_id(seg_out_addr,
+					 layout_entries[i].base_addr,
+					 layout_entries[i].alloc_size, seg_id,
+					 boot_conf_ptr);
+
+			break;
+		}
+	}
+}
+
+static void
+configure_meta_caches(u32 **boot_conf_ptr)
+{
+	u32 *boot_conf = *boot_conf_ptr;
+	u32 d_cache_t0, i_cache_t0;
+	u32 d_cache_t1, i_cache_t1;
+	u32 d_cache_t2, i_cache_t2;
+	u32 d_cache_t3, i_cache_t3;
+
+	/* Initialise I/Dcache settings */
+	d_cache_t0 = META_CR_SYSC_DCPARTX_CACHED_WRITE_ENABLE;
+	d_cache_t1 = META_CR_SYSC_DCPARTX_CACHED_WRITE_ENABLE;
+	d_cache_t2 = META_CR_SYSC_DCPARTX_CACHED_WRITE_ENABLE;
+	d_cache_t3 = META_CR_SYSC_DCPARTX_CACHED_WRITE_ENABLE;
+	i_cache_t0 = 0;
+	i_cache_t1 = 0;
+	i_cache_t2 = 0;
+	i_cache_t3 = 0;
+
+	d_cache_t0 |= META_CR_SYSC_XCPARTX_LOCAL_ADDR_FULL_CACHE;
+	i_cache_t0 |= META_CR_SYSC_XCPARTX_LOCAL_ADDR_FULL_CACHE;
+
+	/* Local region MMU enhanced bypass: WIN-3 mode for code and data caches */
+	add_boot_arg(&boot_conf, META_CR_MMCU_LOCAL_EBCTRL,
+		     META_CR_MMCU_LOCAL_EBCTRL_ICWIN |
+			     META_CR_MMCU_LOCAL_EBCTRL_DCWIN);
+
+	/* Data cache partitioning thread 0 to 3 */
+	add_boot_arg(&boot_conf, META_CR_SYSC_DCPART(0), d_cache_t0);
+	add_boot_arg(&boot_conf, META_CR_SYSC_DCPART(1), d_cache_t1);
+	add_boot_arg(&boot_conf, META_CR_SYSC_DCPART(2), d_cache_t2);
+	add_boot_arg(&boot_conf, META_CR_SYSC_DCPART(3), d_cache_t3);
+
+	/* Enable data cache hits */
+	add_boot_arg(&boot_conf, META_CR_MMCU_DCACHE_CTRL,
+		     META_CR_MMCU_XCACHE_CTRL_CACHE_HITS_EN);
+
+	/* Instruction cache partitioning thread 0 to 3 */
+	add_boot_arg(&boot_conf, META_CR_SYSC_ICPART(0), i_cache_t0);
+	add_boot_arg(&boot_conf, META_CR_SYSC_ICPART(1), i_cache_t1);
+	add_boot_arg(&boot_conf, META_CR_SYSC_ICPART(2), i_cache_t2);
+	add_boot_arg(&boot_conf, META_CR_SYSC_ICPART(3), i_cache_t3);
+
+	/* Enable instruction cache hits */
+	add_boot_arg(&boot_conf, META_CR_MMCU_ICACHE_CTRL,
+		     META_CR_MMCU_XCACHE_CTRL_CACHE_HITS_EN);
+
+	add_boot_arg(&boot_conf, 0x040000C0, 0);
+
+	*boot_conf_ptr = boot_conf;
+}
+
+static int
+pvr_meta_fw_process(struct pvr_device *pvr_dev, const u8 *fw,
+		    u8 *fw_code_ptr, u8 *fw_data_ptr, u8 *fw_core_code_ptr, u8 *fw_core_data_ptr,
+		    u32 core_code_alloc_size)
+{
+	struct pvr_fw_device *fw_dev = &pvr_dev->fw_dev;
+	u32 *boot_conf;
+	int err;
+
+	boot_conf = ((u32 *)fw_code_ptr) + ROGUE_FW_BOOTLDR_CONF_OFFSET;
+
+	/* Slave port and JTAG accesses are privileged. */
+	add_boot_arg(&boot_conf, META_CR_SYSC_JTAG_THREAD,
+		     META_CR_SYSC_JTAG_THREAD_PRIV_EN);
+
+	configure_seg_mmu(pvr_dev, &boot_conf);
+
+	/* Populate FW sections from LDR image. */
+	err = process_ldr_command_stream(pvr_dev, fw, fw_code_ptr, fw_data_ptr, fw_core_code_ptr,
+					 fw_core_data_ptr, &boot_conf);
+	if (err)
+		return err;
+
+	configure_meta_caches(&boot_conf);
+
+	/* End argument list. */
+	add_boot_arg(&boot_conf, 0, 0);
+
+	if (fw_dev->mem.core_code_obj) {
+		u32 core_code_fw_addr;
+
+		pvr_fw_object_get_fw_addr(fw_dev->mem.core_code_obj, &core_code_fw_addr);
+		add_boot_arg(&boot_conf, core_code_fw_addr, core_code_alloc_size);
+	} else {
+		add_boot_arg(&boot_conf, 0, 0);
+	}
+	/* None of the cores supported by this driver have META DMA. */
+	add_boot_arg(&boot_conf, 0, 0);
+
+	return 0;
+}
+
+static int
+pvr_meta_init(struct pvr_device *pvr_dev)
+{
+	pvr_fw_heap_info_init(pvr_dev, ROGUE_FW_HEAP_META_SHIFT, 0);
+
+	return 0;
+}
+
+static u32
+pvr_meta_get_fw_addr_with_offset(struct pvr_fw_object *fw_obj, u32 offset)
+{
+	u32 fw_addr = fw_obj->fw_addr_offset + offset + ROGUE_FW_SEGMMU_DATA_BASE_ADDRESS;
+
+	/* META cacheability is determined by address. */
+	if (fw_obj->gem->flags & PVR_BO_FW_FLAGS_DEVICE_UNCACHED)
+		fw_addr |= ROGUE_FW_SEGMMU_DATA_META_UNCACHED |
+			   ROGUE_FW_SEGMMU_DATA_VIVT_SLC_UNCACHED;
+
+	return fw_addr;
+}
+
+static int
+pvr_meta_vm_map(struct pvr_device *pvr_dev, struct pvr_fw_object *fw_obj)
+{
+	struct pvr_gem_object *pvr_obj = fw_obj->gem;
+
+	return pvr_vm_map(pvr_dev->kernel_vm_ctx, pvr_obj, 0, fw_obj->fw_mm_node.start,
+			  pvr_gem_object_size(pvr_obj));
+}
+
+static void
+pvr_meta_vm_unmap(struct pvr_device *pvr_dev, struct pvr_fw_object *fw_obj)
+{
+	pvr_vm_unmap(pvr_dev->kernel_vm_ctx, fw_obj->fw_mm_node.start,
+		     fw_obj->fw_mm_node.size);
+}
+
+static bool
+pvr_meta_has_fixed_data_addr(void)
+{
+	return false;
+}
+
+const struct pvr_fw_defs pvr_fw_defs_meta = {
+	.init = pvr_meta_init,
+	.fw_process = pvr_meta_fw_process,
+	.vm_map = pvr_meta_vm_map,
+	.vm_unmap = pvr_meta_vm_unmap,
+	.get_fw_addr_with_offset = pvr_meta_get_fw_addr_with_offset,
+	.wrapper_init = pvr_meta_wrapper_init,
+	.has_fixed_data_addr = pvr_meta_has_fixed_data_addr,
+	.irq = {
+		.enable_reg = ROGUE_CR_META_SP_MSLVIRQENABLE,
+		.status_reg = ROGUE_CR_META_SP_MSLVIRQSTATUS,
+		.clear_reg = ROGUE_CR_META_SP_MSLVIRQSTATUS,
+		.event_mask = ROGUE_CR_META_SP_MSLVIRQSTATUS_TRIGVECT2_EN,
+		.clear_mask = ROGUE_CR_META_SP_MSLVIRQSTATUS_TRIGVECT2_CLRMSK,
+	},
+};
diff --git a/drivers/gpu/drm/imagination/pvr_fw_meta.h b/drivers/gpu/drm/imagination/pvr_fw_meta.h
new file mode 100644
index 000000000000..d88970e82334
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_fw_meta.h
@@ -0,0 +1,14 @@
+/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#ifndef PVR_FW_META_H
+#define PVR_FW_META_H
+
+#include <linux/types.h>
+
+/* Forward declaration from pvr_device.h */
+struct pvr_device;
+
+int pvr_meta_cr_read32(struct pvr_device *pvr_dev, u32 reg_addr, u32 *reg_value_out);
+
+#endif /* PVR_FW_META_H */
diff --git a/drivers/gpu/drm/imagination/pvr_fw_startstop.c b/drivers/gpu/drm/imagination/pvr_fw_startstop.c
new file mode 100644
index 000000000000..a9db0c60e57f
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_fw_startstop.c
@@ -0,0 +1,301 @@
+// SPDX-License-Identifier: GPL-2.0 OR MIT
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#include "pvr_device.h"
+#include "pvr_fw.h"
+#include "pvr_fw_meta.h"
+#include "pvr_fw_startstop.h"
+#include "pvr_rogue_cr_defs.h"
+#include "pvr_rogue_meta.h"
+#include "pvr_vm.h"
+
+#include <linux/compiler.h>
+#include <linux/delay.h>
+#include <linux/ktime.h>
+#include <linux/types.h>
+
+#define POLL_TIMEOUT_USEC 1000000
+
+static void
+rogue_axi_ace_list_init(struct pvr_device *pvr_dev)
+{
+	/* Setup AXI-ACE config. Set everything to outer cache. */
+	u64 reg_val =
+		(3U << ROGUE_CR_AXI_ACE_LITE_CONFIGURATION_AWDOMAIN_NON_SNOOPING_SHIFT) |
+		(3U << ROGUE_CR_AXI_ACE_LITE_CONFIGURATION_ARDOMAIN_NON_SNOOPING_SHIFT) |
+		(2U << ROGUE_CR_AXI_ACE_LITE_CONFIGURATION_ARDOMAIN_CACHE_MAINTENANCE_SHIFT) |
+		(2U << ROGUE_CR_AXI_ACE_LITE_CONFIGURATION_AWDOMAIN_COHERENT_SHIFT) |
+		(2U << ROGUE_CR_AXI_ACE_LITE_CONFIGURATION_ARDOMAIN_COHERENT_SHIFT) |
+		(2U << ROGUE_CR_AXI_ACE_LITE_CONFIGURATION_AWCACHE_COHERENT_SHIFT) |
+		(2U << ROGUE_CR_AXI_ACE_LITE_CONFIGURATION_ARCACHE_COHERENT_SHIFT) |
+		(2U << ROGUE_CR_AXI_ACE_LITE_CONFIGURATION_ARCACHE_CACHE_MAINTENANCE_SHIFT);
+
+	pvr_cr_write64(pvr_dev, ROGUE_CR_AXI_ACE_LITE_CONFIGURATION, reg_val);
+}
+
+static void
+rogue_bif_init(struct pvr_device *pvr_dev)
+{
+	dma_addr_t pc_dma_addr;
+	u64 pc_addr;
+
+	/* Acquire the address of the Kernel Page Catalogue. */
+	pc_dma_addr = pvr_vm_get_page_table_root_addr(pvr_dev->kernel_vm_ctx);
+
+	/* Write the kernel catalogue base. */
+	pc_addr = ((((u64)pc_dma_addr >> ROGUE_CR_BIF_CAT_BASE0_ADDR_ALIGNSHIFT)
+		    << ROGUE_CR_BIF_CAT_BASE0_ADDR_SHIFT) &
+		   ~ROGUE_CR_BIF_CAT_BASE0_ADDR_CLRMSK);
+
+	pvr_cr_write64(pvr_dev, BIF_CAT_BASEX(MMU_CONTEXT_MAPPING_FWPRIV),
+		       pc_addr);
+}
+
+static int
+rogue_slc_init(struct pvr_device *pvr_dev)
+{
+	u16 slc_cache_line_size_bits;
+	u32 reg_val;
+	int err;
+
+	/*
+	 * SLC Misc control.
+	 *
+	 * Note: This is a 64bit register and we set only the lower 32bits
+	 *       leaving the top 32bits (ROGUE_CR_SLC_CTRL_MISC_SCRAMBLE_BITS)
+	 *       unchanged from the HW default.
+	 */
+	reg_val = (pvr_cr_read32(pvr_dev, ROGUE_CR_SLC_CTRL_MISC) &
+		      ROGUE_CR_SLC_CTRL_MISC_ENABLE_PSG_HAZARD_CHECK_EN) |
+		     ROGUE_CR_SLC_CTRL_MISC_ADDR_DECODE_MODE_PVR_HASH1;
+
+	err = PVR_FEATURE_VALUE(pvr_dev, slc_cache_line_size_bits, &slc_cache_line_size_bits);
+	if (err)
+		return err;
+
+	/* Bypass burst combiner if SLC line size is smaller than 1024 bits. */
+	if (slc_cache_line_size_bits < 1024)
+		reg_val |= ROGUE_CR_SLC_CTRL_MISC_BYPASS_BURST_COMBINER_EN;
+
+	pvr_cr_write32(pvr_dev, ROGUE_CR_SLC_CTRL_MISC, reg_val);
+
+	return 0;
+}
+
+/**
+ * pvr_fw_start() - Start FW processor and boot firmware
+ * @pvr_dev: Target PowerVR device.
+ *
+ * Returns:
+ *  * 0 on success, or
+ *  * Any error returned by rogue_slc_init().
+ */
+int
+pvr_fw_start(struct pvr_device *pvr_dev)
+{
+	bool has_reset2 = PVR_HAS_FEATURE(pvr_dev, xe_tpu2);
+	u64 soft_reset_mask;
+	int err;
+
+	if (PVR_HAS_FEATURE(pvr_dev, pbe2_in_xe))
+		soft_reset_mask = ROGUE_CR_SOFT_RESET__PBE2_XE__MASKFULL;
+	else
+		soft_reset_mask = ROGUE_CR_SOFT_RESET_MASKFULL;
+
+	if (PVR_HAS_FEATURE(pvr_dev, sys_bus_secure_reset)) {
+		/*
+		 * Disable the default sys_bus_secure protection to perform
+		 * minimal setup.
+		 */
+		pvr_cr_write32(pvr_dev, ROGUE_CR_SYS_BUS_SECURE, 0);
+		(void)pvr_cr_read32(pvr_dev, ROGUE_CR_SYS_BUS_SECURE); /* Fence write */
+	}
+
+	/* Set Rogue in soft-reset. */
+	pvr_cr_write64(pvr_dev, ROGUE_CR_SOFT_RESET, soft_reset_mask);
+	if (has_reset2)
+		pvr_cr_write64(pvr_dev, ROGUE_CR_SOFT_RESET2, ROGUE_CR_SOFT_RESET2_MASKFULL);
+
+	/* Read soft-reset to fence previous write in order to clear the SOCIF pipeline. */
+	(void)pvr_cr_read64(pvr_dev, ROGUE_CR_SOFT_RESET);
+	if (has_reset2)
+		(void)pvr_cr_read64(pvr_dev, ROGUE_CR_SOFT_RESET2);
+
+	/* Take Rascal and Dust out of reset. */
+	pvr_cr_write64(pvr_dev, ROGUE_CR_SOFT_RESET,
+		       soft_reset_mask ^ ROGUE_CR_SOFT_RESET_RASCALDUSTS_EN);
+	if (has_reset2)
+		pvr_cr_write64(pvr_dev, ROGUE_CR_SOFT_RESET2, 0);
+
+	(void)pvr_cr_read64(pvr_dev, ROGUE_CR_SOFT_RESET);
+	if (has_reset2)
+		(void)pvr_cr_read64(pvr_dev, ROGUE_CR_SOFT_RESET2);
+
+	/* Take everything out of reset but the FW processor. */
+	pvr_cr_write64(pvr_dev, ROGUE_CR_SOFT_RESET, ROGUE_CR_SOFT_RESET_GARTEN_EN);
+	if (has_reset2)
+		pvr_cr_write64(pvr_dev, ROGUE_CR_SOFT_RESET2, 0);
+
+	(void)pvr_cr_read64(pvr_dev, ROGUE_CR_SOFT_RESET);
+	if (has_reset2)
+		(void)pvr_cr_read64(pvr_dev, ROGUE_CR_SOFT_RESET2);
+
+	err = rogue_slc_init(pvr_dev);
+	if (err)
+		goto err_reset;
+
+	/* Initialise Firmware wrapper. */
+	pvr_dev->fw_dev.defs->wrapper_init(pvr_dev);
+
+	/* We must init the AXI-ACE interface before first BIF transaction. */
+	rogue_axi_ace_list_init(pvr_dev);
+
+	/* Initialise BIF. */
+	rogue_bif_init(pvr_dev);
+
+	/* Need to wait for at least 16 cycles before taking the FW processor out of reset ... */
+	udelay(3);
+
+	pvr_cr_write64(pvr_dev, ROGUE_CR_SOFT_RESET, 0x0);
+	(void)pvr_cr_read64(pvr_dev, ROGUE_CR_SOFT_RESET);
+
+	/* ... and afterwards. */
+	udelay(3);
+
+	return 0;
+
+err_reset:
+	/* Put everything back into soft-reset. */
+	pvr_cr_write64(pvr_dev, ROGUE_CR_SOFT_RESET, soft_reset_mask);
+
+	return err;
+}
+
+/**
+ * pvr_fw_stop() - Stop FW processor
+ * @pvr_dev: Target PowerVR device.
+ *
+ * Returns:
+ *  * 0 on success, or
+ *  * Any error returned by pvr_cr_poll_reg32().
+ */
+int
+pvr_fw_stop(struct pvr_device *pvr_dev)
+{
+	const u32 sidekick_idle_mask = ROGUE_CR_SIDEKICK_IDLE_MASKFULL &
+				       ~(ROGUE_CR_SIDEKICK_IDLE_GARTEN_EN |
+					 ROGUE_CR_SIDEKICK_IDLE_SOCIF_EN |
+					 ROGUE_CR_SIDEKICK_IDLE_HOSTIF_EN);
+	bool skip_garten_idle = false;
+	u32 reg_value;
+	int err;
+
+	/*
+	 * Wait for Sidekick/Jones to signal IDLE except for the Garten Wrapper.
+	 * For cores with the LAYOUT_MARS feature, SIDEKICK would have been
+	 * powered down by the FW.
+	 */
+	err = pvr_cr_poll_reg32(pvr_dev, ROGUE_CR_SIDEKICK_IDLE, sidekick_idle_mask,
+				sidekick_idle_mask, POLL_TIMEOUT_USEC);
+	if (err)
+		return err;
+
+	/* Unset MTS DM association with threads. */
+	pvr_cr_write32(pvr_dev, ROGUE_CR_MTS_INTCTX_THREAD0_DM_ASSOC,
+		       ROGUE_CR_MTS_INTCTX_THREAD0_DM_ASSOC_MASKFULL &
+		       ROGUE_CR_MTS_INTCTX_THREAD0_DM_ASSOC_DM_ASSOC_CLRMSK);
+	pvr_cr_write32(pvr_dev, ROGUE_CR_MTS_BGCTX_THREAD0_DM_ASSOC,
+		       ROGUE_CR_MTS_BGCTX_THREAD0_DM_ASSOC_MASKFULL &
+		       ROGUE_CR_MTS_BGCTX_THREAD0_DM_ASSOC_DM_ASSOC_CLRMSK);
+	pvr_cr_write32(pvr_dev, ROGUE_CR_MTS_INTCTX_THREAD1_DM_ASSOC,
+		       ROGUE_CR_MTS_INTCTX_THREAD1_DM_ASSOC_MASKFULL &
+		       ROGUE_CR_MTS_INTCTX_THREAD1_DM_ASSOC_DM_ASSOC_CLRMSK);
+	pvr_cr_write32(pvr_dev, ROGUE_CR_MTS_BGCTX_THREAD1_DM_ASSOC,
+		       ROGUE_CR_MTS_BGCTX_THREAD1_DM_ASSOC_MASKFULL &
+		       ROGUE_CR_MTS_BGCTX_THREAD1_DM_ASSOC_DM_ASSOC_CLRMSK);
+
+	/* Extra Idle checks. */
+	err = pvr_cr_poll_reg32(pvr_dev, ROGUE_CR_BIF_STATUS_MMU, 0,
+				ROGUE_CR_BIF_STATUS_MMU_MASKFULL,
+				POLL_TIMEOUT_USEC);
+	if (err)
+		return err;
+
+	err = pvr_cr_poll_reg32(pvr_dev, ROGUE_CR_BIFPM_STATUS_MMU, 0,
+				ROGUE_CR_BIFPM_STATUS_MMU_MASKFULL,
+				POLL_TIMEOUT_USEC);
+	if (err)
+		return err;
+
+	if (!PVR_HAS_FEATURE(pvr_dev, xt_top_infrastructure)) {
+		err = pvr_cr_poll_reg32(pvr_dev, ROGUE_CR_BIF_READS_EXT_STATUS, 0,
+					ROGUE_CR_BIF_READS_EXT_STATUS_MASKFULL,
+					POLL_TIMEOUT_USEC);
+		if (err)
+			return err;
+	}
+
+	err = pvr_cr_poll_reg32(pvr_dev, ROGUE_CR_BIFPM_READS_EXT_STATUS, 0,
+				ROGUE_CR_BIFPM_READS_EXT_STATUS_MASKFULL,
+				POLL_TIMEOUT_USEC);
+	if (err)
+		return err;
+
+	err = pvr_cr_poll_reg64(pvr_dev, ROGUE_CR_SLC_STATUS1, 0,
+				ROGUE_CR_SLC_STATUS1_MASKFULL,
+				POLL_TIMEOUT_USEC);
+	if (err)
+		return err;
+
+	/*
+	 * Wait for SLC to signal IDLE.
+	 * For cores with the LAYOUT_MARS feature, SLC would have been powered
+	 * down by the FW.
+	 */
+	err = pvr_cr_poll_reg32(pvr_dev, ROGUE_CR_SLC_IDLE,
+				ROGUE_CR_SLC_IDLE_MASKFULL,
+				ROGUE_CR_SLC_IDLE_MASKFULL, POLL_TIMEOUT_USEC);
+	if (err)
+		return err;
+
+	/*
+	 * Wait for Sidekick/Jones to signal IDLE except for the Garten Wrapper.
+	 * For cores with the LAYOUT_MARS feature, SIDEKICK would have been powered
+	 * down by the FW.
+	 */
+	err = pvr_cr_poll_reg32(pvr_dev, ROGUE_CR_SIDEKICK_IDLE, sidekick_idle_mask,
+				sidekick_idle_mask, POLL_TIMEOUT_USEC);
+	if (err)
+		return err;
+
+	if (pvr_dev->fw_dev.processor_type == PVR_FW_PROCESSOR_TYPE_META) {
+		err = pvr_meta_cr_read32(pvr_dev, META_CR_TxVECINT_BHALT, &reg_value);
+		if (err)
+			return err;
+
+		/*
+		 * Wait for Sidekick/Jones to signal IDLE including the Garten
+		 * Wrapper if there is no debugger attached (TxVECINT_BHALT =
+		 * 0x0).
+		 */
+		if (reg_value)
+			skip_garten_idle = true;
+	}
+
+	if (!skip_garten_idle) {
+		err = pvr_cr_poll_reg32(pvr_dev, ROGUE_CR_SIDEKICK_IDLE,
+					ROGUE_CR_SIDEKICK_IDLE_GARTEN_EN,
+					ROGUE_CR_SIDEKICK_IDLE_GARTEN_EN,
+					POLL_TIMEOUT_USEC);
+		if (err)
+			return err;
+	}
+
+	if (PVR_HAS_FEATURE(pvr_dev, pbe2_in_xe))
+		pvr_cr_write64(pvr_dev, ROGUE_CR_SOFT_RESET,
+			       ROGUE_CR_SOFT_RESET__PBE2_XE__MASKFULL);
+	else
+		pvr_cr_write64(pvr_dev, ROGUE_CR_SOFT_RESET, ROGUE_CR_SOFT_RESET_MASKFULL);
+
+	return 0;
+}
diff --git a/drivers/gpu/drm/imagination/pvr_fw_startstop.h b/drivers/gpu/drm/imagination/pvr_fw_startstop.h
new file mode 100644
index 000000000000..ab84f14596c7
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_fw_startstop.h
@@ -0,0 +1,13 @@
+/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#ifndef PVR_FW_STARTSTOP_H
+#define PVR_FW_STARTSTOP_H
+
+/* Forward declaration from pvr_device.h. */
+struct pvr_device;
+
+int pvr_fw_start(struct pvr_device *pvr_dev);
+int pvr_fw_stop(struct pvr_device *pvr_dev);
+
+#endif /* PVR_FW_STARTSTOP_H */
diff --git a/drivers/gpu/drm/imagination/pvr_fw_trace.c b/drivers/gpu/drm/imagination/pvr_fw_trace.c
new file mode 100644
index 000000000000..11f10946aeb1
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_fw_trace.c
@@ -0,0 +1,121 @@
+// SPDX-License-Identifier: GPL-2.0 OR MIT
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#include "pvr_device.h"
+#include "pvr_gem.h"
+#include "pvr_rogue_fwif.h"
+#include "pvr_rogue_fwif_sf.h"
+#include "pvr_fw_trace.h"
+
+#include <drm/drm_file.h>
+
+#include <linux/build_bug.h>
+#include <linux/dcache.h>
+#include <linux/sysfs.h>
+#include <linux/types.h>
+
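+/*
+ * Object initialiser for the trace buffer control structure: called with the
+ * CPU mapping of the structure when the FW object is created.
+ */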
+static void
+tracebuf_ctrl_init(void *cpu_ptr, void *priv)
+{
+	struct rogue_fwif_tracebuf *tracebuf_ctrl = cpu_ptr;
+	struct pvr_fw_trace *fw_trace = priv;
+	u32 thread_nr;
+
+	tracebuf_ctrl->tracebuf_size_in_dwords = ROGUE_FW_TRACE_BUF_DEFAULT_SIZE_IN_DWORDS;
+	tracebuf_ctrl->tracebuf_flags = 0;
+
+	if (fw_trace->group_mask)
+		tracebuf_ctrl->log_type = fw_trace->group_mask | ROGUE_FWIF_LOG_TYPE_TRACE;
+	else
+		tracebuf_ctrl->log_type = ROGUE_FWIF_LOG_TYPE_NONE;
+
+	for (thread_nr = 0; thread_nr < ARRAY_SIZE(fw_trace->buffers); thread_nr++) {
+		struct rogue_fwif_tracebuf_space *tracebuf_space =
+			&tracebuf_ctrl->tracebuf[thread_nr];
+		struct pvr_fw_trace_buffer *trace_buffer = &fw_trace->buffers[thread_nr];
+
+		pvr_fw_object_get_fw_addr(trace_buffer->buf_obj,
+					  &tracebuf_space->trace_buffer_fw_addr);
+
+		tracebuf_space->trace_buffer = trace_buffer->buf;
+		tracebuf_space->trace_pointer = 0;
+	}
+}
+
+int pvr_fw_trace_init(struct pvr_device *pvr_dev)
+{
+	struct pvr_fw_trace *fw_trace = &pvr_dev->fw_dev.fw_trace;
+	struct drm_device *drm_dev = from_pvr_device(pvr_dev);
+	u32 thread_nr;
+	int err;
+
+	for (thread_nr = 0; thread_nr < ARRAY_SIZE(fw_trace->buffers); thread_nr++) {
+		struct pvr_fw_trace_buffer *trace_buffer = &fw_trace->buffers[thread_nr];
+
+		trace_buffer->buf =
+			pvr_fw_object_create_and_map(pvr_dev,
+						     ROGUE_FW_TRACE_BUF_DEFAULT_SIZE_IN_DWORDS *
+						     sizeof(*trace_buffer->buf),
+						     PVR_BO_FW_FLAGS_DEVICE_UNCACHED |
+						     PVR_BO_FW_NO_CLEAR_ON_RESET,
+						     NULL, NULL, &trace_buffer->buf_obj);
+		if (IS_ERR(trace_buffer->buf)) {
+			drm_err(drm_dev, "Unable to allocate trace buffer\n");
+			err = PTR_ERR(trace_buffer->buf);
+			trace_buffer->buf = NULL;
+			goto err_free_buf;
+		}
+	}
+
+	/* TODO: Provide control of group mask. */
+	fw_trace->group_mask = 0;
+
+	fw_trace->tracebuf_ctrl =
+		pvr_fw_object_create_and_map(pvr_dev,
+					     sizeof(*fw_trace->tracebuf_ctrl),
+					     PVR_BO_FW_FLAGS_DEVICE_UNCACHED |
+					     PVR_BO_FW_NO_CLEAR_ON_RESET,
+					     tracebuf_ctrl_init, fw_trace,
+					     &fw_trace->tracebuf_ctrl_obj);
+	if (IS_ERR(fw_trace->tracebuf_ctrl)) {
+		drm_err(drm_dev, "Unable to allocate trace buffer control structure\n");
+		err = PTR_ERR(fw_trace->tracebuf_ctrl);
+		goto err_free_buf;
+	}
+
+	BUILD_BUG_ON(ARRAY_SIZE(fw_trace->tracebuf_ctrl->tracebuf) !=
+		     ARRAY_SIZE(fw_trace->buffers));
+
+	for (thread_nr = 0; thread_nr < ARRAY_SIZE(fw_trace->buffers); thread_nr++) {
+		struct rogue_fwif_tracebuf_space *tracebuf_space =
+			&fw_trace->tracebuf_ctrl->tracebuf[thread_nr];
+		struct pvr_fw_trace_buffer *trace_buffer = &fw_trace->buffers[thread_nr];
+
+		trace_buffer->tracebuf_space = tracebuf_space;
+	}
+
+	return 0;
+
+err_free_buf:
+	for (thread_nr = 0; thread_nr < ARRAY_SIZE(fw_trace->buffers); thread_nr++) {
+		struct pvr_fw_trace_buffer *trace_buffer = &fw_trace->buffers[thread_nr];
+
+		if (trace_buffer->buf)
+			pvr_fw_object_unmap_and_destroy(trace_buffer->buf_obj);
+	}
+
+	return err;
+}
+
+void pvr_fw_trace_fini(struct pvr_device *pvr_dev)
+{
+	struct pvr_fw_trace *fw_trace = &pvr_dev->fw_dev.fw_trace;
+	u32 thread_nr;
+
+	for (thread_nr = 0; thread_nr < ARRAY_SIZE(fw_trace->buffers); thread_nr++) {
+		struct pvr_fw_trace_buffer *trace_buffer = &fw_trace->buffers[thread_nr];
+
+		pvr_fw_object_unmap_and_destroy(trace_buffer->buf_obj);
+	}
+	pvr_fw_object_unmap_and_destroy(fw_trace->tracebuf_ctrl_obj);
+}
diff --git a/drivers/gpu/drm/imagination/pvr_fw_trace.h b/drivers/gpu/drm/imagination/pvr_fw_trace.h
new file mode 100644
index 000000000000..e637bb9de2b4
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_fw_trace.h
@@ -0,0 +1,78 @@
+/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#ifndef PVR_FW_TRACE_H
+#define PVR_FW_TRACE_H
+
+#include <drm/drm_file.h>
+#include <linux/types.h>
+
+#include "pvr_rogue_fwif.h"
+
+/* Forward declaration from pvr_device.h. */
+struct pvr_device;
+
+/* Forward declaration from pvr_gem.h. */
+struct pvr_fw_object;
+
+/* Forward declarations from pvr_rogue_fwif.h */
+struct rogue_fwif_tracebuf;
+struct rogue_fwif_tracebuf_space;
+
+/**
+ * struct pvr_fw_trace_buffer - Structure representing a trace buffer
+ */
+struct pvr_fw_trace_buffer {
+	/** @buf_obj: FW buffer object representing trace buffer. */
+	struct pvr_fw_object *buf_obj;
+
+	/** @buf: Pointer to CPU mapping of trace buffer. */
+	u32 *buf;
+
+	/**
+	 * @tracebuf_space: Pointer to FW tracebuf_space structure for this
+	 *                  trace buffer.
+	 */
+	struct rogue_fwif_tracebuf_space *tracebuf_space;
+};
+
+/**
+ * struct pvr_fw_trace - Device firmware trace data
+ */
+struct pvr_fw_trace {
+	/**
+	 * @tracebuf_ctrl_obj: Object representing FW trace buffer control
+	 *                     structure.
+	 */
+	struct pvr_fw_object *tracebuf_ctrl_obj;
+
+	/**
+	 * @tracebuf_ctrl: Pointer to CPU mapping of FW trace buffer control
+	 *                 structure.
+	 */
+	struct rogue_fwif_tracebuf *tracebuf_ctrl;
+
+	/**
+	 * @buffers: Array representing the actual trace buffers owned by this
+	 *           device.
+	 */
+	struct pvr_fw_trace_buffer buffers[ROGUE_FW_THREAD_MAX];
+
+	/** @group_mask: Mask of enabled trace groups. */
+	u32 group_mask;
+};
+
+int pvr_fw_trace_init(struct pvr_device *pvr_dev);
+void pvr_fw_trace_fini(struct pvr_device *pvr_dev);
+
+#if defined(CONFIG_DEBUG_FS)
+/* Forward declaration from <linux/dcache.h>. */
+struct dentry;
+
+void pvr_fw_trace_mask_update(struct pvr_device *pvr_dev, u32 old_mask,
+			      u32 new_mask);
+
+void pvr_fw_trace_debugfs_init(struct pvr_device *pvr_dev, struct dentry *dir);
+#endif /* defined(CONFIG_DEBUG_FS) */
+
+#endif /* PVR_FW_TRACE_H */
diff --git a/drivers/gpu/drm/imagination/pvr_gem.h b/drivers/gpu/drm/imagination/pvr_gem.h
index 5b82cd67d83c..11b7692dae56 100644
--- a/drivers/gpu/drm/imagination/pvr_gem.h
+++ b/drivers/gpu/drm/imagination/pvr_gem.h
@@ -59,6 +59,13 @@ struct pvr_file;
  *    :CACHED: By default, all GEM objects are mapped write-combined on the
  *       CPU. Set this flag to override this behaviour and map the object
  *       cached.
+ *
+ * Firmware options
+ *    These use the prefix PVR_BO_FW_.
+ *
+ *    :NO_CLEAR_ON_RESET: By default, all FW objects are cleared and
+ *       reinitialised on hard reset. Set this flag to override this behaviour
+ *       and preserve buffer contents on reset.
  */
 #define PVR_BO_CPU_CACHED BIT_ULL(63)
 
diff --git a/drivers/gpu/drm/imagination/pvr_mmu.c b/drivers/gpu/drm/imagination/pvr_mmu.c
index 3b48e1f77d09..bee357588ade 100644
--- a/drivers/gpu/drm/imagination/pvr_mmu.c
+++ b/drivers/gpu/drm/imagination/pvr_mmu.c
@@ -3,6 +3,7 @@
 
 #include "pvr_mmu.h"
 
+#include "pvr_ccb.h"
 #include "pvr_device.h"
 #include "pvr_fw.h"
 #include "pvr_gem.h"
@@ -71,8 +72,43 @@
 int
 pvr_mmu_flush(struct pvr_device *pvr_dev)
 {
-	/* TODO: implement */
-	return -ENODEV;
+	struct rogue_fwif_kccb_cmd cmd_mmu_cache;
+	struct rogue_fwif_mmucachedata *cmd_mmu_cache_data =
+		&cmd_mmu_cache.cmd_data.mmu_cache_data;
+	u32 slot;
+	int idx;
+	int err;
+
+	if (!drm_dev_enter(from_pvr_device(pvr_dev), &idx))
+		return -EIO;
+
+	/* Can't flush MMU if the firmware hasn't booted yet. */
+	if (!pvr_dev->fw_dev.booted) {
+		err = 0;
+		goto err_drm_dev_exit;
+	}
+
+	cmd_mmu_cache.cmd_type = ROGUE_FWIF_KCCB_CMD_MMUCACHE;
+	/* Request a complete MMU flush, across all pagetable levels, TLBs and contexts. */
+	cmd_mmu_cache_data->cache_flags = ROGUE_FWIF_MMUCACHEDATA_FLAGS_PT |
+					  ROGUE_FWIF_MMUCACHEDATA_FLAGS_PD |
+					  ROGUE_FWIF_MMUCACHEDATA_FLAGS_PC |
+					  ROGUE_FWIF_MMUCACHEDATA_FLAGS_TLB |
+					  ROGUE_FWIF_MMUCACHEDATA_FLAGS_INTERRUPT;
+	pvr_fw_object_get_fw_addr(pvr_dev->fw_dev.mem.mmucache_sync_obj,
+				  &cmd_mmu_cache_data->mmu_cache_sync_fw_addr);
+	cmd_mmu_cache_data->mmu_cache_sync_update_value = 0;
+
+	err = pvr_kccb_send_cmd(pvr_dev, &cmd_mmu_cache, &slot);
+	if (err)
+		goto err_drm_dev_exit;
+
+	err = pvr_kccb_wait_for_completion(pvr_dev, slot, HZ, NULL);
+
+err_drm_dev_exit:
+	drm_dev_exit(idx);
+
+	return err;
 }
 
 /**
diff --git a/drivers/gpu/drm/imagination/pvr_power.c b/drivers/gpu/drm/imagination/pvr_power.c
index a494fed92e81..90dfdf22044c 100644
--- a/drivers/gpu/drm/imagination/pvr_power.c
+++ b/drivers/gpu/drm/imagination/pvr_power.c
@@ -3,6 +3,7 @@
 
 #include "pvr_device.h"
 #include "pvr_fw.h"
+#include "pvr_fw_startstop.h"
 #include "pvr_power.h"
 #include "pvr_rogue_fwif.h"
 
@@ -21,11 +22,32 @@
 
 #define WATCHDOG_TIME_MS (500)
 
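+/*
+ * Mark the device as lost and unplug the DRM device; subsequent attempts to
+ * access the device will fail.
+ */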
+static void
+pvr_device_lost(struct pvr_device *pvr_dev)
+{
+	if (!pvr_dev->lost) {
+		pvr_dev->lost = true;
+		drm_dev_unplug(from_pvr_device(pvr_dev));
+	}
+}
+
 static int
 pvr_power_send_command(struct pvr_device *pvr_dev, struct rogue_fwif_kccb_cmd *pow_cmd)
 {
-	/* TODO: implement */
-	return -ENODEV;
+	struct pvr_fw_device *fw_dev = &pvr_dev->fw_dev;
+	u32 slot_nr;
+	u32 value;
+	int err;
+
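+	/*
+	 * Clear the power sync object; the firmware writes a non-zero value
+	 * once it has processed the power command.
+	 */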
+	WRITE_ONCE(*fw_dev->power_sync, 0);
+
+	err = pvr_kccb_send_cmd_powered(pvr_dev, pow_cmd, &slot_nr);
+	if (err)
+		return err;
+
+	/* Wait for FW to acknowledge. */
+	return readl_poll_timeout(pvr_dev->fw_dev.power_sync, value, value != 0, 100,
+				  POWER_SYNC_TIMEOUT_US);
 }
 
 static int
@@ -71,8 +93,7 @@ pvr_power_fw_disable(struct pvr_device *pvr_dev, bool hard_reset)
 			return err;
 	}
 
-	/* TODO: stop firmware */
-	return -ENODEV;
+	return pvr_fw_stop(pvr_dev);
 }
 
 static int
@@ -80,11 +101,17 @@ pvr_power_fw_enable(struct pvr_device *pvr_dev)
 {
 	int err;
 
-	/* TODO: start firmware */
-	err = -ENODEV;
+	err = pvr_fw_start(pvr_dev);
 	if (err)
 		return err;
 
+	err = pvr_wait_for_fw_boot(pvr_dev);
+	if (err) {
+		drm_err(from_pvr_device(pvr_dev), "Firmware failed to boot\n");
+		pvr_fw_stop(pvr_dev);
+		return err;
+	}
+
 	queue_delayed_work(pvr_dev->sched_wq, &pvr_dev->watchdog.work,
 			   msecs_to_jiffies(WATCHDOG_TIME_MS));
 
@@ -94,14 +121,39 @@ pvr_power_fw_enable(struct pvr_device *pvr_dev)
 bool
 pvr_power_is_idle(struct pvr_device *pvr_dev)
 {
-	/* TODO: implement */
-	return true;
+	/*
+	 * FW power state can be out of date if a KCCB command has been submitted but the FW hasn't
+	 * started processing it yet. So also check the KCCB status.
+	 */
+	enum rogue_fwif_pow_state pow_state = READ_ONCE(pvr_dev->fw_dev.fwif_sysdata->pow_state);
+	bool kccb_idle = pvr_kccb_is_idle(pvr_dev);
+
+	return (pow_state == ROGUE_FWIF_POW_IDLE) && kccb_idle;
 }
 
 static bool
 pvr_watchdog_kccb_stalled(struct pvr_device *pvr_dev)
 {
-	/* TODO: implement */
+	/* Check KCCB commands are progressing. */
+	u32 kccb_cmds_executed = pvr_dev->fw_dev.fwif_osdata->kccb_cmds_executed;
+	bool kccb_is_idle = pvr_kccb_is_idle(pvr_dev);
+
+	if (pvr_dev->watchdog.old_kccb_cmds_executed == kccb_cmds_executed && !kccb_is_idle) {
+		pvr_dev->watchdog.kccb_stall_count++;
+
+		/*
+		 * If we have commands pending with no progress for 2 consecutive polls then
+		 * consider KCCB command processing stalled.
+		 */
+		if (pvr_dev->watchdog.kccb_stall_count == 2) {
+			pvr_dev->watchdog.kccb_stall_count = 0;
+			return true;
+		}
+	} else {
+		pvr_dev->watchdog.old_kccb_cmds_executed = kccb_cmds_executed;
+		pvr_dev->watchdog.kccb_stall_count = 0;
+	}
+
 	return false;
 }
 
@@ -118,6 +170,9 @@ pvr_watchdog_worker(struct work_struct *work)
 	if (pm_runtime_get_if_in_use(from_pvr_device(pvr_dev)->dev) <= 0)
 		goto out_requeue;
 
+	if (!pvr_dev->fw_dev.booted)
+		goto out_pm_runtime_put;
+
 	stalled = pvr_watchdog_kccb_stalled(pvr_dev);
 
 	if (stalled) {
@@ -127,6 +182,7 @@ pvr_watchdog_worker(struct work_struct *work)
 		/* Device may be lost at this point. */
 	}
 
+out_pm_runtime_put:
 	pm_runtime_put(from_pvr_device(pvr_dev)->dev);
 
 out_requeue:
@@ -158,18 +214,26 @@ pvr_power_device_suspend(struct device *dev)
 	struct platform_device *plat_dev = to_platform_device(dev);
 	struct drm_device *drm_dev = platform_get_drvdata(plat_dev);
 	struct pvr_device *pvr_dev = to_pvr_device(drm_dev);
+	int err = 0;
 	int idx;
 
 	if (!drm_dev_enter(drm_dev, &idx))
 		return -EIO;
 
+	if (pvr_dev->fw_dev.booted) {
+		err = pvr_power_fw_disable(pvr_dev, false);
+		if (err)
+			goto err_drm_dev_exit;
+	}
+
 	clk_disable_unprepare(pvr_dev->mem_clk);
 	clk_disable_unprepare(pvr_dev->sys_clk);
 	clk_disable_unprepare(pvr_dev->core_clk);
 
+err_drm_dev_exit:
 	drm_dev_exit(idx);
 
-	return 0;
+	return err;
 }
 
 int
@@ -196,10 +260,19 @@ pvr_power_device_resume(struct device *dev)
 	if (err)
 		goto err_sys_clk_disable;
 
+	if (pvr_dev->fw_dev.booted) {
+		err = pvr_power_fw_enable(pvr_dev);
+		if (err)
+			goto err_mem_clk_disable;
+	}
+
 	drm_dev_exit(idx);
 
 	return 0;
 
+err_mem_clk_disable:
+	clk_disable_unprepare(pvr_dev->mem_clk);
+
 err_sys_clk_disable:
 	clk_disable_unprepare(pvr_dev->sys_clk);
 
@@ -239,7 +312,6 @@ pvr_power_device_idle(struct device *dev)
 int
 pvr_power_reset(struct pvr_device *pvr_dev, bool hard_reset)
 {
-	/* TODO: Implement hard reset. */
 	int err;
 
 	/*
@@ -248,13 +320,63 @@ pvr_power_reset(struct pvr_device *pvr_dev, bool hard_reset)
 	 */
 	WARN_ON(pvr_power_get(pvr_dev));
 
-	err = pvr_power_fw_disable(pvr_dev, false);
-	if (err)
-		goto err_power_put;
+	down_write(&pvr_dev->reset_sem);
+
+	/* Disable IRQs for the duration of the reset. */
+	disable_irq(pvr_dev->irq);
+
+	do {
+		err = pvr_power_fw_disable(pvr_dev, hard_reset);
+		if (!err) {
+			if (hard_reset) {
+				pvr_dev->fw_dev.booted = false;
+				WARN_ON(pm_runtime_force_suspend(from_pvr_device(pvr_dev)->dev));
+
+				err = pvr_fw_hard_reset(pvr_dev);
+				if (err)
+					goto err_device_lost;
+
+				err = pm_runtime_force_resume(from_pvr_device(pvr_dev)->dev);
+				pvr_dev->fw_dev.booted = true;
+				if (err)
+					goto err_device_lost;
+			} else {
+				/* Clear the FW faulted flags. */
+				pvr_dev->fw_dev.fwif_sysdata->hwr_state_flags &=
+					~(ROGUE_FWIF_HWR_FW_FAULT |
+					  ROGUE_FWIF_HWR_RESTART_REQUESTED);
+			}
+
+			pvr_fw_irq_clear(pvr_dev);
+
+			err = pvr_power_fw_enable(pvr_dev);
+		}
+
+		if (err && hard_reset)
+			goto err_device_lost;
+
+		if (err && !hard_reset) {
+			drm_err(from_pvr_device(pvr_dev), "FW stalled, trying hard reset");
+			hard_reset = true;
+		}
+	} while (err);
+
+	enable_irq(pvr_dev->irq);
+
+	up_write(&pvr_dev->reset_sem);
+
+	pvr_power_put(pvr_dev);
+
+	return 0;
+
+err_device_lost:
+	drm_err(from_pvr_device(pvr_dev), "GPU device lost");
+	pvr_device_lost(pvr_dev);
+
+	/* Leave IRQs disabled if the device is lost. */
 
-	err = pvr_power_fw_enable(pvr_dev);
+	up_write(&pvr_dev->reset_sem);
 
-err_power_put:
 	pvr_power_put(pvr_dev);
 
 	return err;
diff --git a/drivers/gpu/drm/imagination/pvr_vm.c b/drivers/gpu/drm/imagination/pvr_vm.c
index 616fad3a3325..047844eba416 100644
--- a/drivers/gpu/drm/imagination/pvr_vm.c
+++ b/drivers/gpu/drm/imagination/pvr_vm.c
@@ -309,6 +309,16 @@ static const struct drm_gpuva_fn_ops pvr_vm_gpuva_ops = {
 	.sm_step_unmap = pvr_vm_gpuva_unmap,
 };
 
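+/*
+ * Object initialiser for the FW memory context: points the firmware at the
+ * root page table of this VM context.
+ */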
+static void
+fw_mem_context_init(void *cpu_ptr, void *priv)
+{
+	struct rogue_fwif_fwmemcontext *fw_mem_ctx = cpu_ptr;
+	struct pvr_vm_context *vm_ctx = priv;
+
+	fw_mem_ctx->pc_dev_paddr = pvr_vm_get_page_table_root_addr(vm_ctx);
+	fw_mem_ctx->page_cat_base_reg_set = ROGUE_FW_BIF_INVALID_PCSET;
+}
+
 /**
  * pvr_vm_create_context() - Create a new VM context.
  * @pvr_dev: Target PowerVR device.
@@ -368,13 +378,19 @@ pvr_vm_create_context(struct pvr_device *pvr_dev, bool is_userspace_context)
 	}
 
 	if (is_userspace_context) {
-		/* TODO: Create FW mem context */
-		err = -ENODEV;
-		goto err_put_ctx;
+		err = pvr_fw_object_create(pvr_dev, sizeof(struct rogue_fwif_fwmemcontext),
+					   PVR_BO_FW_FLAGS_DEVICE_UNCACHED,
+					   fw_mem_context_init, vm_ctx, &vm_ctx->fw_mem_ctx_obj);
+
+		if (err)
+			goto err_page_table_destroy;
 	}
 
 	return vm_ctx;
 
+err_page_table_destroy:
+	pvr_mmu_context_destroy(vm_ctx->mmu_ctx);
+
 err_put_ctx:
 	pvr_vm_context_put(vm_ctx);
 
@@ -394,8 +410,8 @@ pvr_vm_context_release(struct kref *ref_count)
 	struct pvr_vm_context *vm_ctx =
 		container_of(ref_count, struct pvr_vm_context, ref_count);
 
-	/* TODO: Destroy FW mem context */
-	WARN_ON(vm_ctx->fw_mem_ctx_obj);
+	if (vm_ctx->fw_mem_ctx_obj)
+		pvr_fw_object_destroy(vm_ctx->fw_mem_ctx_obj);
 
 	WARN_ON(pvr_vm_unmap(vm_ctx, vm_ctx->gpuva_mgr.mm_start,
 			     vm_ctx->gpuva_mgr.mm_range));
-- 
2.41.0


^ permalink raw reply related	[flat|nested] 77+ messages in thread

* [PATCH v5 11/17] drm/imagination: Implement MIPS firmware processor and MMU support
  2023-08-16  8:25 ` Sarah Walker
@ 2023-08-16  8:25   ` Sarah Walker
  -1 siblings, 0 replies; 77+ messages in thread
From: Sarah Walker @ 2023-08-16  8:25 UTC (permalink / raw)
  To: dri-devel
  Cc: matthew.brost, luben.tuikov, tzimmermann, linux-kernel, mripard,
	afd, boris.brezillon, dakr, donald.robson, hns, christian.koenig,
	faith.ekstrand

Add support for the MIPS firmware processor, used in the Series AXE GPU.
The MIPS firmware processor uses an MMU separate from the rest of the GPU,
so this patch adds support for that as well.

Changes since v3:
- Get regs resource (removed from GPU resources commit)

Signed-off-by: Sarah Walker <sarah.walker@imgtec.com>
---
 drivers/gpu/drm/imagination/Makefile      |   4 +-
 drivers/gpu/drm/imagination/pvr_device.c  |   5 +-
 drivers/gpu/drm/imagination/pvr_device.h  |   3 +
 drivers/gpu/drm/imagination/pvr_fw.c      |   2 +
 drivers/gpu/drm/imagination/pvr_fw_mips.c | 250 ++++++++++++++++++++++
 drivers/gpu/drm/imagination/pvr_fw_mips.h |  38 ++++
 drivers/gpu/drm/imagination/pvr_vm_mips.c | 208 ++++++++++++++++++
 drivers/gpu/drm/imagination/pvr_vm_mips.h |  22 ++
 8 files changed, 530 insertions(+), 2 deletions(-)
 create mode 100644 drivers/gpu/drm/imagination/pvr_fw_mips.c
 create mode 100644 drivers/gpu/drm/imagination/pvr_fw_mips.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_vm_mips.c
 create mode 100644 drivers/gpu/drm/imagination/pvr_vm_mips.h

diff --git a/drivers/gpu/drm/imagination/Makefile b/drivers/gpu/drm/imagination/Makefile
index 5b02440841be..0a6532d30c00 100644
--- a/drivers/gpu/drm/imagination/Makefile
+++ b/drivers/gpu/drm/imagination/Makefile
@@ -10,11 +10,13 @@ powervr-y := \
 	pvr_drv.o \
 	pvr_fw.o \
 	pvr_fw_meta.o \
+	pvr_fw_mips.o \
 	pvr_fw_startstop.o \
 	pvr_fw_trace.o \
 	pvr_gem.o \
 	pvr_mmu.o \
 	pvr_power.o \
-	pvr_vm.o
+	pvr_vm.o \
+	pvr_vm_mips.o
 
 obj-$(CONFIG_DRM_POWERVR) += powervr.o
diff --git a/drivers/gpu/drm/imagination/pvr_device.c b/drivers/gpu/drm/imagination/pvr_device.c
index ad198ed432fe..bc7333444388 100644
--- a/drivers/gpu/drm/imagination/pvr_device.c
+++ b/drivers/gpu/drm/imagination/pvr_device.c
@@ -50,16 +50,19 @@ pvr_device_reg_init(struct pvr_device *pvr_dev)
 {
 	struct drm_device *drm_dev = from_pvr_device(pvr_dev);
 	struct platform_device *plat_dev = to_platform_device(drm_dev->dev);
+	struct resource *regs_resource;
 	void __iomem *regs;
 
+	pvr_dev->regs_resource = NULL;
 	pvr_dev->regs = NULL;
 
-	regs = devm_platform_ioremap_resource(plat_dev, 0);
+	regs = devm_platform_get_and_ioremap_resource(plat_dev, 0, &regs_resource);
 	if (IS_ERR(regs))
 		return dev_err_probe(drm_dev->dev, PTR_ERR(regs),
 				     "failed to ioremap gpu registers\n");
 
 	pvr_dev->regs = regs;
+	pvr_dev->regs_resource = regs_resource;
 
 	return 0;
 }
diff --git a/drivers/gpu/drm/imagination/pvr_device.h b/drivers/gpu/drm/imagination/pvr_device.h
index d6feac60834b..7e353cfe4fd2 100644
--- a/drivers/gpu/drm/imagination/pvr_device.h
+++ b/drivers/gpu/drm/imagination/pvr_device.h
@@ -93,6 +93,9 @@ struct pvr_device {
 	/** @fw_version: Firmware version detected at runtime. */
 	struct pvr_fw_version fw_version;
 
+	/** @regs_resource: Resource representing device control registers. */
+	struct resource *regs_resource;
+
 	/**
 	 * @regs: Device control registers.
 	 *
diff --git a/drivers/gpu/drm/imagination/pvr_fw.c b/drivers/gpu/drm/imagination/pvr_fw.c
index 449984c0f233..0799c70f1a0c 100644
--- a/drivers/gpu/drm/imagination/pvr_fw.c
+++ b/drivers/gpu/drm/imagination/pvr_fw.c
@@ -914,6 +914,8 @@ pvr_fw_init(struct pvr_device *pvr_dev)
 
 	if (fw_dev->processor_type == PVR_FW_PROCESSOR_TYPE_META)
 		fw_dev->defs = &pvr_fw_defs_meta;
+	else if (fw_dev->processor_type == PVR_FW_PROCESSOR_TYPE_MIPS)
+		fw_dev->defs = &pvr_fw_defs_mips;
 	else
 		return -EINVAL;
 
diff --git a/drivers/gpu/drm/imagination/pvr_fw_mips.c b/drivers/gpu/drm/imagination/pvr_fw_mips.c
new file mode 100644
index 000000000000..b842d37a7e7f
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_fw_mips.c
@@ -0,0 +1,250 @@
+// SPDX-License-Identifier: GPL-2.0 OR MIT
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#include "pvr_device.h"
+#include "pvr_fw.h"
+#include "pvr_fw_mips.h"
+#include "pvr_gem.h"
+#include "pvr_rogue_mips.h"
+#include "pvr_vm_mips.h"
+
+#include <linux/elf.h>
+#include <linux/err.h>
+#include <linux/types.h>
+
+#define ROGUE_FW_HEAP_MIPS_BASE 0xC0000000
+#define ROGUE_FW_HEAP_MIPS_SHIFT 24 /* 16 MB */
+#define ROGUE_FW_HEAP_MIPS_RESERVED_SIZE SZ_1M
+
+/**
+ * process_elf_command_stream() - Process ELF firmware image and populate
+ *                                firmware sections
+ * @pvr_dev: Device pointer.
+ * @fw: Pointer to firmware image.
+ * @fw_code_ptr: Pointer to FW code section.
+ * @fw_data_ptr: Pointer to FW data section.
+ * @fw_core_code_ptr: Pointer to FW coremem code section.
+ * @fw_core_data_ptr: Pointer to FW coremem data section.
+ *
+ * Returns:
+ *  * 0 on success, or
+ *  * -EINVAL on any error in ELF command stream.
+ */
+static int
+process_elf_command_stream(struct pvr_device *pvr_dev, const u8 *fw, u8 *fw_code_ptr,
+			   u8 *fw_data_ptr, u8 *fw_core_code_ptr, u8 *fw_core_data_ptr)
+{
+	struct elf32_hdr *header = (struct elf32_hdr *)fw;
+	struct elf32_phdr *program_header = (struct elf32_phdr *)(fw + header->e_phoff);
+	struct drm_device *drm_dev = from_pvr_device(pvr_dev);
+	u32 entry;
+	int err;
+
+	for (entry = 0; entry < header->e_phnum; entry++, program_header++) {
+		void *write_addr;
+
+		/* Only consider loadable entries in the ELF segment table */
+		if (program_header->p_type != PT_LOAD)
+			continue;
+
+		err = pvr_fw_find_mmu_segment(pvr_dev, program_header->p_vaddr,
+					      program_header->p_memsz, fw_code_ptr, fw_data_ptr,
+					      fw_core_code_ptr, fw_core_data_ptr, &write_addr);
+		if (err) {
+			drm_err(drm_dev,
+				"Addr 0x%x (size: %d) not found in any firmware segment",
+				program_header->p_vaddr, program_header->p_memsz);
+			return err;
+		}
+
+		/* Write to FW allocation only if available */
+		if (write_addr) {
+			memcpy(write_addr, fw + program_header->p_offset,
+			       program_header->p_filesz);
+
+			memset((u8 *)write_addr + program_header->p_filesz, 0,
+			       program_header->p_memsz - program_header->p_filesz);
+		}
+	}
+
+	return 0;
+}
+
+static int
+pvr_mips_init(struct pvr_device *pvr_dev)
+{
+	pvr_fw_heap_info_init(pvr_dev, ROGUE_FW_HEAP_MIPS_SHIFT, ROGUE_FW_HEAP_MIPS_RESERVED_SIZE);
+
+	return pvr_vm_mips_init(pvr_dev);
+}
+
+static void
+pvr_mips_fini(struct pvr_device *pvr_dev)
+{
+	pvr_vm_mips_fini(pvr_dev);
+}
+
+static int
+pvr_mips_fw_process(struct pvr_device *pvr_dev, const u8 *fw,
+		    u8 *fw_code_ptr, u8 *fw_data_ptr, u8 *fw_core_code_ptr, u8 *fw_core_data_ptr,
+		    u32 core_code_alloc_size)
+{
+	struct pvr_fw_device *fw_dev = &pvr_dev->fw_dev;
+	struct pvr_fw_mips_data *mips_data = fw_dev->processor_data.mips_data;
+	const struct pvr_fw_layout_entry *boot_code_entry;
+	const struct pvr_fw_layout_entry *boot_data_entry;
+	const struct pvr_fw_layout_entry *exception_code_entry;
+	const struct pvr_fw_layout_entry *stack_entry;
+	struct rogue_mipsfw_boot_data *boot_data;
+	dma_addr_t dma_addr;
+	u32 page_nr;
+	int err;
+
+	err = process_elf_command_stream(pvr_dev, fw, fw_code_ptr, fw_data_ptr, fw_core_code_ptr,
+					 fw_core_data_ptr);
+	if (err)
+		return err;
+
+	boot_code_entry = pvr_fw_find_layout_entry(pvr_dev, MIPS_BOOT_CODE);
+	boot_data_entry = pvr_fw_find_layout_entry(pvr_dev, MIPS_BOOT_DATA);
+	exception_code_entry = pvr_fw_find_layout_entry(pvr_dev, MIPS_EXCEPTIONS_CODE);
+	if (!boot_code_entry || !boot_data_entry || !exception_code_entry)
+		return -EINVAL;
+
+	WARN_ON(pvr_gem_get_dma_addr(fw_dev->mem.code_obj->gem, boot_code_entry->alloc_offset,
+				     &mips_data->boot_code_dma_addr));
+	WARN_ON(pvr_gem_get_dma_addr(fw_dev->mem.data_obj->gem, boot_data_entry->alloc_offset,
+				     &mips_data->boot_data_dma_addr));
+	WARN_ON(pvr_gem_get_dma_addr(fw_dev->mem.code_obj->gem,
+				     exception_code_entry->alloc_offset,
+				     &mips_data->exception_code_dma_addr));
+
+	stack_entry = pvr_fw_find_layout_entry(pvr_dev, MIPS_STACK);
+	if (!stack_entry)
+		return -EINVAL;
+
+	boot_data = (struct rogue_mipsfw_boot_data *)(fw_data_ptr + boot_data_entry->alloc_offset +
+						      ROGUE_MIPSFW_BOOTLDR_CONF_OFFSET);
+
+	WARN_ON(pvr_fw_object_get_dma_addr(fw_dev->mem.data_obj, stack_entry->alloc_offset,
+					   &dma_addr));
+	boot_data->stack_phys_addr = dma_addr;
+
+	boot_data->reg_base = pvr_dev->regs_resource->start;
+
+	for (page_nr = 0; page_nr < ARRAY_SIZE(boot_data->pt_phys_addr); page_nr++) {
+		WARN_ON(pvr_gem_get_dma_addr(mips_data->pt_obj,
+					     page_nr << ROGUE_MIPSFW_LOG2_PAGE_SIZE_4K, &dma_addr));
+
+		boot_data->pt_phys_addr[page_nr] = dma_addr;
+	}
+
+	boot_data->pt_log2_page_size = ROGUE_MIPSFW_LOG2_PAGE_SIZE_4K;
+	boot_data->pt_num_pages = ROGUE_MIPSFW_MAX_NUM_PAGETABLE_PAGES;
+	boot_data->reserved1 = 0;
+	boot_data->reserved2 = 0;
+
+	return 0;
+}
+
+static int
+pvr_mips_wrapper_init(struct pvr_device *pvr_dev)
+{
+	struct pvr_fw_mips_data *mips_data = pvr_dev->fw_dev.processor_data.mips_data;
+	const u64 remap_settings = ROGUE_MIPSFW_BOOT_REMAP_LOG2_SEGMENT_SIZE;
+	u32 phys_bus_width;
+
+	int err = PVR_FEATURE_VALUE(pvr_dev, phys_bus_width, &phys_bus_width);
+
+	if (WARN_ON(err))
+		return err;
+
+	/* Currently MIPS FW only supported with physical bus width > 32 bits. */
+	if (WARN_ON(phys_bus_width <= 32))
+		return -EINVAL;
+
+	pvr_cr_write32(pvr_dev, ROGUE_CR_MIPS_WRAPPER_CONFIG,
+		       (ROGUE_MIPSFW_REGISTERS_VIRTUAL_BASE >>
+			ROGUE_MIPSFW_WRAPPER_CONFIG_REGBANK_ADDR_ALIGN) |
+		       ROGUE_CR_MIPS_WRAPPER_CONFIG_BOOT_ISA_MODE_MICROMIPS);
+
+	/* Configure remap for boot code, boot data and exceptions code areas. */
+	pvr_cr_write64(pvr_dev, ROGUE_CR_MIPS_ADDR_REMAP1_CONFIG1,
+		       ROGUE_MIPSFW_BOOT_REMAP_PHYS_ADDR_IN |
+		       ROGUE_CR_MIPS_ADDR_REMAP1_CONFIG1_MODE_ENABLE_EN);
+	pvr_cr_write64(pvr_dev, ROGUE_CR_MIPS_ADDR_REMAP1_CONFIG2,
+		       (mips_data->boot_code_dma_addr &
+			~ROGUE_CR_MIPS_ADDR_REMAP1_CONFIG2_ADDR_OUT_CLRMSK) | remap_settings);
+
+	if (PVR_HAS_QUIRK(pvr_dev, 63553)) {
+		/*
+		 * WA always required on 36 bit cores, to avoid continuous unmapped memory accesses
+		 * to address 0x0.
+		 */
+		WARN_ON(phys_bus_width != 36);
+
+		pvr_cr_write64(pvr_dev, ROGUE_CR_MIPS_ADDR_REMAP5_CONFIG1,
+			       ROGUE_CR_MIPS_ADDR_REMAP5_CONFIG1_MODE_ENABLE_EN);
+		pvr_cr_write64(pvr_dev, ROGUE_CR_MIPS_ADDR_REMAP5_CONFIG2,
+			       (mips_data->boot_code_dma_addr &
+				~ROGUE_CR_MIPS_ADDR_REMAP5_CONFIG2_ADDR_OUT_CLRMSK) |
+			       remap_settings);
+	}
+
+	pvr_cr_write64(pvr_dev, ROGUE_CR_MIPS_ADDR_REMAP2_CONFIG1,
+		       ROGUE_MIPSFW_DATA_REMAP_PHYS_ADDR_IN |
+		       ROGUE_CR_MIPS_ADDR_REMAP2_CONFIG1_MODE_ENABLE_EN);
+	pvr_cr_write64(pvr_dev, ROGUE_CR_MIPS_ADDR_REMAP2_CONFIG2,
+		       (mips_data->boot_data_dma_addr &
+			~ROGUE_CR_MIPS_ADDR_REMAP2_CONFIG2_ADDR_OUT_CLRMSK) | remap_settings);
+
+	pvr_cr_write64(pvr_dev, ROGUE_CR_MIPS_ADDR_REMAP3_CONFIG1,
+		       ROGUE_MIPSFW_CODE_REMAP_PHYS_ADDR_IN |
+		       ROGUE_CR_MIPS_ADDR_REMAP3_CONFIG1_MODE_ENABLE_EN);
+	pvr_cr_write64(pvr_dev, ROGUE_CR_MIPS_ADDR_REMAP3_CONFIG2,
+		       (mips_data->exception_code_dma_addr &
+			~ROGUE_CR_MIPS_ADDR_REMAP3_CONFIG2_ADDR_OUT_CLRMSK) | remap_settings);
+
+	/* Garten IDLE bit controlled by MIPS. */
+	pvr_cr_write64(pvr_dev, ROGUE_CR_MTS_GARTEN_WRAPPER_CONFIG,
+		       ROGUE_CR_MTS_GARTEN_WRAPPER_CONFIG_IDLE_CTRL_META);
+
+	/* Turn on the EJTAG probe. */
+	pvr_cr_write32(pvr_dev, ROGUE_CR_MIPS_DEBUG_CONFIG, 0);
+
+	return 0;
+}
+
+static u32
+pvr_mips_get_fw_addr_with_offset(struct pvr_fw_object *fw_obj, u32 offset)
+{
+	struct pvr_device *pvr_dev = to_pvr_device(gem_from_pvr_gem(fw_obj->gem)->dev);
+
+	/* MIPS cacheability is determined by page table. */
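+	/* Rebase the FW heap offset onto the fixed MIPS FW heap base. */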
+	return ((fw_obj->fw_addr_offset + offset) & pvr_dev->fw_dev.fw_heap_info.offset_mask) |
+	       ROGUE_FW_HEAP_MIPS_BASE;
+}
+
+static bool
+pvr_mips_has_fixed_data_addr(void)
+{
+	return true;
+}
+
+const struct pvr_fw_defs pvr_fw_defs_mips = {
+	.init = pvr_mips_init,
+	.fini = pvr_mips_fini,
+	.fw_process = pvr_mips_fw_process,
+	.vm_map = pvr_vm_mips_map,
+	.vm_unmap = pvr_vm_mips_unmap,
+	.get_fw_addr_with_offset = pvr_mips_get_fw_addr_with_offset,
+	.wrapper_init = pvr_mips_wrapper_init,
+	.has_fixed_data_addr = pvr_mips_has_fixed_data_addr,
+	.irq = {
+		.enable_reg = ROGUE_CR_MIPS_WRAPPER_IRQ_ENABLE,
+		.status_reg = ROGUE_CR_MIPS_WRAPPER_IRQ_STATUS,
+		.clear_reg = ROGUE_CR_MIPS_WRAPPER_IRQ_CLEAR,
+		.event_mask = ROGUE_CR_MIPS_WRAPPER_IRQ_STATUS_EVENT_EN,
+		.clear_mask = ROGUE_CR_MIPS_WRAPPER_IRQ_CLEAR_EVENT_EN,
+	},
+};
diff --git a/drivers/gpu/drm/imagination/pvr_fw_mips.h b/drivers/gpu/drm/imagination/pvr_fw_mips.h
new file mode 100644
index 000000000000..efb31a69af29
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_fw_mips.h
@@ -0,0 +1,38 @@
+/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#ifndef PVR_FW_MIPS_H
+#define PVR_FW_MIPS_H
+
+#include <linux/types.h>
+
+/* Forward declaration from pvr_gem.h. */
+struct pvr_gem_object;
+
+/**
+ * struct pvr_fw_mips_data - MIPS-specific data
+ */
+struct pvr_fw_mips_data {
+	/** @pt_obj: Object representing MIPS pagetable. */
+	struct pvr_gem_object *pt_obj;
+
+	/** @pt: Pointer to CPU mapping of MIPS pagetable. */
+	u32 *pt;
+
+	/** @boot_code_dma_addr: DMA address of MIPS boot code. */
+	dma_addr_t boot_code_dma_addr;
+
+	/** @boot_data_dma_addr: DMA address of MIPS boot data. */
+	dma_addr_t boot_data_dma_addr;
+
+	/** @exception_code_dma_addr: DMA address of MIPS exception code. */
+	dma_addr_t exception_code_dma_addr;
+
+	/** @cache_policy: Cache policy for this processor. */
+	u32 cache_policy;
+
+	/** @pfn_mask: PFN mask for MIPS pagetable. */
+	u32 pfn_mask;
+};
+
+#endif /* PVR_FW_MIPS_H */
diff --git a/drivers/gpu/drm/imagination/pvr_vm_mips.c b/drivers/gpu/drm/imagination/pvr_vm_mips.c
new file mode 100644
index 000000000000..940c7da17fec
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_vm_mips.c
@@ -0,0 +1,208 @@
+// SPDX-License-Identifier: GPL-2.0 OR MIT
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#include "pvr_device.h"
+#include "pvr_fw_mips.h"
+#include "pvr_gem.h"
+#include "pvr_mmu.h"
+#include "pvr_rogue_mips.h"
+#include "pvr_vm.h"
+#include "pvr_vm_mips.h"
+
+#include <drm/drm_managed.h>
+#include <linux/err.h>
+#include <linux/slab.h>
+#include <linux/types.h>
+
+/**
+ * pvr_vm_mips_init() - Initialise MIPS FW pagetable
+ * @pvr_dev: Target PowerVR device.
+ *
+ * Returns:
+ *  * 0 on success,
+ *  * -%EINVAL if the pagetable is too large or the phys_bus_width feature is missing,
+ *  * Any error returned by pvr_gem_object_create(), or
+ *  * Any error returned by pvr_gem_object_vmap().
+ */
+int
+pvr_vm_mips_init(struct pvr_device *pvr_dev)
+{
+	u32 pt_size = 1 << ROGUE_MIPSFW_LOG2_PAGETABLE_SIZE_4K(pvr_dev);
+	struct pvr_fw_mips_data *mips_data;
+	u32 phys_bus_width;
+	int err;
+
+	/* Page table size must be at most ROGUE_MIPSFW_MAX_NUM_PAGETABLE_PAGES * 4k pages. */
+	if (pt_size > ROGUE_MIPSFW_MAX_NUM_PAGETABLE_PAGES * SZ_4K)
+		return -EINVAL;
+
+	if (PVR_FEATURE_VALUE(pvr_dev, phys_bus_width, &phys_bus_width))
+		return -EINVAL;
+
+	mips_data = drmm_kzalloc(from_pvr_device(pvr_dev), sizeof(*mips_data), GFP_KERNEL);
+	if (!mips_data)
+		return -ENOMEM;
+
+	mips_data->pt_obj = pvr_gem_object_create(pvr_dev, pt_size,
+						  DRM_PVR_BO_DEVICE_PM_FW_PROTECT);
+	if (IS_ERR(mips_data->pt_obj))
+		return PTR_ERR(mips_data->pt_obj);
+
+	mips_data->pt = pvr_gem_object_vmap(mips_data->pt_obj);
+	if (IS_ERR(mips_data->pt)) {
+		err = PTR_ERR(mips_data->pt);
+		goto err_put_obj;
+	}
+
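+	/*
+	 * The EntryLo PFN mask and cache policy encoding depend on whether the
+	 * physical bus is wider than 32 bits.
+	 */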
+	mips_data->pfn_mask = (phys_bus_width > 32) ? ROGUE_MIPSFW_ENTRYLO_PFN_MASK_ABOVE_32BIT :
+						      ROGUE_MIPSFW_ENTRYLO_PFN_MASK;
+
+	mips_data->cache_policy = (phys_bus_width > 32) ? ROGUE_MIPSFW_CACHED_POLICY_ABOVE_32BIT :
+							  ROGUE_MIPSFW_CACHED_POLICY;
+
+	pvr_dev->fw_dev.processor_data.mips_data = mips_data;
+
+	return 0;
+
+err_put_obj:
+	pvr_gem_object_put(mips_data->pt_obj);
+
+	return err;
+}
+
+/**
+ * pvr_vm_mips_fini() - Release MIPS FW pagetable
+ * @pvr_dev: Target PowerVR device.
+ */
+void
+pvr_vm_mips_fini(struct pvr_device *pvr_dev)
+{
+	struct pvr_fw_device *fw_dev = &pvr_dev->fw_dev;
+	struct pvr_fw_mips_data *mips_data = fw_dev->processor_data.mips_data;
+
+	pvr_gem_object_vunmap(mips_data->pt_obj);
+	pvr_gem_object_put(mips_data->pt_obj);
+	fw_dev->processor_data.mips_data = NULL;
+}
+
+static u32
+get_mips_pte_flags(bool read, bool write, u32 cache_policy)
+{
+	u32 flags = 0;
+
+	if (read && write) /* Read/write. */
+		flags |= ROGUE_MIPSFW_ENTRYLO_DIRTY_EN;
+	else if (write)    /* Write only. */
+		flags |= ROGUE_MIPSFW_ENTRYLO_READ_INHIBIT_EN;
+	else
+		WARN_ON(!read);
+
+	flags |= cache_policy << ROGUE_MIPSFW_ENTRYLO_CACHE_POLICY_SHIFT;
+
+	flags |= ROGUE_MIPSFW_ENTRYLO_VALID_EN | ROGUE_MIPSFW_ENTRYLO_GLOBAL_EN;
+
+	return flags;
+}
+
+/**
+ * pvr_vm_mips_map() - Map a FW object into MIPS address space
+ * @pvr_dev: Target PowerVR device.
+ * @fw_obj: FW object to map.
+ *
+ * Returns:
+ *  * 0 on success,
+ *  * -%EINVAL if object does not reside within FW address space, or
+ *  * Any error returned by pvr_fw_object_get_dma_addr().
+ */
+int
+pvr_vm_mips_map(struct pvr_device *pvr_dev, struct pvr_fw_object *fw_obj)
+{
+	struct pvr_fw_device *fw_dev = &pvr_dev->fw_dev;
+	struct pvr_fw_mips_data *mips_data = fw_dev->processor_data.mips_data;
+	struct pvr_gem_object *pvr_obj = fw_obj->gem;
+	u64 start = fw_obj->fw_mm_node.start;
+	u64 size = fw_obj->fw_mm_node.size;
+	u64 end;
+	u32 cache_policy;
+	u32 pte_flags;
+	u32 start_pfn;
+	u32 end_pfn;
+	u32 pfn;
+	int err;
+
+	if (check_add_overflow(start, size - 1, &end))
+		return -EINVAL;
+
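+	/* The mapping must lie entirely within the FW heap and be 4K aligned. */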
+	if (start < ROGUE_FW_HEAP_BASE ||
+	    start >= ROGUE_FW_HEAP_BASE + fw_dev->fw_heap_info.raw_size ||
+	    end < ROGUE_FW_HEAP_BASE ||
+	    end >= ROGUE_FW_HEAP_BASE + fw_dev->fw_heap_info.raw_size ||
+	    (start & ROGUE_MIPSFW_PAGE_MASK_4K) ||
+	    ((end + 1) & ROGUE_MIPSFW_PAGE_MASK_4K))
+		return -EINVAL;
+
+	start_pfn = (start & fw_dev->fw_heap_info.offset_mask) >> ROGUE_MIPSFW_LOG2_PAGE_SIZE_4K;
+	end_pfn = (end & fw_dev->fw_heap_info.offset_mask) >> ROGUE_MIPSFW_LOG2_PAGE_SIZE_4K;
+
+	if (pvr_obj->flags & PVR_BO_FW_FLAGS_DEVICE_UNCACHED)
+		cache_policy = ROGUE_MIPSFW_UNCACHED_CACHE_POLICY;
+	else
+		cache_policy = mips_data->cache_policy;
+
+	pte_flags = get_mips_pte_flags(true, true, cache_policy);
+
+	for (pfn = start_pfn; pfn <= end_pfn; pfn++) {
+		dma_addr_t dma_addr;
+		u32 pte;
+
+		err = pvr_fw_object_get_dma_addr(fw_obj,
+						 (pfn - start_pfn) <<
+						 ROGUE_MIPSFW_LOG2_PAGE_SIZE_4K,
+						 &dma_addr);
+		if (err)
+			goto err_unmap_pages;
+
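+		/*
+		 * Pack the DMA address into the EntryLo PFN field and add the
+		 * access/cache flags.
+		 */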
+		pte = ((dma_addr >> ROGUE_MIPSFW_LOG2_PAGE_SIZE_4K)
+		       << ROGUE_MIPSFW_ENTRYLO_PFN_SHIFT) & mips_data->pfn_mask;
+		pte |= pte_flags;
+
+		WRITE_ONCE(mips_data->pt[pfn], pte);
+	}
+
+	pvr_mmu_flush(pvr_dev);
+
+	return 0;
+
+err_unmap_pages:
+	for (; pfn >= start_pfn; pfn--)
+		WRITE_ONCE(mips_data->pt[pfn], 0);
+
+	pvr_mmu_flush(pvr_dev);
+
+	return err;
+}
+
+/**
+ * pvr_vm_mips_unmap() - Unmap a FW object from MIPS address space
+ * @pvr_dev: Target PowerVR device.
+ * @fw_obj: FW object to unmap.
+ */
+void
+pvr_vm_mips_unmap(struct pvr_device *pvr_dev, struct pvr_fw_object *fw_obj)
+{
+	struct pvr_fw_device *fw_dev = &pvr_dev->fw_dev;
+	struct pvr_fw_mips_data *mips_data = fw_dev->processor_data.mips_data;
+	u64 start = fw_obj->fw_mm_node.start;
+	u64 size = fw_obj->fw_mm_node.size;
+	u64 end = start + size;
+
+	u32 start_pfn = (start & fw_dev->fw_heap_info.offset_mask) >>
+			ROGUE_MIPSFW_LOG2_PAGE_SIZE_4K;
+	u32 end_pfn = (end & fw_dev->fw_heap_info.offset_mask) >> ROGUE_MIPSFW_LOG2_PAGE_SIZE_4K;
+	u32 pfn;
+
+	for (pfn = start_pfn; pfn < end_pfn; pfn++)
+		WRITE_ONCE(mips_data->pt[pfn], 0);
+
+	pvr_mmu_flush(pvr_dev);
+}
diff --git a/drivers/gpu/drm/imagination/pvr_vm_mips.h b/drivers/gpu/drm/imagination/pvr_vm_mips.h
new file mode 100644
index 000000000000..71d238d5327a
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_vm_mips.h
@@ -0,0 +1,22 @@
+/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#ifndef PVR_VM_MIPS_H
+#define PVR_VM_MIPS_H
+
+/* Forward declaration from pvr_device.h. */
+struct pvr_device;
+
+/* Forward declaration from pvr_gem.h. */
+struct pvr_fw_object;
+
+int
+pvr_vm_mips_init(struct pvr_device *pvr_dev);
+void
+pvr_vm_mips_fini(struct pvr_device *pvr_dev);
+int
+pvr_vm_mips_map(struct pvr_device *pvr_dev, struct pvr_fw_object *fw_obj);
+void
+pvr_vm_mips_unmap(struct pvr_device *pvr_dev, struct pvr_fw_object *fw_obj);
+
+#endif /* PVR_VM_MIPS_H */
-- 
2.41.0


^ permalink raw reply related	[flat|nested] 77+ messages in thread

* [PATCH v5 12/17] drm/imagination: Implement free list and HWRT create and destroy ioctls
  2023-08-16  8:25 ` Sarah Walker
@ 2023-08-16  8:25   ` Sarah Walker
  -1 siblings, 0 replies; 77+ messages in thread
From: Sarah Walker @ 2023-08-16  8:25 UTC (permalink / raw)
  To: dri-devel
  Cc: matthew.brost, luben.tuikov, tzimmermann, linux-kernel, mripard,
	afd, boris.brezillon, dakr, donald.robson, hns, christian.koenig,
	faith.ekstrand

Implement ioctls to create and destroy free lists and HWRT datasets. Free
lists are used for GPU-side memory allocation during geometry processing.
HWRT datasets are the FW-side structures representing render targets.

Changes since v4:
- Remove use of drm_gem_shmem_get_pages()

Changes since v3:
- Support free list grow requests from FW
- Use drm_dev_{enter,exit}

Co-developed-by: Boris Brezillon <boris.brezillon@collabora.com>
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Co-developed-by: Donald Robson <donald.robson@imgtec.com>
Signed-off-by: Donald Robson <donald.robson@imgtec.com>
Signed-off-by: Sarah Walker <sarah.walker@imgtec.com>
---
 drivers/gpu/drm/imagination/Makefile        |   2 +
 drivers/gpu/drm/imagination/pvr_ccb.c       |  10 +
 drivers/gpu/drm/imagination/pvr_device.h    |  24 +
 drivers/gpu/drm/imagination/pvr_drv.c       | 112 +++-
 drivers/gpu/drm/imagination/pvr_free_list.c | 625 ++++++++++++++++++++
 drivers/gpu/drm/imagination/pvr_free_list.h | 195 ++++++
 drivers/gpu/drm/imagination/pvr_hwrt.c      | 549 +++++++++++++++++
 drivers/gpu/drm/imagination/pvr_hwrt.h      | 165 ++++++
 8 files changed, 1678 insertions(+), 4 deletions(-)
 create mode 100644 drivers/gpu/drm/imagination/pvr_free_list.c
 create mode 100644 drivers/gpu/drm/imagination/pvr_free_list.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_hwrt.c
 create mode 100644 drivers/gpu/drm/imagination/pvr_hwrt.h

diff --git a/drivers/gpu/drm/imagination/Makefile b/drivers/gpu/drm/imagination/Makefile
index 0a6532d30c00..fca2ee2efbac 100644
--- a/drivers/gpu/drm/imagination/Makefile
+++ b/drivers/gpu/drm/imagination/Makefile
@@ -8,12 +8,14 @@ powervr-y := \
 	pvr_device.o \
 	pvr_device_info.o \
 	pvr_drv.o \
+	pvr_free_list.o \
 	pvr_fw.o \
 	pvr_fw_meta.o \
 	pvr_fw_mips.o \
 	pvr_fw_startstop.o \
 	pvr_fw_trace.o \
 	pvr_gem.o \
+	pvr_hwrt.o \
 	pvr_mmu.o \
 	pvr_power.o \
 	pvr_vm.o \
diff --git a/drivers/gpu/drm/imagination/pvr_ccb.c b/drivers/gpu/drm/imagination/pvr_ccb.c
index 1dfcce963e67..faa32a548a80 100644
--- a/drivers/gpu/drm/imagination/pvr_ccb.c
+++ b/drivers/gpu/drm/imagination/pvr_ccb.c
@@ -4,6 +4,7 @@
 #include "pvr_ccb.h"
 #include "pvr_device.h"
 #include "pvr_drv.h"
+#include "pvr_free_list.h"
 #include "pvr_fw.h"
 #include "pvr_gem.h"
 #include "pvr_power.h"
@@ -138,6 +139,15 @@ process_fwccb_command(struct pvr_device *pvr_dev, struct rogue_fwif_fwccb_cmd *c
 		pvr_power_reset(pvr_dev, false);
 		break;
 
+	case ROGUE_FWIF_FWCCB_CMD_FREELISTS_RECONSTRUCTION:
+		pvr_free_list_process_reconstruct_req(pvr_dev,
+						      &cmd->cmd_data.cmd_freelists_reconstruction);
+		break;
+
+	case ROGUE_FWIF_FWCCB_CMD_FREELIST_GROW:
+		pvr_free_list_process_grow_req(pvr_dev, &cmd->cmd_data.cmd_free_list_gs);
+		break;
+
 	default:
 		drm_info(from_pvr_device(pvr_dev), "Received unknown FWCCB command %x\n",
 			 cmd->cmd_type);
diff --git a/drivers/gpu/drm/imagination/pvr_device.h b/drivers/gpu/drm/imagination/pvr_device.h
index 7e353cfe4fd2..a810b233689b 100644
--- a/drivers/gpu/drm/imagination/pvr_device.h
+++ b/drivers/gpu/drm/imagination/pvr_device.h
@@ -146,6 +146,14 @@ struct pvr_device {
 	/** @fw_dev: Firmware related data. */
 	struct pvr_fw_device fw_dev;
 
+	/**
+	 * @free_list_ids: Array of free lists belonging to this device. Array members
+	 *                 are of type "struct pvr_free_list *".
+	 *
+	 * This array is used to allocate IDs used by the firmware.
+	 */
+	struct xarray free_list_ids;
+
 	struct {
 		/** @work: Work item for watchdog callback. */
 		struct delayed_work work;
@@ -241,6 +249,22 @@ struct pvr_file {
 	 */
 	struct pvr_device *pvr_dev;
 
+	/**
+	 * @free_list_handles: Array of free lists belonging to this file. Array
+	 * members are of type "struct pvr_free_list *".
+	 *
+	 * This array is used to allocate handles returned to userspace.
+	 */
+	struct xarray free_list_handles;
+
+	/**
+	 * @hwrt_handles: Array of HWRT datasets belonging to this file. Array
+	 * members are of type "struct pvr_hwrt_dataset *".
+	 *
+	 * This array is used to allocate handles returned to userspace.
+	 */
+	struct xarray hwrt_handles;
+
 	/**
 	 * @vm_ctx_handles: Array of VM contexts belonging to this file. Array
 	 * members are of type "struct pvr_vm_context *".
diff --git a/drivers/gpu/drm/imagination/pvr_drv.c b/drivers/gpu/drm/imagination/pvr_drv.c
index af12f87cef97..bd601eced4f2 100644
--- a/drivers/gpu/drm/imagination/pvr_drv.c
+++ b/drivers/gpu/drm/imagination/pvr_drv.c
@@ -3,7 +3,9 @@
 
 #include "pvr_device.h"
 #include "pvr_drv.h"
+#include "pvr_free_list.h"
 #include "pvr_gem.h"
+#include "pvr_hwrt.h"
 #include "pvr_power.h"
 #include "pvr_rogue_defs.h"
 #include "pvr_rogue_fwif_client.h"
@@ -725,7 +727,41 @@ static int
 pvr_ioctl_create_free_list(struct drm_device *drm_dev, void *raw_args,
 			   struct drm_file *file)
 {
-	return -ENOTTY;
+	struct drm_pvr_ioctl_create_free_list_args *args = raw_args;
+	struct pvr_file *pvr_file = to_pvr_file(file);
+	struct pvr_free_list *free_list;
+	int idx;
+	int err;
+
+	if (!drm_dev_enter(drm_dev, &idx))
+		return -EIO;
+
+	free_list = pvr_free_list_create(pvr_file, args);
+	if (IS_ERR(free_list)) {
+		err = PTR_ERR(free_list);
+		goto err_drm_dev_exit;
+	}
+
+	/* Allocate object handle for userspace. */
+	err = xa_alloc(&pvr_file->free_list_handles,
+		       &args->handle,
+		       free_list,
+		       xa_limit_32b,
+		       GFP_KERNEL);
+	if (err < 0)
+		goto err_cleanup;
+
+	drm_dev_exit(idx);
+
+	return 0;
+
+err_cleanup:
+	pvr_free_list_put(free_list);
+
+err_drm_dev_exit:
+	drm_dev_exit(idx);
+
+	return err;
 }
 
 /**
@@ -745,7 +781,19 @@ static int
 pvr_ioctl_destroy_free_list(struct drm_device *drm_dev, void *raw_args,
 			    struct drm_file *file)
 {
-	return -ENOTTY;
+	struct drm_pvr_ioctl_destroy_free_list_args *args = raw_args;
+	struct pvr_file *pvr_file = to_pvr_file(file);
+	struct pvr_free_list *free_list;
+
+	if (args->_padding_4)
+		return -EINVAL;
+
+	free_list = xa_erase(&pvr_file->free_list_handles, args->handle);
+	if (!free_list)
+		return -EINVAL;
+
+	pvr_free_list_put(free_list);
+	return 0;
 }
 
 /**
@@ -765,7 +813,41 @@ static int
 pvr_ioctl_create_hwrt_dataset(struct drm_device *drm_dev, void *raw_args,
 			      struct drm_file *file)
 {
-	return -ENOTTY;
+	struct drm_pvr_ioctl_create_hwrt_dataset_args *args = raw_args;
+	struct pvr_file *pvr_file = to_pvr_file(file);
+	struct pvr_hwrt_dataset *hwrt;
+	int idx;
+	int err;
+
+	if (!drm_dev_enter(drm_dev, &idx))
+		return -EIO;
+
+	hwrt = pvr_hwrt_dataset_create(pvr_file, args);
+	if (IS_ERR(hwrt)) {
+		err = PTR_ERR(hwrt);
+		goto err_drm_dev_exit;
+	}
+
+	/* Allocate object handle for userspace. */
+	err = xa_alloc(&pvr_file->hwrt_handles,
+		       &args->handle,
+		       hwrt,
+		       xa_limit_32b,
+		       GFP_KERNEL);
+	if (err < 0)
+		goto err_cleanup;
+
+	drm_dev_exit(idx);
+
+	return 0;
+
+err_cleanup:
+	pvr_hwrt_dataset_put(hwrt);
+
+err_drm_dev_exit:
+	drm_dev_exit(idx);
+
+	return err;
 }
 
 /**
@@ -785,7 +867,19 @@ static int
 pvr_ioctl_destroy_hwrt_dataset(struct drm_device *drm_dev, void *raw_args,
 			       struct drm_file *file)
 {
-	return -ENOTTY;
+	struct drm_pvr_ioctl_destroy_hwrt_dataset_args *args = raw_args;
+	struct pvr_file *pvr_file = to_pvr_file(file);
+	struct pvr_hwrt_dataset *hwrt;
+
+	if (args->_padding_4)
+		return -EINVAL;
+
+	hwrt = xa_erase(&pvr_file->hwrt_handles, args->handle);
+	if (!hwrt)
+		return -EINVAL;
+
+	pvr_hwrt_dataset_put(hwrt);
+	return 0;
 }
 
 /**
@@ -1209,6 +1303,8 @@ pvr_drm_driver_open(struct drm_device *drm_dev, struct drm_file *file)
 	 */
 	pvr_file->pvr_dev = pvr_dev;
 
+	xa_init_flags(&pvr_file->free_list_handles, XA_FLAGS_ALLOC1);
+	xa_init_flags(&pvr_file->hwrt_handles, XA_FLAGS_ALLOC1);
 	xa_init_flags(&pvr_file->vm_ctx_handles, XA_FLAGS_ALLOC1);
 
 	/*
@@ -1237,6 +1333,8 @@ pvr_drm_driver_postclose(__always_unused struct drm_device *drm_dev,
 	struct pvr_file *pvr_file = to_pvr_file(file);
 
 	/* Drop references on any remaining objects. */
+	pvr_destroy_free_lists_for_file(pvr_file);
+	pvr_destroy_hwrt_datasets_for_file(pvr_file);
 	pvr_destroy_vm_contexts_for_file(pvr_file);
 
 	kfree(pvr_file);
@@ -1295,6 +1393,8 @@ pvr_probe(struct platform_device *plat_dev)
 	if (err)
 		goto err_device_fini;
 
+	xa_init_flags(&pvr_dev->free_list_ids, XA_FLAGS_ALLOC1);
+
 	return 0;
 
 err_device_fini:
@@ -1312,6 +1412,10 @@ pvr_remove(struct platform_device *plat_dev)
 	struct drm_device *drm_dev = platform_get_drvdata(plat_dev);
 	struct pvr_device *pvr_dev = to_pvr_device(drm_dev);
 
+	WARN_ON(!xa_empty(&pvr_dev->free_list_ids));
+
+	xa_destroy(&pvr_dev->free_list_ids);
+
 	pm_runtime_suspend(drm_dev->dev);
 	drm_dev_unplug(drm_dev);
 	pvr_device_fini(pvr_dev);
diff --git a/drivers/gpu/drm/imagination/pvr_free_list.c b/drivers/gpu/drm/imagination/pvr_free_list.c
new file mode 100644
index 000000000000..683882fdfbc6
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_free_list.c
@@ -0,0 +1,625 @@
+// SPDX-License-Identifier: GPL-2.0 OR MIT
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#include "pvr_free_list.h"
+#include "pvr_gem.h"
+#include "pvr_hwrt.h"
+#include "pvr_rogue_fwif.h"
+#include "pvr_vm.h"
+
+#include <drm/drm_gem.h>
+#include <linux/slab.h>
+#include <linux/xarray.h>
+#include <uapi/drm/pvr_drm.h>
+
+#define FREE_LIST_ENTRY_SIZE sizeof(u32)
+
+#define FREE_LIST_ALIGNMENT \
+	((ROGUE_BIF_PM_FREELIST_BASE_ADDR_ALIGNSIZE / FREE_LIST_ENTRY_SIZE) - 1)
+
+#define FREE_LIST_MIN_PAGES 50
+#define FREE_LIST_MIN_PAGES_BRN66011 40
+#define FREE_LIST_MIN_PAGES_ROGUEXE 25
+
+/**
+ * pvr_get_free_list_min_pages() - Get minimum free list size for this device
+ * @pvr_dev: Device pointer.
+ *
+ * Returns:
+ *  * Minimum free list size, in PM physical pages.
+ */
+u32
+pvr_get_free_list_min_pages(struct pvr_device *pvr_dev)
+{
+	u32 value;
+
+	if (PVR_HAS_FEATURE(pvr_dev, roguexe)) {
+		if (PVR_HAS_QUIRK(pvr_dev, 66011))
+			value = FREE_LIST_MIN_PAGES_BRN66011;
+		else
+			value = FREE_LIST_MIN_PAGES_ROGUEXE;
+	} else {
+		value = FREE_LIST_MIN_PAGES;
+	}
+
+	return value;
+}
+
+static int
+free_list_create_kernel_structure(struct pvr_file *pvr_file,
+				  struct drm_pvr_ioctl_create_free_list_args *args,
+				  struct pvr_free_list *free_list)
+{
+	struct pvr_gem_object *free_list_obj;
+	struct pvr_vm_context *vm_ctx;
+	u64 free_list_size;
+	int err;
+
+	if (args->grow_threshold > 100 ||
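+	/*
+	 * grow_threshold is a percentage. The page counts must be mutually
+	 * consistent: grow_num_pages must be non-zero exactly when the list
+	 * can still grow, and all counts must satisfy the alignment checks
+	 * below.
+	 */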
+	    args->initial_num_pages > args->max_num_pages ||
+	    args->grow_num_pages > args->max_num_pages ||
+	    args->max_num_pages == 0 ||
+	    (args->initial_num_pages < args->max_num_pages && !args->grow_num_pages) ||
+	    (args->initial_num_pages == args->max_num_pages && args->grow_num_pages))
+		return -EINVAL;
+
+	if ((args->initial_num_pages & FREE_LIST_ALIGNMENT) ||
+	    (args->max_num_pages & FREE_LIST_ALIGNMENT) ||
+	    (args->grow_num_pages & FREE_LIST_ALIGNMENT))
+		return -EINVAL;
+
+	vm_ctx = pvr_vm_context_lookup(pvr_file, args->vm_context_handle);
+	if (!vm_ctx)
+		return -EINVAL;
+
+	free_list_obj = pvr_vm_find_gem_object(vm_ctx, args->free_list_gpu_addr,
+					       NULL, &free_list_size);
+	if (!free_list_obj) {
+		err = -EINVAL;
+		goto err_put_vm_context;
+	}
+
+	if ((free_list_obj->flags & DRM_PVR_BO_CPU_ALLOW_USERSPACE_ACCESS) ||
+	    !(free_list_obj->flags & DRM_PVR_BO_DEVICE_PM_FW_PROTECT) ||
+	    free_list_size < (args->max_num_pages * FREE_LIST_ENTRY_SIZE)) {
+		err = -EINVAL;
+		goto err_put_free_list_obj;
+	}
+
+	free_list->pvr_dev = pvr_file->pvr_dev;
+	free_list->current_pages = 0;
+	free_list->max_pages = args->max_num_pages;
+	free_list->grow_pages = args->grow_num_pages;
+	free_list->grow_threshold = args->grow_threshold;
+	free_list->obj = free_list_obj;
+	free_list->free_list_gpu_addr = args->free_list_gpu_addr;
+	free_list->initial_num_pages = args->initial_num_pages;
+
+	pvr_vm_context_put(vm_ctx);
+
+	return 0;
+
+err_put_free_list_obj:
+	pvr_gem_object_put(free_list_obj);
+
+err_put_vm_context:
+	pvr_vm_context_put(vm_ctx);
+
+	return err;
+}
+
+static void
+free_list_destroy_kernel_structure(struct pvr_free_list *free_list)
+{
+	WARN_ON(!list_empty(&free_list->hwrt_list));
+
+	pvr_gem_object_put(free_list->obj);
+}
+
+/**
+ * calculate_free_list_ready_pages_locked() - Calculate the number of free list
+ *                                            pages to reserve so the FW can grow
+ *                                            the list without waiting for the
+ *                                            host to process a grow request
+ * @free_list: Pointer to free list.
+ * @pages: Total pages currently in free list.
+ *
+ * If the threshold or grow size means less than the alignment size (4 pages on
+ * Rogue), then the feature is not used.
+ *
+ * Caller must hold &free_list->lock.
+ *
+ * Return: number of pages to reserve.
+ */
+static u32
+calculate_free_list_ready_pages_locked(struct pvr_free_list *free_list, u32 pages)
+{
+	u32 ready_pages;
+
+	lockdep_assert_held(&free_list->lock);
+
+	ready_pages = ((pages * free_list->grow_threshold) / 100);
+
+	/* The number of ready pages must not exceed the grow size. */
+	ready_pages = min(ready_pages, free_list->grow_pages);
+
+	/*
+	 * The number of pages must be a multiple of the free list align size.
+	 */
+	ready_pages &= ~FREE_LIST_ALIGNMENT;
+
+	return ready_pages;
+}
+
+static u32
+calculate_free_list_ready_pages(struct pvr_free_list *free_list, u32 pages)
+{
+	u32 ret;
+
+	mutex_lock(&free_list->lock);
+
+	ret = calculate_free_list_ready_pages_locked(free_list, pages);
+
+	mutex_unlock(&free_list->lock);
+
+	return ret;
+}
+
+static void
+free_list_fw_init(void *cpu_ptr, void *priv)
+{
+	struct rogue_fwif_freelist *fw_data = cpu_ptr;
+	struct pvr_free_list *free_list = priv;
+	u32 ready_pages;
+
+	/* Fill out FW structure */
+	ready_pages = calculate_free_list_ready_pages(free_list,
+						      free_list->initial_num_pages);
+
+	fw_data->max_pages = free_list->max_pages;
+	fw_data->current_pages = free_list->initial_num_pages - ready_pages;
+	fw_data->grow_pages = free_list->grow_pages;
+	fw_data->ready_pages = ready_pages;
+	fw_data->freelist_id = free_list->fw_id;
+	fw_data->grow_pending = false;
+	fw_data->current_stack_top = fw_data->current_pages - 1;
+	fw_data->freelist_dev_addr = free_list->free_list_gpu_addr;
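+	/*
+	 * The free list is populated from the end of the buffer, so the
+	 * current device address points at the first in-use entry, rounded
+	 * down to the PM base address alignment.
+	 */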
+	fw_data->current_dev_addr = (fw_data->freelist_dev_addr +
+				     ((fw_data->max_pages - fw_data->current_pages) *
+				      FREE_LIST_ENTRY_SIZE)) &
+				    ~((u64)ROGUE_BIF_PM_FREELIST_BASE_ADDR_ALIGNSIZE - 1);
+}
+
+static int
+free_list_create_fw_structure(struct pvr_file *pvr_file,
+			      struct drm_pvr_ioctl_create_free_list_args *args,
+			      struct pvr_free_list *free_list)
+{
+	struct pvr_device *pvr_dev = pvr_file->pvr_dev;
+
+	/*
+	 * Create and map the FW structure so we can initialise it. This is not
+	 * accessed on the CPU side post-initialisation so the mapping lifetime
+	 * is only for this function.
+	 */
+	free_list->fw_data = pvr_fw_object_create_and_map(pvr_dev, sizeof(*free_list->fw_data),
+							  PVR_BO_FW_FLAGS_DEVICE_UNCACHED,
+							  free_list_fw_init, free_list,
+							  &free_list->fw_obj);
+	if (IS_ERR(free_list->fw_data))
+		return PTR_ERR(free_list->fw_data);
+
+	return 0;
+}
+
+static void
+free_list_destroy_fw_structure(struct pvr_free_list *free_list)
+{
+	pvr_fw_object_unmap_and_destroy(free_list->fw_obj);
+}
+
+static int
+pvr_free_list_insert_pages_locked(struct pvr_free_list *free_list,
+				  struct sg_table *sgt, u32 offset, u32 num_pages)
+{
+	struct sg_dma_page_iter dma_iter;
+	u32 *page_list;
+
+	lockdep_assert_held(&free_list->lock);
+
+	page_list = pvr_gem_object_vmap(free_list->obj);
+	if (IS_ERR(page_list))
+		return PTR_ERR(page_list);
+
+	offset /= FREE_LIST_ENTRY_SIZE;
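+	/*
+	 * Each DMA-mapped CPU page may cover several PM physical pages; write
+	 * one page frame number entry into the free list for each PM page.
+	 */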
+	/* clang-format off */
+	for_each_sgtable_dma_page(sgt, &dma_iter, 0) {
+		dma_addr_t dma_addr = sg_page_iter_dma_address(&dma_iter);
+		u64 dma_pfn = dma_addr >>
+			       ROGUE_BIF_PM_PHYSICAL_PAGE_ALIGNSHIFT;
+		u32 dma_addr_offset;
+
+		BUILD_BUG_ON(ROGUE_BIF_PM_PHYSICAL_PAGE_SIZE > PAGE_SIZE);
+
+		for (dma_addr_offset = 0; dma_addr_offset < PAGE_SIZE;
+		     dma_addr_offset += ROGUE_BIF_PM_PHYSICAL_PAGE_SIZE) {
+			WARN_ON_ONCE(dma_pfn >> 32);
+
+			page_list[offset++] = (u32)dma_pfn;
+			dma_pfn++;
+
+			num_pages--;
+			if (!num_pages)
+				break;
+		}
+
+		if (!num_pages)
+			break;
+	}
+	/* clang-format on */
+
+	/* Make sure our free_list update is flushed. */
+	wmb();
+
+	pvr_gem_object_vunmap(free_list->obj);
+
+	return 0;
+}
+
+static int
+pvr_free_list_insert_node_locked(struct pvr_free_list_node *free_list_node)
+{
+	struct pvr_free_list *free_list = free_list_node->free_list;
+	struct sg_table *sgt;
+	u32 start_page;
+	u32 offset;
+	int err;
+
+	lockdep_assert_held(&free_list->lock);
+
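+	/*
+	 * The free list buffer is filled from the end, so the new node's pages
+	 * are inserted immediately below the entries already present.
+	 */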
+	start_page = free_list->max_pages - free_list->current_pages -
+		     free_list_node->num_pages;
+	offset = (start_page * FREE_LIST_ENTRY_SIZE) &
+		  ~((u64)ROGUE_BIF_PM_FREELIST_BASE_ADDR_ALIGNSIZE - 1);
+
+	sgt = drm_gem_shmem_get_pages_sgt(&free_list_node->mem_obj->base);
+	if (WARN_ON(IS_ERR(sgt)))
+		return PTR_ERR(sgt);
+
+	err = pvr_free_list_insert_pages_locked(free_list, sgt,
+						offset, free_list_node->num_pages);
+	if (!err)
+		free_list->current_pages += free_list_node->num_pages;
+
+	return err;
+}
+
+static int
+pvr_free_list_grow(struct pvr_free_list *free_list, u32 num_pages)
+{
+	struct pvr_device *pvr_dev = free_list->pvr_dev;
+	struct pvr_free_list_node *free_list_node;
+	int err;
+
+	mutex_lock(&free_list->lock);
+
+	if (num_pages & FREE_LIST_ALIGNMENT) {
+		err = -EINVAL;
+		goto err_unlock;
+	}
+
+	free_list_node = kzalloc(sizeof(*free_list_node), GFP_KERNEL);
+	if (!free_list_node) {
+		err = -ENOMEM;
+		goto err_unlock;
+	}
+
+	free_list_node->num_pages = num_pages;
+	free_list_node->free_list = free_list;
+
+	free_list_node->mem_obj = pvr_gem_object_create(pvr_dev,
+							num_pages <<
+							ROGUE_BIF_PM_PHYSICAL_PAGE_ALIGNSHIFT,
+							PVR_BO_FW_FLAGS_DEVICE_CACHED);
+	if (IS_ERR(free_list_node->mem_obj)) {
+		err = PTR_ERR(free_list_node->mem_obj);
+		goto err_free;
+	}
+
+	err = pvr_free_list_insert_node_locked(free_list_node);
+	if (err)
+		goto err_destroy_gem_object;
+
+	list_add_tail(&free_list_node->node, &free_list->mem_block_list);
+
+	/*
+	 * Reserve a number of ready pages to allow the FW to process OOM quickly
+	 * and asynchronously request a grow.
+	 */
+	free_list->ready_pages =
+		calculate_free_list_ready_pages_locked(free_list,
+						       free_list->current_pages);
+	free_list->current_pages -= free_list->ready_pages;
+
+	mutex_unlock(&free_list->lock);
+
+	return 0;
+
+err_destroy_gem_object:
+	pvr_gem_object_put(free_list_node->mem_obj);
+
+err_free:
+	kfree(free_list_node);
+
+err_unlock:
+	mutex_unlock(&free_list->lock);
+
+	return err;
+}
+
+void pvr_free_list_process_grow_req(struct pvr_device *pvr_dev,
+				    struct rogue_fwif_fwccb_cmd_freelist_gs_data *req)
+{
+	struct pvr_free_list *free_list = pvr_free_list_lookup_id(pvr_dev, req->freelist_id);
+	struct rogue_fwif_kccb_cmd resp_cmd = {
+		.cmd_type = ROGUE_FWIF_KCCB_CMD_FREELIST_GROW_UPDATE,
+	};
+	struct rogue_fwif_freelist_gs_data *resp = &resp_cmd.cmd_data.free_list_gs_data;
+	u32 grow_pages = 0;
+
+	/* If we don't have a freelist registered for this ID, we can't do much. */
+	if (WARN_ON(!free_list))
+		return;
+
+	/* Since the FW made the request, it has already consumed the ready pages,
+	 * so update the host structure to match.
+	 */
+	free_list->current_pages += free_list->ready_pages;
+	free_list->ready_pages = 0;
+
+	/* If the grow succeeds, update the grow_pages argument. */
+	if (!pvr_free_list_grow(free_list, free_list->grow_pages))
+		grow_pages = free_list->grow_pages;
+
+	/* Now prepare the response and send it back to the FW. */
+	pvr_fw_object_get_fw_addr(free_list->fw_obj, &resp->freelist_fw_addr);
+	resp->delta_pages = grow_pages;
+	resp->new_pages = free_list->current_pages + free_list->ready_pages;
+	resp->ready_pages = free_list->ready_pages;
+	pvr_free_list_put(free_list);
+
+	WARN_ON(pvr_kccb_send_cmd(pvr_dev, &resp_cmd, NULL));
+}
+
+static void
+pvr_free_list_free_node(struct pvr_free_list_node *free_list_node)
+{
+	pvr_gem_object_put(free_list_node->mem_obj);
+
+	kfree(free_list_node);
+}
+
+/**
+ * pvr_free_list_create() - Create a new free list and return an object pointer
+ * @pvr_file: Pointer to pvr_file structure.
+ * @args: Creation arguments from userspace.
+ *
+ * Return:
+ *  * Pointer to new free_list,
+ *  * ERR_PTR(-%ENOMEM) on out of memory,
+ *  * ERR_PTR(-%EINVAL) on invalid creation arguments, or
+ *  * ERR_PTR() wrapping any other error encountered during creation.
+ */
+struct pvr_free_list *
+pvr_free_list_create(struct pvr_file *pvr_file,
+		     struct drm_pvr_ioctl_create_free_list_args *args)
+{
+	struct pvr_free_list *free_list;
+	int err;
+
+	/* Create and fill out the kernel structure */
+	free_list = kzalloc(sizeof(*free_list), GFP_KERNEL);
+
+	if (!free_list)
+		return ERR_PTR(-ENOMEM);
+
+	kref_init(&free_list->ref_count);
+	INIT_LIST_HEAD(&free_list->mem_block_list);
+	INIT_LIST_HEAD(&free_list->hwrt_list);
+	mutex_init(&free_list->lock);
+
+	err = free_list_create_kernel_structure(pvr_file, args, free_list);
+	if (err < 0)
+		goto err_free;
+
+	/* Allocate global object ID for firmware. */
+	err = xa_alloc(&pvr_file->pvr_dev->free_list_ids,
+		       &free_list->fw_id,
+		       free_list,
+		       xa_limit_32b,
+		       GFP_KERNEL);
+	if (err)
+		goto err_destroy_kernel_structure;
+
+	err = free_list_create_fw_structure(pvr_file, args, free_list);
+	if (err < 0)
+		goto err_free_fw_id;
+
+	err = pvr_free_list_grow(free_list, args->initial_num_pages);
+	if (err < 0)
+		goto err_fw_struct_cleanup;
+
+	return free_list;
+
+err_fw_struct_cleanup:
+	WARN_ON(pvr_fw_structure_cleanup(free_list->pvr_dev,
+					 ROGUE_FWIF_CLEANUP_FREELIST,
+					 free_list->fw_obj, 0));
+
+err_free_fw_id:
+	xa_erase(&free_list->pvr_dev->free_list_ids, free_list->fw_id);
+
+err_destroy_kernel_structure:
+	free_list_destroy_kernel_structure(free_list);
+
+err_free:
+	mutex_destroy(&free_list->lock);
+	kfree(free_list);
+
+	return ERR_PTR(err);
+}
+
+static void
+pvr_free_list_release(struct kref *ref_count)
+{
+	struct pvr_free_list *free_list =
+		container_of(ref_count, struct pvr_free_list, ref_count);
+	struct list_head *pos, *n;
+	int err;
+
+	xa_erase(&free_list->pvr_dev->free_list_ids, free_list->fw_id);
+
+	err = pvr_fw_structure_cleanup(free_list->pvr_dev,
+				       ROGUE_FWIF_CLEANUP_FREELIST,
+				       free_list->fw_obj, 0);
+	if (err == -EBUSY) {
+		/* Flush the FWCCB to process any HWR or freelist reconstruction
+		 * request that might keep the freelist busy, and try again.
+		 */
+		pvr_fwccb_process(free_list->pvr_dev);
+		err = pvr_fw_structure_cleanup(free_list->pvr_dev,
+					       ROGUE_FWIF_CLEANUP_FREELIST,
+					       free_list->fw_obj, 0);
+	}
+
+	WARN_ON(err);
+
+	/* clang-format off */
+	list_for_each_safe(pos, n, &free_list->mem_block_list) {
+		struct pvr_free_list_node *free_list_node =
+			container_of(pos, struct pvr_free_list_node, node);
+
+		list_del(pos);
+		pvr_free_list_free_node(free_list_node);
+	}
+	/* clang-format on */
+
+	free_list_destroy_kernel_structure(free_list);
+	free_list_destroy_fw_structure(free_list);
+	mutex_destroy(&free_list->lock);
+	kfree(free_list);
+}
+
+/**
+ * pvr_destroy_free_lists_for_file() - Destroy any free lists associated with the
+ * given file.
+ * @pvr_file: Pointer to pvr_file structure.
+ *
+ * Removes all free lists associated with @pvr_file from the file's handle
+ * array and drops their initial references. Free lists are then destroyed once
+ * all outstanding references are dropped.
+ */
+void pvr_destroy_free_lists_for_file(struct pvr_file *pvr_file)
+{
+	struct pvr_free_list *free_list;
+	unsigned long handle;
+
+	xa_for_each(&pvr_file->free_list_handles, handle, free_list) {
+		(void)free_list;
+		pvr_free_list_put(xa_erase(&pvr_file->free_list_handles, handle));
+	}
+}
+
+/**
+ * pvr_free_list_put() - Release reference on free list
+ * @free_list: Pointer to list to release reference on
+ */
+void
+pvr_free_list_put(struct pvr_free_list *free_list)
+{
+	if (free_list)
+		kref_put(&free_list->ref_count, pvr_free_list_release);
+}
+
+void pvr_free_list_add_hwrt(struct pvr_free_list *free_list, struct pvr_hwrt_data *hwrt_data)
+{
+	mutex_lock(&free_list->lock);
+
+	list_add_tail(&hwrt_data->freelist_node, &free_list->hwrt_list);
+
+	mutex_unlock(&free_list->lock);
+}
+
+void pvr_free_list_remove_hwrt(struct pvr_free_list *free_list, struct pvr_hwrt_data *hwrt_data)
+{
+	mutex_lock(&free_list->lock);
+
+	list_del(&hwrt_data->freelist_node);
+
+	mutex_unlock(&free_list->lock);
+}
+
+static void
+pvr_free_list_reconstruct(struct pvr_device *pvr_dev, u32 freelist_id)
+{
+	struct pvr_free_list *free_list = pvr_free_list_lookup_id(pvr_dev, freelist_id);
+	struct pvr_free_list_node *free_list_node;
+	struct rogue_fwif_freelist *fw_data;
+	struct pvr_hwrt_data *hwrt_data;
+
+	if (!free_list)
+		return;
+
+	mutex_lock(&free_list->lock);
+
+	/* Rebuild the free list based on the memory block list. */
+	free_list->current_pages = 0;
+
+	list_for_each_entry(free_list_node, &free_list->mem_block_list, node)
+		WARN_ON(pvr_free_list_insert_node_locked(free_list_node));
+
+	/*
+	 * Remove the ready pages, which are reserved to allow the FW to process OOM quickly and
+	 * asynchronously request a grow.
+	 */
+	free_list->current_pages -= free_list->ready_pages;
+
+	fw_data = free_list->fw_data;
+	fw_data->current_stack_top = fw_data->current_pages - 1;
+	fw_data->allocated_page_count = 0;
+	fw_data->allocated_mmu_page_count = 0;
+
+	/* Reset the state of any associated HWRTs. */
+	list_for_each_entry(hwrt_data, &free_list->hwrt_list, freelist_node) {
+		struct rogue_fwif_hwrtdata *hwrt_fw_data = pvr_fw_object_vmap(hwrt_data->fw_obj);
+
+		if (!WARN_ON(IS_ERR(hwrt_fw_data))) {
+			hwrt_fw_data->state = ROGUE_FWIF_RTDATA_STATE_HWR;
+			hwrt_fw_data->hwrt_data_flags &= ~HWRTDATA_HAS_LAST_GEOM;
+		}
+
+		pvr_fw_object_vunmap(hwrt_data->fw_obj);
+	}
+
+	mutex_unlock(&free_list->lock);
+
+	pvr_free_list_put(free_list);
+}
+
+void
+pvr_free_list_process_reconstruct_req(struct pvr_device *pvr_dev,
+				struct rogue_fwif_fwccb_cmd_freelists_reconstruction_data *req)
+{
+	struct rogue_fwif_kccb_cmd resp_cmd = {
+		.cmd_type = ROGUE_FWIF_KCCB_CMD_FREELISTS_RECONSTRUCTION_UPDATE,
+	};
+	struct rogue_fwif_freelists_reconstruction_data *resp =
+		&resp_cmd.cmd_data.free_lists_reconstruction_data;
+
+	for (u32 i = 0; i < req->freelist_count; i++)
+		pvr_free_list_reconstruct(pvr_dev, req->freelist_ids[i]);
+
+	resp->freelist_count = req->freelist_count;
+	memcpy(resp->freelist_ids, req->freelist_ids,
+	       req->freelist_count * sizeof(resp->freelist_ids[0]));
+
+	WARN_ON(pvr_kccb_send_cmd(pvr_dev, &resp_cmd, NULL));
+}
diff --git a/drivers/gpu/drm/imagination/pvr_free_list.h b/drivers/gpu/drm/imagination/pvr_free_list.h
new file mode 100644
index 000000000000..9f3ed6d7c4c5
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_free_list.h
@@ -0,0 +1,195 @@
+/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#ifndef PVR_FREE_LIST_H
+#define PVR_FREE_LIST_H
+
+#include <linux/compiler_attributes.h>
+#include <linux/kref.h>
+#include <linux/list.h>
+#include <linux/mutex.h>
+#include <linux/types.h>
+#include <linux/xarray.h>
+#include <uapi/drm/pvr_drm.h>
+
+#include "pvr_device.h"
+
+/* Forward declaration from pvr_gem.h. */
+struct pvr_fw_object;
+
+/* Forward declaration from pvr_gem.h. */
+struct pvr_gem_object;
+
+/* Forward declaration from pvr_hwrt.h. */
+struct pvr_hwrt_data;
+
+/**
+ * struct pvr_free_list_node - structure representing an allocation in the free
+ *                             list
+ */
+struct pvr_free_list_node {
+	/** @node: List node for &pvr_free_list.mem_block_list. */
+	struct list_head node;
+
+	/** @free_list: Pointer to owning free list. */
+	struct pvr_free_list *free_list;
+
+	/** @num_pages: Number of pages in this node. */
+	u32 num_pages;
+
+	/** @mem_obj: GEM object representing the pages in this node. */
+	struct pvr_gem_object *mem_obj;
+};
+
+/**
+ * struct pvr_free_list - structure representing a free list
+ */
+struct pvr_free_list {
+	/** @ref_count: Reference count of object. */
+	struct kref ref_count;
+
+	/** @pvr_dev: Pointer to device that owns this object. */
+	struct pvr_device *pvr_dev;
+
+	/** @obj: GEM object representing the free list. */
+	struct pvr_gem_object *obj;
+
+	/** @fw_obj: FW object representing the FW-side structure. */
+	struct pvr_fw_object *fw_obj;
+
+	/** @fw_data: Pointer to CPU mapping of the FW-side structure. */
+	struct rogue_fwif_freelist *fw_data;
+
+	/**
+	 * @lock: Mutex protecting modification of the free list. Must be held when accessing any
+	 *        of the members below.
+	 */
+	struct mutex lock;
+
+	/** @fw_id: Firmware ID for this object. */
+	u32 fw_id;
+
+	/** @current_pages: Current number of pages in free list. */
+	u32 current_pages;
+
+	/** @max_pages: Maximum number of pages in free list. */
+	u32 max_pages;
+
+	/** @grow_pages: Pages to grow free list by per request. */
+	u32 grow_pages;
+
+	/**
+	 * @grow_threshold: Percentage of FL memory used that should trigger a
+	 *                  new grow request.
+	 */
+	u32 grow_threshold;
+
+	/**
+	 * @ready_pages: Number of pages reserved for FW to use while a grow
+	 *               request is being processed.
+	 */
+	u32 ready_pages;
+
+	/** @mem_block_list: List of memory blocks in this free list. */
+	struct list_head mem_block_list;
+
+	/** @hwrt_list: List of HWRTs using this free list. */
+	struct list_head hwrt_list;
+
+	/** @initial_num_pages: Initial number of pages in free list. */
+	u32 initial_num_pages;
+
+	/** @free_list_gpu_addr: Address of free list in GPU address space. */
+	u64 free_list_gpu_addr;
+};
+
+struct pvr_free_list *
+pvr_free_list_create(struct pvr_file *pvr_file,
+		     struct drm_pvr_ioctl_create_free_list_args *args);
+
+void
+pvr_destroy_free_lists_for_file(struct pvr_file *pvr_file);
+
+u32
+pvr_get_free_list_min_pages(struct pvr_device *pvr_dev);
+
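+/**
+ * pvr_free_list_get() - Take additional reference on free list
+ * @free_list: Free list pointer, may be %NULL.
+ *
+ * Call pvr_free_list_put() to release.
+ *
+ * Returns:
+ *  * @free_list (with a reference taken if not %NULL).
+ */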
+static __always_inline struct pvr_free_list *
+pvr_free_list_get(struct pvr_free_list *free_list)
+{
+	if (free_list)
+		kref_get(&free_list->ref_count);
+
+	return free_list;
+}
+
+/**
+ * pvr_free_list_lookup() - Lookup free list pointer from handle and file
+ * @pvr_file: Pointer to pvr_file structure.
+ * @handle: Object handle.
+ *
+ * Takes reference on free list object. Call pvr_free_list_put() to release.
+ *
+ * Returns:
+ *  * The requested object on success, or
+ *  * %NULL on failure (object does not exist in list, is not a free list, or
+ *    does not belong to @pvr_file)
+ */
+static __always_inline struct pvr_free_list *
+pvr_free_list_lookup(struct pvr_file *pvr_file, u32 handle)
+{
+	struct pvr_free_list *free_list;
+
+	xa_lock(&pvr_file->free_list_handles);
+	free_list = pvr_free_list_get(xa_load(&pvr_file->free_list_handles, handle));
+	xa_unlock(&pvr_file->free_list_handles);
+
+	return free_list;
+}
+
+/**
+ * pvr_free_list_lookup_id() - Lookup free list pointer from FW ID
+ * @pvr_dev: Device pointer.
+ * @id: FW object ID.
+ *
+ * Takes reference on free list object. Call pvr_free_list_put() to release.
+ *
+ * Returns:
+ *  * The requested object on success, or
+ *  * %NULL on failure (object does not exist in list, or is not a free list)
+ */
+static __always_inline struct pvr_free_list *
+pvr_free_list_lookup_id(struct pvr_device *pvr_dev, u32 id)
+{
+	struct pvr_free_list *free_list;
+
+	xa_lock(&pvr_dev->free_list_ids);
+
+	/* Free lists are removed from the free_list_ids set in the free list
+	 * release path, meaning the ref_count has already reached zero before
+	 * they are removed. We need to make sure we're not trying to acquire a
+	 * free list that's being destroyed.
+	 */
+	free_list = xa_load(&pvr_dev->free_list_ids, id);
+	if (free_list && !kref_get_unless_zero(&free_list->ref_count))
+		free_list = NULL;
+	xa_unlock(&pvr_dev->free_list_ids);
+
+	return free_list;
+}
+
+void
+pvr_free_list_put(struct pvr_free_list *free_list);
+
+void
+pvr_free_list_add_hwrt(struct pvr_free_list *free_list, struct pvr_hwrt_data *hwrt_data);
+void
+pvr_free_list_remove_hwrt(struct pvr_free_list *free_list, struct pvr_hwrt_data *hwrt_data);
+
+void pvr_free_list_process_grow_req(struct pvr_device *pvr_dev,
+				    struct rogue_fwif_fwccb_cmd_freelist_gs_data *req);
+
+void
+pvr_free_list_process_reconstruct_req(struct pvr_device *pvr_dev,
+				struct rogue_fwif_fwccb_cmd_freelists_reconstruction_data *req);
+
+#endif /* PVR_FREE_LIST_H */
diff --git a/drivers/gpu/drm/imagination/pvr_hwrt.c b/drivers/gpu/drm/imagination/pvr_hwrt.c
new file mode 100644
index 000000000000..684dccf30148
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_hwrt.c
@@ -0,0 +1,549 @@
+// SPDX-License-Identifier: GPL-2.0 OR MIT
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#include "pvr_free_list.h"
+#include "pvr_hwrt.h"
+#include "pvr_gem.h"
+#include "pvr_rogue_cr_defs_client.h"
+#include "pvr_rogue_fwif.h"
+
+#include <drm/drm_gem.h>
+#include <linux/bitops.h>
+#include <linux/math.h>
+#include <linux/slab.h>
+#include <linux/xarray.h>
+#include <uapi/drm/pvr_drm.h>
+
+static_assert(ROGUE_FWIF_NUM_RTDATAS == 2);
+static_assert(ROGUE_FWIF_NUM_GEOMDATAS == 1);
+static_assert(ROGUE_FWIF_NUM_RTDATA_FREELISTS == 2);
+
+/*
+ * struct pvr_rt_mtile_info - Render target macrotile information
+ */
+struct pvr_rt_mtile_info {
+	u32 mtile_x[3];
+	u32 mtile_y[3];
+	u32 tile_max_x;
+	u32 tile_max_y;
+	u32 tile_size_x;
+	u32 tile_size_y;
+	u32 num_tiles_x;
+	u32 num_tiles_y;
+};
+
+/* Size of Shadow Render Target Cache entry */
+#define SRTC_ENTRY_SIZE sizeof(u32)
+/* Size of Renders Accumulation Array entry */
+#define RAA_ENTRY_SIZE sizeof(u32)
+
+static int
+hwrt_init_kernel_structure(struct pvr_file *pvr_file,
+			   struct drm_pvr_ioctl_create_hwrt_dataset_args *args,
+			   struct pvr_hwrt_dataset *hwrt)
+{
+	struct pvr_device *pvr_dev = pvr_file->pvr_dev;
+	int err;
+	int i;
+
+	hwrt->pvr_dev = pvr_dev;
+	hwrt->max_rts = args->layers;
+
+	/* Get pointers to the free lists */
+	for (i = 0; i < ARRAY_SIZE(hwrt->free_lists); i++) {
+		hwrt->free_lists[i] = pvr_free_list_lookup(pvr_file,  args->free_list_handles[i]);
+		if (!hwrt->free_lists[i]) {
+			err = -EINVAL;
+			goto err_put_free_lists;
+		}
+	}
+
+	if (hwrt->free_lists[ROGUE_FW_LOCAL_FREELIST]->current_pages <
+	    pvr_get_free_list_min_pages(pvr_dev)) {
+		err = -EINVAL;
+		goto err_put_free_lists;
+	}
+
+	return 0;
+
+err_put_free_lists:
+	for (i = 0; i < ARRAY_SIZE(hwrt->free_lists); i++) {
+		pvr_free_list_put(hwrt->free_lists[i]);
+		hwrt->free_lists[i] = NULL;
+	}
+
+	return err;
+}
+
+static void
+hwrt_fini_kernel_structure(struct pvr_hwrt_dataset *hwrt)
+{
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(hwrt->free_lists); i++) {
+		pvr_free_list_put(hwrt->free_lists[i]);
+		hwrt->free_lists[i] = NULL;
+	}
+}
+
+static void
+hwrt_fini_common_fw_structure(struct pvr_hwrt_dataset *hwrt)
+{
+	pvr_fw_object_destroy(hwrt->common_fw_obj);
+}
+
+static int
+get_cr_isp_mtile_size_val(struct pvr_device *pvr_dev, u32 samples,
+			  struct pvr_rt_mtile_info *info, u32 *value_out)
+{
+	u32 x = info->mtile_x[0];
+	u32 y = info->mtile_y[0];
+	u32 samples_per_pixel;
+	int err;
+
+	err = PVR_FEATURE_VALUE(pvr_dev, isp_samples_per_pixel, &samples_per_pixel);
+	if (err)
+		return err;
+
+	if (samples_per_pixel == 1) {
+		if (samples >= 4)
+			x <<= 1;
+		if (samples >= 2)
+			y <<= 1;
+	} else if (samples_per_pixel == 2) {
+		if (samples >= 8)
+			x <<= 1;
+		if (samples >= 4)
+			y <<= 1;
+	} else if (samples_per_pixel == 4) {
+		if (samples >= 8)
+			y <<= 1;
+	} else {
+		WARN(true, "Unsupported ISP samples per pixel value");
+		return -EINVAL;
+	}
+
+	*value_out = ((x << ROGUE_CR_ISP_MTILE_SIZE_X_SHIFT) & ~ROGUE_CR_ISP_MTILE_SIZE_X_CLRMSK) |
+		     ((y << ROGUE_CR_ISP_MTILE_SIZE_Y_SHIFT) & ~ROGUE_CR_ISP_MTILE_SIZE_Y_CLRMSK);
+
+	return 0;
+}
+
+static int
+get_cr_multisamplectl_val(u32 samples, bool y_flip, u64 *value_out)
+{
+	static const struct {
+		u8 x[8];
+		u8 y[8];
+	} sample_positions[4] = {
+		/* 1 sample */
+		{
+			.x = { 8 },
+			.y = { 8 },
+		},
+		/* 2 samples */
+		{
+			.x = { 12, 4 },
+			.y = { 12, 4 },
+		},
+		/* 4 samples */
+		{
+			.x = { 6, 14, 2, 10 },
+			.y = { 2, 6, 10, 14 },
+		},
+		/* 8 samples */
+		{
+			.x = { 9, 7, 13, 5, 3, 1, 11, 15 },
+			.y = { 5, 11, 9, 3, 13, 7, 15, 1 },
+		},
+	};
+	const int idx = fls(samples) - 1;
+	u64 value = 0;
+
+	if (idx < 0 || idx > 3)
+		return -EINVAL;
+
+	for (u32 i = 0; i < 8; i++) {
+		value |= (u64)sample_positions[idx].x[i] << (i * 8);
+		if (y_flip)
+			value |= (u64)((16 - sample_positions[idx].y[i]) & 0xf) << (i * 8 + 4);
+		else
+			value |= (u64)sample_positions[idx].y[i] << (i * 8 + 4);
+	}
+
+	*value_out = value;
+
+	return 0;
+}
+
+static int
+get_cr_te_aa_val(struct pvr_device *pvr_dev, u32 samples, u32 *value_out)
+{
+	u32 samples_per_pixel;
+	u32 value = 0;
+	int err = 0;
+
+	err = PVR_FEATURE_VALUE(pvr_dev, isp_samples_per_pixel, &samples_per_pixel);
+	if (err)
+		return err;
+
+	switch (samples_per_pixel) {
+	case 1:
+		if (samples >= 2)
+			value |= ROGUE_CR_TE_AA_Y_EN;
+		if (samples >= 4)
+			value |= ROGUE_CR_TE_AA_X_EN;
+		break;
+	case 2:
+		if (samples >= 2)
+			value |= ROGUE_CR_TE_AA_X2_EN;
+		if (samples >= 4)
+			value |= ROGUE_CR_TE_AA_Y_EN;
+		if (samples >= 8)
+			value |= ROGUE_CR_TE_AA_X_EN;
+		break;
+	case 4:
+		if (samples >= 2)
+			value |= ROGUE_CR_TE_AA_X2_EN;
+		if (samples >= 4)
+			value |= ROGUE_CR_TE_AA_Y2_EN;
+		if (samples >= 8)
+			value |= ROGUE_CR_TE_AA_Y_EN;
+		break;
+	default:
+		WARN(true, "Unsupported ISP samples per pixel value");
+		return -EINVAL;
+	}
+
+	*value_out = value;
+
+	return 0;
+}
+
+static void
+hwrtdata_common_init(void *cpu_ptr, void *priv)
+{
+	struct pvr_hwrt_dataset *hwrt = priv;
+
+	memcpy(cpu_ptr, &hwrt->common, sizeof(hwrt->common));
+}
+
+static int
+hwrt_init_common_fw_structure(struct pvr_file *pvr_file,
+			      struct drm_pvr_ioctl_create_hwrt_dataset_args *args,
+			      struct pvr_hwrt_dataset *hwrt)
+{
+	struct drm_pvr_create_hwrt_geom_data_args *geom_data_args = &args->geom_data_args;
+	struct pvr_device *pvr_dev = pvr_file->pvr_dev;
+	struct pvr_rt_mtile_info info;
+	int err;
+
+	err = PVR_FEATURE_VALUE(pvr_dev, tile_size_x, &info.tile_size_x);
+	if (WARN_ON(err))
+		return err;
+
+	err = PVR_FEATURE_VALUE(pvr_dev, tile_size_y, &info.tile_size_y);
+	if (WARN_ON(err))
+		return err;
+
+	info.num_tiles_x = DIV_ROUND_UP(args->width, info.tile_size_x);
+	info.num_tiles_y = DIV_ROUND_UP(args->height, info.tile_size_y);
+
+	if (PVR_HAS_FEATURE(pvr_dev, simple_parameter_format_version)) {
+		u32 parameter_format;
+
+		err = PVR_FEATURE_VALUE(pvr_dev, simple_parameter_format_version,
+					&parameter_format);
+		if (WARN_ON(err))
+			return err;
+
+		WARN_ON(parameter_format != 2);
+
+		/*
+		 * Set up 16 macrotiles with a multiple of 2x2 tiles per macrotile, which is
+		 * aligned to a tile group.
+		 */
+		info.mtile_x[0] = DIV_ROUND_UP(info.num_tiles_x, 8) * 2;
+		info.mtile_y[0] = DIV_ROUND_UP(info.num_tiles_y, 8) * 2;
+		info.mtile_x[1] = 0;
+		info.mtile_y[1] = 0;
+		info.mtile_x[2] = 0;
+		info.mtile_y[2] = 0;
+		info.tile_max_x = round_up(info.num_tiles_x, 2) - 1;
+		info.tile_max_y = round_up(info.num_tiles_y, 2) - 1;
+	} else {
+		/* Set up 16 macrotiles with a multiple of 4x4 tiles per macrotile. */
+		info.mtile_x[0] = round_up(DIV_ROUND_UP(info.num_tiles_x, 4), 4);
+		info.mtile_y[0] = round_up(DIV_ROUND_UP(info.num_tiles_y, 4), 4);
+		info.mtile_x[1] = info.mtile_x[0] * 2;
+		info.mtile_y[1] = info.mtile_y[0] * 2;
+		info.mtile_x[2] = info.mtile_x[0] * 3;
+		info.mtile_y[2] = info.mtile_y[0] * 3;
+		info.tile_max_x = info.num_tiles_x - 1;
+		info.tile_max_y = info.num_tiles_y - 1;
+	}
+
+	hwrt->common.geom_caches_need_zeroing = false;
+
+	hwrt->common.isp_merge_lower_x = args->isp_merge_lower_x;
+	hwrt->common.isp_merge_lower_y = args->isp_merge_lower_y;
+	hwrt->common.isp_merge_upper_x = args->isp_merge_upper_x;
+	hwrt->common.isp_merge_upper_y = args->isp_merge_upper_y;
+	hwrt->common.isp_merge_scale_x = args->isp_merge_scale_x;
+	hwrt->common.isp_merge_scale_y = args->isp_merge_scale_y;
+
+	err = get_cr_multisamplectl_val(args->samples, false,
+					&hwrt->common.multi_sample_ctl);
+	if (err)
+		return err;
+
+	err = get_cr_multisamplectl_val(args->samples, true,
+					&hwrt->common.flipped_multi_sample_ctl);
+	if (err)
+		return err;
+
+	hwrt->common.mtile_stride = info.mtile_x[0] * info.mtile_y[0];
+
+	err = get_cr_te_aa_val(pvr_dev, args->samples, &hwrt->common.teaa);
+	if (err)
+		return err;
+
+	hwrt->common.screen_pixel_max =
+		(((args->width - 1) << ROGUE_CR_PPP_SCREEN_PIXXMAX_SHIFT) &
+		 ~ROGUE_CR_PPP_SCREEN_PIXXMAX_CLRMSK) |
+		(((args->height - 1) << ROGUE_CR_PPP_SCREEN_PIXYMAX_SHIFT) &
+		 ~ROGUE_CR_PPP_SCREEN_PIXYMAX_CLRMSK);
+
+	hwrt->common.te_screen =
+		((info.tile_max_x << ROGUE_CR_TE_SCREEN_XMAX_SHIFT) &
+		 ~ROGUE_CR_TE_SCREEN_XMAX_CLRMSK) |
+		((info.tile_max_y << ROGUE_CR_TE_SCREEN_YMAX_SHIFT) &
+		 ~ROGUE_CR_TE_SCREEN_YMAX_CLRMSK);
+	hwrt->common.te_mtile1 =
+		((info.mtile_x[0] << ROGUE_CR_TE_MTILE1_X1_SHIFT) & ~ROGUE_CR_TE_MTILE1_X1_CLRMSK) |
+		((info.mtile_x[1] << ROGUE_CR_TE_MTILE1_X2_SHIFT) & ~ROGUE_CR_TE_MTILE1_X2_CLRMSK) |
+		((info.mtile_x[2] << ROGUE_CR_TE_MTILE1_X3_SHIFT) & ~ROGUE_CR_TE_MTILE1_X3_CLRMSK);
+	hwrt->common.te_mtile2 =
+		((info.mtile_y[0] << ROGUE_CR_TE_MTILE2_Y1_SHIFT) & ~ROGUE_CR_TE_MTILE2_Y1_CLRMSK) |
+		((info.mtile_y[1] << ROGUE_CR_TE_MTILE2_Y2_SHIFT) & ~ROGUE_CR_TE_MTILE2_Y2_CLRMSK) |
+		((info.mtile_y[2] << ROGUE_CR_TE_MTILE2_Y3_SHIFT) & ~ROGUE_CR_TE_MTILE2_Y3_CLRMSK);
+
+	err = get_cr_isp_mtile_size_val(pvr_dev, args->samples, &info,
+					&hwrt->common.isp_mtile_size);
+	if (err)
+		return err;
+
+	hwrt->common.tpc_stride = geom_data_args->tpc_stride;
+	hwrt->common.tpc_size = geom_data_args->tpc_size;
+
+	hwrt->common.rgn_header_size = args->region_header_size;
+
+	err = pvr_fw_object_create(pvr_dev, sizeof(struct rogue_fwif_hwrtdata_common),
+				   PVR_BO_FW_FLAGS_DEVICE_UNCACHED, hwrtdata_common_init, hwrt,
+				   &hwrt->common_fw_obj);
+
+	return err;
+}
+
+static void
+hwrt_fw_data_init(void *cpu_ptr, void *priv)
+{
+	struct pvr_hwrt_data *hwrt_data = priv;
+
+	memcpy(cpu_ptr, &hwrt_data->data, sizeof(hwrt_data->data));
+}
+
+static int
+hwrt_data_init_fw_structure(struct pvr_file *pvr_file,
+			    struct pvr_hwrt_dataset *hwrt,
+			    struct drm_pvr_ioctl_create_hwrt_dataset_args *args,
+			    struct drm_pvr_create_hwrt_rt_data_args *rt_data_args,
+			    struct pvr_hwrt_data *hwrt_data)
+{
+	struct drm_pvr_create_hwrt_geom_data_args *geom_data_args = &args->geom_data_args;
+	struct pvr_device *pvr_dev = pvr_file->pvr_dev;
+	struct rogue_fwif_rta_ctl *rta_ctl;
+	int free_list_i;
+	int err;
+
+	pvr_fw_object_get_fw_addr(hwrt->common_fw_obj,
+				  &hwrt_data->data.hwrt_data_common_fw_addr);
+
+	for (free_list_i = 0; free_list_i < ARRAY_SIZE(hwrt->free_lists); free_list_i++) {
+		pvr_fw_object_get_fw_addr(hwrt->free_lists[free_list_i]->fw_obj,
+					  &hwrt_data->data.freelists_fw_addr[free_list_i]);
+	}
+
+	hwrt_data->data.tail_ptrs_dev_addr = geom_data_args->tpc_dev_addr;
+	hwrt_data->data.vheap_table_dev_addr = geom_data_args->vheap_table_dev_addr;
+	hwrt_data->data.rtc_dev_addr = geom_data_args->rtc_dev_addr;
+
+	hwrt_data->data.pm_mlist_dev_addr = rt_data_args->pm_mlist_dev_addr;
+	hwrt_data->data.macrotile_array_dev_addr = rt_data_args->macrotile_array_dev_addr;
+	hwrt_data->data.rgn_header_dev_addr = rt_data_args->region_header_dev_addr;
+
+	rta_ctl = &hwrt_data->data.rta_ctl;
+
+	rta_ctl->render_target_index = 0;
+	rta_ctl->active_render_targets = 0;
+	rta_ctl->valid_render_targets_fw_addr = 0;
+	rta_ctl->rta_num_partial_renders_fw_addr = 0;
+	rta_ctl->max_rts = args->layers;
+
+	if (args->layers > 1) {
+		err = pvr_fw_object_create(pvr_dev, args->layers * SRTC_ENTRY_SIZE,
+					   PVR_BO_FW_FLAGS_DEVICE_UNCACHED,
+					   NULL, NULL, &hwrt_data->srtc_obj);
+		if (err)
+			return err;
+		pvr_fw_object_get_fw_addr(hwrt_data->srtc_obj,
+					  &rta_ctl->valid_render_targets_fw_addr);
+
+		err = pvr_fw_object_create(pvr_dev, args->layers * RAA_ENTRY_SIZE,
+					   PVR_BO_FW_FLAGS_DEVICE_UNCACHED,
+					   NULL, NULL, &hwrt_data->raa_obj);
+		if (err)
+			goto err_put_shadow_rt_cache;
+		pvr_fw_object_get_fw_addr(hwrt_data->raa_obj,
+					  &rta_ctl->rta_num_partial_renders_fw_addr);
+	}
+
+	err = pvr_fw_object_create(pvr_dev, sizeof(struct rogue_fwif_hwrtdata),
+				   PVR_BO_FW_FLAGS_DEVICE_UNCACHED,
+				   hwrt_fw_data_init, hwrt_data, &hwrt_data->fw_obj);
+	if (err)
+		goto err_put_raa_obj;
+
+	pvr_free_list_add_hwrt(hwrt->free_lists[0], hwrt_data);
+
+	return 0;
+
+err_put_raa_obj:
+	if (args->layers > 1)
+		pvr_fw_object_destroy(hwrt_data->raa_obj);
+
+err_put_shadow_rt_cache:
+	if (args->layers > 1)
+		pvr_fw_object_destroy(hwrt_data->srtc_obj);
+
+	return err;
+}
+
+static void
+hwrt_data_fini_fw_structure(struct pvr_hwrt_dataset *hwrt, int hwrt_nr)
+{
+	struct pvr_hwrt_data *hwrt_data = &hwrt->data[hwrt_nr];
+
+	pvr_free_list_remove_hwrt(hwrt->free_lists[0], hwrt_data);
+
+	if (hwrt->max_rts > 1) {
+		pvr_fw_object_destroy(hwrt_data->raa_obj);
+		pvr_fw_object_destroy(hwrt_data->srtc_obj);
+	}
+
+	pvr_fw_object_destroy(hwrt_data->fw_obj);
+}
+
+/**
+ * pvr_hwrt_dataset_create() - Create a new HWRT dataset
+ * @pvr_file: Pointer to pvr_file structure.
+ * @args: Creation arguments from userspace.
+ *
+ * Return:
+ *  * Pointer to new HWRT on success, or
+ *  * ERR_PTR() wrapping a negative error code on failure, e.g. -%ENOMEM on
+ *    out of memory or -%EINVAL on invalid arguments.
+ */
+struct pvr_hwrt_dataset *
+pvr_hwrt_dataset_create(struct pvr_file *pvr_file,
+			struct drm_pvr_ioctl_create_hwrt_dataset_args *args)
+{
+	struct pvr_hwrt_dataset *hwrt;
+	int err;
+
+	/* Create and fill out the kernel structure */
+	hwrt = kzalloc(sizeof(*hwrt), GFP_KERNEL);
+
+	if (!hwrt)
+		return ERR_PTR(-ENOMEM);
+
+	kref_init(&hwrt->ref_count);
+
+	err = hwrt_init_kernel_structure(pvr_file, args, hwrt);
+	if (err < 0)
+		goto err_free;
+
+	err = hwrt_init_common_fw_structure(pvr_file, args, hwrt);
+	if (err < 0)
+		goto err_free;
+
+	for (int i = 0; i < ARRAY_SIZE(hwrt->data); i++) {
+		err = hwrt_data_init_fw_structure(pvr_file, hwrt, args,
+						  &args->rt_data_args[i],
+						  &hwrt->data[i]);
+		if (err < 0) {
+			i--;
+			/* Destroy already created structures. */
+			for (; i >= 0; i--)
+				hwrt_data_fini_fw_structure(hwrt, i);
+			goto err_free;
+		}
+
+		hwrt->data[i].hwrt_dataset = hwrt;
+	}
+
+	return hwrt;
+
+err_free:
+	pvr_hwrt_dataset_put(hwrt);
+
+	return ERR_PTR(err);
+}
+
+static void
+pvr_hwrt_dataset_release(struct kref *ref_count)
+{
+	struct pvr_hwrt_dataset *hwrt =
+		container_of(ref_count, struct pvr_hwrt_dataset, ref_count);
+
+	for (int i = ARRAY_SIZE(hwrt->data) - 1; i >= 0; i--) {
+		WARN_ON(pvr_fw_structure_cleanup(hwrt->pvr_dev, ROGUE_FWIF_CLEANUP_HWRTDATA,
+						 hwrt->data[i].fw_obj, 0));
+		hwrt_data_fini_fw_structure(hwrt, i);
+	}
+
+	hwrt_fini_common_fw_structure(hwrt);
+	hwrt_fini_kernel_structure(hwrt);
+
+	kfree(hwrt);
+}
+
+/**
+ * pvr_destroy_hwrt_datasets_for_file() - Destroy any HWRT datasets associated
+ * with the given file.
+ * @pvr_file: Pointer to pvr_file structure.
+ *
+ * Removes all HWRT datasets associated with @pvr_file from the device
+ * hwrt_dataset list and drops initial references. HWRT datasets will then be
+ * destroyed once all outstanding references are dropped.
+ */
+void pvr_destroy_hwrt_datasets_for_file(struct pvr_file *pvr_file)
+{
+	struct pvr_hwrt_dataset *hwrt;
+	unsigned long handle;
+
+	xa_for_each(&pvr_file->hwrt_handles, handle, hwrt) {
+		(void)hwrt;
+		pvr_hwrt_dataset_put(xa_erase(&pvr_file->hwrt_handles, handle));
+	}
+}
+
+/**
+ * pvr_hwrt_dataset_put() - Release reference on HWRT dataset
+ * @hwrt: Pointer to HWRT dataset to release reference on
+ */
+void
+pvr_hwrt_dataset_put(struct pvr_hwrt_dataset *hwrt)
+{
+	if (hwrt)
+		kref_put(&hwrt->ref_count, pvr_hwrt_dataset_release);
+}
diff --git a/drivers/gpu/drm/imagination/pvr_hwrt.h b/drivers/gpu/drm/imagination/pvr_hwrt.h
new file mode 100644
index 000000000000..dd7797a85f27
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_hwrt.h
@@ -0,0 +1,165 @@
+/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#ifndef PVR_HWRT_H
+#define PVR_HWRT_H
+
+#include <linux/compiler_attributes.h>
+#include <linux/kref.h>
+#include <linux/list.h>
+#include <linux/types.h>
+#include <linux/xarray.h>
+#include <uapi/drm/pvr_drm.h>
+
+#include "pvr_device.h"
+#include "pvr_rogue_fwif_shared.h"
+
+/* Forward declaration from pvr_free_list.h. */
+struct pvr_free_list;
+
+/* Forward declaration from pvr_gem.h. */
+struct pvr_fw_object;
+
+/**
+ * struct pvr_hwrt_data - structure representing HWRT data
+ */
+struct pvr_hwrt_data {
+	/** @fw_obj: FW object representing the FW-side structure. */
+	struct pvr_fw_object *fw_obj;
+
+	/** @data: Local copy of FW-side structure. */
+	struct rogue_fwif_hwrtdata data;
+
+	/** @freelist_node: List node connecting this HWRT to the local freelist. */
+	struct list_head freelist_node;
+
+	/**
+	 * @srtc_obj: FW object representing shadow render target cache.
+	 *
+	 * Only valid if @max_rts > 1.
+	 */
+	struct pvr_fw_object *srtc_obj;
+
+	/**
+	 * @raa_obj: FW object representing renders accumulation array.
+	 *
+	 * Only valid if @max_rts > 1.
+	 */
+	struct pvr_fw_object *raa_obj;
+
+	/** @hwrt_dataset: Back pointer to owning HWRT dataset. */
+	struct pvr_hwrt_dataset *hwrt_dataset;
+};
+
+/**
+ * struct pvr_hwrt_dataset - structure representing a HWRT data set.
+ */
+struct pvr_hwrt_dataset {
+	/** @ref_count: Reference count of object. */
+	struct kref ref_count;
+
+	/** @pvr_dev: Pointer to device that owns this object. */
+	struct pvr_device *pvr_dev;
+
+	/** @common_fw_obj: FW object representing common FW-side structure. */
+	struct pvr_fw_object *common_fw_obj;
+
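+	/** @common: Common HWRT data, copied into @common_fw_obj at creation time. */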
+	struct rogue_fwif_hwrtdata_common common;
+
+	/** @data: HWRT data structures belonging to this set. */
+	struct pvr_hwrt_data data[ROGUE_FWIF_NUM_RTDATAS];
+
+	/** @free_lists: Free lists used by HWRT data set. */
+	struct pvr_free_list *free_lists[ROGUE_FWIF_NUM_RTDATA_FREELISTS];
+
+	/** @max_rts: Maximum render targets for this HWRT data set. */
+	u16 max_rts;
+};
+
+struct pvr_hwrt_dataset *
+pvr_hwrt_dataset_create(struct pvr_file *pvr_file,
+			struct drm_pvr_ioctl_create_hwrt_dataset_args *args);
+
+void
+pvr_destroy_hwrt_datasets_for_file(struct pvr_file *pvr_file);
+
+/**
+ * pvr_hwrt_dataset_lookup() - Lookup HWRT dataset pointer from handle
+ * @pvr_file: Pointer to pvr_file structure.
+ * @handle: Object handle.
+ *
+ * Takes reference on dataset object. Call pvr_hwrt_dataset_put() to release.
+ *
+ * Returns:
+ *  * The requested object on success, or
+ *  * %NULL on failure (object does not exist in list, or is not a HWRT
+ *    dataset)
+ */
+static __always_inline struct pvr_hwrt_dataset *
+pvr_hwrt_dataset_lookup(struct pvr_file *pvr_file, u32 handle)
+{
+	struct pvr_hwrt_dataset *hwrt;
+
+	xa_lock(&pvr_file->hwrt_handles);
+	hwrt = xa_load(&pvr_file->hwrt_handles, handle);
+
+	if (hwrt)
+		kref_get(&hwrt->ref_count);
+
+	xa_unlock(&pvr_file->hwrt_handles);
+
+	return hwrt;
+}
+
+void
+pvr_hwrt_dataset_put(struct pvr_hwrt_dataset *hwrt);
+
+/**
+ * pvr_hwrt_data_lookup() - Lookup HWRT data pointer from handle and index
+ * @pvr_file: Pointer to pvr_file structure.
+ * @handle: Object handle.
+ * @index: Index of RT data within dataset.
+ *
+ * Takes reference on dataset object. Call pvr_hwrt_data_put() to release.
+ *
+ * Returns:
+ *  * The requested object on success, or
+ *  * %NULL on failure (object does not exist in list, or is not a HWRT
+ *    dataset, or index is out of range)
+ */
+static __always_inline struct pvr_hwrt_data *
+pvr_hwrt_data_lookup(struct pvr_file *pvr_file, u32 handle, u32 index)
+{
+	struct pvr_hwrt_dataset *hwrt_dataset = pvr_hwrt_dataset_lookup(pvr_file, handle);
+
+	if (hwrt_dataset) {
+		if (index < ARRAY_SIZE(hwrt_dataset->data))
+			return &hwrt_dataset->data[index];
+
+		pvr_hwrt_dataset_put(hwrt_dataset);
+	}
+
+	return NULL;
+}
+
+/**
+ * pvr_hwrt_data_put() - Release reference on HWRT data
+ * @hwrt: Pointer to HWRT data to release reference on
+ */
+static __always_inline void
+pvr_hwrt_data_put(struct pvr_hwrt_data *hwrt)
+{
+	if (hwrt)
+		pvr_hwrt_dataset_put(hwrt->hwrt_dataset);
+}
+
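+/**
+ * pvr_hwrt_data_get() - Take additional reference on HWRT data
+ * @hwrt: HWRT data pointer, may be %NULL.
+ *
+ * The reference is taken on the owning HWRT dataset. Call pvr_hwrt_data_put()
+ * to release it.
+ *
+ * Returns:
+ *  * @hwrt (with a reference taken on its owning dataset if not %NULL).
+ */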
+static __always_inline struct pvr_hwrt_data *
+pvr_hwrt_data_get(struct pvr_hwrt_data *hwrt)
+{
+	if (hwrt)
+		kref_get(&hwrt->hwrt_dataset->ref_count);
+
+	return hwrt;
+}
+
+#endif /* PVR_HWRT_H */
-- 
2.41.0


^ permalink raw reply related	[flat|nested] 77+ messages in thread

* [PATCH v5 12/17] drm/imagination: Implement free list and HWRT create and destroy ioctls
@ 2023-08-16  8:25   ` Sarah Walker
  0 siblings, 0 replies; 77+ messages in thread
From: Sarah Walker @ 2023-08-16  8:25 UTC (permalink / raw)
  To: dri-devel
  Cc: frank.binns, donald.robson, boris.brezillon, faith.ekstrand,
	airlied, daniel, maarten.lankhorst, mripard, tzimmermann, afd,
	hns, matthew.brost, christian.koenig, luben.tuikov, dakr,
	linux-kernel

Implement ioctls to create and destroy free lists and HWRT datasets. Free
lists are used for GPU-side memory allocation during geometry processing.
HWRT datasets are the FW-side structures representing render targets.
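
For reference, a minimal userspace sketch of driving the free list ioctls
follows. This is not part of the patch: the DRM_IOCTL_PVR_* macro names and
the uapi header path are assumptions; only the args struct and field names
come from this series. The object backing free_list_gpu_addr must be mapped
in the given VM context, created with DRM_PVR_BO_DEVICE_PM_FW_PROTECT and
without DRM_PVR_BO_CPU_ALLOW_USERSPACE_ACCESS, and large enough for
max_num_pages entries.

#include <errno.h>
#include <sys/ioctl.h>
#include <drm/pvr_drm.h>	/* header path assumed */

static int example_create_free_list(int fd, __u32 vm_ctx_handle,
				    __u64 free_list_gpu_addr, __u32 *handle_out)
{
	struct drm_pvr_ioctl_create_free_list_args args = {
		.free_list_gpu_addr = free_list_gpu_addr,
		.initial_num_pages = 256,	/* all page counts are multiples of 4 */
		.max_num_pages = 1024,
		.grow_num_pages = 256,
		.grow_threshold = 90,		/* request a grow at 90% usage */
		.vm_context_handle = vm_ctx_handle,
	};

	/* Ioctl number name is assumed, not defined in this patch. */
	if (ioctl(fd, DRM_IOCTL_PVR_CREATE_FREE_LIST, &args))
		return -errno;

	*handle_out = args.handle;
	return 0;
}

static int example_destroy_free_list(int fd, __u32 handle)
{
	struct drm_pvr_ioctl_destroy_free_list_args args = { .handle = handle };

	/* Ioctl number name is assumed, not defined in this patch. */
	return ioctl(fd, DRM_IOCTL_PVR_DESTROY_FREE_LIST, &args) ? -errno : 0;
}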

Changes since v4:
- Remove use of drm_gem_shmem_get_pages()

Changes since v3:
- Support free list grow requests from FW
- Use drm_dev_{enter,exit}

Co-developed-by: Boris Brezillon <boris.brezillon@collabora.com>
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Co-developed-by: Donald Robson <donald.robson@imgtec.com>
Signed-off-by: Donald Robson <donald.robson@imgtec.com>
Signed-off-by: Sarah Walker <sarah.walker@imgtec.com>
---
 drivers/gpu/drm/imagination/Makefile        |   2 +
 drivers/gpu/drm/imagination/pvr_ccb.c       |  10 +
 drivers/gpu/drm/imagination/pvr_device.h    |  24 +
 drivers/gpu/drm/imagination/pvr_drv.c       | 112 +++-
 drivers/gpu/drm/imagination/pvr_free_list.c | 625 ++++++++++++++++++++
 drivers/gpu/drm/imagination/pvr_free_list.h | 195 ++++++
 drivers/gpu/drm/imagination/pvr_hwrt.c      | 549 +++++++++++++++++
 drivers/gpu/drm/imagination/pvr_hwrt.h      | 165 ++++++
 8 files changed, 1678 insertions(+), 4 deletions(-)
 create mode 100644 drivers/gpu/drm/imagination/pvr_free_list.c
 create mode 100644 drivers/gpu/drm/imagination/pvr_free_list.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_hwrt.c
 create mode 100644 drivers/gpu/drm/imagination/pvr_hwrt.h

diff --git a/drivers/gpu/drm/imagination/Makefile b/drivers/gpu/drm/imagination/Makefile
index 0a6532d30c00..fca2ee2efbac 100644
--- a/drivers/gpu/drm/imagination/Makefile
+++ b/drivers/gpu/drm/imagination/Makefile
@@ -8,12 +8,14 @@ powervr-y := \
 	pvr_device.o \
 	pvr_device_info.o \
 	pvr_drv.o \
+	pvr_free_list.o \
 	pvr_fw.o \
 	pvr_fw_meta.o \
 	pvr_fw_mips.o \
 	pvr_fw_startstop.o \
 	pvr_fw_trace.o \
 	pvr_gem.o \
+	pvr_hwrt.o \
 	pvr_mmu.o \
 	pvr_power.o \
 	pvr_vm.o \
diff --git a/drivers/gpu/drm/imagination/pvr_ccb.c b/drivers/gpu/drm/imagination/pvr_ccb.c
index 1dfcce963e67..faa32a548a80 100644
--- a/drivers/gpu/drm/imagination/pvr_ccb.c
+++ b/drivers/gpu/drm/imagination/pvr_ccb.c
@@ -4,6 +4,7 @@
 #include "pvr_ccb.h"
 #include "pvr_device.h"
 #include "pvr_drv.h"
+#include "pvr_free_list.h"
 #include "pvr_fw.h"
 #include "pvr_gem.h"
 #include "pvr_power.h"
@@ -138,6 +139,15 @@ process_fwccb_command(struct pvr_device *pvr_dev, struct rogue_fwif_fwccb_cmd *c
 		pvr_power_reset(pvr_dev, false);
 		break;
 
+	case ROGUE_FWIF_FWCCB_CMD_FREELISTS_RECONSTRUCTION:
+		pvr_free_list_process_reconstruct_req(pvr_dev,
+						      &cmd->cmd_data.cmd_freelists_reconstruction);
+		break;
+
+	case ROGUE_FWIF_FWCCB_CMD_FREELIST_GROW:
+		pvr_free_list_process_grow_req(pvr_dev, &cmd->cmd_data.cmd_free_list_gs);
+		break;
+
 	default:
 		drm_info(from_pvr_device(pvr_dev), "Received unknown FWCCB command %x\n",
 			 cmd->cmd_type);
diff --git a/drivers/gpu/drm/imagination/pvr_device.h b/drivers/gpu/drm/imagination/pvr_device.h
index 7e353cfe4fd2..a810b233689b 100644
--- a/drivers/gpu/drm/imagination/pvr_device.h
+++ b/drivers/gpu/drm/imagination/pvr_device.h
@@ -146,6 +146,14 @@ struct pvr_device {
 	/** @fw_dev: Firmware related data. */
 	struct pvr_fw_device fw_dev;
 
+	/**
+	 * @free_list_ids: Array of free lists belonging to this device. Array members
+	 *                 are of type "struct pvr_free_list *".
+	 *
+	 * This array is used to allocate IDs used by the firmware.
+	 */
+	struct xarray free_list_ids;
+
 	struct {
 		/** @work: Work item for watchdog callback. */
 		struct delayed_work work;
@@ -241,6 +249,22 @@ struct pvr_file {
 	 */
 	struct pvr_device *pvr_dev;
 
+	/**
+	 * @free_list_handles: Array of free lists belonging to this file. Array
+	 * members are of type "struct pvr_free_list *".
+	 *
+	 * This array is used to allocate handles returned to userspace.
+	 */
+	struct xarray free_list_handles;
+
+	/**
+	 * @hwrt_handles: Array of HWRT datasets belonging to this file. Array
+	 * members are of type "struct pvr_hwrt_dataset *".
+	 *
+	 * This array is used to allocate handles returned to userspace.
+	 */
+	struct xarray hwrt_handles;
+
 	/**
 	 * @vm_ctx_handles: Array of VM contexts belonging to this file. Array
 	 * members are of type "struct pvr_vm_context *".
diff --git a/drivers/gpu/drm/imagination/pvr_drv.c b/drivers/gpu/drm/imagination/pvr_drv.c
index af12f87cef97..bd601eced4f2 100644
--- a/drivers/gpu/drm/imagination/pvr_drv.c
+++ b/drivers/gpu/drm/imagination/pvr_drv.c
@@ -3,7 +3,9 @@
 
 #include "pvr_device.h"
 #include "pvr_drv.h"
+#include "pvr_free_list.h"
 #include "pvr_gem.h"
+#include "pvr_hwrt.h"
 #include "pvr_power.h"
 #include "pvr_rogue_defs.h"
 #include "pvr_rogue_fwif_client.h"
@@ -725,7 +727,41 @@ static int
 pvr_ioctl_create_free_list(struct drm_device *drm_dev, void *raw_args,
 			   struct drm_file *file)
 {
-	return -ENOTTY;
+	struct drm_pvr_ioctl_create_free_list_args *args = raw_args;
+	struct pvr_file *pvr_file = to_pvr_file(file);
+	struct pvr_free_list *free_list;
+	int idx;
+	int err;
+
+	if (!drm_dev_enter(drm_dev, &idx))
+		return -EIO;
+
+	free_list = pvr_free_list_create(pvr_file, args);
+	if (IS_ERR(free_list)) {
+		err = PTR_ERR(free_list);
+		goto err_drm_dev_exit;
+	}
+
+	/* Allocate object handle for userspace. */
+	err = xa_alloc(&pvr_file->free_list_handles,
+		       &args->handle,
+		       free_list,
+		       xa_limit_32b,
+		       GFP_KERNEL);
+	if (err < 0)
+		goto err_cleanup;
+
+	drm_dev_exit(idx);
+
+	return 0;
+
+err_cleanup:
+	pvr_free_list_put(free_list);
+
+err_drm_dev_exit:
+	drm_dev_exit(idx);
+
+	return err;
 }
 
 /**
@@ -745,7 +781,19 @@ static int
 pvr_ioctl_destroy_free_list(struct drm_device *drm_dev, void *raw_args,
 			    struct drm_file *file)
 {
-	return -ENOTTY;
+	struct drm_pvr_ioctl_destroy_free_list_args *args = raw_args;
+	struct pvr_file *pvr_file = to_pvr_file(file);
+	struct pvr_free_list *free_list;
+
+	if (args->_padding_4)
+		return -EINVAL;
+
+	free_list = xa_erase(&pvr_file->free_list_handles, args->handle);
+	if (!free_list)
+		return -EINVAL;
+
+	pvr_free_list_put(free_list);
+	return 0;
 }
 
 /**
@@ -765,7 +813,41 @@ static int
 pvr_ioctl_create_hwrt_dataset(struct drm_device *drm_dev, void *raw_args,
 			      struct drm_file *file)
 {
-	return -ENOTTY;
+	struct drm_pvr_ioctl_create_hwrt_dataset_args *args = raw_args;
+	struct pvr_file *pvr_file = to_pvr_file(file);
+	struct pvr_hwrt_dataset *hwrt;
+	int idx;
+	int err;
+
+	if (!drm_dev_enter(drm_dev, &idx))
+		return -EIO;
+
+	hwrt = pvr_hwrt_dataset_create(pvr_file, args);
+	if (IS_ERR(hwrt)) {
+		err = PTR_ERR(hwrt);
+		goto err_drm_dev_exit;
+	}
+
+	/* Allocate object handle for userspace. */
+	err = xa_alloc(&pvr_file->hwrt_handles,
+		       &args->handle,
+		       hwrt,
+		       xa_limit_32b,
+		       GFP_KERNEL);
+	if (err < 0)
+		goto err_cleanup;
+
+	drm_dev_exit(idx);
+
+	return 0;
+
+err_cleanup:
+	pvr_hwrt_dataset_put(hwrt);
+
+err_drm_dev_exit:
+	drm_dev_exit(idx);
+
+	return err;
 }
 
 /**
@@ -785,7 +867,19 @@ static int
 pvr_ioctl_destroy_hwrt_dataset(struct drm_device *drm_dev, void *raw_args,
 			       struct drm_file *file)
 {
-	return -ENOTTY;
+	struct drm_pvr_ioctl_destroy_hwrt_dataset_args *args = raw_args;
+	struct pvr_file *pvr_file = to_pvr_file(file);
+	struct pvr_hwrt_dataset *hwrt;
+
+	if (args->_padding_4)
+		return -EINVAL;
+
+	hwrt = xa_erase(&pvr_file->hwrt_handles, args->handle);
+	if (!hwrt)
+		return -EINVAL;
+
+	pvr_hwrt_dataset_put(hwrt);
+	return 0;
 }
 
 /**
@@ -1209,6 +1303,8 @@ pvr_drm_driver_open(struct drm_device *drm_dev, struct drm_file *file)
 	 */
 	pvr_file->pvr_dev = pvr_dev;
 
+	xa_init_flags(&pvr_file->free_list_handles, XA_FLAGS_ALLOC1);
+	xa_init_flags(&pvr_file->hwrt_handles, XA_FLAGS_ALLOC1);
 	xa_init_flags(&pvr_file->vm_ctx_handles, XA_FLAGS_ALLOC1);
 
 	/*
@@ -1237,6 +1333,8 @@ pvr_drm_driver_postclose(__always_unused struct drm_device *drm_dev,
 	struct pvr_file *pvr_file = to_pvr_file(file);
 
 	/* Drop references on any remaining objects. */
+	pvr_destroy_free_lists_for_file(pvr_file);
+	pvr_destroy_hwrt_datasets_for_file(pvr_file);
 	pvr_destroy_vm_contexts_for_file(pvr_file);
 
 	kfree(pvr_file);
@@ -1295,6 +1393,8 @@ pvr_probe(struct platform_device *plat_dev)
 	if (err)
 		goto err_device_fini;
 
+	xa_init_flags(&pvr_dev->free_list_ids, XA_FLAGS_ALLOC1);
+
 	return 0;
 
 err_device_fini:
@@ -1312,6 +1412,10 @@ pvr_remove(struct platform_device *plat_dev)
 	struct drm_device *drm_dev = platform_get_drvdata(plat_dev);
 	struct pvr_device *pvr_dev = to_pvr_device(drm_dev);
 
+	WARN_ON(!xa_empty(&pvr_dev->free_list_ids));
+
+	xa_destroy(&pvr_dev->free_list_ids);
+
 	pm_runtime_suspend(drm_dev->dev);
 	drm_dev_unplug(drm_dev);
 	pvr_device_fini(pvr_dev);
diff --git a/drivers/gpu/drm/imagination/pvr_free_list.c b/drivers/gpu/drm/imagination/pvr_free_list.c
new file mode 100644
index 000000000000..683882fdfbc6
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_free_list.c
@@ -0,0 +1,625 @@
+// SPDX-License-Identifier: GPL-2.0 OR MIT
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#include "pvr_free_list.h"
+#include "pvr_gem.h"
+#include "pvr_hwrt.h"
+#include "pvr_rogue_fwif.h"
+#include "pvr_vm.h"
+
+#include <drm/drm_gem.h>
+#include <linux/slab.h>
+#include <linux/xarray.h>
+#include <uapi/drm/pvr_drm.h>
+
+#define FREE_LIST_ENTRY_SIZE sizeof(u32)
+
+#define FREE_LIST_ALIGNMENT \
+	((ROGUE_BIF_PM_FREELIST_BASE_ADDR_ALIGNSIZE / FREE_LIST_ENTRY_SIZE) - 1)
+
+#define FREE_LIST_MIN_PAGES 50
+#define FREE_LIST_MIN_PAGES_BRN66011 40
+#define FREE_LIST_MIN_PAGES_ROGUEXE 25
+
+/**
+ * pvr_get_free_list_min_pages() - Get minimum free list size for this device
+ * @pvr_dev: Device pointer.
+ *
+ * Returns:
+ *  * Minimum free list size, in PM physical pages.
+ */
+u32
+pvr_get_free_list_min_pages(struct pvr_device *pvr_dev)
+{
+	u32 value;
+
+	if (PVR_HAS_FEATURE(pvr_dev, roguexe)) {
+		if (PVR_HAS_QUIRK(pvr_dev, 66011))
+			value = FREE_LIST_MIN_PAGES_BRN66011;
+		else
+			value = FREE_LIST_MIN_PAGES_ROGUEXE;
+	} else {
+		value = FREE_LIST_MIN_PAGES;
+	}
+
+	return value;
+}
+
+static int
+free_list_create_kernel_structure(struct pvr_file *pvr_file,
+				  struct drm_pvr_ioctl_create_free_list_args *args,
+				  struct pvr_free_list *free_list)
+{
+	struct pvr_gem_object *free_list_obj;
+	struct pvr_vm_context *vm_ctx;
+	u64 free_list_size;
+	int err;
+
+	if (args->grow_threshold > 100 ||
+	    args->initial_num_pages > args->max_num_pages ||
+	    args->grow_num_pages > args->max_num_pages ||
+	    args->max_num_pages == 0 ||
+	    (args->initial_num_pages < args->max_num_pages && !args->grow_num_pages) ||
+	    (args->initial_num_pages == args->max_num_pages && args->grow_num_pages))
+		return -EINVAL;
+
+	if ((args->initial_num_pages & FREE_LIST_ALIGNMENT) ||
+	    (args->max_num_pages & FREE_LIST_ALIGNMENT) ||
+	    (args->grow_num_pages & FREE_LIST_ALIGNMENT))
+		return -EINVAL;
+
+	vm_ctx = pvr_vm_context_lookup(pvr_file, args->vm_context_handle);
+	if (!vm_ctx)
+		return -EINVAL;
+
+	free_list_obj = pvr_vm_find_gem_object(vm_ctx, args->free_list_gpu_addr,
+					       NULL, &free_list_size);
+	if (!free_list_obj) {
+		err = -EINVAL;
+		goto err_put_vm_context;
+	}
+
+	if ((free_list_obj->flags & DRM_PVR_BO_CPU_ALLOW_USERSPACE_ACCESS) ||
+	    !(free_list_obj->flags & DRM_PVR_BO_DEVICE_PM_FW_PROTECT) ||
+	    free_list_size < (args->max_num_pages * FREE_LIST_ENTRY_SIZE)) {
+		err = -EINVAL;
+		goto err_put_free_list_obj;
+	}
+
+	free_list->pvr_dev = pvr_file->pvr_dev;
+	free_list->current_pages = 0;
+	free_list->max_pages = args->max_num_pages;
+	free_list->grow_pages = args->grow_num_pages;
+	free_list->grow_threshold = args->grow_threshold;
+	free_list->obj = free_list_obj;
+	free_list->free_list_gpu_addr = args->free_list_gpu_addr;
+	free_list->initial_num_pages = args->initial_num_pages;
+
+	pvr_vm_context_put(vm_ctx);
+
+	return 0;
+
+err_put_free_list_obj:
+	pvr_gem_object_put(free_list_obj);
+
+err_put_vm_context:
+	pvr_vm_context_put(vm_ctx);
+
+	return err;
+}
+
+static void
+free_list_destroy_kernel_structure(struct pvr_free_list *free_list)
+{
+	WARN_ON(!list_empty(&free_list->hwrt_list));
+
+	pvr_gem_object_put(free_list->obj);
+}
+
+/**
+ * calculate_free_list_ready_pages_locked() - Function to work out the number of free
+ *                                            list pages to reserve for growing within
+ *                                            the FW without having to wait for the
+ *                                            host to progress a grow request
+ * @free_list: Pointer to free list.
+ * @pages: Total pages currently in free list.
+ *
+ * If the threshold or grow size means less than the alignment size (4 pages on
+ * Rogue), then the feature is not used.
+ *
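+ * For example (illustrative numbers only): with @pages = 100, a grow_threshold
+ * of 50 and grow_pages of 32, this reserves min(100 * 50 / 100, 32) = 32
+ * pages, which is already a multiple of the 4-page alignment. With @pages = 6
+ * the intermediate value of 3 rounds down to 0, so nothing is reserved.
+ *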
+ * Caller must hold &free_list->lock.
+ *
+ * Return: number of pages to reserve.
+ */
+static u32
+calculate_free_list_ready_pages_locked(struct pvr_free_list *free_list, u32 pages)
+{
+	u32 ready_pages;
+
+	lockdep_assert_held(&free_list->lock);
+
+	ready_pages = ((pages * free_list->grow_threshold) / 100);
+
+	/* The number of pages must not exceed the grow size. */
+	ready_pages = min(ready_pages, free_list->grow_pages);
+
+	/*
+	 * The number of pages must be a multiple of the free list align size.
+	 */
+	ready_pages &= ~FREE_LIST_ALIGNMENT;
+
+	return ready_pages;
+}
+
+static u32
+calculate_free_list_ready_pages(struct pvr_free_list *free_list, u32 pages)
+{
+	u32 ret;
+
+	mutex_lock(&free_list->lock);
+
+	ret = calculate_free_list_ready_pages_locked(free_list, pages);
+
+	mutex_unlock(&free_list->lock);
+
+	return ret;
+}
+
+static void
+free_list_fw_init(void *cpu_ptr, void *priv)
+{
+	struct rogue_fwif_freelist *fw_data = cpu_ptr;
+	struct pvr_free_list *free_list = priv;
+	u32 ready_pages;
+
+	/* Fill out FW structure */
+	ready_pages = calculate_free_list_ready_pages(free_list,
+						      free_list->initial_num_pages);
+
+	fw_data->max_pages = free_list->max_pages;
+	fw_data->current_pages = free_list->initial_num_pages - ready_pages;
+	fw_data->grow_pages = free_list->grow_pages;
+	fw_data->ready_pages = ready_pages;
+	fw_data->freelist_id = free_list->fw_id;
+	fw_data->grow_pending = false;
+	fw_data->current_stack_top = fw_data->current_pages - 1;
+	fw_data->freelist_dev_addr = free_list->free_list_gpu_addr;
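+	/* Free list entries are populated from the end of the buffer (see
+	 * pvr_free_list_insert_node_locked()), so point the current device
+	 * address at the first in-use entry, aligned down to the PM free list
+	 * base address alignment.
+	 */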
+	fw_data->current_dev_addr = (fw_data->freelist_dev_addr +
+				     ((fw_data->max_pages - fw_data->current_pages) *
+				      FREE_LIST_ENTRY_SIZE)) &
+				    ~((u64)ROGUE_BIF_PM_FREELIST_BASE_ADDR_ALIGNSIZE - 1);
+}
+
+static int
+free_list_create_fw_structure(struct pvr_file *pvr_file,
+			      struct drm_pvr_ioctl_create_free_list_args *args,
+			      struct pvr_free_list *free_list)
+{
+	struct pvr_device *pvr_dev = pvr_file->pvr_dev;
+
+	/*
+	 * Create and map the FW structure so we can initialise it. This is not
+	 * accessed on the CPU side post-initialisation so the mapping lifetime
+	 * is only for this function.
+	 */
+	free_list->fw_data = pvr_fw_object_create_and_map(pvr_dev, sizeof(*free_list->fw_data),
+							  PVR_BO_FW_FLAGS_DEVICE_UNCACHED,
+							  free_list_fw_init, free_list,
+							  &free_list->fw_obj);
+	if (IS_ERR(free_list->fw_data))
+		return PTR_ERR(free_list->fw_data);
+
+	return 0;
+}
+
+static void
+free_list_destroy_fw_structure(struct pvr_free_list *free_list)
+{
+	pvr_fw_object_unmap_and_destroy(free_list->fw_obj);
+}
+
+static int
+pvr_free_list_insert_pages_locked(struct pvr_free_list *free_list,
+				  struct sg_table *sgt, u32 offset, u32 num_pages)
+{
+	struct sg_dma_page_iter dma_iter;
+	u32 *page_list;
+
+	lockdep_assert_held(&free_list->lock);
+
+	page_list = pvr_gem_object_vmap(free_list->obj);
+	if (IS_ERR(page_list))
+		return PTR_ERR(page_list);
+
+	offset /= FREE_LIST_ENTRY_SIZE;
+	/* clang-format off */
+	for_each_sgtable_dma_page(sgt, &dma_iter, 0) {
+		dma_addr_t dma_addr = sg_page_iter_dma_address(&dma_iter);
+		u64 dma_pfn = dma_addr >>
+			       ROGUE_BIF_PM_PHYSICAL_PAGE_ALIGNSHIFT;
+		u32 dma_addr_offset;
+
+		BUILD_BUG_ON(ROGUE_BIF_PM_PHYSICAL_PAGE_SIZE > PAGE_SIZE);
+
+		for (dma_addr_offset = 0; dma_addr_offset < PAGE_SIZE;
+		     dma_addr_offset += ROGUE_BIF_PM_PHYSICAL_PAGE_SIZE) {
+			WARN_ON_ONCE(dma_pfn >> 32);
+
+			page_list[offset++] = (u32)dma_pfn;
+			dma_pfn++;
+
+			num_pages--;
+			if (!num_pages)
+				break;
+		}
+
+		if (!num_pages)
+			break;
+	}
+	/* clang-format on */
+
+	/* Make sure our free_list update is flushed. */
+	wmb();
+
+	pvr_gem_object_vunmap(free_list->obj);
+
+	return 0;
+}
+
+static int
+pvr_free_list_insert_node_locked(struct pvr_free_list_node *free_list_node)
+{
+	struct pvr_free_list *free_list = free_list_node->free_list;
+	struct sg_table *sgt;
+	u32 start_page;
+	u32 offset;
+	int err;
+
+	lockdep_assert_held(&free_list->lock);
+
+	start_page = free_list->max_pages - free_list->current_pages -
+		     free_list_node->num_pages;
+	offset = (start_page * FREE_LIST_ENTRY_SIZE) &
+		  ~((u64)ROGUE_BIF_PM_FREELIST_BASE_ADDR_ALIGNSIZE - 1);
+
+	sgt = drm_gem_shmem_get_pages_sgt(&free_list_node->mem_obj->base);
+	if (WARN_ON(IS_ERR(sgt)))
+		return PTR_ERR(sgt);
+
+	err = pvr_free_list_insert_pages_locked(free_list, sgt,
+						offset, free_list_node->num_pages);
+	if (!err)
+		free_list->current_pages += free_list_node->num_pages;
+
+	return err;
+}
+
+static int
+pvr_free_list_grow(struct pvr_free_list *free_list, u32 num_pages)
+{
+	struct pvr_device *pvr_dev = free_list->pvr_dev;
+	struct pvr_free_list_node *free_list_node;
+	int err;
+
+	mutex_lock(&free_list->lock);
+
+	if (num_pages & FREE_LIST_ALIGNMENT) {
+		err = -EINVAL;
+		goto err_unlock;
+	}
+
+	free_list_node = kzalloc(sizeof(*free_list_node), GFP_KERNEL);
+	if (!free_list_node) {
+		err = -ENOMEM;
+		goto err_unlock;
+	}
+
+	free_list_node->num_pages = num_pages;
+	free_list_node->free_list = free_list;
+
+	free_list_node->mem_obj = pvr_gem_object_create(pvr_dev,
+							num_pages <<
+							ROGUE_BIF_PM_PHYSICAL_PAGE_ALIGNSHIFT,
+							PVR_BO_FW_FLAGS_DEVICE_CACHED);
+	if (IS_ERR(free_list_node->mem_obj)) {
+		err = PTR_ERR(free_list_node->mem_obj);
+		goto err_free;
+	}
+
+	err = pvr_free_list_insert_node_locked(free_list_node);
+	if (err)
+		goto err_destroy_gem_object;
+
+	list_add_tail(&free_list_node->node, &free_list->mem_block_list);
+
+	/*
+	 * Reserve a number of ready pages to allow the FW to process OOM quickly
+	 * and asynchronously request a grow.
+	 */
+	free_list->ready_pages =
+		calculate_free_list_ready_pages_locked(free_list,
+						       free_list->current_pages);
+	free_list->current_pages -= free_list->ready_pages;
+
+	mutex_unlock(&free_list->lock);
+
+	return 0;
+
+err_destroy_gem_object:
+	pvr_gem_object_put(free_list_node->mem_obj);
+
+err_free:
+	kfree(free_list_node);
+
+err_unlock:
+	mutex_unlock(&free_list->lock);
+
+	return err;
+}
+
+void pvr_free_list_process_grow_req(struct pvr_device *pvr_dev,
+				    struct rogue_fwif_fwccb_cmd_freelist_gs_data *req)
+{
+	struct pvr_free_list *free_list = pvr_free_list_lookup_id(pvr_dev, req->freelist_id);
+	struct rogue_fwif_kccb_cmd resp_cmd = {
+		.cmd_type = ROGUE_FWIF_KCCB_CMD_FREELIST_GROW_UPDATE,
+	};
+	struct rogue_fwif_freelist_gs_data *resp = &resp_cmd.cmd_data.free_list_gs_data;
+	u32 grow_pages = 0;
+
+	/* If we don't have a freelist registered for this ID, we can't do much. */
+	if (WARN_ON(!free_list))
+		return;
+
+	/* Since the FW made the request, it has already consumed the ready
+	 * pages, so update the host struct to match.
+	 */
+	free_list->current_pages += free_list->ready_pages;
+	free_list->ready_pages = 0;
+
+	/* If the grow succeeds, update the grow_pages argument. */
+	if (!pvr_free_list_grow(free_list, free_list->grow_pages))
+		grow_pages = free_list->grow_pages;
+
+	/* Now prepare the response and send it back to the FW. */
+	pvr_fw_object_get_fw_addr(free_list->fw_obj, &resp->freelist_fw_addr);
+	resp->delta_pages = grow_pages;
+	resp->new_pages = free_list->current_pages + free_list->ready_pages;
+	resp->ready_pages = free_list->ready_pages;
+	pvr_free_list_put(free_list);
+
+	WARN_ON(pvr_kccb_send_cmd(pvr_dev, &resp_cmd, NULL));
+}
+
+static void
+pvr_free_list_free_node(struct pvr_free_list_node *free_list_node)
+{
+	pvr_gem_object_put(free_list_node->mem_obj);
+
+	kfree(free_list_node);
+}
+
+/**
+ * pvr_free_list_create() - Create a new free list and return an object pointer
+ * @pvr_file: Pointer to pvr_file structure.
+ * @args: Creation arguments from userspace.
+ *
+ * Return:
+ *  * Pointer to new free_list on success, or
+ *  * ERR_PTR() wrapping a negative error code on failure, e.g. -%ENOMEM on
+ *    out of memory or -%EINVAL on invalid arguments.
+ */
+struct pvr_free_list *
+pvr_free_list_create(struct pvr_file *pvr_file,
+		     struct drm_pvr_ioctl_create_free_list_args *args)
+{
+	struct pvr_free_list *free_list;
+	int err;
+
+	/* Create and fill out the kernel structure */
+	free_list = kzalloc(sizeof(*free_list), GFP_KERNEL);
+
+	if (!free_list)
+		return ERR_PTR(-ENOMEM);
+
+	kref_init(&free_list->ref_count);
+	INIT_LIST_HEAD(&free_list->mem_block_list);
+	INIT_LIST_HEAD(&free_list->hwrt_list);
+	mutex_init(&free_list->lock);
+
+	err = free_list_create_kernel_structure(pvr_file, args, free_list);
+	if (err < 0)
+		goto err_free;
+
+	/* Allocate global object ID for firmware. */
+	err = xa_alloc(&pvr_file->pvr_dev->free_list_ids,
+		       &free_list->fw_id,
+		       free_list,
+		       xa_limit_32b,
+		       GFP_KERNEL);
+	if (err)
+		goto err_destroy_kernel_structure;
+
+	err = free_list_create_fw_structure(pvr_file, args, free_list);
+	if (err < 0)
+		goto err_free_fw_id;
+
+	err = pvr_free_list_grow(free_list, args->initial_num_pages);
+	if (err < 0)
+		goto err_fw_struct_cleanup;
+
+	return free_list;
+
+err_fw_struct_cleanup:
+	WARN_ON(pvr_fw_structure_cleanup(free_list->pvr_dev,
+					 ROGUE_FWIF_CLEANUP_FREELIST,
+					 free_list->fw_obj, 0));
+
+err_free_fw_id:
+	xa_erase(&free_list->pvr_dev->free_list_ids, free_list->fw_id);
+
+err_destroy_kernel_structure:
+	free_list_destroy_kernel_structure(free_list);
+
+err_free:
+	mutex_destroy(&free_list->lock);
+	kfree(free_list);
+
+	return ERR_PTR(err);
+}
+
+static void
+pvr_free_list_release(struct kref *ref_count)
+{
+	struct pvr_free_list *free_list =
+		container_of(ref_count, struct pvr_free_list, ref_count);
+	struct list_head *pos, *n;
+	int err;
+
+	xa_erase(&free_list->pvr_dev->free_list_ids, free_list->fw_id);
+
+	err = pvr_fw_structure_cleanup(free_list->pvr_dev,
+				       ROGUE_FWIF_CLEANUP_FREELIST,
+				       free_list->fw_obj, 0);
+	if (err == -EBUSY) {
+		/* Flush the FWCCB to process any HWR or freelist reconstruction
+		 * request that might keep the freelist busy, and try again.
+		 */
+		pvr_fwccb_process(free_list->pvr_dev);
+		err = pvr_fw_structure_cleanup(free_list->pvr_dev,
+					       ROGUE_FWIF_CLEANUP_FREELIST,
+					       free_list->fw_obj, 0);
+	}
+
+	WARN_ON(err);
+
+	/* clang-format off */
+	list_for_each_safe(pos, n, &free_list->mem_block_list) {
+		struct pvr_free_list_node *free_list_node =
+			container_of(pos, struct pvr_free_list_node, node);
+
+		list_del(pos);
+		pvr_free_list_free_node(free_list_node);
+	}
+	/* clang-format on */
+
+	free_list_destroy_kernel_structure(free_list);
+	free_list_destroy_fw_structure(free_list);
+	mutex_destroy(&free_list->lock);
+	kfree(free_list);
+}
+
+/**
+ * pvr_destroy_free_lists_for_file() - Destroy any free lists associated with the
+ * given file.
+ * @pvr_file: Pointer to pvr_file structure.
+ *
+ * Removes all free lists associated with @pvr_file from the device free_list
+ * list and drops initial references. Free lists will then be destroyed once
+ * all outstanding references are dropped.
+ */
+void pvr_destroy_free_lists_for_file(struct pvr_file *pvr_file)
+{
+	struct pvr_free_list *free_list;
+	unsigned long handle;
+
+	xa_for_each(&pvr_file->free_list_handles, handle, free_list) {
+		(void)free_list;
+		pvr_free_list_put(xa_erase(&pvr_file->free_list_handles, handle));
+	}
+}
+
+/**
+ * pvr_free_list_put() - Release reference on free list
+ * @free_list: Pointer to list to release reference on
+ */
+void
+pvr_free_list_put(struct pvr_free_list *free_list)
+{
+	if (free_list)
+		kref_put(&free_list->ref_count, pvr_free_list_release);
+}
+
+void pvr_free_list_add_hwrt(struct pvr_free_list *free_list, struct pvr_hwrt_data *hwrt_data)
+{
+	mutex_lock(&free_list->lock);
+
+	list_add_tail(&hwrt_data->freelist_node, &free_list->hwrt_list);
+
+	mutex_unlock(&free_list->lock);
+}
+
+void pvr_free_list_remove_hwrt(struct pvr_free_list *free_list, struct pvr_hwrt_data *hwrt_data)
+{
+	mutex_lock(&free_list->lock);
+
+	list_del(&hwrt_data->freelist_node);
+
+	mutex_unlock(&free_list->lock);
+}
+
+static void
+pvr_free_list_reconstruct(struct pvr_device *pvr_dev, u32 freelist_id)
+{
+	struct pvr_free_list *free_list = pvr_free_list_lookup_id(pvr_dev, freelist_id);
+	struct pvr_free_list_node *free_list_node;
+	struct rogue_fwif_freelist *fw_data;
+	struct pvr_hwrt_data *hwrt_data;
+
+	if (!free_list)
+		return;
+
+	mutex_lock(&free_list->lock);
+
+	/* Rebuild the free list based on the memory block list. */
+	free_list->current_pages = 0;
+
+	list_for_each_entry(free_list_node, &free_list->mem_block_list, node)
+		WARN_ON(pvr_free_list_insert_node_locked(free_list_node));
+
+	/*
+	 * Remove the ready pages, which are reserved to allow the FW to process OOM quickly and
+	 * asynchronously request a grow.
+	 */
+	free_list->current_pages -= free_list->ready_pages;
+
+	fw_data = free_list->fw_data;
+	fw_data->current_stack_top = fw_data->current_pages - 1;
+	fw_data->allocated_page_count = 0;
+	fw_data->allocated_mmu_page_count = 0;
+
+	/* Reset the state of any associated HWRTs. */
+	list_for_each_entry(hwrt_data, &free_list->hwrt_list, freelist_node) {
+		struct rogue_fwif_hwrtdata *hwrt_fw_data = pvr_fw_object_vmap(hwrt_data->fw_obj);
+
+		if (!WARN_ON(IS_ERR(hwrt_fw_data))) {
+			hwrt_fw_data->state = ROGUE_FWIF_RTDATA_STATE_HWR;
+			hwrt_fw_data->hwrt_data_flags &= ~HWRTDATA_HAS_LAST_GEOM;
+		}
+
+		pvr_fw_object_vunmap(hwrt_data->fw_obj);
+	}
+
+	mutex_unlock(&free_list->lock);
+
+	pvr_free_list_put(free_list);
+}
+
+void
+pvr_free_list_process_reconstruct_req(struct pvr_device *pvr_dev,
+				struct rogue_fwif_fwccb_cmd_freelists_reconstruction_data *req)
+{
+	struct rogue_fwif_kccb_cmd resp_cmd = {
+		.cmd_type = ROGUE_FWIF_KCCB_CMD_FREELISTS_RECONSTRUCTION_UPDATE,
+	};
+	struct rogue_fwif_freelists_reconstruction_data *resp =
+		&resp_cmd.cmd_data.free_lists_reconstruction_data;
+
+	for (u32 i = 0; i < req->freelist_count; i++)
+		pvr_free_list_reconstruct(pvr_dev, req->freelist_ids[i]);
+
+	resp->freelist_count = req->freelist_count;
+	memcpy(resp->freelist_ids, req->freelist_ids,
+	       req->freelist_count * sizeof(resp->freelist_ids[0]));
+
+	WARN_ON(pvr_kccb_send_cmd(pvr_dev, &resp_cmd, NULL));
+}
diff --git a/drivers/gpu/drm/imagination/pvr_free_list.h b/drivers/gpu/drm/imagination/pvr_free_list.h
new file mode 100644
index 000000000000..9f3ed6d7c4c5
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_free_list.h
@@ -0,0 +1,195 @@
+/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#ifndef PVR_FREE_LIST_H
+#define PVR_FREE_LIST_H
+
+#include <linux/compiler_attributes.h>
+#include <linux/kref.h>
+#include <linux/list.h>
+#include <linux/mutex.h>
+#include <linux/types.h>
+#include <linux/xarray.h>
+#include <uapi/drm/pvr_drm.h>
+
+#include "pvr_device.h"
+
+/* Forward declaration from pvr_gem.h. */
+struct pvr_fw_object;
+
+/* Forward declaration from pvr_gem.h. */
+struct pvr_gem_object;
+
+/* Forward declaration from pvr_hwrt.h. */
+struct pvr_hwrt_data;
+
+/**
+ * struct pvr_free_list_node - structure representing an allocation in the free
+ *                             list
+ */
+struct pvr_free_list_node {
+	/** @node: List node for &pvr_free_list.mem_block_list. */
+	struct list_head node;
+
+	/** @free_list: Pointer to owning free list. */
+	struct pvr_free_list *free_list;
+
+	/** @num_pages: Number of pages in this node. */
+	u32 num_pages;
+
+	/** @mem_obj: GEM object representing the pages in this node. */
+	struct pvr_gem_object *mem_obj;
+};
+
+/**
+ * struct pvr_free_list - structure representing a free list
+ */
+struct pvr_free_list {
+	/** @ref_count: Reference count of object. */
+	struct kref ref_count;
+
+	/** @pvr_dev: Pointer to device that owns this object. */
+	struct pvr_device *pvr_dev;
+
+	/** @obj: GEM object representing the free list. */
+	struct pvr_gem_object *obj;
+
+	/** @fw_obj: FW object representing the FW-side structure. */
+	struct pvr_fw_object *fw_obj;
+
+	/** @fw_data: Pointer to CPU mapping of the FW-side structure. */
+	struct rogue_fwif_freelist *fw_data;
+
+	/**
+	 * @lock: Mutex protecting modification of the free list. Must be held when accessing any
+	 *        of the members below.
+	 */
+	struct mutex lock;
+
+	/** @fw_id: Firmware ID for this object. */
+	u32 fw_id;
+
+	/** @current_pages: Current number of pages in free list. */
+	u32 current_pages;
+
+	/** @max_pages: Maximum number of pages in free list. */
+	u32 max_pages;
+
+	/** @grow_pages: Pages to grow free list by per request. */
+	u32 grow_pages;
+
+	/**
+	 * @grow_threshold: Percentage of FL memory used that should trigger a
+	 *                  new grow request.
+	 */
+	u32 grow_threshold;
+
+	/**
+	 * @ready_pages: Number of pages reserved for FW to use while a grow
+	 *               request is being processed.
+	 */
+	u32 ready_pages;
+
+	/** @mem_block_list: List of memory blocks in this free list. */
+	struct list_head mem_block_list;
+
+	/** @hwrt_list: List of HWRTs using this free list. */
+	struct list_head hwrt_list;
+
+	/** @initial_num_pages: Initial number of pages in free list. */
+	u32 initial_num_pages;
+
+	/** @free_list_gpu_addr: Address of free list in GPU address space. */
+	u64 free_list_gpu_addr;
+};
+
+struct pvr_free_list *
+pvr_free_list_create(struct pvr_file *pvr_file,
+		     struct drm_pvr_ioctl_create_free_list_args *args);
+
+void
+pvr_destroy_free_lists_for_file(struct pvr_file *pvr_file);
+
+u32
+pvr_get_free_list_min_pages(struct pvr_device *pvr_dev);
+
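+/**
+ * pvr_free_list_get() - Take additional reference on free list.
+ * @free_list: Free list pointer. May be %NULL.
+ *
+ * Call pvr_free_list_put() to release.
+ *
+ * Returns:
+ *  * The requested free list on success, or
+ *  * %NULL if no free list was passed.
+ */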
+static __always_inline struct pvr_free_list *
+pvr_free_list_get(struct pvr_free_list *free_list)
+{
+	if (free_list)
+		kref_get(&free_list->ref_count);
+
+	return free_list;
+}
+
+/**
+ * pvr_free_list_lookup() - Lookup free list pointer from handle and file
+ * @pvr_file: Pointer to pvr_file structure.
+ * @handle: Object handle.
+ *
+ * Takes reference on free list object. Call pvr_free_list_put() to release.
+ *
+ * Returns:
+ *  * The requested object on success, or
+ *  * %NULL on failure (object does not exist in list, is not a free list, or
+ *    does not belong to @pvr_file)
+ */
+static __always_inline struct pvr_free_list *
+pvr_free_list_lookup(struct pvr_file *pvr_file, u32 handle)
+{
+	struct pvr_free_list *free_list;
+
+	xa_lock(&pvr_file->free_list_handles);
+	free_list = pvr_free_list_get(xa_load(&pvr_file->free_list_handles, handle));
+	xa_unlock(&pvr_file->free_list_handles);
+
+	return free_list;
+}
+
+/**
+ * pvr_free_list_lookup_id() - Lookup free list pointer from FW ID
+ * @pvr_dev: Device pointer.
+ * @id: FW object ID.
+ *
+ * Takes reference on free list object. Call pvr_free_list_put() to release.
+ *
+ * Returns:
+ *  * The requested object on success, or
+ *  * %NULL on failure (object does not exist in list, or is not a free list)
+ */
+static __always_inline struct pvr_free_list *
+pvr_free_list_lookup_id(struct pvr_device *pvr_dev, u32 id)
+{
+	struct pvr_free_list *free_list;
+
+	xa_lock(&pvr_dev->free_list_ids);
+
+	/* Free lists are removed from the free_list_ids set in the free list
+	 * release path, meaning the ref_count reached zero before they get
+	 * removed. We need to make sure we're not trying to acquire a free
+	 * list that's being destroyed.
+	 */
+	free_list = xa_load(&pvr_dev->free_list_ids, id);
+	if (free_list && !kref_get_unless_zero(&free_list->ref_count))
+		free_list = NULL;
+	xa_unlock(&pvr_dev->free_list_ids);
+
+	return free_list;
+}
+
+void
+pvr_free_list_put(struct pvr_free_list *free_list);
+
+void
+pvr_free_list_add_hwrt(struct pvr_free_list *free_list, struct pvr_hwrt_data *hwrt_data);
+void
+pvr_free_list_remove_hwrt(struct pvr_free_list *free_list, struct pvr_hwrt_data *hwrt_data);
+
+void pvr_free_list_process_grow_req(struct pvr_device *pvr_dev,
+				    struct rogue_fwif_fwccb_cmd_freelist_gs_data *req);
+
+void
+pvr_free_list_process_reconstruct_req(struct pvr_device *pvr_dev,
+				struct rogue_fwif_fwccb_cmd_freelists_reconstruction_data *req);
+
+#endif /* PVR_FREE_LIST_H */
diff --git a/drivers/gpu/drm/imagination/pvr_hwrt.c b/drivers/gpu/drm/imagination/pvr_hwrt.c
new file mode 100644
index 000000000000..684dccf30148
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_hwrt.c
@@ -0,0 +1,549 @@
+// SPDX-License-Identifier: GPL-2.0 OR MIT
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#include "pvr_free_list.h"
+#include "pvr_hwrt.h"
+#include "pvr_gem.h"
+#include "pvr_rogue_cr_defs_client.h"
+#include "pvr_rogue_fwif.h"
+
+#include <drm/drm_gem.h>
+#include <linux/bitops.h>
+#include <linux/math.h>
+#include <linux/slab.h>
+#include <linux/xarray.h>
+#include <uapi/drm/pvr_drm.h>
+
+static_assert(ROGUE_FWIF_NUM_RTDATAS == 2);
+static_assert(ROGUE_FWIF_NUM_GEOMDATAS == 1);
+static_assert(ROGUE_FWIF_NUM_RTDATA_FREELISTS == 2);
+
+/*
+ * struct pvr_rt_mtile_info - Render target macrotile information
+ */
+struct pvr_rt_mtile_info {
+	u32 mtile_x[3];
+	u32 mtile_y[3];
+	u32 tile_max_x;
+	u32 tile_max_y;
+	u32 tile_size_x;
+	u32 tile_size_y;
+	u32 num_tiles_x;
+	u32 num_tiles_y;
+};
+
+/* Size of Shadow Render Target Cache entry */
+#define SRTC_ENTRY_SIZE sizeof(u32)
+/* Size of Renders Accumulation Array entry */
+#define RAA_ENTRY_SIZE sizeof(u32)
+
+static int
+hwrt_init_kernel_structure(struct pvr_file *pvr_file,
+			   struct drm_pvr_ioctl_create_hwrt_dataset_args *args,
+			   struct pvr_hwrt_dataset *hwrt)
+{
+	struct pvr_device *pvr_dev = pvr_file->pvr_dev;
+	int err;
+	int i;
+
+	hwrt->pvr_dev = pvr_dev;
+	hwrt->max_rts = args->layers;
+
+	/* Get pointers to the free lists */
+	for (i = 0; i < ARRAY_SIZE(hwrt->free_lists); i++) {
+		hwrt->free_lists[i] = pvr_free_list_lookup(pvr_file,  args->free_list_handles[i]);
+		if (!hwrt->free_lists[i]) {
+			err = -EINVAL;
+			goto err_put_free_lists;
+		}
+	}
+
+	if (hwrt->free_lists[ROGUE_FW_LOCAL_FREELIST]->current_pages <
+	    pvr_get_free_list_min_pages(pvr_dev)) {
+		err = -EINVAL;
+		goto err_put_free_lists;
+	}
+
+	return 0;
+
+err_put_free_lists:
+	for (i = 0; i < ARRAY_SIZE(hwrt->free_lists); i++) {
+		pvr_free_list_put(hwrt->free_lists[i]);
+		hwrt->free_lists[i] = NULL;
+	}
+
+	return err;
+}
+
+static void
+hwrt_fini_kernel_structure(struct pvr_hwrt_dataset *hwrt)
+{
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(hwrt->free_lists); i++) {
+		pvr_free_list_put(hwrt->free_lists[i]);
+		hwrt->free_lists[i] = NULL;
+	}
+}
+
+static void
+hwrt_fini_common_fw_structure(struct pvr_hwrt_dataset *hwrt)
+{
+	pvr_fw_object_destroy(hwrt->common_fw_obj);
+}
+
+static int
+get_cr_isp_mtile_size_val(struct pvr_device *pvr_dev, u32 samples,
+			  struct pvr_rt_mtile_info *info, u32 *value_out)
+{
+	u32 x = info->mtile_x[0];
+	u32 y = info->mtile_y[0];
+	u32 samples_per_pixel;
+	int err;
+
+	err = PVR_FEATURE_VALUE(pvr_dev, isp_samples_per_pixel, &samples_per_pixel);
+	if (err)
+		return err;
+
+	if (samples_per_pixel == 1) {
+		if (samples >= 4)
+			x <<= 1;
+		if (samples >= 2)
+			y <<= 1;
+	} else if (samples_per_pixel == 2) {
+		if (samples >= 8)
+			x <<= 1;
+		if (samples >= 4)
+			y <<= 1;
+	} else if (samples_per_pixel == 4) {
+		if (samples >= 8)
+			y <<= 1;
+	} else {
+		WARN(true, "Unsupported ISP samples per pixel value");
+		return -EINVAL;
+	}
+
+	*value_out = ((x << ROGUE_CR_ISP_MTILE_SIZE_X_SHIFT) & ~ROGUE_CR_ISP_MTILE_SIZE_X_CLRMSK) |
+		     ((y << ROGUE_CR_ISP_MTILE_SIZE_Y_SHIFT) & ~ROGUE_CR_ISP_MTILE_SIZE_Y_CLRMSK);
+
+	return 0;
+}
+
+static int
+get_cr_multisamplectl_val(u32 samples, bool y_flip, u64 *value_out)
+{
+	static const struct {
+		u8 x[8];
+		u8 y[8];
+	} sample_positions[4] = {
+		/* 1 sample */
+		{
+			.x = { 8 },
+			.y = { 8 },
+		},
+		/* 2 samples */
+		{
+			.x = { 12, 4 },
+			.y = { 12, 4 },
+		},
+		/* 4 samples */
+		{
+			.x = { 6, 14, 2, 10 },
+			.y = { 2, 6, 10, 14 },
+		},
+		/* 8 samples */
+		{
+			.x = { 9, 7, 13, 5, 3, 1, 11, 15 },
+			.y = { 5, 11, 9, 3, 13, 7, 15, 1 },
+		},
+	};
+	const int idx = fls(samples) - 1;
+	u64 value = 0;
+
+	if (idx < 0 || idx > 3)
+		return -EINVAL;
+
+	/* Pack one byte per sample: X position in bits [3:0], Y position in bits [7:4]. */
+	for (u32 i = 0; i < 8; i++) {
+		value |= (u64)sample_positions[idx].x[i] << (i * 8);
+		if (y_flip)
+			value |= (u64)((16 - sample_positions[idx].y[i]) & 0xf) << (i * 8 + 4);
+		else
+			value |= (u64)sample_positions[idx].y[i] << (i * 8 + 4);
+	}
+
+	*value_out = value;
+
+	return 0;
+}
+
+static int
+get_cr_te_aa_val(struct pvr_device *pvr_dev, u32 samples, u32 *value_out)
+{
+	u32 samples_per_pixel;
+	u32 value = 0;
+	int err = 0;
+
+	err = PVR_FEATURE_VALUE(pvr_dev, isp_samples_per_pixel, &samples_per_pixel);
+	if (err)
+		return err;
+
+	switch (samples_per_pixel) {
+	case 1:
+		if (samples >= 2)
+			value |= ROGUE_CR_TE_AA_Y_EN;
+		if (samples >= 4)
+			value |= ROGUE_CR_TE_AA_X_EN;
+		break;
+	case 2:
+		if (samples >= 2)
+			value |= ROGUE_CR_TE_AA_X2_EN;
+		if (samples >= 4)
+			value |= ROGUE_CR_TE_AA_Y_EN;
+		if (samples >= 8)
+			value |= ROGUE_CR_TE_AA_X_EN;
+		break;
+	case 4:
+		if (samples >= 2)
+			value |= ROGUE_CR_TE_AA_X2_EN;
+		if (samples >= 4)
+			value |= ROGUE_CR_TE_AA_Y2_EN;
+		if (samples >= 8)
+			value |= ROGUE_CR_TE_AA_Y_EN;
+		break;
+	default:
+		WARN(true, "Unsupported ISP samples per pixel value");
+		return -EINVAL;
+	}
+
+	*value_out = value;
+
+	return 0;
+}
+
+static void
+hwrtdata_common_init(void *cpu_ptr, void *priv)
+{
+	struct pvr_hwrt_dataset *hwrt = priv;
+
+	memcpy(cpu_ptr, &hwrt->common, sizeof(hwrt->common));
+}
+
+static int
+hwrt_init_common_fw_structure(struct pvr_file *pvr_file,
+			      struct drm_pvr_ioctl_create_hwrt_dataset_args *args,
+			      struct pvr_hwrt_dataset *hwrt)
+{
+	struct drm_pvr_create_hwrt_geom_data_args *geom_data_args = &args->geom_data_args;
+	struct pvr_device *pvr_dev = pvr_file->pvr_dev;
+	struct pvr_rt_mtile_info info;
+	int err;
+
+	err = PVR_FEATURE_VALUE(pvr_dev, tile_size_x, &info.tile_size_x);
+	if (WARN_ON(err))
+		return err;
+
+	err = PVR_FEATURE_VALUE(pvr_dev, tile_size_y, &info.tile_size_y);
+	if (WARN_ON(err))
+		return err;
+
+	info.num_tiles_x = DIV_ROUND_UP(args->width, info.tile_size_x);
+	info.num_tiles_y = DIV_ROUND_UP(args->height, info.tile_size_y);
+
+	if (PVR_HAS_FEATURE(pvr_dev, simple_parameter_format_version)) {
+		u32 parameter_format;
+
+		err = PVR_FEATURE_VALUE(pvr_dev, simple_parameter_format_version,
+					&parameter_format);
+		if (WARN_ON(err))
+			return err;
+
+		WARN_ON(parameter_format != 2);
+
+		/*
+		 * Set up 16 macrotiles with a multiple of 2x2 tiles per macrotile, which is
+		 * aligned to a tile group.
+		 */
+		info.mtile_x[0] = DIV_ROUND_UP(info.num_tiles_x, 8) * 2;
+		info.mtile_y[0] = DIV_ROUND_UP(info.num_tiles_y, 8) * 2;
+		info.mtile_x[1] = 0;
+		info.mtile_y[1] = 0;
+		info.mtile_x[2] = 0;
+		info.mtile_y[2] = 0;
+		info.tile_max_x = round_up(info.num_tiles_x, 2) - 1;
+		info.tile_max_y = round_up(info.num_tiles_y, 2) - 1;
+	} else {
+		/* Set up 16 macrotiles with a multiple of 4x4 tiles per macrotile. */
+		info.mtile_x[0] = round_up(DIV_ROUND_UP(info.num_tiles_x, 4), 4);
+		info.mtile_y[0] = round_up(DIV_ROUND_UP(info.num_tiles_y, 4), 4);
+		info.mtile_x[1] = info.mtile_x[0] * 2;
+		info.mtile_y[1] = info.mtile_y[0] * 2;
+		info.mtile_x[2] = info.mtile_x[0] * 3;
+		info.mtile_y[2] = info.mtile_y[0] * 3;
+		info.tile_max_x = info.num_tiles_x - 1;
+		info.tile_max_y = info.num_tiles_y - 1;
+	}
+
+	hwrt->common.geom_caches_need_zeroing = false;
+
+	hwrt->common.isp_merge_lower_x = args->isp_merge_lower_x;
+	hwrt->common.isp_merge_lower_y = args->isp_merge_lower_y;
+	hwrt->common.isp_merge_upper_x = args->isp_merge_upper_x;
+	hwrt->common.isp_merge_upper_y = args->isp_merge_upper_y;
+	hwrt->common.isp_merge_scale_x = args->isp_merge_scale_x;
+	hwrt->common.isp_merge_scale_y = args->isp_merge_scale_y;
+
+	err = get_cr_multisamplectl_val(args->samples, false,
+					&hwrt->common.multi_sample_ctl);
+	if (err)
+		return err;
+
+	err = get_cr_multisamplectl_val(args->samples, true,
+					&hwrt->common.flipped_multi_sample_ctl);
+	if (err)
+		return err;
+
+	hwrt->common.mtile_stride = info.mtile_x[0] * info.mtile_y[0];
+
+	err = get_cr_te_aa_val(pvr_dev, args->samples, &hwrt->common.teaa);
+	if (err)
+		return err;
+
+	hwrt->common.screen_pixel_max =
+		(((args->width - 1) << ROGUE_CR_PPP_SCREEN_PIXXMAX_SHIFT) &
+		 ~ROGUE_CR_PPP_SCREEN_PIXXMAX_CLRMSK) |
+		(((args->height - 1) << ROGUE_CR_PPP_SCREEN_PIXYMAX_SHIFT) &
+		 ~ROGUE_CR_PPP_SCREEN_PIXYMAX_CLRMSK);
+
+	hwrt->common.te_screen =
+		((info.tile_max_x << ROGUE_CR_TE_SCREEN_XMAX_SHIFT) &
+		 ~ROGUE_CR_TE_SCREEN_XMAX_CLRMSK) |
+		((info.tile_max_y << ROGUE_CR_TE_SCREEN_YMAX_SHIFT) &
+		 ~ROGUE_CR_TE_SCREEN_YMAX_CLRMSK);
+	hwrt->common.te_mtile1 =
+		((info.mtile_x[0] << ROGUE_CR_TE_MTILE1_X1_SHIFT) & ~ROGUE_CR_TE_MTILE1_X1_CLRMSK) |
+		((info.mtile_x[1] << ROGUE_CR_TE_MTILE1_X2_SHIFT) & ~ROGUE_CR_TE_MTILE1_X2_CLRMSK) |
+		((info.mtile_x[2] << ROGUE_CR_TE_MTILE1_X3_SHIFT) & ~ROGUE_CR_TE_MTILE1_X3_CLRMSK);
+	hwrt->common.te_mtile2 =
+		((info.mtile_y[0] << ROGUE_CR_TE_MTILE2_Y1_SHIFT) & ~ROGUE_CR_TE_MTILE2_Y1_CLRMSK) |
+		((info.mtile_y[1] << ROGUE_CR_TE_MTILE2_Y2_SHIFT) & ~ROGUE_CR_TE_MTILE2_Y2_CLRMSK) |
+		((info.mtile_y[2] << ROGUE_CR_TE_MTILE2_Y3_SHIFT) & ~ROGUE_CR_TE_MTILE2_Y3_CLRMSK);
+
+	err = get_cr_isp_mtile_size_val(pvr_dev, args->samples, &info,
+					&hwrt->common.isp_mtile_size);
+	if (err)
+		return err;
+
+	hwrt->common.tpc_stride = geom_data_args->tpc_stride;
+	hwrt->common.tpc_size = geom_data_args->tpc_size;
+
+	hwrt->common.rgn_header_size = args->region_header_size;
+
+	err = pvr_fw_object_create(pvr_dev, sizeof(struct rogue_fwif_hwrtdata_common),
+				   PVR_BO_FW_FLAGS_DEVICE_UNCACHED, hwrtdata_common_init, hwrt,
+				   &hwrt->common_fw_obj);
+
+	return err;
+}
+
+static void
+hwrt_fw_data_init(void *cpu_ptr, void *priv)
+{
+	struct pvr_hwrt_data *hwrt_data = priv;
+
+	memcpy(cpu_ptr, &hwrt_data->data, sizeof(hwrt_data->data));
+}
+
+static int
+hwrt_data_init_fw_structure(struct pvr_file *pvr_file,
+			    struct pvr_hwrt_dataset *hwrt,
+			    struct drm_pvr_ioctl_create_hwrt_dataset_args *args,
+			    struct drm_pvr_create_hwrt_rt_data_args *rt_data_args,
+			    struct pvr_hwrt_data *hwrt_data)
+{
+	struct drm_pvr_create_hwrt_geom_data_args *geom_data_args = &args->geom_data_args;
+	struct pvr_device *pvr_dev = pvr_file->pvr_dev;
+	struct rogue_fwif_rta_ctl *rta_ctl;
+	int free_list_i;
+	int err;
+
+	pvr_fw_object_get_fw_addr(hwrt->common_fw_obj,
+				  &hwrt_data->data.hwrt_data_common_fw_addr);
+
+	for (free_list_i = 0; free_list_i < ARRAY_SIZE(hwrt->free_lists); free_list_i++) {
+		pvr_fw_object_get_fw_addr(hwrt->free_lists[free_list_i]->fw_obj,
+					  &hwrt_data->data.freelists_fw_addr[free_list_i]);
+	}
+
+	hwrt_data->data.tail_ptrs_dev_addr = geom_data_args->tpc_dev_addr;
+	hwrt_data->data.vheap_table_dev_addr = geom_data_args->vheap_table_dev_addr;
+	hwrt_data->data.rtc_dev_addr = geom_data_args->rtc_dev_addr;
+
+	hwrt_data->data.pm_mlist_dev_addr = rt_data_args->pm_mlist_dev_addr;
+	hwrt_data->data.macrotile_array_dev_addr = rt_data_args->macrotile_array_dev_addr;
+	hwrt_data->data.rgn_header_dev_addr = rt_data_args->region_header_dev_addr;
+
+	rta_ctl = &hwrt_data->data.rta_ctl;
+
+	rta_ctl->render_target_index = 0;
+	rta_ctl->active_render_targets = 0;
+	rta_ctl->valid_render_targets_fw_addr = 0;
+	rta_ctl->rta_num_partial_renders_fw_addr = 0;
+	rta_ctl->max_rts = args->layers;
+
+	if (args->layers > 1) {
+		err = pvr_fw_object_create(pvr_dev, args->layers * SRTC_ENTRY_SIZE,
+					   PVR_BO_FW_FLAGS_DEVICE_UNCACHED,
+					   NULL, NULL, &hwrt_data->srtc_obj);
+		if (err)
+			return err;
+		pvr_fw_object_get_fw_addr(hwrt_data->srtc_obj,
+					  &rta_ctl->valid_render_targets_fw_addr);
+
+		err = pvr_fw_object_create(pvr_dev, args->layers * RAA_ENTRY_SIZE,
+					   PVR_BO_FW_FLAGS_DEVICE_UNCACHED,
+					   NULL, NULL, &hwrt_data->raa_obj);
+		if (err)
+			goto err_put_shadow_rt_cache;
+		pvr_fw_object_get_fw_addr(hwrt_data->raa_obj,
+					  &rta_ctl->rta_num_partial_renders_fw_addr);
+	}
+
+	err = pvr_fw_object_create(pvr_dev, sizeof(struct rogue_fwif_hwrtdata),
+				   PVR_BO_FW_FLAGS_DEVICE_UNCACHED,
+				   hwrt_fw_data_init, hwrt_data, &hwrt_data->fw_obj);
+	if (err)
+		goto err_put_raa_obj;
+
+	pvr_free_list_add_hwrt(hwrt->free_lists[0], hwrt_data);
+
+	return 0;
+
+err_put_raa_obj:
+	if (args->layers > 1)
+		pvr_fw_object_destroy(hwrt_data->raa_obj);
+
+err_put_shadow_rt_cache:
+	if (args->layers > 1)
+		pvr_fw_object_destroy(hwrt_data->srtc_obj);
+
+	return err;
+}
+
+static void
+hwrt_data_fini_fw_structure(struct pvr_hwrt_dataset *hwrt, int hwrt_nr)
+{
+	struct pvr_hwrt_data *hwrt_data = &hwrt->data[hwrt_nr];
+
+	pvr_free_list_remove_hwrt(hwrt->free_lists[0], hwrt_data);
+
+	if (hwrt->max_rts > 1) {
+		pvr_fw_object_destroy(hwrt_data->raa_obj);
+		pvr_fw_object_destroy(hwrt_data->srtc_obj);
+	}
+
+	pvr_fw_object_destroy(hwrt_data->fw_obj);
+}
+
+/**
+ * pvr_hwrt_dataset_create() - Create a new HWRT dataset
+ * @pvr_file: Pointer to pvr_file structure.
+ * @args: Creation arguments from userspace.
+ *
+ * Return:
+ *  * Pointer to new HWRT, or
+ *  * ERR_PTR(-%ENOMEM) on out of memory, or
+ *  * An error pointer on any other failure during creation.
+ */
+struct pvr_hwrt_dataset *
+pvr_hwrt_dataset_create(struct pvr_file *pvr_file,
+			struct drm_pvr_ioctl_create_hwrt_dataset_args *args)
+{
+	struct pvr_hwrt_dataset *hwrt;
+	int err;
+
+	/* Create and fill out the kernel structure */
+	hwrt = kzalloc(sizeof(*hwrt), GFP_KERNEL);
+
+	if (!hwrt)
+		return ERR_PTR(-ENOMEM);
+
+	kref_init(&hwrt->ref_count);
+
+	err = hwrt_init_kernel_structure(pvr_file, args, hwrt);
+	if (err < 0)
+		goto err_free;
+
+	err = hwrt_init_common_fw_structure(pvr_file, args, hwrt);
+	if (err < 0)
+		goto err_free;
+
+	for (int i = 0; i < ARRAY_SIZE(hwrt->data); i++) {
+		err = hwrt_data_init_fw_structure(pvr_file, hwrt, args,
+						  &args->rt_data_args[i],
+						  &hwrt->data[i]);
+		if (err < 0) {
+			i--;
+			/* Destroy already created structures. */
+			for (; i >= 0; i--)
+				hwrt_data_fini_fw_structure(hwrt, i);
+			goto err_free;
+		}
+
+		hwrt->data[i].hwrt_dataset = hwrt;
+	}
+
+	return hwrt;
+
+err_free:
+	pvr_hwrt_dataset_put(hwrt);
+
+	return ERR_PTR(err);
+}
+
+static void
+pvr_hwrt_dataset_release(struct kref *ref_count)
+{
+	struct pvr_hwrt_dataset *hwrt =
+		container_of(ref_count, struct pvr_hwrt_dataset, ref_count);
+
+	for (int i = ARRAY_SIZE(hwrt->data) - 1; i >= 0; i--) {
+		WARN_ON(pvr_fw_structure_cleanup(hwrt->pvr_dev, ROGUE_FWIF_CLEANUP_HWRTDATA,
+						 hwrt->data[i].fw_obj, 0));
+		hwrt_data_fini_fw_structure(hwrt, i);
+	}
+
+	hwrt_fini_common_fw_structure(hwrt);
+	hwrt_fini_kernel_structure(hwrt);
+
+	kfree(hwrt);
+}
+
+/**
+ * pvr_destroy_hwrt_datasets_for_file() - Destroy any HWRT datasets associated
+ * with the given file.
+ * @pvr_file: Pointer to pvr_file structure.
+ *
+ * Removes all HWRT datasets associated with @pvr_file from the device
+ * hwrt_dataset list and drops initial references. HWRT datasets will then be
+ * destroyed once all outstanding references are dropped.
+ */
+void pvr_destroy_hwrt_datasets_for_file(struct pvr_file *pvr_file)
+{
+	struct pvr_hwrt_dataset *hwrt;
+	unsigned long handle;
+
+	xa_for_each(&pvr_file->hwrt_handles, handle, hwrt) {
+		(void)hwrt;
+		pvr_hwrt_dataset_put(xa_erase(&pvr_file->hwrt_handles, handle));
+	}
+}
+
+/**
+ * pvr_hwrt_dataset_put() - Release reference on HWRT dataset
+ * @hwrt: Pointer to HWRT dataset to release reference on
+ */
+void
+pvr_hwrt_dataset_put(struct pvr_hwrt_dataset *hwrt)
+{
+	if (hwrt)
+		kref_put(&hwrt->ref_count, pvr_hwrt_dataset_release);
+}
diff --git a/drivers/gpu/drm/imagination/pvr_hwrt.h b/drivers/gpu/drm/imagination/pvr_hwrt.h
new file mode 100644
index 000000000000..dd7797a85f27
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_hwrt.h
@@ -0,0 +1,165 @@
+/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#ifndef PVR_HWRT_H
+#define PVR_HWRT_H
+
+#include <linux/compiler_attributes.h>
+#include <linux/kref.h>
+#include <linux/list.h>
+#include <linux/types.h>
+#include <linux/xarray.h>
+#include <uapi/drm/pvr_drm.h>
+
+#include "pvr_device.h"
+#include "pvr_rogue_fwif_shared.h"
+
+/* Forward declaration from pvr_free_list.h. */
+struct pvr_free_list;
+
+/* Forward declaration from pvr_gem.h. */
+struct pvr_fw_object;
+
+/**
+ * struct pvr_hwrt_data - structure representing HWRT data
+ */
+struct pvr_hwrt_data {
+	/** @fw_obj: FW object representing the FW-side structure. */
+	struct pvr_fw_object *fw_obj;
+
+	/** @data: Local copy of FW-side structure. */
+	struct rogue_fwif_hwrtdata data;
+
+	/** @freelist_node: List node connecting this HWRT to the local freelist. */
+	struct list_head freelist_node;
+
+	/**
+	 * @srtc_obj: FW object representing shadow render target cache.
+	 *
+	 * Only valid if @max_rts > 1.
+	 */
+	struct pvr_fw_object *srtc_obj;
+
+	/**
+	 * @raa_obj: FW object representing renders accumulation array.
+	 *
+	 * Only valid if @max_rts > 1.
+	 */
+	struct pvr_fw_object *raa_obj;
+
+	/** @hwrt_dataset: Back pointer to owning HWRT dataset. */
+	struct pvr_hwrt_dataset *hwrt_dataset;
+};
+
+/**
+ * struct pvr_hwrt_dataset - structure representing a HWRT data set.
+ */
+struct pvr_hwrt_dataset {
+	/** @ref_count: Reference count of object. */
+	struct kref ref_count;
+
+	/** @pvr_dev: Pointer to device that owns this object. */
+	struct pvr_device *pvr_dev;
+
+	/** @common_fw_obj: FW object representing common FW-side structure. */
+	struct pvr_fw_object *common_fw_obj;
+
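+	/** @common: Common HWRT data, shared between all HWRT data structures in this set. */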
+	struct rogue_fwif_hwrtdata_common common;
+
+	/** @data: HWRT data structures belonging to this set. */
+	struct pvr_hwrt_data data[ROGUE_FWIF_NUM_RTDATAS];
+
+	/** @free_lists: Free lists used by HWRT data set. */
+	struct pvr_free_list *free_lists[ROGUE_FWIF_NUM_RTDATA_FREELISTS];
+
+	/** @max_rts: Maximum render targets for this HWRT data set. */
+	u16 max_rts;
+};
+
+struct pvr_hwrt_dataset *
+pvr_hwrt_dataset_create(struct pvr_file *pvr_file,
+			struct drm_pvr_ioctl_create_hwrt_dataset_args *args);
+
+void
+pvr_destroy_hwrt_datasets_for_file(struct pvr_file *pvr_file);
+
+/**
+ * pvr_hwrt_dataset_lookup() - Lookup HWRT dataset pointer from handle
+ * @pvr_file: Pointer to pvr_file structure.
+ * @handle: Object handle.
+ *
+ * Takes reference on dataset object. Call pvr_hwrt_dataset_put() to release.
+ *
+ * Returns:
+ *  * The requested object on success, or
+ *  * %NULL on failure (object does not exist in list, or is not a HWRT
+ *    dataset)
+ */
+static __always_inline struct pvr_hwrt_dataset *
+pvr_hwrt_dataset_lookup(struct pvr_file *pvr_file, u32 handle)
+{
+	struct pvr_hwrt_dataset *hwrt;
+
+	xa_lock(&pvr_file->hwrt_handles);
+	hwrt = xa_load(&pvr_file->hwrt_handles, handle);
+
+	if (hwrt)
+		kref_get(&hwrt->ref_count);
+
+	xa_unlock(&pvr_file->hwrt_handles);
+
+	return hwrt;
+}
+
+void
+pvr_hwrt_dataset_put(struct pvr_hwrt_dataset *hwrt);
+
+/**
+ * pvr_hwrt_data_lookup() - Lookup HWRT data pointer from handle and index
+ * @pvr_file: Pointer to pvr_file structure.
+ * @handle: Object handle.
+ * @index: Index of RT data within dataset.
+ *
+ * Takes reference on dataset object. Call pvr_hwrt_data_put() to release.
+ *
+ * Returns:
+ *  * The requested object on success, or
+ *  * %NULL on failure (object does not exist in list, or is not a HWRT
+ *    dataset, or index is out of range)
+ */
+static __always_inline struct pvr_hwrt_data *
+pvr_hwrt_data_lookup(struct pvr_file *pvr_file, u32 handle, u32 index)
+{
+	struct pvr_hwrt_dataset *hwrt_dataset = pvr_hwrt_dataset_lookup(pvr_file, handle);
+
+	if (hwrt_dataset) {
+		if (index < ARRAY_SIZE(hwrt_dataset->data))
+			return &hwrt_dataset->data[index];
+
+		pvr_hwrt_dataset_put(hwrt_dataset);
+	}
+
+	return NULL;
+}
+
+/**
+ * pvr_hwrt_data_put() - Release reference on HWRT data
+ * @hwrt: Pointer to HWRT data to release reference on
+ */
+static __always_inline void
+pvr_hwrt_data_put(struct pvr_hwrt_data *hwrt)
+{
+	if (hwrt)
+		pvr_hwrt_dataset_put(hwrt->hwrt_dataset);
+}
+
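+/**
+ * pvr_hwrt_data_get() - Take additional reference on HWRT data.
+ * @hwrt: HWRT data pointer. May be %NULL.
+ *
+ * Call pvr_hwrt_data_put() to release.
+ *
+ * Returns:
+ *  * The requested HWRT data on success, or
+ *  * %NULL if no HWRT data was passed.
+ */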
+static __always_inline struct pvr_hwrt_data *
+pvr_hwrt_data_get(struct pvr_hwrt_data *hwrt)
+{
+	if (hwrt)
+		kref_get(&hwrt->hwrt_dataset->ref_count);
+
+	return hwrt;
+}
+
+#endif /* PVR_HWRT_H */
-- 
2.41.0


^ permalink raw reply related	[flat|nested] 77+ messages in thread

* [PATCH v5 13/17] drm/imagination: Implement context creation/destruction ioctls
  2023-08-16  8:25 ` Sarah Walker
@ 2023-08-16  8:25   ` Sarah Walker
  -1 siblings, 0 replies; 77+ messages in thread
From: Sarah Walker @ 2023-08-16  8:25 UTC (permalink / raw)
  To: dri-devel
  Cc: matthew.brost, luben.tuikov, tzimmermann, linux-kernel, mripard,
	afd, boris.brezillon, dakr, donald.robson, hns, christian.koenig,
	faith.ekstrand

Implement ioctls for the creation and destruction of contexts. Contexts are
used for job submission and each is associated with a particular job type.
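
For illustration only (not part of this patch), userspace is expected to drive
these ioctls roughly as sketched below; the argument structs and the context
type/priority values come from the UAPI header added earlier in this series,
while the DRM_IOCTL_PVR_* wrapper names are assumptions to be checked against
include/uapi/drm/pvr_drm.h (error handling omitted):

    struct drm_pvr_ioctl_create_context_args cargs = {
            .type = DRM_PVR_CTX_TYPE_RENDER,
            .priority = DRM_PVR_CTX_PRIORITY_NORMAL,
            .vm_context_handle = vm_ctx_handle,
            .static_context_state = (__u64)(unsigned long)ctx_state,
            .static_context_state_len = ctx_state_len,
    };
    struct drm_pvr_ioctl_destroy_context_args dargs = { 0 };

    drmIoctl(fd, DRM_IOCTL_PVR_CREATE_CONTEXT, &cargs);  /* assumed name */
    /* ... submit jobs against cargs.handle ... */
    dargs.handle = cargs.handle;
    drmIoctl(fd, DRM_IOCTL_PVR_DESTROY_CONTEXT, &dargs); /* assumed name */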

Changes since v3:
- Use drm_dev_{enter,exit}

Co-developed-by: Boris Brezillon <boris.brezillon@collabora.com>
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Signed-off-by: Sarah Walker <sarah.walker@imgtec.com>
---
 drivers/gpu/drm/imagination/Makefile          |   4 +
 drivers/gpu/drm/imagination/pvr_cccb.c        | 267 ++++++++++++++
 drivers/gpu/drm/imagination/pvr_cccb.h        | 109 ++++++
 drivers/gpu/drm/imagination/pvr_context.c     | 337 ++++++++++++++++++
 drivers/gpu/drm/imagination/pvr_context.h     | 161 +++++++++
 drivers/gpu/drm/imagination/pvr_device.h      |  21 ++
 drivers/gpu/drm/imagination/pvr_drv.c         |  29 +-
 drivers/gpu/drm/imagination/pvr_stream.c      | 285 +++++++++++++++
 drivers/gpu/drm/imagination/pvr_stream.h      |  75 ++++
 drivers/gpu/drm/imagination/pvr_stream_defs.c | 125 +++++++
 drivers/gpu/drm/imagination/pvr_stream_defs.h |  16 +
 11 files changed, 1427 insertions(+), 2 deletions(-)
 create mode 100644 drivers/gpu/drm/imagination/pvr_cccb.c
 create mode 100644 drivers/gpu/drm/imagination/pvr_cccb.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_context.c
 create mode 100644 drivers/gpu/drm/imagination/pvr_context.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_stream.c
 create mode 100644 drivers/gpu/drm/imagination/pvr_stream.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_stream_defs.c
 create mode 100644 drivers/gpu/drm/imagination/pvr_stream_defs.h

diff --git a/drivers/gpu/drm/imagination/Makefile b/drivers/gpu/drm/imagination/Makefile
index fca2ee2efbac..0c8ab120f277 100644
--- a/drivers/gpu/drm/imagination/Makefile
+++ b/drivers/gpu/drm/imagination/Makefile
@@ -5,6 +5,8 @@ subdir-ccflags-y := -I$(srctree)/$(src)
 
 powervr-y := \
 	pvr_ccb.o \
+	pvr_cccb.o \
+	pvr_context.o \
 	pvr_device.o \
 	pvr_device_info.o \
 	pvr_drv.o \
@@ -18,6 +20,8 @@ powervr-y := \
 	pvr_hwrt.o \
 	pvr_mmu.o \
 	pvr_power.o \
+	pvr_stream.o \
+	pvr_stream_defs.o \
 	pvr_vm.o \
 	pvr_vm_mips.o
 
diff --git a/drivers/gpu/drm/imagination/pvr_cccb.c b/drivers/gpu/drm/imagination/pvr_cccb.c
new file mode 100644
index 000000000000..8dfc157c3c10
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_cccb.c
@@ -0,0 +1,267 @@
+// SPDX-License-Identifier: GPL-2.0 OR MIT
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#include "pvr_ccb.h"
+#include "pvr_cccb.h"
+#include "pvr_device.h"
+#include "pvr_gem.h"
+#include "pvr_hwrt.h"
+
+#include <linux/compiler.h>
+#include <linux/delay.h>
+#include <linux/jiffies.h>
+#include <linux/mutex.h>
+#include <linux/types.h>
+
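+/*
+ * Compute the space available in a power-of-two sized circular buffer, given
+ * the current write and read offsets. One byte is always kept unused so that
+ * a full buffer can be distinguished from an empty one (an empty buffer
+ * reports ccb_size - 1 bytes of space).
+ */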
+static __always_inline u32
+get_ccb_space(u32 w_off, u32 r_off, u32 ccb_size)
+{
+	return (((r_off) - (w_off)) + ((ccb_size) - 1)) & ((ccb_size) - 1);
+}
+
+static void
+cccb_ctrl_init(void *cpu_ptr, void *priv)
+{
+	struct rogue_fwif_cccb_ctl *ctrl = cpu_ptr;
+	struct pvr_cccb *pvr_cccb = priv;
+
+	WRITE_ONCE(ctrl->write_offset, 0);
+	WRITE_ONCE(ctrl->read_offset, 0);
+	WRITE_ONCE(ctrl->dep_offset, 0);
+	WRITE_ONCE(ctrl->wrap_mask, pvr_cccb->wrap_mask);
+}
+
+/**
+ * pvr_cccb_init() - Initialise a Client CCB
+ * @pvr_dev: Device pointer.
+ * @pvr_cccb: Pointer to Client CCB structure to initialise.
+ * @size_log2: Log2 size of Client CCB in bytes.
+ * @name: Name of owner of Client CCB. Used for fence context.
+ *
+ * Return:
+ *  * Zero on success, or
+ *  * Any error code returned by pvr_fw_object_create_and_map().
+ */
+int
+pvr_cccb_init(struct pvr_device *pvr_dev, struct pvr_cccb *pvr_cccb,
+	      u32 size_log2, const char *name)
+{
+	size_t size = 1 << size_log2;
+	int err;
+
+	pvr_cccb->size = size;
+	pvr_cccb->write_offset = 0;
+	pvr_cccb->wrap_mask = size - 1;
+
+	/*
+	 * Map CCCB and control structure as uncached, so we don't have to flush
+	 * CPU cache repeatedly when polling for space.
+	 */
+	pvr_cccb->ctrl = pvr_fw_object_create_and_map(pvr_dev, sizeof(*pvr_cccb->ctrl),
+						      PVR_BO_FW_FLAGS_DEVICE_UNCACHED,
+						      cccb_ctrl_init, pvr_cccb,
+						      &pvr_cccb->ctrl_obj);
+	if (IS_ERR(pvr_cccb->ctrl))
+		return PTR_ERR(pvr_cccb->ctrl);
+
+	pvr_cccb->cccb = pvr_fw_object_create_and_map(pvr_dev, size,
+						      PVR_BO_FW_FLAGS_DEVICE_UNCACHED,
+						      NULL, NULL, &pvr_cccb->cccb_obj);
+	if (IS_ERR(pvr_cccb->cccb)) {
+		err = PTR_ERR(pvr_cccb->cccb);
+		goto err_free_ctrl;
+	}
+
+	pvr_fw_object_get_fw_addr(pvr_cccb->ctrl_obj, &pvr_cccb->ctrl_fw_addr);
+	pvr_fw_object_get_fw_addr(pvr_cccb->cccb_obj, &pvr_cccb->cccb_fw_addr);
+
+	return 0;
+
+err_free_ctrl:
+	pvr_fw_object_unmap_and_destroy(pvr_cccb->ctrl_obj);
+
+	return err;
+}
+
+/**
+ * pvr_cccb_fini() - Release Client CCB structure
+ * @pvr_cccb: Client CCB to release.
+ */
+void
+pvr_cccb_fini(struct pvr_cccb *pvr_cccb)
+{
+	pvr_fw_object_unmap_and_destroy(pvr_cccb->cccb_obj);
+	pvr_fw_object_unmap_and_destroy(pvr_cccb->ctrl_obj);
+}
+
+/**
+ * pvr_cccb_cmdseq_fits() - Check if a command sequence fits in the CCCB
+ * @pvr_cccb: Target Client CCB.
+ * @size: Size of the command sequence.
+ *
+ * Check if a command sequence fits in the CCCB we have at hand.
+ *
+ * Return:
+ *  * true if the command sequence fits in the CCCB, or
+ *  * false otherwise.
+ */
+bool pvr_cccb_cmdseq_fits(struct pvr_cccb *pvr_cccb, size_t size)
+{
+	struct rogue_fwif_cccb_ctl *ctrl = pvr_cccb->ctrl;
+	u32 read_offset, remaining;
+	bool fits = false;
+
+	read_offset = READ_ONCE(ctrl->read_offset);
+	remaining = pvr_cccb->size - pvr_cccb->write_offset;
+
+	/* Always ensure we have enough room for a padding command at the end of the CCCB.
+	 * If our command sequence does not fit in the space left before the wrap point,
+	 * it will be placed at the start of the CCCB, so account for the padding command
+	 * that must fill the remainder.
+	 */
+	if (size + PADDING_COMMAND_SIZE > remaining)
+		size += remaining;
+
+	if (get_ccb_space(pvr_cccb->write_offset, read_offset, pvr_cccb->size) >= size)
+		fits = true;
+
+	return fits;
+}
+
+/**
+ * pvr_cccb_write_command_with_header() - Write a command + command header to a
+ *                                        Client CCB
+ * @pvr_cccb: Target Client CCB.
+ * @cmd_type: Client CCB command type. Must be one of %ROGUE_FWIF_CCB_CMD_TYPE_*.
+ * @cmd_size: Size of command in bytes.
+ * @cmd_data: Pointer to command to write.
+ * @ext_job_ref: External job reference.
+ * @int_job_ref: Internal job reference.
+ *
+ * Caller must make sure there's enough space in CCCB to queue this command. This
+ * can be done by calling pvr_cccb_cmdseq_fits().
+ *
+ * This function is not protected by any lock. The caller must ensure there's
+ * no concurrent caller, which should be guaranteed by the drm_sched model (job
+ * submission is serialized in drm_sched_main()).
+ */
+void
+pvr_cccb_write_command_with_header(struct pvr_cccb *pvr_cccb, u32 cmd_type, u32 cmd_size,
+				   void *cmd_data, u32 ext_job_ref, u32 int_job_ref)
+{
+	u32 sz_with_hdr = pvr_cccb_get_size_of_cmd_with_hdr(cmd_size);
+	struct rogue_fwif_ccb_cmd_header cmd_header = {
+		.cmd_type = cmd_type,
+		.cmd_size = ALIGN(cmd_size, 8),
+		.ext_job_ref = ext_job_ref,
+		.int_job_ref = int_job_ref,
+	};
+	struct rogue_fwif_cccb_ctl *ctrl = pvr_cccb->ctrl;
+	u32 remaining = pvr_cccb->size - pvr_cccb->write_offset;
+	u32 required_size, cccb_space, read_offset;
+
+	/*
+	 * Always ensure we have enough room for a padding command at the end of
+	 * the CCCB.
+	 */
+	if (remaining < sz_with_hdr + PADDING_COMMAND_SIZE) {
+		/*
+		 * Command would need to wrap, so we need to pad the remainder
+		 * of the CCCB.
+		 */
+		required_size = sz_with_hdr + remaining;
+	} else {
+		required_size = sz_with_hdr;
+	}
+
+	read_offset = READ_ONCE(ctrl->read_offset);
+	cccb_space = get_ccb_space(pvr_cccb->write_offset, read_offset, pvr_cccb->size);
+	if (WARN_ON(cccb_space < required_size))
+		return;
+
+	if (required_size != sz_with_hdr) {
+		/* Add padding command */
+		struct rogue_fwif_ccb_cmd_header pad_cmd = {
+			.cmd_type = ROGUE_FWIF_CCB_CMD_TYPE_PADDING,
+			.cmd_size = remaining - sizeof(pad_cmd),
+		};
+
+		memcpy(&pvr_cccb->cccb[pvr_cccb->write_offset], &pad_cmd, sizeof(pad_cmd));
+		pvr_cccb->write_offset = 0;
+	}
+
+	memcpy(&pvr_cccb->cccb[pvr_cccb->write_offset], &cmd_header, sizeof(cmd_header));
+	memcpy(&pvr_cccb->cccb[pvr_cccb->write_offset + sizeof(cmd_header)], cmd_data, cmd_size);
+	pvr_cccb->write_offset += sz_with_hdr;
+}
+
+static void fill_cmd_kick_data(struct pvr_cccb *cccb, u32 ctx_fw_addr,
+			       struct pvr_hwrt_data *hwrt,
+			       struct rogue_fwif_kccb_cmd_kick_data *k)
+{
+	k->context_fw_addr = ctx_fw_addr;
+	k->client_woff_update = cccb->write_offset;
+	k->client_wrap_mask_update = cccb->wrap_mask;
+
+	if (hwrt) {
+		u32 cleanup_state_offset = offsetof(struct rogue_fwif_hwrtdata, cleanup_state);
+
+		pvr_fw_object_get_fw_addr_offset(hwrt->fw_obj, cleanup_state_offset,
+						 &k->cleanup_ctl_fw_addr[k->num_cleanup_ctl++]);
+	}
+}
+
+/**
+ * pvr_cccb_send_kccb_kick() - Send KCCB kick to trigger command processing
+ * @pvr_dev: Device pointer.
+ * @pvr_cccb: Pointer to CCCB to process.
+ * @cctx_fw_addr: FW virtual address for context owning this Client CCB.
+ * @hwrt: HWRT data set associated with this kick. May be %NULL.
+ *
+ * You must call pvr_kccb_reserve_slot() and, if it returned a fence, wait for
+ * that fence to signal before calling pvr_cccb_send_kccb_kick(). The client
+ * commands must already have been written to @pvr_cccb at that point.
+ */
+void
+pvr_cccb_send_kccb_kick(struct pvr_device *pvr_dev,
+			struct pvr_cccb *pvr_cccb, u32 cctx_fw_addr,
+			struct pvr_hwrt_data *hwrt)
+{
+	struct rogue_fwif_kccb_cmd cmd_kick = {
+		.cmd_type = ROGUE_FWIF_KCCB_CMD_KICK,
+	};
+
+	fill_cmd_kick_data(pvr_cccb, cctx_fw_addr, hwrt, &cmd_kick.cmd_data.cmd_kick_data);
+
+	/* Make sure the writes to the CCCB are flushed before sending the KICK. */
+	wmb();
+
+	pvr_kccb_send_cmd_reserved_powered(pvr_dev, &cmd_kick, NULL);
+}
+
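+/**
+ * pvr_cccb_send_kccb_combined_kick() - Send KCCB kick for a combined geom/frag job
+ * @pvr_dev: Device pointer.
+ * @geom_cccb: Client CCB owned by the geometry context.
+ * @frag_cccb: Client CCB owned by the fragment context.
+ * @geom_ctx_fw_addr: FW virtual address of the geometry context.
+ * @frag_ctx_fw_addr: FW virtual address of the fragment context.
+ * @hwrt: HWRT data set associated with this kick.
+ * @frag_is_pr: True if the fragment command is a partial render, in which case
+ *              the HWRT resources are not attached to the fragment cleanup-ctl
+ *              array (they are already retained by the geometry command).
+ *
+ * As for pvr_cccb_send_kccb_kick(), a KCCB slot must have been reserved via
+ * pvr_kccb_reserve_slot() before calling this function.
+ */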
+void
+pvr_cccb_send_kccb_combined_kick(struct pvr_device *pvr_dev,
+				 struct pvr_cccb *geom_cccb,
+				 struct pvr_cccb *frag_cccb,
+				 u32 geom_ctx_fw_addr,
+				 u32 frag_ctx_fw_addr,
+				 struct pvr_hwrt_data *hwrt,
+				 bool frag_is_pr)
+{
+	struct rogue_fwif_kccb_cmd cmd_kick = {
+		.cmd_type = ROGUE_FWIF_KCCB_CMD_COMBINED_GEOM_FRAG_KICK,
+	};
+
+	fill_cmd_kick_data(geom_cccb, geom_ctx_fw_addr, hwrt,
+			   &cmd_kick.cmd_data.combined_geom_frag_cmd_kick_data.geom_cmd_kick_data);
+
+	/* If this is a partial-render job, we don't attach resources to cleanup-ctl array,
+	 * because the resources are already retained by the geometry job.
+	 */
+	fill_cmd_kick_data(frag_cccb, frag_ctx_fw_addr, frag_is_pr ? NULL : hwrt,
+			   &cmd_kick.cmd_data.combined_geom_frag_cmd_kick_data.frag_cmd_kick_data);
+
+	/* Make sure the writes to the CCCB are flushed before sending the KICK. */
+	wmb();
+
+	pvr_kccb_send_cmd_reserved_powered(pvr_dev, &cmd_kick, NULL);
+}
diff --git a/drivers/gpu/drm/imagination/pvr_cccb.h b/drivers/gpu/drm/imagination/pvr_cccb.h
new file mode 100644
index 000000000000..a9f5f38b874b
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_cccb.h
@@ -0,0 +1,109 @@
+/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#ifndef PVR_CCCB_H
+#define PVR_CCCB_H
+
+#include "pvr_rogue_fwif.h"
+#include "pvr_rogue_fwif_shared.h"
+
+#include <linux/mutex.h>
+#include <linux/types.h>
+
+#define PADDING_COMMAND_SIZE sizeof(struct rogue_fwif_ccb_cmd_header)
+
+/* Forward declaration from pvr_device.h. */
+struct pvr_device;
+
+/* Forward declaration from pvr_gem.h. */
+struct pvr_fw_object;
+
+/* Forward declaration from pvr_hwrt.h. */
+struct pvr_hwrt_data;
+
+struct pvr_cccb {
+	/** @ctrl_obj: FW object representing CCCB control structure. */
+	struct pvr_fw_object *ctrl_obj;
+
+	/** @cccb_obj: FW object representing CCCB. */
+	struct pvr_fw_object *cccb_obj;
+
+	/**
+	 * @ctrl: Kernel mapping of CCCB control structure. Access is serialized
+	 *        by the caller; there is no internal lock.
+	 */
+	struct rogue_fwif_cccb_ctl *ctrl;
+
+	/** @cccb: Kernel mapping of CCCB. Access is serialized by the caller. */
+	u8 *cccb;
+
+	/** @ctrl_fw_addr: FW virtual address of CCCB control structure. */
+	u32 ctrl_fw_addr;
+	/** @cccb_fw_addr: FW virtual address of CCCB. */
+	u32 cccb_fw_addr;
+
+	/** @size: Size of CCCB in bytes. */
+	size_t size;
+
+	/** @write_offset: CCCB write offset. */
+	u32 write_offset;
+
+	/** @wrap_mask: CCCB wrap mask. */
+	u32 wrap_mask;
+};
+
+int pvr_cccb_init(struct pvr_device *pvr_dev, struct pvr_cccb *cccb,
+		  u32 size_log2, const char *name);
+void pvr_cccb_fini(struct pvr_cccb *cccb);
+
+void pvr_cccb_write_command_with_header(struct pvr_cccb *pvr_cccb,
+					u32 cmd_type, u32 cmd_size, void *cmd_data,
+					u32 ext_job_ref, u32 int_job_ref);
+void pvr_cccb_send_kccb_kick(struct pvr_device *pvr_dev,
+			     struct pvr_cccb *pvr_cccb, u32 cctx_fw_addr,
+			     struct pvr_hwrt_data *hwrt);
+void pvr_cccb_send_kccb_combined_kick(struct pvr_device *pvr_dev,
+				      struct pvr_cccb *geom_cccb,
+				      struct pvr_cccb *frag_cccb,
+				      u32 geom_ctx_fw_addr,
+				      u32 frag_ctx_fw_addr,
+				      struct pvr_hwrt_data *hwrt,
+				      bool frag_is_pr);
+bool pvr_cccb_cmdseq_fits(struct pvr_cccb *pvr_cccb, size_t size);
+
+/**
+ * pvr_cccb_get_size_of_cmd_with_hdr() - Get the size of a command and its header.
+ * @cmd_size: Command size.
+ *
+ * Returns the size of the command and its header.
+ */
+static __always_inline u32
+pvr_cccb_get_size_of_cmd_with_hdr(u32 cmd_size)
+{
+	WARN_ON(!IS_ALIGNED(cmd_size, 8));
+	return sizeof(struct rogue_fwif_ccb_cmd_header) + ALIGN(cmd_size, 8);
+}
+
+/**
+ * pvr_cccb_cmdseq_can_fit() - Check if a command sequence can fit in the CCCB.
+ * @pvr_cccb: Target Client CCB.
+ * @size: Command sequence size.
+ *
+ * Returns:
+ *  * true if the CCCB is big enough to contain a command sequence, or
+ *  * false otherwise.
+ */
+static __always_inline bool
+pvr_cccb_cmdseq_can_fit(struct pvr_cccb *pvr_cccb, size_t size)
+{
+	/* We divide the capacity by two to simplify our CCCB fencing logic:
+	 * we want to be sure that, no matter what we had queued before, we
+	 * are able to either queue our command sequence at the end or add a
+	 * padding command and queue the command sequence at the beginning
+	 * of the CCCB. If the command sequence size is bigger than half the
+	 * CCCB capacity, we'd have to queue the padding command and make sure
+	 * the FW is done processing it before queueing our command sequence.
+	 */
+	return size + PADDING_COMMAND_SIZE <= pvr_cccb->size / 2;
+}
+
+#endif /* PVR_CCCB_H */
diff --git a/drivers/gpu/drm/imagination/pvr_context.c b/drivers/gpu/drm/imagination/pvr_context.c
new file mode 100644
index 000000000000..a87f5a295397
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_context.c
@@ -0,0 +1,337 @@
+// SPDX-License-Identifier: GPL-2.0 OR MIT
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#include "pvr_cccb.h"
+#include "pvr_context.h"
+#include "pvr_device.h"
+#include "pvr_drv.h"
+#include "pvr_gem.h"
+#include "pvr_power.h"
+#include "pvr_rogue_fwif.h"
+#include "pvr_rogue_fwif_common.h"
+#include "pvr_rogue_fwif_resetframework.h"
+#include "pvr_stream_defs.h"
+#include "pvr_vm.h"
+
+#include <drm/drm_auth.h>
+#include <drm/drm_managed.h>
+#include <linux/errno.h>
+#include <linux/kernel.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/xarray.h>
+
+static int
+remap_priority(struct pvr_file *pvr_file, s32 uapi_priority,
+	       enum pvr_context_priority *priority_out)
+{
+	switch (uapi_priority) {
+	case DRM_PVR_CTX_PRIORITY_LOW:
+		*priority_out = PVR_CTX_PRIORITY_LOW;
+		break;
+	case DRM_PVR_CTX_PRIORITY_NORMAL:
+		*priority_out = PVR_CTX_PRIORITY_MEDIUM;
+		break;
+	case DRM_PVR_CTX_PRIORITY_HIGH:
+		if (!capable(CAP_SYS_NICE) && !drm_is_current_master(from_pvr_file(pvr_file)))
+			return -EACCES;
+		*priority_out = PVR_CTX_PRIORITY_HIGH;
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int get_fw_obj_size(enum drm_pvr_ctx_type type)
+{
+	switch (type) {
+	case DRM_PVR_CTX_TYPE_RENDER:
+		return sizeof(struct rogue_fwif_fwrendercontext);
+	case DRM_PVR_CTX_TYPE_COMPUTE:
+		return sizeof(struct rogue_fwif_fwcomputecontext);
+	case DRM_PVR_CTX_TYPE_TRANSFER_FRAG:
+		return sizeof(struct rogue_fwif_fwtransfercontext);
+	}
+
+	return -EINVAL;
+}
+
+static int
+process_static_context_state(struct pvr_device *pvr_dev, const struct pvr_stream_cmd_defs *cmd_defs,
+			     u64 stream_user_ptr, u32 stream_size, void *dest)
+{
+	void *stream;
+	int err;
+
+	stream = kzalloc(stream_size, GFP_KERNEL);
+	if (!stream)
+		return -ENOMEM;
+
+	if (copy_from_user(stream, u64_to_user_ptr(stream_user_ptr), stream_size)) {
+		err = -EFAULT;
+		goto err_free;
+	}
+
+	err = pvr_stream_process(pvr_dev, cmd_defs, stream, stream_size, dest);
+	if (err)
+		goto err_free;
+
+	kfree(stream);
+
+	return 0;
+
+err_free:
+	kfree(stream);
+
+	return err;
+}
+
+static int init_render_fw_objs(struct pvr_context *ctx,
+			       struct drm_pvr_ioctl_create_context_args *args,
+			       void *fw_ctx_map)
+{
+	struct rogue_fwif_static_rendercontext_state *static_rendercontext_state;
+	struct rogue_fwif_fwrendercontext *fw_render_context = fw_ctx_map;
+
+	if (!args->static_context_state_len)
+		return -EINVAL;
+
+	static_rendercontext_state = &fw_render_context->static_render_context_state;
+
+	/* Copy static render context state from userspace. */
+	return process_static_context_state(ctx->pvr_dev,
+					    &pvr_static_render_context_state_stream,
+					    args->static_context_state,
+					    args->static_context_state_len,
+					    &static_rendercontext_state->ctxswitch_regs[0]);
+}
+
+static int init_compute_fw_objs(struct pvr_context *ctx,
+				struct drm_pvr_ioctl_create_context_args *args,
+				void *fw_ctx_map)
+{
+	struct rogue_fwif_fwcomputecontext *fw_compute_context = fw_ctx_map;
+	struct rogue_fwif_cdm_registers_cswitch *ctxswitch_regs;
+
+	if (!args->static_context_state_len)
+		return -EINVAL;
+
+	ctxswitch_regs = &fw_compute_context->static_compute_context_state.ctxswitch_regs;
+
+	/* Copy static compute context state from userspace. */
+	return process_static_context_state(ctx->pvr_dev,
+					    &pvr_static_compute_context_state_stream,
+					    args->static_context_state,
+					    args->static_context_state_len,
+					    ctxswitch_regs);
+}
+
+static int init_transfer_fw_objs(struct pvr_context *ctx,
+				 struct drm_pvr_ioctl_create_context_args *args,
+				 void *fw_ctx_map)
+{
+	if (args->static_context_state_len)
+		return -EINVAL;
+
+	return 0;
+}
+
+static int init_fw_objs(struct pvr_context *ctx,
+			struct drm_pvr_ioctl_create_context_args *args,
+			void *fw_ctx_map)
+{
+	switch (ctx->type) {
+	case DRM_PVR_CTX_TYPE_RENDER:
+		return init_render_fw_objs(ctx, args, fw_ctx_map);
+	case DRM_PVR_CTX_TYPE_COMPUTE:
+		return init_compute_fw_objs(ctx, args, fw_ctx_map);
+	case DRM_PVR_CTX_TYPE_TRANSFER_FRAG:
+		return init_transfer_fw_objs(ctx, args, fw_ctx_map);
+	}
+
+	return -EINVAL;
+}
+
+static void
+ctx_fw_data_init(void *cpu_ptr, void *priv)
+{
+	struct pvr_context *ctx = priv;
+
+	memcpy(cpu_ptr, ctx->data, ctx->data_size);
+}
+
+/**
+ * pvr_context_create() - Create a context.
+ * @pvr_file: File to attach the created context to.
+ * @args: Context creation arguments.
+ *
+ * Return:
+ *  * 0 on success, or
+ *  * A negative error code on failure.
+ */
+int pvr_context_create(struct pvr_file *pvr_file, struct drm_pvr_ioctl_create_context_args *args)
+{
+	struct pvr_device *pvr_dev = pvr_file->pvr_dev;
+	struct pvr_context *ctx;
+	int ctx_size;
+	int err;
+
+	/* Context creation flags are currently unused and must be zero. */
+	if (args->flags)
+		return -EINVAL;
+
+	ctx_size = get_fw_obj_size(args->type);
+	if (ctx_size < 0)
+		return ctx_size;
+
+	ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
+	if (!ctx)
+		return -ENOMEM;
+
+	ctx->data_size = ctx_size;
+	ctx->type = args->type;
+	ctx->flags = args->flags;
+	ctx->pvr_dev = pvr_dev;
+	kref_init(&ctx->ref_count);
+
+	err = remap_priority(pvr_file, args->priority, &ctx->priority);
+	if (err)
+		goto err_free_ctx;
+
+	ctx->vm_ctx = pvr_vm_context_lookup(pvr_file, args->vm_context_handle);
+	if (IS_ERR(ctx->vm_ctx)) {
+		err = PTR_ERR(ctx->vm_ctx);
+		goto err_free_ctx;
+	}
+
+	ctx->data = kzalloc(ctx_size, GFP_KERNEL);
+	if (!ctx->data) {
+		err = -ENOMEM;
+		goto err_put_vm;
+	}
+
+	err = init_fw_objs(ctx, args, ctx->data);
+	if (err)
+		goto err_free_ctx_data;
+
+	err = pvr_fw_object_create(pvr_dev, ctx_size, PVR_BO_FW_FLAGS_DEVICE_UNCACHED,
+				   ctx_fw_data_init, ctx, &ctx->fw_obj);
+	if (err)
+		goto err_free_ctx_data;
+
+	err = xa_alloc(&pvr_dev->ctx_ids, &ctx->ctx_id, ctx, xa_limit_32b, GFP_KERNEL);
+	if (err)
+		goto err_destroy_fw_obj;
+
+	err = xa_alloc(&pvr_file->ctx_handles, &args->handle, ctx, xa_limit_32b, GFP_KERNEL);
+	if (err)
+		goto err_release_id;
+
+	return 0;
+
+err_release_id:
+	xa_erase(&pvr_dev->ctx_ids, ctx->ctx_id);
+
+err_destroy_fw_obj:
+	pvr_fw_object_destroy(ctx->fw_obj);
+
+err_free_ctx_data:
+	kfree(ctx->data);
+
+err_put_vm:
+	pvr_vm_context_put(ctx->vm_ctx);
+
+err_free_ctx:
+	kfree(ctx);
+	return err;
+}
+
+static void
+pvr_context_release(struct kref *ref_count)
+{
+	struct pvr_context *ctx =
+		container_of(ref_count, struct pvr_context, ref_count);
+	struct pvr_device *pvr_dev = ctx->pvr_dev;
+
+	xa_erase(&pvr_dev->ctx_ids, ctx->ctx_id);
+	pvr_fw_object_destroy(ctx->fw_obj);
+	kfree(ctx->data);
+	pvr_vm_context_put(ctx->vm_ctx);
+	kfree(ctx);
+}
+
+/**
+ * pvr_context_put() - Release reference on context
+ * @ctx: Target context.
+ */
+void
+pvr_context_put(struct pvr_context *ctx)
+{
+	if (ctx)
+		kref_put(&ctx->ref_count, pvr_context_release);
+}
+
+/**
+ * pvr_context_destroy() - Destroy context
+ * @pvr_file: Pointer to pvr_file structure.
+ * @handle: Userspace context handle.
+ *
+ * Removes context from context list and drops initial reference. Context will
+ * then be destroyed once all outstanding references are dropped.
+ *
+ * Return:
+ *  * 0 on success, or
+ *  * -%EINVAL if the context is not in the context list.
+ */
+int
+pvr_context_destroy(struct pvr_file *pvr_file, u32 handle)
+{
+	struct pvr_context *ctx = xa_erase(&pvr_file->ctx_handles, handle);
+
+	if (!ctx)
+		return -EINVAL;
+
+	/* Release the reference held by the handle set. */
+	pvr_context_put(ctx);
+
+	return 0;
+}
+
+/**
+ * pvr_destroy_contexts_for_file() - Destroy any contexts associated with the given file
+ * @pvr_file: Pointer to pvr_file structure.
+ *
+ * Removes all contexts associated with @pvr_file from the device context list and drops initial
+ * references. Contexts will then be destroyed once all outstanding references are dropped.
+ */
+void pvr_destroy_contexts_for_file(struct pvr_file *pvr_file)
+{
+	struct pvr_context *ctx;
+	unsigned long handle;
+
+	xa_for_each(&pvr_file->ctx_handles, handle, ctx)
+		pvr_context_destroy(pvr_file, handle);
+}
+
+/**
+ * pvr_context_device_init() - Device level initialization for queue related resources.
+ * @pvr_dev: The device to initialize.
+ */
+void pvr_context_device_init(struct pvr_device *pvr_dev)
+{
+	xa_init_flags(&pvr_dev->ctx_ids, XA_FLAGS_ALLOC1);
+}
+
+/**
+ * pvr_context_device_fini() - Device level cleanup for queue related resources.
+ * @pvr_dev: The device to cleanup.
+ */
+void pvr_context_device_fini(struct pvr_device *pvr_dev)
+{
+	WARN_ON(!xa_empty(&pvr_dev->ctx_ids));
+	xa_destroy(&pvr_dev->ctx_ids);
+}
diff --git a/drivers/gpu/drm/imagination/pvr_context.h b/drivers/gpu/drm/imagination/pvr_context.h
new file mode 100644
index 000000000000..fa7115380272
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_context.h
@@ -0,0 +1,161 @@
+/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#ifndef PVR_CONTEXT_H
+#define PVR_CONTEXT_H
+
+#include <drm/gpu_scheduler.h>
+
+#include <linux/compiler_attributes.h>
+#include <linux/dma-fence.h>
+#include <linux/kref.h>
+#include <linux/types.h>
+#include <linux/xarray.h>
+#include <uapi/drm/pvr_drm.h>
+
+#include "pvr_cccb.h"
+#include "pvr_device.h"
+
+/* Forward declaration from pvr_gem.h. */
+struct pvr_fw_object;
+
+enum pvr_context_priority {
+	PVR_CTX_PRIORITY_LOW = 0,
+	PVR_CTX_PRIORITY_MEDIUM,
+	PVR_CTX_PRIORITY_HIGH,
+};
+
+/**
+ * struct pvr_context - Context data
+ */
+struct pvr_context {
+	/** @ref_count: Refcount for context. */
+	struct kref ref_count;
+
+	/** @pvr_dev: Pointer to owning device. */
+	struct pvr_device *pvr_dev;
+
+	/** @vm_ctx: Pointer to associated VM context. */
+	struct pvr_vm_context *vm_ctx;
+
+	/** @type: Type of context. */
+	enum drm_pvr_ctx_type type;
+
+	/** @flags: Context flags. */
+	u32 flags;
+
+	/** @priority: Context priority. */
+	enum pvr_context_priority priority;
+
+	/** @fw_obj: FW object representing FW-side context data. */
+	struct pvr_fw_object *fw_obj;
+
+	/** @data: Pointer to local copy of FW context data. */
+	void *data;
+
+	/** @data_size: Size of FW context data, in bytes. */
+	u32 data_size;
+
+	/** @ctx_id: FW context ID. */
+	u32 ctx_id;
+};
+
+/**
+ * pvr_context_get() - Take additional reference on context.
+ * @ctx: Context pointer.
+ *
+ * Call pvr_context_put() to release.
+ *
+ * Returns:
+ *  * The requested context on success, or
+ *  * %NULL if no context pointer passed.
+ */
+static __always_inline struct pvr_context *
+pvr_context_get(struct pvr_context *ctx)
+{
+	if (ctx)
+		kref_get(&ctx->ref_count);
+
+	return ctx;
+}
+
+/**
+ * pvr_context_lookup() - Lookup context pointer from handle and file.
+ * @pvr_file: Pointer to pvr_file structure.
+ * @handle: Context handle.
+ *
+ * Takes reference on context. Call pvr_context_put() to release.
+ *
+ * Return:
+ *  * The requested context on success, or
+ *  * %NULL on failure (context does not exist, or does not belong to @pvr_file).
+ */
+static __always_inline struct pvr_context *
+pvr_context_lookup(struct pvr_file *pvr_file, u32 handle)
+{
+	struct pvr_context *ctx;
+
+	/* Take the array lock to protect against context removal.  */
+	xa_lock(&pvr_file->ctx_handles);
+	ctx = pvr_context_get(xa_load(&pvr_file->ctx_handles, handle));
+	xa_unlock(&pvr_file->ctx_handles);
+
+	return ctx;
+}
+
+/**
+ * pvr_context_lookup_id() - Lookup context pointer from ID.
+ * @pvr_dev: Device pointer.
+ * @id: FW context ID.
+ *
+ * Takes reference on context. Call pvr_context_put() to release.
+ *
+ * Return:
+ *  * The requested context on success, or
+ *  * %NULL on failure (context does not exist).
+ */
+static __always_inline struct pvr_context *
+pvr_context_lookup_id(struct pvr_device *pvr_dev, u32 id)
+{
+	struct pvr_context *ctx;
+
+	/* Take the array lock to protect against context removal.  */
+	xa_lock(&pvr_dev->ctx_ids);
+
+	/* Contexts are removed from the ctx_ids set in the context release path,
+	 * meaning the ref_count reached zero before they get removed. We need
+	 * to make sure we're not trying to acquire a context that's being
+	 * destroyed.
+	 */
+	ctx = xa_load(&pvr_dev->ctx_ids, id);
+	if (ctx && !kref_get_unless_zero(&ctx->ref_count))
+		ctx = NULL;
+
+	xa_unlock(&pvr_dev->ctx_ids);
+
+	return ctx;
+}
+
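+/**
+ * pvr_context_get_fw_addr() - Get FW virtual address of FW-side context data.
+ * @ctx: Context pointer.
+ *
+ * Returns: FW virtual address of the context's FW object.
+ */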
+static __always_inline u32
+pvr_context_get_fw_addr(struct pvr_context *ctx)
+{
+	u32 ctx_fw_addr = 0;
+
+	pvr_fw_object_get_fw_addr(ctx->fw_obj, &ctx_fw_addr);
+
+	return ctx_fw_addr;
+}
+
+void pvr_context_put(struct pvr_context *ctx);
+
+int pvr_context_create(struct pvr_file *pvr_file, struct drm_pvr_ioctl_create_context_args *args);
+
+int pvr_context_destroy(struct pvr_file *pvr_file, u32 handle);
+
+void pvr_destroy_contexts_for_file(struct pvr_file *pvr_file);
+
+void pvr_context_device_init(struct pvr_device *pvr_dev);
+
+void pvr_context_device_fini(struct pvr_device *pvr_dev);
+
+#endif /* PVR_CONTEXT_H */
diff --git a/drivers/gpu/drm/imagination/pvr_device.h b/drivers/gpu/drm/imagination/pvr_device.h
index a810b233689b..431b8d8ad579 100644
--- a/drivers/gpu/drm/imagination/pvr_device.h
+++ b/drivers/gpu/drm/imagination/pvr_device.h
@@ -7,6 +7,8 @@
 #include "pvr_ccb.h"
 #include "pvr_device_info.h"
 #include "pvr_fw.h"
+#include "pvr_rogue_fwif_stream.h"
+#include "pvr_stream.h"
 
 #include <drm/drm_device.h>
 #include <drm/drm_file.h>
@@ -146,6 +148,17 @@ struct pvr_device {
 	/** @fw_dev: Firmware related data. */
 	struct pvr_fw_device fw_dev;
 
+	/** @stream_musthave_quirks: Bit array of "must-have" quirks for stream commands. */
+	u32 stream_musthave_quirks[PVR_STREAM_TYPE_MAX][PVR_STREAM_EXTHDR_TYPE_MAX];
+
+	/**
+	 * @ctx_ids: Array of contexts belonging to this device. Array members
+	 *           are of type "struct pvr_context *".
+	 *
+	 * This array is used to allocate IDs used by the firmware.
+	 */
+	struct xarray ctx_ids;
+
 	/**
 	 * @free_list_ids: Array of free lists belonging to this device. Array members
 	 *                 are of type "struct pvr_free_list *".
@@ -249,6 +262,14 @@ struct pvr_file {
 	 */
 	struct pvr_device *pvr_dev;
 
+	/**
+	 * @ctx_handles: Array of contexts belonging to this file. Array members
+	 *               are of type "struct pvr_context *".
+	 *
+	 * This array is used to allocate handles returned to userspace.
+	 */
+	struct xarray ctx_handles;
+
 	/**
 	 * @free_list_handles: Array of free lists belonging to this file. Array
 	 * members are of type "struct pvr_free_list *".
diff --git a/drivers/gpu/drm/imagination/pvr_drv.c b/drivers/gpu/drm/imagination/pvr_drv.c
index bd601eced4f2..02c6b8542436 100644
--- a/drivers/gpu/drm/imagination/pvr_drv.c
+++ b/drivers/gpu/drm/imagination/pvr_drv.c
@@ -1,6 +1,7 @@
 // SPDX-License-Identifier: GPL-2.0 OR MIT
 /* Copyright (c) 2023 Imagination Technologies Ltd. */
 
+#include "pvr_context.h"
 #include "pvr_device.h"
 #include "pvr_drv.h"
 #include "pvr_free_list.h"
@@ -687,7 +688,19 @@ static int
 pvr_ioctl_create_context(struct drm_device *drm_dev, void *raw_args,
 			 struct drm_file *file)
 {
-	return -ENOTTY;
+	struct drm_pvr_ioctl_create_context_args *args = raw_args;
+	struct pvr_file *pvr_file = file->driver_priv;
+	int idx;
+	int ret;
+
+	if (!drm_dev_enter(drm_dev, &idx))
+		return -EIO;
+
+	ret = pvr_context_create(pvr_file, args);
+
+	drm_dev_exit(idx);
+
+	return ret;
 }
 
 /**
@@ -707,7 +720,13 @@ static int
 pvr_ioctl_destroy_context(struct drm_device *drm_dev, void *raw_args,
 			  struct drm_file *file)
 {
-	return -ENOTTY;
+	struct drm_pvr_ioctl_destroy_context_args *args = raw_args;
+	struct pvr_file *pvr_file = file->driver_priv;
+
+	if (args->_padding_4)
+		return -EINVAL;
+
+	return pvr_context_destroy(pvr_file, args->handle);
 }
 
 /**
@@ -1303,6 +1322,7 @@ pvr_drm_driver_open(struct drm_device *drm_dev, struct drm_file *file)
 	 */
 	pvr_file->pvr_dev = pvr_dev;
 
+	xa_init_flags(&pvr_file->ctx_handles, XA_FLAGS_ALLOC1);
 	xa_init_flags(&pvr_file->free_list_handles, XA_FLAGS_ALLOC1);
 	xa_init_flags(&pvr_file->hwrt_handles, XA_FLAGS_ALLOC1);
 	xa_init_flags(&pvr_file->vm_ctx_handles, XA_FLAGS_ALLOC1);
@@ -1332,6 +1352,9 @@ pvr_drm_driver_postclose(__always_unused struct drm_device *drm_dev,
 {
 	struct pvr_file *pvr_file = to_pvr_file(file);
 
+	/* Kill remaining contexts. */
+	pvr_destroy_contexts_for_file(pvr_file);
+
 	/* Drop references on any remaining objects. */
 	pvr_destroy_free_lists_for_file(pvr_file);
 	pvr_destroy_hwrt_datasets_for_file(pvr_file);
@@ -1377,6 +1400,7 @@ pvr_probe(struct platform_device *plat_dev)
 	drm_dev = &pvr_dev->base;
 
 	platform_set_drvdata(plat_dev, drm_dev);
+	pvr_context_device_init(pvr_dev);
 
 	devm_pm_runtime_enable(&plat_dev->dev);
 	pm_runtime_mark_last_busy(&plat_dev->dev);
@@ -1420,6 +1444,7 @@ pvr_remove(struct platform_device *plat_dev)
 	drm_dev_unplug(drm_dev);
 	pvr_device_fini(pvr_dev);
 	pvr_watchdog_fini(pvr_dev);
+	pvr_context_device_fini(pvr_dev);
 
 	return 0;
 }
diff --git a/drivers/gpu/drm/imagination/pvr_stream.c b/drivers/gpu/drm/imagination/pvr_stream.c
new file mode 100644
index 000000000000..7603db7e79a7
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_stream.c
@@ -0,0 +1,285 @@
+// SPDX-License-Identifier: GPL-2.0 OR MIT
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#include "pvr_device.h"
+#include "pvr_rogue_fwif_stream.h"
+#include "pvr_stream.h"
+
+#include <linux/align.h>
+#include <linux/slab.h>
+#include <linux/types.h>
+#include <uapi/drm/pvr_drm.h>
+
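+/*
+ * A stream entry applies when the device has the named feature or, if
+ * PVR_FEATURE_NOT is set, when it lacks that feature. PVR_FEATURE_NONE
+ * entries always apply.
+ */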
+static __always_inline bool
+stream_def_is_supported(struct pvr_device *pvr_dev, const struct pvr_stream_def *stream_def)
+{
+	if (stream_def->feature == PVR_FEATURE_NONE)
+		return true;
+
+	if (!(stream_def->feature & PVR_FEATURE_NOT) &&
+	    pvr_device_has_feature(pvr_dev, stream_def->feature)) {
+		return true;
+	}
+
+	if ((stream_def->feature & PVR_FEATURE_NOT) &&
+	    !pvr_device_has_feature(pvr_dev, stream_def->feature & ~PVR_FEATURE_NOT)) {
+		return true;
+	}
+
+	return false;
+}
+
+static int
+pvr_stream_get_data(u8 *stream, u32 *stream_offset, u32 stream_size, u32 data_size, u32 align_size,
+		    void *dest)
+{
+	*stream_offset = ALIGN(*stream_offset, align_size);
+
+	if ((*stream_offset + data_size) > stream_size)
+		return -EINVAL;
+
+	memcpy(dest, stream + *stream_offset, data_size);
+
+	(*stream_offset) += data_size;
+
+	return 0;
+}
+
+/**
+ * pvr_stream_process_1() - Process a single stream and fill destination structure
+ * @pvr_dev: Device pointer.
+ * @stream_def: Stream definition.
+ * @nr_entries: Number of entries in @stream_def.
+ * @stream: Pointer to stream.
+ * @stream_offset: Starting offset within stream.
+ * @stream_size: Size of input stream, in bytes.
+ * @dest: Pointer to destination structure.
+ * @dest_size: Size of destination structure.
+ * @stream_offset_out: Pointer to variable to write updated stream offset to. May be NULL.
+ *
+ * Returns:
+ *  * 0 on success, or
+ *  * -%EINVAL on malformed stream.
+ */
+static int
+pvr_stream_process_1(struct pvr_device *pvr_dev, const struct pvr_stream_def *stream_def,
+		     u32 nr_entries, u8 *stream, u32 stream_offset, u32 stream_size,
+		     u8 *dest, u32 dest_size, u32 *stream_offset_out)
+{
+	int err = 0;
+	u32 i;
+
+	for (i = 0; i < nr_entries; i++) {
+		if (stream_def[i].offset >= dest_size) {
+			err = -EINVAL;
+			break;
+		}
+
+		if (!stream_def_is_supported(pvr_dev, &stream_def[i]))
+			continue;
+
+		switch (stream_def[i].size) {
+		case PVR_STREAM_SIZE_8:
+			err = pvr_stream_get_data(stream, &stream_offset, stream_size, sizeof(u8),
+						  sizeof(u8), dest + stream_def[i].offset);
+			if (err)
+				return err;
+			break;
+
+		case PVR_STREAM_SIZE_16:
+			err = pvr_stream_get_data(stream, &stream_offset, stream_size, sizeof(u16),
+						  sizeof(u16), dest + stream_def[i].offset);
+			if (err)
+				return err;
+			break;
+
+		case PVR_STREAM_SIZE_32:
+			err = pvr_stream_get_data(stream, &stream_offset, stream_size, sizeof(u32),
+						  sizeof(u32), dest + stream_def[i].offset);
+			if (err)
+				return err;
+			break;
+
+		case PVR_STREAM_SIZE_64:
+			err = pvr_stream_get_data(stream, &stream_offset, stream_size, sizeof(u64),
+						  sizeof(u64), dest + stream_def[i].offset);
+			if (err)
+				return err;
+			break;
+
+		case PVR_STREAM_SIZE_ARRAY:
+			err = pvr_stream_get_data(stream, &stream_offset, stream_size,
+						  stream_def[i].array_size, sizeof(u64),
+						  dest + stream_def[i].offset);
+			if (err)
+				return err;
+			break;
+		}
+	}
+
+	if (stream_offset_out)
+		*stream_offset_out = stream_offset;
+
+	return 0;
+}
+
+static int
+pvr_stream_process_ext_stream(struct pvr_device *pvr_dev,
+			      const struct pvr_stream_cmd_defs *cmd_defs, void *ext_stream,
+			      u32 stream_offset, u32 ext_stream_size, void *dest)
+{
+	u32 musthave_masks[PVR_STREAM_EXTHDR_TYPE_MAX];
+	u32 ext_header;
+	int err = 0;
+	u32 i;
+
+	/* Copy "must have" mask from device. We clear this as we process the stream. */
+	memcpy(musthave_masks, pvr_dev->stream_musthave_quirks[cmd_defs->type],
+	       sizeof(musthave_masks));
+
+	do {
+		const struct pvr_stream_ext_header *header;
+		u32 type;
+		u32 data;
+
+		err = pvr_stream_get_data(ext_stream, &stream_offset, ext_stream_size, sizeof(u32),
+					  sizeof(ext_header), &ext_header);
+		if (err)
+			return err;
+
+		type = (ext_header & PVR_STREAM_EXTHDR_TYPE_MASK) >> PVR_STREAM_EXTHDR_TYPE_SHIFT;
+		data = ext_header & PVR_STREAM_EXTHDR_DATA_MASK;
+
+		if (type >= cmd_defs->ext_nr_headers)
+			return -EINVAL;
+
+		header = &cmd_defs->ext_headers[type];
+		if (data & ~header->valid_mask)
+			return -EINVAL;
+
+		musthave_masks[type] &= ~data;
+
+		for (i = 0; i < header->ext_streams_num; i++) {
+			const struct pvr_stream_ext_def *ext_def = &header->ext_streams[i];
+
+			if (!(ext_header & ext_def->header_mask))
+				continue;
+
+			if (!pvr_device_has_uapi_quirk(pvr_dev, ext_def->quirk))
+				return -EINVAL;
+
+			err = pvr_stream_process_1(pvr_dev, ext_def->stream, ext_def->stream_len,
+						   ext_stream, stream_offset,
+						   ext_stream_size, dest,
+						   cmd_defs->dest_size, &stream_offset);
+			if (err)
+				return err;
+		}
+	} while (ext_header & PVR_STREAM_EXTHDR_CONTINUATION);
+
+	/*
+	 * Verify that "must have" mask is now zero. If it isn't then one of the "must have" quirks
+	 * for this command was not present.
+	 */
+	for (i = 0; i < cmd_defs->ext_nr_headers; i++) {
+		if (musthave_masks[i])
+			return -EINVAL;
+	}
+
+	return 0;
+}
+
+/**
+ * pvr_stream_process() - Build FW structure from stream
+ * @pvr_dev: Device pointer.
+ * @cmd_defs: Stream definition.
+ * @stream: Pointer to command stream.
+ * @stream_size: Size of command stream, in bytes.
+ * @dest_out: Pointer to destination buffer.
+ *
+ * Caller is responsible for freeing the output structure.
+ *
+ * Returns:
+ *  * 0 on success,
+ *  * -%ENOMEM on out of memory, or
+ *  * -%EINVAL on malformed stream.
+ */
+int
+pvr_stream_process(struct pvr_device *pvr_dev, const struct pvr_stream_cmd_defs *cmd_defs,
+		   void *stream, u32 stream_size, void *dest_out)
+{
+	u32 stream_offset = 0;
+	u32 main_stream_len;
+	u32 padding;
+	int err;
+
+	if (!stream || !stream_size)
+		return -EINVAL;
+
+	err = pvr_stream_get_data(stream, &stream_offset, stream_size, sizeof(u32),
+				  sizeof(u32), &main_stream_len);
+	if (err)
+		return err;
+
+	/*
+	 * u32 after stream length is padding to ensure u64 alignment, but may be used for expansion
+	 * in the future. Verify it's zero.
+	 */
+	err = pvr_stream_get_data(stream, &stream_offset, stream_size, sizeof(u32),
+				  sizeof(u32), &padding);
+	if (err)
+		return err;
+
+	if (main_stream_len < stream_offset || main_stream_len > stream_size || padding)
+		return -EINVAL;
+
+	err = pvr_stream_process_1(pvr_dev, cmd_defs->main_stream, cmd_defs->main_stream_len,
+				   stream, stream_offset, main_stream_len, dest_out,
+				   cmd_defs->dest_size, &stream_offset);
+	if (err)
+		return err;
+
+	if (stream_offset < stream_size) {
+		err = pvr_stream_process_ext_stream(pvr_dev, cmd_defs, stream, stream_offset,
+						    stream_size, dest_out);
+		if (err)
+			return err;
+	} else {
+		u32 i;
+
+		/*
+		 * If we don't have an extension stream then there must not be any "must have"
+		 * quirks for this command.
+		 */
+		for (i = 0; i < cmd_defs->ext_nr_headers; i++) {
+			if (pvr_dev->stream_musthave_quirks[cmd_defs->type][i])
+				return -EINVAL;
+		}
+	}
+
+	return 0;
+}
+
+/**
+ * pvr_stream_create_musthave_masks() - Create "must have" masks for streams based on current device
+ *                                      quirks
+ * @pvr_dev: Device pointer.
+ */
+void
+pvr_stream_create_musthave_masks(struct pvr_device *pvr_dev)
+{
+	memset(pvr_dev->stream_musthave_quirks, 0, sizeof(pvr_dev->stream_musthave_quirks));
+
+	if (pvr_device_has_uapi_quirk(pvr_dev, 47217))
+		pvr_dev->stream_musthave_quirks[PVR_STREAM_TYPE_FRAG][0] |=
+			PVR_STREAM_EXTHDR_FRAG0_BRN47217;
+
+	if (pvr_device_has_uapi_quirk(pvr_dev, 49927)) {
+		pvr_dev->stream_musthave_quirks[PVR_STREAM_TYPE_GEOM][0] |=
+			PVR_STREAM_EXTHDR_GEOM0_BRN49927;
+		pvr_dev->stream_musthave_quirks[PVR_STREAM_TYPE_FRAG][0] |=
+			PVR_STREAM_EXTHDR_FRAG0_BRN49927;
+		pvr_dev->stream_musthave_quirks[PVR_STREAM_TYPE_COMPUTE][0] |=
+			PVR_STREAM_EXTHDR_COMPUTE0_BRN49927;
+	}
+}
diff --git a/drivers/gpu/drm/imagination/pvr_stream.h b/drivers/gpu/drm/imagination/pvr_stream.h
new file mode 100644
index 000000000000..ecc5edfb7bf4
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_stream.h
@@ -0,0 +1,75 @@
+/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#ifndef PVR_STREAM_H
+#define PVR_STREAM_H
+
+#include <linux/bits.h>
+#include <linux/limits.h>
+#include <linux/types.h>
+
+struct pvr_device;
+
+struct pvr_job;
+
+enum pvr_stream_type {
+	PVR_STREAM_TYPE_GEOM = 0,
+	PVR_STREAM_TYPE_FRAG,
+	PVR_STREAM_TYPE_COMPUTE,
+	PVR_STREAM_TYPE_TRANSFER,
+	PVR_STREAM_TYPE_STATIC_RENDER_CONTEXT,
+	PVR_STREAM_TYPE_STATIC_COMPUTE_CONTEXT,
+
+	PVR_STREAM_TYPE_MAX
+};
+
+enum pvr_stream_size {
+	PVR_STREAM_SIZE_8 = 0,
+	PVR_STREAM_SIZE_16,
+	PVR_STREAM_SIZE_32,
+	PVR_STREAM_SIZE_64,
+	PVR_STREAM_SIZE_ARRAY,
+};
+
+#define PVR_FEATURE_NOT  BIT(31)
+#define PVR_FEATURE_NONE U32_MAX
+
+struct pvr_stream_def {
+	u32 offset;
+	enum pvr_stream_size size;
+	u32 array_size;
+	u32 feature;
+};
+
+struct pvr_stream_ext_def {
+	const struct pvr_stream_def *stream;
+	u32 stream_len;
+	u32 header_mask;
+	u32 quirk;
+};
+
+struct pvr_stream_ext_header {
+	const struct pvr_stream_ext_def *ext_streams;
+	u32 ext_streams_num;
+	u32 valid_mask;
+};
+
+struct pvr_stream_cmd_defs {
+	enum pvr_stream_type type;
+
+	const struct pvr_stream_def *main_stream;
+	u32 main_stream_len;
+
+	u32 ext_nr_headers;
+	const struct pvr_stream_ext_header *ext_headers;
+
+	size_t dest_size;
+};
+
+int
+pvr_stream_process(struct pvr_device *pvr_dev, const struct pvr_stream_cmd_defs *cmd_defs,
+		   void *stream, u32 stream_size, void *dest_out);
+void
+pvr_stream_create_musthave_masks(struct pvr_device *pvr_dev);
+
+#endif /* PVR_STREAM_H */
diff --git a/drivers/gpu/drm/imagination/pvr_stream_defs.c b/drivers/gpu/drm/imagination/pvr_stream_defs.c
new file mode 100644
index 000000000000..3c646e25accf
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_stream_defs.c
@@ -0,0 +1,125 @@
+// SPDX-License-Identifier: GPL-2.0 OR MIT
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#include "pvr_device_info.h"
+#include "pvr_rogue_fwif_client.h"
+#include "pvr_rogue_fwif_stream.h"
+#include "pvr_stream.h"
+#include "pvr_stream_defs.h"
+
+#include <linux/stddef.h>
+#include <uapi/drm/pvr_drm.h>
+
+#define PVR_STREAM_DEF_SET(owner, member, _size, _array_size, _feature) \
+	{ .offset = offsetof(struct owner, member), \
+	  .size = (_size),  \
+	  .array_size = (_array_size), \
+	  .feature = (_feature) }
+
+#define PVR_STREAM_DEF(owner, member, member_size)  \
+	PVR_STREAM_DEF_SET(owner, member, PVR_STREAM_SIZE_ ## member_size, 0, PVR_FEATURE_NONE)
+
+#define PVR_STREAM_DEF_FEATURE(owner, member, member_size, feature) \
+	PVR_STREAM_DEF_SET(owner, member, PVR_STREAM_SIZE_ ## member_size, 0, feature)
+
+#define PVR_STREAM_DEF_NOT_FEATURE(owner, member, member_size, feature)       \
+	PVR_STREAM_DEF_SET(owner, member, PVR_STREAM_SIZE_ ## member_size, 0, \
+			   (feature) | PVR_FEATURE_NOT)
+
+#define PVR_STREAM_DEF_ARRAY(owner, member)                                       \
+	PVR_STREAM_DEF_SET(owner, member, PVR_STREAM_SIZE_ARRAY,                  \
+			   sizeof(((struct owner *)0)->member), PVR_FEATURE_NONE)
+
+#define PVR_STREAM_DEF_ARRAY_FEATURE(owner, member, feature)            \
+	PVR_STREAM_DEF_SET(owner, member, PVR_STREAM_SIZE_ARRAY,         \
+			   sizeof(((struct owner *)0)->member), feature)
+
+#define PVR_STREAM_DEF_ARRAY_NOT_FEATURE(owner, member, feature)                             \
+	PVR_STREAM_DEF_SET(owner, member, PVR_STREAM_SIZE_ARRAY,                             \
+			   sizeof(((struct owner *)0)->member), (feature) | PVR_FEATURE_NOT)
+
+/*
+ * When adding new parameters to the stream definition, the new parameters must go after the
+ * existing parameters, to preserve order. As parameters are naturally aligned, care must be taken
+ * with respect to implicit padding in the stream; padding should be minimised as much as possible.
+ */
+static const struct pvr_stream_def rogue_fwif_static_render_context_state_stream[] = {
+	PVR_STREAM_DEF(rogue_fwif_geom_registers_caswitch,
+		       geom_reg_vdm_context_state_base_addr, 64),
+	PVR_STREAM_DEF(rogue_fwif_geom_registers_caswitch,
+		       geom_reg_vdm_context_state_resume_addr, 64),
+	PVR_STREAM_DEF(rogue_fwif_geom_registers_caswitch,
+		       geom_reg_ta_context_state_base_addr, 64),
+	PVR_STREAM_DEF(rogue_fwif_geom_registers_caswitch,
+		       geom_state[0].geom_reg_vdm_context_store_task0, 64),
+	PVR_STREAM_DEF(rogue_fwif_geom_registers_caswitch,
+		       geom_state[0].geom_reg_vdm_context_store_task1, 64),
+	PVR_STREAM_DEF(rogue_fwif_geom_registers_caswitch,
+		       geom_state[0].geom_reg_vdm_context_store_task2, 64),
+	PVR_STREAM_DEF(rogue_fwif_geom_registers_caswitch,
+		       geom_state[0].geom_reg_vdm_context_store_task3, 64),
+	PVR_STREAM_DEF(rogue_fwif_geom_registers_caswitch,
+		       geom_state[0].geom_reg_vdm_context_store_task4, 64),
+	PVR_STREAM_DEF(rogue_fwif_geom_registers_caswitch,
+		       geom_state[0].geom_reg_vdm_context_resume_task0, 64),
+	PVR_STREAM_DEF(rogue_fwif_geom_registers_caswitch,
+		       geom_state[0].geom_reg_vdm_context_resume_task1, 64),
+	PVR_STREAM_DEF(rogue_fwif_geom_registers_caswitch,
+		       geom_state[0].geom_reg_vdm_context_resume_task2, 64),
+	PVR_STREAM_DEF(rogue_fwif_geom_registers_caswitch,
+		       geom_state[0].geom_reg_vdm_context_resume_task3, 64),
+	PVR_STREAM_DEF(rogue_fwif_geom_registers_caswitch,
+		       geom_state[0].geom_reg_vdm_context_resume_task4, 64),
+	PVR_STREAM_DEF(rogue_fwif_geom_registers_caswitch,
+		       geom_state[1].geom_reg_vdm_context_store_task0, 64),
+	PVR_STREAM_DEF(rogue_fwif_geom_registers_caswitch,
+		       geom_state[1].geom_reg_vdm_context_store_task1, 64),
+	PVR_STREAM_DEF(rogue_fwif_geom_registers_caswitch,
+		       geom_state[1].geom_reg_vdm_context_store_task2, 64),
+	PVR_STREAM_DEF(rogue_fwif_geom_registers_caswitch,
+		       geom_state[1].geom_reg_vdm_context_store_task3, 64),
+	PVR_STREAM_DEF(rogue_fwif_geom_registers_caswitch,
+		       geom_state[1].geom_reg_vdm_context_store_task4, 64),
+	PVR_STREAM_DEF(rogue_fwif_geom_registers_caswitch,
+		       geom_state[1].geom_reg_vdm_context_resume_task0, 64),
+	PVR_STREAM_DEF(rogue_fwif_geom_registers_caswitch,
+		       geom_state[1].geom_reg_vdm_context_resume_task1, 64),
+	PVR_STREAM_DEF(rogue_fwif_geom_registers_caswitch,
+		       geom_state[1].geom_reg_vdm_context_resume_task2, 64),
+	PVR_STREAM_DEF(rogue_fwif_geom_registers_caswitch,
+		       geom_state[1].geom_reg_vdm_context_resume_task3, 64),
+	PVR_STREAM_DEF(rogue_fwif_geom_registers_caswitch,
+		       geom_state[1].geom_reg_vdm_context_resume_task4, 64),
+};
+
+const struct pvr_stream_cmd_defs pvr_static_render_context_state_stream = {
+	.type = PVR_STREAM_TYPE_STATIC_RENDER_CONTEXT,
+
+	.main_stream = rogue_fwif_static_render_context_state_stream,
+	.main_stream_len = ARRAY_SIZE(rogue_fwif_static_render_context_state_stream),
+
+	.ext_nr_headers = 0,
+
+	.dest_size = sizeof(struct rogue_fwif_geom_registers_caswitch),
+};
+
+static const struct pvr_stream_def rogue_fwif_static_compute_context_state_stream[] = {
+	PVR_STREAM_DEF(rogue_fwif_cdm_registers_cswitch, cdmreg_cdm_context_pds0, 64),
+	PVR_STREAM_DEF(rogue_fwif_cdm_registers_cswitch, cdmreg_cdm_context_pds1, 64),
+	PVR_STREAM_DEF(rogue_fwif_cdm_registers_cswitch, cdmreg_cdm_terminate_pds, 64),
+	PVR_STREAM_DEF(rogue_fwif_cdm_registers_cswitch, cdmreg_cdm_terminate_pds1, 64),
+	PVR_STREAM_DEF(rogue_fwif_cdm_registers_cswitch, cdmreg_cdm_resume_pds0, 64),
+	PVR_STREAM_DEF(rogue_fwif_cdm_registers_cswitch, cdmreg_cdm_context_pds0_b, 64),
+	PVR_STREAM_DEF(rogue_fwif_cdm_registers_cswitch, cdmreg_cdm_resume_pds0_b, 64),
+};
+
+const struct pvr_stream_cmd_defs pvr_static_compute_context_state_stream = {
+	.type = PVR_STREAM_TYPE_STATIC_COMPUTE_CONTEXT,
+
+	.main_stream = rogue_fwif_static_compute_context_state_stream,
+	.main_stream_len = ARRAY_SIZE(rogue_fwif_static_compute_context_state_stream),
+
+	.ext_nr_headers = 0,
+
+	.dest_size = sizeof(struct rogue_fwif_cdm_registers_cswitch),
+};
diff --git a/drivers/gpu/drm/imagination/pvr_stream_defs.h b/drivers/gpu/drm/imagination/pvr_stream_defs.h
new file mode 100644
index 000000000000..7e0ecfa12030
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_stream_defs.h
@@ -0,0 +1,16 @@
+/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#ifndef PVR_STREAM_DEFS_H
+#define PVR_STREAM_DEFS_H
+
+#include "pvr_stream.h"
+
+extern const struct pvr_stream_cmd_defs pvr_cmd_geom_stream;
+extern const struct pvr_stream_cmd_defs pvr_cmd_frag_stream;
+extern const struct pvr_stream_cmd_defs pvr_cmd_compute_stream;
+extern const struct pvr_stream_cmd_defs pvr_cmd_transfer_stream;
+extern const struct pvr_stream_cmd_defs pvr_static_render_context_state_stream;
+extern const struct pvr_stream_cmd_defs pvr_static_compute_context_state_stream;
+
+#endif /* PVR_STREAM_DEFS_H */
-- 
2.41.0


^ permalink raw reply related	[flat|nested] 77+ messages in thread

* [PATCH v5 13/17] drm/imagination: Implement context creation/destruction ioctls
@ 2023-08-16  8:25   ` Sarah Walker
  0 siblings, 0 replies; 77+ messages in thread
From: Sarah Walker @ 2023-08-16  8:25 UTC (permalink / raw)
  To: dri-devel
  Cc: frank.binns, donald.robson, boris.brezillon, faith.ekstrand,
	airlied, daniel, maarten.lankhorst, mripard, tzimmermann, afd,
	hns, matthew.brost, christian.koenig, luben.tuikov, dakr,
	linux-kernel

Implement ioctls for the creation and destruction of contexts. Contexts are
used for job submission and each is associated with a particular job type.
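
For illustration only (not part of this patch), userspace is expected to drive
the new ioctls roughly as sketched below. The DRM_IOCTL_PVR_CREATE_CONTEXT
macro name and the <drm/pvr_drm.h> install path are assumed from the uapi
header added earlier in this series; error handling is minimal:

  #include <stdint.h>
  #include <errno.h>
  #include <sys/ioctl.h>
  #include <drm/pvr_drm.h>

  static int create_render_ctx(int fd, uint32_t vm_ctx_handle,
                               const void *ctx_state, uint32_t ctx_state_len,
                               uint32_t *handle_out)
  {
          struct drm_pvr_ioctl_create_context_args args = {
                  .type = DRM_PVR_CTX_TYPE_RENDER,
                  .priority = DRM_PVR_CTX_PRIORITY_NORMAL,
                  .vm_context_handle = vm_ctx_handle,
                  .static_context_state = (uint64_t)(uintptr_t)ctx_state,
                  .static_context_state_len = ctx_state_len,
          };

          if (ioctl(fd, DRM_IOCTL_PVR_CREATE_CONTEXT, &args))
                  return -errno;

          *handle_out = args.handle; /* handle is written back by the kernel */
          return 0;
  }

The context is later released with DRM_IOCTL_PVR_DESTROY_CONTEXT, passing the
same handle in struct drm_pvr_ioctl_destroy_context_args.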

Changes since v3:
- Use drm_dev_{enter,exit}

Co-developed-by: Boris Brezillon <boris.brezillon@collabora.com>
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Signed-off-by: Sarah Walker <sarah.walker@imgtec.com>
---
 drivers/gpu/drm/imagination/Makefile          |   4 +
 drivers/gpu/drm/imagination/pvr_cccb.c        | 267 ++++++++++++++
 drivers/gpu/drm/imagination/pvr_cccb.h        | 109 ++++++
 drivers/gpu/drm/imagination/pvr_context.c     | 337 ++++++++++++++++++
 drivers/gpu/drm/imagination/pvr_context.h     | 161 +++++++++
 drivers/gpu/drm/imagination/pvr_device.h      |  21 ++
 drivers/gpu/drm/imagination/pvr_drv.c         |  29 +-
 drivers/gpu/drm/imagination/pvr_stream.c      | 285 +++++++++++++++
 drivers/gpu/drm/imagination/pvr_stream.h      |  75 ++++
 drivers/gpu/drm/imagination/pvr_stream_defs.c | 125 +++++++
 drivers/gpu/drm/imagination/pvr_stream_defs.h |  16 +
 11 files changed, 1427 insertions(+), 2 deletions(-)
 create mode 100644 drivers/gpu/drm/imagination/pvr_cccb.c
 create mode 100644 drivers/gpu/drm/imagination/pvr_cccb.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_context.c
 create mode 100644 drivers/gpu/drm/imagination/pvr_context.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_stream.c
 create mode 100644 drivers/gpu/drm/imagination/pvr_stream.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_stream_defs.c
 create mode 100644 drivers/gpu/drm/imagination/pvr_stream_defs.h

diff --git a/drivers/gpu/drm/imagination/Makefile b/drivers/gpu/drm/imagination/Makefile
index fca2ee2efbac..0c8ab120f277 100644
--- a/drivers/gpu/drm/imagination/Makefile
+++ b/drivers/gpu/drm/imagination/Makefile
@@ -5,6 +5,8 @@ subdir-ccflags-y := -I$(srctree)/$(src)
 
 powervr-y := \
 	pvr_ccb.o \
+	pvr_cccb.o \
+	pvr_context.o \
 	pvr_device.o \
 	pvr_device_info.o \
 	pvr_drv.o \
@@ -18,6 +20,8 @@ powervr-y := \
 	pvr_hwrt.o \
 	pvr_mmu.o \
 	pvr_power.o \
+	pvr_stream.o \
+	pvr_stream_defs.o \
 	pvr_vm.o \
 	pvr_vm_mips.o
 
diff --git a/drivers/gpu/drm/imagination/pvr_cccb.c b/drivers/gpu/drm/imagination/pvr_cccb.c
new file mode 100644
index 000000000000..8dfc157c3c10
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_cccb.c
@@ -0,0 +1,267 @@
+// SPDX-License-Identifier: GPL-2.0 OR MIT
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#include "pvr_ccb.h"
+#include "pvr_cccb.h"
+#include "pvr_device.h"
+#include "pvr_gem.h"
+#include "pvr_hwrt.h"
+
+#include <linux/compiler.h>
+#include <linux/delay.h>
+#include <linux/jiffies.h>
+#include <linux/mutex.h>
+#include <linux/types.h>
+
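+/*
+ * Return the space available in a circular buffer of power-of-two size
+ * @ccb_size, given write offset @w_off and read offset @r_off. One byte is
+ * always kept free so a full buffer can be distinguished from an empty one.
+ */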
+static __always_inline u32
+get_ccb_space(u32 w_off, u32 r_off, u32 ccb_size)
+{
+	return (((r_off) - (w_off)) + ((ccb_size) - 1)) & ((ccb_size) - 1);
+}
+
+static void
+cccb_ctrl_init(void *cpu_ptr, void *priv)
+{
+	struct rogue_fwif_cccb_ctl *ctrl = cpu_ptr;
+	struct pvr_cccb *pvr_cccb = priv;
+
+	WRITE_ONCE(ctrl->write_offset, 0);
+	WRITE_ONCE(ctrl->read_offset, 0);
+	WRITE_ONCE(ctrl->dep_offset, 0);
+	WRITE_ONCE(ctrl->wrap_mask, pvr_cccb->wrap_mask);
+}
+
+/**
+ * pvr_cccb_init() - Initialise a Client CCB
+ * @pvr_dev: Device pointer.
+ * @pvr_cccb: Pointer to Client CCB structure to initialise.
+ * @size_log2: Log2 size of Client CCB in bytes.
+ * @name: Name of owner of Client CCB. Used for fence context.
+ *
+ * Return:
+ *  * Zero on success, or
+ *  * Any error code returned by pvr_fw_object_create_and_map().
+ */
+int
+pvr_cccb_init(struct pvr_device *pvr_dev, struct pvr_cccb *pvr_cccb,
+	      u32 size_log2, const char *name)
+{
+	size_t size = 1 << size_log2;
+	int err;
+
+	pvr_cccb->size = size;
+	pvr_cccb->write_offset = 0;
+	pvr_cccb->wrap_mask = size - 1;
+
+	/*
+	 * Map CCCB and control structure as uncached, so we don't have to flush
+	 * CPU cache repeatedly when polling for space.
+	 */
+	pvr_cccb->ctrl = pvr_fw_object_create_and_map(pvr_dev, sizeof(*pvr_cccb->ctrl),
+						      PVR_BO_FW_FLAGS_DEVICE_UNCACHED,
+						      cccb_ctrl_init, pvr_cccb,
+						      &pvr_cccb->ctrl_obj);
+	if (IS_ERR(pvr_cccb->ctrl))
+		return PTR_ERR(pvr_cccb->ctrl);
+
+	pvr_cccb->cccb = pvr_fw_object_create_and_map(pvr_dev, size,
+						      PVR_BO_FW_FLAGS_DEVICE_UNCACHED,
+						      NULL, NULL, &pvr_cccb->cccb_obj);
+	if (IS_ERR(pvr_cccb->cccb)) {
+		err = PTR_ERR(pvr_cccb->cccb);
+		goto err_free_ctrl;
+	}
+
+	pvr_fw_object_get_fw_addr(pvr_cccb->ctrl_obj, &pvr_cccb->ctrl_fw_addr);
+	pvr_fw_object_get_fw_addr(pvr_cccb->cccb_obj, &pvr_cccb->cccb_fw_addr);
+
+	return 0;
+
+err_free_ctrl:
+	pvr_fw_object_unmap_and_destroy(pvr_cccb->ctrl_obj);
+
+	return err;
+}
+
+/**
+ * pvr_cccb_fini() - Release Client CCB structure
+ * @pvr_cccb: Client CCB to release.
+ */
+void
+pvr_cccb_fini(struct pvr_cccb *pvr_cccb)
+{
+	pvr_fw_object_unmap_and_destroy(pvr_cccb->cccb_obj);
+	pvr_fw_object_unmap_and_destroy(pvr_cccb->ctrl_obj);
+}
+
+/**
+ * pvr_cccb_cmdseq_fits() - Check if a command sequence fits in the CCCB
+ * @pvr_cccb: Target Client CCB.
+ * @size: Size of the command sequence.
+ *
+ * Check if a command sequence fits in the CCCB we have at hand.
+ *
+ * Return:
+ *  * true if the command sequence fits in the CCCB, or
+ *  * false otherwise.
+ */
+bool pvr_cccb_cmdseq_fits(struct pvr_cccb *pvr_cccb, size_t size)
+{
+	struct rogue_fwif_cccb_ctl *ctrl = pvr_cccb->ctrl;
+	u32 read_offset, remaining;
+	bool fits = false;
+
+	read_offset = READ_ONCE(ctrl->read_offset);
+	remaining = pvr_cccb->size - pvr_cccb->write_offset;
+
+	/* Always ensure we have enough room for a padding command at the end of the CCCB.
+	 * If our command sequence does not fit, reserve the remaining space for a padding
+	 * command.
+	 */
+	if (size + PADDING_COMMAND_SIZE > remaining)
+		size += remaining;
+
+	if (get_ccb_space(pvr_cccb->write_offset, read_offset, pvr_cccb->size) >= size)
+		fits = true;
+
+	return fits;
+}
+
+/**
+ * pvr_cccb_write_command_with_header() - Write a command + command header to a
+ *                                        Client CCB
+ * @pvr_cccb: Target Client CCB.
+ * @cmd_type: Client CCB command type. Must be one of %ROGUE_FWIF_CCB_CMD_TYPE_*.
+ * @cmd_size: Size of command in bytes.
+ * @cmd_data: Pointer to command to write.
+ * @ext_job_ref: External job reference.
+ * @int_job_ref: Internal job reference.
+ *
+ * The caller must make sure there's enough space in the CCCB to queue this
+ * command. This can be done by calling pvr_cccb_cmdseq_fits().
+ *
+ * This function is not protected by any lock. Callers must ensure calls are
+ * serialized; this is guaranteed by the drm_sched model (job submission is
+ * serialized in drm_sched_main()).
+ */
+void
+pvr_cccb_write_command_with_header(struct pvr_cccb *pvr_cccb, u32 cmd_type, u32 cmd_size,
+				   void *cmd_data, u32 ext_job_ref, u32 int_job_ref)
+{
+	u32 sz_with_hdr = pvr_cccb_get_size_of_cmd_with_hdr(cmd_size);
+	struct rogue_fwif_ccb_cmd_header cmd_header = {
+		.cmd_type = cmd_type,
+		.cmd_size = ALIGN(cmd_size, 8),
+		.ext_job_ref = ext_job_ref,
+		.int_job_ref = int_job_ref,
+	};
+	struct rogue_fwif_cccb_ctl *ctrl = pvr_cccb->ctrl;
+	u32 remaining = pvr_cccb->size - pvr_cccb->write_offset;
+	u32 required_size, cccb_space, read_offset;
+
+	/*
+	 * Always ensure we have enough room for a padding command at the end of
+	 * the CCCB.
+	 */
+	if (remaining < sz_with_hdr + PADDING_COMMAND_SIZE) {
+		/*
+		 * Command would need to wrap, so we need to pad the remainder
+		 * of the CCCB.
+		 */
+		required_size = sz_with_hdr + remaining;
+	} else {
+		required_size = sz_with_hdr;
+	}
+
+	read_offset = READ_ONCE(ctrl->read_offset);
+	cccb_space = get_ccb_space(pvr_cccb->write_offset, read_offset, pvr_cccb->size);
+	if (WARN_ON(cccb_space < required_size))
+		return;
+
+	if (required_size != sz_with_hdr) {
+		/* Add padding command */
+		struct rogue_fwif_ccb_cmd_header pad_cmd = {
+			.cmd_type = ROGUE_FWIF_CCB_CMD_TYPE_PADDING,
+			.cmd_size = remaining - sizeof(pad_cmd),
+		};
+
+		memcpy(&pvr_cccb->cccb[pvr_cccb->write_offset], &pad_cmd, sizeof(pad_cmd));
+		pvr_cccb->write_offset = 0;
+	}
+
+	memcpy(&pvr_cccb->cccb[pvr_cccb->write_offset], &cmd_header, sizeof(cmd_header));
+	memcpy(&pvr_cccb->cccb[pvr_cccb->write_offset + sizeof(cmd_header)], cmd_data, cmd_size);
+	pvr_cccb->write_offset += sz_with_hdr;
+}
+
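+/*
+ * Fill a FW kick command with the current CCCB write offset and wrap mask
+ * and, when a HWRT data set is attached, the FW address of its cleanup
+ * control structure.
+ */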
+static void fill_cmd_kick_data(struct pvr_cccb *cccb, u32 ctx_fw_addr,
+			       struct pvr_hwrt_data *hwrt,
+			       struct rogue_fwif_kccb_cmd_kick_data *k)
+{
+	k->context_fw_addr = ctx_fw_addr;
+	k->client_woff_update = cccb->write_offset;
+	k->client_wrap_mask_update = cccb->wrap_mask;
+
+	if (hwrt) {
+		u32 cleanup_state_offset = offsetof(struct rogue_fwif_hwrtdata, cleanup_state);
+
+		pvr_fw_object_get_fw_addr_offset(hwrt->fw_obj, cleanup_state_offset,
+						 &k->cleanup_ctl_fw_addr[k->num_cleanup_ctl++]);
+	}
+}
+
+/**
+ * pvr_cccb_send_kccb_kick() - Send KCCB kick to trigger command processing
+ * @pvr_dev: Device pointer.
+ * @pvr_cccb: Pointer to CCCB to process.
+ * @cctx_fw_addr: FW virtual address for context owning this Client CCB.
+ * @hwrt: HWRT data set associated with this kick. May be %NULL.
+ *
+ * The caller must have called pvr_kccb_reserve_slot() and, if it returned a
+ * fence, waited for that fence to signal before calling
+ * pvr_cccb_send_kccb_kick().
+ */
+void
+pvr_cccb_send_kccb_kick(struct pvr_device *pvr_dev,
+			struct pvr_cccb *pvr_cccb, u32 cctx_fw_addr,
+			struct pvr_hwrt_data *hwrt)
+{
+	struct rogue_fwif_kccb_cmd cmd_kick = {
+		.cmd_type = ROGUE_FWIF_KCCB_CMD_KICK,
+	};
+
+	fill_cmd_kick_data(pvr_cccb, cctx_fw_addr, hwrt, &cmd_kick.cmd_data.cmd_kick_data);
+
+	/* Make sure the writes to the CCCB are flushed before sending the KICK. */
+	wmb();
+
+	pvr_kccb_send_cmd_reserved_powered(pvr_dev, &cmd_kick, NULL);
+}
+
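+/**
+ * pvr_cccb_send_kccb_combined_kick() - Send a combined geometry/fragment KCCB kick
+ * @pvr_dev: Device pointer.
+ * @geom_cccb: Geometry Client CCB to process.
+ * @frag_cccb: Fragment Client CCB to process.
+ * @geom_ctx_fw_addr: FW virtual address of the geometry context.
+ * @frag_ctx_fw_addr: FW virtual address of the fragment context.
+ * @hwrt: HWRT data set associated with this kick. May be %NULL.
+ * @frag_is_pr: True if the fragment job is a partial render, in which case the
+ *              HWRT cleanup data is only attached to the geometry kick.
+ *
+ * As with pvr_cccb_send_kccb_kick(), a KCCB slot must have been reserved with
+ * pvr_kccb_reserve_slot() before calling this function.
+ */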
+void
+pvr_cccb_send_kccb_combined_kick(struct pvr_device *pvr_dev,
+				 struct pvr_cccb *geom_cccb,
+				 struct pvr_cccb *frag_cccb,
+				 u32 geom_ctx_fw_addr,
+				 u32 frag_ctx_fw_addr,
+				 struct pvr_hwrt_data *hwrt,
+				 bool frag_is_pr)
+{
+	struct rogue_fwif_kccb_cmd cmd_kick = {
+		.cmd_type = ROGUE_FWIF_KCCB_CMD_COMBINED_GEOM_FRAG_KICK,
+	};
+
+	fill_cmd_kick_data(geom_cccb, geom_ctx_fw_addr, hwrt,
+			   &cmd_kick.cmd_data.combined_geom_frag_cmd_kick_data.geom_cmd_kick_data);
+
+	/* If this is a partial-render job, we don't attach resources to cleanup-ctl array,
+	 * because the resources are already retained by the geometry job.
+	 */
+	fill_cmd_kick_data(frag_cccb, frag_ctx_fw_addr, frag_is_pr ? NULL : hwrt,
+			   &cmd_kick.cmd_data.combined_geom_frag_cmd_kick_data.frag_cmd_kick_data);
+
+	/* Make sure the writes to the CCCB are flushed before sending the KICK. */
+	wmb();
+
+	pvr_kccb_send_cmd_reserved_powered(pvr_dev, &cmd_kick, NULL);
+}
diff --git a/drivers/gpu/drm/imagination/pvr_cccb.h b/drivers/gpu/drm/imagination/pvr_cccb.h
new file mode 100644
index 000000000000..a9f5f38b874b
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_cccb.h
@@ -0,0 +1,109 @@
+/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#ifndef PVR_CCCB_H
+#define PVR_CCCB_H
+
+#include "pvr_rogue_fwif.h"
+#include "pvr_rogue_fwif_shared.h"
+
+#include <linux/mutex.h>
+#include <linux/types.h>
+
+#define PADDING_COMMAND_SIZE sizeof(struct rogue_fwif_ccb_cmd_header)
+
+/* Forward declaration from pvr_device.h. */
+struct pvr_device;
+
+/* Forward declaration from pvr_gem.h. */
+struct pvr_fw_object;
+
+/* Forward declaration from pvr_hwrt.h. */
+struct pvr_hwrt_data;
+
+struct pvr_cccb {
+	/** @ctrl_obj: FW object representing CCCB control structure. */
+	struct pvr_fw_object *ctrl_obj;
+
+	/** @cccb_obj: FW object representing CCCB. */
+	struct pvr_fw_object *cccb_obj;
+
+	/**
+	 * @ctrl: Kernel mapping of CCCB control structure. @lock must be held
+	 *        when accessing.
+	 */
+	struct rogue_fwif_cccb_ctl *ctrl;
+
+	/** @cccb: Kernel mapping of CCCB. @lock must be held when accessing. */
+	u8 *cccb;
+
+	/** @ctrl_fw_addr: FW virtual address of CCCB control structure. */
+	u32 ctrl_fw_addr;
+	/** @cccb_fw_addr: FW virtual address of CCCB. */
+	u32 cccb_fw_addr;
+
+	/** @size: Size of CCCB in bytes. */
+	size_t size;
+
+	/** @write_offset: CCCB write offset. */
+	u32 write_offset;
+
+	/** @wrap_mask: CCCB wrap mask. */
+	u32 wrap_mask;
+};
+
+int pvr_cccb_init(struct pvr_device *pvr_dev, struct pvr_cccb *cccb,
+		  u32 size_log2, const char *name);
+void pvr_cccb_fini(struct pvr_cccb *cccb);
+
+void pvr_cccb_write_command_with_header(struct pvr_cccb *pvr_cccb,
+					u32 cmd_type, u32 cmd_size, void *cmd_data,
+					u32 ext_job_ref, u32 int_job_ref);
+void pvr_cccb_send_kccb_kick(struct pvr_device *pvr_dev,
+			     struct pvr_cccb *pvr_cccb, u32 cctx_fw_addr,
+			     struct pvr_hwrt_data *hwrt);
+void pvr_cccb_send_kccb_combined_kick(struct pvr_device *pvr_dev,
+				      struct pvr_cccb *geom_cccb,
+				      struct pvr_cccb *frag_cccb,
+				      u32 geom_ctx_fw_addr,
+				      u32 frag_ctx_fw_addr,
+				      struct pvr_hwrt_data *hwrt,
+				      bool frag_is_pr);
+bool pvr_cccb_cmdseq_fits(struct pvr_cccb *pvr_cccb, size_t size);
+
+/**
+ * pvr_cccb_get_size_of_cmd_with_hdr() - Get the size of a command and its header.
+ * @cmd_size: Command size.
+ *
+ * Returns the size of the command and its header.
+ */
+static __always_inline u32
+pvr_cccb_get_size_of_cmd_with_hdr(u32 cmd_size)
+{
+	WARN_ON(!IS_ALIGNED(cmd_size, 8));
+	return sizeof(struct rogue_fwif_ccb_cmd_header) + ALIGN(cmd_size, 8);
+}
+
+/**
+ * pvr_cccb_cmdseq_can_fit() - Check if a command sequence can fit in the CCCB.
+ * @pvr_cccb: Target Client CCB.
+ * @size: Command sequence size.
+ *
+ * Returns:
+ *  * true if the CCCB is big enough to contain the command sequence, or
+ *  * false otherwise.
+ */
+static __always_inline bool
+pvr_cccb_cmdseq_can_fit(struct pvr_cccb *pvr_cccb, size_t size)
+{
+	/* We divide the capacity by two to simplify our CCCB fencing logic:
+	 * we want to be sure that, no matter what we had queued before, we
+	 * are able to either queue our command sequence at the end or add a
+	 * padding command and queue the command sequence at the beginning
+	 * of the CCCB. If the command sequence size is bigger than half the
+	 * CCCB capacity, we'd have to queue the padding command and make sure
+	 * the FW is done processing it before queueing our command sequence.
+	 */
+	return size + PADDING_COMMAND_SIZE <= pvr_cccb->size / 2;
+}
+
+#endif /* PVR_CCCB_H */
diff --git a/drivers/gpu/drm/imagination/pvr_context.c b/drivers/gpu/drm/imagination/pvr_context.c
new file mode 100644
index 000000000000..a87f5a295397
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_context.c
@@ -0,0 +1,337 @@
+// SPDX-License-Identifier: GPL-2.0 OR MIT
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#include "pvr_cccb.h"
+#include "pvr_context.h"
+#include "pvr_device.h"
+#include "pvr_drv.h"
+#include "pvr_gem.h"
+#include "pvr_power.h"
+#include "pvr_rogue_fwif.h"
+#include "pvr_rogue_fwif_common.h"
+#include "pvr_rogue_fwif_resetframework.h"
+#include "pvr_stream_defs.h"
+#include "pvr_vm.h"
+
+#include <drm/drm_auth.h>
+#include <drm/drm_managed.h>
+#include <linux/errno.h>
+#include <linux/kernel.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/xarray.h>
+
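+/*
+ * Map a uapi context priority onto the internal priority levels. Raising a
+ * context to high priority requires CAP_SYS_NICE or DRM master.
+ */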
+static int
+remap_priority(struct pvr_file *pvr_file, s32 uapi_priority,
+	       enum pvr_context_priority *priority_out)
+{
+	switch (uapi_priority) {
+	case DRM_PVR_CTX_PRIORITY_LOW:
+		*priority_out = PVR_CTX_PRIORITY_LOW;
+		break;
+	case DRM_PVR_CTX_PRIORITY_NORMAL:
+		*priority_out = PVR_CTX_PRIORITY_MEDIUM;
+		break;
+	case DRM_PVR_CTX_PRIORITY_HIGH:
+		if (!capable(CAP_SYS_NICE) && !drm_is_current_master(from_pvr_file(pvr_file)))
+			return -EACCES;
+		*priority_out = PVR_CTX_PRIORITY_HIGH;
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int get_fw_obj_size(enum drm_pvr_ctx_type type)
+{
+	switch (type) {
+	case DRM_PVR_CTX_TYPE_RENDER:
+		return sizeof(struct rogue_fwif_fwrendercontext);
+	case DRM_PVR_CTX_TYPE_COMPUTE:
+		return sizeof(struct rogue_fwif_fwcomputecontext);
+	case DRM_PVR_CTX_TYPE_TRANSFER_FRAG:
+		return sizeof(struct rogue_fwif_fwtransfercontext);
+	}
+
+	return -EINVAL;
+}
+
+static int
+process_static_context_state(struct pvr_device *pvr_dev, const struct pvr_stream_cmd_defs *cmd_defs,
+			     u64 stream_user_ptr, u32 stream_size, void *dest)
+{
+	void *stream;
+	int err;
+
+	stream = kzalloc(stream_size, GFP_KERNEL);
+	if (!stream)
+		return -ENOMEM;
+
+	if (copy_from_user(stream, u64_to_user_ptr(stream_user_ptr), stream_size)) {
+		err = -EFAULT;
+		goto err_free;
+	}
+
+	err = pvr_stream_process(pvr_dev, cmd_defs, stream, stream_size, dest);
+	if (err)
+		goto err_free;
+
+	kfree(stream);
+
+	return 0;
+
+err_free:
+	kfree(stream);
+
+	return err;
+}
+
+static int init_render_fw_objs(struct pvr_context *ctx,
+			       struct drm_pvr_ioctl_create_context_args *args,
+			       void *fw_ctx_map)
+{
+	struct rogue_fwif_static_rendercontext_state *static_rendercontext_state;
+	struct rogue_fwif_fwrendercontext *fw_render_context = fw_ctx_map;
+
+	if (!args->static_context_state_len)
+		return -EINVAL;
+
+	static_rendercontext_state = &fw_render_context->static_render_context_state;
+
+	/* Copy static render context state from userspace. */
+	return process_static_context_state(ctx->pvr_dev,
+					    &pvr_static_render_context_state_stream,
+					    args->static_context_state,
+					    args->static_context_state_len,
+					    &static_rendercontext_state->ctxswitch_regs[0]);
+}
+
+static int init_compute_fw_objs(struct pvr_context *ctx,
+				struct drm_pvr_ioctl_create_context_args *args,
+				void *fw_ctx_map)
+{
+	struct rogue_fwif_fwcomputecontext *fw_compute_context = fw_ctx_map;
+	struct rogue_fwif_cdm_registers_cswitch *ctxswitch_regs;
+
+	if (!args->static_context_state_len)
+		return -EINVAL;
+
+	ctxswitch_regs = &fw_compute_context->static_compute_context_state.ctxswitch_regs;
+
+	/* Copy static compute context state from userspace. */
+	return process_static_context_state(ctx->pvr_dev,
+					    &pvr_static_compute_context_state_stream,
+					    args->static_context_state,
+					    args->static_context_state_len,
+					    ctxswitch_regs);
+}
+
+static int init_transfer_fw_objs(struct pvr_context *ctx,
+				 struct drm_pvr_ioctl_create_context_args *args,
+				 void *fw_ctx_map)
+{
+	if (args->static_context_state_len)
+		return -EINVAL;
+
+	return 0;
+}
+
+static int init_fw_objs(struct pvr_context *ctx,
+			struct drm_pvr_ioctl_create_context_args *args,
+			void *fw_ctx_map)
+{
+	switch (ctx->type) {
+	case DRM_PVR_CTX_TYPE_RENDER:
+		return init_render_fw_objs(ctx, args, fw_ctx_map);
+	case DRM_PVR_CTX_TYPE_COMPUTE:
+		return init_compute_fw_objs(ctx, args, fw_ctx_map);
+	case DRM_PVR_CTX_TYPE_TRANSFER_FRAG:
+		return init_transfer_fw_objs(ctx, args, fw_ctx_map);
+	}
+
+	return -EINVAL;
+}
+
+static void
+ctx_fw_data_init(void *cpu_ptr, void *priv)
+{
+	struct pvr_context *ctx = priv;
+
+	memcpy(cpu_ptr, ctx->data, ctx->data_size);
+}
+
+/**
+ * pvr_context_create() - Create a context.
+ * @pvr_file: File to attach the created context to.
+ * @args: Context creation arguments.
+ *
+ * Return:
+ *  * 0 on success, or
+ *  * A negative error code on failure.
+ */
+int pvr_context_create(struct pvr_file *pvr_file, struct drm_pvr_ioctl_create_context_args *args)
+{
+	struct pvr_device *pvr_dev = pvr_file->pvr_dev;
+	struct pvr_context *ctx;
+	int ctx_size;
+	int err;
+
+	/* Context creation flags are currently unused and must be zero. */
+	if (args->flags)
+		return -EINVAL;
+
+	ctx_size = get_fw_obj_size(args->type);
+	if (ctx_size < 0)
+		return ctx_size;
+
+	ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
+	if (!ctx)
+		return -ENOMEM;
+
+	ctx->data_size = ctx_size;
+	ctx->type = args->type;
+	ctx->flags = args->flags;
+	ctx->pvr_dev = pvr_dev;
+	kref_init(&ctx->ref_count);
+
+	err = remap_priority(pvr_file, args->priority, &ctx->priority);
+	if (err)
+		goto err_free_ctx;
+
+	ctx->vm_ctx = pvr_vm_context_lookup(pvr_file, args->vm_context_handle);
+	if (IS_ERR(ctx->vm_ctx)) {
+		err = PTR_ERR(ctx->vm_ctx);
+		goto err_free_ctx;
+	}
+
+	ctx->data = kzalloc(ctx_size, GFP_KERNEL);
+	if (!ctx->data) {
+		err = -ENOMEM;
+		goto err_put_vm;
+	}
+
+	err = init_fw_objs(ctx, args, ctx->data);
+	if (err)
+		goto err_free_ctx_data;
+
+	err = pvr_fw_object_create(pvr_dev, ctx_size, PVR_BO_FW_FLAGS_DEVICE_UNCACHED,
+				   ctx_fw_data_init, ctx, &ctx->fw_obj);
+	if (err)
+		goto err_free_ctx_data;
+
+	err = xa_alloc(&pvr_dev->ctx_ids, &ctx->ctx_id, ctx, xa_limit_32b, GFP_KERNEL);
+	if (err)
+		goto err_destroy_fw_obj;
+
+	err = xa_alloc(&pvr_file->ctx_handles, &args->handle, ctx, xa_limit_32b, GFP_KERNEL);
+	if (err)
+		goto err_release_id;
+
+	return 0;
+
+err_release_id:
+	xa_erase(&pvr_dev->ctx_ids, ctx->ctx_id);
+
+err_destroy_fw_obj:
+	pvr_fw_object_destroy(ctx->fw_obj);
+
+err_free_ctx_data:
+	kfree(ctx->data);
+
+err_put_vm:
+	pvr_vm_context_put(ctx->vm_ctx);
+
+err_free_ctx:
+	kfree(ctx);
+	return err;
+}
+
+static void
+pvr_context_release(struct kref *ref_count)
+{
+	struct pvr_context *ctx =
+		container_of(ref_count, struct pvr_context, ref_count);
+	struct pvr_device *pvr_dev = ctx->pvr_dev;
+
+	xa_erase(&pvr_dev->ctx_ids, ctx->ctx_id);
+	pvr_fw_object_destroy(ctx->fw_obj);
+	kfree(ctx->data);
+	pvr_vm_context_put(ctx->vm_ctx);
+	kfree(ctx);
+}
+
+/**
+ * pvr_context_put() - Release reference on context
+ * @ctx: Target context.
+ */
+void
+pvr_context_put(struct pvr_context *ctx)
+{
+	if (ctx)
+		kref_put(&ctx->ref_count, pvr_context_release);
+}
+
+/**
+ * pvr_context_destroy() - Destroy context
+ * @pvr_file: Pointer to pvr_file structure.
+ * @handle: Userspace context handle.
+ *
+ * Removes context from context list and drops initial reference. Context will
+ * then be destroyed once all outstanding references are dropped.
+ *
+ * Return:
+ *  * 0 on success, or
+ *  * -%EINVAL if context not in context list.
+ */
+int
+pvr_context_destroy(struct pvr_file *pvr_file, u32 handle)
+{
+	struct pvr_context *ctx = xa_erase(&pvr_file->ctx_handles, handle);
+
+	if (!ctx)
+		return -EINVAL;
+
+	/* Release the reference held by the handle set. */
+	pvr_context_put(ctx);
+
+	return 0;
+}
+
+/**
+ * pvr_destroy_contexts_for_file() - Destroy any contexts associated with the given file
+ * @pvr_file: Pointer to pvr_file structure.
+ *
+ * Removes all contexts associated with @pvr_file from the device context list and drops initial
+ * references. Contexts will then be destroyed once all outstanding references are dropped.
+ */
+void pvr_destroy_contexts_for_file(struct pvr_file *pvr_file)
+{
+	struct pvr_context *ctx;
+	unsigned long handle;
+
+	xa_for_each(&pvr_file->ctx_handles, handle, ctx)
+		pvr_context_destroy(pvr_file, handle);
+}
+
+/**
+ * pvr_context_device_init() - Device level initialization for queue related resources.
+ * @pvr_dev: The device to initialize.
+ */
+void pvr_context_device_init(struct pvr_device *pvr_dev)
+{
+	xa_init_flags(&pvr_dev->ctx_ids, XA_FLAGS_ALLOC1);
+}
+
+/**
+ * pvr_context_device_fini() - Device level cleanup for queue related resources.
+ * @pvr_dev: The device to cleanup.
+ */
+void pvr_context_device_fini(struct pvr_device *pvr_dev)
+{
+	WARN_ON(!xa_empty(&pvr_dev->ctx_ids));
+	xa_destroy(&pvr_dev->ctx_ids);
+}
diff --git a/drivers/gpu/drm/imagination/pvr_context.h b/drivers/gpu/drm/imagination/pvr_context.h
new file mode 100644
index 000000000000..fa7115380272
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_context.h
@@ -0,0 +1,161 @@
+/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#ifndef PVR_CONTEXT_H
+#define PVR_CONTEXT_H
+
+#include <drm/gpu_scheduler.h>
+
+#include <linux/compiler_attributes.h>
+#include <linux/dma-fence.h>
+#include <linux/kref.h>
+#include <linux/types.h>
+#include <linux/xarray.h>
+#include <uapi/drm/pvr_drm.h>
+
+#include "pvr_cccb.h"
+#include "pvr_device.h"
+
+/* Forward declaration from pvr_gem.h. */
+struct pvr_fw_object;
+
+enum pvr_context_priority {
+	PVR_CTX_PRIORITY_LOW = 0,
+	PVR_CTX_PRIORITY_MEDIUM,
+	PVR_CTX_PRIORITY_HIGH,
+};
+
+/**
+ * struct pvr_context - Context data
+ */
+struct pvr_context {
+	/** @ref_count: Refcount for context. */
+	struct kref ref_count;
+
+	/** @pvr_dev: Pointer to owning device. */
+	struct pvr_device *pvr_dev;
+
+	/** @vm_ctx: Pointer to associated VM context. */
+	struct pvr_vm_context *vm_ctx;
+
+	/** @type: Type of context. */
+	enum drm_pvr_ctx_type type;
+
+	/** @flags: Context flags. */
+	u32 flags;
+
+	/** @priority: Context priority. */
+	enum pvr_context_priority priority;
+
+	/** @fw_obj: FW object representing FW-side context data. */
+	struct pvr_fw_object *fw_obj;
+
+	/** @data: Pointer to local copy of FW context data. */
+	void *data;
+
+	/** @data_size: Size of FW context data, in bytes. */
+	u32 data_size;
+
+	/** @ctx_id: FW context ID. */
+	u32 ctx_id;
+};
+
+/**
+ * pvr_context_get() - Take additional reference on context.
+ * @ctx: Context pointer.
+ *
+ * Call pvr_context_put() to release.
+ *
+ * Returns:
+ *  * The requested context on success, or
+ *  * %NULL if no context pointer passed.
+ */
+static __always_inline struct pvr_context *
+pvr_context_get(struct pvr_context *ctx)
+{
+	if (ctx)
+		kref_get(&ctx->ref_count);
+
+	return ctx;
+}
+
+/**
+ * pvr_context_lookup() - Lookup context pointer from handle and file.
+ * @pvr_file: Pointer to pvr_file structure.
+ * @handle: Context handle.
+ *
+ * Takes reference on context. Call pvr_context_put() to release.
+ *
+ * Return:
+ *  * The requested context on success, or
+ *  * %NULL on failure (context does not exist, or does not belong to @pvr_file).
+ */
+static __always_inline struct pvr_context *
+pvr_context_lookup(struct pvr_file *pvr_file, u32 handle)
+{
+	struct pvr_context *ctx;
+
+	/* Take the array lock to protect against context removal.  */
+	xa_lock(&pvr_file->ctx_handles);
+	ctx = pvr_context_get(xa_load(&pvr_file->ctx_handles, handle));
+	xa_unlock(&pvr_file->ctx_handles);
+
+	return ctx;
+}
+
+/**
+ * pvr_context_lookup_id() - Lookup context pointer from ID.
+ * @pvr_dev: Device pointer.
+ * @id: FW context ID.
+ *
+ * Takes reference on context. Call pvr_context_put() to release.
+ *
+ * Return:
+ *  * The requested context on success, or
+ *  * %NULL on failure (context does not exist).
+ */
+static __always_inline struct pvr_context *
+pvr_context_lookup_id(struct pvr_device *pvr_dev, u32 id)
+{
+	struct pvr_context *ctx;
+
+	/* Take the array lock to protect against context removal.  */
+	xa_lock(&pvr_dev->ctx_ids);
+
+	/* Contexts are removed from the ctx_ids set in the context release path,
+	 * meaning the ref_count reached zero before they get removed. We need
+	 * to make sure we're not trying to acquire a context that's being
+	 * destroyed.
+	 */
+	ctx = xa_load(&pvr_dev->ctx_ids, id);
+	if (ctx && !kref_get_unless_zero(&ctx->ref_count))
+		ctx = NULL;
+
+	xa_unlock(&pvr_dev->ctx_ids);
+
+	return ctx;
+}
+
+static __always_inline u32
+pvr_context_get_fw_addr(struct pvr_context *ctx)
+{
+	u32 ctx_fw_addr = 0;
+
+	pvr_fw_object_get_fw_addr(ctx->fw_obj, &ctx_fw_addr);
+
+	return ctx_fw_addr;
+}
+
+void pvr_context_put(struct pvr_context *ctx);
+
+int pvr_context_create(struct pvr_file *pvr_file, struct drm_pvr_ioctl_create_context_args *args);
+
+int pvr_context_destroy(struct pvr_file *pvr_file, u32 handle);
+
+void pvr_destroy_contexts_for_file(struct pvr_file *pvr_file);
+
+void pvr_context_device_init(struct pvr_device *pvr_dev);
+
+void pvr_context_device_fini(struct pvr_device *pvr_dev);
+
+#endif /* PVR_CONTEXT_H */
diff --git a/drivers/gpu/drm/imagination/pvr_device.h b/drivers/gpu/drm/imagination/pvr_device.h
index a810b233689b..431b8d8ad579 100644
--- a/drivers/gpu/drm/imagination/pvr_device.h
+++ b/drivers/gpu/drm/imagination/pvr_device.h
@@ -7,6 +7,8 @@
 #include "pvr_ccb.h"
 #include "pvr_device_info.h"
 #include "pvr_fw.h"
+#include "pvr_rogue_fwif_stream.h"
+#include "pvr_stream.h"
 
 #include <drm/drm_device.h>
 #include <drm/drm_file.h>
@@ -146,6 +148,17 @@ struct pvr_device {
 	/** @fw_dev: Firmware related data. */
 	struct pvr_fw_device fw_dev;
 
+	/** @stream_musthave_quirks: Bit array of "must-have" quirks for stream commands. */
+	u32 stream_musthave_quirks[PVR_STREAM_TYPE_MAX][PVR_STREAM_EXTHDR_TYPE_MAX];
+
+	/**
+	 * @ctx_ids: Array of contexts belonging to this device. Array members
+	 *           are of type "struct pvr_context *".
+	 *
+	 * This array is used to allocate IDs used by the firmware.
+	 */
+	struct xarray ctx_ids;
+
 	/**
 	 * @free_list_ids: Array of free lists belonging to this device. Array members
 	 *                 are of type "struct pvr_free_list *".
@@ -249,6 +262,14 @@ struct pvr_file {
 	 */
 	struct pvr_device *pvr_dev;
 
+	/**
+	 * @ctx_handles: Array of contexts belonging to this file. Array members
+	 *               are of type "struct pvr_context *".
+	 *
+	 * This array is used to allocate handles returned to userspace.
+	 */
+	struct xarray ctx_handles;
+
 	/**
 	 * @free_list_handles: Array of free lists belonging to this file. Array
 	 * members are of type "struct pvr_free_list *".
diff --git a/drivers/gpu/drm/imagination/pvr_drv.c b/drivers/gpu/drm/imagination/pvr_drv.c
index bd601eced4f2..02c6b8542436 100644
--- a/drivers/gpu/drm/imagination/pvr_drv.c
+++ b/drivers/gpu/drm/imagination/pvr_drv.c
@@ -1,6 +1,7 @@
 // SPDX-License-Identifier: GPL-2.0 OR MIT
 /* Copyright (c) 2023 Imagination Technologies Ltd. */
 
+#include "pvr_context.h"
 #include "pvr_device.h"
 #include "pvr_drv.h"
 #include "pvr_free_list.h"
@@ -687,7 +688,19 @@ static int
 pvr_ioctl_create_context(struct drm_device *drm_dev, void *raw_args,
 			 struct drm_file *file)
 {
-	return -ENOTTY;
+	struct drm_pvr_ioctl_create_context_args *args = raw_args;
+	struct pvr_file *pvr_file = file->driver_priv;
+	int idx;
+	int ret;
+
+	if (!drm_dev_enter(drm_dev, &idx))
+		return -EIO;
+
+	ret = pvr_context_create(pvr_file, args);
+
+	drm_dev_exit(idx);
+
+	return ret;
 }
 
 /**
@@ -707,7 +720,13 @@ static int
 pvr_ioctl_destroy_context(struct drm_device *drm_dev, void *raw_args,
 			  struct drm_file *file)
 {
-	return -ENOTTY;
+	struct drm_pvr_ioctl_destroy_context_args *args = raw_args;
+	struct pvr_file *pvr_file = file->driver_priv;
+
+	if (args->_padding_4)
+		return -EINVAL;
+
+	return pvr_context_destroy(pvr_file, args->handle);
 }
 
 /**
@@ -1303,6 +1322,7 @@ pvr_drm_driver_open(struct drm_device *drm_dev, struct drm_file *file)
 	 */
 	pvr_file->pvr_dev = pvr_dev;
 
+	xa_init_flags(&pvr_file->ctx_handles, XA_FLAGS_ALLOC1);
 	xa_init_flags(&pvr_file->free_list_handles, XA_FLAGS_ALLOC1);
 	xa_init_flags(&pvr_file->hwrt_handles, XA_FLAGS_ALLOC1);
 	xa_init_flags(&pvr_file->vm_ctx_handles, XA_FLAGS_ALLOC1);
@@ -1332,6 +1352,9 @@ pvr_drm_driver_postclose(__always_unused struct drm_device *drm_dev,
 {
 	struct pvr_file *pvr_file = to_pvr_file(file);
 
+	/* Kill remaining contexts. */
+	pvr_destroy_contexts_for_file(pvr_file);
+
 	/* Drop references on any remaining objects. */
 	pvr_destroy_free_lists_for_file(pvr_file);
 	pvr_destroy_hwrt_datasets_for_file(pvr_file);
@@ -1377,6 +1400,7 @@ pvr_probe(struct platform_device *plat_dev)
 	drm_dev = &pvr_dev->base;
 
 	platform_set_drvdata(plat_dev, drm_dev);
+	pvr_context_device_init(pvr_dev);
 
 	devm_pm_runtime_enable(&plat_dev->dev);
 	pm_runtime_mark_last_busy(&plat_dev->dev);
@@ -1420,6 +1444,7 @@ pvr_remove(struct platform_device *plat_dev)
 	drm_dev_unplug(drm_dev);
 	pvr_device_fini(pvr_dev);
 	pvr_watchdog_fini(pvr_dev);
+	pvr_context_device_fini(pvr_dev);
 
 	return 0;
 }
diff --git a/drivers/gpu/drm/imagination/pvr_stream.c b/drivers/gpu/drm/imagination/pvr_stream.c
new file mode 100644
index 000000000000..7603db7e79a7
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_stream.c
@@ -0,0 +1,285 @@
+// SPDX-License-Identifier: GPL-2.0 OR MIT
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#include "pvr_device.h"
+#include "pvr_rogue_fwif_stream.h"
+#include "pvr_stream.h"
+
+#include <linux/align.h>
+#include <linux/slab.h>
+#include <linux/types.h>
+#include <uapi/drm/pvr_drm.h>
+
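+/*
+ * A stream entry applies when the device has the named feature or, if
+ * PVR_FEATURE_NOT is set, when it lacks that feature. PVR_FEATURE_NONE
+ * entries always apply.
+ */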
+static __always_inline bool
+stream_def_is_supported(struct pvr_device *pvr_dev, const struct pvr_stream_def *stream_def)
+{
+	if (stream_def->feature == PVR_FEATURE_NONE)
+		return true;
+
+	if (!(stream_def->feature & PVR_FEATURE_NOT) &&
+	    pvr_device_has_feature(pvr_dev, stream_def->feature)) {
+		return true;
+	}
+
+	if ((stream_def->feature & PVR_FEATURE_NOT) &&
+	    !pvr_device_has_feature(pvr_dev, stream_def->feature & ~PVR_FEATURE_NOT)) {
+		return true;
+	}
+
+	return false;
+}
+
+static int
+pvr_stream_get_data(u8 *stream, u32 *stream_offset, u32 stream_size, u32 data_size, u32 align_size,
+		    void *dest)
+{
+	*stream_offset = ALIGN(*stream_offset, align_size);
+
+	if ((*stream_offset + data_size) > stream_size)
+		return -EINVAL;
+
+	memcpy(dest, stream + *stream_offset, data_size);
+
+	(*stream_offset) += data_size;
+
+	return 0;
+}
+
+/**
+ * pvr_stream_process_1() - Process a single stream and fill destination structure
+ * @pvr_dev: Device pointer.
+ * @stream_def: Stream definition.
+ * @nr_entries: Number of entries in @stream_def.
+ * @stream: Pointer to stream.
+ * @stream_offset: Starting offset within stream.
+ * @stream_size: Size of input stream, in bytes.
+ * @dest: Pointer to destination structure.
+ * @dest_size: Size of destination structure.
+ * @stream_offset_out: Pointer to variable to write updated stream offset to. May be NULL.
+ *
+ * Returns:
+ *  * 0 on success, or
+ *  * -%EINVAL on malformed stream.
+ */
+static int
+pvr_stream_process_1(struct pvr_device *pvr_dev, const struct pvr_stream_def *stream_def,
+		     u32 nr_entries, u8 *stream, u32 stream_offset, u32 stream_size,
+		     u8 *dest, u32 dest_size, u32 *stream_offset_out)
+{
+	int err = 0;
+	u32 i;
+
+	for (i = 0; i < nr_entries; i++) {
+		if (stream_def[i].offset >= dest_size) {
+			err = -EINVAL;
+			break;
+		}
+
+		if (!stream_def_is_supported(pvr_dev, &stream_def[i]))
+			continue;
+
+		switch (stream_def[i].size) {
+		case PVR_STREAM_SIZE_8:
+			err = pvr_stream_get_data(stream, &stream_offset, stream_size, sizeof(u8),
+						  sizeof(u8), dest + stream_def[i].offset);
+			if (err)
+				return err;
+			break;
+
+		case PVR_STREAM_SIZE_16:
+			err = pvr_stream_get_data(stream, &stream_offset, stream_size, sizeof(u16),
+						  sizeof(u16), dest + stream_def[i].offset);
+			if (err)
+				return err;
+			break;
+
+		case PVR_STREAM_SIZE_32:
+			err = pvr_stream_get_data(stream, &stream_offset, stream_size, sizeof(u32),
+						  sizeof(u32), dest + stream_def[i].offset);
+			if (err)
+				return err;
+			break;
+
+		case PVR_STREAM_SIZE_64:
+			err = pvr_stream_get_data(stream, &stream_offset, stream_size, sizeof(u64),
+						  sizeof(u64), dest + stream_def[i].offset);
+			if (err)
+				return err;
+			break;
+
+		case PVR_STREAM_SIZE_ARRAY:
+			err = pvr_stream_get_data(stream, &stream_offset, stream_size,
+						  stream_def[i].array_size, sizeof(u64),
+						  dest + stream_def[i].offset);
+			if (err)
+				return err;
+			break;
+		}
+	}
+
+	if (stream_offset_out)
+		*stream_offset_out = stream_offset;
+
+	return 0;
+}
+
+static int
+pvr_stream_process_ext_stream(struct pvr_device *pvr_dev,
+			      const struct pvr_stream_cmd_defs *cmd_defs, void *ext_stream,
+			      u32 stream_offset, u32 ext_stream_size, void *dest)
+{
+	u32 musthave_masks[PVR_STREAM_EXTHDR_TYPE_MAX];
+	u32 ext_header;
+	int err = 0;
+	u32 i;
+
+	/* Copy "must have" mask from device. We clear this as we process the stream. */
+	memcpy(musthave_masks, pvr_dev->stream_musthave_quirks[cmd_defs->type],
+	       sizeof(musthave_masks));
+
+	do {
+		const struct pvr_stream_ext_header *header;
+		u32 type;
+		u32 data;
+
+		err = pvr_stream_get_data(ext_stream, &stream_offset, ext_stream_size, sizeof(u32),
+					  sizeof(ext_header), &ext_header);
+		if (err)
+			return err;
+
+		type = (ext_header & PVR_STREAM_EXTHDR_TYPE_MASK) >> PVR_STREAM_EXTHDR_TYPE_SHIFT;
+		data = ext_header & PVR_STREAM_EXTHDR_DATA_MASK;
+
+		if (type >= cmd_defs->ext_nr_headers)
+			return -EINVAL;
+
+		header = &cmd_defs->ext_headers[type];
+		if (data & ~header->valid_mask)
+			return -EINVAL;
+
+		musthave_masks[type] &= ~data;
+
+		for (i = 0; i < header->ext_streams_num; i++) {
+			const struct pvr_stream_ext_def *ext_def = &header->ext_streams[i];
+
+			if (!(ext_header & ext_def->header_mask))
+				continue;
+
+			if (!pvr_device_has_uapi_quirk(pvr_dev, ext_def->quirk))
+				return -EINVAL;
+
+			err = pvr_stream_process_1(pvr_dev, ext_def->stream, ext_def->stream_len,
+						   ext_stream, stream_offset,
+						   ext_stream_size, dest,
+						   cmd_defs->dest_size, &stream_offset);
+			if (err)
+				return err;
+		}
+	} while (ext_header & PVR_STREAM_EXTHDR_CONTINUATION);
+
+	/*
+	 * Verify that "must have" mask is now zero. If it isn't then one of the "must have" quirks
+	 * for this command was not present.
+	 */
+	for (i = 0; i < cmd_defs->ext_nr_headers; i++) {
+		if (musthave_masks[i])
+			return -EINVAL;
+	}
+
+	return 0;
+}
+
+/**
+ * pvr_stream_process() - Build FW structure from stream
+ * @pvr_dev: Device pointer.
+ * @cmd_defs: Stream definition.
+ * @stream: Pointer to command stream.
+ * @stream_size: Size of command stream, in bytes.
+ * @dest_out: Pointer to destination buffer.
+ *
+ * Caller is responsible for freeing the output structure.
+ *
+ * Returns:
+ *  * 0 on success,
+ *  * -%ENOMEM on out of memory, or
+ *  * -%EINVAL on malformed stream.
+ */
+int
+pvr_stream_process(struct pvr_device *pvr_dev, const struct pvr_stream_cmd_defs *cmd_defs,
+		   void *stream, u32 stream_size, void *dest_out)
+{
+	u32 stream_offset = 0;
+	u32 main_stream_len;
+	u32 padding;
+	int err;
+
+	if (!stream || !stream_size)
+		return -EINVAL;
+
+	err = pvr_stream_get_data(stream, &stream_offset, stream_size, sizeof(u32),
+				  sizeof(u32), &main_stream_len);
+	if (err)
+		return err;
+
+	/*
+	 * u32 after stream length is padding to ensure u64 alignment, but may be used for expansion
+	 * in the future. Verify it's zero.
+	 */
+	err = pvr_stream_get_data(stream, &stream_offset, stream_size, sizeof(u32),
+				  sizeof(u32), &padding);
+	if (err)
+		return err;
+
+	if (main_stream_len < stream_offset || main_stream_len > stream_size || padding)
+		return -EINVAL;
+
+	err = pvr_stream_process_1(pvr_dev, cmd_defs->main_stream, cmd_defs->main_stream_len,
+				   stream, stream_offset, main_stream_len, dest_out,
+				   cmd_defs->dest_size, &stream_offset);
+	if (err)
+		return err;
+
+	if (stream_offset < stream_size) {
+		err = pvr_stream_process_ext_stream(pvr_dev, cmd_defs, stream, stream_offset,
+						    stream_size, dest_out);
+		if (err)
+			return err;
+	} else {
+		u32 i;
+
+		/*
+		 * If we don't have an extension stream then there must not be any "must have"
+		 * quirks for this command.
+		 */
+		for (i = 0; i < cmd_defs->ext_nr_headers; i++) {
+			if (pvr_dev->stream_musthave_quirks[cmd_defs->type][i])
+				return -EINVAL;
+		}
+	}
+
+	return 0;
+}
+
+/**
+ * pvr_stream_create_musthave_masks() - Create "must have" masks for streams based on current device
+ *                                      quirks
+ * @pvr_dev: Device pointer.
+ */
+void
+pvr_stream_create_musthave_masks(struct pvr_device *pvr_dev)
+{
+	memset(pvr_dev->stream_musthave_quirks, 0, sizeof(pvr_dev->stream_musthave_quirks));
+
+	if (pvr_device_has_uapi_quirk(pvr_dev, 47217))
+		pvr_dev->stream_musthave_quirks[PVR_STREAM_TYPE_FRAG][0] |=
+			PVR_STREAM_EXTHDR_FRAG0_BRN47217;
+
+	if (pvr_device_has_uapi_quirk(pvr_dev, 49927)) {
+		pvr_dev->stream_musthave_quirks[PVR_STREAM_TYPE_GEOM][0] |=
+			PVR_STREAM_EXTHDR_GEOM0_BRN49927;
+		pvr_dev->stream_musthave_quirks[PVR_STREAM_TYPE_FRAG][0] |=
+			PVR_STREAM_EXTHDR_FRAG0_BRN49927;
+		pvr_dev->stream_musthave_quirks[PVR_STREAM_TYPE_COMPUTE][0] |=
+			PVR_STREAM_EXTHDR_COMPUTE0_BRN49927;
+	}
+}
diff --git a/drivers/gpu/drm/imagination/pvr_stream.h b/drivers/gpu/drm/imagination/pvr_stream.h
new file mode 100644
index 000000000000..ecc5edfb7bf4
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_stream.h
@@ -0,0 +1,75 @@
+/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#ifndef PVR_STREAM_H
+#define PVR_STREAM_H
+
+#include <linux/bits.h>
+#include <linux/limits.h>
+#include <linux/types.h>
+
+struct pvr_device;
+
+struct pvr_job;
+
+enum pvr_stream_type {
+	PVR_STREAM_TYPE_GEOM = 0,
+	PVR_STREAM_TYPE_FRAG,
+	PVR_STREAM_TYPE_COMPUTE,
+	PVR_STREAM_TYPE_TRANSFER,
+	PVR_STREAM_TYPE_STATIC_RENDER_CONTEXT,
+	PVR_STREAM_TYPE_STATIC_COMPUTE_CONTEXT,
+
+	PVR_STREAM_TYPE_MAX
+};
+
+enum pvr_stream_size {
+	PVR_STREAM_SIZE_8 = 0,
+	PVR_STREAM_SIZE_16,
+	PVR_STREAM_SIZE_32,
+	PVR_STREAM_SIZE_64,
+	PVR_STREAM_SIZE_ARRAY,
+};
+
+#define PVR_FEATURE_NOT  BIT(31)
+#define PVR_FEATURE_NONE U32_MAX
+
+struct pvr_stream_def {
+	u32 offset;
+	enum pvr_stream_size size;
+	u32 array_size;
+	u32 feature;
+};
+
+struct pvr_stream_ext_def {
+	const struct pvr_stream_def *stream;
+	u32 stream_len;
+	u32 header_mask;
+	u32 quirk;
+};
+
+struct pvr_stream_ext_header {
+	const struct pvr_stream_ext_def *ext_streams;
+	u32 ext_streams_num;
+	u32 valid_mask;
+};
+
+struct pvr_stream_cmd_defs {
+	enum pvr_stream_type type;
+
+	const struct pvr_stream_def *main_stream;
+	u32 main_stream_len;
+
+	u32 ext_nr_headers;
+	const struct pvr_stream_ext_header *ext_headers;
+
+	size_t dest_size;
+};
+
+int
+pvr_stream_process(struct pvr_device *pvr_dev, const struct pvr_stream_cmd_defs *cmd_defs,
+		   void *stream, u32 stream_size, void *dest_out);
+void
+pvr_stream_create_musthave_masks(struct pvr_device *pvr_dev);
+
+#endif /* PVR_STREAM_H */
diff --git a/drivers/gpu/drm/imagination/pvr_stream_defs.c b/drivers/gpu/drm/imagination/pvr_stream_defs.c
new file mode 100644
index 000000000000..3c646e25accf
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_stream_defs.c
@@ -0,0 +1,125 @@
+// SPDX-License-Identifier: GPL-2.0 OR MIT
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#include "pvr_device_info.h"
+#include "pvr_rogue_fwif_client.h"
+#include "pvr_rogue_fwif_stream.h"
+#include "pvr_stream.h"
+#include "pvr_stream_defs.h"
+
+#include <linux/stddef.h>
+#include <uapi/drm/pvr_drm.h>
+
+#define PVR_STREAM_DEF_SET(owner, member, _size, _array_size, _feature) \
+	{ .offset = offsetof(struct owner, member), \
+	  .size = (_size),  \
+	  .array_size = (_array_size), \
+	  .feature = (_feature) }
+
+#define PVR_STREAM_DEF(owner, member, member_size)  \
+	PVR_STREAM_DEF_SET(owner, member, PVR_STREAM_SIZE_ ## member_size, 0, PVR_FEATURE_NONE)
+
+#define PVR_STREAM_DEF_FEATURE(owner, member, member_size, feature) \
+	PVR_STREAM_DEF_SET(owner, member, PVR_STREAM_SIZE_ ## member_size, 0, feature)
+
+#define PVR_STREAM_DEF_NOT_FEATURE(owner, member, member_size, feature)       \
+	PVR_STREAM_DEF_SET(owner, member, PVR_STREAM_SIZE_ ## member_size, 0, \
+			   (feature) | PVR_FEATURE_NOT)
+
+#define PVR_STREAM_DEF_ARRAY(owner, member)                                       \
+	PVR_STREAM_DEF_SET(owner, member, PVR_STREAM_SIZE_ARRAY,                  \
+			   sizeof(((struct owner *)0)->member), PVR_FEATURE_NONE)
+
+#define PVR_STREAM_DEF_ARRAY_FEATURE(owner, member, feature)            \
+	PVR_STREAM_DEF_SET(owner, member, PVR_STREAM_SIZE_ARRAY,         \
+			   sizeof(((struct owner *)0)->member), feature)
+
+#define PVR_STREAM_DEF_ARRAY_NOT_FEATURE(owner, member, feature)                             \
+	PVR_STREAM_DEF_SET(owner, member, PVR_STREAM_SIZE_ARRAY,                             \
+			   sizeof(((struct owner *)0)->member), (feature) | PVR_FEATURE_NOT)
+
+/*
+ * When adding new parameters to the stream definition, the new parameters must go after the
+ * existing parameters, to preserve order. As parameters are naturally aligned, care must be taken
+ * with respect to implicit padding in the stream; padding should be minimised as much as possible.
+ */
+static const struct pvr_stream_def rogue_fwif_static_render_context_state_stream[] = {
+	PVR_STREAM_DEF(rogue_fwif_geom_registers_caswitch,
+		       geom_reg_vdm_context_state_base_addr, 64),
+	PVR_STREAM_DEF(rogue_fwif_geom_registers_caswitch,
+		       geom_reg_vdm_context_state_resume_addr, 64),
+	PVR_STREAM_DEF(rogue_fwif_geom_registers_caswitch,
+		       geom_reg_ta_context_state_base_addr, 64),
+	PVR_STREAM_DEF(rogue_fwif_geom_registers_caswitch,
+		       geom_state[0].geom_reg_vdm_context_store_task0, 64),
+	PVR_STREAM_DEF(rogue_fwif_geom_registers_caswitch,
+		       geom_state[0].geom_reg_vdm_context_store_task1, 64),
+	PVR_STREAM_DEF(rogue_fwif_geom_registers_caswitch,
+		       geom_state[0].geom_reg_vdm_context_store_task2, 64),
+	PVR_STREAM_DEF(rogue_fwif_geom_registers_caswitch,
+		       geom_state[0].geom_reg_vdm_context_store_task3, 64),
+	PVR_STREAM_DEF(rogue_fwif_geom_registers_caswitch,
+		       geom_state[0].geom_reg_vdm_context_store_task4, 64),
+	PVR_STREAM_DEF(rogue_fwif_geom_registers_caswitch,
+		       geom_state[0].geom_reg_vdm_context_resume_task0, 64),
+	PVR_STREAM_DEF(rogue_fwif_geom_registers_caswitch,
+		       geom_state[0].geom_reg_vdm_context_resume_task1, 64),
+	PVR_STREAM_DEF(rogue_fwif_geom_registers_caswitch,
+		       geom_state[0].geom_reg_vdm_context_resume_task2, 64),
+	PVR_STREAM_DEF(rogue_fwif_geom_registers_caswitch,
+		       geom_state[0].geom_reg_vdm_context_resume_task3, 64),
+	PVR_STREAM_DEF(rogue_fwif_geom_registers_caswitch,
+		       geom_state[0].geom_reg_vdm_context_resume_task4, 64),
+	PVR_STREAM_DEF(rogue_fwif_geom_registers_caswitch,
+		       geom_state[1].geom_reg_vdm_context_store_task0, 64),
+	PVR_STREAM_DEF(rogue_fwif_geom_registers_caswitch,
+		       geom_state[1].geom_reg_vdm_context_store_task1, 64),
+	PVR_STREAM_DEF(rogue_fwif_geom_registers_caswitch,
+		       geom_state[1].geom_reg_vdm_context_store_task2, 64),
+	PVR_STREAM_DEF(rogue_fwif_geom_registers_caswitch,
+		       geom_state[1].geom_reg_vdm_context_store_task3, 64),
+	PVR_STREAM_DEF(rogue_fwif_geom_registers_caswitch,
+		       geom_state[1].geom_reg_vdm_context_store_task4, 64),
+	PVR_STREAM_DEF(rogue_fwif_geom_registers_caswitch,
+		       geom_state[1].geom_reg_vdm_context_resume_task0, 64),
+	PVR_STREAM_DEF(rogue_fwif_geom_registers_caswitch,
+		       geom_state[1].geom_reg_vdm_context_resume_task1, 64),
+	PVR_STREAM_DEF(rogue_fwif_geom_registers_caswitch,
+		       geom_state[1].geom_reg_vdm_context_resume_task2, 64),
+	PVR_STREAM_DEF(rogue_fwif_geom_registers_caswitch,
+		       geom_state[1].geom_reg_vdm_context_resume_task3, 64),
+	PVR_STREAM_DEF(rogue_fwif_geom_registers_caswitch,
+		       geom_state[1].geom_reg_vdm_context_resume_task4, 64),
+};
+
+const struct pvr_stream_cmd_defs pvr_static_render_context_state_stream = {
+	.type = PVR_STREAM_TYPE_STATIC_RENDER_CONTEXT,
+
+	.main_stream = rogue_fwif_static_render_context_state_stream,
+	.main_stream_len = ARRAY_SIZE(rogue_fwif_static_render_context_state_stream),
+
+	.ext_nr_headers = 0,
+
+	.dest_size = sizeof(struct rogue_fwif_geom_registers_caswitch),
+};
+
+static const struct pvr_stream_def rogue_fwif_static_compute_context_state_stream[] = {
+	PVR_STREAM_DEF(rogue_fwif_cdm_registers_cswitch, cdmreg_cdm_context_pds0, 64),
+	PVR_STREAM_DEF(rogue_fwif_cdm_registers_cswitch, cdmreg_cdm_context_pds1, 64),
+	PVR_STREAM_DEF(rogue_fwif_cdm_registers_cswitch, cdmreg_cdm_terminate_pds, 64),
+	PVR_STREAM_DEF(rogue_fwif_cdm_registers_cswitch, cdmreg_cdm_terminate_pds1, 64),
+	PVR_STREAM_DEF(rogue_fwif_cdm_registers_cswitch, cdmreg_cdm_resume_pds0, 64),
+	PVR_STREAM_DEF(rogue_fwif_cdm_registers_cswitch, cdmreg_cdm_context_pds0_b, 64),
+	PVR_STREAM_DEF(rogue_fwif_cdm_registers_cswitch, cdmreg_cdm_resume_pds0_b, 64),
+};
+
+const struct pvr_stream_cmd_defs pvr_static_compute_context_state_stream = {
+	.type = PVR_STREAM_TYPE_STATIC_COMPUTE_CONTEXT,
+
+	.main_stream = rogue_fwif_static_compute_context_state_stream,
+	.main_stream_len = ARRAY_SIZE(rogue_fwif_static_compute_context_state_stream),
+
+	.ext_nr_headers = 0,
+
+	.dest_size = sizeof(struct rogue_fwif_cdm_registers_cswitch),
+};
diff --git a/drivers/gpu/drm/imagination/pvr_stream_defs.h b/drivers/gpu/drm/imagination/pvr_stream_defs.h
new file mode 100644
index 000000000000..7e0ecfa12030
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_stream_defs.h
@@ -0,0 +1,16 @@
+/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#ifndef PVR_STREAM_DEFS_H
+#define PVR_STREAM_DEFS_H
+
+#include "pvr_stream.h"
+
+extern const struct pvr_stream_cmd_defs pvr_cmd_geom_stream;
+extern const struct pvr_stream_cmd_defs pvr_cmd_frag_stream;
+extern const struct pvr_stream_cmd_defs pvr_cmd_compute_stream;
+extern const struct pvr_stream_cmd_defs pvr_cmd_transfer_stream;
+extern const struct pvr_stream_cmd_defs pvr_static_render_context_state_stream;
+extern const struct pvr_stream_cmd_defs pvr_static_compute_context_state_stream;
+
+#endif /* PVR_STREAM_DEFS_H */
-- 
2.41.0


^ permalink raw reply related	[flat|nested] 77+ messages in thread

* [PATCH v5 14/17] drm/imagination: Implement job submission and scheduling
  2023-08-16  8:25 ` Sarah Walker
@ 2023-08-16  8:25   ` Sarah Walker
  -1 siblings, 0 replies; 77+ messages in thread
From: Sarah Walker @ 2023-08-16  8:25 UTC (permalink / raw)
  To: dri-devel
  Cc: matthew.brost, luben.tuikov, tzimmermann, linux-kernel, mripard,
	afd, boris.brezillon, dakr, donald.robson, hns, christian.koenig,
	faith.ekstrand

Implement job submission ioctl. Job scheduling is implemented using
drm_sched.

Jobs are submitted in a stream format. This is intended to keep the UAPI
data format independent of the actual FWIF structures, which vary depending
on the GPU in use.

The stream formats are documented at:
https://gitlab.freedesktop.org/mesa/mesa/-/blob/f8d2b42ae65c2f16f36a43e0ae39d288431e4263/src/imagination/csbgen/rogue_kmd_stream.xml
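
As a rough illustration (not part of the patch; struct example_fw_cmd,
example_cmd_stream and example_cmd_defs are hypothetical), a stream definition
table maps each packed word of the userspace stream onto a field of a
GPU-specific FWIF structure, reusing the PVR_STREAM_DEF() helpers from
pvr_stream_defs.c:

  struct example_fw_cmd {
  	u64 addr;
  	u32 flags;
  };

  static const struct pvr_stream_def example_cmd_stream[] = {
  	PVR_STREAM_DEF(example_fw_cmd, addr, 64),
  	PVR_STREAM_DEF(example_fw_cmd, flags, 32),
  };

  static const struct pvr_stream_cmd_defs example_cmd_defs = {
  	/* .type omitted: a real command would name one of enum pvr_stream_type. */
  	.main_stream = example_cmd_stream,
  	.main_stream_len = ARRAY_SIZE(example_cmd_stream),
  	.ext_nr_headers = 0,
  	.dest_size = sizeof(struct example_fw_cmd),
  };

pvr_stream_process() copies each word of the user stream (which is prefixed by
a u32 total length and a u32 of zero padding) into the matching offset of a
zeroed destination buffer of dest_size bytes; entries carry an optional
feature/quirk requirement so that fields only present on some GPUs can be
skipped. The GPU-specific layout therefore lives entirely in these kernel-side
tables, and userspace only ever emits the packed words.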

This patch depends on:
drm/sched: Convert drm scheduler to use a work queue rather than kthread:
  https://lore.kernel.org/dri-devel/20230404002211.3611376-2-matthew.brost@intel.com/
drm/sched: Move schedule policy to scheduler / entity:
  https://lore.kernel.org/dri-devel/20230404002211.3611376-3-matthew.brost@intel.com/
drm/sched: Add DRM_SCHED_POLICY_SINGLE_ENTITY scheduling policy:
  https://lore.kernel.org/dri-devel/20230404002211.3611376-4-matthew.brost@intel.com/
drm/sched: Start run wq before TDR in drm_sched_start:
  https://lore.kernel.org/dri-devel/20230404002211.3611376-6-matthew.brost@intel.com/
drm/sched: Submit job before starting TDR:
  https://lore.kernel.org/dri-devel/20230404002211.3611376-7-matthew.brost@intel.com/
drm/sched: Add helper to set TDR timeout:
  https://lore.kernel.org/dri-devel/20230404002211.3611376-8-matthew.brost@intel.com/

Changes since v4:
- Use a regular workqueue for job scheduling

Changes since v3:
- Support partial render jobs
- Add job timeout handler
- Split sync handling out of job code
- Use drm_dev_{enter,exit}

Changes since v2:
- Use drm_sched for job scheduling

Co-developed-by: Boris Brezillon <boris.brezillon@collabora.com>
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Co-developed-by: Donald Robson <donald.robson@imgtec.com>
Signed-off-by: Donald Robson <donald.robson@imgtec.com>
Signed-off-by: Sarah Walker <sarah.walker@imgtec.com>
---
 drivers/gpu/drm/imagination/Kconfig           |    1 +
 drivers/gpu/drm/imagination/Makefile          |    3 +
 drivers/gpu/drm/imagination/pvr_context.c     |  125 +-
 drivers/gpu/drm/imagination/pvr_context.h     |   44 +
 drivers/gpu/drm/imagination/pvr_device.c      |   31 +
 drivers/gpu/drm/imagination/pvr_device.h      |   21 +
 drivers/gpu/drm/imagination/pvr_drv.c         |   40 +-
 drivers/gpu/drm/imagination/pvr_job.c         |  770 +++++++++
 drivers/gpu/drm/imagination/pvr_job.h         |  161 ++
 drivers/gpu/drm/imagination/pvr_power.c       |   28 +
 drivers/gpu/drm/imagination/pvr_queue.c       | 1455 +++++++++++++++++
 drivers/gpu/drm/imagination/pvr_queue.h       |  179 ++
 drivers/gpu/drm/imagination/pvr_stream_defs.c |  226 +++
 drivers/gpu/drm/imagination/pvr_sync.c        |  287 ++++
 drivers/gpu/drm/imagination/pvr_sync.h        |   84 +
 15 files changed, 3451 insertions(+), 4 deletions(-)
 create mode 100644 drivers/gpu/drm/imagination/pvr_job.c
 create mode 100644 drivers/gpu/drm/imagination/pvr_job.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_queue.c
 create mode 100644 drivers/gpu/drm/imagination/pvr_queue.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_sync.c
 create mode 100644 drivers/gpu/drm/imagination/pvr_sync.h

diff --git a/drivers/gpu/drm/imagination/Kconfig b/drivers/gpu/drm/imagination/Kconfig
index b7cc82de53fc..113dea126b00 100644
--- a/drivers/gpu/drm/imagination/Kconfig
+++ b/drivers/gpu/drm/imagination/Kconfig
@@ -5,6 +5,7 @@ config DRM_POWERVR
 	tristate "Imagination Technologies PowerVR Graphics (Series 6 and later)"
 	depends on ARM64
 	depends on DRM
+	select DRM_EXEC
 	select DRM_GEM_SHMEM_HELPER
 	select DRM_SCHED
 	select FW_LOADER
diff --git a/drivers/gpu/drm/imagination/Makefile b/drivers/gpu/drm/imagination/Makefile
index 0c8ab120f277..313af5312d7b 100644
--- a/drivers/gpu/drm/imagination/Makefile
+++ b/drivers/gpu/drm/imagination/Makefile
@@ -18,10 +18,13 @@ powervr-y := \
 	pvr_fw_trace.o \
 	pvr_gem.o \
 	pvr_hwrt.o \
+	pvr_job.o \
 	pvr_mmu.o \
 	pvr_power.o \
+	pvr_queue.o \
 	pvr_stream.o \
 	pvr_stream_defs.o \
+	pvr_sync.o \
 	pvr_vm.o \
 	pvr_vm_mips.o
 
diff --git a/drivers/gpu/drm/imagination/pvr_context.c b/drivers/gpu/drm/imagination/pvr_context.c
index a87f5a295397..4a4c93c5d907 100644
--- a/drivers/gpu/drm/imagination/pvr_context.c
+++ b/drivers/gpu/drm/imagination/pvr_context.c
@@ -6,10 +6,12 @@
 #include "pvr_device.h"
 #include "pvr_drv.h"
 #include "pvr_gem.h"
+#include "pvr_job.h"
 #include "pvr_power.h"
 #include "pvr_rogue_fwif.h"
 #include "pvr_rogue_fwif_common.h"
 #include "pvr_rogue_fwif_resetframework.h"
+#include "pvr_stream.h"
 #include "pvr_stream_defs.h"
 #include "pvr_vm.h"
 
@@ -164,6 +166,116 @@ ctx_fw_data_init(void *cpu_ptr, void *priv)
 	memcpy(cpu_ptr, ctx->data, ctx->data_size);
 }
 
+/**
+ * pvr_context_destroy_queues() - Destroy all queues attached to a context.
+ * @ctx: Context to destroy queues on.
+ *
+ * Should be called when the last reference to a context object is dropped.
+ * It releases all resources attached to the queues bound to this context.
+ */
+static void pvr_context_destroy_queues(struct pvr_context *ctx)
+{
+	switch (ctx->type) {
+	case DRM_PVR_CTX_TYPE_RENDER:
+		pvr_queue_destroy(ctx->queues.fragment);
+		pvr_queue_destroy(ctx->queues.geometry);
+		break;
+	case DRM_PVR_CTX_TYPE_COMPUTE:
+		pvr_queue_destroy(ctx->queues.compute);
+		break;
+	case DRM_PVR_CTX_TYPE_TRANSFER_FRAG:
+		pvr_queue_destroy(ctx->queues.transfer);
+		break;
+	}
+}
+
+/**
+ * pvr_context_create_queues() - Create all queues attached to a context.
+ * @ctx: Context to create queues on.
+ * @args: Context creation arguments passed by userspace.
+ * @fw_ctx_map: CPU mapping of the FW context object.
+ *
+ * Return:
+ *  * 0 on success, or
+ *  * A negative error code otherwise.
+ */
+static int pvr_context_create_queues(struct pvr_context *ctx,
+				     struct drm_pvr_ioctl_create_context_args *args,
+				     void *fw_ctx_map)
+{
+	int err;
+
+	switch (ctx->type) {
+	case DRM_PVR_CTX_TYPE_RENDER:
+		ctx->queues.geometry = pvr_queue_create(ctx, DRM_PVR_JOB_TYPE_GEOMETRY,
+							args, fw_ctx_map);
+		if (IS_ERR(ctx->queues.geometry)) {
+			err = PTR_ERR(ctx->queues.geometry);
+			ctx->queues.geometry = NULL;
+			goto err_destroy_queues;
+		}
+
+		ctx->queues.fragment = pvr_queue_create(ctx, DRM_PVR_JOB_TYPE_FRAGMENT,
+							args, fw_ctx_map);
+		if (IS_ERR(ctx->queues.fragment)) {
+			err = PTR_ERR(ctx->queues.fragment);
+			ctx->queues.fragment = NULL;
+			goto err_destroy_queues;
+		}
+		return 0;
+
+	case DRM_PVR_CTX_TYPE_COMPUTE:
+		ctx->queues.compute = pvr_queue_create(ctx, DRM_PVR_JOB_TYPE_COMPUTE,
+						       args, fw_ctx_map);
+		if (IS_ERR(ctx->queues.compute)) {
+			err = PTR_ERR(ctx->queues.compute);
+			ctx->queues.compute = NULL;
+			goto err_destroy_queues;
+		}
+		return 0;
+
+	case DRM_PVR_CTX_TYPE_TRANSFER_FRAG:
+		ctx->queues.transfer = pvr_queue_create(ctx, DRM_PVR_JOB_TYPE_TRANSFER_FRAG,
+							args, fw_ctx_map);
+		if (IS_ERR(ctx->queues.transfer)) {
+			err = PTR_ERR(ctx->queues.transfer);
+			ctx->queues.transfer = NULL;
+			goto err_destroy_queues;
+		}
+		return 0;
+	}
+
+	return -EINVAL;
+
+err_destroy_queues:
+	pvr_context_destroy_queues(ctx);
+	return err;
+}
+
+/**
+ * pvr_context_kill_queues() - Kill queues attached to context.
+ * @ctx: Context to kill queues on.
+ *
+ * Killing the queues implies making them unusable for future jobs, while still
+ * giving the currently submitted jobs a chance to finish. Queue resources will
+ * stay around until pvr_context_destroy_queues() is called.
+ */
+static void pvr_context_kill_queues(struct pvr_context *ctx)
+{
+	switch (ctx->type) {
+	case DRM_PVR_CTX_TYPE_RENDER:
+		pvr_queue_kill(ctx->queues.fragment);
+		pvr_queue_kill(ctx->queues.geometry);
+		break;
+	case DRM_PVR_CTX_TYPE_COMPUTE:
+		pvr_queue_kill(ctx->queues.compute);
+		break;
+	case DRM_PVR_CTX_TYPE_TRANSFER_FRAG:
+		pvr_queue_kill(ctx->queues.transfer);
+		break;
+	}
+}
+
 /**
  * pvr_context_create() - Create a context.
  * @pvr_file: File to attach the created context to.
@@ -214,10 +326,14 @@ int pvr_context_create(struct pvr_file *pvr_file, struct drm_pvr_ioctl_create_co
 		goto err_put_vm;
 	}
 
-	err = init_fw_objs(ctx, args, ctx->data);
+	err = pvr_context_create_queues(ctx, args, ctx->data);
 	if (err)
 		goto err_free_ctx_data;
 
+	err = init_fw_objs(ctx, args, ctx->data);
+	if (err)
+		goto err_destroy_queues;
+
 	err = pvr_fw_object_create(pvr_dev, ctx_size, PVR_BO_FW_FLAGS_DEVICE_UNCACHED,
 				   ctx_fw_data_init, ctx, &ctx->fw_obj);
 	if (err)
@@ -239,6 +355,9 @@ int pvr_context_create(struct pvr_file *pvr_file, struct drm_pvr_ioctl_create_co
 err_destroy_fw_obj:
 	pvr_fw_object_destroy(ctx->fw_obj);
 
+err_destroy_queues:
+	pvr_context_destroy_queues(ctx);
+
 err_free_ctx_data:
 	kfree(ctx->data);
 
@@ -258,6 +377,7 @@ pvr_context_release(struct kref *ref_count)
 	struct pvr_device *pvr_dev = ctx->pvr_dev;
 
 	xa_erase(&pvr_dev->ctx_ids, ctx->ctx_id);
+	pvr_context_destroy_queues(ctx);
 	pvr_fw_object_destroy(ctx->fw_obj);
 	kfree(ctx->data);
 	pvr_vm_context_put(ctx->vm_ctx);
@@ -295,6 +415,9 @@ pvr_context_destroy(struct pvr_file *pvr_file, u32 handle)
 	if (!ctx)
 		return -EINVAL;
 
+	/* Make sure nothing can be queued to the queues after that point. */
+	pvr_context_kill_queues(ctx);
+
 	/* Release the reference held by the handle set. */
 	pvr_context_put(ctx);
 
diff --git a/drivers/gpu/drm/imagination/pvr_context.h b/drivers/gpu/drm/imagination/pvr_context.h
index fa7115380272..582d64c44d5f 100644
--- a/drivers/gpu/drm/imagination/pvr_context.h
+++ b/drivers/gpu/drm/imagination/pvr_context.h
@@ -15,6 +15,7 @@
 
 #include "pvr_cccb.h"
 #include "pvr_device.h"
+#include "pvr_queue.h"
 
 /* Forward declaration from pvr_gem.h. */
 struct pvr_fw_object;
@@ -58,8 +59,51 @@ struct pvr_context {
 
 	/** @ctx_id: FW context ID. */
 	u32 ctx_id;
+
+	/**
+	 * @faulty: Set to 1 when the context's queues had unfinished jobs when
+	 * a GPU reset happened.
+	 *
+	 * In that case, the context is in an inconsistent state and can't be
+	 * used anymore.
+	 */
+	atomic_t faulty;
+
+	/** @queues: Union containing all kind of queues. */
+	union {
+		struct {
+			/** @geometry: Geometry queue. */
+			struct pvr_queue *geometry;
+
+			/** @fragment: Fragment queue. */
+			struct pvr_queue *fragment;
+		};
+
+		/** @compute: Compute queue. */
+		struct pvr_queue *compute;
+
+		/** @transfer: Transfer queue. */
+		struct pvr_queue *transfer;
+	} queues;
 };
 
+static __always_inline struct pvr_queue *
+pvr_context_get_queue_for_job(struct pvr_context *ctx, enum drm_pvr_job_type type)
+{
+	switch (type) {
+	case DRM_PVR_JOB_TYPE_GEOMETRY:
+		return ctx->type == DRM_PVR_CTX_TYPE_RENDER ? ctx->queues.geometry : NULL;
+	case DRM_PVR_JOB_TYPE_FRAGMENT:
+		return ctx->type == DRM_PVR_CTX_TYPE_RENDER ? ctx->queues.fragment : NULL;
+	case DRM_PVR_JOB_TYPE_COMPUTE:
+		return ctx->type == DRM_PVR_CTX_TYPE_COMPUTE ? ctx->queues.compute : NULL;
+	case DRM_PVR_JOB_TYPE_TRANSFER_FRAG:
+		return ctx->type == DRM_PVR_CTX_TYPE_TRANSFER_FRAG ? ctx->queues.transfer : NULL;
+	}
+
+	return NULL;
+}
+
 /**
  * pvr_context_get() - Take additional reference on context.
  * @ctx: Context pointer.
diff --git a/drivers/gpu/drm/imagination/pvr_device.c b/drivers/gpu/drm/imagination/pvr_device.c
index bc7333444388..9ae758a88644 100644
--- a/drivers/gpu/drm/imagination/pvr_device.c
+++ b/drivers/gpu/drm/imagination/pvr_device.c
@@ -6,7 +6,9 @@
 
 #include "pvr_fw.h"
 #include "pvr_power.h"
+#include "pvr_queue.h"
 #include "pvr_rogue_cr_defs.h"
+#include "pvr_stream.h"
 #include "pvr_vm.h"
 
 #include <drm/drm_print.h>
@@ -117,6 +119,32 @@ static int pvr_device_clk_init(struct pvr_device *pvr_dev)
 	return 0;
 }
 
+/**
+ * pvr_device_process_active_queues() - Process all queue related events.
+ * @pvr_dev: PowerVR device to check.
+ *
+ * This is called any time we receive a FW event. It iterates over all
+ * active queues and calls pvr_queue_process() on them.
+ */
+void pvr_device_process_active_queues(struct pvr_device *pvr_dev)
+{
+	struct pvr_queue *queue, *tmp_queue;
+	LIST_HEAD(active_queues);
+
+	mutex_lock(&pvr_dev->queues.lock);
+
+	/* Move all active queues to a temporary list. Queues that remain
+	 * active after we're done processing them are re-inserted to
+	 * the queues.active list by pvr_queue_process().
+	 */
+	list_splice_init(&pvr_dev->queues.active, &active_queues);
+
+	list_for_each_entry_safe(queue, tmp_queue, &active_queues, node)
+		pvr_queue_process(queue);
+
+	mutex_unlock(&pvr_dev->queues.lock);
+}
+
 static irqreturn_t pvr_device_irq_thread_handler(int irq, void *data)
 {
 	struct pvr_device *pvr_dev = data;
@@ -132,6 +160,7 @@ static irqreturn_t pvr_device_irq_thread_handler(int irq, void *data)
 		if (pvr_dev->fw_dev.booted) {
 			pvr_fwccb_process(pvr_dev);
 			pvr_kccb_wake_up_waiters(pvr_dev);
+			pvr_device_process_active_queues(pvr_dev);
 		}
 
 		pm_runtime_mark_last_busy(from_pvr_device(pvr_dev)->dev);
@@ -398,6 +427,8 @@ pvr_device_gpu_init(struct pvr_device *pvr_dev)
 	else
 		return -EINVAL;
 
+	pvr_stream_create_musthave_masks(pvr_dev);
+
 	err = pvr_set_dma_info(pvr_dev);
 	if (err)
 		return err;
diff --git a/drivers/gpu/drm/imagination/pvr_device.h b/drivers/gpu/drm/imagination/pvr_device.h
index 431b8d8ad579..fc6124ae4b26 100644
--- a/drivers/gpu/drm/imagination/pvr_device.h
+++ b/drivers/gpu/drm/imagination/pvr_device.h
@@ -167,6 +167,26 @@ struct pvr_device {
 	 */
 	struct xarray free_list_ids;
 
+	/**
+	 * @job_ids: Array of jobs belonging to this device. Array members
+	 *           are of type "struct pvr_job *".
+	 */
+	struct xarray job_ids;
+
+	/**
+	 * @queues: Queue-related fields.
+	 */
+	struct {
+		/** @active: Active queue list. */
+		struct list_head active;
+
+		/** @idle: Idle queue list. */
+		struct list_head idle;
+
+		/** @lock: Lock protecting access to the active/idle lists. */
+		struct mutex lock;
+	} queues;
+
 	struct {
 		/** @work: Work item for watchdog callback. */
 		struct delayed_work work;
@@ -436,6 +456,7 @@ packed_bvnc_to_pvr_gpu_id(u64 bvnc, struct pvr_gpu_id *gpu_id)
 
 int pvr_device_init(struct pvr_device *pvr_dev);
 void pvr_device_fini(struct pvr_device *pvr_dev);
+void pvr_device_reset(struct pvr_device *pvr_dev);
 
 bool
 pvr_device_has_uapi_quirk(struct pvr_device *pvr_dev, u32 quirk);
diff --git a/drivers/gpu/drm/imagination/pvr_drv.c b/drivers/gpu/drm/imagination/pvr_drv.c
index 02c6b8542436..757263eee34e 100644
--- a/drivers/gpu/drm/imagination/pvr_drv.c
+++ b/drivers/gpu/drm/imagination/pvr_drv.c
@@ -7,6 +7,7 @@
 #include "pvr_free_list.h"
 #include "pvr_gem.h"
 #include "pvr_hwrt.h"
+#include "pvr_job.h"
 #include "pvr_power.h"
 #include "pvr_rogue_defs.h"
 #include "pvr_rogue_fwif_client.h"
@@ -31,6 +32,8 @@
 #include <linux/of_device.h>
 #include <linux/of_platform.h>
 #include <linux/platform_device.h>
+#include <linux/pm_runtime.h>
+#include <linux/xarray.h>
 
 /**
  * DOC: PowerVR (Series 6 and later) Graphics Driver
@@ -411,7 +414,8 @@ pvr_dev_query_runtime_info_get(struct pvr_device *pvr_dev,
 		return 0;
 	}
 
-	runtime_info.free_list_min_pages = 0; /* FIXME */
+	runtime_info.free_list_min_pages =
+		pvr_get_free_list_min_pages(pvr_dev);
 	runtime_info.free_list_max_pages =
 		ROGUE_PM_MAX_FREELIST_SIZE / ROGUE_PM_PAGE_SIZE;
 	runtime_info.common_store_alloc_region_size =
@@ -1151,7 +1155,20 @@ static int
 pvr_ioctl_submit_jobs(struct drm_device *drm_dev, void *raw_args,
 		      struct drm_file *file)
 {
-	return -ENOTTY;
+	struct drm_pvr_ioctl_submit_jobs_args *args = raw_args;
+	struct pvr_device *pvr_dev = to_pvr_device(drm_dev);
+	struct pvr_file *pvr_file = to_pvr_file(file);
+	int idx;
+	int err;
+
+	if (!drm_dev_enter(drm_dev, &idx))
+		return -EIO;
+
+	err = pvr_submit_jobs(pvr_dev, pvr_file, args);
+
+	drm_dev_exit(idx);
+
+	return err;
 }
 
 int
@@ -1367,7 +1384,8 @@ pvr_drm_driver_postclose(__always_unused struct drm_device *drm_dev,
 DEFINE_DRM_GEM_FOPS(pvr_drm_driver_fops);
 
 static struct drm_driver pvr_drm_driver = {
-	.driver_features = DRIVER_GEM | DRIVER_GEM_GPUVA | DRIVER_RENDER,
+	.driver_features = DRIVER_GEM | DRIVER_GEM_GPUVA | DRIVER_RENDER |
+			   DRIVER_SYNCOBJ | DRIVER_SYNCOBJ_TIMELINE,
 	.open = pvr_drm_driver_open,
 	.postclose = pvr_drm_driver_postclose,
 	.ioctls = pvr_drm_driver_ioctls,
@@ -1400,8 +1418,15 @@ pvr_probe(struct platform_device *plat_dev)
 	drm_dev = &pvr_dev->base;
 
 	platform_set_drvdata(plat_dev, drm_dev);
+
+	init_rwsem(&pvr_dev->reset_sem);
+
 	pvr_context_device_init(pvr_dev);
 
+	err = pvr_queue_device_init(pvr_dev);
+	if (err)
+		goto err_context_fini;
+
 	devm_pm_runtime_enable(&plat_dev->dev);
 	pm_runtime_mark_last_busy(&plat_dev->dev);
 
@@ -1418,6 +1443,7 @@ pvr_probe(struct platform_device *plat_dev)
 		goto err_device_fini;
 
 	xa_init_flags(&pvr_dev->free_list_ids, XA_FLAGS_ALLOC1);
+	xa_init_flags(&pvr_dev->job_ids, XA_FLAGS_ALLOC1);
 
 	return 0;
 
@@ -1427,6 +1453,11 @@ pvr_probe(struct platform_device *plat_dev)
 err_watchdog_fini:
 	pvr_watchdog_fini(pvr_dev);
 
+	pvr_queue_device_fini(pvr_dev);
+
+err_context_fini:
+	pvr_context_device_fini(pvr_dev);
+
 	return err;
 }
 
@@ -1436,14 +1467,17 @@ pvr_remove(struct platform_device *plat_dev)
 	struct drm_device *drm_dev = platform_get_drvdata(plat_dev);
 	struct pvr_device *pvr_dev = to_pvr_device(drm_dev);
 
+	WARN_ON(!xa_empty(&pvr_dev->job_ids));
 	WARN_ON(!xa_empty(&pvr_dev->free_list_ids));
 
+	xa_destroy(&pvr_dev->job_ids);
 	xa_destroy(&pvr_dev->free_list_ids);
 
 	pm_runtime_suspend(drm_dev->dev);
 	drm_dev_unplug(drm_dev);
 	pvr_device_fini(pvr_dev);
 	pvr_watchdog_fini(pvr_dev);
+	pvr_queue_device_fini(pvr_dev);
 	pvr_context_device_fini(pvr_dev);
 
 	return 0;
diff --git a/drivers/gpu/drm/imagination/pvr_job.c b/drivers/gpu/drm/imagination/pvr_job.c
new file mode 100644
index 000000000000..fc93b81d1ffe
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_job.c
@@ -0,0 +1,770 @@
+// SPDX-License-Identifier: GPL-2.0 OR MIT
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#include "pvr_context.h"
+#include "pvr_device.h"
+#include "pvr_drv.h"
+#include "pvr_gem.h"
+#include "pvr_hwrt.h"
+#include "pvr_job.h"
+#include "pvr_power.h"
+#include "pvr_rogue_fwif.h"
+#include "pvr_rogue_fwif_client.h"
+#include "pvr_stream.h"
+#include "pvr_stream_defs.h"
+#include "pvr_sync.h"
+
+#include <drm/drm_exec.h>
+#include <drm/drm_gem.h>
+#include <linux/types.h>
+#include <uapi/drm/pvr_drm.h>
+
+static void pvr_job_release(struct kref *kref)
+{
+	struct pvr_job *job = container_of(kref, struct pvr_job, ref_count);
+
+	xa_erase(&job->pvr_dev->job_ids, job->id);
+
+	pvr_hwrt_data_put(job->hwrt);
+	pvr_context_put(job->ctx);
+
+	WARN_ON(job->paired_job);
+
+	pvr_queue_job_cleanup(job);
+	pvr_job_release_pm_ref(job);
+
+	kfree(job->cmd);
+	kfree(job);
+}
+
+/**
+ * pvr_job_put() - Release reference on job
+ * @job: Target job.
+ */
+void
+pvr_job_put(struct pvr_job *job)
+{
+	if (job)
+		kref_put(&job->ref_count, pvr_job_release);
+}
+
+/**
+ * pvr_job_process_stream() - Build job FW structure from stream
+ * @pvr_dev: Device pointer.
+ * @cmd_defs: Stream definition.
+ * @stream: Pointer to command stream.
+ * @stream_size: Size of command stream, in bytes.
+ * @job: Pointer to job.
+ *
+ * Caller is responsible for freeing the output structure.
+ *
+ * Returns:
+ *  * 0 on success,
+ *  * -%ENOMEM on out of memory, or
+ *  * -%EINVAL on malformed stream.
+ */
+static int
+pvr_job_process_stream(struct pvr_device *pvr_dev, const struct pvr_stream_cmd_defs *cmd_defs,
+		       void *stream, u32 stream_size, struct pvr_job *job)
+{
+	int err;
+
+	job->cmd = kzalloc(cmd_defs->dest_size, GFP_KERNEL);
+	if (!job->cmd)
+		return -ENOMEM;
+
+	job->cmd_len = cmd_defs->dest_size;
+
+	err = pvr_stream_process(pvr_dev, cmd_defs, stream, stream_size, job->cmd);
+	if (err)
+		kfree(job->cmd);
+
+	return err;
+}
+
+static int pvr_fw_cmd_init(struct pvr_device *pvr_dev, struct pvr_job *job,
+			   const struct pvr_stream_cmd_defs *stream_def,
+			   u64 stream_userptr, u32 stream_len)
+{
+	void *stream;
+	int err;
+
+	stream = kzalloc(stream_len, GFP_KERNEL);
+	if (!stream)
+		return -ENOMEM;
+
+	if (copy_from_user(stream, u64_to_user_ptr(stream_userptr), stream_len)) {
+		err = -EFAULT;
+		goto err_free_stream;
+	}
+
+	err = pvr_job_process_stream(pvr_dev, stream_def, stream, stream_len, job);
+
+err_free_stream:
+	kfree(stream);
+
+	return err;
+}
+
+static u32
+convert_geom_flags(u32 in_flags)
+{
+	u32 out_flags = 0;
+
+	if (in_flags & DRM_PVR_SUBMIT_JOB_GEOM_CMD_FIRST)
+		out_flags |= ROGUE_GEOM_FLAGS_FIRSTKICK;
+	if (in_flags & DRM_PVR_SUBMIT_JOB_GEOM_CMD_LAST)
+		out_flags |= ROGUE_GEOM_FLAGS_LASTKICK;
+	if (in_flags & DRM_PVR_SUBMIT_JOB_GEOM_CMD_SINGLE_CORE)
+		out_flags |= ROGUE_GEOM_FLAGS_SINGLE_CORE;
+
+	return out_flags;
+}
+
+static u32
+convert_frag_flags(u32 in_flags)
+{
+	u32 out_flags = 0;
+
+	if (in_flags & DRM_PVR_SUBMIT_JOB_FRAG_CMD_SINGLE_CORE)
+		out_flags |= ROGUE_FRAG_FLAGS_SINGLE_CORE;
+	if (in_flags & DRM_PVR_SUBMIT_JOB_FRAG_CMD_DEPTHBUFFER)
+		out_flags |= ROGUE_FRAG_FLAGS_DEPTHBUFFER;
+	if (in_flags & DRM_PVR_SUBMIT_JOB_FRAG_CMD_STENCILBUFFER)
+		out_flags |= ROGUE_FRAG_FLAGS_STENCILBUFFER;
+	if (in_flags & DRM_PVR_SUBMIT_JOB_FRAG_CMD_PREVENT_CDM_OVERLAP)
+		out_flags |= ROGUE_FRAG_FLAGS_PREVENT_CDM_OVERLAP;
+	if (in_flags & DRM_PVR_SUBMIT_JOB_FRAG_CMD_SCRATCHBUFFER)
+		out_flags |= ROGUE_FRAG_FLAGS_SCRATCHBUFFER;
+	if (in_flags & DRM_PVR_SUBMIT_JOB_FRAG_CMD_GET_VIS_RESULTS)
+		out_flags |= ROGUE_FRAG_FLAGS_GET_VIS_RESULTS;
+
+	return out_flags;
+}
+
+static int
+pvr_geom_job_fw_cmd_init(struct pvr_job *job,
+			 struct drm_pvr_job *args)
+{
+	struct rogue_fwif_cmd_geom *cmd;
+	int err;
+
+	if (args->flags & ~DRM_PVR_SUBMIT_JOB_GEOM_CMD_FLAGS_MASK)
+		return -EINVAL;
+
+	if (job->ctx->type != DRM_PVR_CTX_TYPE_RENDER)
+		return -EINVAL;
+
+	if (!job->hwrt)
+		return -EINVAL;
+
+	job->fw_ccb_cmd_type = ROGUE_FWIF_CCB_CMD_TYPE_GEOM;
+	err = pvr_fw_cmd_init(job->pvr_dev, job, &pvr_cmd_geom_stream,
+			      args->cmd_stream, args->cmd_stream_len);
+	if (err)
+		return err;
+
+	cmd = job->cmd;
+	cmd->cmd_shared.cmn.frame_num = 0;
+	cmd->flags = convert_geom_flags(args->flags);
+	pvr_fw_object_get_fw_addr(job->hwrt->fw_obj, &cmd->cmd_shared.hwrt_data_fw_addr);
+	return 0;
+}
+
+static int
+pvr_frag_job_fw_cmd_init(struct pvr_job *job,
+			 struct drm_pvr_job *args)
+{
+	struct rogue_fwif_cmd_frag *cmd;
+	int err;
+
+	if (args->flags & ~DRM_PVR_SUBMIT_JOB_FRAG_CMD_FLAGS_MASK)
+		return -EINVAL;
+
+	if (job->ctx->type != DRM_PVR_CTX_TYPE_RENDER)
+		return -EINVAL;
+
+	if (!job->hwrt)
+		return -EINVAL;
+
+	job->fw_ccb_cmd_type = (args->flags & DRM_PVR_SUBMIT_JOB_FRAG_CMD_PARTIAL_RENDER) ?
+			       ROGUE_FWIF_CCB_CMD_TYPE_FRAG_PR :
+			       ROGUE_FWIF_CCB_CMD_TYPE_FRAG;
+	err = pvr_fw_cmd_init(job->pvr_dev, job, &pvr_cmd_frag_stream,
+			      args->cmd_stream, args->cmd_stream_len);
+	if (err)
+		return err;
+
+	cmd = job->cmd;
+	cmd->cmd_shared.cmn.frame_num = 0;
+	cmd->flags = convert_frag_flags(args->flags);
+	pvr_fw_object_get_fw_addr(job->hwrt->fw_obj, &cmd->cmd_shared.hwrt_data_fw_addr);
+	return 0;
+}
+
+static u32
+convert_compute_flags(u32 in_flags)
+{
+	u32 out_flags = 0;
+
+	if (in_flags & DRM_PVR_SUBMIT_JOB_COMPUTE_CMD_PREVENT_ALL_OVERLAP)
+		out_flags |= ROGUE_COMPUTE_FLAG_PREVENT_ALL_OVERLAP;
+	if (in_flags & DRM_PVR_SUBMIT_JOB_COMPUTE_CMD_SINGLE_CORE)
+		out_flags |= ROGUE_COMPUTE_FLAG_SINGLE_CORE;
+
+	return out_flags;
+}
+
+static int
+pvr_compute_job_fw_cmd_init(struct pvr_job *job,
+			    struct drm_pvr_job *args)
+{
+	struct rogue_fwif_cmd_compute *cmd;
+	int err;
+
+	if (args->flags & ~DRM_PVR_SUBMIT_JOB_COMPUTE_CMD_FLAGS_MASK)
+		return -EINVAL;
+
+	if (job->ctx->type != DRM_PVR_CTX_TYPE_COMPUTE)
+		return -EINVAL;
+
+	job->fw_ccb_cmd_type = ROGUE_FWIF_CCB_CMD_TYPE_CDM;
+	err = pvr_fw_cmd_init(job->pvr_dev, job, &pvr_cmd_compute_stream,
+			      args->cmd_stream, args->cmd_stream_len);
+	if (err)
+		return err;
+
+	cmd = job->cmd;
+	cmd->common.frame_num = 0;
+	cmd->flags = convert_compute_flags(args->flags);
+	return 0;
+}
+
+static u32
+convert_transfer_flags(u32 in_flags)
+{
+	u32 out_flags = 0;
+
+	if (in_flags & DRM_PVR_SUBMIT_JOB_TRANSFER_CMD_SINGLE_CORE)
+		out_flags |= ROGUE_TRANSFER_FLAGS_SINGLE_CORE;
+
+	return out_flags;
+}
+
+static int
+pvr_transfer_job_fw_cmd_init(struct pvr_job *job,
+			     struct drm_pvr_job *args)
+{
+	struct rogue_fwif_cmd_transfer *cmd;
+	int err;
+
+	if (args->flags & ~DRM_PVR_SUBMIT_JOB_TRANSFER_CMD_FLAGS_MASK)
+		return -EINVAL;
+
+	if (job->ctx->type != DRM_PVR_CTX_TYPE_TRANSFER_FRAG)
+		return -EINVAL;
+
+	job->fw_ccb_cmd_type = ROGUE_FWIF_CCB_CMD_TYPE_TQ_3D;
+	err = pvr_fw_cmd_init(job->pvr_dev, job, &pvr_cmd_transfer_stream,
+			      args->cmd_stream, args->cmd_stream_len);
+	if (err)
+		return err;
+
+	cmd = job->cmd;
+	cmd->common.frame_num = 0;
+	cmd->flags = convert_transfer_flags(args->flags);
+	return 0;
+}
+
+static int
+pvr_job_fw_cmd_init(struct pvr_job *job,
+		    struct drm_pvr_job *args)
+{
+	switch (args->type) {
+	case DRM_PVR_JOB_TYPE_GEOMETRY:
+		return pvr_geom_job_fw_cmd_init(job, args);
+
+	case DRM_PVR_JOB_TYPE_FRAGMENT:
+		return pvr_frag_job_fw_cmd_init(job, args);
+
+	case DRM_PVR_JOB_TYPE_COMPUTE:
+		return pvr_compute_job_fw_cmd_init(job, args);
+
+	case DRM_PVR_JOB_TYPE_TRANSFER_FRAG:
+		return pvr_transfer_job_fw_cmd_init(job, args);
+
+	default:
+		return -EINVAL;
+	}
+}
+
+/**
+ * struct pvr_job_data - Helper container for pairing jobs with the
+ * sync_ops supplied for them by the user.
+ */
+struct pvr_job_data {
+	/** @job: Pointer to the job. */
+	struct pvr_job *job;
+
+	/** @sync_ops: Pointer to the sync_ops associated with @job. */
+	struct drm_pvr_sync_op *sync_ops;
+
+	/** @sync_op_count: Number of members of @sync_ops. */
+	u32 sync_op_count;
+};
+
+/**
+ * prepare_job_syncs() - Prepare all sync objects for a single job.
+ * @pvr_file: PowerVR file.
+ * @job_data: Precreated job and sync_ops array.
+ * @signal_array: xarray to receive signal sync objects.
+ *
+ * Returns:
+ *  * 0 on success, or
+ *  * Any error code returned by pvr_sync_signal_array_collect_ops(),
+ *    pvr_sync_add_deps_to_job(), drm_sched_job_add_resv_dependencies() or
+ *    pvr_sync_signal_array_update_fences().
+ */
+static int
+prepare_job_syncs(struct pvr_file *pvr_file,
+		  struct pvr_job_data *job_data,
+		  struct xarray *signal_array)
+{
+	struct dma_fence *done_fence;
+	int err = pvr_sync_signal_array_collect_ops(signal_array,
+						    from_pvr_file(pvr_file),
+						    job_data->sync_op_count,
+						    job_data->sync_ops);
+
+	if (err)
+		return err;
+
+	err = pvr_sync_add_deps_to_job(pvr_file, &job_data->job->base,
+				       job_data->sync_op_count,
+				       job_data->sync_ops, signal_array);
+	if (err)
+		return err;
+
+	if (job_data->job->hwrt) {
+		/* The geometry job writes the HWRT region headers, which are
+		 * then read by the fragment job.
+		 */
+		struct drm_gem_object *obj =
+			gem_from_pvr_gem(job_data->job->hwrt->fw_obj->gem);
+		enum dma_resv_usage usage =
+			dma_resv_usage_rw(job_data->job->type ==
+					  DRM_PVR_JOB_TYPE_GEOMETRY);
+
+		err = drm_sched_job_add_resv_dependencies(&job_data->job->base,
+							  obj->resv, usage);
+		if (err)
+			return err;
+	}
+
+	/* We need to arm the job to get the job done fence. */
+	done_fence = pvr_queue_job_arm(job_data->job);
+
+	err = pvr_sync_signal_array_update_fences(signal_array,
+						  job_data->sync_op_count,
+						  job_data->sync_ops,
+						  done_fence);
+	return err;
+}
+
+/**
+ * prepare_job_syncs_for_each() - Prepare all sync objects for an array of jobs.
+ * @pvr_file: PowerVR file.
+ * @job_data: Array of precreated jobs and their sync_ops.
+ * @job_count: Number of jobs.
+ * @signal_array: xarray to receive signal sync objects.
+ *
+ * Returns:
+ *  * 0 on success, or
+ *  * Any error code returned by prepare_job_syncs().
+ */
+static int
+prepare_job_syncs_for_each(struct pvr_file *pvr_file,
+			   struct pvr_job_data *job_data,
+			   u32 *job_count,
+			   struct xarray *signal_array)
+{
+	for (u32 i = 0; i < *job_count; i++) {
+		int err = prepare_job_syncs(pvr_file, &job_data[i],
+					    signal_array);
+
+		if (err) {
+			*job_count = i;
+			return err;
+		}
+	}
+
+	return 0;
+}
+
+static struct pvr_job *
+create_job(struct pvr_device *pvr_dev,
+	   struct pvr_file *pvr_file,
+	   struct drm_pvr_job *args)
+{
+	struct pvr_job *job = NULL;
+	int err;
+
+	if (!args->cmd_stream || !args->cmd_stream_len)
+		return ERR_PTR(-EINVAL);
+
+	if (args->type != DRM_PVR_JOB_TYPE_GEOMETRY &&
+	    args->type != DRM_PVR_JOB_TYPE_FRAGMENT &&
+	    (args->hwrt.set_handle || args->hwrt.data_index))
+		return ERR_PTR(-EINVAL);
+
+	job = kzalloc(sizeof(*job), GFP_KERNEL);
+	if (!job)
+		return ERR_PTR(-ENOMEM);
+
+	kref_init(&job->ref_count);
+	job->type = args->type;
+	job->pvr_dev = pvr_dev;
+
+	err = xa_alloc(&pvr_dev->job_ids, &job->id, job, xa_limit_32b, GFP_KERNEL);
+	if (err)
+		goto err_put_job;
+
+	job->ctx = pvr_context_lookup(pvr_file, args->context_handle);
+	if (!job->ctx) {
+		err = -EINVAL;
+		goto err_put_job;
+	}
+
+	if (args->hwrt.set_handle) {
+		job->hwrt = pvr_hwrt_data_lookup(pvr_file, args->hwrt.set_handle,
+						 args->hwrt.data_index);
+		if (!job->hwrt) {
+			err = -EINVAL;
+			goto err_put_job;
+		}
+	}
+
+	err = pvr_job_fw_cmd_init(job, args);
+	if (err)
+		goto err_put_job;
+
+	err = pvr_queue_job_init(job);
+	if (err)
+		goto err_put_job;
+
+	return job;
+
+err_put_job:
+	pvr_job_put(job);
+	return ERR_PTR(err);
+}
+
+/**
+ * pvr_job_data_fini() - Cleanup all allocs used to set up job submission.
+ * @job_data: Job data array.
+ * @job_count: Number of members of @job_data.
+ */
+static void
+pvr_job_data_fini(struct pvr_job_data *job_data, u32 job_count)
+{
+	for (u32 i = 0; i < job_count; i++) {
+		pvr_job_put(job_data[i].job);
+		kvfree(job_data[i].sync_ops);
+	}
+}
+
+/**
+ * pvr_job_data_init() - Init an array of created jobs, associating them with
+ * the appropriate sync_ops args, which will be copied in.
+ * @pvr_dev: Target PowerVR device.
+ * @pvr_file: Pointer to PowerVR file structure.
+ * @job_args: Job args array copied from user.
+ * @job_count: Number of members of @job_args.
+ * @job_data_out: Job data array.
+ */
+static int pvr_job_data_init(struct pvr_device *pvr_dev,
+			     struct pvr_file *pvr_file,
+			     struct drm_pvr_job *job_args,
+			     u32 *job_count,
+			     struct pvr_job_data *job_data_out)
+{
+	int err = 0, i = 0;
+
+	for (; i < *job_count; i++) {
+		job_data_out[i].job =
+			create_job(pvr_dev, pvr_file, &job_args[i]);
+		err = PTR_ERR_OR_ZERO(job_data_out[i].job);
+
+		if (err) {
+			*job_count = i;
+			job_data_out[i].job = NULL;
+			goto err_cleanup;
+		}
+
+		err = PVR_UOBJ_GET_ARRAY(job_data_out[i].sync_ops,
+					 &job_args[i].sync_ops);
+		if (err) {
+			*job_count = i;
+			goto err_cleanup;
+		}
+
+		job_data_out[i].sync_op_count = job_args[i].sync_ops.count;
+	}
+
+	return 0;
+
+err_cleanup:
+	pvr_job_data_fini(job_data_out, i);
+
+	return err;
+}
+
+static void
+push_jobs(struct pvr_job_data *job_data, u32 job_count)
+{
+	for (u32 i = 0; i < job_count; i++)
+		pvr_queue_job_push(job_data[i].job);
+}
+
+static int
+prepare_fw_obj_resv(struct drm_exec *exec, struct pvr_fw_object *fw_obj)
+{
+	return drm_exec_prepare_obj(exec, gem_from_pvr_gem(fw_obj->gem), 1);
+}
+
+static int
+jobs_lock_all_objs(struct drm_exec *exec, struct pvr_job_data *job_data,
+		   u32 job_count)
+{
+	for (u32 i = 0; i < job_count; i++) {
+		struct pvr_job *job = job_data[i].job;
+
+		/* Grab a lock on the context, to guard against
+		 * concurrent submission to the same queue.
+		 */
+		int err = drm_exec_lock_obj(exec,
+					    gem_from_pvr_gem(job->ctx->fw_obj->gem));
+
+		if (err)
+			return err;
+
+		if (job->hwrt) {
+			err = prepare_fw_obj_resv(exec,
+						  job->hwrt->fw_obj);
+			if (err)
+				return err;
+		}
+	}
+
+	return 0;
+}
+
+static int
+prepare_job_resvs_for_each(struct drm_exec *exec, struct pvr_job_data *job_data,
+			   u32 job_count)
+{
+	drm_exec_until_all_locked(exec) {
+		int err = jobs_lock_all_objs(exec, job_data, job_count);
+
+		drm_exec_retry_on_contention(exec);
+		if (err)
+			return err;
+	}
+
+	return 0;
+}
+
+static void
+update_job_resvs(struct pvr_job *job)
+{
+	if (job->hwrt) {
+		enum dma_resv_usage usage = job->type == DRM_PVR_JOB_TYPE_GEOMETRY ?
+					    DMA_RESV_USAGE_WRITE : DMA_RESV_USAGE_READ;
+		struct drm_gem_object *obj = gem_from_pvr_gem(job->hwrt->fw_obj->gem);
+
+		dma_resv_add_fence(obj->resv, &job->base.s_fence->finished, usage);
+	}
+}
+
+static void
+update_job_resvs_for_each(struct pvr_job_data *job_data, u32 job_count)
+{
+	for (u32 i = 0; i < job_count; i++)
+		update_job_resvs(job_data[i].job);
+}
+
+static bool can_combine_jobs(struct pvr_job *a, struct pvr_job *b)
+{
+	struct pvr_job *geom_job = a, *frag_job = b;
+	struct dma_fence *fence;
+	unsigned long index;
+
+	/* Geometry and fragment jobs can be combined if they are queued to the
+	 * same context and target the same HWRT.
+	 */
+	if (a->type != DRM_PVR_JOB_TYPE_GEOMETRY ||
+	    b->type != DRM_PVR_JOB_TYPE_FRAGMENT ||
+	    a->ctx != b->ctx ||
+	    a->hwrt != b->hwrt)
+		return false;
+
+	xa_for_each(&frag_job->base.dependencies, index, fence) {
+		/* We combine when we see an explicit geom -> frag dep. */
+		if (&geom_job->base.s_fence->scheduled == fence)
+			return true;
+	}
+
+	return false;
+}
+
+static struct dma_fence *
+get_last_queued_job_scheduled_fence(struct pvr_queue *queue,
+				    struct pvr_job_data *job_data,
+				    u32 cur_job_pos)
+{
+	/* We iterate over the current job array in reverse order to grab the
+	 * last to-be-queued job targeting the same queue.
+	 */
+	for (u32 i = cur_job_pos; i > 0; i--) {
+		struct pvr_job *job = job_data[i - 1].job;
+
+		if (job->ctx == queue->ctx && job->type == queue->type)
+			return dma_fence_get(&job->base.s_fence->scheduled);
+	}
+
+	/* If we didn't find any, we just return the last queued job scheduled
+	 * fence attached to the queue.
+	 */
+	return dma_fence_get(queue->last_queued_job_scheduled_fence);
+}
+
+static int
+pvr_jobs_link_geom_frag(struct pvr_job_data *job_data, u32 *job_count)
+{
+	for (u32 i = 0; i < *job_count - 1; i++) {
+		struct pvr_job *geom_job = job_data[i].job;
+		struct pvr_job *frag_job = job_data[i + 1].job;
+		struct pvr_queue *frag_queue;
+		struct dma_fence *f;
+
+		if (!can_combine_jobs(job_data[i].job, job_data[i + 1].job))
+			continue;
+
+		/* The fragment job will be submitted by the geometry queue. We
+		 * need to make sure it comes after all the other fragment jobs
+		 * queued before it.
+		 */
+		frag_queue = pvr_context_get_queue_for_job(frag_job->ctx,
+							   frag_job->type);
+		f = get_last_queued_job_scheduled_fence(frag_queue, job_data,
+							i);
+		if (f) {
+			int err = drm_sched_job_add_dependency(&geom_job->base,
+							       f);
+			if (err) {
+				*job_count = i;
+				return err;
+			}
+		}
+
+		/* The KCCB slot will be reserved by the geometry job, so we can
+		 * drop the KCCB fence on the fragment job.
+		 */
+		pvr_kccb_fence_put(frag_job->kccb_fence);
+		frag_job->kccb_fence = NULL;
+
+		geom_job->paired_job = frag_job;
+		frag_job->paired_job = geom_job;
+
+		/* Skip the fragment job we just paired to the geometry job. */
+		i++;
+	}
+
+	return 0;
+}
+
+/**
+ * pvr_submit_jobs() - Submit jobs to the GPU
+ * @pvr_dev: Target PowerVR device.
+ * @pvr_file: Pointer to PowerVR file structure.
+ * @args: Ioctl args. On error, @args->jobs.count is updated with the index
+ * into the job array at which the error occurred.
+ *
+ * Jobs are pushed to their respective queues for asynchronous execution by the
+ * scheduler; completion is reported through the sync operations supplied with
+ * each job.
+ *
+ * Returns:
+ *  * 0 on success,
+ *  * -%EFAULT if arguments can not be copied from user space, or
+ *  * -%EINVAL on invalid arguments, or
+ *  * Any other error.
+ */
+int
+pvr_submit_jobs(struct pvr_device *pvr_dev, struct pvr_file *pvr_file,
+		struct drm_pvr_ioctl_submit_jobs_args *args)
+{
+	struct pvr_job_data *job_data = NULL;
+	struct drm_pvr_job *job_args;
+	struct xarray signal_array;
+	u32 jobs_alloced = 0;
+	struct drm_exec exec;
+	int err;
+
+	if (!args->jobs.count)
+		return -EINVAL;
+
+	err = PVR_UOBJ_GET_ARRAY(job_args, &args->jobs);
+	if (err)
+		return err;
+
+	job_data = kvmalloc_array(args->jobs.count, sizeof(*job_data),
+				  GFP_KERNEL | __GFP_ZERO);
+	if (!job_data) {
+		err = -ENOMEM;
+		goto out_free;
+	}
+
+	err = pvr_job_data_init(pvr_dev, pvr_file, job_args, &args->jobs.count,
+				job_data);
+	if (err)
+		goto out_free;
+
+	jobs_alloced = args->jobs.count;
+
+	drm_exec_init(&exec, DRM_EXEC_INTERRUPTIBLE_WAIT | DRM_EXEC_IGNORE_DUPLICATES);
+
+	xa_init_flags(&signal_array, XA_FLAGS_ALLOC);
+
+	err = prepare_job_syncs_for_each(pvr_file, job_data, &args->jobs.count,
+					 &signal_array);
+	if (err)
+		goto out_exec_fini;
+
+	err = prepare_job_resvs_for_each(&exec, job_data, args->jobs.count);
+	if (err)
+		goto out_exec_fini;
+
+	err = pvr_jobs_link_geom_frag(job_data, &args->jobs.count);
+	if (err)
+		goto out_exec_fini;
+
+	/* Anything after that point must succeed because we start exposing job
+	 * finished fences to the outside world.
+	 */
+	update_job_resvs_for_each(job_data, args->jobs.count);
+	push_jobs(job_data, args->jobs.count);
+	pvr_sync_signal_array_push_fences(&signal_array);
+	err = 0;
+
+out_exec_fini:
+	drm_exec_fini(&exec);
+	pvr_sync_signal_array_cleanup(&signal_array);
+	pvr_job_data_fini(job_data, jobs_alloced);
+
+out_free:
+	kvfree(job_data);
+	kvfree(job_args);
+
+	return err;
+}
diff --git a/drivers/gpu/drm/imagination/pvr_job.h b/drivers/gpu/drm/imagination/pvr_job.h
new file mode 100644
index 000000000000..dd5f9e9677f9
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_job.h
@@ -0,0 +1,161 @@
+/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#ifndef PVR_JOB_H
+#define PVR_JOB_H
+
+#include <uapi/drm/pvr_drm.h>
+
+#include <linux/kref.h>
+#include <linux/types.h>
+
+#include <drm/drm_gem.h>
+#include <drm/gpu_scheduler.h>
+
+#include "pvr_power.h"
+
+/* Forward declaration from "pvr_context.h". */
+struct pvr_context;
+
+/* Forward declarations from "pvr_device.h". */
+struct pvr_device;
+struct pvr_file;
+
+/* Forward declarations from "pvr_hwrt.h". */
+struct pvr_hwrt_data;
+
+/* Forward declaration from "pvr_queue.h". */
+struct pvr_queue;
+
+struct pvr_job {
+	/** @base: drm_sched_job object. */
+	struct drm_sched_job base;
+
+	/** @ref_count: Refcount for job. */
+	struct kref ref_count;
+
+	/** @type: Type of job. */
+	enum drm_pvr_job_type type;
+
+	/** @id: Job ID number. */
+	u32 id;
+
+	/**
+	 * @paired_job: Job paired to this job.
+	 *
+	 * This field is only meaningful for geometry and fragment jobs.
+	 *
+	 * Paired jobs are executed on the same context, and need to be submitted
+	 * atomically to the FW, to make sure the partial render logic has a
+	 * fragment job to execute when the Parameter Manager runs out of memory.
+	 *
+	 * The geometry job should point to the fragment job it's paired with,
+	 * and the fragment job should point to the geometry job it's paired with.
+	 */
+	struct pvr_job *paired_job;
+
+	/** @cccb_fence: Fence used to wait for CCCB space. */
+	struct dma_fence *cccb_fence;
+
+	/** @kccb_fence: Fence used to wait for KCCB space. */
+	struct dma_fence *kccb_fence;
+
+	/** @done_fence: Fence to signal when the job is done. */
+	struct dma_fence *done_fence;
+
+	/** @pvr_dev: Device pointer. */
+	struct pvr_device *pvr_dev;
+
+	/** @ctx: Pointer to owning context. */
+	struct pvr_context *ctx;
+
+	/** @cmd: Command data. Format depends on @type. */
+	void *cmd;
+
+	/** @cmd_len: Length of command data, in bytes. */
+	u32 cmd_len;
+
+	/**
+	 * @fw_ccb_cmd_type: Firmware CCB command type. Must be one of %ROGUE_FWIF_CCB_CMD_TYPE_*.
+	 */
+	u32 fw_ccb_cmd_type;
+
+	/** @hwrt: HWRT object. Will be NULL for compute and transfer jobs. */
+	struct pvr_hwrt_data *hwrt;
+
+	/**
+	 * @has_pm_ref: True if the job has a power ref, thus forcing the GPU to stay on until
+	 * the job is done.
+	 */
+	bool has_pm_ref;
+};
+
+/**
+ * pvr_job_get() - Take additional reference on job.
+ * @job: Job pointer.
+ *
+ * Call pvr_job_put() to release.
+ *
+ * Returns:
+ *  * The requested job on success, or
+ *  * %NULL if no job pointer passed.
+ */
+static __always_inline struct pvr_job *
+pvr_job_get(struct pvr_job *job)
+{
+	if (job)
+		kref_get(&job->ref_count);
+
+	return job;
+}
+
+void pvr_job_put(struct pvr_job *job);
+
+/**
+ * pvr_job_release_pm_ref() - Release the PM ref if the job acquired it.
+ * @job: The job to release the PM ref on.
+ */
+static __always_inline void
+pvr_job_release_pm_ref(struct pvr_job *job)
+{
+	if (job->has_pm_ref) {
+		pvr_power_put(job->pvr_dev);
+		job->has_pm_ref = false;
+	}
+}
+
+/**
+ * pvr_job_get_pm_ref() - Get a PM ref and attach it to the job.
+ * @job: The job to attach the PM ref to.
+ *
+ * Return:
+ *  * 0 on success, or
+ *  * Any error returned by pvr_power_get() otherwise.
+ */
+static __always_inline int
+pvr_job_get_pm_ref(struct pvr_job *job)
+{
+	int err;
+
+	if (job->has_pm_ref)
+		return 0;
+
+	err = pvr_power_get(job->pvr_dev);
+	if (!err)
+		job->has_pm_ref = true;
+
+	return err;
+}
+
+int pvr_job_wait_first_non_signaled_native_dep(struct pvr_job *job);
+
+bool pvr_job_non_native_deps_done(struct pvr_job *job);
+
+int pvr_job_fits_in_cccb(struct pvr_job *job, unsigned long native_dep_count);
+
+void pvr_job_submit(struct pvr_job *job);
+
+int pvr_submit_jobs(struct pvr_device *pvr_dev, struct pvr_file *pvr_file,
+		    struct drm_pvr_ioctl_submit_jobs_args *args);
+
+#endif /* PVR_JOB_H */
diff --git a/drivers/gpu/drm/imagination/pvr_power.c b/drivers/gpu/drm/imagination/pvr_power.c
index 90dfdf22044c..72f81fbc7e6b 100644
--- a/drivers/gpu/drm/imagination/pvr_power.c
+++ b/drivers/gpu/drm/imagination/pvr_power.c
@@ -5,6 +5,7 @@
 #include "pvr_fw.h"
 #include "pvr_fw_startstop.h"
 #include "pvr_power.h"
+#include "pvr_queue.h"
 #include "pvr_rogue_fwif.h"
 
 #include <drm/drm_drv.h>
@@ -149,6 +150,21 @@ pvr_watchdog_kccb_stalled(struct pvr_device *pvr_dev)
 			pvr_dev->watchdog.kccb_stall_count = 0;
 			return true;
 		}
+	} else if (pvr_dev->watchdog.old_kccb_cmds_executed == kccb_cmds_executed) {
+		bool has_active_contexts;
+
+		mutex_lock(&pvr_dev->queues.lock);
+		has_active_contexts = list_empty(&pvr_dev->queues.active);
+		mutex_unlock(&pvr_dev->queues.lock);
+
+		if (has_active_contexts) {
+			/* Send a HEALTH_CHECK command so we can verify FW is still alive. */
+			struct rogue_fwif_kccb_cmd health_check_cmd;
+
+			health_check_cmd.cmd_type = ROGUE_FWIF_KCCB_CMD_HEALTH_CHECK;
+
+			pvr_kccb_send_cmd_powered(pvr_dev, &health_check_cmd, NULL);
+		}
 	} else {
 		pvr_dev->watchdog.old_kccb_cmds_executed = kccb_cmds_executed;
 		pvr_dev->watchdog.kccb_stall_count = 0;
@@ -312,6 +328,7 @@ pvr_power_device_idle(struct device *dev)
 int
 pvr_power_reset(struct pvr_device *pvr_dev, bool hard_reset)
 {
+	bool queues_disabled = false;
 	int err;
 
 	/*
@@ -326,6 +343,11 @@ pvr_power_reset(struct pvr_device *pvr_dev, bool hard_reset)
 	disable_irq(pvr_dev->irq);
 
 	do {
+		if (hard_reset) {
+			pvr_queue_device_pre_reset(pvr_dev);
+			queues_disabled = true;
+		}
+
 		err = pvr_power_fw_disable(pvr_dev, hard_reset);
 		if (!err) {
 			if (hard_reset) {
@@ -361,6 +383,9 @@ pvr_power_reset(struct pvr_device *pvr_dev, bool hard_reset)
 		}
 	} while (err);
 
+	if (queues_disabled)
+		pvr_queue_device_post_reset(pvr_dev);
+
 	enable_irq(pvr_dev->irq);
 
 	up_write(&pvr_dev->reset_sem);
@@ -375,6 +400,9 @@ pvr_power_reset(struct pvr_device *pvr_dev, bool hard_reset)
 
 	/* Leave IRQs disabled if the device is lost. */
 
+	if (queues_disabled)
+		pvr_queue_device_post_reset(pvr_dev);
+
 	up_write(&pvr_dev->reset_sem);
 
 	pvr_power_put(pvr_dev);
diff --git a/drivers/gpu/drm/imagination/pvr_queue.c b/drivers/gpu/drm/imagination/pvr_queue.c
new file mode 100644
index 000000000000..3a5a73fb7f0a
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_queue.c
@@ -0,0 +1,1455 @@
+// SPDX-License-Identifier: GPL-2.0 OR MIT
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#include <drm/drm_managed.h>
+#include <drm/gpu_scheduler.h>
+
+#include "pvr_cccb.h"
+#include "pvr_context.h"
+#include "pvr_device.h"
+#include "pvr_drv.h"
+#include "pvr_job.h"
+#include "pvr_queue.h"
+#include "pvr_vm.h"
+
+#include "pvr_rogue_fwif_client.h"
+
+#define MAX_DEADLINE_MS 30000
+
+#define CTX_COMPUTE_CCCB_SIZE_LOG2 15
+#define CTX_FRAG_CCCB_SIZE_LOG2 15
+#define CTX_GEOM_CCCB_SIZE_LOG2 15
+#define CTX_TRANSFER_CCCB_SIZE_LOG2 15
+
+static int get_xfer_ctx_state_size(struct pvr_device *pvr_dev)
+{
+	u32 num_isp_store_registers;
+
+	if (PVR_HAS_FEATURE(pvr_dev, xe_memory_hierarchy)) {
+		num_isp_store_registers = 1;
+	} else {
+		int err;
+
+		err = PVR_FEATURE_VALUE(pvr_dev, num_isp_ipp_pipes, &num_isp_store_registers);
+		if (WARN_ON(err))
+			return err;
+	}
+
+	return sizeof(struct rogue_fwif_frag_ctx_state) +
+	       (num_isp_store_registers *
+		sizeof(((struct rogue_fwif_frag_ctx_state *)0)->frag_reg_isp_store[0]));
+}
+
+static int get_frag_ctx_state_size(struct pvr_device *pvr_dev)
+{
+	u32 num_isp_store_registers;
+	int err;
+
+	if (PVR_HAS_FEATURE(pvr_dev, xe_memory_hierarchy)) {
+		err = PVR_FEATURE_VALUE(pvr_dev, num_raster_pipes, &num_isp_store_registers);
+		if (WARN_ON(err))
+			return err;
+
+		if (PVR_HAS_FEATURE(pvr_dev, gpu_multicore_support)) {
+			u32 xpu_max_slaves;
+
+			err = PVR_FEATURE_VALUE(pvr_dev, xpu_max_slaves, &xpu_max_slaves);
+			if (WARN_ON(err))
+				return err;
+
+			num_isp_store_registers *= (1 + xpu_max_slaves);
+		}
+	} else {
+		err = PVR_FEATURE_VALUE(pvr_dev, num_isp_ipp_pipes, &num_isp_store_registers);
+		if (WARN_ON(err))
+			return err;
+	}
+
+	return sizeof(struct rogue_fwif_frag_ctx_state) +
+	       (num_isp_store_registers *
+		sizeof(((struct rogue_fwif_frag_ctx_state *)0)->frag_reg_isp_store[0]));
+}
+
+static int get_ctx_state_size(struct pvr_device *pvr_dev, enum drm_pvr_job_type type)
+{
+	switch (type) {
+	case DRM_PVR_JOB_TYPE_GEOMETRY:
+		return sizeof(struct rogue_fwif_geom_ctx_state);
+	case DRM_PVR_JOB_TYPE_FRAGMENT:
+		return get_frag_ctx_state_size(pvr_dev);
+	case DRM_PVR_JOB_TYPE_COMPUTE:
+		return sizeof(struct rogue_fwif_compute_ctx_state);
+	case DRM_PVR_JOB_TYPE_TRANSFER_FRAG:
+		return get_xfer_ctx_state_size(pvr_dev);
+	}
+
+	WARN(1, "Invalid queue type");
+	return -EINVAL;
+}
+
+static u32 get_ctx_offset(enum drm_pvr_job_type type)
+{
+	switch (type) {
+	case DRM_PVR_JOB_TYPE_GEOMETRY:
+		return offsetof(struct rogue_fwif_fwrendercontext, geom_context);
+	case DRM_PVR_JOB_TYPE_FRAGMENT:
+		return offsetof(struct rogue_fwif_fwrendercontext, frag_context);
+	case DRM_PVR_JOB_TYPE_COMPUTE:
+		return offsetof(struct rogue_fwif_fwcomputecontext, cdm_context);
+	case DRM_PVR_JOB_TYPE_TRANSFER_FRAG:
+		return offsetof(struct rogue_fwif_fwtransfercontext, tq_context);
+	}
+
+	return 0;
+}
+
+static const char *
+pvr_queue_fence_get_driver_name(struct dma_fence *f)
+{
+	return PVR_DRIVER_NAME;
+}
+
+static void pvr_queue_fence_release(struct dma_fence *f)
+{
+	struct pvr_queue_fence *fence = container_of(f, struct pvr_queue_fence, base);
+
+	pvr_context_put(fence->queue->ctx);
+	dma_fence_free(f);
+}
+
+static const char *
+pvr_queue_job_fence_get_timeline_name(struct dma_fence *f)
+{
+	struct pvr_queue_fence *fence = container_of(f, struct pvr_queue_fence, base);
+
+	switch (fence->queue->type) {
+	case DRM_PVR_JOB_TYPE_GEOMETRY:
+		return "geometry";
+
+	case DRM_PVR_JOB_TYPE_FRAGMENT:
+		return "fragment";
+
+	case DRM_PVR_JOB_TYPE_COMPUTE:
+		return "compute";
+
+	case DRM_PVR_JOB_TYPE_TRANSFER_FRAG:
+		return "transfer";
+	}
+
+	WARN(1, "Invalid queue type");
+	return "invalid";
+}
+
+static const char *
+pvr_queue_cccb_fence_get_timeline_name(struct dma_fence *f)
+{
+	struct pvr_queue_fence *fence = container_of(f, struct pvr_queue_fence, base);
+
+	switch (fence->queue->type) {
+	case DRM_PVR_JOB_TYPE_GEOMETRY:
+		return "geometry-cccb";
+
+	case DRM_PVR_JOB_TYPE_FRAGMENT:
+		return "fragment-cccb";
+
+	case DRM_PVR_JOB_TYPE_COMPUTE:
+		return "compute-cccb";
+
+	case DRM_PVR_JOB_TYPE_TRANSFER_FRAG:
+		return "transfer-cccb";
+	}
+
+	WARN(1, "Invalid queue type");
+	return "invalid";
+}
+
+static const struct dma_fence_ops pvr_queue_job_fence_ops = {
+	.get_driver_name = pvr_queue_fence_get_driver_name,
+	.get_timeline_name = pvr_queue_job_fence_get_timeline_name,
+	.release = pvr_queue_fence_release,
+};
+
+/**
+ * to_pvr_queue_job_fence() - Return a pvr_queue_fence object if the fence is
+ * backed by a UFO.
+ * @f: The dma_fence to turn into a pvr_queue_fence.
+ *
+ * Return:
+ *  * A non-NULL pvr_queue_fence object if the dma_fence is backed by a UFO, or
+ *  * NULL otherwise.
+ */
+static struct pvr_queue_fence *
+to_pvr_queue_job_fence(struct dma_fence *f)
+{
+	struct drm_sched_fence *sched_fence = to_drm_sched_fence(f);
+
+	if (sched_fence)
+		f = sched_fence->parent;
+
+	if (f && f->ops == &pvr_queue_job_fence_ops)
+		return container_of(f, struct pvr_queue_fence, base);
+
+	return NULL;
+}
+
+static const struct dma_fence_ops pvr_queue_cccb_fence_ops = {
+	.get_driver_name = pvr_queue_fence_get_driver_name,
+	.get_timeline_name = pvr_queue_cccb_fence_get_timeline_name,
+	.release = pvr_queue_fence_release,
+};
+
+/**
+ * pvr_queue_fence_put() - Put wrapper for pvr_queue_fence objects.
+ * @f: The dma_fence object to put.
+ *
+ * If the pvr_queue_fence has been initialized, we call dma_fence_put(),
+ * otherwise we free the object with dma_fence_free(). This allows us
+ * to do the right thing whether or not pvr_queue_fence_init() has been
+ * called.
+ */
+static void pvr_queue_fence_put(struct dma_fence *f)
+{
+	if (!f)
+		return;
+
+	if (WARN_ON(f->ops &&
+		    f->ops != &pvr_queue_cccb_fence_ops &&
+		    f->ops != &pvr_queue_job_fence_ops))
+		return;
+
+	/* If the fence hasn't been initialized yet, free the object directly. */
+	if (f->ops)
+		dma_fence_put(f);
+	else
+		dma_fence_free(f);
+}
+
+/**
+ * pvr_queue_fence_alloc() - Allocate a pvr_queue_fence fence object
+ *
+ * Call this function to allocate job CCCB and done fences. This only
+ * allocates the objects. Initialization happens when the underlying
+ * dma_fence object is to be returned to drm_sched (in prepare_job() or
+ * run_job()).
+ *
+ * Return:
+ *  * A valid pointer if the allocation succeeds, or
+ *  * NULL if the allocation fails.
+ */
+static struct dma_fence *
+pvr_queue_fence_alloc(void)
+{
+	struct pvr_queue_fence *fence;
+
+	fence = kzalloc(sizeof(*fence), GFP_KERNEL);
+	if (!fence)
+		return NULL;
+
+	return &fence->base;
+}
+
+/**
+ * pvr_queue_fence_init() - Initializes a pvr_queue_fence object.
+ * @f: The fence to initialize
+ * @queue: The queue this fence belongs to.
+ * @fence_ops: The fence operations.
+ * @fence_ctx: The fence context.
+ *
+ * Wrapper around dma_fence_init() that takes care of initializing the
+ * pvr_queue_fence::queue field too.
+ */
+static void
+pvr_queue_fence_init(struct dma_fence *f,
+		     struct pvr_queue *queue,
+		     const struct dma_fence_ops *fence_ops,
+		     struct pvr_queue_fence_ctx *fence_ctx)
+{
+	struct pvr_queue_fence *fence = container_of(f, struct pvr_queue_fence, base);
+
+	pvr_context_get(queue->ctx);
+	fence->queue = queue;
+	dma_fence_init(&fence->base, fence_ops,
+		       &fence_ctx->lock, fence_ctx->id,
+		       atomic_inc_return(&fence_ctx->seqno));
+}
+
+/**
+ * pvr_queue_cccb_fence_init() - Initializes a CCCB fence object.
+ * @fence: The fence to initialize.
+ * @queue: The queue this fence belongs to.
+ *
+ * Initializes a fence that can be used to wait for CCCB space.
+ *
+ * Should be called in the ::prepare_job() path, so the fence returned to
+ * drm_sched is valid.
+ */
+static void
+pvr_queue_cccb_fence_init(struct dma_fence *fence, struct pvr_queue *queue)
+{
+	pvr_queue_fence_init(fence, queue, &pvr_queue_cccb_fence_ops,
+			     &queue->cccb_fence_ctx.base);
+}
+
+/**
+ * pvr_queue_job_fence_init() - Initializes a job done fence object.
+ * @fence: The fence to initialize.
+ * @queue: The queue this fence belongs to.
+ *
+ * Initializes a fence that will be signaled when the GPU is done executing
+ * a job.
+ *
+ * Should be called in the ::run_job() path, so the fence returned to drm_sched
+ * is valid.
+ */
+static void
+pvr_queue_job_fence_init(struct dma_fence *fence, struct pvr_queue *queue)
+{
+	pvr_queue_fence_init(fence, queue, &pvr_queue_job_fence_ops,
+			     &queue->job_fence_ctx);
+}
+
+/**
+ * pvr_queue_fence_ctx_init() - Queue fence context initialization.
+ * @fence_ctx: The context to initialize
+ */
+static void
+pvr_queue_fence_ctx_init(struct pvr_queue_fence_ctx *fence_ctx)
+{
+	spin_lock_init(&fence_ctx->lock);
+	fence_ctx->id = dma_fence_context_alloc(1);
+	atomic_set(&fence_ctx->seqno, 0);
+}
+
+static u32 ufo_cmds_size(u32 elem_count)
+{
+	/* We can pass at most ROGUE_FWIF_CCB_CMD_MAX_UFOS UFOs per UFO-related command. */
+	u32 full_cmd_count = elem_count / ROGUE_FWIF_CCB_CMD_MAX_UFOS;
+	u32 remaining_elems = elem_count % ROGUE_FWIF_CCB_CMD_MAX_UFOS;
+	u32 size = full_cmd_count *
+		   pvr_cccb_get_size_of_cmd_with_hdr(ROGUE_FWIF_CCB_CMD_MAX_UFOS *
+						     sizeof(struct rogue_fwif_ufo));
+
+	if (remaining_elems) {
+		size += pvr_cccb_get_size_of_cmd_with_hdr(remaining_elems *
+							  sizeof(struct rogue_fwif_ufo));
+	}
+
+	return size;
+}
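+
+/*
+ * Illustrative sizing note (not part of the driver logic): assuming, purely
+ * for the sake of the arithmetic, that ROGUE_FWIF_CCB_CMD_MAX_UFOS were 32,
+ * ufo_cmds_size(70) would cover two full commands of 32 UFOs each plus one
+ * command carrying the remaining 6 UFOs, i.e.
+ * 2 * pvr_cccb_get_size_of_cmd_with_hdr(32 * sizeof(struct rogue_fwif_ufo)) +
+ * pvr_cccb_get_size_of_cmd_with_hdr(6 * sizeof(struct rogue_fwif_ufo)).
+ */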
+
+static u32 job_cmds_size(struct pvr_job *job, u32 ufo_wait_count)
+{
+	/* One UFO cmd for the fence signaling, one UFO cmd per native fence wait,
+	 * and a command for the job itself.
+	 */
+	return ufo_cmds_size(1) + ufo_cmds_size(ufo_wait_count) +
+	       pvr_cccb_get_size_of_cmd_with_hdr(job->cmd_len);
+}
+
+/**
+ * job_count_remaining_native_deps() - Count the number of non-signaled native dependencies.
+ * @job: Job to operate on.
+ *
+ * Returns: Number of non-signaled native deps remaining.
+ */
+static unsigned long job_count_remaining_native_deps(struct pvr_job *job)
+{
+	unsigned long remaining_count = 0;
+	struct dma_fence *fence = NULL;
+	unsigned long index;
+
+	xa_for_each(&job->base.dependencies, index, fence) {
+		struct pvr_queue_fence *jfence;
+
+		jfence = to_pvr_queue_job_fence(fence);
+		if (!jfence)
+			continue;
+
+		if (!dma_fence_is_signaled(&jfence->base))
+			remaining_count++;
+	}
+
+	return remaining_count;
+}
+
+/**
+ * pvr_queue_get_job_cccb_fence() - Get the CCCB fence attached to a job.
+ * @queue: The queue this job will be submitted to.
+ * @job: The job to get the CCCB fence on.
+ *
+ * The CCCB fence is a synchronization primitive allowing us to delay job
+ * submission until there's enough space in the CCCB to submit the job.
+ *
+ * Return:
+ *  * NULL if there's enough space in the CCCB to submit this job, or
+ *  * A valid dma_fence object otherwise.
+ */
+static struct dma_fence *
+pvr_queue_get_job_cccb_fence(struct pvr_queue *queue, struct pvr_job *job)
+{
+	struct pvr_queue_fence *cccb_fence;
+	unsigned int native_deps_remaining;
+
+	/* If the fence is NULL, that means we already checked that we had
+	 * enough space in the CCCB for our job.
+	 */
+	if (!job->cccb_fence)
+		return NULL;
+
+	mutex_lock(&queue->cccb_fence_ctx.job_lock);
+
+	/* Count remaining native dependencies and check if the job fits in the CCCB. */
+	native_deps_remaining = job_count_remaining_native_deps(job);
+	if (pvr_cccb_cmdseq_fits(&queue->cccb, job_cmds_size(job, native_deps_remaining))) {
+		pvr_queue_fence_put(job->cccb_fence);
+		job->cccb_fence = NULL;
+		goto out_unlock;
+	}
+
+	/* There should be no job attached to the CCCB fence context:
+	 * drm_sched_entity guarantees that jobs are submitted one at a time.
+	 */
+	if (WARN_ON(queue->cccb_fence_ctx.job))
+		pvr_job_put(queue->cccb_fence_ctx.job);
+
+	queue->cccb_fence_ctx.job = pvr_job_get(job);
+
+	/* Initialize the fence before returning it. */
+	cccb_fence = container_of(job->cccb_fence, struct pvr_queue_fence, base);
+	if (!WARN_ON(cccb_fence->queue))
+		pvr_queue_cccb_fence_init(job->cccb_fence, queue);
+
+out_unlock:
+	mutex_unlock(&queue->cccb_fence_ctx.job_lock);
+
+	return dma_fence_get(job->cccb_fence);
+}
+
+/**
+ * pvr_queue_get_job_kccb_fence() - Get the KCCB fence attached to a job.
+ * @queue: The queue this job will be submitted to.
+ * @job: The job to get the KCCB fence on.
+ *
+ * The KCCB fence is a synchronization primitive allowing us to delay job
+ * submission until there's enough space in the KCCB to submit the job.
+ *
+ * Return:
+ *  * NULL if there's enough space in the KCCB to submit this job, or
+ *  * A valid dma_fence object otherwise.
+ */
+static struct dma_fence *
+pvr_queue_get_job_kccb_fence(struct pvr_queue *queue, struct pvr_job *job)
+{
+	struct pvr_device *pvr_dev = queue->ctx->pvr_dev;
+	struct dma_fence *kccb_fence = NULL;
+
+	/* If the fence is NULL, that means we already checked that we had
+	 * enough space in the KCCB for our job.
+	 */
+	if (!job->kccb_fence)
+		return NULL;
+
+	if (!WARN_ON(job->kccb_fence->ops)) {
+		kccb_fence = pvr_kccb_reserve_slot(pvr_dev, job->kccb_fence);
+		job->kccb_fence = NULL;
+	}
+
+	return kccb_fence;
+}
+
+static struct dma_fence *
+pvr_queue_get_paired_frag_job_dep(struct pvr_queue *queue, struct pvr_job *job)
+{
+	struct pvr_job *frag_job = job->type == DRM_PVR_JOB_TYPE_GEOMETRY ?
+				   job->paired_job : NULL;
+	struct dma_fence *f;
+	unsigned long index;
+
+	if (!frag_job)
+		return NULL;
+
+	xa_for_each(&frag_job->base.dependencies, index, f) {
+		/* Skip already signaled fences. */
+		if (dma_fence_is_signaled(f))
+			continue;
+
+		/* Skip our own fence. */
+		if (f == &job->base.s_fence->scheduled)
+			continue;
+
+		return dma_fence_get(f);
+	}
+
+	return frag_job->base.sched->ops->prepare_job(&frag_job->base, &queue->entity);
+}
+
+/**
+ * pvr_queue_prepare_job() - Return the next internal dependencies expressed as a dma_fence.
+ * @sched_job: The job to query the next internal dependency on
+ * @s_entity: The entity this job is queued on.
+ *
+ * After iterating over drm_sched_job::dependencies, drm_sched lets the driver return
+ * its own internal dependencies. We use this function to return our internal dependencies.
+ */
+static struct dma_fence *
+pvr_queue_prepare_job(struct drm_sched_job *sched_job,
+		      struct drm_sched_entity *s_entity)
+{
+	struct pvr_job *job = container_of(sched_job, struct pvr_job, base);
+	struct pvr_queue *queue = container_of(s_entity, struct pvr_queue, entity);
+	struct dma_fence *internal_dep = NULL;
+
+	/* CCCB fence is used to make sure we have enough space in the CCCB to
+	 * submit our commands.
+	 */
+	internal_dep = pvr_queue_get_job_cccb_fence(queue, job);
+
+	/* KCCB fence is used to make sure we have a KCCB slot to queue our
+	 * CMD_KICK.
+	 */
+	if (!internal_dep)
+		internal_dep = pvr_queue_get_job_kccb_fence(queue, job);
+
+	/* Any extra internal dependency should be added here, using the
+	 * following pattern:
+	 *
+	 *	if (!internal_dep)
+	 *		internal_dep = pvr_queue_get_job_xxxx_fence(queue, job);
+	 */
+
+	/* The paired job fence should come last, when everything else is ready. */
+	if (!internal_dep)
+		internal_dep = pvr_queue_get_paired_frag_job_dep(queue, job);
+
+	return internal_dep;
+}
+
+/**
+ * pvr_queue_update_active_state_locked() - Update the queue active state.
+ * @queue: Queue to update the state on.
+ *
+ * Locked version of pvr_queue_update_active_state(). Must be called with
+ * pvr_device::queue::lock held.
+ */
+static void pvr_queue_update_active_state_locked(struct pvr_queue *queue)
+{
+	struct pvr_device *pvr_dev = queue->ctx->pvr_dev;
+
+	lockdep_assert_held(&pvr_dev->queues.lock);
+
+	/* The queue is temporarily out of any list when it's being reset,
+	 * we don't want a call to pvr_queue_update_active_state_locked()
+	 * to re-insert it behind our back.
+	 */
+	if (list_empty(&queue->node))
+		return;
+
+	if (!atomic_read(&queue->in_flight_job_count))
+		list_move_tail(&queue->node, &pvr_dev->queues.idle);
+	else
+		list_move_tail(&queue->node, &pvr_dev->queues.active);
+}
+
+/**
+ * pvr_queue_update_active_state() - Update the queue active state.
+ * @queue: Queue to update the state on.
+ *
+ * Active state is based on the in_flight_job_count value.
+ *
+ * Updating the active state implies moving the queue in or out of the
+ * active queue list, which also defines whether the queue is checked
+ * or not when a FW event is received.
+ *
+ * This function should be called any time a job is submitted or its done
+ * fence is signaled.
+ */
+static void pvr_queue_update_active_state(struct pvr_queue *queue)
+{
+	struct pvr_device *pvr_dev = queue->ctx->pvr_dev;
+
+	mutex_lock(&pvr_dev->queues.lock);
+	pvr_queue_update_active_state_locked(queue);
+	mutex_unlock(&pvr_dev->queues.lock);
+}
+
+static void pvr_queue_submit_job_to_cccb(struct pvr_job *job)
+{
+	struct pvr_queue *queue = container_of(job->base.sched, struct pvr_queue, scheduler);
+	struct rogue_fwif_ufo ufos[ROGUE_FWIF_CCB_CMD_MAX_UFOS];
+	struct pvr_cccb *cccb = &queue->cccb;
+	struct pvr_queue_fence *jfence;
+	struct dma_fence *fence;
+	unsigned long index;
+	u32 ufo_count = 0;
+
+	/* Initialize the done_fence, so we can signal it. */
+	pvr_queue_job_fence_init(job->done_fence, queue);
+
+	/* We need to add the queue to the active list before updating the CCCB,
+	 * otherwise we might miss the FW event informing us that something
+	 * happened on this queue.
+	 */
+	atomic_inc(&queue->in_flight_job_count);
+	pvr_queue_update_active_state(queue);
+
+	xa_for_each(&job->base.dependencies, index, fence) {
+		jfence = to_pvr_queue_job_fence(fence);
+		if (!jfence)
+			continue;
+
+		/* Skip the partial render fence, we will place it at the end. */
+		if (job->type == DRM_PVR_JOB_TYPE_FRAGMENT && job->paired_job &&
+		    &job->paired_job->base.s_fence->scheduled == fence)
+			continue;
+
+		if (dma_fence_is_signaled(&jfence->base))
+			continue;
+
+		pvr_fw_object_get_fw_addr(jfence->queue->timeline_ufo.fw_obj,
+					  &ufos[ufo_count].addr);
+		ufos[ufo_count++].value = jfence->base.seqno;
+
+		if (ufo_count == ARRAY_SIZE(ufos)) {
+			pvr_cccb_write_command_with_header(cccb, ROGUE_FWIF_CCB_CMD_TYPE_FENCE_PR,
+							   sizeof(ufos), ufos, 0, 0);
+			ufo_count = 0;
+		}
+	}
+
+	/* Partial render fence goes last. */
+	if (job->type == DRM_PVR_JOB_TYPE_FRAGMENT && job->paired_job) {
+		jfence = to_pvr_queue_job_fence(job->paired_job->done_fence);
+		if (!WARN_ON(!jfence)) {
+			pvr_fw_object_get_fw_addr(jfence->queue->timeline_ufo.fw_obj,
+						  &ufos[ufo_count].addr);
+			ufos[ufo_count++].value = job->paired_job->done_fence->seqno;
+		}
+	}
+
+	if (ufo_count) {
+		pvr_cccb_write_command_with_header(cccb, ROGUE_FWIF_CCB_CMD_TYPE_FENCE_PR,
+						   sizeof(ufos[0]) * ufo_count, ufos, 0, 0);
+	}
+
+	if (job->type == DRM_PVR_JOB_TYPE_GEOMETRY && job->paired_job) {
+		struct rogue_fwif_cmd_geom *cmd = job->cmd;
+
+		/* Reference value for the partial render test is the current queue fence
+		 * seqno minus one.
+		 */
+		pvr_fw_object_get_fw_addr(queue->timeline_ufo.fw_obj,
+					  &cmd->partial_render_geom_frag_fence.addr);
+		cmd->partial_render_geom_frag_fence.value = job->done_fence->seqno - 1;
+	}
+
+	/* Submit job to FW */
+	pvr_cccb_write_command_with_header(cccb, job->fw_ccb_cmd_type, job->cmd_len, job->cmd,
+					   job->id, job->id);
+
+	/* Signal the job fence. */
+	pvr_fw_object_get_fw_addr(queue->timeline_ufo.fw_obj, &ufos[0].addr);
+	ufos[0].value = job->done_fence->seqno;
+	pvr_cccb_write_command_with_header(cccb, ROGUE_FWIF_CCB_CMD_TYPE_UPDATE,
+					   sizeof(ufos[0]), ufos, 0, 0);
+
+	/* The job we submit is added to the drm_gpu_scheduler::pending_list in
+	 * drm_sched_job_begin() which is called after ::run_job() returns. But the
+	 * GPU might be done executing the job before drm_sched_main() had a chance
+	 * to queue it to the pending_list, resulting in a lost event. Work around
+	 * this race by keeping track of the last submitted job so we can check it in
+	 * pvr_queue_signal_done_fences() if this job is not part of the pending_list
+	 * already.
+	 * No need to keep a reference to the job object here, because we only
+	 * check the pointer value.
+	 */
+	spin_lock(&queue->scheduler.job_list_lock);
+	queue->last_submitted_job = job;
+	spin_unlock(&queue->scheduler.job_list_lock);
+}
+
+/**
+ * pvr_queue_run_job() - Submit a job to the FW.
+ * @sched_job: The job to submit.
+ *
+ * This function is called when all non-native dependencies have been met and
+ * when the commands resulting from this job are guaranteed to fit in the CCCB.
+ */
+static struct dma_fence *pvr_queue_run_job(struct drm_sched_job *sched_job)
+{
+	struct pvr_job *job = container_of(sched_job, struct pvr_job, base);
+	struct pvr_device *pvr_dev = job->pvr_dev;
+	int err;
+
+	/* The fragment job is issued along with the geometry job when we use combined
+	 * geom+frag kicks. When we get here, we should simply return the
+	 * done_fence that's been initialized earlier.
+	 */
+	if (job->type == DRM_PVR_JOB_TYPE_FRAGMENT && job->done_fence->ops)
+		return dma_fence_get(job->done_fence);
+
+	/* The only kinds of job that can be paired are geometry and fragment, and
+	 * we bail out early if we see a fragment job that's paired with a geometry
+	 * job.
+	 * Paired jobs must also target the same context and point to the same
+	 * HWRT.
+	 */
+	if (WARN_ON(job->paired_job &&
+		    (job->type != DRM_PVR_JOB_TYPE_GEOMETRY ||
+		     job->paired_job->type != DRM_PVR_JOB_TYPE_FRAGMENT ||
+		     job->hwrt != job->paired_job->hwrt ||
+		     job->ctx != job->paired_job->ctx)))
+		return ERR_PTR(-EINVAL);
+
+	err = pvr_job_get_pm_ref(job);
+	if (WARN_ON(err))
+		return ERR_PTR(err);
+
+	if (job->paired_job) {
+		err = pvr_job_get_pm_ref(job->paired_job);
+		if (WARN_ON(err))
+			return ERR_PTR(err);
+	}
+
+	/* Submit our job to the CCCB */
+	pvr_queue_submit_job_to_cccb(job);
+
+	if (job->paired_job) {
+		struct pvr_job *geom_job = job;
+		struct pvr_job *frag_job = job->paired_job;
+		struct pvr_queue *geom_queue = job->ctx->queues.geometry;
+		struct pvr_queue *frag_queue = job->ctx->queues.fragment;
+
+		/* Submit the fragment job along with the geometry job and send a combined kick. */
+		pvr_queue_submit_job_to_cccb(frag_job);
+		pvr_cccb_send_kccb_combined_kick(pvr_dev,
+						 &geom_queue->cccb, &frag_queue->cccb,
+						 pvr_context_get_fw_addr(geom_job->ctx) +
+						 geom_queue->ctx_offset,
+						 pvr_context_get_fw_addr(frag_job->ctx) +
+						 frag_queue->ctx_offset,
+						 job->hwrt,
+						 frag_job->fw_ccb_cmd_type ==
+						 ROGUE_FWIF_CCB_CMD_TYPE_FRAG_PR);
+		geom_job->paired_job = NULL;
+		frag_job->paired_job = NULL;
+	} else {
+		struct pvr_queue *queue = container_of(job->base.sched,
+						       struct pvr_queue, scheduler);
+
+		pvr_cccb_send_kccb_kick(pvr_dev, &queue->cccb,
+					pvr_context_get_fw_addr(job->ctx) + queue->ctx_offset,
+					job->hwrt);
+	}
+
+	return dma_fence_get(job->done_fence);
+}
+
+static void pvr_queue_stop(struct pvr_queue *queue, struct pvr_job *bad_job)
+{
+	drm_sched_stop(&queue->scheduler, bad_job ? &bad_job->base : NULL);
+}
+
+static void pvr_queue_start(struct pvr_queue *queue)
+{
+	struct pvr_job *job;
+
+	/* Make sure we CPU-signal the UFO object, so other queues don't get
+	 * blocked waiting on it.
+	 */
+	*queue->timeline_ufo.value = atomic_read(&queue->job_fence_ctx.seqno);
+
+	list_for_each_entry(job, &queue->scheduler.pending_list, base.list) {
+		if (dma_fence_is_signaled(job->done_fence)) {
+			/* Jobs might have completed after drm_sched_stop() was called.
+			 * In that case, re-assign the parent field to the done_fence.
+			 */
+			WARN_ON(job->base.s_fence->parent);
+			job->base.s_fence->parent = dma_fence_get(job->done_fence);
+		} else {
+			/* If we had unfinished jobs, flag the entity as guilty so no
+			 * new job can be submitted.
+			 */
+			atomic_set(&queue->ctx->faulty, 1);
+
+			if (job == queue->last_submitted_job) {
+				queue->last_submitted_job = NULL;
+				dma_fence_signal(job->done_fence);
+				pvr_job_release_pm_ref(job);
+				atomic_dec(&queue->in_flight_job_count);
+			}
+		}
+	}
+
+	drm_sched_start(&queue->scheduler, true);
+}
+
+/**
+ * pvr_queue_timedout_job() - Handle a job timeout event.
+ * @s_job: The job this timeout occurred on.
+ *
+ * FIXME: We don't do anything here to unblock the situation, we just stop+start
+ * the scheduler, and re-assign parent fences in the middle.
+ *
+ * Return:
+ *  * DRM_GPU_SCHED_STAT_NOMINAL.
+ */
+static enum drm_gpu_sched_stat
+pvr_queue_timedout_job(struct drm_sched_job *s_job)
+{
+	struct drm_gpu_scheduler *sched = s_job->sched;
+	struct pvr_queue *queue = container_of(sched, struct pvr_queue, scheduler);
+	struct pvr_device *pvr_dev = queue->ctx->pvr_dev;
+	struct pvr_job *job;
+	u32 job_count = 0;
+
+	dev_err(sched->dev, "Job timeout\n");
+
+	/* Before we stop the scheduler, make sure the queue is out of any list, so
+	 * any call to pvr_queue_update_active_state_locked() that might happen
+	 * until the scheduler is really stopped doesn't end up re-inserting the
+	 * queue in the active list. This would cause
+	 * pvr_queue_signal_done_fences() and drm_sched_stop() to race with each
+	 * other when accessing the pending_list, since drm_sched_stop() doesn't
+	 * grab the job_list_lock when modifying the list (it assumes the
+	 * only other accessor is the scheduler, and that it's safe to skip the
+	 * lock while the scheduler is stopped).
+	 */
+	mutex_lock(&pvr_dev->queues.lock);
+	list_del_init(&queue->node);
+	mutex_unlock(&pvr_dev->queues.lock);
+
+	drm_sched_stop(sched, s_job);
+
+	/* Reset the last_submitted_job field now, just in case. No need to grab
+	 * the job_list_lock here: all the paths accessing this field are guaranteed
+	 * to be quiescent at this point.
+	 */
+	queue->last_submitted_job = NULL;
+
+	/* Re-assign job parent fences. */
+	list_for_each_entry(job, &sched->pending_list, base.list) {
+		job->base.s_fence->parent = dma_fence_get(job->done_fence);
+		job_count++;
+	}
+	WARN_ON(atomic_read(&queue->in_flight_job_count) != job_count);
+
+	/* Re-insert the queue in the proper list, and kick a queue processing
+	 * operation if there were jobs pending.
+	 */
+	mutex_lock(&pvr_dev->queues.lock);
+	if (!atomic_read(&queue->in_flight_job_count)) {
+		list_move_tail(&queue->node, &pvr_dev->queues.idle);
+	} else {
+		list_move_tail(&queue->node, &pvr_dev->queues.active);
+		pvr_queue_process(queue);
+	}
+	mutex_unlock(&pvr_dev->queues.lock);
+
+	drm_sched_start(sched, true);
+
+	return DRM_GPU_SCHED_STAT_NOMINAL;
+}
+
+/**
+ * pvr_queue_free_job() - Release the reference the scheduler had on a job object.
+ * @sched_job: Job object to free.
+ */
+static void pvr_queue_free_job(struct drm_sched_job *sched_job)
+{
+	struct pvr_job *job = container_of(sched_job, struct pvr_job, base);
+
+	drm_sched_job_cleanup(sched_job);
+	job->paired_job = NULL;
+	pvr_job_put(job);
+}
+
+static const struct drm_sched_backend_ops pvr_queue_sched_ops = {
+	.prepare_job = pvr_queue_prepare_job,
+	.run_job = pvr_queue_run_job,
+	.timedout_job = pvr_queue_timedout_job,
+	.free_job = pvr_queue_free_job,
+};
+
+/**
+ * pvr_queue_fence_is_ufo_backed() - Check if a dma_fence is backed by a UFO object
+ * @f: Fence to test.
+ *
+ * A UFO-backed fence is a fence that can be signaled or waited upon FW-side.
+ * pvr_job::done_fence objects are backed by the timeline UFO attached to the queue
+ * they are pushed to, but those fences are not directly exposed to the outside
+ * world, so we also need to check if the fence we're being passed is a
+ * drm_sched_fence that came from our driver.
+ *
+ * Return:
+ *  * true if @f is UFO-backed, or
+ *  * false otherwise.
+ */
+bool pvr_queue_fence_is_ufo_backed(struct dma_fence *f)
+{
+	struct drm_sched_fence *sched_fence = f ? to_drm_sched_fence(f) : NULL;
+
+	if (sched_fence &&
+	    sched_fence->sched->ops == &pvr_queue_sched_ops)
+		return true;
+
+	if (f && f->ops == &pvr_queue_job_fence_ops)
+		return true;
+
+	return false;
+}
+
+/**
+ * pvr_queue_signal_done_fences() - Signal done fences.
+ * @queue: Queue to check.
+ *
+ * Signal done fences of jobs whose seqno is less than the current value of
+ * the UFO object attached to the queue.
+ */
+static void
+pvr_queue_signal_done_fences(struct pvr_queue *queue)
+{
+	struct pvr_job *job, *tmp_job;
+	u32 cur_seqno;
+
+	spin_lock(&queue->scheduler.job_list_lock);
+	cur_seqno = *queue->timeline_ufo.value;
+	list_for_each_entry_safe(job, tmp_job, &queue->scheduler.pending_list, base.list) {
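+		/* Signed wrap-around-safe comparison: the difference is negative
+		 * only while the job seqno is still ahead of the current UFO
+		 * value, even if the 32-bit counter has wrapped.
+		 */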
+		if ((int)(cur_seqno - lower_32_bits(job->done_fence->seqno)) < 0)
+			break;
+
+		if (!dma_fence_is_signaled(job->done_fence)) {
+			dma_fence_signal(job->done_fence);
+			pvr_job_release_pm_ref(job);
+			atomic_dec(&queue->in_flight_job_count);
+		}
+	}
+
+	/* We don't want to test jobs twice, so reset last_submitted_job
+	 * if the job is already part of the pending_list.
+	 */
+	job = list_last_entry(&queue->scheduler.pending_list, struct pvr_job, base.list);
+	if (job == queue->last_submitted_job)
+		queue->last_submitted_job = NULL;
+
+	if (queue->last_submitted_job &&
+	    (int)(cur_seqno - lower_32_bits(queue->last_submitted_job->done_fence->seqno)) >= 0) {
+		dma_fence_signal(queue->last_submitted_job->done_fence);
+		pvr_job_release_pm_ref(queue->last_submitted_job);
+		atomic_dec(&queue->in_flight_job_count);
+
+		/* We signaled the job, so no need to check it again next time. Most importantly,
+		 * this addresses a race where we signal the job here and drm_sched cleans it
+		 * up before pvr_queue_signal_done_fences() is called again, meaning the job
+		 * will never show up in the pending_list, and we might be pointing to an already
+		 * freed job next time pvr_queue_signal_done_fences() is called.
+		 */
+		queue->last_submitted_job = NULL;
+	}
+	spin_unlock(&queue->scheduler.job_list_lock);
+}
+
+/**
+ * pvr_queue_check_job_waiting_for_cccb_space() - Check if the job waiting for CCCB space
+ * can be unblocked and pushed to the CCCB
+ * @queue: Queue to check
+ *
+ * If we have a job waiting for CCCB, and this job now fits in the CCCB, we signal
+ * its CCCB fence, which should kick drm_sched.
+ */
+static void
+pvr_queue_check_job_waiting_for_cccb_space(struct pvr_queue *queue)
+{
+	struct pvr_queue_fence *cccb_fence;
+	u32 native_deps_remaining;
+	struct pvr_job *job;
+
+	mutex_lock(&queue->cccb_fence_ctx.job_lock);
+	job = queue->cccb_fence_ctx.job;
+	if (!job)
+		goto out_unlock;
+
+	/* If we have a job attached to the CCCB fence context, its CCCB fence
+	 * shouldn't be NULL.
+	 */
+	if (WARN_ON(!job->cccb_fence)) {
+		job = NULL;
+		goto out_unlock;
+	}
+
+	/* If we get here, the CCCB fence has to be initialized. */
+	cccb_fence = container_of(job->cccb_fence, struct pvr_queue_fence, base);
+	if (WARN_ON(!cccb_fence->queue)) {
+		job = NULL;
+		goto out_unlock;
+	}
+
+	/* Evict signaled dependencies before checking for CCCB space.
+	 * If the job fits, signal the CCCB fence, this should unblock
+	 * the drm_sched_entity.
+	 */
+	native_deps_remaining = job_count_remaining_native_deps(job);
+	if (!pvr_cccb_cmdseq_fits(&queue->cccb, job_cmds_size(job, native_deps_remaining))) {
+		job = NULL;
+		goto out_unlock;
+	}
+
+	dma_fence_signal(job->cccb_fence);
+	pvr_queue_fence_put(job->cccb_fence);
+	job->cccb_fence = NULL;
+	queue->cccb_fence_ctx.job = NULL;
+
+out_unlock:
+	mutex_unlock(&queue->cccb_fence_ctx.job_lock);
+
+	pvr_job_put(job);
+}
+
+/**
+ * pvr_queue_process() - Process events that happened on a queue.
+ * @queue: Queue to check
+ *
+ * Signal job fences and check if jobs waiting for CCCB space can be unblocked.
+ */
+void pvr_queue_process(struct pvr_queue *queue)
+{
+	lockdep_assert_held(&queue->ctx->pvr_dev->queues.lock);
+
+	pvr_queue_check_job_waiting_for_cccb_space(queue);
+	pvr_queue_signal_done_fences(queue);
+	pvr_queue_update_active_state_locked(queue);
+}
+
+static u32 get_dm_type(struct pvr_queue *queue)
+{
+	switch (queue->type) {
+	case DRM_PVR_JOB_TYPE_GEOMETRY:
+		return PVR_FWIF_DM_GEOM;
+	case DRM_PVR_JOB_TYPE_TRANSFER_FRAG:
+	case DRM_PVR_JOB_TYPE_FRAGMENT:
+		return PVR_FWIF_DM_FRAG;
+	case DRM_PVR_JOB_TYPE_COMPUTE:
+		return PVR_FWIF_DM_CDM;
+	}
+
+	return ~0;
+}
+
+/**
+ * init_fw_context() - Initializes the queue part of a FW context.
+ * @queue: Queue object to initialize the FW context for.
+ * @fw_ctx_map: The FW context CPU mapping.
+ *
+ * FW contexts contain various states, one of them being a per-queue state
+ * that needs to be initialized for each queue being exposed by a context. This
+ * function takes care of that.
+ */
+static void init_fw_context(struct pvr_queue *queue, void *fw_ctx_map)
+{
+	struct pvr_context *ctx = queue->ctx;
+	struct pvr_fw_object *fw_mem_ctx_obj = pvr_vm_get_fw_mem_context(ctx->vm_ctx);
+	struct rogue_fwif_fwcommoncontext *cctx_fw;
+	struct pvr_cccb *cccb = &queue->cccb;
+
+	cctx_fw = fw_ctx_map + queue->ctx_offset;
+	cctx_fw->ccbctl_fw_addr = cccb->ctrl_fw_addr;
+	cctx_fw->ccb_fw_addr = cccb->cccb_fw_addr;
+
+	cctx_fw->dm = get_dm_type(queue);
+	cctx_fw->priority = ctx->priority;
+	cctx_fw->priority_seq_num = 0;
+	cctx_fw->max_deadline_ms = MAX_DEADLINE_MS;
+	cctx_fw->pid = task_tgid_nr(current);
+	cctx_fw->server_common_context_id = ctx->ctx_id;
+
+	pvr_fw_object_get_fw_addr(fw_mem_ctx_obj, &cctx_fw->fw_mem_context_fw_addr);
+
+	pvr_fw_object_get_fw_addr(queue->reg_state_obj, &cctx_fw->context_state_addr);
+}
+
+/**
+ * pvr_queue_cleanup_fw_context() - Wait for the FW context to be idle and clean it up.
+ * @queue: Queue whose FW context should be cleaned up.
+ *
+ * Return:
+ *  * 0 on success,
+ *  * Any error returned by pvr_fw_structure_cleanup() otherwise.
+ */
+static int pvr_queue_cleanup_fw_context(struct pvr_queue *queue)
+{
+	return pvr_fw_structure_cleanup(queue->ctx->pvr_dev,
+					ROGUE_FWIF_CLEANUP_FWCOMMONCONTEXT,
+					queue->ctx->fw_obj, queue->ctx_offset);
+}
+
+/**
+ * pvr_queue_job_init() - Initialize queue related fields in a pvr_job object.
+ * @job: The job to initialize.
+ *
+ * Bind the job to a queue and allocate memory to guarantee pvr_queue_job_arm()
+ * and pvr_queue_job_push() can't fail. We also make sure the context type is
+ * valid and the job can fit in the CCCB.
+ *
+ * Return:
+ *  * 0 on success, or
+ *  * An error code if something failed.
+ */
+int pvr_queue_job_init(struct pvr_job *job)
+{
+	/* Fragment jobs need at least one native fence wait on the geometry job fence. */
+	u32 min_native_dep_count = job->type == DRM_PVR_JOB_TYPE_FRAGMENT ? 1 : 0;
+	struct pvr_queue *queue;
+	int err;
+
+	if (atomic_read(&job->ctx->faulty))
+		return -EIO;
+
+	queue = pvr_context_get_queue_for_job(job->ctx, job->type);
+	if (!queue)
+		return -EINVAL;
+
+	if (!pvr_cccb_cmdseq_can_fit(&queue->cccb, job_cmds_size(job, min_native_dep_count)))
+		return -E2BIG;
+
+	err = drm_sched_job_init(&job->base, &queue->entity, THIS_MODULE);
+	if (err)
+		return err;
+
+	job->cccb_fence = pvr_queue_fence_alloc();
+	job->kccb_fence = pvr_kccb_fence_alloc();
+	job->done_fence = pvr_queue_fence_alloc();
+	if (!job->cccb_fence || !job->kccb_fence || !job->done_fence)
+		return -ENOMEM;
+
+	return 0;
+}
+
+/**
+ * pvr_queue_job_arm() - Arm a job object.
+ * @job: The job to arm.
+ *
+ * Initializes fences and returns the drm_sched finished fence so it can
+ * be exposed to the outside world. Once this function is called, you should
+ * make sure the job is pushed using pvr_queue_job_push(), or guarantee that
+ * no one grabbed a reference to the returned fence. The latter can happen if
+ * we do multi-job submission, and something failed when creating/initializing
+ * a job. In that case, we know the fence didn't leave the driver, and we
+ * can thus guarantee nobody will wait on a dead fence object.
+ *
+ * Return:
+ *  * A dma_fence object.
+ */
+struct dma_fence *pvr_queue_job_arm(struct pvr_job *job)
+{
+	drm_sched_job_arm(&job->base);
+
+	return &job->base.s_fence->finished;
+}
+
+/**
+ * pvr_queue_job_cleanup() - Cleanup fence/scheduler related fields in the job object.
+ * @job: The job to cleanup.
+ *
+ * Should be called in the job release path.
+ */
+void pvr_queue_job_cleanup(struct pvr_job *job)
+{
+	pvr_queue_fence_put(job->done_fence);
+	pvr_queue_fence_put(job->cccb_fence);
+	pvr_kccb_fence_put(job->kccb_fence);
+
+	if (job->base.s_fence)
+		drm_sched_job_cleanup(&job->base);
+}
+
+/**
+ * pvr_queue_job_push() - Push a job to its queue.
+ * @job: The job to push.
+ *
+ * Must be called after pvr_queue_job_init() and after all dependencies
+ * have been added to the job. This will effectively queue the job to
+ * the drm_sched_entity attached to the queue. We grab a reference on
+ * the job object, so the caller is free to drop its reference when it's
+ * done accessing the job object.
+ */
+void pvr_queue_job_push(struct pvr_job *job)
+{
+	struct pvr_queue *queue = container_of(job->base.sched, struct pvr_queue, scheduler);
+
+	/* Keep track of the last queued job scheduled fence for combined submit. */
+	dma_fence_put(queue->last_queued_job_scheduled_fence);
+	queue->last_queued_job_scheduled_fence = dma_fence_get(&job->base.s_fence->scheduled);
+
+	pvr_job_get(job);
+	drm_sched_entity_push_job(&job->base);
+}
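+
+/*
+ * Typical job lifecycle (informal sketch derived from the helpers above, not
+ * a uAPI contract): the submission path is expected to call
+ * pvr_queue_job_init() to bind the job to a queue and pre-allocate its fences,
+ * add its dependencies, call pvr_queue_job_arm() to get the finished fence
+ * that can be exposed to userspace, and finally pvr_queue_job_push() once
+ * nothing can fail anymore. pvr_queue_job_cleanup() drops the fence and
+ * scheduler resources from the job release path.
+ */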
+
+static void reg_state_init(void *cpu_ptr, void *priv)
+{
+	struct pvr_queue *queue = priv;
+
+	if (queue->type == DRM_PVR_JOB_TYPE_GEOMETRY) {
+		struct rogue_fwif_geom_ctx_state *geom_ctx_state_fw = cpu_ptr;
+
+		geom_ctx_state_fw->geom_core[0].geom_reg_vdm_call_stack_pointer_init =
+			queue->callstack_addr;
+	}
+}
+
+/**
+ * pvr_queue_create() - Create a queue object.
+ * @ctx: The context this queue will be attached to.
+ * @type: The type of jobs being pushed to this queue.
+ * @args: The arguments passed to the context creation function.
+ * @fw_ctx_map: CPU mapping of the FW context object.
+ *
+ * Create a queue object that will be used to queue and track jobs.
+ *
+ * Return:
+ *  * A valid pointer to a pvr_queue object, or
+ *  * An error pointer if the creation/initialization failed.
+ */
+struct pvr_queue *pvr_queue_create(struct pvr_context *ctx,
+				   enum drm_pvr_job_type type,
+				   struct drm_pvr_ioctl_create_context_args *args,
+				   void *fw_ctx_map)
+{
+	static const struct {
+		u32 cccb_size;
+		const char *name;
+	} props[] = {
+		[DRM_PVR_JOB_TYPE_GEOMETRY] = {
+			.cccb_size = CTX_GEOM_CCCB_SIZE_LOG2,
+			.name = "geometry",
+		},
+		[DRM_PVR_JOB_TYPE_FRAGMENT] = {
+			.cccb_size = CTX_FRAG_CCCB_SIZE_LOG2,
+			.name = "fragment"
+		},
+		[DRM_PVR_JOB_TYPE_COMPUTE] = {
+			.cccb_size = CTX_COMPUTE_CCCB_SIZE_LOG2,
+			.name = "compute"
+		},
+		[DRM_PVR_JOB_TYPE_TRANSFER_FRAG] = {
+			.cccb_size = CTX_TRANSFER_CCCB_SIZE_LOG2,
+			.name = "transfer_frag"
+		},
+	};
+	struct pvr_device *pvr_dev = ctx->pvr_dev;
+	struct drm_gpu_scheduler *sched;
+	struct pvr_queue *queue;
+	int ctx_state_size, err;
+	void *cpu_map;
+
+	if (WARN_ON(type >= ARRAY_SIZE(props)))
+		return ERR_PTR(-EINVAL);
+
+	switch (ctx->type) {
+	case DRM_PVR_CTX_TYPE_RENDER:
+		if (type != DRM_PVR_JOB_TYPE_GEOMETRY &&
+		    type != DRM_PVR_JOB_TYPE_FRAGMENT)
+			return ERR_PTR(-EINVAL);
+		break;
+	case DRM_PVR_CTX_TYPE_COMPUTE:
+		if (type != DRM_PVR_JOB_TYPE_COMPUTE)
+			return ERR_PTR(-EINVAL);
+		break;
+	case DRM_PVR_CTX_TYPE_TRANSFER_FRAG:
+		if (type != DRM_PVR_JOB_TYPE_TRANSFER_FRAG)
+			return ERR_PTR(-EINVAL);
+		break;
+	default:
+		return ERR_PTR(-EINVAL);
+	}
+
+	ctx_state_size = get_ctx_state_size(pvr_dev, type);
+	if (ctx_state_size < 0)
+		return ERR_PTR(ctx_state_size);
+
+	queue = kzalloc(sizeof(*queue), GFP_KERNEL);
+	if (!queue)
+		return ERR_PTR(-ENOMEM);
+
+	queue->type = type;
+	queue->ctx_offset = get_ctx_offset(type);
+	queue->ctx = ctx;
+	queue->callstack_addr = args->callstack_addr;
+	sched = &queue->scheduler;
+	INIT_LIST_HEAD(&queue->node);
+	mutex_init(&queue->cccb_fence_ctx.job_lock);
+	pvr_queue_fence_ctx_init(&queue->cccb_fence_ctx.base);
+	pvr_queue_fence_ctx_init(&queue->job_fence_ctx);
+
+	err = pvr_cccb_init(pvr_dev, &queue->cccb, props[type].cccb_size, props[type].name);
+	if (err)
+		goto err_free_queue;
+
+	err = pvr_fw_object_create(pvr_dev, ctx_state_size,
+				   PVR_BO_FW_FLAGS_DEVICE_UNCACHED,
+				   reg_state_init, queue, &queue->reg_state_obj);
+	if (err)
+		goto err_cccb_fini;
+
+	init_fw_context(queue, fw_ctx_map);
+
+	if (type != DRM_PVR_JOB_TYPE_GEOMETRY && type != DRM_PVR_JOB_TYPE_FRAGMENT &&
+	    args->callstack_addr) {
+		err = -EINVAL;
+		goto err_release_reg_state;
+	}
+
+	cpu_map = pvr_fw_object_create_and_map(pvr_dev, sizeof(*queue->timeline_ufo.value),
+					       PVR_BO_FW_FLAGS_DEVICE_UNCACHED,
+					       NULL, NULL, &queue->timeline_ufo.fw_obj);
+	if (IS_ERR(cpu_map)) {
+		err = PTR_ERR(cpu_map);
+		goto err_release_reg_state;
+	}
+
+	queue->timeline_ufo.value = cpu_map;
+
+	err = drm_sched_init(&queue->scheduler,
+			     &pvr_queue_sched_ops,
+			     pvr_dev->sched_wq, 64 * 1024, 1,
+			     msecs_to_jiffies(500),
+			     pvr_dev->sched_wq, NULL, "pvr-queue",
+			     DRM_SCHED_POLICY_SINGLE_ENTITY,
+			     pvr_dev->base.dev);
+	if (err)
+		goto err_release_ufo;
+
+	err = drm_sched_entity_init(&queue->entity,
+				    DRM_SCHED_PRIORITY_MIN,
+				    &sched, 1, &ctx->faulty);
+	if (err)
+		goto err_sched_fini;
+
+	mutex_lock(&pvr_dev->queues.lock);
+	list_add_tail(&queue->node, &pvr_dev->queues.idle);
+	mutex_unlock(&pvr_dev->queues.lock);
+
+	return queue;
+
+err_sched_fini:
+	drm_sched_fini(&queue->scheduler);
+
+err_release_ufo:
+	pvr_fw_object_unmap_and_destroy(queue->timeline_ufo.fw_obj);
+
+err_release_reg_state:
+	pvr_fw_object_destroy(queue->reg_state_obj);
+
+err_cccb_fini:
+	pvr_cccb_fini(&queue->cccb);
+
+err_free_queue:
+	mutex_destroy(&queue->cccb_fence_ctx.job_lock);
+	kfree(queue);
+
+	return ERR_PTR(err);
+}
+
+void pvr_queue_device_pre_reset(struct pvr_device *pvr_dev)
+{
+	struct pvr_queue *queue;
+
+	mutex_lock(&pvr_dev->queues.lock);
+	list_for_each_entry(queue, &pvr_dev->queues.idle, node)
+		pvr_queue_stop(queue, NULL);
+	list_for_each_entry(queue, &pvr_dev->queues.active, node)
+		pvr_queue_stop(queue, NULL);
+	mutex_unlock(&pvr_dev->queues.lock);
+}
+
+void pvr_queue_device_post_reset(struct pvr_device *pvr_dev)
+{
+	struct pvr_queue *queue;
+
+	mutex_lock(&pvr_dev->queues.lock);
+	list_for_each_entry(queue, &pvr_dev->queues.active, node)
+		pvr_queue_start(queue);
+	list_for_each_entry(queue, &pvr_dev->queues.idle, node)
+		pvr_queue_start(queue);
+	mutex_unlock(&pvr_dev->queues.lock);
+}
+
+/**
+ * pvr_queue_kill() - Kill a queue.
+ * @queue: The queue to kill.
+ *
+ * Kill the queue so no new jobs can be pushed. Should be called when the
+ * context handle is destroyed. The queue object might last longer if jobs
+ * are still in flight and holding a reference to the context this queue
+ * belongs to.
+ */
+void pvr_queue_kill(struct pvr_queue *queue)
+{
+	drm_sched_entity_destroy(&queue->entity);
+	dma_fence_put(queue->last_queued_job_scheduled_fence);
+	queue->last_queued_job_scheduled_fence = NULL;
+}
+
+/**
+ * pvr_queue_destroy() - Destroy a queue.
+ * @queue: The queue to destroy.
+ *
+ * Cleanup the queue and free the resources attached to it. Should be
+ * called from the context release function.
+ */
+void pvr_queue_destroy(struct pvr_queue *queue)
+{
+	if (!queue)
+		return;
+
+	mutex_lock(&queue->ctx->pvr_dev->queues.lock);
+	list_del_init(&queue->node);
+	mutex_unlock(&queue->ctx->pvr_dev->queues.lock);
+
+	drm_sched_fini(&queue->scheduler);
+	drm_sched_entity_fini(&queue->entity);
+
+	if (WARN_ON(queue->last_queued_job_scheduled_fence))
+		dma_fence_put(queue->last_queued_job_scheduled_fence);
+
+	pvr_queue_cleanup_fw_context(queue);
+
+	pvr_fw_object_unmap_and_destroy(queue->timeline_ufo.fw_obj);
+	pvr_fw_object_destroy(queue->reg_state_obj);
+	pvr_cccb_fini(&queue->cccb);
+	mutex_destroy(&queue->cccb_fence_ctx.job_lock);
+	kfree(queue);
+}
+
+/**
+ * pvr_queue_device_init() - Device-level initialization of queue related fields.
+ * @pvr_dev: The device to initialize.
+ *
+ * Initializes all fields related to queue management in pvr_device.
+ *
+ * Return:
+ *  * 0 on success, or
+ *  * An error code on failure.
+ */
+int pvr_queue_device_init(struct pvr_device *pvr_dev)
+{
+	int err;
+
+	INIT_LIST_HEAD(&pvr_dev->queues.active);
+	INIT_LIST_HEAD(&pvr_dev->queues.idle);
+	err = drmm_mutex_init(from_pvr_device(pvr_dev), &pvr_dev->queues.lock);
+	if (err)
+		return err;
+
+	pvr_dev->sched_wq = alloc_workqueue("powervr-sched", WQ_UNBOUND, 0);
+	if (!pvr_dev->sched_wq)
+		return -ENOMEM;
+
+	return 0;
+}
+
+/**
+ * pvr_queue_device_fini() - Device-level cleanup of queue related fields.
+ * @pvr_dev: The device to cleanup.
+ *
+ * Cleanup/free all queue-related resources attached to a pvr_device object.
+ */
+void pvr_queue_device_fini(struct pvr_device *pvr_dev)
+{
+	destroy_workqueue(pvr_dev->sched_wq);
+}
diff --git a/drivers/gpu/drm/imagination/pvr_queue.h b/drivers/gpu/drm/imagination/pvr_queue.h
new file mode 100644
index 000000000000..01bdeff777bf
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_queue.h
@@ -0,0 +1,179 @@
+/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#ifndef PVR_QUEUE_H
+#define PVR_QUEUE_H
+
+#include <drm/gpu_scheduler.h>
+
+#include "pvr_cccb.h"
+#include "pvr_device.h"
+
+struct pvr_context;
+struct pvr_queue;
+
+/**
+ * struct pvr_queue_fence_ctx - Queue fence context
+ *
+ * Used to implement dma_fence_ops for pvr_job::{done,cccb}_fence.
+ */
+struct pvr_queue_fence_ctx {
+	/** @id: Fence context ID allocated with dma_fence_context_alloc(). */
+	u64 id;
+
+	/** @seqno: Sequence number incremented each time a fence is created. */
+	atomic_t seqno;
+
+	/** @lock: Lock used to synchronize access to fences allocated by this context. */
+	spinlock_t lock;
+};
+
+/**
+ * struct pvr_queue_cccb_fence_ctx - CCCB fence context
+ *
+ * Context used to manage fences controlling access to the CCCB. No fences are
+ * issued if there's enough space in the CCCB to push job commands.
+ */
+struct pvr_queue_cccb_fence_ctx {
+	/** @base: Base queue fence context. */
+	struct pvr_queue_fence_ctx base;
+
+	/**
+	 * @job: Job waiting for CCCB space.
+	 *
+	 * Thanks to the serialization done at the drm_sched_entity level,
+	 * there's no more than one job waiting for CCCB space at a given time.
+	 *
+	 * This field is NULL if no jobs are currently waiting for CCCB space.
+	 *
+	 * Must be accessed with @job_lock held.
+	 */
+	struct pvr_job *job;
+
+	/** @lock: Lock protecting access to the job object. */
+	struct mutex job_lock;
+};
+
+/**
+ * struct pvr_queue_fence - Queue fence object
+ */
+struct pvr_queue_fence {
+	/** @base: Base dma_fence. */
+	struct dma_fence base;
+
+	/** @queue: Queue that created this fence. */
+	struct pvr_queue *queue;
+};
+
+/**
+ * struct pvr_queue - Job queue
+ *
+ * Used to queue and track execution of pvr_job objects.
+ */
+struct pvr_queue {
+	/** @scheduler: Single entity scheduler used to push jobs to this queue. */
+	struct drm_gpu_scheduler scheduler;
+
+	/** @entity: Scheduling entity backing this queue. */
+	struct drm_sched_entity entity;
+
+	/** @type: Type of jobs queued to this queue. */
+	enum drm_pvr_job_type type;
+
+	/** @ctx: Context object this queue is bound to. */
+	struct pvr_context *ctx;
+
+	/** @node: Used to add the queue to the active/idle queue list. */
+	struct list_head node;
+
+	/**
+	 * @in_flight_job_count: Number of jobs submitted to the CCCB that
+	 * have not been processed yet.
+	 */
+	atomic_t in_flight_job_count;
+
+	/**
+	 * @cccb_fence_ctx: CCCB fence context.
+	 *
+	 * Used to control access to the CCCB when it is full, such that we don't
+	 * end up trying to push commands to the CCCB if there's not enough
+	 * space to receive all commands needed for a job to complete.
+	 */
+	struct pvr_queue_cccb_fence_ctx cccb_fence_ctx;
+
+	/** @job_fence_ctx: Job fence context object. */
+	struct pvr_queue_fence_ctx job_fence_ctx;
+
+	/** @timeline_ufo: Timeline UFO for the context queue. */
+	struct {
+		/** @fw_obj: FW object representing the UFO value. */
+		struct pvr_fw_object *fw_obj;
+
+		/** @value: CPU mapping of the UFO value. */
+		u32 *value;
+	} timeline_ufo;
+
+	/**
+	 * @last_submitted_job: Last submitted job.
+	 *
+	 * We need to keep that one around because drm_sched_main() only
+	 * queues the job to the drm_gpu_scheduler::pending_list after
+	 * ::run_job() has returned, which is racy with the queue process
+	 * worker that's handling done job signaling.
+	 */
+	struct pvr_job *last_submitted_job;
+
+	/**
+	 * @last_queued_job_scheduled_fence: The scheduled fence of the last
+	 * job queued to this queue.
+	 *
+	 * We use it to insert frag -> geom dependencies when issuing combined
+	 * geom+frag jobs, to guarantee that the fragment job that's part of
+	 * the combined operation comes after all fragment jobs that were queued
+	 * before it.
+	 */
+	struct dma_fence *last_queued_job_scheduled_fence;
+
+	/** @cccb: Client Circular Command Buffer. */
+	struct pvr_cccb cccb;
+
+	/** @reg_state_obj: FW object representing the register state of this queue. */
+	struct pvr_fw_object *reg_state_obj;
+
+	/** @ctx_offset: Offset of the queue context in the FW context object. */
+	u32 ctx_offset;
+
+	/** @callstack_addr: Initial call stack address for register state object. */
+	u64 callstack_addr;
+};
+
+bool pvr_queue_fence_is_ufo_backed(struct dma_fence *f);
+
+int pvr_queue_job_init(struct pvr_job *job);
+
+void pvr_queue_job_cleanup(struct pvr_job *job);
+
+void pvr_queue_job_push(struct pvr_job *job);
+
+struct dma_fence *pvr_queue_job_arm(struct pvr_job *job);
+
+struct pvr_queue *pvr_queue_create(struct pvr_context *ctx,
+				   enum drm_pvr_job_type type,
+				   struct drm_pvr_ioctl_create_context_args *args,
+				   void *fw_ctx_map);
+
+void pvr_queue_kill(struct pvr_queue *queue);
+
+void pvr_queue_destroy(struct pvr_queue *queue);
+
+void pvr_queue_process(struct pvr_queue *queue);
+
+void pvr_queue_device_pre_reset(struct pvr_device *pvr_dev);
+
+void pvr_queue_device_post_reset(struct pvr_device *pvr_dev);
+
+int pvr_queue_device_init(struct pvr_device *pvr_dev);
+
+void pvr_queue_device_fini(struct pvr_device *pvr_dev);
+
+#endif /* PVR_QUEUE_H */
diff --git a/drivers/gpu/drm/imagination/pvr_stream_defs.c b/drivers/gpu/drm/imagination/pvr_stream_defs.c
index 3c646e25accf..ca1dc5e66a6f 100644
--- a/drivers/gpu/drm/imagination/pvr_stream_defs.c
+++ b/drivers/gpu/drm/imagination/pvr_stream_defs.c
@@ -43,6 +43,232 @@
  * existing parameters, to preserve order. As parameters are naturally aligned, care must be taken
  * with respect to implicit padding in the stream; padding should be minimised as much as possible.
  */
+static const struct pvr_stream_def rogue_fwif_cmd_geom_stream[] = {
+	PVR_STREAM_DEF(rogue_fwif_cmd_geom, regs.vdm_ctrl_stream_base, 64),
+	PVR_STREAM_DEF(rogue_fwif_cmd_geom, regs.tpu_border_colour_table, 64),
+	PVR_STREAM_DEF_FEATURE(rogue_fwif_cmd_geom, regs.vdm_draw_indirect0, 64,
+			       PVR_FEATURE_VDM_DRAWINDIRECT),
+	PVR_STREAM_DEF_FEATURE(rogue_fwif_cmd_geom, regs.vdm_draw_indirect1, 32,
+			       PVR_FEATURE_VDM_DRAWINDIRECT),
+	PVR_STREAM_DEF(rogue_fwif_cmd_geom, regs.ppp_ctrl, 32),
+	PVR_STREAM_DEF(rogue_fwif_cmd_geom, regs.te_psg, 32),
+	PVR_STREAM_DEF(rogue_fwif_cmd_geom, regs.vdm_context_resume_task0_size, 32),
+	PVR_STREAM_DEF_FEATURE(rogue_fwif_cmd_geom, regs.vdm_context_resume_task3_size, 32,
+			       PVR_FEATURE_VDM_OBJECT_LEVEL_LLS),
+	PVR_STREAM_DEF(rogue_fwif_cmd_geom, regs.view_idx, 32),
+	PVR_STREAM_DEF_FEATURE(rogue_fwif_cmd_geom, regs.pds_coeff_free_prog, 32,
+			       PVR_FEATURE_TESSELLATION),
+};
+
+static const struct pvr_stream_def rogue_fwif_cmd_geom_stream_brn49927[] = {
+	PVR_STREAM_DEF(rogue_fwif_cmd_geom, regs.tpu, 32),
+};
+
+static const struct pvr_stream_ext_def cmd_geom_ext_streams_0[] = {
+	{
+		.stream = rogue_fwif_cmd_geom_stream_brn49927,
+		.stream_len = ARRAY_SIZE(rogue_fwif_cmd_geom_stream_brn49927),
+		.header_mask = PVR_STREAM_EXTHDR_GEOM0_BRN49927,
+		.quirk = 49927,
+	},
+};
+
+static const struct pvr_stream_ext_header cmd_geom_ext_headers[] = {
+	{
+		.ext_streams = cmd_geom_ext_streams_0,
+		.ext_streams_num = ARRAY_SIZE(cmd_geom_ext_streams_0),
+		.valid_mask = PVR_STREAM_EXTHDR_GEOM0_VALID,
+	},
+};
+
+const struct pvr_stream_cmd_defs pvr_cmd_geom_stream = {
+	.type = PVR_STREAM_TYPE_GEOM,
+
+	.main_stream = rogue_fwif_cmd_geom_stream,
+	.main_stream_len = ARRAY_SIZE(rogue_fwif_cmd_geom_stream),
+
+	.ext_nr_headers = ARRAY_SIZE(cmd_geom_ext_headers),
+	.ext_headers = cmd_geom_ext_headers,
+
+	.dest_size = sizeof(struct rogue_fwif_cmd_geom),
+};
+
+static const struct pvr_stream_def rogue_fwif_cmd_frag_stream[] = {
+	PVR_STREAM_DEF(rogue_fwif_cmd_frag, regs.isp_scissor_base, 64),
+	PVR_STREAM_DEF(rogue_fwif_cmd_frag, regs.isp_dbias_base, 64),
+	PVR_STREAM_DEF(rogue_fwif_cmd_frag, regs.isp_oclqry_base, 64),
+	PVR_STREAM_DEF(rogue_fwif_cmd_frag, regs.isp_zlsctl, 64),
+	PVR_STREAM_DEF(rogue_fwif_cmd_frag, regs.isp_zload_store_base, 64),
+	PVR_STREAM_DEF(rogue_fwif_cmd_frag, regs.isp_stencil_load_store_base, 64),
+	PVR_STREAM_DEF_FEATURE(rogue_fwif_cmd_frag, regs.fb_cdc_zls, 64,
+			       PVR_FEATURE_REQUIRES_FB_CDC_ZLS_SETUP),
+	PVR_STREAM_DEF_ARRAY(rogue_fwif_cmd_frag, regs.pbe_word),
+	PVR_STREAM_DEF(rogue_fwif_cmd_frag, regs.tpu_border_colour_table, 64),
+	PVR_STREAM_DEF_ARRAY(rogue_fwif_cmd_frag, regs.pds_bgnd),
+	PVR_STREAM_DEF_ARRAY(rogue_fwif_cmd_frag, regs.pds_pr_bgnd),
+	PVR_STREAM_DEF_ARRAY(rogue_fwif_cmd_frag, regs.usc_clear_register),
+	PVR_STREAM_DEF(rogue_fwif_cmd_frag, regs.usc_pixel_output_ctrl, 32),
+	PVR_STREAM_DEF(rogue_fwif_cmd_frag, regs.isp_bgobjdepth, 32),
+	PVR_STREAM_DEF(rogue_fwif_cmd_frag, regs.isp_bgobjvals, 32),
+	PVR_STREAM_DEF(rogue_fwif_cmd_frag, regs.isp_aa, 32),
+	PVR_STREAM_DEF_FEATURE(rogue_fwif_cmd_frag, regs.isp_xtp_pipe_enable, 32,
+			       PVR_FEATURE_S7_TOP_INFRASTRUCTURE),
+	PVR_STREAM_DEF(rogue_fwif_cmd_frag, regs.isp_ctl, 32),
+	PVR_STREAM_DEF(rogue_fwif_cmd_frag, regs.event_pixel_pds_info, 32),
+	PVR_STREAM_DEF_FEATURE(rogue_fwif_cmd_frag, regs.pixel_phantom, 32,
+			       PVR_FEATURE_CLUSTER_GROUPING),
+	PVR_STREAM_DEF(rogue_fwif_cmd_frag, regs.view_idx, 32),
+	PVR_STREAM_DEF(rogue_fwif_cmd_frag, regs.event_pixel_pds_data, 32),
+	PVR_STREAM_DEF_FEATURE(rogue_fwif_cmd_frag, regs.isp_oclqry_stride, 32,
+			       PVR_FEATURE_GPU_MULTICORE_SUPPORT),
+	PVR_STREAM_DEF_FEATURE(rogue_fwif_cmd_frag, regs.isp_zls_pixels, 32,
+			       PVR_FEATURE_ZLS_SUBTILE),
+	PVR_STREAM_DEF_FEATURE(rogue_fwif_cmd_frag, regs.rgx_cr_blackpearl_fix, 32,
+			       PVR_FEATURE_ISP_ZLS_D24_S8_PACKING_OGL_MODE),
+	PVR_STREAM_DEF(rogue_fwif_cmd_frag, zls_stride, 32),
+	PVR_STREAM_DEF(rogue_fwif_cmd_frag, sls_stride, 32),
+	PVR_STREAM_DEF_FEATURE(rogue_fwif_cmd_frag, execute_count, 32,
+			       PVR_FEATURE_GPU_MULTICORE_SUPPORT),
+};
+
+static const struct pvr_stream_def rogue_fwif_cmd_frag_stream_brn47217[] = {
+	PVR_STREAM_DEF(rogue_fwif_cmd_frag, regs.isp_oclqry_stride, 32),
+};
+
+static const struct pvr_stream_def rogue_fwif_cmd_frag_stream_brn49927[] = {
+	PVR_STREAM_DEF(rogue_fwif_cmd_frag, regs.tpu, 32),
+};
+
+static const struct pvr_stream_ext_def cmd_frag_ext_streams_0[] = {
+	{
+		.stream = rogue_fwif_cmd_frag_stream_brn47217,
+		.stream_len = ARRAY_SIZE(rogue_fwif_cmd_frag_stream_brn47217),
+		.header_mask = PVR_STREAM_EXTHDR_FRAG0_BRN47217,
+		.quirk = 47217,
+	},
+	{
+		.stream = rogue_fwif_cmd_frag_stream_brn49927,
+		.stream_len = ARRAY_SIZE(rogue_fwif_cmd_frag_stream_brn49927),
+		.header_mask = PVR_STREAM_EXTHDR_FRAG0_BRN49927,
+		.quirk = 49927,
+	},
+};
+
+static const struct pvr_stream_ext_header cmd_frag_ext_headers[] = {
+	{
+		.ext_streams = cmd_frag_ext_streams_0,
+		.ext_streams_num = ARRAY_SIZE(cmd_frag_ext_streams_0),
+		.valid_mask = PVR_STREAM_EXTHDR_FRAG0_VALID,
+	},
+};
+
+const struct pvr_stream_cmd_defs pvr_cmd_frag_stream = {
+	.type = PVR_STREAM_TYPE_FRAG,
+
+	.main_stream = rogue_fwif_cmd_frag_stream,
+	.main_stream_len = ARRAY_SIZE(rogue_fwif_cmd_frag_stream),
+
+	.ext_nr_headers = ARRAY_SIZE(cmd_frag_ext_headers),
+	.ext_headers = cmd_frag_ext_headers,
+
+	.dest_size = sizeof(struct rogue_fwif_cmd_frag),
+};
+
+static const struct pvr_stream_def rogue_fwif_cmd_compute_stream[] = {
+	PVR_STREAM_DEF(rogue_fwif_cmd_compute, regs.tpu_border_colour_table, 64),
+	PVR_STREAM_DEF_FEATURE(rogue_fwif_cmd_compute, regs.cdm_cb_queue, 64,
+			       PVR_FEATURE_CDM_USER_MODE_QUEUE),
+	PVR_STREAM_DEF_FEATURE(rogue_fwif_cmd_compute, regs.cdm_cb_base, 64,
+			       PVR_FEATURE_CDM_USER_MODE_QUEUE),
+	PVR_STREAM_DEF_FEATURE(rogue_fwif_cmd_compute, regs.cdm_cb, 64,
+			       PVR_FEATURE_CDM_USER_MODE_QUEUE),
+	PVR_STREAM_DEF_NOT_FEATURE(rogue_fwif_cmd_compute, regs.cdm_ctrl_stream_base, 64,
+				   PVR_FEATURE_CDM_USER_MODE_QUEUE),
+	PVR_STREAM_DEF(rogue_fwif_cmd_compute, regs.cdm_context_state_base_addr, 64),
+	PVR_STREAM_DEF(rogue_fwif_cmd_compute, regs.cdm_resume_pds1, 32),
+	PVR_STREAM_DEF_FEATURE(rogue_fwif_cmd_compute, regs.cdm_item, 32,
+			       PVR_FEATURE_COMPUTE_MORTON_CAPABLE),
+	PVR_STREAM_DEF_FEATURE(rogue_fwif_cmd_compute, regs.compute_cluster, 32,
+			       PVR_FEATURE_CLUSTER_GROUPING),
+	PVR_STREAM_DEF_FEATURE(rogue_fwif_cmd_compute, regs.tpu_tag_cdm_ctrl, 32,
+			       PVR_FEATURE_TPU_DM_GLOBAL_REGISTERS),
+	PVR_STREAM_DEF_FEATURE(rogue_fwif_cmd_compute, stream_start_offset, 32,
+			       PVR_FEATURE_CDM_USER_MODE_QUEUE),
+	PVR_STREAM_DEF_FEATURE(rogue_fwif_cmd_compute, execute_count, 32,
+			       PVR_FEATURE_GPU_MULTICORE_SUPPORT),
+};
+
+static const struct pvr_stream_def rogue_fwif_cmd_compute_stream_brn49927[] = {
+	PVR_STREAM_DEF(rogue_fwif_cmd_compute, regs.tpu, 32),
+};
+
+static const struct pvr_stream_ext_def cmd_compute_ext_streams_0[] = {
+	{
+		.stream = rogue_fwif_cmd_compute_stream_brn49927,
+		.stream_len = ARRAY_SIZE(rogue_fwif_cmd_compute_stream_brn49927),
+		.header_mask = PVR_STREAM_EXTHDR_COMPUTE0_BRN49927,
+		.quirk = 49927,
+	},
+};
+
+static const struct pvr_stream_ext_header cmd_compute_ext_headers[] = {
+	{
+		.ext_streams = cmd_compute_ext_streams_0,
+		.ext_streams_num = ARRAY_SIZE(cmd_compute_ext_streams_0),
+		.valid_mask = PVR_STREAM_EXTHDR_COMPUTE0_VALID,
+	},
+};
+
+const struct pvr_stream_cmd_defs pvr_cmd_compute_stream = {
+	.type = PVR_STREAM_TYPE_COMPUTE,
+
+	.main_stream = rogue_fwif_cmd_compute_stream,
+	.main_stream_len = ARRAY_SIZE(rogue_fwif_cmd_compute_stream),
+
+	.ext_nr_headers = ARRAY_SIZE(cmd_compute_ext_headers),
+	.ext_headers = cmd_compute_ext_headers,
+
+	.dest_size = sizeof(struct rogue_fwif_cmd_compute),
+};
+
+static const struct pvr_stream_def rogue_fwif_cmd_transfer_stream[] = {
+	PVR_STREAM_DEF(rogue_fwif_cmd_transfer, regs.pds_bgnd0_base, 64),
+	PVR_STREAM_DEF(rogue_fwif_cmd_transfer, regs.pds_bgnd1_base, 64),
+	PVR_STREAM_DEF(rogue_fwif_cmd_transfer, regs.pds_bgnd3_sizeinfo, 64),
+	PVR_STREAM_DEF(rogue_fwif_cmd_transfer, regs.isp_mtile_base, 64),
+	PVR_STREAM_DEF_ARRAY(rogue_fwif_cmd_transfer, regs.pbe_wordx_mrty),
+	PVR_STREAM_DEF(rogue_fwif_cmd_transfer, regs.isp_bgobjvals, 32),
+	PVR_STREAM_DEF(rogue_fwif_cmd_transfer, regs.usc_pixel_output_ctrl, 32),
+	PVR_STREAM_DEF(rogue_fwif_cmd_transfer, regs.usc_clear_register0, 32),
+	PVR_STREAM_DEF(rogue_fwif_cmd_transfer, regs.usc_clear_register1, 32),
+	PVR_STREAM_DEF(rogue_fwif_cmd_transfer, regs.usc_clear_register2, 32),
+	PVR_STREAM_DEF(rogue_fwif_cmd_transfer, regs.usc_clear_register3, 32),
+	PVR_STREAM_DEF(rogue_fwif_cmd_transfer, regs.isp_mtile_size, 32),
+	PVR_STREAM_DEF(rogue_fwif_cmd_transfer, regs.isp_render_origin, 32),
+	PVR_STREAM_DEF(rogue_fwif_cmd_transfer, regs.isp_ctl, 32),
+	PVR_STREAM_DEF(rogue_fwif_cmd_transfer, regs.isp_aa, 32),
+	PVR_STREAM_DEF(rogue_fwif_cmd_transfer, regs.event_pixel_pds_info, 32),
+	PVR_STREAM_DEF(rogue_fwif_cmd_transfer, regs.event_pixel_pds_code, 32),
+	PVR_STREAM_DEF(rogue_fwif_cmd_transfer, regs.event_pixel_pds_data, 32),
+	PVR_STREAM_DEF(rogue_fwif_cmd_transfer, regs.isp_render, 32),
+	PVR_STREAM_DEF(rogue_fwif_cmd_transfer, regs.isp_rgn, 32),
+	PVR_STREAM_DEF_FEATURE(rogue_fwif_cmd_transfer, regs.isp_xtp_pipe_enable, 32,
+			       PVR_FEATURE_S7_TOP_INFRASTRUCTURE),
+	PVR_STREAM_DEF_FEATURE(rogue_fwif_cmd_transfer, regs.frag_screen, 32,
+			       PVR_FEATURE_GPU_MULTICORE_SUPPORT),
+};
+
+const struct pvr_stream_cmd_defs pvr_cmd_transfer_stream = {
+	.type = PVR_STREAM_TYPE_TRANSFER,
+
+	.main_stream = rogue_fwif_cmd_transfer_stream,
+	.main_stream_len = ARRAY_SIZE(rogue_fwif_cmd_transfer_stream),
+
+	.ext_nr_headers = 0,
+
+	.dest_size = sizeof(struct rogue_fwif_cmd_transfer),
+};
+
 static const struct pvr_stream_def rogue_fwif_static_render_context_state_stream[] = {
 	PVR_STREAM_DEF(rogue_fwif_geom_registers_caswitch,
 		       geom_reg_vdm_context_state_base_addr, 64),
diff --git a/drivers/gpu/drm/imagination/pvr_sync.c b/drivers/gpu/drm/imagination/pvr_sync.c
new file mode 100644
index 000000000000..c2815b089051
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_sync.c
@@ -0,0 +1,287 @@
+// SPDX-License-Identifier: GPL-2.0 OR MIT
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#include <uapi/drm/pvr_drm.h>
+
+#include <drm/drm_syncobj.h>
+#include <drm/gpu_scheduler.h>
+#include <linux/xarray.h>
+#include <linux/dma-fence-unwrap.h>
+
+#include "pvr_device.h"
+#include "pvr_queue.h"
+#include "pvr_sync.h"
+
+static int
+pvr_check_sync_op(const struct drm_pvr_sync_op *sync_op)
+{
+	u8 handle_type;
+
+	if (sync_op->flags & ~DRM_PVR_SYNC_OP_FLAGS_MASK)
+		return -EINVAL;
+
+	handle_type = sync_op->flags & DRM_PVR_SYNC_OP_FLAG_HANDLE_TYPE_MASK;
+	if (handle_type != DRM_PVR_SYNC_OP_FLAG_HANDLE_TYPE_SYNCOBJ &&
+	    handle_type != DRM_PVR_SYNC_OP_FLAG_HANDLE_TYPE_TIMELINE_SYNCOBJ)
+		return -EINVAL;
+
+	if (handle_type == DRM_PVR_SYNC_OP_FLAG_HANDLE_TYPE_SYNCOBJ &&
+	    sync_op->value != 0)
+		return -EINVAL;
+
+	return 0;
+}
+
+static void
+pvr_sync_signal_free(struct pvr_sync_signal *sig_sync)
+{
+	if (!sig_sync)
+		return;
+
+	drm_syncobj_put(sig_sync->syncobj);
+	dma_fence_chain_free(sig_sync->chain);
+	dma_fence_put(sig_sync->fence);
+	kfree(sig_sync);
+}
+
+void
+pvr_sync_signal_array_cleanup(struct xarray *array)
+{
+	struct pvr_sync_signal *sig_sync;
+	unsigned long i;
+
+	xa_for_each(array, i, sig_sync)
+		pvr_sync_signal_free(sig_sync);
+
+	xa_destroy(array);
+}
+
+static struct pvr_sync_signal *
+pvr_sync_signal_array_add(struct xarray *array, struct drm_file *file, u32 handle, u64 point)
+{
+	struct pvr_sync_signal *sig_sync;
+	int err;
+	u32 id;
+
+	sig_sync = kzalloc(sizeof(*sig_sync), GFP_KERNEL);
+	if (!sig_sync)
+		return ERR_PTR(-ENOMEM);
+
+	sig_sync->handle = handle;
+	sig_sync->point = point;
+
+	if (point > 0) {
+		sig_sync->chain = dma_fence_chain_alloc();
+		if (!sig_sync->chain) {
+			err = -ENOMEM;
+			goto err_free_sig_sync;
+		}
+	}
+
+	sig_sync->syncobj = drm_syncobj_find(file, handle);
+	if (!sig_sync->syncobj) {
+		err = -EINVAL;
+		goto err_free_sig_sync;
+	}
+
+	/* Retrieve the current fence attached to that point. It's
+	 * perfectly fine to get a NULL fence here; it just means there's
+	 * no fence attached to that point yet.
+	 */
+	drm_syncobj_find_fence(file, handle, point, 0, &sig_sync->fence);
+
+	err = xa_alloc(array, &id, sig_sync, xa_limit_32b, GFP_KERNEL);
+	if (err)
+		goto err_free_sig_sync;
+
+	return sig_sync;
+
+err_free_sig_sync:
+	pvr_sync_signal_free(sig_sync);
+	return ERR_PTR(err);
+}
+
+static struct pvr_sync_signal *
+pvr_sync_signal_array_search(struct xarray *array, u32 handle, u64 point)
+{
+	struct pvr_sync_signal *sig_sync;
+	unsigned long i;
+
+	xa_for_each(array, i, sig_sync) {
+		if (handle == sig_sync->handle && point == sig_sync->point)
+			return sig_sync;
+	}
+
+	return NULL;
+}
+
+static struct pvr_sync_signal *
+pvr_sync_signal_array_get(struct xarray *array, struct drm_file *file, u32 handle, u64 point)
+{
+	struct pvr_sync_signal *sig_sync;
+
+	sig_sync = pvr_sync_signal_array_search(array, handle, point);
+	if (sig_sync)
+		return sig_sync;
+
+	return pvr_sync_signal_array_add(array, file, handle, point);
+}
+
+int
+pvr_sync_signal_array_collect_ops(struct xarray *array,
+				  struct drm_file *file,
+				  u32 sync_op_count,
+				  const struct drm_pvr_sync_op *sync_ops)
+{
+	for (u32 i = 0; i < sync_op_count; i++) {
+		struct pvr_sync_signal *sig_sync;
+		int ret;
+
+		if (!(sync_ops[i].flags & DRM_PVR_SYNC_OP_FLAG_SIGNAL))
+			continue;
+
+		ret = pvr_check_sync_op(&sync_ops[i]);
+		if (ret)
+			return ret;
+
+		sig_sync = pvr_sync_signal_array_get(array, file,
+						     sync_ops[i].handle,
+						     sync_ops[i].value);
+		if (IS_ERR(sig_sync))
+			return PTR_ERR(sig_sync);
+	}
+
+	return 0;
+}
+
+int
+pvr_sync_signal_array_update_fences(struct xarray *array,
+				    u32 sync_op_count,
+				    const struct drm_pvr_sync_op *sync_ops,
+				    struct dma_fence *done_fence)
+{
+	for (u32 i = 0; i < sync_op_count; i++) {
+		struct dma_fence *old_fence;
+		struct pvr_sync_signal *sig_sync;
+
+		if (!(sync_ops[i].flags & DRM_PVR_SYNC_OP_FLAG_SIGNAL))
+			continue;
+
+		sig_sync = pvr_sync_signal_array_search(array, sync_ops[i].handle,
+							sync_ops[i].value);
+		if (WARN_ON(!sig_sync))
+			return -EINVAL;
+
+		old_fence = sig_sync->fence;
+		sig_sync->fence = dma_fence_get(done_fence);
+		dma_fence_put(old_fence);
+
+		if (WARN_ON(!sig_sync->fence))
+			return -EINVAL;
+	}
+
+	return 0;
+}
+
+void
+pvr_sync_signal_array_push_fences(struct xarray *array)
+{
+	struct pvr_sync_signal *sig_sync;
+	unsigned long i;
+
+	xa_for_each(array, i, sig_sync) {
+		if (sig_sync->chain) {
+			drm_syncobj_add_point(sig_sync->syncobj, sig_sync->chain,
+					      sig_sync->fence, sig_sync->point);
+			sig_sync->chain = NULL;
+		} else {
+			drm_syncobj_replace_fence(sig_sync->syncobj, sig_sync->fence);
+		}
+	}
+}
+
+static int
+pvr_sync_add_dep_to_job(struct drm_sched_job *job, struct dma_fence *f)
+{
+	struct dma_fence_unwrap iter;
+	u32 native_fence_count = 0;
+	struct dma_fence *uf;
+	int err = 0;
+
+	dma_fence_unwrap_for_each(uf, &iter, f) {
+		if (pvr_queue_fence_is_ufo_backed(uf))
+			native_fence_count++;
+	}
+
+	/* No need to unwrap the fence if it's fully non-native. */
+	if (!native_fence_count)
+		return drm_sched_job_add_dependency(job, f);
+
+	dma_fence_unwrap_for_each(uf, &iter, f) {
+		/* There's no dma_fence_unwrap_stop() helper cleaning up the refs
+		 * owned by dma_fence_unwrap(), so let's just iterate over all
+		 * entries without doing anything when something failed.
+		 */
+		if (err)
+			continue;
+
+		if (pvr_queue_fence_is_ufo_backed(uf)) {
+			struct drm_sched_fence *s_fence = to_drm_sched_fence(uf);
+
+			/* If this is a native dependency, we wait for the scheduled fence,
+			 * and we will let pvr_queue_run_job() issue FW waits.
+			 */
+			err = drm_sched_job_add_dependency(job,
+							   dma_fence_get(&s_fence->scheduled));
+		} else {
+			err = drm_sched_job_add_dependency(job, dma_fence_get(uf));
+		}
+	}
+
+	dma_fence_put(f);
+	return err;
+}
+
+int
+pvr_sync_add_deps_to_job(struct pvr_file *pvr_file, struct drm_sched_job *job,
+			 u32 sync_op_count,
+			 const struct drm_pvr_sync_op *sync_ops,
+			 struct xarray *signal_array)
+{
+	int err = 0;
+
+	if (!sync_op_count)
+		return 0;
+
+	for (u32 i = 0; i < sync_op_count; i++) {
+		struct pvr_sync_signal *sig_sync;
+		struct dma_fence *fence;
+
+		if (sync_ops[i].flags & DRM_PVR_SYNC_OP_FLAG_SIGNAL)
+			continue;
+
+		err = pvr_check_sync_op(&sync_ops[i]);
+		if (err)
+			return err;
+
+		sig_sync = pvr_sync_signal_array_search(signal_array, sync_ops[i].handle,
+							sync_ops[i].value);
+		if (sig_sync) {
+			if (WARN_ON(!sig_sync->fence))
+				return -EINVAL;
+
+			fence = dma_fence_get(sig_sync->fence);
+		} else {
+			err = drm_syncobj_find_fence(from_pvr_file(pvr_file), sync_ops[i].handle,
+						     sync_ops[i].value, 0, &fence);
+			if (err)
+				return err;
+		}
+
+		err = pvr_sync_add_dep_to_job(job, fence);
+		if (err)
+			return err;
+	}
+
+	return 0;
+}
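
For reference, a condensed sketch of how the helpers above are chained by the
submission path in pvr_job.c (error handling, locking and the drm_exec setup
are omitted; variable names are placeholders):

  struct xarray signal_array;

  xa_init_flags(&signal_array, XA_FLAGS_ALLOC);

  /* 1. Cache every syncobj/point the jobs will signal. */
  pvr_sync_signal_array_collect_ops(&signal_array, file, op_count, ops);

  /* 2. Turn wait operations into drm_sched dependencies on the job. */
  pvr_sync_add_deps_to_job(pvr_file, &job->base, op_count, ops, &signal_array);

  /* 3. Once the job is armed, its done fence is what the signal entries
   *    will eventually expose.
   */
  pvr_sync_signal_array_update_fences(&signal_array, op_count, ops, done_fence);

  /* 4. Past the point of no return, attach the fences to the syncobjs. */
  pvr_sync_signal_array_push_fences(&signal_array);
  pvr_sync_signal_array_cleanup(&signal_array);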
diff --git a/drivers/gpu/drm/imagination/pvr_sync.h b/drivers/gpu/drm/imagination/pvr_sync.h
new file mode 100644
index 000000000000..e6278f0384c6
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_sync.h
@@ -0,0 +1,84 @@
+/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#ifndef PVR_SYNC_H
+#define PVR_SYNC_H
+
+#include <uapi/drm/pvr_drm.h>
+
+/* Forward declaration from <linux/xarray.h>. */
+struct xarray;
+
+/* Forward declaration from <drm/drm_file.h>. */
+struct drm_file;
+
+/* Forward declaration from <drm/gpu_scheduler.h>. */
+struct drm_sched_job;
+
+/* Forward declaration from "pvr_device.h". */
+struct pvr_file;
+
+/**
+ * struct pvr_sync_signal - Object encoding a syncobj signal operation
+ *
+ * The job submission logic collects all signal operations in an array of
+ * pvr_sync_signal objects. This array also serves as a cache to get the
+ * latest dma_fence when multiple jobs are submitted at once, and one job
+ * signals a syncobj point that's later waited on by a subsequent job.
+ */
+struct pvr_sync_signal {
+	/** @handle: Handle of the syncobj to signal. */
+	u32 handle;
+
+	/**
+	 * @point: Point to signal in the syncobj.
+	 *
+	 * Only relevant for timeline syncobjs.
+	 */
+	u64 point;
+
+	/** @syncobj: Syncobj retrieved from the handle. */
+	struct drm_syncobj *syncobj;
+
+	/**
+	 * @chain: Chain object used to link the new fence with the
+	 *	   existing timeline syncobj.
+	 *
+	 * Should be zero when manipulating a regular syncobj.
+	 */
+	struct dma_fence_chain *chain;
+
+	/**
+	 * @fence: New fence object to attach to the syncobj.
+	 *
+	 * This pointer starts with the current fence bound to
+	 * the <handle,point> pair.
+	 */
+	struct dma_fence *fence;
+};
+
+void
+pvr_sync_signal_array_cleanup(struct xarray *array);
+
+int
+pvr_sync_signal_array_collect_ops(struct xarray *array,
+				  struct drm_file *file,
+				  u32 sync_op_count,
+				  const struct drm_pvr_sync_op *sync_ops);
+
+int
+pvr_sync_signal_array_update_fences(struct xarray *array,
+				    u32 sync_op_count,
+				    const struct drm_pvr_sync_op *sync_ops,
+				    struct dma_fence *done_fence);
+
+void
+pvr_sync_signal_array_push_fences(struct xarray *array);
+
+int
+pvr_sync_add_deps_to_job(struct pvr_file *pvr_file, struct drm_sched_job *job,
+			 u32 sync_op_count,
+			 const struct drm_pvr_sync_op *sync_ops,
+			 struct xarray *signal_array);
+
+#endif /* PVR_SYNC_H */
-- 
2.41.0


^ permalink raw reply related	[flat|nested] 77+ messages in thread

* [PATCH v5 14/17] drm/imagination: Implement job submission and scheduling
@ 2023-08-16  8:25   ` Sarah Walker
  0 siblings, 0 replies; 77+ messages in thread
From: Sarah Walker @ 2023-08-16  8:25 UTC (permalink / raw)
  To: dri-devel
  Cc: frank.binns, donald.robson, boris.brezillon, faith.ekstrand,
	airlied, daniel, maarten.lankhorst, mripard, tzimmermann, afd,
	hns, matthew.brost, christian.koenig, luben.tuikov, dakr,
	linux-kernel

Implement job submission ioctl. Job scheduling is implemented using
drm_sched.

Jobs are submitted in a stream format. This is intended to allow the UAPI
data format to be independent of the actual FWIF structures in use, which
vary depending on the GPU in use.

The stream formats are documented at:
https://gitlab.freedesktop.org/mesa/mesa/-/blob/f8d2b42ae65c2f16f36a43e0ae39d288431e4263/src/imagination/csbgen/rogue_kmd_stream.xml
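
As a rough illustration (not part of the patch, with placeholder handles and
pointers), a single fragment job built on the UAPI added here might look like
this from userspace; the command stream itself is opaque to the kernel and is
generated by Mesa according to the XML description above:

  /* Illustrative sketch only; handle and stream values are placeholders. */
  struct drm_pvr_job job = {
          .type = DRM_PVR_JOB_TYPE_FRAGMENT,
          .context_handle = render_ctx_handle,   /* render context handle */
          .cmd_stream = (__u64)(uintptr_t)frag_stream,
          .cmd_stream_len = frag_stream_len,
          .hwrt = {
                  .set_handle = hwrt_set_handle,  /* HWRT data set handle */
                  .data_index = 0,
          },
          /* .sync_ops points at an array of struct drm_pvr_sync_op describing
           * the syncobj waits/signals for this job.
           */
  };

  /* The job (or an array of jobs) is then wrapped in
   * struct drm_pvr_ioctl_submit_jobs_args (args.jobs.count = 1) and passed to
   * the submit ioctl; the kernel copies it with PVR_UOBJ_GET_ARRAY() in
   * pvr_submit_jobs().
   */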

This patch depends on:
drm/sched: Convert drm scheduler to use a work queue rather than kthread:
  https://lore.kernel.org/dri-devel/20230404002211.3611376-2-matthew.brost@intel.com/
drm/sched: Move schedule policy to scheduler / entity:
  https://lore.kernel.org/dri-devel/20230404002211.3611376-3-matthew.brost@intel.com/
drm/sched: Add DRM_SCHED_POLICY_SINGLE_ENTITY scheduling policy:
  https://lore.kernel.org/dri-devel/20230404002211.3611376-4-matthew.brost@intel.com/
drm/sched: Start run wq before TDR in drm_sched_start:
  https://lore.kernel.org/dri-devel/20230404002211.3611376-6-matthew.brost@intel.com/
drm/sched: Submit job before starting TDR:
  https://lore.kernel.org/dri-devel/20230404002211.3611376-7-matthew.brost@intel.com/
drm/sched: Add helper to set TDR timeout:
  https://lore.kernel.org/dri-devel/20230404002211.3611376-8-matthew.brost@intel.com/

Changes since v4:
- Use a regular workqueue for job scheduling

Changes since v3:
- Support partial render jobs
- Add job timeout handler
- Split sync handling out of job code
- Use drm_dev_{enter,exit}

Changes since v2:
- Use drm_sched for job scheduling

Co-developed-by: Boris Brezillon <boris.brezillon@collabora.com>
Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
Co-developed-by: Donald Robson <donald.robson@imgtec.com>
Signed-off-by: Donald Robson <donald.robson@imgtec.com>
Signed-off-by: Sarah Walker <sarah.walker@imgtec.com>
---
 drivers/gpu/drm/imagination/Kconfig           |    1 +
 drivers/gpu/drm/imagination/Makefile          |    3 +
 drivers/gpu/drm/imagination/pvr_context.c     |  125 +-
 drivers/gpu/drm/imagination/pvr_context.h     |   44 +
 drivers/gpu/drm/imagination/pvr_device.c      |   31 +
 drivers/gpu/drm/imagination/pvr_device.h      |   21 +
 drivers/gpu/drm/imagination/pvr_drv.c         |   40 +-
 drivers/gpu/drm/imagination/pvr_job.c         |  770 +++++++++
 drivers/gpu/drm/imagination/pvr_job.h         |  161 ++
 drivers/gpu/drm/imagination/pvr_power.c       |   28 +
 drivers/gpu/drm/imagination/pvr_queue.c       | 1455 +++++++++++++++++
 drivers/gpu/drm/imagination/pvr_queue.h       |  179 ++
 drivers/gpu/drm/imagination/pvr_stream_defs.c |  226 +++
 drivers/gpu/drm/imagination/pvr_sync.c        |  287 ++++
 drivers/gpu/drm/imagination/pvr_sync.h        |   84 +
 15 files changed, 3451 insertions(+), 4 deletions(-)
 create mode 100644 drivers/gpu/drm/imagination/pvr_job.c
 create mode 100644 drivers/gpu/drm/imagination/pvr_job.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_queue.c
 create mode 100644 drivers/gpu/drm/imagination/pvr_queue.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_sync.c
 create mode 100644 drivers/gpu/drm/imagination/pvr_sync.h

diff --git a/drivers/gpu/drm/imagination/Kconfig b/drivers/gpu/drm/imagination/Kconfig
index b7cc82de53fc..113dea126b00 100644
--- a/drivers/gpu/drm/imagination/Kconfig
+++ b/drivers/gpu/drm/imagination/Kconfig
@@ -5,6 +5,7 @@ config DRM_POWERVR
 	tristate "Imagination Technologies PowerVR Graphics (Series 6 and later)"
 	depends on ARM64
 	depends on DRM
+	select DRM_EXEC
 	select DRM_GEM_SHMEM_HELPER
 	select DRM_SCHED
 	select FW_LOADER
diff --git a/drivers/gpu/drm/imagination/Makefile b/drivers/gpu/drm/imagination/Makefile
index 0c8ab120f277..313af5312d7b 100644
--- a/drivers/gpu/drm/imagination/Makefile
+++ b/drivers/gpu/drm/imagination/Makefile
@@ -18,10 +18,13 @@ powervr-y := \
 	pvr_fw_trace.o \
 	pvr_gem.o \
 	pvr_hwrt.o \
+	pvr_job.o \
 	pvr_mmu.o \
 	pvr_power.o \
+	pvr_queue.o \
 	pvr_stream.o \
 	pvr_stream_defs.o \
+	pvr_sync.o \
 	pvr_vm.o \
 	pvr_vm_mips.o
 
diff --git a/drivers/gpu/drm/imagination/pvr_context.c b/drivers/gpu/drm/imagination/pvr_context.c
index a87f5a295397..4a4c93c5d907 100644
--- a/drivers/gpu/drm/imagination/pvr_context.c
+++ b/drivers/gpu/drm/imagination/pvr_context.c
@@ -6,10 +6,12 @@
 #include "pvr_device.h"
 #include "pvr_drv.h"
 #include "pvr_gem.h"
+#include "pvr_job.h"
 #include "pvr_power.h"
 #include "pvr_rogue_fwif.h"
 #include "pvr_rogue_fwif_common.h"
 #include "pvr_rogue_fwif_resetframework.h"
+#include "pvr_stream.h"
 #include "pvr_stream_defs.h"
 #include "pvr_vm.h"
 
@@ -164,6 +166,116 @@ ctx_fw_data_init(void *cpu_ptr, void *priv)
 	memcpy(cpu_ptr, ctx->data, ctx->data_size);
 }
 
+/**
+ * pvr_context_destroy_queues() - Destroy all queues attached to a context.
+ * @ctx: Context to destroy queues on.
+ *
+ * Should be called when the last reference to a context object is dropped.
+ * It releases all resources attached to the queues bound to this context.
+ */
+static void pvr_context_destroy_queues(struct pvr_context *ctx)
+{
+	switch (ctx->type) {
+	case DRM_PVR_CTX_TYPE_RENDER:
+		pvr_queue_destroy(ctx->queues.fragment);
+		pvr_queue_destroy(ctx->queues.geometry);
+		break;
+	case DRM_PVR_CTX_TYPE_COMPUTE:
+		pvr_queue_destroy(ctx->queues.compute);
+		break;
+	case DRM_PVR_CTX_TYPE_TRANSFER_FRAG:
+		pvr_queue_destroy(ctx->queues.transfer);
+		break;
+	}
+}
+
+/**
+ * pvr_context_create_queues() - Create all queues attached to a context.
+ * @ctx: Context to create queues on.
+ * @args: Context creation arguments passed by userspace.
+ * @fw_ctx_map: CPU mapping of the FW context object.
+ *
+ * Return:
+ *  * 0 on success, or
+ *  * A negative error code otherwise.
+ */
+static int pvr_context_create_queues(struct pvr_context *ctx,
+				     struct drm_pvr_ioctl_create_context_args *args,
+				     void *fw_ctx_map)
+{
+	int err;
+
+	switch (ctx->type) {
+	case DRM_PVR_CTX_TYPE_RENDER:
+		ctx->queues.geometry = pvr_queue_create(ctx, DRM_PVR_JOB_TYPE_GEOMETRY,
+							args, fw_ctx_map);
+		if (IS_ERR(ctx->queues.geometry)) {
+			err = PTR_ERR(ctx->queues.geometry);
+			ctx->queues.geometry = NULL;
+			goto err_destroy_queues;
+		}
+
+		ctx->queues.fragment = pvr_queue_create(ctx, DRM_PVR_JOB_TYPE_FRAGMENT,
+							args, fw_ctx_map);
+		if (IS_ERR(ctx->queues.fragment)) {
+			err = PTR_ERR(ctx->queues.fragment);
+			ctx->queues.fragment = NULL;
+			goto err_destroy_queues;
+		}
+		return 0;
+
+	case DRM_PVR_CTX_TYPE_COMPUTE:
+		ctx->queues.compute = pvr_queue_create(ctx, DRM_PVR_JOB_TYPE_COMPUTE,
+						       args, fw_ctx_map);
+		if (IS_ERR(ctx->queues.compute)) {
+			err = PTR_ERR(ctx->queues.compute);
+			ctx->queues.compute = NULL;
+			goto err_destroy_queues;
+		}
+		return 0;
+
+	case DRM_PVR_CTX_TYPE_TRANSFER_FRAG:
+		ctx->queues.transfer = pvr_queue_create(ctx, DRM_PVR_JOB_TYPE_TRANSFER_FRAG,
+							args, fw_ctx_map);
+		if (IS_ERR(ctx->queues.transfer)) {
+			err = PTR_ERR(ctx->queues.transfer);
+			ctx->queues.transfer = NULL;
+			goto err_destroy_queues;
+		}
+		return 0;
+	}
+
+	return -EINVAL;
+
+err_destroy_queues:
+	pvr_context_destroy_queues(ctx);
+	return err;
+}
+
+/**
+ * pvr_context_kill_queues() - Kill queues attached to context.
+ * @ctx: Context to kill queues on.
+ *
+ * Killing the queues implies making them unusable for future jobs, while still
+ * giving the currently submitted jobs a chance to finish. Queue resources will
+ * stay around until pvr_context_destroy_queues() is called.
+ */
+static void pvr_context_kill_queues(struct pvr_context *ctx)
+{
+	switch (ctx->type) {
+	case DRM_PVR_CTX_TYPE_RENDER:
+		pvr_queue_kill(ctx->queues.fragment);
+		pvr_queue_kill(ctx->queues.geometry);
+		break;
+	case DRM_PVR_CTX_TYPE_COMPUTE:
+		pvr_queue_kill(ctx->queues.compute);
+		break;
+	case DRM_PVR_CTX_TYPE_TRANSFER_FRAG:
+		pvr_queue_kill(ctx->queues.transfer);
+		break;
+	}
+}
+
 /**
  * pvr_context_create() - Create a context.
  * @pvr_file: File to attach the created context to.
@@ -214,10 +326,14 @@ int pvr_context_create(struct pvr_file *pvr_file, struct drm_pvr_ioctl_create_co
 		goto err_put_vm;
 	}
 
-	err = init_fw_objs(ctx, args, ctx->data);
+	err = pvr_context_create_queues(ctx, args, ctx->data);
 	if (err)
 		goto err_free_ctx_data;
 
+	err = init_fw_objs(ctx, args, ctx->data);
+	if (err)
+		goto err_destroy_queues;
+
 	err = pvr_fw_object_create(pvr_dev, ctx_size, PVR_BO_FW_FLAGS_DEVICE_UNCACHED,
 				   ctx_fw_data_init, ctx, &ctx->fw_obj);
 	if (err)
@@ -239,6 +355,9 @@ int pvr_context_create(struct pvr_file *pvr_file, struct drm_pvr_ioctl_create_co
 err_destroy_fw_obj:
 	pvr_fw_object_destroy(ctx->fw_obj);
 
+err_destroy_queues:
+	pvr_context_destroy_queues(ctx);
+
 err_free_ctx_data:
 	kfree(ctx->data);
 
@@ -258,6 +377,7 @@ pvr_context_release(struct kref *ref_count)
 	struct pvr_device *pvr_dev = ctx->pvr_dev;
 
 	xa_erase(&pvr_dev->ctx_ids, ctx->ctx_id);
+	pvr_context_destroy_queues(ctx);
 	pvr_fw_object_destroy(ctx->fw_obj);
 	kfree(ctx->data);
 	pvr_vm_context_put(ctx->vm_ctx);
@@ -295,6 +415,9 @@ pvr_context_destroy(struct pvr_file *pvr_file, u32 handle)
 	if (!ctx)
 		return -EINVAL;
 
+	/* Make sure nothing can be queued to the queues after that point. */
+	pvr_context_kill_queues(ctx);
+
 	/* Release the reference held by the handle set. */
 	pvr_context_put(ctx);
 
diff --git a/drivers/gpu/drm/imagination/pvr_context.h b/drivers/gpu/drm/imagination/pvr_context.h
index fa7115380272..582d64c44d5f 100644
--- a/drivers/gpu/drm/imagination/pvr_context.h
+++ b/drivers/gpu/drm/imagination/pvr_context.h
@@ -15,6 +15,7 @@
 
 #include "pvr_cccb.h"
 #include "pvr_device.h"
+#include "pvr_queue.h"
 
 /* Forward declaration from pvr_gem.h. */
 struct pvr_fw_object;
@@ -58,8 +59,51 @@ struct pvr_context {
 
 	/** @ctx_id: FW context ID. */
 	u32 ctx_id;
+
+	/**
+	 * @faulty: Set to 1 when the context queues had unfinished jobs at
+	 * the time of a GPU reset.
+	 *
+	 * In that case, the context is in an inconsistent state and can't be
+	 * used anymore.
+	 */
+	atomic_t faulty;
+
+	/** @queues: Union containing all kind of queues. */
+	union {
+		struct {
+			/** @geometry: Geometry queue. */
+			struct pvr_queue *geometry;
+
+			/** @fragment: Fragment queue. */
+			struct pvr_queue *fragment;
+		};
+
+		/** @compute: Compute queue. */
+		struct pvr_queue *compute;
+
+		/** @transfer: Transfer queue. */
+		struct pvr_queue *transfer;
+	} queues;
 };
 
+static __always_inline struct pvr_queue *
+pvr_context_get_queue_for_job(struct pvr_context *ctx, enum drm_pvr_job_type type)
+{
+	switch (type) {
+	case DRM_PVR_JOB_TYPE_GEOMETRY:
+		return ctx->type == DRM_PVR_CTX_TYPE_RENDER ? ctx->queues.geometry : NULL;
+	case DRM_PVR_JOB_TYPE_FRAGMENT:
+		return ctx->type == DRM_PVR_CTX_TYPE_RENDER ? ctx->queues.fragment : NULL;
+	case DRM_PVR_JOB_TYPE_COMPUTE:
+		return ctx->type == DRM_PVR_CTX_TYPE_COMPUTE ? ctx->queues.compute : NULL;
+	case DRM_PVR_JOB_TYPE_TRANSFER_FRAG:
+		return ctx->type == DRM_PVR_CTX_TYPE_TRANSFER_FRAG ? ctx->queues.transfer : NULL;
+	}
+
+	return NULL;
+}
+
 /**
  * pvr_context_get() - Take additional reference on context.
  * @ctx: Context pointer.
diff --git a/drivers/gpu/drm/imagination/pvr_device.c b/drivers/gpu/drm/imagination/pvr_device.c
index bc7333444388..9ae758a88644 100644
--- a/drivers/gpu/drm/imagination/pvr_device.c
+++ b/drivers/gpu/drm/imagination/pvr_device.c
@@ -6,7 +6,9 @@
 
 #include "pvr_fw.h"
 #include "pvr_power.h"
+#include "pvr_queue.h"
 #include "pvr_rogue_cr_defs.h"
+#include "pvr_stream.h"
 #include "pvr_vm.h"
 
 #include <drm/drm_print.h>
@@ -117,6 +119,32 @@ static int pvr_device_clk_init(struct pvr_device *pvr_dev)
 	return 0;
 }
 
+/**
+ * pvr_device_process_active_queues() - Process all queue-related events.
+ * @pvr_dev: PowerVR device to check
+ *
+ * This is called any time we receive a FW event. It iterates over all
+ * active queues and calls pvr_queue_process() on them.
+ */
+void pvr_device_process_active_queues(struct pvr_device *pvr_dev)
+{
+	struct pvr_queue *queue, *tmp_queue;
+	LIST_HEAD(active_queues);
+
+	mutex_lock(&pvr_dev->queues.lock);
+
+	/* Move all active queues to a temporary list. Queues that remain
+	 * active after we're done processing them are re-inserted to
+	 * the queues.active list by pvr_queue_process().
+	 */
+	list_splice_init(&pvr_dev->queues.active, &active_queues);
+
+	list_for_each_entry_safe(queue, tmp_queue, &active_queues, node)
+		pvr_queue_process(queue);
+
+	mutex_unlock(&pvr_dev->queues.lock);
+}
+
 static irqreturn_t pvr_device_irq_thread_handler(int irq, void *data)
 {
 	struct pvr_device *pvr_dev = data;
@@ -132,6 +160,7 @@ static irqreturn_t pvr_device_irq_thread_handler(int irq, void *data)
 		if (pvr_dev->fw_dev.booted) {
 			pvr_fwccb_process(pvr_dev);
 			pvr_kccb_wake_up_waiters(pvr_dev);
+			pvr_device_process_active_queues(pvr_dev);
 		}
 
 		pm_runtime_mark_last_busy(from_pvr_device(pvr_dev)->dev);
@@ -398,6 +427,8 @@ pvr_device_gpu_init(struct pvr_device *pvr_dev)
 	else
 		return -EINVAL;
 
+	pvr_stream_create_musthave_masks(pvr_dev);
+
 	err = pvr_set_dma_info(pvr_dev);
 	if (err)
 		return err;
diff --git a/drivers/gpu/drm/imagination/pvr_device.h b/drivers/gpu/drm/imagination/pvr_device.h
index 431b8d8ad579..fc6124ae4b26 100644
--- a/drivers/gpu/drm/imagination/pvr_device.h
+++ b/drivers/gpu/drm/imagination/pvr_device.h
@@ -167,6 +167,26 @@ struct pvr_device {
 	 */
 	struct xarray free_list_ids;
 
+	/**
+	 * @job_ids: Array of jobs belonging to this device. Array members
+	 *           are of type "struct pvr_job *".
+	 */
+	struct xarray job_ids;
+
+	/**
+	 * @queues: Queue-related fields.
+	 */
+	struct {
+		/** @active: Active queue list. */
+		struct list_head active;
+
+		/** @idle: Idle queue list. */
+		struct list_head idle;
+
+		/** @lock: Lock protecting access to the active/idle lists. */
+		struct mutex lock;
+	} queues;
+
 	struct {
 		/** @work: Work item for watchdog callback. */
 		struct delayed_work work;
@@ -436,6 +456,7 @@ packed_bvnc_to_pvr_gpu_id(u64 bvnc, struct pvr_gpu_id *gpu_id)
 
 int pvr_device_init(struct pvr_device *pvr_dev);
 void pvr_device_fini(struct pvr_device *pvr_dev);
+void pvr_device_reset(struct pvr_device *pvr_dev);
 
 bool
 pvr_device_has_uapi_quirk(struct pvr_device *pvr_dev, u32 quirk);
diff --git a/drivers/gpu/drm/imagination/pvr_drv.c b/drivers/gpu/drm/imagination/pvr_drv.c
index 02c6b8542436..757263eee34e 100644
--- a/drivers/gpu/drm/imagination/pvr_drv.c
+++ b/drivers/gpu/drm/imagination/pvr_drv.c
@@ -7,6 +7,7 @@
 #include "pvr_free_list.h"
 #include "pvr_gem.h"
 #include "pvr_hwrt.h"
+#include "pvr_job.h"
 #include "pvr_power.h"
 #include "pvr_rogue_defs.h"
 #include "pvr_rogue_fwif_client.h"
@@ -31,6 +32,8 @@
 #include <linux/of_device.h>
 #include <linux/of_platform.h>
 #include <linux/platform_device.h>
+#include <linux/pm_runtime.h>
+#include <linux/xarray.h>
 
 /**
  * DOC: PowerVR (Series 6 and later) Graphics Driver
@@ -411,7 +414,8 @@ pvr_dev_query_runtime_info_get(struct pvr_device *pvr_dev,
 		return 0;
 	}
 
-	runtime_info.free_list_min_pages = 0; /* FIXME */
+	runtime_info.free_list_min_pages =
+		pvr_get_free_list_min_pages(pvr_dev);
 	runtime_info.free_list_max_pages =
 		ROGUE_PM_MAX_FREELIST_SIZE / ROGUE_PM_PAGE_SIZE;
 	runtime_info.common_store_alloc_region_size =
@@ -1151,7 +1155,20 @@ static int
 pvr_ioctl_submit_jobs(struct drm_device *drm_dev, void *raw_args,
 		      struct drm_file *file)
 {
-	return -ENOTTY;
+	struct drm_pvr_ioctl_submit_jobs_args *args = raw_args;
+	struct pvr_device *pvr_dev = to_pvr_device(drm_dev);
+	struct pvr_file *pvr_file = to_pvr_file(file);
+	int idx;
+	int err;
+
+	if (!drm_dev_enter(drm_dev, &idx))
+		return -EIO;
+
+	err = pvr_submit_jobs(pvr_dev, pvr_file, args);
+
+	drm_dev_exit(idx);
+
+	return err;
 }
 
 int
@@ -1367,7 +1384,8 @@ pvr_drm_driver_postclose(__always_unused struct drm_device *drm_dev,
 DEFINE_DRM_GEM_FOPS(pvr_drm_driver_fops);
 
 static struct drm_driver pvr_drm_driver = {
-	.driver_features = DRIVER_GEM | DRIVER_GEM_GPUVA | DRIVER_RENDER,
+	.driver_features = DRIVER_GEM | DRIVER_GEM_GPUVA | DRIVER_RENDER |
+			   DRIVER_SYNCOBJ | DRIVER_SYNCOBJ_TIMELINE,
 	.open = pvr_drm_driver_open,
 	.postclose = pvr_drm_driver_postclose,
 	.ioctls = pvr_drm_driver_ioctls,
@@ -1400,8 +1418,15 @@ pvr_probe(struct platform_device *plat_dev)
 	drm_dev = &pvr_dev->base;
 
 	platform_set_drvdata(plat_dev, drm_dev);
+
+	init_rwsem(&pvr_dev->reset_sem);
+
 	pvr_context_device_init(pvr_dev);
 
+	err = pvr_queue_device_init(pvr_dev);
+	if (err)
+		goto err_context_fini;
+
 	devm_pm_runtime_enable(&plat_dev->dev);
 	pm_runtime_mark_last_busy(&plat_dev->dev);
 
@@ -1418,6 +1443,7 @@ pvr_probe(struct platform_device *plat_dev)
 		goto err_device_fini;
 
 	xa_init_flags(&pvr_dev->free_list_ids, XA_FLAGS_ALLOC1);
+	xa_init_flags(&pvr_dev->job_ids, XA_FLAGS_ALLOC1);
 
 	return 0;
 
@@ -1427,6 +1453,11 @@ pvr_probe(struct platform_device *plat_dev)
 err_watchdog_fini:
 	pvr_watchdog_fini(pvr_dev);
 
+	pvr_queue_device_fini(pvr_dev);
+
+err_context_fini:
+	pvr_context_device_fini(pvr_dev);
+
 	return err;
 }
 
@@ -1436,14 +1467,17 @@ pvr_remove(struct platform_device *plat_dev)
 	struct drm_device *drm_dev = platform_get_drvdata(plat_dev);
 	struct pvr_device *pvr_dev = to_pvr_device(drm_dev);
 
+	WARN_ON(!xa_empty(&pvr_dev->job_ids));
 	WARN_ON(!xa_empty(&pvr_dev->free_list_ids));
 
+	xa_destroy(&pvr_dev->job_ids);
 	xa_destroy(&pvr_dev->free_list_ids);
 
 	pm_runtime_suspend(drm_dev->dev);
 	drm_dev_unplug(drm_dev);
 	pvr_device_fini(pvr_dev);
 	pvr_watchdog_fini(pvr_dev);
+	pvr_queue_device_fini(pvr_dev);
 	pvr_context_device_fini(pvr_dev);
 
 	return 0;
diff --git a/drivers/gpu/drm/imagination/pvr_job.c b/drivers/gpu/drm/imagination/pvr_job.c
new file mode 100644
index 000000000000..fc93b81d1ffe
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_job.c
@@ -0,0 +1,770 @@
+// SPDX-License-Identifier: GPL-2.0 OR MIT
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#include "pvr_context.h"
+#include "pvr_device.h"
+#include "pvr_drv.h"
+#include "pvr_gem.h"
+#include "pvr_hwrt.h"
+#include "pvr_job.h"
+#include "pvr_power.h"
+#include "pvr_rogue_fwif.h"
+#include "pvr_rogue_fwif_client.h"
+#include "pvr_stream.h"
+#include "pvr_stream_defs.h"
+#include "pvr_sync.h"
+
+#include <drm/drm_exec.h>
+#include <drm/drm_gem.h>
+#include <linux/types.h>
+#include <uapi/drm/pvr_drm.h>
+
+static void pvr_job_release(struct kref *kref)
+{
+	struct pvr_job *job = container_of(kref, struct pvr_job, ref_count);
+
+	xa_erase(&job->pvr_dev->job_ids, job->id);
+
+	pvr_hwrt_data_put(job->hwrt);
+	pvr_context_put(job->ctx);
+
+	WARN_ON(job->paired_job);
+
+	pvr_queue_job_cleanup(job);
+	pvr_job_release_pm_ref(job);
+
+	kfree(job->cmd);
+	kfree(job);
+}
+
+/**
+ * pvr_job_put() - Release reference on job
+ * @job: Target job.
+ */
+void
+pvr_job_put(struct pvr_job *job)
+{
+	if (job)
+		kref_put(&job->ref_count, pvr_job_release);
+}
+
+/**
+ * pvr_job_process_stream() - Build job FW structure from stream
+ * @pvr_dev: Device pointer.
+ * @cmd_defs: Stream definition.
+ * @stream: Pointer to command stream.
+ * @stream_size: Size of command stream, in bytes.
+ * @job: Pointer to job.
+ *
+ * Caller is responsible for freeing the output structure.
+ *
+ * Returns:
+ *  * 0 on success,
+ *  * -%ENOMEM on out of memory, or
+ *  * -%EINVAL on malformed stream.
+ */
+static int
+pvr_job_process_stream(struct pvr_device *pvr_dev, const struct pvr_stream_cmd_defs *cmd_defs,
+		       void *stream, u32 stream_size, struct pvr_job *job)
+{
+	int err;
+
+	job->cmd = kzalloc(cmd_defs->dest_size, GFP_KERNEL);
+	if (!job->cmd)
+		return -ENOMEM;
+
+	job->cmd_len = cmd_defs->dest_size;
+
+	err = pvr_stream_process(pvr_dev, cmd_defs, stream, stream_size, job->cmd);
+	if (err)
+		kfree(job->cmd);
+
+	return err;
+}
+
+static int pvr_fw_cmd_init(struct pvr_device *pvr_dev, struct pvr_job *job,
+			   const struct pvr_stream_cmd_defs *stream_def,
+			   u64 stream_userptr, u32 stream_len)
+{
+	void *stream;
+	int err;
+
+	stream = kzalloc(stream_len, GFP_KERNEL);
+	if (!stream)
+		return -ENOMEM;
+
+	if (copy_from_user(stream, u64_to_user_ptr(stream_userptr), stream_len)) {
+		err = -EFAULT;
+		goto err_free_stream;
+	}
+
+	err = pvr_job_process_stream(pvr_dev, stream_def, stream, stream_len, job);
+
+err_free_stream:
+	kfree(stream);
+
+	return err;
+}
+
+static u32
+convert_geom_flags(u32 in_flags)
+{
+	u32 out_flags = 0;
+
+	if (in_flags & DRM_PVR_SUBMIT_JOB_GEOM_CMD_FIRST)
+		out_flags |= ROGUE_GEOM_FLAGS_FIRSTKICK;
+	if (in_flags & DRM_PVR_SUBMIT_JOB_GEOM_CMD_LAST)
+		out_flags |= ROGUE_GEOM_FLAGS_LASTKICK;
+	if (in_flags & DRM_PVR_SUBMIT_JOB_GEOM_CMD_SINGLE_CORE)
+		out_flags |= ROGUE_GEOM_FLAGS_SINGLE_CORE;
+
+	return out_flags;
+}
+
+static u32
+convert_frag_flags(u32 in_flags)
+{
+	u32 out_flags = 0;
+
+	if (in_flags & DRM_PVR_SUBMIT_JOB_FRAG_CMD_SINGLE_CORE)
+		out_flags |= ROGUE_FRAG_FLAGS_SINGLE_CORE;
+	if (in_flags & DRM_PVR_SUBMIT_JOB_FRAG_CMD_DEPTHBUFFER)
+		out_flags |= ROGUE_FRAG_FLAGS_DEPTHBUFFER;
+	if (in_flags & DRM_PVR_SUBMIT_JOB_FRAG_CMD_STENCILBUFFER)
+		out_flags |= ROGUE_FRAG_FLAGS_STENCILBUFFER;
+	if (in_flags & DRM_PVR_SUBMIT_JOB_FRAG_CMD_PREVENT_CDM_OVERLAP)
+		out_flags |= ROGUE_FRAG_FLAGS_PREVENT_CDM_OVERLAP;
+	if (in_flags & DRM_PVR_SUBMIT_JOB_FRAG_CMD_SCRATCHBUFFER)
+		out_flags |= ROGUE_FRAG_FLAGS_SCRATCHBUFFER;
+	if (in_flags & DRM_PVR_SUBMIT_JOB_FRAG_CMD_GET_VIS_RESULTS)
+		out_flags |= ROGUE_FRAG_FLAGS_GET_VIS_RESULTS;
+
+	return out_flags;
+}
+
+static int
+pvr_geom_job_fw_cmd_init(struct pvr_job *job,
+			 struct drm_pvr_job *args)
+{
+	struct rogue_fwif_cmd_geom *cmd;
+	int err;
+
+	if (args->flags & ~DRM_PVR_SUBMIT_JOB_GEOM_CMD_FLAGS_MASK)
+		return -EINVAL;
+
+	if (job->ctx->type != DRM_PVR_CTX_TYPE_RENDER)
+		return -EINVAL;
+
+	if (!job->hwrt)
+		return -EINVAL;
+
+	job->fw_ccb_cmd_type = ROGUE_FWIF_CCB_CMD_TYPE_GEOM;
+	err = pvr_fw_cmd_init(job->pvr_dev, job, &pvr_cmd_geom_stream,
+			      args->cmd_stream, args->cmd_stream_len);
+	if (err)
+		return err;
+
+	cmd = job->cmd;
+	cmd->cmd_shared.cmn.frame_num = 0;
+	cmd->flags = convert_geom_flags(args->flags);
+	pvr_fw_object_get_fw_addr(job->hwrt->fw_obj, &cmd->cmd_shared.hwrt_data_fw_addr);
+	return 0;
+}
+
+static int
+pvr_frag_job_fw_cmd_init(struct pvr_job *job,
+			 struct drm_pvr_job *args)
+{
+	struct rogue_fwif_cmd_frag *cmd;
+	int err;
+
+	if (args->flags & ~DRM_PVR_SUBMIT_JOB_FRAG_CMD_FLAGS_MASK)
+		return -EINVAL;
+
+	if (job->ctx->type != DRM_PVR_CTX_TYPE_RENDER)
+		return -EINVAL;
+
+	if (!job->hwrt)
+		return -EINVAL;
+
+	job->fw_ccb_cmd_type = (args->flags & DRM_PVR_SUBMIT_JOB_FRAG_CMD_PARTIAL_RENDER) ?
+			       ROGUE_FWIF_CCB_CMD_TYPE_FRAG_PR :
+			       ROGUE_FWIF_CCB_CMD_TYPE_FRAG;
+	err = pvr_fw_cmd_init(job->pvr_dev, job, &pvr_cmd_frag_stream,
+			      args->cmd_stream, args->cmd_stream_len);
+	if (err)
+		return err;
+
+	cmd = job->cmd;
+	cmd->cmd_shared.cmn.frame_num = 0;
+	cmd->flags = convert_frag_flags(args->flags);
+	pvr_fw_object_get_fw_addr(job->hwrt->fw_obj, &cmd->cmd_shared.hwrt_data_fw_addr);
+	return 0;
+}
+
+static u32
+convert_compute_flags(u32 in_flags)
+{
+	u32 out_flags = 0;
+
+	if (in_flags & DRM_PVR_SUBMIT_JOB_COMPUTE_CMD_PREVENT_ALL_OVERLAP)
+		out_flags |= ROGUE_COMPUTE_FLAG_PREVENT_ALL_OVERLAP;
+	if (in_flags & DRM_PVR_SUBMIT_JOB_COMPUTE_CMD_SINGLE_CORE)
+		out_flags |= ROGUE_COMPUTE_FLAG_SINGLE_CORE;
+
+	return out_flags;
+}
+
+static int
+pvr_compute_job_fw_cmd_init(struct pvr_job *job,
+			    struct drm_pvr_job *args)
+{
+	struct rogue_fwif_cmd_compute *cmd;
+	int err;
+
+	if (args->flags & ~DRM_PVR_SUBMIT_JOB_COMPUTE_CMD_FLAGS_MASK)
+		return -EINVAL;
+
+	if (job->ctx->type != DRM_PVR_CTX_TYPE_COMPUTE)
+		return -EINVAL;
+
+	job->fw_ccb_cmd_type = ROGUE_FWIF_CCB_CMD_TYPE_CDM;
+	err = pvr_fw_cmd_init(job->pvr_dev, job, &pvr_cmd_compute_stream,
+			      args->cmd_stream, args->cmd_stream_len);
+	if (err)
+		return err;
+
+	cmd = job->cmd;
+	cmd->common.frame_num = 0;
+	cmd->flags = convert_compute_flags(args->flags);
+	return 0;
+}
+
+static u32
+convert_transfer_flags(u32 in_flags)
+{
+	u32 out_flags = 0;
+
+	if (in_flags & DRM_PVR_SUBMIT_JOB_TRANSFER_CMD_SINGLE_CORE)
+		out_flags |= ROGUE_TRANSFER_FLAGS_SINGLE_CORE;
+
+	return out_flags;
+}
+
+static int
+pvr_transfer_job_fw_cmd_init(struct pvr_job *job,
+			     struct drm_pvr_job *args)
+{
+	struct rogue_fwif_cmd_transfer *cmd;
+	int err;
+
+	if (args->flags & ~DRM_PVR_SUBMIT_JOB_TRANSFER_CMD_FLAGS_MASK)
+		return -EINVAL;
+
+	if (job->ctx->type != DRM_PVR_CTX_TYPE_TRANSFER_FRAG)
+		return -EINVAL;
+
+	job->fw_ccb_cmd_type = ROGUE_FWIF_CCB_CMD_TYPE_TQ_3D;
+	err = pvr_fw_cmd_init(job->pvr_dev, job, &pvr_cmd_transfer_stream,
+			      args->cmd_stream, args->cmd_stream_len);
+	if (err)
+		return err;
+
+	cmd = job->cmd;
+	cmd->common.frame_num = 0;
+	cmd->flags = convert_transfer_flags(args->flags);
+	return 0;
+}
+
+static int
+pvr_job_fw_cmd_init(struct pvr_job *job,
+		    struct drm_pvr_job *args)
+{
+	switch (args->type) {
+	case DRM_PVR_JOB_TYPE_GEOMETRY:
+		return pvr_geom_job_fw_cmd_init(job, args);
+
+	case DRM_PVR_JOB_TYPE_FRAGMENT:
+		return pvr_frag_job_fw_cmd_init(job, args);
+
+	case DRM_PVR_JOB_TYPE_COMPUTE:
+		return pvr_compute_job_fw_cmd_init(job, args);
+
+	case DRM_PVR_JOB_TYPE_TRANSFER_FRAG:
+		return pvr_transfer_job_fw_cmd_init(job, args);
+
+	default:
+		return -EINVAL;
+	}
+}
+
+/**
+ * struct pvr_job_data - Helper container for pairing jobs with the
+ * sync_ops supplied for them by the user.
+ */
+struct pvr_job_data {
+	/** @job: Pointer to the job. */
+	struct pvr_job *job;
+
+	/** @sync_ops: Pointer to the sync_ops associated with @job. */
+	struct drm_pvr_sync_op *sync_ops;
+
+	/** @sync_op_count: Number of members of @sync_ops. */
+	u32 sync_op_count;
+};
+
+/**
+ * prepare_job_syncs() - Prepare all sync objects for a single job.
+ * @pvr_file: PowerVR file.
+ * @job_data: Precreated job and sync_ops array.
+ * @signal_array: xarray to receive signal sync objects.
+ *
+ * Returns:
+ *  * 0 on success, or
+ *  * Any error code returned by pvr_sync_signal_array_collect_ops(),
+ *    pvr_sync_add_deps_to_job(), drm_sched_job_add_resv_dependencies() or
+ *    pvr_sync_signal_array_update_fences().
+ */
+static int
+prepare_job_syncs(struct pvr_file *pvr_file,
+		  struct pvr_job_data *job_data,
+		  struct xarray *signal_array)
+{
+	struct dma_fence *done_fence;
+	int err = pvr_sync_signal_array_collect_ops(signal_array,
+						    from_pvr_file(pvr_file),
+						    job_data->sync_op_count,
+						    job_data->sync_ops);
+
+	if (err)
+		return err;
+
+	err = pvr_sync_add_deps_to_job(pvr_file, &job_data->job->base,
+				       job_data->sync_op_count,
+				       job_data->sync_ops, signal_array);
+	if (err)
+		return err;
+
+	if (job_data->job->hwrt) {
+		/* The geometry job writes the HWRT region headers, which are
+		 * then read by the fragment job.
+		 */
+		struct drm_gem_object *obj =
+			gem_from_pvr_gem(job_data->job->hwrt->fw_obj->gem);
+		enum dma_resv_usage usage =
+			dma_resv_usage_rw(job_data->job->type ==
+					  DRM_PVR_JOB_TYPE_GEOMETRY);
+
+		err = drm_sched_job_add_resv_dependencies(&job_data->job->base,
+							  obj->resv, usage);
+		if (err)
+			return err;
+	}
+
+	/* We need to arm the job to get the job done fence. */
+	done_fence = pvr_queue_job_arm(job_data->job);
+
+	err = pvr_sync_signal_array_update_fences(signal_array,
+						  job_data->sync_op_count,
+						  job_data->sync_ops,
+						  done_fence);
+	return err;
+}
+
+/**
+ * prepare_job_syncs_for_each() - Prepare all sync objects for an array of jobs.
+ * @pvr_file: PowerVR file.
+ * @job_data: Array of precreated jobs and their sync_ops.
+ * @job_count: Number of jobs.
+ * @signal_array: xarray to receive signal sync objects.
+ *
+ * Returns:
+ *  * 0 on success, or
+ *  * Any error code returned by prepare_job_syncs().
+ */
+static int
+prepare_job_syncs_for_each(struct pvr_file *pvr_file,
+			   struct pvr_job_data *job_data,
+			   u32 *job_count,
+			   struct xarray *signal_array)
+{
+	for (u32 i = 0; i < *job_count; i++) {
+		int err = prepare_job_syncs(pvr_file, &job_data[i],
+					    signal_array);
+
+		if (err) {
+			*job_count = i;
+			return err;
+		}
+	}
+
+	return 0;
+}
+
+static struct pvr_job *
+create_job(struct pvr_device *pvr_dev,
+	   struct pvr_file *pvr_file,
+	   struct drm_pvr_job *args)
+{
+	struct pvr_job *job = NULL;
+	int err;
+
+	if (!args->cmd_stream || !args->cmd_stream_len)
+		return ERR_PTR(-EINVAL);
+
+	if (args->type != DRM_PVR_JOB_TYPE_GEOMETRY &&
+	    args->type != DRM_PVR_JOB_TYPE_FRAGMENT &&
+	    (args->hwrt.set_handle || args->hwrt.data_index))
+		return ERR_PTR(-EINVAL);
+
+	job = kzalloc(sizeof(*job), GFP_KERNEL);
+	if (!job)
+		return ERR_PTR(-ENOMEM);
+
+	kref_init(&job->ref_count);
+	job->type = args->type;
+	job->pvr_dev = pvr_dev;
+
+	err = xa_alloc(&pvr_dev->job_ids, &job->id, job, xa_limit_32b, GFP_KERNEL);
+	if (err)
+		goto err_put_job;
+
+	job->ctx = pvr_context_lookup(pvr_file, args->context_handle);
+	if (!job->ctx) {
+		err = -EINVAL;
+		goto err_put_job;
+	}
+
+	if (args->hwrt.set_handle) {
+		job->hwrt = pvr_hwrt_data_lookup(pvr_file, args->hwrt.set_handle,
+						 args->hwrt.data_index);
+		if (!job->hwrt) {
+			err = -EINVAL;
+			goto err_put_job;
+		}
+	}
+
+	err = pvr_job_fw_cmd_init(job, args);
+	if (err)
+		goto err_put_job;
+
+	err = pvr_queue_job_init(job);
+	if (err)
+		goto err_put_job;
+
+	return job;
+
+err_put_job:
+	pvr_job_put(job);
+	return ERR_PTR(err);
+}
+
+/**
+ * pvr_job_data_fini() - Cleanup all allocs used to set up job submission.
+ * @job_data: Job data array.
+ * @job_count: Number of members of @job_data.
+ */
+static void
+pvr_job_data_fini(struct pvr_job_data *job_data, u32 job_count)
+{
+	for (u32 i = 0; i < job_count; i++) {
+		pvr_job_put(job_data[i].job);
+		kvfree(job_data[i].sync_ops);
+	}
+}
+
+/**
+ * pvr_job_data_init() - Init an array of created jobs, associating them with
+ * the appropriate sync_ops args, which will be copied in.
+ * @pvr_dev: Target PowerVR device.
+ * @pvr_file: Pointer to PowerVR file structure.
+ * @job_args: Job args array copied from user.
+ * @job_count: Number of members of @job_args.
+ * @job_data_out: Job data array.
+ */
+static int pvr_job_data_init(struct pvr_device *pvr_dev,
+			     struct pvr_file *pvr_file,
+			     struct drm_pvr_job *job_args,
+			     u32 *job_count,
+			     struct pvr_job_data *job_data_out)
+{
+	int err = 0, i = 0;
+
+	for (; i < *job_count; i++) {
+		job_data_out[i].job =
+			create_job(pvr_dev, pvr_file, &job_args[i]);
+		err = PTR_ERR_OR_ZERO(job_data_out[i].job);
+
+		if (err) {
+			*job_count = i;
+			job_data_out[i].job = NULL;
+			goto err_cleanup;
+		}
+
+		err = PVR_UOBJ_GET_ARRAY(job_data_out[i].sync_ops,
+					 &job_args[i].sync_ops);
+		if (err) {
+			*job_count = i;
+			goto err_cleanup;
+		}
+
+		job_data_out[i].sync_op_count = job_args[i].sync_ops.count;
+	}
+
+	return 0;
+
+err_cleanup:
+	pvr_job_data_fini(job_data_out, i);
+
+	return err;
+}
+
+static void
+push_jobs(struct pvr_job_data *job_data, u32 job_count)
+{
+	for (u32 i = 0; i < job_count; i++)
+		pvr_queue_job_push(job_data[i].job);
+}
+
+static int
+prepare_fw_obj_resv(struct drm_exec *exec, struct pvr_fw_object *fw_obj)
+{
+	return drm_exec_prepare_obj(exec, gem_from_pvr_gem(fw_obj->gem), 1);
+}
+
+static int
+jobs_lock_all_objs(struct drm_exec *exec, struct pvr_job_data *job_data,
+		   u32 job_count)
+{
+	for (u32 i = 0; i < job_count; i++) {
+		struct pvr_job *job = job_data[i].job;
+
+		/* Grab a lock on the context, to guard against
+		 * concurrent submission to the same queue.
+		 */
+		int err = drm_exec_lock_obj(exec,
+					    gem_from_pvr_gem(job->ctx->fw_obj->gem));
+
+		if (err)
+			return err;
+
+		if (job->hwrt) {
+			err = prepare_fw_obj_resv(exec,
+						  job->hwrt->fw_obj);
+			if (err)
+				return err;
+		}
+	}
+
+	return 0;
+}
+
+static int
+prepare_job_resvs_for_each(struct drm_exec *exec, struct pvr_job_data *job_data,
+			   u32 job_count)
+{
+	drm_exec_until_all_locked(exec) {
+		int err = jobs_lock_all_objs(exec, job_data, job_count);
+
+		drm_exec_retry_on_contention(exec);
+		if (err)
+			return err;
+	}
+
+	return 0;
+}
+
+static void
+update_job_resvs(struct pvr_job *job)
+{
+	if (job->hwrt) {
+		enum dma_resv_usage usage = job->type == DRM_PVR_JOB_TYPE_GEOMETRY ?
+					    DMA_RESV_USAGE_WRITE : DMA_RESV_USAGE_READ;
+		struct drm_gem_object *obj = gem_from_pvr_gem(job->hwrt->fw_obj->gem);
+
+		dma_resv_add_fence(obj->resv, &job->base.s_fence->finished, usage);
+	}
+}
+
+static void
+update_job_resvs_for_each(struct pvr_job_data *job_data, u32 job_count)
+{
+	for (u32 i = 0; i < job_count; i++)
+		update_job_resvs(job_data[i].job);
+}
+
+static bool can_combine_jobs(struct pvr_job *a, struct pvr_job *b)
+{
+	struct pvr_job *geom_job = a, *frag_job = b;
+	struct dma_fence *fence;
+	unsigned long index;
+
+	/* Geometry and fragment jobs can be combined if they are queued to the
+	 * same context and target the same HWRT.
+	 */
+	if (a->type != DRM_PVR_JOB_TYPE_GEOMETRY ||
+	    b->type != DRM_PVR_JOB_TYPE_FRAGMENT ||
+	    a->ctx != b->ctx ||
+	    a->hwrt != b->hwrt)
+		return false;
+
+	xa_for_each(&frag_job->base.dependencies, index, fence) {
+		/* We combine when we see an explicit geom -> frag dep. */
+		if (&geom_job->base.s_fence->scheduled == fence)
+			return true;
+	}
+
+	return false;
+}
+
+static struct dma_fence *
+get_last_queued_job_scheduled_fence(struct pvr_queue *queue,
+				    struct pvr_job_data *job_data,
+				    u32 cur_job_pos)
+{
+	/* We iterate over the current job array in reverse order to grab the
+	 * last to-be-queued job targeting the same queue.
+	 */
+	for (u32 i = cur_job_pos; i > 0; i--) {
+		struct pvr_job *job = job_data[i - 1].job;
+
+		if (job->ctx == queue->ctx && job->type == queue->type)
+			return dma_fence_get(&job->base.s_fence->scheduled);
+	}
+
+	/* If we didn't find any, we just return the last queued job scheduled
+	 * fence attached to the queue.
+	 */
+	return dma_fence_get(queue->last_queued_job_scheduled_fence);
+}
+
+static int
+pvr_jobs_link_geom_frag(struct pvr_job_data *job_data, u32 *job_count)
+{
+	for (u32 i = 0; i < *job_count - 1; i++) {
+		struct pvr_job *geom_job = job_data[i].job;
+		struct pvr_job *frag_job = job_data[i + 1].job;
+		struct pvr_queue *frag_queue;
+		struct dma_fence *f;
+
+		if (!can_combine_jobs(job_data[i].job, job_data[i + 1].job))
+			continue;
+
+		/* The fragment job will be submitted by the geometry queue. We
+		 * need to make sure it comes after all the other fragment jobs
+		 * queued before it.
+		 */
+		frag_queue = pvr_context_get_queue_for_job(frag_job->ctx,
+							   frag_job->type);
+		f = get_last_queued_job_scheduled_fence(frag_queue, job_data,
+							i);
+		if (f) {
+			int err = drm_sched_job_add_dependency(&geom_job->base,
+							       f);
+			if (err) {
+				*job_count = i;
+				return err;
+			}
+		}
+
+		/* The KCCB slot will be reserved by the geometry job, so we can
+		 * drop the KCCB fence on the fragment job.
+		 */
+		pvr_kccb_fence_put(frag_job->kccb_fence);
+		frag_job->kccb_fence = NULL;
+
+		geom_job->paired_job = frag_job;
+		frag_job->paired_job = geom_job;
+
+		/* Skip the fragment job we just paired to the geometry job. */
+		i++;
+	}
+
+	return 0;
+}
+
+/**
+ * pvr_submit_jobs() - Submit jobs to the GPU
+ * @pvr_dev: Target PowerVR device.
+ * @pvr_file: Pointer to PowerVR file structure.
+ * @args: Ioctl args. On error, @args->jobs.count is updated with the index
+ * into the jobs array at which the error occurred.
+ *
+ * Submission is asynchronous: the ioctl returns once the jobs have been
+ * queued to their respective schedulers; completion is reported through the
+ * signal sync operations supplied with each job.
+ *
+ * Returns:
+ *  * 0 on success,
+ *  * -%EFAULT if arguments can not be copied from user space, or
+ *  * -%EINVAL on invalid arguments, or
+ *  * Any other error.
+ */
+int
+pvr_submit_jobs(struct pvr_device *pvr_dev, struct pvr_file *pvr_file,
+		struct drm_pvr_ioctl_submit_jobs_args *args)
+{
+	struct pvr_job_data *job_data = NULL;
+	struct drm_pvr_job *job_args;
+	struct xarray signal_array;
+	u32 jobs_alloced = 0;
+	struct drm_exec exec;
+	int err;
+
+	if (!args->jobs.count)
+		return -EINVAL;
+
+	err = PVR_UOBJ_GET_ARRAY(job_args, &args->jobs);
+	if (err)
+		return err;
+
+	job_data = kvmalloc_array(args->jobs.count, sizeof(*job_data),
+				  GFP_KERNEL | __GFP_ZERO);
+	if (!job_data) {
+		err = -ENOMEM;
+		goto out_free;
+	}
+
+	err = pvr_job_data_init(pvr_dev, pvr_file, job_args, &args->jobs.count,
+				job_data);
+	if (err)
+		goto out_free;
+
+	jobs_alloced = args->jobs.count;
+
+	drm_exec_init(&exec, DRM_EXEC_INTERRUPTIBLE_WAIT | DRM_EXEC_IGNORE_DUPLICATES);
+
+	xa_init_flags(&signal_array, XA_FLAGS_ALLOC);
+
+	err = prepare_job_syncs_for_each(pvr_file, job_data, &args->jobs.count,
+					 &signal_array);
+	if (err)
+		goto out_exec_fini;
+
+	err = prepare_job_resvs_for_each(&exec, job_data, args->jobs.count);
+	if (err)
+		goto out_exec_fini;
+
+	err = pvr_jobs_link_geom_frag(job_data, &args->jobs.count);
+	if (err)
+		goto out_exec_fini;
+
+	/* Anything after that point must succeed because we start exposing job
+	 * finished fences to the outside world.
+	 */
+	update_job_resvs_for_each(job_data, args->jobs.count);
+	push_jobs(job_data, args->jobs.count);
+	pvr_sync_signal_array_push_fences(&signal_array);
+	err = 0;
+
+out_exec_fini:
+	drm_exec_fini(&exec);
+	pvr_sync_signal_array_cleanup(&signal_array);
+	pvr_job_data_fini(job_data, jobs_alloced);
+
+out_free:
+	kvfree(job_data);
+	kvfree(job_args);
+
+	return err;
+}
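
To make the geometry/fragment pairing above concrete, an illustrative (not
normative) submission that pvr_jobs_link_geom_frag() will pair looks like
this: the two jobs are adjacent in the jobs array, use the same context and
HWRT data set, and the fragment job waits on a syncobj point the geometry job
signals, which is what produces the explicit geom -> frag dependency checked
by can_combine_jobs():

  /* Placeholder handles; tl_syncobj is a timeline syncobj created by userspace. */
  struct drm_pvr_sync_op geom_signal = {
          .handle = tl_syncobj,
          .flags = DRM_PVR_SYNC_OP_FLAG_SIGNAL |
                   DRM_PVR_SYNC_OP_FLAG_HANDLE_TYPE_TIMELINE_SYNCOBJ,
          .value = 1,
  };
  struct drm_pvr_sync_op frag_wait = {
          .handle = tl_syncobj,
          .flags = DRM_PVR_SYNC_OP_FLAG_HANDLE_TYPE_TIMELINE_SYNCOBJ, /* wait */
          .value = 1,
  };

  /* jobs[0]: DRM_PVR_JOB_TYPE_GEOMETRY, sync_ops = { geom_signal }.
   * jobs[1]: DRM_PVR_JOB_TYPE_FRAGMENT, sync_ops = { frag_wait }, with the
   *          same context_handle and hwrt.set_handle as jobs[0].
   * pvr_jobs_link_geom_frag() then pairs them so the FW receives the fragment
   * command together with the geometry command, keeping a fragment job
   * available for partial renders.
   */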
diff --git a/drivers/gpu/drm/imagination/pvr_job.h b/drivers/gpu/drm/imagination/pvr_job.h
new file mode 100644
index 000000000000..dd5f9e9677f9
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_job.h
@@ -0,0 +1,161 @@
+/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#ifndef PVR_JOB_H
+#define PVR_JOB_H
+
+#include <uapi/drm/pvr_drm.h>
+
+#include <linux/kref.h>
+#include <linux/types.h>
+
+#include <drm/drm_gem.h>
+#include <drm/gpu_scheduler.h>
+
+#include "pvr_power.h"
+
+/* Forward declaration from "pvr_context.h". */
+struct pvr_context;
+
+/* Forward declarations from "pvr_device.h". */
+struct pvr_device;
+struct pvr_file;
+
+/* Forward declarations from "pvr_hwrt.h". */
+struct pvr_hwrt_data;
+
+/* Forward declaration from "pvr_queue.h". */
+struct pvr_queue;
+
+struct pvr_job {
+	/** @base: drm_sched_job object. */
+	struct drm_sched_job base;
+
+	/** @ref_count: Refcount for job. */
+	struct kref ref_count;
+
+	/** @type: Type of job. */
+	enum drm_pvr_job_type type;
+
+	/** @id: Job ID number. */
+	u32 id;
+
+	/**
+	 * @paired_job: Job paired to this job.
+	 *
+	 * This field is only meaningful for geometry and fragment jobs.
+	 *
+	 * Paired jobs are executed on the same context, and need to be submitted
+	 * atomically to the FW, to make sure the partial render logic has a
+	 * fragment job to execute when the Parameter Manager runs out of memory.
+	 *
+	 * The geometry job should point to the fragment job it's paired with,
+	 * and the fragment job should point to the geometry job it's paired with.
+	 */
+	struct pvr_job *paired_job;
+
+	/** @cccb_fence: Fence used to wait for CCCB space. */
+	struct dma_fence *cccb_fence;
+
+	/** @kccb_fence: Fence used to wait for KCCB space. */
+	struct dma_fence *kccb_fence;
+
+	/** @done_fence: Fence to signal when the job is done. */
+	struct dma_fence *done_fence;
+
+	/** @pvr_dev: Device pointer. */
+	struct pvr_device *pvr_dev;
+
+	/** @ctx: Pointer to owning context. */
+	struct pvr_context *ctx;
+
+	/** @cmd: Command data. Format depends on @type. */
+	void *cmd;
+
+	/** @cmd_len: Length of command data, in bytes. */
+	u32 cmd_len;
+
+	/**
+	 * @fw_ccb_cmd_type: Firmware CCB command type. Must be one of %ROGUE_FWIF_CCB_CMD_TYPE_*.
+	 */
+	u32 fw_ccb_cmd_type;
+
+	/** @hwrt: HWRT object. Will be NULL for compute and transfer jobs. */
+	struct pvr_hwrt_data *hwrt;
+
+	/**
+	 * @has_pm_ref: True if the job has a power ref, thus forcing the GPU to stay on until
+	 * the job is done.
+	 */
+	bool has_pm_ref;
+};
+
+/**
+ * pvr_job_get() - Take additional reference on job.
+ * @job: Job pointer.
+ *
+ * Call pvr_job_put() to release.
+ *
+ * Returns:
+ *  * The requested job on success, or
+ *  * %NULL if no job pointer passed.
+ */
+static __always_inline struct pvr_job *
+pvr_job_get(struct pvr_job *job)
+{
+	if (job)
+		kref_get(&job->ref_count);
+
+	return job;
+}
+
+void pvr_job_put(struct pvr_job *job);
+
+/**
+ * pvr_job_release_pm_ref() - Release the PM ref if the job acquired it.
+ * @job: The job to release the PM ref on.
+ */
+static __always_inline void
+pvr_job_release_pm_ref(struct pvr_job *job)
+{
+	if (job->has_pm_ref) {
+		pvr_power_put(job->pvr_dev);
+		job->has_pm_ref = false;
+	}
+}
+
+/**
+ * pvr_job_get_pm_ref() - Get a PM ref and attach it to the job.
+ * @job: The job to attach the PM ref to.
+ *
+ * Return:
+ *  * 0 on success, or
+ *  * Any error returned by pvr_power_get() otherwise.
+ */
+static __always_inline int
+pvr_job_get_pm_ref(struct pvr_job *job)
+{
+	int err;
+
+	if (job->has_pm_ref)
+		return 0;
+
+	err = pvr_power_get(job->pvr_dev);
+	if (!err)
+		job->has_pm_ref = true;
+
+	return err;
+}
+
+int pvr_job_wait_first_non_signaled_native_dep(struct pvr_job *job);
+
+bool pvr_job_non_native_deps_done(struct pvr_job *job);
+
+int pvr_job_fits_in_cccb(struct pvr_job *job, unsigned long native_dep_count);
+
+void pvr_job_submit(struct pvr_job *job);
+
+int pvr_submit_jobs(struct pvr_device *pvr_dev, struct pvr_file *pvr_file,
+		    struct drm_pvr_ioctl_submit_jobs_args *args);
+
+#endif /* PVR_JOB_H */
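
A minimal sketch (hypothetical, not part of the patch) of how the PM ref
helpers above are meant to be used by a run-job style path, so the GPU stays
powered while a job is in flight:

  /* Hypothetical example; the real user lives in the queue code. */
  static int example_submit(struct pvr_job *job)
  {
          int err;

          /* Take a power reference before touching the FW CCBs. */
          err = pvr_job_get_pm_ref(job);
          if (err)
                  return err;

          /* ... write the FW command and kick the firmware here ... */

          return 0;
  }

  /* The matching pvr_job_release_pm_ref() happens once the last reference on
   * the job is dropped (see pvr_job_release()).
   */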
diff --git a/drivers/gpu/drm/imagination/pvr_power.c b/drivers/gpu/drm/imagination/pvr_power.c
index 90dfdf22044c..72f81fbc7e6b 100644
--- a/drivers/gpu/drm/imagination/pvr_power.c
+++ b/drivers/gpu/drm/imagination/pvr_power.c
@@ -5,6 +5,7 @@
 #include "pvr_fw.h"
 #include "pvr_fw_startstop.h"
 #include "pvr_power.h"
+#include "pvr_queue.h"
 #include "pvr_rogue_fwif.h"
 
 #include <drm/drm_drv.h>
@@ -149,6 +150,21 @@ pvr_watchdog_kccb_stalled(struct pvr_device *pvr_dev)
 			pvr_dev->watchdog.kccb_stall_count = 0;
 			return true;
 		}
+	} else if (pvr_dev->watchdog.old_kccb_cmds_executed == kccb_cmds_executed) {
+		bool has_active_contexts;
+
+		mutex_lock(&pvr_dev->queues.lock);
+		has_active_contexts = !list_empty(&pvr_dev->queues.active);
+		mutex_unlock(&pvr_dev->queues.lock);
+
+		if (has_active_contexts) {
+			/* Send a HEALTH_CHECK command so we can verify FW is still alive. */
+			struct rogue_fwif_kccb_cmd health_check_cmd;
+
+			health_check_cmd.cmd_type = ROGUE_FWIF_KCCB_CMD_HEALTH_CHECK;
+
+			pvr_kccb_send_cmd_powered(pvr_dev, &health_check_cmd, NULL);
+		}
 	} else {
 		pvr_dev->watchdog.old_kccb_cmds_executed = kccb_cmds_executed;
 		pvr_dev->watchdog.kccb_stall_count = 0;
@@ -312,6 +328,7 @@ pvr_power_device_idle(struct device *dev)
 int
 pvr_power_reset(struct pvr_device *pvr_dev, bool hard_reset)
 {
+	bool queues_disabled = false;
 	int err;
 
 	/*
@@ -326,6 +343,11 @@ pvr_power_reset(struct pvr_device *pvr_dev, bool hard_reset)
 	disable_irq(pvr_dev->irq);
 
 	do {
+		if (hard_reset) {
+			pvr_queue_device_pre_reset(pvr_dev);
+			queues_disabled = true;
+		}
+
 		err = pvr_power_fw_disable(pvr_dev, hard_reset);
 		if (!err) {
 			if (hard_reset) {
@@ -361,6 +383,9 @@ pvr_power_reset(struct pvr_device *pvr_dev, bool hard_reset)
 		}
 	} while (err);
 
+	if (queues_disabled)
+		pvr_queue_device_post_reset(pvr_dev);
+
 	enable_irq(pvr_dev->irq);
 
 	up_write(&pvr_dev->reset_sem);
@@ -375,6 +400,9 @@ pvr_power_reset(struct pvr_device *pvr_dev, bool hard_reset)
 
 	/* Leave IRQs disabled if the device is lost. */
 
+	if (queues_disabled)
+		pvr_queue_device_post_reset(pvr_dev);
+
 	up_write(&pvr_dev->reset_sem);
 
 	pvr_power_put(pvr_dev);
diff --git a/drivers/gpu/drm/imagination/pvr_queue.c b/drivers/gpu/drm/imagination/pvr_queue.c
new file mode 100644
index 000000000000..3a5a73fb7f0a
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_queue.c
@@ -0,0 +1,1455 @@
+// SPDX-License-Identifier: GPL-2.0 OR MIT
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#include <drm/drm_managed.h>
+#include <drm/gpu_scheduler.h>
+
+#include "pvr_cccb.h"
+#include "pvr_context.h"
+#include "pvr_device.h"
+#include "pvr_drv.h"
+#include "pvr_job.h"
+#include "pvr_queue.h"
+#include "pvr_vm.h"
+
+#include "pvr_rogue_fwif_client.h"
+
+#define MAX_DEADLINE_MS 30000
+
+#define CTX_COMPUTE_CCCB_SIZE_LOG2 15
+#define CTX_FRAG_CCCB_SIZE_LOG2 15
+#define CTX_GEOM_CCCB_SIZE_LOG2 15
+#define CTX_TRANSFER_CCCB_SIZE_LOG2 15
+
+static int get_xfer_ctx_state_size(struct pvr_device *pvr_dev)
+{
+	u32 num_isp_store_registers;
+
+	if (PVR_HAS_FEATURE(pvr_dev, xe_memory_hierarchy)) {
+		num_isp_store_registers = 1;
+	} else {
+		int err;
+
+		err = PVR_FEATURE_VALUE(pvr_dev, num_isp_ipp_pipes, &num_isp_store_registers);
+		if (WARN_ON(err))
+			return err;
+	}
+
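+	/* Fixed-size frag context state struct plus one entry of the trailing
+	 * ISP store register array per register counted above.
+	 */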
+	return sizeof(struct rogue_fwif_frag_ctx_state) +
+	       (num_isp_store_registers *
+		sizeof(((struct rogue_fwif_frag_ctx_state *)0)->frag_reg_isp_store[0]));
+}
+
+static int get_frag_ctx_state_size(struct pvr_device *pvr_dev)
+{
+	u32 num_isp_store_registers;
+	int err;
+
+	if (PVR_HAS_FEATURE(pvr_dev, xe_memory_hierarchy)) {
+		err = PVR_FEATURE_VALUE(pvr_dev, num_raster_pipes, &num_isp_store_registers);
+		if (WARN_ON(err))
+			return err;
+
+		if (PVR_HAS_FEATURE(pvr_dev, gpu_multicore_support)) {
+			u32 xpu_max_slaves;
+
+			err = PVR_FEATURE_VALUE(pvr_dev, xpu_max_slaves, &xpu_max_slaves);
+			if (WARN_ON(err))
+				return err;
+
+			num_isp_store_registers *= (1 + xpu_max_slaves);
+		}
+	} else {
+		err = PVR_FEATURE_VALUE(pvr_dev, num_isp_ipp_pipes, &num_isp_store_registers);
+		if (WARN_ON(err))
+			return err;
+	}
+
+	return sizeof(struct rogue_fwif_frag_ctx_state) +
+	       (num_isp_store_registers *
+		sizeof(((struct rogue_fwif_frag_ctx_state *)0)->frag_reg_isp_store[0]));
+}
+
+static int get_ctx_state_size(struct pvr_device *pvr_dev, enum drm_pvr_job_type type)
+{
+	switch (type) {
+	case DRM_PVR_JOB_TYPE_GEOMETRY:
+		return sizeof(struct rogue_fwif_geom_ctx_state);
+	case DRM_PVR_JOB_TYPE_FRAGMENT:
+		return get_frag_ctx_state_size(pvr_dev);
+	case DRM_PVR_JOB_TYPE_COMPUTE:
+		return sizeof(struct rogue_fwif_compute_ctx_state);
+	case DRM_PVR_JOB_TYPE_TRANSFER_FRAG:
+		return get_xfer_ctx_state_size(pvr_dev);
+	}
+
+	WARN(1, "Invalid queue type");
+	return -EINVAL;
+}
+
+static u32 get_ctx_offset(enum drm_pvr_job_type type)
+{
+	switch (type) {
+	case DRM_PVR_JOB_TYPE_GEOMETRY:
+		return offsetof(struct rogue_fwif_fwrendercontext, geom_context);
+	case DRM_PVR_JOB_TYPE_FRAGMENT:
+		return offsetof(struct rogue_fwif_fwrendercontext, frag_context);
+	case DRM_PVR_JOB_TYPE_COMPUTE:
+		return offsetof(struct rogue_fwif_fwcomputecontext, cdm_context);
+	case DRM_PVR_JOB_TYPE_TRANSFER_FRAG:
+		return offsetof(struct rogue_fwif_fwtransfercontext, tq_context);
+	}
+
+	return 0;
+}
+
+static const char *
+pvr_queue_fence_get_driver_name(struct dma_fence *f)
+{
+	return PVR_DRIVER_NAME;
+}
+
+static void pvr_queue_fence_release(struct dma_fence *f)
+{
+	struct pvr_queue_fence *fence = container_of(f, struct pvr_queue_fence, base);
+
+	pvr_context_put(fence->queue->ctx);
+	dma_fence_free(f);
+}
+
+static const char *
+pvr_queue_job_fence_get_timeline_name(struct dma_fence *f)
+{
+	struct pvr_queue_fence *fence = container_of(f, struct pvr_queue_fence, base);
+
+	switch (fence->queue->type) {
+	case DRM_PVR_JOB_TYPE_GEOMETRY:
+		return "geometry";
+
+	case DRM_PVR_JOB_TYPE_FRAGMENT:
+		return "fragment";
+
+	case DRM_PVR_JOB_TYPE_COMPUTE:
+		return "compute";
+
+	case DRM_PVR_JOB_TYPE_TRANSFER_FRAG:
+		return "transfer";
+	}
+
+	WARN(1, "Invalid queue type");
+	return "invalid";
+}
+
+static const char *
+pvr_queue_cccb_fence_get_timeline_name(struct dma_fence *f)
+{
+	struct pvr_queue_fence *fence = container_of(f, struct pvr_queue_fence, base);
+
+	switch (fence->queue->type) {
+	case DRM_PVR_JOB_TYPE_GEOMETRY:
+		return "geometry-cccb";
+
+	case DRM_PVR_JOB_TYPE_FRAGMENT:
+		return "fragment-cccb";
+
+	case DRM_PVR_JOB_TYPE_COMPUTE:
+		return "compute-cccb";
+
+	case DRM_PVR_JOB_TYPE_TRANSFER_FRAG:
+		return "transfer-cccb";
+	}
+
+	WARN(1, "Invalid queue type");
+	return "invalid";
+}
+
+static const struct dma_fence_ops pvr_queue_job_fence_ops = {
+	.get_driver_name = pvr_queue_fence_get_driver_name,
+	.get_timeline_name = pvr_queue_job_fence_get_timeline_name,
+	.release = pvr_queue_fence_release,
+};
+
+/**
+ * to_pvr_queue_job_fence() - Return a pvr_queue_fence object if the fence is
+ * backed by a UFO.
+ * @f: The dma_fence to turn into a pvr_queue_fence.
+ *
+ * Return:
+ *  * A non-NULL pvr_queue_fence object if the dma_fence is backed by a UFO, or
+ *  * NULL otherwise.
+ */
+static struct pvr_queue_fence *
+to_pvr_queue_job_fence(struct dma_fence *f)
+{
+	struct drm_sched_fence *sched_fence = to_drm_sched_fence(f);
+
+	if (sched_fence)
+		f = sched_fence->parent;
+
+	if (f && f->ops == &pvr_queue_job_fence_ops)
+		return container_of(f, struct pvr_queue_fence, base);
+
+	return NULL;
+}
+
+static const struct dma_fence_ops pvr_queue_cccb_fence_ops = {
+	.get_driver_name = pvr_queue_fence_get_driver_name,
+	.get_timeline_name = pvr_queue_cccb_fence_get_timeline_name,
+	.release = pvr_queue_fence_release,
+};
+
+/**
+ * pvr_queue_fence_put() - Put wrapper for pvr_queue_fence objects.
+ * @f: The dma_fence object to put.
+ *
+ * If the pvr_queue_fence has been initialized, we call dma_fence_put(),
+ * otherwise we free the object with dma_fence_free(). This allows us
+ * to do the right thing before and after pvr_queue_fence_init() has been
+ * called.
+ */
+static void pvr_queue_fence_put(struct dma_fence *f)
+{
+	if (!f)
+		return;
+
+	if (WARN_ON(f->ops &&
+		    f->ops != &pvr_queue_cccb_fence_ops &&
+		    f->ops != &pvr_queue_job_fence_ops))
+		return;
+
+	/* If the fence hasn't been initialized yet, free the object directly. */
+	if (f->ops)
+		dma_fence_put(f);
+	else
+		dma_fence_free(f);
+}
+
+/**
+ * pvr_queue_fence_alloc() - Allocate a pvr_queue_fence fence object
+ *
+ * Call this function to allocate job CCCB and done fences. This only
+ * allocates the objects. Initialization happens when the underlying
+ * dma_fence object is to be returned to drm_sched (in prepare_job() or
+ * run_job()).
+ *
+ * Return:
+ *  * A valid pointer if the allocation succeeds, or
+ *  * NULL if the allocation fails.
+ */
+static struct dma_fence *
+pvr_queue_fence_alloc(void)
+{
+	struct pvr_queue_fence *fence;
+
+	fence = kzalloc(sizeof(*fence), GFP_KERNEL);
+	if (!fence)
+		return NULL;
+
+	return &fence->base;
+}
+
+/**
+ * pvr_queue_fence_init() - Initializes a pvr_queue_fence object.
+ * @f: The fence to initialize
+ * @queue: The queue this fence belongs to.
+ * @fence_ops: The fence operations.
+ * @fence_ctx: The fence context.
+ *
+ * Wrapper around dma_fence_init() that takes care of initializing the
+ * pvr_queue_fence::queue field too.
+ */
+static void
+pvr_queue_fence_init(struct dma_fence *f,
+		     struct pvr_queue *queue,
+		     const struct dma_fence_ops *fence_ops,
+		     struct pvr_queue_fence_ctx *fence_ctx)
+{
+	struct pvr_queue_fence *fence = container_of(f, struct pvr_queue_fence, base);
+
+	pvr_context_get(queue->ctx);
+	fence->queue = queue;
+	dma_fence_init(&fence->base, fence_ops,
+		       &fence_ctx->lock, fence_ctx->id,
+		       atomic_inc_return(&fence_ctx->seqno));
+}
+
+/**
+ * pvr_queue_cccb_fence_init() - Initializes a CCCB fence object.
+ * @fence: The fence to initialize.
+ * @queue: The queue this fence belongs to.
+ *
+ * Initializes a fence that can be used to wait for CCCB space.
+ *
+ * Should be called in the ::prepare_job() path, so the fence returned to
+ * drm_sched is valid.
+ */
+static void
+pvr_queue_cccb_fence_init(struct dma_fence *fence, struct pvr_queue *queue)
+{
+	pvr_queue_fence_init(fence, queue, &pvr_queue_cccb_fence_ops,
+			     &queue->cccb_fence_ctx.base);
+}
+
+/**
+ * pvr_queue_job_fence_init() - Initializes a job done fence object.
+ * @fence: The fence to initialize.
+ * @queue: The queue this fence belongs to.
+ *
+ * Initializes a fence that will be signaled when the GPU is done executing
+ * a job.
+ *
+ * Should be called in the ::run_job() path, so the fence returned to drm_sched
+ * is valid.
+ */
+static void
+pvr_queue_job_fence_init(struct dma_fence *fence, struct pvr_queue *queue)
+{
+	pvr_queue_fence_init(fence, queue, &pvr_queue_job_fence_ops,
+			     &queue->job_fence_ctx);
+}
+
+/**
+ * pvr_queue_fence_ctx_init() - Queue fence context initialization.
+ * @fence_ctx: The context to initialize
+ */
+static void
+pvr_queue_fence_ctx_init(struct pvr_queue_fence_ctx *fence_ctx)
+{
+	spin_lock_init(&fence_ctx->lock);
+	fence_ctx->id = dma_fence_context_alloc(1);
+	atomic_set(&fence_ctx->seqno, 0);
+}
+
+static u32 ufo_cmds_size(u32 elem_count)
+{
+	/* We can pass at most ROGUE_FWIF_CCB_CMD_MAX_UFOS UFOs per UFO-related command. */
+	u32 full_cmd_count = elem_count / ROGUE_FWIF_CCB_CMD_MAX_UFOS;
+	u32 remaining_elems = elem_count % ROGUE_FWIF_CCB_CMD_MAX_UFOS;
+	u32 size = full_cmd_count *
+		   pvr_cccb_get_size_of_cmd_with_hdr(ROGUE_FWIF_CCB_CMD_MAX_UFOS *
+						     sizeof(struct rogue_fwif_ufo));
+
+	if (remaining_elems) {
+		size += pvr_cccb_get_size_of_cmd_with_hdr(remaining_elems *
+							  sizeof(struct rogue_fwif_ufo));
+	}
+
+	return size;
+}
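+
+/*
+ * Illustrative sizing only: with a cap of ROGUE_FWIF_CCB_CMD_MAX_UFOS UFOs
+ * per command, a wait on (2 * ROGUE_FWIF_CCB_CMD_MAX_UFOS + 1) fences is
+ * encoded as two full UFO commands plus a third command carrying the single
+ * remaining UFO, each sized with its command header included.
+ */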
+
+static u32 job_cmds_size(struct pvr_job *job, u32 ufo_wait_count)
+{
+	/* One UFO cmd for the fence signaling, one UFO cmd per native fence wait,
+	 * and a command for the job itself.
+	 */
+	return ufo_cmds_size(1) + ufo_cmds_size(ufo_wait_count) +
+	       pvr_cccb_get_size_of_cmd_with_hdr(job->cmd_len);
+}
+
+/**
+ * job_count_remaining_native_deps() - Count the number of non-signaled native dependencies.
+ * @job: Job to operate on.
+ *
+ * Returns: Number of non-signaled native deps remaining.
+ */
+static unsigned long job_count_remaining_native_deps(struct pvr_job *job)
+{
+	unsigned long remaining_count = 0;
+	struct dma_fence *fence = NULL;
+	unsigned long index;
+
+	xa_for_each(&job->base.dependencies, index, fence) {
+		struct pvr_queue_fence *jfence;
+
+		jfence = to_pvr_queue_job_fence(fence);
+		if (!jfence)
+			continue;
+
+		if (!dma_fence_is_signaled(&jfence->base))
+			remaining_count++;
+	}
+
+	return remaining_count;
+}
+
+/**
+ * pvr_queue_get_job_cccb_fence() - Get the CCCB fence attached to a job.
+ * @queue: The queue this job will be submitted to.
+ * @job: The job to get the CCCB fence on.
+ *
+ * The CCCB fence is a synchronization primitive allowing us to delay job
+ * submission until there's enough space in the CCCB to submit the job.
+ *
+ * Return:
+ *  * NULL if there's enough space in the CCCB to submit this job, or
+ *  * A valid dma_fence object otherwise.
+ */
+static struct dma_fence *
+pvr_queue_get_job_cccb_fence(struct pvr_queue *queue, struct pvr_job *job)
+{
+	struct pvr_queue_fence *cccb_fence;
+	unsigned int native_deps_remaining;
+
+	/* If the fence is NULL, that means we already checked that we had
+	 * enough space in the CCCB for our job.
+	 */
+	if (!job->cccb_fence)
+		return NULL;
+
+	mutex_lock(&queue->cccb_fence_ctx.job_lock);
+
+	/* Count remaining native dependencies and check if the job fits in the CCCB. */
+	native_deps_remaining = job_count_remaining_native_deps(job);
+	if (pvr_cccb_cmdseq_fits(&queue->cccb, job_cmds_size(job, native_deps_remaining))) {
+		pvr_queue_fence_put(job->cccb_fence);
+		job->cccb_fence = NULL;
+		goto out_unlock;
+	}
+
+	/* There should be no job attached to the CCCB fence context:
+	 * drm_sched_entity guarantees that jobs are submitted one at a time.
+	 */
+	if (WARN_ON(queue->cccb_fence_ctx.job))
+		pvr_job_put(queue->cccb_fence_ctx.job);
+
+	queue->cccb_fence_ctx.job = pvr_job_get(job);
+
+	/* Initialize the fence before returning it. */
+	cccb_fence = container_of(job->cccb_fence, struct pvr_queue_fence, base);
+	if (!WARN_ON(cccb_fence->queue))
+		pvr_queue_cccb_fence_init(job->cccb_fence, queue);
+
+out_unlock:
+	mutex_unlock(&queue->cccb_fence_ctx.job_lock);
+
+	return dma_fence_get(job->cccb_fence);
+}
+
+/**
+ * pvr_queue_get_job_kccb_fence() - Get the KCCB fence attached to a job.
+ * @queue: The queue this job will be submitted to.
+ * @job: The job to get the KCCB fence on.
+ *
+ * The KCCB fence is a synchronization primitive allowing us to delay job
+ * submission until there's enough space in the KCCB to submit the job.
+ *
+ * Return:
+ *  * NULL if there's enough space in the KCCB to submit this job, or
+ *  * A valid dma_fence object otherwise.
+ */
+static struct dma_fence *
+pvr_queue_get_job_kccb_fence(struct pvr_queue *queue, struct pvr_job *job)
+{
+	struct pvr_device *pvr_dev = queue->ctx->pvr_dev;
+	struct dma_fence *kccb_fence = NULL;
+
+	/* If the fence is NULL, that means we already checked that we had
+	 * enough space in the KCCB for our job.
+	 */
+	if (!job->kccb_fence)
+		return NULL;
+
+	if (!WARN_ON(job->kccb_fence->ops)) {
+		kccb_fence = pvr_kccb_reserve_slot(pvr_dev, job->kccb_fence);
+		job->kccb_fence = NULL;
+	}
+
+	return kccb_fence;
+}
+
+static struct dma_fence *
+pvr_queue_get_paired_frag_job_dep(struct pvr_queue *queue, struct pvr_job *job)
+{
+	struct pvr_job *frag_job = job->type == DRM_PVR_JOB_TYPE_GEOMETRY ?
+				   job->paired_job : NULL;
+	struct dma_fence *f;
+	unsigned long index;
+
+	if (!frag_job)
+		return NULL;
+
+	xa_for_each(&frag_job->base.dependencies, index, f) {
+		/* Skip already signaled fences. */
+		if (dma_fence_is_signaled(f))
+			continue;
+
+		/* Skip our own fence. */
+		if (f == &job->base.s_fence->scheduled)
+			continue;
+
+		return dma_fence_get(f);
+	}
+
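+	/* All external fragment job dependencies are met: query the fragment
+	 * job's own internal dependencies (e.g. CCCB/KCCB space) so the
+	 * combined geom+frag kick is only issued once both jobs are ready.
+	 */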
+	return frag_job->base.sched->ops->prepare_job(&frag_job->base, &queue->entity);
+}
+
+/**
+ * pvr_queue_prepare_job() - Return the next internal dependencies expressed as a dma_fence.
+ * @sched_job: The job to query the next internal dependency on
+ * @s_entity: The entity this job is queued on.
+ *
+ * After iterating over drm_sched_job::dependencies, drm_sched lets the driver return
+ * its own internal dependencies. We use this function to return ours.
+ */
+static struct dma_fence *
+pvr_queue_prepare_job(struct drm_sched_job *sched_job,
+		      struct drm_sched_entity *s_entity)
+{
+	struct pvr_job *job = container_of(sched_job, struct pvr_job, base);
+	struct pvr_queue *queue = container_of(s_entity, struct pvr_queue, entity);
+	struct dma_fence *internal_dep = NULL;
+
+	/* CCCB fence is used to make sure we have enough space in the CCCB to
+	 * submit our commands.
+	 */
+	internal_dep = pvr_queue_get_job_cccb_fence(queue, job);
+
+	/* KCCB fence is used to make sure we have a KCCB slot to queue our
+	 * CMD_KICK.
+	 */
+	if (!internal_dep)
+		internal_dep = pvr_queue_get_job_kccb_fence(queue, job);
+
+	/* Any extra internal dependency should be added here, using the
+	 * following pattern:
+	 *
+	 *	if (!internal_dep)
+	 *		internal_dep = pvr_queue_get_job_xxxx_fence(queue, job);
+	 */
+
+	/* The paired job fence should come last, when everything else is ready. */
+	if (!internal_dep)
+		internal_dep = pvr_queue_get_paired_frag_job_dep(queue, job);
+
+	return internal_dep;
+}
+
+/**
+ * pvr_queue_update_active_state_locked() - Update the queue active state.
+ * @queue: Queue to update the state on.
+ *
+ * Locked version of pvr_queue_update_active_state(). Must be called with
+ * pvr_device::queue::lock held.
+ */
+static void pvr_queue_update_active_state_locked(struct pvr_queue *queue)
+{
+	struct pvr_device *pvr_dev = queue->ctx->pvr_dev;
+
+	lockdep_assert_held(&pvr_dev->queues.lock);
+
+	/* The queue is temporarily out of any list when it's being reset,
+	 * we don't want a call to pvr_queue_update_active_state_locked()
+	 * to re-insert it behind our back.
+	 */
+	if (list_empty(&queue->node))
+		return;
+
+	if (!atomic_read(&queue->in_flight_job_count))
+		list_move_tail(&queue->node, &pvr_dev->queues.idle);
+	else
+		list_move_tail(&queue->node, &pvr_dev->queues.active);
+}
+
+/**
+ * pvr_queue_update_active_state() - Update the queue active state.
+ * @queue: Queue to update the state on.
+ *
+ * Active state is based on the in_flight_job_count value.
+ *
+ * Updating the active state implies moving the queue in or out of the
+ * active queue list, which also defines whether the queue is checked
+ * or not when a FW event is received.
+ *
+ * This function should be called any time a job is submitted or its done
+ * fence is signaled.
+ */
+static void pvr_queue_update_active_state(struct pvr_queue *queue)
+{
+	struct pvr_device *pvr_dev = queue->ctx->pvr_dev;
+
+	mutex_lock(&pvr_dev->queues.lock);
+	pvr_queue_update_active_state_locked(queue);
+	mutex_unlock(&pvr_dev->queues.lock);
+}
+
+static void pvr_queue_submit_job_to_cccb(struct pvr_job *job)
+{
+	struct pvr_queue *queue = container_of(job->base.sched, struct pvr_queue, scheduler);
+	struct rogue_fwif_ufo ufos[ROGUE_FWIF_CCB_CMD_MAX_UFOS];
+	struct pvr_cccb *cccb = &queue->cccb;
+	struct pvr_queue_fence *jfence;
+	struct dma_fence *fence;
+	unsigned long index;
+	u32 ufo_count = 0;
+
+	/* Initialize the done_fence, so we can signal it. */
+	pvr_queue_job_fence_init(job->done_fence, queue);
+
+	/* We need to add the queue to the active list before updating the CCCB,
+	 * otherwise we might miss the FW event informing us that something
+	 * happened on this queue.
+	 */
+	atomic_inc(&queue->in_flight_job_count);
+	pvr_queue_update_active_state(queue);
+
+	xa_for_each(&job->base.dependencies, index, fence) {
+		jfence = to_pvr_queue_job_fence(fence);
+		if (!jfence)
+			continue;
+
+		/* Skip the partial render fence, we will place it at the end. */
+		if (job->type == DRM_PVR_JOB_TYPE_FRAGMENT && job->paired_job &&
+		    &job->paired_job->base.s_fence->scheduled == fence)
+			continue;
+
+		if (dma_fence_is_signaled(&jfence->base))
+			continue;
+
+		pvr_fw_object_get_fw_addr(jfence->queue->timeline_ufo.fw_obj,
+					  &ufos[ufo_count].addr);
+		ufos[ufo_count++].value = jfence->base.seqno;
+
+		if (ufo_count == ARRAY_SIZE(ufos)) {
+			pvr_cccb_write_command_with_header(cccb, ROGUE_FWIF_CCB_CMD_TYPE_FENCE_PR,
+							   sizeof(ufos), ufos, 0, 0);
+			ufo_count = 0;
+		}
+	}
+
+	/* Partial render fence goes last. */
+	if (job->type == DRM_PVR_JOB_TYPE_FRAGMENT && job->paired_job) {
+		jfence = to_pvr_queue_job_fence(job->paired_job->done_fence);
+		if (!WARN_ON(!jfence)) {
+			pvr_fw_object_get_fw_addr(jfence->queue->timeline_ufo.fw_obj,
+						  &ufos[ufo_count].addr);
+			ufos[ufo_count++].value = job->paired_job->done_fence->seqno;
+		}
+	}
+
+	if (ufo_count) {
+		pvr_cccb_write_command_with_header(cccb, ROGUE_FWIF_CCB_CMD_TYPE_FENCE_PR,
+						   sizeof(ufos[0]) * ufo_count, ufos, 0, 0);
+	}
+
+	if (job->type == DRM_PVR_JOB_TYPE_GEOMETRY && job->paired_job) {
+		struct rogue_fwif_cmd_geom *cmd = job->cmd;
+
+		/* Reference value for the partial render test is the current queue fence
+		 * seqno minus one.
+		 */
+		pvr_fw_object_get_fw_addr(queue->timeline_ufo.fw_obj,
+					  &cmd->partial_render_geom_frag_fence.addr);
+		cmd->partial_render_geom_frag_fence.value = job->done_fence->seqno - 1;
+	}
+
+	/* Submit job to FW */
+	pvr_cccb_write_command_with_header(cccb, job->fw_ccb_cmd_type, job->cmd_len, job->cmd,
+					   job->id, job->id);
+
+	/* Signal the job fence. */
+	pvr_fw_object_get_fw_addr(queue->timeline_ufo.fw_obj, &ufos[0].addr);
+	ufos[0].value = job->done_fence->seqno;
+	pvr_cccb_write_command_with_header(cccb, ROGUE_FWIF_CCB_CMD_TYPE_UPDATE,
+					   sizeof(ufos[0]), ufos, 0, 0);
+
+	/* The job we submit is added to the drm_gpu_scheduler::pending_list in
+	 * drm_sched_job_begin() which is called after ::run_job() returns. But the
+	 * GPU might be done executing the job before drm_sched_main() had a chance
+	 * to queue it to the pending_list, resulting in a lost event. Work around
+	 * this race by keeping track of the last submitted job so we can check it in
+	 * pvr_queue_signal_done_fences() if this job is not part of the pending_list
+	 * already.
+	 * No need to keep a reference to the job object here, because we only
+	 * check the pointer value.
+	 */
+	spin_lock(&queue->scheduler.job_list_lock);
+	queue->last_submitted_job = job;
+	spin_unlock(&queue->scheduler.job_list_lock);
+}
+
+/**
+ * pvr_queue_run_job() - Submit a job to the FW.
+ * @sched_job: The job to submit.
+ *
+ * This function is called when all non-native dependencies have been met and
+ * when the commands resulting from this job are guaranteed to fit in the CCCB.
+ */
+static struct dma_fence *pvr_queue_run_job(struct drm_sched_job *sched_job)
+{
+	struct pvr_job *job = container_of(sched_job, struct pvr_job, base);
+	struct pvr_device *pvr_dev = job->pvr_dev;
+	int err;
+
+	/* The fragment job is issued along with the geometry job when we use combined
+	 * geom+frag kicks. When we get there, we should simply return the
+	 * done_fence that's been initialized earlier.
+	 */
+	if (job->type == DRM_PVR_JOB_TYPE_FRAGMENT && job->done_fence->ops)
+		return dma_fence_get(job->done_fence);
+
+	/* The only kind of jobs that can be paired are geometry and fragment, and
+	 * we bail out early if we see a fragment job that's paired with a geometry
+	 * job.
+	 * Paired jobs must also target the same context and point to the same
+	 * HWRT.
+	 */
+	if (WARN_ON(job->paired_job &&
+		    (job->type != DRM_PVR_JOB_TYPE_GEOMETRY ||
+		     job->paired_job->type != DRM_PVR_JOB_TYPE_FRAGMENT ||
+		     job->hwrt != job->paired_job->hwrt ||
+		     job->ctx != job->paired_job->ctx)))
+		return ERR_PTR(-EINVAL);
+
+	err = pvr_job_get_pm_ref(job);
+	if (WARN_ON(err))
+		return ERR_PTR(err);
+
+	if (job->paired_job) {
+		err = pvr_job_get_pm_ref(job->paired_job);
+		if (WARN_ON(err))
+			return ERR_PTR(err);
+	}
+
+	/* Submit our job to the CCCB */
+	pvr_queue_submit_job_to_cccb(job);
+
+	if (job->paired_job) {
+		struct pvr_job *geom_job = job;
+		struct pvr_job *frag_job = job->paired_job;
+		struct pvr_queue *geom_queue = job->ctx->queues.geometry;
+		struct pvr_queue *frag_queue = job->ctx->queues.fragment;
+
+		/* Submit the fragment job along with the geometry job and send a combined kick. */
+		pvr_queue_submit_job_to_cccb(frag_job);
+		pvr_cccb_send_kccb_combined_kick(pvr_dev,
+						 &geom_queue->cccb, &frag_queue->cccb,
+						 pvr_context_get_fw_addr(geom_job->ctx) +
+						 geom_queue->ctx_offset,
+						 pvr_context_get_fw_addr(frag_job->ctx) +
+						 frag_queue->ctx_offset,
+						 job->hwrt,
+						 frag_job->fw_ccb_cmd_type ==
+						 ROGUE_FWIF_CCB_CMD_TYPE_FRAG_PR);
+		geom_job->paired_job = NULL;
+		frag_job->paired_job = NULL;
+	} else {
+		struct pvr_queue *queue = container_of(job->base.sched,
+						       struct pvr_queue, scheduler);
+
+		pvr_cccb_send_kccb_kick(pvr_dev, &queue->cccb,
+					pvr_context_get_fw_addr(job->ctx) + queue->ctx_offset,
+					job->hwrt);
+	}
+
+	return dma_fence_get(job->done_fence);
+}
+
+static void pvr_queue_stop(struct pvr_queue *queue, struct pvr_job *bad_job)
+{
+	drm_sched_stop(&queue->scheduler, bad_job ? &bad_job->base : NULL);
+}
+
+static void pvr_queue_start(struct pvr_queue *queue)
+{
+	struct pvr_job *job;
+
+	/* Make sure we CPU-signal the UFO object, so other queues don't get
+	 * blocked waiting on it.
+	 */
+	*queue->timeline_ufo.value = atomic_read(&queue->job_fence_ctx.seqno);
+
+	list_for_each_entry(job, &queue->scheduler.pending_list, base.list) {
+		if (dma_fence_is_signaled(job->done_fence)) {
+			/* Jobs might have completed after drm_sched_stop() was called.
+			 * In that case, re-assign the parent field to the done_fence.
+			 */
+			WARN_ON(job->base.s_fence->parent);
+			job->base.s_fence->parent = dma_fence_get(job->done_fence);
+		} else {
+			/* If we had unfinished jobs, flag the entity as guilty so no
+			 * new job can be submitted.
+			 */
+			atomic_set(&queue->ctx->faulty, 1);
+
+			if (job == queue->last_submitted_job) {
+				queue->last_submitted_job = NULL;
+				dma_fence_signal(job->done_fence);
+				pvr_job_release_pm_ref(job);
+				atomic_dec(&queue->in_flight_job_count);
+			}
+		}
+	}
+
+	drm_sched_start(&queue->scheduler, true);
+}
+
+/**
+ * pvr_queue_timedout_job() - Handle a job timeout event.
+ * @s_job: The job this timeout occurred on.
+ *
+ * FIXME: We don't do anything here to unblock the situation, we just stop+start
+ * the scheduler, and re-assign parent fences in the middle.
+ *
+ * Return:
+ *  * DRM_GPU_SCHED_STAT_NOMINAL.
+ */
+static enum drm_gpu_sched_stat
+pvr_queue_timedout_job(struct drm_sched_job *s_job)
+{
+	struct drm_gpu_scheduler *sched = s_job->sched;
+	struct pvr_queue *queue = container_of(sched, struct pvr_queue, scheduler);
+	struct pvr_device *pvr_dev = queue->ctx->pvr_dev;
+	struct pvr_job *job;
+	u32 job_count = 0;
+
+	dev_err(sched->dev, "Job timeout\n");
+
+	/* Before we stop the scheduler, make sure the queue is out of any list, so
+	 * any call to pvr_queue_update_active_state_locked() that might happen
+	 * until the scheduler is really stopped doesn't end up re-inserting the
+	 * queue in the active list. This would cause
+	 * pvr_queue_signal_done_fences() and drm_sched_stop() to race with each
+	 * other when accessing the pending_list, since drm_sched_stop() doesn't
+	 * grab the job_list_lock when modifying the list (it's assuming the
+	 * only other accessor is the scheduler, and it's safe to not grab the
+	 * lock since it's stopped).
+	 */
+	mutex_lock(&pvr_dev->queues.lock);
+	list_del_init(&queue->node);
+	mutex_unlock(&pvr_dev->queues.lock);
+
+	drm_sched_stop(sched, s_job);
+
+	/* Reset the last_submitted_job field now, just in case. No need to grab
+	 * the job_list_lock here, all the paths accessing this field are guaranteed
+	 * to be turned off at that point.
+	 */
+	queue->last_submitted_job = NULL;
+
+	/* Re-assign job parent fences. */
+	list_for_each_entry(job, &sched->pending_list, base.list) {
+		job->base.s_fence->parent = dma_fence_get(job->done_fence);
+		job_count++;
+	}
+	WARN_ON(atomic_read(&queue->in_flight_job_count) != job_count);
+
+	/* Re-insert the queue in the proper list, and kick a queue processing
+	 * operation if there were jobs pending.
+	 */
+	mutex_lock(&pvr_dev->queues.lock);
+	if (!atomic_read(&queue->in_flight_job_count)) {
+		list_move_tail(&queue->node, &pvr_dev->queues.idle);
+	} else {
+		list_move_tail(&queue->node, &pvr_dev->queues.active);
+		pvr_queue_process(queue);
+	}
+	mutex_unlock(&pvr_dev->queues.lock);
+
+	drm_sched_start(sched, true);
+
+	return DRM_GPU_SCHED_STAT_NOMINAL;
+}
+
+/**
+ * pvr_queue_free_job() - Release the reference the scheduler had on a job object.
+ * @sched_job: Job object to free.
+ */
+static void pvr_queue_free_job(struct drm_sched_job *sched_job)
+{
+	struct pvr_job *job = container_of(sched_job, struct pvr_job, base);
+
+	drm_sched_job_cleanup(sched_job);
+	job->paired_job = NULL;
+	pvr_job_put(job);
+}
+
+static const struct drm_sched_backend_ops pvr_queue_sched_ops = {
+	.prepare_job = pvr_queue_prepare_job,
+	.run_job = pvr_queue_run_job,
+	.timedout_job = pvr_queue_timedout_job,
+	.free_job = pvr_queue_free_job,
+};
+
+/**
+ * pvr_queue_fence_is_ufo_backed() - Check if a dma_fence is backed by a UFO object
+ * @f: Fence to test.
+ *
+ * A UFO-backed fence is a fence that can be signaled or waited upon FW-side.
+ * pvr_job::done_fence objects are backed by the timeline UFO attached to the queue
+ * they are pushed to, but those fences are not directly exposed to the outside
+ * world, so we also need to check if the fence we're being passed is a
+ * drm_sched_fence that came from our driver.
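+ *
+ * Callers can use this to decide whether a dependency can be waited upon
+ * FW-side (as a UFO wait written to the CCCB) or has to be handled as a
+ * non-native dependency.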
+ */
+bool pvr_queue_fence_is_ufo_backed(struct dma_fence *f)
+{
+	struct drm_sched_fence *sched_fence = f ? to_drm_sched_fence(f) : NULL;
+
+	if (sched_fence &&
+	    sched_fence->sched->ops == &pvr_queue_sched_ops)
+		return true;
+
+	if (f && f->ops == &pvr_queue_job_fence_ops)
+		return true;
+
+	return false;
+}
+
+/**
+ * pvr_queue_signal_done_fences() - Signal done fences.
+ * @queue: Queue to check.
+ *
+ * Signal done fences of jobs whose seqno is not greater than the current value of
+ * the UFO object attached to the queue.
+ */
+static void
+pvr_queue_signal_done_fences(struct pvr_queue *queue)
+{
+	struct pvr_job *job, *tmp_job;
+	u32 cur_seqno;
+
+	spin_lock(&queue->scheduler.job_list_lock);
+	cur_seqno = *queue->timeline_ufo.value;
+	list_for_each_entry_safe(job, tmp_job, &queue->scheduler.pending_list, base.list) {
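+		/* Stop at the first job whose seqno is ahead of the current
+		 * UFO value; the signed cast keeps the comparison correct
+		 * across 32-bit wrap-around.
+		 */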
+		if ((int)(cur_seqno - lower_32_bits(job->done_fence->seqno)) < 0)
+			break;
+
+		if (!dma_fence_is_signaled(job->done_fence)) {
+			dma_fence_signal(job->done_fence);
+			pvr_job_release_pm_ref(job);
+			atomic_dec(&queue->in_flight_job_count);
+		}
+	}
+
+	/* We don't want to test jobs twice, so reset last_submitted_job
+	 * if the job is already part of the pending_list.
+	 */
+	job = list_last_entry(&queue->scheduler.pending_list, struct pvr_job, base.list);
+	if (job == queue->last_submitted_job)
+		queue->last_submitted_job = NULL;
+
+	if (queue->last_submitted_job &&
+	    (int)(cur_seqno - lower_32_bits(queue->last_submitted_job->done_fence->seqno)) >= 0) {
+		dma_fence_signal(queue->last_submitted_job->done_fence);
+		pvr_job_release_pm_ref(queue->last_submitted_job);
+		atomic_dec(&queue->in_flight_job_count);
+
+		/* We signaled the job, so no need to check it again next time. Most importantly,
+		 * this addresses a race where we signal the job and drm_sched cleans it up
+		 * before pvr_queue_signal_done_fences() is called again, meaning the job
+		 * will never show up in the pending_list, and we might be pointing to an
+		 * already freed job next time pvr_queue_signal_done_fences() is called.
+		 */
+		queue->last_submitted_job = NULL;
+	}
+	spin_unlock(&queue->scheduler.job_list_lock);
+}
+
+/**
+ * pvr_queue_check_job_waiting_for_cccb_space() - Check if the job waiting for CCCB space
+ * can be unblocked and pushed to the CCCB.
+ * @queue: Queue to check.
+ *
+ * If we have a job waiting for CCCB, and this job now fits in the CCCB, we signal
+ * its CCCB fence, which should kick drm_sched.
+ */
+static void
+pvr_queue_check_job_waiting_for_cccb_space(struct pvr_queue *queue)
+{
+	struct pvr_queue_fence *cccb_fence;
+	u32 native_deps_remaining;
+	struct pvr_job *job;
+
+	mutex_lock(&queue->cccb_fence_ctx.job_lock);
+	job = queue->cccb_fence_ctx.job;
+	if (!job)
+		goto out_unlock;
+
+	/* If we have a job attached to the CCCB fence context, its CCCB fence
+	 * shouldn't be NULL.
+	 */
+	if (WARN_ON(!job->cccb_fence)) {
+		job = NULL;
+		goto out_unlock;
+	}
+
+	/* If we get there, CCCB fence has to be initialized. */
+	cccb_fence = container_of(job->cccb_fence, struct pvr_queue_fence, base);
+	if (WARN_ON(!cccb_fence->queue)) {
+		job = NULL;
+		goto out_unlock;
+	}
+
+	/* Evict signaled dependencies before checking for CCCB space.
+	 * If the job fits, signal the CCCB fence, this should unblock
+	 * the drm_sched_entity.
+	 */
+	native_deps_remaining = job_count_remaining_native_deps(job);
+	if (!pvr_cccb_cmdseq_fits(&queue->cccb, job_cmds_size(job, native_deps_remaining))) {
+		job = NULL;
+		goto out_unlock;
+	}
+
+	dma_fence_signal(job->cccb_fence);
+	pvr_queue_fence_put(job->cccb_fence);
+	job->cccb_fence = NULL;
+	queue->cccb_fence_ctx.job = NULL;
+
+out_unlock:
+	mutex_unlock(&queue->cccb_fence_ctx.job_lock);
+
+	pvr_job_put(job);
+}
+
+/**
+ * pvr_queue_process() - Process events that happened on a queue.
+ * @queue: Queue to check
+ *
+ * Signal job fences and check if jobs waiting for CCCB space can be unblocked.
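+ *
+ * Must be called with pvr_device::queues.lock held.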
+ */
+void pvr_queue_process(struct pvr_queue *queue)
+{
+	lockdep_assert_held(&queue->ctx->pvr_dev->queues.lock);
+
+	pvr_queue_check_job_waiting_for_cccb_space(queue);
+	pvr_queue_signal_done_fences(queue);
+	pvr_queue_update_active_state_locked(queue);
+}
+
+static u32 get_dm_type(struct pvr_queue *queue)
+{
+	switch (queue->type) {
+	case DRM_PVR_JOB_TYPE_GEOMETRY:
+		return PVR_FWIF_DM_GEOM;
+	case DRM_PVR_JOB_TYPE_TRANSFER_FRAG:
+	case DRM_PVR_JOB_TYPE_FRAGMENT:
+		return PVR_FWIF_DM_FRAG;
+	case DRM_PVR_JOB_TYPE_COMPUTE:
+		return PVR_FWIF_DM_CDM;
+	}
+
+	return ~0;
+}
+
+/**
+ * init_fw_context() - Initializes the queue part of a FW context.
+ * @queue: Queue object to initialize the FW context for.
+ * @fw_ctx_map: The FW context CPU mapping.
+ *
+ * FW contexts contain various bits of state, one of them being a per-queue state
+ * that needs to be initialized for each queue being exposed by a context. This
+ * function takes care of that.
+ */
+static void init_fw_context(struct pvr_queue *queue, void *fw_ctx_map)
+{
+	struct pvr_context *ctx = queue->ctx;
+	struct pvr_fw_object *fw_mem_ctx_obj = pvr_vm_get_fw_mem_context(ctx->vm_ctx);
+	struct rogue_fwif_fwcommoncontext *cctx_fw;
+	struct pvr_cccb *cccb = &queue->cccb;
+
+	cctx_fw = fw_ctx_map + queue->ctx_offset;
+	cctx_fw->ccbctl_fw_addr = cccb->ctrl_fw_addr;
+	cctx_fw->ccb_fw_addr = cccb->cccb_fw_addr;
+
+	cctx_fw->dm = get_dm_type(queue);
+	cctx_fw->priority = ctx->priority;
+	cctx_fw->priority_seq_num = 0;
+	cctx_fw->max_deadline_ms = MAX_DEADLINE_MS;
+	cctx_fw->pid = task_tgid_nr(current);
+	cctx_fw->server_common_context_id = ctx->ctx_id;
+
+	pvr_fw_object_get_fw_addr(fw_mem_ctx_obj, &cctx_fw->fw_mem_context_fw_addr);
+
+	pvr_fw_object_get_fw_addr(queue->reg_state_obj, &cctx_fw->context_state_addr);
+}
+
+/**
+ * pvr_queue_cleanup_fw_context() - Wait for the FW context to be idle and clean it up.
+ * @queue: Queue whose FW context is to be cleaned up.
+ *
+ * Return:
+ *  * 0 on success,
+ *  * Any error returned by pvr_fw_structure_cleanup() otherwise.
+ */
+static int pvr_queue_cleanup_fw_context(struct pvr_queue *queue)
+{
+	return pvr_fw_structure_cleanup(queue->ctx->pvr_dev,
+					ROGUE_FWIF_CLEANUP_FWCOMMONCONTEXT,
+					queue->ctx->fw_obj, queue->ctx_offset);
+}
+
+/**
+ * pvr_queue_job_init() - Initialize queue related fields in a pvr_job object.
+ * @job: The job to initialize.
+ *
+ * Bind the job to a queue and allocate memory to guarantee pvr_queue_job_arm()
+ * and pvr_queue_job_push() can't fail. We also make sure the context type is
+ * valid and the job can fit in the CCCB.
+ *
+ * Return:
+ *  * 0 on success, or
+ *  * An error code if something failed.
+ */
+int pvr_queue_job_init(struct pvr_job *job)
+{
+	/* Fragment jobs need at least one native fence wait on the geometry job fence. */
+	u32 min_native_dep_count = job->type == DRM_PVR_JOB_TYPE_FRAGMENT ? 1 : 0;
+	struct pvr_queue *queue;
+	int err;
+
+	if (atomic_read(&job->ctx->faulty))
+		return -EIO;
+
+	queue = pvr_context_get_queue_for_job(job->ctx, job->type);
+	if (!queue)
+		return -EINVAL;
+
+	if (!pvr_cccb_cmdseq_can_fit(&queue->cccb, job_cmds_size(job, min_native_dep_count)))
+		return -E2BIG;
+
+	err = drm_sched_job_init(&job->base, &queue->entity, THIS_MODULE);
+	if (err)
+		return err;
+
+	job->cccb_fence = pvr_queue_fence_alloc();
+	job->kccb_fence = pvr_kccb_fence_alloc();
+	job->done_fence = pvr_queue_fence_alloc();
+	if (!job->cccb_fence || !job->kccb_fence || !job->done_fence)
+		return -ENOMEM;
+
+	return 0;
+}
+
+/**
+ * pvr_queue_job_arm() - Arm a job object.
+ * @job: The job to arm.
+ *
+ * Initializes fences and returns the drm_sched finished fence so it can
+ * be exposed to the outside world. Once this function is called, you should
+ * make sure the job is pushed using pvr_queue_job_push(), or guarantee that
+ * no one grabbed a reference to the returned fence. The latter can happen if
+ * we do multi-job submission, and something failed when creating/initializing
+ * a job. In that case, we know the fence didn't leave the driver, and we
+ * can thus guarantee nobody will wait on a dead fence object.
+ *
+ * Return:
+ *  * A dma_fence object.
+ */
+struct dma_fence *pvr_queue_job_arm(struct pvr_job *job)
+{
+	drm_sched_job_arm(&job->base);
+
+	return &job->base.s_fence->finished;
+}
+
+/**
+ * pvr_queue_job_cleanup() - Cleanup fence/scheduler related fields in the job object.
+ * @job: The job to cleanup.
+ *
+ * Should be called in the job release path.
+ */
+void pvr_queue_job_cleanup(struct pvr_job *job)
+{
+	pvr_queue_fence_put(job->done_fence);
+	pvr_queue_fence_put(job->cccb_fence);
+	pvr_kccb_fence_put(job->kccb_fence);
+
+	if (job->base.s_fence)
+		drm_sched_job_cleanup(&job->base);
+}
+
+/**
+ * pvr_queue_job_push() - Push a job to its queue.
+ * @job: The job to push.
+ *
+ * Must be called after pvr_queue_job_init() and after all dependencies
+ * have been added to the job. This will effectively queue the job to
+ * the drm_sched_entity attached to the queue. We grab a reference on
+ * the job object, so the caller is free to drop its reference when it's
+ * done accessing the job object.
+ */
+void pvr_queue_job_push(struct pvr_job *job)
+{
+	struct pvr_queue *queue = container_of(job->base.sched, struct pvr_queue, scheduler);
+
+	/* Keep track of the last queued job scheduled fence for combined submit. */
+	dma_fence_put(queue->last_queued_job_scheduled_fence);
+	queue->last_queued_job_scheduled_fence = dma_fence_get(&job->base.s_fence->scheduled);
+
+	pvr_job_get(job);
+	drm_sched_entity_push_job(&job->base);
+}
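+
+/*
+ * Typical submission sequence (sketch only; the actual submit path lives in
+ * pvr_job.c): call pvr_queue_job_init(), add dependencies to the underlying
+ * drm_sched_job, call pvr_queue_job_arm() to obtain the finished fence that
+ * is exposed to the outside world, then pvr_queue_job_push() once nothing
+ * can fail anymore.
+ */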
+
+static void reg_state_init(void *cpu_ptr, void *priv)
+{
+	struct pvr_queue *queue = priv;
+
+	if (queue->type == DRM_PVR_JOB_TYPE_GEOMETRY) {
+		struct rogue_fwif_geom_ctx_state *geom_ctx_state_fw = cpu_ptr;
+
+		geom_ctx_state_fw->geom_core[0].geom_reg_vdm_call_stack_pointer_init =
+			queue->callstack_addr;
+	}
+}
+
+/**
+ * pvr_queue_create() - Create a queue object.
+ * @ctx: The context this queue will be attached to.
+ * @type: The type of jobs being pushed to this queue.
+ * @args: The arguments passed to the context creation function.
+ * @fw_ctx_map: CPU mapping of the FW context object.
+ *
+ * Create a queue object that will be used to queue and track jobs.
+ *
+ * Return:
+ *  * A valid pointer to a pvr_queue object, or
+ *  * An error pointer if the creation/initialization failed.
+ */
+struct pvr_queue *pvr_queue_create(struct pvr_context *ctx,
+				   enum drm_pvr_job_type type,
+				   struct drm_pvr_ioctl_create_context_args *args,
+				   void *fw_ctx_map)
+{
+	static const struct {
+		u32 cccb_size;
+		const char *name;
+	} props[] = {
+		[DRM_PVR_JOB_TYPE_GEOMETRY] = {
+			.cccb_size = CTX_GEOM_CCCB_SIZE_LOG2,
+			.name = "geometry",
+		},
+		[DRM_PVR_JOB_TYPE_FRAGMENT] = {
+			.cccb_size = CTX_FRAG_CCCB_SIZE_LOG2,
+			.name = "fragment"
+		},
+		[DRM_PVR_JOB_TYPE_COMPUTE] = {
+			.cccb_size = CTX_COMPUTE_CCCB_SIZE_LOG2,
+			.name = "compute"
+		},
+		[DRM_PVR_JOB_TYPE_TRANSFER_FRAG] = {
+			.cccb_size = CTX_TRANSFER_CCCB_SIZE_LOG2,
+			.name = "transfer_frag"
+		},
+	};
+	struct pvr_device *pvr_dev = ctx->pvr_dev;
+	struct drm_gpu_scheduler *sched;
+	struct pvr_queue *queue;
+	int ctx_state_size, err;
+	void *cpu_map;
+
+	if (WARN_ON(type >= ARRAY_SIZE(props)))
+		return ERR_PTR(-EINVAL);
+
+	switch (ctx->type) {
+	case DRM_PVR_CTX_TYPE_RENDER:
+		if (type != DRM_PVR_JOB_TYPE_GEOMETRY &&
+		    type != DRM_PVR_JOB_TYPE_FRAGMENT)
+			return ERR_PTR(-EINVAL);
+		break;
+	case DRM_PVR_CTX_TYPE_COMPUTE:
+		if (type != DRM_PVR_JOB_TYPE_COMPUTE)
+			return ERR_PTR(-EINVAL);
+		break;
+	case DRM_PVR_CTX_TYPE_TRANSFER_FRAG:
+		if (type != DRM_PVR_JOB_TYPE_TRANSFER_FRAG)
+			return ERR_PTR(-EINVAL);
+		break;
+	default:
+		return ERR_PTR(-EINVAL);
+	}
+
+	ctx_state_size = get_ctx_state_size(pvr_dev, type);
+	if (ctx_state_size < 0)
+		return ERR_PTR(ctx_state_size);
+
+	queue = kzalloc(sizeof(*queue), GFP_KERNEL);
+	if (!queue)
+		return ERR_PTR(-ENOMEM);
+
+	queue->type = type;
+	queue->ctx_offset = get_ctx_offset(type);
+	queue->ctx = ctx;
+	queue->callstack_addr = args->callstack_addr;
+	sched = &queue->scheduler;
+	INIT_LIST_HEAD(&queue->node);
+	mutex_init(&queue->cccb_fence_ctx.job_lock);
+	pvr_queue_fence_ctx_init(&queue->cccb_fence_ctx.base);
+	pvr_queue_fence_ctx_init(&queue->job_fence_ctx);
+
+	err = pvr_cccb_init(pvr_dev, &queue->cccb, props[type].cccb_size, props[type].name);
+	if (err)
+		goto err_free_queue;
+
+	err = pvr_fw_object_create(pvr_dev, ctx_state_size,
+				   PVR_BO_FW_FLAGS_DEVICE_UNCACHED,
+				   reg_state_init, queue, &queue->reg_state_obj);
+	if (err)
+		goto err_cccb_fini;
+
+	init_fw_context(queue, fw_ctx_map);
+
+	if (type != DRM_PVR_JOB_TYPE_GEOMETRY && type != DRM_PVR_JOB_TYPE_FRAGMENT &&
+	    args->callstack_addr) {
+		err = -EINVAL;
+		goto err_release_reg_state;
+	}
+
+	cpu_map = pvr_fw_object_create_and_map(pvr_dev, sizeof(*queue->timeline_ufo.value),
+					       PVR_BO_FW_FLAGS_DEVICE_UNCACHED,
+					       NULL, NULL, &queue->timeline_ufo.fw_obj);
+	if (IS_ERR(cpu_map)) {
+		err = PTR_ERR(cpu_map);
+		goto err_release_reg_state;
+	}
+
+	queue->timeline_ufo.value = cpu_map;
+
+	err = drm_sched_init(&queue->scheduler,
+			     &pvr_queue_sched_ops,
+			     pvr_dev->sched_wq, 64 * 1024, 1,
+			     msecs_to_jiffies(500),
+			     pvr_dev->sched_wq, NULL, "pvr-queue",
+			     DRM_SCHED_POLICY_SINGLE_ENTITY,
+			     pvr_dev->base.dev);
+	if (err)
+		goto err_release_ufo;
+
+	err = drm_sched_entity_init(&queue->entity,
+				    DRM_SCHED_PRIORITY_MIN,
+				    &sched, 1, &ctx->faulty);
+	if (err)
+		goto err_sched_fini;
+
+	mutex_lock(&pvr_dev->queues.lock);
+	list_add_tail(&queue->node, &pvr_dev->queues.idle);
+	mutex_unlock(&pvr_dev->queues.lock);
+
+	return queue;
+
+err_sched_fini:
+	drm_sched_fini(&queue->scheduler);
+
+err_release_ufo:
+	pvr_fw_object_unmap_and_destroy(queue->timeline_ufo.fw_obj);
+
+err_release_reg_state:
+	pvr_fw_object_destroy(queue->reg_state_obj);
+
+err_cccb_fini:
+	pvr_cccb_fini(&queue->cccb);
+
+err_free_queue:
+	mutex_destroy(&queue->cccb_fence_ctx.job_lock);
+	kfree(queue);
+
+	return ERR_PTR(err);
+}
+
+void pvr_queue_device_pre_reset(struct pvr_device *pvr_dev)
+{
+	struct pvr_queue *queue;
+
+	mutex_lock(&pvr_dev->queues.lock);
+	list_for_each_entry(queue, &pvr_dev->queues.idle, node)
+		pvr_queue_stop(queue, NULL);
+	list_for_each_entry(queue, &pvr_dev->queues.active, node)
+		pvr_queue_stop(queue, NULL);
+	mutex_unlock(&pvr_dev->queues.lock);
+}
+
+void pvr_queue_device_post_reset(struct pvr_device *pvr_dev)
+{
+	struct pvr_queue *queue;
+
+	mutex_lock(&pvr_dev->queues.lock);
+	list_for_each_entry(queue, &pvr_dev->queues.active, node)
+		pvr_queue_start(queue);
+	list_for_each_entry(queue, &pvr_dev->queues.idle, node)
+		pvr_queue_start(queue);
+	mutex_unlock(&pvr_dev->queues.lock);
+}
+
+/**
+ * pvr_queue_kill() - Kill a queue.
+ * @queue: The queue to kill.
+ *
+ * Kill the queue so no new jobs can be pushed. Should be called when the
+ * context handle is destroyed. The queue object might last longer if jobs
+ * are still in flight and holding a reference to the context this queue
+ * belongs to.
+ */
+void pvr_queue_kill(struct pvr_queue *queue)
+{
+	drm_sched_entity_destroy(&queue->entity);
+	dma_fence_put(queue->last_queued_job_scheduled_fence);
+	queue->last_queued_job_scheduled_fence = NULL;
+}
+
+/**
+ * pvr_queue_destroy() - Destroy a queue.
+ * @queue: The queue to destroy.
+ *
+ * Cleanup the queue and free the resources attached to it. Should be
+ * called from the context release function.
+ */
+void pvr_queue_destroy(struct pvr_queue *queue)
+{
+	if (!queue)
+		return;
+
+	mutex_lock(&queue->ctx->pvr_dev->queues.lock);
+	list_del_init(&queue->node);
+	mutex_unlock(&queue->ctx->pvr_dev->queues.lock);
+
+	drm_sched_fini(&queue->scheduler);
+	drm_sched_entity_fini(&queue->entity);
+
+	if (WARN_ON(queue->last_queued_job_scheduled_fence))
+		dma_fence_put(queue->last_queued_job_scheduled_fence);
+
+	pvr_queue_cleanup_fw_context(queue);
+
+	pvr_fw_object_unmap_and_destroy(queue->timeline_ufo.fw_obj);
+	pvr_fw_object_destroy(queue->reg_state_obj);
+	pvr_cccb_fini(&queue->cccb);
+	mutex_destroy(&queue->cccb_fence_ctx.job_lock);
+	kfree(queue);
+}
+
+/**
+ * pvr_queue_device_init() - Device-level initialization of queue related fields.
+ * @pvr_dev: The device to initialize.
+ *
+ * Initializes all fields related to queue management in pvr_device.
+ *
+ * Return:
+ *  * 0 on success, or
+ *  * An error code on failure.
+ */
+int pvr_queue_device_init(struct pvr_device *pvr_dev)
+{
+	int err;
+
+	INIT_LIST_HEAD(&pvr_dev->queues.active);
+	INIT_LIST_HEAD(&pvr_dev->queues.idle);
+	err = drmm_mutex_init(from_pvr_device(pvr_dev), &pvr_dev->queues.lock);
+	if (err)
+		return err;
+
+	pvr_dev->sched_wq = alloc_workqueue("powervr-sched", WQ_UNBOUND, 0);
+	if (!pvr_dev->sched_wq)
+		return -ENOMEM;
+
+	return 0;
+}
+
+/**
+ * pvr_queue_device_fini() - Device-level cleanup of queue related fields.
+ * @pvr_dev: The device to cleanup.
+ *
+ * Cleanup/free all queue-related resources attached to a pvr_device object.
+ */
+void pvr_queue_device_fini(struct pvr_device *pvr_dev)
+{
+	destroy_workqueue(pvr_dev->sched_wq);
+}
diff --git a/drivers/gpu/drm/imagination/pvr_queue.h b/drivers/gpu/drm/imagination/pvr_queue.h
new file mode 100644
index 000000000000..01bdeff777bf
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_queue.h
@@ -0,0 +1,179 @@
+/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#ifndef PVR_QUEUE_H
+#define PVR_QUEUE_H
+
+#include <drm/gpu_scheduler.h>
+
+#include "pvr_cccb.h"
+#include "pvr_device.h"
+
+struct pvr_context;
+struct pvr_queue;
+
+/**
+ * struct pvr_queue_fence_ctx - Queue fence context
+ *
+ * Used to implement dma_fence_ops for pvr_job::{done,cccb}_fence.
+ */
+struct pvr_queue_fence_ctx {
+	/** @id: Fence context ID allocated with dma_fence_context_alloc(). */
+	u64 id;
+
+	/** @seqno: Sequence number incremented each time a fence is created. */
+	atomic_t seqno;
+
+	/** @lock: Lock used to synchronize access to fences allocated by this context. */
+	spinlock_t lock;
+};
+
+/**
+ * struct pvr_queue_cccb_fence_ctx - CCCB fence context
+ *
+ * Context used to manage fences controlling access to the CCCB. No fences are
+ * issued if there's enough space in the CCCB to push job commands.
+ */
+struct pvr_queue_cccb_fence_ctx {
+	/** @base: Base queue fence context. */
+	struct pvr_queue_fence_ctx base;
+
+	/**
+	 * @job: Job waiting for CCCB space.
+	 *
+	 * Thanks to the serialization done at the drm_sched_entity level,
+	 * there's no more than one job waiting for CCCB at a given time.
+	 *
+	 * This field is NULL if no jobs are currently waiting for CCCB space.
+	 *
+	 * Must be accessed with @job_lock held.
+	 */
+	struct pvr_job *job;
+
+	/** @job_lock: Lock protecting access to the job object. */
+	struct mutex job_lock;
+};
+
+/**
+ * struct pvr_queue_fence - Queue fence object
+ */
+struct pvr_queue_fence {
+	/** @base: Base dma_fence. */
+	struct dma_fence base;
+
+	/** @queue: Queue that created this fence. */
+	struct pvr_queue *queue;
+};
+
+/**
+ * struct pvr_queue - Job queue
+ *
+ * Used to queue and track execution of pvr_job objects.
+ */
+struct pvr_queue {
+	/** @scheduler: Single entity scheduler used to push jobs to this queue. */
+	struct drm_gpu_scheduler scheduler;
+
+	/** @entity: Scheduling entity backing this queue. */
+	struct drm_sched_entity entity;
+
+	/** @type: Type of jobs queued to this queue. */
+	enum drm_pvr_job_type type;
+
+	/** @ctx: Context object this queue is bound to. */
+	struct pvr_context *ctx;
+
+	/** @node: Used to add the queue to the active/idle queue list. */
+	struct list_head node;
+
+	/**
+	 * @in_flight_job_count: Number of jobs submitted to the CCCB that
+	 * have not been processed yet.
+	 */
+	atomic_t in_flight_job_count;
+
+	/**
+	 * @cccb_fence_ctx: CCCB fence context.
+	 *
+	 * Used to control access when the CCCB is full, such that we don't
+	 * end up trying to push commands to the CCCB if there's not enough
+	 * space to receive all commands needed for a job to complete.
+	 */
+	struct pvr_queue_cccb_fence_ctx cccb_fence_ctx;
+
+	/** @job_fence_ctx: Job fence context object. */
+	struct pvr_queue_fence_ctx job_fence_ctx;
+
+	/** @timeline_ufo: Timeline UFO for the context queue. */
+	struct {
+		/** @fw_obj: FW object representing the UFO value. */
+		struct pvr_fw_object *fw_obj;
+
+		/** @value: CPU mapping of the UFO value. */
+		u32 *value;
+	} timeline_ufo;
+
+	/**
+	 * @last_submitted_job: Last submitted job.
+	 *
+	 * We need to keep that one around because drm_sched_main() only
+	 * queues the job to the drm_gpu_scheduler::pending_list after
+	 * ::run_job() has returned, which is racy with the queue process
+	 * worker that's handling done job signaling.
+	 */
+	struct pvr_job *last_submitted_job;
+
+	/**
+	 * @last_queued_job_scheduled_fence: The scheduled fence of the last
+	 * job queued to this queue.
+	 *
+	 * We use it to insert frag -> geom dependencies when issuing combined
+	 * geom+frag jobs, to guarantee that the fragment job that's part of
+	 * the combined operation comes after all fragment jobs that were queued
+	 * before it.
+	 */
+	struct dma_fence *last_queued_job_scheduled_fence;
+
+	/** @cccb: Client Circular Command Buffer. */
+	struct pvr_cccb cccb;
+
+	/** @reg_state_obj: FW object representing the register state of this queue. */
+	struct pvr_fw_object *reg_state_obj;
+
+	/** @ctx_offset: Offset of the queue context in the FW context object. */
+	u32 ctx_offset;
+
+	/** @callstack_addr: Initial call stack address for register state object. */
+	u64 callstack_addr;
+};
+
+bool pvr_queue_fence_is_ufo_backed(struct dma_fence *f);
+
+int pvr_queue_job_init(struct pvr_job *job);
+
+void pvr_queue_job_cleanup(struct pvr_job *job);
+
+void pvr_queue_job_push(struct pvr_job *job);
+
+struct dma_fence *pvr_queue_job_arm(struct pvr_job *job);
+
+struct pvr_queue *pvr_queue_create(struct pvr_context *ctx,
+				   enum drm_pvr_job_type type,
+				   struct drm_pvr_ioctl_create_context_args *args,
+				   void *fw_ctx_map);
+
+void pvr_queue_kill(struct pvr_queue *queue);
+
+void pvr_queue_destroy(struct pvr_queue *queue);
+
+void pvr_queue_process(struct pvr_queue *queue);
+
+void pvr_queue_device_pre_reset(struct pvr_device *pvr_dev);
+
+void pvr_queue_device_post_reset(struct pvr_device *pvr_dev);
+
+int pvr_queue_device_init(struct pvr_device *pvr_dev);
+
+void pvr_queue_device_fini(struct pvr_device *pvr_dev);
+
+#endif /* PVR_QUEUE_H */
diff --git a/drivers/gpu/drm/imagination/pvr_stream_defs.c b/drivers/gpu/drm/imagination/pvr_stream_defs.c
index 3c646e25accf..ca1dc5e66a6f 100644
--- a/drivers/gpu/drm/imagination/pvr_stream_defs.c
+++ b/drivers/gpu/drm/imagination/pvr_stream_defs.c
@@ -43,6 +43,232 @@
  * existing parameters, to preserve order. As parameters are naturally aligned, care must be taken
  * with respect to implicit padding in the stream; padding should be minimised as much as possible.
  */
+static const struct pvr_stream_def rogue_fwif_cmd_geom_stream[] = {
+	PVR_STREAM_DEF(rogue_fwif_cmd_geom, regs.vdm_ctrl_stream_base, 64),
+	PVR_STREAM_DEF(rogue_fwif_cmd_geom, regs.tpu_border_colour_table, 64),
+	PVR_STREAM_DEF_FEATURE(rogue_fwif_cmd_geom, regs.vdm_draw_indirect0, 64,
+			       PVR_FEATURE_VDM_DRAWINDIRECT),
+	PVR_STREAM_DEF_FEATURE(rogue_fwif_cmd_geom, regs.vdm_draw_indirect1, 32,
+			       PVR_FEATURE_VDM_DRAWINDIRECT),
+	PVR_STREAM_DEF(rogue_fwif_cmd_geom, regs.ppp_ctrl, 32),
+	PVR_STREAM_DEF(rogue_fwif_cmd_geom, regs.te_psg, 32),
+	PVR_STREAM_DEF(rogue_fwif_cmd_geom, regs.vdm_context_resume_task0_size, 32),
+	PVR_STREAM_DEF_FEATURE(rogue_fwif_cmd_geom, regs.vdm_context_resume_task3_size, 32,
+			       PVR_FEATURE_VDM_OBJECT_LEVEL_LLS),
+	PVR_STREAM_DEF(rogue_fwif_cmd_geom, regs.view_idx, 32),
+	PVR_STREAM_DEF_FEATURE(rogue_fwif_cmd_geom, regs.pds_coeff_free_prog, 32,
+			       PVR_FEATURE_TESSELLATION),
+};
+
+static const struct pvr_stream_def rogue_fwif_cmd_geom_stream_brn49927[] = {
+	PVR_STREAM_DEF(rogue_fwif_cmd_geom, regs.tpu, 32),
+};
+
+static const struct pvr_stream_ext_def cmd_geom_ext_streams_0[] = {
+	{
+		.stream = rogue_fwif_cmd_geom_stream_brn49927,
+		.stream_len = ARRAY_SIZE(rogue_fwif_cmd_geom_stream_brn49927),
+		.header_mask = PVR_STREAM_EXTHDR_GEOM0_BRN49927,
+		.quirk = 49927,
+	},
+};
+
+static const struct pvr_stream_ext_header cmd_geom_ext_headers[] = {
+	{
+		.ext_streams = cmd_geom_ext_streams_0,
+		.ext_streams_num = ARRAY_SIZE(cmd_geom_ext_streams_0),
+		.valid_mask = PVR_STREAM_EXTHDR_GEOM0_VALID,
+	},
+};
+
+const struct pvr_stream_cmd_defs pvr_cmd_geom_stream = {
+	.type = PVR_STREAM_TYPE_GEOM,
+
+	.main_stream = rogue_fwif_cmd_geom_stream,
+	.main_stream_len = ARRAY_SIZE(rogue_fwif_cmd_geom_stream),
+
+	.ext_nr_headers = ARRAY_SIZE(cmd_geom_ext_headers),
+	.ext_headers = cmd_geom_ext_headers,
+
+	.dest_size = sizeof(struct rogue_fwif_cmd_geom),
+};
+
+static const struct pvr_stream_def rogue_fwif_cmd_frag_stream[] = {
+	PVR_STREAM_DEF(rogue_fwif_cmd_frag, regs.isp_scissor_base, 64),
+	PVR_STREAM_DEF(rogue_fwif_cmd_frag, regs.isp_dbias_base, 64),
+	PVR_STREAM_DEF(rogue_fwif_cmd_frag, regs.isp_oclqry_base, 64),
+	PVR_STREAM_DEF(rogue_fwif_cmd_frag, regs.isp_zlsctl, 64),
+	PVR_STREAM_DEF(rogue_fwif_cmd_frag, regs.isp_zload_store_base, 64),
+	PVR_STREAM_DEF(rogue_fwif_cmd_frag, regs.isp_stencil_load_store_base, 64),
+	PVR_STREAM_DEF_FEATURE(rogue_fwif_cmd_frag, regs.fb_cdc_zls, 64,
+			       PVR_FEATURE_REQUIRES_FB_CDC_ZLS_SETUP),
+	PVR_STREAM_DEF_ARRAY(rogue_fwif_cmd_frag, regs.pbe_word),
+	PVR_STREAM_DEF(rogue_fwif_cmd_frag, regs.tpu_border_colour_table, 64),
+	PVR_STREAM_DEF_ARRAY(rogue_fwif_cmd_frag, regs.pds_bgnd),
+	PVR_STREAM_DEF_ARRAY(rogue_fwif_cmd_frag, regs.pds_pr_bgnd),
+	PVR_STREAM_DEF_ARRAY(rogue_fwif_cmd_frag, regs.usc_clear_register),
+	PVR_STREAM_DEF(rogue_fwif_cmd_frag, regs.usc_pixel_output_ctrl, 32),
+	PVR_STREAM_DEF(rogue_fwif_cmd_frag, regs.isp_bgobjdepth, 32),
+	PVR_STREAM_DEF(rogue_fwif_cmd_frag, regs.isp_bgobjvals, 32),
+	PVR_STREAM_DEF(rogue_fwif_cmd_frag, regs.isp_aa, 32),
+	PVR_STREAM_DEF_FEATURE(rogue_fwif_cmd_frag, regs.isp_xtp_pipe_enable, 32,
+			       PVR_FEATURE_S7_TOP_INFRASTRUCTURE),
+	PVR_STREAM_DEF(rogue_fwif_cmd_frag, regs.isp_ctl, 32),
+	PVR_STREAM_DEF(rogue_fwif_cmd_frag, regs.event_pixel_pds_info, 32),
+	PVR_STREAM_DEF_FEATURE(rogue_fwif_cmd_frag, regs.pixel_phantom, 32,
+			       PVR_FEATURE_CLUSTER_GROUPING),
+	PVR_STREAM_DEF(rogue_fwif_cmd_frag, regs.view_idx, 32),
+	PVR_STREAM_DEF(rogue_fwif_cmd_frag, regs.event_pixel_pds_data, 32),
+	PVR_STREAM_DEF_FEATURE(rogue_fwif_cmd_frag, regs.isp_oclqry_stride, 32,
+			       PVR_FEATURE_GPU_MULTICORE_SUPPORT),
+	PVR_STREAM_DEF_FEATURE(rogue_fwif_cmd_frag, regs.isp_zls_pixels, 32,
+			       PVR_FEATURE_ZLS_SUBTILE),
+	PVR_STREAM_DEF_FEATURE(rogue_fwif_cmd_frag, regs.rgx_cr_blackpearl_fix, 32,
+			       PVR_FEATURE_ISP_ZLS_D24_S8_PACKING_OGL_MODE),
+	PVR_STREAM_DEF(rogue_fwif_cmd_frag, zls_stride, 32),
+	PVR_STREAM_DEF(rogue_fwif_cmd_frag, sls_stride, 32),
+	PVR_STREAM_DEF_FEATURE(rogue_fwif_cmd_frag, execute_count, 32,
+			       PVR_FEATURE_GPU_MULTICORE_SUPPORT),
+};
+
+static const struct pvr_stream_def rogue_fwif_cmd_frag_stream_brn47217[] = {
+	PVR_STREAM_DEF(rogue_fwif_cmd_frag, regs.isp_oclqry_stride, 32),
+};
+
+static const struct pvr_stream_def rogue_fwif_cmd_frag_stream_brn49927[] = {
+	PVR_STREAM_DEF(rogue_fwif_cmd_frag, regs.tpu, 32),
+};
+
+static const struct pvr_stream_ext_def cmd_frag_ext_streams_0[] = {
+	{
+		.stream = rogue_fwif_cmd_frag_stream_brn47217,
+		.stream_len = ARRAY_SIZE(rogue_fwif_cmd_frag_stream_brn47217),
+		.header_mask = PVR_STREAM_EXTHDR_FRAG0_BRN47217,
+		.quirk = 47217,
+	},
+	{
+		.stream = rogue_fwif_cmd_frag_stream_brn49927,
+		.stream_len = ARRAY_SIZE(rogue_fwif_cmd_frag_stream_brn49927),
+		.header_mask = PVR_STREAM_EXTHDR_FRAG0_BRN49927,
+		.quirk = 49927,
+	},
+};
+
+static const struct pvr_stream_ext_header cmd_frag_ext_headers[] = {
+	{
+		.ext_streams = cmd_frag_ext_streams_0,
+		.ext_streams_num = ARRAY_SIZE(cmd_frag_ext_streams_0),
+		.valid_mask = PVR_STREAM_EXTHDR_FRAG0_VALID,
+	},
+};
+
+const struct pvr_stream_cmd_defs pvr_cmd_frag_stream = {
+	.type = PVR_STREAM_TYPE_FRAG,
+
+	.main_stream = rogue_fwif_cmd_frag_stream,
+	.main_stream_len = ARRAY_SIZE(rogue_fwif_cmd_frag_stream),
+
+	.ext_nr_headers = ARRAY_SIZE(cmd_frag_ext_headers),
+	.ext_headers = cmd_frag_ext_headers,
+
+	.dest_size = sizeof(struct rogue_fwif_cmd_frag),
+};
+
+static const struct pvr_stream_def rogue_fwif_cmd_compute_stream[] = {
+	PVR_STREAM_DEF(rogue_fwif_cmd_compute, regs.tpu_border_colour_table, 64),
+	PVR_STREAM_DEF_FEATURE(rogue_fwif_cmd_compute, regs.cdm_cb_queue, 64,
+			       PVR_FEATURE_CDM_USER_MODE_QUEUE),
+	PVR_STREAM_DEF_FEATURE(rogue_fwif_cmd_compute, regs.cdm_cb_base, 64,
+			       PVR_FEATURE_CDM_USER_MODE_QUEUE),
+	PVR_STREAM_DEF_FEATURE(rogue_fwif_cmd_compute, regs.cdm_cb, 64,
+			       PVR_FEATURE_CDM_USER_MODE_QUEUE),
+	PVR_STREAM_DEF_NOT_FEATURE(rogue_fwif_cmd_compute, regs.cdm_ctrl_stream_base, 64,
+				   PVR_FEATURE_CDM_USER_MODE_QUEUE),
+	PVR_STREAM_DEF(rogue_fwif_cmd_compute, regs.cdm_context_state_base_addr, 64),
+	PVR_STREAM_DEF(rogue_fwif_cmd_compute, regs.cdm_resume_pds1, 32),
+	PVR_STREAM_DEF_FEATURE(rogue_fwif_cmd_compute, regs.cdm_item, 32,
+			       PVR_FEATURE_COMPUTE_MORTON_CAPABLE),
+	PVR_STREAM_DEF_FEATURE(rogue_fwif_cmd_compute, regs.compute_cluster, 32,
+			       PVR_FEATURE_CLUSTER_GROUPING),
+	PVR_STREAM_DEF_FEATURE(rogue_fwif_cmd_compute, regs.tpu_tag_cdm_ctrl, 32,
+			       PVR_FEATURE_TPU_DM_GLOBAL_REGISTERS),
+	PVR_STREAM_DEF_FEATURE(rogue_fwif_cmd_compute, stream_start_offset, 32,
+			       PVR_FEATURE_CDM_USER_MODE_QUEUE),
+	PVR_STREAM_DEF_FEATURE(rogue_fwif_cmd_compute, execute_count, 32,
+			       PVR_FEATURE_GPU_MULTICORE_SUPPORT),
+};
+
+static const struct pvr_stream_def rogue_fwif_cmd_compute_stream_brn49927[] = {
+	PVR_STREAM_DEF(rogue_fwif_cmd_compute, regs.tpu, 32),
+};
+
+static const struct pvr_stream_ext_def cmd_compute_ext_streams_0[] = {
+	{
+		.stream = rogue_fwif_cmd_compute_stream_brn49927,
+		.stream_len = ARRAY_SIZE(rogue_fwif_cmd_compute_stream_brn49927),
+		.header_mask = PVR_STREAM_EXTHDR_COMPUTE0_BRN49927,
+		.quirk = 49927,
+	},
+};
+
+static const struct pvr_stream_ext_header cmd_compute_ext_headers[] = {
+	{
+		.ext_streams = cmd_compute_ext_streams_0,
+		.ext_streams_num = ARRAY_SIZE(cmd_compute_ext_streams_0),
+		.valid_mask = PVR_STREAM_EXTHDR_COMPUTE0_VALID,
+	},
+};
+
+const struct pvr_stream_cmd_defs pvr_cmd_compute_stream = {
+	.type = PVR_STREAM_TYPE_COMPUTE,
+
+	.main_stream = rogue_fwif_cmd_compute_stream,
+	.main_stream_len = ARRAY_SIZE(rogue_fwif_cmd_compute_stream),
+
+	.ext_nr_headers = ARRAY_SIZE(cmd_compute_ext_headers),
+	.ext_headers = cmd_compute_ext_headers,
+
+	.dest_size = sizeof(struct rogue_fwif_cmd_compute),
+};
+
+static const struct pvr_stream_def rogue_fwif_cmd_transfer_stream[] = {
+	PVR_STREAM_DEF(rogue_fwif_cmd_transfer, regs.pds_bgnd0_base, 64),
+	PVR_STREAM_DEF(rogue_fwif_cmd_transfer, regs.pds_bgnd1_base, 64),
+	PVR_STREAM_DEF(rogue_fwif_cmd_transfer, regs.pds_bgnd3_sizeinfo, 64),
+	PVR_STREAM_DEF(rogue_fwif_cmd_transfer, regs.isp_mtile_base, 64),
+	PVR_STREAM_DEF_ARRAY(rogue_fwif_cmd_transfer, regs.pbe_wordx_mrty),
+	PVR_STREAM_DEF(rogue_fwif_cmd_transfer, regs.isp_bgobjvals, 32),
+	PVR_STREAM_DEF(rogue_fwif_cmd_transfer, regs.usc_pixel_output_ctrl, 32),
+	PVR_STREAM_DEF(rogue_fwif_cmd_transfer, regs.usc_clear_register0, 32),
+	PVR_STREAM_DEF(rogue_fwif_cmd_transfer, regs.usc_clear_register1, 32),
+	PVR_STREAM_DEF(rogue_fwif_cmd_transfer, regs.usc_clear_register2, 32),
+	PVR_STREAM_DEF(rogue_fwif_cmd_transfer, regs.usc_clear_register3, 32),
+	PVR_STREAM_DEF(rogue_fwif_cmd_transfer, regs.isp_mtile_size, 32),
+	PVR_STREAM_DEF(rogue_fwif_cmd_transfer, regs.isp_render_origin, 32),
+	PVR_STREAM_DEF(rogue_fwif_cmd_transfer, regs.isp_ctl, 32),
+	PVR_STREAM_DEF(rogue_fwif_cmd_transfer, regs.isp_aa, 32),
+	PVR_STREAM_DEF(rogue_fwif_cmd_transfer, regs.event_pixel_pds_info, 32),
+	PVR_STREAM_DEF(rogue_fwif_cmd_transfer, regs.event_pixel_pds_code, 32),
+	PVR_STREAM_DEF(rogue_fwif_cmd_transfer, regs.event_pixel_pds_data, 32),
+	PVR_STREAM_DEF(rogue_fwif_cmd_transfer, regs.isp_render, 32),
+	PVR_STREAM_DEF(rogue_fwif_cmd_transfer, regs.isp_rgn, 32),
+	PVR_STREAM_DEF_FEATURE(rogue_fwif_cmd_transfer, regs.isp_xtp_pipe_enable, 32,
+			       PVR_FEATURE_S7_TOP_INFRASTRUCTURE),
+	PVR_STREAM_DEF_FEATURE(rogue_fwif_cmd_transfer, regs.frag_screen, 32,
+			       PVR_FEATURE_GPU_MULTICORE_SUPPORT),
+};
+
+const struct pvr_stream_cmd_defs pvr_cmd_transfer_stream = {
+	.type = PVR_STREAM_TYPE_TRANSFER,
+
+	.main_stream = rogue_fwif_cmd_transfer_stream,
+	.main_stream_len = ARRAY_SIZE(rogue_fwif_cmd_transfer_stream),
+
+	.ext_nr_headers = 0,
+
+	.dest_size = sizeof(struct rogue_fwif_cmd_transfer),
+};
+
 static const struct pvr_stream_def rogue_fwif_static_render_context_state_stream[] = {
 	PVR_STREAM_DEF(rogue_fwif_geom_registers_caswitch,
 		       geom_reg_vdm_context_state_base_addr, 64),
diff --git a/drivers/gpu/drm/imagination/pvr_sync.c b/drivers/gpu/drm/imagination/pvr_sync.c
new file mode 100644
index 000000000000..c2815b089051
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_sync.c
@@ -0,0 +1,287 @@
+// SPDX-License-Identifier: GPL-2.0 OR MIT
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#include <uapi/drm/pvr_drm.h>
+
+#include <drm/drm_syncobj.h>
+#include <drm/gpu_scheduler.h>
+#include <linux/xarray.h>
+#include <linux/dma-fence-unwrap.h>
+
+#include "pvr_device.h"
+#include "pvr_queue.h"
+#include "pvr_sync.h"
+
+static int
+pvr_check_sync_op(const struct drm_pvr_sync_op *sync_op)
+{
+	u8 handle_type;
+
+	if (sync_op->flags & ~DRM_PVR_SYNC_OP_FLAGS_MASK)
+		return -EINVAL;
+
+	handle_type = sync_op->flags & DRM_PVR_SYNC_OP_FLAG_HANDLE_TYPE_MASK;
+	if (handle_type != DRM_PVR_SYNC_OP_FLAG_HANDLE_TYPE_SYNCOBJ &&
+	    handle_type != DRM_PVR_SYNC_OP_FLAG_HANDLE_TYPE_TIMELINE_SYNCOBJ)
+		return -EINVAL;
+
+	if (handle_type == DRM_PVR_SYNC_OP_FLAG_HANDLE_TYPE_SYNCOBJ &&
+	    sync_op->value != 0)
+		return -EINVAL;
+
+	return 0;
+}
+
+static void
+pvr_sync_signal_free(struct pvr_sync_signal *sig_sync)
+{
+	if (!sig_sync)
+		return;
+
+	drm_syncobj_put(sig_sync->syncobj);
+	dma_fence_chain_free(sig_sync->chain);
+	dma_fence_put(sig_sync->fence);
+	kfree(sig_sync);
+}
+
+void
+pvr_sync_signal_array_cleanup(struct xarray *array)
+{
+	struct pvr_sync_signal *sig_sync;
+	unsigned long i;
+
+	xa_for_each(array, i, sig_sync)
+		pvr_sync_signal_free(sig_sync);
+
+	xa_destroy(array);
+}
+
+static struct pvr_sync_signal *
+pvr_sync_signal_array_add(struct xarray *array, struct drm_file *file, u32 handle, u64 point)
+{
+	struct pvr_sync_signal *sig_sync;
+	int err;
+	u32 id;
+
+	sig_sync = kzalloc(sizeof(*sig_sync), GFP_KERNEL);
+	if (!sig_sync)
+		return ERR_PTR(-ENOMEM);
+
+	sig_sync->handle = handle;
+	sig_sync->point = point;
+
+	if (point > 0) {
+		sig_sync->chain = dma_fence_chain_alloc();
+		if (!sig_sync->chain) {
+			err = -ENOMEM;
+			goto err_free_sig_sync;
+		}
+	}
+
+	sig_sync->syncobj = drm_syncobj_find(file, handle);
+	if (!sig_sync->syncobj) {
+		err = -EINVAL;
+		goto err_free_sig_sync;
+	}
+
+	/* Retrieve the current fence attached to that point. It's
+	 * perfectly fine to get a NULL fence here; it just means there's
+	 * no fence attached to that point yet.
+	 */
+	drm_syncobj_find_fence(file, handle, point, 0, &sig_sync->fence);
+
+	err = xa_alloc(array, &id, sig_sync, xa_limit_32b, GFP_KERNEL);
+	if (err)
+		goto err_free_sig_sync;
+
+	return sig_sync;
+
+err_free_sig_sync:
+	pvr_sync_signal_free(sig_sync);
+	return ERR_PTR(err);
+}
+
+static struct pvr_sync_signal *
+pvr_sync_signal_array_search(struct xarray *array, u32 handle, u64 point)
+{
+	struct pvr_sync_signal *sig_sync;
+	unsigned long i;
+
+	xa_for_each(array, i, sig_sync) {
+		if (handle == sig_sync->handle && point == sig_sync->point)
+			return sig_sync;
+	}
+
+	return NULL;
+}
+
+static struct pvr_sync_signal *
+pvr_sync_signal_array_get(struct xarray *array, struct drm_file *file, u32 handle, u64 point)
+{
+	struct pvr_sync_signal *sig_sync;
+
+	sig_sync = pvr_sync_signal_array_search(array, handle, point);
+	if (sig_sync)
+		return sig_sync;
+
+	return pvr_sync_signal_array_add(array, file, handle, point);
+}
+
+int
+pvr_sync_signal_array_collect_ops(struct xarray *array,
+				  struct drm_file *file,
+				  u32 sync_op_count,
+				  const struct drm_pvr_sync_op *sync_ops)
+{
+	for (u32 i = 0; i < sync_op_count; i++) {
+		struct pvr_sync_signal *sig_sync;
+		int ret;
+
+		if (!(sync_ops[i].flags & DRM_PVR_SYNC_OP_FLAG_SIGNAL))
+			continue;
+
+		ret = pvr_check_sync_op(&sync_ops[i]);
+		if (ret)
+			return ret;
+
+		sig_sync = pvr_sync_signal_array_get(array, file,
+						     sync_ops[i].handle,
+						     sync_ops[i].value);
+		if (IS_ERR(sig_sync))
+			return PTR_ERR(sig_sync);
+	}
+
+	return 0;
+}
+
+int
+pvr_sync_signal_array_update_fences(struct xarray *array,
+				    u32 sync_op_count,
+				    const struct drm_pvr_sync_op *sync_ops,
+				    struct dma_fence *done_fence)
+{
+	for (u32 i = 0; i < sync_op_count; i++) {
+		struct dma_fence *old_fence;
+		struct pvr_sync_signal *sig_sync;
+
+		if (!(sync_ops[i].flags & DRM_PVR_SYNC_OP_FLAG_SIGNAL))
+			continue;
+
+		sig_sync = pvr_sync_signal_array_search(array, sync_ops[i].handle,
+							sync_ops[i].value);
+		if (WARN_ON(!sig_sync))
+			return -EINVAL;
+
+		old_fence = sig_sync->fence;
+		sig_sync->fence = dma_fence_get(done_fence);
+		dma_fence_put(old_fence);
+
+		if (WARN_ON(!sig_sync->fence))
+			return -EINVAL;
+	}
+
+	return 0;
+}
+
+void
+pvr_sync_signal_array_push_fences(struct xarray *array)
+{
+	struct pvr_sync_signal *sig_sync;
+	unsigned long i;
+
+	xa_for_each(array, i, sig_sync) {
+		if (sig_sync->chain) {
+			drm_syncobj_add_point(sig_sync->syncobj, sig_sync->chain,
+					      sig_sync->fence, sig_sync->point);
+			sig_sync->chain = NULL;
+		} else {
+			drm_syncobj_replace_fence(sig_sync->syncobj, sig_sync->fence);
+		}
+	}
+}
+
+static int
+pvr_sync_add_dep_to_job(struct drm_sched_job *job, struct dma_fence *f)
+{
+	struct dma_fence_unwrap iter;
+	u32 native_fence_count = 0;
+	struct dma_fence *uf;
+	int err = 0;
+
+	dma_fence_unwrap_for_each(uf, &iter, f) {
+		if (pvr_queue_fence_is_ufo_backed(uf))
+			native_fence_count++;
+	}
+
+	/* No need to unwrap the fence if it's fully non-native. */
+	if (!native_fence_count)
+		return drm_sched_job_add_dependency(job, f);
+
+	dma_fence_unwrap_for_each(uf, &iter, f) {
+		/* There's no dma_fence_unwrap_stop() helper cleaning up the refs
+		 * owned by dma_fence_unwrap(), so let's just iterate over all
+		 * entries without doing anything when something failed.
+		 */
+		if (err)
+			continue;
+
+		if (pvr_queue_fence_is_ufo_backed(uf)) {
+			struct drm_sched_fence *s_fence = to_drm_sched_fence(uf);
+
+			/* If this is a native dependency, we wait for the scheduled fence,
+			 * and we will let pvr_queue_run_job() issue FW waits.
+			 */
+			err = drm_sched_job_add_dependency(job,
+							   dma_fence_get(&s_fence->scheduled));
+		} else {
+			err = drm_sched_job_add_dependency(job, dma_fence_get(uf));
+		}
+	}
+
+	dma_fence_put(f);
+	return err;
+}
+
+int
+pvr_sync_add_deps_to_job(struct pvr_file *pvr_file, struct drm_sched_job *job,
+			 u32 sync_op_count,
+			 const struct drm_pvr_sync_op *sync_ops,
+			 struct xarray *signal_array)
+{
+	int err = 0;
+
+	if (!sync_op_count)
+		return 0;
+
+	for (u32 i = 0; i < sync_op_count; i++) {
+		struct pvr_sync_signal *sig_sync;
+		struct dma_fence *fence;
+
+		if (sync_ops[i].flags & DRM_PVR_SYNC_OP_FLAG_SIGNAL)
+			continue;
+
+		err = pvr_check_sync_op(&sync_ops[i]);
+		if (err)
+			return err;
+
+		sig_sync = pvr_sync_signal_array_search(signal_array, sync_ops[i].handle,
+							sync_ops[i].value);
+		if (sig_sync) {
+			if (WARN_ON(!sig_sync->fence))
+				return -EINVAL;
+
+			fence = dma_fence_get(sig_sync->fence);
+		} else {
+			err = drm_syncobj_find_fence(from_pvr_file(pvr_file), sync_ops[i].handle,
+						     sync_ops[i].value, 0, &fence);
+			if (err)
+				return err;
+		}
+
+		err = pvr_sync_add_dep_to_job(job, fence);
+		if (err)
+			return err;
+	}
+
+	return 0;
+}
diff --git a/drivers/gpu/drm/imagination/pvr_sync.h b/drivers/gpu/drm/imagination/pvr_sync.h
new file mode 100644
index 000000000000..e6278f0384c6
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_sync.h
@@ -0,0 +1,84 @@
+/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#ifndef PVR_SYNC_H
+#define PVR_SYNC_H
+
+#include <uapi/drm/pvr_drm.h>
+
+/* Forward declaration from <linux/xarray.h>. */
+struct xarray;
+
+/* Forward declaration from <drm/drm_file.h>. */
+struct drm_file;
+
+/* Forward declaration from <drm/gpu_scheduler.h>. */
+struct drm_sched_job;
+
+/* Forward declaration from "pvr_device.h". */
+struct pvr_file;
+
+/**
+ * struct pvr_sync_signal - Object encoding a syncobj signal operation
+ *
+ * The job submission logic collects all signal operations in an array of
+ * pvr_sync_signal objects. This array also serves as a cache to get the
+ * latest dma_fence when multiple jobs are submitted at once, and one job
+ * signals a syncobj point that's later waited on by a subsequent job.
+ */
+struct pvr_sync_signal {
+	/** @handle: Handle of the syncobj to signal. */
+	u32 handle;
+
+	/**
+	 * @point: Point to signal in the syncobj.
+	 *
+	 * Only relevant for timeline syncobjs.
+	 */
+	u64 point;
+
+	/** @syncobj: Syncobj retrieved from the handle. */
+	struct drm_syncobj *syncobj;
+
+	/**
+	 * @chain: Chain object used to link the new fence with the
+	 *	   existing timeline syncobj.
+	 *
+	 * Should be zero when manipulating a regular syncobj.
+	 */
+	struct dma_fence_chain *chain;
+
+	/**
+	 * @fence: New fence object to attach to the syncobj.
+	 *
+	 * This pointer starts with the current fence bound to
+	 * the <handle,point> pair.
+	 */
+	struct dma_fence *fence;
+};
+
+void
+pvr_sync_signal_array_cleanup(struct xarray *array);
+
+int
+pvr_sync_signal_array_collect_ops(struct xarray *array,
+				  struct drm_file *file,
+				  u32 sync_op_count,
+				  const struct drm_pvr_sync_op *sync_ops);
+
+int
+pvr_sync_signal_array_update_fences(struct xarray *array,
+				    u32 sync_op_count,
+				    const struct drm_pvr_sync_op *sync_ops,
+				    struct dma_fence *done_fence);
+
+void
+pvr_sync_signal_array_push_fences(struct xarray *array);
+
+int
+pvr_sync_add_deps_to_job(struct pvr_file *pvr_file, struct drm_sched_job *job,
+			 u32 sync_op_count,
+			 const struct drm_pvr_sync_op *sync_ops,
+			 struct xarray *signal_array);
+
+#endif /* PVR_SYNC_H */
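
The helpers declared above are meant to be called in a fixed order from the
job submission path. Below is a minimal sketch of that order; it is an
illustration only, not part of the patch. Only the pvr_sync_*() and
from_pvr_file() calls are taken from the code above, while
example_submit_syncs() and the way its arguments are produced are assumptions
about the caller.

#include <linux/dma-fence.h>
#include <linux/xarray.h>

#include "pvr_device.h"
#include "pvr_sync.h"

/* Hypothetical submission helper; the example_ name is an assumption. */
static int example_submit_syncs(struct pvr_file *pvr_file,
				struct drm_sched_job *job,
				u32 sync_op_count,
				const struct drm_pvr_sync_op *sync_ops,
				struct dma_fence *done_fence)
{
	struct xarray signal_array;
	int err;

	/* pvr_sync_signal_array_add() uses xa_alloc(), so the array must be
	 * initialized with allocation support.
	 */
	xa_init_flags(&signal_array, XA_FLAGS_ALLOC);

	/* Collect every SIGNAL op first, so later WAIT ops on the same
	 * <handle, point> pair resolve against the cached entry.
	 */
	err = pvr_sync_signal_array_collect_ops(&signal_array,
						from_pvr_file(pvr_file),
						sync_op_count, sync_ops);
	if (err)
		goto out_cleanup;

	/* Turn WAIT ops into scheduler dependencies on the job. */
	err = pvr_sync_add_deps_to_job(pvr_file, job, sync_op_count,
				       sync_ops, &signal_array);
	if (err)
		goto out_cleanup;

	/* Once the job's done fence exists, point all SIGNAL ops at it... */
	err = pvr_sync_signal_array_update_fences(&signal_array,
						  sync_op_count, sync_ops,
						  done_fence);
	if (err)
		goto out_cleanup;

	/* ...and publish the fences to the syncobjs. */
	pvr_sync_signal_array_push_fences(&signal_array);

out_cleanup:
	/* Drop the syncobj/fence references taken while building the array. */
	pvr_sync_signal_array_cleanup(&signal_array);
	return err;
}
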
-- 
2.41.0


^ permalink raw reply related	[flat|nested] 77+ messages in thread

* [PATCH v5 15/17] drm/imagination: Add firmware trace to debugfs
  2023-08-16  8:25 ` Sarah Walker
@ 2023-08-16  8:25   ` Sarah Walker
  -1 siblings, 0 replies; 77+ messages in thread
From: Sarah Walker @ 2023-08-16  8:25 UTC (permalink / raw)
  To: dri-devel
  Cc: matthew.brost, luben.tuikov, tzimmermann, linux-kernel, mripard,
	afd, Matt Coster, boris.brezillon, dakr, donald.robson, hns,
	christian.koenig, faith.ekstrand

Firmware trace is exposed at /sys/kernel/debug/dri/<dev_nr>/pvr_fw/trace_0.
Trace is enabled via the group mask at
/sys/kernel/debug/dri/<dev_nr>/pvr_params/fw_trace_mask.

Changes since v3:
- Use drm_dev_{enter,exit}

Co-developed-by: Matt Coster <matt.coster@imgtec.com>
Signed-off-by: Matt Coster <matt.coster@imgtec.com>
Signed-off-by: Sarah Walker <sarah.walker@imgtec.com>
---
 drivers/gpu/drm/imagination/Makefile       |   4 +
 drivers/gpu/drm/imagination/pvr_debugfs.c  |  53 +++
 drivers/gpu/drm/imagination/pvr_debugfs.h  |  29 ++
 drivers/gpu/drm/imagination/pvr_device.c   |   9 +
 drivers/gpu/drm/imagination/pvr_device.h   |  10 +
 drivers/gpu/drm/imagination/pvr_drv.c      |   4 +
 drivers/gpu/drm/imagination/pvr_fw_trace.c | 394 +++++++++++++++++++++
 drivers/gpu/drm/imagination/pvr_params.c   | 147 ++++++++
 drivers/gpu/drm/imagination/pvr_params.h   |  72 ++++
 9 files changed, 722 insertions(+)
 create mode 100644 drivers/gpu/drm/imagination/pvr_debugfs.c
 create mode 100644 drivers/gpu/drm/imagination/pvr_debugfs.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_params.c
 create mode 100644 drivers/gpu/drm/imagination/pvr_params.h

diff --git a/drivers/gpu/drm/imagination/Makefile b/drivers/gpu/drm/imagination/Makefile
index 313af5312d7b..1db003cf39ee 100644
--- a/drivers/gpu/drm/imagination/Makefile
+++ b/drivers/gpu/drm/imagination/Makefile
@@ -20,6 +20,7 @@ powervr-y := \
 	pvr_hwrt.o \
 	pvr_job.o \
 	pvr_mmu.o \
+	pvr_params.o \
 	pvr_power.o \
 	pvr_queue.o \
 	pvr_stream.o \
@@ -28,4 +29,7 @@ powervr-y := \
 	pvr_vm.o \
 	pvr_vm_mips.o
 
+powervr-$(CONFIG_DEBUG_FS) += \
+	pvr_debugfs.o
+
 obj-$(CONFIG_DRM_POWERVR) += powervr.o
diff --git a/drivers/gpu/drm/imagination/pvr_debugfs.c b/drivers/gpu/drm/imagination/pvr_debugfs.c
new file mode 100644
index 000000000000..fa0d7c89773c
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_debugfs.c
@@ -0,0 +1,53 @@
+// SPDX-License-Identifier: GPL-2.0 OR MIT
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#include "pvr_debugfs.h"
+
+#include "pvr_device.h"
+#include "pvr_fw_trace.h"
+#include "pvr_params.h"
+
+#include <linux/dcache.h>
+#include <linux/debugfs.h>
+#include <linux/err.h>
+#include <linux/kernel.h>
+#include <linux/types.h>
+
+#include <drm/drm_device.h>
+#include <drm/drm_file.h>
+#include <drm/drm_print.h>
+
+static const struct pvr_debugfs_entry pvr_debugfs_entries[] = {
+	{"pvr_params", pvr_params_debugfs_init},
+	{"pvr_fw", pvr_fw_trace_debugfs_init},
+};
+
+void
+pvr_debugfs_init(struct drm_minor *minor)
+{
+	struct drm_device *drm_dev = minor->dev;
+	struct pvr_device *pvr_dev = to_pvr_device(drm_dev);
+	struct dentry *root = minor->debugfs_root;
+	size_t i;
+
+	for (i = 0; i < ARRAY_SIZE(pvr_debugfs_entries); ++i) {
+		const struct pvr_debugfs_entry *entry = &pvr_debugfs_entries[i];
+		struct dentry *dir;
+
+		dir = debugfs_create_dir(entry->name, root);
+		if (IS_ERR(dir)) {
+			drm_warn(drm_dev,
+				 "failed to create debugfs dir '%s' (err=%d)",
+				 entry->name, (int)PTR_ERR(dir));
+			continue;
+		}
+
+		entry->init(pvr_dev, dir);
+	}
+}
+
+/*
+ * Since all entries are created under &drm_minor->debugfs_root, there's no
+ * need for a pvr_debugfs_fini() as DRM will clean up everything under its root
+ * automatically.
+ */
diff --git a/drivers/gpu/drm/imagination/pvr_debugfs.h b/drivers/gpu/drm/imagination/pvr_debugfs.h
new file mode 100644
index 000000000000..7b7ff384053e
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_debugfs.h
@@ -0,0 +1,29 @@
+/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#ifndef PVR_DEBUGFS_H
+#define PVR_DEBUGFS_H
+
+/* Forward declaration from <drm/drm_drv.h>. */
+struct drm_minor;
+
+#if defined(CONFIG_DEBUG_FS)
+/* Forward declaration from "pvr_device.h". */
+struct pvr_device;
+
+/* Forward declaration from <linux/dcache.h>. */
+struct dentry;
+
+struct pvr_debugfs_entry {
+	const char *name;
+	void (*init)(struct pvr_device *pvr_dev, struct dentry *dir);
+};
+
+void pvr_debugfs_init(struct drm_minor *minor);
+#else /* defined(CONFIG_DEBUG_FS) */
+#include <linux/compiler_attributes.h>
+
+static __always_inline void pvr_debugfs_init(struct drm_minor *minor) {}
+#endif /* defined(CONFIG_DEBUG_FS) */
+
+#endif /* PVR_DEBUGFS_H */
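
Each directory under the per-minor debugfs root is described by one
pvr_debugfs_entry, and its init() callback is handed the device plus the
freshly created dentry. As a sketch only (the pvr_foo names are hypothetical
and not part of the patch), registering an additional directory in
pvr_debugfs.c would look like this:

/* Hypothetical extra entry; pvr_foo_debugfs_init() is an assumed helper. */
static void pvr_foo_debugfs_init(struct pvr_device *pvr_dev, struct dentry *dir)
{
	/* Create files under <debugfs>/dri/<minor>/pvr_foo/ here, e.g. with
	 * debugfs_create_file(..., dir, pvr_dev, &some_fops).
	 */
}

static const struct pvr_debugfs_entry pvr_debugfs_entries[] = {
	{"pvr_params", pvr_params_debugfs_init},
	{"pvr_fw", pvr_fw_trace_debugfs_init},
	{"pvr_foo", pvr_foo_debugfs_init},
};
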
diff --git a/drivers/gpu/drm/imagination/pvr_device.c b/drivers/gpu/drm/imagination/pvr_device.c
index 9ae758a88644..a01544f64d8e 100644
--- a/drivers/gpu/drm/imagination/pvr_device.c
+++ b/drivers/gpu/drm/imagination/pvr_device.c
@@ -5,6 +5,7 @@
 #include "pvr_device_info.h"
 
 #include "pvr_fw.h"
+#include "pvr_params.h"
 #include "pvr_power.h"
 #include "pvr_queue.h"
 #include "pvr_rogue_cr_defs.h"
@@ -488,6 +489,14 @@ pvr_device_init(struct pvr_device *pvr_dev)
 	struct device *dev = drm_dev->dev;
 	int err;
 
+	/*
+	 * Set up device parameters. We do this first in case other steps
+	 * depend on them.
+	 */
+	err = pvr_device_params_init(&pvr_dev->params);
+	if (err)
+		return err;
+
 	/* Enable and initialize clocks required for the device to operate. */
 	err = pvr_device_clk_init(pvr_dev);
 	if (err)
diff --git a/drivers/gpu/drm/imagination/pvr_device.h b/drivers/gpu/drm/imagination/pvr_device.h
index fc6124ae4b26..fcaaeef8519d 100644
--- a/drivers/gpu/drm/imagination/pvr_device.h
+++ b/drivers/gpu/drm/imagination/pvr_device.h
@@ -7,6 +7,7 @@
 #include "pvr_ccb.h"
 #include "pvr_device_info.h"
 #include "pvr_fw.h"
+#include "pvr_params.h"
 #include "pvr_rogue_fwif_stream.h"
 #include "pvr_stream.h"
 
@@ -148,6 +149,15 @@ struct pvr_device {
 	/** @fw_dev: Firmware related data. */
 	struct pvr_fw_device fw_dev;
 
+	/**
+	 * @params: Device-specific parameters.
+	 *
+	 *          The values of these parameters are initialized from the
+	 *          defaults specified as module parameters. They may be
+	 *          modified at runtime via debugfs (if enabled).
+	 */
+	struct pvr_device_params params;
+
 	/** @stream_musthave_quirks: Bit array of "must-have" quirks for stream commands. */
 	u32 stream_musthave_quirks[PVR_STREAM_TYPE_MAX][PVR_STREAM_EXTHDR_TYPE_MAX];
 
diff --git a/drivers/gpu/drm/imagination/pvr_drv.c b/drivers/gpu/drm/imagination/pvr_drv.c
index 757263eee34e..004395f2a9e8 100644
--- a/drivers/gpu/drm/imagination/pvr_drv.c
+++ b/drivers/gpu/drm/imagination/pvr_drv.c
@@ -2,6 +2,7 @@
 /* Copyright (c) 2023 Imagination Technologies Ltd. */
 
 #include "pvr_context.h"
+#include "pvr_debugfs.h"
 #include "pvr_device.h"
 #include "pvr_drv.h"
 #include "pvr_free_list.h"
@@ -1391,6 +1392,9 @@ static struct drm_driver pvr_drm_driver = {
 	.ioctls = pvr_drm_driver_ioctls,
 	.num_ioctls = ARRAY_SIZE(pvr_drm_driver_ioctls),
 	.fops = &pvr_drm_driver_fops,
+#if defined(CONFIG_DEBUG_FS)
+	.debugfs_init = pvr_debugfs_init,
+#endif
 
 	.name = PVR_DRIVER_NAME,
 	.desc = PVR_DRIVER_DESC,
diff --git a/drivers/gpu/drm/imagination/pvr_fw_trace.c b/drivers/gpu/drm/imagination/pvr_fw_trace.c
index 11f10946aeb1..3f184d0f6e05 100644
--- a/drivers/gpu/drm/imagination/pvr_fw_trace.c
+++ b/drivers/gpu/drm/imagination/pvr_fw_trace.c
@@ -7,6 +7,7 @@
 #include "pvr_rogue_fwif_sf.h"
 #include "pvr_fw_trace.h"
 
+#include <drm/drm_drv.h>
 #include <drm/drm_file.h>
 
 #include <linux/build_bug.h>
@@ -119,3 +120,396 @@ void pvr_fw_trace_fini(struct pvr_device *pvr_dev)
 	}
 	pvr_fw_object_unmap_and_destroy(fw_trace->tracebuf_ctrl_obj);
 }
+
+/**
+ * update_logtype() - Send KCCB command to trigger FW to update logtype
+ * @pvr_dev: Target PowerVR device
+ * @group_mask: New log group mask.
+ *
+ * Returns:
+ *  * 0 on success,
+ *  * Any error returned by pvr_kccb_send_cmd(), or
+ *  * -%EIO if the device is lost.
+ */
+static int
+update_logtype(struct pvr_device *pvr_dev, u32 group_mask)
+{
+	struct pvr_fw_trace *fw_trace = &pvr_dev->fw_dev.fw_trace;
+	struct rogue_fwif_kccb_cmd cmd;
+	int idx;
+	int err;
+
+	if (group_mask)
+		fw_trace->tracebuf_ctrl->log_type = ROGUE_FWIF_LOG_TYPE_TRACE | group_mask;
+	else
+		fw_trace->tracebuf_ctrl->log_type = ROGUE_FWIF_LOG_TYPE_NONE;
+
+	fw_trace->group_mask = group_mask;
+
+	down_read(&pvr_dev->reset_sem);
+	if (!drm_dev_enter(from_pvr_device(pvr_dev), &idx)) {
+		err = -EIO;
+		goto err_up_read;
+	}
+
+	cmd.cmd_type = ROGUE_FWIF_KCCB_CMD_LOGTYPE_UPDATE;
+	cmd.kccb_flags = 0;
+
+	err = pvr_kccb_send_cmd(pvr_dev, &cmd, NULL);
+
+	drm_dev_exit(idx);
+
+err_up_read:
+	up_read(&pvr_dev->reset_sem);
+
+	return err;
+}
+
+#if defined(CONFIG_DEBUG_FS)
+
+static int fw_trace_group_mask_show(struct seq_file *m, void *data)
+{
+	struct pvr_device *pvr_dev = m->private;
+
+	seq_printf(m, "%08x\n", pvr_dev->fw_dev.fw_trace.group_mask);
+
+	return 0;
+}
+
+static int fw_trace_group_mask_open(struct inode *inode, struct file *file)
+{
+	return single_open(file, fw_trace_group_mask_show, inode->i_private);
+}
+
+static ssize_t fw_trace_group_mask_write(struct file *file, const char __user *ubuf, size_t len,
+					 loff_t *offp)
+{
+	struct seq_file *m = file->private_data;
+	struct pvr_device *pvr_dev = m->private;
+	u32 new_group_mask;
+	int err;
+
+	err = kstrtouint_from_user(ubuf, len, 0, &new_group_mask);
+	if (err)
+		return err;
+
+	err = update_logtype(pvr_dev, new_group_mask);
+	if (err)
+		return err;
+
+	pvr_dev->fw_dev.fw_trace.group_mask = new_group_mask;
+
+	return (ssize_t)len;
+}
+
+static const struct file_operations pvr_fw_trace_group_mask_fops = {
+	.owner = THIS_MODULE,
+	.open = fw_trace_group_mask_open,
+	.read = seq_read,
+	.write = fw_trace_group_mask_write,
+	.llseek = default_llseek,
+	.release = single_release,
+};
+
+struct pvr_fw_trace_seq_data {
+	/** @buffer: Pointer to copy of trace data. */
+	u32 *buffer;
+
+	/** @start_offset: Starting offset in trace data, as reported by FW. */
+	u32 start_offset;
+
+	/** @idx: Current index into trace data. */
+	u32 idx;
+
+	/** @assert_buf: Trace assert buffer, as reported by FW. */
+	struct rogue_fwif_file_info_buf assert_buf;
+};
+
+static u32 find_sfid(u32 id)
+{
+	u32 i;
+
+	for (i = 0; i < ARRAY_SIZE(stid_fmts); i++) {
+		if (stid_fmts[i].id == id)
+			return i;
+	}
+
+	return ROGUE_FW_SF_LAST;
+}
+
+static u32 read_fw_trace(struct pvr_fw_trace_seq_data *trace_seq_data, u32 offset)
+{
+	u32 idx;
+
+	idx = trace_seq_data->idx + offset;
+	if (idx >= ROGUE_FW_TRACE_BUF_DEFAULT_SIZE_IN_DWORDS)
+		return 0;
+
+	idx = (idx + trace_seq_data->start_offset) % ROGUE_FW_TRACE_BUF_DEFAULT_SIZE_IN_DWORDS;
+	return trace_seq_data->buffer[idx];
+}
+
+/**
+ * fw_trace_get_next() - Advance trace index to next entry
+ * @trace_seq_data: Trace sequence data.
+ *
+ * Returns:
+ *  * %true if trace index is now pointing to a valid entry, or
+ *  * %false if trace index is pointing to an invalid entry, or has hit the end
+ *    of the trace.
+ */
+static bool fw_trace_get_next(struct pvr_fw_trace_seq_data *trace_seq_data)
+{
+	u32 id, sf_id;
+
+	while (trace_seq_data->idx < ROGUE_FW_TRACE_BUF_DEFAULT_SIZE_IN_DWORDS) {
+		id = read_fw_trace(trace_seq_data, 0);
+		trace_seq_data->idx++;
+		if (!ROGUE_FW_LOG_VALIDID(id))
+			continue;
+		if (id == ROGUE_FW_SF_MAIN_ASSERT_FAILED) {
+			/* Assertion failure marks the end of the trace. */
+			return false;
+		}
+
+		sf_id = find_sfid(id);
+		if (sf_id == ROGUE_FW_SF_FIRST)
+			continue;
+		if (sf_id == ROGUE_FW_SF_LAST) {
+			/*
+			 * Could not match with an ID in the SF table, trace is
+			 * most likely corrupt from this point.
+			 */
+			return false;
+		}
+
+		/* Skip over the timestamp, and any parameters. */
+		trace_seq_data->idx += 2 + ROGUE_FW_SF_PARAMNUM(id);
+
+		/* Ensure index is now pointing to a valid trace entry. */
+		id = read_fw_trace(trace_seq_data, 0);
+		if (!ROGUE_FW_LOG_VALIDID(id))
+			continue;
+
+		return true;
+	}
+
+	/* Hit end of trace data. */
+	return false;
+}
+
+/**
+ * fw_trace_get_first() - Find first valid entry in trace
+ * @trace_seq_data: Trace sequence data.
+ *
+ * Skips over invalid (usually zero) and ROGUE_FW_SF_FIRST entries.
+ *
+ * If the trace has no valid entries, this function will exit with the trace
+ * index pointing to the end of the trace. fw_trace_seq_show() will return an error
+ * in this state.
+ */
+static void fw_trace_get_first(struct pvr_fw_trace_seq_data *trace_seq_data)
+{
+	trace_seq_data->idx = 0;
+
+	while (trace_seq_data->idx < ROGUE_FW_TRACE_BUF_DEFAULT_SIZE_IN_DWORDS) {
+		u32 id = read_fw_trace(trace_seq_data, 0);
+
+		if (ROGUE_FW_LOG_VALIDID(id)) {
+			u32 sf_id = find_sfid(id);
+
+			if (sf_id != ROGUE_FW_SF_FIRST)
+				break;
+		}
+		trace_seq_data->idx++;
+	}
+}
+
+static void *fw_trace_seq_start(struct seq_file *s, loff_t *pos)
+{
+	struct pvr_fw_trace_seq_data *trace_seq_data = s->private;
+	u32 i;
+
+	/* Reset trace index, then advance to *pos. */
+	fw_trace_get_first(trace_seq_data);
+
+	for (i = 0; i < *pos; i++) {
+		if (!fw_trace_get_next(trace_seq_data))
+			return NULL;
+	}
+
+	return (trace_seq_data->idx < ROGUE_FW_TRACE_BUF_DEFAULT_SIZE_IN_DWORDS) ? pos : NULL;
+}
+
+static void *fw_trace_seq_next(struct seq_file *s, void *v, loff_t *pos)
+{
+	struct pvr_fw_trace_seq_data *trace_seq_data = s->private;
+
+	(*pos)++;
+	if (!fw_trace_get_next(trace_seq_data))
+		return NULL;
+
+	return (trace_seq_data->idx < ROGUE_FW_TRACE_BUF_DEFAULT_SIZE_IN_DWORDS) ? pos : NULL;
+}
+
+static void fw_trace_seq_stop(struct seq_file *s, void *v)
+{
+}
+
+static int fw_trace_seq_show(struct seq_file *s, void *v)
+{
+	struct pvr_fw_trace_seq_data *trace_seq_data = s->private;
+	u64 timestamp;
+	u32 id;
+	u32 sf_id;
+
+	if (trace_seq_data->idx >= ROGUE_FW_TRACE_BUF_DEFAULT_SIZE_IN_DWORDS)
+		return -EINVAL;
+
+	id = read_fw_trace(trace_seq_data, 0);
+	/* Index is not pointing at a valid entry. */
+	if (!ROGUE_FW_LOG_VALIDID(id))
+		return -EINVAL;
+
+	sf_id = find_sfid(id);
+	/* Index is not pointing at a valid entry. */
+	if (sf_id == ROGUE_FW_SF_LAST)
+		return -EINVAL;
+
+	timestamp = read_fw_trace(trace_seq_data, 1) |
+		((u64)read_fw_trace(trace_seq_data, 2) << 32);
+	timestamp = (timestamp & ~ROGUE_FWT_TIMESTAMP_TIME_CLRMSK) >>
+		ROGUE_FWT_TIMESTAMP_TIME_SHIFT;
+
+	seq_printf(s, "[%llu] : ", timestamp);
+	if (id == ROGUE_FW_SF_MAIN_ASSERT_FAILED) {
+		seq_printf(s, "ASSERTION %s failed at %s:%u",
+			   trace_seq_data->assert_buf.info,
+			   trace_seq_data->assert_buf.path,
+			   trace_seq_data->assert_buf.line_num);
+	} else {
+		seq_printf(s, stid_fmts[sf_id].name,
+			   read_fw_trace(trace_seq_data, 3),
+			   read_fw_trace(trace_seq_data, 4),
+			   read_fw_trace(trace_seq_data, 5),
+			   read_fw_trace(trace_seq_data, 6),
+			   read_fw_trace(trace_seq_data, 7),
+			   read_fw_trace(trace_seq_data, 8),
+			   read_fw_trace(trace_seq_data, 9),
+			   read_fw_trace(trace_seq_data, 10),
+			   read_fw_trace(trace_seq_data, 11),
+			   read_fw_trace(trace_seq_data, 12),
+			   read_fw_trace(trace_seq_data, 13),
+			   read_fw_trace(trace_seq_data, 14),
+			   read_fw_trace(trace_seq_data, 15),
+			   read_fw_trace(trace_seq_data, 16),
+			   read_fw_trace(trace_seq_data, 17),
+			   read_fw_trace(trace_seq_data, 18),
+			   read_fw_trace(trace_seq_data, 19),
+			   read_fw_trace(trace_seq_data, 20),
+			   read_fw_trace(trace_seq_data, 21),
+			   read_fw_trace(trace_seq_data, 22));
+	}
+	seq_puts(s, "\n");
+	return 0;
+}
+
+static const struct seq_operations pvr_fw_trace_seq_ops = {
+	.start = fw_trace_seq_start,
+	.next = fw_trace_seq_next,
+	.stop = fw_trace_seq_stop,
+	.show = fw_trace_seq_show
+};
+
+static int fw_trace_open(struct inode *inode, struct file *file)
+{
+	struct pvr_fw_trace_buffer *trace_buffer = inode->i_private;
+	struct rogue_fwif_tracebuf_space *tracebuf_space =
+		trace_buffer->tracebuf_space;
+	struct pvr_fw_trace_seq_data *trace_seq_data;
+	int err;
+
+	trace_seq_data = kzalloc(sizeof(*trace_seq_data), GFP_KERNEL);
+	if (!trace_seq_data)
+		return -ENOMEM;
+
+	trace_seq_data->buffer = kcalloc(ROGUE_FW_TRACE_BUF_DEFAULT_SIZE_IN_DWORDS,
+					 sizeof(*trace_seq_data->buffer), GFP_KERNEL);
+	if (!trace_seq_data->buffer) {
+		err = -ENOMEM;
+		goto err_free_data;
+	}
+
+	/*
+	 * Take a local copy of the trace buffer, as firmware may still be
+	 * writing to it. This will exist as long as this file is open.
+	 */
+	memcpy(trace_seq_data->buffer, trace_buffer->buf,
+	       ROGUE_FW_TRACE_BUF_DEFAULT_SIZE_IN_DWORDS * sizeof(u32));
+	trace_seq_data->start_offset = READ_ONCE(tracebuf_space->trace_pointer);
+	trace_seq_data->assert_buf = tracebuf_space->assert_buf;
+	fw_trace_get_first(trace_seq_data);
+
+	err = seq_open(file, &pvr_fw_trace_seq_ops);
+	if (err)
+		goto err_free_buffer;
+
+	((struct seq_file *)file->private_data)->private = trace_seq_data;
+
+	return 0;
+
+err_free_buffer:
+	kfree(trace_seq_data->buffer);
+
+err_free_data:
+	kfree(trace_seq_data);
+
+	return err;
+}
+
+static int fw_trace_release(struct inode *inode, struct file *file)
+{
+	struct pvr_fw_trace_seq_data *trace_seq_data =
+		((struct seq_file *)file->private_data)->private;
+
+	seq_release(inode, file);
+	kfree(trace_seq_data->buffer);
+	kfree(trace_seq_data);
+
+	return 0;
+}
+
+static const struct file_operations pvr_fw_trace_fops = {
+	.owner = THIS_MODULE,
+	.open = fw_trace_open,
+	.read = seq_read,
+	.llseek = seq_lseek,
+	.release = fw_trace_release,
+};
+
+void
+pvr_fw_trace_mask_update(struct pvr_device *pvr_dev, u32 old_mask, u32 new_mask)
+{
+	if (old_mask != new_mask)
+		update_logtype(pvr_dev, new_mask);
+}
+
+void
+pvr_fw_trace_debugfs_init(struct pvr_device *pvr_dev, struct dentry *dir)
+{
+	struct pvr_fw_trace *fw_trace = &pvr_dev->fw_dev.fw_trace;
+	u32 thread_nr;
+
+	static_assert(ARRAY_SIZE(fw_trace->buffers) <= 10,
+		      "The filename buffer is only large enough for a single-digit thread count");
+
+	for (thread_nr = 0; thread_nr < ARRAY_SIZE(fw_trace->buffers); ++thread_nr) {
+		char filename[8];
+
+		snprintf(filename, ARRAY_SIZE(filename), "trace_%u", thread_nr);
+		debugfs_create_file(filename, 0400, dir,
+				    &fw_trace->buffers[thread_nr],
+				    &pvr_fw_trace_fops);
+	}
+}
+#endif
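
For readers following the parser above, the dword layout it walks can be
summarised as below. This is inferred from the read_fw_trace() offsets used by
fw_trace_get_next() and fw_trace_seq_show(), not taken from a firmware ABI
document; the EXAMPLE_ names are illustrative only.

/* Offsets are relative to the current entry (trace_seq_data->idx). */
enum example_fw_trace_entry_layout {
	EXAMPLE_TRACE_DW_ID	= 0,	/* event id, looked up in stid_fmts[] */
	EXAMPLE_TRACE_DW_TS_LO	= 1,	/* timestamp, low dword */
	EXAMPLE_TRACE_DW_TS_HI	= 2,	/* timestamp, high dword */
	EXAMPLE_TRACE_DW_PARAM0	= 3,	/* first format-string parameter */
};

There are ROGUE_FW_SF_PARAMNUM(id) such parameters, so fw_trace_get_next()
advances by 3 + ROGUE_FW_SF_PARAMNUM(id) dwords per entry, and all offsets wrap
around start_offset modulo ROGUE_FW_TRACE_BUF_DEFAULT_SIZE_IN_DWORDS inside
read_fw_trace().
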
diff --git a/drivers/gpu/drm/imagination/pvr_params.c b/drivers/gpu/drm/imagination/pvr_params.c
new file mode 100644
index 000000000000..6e2a3750e70e
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_params.c
@@ -0,0 +1,147 @@
+// SPDX-License-Identifier: GPL-2.0 OR MIT
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#include "pvr_params.h"
+
+#include <linux/cache.h>
+#include <linux/moduleparam.h>
+
+static struct pvr_device_params pvr_device_param_defaults __read_mostly = {
+#define X(type_, name_, value_, desc_, ...) .name_ = (value_),
+	PVR_DEVICE_PARAMS
+#undef X
+};
+
+#define PVR_DEVICE_PARAM_NAMED(name_, type_, desc_) \
+	module_param_named(name_, pvr_device_param_defaults.name_, type_, \
+			   0400);                                         \
+	MODULE_PARM_DESC(name_, desc_);
+
+/*
+ * This list of defines must contain every type specified in "pvr_params.h" as
+ * ``PVR_PARAM_TYPE_*_C``.
+ */
+#define PVR_PARAM_TYPE_X32_MODPARAM uint
+
+#define X(type_, name_, value_, desc_, ...) \
+	PVR_DEVICE_PARAM_NAMED(name_, PVR_PARAM_TYPE_##type_##_MODPARAM, desc_);
+PVR_DEVICE_PARAMS
+#undef X
+
+int
+pvr_device_params_init(struct pvr_device_params *params)
+{
+	/*
+	 * If heap-allocated parameters are added in the future (e.g.
+	 * modparam's charp type), they must be handled specially here (via
+	 * kstrdup() in the case of charp). Since that's not necessary yet,
+	 * a straight copy will do for now. This change will also require a
+	 * pvr_device_params_fini() function to free any heap-allocated copies.
+	 */
+
+	*params = pvr_device_param_defaults;
+
+	return 0;
+}
+
+#if defined(CONFIG_DEBUG_FS)
+#include "pvr_device.h"
+
+#include <linux/dcache.h>
+#include <linux/debugfs.h>
+#include <linux/export.h>
+#include <linux/fs.h>
+#include <linux/stddef.h>
+
+/*
+ * This list of defines must contain every type specified in "pvr_params.h" as
+ * ``PVR_PARAM_TYPE_*_C``.
+ */
+#define PVR_PARAM_TYPE_X32_FMT "0x%08llx"
+
+#define X_SET(name_, mode_) X_SET_##mode_(name_)
+#define X_SET_DEF(name_, update_, mode_) X_SET_DEF_##mode_(name_, update_)
+
+#define X_SET_RO(name_) NULL
+#define X_SET_RW(name_) __pvr_device_param_##name_##set
+
+#define X_SET_DEF_RO(name_, update_)
+#define X_SET_DEF_RW(name_, update_)                                    \
+	static int                                                      \
+	X_SET_RW(name_)(void *data, u64 val)                            \
+	{                                                               \
+		struct pvr_device *pvr_dev = data;                      \
+		/* This is not just (update_) to suppress -Waddress. */ \
+		if ((void *)(update_) != NULL)                          \
+			(update_)(pvr_dev, pvr_dev->params.name_, val); \
+		pvr_dev->params.name_ = val;                            \
+		return 0;                                               \
+	}
+
+#define X(type_, name_, value_, desc_, mode_, update_)                     \
+	static int                                                         \
+	__pvr_device_param_##name_##_get(void *data, u64 *val)             \
+	{                                                                  \
+		struct pvr_device *pvr_dev = data;                         \
+		*val = pvr_dev->params.name_;                              \
+		return 0;                                                  \
+	}                                                                  \
+	X_SET_DEF(name_, update_, mode_)                                   \
+	static int                                                         \
+	__pvr_device_param_##name_##_open(struct inode *inode,             \
+					  struct file *file)               \
+	{                                                                  \
+		__simple_attr_check_format(PVR_PARAM_TYPE_##type_##_FMT,   \
+					   0ull);                          \
+		return simple_attr_open(inode, file,                       \
+					__pvr_device_param_##name_##_get,  \
+					X_SET(name_, mode_),               \
+					PVR_PARAM_TYPE_##type_##_FMT);     \
+	}
+PVR_DEVICE_PARAMS
+#undef X
+
+#undef X_SET
+#undef X_SET_RO
+#undef X_SET_RW
+#undef X_SET_DEF
+#undef X_SET_DEF_RO
+#undef X_SET_DEF_RW
+
+static struct {
+#define X(type_, name_, value_, desc_, mode_, update_) \
+	const struct file_operations name_;
+	PVR_DEVICE_PARAMS
+#undef X
+} pvr_device_param_debugfs_fops = {
+#define X(type_, name_, value_, desc_, mode_, update_)     \
+	.name_ = {                                         \
+		.owner = THIS_MODULE,                      \
+		.open = __pvr_device_param_##name_##_open, \
+		.release = simple_attr_release,            \
+		.read = simple_attr_read,                  \
+		.write = simple_attr_write,                \
+		.llseek = generic_file_llseek,             \
+	},
+	PVR_DEVICE_PARAMS
+#undef X
+};
+
+void
+pvr_params_debugfs_init(struct pvr_device *pvr_dev, struct dentry *dir)
+{
+#define X_MODE(mode_) X_MODE_##mode_
+#define X_MODE_RO 0400
+#define X_MODE_RW 0600
+
+#define X(type_, name_, value_, desc_, mode_, update_)             \
+	debugfs_create_file(#name_, X_MODE(mode_), dir, pvr_dev,   \
+			    &pvr_device_param_debugfs_fops.name_);
+	PVR_DEVICE_PARAMS
+#undef X
+
+#undef X_MODE
+#undef X_MODE_RO
+#undef X_MODE_RW
+}
+#endif
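
To make the X-macro machinery easier to follow: for the single fw_trace_mask
entry, the generators above produce roughly the following getter and setter
(paraphrased rather than exact preprocessor output; the -Waddress guard is
folded away here). Note the setter name really is spelled without an
underscore before "set", because X_SET_RW() pastes name_ and "set" directly
together.

static int
__pvr_device_param_fw_trace_mask_get(void *data, u64 *val)
{
	struct pvr_device *pvr_dev = data;

	*val = pvr_dev->params.fw_trace_mask;
	return 0;
}

static int
__pvr_device_param_fw_trace_maskset(void *data, u64 val)
{
	struct pvr_device *pvr_dev = data;

	pvr_fw_trace_mask_update(pvr_dev, pvr_dev->params.fw_trace_mask, val);
	pvr_dev->params.fw_trace_mask = val;
	return 0;
}

Both are then handed to simple_attr_open() together with the "0x%08llx" format
string when the debugfs file is opened.
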
diff --git a/drivers/gpu/drm/imagination/pvr_params.h b/drivers/gpu/drm/imagination/pvr_params.h
new file mode 100644
index 000000000000..9988c941f83f
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_params.h
@@ -0,0 +1,72 @@
+/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#ifndef PVR_PARAMS_H
+#define PVR_PARAMS_H
+
+#include "pvr_rogue_fwif.h"
+
+#include <linux/cache.h>
+#include <linux/compiler_attributes.h>
+
+/*
+ * This is the definitive list of types allowed in the definition of
+ * %PVR_DEVICE_PARAMS.
+ */
+#define PVR_PARAM_TYPE_X32_C u32
+
+/*
+ * This macro defines all device-specific parameters; that is, parameters which
+ * are set independently per device.
+ *
+ * The X-macro accepts the following arguments. Arguments marked with [debugfs]
+ * are ignored when debugfs is disabled; values used for these arguments may
+ * safely be gated behind CONFIG_DEBUG_FS.
+ *
+ * @type_: The definitive list of allowed values is PVR_PARAM_TYPE_*_C.
+ * @name_: Name of the parameter. This is used both as the field name in C and
+ *         stringified as the parameter name.
+ * @value_: Initial/default value.
+ * @desc_: String literal used as help text to describe the usage of this
+ *         parameter.
+ * @mode_: [debugfs] One of {RO,RW}. The access mode of the debugfs entry for
+ *         this parameter.
+ * @update_: [debugfs] When debugfs support is enabled, parameters may be
+ *           updated at runtime. When this happens, this function will be
+ *           called to allow changes to propagate. The signature of this
+ *           function is:
+ *
+ *              void (*)(struct pvr_device *pvr_dev, T old_val, T new_val)
+ *
+ *           Where T is the C type associated with @type_.
+ *
+ *           If @mode_ does not allow write access, this function will never be
+ *           called. In this case, or if no update callback is required, you
+ *           should specify NULL for this argument.
+ */
+#define PVR_DEVICE_PARAMS                                                    \
+	X(X32, fw_trace_mask, ROGUE_FWIF_LOG_TYPE_NONE,                      \
+	  "Enable FW trace for the specified groups. Specifying 0 disables " \
+	  "all FW tracing.",                                                 \
+	  RW, pvr_fw_trace_mask_update)
+
+struct pvr_device_params {
+#define X(type_, name_, value_, desc_, ...) \
+	PVR_PARAM_TYPE_##type_##_C name_;
+	PVR_DEVICE_PARAMS
+#undef X
+};
+
+int pvr_device_params_init(struct pvr_device_params *params);
+
+#if defined(CONFIG_DEBUG_FS)
+/* Forward declaration from "pvr_device.h". */
+struct pvr_device;
+
+/* Forward declaration from <linux/dcache.h>. */
+struct dentry;
+
+void pvr_params_debugfs_init(struct pvr_device *pvr_dev, struct dentry *dir);
+#endif /* defined(CONFIG_DEBUG_FS) */
+
+#endif /* PVR_PARAMS_H */
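
As a concrete illustration of the X-macro arguments documented above (a sketch
only; the example_param entry is hypothetical and not part of the patch), a
second, read-only parameter would be declared by extending the list like so:

#define PVR_DEVICE_PARAMS                                                    \
	X(X32, fw_trace_mask, ROGUE_FWIF_LOG_TYPE_NONE,                      \
	  "Enable FW trace for the specified groups. Specifying 0 disables " \
	  "all FW tracing.",                                                 \
	  RW, pvr_fw_trace_mask_update)                                      \
	X(X32, example_param, 0,                                             \
	  "Hypothetical parameter shown purely for illustration.",           \
	  RO, NULL)

Each entry expands into a u32 member of struct pvr_device_params, a 0400
module parameter, and (when CONFIG_DEBUG_FS is set) a debugfs file whose mode
follows the RO/RW argument; with RO the setter is NULL, the file is created
0400, and the update callback is never invoked.
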
-- 
2.41.0


^ permalink raw reply related	[flat|nested] 77+ messages in thread

* [PATCH v5 15/17] drm/imagination: Add firmware trace to debugfs
@ 2023-08-16  8:25   ` Sarah Walker
  0 siblings, 0 replies; 77+ messages in thread
From: Sarah Walker @ 2023-08-16  8:25 UTC (permalink / raw)
  To: dri-devel
  Cc: frank.binns, donald.robson, boris.brezillon, faith.ekstrand,
	airlied, daniel, maarten.lankhorst, mripard, tzimmermann, afd,
	hns, matthew.brost, christian.koenig, luben.tuikov, dakr,
	linux-kernel, Matt Coster

Firmware trace is exposed at /sys/kernel/debug/dri/<dev_nr>/pvr_fw/trace_0.
Trace is enabled via the group mask at
/sys/kernel/debug/dri/<dev_nr>/pvr_params/fw_trace_mask.

Changes since v3:
- Use drm_dev_{enter,exit}

Co-developed-by: Matt Coster <matt.coster@imgtec.com>
Signed-off-by: Matt Coster <matt.coster@imgtec.com>
Signed-off-by: Sarah Walker <sarah.walker@imgtec.com>
---
 drivers/gpu/drm/imagination/Makefile       |   4 +
 drivers/gpu/drm/imagination/pvr_debugfs.c  |  53 +++
 drivers/gpu/drm/imagination/pvr_debugfs.h  |  29 ++
 drivers/gpu/drm/imagination/pvr_device.c   |   9 +
 drivers/gpu/drm/imagination/pvr_device.h   |  10 +
 drivers/gpu/drm/imagination/pvr_drv.c      |   4 +
 drivers/gpu/drm/imagination/pvr_fw_trace.c | 394 +++++++++++++++++++++
 drivers/gpu/drm/imagination/pvr_params.c   | 147 ++++++++
 drivers/gpu/drm/imagination/pvr_params.h   |  72 ++++
 9 files changed, 722 insertions(+)
 create mode 100644 drivers/gpu/drm/imagination/pvr_debugfs.c
 create mode 100644 drivers/gpu/drm/imagination/pvr_debugfs.h
 create mode 100644 drivers/gpu/drm/imagination/pvr_params.c
 create mode 100644 drivers/gpu/drm/imagination/pvr_params.h

diff --git a/drivers/gpu/drm/imagination/Makefile b/drivers/gpu/drm/imagination/Makefile
index 313af5312d7b..1db003cf39ee 100644
--- a/drivers/gpu/drm/imagination/Makefile
+++ b/drivers/gpu/drm/imagination/Makefile
@@ -20,6 +20,7 @@ powervr-y := \
 	pvr_hwrt.o \
 	pvr_job.o \
 	pvr_mmu.o \
+	pvr_params.o \
 	pvr_power.o \
 	pvr_queue.o \
 	pvr_stream.o \
@@ -28,4 +29,7 @@ powervr-y := \
 	pvr_vm.o \
 	pvr_vm_mips.o
 
+powervr-$(CONFIG_DEBUG_FS) += \
+	pvr_debugfs.o
+
 obj-$(CONFIG_DRM_POWERVR) += powervr.o
diff --git a/drivers/gpu/drm/imagination/pvr_debugfs.c b/drivers/gpu/drm/imagination/pvr_debugfs.c
new file mode 100644
index 000000000000..fa0d7c89773c
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_debugfs.c
@@ -0,0 +1,53 @@
+// SPDX-License-Identifier: GPL-2.0 OR MIT
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#include "pvr_debugfs.h"
+
+#include "pvr_device.h"
+#include "pvr_fw_trace.h"
+#include "pvr_params.h"
+
+#include <linux/dcache.h>
+#include <linux/debugfs.h>
+#include <linux/err.h>
+#include <linux/kernel.h>
+#include <linux/types.h>
+
+#include <drm/drm_device.h>
+#include <drm/drm_file.h>
+#include <drm/drm_print.h>
+
+static const struct pvr_debugfs_entry pvr_debugfs_entries[] = {
+	{"pvr_params", pvr_params_debugfs_init},
+	{"pvr_fw", pvr_fw_trace_debugfs_init},
+};
+
+void
+pvr_debugfs_init(struct drm_minor *minor)
+{
+	struct drm_device *drm_dev = minor->dev;
+	struct pvr_device *pvr_dev = to_pvr_device(drm_dev);
+	struct dentry *root = minor->debugfs_root;
+	size_t i;
+
+	for (i = 0; i < ARRAY_SIZE(pvr_debugfs_entries); ++i) {
+		const struct pvr_debugfs_entry *entry = &pvr_debugfs_entries[i];
+		struct dentry *dir;
+
+		dir = debugfs_create_dir(entry->name, root);
+		if (IS_ERR(dir)) {
+			drm_warn(drm_dev,
+				 "failed to create debugfs dir '%s' (err=%d)",
+				 entry->name, (int)PTR_ERR(dir));
+			continue;
+		}
+
+		entry->init(pvr_dev, dir);
+	}
+}
+
+/*
+ * Since all entries are created under &drm_minor->debugfs_root, there's no
+ * need for a pvr_debugfs_fini() as DRM will clean up everything under its root
+ * automatically.
+ */
diff --git a/drivers/gpu/drm/imagination/pvr_debugfs.h b/drivers/gpu/drm/imagination/pvr_debugfs.h
new file mode 100644
index 000000000000..7b7ff384053e
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_debugfs.h
@@ -0,0 +1,29 @@
+/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#ifndef PVR_DEBUGFS_H
+#define PVR_DEBUGFS_H
+
+/* Forward declaration from <drm/drm_drv.h>. */
+struct drm_minor;
+
+#if defined(CONFIG_DEBUG_FS)
+/* Forward declaration from "pvr_device.h". */
+struct pvr_device;
+
+/* Forward declaration from <linux/dcache.h>. */
+struct dentry;
+
+struct pvr_debugfs_entry {
+	const char *name;
+	void (*init)(struct pvr_device *pvr_dev, struct dentry *dir);
+};
+
+void pvr_debugfs_init(struct drm_minor *minor);
+#else /* defined(CONFIG_DEBUG_FS) */
+#include <linux/compiler_attributes.h>
+
+static __always_inline void pvr_debugfs_init(struct drm_minor *minor) {}
+#endif /* defined(CONFIG_DEBUG_FS) */
+
+#endif /* PVR_DEBUGFS_H */
diff --git a/drivers/gpu/drm/imagination/pvr_device.c b/drivers/gpu/drm/imagination/pvr_device.c
index 9ae758a88644..a01544f64d8e 100644
--- a/drivers/gpu/drm/imagination/pvr_device.c
+++ b/drivers/gpu/drm/imagination/pvr_device.c
@@ -5,6 +5,7 @@
 #include "pvr_device_info.h"
 
 #include "pvr_fw.h"
+#include "pvr_params.h"
 #include "pvr_power.h"
 #include "pvr_queue.h"
 #include "pvr_rogue_cr_defs.h"
@@ -488,6 +489,14 @@ pvr_device_init(struct pvr_device *pvr_dev)
 	struct device *dev = drm_dev->dev;
 	int err;
 
+	/*
+	 * Set up device parameters. We do this first in case other steps
+	 * depend on them.
+	 */
+	err = pvr_device_params_init(&pvr_dev->params);
+	if (err)
+		return err;
+
 	/* Enable and initialize clocks required for the device to operate. */
 	err = pvr_device_clk_init(pvr_dev);
 	if (err)
diff --git a/drivers/gpu/drm/imagination/pvr_device.h b/drivers/gpu/drm/imagination/pvr_device.h
index fc6124ae4b26..fcaaeef8519d 100644
--- a/drivers/gpu/drm/imagination/pvr_device.h
+++ b/drivers/gpu/drm/imagination/pvr_device.h
@@ -7,6 +7,7 @@
 #include "pvr_ccb.h"
 #include "pvr_device_info.h"
 #include "pvr_fw.h"
+#include "pvr_params.h"
 #include "pvr_rogue_fwif_stream.h"
 #include "pvr_stream.h"
 
@@ -148,6 +149,15 @@ struct pvr_device {
 	/** @fw_dev: Firmware related data. */
 	struct pvr_fw_device fw_dev;
 
+	/**
+	 * @params: Device-specific parameters.
+	 *
+	 *          The values of these parameters are initialized from the
+	 *          defaults specified as module parameters. They may be
+	 *          modified at runtime via debugfs (if enabled).
+	 */
+	struct pvr_device_params params;
+
 	/** @stream_musthave_quirks: Bit array of "must-have" quirks for stream commands. */
 	u32 stream_musthave_quirks[PVR_STREAM_TYPE_MAX][PVR_STREAM_EXTHDR_TYPE_MAX];
 
diff --git a/drivers/gpu/drm/imagination/pvr_drv.c b/drivers/gpu/drm/imagination/pvr_drv.c
index 757263eee34e..004395f2a9e8 100644
--- a/drivers/gpu/drm/imagination/pvr_drv.c
+++ b/drivers/gpu/drm/imagination/pvr_drv.c
@@ -2,6 +2,7 @@
 /* Copyright (c) 2023 Imagination Technologies Ltd. */
 
 #include "pvr_context.h"
+#include "pvr_debugfs.h"
 #include "pvr_device.h"
 #include "pvr_drv.h"
 #include "pvr_free_list.h"
@@ -1391,6 +1392,9 @@ static struct drm_driver pvr_drm_driver = {
 	.ioctls = pvr_drm_driver_ioctls,
 	.num_ioctls = ARRAY_SIZE(pvr_drm_driver_ioctls),
 	.fops = &pvr_drm_driver_fops,
+#if defined(CONFIG_DEBUG_FS)
+	.debugfs_init = pvr_debugfs_init,
+#endif
 
 	.name = PVR_DRIVER_NAME,
 	.desc = PVR_DRIVER_DESC,
diff --git a/drivers/gpu/drm/imagination/pvr_fw_trace.c b/drivers/gpu/drm/imagination/pvr_fw_trace.c
index 11f10946aeb1..3f184d0f6e05 100644
--- a/drivers/gpu/drm/imagination/pvr_fw_trace.c
+++ b/drivers/gpu/drm/imagination/pvr_fw_trace.c
@@ -7,6 +7,7 @@
 #include "pvr_rogue_fwif_sf.h"
 #include "pvr_fw_trace.h"
 
+#include <drm/drm_drv.h>
 #include <drm/drm_file.h>
 
 #include <linux/build_bug.h>
@@ -119,3 +120,396 @@ void pvr_fw_trace_fini(struct pvr_device *pvr_dev)
 	}
 	pvr_fw_object_unmap_and_destroy(fw_trace->tracebuf_ctrl_obj);
 }
+
+/**
+ * update_logtype() - Send KCCB command to trigger FW to update logtype
+ * @pvr_dev: Target PowerVR device
+ * @group_mask: New log group mask.
+ *
+ * Returns:
+ *  * 0 on success,
+ *  * Any error returned by pvr_kccb_send_cmd(), or
+ *  * -%EIO if the device is lost.
+ */
+static int
+update_logtype(struct pvr_device *pvr_dev, u32 group_mask)
+{
+	struct pvr_fw_trace *fw_trace = &pvr_dev->fw_dev.fw_trace;
+	struct rogue_fwif_kccb_cmd cmd;
+	int idx;
+	int err;
+
+	if (group_mask)
+		fw_trace->tracebuf_ctrl->log_type = ROGUE_FWIF_LOG_TYPE_TRACE | group_mask;
+	else
+		fw_trace->tracebuf_ctrl->log_type = ROGUE_FWIF_LOG_TYPE_NONE;
+
+	fw_trace->group_mask = group_mask;
+
+	down_read(&pvr_dev->reset_sem);
+	if (!drm_dev_enter(from_pvr_device(pvr_dev), &idx)) {
+		err = -EIO;
+		goto err_up_read;
+	}
+
+	cmd.cmd_type = ROGUE_FWIF_KCCB_CMD_LOGTYPE_UPDATE;
+	cmd.kccb_flags = 0;
+
+	err = pvr_kccb_send_cmd(pvr_dev, &cmd, NULL);
+
+	drm_dev_exit(idx);
+
+err_up_read:
+	up_read(&pvr_dev->reset_sem);
+
+	return err;
+}
+
+#if defined(CONFIG_DEBUG_FS)
+
+static int fw_trace_group_mask_show(struct seq_file *m, void *data)
+{
+	struct pvr_device *pvr_dev = m->private;
+
+	seq_printf(m, "%08x\n", pvr_dev->fw_dev.fw_trace.group_mask);
+
+	return 0;
+}
+
+static int fw_trace_group_mask_open(struct inode *inode, struct file *file)
+{
+	return single_open(file, fw_trace_group_mask_show, inode->i_private);
+}
+
+static ssize_t fw_trace_group_mask_write(struct file *file, const char __user *ubuf, size_t len,
+					 loff_t *offp)
+{
+	struct seq_file *m = file->private_data;
+	struct pvr_device *pvr_dev = m->private;
+	u32 new_group_mask;
+	int err;
+
+	err = kstrtouint_from_user(ubuf, len, 0, &new_group_mask);
+	if (err)
+		return err;
+
+	err = update_logtype(pvr_dev, new_group_mask);
+	if (err)
+		return err;
+
+	pvr_dev->fw_dev.fw_trace.group_mask = new_group_mask;
+
+	return (ssize_t)len;
+}
+
+static const struct file_operations pvr_fw_trace_group_mask_fops = {
+	.owner = THIS_MODULE,
+	.open = fw_trace_group_mask_open,
+	.read = seq_read,
+	.write = fw_trace_group_mask_write,
+	.llseek = default_llseek,
+	.release = single_release,
+};
+
+struct pvr_fw_trace_seq_data {
+	/** @buffer: Pointer to copy of trace data. */
+	u32 *buffer;
+
+	/** @start_offset: Starting offset in trace data, as reported by FW. */
+	u32 start_offset;
+
+	/** @idx: Current index into trace data. */
+	u32 idx;
+
+	/** @assert_buf: Trace assert buffer, as reported by FW. */
+	struct rogue_fwif_file_info_buf assert_buf;
+};
+
+static u32 find_sfid(u32 id)
+{
+	u32 i;
+
+	for (i = 0; i < ARRAY_SIZE(stid_fmts); i++) {
+		if (stid_fmts[i].id == id)
+			return i;
+	}
+
+	return ROGUE_FW_SF_LAST;
+}
+
+static u32 read_fw_trace(struct pvr_fw_trace_seq_data *trace_seq_data, u32 offset)
+{
+	u32 idx;
+
+	idx = trace_seq_data->idx + offset;
+	if (idx >= ROGUE_FW_TRACE_BUF_DEFAULT_SIZE_IN_DWORDS)
+		return 0;
+
+	idx = (idx + trace_seq_data->start_offset) % ROGUE_FW_TRACE_BUF_DEFAULT_SIZE_IN_DWORDS;
+	return trace_seq_data->buffer[idx];
+}
+
+/**
+ * fw_trace_get_next() - Advance trace index to next entry
+ * @trace_seq_data: Trace sequence data.
+ *
+ * Returns:
+ *  * %true if trace index is now pointing to a valid entry, or
+ *  * %false if trace index is pointing to an invalid entry, or has hit the end
+ *    of the trace.
+ */
+static bool fw_trace_get_next(struct pvr_fw_trace_seq_data *trace_seq_data)
+{
+	u32 id, sf_id;
+
+	while (trace_seq_data->idx < ROGUE_FW_TRACE_BUF_DEFAULT_SIZE_IN_DWORDS) {
+		id = read_fw_trace(trace_seq_data, 0);
+		trace_seq_data->idx++;
+		if (!ROGUE_FW_LOG_VALIDID(id))
+			continue;
+		if (id == ROGUE_FW_SF_MAIN_ASSERT_FAILED) {
+			/* Assertion failure marks the end of the trace. */
+			return false;
+		}
+
+		sf_id = find_sfid(id);
+		if (sf_id == ROGUE_FW_SF_FIRST)
+			continue;
+		if (sf_id == ROGUE_FW_SF_LAST) {
+			/*
+			 * Could not match with an ID in the SF table, trace is
+			 * most likely corrupt from this point.
+			 */
+			return false;
+		}
+
+		/* Skip over the timestamp, and any parameters. */
+		trace_seq_data->idx += 2 + ROGUE_FW_SF_PARAMNUM(id);
+
+		/* Ensure index is now pointing to a valid trace entry. */
+		id = read_fw_trace(trace_seq_data, 0);
+		if (!ROGUE_FW_LOG_VALIDID(id))
+			continue;
+
+		return true;
+	}
+
+	/* Hit end of trace data. */
+	return false;
+}
+
+/**
+ * fw_trace_get_first() - Find first valid entry in trace
+ * @trace_seq_data: Trace sequence data.
+ *
+ * Skips over invalid (usually zero) and ROGUE_FW_SF_FIRST entries.
+ *
+ * If the trace has no valid entries, this function will exit with the trace
+ * index pointing to the end of the trace. fw_trace_seq_show() will return an error
+ * in this state.
+ */
+static void fw_trace_get_first(struct pvr_fw_trace_seq_data *trace_seq_data)
+{
+	trace_seq_data->idx = 0;
+
+	while (trace_seq_data->idx < ROGUE_FW_TRACE_BUF_DEFAULT_SIZE_IN_DWORDS) {
+		u32 id = read_fw_trace(trace_seq_data, 0);
+
+		if (ROGUE_FW_LOG_VALIDID(id)) {
+			u32 sf_id = find_sfid(id);
+
+			if (sf_id != ROGUE_FW_SF_FIRST)
+				break;
+		}
+		trace_seq_data->idx++;
+	}
+}
+
+static void *fw_trace_seq_start(struct seq_file *s, loff_t *pos)
+{
+	struct pvr_fw_trace_seq_data *trace_seq_data = s->private;
+	u32 i;
+
+	/* Reset trace index, then advance to *pos. */
+	fw_trace_get_first(trace_seq_data);
+
+	for (i = 0; i < *pos; i++) {
+		if (!fw_trace_get_next(trace_seq_data))
+			return NULL;
+	}
+
+	return (trace_seq_data->idx < ROGUE_FW_TRACE_BUF_DEFAULT_SIZE_IN_DWORDS) ? pos : NULL;
+}
+
+static void *fw_trace_seq_next(struct seq_file *s, void *v, loff_t *pos)
+{
+	struct pvr_fw_trace_seq_data *trace_seq_data = s->private;
+
+	(*pos)++;
+	if (!fw_trace_get_next(trace_seq_data))
+		return NULL;
+
+	return (trace_seq_data->idx < ROGUE_FW_TRACE_BUF_DEFAULT_SIZE_IN_DWORDS) ? pos : NULL;
+}
+
+static void fw_trace_seq_stop(struct seq_file *s, void *v)
+{
+}
+
+static int fw_trace_seq_show(struct seq_file *s, void *v)
+{
+	struct pvr_fw_trace_seq_data *trace_seq_data = s->private;
+	u64 timestamp;
+	u32 id;
+	u32 sf_id;
+
+	if (trace_seq_data->idx >= ROGUE_FW_TRACE_BUF_DEFAULT_SIZE_IN_DWORDS)
+		return -EINVAL;
+
+	id = read_fw_trace(trace_seq_data, 0);
+	/* Index is not pointing at a valid entry. */
+	if (!ROGUE_FW_LOG_VALIDID(id))
+		return -EINVAL;
+
+	sf_id = find_sfid(id);
+	/* Index is not pointing at a valid entry. */
+	if (sf_id == ROGUE_FW_SF_LAST)
+		return -EINVAL;
+
+	timestamp = read_fw_trace(trace_seq_data, 1) |
+		((u64)read_fw_trace(trace_seq_data, 2) << 32);
+	timestamp = (timestamp & ~ROGUE_FWT_TIMESTAMP_TIME_CLRMSK) >>
+		ROGUE_FWT_TIMESTAMP_TIME_SHIFT;
+
+	seq_printf(s, "[%llu] : ", timestamp);
+	if (id == ROGUE_FW_SF_MAIN_ASSERT_FAILED) {
+		seq_printf(s, "ASSERTION %s failed at %s:%u",
+			   trace_seq_data->assert_buf.info,
+			   trace_seq_data->assert_buf.path,
+			   trace_seq_data->assert_buf.line_num);
+	} else {
+		seq_printf(s, stid_fmts[sf_id].name,
+			   read_fw_trace(trace_seq_data, 3),
+			   read_fw_trace(trace_seq_data, 4),
+			   read_fw_trace(trace_seq_data, 5),
+			   read_fw_trace(trace_seq_data, 6),
+			   read_fw_trace(trace_seq_data, 7),
+			   read_fw_trace(trace_seq_data, 8),
+			   read_fw_trace(trace_seq_data, 9),
+			   read_fw_trace(trace_seq_data, 10),
+			   read_fw_trace(trace_seq_data, 11),
+			   read_fw_trace(trace_seq_data, 12),
+			   read_fw_trace(trace_seq_data, 13),
+			   read_fw_trace(trace_seq_data, 14),
+			   read_fw_trace(trace_seq_data, 15),
+			   read_fw_trace(trace_seq_data, 16),
+			   read_fw_trace(trace_seq_data, 17),
+			   read_fw_trace(trace_seq_data, 18),
+			   read_fw_trace(trace_seq_data, 19),
+			   read_fw_trace(trace_seq_data, 20),
+			   read_fw_trace(trace_seq_data, 21),
+			   read_fw_trace(trace_seq_data, 22));
+	}
+	seq_puts(s, "\n");
+	return 0;
+}
+
+static const struct seq_operations pvr_fw_trace_seq_ops = {
+	.start = fw_trace_seq_start,
+	.next = fw_trace_seq_next,
+	.stop = fw_trace_seq_stop,
+	.show = fw_trace_seq_show
+};
+
+static int fw_trace_open(struct inode *inode, struct file *file)
+{
+	struct pvr_fw_trace_buffer *trace_buffer = inode->i_private;
+	struct rogue_fwif_tracebuf_space *tracebuf_space =
+		trace_buffer->tracebuf_space;
+	struct pvr_fw_trace_seq_data *trace_seq_data;
+	int err;
+
+	trace_seq_data = kzalloc(sizeof(*trace_seq_data), GFP_KERNEL);
+	if (!trace_seq_data)
+		return -ENOMEM;
+
+	trace_seq_data->buffer = kcalloc(ROGUE_FW_TRACE_BUF_DEFAULT_SIZE_IN_DWORDS,
+					 sizeof(*trace_seq_data->buffer), GFP_KERNEL);
+	if (!trace_seq_data->buffer) {
+		err = -ENOMEM;
+		goto err_free_data;
+	}
+
+	/*
+	 * Take a local copy of the trace buffer, as firmware may still be
+	 * writing to it. This will exist as long as this file is open.
+	 */
+	memcpy(trace_seq_data->buffer, trace_buffer->buf,
+	       ROGUE_FW_TRACE_BUF_DEFAULT_SIZE_IN_DWORDS * sizeof(u32));
+	trace_seq_data->start_offset = READ_ONCE(tracebuf_space->trace_pointer);
+	trace_seq_data->assert_buf = tracebuf_space->assert_buf;
+	fw_trace_get_first(trace_seq_data);
+
+	err = seq_open(file, &pvr_fw_trace_seq_ops);
+	if (err)
+		goto err_free_buffer;
+
+	((struct seq_file *)file->private_data)->private = trace_seq_data;
+
+	return 0;
+
+err_free_buffer:
+	kfree(trace_seq_data->buffer);
+
+err_free_data:
+	kfree(trace_seq_data);
+
+	return err;
+}
+
+static int fw_trace_release(struct inode *inode, struct file *file)
+{
+	struct pvr_fw_trace_seq_data *trace_seq_data =
+		((struct seq_file *)file->private_data)->private;
+
+	seq_release(inode, file);
+	kfree(trace_seq_data->buffer);
+	kfree(trace_seq_data);
+
+	return 0;
+}
+
+static const struct file_operations pvr_fw_trace_fops = {
+	.owner = THIS_MODULE,
+	.open = fw_trace_open,
+	.read = seq_read,
+	.llseek = seq_lseek,
+	.release = fw_trace_release,
+};
+
+void
+pvr_fw_trace_mask_update(struct pvr_device *pvr_dev, u32 old_mask, u32 new_mask)
+{
+	if (old_mask != new_mask)
+		update_logtype(pvr_dev, new_mask);
+}
+
+void
+pvr_fw_trace_debugfs_init(struct pvr_device *pvr_dev, struct dentry *dir)
+{
+	struct pvr_fw_trace *fw_trace = &pvr_dev->fw_dev.fw_trace;
+	u32 thread_nr;
+
+	static_assert(ARRAY_SIZE(fw_trace->buffers) <= 10,
+		      "The filename buffer is only large enough for a single-digit thread count");
+
+	for (thread_nr = 0; thread_nr < ARRAY_SIZE(fw_trace->buffers); ++thread_nr) {
+		char filename[8];
+
+		snprintf(filename, ARRAY_SIZE(filename), "trace_%u", thread_nr);
+		debugfs_create_file(filename, 0400, dir,
+				    &fw_trace->buffers[thread_nr],
+				    &pvr_fw_trace_fops);
+	}
+}
+#endif
diff --git a/drivers/gpu/drm/imagination/pvr_params.c b/drivers/gpu/drm/imagination/pvr_params.c
new file mode 100644
index 000000000000..6e2a3750e70e
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_params.c
@@ -0,0 +1,147 @@
+// SPDX-License-Identifier: GPL-2.0 OR MIT
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#include "pvr_params.h"
+
+#include <linux/cache.h>
+#include <linux/moduleparam.h>
+
+static struct pvr_device_params pvr_device_param_defaults __read_mostly = {
+#define X(type_, name_, value_, desc_, ...) .name_ = (value_),
+	PVR_DEVICE_PARAMS
+#undef X
+};
+
+#define PVR_DEVICE_PARAM_NAMED(name_, type_, desc_) \
+	module_param_named(name_, pvr_device_param_defaults.name_, type_, \
+			   0400);                                         \
+	MODULE_PARM_DESC(name_, desc_);
+
+/*
+ * This list of defines must contain every type specified in "pvr_params.h" as
+ * ``PVR_PARAM_TYPE_*_C``.
+ */
+#define PVR_PARAM_TYPE_X32_MODPARAM uint
+
+#define X(type_, name_, value_, desc_, ...) \
+	PVR_DEVICE_PARAM_NAMED(name_, PVR_PARAM_TYPE_##type_##_MODPARAM, desc_);
+PVR_DEVICE_PARAMS
+#undef X
+
+int
+pvr_device_params_init(struct pvr_device_params *params)
+{
+	/*
+	 * If heap-allocated parameters are added in the future (e.g.
+	 * modparam's charp type), they must be handled specially here (via
+	 * kstrdup() in the case of charp). Since that's not necessary yet,
+	 * a straight copy will do for now. This change will also require a
+	 * pvr_device_params_fini() function to free any heap-allocated copies.
+	 */
+
+	*params = pvr_device_param_defaults;
+
+	return 0;
+}
+
+#if defined(CONFIG_DEBUG_FS)
+#include "pvr_device.h"
+
+#include <linux/dcache.h>
+#include <linux/debugfs.h>
+#include <linux/export.h>
+#include <linux/fs.h>
+#include <linux/stddef.h>
+
+/*
+ * This list of defines must contain every type specified in "pvr_params.h" as
+ * ``PVR_PARAM_TYPE_*_C``.
+ */
+#define PVR_PARAM_TYPE_X32_FMT "0x%08llx"
+
+#define X_SET(name_, mode_) X_SET_##mode_(name_)
+#define X_SET_DEF(name_, update_, mode_) X_SET_DEF_##mode_(name_, update_)
+
+#define X_SET_RO(name_) NULL
+#define X_SET_RW(name_) __pvr_device_param_##name_##_set
+
+#define X_SET_DEF_RO(name_, update_)
+#define X_SET_DEF_RW(name_, update_)                                    \
+	static int                                                      \
+	X_SET_RW(name_)(void *data, u64 val)                            \
+	{                                                               \
+		struct pvr_device *pvr_dev = data;                      \
+		/* This is not just (update_) to suppress -Waddress. */ \
+		if ((void *)(update_) != NULL)                          \
+			(update_)(pvr_dev, pvr_dev->params.name_, val); \
+		pvr_dev->params.name_ = val;                            \
+		return 0;                                               \
+	}
+
+#define X(type_, name_, value_, desc_, mode_, update_)                     \
+	static int                                                         \
+	__pvr_device_param_##name_##_get(void *data, u64 *val)             \
+	{                                                                  \
+		struct pvr_device *pvr_dev = data;                         \
+		*val = pvr_dev->params.name_;                              \
+		return 0;                                                  \
+	}                                                                  \
+	X_SET_DEF(name_, update_, mode_)                                   \
+	static int                                                         \
+	__pvr_device_param_##name_##_open(struct inode *inode,             \
+					  struct file *file)               \
+	{                                                                  \
+		__simple_attr_check_format(PVR_PARAM_TYPE_##type_##_FMT,   \
+					   0ull);                          \
+		return simple_attr_open(inode, file,                       \
+					__pvr_device_param_##name_##_get,  \
+					X_SET(name_, mode_),               \
+					PVR_PARAM_TYPE_##type_##_FMT);     \
+	}
+PVR_DEVICE_PARAMS
+#undef X
+
+#undef X_SET
+#undef X_SET_RO
+#undef X_SET_RW
+#undef X_SET_DEF
+#undef X_SET_DEF_RO
+#undef X_SET_DEF_RW
+
+static struct {
+#define X(type_, name_, value_, desc_, mode_, update_) \
+	const struct file_operations name_;
+	PVR_DEVICE_PARAMS
+#undef X
+} pvr_device_param_debugfs_fops = {
+#define X(type_, name_, value_, desc_, mode_, update_)     \
+	.name_ = {                                         \
+		.owner = THIS_MODULE,                      \
+		.open = __pvr_device_param_##name_##_open, \
+		.release = simple_attr_release,            \
+		.read = simple_attr_read,                  \
+		.write = simple_attr_write,                \
+		.llseek = generic_file_llseek,             \
+	},
+	PVR_DEVICE_PARAMS
+#undef X
+};
+
+void
+pvr_params_debugfs_init(struct pvr_device *pvr_dev, struct dentry *dir)
+{
+#define X_MODE(mode_) X_MODE_##mode_
+#define X_MODE_RO 0400
+#define X_MODE_RW 0600
+
+#define X(type_, name_, value_, desc_, mode_, update_)             \
+	debugfs_create_file(#name_, X_MODE(mode_), dir, pvr_dev,   \
+			    &pvr_device_param_debugfs_fops.name_);
+	PVR_DEVICE_PARAMS
+#undef X
+
+#undef X_MODE
+#undef X_MODE_RO
+#undef X_MODE_RW
+}
+#endif
diff --git a/drivers/gpu/drm/imagination/pvr_params.h b/drivers/gpu/drm/imagination/pvr_params.h
new file mode 100644
index 000000000000..9988c941f83f
--- /dev/null
+++ b/drivers/gpu/drm/imagination/pvr_params.h
@@ -0,0 +1,72 @@
+/* SPDX-License-Identifier: GPL-2.0 OR MIT */
+/* Copyright (c) 2023 Imagination Technologies Ltd. */
+
+#ifndef PVR_PARAMS_H
+#define PVR_PARAMS_H
+
+#include "pvr_rogue_fwif.h"
+
+#include <linux/cache.h>
+#include <linux/compiler_attributes.h>
+
+/*
+ * This is the definitive list of types allowed in the definition of
+ * %PVR_DEVICE_PARAMS.
+ */
+#define PVR_PARAM_TYPE_X32_C u32
+
+/*
+ * This macro defines all device-specific parameters; that is, parameters
+ * which are set independently per device.
+ *
+ * The X-macro accepts the following arguments. Arguments marked with [debugfs]
+ * are ignored when debugfs is disabled; values used for these arguments may
+ * safely be gated behind CONFIG_DEBUG_FS.
+ *
+ * @type_: The definitive list of allowed values is PVR_PARAM_TYPE_*_C.
+ * @name_: Name of the parameter. This is used both as the field name in C and
+ *         stringified as the parameter name.
+ * @value_: Initial/default value.
+ * @desc_: String literal used as help text to describe the usage of this
+ *         parameter.
+ * @mode_: [debugfs] One of {RO,RW}. The access mode of the debugfs entry for
+ *         this parameter.
+ * @update_: [debugfs] When debugfs support is enabled, parameters may be
+ *           updated at runtime. When this happens, this function will be
+ *           called to allow changes to propagate. The signature of this
+ *           function is:
+ *
+ *              void (*)(struct pvr_device *pvr_dev, T old_val, T new_val)
+ *
+ *           Where T is the C type associated with @type_.
+ *
+ *           If @mode_ does not allow write access, this function will never be
+ *           called. In this case, or if no update callback is required, you
+ *           should specify NULL for this argument.
+ */
+#define PVR_DEVICE_PARAMS                                                    \
+	X(X32, fw_trace_mask, ROGUE_FWIF_LOG_TYPE_NONE,                      \
+	  "Enable FW trace for the specified groups. Specifying 0 disables " \
+	  "all FW tracing.",                                                 \
+	  RW, pvr_fw_trace_mask_update)
+
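+/*
+ * For illustration only: with the single entry above, the X-macro machinery
+ * expands to roughly the following (the struct field below, plus the module
+ * parameter declared in "pvr_params.c"):
+ *
+ *   u32 fw_trace_mask;
+ *
+ *   module_param_named(fw_trace_mask,
+ *                      pvr_device_param_defaults.fw_trace_mask, uint, 0400);
+ *   MODULE_PARM_DESC(fw_trace_mask, "Enable FW trace for the specified ...");
+ */
+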
+struct pvr_device_params {
+#define X(type_, name_, value_, desc_, ...) \
+	PVR_PARAM_TYPE_##type_##_C name_;
+	PVR_DEVICE_PARAMS
+#undef X
+};
+
+int pvr_device_params_init(struct pvr_device_params *params);
+
+#if defined(CONFIG_DEBUG_FS)
+/* Forward declaration from "pvr_device.h". */
+struct pvr_device;
+
+/* Forward declaration from <linux/dcache.h>. */
+struct dentry;
+
+void pvr_params_debugfs_init(struct pvr_device *pvr_dev, struct dentry *dir);
+#endif /* defined(CONFIG_DEBUG_FS) */
+
+#endif /* PVR_PARAMS_H */
-- 
2.41.0


^ permalink raw reply related	[flat|nested] 77+ messages in thread

* [PATCH v5 16/17] drm/imagination: Add driver documentation
  2023-08-16  8:25 ` Sarah Walker
@ 2023-08-16  8:25   ` Sarah Walker
  -1 siblings, 0 replies; 77+ messages in thread
From: Sarah Walker @ 2023-08-16  8:25 UTC (permalink / raw)
  To: dri-devel
  Cc: matthew.brost, luben.tuikov, tzimmermann, linux-kernel, mripard,
	afd, Matt Coster, boris.brezillon, dakr, donald.robson, hns,
	christian.koenig, faith.ekstrand

Add documentation for the UAPI and for the virtual memory design.

Co-developed-by: Matt Coster <matt.coster@imgtec.com>
Signed-off-by: Matt Coster <matt.coster@imgtec.com>
Co-developed-by: Donald Robson <donald.robson@imgtec.com>
Signed-off-by: Donald Robson <donald.robson@imgtec.com>
Signed-off-by: Sarah Walker <sarah.walker@imgtec.com>
Reviewed-by: Maxime Ripard <mripard@kernel.org>
---
 Documentation/gpu/drivers.rst                 |   2 +
 Documentation/gpu/imagination/index.rst       |  14 +
 Documentation/gpu/imagination/uapi.rst        | 174 +++++++
 .../gpu/imagination/virtual_memory.rst        | 462 ++++++++++++++++++
 MAINTAINERS                                   |   1 +
 5 files changed, 653 insertions(+)
 create mode 100644 Documentation/gpu/imagination/index.rst
 create mode 100644 Documentation/gpu/imagination/uapi.rst
 create mode 100644 Documentation/gpu/imagination/virtual_memory.rst

diff --git a/Documentation/gpu/drivers.rst b/Documentation/gpu/drivers.rst
index 3a52f48215a3..5487deb218a3 100644
--- a/Documentation/gpu/drivers.rst
+++ b/Documentation/gpu/drivers.rst
@@ -3,9 +3,11 @@ GPU Driver Documentation
 ========================
 
 .. toctree::
+   :maxdepth: 3
 
    amdgpu/index
    i915
+   imagination/index
    mcde
    meson
    pl111
diff --git a/Documentation/gpu/imagination/index.rst b/Documentation/gpu/imagination/index.rst
new file mode 100644
index 000000000000..57f28e460a03
--- /dev/null
+++ b/Documentation/gpu/imagination/index.rst
@@ -0,0 +1,14 @@
+=======================================
+drm/imagination PowerVR Graphics Driver
+=======================================
+
+.. kernel-doc:: drivers/gpu/drm/imagination/pvr_drv.c
+   :doc: PowerVR Graphics Driver
+
+Contents
+========
+.. toctree::
+   :maxdepth: 2
+
+   uapi
+   virtual_memory
diff --git a/Documentation/gpu/imagination/uapi.rst b/Documentation/gpu/imagination/uapi.rst
new file mode 100644
index 000000000000..2227ea7e6222
--- /dev/null
+++ b/Documentation/gpu/imagination/uapi.rst
@@ -0,0 +1,174 @@
+====
+UAPI
+====
+The sources associated with this section can be found in ``pvr_drm.h``.
+
+.. kernel-doc:: include/uapi/drm/pvr_drm.h
+   :doc: PowerVR UAPI
+
+OBJECT ARRAYS
+=============
+.. kernel-doc:: include/uapi/drm/pvr_drm.h
+   :identifiers: drm_pvr_obj_array
+
+.. kernel-doc:: include/uapi/drm/pvr_drm.h
+   :identifiers: DRM_PVR_OBJ_ARRAY
+
+IOCTLS
+======
+.. kernel-doc:: include/uapi/drm/pvr_drm.h
+   :doc: PowerVR IOCTL interface
+
+.. kernel-doc:: include/uapi/drm/pvr_drm.h
+   :identifiers: PVR_IOCTL
+
+DEV_QUERY
+---------
+.. kernel-doc:: include/uapi/drm/pvr_drm.h
+   :doc: PowerVR IOCTL DEV_QUERY interface
+
+.. kernel-doc:: include/uapi/drm/pvr_drm.h
+   :identifiers: drm_pvr_dev_query
+
+.. kernel-doc:: include/uapi/drm/pvr_drm.h
+   :identifiers: drm_pvr_ioctl_dev_query_args
+
+.. kernel-doc:: include/uapi/drm/pvr_drm.h
+   :identifiers: drm_pvr_dev_query_gpu_info
+                 drm_pvr_dev_query_runtime_info
+                 drm_pvr_dev_query_hwrt_info
+                 drm_pvr_dev_query_quirks
+                 drm_pvr_dev_query_enhancements
+
+.. kernel-doc:: include/uapi/drm/pvr_drm.h
+   :identifiers: drm_pvr_heap_id
+                 drm_pvr_heap
+                 drm_pvr_dev_query_heap_info
+
+.. kernel-doc:: include/uapi/drm/pvr_drm.h
+   :doc: Flags for DRM_PVR_DEV_QUERY_HEAP_INFO_GET.
+
+.. kernel-doc:: include/uapi/drm/pvr_drm.h
+   :identifiers: drm_pvr_static_data_area_usage
+                 drm_pvr_static_data_area
+                 drm_pvr_dev_query_static_data_areas
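+
+As a rough usage sketch (the structure, field and constant names below are
+taken from this series' ``pvr_drm.h`` as we read it and are illustrative
+rather than authoritative), a userspace GPU info query might look like:
+
+.. code-block:: c
+
+   /* Assumes <sys/ioctl.h>, <stdint.h> and the pvr_drm.h uapi header. */
+   struct drm_pvr_dev_query_gpu_info gpu_info = {0};
+   struct drm_pvr_ioctl_dev_query_args args = {
+           .type = DRM_PVR_DEV_QUERY_GPU_INFO_GET,
+           .size = sizeof(gpu_info),
+           .pointer = (__u64)(uintptr_t)&gpu_info,
+   };
+
+   if (ioctl(drm_fd, DRM_IOCTL_PVR_DEV_QUERY, &args) != 0)
+           return -errno;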
+
+CREATE_BO
+---------
+.. kernel-doc:: include/uapi/drm/pvr_drm.h
+   :doc: PowerVR IOCTL CREATE_BO interface
+
+.. kernel-doc:: include/uapi/drm/pvr_drm.h
+   :identifiers: drm_pvr_ioctl_create_bo_args
+
+.. kernel-doc:: include/uapi/drm/pvr_drm.h
+   :doc: Flags for CREATE_BO
+
+GET_BO_MMAP_OFFSET
+------------------
+.. kernel-doc:: include/uapi/drm/pvr_drm.h
+   :doc: PowerVR IOCTL GET_BO_MMAP_OFFSET interface
+
+.. kernel-doc:: include/uapi/drm/pvr_drm.h
+   :identifiers: drm_pvr_ioctl_get_bo_mmap_offset_args
+
+CREATE_VM_CONTEXT and DESTROY_VM_CONTEXT
+----------------------------------------
+.. kernel-doc:: include/uapi/drm/pvr_drm.h
+   :doc: PowerVR IOCTL CREATE_VM_CONTEXT and DESTROY_VM_CONTEXT interfaces
+
+.. kernel-doc:: include/uapi/drm/pvr_drm.h
+   :identifiers: drm_pvr_ioctl_create_vm_context_args
+                 drm_pvr_ioctl_destroy_vm_context_args
+
+VM_MAP and VM_UNMAP
+-------------------
+.. kernel-doc:: include/uapi/drm/pvr_drm.h
+   :doc: PowerVR IOCTL VM_MAP and VM_UNMAP interfaces
+
+.. kernel-doc:: include/uapi/drm/pvr_drm.h
+   :identifiers: drm_pvr_ioctl_vm_map_args
+                 drm_pvr_ioctl_vm_unmap_args
+
+CREATE_CONTEXT and DESTROY_CONTEXT
+----------------------------------
+.. kernel-doc:: include/uapi/drm/pvr_drm.h
+   :doc: PowerVR IOCTL CREATE_CONTEXT and DESTROY_CONTEXT interfaces
+
+.. kernel-doc:: include/uapi/drm/pvr_drm.h
+   :identifiers: drm_pvr_ioctl_create_context_args
+
+.. kernel-doc:: include/uapi/drm/pvr_drm.h
+   :identifiers: drm_pvr_ctx_priority
+                 drm_pvr_ctx_type
+                 drm_pvr_static_render_context_state
+                 drm_pvr_static_render_context_state_format
+                 drm_pvr_reset_framework
+                 drm_pvr_reset_framework_format
+
+.. kernel-doc:: include/uapi/drm/pvr_drm.h
+   :identifiers: drm_pvr_ioctl_destroy_context_args
+
+CREATE_FREE_LIST and DESTROY_FREE_LIST
+--------------------------------------
+.. kernel-doc:: include/uapi/drm/pvr_drm.h
+   :doc: PowerVR IOCTL CREATE_FREE_LIST and DESTROY_FREE_LIST interfaces
+
+.. kernel-doc:: include/uapi/drm/pvr_drm.h
+   :identifiers: drm_pvr_ioctl_create_free_list_args
+
+.. kernel-doc:: include/uapi/drm/pvr_drm.h
+   :identifiers: drm_pvr_ioctl_destroy_free_list_args
+
+CREATE_HWRT_DATASET and DESTROY_HWRT_DATASET
+--------------------------------------------
+.. kernel-doc:: include/uapi/drm/pvr_drm.h
+   :doc: PowerVR IOCTL CREATE_HWRT_DATASET and DESTROY_HWRT_DATASET interfaces
+
+.. kernel-doc:: include/uapi/drm/pvr_drm.h
+   :identifiers: drm_pvr_ioctl_create_hwrt_dataset_args
+
+.. kernel-doc:: include/uapi/drm/pvr_drm.h
+   :identifiers: drm_pvr_create_hwrt_geom_data_args
+                 drm_pvr_create_hwrt_rt_data_args
+
+.. kernel-doc:: include/uapi/drm/pvr_drm.h
+   :identifiers: drm_pvr_ioctl_destroy_hwrt_dataset_args
+
+SUBMIT_JOBS
+-----------
+.. kernel-doc:: include/uapi/drm/pvr_drm.h
+   :doc: PowerVR IOCTL SUBMIT_JOBS interface
+
+.. kernel-doc:: include/uapi/drm/pvr_drm.h
+   :doc: Flags for the drm_pvr_sync_op object.
+
+.. kernel-doc:: include/uapi/drm/pvr_drm.h
+   :identifiers: drm_pvr_ioctl_submit_jobs_args
+
+.. kernel-doc:: include/uapi/drm/pvr_drm.h
+   :doc: Flags for SUBMIT_JOB ioctl geometry command.
+
+.. kernel-doc:: include/uapi/drm/pvr_drm.h
+   :doc: Flags for SUBMIT_JOB ioctl fragment command.
+
+.. kernel-doc:: include/uapi/drm/pvr_drm.h
+   :doc: Flags for SUBMIT_JOB ioctl compute command.
+
+.. kernel-doc:: include/uapi/drm/pvr_drm.h
+   :doc: Flags for SUBMIT_JOB ioctl transfer command.
+
+.. kernel-doc:: include/uapi/drm/pvr_drm.h
+   :identifiers: drm_pvr_sync_op
+                 drm_pvr_job_type
+                 drm_pvr_hwrt_data_ref
+                 drm_pvr_job
+
+Internal notes
+==============
+.. kernel-doc:: drivers/gpu/drm/imagination/pvr_device.h
+   :doc: IOCTL validation helpers
+
+.. kernel-doc:: drivers/gpu/drm/imagination/pvr_device.h
+   :identifiers: PVR_STATIC_ASSERT_64BIT_ALIGNED PVR_IOCTL_UNION_PADDING_CHECK
+                 pvr_ioctl_union_padding_check
diff --git a/Documentation/gpu/imagination/virtual_memory.rst b/Documentation/gpu/imagination/virtual_memory.rst
new file mode 100644
index 000000000000..aab58e12a24e
--- /dev/null
+++ b/Documentation/gpu/imagination/virtual_memory.rst
@@ -0,0 +1,462 @@
+===========================
+GPU Virtual Memory Handling
+===========================
+The sources associated with this section can be found in ``pvr_vm.c`` and
+``pvr_vm.h``.
+
+There's a lot going on in this section, so it's broken down into several parts;
+beginning with the public-facing API surface.
+
+.. admonition:: Note on page table naming
+
+   This file uses a different naming convention for page table levels than the
+   generated hardware defs. Since page table implementation details are not
+   exposed outside this file, the description of the name mapping exists here
+   in its normative form:
+
+   * Level 0 page table => Page table
+   * Level 1 page table => Page directory
+   * Level 2 page table => Page catalog
+
+   The variable/function naming convention in this file is ``page_table_lx_*``
+   where x is either ``0``, ``1`` or ``2`` to represent the level of the page
+   table structure. The name ``page_table_*`` without the ``_lx`` suffix is
+   used for references to the entire tree structure including all three levels,
+   or an operation or value which is level-agnostic.
+
+.. contents::
+   :local:
+   :depth: 1
+
+
+Public API
+==========
+The public-facing API of our virtual memory management is exposed as a
+collection of functions associated with an opaque handle type.
+
+The opaque handle is :c:type:`pvr_vm_context`. This holds a "global state",
+including a complete page table tree structure. You do not need to consider
+this internal structure (or anything else in :c:type:`pvr_vm_context`) when
+using this API; it is designed to operate as a black box.
+
+Usage
+-----
+Begin by calling :c:func:`pvr_vm_create_context` to obtain a VM context. It is
+bound to a PowerVR device (:c:type:`pvr_device`) and holds a reference to it.
+This binding is immutable.
+
+Once you're finished with a VM context, call :c:func:`pvr_vm_destroy_context`
+to release it. This should be done before freeing or otherwise releasing the
+PowerVR device to which the VM context is bound.
+
+It is an error to destroy a VM context which has already been destroyed. If a
+VM context still contains valid memory mappings at the point it is destroyed,
+these will be unmapped, and (optionally) warnings will be printed.
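+
+As a minimal sketch of this lifecycle (``pvr_dev`` is the bound device and the
+signatures are simplified for illustration; see the reference section below
+for the real kernel-doc):
+
+.. code-block:: c
+
+   struct pvr_vm_context *vm_ctx;
+
+   vm_ctx = pvr_vm_create_context(pvr_dev);
+   if (IS_ERR(vm_ctx))
+           return PTR_ERR(vm_ctx);
+
+   /* ... create mappings and submit work ... */
+
+   pvr_vm_destroy_context(vm_ctx);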
+
+Functions
+---------
+* :c:func:`pvr_vm_create_context`
+* :c:func:`pvr_vm_destroy_context`
+* :c:func:`pvr_vm_map`
+* :c:func:`pvr_vm_map_partial`
+* :c:func:`pvr_vm_unmap`
+
+Helper functions
+----------------
+* :c:func:`pvr_device_addr_is_valid`
+* :c:func:`pvr_device_addr_and_size_are_valid`
+* :c:func:`pvr_vm_get_root_page_table_addr`
+
+Constants
+---------
+* :c:macro:`PVR_VM_BACKING_PAGE_SIZE`
+
+
+Memory mappings
+===============
+Physical memory is exposed to the device via **mappings**. Mappings may never
+overlap, although any given region of physical memory may be referenced by
+multiple mappings.
+
+Use :c:func:`pvr_vm_map` to create a mapping, providing a
+:c:type:`pvr_gem_object` holding the physical memory to be mapped. The physical
+memory behind the object (or each "section" if it's not contiguous) must be
+device page-aligned. This restriction is checked by :c:func:`pvr_vm_map`, which
+returns -``EINVAL`` if the check fails.
+
+If only part of the :c:type:`pvr_gem_object` must be mapped, use
+:c:func:`pvr_vm_map_partial` instead. In addition to the parameters accepted by
+:c:func:`pvr_vm_map`, this also takes an offset into the object and the size of
+the mapping to be created. The restrictions regarding alignment on
+:c:func:`pvr_vm_map` also apply here, with the exception that only the region
+of the object within the bounds specified by the offset and size must satisfy
+them. These are checked by :c:func:`pvr_vm_map_partial`, along with the offset
+and size values to ensure that the region they specify falls entirely within
+the bounds of the provided object.
+
+Both of these mapping functions call :c:func:`pvr_gem_object_get` to ensure the
+underlying physical memory is not freed until *after* the mapping is released.
+
+Although :c:func:`pvr_vm_map_partial` could technically always be used in place
+of :c:func:`pvr_vm_map`, the latter should be preferred when possible since it
+is a more efficient operation.
+
+Mappings are tracked internally so that it is theoretically impossible to
+accidentally create overlapping mappings. No handle is returned after a
+mapping operation succeeds; callers should instead use the start device
+virtual address of the mapping as its handle.
+
+When mapped memory is no longer required by the device, it should be
+unmapped using :c:func:`pvr_vm_unmap`. In addition to making the memory
+inaccessible to the device, this will call :c:func:`pvr_gem_object_put` to
+release the underlying physical memory. If the mapping held the last reference,
+the physical memory will automatically be freed. Attempting to unmap an invalid
+mapping (or one that has already been unmapped) will result in an -``ENOENT``
+error.
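+
+A sketch of the expected call pattern (``vm_ctx``, ``pvr_obj`` and
+``device_addr`` are assumed to exist and the argument lists are simplified;
+consult the reference section for the exact signatures):
+
+.. code-block:: c
+
+   /* Map the whole object at device_addr... */
+   err = pvr_vm_map(vm_ctx, pvr_obj, device_addr);
+   if (err)
+           return err;
+
+   /* ...use the mapping, then release it by its device-virtual address. */
+   err = pvr_vm_unmap(vm_ctx, device_addr);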
+
+Types
+-----
+* :c:type:`pvr_vm_mapping`
+
+Functions
+---------
+* :c:func:`pvr_vm_mapping_init_partial`
+* :c:func:`pvr_vm_mapping_init`
+* :c:func:`pvr_vm_mapping_fini`
+* :c:func:`pvr_vm_mapping_map`
+* :c:func:`pvr_vm_mapping_unmap`
+* :c:func:`pvr_vm_mapping_page_flags_raw`
+
+Constants
+---------
+* :c:macro:`PVR_VM_MAPPING_COMPLETE`
+
+
+VM backing pages
+================
+While the page tables hold memory accessible to the rest of the driver, the
+page tables themselves must have memory allocated to back them. We call this
+memory "VM backing pages". Conveniently, each page table is (currently) exactly
+4KiB, as defined by ``PVR_VM_BACKING_PAGE_SIZE``. We currently support any CPU
+page size equal to or greater than this.
+
+Usage
+-----
+To add this functionality to a structure which wraps a raw page table, embed
+an instance of :c:type:`pvr_vm_backing_page` in the wrapper struct. Call
+:c:func:`pvr_vm_backing_page_init` to allocate and map the backing page, and
+:c:func:`pvr_vm_backing_page_fini` to perform the reverse operations when
+you're finished with it. Use :c:func:`pvr_vm_backing_page_sync` to flush the
+memory from the host to the device. As this is an expensive operation (calling
+out to :c:func:`dma_sync_single_for_device`), be sure to perform all necessary
+changes to the backing memory before calling it.
+
+Between calls to :c:func:`pvr_vm_backing_page_init` and
+:c:func:`pvr_vm_backing_page_fini`, the public fields of
+:c:type:`pvr_vm_backing_page` can be used to access the allocated page. To
+access the memory from the CPU, use :c:member:`pvr_vm_backing_page.host_ptr`.
+For an address which can be passed to the device, use
+:c:member:`pvr_vm_backing_page.dma_addr`.
+
+It is expected that the embedded :c:type:`pvr_vm_backing_page` will be zeroed
+before calling :c:func:`pvr_vm_backing_page_init`. In return,
+:c:func:`pvr_vm_backing_page_fini` will re-zero it before returning. You can
+therefore compare the value of either :c:member:`pvr_vm_backing_page.dma_addr`
+or :c:member:`pvr_vm_backing_page.host_ptr` to zero or ``NULL`` to check if the
+backing page is ready for use.
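+
+A sketch of the expected embedding and lifecycle; ``my_page_table`` is a
+hypothetical wrapper and the signatures are assumed for illustration:
+
+.. code-block:: c
+
+   struct my_page_table {
+           struct pvr_vm_backing_page backing_page;
+           /* ... raw page table state ... */
+   };
+
+   static int my_page_table_create(struct pvr_device *pvr_dev,
+                                   struct my_page_table *table)
+   {
+           int err = pvr_vm_backing_page_init(&table->backing_page, pvr_dev);
+
+           if (err)
+                   return err;
+
+           /* Populate entries through the CPU mapping... */
+           memset(table->backing_page.host_ptr, 0, PVR_VM_BACKING_PAGE_SIZE);
+
+           /* ...then flush them to the device in one go. */
+           pvr_vm_backing_page_sync(&table->backing_page);
+
+           /* pvr_vm_backing_page_fini() undoes the init on destruction. */
+           return 0;
+   }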
+
+.. note:: This API is not expected to be exposed outside ``pvr_vm.c``.
+
+Types
+-----
+* :c:type:`pvr_vm_backing_page`
+
+Functions
+---------
+* :c:func:`pvr_vm_backing_page_init`
+* :c:func:`pvr_vm_backing_page_fini`
+* :c:func:`pvr_vm_backing_page_sync`
+
+Constants
+---------
+* :c:macro:`PVR_VM_BACKING_PAGE_SIZE`
+
+
+Raw page tables
+===============
+These types define the lowest level representation of the page table structure.
+This is the format which a PowerVR device's MMU can interpret directly. As
+such, their definitions are taken directly from hardware documentation.
+
+To store additional information required by the driver, we use
+`mirror page tables`_. In most cases, the mirror types are the ones you want to
+use for handles.
+
+Types
+-----
+* :c:type:`pvr_page_table_l2_entry_raw`
+* :c:type:`pvr_page_table_l1_entry_raw`
+* :c:type:`pvr_page_table_l0_entry_raw`
+* :c:type:`pvr_page_table_l2_raw`
+* :c:type:`pvr_page_table_l1_raw`
+* :c:type:`pvr_page_table_l0_raw`
+
+Functions
+---------
+* :c:func:`pvr_page_table_l2_entry_raw_is_valid`
+* :c:func:`pvr_page_table_l2_entry_raw_set`
+* :c:func:`pvr_page_table_l2_entry_raw_clear`
+* :c:func:`pvr_page_table_l1_entry_raw_is_valid`
+* :c:func:`pvr_page_table_l1_entry_raw_set`
+* :c:func:`pvr_page_table_l1_entry_raw_clear`
+* :c:func:`pvr_page_table_l0_entry_raw_is_valid`
+* :c:func:`pvr_page_table_l0_entry_raw_set`
+* :c:func:`pvr_page_table_l0_entry_raw_clear`
+
+
+Mirror page tables
+==================
+These structures hold additional information required by the driver that cannot
+be stored in `raw page tables`_ (since those are defined by the hardware).
+
+In most cases, you should hold a handle to these types instead of the raw types
+directly.
+
+Types
+-----
+* :c:type:`pvr_page_table_l2`
+* :c:type:`pvr_page_table_l1`
+* :c:type:`pvr_page_table_l0`
+
+Functions
+---------
+* :c:func:`pvr_page_table_l2_init`
+* :c:func:`pvr_page_table_l2_fini`
+* :c:func:`pvr_page_table_l2_sync`
+* :c:func:`pvr_page_table_l2_get_raw`
+* :c:func:`pvr_page_table_l2_get_entry_raw`
+* :c:func:`pvr_page_table_l2_insert`
+* :c:func:`pvr_page_table_l2_remove`
+* :c:func:`pvr_page_table_l1_init`
+* :c:func:`pvr_page_table_l1_fini`
+* :c:func:`pvr_page_table_l1_sync`
+* :c:func:`pvr_page_table_l1_get_raw`
+* :c:func:`pvr_page_table_l1_get_entry_raw`
+* :c:func:`pvr_page_table_l1_insert`
+* :c:func:`pvr_page_table_l1_remove`
+* :c:func:`pvr_page_table_l0_init`
+* :c:func:`pvr_page_table_l0_fini`
+* :c:func:`pvr_page_table_l0_sync`
+* :c:func:`pvr_page_table_l0_get_raw`
+* :c:func:`pvr_page_table_l0_get_entry_raw`
+* :c:func:`pvr_page_table_l0_insert`
+* :c:func:`pvr_page_table_l0_remove`
+
+
+Page table index utilities
+==========================
+These utilities are not tied to the raw or mirror page tables since they
+operate only on device-virtual addresses which are identical between the two
+structures.
+
+Functions
+---------
+* :c:func:`pvr_page_table_l2_idx`
+* :c:func:`pvr_page_table_l1_idx`
+* :c:func:`pvr_page_table_l0_idx`
+
+Constants
+---------
+* :c:macro:`PVR_PAGE_TABLE_ADDR_SPACE_SIZE`
+* :c:macro:`PVR_PAGE_TABLE_ADDR_BITS`
+* :c:macro:`PVR_PAGE_TABLE_ADDR_MASK`
+
+
+High-level page table operations
+================================
+We designate any functions which operate on our wrappers for page tables as
+"high-level".
+
+.. note::
+
+    This section contains functions prefixed with ``__`` that should never be
+    called directly, even internally.
+
+The two primary functions in this section are consumed by the page table
+pointer operations; that API is the expected method of performing operations
+on the page table tree structure.
+
+The ``__`` functions noted previously are triggered when the refcount
+(implemented as the number of valid entries in the target page table) reaches
+zero.
+
+Functions
+---------
+* :c:func:`pvr_page_table_l1_get_or_create`
+* :c:func:`pvr_page_table_l0_get_or_create`
+
+Internal functions
+------------------
+* :c:func:`pvr_page_table_l1_create_unchecked`
+* :c:func:`__pvr_page_table_l1_destroy`
+* :c:func:`pvr_page_table_l0_create_unchecked`
+* :c:func:`__pvr_page_table_l0_destroy`
+
+
+Page table pointer
+==================
+Traversing the page table tree structure is not a straightforward operation
+since there are multiple layers, each with different properties. To contain and
+attempt to reduce this complexity, it's mostly encompassed in a "heavy pointer"
+type (:c:type:`pvr_page_table_ptr`) and its associated functions.
+
+Usage
+-----
+To start using a :c:type:`pvr_page_table_ptr` instance (a "pointer"), you must
+first initialize it to the starting address of your traversal using
+:c:func:`pvr_page_table_ptr_init`. Once finished, destroy it with
+:c:func:`pvr_page_table_ptr_fini`.
+
+You can advance the pointer using :c:func:`pvr_page_table_ptr_next_page`. If
+you're writing to the page table structure, you'll want to set the
+``should_create`` argument to ``true``. This will ensure the pointer doesn't
+dangle after advancing. See the function doc for more details.
+
+The pointer cannot be iterated in reverse; if you need to backtrack (e.g. in
+case of an error), keep a copy using :c:func:`pvr_page_table_ptr_copy`. The
+copy must be destroyed in the same fashion as the original (using
+:c:func:`pvr_page_table_ptr_fini`). There are no restrictions on the lifetime
+of the copy; it may outlive its original. Pending sync operations are not
+copied, so they will only be executed by operations on the original. This
+prevents some sync duplication, but it should be considered when working with
+copies.
+
+To avoid a free/alloc pair, you can reuse an existing pointer for a completely
+different range. This is achieved by calling :c:func:`pvr_page_table_ptr_set`
+to effectively re-initialize the pointer.
+
+We've mentioned sync operations in passing, but here are some actual details
+about how the pointer performs them. When a pointer is "initialized" (either by
+:c:func:`pvr_page_table_ptr_init`, :c:func:`pvr_page_table_ptr_copy` or
+:c:func:`pvr_page_table_ptr_set`), it's marked as "synced". If the pointer was
+destroyed at this point, no sync operation would occur. As the page table
+hierarchy is traversed (using :c:func:`pvr_page_table_ptr_next_page`), you
+should call :c:func:`pvr_page_table_ptr_require_sync` to indicate which levels
+of the hierarchy have been touched. This is a very cheap operation which just
+marks the pointer as "unsynced" up to and including the specified page table
+level.
+
+At the *next* call to :c:func:`pvr_page_table_ptr_next_page`, this "unsynced"
+level will be compared against the maximum level in the tree structure at which
+the pointer has changed. This information will then be used to perform the
+(somewhat expensive) DMA sync operation (:c:func:`pvr_vm_backing_page_sync`) on
+only the touched tables. Remember this decision relies on the user (you)
+reporting this status correctly, so always call
+:c:func:`pvr_page_table_ptr_require_sync`! In addition to
+:c:func:`pvr_page_table_ptr_next_page`, this "smart sync" will be performed by
+:c:func:`pvr_page_table_ptr_fini`. It can also be triggered manually by calling
+:c:func:`pvr_page_table_sync_partial`, or the simpler
+:c:func:`pvr_page_table_sync`. The former will only perform sync operations up
+to a specified level, while the latter always leaves the pointer in the
+"synced" state.
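+
+Putting the above together, a write traversal might look roughly like this
+(``vm_ctx``, ``device_addr`` and ``nr_pages`` are assumed to exist, and the
+argument lists are simplified for illustration):
+
+.. code-block:: c
+
+   struct pvr_page_table_ptr ptr;
+   int err;
+
+   err = pvr_page_table_ptr_init(&ptr, vm_ctx, device_addr, true);
+   if (err)
+           return err;
+
+   for (u64 page = 0; page < nr_pages; ++page) {
+           /* Modify the L0 entry the pointer currently addresses... */
+
+           /* ...record how deep in the hierarchy the change reached... */
+           pvr_page_table_ptr_require_sync(&ptr, 0);
+
+           /* ...and advance; this also flushes any pending syncs. */
+           if (page + 1 < nr_pages) {
+                   err = pvr_page_table_ptr_next_page(&ptr, true);
+                   if (err)
+                           break;
+           }
+   }
+
+   /* Any remaining unsynced levels are flushed here. */
+   pvr_page_table_ptr_fini(&ptr);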
+
+Types
+-----
+* :c:type:`pvr_page_table_ptr`
+
+Functions
+---------
+* :c:func:`pvr_page_table_ptr_init`
+* :c:func:`pvr_page_table_ptr_fini`
+* :c:func:`pvr_page_table_ptr_next_page`
+* :c:func:`pvr_page_table_ptr_set`
+* :c:func:`pvr_page_table_ptr_require_sync`
+* :c:func:`pvr_page_table_ptr_copy`
+* :c:func:`pvr_page_table_ptr_sync`
+* :c:func:`pvr_page_table_ptr_sync_partial`
+
+Internal functions
+------------------
+* :c:func:`pvr_page_table_ptr_sync_manual`
+* :c:func:`pvr_page_table_ptr_load_tables`
+
+Constants
+---------
+* :c:macro:`PVR_PAGE_TABLE_PTR_IN_SYNC`
+
+
+Single page operations
+======================
+These functions operate on single device-virtual pages, as addressed by a
+:c:type:`pvr_page_table_ptr`. They keep the page table hierarchy updated.
+
+They are distinct from the High-level page table operations because they are
+used by consumers of the page table pointer, rather than the page table pointer
+functions themselves.
+
+Functions
+---------
+* :c:func:`pvr_page_create`
+* :c:func:`pvr_page_destroy`
+
+
+Interval tree base implementation
+=================================
+There is a note in ``<linux/interval_tree_generic.h>`` which says:
+
+   Note - before using this, please consider if generic version
+   (``interval_tree.h``) would work for you...
+
+Here, then, is our justification for using ``<linux/interval_tree_generic.h>``
+instead of the generic version (naming is hard, okay!):
+
+The generic version of :c:type:`interval_tree_node` (from
+``<linux/interval_tree.h>``) uses unsigned long. We always need the elements to
+be 64 bits wide, regardless of host pointer size. We could gate this
+implementation on ``BITS_PER_LONG``, but it's better for us to store ``start``
+and ``size`` and derive ``last``, rather than the way
+:c:type:`interval_tree_node` does it: storing ``start`` and ``last`` and
+deriving ``size``.
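+
+For reference, the template is instantiated along these lines (the node layout
+and helper names shown here are illustrative rather than a copy of the
+driver's definitions):
+
+.. code-block:: c
+
+   struct pvr_vm_interval_tree_node {
+           struct rb_node rb;
+           u64 start;
+           u64 size;
+           u64 __subtree_last;
+   };
+
+   #define NODE_START(node_) ((node_)->start)
+   #define NODE_LAST(node_) ((node_)->start + (node_)->size - 1)
+
+   INTERVAL_TREE_DEFINE(struct pvr_vm_interval_tree_node, rb, u64,
+                        __subtree_last, NODE_START, NODE_LAST, static,
+                        pvr_vm_interval_tree)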
+
+Types
+-----
+* :c:type:`pvr_vm_interval_tree_node`
+
+Functions
+---------
+* :c:func:`pvr_vm_interval_tree_compute_last`
+* :c:func:`pvr_vm_interval_tree_insert`
+* :c:func:`pvr_vm_interval_tree_iter_first`
+* :c:func:`pvr_vm_interval_tree_iter_next`
+* :c:func:`pvr_vm_interval_tree_init`
+* :c:func:`pvr_vm_interval_tree_fini`
+* :c:func:`pvr_vm_interval_tree_node_init`
+* :c:func:`pvr_vm_interval_tree_node_fini`
+* :c:func:`pvr_vm_interval_tree_node_start`
+* :c:func:`pvr_vm_interval_tree_node_size`
+* :c:func:`pvr_vm_interval_tree_node_last`
+* :c:func:`pvr_vm_interval_tree_node_is_inserted`
+* :c:func:`pvr_vm_interval_tree_node_mark_removed`
+
+
+Reference
+=========
+.. kernel-doc:: drivers/gpu/drm/imagination/pvr_vm.c
+   :identifiers:
+
+Constants
+---------
+.. kernel-doc:: drivers/gpu/drm/imagination/pvr_vm.h
+   :doc: Public API (constants)
+
+.. kernel-doc:: drivers/gpu/drm/imagination/pvr_vm.c
+   :doc: Memory mappings (constants)
+
+.. kernel-doc:: drivers/gpu/drm/imagination/pvr_vm.c
+   :doc: VM backing pages (constants)
+
+.. kernel-doc:: drivers/gpu/drm/imagination/pvr_vm.c
+   :doc: Page table index utilities (constants)
+
+.. kernel-doc:: drivers/gpu/drm/imagination/pvr_vm.c
+   :doc: Page table pointer (constants)
diff --git a/MAINTAINERS b/MAINTAINERS
index 82b82cbdb22a..7f99999ceabb 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -10144,6 +10144,7 @@ M:	Sarah Walker <sarah.walker@imgtec.com>
 M:	Donald Robson <donald.robson@imgtec.com>
 S:	Supported
 F:	Documentation/devicetree/bindings/gpu/img,powervr.yaml
+F:	Documentation/gpu/imagination/
 F:	drivers/gpu/drm/imagination/
 F:	include/uapi/drm/pvr_drm.h
 
-- 
2.41.0


^ permalink raw reply related	[flat|nested] 77+ messages in thread

* [PATCH v5 16/17] drm/imagination: Add driver documentation
@ 2023-08-16  8:25   ` Sarah Walker
  0 siblings, 0 replies; 77+ messages in thread
From: Sarah Walker @ 2023-08-16  8:25 UTC (permalink / raw)
  To: dri-devel
  Cc: frank.binns, donald.robson, boris.brezillon, faith.ekstrand,
	airlied, daniel, maarten.lankhorst, mripard, tzimmermann, afd,
	hns, matthew.brost, christian.koenig, luben.tuikov, dakr,
	linux-kernel, Matt Coster

Add documentation for the UAPI and for the virtual memory design.

Co-developed-by: Matt Coster <matt.coster@imgtec.com>
Signed-off-by: Matt Coster <matt.coster@imgtec.com>
Co-developed-by: Donald Robson <donald.robson@imgtec.com>
Signed-off-by: Donald Robson <donald.robson@imgtec.com>
Signed-off-by: Sarah Walker <sarah.walker@imgtec.com>
Reviewed-by: Maxime Ripard <mripard@kernel.org>
---
 Documentation/gpu/drivers.rst                 |   2 +
 Documentation/gpu/imagination/index.rst       |  14 +
 Documentation/gpu/imagination/uapi.rst        | 174 +++++++
 .../gpu/imagination/virtual_memory.rst        | 462 ++++++++++++++++++
 MAINTAINERS                                   |   1 +
 5 files changed, 653 insertions(+)
 create mode 100644 Documentation/gpu/imagination/index.rst
 create mode 100644 Documentation/gpu/imagination/uapi.rst
 create mode 100644 Documentation/gpu/imagination/virtual_memory.rst

diff --git a/Documentation/gpu/drivers.rst b/Documentation/gpu/drivers.rst
index 3a52f48215a3..5487deb218a3 100644
--- a/Documentation/gpu/drivers.rst
+++ b/Documentation/gpu/drivers.rst
@@ -3,9 +3,11 @@ GPU Driver Documentation
 ========================
 
 .. toctree::
+   :maxdepth: 3
 
    amdgpu/index
    i915
+   imagination/index
    mcde
    meson
    pl111
diff --git a/Documentation/gpu/imagination/index.rst b/Documentation/gpu/imagination/index.rst
new file mode 100644
index 000000000000..57f28e460a03
--- /dev/null
+++ b/Documentation/gpu/imagination/index.rst
@@ -0,0 +1,14 @@
+=======================================
+drm/imagination PowerVR Graphics Driver
+=======================================
+
+.. kernel-doc:: drivers/gpu/drm/imagination/pvr_drv.c
+   :doc: PowerVR Graphics Driver
+
+Contents
+========
+.. toctree::
+   :maxdepth: 2
+
+   uapi
+   virtual_memory
diff --git a/Documentation/gpu/imagination/uapi.rst b/Documentation/gpu/imagination/uapi.rst
new file mode 100644
index 000000000000..2227ea7e6222
--- /dev/null
+++ b/Documentation/gpu/imagination/uapi.rst
@@ -0,0 +1,174 @@
+====
+UAPI
+====
+The sources associated with this section can be found in ``pvr_drm.h``.
+
+.. kernel-doc:: include/uapi/drm/pvr_drm.h
+   :doc: PowerVR UAPI
+
+OBJECT ARRAYS
+=============
+.. kernel-doc:: include/uapi/drm/pvr_drm.h
+   :identifiers: drm_pvr_obj_array
+
+.. kernel-doc:: include/uapi/drm/pvr_drm.h
+   :identifiers: DRM_PVR_OBJ_ARRAY
+
+IOCTLS
+======
+.. kernel-doc:: include/uapi/drm/pvr_drm.h
+   :doc: PowerVR IOCTL interface
+
+.. kernel-doc:: include/uapi/drm/pvr_drm.h
+   :identifiers: PVR_IOCTL
+
+DEV_QUERY
+---------
+.. kernel-doc:: include/uapi/drm/pvr_drm.h
+   :doc: PowerVR IOCTL DEV_QUERY interface
+
+.. kernel-doc:: include/uapi/drm/pvr_drm.h
+   :identifiers: drm_pvr_dev_query
+
+.. kernel-doc:: include/uapi/drm/pvr_drm.h
+   :identifiers: drm_pvr_ioctl_dev_query_args
+
+.. kernel-doc:: include/uapi/drm/pvr_drm.h
+   :identifiers: drm_pvr_dev_query_gpu_info
+                 drm_pvr_dev_query_runtime_info
+                 drm_pvr_dev_query_hwrt_info
+                 drm_pvr_dev_query_quirks
+                 drm_pvr_dev_query_enhancements
+
+.. kernel-doc:: include/uapi/drm/pvr_drm.h
+   :identifiers: drm_pvr_heap_id
+                 drm_pvr_heap
+                 drm_pvr_dev_query_heap_info
+
+.. kernel-doc:: include/uapi/drm/pvr_drm.h
+   :doc: Flags for DRM_PVR_DEV_QUERY_HEAP_INFO_GET.
+
+.. kernel-doc:: include/uapi/drm/pvr_drm.h
+   :identifiers: drm_pvr_static_data_area_usage
+                 drm_pvr_static_data_area
+                 drm_pvr_dev_query_static_data_areas
+
+CREATE_BO
+---------
+.. kernel-doc:: include/uapi/drm/pvr_drm.h
+   :doc: PowerVR IOCTL CREATE_BO interface
+
+.. kernel-doc:: include/uapi/drm/pvr_drm.h
+   :identifiers: drm_pvr_ioctl_create_bo_args
+
+.. kernel-doc:: include/uapi/drm/pvr_drm.h
+   :doc: Flags for CREATE_BO
+
+GET_BO_MMAP_OFFSET
+------------------
+.. kernel-doc:: include/uapi/drm/pvr_drm.h
+   :doc: PowerVR IOCTL GET_BO_MMAP_OFFSET interface
+
+.. kernel-doc:: include/uapi/drm/pvr_drm.h
+   :identifiers: drm_pvr_ioctl_get_bo_mmap_offset_args
+
+CREATE_VM_CONTEXT and DESTROY_VM_CONTEXT
+----------------------------------------
+.. kernel-doc:: include/uapi/drm/pvr_drm.h
+   :doc: PowerVR IOCTL CREATE_VM_CONTEXT and DESTROY_VM_CONTEXT interfaces
+
+.. kernel-doc:: include/uapi/drm/pvr_drm.h
+   :identifiers: drm_pvr_ioctl_create_vm_context_args
+                 drm_pvr_ioctl_destroy_vm_context_args
+
+VM_MAP and VM_UNMAP
+-------------------
+.. kernel-doc:: include/uapi/drm/pvr_drm.h
+   :doc: PowerVR IOCTL VM_MAP and VM_UNMAP interfaces
+
+.. kernel-doc:: include/uapi/drm/pvr_drm.h
+   :identifiers: drm_pvr_ioctl_vm_map_args
+                 drm_pvr_ioctl_vm_unmap_args
+
+CREATE_CONTEXT and DESTROY_CONTEXT
+----------------------------------
+.. kernel-doc:: include/uapi/drm/pvr_drm.h
+   :doc: PowerVR IOCTL CREATE_CONTEXT and DESTROY_CONTEXT interfaces
+
+.. kernel-doc:: include/uapi/drm/pvr_drm.h
+   :identifiers: drm_pvr_ioctl_create_context_args
+
+.. kernel-doc:: include/uapi/drm/pvr_drm.h
+   :identifiers: drm_pvr_ctx_priority
+                 drm_pvr_ctx_type
+                 drm_pvr_static_render_context_state
+                 drm_pvr_static_render_context_state_format
+                 drm_pvr_reset_framework
+                 drm_pvr_reset_framework_format
+
+.. kernel-doc:: include/uapi/drm/pvr_drm.h
+   :identifiers: drm_pvr_ioctl_destroy_context_args
+
+CREATE_FREE_LIST and DESTROY_FREE_LIST
+--------------------------------------
+.. kernel-doc:: include/uapi/drm/pvr_drm.h
+   :doc: PowerVR IOCTL CREATE_FREE_LIST and DESTROY_FREE_LIST interfaces
+
+.. kernel-doc:: include/uapi/drm/pvr_drm.h
+   :identifiers: drm_pvr_ioctl_create_free_list_args
+
+.. kernel-doc:: include/uapi/drm/pvr_drm.h
+   :identifiers: drm_pvr_ioctl_destroy_free_list_args
+
+CREATE_HWRT_DATASET and DESTROY_HWRT_DATASET
+--------------------------------------
+.. kernel-doc:: include/uapi/drm/pvr_drm.h
+   :doc: PowerVR IOCTL CREATE_HWRT_DATASET and DESTROY_HWRT_DATASET interfaces
+
+.. kernel-doc:: include/uapi/drm/pvr_drm.h
+   :identifiers: drm_pvr_ioctl_create_hwrt_dataset_args
+
+.. kernel-doc:: include/uapi/drm/pvr_drm.h
+   :identifiers: drm_pvr_create_hwrt_geom_data_args
+                 drm_pvr_create_hwrt_rt_data_args
+
+.. kernel-doc:: include/uapi/drm/pvr_drm.h
+   :identifiers: drm_pvr_ioctl_destroy_hwrt_dataset_args
+
+SUBMIT_JOBS
+-----------
+.. kernel-doc:: include/uapi/drm/pvr_drm.h
+   :doc: PowerVR IOCTL SUBMIT_JOBS interface
+
+.. kernel-doc:: include/uapi/drm/pvr_drm.h
+   :doc: Flags for the drm_pvr_sync_op object.
+
+.. kernel-doc:: include/uapi/drm/pvr_drm.h
+   :identifiers: drm_pvr_ioctl_submit_jobs_args
+
+.. kernel-doc:: include/uapi/drm/pvr_drm.h
+   :doc: Flags for SUBMIT_JOB ioctl geometry command.
+
+.. kernel-doc:: include/uapi/drm/pvr_drm.h
+   :doc: Flags for SUBMIT_JOB ioctl fragment command.
+
+.. kernel-doc:: include/uapi/drm/pvr_drm.h
+   :doc: Flags for SUBMIT_JOB ioctl compute command.
+
+.. kernel-doc:: include/uapi/drm/pvr_drm.h
+   :doc: Flags for SUBMIT_JOB ioctl transfer command.
+
+.. kernel-doc:: include/uapi/drm/pvr_drm.h
+   :identifiers: drm_pvr_sync_op
+                 drm_pvr_job_type
+                 drm_pvr_hwrt_data_ref
+                 drm_pvr_job
+
+Internal notes
+==============
+.. kernel-doc:: drivers/gpu/drm/imagination/pvr_device.h
+   :doc: IOCTL validation helpers
+
+.. kernel-doc:: drivers/gpu/drm/imagination/pvr_device.h
+   :identifiers: PVR_STATIC_ASSERT_64BIT_ALIGNED PVR_IOCTL_UNION_PADDING_CHECK
+                 pvr_ioctl_union_padding_check
diff --git a/Documentation/gpu/imagination/virtual_memory.rst b/Documentation/gpu/imagination/virtual_memory.rst
new file mode 100644
index 000000000000..aab58e12a24e
--- /dev/null
+++ b/Documentation/gpu/imagination/virtual_memory.rst
@@ -0,0 +1,462 @@
+===========================
+GPU Virtual Memory Handling
+===========================
+The sources associated with this section can be found in ``pvr_vm.c`` and
+``pvr_vm.h``.
+
+There's a lot going on in this section, so it's broken down into several parts;
+beginning with the public-facing API surface.
+
+.. admonition:: Note on page table naming
+
+   This file uses a different naming convention for page table levels than the
+   generated hardware defs. Since page table implementation details are not
+   exposed outside this file, the description of the name mapping exists here
+   in its normative form:
+
+   * Level 0 page table => Page table
+   * Level 1 page table => Page directory
+   * Level 2 page table => Page catalog
+
+   The variable/function naming convention in this file is ``page_table_lx_*``
+   where x is either ``0``, ``1`` or ``2`` to represent the level of the page
+   table structure. The name ``page_table_*`` without the ``_lx`` suffix is
+   used for references to the entire tree structure including all three levels,
+   or an operation or value which is level-agnostic.
+
+.. contents::
+   :local:
+   :depth: 1
+
+
+Public API
+==========
+The public-facing API of our virtual memory management is exposed as a
+collection of functions associated with an opaque handle type.
+
+The opaque handle is :c:type:`pvr_vm_context`. This holds a "global state",
+including a complete page table tree structure. You do not need to consider
+this internal structure (or anything else in :c:type:`pvr_vm_context`) when
+using this API; it is designed to operate as a black box.
+
+Usage
+-----
+Begin by calling :c:func:`pvr_vm_create_context` to obtain a VM context. It is
+bound to a PowerVR device (:c:type:`pvr_device`) and holds a reference to it.
+This binding is immutable.
+
+Once you're finished with a VM context, call :c:func:`pvr_vm_destroy_context`
+to release it. This should be done before freeing or otherwise releasing the
+PowerVR device to which the VM context is bound.
+
+It is an error to destroy a VM context which has already been destroyed. If a
+VM context still contains valid memory mappings at the point it is destroyed,
+these will be unmapped, and (optionally) warnings will be printed.
+
+Functions
+---------
+* :c:func:`pvr_vm_create_context`
+* :c:func:`pvr_vm_destroy_context`
+* :c:func:`pvr_vm_map`
+* :c:func:`pvr_vm_map_partial`
+* :c:func:`pvr_vm_unmap`
+
+Helper functions
+----------------
+* :c:func:`pvr_device_addr_is_valid`
+* :c:func:`pvr_device_addr_and_size_are_valid`
+* :c:func:`pvr_vm_get_root_page_table_addr`
+
+Constants
+---------
+* :c:macro:`PVR_VM_BACKING_PAGE_SIZE`
+
+
+Memory mappings
+===============
+Physical memory is exposed to the device via **mappings**. Mappings may never
+overlap, although any given region of physical memory may be referenced by
+multiple mappings.
+
+Use :c:func:`pvr_vm_map` to create a mapping, providing a
+:c:type:`pvr_gem_object` holding the physical memory to be mapped. The physical
+memory behind the object (or each "section" if it's not contiguous) must be
+device page-aligned. This restriction is checked by :c:func:`pvr_vm_map`, which
+returns -``EINVAL`` if the check fails.
+
+If only part of the :c:type:`pvr_gem_object` must be mapped, use
+:c:func:`pvr_vm_map_partial` instead. In addition to the parameters accepted by
+:c:func:`pvr_vm_map`, this also takes an offset into the object and the size of
+the mapping to be created. The restrictions regarding alignment on
+:c:func:`pvr_vm_map` also apply here, with the exception that only the region
+of the object within the bounds specified by the offset and size must satisfy
+them. These are checked by :c:func:`pvr_vm_map_partial`, along with the offset
+and size values to ensure that the region they specify falls entirely within
+the bounds of the provided object.
+
+Both of these mapping functions call :c:func:`pvr_gem_object_get` to ensure the
+underlying physical memory is not freed until *after* the mapping is released.
+
+Although :c:func:`pvr_vm_map_partial` could technically always be used in place
+of :c:func:`pvr_vm_map`, the latter should be preferred when possible since it
+is a more efficient operation.
+
+Mappings are tracked internally so that it is theoretically impossible to
+accidentally create overlapping mappings. No handle is returned after a
+mapping operation succeeds; callers should instead use the start device
+virtual address of the mapping as its handle.
+
+When mapped memory is no longer required by the device, it should be
+unmapped using :c:func:`pvr_vm_unmap`. In addition to making the memory
+inaccessible to the device, this will call :c:func:`pvr_gem_object_put` to
+release the underlying physical memory. If the mapping held the last reference,
+the physical memory will automatically be freed. Attempting to unmap an invalid
+mapping (or one that has already been unmapped) will result in an -``ENOENT``
+error.
+
+Types
+-----
+* :c:type:`pvr_vm_mapping`
+
+Functions
+---------
+* :c:func:`pvr_vm_mapping_init_partial`
+* :c:func:`pvr_vm_mapping_init`
+* :c:func:`pvr_vm_mapping_fini`
+* :c:func:`pvr_vm_mapping_map`
+* :c:func:`pvr_vm_mapping_unmap`
+* :c:func:`pvr_vm_mapping_page_flags_raw`
+
+Constants
+---------
+* :c:macro:`PVR_VM_MAPPING_COMPLETE`
+
+
+VM backing pages
+================
+While the page tables hold memory accessible to the rest of the driver, the
+page tables themselves must have memory allocated to back them. We call this
+memory "VM backing pages". Conveniently, each page table is (currently) exactly
+4KiB, as defined by ``PVR_VM_BACKING_PAGE_SIZE``. We currently support any CPU
+page size of this size or greater.
+
+Usage
+-----
+To add this functionality to a structure which wraps a raw page table, embed
+an instance of :c:type:`pvr_vm_backing_page` in the wrapper struct. Call
+:c:func:`pvr_vm_backing_page_init` to allocate and map the backing page, and
+:c:func:`pvr_vm_backing_page_fini` to perform the reverse operations when
+you're finished with it. Use :c:func:`pvr_vm_backing_page_sync` to flush the
+memory from the host to the device. As this is an expensive operation (calling
+out to :c:func:`dma_sync_single_for_device`), be sure to perform all necessary
+changes to the backing memory before calling it.
+
+Between calls to :c:func:`pvr_vm_backing_page_init` and
+:c:func:`pvr_vm_backing_page_fini`, the public fields of
+:c:type:`pvr_vm_backing_page` can be used to access the allocated page. To
+access the memory from the CPU, use :c:member:`pvr_vm_backing_page.host_ptr`.
+For an address which can be passed to the device, use
+:c:member:`pvr_vm_backing_page.dma_addr`.
+
+It is expected that the embedded :c:type:`pvr_vm_backing_page` will be zeroed
+before calling :c:func:`pvr_vm_backing_page_init`. In return,
+:c:func:`pvr_vm_backing_page_fini` will re-zero it before returning. You can
+therefore compare the value of either :c:member:`pvr_vm_backing_page.dma_addr`
+or :c:member:`pvr_vm_backing_page.host_ptr` to zero or ``NULL`` to check if the
+backing page is ready for use.
+
+.. note:: This API is not expected to be exposed outside ``pvr_vm.c``.
+
+Types
+-----
+* :c:type:`pvr_vm_backing_page`
+
+Functions
+---------
+* :c:func:`pvr_vm_backing_page_init`
+* :c:func:`pvr_vm_backing_page_fini`
+* :c:func:`pvr_vm_backing_page_sync`
+
+Constants
+---------
+* :c:macro:`PVR_VM_BACKING_PAGE_SIZE`
+
+
+Raw page tables
+===============
+These types define the lowest level representation of the page table structure.
+This is the format which a PowerVR device's MMU can interpret directly. As
+such, their definitions are taken directly from hardware documentation.
+
+To store additional information required by the driver, we use
+`mirror page tables`_. In most cases, the mirror types are the ones you want to
+use for handles.
+
+Types
+-----
+* :c:type:`pvr_page_table_l2_entry_raw`
+* :c:type:`pvr_page_table_l1_entry_raw`
+* :c:type:`pvr_page_table_l0_entry_raw`
+* :c:type:`pvr_page_table_l2_raw`
+* :c:type:`pvr_page_table_l1_raw`
+* :c:type:`pvr_page_table_l0_raw`
+
+Functions
+---------
+* :c:func:`pvr_page_table_l2_entry_raw_is_valid`
+* :c:func:`pvr_page_table_l2_entry_raw_set`
+* :c:func:`pvr_page_table_l2_entry_raw_clear`
+* :c:func:`pvr_page_table_l1_entry_raw_is_valid`
+* :c:func:`pvr_page_table_l1_entry_raw_set`
+* :c:func:`pvr_page_table_l1_entry_raw_clear`
+* :c:func:`pvr_page_table_l0_entry_raw_is_valid`
+* :c:func:`pvr_page_table_l0_entry_raw_set`
+* :c:func:`pvr_page_table_l0_entry_raw_clear`
+
+
+Mirror page tables
+==================
+These structures hold additional information required by the driver that cannot
+be stored in `raw page tables`_ (since those are defined by the hardware).
+
+In most cases, you should hold a handle to these types instead of the raw types
+directly.
+
+Types
+-----
+* :c:type:`pvr_page_table_l2`
+* :c:type:`pvr_page_table_l1`
+* :c:type:`pvr_page_table_l0`
+
+Functions
+---------
+* :c:func:`pvr_page_table_l2_init`
+* :c:func:`pvr_page_table_l2_fini`
+* :c:func:`pvr_page_table_l2_sync`
+* :c:func:`pvr_page_table_l2_get_raw`
+* :c:func:`pvr_page_table_l2_get_entry_raw`
+* :c:func:`pvr_page_table_l2_insert`
+* :c:func:`pvr_page_table_l2_remove`
+* :c:func:`pvr_page_table_l1_init`
+* :c:func:`pvr_page_table_l1_fini`
+* :c:func:`pvr_page_table_l1_sync`
+* :c:func:`pvr_page_table_l1_get_raw`
+* :c:func:`pvr_page_table_l1_get_entry_raw`
+* :c:func:`pvr_page_table_l1_insert`
+* :c:func:`pvr_page_table_l1_remove`
+* :c:func:`pvr_page_table_l0_init`
+* :c:func:`pvr_page_table_l0_fini`
+* :c:func:`pvr_page_table_l0_sync`
+* :c:func:`pvr_page_table_l0_get_raw`
+* :c:func:`pvr_page_table_l0_get_entry_raw`
+* :c:func:`pvr_page_table_l0_insert`
+* :c:func:`pvr_page_table_l0_remove`
+
+
+Page table index utilities
+==========================
+These utilities are not tied to the raw or mirror page tables since they
+operate only on device-virtual addresses which are identical between the two
+structures.
+
+Functions
+---------
+* :c:func:`pvr_page_table_l2_idx`
+* :c:func:`pvr_page_table_l1_idx`
+* :c:func:`pvr_page_table_l0_idx`
+
+Constants
+---------
+* :c:macro:`PVR_PAGE_TABLE_ADDR_SPACE_SIZE`
+* :c:macro:`PVR_PAGE_TABLE_ADDR_BITS`
+* :c:macro:`PVR_PAGE_TABLE_ADDR_MASK`
+
+
+High-level page table operations
+================================
+We designate any functions which operate on our wrappers for page tables as
+"high-level".
+
+.. note::
+
+    This section contains functions prefixed with ``__`` that should never be
+    called directly, even internally.
+
+The two primary functions in this section are consumed by the page table
+pointer operations; that API is the expected method of performing operations
+on the page table tree structure
+
+The ``__`` functions noted previously are triggered when the refcount
+(implemented as the number of valid entries in the target page table) reaches
+zero.
+
+Functions
+---------
+* :c:func:`pvr_page_table_l1_get_or_create`
+* :c:func:`pvr_page_table_l0_get_or_create`
+
+Internal functions
+------------------
+* :c:func:`pvr_page_table_l1_create_unchecked`
+* :c:func:`__pvr_page_table_l1_destroy`
+* :c:func:`pvr_page_table_l0_create_unchecked`
+* :c:func:`__pvr_page_table_l0_destroy`
+
+
+Page table pointer
+==================
+Traversing the page table tree structure is not a straightforward operation
+since there are multiple layers, each with different properties. To contain and
+attempt to reduce this complexity, it's mostly encompassed in a "heavy pointer"
+type (:c:type:`pvr_page_table_ptr`) and its associated functions.
+
+Usage
+-----
+To start using a :c:type:`pvr_page_table_ptr` instance (a "pointer"), you must
+first initialize it to the starting address of your traversal using
+:c:func:`pvr_page_table_ptr_init`. Once finished, destroy it with
+:c:func:`pvr_page_table_ptr_fini`.
+
+You can advance the pointer using :c:func:`pvr_page_table_ptr_next_page`. If
+you're writing to the page table structure, you'll want to set the
+``should_create`` argument to ``true``. This will ensure the pointer doesn't
+dangle after advancing. See the function doc for more details.
+
+The pointer cannot be iterated in reverse; if you need to backtrack (e.g. in
+case of an error), keep a copy using :c:func:`pvr_page_table_ptr_copy`. The
+copy must be destroyed in the same fashion as the original (using
+:c:func:`pvr_page_table_ptr_fini`). There are no restrictions on the lifetime
+of the copy; it may outlive its original. Pending sync operations are not
+copied, so they will only be executed by operations on the original. This
+prevents some sync duplication, but it should be considered when working with
+copies.
+
+To avoid a free/alloc pair, you can reuse an existing pointer for a completely
+different range. This is achieved by calling :c:func:`pvr_page_table_ptr_set`
+to effectively re-initialize the pointer.
+
+We've mentioned sync operations in passing, but here are some actual details
+about how the pointer performs them. When a pointer is "initialized" (either by
+:c:func:`pvr_page_table_ptr_init`, :c:func:`pvr_page_table_ptr_copy` or
+:c:func:`pvr_page_table_ptr_set`), it's marked as "synced". If the pointer was
+destroyed at this point, no sync operation would occur. As the page table
+hierarchy is traversed (using :c:func:`pvr_page_table_ptr_next_page`), you
+should call :c:func:`pvr_page_table_ptr_require_sync` to indicate which levels
+of the hierarchy have been touched. This is a very cheap operation which just
+marks the pointer as "unsynced" up to and including the specified page table
+level.
+
+At the *next* call to :c:func:`pvr_page_table_ptr_next_page`, this "unsynced"
+level will be compared against the maximum level in the tree structure at which
+the pointer has changed. This information will then be used to perform the
+(somewhat expensive) DMA sync operation (:c:func:`pvr_vm_backing_page_sync`) on
+only the touched tables. Remember this decision relies on the user (you)
+reporting this status correctly, so always call
+:c:func:`pvr_page_table_ptr_require_sync`! In addition to
+:c:func:`pvr_page_table_ptr_next_page`, this "smart sync" will be performed by
+:c:func:`pvr_page_table_ptr_fini`. It can also be triggered manually by calling
+:c:func:`pvr_page_table_ptr_sync_partial`, or the simpler
+:c:func:`pvr_page_table_ptr_sync`. The former will only perform sync operations up
+to a specified level, while the latter always leaves the pointer in the
+"synced" state.
+
+Types
+-----
+* :c:type:`pvr_page_table_ptr`
+
+Functions
+---------
+* :c:func:`pvr_page_table_ptr_init`
+* :c:func:`pvr_page_table_ptr_fini`
+* :c:func:`pvr_page_table_ptr_next_page`
+* :c:func:`pvr_page_table_ptr_set`
+* :c:func:`pvr_page_table_ptr_require_sync`
+* :c:func:`pvr_page_table_ptr_copy`
+* :c:func:`pvr_page_table_ptr_sync`
+* :c:func:`pvr_page_table_ptr_sync_partial`
+
+Internal functions
+------------------
+* :c:func:`pvr_page_table_ptr_sync_manual`
+* :c:func:`pvr_page_table_ptr_load_tables`
+
+Constants
+---------
+* :c:macro:`PVR_PAGE_TABLE_PTR_IN_SYNC`
+
+
+Single page operations
+======================
+These functions operate on single device-virtual pages, as addressed by a
+:c:type:`pvr_page_table_ptr`. They keep the page table hierarchy updated.
+
+They are distinct from the High-level page table operations because they are
+used by consumers of the page table pointer, rather than the page table pointer
+functions themselves.
+
+Functions
+---------
+* :c:func:`pvr_page_create`
+* :c:func:`pvr_page_destroy`
+
+
+Interval tree base implementation
+=================================
+There is a note in ``<linux/interval_tree_generic.h>`` which says:
+
+   Note - before using this, please consider if generic version
+   (``interval_tree.h``) would work for you...
+
+Here, then, is our justification for using the generic *template* version
+(``interval_tree_generic.h``) instead of the generic prebuilt version
+(``interval_tree.h``) (naming is hard, okay!):
+
+The generic version of :c:type:`interval_tree_node` (from
+``<linux/interval_tree.h>``) uses ``unsigned long`` for its endpoints, but we
+always need the elements to be 64 bits wide, regardless of host pointer size.
+We could gate this implementation on ``BITS_PER_LONG``, but it also suits us
+better to store ``start`` and ``size`` and derive ``last``, rather than
+storing ``start`` and ``last`` and deriving ``size`` as
+:c:type:`interval_tree_node` does.
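+
+In other words, each node stores ``start`` and ``size``, and ``last`` is
+derived on demand (a sketch of the relationship only; see the helpers listed
+below, e.g. :c:func:`pvr_vm_interval_tree_node_last`)::
+
+    last = start + size - 1;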
+
+Types
+-----
+* :c:type:`pvr_vm_interval_tree_node`
+
+Functions
+---------
+* :c:func:`pvr_vm_interval_tree_compute_last`
+* :c:func:`pvr_vm_interval_tree_insert`
+* :c:func:`pvr_vm_interval_tree_iter_first`
+* :c:func:`pvr_vm_interval_tree_iter_next`
+* :c:func:`pvr_vm_interval_tree_init`
+* :c:func:`pvr_vm_interval_tree_fini`
+* :c:func:`pvr_vm_interval_tree_node_init`
+* :c:func:`pvr_vm_interval_tree_node_fini`
+* :c:func:`pvr_vm_interval_tree_node_start`
+* :c:func:`pvr_vm_interval_tree_node_size`
+* :c:func:`pvr_vm_interval_tree_node_last`
+* :c:func:`pvr_vm_interval_tree_node_is_inserted`
+* :c:func:`pvr_vm_interval_tree_node_mark_removed`
+
+
+Reference
+=========
+.. kernel-doc:: drivers/gpu/drm/imagination/pvr_vm.c
+   :identifiers:
+
+Constants
+---------
+.. kernel-doc:: drivers/gpu/drm/imagination/pvr_vm.h
+   :doc: Public API (constants)
+
+.. kernel-doc:: drivers/gpu/drm/imagination/pvr_vm.c
+   :doc: Memory mappings (constants)
+
+.. kernel-doc:: drivers/gpu/drm/imagination/pvr_vm.c
+   :doc: VM backing pages (constants)
+
+.. kernel-doc:: drivers/gpu/drm/imagination/pvr_vm.c
+   :doc: Page table index utilities (constants)
+
+.. kernel-doc:: drivers/gpu/drm/imagination/pvr_vm.c
+   :doc: Page table pointer (constants)
diff --git a/MAINTAINERS b/MAINTAINERS
index 82b82cbdb22a..7f99999ceabb 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -10144,6 +10144,7 @@ M:	Sarah Walker <sarah.walker@imgtec.com>
 M:	Donald Robson <donald.robson@imgtec.com>
 S:	Supported
 F:	Documentation/devicetree/bindings/gpu/img,powervr.yaml
+F:	Documentation/gpu/imagination/
 F:	drivers/gpu/drm/imagination/
 F:	include/uapi/drm/pvr_drm.h
 
-- 
2.41.0


^ permalink raw reply related	[flat|nested] 77+ messages in thread

* [PATCH v5 17/17] arm64: dts: ti: k3-am62-main: Add GPU device node [DO NOT MERGE]
  2023-08-16  8:25 ` Sarah Walker
@ 2023-08-16  8:25   ` Sarah Walker
  -1 siblings, 0 replies; 77+ messages in thread
From: Sarah Walker @ 2023-08-16  8:25 UTC (permalink / raw)
  To: dri-devel
  Cc: matthew.brost, luben.tuikov, tzimmermann, linux-kernel, mripard,
	afd, boris.brezillon, dakr, donald.robson, hns, christian.koenig,
	faith.ekstrand

Add the Series AXE GPU node to the AM62 device tree.

Changes since v4:
- Remove interrupt name
- Make property order consistent across dts and bindings doc
- Fixed formatting (replaced spaces with tabs)

Signed-off-by: Sarah Walker <sarah.walker@imgtec.com>
---
 arch/arm64/boot/dts/ti/k3-am62-main.dtsi | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/arch/arm64/boot/dts/ti/k3-am62-main.dtsi b/arch/arm64/boot/dts/ti/k3-am62-main.dtsi
index 2488e3a537fe..b55bb3d0556e 100644
--- a/arch/arm64/boot/dts/ti/k3-am62-main.dtsi
+++ b/arch/arm64/boot/dts/ti/k3-am62-main.dtsi
@@ -913,4 +913,13 @@ mcasp2: audio-controller@2b20000 {
 		power-domains = <&k3_pds 192 TI_SCI_PD_EXCLUSIVE>;
 		status = "disabled";
 	};
+
+	gpu: gpu@fd00000 {
+		compatible = "ti,am62-gpu", "img,powervr-seriesaxe";
+		reg = <0x00 0x0fd00000 0x00 0x20000>;
+		clocks = <&k3_clks 187 0>;
+		clock-names = "core";
+		interrupts = <GIC_SPI 86 IRQ_TYPE_LEVEL_HIGH>;
+		power-domains = <&k3_pds 187 TI_SCI_PD_EXCLUSIVE>;
+	};
 };
-- 
2.41.0


^ permalink raw reply related	[flat|nested] 77+ messages in thread

* Re: [PATCH v5 09/17] drm/imagination: Implement power management
  2023-08-16  8:25   ` Sarah Walker
@ 2023-08-16 10:56     ` Paul Cercueil
  -1 siblings, 0 replies; 77+ messages in thread
From: Paul Cercueil @ 2023-08-16 10:56 UTC (permalink / raw)
  To: Sarah Walker, dri-devel
  Cc: matthew.brost, hns, linux-kernel, mripard, afd, luben.tuikov,
	dakr, donald.robson, tzimmermann, boris.brezillon,
	christian.koenig, faith.ekstrand

Hi Sarah,

On Wednesday, 16 August 2023 at 09:25 +0100, Sarah Walker wrote:
> Add power management to the driver, using runtime pm. The power off
> sequence depends on firmware commands which are not implemented in
> this
> patch.
> 
> Changes since v4:
> - Suspend runtime PM before unplugging device on rmmod
> 
> Changes since v3:
> - Don't power device when calling pvr_device_gpu_fini()
> - Documentation for pvr_dev->lost has been improved
> - pvr_power_init() renamed to pvr_watchdog_init()
> - Use drm_dev_{enter,exit}
> 
> Changes since v2:
> - Use runtime PM
> - Implement watchdog
> 
> Signed-off-by: Sarah Walker <sarah.walker@imgtec.com>
> ---
>  drivers/gpu/drm/imagination/Makefile     |   1 +
>  drivers/gpu/drm/imagination/pvr_device.c |  23 +-
>  drivers/gpu/drm/imagination/pvr_device.h |  22 ++
>  drivers/gpu/drm/imagination/pvr_drv.c    |  20 +-
>  drivers/gpu/drm/imagination/pvr_power.c  | 271
> +++++++++++++++++++++++
>  drivers/gpu/drm/imagination/pvr_power.h  |  39 ++++
>  6 files changed, 373 insertions(+), 3 deletions(-)
>  create mode 100644 drivers/gpu/drm/imagination/pvr_power.c
>  create mode 100644 drivers/gpu/drm/imagination/pvr_power.h
> 
> diff --git a/drivers/gpu/drm/imagination/Makefile
> b/drivers/gpu/drm/imagination/Makefile
> index 8fcabc1bea36..235e2d329e29 100644
> --- a/drivers/gpu/drm/imagination/Makefile
> +++ b/drivers/gpu/drm/imagination/Makefile
> @@ -10,6 +10,7 @@ powervr-y := \
>         pvr_fw.o \
>         pvr_gem.o \
>         pvr_mmu.o \
> +       pvr_power.o \
>         pvr_vm.o
>  
>  obj-$(CONFIG_DRM_POWERVR) += powervr.o
> diff --git a/drivers/gpu/drm/imagination/pvr_device.c
> b/drivers/gpu/drm/imagination/pvr_device.c
> index ef8f7a2ff1a9..5dbd05f21238 100644
> --- a/drivers/gpu/drm/imagination/pvr_device.c
> +++ b/drivers/gpu/drm/imagination/pvr_device.c
> @@ -5,6 +5,7 @@
>  #include "pvr_device_info.h"
>  
>  #include "pvr_fw.h"
> +#include "pvr_power.h"
>  #include "pvr_rogue_cr_defs.h"
>  #include "pvr_vm.h"
>  
> @@ -357,6 +358,8 @@ pvr_device_gpu_fini(struct pvr_device *pvr_dev)
>  int
>  pvr_device_init(struct pvr_device *pvr_dev)
>  {
> +       struct drm_device *drm_dev = from_pvr_device(pvr_dev);
> +       struct device *dev = drm_dev->dev;
>         int err;
>  
>         /* Enable and initialize clocks required for the device to
> operate. */
> @@ -364,13 +367,29 @@ pvr_device_init(struct pvr_device *pvr_dev)
>         if (err)
>                 return err;
>  
> +       /* Explicitly power the GPU so we can access control
> registers before the FW is booted. */
> +       err = pm_runtime_resume_and_get(dev);
> +       if (err)
> +               return err;
> +
>         /* Map the control registers into memory. */
>         err = pvr_device_reg_init(pvr_dev);
>         if (err)
> -               return err;
> +               goto err_pm_runtime_put;
>  
>         /* Perform GPU-specific initialization steps. */
> -       return pvr_device_gpu_init(pvr_dev);
> +       err = pvr_device_gpu_init(pvr_dev);
> +       if (err)
> +               goto err_pm_runtime_put;
> +
> +       pm_runtime_put(dev);
> +
> +       return 0;
> +
> +err_pm_runtime_put:
> +       pm_runtime_put_sync_suspend(dev);
> +
> +       return err;
>  }
>  
>  /**
> diff --git a/drivers/gpu/drm/imagination/pvr_device.h
> b/drivers/gpu/drm/imagination/pvr_device.h
> index 990363e433d7..6d58ecfbc838 100644
> --- a/drivers/gpu/drm/imagination/pvr_device.h
> +++ b/drivers/gpu/drm/imagination/pvr_device.h
> @@ -135,6 +135,28 @@ struct pvr_device {
>  
>         /** @fw_dev: Firmware related data. */
>         struct pvr_fw_device fw_dev;
> +
> +       struct {
> +               /** @work: Work item for watchdog callback. */
> +               struct delayed_work work;
> +
> +               /** @old_kccb_cmds_executed: KCCB command execution
> count at last watchdog poll. */
> +               u32 old_kccb_cmds_executed;
> +
> +               /** @kccb_stall_count: Number of watchdog polls KCCB
> has been stalled for. */
> +               u32 kccb_stall_count;
> +       } watchdog;
> +
> +       /**
> +        * @lost: %true if the device has been lost.
> +        *
> +        * This variable is set if the device has become
> irretrievably unavailable, e.g. if the
> +        * firmware processor has stopped responding and can not be
> revived via a hard reset.
> +        */
> +       bool lost;
> +
> +       /** @sched_wq: Workqueue for schedulers. */
> +       struct workqueue_struct *sched_wq;
>  };
>  
>  /**
> diff --git a/drivers/gpu/drm/imagination/pvr_drv.c
> b/drivers/gpu/drm/imagination/pvr_drv.c
> index 0d51177b506c..0331dc549116 100644
> --- a/drivers/gpu/drm/imagination/pvr_drv.c
> +++ b/drivers/gpu/drm/imagination/pvr_drv.c
> @@ -4,6 +4,7 @@
>  #include "pvr_device.h"
>  #include "pvr_drv.h"
>  #include "pvr_gem.h"
> +#include "pvr_power.h"
>  #include "pvr_rogue_defs.h"
>  #include "pvr_rogue_fwif_client.h"
>  #include "pvr_rogue_fwif_shared.h"
> @@ -1279,9 +1280,16 @@ pvr_probe(struct platform_device *plat_dev)
>  
>         platform_set_drvdata(plat_dev, drm_dev);
>  
> +       devm_pm_runtime_enable(&plat_dev->dev);
> +       pm_runtime_mark_last_busy(&plat_dev->dev);
> +
> +       pm_runtime_set_autosuspend_delay(&plat_dev->dev, 50);
> +       pm_runtime_use_autosuspend(&plat_dev->dev);
> +       pvr_watchdog_init(pvr_dev);
> +
>         err = pvr_device_init(pvr_dev);
>         if (err)
> -               return err;
> +               goto err_watchdog_fini;
>  
>         err = drm_dev_register(drm_dev, 0);
>         if (err)
> @@ -1292,6 +1300,9 @@ pvr_probe(struct platform_device *plat_dev)
>  err_device_fini:
>         pvr_device_fini(pvr_dev);
>  
> +err_watchdog_fini:
> +       pvr_watchdog_fini(pvr_dev);
> +
>         return err;
>  }
>  
> @@ -1301,8 +1312,10 @@ pvr_remove(struct platform_device *plat_dev)
>         struct drm_device *drm_dev = platform_get_drvdata(plat_dev);
>         struct pvr_device *pvr_dev = to_pvr_device(drm_dev);
>  
> +       pm_runtime_suspend(drm_dev->dev);
>         drm_dev_unplug(drm_dev);
>         pvr_device_fini(pvr_dev);
> +       pvr_watchdog_fini(pvr_dev);
>  
>         return 0;
>  }
> @@ -1313,11 +1326,16 @@ static const struct of_device_id dt_match[] =
> {
>  };
>  MODULE_DEVICE_TABLE(of, dt_match);
>  
> +static const struct dev_pm_ops pvr_pm_ops = {
> +       SET_RUNTIME_PM_OPS(pvr_power_device_suspend,
> pvr_power_device_resume, pvr_power_device_idle)
> +};
> +
>  static struct platform_driver pvr_driver = {
>         .probe = pvr_probe,
>         .remove = pvr_remove,
>         .driver = {
>                 .name = PVR_DRIVER_NAME,
> +               .pm = &pvr_pm_ops,
>                 .of_match_table = dt_match,
>         },
>  };
> diff --git a/drivers/gpu/drm/imagination/pvr_power.c
> b/drivers/gpu/drm/imagination/pvr_power.c
> new file mode 100644
> index 000000000000..a494fed92e81
> --- /dev/null
> +++ b/drivers/gpu/drm/imagination/pvr_power.c
> @@ -0,0 +1,271 @@
> +// SPDX-License-Identifier: GPL-2.0 OR MIT
> +/* Copyright (c) 2023 Imagination Technologies Ltd. */
> +
> +#include "pvr_device.h"
> +#include "pvr_fw.h"
> +#include "pvr_power.h"
> +#include "pvr_rogue_fwif.h"
> +
> +#include <drm/drm_drv.h>
> +#include <drm/drm_managed.h>
> +#include <linux/clk.h>
> +#include <linux/interrupt.h>
> +#include <linux/mutex.h>
> +#include <linux/platform_device.h>
> +#include <linux/pm_runtime.h>
> +#include <linux/timer.h>
> +#include <linux/types.h>
> +#include <linux/workqueue.h>
> +
> +#define POWER_SYNC_TIMEOUT_US (1000000) /* 1s */
> +
> +#define WATCHDOG_TIME_MS (500)
> +
> +static int
> +pvr_power_send_command(struct pvr_device *pvr_dev, struct
> rogue_fwif_kccb_cmd *pow_cmd)
> +{
> +       /* TODO: implement */
> +       return -ENODEV;
> +}
> +
> +static int
> +pvr_power_request_idle(struct pvr_device *pvr_dev)
> +{
> +       struct rogue_fwif_kccb_cmd pow_cmd;
> +
> +       /* Send FORCED_IDLE request to FW. */
> +       pow_cmd.cmd_type = ROGUE_FWIF_KCCB_CMD_POW;
> +       pow_cmd.cmd_data.pow_data.pow_type =
> ROGUE_FWIF_POW_FORCED_IDLE_REQ;
> +       pow_cmd.cmd_data.pow_data.power_req_data.pow_request_type =
> ROGUE_FWIF_POWER_FORCE_IDLE;
> +
> +       return pvr_power_send_command(pvr_dev, &pow_cmd);
> +}
> +
> +static int
> +pvr_power_request_pwr_off(struct pvr_device *pvr_dev)
> +{
> +       struct rogue_fwif_kccb_cmd pow_cmd;
> +
> +       /* Send POW_OFF request to firmware. */
> +       pow_cmd.cmd_type = ROGUE_FWIF_KCCB_CMD_POW;
> +       pow_cmd.cmd_data.pow_data.pow_type = ROGUE_FWIF_POW_OFF_REQ;
> +       pow_cmd.cmd_data.pow_data.power_req_data.forced = true;
> +
> +       return pvr_power_send_command(pvr_dev, &pow_cmd);
> +}
> +
> +static int
> +pvr_power_fw_disable(struct pvr_device *pvr_dev, bool hard_reset)
> +{
> +       if (!hard_reset) {
> +               int err;
> +
> +               cancel_delayed_work_sync(&pvr_dev->watchdog.work);
> +
> +               err = pvr_power_request_idle(pvr_dev);
> +               if (err)
> +                       return err;
> +
> +               err = pvr_power_request_pwr_off(pvr_dev);
> +               if (err)
> +                       return err;
> +       }
> +
> +       /* TODO: stop firmware */
> +       return -ENODEV;
> +}
> +
> +static int
> +pvr_power_fw_enable(struct pvr_device *pvr_dev)
> +{
> +       int err;
> +
> +       /* TODO: start firmware */
> +       err = -ENODEV;
> +       if (err)
> +               return err;
> +
> +       queue_delayed_work(pvr_dev->sched_wq, &pvr_dev-
> >watchdog.work,
> +                          msecs_to_jiffies(WATCHDOG_TIME_MS));
> +
> +       return 0;
> +}
> +
> +bool
> +pvr_power_is_idle(struct pvr_device *pvr_dev)
> +{
> +       /* TODO: implement */
> +       return true;
> +}
> +
> +static bool
> +pvr_watchdog_kccb_stalled(struct pvr_device *pvr_dev)
> +{
> +       /* TODO: implement */
> +       return false;
> +}
> +
> +static void
> +pvr_watchdog_worker(struct work_struct *work)
> +{
> +       struct pvr_device *pvr_dev = container_of(work, struct
> pvr_device,
> +                                                
> watchdog.work.work);
> +       bool stalled;
> +
> +       if (pvr_dev->lost)
> +               return;
> +
> +       if (pm_runtime_get_if_in_use(from_pvr_device(pvr_dev)->dev)
> <= 0)
> +               goto out_requeue;
> +
> +       stalled = pvr_watchdog_kccb_stalled(pvr_dev);
> +
> +       if (stalled) {
> +               drm_err(from_pvr_device(pvr_dev), "FW stalled, trying
> hard reset");
> +
> +               pvr_power_reset(pvr_dev, true);
> +               /* Device may be lost at this point. */
> +       }
> +
> +       pm_runtime_put(from_pvr_device(pvr_dev)->dev);
> +
> +out_requeue:
> +       if (!pvr_dev->lost) {
> +               queue_delayed_work(pvr_dev->sched_wq, &pvr_dev-
> >watchdog.work,
> +                                 
> msecs_to_jiffies(WATCHDOG_TIME_MS));
> +       }
> +}
> +
> +/**
> + * pvr_watchdog_init() - Initialise watchdog for device
> + * @pvr_dev: Target PowerVR device.
> + *
> + * Returns:
> + *  * 0 on success, or
> + *  * -%ENOMEM on out of memory.
> + */
> +int
> +pvr_watchdog_init(struct pvr_device *pvr_dev)
> +{
> +       INIT_DELAYED_WORK(&pvr_dev->watchdog.work,
> pvr_watchdog_worker);
> +
> +       return 0;
> +}
> +
> +int
> +pvr_power_device_suspend(struct device *dev)
> +{
> +       struct platform_device *plat_dev = to_platform_device(dev);
> +       struct drm_device *drm_dev = platform_get_drvdata(plat_dev);
> +       struct pvr_device *pvr_dev = to_pvr_device(drm_dev);
> +       int idx;
> +
> +       if (!drm_dev_enter(drm_dev, &idx))
> +               return -EIO;
> +
> +       clk_disable_unprepare(pvr_dev->mem_clk);
> +       clk_disable_unprepare(pvr_dev->sys_clk);
> +       clk_disable_unprepare(pvr_dev->core_clk);
> +
> +       drm_dev_exit(idx);
> +
> +       return 0;
> +}
> +
> +int
> +pvr_power_device_resume(struct device *dev)
> +{
> +       struct platform_device *plat_dev = to_platform_device(dev);
> +       struct drm_device *drm_dev = platform_get_drvdata(plat_dev);
> +       struct pvr_device *pvr_dev = to_pvr_device(drm_dev);
> +       int idx;
> +       int err;
> +
> +       if (!drm_dev_enter(drm_dev, &idx))
> +               return -EIO;
> +
> +       err = clk_prepare_enable(pvr_dev->core_clk);
> +       if (err)
> +               goto err_drm_dev_exit;
> +
> +       err = clk_prepare_enable(pvr_dev->sys_clk);
> +       if (err)
> +               goto err_core_clk_disable;
> +
> +       err = clk_prepare_enable(pvr_dev->mem_clk);
> +       if (err)
> +               goto err_sys_clk_disable;
> +
> +       drm_dev_exit(idx);
> +
> +       return 0;
> +
> +err_sys_clk_disable:
> +       clk_disable_unprepare(pvr_dev->sys_clk);
> +
> +err_core_clk_disable:
> +       clk_disable_unprepare(pvr_dev->core_clk);
> +
> +err_drm_dev_exit:
> +       drm_dev_exit(idx);
> +
> +       return err;
> +}
> +
> +int
> +pvr_power_device_idle(struct device *dev)
> +{
> +       struct platform_device *plat_dev = to_platform_device(dev);
> +       struct drm_device *drm_dev = platform_get_drvdata(plat_dev);
> +       struct pvr_device *pvr_dev = to_pvr_device(drm_dev);
> +
> +       return pvr_power_is_idle(pvr_dev) ? 0 : -EBUSY;
> +}

You have two options here. Either you require CONFIG_PM, in which case
you should add a dependency on it in the Kconfig and use
RUNTIME_PM_OPS() instead of SET_RUNTIME_PM_OPS().

Otherwise, I would suggest using EXPORT_NS_RUNTIME_DEV_PM_OPS() here,
which allows you to make pvr_power_device_{suspend,resume,idle} static
and have them garbage-collected in the !CONFIG_PM case.

You can then have an "extern struct dev_pm_ops pvr_power_pm_ops;" in
your header, and reference it in the driver using the pm_ptr() macro.
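
Something along these lines (an untested sketch -- please double-check the
exact macro arguments against <linux/pm_runtime.h>; the "PVR" namespace is
just an example):

	/* pvr_power.c */
	static int pvr_power_device_suspend(struct device *dev) { ... }
	static int pvr_power_device_resume(struct device *dev) { ... }
	static int pvr_power_device_idle(struct device *dev) { ... }

	EXPORT_NS_RUNTIME_DEV_PM_OPS(pvr_power_pm_ops,
				     pvr_power_device_suspend,
				     pvr_power_device_resume,
				     pvr_power_device_idle, PVR);

	/* pvr_power.h */
	extern const struct dev_pm_ops pvr_power_pm_ops;

	/* pvr_drv.c */
	static struct platform_driver pvr_driver = {
		...
		.driver = {
			.name = PVR_DRIVER_NAME,
			.pm = pm_ptr(&pvr_power_pm_ops),
			.of_match_table = dt_match,
		},
	};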

Cheers,
-Paul

> +
> +/**
> + * pvr_power_reset() - Reset the GPU
> + * @pvr_dev: Device pointer
> + * @hard_reset: %true for hard reset, %false for soft reset
> + *
> + * If @hard_reset is %false and the FW processor fails to respond
> during the reset process, this
> + * function will attempt a hard reset.
> + *
> + * If a hard reset fails then the GPU device is reported as lost.
> + *
> + * Returns:
> + *  * 0 on success, or
> + *  * Any error code returned by pvr_power_get, pvr_power_fw_disable
> or pvr_power_fw_enable().
> + */
> +int
> +pvr_power_reset(struct pvr_device *pvr_dev, bool hard_reset)
> +{
> +       /* TODO: Implement hard reset. */
> +       int err;
> +
> +       /*
> +        * Take a power reference during the reset. This should
> prevent any interference with the
> +        * power state during reset.
> +        */
> +       WARN_ON(pvr_power_get(pvr_dev));
> +
> +       err = pvr_power_fw_disable(pvr_dev, false);
> +       if (err)
> +               goto err_power_put;
> +
> +       err = pvr_power_fw_enable(pvr_dev);
> +
> +err_power_put:
> +       pvr_power_put(pvr_dev);
> +
> +       return err;
> +}
> +
> +/**
> + * pvr_watchdog_fini() - Shutdown watchdog for device
> + * @pvr_dev: Target PowerVR device.
> + */
> +void
> +pvr_watchdog_fini(struct pvr_device *pvr_dev)
> +{
> +       cancel_delayed_work_sync(&pvr_dev->watchdog.work);
> +}
> diff --git a/drivers/gpu/drm/imagination/pvr_power.h
> b/drivers/gpu/drm/imagination/pvr_power.h
> new file mode 100644
> index 000000000000..439f08d13655
> --- /dev/null
> +++ b/drivers/gpu/drm/imagination/pvr_power.h
> @@ -0,0 +1,39 @@
> +/* SPDX-License-Identifier: GPL-2.0 OR MIT */
> +/* Copyright (c) 2023 Imagination Technologies Ltd. */
> +
> +#ifndef PVR_POWER_H
> +#define PVR_POWER_H
> +
> +#include "pvr_device.h"
> +
> +#include <linux/mutex.h>
> +#include <linux/pm_runtime.h>
> +
> +int pvr_watchdog_init(struct pvr_device *pvr_dev);
> +void pvr_watchdog_fini(struct pvr_device *pvr_dev);
> +
> +bool pvr_power_is_idle(struct pvr_device *pvr_dev);
> +
> +int pvr_power_device_suspend(struct device *dev);
> +int pvr_power_device_resume(struct device *dev);
> +int pvr_power_device_idle(struct device *dev);
> +
> +int pvr_power_reset(struct pvr_device *pvr_dev, bool hard_reset);
> +
> +static __always_inline int
> +pvr_power_get(struct pvr_device *pvr_dev)
> +{
> +       struct drm_device *drm_dev = from_pvr_device(pvr_dev);
> +
> +       return pm_runtime_resume_and_get(drm_dev->dev);
> +}
> +
> +static __always_inline int
> +pvr_power_put(struct pvr_device *pvr_dev)
> +{
> +       struct drm_device *drm_dev = from_pvr_device(pvr_dev);
> +
> +       return pm_runtime_put(drm_dev->dev);
> +}
> +
> +#endif /* PVR_POWER_H */


^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v5 08/17] drm/imagination: Add GEM and VM related code
  2023-08-16  8:25   ` Sarah Walker
@ 2023-08-17 22:42     ` Jann Horn
  -1 siblings, 0 replies; 77+ messages in thread
From: Jann Horn @ 2023-08-17 22:42 UTC (permalink / raw)
  To: Sarah Walker
  Cc: matthew.brost, hns, linux-kernel, mripard, afd, Matt Coster,
	luben.tuikov, dakr, dri-devel, tzimmermann, boris.brezillon,
	christian.koenig, faith.ekstrand, donald.robson

Hi!

Thanks, I think it's great that Imagination is writing an upstream
driver for PowerVR. :)

On Wed, Aug 16, 2023 at 10:25 AM Sarah Walker <sarah.walker@imgtec.com> wrote:
> Add a GEM implementation based on drm_gem_shmem, and support code for the
> PowerVR GPU MMU. The GPU VA manager is used for address space management.
[...]
> +/**
> + * DOC: Flags for DRM_IOCTL_PVR_CREATE_BO (kernel-only)
> + *
> + * Kernel-only values allowed in &pvr_gem_object->flags. The majority of options
> + * for this field are specified in the UAPI header "pvr_drm.h" with a
> + * DRM_PVR_BO_ prefix. To distinguish these internal options (which must exist
> + * in ranges marked as "reserved" in the UAPI header), we drop the DRM prefix.
> + * The public options should be used directly, DRM prefix and all.
> + *
> + * To avoid potentially confusing gaps in the UAPI options, these kernel-only
> + * options are specified "in reverse", starting at bit 63.
> + *
> + * We use "reserved" to refer to bits defined here and not exposed in the UAPI.
> + * Bits not defined anywhere are "undefined".
> + *
> + * Creation options
> + *    These use the prefix PVR_BO_CREATE_.
> + *
> + *    *There are currently no kernel-only flags in this group.*
> + *
> + * Device mapping options
> + *    These use the prefix PVR_BO_DEVICE_.
> + *
> + *    *There are currently no kernel-only flags in this group.*
> + *
> + * CPU mapping options
> + *    These use the prefix PVR_BO_CPU_.
> + *
> + *    :CACHED: By default, all GEM objects are mapped write-combined on the
> + *       CPU. Set this flag to override this behaviour and map the object
> + *       cached.
> + */
> +#define PVR_BO_CPU_CACHED BIT_ULL(63)
> +
> +#define PVR_BO_FW_NO_CLEAR_ON_RESET BIT_ULL(62)
> +
> +/* Bits 62..3 are undefined. */
> +/* Bits 2..0 are defined in the UAPI. */
> +
> +/* Other utilities. */
> +#define PVR_BO_UNDEFINED_MASK GENMASK_ULL(61, 3)
> +#define PVR_BO_RESERVED_MASK (PVR_BO_UNDEFINED_MASK | GENMASK_ULL(63, 63))

In commit 1a9c568fb559 ("drm/imagination: Rework firmware object
initialisation") in powervr-next, PVR_BO_FW_NO_CLEAR_ON_RESET (bit 62)
was added in the kernel-only flags group, but the mask
PVR_BO_RESERVED_MASK (which is used in pvr_ioctl_create_bo to detect
kernel-only and reserved flags) looks like it wasn't changed to
include bit 62. I think that means it becomes possible for userspace
to pass this bit in via pvr_ioctl_create_bo()?
If my understanding is correct and that was unintentional, it might be
a good idea to change these defines:

#define PVR_BO_UNDEFINED_MASK GENMASK_ULL(61, 3)
#define PVR_BO_RESERVED_MASK (PVR_BO_UNDEFINED_MASK | GENMASK_ULL(63, 63))

into something like the following, to avoid similar mishaps in the future:

/* first bit that is not used for UAPI BO options */
#define PVR_BO_FIRST_RESERVED_BIT 3
#define PVR_BO_UNDEFINED_MASK GENMASK_ULL(61, PVR_BO_FIRST_RESERVED_BIT)
#define PVR_BO_RESERVED_MASK GENMASK_ULL(63, PVR_BO_FIRST_RESERVED_BIT)

> +
> +/*
> + * All firmware-mapped memory uses (mostly) the same flags. Specifically,
> + * firmware-mapped memory should be:
> + *  * Read/write on the device,
> + *  * Read/write on the CPU, and
> + *  * Write-combined on the CPU.
> + *
> + * The only variation is in caching on the device.
> + */
> +#define PVR_BO_FW_FLAGS_DEVICE_CACHED (ULL(0))
> +#define PVR_BO_FW_FLAGS_DEVICE_UNCACHED DRM_PVR_BO_DEVICE_BYPASS_CACHE
[...]
> +/**
> + * pvr_page_table_l2_entry_raw_set() - Write a valid entry into a raw level 2
> + *                                     page table.
> + * @entry: Target raw level 2 page table entry.
> + * @child_table_dma_addr: DMA address of the level 1 page table to be
> + *                        associated with @entry.
> + *
> + * When calling this function, @child_table_dma_addr must be a valid DMA
> + * address and a multiple of %ROGUE_MMUCTRL_PC_DATA_PD_BASE_ALIGNSIZE.
> + */
> +static void
> +pvr_page_table_l2_entry_raw_set(struct pvr_page_table_l2_entry_raw *entry,
> +                               dma_addr_t child_table_dma_addr)
> +{
> +       child_table_dma_addr >>= ROGUE_MMUCTRL_PC_DATA_PD_BASE_ALIGNSHIFT;
> +
> +       entry->val =
> +               PVR_PAGE_TABLE_FIELD_PREP(2, PC, VALID, true) |
> +               PVR_PAGE_TABLE_FIELD_PREP(2, PC, ENTRY_PENDING, false) |
> +               PVR_PAGE_TABLE_FIELD_PREP(2, PC, PD_BASE, child_table_dma_addr);
> +}

For this function and others that manipulate page table entries,
please use some kernel helper that ensures that the store can't tear
(at least WRITE_ONCE() - that can still tear on 32-bit, but I see the
driver depends on ARM64, so that's not a problem).
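
i.e. something like:

	WRITE_ONCE(entry->val,
		   PVR_PAGE_TABLE_FIELD_PREP(2, PC, VALID, true) |
		   PVR_PAGE_TABLE_FIELD_PREP(2, PC, ENTRY_PENDING, false) |
		   PVR_PAGE_TABLE_FIELD_PREP(2, PC, PD_BASE, child_table_dma_addr));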

[...]
> +/**
> + * pvr_page_table_l2_insert() - Insert an entry referring to a level 1 page
> + * table into a level 2 page table.
> + * @op_ctx: Target MMU op context pointing at the entry to insert the L1 page
> + * table into.
> + * @child_table: Target level 1 page table to be referenced by the new entry.
> + *
> + * It is the caller's responsibility to ensure @op_ctx.curr_page points to a
> + * valid L2 entry.
> + */
> +static void
> +pvr_page_table_l2_insert(struct pvr_mmu_op_context *op_ctx,
> +                        struct pvr_page_table_l1 *child_table)
> +{
> +       struct pvr_page_table_l2 *l2_table =
> +               &op_ctx->mmu_ctx->page_table_l2;
> +       struct pvr_page_table_l2_entry_raw *entry_raw =
> +               pvr_page_table_l2_get_entry_raw(l2_table,
> +                                               op_ctx->curr_page.l2_idx);
> +
> +       pvr_page_table_l2_entry_raw_set(entry_raw,
> +                                       child_table->backing_page.dma_addr);

Can you maybe add comments in functions that set page table entries to
document who is responsible for using a memory barrier (like wmb()) to
ensure that the creation of a page table entry is ordered after the
thing it points to is fully initialized, so that the GPU can't end up
concurrently walking into a page table and observing its old contents
from before it was zero-initialized?
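
For example (a rough sketch only):

	/*
	 * Make sure the newly initialized child table is visible before
	 * the L2 entry that points at it is made valid, so a concurrent
	 * GPU walk can't observe stale contents.
	 */
	wmb();

	pvr_page_table_l2_entry_raw_set(entry_raw,
					child_table->backing_page.dma_addr);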

> +
> +       child_table->parent = l2_table;
> +       child_table->parent_idx = op_ctx->curr_page.l2_idx;
> +       l2_table->entries[op_ctx->curr_page.l2_idx] = child_table;
> +       ++l2_table->entry_count;
> +       op_ctx->curr_page.l1_table = child_table;
> +}
[...]
> +/**
> + * pvr_page_table_l1_get_or_insert() - Retrieves (optionally inserting if
> + * necessary) a level 1 page table from the specified level 2 page table entry.
> + * @op_ctx: Target MMU op context.
> + * @should_insert: [IN] Specifies whether new page tables should be inserted
> + * when empty page table entries are encountered during traversal.
> + *
> + * Return:
> + *  * 0 on success, or
> + *
> + *    If @should_insert is %false:
> + *     * -%ENXIO if a level 1 page table would have been inserted.
> + *
> + *    If @should_insert is %true:
> + *     * Any error encountered while inserting the level 1 page table.
> + */
> +static int
> +pvr_page_table_l1_get_or_insert(struct pvr_mmu_op_context *op_ctx,
> +                               bool should_insert)
> +{
> +       struct pvr_page_table_l2 *l2_table =
> +               &op_ctx->mmu_ctx->page_table_l2;
> +       struct pvr_page_table_l1 *table;
> +       int err;
> +
> +       if (pvr_page_table_l2_entry_is_valid(l2_table,
> +                                            op_ctx->curr_page.l2_idx)) {
> +               op_ctx->curr_page.l1_table =
> +                       l2_table->entries[op_ctx->curr_page.l2_idx];
> +               return 0;
> +       }
> +
> +       if (!should_insert)
> +               return -ENXIO;
> +
> +       /* Take a prealloced table. */
> +       table = op_ctx->l1_free_tables;
> +       if (!table)
> +               return -ENOMEM;
> +
> +       err = pvr_page_table_l1_init(table, op_ctx->mmu_ctx->pvr_dev);

I think when we have a preallocated table here, it was allocated in
pvr_page_table_l1_alloc(), which already called
pvr_page_table_l1_init()? So it looks to me like this second
pvr_page_table_l1_init() call will allocate another page and leak the
old allocation.

> +       if (err)
> +               return err;
> +
> +       /* Pop */
> +       op_ctx->l1_free_tables = table->next_free;
> +       table->next_free = NULL;
> +
> +       pvr_page_table_l2_insert(op_ctx, table);
> +
> +       return 0;
> +}
[...]
> +/**
> + * pvr_mmu_op_context_create() - Create an MMU op context.
> + * @ctx: MMU context associated with owning VM context.
> + * @sgt: Scatter gather table containing pages pinned for use by this context.
> + * @sgt_offset: Start offset of the requested device-virtual memory mapping.
> + * @size: Size in bytes of the requested device-virtual memory mapping. For an
> + * unmapping, this should be zero so that no page tables are allocated.
> + *
> + * Returns:
> + *  * Newly created MMU op context object on success, or
> + *  * -%ENOMEM if no memory is available,
> + *  * Any error code returned by pvr_page_table_l2_init().
> + */
> +struct pvr_mmu_op_context *
> +pvr_mmu_op_context_create(struct pvr_mmu_context *ctx, struct sg_table *sgt,
> +                         u64 sgt_offset, u64 size)
> +{
> +       int err;
> +
> +       struct pvr_mmu_op_context *op_ctx =
> +               kzalloc(sizeof(*op_ctx), GFP_KERNEL);
> +
> +       if (!op_ctx)
> +               return ERR_PTR(-ENOMEM);
> +
> +       op_ctx->mmu_ctx = ctx;
> +       op_ctx->map.sgt = sgt;
> +       op_ctx->map.sgt_offset = sgt_offset;
> +       op_ctx->sync_level_required = PVR_MMU_SYNC_LEVEL_NONE;
> +
> +       if (size) {
> +               const u32 l1_start_idx = pvr_page_table_l2_idx(sgt_offset);
> +               const u32 l1_end_idx = pvr_page_table_l2_idx(sgt_offset + size);
> +               const u32 l1_count = l1_end_idx - l1_start_idx + 1;
> +               const u32 l0_start_idx = pvr_page_table_l1_idx(sgt_offset);
> +               const u32 l0_end_idx = pvr_page_table_l1_idx(sgt_offset + size);
> +               const u32 l0_count = l0_end_idx - l0_start_idx + 1;

Shouldn't the page table indices be calculated from the device_addr
(which is not currently passed in by pvr_vm_map())? As far as I can
tell, sgt_offset doesn't have anything to do with the device address
at which this mapping will be inserted in the page tables?

> +
> +               /*
> +                * Alloc and push page table entries until we have enough of
> +                * each type, ending with linked lists of l0 and l1 entries in
> +                * reverse order.
> +                */
> +               for (int i = 0; i < l1_count; i++) {
> +                       struct pvr_page_table_l1 *l1_tmp =
> +                               pvr_page_table_l1_alloc(ctx);
> +
> +                       err = PTR_ERR_OR_ZERO(l1_tmp);
> +                       if (err)
> +                               goto err_cleanup;
> +
> +                       l1_tmp->next_free = op_ctx->l1_free_tables;
> +                       op_ctx->l1_free_tables = l1_tmp;
> +               }
> +
> +               for (int i = 0; i < l0_count; i++) {
> +                       struct pvr_page_table_l0 *l0_tmp =
> +                               pvr_page_table_l0_alloc(ctx);
> +
> +                       err = PTR_ERR_OR_ZERO(l0_tmp);
> +                       if (err)
> +                               goto err_cleanup;
> +
> +                       l0_tmp->next_free = op_ctx->l0_free_tables;
> +                       op_ctx->l0_free_tables = l0_tmp;
> +               }
> +       }
> +
> +       return op_ctx;
> +
> +err_cleanup:
> +       pvr_mmu_op_context_destroy(op_ctx);

> +
> +       return ERR_PTR(err);
> +}

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v5 08/17] drm/imagination: Add GEM and VM related code
@ 2023-08-17 22:42     ` Jann Horn
  0 siblings, 0 replies; 77+ messages in thread
From: Jann Horn @ 2023-08-17 22:42 UTC (permalink / raw)
  To: Sarah Walker
  Cc: dri-devel, matthew.brost, luben.tuikov, tzimmermann,
	linux-kernel, mripard, afd, Matt Coster, boris.brezillon, dakr,
	donald.robson, hns, christian.koenig, faith.ekstrand

Hi!

Thanks, I think it's great that Imagination is writing an upstream
driver for PowerVR. :)

On Wed, Aug 16, 2023 at 10:25 AM Sarah Walker <sarah.walker@imgtec.com> wrote:
> Add a GEM implementation based on drm_gem_shmem, and support code for the
> PowerVR GPU MMU. The GPU VA manager is used for address space management.
[...]
> +/**
> + * DOC: Flags for DRM_IOCTL_PVR_CREATE_BO (kernel-only)
> + *
> + * Kernel-only values allowed in &pvr_gem_object->flags. The majority of options
> + * for this field are specified in the UAPI header "pvr_drm.h" with a
> + * DRM_PVR_BO_ prefix. To distinguish these internal options (which must exist
> + * in ranges marked as "reserved" in the UAPI header), we drop the DRM prefix.
> + * The public options should be used directly, DRM prefix and all.
> + *
> + * To avoid potentially confusing gaps in the UAPI options, these kernel-only
> + * options are specified "in reverse", starting at bit 63.
> + *
> + * We use "reserved" to refer to bits defined here and not exposed in the UAPI.
> + * Bits not defined anywhere are "undefined".
> + *
> + * Creation options
> + *    These use the prefix PVR_BO_CREATE_.
> + *
> + *    *There are currently no kernel-only flags in this group.*
> + *
> + * Device mapping options
> + *    These use the prefix PVR_BO_DEVICE_.
> + *
> + *    *There are currently no kernel-only flags in this group.*
> + *
> + * CPU mapping options
> + *    These use the prefix PVR_BO_CPU_.
> + *
> + *    :CACHED: By default, all GEM objects are mapped write-combined on the
> + *       CPU. Set this flag to override this behaviour and map the object
> + *       cached.
> + */
> +#define PVR_BO_CPU_CACHED BIT_ULL(63)
> +
> +#define PVR_BO_FW_NO_CLEAR_ON_RESET BIT_ULL(62)
> +
> +/* Bits 62..3 are undefined. */
> +/* Bits 2..0 are defined in the UAPI. */
> +
> +/* Other utilities. */
> +#define PVR_BO_UNDEFINED_MASK GENMASK_ULL(61, 3)
> +#define PVR_BO_RESERVED_MASK (PVR_BO_UNDEFINED_MASK | GENMASK_ULL(63, 63))

In commit 1a9c568fb559 ("drm/imagination: Rework firmware object
initialisation") in powervr-next, PVR_BO_FW_NO_CLEAR_ON_RESET (bit 62)
was added in the kernel-only flags group, but the mask
PVR_BO_RESERVED_MASK (which is used in pvr_ioctl_create_bo to detect
kernel-only and reserved flags) looks like it wasn't changed to
include bit 62. I think that means it becomes possible for userspace
to pass this bit in via pvr_ioctl_create_bo()?
If my understanding is correct and that was unintentional, it might be
a good idea to change these defines:

#define PVR_BO_UNDEFINED_MASK GENMASK_ULL(61, 3)
#define PVR_BO_RESERVED_MASK (PVR_BO_UNDEFINED_MASK | GENMASK_ULL(63, 63))

into something like this, to avoid future mishaps of the same kind:

/* first bit that is not used for UAPI BO options */
#define PVR_BO_FIRST_RESERVED_BIT 3
#define PVR_BO_UNDEFINED_MASK GENMASK_ULL(61, PVR_BO_FIRST_RESERVED_BIT)
#define PVR_BO_RESERVED_MASK GENMASK_ULL(63, PVR_BO_FIRST_RESERVED_BIT)
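
For illustration, a minimal sketch of the kind of check pvr_ioctl_create_bo()
presumably performs (hypothetical code, not the actual driver function) shows
why the mask matters; with the current definition, a userspace-supplied bit 62
passes straight through it:

/* Hypothetical flag validation, for illustration only. */
static int pvr_bo_flags_validate(u64 flags)
{
        /*
         * Reject any bit outside the public UAPI range. If
         * PVR_BO_RESERVED_MASK does not cover bit 62, the kernel-only
         * PVR_BO_FW_NO_CLEAR_ON_RESET flag is accepted here.
         */
        if (flags & PVR_BO_RESERVED_MASK)
                return -EINVAL;

        return 0;
}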

> +
> +/*
> + * All firmware-mapped memory uses (mostly) the same flags. Specifically,
> + * firmware-mapped memory should be:
> + *  * Read/write on the device,
> + *  * Read/write on the CPU, and
> + *  * Write-combined on the CPU.
> + *
> + * The only variation is in caching on the device.
> + */
> +#define PVR_BO_FW_FLAGS_DEVICE_CACHED (ULL(0))
> +#define PVR_BO_FW_FLAGS_DEVICE_UNCACHED DRM_PVR_BO_DEVICE_BYPASS_CACHE
[...]
> +/**
> + * pvr_page_table_l2_entry_raw_set() - Write a valid entry into a raw level 2
> + *                                     page table.
> + * @entry: Target raw level 2 page table entry.
> + * @child_table_dma_addr: DMA address of the level 1 page table to be
> + *                        associated with @entry.
> + *
> + * When calling this function, @child_table_dma_addr must be a valid DMA
> + * address and a multiple of %ROGUE_MMUCTRL_PC_DATA_PD_BASE_ALIGNSIZE.
> + */
> +static void
> +pvr_page_table_l2_entry_raw_set(struct pvr_page_table_l2_entry_raw *entry,
> +                               dma_addr_t child_table_dma_addr)
> +{
> +       child_table_dma_addr >>= ROGUE_MMUCTRL_PC_DATA_PD_BASE_ALIGNSHIFT;
> +
> +       entry->val =
> +               PVR_PAGE_TABLE_FIELD_PREP(2, PC, VALID, true) |
> +               PVR_PAGE_TABLE_FIELD_PREP(2, PC, ENTRY_PENDING, false) |
> +               PVR_PAGE_TABLE_FIELD_PREP(2, PC, PD_BASE, child_table_dma_addr);
> +}

For this function and others that manipulate page table entries,
please use some kernel helper that ensures that the store can't tear
(at least WRITE_ONCE() - that can still tear on 32-bit, but I see the
driver depends on ARM64, so that's not a problem).
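
As a rough sketch of what that could look like for the L2 helper quoted above
(same logic, just using a single non-tearing store):

static void
pvr_page_table_l2_entry_raw_set(struct pvr_page_table_l2_entry_raw *entry,
                                dma_addr_t child_table_dma_addr)
{
        child_table_dma_addr >>= ROGUE_MMUCTRL_PC_DATA_PD_BASE_ALIGNSHIFT;

        /* WRITE_ONCE() ensures the entry value is written in one store. */
        WRITE_ONCE(entry->val,
                   PVR_PAGE_TABLE_FIELD_PREP(2, PC, VALID, true) |
                   PVR_PAGE_TABLE_FIELD_PREP(2, PC, ENTRY_PENDING, false) |
                   PVR_PAGE_TABLE_FIELD_PREP(2, PC, PD_BASE, child_table_dma_addr));
}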

[...]
> +/**
> + * pvr_page_table_l2_insert() - Insert an entry referring to a level 1 page
> + * table into a level 2 page table.
> + * @op_ctx: Target MMU op context pointing at the entry to insert the L1 page
> + * table into.
> + * @child_table: Target level 1 page table to be referenced by the new entry.
> + *
> + * It is the caller's responsibility to ensure @op_ctx.curr_page points to a
> + * valid L2 entry.
> + */
> +static void
> +pvr_page_table_l2_insert(struct pvr_mmu_op_context *op_ctx,
> +                        struct pvr_page_table_l1 *child_table)
> +{
> +       struct pvr_page_table_l2 *l2_table =
> +               &op_ctx->mmu_ctx->page_table_l2;
> +       struct pvr_page_table_l2_entry_raw *entry_raw =
> +               pvr_page_table_l2_get_entry_raw(l2_table,
> +                                               op_ctx->curr_page.l2_idx);
> +
> +       pvr_page_table_l2_entry_raw_set(entry_raw,
> +                                       child_table->backing_page.dma_addr);

Can you maybe add comments in functions that set page table entries to
document who is responsible for using a memory barrier (like wmb()) to
ensure that the creation of a page table entry is ordered after the
thing it points to is fully initialized, so that the GPU can't end up
concurrently walking into a page table and observe its old contents
from before it was zero-initialized?

> +
> +       child_table->parent = l2_table;
> +       child_table->parent_idx = op_ctx->curr_page.l2_idx;
> +       l2_table->entries[op_ctx->curr_page.l2_idx] = child_table;
> +       ++l2_table->entry_count;
> +       op_ctx->curr_page.l1_table = child_table;
> +}
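
To make the comment request above concrete, something along these lines in the
insert path would both document and enforce the ordering (sketch only; whether
a dma_wmb() here or a flush performed elsewhere is the right primitive depends
on how the backing page is synced for the device):

        /*
         * Ensure the zero-initialised L1 table is visible to the GPU
         * before the L2 entry pointing at it is made valid, so the MMU
         * cannot walk into stale contents.
         */
        dma_wmb();
        pvr_page_table_l2_entry_raw_set(entry_raw,
                                        child_table->backing_page.dma_addr);
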
[...]
> +/**
> + * pvr_page_table_l1_get_or_insert() - Retrieves (optionally inserting if
> + * necessary) a level 1 page table from the specified level 2 page table entry.
> + * @op_ctx: Target MMU op context.
> + * @should_insert: [IN] Specifies whether new page tables should be inserted
> + * when empty page table entries are encountered during traversal.
> + *
> + * Return:
> + *  * 0 on success, or
> + *
> + *    If @should_insert is %false:
> + *     * -%ENXIO if a level 1 page table would have been inserted.
> + *
> + *    If @should_insert is %true:
> + *     * Any error encountered while inserting the level 1 page table.
> + */
> +static int
> +pvr_page_table_l1_get_or_insert(struct pvr_mmu_op_context *op_ctx,
> +                               bool should_insert)
> +{
> +       struct pvr_page_table_l2 *l2_table =
> +               &op_ctx->mmu_ctx->page_table_l2;
> +       struct pvr_page_table_l1 *table;
> +       int err;
> +
> +       if (pvr_page_table_l2_entry_is_valid(l2_table,
> +                                            op_ctx->curr_page.l2_idx)) {
> +               op_ctx->curr_page.l1_table =
> +                       l2_table->entries[op_ctx->curr_page.l2_idx];
> +               return 0;
> +       }
> +
> +       if (!should_insert)
> +               return -ENXIO;
> +
> +       /* Take a prealloced table. */
> +       table = op_ctx->l1_free_tables;
> +       if (!table)
> +               return -ENOMEM;
> +
> +       err = pvr_page_table_l1_init(table, op_ctx->mmu_ctx->pvr_dev);

I think when we have a preallocated table here, it was allocated in
pvr_page_table_l1_alloc(), which already called
pvr_page_table_l1_init()? So it looks to me like this second
pvr_page_table_l1_init() call will allocate another page and leak the
old allocation.

> +       if (err)
> +               return err;
> +
> +       /* Pop */
> +       op_ctx->l1_free_tables = table->next_free;
> +       table->next_free = NULL;
> +
> +       pvr_page_table_l2_insert(op_ctx, table);
> +
> +       return 0;
> +}
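
If that reading is right, a minimal sketch of the fix (assuming
pvr_page_table_l1_alloc() hands back a fully initialised table) would be to pop
the preallocated table without re-initialising it:

        /*
         * Take a prealloced table. It was already initialised by
         * pvr_page_table_l1_alloc(), so it must not be init'd again here,
         * or the first backing page is leaked.
         */
        table = op_ctx->l1_free_tables;
        if (!table)
                return -ENOMEM;

        /* Pop */
        op_ctx->l1_free_tables = table->next_free;
        table->next_free = NULL;

        pvr_page_table_l2_insert(op_ctx, table);

        return 0;
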
[...]
> +/**
> + * pvr_mmu_op_context_create() - Create an MMU op context.
> + * @ctx: MMU context associated with owning VM context.
> + * @sgt: Scatter gather table containing pages pinned for use by this context.
> + * @sgt_offset: Start offset of the requested device-virtual memory mapping.
> + * @size: Size in bytes of the requested device-virtual memory mapping. For an
> + * unmapping, this should be zero so that no page tables are allocated.
> + *
> + * Returns:
> + *  * Newly created MMU op context object on success, or
> + *  * -%ENOMEM if no memory is available,
> + *  * Any error code returned by pvr_page_table_l2_init().
> + */
> +struct pvr_mmu_op_context *
> +pvr_mmu_op_context_create(struct pvr_mmu_context *ctx, struct sg_table *sgt,
> +                         u64 sgt_offset, u64 size)
> +{
> +       int err;
> +
> +       struct pvr_mmu_op_context *op_ctx =
> +               kzalloc(sizeof(*op_ctx), GFP_KERNEL);
> +
> +       if (!op_ctx)
> +               return ERR_PTR(-ENOMEM);
> +
> +       op_ctx->mmu_ctx = ctx;
> +       op_ctx->map.sgt = sgt;
> +       op_ctx->map.sgt_offset = sgt_offset;
> +       op_ctx->sync_level_required = PVR_MMU_SYNC_LEVEL_NONE;
> +
> +       if (size) {
> +               const u32 l1_start_idx = pvr_page_table_l2_idx(sgt_offset);
> +               const u32 l1_end_idx = pvr_page_table_l2_idx(sgt_offset + size);
> +               const u32 l1_count = l1_end_idx - l1_start_idx + 1;
> +               const u32 l0_start_idx = pvr_page_table_l1_idx(sgt_offset);
> +               const u32 l0_end_idx = pvr_page_table_l1_idx(sgt_offset + size);
> +               const u32 l0_count = l0_end_idx - l0_start_idx + 1;

Shouldn't the page table indices be calculated from the device_addr
(which is not currently passed in by pvr_vm_map())? As far as I can
tell, sgt_offset doesn't have anything to do with the device address
at which this mapping will be inserted in the page tables?

> +
> +               /*
> +                * Alloc and push page table entries until we have enough of
> +                * each type, ending with linked lists of l0 and l1 entries in
> +                * reverse order.
> +                */
> +               for (int i = 0; i < l1_count; i++) {
> +                       struct pvr_page_table_l1 *l1_tmp =
> +                               pvr_page_table_l1_alloc(ctx);
> +
> +                       err = PTR_ERR_OR_ZERO(l1_tmp);
> +                       if (err)
> +                               goto err_cleanup;
> +
> +                       l1_tmp->next_free = op_ctx->l1_free_tables;
> +                       op_ctx->l1_free_tables = l1_tmp;
> +               }
> +
> +               for (int i = 0; i < l0_count; i++) {
> +                       struct pvr_page_table_l0 *l0_tmp =
> +                               pvr_page_table_l0_alloc(ctx);
> +
> +                       err = PTR_ERR_OR_ZERO(l0_tmp);
> +                       if (err)
> +                               goto err_cleanup;
> +
> +                       l0_tmp->next_free = op_ctx->l0_free_tables;
> +                       op_ctx->l0_free_tables = l0_tmp;
> +               }
> +       }
> +
> +       return op_ctx;
> +
> +err_cleanup:
> +       pvr_mmu_op_context_destroy(op_ctx);

> +
> +       return ERR_PTR(err);
> +}
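
For what it's worth, a sketch of the change suggested above (hypothetical, and
assuming this function grew a device_addr parameter carrying the target
device-virtual address) would derive the table counts from that address
instead, mirroring the existing arithmetic:

                const u32 l1_start_idx = pvr_page_table_l2_idx(device_addr);
                const u32 l1_end_idx = pvr_page_table_l2_idx(device_addr + size);
                const u32 l1_count = l1_end_idx - l1_start_idx + 1;
                const u32 l0_start_idx = pvr_page_table_l1_idx(device_addr);
                const u32 l0_end_idx = pvr_page_table_l1_idx(device_addr + size);
                const u32 l0_count = l0_end_idx - l0_start_idx + 1;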

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v5 13/17] drm/imagination: Implement context creation/destruction ioctls
  2023-08-16  8:25   ` Sarah Walker
@ 2023-08-17 22:42     ` Jann Horn
  -1 siblings, 0 replies; 77+ messages in thread
From: Jann Horn @ 2023-08-17 22:42 UTC (permalink / raw)
  To: Sarah Walker
  Cc: matthew.brost, hns, linux-kernel, mripard, afd, luben.tuikov,
	dakr, dri-devel, tzimmermann, boris.brezillon, christian.koenig,
	faith.ekstrand, donald.robson

On Wed, Aug 16, 2023 at 10:25 AM Sarah Walker <sarah.walker@imgtec.com> wrote:
>
> Implement ioctls for the creation and destruction of contexts. Contexts are
> used for job submission and each is associated with a particular job type.
[...]
> +/**
> + * pvr_context_create() - Create a context.
> + * @pvr_file: File to attach the created context to.
> + * @args: Context creation arguments.
> + *
> + * Return:
> + *  * 0 on success, or
> + *  * A negative error code on failure.
> + */
> +int pvr_context_create(struct pvr_file *pvr_file, struct drm_pvr_ioctl_create_context_args *args)
> +{
> +       struct pvr_device *pvr_dev = pvr_file->pvr_dev;
> +       struct pvr_context *ctx;
> +       int ctx_size;
> +       int err;
> +
> +       /* Context creation flags are currently unused and must be zero. */
> +       if (args->flags)
> +               return -EINVAL;
> +
> +       ctx_size = get_fw_obj_size(args->type);
> +       if (ctx_size < 0)
> +               return ctx_size;
> +
> +       ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
> +       if (!ctx)
> +               return -ENOMEM;
> +
> +       ctx->data_size = ctx_size;
> +       ctx->type = args->type;
> +       ctx->flags = args->flags;
> +       ctx->pvr_dev = pvr_dev;
> +       kref_init(&ctx->ref_count);
> +
> +       err = remap_priority(pvr_file, args->priority, &ctx->priority);
> +       if (err)
> +               goto err_free_ctx;
> +
> +       ctx->vm_ctx = pvr_vm_context_lookup(pvr_file, args->vm_context_handle);
> +       if (IS_ERR(ctx->vm_ctx)) {
> +               err = PTR_ERR(ctx->vm_ctx);
> +               goto err_free_ctx;
> +       }
> +
> +       ctx->data = kzalloc(ctx_size, GFP_KERNEL);
> +       if (!ctx->data) {
> +               err = -ENOMEM;
> +               goto err_put_vm;
> +       }
> +
> +       err = init_fw_objs(ctx, args, ctx->data);
> +       if (err)
> +               goto err_free_ctx_data;
> +
> +       err = pvr_fw_object_create(pvr_dev, ctx_size, PVR_BO_FW_FLAGS_DEVICE_UNCACHED,
> +                                  ctx_fw_data_init, ctx, &ctx->fw_obj);
> +       if (err)
> +               goto err_free_ctx_data;
> +
> +       err = xa_alloc(&pvr_dev->ctx_ids, &ctx->ctx_id, ctx, xa_limit_32b, GFP_KERNEL);
> +       if (err)
> +               goto err_destroy_fw_obj;
> +
> +       err = xa_alloc(&pvr_file->ctx_handles, &args->handle, ctx, xa_limit_32b, GFP_KERNEL);
> +       if (err)
> +               goto err_release_id;

This bailout looks a bit dodgy. We have already inserted ctx into
&pvr_dev->ctx_ids, and now we just take it out again. If someone could
concurrently call pvr_context_lookup_id() on the ID we just allocated
(I don't understand enough about what's going on here at a high level
to be able to tell if that's possible), I think they would be able to
elevate the ctx->ref_count from 1 to 2, and then on the bailout path
we'll just free the ctx without looking at the refcount.

If this can't happen, it might be a good idea to add a comment
explaining why. If it can happen, I guess one way to fix it would be
to replace this last bailout with a call to pvr_context_put()?


> +
> +       return 0;
> +
> +err_release_id:
> +       xa_erase(&pvr_dev->ctx_ids, ctx->ctx_id);
> +
> +err_destroy_fw_obj:
> +       pvr_fw_object_destroy(ctx->fw_obj);
> +
> +err_free_ctx_data:
> +       kfree(ctx->data);
> +
> +err_put_vm:
> +       pvr_vm_context_put(ctx->vm_ctx);
> +
> +err_free_ctx:
> +       kfree(ctx);
> +       return err;
> +}
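
If the race is possible, a minimal sketch of the suggested alternative
(assuming pvr_context_put() on the final reference tears down the ID, firmware
object, VM reference and the context itself) would replace only that last
bailout:

        err = xa_alloc(&pvr_file->ctx_handles, &args->handle, ctx,
                       xa_limit_32b, GFP_KERNEL);
        if (err) {
                /*
                 * The context is already reachable through pvr_dev->ctx_ids,
                 * so another thread may hold a reference by now. Drop ours
                 * instead of freeing the context directly.
                 */
                pvr_context_put(ctx);
                return err;
        }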

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v5 14/17] drm/imagination: Implement job submission and scheduling
  2023-08-16  8:25   ` Sarah Walker
@ 2023-08-17 22:42     ` Jann Horn
  -1 siblings, 0 replies; 77+ messages in thread
From: Jann Horn @ 2023-08-17 22:42 UTC (permalink / raw)
  To: Sarah Walker
  Cc: matthew.brost, hns, linux-kernel, mripard, afd, luben.tuikov,
	dakr, dri-devel, tzimmermann, boris.brezillon, christian.koenig,
	faith.ekstrand, donald.robson

On Wed, Aug 16, 2023 at 10:25 AM Sarah Walker <sarah.walker@imgtec.com> wrote:
> Implement job submission ioctl. Job scheduling is implemented using
> drm_sched.
[...]
> +/**
> + * pvr_job_data_fini() - Cleanup all allocs used to set up job submission.
> + * @job_data: Job data array.
> + * @job_count: Number of members of @job_data.
> + */
> +static void
> +pvr_job_data_fini(struct pvr_job_data *job_data, u32 job_count)
> +{
> +       for (u32 i = 0; i < job_count; i++) {
> +               pvr_job_put(job_data[i].job);
> +               kvfree(job_data[i].sync_ops);
> +       }
> +}
> +
> +/**
> + * pvr_job_data_init() - Init an array of created jobs, associating them with
> + * the appropriate sync_ops args, which will be copied in.
> + * @pvr_dev: Target PowerVR device.
> + * @pvr_file: Pointer to PowerVR file structure.
> + * @job_args: Job args array copied from user.
> + * @job_count: Number of members of @job_args.
> + * @job_data_out: Job data array.
> + */
> +static int pvr_job_data_init(struct pvr_device *pvr_dev,
> +                            struct pvr_file *pvr_file,
> +                            struct drm_pvr_job *job_args,
> +                            u32 *job_count,
> +                            struct pvr_job_data *job_data_out)
> +{
> +       int err = 0, i = 0;
> +
> +       for (; i < *job_count; i++) {
> +               job_data_out[i].job =
> +                       create_job(pvr_dev, pvr_file, &job_args[i]);
> +               err = PTR_ERR_OR_ZERO(job_data_out[i].job);
> +
> +               if (err) {
> +                       *job_count = i;
> +                       job_data_out[i].job = NULL;
> +                       goto err_cleanup;

I think this bailout looks fine...

> +               }
> +
> +               err = PVR_UOBJ_GET_ARRAY(job_data_out[i].sync_ops,
> +                                        &job_args[i].sync_ops);
> +               if (err) {
> +                       *job_count = i;
> +                       goto err_cleanup;

... but this bailout looks like it has an off-by-one. We have
initialized the job pointers up to (inclusive) index i...

> +               }
> +
> +               job_data_out[i].sync_op_count = job_args[i].sync_ops.count;
> +       }
> +
> +       return 0;
> +
> +err_cleanup:
> +       pvr_job_data_fini(job_data_out, i);

... but then we pass i to pvr_job_data_fini() as the exclusive limit
of job indices to free? I think this will leave a leaked job around.


> +
> +       return err;
> +}
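
A minimal sketch of one way to fix that second bailout (making the cleanup
cover index i as well, while keeping the first bailout's behaviour unchanged):

                err = PVR_UOBJ_GET_ARRAY(job_data_out[i].sync_ops,
                                         &job_args[i].sync_ops);
                if (err) {
                        /* Job i was created above, so include it here. */
                        *job_count = i + 1;
                        goto err_cleanup;
                }
        ...
err_cleanup:
        /* *job_count reflects how many jobs need tearing down in either case. */
        pvr_job_data_fini(job_data_out, *job_count);

        return err;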

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v5 03/17] drm/imagination/uapi: Add PowerVR driver UAPI
  2023-08-16  8:25   ` Sarah Walker
  (?)
@ 2023-08-18  0:43   ` Faith Ekstrand
  2023-08-18 13:49       ` Sarah Walker
  -1 siblings, 1 reply; 77+ messages in thread
From: Faith Ekstrand @ 2023-08-18  0:43 UTC (permalink / raw)
  To: Sarah Walker
  Cc: matthew.brost, hns, linux-kernel, mripard, afd, Matt Coster,
	luben.tuikov, dakr, dri-devel, tzimmermann, boris.brezillon,
	christian.koenig, faith.ekstrand, donald.robson

On Wed, Aug 16, 2023 at 3:26 AM Sarah Walker <sarah.walker@imgtec.com>
wrote:

> Add the UAPI implementation for the PowerVR driver.
>
> Changes from v4 :
> - Remove CREATE_ZEROED flag for BO creation (all buffers are now zeroed)
>
> Co-developed-by: Frank Binns <frank.binns@imgtec.com>
> Signed-off-by: Frank Binns <frank.binns@imgtec.com>
> Co-developed-by: Boris Brezillon <boris.brezillon@collabora.com>
> Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
> Co-developed-by: Matt Coster <matt.coster@imgtec.com>
> Signed-off-by: Matt Coster <matt.coster@imgtec.com>
> Co-developed-by: Donald Robson <donald.robson@imgtec.com>
> Signed-off-by: Donald Robson <donald.robson@imgtec.com>
> Signed-off-by: Sarah Walker <sarah.walker@imgtec.com>
> ---
>  MAINTAINERS                |    1 +
>  include/uapi/drm/pvr_drm.h | 1303 ++++++++++++++++++++++++++++++++++++
>  2 files changed, 1304 insertions(+)
>  create mode 100644 include/uapi/drm/pvr_drm.h
>
> diff --git a/MAINTAINERS b/MAINTAINERS
> index f84390bb6cfe..55e17f8aea91 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -10144,6 +10144,7 @@ M:      Sarah Walker <sarah.walker@imgtec.com>
>  M:     Donald Robson <donald.robson@imgtec.com>
>  S:     Supported
>  F:     Documentation/devicetree/bindings/gpu/img,powervr.yaml
> +F:     include/uapi/drm/pvr_drm.h
>
>  IMON SOUNDGRAPH USB IR RECEIVER
>  M:     Sean Young <sean@mess.org>
> diff --git a/include/uapi/drm/pvr_drm.h b/include/uapi/drm/pvr_drm.h
> new file mode 100644
> index 000000000000..c0aac8b135ca
> --- /dev/null
> +++ b/include/uapi/drm/pvr_drm.h
> @@ -0,0 +1,1303 @@
> +/* SPDX-License-Identifier: (GPL-2.0 WITH Linux-syscall-note) OR MIT */
> +/* Copyright (c) 2023 Imagination Technologies Ltd. */
> +
> +#ifndef PVR_DRM_UAPI_H
> +#define PVR_DRM_UAPI_H
> +
> +#include "drm.h"
> +
> +#include <linux/const.h>
> +#include <linux/types.h>
> +
> +#if defined(__cplusplus)
> +extern "C" {
> +#endif
> +
> +/**
> + * DOC: PowerVR UAPI
> + *
> + * The PowerVR IOCTL argument structs have a few limitations in place, in
> + * addition to the standard kernel restrictions:
> + *
> + *  - All members must be type-aligned.
> + *  - The overall struct must be padded to 64-bit alignment.
> + *  - Explicit padding is almost always required. This takes the form of
> + *    ``_padding_[x]`` members of sufficient size to pad to the next power-of-two
> + *    alignment, where [x] is the offset into the struct in hexadecimal. Arrays
> + *    are never used for alignment. Padding fields must be zeroed; this is
> + *    always checked.
> + *  - Unions may only appear as the last member of a struct.
> + *  - Individual union members may grow in the future. The space between the
> + *    end of a union member and the end of its containing union is considered
> + *    "implicit padding" and must be zeroed. This is always checked.
> + *
> + * In addition to the IOCTL argument structs, the PowerVR UAPI makes use of
> + * DEV_QUERY argument structs. These are used to fetch information about the
> + * device and runtime. These structs are subject to the same rules set out
> + * above.
> + */
> +
> +/**
> + * struct drm_pvr_obj_array - Container used to pass arrays of objects
> + *
> + * It is not unusual to have to extend objects to pass new parameters, and the
> + * DRM ioctl infrastructure is supporting that by padding ioctl arguments with
> + * zeros when the data passed by userspace is smaller than the struct defined
> + * in the drm_ioctl_desc, thus keeping things backward compatible. This type is
> + * just applying the same concepts to indirect objects passed through arrays
> + * referenced from the main ioctl arguments structure: the stride basically
> + * defines the size of the object passed by userspace, which allows the kernel
> + * driver to pad with zeros when it's smaller than the size of the object it
> + * expects.
> + *
> + * Use ``DRM_PVR_OBJ_ARRAY()`` to fill object array fields, unless you
> + * have a very good reason not to.
> + */
> +struct drm_pvr_obj_array {
> +       /** @stride: Stride of object struct. Used for versioning. */
> +       __u32 stride;
> +
> +       /** @count: Number of objects in the array. */
> +       __u32 count;
> +
> +       /** @array: User pointer to an array of objects. */
> +       __u64 array;
> +};
> +
> +/**
> + * DRM_PVR_OBJ_ARRAY() - Helper macro for filling &struct drm_pvr_obj_array.
> + * @cnt: Number of elements pointed to by @ptr.
> + * @ptr: Pointer to start of a C array.
> + *
> + * Return: Literal of type &struct drm_pvr_obj_array.
> + */
> +#define DRM_PVR_OBJ_ARRAY(cnt, ptr) \
> +       { .stride = sizeof((ptr)[0]), .count = (cnt), .array = (__u64)(uintptr_t)(ptr) }
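
As an aside, a hypothetical userspace usage of this macro (borrowing the heap
query structs defined further down in this header) would look something like:

        struct drm_pvr_heap heaps[DRM_PVR_HEAP_COUNT];
        struct drm_pvr_dev_query_heap_info heap_info = {
                .heaps = DRM_PVR_OBJ_ARRAY(DRM_PVR_HEAP_COUNT, heaps),
        };

so the kernel learns both the element stride and the element count from the
same struct.
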
> +
> +/**
> + * DOC: PowerVR IOCTL interface
> + */
> +
> +/**
> + * PVR_IOCTL() - Build a PowerVR IOCTL number
> + * @_ioctl: An incrementing id for this IOCTL. Added to %DRM_COMMAND_BASE.
> + * @_mode: Must be one of %DRM_IOR, %DRM_IOW or %DRM_IOWR.
> + * @_data: The type of the args struct passed by this IOCTL.
> + *
> + * The struct referred to by @_data must have a ``drm_pvr_ioctl_`` prefix
> and an
> + * ``_args suffix``. They are therefore omitted from @_data.
> + *
> + * This should only be used to build the constants described below; it
> should
> + * never be used to call an IOCTL directly.
> + *
> + * Return: An IOCTL number to be passed to ioctl() from userspace.
> + */
> +#define PVR_IOCTL(_ioctl, _mode, _data) \
> +       _mode(DRM_COMMAND_BASE + (_ioctl), struct
> drm_pvr_ioctl_##_data##_args)
> +
> +#define DRM_IOCTL_PVR_DEV_QUERY PVR_IOCTL(0x00, DRM_IOWR, dev_query)
> +#define DRM_IOCTL_PVR_CREATE_BO PVR_IOCTL(0x01, DRM_IOWR, create_bo)
> +#define DRM_IOCTL_PVR_GET_BO_MMAP_OFFSET PVR_IOCTL(0x02, DRM_IOWR,
> get_bo_mmap_offset)
> +#define DRM_IOCTL_PVR_CREATE_VM_CONTEXT PVR_IOCTL(0x03, DRM_IOWR,
> create_vm_context)
> +#define DRM_IOCTL_PVR_DESTROY_VM_CONTEXT PVR_IOCTL(0x04, DRM_IOW,
> destroy_vm_context)
> +#define DRM_IOCTL_PVR_VM_MAP PVR_IOCTL(0x05, DRM_IOW, vm_map)
> +#define DRM_IOCTL_PVR_VM_UNMAP PVR_IOCTL(0x06, DRM_IOW, vm_unmap)
> +#define DRM_IOCTL_PVR_CREATE_CONTEXT PVR_IOCTL(0x07, DRM_IOWR,
> create_context)
> +#define DRM_IOCTL_PVR_DESTROY_CONTEXT PVR_IOCTL(0x08, DRM_IOW,
> destroy_context)
> +#define DRM_IOCTL_PVR_CREATE_FREE_LIST PVR_IOCTL(0x09, DRM_IOWR,
> create_free_list)
> +#define DRM_IOCTL_PVR_DESTROY_FREE_LIST PVR_IOCTL(0x0a, DRM_IOW,
> destroy_free_list)
> +#define DRM_IOCTL_PVR_CREATE_HWRT_DATASET PVR_IOCTL(0x0b, DRM_IOWR,
> create_hwrt_dataset)
> +#define DRM_IOCTL_PVR_DESTROY_HWRT_DATASET PVR_IOCTL(0x0c, DRM_IOW,
> destroy_hwrt_dataset)
> +#define DRM_IOCTL_PVR_SUBMIT_JOBS PVR_IOCTL(0x0d, DRM_IOW, submit_jobs)
> +
> +/**
> + * DOC: PowerVR IOCTL DEV_QUERY interface
> + */
> +
> +/**
> + * struct drm_pvr_dev_query_gpu_info - Container used to fetch
> information about
> + * the graphics processor.
> + *
> + * When fetching this type &struct drm_pvr_ioctl_dev_query_args.type must
> be set
> + * to %DRM_PVR_DEV_QUERY_GPU_INFO_GET.
> + */
> +struct drm_pvr_dev_query_gpu_info {
> +       /**
> +        * @gpu_id: GPU identifier.
> +        *
> +        * For all currently supported GPUs this is the BVNC encoded as a
> 64-bit
> +        * value as follows:
> +        *
> +        *    +--------+--------+--------+-------+
> +        *    | 63..48 | 47..32 | 31..16 | 15..0 |
> +        *    +========+========+========+=======+
> +        *    | B      | V      | N      | C     |
> +        *    +--------+--------+--------+-------+
> +        */
> +       __u64 gpu_id;
> +
> +       /**
> +        * @num_phantoms: Number of Phantoms present.
> +        */
> +       __u32 num_phantoms;
> +};
> +
> +/**
> + * struct drm_pvr_dev_query_runtime_info - Container used to fetch
> information
> + * about the graphics runtime.
> + *
> + * When fetching this type &struct drm_pvr_ioctl_dev_query_args.type must
> be set
> + * to %DRM_PVR_DEV_QUERY_RUNTIME_INFO_GET.
> + */
> +struct drm_pvr_dev_query_runtime_info {
> +       /**
> +        * @free_list_min_pages: Minimum allowed free list size,
> +        * in PM physical pages.
> +        */
> +       __u64 free_list_min_pages;
> +
> +       /**
> +        * @free_list_max_pages: Maximum allowed free list size,
> +        * in PM physical pages.
> +        */
> +       __u64 free_list_max_pages;
> +
> +       /**
> +        * @common_store_alloc_region_size: Size of the Allocation
> +        * Region within the Common Store used for coefficient and shared
> +        * registers, in dwords.
> +        */
> +       __u32 common_store_alloc_region_size;
>

Any reason why this is in dwords? It's not really my place to have an
opinion, but that seems like kind of a funny unit for the size of an
allocation region. Why not just bytes?


> +
> +       /**
> +        * @common_store_partition_space_size: Size of the
> +        * Partition Space within the Common Store for output buffers, in
> +        * dwords.
> +        */
> +       __u32 common_store_partition_space_size;
> +
> +       /**
> +        * @max_coeffs: Maximum coefficients, in dwords.
> +        */
> +       __u32 max_coeffs;
> +
> +       /**
> +        * @cdm_max_local_mem_size_regs: Maximum amount of local
> +        * memory available to a compute kernel, in dwords.
> +        */
> +       __u32 cdm_max_local_mem_size_regs;
> +};
> +
> +/**
> + * struct drm_pvr_dev_query_quirks - Container used to fetch information
> about
> + * hardware fixes for which the device may require support in the user
> mode
> + * driver.
> + *
> + * When fetching this type &struct drm_pvr_ioctl_dev_query_args.type must
> be set
> + * to %DRM_PVR_DEV_QUERY_QUIRKS_GET.
> + */
> +struct drm_pvr_dev_query_quirks {
> +       /**
> +        * @quirks: A userspace address for the hardware quirks __u32
> array.
> +        *
> +        * The first @musthave_count items in the list are quirks that the
> +        * client must support for this device. If userspace does not
> support
> +        * all these quirks then functionality is not guaranteed and client
> +        * initialisation must fail.
> +        * The remaining quirks in the list affect userspace and the
> kernel or
> +        * firmware. They are disabled by default and require userspace to
> +        * opt-in. The opt-in mechanism depends on the quirk.
> +        */
> +       __u64 quirks;
>

Where are these quirk IDs defined and where do they come from? If they're
effectively coming from hardware, possibly via firmware, that's probably
okay. The important thing is that quirks should only ever get removed for
any given piece of hardware; otherwise you risk breaking userspace.


> +
> +       /** @count: Length of @quirks (number of __u32). */
> +       __u16 count;
> +
> +       /**
> +        * @musthave_count: The number of entries in @quirks that are
> +        * mandatory, starting at index 0.
> +        */
> +       __u16 musthave_count;
> +
> +       /** @_padding_c: Reserved. This field must be zeroed. */
> +       __u32 _padding_c;
> +};
> +
> +/**
> + * struct drm_pvr_dev_query_enhancements - Container used to fetch
> information
> + * about optional enhancements supported by the device that require
> support in
> + * the user mode driver.
> + *
> + * When fetching this type &struct drm_pvr_ioctl_dev_query_args.type must
> be set
> + * to %DRM_PVR_DEV_ENHANCEMENTS_GET.
> + */
> +struct drm_pvr_dev_query_enhancements {
> +       /**
> +        * @enhancements: A userspace address for the hardware enhancements
> +        * __u32 array.
> +        *
> +        * These enhancements affect userspace and the kernel or firmware.
> They
> +        * are disabled by default and require userspace to opt-in. The
> opt-in
> +        * mechanism depends on the quirk.
> +        */
> +       __u64 enhancements;
>

Can you provide some examples of "enhancements"? Not that you need to put
it in the docs. I'm just trying to understand what this API is doing so I
can better review. Again, where do these come from? Also, how is an
enhancement different from a quirk?

~Faith


> +
> +       /** @count: Length of @enhancements (number of __u32). */
> +       __u16 count;
> +
> +       /** @_padding_a: Reserved. This field must be zeroed. */
> +       __u16 _padding_a;
> +
> +       /** @_padding_c: Reserved. This field must be zeroed. */
> +       __u32 _padding_c;
> +};
> +
> +/**
> + * enum drm_pvr_heap_id - Array index for heap info data returned by
> + * %DRM_PVR_DEV_QUERY_HEAP_INFO_GET.
> + *
> + * For compatibility reasons all indices will be present in the returned
> array,
> + * however some heaps may not be present. These are indicated where
> + * &struct drm_pvr_heap.size is set to zero.
> + */
> +enum drm_pvr_heap_id {
> +       /** @DRM_PVR_HEAP_GENERAL: General purpose heap. */
> +       DRM_PVR_HEAP_GENERAL = 0,
> +       /** @DRM_PVR_HEAP_PDS_CODE_DATA: PDS code and data heap. */
> +       DRM_PVR_HEAP_PDS_CODE_DATA,
> +       /** @DRM_PVR_HEAP_USC_CODE: USC code heap. */
> +       DRM_PVR_HEAP_USC_CODE,
> +       /** @DRM_PVR_HEAP_RGNHDR: Region header heap. Only used if GPU has
> BRN63142. */
> +       DRM_PVR_HEAP_RGNHDR,
> +       /** @DRM_PVR_HEAP_VIS_TEST: Visibility test heap. */
> +       DRM_PVR_HEAP_VIS_TEST,
> +       /** @DRM_PVR_HEAP_TRANSFER_FRAG: Transfer fragment heap. */
> +       DRM_PVR_HEAP_TRANSFER_FRAG,
> +
> +       /**
> +        * @DRM_PVR_HEAP_COUNT: The number of heaps returned by
> +        * %DRM_PVR_DEV_QUERY_HEAP_INFO_GET.
> +        *
> +        * More heaps may be added, so this also serves as the copy limit
> when
> +        * sent by the caller.
> +        */
> +       DRM_PVR_HEAP_COUNT
> +       /* Please only add additional heaps above DRM_PVR_HEAP_COUNT! */
> +};
> +
> +/**
> + * DOC: Flags for DRM_PVR_DEV_QUERY_HEAP_INFO_GET.
> + *
> + * .. c:macro:: DRM_PVR_HEAP_FLAG_STATIC_CARVEOUT_AT_END
> + *
> + *    The static data area is at the end of the heap memory area, rather
> than
> + *    at the beginning.
> + *    The base address will be:
> + *        drm_pvr_heap::base +
> + *            (drm_pvr_heap::size -
> drm_pvr_heap::static_data_carveout_size)
> + */
> +#define DRM_PVR_HEAP_FLAG_STATIC_CARVEOUT_AT_END _BITUL(0)
> +
> +/**
> + * struct drm_pvr_heap - Container holding information about a single
> heap.
> + *
> + * This will always be fetched as an array.
> + */
> +struct drm_pvr_heap {
> +       /** @base: Base address of heap. */
> +       __u64 base;
> +
> +       /**
> +        * @size: Size of heap, in bytes. Will be 0 if the heap is not
> present.
> +        */
> +       __u64 size;
> +
> +       /** @flags: Flags for this heap. See &enum drm_pvr_heap_flags. */
> +       __u32 flags;
> +
> +       /** @page_size_log2: Log2 of page size. */
> +       __u32 page_size_log2;
> +};
> +
> +/**
> + * struct drm_pvr_dev_query_heap_info - Container used to fetch
> information
> + * about heaps supported by the device driver.
> + *
> + * Please note all driver-supported heaps will be returned up to
> &heaps.count.
> + * Some heaps will not be present in all devices, which will be indicated
> by
> + * &struct drm_pvr_heap.size being set to zero.
> + *
> + * When fetching this type &struct drm_pvr_ioctl_dev_query_args.type must
> be set
> + * to %DRM_PVR_DEV_QUERY_HEAP_INFO_GET.
> + */
> +struct drm_pvr_dev_query_heap_info {
> +       /**
> +        * @heaps: Array of &struct drm_pvr_heap. If pointer is NULL, the
> count
> +        * and stride will be updated with those known to the driver
> version, to
> +        * facilitate allocation by the caller.
> +        */
> +       struct drm_pvr_obj_array heaps;
> +};
> +
> +/**
> + * enum drm_pvr_static_data_area_usage - Array index for static data area
> info
> + * returned by %DRM_PVR_DEV_QUERY_STATIC_DATA_AREAS_GET.
> + *
> + * For compatibility reasons all indices will be present in the returned
> array,
> + * however some areas may not be present. These are indicated where
> + * &struct drm_pvr_static_data_area.size is set to zero.
> + */
> +enum drm_pvr_static_data_area_usage {
> +       /**
> +        * @DRM_PVR_STATIC_DATA_AREA_EOT: End of Tile USC program.
> +        *
> +        * The End of Tile task runs at completion of a tile, and is
> responsible for emitting the
> +        * tile to the Pixel Back End.
> +        */
> +       DRM_PVR_STATIC_DATA_AREA_EOT = 0,
> +
> +       /**
> +        * @DRM_PVR_STATIC_DATA_AREA_FENCE: MCU fence area, used during
> cache flush and
> +        * invalidation.
> +        *
> +        * This must point to valid physical memory but the contents
> otherwise are not used.
> +        */
> +       DRM_PVR_STATIC_DATA_AREA_FENCE,
> +
> +       /**
> +        * @DRM_PVR_STATIC_DATA_AREA_VDM_SYNC: VDM sync program.
> +        *
> +        * The VDM sync program is used to synchronise multiple areas of
> the GPU hardware.
> +        */
> +       DRM_PVR_STATIC_DATA_AREA_VDM_SYNC,
> +
> +       /**
> +        * @DRM_PVR_STATIC_DATA_AREA_YUV_CSC: YUV coefficients.
> +        *
> +        * Area contains up to 16 slots with stride of 64 bytes. Each is a
> 3x4 matrix of u16 fixed
> +        * point numbers, with 1 sign bit, 2 integer bits and 13
> fractional bits.
> +        *
> +        * The slots are :
> +        * 0 = VK_SAMPLER_YCBCR_MODEL_CONVERSION_RGB_IDENTITY_KHR
> +        * 1 = VK_SAMPLER_YCBCR_MODEL_CONVERSION_YCBCR_IDENTITY_KHR (full
> range)
> +        * 2 = VK_SAMPLER_YCBCR_MODEL_CONVERSION_YCBCR_IDENTITY_KHR
> (conformant range)
> +        * 3 = VK_SAMPLER_YCBCR_MODEL_CONVERSION_YCBCR_709_KHR (full range)
> +        * 4 = VK_SAMPLER_YCBCR_MODEL_CONVERSION_YCBCR_709_KHR (conformant
> range)
> +        * 5 = VK_SAMPLER_YCBCR_MODEL_CONVERSION_YCBCR_601_KHR (full range)
> +        * 6 = VK_SAMPLER_YCBCR_MODEL_CONVERSION_YCBCR_601_KHR (conformant
> range)
> +        * 7 = VK_SAMPLER_YCBCR_MODEL_CONVERSION_YCBCR_2020_KHR (full
> range)
> +        * 8 = VK_SAMPLER_YCBCR_MODEL_CONVERSION_YCBCR_2020_KHR
> (conformant range)
> +        * 9 = VK_SAMPLER_YCBCR_MODEL_CONVERSION_YCBCR_601_KHR (conformant
> range, 10 bit)
> +        * 10 = VK_SAMPLER_YCBCR_MODEL_CONVERSION_YCBCR_709_KHR
> (conformant range, 10 bit)
> +        * 11 = VK_SAMPLER_YCBCR_MODEL_CONVERSION_YCBCR_2020_KHR
> (conformant range, 10 bit)
> +        * 14 = Identity (biased)
> +        * 15 = Identity
> +        */
> +       DRM_PVR_STATIC_DATA_AREA_YUV_CSC,
> +};
> +
> +/**
> + * struct drm_pvr_static_data_area - Container holding information about a
> + * single static data area.
> + *
> + * This will always be fetched as an array.
> + */
> +struct drm_pvr_static_data_area {
> +       /**
> +        * @area_usage: Usage of static data area.
> +        * See &enum drm_pvr_static_data_area_usage.
> +        */
> +       __u16 area_usage;
> +
> +       /**
> +        * @location_heap_id: Array index of heap where this of static data
> +        * area is located. This array is fetched using
> +        * %DRM_PVR_DEV_QUERY_HEAP_INFO_GET.
> +        */
> +       __u16 location_heap_id;
> +
> +       /** @size: Size of static data area. Not present if set to zero. */
> +       __u32 size;
> +
> +       /**
> +        * @offset: Offset of static data area from start of static data
> +        * carveout.
> +        */
> +       __u64 offset;
> +};
> +
> +/**
> + * struct drm_pvr_dev_query_static_data_areas - Container used to fetch
> + * information about the static data areas in heaps supported by the
> device
> + * driver.
> + *
> + * Please note all driver-supported static data areas will be returned up
> to
> + * &static_data_areas.count. Some will not be present for all devices
> which,
> + * will be indicated by &struct drm_pvr_static_data_area.size being set
> to zero.
> + *
> + * Further, some heaps will not be present either. See &struct
> + * drm_pvr_dev_query_heap_info.
> + *
> + * When fetching this type &struct drm_pvr_ioctl_dev_query_args.type must
> be set
> + * to %DRM_PVR_DEV_QUERY_STATIC_DATA_AREAS_GET.
> + */
> +struct drm_pvr_dev_query_static_data_areas {
> +       /**
> +        * @static_data_areas: Array of &struct drm_pvr_static_data_area.
> If
> +        * pointer is NULL, the count and stride will be updated with those
> +        * known to the driver version, to facilitate allocation by the
> caller.
> +        */
> +       struct drm_pvr_obj_array static_data_areas;
> +};
> +
> +/**
> + * enum drm_pvr_dev_query - For use with
> &drm_pvr_ioctl_dev_query_args.type to
> + * indicate the type of the receiving container.
> + *
> + * Append only. Do not reorder.
> + */
> +enum drm_pvr_dev_query {
> +       /**
> +        * @DRM_PVR_DEV_QUERY_GPU_INFO_GET: The dev query args contain a
> pointer
> +        * to &struct drm_pvr_dev_query_gpu_info.
> +        */
> +       DRM_PVR_DEV_QUERY_GPU_INFO_GET = 0,
> +
> +       /**
> +        * @DRM_PVR_DEV_QUERY_RUNTIME_INFO_GET: The dev query args contain
> a
> +        * pointer to &struct drm_pvr_dev_query_runtime_info.
> +        */
> +       DRM_PVR_DEV_QUERY_RUNTIME_INFO_GET,
> +
> +       /**
> +        * @DRM_PVR_DEV_QUERY_QUIRKS_GET: The dev query args contain a
> pointer
> +        * to &struct drm_pvr_dev_query_quirks.
> +        */
> +       DRM_PVR_DEV_QUERY_QUIRKS_GET,
> +
> +       /**
> +        * @DRM_PVR_DEV_QUERY_ENHANCEMENTS_GET: The dev query args contain
> a
> +        * pointer to &struct drm_pvr_dev_query_enhancements.
> +        */
> +       DRM_PVR_DEV_QUERY_ENHANCEMENTS_GET,
> +
> +       /**
> +        * @DRM_PVR_DEV_QUERY_HEAP_INFO_GET: The dev query args contain a
> +        * pointer to &struct drm_pvr_dev_query_heap_info.
> +        */
> +       DRM_PVR_DEV_QUERY_HEAP_INFO_GET,
> +
> +       /**
> +        * @DRM_PVR_DEV_QUERY_STATIC_DATA_AREAS_GET: The dev query args
> contain
> +        * a pointer to &struct drm_pvr_dev_query_static_data_areas.
> +        */
> +       DRM_PVR_DEV_QUERY_STATIC_DATA_AREAS_GET,
> +};
> +
> +/**
> + * struct drm_pvr_ioctl_dev_query_args - Arguments for
> %DRM_IOCTL_PVR_DEV_QUERY.
> + */
> +struct drm_pvr_ioctl_dev_query_args {
> +       /**
> +        * @type: Type of query and output struct. See &enum
> drm_pvr_dev_query.
> +        */
> +       __u32 type;
> +
> +       /**
> +        * @size: Size of the receiving struct, see @type.
> +        *
> +        * After a successful call this will be updated to the written byte
> +        * length.
> +        * Can also be used to get the minimum byte length (see @pointer).
> +        * This allows additional fields to be appended to the structs in
> +        * future.
> +        */
> +       __u32 size;
> +
> +       /**
> +        * @pointer: Pointer to struct @type.
> +        *
> +        * Must be large enough to contain @size bytes.
> +        * If pointer is NULL, the expected size will be returned in the
> @size
> +        * field, but no other data will be written.
> +        */
> +       __u64 pointer;
> +};
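
For illustration only: a hypothetical userspace caller of this query (assuming
the usual <stdint.h>, <sys/ioctl.h> and pvr_drm.h includes and an already-open
DRM fd) might look like this; passing .pointer = 0 instead makes the call
report only the expected size back in .size:

        struct drm_pvr_dev_query_gpu_info gpu_info = { 0 };
        struct drm_pvr_ioctl_dev_query_args args = {
                .type = DRM_PVR_DEV_QUERY_GPU_INFO_GET,
                .size = sizeof(gpu_info),
                .pointer = (__u64)(uintptr_t)&gpu_info,
        };

        if (ioctl(fd, DRM_IOCTL_PVR_DEV_QUERY, &args) == 0) {
                /* gpu_info.gpu_id holds the BVNC and gpu_info.num_phantoms
                 * the phantom count; args.size was updated to the written
                 * length. */
        }
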
> +
> +/**
> + * DOC: PowerVR IOCTL CREATE_BO interface
> + */
> +
> +/**
> + * DOC: Flags for CREATE_BO
> + *
> + * The &struct drm_pvr_ioctl_create_bo_args.flags field is 64 bits wide
> and consists
> + * of three groups of flags: creation, device mapping and CPU mapping.
> + *
> + * We use "device" to refer to the GPU here because of the ambiguity
> between
> + * CPU and GPU in some fonts.
> + *
> + * Creation options
> + *    These use the prefix ``DRM_PVR_BO_CREATE_``.
> + *
> + * Device mapping options
> + *    These use the prefix ``DRM_PVR_BO_DEVICE_``.
> + *
> + *    :BYPASS_CACHE: There are very few situations where this flag is
> useful.
> + *       By default, the device flushes its memory caches after every job.
> + *    :PM_FW_PROTECT: Specify that only the Parameter Manager (PM) and/or
> + *       firmware processor should be allowed to access this memory when
> mapped
> + *       to the device. It is not valid to specify this flag with
> + *       CPU_ALLOW_USERSPACE_ACCESS.
> + *
> + * CPU mapping options
> + *    These use the prefix ``DRM_PVR_BO_CPU_``.
> + *
> + *    :ALLOW_USERSPACE_ACCESS: Allow userspace to map and access the
> contents
> + *       of this memory. It is not valid to specify this flag with
> + *       DEVICE_PM_FW_PROTECT.
> + */
> +#define DRM_PVR_BO_DEVICE_BYPASS_CACHE _BITULL(0)
> +#define DRM_PVR_BO_DEVICE_PM_FW_PROTECT _BITULL(1)
> +#define DRM_PVR_BO_CPU_ALLOW_USERSPACE_ACCESS _BITULL(2)
> +/* Bits 3..63 are reserved. */
> +
> +/**
> + * struct drm_pvr_ioctl_create_bo_args - Arguments for
> %DRM_IOCTL_PVR_CREATE_BO
> + */
> +struct drm_pvr_ioctl_create_bo_args {
> +       /**
> +        * @size: [IN/OUT] Unaligned size of buffer object to create. On
> +        * return, this will be populated with the actual aligned size of
> the
> +        * new buffer.
> +        */
> +       __u64 size;
> +
> +       /**
> +        * @handle: [OUT] GEM handle of the new buffer object for use in
> +        * userspace.
> +        */
> +       __u32 handle;
> +
> +       /** @_padding_c: Reserved. This field must be zeroed. */
> +       __u32 _padding_c;
> +
> +       /**
> +        * @flags: [IN] Options which will affect the behaviour of this
> +        * creation operation and future mapping operations on the created
> +        * object. This field must be a valid combination of
> ``DRM_PVR_BO_*``
> +        * values, with all bits marked as reserved set to zero.
> +        */
> +       __u64 flags;
> +};
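
For illustration only, a hypothetical CREATE_BO call from userspace (same
header and fd assumptions as the DEV_QUERY snippet above) could be:

        struct drm_pvr_ioctl_create_bo_args bo_args = {
                .size = 64 * 1024,
                .flags = DRM_PVR_BO_CPU_ALLOW_USERSPACE_ACCESS,
        };

        if (ioctl(fd, DRM_IOCTL_PVR_CREATE_BO, &bo_args) == 0) {
                /* bo_args.handle is the new GEM handle; bo_args.size was
                 * updated to the aligned size actually allocated. */
        }
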
> +
> +/**
> + * DOC: PowerVR IOCTL GET_BO_MMAP_OFFSET interface
> + */
> +
> +/**
> + * struct drm_pvr_ioctl_get_bo_mmap_offset_args - Arguments for
> + * %DRM_IOCTL_PVR_GET_BO_MMAP_OFFSET
> + *
> + * Like other DRM drivers, the "mmap" IOCTL doesn't actually map any
> memory.
> + * Instead, it allocates a fake offset which refers to the specified
> buffer
> + * object. This offset can be used with a real mmap call on the DRM device
> + * itself.
> + */
> +struct drm_pvr_ioctl_get_bo_mmap_offset_args {
> +       /** @handle: [IN] GEM handle of the buffer object to be mapped. */
> +       __u32 handle;
> +
> +       /** @_padding_4: Reserved. This field must be zeroed. */
> +       __u32 _padding_4;
> +
> +       /** @offset: [OUT] Fake offset to use in the real mmap call. */
> +       __u64 offset;
> +};
> +
> +/**
> + * DOC: PowerVR IOCTL CREATE_VM_CONTEXT and DESTROY_VM_CONTEXT interfaces
> + */
> +
> +/**
> + * struct drm_pvr_ioctl_create_vm_context_args - Arguments for
> + * %DRM_IOCTL_PVR_CREATE_VM_CONTEXT
> + */
> +struct drm_pvr_ioctl_create_vm_context_args {
> +       /** @handle: [OUT] Handle for new VM context. */
> +       __u32 handle;
> +
> +       /** @_padding_4: Reserved. This field must be zeroed. */
> +       __u32 _padding_4;
> +};
> +
> +/**
> + * struct drm_pvr_ioctl_destroy_vm_context_args - Arguments for
> + * %DRM_IOCTL_PVR_DESTROY_VM_CONTEXT
> + */
> +struct drm_pvr_ioctl_destroy_vm_context_args {
> +       /**
> +        * @handle: [IN] Handle for VM context to be destroyed.
> +        */
> +       __u32 handle;
> +
> +       /** @_padding_4: Reserved. This field must be zeroed. */
> +       __u32 _padding_4;
> +};
> +
> +/**
> + * DOC: PowerVR IOCTL VM_MAP and VM_UNMAP interfaces
> + *
> + * The VM UAPI allows userspace to create buffer object mappings in GPU
> virtual address space.
> + *
> + * The client is responsible for managing GPU address space. It should
> allocate mappings within
> + * the heaps returned by %DRM_PVR_DEV_QUERY_HEAP_INFO_GET.
> + *
> + * %DRM_IOCTL_PVR_VM_MAP creates a new mapping. The client provides the
> target virtual address for
> + * the mapping. Size and offset within the mapped buffer object can be
> specified, so the client can
> + * partially map a buffer.
> + *
> + * %DRM_IOCTL_PVR_VM_UNMAP removes a mapping. The entire mapping will be
> removed from GPU address
> + * space. For this reason only the start address is provided by the
> client.
> + */
> +
> +/**
> + * struct drm_pvr_ioctl_vm_map_args - Arguments for %DRM_IOCTL_PVR_VM_MAP.
> + */
> +struct drm_pvr_ioctl_vm_map_args {
> +       /**
> +        * @vm_context_handle: [IN] Handle for VM context for this mapping
> to
> +        * exist in.
> +        */
> +       __u32 vm_context_handle;
> +
> +       /** @flags: [IN] Flags which affect this mapping. Currently always
> 0. */
> +       __u32 flags;
> +
> +       /**
> +        * @device_addr: [IN] Requested device-virtual address for the
> mapping.
> +        * This must be non-zero and aligned to the device page size for
> the
> +        * heap containing the requested address. It is an error to
> specify an
> +        * address which is not contained within one of the heaps returned
> by
> +        * %DRM_PVR_DEV_QUERY_HEAP_INFO_GET.
> +        */
> +       __u64 device_addr;
> +
> +       /**
> +        * @handle: [IN] Handle of the target buffer object. This must be a
> +        * valid handle returned by %DRM_IOCTL_PVR_CREATE_BO.
> +        */
> +       __u32 handle;
> +
> +       /** @_padding_14: Reserved. This field must be zeroed. */
> +       __u32 _padding_14;
> +
> +       /**
> +        * @offset: [IN] Offset into the target bo from which to begin the
> +        * mapping.
> +        */
> +       __u64 offset;
> +
> +       /**
> +        * @size: [IN] Size of the requested mapping. Must be aligned to
> +        * the device page size for the heap containing the requested
> address,
> +        * as well as the host page size. When added to @device_addr, the
> +        * result must not overflow the heap which contains @device_addr
> (i.e.
> +        * the range specified by @device_addr and @size must be completely
> +        * contained within a single heap specified by
> +        * %DRM_PVR_DEV_QUERY_HEAP_INFO_GET).
> +        */
> +       __u64 size;
> +};
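
For illustration only, a hypothetical mapping call (vm_ctx_handle, bo_handle,
heap_base and bo_size being values userspace obtained earlier from
CREATE_VM_CONTEXT, CREATE_BO and the heap query) could be:

        struct drm_pvr_ioctl_vm_map_args map_args = {
                .vm_context_handle = vm_ctx_handle,
                .flags = 0,
                /* Must be page-aligned and fall inside a queried heap. */
                .device_addr = heap_base,
                .handle = bo_handle,
                .offset = 0,
                /* Aligned to both the heap's device page size and the host
                 * page size. */
                .size = bo_size,
        };

        ioctl(fd, DRM_IOCTL_PVR_VM_MAP, &map_args);
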
> +
> +/**
> + * struct drm_pvr_ioctl_vm_unmap_args - Arguments for
> %DRM_IOCTL_PVR_VM_UNMAP.
> + */
> +struct drm_pvr_ioctl_vm_unmap_args {
> +       /**
> +        * @vm_context_handle: [IN] Handle for VM context that this mapping
> +        * exists in.
> +        */
> +       __u32 vm_context_handle;
> +
> +       /** @_padding_4: Reserved. This field must be zeroed. */
> +       __u32 _padding_4;
> +
> +       /**
> +        * @device_addr: [IN] Device-virtual address at the start of the
> target
> +        * mapping. This must be non-zero.
> +        */
> +       __u64 device_addr;
> +
> +       /**
> +        * @size: Size in bytes of the target mapping. This must be
> non-zero.
> +        */
> +       __u64 size;
> +};
> +
> +/**
> + * DOC: PowerVR IOCTL CREATE_CONTEXT and DESTROY_CONTEXT interfaces
> + */
> +
> +/**
> + * enum drm_pvr_ctx_priority - Arguments for
> + * &drm_pvr_ioctl_create_context_args.priority
> + */
> +enum drm_pvr_ctx_priority {
> +       /** @DRM_PVR_CTX_PRIORITY_LOW: Priority below normal. */
> +       DRM_PVR_CTX_PRIORITY_LOW = -512,
> +
> +       /** @DRM_PVR_CTX_PRIORITY_NORMAL: Normal priority. */
> +       DRM_PVR_CTX_PRIORITY_NORMAL = 0,
> +
> +       /**
> +        * @DRM_PVR_CTX_PRIORITY_HIGH: Priority above normal.
> +        * Note this requires ``CAP_SYS_NICE`` or ``DRM_MASTER``.
> +        */
> +       DRM_PVR_CTX_PRIORITY_HIGH = 512,
> +};
> +
> +/**
> + * enum drm_pvr_ctx_type - Arguments for
> + * &struct drm_pvr_ioctl_create_context_args.type
> + */
> +enum drm_pvr_ctx_type {
> +       /**
> +        * @DRM_PVR_CTX_TYPE_RENDER: Render context. Use &struct
> +        * drm_pvr_ioctl_create_render_context_args for context creation
> arguments.
> +        */
> +       DRM_PVR_CTX_TYPE_RENDER = 0,
> +
> +       /**
> +        * @DRM_PVR_CTX_TYPE_COMPUTE: Compute context. Use &struct
> +        * drm_pvr_ioctl_create_compute_context_args for context creation
> arguments.
> +        */
> +       DRM_PVR_CTX_TYPE_COMPUTE,
> +
> +       /**
> +        * @DRM_PVR_CTX_TYPE_TRANSFER_FRAG: Transfer context for fragment
> data masters. Use
> +        * &struct drm_pvr_ioctl_create_transfer_context_args for context
> creation arguments.
> +        */
> +       DRM_PVR_CTX_TYPE_TRANSFER_FRAG,
> +};
> +
> +/**
> + * struct drm_pvr_ioctl_create_context_args - Arguments for
> + * %DRM_IOCTL_PVR_CREATE_CONTEXT
> + */
> +struct drm_pvr_ioctl_create_context_args {
> +       /**
> +        * @type: [IN] Type of context to create.
> +        *
> +        * This must be one of the values defined by &enum
> drm_pvr_ctx_type.
> +        */
> +       __u32 type;
> +
> +       /** @flags: [IN] Flags for context. */
> +       __u32 flags;
> +
> +       /**
> +        * @priority: [IN] Priority of new context.
> +        *
> +        * This must be one of the values defined by &enum
> drm_pvr_ctx_priority.
> +        */
> +       __s32 priority;
> +
> +       /** @handle: [OUT] Handle for new context. */
> +       __u32 handle;
> +
> +       /**
> +        * @static_context_state: [IN] Pointer to static context state
> stream.
> +        */
> +       __u64 static_context_state;
> +
> +       /**
> +        * @static_context_state_len: [IN] Length of static context state,
> in bytes.
> +        */
> +       __u32 static_context_state_len;
> +
> +       /**
> +        * @vm_context_handle: [IN] Handle for VM context that this context is
> +        * associated with.
> +        */
> +       __u32 vm_context_handle;
> +
> +       /**
> +        * @callstack_addr: [IN] Address for initial call stack pointer. Only valid
> +        * if @type is %DRM_PVR_CTX_TYPE_RENDER, otherwise must be 0.
> +        */
> +       __u64 callstack_addr;
> +};
> +
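
As an illustrative sketch only (not part of the patch): creating a compute
context could look roughly like the following. The static context state stream
is built by the userspace driver according to the job stream format
documentation; `state` and `state_len` are placeholders for that stream.

  static int pvr_create_compute_ctx_example(int fd, __u32 vm_ctx_handle,
                                            const void *state, __u32 state_len,
                                            __u32 *handle_out)
  {
          struct drm_pvr_ioctl_create_context_args args = {
                  .type = DRM_PVR_CTX_TYPE_COMPUTE,
                  .flags = 0,                              /* no flags defined yet */
                  .priority = DRM_PVR_CTX_PRIORITY_NORMAL, /* HIGH needs CAP_SYS_NICE or DRM_MASTER */
                  .static_context_state = (__u64)(uintptr_t)state,
                  .static_context_state_len = state_len,
                  .vm_context_handle = vm_ctx_handle,
                  .callstack_addr = 0,                     /* only used by render contexts */
          };

          if (drmIoctl(fd, DRM_IOCTL_PVR_CREATE_CONTEXT, &args))
                  return -errno;

          *handle_out = args.handle;
          return 0;
  }
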
> +/**
> + * struct drm_pvr_ioctl_destroy_context_args - Arguments for
> + * %DRM_IOCTL_PVR_DESTROY_CONTEXT
> + */
> +struct drm_pvr_ioctl_destroy_context_args {
> +       /**
> +        * @handle: [IN] Handle for context to be destroyed.
> +        */
> +       __u32 handle;
> +
> +       /** @_padding_4: Reserved. This field must be zeroed. */
> +       __u32 _padding_4;
> +};
> +
> +/**
> + * DOC: PowerVR IOCTL CREATE_FREE_LIST and DESTROY_FREE_LIST interfaces
> + */
> +
> +/**
> + * struct drm_pvr_ioctl_create_free_list_args - Arguments for
> + * %DRM_IOCTL_PVR_CREATE_FREE_LIST
> + *
> + * Free list arguments have the following constraints:
> + *
> + * - @max_num_pages must be greater than zero.
> + * - @grow_threshold must be between 0 and 100.
> + * - @grow_num_pages must be less than or equal to @max_num_pages.
> + * - @initial_num_pages, @max_num_pages and @grow_num_pages must be multiples
> + *   of 4.
> + * - When @grow_num_pages is 0, @initial_num_pages must be equal to
> + *   @max_num_pages.
> + * - When @grow_num_pages is non-zero, @initial_num_pages must be less than
> + *   @max_num_pages.
> + */
> +struct drm_pvr_ioctl_create_free_list_args {
> +       /**
> +        * @free_list_gpu_addr: [IN] Address of GPU mapping of buffer object
> +        * containing memory to be used by free list.
> +        *
> +        * The mapped region of the buffer object must be at least
> +        * @max_num_pages * ``sizeof(__u32)``.
> +        *
> +        * The buffer object must have been created with
> +        * %DRM_PVR_BO_DEVICE_PM_FW_PROTECT set and
> +        * %DRM_PVR_BO_CPU_ALLOW_USERSPACE_ACCESS not set.
> +        */
> +       __u64 free_list_gpu_addr;
> +
> +       /** @initial_num_pages: [IN] Pages initially allocated to free list. */
> +       __u32 initial_num_pages;
> +
> +       /** @max_num_pages: [IN] Maximum number of pages in free list. */
> +       __u32 max_num_pages;
> +
> +       /** @grow_num_pages: [IN] Pages to grow free list by per request. */
> +       __u32 grow_num_pages;
> +
> +       /**
> +        * @grow_threshold: [IN] Percentage of FL memory used that should
> +        * trigger a new grow request.
> +        */
> +       __u32 grow_threshold;
> +
> +       /**
> +        * @vm_context_handle: [IN] Handle for VM context that the free list buffer
> +        * object is mapped in.
> +        */
> +       __u32 vm_context_handle;
> +
> +       /**
> +        * @handle: [OUT] Handle for created free list.
> +        */
> +       __u32 handle;
> +};
> +
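
As an illustrative sketch only (not part of the patch), here is a set of
values that satisfies the constraints listed above. `fl_dev_addr` is assumed
to be the device-virtual address of a buffer object created with
DRM_PVR_BO_DEVICE_PM_FW_PROTECT (and without
DRM_PVR_BO_CPU_ALLOW_USERSPACE_ACCESS), whose mapping covers at least
max_num_pages * sizeof(__u32) bytes.

  struct drm_pvr_ioctl_create_free_list_args fl_args = {
          .free_list_gpu_addr = fl_dev_addr,
          .initial_num_pages  = 1024, /* multiple of 4, < max_num_pages since grow_num_pages != 0 */
          .max_num_pages      = 4096, /* multiple of 4, > 0 */
          .grow_num_pages     = 512,  /* multiple of 4, <= max_num_pages */
          .grow_threshold     = 90,   /* percent, must be between 0 and 100 */
          .vm_context_handle  = vm_ctx_handle,
  };
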
> +/**
> + * struct drm_pvr_ioctl_destroy_free_list_args - Arguments for
> + * %DRM_IOCTL_PVR_DESTROY_FREE_LIST
> + */
> +struct drm_pvr_ioctl_destroy_free_list_args {
> +       /**
> +        * @handle: [IN] Handle for free list to be destroyed.
> +        */
> +       __u32 handle;
> +
> +       /** @_padding_4: Reserved. This field must be zeroed. */
> +       __u32 _padding_4;
> +};
> +
> +/**
> + * DOC: PowerVR IOCTL CREATE_HWRT_DATASET and DESTROY_HWRT_DATASET interfaces
> + */
> +
> +/**
> + * struct drm_pvr_create_hwrt_geom_data_args - Geometry data arguments used for
> + * &struct drm_pvr_ioctl_create_hwrt_dataset_args.geom_data_args.
> + */
> +struct drm_pvr_create_hwrt_geom_data_args {
> +       /** @tpc_dev_addr: [IN] Tail pointer cache GPU virtual address. */
> +       __u64 tpc_dev_addr;
> +
> +       /** @tpc_size: [IN] Size of TPC, in bytes. */
> +       __u32 tpc_size;
> +
> +       /** @tpc_stride: [IN] Stride between layers in TPC, in pages */
> +       __u32 tpc_stride;
> +
> +       /** @vheap_table_dev_addr: [IN] VHEAP table GPU virtual address. */
> +       __u64 vheap_table_dev_addr;
> +
> +       /** @rtc_dev_addr: [IN] Render Target Cache virtual address. */
> +       __u64 rtc_dev_addr;
> +};
> +
> +/**
> + * struct drm_pvr_create_hwrt_rt_data_args - Render target arguments used for
> + * &struct drm_pvr_ioctl_create_hwrt_dataset_args.rt_data_args.
> + */
> +struct drm_pvr_create_hwrt_rt_data_args {
> +       /** @pm_mlist_dev_addr: [IN] PM MLIST GPU virtual address. */
> +       __u64 pm_mlist_dev_addr;
> +
> +       /** @macrotile_array_dev_addr: [IN] Macrotile array GPU virtual address. */
> +       __u64 macrotile_array_dev_addr;
> +
> +       /** @region_header_dev_addr: [IN] Region header array GPU virtual address. */
> +       __u64 region_header_dev_addr;
> +};
> +
> +/**
> + * struct drm_pvr_ioctl_create_hwrt_dataset_args - Arguments for
> + * %DRM_IOCTL_PVR_CREATE_HWRT_DATASET
> + */
> +struct drm_pvr_ioctl_create_hwrt_dataset_args {
> +       /** @geom_data_args: [IN] Geometry data arguments. */
> +       struct drm_pvr_create_hwrt_geom_data_args geom_data_args;
> +
> +       /** @rt_data_args: [IN] Array of render target arguments. */
> +       struct drm_pvr_create_hwrt_rt_data_args rt_data_args[2];
> +
> +       /**
> +        * @free_list_handles: [IN] Array of free list handles.
> +        *
> +        * free_list_handles[0] must have an initial size of at least that reported
> +        * by &drm_pvr_dev_query_runtime_info.free_list_min_pages.
> +        */
> +       __u32 free_list_handles[2];
> +
> +       /** @width: [IN] Width in pixels. */
> +       __u32 width;
> +
> +       /** @height: [IN] Height in pixels. */
> +       __u32 height;
> +
> +       /** @samples: [IN] Number of samples. */
> +       __u32 samples;
> +
> +       /** @layers: [IN] Number of layers. */
> +       __u32 layers;
> +
> +       /** @isp_merge_lower_x: [IN] Lower X coefficient for triangle merging. */
> +       __u32 isp_merge_lower_x;
> +
> +       /** @isp_merge_lower_y: [IN] Lower Y coefficient for triangle merging. */
> +       __u32 isp_merge_lower_y;
> +
> +       /** @isp_merge_scale_x: [IN] Scale X coefficient for triangle merging. */
> +       __u32 isp_merge_scale_x;
> +
> +       /** @isp_merge_scale_y: [IN] Scale Y coefficient for triangle merging. */
> +       __u32 isp_merge_scale_y;
> +
> +       /** @isp_merge_upper_x: [IN] Upper X coefficient for triangle merging. */
> +       __u32 isp_merge_upper_x;
> +
> +       /** @isp_merge_upper_y: [IN] Upper Y coefficient for triangle merging. */
> +       __u32 isp_merge_upper_y;
> +
> +       /**
> +        * @region_header_size: [IN] Size of region header array. This common field is used by
> +        * both render targets in this data set.
> +        *
> +        * The units for this field differ depending on what version of the simple internal
> +        * parameter format the device uses. If format 2 is in use then this is interpreted as the
> +        * number of region headers. For other formats it is interpreted as the size in dwords.
> +        */
> +       __u32 region_header_size;
> +
> +       /**
> +        * @handle: [OUT] Handle for created HWRT dataset.
> +        */
> +       __u32 handle;
> +};
> +
> +/**
> + * struct drm_pvr_ioctl_destroy_hwrt_dataset_args - Arguments for
> + * %DRM_IOCTL_PVR_DESTROY_HWRT_DATASET
> + */
> +struct drm_pvr_ioctl_destroy_hwrt_dataset_args {
> +       /**
> +        * @handle: [IN] Handle for HWRT dataset to be destroyed.
> +        */
> +       __u32 handle;
> +
> +       /** @_padding_4: Reserved. This field must be zeroed. */
> +       __u32 _padding_4;
> +};
> +
> +/**
> + * DOC: PowerVR IOCTL SUBMIT_JOBS interface
> + */
> +
> +/**
> + * DOC: Flags for the drm_pvr_sync_op object.
> + *
> + * .. c:macro:: DRM_PVR_SYNC_OP_HANDLE_TYPE_MASK
> + *
> + *    Handle type mask for the drm_pvr_sync_op::flags field.
> + *
> + * .. c:macro:: DRM_PVR_SYNC_OP_FLAG_HANDLE_TYPE_SYNCOBJ
> + *
> + *    Indicates the handle passed in drm_pvr_sync_op::handle is a syncobj handle.
> + *    This is the default type.
> + *
> + * .. c:macro:: DRM_PVR_SYNC_OP_FLAG_HANDLE_TYPE_TIMELINE_SYNCOBJ
> + *
> + *    Indicates the handle passed in drm_pvr_sync_op::handle is a timeline syncobj handle.
> + *
> + * .. c:macro:: DRM_PVR_SYNC_OP_FLAG_SIGNAL
> + *
> + *    Signal operation requested. The out-fence bound to the job will be attached to
> + *    the syncobj whose handle is passed in drm_pvr_sync_op::handle.
> + *
> + * .. c:macro:: DRM_PVR_SYNC_OP_FLAG_WAIT
> + *
> + *    Wait operation requested. The job will wait for this particular syncobj or syncobj
> + *    point to be signaled before being started.
> + *    This is the default operation.
> + */
> +#define DRM_PVR_SYNC_OP_FLAG_HANDLE_TYPE_MASK 0xf
> +#define DRM_PVR_SYNC_OP_FLAG_HANDLE_TYPE_SYNCOBJ 0
> +#define DRM_PVR_SYNC_OP_FLAG_HANDLE_TYPE_TIMELINE_SYNCOBJ 1
> +#define DRM_PVR_SYNC_OP_FLAG_SIGNAL _BITULL(31)
> +#define DRM_PVR_SYNC_OP_FLAG_WAIT 0
> +
> +#define DRM_PVR_SYNC_OP_FLAGS_MASK (DRM_PVR_SYNC_OP_FLAG_HANDLE_TYPE_MASK | \
> +                                   DRM_PVR_SYNC_OP_FLAG_SIGNAL)
> +
> +/**
> + * struct drm_pvr_sync_op - Object describing a sync operation
> + */
> +struct drm_pvr_sync_op {
> +       /** @handle: Handle of sync object. */
> +       __u32 handle;
> +
> +       /** @flags: Combination of ``DRM_PVR_SYNC_OP_FLAG_`` flags. */
> +       __u32 flags;
> +
> +       /** @value: Timeline value for this drm_syncobj. MBZ for a binary syncobj. */
> +       __u64 value;
> +};
> +
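
As an illustrative sketch only (not part of the patch): a job that waits on a
binary syncobj and signals point 42 of a timeline syncobj would describe that
with two entries. `in_syncobj` and `out_timeline` are placeholder handles
created through the standard DRM syncobj ioctls.

  struct drm_pvr_sync_op sync_ops[2] = {
          {
                  .handle = in_syncobj,
                  .flags  = DRM_PVR_SYNC_OP_FLAG_WAIT |
                            DRM_PVR_SYNC_OP_FLAG_HANDLE_TYPE_SYNCOBJ,
                  .value  = 0,  /* MBZ for a binary syncobj */
          },
          {
                  .handle = out_timeline,
                  .flags  = DRM_PVR_SYNC_OP_FLAG_SIGNAL |
                            DRM_PVR_SYNC_OP_FLAG_HANDLE_TYPE_TIMELINE_SYNCOBJ,
                  .value  = 42, /* timeline point the job's out-fence is attached to */
          },
  };
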
> +/**
> + * DOC: Flags for SUBMIT_JOB ioctl geometry command.
> + *
> + * .. c:macro:: DRM_PVR_SUBMIT_JOB_GEOM_CMD_FIRST
> + *
> + *    Indicates if this is the first command to be issued for a render.
> + *
> + * .. c:macro:: DRM_PVR_SUBMIT_JOB_GEOM_CMD_LAST
> + *
> + *    Indicates if this is the last command to be issued for a render.
> + *
> + * .. c:macro:: DRM_PVR_SUBMIT_JOB_GEOM_CMD_SINGLE_CORE
> + *
> + *    Forces the use of a single core in a multi core device.
> + *
> + * .. c:macro:: DRM_PVR_SUBMIT_JOB_GEOM_CMD_FLAGS_MASK
> + *
> + *    Logical OR of all the geometry cmd flags.
> + */
> +#define DRM_PVR_SUBMIT_JOB_GEOM_CMD_FIRST _BITULL(0)
> +#define DRM_PVR_SUBMIT_JOB_GEOM_CMD_LAST _BITULL(1)
> +#define DRM_PVR_SUBMIT_JOB_GEOM_CMD_SINGLE_CORE _BITULL(2)
> +#define DRM_PVR_SUBMIT_JOB_GEOM_CMD_FLAGS_MASK       \
> +       (DRM_PVR_SUBMIT_JOB_GEOM_CMD_FIRST |          \
> +        DRM_PVR_SUBMIT_JOB_GEOM_CMD_LAST |           \
> +        DRM_PVR_SUBMIT_JOB_GEOM_CMD_SINGLE_CORE)
> +
> +/**
> + * DOC: Flags for SUBMIT_JOB ioctl fragment command.
> + *
> + * .. c:macro:: DRM_PVR_SUBMIT_JOB_FRAG_CMD_SINGLE_CORE
> + *
> + *    Use single core in a multi core setup.
> + *
> + * .. c:macro:: DRM_PVR_SUBMIT_JOB_FRAG_CMD_DEPTHBUFFER
> + *
> + *    Indicates whether a depth buffer is present.
> + *
> + * .. c:macro:: DRM_PVR_SUBMIT_JOB_FRAG_CMD_STENCILBUFFER
> + *
> + *    Indicates whether a stencil buffer is present.
> + *
> + * .. c:macro:: DRM_PVR_SUBMIT_JOB_FRAG_CMD_PREVENT_CDM_OVERLAP
> + *
> + *    Disallow compute overlapped with this render.
> + *
> + * .. c:macro:: DRM_PVR_SUBMIT_JOB_FRAG_CMD_GET_VIS_RESULTS
> + *
> + *    Indicates whether this render produces visibility results.
> + *
> + * .. c:macro:: DRM_PVR_SUBMIT_JOB_FRAG_CMD_SCRATCHBUFFER
> + *
> + *    Indicates whether partial renders write to a scratch buffer instead of
> + *    the final surface. It also forces the full screen copy expected to be
> + *    present on the last render after all partial renders have completed.
> + *
> + * .. c:macro:: DRM_PVR_SUBMIT_JOB_FRAG_CMD_FLAGS_MASK
> + *
> + *    Logical OR of all the fragment cmd flags.
> + */
> +#define DRM_PVR_SUBMIT_JOB_FRAG_CMD_SINGLE_CORE _BITULL(0)
> +#define DRM_PVR_SUBMIT_JOB_FRAG_CMD_DEPTHBUFFER _BITULL(1)
> +#define DRM_PVR_SUBMIT_JOB_FRAG_CMD_STENCILBUFFER _BITULL(2)
> +#define DRM_PVR_SUBMIT_JOB_FRAG_CMD_PREVENT_CDM_OVERLAP _BITULL(3)
> +#define DRM_PVR_SUBMIT_JOB_FRAG_CMD_SCRATCHBUFFER _BITULL(4)
> +#define DRM_PVR_SUBMIT_JOB_FRAG_CMD_GET_VIS_RESULTS _BITULL(5)
> +#define DRM_PVR_SUBMIT_JOB_FRAG_CMD_PARTIAL_RENDER _BITULL(6)
> +#define DRM_PVR_SUBMIT_JOB_FRAG_CMD_FLAGS_MASK            \
> +       (DRM_PVR_SUBMIT_JOB_FRAG_CMD_SINGLE_CORE |         \
> +        DRM_PVR_SUBMIT_JOB_FRAG_CMD_DEPTHBUFFER |         \
> +        DRM_PVR_SUBMIT_JOB_FRAG_CMD_STENCILBUFFER |       \
> +        DRM_PVR_SUBMIT_JOB_FRAG_CMD_PREVENT_CDM_OVERLAP | \
> +        DRM_PVR_SUBMIT_JOB_FRAG_CMD_SCRATCHBUFFER |       \
> +        DRM_PVR_SUBMIT_JOB_FRAG_CMD_GET_VIS_RESULTS |     \
> +        DRM_PVR_SUBMIT_JOB_FRAG_CMD_PARTIAL_RENDER)
> +
> +/**
> + * DOC: Flags for SUBMIT_JOB ioctl compute command.
> + *
> + * .. c:macro:: DRM_PVR_SUBMIT_JOB_COMPUTE_CMD_PREVENT_ALL_OVERLAP
> + *
> + *    Disallow other jobs overlapped with this compute.
> + *
> + * .. c:macro:: DRM_PVR_SUBMIT_JOB_COMPUTE_CMD_SINGLE_CORE
> + *
> + *    Forces the use of a single core in a multi core device.
> + *
> + * .. c:macro:: DRM_PVR_SUBMIT_JOB_COMPUTE_CMD_FLAGS_MASK
> + *
> + *    Logical OR of all the compute cmd flags.
> + */
> +#define DRM_PVR_SUBMIT_JOB_COMPUTE_CMD_PREVENT_ALL_OVERLAP _BITULL(0)
> +#define DRM_PVR_SUBMIT_JOB_COMPUTE_CMD_SINGLE_CORE _BITULL(1)
> +#define DRM_PVR_SUBMIT_JOB_COMPUTE_CMD_FLAGS_MASK         \
> +       (DRM_PVR_SUBMIT_JOB_COMPUTE_CMD_PREVENT_ALL_OVERLAP | \
> +        DRM_PVR_SUBMIT_JOB_COMPUTE_CMD_SINGLE_CORE)
> +
> +/**
> + * DOC: Flags for SUBMIT_JOB ioctl transfer command.
> + *
> + * .. c:macro:: DRM_PVR_SUBMIT_JOB_TRANSFER_CMD_SINGLE_CORE
> + *
> + *    Forces job to use a single core in a multi core device.
> + *
> + * .. c:macro:: DRM_PVR_SUBMIT_JOB_TRANSFER_CMD_FLAGS_MASK
> + *
> + *    Logical OR of all the transfer cmd flags.
> + */
> +#define DRM_PVR_SUBMIT_JOB_TRANSFER_CMD_SINGLE_CORE _BITULL(0)
> +
> +#define DRM_PVR_SUBMIT_JOB_TRANSFER_CMD_FLAGS_MASK \
> +       DRM_PVR_SUBMIT_JOB_TRANSFER_CMD_SINGLE_CORE
> +
> +/**
> + * enum drm_pvr_job_type - Arguments for &struct drm_pvr_job.job_type
> + */
> +enum drm_pvr_job_type {
> +       /** @DRM_PVR_JOB_TYPE_GEOMETRY: Job type is geometry. */
> +       DRM_PVR_JOB_TYPE_GEOMETRY = 0,
> +
> +       /** @DRM_PVR_JOB_TYPE_FRAGMENT: Job type is fragment. */
> +       DRM_PVR_JOB_TYPE_FRAGMENT,
> +
> +       /** @DRM_PVR_JOB_TYPE_COMPUTE: Job type is compute. */
> +       DRM_PVR_JOB_TYPE_COMPUTE,
> +
> +       /** @DRM_PVR_JOB_TYPE_TRANSFER_FRAG: Job type is a fragment transfer. */
> +       DRM_PVR_JOB_TYPE_TRANSFER_FRAG,
> +};
> +
> +/**
> + * struct drm_pvr_hwrt_data_ref - Reference HWRT data
> + */
> +struct drm_pvr_hwrt_data_ref {
> +       /** @set_handle: HWRT data set handle. */
> +       __u32 set_handle;
> +
> +       /** @data_index: Index of the HWRT data inside the data set. */
> +       __u32 data_index;
> +};
> +
> +/**
> + * struct drm_pvr_job - Job arguments passed to the %DRM_IOCTL_PVR_SUBMIT_JOBS ioctl
> + */
> +struct drm_pvr_job {
> +       /**
> +        * @type: [IN] Type of job being submitted
> +        *
> +        * This must be one of the values defined by &enum drm_pvr_job_type.
> +        */
> +       __u32 type;
> +
> +       /**
> +        * @context_handle: [IN] Context handle.
> +        *
> +        * When @job_type is %DRM_PVR_JOB_TYPE_RENDER, %DRM_PVR_JOB_TYPE_COMPUTE or
> +        * %DRM_PVR_JOB_TYPE_TRANSFER_FRAG, this must be a valid handle returned by
> +        * %DRM_IOCTL_PVR_CREATE_CONTEXT. The type of context must be compatible
> +        * with the type of job being submitted.
> +        *
> +        * When @job_type is %DRM_PVR_JOB_TYPE_NULL, this must be zero.
> +        */
> +       __u32 context_handle;
> +
> +       /**
> +        * @flags: [IN] Flags for command.
> +        *
> +        * Those are job-dependent. See all ``DRM_PVR_SUBMIT_JOB_*``.
> +        */
> +       __u32 flags;
> +
> +       /**
> +        * @cmd_stream_len: [IN] Length of command stream, in bytes.
> +        */
> +       __u32 cmd_stream_len;
> +
> +       /**
> +        * @cmd_stream: [IN] Pointer to command stream for command.
> +        *
> +        * The command stream must be u64-aligned.
> +        */
> +       __u64 cmd_stream;
> +
> +       /** @sync_ops: [IN] Fragment sync operations. */
> +       struct drm_pvr_obj_array sync_ops;
> +
> +       /**
> +        * @hwrt: [IN] HWRT data used by render jobs (geometry or fragment).
> +        *
> +        * Must be zero for non-render jobs.
> +        */
> +       struct drm_pvr_hwrt_data_ref hwrt;
> +};
> +
> +/**
> + * struct drm_pvr_ioctl_submit_jobs_args - Arguments for %DRM_IOCTL_PVR_SUBMIT_JOBS
> + *
> + * If the syscall returns an error it is important to check the value of
> + * @jobs.count. This indicates the index into @jobs.array where the
> + * error occurred.
> + */
> +struct drm_pvr_ioctl_submit_jobs_args {
> +       /** @jobs: [IN] Array of jobs to submit. */
> +       struct drm_pvr_obj_array jobs;
> +};
> +
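
As an illustrative sketch only (not part of the patch): submitting a single
geometry job might look roughly like the following. struct drm_pvr_obj_array
is defined earlier in this header; the sketch assumes it carries a stride, a
count and a userspace pointer, and that `cmd_stream` was built according to
the job stream format documentation. `render_ctx_handle` and `hwrt_handle`
are placeholders for previously created objects.

  struct drm_pvr_job job = {
          .type = DRM_PVR_JOB_TYPE_GEOMETRY,
          .context_handle = render_ctx_handle,        /* handle of a render context */
          .flags = 0,                                 /* see DRM_PVR_SUBMIT_JOB_GEOM_CMD_* */
          .cmd_stream_len = cmd_stream_len,
          .cmd_stream = (__u64)(uintptr_t)cmd_stream, /* must be u64-aligned */
          .sync_ops = {
                  .stride = sizeof(struct drm_pvr_sync_op),
                  .count  = 2,
                  .array  = (__u64)(uintptr_t)sync_ops,
          },
          .hwrt = { .set_handle = hwrt_handle, .data_index = 0 },
  };

  struct drm_pvr_ioctl_submit_jobs_args submit = {
          .jobs = {
                  .stride = sizeof(struct drm_pvr_job),
                  .count  = 1,
                  .array  = (__u64)(uintptr_t)&job,
          },
  };

  if (drmIoctl(fd, DRM_IOCTL_PVR_SUBMIT_JOBS, &submit))
          /* on failure, submit.jobs.count indexes the job that caused the error */
          return -errno;
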
> +#if defined(__cplusplus)
> +}
> +#endif
> +
> +#endif /* PVR_DRM_UAPI_H */
> --
> 2.41.0
>
>

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v5 02/17] dt-bindings: gpu: Add Imagination Technologies PowerVR GPU
  2023-08-16  8:25   ` Sarah Walker
@ 2023-08-18  9:36     ` Linus Walleij
  -1 siblings, 0 replies; 77+ messages in thread
From: Linus Walleij @ 2023-08-18  9:36 UTC (permalink / raw)
  To: Sarah Walker,
	open list:OPEN FIRMWARE AND FLATTENED DEVICE TREE BINDINGS,
	Rob Herring, Krzysztof Kozlowski
  Cc: dri-devel, matthew.brost, luben.tuikov, tzimmermann,
	linux-kernel, mripard, afd, boris.brezillon, dakr, donald.robson,
	hns, christian.koenig, faith.ekstrand

Hi Sarah,

thanks for your patch!

Patches adding device tree bindings need to be CC:ed to
devicetree@vger.kernel.org
and the DT binding maintainers, I have added it for now.

On Wed, Aug 16, 2023 at 10:26 AM Sarah Walker <sarah.walker@imgtec.com> wrote:

> Add the device tree binding documentation for the Series AXE GPU used in
> TI AM62 SoCs.
>
> Co-developed-by: Frank Binns <frank.binns@imgtec.com>
> Signed-off-by: Frank Binns <frank.binns@imgtec.com>
> Signed-off-by: Sarah Walker <sarah.walker@imgtec.com>
(...)
> +properties:
> +  compatible:
> +    items:
> +      - enum:
> +          - ti,am62-gpu
> +      - const: img,powervr-seriesaxe

Should there not at least be a dash there?

img,powervr-series-axe?

It is spelled in two words in the commit message,
Series AXE not SeriesAXE?

Moreover, if this pertains to the AXE-1-16 and AXE-2-16 it is kind of a wildcard
and we usually don't do that; I would use the exact version instead,
such as:
const: img,powervr-axe-1-16
Any reason not to do this?

I asked about the relationship between these strings and the product
designations earlier I think :/

Yours,
Linus Walleij

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v5 02/17] dt-bindings: gpu: Add Imagination Technologies PowerVR GPU
  2023-08-16  8:25   ` Sarah Walker
@ 2023-08-18 10:32     ` Krzysztof Kozlowski
  -1 siblings, 0 replies; 77+ messages in thread
From: Krzysztof Kozlowski @ 2023-08-18 10:32 UTC (permalink / raw)
  To: Sarah Walker, dri-devel
  Cc: matthew.brost, hns, linux-kernel, mripard, afd, luben.tuikov,
	dakr, donald.robson, tzimmermann, boris.brezillon,
	christian.koenig, faith.ekstrand

On 16/08/2023 10:25, Sarah Walker wrote:
> Add the device tree binding documentation for the Series AXE GPU used in
> TI AM62 SoCs.
> 
> Co-developed-by: Frank Binns <frank.binns@imgtec.com>
> Signed-off-by: Frank Binns <frank.binns@imgtec.com>
> Signed-off-by: Sarah Walker <sarah.walker@imgtec.com>
> ---
> Changes since v4:
> - Add clocks constraint for ti,am62-gpu


Please use scripts/get_maintainers.pl to get a list of necessary people
and lists to CC. It might happen that the command, when run on an older
kernel, gives you outdated entries. Therefore please be sure you base
your patches on a recent Linux kernel.

You missed at least DT list (maybe more), so this won't be tested by
automated tooling. Performing review on untested code might be a waste
of time, thus I will skip this patch entirely till you follow the
process allowing the patch to be tested.

Please kindly resend and include all necessary To/Cc entries.


You already got this comment. I think more than once. Fix your
processes, so finally this is resolved.

Best regards,
Krzysztof


^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v5 02/17] dt-bindings: gpu: Add Imagination Technologies PowerVR GPU
  2023-08-18  9:36     ` Linus Walleij
@ 2023-08-18 10:33       ` Krzysztof Kozlowski
  -1 siblings, 0 replies; 77+ messages in thread
From: Krzysztof Kozlowski @ 2023-08-18 10:33 UTC (permalink / raw)
  To: Linus Walleij, Sarah Walker,
	open list:OPEN FIRMWARE AND FLATTENED DEVICE TREE BINDINGS,
	Rob Herring, Krzysztof Kozlowski
  Cc: matthew.brost, hns, linux-kernel, mripard, afd, luben.tuikov,
	dakr, dri-devel, tzimmermann, boris.brezillon, christian.koenig,
	faith.ekstrand, donald.robson

On 18/08/2023 11:36, Linus Walleij wrote:
> Hi Sarah,
> 
> thanks for your patch!
> 
> Patches adding device tree bindings need to be CC:ed to
> devicetree@vger.kernel.org
> and the DT binding maintainers, I have added it for now.
> 

This won't help, I think. Patch will not be tested.

I was already asking for using get_maintainers in the past... sigh...

Best regards,
Krzysztof


^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v5 17/17] arm64: dts: ti: k3-am62-main: Add GPU device node [DO NOT MERGE]
  2023-08-16  8:25   ` Sarah Walker
@ 2023-08-18 10:34     ` Krzysztof Kozlowski
  -1 siblings, 0 replies; 77+ messages in thread
From: Krzysztof Kozlowski @ 2023-08-18 10:34 UTC (permalink / raw)
  To: Sarah Walker, dri-devel
  Cc: matthew.brost, hns, linux-kernel, mripard, afd, luben.tuikov,
	dakr, donald.robson, tzimmermann, boris.brezillon,
	christian.koenig, faith.ekstrand

On 16/08/2023 10:25, Sarah Walker wrote:
> Add the Series AXE GPU node to the AM62 device tree.
> 
> Changes since v4:
> - Remove interrupt name
> - Make property order consistent across dts and bindings doc
> - Fixed formatting (replaced spaces with tabs)
> 

Nope, DTS go via SoC tree. You skipped all lists and maybe also all
maintainers.

Really, start finally using the Linux tools - scripts/get_maintainers.pl


Best regards,
Krzysztof


^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [EXTERNAL] Re: [PATCH v5 13/17] drm/imagination: Implement context creation/destruction ioctls
  2023-08-17 22:42     ` Jann Horn
@ 2023-08-18 11:00       ` Sarah Walker
  -1 siblings, 0 replies; 77+ messages in thread
From: Sarah Walker @ 2023-08-18 11:00 UTC (permalink / raw)
  To: jannh
  Cc: matthew.brost, mripard, hns, linux-kernel, dri-devel, afd,
	luben.tuikov, dakr, Donald Robson, tzimmermann, boris.brezillon,
	christian.koenig, faith.ekstrand

On Fri, 2023-08-18 at 00:42 +0200, Jann Horn wrote:
> On Wed, Aug 16, 2023 at 10:25 AM Sarah Walker <sarah.walker@imgtec.com> wrote:
> > Implement ioctls for the creation and destruction of contexts. Contexts are
> > used for job submission and each is associated with a particular job type.
> [...]
> > +/**
> > + * pvr_context_create() - Create a context.
> > + * @pvr_file: File to attach the created context to.
> > + * @args: Context creation arguments.
> > + *
> > + * Return:
> > + *  * 0 on success, or
> > + *  * A negative error code on failure.
> > + */
> > +int pvr_context_create(struct pvr_file *pvr_file, struct drm_pvr_ioctl_create_context_args *args)
> > +{
> > +       struct pvr_device *pvr_dev = pvr_file->pvr_dev;
> > +       struct pvr_context *ctx;
> > +       int ctx_size;
> > +       int err;
> > +
> > +       /* Context creation flags are currently unused and must be zero. */
> > +       if (args->flags)
> > +               return -EINVAL;
> > +
> > +       ctx_size = get_fw_obj_size(args->type);
> > +       if (ctx_size < 0)
> > +               return ctx_size;
> > +
> > +       ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
> > +       if (!ctx)
> > +               return -ENOMEM;
> > +
> > +       ctx->data_size = ctx_size;
> > +       ctx->type = args->type;
> > +       ctx->flags = args->flags;
> > +       ctx->pvr_dev = pvr_dev;
> > +       kref_init(&ctx->ref_count);
> > +
> > +       err = remap_priority(pvr_file, args->priority, &ctx->priority);
> > +       if (err)
> > +               goto err_free_ctx;
> > +
> > +       ctx->vm_ctx = pvr_vm_context_lookup(pvr_file, args->vm_context_handle);
> > +       if (IS_ERR(ctx->vm_ctx)) {
> > +               err = PTR_ERR(ctx->vm_ctx);
> > +               goto err_free_ctx;
> > +       }
> > +
> > +       ctx->data = kzalloc(ctx_size, GFP_KERNEL);
> > +       if (!ctx->data) {
> > +               err = -ENOMEM;
> > +               goto err_put_vm;
> > +       }
> > +
> > +       err = init_fw_objs(ctx, args, ctx->data);
> > +       if (err)
> > +               goto err_free_ctx_data;
> > +
> > +       err = pvr_fw_object_create(pvr_dev, ctx_size, PVR_BO_FW_FLAGS_DEVICE_UNCACHED,
> > +                                  ctx_fw_data_init, ctx, &ctx->fw_obj);
> > +       if (err)
> > +               goto err_free_ctx_data;
> > +
> > +       err = xa_alloc(&pvr_dev->ctx_ids, &ctx->ctx_id, ctx, xa_limit_32b, GFP_KERNEL);
> > +       if (err)
> > +               goto err_destroy_fw_obj;
> > +
> > +       err = xa_alloc(&pvr_file->ctx_handles, &args->handle, ctx, xa_limit_32b, GFP_KERNEL);
> > +       if (err)
> > +               goto err_release_id;
> 
> This bailout looks a bit dodgy. We have already inserted ctx into
> &pvr_dev->ctx_ids, and now we just take it out again. If someone could
> concurrently call pvr_context_lookup_id() on the ID we just allocated
> (I don't understand enough about what's going on here at a high level
> to be able to tell if that's possible), I think they would be able to
> elevate the ctx->ref_count from 1 to 2, and then on the bailout path
> we'll just free the ctx without looking at the refcount.
> 
> If this can't happen, it might be a good idea to add a comment
> explaining why. If it can happen, I guess one way to fix it would be
> to replace this last bailout with a call to pvr_context_put()?

Yes, I think you're correct here. I don't think there's anything in the current
patch set that can actually trigger this, but it definitely needs fixing.
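
A minimal sketch of that last suggestion (assuming the kref release path
already tears down the ID allocation, firmware object, context data and VM
reference) would be to let the final error path go through the normal
refcount machinery instead of freeing the context directly:

          err = xa_alloc(&pvr_file->ctx_handles, &args->handle, ctx,
                         xa_limit_32b, GFP_KERNEL);
          if (err) {
                  pvr_context_put(ctx);
                  return err;
          }
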

Thanks,
Sarah

> 
> 
> > +
> > +       return 0;
> > +
> > +err_release_id:
> > +       xa_erase(&pvr_dev->ctx_ids, ctx->ctx_id);
> > +
> > +err_destroy_fw_obj:
> > +       pvr_fw_object_destroy(ctx->fw_obj);
> > +
> > +err_free_ctx_data:
> > +       kfree(ctx->data);
> > +
> > +err_put_vm:
> > +       pvr_vm_context_put(ctx->vm_ctx);
> > +
> > +err_free_ctx:
> > +       kfree(ctx);
> > +       return err;
> > +}

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [EXTERNAL] Re: [PATCH v5 03/17] drm/imagination/uapi: Add PowerVR driver UAPI
  2023-08-18  0:43   ` Faith Ekstrand
@ 2023-08-18 13:49       ` Sarah Walker
  0 siblings, 0 replies; 77+ messages in thread
From: Sarah Walker @ 2023-08-18 13:49 UTC (permalink / raw)
  To: faith
  Cc: matthew.brost, hns, linux-kernel, dri-devel, afd, Matt Coster,
	luben.tuikov, dakr, mripard, tzimmermann, boris.brezillon,
	christian.koenig, faith.ekstrand, Donald Robson

On Thu, 2023-08-17 at 19:43 -0500, Faith Ekstrand wrote:
> On Wed, Aug 16, 2023 at 3:26 AM Sarah Walker <sarah.walker@imgtec.com> wrote:
> > 
> > +/**
> > + * struct drm_pvr_dev_query_runtime_info - Container used to fetch information
> > + * about the graphics runtime.
> > + *
> > + * When fetching this type &struct drm_pvr_ioctl_dev_query_args.type must be set
> > + * to %DRM_PVR_DEV_QUERY_RUNTIME_INFO_GET.
> > + */
> > +struct drm_pvr_dev_query_runtime_info {
> > +       /**
> > +        * @free_list_min_pages: Minimum allowed free list size,
> > +        * in PM physical pages.
> > +        */
> > +       __u64 free_list_min_pages;
> > +
> > +       /**
> > +        * @free_list_max_pages: Maximum allowed free list size,
> > +        * in PM physical pages.
> > +        */
> > +       __u64 free_list_max_pages;
> > +
> > +       /**
> > +        * @common_store_alloc_region_size: Size of the Allocation
> > +        * Region within the Common Store used for coefficient and shared
> > +        * registers, in dwords.
> > +        */
> > +       __u32 common_store_alloc_region_size;
> 
> Any reason why this is in dwords?  It's not really my place to have an opinion but that seems like kind-of a funny unit for the size of an allocation region. Why not just bytes?

This is a holdover from the closed source driver. It can be changed to bytes if
that is particularly desired?

> +/**
> > + * struct drm_pvr_dev_query_quirks - Container used to fetch information about
> > + * hardware fixes for which the device may require support in the user mode
> > + * driver.
> > + *
> > + * When fetching this type &struct drm_pvr_ioctl_dev_query_args.type must be set
> > + * to %DRM_PVR_DEV_QUERY_QUIRKS_GET.
> > + */
> > +struct drm_pvr_dev_query_quirks {
> > +       /**
> > +        * @quirks: A userspace address for the hardware quirks __u32 array.
> > +        *
> > +        * The first @musthave_count items in the list are quirks that the
> > +        * client must support for this device. If userspace does not support
> > +        * all these quirks then functionality is not guaranteed and client
> > +        * initialisation must fail.
> > +        * The remaining quirks in the list affect userspace and the kernel or
> > +        * firmware. They are disabled by default and require userspace to
> > +        * opt-in. The opt-in mechanism depends on the quirk.
> > +        */
> > +       __u64 quirks;
> 
> Where are these quirk IDs defined and where do they come from? If they're effectively coming from hardware, possibly via firmware, that's probably okay.  The important thing is that quirks should only ever get removed for any given piece of hardware otherwise you risk breaking userspace.

Quirks are defined in the firmware header. The actual IDs are from our issue
tracking system; they're shared with the closed source driver. We are aware of
the need to not remove quirks for a given GPU.

> > +/**
> > + * struct drm_pvr_dev_query_enhancements - Container used to fetch information
> > + * about optional enhancements supported by the device that require support in
> > + * the user mode driver.
> > + *
> > + * When fetching this type &struct drm_pvr_ioctl_dev_query_args.type must be set
> > + * to %DRM_PVR_DEV_ENHANCEMENTS_GET.
> > + */
> > +struct drm_pvr_dev_query_enhancements {
> > +       /**
> > +        * @enhancements: A userspace address for the hardware enhancements
> > +        * __u32 array.
> > +        *
> > +        * These enhancements affect userspace and the kernel or firmware. They
> > +        * are disabled by default and require userspace to opt-in. The opt-in
> > +        * mechanism depends on the quirk.
> > +        */
> > +       __u64 enhancements;
> 
> Can you provide some examples of "enhancements"? Not that you need to put it in the docs. I'm just trying to understand what this API is doing so I can better review. Again, where do these come from? Also, how is an enhancement different from a quirk?

Enhancements are comparatively minor improvements in GPU subrevisions that don't
qualify as a full "product feature". A couple of examples would be 35421, which
improves compute thread barrier support, and 42064, which adds mask support for
the pixel backend. As with quirks, enhancements are defined in the firmware
header, with the IDs coming from our issue tracker.

Thanks,
Sarah


^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [EXTERNAL] Re: [PATCH v5 08/17] drm/imagination: Add GEM and VM related code
  2023-08-17 22:42     ` Jann Horn
@ 2023-08-18 14:19       ` Sarah Walker
  -1 siblings, 0 replies; 77+ messages in thread
From: Sarah Walker @ 2023-08-18 14:19 UTC (permalink / raw)
  To: jannh
  Cc: tzimmermann, afd, Matt Coster, dri-devel, matthew.brost,
	linux-kernel, luben.tuikov, boris.brezillon, faith.ekstrand,
	dakr, mripard, Donald Robson, hns, christian.koenig

On Fri, 2023-08-18 at 00:42 +0200, Jann Horn wrote:
> Hi!
> 
> Thanks, I think it's great that Imagination is writing an upstream
> driver for PowerVR. :)
> 
> On Wed, Aug 16, 2023 at 10:25 AM Sarah Walker <sarah.walker@imgtec.com> wrote:
> > +#define PVR_BO_CPU_CACHED BIT_ULL(63)
> > +
> > +#define PVR_BO_FW_NO_CLEAR_ON_RESET BIT_ULL(62)
> > +
> > +/* Bits 62..3 are undefined. */
> > +/* Bits 2..0 are defined in the UAPI. */
> > +
> > +/* Other utilities. */
> > +#define PVR_BO_UNDEFINED_MASK GENMASK_ULL(61, 3)
> > +#define PVR_BO_RESERVED_MASK (PVR_BO_UNDEFINED_MASK | GENMASK_ULL(63, 63))
> 
> In commit 1a9c568fb559 ("drm/imagination: Rework firmware object
> initialisation") in powervr-next, PVR_BO_FW_NO_CLEAR_ON_RESET (bit 62)
> was added in the kernel-only flags group, but the mask
> PVR_BO_RESERVED_MASK (which is used in pvr_ioctl_create_bo to detect
> kernel-only and reserved flags) looks like it wasn't changed to
> include bit 62. I think that means it becomes possible for userspace
> to pass this bit in via pvr_ioctl_create_bo()?

Yes, this is a bug. Will fix (and refactor a bit).
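
A minimal sketch of that fix (illustrative only, the final refactor may look
different) would be to widen the reserved mask so that both kernel-only bits
are rejected in pvr_ioctl_create_bo():

  /* Bits 61..3 are undefined; bits 63..62 are kernel-only. */
  #define PVR_BO_UNDEFINED_MASK GENMASK_ULL(61, 3)
  #define PVR_BO_RESERVED_MASK  (PVR_BO_UNDEFINED_MASK | GENMASK_ULL(63, 62))
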

> > +/**
> > + * pvr_page_table_l2_entry_raw_set() - Write a valid entry into a raw level 2
> > + *                                     page table.
> > + * @entry: Target raw level 2 page table entry.
> > + * @child_table_dma_addr: DMA address of the level 1 page table to be
> > + *                        associated with @entry.
> > + *
> > + * When calling this function, @child_table_dma_addr must be a valid DMA
> > + * address and a multiple of %ROGUE_MMUCTRL_PC_DATA_PD_BASE_ALIGNSIZE.
> > + */
> > +static void
> > +pvr_page_table_l2_entry_raw_set(struct pvr_page_table_l2_entry_raw *entry,
> > +                               dma_addr_t child_table_dma_addr)
> > +{
> > +       child_table_dma_addr >>= ROGUE_MMUCTRL_PC_DATA_PD_BASE_ALIGNSHIFT;
> > +
> > +       entry->val =
> > +               PVR_PAGE_TABLE_FIELD_PREP(2, PC, VALID, true) |
> > +               PVR_PAGE_TABLE_FIELD_PREP(2, PC, ENTRY_PENDING, false) |
> > +               PVR_PAGE_TABLE_FIELD_PREP(2, PC, PD_BASE, child_table_dma_addr);
> > +}
> 
> For this function and others that manipulate page table entries,
> please use some kernel helper that ensures that the store can't tear
> (at least WRITE_ONCE() - that can still tear on 32-bit, but I see the
> driver depends on ARM64, so that's not a problem).

Will do.
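
For illustration, the level 2 entry helper quoted above could publish the
entry with a single non-tearing store along these lines (sketch only; the
actual rework may differ):

          WRITE_ONCE(entry->val,
                     PVR_PAGE_TABLE_FIELD_PREP(2, PC, VALID, true) |
                     PVR_PAGE_TABLE_FIELD_PREP(2, PC, ENTRY_PENDING, false) |
                     PVR_PAGE_TABLE_FIELD_PREP(2, PC, PD_BASE, child_table_dma_addr));
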

> > +/**
> > + * pvr_page_table_l2_insert() - Insert an entry referring to a level 1 page
> > + * table into a level 2 page table.
> > + * @op_ctx: Target MMU op context pointing at the entry to insert the L1 page
> > + * table into.
> > + * @child_table: Target level 1 page table to be referenced by the new entry.
> > + *
> > + * It is the caller's responsibility to ensure @op_ctx.curr_page points to a
> > + * valid L2 entry.
> > + */
> > +static void
> > +pvr_page_table_l2_insert(struct pvr_mmu_op_context *op_ctx,
> > +                        struct pvr_page_table_l1 *child_table)
> > +{
> > +       struct pvr_page_table_l2 *l2_table =
> > +               &op_ctx->mmu_ctx->page_table_l2;
> > +       struct pvr_page_table_l2_entry_raw *entry_raw =
> > +               pvr_page_table_l2_get_entry_raw(l2_table,
> > +                                               op_ctx->curr_page.l2_idx);
> > +
> > +       pvr_page_table_l2_entry_raw_set(entry_raw,
> > +                                       child_table->backing_page.dma_addr);
> 
> Can you maybe add comments in functions that set page table entries to
> document who is responsible for using a memory barrier (like wmb()) to
> ensure that the creation of a page table entry is ordered after the
> thing it points to is fully initialized, so that the GPU can't end up
> concurrently walking into a page table and observe its old contents
> from before it was zero-initialized?

Will do.
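
An illustrative sketch of the kind of ordering comment being asked for, at the
point where the L2 entry is inserted (placement and barrier choice are
assumptions, not the final patch):

          /*
           * Ensure the child table's contents are visible to the device
           * before the L2 entry pointing at it is published.
           */
          dma_wmb();
          pvr_page_table_l2_entry_raw_set(entry_raw,
                                          child_table->backing_page.dma_addr);
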

> > +static int
> > +pvr_page_table_l1_get_or_insert(struct pvr_mmu_op_context *op_ctx,
> > +                               bool should_insert)
> > +{
> > +       struct pvr_page_table_l2 *l2_table =
> > +               &op_ctx->mmu_ctx->page_table_l2;
> > +       struct pvr_page_table_l1 *table;
> > +       int err;
> > +
> > +       if (pvr_page_table_l2_entry_is_valid(l2_table,
> > +                                            op_ctx->curr_page.l2_idx)) {
> > +               op_ctx->curr_page.l1_table =
> > +                       l2_table->entries[op_ctx->curr_page.l2_idx];
> > +               return 0;
> > +       }
> > +
> > +       if (!should_insert)
> > +               return -ENXIO;
> > +
> > +       /* Take a prealloced table. */
> > +       table = op_ctx->l1_free_tables;
> > +       if (!table)
> > +               return -ENOMEM;
> > +
> > +       err = pvr_page_table_l1_init(table, op_ctx->mmu_ctx->pvr_dev);
> 
> I think when we have a preallocated table here, it was allocated in
> pvr_page_table_l1_alloc(), which already called
> pvr_page_table_l1_init()? So it looks to me like this second
> pvr_page_table_l1_init() call will allocate another page and leak the
> old allocation.

Yes, this is also a bug. Will address.

> +/**
> > + * pvr_mmu_op_context_create() - Create an MMU op context.
> > + * @ctx: MMU context associated with owning VM context.
> > + * @sgt: Scatter gather table containing pages pinned for use by this context.
> > + * @sgt_offset: Start offset of the requested device-virtual memory mapping.
> > + * @size: Size in bytes of the requested device-virtual memory mapping. For an
> > + * unmapping, this should be zero so that no page tables are allocated.
> > + *
> > + * Returns:
> > + *  * Newly created MMU op context object on success, or
> > + *  * -%ENOMEM if no memory is available,
> > + *  * Any error code returned by pvr_page_table_l2_init().
> > + */
> > +struct pvr_mmu_op_context *
> > +pvr_mmu_op_context_create(struct pvr_mmu_context *ctx, struct sg_table *sgt,
> > +                         u64 sgt_offset, u64 size)
> > +{
> > +       int err;
> > +
> > +       struct pvr_mmu_op_context *op_ctx =
> > +               kzalloc(sizeof(*op_ctx), GFP_KERNEL);
> > +
> > +       if (!op_ctx)
> > +               return ERR_PTR(-ENOMEM);
> > +
> > +       op_ctx->mmu_ctx = ctx;
> > +       op_ctx->map.sgt = sgt;
> > +       op_ctx->map.sgt_offset = sgt_offset;
> > +       op_ctx->sync_level_required = PVR_MMU_SYNC_LEVEL_NONE;
> > +
> > +       if (size) {
> > +               const u32 l1_start_idx = pvr_page_table_l2_idx(sgt_offset);
> > +               const u32 l1_end_idx = pvr_page_table_l2_idx(sgt_offset + size);
> > +               const u32 l1_count = l1_end_idx - l1_start_idx + 1;
> > +               const u32 l0_start_idx = pvr_page_table_l1_idx(sgt_offset);
> > +               const u32 l0_end_idx = pvr_page_table_l1_idx(sgt_offset + size);
> > +               const u32 l0_count = l0_end_idx - l0_start_idx + 1;
> 
> Shouldn't the page table indices be calculated from the device_addr
> (which is not currently passed in by pvr_vm_map())? As far as I can
> tell, sgt_offset doesn't have anything to do with the device address
> at which this mapping will be inserted in the page tables?

This code is correct, but badly documented; this function only cares about the
number of l0/l1 pages required, not the address. Will improve it to make it
less confusing.

Thanks,
Sarah

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [EXTERNAL] Re: [PATCH v5 08/17] drm/imagination: Add GEM and VM related code
@ 2023-08-18 14:19       ` Sarah Walker
  0 siblings, 0 replies; 77+ messages in thread
From: Sarah Walker @ 2023-08-18 14:19 UTC (permalink / raw)
  To: jannh
  Cc: matthew.brost, hns, linux-kernel, dri-devel, afd, Matt Coster,
	luben.tuikov, dakr, mripard, tzimmermann, boris.brezillon,
	christian.koenig, faith.ekstrand, Donald Robson

On Fri, 2023-08-18 at 00:42 +0200, Jann Horn wrote:
> *** CAUTION: This email originates from a source not known to Imagination Technologies. Think before you click a link or open an attachment ***
> 
> Hi!
> 
> Thanks, I think it's great that Imagination is writing an upstream
> driver for PowerVR. :)
> 
> On Wed, Aug 16, 2023 at 10:25 AM Sarah Walker <sarah.walker@imgtec.com> wrote:
> > +#define PVR_BO_CPU_CACHED BIT_ULL(63)
> > +
> > +#define PVR_BO_FW_NO_CLEAR_ON_RESET BIT_ULL(62)
> > +
> > +/* Bits 62..3 are undefined. */
> > +/* Bits 2..0 are defined in the UAPI. */
> > +
> > +/* Other utilities. */
> > +#define PVR_BO_UNDEFINED_MASK GENMASK_ULL(61, 3)
> > +#define PVR_BO_RESERVED_MASK (PVR_BO_UNDEFINED_MASK | GENMASK_ULL(63, 63))
> 
> In commit 1a9c568fb559 ("drm/imagination: Rework firmware object
> initialisation") in powervr-next, PVR_BO_FW_NO_CLEAR_ON_RESET (bit 62)
> was added in the kernel-only flags group, but the mask
> PVR_BO_RESERVED_MASK (which is used in pvr_ioctl_create_bo to detect
> kernel-only and reserved flags) looks like it wasn't changed to
> include bit 62. I think that means it becomes possible for userspace
> to pass this bit in via pvr_ioctl_create_bo()?

Yes, this is a bug. Will fix (and refactor a bit).

> > +/**
> > + * pvr_page_table_l2_entry_raw_set() - Write a valid entry into a raw level 2
> > + *                                     page table.
> > + * @entry: Target raw level 2 page table entry.
> > + * @child_table_dma_addr: DMA address of the level 1 page table to be
> > + *                        associated with @entry.
> > + *
> > + * When calling this function, @child_table_dma_addr must be a valid DMA
> > + * address and a multiple of %ROGUE_MMUCTRL_PC_DATA_PD_BASE_ALIGNSIZE.
> > + */
> > +static void
> > +pvr_page_table_l2_entry_raw_set(struct pvr_page_table_l2_entry_raw *entry,
> > +                               dma_addr_t child_table_dma_addr)
> > +{
> > +       child_table_dma_addr >>= ROGUE_MMUCTRL_PC_DATA_PD_BASE_ALIGNSHIFT;
> > +
> > +       entry->val =
> > +               PVR_PAGE_TABLE_FIELD_PREP(2, PC, VALID, true) |
> > +               PVR_PAGE_TABLE_FIELD_PREP(2, PC, ENTRY_PENDING, false) |
> > +               PVR_PAGE_TABLE_FIELD_PREP(2, PC, PD_BASE, child_table_dma_addr);
> > +}
> 
> For this function and others that manipulate page table entries,
> please use some kernel helper that ensures that the store can't tear
> (at least WRITE_ONCE() - that can still tear on 32-bit, but I see the
> driver depends on ARM64, so that's not a problem).

Will do.
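
e.g. for the L2 case (same pattern for L1/L0; untested):

static void
pvr_page_table_l2_entry_raw_set(struct pvr_page_table_l2_entry_raw *entry,
				dma_addr_t child_table_dma_addr)
{
	child_table_dma_addr >>= ROGUE_MMUCTRL_PC_DATA_PD_BASE_ALIGNSHIFT;

	/* Single, non-tearing store: the MMU may be walking this table. */
	WRITE_ONCE(entry->val,
		   PVR_PAGE_TABLE_FIELD_PREP(2, PC, VALID, true) |
		   PVR_PAGE_TABLE_FIELD_PREP(2, PC, ENTRY_PENDING, false) |
		   PVR_PAGE_TABLE_FIELD_PREP(2, PC, PD_BASE,
					     child_table_dma_addr));
}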

> > +/**
> > + * pvr_page_table_l2_insert() - Insert an entry referring to a level 1 page
> > + * table into a level 2 page table.
> > + * @op_ctx: Target MMU op context pointing at the entry to insert the L1 page
> > + * table into.
> > + * @child_table: Target level 1 page table to be referenced by the new entry.
> > + *
> > + * It is the caller's responsibility to ensure @op_ctx.curr_page points to a
> > + * valid L2 entry.
> > + */
> > +static void
> > +pvr_page_table_l2_insert(struct pvr_mmu_op_context *op_ctx,
> > +                        struct pvr_page_table_l1 *child_table)
> > +{
> > +       struct pvr_page_table_l2 *l2_table =
> > +               &op_ctx->mmu_ctx->page_table_l2;
> > +       struct pvr_page_table_l2_entry_raw *entry_raw =
> > +               pvr_page_table_l2_get_entry_raw(l2_table,
> > +                                               op_ctx->curr_page.l2_idx);
> > +
> > +       pvr_page_table_l2_entry_raw_set(entry_raw,
> > +                                       child_table->backing_page.dma_addr);
> 
> Can you maybe add comments in functions that set page table entries to
> document who is responsible for using a memory barrier (like wmb()) to
> ensure that the creation of a page table entry is ordered after the
> thing it points to is fully initialized, so that the GPU can't end up
> concurrently walking into a page table and observe its old contents
> from before it was zero-initialized?

Will do.
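
Roughly (exact wording TBD, once I've confirmed where the barrier/flush is
issued for each level):

	/*
	 * The L1 table's backing page must be fully initialised (zeroed)
	 * before this entry is made visible to the MMU, otherwise a
	 * concurrent walk could observe its previous contents. A write
	 * barrier / cache flush must be issued between the two.
	 */
	pvr_page_table_l2_entry_raw_set(entry_raw,
					child_table->backing_page.dma_addr);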

> > +static int
> > +pvr_page_table_l1_get_or_insert(struct pvr_mmu_op_context *op_ctx,
> > +                               bool should_insert)
> > +{
> > +       struct pvr_page_table_l2 *l2_table =
> > +               &op_ctx->mmu_ctx->page_table_l2;
> > +       struct pvr_page_table_l1 *table;
> > +       int err;
> > +
> > +       if (pvr_page_table_l2_entry_is_valid(l2_table,
> > +                                            op_ctx->curr_page.l2_idx)) {
> > +               op_ctx->curr_page.l1_table =
> > +                       l2_table->entries[op_ctx->curr_page.l2_idx];
> > +               return 0;
> > +       }
> > +
> > +       if (!should_insert)
> > +               return -ENXIO;
> > +
> > +       /* Take a prealloced table. */
> > +       table = op_ctx->l1_free_tables;
> > +       if (!table)
> > +               return -ENOMEM;
> > +
> > +       err = pvr_page_table_l1_init(table, op_ctx->mmu_ctx->pvr_dev);
> 
> I think when we have a preallocated table here, it was allocated in
> pvr_page_table_l1_alloc(), which already called
> pvr_page_table_l1_init()? So it looks to me like this second
> pvr_page_table_l1_init() call will allocate another page and leak the
> old allocation.

Yes, this is also a bug. Will address.
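
The fix I have in mind is simply to drop the second init, since the
prealloced tables are already initialised when they are allocated:

	/*
	 * Take a prealloced table. It was already initialised by
	 * pvr_page_table_l1_alloc(), so it must not be re-initialised here
	 * (doing so would leak its backing page).
	 */
	table = op_ctx->l1_free_tables;
	if (!table)
		return -ENOMEM;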

> > +/**
> > + * pvr_mmu_op_context_create() - Create an MMU op context.
> > + * @ctx: MMU context associated with owning VM context.
> > + * @sgt: Scatter gather table containing pages pinned for use by this context.
> > + * @sgt_offset: Start offset of the requested device-virtual memory mapping.
> > + * @size: Size in bytes of the requested device-virtual memory mapping. For an
> > + * unmapping, this should be zero so that no page tables are allocated.
> > + *
> > + * Returns:
> > + *  * Newly created MMU op context object on success, or
> > + *  * -%ENOMEM if no memory is available,
> > + *  * Any error code returned by pvr_page_table_l2_init().
> > + */
> > +struct pvr_mmu_op_context *
> > +pvr_mmu_op_context_create(struct pvr_mmu_context *ctx, struct sg_table *sgt,
> > +                         u64 sgt_offset, u64 size)
> > +{
> > +       int err;
> > +
> > +       struct pvr_mmu_op_context *op_ctx =
> > +               kzalloc(sizeof(*op_ctx), GFP_KERNEL);
> > +
> > +       if (!op_ctx)
> > +               return ERR_PTR(-ENOMEM);
> > +
> > +       op_ctx->mmu_ctx = ctx;
> > +       op_ctx->map.sgt = sgt;
> > +       op_ctx->map.sgt_offset = sgt_offset;
> > +       op_ctx->sync_level_required = PVR_MMU_SYNC_LEVEL_NONE;
> > +
> > +       if (size) {
> > +               const u32 l1_start_idx = pvr_page_table_l2_idx(sgt_offset);
> > +               const u32 l1_end_idx = pvr_page_table_l2_idx(sgt_offset + size);
> > +               const u32 l1_count = l1_end_idx - l1_start_idx + 1;
> > +               const u32 l0_start_idx = pvr_page_table_l1_idx(sgt_offset);
> > +               const u32 l0_end_idx = pvr_page_table_l1_idx(sgt_offset + size);
> > +               const u32 l0_count = l0_end_idx - l0_start_idx + 1;
> 
> Shouldn't the page table indices be calculated from the device_addr
> (which is not currently passed in by pvr_vm_map())? As far as I can
> tell, sgt_offset doesn't have anything to do with the device address
> at which this mapping will be inserted in the page tables?

This code is correct, but badly documented; this function only cares about the
number of l0/l1 page tables required, not the device address they will be
mapped at. Will improve the documentation to make it less confusing.

Thanks,
Sarah

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v5 08/17] drm/imagination: Add GEM and VM related code
  2023-08-16  8:25   ` Sarah Walker
@ 2023-08-18 15:30     ` Danilo Krummrich
  -1 siblings, 0 replies; 77+ messages in thread
From: Danilo Krummrich @ 2023-08-18 15:30 UTC (permalink / raw)
  To: Sarah Walker
  Cc: mripard, matthew.brost, tzimmermann, luben.tuikov, linux-kernel,
	dri-devel, afd, Matt Coster, boris.brezillon, donald.robson, hns,
	christian.koenig, faith.ekstrand

Hi Sarah,

On Wed, Aug 16, 2023 at 09:25:23AM +0100, Sarah Walker wrote:
> Add a GEM implementation based on drm_gem_shmem, and support code for the
> PowerVR GPU MMU. The GPU VA manager is used for address space management.
> 
> Changes since v4:
> - Correct sync function in vmap/vunmap function documentation
> - Update for upstream GPU VA manager
> - Fix missing frees when unmapping drm_gpuva objects
> - Always zero GEM BOs on creation
> 
> Changes since v3:
> - Split MMU and VM code
> - Register page table allocations with kmemleak
> - Use drm_dev_{enter,exit}
> 
> Changes since v2:
> - Use GPU VA manager
> - Use drm_gem_shmem
> 
> Co-developed-by: Matt Coster <matt.coster@imgtec.com>
> Signed-off-by: Matt Coster <matt.coster@imgtec.com>
> Co-developed-by: Donald Robson <donald.robson@imgtec.com>
> Signed-off-by: Donald Robson <donald.robson@imgtec.com>
> Signed-off-by: Sarah Walker <sarah.walker@imgtec.com>
> ---
>  drivers/gpu/drm/imagination/Makefile     |    5 +-
>  drivers/gpu/drm/imagination/pvr_device.c |   23 +-
>  drivers/gpu/drm/imagination/pvr_device.h |   18 +
>  drivers/gpu/drm/imagination/pvr_drv.c    |  302 ++-
>  drivers/gpu/drm/imagination/pvr_gem.c    |  396 ++++
>  drivers/gpu/drm/imagination/pvr_gem.h    |  177 ++
>  drivers/gpu/drm/imagination/pvr_mmu.c    | 2487 ++++++++++++++++++++++
>  drivers/gpu/drm/imagination/pvr_mmu.h    |  108 +
>  drivers/gpu/drm/imagination/pvr_vm.c     |  890 ++++++++
>  drivers/gpu/drm/imagination/pvr_vm.h     |   60 +
>  10 files changed, 4455 insertions(+), 11 deletions(-)
>  create mode 100644 drivers/gpu/drm/imagination/pvr_gem.c
>  create mode 100644 drivers/gpu/drm/imagination/pvr_gem.h
>  create mode 100644 drivers/gpu/drm/imagination/pvr_mmu.c
>  create mode 100644 drivers/gpu/drm/imagination/pvr_mmu.h
>  create mode 100644 drivers/gpu/drm/imagination/pvr_vm.c
>  create mode 100644 drivers/gpu/drm/imagination/pvr_vm.h

<snip>

> diff --git a/drivers/gpu/drm/imagination/pvr_vm.c b/drivers/gpu/drm/imagination/pvr_vm.c
> new file mode 100644
> index 000000000000..616fad3a3325
> --- /dev/null
> +++ b/drivers/gpu/drm/imagination/pvr_vm.c
> @@ -0,0 +1,890 @@
> +// SPDX-License-Identifier: GPL-2.0 OR MIT
> +/* Copyright (c) 2023 Imagination Technologies Ltd. */
> +
> +#include "pvr_vm.h"
> +
> +#include "pvr_device.h"
> +#include "pvr_drv.h"
> +#include "pvr_gem.h"
> +#include "pvr_mmu.h"
> +#include "pvr_rogue_fwif.h"
> +#include "pvr_rogue_heap_config.h"
> +
> +#include <drm/drm_gem.h>
> +#include <drm/drm_gpuva_mgr.h>
> +
> +#include <linux/container_of.h>
> +#include <linux/err.h>
> +#include <linux/errno.h>
> +#include <linux/gfp_types.h>
> +#include <linux/kref.h>
> +#include <linux/mutex.h>
> +#include <linux/stddef.h>
> +
> +/**
> + * DOC: Memory context
> + *
> + * This is the "top level" datatype in the VM code. It's exposed in the public
> + * API as an opaque handle.
> + */
> +
> +/**
> + * struct pvr_vm_context - Context type which encapsulates an entire page table
> + * tree structure.
> + * @pvr_dev: The PowerVR device to which this context is bound.
> + *
> + * This binding is immutable for the life of the context.
> + * @mmu_ctx: The context for binding to physical memory.
> + * @gpuva_mgr: GPUVA manager object associated with this context.
> + * @lock: Global lock on this entire structure of page tables.
> + * @fw_mem_ctx_obj: Firmware object representing firmware memory context.
> + * @ref_count: Reference count of object.
> + */
> +struct pvr_vm_context {
> +	struct pvr_device *pvr_dev;
> +	struct pvr_mmu_context *mmu_ctx;
> +	struct drm_gpuva_manager gpuva_mgr;
> +	struct mutex lock;
> +	struct pvr_fw_object *fw_mem_ctx_obj;
> +	struct kref ref_count;
> +};
> +
> +/**
> + * pvr_vm_get_page_table_root_addr() - Get the DMA address of the root of the
> + *                                     page table structure behind a VM context.
> + * @vm_ctx: Target VM context.
> + */
> +dma_addr_t pvr_vm_get_page_table_root_addr(struct pvr_vm_context *vm_ctx)
> +{
> +	return pvr_mmu_get_root_table_dma_addr(vm_ctx->mmu_ctx);
> +}
> +
> +/**
> + * DOC: Memory mappings
> + */
> +
> +/**
> + * pvr_vm_gpuva_mapping_init() - Setup a mapping object with the specified
> + * parameters ready for mapping using pvr_vm_gpuva_mapping_map().
> + * @va: Pointer to drm_gpuva mapping object.
> + * @device_addr: Device-virtual address at the start of the mapping.
> + * @size: Size of the desired mapping.
> + * @pvr_obj: Target PowerVR memory object.
> + * @pvr_obj_offset: Offset into @pvr_obj to begin mapping from.
> + *
> + * Some parameters of this function are unchecked. It is therefore the caller's
> + * responsibility to ensure certain constraints are met. Specifically:
> + *
> + * * @pvr_obj_offset must be less than the size of @pvr_obj,
> + * * The sum of @pvr_obj_offset and @size must be less than or equal to the
> + *   size of @pvr_obj,
> + * * The range specified by @pvr_obj_offset and @size (the "CPU range") must be
> + *   CPU page-aligned both in start position and size, and
> + * * The range specified by @device_addr and @size (the "device range") must be
> + *   device page-aligned both in start position and size.
> + *
> + * Furthermore, it is up to the caller to make sure that a reference to @pvr_obj
> + * is taken prior to mapping @va with the drm_gpuva_manager.
> + */
> +static void
> +pvr_vm_gpuva_mapping_init(struct drm_gpuva *va, u64 device_addr, u64 size,
> +			  struct pvr_gem_object *pvr_obj, u64 pvr_obj_offset)

There's already drm_gpuva_init() doing the same thing.
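
i.e. the open-coded assignments below could become a single call (assuming
the signature mirrors the four fields being set):

	drm_gpuva_init(va, device_addr, size, gem_from_pvr_gem(pvr_obj),
		       pvr_obj_offset);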

> +{
> +	va->va.addr = device_addr;
> +	va->va.range = size;
> +	va->gem.obj = gem_from_pvr_gem(pvr_obj);
> +	va->gem.offset = pvr_obj_offset;
> +}
> +
> +struct pvr_vm_gpuva_op_ctx {
> +	struct pvr_vm_context *vm_ctx;
> +	struct pvr_mmu_op_context *mmu_op_ctx;
> +	struct drm_gpuva *new_va, *prev_va, *next_va;
> +};
> +
> +/**
> + * pvr_vm_gpuva_map() - Insert a mapping into a memory context.
> + * @op: gpuva op containing the remap details.
> + * @op_ctx: Operation context.
> + *
> + * Context: Called by drm_gpuva_sm_map following a successful mapping while
> + * @op_ctx.vm_ctx mutex is held.
> + *
> + * Return:
> + *  * 0 on success, or
> + *  * Any error returned by pvr_mmu_map().
> + */
> +static int
> +pvr_vm_gpuva_map(struct drm_gpuva_op *op, void *op_ctx)
> +{
> +	struct pvr_gem_object *pvr_gem = gem_to_pvr_gem(op->map.gem.obj);
> +	struct pvr_vm_gpuva_op_ctx *ctx = op_ctx;
> +	int err;
> +
> +	if ((op->map.gem.offset | op->map.va.range) & ~PVR_DEVICE_PAGE_MASK)
> +		return -EINVAL;
> +
> +	err = pvr_mmu_map(ctx->mmu_op_ctx, op->map.va.range, pvr_gem->flags,
> +			  op->map.va.addr);
> +	if (err)
> +		return err;
> +
> +	pvr_vm_gpuva_mapping_init(ctx->new_va, op->map.va.addr,
> +				  op->map.va.range, pvr_gem, op->map.gem.offset);
> +
> +	drm_gpuva_map(&ctx->vm_ctx->gpuva_mgr, ctx->new_va, &op->map);

drm_gpuva_map() does use drm_gpuva_init_from_op() internally, hence the extra
call to pvr_vm_gpuva_mapping_init() should be unnecessary.
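
i.e. the map step can shrink to something like (untested):

	err = pvr_mmu_map(ctx->mmu_op_ctx, op->map.va.range, pvr_gem->flags,
			  op->map.va.addr);
	if (err)
		return err;

	/* drm_gpuva_map() initialises new_va from op->map internally. */
	drm_gpuva_map(&ctx->vm_ctx->gpuva_mgr, ctx->new_va, &op->map);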

> +	drm_gpuva_link(ctx->new_va);

How is this protected?

drm_gpuva_link() and drm_gpuva_unlink() require either the dma_resv lock of the
corresponding GEM object to be held or, alternatively, the driver-specific lock
indicated via drm_gem_gpuva_set_lock().
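
The simplest option would probably be to take the GEM object's resv lock
around the link, e.g. (illustration only - alternatively register the VM
context lock with drm_gem_gpuva_set_lock()):

	dma_resv_lock(gem_from_pvr_gem(pvr_gem)->resv, NULL);
	drm_gpuva_link(ctx->new_va);
	dma_resv_unlock(gem_from_pvr_gem(pvr_gem)->resv);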

> +	ctx->new_va = NULL;
> +
> +	/*
> +	 * Increment the refcount on the underlying physical memory resource
> +	 * to prevent de-allocation while the mapping exists.
> +	 */
> +	pvr_gem_object_get(pvr_gem);
> +
> +	return 0;
> +}
> +
> +/**
> + * pvr_vm_gpuva_unmap() - Remove a mapping from a memory context.
> + * @op: gpuva op containing the unmap details.
> + * @op_ctx: Operation context.
> + *
> + * Context: Called by drm_gpuva_sm_unmap following a successful unmapping while
> + * @op_ctx.vm_ctx mutex is held.
> + *
> + * Return:
> + *  * 0 on success, or
> + *  * Any error returned by pvr_mmu_unmap().
> + */
> +static int
> +pvr_vm_gpuva_unmap(struct drm_gpuva_op *op, void *op_ctx)
> +{
> +	struct pvr_gem_object *pvr_gem = gem_to_pvr_gem(op->unmap.va->gem.obj);
> +	struct pvr_vm_gpuva_op_ctx *ctx = op_ctx;
> +
> +	int err = pvr_mmu_unmap(ctx->mmu_op_ctx, op->unmap.va->va.addr,
> +				op->unmap.va->va.range);
> +
> +	if (err)
> +		return err;
> +
> +	drm_gpuva_unmap(&op->unmap);
> +	drm_gpuva_unlink(op->unmap.va);
> +	kfree(op->unmap.va);
> +
> +	pvr_gem_object_put(pvr_gem);
> +
> +	return 0;
> +}
> +
> +/**
> + * pvr_vm_gpuva_remap() - Remap a mapping within a memory context.
> + * @op: gpuva op containing the remap details.
> + * @op_ctx: Operation context.
> + *
> + * Context: Called by either drm_gpuva_sm_map or drm_gpuva_sm_unmap when a
> + * mapping or unmapping operation causes a region to be split. The
> + * @op_ctx.vm_ctx mutex is held.
> + *
> + * Return:
> + *  * 0 on success, or
> + *  * Any error returned by pvr_mmu_unmap().
> + */
> +static int
> +pvr_vm_gpuva_remap(struct drm_gpuva_op *op, void *op_ctx)
> +{
> +	struct pvr_vm_gpuva_op_ctx *ctx = op_ctx;
> +
> +	if (op->remap.unmap) {

You can omit this check; remap operations always contain a valid unmap
operation. However, you might want to know whether the remap operation was
generated by a call to drm_gpuva_sm_map() or drm_gpuva_sm_unmap(), since for
the latter you might want to free page table structures.
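
E.g. the op context could carry a flag set by the caller before invoking
drm_gpuva_sm_map()/drm_gpuva_sm_unmap() (field name purely illustrative):

	struct pvr_vm_gpuva_op_ctx {
		struct pvr_vm_context *vm_ctx;
		struct pvr_mmu_op_context *mmu_op_ctx;
		struct drm_gpuva *new_va, *prev_va, *next_va;
		bool unmap_op;	/* true when driven by drm_gpuva_sm_unmap() */
	};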

> +		const u64 va_start = op->remap.prev ?
> +				     op->remap.prev->va.addr + op->remap.prev->va.range :
> +				     op->remap.unmap->va->va.addr;
> +		const u64 va_end = op->remap.next ?
> +				   op->remap.next->va.addr :
> +				   op->remap.unmap->va->va.addr + op->remap.unmap->va->va.range;

This seems to be a common calculation for drivers; it is probably worth coming
up with a helper, something like
drm_gpuva_op_unmap_range(struct drm_gpuva_op *op, u64 *addr, u64 *range).
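
Roughly (sketch only, naming/placement up for discussion):

	static inline void
	drm_gpuva_op_unmap_range(struct drm_gpuva_op *op, u64 *addr, u64 *range)
	{
		const u64 start = op->remap.prev ?
				  op->remap.prev->va.addr + op->remap.prev->va.range :
				  op->remap.unmap->va->va.addr;
		const u64 end = op->remap.next ?
				op->remap.next->va.addr :
				op->remap.unmap->va->va.addr +
				op->remap.unmap->va->va.range;

		*addr = start;
		*range = end - start;
	}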

> +
> +		int err = pvr_mmu_unmap(ctx->mmu_op_ctx, va_start,
> +					va_end - va_start);
> +
> +		if (err)
> +			return err;
> +	}
> +
> +	if (op->remap.prev)
> +		pvr_vm_gpuva_mapping_init(ctx->prev_va, op->remap.prev->va.addr,
> +					  op->remap.prev->va.range,
> +					  gem_to_pvr_gem(op->remap.prev->gem.obj),
> +					  op->remap.prev->gem.offset);
> +
> +	if (op->remap.next)
> +		pvr_vm_gpuva_mapping_init(ctx->next_va, op->remap.next->va.addr,
> +					  op->remap.next->va.range,
> +					  gem_to_pvr_gem(op->remap.next->gem.obj),
> +					  op->remap.next->gem.offset);
> +
> +	/* No actual remap required: the page table tree depth is fixed to 3,
> +	 * and we use 4k page table entries only for now.
> +	 */
> +	drm_gpuva_remap(ctx->prev_va, ctx->next_va, &op->remap);

As above, drm_gpuva_remap() does use drm_gpuva_init_from_op() internally, hence
the extra call to pvr_vm_gpuva_mapping_init() should be unnecessary.

> +
> +	if (op->remap.prev) {
> +		pvr_gem_object_get(gem_to_pvr_gem(ctx->prev_va->gem.obj));
> +		drm_gpuva_link(ctx->prev_va);
> +		ctx->prev_va = NULL;
> +	}
> +
> +	if (op->remap.next) {
> +		pvr_gem_object_get(gem_to_pvr_gem(ctx->next_va->gem.obj));
> +		drm_gpuva_link(ctx->next_va);
> +		ctx->next_va = NULL;
> +	}
> +
> +	if (op->remap.unmap) {

As above, no need for this check.

- Danilo

> +		struct pvr_gem_object *pvr_gem = gem_to_pvr_gem(op->remap.unmap->va->gem.obj);
> +
> +		drm_gpuva_unlink(op->unmap.va);
> +		kfree(op->unmap.va);
> +
> +		pvr_gem_object_put(pvr_gem);
> +	}
> +
> +	return 0;
> +}
> +
> +/*
> + * Public API
> + *
> + * For an overview of these functions, see *DOC: Public API* in "pvr_vm.h".
> + */
> +
> +/**
> + * pvr_device_addr_is_valid() - Tests whether a device-virtual address
> + *                              is valid.
> + * @device_addr: Virtual device address to test.
> + *
> + * Return:
> + *  * %true if @device_addr is within the valid range for a device page
> + *    table and is aligned to the device page size, or
> + *  * %false otherwise.
> + */
> +bool
> +pvr_device_addr_is_valid(u64 device_addr)
> +{
> +	return (device_addr & ~PVR_PAGE_TABLE_ADDR_MASK) == 0 &&
> +	       (device_addr & ~PVR_DEVICE_PAGE_MASK) == 0;
> +}
> +
> +/**
> + * pvr_device_addr_and_size_are_valid() - Tests whether a device-virtual
> + * address and associated size are both valid.
> + * @device_addr: Virtual device address to test.
> + * @size: Size of the range based at @device_addr to test.
> + *
> + * Calling pvr_device_addr_is_valid() twice (once on @size, and again on
> + * @device_addr + @size) to verify a device-virtual address range initially
> + * seems intuitive, but it produces a false-negative when the address range
> + * is right at the end of device-virtual address space.
> + *
> + * This function catches that corner case, as well as checking that
> + * @size is non-zero.
> + *
> + * Return:
> + *  * %true if @device_addr is device page aligned; @size is device page
> + *    aligned; the range specified by @device_addr and @size is within the
> + *    bounds of the device-virtual address space, and @size is non-zero, or
> + *  * %false otherwise.
> + */
> +bool
> +pvr_device_addr_and_size_are_valid(u64 device_addr, u64 size)
> +{
> +	return pvr_device_addr_is_valid(device_addr) &&
> +	       size != 0 && (size & ~PVR_DEVICE_PAGE_MASK) == 0 &&
> +	       (device_addr + size <= PVR_PAGE_TABLE_ADDR_SPACE_SIZE);
> +}
> +
> +static const struct drm_gpuva_fn_ops pvr_vm_gpuva_ops = {
> +	.sm_step_map = pvr_vm_gpuva_map,
> +	.sm_step_remap = pvr_vm_gpuva_remap,
> +	.sm_step_unmap = pvr_vm_gpuva_unmap,
> +};
> +
> +/**
> + * pvr_vm_create_context() - Create a new VM context.
> + * @pvr_dev: Target PowerVR device.
> + * @is_userspace_context: %true if this context is for userspace. This will
> + *                        create a firmware memory context for the VM context
> + *                        and disable warnings when tearing down mappings.
> + *
> + * Return:
> + *  * A handle to the newly-minted VM context on success,
> + *  * -%EINVAL if the feature "virtual address space bits" on @pvr_dev is
> + *    missing or has an unsupported value,
> + *  * -%ENOMEM if allocation of the structure behind the opaque handle fails,
> + *    or
> + *  * Any error encountered while setting up internal structures.
> + */
> +struct pvr_vm_context *
> +pvr_vm_create_context(struct pvr_device *pvr_dev, bool is_userspace_context)
> +{
> +	struct drm_device *drm_dev = from_pvr_device(pvr_dev);
> +
> +	struct pvr_vm_context *vm_ctx;
> +	u16 device_addr_bits;
> +
> +	int err;
> +
> +	err = PVR_FEATURE_VALUE(pvr_dev, virtual_address_space_bits,
> +				&device_addr_bits);
> +	if (err) {
> +		drm_err(drm_dev,
> +			"Failed to get device virtual address space bits\n");
> +		return ERR_PTR(err);
> +	}
> +
> +	if (device_addr_bits != PVR_PAGE_TABLE_ADDR_BITS) {
> +		drm_err(drm_dev,
> +			"Device has unsupported virtual address space size\n");
> +		return ERR_PTR(-EINVAL);
> +	}
> +
> +	vm_ctx = kzalloc(sizeof(*vm_ctx), GFP_KERNEL);
> +	if (!vm_ctx)
> +		return ERR_PTR(-ENOMEM);
> +
> +	vm_ctx->pvr_dev = pvr_dev;
> +	kref_init(&vm_ctx->ref_count);
> +	mutex_init(&vm_ctx->lock);
> +
> +	drm_gpuva_manager_init(&vm_ctx->gpuva_mgr,
> +			       is_userspace_context ? "PowerVR-user-VM" : "PowerVR-FW-VM",
> +			       0, 1ULL << device_addr_bits, 0, 0, &pvr_vm_gpuva_ops);
> +
> +	vm_ctx->mmu_ctx = pvr_mmu_context_create(pvr_dev);
> +	err = PTR_ERR_OR_ZERO(&vm_ctx->mmu_ctx);
> +	if (err) {
> +		vm_ctx->mmu_ctx = NULL;
> +		goto err_put_ctx;
> +	}
> +
> +	if (is_userspace_context) {
> +		/* TODO: Create FW mem context */
> +		err = -ENODEV;
> +		goto err_put_ctx;
> +	}
> +
> +	return vm_ctx;
> +
> +err_put_ctx:
> +	pvr_vm_context_put(vm_ctx);
> +
> +	return ERR_PTR(err);
> +}
> +
> +/**
> + * pvr_vm_context_release() - Teardown a VM context.
> + * @ref_count: Pointer to reference counter of the VM context.
> + *
> + * This function ensures that no mappings are left dangling by unmapping them
> + * all in order of ascending device-virtual address.
> + */
> +static void
> +pvr_vm_context_release(struct kref *ref_count)
> +{
> +	struct pvr_vm_context *vm_ctx =
> +		container_of(ref_count, struct pvr_vm_context, ref_count);
> +
> +	/* TODO: Destroy FW mem context */
> +	WARN_ON(vm_ctx->fw_mem_ctx_obj);
> +
> +	WARN_ON(pvr_vm_unmap(vm_ctx, vm_ctx->gpuva_mgr.mm_start,
> +			     vm_ctx->gpuva_mgr.mm_range));
> +
> +	drm_gpuva_manager_destroy(&vm_ctx->gpuva_mgr);
> +	pvr_mmu_context_destroy(vm_ctx->mmu_ctx);
> +	mutex_destroy(&vm_ctx->lock);
> +
> +	kfree(vm_ctx);
> +}
> +
> +/**
> + * pvr_vm_context_lookup() - Look up VM context from handle
> + * @pvr_file: Pointer to pvr_file structure.
> + * @handle: Object handle.
> + *
> + * Takes reference on VM context object. Call pvr_vm_context_put() to release.
> + *
> + * Returns:
> + *  * The requested object on success, or
> + *  * %NULL on failure (object does not exist in list, or is not a VM context)
> + */
> +struct pvr_vm_context *
> +pvr_vm_context_lookup(struct pvr_file *pvr_file, u32 handle)
> +{
> +	struct pvr_vm_context *vm_ctx;
> +
> +	xa_lock(&pvr_file->vm_ctx_handles);
> +	vm_ctx = xa_load(&pvr_file->vm_ctx_handles, handle);
> +	if (vm_ctx)
> +		kref_get(&vm_ctx->ref_count);
> +
> +	xa_unlock(&pvr_file->vm_ctx_handles);
> +
> +	return vm_ctx;
> +}
> +
> +/**
> + * pvr_vm_context_put() - Release a reference on a VM context
> + * @vm_ctx: Target VM context.
> + *
> + * Returns:
> + *  * %true if the VM context was destroyed, or
> + *  * %false if there are any references still remaining.
> + */
> +bool
> +pvr_vm_context_put(struct pvr_vm_context *vm_ctx)
> +{
> +	WARN_ON(!vm_ctx);
> +
> +	if (vm_ctx)
> +		return kref_put(&vm_ctx->ref_count, pvr_vm_context_release);
> +
> +	return true;
> +}
> +
> +/**
> + * pvr_destroy_vm_contexts_for_file: Destroy any VM contexts associated with the
> + * given file.
> + * @pvr_file: Pointer to pvr_file structure.
> + *
> + * Removes all vm_contexts associated with @pvr_file from the device VM context
> + * list and drops initial references. vm_contexts will then be destroyed once
> + * all outstanding references are dropped.
> + */
> +void pvr_destroy_vm_contexts_for_file(struct pvr_file *pvr_file)
> +{
> +	struct pvr_vm_context *vm_ctx;
> +	unsigned long handle;
> +
> +	xa_for_each(&pvr_file->vm_ctx_handles, handle, vm_ctx) {
> +		/* vm_ctx is not used here because that would create a race with xa_erase */
> +		pvr_vm_context_put(xa_erase(&pvr_file->vm_ctx_handles, handle));
> +	}
> +}
> +
> +/**
> + * pvr_vm_map() - Map a section of physical memory into a section of device-virtual memory.
> + * @vm_ctx: Target VM context.
> + * @pvr_obj: Target PowerVR memory object.
> + * @pvr_obj_offset: Offset into @pvr_obj to map from.
> + * @device_addr: Virtual device address at the start of the requested mapping.
> + * @size: Size of the requested mapping.
> + *
> + * No handle is returned to represent the mapping. Instead, callers should
> + * remember @device_addr and use that as a handle.
> + *
> + * Return:
> + *  * 0 on success,
> + *  * -%EINVAL if @device_addr is not a valid page-aligned device-virtual
> + *    address; the region specified by @pvr_obj_offset and @size does not fall
> + *    entirely within @pvr_obj, or any part of the specified region of @pvr_obj
> + *    is not device-virtual page-aligned,
> + *  * Any error encountered while performing internal operations required to
> + *    create the mapping (returned from pvr_vm_gpuva_map or
> + *    pvr_vm_gpuva_remap).
> + */
> +int
> +pvr_vm_map(struct pvr_vm_context *vm_ctx,
> +	   struct pvr_gem_object *pvr_obj, u64 pvr_obj_offset,
> +	   u64 device_addr, u64 size)
> +{
> +	const size_t pvr_obj_size = pvr_gem_object_size(pvr_obj);
> +	struct pvr_vm_gpuva_op_ctx op_ctx = { .vm_ctx = vm_ctx };
> +	struct sg_table *sgt;
> +	int err;
> +
> +	if (!pvr_device_addr_and_size_are_valid(device_addr, size) ||
> +	    pvr_obj_offset & ~PAGE_MASK || size & ~PAGE_MASK ||
> +	    pvr_obj_offset + size > pvr_obj_size ||
> +	    pvr_obj_offset > pvr_obj_size) {
> +		return -EINVAL;
> +	}
> +
> +	op_ctx.new_va = kzalloc(sizeof(*op_ctx.new_va), GFP_KERNEL);
> +	op_ctx.prev_va = kzalloc(sizeof(*op_ctx.prev_va), GFP_KERNEL);
> +	op_ctx.next_va = kzalloc(sizeof(*op_ctx.next_va), GFP_KERNEL);
> +	if (!op_ctx.new_va || !op_ctx.prev_va || !op_ctx.next_va) {
> +		err = -ENOMEM;
> +		goto out_free;
> +	}
> +
> +	sgt = pvr_gem_object_get_pages_sgt(pvr_obj);
> +	err = PTR_ERR_OR_ZERO(sgt);
> +	if (err)
> +		goto out_free;
> +
> +	op_ctx.mmu_op_ctx = pvr_mmu_op_context_create(vm_ctx->mmu_ctx, sgt,
> +						      pvr_obj_offset, size);
> +	err = PTR_ERR_OR_ZERO(op_ctx.mmu_op_ctx);
> +	if (err) {
> +		op_ctx.mmu_op_ctx = NULL;
> +		goto out_mmu_op_ctx_destroy;
> +	}
> +
> +	mutex_lock(&vm_ctx->lock);
> +	err = drm_gpuva_sm_map(&vm_ctx->gpuva_mgr, &op_ctx, device_addr, size,
> +			       gem_from_pvr_gem(pvr_obj), pvr_obj_offset);
> +	mutex_unlock(&vm_ctx->lock);
> +
> +out_mmu_op_ctx_destroy:
> +	pvr_mmu_op_context_destroy(op_ctx.mmu_op_ctx);
> +
> +out_free:
> +	kfree(op_ctx.next_va);
> +	kfree(op_ctx.prev_va);
> +	kfree(op_ctx.new_va);
> +
> +	return err;
> +}
> +
> +/**
> + * pvr_vm_unmap() - Unmap an already mapped section of device-virtual memory.
> + * @vm_ctx: Target VM context.
> + * @device_addr: Virtual device address at the start of the target mapping.
> + * @size: Size of the target mapping.
> + *
> + * Return:
> + *  * 0 on success,
> + *  * -%EINVAL if @device_addr is not a valid page-aligned device-virtual
> + *    address,
> + *  * Any error encountered while performing internal operations required to
> + *    destroy the mapping (returned from pvr_vm_gpuva_unmap or
> + *    pvr_vm_gpuva_remap).
> + */
> +int
> +pvr_vm_unmap(struct pvr_vm_context *vm_ctx, u64 device_addr, u64 size)
> +{
> +	struct pvr_vm_gpuva_op_ctx op_ctx = { .vm_ctx = vm_ctx };
> +	int err;
> +
> +	if (!pvr_device_addr_and_size_are_valid(device_addr, size))
> +		return -EINVAL;
> +
> +	op_ctx.prev_va = kzalloc(sizeof(*op_ctx.prev_va), GFP_KERNEL);
> +	op_ctx.next_va = kzalloc(sizeof(*op_ctx.next_va), GFP_KERNEL);
> +	if (!op_ctx.prev_va || !op_ctx.next_va) {
> +		err = -ENOMEM;
> +		goto out;
> +	}
> +
> +	op_ctx.mmu_op_ctx =
> +		pvr_mmu_op_context_create(vm_ctx->mmu_ctx, NULL, 0, 0);
> +	err = PTR_ERR_OR_ZERO(op_ctx.mmu_op_ctx);
> +	if (err) {
> +		op_ctx.mmu_op_ctx = NULL;
> +		goto out;
> +	}
> +
> +	mutex_lock(&vm_ctx->lock);
> +	err = drm_gpuva_sm_unmap(&vm_ctx->gpuva_mgr, &op_ctx, device_addr, size);
> +	mutex_unlock(&vm_ctx->lock);
> +
> +out:
> +	pvr_mmu_op_context_destroy(op_ctx.mmu_op_ctx);
> +	kfree(op_ctx.next_va);
> +	kfree(op_ctx.prev_va);
> +
> +	return err;
> +}
> +
> +/*
> + * Static data areas are determined by firmware.
> + *
> + * When adding a new static data area you will also need to update the reserved_size field for the
> + * heap in pvr_heaps[].
> + */
> +static const struct drm_pvr_static_data_area static_data_areas[] = {
> +	{
> +		.area_usage = DRM_PVR_STATIC_DATA_AREA_FENCE,
> +		.location_heap_id = DRM_PVR_HEAP_GENERAL,
> +		.offset = 0,
> +		.size = 128,
> +	},
> +	{
> +		.area_usage = DRM_PVR_STATIC_DATA_AREA_YUV_CSC,
> +		.location_heap_id = DRM_PVR_HEAP_GENERAL,
> +		.offset = 128,
> +		.size = 1024,
> +	},
> +	{
> +		.area_usage = DRM_PVR_STATIC_DATA_AREA_VDM_SYNC,
> +		.location_heap_id = DRM_PVR_HEAP_PDS_CODE_DATA,
> +		.offset = 0,
> +		.size = 128,
> +	},
> +	{
> +		.area_usage = DRM_PVR_STATIC_DATA_AREA_EOT,
> +		.location_heap_id = DRM_PVR_HEAP_PDS_CODE_DATA,
> +		.offset = 128,
> +		.size = 128,
> +	},
> +	{
> +		.area_usage = DRM_PVR_STATIC_DATA_AREA_VDM_SYNC,
> +		.location_heap_id = DRM_PVR_HEAP_USC_CODE,
> +		.offset = 0,
> +		.size = 128,
> +	},
> +};
> +
> +#define GET_RESERVED_SIZE(last_offset, last_size) round_up((last_offset) + (last_size), PAGE_SIZE)
> +
> +/*
> + * The values given to GET_RESERVED_SIZE() are taken from the last entry in the corresponding
> + * static data area for each heap.
> + */
> +static const struct drm_pvr_heap pvr_heaps[] = {
> +	[DRM_PVR_HEAP_GENERAL] = {
> +		.base = ROGUE_GENERAL_HEAP_BASE,
> +		.size = ROGUE_GENERAL_HEAP_SIZE,
> +		.flags = 0,
> +		.page_size_log2 = PVR_DEVICE_PAGE_SHIFT,
> +	},
> +	[DRM_PVR_HEAP_PDS_CODE_DATA] = {
> +		.base = ROGUE_PDSCODEDATA_HEAP_BASE,
> +		.size = ROGUE_PDSCODEDATA_HEAP_SIZE,
> +		.flags = 0,
> +		.page_size_log2 = PVR_DEVICE_PAGE_SHIFT,
> +	},
> +	[DRM_PVR_HEAP_USC_CODE] = {
> +		.base = ROGUE_USCCODE_HEAP_BASE,
> +		.size = ROGUE_USCCODE_HEAP_SIZE,
> +		.flags = 0,
> +		.page_size_log2 = PVR_DEVICE_PAGE_SHIFT,
> +	},
> +	[DRM_PVR_HEAP_RGNHDR] = {
> +		.base = ROGUE_RGNHDR_HEAP_BASE,
> +		.size = ROGUE_RGNHDR_HEAP_SIZE,
> +		.flags = 0,
> +		.page_size_log2 = PVR_DEVICE_PAGE_SHIFT,
> +	},
> +	[DRM_PVR_HEAP_VIS_TEST] = {
> +		.base = ROGUE_VISTEST_HEAP_BASE,
> +		.size = ROGUE_VISTEST_HEAP_SIZE,
> +		.flags = 0,
> +		.page_size_log2 = PVR_DEVICE_PAGE_SHIFT,
> +	},
> +	[DRM_PVR_HEAP_TRANSFER_FRAG] = {
> +		.base = ROGUE_TRANSFER_FRAG_HEAP_BASE,
> +		.size = ROGUE_TRANSFER_FRAG_HEAP_SIZE,
> +		.flags = 0,
> +		.page_size_log2 = PVR_DEVICE_PAGE_SHIFT,
> +	},
> +};
> +
> +int
> +pvr_static_data_areas_get(const struct pvr_device *pvr_dev,
> +			  struct drm_pvr_ioctl_dev_query_args *args)
> +{
> +	struct drm_pvr_dev_query_static_data_areas query = {0};
> +	int err;
> +
> +	if (!args->pointer) {
> +		args->size = sizeof(struct drm_pvr_dev_query_static_data_areas);
> +		return 0;
> +	}
> +
> +	err = PVR_UOBJ_GET(query, args->size, args->pointer);
> +	if (err < 0)
> +		return err;
> +
> +	if (!query.static_data_areas.array) {
> +		query.static_data_areas.count = ARRAY_SIZE(static_data_areas);
> +		query.static_data_areas.stride = sizeof(struct drm_pvr_static_data_area);
> +		goto copy_out;
> +	}
> +
> +	if (query.static_data_areas.count > ARRAY_SIZE(static_data_areas))
> +		query.static_data_areas.count = ARRAY_SIZE(static_data_areas);
> +
> +	err = PVR_UOBJ_SET_ARRAY(&query.static_data_areas, static_data_areas);
> +	if (err < 0)
> +		return err;
> +
> +copy_out:
> +	err = PVR_UOBJ_SET(args->pointer, args->size, query);
> +	if (err < 0)
> +		return err;
> +
> +	args->size = sizeof(query);
> +	return 0;
> +}
> +
> +int
> +pvr_heap_info_get(const struct pvr_device *pvr_dev,
> +		  struct drm_pvr_ioctl_dev_query_args *args)
> +{
> +	struct drm_pvr_dev_query_heap_info query = {0};
> +	u64 dest;
> +	int err;
> +
> +	if (!args->pointer) {
> +		args->size = sizeof(struct drm_pvr_dev_query_heap_info);
> +		return 0;
> +	}
> +
> +	err = PVR_UOBJ_GET(query, args->size, args->pointer);
> +	if (err < 0)
> +		return err;
> +
> +	if (!query.heaps.array) {
> +		query.heaps.count = ARRAY_SIZE(pvr_heaps);
> +		query.heaps.stride = sizeof(struct drm_pvr_heap);
> +		goto copy_out;
> +	}
> +
> +	if (query.heaps.count > ARRAY_SIZE(pvr_heaps))
> +		query.heaps.count = ARRAY_SIZE(pvr_heaps);
> +
> +	/* Region header heap is only present if BRN63142 is present. */
> +	dest = query.heaps.array;
> +	for (size_t i = 0; i < query.heaps.count; i++) {
> +		struct drm_pvr_heap heap = pvr_heaps[i];
> +
> +		if (i == DRM_PVR_HEAP_RGNHDR && !PVR_HAS_QUIRK(pvr_dev, 63142))
> +			heap.size = 0;
> +
> +		err = PVR_UOBJ_SET(dest, query.heaps.stride, heap);
> +		if (err < 0)
> +			return err;
> +
> +		dest += query.heaps.stride;
> +	}
> +
> +copy_out:
> +	err = PVR_UOBJ_SET(args->pointer, args->size, query);
> +	if (err < 0)
> +		return err;
> +
> +	args->size = sizeof(query);
> +	return 0;
> +}
> +
> +/**
> + * pvr_heap_contains_range() - Determine if a given heap contains the specified
> + *                             device-virtual address range.
> + * @pvr_heap: Target heap.
> + * @start: Inclusive start of the target range.
> + * @end: Inclusive end of the target range.
> + *
> + * It is an error to call this function with values of @start and @end that do
> + * not satisfy the condition @start <= @end.
> + */
> +static __always_inline bool
> +pvr_heap_contains_range(const struct drm_pvr_heap *pvr_heap, u64 start, u64 end)
> +{
> +	return pvr_heap->base <= start && end < pvr_heap->base + pvr_heap->size;
> +}
> +
> +/**
> + * pvr_find_heap_containing() - Find a heap which contains the specified
> + *                              device-virtual address range.
> + * @pvr_dev: Target PowerVR device.
> + * @start: Start of the target range.
> + * @size: Size of the target range.
> + *
> + * Return:
> + *  * A pointer to a constant instance of struct drm_pvr_heap representing the
> + *    heap containing the entire range specified by @start and @size on
> + *    success, or
> + *  * %NULL if no such heap exists.
> + */
> +const struct drm_pvr_heap *
> +pvr_find_heap_containing(struct pvr_device *pvr_dev, u64 start, u64 size)
> +{
> +	u64 end;
> +
> +	if (check_add_overflow(start, size - 1, &end))
> +		return NULL;
> +
> +	/*
> +	 * There are no guarantees about the order of address ranges in
> +	 * &pvr_heaps, so iterate over the entire array for a heap whose
> +	 * range completely encompasses the given range.
> +	 */
> +	for (u32 heap_id = 0; heap_id < ARRAY_SIZE(pvr_heaps); heap_id++) {
> +		/* Filter heaps that present only with an associated quirk */
> +		if (heap_id == DRM_PVR_HEAP_RGNHDR &&
> +		    !PVR_HAS_QUIRK(pvr_dev, 63142)) {
> +			continue;
> +		}
> +
> +		if (pvr_heap_contains_range(&pvr_heaps[heap_id], start, end))
> +			return &pvr_heaps[heap_id];
> +	}
> +
> +	return NULL;
> +}
> +
> +/**
> + * pvr_vm_find_gem_object() - Look up a buffer object from a given
> + *                            device-virtual address.
> + * @vm_ctx: [IN] Target VM context.
> + * @device_addr: [IN] Virtual device address at the start of the required
> + *               object.
> + * @mapped_offset_out: [OUT] Pointer to location to write offset of the start
> + *                     of the mapped region within the buffer object. May be
> + *                     %NULL if this information is not required.
> + * @mapped_size_out: [OUT] Pointer to location to write size of the mapped
> + *                   region. May be %NULL if this information is not required.
> + *
> + * If successful, a reference will be taken on the buffer object. The caller
> + * must drop the reference with pvr_gem_object_put().
> + *
> + * Return:
> + *  * The PowerVR buffer object mapped at @device_addr if one exists, or
> + *  * %NULL otherwise.
> + */
> +struct pvr_gem_object *
> +pvr_vm_find_gem_object(struct pvr_vm_context *vm_ctx, u64 device_addr,
> +		       u64 *mapped_offset_out, u64 *mapped_size_out)
> +{
> +	struct pvr_gem_object *pvr_obj;
> +	struct drm_gpuva *va;
> +
> +	mutex_lock(&vm_ctx->lock);
> +
> +	va = drm_gpuva_find_first(&vm_ctx->gpuva_mgr, device_addr, 1);
> +	if (!va)
> +		goto err_unlock;
> +
> +	pvr_obj = gem_to_pvr_gem(va->gem.obj);
> +	pvr_gem_object_get(pvr_obj);
> +
> +	if (mapped_offset_out)
> +		*mapped_offset_out = va->gem.offset;
> +	if (mapped_size_out)
> +		*mapped_size_out = va->va.range;
> +
> +	mutex_unlock(&vm_ctx->lock);
> +
> +	return pvr_obj;
> +
> +err_unlock:
> +	mutex_unlock(&vm_ctx->lock);
> +
> +	return NULL;
> +}
> +
> +/**
> + * pvr_vm_get_fw_mem_context: Get object representing firmware memory context
> + * @vm_ctx: Target VM context.
> + *
> + * Returns:
> + *  * FW object representing firmware memory context, or
> + *  * %NULL if this VM context does not have a firmware memory context.
> + */
> +struct pvr_fw_object *
> +pvr_vm_get_fw_mem_context(struct pvr_vm_context *vm_ctx)
> +{
> +	return vm_ctx->fw_mem_ctx_obj;
> +}
> diff --git a/drivers/gpu/drm/imagination/pvr_vm.h b/drivers/gpu/drm/imagination/pvr_vm.h
> new file mode 100644
> index 000000000000..b98bc3981807
> --- /dev/null
> +++ b/drivers/gpu/drm/imagination/pvr_vm.h
> @@ -0,0 +1,60 @@
> +/* SPDX-License-Identifier: GPL-2.0 OR MIT */
> +/* Copyright (c) 2023 Imagination Technologies Ltd. */
> +
> +#ifndef PVR_VM_H
> +#define PVR_VM_H
> +
> +#include "pvr_rogue_mmu_defs.h"
> +
> +#include <uapi/drm/pvr_drm.h>
> +
> +#include <linux/types.h>
> +
> +/* Forward declaration from "pvr_device.h" */
> +struct pvr_device;
> +struct pvr_file;
> +
> +/* Forward declaration from "pvr_gem.h" */
> +struct pvr_gem_object;
> +
> +/* Forward declaration from "pvr_vm.c" */
> +struct pvr_vm_context;
> +
> +/* Forward declaration from <uapi/drm/pvr_drm.h> */
> +struct drm_pvr_ioctl_get_heap_info_args;
> +
> +/* Functions defined in pvr_vm.c */
> +
> +bool pvr_device_addr_is_valid(u64 device_addr);
> +bool pvr_device_addr_and_size_are_valid(u64 device_addr, u64 size);
> +
> +struct pvr_vm_context *pvr_vm_create_context(struct pvr_device *pvr_dev,
> +					     bool is_userspace_context);
> +
> +int pvr_vm_map(struct pvr_vm_context *vm_ctx,
> +	       struct pvr_gem_object *pvr_obj, u64 pvr_obj_offset,
> +	       u64 device_addr, u64 size);
> +int pvr_vm_unmap(struct pvr_vm_context *vm_ctx, u64 device_addr, u64 size);
> +
> +dma_addr_t pvr_vm_get_page_table_root_addr(struct pvr_vm_context *vm_ctx);
> +
> +int pvr_static_data_areas_get(const struct pvr_device *pvr_dev,
> +			      struct drm_pvr_ioctl_dev_query_args *args);
> +int pvr_heap_info_get(const struct pvr_device *pvr_dev,
> +		      struct drm_pvr_ioctl_dev_query_args *args);
> +const struct drm_pvr_heap *pvr_find_heap_containing(struct pvr_device *pvr_dev,
> +						    u64 addr, u64 size);
> +
> +struct pvr_gem_object *pvr_vm_find_gem_object(struct pvr_vm_context *vm_ctx,
> +					      u64 device_addr,
> +					      u64 *mapped_offset_out,
> +					      u64 *mapped_size_out);
> +
> +struct pvr_fw_object *
> +pvr_vm_get_fw_mem_context(struct pvr_vm_context *vm_ctx);
> +
> +struct pvr_vm_context *pvr_vm_context_lookup(struct pvr_file *pvr_file, u32 handle);
> +bool pvr_vm_context_put(struct pvr_vm_context *vm_ctx);
> +void pvr_destroy_vm_contexts_for_file(struct pvr_file *pvr_file);
> +
> +#endif /* PVR_VM_H */
> -- 
> 2.41.0
> 


^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v5 08/17] drm/imagination: Add GEM and VM related code
@ 2023-08-18 15:30     ` Danilo Krummrich
  0 siblings, 0 replies; 77+ messages in thread
From: Danilo Krummrich @ 2023-08-18 15:30 UTC (permalink / raw)
  To: Sarah Walker
  Cc: dri-devel, frank.binns, donald.robson, boris.brezillon,
	faith.ekstrand, airlied, daniel, maarten.lankhorst, mripard,
	tzimmermann, afd, hns, matthew.brost, christian.koenig,
	luben.tuikov, linux-kernel, Matt Coster

Hi Sarah,

On Wed, Aug 16, 2023 at 09:25:23AM +0100, Sarah Walker wrote:
> Add a GEM implementation based on drm_gem_shmem, and support code for the
> PowerVR GPU MMU. The GPU VA manager is used for address space management.
> 
> Changes since v4:
> - Correct sync function in vmap/vunmap function documentation
> - Update for upstream GPU VA manager
> - Fix missing frees when unmapping drm_gpuva objects
> - Always zero GEM BOs on creation
> 
> Changes since v3:
> - Split MMU and VM code
> - Register page table allocations with kmemleak
> - Use drm_dev_{enter,exit}
> 
> Changes since v2:
> - Use GPU VA manager
> - Use drm_gem_shmem
> 
> Co-developed-by: Matt Coster <matt.coster@imgtec.com>
> Signed-off-by: Matt Coster <matt.coster@imgtec.com>
> Co-developed-by: Donald Robson <donald.robson@imgtec.com>
> Signed-off-by: Donald Robson <donald.robson@imgtec.com>
> Signed-off-by: Sarah Walker <sarah.walker@imgtec.com>
> ---
>  drivers/gpu/drm/imagination/Makefile     |    5 +-
>  drivers/gpu/drm/imagination/pvr_device.c |   23 +-
>  drivers/gpu/drm/imagination/pvr_device.h |   18 +
>  drivers/gpu/drm/imagination/pvr_drv.c    |  302 ++-
>  drivers/gpu/drm/imagination/pvr_gem.c    |  396 ++++
>  drivers/gpu/drm/imagination/pvr_gem.h    |  177 ++
>  drivers/gpu/drm/imagination/pvr_mmu.c    | 2487 ++++++++++++++++++++++
>  drivers/gpu/drm/imagination/pvr_mmu.h    |  108 +
>  drivers/gpu/drm/imagination/pvr_vm.c     |  890 ++++++++
>  drivers/gpu/drm/imagination/pvr_vm.h     |   60 +
>  10 files changed, 4455 insertions(+), 11 deletions(-)
>  create mode 100644 drivers/gpu/drm/imagination/pvr_gem.c
>  create mode 100644 drivers/gpu/drm/imagination/pvr_gem.h
>  create mode 100644 drivers/gpu/drm/imagination/pvr_mmu.c
>  create mode 100644 drivers/gpu/drm/imagination/pvr_mmu.h
>  create mode 100644 drivers/gpu/drm/imagination/pvr_vm.c
>  create mode 100644 drivers/gpu/drm/imagination/pvr_vm.h

<snip>

> diff --git a/drivers/gpu/drm/imagination/pvr_vm.c b/drivers/gpu/drm/imagination/pvr_vm.c
> new file mode 100644
> index 000000000000..616fad3a3325
> --- /dev/null
> +++ b/drivers/gpu/drm/imagination/pvr_vm.c
> @@ -0,0 +1,890 @@
> +// SPDX-License-Identifier: GPL-2.0 OR MIT
> +/* Copyright (c) 2023 Imagination Technologies Ltd. */
> +
> +#include "pvr_vm.h"
> +
> +#include "pvr_device.h"
> +#include "pvr_drv.h"
> +#include "pvr_gem.h"
> +#include "pvr_mmu.h"
> +#include "pvr_rogue_fwif.h"
> +#include "pvr_rogue_heap_config.h"
> +
> +#include <drm/drm_gem.h>
> +#include <drm/drm_gpuva_mgr.h>
> +
> +#include <linux/container_of.h>
> +#include <linux/err.h>
> +#include <linux/errno.h>
> +#include <linux/gfp_types.h>
> +#include <linux/kref.h>
> +#include <linux/mutex.h>
> +#include <linux/stddef.h>
> +
> +/**
> + * DOC: Memory context
> + *
> + * This is the "top level" datatype in the VM code. It's exposed in the public
> + * API as an opaque handle.
> + */
> +
> +/**
> + * struct pvr_vm_context - Context type which encapsulates an entire page table
> + * tree structure.
> + * @pvr_dev: The PowerVR device to which this context is bound.
> + *
> + * This binding is immutable for the life of the context.
> + * @mmu_ctx: The context for binding to physical memory.
> + * @gpuva_mgr: GPUVA manager object associated with this context.
> + * @lock: Global lock on this entire structure of page tables.
> + * @fw_mem_ctx_obj: Firmware object representing firmware memory context.
> + * @ref_count: Reference count of object.
> + */
> +struct pvr_vm_context {
> +	struct pvr_device *pvr_dev;
> +	struct pvr_mmu_context *mmu_ctx;
> +	struct drm_gpuva_manager gpuva_mgr;
> +	struct mutex lock;
> +	struct pvr_fw_object *fw_mem_ctx_obj;
> +	struct kref ref_count;
> +};
> +
> +/**
> + * pvr_vm_get_page_table_root_addr() - Get the DMA address of the root of the
> + *                                     page table structure behind a VM context.
> + * @vm_ctx: Target VM context.
> + */
> +dma_addr_t pvr_vm_get_page_table_root_addr(struct pvr_vm_context *vm_ctx)
> +{
> +	return pvr_mmu_get_root_table_dma_addr(vm_ctx->mmu_ctx);
> +}
> +
> +/**
> + * DOC: Memory mappings
> + */
> +
> +/**
> + * pvr_vm_gpuva_mapping_init() - Setup a mapping object with the specified
> + * parameters ready for mapping using pvr_vm_gpuva_mapping_map().
> + * @va: Pointer to drm_gpuva mapping object.
> + * @device_addr: Device-virtual address at the start of the mapping.
> + * @size: Size of the desired mapping.
> + * @pvr_obj: Target PowerVR memory object.
> + * @pvr_obj_offset: Offset into @pvr_obj to begin mapping from.
> + *
> + * Some parameters of this function are unchecked. It is therefore the caller's
> + * responsibility to ensure certain constraints are met. Specifically:
> + *
> + * * @pvr_obj_offset must be less than the size of @pvr_obj,
> + * * The sum of @pvr_obj_offset and @size must be less than or equal to the
> + *   size of @pvr_obj,
> + * * The range specified by @pvr_obj_offset and @size (the "CPU range") must be
> + *   CPU page-aligned both in start position and size, and
> + * * The range specified by @device_addr and @size (the "device range") must be
> + *   device page-aligned both in start position and size.
> + *
> + * Furthermore, it is up to the caller to make sure that a reference to @pvr_obj
> + * is taken prior to mapping @va with the drm_gpuva_manager.
> + */
> +static void
> +pvr_vm_gpuva_mapping_init(struct drm_gpuva *va, u64 device_addr, u64 size,
> +			  struct pvr_gem_object *pvr_obj, u64 pvr_obj_offset)

There's already drm_gpuva_init() doing the same thing.

> +{
> +	va->va.addr = device_addr;
> +	va->va.range = size;
> +	va->gem.obj = gem_from_pvr_gem(pvr_obj);
> +	va->gem.offset = pvr_obj_offset;
> +}
> +
> +struct pvr_vm_gpuva_op_ctx {
> +	struct pvr_vm_context *vm_ctx;
> +	struct pvr_mmu_op_context *mmu_op_ctx;
> +	struct drm_gpuva *new_va, *prev_va, *next_va;
> +};
> +
> +/**
> + * pvr_vm_gpuva_map() - Insert a mapping into a memory context.
> + * @op: gpuva op containing the remap details.
> + * @op_ctx: Operation context.
> + *
> + * Context: Called by drm_gpuva_sm_map following a successful mapping while
> + * @op_ctx.vm_ctx mutex is held.
> + *
> + * Return:
> + *  * 0 on success, or
> + *  * Any error returned by pvr_mmu_map().
> + */
> +static int
> +pvr_vm_gpuva_map(struct drm_gpuva_op *op, void *op_ctx)
> +{
> +	struct pvr_gem_object *pvr_gem = gem_to_pvr_gem(op->map.gem.obj);
> +	struct pvr_vm_gpuva_op_ctx *ctx = op_ctx;
> +	int err;
> +
> +	if ((op->map.gem.offset | op->map.va.range) & ~PVR_DEVICE_PAGE_MASK)
> +		return -EINVAL;
> +
> +	err = pvr_mmu_map(ctx->mmu_op_ctx, op->map.va.range, pvr_gem->flags,
> +			  op->map.va.addr);
> +	if (err)
> +		return err;
> +
> +	pvr_vm_gpuva_mapping_init(ctx->new_va, op->map.va.addr,
> +				  op->map.va.range, pvr_gem, op->map.gem.offset);
> +
> +	drm_gpuva_map(&ctx->vm_ctx->gpuva_mgr, ctx->new_va, &op->map);

drm_gpuva_map() does use drm_gpuva_init_from_op() internally, hence the extra
call to pvr_vm_gpuva_mapping_init() should be unnecessary.

> +	drm_gpuva_link(ctx->new_va);

How is this protected?

drm_gpuva_link() and drm_gpuva_unlink() require either the dma_resv lock of the
corresponding GEM object to be held or, alternatively, the driver-specific lock
indicated via drm_gem_gpuva_set_lock().

> +	ctx->new_va = NULL;
> +
> +	/*
> +	 * Increment the refcount on the underlying physical memory resource
> +	 * to prevent de-allocation while the mapping exists.
> +	 */
> +	pvr_gem_object_get(pvr_gem);
> +
> +	return 0;
> +}
> +
> +/**
> + * pvr_vm_gpuva_unmap() - Remove a mapping from a memory context.
> + * @op: gpuva op containing the unmap details.
> + * @op_ctx: Operation context.
> + *
> + * Context: Called by drm_gpuva_sm_unmap following a successful unmapping while
> + * @op_ctx.vm_ctx mutex is held.
> + *
> + * Return:
> + *  * 0 on success, or
> + *  * Any error returned by pvr_mmu_unmap().
> + */
> +static int
> +pvr_vm_gpuva_unmap(struct drm_gpuva_op *op, void *op_ctx)
> +{
> +	struct pvr_gem_object *pvr_gem = gem_to_pvr_gem(op->unmap.va->gem.obj);
> +	struct pvr_vm_gpuva_op_ctx *ctx = op_ctx;
> +
> +	int err = pvr_mmu_unmap(ctx->mmu_op_ctx, op->unmap.va->va.addr,
> +				op->unmap.va->va.range);
> +
> +	if (err)
> +		return err;
> +
> +	drm_gpuva_unmap(&op->unmap);
> +	drm_gpuva_unlink(op->unmap.va);
> +	kfree(op->unmap.va);
> +
> +	pvr_gem_object_put(pvr_gem);
> +
> +	return 0;
> +}
> +
> +/**
> + * pvr_vm_gpuva_remap() - Remap a mapping within a memory context.
> + * @op: gpuva op containing the remap details.
> + * @op_ctx: Operation context.
> + *
> + * Context: Called by either drm_gpuva_sm_map or drm_gpuva_sm_unmap when a
> + * mapping or unmapping operation causes a region to be split. The
> + * @op_ctx.vm_ctx mutex is held.
> + *
> + * Return:
> + *  * 0 on success, or
> + *  * Any error returned by pvr_mmu_unmap().
> + */
> +static int
> +pvr_vm_gpuva_remap(struct drm_gpuva_op *op, void *op_ctx)
> +{
> +	struct pvr_vm_gpuva_op_ctx *ctx = op_ctx;
> +
> +	if (op->remap.unmap) {

You can omit this check; remap operations always contain a valid unmap
operation. However, you might want to know whether the remap operation was
generated by a call to drm_gpuva_sm_map() or drm_gpuva_sm_unmap(), since for
the latter you might want to free page table structures.

> +		const u64 va_start = op->remap.prev ?
> +				     op->remap.prev->va.addr + op->remap.prev->va.range :
> +				     op->remap.unmap->va->va.addr;
> +		const u64 va_end = op->remap.next ?
> +				   op->remap.next->va.addr :
> +				   op->remap.unmap->va->va.addr + op->remap.unmap->va->va.range;

This seems to be a common calculation for drivers; it is probably worth coming
up with a helper, something like
drm_gpuva_op_unmap_range(struct drm_gpuva_op *op, u64 *addr, u64 *range).

> +
> +		int err = pvr_mmu_unmap(ctx->mmu_op_ctx, va_start,
> +					va_end - va_start);
> +
> +		if (err)
> +			return err;
> +	}
> +
> +	if (op->remap.prev)
> +		pvr_vm_gpuva_mapping_init(ctx->prev_va, op->remap.prev->va.addr,
> +					  op->remap.prev->va.range,
> +					  gem_to_pvr_gem(op->remap.prev->gem.obj),
> +					  op->remap.prev->gem.offset);
> +
> +	if (op->remap.next)
> +		pvr_vm_gpuva_mapping_init(ctx->next_va, op->remap.next->va.addr,
> +					  op->remap.next->va.range,
> +					  gem_to_pvr_gem(op->remap.next->gem.obj),
> +					  op->remap.next->gem.offset);
> +
> +	/* No actual remap required: the page table tree depth is fixed to 3,
> +	 * and we use 4k page table entries only for now.
> +	 */
> +	drm_gpuva_remap(ctx->prev_va, ctx->next_va, &op->remap);

As above, drm_gpuva_remap() does use drm_gpuva_init_from_op() internally, hence
the extra call to pvr_vm_gpuva_mapping_init() should be unnecessary.

> +
> +	if (op->remap.prev) {
> +		pvr_gem_object_get(gem_to_pvr_gem(ctx->prev_va->gem.obj));
> +		drm_gpuva_link(ctx->prev_va);
> +		ctx->prev_va = NULL;
> +	}
> +
> +	if (op->remap.next) {
> +		pvr_gem_object_get(gem_to_pvr_gem(ctx->next_va->gem.obj));
> +		drm_gpuva_link(ctx->next_va);
> +		ctx->next_va = NULL;
> +	}
> +
> +	if (op->remap.unmap) {

As above, no need for this check.

- Danilo

> +		struct pvr_gem_object *pvr_gem = gem_to_pvr_gem(op->remap.unmap->va->gem.obj);
> +
> +		drm_gpuva_unlink(op->unmap.va);
> +		kfree(op->unmap.va);
> +
> +		pvr_gem_object_put(pvr_gem);
> +	}
> +
> +	return 0;
> +}
> +
> +/*
> + * Public API
> + *
> + * For an overview of these functions, see *DOC: Public API* in "pvr_vm.h".
> + */
> +
> +/**
> + * pvr_device_addr_is_valid() - Tests whether a device-virtual address
> + *                              is valid.
> + * @device_addr: Virtual device address to test.
> + *
> + * Return:
> + *  * %true if @device_addr is within the valid range for a device page
> + *    table and is aligned to the device page size, or
> + *  * %false otherwise.
> + */
> +bool
> +pvr_device_addr_is_valid(u64 device_addr)
> +{
> +	return (device_addr & ~PVR_PAGE_TABLE_ADDR_MASK) == 0 &&
> +	       (device_addr & ~PVR_DEVICE_PAGE_MASK) == 0;
> +}
> +
> +/**
> + * pvr_device_addr_and_size_are_valid() - Tests whether a device-virtual
> + * address and associated size are both valid.
> + * @device_addr: Virtual device address to test.
> + * @size: Size of the range based at @device_addr to test.
> + *
> + * Calling pvr_device_addr_is_valid() twice (once on @size, and again on
> + * @device_addr + @size) to verify a device-virtual address range initially
> + * seems intuitive, but it produces a false-negative when the address range
> + * is right at the end of device-virtual address space.
> + *
> + * This function catches that corner case, as well as checking that
> + * @size is non-zero.
> + *
> + * Return:
> + *  * %true if @device_addr is device page aligned; @size is device page
> + *    aligned; the range specified by @device_addr and @size is within the
> + *    bounds of the device-virtual address space, and @size is non-zero, or
> + *  * %false otherwise.
> + */
> +bool
> +pvr_device_addr_and_size_are_valid(u64 device_addr, u64 size)
> +{
> +	return pvr_device_addr_is_valid(device_addr) &&
> +	       size != 0 && (size & ~PVR_DEVICE_PAGE_MASK) == 0 &&
> +	       (device_addr + size <= PVR_PAGE_TABLE_ADDR_SPACE_SIZE);
> +}
> +
> +static const struct drm_gpuva_fn_ops pvr_vm_gpuva_ops = {
> +	.sm_step_map = pvr_vm_gpuva_map,
> +	.sm_step_remap = pvr_vm_gpuva_remap,
> +	.sm_step_unmap = pvr_vm_gpuva_unmap,
> +};
> +
> +/**
> + * pvr_vm_create_context() - Create a new VM context.
> + * @pvr_dev: Target PowerVR device.
> + * @is_userspace_context: %true if this context is for userspace. This will
> + *                        create a firmware memory context for the VM context
> + *                        and disable warnings when tearing down mappings.
> + *
> + * Return:
> + *  * A handle to the newly-minted VM context on success,
> + *  * -%EINVAL if the feature "virtual address space bits" on @pvr_dev is
> + *    missing or has an unsupported value,
> + *  * -%ENOMEM if allocation of the structure behind the opaque handle fails,
> + *    or
> + *  * Any error encountered while setting up internal structures.
> + */
> +struct pvr_vm_context *
> +pvr_vm_create_context(struct pvr_device *pvr_dev, bool is_userspace_context)
> +{
> +	struct drm_device *drm_dev = from_pvr_device(pvr_dev);
> +
> +	struct pvr_vm_context *vm_ctx;
> +	u16 device_addr_bits;
> +
> +	int err;
> +
> +	err = PVR_FEATURE_VALUE(pvr_dev, virtual_address_space_bits,
> +				&device_addr_bits);
> +	if (err) {
> +		drm_err(drm_dev,
> +			"Failed to get device virtual address space bits\n");
> +		return ERR_PTR(err);
> +	}
> +
> +	if (device_addr_bits != PVR_PAGE_TABLE_ADDR_BITS) {
> +		drm_err(drm_dev,
> +			"Device has unsupported virtual address space size\n");
> +		return ERR_PTR(-EINVAL);
> +	}
> +
> +	vm_ctx = kzalloc(sizeof(*vm_ctx), GFP_KERNEL);
> +	if (!vm_ctx)
> +		return ERR_PTR(-ENOMEM);
> +
> +	vm_ctx->pvr_dev = pvr_dev;
> +	kref_init(&vm_ctx->ref_count);
> +	mutex_init(&vm_ctx->lock);
> +
> +	drm_gpuva_manager_init(&vm_ctx->gpuva_mgr,
> +			       is_userspace_context ? "PowerVR-user-VM" : "PowerVR-FW-VM",
> +			       0, 1ULL << device_addr_bits, 0, 0, &pvr_vm_gpuva_ops);
> +
> +	vm_ctx->mmu_ctx = pvr_mmu_context_create(pvr_dev);
> +	err = PTR_ERR_OR_ZERO(&vm_ctx->mmu_ctx);
> +	if (err) {
> +		vm_ctx->mmu_ctx = NULL;
> +		goto err_put_ctx;
> +	}
> +
> +	if (is_userspace_context) {
> +		/* TODO: Create FW mem context */
> +		err = -ENODEV;
> +		goto err_put_ctx;
> +	}
> +
> +	return vm_ctx;
> +
> +err_put_ctx:
> +	pvr_vm_context_put(vm_ctx);
> +
> +	return ERR_PTR(err);
> +}
> +
> +/**
> + * pvr_vm_context_release() - Teardown a VM context.
> + * @ref_count: Pointer to reference counter of the VM context.
> + *
> + * This function ensures that no mappings are left dangling by unmapping them
> + * all in order of ascending device-virtual address.
> + */
> +static void
> +pvr_vm_context_release(struct kref *ref_count)
> +{
> +	struct pvr_vm_context *vm_ctx =
> +		container_of(ref_count, struct pvr_vm_context, ref_count);
> +
> +	/* TODO: Destroy FW mem context */
> +	WARN_ON(vm_ctx->fw_mem_ctx_obj);
> +
> +	WARN_ON(pvr_vm_unmap(vm_ctx, vm_ctx->gpuva_mgr.mm_start,
> +			     vm_ctx->gpuva_mgr.mm_range));
> +
> +	drm_gpuva_manager_destroy(&vm_ctx->gpuva_mgr);
> +	pvr_mmu_context_destroy(vm_ctx->mmu_ctx);
> +	mutex_destroy(&vm_ctx->lock);
> +
> +	kfree(vm_ctx);
> +}
> +
> +/**
> + * pvr_vm_context_lookup() - Look up VM context from handle
> + * @pvr_file: Pointer to pvr_file structure.
> + * @handle: Object handle.
> + *
> + * Takes reference on VM context object. Call pvr_vm_context_put() to release.
> + *
> + * Returns:
> + *  * The requested object on success, or
> + *  * %NULL on failure (object does not exist in list, or is not a VM context)
> + */
> +struct pvr_vm_context *
> +pvr_vm_context_lookup(struct pvr_file *pvr_file, u32 handle)
> +{
> +	struct pvr_vm_context *vm_ctx;
> +
> +	xa_lock(&pvr_file->vm_ctx_handles);
> +	vm_ctx = xa_load(&pvr_file->vm_ctx_handles, handle);
> +	if (vm_ctx)
> +		kref_get(&vm_ctx->ref_count);
> +
> +	xa_unlock(&pvr_file->vm_ctx_handles);
> +
> +	return vm_ctx;
> +}
> +
> +/**
> + * pvr_vm_context_put() - Release a reference on a VM context
> + * @vm_ctx: Target VM context.
> + *
> + * Returns:
> + *  * %true if the VM context was destroyed, or
> + *  * %false if there are any references still remaining.
> + */
> +bool
> +pvr_vm_context_put(struct pvr_vm_context *vm_ctx)
> +{
> +	WARN_ON(!vm_ctx);
> +
> +	if (vm_ctx)
> +		return kref_put(&vm_ctx->ref_count, pvr_vm_context_release);
> +
> +	return true;
> +}
> +
> +/**
> + * pvr_destroy_vm_contexts_for_file: Destroy any VM contexts associated with the
> + * given file.
> + * @pvr_file: Pointer to pvr_file structure.
> + *
> + * Removes all vm_contexts associated with @pvr_file from the device VM context
> + * list and drops initial references. vm_contexts will then be destroyed once
> + * all outstanding references are dropped.
> + */
> +void pvr_destroy_vm_contexts_for_file(struct pvr_file *pvr_file)
> +{
> +	struct pvr_vm_context *vm_ctx;
> +	unsigned long handle;
> +
> +	xa_for_each(&pvr_file->vm_ctx_handles, handle, vm_ctx) {
> +		/* vm_ctx is not used here because that would create a race with xa_erase */
> +		pvr_vm_context_put(xa_erase(&pvr_file->vm_ctx_handles, handle));
> +	}
> +}
> +
> +/**
> + * pvr_vm_map() - Map a section of physical memory into a section of device-virtual memory.
> + * @vm_ctx: Target VM context.
> + * @pvr_obj: Target PowerVR memory object.
> + * @pvr_obj_offset: Offset into @pvr_obj to map from.
> + * @device_addr: Virtual device address at the start of the requested mapping.
> + * @size: Size of the requested mapping.
> + *
> + * No handle is returned to represent the mapping. Instead, callers should
> + * remember @device_addr and use that as a handle.
> + *
> + * Return:
> + *  * 0 on success,
> + *  * -%EINVAL if @device_addr is not a valid page-aligned device-virtual
> + *    address; the region specified by @pvr_obj_offset and @size does not fall
> + *    entirely within @pvr_obj, or any part of the specified region of @pvr_obj
> + *    is not device-virtual page-aligned,
> + *  * Any error encountered while performing internal operations required to
> + *    destroy the mapping (returned from pvr_vm_gpuva_map or
> + *    pvr_vm_gpuva_remap).
> + */
> +int
> +pvr_vm_map(struct pvr_vm_context *vm_ctx,
> +	   struct pvr_gem_object *pvr_obj, u64 pvr_obj_offset,
> +	   u64 device_addr, u64 size)
> +{
> +	const size_t pvr_obj_size = pvr_gem_object_size(pvr_obj);
> +	struct pvr_vm_gpuva_op_ctx op_ctx = { .vm_ctx = vm_ctx };
> +	struct sg_table *sgt;
> +	int err;
> +
> +	if (!pvr_device_addr_and_size_are_valid(device_addr, size) ||
> +	    pvr_obj_offset & ~PAGE_MASK || size & ~PAGE_MASK ||
> +	    pvr_obj_offset + size > pvr_obj_size ||
> +	    pvr_obj_offset > pvr_obj_size) {
> +		return -EINVAL;
> +	}
> +
> +	op_ctx.new_va = kzalloc(sizeof(*op_ctx.new_va), GFP_KERNEL);
> +	op_ctx.prev_va = kzalloc(sizeof(*op_ctx.prev_va), GFP_KERNEL);
> +	op_ctx.next_va = kzalloc(sizeof(*op_ctx.next_va), GFP_KERNEL);
> +	if (!op_ctx.new_va || !op_ctx.prev_va || !op_ctx.next_va) {
> +		err = -ENOMEM;
> +		goto out_free;
> +	}
> +
> +	sgt = pvr_gem_object_get_pages_sgt(pvr_obj);
> +	err = PTR_ERR_OR_ZERO(sgt);
> +	if (err)
> +		goto out_free;
> +
> +	op_ctx.mmu_op_ctx = pvr_mmu_op_context_create(vm_ctx->mmu_ctx, sgt,
> +						      pvr_obj_offset, size);
> +	err = PTR_ERR_OR_ZERO(op_ctx.mmu_op_ctx);
> +	if (err) {
> +		op_ctx.mmu_op_ctx = NULL;
> +		goto out_mmu_op_ctx_destroy;
> +	}
> +
> +	mutex_lock(&vm_ctx->lock);
> +	err = drm_gpuva_sm_map(&vm_ctx->gpuva_mgr, &op_ctx, device_addr, size,
> +			       gem_from_pvr_gem(pvr_obj), pvr_obj_offset);
> +	mutex_unlock(&vm_ctx->lock);
> +
> +out_mmu_op_ctx_destroy:
> +	pvr_mmu_op_context_destroy(op_ctx.mmu_op_ctx);
> +
> +out_free:
> +	kfree(op_ctx.next_va);
> +	kfree(op_ctx.prev_va);
> +	kfree(op_ctx.new_va);
> +
> +	return err;
> +}
> +
> +/**
> + * pvr_vm_unmap() - Unmap an already mapped section of device-virtual memory.
> + * @vm_ctx: Target VM context.
> + * @device_addr: Virtual device address at the start of the target mapping.
> + * @size: Size of the target mapping.
> + *
> + * Return:
> + *  * 0 on success,
> + *  * -%EINVAL if @device_addr is not a valid page-aligned device-virtual
> + *    address,
> + *  * Any error encountered while performing internal operations required to
> + *    destroy the mapping (returned from pvr_vm_gpuva_unmap or
> + *    pvr_vm_gpuva_remap).
> + */
> +int
> +pvr_vm_unmap(struct pvr_vm_context *vm_ctx, u64 device_addr, u64 size)
> +{
> +	struct pvr_vm_gpuva_op_ctx op_ctx = { .vm_ctx = vm_ctx };
> +	int err;
> +
> +	if (!pvr_device_addr_and_size_are_valid(device_addr, size))
> +		return -EINVAL;
> +
> +	op_ctx.prev_va = kzalloc(sizeof(*op_ctx.prev_va), GFP_KERNEL);
> +	op_ctx.next_va = kzalloc(sizeof(*op_ctx.next_va), GFP_KERNEL);
> +	if (!op_ctx.prev_va || !op_ctx.next_va) {
> +		err = -ENOMEM;
> +		goto out;
> +	}
> +
> +	op_ctx.mmu_op_ctx =
> +		pvr_mmu_op_context_create(vm_ctx->mmu_ctx, NULL, 0, 0);
> +	err = PTR_ERR_OR_ZERO(op_ctx.mmu_op_ctx);
> +	if (err) {
> +		op_ctx.mmu_op_ctx = NULL;
> +		goto out;
> +	}
> +
> +	mutex_lock(&vm_ctx->lock);
> +	err = drm_gpuva_sm_unmap(&vm_ctx->gpuva_mgr, &op_ctx, device_addr, size);
> +	mutex_unlock(&vm_ctx->lock);
> +
> +out:
> +	pvr_mmu_op_context_destroy(op_ctx.mmu_op_ctx);
> +	kfree(op_ctx.next_va);
> +	kfree(op_ctx.prev_va);
> +
> +	return err;
> +}
> +
> +/*
> + * Static data areas are determined by firmware.
> + *
> + * When adding a new static data area you will also need to update the reserved_size field for the
> + * heap in pvr_heaps[].
> + */
> +static const struct drm_pvr_static_data_area static_data_areas[] = {
> +	{
> +		.area_usage = DRM_PVR_STATIC_DATA_AREA_FENCE,
> +		.location_heap_id = DRM_PVR_HEAP_GENERAL,
> +		.offset = 0,
> +		.size = 128,
> +	},
> +	{
> +		.area_usage = DRM_PVR_STATIC_DATA_AREA_YUV_CSC,
> +		.location_heap_id = DRM_PVR_HEAP_GENERAL,
> +		.offset = 128,
> +		.size = 1024,
> +	},
> +	{
> +		.area_usage = DRM_PVR_STATIC_DATA_AREA_VDM_SYNC,
> +		.location_heap_id = DRM_PVR_HEAP_PDS_CODE_DATA,
> +		.offset = 0,
> +		.size = 128,
> +	},
> +	{
> +		.area_usage = DRM_PVR_STATIC_DATA_AREA_EOT,
> +		.location_heap_id = DRM_PVR_HEAP_PDS_CODE_DATA,
> +		.offset = 128,
> +		.size = 128,
> +	},
> +	{
> +		.area_usage = DRM_PVR_STATIC_DATA_AREA_VDM_SYNC,
> +		.location_heap_id = DRM_PVR_HEAP_USC_CODE,
> +		.offset = 0,
> +		.size = 128,
> +	},
> +};
> +
> +#define GET_RESERVED_SIZE(last_offset, last_size) round_up((last_offset) + (last_size), PAGE_SIZE)
> +
> +/*
> + * The values given to GET_RESERVED_SIZE() are taken from the last entry in the corresponding
> + * static data area for each heap.
> + */
> +static const struct drm_pvr_heap pvr_heaps[] = {
> +	[DRM_PVR_HEAP_GENERAL] = {
> +		.base = ROGUE_GENERAL_HEAP_BASE,
> +		.size = ROGUE_GENERAL_HEAP_SIZE,
> +		.flags = 0,
> +		.page_size_log2 = PVR_DEVICE_PAGE_SHIFT,
> +	},
> +	[DRM_PVR_HEAP_PDS_CODE_DATA] = {
> +		.base = ROGUE_PDSCODEDATA_HEAP_BASE,
> +		.size = ROGUE_PDSCODEDATA_HEAP_SIZE,
> +		.flags = 0,
> +		.page_size_log2 = PVR_DEVICE_PAGE_SHIFT,
> +	},
> +	[DRM_PVR_HEAP_USC_CODE] = {
> +		.base = ROGUE_USCCODE_HEAP_BASE,
> +		.size = ROGUE_USCCODE_HEAP_SIZE,
> +		.flags = 0,
> +		.page_size_log2 = PVR_DEVICE_PAGE_SHIFT,
> +	},
> +	[DRM_PVR_HEAP_RGNHDR] = {
> +		.base = ROGUE_RGNHDR_HEAP_BASE,
> +		.size = ROGUE_RGNHDR_HEAP_SIZE,
> +		.flags = 0,
> +		.page_size_log2 = PVR_DEVICE_PAGE_SHIFT,
> +	},
> +	[DRM_PVR_HEAP_VIS_TEST] = {
> +		.base = ROGUE_VISTEST_HEAP_BASE,
> +		.size = ROGUE_VISTEST_HEAP_SIZE,
> +		.flags = 0,
> +		.page_size_log2 = PVR_DEVICE_PAGE_SHIFT,
> +	},
> +	[DRM_PVR_HEAP_TRANSFER_FRAG] = {
> +		.base = ROGUE_TRANSFER_FRAG_HEAP_BASE,
> +		.size = ROGUE_TRANSFER_FRAG_HEAP_SIZE,
> +		.flags = 0,
> +		.page_size_log2 = PVR_DEVICE_PAGE_SHIFT,
> +	},
> +};
> +
> +int
> +pvr_static_data_areas_get(const struct pvr_device *pvr_dev,
> +			  struct drm_pvr_ioctl_dev_query_args *args)
> +{
> +	struct drm_pvr_dev_query_static_data_areas query = {0};
> +	int err;
> +
> +	if (!args->pointer) {
> +		args->size = sizeof(struct drm_pvr_dev_query_static_data_areas);
> +		return 0;
> +	}
> +
> +	err = PVR_UOBJ_GET(query, args->size, args->pointer);
> +	if (err < 0)
> +		return err;
> +
> +	if (!query.static_data_areas.array) {
> +		query.static_data_areas.count = ARRAY_SIZE(static_data_areas);
> +		query.static_data_areas.stride = sizeof(struct drm_pvr_static_data_area);
> +		goto copy_out;
> +	}
> +
> +	if (query.static_data_areas.count > ARRAY_SIZE(static_data_areas))
> +		query.static_data_areas.count = ARRAY_SIZE(static_data_areas);
> +
> +	err = PVR_UOBJ_SET_ARRAY(&query.static_data_areas, static_data_areas);
> +	if (err < 0)
> +		return err;
> +
> +copy_out:
> +	err = PVR_UOBJ_SET(args->pointer, args->size, query);
> +	if (err < 0)
> +		return err;
> +
> +	args->size = sizeof(query);
> +	return 0;
> +}
> +
> +int
> +pvr_heap_info_get(const struct pvr_device *pvr_dev,
> +		  struct drm_pvr_ioctl_dev_query_args *args)
> +{
> +	struct drm_pvr_dev_query_heap_info query = {0};
> +	u64 dest;
> +	int err;
> +
> +	if (!args->pointer) {
> +		args->size = sizeof(struct drm_pvr_dev_query_heap_info);
> +		return 0;
> +	}
> +
> +	err = PVR_UOBJ_GET(query, args->size, args->pointer);
> +	if (err < 0)
> +		return err;
> +
> +	if (!query.heaps.array) {
> +		query.heaps.count = ARRAY_SIZE(pvr_heaps);
> +		query.heaps.stride = sizeof(struct drm_pvr_heap);
> +		goto copy_out;
> +	}
> +
> +	if (query.heaps.count > ARRAY_SIZE(pvr_heaps))
> +		query.heaps.count = ARRAY_SIZE(pvr_heaps);
> +
> +	/* Region header heap is only present if BRN63142 is present. */
> +	dest = query.heaps.array;
> +	for (size_t i = 0; i < query.heaps.count; i++) {
> +		struct drm_pvr_heap heap = pvr_heaps[i];
> +
> +		if (i == DRM_PVR_HEAP_RGNHDR && !PVR_HAS_QUIRK(pvr_dev, 63142))
> +			heap.size = 0;
> +
> +		err = PVR_UOBJ_SET(dest, query.heaps.stride, heap);
> +		if (err < 0)
> +			return err;
> +
> +		dest += query.heaps.stride;
> +	}
> +
> +copy_out:
> +	err = PVR_UOBJ_SET(args->pointer, args->size, query);
> +	if (err < 0)
> +		return err;
> +
> +	args->size = sizeof(query);
> +	return 0;
> +}
> +
> +/**
> + * pvr_heap_contains_range() - Determine if a given heap contains the specified
> + *                             device-virtual address range.
> + * @pvr_heap: Target heap.
> + * @start: Inclusive start of the target range.
> + * @end: Inclusive end of the target range.
> + *
> + * It is an error to call this function with values of @start and @end that do
> + * not satisfy the condition @start <= @end.
> + */
> +static __always_inline bool
> +pvr_heap_contains_range(const struct drm_pvr_heap *pvr_heap, u64 start, u64 end)
> +{
> +	return pvr_heap->base <= start && end < pvr_heap->base + pvr_heap->size;
> +}
> +
> +/**
> + * pvr_find_heap_containing() - Find a heap which contains the specified
> + *                              device-virtual address range.
> + * @pvr_dev: Target PowerVR device.
> + * @start: Start of the target range.
> + * @size: Size of the target range.
> + *
> + * Return:
> + *  * A pointer to a constant instance of struct drm_pvr_heap representing the
> + *    heap containing the entire range specified by @start and @size on
> + *    success, or
> + *  * %NULL if no such heap exists.
> + */
> +const struct drm_pvr_heap *
> +pvr_find_heap_containing(struct pvr_device *pvr_dev, u64 start, u64 size)
> +{
> +	u64 end;
> +
> +	if (check_add_overflow(start, size - 1, &end))
> +		return NULL;
> +
> +	/*
> +	 * There are no guarantees about the order of address ranges in
> +	 * &pvr_heaps, so iterate over the entire array for a heap whose
> +	 * range completely encompasses the given range.
> +	 */
> +	for (u32 heap_id = 0; heap_id < ARRAY_SIZE(pvr_heaps); heap_id++) {
> +		/* Filter heaps that present only with an associated quirk */
> +		if (heap_id == DRM_PVR_HEAP_RGNHDR &&
> +		    !PVR_HAS_QUIRK(pvr_dev, 63142)) {
> +			continue;
> +		}
> +
> +		if (pvr_heap_contains_range(&pvr_heaps[heap_id], start, end))
> +			return &pvr_heaps[heap_id];
> +	}
> +
> +	return NULL;
> +}
> +
> +/**
> + * pvr_vm_find_gem_object() - Look up a buffer object from a given
> + *                            device-virtual address.
> + * @vm_ctx: [IN] Target VM context.
> + * @device_addr: [IN] Virtual device address at the start of the required
> + *               object.
> + * @mapped_offset_out: [OUT] Pointer to location to write offset of the start
> + *                     of the mapped region within the buffer object. May be
> + *                     %NULL if this information is not required.
> + * @mapped_size_out: [OUT] Pointer to location to write size of the mapped
> + *                   region. May be %NULL if this information is not required.
> + *
> + * If successful, a reference will be taken on the buffer object. The caller
> + * must drop the reference with pvr_gem_object_put().
> + *
> + * Return:
> + *  * The PowerVR buffer object mapped at @device_addr if one exists, or
> + *  * %NULL otherwise.
> + */
> +struct pvr_gem_object *
> +pvr_vm_find_gem_object(struct pvr_vm_context *vm_ctx, u64 device_addr,
> +		       u64 *mapped_offset_out, u64 *mapped_size_out)
> +{
> +	struct pvr_gem_object *pvr_obj;
> +	struct drm_gpuva *va;
> +
> +	mutex_lock(&vm_ctx->lock);
> +
> +	va = drm_gpuva_find_first(&vm_ctx->gpuva_mgr, device_addr, 1);
> +	if (!va)
> +		goto err_unlock;
> +
> +	pvr_obj = gem_to_pvr_gem(va->gem.obj);
> +	pvr_gem_object_get(pvr_obj);
> +
> +	if (mapped_offset_out)
> +		*mapped_offset_out = va->gem.offset;
> +	if (mapped_size_out)
> +		*mapped_size_out = va->va.range;
> +
> +	mutex_unlock(&vm_ctx->lock);
> +
> +	return pvr_obj;
> +
> +err_unlock:
> +	mutex_unlock(&vm_ctx->lock);
> +
> +	return NULL;
> +}
> +
> +/**
> + * pvr_vm_get_fw_mem_context: Get object representing firmware memory context
> + * @vm_ctx: Target VM context.
> + *
> + * Returns:
> + *  * FW object representing firmware memory context, or
> + *  * %NULL if this VM context does not have a firmware memory context.
> + */
> +struct pvr_fw_object *
> +pvr_vm_get_fw_mem_context(struct pvr_vm_context *vm_ctx)
> +{
> +	return vm_ctx->fw_mem_ctx_obj;
> +}
> diff --git a/drivers/gpu/drm/imagination/pvr_vm.h b/drivers/gpu/drm/imagination/pvr_vm.h
> new file mode 100644
> index 000000000000..b98bc3981807
> --- /dev/null
> +++ b/drivers/gpu/drm/imagination/pvr_vm.h
> @@ -0,0 +1,60 @@
> +/* SPDX-License-Identifier: GPL-2.0 OR MIT */
> +/* Copyright (c) 2023 Imagination Technologies Ltd. */
> +
> +#ifndef PVR_VM_H
> +#define PVR_VM_H
> +
> +#include "pvr_rogue_mmu_defs.h"
> +
> +#include <uapi/drm/pvr_drm.h>
> +
> +#include <linux/types.h>
> +
> +/* Forward declaration from "pvr_device.h" */
> +struct pvr_device;
> +struct pvr_file;
> +
> +/* Forward declaration from "pvr_gem.h" */
> +struct pvr_gem_object;
> +
> +/* Forward declaration from "pvr_vm.c" */
> +struct pvr_vm_context;
> +
> +/* Forward declaration from <uapi/drm/pvr_drm.h> */
> +struct drm_pvr_ioctl_get_heap_info_args;
> +
> +/* Functions defined in pvr_vm.c */
> +
> +bool pvr_device_addr_is_valid(u64 device_addr);
> +bool pvr_device_addr_and_size_are_valid(u64 device_addr, u64 size);
> +
> +struct pvr_vm_context *pvr_vm_create_context(struct pvr_device *pvr_dev,
> +					     bool is_userspace_context);
> +
> +int pvr_vm_map(struct pvr_vm_context *vm_ctx,
> +	       struct pvr_gem_object *pvr_obj, u64 pvr_obj_offset,
> +	       u64 device_addr, u64 size);
> +int pvr_vm_unmap(struct pvr_vm_context *vm_ctx, u64 device_addr, u64 size);
> +
> +dma_addr_t pvr_vm_get_page_table_root_addr(struct pvr_vm_context *vm_ctx);
> +
> +int pvr_static_data_areas_get(const struct pvr_device *pvr_dev,
> +			      struct drm_pvr_ioctl_dev_query_args *args);
> +int pvr_heap_info_get(const struct pvr_device *pvr_dev,
> +		      struct drm_pvr_ioctl_dev_query_args *args);
> +const struct drm_pvr_heap *pvr_find_heap_containing(struct pvr_device *pvr_dev,
> +						    u64 addr, u64 size);
> +
> +struct pvr_gem_object *pvr_vm_find_gem_object(struct pvr_vm_context *vm_ctx,
> +					      u64 device_addr,
> +					      u64 *mapped_offset_out,
> +					      u64 *mapped_size_out);
> +
> +struct pvr_fw_object *
> +pvr_vm_get_fw_mem_context(struct pvr_vm_context *vm_ctx);
> +
> +struct pvr_vm_context *pvr_vm_context_lookup(struct pvr_file *pvr_file, u32 handle);
> +bool pvr_vm_context_put(struct pvr_vm_context *vm_ctx);
> +void pvr_destroy_vm_contexts_for_file(struct pvr_file *pvr_file);
> +
> +#endif /* PVR_VM_H */
> -- 
> 2.41.0
> 


^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v5 08/17] drm/imagination: Add GEM and VM related code
  2023-08-16  8:25   ` Sarah Walker
@ 2023-08-18 22:59     ` Jann Horn
  -1 siblings, 0 replies; 77+ messages in thread
From: Jann Horn @ 2023-08-18 22:59 UTC (permalink / raw)
  To: Sarah Walker
  Cc: matthew.brost, hns, linux-kernel, mripard, afd, Matt Coster,
	luben.tuikov, dakr, dri-devel, tzimmermann, boris.brezillon,
	christian.koenig, faith.ekstrand, donald.robson

On Wed, Aug 16, 2023 at 10:25 AM Sarah Walker <sarah.walker@imgtec.com> wrote:
> Add a GEM implementation based on drm_gem_shmem, and support code for the
> PowerVR GPU MMU. The GPU VA manager is used for address space management.
[...]
> +/**
> + * pvr_mmu_flush() - Request flush of all MMU caches.
> + * @pvr_dev: Target PowerVR device.
> + *
> + * This function must be called following any possible change to the MMU page
> + * tables.
> + *
> + * Returns:
> + *  * 0 on success, or
> + *  * Any error encountered while submitting the flush command via the KCCB.
> + */
> +int
> +pvr_mmu_flush(struct pvr_device *pvr_dev)
> +{
> +       /* TODO: implement */
> +       return -ENODEV;
> +}

pvr_mmu_flush() being an operation that can fail looks dodgy to me.
Especially given that a later patch implements pvr_mmu_flush() such
that it looks like it can hit a transient failure on the path:

pvr_mmu_flush
  pvr_kccb_send_cmd
    pvr_kccb_send_cmd_powered
      pvr_kccb_reserve_slot_sync

pvr_kccb_reserve_slot_sync() even looks like it could transiently fail
without calling pvr_kccb_try_reserve_slot() a single time if it gets
preempted until RESERVE_SLOT_TIMEOUT time has passed between the first
and the second read of "jiffies". (Which is probably not very likely
to happen by chance, but Android devices are typically configured with
full kernel preemption, so if an attacker causes pvr_mmu_flush() to
run in a process with a SCHED_IDLE scheduling policy and deliberately
preempts the process at the right point, they might be able to achieve
this.) A more robust retry pattern might be to do a fixed number of
retry iterations with sleeps in between instead of retrying until a
fixed amount of time has passed; though I still wouldn't want to rely
on that for making sure that a TLB flush happens.
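
Something like this, purely as an illustration (it assumes a bool-returning
pvr_kccb_try_reserve_slot() helper and ignores whatever locking the real
code needs around it):

static int pvr_kccb_reserve_slot_sync_bounded(struct pvr_device *pvr_dev)
{
	unsigned int i;

	/*
	 * 100 attempts roughly 10ms apart: still bounded in time, but every
	 * caller gets a fixed number of actual reservation attempts even if
	 * it is preempted for a long time between them.
	 */
	for (i = 0; i < 100; i++) {
		if (pvr_kccb_try_reserve_slot(pvr_dev))
			return 0;

		usleep_range(10000, 11000);
	}

	return -EBUSY;
}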

In my opinion, any error path of pvr_mmu_flush() should guarantee that
by the time it returns, the address space is no longer used by the GPU
and can never be used by the GPU again.

[...]
> +/**
> + * struct pvr_mmu_op_context - context holding data for individual
> + * device-virtual mapping operations. Intended for use with a VM bind operation.
> + */
> +struct pvr_mmu_op_context {
> +       /** @mmu_ctx: The MMU context associated with the owning VM context. */
> +       struct pvr_mmu_context *mmu_ctx;
> +
> +       /** @map: Data specifically for map operations. */
> +       struct {
> +               /**
> +                * @sgt: Scatter gather table containing pages pinned for use by
> +                * this context - these are currently pinned when initialising
> +                * the VM bind operation.
> +                */
> +               struct sg_table *sgt;
> +
> +               /** @sgt_offset: Start address of the device-virtual mapping. */
> +               u64 sgt_offset;
> +       } map;
> +
> +       /**
> +        * @l1_free_tables: Preallocated l1 page table objects for use by this
> +        * context when creating a page mapping. Linked list created during
> +        * initialisation. Also used to collect page table objects freed by an
> +        * unmap.
> +        */
> +       struct pvr_page_table_l1 *l1_free_tables;
> +
> +       /**
> +        * @l0_free_tables: Preallocated l0 page table objects for use by this
> +        * context when creating a page mapping. Linked list created during
> +        * initialisation. Also used to collect page table objects freed by an
> +        * unmap.
> +        */
> +       struct pvr_page_table_l0 *l0_free_tables;

The free page table lists are shared between page table allocation and
freeing within one operation, and they have last-in-first-out (stack)
behavior, which means that when a pvr_vm_map() invocation does
pvr_vm_gpuva_unmap() invocations followed by pvr_vm_gpuva_map()
invocations, it can end up immediately reusing the freed page tables
within the same operation, right?
Since pvr_mmu_flush() only happens in pvr_mmu_op_context_destroy() at
the end of pvr_vm_map(), that means a concurrent GPU page table walk
could walk down to the old address where a page table used to be
mapped, and then observe the page table entries that were created at
the new address to which the page table was moved? Like:


GPU         AP
===         ==
load L2 PTE for VA A
load L1 PTE for VA A, it contains a reference to L0 table T1
            pvr_page_table_l1_remove() removes L0 table T1 for VA A
            pvr_page_table_l1_remove() removes L0 table T2 for VA B
            pvr_page_table_l1_insert() inserts L0 table T2 at VA A
            pvr_page_table_l1_insert() inserts L0 table T1 at VA B
            pvr_page_table_l0_insert() inserts PTE into T1 for VA B
load L0 PTE for VA A from L0 table T1


And since page tables also don't seem to be cache-flushed when they
are put on these freelists, this could maybe also happen the other way
around: The GPU walks down into a page table that was moved to a new
address, but observes PTEs from the old address at which the page
table was previously mapped?

> +
> +       /**
> +        * @curr_page - A reference to a single physical page as indexed by
> +        * the page table structure.
> +        */
> +       struct pvr_page_table_ptr curr_page;
> +
> +       /**
> +        * @sync_level_required: The maximum level of the page table tree
> +        * structure which has (possibly) been modified since it was last
> +        * flushed to the device.
> +        *
> +        * This field should only be set with pvr_mmu_op_context_require_sync()
> +        * or indirectly by pvr_mmu_op_context_sync_partial().
> +        */
> +       enum pvr_mmu_sync_level sync_level_required;
> +};
[...]
> +/**
> + * pvr_mmu_unmap() - Unmap pages from a memory context.
> + * @op_ctx: Target MMU op context.
> + * @device_addr: First device-virtual address to unmap.
> + * @size: Size in bytes to unmap.
> + *
> + * The total amount of device-virtual memory unmapped is
> + * @nr_pages * %PVR_DEVICE_PAGE_SIZE.
> + *
> + * Returns:
> + *  * 0 on success, or
> + *  * Any error code returned by pvr_page_table_ptr_init(), or
> + *  * Any error code returned by pvr_page_table_ptr_unmap().
> + */
> +int pvr_mmu_unmap(struct pvr_mmu_op_context *op_ctx, u64 device_addr, u64 size)
> +{
> +       int err = pvr_mmu_op_context_set_curr_page(op_ctx, device_addr, false);
> +
> +       if (err)
> +               return err;
> +
> +       return pvr_mmu_op_context_unmap_curr_page(op_ctx,
> +                                                 size >> PVR_DEVICE_PAGE_SHIFT);
> +}

I think we can get here in the middle of this call path:

  pvr_ioctl_vm_unmap
    pvr_vm_unmap
      pvr_mmu_op_context_create
      mutex_lock(&vm_ctx->lock);
      drm_gpuva_sm_unmap
        __drm_gpuva_sm_unmap
          loop:
            op_unmap_cb [conditional]
              pvr_vm_gpuva_unmap
                pvr_mmu_unmap [WE ARE HERE]
                  pvr_mmu_op_context_set_curr_page
                    pvr_mmu_op_context_sync [CACHE FLUSH]
                    pvr_mmu_op_context_load_tables
                  pvr_mmu_op_context_unmap_curr_page
                    pvr_page_destroy [conditional]
                    loop:
                      pvr_mmu_op_context_next_page
                        pvr_mmu_op_context_sync_partial [CACHE FLUSH]
                        pvr_mmu_op_context_load_tables
                      pvr_page_destroy
                        pvr_page_table_l0_remove [REMOVES PTE]
                          pvr_page_table_l1_remove [conditional]
                        pvr_mmu_op_context_require_sync
                drm_gpuva_unmap
                drm_gpuva_unlink
                kfree(op->unmap.va)
                pvr_gem_object_put [FREES PAGES]
      mutex_unlock(&vm_ctx->lock);
      pvr_mmu_op_context_destroy
        pvr_mmu_op_context_sync [CACHE FLUSH]
        pvr_mmu_flush [FLUSHES MMU]
        loop:
          pvr_page_table_l0_free
        loop:
          pvr_page_table_l1_free

From what I can tell, we can get from `pvr_page_table_l0_remove`
(where the GPU PTE is cleared) to `pvr_gem_object_put` (where the page
referenced by the PTE can be freed) without going through a page table
cache flush or an MMU flush; I think we need both (unless the
pvr_gem_object_put() is somehow deferred until
pvr_mmu_op_context_destroy() is reached).
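
One way to arrange that deferral, sketched only to illustrate the idea (the
deferred_puts xarray and the helper names below are invented for this
example, they are not part of the patch):

/* Invented names for illustration only. */
static int pvr_vm_defer_gem_put(struct pvr_vm_gpuva_op_ctx *ctx,
				struct pvr_gem_object *pvr_gem)
{
	u32 id;

	/* ctx->deferred_puts would be an xarray initialised with
	 * xa_init_flags(..., XA_FLAGS_ALLOC) when the op context is created.
	 * Called from the unmap/remap steps instead of dropping the GEM
	 * reference while stale PTEs may still be visible to the GPU.
	 */
	return xa_alloc(&ctx->deferred_puts, &id, pvr_gem, xa_limit_32b,
			GFP_KERNEL);
}

static void pvr_vm_flush_deferred_gem_puts(struct pvr_vm_gpuva_op_ctx *ctx)
{
	struct pvr_gem_object *pvr_gem;
	unsigned long id;

	/* Only safe once pvr_mmu_op_context_destroy() has done the final
	 * page table cache flush and the MMU flush.
	 */
	xa_for_each(&ctx->deferred_puts, id, pvr_gem)
		pvr_gem_object_put(pvr_gem);
	xa_destroy(&ctx->deferred_puts);
}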

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [EXTERNAL] Re: [PATCH v5 08/17] drm/imagination: Add GEM and VM related code
  2023-08-18 15:30     ` Danilo Krummrich
@ 2023-08-21  8:30       ` Donald Robson
  -1 siblings, 0 replies; 77+ messages in thread
From: Donald Robson @ 2023-08-21  8:30 UTC (permalink / raw)
  To: dakr, Sarah Walker
  Cc: matthew.brost, tzimmermann, hns, linux-kernel, mripard, afd,
	Matt Coster, luben.tuikov, dri-devel, boris.brezillon,
	christian.koenig, faith.ekstrand

Hi Danilo,
Thanks for the feedback.  On the subject of locking, I have dma_resv locking
in another branch where I'm trying to enable bind queues, but I didn't think
I needed locking for the single, synchronous operations seen here.  Would a 
mutex on the gem object wrapper suffice?
Thanks,
Donald
On Fri, 2023-08-18 at 17:30 +0200, Danilo Krummrich wrote:
> 
> Hi Sarah,
> 
> On Wed, Aug 16, 2023 at 09:25:23AM +0100, Sarah Walker wrote:
> > Add a GEM implementation based on drm_gem_shmem, and support code for the
> > PowerVR GPU MMU. The GPU VA manager is used for address space management.
> > 
> > Changes since v4:
> > - Correct sync function in vmap/vunmap function documentation
> > - Update for upstream GPU VA manager
> > - Fix missing frees when unmapping drm_gpuva objects
> > - Always zero GEM BOs on creation
> > 
> > Changes since v3:
> > - Split MMU and VM code
> > - Register page table allocations with kmemleak
> > - Use drm_dev_{enter,exit}
> > 
> > Changes since v2:
> > - Use GPU VA manager
> > - Use drm_gem_shmem
> > 
> > Co-developed-by: Matt Coster <matt.coster@imgtec.com>
> > Signed-off-by: Matt Coster <matt.coster@imgtec.com>
> > Co-developed-by: Donald Robson <donald.robson@imgtec.com>
> > Signed-off-by: Donald Robson <donald.robson@imgtec.com>
> > Signed-off-by: Sarah Walker <sarah.walker@imgtec.com>
> > ---
> >  drivers/gpu/drm/imagination/Makefile     |    5 +-
> >  drivers/gpu/drm/imagination/pvr_device.c |   23 +-
> >  drivers/gpu/drm/imagination/pvr_device.h |   18 +
> >  drivers/gpu/drm/imagination/pvr_drv.c    |  302 ++-
> >  drivers/gpu/drm/imagination/pvr_gem.c    |  396 ++++
> >  drivers/gpu/drm/imagination/pvr_gem.h    |  177 ++
> >  drivers/gpu/drm/imagination/pvr_mmu.c    | 2487 ++++++++++++++++++++++
> >  drivers/gpu/drm/imagination/pvr_mmu.h    |  108 +
> >  drivers/gpu/drm/imagination/pvr_vm.c     |  890 ++++++++
> >  drivers/gpu/drm/imagination/pvr_vm.h     |   60 +
> >  10 files changed, 4455 insertions(+), 11 deletions(-)
> >  create mode 100644 drivers/gpu/drm/imagination/pvr_gem.c
> >  create mode 100644 drivers/gpu/drm/imagination/pvr_gem.h
> >  create mode 100644 drivers/gpu/drm/imagination/pvr_mmu.c
> >  create mode 100644 drivers/gpu/drm/imagination/pvr_mmu.h
> >  create mode 100644 drivers/gpu/drm/imagination/pvr_vm.c
> >  create mode 100644 drivers/gpu/drm/imagination/pvr_vm.h
> 
> <snip>
> 
> > diff --git a/drivers/gpu/drm/imagination/pvr_vm.c b/drivers/gpu/drm/imagination/pvr_vm.c
> > new file mode 100644
> > index 000000000000..616fad3a3325
> > --- /dev/null
> > +++ b/drivers/gpu/drm/imagination/pvr_vm.c
> > @@ -0,0 +1,890 @@
> > +// SPDX-License-Identifier: GPL-2.0 OR MIT
> > +/* Copyright (c) 2023 Imagination Technologies Ltd. */
> > +
> > +#include "pvr_vm.h"
> > +
> > +#include "pvr_device.h"
> > +#include "pvr_drv.h"
> > +#include "pvr_gem.h"
> > +#include "pvr_mmu.h"
> > +#include "pvr_rogue_fwif.h"
> > +#include "pvr_rogue_heap_config.h"
> > +
> > +#include <drm/drm_gem.h>
> > +#include <drm/drm_gpuva_mgr.h>
> > +
> > +#include <linux/container_of.h>
> > +#include <linux/err.h>
> > +#include <linux/errno.h>
> > +#include <linux/gfp_types.h>
> > +#include <linux/kref.h>
> > +#include <linux/mutex.h>
> > +#include <linux/stddef.h>
> > +
> > +/**
> > + * DOC: Memory context
> > + *
> > + * This is the "top level" datatype in the VM code. It's exposed in the public
> > + * API as an opaque handle.
> > + */
> > +
> > +/**
> > + * struct pvr_vm_context - Context type which encapsulates an entire page table
> > + * tree structure.
> > + * @pvr_dev: The PowerVR device to which this context is bound.
> > + *
> > + * This binding is immutable for the life of the context.
> > + * @mmu_ctx: The context for binding to physical memory.
> > + * @gpuva_mgr: GPUVA manager object associated with this context.
> > + * @lock: Global lock on this entire structure of page tables.
> > + * @fw_mem_ctx_obj: Firmware object representing firmware memory context.
> > + * @ref_count: Reference count of object.
> > + */
> > +struct pvr_vm_context {
> > +	struct pvr_device *pvr_dev;
> > +	struct pvr_mmu_context *mmu_ctx;
> > +	struct drm_gpuva_manager gpuva_mgr;
> > +	struct mutex lock;
> > +	struct pvr_fw_object *fw_mem_ctx_obj;
> > +	struct kref ref_count;
> > +};
> > +
> > +/**
> > + * pvr_vm_get_page_table_root_addr() - Get the DMA address of the root of the
> > + *                                     page table structure behind a VM context.
> > + * @vm_ctx: Target VM context.
> > + */
> > +dma_addr_t pvr_vm_get_page_table_root_addr(struct pvr_vm_context *vm_ctx)
> > +{
> > +	return pvr_mmu_get_root_table_dma_addr(vm_ctx->mmu_ctx);
> > +}
> > +
> > +/**
> > + * DOC: Memory mappings
> > + */
> > +
> > +/**
> > + * pvr_vm_gpuva_mapping_init() - Setup a mapping object with the specified
> > + * parameters ready for mapping using pvr_vm_gpuva_mapping_map().
> > + * @va: Pointer to drm_gpuva mapping object.
> > + * @device_addr: Device-virtual address at the start of the mapping.
> > + * @size: Size of the desired mapping.
> > + * @pvr_obj: Target PowerVR memory object.
> > + * @pvr_obj_offset: Offset into @pvr_obj to begin mapping from.
> > + *
> > + * Some parameters of this function are unchecked. It is therefore the callers
> > + * responsibility to ensure certain constraints are met. Specifically:
> > + *
> > + * * @pvr_obj_offset must be less than the size of @pvr_obj,
> > + * * The sum of @pvr_obj_offset and @size must be less than or equal to the
> > + *   size of @pvr_obj,
> > + * * The range specified by @pvr_obj_offset and @size (the "CPU range") must be
> > + *   CPU page-aligned both in start position and size, and
> > + * * The range specified by @device_addr and @size (the "device range") must be
> > + *   device page-aligned both in start position and size.
> > + *
> > + * Furthermore, it is up to the caller to make sure that a reference to @pvr_obj
> > + * is taken prior to mapping @va with the drm_gpuva_manager.
> > + */
> > +static void
> > +pvr_vm_gpuva_mapping_init(struct drm_gpuva *va, u64 device_addr, u64 size,
> > +			  struct pvr_gem_object *pvr_obj, u64 pvr_obj_offset)
> 
> There's already drm_gpuva_init() doing the same thing.
> 
> > +{
> > +	va->va.addr = device_addr;
> > +	va->va.range = size;
> > +	va->gem.obj = gem_from_pvr_gem(pvr_obj);
> > +	va->gem.offset = pvr_obj_offset;
> > +}
> > +
> > +struct pvr_vm_gpuva_op_ctx {
> > +	struct pvr_vm_context *vm_ctx;
> > +	struct pvr_mmu_op_context *mmu_op_ctx;
> > +	struct drm_gpuva *new_va, *prev_va, *next_va;
> > +};
> > +
> > +/**
> > + * pvr_vm_gpuva_map() - Insert a mapping into a memory context.
> > + * @op: gpuva op containing the remap details.
> > + * @op_ctx: Operation context.
> > + *
> > + * Context: Called by drm_gpuva_sm_map following a successful mapping while
> > + * @op_ctx.vm_ctx mutex is held.
> > + *
> > + * Return:
> > + *  * 0 on success, or
> > + *  * Any error returned by pvr_mmu_map().
> > + */
> > +static int
> > +pvr_vm_gpuva_map(struct drm_gpuva_op *op, void *op_ctx)
> > +{
> > +	struct pvr_gem_object *pvr_gem = gem_to_pvr_gem(op->map.gem.obj);
> > +	struct pvr_vm_gpuva_op_ctx *ctx = op_ctx;
> > +	int err;
> > +
> > +	if ((op->map.gem.offset | op->map.va.range) & ~PVR_DEVICE_PAGE_MASK)
> > +		return -EINVAL;
> > +
> > +	err = pvr_mmu_map(ctx->mmu_op_ctx, op->map.va.range, pvr_gem->flags,
> > +			  op->map.va.addr);
> > +	if (err)
> > +		return err;
> > +
> > +	pvr_vm_gpuva_mapping_init(ctx->new_va, op->map.va.addr,
> > +				  op->map.va.range, pvr_gem, op->map.gem.offset);
> > +
> > +	drm_gpuva_map(&ctx->vm_ctx->gpuva_mgr, ctx->new_va, &op->map);
> 
> drm_gpuva_map() does use drm_gpuva_init_from_op() internally, hence the extra
> call to pvr_vm_gpuva_mapping_init() should be unnecessary.
> 
> > +	drm_gpuva_link(ctx->new_va);
> 
> How is this protected?
> 
> drm_gpuva_link() and drm_gpuva_unlink() require either the dma_resv lock of the
> corresponding GEM object being held or, alternatively, the driver specific lock
> indicated via drm_gem_gpuva_set_lock().
> 
> > +	ctx->new_va = NULL;
> > +
> > +	/*
> > +	 * Increment the refcount on the underlying physical memory resource
> > +	 * to prevent de-allocation while the mapping exists.
> > +	 */
> > +	pvr_gem_object_get(pvr_gem);
> > +
> > +	return 0;
> > +}
> > +
> > +/**
> > + * pvr_vm_gpuva_unmap() - Remove a mapping from a memory context.
> > + * @op: gpuva op containing the unmap details.
> > + * @op_ctx: Operation context.
> > + *
> > + * Context: Called by drm_gpuva_sm_unmap following a successful unmapping while
> > + * @op_ctx.vm_ctx mutex is held.
> > + *
> > + * Return:
> > + *  * 0 on success, or
> > + *  * Any error returned by pvr_mmu_unmap().
> > + */
> > +static int
> > +pvr_vm_gpuva_unmap(struct drm_gpuva_op *op, void *op_ctx)
> > +{
> > +	struct pvr_gem_object *pvr_gem = gem_to_pvr_gem(op->unmap.va->gem.obj);
> > +	struct pvr_vm_gpuva_op_ctx *ctx = op_ctx;
> > +
> > +	int err = pvr_mmu_unmap(ctx->mmu_op_ctx, op->unmap.va->va.addr,
> > +				op->unmap.va->va.range);
> > +
> > +	if (err)
> > +		return err;
> > +
> > +	drm_gpuva_unmap(&op->unmap);
> > +	drm_gpuva_unlink(op->unmap.va);
> > +	kfree(op->unmap.va);
> > +
> > +	pvr_gem_object_put(pvr_gem);
> > +
> > +	return 0;
> > +}
> > +
> > +/**
> > + * pvr_vm_gpuva_remap() - Remap a mapping within a memory context.
> > + * @op: gpuva op containing the remap details.
> > + * @op_ctx: Operation context.
> > + *
> > + * Context: Called by either drm_gpuva_sm_map or drm_gpuva_sm_unmap when a
> > + * mapping or unmapping operation causes a region to be split. The
> > + * @op_ctx.vm_ctx mutex is held.
> > + *
> > + * Return:
> > + *  * 0 on success, or
> > + *  * Any error returned by pvr_vm_gpuva_unmap() or pvr_vm_gpuva_unmap().
> > + */
> > +static int
> > +pvr_vm_gpuva_remap(struct drm_gpuva_op *op, void *op_ctx)
> > +{
> > +	struct pvr_vm_gpuva_op_ctx *ctx = op_ctx;
> > +
> > +	if (op->remap.unmap) {
> 
> You can omit this check, remap operations always contain a valid unmap
> operation. However, you might want to know whether the remap operation was
> generated due to a call to drm_gpuva_sm_map() or drm_gpuva_sm_unmap(), since for
> the latter you might want to free page table structures.
> 
> > +		const u64 va_start = op->remap.prev ?
> > +				     op->remap.prev->va.addr + op->remap.prev->va.range :
> > +				     op->remap.unmap->va->va.addr;
> > +		const u64 va_end = op->remap.next ?
> > +				   op->remap.next->va.addr :
> > +				   op->remap.unmap->va->va.addr + op->remap.unmap->va->va.range;
> 
> This seems to be a common calculation for drivers, it is probably worth to come
> up with a helper, something like
> drm_gpuva_op_unmap_range(struct drm_gpuva_op *op, u64 *addr, u64 *range).
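> 
> A rough sketch of that helper, with the body lifted from the calculation
> quoted above (only meaningful for remap ops):
> 
> static inline void
> drm_gpuva_op_unmap_range(struct drm_gpuva_op *op, u64 *addr, u64 *range)
> {
> 	const u64 start = op->remap.prev ?
> 			  op->remap.prev->va.addr + op->remap.prev->va.range :
> 			  op->remap.unmap->va->va.addr;
> 	const u64 end = op->remap.next ?
> 			op->remap.next->va.addr :
> 			op->remap.unmap->va->va.addr + op->remap.unmap->va->va.range;
> 
> 	*addr = start;
> 	*range = end - start;
> }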
> 
> > +
> > +		int err = pvr_mmu_unmap(ctx->mmu_op_ctx, va_start,
> > +					va_end - va_start);
> > +
> > +		if (err)
> > +			return err;
> > +	}
> > +
> > +	if (op->remap.prev)
> > +		pvr_vm_gpuva_mapping_init(ctx->prev_va, op->remap.prev->va.addr,
> > +					  op->remap.prev->va.range,
> > +					  gem_to_pvr_gem(op->remap.prev->gem.obj),
> > +					  op->remap.prev->gem.offset);
> > +
> > +	if (op->remap.next)
> > +		pvr_vm_gpuva_mapping_init(ctx->next_va, op->remap.next->va.addr,
> > +					  op->remap.next->va.range,
> > +					  gem_to_pvr_gem(op->remap.next->gem.obj),
> > +					  op->remap.next->gem.offset);
> > +
> > +	/* No actual remap required: the page table tree depth is fixed to 3,
> > +	 * and we use 4k page table entries only for now.
> > +	 */
> > +	drm_gpuva_remap(ctx->prev_va, ctx->next_va, &op->remap);
> 
> As above, drm_gpuva_remap() does use drm_gpuva_init_from_op() internally, hence
> the extra call to pvr_vm_gpuva_mapping_init() should be unnecessary.
> 
> > +
> > +	if (op->remap.prev) {
> > +		pvr_gem_object_get(gem_to_pvr_gem(ctx->prev_va->gem.obj));
> > +		drm_gpuva_link(ctx->prev_va);
> > +		ctx->prev_va = NULL;
> > +	}
> > +
> > +	if (op->remap.next) {
> > +		pvr_gem_object_get(gem_to_pvr_gem(ctx->next_va->gem.obj));
> > +		drm_gpuva_link(ctx->next_va);
> > +		ctx->next_va = NULL;
> > +	}
> > +
> > +	if (op->remap.unmap) {
> 
> As above, no need for this check.
> 
> - Danilo
> 
> > +		struct pvr_gem_object *pvr_gem = gem_to_pvr_gem(op->remap.unmap->va->gem.obj);
> > +
> > +		drm_gpuva_unlink(op->unmap.va);
> > +		kfree(op->unmap.va);
> > +
> > +		pvr_gem_object_put(pvr_gem);
> > +	}
> > +
> > +	return 0;
> > +}
> > +
> > +/*
> > + * Public API
> > + *
> > + * For an overview of these functions, see *DOC: Public API* in "pvr_vm.h".
> > + */
> > +
> > +/**
> > + * pvr_device_addr_is_valid() - Tests whether a device-virtual address
> > + *                              is valid.
> > + * @device_addr: Virtual device address to test.
> > + *
> > + * Return:
> > + *  * %true if @device_addr is within the valid range for a device page
> > + *    table and is aligned to the device page size, or
> > + *  * %false otherwise.
> > + */
> > +bool
> > +pvr_device_addr_is_valid(u64 device_addr)
> > +{
> > +	return (device_addr & ~PVR_PAGE_TABLE_ADDR_MASK) == 0 &&
> > +	       (device_addr & ~PVR_DEVICE_PAGE_MASK) == 0;
> > +}
> > +
> > +/**
> > + * pvr_device_addr_and_size_are_valid() - Tests whether a device-virtual
> > + * address and associated size are both valid.
> > + * @device_addr: Virtual device address to test.
> > + * @size: Size of the range based at @device_addr to test.
> > + *
> > + * Calling pvr_device_addr_is_valid() twice (once on @size, and again on
> > + * @device_addr + @size) to verify a device-virtual address range initially
> > + * seems intuitive, but it produces a false-negative when the address range
> > + * is right at the end of device-virtual address space.
> > + *
> > + * This function catches that corner case, as well as checking that
> > + * @size is non-zero.
> > + *
> > + * Return:
> > + *  * %true if @device_addr is device page aligned; @size is device page
> > + *    aligned; the range specified by @device_addr and @size is within the
> > + *    bounds of the device-virtual address space, and @size is non-zero, or
> > + *  * %false otherwise.
> > + */
> > +bool
> > +pvr_device_addr_and_size_are_valid(u64 device_addr, u64 size)
> > +{
> > +	return pvr_device_addr_is_valid(device_addr) &&
> > +	       size != 0 && (size & ~PVR_DEVICE_PAGE_MASK) == 0 &&
> > +	       (device_addr + size <= PVR_PAGE_TABLE_ADDR_SPACE_SIZE);
> > +}
> > +
> > +static const struct drm_gpuva_fn_ops pvr_vm_gpuva_ops = {
> > +	.sm_step_map = pvr_vm_gpuva_map,
> > +	.sm_step_remap = pvr_vm_gpuva_remap,
> > +	.sm_step_unmap = pvr_vm_gpuva_unmap,
> > +};
> > +
> > +/**
> > + * pvr_vm_create_context() - Create a new VM context.
> > + * @pvr_dev: Target PowerVR device.
> > + * @is_userspace_context: %true if this context is for userspace. This will
> > + *                        create a firmware memory context for the VM context
> > + *                        and disable warnings when tearing down mappings.
> > + *
> > + * Return:
> > + *  * A handle to the newly-minted VM context on success,
> > + *  * -%EINVAL if the feature "virtual address space bits" on @pvr_dev is
> > + *    missing or has an unsupported value,
> > + *  * -%ENOMEM if allocation of the structure behind the opaque handle fails,
> > + *    or
> > + *  * Any error encountered while setting up internal structures.
> > + */
> > +struct pvr_vm_context *
> > +pvr_vm_create_context(struct pvr_device *pvr_dev, bool is_userspace_context)
> > +{
> > +	struct drm_device *drm_dev = from_pvr_device(pvr_dev);
> > +
> > +	struct pvr_vm_context *vm_ctx;
> > +	u16 device_addr_bits;
> > +
> > +	int err;
> > +
> > +	err = PVR_FEATURE_VALUE(pvr_dev, virtual_address_space_bits,
> > +				&device_addr_bits);
> > +	if (err) {
> > +		drm_err(drm_dev,
> > +			"Failed to get device virtual address space bits\n");
> > +		return ERR_PTR(err);
> > +	}
> > +
> > +	if (device_addr_bits != PVR_PAGE_TABLE_ADDR_BITS) {
> > +		drm_err(drm_dev,
> > +			"Device has unsupported virtual address space size\n");
> > +		return ERR_PTR(-EINVAL);
> > +	}
> > +
> > +	vm_ctx = kzalloc(sizeof(*vm_ctx), GFP_KERNEL);
> > +	if (!vm_ctx)
> > +		return ERR_PTR(-ENOMEM);
> > +
> > +	vm_ctx->pvr_dev = pvr_dev;
> > +	kref_init(&vm_ctx->ref_count);
> > +	mutex_init(&vm_ctx->lock);
> > +
> > +	drm_gpuva_manager_init(&vm_ctx->gpuva_mgr,
> > +			       is_userspace_context ? "PowerVR-user-VM" : "PowerVR-FW-VM",
> > +			       0, 1ULL << device_addr_bits, 0, 0, &pvr_vm_gpuva_ops);
> > +
> > +	vm_ctx->mmu_ctx = pvr_mmu_context_create(pvr_dev);
> > +	err = PTR_ERR_OR_ZERO(&vm_ctx->mmu_ctx);
> > +	if (err) {
> > +		vm_ctx->mmu_ctx = NULL;
> > +		goto err_put_ctx;
> > +	}
> > +
> > +	if (is_userspace_context) {
> > +		/* TODO: Create FW mem context */
> > +		err = -ENODEV;
> > +		goto err_put_ctx;
> > +	}
> > +
> > +	return vm_ctx;
> > +
> > +err_put_ctx:
> > +	pvr_vm_context_put(vm_ctx);
> > +
> > +	return ERR_PTR(err);
> > +}
> > +
> > +/**
> > + * pvr_vm_context_release() - Teardown a VM context.
> > + * @ref_count: Pointer to reference counter of the VM context.
> > + *
> > + * This function ensures that no mappings are left dangling by unmapping them
> > + * all in order of ascending device-virtual address.
> > + */
> > +static void
> > +pvr_vm_context_release(struct kref *ref_count)
> > +{
> > +	struct pvr_vm_context *vm_ctx =
> > +		container_of(ref_count, struct pvr_vm_context, ref_count);
> > +
> > +	/* TODO: Destroy FW mem context */
> > +	WARN_ON(vm_ctx->fw_mem_ctx_obj);
> > +
> > +	WARN_ON(pvr_vm_unmap(vm_ctx, vm_ctx->gpuva_mgr.mm_start,
> > +			     vm_ctx->gpuva_mgr.mm_range));
> > +
> > +	drm_gpuva_manager_destroy(&vm_ctx->gpuva_mgr);
> > +	pvr_mmu_context_destroy(vm_ctx->mmu_ctx);
> > +	mutex_destroy(&vm_ctx->lock);
> > +
> > +	kfree(vm_ctx);
> > +}
> > +
> > +/**
> > + * pvr_vm_context_lookup() - Look up VM context from handle
> > + * @pvr_file: Pointer to pvr_file structure.
> > + * @handle: Object handle.
> > + *
> > + * Takes reference on VM context object. Call pvr_vm_context_put() to release.
> > + *
> > + * Returns:
> > + *  * The requested object on success, or
> > + *  * %NULL on failure (object does not exist in list, or is not a VM context)
> > + */
> > +struct pvr_vm_context *
> > +pvr_vm_context_lookup(struct pvr_file *pvr_file, u32 handle)
> > +{
> > +	struct pvr_vm_context *vm_ctx;
> > +
> > +	xa_lock(&pvr_file->vm_ctx_handles);
> > +	vm_ctx = xa_load(&pvr_file->vm_ctx_handles, handle);
> > +	if (vm_ctx)
> > +		kref_get(&vm_ctx->ref_count);
> > +
> > +	xa_unlock(&pvr_file->vm_ctx_handles);
> > +
> > +	return vm_ctx;
> > +}
> > +
> > +/**
> > + * pvr_vm_context_put() - Release a reference on a VM context
> > + * @vm_ctx: Target VM context.
> > + *
> > + * Returns:
> > + *  * %true if the VM context was destroyed, or
> > + *  * %false if there are any references still remaining.
> > + */
> > +bool
> > +pvr_vm_context_put(struct pvr_vm_context *vm_ctx)
> > +{
> > +	WARN_ON(!vm_ctx);
> > +
> > +	if (vm_ctx)
> > +		return kref_put(&vm_ctx->ref_count, pvr_vm_context_release);
> > +
> > +	return true;
> > +}
> > +
> > +/**
> > + * pvr_destroy_vm_contexts_for_file: Destroy any VM contexts associated with the
> > + * given file.
> > + * @pvr_file: Pointer to pvr_file structure.
> > + *
> > + * Removes all vm_contexts associated with @pvr_file from the device VM context
> > + * list and drops initial references. vm_contexts will then be destroyed once
> > + * all outstanding references are dropped.
> > + */
> > +void pvr_destroy_vm_contexts_for_file(struct pvr_file *pvr_file)
> > +{
> > +	struct pvr_vm_context *vm_ctx;
> > +	unsigned long handle;
> > +
> > +	xa_for_each(&pvr_file->vm_ctx_handles, handle, vm_ctx) {
> > +		/* vm_ctx is not used here because that would create a race with xa_erase */
> > +		pvr_vm_context_put(xa_erase(&pvr_file->vm_ctx_handles, handle));
> > +	}
> > +}
> > +
> > +/**
> > + * pvr_vm_map() - Map a section of physical memory into a section of device-virtual memory.
> > + * @vm_ctx: Target VM context.
> > + * @pvr_obj: Target PowerVR memory object.
> > + * @pvr_obj_offset: Offset into @pvr_obj to map from.
> > + * @device_addr: Virtual device address at the start of the requested mapping.
> > + * @size: Size of the requested mapping.
> > + *
> > + * No handle is returned to represent the mapping. Instead, callers should
> > + * remember @device_addr and use that as a handle.
> > + *
> > + * Return:
> > + *  * 0 on success,
> > + *  * -%EINVAL if @device_addr is not a valid page-aligned device-virtual
> > + *    address; the region specified by @pvr_obj_offset and @size does not fall
> > + *    entirely within @pvr_obj, or any part of the specified region of @pvr_obj
> > + *    is not device-virtual page-aligned,
> > + *  * Any error encountered while performing internal operations required to
> > + *    destroy the mapping (returned from pvr_vm_gpuva_map or
> > + *    pvr_vm_gpuva_remap).
> > + */
> > +int
> > +pvr_vm_map(struct pvr_vm_context *vm_ctx,
> > +	   struct pvr_gem_object *pvr_obj, u64 pvr_obj_offset,
> > +	   u64 device_addr, u64 size)
> > +{
> > +	const size_t pvr_obj_size = pvr_gem_object_size(pvr_obj);
> > +	struct pvr_vm_gpuva_op_ctx op_ctx = { .vm_ctx = vm_ctx };
> > +	struct sg_table *sgt;
> > +	int err;
> > +
> > +	if (!pvr_device_addr_and_size_are_valid(device_addr, size) ||
> > +	    pvr_obj_offset & ~PAGE_MASK || size & ~PAGE_MASK ||
> > +	    pvr_obj_offset + size > pvr_obj_size ||
> > +	    pvr_obj_offset > pvr_obj_size) {
> > +		return -EINVAL;
> > +	}
> > +
> > +	op_ctx.new_va = kzalloc(sizeof(*op_ctx.new_va), GFP_KERNEL);
> > +	op_ctx.prev_va = kzalloc(sizeof(*op_ctx.prev_va), GFP_KERNEL);
> > +	op_ctx.next_va = kzalloc(sizeof(*op_ctx.next_va), GFP_KERNEL);
> > +	if (!op_ctx.new_va || !op_ctx.prev_va || !op_ctx.next_va) {
> > +		err = -ENOMEM;
> > +		goto out_free;
> > +	}
> > +
> > +	sgt = pvr_gem_object_get_pages_sgt(pvr_obj);
> > +	err = PTR_ERR_OR_ZERO(sgt);
> > +	if (err)
> > +		goto out_free;
> > +
> > +	op_ctx.mmu_op_ctx = pvr_mmu_op_context_create(vm_ctx->mmu_ctx, sgt,
> > +						      pvr_obj_offset, size);
> > +	err = PTR_ERR_OR_ZERO(op_ctx.mmu_op_ctx);
> > +	if (err) {
> > +		op_ctx.mmu_op_ctx = NULL;
> > +		goto out_mmu_op_ctx_destroy;
> > +	}
> > +
> > +	mutex_lock(&vm_ctx->lock);
> > +	err = drm_gpuva_sm_map(&vm_ctx->gpuva_mgr, &op_ctx, device_addr, size,
> > +			       gem_from_pvr_gem(pvr_obj), pvr_obj_offset);
> > +	mutex_unlock(&vm_ctx->lock);
> > +
> > +out_mmu_op_ctx_destroy:
> > +	pvr_mmu_op_context_destroy(op_ctx.mmu_op_ctx);
> > +
> > +out_free:
> > +	kfree(op_ctx.next_va);
> > +	kfree(op_ctx.prev_va);
> > +	kfree(op_ctx.new_va);
> > +
> > +	return err;
> > +}
> > +
> > +/**
> > + * pvr_vm_unmap() - Unmap an already mapped section of device-virtual memory.
> > + * @vm_ctx: Target VM context.
> > + * @device_addr: Virtual device address at the start of the target mapping.
> > + * @size: Size of the target mapping.
> > + *
> > + * Return:
> > + *  * 0 on success,
> > + *  * -%EINVAL if @device_addr is not a valid page-aligned device-virtual
> > + *    address,
> > + *  * Any error encountered while performing internal operations required to
> > + *    destroy the mapping (returned from pvr_vm_gpuva_unmap or
> > + *    pvr_vm_gpuva_remap).
> > + */
> > +int
> > +pvr_vm_unmap(struct pvr_vm_context *vm_ctx, u64 device_addr, u64 size)
> > +{
> > +	struct pvr_vm_gpuva_op_ctx op_ctx = { .vm_ctx = vm_ctx };
> > +	int err;
> > +
> > +	if (!pvr_device_addr_and_size_are_valid(device_addr, size))
> > +		return -EINVAL;
> > +
> > +	op_ctx.prev_va = kzalloc(sizeof(*op_ctx.prev_va), GFP_KERNEL);
> > +	op_ctx.next_va = kzalloc(sizeof(*op_ctx.next_va), GFP_KERNEL);
> > +	if (!op_ctx.prev_va || !op_ctx.next_va) {
> > +		err = -ENOMEM;
> > +		goto out;
> > +	}
> > +
> > +	op_ctx.mmu_op_ctx =
> > +		pvr_mmu_op_context_create(vm_ctx->mmu_ctx, NULL, 0, 0);
> > +	err = PTR_ERR_OR_ZERO(op_ctx.mmu_op_ctx);
> > +	if (err) {
> > +		op_ctx.mmu_op_ctx = NULL;
> > +		goto out;
> > +	}
> > +
> > +	mutex_lock(&vm_ctx->lock);
> > +	err = drm_gpuva_sm_unmap(&vm_ctx->gpuva_mgr, &op_ctx, device_addr, size);
> > +	mutex_unlock(&vm_ctx->lock);
> > +
> > +out:
> > +	pvr_mmu_op_context_destroy(op_ctx.mmu_op_ctx);
> > +	kfree(op_ctx.next_va);
> > +	kfree(op_ctx.prev_va);
> > +
> > +	return err;
> > +}
> > +
> > +/*
> > + * Static data areas are determined by firmware.
> > + *
> > + * When adding a new static data area you will also need to update the reserved_size field for the
> > + * heap in pvr_heaps[].
> > + */
> > +static const struct drm_pvr_static_data_area static_data_areas[] = {
> > +	{
> > +		.area_usage = DRM_PVR_STATIC_DATA_AREA_FENCE,
> > +		.location_heap_id = DRM_PVR_HEAP_GENERAL,
> > +		.offset = 0,
> > +		.size = 128,
> > +	},
> > +	{
> > +		.area_usage = DRM_PVR_STATIC_DATA_AREA_YUV_CSC,
> > +		.location_heap_id = DRM_PVR_HEAP_GENERAL,
> > +		.offset = 128,
> > +		.size = 1024,
> > +	},
> > +	{
> > +		.area_usage = DRM_PVR_STATIC_DATA_AREA_VDM_SYNC,
> > +		.location_heap_id = DRM_PVR_HEAP_PDS_CODE_DATA,
> > +		.offset = 0,
> > +		.size = 128,
> > +	},
> > +	{
> > +		.area_usage = DRM_PVR_STATIC_DATA_AREA_EOT,
> > +		.location_heap_id = DRM_PVR_HEAP_PDS_CODE_DATA,
> > +		.offset = 128,
> > +		.size = 128,
> > +	},
> > +	{
> > +		.area_usage = DRM_PVR_STATIC_DATA_AREA_VDM_SYNC,
> > +		.location_heap_id = DRM_PVR_HEAP_USC_CODE,
> > +		.offset = 0,
> > +		.size = 128,
> > +	},
> > +};
> > +
> > +#define GET_RESERVED_SIZE(last_offset, last_size) round_up((last_offset) + (last_size), PAGE_SIZE)
> > +
> > +/*
> > + * The values given to GET_RESERVED_SIZE() are taken from the last entry in the corresponding
> > + * static data area for each heap.
> > + */
> > +static const struct drm_pvr_heap pvr_heaps[] = {
> > +	[DRM_PVR_HEAP_GENERAL] = {
> > +		.base = ROGUE_GENERAL_HEAP_BASE,
> > +		.size = ROGUE_GENERAL_HEAP_SIZE,
> > +		.flags = 0,
> > +		.page_size_log2 = PVR_DEVICE_PAGE_SHIFT,
> > +	},
> > +	[DRM_PVR_HEAP_PDS_CODE_DATA] = {
> > +		.base = ROGUE_PDSCODEDATA_HEAP_BASE,
> > +		.size = ROGUE_PDSCODEDATA_HEAP_SIZE,
> > +		.flags = 0,
> > +		.page_size_log2 = PVR_DEVICE_PAGE_SHIFT,
> > +	},
> > +	[DRM_PVR_HEAP_USC_CODE] = {
> > +		.base = ROGUE_USCCODE_HEAP_BASE,
> > +		.size = ROGUE_USCCODE_HEAP_SIZE,
> > +		.flags = 0,
> > +		.page_size_log2 = PVR_DEVICE_PAGE_SHIFT,
> > +	},
> > +	[DRM_PVR_HEAP_RGNHDR] = {
> > +		.base = ROGUE_RGNHDR_HEAP_BASE,
> > +		.size = ROGUE_RGNHDR_HEAP_SIZE,
> > +		.flags = 0,
> > +		.page_size_log2 = PVR_DEVICE_PAGE_SHIFT,
> > +	},
> > +	[DRM_PVR_HEAP_VIS_TEST] = {
> > +		.base = ROGUE_VISTEST_HEAP_BASE,
> > +		.size = ROGUE_VISTEST_HEAP_SIZE,
> > +		.flags = 0,
> > +		.page_size_log2 = PVR_DEVICE_PAGE_SHIFT,
> > +	},
> > +	[DRM_PVR_HEAP_TRANSFER_FRAG] = {
> > +		.base = ROGUE_TRANSFER_FRAG_HEAP_BASE,
> > +		.size = ROGUE_TRANSFER_FRAG_HEAP_SIZE,
> > +		.flags = 0,
> > +		.page_size_log2 = PVR_DEVICE_PAGE_SHIFT,
> > +	},
> > +};
> > +
> > +int
> > +pvr_static_data_areas_get(const struct pvr_device *pvr_dev,
> > +			  struct drm_pvr_ioctl_dev_query_args *args)
> > +{
> > +	struct drm_pvr_dev_query_static_data_areas query = {0};
> > +	int err;
> > +
> > +	if (!args->pointer) {
> > +		args->size = sizeof(struct drm_pvr_dev_query_static_data_areas);
> > +		return 0;
> > +	}
> > +
> > +	err = PVR_UOBJ_GET(query, args->size, args->pointer);
> > +	if (err < 0)
> > +		return err;
> > +
> > +	if (!query.static_data_areas.array) {
> > +		query.static_data_areas.count = ARRAY_SIZE(static_data_areas);
> > +		query.static_data_areas.stride = sizeof(struct drm_pvr_static_data_area);
> > +		goto copy_out;
> > +	}
> > +
> > +	if (query.static_data_areas.count > ARRAY_SIZE(static_data_areas))
> > +		query.static_data_areas.count = ARRAY_SIZE(static_data_areas);
> > +
> > +	err = PVR_UOBJ_SET_ARRAY(&query.static_data_areas, static_data_areas);
> > +	if (err < 0)
> > +		return err;
> > +
> > +copy_out:
> > +	err = PVR_UOBJ_SET(args->pointer, args->size, query);
> > +	if (err < 0)
> > +		return err;
> > +
> > +	args->size = sizeof(query);
> > +	return 0;
> > +}
> > +
> > +int
> > +pvr_heap_info_get(const struct pvr_device *pvr_dev,
> > +		  struct drm_pvr_ioctl_dev_query_args *args)
> > +{
> > +	struct drm_pvr_dev_query_heap_info query = {0};
> > +	u64 dest;
> > +	int err;
> > +
> > +	if (!args->pointer) {
> > +		args->size = sizeof(struct drm_pvr_dev_query_heap_info);
> > +		return 0;
> > +	}
> > +
> > +	err = PVR_UOBJ_GET(query, args->size, args->pointer);
> > +	if (err < 0)
> > +		return err;
> > +
> > +	if (!query.heaps.array) {
> > +		query.heaps.count = ARRAY_SIZE(pvr_heaps);
> > +		query.heaps.stride = sizeof(struct drm_pvr_heap);
> > +		goto copy_out;
> > +	}
> > +
> > +	if (query.heaps.count > ARRAY_SIZE(pvr_heaps))
> > +		query.heaps.count = ARRAY_SIZE(pvr_heaps);
> > +
> > +	/* Region header heap is only present if BRN63142 is present. */
> > +	dest = query.heaps.array;
> > +	for (size_t i = 0; i < query.heaps.count; i++) {
> > +		struct drm_pvr_heap heap = pvr_heaps[i];
> > +
> > +		if (i == DRM_PVR_HEAP_RGNHDR && !PVR_HAS_QUIRK(pvr_dev, 63142))
> > +			heap.size = 0;
> > +
> > +		err = PVR_UOBJ_SET(dest, query.heaps.stride, heap);
> > +		if (err < 0)
> > +			return err;
> > +
> > +		dest += query.heaps.stride;
> > +	}
> > +
> > +copy_out:
> > +	err = PVR_UOBJ_SET(args->pointer, args->size, query);
> > +	if (err < 0)
> > +		return err;
> > +
> > +	args->size = sizeof(query);
> > +	return 0;
> > +}
> > +
> > +/**
> > + * pvr_heap_contains_range() - Determine if a given heap contains the specified
> > + *                             device-virtual address range.
> > + * @pvr_heap: Target heap.
> > + * @start: Inclusive start of the target range.
> > + * @end: Inclusive end of the target range.
> > + *
> > + * It is an error to call this function with values of @start and @end that do
> > + * not satisfy the condition @start <= @end.
> > + */
> > +static __always_inline bool
> > +pvr_heap_contains_range(const struct drm_pvr_heap *pvr_heap, u64 start, u64 end)
> > +{
> > +	return pvr_heap->base <= start && end < pvr_heap->base + pvr_heap->size;
> > +}
> > +
> > +/**
> > + * pvr_find_heap_containing() - Find a heap which contains the specified
> > + *                              device-virtual address range.
> > + * @pvr_dev: Target PowerVR device.
> > + * @start: Start of the target range.
> > + * @size: Size of the target range.
> > + *
> > + * Return:
> > + *  * A pointer to a constant instance of struct drm_pvr_heap representing the
> > + *    heap containing the entire range specified by @start and @size on
> > + *    success, or
> > + *  * %NULL if no such heap exists.
> > + */
> > +const struct drm_pvr_heap *
> > +pvr_find_heap_containing(struct pvr_device *pvr_dev, u64 start, u64 size)
> > +{
> > +	u64 end;
> > +
> > +	if (check_add_overflow(start, size - 1, &end))
> > +		return NULL;
> > +
> > +	/*
> > +	 * There are no guarantees about the order of address ranges in
> > +	 * &pvr_heaps, so iterate over the entire array for a heap whose
> > +	 * range completely encompasses the given range.
> > +	 */
> > +	for (u32 heap_id = 0; heap_id < ARRAY_SIZE(pvr_heaps); heap_id++) {
> > +		/* Filter heaps that present only with an associated quirk */
> > +		if (heap_id == DRM_PVR_HEAP_RGNHDR &&
> > +		    !PVR_HAS_QUIRK(pvr_dev, 63142)) {
> > +			continue;
> > +		}
> > +
> > +		if (pvr_heap_contains_range(&pvr_heaps[heap_id], start, end))
> > +			return &pvr_heaps[heap_id];
> > +	}
> > +
> > +	return NULL;
> > +}
> > +
> > +/**
> > + * pvr_vm_find_gem_object() - Look up a buffer object from a given
> > + *                            device-virtual address.
> > + * @vm_ctx: [IN] Target VM context.
> > + * @device_addr: [IN] Virtual device address at the start of the required
> > + *               object.
> > + * @mapped_offset_out: [OUT] Pointer to location to write offset of the start
> > + *                     of the mapped region within the buffer object. May be
> > + *                     %NULL if this information is not required.
> > + * @mapped_size_out: [OUT] Pointer to location to write size of the mapped
> > + *                   region. May be %NULL if this information is not required.
> > + *
> > + * If successful, a reference will be taken on the buffer object. The caller
> > + * must drop the reference with pvr_gem_object_put().
> > + *
> > + * Return:
> > + *  * The PowerVR buffer object mapped at @device_addr if one exists, or
> > + *  * %NULL otherwise.
> > + */
> > +struct pvr_gem_object *
> > +pvr_vm_find_gem_object(struct pvr_vm_context *vm_ctx, u64 device_addr,
> > +		       u64 *mapped_offset_out, u64 *mapped_size_out)
> > +{
> > +	struct pvr_gem_object *pvr_obj;
> > +	struct drm_gpuva *va;
> > +
> > +	mutex_lock(&vm_ctx->lock);
> > +
> > +	va = drm_gpuva_find_first(&vm_ctx->gpuva_mgr, device_addr, 1);
> > +	if (!va)
> > +		goto err_unlock;
> > +
> > +	pvr_obj = gem_to_pvr_gem(va->gem.obj);
> > +	pvr_gem_object_get(pvr_obj);
> > +
> > +	if (mapped_offset_out)
> > +		*mapped_offset_out = va->gem.offset;
> > +	if (mapped_size_out)
> > +		*mapped_size_out = va->va.range;
> > +
> > +	mutex_unlock(&vm_ctx->lock);
> > +
> > +	return pvr_obj;
> > +
> > +err_unlock:
> > +	mutex_unlock(&vm_ctx->lock);
> > +
> > +	return NULL;
> > +}
> > +
> > +/**
> > + * pvr_vm_get_fw_mem_context: Get object representing firmware memory context
> > + * @vm_ctx: Target VM context.
> > + *
> > + * Returns:
> > + *  * FW object representing firmware memory context, or
> > + *  * %NULL if this VM context does not have a firmware memory context.
> > + */
> > +struct pvr_fw_object *
> > +pvr_vm_get_fw_mem_context(struct pvr_vm_context *vm_ctx)
> > +{
> > +	return vm_ctx->fw_mem_ctx_obj;
> > +}
> > diff --git a/drivers/gpu/drm/imagination/pvr_vm.h b/drivers/gpu/drm/imagination/pvr_vm.h
> > new file mode 100644
> > index 000000000000..b98bc3981807
> > --- /dev/null
> > +++ b/drivers/gpu/drm/imagination/pvr_vm.h
> > @@ -0,0 +1,60 @@
> > +/* SPDX-License-Identifier: GPL-2.0 OR MIT */
> > +/* Copyright (c) 2023 Imagination Technologies Ltd. */
> > +
> > +#ifndef PVR_VM_H
> > +#define PVR_VM_H
> > +
> > +#include "pvr_rogue_mmu_defs.h"
> > +
> > +#include <uapi/drm/pvr_drm.h>
> > +
> > +#include <linux/types.h>
> > +
> > +/* Forward declaration from "pvr_device.h" */
> > +struct pvr_device;
> > +struct pvr_file;
> > +
> > +/* Forward declaration from "pvr_gem.h" */
> > +struct pvr_gem_object;
> > +
> > +/* Forward declaration from "pvr_vm.c" */
> > +struct pvr_vm_context;
> > +
> > +/* Forward declaration from <uapi/drm/pvr_drm.h> */
> > +struct drm_pvr_ioctl_get_heap_info_args;
> > +
> > +/* Functions defined in pvr_vm.c */
> > +
> > +bool pvr_device_addr_is_valid(u64 device_addr);
> > +bool pvr_device_addr_and_size_are_valid(u64 device_addr, u64 size);
> > +
> > +struct pvr_vm_context *pvr_vm_create_context(struct pvr_device *pvr_dev,
> > +					     bool is_userspace_context);
> > +
> > +int pvr_vm_map(struct pvr_vm_context *vm_ctx,
> > +	       struct pvr_gem_object *pvr_obj, u64 pvr_obj_offset,
> > +	       u64 device_addr, u64 size);
> > +int pvr_vm_unmap(struct pvr_vm_context *vm_ctx, u64 device_addr, u64 size);
> > +
> > +dma_addr_t pvr_vm_get_page_table_root_addr(struct pvr_vm_context *vm_ctx);
> > +
> > +int pvr_static_data_areas_get(const struct pvr_device *pvr_dev,
> > +			      struct drm_pvr_ioctl_dev_query_args *args);
> > +int pvr_heap_info_get(const struct pvr_device *pvr_dev,
> > +		      struct drm_pvr_ioctl_dev_query_args *args);
> > +const struct drm_pvr_heap *pvr_find_heap_containing(struct pvr_device *pvr_dev,
> > +						    u64 addr, u64 size);
> > +
> > +struct pvr_gem_object *pvr_vm_find_gem_object(struct pvr_vm_context *vm_ctx,
> > +					      u64 device_addr,
> > +					      u64 *mapped_offset_out,
> > +					      u64 *mapped_size_out);
> > +
> > +struct pvr_fw_object *
> > +pvr_vm_get_fw_mem_context(struct pvr_vm_context *vm_ctx);
> > +
> > +struct pvr_vm_context *pvr_vm_context_lookup(struct pvr_file *pvr_file, u32 handle);
> > +bool pvr_vm_context_put(struct pvr_vm_context *vm_ctx);
> > +void pvr_destroy_vm_contexts_for_file(struct pvr_file *pvr_file);
> > +
> > +#endif /* PVR_VM_H */
> > -- 
> > 2.41.0
> > 

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [EXTERNAL] Re: [PATCH v5 08/17] drm/imagination: Add GEM and VM related code
@ 2023-08-21  8:30       ` Donald Robson
  0 siblings, 0 replies; 77+ messages in thread
From: Donald Robson @ 2023-08-21  8:30 UTC (permalink / raw)
  To: dakr, Sarah Walker
  Cc: luben.tuikov, christian.koenig, tzimmermann, mripard, hns,
	daniel, afd, Matt Coster, maarten.lankhorst, boris.brezillon,
	matthew.brost, linux-kernel, dri-devel, Frank Binns, airlied,
	faith.ekstrand

Hi Danilo,
Thanks for the feedback.  On the subject of locking, I have dma_resv locking
in another branch where I'm trying to enable bind queues, but I didn't think
I needed locking for the single, synchronous operations seen here.  Would a 
mutex on the gem object wrapper suffice?
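Something along these lines is roughly what I had in mind - an untested
sketch with a hypothetical per-object mutex in our GEM wrapper, not code
that exists today:

	/* pvr_gem.h - hypothetical addition to the wrapper */
	struct pvr_gem_object {
		/* ...existing fields... */
		struct mutex gpuva_lock;	/* protects this object's gpuva list */
	};

	/* pvr_vm_gpuva_map(), with the new lock held around the link */
	mutex_lock(&pvr_gem->gpuva_lock);
	drm_gpuva_link(ctx->new_va);
	mutex_unlock(&pvr_gem->gpuva_lock);
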
Thanks,
Donald
On Fri, 2023-08-18 at 17:30 +0200, Danilo Krummrich wrote:
> 
> Hi Sarah,
> 
> On Wed, Aug 16, 2023 at 09:25:23AM +0100, Sarah Walker wrote:
> > Add a GEM implementation based on drm_gem_shmem, and support code for the
> > PowerVR GPU MMU. The GPU VA manager is used for address space management.
> > 
> > Changes since v4:
> > - Correct sync function in vmap/vunmap function documentation
> > - Update for upstream GPU VA manager
> > - Fix missing frees when unmapping drm_gpuva objects
> > - Always zero GEM BOs on creation
> > 
> > Changes since v3:
> > - Split MMU and VM code
> > - Register page table allocations with kmemleak
> > - Use drm_dev_{enter,exit}
> > 
> > Changes since v2:
> > - Use GPU VA manager
> > - Use drm_gem_shmem
> > 
> > Co-developed-by: Matt Coster <matt.coster@imgtec.com>
> > Signed-off-by: Matt Coster <matt.coster@imgtec.com>
> > Co-developed-by: Donald Robson <donald.robson@imgtec.com>
> > Signed-off-by: Donald Robson <donald.robson@imgtec.com>
> > Signed-off-by: Sarah Walker <sarah.walker@imgtec.com>
> > ---
> >  drivers/gpu/drm/imagination/Makefile     |    5 +-
> >  drivers/gpu/drm/imagination/pvr_device.c |   23 +-
> >  drivers/gpu/drm/imagination/pvr_device.h |   18 +
> >  drivers/gpu/drm/imagination/pvr_drv.c    |  302 ++-
> >  drivers/gpu/drm/imagination/pvr_gem.c    |  396 ++++
> >  drivers/gpu/drm/imagination/pvr_gem.h    |  177 ++
> >  drivers/gpu/drm/imagination/pvr_mmu.c    | 2487 ++++++++++++++++++++++
> >  drivers/gpu/drm/imagination/pvr_mmu.h    |  108 +
> >  drivers/gpu/drm/imagination/pvr_vm.c     |  890 ++++++++
> >  drivers/gpu/drm/imagination/pvr_vm.h     |   60 +
> >  10 files changed, 4455 insertions(+), 11 deletions(-)
> >  create mode 100644 drivers/gpu/drm/imagination/pvr_gem.c
> >  create mode 100644 drivers/gpu/drm/imagination/pvr_gem.h
> >  create mode 100644 drivers/gpu/drm/imagination/pvr_mmu.c
> >  create mode 100644 drivers/gpu/drm/imagination/pvr_mmu.h
> >  create mode 100644 drivers/gpu/drm/imagination/pvr_vm.c
> >  create mode 100644 drivers/gpu/drm/imagination/pvr_vm.h
> 
> <snip>
> 
> > diff --git a/drivers/gpu/drm/imagination/pvr_vm.c b/drivers/gpu/drm/imagination/pvr_vm.c
> > new file mode 100644
> > index 000000000000..616fad3a3325
> > --- /dev/null
> > +++ b/drivers/gpu/drm/imagination/pvr_vm.c
> > @@ -0,0 +1,890 @@
> > +// SPDX-License-Identifier: GPL-2.0 OR MIT
> > +/* Copyright (c) 2023 Imagination Technologies Ltd. */
> > +
> > +#include "pvr_vm.h"
> > +
> > +#include "pvr_device.h"
> > +#include "pvr_drv.h"
> > +#include "pvr_gem.h"
> > +#include "pvr_mmu.h"
> > +#include "pvr_rogue_fwif.h"
> > +#include "pvr_rogue_heap_config.h"
> > +
> > +#include <drm/drm_gem.h>
> > +#include <drm/drm_gpuva_mgr.h>
> > +
> > +#include <linux/container_of.h>
> > +#include <linux/err.h>
> > +#include <linux/errno.h>
> > +#include <linux/gfp_types.h>
> > +#include <linux/kref.h>
> > +#include <linux/mutex.h>
> > +#include <linux/stddef.h>
> > +
> > +/**
> > + * DOC: Memory context
> > + *
> > + * This is the "top level" datatype in the VM code. It's exposed in the public
> > + * API as an opaque handle.
> > + */
> > +
> > +/**
> > + * struct pvr_vm_context - Context type which encapsulates an entire page table
> > + * tree structure.
> > + * @pvr_dev: The PowerVR device to which this context is bound.
> > + *
> > + * This binding is immutable for the life of the context.
> > + * @mmu_ctx: The context for binding to physical memory.
> > + * @gpuva_mgr: GPUVA manager object associated with this context.
> > + * @lock: Global lock on this entire structure of page tables.
> > + * @fw_mem_ctx_obj: Firmware object representing firmware memory context.
> > + * @ref_count: Reference count of object.
> > + */
> > +struct pvr_vm_context {
> > +	struct pvr_device *pvr_dev;
> > +	struct pvr_mmu_context *mmu_ctx;
> > +	struct drm_gpuva_manager gpuva_mgr;
> > +	struct mutex lock;
> > +	struct pvr_fw_object *fw_mem_ctx_obj;
> > +	struct kref ref_count;
> > +};
> > +
> > +/**
> > + * pvr_vm_get_page_table_root_addr() - Get the DMA address of the root of the
> > + *                                     page table structure behind a VM context.
> > + * @vm_ctx: Target VM context.
> > + */
> > +dma_addr_t pvr_vm_get_page_table_root_addr(struct pvr_vm_context *vm_ctx)
> > +{
> > +	return pvr_mmu_get_root_table_dma_addr(vm_ctx->mmu_ctx);
> > +}
> > +
> > +/**
> > + * DOC: Memory mappings
> > + */
> > +
> > +/**
> > + * pvr_vm_gpuva_mapping_init() - Setup a mapping object with the specified
> > + * parameters ready for mapping using pvr_vm_gpuva_mapping_map().
> > + * @va: Pointer to drm_gpuva mapping object.
> > + * @device_addr: Device-virtual address at the start of the mapping.
> > + * @size: Size of the desired mapping.
> > + * @pvr_obj: Target PowerVR memory object.
> > + * @pvr_obj_offset: Offset into @pvr_obj to begin mapping from.
> > + *
> > + * Some parameters of this function are unchecked. It is therefore the callers
> > + * responsibility to ensure certain constraints are met. Specifically:
> > + *
> > + * * @pvr_obj_offset must be less than the size of @pvr_obj,
> > + * * The sum of @pvr_obj_offset and @size must be less than or equal to the
> > + *   size of @pvr_obj,
> > + * * The range specified by @pvr_obj_offset and @size (the "CPU range") must be
> > + *   CPU page-aligned both in start position and size, and
> > + * * The range specified by @device_addr and @size (the "device range") must be
> > + *   device page-aligned both in start position and size.
> > + *
> > + * Furthermore, it is up to the caller to make sure that a reference to @pvr_obj
> > + * is taken prior to mapping @va with the drm_gpuva_manager.
> > + */
> > +static void
> > +pvr_vm_gpuva_mapping_init(struct drm_gpuva *va, u64 device_addr, u64 size,
> > +			  struct pvr_gem_object *pvr_obj, u64 pvr_obj_offset)
> 
> There's already drm_gpuva_init() doing the same thing.
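> 
> E.g. (untested; assuming the same argument order as your helper):
> 
> 	drm_gpuva_init(va, device_addr, size, gem_from_pvr_gem(pvr_obj),
> 		       pvr_obj_offset);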
> 
> > +{
> > +	va->va.addr = device_addr;
> > +	va->va.range = size;
> > +	va->gem.obj = gem_from_pvr_gem(pvr_obj);
> > +	va->gem.offset = pvr_obj_offset;
> > +}
> > +
> > +struct pvr_vm_gpuva_op_ctx {
> > +	struct pvr_vm_context *vm_ctx;
> > +	struct pvr_mmu_op_context *mmu_op_ctx;
> > +	struct drm_gpuva *new_va, *prev_va, *next_va;
> > +};
> > +
> > +/**
> > + * pvr_vm_gpuva_map() - Insert a mapping into a memory context.
> > + * @op: gpuva op containing the remap details.
> > + * @op_ctx: Operation context.
> > + *
> > + * Context: Called by drm_gpuva_sm_map following a successful mapping while
> > + * @op_ctx.vm_ctx mutex is held.
> > + *
> > + * Return:
> > + *  * 0 on success, or
> > + *  * Any error returned by pvr_mmu_map().
> > + */
> > +static int
> > +pvr_vm_gpuva_map(struct drm_gpuva_op *op, void *op_ctx)
> > +{
> > +	struct pvr_gem_object *pvr_gem = gem_to_pvr_gem(op->map.gem.obj);
> > +	struct pvr_vm_gpuva_op_ctx *ctx = op_ctx;
> > +	int err;
> > +
> > +	if ((op->map.gem.offset | op->map.va.range) & ~PVR_DEVICE_PAGE_MASK)
> > +		return -EINVAL;
> > +
> > +	err = pvr_mmu_map(ctx->mmu_op_ctx, op->map.va.range, pvr_gem->flags,
> > +			  op->map.va.addr);
> > +	if (err)
> > +		return err;
> > +
> > +	pvr_vm_gpuva_mapping_init(ctx->new_va, op->map.va.addr,
> > +				  op->map.va.range, pvr_gem, op->map.gem.offset);
> > +
> > +	drm_gpuva_map(&ctx->vm_ctx->gpuva_mgr, ctx->new_va, &op->map);
> 
> drm_gpuva_map() does use drm_gpuva_init_from_op() internally, hence the extra
> call to pvr_vm_gpuva_mapping_init() should be unnecessary.
> 
> > +	drm_gpuva_link(ctx->new_va);
> 
> How is this protected?
> 
> drm_gpuva_link() and drm_gpuva_unlink() require either the dma_resv lock of the
> corresponding GEM object being held or, alternatively, the driver specific lock
> indicated via drm_gem_gpuva_set_lock().
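> 
> For instance (sketch only, error handling omitted), with the dma-resv lock:
> 
> 	dma_resv_lock(op->map.gem.obj->resv, NULL);
> 	drm_gpuva_link(ctx->new_va);
> 	dma_resv_unlock(op->map.gem.obj->resv);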
> 
> > +	ctx->new_va = NULL;
> > +
> > +	/*
> > +	 * Increment the refcount on the underlying physical memory resource
> > +	 * to prevent de-allocation while the mapping exists.
> > +	 */
> > +	pvr_gem_object_get(pvr_gem);
> > +
> > +	return 0;
> > +}
> > +
> > +/**
> > + * pvr_vm_gpuva_unmap() - Remove a mapping from a memory context.
> > + * @op: gpuva op containing the unmap details.
> > + * @op_ctx: Operation context.
> > + *
> > + * Context: Called by drm_gpuva_sm_unmap following a successful unmapping while
> > + * @op_ctx.vm_ctx mutex is held.
> > + *
> > + * Return:
> > + *  * 0 on success, or
> > + *  * Any error returned by pvr_mmu_unmap().
> > + */
> > +static int
> > +pvr_vm_gpuva_unmap(struct drm_gpuva_op *op, void *op_ctx)
> > +{
> > +	struct pvr_gem_object *pvr_gem = gem_to_pvr_gem(op->unmap.va->gem.obj);
> > +	struct pvr_vm_gpuva_op_ctx *ctx = op_ctx;
> > +
> > +	int err = pvr_mmu_unmap(ctx->mmu_op_ctx, op->unmap.va->va.addr,
> > +				op->unmap.va->va.range);
> > +
> > +	if (err)
> > +		return err;
> > +
> > +	drm_gpuva_unmap(&op->unmap);
> > +	drm_gpuva_unlink(op->unmap.va);
> > +	kfree(op->unmap.va);
> > +
> > +	pvr_gem_object_put(pvr_gem);
> > +
> > +	return 0;
> > +}
> > +
> > +/**
> > + * pvr_vm_gpuva_remap() - Remap a mapping within a memory context.
> > + * @op: gpuva op containing the remap details.
> > + * @op_ctx: Operation context.
> > + *
> > + * Context: Called by either drm_gpuva_sm_map or drm_gpuva_sm_unmap when a
> > + * mapping or unmapping operation causes a region to be split. The
> > + * @op_ctx.vm_ctx mutex is held.
> > + *
> > + * Return:
> > + *  * 0 on success, or
> > + *  * Any error returned by pvr_mmu_unmap().
> > + */
> > +static int
> > +pvr_vm_gpuva_remap(struct drm_gpuva_op *op, void *op_ctx)
> > +{
> > +	struct pvr_vm_gpuva_op_ctx *ctx = op_ctx;
> > +
> > +	if (op->remap.unmap) {
> 
> You can omit this check; remap operations always contain a valid unmap
> operation. However, you might want to know whether the remap operation was
> generated due to a call to drm_gpuva_sm_map() or drm_gpuva_sm_unmap(), since for
> the latter you might want to free page table structures.
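> 
> One way to do that (just a sketch) is to record the origin in your existing
> op_ctx before calling into drm_gpuva_sm_map() / drm_gpuva_sm_unmap():
> 
> 	struct pvr_vm_gpuva_op_ctx {
> 		struct pvr_vm_context *vm_ctx;
> 		struct pvr_mmu_op_context *mmu_op_ctx;
> 		struct drm_gpuva *new_va, *prev_va, *next_va;
> 		bool from_unmap;	/* hypothetical: set by pvr_vm_unmap() */
> 	};
> 
> and then free page table backing in pvr_vm_gpuva_remap() only when
> ctx->from_unmap is set.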
> 
> > +		const u64 va_start = op->remap.prev ?
> > +				     op->remap.prev->va.addr + op->remap.prev->va.range :
> > +				     op->remap.unmap->va->va.addr;
> > +		const u64 va_end = op->remap.next ?
> > +				   op->remap.next->va.addr :
> > +				   op->remap.unmap->va->va.addr + op->remap.unmap->va->va.range;
> 
> This seems to be a common calculation for drivers, so it is probably worth
> coming up with a helper, something like
> drm_gpuva_op_unmap_range(struct drm_gpuva_op *op, u64 *addr, u64 *range).
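> 
> Roughly (untested, just lifting the calculation above out of the driver):
> 
> 	void drm_gpuva_op_unmap_range(struct drm_gpuva_op *op, u64 *addr, u64 *range)
> 	{
> 		struct drm_gpuva *va = op->remap.unmap->va;
> 		const u64 start = op->remap.prev ?
> 				  op->remap.prev->va.addr + op->remap.prev->va.range :
> 				  va->va.addr;
> 		const u64 end = op->remap.next ?
> 				op->remap.next->va.addr :
> 				va->va.addr + va->va.range;
> 
> 		*addr = start;
> 		*range = end - start;
> 	}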
> 
> > +
> > +		int err = pvr_mmu_unmap(ctx->mmu_op_ctx, va_start,
> > +					va_end - va_start);
> > +
> > +		if (err)
> > +			return err;
> > +	}
> > +
> > +	if (op->remap.prev)
> > +		pvr_vm_gpuva_mapping_init(ctx->prev_va, op->remap.prev->va.addr,
> > +					  op->remap.prev->va.range,
> > +					  gem_to_pvr_gem(op->remap.prev->gem.obj),
> > +					  op->remap.prev->gem.offset);
> > +
> > +	if (op->remap.next)
> > +		pvr_vm_gpuva_mapping_init(ctx->next_va, op->remap.next->va.addr,
> > +					  op->remap.next->va.range,
> > +					  gem_to_pvr_gem(op->remap.next->gem.obj),
> > +					  op->remap.next->gem.offset);
> > +
> > +	/* No actual remap required: the page table tree depth is fixed to 3,
> > +	 * and we use 4k page table entries only for now.
> > +	 */
> > +	drm_gpuva_remap(ctx->prev_va, ctx->next_va, &op->remap);
> 
> As above, drm_gpuva_remap() does use drm_gpuva_init_from_op() internally, hence
> the extra call to pvr_vm_gpuva_mapping_init() should be unnecessary.
> 
> > +
> > +	if (op->remap.prev) {
> > +		pvr_gem_object_get(gem_to_pvr_gem(ctx->prev_va->gem.obj));
> > +		drm_gpuva_link(ctx->prev_va);
> > +		ctx->prev_va = NULL;
> > +	}
> > +
> > +	if (op->remap.next) {
> > +		pvr_gem_object_get(gem_to_pvr_gem(ctx->next_va->gem.obj));
> > +		drm_gpuva_link(ctx->next_va);
> > +		ctx->next_va = NULL;
> > +	}
> > +
> > +	if (op->remap.unmap) {
> 
> As above, no need for this check.
> 
> - Danilo
> 
> > +		struct pvr_gem_object *pvr_gem = gem_to_pvr_gem(op->remap.unmap->va->gem.obj);
> > +
> > +		drm_gpuva_unlink(op->remap.unmap->va);
> > +		kfree(op->remap.unmap->va);
> > +
> > +		pvr_gem_object_put(pvr_gem);
> > +	}
> > +
> > +	return 0;
> > +}
> > +
> > +/*
> > + * Public API
> > + *
> > + * For an overview of these functions, see *DOC: Public API* in "pvr_vm.h".
> > + */
> > +
> > +/**
> > + * pvr_device_addr_is_valid() - Tests whether a device-virtual address
> > + *                              is valid.
> > + * @device_addr: Virtual device address to test.
> > + *
> > + * Return:
> > + *  * %true if @device_addr is within the valid range for a device page
> > + *    table and is aligned to the device page size, or
> > + *  * %false otherwise.
> > + */
> > +bool
> > +pvr_device_addr_is_valid(u64 device_addr)
> > +{
> > +	return (device_addr & ~PVR_PAGE_TABLE_ADDR_MASK) == 0 &&
> > +	       (device_addr & ~PVR_DEVICE_PAGE_MASK) == 0;
> > +}
> > +
> > +/**
> > + * pvr_device_addr_and_size_are_valid() - Tests whether a device-virtual
> > + * address and associated size are both valid.
> > + * @device_addr: Virtual device address to test.
> > + * @size: Size of the range based at @device_addr to test.
> > + *
> > + * Calling pvr_device_addr_is_valid() twice (once on @size, and again on
> > + * @device_addr + @size) to verify a device-virtual address range initially
> > + * seems intuitive, but it produces a false-negative when the address range
> > + * is right at the end of device-virtual address space.
> > + *
> > + * This function catches that corner case, as well as checking that
> > + * @size is non-zero.
> > + *
> > + * Return:
> > + *  * %true if @device_addr is device page aligned; @size is device page
> > + *    aligned; the range specified by @device_addr and @size is within the
> > + *    bounds of the device-virtual address space, and @size is non-zero, or
> > + *  * %false otherwise.
> > + */
> > +bool
> > +pvr_device_addr_and_size_are_valid(u64 device_addr, u64 size)
> > +{
> > +	return pvr_device_addr_is_valid(device_addr) &&
> > +	       size != 0 && (size & ~PVR_DEVICE_PAGE_MASK) == 0 &&
> > +	       (device_addr + size <= PVR_PAGE_TABLE_ADDR_SPACE_SIZE);
> > +}
> > +
> > +static const struct drm_gpuva_fn_ops pvr_vm_gpuva_ops = {
> > +	.sm_step_map = pvr_vm_gpuva_map,
> > +	.sm_step_remap = pvr_vm_gpuva_remap,
> > +	.sm_step_unmap = pvr_vm_gpuva_unmap,
> > +};
> > +
> > +/**
> > + * pvr_vm_create_context() - Create a new VM context.
> > + * @pvr_dev: Target PowerVR device.
> > + * @is_userspace_context: %true if this context is for userspace. This will
> > + *                        create a firmware memory context for the VM context
> > + *                        and disable warnings when tearing down mappings.
> > + *
> > + * Return:
> > + *  * A handle to the newly-minted VM context on success,
> > + *  * -%EINVAL if the feature "virtual address space bits" on @pvr_dev is
> > + *    missing or has an unsupported value,
> > + *  * -%ENOMEM if allocation of the structure behind the opaque handle fails,
> > + *    or
> > + *  * Any error encountered while setting up internal structures.
> > + */
> > +struct pvr_vm_context *
> > +pvr_vm_create_context(struct pvr_device *pvr_dev, bool is_userspace_context)
> > +{
> > +	struct drm_device *drm_dev = from_pvr_device(pvr_dev);
> > +
> > +	struct pvr_vm_context *vm_ctx;
> > +	u16 device_addr_bits;
> > +
> > +	int err;
> > +
> > +	err = PVR_FEATURE_VALUE(pvr_dev, virtual_address_space_bits,
> > +				&device_addr_bits);
> > +	if (err) {
> > +		drm_err(drm_dev,
> > +			"Failed to get device virtual address space bits\n");
> > +		return ERR_PTR(err);
> > +	}
> > +
> > +	if (device_addr_bits != PVR_PAGE_TABLE_ADDR_BITS) {
> > +		drm_err(drm_dev,
> > +			"Device has unsupported virtual address space size\n");
> > +		return ERR_PTR(-EINVAL);
> > +	}
> > +
> > +	vm_ctx = kzalloc(sizeof(*vm_ctx), GFP_KERNEL);
> > +	if (!vm_ctx)
> > +		return ERR_PTR(-ENOMEM);
> > +
> > +	vm_ctx->pvr_dev = pvr_dev;
> > +	kref_init(&vm_ctx->ref_count);
> > +	mutex_init(&vm_ctx->lock);
> > +
> > +	drm_gpuva_manager_init(&vm_ctx->gpuva_mgr,
> > +			       is_userspace_context ? "PowerVR-user-VM" : "PowerVR-FW-VM",
> > +			       0, 1ULL << device_addr_bits, 0, 0, &pvr_vm_gpuva_ops);
> > +
> > +	vm_ctx->mmu_ctx = pvr_mmu_context_create(pvr_dev);
> > +	err = PTR_ERR_OR_ZERO(vm_ctx->mmu_ctx);
> > +	if (err) {
> > +		vm_ctx->mmu_ctx = NULL;
> > +		goto err_put_ctx;
> > +	}
> > +
> > +	if (is_userspace_context) {
> > +		/* TODO: Create FW mem context */
> > +		err = -ENODEV;
> > +		goto err_put_ctx;
> > +	}
> > +
> > +	return vm_ctx;
> > +
> > +err_put_ctx:
> > +	pvr_vm_context_put(vm_ctx);
> > +
> > +	return ERR_PTR(err);
> > +}
> > +
> > +/**
> > + * pvr_vm_context_release() - Teardown a VM context.
> > + * @ref_count: Pointer to reference counter of the VM context.
> > + *
> > + * This function ensures that no mappings are left dangling by unmapping them
> > + * all in order of ascending device-virtual address.
> > + */
> > +static void
> > +pvr_vm_context_release(struct kref *ref_count)
> > +{
> > +	struct pvr_vm_context *vm_ctx =
> > +		container_of(ref_count, struct pvr_vm_context, ref_count);
> > +
> > +	/* TODO: Destroy FW mem context */
> > +	WARN_ON(vm_ctx->fw_mem_ctx_obj);
> > +
> > +	WARN_ON(pvr_vm_unmap(vm_ctx, vm_ctx->gpuva_mgr.mm_start,
> > +			     vm_ctx->gpuva_mgr.mm_range));
> > +
> > +	drm_gpuva_manager_destroy(&vm_ctx->gpuva_mgr);
> > +	pvr_mmu_context_destroy(vm_ctx->mmu_ctx);
> > +	mutex_destroy(&vm_ctx->lock);
> > +
> > +	kfree(vm_ctx);
> > +}
> > +
> > +/**
> > + * pvr_vm_context_lookup() - Look up VM context from handle
> > + * @pvr_file: Pointer to pvr_file structure.
> > + * @handle: Object handle.
> > + *
> > + * Takes reference on VM context object. Call pvr_vm_context_put() to release.
> > + *
> > + * Returns:
> > + *  * The requested object on success, or
> > + *  * %NULL on failure (object does not exist in list, or is not a VM context)
> > + */
> > +struct pvr_vm_context *
> > +pvr_vm_context_lookup(struct pvr_file *pvr_file, u32 handle)
> > +{
> > +	struct pvr_vm_context *vm_ctx;
> > +
> > +	xa_lock(&pvr_file->vm_ctx_handles);
> > +	vm_ctx = xa_load(&pvr_file->vm_ctx_handles, handle);
> > +	if (vm_ctx)
> > +		kref_get(&vm_ctx->ref_count);
> > +
> > +	xa_unlock(&pvr_file->vm_ctx_handles);
> > +
> > +	return vm_ctx;
> > +}
> > +
> > +/**
> > + * pvr_vm_context_put() - Release a reference on a VM context
> > + * @vm_ctx: Target VM context.
> > + *
> > + * Returns:
> > + *  * %true if the VM context was destroyed, or
> > + *  * %false if there are any references still remaining.
> > + */
> > +bool
> > +pvr_vm_context_put(struct pvr_vm_context *vm_ctx)
> > +{
> > +	WARN_ON(!vm_ctx);
> > +
> > +	if (vm_ctx)
> > +		return kref_put(&vm_ctx->ref_count, pvr_vm_context_release);
> > +
> > +	return true;
> > +}
> > +
> > +/**
> > + * pvr_destroy_vm_contexts_for_file: Destroy any VM contexts associated with the
> > + * given file.
> > + * @pvr_file: Pointer to pvr_file structure.
> > + *
> > + * Removes all vm_contexts associated with @pvr_file from the device VM context
> > + * list and drops initial references. vm_contexts will then be destroyed once
> > + * all outstanding references are dropped.
> > + */
> > +void pvr_destroy_vm_contexts_for_file(struct pvr_file *pvr_file)
> > +{
> > +	struct pvr_vm_context *vm_ctx;
> > +	unsigned long handle;
> > +
> > +	xa_for_each(&pvr_file->vm_ctx_handles, handle, vm_ctx) {
> > +		/* vm_ctx is not used here because that would create a race with xa_erase */
> > +		pvr_vm_context_put(xa_erase(&pvr_file->vm_ctx_handles, handle));
> > +	}
> > +}
> > +
> > +/**
> > + * pvr_vm_map() - Map a section of physical memory into a section of device-virtual memory.
> > + * @vm_ctx: Target VM context.
> > + * @pvr_obj: Target PowerVR memory object.
> > + * @pvr_obj_offset: Offset into @pvr_obj to map from.
> > + * @device_addr: Virtual device address at the start of the requested mapping.
> > + * @size: Size of the requested mapping.
> > + *
> > + * No handle is returned to represent the mapping. Instead, callers should
> > + * remember @device_addr and use that as a handle.
> > + *
> > + * Return:
> > + *  * 0 on success,
> > + *  * -%EINVAL if @device_addr is not a valid page-aligned device-virtual
> > + *    address; the region specified by @pvr_obj_offset and @size does not fall
> > + *    entirely within @pvr_obj, or any part of the specified region of @pvr_obj
> > + *    is not device-virtual page-aligned,
> > + *  * Any error encountered while performing internal operations required to
> > + *    create the mapping (returned from pvr_vm_gpuva_map or
> > + *    pvr_vm_gpuva_remap).
> > + */
> > +int
> > +pvr_vm_map(struct pvr_vm_context *vm_ctx,
> > +	   struct pvr_gem_object *pvr_obj, u64 pvr_obj_offset,
> > +	   u64 device_addr, u64 size)
> > +{
> > +	const size_t pvr_obj_size = pvr_gem_object_size(pvr_obj);
> > +	struct pvr_vm_gpuva_op_ctx op_ctx = { .vm_ctx = vm_ctx };
> > +	struct sg_table *sgt;
> > +	int err;
> > +
> > +	if (!pvr_device_addr_and_size_are_valid(device_addr, size) ||
> > +	    pvr_obj_offset & ~PAGE_MASK || size & ~PAGE_MASK ||
> > +	    pvr_obj_offset + size > pvr_obj_size ||
> > +	    pvr_obj_offset > pvr_obj_size) {
> > +		return -EINVAL;
> > +	}
> > +
> > +	op_ctx.new_va = kzalloc(sizeof(*op_ctx.new_va), GFP_KERNEL);
> > +	op_ctx.prev_va = kzalloc(sizeof(*op_ctx.prev_va), GFP_KERNEL);
> > +	op_ctx.next_va = kzalloc(sizeof(*op_ctx.next_va), GFP_KERNEL);
> > +	if (!op_ctx.new_va || !op_ctx.prev_va || !op_ctx.next_va) {
> > +		err = -ENOMEM;
> > +		goto out_free;
> > +	}
> > +
> > +	sgt = pvr_gem_object_get_pages_sgt(pvr_obj);
> > +	err = PTR_ERR_OR_ZERO(sgt);
> > +	if (err)
> > +		goto out_free;
> > +
> > +	op_ctx.mmu_op_ctx = pvr_mmu_op_context_create(vm_ctx->mmu_ctx, sgt,
> > +						      pvr_obj_offset, size);
> > +	err = PTR_ERR_OR_ZERO(op_ctx.mmu_op_ctx);
> > +	if (err) {
> > +		op_ctx.mmu_op_ctx = NULL;
> > +		goto out_mmu_op_ctx_destroy;
> > +	}
> > +
> > +	mutex_lock(&vm_ctx->lock);
> > +	err = drm_gpuva_sm_map(&vm_ctx->gpuva_mgr, &op_ctx, device_addr, size,
> > +			       gem_from_pvr_gem(pvr_obj), pvr_obj_offset);
> > +	mutex_unlock(&vm_ctx->lock);
> > +
> > +out_mmu_op_ctx_destroy:
> > +	pvr_mmu_op_context_destroy(op_ctx.mmu_op_ctx);
> > +
> > +out_free:
> > +	kfree(op_ctx.next_va);
> > +	kfree(op_ctx.prev_va);
> > +	kfree(op_ctx.new_va);
> > +
> > +	return err;
> > +}
> > +
> > +/**
> > + * pvr_vm_unmap() - Unmap an already mapped section of device-virtual memory.
> > + * @vm_ctx: Target VM context.
> > + * @device_addr: Virtual device address at the start of the target mapping.
> > + * @size: Size of the target mapping.
> > + *
> > + * Return:
> > + *  * 0 on success,
> > + *  * -%EINVAL if @device_addr is not a valid page-aligned device-virtual
> > + *    address,
> > + *  * Any error encountered while performing internal operations required to
> > + *    destroy the mapping (returned from pvr_vm_gpuva_unmap or
> > + *    pvr_vm_gpuva_remap).
> > + */
> > +int
> > +pvr_vm_unmap(struct pvr_vm_context *vm_ctx, u64 device_addr, u64 size)
> > +{
> > +	struct pvr_vm_gpuva_op_ctx op_ctx = { .vm_ctx = vm_ctx };
> > +	int err;
> > +
> > +	if (!pvr_device_addr_and_size_are_valid(device_addr, size))
> > +		return -EINVAL;
> > +
> > +	op_ctx.prev_va = kzalloc(sizeof(*op_ctx.prev_va), GFP_KERNEL);
> > +	op_ctx.next_va = kzalloc(sizeof(*op_ctx.next_va), GFP_KERNEL);
> > +	if (!op_ctx.prev_va || !op_ctx.next_va) {
> > +		err = -ENOMEM;
> > +		goto out;
> > +	}
> > +
> > +	op_ctx.mmu_op_ctx =
> > +		pvr_mmu_op_context_create(vm_ctx->mmu_ctx, NULL, 0, 0);
> > +	err = PTR_ERR_OR_ZERO(op_ctx.mmu_op_ctx);
> > +	if (err) {
> > +		op_ctx.mmu_op_ctx = NULL;
> > +		goto out;
> > +	}
> > +
> > +	mutex_lock(&vm_ctx->lock);
> > +	err = drm_gpuva_sm_unmap(&vm_ctx->gpuva_mgr, &op_ctx, device_addr, size);
> > +	mutex_unlock(&vm_ctx->lock);
> > +
> > +out:
> > +	pvr_mmu_op_context_destroy(op_ctx.mmu_op_ctx);
> > +	kfree(op_ctx.next_va);
> > +	kfree(op_ctx.prev_va);
> > +
> > +	return err;
> > +}
> > +
> > +/*
> > + * Static data areas are determined by firmware.
> > + *
> > + * When adding a new static data area you will also need to update the reserved_size field for the
> > + * heap in pvr_heaps[].
> > + */
> > +static const struct drm_pvr_static_data_area static_data_areas[] = {
> > +	{
> > +		.area_usage = DRM_PVR_STATIC_DATA_AREA_FENCE,
> > +		.location_heap_id = DRM_PVR_HEAP_GENERAL,
> > +		.offset = 0,
> > +		.size = 128,
> > +	},
> > +	{
> > +		.area_usage = DRM_PVR_STATIC_DATA_AREA_YUV_CSC,
> > +		.location_heap_id = DRM_PVR_HEAP_GENERAL,
> > +		.offset = 128,
> > +		.size = 1024,
> > +	},
> > +	{
> > +		.area_usage = DRM_PVR_STATIC_DATA_AREA_VDM_SYNC,
> > +		.location_heap_id = DRM_PVR_HEAP_PDS_CODE_DATA,
> > +		.offset = 0,
> > +		.size = 128,
> > +	},
> > +	{
> > +		.area_usage = DRM_PVR_STATIC_DATA_AREA_EOT,
> > +		.location_heap_id = DRM_PVR_HEAP_PDS_CODE_DATA,
> > +		.offset = 128,
> > +		.size = 128,
> > +	},
> > +	{
> > +		.area_usage = DRM_PVR_STATIC_DATA_AREA_VDM_SYNC,
> > +		.location_heap_id = DRM_PVR_HEAP_USC_CODE,
> > +		.offset = 0,
> > +		.size = 128,
> > +	},
> > +};
> > +
> > +#define GET_RESERVED_SIZE(last_offset, last_size) round_up((last_offset) + (last_size), PAGE_SIZE)
> > +
> > +/*
> > + * The values given to GET_RESERVED_SIZE() are taken from the last entry in the corresponding
> > + * static data area for each heap.
> > + */
> > +static const struct drm_pvr_heap pvr_heaps[] = {
> > +	[DRM_PVR_HEAP_GENERAL] = {
> > +		.base = ROGUE_GENERAL_HEAP_BASE,
> > +		.size = ROGUE_GENERAL_HEAP_SIZE,
> > +		.flags = 0,
> > +		.page_size_log2 = PVR_DEVICE_PAGE_SHIFT,
> > +	},
> > +	[DRM_PVR_HEAP_PDS_CODE_DATA] = {
> > +		.base = ROGUE_PDSCODEDATA_HEAP_BASE,
> > +		.size = ROGUE_PDSCODEDATA_HEAP_SIZE,
> > +		.flags = 0,
> > +		.page_size_log2 = PVR_DEVICE_PAGE_SHIFT,
> > +	},
> > +	[DRM_PVR_HEAP_USC_CODE] = {
> > +		.base = ROGUE_USCCODE_HEAP_BASE,
> > +		.size = ROGUE_USCCODE_HEAP_SIZE,
> > +		.flags = 0,
> > +		.page_size_log2 = PVR_DEVICE_PAGE_SHIFT,
> > +	},
> > +	[DRM_PVR_HEAP_RGNHDR] = {
> > +		.base = ROGUE_RGNHDR_HEAP_BASE,
> > +		.size = ROGUE_RGNHDR_HEAP_SIZE,
> > +		.flags = 0,
> > +		.page_size_log2 = PVR_DEVICE_PAGE_SHIFT,
> > +	},
> > +	[DRM_PVR_HEAP_VIS_TEST] = {
> > +		.base = ROGUE_VISTEST_HEAP_BASE,
> > +		.size = ROGUE_VISTEST_HEAP_SIZE,
> > +		.flags = 0,
> > +		.page_size_log2 = PVR_DEVICE_PAGE_SHIFT,
> > +	},
> > +	[DRM_PVR_HEAP_TRANSFER_FRAG] = {
> > +		.base = ROGUE_TRANSFER_FRAG_HEAP_BASE,
> > +		.size = ROGUE_TRANSFER_FRAG_HEAP_SIZE,
> > +		.flags = 0,
> > +		.page_size_log2 = PVR_DEVICE_PAGE_SHIFT,
> > +	},
> > +};
> > +
> > +int
> > +pvr_static_data_areas_get(const struct pvr_device *pvr_dev,
> > +			  struct drm_pvr_ioctl_dev_query_args *args)
> > +{
> > +	struct drm_pvr_dev_query_static_data_areas query = {0};
> > +	int err;
> > +
> > +	if (!args->pointer) {
> > +		args->size = sizeof(struct drm_pvr_dev_query_static_data_areas);
> > +		return 0;
> > +	}
> > +
> > +	err = PVR_UOBJ_GET(query, args->size, args->pointer);
> > +	if (err < 0)
> > +		return err;
> > +
> > +	if (!query.static_data_areas.array) {
> > +		query.static_data_areas.count = ARRAY_SIZE(static_data_areas);
> > +		query.static_data_areas.stride = sizeof(struct drm_pvr_static_data_area);
> > +		goto copy_out;
> > +	}
> > +
> > +	if (query.static_data_areas.count > ARRAY_SIZE(static_data_areas))
> > +		query.static_data_areas.count = ARRAY_SIZE(static_data_areas);
> > +
> > +	err = PVR_UOBJ_SET_ARRAY(&query.static_data_areas, static_data_areas);
> > +	if (err < 0)
> > +		return err;
> > +
> > +copy_out:
> > +	err = PVR_UOBJ_SET(args->pointer, args->size, query);
> > +	if (err < 0)
> > +		return err;
> > +
> > +	args->size = sizeof(query);
> > +	return 0;
> > +}
> > +
> > +int
> > +pvr_heap_info_get(const struct pvr_device *pvr_dev,
> > +		  struct drm_pvr_ioctl_dev_query_args *args)
> > +{
> > +	struct drm_pvr_dev_query_heap_info query = {0};
> > +	u64 dest;
> > +	int err;
> > +
> > +	if (!args->pointer) {
> > +		args->size = sizeof(struct drm_pvr_dev_query_heap_info);
> > +		return 0;
> > +	}
> > +
> > +	err = PVR_UOBJ_GET(query, args->size, args->pointer);
> > +	if (err < 0)
> > +		return err;
> > +
> > +	if (!query.heaps.array) {
> > +		query.heaps.count = ARRAY_SIZE(pvr_heaps);
> > +		query.heaps.stride = sizeof(struct drm_pvr_heap);
> > +		goto copy_out;
> > +	}
> > +
> > +	if (query.heaps.count > ARRAY_SIZE(pvr_heaps))
> > +		query.heaps.count = ARRAY_SIZE(pvr_heaps);
> > +
> > +	/* Region header heap is only present if BRN63142 is present. */
> > +	dest = query.heaps.array;
> > +	for (size_t i = 0; i < query.heaps.count; i++) {
> > +		struct drm_pvr_heap heap = pvr_heaps[i];
> > +
> > +		if (i == DRM_PVR_HEAP_RGNHDR && !PVR_HAS_QUIRK(pvr_dev, 63142))
> > +			heap.size = 0;
> > +
> > +		err = PVR_UOBJ_SET(dest, query.heaps.stride, heap);
> > +		if (err < 0)
> > +			return err;
> > +
> > +		dest += query.heaps.stride;
> > +	}
> > +
> > +copy_out:
> > +	err = PVR_UOBJ_SET(args->pointer, args->size, query);
> > +	if (err < 0)
> > +		return err;
> > +
> > +	args->size = sizeof(query);
> > +	return 0;
> > +}
> > +
> > +/**
> > + * pvr_heap_contains_range() - Determine if a given heap contains the specified
> > + *                             device-virtual address range.
> > + * @pvr_heap: Target heap.
> > + * @start: Inclusive start of the target range.
> > + * @end: Inclusive end of the target range.
> > + *
> > + * It is an error to call this function with values of @start and @end that do
> > + * not satisfy the condition @start <= @end.
> > + */
> > +static __always_inline bool
> > +pvr_heap_contains_range(const struct drm_pvr_heap *pvr_heap, u64 start, u64 end)
> > +{
> > +	return pvr_heap->base <= start && end < pvr_heap->base + pvr_heap->size;
> > +}
> > +
> > +/**
> > + * pvr_find_heap_containing() - Find a heap which contains the specified
> > + *                              device-virtual address range.
> > + * @pvr_dev: Target PowerVR device.
> > + * @start: Start of the target range.
> > + * @size: Size of the target range.
> > + *
> > + * Return:
> > + *  * A pointer to a constant instance of struct drm_pvr_heap representing the
> > + *    heap containing the entire range specified by @start and @size on
> > + *    success, or
> > + *  * %NULL if no such heap exists.
> > + */
> > +const struct drm_pvr_heap *
> > +pvr_find_heap_containing(struct pvr_device *pvr_dev, u64 start, u64 size)
> > +{
> > +	u64 end;
> > +
> > +	if (check_add_overflow(start, size - 1, &end))
> > +		return NULL;
> > +
> > +	/*
> > +	 * There are no guarantees about the order of address ranges in
> > +	 * &pvr_heaps, so iterate over the entire array for a heap whose
> > +	 * range completely encompasses the given range.
> > +	 */
> > +	for (u32 heap_id = 0; heap_id < ARRAY_SIZE(pvr_heaps); heap_id++) {
> > +		/* Filter heaps that present only with an associated quirk */
> > +		if (heap_id == DRM_PVR_HEAP_RGNHDR &&
> > +		    !PVR_HAS_QUIRK(pvr_dev, 63142)) {
> > +			continue;
> > +		}
> > +
> > +		if (pvr_heap_contains_range(&pvr_heaps[heap_id], start, end))
> > +			return &pvr_heaps[heap_id];
> > +	}
> > +
> > +	return NULL;
> > +}
> > +
> > +/**
> > + * pvr_vm_find_gem_object() - Look up a buffer object from a given
> > + *                            device-virtual address.
> > + * @vm_ctx: [IN] Target VM context.
> > + * @device_addr: [IN] Virtual device address at the start of the required
> > + *               object.
> > + * @mapped_offset_out: [OUT] Pointer to location to write offset of the start
> > + *                     of the mapped region within the buffer object. May be
> > + *                     %NULL if this information is not required.
> > + * @mapped_size_out: [OUT] Pointer to location to write size of the mapped
> > + *                   region. May be %NULL if this information is not required.
> > + *
> > + * If successful, a reference will be taken on the buffer object. The caller
> > + * must drop the reference with pvr_gem_object_put().
> > + *
> > + * Return:
> > + *  * The PowerVR buffer object mapped at @device_addr if one exists, or
> > + *  * %NULL otherwise.
> > + */
> > +struct pvr_gem_object *
> > +pvr_vm_find_gem_object(struct pvr_vm_context *vm_ctx, u64 device_addr,
> > +		       u64 *mapped_offset_out, u64 *mapped_size_out)
> > +{
> > +	struct pvr_gem_object *pvr_obj;
> > +	struct drm_gpuva *va;
> > +
> > +	mutex_lock(&vm_ctx->lock);
> > +
> > +	va = drm_gpuva_find_first(&vm_ctx->gpuva_mgr, device_addr, 1);
> > +	if (!va)
> > +		goto err_unlock;
> > +
> > +	pvr_obj = gem_to_pvr_gem(va->gem.obj);
> > +	pvr_gem_object_get(pvr_obj);
> > +
> > +	if (mapped_offset_out)
> > +		*mapped_offset_out = va->gem.offset;
> > +	if (mapped_size_out)
> > +		*mapped_size_out = va->va.range;
> > +
> > +	mutex_unlock(&vm_ctx->lock);
> > +
> > +	return pvr_obj;
> > +
> > +err_unlock:
> > +	mutex_unlock(&vm_ctx->lock);
> > +
> > +	return NULL;
> > +}
> > +
> > +/**
> > + * pvr_vm_get_fw_mem_context: Get object representing firmware memory context
> > + * @vm_ctx: Target VM context.
> > + *
> > + * Returns:
> > + *  * FW object representing firmware memory context, or
> > + *  * %NULL if this VM context does not have a firmware memory context.
> > + */
> > +struct pvr_fw_object *
> > +pvr_vm_get_fw_mem_context(struct pvr_vm_context *vm_ctx)
> > +{
> > +	return vm_ctx->fw_mem_ctx_obj;
> > +}
> > diff --git a/drivers/gpu/drm/imagination/pvr_vm.h b/drivers/gpu/drm/imagination/pvr_vm.h
> > new file mode 100644
> > index 000000000000..b98bc3981807
> > --- /dev/null
> > +++ b/drivers/gpu/drm/imagination/pvr_vm.h
> > @@ -0,0 +1,60 @@
> > +/* SPDX-License-Identifier: GPL-2.0 OR MIT */
> > +/* Copyright (c) 2023 Imagination Technologies Ltd. */
> > +
> > +#ifndef PVR_VM_H
> > +#define PVR_VM_H
> > +
> > +#include "pvr_rogue_mmu_defs.h"
> > +
> > +#include <uapi/drm/pvr_drm.h>
> > +
> > +#include <linux/types.h>
> > +
> > +/* Forward declaration from "pvr_device.h" */
> > +struct pvr_device;
> > +struct pvr_file;
> > +
> > +/* Forward declaration from "pvr_gem.h" */
> > +struct pvr_gem_object;
> > +
> > +/* Forward declaration from "pvr_vm.c" */
> > +struct pvr_vm_context;
> > +
> > +/* Forward declaration from <uapi/drm/pvr_drm.h> */
> > +struct drm_pvr_ioctl_get_heap_info_args;
> > +
> > +/* Functions defined in pvr_vm.c */
> > +
> > +bool pvr_device_addr_is_valid(u64 device_addr);
> > +bool pvr_device_addr_and_size_are_valid(u64 device_addr, u64 size);
> > +
> > +struct pvr_vm_context *pvr_vm_create_context(struct pvr_device *pvr_dev,
> > +					     bool is_userspace_context);
> > +
> > +int pvr_vm_map(struct pvr_vm_context *vm_ctx,
> > +	       struct pvr_gem_object *pvr_obj, u64 pvr_obj_offset,
> > +	       u64 device_addr, u64 size);
> > +int pvr_vm_unmap(struct pvr_vm_context *vm_ctx, u64 device_addr, u64 size);
> > +
> > +dma_addr_t pvr_vm_get_page_table_root_addr(struct pvr_vm_context *vm_ctx);
> > +
> > +int pvr_static_data_areas_get(const struct pvr_device *pvr_dev,
> > +			      struct drm_pvr_ioctl_dev_query_args *args);
> > +int pvr_heap_info_get(const struct pvr_device *pvr_dev,
> > +		      struct drm_pvr_ioctl_dev_query_args *args);
> > +const struct drm_pvr_heap *pvr_find_heap_containing(struct pvr_device *pvr_dev,
> > +						    u64 addr, u64 size);
> > +
> > +struct pvr_gem_object *pvr_vm_find_gem_object(struct pvr_vm_context *vm_ctx,
> > +					      u64 device_addr,
> > +					      u64 *mapped_offset_out,
> > +					      u64 *mapped_size_out);
> > +
> > +struct pvr_fw_object *
> > +pvr_vm_get_fw_mem_context(struct pvr_vm_context *vm_ctx);
> > +
> > +struct pvr_vm_context *pvr_vm_context_lookup(struct pvr_file *pvr_file, u32 handle);
> > +bool pvr_vm_context_put(struct pvr_vm_context *vm_ctx);
> > +void pvr_destroy_vm_contexts_for_file(struct pvr_file *pvr_file);
> > +
> > +#endif /* PVR_VM_H */
> > -- 
> > 2.41.0
> > 

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [EXTERNAL] Re: [PATCH v5 08/17] drm/imagination: Add GEM and VM related code
  2023-08-21  8:30       ` Donald Robson
@ 2023-08-21 11:05         ` Danilo Krummrich
  -1 siblings, 0 replies; 77+ messages in thread
From: Danilo Krummrich @ 2023-08-21 11:05 UTC (permalink / raw)
  To: Donald Robson, Sarah Walker
  Cc: luben.tuikov, christian.koenig, tzimmermann, mripard, hns,
	daniel, afd, Matt Coster, maarten.lankhorst, boris.brezillon,
	matthew.brost, linux-kernel, dri-devel, Frank Binns, airlied,
	faith.ekstrand

On 8/21/23 10:30, Donald Robson wrote:
> Hi Danilo,
> Thanks for the feedback.  On the subject of locking, I have dma_resv locking
> in another branch where I'm trying to enable bind queues, but I didn't think
> I needed locking for the single, synchronous operations seen here.  Would a
> mutex on the gem object wrapper suffice?

The reason you need to either acquire the dma-resv lock or a driver 
specific lock for drm_gpuva_link() and drm_gpuva_unlink() is that you 
probably iterate the linked drm_gpuvas somewhere else, typically from a
ttm_device_funcs.move callback to unmap mappings when their backing GEM 
objects are evicted. If you handle that a different way and never 
iterate linked drm_gpuvas, you can probably omit drm_gpuva_(un)link() 
entirely.
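
As a rough illustration (purely a sketch; the names, e.g. the per-object
mutex suggested above, are placeholders), the driver-lock variant would be
something like:

	/* at GEM creation, tell the VA manager which lock protects the
	 * object's gpuva list */
	drm_gem_gpuva_set_lock(gem_from_pvr_gem(pvr_obj), &pvr_obj->gpuva_lock);

	/* sm_step callbacks: link/unlink with that lock held */
	mutex_lock(&pvr_obj->gpuva_lock);
	drm_gpuva_link(ctx->new_va);
	mutex_unlock(&pvr_obj->gpuva_lock);

	/* eviction path: walk the linked mappings under the same lock */
	struct drm_gpuva *va;

	mutex_lock(&pvr_obj->gpuva_lock);
	drm_gem_for_each_gpuva(va, gem_from_pvr_gem(pvr_obj)) {
		/* unmap 'va' from its VM's page tables here */
	}
	mutex_unlock(&pvr_obj->gpuva_lock);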

> Thanks,
> Donald
> On Fri, 2023-08-18 at 17:30 +0200, Danilo Krummrich wrote:
>>
>> Hi Sarah,
>>
>> On Wed, Aug 16, 2023 at 09:25:23AM +0100, Sarah Walker wrote:
>>> Add a GEM implementation based on drm_gem_shmem, and support code for the
>>> PowerVR GPU MMU. The GPU VA manager is used for address space management.
>>>
>>> Changes since v4:
>>> - Correct sync function in vmap/vunmap function documentation
>>> - Update for upstream GPU VA manager
>>> - Fix missing frees when unmapping drm_gpuva objects
>>> - Always zero GEM BOs on creation
>>>
>>> Changes since v3:
>>> - Split MMU and VM code
>>> - Register page table allocations with kmemleak
>>> - Use drm_dev_{enter,exit}
>>>
>>> Changes since v2:
>>> - Use GPU VA manager
>>> - Use drm_gem_shmem
>>>
>>> Co-developed-by: Matt Coster <matt.coster@imgtec.com>
>>> Signed-off-by: Matt Coster <matt.coster@imgtec.com>
>>> Co-developed-by: Donald Robson <donald.robson@imgtec.com>
>>> Signed-off-by: Donald Robson <donald.robson@imgtec.com>
>>> Signed-off-by: Sarah Walker <sarah.walker@imgtec.com>
>>> ---
>>>   drivers/gpu/drm/imagination/Makefile     |    5 +-
>>>   drivers/gpu/drm/imagination/pvr_device.c |   23 +-
>>>   drivers/gpu/drm/imagination/pvr_device.h |   18 +
>>>   drivers/gpu/drm/imagination/pvr_drv.c    |  302 ++-
>>>   drivers/gpu/drm/imagination/pvr_gem.c    |  396 ++++
>>>   drivers/gpu/drm/imagination/pvr_gem.h    |  177 ++
>>>   drivers/gpu/drm/imagination/pvr_mmu.c    | 2487 ++++++++++++++++++++++
>>>   drivers/gpu/drm/imagination/pvr_mmu.h    |  108 +
>>>   drivers/gpu/drm/imagination/pvr_vm.c     |  890 ++++++++
>>>   drivers/gpu/drm/imagination/pvr_vm.h     |   60 +
>>>   10 files changed, 4455 insertions(+), 11 deletions(-)
>>>   create mode 100644 drivers/gpu/drm/imagination/pvr_gem.c
>>>   create mode 100644 drivers/gpu/drm/imagination/pvr_gem.h
>>>   create mode 100644 drivers/gpu/drm/imagination/pvr_mmu.c
>>>   create mode 100644 drivers/gpu/drm/imagination/pvr_mmu.h
>>>   create mode 100644 drivers/gpu/drm/imagination/pvr_vm.c
>>>   create mode 100644 drivers/gpu/drm/imagination/pvr_vm.h
>>
>> <snip>
>>
>>> diff --git a/drivers/gpu/drm/imagination/pvr_vm.c b/drivers/gpu/drm/imagination/pvr_vm.c
>>> new file mode 100644
>>> index 000000000000..616fad3a3325
>>> --- /dev/null
>>> +++ b/drivers/gpu/drm/imagination/pvr_vm.c
>>> @@ -0,0 +1,890 @@
>>> +// SPDX-License-Identifier: GPL-2.0 OR MIT
>>> +/* Copyright (c) 2023 Imagination Technologies Ltd. */
>>> +
>>> +#include "pvr_vm.h"
>>> +
>>> +#include "pvr_device.h"
>>> +#include "pvr_drv.h"
>>> +#include "pvr_gem.h"
>>> +#include "pvr_mmu.h"
>>> +#include "pvr_rogue_fwif.h"
>>> +#include "pvr_rogue_heap_config.h"
>>> +
>>> +#include <drm/drm_gem.h>
>>> +#include <drm/drm_gpuva_mgr.h>
>>> +
>>> +#include <linux/container_of.h>
>>> +#include <linux/err.h>
>>> +#include <linux/errno.h>
>>> +#include <linux/gfp_types.h>
>>> +#include <linux/kref.h>
>>> +#include <linux/mutex.h>
>>> +#include <linux/stddef.h>
>>> +
>>> +/**
>>> + * DOC: Memory context
>>> + *
>>> + * This is the "top level" datatype in the VM code. It's exposed in the public
>>> + * API as an opaque handle.
>>> + */
>>> +
>>> +/**
>>> + * struct pvr_vm_context - Context type which encapsulates an entire page table
>>> + * tree structure.
>>> + * @pvr_dev: The PowerVR device to which this context is bound.
>>> + *
>>> + * This binding is immutable for the life of the context.
>>> + * @mmu_ctx: The context for binding to physical memory.
>>> + * @gpuva_mgr: GPUVA manager object associated with this context.
>>> + * @lock: Global lock on this entire structure of page tables.
>>> + * @fw_mem_ctx_obj: Firmware object representing firmware memory context.
>>> + * @ref_count: Reference count of object.
>>> + */
>>> +struct pvr_vm_context {
>>> +	struct pvr_device *pvr_dev;
>>> +	struct pvr_mmu_context *mmu_ctx;
>>> +	struct drm_gpuva_manager gpuva_mgr;
>>> +	struct mutex lock;
>>> +	struct pvr_fw_object *fw_mem_ctx_obj;
>>> +	struct kref ref_count;
>>> +};
>>> +
>>> +/**
>>> + * pvr_vm_get_page_table_root_addr() - Get the DMA address of the root of the
>>> + *                                     page table structure behind a VM context.
>>> + * @vm_ctx: Target VM context.
>>> + */
>>> +dma_addr_t pvr_vm_get_page_table_root_addr(struct pvr_vm_context *vm_ctx)
>>> +{
>>> +	return pvr_mmu_get_root_table_dma_addr(vm_ctx->mmu_ctx);
>>> +}
>>> +
>>> +/**
>>> + * DOC: Memory mappings
>>> + */
>>> +
>>> +/**
>>> + * pvr_vm_gpuva_mapping_init() - Setup a mapping object with the specified
>>> + * parameters ready for mapping with the drm_gpuva_manager.
>>> + * @va: Pointer to drm_gpuva mapping object.
>>> + * @device_addr: Device-virtual address at the start of the mapping.
>>> + * @size: Size of the desired mapping.
>>> + * @pvr_obj: Target PowerVR memory object.
>>> + * @pvr_obj_offset: Offset into @pvr_obj to begin mapping from.
>>> + *
>>> + * Some parameters of this function are unchecked. It is therefore the caller's
>>> + * responsibility to ensure certain constraints are met. Specifically:
>>> + *
>>> + * * @pvr_obj_offset must be less than the size of @pvr_obj,
>>> + * * The sum of @pvr_obj_offset and @size must be less than or equal to the
>>> + *   size of @pvr_obj,
>>> + * * The range specified by @pvr_obj_offset and @size (the "CPU range") must be
>>> + *   CPU page-aligned both in start position and size, and
>>> + * * The range specified by @device_addr and @size (the "device range") must be
>>> + *   device page-aligned both in start position and size.
>>> + *
>>> + * Furthermore, it is up to the caller to make sure that a reference to @pvr_obj
>>> + * is taken prior to mapping @va with the drm_gpuva_manager.
>>> + */
>>> +static void
>>> +pvr_vm_gpuva_mapping_init(struct drm_gpuva *va, u64 device_addr, u64 size,
>>> +			  struct pvr_gem_object *pvr_obj, u64 pvr_obj_offset)
>>
>> There's already drm_gpuva_init() doing the same thing.
>>
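
For illustration, a minimal sketch of the suggested substitution, assuming
drm_gpuva_init() takes the same (va, addr, range, obj, offset) tuple that
this helper fills in:

	drm_gpuva_init(va, device_addr, size,
		       gem_from_pvr_gem(pvr_obj), pvr_obj_offset);
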
>>> +{
>>> +	va->va.addr = device_addr;
>>> +	va->va.range = size;
>>> +	va->gem.obj = gem_from_pvr_gem(pvr_obj);
>>> +	va->gem.offset = pvr_obj_offset;
>>> +}
>>> +
>>> +struct pvr_vm_gpuva_op_ctx {
>>> +	struct pvr_vm_context *vm_ctx;
>>> +	struct pvr_mmu_op_context *mmu_op_ctx;
>>> +	struct drm_gpuva *new_va, *prev_va, *next_va;
>>> +};
>>> +
>>> +/**
>>> + * pvr_vm_gpuva_map() - Insert a mapping into a memory context.
>>> + * @op: gpuva op containing the remap details.
>>> + * @op_ctx: Operation context.
>>> + *
>>> + * Context: Called by drm_gpuva_sm_map following a successful mapping while
>>> + * @op_ctx.vm_ctx mutex is held.
>>> + *
>>> + * Return:
>>> + *  * 0 on success, or
>>> + *  * Any error returned by pvr_mmu_map().
>>> + */
>>> +static int
>>> +pvr_vm_gpuva_map(struct drm_gpuva_op *op, void *op_ctx)
>>> +{
>>> +	struct pvr_gem_object *pvr_gem = gem_to_pvr_gem(op->map.gem.obj);
>>> +	struct pvr_vm_gpuva_op_ctx *ctx = op_ctx;
>>> +	int err;
>>> +
>>> +	if ((op->map.gem.offset | op->map.va.range) & ~PVR_DEVICE_PAGE_MASK)
>>> +		return -EINVAL;
>>> +
>>> +	err = pvr_mmu_map(ctx->mmu_op_ctx, op->map.va.range, pvr_gem->flags,
>>> +			  op->map.va.addr);
>>> +	if (err)
>>> +		return err;
>>> +
>>> +	pvr_vm_gpuva_mapping_init(ctx->new_va, op->map.va.addr,
>>> +				  op->map.va.range, pvr_gem, op->map.gem.offset);
>>> +
>>> +	drm_gpuva_map(&ctx->vm_ctx->gpuva_mgr, ctx->new_va, &op->map);
>>
>> drm_gpuva_map() does use drm_gpuva_init_from_op() internally, hence the extra
>> call to pvr_vm_gpuva_mapping_init() should be unnecessary.
>>
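
A rough sketch of the map step with the redundant initialization dropped,
assuming drm_gpuva_map() fully initializes @new_va from &op->map as
described above:

	err = pvr_mmu_map(ctx->mmu_op_ctx, op->map.va.range, pvr_gem->flags,
			  op->map.va.addr);
	if (err)
		return err;

	drm_gpuva_map(&ctx->vm_ctx->gpuva_mgr, ctx->new_va, &op->map);
	drm_gpuva_link(ctx->new_va);
	ctx->new_va = NULL;
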
>>> +	drm_gpuva_link(ctx->new_va);
>>
>> How is this protected?
>>
>> drm_gpuva_link() and drm_gpuva_unlink() require either the dma_resv lock of the
>> corresponding GEM object being held or, alternatively, the driver specific lock
>> indicated via drm_gem_gpuva_set_lock().
>>
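
Two possible ways to satisfy that requirement, sketched here under the
assumption that the GPU VA manager helpers behave as described above (not
code from this series):

	struct drm_gem_object *obj = gem_from_pvr_gem(pvr_gem);

	/* Option A: hold the GEM object's reservation lock across the link. */
	dma_resv_lock(obj->resv, NULL);
	drm_gpuva_link(ctx->new_va);
	dma_resv_unlock(obj->resv);

	/* Option B: register a driver-specific lock (e.g. the VM context
	 * mutex) as the lock protecting the object's gpuva list, typically
	 * once at GEM object creation time, and then take it around
	 * drm_gpuva_link()/drm_gpuva_unlink(), which this path already does
	 * via the vm_ctx mutex.
	 */
	drm_gem_gpuva_set_lock(obj, &ctx->vm_ctx->lock);
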
>>> +	ctx->new_va = NULL;
>>> +
>>> +	/*
>>> +	 * Increment the refcount on the underlying physical memory resource
>>> +	 * to prevent de-allocation while the mapping exists.
>>> +	 */
>>> +	pvr_gem_object_get(pvr_gem);
>>> +
>>> +	return 0;
>>> +}
>>> +
>>> +/**
>>> + * pvr_vm_gpuva_unmap() - Remove a mapping from a memory context.
>>> + * @op: gpuva op containing the unmap details.
>>> + * @op_ctx: Operation context.
>>> + *
>>> + * Context: Called by drm_gpuva_sm_unmap following a successful unmapping while
>>> + * @op_ctx.vm_ctx mutex is held.
>>> + *
>>> + * Return:
>>> + *  * 0 on success, or
>>> + *  * Any error returned by pvr_mmu_unmap().
>>> + */
>>> +static int
>>> +pvr_vm_gpuva_unmap(struct drm_gpuva_op *op, void *op_ctx)
>>> +{
>>> +	struct pvr_gem_object *pvr_gem = gem_to_pvr_gem(op->unmap.va->gem.obj);
>>> +	struct pvr_vm_gpuva_op_ctx *ctx = op_ctx;
>>> +
>>> +	int err = pvr_mmu_unmap(ctx->mmu_op_ctx, op->unmap.va->va.addr,
>>> +				op->unmap.va->va.range);
>>> +
>>> +	if (err)
>>> +		return err;
>>> +
>>> +	drm_gpuva_unmap(&op->unmap);
>>> +	drm_gpuva_unlink(op->unmap.va);
>>> +	kfree(op->unmap.va);
>>> +
>>> +	pvr_gem_object_put(pvr_gem);
>>> +
>>> +	return 0;
>>> +}
>>> +
>>> +/**
>>> + * pvr_vm_gpuva_remap() - Remap a mapping within a memory context.
>>> + * @op: gpuva op containing the remap details.
>>> + * @op_ctx: Operation context.
>>> + *
>>> + * Context: Called by either drm_gpuva_sm_map or drm_gpuva_sm_unmap when a
>>> + * mapping or unmapping operation causes a region to be split. The
>>> + * @op_ctx.vm_ctx mutex is held.
>>> + *
>>> + * Return:
>>> + *  * 0 on success, or
>>> + *  * Any error returned by pvr_mmu_unmap().
>>> + */
>>> +static int
>>> +pvr_vm_gpuva_remap(struct drm_gpuva_op *op, void *op_ctx)
>>> +{
>>> +	struct pvr_vm_gpuva_op_ctx *ctx = op_ctx;
>>> +
>>> +	if (op->remap.unmap) {
>>
>> You can omit this check, remap operations always contain a valid unmap
>> operation. However, you might want to know whether the remap operation was
>> generated due to a call to drm_gpuva_sm_map() or drm_gpuva_sm_unmap(), since for
>> the latter you might want to free page table structures.
>>
>>> +		const u64 va_start = op->remap.prev ?
>>> +				     op->remap.prev->va.addr + op->remap.prev->va.range :
>>> +				     op->remap.unmap->va->va.addr;
>>> +		const u64 va_end = op->remap.next ?
>>> +				   op->remap.next->va.addr :
>>> +				   op->remap.unmap->va->va.addr + op->remap.unmap->va->va.range;
>>
>> This seems to be a common calculation for drivers, it is probably worth to come
>> up with a helper, something like
>> drm_gpuva_op_unmap_range(struct drm_gpuva_op *op, u64 *addr, u64 *range).
>>
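
A sketch of such a helper, following the signature suggested above:

	static inline void
	drm_gpuva_op_unmap_range(struct drm_gpuva_op *op, u64 *addr, u64 *range)
	{
		const u64 start = op->remap.prev ?
				  op->remap.prev->va.addr + op->remap.prev->va.range :
				  op->remap.unmap->va->va.addr;
		const u64 end = op->remap.next ?
				op->remap.next->va.addr :
				op->remap.unmap->va->va.addr + op->remap.unmap->va->va.range;

		*addr = start;
		*range = end - start;
	}
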
>>> +
>>> +		int err = pvr_mmu_unmap(ctx->mmu_op_ctx, va_start,
>>> +					va_end - va_start);
>>> +
>>> +		if (err)
>>> +			return err;
>>> +	}
>>> +
>>> +	if (op->remap.prev)
>>> +		pvr_vm_gpuva_mapping_init(ctx->prev_va, op->remap.prev->va.addr,
>>> +					  op->remap.prev->va.range,
>>> +					  gem_to_pvr_gem(op->remap.prev->gem.obj),
>>> +					  op->remap.prev->gem.offset);
>>> +
>>> +	if (op->remap.next)
>>> +		pvr_vm_gpuva_mapping_init(ctx->next_va, op->remap.next->va.addr,
>>> +					  op->remap.next->va.range,
>>> +					  gem_to_pvr_gem(op->remap.next->gem.obj),
>>> +					  op->remap.next->gem.offset);
>>> +
>>> +	/* No actual remap required: the page table tree depth is fixed to 3,
>>> +	 * and we use 4k page table entries only for now.
>>> +	 */
>>> +	drm_gpuva_remap(ctx->prev_va, ctx->next_va, &op->remap);
>>
>> As above, drm_gpuva_remap() does use drm_gpuva_init_from_op() internally, hence
>> the extra call to pvr_vm_gpuva_mapping_init() should be unnecessary.
>>
>>> +
>>> +	if (op->remap.prev) {
>>> +		pvr_gem_object_get(gem_to_pvr_gem(ctx->prev_va->gem.obj));
>>> +		drm_gpuva_link(ctx->prev_va);
>>> +		ctx->prev_va = NULL;
>>> +	}
>>> +
>>> +	if (op->remap.next) {
>>> +		pvr_gem_object_get(gem_to_pvr_gem(ctx->next_va->gem.obj));
>>> +		drm_gpuva_link(ctx->next_va);
>>> +		ctx->next_va = NULL;
>>> +	}
>>> +
>>> +	if (op->remap.unmap) {
>>
>> As above, no need for this check.
>>
>> - Danilo
>>
>>> +		struct pvr_gem_object *pvr_gem = gem_to_pvr_gem(op->remap.unmap->va->gem.obj);
>>> +
>>> +		drm_gpuva_unlink(op->unmap.va);
>>> +		kfree(op->unmap.va);
>>> +
>>> +		pvr_gem_object_put(pvr_gem);
>>> +	}
>>> +
>>> +	return 0;
>>> +}
>>> +
>>> +/*
>>> + * Public API
>>> + *
>>> + * For an overview of these functions, see *DOC: Public API* in "pvr_vm.h".
>>> + */
>>> +
>>> +/**
>>> + * pvr_device_addr_is_valid() - Tests whether a device-virtual address
>>> + *                              is valid.
>>> + * @device_addr: Virtual device address to test.
>>> + *
>>> + * Return:
>>> + *  * %true if @device_addr is within the valid range for a device page
>>> + *    table and is aligned to the device page size, or
>>> + *  * %false otherwise.
>>> + */
>>> +bool
>>> +pvr_device_addr_is_valid(u64 device_addr)
>>> +{
>>> +	return (device_addr & ~PVR_PAGE_TABLE_ADDR_MASK) == 0 &&
>>> +	       (device_addr & ~PVR_DEVICE_PAGE_MASK) == 0;
>>> +}
>>> +
>>> +/**
>>> + * pvr_device_addr_and_size_are_valid() - Tests whether a device-virtual
>>> + * address and associated size are both valid.
>>> + * @device_addr: Virtual device address to test.
>>> + * @size: Size of the range based at @device_addr to test.
>>> + *
>>> + * Calling pvr_device_addr_is_valid() twice (once on @size, and again on
>>> + * @device_addr + @size) to verify a device-virtual address range initially
>>> + * seems intuitive, but it produces a false-negative when the address range
>>> + * is right at the end of device-virtual address space.
>>> + *
>>> + * This function catches that corner case, as well as checking that
>>> + * @size is non-zero.
>>> + *
>>> + * Return:
>>> + *  * %true if @device_addr is device page aligned; @size is device page
>>> + *    aligned; the range specified by @device_addr and @size is within the
>>> + *    bounds of the device-virtual address space, and @size is non-zero, or
>>> + *  * %false otherwise.
>>> + */
>>> +bool
>>> +pvr_device_addr_and_size_are_valid(u64 device_addr, u64 size)
>>> +{
>>> +	return pvr_device_addr_is_valid(device_addr) &&
>>> +	       size != 0 && (size & ~PVR_DEVICE_PAGE_MASK) == 0 &&
>>> +	       (device_addr + size <= PVR_PAGE_TABLE_ADDR_SPACE_SIZE);
>>> +}
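
To make the corner case concrete: assuming, for illustration, a 40-bit
device-virtual address space and 4 KiB device pages, a mapping with
device_addr = (1ULL << 40) - 0x1000 and size = 0x1000 is accepted here,
since device_addr + size equals the full address-space size, which the
<= check above still allows. Calling pvr_device_addr_is_valid() on
device_addr + size instead would reject it, because the top bit of the
sum falls outside PVR_PAGE_TABLE_ADDR_MASK.
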
>>> +
>>> +static const struct drm_gpuva_fn_ops pvr_vm_gpuva_ops = {
>>> +	.sm_step_map = pvr_vm_gpuva_map,
>>> +	.sm_step_remap = pvr_vm_gpuva_remap,
>>> +	.sm_step_unmap = pvr_vm_gpuva_unmap,
>>> +};
>>> +
>>> +/**
>>> + * pvr_vm_create_context() - Create a new VM context.
>>> + * @pvr_dev: Target PowerVR device.
>>> + * @is_userspace_context: %true if this context is for userspace. This will
>>> + *                        create a firmware memory context for the VM context
>>> + *                        and disable warnings when tearing down mappings.
>>> + *
>>> + * Return:
>>> + *  * A handle to the newly-minted VM context on success,
>>> + *  * -%EINVAL if the feature "virtual address space bits" on @pvr_dev is
>>> + *    missing or has an unsupported value,
>>> + *  * -%ENOMEM if allocation of the structure behind the opaque handle fails,
>>> + *    or
>>> + *  * Any error encountered while setting up internal structures.
>>> + */
>>> +struct pvr_vm_context *
>>> +pvr_vm_create_context(struct pvr_device *pvr_dev, bool is_userspace_context)
>>> +{
>>> +	struct drm_device *drm_dev = from_pvr_device(pvr_dev);
>>> +
>>> +	struct pvr_vm_context *vm_ctx;
>>> +	u16 device_addr_bits;
>>> +
>>> +	int err;
>>> +
>>> +	err = PVR_FEATURE_VALUE(pvr_dev, virtual_address_space_bits,
>>> +				&device_addr_bits);
>>> +	if (err) {
>>> +		drm_err(drm_dev,
>>> +			"Failed to get device virtual address space bits\n");
>>> +		return ERR_PTR(err);
>>> +	}
>>> +
>>> +	if (device_addr_bits != PVR_PAGE_TABLE_ADDR_BITS) {
>>> +		drm_err(drm_dev,
>>> +			"Device has unsupported virtual address space size\n");
>>> +		return ERR_PTR(-EINVAL);
>>> +	}
>>> +
>>> +	vm_ctx = kzalloc(sizeof(*vm_ctx), GFP_KERNEL);
>>> +	if (!vm_ctx)
>>> +		return ERR_PTR(-ENOMEM);
>>> +
>>> +	vm_ctx->pvr_dev = pvr_dev;
>>> +	kref_init(&vm_ctx->ref_count);
>>> +	mutex_init(&vm_ctx->lock);
>>> +
>>> +	drm_gpuva_manager_init(&vm_ctx->gpuva_mgr,
>>> +			       is_userspace_context ? "PowerVR-user-VM" : "PowerVR-FW-VM",
>>> +			       0, 1ULL << device_addr_bits, 0, 0, &pvr_vm_gpuva_ops);
>>> +
>>> +	vm_ctx->mmu_ctx = pvr_mmu_context_create(pvr_dev);
>>> +	err = PTR_ERR_OR_ZERO(vm_ctx->mmu_ctx);
>>> +	if (err) {
>>> +		vm_ctx->mmu_ctx = NULL;
>>> +		goto err_put_ctx;
>>> +	}
>>> +
>>> +	if (is_userspace_context) {
>>> +		/* TODO: Create FW mem context */
>>> +		err = -ENODEV;
>>> +		goto err_put_ctx;
>>> +	}
>>> +
>>> +	return vm_ctx;
>>> +
>>> +err_put_ctx:
>>> +	pvr_vm_context_put(vm_ctx);
>>> +
>>> +	return ERR_PTR(err);
>>> +}
>>> +
>>> +/**
>>> + * pvr_vm_context_release() - Teardown a VM context.
>>> + * @ref_count: Pointer to reference counter of the VM context.
>>> + *
>>> + * This function ensures that no mappings are left dangling by unmapping them
>>> + * all in order of ascending device-virtual address.
>>> + */
>>> +static void
>>> +pvr_vm_context_release(struct kref *ref_count)
>>> +{
>>> +	struct pvr_vm_context *vm_ctx =
>>> +		container_of(ref_count, struct pvr_vm_context, ref_count);
>>> +
>>> +	/* TODO: Destroy FW mem context */
>>> +	WARN_ON(vm_ctx->fw_mem_ctx_obj);
>>> +
>>> +	WARN_ON(pvr_vm_unmap(vm_ctx, vm_ctx->gpuva_mgr.mm_start,
>>> +			     vm_ctx->gpuva_mgr.mm_range));
>>> +
>>> +	drm_gpuva_manager_destroy(&vm_ctx->gpuva_mgr);
>>> +	pvr_mmu_context_destroy(vm_ctx->mmu_ctx);
>>> +	mutex_destroy(&vm_ctx->lock);
>>> +
>>> +	kfree(vm_ctx);
>>> +}
>>> +
>>> +/**
>>> + * pvr_vm_context_lookup() - Look up VM context from handle
>>> + * @pvr_file: Pointer to pvr_file structure.
>>> + * @handle: Object handle.
>>> + *
>>> + * Takes reference on VM context object. Call pvr_vm_context_put() to release.
>>> + *
>>> + * Returns:
>>> + *  * The requested object on success, or
>>> + *  * %NULL on failure (object does not exist in list, or is not a VM context)
>>> + */
>>> +struct pvr_vm_context *
>>> +pvr_vm_context_lookup(struct pvr_file *pvr_file, u32 handle)
>>> +{
>>> +	struct pvr_vm_context *vm_ctx;
>>> +
>>> +	xa_lock(&pvr_file->vm_ctx_handles);
>>> +	vm_ctx = xa_load(&pvr_file->vm_ctx_handles, handle);
>>> +	if (vm_ctx)
>>> +		kref_get(&vm_ctx->ref_count);
>>> +
>>> +	xa_unlock(&pvr_file->vm_ctx_handles);
>>> +
>>> +	return vm_ctx;
>>> +}
>>> +
>>> +/**
>>> + * pvr_vm_context_put() - Release a reference on a VM context
>>> + * @vm_ctx: Target VM context.
>>> + *
>>> + * Returns:
>>> + *  * %true if the VM context was destroyed, or
>>> + *  * %false if there are any references still remaining.
>>> + */
>>> +bool
>>> +pvr_vm_context_put(struct pvr_vm_context *vm_ctx)
>>> +{
>>> +	WARN_ON(!vm_ctx);
>>> +
>>> +	if (vm_ctx)
>>> +		return kref_put(&vm_ctx->ref_count, pvr_vm_context_release);
>>> +
>>> +	return true;
>>> +}
>>> +
>>> +/**
>>> + * pvr_destroy_vm_contexts_for_file() - Destroy any VM contexts associated with the
>>> + * given file.
>>> + * @pvr_file: Pointer to pvr_file structure.
>>> + *
>>> + * Removes all vm_contexts associated with @pvr_file from the device VM context
>>> + * list and drops initial references. vm_contexts will then be destroyed once
>>> + * all outstanding references are dropped.
>>> + */
>>> +void pvr_destroy_vm_contexts_for_file(struct pvr_file *pvr_file)
>>> +{
>>> +	struct pvr_vm_context *vm_ctx;
>>> +	unsigned long handle;
>>> +
>>> +	xa_for_each(&pvr_file->vm_ctx_handles, handle, vm_ctx) {
>>> +		/* vm_ctx is not used here because that would create a race with xa_erase */
>>> +		pvr_vm_context_put(xa_erase(&pvr_file->vm_ctx_handles, handle));
>>> +	}
>>> +}
>>> +
>>> +/**
>>> + * pvr_vm_map() - Map a section of physical memory into a section of device-virtual memory.
>>> + * @vm_ctx: Target VM context.
>>> + * @pvr_obj: Target PowerVR memory object.
>>> + * @pvr_obj_offset: Offset into @pvr_obj to map from.
>>> + * @device_addr: Virtual device address at the start of the requested mapping.
>>> + * @size: Size of the requested mapping.
>>> + *
>>> + * No handle is returned to represent the mapping. Instead, callers should
>>> + * remember @device_addr and use that as a handle.
>>> + *
>>> + * Return:
>>> + *  * 0 on success,
>>> + *  * -%EINVAL if @device_addr is not a valid page-aligned device-virtual
>>> + *    address; the region specified by @pvr_obj_offset and @size does not fall
>>> + *    entirely within @pvr_obj, or any part of the specified region of @pvr_obj
>>> + *    is not device-virtual page-aligned,
>>> + *  * Any error encountered while performing internal operations required to
>>> + *    create the mapping (returned from pvr_vm_gpuva_map or
>>> + *    pvr_vm_gpuva_remap).
>>> + */
>>> +int
>>> +pvr_vm_map(struct pvr_vm_context *vm_ctx,
>>> +	   struct pvr_gem_object *pvr_obj, u64 pvr_obj_offset,
>>> +	   u64 device_addr, u64 size)
>>> +{
>>> +	const size_t pvr_obj_size = pvr_gem_object_size(pvr_obj);
>>> +	struct pvr_vm_gpuva_op_ctx op_ctx = { .vm_ctx = vm_ctx };
>>> +	struct sg_table *sgt;
>>> +	int err;
>>> +
>>> +	if (!pvr_device_addr_and_size_are_valid(device_addr, size) ||
>>> +	    pvr_obj_offset & ~PAGE_MASK || size & ~PAGE_MASK ||
>>> +	    pvr_obj_offset + size > pvr_obj_size ||
>>> +	    pvr_obj_offset > pvr_obj_size) {
>>> +		return -EINVAL;
>>> +	}
>>> +
>>> +	op_ctx.new_va = kzalloc(sizeof(*op_ctx.new_va), GFP_KERNEL);
>>> +	op_ctx.prev_va = kzalloc(sizeof(*op_ctx.prev_va), GFP_KERNEL);
>>> +	op_ctx.next_va = kzalloc(sizeof(*op_ctx.next_va), GFP_KERNEL);
>>> +	if (!op_ctx.new_va || !op_ctx.prev_va || !op_ctx.next_va) {
>>> +		err = -ENOMEM;
>>> +		goto out_free;
>>> +	}
>>> +
>>> +	sgt = pvr_gem_object_get_pages_sgt(pvr_obj);
>>> +	err = PTR_ERR_OR_ZERO(sgt);
>>> +	if (err)
>>> +		goto out_free;
>>> +
>>> +	op_ctx.mmu_op_ctx = pvr_mmu_op_context_create(vm_ctx->mmu_ctx, sgt,
>>> +						      pvr_obj_offset, size);
>>> +	err = PTR_ERR_OR_ZERO(op_ctx.mmu_op_ctx);
>>> +	if (err) {
>>> +		op_ctx.mmu_op_ctx = NULL;
>>> +		goto out_mmu_op_ctx_destroy;
>>> +	}
>>> +
>>> +	mutex_lock(&vm_ctx->lock);
>>> +	err = drm_gpuva_sm_map(&vm_ctx->gpuva_mgr, &op_ctx, device_addr, size,
>>> +			       gem_from_pvr_gem(pvr_obj), pvr_obj_offset);
>>> +	mutex_unlock(&vm_ctx->lock);
>>> +
>>> +out_mmu_op_ctx_destroy:
>>> +	pvr_mmu_op_context_destroy(op_ctx.mmu_op_ctx);
>>> +
>>> +out_free:
>>> +	kfree(op_ctx.next_va);
>>> +	kfree(op_ctx.prev_va);
>>> +	kfree(op_ctx.new_va);
>>> +
>>> +	return err;
>>> +}
>>> +
>>> +/**
>>> + * pvr_vm_unmap() - Unmap an already mapped section of device-virtual memory.
>>> + * @vm_ctx: Target VM context.
>>> + * @device_addr: Virtual device address at the start of the target mapping.
>>> + * @size: Size of the target mapping.
>>> + *
>>> + * Return:
>>> + *  * 0 on success,
>>> + *  * -%EINVAL if @device_addr is not a valid page-aligned device-virtual
>>> + *    address,
>>> + *  * Any error encountered while performing internal operations required to
>>> + *    destroy the mapping (returned from pvr_vm_gpuva_unmap or
>>> + *    pvr_vm_gpuva_remap).
>>> + */
>>> +int
>>> +pvr_vm_unmap(struct pvr_vm_context *vm_ctx, u64 device_addr, u64 size)
>>> +{
>>> +	struct pvr_vm_gpuva_op_ctx op_ctx = { .vm_ctx = vm_ctx };
>>> +	int err;
>>> +
>>> +	if (!pvr_device_addr_and_size_are_valid(device_addr, size))
>>> +		return -EINVAL;
>>> +
>>> +	op_ctx.prev_va = kzalloc(sizeof(*op_ctx.prev_va), GFP_KERNEL);
>>> +	op_ctx.next_va = kzalloc(sizeof(*op_ctx.next_va), GFP_KERNEL);
>>> +	if (!op_ctx.prev_va || !op_ctx.next_va) {
>>> +		err = -ENOMEM;
>>> +		goto out;
>>> +	}
>>> +
>>> +	op_ctx.mmu_op_ctx =
>>> +		pvr_mmu_op_context_create(vm_ctx->mmu_ctx, NULL, 0, 0);
>>> +	err = PTR_ERR_OR_ZERO(op_ctx.mmu_op_ctx);
>>> +	if (err) {
>>> +		op_ctx.mmu_op_ctx = NULL;
>>> +		goto out;
>>> +	}
>>> +
>>> +	mutex_lock(&vm_ctx->lock);
>>> +	err = drm_gpuva_sm_unmap(&vm_ctx->gpuva_mgr, &op_ctx, device_addr, size);
>>> +	mutex_unlock(&vm_ctx->lock);
>>> +
>>> +out:
>>> +	pvr_mmu_op_context_destroy(op_ctx.mmu_op_ctx);
>>> +	kfree(op_ctx.next_va);
>>> +	kfree(op_ctx.prev_va);
>>> +
>>> +	return err;
>>> +}
>>> +
>>> +/*
>>> + * Static data areas are determined by firmware.
>>> + *
>>> + * When adding a new static data area you will also need to update the reserved_size field for the
>>> + * heap in pvr_heaps[].
>>> + */
>>> +static const struct drm_pvr_static_data_area static_data_areas[] = {
>>> +	{
>>> +		.area_usage = DRM_PVR_STATIC_DATA_AREA_FENCE,
>>> +		.location_heap_id = DRM_PVR_HEAP_GENERAL,
>>> +		.offset = 0,
>>> +		.size = 128,
>>> +	},
>>> +	{
>>> +		.area_usage = DRM_PVR_STATIC_DATA_AREA_YUV_CSC,
>>> +		.location_heap_id = DRM_PVR_HEAP_GENERAL,
>>> +		.offset = 128,
>>> +		.size = 1024,
>>> +	},
>>> +	{
>>> +		.area_usage = DRM_PVR_STATIC_DATA_AREA_VDM_SYNC,
>>> +		.location_heap_id = DRM_PVR_HEAP_PDS_CODE_DATA,
>>> +		.offset = 0,
>>> +		.size = 128,
>>> +	},
>>> +	{
>>> +		.area_usage = DRM_PVR_STATIC_DATA_AREA_EOT,
>>> +		.location_heap_id = DRM_PVR_HEAP_PDS_CODE_DATA,
>>> +		.offset = 128,
>>> +		.size = 128,
>>> +	},
>>> +	{
>>> +		.area_usage = DRM_PVR_STATIC_DATA_AREA_VDM_SYNC,
>>> +		.location_heap_id = DRM_PVR_HEAP_USC_CODE,
>>> +		.offset = 0,
>>> +		.size = 128,
>>> +	},
>>> +};
>>> +
>>> +#define GET_RESERVED_SIZE(last_offset, last_size) round_up((last_offset) + (last_size), PAGE_SIZE)
>>> +
>>> +/*
>>> + * The values given to GET_RESERVED_SIZE() are taken from the last entry in the corresponding
>>> + * static data area for each heap.
>>> + */
>>> +static const struct drm_pvr_heap pvr_heaps[] = {
>>> +	[DRM_PVR_HEAP_GENERAL] = {
>>> +		.base = ROGUE_GENERAL_HEAP_BASE,
>>> +		.size = ROGUE_GENERAL_HEAP_SIZE,
>>> +		.flags = 0,
>>> +		.page_size_log2 = PVR_DEVICE_PAGE_SHIFT,
>>> +	},
>>> +	[DRM_PVR_HEAP_PDS_CODE_DATA] = {
>>> +		.base = ROGUE_PDSCODEDATA_HEAP_BASE,
>>> +		.size = ROGUE_PDSCODEDATA_HEAP_SIZE,
>>> +		.flags = 0,
>>> +		.page_size_log2 = PVR_DEVICE_PAGE_SHIFT,
>>> +	},
>>> +	[DRM_PVR_HEAP_USC_CODE] = {
>>> +		.base = ROGUE_USCCODE_HEAP_BASE,
>>> +		.size = ROGUE_USCCODE_HEAP_SIZE,
>>> +		.flags = 0,
>>> +		.page_size_log2 = PVR_DEVICE_PAGE_SHIFT,
>>> +	},
>>> +	[DRM_PVR_HEAP_RGNHDR] = {
>>> +		.base = ROGUE_RGNHDR_HEAP_BASE,
>>> +		.size = ROGUE_RGNHDR_HEAP_SIZE,
>>> +		.flags = 0,
>>> +		.page_size_log2 = PVR_DEVICE_PAGE_SHIFT,
>>> +	},
>>> +	[DRM_PVR_HEAP_VIS_TEST] = {
>>> +		.base = ROGUE_VISTEST_HEAP_BASE,
>>> +		.size = ROGUE_VISTEST_HEAP_SIZE,
>>> +		.flags = 0,
>>> +		.page_size_log2 = PVR_DEVICE_PAGE_SHIFT,
>>> +	},
>>> +	[DRM_PVR_HEAP_TRANSFER_FRAG] = {
>>> +		.base = ROGUE_TRANSFER_FRAG_HEAP_BASE,
>>> +		.size = ROGUE_TRANSFER_FRAG_HEAP_SIZE,
>>> +		.flags = 0,
>>> +		.page_size_log2 = PVR_DEVICE_PAGE_SHIFT,
>>> +	},
>>> +};
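
Worked example of the reserved-size arithmetic (assuming 4 KiB CPU pages):
the general heap's last static data area above ends at offset 128 + size
1024 = 1152 bytes, so GET_RESERVED_SIZE(128, 1024) = round_up(1152,
PAGE_SIZE) = 4096, i.e. one reserved page; the PDS code/data heap's last
area ends at 128 + 128 = 256 bytes, which likewise rounds up to 4096.
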
>>> +
>>> +int
>>> +pvr_static_data_areas_get(const struct pvr_device *pvr_dev,
>>> +			  struct drm_pvr_ioctl_dev_query_args *args)
>>> +{
>>> +	struct drm_pvr_dev_query_static_data_areas query = {0};
>>> +	int err;
>>> +
>>> +	if (!args->pointer) {
>>> +		args->size = sizeof(struct drm_pvr_dev_query_static_data_areas);
>>> +		return 0;
>>> +	}
>>> +
>>> +	err = PVR_UOBJ_GET(query, args->size, args->pointer);
>>> +	if (err < 0)
>>> +		return err;
>>> +
>>> +	if (!query.static_data_areas.array) {
>>> +		query.static_data_areas.count = ARRAY_SIZE(static_data_areas);
>>> +		query.static_data_areas.stride = sizeof(struct drm_pvr_static_data_area);
>>> +		goto copy_out;
>>> +	}
>>> +
>>> +	if (query.static_data_areas.count > ARRAY_SIZE(static_data_areas))
>>> +		query.static_data_areas.count = ARRAY_SIZE(static_data_areas);
>>> +
>>> +	err = PVR_UOBJ_SET_ARRAY(&query.static_data_areas, static_data_areas);
>>> +	if (err < 0)
>>> +		return err;
>>> +
>>> +copy_out:
>>> +	err = PVR_UOBJ_SET(args->pointer, args->size, query);
>>> +	if (err < 0)
>>> +		return err;
>>> +
>>> +	args->size = sizeof(query);
>>> +	return 0;
>>> +}
>>> +
>>> +int
>>> +pvr_heap_info_get(const struct pvr_device *pvr_dev,
>>> +		  struct drm_pvr_ioctl_dev_query_args *args)
>>> +{
>>> +	struct drm_pvr_dev_query_heap_info query = {0};
>>> +	u64 dest;
>>> +	int err;
>>> +
>>> +	if (!args->pointer) {
>>> +		args->size = sizeof(struct drm_pvr_dev_query_heap_info);
>>> +		return 0;
>>> +	}
>>> +
>>> +	err = PVR_UOBJ_GET(query, args->size, args->pointer);
>>> +	if (err < 0)
>>> +		return err;
>>> +
>>> +	if (!query.heaps.array) {
>>> +		query.heaps.count = ARRAY_SIZE(pvr_heaps);
>>> +		query.heaps.stride = sizeof(struct drm_pvr_heap);
>>> +		goto copy_out;
>>> +	}
>>> +
>>> +	if (query.heaps.count > ARRAY_SIZE(pvr_heaps))
>>> +		query.heaps.count = ARRAY_SIZE(pvr_heaps);
>>> +
>>> +	/* Region header heap is only present if BRN63142 is present. */
>>> +	dest = query.heaps.array;
>>> +	for (size_t i = 0; i < query.heaps.count; i++) {
>>> +		struct drm_pvr_heap heap = pvr_heaps[i];
>>> +
>>> +		if (i == DRM_PVR_HEAP_RGNHDR && !PVR_HAS_QUIRK(pvr_dev, 63142))
>>> +			heap.size = 0;
>>> +
>>> +		err = PVR_UOBJ_SET(dest, query.heaps.stride, heap);
>>> +		if (err < 0)
>>> +			return err;
>>> +
>>> +		dest += query.heaps.stride;
>>> +	}
>>> +
>>> +copy_out:
>>> +	err = PVR_UOBJ_SET(args->pointer, args->size, query);
>>> +	if (err < 0)
>>> +		return err;
>>> +
>>> +	args->size = sizeof(query);
>>> +	return 0;
>>> +}
>>> +
>>> +/**
>>> + * pvr_heap_contains_range() - Determine if a given heap contains the specified
>>> + *                             device-virtual address range.
>>> + * @pvr_heap: Target heap.
>>> + * @start: Inclusive start of the target range.
>>> + * @end: Inclusive end of the target range.
>>> + *
>>> + * It is an error to call this function with values of @start and @end that do
>>> + * not satisfy the condition @start <= @end.
>>> + */
>>> +static __always_inline bool
>>> +pvr_heap_contains_range(const struct drm_pvr_heap *pvr_heap, u64 start, u64 end)
>>> +{
>>> +	return pvr_heap->base <= start && end < pvr_heap->base + pvr_heap->size;
>>> +}
>>> +
>>> +/**
>>> + * pvr_find_heap_containing() - Find a heap which contains the specified
>>> + *                              device-virtual address range.
>>> + * @pvr_dev: Target PowerVR device.
>>> + * @start: Start of the target range.
>>> + * @size: Size of the target range.
>>> + *
>>> + * Return:
>>> + *  * A pointer to a constant instance of struct drm_pvr_heap representing the
>>> + *    heap containing the entire range specified by @start and @size on
>>> + *    success, or
>>> + *  * %NULL if no such heap exists.
>>> + */
>>> +const struct drm_pvr_heap *
>>> +pvr_find_heap_containing(struct pvr_device *pvr_dev, u64 start, u64 size)
>>> +{
>>> +	u64 end;
>>> +
>>> +	if (check_add_overflow(start, size - 1, &end))
>>> +		return NULL;
>>> +
>>> +	/*
>>> +	 * There are no guarantees about the order of address ranges in
>>> +	 * &pvr_heaps, so iterate over the entire array for a heap whose
>>> +	 * range completely encompasses the given range.
>>> +	 */
>>> +	for (u32 heap_id = 0; heap_id < ARRAY_SIZE(pvr_heaps); heap_id++) {
>>> +		/* Filter heaps that present only with an associated quirk */
>>> +		if (heap_id == DRM_PVR_HEAP_RGNHDR &&
>>> +		    !PVR_HAS_QUIRK(pvr_dev, 63142)) {
>>> +			continue;
>>> +		}
>>> +
>>> +		if (pvr_heap_contains_range(&pvr_heaps[heap_id], start, end))
>>> +			return &pvr_heaps[heap_id];
>>> +	}
>>> +
>>> +	return NULL;
>>> +}
>>> +
>>> +/**
>>> + * pvr_vm_find_gem_object() - Look up a buffer object from a given
>>> + *                            device-virtual address.
>>> + * @vm_ctx: [IN] Target VM context.
>>> + * @device_addr: [IN] Virtual device address at the start of the required
>>> + *               object.
>>> + * @mapped_offset_out: [OUT] Pointer to location to write offset of the start
>>> + *                     of the mapped region within the buffer object. May be
>>> + *                     %NULL if this information is not required.
>>> + * @mapped_size_out: [OUT] Pointer to location to write size of the mapped
>>> + *                   region. May be %NULL if this information is not required.
>>> + *
>>> + * If successful, a reference will be taken on the buffer object. The caller
>>> + * must drop the reference with pvr_gem_object_put().
>>> + *
>>> + * Return:
>>> + *  * The PowerVR buffer object mapped at @device_addr if one exists, or
>>> + *  * %NULL otherwise.
>>> + */
>>> +struct pvr_gem_object *
>>> +pvr_vm_find_gem_object(struct pvr_vm_context *vm_ctx, u64 device_addr,
>>> +		       u64 *mapped_offset_out, u64 *mapped_size_out)
>>> +{
>>> +	struct pvr_gem_object *pvr_obj;
>>> +	struct drm_gpuva *va;
>>> +
>>> +	mutex_lock(&vm_ctx->lock);
>>> +
>>> +	va = drm_gpuva_find_first(&vm_ctx->gpuva_mgr, device_addr, 1);
>>> +	if (!va)
>>> +		goto err_unlock;
>>> +
>>> +	pvr_obj = gem_to_pvr_gem(va->gem.obj);
>>> +	pvr_gem_object_get(pvr_obj);
>>> +
>>> +	if (mapped_offset_out)
>>> +		*mapped_offset_out = va->gem.offset;
>>> +	if (mapped_size_out)
>>> +		*mapped_size_out = va->va.range;
>>> +
>>> +	mutex_unlock(&vm_ctx->lock);
>>> +
>>> +	return pvr_obj;
>>> +
>>> +err_unlock:
>>> +	mutex_unlock(&vm_ctx->lock);
>>> +
>>> +	return NULL;
>>> +}
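
A minimal usage sketch for this lookup (hypothetical caller, not part of
the series); the reference taken on success must be dropped with
pvr_gem_object_put():

	struct pvr_gem_object *pvr_obj;
	u64 offset, size;

	pvr_obj = pvr_vm_find_gem_object(vm_ctx, device_addr, &offset, &size);
	if (!pvr_obj)
		return -ENOENT;

	/* ... use pvr_obj, offset and size ... */

	pvr_gem_object_put(pvr_obj);
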
>>> +
>>> +/**
>>> + * pvr_vm_get_fw_mem_context() - Get object representing firmware memory context
>>> + * @vm_ctx: Target VM context.
>>> + *
>>> + * Returns:
>>> + *  * FW object representing firmware memory context, or
>>> + *  * %NULL if this VM context does not have a firmware memory context.
>>> + */
>>> +struct pvr_fw_object *
>>> +pvr_vm_get_fw_mem_context(struct pvr_vm_context *vm_ctx)
>>> +{
>>> +	return vm_ctx->fw_mem_ctx_obj;
>>> +}
>>> diff --git a/drivers/gpu/drm/imagination/pvr_vm.h b/drivers/gpu/drm/imagination/pvr_vm.h
>>> new file mode 100644
>>> index 000000000000..b98bc3981807
>>> --- /dev/null
>>> +++ b/drivers/gpu/drm/imagination/pvr_vm.h
>>> @@ -0,0 +1,60 @@
>>> +/* SPDX-License-Identifier: GPL-2.0 OR MIT */
>>> +/* Copyright (c) 2023 Imagination Technologies Ltd. */
>>> +
>>> +#ifndef PVR_VM_H
>>> +#define PVR_VM_H
>>> +
>>> +#include "pvr_rogue_mmu_defs.h"
>>> +
>>> +#include <uapi/drm/pvr_drm.h>
>>> +
>>> +#include <linux/types.h>
>>> +
>>> +/* Forward declaration from "pvr_device.h" */
>>> +struct pvr_device;
>>> +struct pvr_file;
>>> +
>>> +/* Forward declaration from "pvr_gem.h" */
>>> +struct pvr_gem_object;
>>> +
>>> +/* Forward declaration from "pvr_vm.c" */
>>> +struct pvr_vm_context;
>>> +
>>> +/* Forward declaration from <uapi/drm/pvr_drm.h> */
>>> +struct drm_pvr_ioctl_get_heap_info_args;
>>> +
>>> +/* Functions defined in pvr_vm.c */
>>> +
>>> +bool pvr_device_addr_is_valid(u64 device_addr);
>>> +bool pvr_device_addr_and_size_are_valid(u64 device_addr, u64 size);
>>> +
>>> +struct pvr_vm_context *pvr_vm_create_context(struct pvr_device *pvr_dev,
>>> +					     bool is_userspace_context);
>>> +
>>> +int pvr_vm_map(struct pvr_vm_context *vm_ctx,
>>> +	       struct pvr_gem_object *pvr_obj, u64 pvr_obj_offset,
>>> +	       u64 device_addr, u64 size);
>>> +int pvr_vm_unmap(struct pvr_vm_context *vm_ctx, u64 device_addr, u64 size);
>>> +
>>> +dma_addr_t pvr_vm_get_page_table_root_addr(struct pvr_vm_context *vm_ctx);
>>> +
>>> +int pvr_static_data_areas_get(const struct pvr_device *pvr_dev,
>>> +			      struct drm_pvr_ioctl_dev_query_args *args);
>>> +int pvr_heap_info_get(const struct pvr_device *pvr_dev,
>>> +		      struct drm_pvr_ioctl_dev_query_args *args);
>>> +const struct drm_pvr_heap *pvr_find_heap_containing(struct pvr_device *pvr_dev,
>>> +						    u64 addr, u64 size);
>>> +
>>> +struct pvr_gem_object *pvr_vm_find_gem_object(struct pvr_vm_context *vm_ctx,
>>> +					      u64 device_addr,
>>> +					      u64 *mapped_offset_out,
>>> +					      u64 *mapped_size_out);
>>> +
>>> +struct pvr_fw_object *
>>> +pvr_vm_get_fw_mem_context(struct pvr_vm_context *vm_ctx);
>>> +
>>> +struct pvr_vm_context *pvr_vm_context_lookup(struct pvr_file *pvr_file, u32 handle);
>>> +bool pvr_vm_context_put(struct pvr_vm_context *vm_ctx);
>>> +void pvr_destroy_vm_contexts_for_file(struct pvr_file *pvr_file);
>>> +
>>> +#endif /* PVR_VM_H */
>>> -- 
>>> 2.41.0
>>>


^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [EXTERNAL] Re: [PATCH v5 08/17] drm/imagination: Add GEM and VM related code
@ 2023-08-21 11:05         ` Danilo Krummrich
  0 siblings, 0 replies; 77+ messages in thread
From: Danilo Krummrich @ 2023-08-21 11:05 UTC (permalink / raw)
  To: Donald Robson, Sarah Walker
  Cc: matthew.brost, tzimmermann, hns, linux-kernel, mripard, afd,
	Matt Coster, luben.tuikov, dri-devel, boris.brezillon,
	christian.koenig, faith.ekstrand

On 8/21/23 10:30, Donald Robson wrote:
> Hi Danilo,
> Thanks for the feedback.  On the subject of locking, I have dma_resv locking
> in another branch where I'm trying to enable bind queues, but I didn't think
> I needed locking for the single, synchronous operations seen here.  Would a
> mutex on the gem object wrapper suffice?

The reason you need to either acquire the dma-resv lock or a driver 
specific lock for drm_gpuva_link() and drm_gpuva_unlink() is that you 
probably iterate the linked drm_gpuvas somewhere else, typically from a
ttm_device_funcs.move callback to unmap mappings when their backing GEM 
objects are evicted. If you handle that a different way and never 
iterate linked drm_gpuvas, you can probably omit drm_gpuva_(un)link() 
entirely.
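
For context, a rough sketch of the kind of iteration meant here. This is
hypothetical, assuming the drm_gem_for_each_gpuva() helper from the GPU VA
manager and that the GEM object's dma-resv lock is the one protecting its
gpuva list:

	/* e.g. called on eviction, such as from a ttm_device_funcs.move
	 * callback, to tear down all mappings backed by @obj
	 */
	static void pvr_gem_unmap_all(struct drm_gem_object *obj)
	{
		struct drm_gpuva *va;

		dma_resv_assert_held(obj->resv);

		drm_gem_for_each_gpuva(va, obj) {
			/* unmap va->va.addr .. va->va.addr + va->va.range
			 * from the owning VM context's page tables
			 */
		}
	}
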

> Thanks,
> Donald
> On Fri, 2023-08-18 at 17:30 +0200, Danilo Krummrich wrote:
>>
>> Hi Sarah,
>>
>> On Wed, Aug 16, 2023 at 09:25:23AM +0100, Sarah Walker wrote:
>>> Add a GEM implementation based on drm_gem_shmem, and support code for the
>>> PowerVR GPU MMU. The GPU VA manager is used for address space management.
>>>
>>> Changes since v4:
>>> - Correct sync function in vmap/vunmap function documentation
>>> - Update for upstream GPU VA manager
>>> - Fix missing frees when unmapping drm_gpuva objects
>>> - Always zero GEM BOs on creation
>>>
>>> Changes since v3:
>>> - Split MMU and VM code
>>> - Register page table allocations with kmemleak
>>> - Use drm_dev_{enter,exit}
>>>
>>> Changes since v2:
>>> - Use GPU VA manager
>>> - Use drm_gem_shmem
>>>
>>> Co-developed-by: Matt Coster <matt.coster@imgtec.com>
>>> Signed-off-by: Matt Coster <matt.coster@imgtec.com>
>>> Co-developed-by: Donald Robson <donald.robson@imgtec.com>
>>> Signed-off-by: Donald Robson <donald.robson@imgtec.com>
>>> Signed-off-by: Sarah Walker <sarah.walker@imgtec.com>
>>> ---
>>>   drivers/gpu/drm/imagination/Makefile     |    5 +-
>>>   drivers/gpu/drm/imagination/pvr_device.c |   23 +-
>>>   drivers/gpu/drm/imagination/pvr_device.h |   18 +
>>>   drivers/gpu/drm/imagination/pvr_drv.c    |  302 ++-
>>>   drivers/gpu/drm/imagination/pvr_gem.c    |  396 ++++
>>>   drivers/gpu/drm/imagination/pvr_gem.h    |  177 ++
>>>   drivers/gpu/drm/imagination/pvr_mmu.c    | 2487 ++++++++++++++++++++++
>>>   drivers/gpu/drm/imagination/pvr_mmu.h    |  108 +
>>>   drivers/gpu/drm/imagination/pvr_vm.c     |  890 ++++++++
>>>   drivers/gpu/drm/imagination/pvr_vm.h     |   60 +
>>>   10 files changed, 4455 insertions(+), 11 deletions(-)
>>>   create mode 100644 drivers/gpu/drm/imagination/pvr_gem.c
>>>   create mode 100644 drivers/gpu/drm/imagination/pvr_gem.h
>>>   create mode 100644 drivers/gpu/drm/imagination/pvr_mmu.c
>>>   create mode 100644 drivers/gpu/drm/imagination/pvr_mmu.h
>>>   create mode 100644 drivers/gpu/drm/imagination/pvr_vm.c
>>>   create mode 100644 drivers/gpu/drm/imagination/pvr_vm.h
>>
>> <snip>
>>
>>> diff --git a/drivers/gpu/drm/imagination/pvr_vm.c b/drivers/gpu/drm/imagination/pvr_vm.c
>>> new file mode 100644
>>> index 000000000000..616fad3a3325
>>> --- /dev/null
>>> +++ b/drivers/gpu/drm/imagination/pvr_vm.c
>>> @@ -0,0 +1,890 @@
>>> +// SPDX-License-Identifier: GPL-2.0 OR MIT
>>> +/* Copyright (c) 2023 Imagination Technologies Ltd. */
>>> +
>>> +#include "pvr_vm.h"
>>> +
>>> +#include "pvr_device.h"
>>> +#include "pvr_drv.h"
>>> +#include "pvr_gem.h"
>>> +#include "pvr_mmu.h"
>>> +#include "pvr_rogue_fwif.h"
>>> +#include "pvr_rogue_heap_config.h"
>>> +
>>> +#include <drm/drm_gem.h>
>>> +#include <drm/drm_gpuva_mgr.h>
>>> +
>>> +#include <linux/container_of.h>
>>> +#include <linux/err.h>
>>> +#include <linux/errno.h>
>>> +#include <linux/gfp_types.h>
>>> +#include <linux/kref.h>
>>> +#include <linux/mutex.h>
>>> +#include <linux/stddef.h>
>>> +
>>> +/**
>>> + * DOC: Memory context
>>> + *
>>> + * This is the "top level" datatype in the VM code. It's exposed in the public
>>> + * API as an opaque handle.
>>> + */
>>> +
>>> +/**
>>> + * struct pvr_vm_context - Context type which encapsulates an entire page table
>>> + * tree structure.
>>> + * @pvr_dev: The PowerVR device to which this context is bound.
>>> + *
>>> + * This binding is immutable for the life of the context.
>>> + * @mmu_ctx: The context for binding to physical memory.
>>> + * @gpuva_mgr: GPUVA manager object associated with this context.
>>> + * @lock: Global lock on this entire structure of page tables.
>>> + * @fw_mem_ctx_obj: Firmware object representing firmware memory context.
>>> + * @ref_count: Reference count of object.
>>> + */
>>> +struct pvr_vm_context {
>>> +	struct pvr_device *pvr_dev;
>>> +	struct pvr_mmu_context *mmu_ctx;
>>> +	struct drm_gpuva_manager gpuva_mgr;
>>> +	struct mutex lock;
>>> +	struct pvr_fw_object *fw_mem_ctx_obj;
>>> +	struct kref ref_count;
>>> +};
>>> +
>>> +/**
>>> + * pvr_vm_get_page_table_root_addr() - Get the DMA address of the root of the
>>> + *                                     page table structure behind a VM context.
>>> + * @vm_ctx: Target VM context.
>>> + */
>>> +dma_addr_t pvr_vm_get_page_table_root_addr(struct pvr_vm_context *vm_ctx)
>>> +{
>>> +	return pvr_mmu_get_root_table_dma_addr(vm_ctx->mmu_ctx);
>>> +}
>>> +
>>> +/**
>>> + * DOC: Memory mappings
>>> + */
>>> +
>>> +/**
>>> + * pvr_vm_gpuva_mapping_init() - Setup a mapping object with the specified
>>> + * parameters ready for mapping with the drm_gpuva_manager.
>>> + * @va: Pointer to drm_gpuva mapping object.
>>> + * @device_addr: Device-virtual address at the start of the mapping.
>>> + * @size: Size of the desired mapping.
>>> + * @pvr_obj: Target PowerVR memory object.
>>> + * @pvr_obj_offset: Offset into @pvr_obj to begin mapping from.
>>> + *
>>> + * Some parameters of this function are unchecked. It is therefore the caller's
>>> + * responsibility to ensure certain constraints are met. Specifically:
>>> + *
>>> + * * @pvr_obj_offset must be less than the size of @pvr_obj,
>>> + * * The sum of @pvr_obj_offset and @size must be less than or equal to the
>>> + *   size of @pvr_obj,
>>> + * * The range specified by @pvr_obj_offset and @size (the "CPU range") must be
>>> + *   CPU page-aligned both in start position and size, and
>>> + * * The range specified by @device_addr and @size (the "device range") must be
>>> + *   device page-aligned both in start position and size.
>>> + *
>>> + * Furthermore, it is up to the caller to make sure that a reference to @pvr_obj
>>> + * is taken prior to mapping @va with the drm_gpuva_manager.
>>> + */
>>> +static void
>>> +pvr_vm_gpuva_mapping_init(struct drm_gpuva *va, u64 device_addr, u64 size,
>>> +			  struct pvr_gem_object *pvr_obj, u64 pvr_obj_offset)
>>
>> There's already drm_gpuva_init() doing the same thing.
>>
>>> +{
>>> +	va->va.addr = device_addr;
>>> +	va->va.range = size;
>>> +	va->gem.obj = gem_from_pvr_gem(pvr_obj);
>>> +	va->gem.offset = pvr_obj_offset;
>>> +}
>>> +
>>> +struct pvr_vm_gpuva_op_ctx {
>>> +	struct pvr_vm_context *vm_ctx;
>>> +	struct pvr_mmu_op_context *mmu_op_ctx;
>>> +	struct drm_gpuva *new_va, *prev_va, *next_va;
>>> +};
>>> +
>>> +/**
>>> + * pvr_vm_gpuva_map() - Insert a mapping into a memory context.
>>> + * @op: gpuva op containing the remap details.
>>> + * @op_ctx: Operation context.
>>> + *
>>> + * Context: Called by drm_gpuva_sm_map following a successful mapping while
>>> + * @op_ctx.vm_ctx mutex is held.
>>> + *
>>> + * Return:
>>> + *  * 0 on success, or
>>> + *  * Any error returned by pvr_mmu_map().
>>> + */
>>> +static int
>>> +pvr_vm_gpuva_map(struct drm_gpuva_op *op, void *op_ctx)
>>> +{
>>> +	struct pvr_gem_object *pvr_gem = gem_to_pvr_gem(op->map.gem.obj);
>>> +	struct pvr_vm_gpuva_op_ctx *ctx = op_ctx;
>>> +	int err;
>>> +
>>> +	if ((op->map.gem.offset | op->map.va.range) & ~PVR_DEVICE_PAGE_MASK)
>>> +		return -EINVAL;
>>> +
>>> +	err = pvr_mmu_map(ctx->mmu_op_ctx, op->map.va.range, pvr_gem->flags,
>>> +			  op->map.va.addr);
>>> +	if (err)
>>> +		return err;
>>> +
>>> +	pvr_vm_gpuva_mapping_init(ctx->new_va, op->map.va.addr,
>>> +				  op->map.va.range, pvr_gem, op->map.gem.offset);
>>> +
>>> +	drm_gpuva_map(&ctx->vm_ctx->gpuva_mgr, ctx->new_va, &op->map);
>>
>> drm_gpuva_map() does use drm_gpuva_init_from_op() internally, hence the extra
>> call to pvr_vm_gpuva_mapping_init() should be unnecessary.
>>
>>> +	drm_gpuva_link(ctx->new_va);
>>
>> How is this protected?
>>
>> drm_gpuva_link() and drm_gpuva_unlink() require either the dma_resv lock of the
>> corresponding GEM object being held or, alternatively, the driver specific lock
>> indicated via drm_gem_gpuva_set_lock().
>>
>>> +	ctx->new_va = NULL;
>>> +
>>> +	/*
>>> +	 * Increment the refcount on the underlying physical memory resource
>>> +	 * to prevent de-allocation while the mapping exists.
>>> +	 */
>>> +	pvr_gem_object_get(pvr_gem);
>>> +
>>> +	return 0;
>>> +}
>>> +
>>> +/**
>>> + * pvr_vm_gpuva_unmap() - Remove a mapping from a memory context.
>>> + * @op: gpuva op containing the unmap details.
>>> + * @op_ctx: Operation context.
>>> + *
>>> + * Context: Called by drm_gpuva_sm_unmap following a successful unmapping while
>>> + * @op_ctx.vm_ctx mutex is held.
>>> + *
>>> + * Return:
>>> + *  * 0 on success, or
>>> + *  * Any error returned by pvr_mmu_unmap().
>>> + */
>>> +static int
>>> +pvr_vm_gpuva_unmap(struct drm_gpuva_op *op, void *op_ctx)
>>> +{
>>> +	struct pvr_gem_object *pvr_gem = gem_to_pvr_gem(op->unmap.va->gem.obj);
>>> +	struct pvr_vm_gpuva_op_ctx *ctx = op_ctx;
>>> +
>>> +	int err = pvr_mmu_unmap(ctx->mmu_op_ctx, op->unmap.va->va.addr,
>>> +				op->unmap.va->va.range);
>>> +
>>> +	if (err)
>>> +		return err;
>>> +
>>> +	drm_gpuva_unmap(&op->unmap);
>>> +	drm_gpuva_unlink(op->unmap.va);
>>> +	kfree(op->unmap.va);
>>> +
>>> +	pvr_gem_object_put(pvr_gem);
>>> +
>>> +	return 0;
>>> +}
>>> +
>>> +/**
>>> + * pvr_vm_gpuva_remap() - Remap a mapping within a memory context.
>>> + * @op: gpuva op containing the remap details.
>>> + * @op_ctx: Operation context.
>>> + *
>>> + * Context: Called by either drm_gpuva_sm_map or drm_gpuva_sm_unmap when a
>>> + * mapping or unmapping operation causes a region to be split. The
>>> + * @op_ctx.vm_ctx mutex is held.
>>> + *
>>> + * Return:
>>> + *  * 0 on success, or
>>> + *  * Any error returned by pvr_mmu_unmap().
>>> + */
>>> +static int
>>> +pvr_vm_gpuva_remap(struct drm_gpuva_op *op, void *op_ctx)
>>> +{
>>> +	struct pvr_vm_gpuva_op_ctx *ctx = op_ctx;
>>> +
>>> +	if (op->remap.unmap) {
>>
>> You can omit this check, remap operations always contain a valid unmap
>> operation. However, you might want to know whether the remap operation was
>> generated due to a call to drm_gpuva_sm_map() or drm_gpuva_sm_unmap(), since for
>> the latter you might want to free page table structures.
>>
>>> +		const u64 va_start = op->remap.prev ?
>>> +				     op->remap.prev->va.addr + op->remap.prev->va.range :
>>> +				     op->remap.unmap->va->va.addr;
>>> +		const u64 va_end = op->remap.next ?
>>> +				   op->remap.next->va.addr :
>>> +				   op->remap.unmap->va->va.addr + op->remap.unmap->va->va.range;
>>
>> This seems to be a common calculation for drivers, it is probably worth to come
>> up with a helper, something like
>> drm_gpuva_op_unmap_range(struct drm_gpuva_op *op, u64 *addr, u64 *range).
>>
>>> +
>>> +		int err = pvr_mmu_unmap(ctx->mmu_op_ctx, va_start,
>>> +					va_end - va_start);
>>> +
>>> +		if (err)
>>> +			return err;
>>> +	}
>>> +
>>> +	if (op->remap.prev)
>>> +		pvr_vm_gpuva_mapping_init(ctx->prev_va, op->remap.prev->va.addr,
>>> +					  op->remap.prev->va.range,
>>> +					  gem_to_pvr_gem(op->remap.prev->gem.obj),
>>> +					  op->remap.prev->gem.offset);
>>> +
>>> +	if (op->remap.next)
>>> +		pvr_vm_gpuva_mapping_init(ctx->next_va, op->remap.next->va.addr,
>>> +					  op->remap.next->va.range,
>>> +					  gem_to_pvr_gem(op->remap.next->gem.obj),
>>> +					  op->remap.next->gem.offset);
>>> +
>>> +	/* No actual remap required: the page table tree depth is fixed to 3,
>>> +	 * and we use 4k page table entries only for now.
>>> +	 */
>>> +	drm_gpuva_remap(ctx->prev_va, ctx->next_va, &op->remap);
>>
>> As above, drm_gpuva_remap() does use drm_gpuva_init_from_op() internally, hence
>> the extra call to pvr_vm_gpuva_mapping_init() should be unnecessary.
>>
>>> +
>>> +	if (op->remap.prev) {
>>> +		pvr_gem_object_get(gem_to_pvr_gem(ctx->prev_va->gem.obj));
>>> +		drm_gpuva_link(ctx->prev_va);
>>> +		ctx->prev_va = NULL;
>>> +	}
>>> +
>>> +	if (op->remap.next) {
>>> +		pvr_gem_object_get(gem_to_pvr_gem(ctx->next_va->gem.obj));
>>> +		drm_gpuva_link(ctx->next_va);
>>> +		ctx->next_va = NULL;
>>> +	}
>>> +
>>> +	if (op->remap.unmap) {
>>
>> As above, no need for this check.
>>
>> - Danilo
>>
>>> +		struct pvr_gem_object *pvr_gem = gem_to_pvr_gem(op->remap.unmap->va->gem.obj);
>>> +
>>> +		drm_gpuva_unlink(op->unmap.va);
>>> +		kfree(op->unmap.va);
>>> +
>>> +		pvr_gem_object_put(pvr_gem);
>>> +	}
>>> +
>>> +	return 0;
>>> +}
>>> +
>>> +/*
>>> + * Public API
>>> + *
>>> + * For an overview of these functions, see *DOC: Public API* in "pvr_vm.h".
>>> + */
>>> +
>>> +/**
>>> + * pvr_device_addr_is_valid() - Tests whether a device-virtual address
>>> + *                              is valid.
>>> + * @device_addr: Virtual device address to test.
>>> + *
>>> + * Return:
>>> + *  * %true if @device_addr is within the valid range for a device page
>>> + *    table and is aligned to the device page size, or
>>> + *  * %false otherwise.
>>> + */
>>> +bool
>>> +pvr_device_addr_is_valid(u64 device_addr)
>>> +{
>>> +	return (device_addr & ~PVR_PAGE_TABLE_ADDR_MASK) == 0 &&
>>> +	       (device_addr & ~PVR_DEVICE_PAGE_MASK) == 0;
>>> +}
>>> +
>>> +/**
>>> + * pvr_device_addr_and_size_are_valid() - Tests whether a device-virtual
>>> + * address and associated size are both valid.
>>> + * @device_addr: Virtual device address to test.
>>> + * @size: Size of the range based at @device_addr to test.
>>> + *
>>> + * Calling pvr_device_addr_is_valid() twice (once on @size, and again on
>>> + * @device_addr + @size) to verify a device-virtual address range initially
>>> + * seems intuitive, but it produces a false-negative when the address range
>>> + * is right at the end of device-virtual address space.
>>> + *
>>> + * This function catches that corner case, as well as checking that
>>> + * @size is non-zero.
>>> + *
>>> + * Return:
>>> + *  * %true if @device_addr is device page aligned; @size is device page
>>> + *    aligned; the range specified by @device_addr and @size is within the
>>> + *    bounds of the device-virtual address space, and @size is non-zero, or
>>> + *  * %false otherwise.
>>> + */
>>> +bool
>>> +pvr_device_addr_and_size_are_valid(u64 device_addr, u64 size)
>>> +{
>>> +	return pvr_device_addr_is_valid(device_addr) &&
>>> +	       size != 0 && (size & ~PVR_DEVICE_PAGE_MASK) == 0 &&
>>> +	       (device_addr + size <= PVR_PAGE_TABLE_ADDR_SPACE_SIZE);
>>> +}
>>> +
>>> +static const struct drm_gpuva_fn_ops pvr_vm_gpuva_ops = {
>>> +	.sm_step_map = pvr_vm_gpuva_map,
>>> +	.sm_step_remap = pvr_vm_gpuva_remap,
>>> +	.sm_step_unmap = pvr_vm_gpuva_unmap,
>>> +};
>>> +
>>> +/**
>>> + * pvr_vm_create_context() - Create a new VM context.
>>> + * @pvr_dev: Target PowerVR device.
>>> + * @is_userspace_context: %true if this context is for userspace. This will
>>> + *                        create a firmware memory context for the VM context
>>> + *                        and disable warnings when tearing down mappings.
>>> + *
>>> + * Return:
>>> + *  * A handle to the newly-minted VM context on success,
>>> + *  * -%EINVAL if the feature "virtual address space bits" on @pvr_dev is
>>> + *    missing or has an unsupported value,
>>> + *  * -%ENOMEM if allocation of the structure behind the opaque handle fails,
>>> + *    or
>>> + *  * Any error encountered while setting up internal structures.
>>> + */
>>> +struct pvr_vm_context *
>>> +pvr_vm_create_context(struct pvr_device *pvr_dev, bool is_userspace_context)
>>> +{
>>> +	struct drm_device *drm_dev = from_pvr_device(pvr_dev);
>>> +
>>> +	struct pvr_vm_context *vm_ctx;
>>> +	u16 device_addr_bits;
>>> +
>>> +	int err;
>>> +
>>> +	err = PVR_FEATURE_VALUE(pvr_dev, virtual_address_space_bits,
>>> +				&device_addr_bits);
>>> +	if (err) {
>>> +		drm_err(drm_dev,
>>> +			"Failed to get device virtual address space bits\n");
>>> +		return ERR_PTR(err);
>>> +	}
>>> +
>>> +	if (device_addr_bits != PVR_PAGE_TABLE_ADDR_BITS) {
>>> +		drm_err(drm_dev,
>>> +			"Device has unsupported virtual address space size\n");
>>> +		return ERR_PTR(-EINVAL);
>>> +	}
>>> +
>>> +	vm_ctx = kzalloc(sizeof(*vm_ctx), GFP_KERNEL);
>>> +	if (!vm_ctx)
>>> +		return ERR_PTR(-ENOMEM);
>>> +
>>> +	vm_ctx->pvr_dev = pvr_dev;
>>> +	kref_init(&vm_ctx->ref_count);
>>> +	mutex_init(&vm_ctx->lock);
>>> +
>>> +	drm_gpuva_manager_init(&vm_ctx->gpuva_mgr,
>>> +			       is_userspace_context ? "PowerVR-user-VM" : "PowerVR-FW-VM",
>>> +			       0, 1ULL << device_addr_bits, 0, 0, &pvr_vm_gpuva_ops);
>>> +
>>> +	vm_ctx->mmu_ctx = pvr_mmu_context_create(pvr_dev);
>>> +	err = PTR_ERR_OR_ZERO(vm_ctx->mmu_ctx);
>>> +	if (err) {
>>> +		vm_ctx->mmu_ctx = NULL;
>>> +		goto err_put_ctx;
>>> +	}
>>> +
>>> +	if (is_userspace_context) {
>>> +		/* TODO: Create FW mem context */
>>> +		err = -ENODEV;
>>> +		goto err_put_ctx;
>>> +	}
>>> +
>>> +	return vm_ctx;
>>> +
>>> +err_put_ctx:
>>> +	pvr_vm_context_put(vm_ctx);
>>> +
>>> +	return ERR_PTR(err);
>>> +}
>>> +
>>> +/**
>>> + * pvr_vm_context_release() - Teardown a VM context.
>>> + * @ref_count: Pointer to reference counter of the VM context.
>>> + *
>>> + * This function ensures that no mappings are left dangling by unmapping them
>>> + * all in order of ascending device-virtual address.
>>> + */
>>> +static void
>>> +pvr_vm_context_release(struct kref *ref_count)
>>> +{
>>> +	struct pvr_vm_context *vm_ctx =
>>> +		container_of(ref_count, struct pvr_vm_context, ref_count);
>>> +
>>> +	/* TODO: Destroy FW mem context */
>>> +	WARN_ON(vm_ctx->fw_mem_ctx_obj);
>>> +
>>> +	WARN_ON(pvr_vm_unmap(vm_ctx, vm_ctx->gpuva_mgr.mm_start,
>>> +			     vm_ctx->gpuva_mgr.mm_range));
>>> +
>>> +	drm_gpuva_manager_destroy(&vm_ctx->gpuva_mgr);
>>> +	pvr_mmu_context_destroy(vm_ctx->mmu_ctx);
>>> +	mutex_destroy(&vm_ctx->lock);
>>> +
>>> +	kfree(vm_ctx);
>>> +}
>>> +
>>> +/**
>>> + * pvr_vm_context_lookup() - Look up VM context from handle
>>> + * @pvr_file: Pointer to pvr_file structure.
>>> + * @handle: Object handle.
>>> + *
>>> + * Takes reference on VM context object. Call pvr_vm_context_put() to release.
>>> + *
>>> + * Returns:
>>> + *  * The requested object on success, or
>>> + *  * %NULL on failure (object does not exist in list, or is not a VM context)
>>> + */
>>> +struct pvr_vm_context *
>>> +pvr_vm_context_lookup(struct pvr_file *pvr_file, u32 handle)
>>> +{
>>> +	struct pvr_vm_context *vm_ctx;
>>> +
>>> +	xa_lock(&pvr_file->vm_ctx_handles);
>>> +	vm_ctx = xa_load(&pvr_file->vm_ctx_handles, handle);
>>> +	if (vm_ctx)
>>> +		kref_get(&vm_ctx->ref_count);
>>> +
>>> +	xa_unlock(&pvr_file->vm_ctx_handles);
>>> +
>>> +	return vm_ctx;
>>> +}
>>> +
>>> +/**
>>> + * pvr_vm_context_put() - Release a reference on a VM context
>>> + * @vm_ctx: Target VM context.
>>> + *
>>> + * Returns:
>>> + *  * %true if the VM context was destroyed, or
>>> + *  * %false if there are any references still remaining.
>>> + */
>>> +bool
>>> +pvr_vm_context_put(struct pvr_vm_context *vm_ctx)
>>> +{
>>> +	WARN_ON(!vm_ctx);
>>> +
>>> +	if (vm_ctx)
>>> +		return kref_put(&vm_ctx->ref_count, pvr_vm_context_release);
>>> +
>>> +	return true;
>>> +}
>>> +
>>> +/**
>>> + * pvr_destroy_vm_contexts_for_file() - Destroy any VM contexts associated with the
>>> + * given file.
>>> + * @pvr_file: Pointer to pvr_file structure.
>>> + *
>>> + * Removes all vm_contexts associated with @pvr_file from the device VM context
>>> + * list and drops initial references. vm_contexts will then be destroyed once
>>> + * all outstanding references are dropped.
>>> + */
>>> +void pvr_destroy_vm_contexts_for_file(struct pvr_file *pvr_file)
>>> +{
>>> +	struct pvr_vm_context *vm_ctx;
>>> +	unsigned long handle;
>>> +
>>> +	xa_for_each(&pvr_file->vm_ctx_handles, handle, vm_ctx) {
>>> +		/* vm_ctx is not used here because that would create a race with xa_erase */
>>> +		pvr_vm_context_put(xa_erase(&pvr_file->vm_ctx_handles, handle));
>>> +	}
>>> +}
>>> +
>>> +/**
>>> + * pvr_vm_map() - Map a section of physical memory into a section of device-virtual memory.
>>> + * @vm_ctx: Target VM context.
>>> + * @pvr_obj: Target PowerVR memory object.
>>> + * @pvr_obj_offset: Offset into @pvr_obj to map from.
>>> + * @device_addr: Virtual device address at the start of the requested mapping.
>>> + * @size: Size of the requested mapping.
>>> + *
>>> + * No handle is returned to represent the mapping. Instead, callers should
>>> + * remember @device_addr and use that as a handle.
>>> + *
>>> + * Return:
>>> + *  * 0 on success,
>>> + *  * -%EINVAL if @device_addr is not a valid page-aligned device-virtual
>>> + *    address; the region specified by @pvr_obj_offset and @size does not fall
>>> + *    entirely within @pvr_obj, or any part of the specified region of @pvr_obj
>>> + *    is not device-virtual page-aligned,
>>> + *  * Any error encountered while performing internal operations required to
>>> + *    create the mapping (returned from pvr_vm_gpuva_map or
>>> + *    pvr_vm_gpuva_remap).
>>> + */
>>> +int
>>> +pvr_vm_map(struct pvr_vm_context *vm_ctx,
>>> +	   struct pvr_gem_object *pvr_obj, u64 pvr_obj_offset,
>>> +	   u64 device_addr, u64 size)
>>> +{
>>> +	const size_t pvr_obj_size = pvr_gem_object_size(pvr_obj);
>>> +	struct pvr_vm_gpuva_op_ctx op_ctx = { .vm_ctx = vm_ctx };
>>> +	struct sg_table *sgt;
>>> +	int err;
>>> +
>>> +	if (!pvr_device_addr_and_size_are_valid(device_addr, size) ||
>>> +	    pvr_obj_offset & ~PAGE_MASK || size & ~PAGE_MASK ||
>>> +	    pvr_obj_offset + size > pvr_obj_size ||
>>> +	    pvr_obj_offset > pvr_obj_size) {
>>> +		return -EINVAL;
>>> +	}
>>> +
>>> +	op_ctx.new_va = kzalloc(sizeof(*op_ctx.new_va), GFP_KERNEL);
>>> +	op_ctx.prev_va = kzalloc(sizeof(*op_ctx.prev_va), GFP_KERNEL);
>>> +	op_ctx.next_va = kzalloc(sizeof(*op_ctx.next_va), GFP_KERNEL);
>>> +	if (!op_ctx.new_va || !op_ctx.prev_va || !op_ctx.next_va) {
>>> +		err = -ENOMEM;
>>> +		goto out_free;
>>> +	}
>>> +
>>> +	sgt = pvr_gem_object_get_pages_sgt(pvr_obj);
>>> +	err = PTR_ERR_OR_ZERO(sgt);
>>> +	if (err)
>>> +		goto out_free;
>>> +
>>> +	op_ctx.mmu_op_ctx = pvr_mmu_op_context_create(vm_ctx->mmu_ctx, sgt,
>>> +						      pvr_obj_offset, size);
>>> +	err = PTR_ERR_OR_ZERO(op_ctx.mmu_op_ctx);
>>> +	if (err) {
>>> +		op_ctx.mmu_op_ctx = NULL;
>>> +		goto out_mmu_op_ctx_destroy;
>>> +	}
>>> +
>>> +	mutex_lock(&vm_ctx->lock);
>>> +	err = drm_gpuva_sm_map(&vm_ctx->gpuva_mgr, &op_ctx, device_addr, size,
>>> +			       gem_from_pvr_gem(pvr_obj), pvr_obj_offset);
>>> +	mutex_unlock(&vm_ctx->lock);
>>> +
>>> +out_mmu_op_ctx_destroy:
>>> +	pvr_mmu_op_context_destroy(op_ctx.mmu_op_ctx);
>>> +
>>> +out_free:
>>> +	kfree(op_ctx.next_va);
>>> +	kfree(op_ctx.prev_va);
>>> +	kfree(op_ctx.new_va);
>>> +
>>> +	return err;
>>> +}
>>> +
>>> +/**
>>> + * pvr_vm_unmap() - Unmap an already mapped section of device-virtual memory.
>>> + * @vm_ctx: Target VM context.
>>> + * @device_addr: Virtual device address at the start of the target mapping.
>>> + * @size: Size of the target mapping.
>>> + *
>>> + * Return:
>>> + *  * 0 on success,
>>> + *  * -%EINVAL if @device_addr is not a valid page-aligned device-virtual
>>> + *    address,
>>> + *  * Any error encountered while performing internal operations required to
>>> + *    destroy the mapping (returned from pvr_vm_gpuva_unmap or
>>> + *    pvr_vm_gpuva_remap).
>>> + */
>>> +int
>>> +pvr_vm_unmap(struct pvr_vm_context *vm_ctx, u64 device_addr, u64 size)
>>> +{
>>> +	struct pvr_vm_gpuva_op_ctx op_ctx = { .vm_ctx = vm_ctx };
>>> +	int err;
>>> +
>>> +	if (!pvr_device_addr_and_size_are_valid(device_addr, size))
>>> +		return -EINVAL;
>>> +
>>> +	op_ctx.prev_va = kzalloc(sizeof(*op_ctx.prev_va), GFP_KERNEL);
>>> +	op_ctx.next_va = kzalloc(sizeof(*op_ctx.next_va), GFP_KERNEL);
>>> +	if (!op_ctx.prev_va || !op_ctx.next_va) {
>>> +		err = -ENOMEM;
>>> +		goto out;
>>> +	}
>>> +
>>> +	op_ctx.mmu_op_ctx =
>>> +		pvr_mmu_op_context_create(vm_ctx->mmu_ctx, NULL, 0, 0);
>>> +	err = PTR_ERR_OR_ZERO(op_ctx.mmu_op_ctx);
>>> +	if (err) {
>>> +		op_ctx.mmu_op_ctx = NULL;
>>> +		goto out;
>>> +	}
>>> +
>>> +	mutex_lock(&vm_ctx->lock);
>>> +	err = drm_gpuva_sm_unmap(&vm_ctx->gpuva_mgr, &op_ctx, device_addr, size);
>>> +	mutex_unlock(&vm_ctx->lock);
>>> +
>>> +out:
>>> +	pvr_mmu_op_context_destroy(op_ctx.mmu_op_ctx);
>>> +	kfree(op_ctx.next_va);
>>> +	kfree(op_ctx.prev_va);
>>> +
>>> +	return err;
>>> +}
>>> +
>>> +/*
>>> + * Static data areas are determined by firmware.
>>> + *
>>> + * When adding a new static data area you will also need to update the reserved_size field for the
>>> + * heap in pvr_heaps[].
>>> + */
>>> +static const struct drm_pvr_static_data_area static_data_areas[] = {
>>> +	{
>>> +		.area_usage = DRM_PVR_STATIC_DATA_AREA_FENCE,
>>> +		.location_heap_id = DRM_PVR_HEAP_GENERAL,
>>> +		.offset = 0,
>>> +		.size = 128,
>>> +	},
>>> +	{
>>> +		.area_usage = DRM_PVR_STATIC_DATA_AREA_YUV_CSC,
>>> +		.location_heap_id = DRM_PVR_HEAP_GENERAL,
>>> +		.offset = 128,
>>> +		.size = 1024,
>>> +	},
>>> +	{
>>> +		.area_usage = DRM_PVR_STATIC_DATA_AREA_VDM_SYNC,
>>> +		.location_heap_id = DRM_PVR_HEAP_PDS_CODE_DATA,
>>> +		.offset = 0,
>>> +		.size = 128,
>>> +	},
>>> +	{
>>> +		.area_usage = DRM_PVR_STATIC_DATA_AREA_EOT,
>>> +		.location_heap_id = DRM_PVR_HEAP_PDS_CODE_DATA,
>>> +		.offset = 128,
>>> +		.size = 128,
>>> +	},
>>> +	{
>>> +		.area_usage = DRM_PVR_STATIC_DATA_AREA_VDM_SYNC,
>>> +		.location_heap_id = DRM_PVR_HEAP_USC_CODE,
>>> +		.offset = 0,
>>> +		.size = 128,
>>> +	},
>>> +};
>>> +
>>> +#define GET_RESERVED_SIZE(last_offset, last_size) round_up((last_offset) + (last_size), PAGE_SIZE)
>>> +
>>> +/*
>>> + * The values given to GET_RESERVED_SIZE() are taken from the last entry in the corresponding
>>> + * static data area for each heap.
>>> + */
>>> +static const struct drm_pvr_heap pvr_heaps[] = {
>>> +	[DRM_PVR_HEAP_GENERAL] = {
>>> +		.base = ROGUE_GENERAL_HEAP_BASE,
>>> +		.size = ROGUE_GENERAL_HEAP_SIZE,
>>> +		.flags = 0,
>>> +		.page_size_log2 = PVR_DEVICE_PAGE_SHIFT,
>>> +	},
>>> +	[DRM_PVR_HEAP_PDS_CODE_DATA] = {
>>> +		.base = ROGUE_PDSCODEDATA_HEAP_BASE,
>>> +		.size = ROGUE_PDSCODEDATA_HEAP_SIZE,
>>> +		.flags = 0,
>>> +		.page_size_log2 = PVR_DEVICE_PAGE_SHIFT,
>>> +	},
>>> +	[DRM_PVR_HEAP_USC_CODE] = {
>>> +		.base = ROGUE_USCCODE_HEAP_BASE,
>>> +		.size = ROGUE_USCCODE_HEAP_SIZE,
>>> +		.flags = 0,
>>> +		.page_size_log2 = PVR_DEVICE_PAGE_SHIFT,
>>> +	},
>>> +	[DRM_PVR_HEAP_RGNHDR] = {
>>> +		.base = ROGUE_RGNHDR_HEAP_BASE,
>>> +		.size = ROGUE_RGNHDR_HEAP_SIZE,
>>> +		.flags = 0,
>>> +		.page_size_log2 = PVR_DEVICE_PAGE_SHIFT,
>>> +	},
>>> +	[DRM_PVR_HEAP_VIS_TEST] = {
>>> +		.base = ROGUE_VISTEST_HEAP_BASE,
>>> +		.size = ROGUE_VISTEST_HEAP_SIZE,
>>> +		.flags = 0,
>>> +		.page_size_log2 = PVR_DEVICE_PAGE_SHIFT,
>>> +	},
>>> +	[DRM_PVR_HEAP_TRANSFER_FRAG] = {
>>> +		.base = ROGUE_TRANSFER_FRAG_HEAP_BASE,
>>> +		.size = ROGUE_TRANSFER_FRAG_HEAP_SIZE,
>>> +		.flags = 0,
>>> +		.page_size_log2 = PVR_DEVICE_PAGE_SHIFT,
>>> +	},
>>> +};
>>> +
>>> +int
>>> +pvr_static_data_areas_get(const struct pvr_device *pvr_dev,
>>> +			  struct drm_pvr_ioctl_dev_query_args *args)
>>> +{
>>> +	struct drm_pvr_dev_query_static_data_areas query = {0};
>>> +	int err;
>>> +
>>> +	if (!args->pointer) {
>>> +		args->size = sizeof(struct drm_pvr_dev_query_static_data_areas);
>>> +		return 0;
>>> +	}
>>> +
>>> +	err = PVR_UOBJ_GET(query, args->size, args->pointer);
>>> +	if (err < 0)
>>> +		return err;
>>> +
>>> +	if (!query.static_data_areas.array) {
>>> +		query.static_data_areas.count = ARRAY_SIZE(static_data_areas);
>>> +		query.static_data_areas.stride = sizeof(struct drm_pvr_static_data_area);
>>> +		goto copy_out;
>>> +	}
>>> +
>>> +	if (query.static_data_areas.count > ARRAY_SIZE(static_data_areas))
>>> +		query.static_data_areas.count = ARRAY_SIZE(static_data_areas);
>>> +
>>> +	err = PVR_UOBJ_SET_ARRAY(&query.static_data_areas, static_data_areas);
>>> +	if (err < 0)
>>> +		return err;
>>> +
>>> +copy_out:
>>> +	err = PVR_UOBJ_SET(args->pointer, args->size, query);
>>> +	if (err < 0)
>>> +		return err;
>>> +
>>> +	args->size = sizeof(query);
>>> +	return 0;
>>> +}
>>> +
>>> +int
>>> +pvr_heap_info_get(const struct pvr_device *pvr_dev,
>>> +		  struct drm_pvr_ioctl_dev_query_args *args)
>>> +{
>>> +	struct drm_pvr_dev_query_heap_info query = {0};
>>> +	u64 dest;
>>> +	int err;
>>> +
>>> +	if (!args->pointer) {
>>> +		args->size = sizeof(struct drm_pvr_dev_query_heap_info);
>>> +		return 0;
>>> +	}
>>> +
>>> +	err = PVR_UOBJ_GET(query, args->size, args->pointer);
>>> +	if (err < 0)
>>> +		return err;
>>> +
>>> +	if (!query.heaps.array) {
>>> +		query.heaps.count = ARRAY_SIZE(pvr_heaps);
>>> +		query.heaps.stride = sizeof(struct drm_pvr_heap);
>>> +		goto copy_out;
>>> +	}
>>> +
>>> +	if (query.heaps.count > ARRAY_SIZE(pvr_heaps))
>>> +		query.heaps.count = ARRAY_SIZE(pvr_heaps);
>>> +
>>> +	/* Region header heap is only present if BRN63142 is present. */
>>> +	dest = query.heaps.array;
>>> +	for (size_t i = 0; i < query.heaps.count; i++) {
>>> +		struct drm_pvr_heap heap = pvr_heaps[i];
>>> +
>>> +		if (i == DRM_PVR_HEAP_RGNHDR && !PVR_HAS_QUIRK(pvr_dev, 63142))
>>> +			heap.size = 0;
>>> +
>>> +		err = PVR_UOBJ_SET(dest, query.heaps.stride, heap);
>>> +		if (err < 0)
>>> +			return err;
>>> +
>>> +		dest += query.heaps.stride;
>>> +	}
>>> +
>>> +copy_out:
>>> +	err = PVR_UOBJ_SET(args->pointer, args->size, query);
>>> +	if (err < 0)
>>> +		return err;
>>> +
>>> +	args->size = sizeof(query);
>>> +	return 0;
>>> +}
>>> +
>>> +/**
>>> + * pvr_heap_contains_range() - Determine if a given heap contains the specified
>>> + *                             device-virtual address range.
>>> + * @pvr_heap: Target heap.
>>> + * @start: Inclusive start of the target range.
>>> + * @end: Inclusive end of the target range.
>>> + *
>>> + * It is an error to call this function with values of @start and @end that do
>>> + * not satisfy the condition @start <= @end.
>>> + */
>>> +static __always_inline bool
>>> +pvr_heap_contains_range(const struct drm_pvr_heap *pvr_heap, u64 start, u64 end)
>>> +{
>>> +	return pvr_heap->base <= start && end < pvr_heap->base + pvr_heap->size;
>>> +}
>>> +
>>> +/**
>>> + * pvr_find_heap_containing() - Find a heap which contains the specified
>>> + *                              device-virtual address range.
>>> + * @pvr_dev: Target PowerVR device.
>>> + * @start: Start of the target range.
>>> + * @size: Size of the target range.
>>> + *
>>> + * Return:
>>> + *  * A pointer to a constant instance of struct drm_pvr_heap representing the
>>> + *    heap containing the entire range specified by @start and @size on
>>> + *    success, or
>>> + *  * %NULL if no such heap exists.
>>> + */
>>> +const struct drm_pvr_heap *
>>> +pvr_find_heap_containing(struct pvr_device *pvr_dev, u64 start, u64 size)
>>> +{
>>> +	u64 end;
>>> +
>>> +	if (check_add_overflow(start, size - 1, &end))
>>> +		return NULL;
>>> +
>>> +	/*
>>> +	 * There are no guarantees about the order of address ranges in
>>> +	 * &pvr_heaps, so iterate over the entire array for a heap whose
>>> +	 * range completely encompasses the given range.
>>> +	 */
>>> +	for (u32 heap_id = 0; heap_id < ARRAY_SIZE(pvr_heaps); heap_id++) {
>>> +		/* Filter heaps that present only with an associated quirk */
>>> +		if (heap_id == DRM_PVR_HEAP_RGNHDR &&
>>> +		    !PVR_HAS_QUIRK(pvr_dev, 63142)) {
>>> +			continue;
>>> +		}
>>> +
>>> +		if (pvr_heap_contains_range(&pvr_heaps[heap_id], start, end))
>>> +			return &pvr_heaps[heap_id];
>>> +	}
>>> +
>>> +	return NULL;
>>> +}
>>> +
>>> +/**
>>> + * pvr_vm_find_gem_object() - Look up a buffer object from a given
>>> + *                            device-virtual address.
>>> + * @vm_ctx: [IN] Target VM context.
>>> + * @device_addr: [IN] Virtual device address at the start of the required
>>> + *               object.
>>> + * @mapped_offset_out: [OUT] Pointer to location to write offset of the start
>>> + *                     of the mapped region within the buffer object. May be
>>> + *                     %NULL if this information is not required.
>>> + * @mapped_size_out: [OUT] Pointer to location to write size of the mapped
>>> + *                   region. May be %NULL if this information is not required.
>>> + *
>>> + * If successful, a reference will be taken on the buffer object. The caller
>>> + * must drop the reference with pvr_gem_object_put().
>>> + *
>>> + * Return:
>>> + *  * The PowerVR buffer object mapped at @device_addr if one exists, or
>>> + *  * %NULL otherwise.
>>> + */
>>> +struct pvr_gem_object *
>>> +pvr_vm_find_gem_object(struct pvr_vm_context *vm_ctx, u64 device_addr,
>>> +		       u64 *mapped_offset_out, u64 *mapped_size_out)
>>> +{
>>> +	struct pvr_gem_object *pvr_obj;
>>> +	struct drm_gpuva *va;
>>> +
>>> +	mutex_lock(&vm_ctx->lock);
>>> +
>>> +	va = drm_gpuva_find_first(&vm_ctx->gpuva_mgr, device_addr, 1);
>>> +	if (!va)
>>> +		goto err_unlock;
>>> +
>>> +	pvr_obj = gem_to_pvr_gem(va->gem.obj);
>>> +	pvr_gem_object_get(pvr_obj);
>>> +
>>> +	if (mapped_offset_out)
>>> +		*mapped_offset_out = va->gem.offset;
>>> +	if (mapped_size_out)
>>> +		*mapped_size_out = va->va.range;
>>> +
>>> +	mutex_unlock(&vm_ctx->lock);
>>> +
>>> +	return pvr_obj;
>>> +
>>> +err_unlock:
>>> +	mutex_unlock(&vm_ctx->lock);
>>> +
>>> +	return NULL;
>>> +}
>>> +
>>> +/**
>>> + * pvr_vm_get_fw_mem_context: Get object representing firmware memory context
>>> + * @vm_ctx: Target VM context.
>>> + *
>>> + * Returns:
>>> + *  * FW object representing firmware memory context, or
>>> + *  * %NULL if this VM context does not have a firmware memory context.
>>> + */
>>> +struct pvr_fw_object *
>>> +pvr_vm_get_fw_mem_context(struct pvr_vm_context *vm_ctx)
>>> +{
>>> +	return vm_ctx->fw_mem_ctx_obj;
>>> +}
>>> diff --git a/drivers/gpu/drm/imagination/pvr_vm.h b/drivers/gpu/drm/imagination/pvr_vm.h
>>> new file mode 100644
>>> index 000000000000..b98bc3981807
>>> --- /dev/null
>>> +++ b/drivers/gpu/drm/imagination/pvr_vm.h
>>> @@ -0,0 +1,60 @@
>>> +/* SPDX-License-Identifier: GPL-2.0 OR MIT */
>>> +/* Copyright (c) 2023 Imagination Technologies Ltd. */
>>> +
>>> +#ifndef PVR_VM_H
>>> +#define PVR_VM_H
>>> +
>>> +#include "pvr_rogue_mmu_defs.h"
>>> +
>>> +#include <uapi/drm/pvr_drm.h>
>>> +
>>> +#include <linux/types.h>
>>> +
>>> +/* Forward declaration from "pvr_device.h" */
>>> +struct pvr_device;
>>> +struct pvr_file;
>>> +
>>> +/* Forward declaration from "pvr_gem.h" */
>>> +struct pvr_gem_object;
>>> +
>>> +/* Forward declaration from "pvr_vm.c" */
>>> +struct pvr_vm_context;
>>> +
>>> +/* Forward declaration from <uapi/drm/pvr_drm.h> */
>>> +struct drm_pvr_ioctl_get_heap_info_args;
>>> +
>>> +/* Functions defined in pvr_vm.c */
>>> +
>>> +bool pvr_device_addr_is_valid(u64 device_addr);
>>> +bool pvr_device_addr_and_size_are_valid(u64 device_addr, u64 size);
>>> +
>>> +struct pvr_vm_context *pvr_vm_create_context(struct pvr_device *pvr_dev,
>>> +					     bool is_userspace_context);
>>> +
>>> +int pvr_vm_map(struct pvr_vm_context *vm_ctx,
>>> +	       struct pvr_gem_object *pvr_obj, u64 pvr_obj_offset,
>>> +	       u64 device_addr, u64 size);
>>> +int pvr_vm_unmap(struct pvr_vm_context *vm_ctx, u64 device_addr, u64 size);
>>> +
>>> +dma_addr_t pvr_vm_get_page_table_root_addr(struct pvr_vm_context *vm_ctx);
>>> +
>>> +int pvr_static_data_areas_get(const struct pvr_device *pvr_dev,
>>> +			      struct drm_pvr_ioctl_dev_query_args *args);
>>> +int pvr_heap_info_get(const struct pvr_device *pvr_dev,
>>> +		      struct drm_pvr_ioctl_dev_query_args *args);
>>> +const struct drm_pvr_heap *pvr_find_heap_containing(struct pvr_device *pvr_dev,
>>> +						    u64 addr, u64 size);
>>> +
>>> +struct pvr_gem_object *pvr_vm_find_gem_object(struct pvr_vm_context *vm_ctx,
>>> +					      u64 device_addr,
>>> +					      u64 *mapped_offset_out,
>>> +					      u64 *mapped_size_out);
>>> +
>>> +struct pvr_fw_object *
>>> +pvr_vm_get_fw_mem_context(struct pvr_vm_context *vm_ctx);
>>> +
>>> +struct pvr_vm_context *pvr_vm_context_lookup(struct pvr_file *pvr_file, u32 handle);
>>> +bool pvr_vm_context_put(struct pvr_vm_context *vm_ctx);
>>> +void pvr_destroy_vm_contexts_for_file(struct pvr_file *pvr_file);
>>> +
>>> +#endif /* PVR_VM_H */
>>> -- 
>>> 2.41.0
>>>
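
As an aside for anyone wiring this up from userspace: the heap info and
static data area queries above follow a two-pass pattern - call the dev
query ioctl once with a NULL array so the kernel reports count and stride,
then allocate a buffer and call again to have it filled in. A minimal
sketch, where pvr_dev_query() is a hypothetical wrapper around that ioctl
and the struct layouts are only assumed from the code above (the uapi
header pvr_drm.h in the series is authoritative):

  #include <stdint.h>
  #include <stdlib.h>

  /* Layouts assumed from the kernel code above, not copied from pvr_drm.h. */
  struct pvr_obj_array {
      uint32_t stride;
      uint32_t count;
      uint64_t array;   /* userspace pointer, cast to u64 */
  };

  struct pvr_query_heap_info {
      struct pvr_obj_array heaps;
  };

  /* Hypothetical wrapper around the dev query ioctl; returns 0 on success. */
  int pvr_dev_query(int fd, struct pvr_query_heap_info *query);

  static void *query_heaps(int fd, uint32_t *count_out)
  {
      struct pvr_query_heap_info query = { 0 };
      void *buf;

      /* Pass 1: array == 0, so the kernel only reports count and stride. */
      if (pvr_dev_query(fd, &query))
          return NULL;

      buf = calloc(query.heaps.count, query.heaps.stride);
      if (!buf)
          return NULL;

      /* Pass 2: same count and stride, now with a destination buffer. */
      query.heaps.array = (uintptr_t)buf;
      if (pvr_dev_query(fd, &query)) {
          free(buf);
          return NULL;
      }

      *count_out = query.heaps.count;
      return buf;
  }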


^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v5 00/17] Imagination Technologies PowerVR DRM driver
  2023-08-16  8:25 ` Sarah Walker
@ 2023-08-23 22:31   ` Masahiro Yamada
  -1 siblings, 0 replies; 77+ messages in thread
From: Masahiro Yamada @ 2023-08-23 22:31 UTC (permalink / raw)
  To: Sarah Walker
  Cc: dri-devel, frank.binns, donald.robson, boris.brezillon,
	faith.ekstrand, airlied, daniel, maarten.lankhorst, mripard,
	tzimmermann, afd, hns, matthew.brost, christian.koenig,
	luben.tuikov, dakr, linux-kernel

On Fri, Aug 18, 2023 at 4:35 AM Sarah Walker <sarah.walker@imgtec.com> wrote:
>
> This patch series adds the initial DRM driver for Imagination Technologies PowerVR
> GPUs, starting with those based on our Rogue architecture. It's worth pointing
> out that this is a new driver, written from the ground up, rather than a
> refactored version of our existing downstream driver (pvrsrvkm).
>
> This new DRM driver supports:
> - GEM shmem allocations
> - dma-buf / PRIME
> - Per-context userspace managed virtual address space
> - DRM sync objects (binary and timeline)
> - Power management suspend / resume
> - GPU job submission (geometry, fragment, compute, transfer)
> - META firmware processor
> - MIPS firmware processor
> - GPU hang detection and recovery
>
> Currently our main focus is on the AXE-1-16M GPU. Testing so far has been done
> using a TI SK-AM62 board (AXE-1-16M GPU). Firmware for the AXE-1-16M can be
> found here:
> https://gitlab.freedesktop.org/frankbinns/linux-firmware/-/tree/powervr
>
> A Vulkan driver that works with our downstream kernel driver has already been
> merged into Mesa [1][2]. Support for this new DRM driver is being maintained in
> a merge request [3], with the branch located here:
> https://gitlab.freedesktop.org/frankbinns/mesa/-/tree/powervr-winsys
>
> Job stream formats are documented at:
> https://gitlab.freedesktop.org/mesa/mesa/-/blob/f8d2b42ae65c2f16f36a43e0ae39d288431e4263/src/imagination/csbgen/rogue_kmd_stream.xml
>
> The Vulkan driver is progressing towards Vulkan 1.0. We're code complete, and
> are working towards passing conformance. The current combination of this kernel
> driver with the Mesa Vulkan driver (powervr-mesa-next branch) achieves 88.3% conformance.
>
> The code in this patch series, along with some of its history, can also be found here:
> https://gitlab.freedesktop.org/frankbinns/powervr/-/tree/powervr-next
>
> This patch series has dependencies on a number of patches not yet merged. They
> are listed below :
>
> drm/sched: Convert drm scheduler to use a work queue rather than kthread:
>   https://lore.kernel.org/dri-devel/20230404002211.3611376-2-matthew.brost@intel.com/
> drm/sched: Move schedule policy to scheduler / entity:
>   https://lore.kernel.org/dri-devel/20230404002211.3611376-3-matthew.brost@intel.com/
> drm/sched: Add DRM_SCHED_POLICY_SINGLE_ENTITY scheduling policy:
>   https://lore.kernel.org/dri-devel/20230404002211.3611376-4-matthew.brost@intel.com/
> drm/sched: Start run wq before TDR in drm_sched_start:
>   https://lore.kernel.org/dri-devel/20230404002211.3611376-6-matthew.brost@intel.com/
> drm/sched: Submit job before starting TDR:
>   https://lore.kernel.org/dri-devel/20230404002211.3611376-7-matthew.brost@intel.com/
> drm/sched: Add helper to set TDR timeout:
>   https://lore.kernel.org/dri-devel/20230404002211.3611376-8-matthew.brost@intel.com/
>
> [1] https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/15243
> [2] https://gitlab.freedesktop.org/mesa/mesa/-/tree/main/src/imagination/vulkan
> [3] https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/15507
>
> High level summary of changes:
>
> v5:
> * Retrieve GPU device information from firmware image header
> * Address issues with DT binding and example DTS
> * Update VM code for upstream GPU VA manager
> * BOs are always zeroed on allocation
> * Update copyright
>
> v4:
> * Implemented hang recovery via firmware hard reset
> * Add support for partial render jobs
> * Move to a threaded IRQ
> * Remove unnecessary read/write and clock helpers
> * Remove device tree elements not relevant to AXE-1-16M
> * Clean up resource acquisition
> * Remove unused DT binding attributes
>
> v3:
> * Use drm_sched for scheduling
> * Use GPU VA manager
> * Use runtime PM
> * Use drm_gem_shmem
> * GPU watchdog and device loss handling
> * DT binding changes: remove unused attributes, add additionProperties:false
>
> v2:
> * Redesigned and simplified UAPI based on RFC feedback from XDC 2022
> * Support for transfer and partial render jobs
> * Support for timeline sync objects
>
> RFC v1: https://lore.kernel.org/dri-devel/20220815165156.118212-1-sarah.walker@imgtec.com/
>
> RFC v2: https://lore.kernel.org/dri-devel/20230413103419.293493-1-sarah.walker@imgtec.com/
>
> v3: https://lore.kernel.org/dri-devel/20230613144800.52657-1-sarah.walker@imgtec.com/
>
> v4: https://lore.kernel.org/dri-devel/20230714142355.111382-1-sarah.walker@imgtec.com/
>
> Matt Coster (1):
>   sizes.h: Add entries between 32G and 64T
>
> Sarah Walker (16):
>   dt-bindings: gpu: Add Imagination Technologies PowerVR GPU
>   drm/imagination/uapi: Add PowerVR driver UAPI
>   drm/imagination: Add skeleton PowerVR driver
>   drm/imagination: Get GPU resources
>   drm/imagination: Add GPU register and FWIF headers
>   drm/imagination: Add GPU ID parsing and firmware loading
>   drm/imagination: Add GEM and VM related code
>   drm/imagination: Implement power management
>   drm/imagination: Implement firmware infrastructure and META FW support
>   drm/imagination: Implement MIPS firmware processor and MMU support
>   drm/imagination: Implement free list and HWRT create and destroy
>     ioctls
>   drm/imagination: Implement context creation/destruction ioctls
>   drm/imagination: Implement job submission and scheduling
>   drm/imagination: Add firmware trace to debugfs
>   drm/imagination: Add driver documentation
>   arm64: dts: ti: k3-am62-main: Add GPU device node [DO NOT MERGE]






I failed to compile this patch set.

I applied this series to linux next-20230822 and set CONFIG_DRM_POWERVR=m.


I got this error.

  CC [M]  drivers/gpu/drm/imagination/pvr_ccb.o
In file included from drivers/gpu/drm/imagination/pvr_ccb.c:4:
drivers/gpu/drm/imagination/pvr_ccb.h:7:10: fatal error:
pvr_rogue_fwif.h: No such file or directory
    7 | #include "pvr_rogue_fwif.h"
      |          ^~~~~~~~~~~~~~~~~~
compilation terminated.
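
For what it's worth, one quick way to check whether the header is anywhere in
the series as it arrived (assuming the mails are saved locally as *.patch; the
glob is only illustrative) is:

  $ grep -l 'drivers/gpu/drm/imagination/pvr_rogue_fwif\.h' *.patch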



Did you forget to do 'git add' or am I missing something?


I do not see pvr_rogue_fwif.h
in the following diff stat.


>  .../devicetree/bindings/gpu/img,powervr.yaml  |   75 +
>  Documentation/gpu/drivers.rst                 |    2 +
>  Documentation/gpu/imagination/index.rst       |   14 +
>  Documentation/gpu/imagination/uapi.rst        |  174 +
>  .../gpu/imagination/virtual_memory.rst        |  462 ++
>  MAINTAINERS                                   |   10 +
>  arch/arm64/boot/dts/ti/k3-am62-main.dtsi      |    9 +
>  drivers/gpu/drm/Kconfig                       |    2 +
>  drivers/gpu/drm/Makefile                      |    1 +
>  drivers/gpu/drm/imagination/Kconfig           |   16 +
>  drivers/gpu/drm/imagination/Makefile          |   35 +
>  drivers/gpu/drm/imagination/pvr_ccb.c         |  641 ++
>  drivers/gpu/drm/imagination/pvr_ccb.h         |   71 +
>  drivers/gpu/drm/imagination/pvr_cccb.c        |  267 +
>  drivers/gpu/drm/imagination/pvr_cccb.h        |  109 +
>  drivers/gpu/drm/imagination/pvr_context.c     |  460 ++
>  drivers/gpu/drm/imagination/pvr_context.h     |  205 +
>  drivers/gpu/drm/imagination/pvr_debugfs.c     |   53 +
>  drivers/gpu/drm/imagination/pvr_debugfs.h     |   29 +
>  drivers/gpu/drm/imagination/pvr_device.c      |  651 ++
>  drivers/gpu/drm/imagination/pvr_device.h      |  704 ++
>  drivers/gpu/drm/imagination/pvr_device_info.c |  253 +
>  drivers/gpu/drm/imagination/pvr_device_info.h |  185 +
>  drivers/gpu/drm/imagination/pvr_drv.c         | 1515 ++++
>  drivers/gpu/drm/imagination/pvr_drv.h         |  129 +
>  drivers/gpu/drm/imagination/pvr_free_list.c   |  625 ++
>  drivers/gpu/drm/imagination/pvr_free_list.h   |  195 +
>  drivers/gpu/drm/imagination/pvr_fw.c          | 1470 ++++
>  drivers/gpu/drm/imagination/pvr_fw.h          |  508 ++
>  drivers/gpu/drm/imagination/pvr_fw_info.h     |  135 +
>  drivers/gpu/drm/imagination/pvr_fw_meta.c     |  554 ++
>  drivers/gpu/drm/imagination/pvr_fw_meta.h     |   14 +
>  drivers/gpu/drm/imagination/pvr_fw_mips.c     |  250 +
>  drivers/gpu/drm/imagination/pvr_fw_mips.h     |   38 +
>  .../gpu/drm/imagination/pvr_fw_startstop.c    |  301 +
>  .../gpu/drm/imagination/pvr_fw_startstop.h    |   13 +
>  drivers/gpu/drm/imagination/pvr_fw_trace.c    |  515 ++
>  drivers/gpu/drm/imagination/pvr_fw_trace.h    |   78 +
>  drivers/gpu/drm/imagination/pvr_gem.c         |  396 ++
>  drivers/gpu/drm/imagination/pvr_gem.h         |  184 +
>  drivers/gpu/drm/imagination/pvr_hwrt.c        |  549 ++
>  drivers/gpu/drm/imagination/pvr_hwrt.h        |  165 +
>  drivers/gpu/drm/imagination/pvr_job.c         |  770 ++
>  drivers/gpu/drm/imagination/pvr_job.h         |  161 +
>  drivers/gpu/drm/imagination/pvr_mmu.c         | 2523 +++++++
>  drivers/gpu/drm/imagination/pvr_mmu.h         |  108 +
>  drivers/gpu/drm/imagination/pvr_params.c      |  147 +
>  drivers/gpu/drm/imagination/pvr_params.h      |   72 +
>  drivers/gpu/drm/imagination/pvr_power.c       |  421 ++
>  drivers/gpu/drm/imagination/pvr_power.h       |   39 +
>  drivers/gpu/drm/imagination/pvr_queue.c       | 1455 ++++
>  drivers/gpu/drm/imagination/pvr_queue.h       |  179 +
>  .../gpu/drm/imagination/pvr_rogue_cr_defs.h   | 6193 +++++++++++++++++
>  .../imagination/pvr_rogue_cr_defs_client.h    |  159 +
>  drivers/gpu/drm/imagination/pvr_rogue_defs.h  |  179 +
>  drivers/gpu/drm/imagination/pvr_rogue_fwif.h  | 2208 ++++++
>  .../drm/imagination/pvr_rogue_fwif_check.h    |  491 ++
>  .../drm/imagination/pvr_rogue_fwif_client.h   |  371 +
>  .../imagination/pvr_rogue_fwif_client_check.h |  133 +
>  .../drm/imagination/pvr_rogue_fwif_common.h   |   60 +
>  .../drm/imagination/pvr_rogue_fwif_dev_info.h |  112 +
>  .../pvr_rogue_fwif_resetframework.h           |   28 +
>  .../gpu/drm/imagination/pvr_rogue_fwif_sf.h   | 1648 +++++
>  .../drm/imagination/pvr_rogue_fwif_shared.h   |  258 +
>  .../imagination/pvr_rogue_fwif_shared_check.h |  108 +
>  .../drm/imagination/pvr_rogue_fwif_stream.h   |   78 +
>  .../drm/imagination/pvr_rogue_heap_config.h   |  113 +
>  drivers/gpu/drm/imagination/pvr_rogue_meta.h  |  356 +
>  drivers/gpu/drm/imagination/pvr_rogue_mips.h  |  335 +
>  .../drm/imagination/pvr_rogue_mips_check.h    |   58 +
>  .../gpu/drm/imagination/pvr_rogue_mmu_defs.h  |  136 +
>  drivers/gpu/drm/imagination/pvr_stream.c      |  285 +
>  drivers/gpu/drm/imagination/pvr_stream.h      |   75 +
>  drivers/gpu/drm/imagination/pvr_stream_defs.c |  351 +
>  drivers/gpu/drm/imagination/pvr_stream_defs.h |   16 +
>  drivers/gpu/drm/imagination/pvr_sync.c        |  287 +
>  drivers/gpu/drm/imagination/pvr_sync.h        |   84 +
>  drivers/gpu/drm/imagination/pvr_vm.c          |  906 +++
>  drivers/gpu/drm/imagination/pvr_vm.h          |   60 +
>  drivers/gpu/drm/imagination/pvr_vm_mips.c     |  208 +
>  drivers/gpu/drm/imagination/pvr_vm_mips.h     |   22 +
>  include/linux/sizes.h                         |    9 +
>  include/uapi/drm/pvr_drm.h                    | 1303 ++++
>  83 files changed, 34567 insertions(+)
>  create mode 100644 Documentation/devicetree/bindings/gpu/img,powervr.yaml
>  create mode 100644 Documentation/gpu/imagination/index.rst
>  create mode 100644 Documentation/gpu/imagination/uapi.rst
>  create mode 100644 Documentation/gpu/imagination/virtual_memory.rst
>  create mode 100644 drivers/gpu/drm/imagination/Kconfig
>  create mode 100644 drivers/gpu/drm/imagination/Makefile
>  create mode 100644 drivers/gpu/drm/imagination/pvr_ccb.c
>  create mode 100644 drivers/gpu/drm/imagination/pvr_ccb.h
>  create mode 100644 drivers/gpu/drm/imagination/pvr_cccb.c
>  create mode 100644 drivers/gpu/drm/imagination/pvr_cccb.h
>  create mode 100644 drivers/gpu/drm/imagination/pvr_context.c
>  create mode 100644 drivers/gpu/drm/imagination/pvr_context.h
>  create mode 100644 drivers/gpu/drm/imagination/pvr_debugfs.c
>  create mode 100644 drivers/gpu/drm/imagination/pvr_debugfs.h
>  create mode 100644 drivers/gpu/drm/imagination/pvr_device.c
>  create mode 100644 drivers/gpu/drm/imagination/pvr_device.h
>  create mode 100644 drivers/gpu/drm/imagination/pvr_device_info.c
>  create mode 100644 drivers/gpu/drm/imagination/pvr_device_info.h
>  create mode 100644 drivers/gpu/drm/imagination/pvr_drv.c
>  create mode 100644 drivers/gpu/drm/imagination/pvr_drv.h
>  create mode 100644 drivers/gpu/drm/imagination/pvr_free_list.c
>  create mode 100644 drivers/gpu/drm/imagination/pvr_free_list.h
>  create mode 100644 drivers/gpu/drm/imagination/pvr_fw.c
>  create mode 100644 drivers/gpu/drm/imagination/pvr_fw.h
>  create mode 100644 drivers/gpu/drm/imagination/pvr_fw_info.h
>  create mode 100644 drivers/gpu/drm/imagination/pvr_fw_meta.c
>  create mode 100644 drivers/gpu/drm/imagination/pvr_fw_meta.h
>  create mode 100644 drivers/gpu/drm/imagination/pvr_fw_mips.c
>  create mode 100644 drivers/gpu/drm/imagination/pvr_fw_mips.h
>  create mode 100644 drivers/gpu/drm/imagination/pvr_fw_startstop.c
>  create mode 100644 drivers/gpu/drm/imagination/pvr_fw_startstop.h
>  create mode 100644 drivers/gpu/drm/imagination/pvr_fw_trace.c
>  create mode 100644 drivers/gpu/drm/imagination/pvr_fw_trace.h
>  create mode 100644 drivers/gpu/drm/imagination/pvr_gem.c
>  create mode 100644 drivers/gpu/drm/imagination/pvr_gem.h
>  create mode 100644 drivers/gpu/drm/imagination/pvr_hwrt.c
>  create mode 100644 drivers/gpu/drm/imagination/pvr_hwrt.h
>  create mode 100644 drivers/gpu/drm/imagination/pvr_job.c
>  create mode 100644 drivers/gpu/drm/imagination/pvr_job.h
>  create mode 100644 drivers/gpu/drm/imagination/pvr_mmu.c
>  create mode 100644 drivers/gpu/drm/imagination/pvr_mmu.h
>  create mode 100644 drivers/gpu/drm/imagination/pvr_params.c
>  create mode 100644 drivers/gpu/drm/imagination/pvr_params.h
>  create mode 100644 drivers/gpu/drm/imagination/pvr_power.c
>  create mode 100644 drivers/gpu/drm/imagination/pvr_power.h
>  create mode 100644 drivers/gpu/drm/imagination/pvr_queue.c
>  create mode 100644 drivers/gpu/drm/imagination/pvr_queue.h
>  create mode 100644 drivers/gpu/drm/imagination/pvr_rogue_cr_defs.h
>  create mode 100644 drivers/gpu/drm/imagination/pvr_rogue_cr_defs_client.h
>  create mode 100644 drivers/gpu/drm/imagination/pvr_rogue_defs.h
>  create mode 100644 drivers/gpu/drm/imagination/pvr_rogue_fwif.h
>  create mode 100644 drivers/gpu/drm/imagination/pvr_rogue_fwif_check.h
>  create mode 100644 drivers/gpu/drm/imagination/pvr_rogue_fwif_client.h
>  create mode 100644 drivers/gpu/drm/imagination/pvr_rogue_fwif_client_check.h
>  create mode 100644 drivers/gpu/drm/imagination/pvr_rogue_fwif_common.h
>  create mode 100644 drivers/gpu/drm/imagination/pvr_rogue_fwif_dev_info.h
>  create mode 100644 drivers/gpu/drm/imagination/pvr_rogue_fwif_resetframework.h
>  create mode 100644 drivers/gpu/drm/imagination/pvr_rogue_fwif_sf.h
>  create mode 100644 drivers/gpu/drm/imagination/pvr_rogue_fwif_shared.h
>  create mode 100644 drivers/gpu/drm/imagination/pvr_rogue_fwif_shared_check.h
>  create mode 100644 drivers/gpu/drm/imagination/pvr_rogue_fwif_stream.h
>  create mode 100644 drivers/gpu/drm/imagination/pvr_rogue_heap_config.h
>  create mode 100644 drivers/gpu/drm/imagination/pvr_rogue_meta.h
>  create mode 100644 drivers/gpu/drm/imagination/pvr_rogue_mips.h
>  create mode 100644 drivers/gpu/drm/imagination/pvr_rogue_mips_check.h
>  create mode 100644 drivers/gpu/drm/imagination/pvr_rogue_mmu_defs.h
>  create mode 100644 drivers/gpu/drm/imagination/pvr_stream.c
>  create mode 100644 drivers/gpu/drm/imagination/pvr_stream.h
>  create mode 100644 drivers/gpu/drm/imagination/pvr_stream_defs.c
>  create mode 100644 drivers/gpu/drm/imagination/pvr_stream_defs.h
>  create mode 100644 drivers/gpu/drm/imagination/pvr_sync.c
>  create mode 100644 drivers/gpu/drm/imagination/pvr_sync.h
>  create mode 100644 drivers/gpu/drm/imagination/pvr_vm.c
>  create mode 100644 drivers/gpu/drm/imagination/pvr_vm.h
>  create mode 100644 drivers/gpu/drm/imagination/pvr_vm_mips.c
>  create mode 100644 drivers/gpu/drm/imagination/pvr_vm_mips.h
>  create mode 100644 include/uapi/drm/pvr_drm.h
>
> --
> 2.41.0
>


-- 
Best Regards
Masahiro Yamada

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v5 00/17] Imagination Technologies PowerVR DRM driver
  2023-08-23 22:31   ` Masahiro Yamada
@ 2023-08-24  8:08     ` Sarah Walker
  -1 siblings, 0 replies; 77+ messages in thread
From: Sarah Walker @ 2023-08-24  8:08 UTC (permalink / raw)
  To: masahiroy
  Cc: matthew.brost, hns, linux-kernel, mripard, afd, luben.tuikov,
	dakr, dri-devel, tzimmermann, boris.brezillon, christian.koenig,
	faith.ekstrand, Donald Robson

On Thu, 2023-08-24 at 07:31 +0900, Masahiro Yamada wrote:
> On Fri, Aug 18, 2023 at 4:35 AM Sarah Walker <sarah.walker@imgtec.com> wrote:
> > This patch series adds the initial DRM driver for Imagination Technologies PowerVR
> > GPUs, starting with those based on our Rogue architecture. It's worth pointing
> > out that this is a new driver, written from the ground up, rather than a
> > refactored version of our existing downstream driver (pvrsrvkm).
> > 
> > 
> 
> 
> 
> 
> I failed to compile this patch set.
> 
> I applied this series to linux next-20230822 and set CONFIG_DRM_POWERVR=m.
> 
> 
> I got this error.
> 
>   CC [M]  drivers/gpu/drm/imagination/pvr_ccb.o
> In file included from drivers/gpu/drm/imagination/pvr_ccb.c:4:
> drivers/gpu/drm/imagination/pvr_ccb.h:7:10: fatal error:
> pvr_rogue_fwif.h: No such file or directory
>     7 | #include "pvr_rogue_fwif.h"
>       |          ^~~~~~~~~~~~~~~~~~
> compilation terminated.

Apologies, it appears patch 6 (which includes this and other headers) got
blocked by our IT email policy. Will resend.

Sarah

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v5 00/17] Imagination Technologies PowerVR DRM driver
  2023-08-24  8:08     ` Sarah Walker
@ 2023-08-24  8:14       ` Sarah Walker
  -1 siblings, 0 replies; 77+ messages in thread
From: Sarah Walker @ 2023-08-24  8:14 UTC (permalink / raw)
  To: masahiroy
  Cc: matthew.brost, hns, linux-kernel, mripard, afd, luben.tuikov,
	dakr, dri-devel, tzimmermann, boris.brezillon, christian.koenig,
	faith.ekstrand, Donald Robson

On Thu, 2023-08-24 at 09:08 +0100, Sarah Walker wrote:
> On Thu, 2023-08-24 at 07:31 +0900, Masahiro Yamada wrote:
> > On Fri, Aug 18, 2023 at 4:35 AM Sarah Walker <sarah.walker@imgtec.com> wrote:
> > > This patch series adds the initial DRM driver for Imagination Technologies PowerVR
> > > GPUs, starting with those based on our Rogue architecture. It's worth pointing
> > > out that this is a new driver, written from the ground up, rather than a
> > > refactored version of our existing downstream driver (pvrsrvkm).
> > > 
> > > 
> > 
> > 
> > 
> > I failed to compile this patch set.
> > 
> > I applied this series to linux next-20230822 and set CONFIG_DRM_POWERVR=m.
> > 
> > 
> > I got this error.
> > 
> >   CC [M]  drivers/gpu/drm/imagination/pvr_ccb.o
> > In file included from drivers/gpu/drm/imagination/pvr_ccb.c:4:
> > drivers/gpu/drm/imagination/pvr_ccb.h:7:10: fatal error:
> > pvr_rogue_fwif.h: No such file or directory
> >     7 | #include "pvr_rogue_fwif.h"
> >       |          ^~~~~~~~~~~~~~~~~~
> > compilation terminated.
> 
> Apologies, it appears patch 6 (which includes this and other headers) got
> blocked by our IT email policy. Will resend.

It appears it was sent, but may have been dropped due to size? It's at 
https://lists.freedesktop.org/archives/dri-devel/2023-August/418802.html.
Apologies for the confusion.

Sarah

^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v5 02/17] dt-bindings: gpu: Add Imagination Technologies PowerVR GPU
  2023-08-18  9:36     ` Linus Walleij
@ 2023-09-05 16:32       ` Frank Binns
  -1 siblings, 0 replies; 77+ messages in thread
From: Frank Binns @ 2023-09-05 16:32 UTC (permalink / raw)
  To: devicetree, krzysztof.kozlowski+dt, Sarah Walker, linus.walleij, robh+dt
  Cc: matthew.brost, hns, linux-kernel, dri-devel, afd, luben.tuikov,
	dakr, mripard, tzimmermann, boris.brezillon, christian.koenig,
	faith.ekstrand, Donald Robson

Hi Linus,

Thank you for your feedback (comments below).

On Fri, 2023-08-18 at 11:36 +0200, Linus Walleij wrote:
> Hi Sarah,
> 
> thanks for your patch!
> 
> Patches adding device tree bindings need to be CC:ed to
> devicetree@vger.kernel.org
> and the DT binding maintainers, I have added it for now.

Thank you - it looks like something went wrong when the patch was sent.

> 
> On Wed, Aug 16, 2023 at 10:26 AM Sarah Walker <sarah.walker@imgtec.com> wrote:
> 
> > Add the device tree binding documentation for the Series AXE GPU used in
> > TI AM62 SoCs.
> > 
> > Co-developed-by: Frank Binns <frank.binns@imgtec.com>
> > Signed-off-by: Frank Binns <frank.binns@imgtec.com>
> > Signed-off-by: Sarah Walker <sarah.walker@imgtec.com>
> (...)
> > +properties:
> > +  compatible:
> > +    items:
> > +      - enum:
> > +          - ti,am62-gpu
> > +      - const: img,powervr-seriesaxe
> 
> Should there not at least be a dash there?
> 
> img,powervr-series-axe?
> 
> It is spelled in two words in the commit message,
> Series AXE not SeriesAXE?

We've now changed the string to address your earlier feedback (see below).

> 
> Moreover, if this pertains to the AXE-1-16 and AXE-2-16 it is kind of a wildcard
> and we usually don't do that, I would use the exact version instead,
> such as:
> const: img,powervr-axe-1-16
> any reason not to do this?

The exact GPU model/revision is fully discoverable via a register. The same
is true for Mali Bifrost, which uses a single compatible string to cover
multiple models [1], so we took the same approach. We'll add a comment in v6
along the lines of the one in the Mali Bifrost bindings.

> 
> I asked about the relationship between these strings and the product
> designations earlier I think :/

Sorry about that, I honestly thought we'd addressed that bit of feedback by
changing the compatible string, but clearly we hadn't :( Thank you for
catching this.

We've now changed the string to "img,img-axe" to align with the marketing name,
and have updated the commit message and various other places to refer to
PowerVR and IMG GPUs (the DRM driver supports both).
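
For illustration, the AM62 example in the binding would then look roughly
like this (the node name, unit address, clock, interrupt and power-domain
values here are placeholders; the DTS patch in the series is authoritative):

  gpu@fd00000 {
      /* SoC integration string first, generic IMG architecture string second */
      compatible = "ti,am62-gpu", "img,img-axe";
      reg = <0x0fd00000 0x20000>;                      /* placeholder */
      clocks = <&k3_clks 187 0>;                       /* placeholder */
      clock-names = "core";
      interrupts = <GIC_SPI 86 IRQ_TYPE_LEVEL_HIGH>;   /* placeholder */
      power-domains = <&k3_pds 187 TI_SCI_PD_EXCLUSIVE>;
  };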

Thanks
Frank

[1] 
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/devicetree/bindings/gpu/arm,mali-bifrost.yaml?h=v6.5#n29


^ permalink raw reply	[flat|nested] 77+ messages in thread

* Re: [PATCH v5 09/17] drm/imagination: Implement power management
  2023-08-16 10:56     ` Paul Cercueil
@ 2023-09-05 16:34       ` Frank Binns
  -1 siblings, 0 replies; 77+ messages in thread
From: Frank Binns @ 2023-09-05 16:34 UTC (permalink / raw)
  To: dri-devel, Sarah Walker, paul
  Cc: matthew.brost, hns, linux-kernel, mripard, afd, luben.tuikov,
	dakr, Donald Robson, tzimmermann, boris.brezillon,
	christian.koenig, faith.ekstrand

Hi Paul,

On Wed, 2023-08-16 at 12:56 +0200, Paul Cercueil wrote:
> Hi Sarah,
> 
> Le mercredi 16 août 2023 à 09:25 +0100, Sarah Walker a écrit :
> > Add power management to the driver, using runtime pm. The power off
> > sequence depends on firmware commands which are not implemented in
> > this
> > patch.
> > 
> > Changes since v4:
> > - Suspend runtime PM before unplugging device on rmmod
> > 
> > Changes since v3:
> > - Don't power device when calling pvr_device_gpu_fini()
> > - Documentation for pvr_dev->lost has been improved
> > - pvr_power_init() renamed to pvr_watchdog_init()
> > - Use drm_dev_{enter,exit}
> > 
> > Changes since v2:
> > - Use runtime PM
> > - Implement watchdog
> > 
> > Signed-off-by: Sarah Walker <sarah.walker@imgtec.com>
> > ---
> >  drivers/gpu/drm/imagination/Makefile     |   1 +
> >  drivers/gpu/drm/imagination/pvr_device.c |  23 +-
> >  drivers/gpu/drm/imagination/pvr_device.h |  22 ++
> >  drivers/gpu/drm/imagination/pvr_drv.c    |  20 +-
> >  drivers/gpu/drm/imagination/pvr_power.c  | 271
> > +++++++++++++++++++++++
> >  drivers/gpu/drm/imagination/pvr_power.h  |  39 ++++
> >  6 files changed, 373 insertions(+), 3 deletions(-)
> >  create mode 100644 drivers/gpu/drm/imagination/pvr_power.c
> >  create mode 100644 drivers/gpu/drm/imagination/pvr_power.h
> > 
> > diff --git a/drivers/gpu/drm/imagination/Makefile
> > b/drivers/gpu/drm/imagination/Makefile
> > index 8fcabc1bea36..235e2d329e29 100644
> > --- a/drivers/gpu/drm/imagination/Makefile
> > +++ b/drivers/gpu/drm/imagination/Makefile
> > @@ -10,6 +10,7 @@ powervr-y := \
> >         pvr_fw.o \
> >         pvr_gem.o \
> >         pvr_mmu.o \
> > +       pvr_power.o \
> >         pvr_vm.o
> >  
> >  obj-$(CONFIG_DRM_POWERVR) += powervr.o
> > diff --git a/drivers/gpu/drm/imagination/pvr_device.c
> > b/drivers/gpu/drm/imagination/pvr_device.c
> > index ef8f7a2ff1a9..5dbd05f21238 100644
> > --- a/drivers/gpu/drm/imagination/pvr_device.c
> > +++ b/drivers/gpu/drm/imagination/pvr_device.c
> > @@ -5,6 +5,7 @@
> >  #include "pvr_device_info.h"
> >  
> >  #include "pvr_fw.h"
> > +#include "pvr_power.h"
> >  #include "pvr_rogue_cr_defs.h"
> >  #include "pvr_vm.h"
> >  
> > @@ -357,6 +358,8 @@ pvr_device_gpu_fini(struct pvr_device *pvr_dev)
> >  int
> >  pvr_device_init(struct pvr_device *pvr_dev)
> >  {
> > +       struct drm_device *drm_dev = from_pvr_device(pvr_dev);
> > +       struct device *dev = drm_dev->dev;
> >         int err;
> >  
> >         /* Enable and initialize clocks required for the device to
> > operate. */
> > @@ -364,13 +367,29 @@ pvr_device_init(struct pvr_device *pvr_dev)
> >         if (err)
> >                 return err;
> >  
> > +       /* Explicitly power the GPU so we can access control
> > registers before the FW is booted. */
> > +       err = pm_runtime_resume_and_get(dev);
> > +       if (err)
> > +               return err;
> > +
> >         /* Map the control registers into memory. */
> >         err = pvr_device_reg_init(pvr_dev);
> >         if (err)
> > -               return err;
> > +               goto err_pm_runtime_put;
> >  
> >         /* Perform GPU-specific initialization steps. */
> > -       return pvr_device_gpu_init(pvr_dev);
> > +       err = pvr_device_gpu_init(pvr_dev);
> > +       if (err)
> > +               goto err_pm_runtime_put;
> > +
> > +       pm_runtime_put(dev);
> > +
> > +       return 0;
> > +
> > +err_pm_runtime_put:
> > +       pm_runtime_put_sync_suspend(dev);
> > +
> > +       return err;
> >  }
> >  
> >  /**
> > diff --git a/drivers/gpu/drm/imagination/pvr_device.h
> > b/drivers/gpu/drm/imagination/pvr_device.h
> > index 990363e433d7..6d58ecfbc838 100644
> > --- a/drivers/gpu/drm/imagination/pvr_device.h
> > +++ b/drivers/gpu/drm/imagination/pvr_device.h
> > @@ -135,6 +135,28 @@ struct pvr_device {
> >  
> >         /** @fw_dev: Firmware related data. */
> >         struct pvr_fw_device fw_dev;
> > +
> > +       struct {
> > +               /** @work: Work item for watchdog callback. */
> > +               struct delayed_work work;
> > +
> > +               /** @old_kccb_cmds_executed: KCCB command execution
> > count at last watchdog poll. */
> > +               u32 old_kccb_cmds_executed;
> > +
> > +               /** @kccb_stall_count: Number of watchdog polls KCCB
> > has been stalled for. */
> > +               u32 kccb_stall_count;
> > +       } watchdog;
> > +
> > +       /**
> > +        * @lost: %true if the device has been lost.
> > +        *
> > +        * This variable is set if the device has become
> > irretrievably unavailable, e.g. if the
> > +        * firmware processor has stopped responding and can not be
> > revived via a hard reset.
> > +        */
> > +       bool lost;
> > +
> > +       /** @sched_wq: Workqueue for schedulers. */
> > +       struct workqueue_struct *sched_wq;
> >  };
> >  
> >  /**
> > diff --git a/drivers/gpu/drm/imagination/pvr_drv.c
> > b/drivers/gpu/drm/imagination/pvr_drv.c
> > index 0d51177b506c..0331dc549116 100644
> > --- a/drivers/gpu/drm/imagination/pvr_drv.c
> > +++ b/drivers/gpu/drm/imagination/pvr_drv.c
> > @@ -4,6 +4,7 @@
> >  #include "pvr_device.h"
> >  #include "pvr_drv.h"
> >  #include "pvr_gem.h"
> > +#include "pvr_power.h"
> >  #include "pvr_rogue_defs.h"
> >  #include "pvr_rogue_fwif_client.h"
> >  #include "pvr_rogue_fwif_shared.h"
> > @@ -1279,9 +1280,16 @@ pvr_probe(struct platform_device *plat_dev)
> >  
> >         platform_set_drvdata(plat_dev, drm_dev);
> >  
> > +       devm_pm_runtime_enable(&plat_dev->dev);
> > +       pm_runtime_mark_last_busy(&plat_dev->dev);
> > +
> > +       pm_runtime_set_autosuspend_delay(&plat_dev->dev, 50);
> > +       pm_runtime_use_autosuspend(&plat_dev->dev);
> > +       pvr_watchdog_init(pvr_dev);
> > +
> >         err = pvr_device_init(pvr_dev);
> >         if (err)
> > -               return err;
> > +               goto err_watchdog_fini;
> >  
> >         err = drm_dev_register(drm_dev, 0);
> >         if (err)
> > @@ -1292,6 +1300,9 @@ pvr_probe(struct platform_device *plat_dev)
> >  err_device_fini:
> >         pvr_device_fini(pvr_dev);
> >  
> > +err_watchdog_fini:
> > +       pvr_watchdog_fini(pvr_dev);
> > +
> >         return err;
> >  }
> >  
> > @@ -1301,8 +1312,10 @@ pvr_remove(struct platform_device *plat_dev)
> >         struct drm_device *drm_dev = platform_get_drvdata(plat_dev);
> >         struct pvr_device *pvr_dev = to_pvr_device(drm_dev);
> >  
> > +       pm_runtime_suspend(drm_dev->dev);
> >         drm_dev_unplug(drm_dev);
> >         pvr_device_fini(pvr_dev);
> > +       pvr_watchdog_fini(pvr_dev);
> >  
> >         return 0;
> >  }
> > @@ -1313,11 +1326,16 @@ static const struct of_device_id dt_match[] =
> > {
> >  };
> >  MODULE_DEVICE_TABLE(of, dt_match);
> >  
> > +static const struct dev_pm_ops pvr_pm_ops = {
> > +       SET_RUNTIME_PM_OPS(pvr_power_device_suspend,
> > pvr_power_device_resume, pvr_power_device_idle)
> > +};
> > +
> >  static struct platform_driver pvr_driver = {
> >         .probe = pvr_probe,
> >         .remove = pvr_remove,
> >         .driver = {
> >                 .name = PVR_DRIVER_NAME,
> > +               .pm = &pvr_pm_ops,
> >                 .of_match_table = dt_match,
> >         },
> >  };
> > diff --git a/drivers/gpu/drm/imagination/pvr_power.c
> > b/drivers/gpu/drm/imagination/pvr_power.c
> > new file mode 100644
> > index 000000000000..a494fed92e81
> > --- /dev/null
> > +++ b/drivers/gpu/drm/imagination/pvr_power.c
> > @@ -0,0 +1,271 @@
> > +// SPDX-License-Identifier: GPL-2.0 OR MIT
> > +/* Copyright (c) 2023 Imagination Technologies Ltd. */
> > +
> > +#include "pvr_device.h"
> > +#include "pvr_fw.h"
> > +#include "pvr_power.h"
> > +#include "pvr_rogue_fwif.h"
> > +
> > +#include <drm/drm_drv.h>
> > +#include <drm/drm_managed.h>
> > +#include <linux/clk.h>
> > +#include <linux/interrupt.h>
> > +#include <linux/mutex.h>
> > +#include <linux/platform_device.h>
> > +#include <linux/pm_runtime.h>
> > +#include <linux/timer.h>
> > +#include <linux/types.h>
> > +#include <linux/workqueue.h>
> > +
> > +#define POWER_SYNC_TIMEOUT_US (1000000) /* 1s */
> > +
> > +#define WATCHDOG_TIME_MS (500)
> > +
> > +static int
> > +pvr_power_send_command(struct pvr_device *pvr_dev, struct
> > rogue_fwif_kccb_cmd *pow_cmd)
> > +{
> > +       /* TODO: implement */
> > +       return -ENODEV;
> > +}
> > +
> > +static int
> > +pvr_power_request_idle(struct pvr_device *pvr_dev)
> > +{
> > +       struct rogue_fwif_kccb_cmd pow_cmd;
> > +
> > +       /* Send FORCED_IDLE request to FW. */
> > +       pow_cmd.cmd_type = ROGUE_FWIF_KCCB_CMD_POW;
> > +       pow_cmd.cmd_data.pow_data.pow_type =
> > ROGUE_FWIF_POW_FORCED_IDLE_REQ;
> > +       pow_cmd.cmd_data.pow_data.power_req_data.pow_request_type =
> > ROGUE_FWIF_POWER_FORCE_IDLE;
> > +
> > +       return pvr_power_send_command(pvr_dev, &pow_cmd);
> > +}
> > +
> > +static int
> > +pvr_power_request_pwr_off(struct pvr_device *pvr_dev)
> > +{
> > +       struct rogue_fwif_kccb_cmd pow_cmd;
> > +
> > +       /* Send POW_OFF request to firmware. */
> > +       pow_cmd.cmd_type = ROGUE_FWIF_KCCB_CMD_POW;
> > +       pow_cmd.cmd_data.pow_data.pow_type = ROGUE_FWIF_POW_OFF_REQ;
> > +       pow_cmd.cmd_data.pow_data.power_req_data.forced = true;
> > +
> > +       return pvr_power_send_command(pvr_dev, &pow_cmd);
> > +}
> > +
> > +static int
> > +pvr_power_fw_disable(struct pvr_device *pvr_dev, bool hard_reset)
> > +{
> > +       if (!hard_reset) {
> > +               int err;
> > +
> > +               cancel_delayed_work_sync(&pvr_dev->watchdog.work);
> > +
> > +               err = pvr_power_request_idle(pvr_dev);
> > +               if (err)
> > +                       return err;
> > +
> > +               err = pvr_power_request_pwr_off(pvr_dev);
> > +               if (err)
> > +                       return err;
> > +       }
> > +
> > +       /* TODO: stop firmware */
> > +       return -ENODEV;
> > +}
> > +
> > +static int
> > +pvr_power_fw_enable(struct pvr_device *pvr_dev)
> > +{
> > +       int err;
> > +
> > +       /* TODO: start firmware */
> > +       err = -ENODEV;
> > +       if (err)
> > +               return err;
> > +
> > +       queue_delayed_work(pvr_dev->sched_wq, &pvr_dev->watchdog.work,
> > +                          msecs_to_jiffies(WATCHDOG_TIME_MS));
> > +
> > +       return 0;
> > +}
> > +
> > +bool
> > +pvr_power_is_idle(struct pvr_device *pvr_dev)
> > +{
> > +       /* TODO: implement */
> > +       return true;
> > +}
> > +
> > +static bool
> > +pvr_watchdog_kccb_stalled(struct pvr_device *pvr_dev)
> > +{
> > +       /* TODO: implement */
> > +       return false;
> > +}
> > +
> > +static void
> > +pvr_watchdog_worker(struct work_struct *work)
> > +{
> > +       struct pvr_device *pvr_dev = container_of(work, struct pvr_device,
> > +                                                  watchdog.work.work);
> > +       bool stalled;
> > +
> > +       if (pvr_dev->lost)
> > +               return;
> > +
> > +       if (pm_runtime_get_if_in_use(from_pvr_device(pvr_dev)->dev)
> > <= 0)
> > +               goto out_requeue;
> > +
> > +       stalled = pvr_watchdog_kccb_stalled(pvr_dev);
> > +
> > +       if (stalled) {
> > +               drm_err(from_pvr_device(pvr_dev), "FW stalled, trying
> > hard reset");
> > +
> > +               pvr_power_reset(pvr_dev, true);
> > +               /* Device may be lost at this point. */
> > +       }
> > +
> > +       pm_runtime_put(from_pvr_device(pvr_dev)->dev);
> > +
> > +out_requeue:
> > +       if (!pvr_dev->lost) {
> > +               queue_delayed_work(pvr_dev->sched_wq, &pvr_dev->watchdog.work,
> > +                                  msecs_to_jiffies(WATCHDOG_TIME_MS));
> > +       }
> > +}
> > +
> > +/**
> > + * pvr_watchdog_init() - Initialise watchdog for device
> > + * @pvr_dev: Target PowerVR device.
> > + *
> > + * Returns:
> > + *  * 0 on success, or
> > + *  * -%ENOMEM on out of memory.
> > + */
> > +int
> > +pvr_watchdog_init(struct pvr_device *pvr_dev)
> > +{
> > +       INIT_DELAYED_WORK(&pvr_dev->watchdog.work,
> > pvr_watchdog_worker);
> > +
> > +       return 0;
> > +}
> > +
> > +int
> > +pvr_power_device_suspend(struct device *dev)
> > +{
> > +       struct platform_device *plat_dev = to_platform_device(dev);
> > +       struct drm_device *drm_dev = platform_get_drvdata(plat_dev);
> > +       struct pvr_device *pvr_dev = to_pvr_device(drm_dev);
> > +       int idx;
> > +
> > +       if (!drm_dev_enter(drm_dev, &idx))
> > +               return -EIO;
> > +
> > +       clk_disable_unprepare(pvr_dev->mem_clk);
> > +       clk_disable_unprepare(pvr_dev->sys_clk);
> > +       clk_disable_unprepare(pvr_dev->core_clk);
> > +
> > +       drm_dev_exit(idx);
> > +
> > +       return 0;
> > +}
> > +
> > +int
> > +pvr_power_device_resume(struct device *dev)
> > +{
> > +       struct platform_device *plat_dev = to_platform_device(dev);
> > +       struct drm_device *drm_dev = platform_get_drvdata(plat_dev);
> > +       struct pvr_device *pvr_dev = to_pvr_device(drm_dev);
> > +       int idx;
> > +       int err;
> > +
> > +       if (!drm_dev_enter(drm_dev, &idx))
> > +               return -EIO;
> > +
> > +       err = clk_prepare_enable(pvr_dev->core_clk);
> > +       if (err)
> > +               goto err_drm_dev_exit;
> > +
> > +       err = clk_prepare_enable(pvr_dev->sys_clk);
> > +       if (err)
> > +               goto err_core_clk_disable;
> > +
> > +       err = clk_prepare_enable(pvr_dev->mem_clk);
> > +       if (err)
> > +               goto err_sys_clk_disable;
> > +
> > +       drm_dev_exit(idx);
> > +
> > +       return 0;
> > +
> > +err_sys_clk_disable:
> > +       clk_disable_unprepare(pvr_dev->sys_clk);
> > +
> > +err_core_clk_disable:
> > +       clk_disable_unprepare(pvr_dev->core_clk);
> > +
> > +err_drm_dev_exit:
> > +       drm_dev_exit(idx);
> > +
> > +       return err;
> > +}
> > +
> > +int
> > +pvr_power_device_idle(struct device *dev)
> > +{
> > +       struct platform_device *plat_dev = to_platform_device(dev);
> > +       struct drm_device *drm_dev = platform_get_drvdata(plat_dev);
> > +       struct pvr_device *pvr_dev = to_pvr_device(drm_dev);
> > +
> > +       return pvr_power_is_idle(pvr_dev) ? 0 : -EBUSY;
> > +}
> 
> You have two options here. Either you require CONFIG_PM, and in that
> case you should add a dependency for it in the Kconfig. Also use
> RUNTIME_PM_OPS() instead of SET_RUNTIME_PM_OPS().
> 
> Otherwise, I would suggest to use EXPORT_NS_RUNTIME_DEV_PM_OPS() here,
> which allows you to have pvr_power_device_{suspend,resume,idle} static,
> and having them garbage-collected in the !CONFIG_PM case.
> 
> You can then have an "extern struct dev_pm_ops pvr_power_pm_ops;" in
> your header, and reference it in the driver using the pm_ptr() macro.

Thank you for pointing this out. We'll make sure this is fixed in v6.
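
For reference, the second option would look something like the sketch below
(untested, and the exact shape in v6 may differ):

  /* pvr_power.c - the callbacks can now be static (bodies unchanged). */
  static int pvr_power_device_suspend(struct device *dev);
  static int pvr_power_device_resume(struct device *dev);
  static int pvr_power_device_idle(struct device *dev);

  /* Namespace name below is illustrative only. */
  EXPORT_NS_RUNTIME_DEV_PM_OPS(pvr_power_pm_ops,
                               pvr_power_device_suspend,
                               pvr_power_device_resume,
                               pvr_power_device_idle, POWERVR);

  /* pvr_power.h */
  extern const struct dev_pm_ops pvr_power_pm_ops;

  /* pvr_drv.c - pm_ptr() drops the reference when CONFIG_PM is disabled. */
  static struct platform_driver pvr_driver = {
          .probe = pvr_probe,
          .remove = pvr_remove,
          .driver = {
                  .name = PVR_DRIVER_NAME,
                  .pm = pm_ptr(&pvr_power_pm_ops),
                  .of_match_table = dt_match,
          },
  };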

Thanks
Frank

> 
> Cheers,
> -Paul
> 
> > +
> > +/**
> > + * pvr_power_reset() - Reset the GPU
> > + * @pvr_dev: Device pointer
> > + * @hard_reset: %true for hard reset, %false for soft reset
> > + *
> > + * If @hard_reset is %false and the FW processor fails to respond
> > during the reset process, this
> > + * function will attempt a hard reset.
> > + *
> > + * If a hard reset fails then the GPU device is reported as lost.
> > + *
> > + * Returns:
> > + *  * 0 on success, or
> > + *  * Any error code returned by pvr_power_get, pvr_power_fw_disable
> > or pvr_power_fw_enable().
> > + */
> > +int
> > +pvr_power_reset(struct pvr_device *pvr_dev, bool hard_reset)
> > +{
> > +       /* TODO: Implement hard reset. */
> > +       int err;
> > +
> > +       /*
> > +        * Take a power reference during the reset. This should
> > prevent any interference with the
> > +        * power state during reset.
> > +        */
> > +       WARN_ON(pvr_power_get(pvr_dev));
> > +
> > +       err = pvr_power_fw_disable(pvr_dev, false);
> > +       if (err)
> > +               goto err_power_put;
> > +
> > +       err = pvr_power_fw_enable(pvr_dev);
> > +
> > +err_power_put:
> > +       pvr_power_put(pvr_dev);
> > +
> > +       return err;
> > +}
> > +
> > +/**
> > + * pvr_watchdog_fini() - Shutdown watchdog for device
> > + * @pvr_dev: Target PowerVR device.
> > + */
> > +void
> > +pvr_watchdog_fini(struct pvr_device *pvr_dev)
> > +{
> > +       cancel_delayed_work_sync(&pvr_dev->watchdog.work);
> > +}
> > diff --git a/drivers/gpu/drm/imagination/pvr_power.h
> > b/drivers/gpu/drm/imagination/pvr_power.h
> > new file mode 100644
> > index 000000000000..439f08d13655
> > --- /dev/null
> > +++ b/drivers/gpu/drm/imagination/pvr_power.h
> > @@ -0,0 +1,39 @@
> > +/* SPDX-License-Identifier: GPL-2.0 OR MIT */
> > +/* Copyright (c) 2023 Imagination Technologies Ltd. */
> > +
> > +#ifndef PVR_POWER_H
> > +#define PVR_POWER_H
> > +
> > +#include "pvr_device.h"
> > +
> > +#include <linux/mutex.h>
> > +#include <linux/pm_runtime.h>
> > +
> > +int pvr_watchdog_init(struct pvr_device *pvr_dev);
> > +void pvr_watchdog_fini(struct pvr_device *pvr_dev);
> > +
> > +bool pvr_power_is_idle(struct pvr_device *pvr_dev);
> > +
> > +int pvr_power_device_suspend(struct device *dev);
> > +int pvr_power_device_resume(struct device *dev);
> > +int pvr_power_device_idle(struct device *dev);
> > +
> > +int pvr_power_reset(struct pvr_device *pvr_dev, bool hard_reset);
> > +
> > +static __always_inline int
> > +pvr_power_get(struct pvr_device *pvr_dev)
> > +{
> > +       struct drm_device *drm_dev = from_pvr_device(pvr_dev);
> > +
> > +       return pm_runtime_resume_and_get(drm_dev->dev);
> > +}
> > +
> > +static __always_inline int
> > +pvr_power_put(struct pvr_device *pvr_dev)
> > +{
> > +       struct drm_device *drm_dev = from_pvr_device(pvr_dev);
> > +
> > +       return pm_runtime_put(drm_dev->dev);
> > +}
> > +
> > +#endif /* PVR_POWER_H */

^ permalink raw reply	[flat|nested] 77+ messages in thread

end of thread, other threads:[~2023-09-05 20:44 UTC | newest]

Thread overview: 77+ messages
2023-08-16  8:25 [PATCH v5 00/17] Imagination Technologies PowerVR DRM driver Sarah Walker
2023-08-16  8:25 ` Sarah Walker
2023-08-16  8:25 ` [PATCH v5 01/17] sizes.h: Add entries between 32G and 64T Sarah Walker
2023-08-16  8:25   ` Sarah Walker
2023-08-16  8:25 ` [PATCH v5 02/17] dt-bindings: gpu: Add Imagination Technologies PowerVR GPU Sarah Walker
2023-08-16  8:25   ` Sarah Walker
2023-08-18  9:36   ` Linus Walleij
2023-08-18  9:36     ` Linus Walleij
2023-08-18 10:33     ` Krzysztof Kozlowski
2023-08-18 10:33       ` Krzysztof Kozlowski
2023-09-05 16:32     ` Frank Binns
2023-09-05 16:32       ` Frank Binns
2023-08-18 10:32   ` Krzysztof Kozlowski
2023-08-18 10:32     ` Krzysztof Kozlowski
2023-08-16  8:25 ` [PATCH v5 03/17] drm/imagination/uapi: Add PowerVR driver UAPI Sarah Walker
2023-08-16  8:25   ` Sarah Walker
2023-08-18  0:43   ` Faith Ekstrand
2023-08-18 13:49     ` [EXTERNAL] " Sarah Walker
2023-08-18 13:49       ` Sarah Walker
2023-08-16  8:25 ` [PATCH v5 04/17] drm/imagination: Add skeleton PowerVR driver Sarah Walker
2023-08-16  8:25   ` Sarah Walker
2023-08-16  8:25 ` [PATCH v5 05/17] drm/imagination: Get GPU resources Sarah Walker
2023-08-16  8:25   ` Sarah Walker
2023-08-16  8:25 ` [PATCH v5 06/17] drm/imagination: Add GPU register and FWIF headers Sarah Walker
2023-08-16  8:25   ` Sarah Walker
2023-08-16  8:25 ` [PATCH v5 07/17] drm/imagination: Add GPU ID parsing and firmware loading Sarah Walker
2023-08-16  8:25   ` Sarah Walker
2023-08-16  8:25 ` [PATCH v5 08/17] drm/imagination: Add GEM and VM related code Sarah Walker
2023-08-16  8:25   ` Sarah Walker
2023-08-17 22:42   ` Jann Horn
2023-08-17 22:42     ` Jann Horn
2023-08-18 14:19     ` [EXTERNAL] " Sarah Walker
2023-08-18 14:19       ` Sarah Walker
2023-08-18 15:30   ` Danilo Krummrich
2023-08-18 15:30     ` Danilo Krummrich
2023-08-21  8:30     ` [EXTERNAL] " Donald Robson
2023-08-21  8:30       ` Donald Robson
2023-08-21 11:05       ` Danilo Krummrich
2023-08-21 11:05         ` Danilo Krummrich
2023-08-18 22:59   ` Jann Horn
2023-08-18 22:59     ` Jann Horn
2023-08-16  8:25 ` [PATCH v5 09/17] drm/imagination: Implement power management Sarah Walker
2023-08-16  8:25   ` Sarah Walker
2023-08-16 10:56   ` Paul Cercueil
2023-08-16 10:56     ` Paul Cercueil
2023-09-05 16:34     ` Frank Binns
2023-09-05 16:34       ` Frank Binns
2023-08-16  8:25 ` [PATCH v5 10/17] drm/imagination: Implement firmware infrastructure and META FW support Sarah Walker
2023-08-16  8:25   ` Sarah Walker
2023-08-16  8:25 ` [PATCH v5 11/17] drm/imagination: Implement MIPS firmware processor and MMU support Sarah Walker
2023-08-16  8:25   ` Sarah Walker
2023-08-16  8:25 ` [PATCH v5 12/17] drm/imagination: Implement free list and HWRT create and destroy ioctls Sarah Walker
2023-08-16  8:25   ` Sarah Walker
2023-08-16  8:25 ` [PATCH v5 13/17] drm/imagination: Implement context creation/destruction ioctls Sarah Walker
2023-08-16  8:25   ` Sarah Walker
2023-08-17 22:42   ` Jann Horn
2023-08-17 22:42     ` Jann Horn
2023-08-18 11:00     ` [EXTERNAL] " Sarah Walker
2023-08-18 11:00       ` Sarah Walker
2023-08-16  8:25 ` [PATCH v5 14/17] drm/imagination: Implement job submission and scheduling Sarah Walker
2023-08-16  8:25   ` Sarah Walker
2023-08-17 22:42   ` Jann Horn
2023-08-17 22:42     ` Jann Horn
2023-08-16  8:25 ` [PATCH v5 15/17] drm/imagination: Add firmware trace to debugfs Sarah Walker
2023-08-16  8:25   ` Sarah Walker
2023-08-16  8:25 ` [PATCH v5 16/17] drm/imagination: Add driver documentation Sarah Walker
2023-08-16  8:25   ` Sarah Walker
2023-08-16  8:25 ` [PATCH v5 17/17] arm64: dts: ti: k3-am62-main: Add GPU device node [DO NOT MERGE] Sarah Walker
2023-08-16  8:25   ` Sarah Walker
2023-08-18 10:34   ` Krzysztof Kozlowski
2023-08-18 10:34     ` Krzysztof Kozlowski
2023-08-23 22:31 ` [PATCH v5 00/17] Imagination Technologies PowerVR DRM driver Masahiro Yamada
2023-08-23 22:31   ` Masahiro Yamada
2023-08-24  8:08   ` Sarah Walker
2023-08-24  8:08     ` Sarah Walker
2023-08-24  8:14     ` Sarah Walker
2023-08-24  8:14       ` Sarah Walker
