* [PATCH 000/100] Add Vega10 Support
@ 2017-03-20 20:29 Alex Deucher
       [not found] ` <1490041835-11255-1-git-send-email-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
  0 siblings, 1 reply; 101+ messages in thread
From: Alex Deucher @ 2017-03-20 20:29 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Alex Deucher

This patch set adds support for Vega10.  Major changes and supported
features:
- new vbios interface
- Lots of new hw IPs
- Support for video decode using UVD
- Support for video encode using VCE
- Support for 3D via radeonsi
- Power management
- Full display support via DC
- Support for SR-IOV

I did not send out the register headers since they are huge.  You can find them
along with all the other patches in this series here:
https://cgit.freedesktop.org/~agd5f/linux/log/?h=amd-staging-4.9
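
For convenience, the branch can be fetched locally along these lines; note
that the clone URL below is inferred from the cgit link above and the remote
name `agd5f` is arbitrary, so verify both before use:

```shell
# Fetch only the amd-staging-4.9 branch from the staging tree.
# (Clone URL inferred from the cgit path above; adjust if it differs.)
git remote add agd5f https://cgit.freedesktop.org/~agd5f/linux
git fetch agd5f amd-staging-4.9
git checkout -b amd-staging-4.9 agd5f/amd-staging-4.9
```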

Please review.

Thanks,

Alex

Alex Deucher (29):
  drm/amdgpu: add the new atomfirmware interface header
  amdgpu: detect if we are using atomfirm or atombios for vbios (v2)
  drm/amdgpu: move atom scratch setup into amdgpu_atombios.c
  drm/amdgpu: add basic support for atomfirmware.h (v3)
  drm/amdgpu: add soc15ip.h
  drm/amdgpu: add vega10_enum.h
  drm/amdgpu: Add ATHUB 1.0 register headers
  drm/amdgpu: Add the DCE 12.0 register headers
  drm/amdgpu: add the GC 9.0 register headers
  drm/amdgpu: add the HDP 4.0 register headers
  drm/amdgpu: add the MMHUB 1.0 register headers
  drm/amdgpu: add MP 9.0 register headers
  drm/amdgpu: add NBIF 6.1 register headers
  drm/amdgpu: add NBIO 6.1 register headers
  drm/amdgpu: add OSSSYS 4.0 register headers
  drm/amdgpu: add SDMA 4.0 register headers
  drm/amdgpu: add SMUIO 9.0 register headers
  drm/amdgpu: add THM 9.0 register headers
  drm/amdgpu: add the UVD 7.0 register headers
  drm/amdgpu: add the VCE 4.0 register headers
  drm/amdgpu: add gfx9 clearstate header
  drm/amdgpu: add SDMA 4.0 packet header
  drm/amdgpu: use atomfirmware interfaces for scratch reg save/restore
  drm/amdgpu: update IH IV ring entry for soc-15
  drm/amdgpu: add PTE defines for MTYPE
  drm/amdgpu: add NGG parameters
  drm/amdgpu: Add asic family for vega10
  drm/amdgpu: add tiling flags for GFX9
  drm/amdgpu: gart fixes for vega10

Alex Xie (4):
  drm/amdgpu: Add MTYPE flags to GPU VM IOCTL interface
  drm/amdgpu: handle PTE EXEC in amdgpu_vm_bo_split_mapping
  drm/amdgpu: handle PTE MTYPE in amdgpu_vm_bo_split_mapping
  drm/amdgpu: Add GMC 9.0 support

Andrey Grodzovsky (1):
  drm/amdgpu: gb_addr_config struct

Charlene Liu (1):
  drm/amd/display: need to handle DCE_Info table ver4.2

Christian König (1):
  drm/amdgpu: add IV trace point

Eric Huang (7):
  drm/amd/powerplay: add smu9 header files for Vega10
  drm/amd/powerplay: add new Vega10's ppsmc header file
  drm/amdgpu: add new atomfirmware based helpers for powerplay
  drm/amd/powerplay: add some new structures for Vega10
  drm/amd: add structures for display/powerplay interface
  drm/amd/powerplay: add some display/powerplay interfaces
  drm/amd/powerplay: add Vega10 powerplay support

Felix Kuehling (1):
  drm/amd: Add MQD structs for GFX V9

Harry Wentland (6):
  drm/amd/display: Add DCE12 bios parser support
  drm/amd/display: Add DCE12 gpio support
  drm/amd/display: Add DCE12 i2c/aux support
  drm/amd/display: Add DCE12 irq support
  drm/amd/display: Add DCE12 core support
  drm/amd/display: Enable DCE12 support

Huang Rui (6):
  drm/amdgpu: use new flag to handle different firmware loading method
  drm/amdgpu: rework common ucode handling for vega10
  drm/amdgpu: add psp firmware header info
  drm/amdgpu: add PSP driver for vega10
  drm/amdgpu: add psp firmware info into info query and debugfs
  drm/amdgpu: add SMC firmware into global ucode list for psp loading

Jordan Lazare (1):
  drm/amd/display: Less log spam

Junwei Zhang (2):
  drm/amdgpu: add NBIO 6.1 driver
  drm/amdgpu: add Vega10 Device IDs

Ken Wang (8):
  drm/amdgpu: add common soc15 headers
  drm/amdgpu: add vega10 chip name
  drm/amdgpu: add 64bit doorbell assignments
  drm/amdgpu: add SDMA v4.0 implementation
  drm/amdgpu: implement GFX 9.0 support
  drm/amdgpu: add vega10 interrupt handler
  drm/amdgpu: soc15 enable (v2)
  drm/amdgpu: Set the IP blocks for vega10

Leo Liu (2):
  drm/amdgpu: add initial uvd 7.0 support for vega10
  drm/amdgpu: add initial vce 4.0 support for vega10

Marek Olšák (1):
  drm/amdgpu: don't validate TILE_SPLIT on GFX9

Monk Liu (5):
  drm/amdgpu/gfx9: programing wptr_poll_addr register
  drm/amdgpu:impl gfx9 cond_exec
  drm/amdgpu:bypass RLC init for SRIOV
  drm/amdgpu/sdma4:re-org SDMA initial steps for sriov
  drm/amdgpu/vega10:fix DOORBELL64 scheme

Rex Zhu (2):
  drm/amdgpu: get display info from DC when DC enabled.
  drm/amd/powerplay: add global PowerPlay mutex.

Xiangliang Yu (22):
  drm/amdgpu: impl sriov detection for vega10
  drm/amdgpu: add kiq ring for gfx9
  drm/amdgpu/gfx9: fullfill kiq funcs
  drm/amdgpu/gfx9: fullfill kiq irq funcs
  drm/amdgpu: init kiq and kcq for vega10
  drm/amdgpu/gfx9: impl gfx9 meta data emit
  drm/amdgpu/soc15: bypass PSP for VF
  drm/amdgpu/gmc9: no need use kiq in vega10 tlb flush
  drm/amdgpu/dce_virtual: bypass DPM for vf
  drm/amdgpu/virt: impl mailbox for ai
  drm/amdgpu/soc15: init virt ops for vf
  drm/amdgpu/soc15: enable virtual dce for vf
  drm/amdgpu: Don't touch PG&CG for SRIOV MM
  drm/amdgpu/vce4: enable doorbell for SRIOV
  drm/amdgpu: disable uvd for sriov
  drm/amdgpu/soc15: bypass pp block for vf
  drm/amdgpu/virt: add structure for MM table
  drm/amdgpu/vce4: alloc mm table for MM sriov
  drm/amdgpu/vce4: Ignore vce ring/ib test temporarily
  drm/amdgpu: add mmsch structures
  drm/amdgpu/vce4: impl vce & mmsch sriov start
  drm/amdgpu/gfx9: correct wptr pointer value

ken (1):
  drm/amdgpu: add clinetid definition for vega10

 drivers/gpu/drm/amd/amdgpu/Makefile                |     27 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu.h                |    172 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.c       |     28 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.h       |      3 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_atomfirmware.c   |    112 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_atomfirmware.h   |     33 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_bios.c           |     30 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_cgs.c            |     73 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c         |     73 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c            |     36 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_gart.c           |      3 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c            |      2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_ih.h             |     47 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c            |      3 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c            |     32 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_object.c         |      5 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_powerplay.c      |      5 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c            |    473 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_psp.h            |    127 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_trace.h          |     37 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c          |    113 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h          |     17 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c            |     58 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c            |     21 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_virt.h           |      7 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c             |     34 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h             |      4 +
 drivers/gpu/drm/amd/amdgpu/atom.c                  |     26 -
 drivers/gpu/drm/amd/amdgpu/atom.h                  |      1 -
 drivers/gpu/drm/amd/amdgpu/cik.c                   |      2 +
 drivers/gpu/drm/amd/amdgpu/clearstate_gfx9.h       |    941 +
 drivers/gpu/drm/amd/amdgpu/dce_virtual.c           |      3 +
 drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c              |      6 +-
 drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c              |   4075 +
 drivers/gpu/drm/amd/amdgpu/gfx_v9_0.h              |     35 +
 drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c           |    447 +
 drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.h           |     35 +
 drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c              |    826 +
 drivers/gpu/drm/amd/amdgpu/gmc_v9_0.h              |     30 +
 drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c            |    585 +
 drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.h            |     35 +
 drivers/gpu/drm/amd/amdgpu/mmsch_v1_0.h            |     87 +
 drivers/gpu/drm/amd/amdgpu/mxgpu_ai.c              |    207 +
 drivers/gpu/drm/amd/amdgpu/mxgpu_ai.h              |     47 +
 drivers/gpu/drm/amd/amdgpu/nbio_v6_1.c             |    251 +
 drivers/gpu/drm/amd/amdgpu/nbio_v6_1.h             |     53 +
 drivers/gpu/drm/amd/amdgpu/psp_gfx_if.h            |    269 +
 drivers/gpu/drm/amd/amdgpu/psp_v3_1.c              |    507 +
 drivers/gpu/drm/amd/amdgpu/psp_v3_1.h              |     50 +
 drivers/gpu/drm/amd/amdgpu/sdma_v2_4.c             |      4 +-
 drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c             |      4 +-
 drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c             |   1573 +
 drivers/gpu/drm/amd/amdgpu/sdma_v4_0.h             |     30 +
 drivers/gpu/drm/amd/amdgpu/soc15.c                 |    825 +
 drivers/gpu/drm/amd/amdgpu/soc15.h                 |     35 +
 drivers/gpu/drm/amd/amdgpu/soc15_common.h          |     57 +
 drivers/gpu/drm/amd/amdgpu/soc15d.h                |    287 +
 drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c              |   1543 +
 drivers/gpu/drm/amd/amdgpu/uvd_v7_0.h              |     29 +
 drivers/gpu/drm/amd/amdgpu/vce_v4_0.c              |   1141 +
 drivers/gpu/drm/amd/amdgpu/vce_v4_0.h              |     29 +
 drivers/gpu/drm/amd/amdgpu/vega10_ih.c             |    424 +
 drivers/gpu/drm/amd/amdgpu/vega10_ih.h             |     30 +
 drivers/gpu/drm/amd/amdgpu/vega10_sdma_pkt_open.h  |   3335 +
 drivers/gpu/drm/amd/amdgpu/vi.c                    |      4 +-
 drivers/gpu/drm/amd/display/Kconfig                |      7 +
 drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c  |    145 +-
 .../drm/amd/display/amdgpu_dm/amdgpu_dm_services.c |     10 +
 .../drm/amd/display/amdgpu_dm/amdgpu_dm_types.c    |     20 +-
 drivers/gpu/drm/amd/display/dc/Makefile            |      4 +
 drivers/gpu/drm/amd/display/dc/bios/Makefile       |      8 +
 drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c |   2162 +
 drivers/gpu/drm/amd/display/dc/bios/bios_parser2.h |     33 +
 .../amd/display/dc/bios/bios_parser_interface.c    |     14 +
 .../display/dc/bios/bios_parser_types_internal2.h  |     74 +
 .../gpu/drm/amd/display/dc/bios/command_table2.c   |    813 +
 .../gpu/drm/amd/display/dc/bios/command_table2.h   |    105 +
 .../amd/display/dc/bios/command_table_helper2.c    |    260 +
 .../amd/display/dc/bios/command_table_helper2.h    |     82 +
 .../dc/bios/dce112/command_table_helper2_dce112.c  |    418 +
 .../dc/bios/dce112/command_table_helper2_dce112.h  |     34 +
 drivers/gpu/drm/amd/display/dc/calcs/dce_calcs.c   |    117 +
 drivers/gpu/drm/amd/display/dc/core/dc.c           |     29 +
 drivers/gpu/drm/amd/display/dc/core/dc_debug.c     |     11 +
 drivers/gpu/drm/amd/display/dc/core/dc_link.c      |     19 +
 drivers/gpu/drm/amd/display/dc/core/dc_resource.c  |     14 +
 drivers/gpu/drm/amd/display/dc/dc.h                |     27 +
 drivers/gpu/drm/amd/display/dc/dc_hw_types.h       |     46 +
 .../gpu/drm/amd/display/dc/dce/dce_clock_source.c  |      6 +
 drivers/gpu/drm/amd/display/dc/dce/dce_clocks.c    |    149 +
 drivers/gpu/drm/amd/display/dc/dce/dce_clocks.h    |     20 +
 drivers/gpu/drm/amd/display/dc/dce/dce_hwseq.h     |      8 +
 .../gpu/drm/amd/display/dc/dce/dce_link_encoder.h  |     14 +
 drivers/gpu/drm/amd/display/dc/dce/dce_mem_input.c |     35 +
 drivers/gpu/drm/amd/display/dc/dce/dce_mem_input.h |     34 +
 drivers/gpu/drm/amd/display/dc/dce/dce_opp.h       |     72 +
 .../drm/amd/display/dc/dce/dce_stream_encoder.h    |    100 +
 drivers/gpu/drm/amd/display/dc/dce/dce_transform.h |     68 +
 .../amd/display/dc/dce110/dce110_hw_sequencer.c    |     53 +-
 .../drm/amd/display/dc/dce110/dce110_mem_input.c   |      3 +
 .../display/dc/dce110/dce110_timing_generator.h    |      3 +
 drivers/gpu/drm/amd/display/dc/dce120/Makefile     |     12 +
 .../amd/display/dc/dce120/dce120_hw_sequencer.c    |    197 +
 .../amd/display/dc/dce120/dce120_hw_sequencer.h    |     36 +
 drivers/gpu/drm/amd/display/dc/dce120/dce120_ipp.c |     58 +
 drivers/gpu/drm/amd/display/dc/dce120/dce120_ipp.h |     62 +
 .../drm/amd/display/dc/dce120/dce120_ipp_cursor.c  |    202 +
 .../drm/amd/display/dc/dce120/dce120_ipp_gamma.c   |    167 +
 .../drm/amd/display/dc/dce120/dce120_mem_input.c   |    340 +
 .../drm/amd/display/dc/dce120/dce120_mem_input.h   |     37 +
 .../drm/amd/display/dc/dce120/dce120_resource.c    |   1099 +
 .../drm/amd/display/dc/dce120/dce120_resource.h    |     39 +
 .../display/dc/dce120/dce120_timing_generator.c    |   1109 +
 .../display/dc/dce120/dce120_timing_generator.h    |     41 +
 .../gpu/drm/amd/display/dc/dce80/dce80_mem_input.c |      3 +
 drivers/gpu/drm/amd/display/dc/dm_services.h       |     89 +
 drivers/gpu/drm/amd/display/dc/dm_services_types.h |     27 +
 drivers/gpu/drm/amd/display/dc/gpio/Makefile       |     11 +
 .../amd/display/dc/gpio/dce120/hw_factory_dce120.c |    197 +
 .../amd/display/dc/gpio/dce120/hw_factory_dce120.h |     32 +
 .../display/dc/gpio/dce120/hw_translate_dce120.c   |    408 +
 .../display/dc/gpio/dce120/hw_translate_dce120.h   |     34 +
 drivers/gpu/drm/amd/display/dc/gpio/hw_factory.c   |      9 +
 drivers/gpu/drm/amd/display/dc/gpio/hw_translate.c |      9 +-
 drivers/gpu/drm/amd/display/dc/i2caux/Makefile     |     11 +
 .../amd/display/dc/i2caux/dce120/i2caux_dce120.c   |    125 +
 .../amd/display/dc/i2caux/dce120/i2caux_dce120.h   |     32 +
 drivers/gpu/drm/amd/display/dc/i2caux/i2caux.c     |      8 +
 .../gpu/drm/amd/display/dc/inc/bandwidth_calcs.h   |      3 +
 .../gpu/drm/amd/display/dc/inc/hw/display_clock.h  |     23 +
 drivers/gpu/drm/amd/display/dc/inc/hw/mem_input.h  |      4 +
 drivers/gpu/drm/amd/display/dc/irq/Makefile        |     12 +
 .../amd/display/dc/irq/dce120/irq_service_dce120.c |    293 +
 .../amd/display/dc/irq/dce120/irq_service_dce120.h |     34 +
 drivers/gpu/drm/amd/display/dc/irq/irq_service.c   |      3 +
 drivers/gpu/drm/amd/display/include/dal_asic_id.h  |      4 +
 drivers/gpu/drm/amd/display/include/dal_types.h    |      3 +
 drivers/gpu/drm/amd/include/amd_shared.h           |      4 +
 .../asic_reg/vega10/ATHUB/athub_1_0_default.h      |    241 +
 .../asic_reg/vega10/ATHUB/athub_1_0_offset.h       |    453 +
 .../asic_reg/vega10/ATHUB/athub_1_0_sh_mask.h      |   2045 +
 .../include/asic_reg/vega10/DC/dce_12_0_default.h  |   9868 ++
 .../include/asic_reg/vega10/DC/dce_12_0_offset.h   |  18193 +++
 .../include/asic_reg/vega10/DC/dce_12_0_sh_mask.h  |  64636 +++++++++
 .../include/asic_reg/vega10/GC/gc_9_0_default.h    |   3873 +
 .../amd/include/asic_reg/vega10/GC/gc_9_0_offset.h |   7230 +
 .../include/asic_reg/vega10/GC/gc_9_0_sh_mask.h    |  29868 ++++
 .../include/asic_reg/vega10/HDP/hdp_4_0_default.h  |    117 +
 .../include/asic_reg/vega10/HDP/hdp_4_0_offset.h   |    209 +
 .../include/asic_reg/vega10/HDP/hdp_4_0_sh_mask.h  |    601 +
 .../asic_reg/vega10/MMHUB/mmhub_1_0_default.h      |   1011 +
 .../asic_reg/vega10/MMHUB/mmhub_1_0_offset.h       |   1967 +
 .../asic_reg/vega10/MMHUB/mmhub_1_0_sh_mask.h      |  10127 ++
 .../include/asic_reg/vega10/MP/mp_9_0_default.h    |    342 +
 .../amd/include/asic_reg/vega10/MP/mp_9_0_offset.h |    375 +
 .../include/asic_reg/vega10/MP/mp_9_0_sh_mask.h    |   1463 +
 .../asic_reg/vega10/NBIF/nbif_6_1_default.h        |   1271 +
 .../include/asic_reg/vega10/NBIF/nbif_6_1_offset.h |   1688 +
 .../asic_reg/vega10/NBIF/nbif_6_1_sh_mask.h        |  10281 ++
 .../asic_reg/vega10/NBIO/nbio_6_1_default.h        |  22340 +++
 .../include/asic_reg/vega10/NBIO/nbio_6_1_offset.h |   3649 +
 .../asic_reg/vega10/NBIO/nbio_6_1_sh_mask.h        | 133884 ++++++++++++++++++
 .../asic_reg/vega10/OSSSYS/osssys_4_0_default.h    |    176 +
 .../asic_reg/vega10/OSSSYS/osssys_4_0_offset.h     |    327 +
 .../asic_reg/vega10/OSSSYS/osssys_4_0_sh_mask.h    |   1196 +
 .../asic_reg/vega10/SDMA0/sdma0_4_0_default.h      |    286 +
 .../asic_reg/vega10/SDMA0/sdma0_4_0_offset.h       |    547 +
 .../asic_reg/vega10/SDMA0/sdma0_4_0_sh_mask.h      |   1852 +
 .../asic_reg/vega10/SDMA1/sdma1_4_0_default.h      |    282 +
 .../asic_reg/vega10/SDMA1/sdma1_4_0_offset.h       |    539 +
 .../asic_reg/vega10/SDMA1/sdma1_4_0_sh_mask.h      |   1810 +
 .../asic_reg/vega10/SMUIO/smuio_9_0_default.h      |    100 +
 .../asic_reg/vega10/SMUIO/smuio_9_0_offset.h       |    175 +
 .../asic_reg/vega10/SMUIO/smuio_9_0_sh_mask.h      |    258 +
 .../include/asic_reg/vega10/THM/thm_9_0_default.h  |    194 +
 .../include/asic_reg/vega10/THM/thm_9_0_offset.h   |    363 +
 .../include/asic_reg/vega10/THM/thm_9_0_sh_mask.h  |   1314 +
 .../include/asic_reg/vega10/UVD/uvd_7_0_default.h  |    127 +
 .../include/asic_reg/vega10/UVD/uvd_7_0_offset.h   |    222 +
 .../include/asic_reg/vega10/UVD/uvd_7_0_sh_mask.h  |    811 +
 .../include/asic_reg/vega10/VCE/vce_4_0_default.h  |    122 +
 .../include/asic_reg/vega10/VCE/vce_4_0_offset.h   |    208 +
 .../include/asic_reg/vega10/VCE/vce_4_0_sh_mask.h  |    488 +
 .../gpu/drm/amd/include/asic_reg/vega10/soc15ip.h  |   1343 +
 .../drm/amd/include/asic_reg/vega10/vega10_enum.h  |  22531 +++
 drivers/gpu/drm/amd/include/atomfirmware.h         |   2385 +
 drivers/gpu/drm/amd/include/atomfirmwareid.h       |     86 +
 drivers/gpu/drm/amd/include/displayobject.h        |    249 +
 drivers/gpu/drm/amd/include/dm_pp_interface.h      |     83 +
 drivers/gpu/drm/amd/include/v9_structs.h           |    743 +
 drivers/gpu/drm/amd/powerplay/amd_powerplay.c      |    284 +-
 drivers/gpu/drm/amd/powerplay/hwmgr/Makefile       |      6 +-
 .../gpu/drm/amd/powerplay/hwmgr/hardwaremanager.c  |     49 +
 drivers/gpu/drm/amd/powerplay/hwmgr/hwmgr.c        |      9 +
 drivers/gpu/drm/amd/powerplay/hwmgr/hwmgr_ppt.h    |     16 +-
 drivers/gpu/drm/amd/powerplay/hwmgr/ppatomfwctrl.c |    396 +
 drivers/gpu/drm/amd/powerplay/hwmgr/ppatomfwctrl.h |    140 +
 drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c |   4378 +
 drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.h |    434 +
 drivers/gpu/drm/amd/powerplay/hwmgr/vega10_inc.h   |     44 +
 .../gpu/drm/amd/powerplay/hwmgr/vega10_powertune.c |    137 +
 .../gpu/drm/amd/powerplay/hwmgr/vega10_powertune.h |     65 +
 .../gpu/drm/amd/powerplay/hwmgr/vega10_pptable.h   |    331 +
 .../amd/powerplay/hwmgr/vega10_processpptables.c   |   1056 +
 .../amd/powerplay/hwmgr/vega10_processpptables.h   |     34 +
 .../gpu/drm/amd/powerplay/hwmgr/vega10_thermal.c   |    761 +
 .../gpu/drm/amd/powerplay/hwmgr/vega10_thermal.h   |     83 +
 drivers/gpu/drm/amd/powerplay/inc/amd_powerplay.h  |     28 +-
 .../gpu/drm/amd/powerplay/inc/hardwaremanager.h    |     43 +
 drivers/gpu/drm/amd/powerplay/inc/hwmgr.h          |    125 +-
 drivers/gpu/drm/amd/powerplay/inc/pp_instance.h    |      1 +
 drivers/gpu/drm/amd/powerplay/inc/pp_soc15.h       |     48 +
 drivers/gpu/drm/amd/powerplay/inc/smu9.h           |    147 +
 drivers/gpu/drm/amd/powerplay/inc/smu9_driver_if.h |    418 +
 drivers/gpu/drm/amd/powerplay/inc/smumgr.h         |      3 +
 drivers/gpu/drm/amd/powerplay/inc/vega10_ppsmc.h   |    131 +
 drivers/gpu/drm/amd/powerplay/smumgr/Makefile      |      2 +-
 drivers/gpu/drm/amd/powerplay/smumgr/smumgr.c      |      9 +
 .../gpu/drm/amd/powerplay/smumgr/vega10_smumgr.c   |    564 +
 .../gpu/drm/amd/powerplay/smumgr/vega10_smumgr.h   |     70 +
 include/uapi/drm/amdgpu_drm.h                      |     29 +
 221 files changed, 403408 insertions(+), 219 deletions(-)
 create mode 100644 drivers/gpu/drm/amd/amdgpu/amdgpu_atomfirmware.c
 create mode 100644 drivers/gpu/drm/amd/amdgpu/amdgpu_atomfirmware.h
 create mode 100644 drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
 create mode 100644 drivers/gpu/drm/amd/amdgpu/amdgpu_psp.h
 create mode 100644 drivers/gpu/drm/amd/amdgpu/clearstate_gfx9.h
 create mode 100644 drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
 create mode 100644 drivers/gpu/drm/amd/amdgpu/gfx_v9_0.h
 create mode 100644 drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c
 create mode 100644 drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.h
 create mode 100644 drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
 create mode 100644 drivers/gpu/drm/amd/amdgpu/gmc_v9_0.h
 create mode 100644 drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c
 create mode 100644 drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.h
 create mode 100644 drivers/gpu/drm/amd/amdgpu/mmsch_v1_0.h
 create mode 100644 drivers/gpu/drm/amd/amdgpu/mxgpu_ai.c
 create mode 100644 drivers/gpu/drm/amd/amdgpu/mxgpu_ai.h
 create mode 100644 drivers/gpu/drm/amd/amdgpu/nbio_v6_1.c
 create mode 100644 drivers/gpu/drm/amd/amdgpu/nbio_v6_1.h
 create mode 100644 drivers/gpu/drm/amd/amdgpu/psp_gfx_if.h
 create mode 100644 drivers/gpu/drm/amd/amdgpu/psp_v3_1.c
 create mode 100644 drivers/gpu/drm/amd/amdgpu/psp_v3_1.h
 create mode 100644 drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
 create mode 100644 drivers/gpu/drm/amd/amdgpu/sdma_v4_0.h
 create mode 100644 drivers/gpu/drm/amd/amdgpu/soc15.c
 create mode 100644 drivers/gpu/drm/amd/amdgpu/soc15.h
 create mode 100644 drivers/gpu/drm/amd/amdgpu/soc15_common.h
 create mode 100644 drivers/gpu/drm/amd/amdgpu/soc15d.h
 create mode 100644 drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c
 create mode 100644 drivers/gpu/drm/amd/amdgpu/uvd_v7_0.h
 create mode 100644 drivers/gpu/drm/amd/amdgpu/vce_v4_0.c
 create mode 100644 drivers/gpu/drm/amd/amdgpu/vce_v4_0.h
 create mode 100644 drivers/gpu/drm/amd/amdgpu/vega10_ih.c
 create mode 100644 drivers/gpu/drm/amd/amdgpu/vega10_ih.h
 create mode 100644 drivers/gpu/drm/amd/amdgpu/vega10_sdma_pkt_open.h
 create mode 100644 drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c
 create mode 100644 drivers/gpu/drm/amd/display/dc/bios/bios_parser2.h
 create mode 100644 drivers/gpu/drm/amd/display/dc/bios/bios_parser_types_internal2.h
 create mode 100644 drivers/gpu/drm/amd/display/dc/bios/command_table2.c
 create mode 100644 drivers/gpu/drm/amd/display/dc/bios/command_table2.h
 create mode 100644 drivers/gpu/drm/amd/display/dc/bios/command_table_helper2.c
 create mode 100644 drivers/gpu/drm/amd/display/dc/bios/command_table_helper2.h
 create mode 100644 drivers/gpu/drm/amd/display/dc/bios/dce112/command_table_helper2_dce112.c
 create mode 100644 drivers/gpu/drm/amd/display/dc/bios/dce112/command_table_helper2_dce112.h
 create mode 100644 drivers/gpu/drm/amd/display/dc/dce120/Makefile
 create mode 100644 drivers/gpu/drm/amd/display/dc/dce120/dce120_hw_sequencer.c
 create mode 100644 drivers/gpu/drm/amd/display/dc/dce120/dce120_hw_sequencer.h
 create mode 100644 drivers/gpu/drm/amd/display/dc/dce120/dce120_ipp.c
 create mode 100644 drivers/gpu/drm/amd/display/dc/dce120/dce120_ipp.h
 create mode 100644 drivers/gpu/drm/amd/display/dc/dce120/dce120_ipp_cursor.c
 create mode 100644 drivers/gpu/drm/amd/display/dc/dce120/dce120_ipp_gamma.c
 create mode 100644 drivers/gpu/drm/amd/display/dc/dce120/dce120_mem_input.c
 create mode 100644 drivers/gpu/drm/amd/display/dc/dce120/dce120_mem_input.h
 create mode 100644 drivers/gpu/drm/amd/display/dc/dce120/dce120_resource.c
 create mode 100644 drivers/gpu/drm/amd/display/dc/dce120/dce120_resource.h
 create mode 100644 drivers/gpu/drm/amd/display/dc/dce120/dce120_timing_generator.c
 create mode 100644 drivers/gpu/drm/amd/display/dc/dce120/dce120_timing_generator.h
 create mode 100644 drivers/gpu/drm/amd/display/dc/gpio/dce120/hw_factory_dce120.c
 create mode 100644 drivers/gpu/drm/amd/display/dc/gpio/dce120/hw_factory_dce120.h
 create mode 100644 drivers/gpu/drm/amd/display/dc/gpio/dce120/hw_translate_dce120.c
 create mode 100644 drivers/gpu/drm/amd/display/dc/gpio/dce120/hw_translate_dce120.h
 create mode 100644 drivers/gpu/drm/amd/display/dc/i2caux/dce120/i2caux_dce120.c
 create mode 100644 drivers/gpu/drm/amd/display/dc/i2caux/dce120/i2caux_dce120.h
 create mode 100644 drivers/gpu/drm/amd/display/dc/irq/dce120/irq_service_dce120.c
 create mode 100644 drivers/gpu/drm/amd/display/dc/irq/dce120/irq_service_dce120.h
 create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/ATHUB/athub_1_0_default.h
 create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/ATHUB/athub_1_0_offset.h
 create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/ATHUB/athub_1_0_sh_mask.h
 create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/DC/dce_12_0_default.h
 create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/DC/dce_12_0_offset.h
 create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/DC/dce_12_0_sh_mask.h
 create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/GC/gc_9_0_default.h
 create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/GC/gc_9_0_offset.h
 create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/GC/gc_9_0_sh_mask.h
 create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/HDP/hdp_4_0_default.h
 create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/HDP/hdp_4_0_offset.h
 create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/HDP/hdp_4_0_sh_mask.h
 create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/MMHUB/mmhub_1_0_default.h
 create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/MMHUB/mmhub_1_0_offset.h
 create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/MMHUB/mmhub_1_0_sh_mask.h
 create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/MP/mp_9_0_default.h
 create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/MP/mp_9_0_offset.h
 create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/MP/mp_9_0_sh_mask.h
 create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/NBIF/nbif_6_1_default.h
 create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/NBIF/nbif_6_1_offset.h
 create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/NBIF/nbif_6_1_sh_mask.h
 create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/NBIO/nbio_6_1_default.h
 create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/NBIO/nbio_6_1_offset.h
 create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/NBIO/nbio_6_1_sh_mask.h
 create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/OSSSYS/osssys_4_0_default.h
 create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/OSSSYS/osssys_4_0_offset.h
 create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/OSSSYS/osssys_4_0_sh_mask.h
 create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/SDMA0/sdma0_4_0_default.h
 create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/SDMA0/sdma0_4_0_offset.h
 create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/SDMA0/sdma0_4_0_sh_mask.h
 create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/SDMA1/sdma1_4_0_default.h
 create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/SDMA1/sdma1_4_0_offset.h
 create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/SDMA1/sdma1_4_0_sh_mask.h
 create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/SMUIO/smuio_9_0_default.h
 create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/SMUIO/smuio_9_0_offset.h
 create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/SMUIO/smuio_9_0_sh_mask.h
 create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/THM/thm_9_0_default.h
 create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/THM/thm_9_0_offset.h
 create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/THM/thm_9_0_sh_mask.h
 create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/UVD/uvd_7_0_default.h
 create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/UVD/uvd_7_0_offset.h
 create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/UVD/uvd_7_0_sh_mask.h
 create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/VCE/vce_4_0_default.h
 create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/VCE/vce_4_0_offset.h
 create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/VCE/vce_4_0_sh_mask.h
 create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/soc15ip.h
 create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/vega10_enum.h
 create mode 100644 drivers/gpu/drm/amd/include/atomfirmware.h
 create mode 100644 drivers/gpu/drm/amd/include/atomfirmwareid.h
 create mode 100644 drivers/gpu/drm/amd/include/displayobject.h
 create mode 100644 drivers/gpu/drm/amd/include/dm_pp_interface.h
 create mode 100644 drivers/gpu/drm/amd/include/v9_structs.h
 create mode 100644 drivers/gpu/drm/amd/powerplay/hwmgr/ppatomfwctrl.c
 create mode 100644 drivers/gpu/drm/amd/powerplay/hwmgr/ppatomfwctrl.h
 create mode 100644 drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c
 create mode 100644 drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.h
 create mode 100644 drivers/gpu/drm/amd/powerplay/hwmgr/vega10_inc.h
 create mode 100644 drivers/gpu/drm/amd/powerplay/hwmgr/vega10_powertune.c
 create mode 100644 drivers/gpu/drm/amd/powerplay/hwmgr/vega10_powertune.h
 create mode 100644 drivers/gpu/drm/amd/powerplay/hwmgr/vega10_pptable.h
 create mode 100644 drivers/gpu/drm/amd/powerplay/hwmgr/vega10_processpptables.c
 create mode 100644 drivers/gpu/drm/amd/powerplay/hwmgr/vega10_processpptables.h
 create mode 100644 drivers/gpu/drm/amd/powerplay/hwmgr/vega10_thermal.c
 create mode 100644 drivers/gpu/drm/amd/powerplay/hwmgr/vega10_thermal.h
 create mode 100644 drivers/gpu/drm/amd/powerplay/inc/pp_soc15.h
 create mode 100644 drivers/gpu/drm/amd/powerplay/inc/smu9.h
 create mode 100644 drivers/gpu/drm/amd/powerplay/inc/smu9_driver_if.h
 create mode 100644 drivers/gpu/drm/amd/powerplay/inc/vega10_ppsmc.h
 create mode 100644 drivers/gpu/drm/amd/powerplay/smumgr/vega10_smumgr.c
 create mode 100644 drivers/gpu/drm/amd/powerplay/smumgr/vega10_smumgr.h

-- 
2.5.5

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


* [PATCH 001/100] drm/amdgpu: add the new atomfirmware interface header
       [not found] ` <1490041835-11255-1-git-send-email-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
@ 2017-03-20 20:29   ` Alex Deucher
  2017-03-20 20:29   ` [PATCH 002/100] amdgpu: detect if we are using atomfirm or atombios for vbios (v2) Alex Deucher
                     ` (84 subsequent siblings)
  85 siblings, 0 replies; 101+ messages in thread
From: Alex Deucher @ 2017-03-20 20:29 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Alex Deucher

SoC15 ASICs have a new vbios interface.  These headers
define that interface.

Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/include/atomfirmware.h   | 2385 ++++++++++++++++++++++++++
 drivers/gpu/drm/amd/include/atomfirmwareid.h |   86 +
 drivers/gpu/drm/amd/include/displayobject.h  |  249 +++
 3 files changed, 2720 insertions(+)
 create mode 100644 drivers/gpu/drm/amd/include/atomfirmware.h
 create mode 100644 drivers/gpu/drm/amd/include/atomfirmwareid.h
 create mode 100644 drivers/gpu/drm/amd/include/displayobject.h

diff --git a/drivers/gpu/drm/amd/include/atomfirmware.h b/drivers/gpu/drm/amd/include/atomfirmware.h
new file mode 100644
index 0000000..d386875
--- /dev/null
+++ b/drivers/gpu/drm/amd/include/atomfirmware.h
@@ -0,0 +1,2385 @@
+/****************************************************************************\
+* 
+*  File Name      atomfirmware.h
+*  Project        This is an interface header file between atombios and OS GPU drivers for SoC15 products
+*
+*  Description    header file of general definitions for OS and pre-OS video drivers
+*
+*  Copyright 2014 Advanced Micro Devices, Inc.
+*
+* Permission is hereby granted, free of charge, to any person obtaining a copy of this software 
+* and associated documentation files (the "Software"), to deal in the Software without restriction,
+* including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense,
+* and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so,
+* subject to the following conditions:
+*
+* The above copyright notice and this permission notice shall be included in all copies or substantial
+* portions of the Software.
+*
+* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+* THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+* OTHER DEALINGS IN THE SOFTWARE.
+*
+\****************************************************************************/
+
+/* IMPORTANT NOTES
+* If a change in the VBIOS/Driver/Tool interface is only needed for SoC15 and later products, make the change only in this atomfirmware.h header file.
+* If a change is only needed for pre-SoC15 products, make it only in the atombios.h header file.
+* If a change is needed for both pre- and post-SoC15 products, it has to be made separately, and may differ, in atomfirmware.h and atombios.h.
+*/
+
+#ifndef _ATOMFIRMWARE_H_
+#define _ATOMFIRMWARE_H_
+
+enum  atom_bios_header_version_def{
+  ATOM_MAJOR_VERSION        =0x0003,
+  ATOM_MINOR_VERSION        =0x0003,
+};
+
+#ifdef _H2INC
+  #ifndef uint32_t
+    typedef unsigned long uint32_t;
+  #endif
+
+  #ifndef uint16_t
+    typedef unsigned short uint16_t;
+  #endif
+
+  #ifndef uint8_t 
+    typedef unsigned char uint8_t;
+  #endif
+#endif
+
+enum atom_crtc_def{
+  ATOM_CRTC1      =0,
+  ATOM_CRTC2      =1,
+  ATOM_CRTC3      =2,
+  ATOM_CRTC4      =3,
+  ATOM_CRTC5      =4,
+  ATOM_CRTC6      =5,
+  ATOM_CRTC_INVALID  =0xff,
+};
+
+enum atom_ppll_def{
+  ATOM_PPLL0          =2,
+  ATOM_GCK_DFS        =8,
+  ATOM_FCH_CLK        =9,
+  ATOM_DP_DTO         =11,
+  ATOM_COMBOPHY_PLL0  =20,
+  ATOM_COMBOPHY_PLL1  =21,
+  ATOM_COMBOPHY_PLL2  =22,
+  ATOM_COMBOPHY_PLL3  =23,
+  ATOM_COMBOPHY_PLL4  =24,
+  ATOM_COMBOPHY_PLL5  =25,
+  ATOM_PPLL_INVALID   =0xff,
+};
+
+// define ASIC internal encoder id ( bit vector ), used for CRTC_SourceSel
+enum atom_dig_def{
+  ASIC_INT_DIG1_ENCODER_ID  =0x03,
+  ASIC_INT_DIG2_ENCODER_ID  =0x09,
+  ASIC_INT_DIG3_ENCODER_ID  =0x0a,
+  ASIC_INT_DIG4_ENCODER_ID  =0x0b,
+  ASIC_INT_DIG5_ENCODER_ID  =0x0c,
+  ASIC_INT_DIG6_ENCODER_ID  =0x0d,
+  ASIC_INT_DIG7_ENCODER_ID  =0x0e,
+};
+
+//ucEncoderMode
+enum atom_encode_mode_def
+{
+  ATOM_ENCODER_MODE_DP          =0,
+  ATOM_ENCODER_MODE_DP_SST      =0,
+  ATOM_ENCODER_MODE_LVDS        =1,
+  ATOM_ENCODER_MODE_DVI         =2,
+  ATOM_ENCODER_MODE_HDMI        =3,
+  ATOM_ENCODER_MODE_DP_AUDIO    =5,
+  ATOM_ENCODER_MODE_DP_MST      =5,
+  ATOM_ENCODER_MODE_CRT         =15,
+  ATOM_ENCODER_MODE_DVO         =16,
+};
+
+enum atom_encoder_refclk_src_def{
+  ENCODER_REFCLK_SRC_P1PLL      =0,
+  ENCODER_REFCLK_SRC_P2PLL      =1,
+  ENCODER_REFCLK_SRC_P3PLL      =2,
+  ENCODER_REFCLK_SRC_EXTCLK     =3,
+  ENCODER_REFCLK_SRC_INVALID    =0xff,
+};
+
+enum atom_scaler_def{
+  ATOM_SCALER_DISABLE          =0,  /*scaler bypass mode, auto-center & no replication*/
+  ATOM_SCALER_CENTER           =1,  /*For Fudo, it's bypass and auto-center & auto replication*/
+  ATOM_SCALER_EXPANSION        =2,  /*scaler expansion by 2 tap alpha blending mode*/
+};
+
+enum atom_operation_def{
+  ATOM_DISABLE             = 0,
+  ATOM_ENABLE              = 1,
+  ATOM_INIT                = 7,
+  ATOM_GET_STATUS          = 8,
+};
+
+enum atom_embedded_display_op_def{
+  ATOM_LCD_BL_OFF                = 2,
+  ATOM_LCD_BL_OM                 = 3,
+  ATOM_LCD_BL_BRIGHTNESS_CONTROL = 4,
+  ATOM_LCD_SELFTEST_START        = 5,
+  ATOM_LCD_SELFTEST_STOP         = 6,
+};
+
+enum atom_spread_spectrum_mode{
+  ATOM_SS_CENTER_OR_DOWN_MODE_MASK  = 0x01,
+  ATOM_SS_DOWN_SPREAD_MODE          = 0x00,
+  ATOM_SS_CENTRE_SPREAD_MODE        = 0x01,
+  ATOM_INT_OR_EXT_SS_MASK           = 0x02,
+  ATOM_INTERNAL_SS_MASK             = 0x00,
+  ATOM_EXTERNAL_SS_MASK             = 0x02,
+};
+
+/* define panel bit per color  */
+enum atom_panel_bit_per_color{
+  PANEL_BPC_UNDEFINE     =0x00,
+  PANEL_6BIT_PER_COLOR   =0x01,
+  PANEL_8BIT_PER_COLOR   =0x02,
+  PANEL_10BIT_PER_COLOR  =0x03,
+  PANEL_12BIT_PER_COLOR  =0x04,
+  PANEL_16BIT_PER_COLOR  =0x05,
+};
+
+//ucVoltageType
+enum atom_voltage_type
+{
+  VOLTAGE_TYPE_VDDC = 1,
+  VOLTAGE_TYPE_MVDDC = 2,
+  VOLTAGE_TYPE_MVDDQ = 3,
+  VOLTAGE_TYPE_VDDCI = 4,
+  VOLTAGE_TYPE_VDDGFX = 5,
+  VOLTAGE_TYPE_PCC = 6,
+  VOLTAGE_TYPE_MVPP = 7,
+  VOLTAGE_TYPE_LEDDPM = 8,
+  VOLTAGE_TYPE_PCC_MVDD = 9,
+  VOLTAGE_TYPE_PCIE_VDDC = 10,
+  VOLTAGE_TYPE_PCIE_VDDR = 11,
+  VOLTAGE_TYPE_GENERIC_I2C_1 = 0x11,
+  VOLTAGE_TYPE_GENERIC_I2C_2 = 0x12,
+  VOLTAGE_TYPE_GENERIC_I2C_3 = 0x13,
+  VOLTAGE_TYPE_GENERIC_I2C_4 = 0x14,
+  VOLTAGE_TYPE_GENERIC_I2C_5 = 0x15,
+  VOLTAGE_TYPE_GENERIC_I2C_6 = 0x16,
+  VOLTAGE_TYPE_GENERIC_I2C_7 = 0x17,
+  VOLTAGE_TYPE_GENERIC_I2C_8 = 0x18,
+  VOLTAGE_TYPE_GENERIC_I2C_9 = 0x19,
+  VOLTAGE_TYPE_GENERIC_I2C_10 = 0x1A,
+};
+
+enum atom_dgpu_vram_type{
+  ATOM_DGPU_VRAM_TYPE_GDDR5 = 0x50,
+  ATOM_DGPU_VRAM_TYPE_HBM   = 0x60,
+};
+
+enum atom_dp_vs_preemph_def{
+  DP_VS_LEVEL0_PREEMPH_LEVEL0 = 0x00,
+  DP_VS_LEVEL1_PREEMPH_LEVEL0 = 0x01,
+  DP_VS_LEVEL2_PREEMPH_LEVEL0 = 0x02,
+  DP_VS_LEVEL3_PREEMPH_LEVEL0 = 0x03,
+  DP_VS_LEVEL0_PREEMPH_LEVEL1 = 0x08,
+  DP_VS_LEVEL1_PREEMPH_LEVEL1 = 0x09,
+  DP_VS_LEVEL2_PREEMPH_LEVEL1 = 0x0a,
+  DP_VS_LEVEL0_PREEMPH_LEVEL2 = 0x10,
+  DP_VS_LEVEL1_PREEMPH_LEVEL2 = 0x11,
+  DP_VS_LEVEL0_PREEMPH_LEVEL3 = 0x18,
+};
+
+
+/*
+enum atom_string_def{
+asic_bus_type_pcie_string = "PCI_EXPRESS", 
+atom_fire_gl_string       = "FGL",
+atom_bios_string          = "ATOM"
+};
+*/
+
+#pragma pack(1)                          /* BIOS data must use byte alignment */
+
+enum atombios_image_offset{
+OFFSET_TO_ATOM_ROM_HEADER_POINTER          =0x00000048,
+OFFSET_TO_ATOM_ROM_IMAGE_SIZE              =0x00000002,
+OFFSET_TO_ATOMBIOS_ASIC_BUS_MEM_TYPE       =0x94,
+MAXSIZE_OF_ATOMBIOS_ASIC_BUS_MEM_TYPE      =20,  /*including the terminator 0x0!*/
+OFFSET_TO_GET_ATOMBIOS_NUMBER_OF_STRINGS   =0x2f,
+OFFSET_TO_GET_ATOMBIOS_STRING_START        =0x6e,
+};
+
+/****************************************************************************
+* Common header for all tables (data tables, command functions).
+* Every table pointed to by _ATOM_MASTER_DATA_TABLE starts with this common
+* header, and the table pointer actually points to this header.
+****************************************************************************/
+
+struct atom_common_table_header
+{
+  uint16_t structuresize;
+  uint8_t  format_revision;   //mainly used for a hw function, when the parser is not backward compatible
+  uint8_t  content_revision;  //change it when a data table has a structure change, or a hw function has an input/output parameter change
+};
+
+/****************************************************************************  
+* Structure stores the ROM header.
+****************************************************************************/   
+struct atom_rom_header_v2_2
+{
+  struct atom_common_table_header table_header;
+  uint8_t  atom_bios_string[4];        //enum atom_string_def atom_bios_string;     //Signature to distinguish between Atombios and non-atombios, 
+  uint16_t bios_segment_address;
+  uint16_t protectedmodeoffset;
+  uint16_t configfilenameoffset;
+  uint16_t crc_block_offset;
+  uint16_t vbios_bootupmessageoffset;
+  uint16_t int10_offset;
+  uint16_t pcibusdevinitcode;
+  uint16_t iobaseaddress;
+  uint16_t subsystem_vendor_id;
+  uint16_t subsystem_id;
+  uint16_t pci_info_offset;
+  uint16_t masterhwfunction_offset;      //Offset for SW to get all command function offsets; don't change the position
+  uint16_t masterdatatable_offset;       //Offset for SW to get all data table offsets; don't change the position
+  uint16_t reserved;
+  uint32_t pspdirtableoffset;
+};
+
+/*==============================hw function portion======================================================================*/
+
+
+/****************************************************************************
+* Structures used in Command.mtb; each function name is not given here since those functions could change from time to time.
+* The real functionality of each function is associated with the parameter structure version when defined.
+* For all internal cmd function definitions, please refer to atomstruct.h
+****************************************************************************/
+struct atom_master_list_of_command_functions_v2_1{
+  uint16_t asic_init;                   //Function
+  uint16_t cmd_function1;               //used as an internal one
+  uint16_t cmd_function2;               //used as an internal one
+  uint16_t cmd_function3;               //used as an internal one
+  uint16_t digxencodercontrol;          //Function   
+  uint16_t cmd_function5;               //used as an internal one
+  uint16_t cmd_function6;               //used as an internal one 
+  uint16_t cmd_function7;               //used as an internal one
+  uint16_t cmd_function8;               //used as an internal one
+  uint16_t cmd_function9;               //used as an internal one
+  uint16_t setengineclock;              //Function
+  uint16_t setmemoryclock;              //Function
+  uint16_t setpixelclock;               //Function
+  uint16_t enabledisppowergating;       //Function            
+  uint16_t cmd_function14;              //used as an internal one             
+  uint16_t cmd_function15;              //used as an internal one
+  uint16_t cmd_function16;              //used as an internal one
+  uint16_t cmd_function17;              //used as an internal one
+  uint16_t cmd_function18;              //used as an internal one
+  uint16_t cmd_function19;              //used as an internal one 
+  uint16_t cmd_function20;              //used as an internal one               
+  uint16_t cmd_function21;              //used as an internal one
+  uint16_t cmd_function22;              //used as an internal one
+  uint16_t cmd_function23;              //used as an internal one
+  uint16_t cmd_function24;              //used as an internal one
+  uint16_t cmd_function25;              //used as an internal one
+  uint16_t cmd_function26;              //used as an internal one
+  uint16_t cmd_function27;              //used as an internal one
+  uint16_t cmd_function28;              //used as an internal one
+  uint16_t cmd_function29;              //used as an internal one
+  uint16_t cmd_function30;              //used as an internal one
+  uint16_t cmd_function31;              //used as an internal one
+  uint16_t cmd_function32;              //used as an internal one
+  uint16_t cmd_function33;              //used as an internal one
+  uint16_t blankcrtc;                   //Function
+  uint16_t enablecrtc;                  //Function
+  uint16_t cmd_function36;              //used as an internal one
+  uint16_t cmd_function37;              //used as an internal one
+  uint16_t cmd_function38;              //used as an internal one
+  uint16_t cmd_function39;              //used as an internal one
+  uint16_t cmd_function40;              //used as an internal one
+  uint16_t getsmuclockinfo;             //Function
+  uint16_t selectcrtc_source;           //Function
+  uint16_t cmd_function43;              //used as an internal one
+  uint16_t cmd_function44;              //used as an internal one
+  uint16_t cmd_function45;              //used as an internal one
+  uint16_t setdceclock;                 //Function
+  uint16_t getmemoryclock;              //Function           
+  uint16_t getengineclock;              //Function           
+  uint16_t setcrtc_usingdtdtiming;      //Function
+  uint16_t externalencodercontrol;      //Function 
+  uint16_t cmd_function51;              //used as an internal one
+  uint16_t cmd_function52;              //used as an internal one
+  uint16_t cmd_function53;              //used as an internal one
+  uint16_t processi2cchanneltransaction;//Function           
+  uint16_t cmd_function55;              //used as an internal one
+  uint16_t cmd_function56;              //used as an internal one
+  uint16_t cmd_function57;              //used as an internal one
+  uint16_t cmd_function58;              //used as an internal one
+  uint16_t cmd_function59;              //used as an internal one
+  uint16_t computegpuclockparam;        //Function         
+  uint16_t cmd_function61;              //used as an internal one
+  uint16_t cmd_function62;              //used as an internal one
+  uint16_t dynamicmemorysettings;       //Function
+  uint16_t memorytraining;              //Function
+  uint16_t cmd_function65;              //used as an internal one
+  uint16_t cmd_function66;              //used as an internal one
+  uint16_t setvoltage;                  //Function
+  uint16_t cmd_function68;              //used as an internal one
+  uint16_t readefusevalue;              //Function
+  uint16_t cmd_function70;              //used as an internal one 
+  uint16_t cmd_function71;              //used as an internal one
+  uint16_t cmd_function72;              //used as an internal one
+  uint16_t cmd_function73;              //used as an internal one
+  uint16_t cmd_function74;              //used as an internal one
+  uint16_t cmd_function75;              //used as an internal one
+  uint16_t dig1transmittercontrol;      //Function
+  uint16_t cmd_function77;              //used as an internal one
+  uint16_t processauxchanneltransaction;//Function
+  uint16_t cmd_function79;              //used as an internal one
+  uint16_t getvoltageinfo;              //Function
+};
+
+struct atom_master_command_function_v2_1
+{
+  struct atom_common_table_header  table_header;
+  struct atom_master_list_of_command_functions_v2_1 listofcmdfunctions;
+};
+
+/**************************************************************************** 
+* Structures used in every command function
+****************************************************************************/   
+struct atom_function_attribute
+{
+  uint16_t  ws_in_bytes:8;            //[7:0]=size of workspace in bytes (a multiple of a dword)
+  uint16_t  ps_in_bytes:7;            //[14:8]=size of parameter space in bytes (a multiple of a dword)
+  uint16_t  updated_by_util:1;        //[15]=flag to indicate the function is updated by the util
+};
+
+
+/****************************************************************************
+* Common header for all hw functions.
+* Every function pointed to by _master_list_of_hw_function starts with this
+* common header, and the function pointer actually points to this header.
+****************************************************************************/
+struct atom_rom_hw_function_header
+{
+  struct atom_common_table_header func_header;
+  struct atom_function_attribute func_attrib;  
+};
+
+
+/*==============================sw data table portion======================================================================*/
+/****************************************************************************
+* Structures used in data.mtb; each data table name is not given here since those data tables could change from time to time.
+* The real name of each table is given when its data structure version is defined.
+****************************************************************************/
+struct atom_master_list_of_data_tables_v2_1{
+  uint16_t utilitypipeline;               /* Offset for the utility to get parser info. Don't change this position! */
+  uint16_t multimedia_info;               
+  uint16_t sw_datatable2;
+  uint16_t sw_datatable3;                 
+  uint16_t firmwareinfo;                  /* Shared by various SW components */
+  uint16_t sw_datatable5;
+  uint16_t lcd_info;                      /* Shared by various SW components */
+  uint16_t sw_datatable7;
+  uint16_t smu_info;                 
+  uint16_t sw_datatable9;
+  uint16_t sw_datatable10; 
+  uint16_t vram_usagebyfirmware;          /* Shared by various SW components */
+  uint16_t gpio_pin_lut;                  /* Shared by various SW components */
+  uint16_t sw_datatable13; 
+  uint16_t gfx_info;
+  uint16_t powerplayinfo;                 /* Shared by various SW components */
+  uint16_t sw_datatable16;                
+  uint16_t sw_datatable17;
+  uint16_t sw_datatable18;
+  uint16_t sw_datatable19;                
+  uint16_t sw_datatable20;
+  uint16_t sw_datatable21;
+  uint16_t displayobjectinfo;             /* Shared by various SW components */
+  uint16_t indirectioaccess;              /* used as an internal one */
+  uint16_t umc_info;                      /* Shared by various SW components */
+  uint16_t sw_datatable25;
+  uint16_t sw_datatable26;
+  uint16_t dce_info;                      /* Shared by various SW components */
+  uint16_t vram_info;                     /* Shared by various SW components */
+  uint16_t sw_datatable29;
+  uint16_t integratedsysteminfo;          /* Shared by various SW components */
+  uint16_t asic_profiling_info;           /* Shared by various SW components */
+  uint16_t voltageobject_info;            /* shared by various SW components */
+  uint16_t sw_datatable33;
+  uint16_t sw_datatable34;
+};
+
+
+struct atom_master_data_table_v2_1
+{ 
+  struct atom_common_table_header table_header;
+  struct atom_master_list_of_data_tables_v2_1 listOfdatatables;
+};
+
+
+struct atom_dtd_format
+{
+  uint16_t  pixclk;
+  uint16_t  h_active;
+  uint16_t  h_blanking_time;
+  uint16_t  v_active;
+  uint16_t  v_blanking_time;
+  uint16_t  h_sync_offset;
+  uint16_t  h_sync_width;
+  uint16_t  v_sync_offset;
+  uint16_t  v_syncwidth;
+  uint16_t  reserved;
+  uint16_t  reserved0;
+  uint8_t   h_border;
+  uint8_t   v_border;
+  uint16_t  miscinfo;
+  uint8_t   atom_mode_id;
+  uint8_t   refreshrate;
+};
+
+/* atom_dtd_format.miscinfo definition */
+enum atom_dtd_format_modemiscinfo{
+  ATOM_HSYNC_POLARITY    = 0x0002,
+  ATOM_VSYNC_POLARITY    = 0x0004,
+  ATOM_H_REPLICATIONBY2  = 0x0010,
+  ATOM_V_REPLICATIONBY2  = 0x0020,
+  ATOM_INTERLACE         = 0x0080,
+  ATOM_COMPOSITESYNC     = 0x0040,
+};
+
+
+/* utilitypipeline
+ * when format_revision==1 && content_revision==1, this is an info table for atomworks to use during a debug session; no structure is associated with it.
+ * its location can't change
+*/
+
+
+/* 
+  ***************************************************************************
+    Data Table firmwareinfo  structure
+  ***************************************************************************
+*/
+
+struct atom_firmware_info_v3_1
+{
+  struct atom_common_table_header table_header;
+  uint32_t firmware_revision;
+  uint32_t bootup_sclk_in10khz;
+  uint32_t bootup_mclk_in10khz;
+  uint32_t firmware_capability;             // enum atombios_firmware_capability
+  uint32_t main_call_parser_entry;          /* direct address of main parser call in VBIOS binary. */
+  uint32_t bios_scratch_reg_startaddr;      // 1st bios scratch register dword address 
+  uint16_t bootup_vddc_mv;
+  uint16_t bootup_vddci_mv; 
+  uint16_t bootup_mvddc_mv;
+  uint16_t bootup_vddgfx_mv;
+  uint8_t  mem_module_id;       
+  uint8_t  coolingsolution_id;              /*0: Air cooling; 1: Liquid cooling ... */
+  uint8_t  reserved1[2];
+  uint32_t mc_baseaddr_high;
+  uint32_t mc_baseaddr_low;
+  uint32_t reserved2[6];
+};
+
+/* Total 32bit cap indication */
+enum atombios_firmware_capability
+{
+  ATOM_FIRMWARE_CAP_FIRMWARE_POSTED = 0x00000001,
+  ATOM_FIRMWARE_CAP_GPU_VIRTUALIZATION  = 0x00000002,
+  ATOM_FIRMWARE_CAP_WMI_SUPPORT  = 0x00000040,
+};
+
+enum atom_cooling_solution_id{
+  AIR_COOLING    = 0x00,
+  LIQUID_COOLING = 0x01
+};
+
+
+/* 
+  ***************************************************************************
+    Data Table lcd_info  structure
+  ***************************************************************************
+*/
+
+struct lcd_info_v2_1
+{
+  struct  atom_common_table_header table_header;
+  struct  atom_dtd_format  lcd_timing;
+  uint16_t backlight_pwm;
+  uint16_t special_handle_cap;
+  uint16_t panel_misc;
+  uint16_t lvds_max_slink_pclk;
+  uint16_t lvds_ss_percentage;
+  uint16_t lvds_ss_rate_10hz;
+  uint8_t  pwr_on_digon_to_de;          /*all pwr sequence numbers below are in units of 4ms*/
+  uint8_t  pwr_on_de_to_vary_bl;
+  uint8_t  pwr_down_vary_bloff_to_de;
+  uint8_t  pwr_down_de_to_digoff;
+  uint8_t  pwr_off_delay;
+  uint8_t  pwr_on_vary_bl_to_blon;
+  uint8_t  pwr_down_bloff_to_vary_bloff;
+  uint8_t  panel_bpc;
+  uint8_t  dpcd_edp_config_cap;
+  uint8_t  dpcd_max_link_rate;
+  uint8_t  dpcd_max_lane_count;
+  uint8_t  dpcd_max_downspread;
+  uint8_t  min_allowed_bl_level;
+  uint8_t  max_allowed_bl_level;
+  uint8_t  bootup_bl_level;
+  uint8_t  dplvdsrxid;
+  uint32_t reserved1[8];
+};
+
+/* lcd_info_v2_1.panel_misc definition */
+enum atom_lcd_info_panel_misc{
+  ATOM_PANEL_MISC_FPDI            =0x0002,
+};
+
+//uceDPToLVDSRxId
+enum atom_lcd_info_dptolvds_rx_id
+{
+  eDP_TO_LVDS_RX_DISABLE                 = 0x00,       // no eDP->LVDS translator chip
+  eDP_TO_LVDS_COMMON_ID                  = 0x01,       // common eDP->LVDS translator chip without AMD SW init
+  eDP_TO_LVDS_REALTEK_ID                 = 0x02,       // Realtek translator, which requires AMD SW init
+};
+
+    
+/* 
+  ***************************************************************************
+    Data Table gpio_pin_lut  structure
+  ***************************************************************************
+*/
+
+struct atom_gpio_pin_assignment
+{
+  uint32_t data_a_reg_index;
+  uint8_t  gpio_bitshift;
+  uint8_t  gpio_mask_bitshift;
+  uint8_t  gpio_id;
+  uint8_t  reserved;
+};
+
+/* atom_gpio_pin_assignment.gpio_id definition */
+enum atom_gpio_pin_assignment_gpio_id {
+  I2C_HW_LANE_MUX        =0x0f, /* only valid when bit7=1 */
+  I2C_HW_ENGINE_ID_MASK  =0x70, /* only valid when bit7=1 */ 
+  I2C_HW_CAP             =0x80, /*only when I2C_HW_CAP is set is the pin ID assigned to an I2C pin pair; otherwise, it's a generic GPIO pin */
+
+  /* gpio_id pre-define id for multiple usage */
+  /* GPIO used to control PCIE_VDDC on certain SLT boards */
+  PCIE_VDDC_CONTROL_GPIO_PINID = 56,
+  /* if PP_AC_DC_SWITCH_GPIO_PINID is in Gpio_Pin_LutTable, the AC/DC switching feature is enabled */
+  PP_AC_DC_SWITCH_GPIO_PINID = 60,
+  /* if VDDC_REGULATOR_VRHOT_GPIO_PINID is in Gpio_Pin_LutTable, the VRHot feature is enabled */
+  VDDC_VRHOT_GPIO_PINID = 61,
+  /* if VDDC_PCC_GPIO_PINID is in GPIO_LUTable, the Peak Current Control feature is enabled */
+  VDDC_PCC_GPIO_PINID = 62,
+  /* Only used on certain SLT/PA boards to allow a utility to cut the Efuse. */
+  EFUSE_CUT_ENABLE_GPIO_PINID = 63,
+  /* ucGPIO=DRAM_SELF_REFRESH_GPIO_PIND is used for memory self refresh (ucGPIO=0, DRAM self-refresh; ucGPIO= */
+  DRAM_SELF_REFRESH_GPIO_PINID = 64,
+  /* Thermal interrupt output->system thermal chip GPIO pin */
+  THERMAL_INT_OUTPUT_GPIO_PINID =65,
+};
+
+
+struct atom_gpio_pin_lut_v2_1
+{
+  struct  atom_common_table_header  table_header;
+  /*the real number of entries included in the structure is calculated as (whole structure size - header size)/sizeof(atom_gpio_pin_assignment) */
+  struct  atom_gpio_pin_assignment  gpio_pin[8];
+};
+
+
+/* 
+  ***************************************************************************
+    Data Table vram_usagebyfirmware  structure
+  ***************************************************************************
+*/
+
+struct vram_usagebyfirmware_v2_1
+{
+  struct  atom_common_table_header  table_header;
+  uint32_t  start_address_in_kb;
+  uint16_t  used_by_firmware_in_kb;
+  uint16_t  used_by_driver_in_kb; 
+};
+
+
+/* 
+  ***************************************************************************
+    Data Table displayobjectinfo  structure
+  ***************************************************************************
+*/
+
+enum atom_object_record_type_id 
+{
+  ATOM_I2C_RECORD_TYPE =1,
+  ATOM_HPD_INT_RECORD_TYPE =2,
+  ATOM_OBJECT_GPIO_CNTL_RECORD_TYPE =9,
+  ATOM_CONNECTOR_HPDPIN_LUT_RECORD_TYPE =16,
+  ATOM_CONNECTOR_AUXDDC_LUT_RECORD_TYPE =17,
+  ATOM_ENCODER_CAP_RECORD_TYPE=20,
+  ATOM_BRACKET_LAYOUT_RECORD_TYPE=21,
+  ATOM_CONNECTOR_FORCED_TMDS_CAP_RECORD_TYPE=22,
+  ATOM_RECORD_END_TYPE  =0xFF,
+};
+
+struct atom_common_record_header
+{
+  uint8_t record_type;                      //An enum to indicate the record type
+  uint8_t record_size;                      //The size of the whole record in byte
+};
+
+struct atom_i2c_record
+{
+  struct atom_common_record_header record_header;   //record_type = ATOM_I2C_RECORD_TYPE
+  uint8_t i2c_id; 
+  uint8_t i2c_slave_addr;                   //The slave address; it's 0 when the record is attached to a connector for DDC
+};
+
+struct atom_hpd_int_record
+{
+  struct atom_common_record_header record_header;  //record_type = ATOM_HPD_INT_RECORD_TYPE
+  uint8_t  pin_id;              //Corresponding block in GPIO_PIN_INFO table gives the pin info           
+  uint8_t  plugin_pin_state;
+};
+
+// Bit maps for ATOM_ENCODER_CAP_RECORD.usEncoderCap
+enum atom_encoder_caps_def
+{
+  ATOM_ENCODER_CAP_RECORD_HBR2                  =0x01,         // DP1.2 HBR2 is supported by the HW encoder; retired in NI, from SI the real meaning of this bit is MST_EN
+  ATOM_ENCODER_CAP_RECORD_MST_EN                =0x01,         // from SI, this bit means DP MST is enabled or not
+  ATOM_ENCODER_CAP_RECORD_HBR2_EN               =0x02,         // DP1.2 HBR2 setting is qualified and HBR2 can be enabled
+  ATOM_ENCODER_CAP_RECORD_HDMI6Gbps_EN          =0x04,         // HDMI2.0 6Gbps is enabled or not
+  ATOM_ENCODER_CAP_RECORD_HBR3_EN               =0x08,         // DP1.3 HBR3 is supported by the board
+};
+
+struct  atom_encoder_caps_record
+{
+  struct atom_common_record_header record_header;  //record_type = ATOM_ENCODER_CAP_RECORD_TYPE
+  uint32_t  encodercaps;
+};
+
+enum atom_connector_caps_def
+{
+  ATOM_CONNECTOR_CAP_INTERNAL_DISPLAY         = 0x01,        //a cap bit to indicate that this non-embedded display connector is an internal display
+  ATOM_CONNECTOR_CAP_INTERNAL_DISPLAY_BL      = 0x02,        //a cap bit to indicate that this internal display requires BL control from GPU, refers to lcd_info for BL PWM freq 
+};
+
+struct atom_disp_connector_caps_record
+{
+  struct atom_common_record_header record_header;
+  uint32_t connectcaps;                          
+};
+
+//The following generic object gpio pin control record type will replace JTAG_RECORD/FPGA_CONTROL_RECORD/DVI_EXT_INPUT_RECORD above gradually
+struct atom_gpio_pin_control_pair
+{
+  uint8_t gpio_id;               // GPIO_ID, find the corresponding ID in GPIO_LUT table
+  uint8_t gpio_pinstate;         // Pin state showing how to set-up the pin
+};
+
+struct atom_object_gpio_cntl_record
+{
+  struct atom_common_record_header record_header;
+  uint8_t flag;                   // Future expandability
+  uint8_t number_of_pins;         // Number of GPIO pins used to control the object
+  struct atom_gpio_pin_control_pair gpio[1];              // the real number of gpio pin pairs is determined by number_of_pins
+};
+
+//Definitions for GPIO pin state 
+enum atom_gpio_pin_control_pinstate_def
+{
+  GPIO_PIN_TYPE_INPUT             = 0x00,
+  GPIO_PIN_TYPE_OUTPUT            = 0x10,
+  GPIO_PIN_TYPE_HW_CONTROL        = 0x20,
+
+//For GPIO_PIN_TYPE_OUTPUT the following is defined 
+  GPIO_PIN_OUTPUT_STATE_MASK      = 0x01,
+  GPIO_PIN_OUTPUT_STATE_SHIFT     = 0,
+  GPIO_PIN_STATE_ACTIVE_LOW       = 0x0,
+  GPIO_PIN_STATE_ACTIVE_HIGH      = 0x1,
+};
+
+// Indexes to GPIO array in GLSync record 
+// GLSync record is for Frame Lock/Gen Lock feature.
+enum atom_glsync_record_gpio_index_def
+{
+  ATOM_GPIO_INDEX_GLSYNC_REFCLK    = 0,
+  ATOM_GPIO_INDEX_GLSYNC_HSYNC     = 1,
+  ATOM_GPIO_INDEX_GLSYNC_VSYNC     = 2,
+  ATOM_GPIO_INDEX_GLSYNC_SWAP_REQ  = 3,
+  ATOM_GPIO_INDEX_GLSYNC_SWAP_GNT  = 4,
+  ATOM_GPIO_INDEX_GLSYNC_INTERRUPT = 5,
+  ATOM_GPIO_INDEX_GLSYNC_V_RESET   = 6,
+  ATOM_GPIO_INDEX_GLSYNC_SWAP_CNTL = 7,
+  ATOM_GPIO_INDEX_GLSYNC_SWAP_SEL  = 8,
+  ATOM_GPIO_INDEX_GLSYNC_MAX       = 9,
+};
+
+
+struct atom_connector_hpdpin_lut_record     //record for ATOM_CONNECTOR_HPDPIN_LUT_RECORD_TYPE
+{
+  struct atom_common_record_header record_header;
+  uint8_t hpd_pin_map[8];             
+};
+
+struct atom_connector_auxddc_lut_record     //record for ATOM_CONNECTOR_AUXDDC_LUT_RECORD_TYPE
+{
+  struct atom_common_record_header record_header;
+  uint8_t aux_ddc_map[8];
+};
+
+struct atom_connector_forced_tmds_cap_record
+{
+  struct atom_common_record_header record_header;
+  // override TMDS capability on this connector when it operates in TMDS mode.  maxtmdsclkrate_in2_5mhz = max TMDS clock in MHz / 2.5
+  uint8_t  maxtmdsclkrate_in2_5mhz;
+  uint8_t  reserved;
+};    
+
+struct atom_connector_layout_info
+{
+  uint16_t connectorobjid;
+  uint8_t  connector_type;
+  uint8_t  position;
+};
+
+// define ATOM_CONNECTOR_LAYOUT_INFO.ucConnectorType to describe the display connector size
+enum atom_connector_layout_info_connector_type_def
+{
+  CONNECTOR_TYPE_DVI_D                 = 1,
+ 
+  CONNECTOR_TYPE_HDMI                  = 4,
+  CONNECTOR_TYPE_DISPLAY_PORT          = 5,
+  CONNECTOR_TYPE_MINI_DISPLAY_PORT     = 6,
+};
+
+struct  atom_bracket_layout_record
+{
+  struct atom_common_record_header record_header;
+  uint8_t bracketlen;
+  uint8_t bracketwidth;
+  uint8_t conn_num;
+  uint8_t reserved;
+  struct atom_connector_layout_info  conn_info[1];
+};
+
+enum atom_display_device_tag_def{
+  ATOM_DISPLAY_LCD1_SUPPORT            = 0x0002,  //an embedded display is either an LVDS or eDP signal type of display
+  ATOM_DISPLAY_DFP1_SUPPORT            = 0x0008,
+  ATOM_DISPLAY_DFP2_SUPPORT            = 0x0080,
+  ATOM_DISPLAY_DFP3_SUPPORT            = 0x0200,
+  ATOM_DISPLAY_DFP4_SUPPORT            = 0x0400,
+  ATOM_DISPLAY_DFP5_SUPPORT            = 0x0800,
+  ATOM_DISPLAY_DFP6_SUPPORT            = 0x0040,
+  ATOM_DISPLAY_DFPx_SUPPORT            = 0x0ec8,
+};
+
+struct atom_display_object_path_v2
+{
+  uint16_t display_objid;                  //Connector Object ID or Misc Object ID
+  uint16_t disp_recordoffset;
+  uint16_t encoderobjid;                   //first encoder closer to the connector, could be either an external or internal encoder
+  uint16_t extencoderobjid;                //2nd encoder after the first encoder, from the connector point of view;
+  uint16_t encoder_recordoffset;
+  uint16_t extencoder_recordoffset;
+  uint16_t device_tag;                     //a supported device vector; each display path starts with this. The paths are enumerated in priority order: a higher-priority path appears first
+  uint8_t  priority_id;
+  uint8_t  reserved;
+};
+
+struct display_object_info_table_v1_4
+{
+  struct    atom_common_table_header  table_header;
+  uint16_t  supporteddevices;
+  uint8_t   number_of_path;
+  uint8_t   reserved;
+  struct    atom_display_object_path_v2 display_path[8];   //the real number of paths included in the structure is calculated as (whole structure size - header size - number_of_path)/sizeof(atom_display_object_path)
+};
+
+
+/* 
+  ***************************************************************************
+    Data Table dce_info  structure
+  ***************************************************************************
+*/
+struct atom_display_controller_info_v4_1
+{
+  struct  atom_common_table_header  table_header;
+  uint32_t display_caps;
+  uint32_t bootup_dispclk_10khz;
+  uint16_t dce_refclk_10khz;
+  uint16_t i2c_engine_refclk_10khz;
+  uint16_t dvi_ss_percentage;       // in unit of 0.001%
+  uint16_t dvi_ss_rate_10hz;        
+  uint16_t hdmi_ss_percentage;      // in unit of 0.001%
+  uint16_t hdmi_ss_rate_10hz;
+  uint16_t dp_ss_percentage;        // in unit of 0.001%
+  uint16_t dp_ss_rate_10hz;
+  uint8_t  dvi_ss_mode;             // enum of atom_spread_spectrum_mode
+  uint8_t  hdmi_ss_mode;            // enum of atom_spread_spectrum_mode
+  uint8_t  dp_ss_mode;              // enum of atom_spread_spectrum_mode 
+  uint8_t  ss_reserved;
+  uint8_t  hardcode_mode_num;       // a hardcode mode number defined in StandardVESA_TimingTable when a CRT or DFP EDID is not available
+  uint8_t  reserved1[3];
+  uint16_t dpphy_refclk_10khz;  
+  uint16_t reserved2;
+  uint8_t  dceip_min_ver;
+  uint8_t  dceip_max_ver;
+  uint8_t  max_disp_pipe_num;
+  uint8_t  max_vbios_active_disp_pipe_num;
+  uint8_t  max_ppll_num;
+  uint8_t  max_disp_phy_num;
+  uint8_t  max_aux_pairs;
+  uint8_t  remotedisplayconfig;
+  uint8_t  reserved3[8];
+};
+
+
+struct atom_display_controller_info_v4_2
+{
+  struct  atom_common_table_header  table_header;
+  uint32_t display_caps;            
+  uint32_t bootup_dispclk_10khz;
+  uint16_t dce_refclk_10khz;
+  uint16_t i2c_engine_refclk_10khz;
+  uint16_t dvi_ss_percentage;       // in unit of 0.001%   
+  uint16_t dvi_ss_rate_10hz;
+  uint16_t hdmi_ss_percentage;      // in unit of 0.001%
+  uint16_t hdmi_ss_rate_10hz;
+  uint16_t dp_ss_percentage;        // in unit of 0.001%
+  uint16_t dp_ss_rate_10hz;
+  uint8_t  dvi_ss_mode;             // enum of atom_spread_spectrum_mode
+  uint8_t  hdmi_ss_mode;            // enum of atom_spread_spectrum_mode
+  uint8_t  dp_ss_mode;              // enum of atom_spread_spectrum_mode 
+  uint8_t  ss_reserved;
+  uint8_t  dfp_hardcode_mode_num;   // DFP hardcode mode number defined in StandardVESA_TimingTable when EDID is not available
+  uint8_t  dfp_hardcode_refreshrate;// DFP hardcode mode refreshrate defined in StandardVESA_TimingTable when EDID is not available
+  uint8_t  vga_hardcode_mode_num;   // VGA hardcode mode number defined in StandardVESA_TimingTable when EDID is not available
+  uint8_t  vga_hardcode_refreshrate;// VGA hardcode mode refresh rate defined in StandardVESA_TimingTable when EDID is not available
+  uint16_t dpphy_refclk_10khz;  
+  uint16_t reserved2;
+  uint8_t  dcnip_min_ver;
+  uint8_t  dcnip_max_ver;
+  uint8_t  max_disp_pipe_num;
+  uint8_t  max_vbios_active_disp_pipe_num;
+  uint8_t  max_ppll_num;
+  uint8_t  max_disp_phy_num;
+  uint8_t  max_aux_pairs;
+  uint8_t  remotedisplayconfig;
+  uint8_t  reserved3[8];
+};
+
+
+enum dce_info_caps_def
+{
+  // only for VBIOS
+  DCE_INFO_CAPS_FORCE_DISPDEV_CONNECTED  =0x02,      
+  // only for VBIOS
+  DCE_INFO_CAPS_DISABLE_DFP_DP_HBR2      =0x04,
+  // only for VBIOS
+  DCE_INFO_CAPS_ENABLE_INTERLAC_TIMING   =0x08,
+
+};
+
+/* 
+  ***************************************************************************
+    Data Table ATOM_EXTERNAL_DISPLAY_CONNECTION_INFO  structure
+  ***************************************************************************
+*/
+struct atom_ext_display_path
+{
+  uint16_t  device_tag;                      //A bit vector to show what devices are supported 
+  uint16_t  device_acpi_enum;                //16bit device ACPI id. 
+  uint16_t  connectorobjid;                  //A physical connector for displays to plug in, using object connector definitions
+  uint8_t   auxddclut_index;                 //An index into external AUX/DDC channel LUT
+  uint8_t   hpdlut_index;                    //An index into external HPD pin LUT
+  uint16_t  ext_encoder_objid;               //external encoder object id
+  uint8_t   channelmapping;                  // if channelmapping=0, use the default one-to-one mapping
+  uint8_t   chpninvert;                      // bit vector for up to 8 lanes; =0: P and N not inverted, =1: P and N inverted
+  uint16_t  caps;
+  uint16_t  reserved; 
+};
+
+//usCaps
+enum ext_display_path_cap_def
+{
+  EXT_DISPLAY_PATH_CAPS__HBR2_DISABLE               =0x0001,
+  EXT_DISPLAY_PATH_CAPS__DP_FIXED_VS_EN             =0x0002,
+  EXT_DISPLAY_PATH_CAPS__EXT_CHIP_MASK              =0x007C,           
+};
+
+struct atom_external_display_connection_info
+{
+  struct  atom_common_table_header  table_header;
+  uint8_t                  guid[16];                                  // a GUID is a 16 byte long string
+  struct atom_ext_display_path path[7];                               // total of fixed 7 entries.
+  uint8_t                  checksum;                                  // simple byte checksum: the sum of the whole structure equals 0x0
+  uint8_t                  stereopinid;                               // used for eDP panel
+  uint8_t                  remotedisplayconfig;
+  uint8_t                  edptolvdsrxid;
+  uint8_t                  fixdpvoltageswing;                         // when caps bit 1 is set, this indicates the DP_LANE_SET value
+  uint8_t                  reserved[3];                               // for potential expansion
+};
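The checksum byte above is specified so that the byte sum of the whole structure comes out to 0x00 modulo 256. A hypothetical verify/compute pair under that reading (these helpers are illustrative, not driver functions):

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Returns nonzero when the byte sum of the structure is 0x00 mod 256,
 * which is how the checksum field is documented. */
static int ext_disp_checksum_ok(const uint8_t *buf, size_t len)
{
  uint8_t sum = 0;
  for (size_t i = 0; i < len; i++)
    sum += buf[i];
  return sum == 0;
}

/* Value to store in the checksum byte, assuming that byte is zero
 * while the rest of the structure is summed. */
static uint8_t ext_disp_checksum(const uint8_t *buf, size_t len)
{
  uint8_t sum = 0;
  for (size_t i = 0; i < len; i++)
    sum += buf[i];
  return (uint8_t)(0u - sum);
}
```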
+
+/* 
+  ***************************************************************************
+    Data Table integratedsysteminfo  structure
+  ***************************************************************************
+*/
+
+struct atom_camera_dphy_timing_param
+{
+  uint8_t  profile_id;       // SENSOR_PROFILES
+  uint32_t param;
+};
+
+struct atom_camera_dphy_elec_param
+{
+  uint16_t param[3];
+};
+
+struct atom_camera_module_info
+{
+  uint8_t module_id;                    // 0: Rear, 1: Front right of user, 2: Front left of user
+  uint8_t module_name[8];
+  struct atom_camera_dphy_timing_param timingparam[6]; // exact count pending confirmation from the sensor vendor
+};
+
+struct atom_camera_flashlight_info
+{
+  uint8_t flashlight_id;                // 0: Rear, 1: Front
+  uint8_t name[8];
+};
+
+struct atom_camera_data
+{
+  uint32_t versionCode;
+  struct atom_camera_module_info cameraInfo[3];      // Assuming 3 camera sensors max
+  struct atom_camera_flashlight_info flashInfo;      // Assuming 1 flashlight max
+  struct atom_camera_dphy_elec_param dphy_param;
+  uint32_t crc_val;         // CRC
+};
+
+
+struct atom_14nm_dpphy_dvihdmi_tuningset
+{
+  uint32_t max_symclk_in10khz;
+  uint8_t encoder_mode;            //atom_encode_mode_def, =2: DVI, =3: HDMI mode
+  uint8_t phy_sel;                 //bit vector of phy, bit0= phya, bit1=phyb, ....bit5 = phyf 
+  uint16_t margindeemph;           //COMMON_MAR_DEEMPH_NOM[7:0]tx_margin_nom [15:8]deemph_gen1_nom
+  uint8_t deemph_6db_4;            //COMMON_SELDEEMPH60[31:24]deemph_6db_4
+  uint8_t boostadj;                //CMD_BUS_GLOBAL_FOR_TX_LANE0 [19:16]tx_boost_adj  [20]tx_boost_en  [23:22]tx_binary_ron_code_offset
+  uint8_t tx_driver_fifty_ohms;    //COMMON_ZCALCODE_CTRL[21].tx_driver_fifty_ohms
+  uint8_t deemph_sel;              //MARGIN_DEEMPH_LANE0.DEEMPH_SEL
+};
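Per the field comment, margindeemph packs tx_margin_nom in bits 7:0 and deemph_gen1_nom in bits 15:8 of COMMON_MAR_DEEMPH_NOM. An explicit split under that assumed bit order (hypothetical helpers, field positions taken from the comments above):

```c
#include <assert.h>
#include <stdint.h>

/* margindeemph layout per the COMMON_MAR_DEEMPH_NOM comment:
 *   [7:0]  tx_margin_nom
 *   [15:8] deemph_gen1_nom */
static uint8_t tx_margin_nom(uint16_t margindeemph)   { return margindeemph & 0xFF; }
static uint8_t deemph_gen1_nom(uint16_t margindeemph) { return margindeemph >> 8; }
```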
+
+struct atom_14nm_dpphy_dp_setting{
+  uint8_t dp_vs_pemph_level;       //enum of atom_dp_vs_preemph_def
+  uint16_t margindeemph;           //COMMON_MAR_DEEMPH_NOM[7:0]tx_margin_nom [15:8]deemph_gen1_nom
+  uint8_t deemph_6db_4;            //COMMON_SELDEEMPH60[31:24]deemph_6db_4
+  uint8_t boostadj;                //CMD_BUS_GLOBAL_FOR_TX_LANE0 [19:16]tx_boost_adj  [20]tx_boost_en  [23:22]tx_binary_ron_code_offset
+};
+
+struct atom_14nm_dpphy_dp_tuningset{
+  uint8_t phy_sel;                 // bit vector of phy, bit0= phya, bit1=phyb, ....bit5 = phyf 
+  uint8_t version;
+  uint16_t table_size;             // size of atom_14nm_dpphy_dp_tuningset
+  uint16_t reserved;
+  struct atom_14nm_dpphy_dp_setting dptuning[10];
+};
+
+struct atom_14nm_dig_transmitter_info_header_v4_0{  
+  struct  atom_common_table_header  table_header;  
+  uint16_t pcie_phy_tmds_hdmi_macro_settings_offset;     // offset of PCIEPhyTMDSHDMIMacroSettingsTbl 
+  uint16_t uniphy_vs_emph_lookup_table_offset;           // offset of UniphyVSEmphLookUpTbl
+  uint16_t uniphy_xbar_settings_table_offset;            // offset of UniphyXbarSettingsTbl
+};
+
+struct atom_14nm_combphy_tmds_vs_set
+{
+  uint8_t sym_clk;
+  uint8_t dig_mode;
+  uint8_t phy_sel;
+  uint16_t common_mar_deemph_nom__margin_deemph_val;
+  uint8_t common_seldeemph60__deemph_6db_4_val;
+  uint8_t cmd_bus_global_for_tx_lane0__boostadj_val ;
+  uint8_t common_zcalcode_ctrl__tx_driver_fifty_ohms_val;
+  uint8_t margin_deemph_lane0__deemph_sel_val;         
+};
+
+struct atom_integrated_system_info_v1_11
+{
+  struct  atom_common_table_header  table_header;
+  uint32_t  vbios_misc;                       //enum of atom_system_vbiosmisc_def
+  uint32_t  gpucapinfo;                       //enum of atom_system_gpucapinf_def   
+  uint32_t  system_config;                    
+  uint32_t  cpucapinfo;
+  uint16_t  gpuclk_ss_percentage;             //unit of 0.001%,   1000 means 1%
+  uint16_t  gpuclk_ss_type;
+  uint16_t  lvds_ss_percentage;               //unit of 0.001%,   1000 means 1%
+  uint16_t  lvds_ss_rate_10hz;
+  uint16_t  hdmi_ss_percentage;               //unit of 0.001%,   1000 means 1%
+  uint16_t  hdmi_ss_rate_10hz;
+  uint16_t  dvi_ss_percentage;                //unit of 0.001%,   1000 means 1%
+  uint16_t  dvi_ss_rate_10hz;
+  uint16_t  dpphy_override;                   // bit vector, enum of atom_sysinfo_dpphy_override_def
+  uint16_t  lvds_misc;                        // enum of atom_sys_info_lvds_misc_def
+  uint16_t  backlight_pwm_hz;                 // pwm frequency in hz
+  uint8_t   memorytype;                       // enum of atom_sys_mem_type
+  uint8_t   umachannelnumber;                 // number of memory channels
+  uint8_t   pwr_on_digon_to_de;               /* all power-sequence values below are in units of 4ms */
+  uint8_t   pwr_on_de_to_vary_bl;
+  uint8_t   pwr_down_vary_bloff_to_de;
+  uint8_t   pwr_down_de_to_digoff;
+  uint8_t   pwr_off_delay;
+  uint8_t   pwr_on_vary_bl_to_blon;
+  uint8_t   pwr_down_bloff_to_vary_bloff;
+  uint8_t   min_allowed_bl_level;
+  struct atom_external_display_connection_info extdispconninfo;
+  struct atom_14nm_dpphy_dvihdmi_tuningset dvi_tuningset;
+  struct atom_14nm_dpphy_dvihdmi_tuningset hdmi_tuningset;
+  struct atom_14nm_dpphy_dvihdmi_tuningset hdmi6g_tuningset;
+  struct atom_14nm_dpphy_dp_tuningset dp_tuningset;
+  struct atom_14nm_dpphy_dp_tuningset dp_hbr3_tuningset;
+  struct atom_camera_data  camera_info;
+  uint32_t  reserved[138];
+};
+
+
+// system_config
+enum atom_system_vbiosmisc_def{
+  INTEGRATED_SYSTEM_INFO__GET_EDID_CALLBACK_FUNC_SUPPORT = 0x01,
+};
+
+
+// gpucapinfo
+enum atom_system_gpucapinf_def{
+  SYS_INFO_GPUCAPS__ENABEL_DFS_BYPASS  = 0x10,
+};
+
+//dpphy_override
+enum atom_sysinfo_dpphy_override_def{
+  ATOM_ENABLE_DVI_TUNINGSET   = 0x01,
+  ATOM_ENABLE_HDMI_TUNINGSET  = 0x02,
+  ATOM_ENABLE_HDMI6G_TUNINGSET  = 0x04,
+  ATOM_ENABLE_DP_TUNINGSET  = 0x08,
+  ATOM_ENABLE_DP_HBR3_TUNINGSET  = 0x10,  
+};
+
+//lvds_misc
+enum atom_sys_info_lvds_misc_def
+{
+  SYS_INFO_LVDS_MISC_888_FPDI_MODE                 =0x01,
+  SYS_INFO_LVDS_MISC_888_BPC_MODE                  =0x04,
+  SYS_INFO_LVDS_MISC_OVERRIDE_EN                   =0x08,
+};
+
+
+//memorytype  DMI Type 17 offset 12h - Memory Type
+enum atom_dmi_t17_mem_type_def{
+  OtherMemType = 0x01,                                  ///< Assign 01 to Other
+  UnknownMemType,                                       ///< Assign 02 to Unknown
+  DramMemType,                                          ///< Assign 03 to DRAM
+  EdramMemType,                                         ///< Assign 04 to EDRAM
+  VramMemType,                                          ///< Assign 05 to VRAM
+  SramMemType,                                          ///< Assign 06 to SRAM
+  RamMemType,                                           ///< Assign 07 to RAM
+  RomMemType,                                           ///< Assign 08 to ROM
+  FlashMemType,                                         ///< Assign 09 to Flash
+  EepromMemType,                                        ///< Assign 10 to EEPROM
+  FepromMemType,                                        ///< Assign 11 to FEPROM
+  EpromMemType,                                         ///< Assign 12 to EPROM
+  CdramMemType,                                         ///< Assign 13 to CDRAM
+  ThreeDramMemType,                                     ///< Assign 14 to 3DRAM
+  SdramMemType,                                         ///< Assign 15 to SDRAM
+  SgramMemType,                                         ///< Assign 16 to SGRAM
+  RdramMemType,                                         ///< Assign 17 to RDRAM
+  DdrMemType,                                           ///< Assign 18 to DDR
+  Ddr2MemType,                                          ///< Assign 19 to DDR2
+  Ddr2FbdimmMemType,                                    ///< Assign 20 to DDR2 FB-DIMM
+  Ddr3MemType = 0x18,                                   ///< Assign 24 to DDR3
+  Fbd2MemType,                                          ///< Assign 25 to FBD2
+  Ddr4MemType,                                          ///< Assign 26 to DDR4
+  LpDdrMemType,                                         ///< Assign 27 to LPDDR
+  LpDdr2MemType,                                        ///< Assign 28 to LPDDR2
+  LpDdr3MemType,                                        ///< Assign 29 to LPDDR3
+  LpDdr4MemType,                                        ///< Assign 30 to LPDDR4
+};
+
+
+// This table is used starting from NL/AM; SBIOS uses it to pass the IntegratedSystemInfoTable/PowerPlayInfoTable/SystemCameraInfoTable
+struct atom_fusion_system_info_v4
+{
+  struct atom_integrated_system_info_v1_11   sysinfo;           // refer to atom_integrated_system_info_v1_11 definition
+  uint32_t   powerplayinfo[256];                                // Reserve 1024 bytes space for PowerPlayInfoTable
+}; 
+
+
+/* 
+  ***************************************************************************
+    Data Table gfx_info  structure
+  ***************************************************************************
+*/
+
+struct  atom_gfx_info_v2_2
+{
+  struct  atom_common_table_header  table_header;
+  uint8_t gfxip_min_ver;
+  uint8_t gfxip_max_ver;
+  uint8_t max_shader_engines;
+  uint8_t max_tile_pipes;
+  uint8_t max_cu_per_sh;
+  uint8_t max_sh_per_se;
+  uint8_t max_backends_per_se;
+  uint8_t max_texture_channel_caches;
+  uint32_t regaddr_cp_dma_src_addr;
+  uint32_t regaddr_cp_dma_src_addr_hi;
+  uint32_t regaddr_cp_dma_dst_addr;
+  uint32_t regaddr_cp_dma_dst_addr_hi;
+  uint32_t regaddr_cp_dma_command; 
+  uint32_t regaddr_cp_status;
+  uint32_t regaddr_rlc_gpu_clock_32;
+  uint32_t rlc_gpu_timer_refclk; 
+};
+
+
+
+/* 
+  ***************************************************************************
+    Data Table smu_info  structure
+  ***************************************************************************
+*/
+struct atom_smu_info_v3_1
+{
+  struct  atom_common_table_header  table_header;
+  uint8_t smuip_min_ver;
+  uint8_t smuip_max_ver;
+  uint8_t smu_rsd1;
+  uint8_t gpuclk_ss_mode;           // enum of atom_spread_spectrum_mode
+  uint16_t sclk_ss_percentage;
+  uint16_t sclk_ss_rate_10hz;
+  uint16_t gpuclk_ss_percentage;    // in unit of 0.001%
+  uint16_t gpuclk_ss_rate_10hz;
+  uint32_t core_refclk_10khz;
+  uint8_t  ac_dc_gpio_bit;          // GPIO bit shift in SMU_GPIOPAD_A  configured for AC/DC switching, =0xff means invalid
+  uint8_t  ac_dc_polarity;          // GPIO polarity for AC/DC switching
+  uint8_t  vr0hot_gpio_bit;         // GPIO bit shift in SMU_GPIOPAD_A  configured for VR0 HOT event, =0xff means invalid
+  uint8_t  vr0hot_polarity;         // GPIO polarity for VR0 HOT event
+  uint8_t  vr1hot_gpio_bit;         // GPIO bit shift in SMU_GPIOPAD_A configured for VR1 HOT event , =0xff means invalid
+  uint8_t  vr1hot_polarity;         // GPIO polarity for VR1 HOT event 
+  uint8_t  fw_ctf_gpio_bit;         // GPIO bit shift in SMU_GPIOPAD_A configured for CTF, =0xff means invalid
+  uint8_t  fw_ctf_polarity;         // GPIO polarity for CTF
+};
+
+
+
+/* 
+  ***************************************************************************
+    Data Table asic_profiling_info  structure
+  ***************************************************************************
+*/
+struct  atom_asic_profiling_info_v4_1
+{
+  struct  atom_common_table_header  table_header;
+  uint32_t  maxvddc;                 
+  uint32_t  minvddc;               
+  uint32_t  avfs_meannsigma_acontant0;
+  uint32_t  avfs_meannsigma_acontant1;
+  uint32_t  avfs_meannsigma_acontant2;
+  uint16_t  avfs_meannsigma_dc_tol_sigma;
+  uint16_t  avfs_meannsigma_platform_mean;
+  uint16_t  avfs_meannsigma_platform_sigma;
+  uint32_t  gb_vdroop_table_cksoff_a0;
+  uint32_t  gb_vdroop_table_cksoff_a1;
+  uint32_t  gb_vdroop_table_cksoff_a2;
+  uint32_t  gb_vdroop_table_ckson_a0;
+  uint32_t  gb_vdroop_table_ckson_a1;
+  uint32_t  gb_vdroop_table_ckson_a2;
+  uint32_t  avfsgb_fuse_table_cksoff_m1;
+  uint16_t  avfsgb_fuse_table_cksoff_m2;
+  uint32_t  avfsgb_fuse_table_cksoff_b;
+  uint32_t  avfsgb_fuse_table_ckson_m1;	
+  uint16_t  avfsgb_fuse_table_ckson_m2;
+  uint32_t  avfsgb_fuse_table_ckson_b;
+  uint16_t  max_voltage_0_25mv;
+  uint8_t   enable_gb_vdroop_table_cksoff;
+  uint8_t   enable_gb_vdroop_table_ckson;
+  uint8_t   enable_gb_fuse_table_cksoff;
+  uint8_t   enable_gb_fuse_table_ckson;
+  uint16_t  psm_age_comfactor;
+  uint8_t   enable_apply_avfs_cksoff_voltage;
+  uint8_t   reserved;
+  uint32_t  dispclk2gfxclk_a;
+  uint16_t  dispclk2gfxclk_b;
+  uint32_t  dispclk2gfxclk_c;
+  uint32_t  pixclk2gfxclk_a;
+  uint16_t  pixclk2gfxclk_b;
+  uint32_t  pixclk2gfxclk_c;
+  uint32_t  dcefclk2gfxclk_a;
+  uint16_t  dcefclk2gfxclk_b;
+  uint32_t  dcefclk2gfxclk_c;
+  uint32_t  phyclk2gfxclk_a;
+  uint16_t  phyclk2gfxclk_b;
+  uint32_t  phyclk2gfxclk_c;
+};
+
+
+/* 
+  ***************************************************************************
+    Data Table multimedia_info  structure
+  ***************************************************************************
+*/
+struct atom_multimedia_info_v2_1
+{
+  struct  atom_common_table_header  table_header;
+  uint8_t uvdip_min_ver;
+  uint8_t uvdip_max_ver;
+  uint8_t vceip_min_ver;
+  uint8_t vceip_max_ver;
+  uint16_t uvd_enc_max_input_width_pixels;
+  uint16_t uvd_enc_max_input_height_pixels;
+  uint16_t vce_enc_max_input_width_pixels;
+  uint16_t vce_enc_max_input_height_pixels; 
+  uint32_t uvd_enc_max_bandwidth;           // 16x16 pixels/sec, codec independent
+  uint32_t vce_enc_max_bandwidth;           // 16x16 pixels/sec, codec independent
+};
+
+
+/* 
+  ***************************************************************************
+    Data Table umc_info  structure
+  ***************************************************************************
+*/
+struct atom_umc_info_v3_1
+{
+  struct  atom_common_table_header  table_header;
+  uint32_t ucode_version;
+  uint32_t ucode_rom_startaddr;
+  uint32_t ucode_length;
+  uint16_t umc_reg_init_offset;
+  uint16_t customer_ucode_name_offset;
+  uint16_t mclk_ss_percentage;
+  uint16_t mclk_ss_rate_10hz;
+  uint8_t umcip_min_ver;
+  uint8_t umcip_max_ver;
+  uint8_t vram_type;              //enum of atom_dgpu_vram_type
+  uint8_t umc_config;
+  uint32_t mem_refclk_10khz;
+};
+
+
+/* 
+  ***************************************************************************
+    Data Table vram_info  structure
+  ***************************************************************************
+*/
+struct atom_vram_module_v9
+{
+  // Design Specific Values
+  uint32_t  memory_size;                   // Total memory size in MB; used when CONFIG_MEMSIZE is zero
+  uint32_t  channel_enable;                // for 32 channel ASIC usage
+  uint32_t  umcch_addrcfg;
+  uint32_t  umcch_addrsel;
+  uint32_t  umcch_colsel;
+  uint16_t  vram_module_size;              // Size of atom_vram_module_v9
+  uint8_t   ext_memory_id;                 // Current memory module ID
+  uint8_t   memory_type;                   // enum of atom_dgpu_vram_type
+  uint8_t   channel_num;                   // Number of mem. channels supported in this module
+  uint8_t   channel_width;                 // CHANNEL_16BIT/CHANNEL_32BIT/CHANNEL_64BIT
+  uint8_t   density;                       // _8Mx32, _16Mx32, _16Mx16, _32Mx16
+  uint8_t   tunningset_id;                 // MC phy registers set per. 
+  uint8_t   vender_rev_id;                 // [7:4] Revision, [3:0] Vendor code
+  uint8_t   refreshrate;                   // [1:0]=RefreshFactor (00=8ms, 01=16ms, 10=32ms,11=64ms)
+  uint16_t  vram_rsd2;                     // reserved
+  char    dram_pnstring[20];               // part number string, terminated with '\0'
+};
+
+
+struct atom_vram_info_header_v2_3
+{
+  struct   atom_common_table_header  table_header;
+  uint16_t mem_adjust_tbloffset;                         // offset of atom_umc_init_reg_block structure for memory vendor specific UMC adjust setting
+  uint16_t mem_clk_patch_tbloffset;                      // offset of atom_umc_init_reg_block structure for memory clock specific UMC setting
+  uint16_t mc_adjust_pertile_tbloffset;                  // offset of atom_umc_init_reg_block structure for Per Byte Offset Preset Settings
+  uint16_t mc_phyinit_tbloffset;                         // offset of atom_umc_init_reg_block structure for MC phy init set
+  uint16_t dram_data_remap_tbloffset;                    // reserved for now
+  uint16_t vram_rsd2[3];
+  uint8_t  vram_module_num;                              // indicate number of VRAM module
+  uint8_t  vram_rsd1[2];
+  uint8_t  mc_phy_tile_num;                              // the MCD tile number used by dram_data_remap_tbloffset and mc_adjust_pertile_tbloffset
+  struct   atom_vram_module_v9  vram_module[16];         // for allocation only; the real number of modules is vram_module_num
+};
+
+struct atom_umc_register_addr_info{
+  uint32_t  umc_register_addr:24;
+  uint32_t  umc_reg_type_ind:1;
+  uint32_t  umc_reg_rsvd:7;
+};
+
+//atom_umc_register_addr_info.
+enum atom_umc_register_addr_info_flag{
+  b3ATOM_UMC_REG_ADD_INFO_INDIRECT_ACCESS  =0x01,
+};
+
+union atom_umc_register_addr_info_access
+{
+  struct atom_umc_register_addr_info umc_reg_addr;
+  uint32_t u32umc_reg_addr;
+};
+
+struct atom_umc_reg_setting_id_config{
+  uint32_t memclockrange:24;
+  uint32_t mem_blk_id:8;
+};
+
+union atom_umc_reg_setting_id_config_access
+{
+  struct atom_umc_reg_setting_id_config umc_id_access;
+  uint32_t  u32umc_id_access;
+};
+
+struct atom_umc_reg_setting_data_block{
+  union atom_umc_reg_setting_id_config_access  block_id;
+  uint32_t u32umc_reg_data[1];                       
+};
+
+struct atom_umc_init_reg_block{
+  uint16_t umc_reg_num;
+  uint16_t reserved;    
+  union atom_umc_register_addr_info_access umc_reg_list[1];     // for allocation only; the real count comes from umc_reg_num
+  struct atom_umc_reg_setting_data_block umc_reg_setting_list[1];
+};
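Both arrays in atom_umc_init_reg_block are declared with one element purely for allocation, so the setting blocks that follow umc_reg_list have to be located by arithmetic. A sketch of that offset computation, assuming the packed 2+2-byte header shown above with no extra padding (`umc_setting_list_offset` is a hypothetical helper):

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Byte offset of the first atom_umc_reg_setting_data_block, relative
 * to the start of atom_umc_init_reg_block: a uint16_t count, a
 * uint16_t reserved word, then umc_reg_num 32-bit address entries. */
static size_t umc_setting_list_offset(uint16_t umc_reg_num)
{
  return 2 * sizeof(uint16_t) + (size_t)umc_reg_num * sizeof(uint32_t);
}
```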
+
+
+/* 
+  ***************************************************************************
+    Data Table voltageobject_info  structure
+  ***************************************************************************
+*/
+struct  atom_i2c_data_entry
+{
+  uint16_t  i2c_reg_index;               // i2c register address, can be up to 16bit
+  uint16_t  i2c_reg_data;                // i2c register data, can be up to 16bit
+};
+
+struct atom_voltage_object_header_v4{
+  uint8_t    voltage_type;                           //enum atom_voltage_type
+  uint8_t    voltage_mode;                           //enum atom_voltage_object_mode 
+  uint16_t   object_size;                            //Size of Object
+};
+
+// atom_voltage_object_header_v4.voltage_mode
+enum atom_voltage_object_mode 
+{
+   VOLTAGE_OBJ_GPIO_LUT              =  0,        //VOLTAGE and GPIO Lookup table ->atom_gpio_voltage_object_v4
+   VOLTAGE_OBJ_VR_I2C_INIT_SEQ       =  3,        //VOLTAGE REGULATOR INIT sequence through I2C -> atom_i2c_voltage_object_v4
+   VOLTAGE_OBJ_PHASE_LUT             =  4,        //Set Vregulator Phase lookup table ->atom_gpio_voltage_object_v4
+   VOLTAGE_OBJ_SVID2                 =  7,        //Indicate voltage control by SVID2 ->atom_svid2_voltage_object_v4
+   VOLTAGE_OBJ_EVV                   =  8, 
+   VOLTAGE_OBJ_MERGED_POWER          =  9,
+};
+
+struct  atom_i2c_voltage_object_v4
+{
+   struct atom_voltage_object_header_v4 header;  // voltage mode = VOLTAGE_OBJ_VR_I2C_INIT_SEQ
+   uint8_t  regulator_id;                        //Indicate Voltage Regulator Id
+   uint8_t  i2c_id;
+   uint8_t  i2c_slave_addr;
+   uint8_t  i2c_control_offset;       
+   uint8_t  i2c_flag;                            // Bit0: 0 - One byte data; 1 - Two byte data
+   uint8_t  i2c_speed;                           // =0, use default i2c speed, otherwise use it in unit of kHz. 
+   uint8_t  reserved[2];
+   struct atom_i2c_data_entry i2cdatalut[1];     // end with 0xff
+};
+
+// ATOM_I2C_VOLTAGE_OBJECT_V3.ucVoltageControlFlag
+enum atom_i2c_voltage_control_flag
+{
+   VOLTAGE_DATA_ONE_BYTE = 0,
+   VOLTAGE_DATA_TWO_BYTE = 1,
+};
+
+
+struct atom_voltage_gpio_map_lut
+{
+  uint32_t  voltage_gpio_reg_val;              // The Voltage ID which is used to program GPIO register
+  uint16_t  voltage_level_mv;                  // The corresponding Voltage Value, in mV
+};
+
+struct atom_gpio_voltage_object_v4
+{
+   struct atom_voltage_object_header_v4 header;  // voltage mode = VOLTAGE_OBJ_GPIO_LUT or VOLTAGE_OBJ_PHASE_LUT
+   uint8_t  gpio_control_id;                     // default is 0, which indicates control through CG VID mode
+   uint8_t  gpio_entry_num;                      // number of entries in the voltage/GPIO value lookup table
+   uint8_t  phase_delay_us;                      // phase delay in unit of micro second
+   uint8_t  reserved;   
+   uint32_t gpio_mask_val;                         // GPIO Mask value
+   struct atom_voltage_gpio_map_lut voltage_gpio_lut[1];
+};
+
+struct  atom_svid2_voltage_object_v4
+{
+   struct atom_voltage_object_header_v4 header;  // voltage mode = VOLTAGE_OBJ_SVID2
+   uint8_t loadline_psi1;                        // bit4:0= loadline setting ( Core Loadline trim and offset trim ), bit5=0:PSI1_L disable =1: PSI1_L enable
+   uint8_t psi0_l_vid_thresd;                    // VR PSI0_L VID threshold
+   uint8_t psi0_enable;                          // 
+   uint8_t maxvstep;
+   uint8_t telemetry_offset;
+   uint8_t telemetry_gain; 
+   uint16_t reserved1;
+};
+
+struct atom_merged_voltage_object_v4
+{
+  struct atom_voltage_object_header_v4 header;  // voltage mode = VOLTAGE_OBJ_MERGED_POWER
+  uint8_t  merged_powerrail_type;               //enum atom_voltage_type
+  uint8_t  reserved[3];
+};
+
+union atom_voltage_object_v4{
+  struct atom_gpio_voltage_object_v4 gpio_voltage_obj;
+  struct atom_i2c_voltage_object_v4 i2c_voltage_obj;
+  struct atom_svid2_voltage_object_v4 svid2_voltage_obj;
+  struct atom_merged_voltage_object_v4 merged_voltage_obj;
+};
+
+struct  atom_voltage_objects_info_v4_1
+{
+  struct atom_common_table_header table_header; 
+  union atom_voltage_object_v4 voltage_object[1];   //Info for Voltage control
+};
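voltage_object[1] is likewise a placeholder: the table holds a sequence of variable-size objects, and each header's object_size gives the stride to the next one. A hypothetical lookup over a raw table buffer, assuming a little-endian host to match the vbios image (`find_voltage_object` is illustrative, not a driver function):

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Mirrors atom_voltage_object_header_v4 (4 bytes). */
struct vobj_header {
  uint8_t  voltage_type;
  uint8_t  voltage_mode;
  uint16_t object_size;
};

/* Return the byte offset of the first object matching (type, mode),
 * or -1 if none is found before the table ends. */
static long find_voltage_object(const uint8_t *tbl, size_t tbl_size,
                                size_t first_obj_offset,
                                uint8_t type, uint8_t mode)
{
  size_t off = first_obj_offset;
  while (off + sizeof(struct vobj_header) <= tbl_size) {
    const struct vobj_header *h = (const struct vobj_header *)(tbl + off);
    if (h->object_size == 0)
      break;                           /* guard against a malformed table */
    if (h->voltage_type == type && h->voltage_mode == mode)
      return (long)off;
    off += h->object_size;             /* stride to the next object */
  }
  return -1;
}
```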
+
+
+/* 
+  ***************************************************************************
+              All Command Function structure definition 
+  *************************************************************************** 
+*/   
+
+/* 
+  ***************************************************************************
+              Structures used by asic_init
+  *************************************************************************** 
+*/   
+
+struct asic_init_engine_parameters
+{
+  uint32_t sclkfreqin10khz:24;
+  uint32_t engineflag:8;              /* enum atom_asic_init_engine_flag  */
+};
+
+struct asic_init_mem_parameters
+{
+  uint32_t mclkfreqin10khz:24;
+  uint32_t memflag:8;                 /* enum atom_asic_init_mem_flag  */
+};
+
+struct asic_init_parameters_v2_1
+{
+  struct asic_init_engine_parameters engineparam;
+  struct asic_init_mem_parameters memparam;
+};
+
+struct asic_init_ps_allocation_v2_1
+{
+  struct asic_init_parameters_v2_1 param;
+  uint32_t reserved[16];
+};
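The parameter structs above pack a 24-bit frequency (in 10 kHz units) and an 8-bit flag into one dword via bitfields. Since C bitfield allocation order is implementation-defined, an explicit equivalent makes the assumed low-24/high-8 layout visible (hypothetical helpers, layout assumed from the field order above):

```c
#include <assert.h>
#include <stdint.h>

/* Pack a 24-bit frequency in 10 kHz units with an 8-bit flag
 * (e.g. enum atom_asic_init_engine_flag) in the top byte. */
static uint32_t pack_clock_param(uint32_t freq_10khz, uint8_t flag)
{
  return (freq_10khz & 0xFFFFFF) | ((uint32_t)flag << 24);
}

static uint32_t unpack_freq_10khz(uint32_t param) { return param & 0xFFFFFF; }
static uint8_t  unpack_flag(uint32_t param)       { return param >> 24; }
```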
+
+
+enum atom_asic_init_engine_flag
+{
+  b3NORMAL_ENGINE_INIT = 0,
+  b3SRIOV_SKIP_ASIC_INIT = 0x02,
+  b3SRIOV_LOAD_UCODE = 0x40,
+};
+
+enum atom_asic_init_mem_flag
+{
+  b3NORMAL_MEM_INIT = 0,
+  b3DRAM_SELF_REFRESH_EXIT =0x20,
+};
+
+/* 
+  ***************************************************************************
+              Structures used by setengineclock
+  *************************************************************************** 
+*/   
+
+struct set_engine_clock_parameters_v2_1
+{
+  uint32_t sclkfreqin10khz:24;
+  uint32_t sclkflag:8;              /* enum atom_set_engine_mem_clock_flag,  */
+  uint32_t reserved[10];
+};
+
+struct set_engine_clock_ps_allocation_v2_1
+{
+  struct set_engine_clock_parameters_v2_1 clockinfo;
+  uint32_t reserved[10];
+};
+
+
+enum atom_set_engine_mem_clock_flag
+{
+  b3NORMAL_CHANGE_CLOCK = 0,
+  b3FIRST_TIME_CHANGE_CLOCK = 0x08,
+  b3STORE_DPM_TRAINGING = 0x40,         //applicable to memory clock changes; when set, store the specific DPM-mode training result
+};
+
+/* 
+  ***************************************************************************
+              Structures used by getengineclock
+  *************************************************************************** 
+*/   
+struct get_engine_clock_parameter
+{
+  uint32_t sclk_10khz;          // current engine speed in 10KHz unit
+  uint32_t reserved;
+};
+
+/* 
+  ***************************************************************************
+              Structures used by setmemoryclock
+  *************************************************************************** 
+*/   
+struct set_memory_clock_parameters_v2_1
+{
+  uint32_t mclkfreqin10khz:24;
+  uint32_t mclkflag:8;              /* enum atom_set_engine_mem_clock_flag,  */
+  uint32_t reserved[10];
+};
+
+struct set_memory_clock_ps_allocation_v2_1
+{
+  struct set_memory_clock_parameters_v2_1 clockinfo;
+  uint32_t reserved[10];
+};
+
+
+/* 
+  ***************************************************************************
+              Structures used by getmemoryclock
+  *************************************************************************** 
+*/   
+struct get_memory_clock_parameter
+{
+  uint32_t mclk_10khz;          // current memory clock in 10KHz unit
+  uint32_t reserved;
+};
+
+
+
+/* 
+  ***************************************************************************
+              Structures used by setvoltage
+  *************************************************************************** 
+*/   
+
+struct set_voltage_parameters_v1_4
+{
+  uint8_t  voltagetype;                /* enum atom_voltage_type */
+  uint8_t  command;                    /* Indicate action: Set voltage level, enum atom_set_voltage_command */
+  uint16_t vlevel_mv;                  /* real voltage level in unit of mv or Voltage Phase (0, 1, 2, .. ) */
+};
+
+//set_voltage_parameters_v1_4.command
+enum atom_set_voltage_command{
+  ATOM_SET_VOLTAGE  = 0,
+  ATOM_INIT_VOLTAGE_REGULATOR = 3,
+  ATOM_SET_VOLTAGE_PHASE = 4,
+  ATOM_GET_LEAKAGE_ID    = 8,
+};
+
+struct set_voltage_ps_allocation_v1_4
+{
+  struct set_voltage_parameters_v1_4 setvoltageparam;
+  uint32_t reserved[10];
+};
+
+
+/* 
+  ***************************************************************************
+              Structures used by computegpuclockparam
+  *************************************************************************** 
+*/   
+
+//ATOM_COMPUTE_CLOCK_FREQ.ulComputeClockFlag
+enum atom_gpu_clock_type 
+{
+  COMPUTE_GPUCLK_INPUT_FLAG_DEFAULT_GPUCLK =0x00,
+  COMPUTE_GPUCLK_INPUT_FLAG_GFXCLK =0x01,
+  COMPUTE_GPUCLK_INPUT_FLAG_UCLK =0x02,
+};
+
+struct compute_gpu_clock_input_parameter_v1_8
+{
+  uint32_t  gpuclock_10khz:24;         //Input= target clock, output = actual clock 
+  uint32_t  gpu_clock_type:8;          //Input indicate clock type: enum atom_gpu_clock_type
+  uint32_t  reserved[5];
+};
+
+
+struct compute_gpu_clock_output_parameter_v1_8
+{
+  uint32_t  gpuclock_10khz:24;              //Input= target clock, output = actual clock 
+  uint32_t  dfs_did:8;                      //return parameter: DFS divider which is used to program to register directly
+  uint32_t  pll_fb_mult;                    //Feedback Multiplier, bit 8:0 int, bit 15:12 post_div, bit 31:16 frac
+  uint32_t  pll_ss_fbsmult;                 // Spread FB Mult: bit 8:0 int, bit 31:16 frac
+  uint16_t  pll_ss_slew_frac;
+  uint8_t   pll_ss_enable;
+  uint8_t   reserved;
+  uint32_t  reserved1[2];
+};
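The packed words above can be unpacked with a few shifts and masks. A minimal sketch (the helper names are hypothetical, not part of this header), following the bit layout stated in the `pll_fb_mult` comment: bits 8:0 integer, bits 15:12 post divider, bits 31:16 fraction:

```c
#include <stdint.h>

/* Hypothetical unpack helpers for the pll_fb_mult word returned by
 * computegpuclockparam; the field layout comes from the comment in
 * compute_gpu_clock_output_parameter_v1_8. */
static inline uint32_t pll_fb_mult_int(uint32_t v)      { return v & 0x1ff;          } /* bits 8:0   */
static inline uint32_t pll_fb_mult_post_div(uint32_t v) { return (v >> 12) & 0xf;    } /* bits 15:12 */
static inline uint32_t pll_fb_mult_frac(uint32_t v)     { return (v >> 16) & 0xffff; } /* bits 31:16 */
```

Note that bits 11:9 are unused in this layout and are simply masked away.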
+
+
+
+/* 
+  ***************************************************************************
+              Structures used by ReadEfuseValue
+  *************************************************************************** 
+*/   
+
+struct read_efuse_input_parameters_v3_1
+{
+  uint16_t efuse_start_index;
+  uint8_t  reserved;
+  uint8_t  bitslen;
+};
+
+// ReadEfuseValue input/output parameter
+union read_efuse_value_parameters_v3_1
+{
+  struct read_efuse_input_parameters_v3_1 efuse_info;
+  uint32_t efusevalue;
+};
+
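Since the input struct and the 32-bit output overlay the same storage, a caller fills in the request fields and reads `efusevalue` back from the same union after the table executes. A self-contained sketch (the struct is redeclared here for illustration; the execute step is elided because it depends on the ATOM interpreter):

```c
#include <stdint.h>
#include <string.h>

/* Redeclared from the header above so this sketch compiles standalone. */
struct read_efuse_input_parameters_v3_1 {
  uint16_t efuse_start_index;
  uint8_t  reserved;
  uint8_t  bitslen;
};

union read_efuse_value_parameters_v3_1 {
  struct read_efuse_input_parameters_v3_1 efuse_info;
  uint32_t efusevalue;
};

/* Fill in the request; after the VBIOS executes ReadEfuseValue the
 * same 4 bytes hold the fuse value. */
static uint32_t read_efuse_sketch(uint16_t start_index, uint8_t nbits)
{
    union read_efuse_value_parameters_v3_1 p;
    memset(&p, 0, sizeof(p));
    p.efuse_info.efuse_start_index = start_index;
    p.efuse_info.bitslen = nbits;
    /* ... execute the ReadEfuseValue command table here (elided) ... */
    return p.efusevalue; /* output overlays the input */
}
```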
+
+/* 
+  ***************************************************************************
+              Structures used by getsmuclockinfo
+  *************************************************************************** 
+*/   
+struct atom_get_smu_clock_info_parameters_v3_1
+{
+  uint8_t syspll_id;          // 0= syspll0, 1=syspll1, 2=syspll2                
+  uint8_t clk_id;             // atom_smu9_syspll0_clock_id  (only valid when command == GET_SMU_CLOCK_INFO_V3_1_GET_CLOCK_FREQ )
+  uint8_t command;            // enum of atom_get_smu_clock_info_command
+  uint8_t dfsdid;             // =0: get DFS DID from register, >0, give DFS divider, (only valid when command == GET_SMU_CLOCK_INFO_V3_1_GET_CLOCK_FREQ )
+};
+
+enum atom_get_smu_clock_info_command 
+{
+  GET_SMU_CLOCK_INFO_V3_1_GET_CLOCK_FREQ       = 0,
+  GET_SMU_CLOCK_INFO_V3_1_GET_PLLVCO_FREQ      = 1,
+  GET_SMU_CLOCK_INFO_V3_1_GET_PLLREFCLK_FREQ   = 2,
+};
+
+enum atom_smu9_syspll0_clock_id
+{
+  SMU9_SYSPLL0_SMNCLK_ID   = 0,       //  SMNCLK
+  SMU9_SYSPLL0_SOCCLK_ID   = 1,       //	SOCCLK (FCLK)
+  SMU9_SYSPLL0_MP0CLK_ID   = 2,       //	MP0CLK
+  SMU9_SYSPLL0_MP1CLK_ID   = 3,       //	MP1CLK
+  SMU9_SYSPLL0_LCLK_ID     = 4,       //	LCLK
+  SMU9_SYSPLL0_DCLK_ID     = 5,       //	DCLK
+  SMU9_SYSPLL0_VCLK_ID     = 6,       //	VCLK
+  SMU9_SYSPLL0_ECLK_ID     = 7,       //	ECLK
+  SMU9_SYSPLL0_DCEFCLK_ID  = 8,       //	DCEFCLK
+  SMU9_SYSPLL0_DPREFCLK_ID = 10,      //	DPREFCLK
+  SMU9_SYSPLL0_DISPCLK_ID  = 11,      //	DISPCLK
+};
+
+struct  atom_get_smu_clock_info_output_parameters_v3_1
+{
+  union {
+    uint32_t smu_clock_freq_hz;
+    uint32_t syspllvcofreq_10khz;
+    uint32_t sysspllrefclk_10khz;
+  }atom_smu_outputclkfreq;
+};
+
+
+
+/* 
+  ***************************************************************************
+              Structures used by dynamicmemorysettings
+  *************************************************************************** 
+*/   
+
+enum atom_dynamic_memory_setting_command 
+{
+  COMPUTE_MEMORY_PLL_PARAM = 1,
+  COMPUTE_ENGINE_PLL_PARAM = 2,
+  ADJUST_MC_SETTING_PARAM = 3,
+};
+
+/* when command = COMPUTE_MEMORY_PLL_PARAM or ADJUST_MC_SETTING_PARAM */
+struct dynamic_mclk_settings_parameters_v2_1
+{
+  uint32_t  mclk_10khz:24;         //Input= target mclk
+  uint32_t  command:8;             //command enum of atom_dynamic_memory_setting_command
+  uint32_t  reserved;
+};
+
+/* when command = COMPUTE_ENGINE_PLL_PARAM */
+struct dynamic_sclk_settings_parameters_v2_1
+{
+  uint32_t  sclk_10khz:24;         //Input= target sclk
+  uint32_t  command:8;             //command enum of atom_dynamic_memory_setting_command
+  uint32_t  mclk_10khz;
+  uint32_t  reserved;
+};
+
+union dynamic_memory_settings_parameters_v2_1
+{
+  struct dynamic_mclk_settings_parameters_v2_1 mclk_setting;
+  struct dynamic_sclk_settings_parameters_v2_1 sclk_setting;
+};
+
+
+
+/* 
+  ***************************************************************************
+              Structures used by memorytraining
+  *************************************************************************** 
+*/   
+
+enum atom_umc6_0_ucode_function_call_enum_id
+{
+  UMC60_UCODE_FUNC_ID_REINIT                 = 0,
+  UMC60_UCODE_FUNC_ID_ENTER_SELFREFRESH      = 1,
+  UMC60_UCODE_FUNC_ID_EXIT_SELFREFRESH       = 2,
+};
+
+
+struct memory_training_parameters_v2_1
+{
+  uint8_t ucode_func_id;
+  uint8_t ucode_reserved[3];
+  uint32_t reserved[5];
+};
+
+
+/* 
+  ***************************************************************************
+              Structures used by setpixelclock
+  *************************************************************************** 
+*/   
+
+struct set_pixel_clock_parameter_v1_7
+{
+    uint32_t pixclk_100hz;               // target pixel clock to drive the CRTC timing, in unit of 100Hz
+
+    uint8_t  pll_id;                     // ATOM_PHY_PLL0/ATOM_PHY_PLL1/ATOM_PPLL0
+    uint8_t  encoderobjid;               // ASIC encoder id defined in objectId.h, 
+                                         // indicate which graphic encoder will be used. 
+    uint8_t  encoder_mode;               // Encoder mode: enum atom_encode_mode_def
+    uint8_t  miscinfo;                   // enum atom_set_pixel_clock_v1_7_misc_info
+    uint8_t  crtc_id;                    // enum of atom_crtc_def
+    uint8_t  deep_color_ratio;           // HDMI panel bit depth: enum atom_set_pixel_clock_v1_7_deepcolor_ratio
+    uint8_t  reserved1[2];    
+    uint32_t reserved2;
+};
+
+//ucMiscInfo
+enum atom_set_pixel_clock_v1_7_misc_info
+{
+  PIXEL_CLOCK_V7_MISC_FORCE_PROG_PPLL         = 0x01,
+  PIXEL_CLOCK_V7_MISC_PROG_PHYPLL             = 0x02,
+  PIXEL_CLOCK_V7_MISC_YUV420_MODE             = 0x04,
+  PIXEL_CLOCK_V7_MISC_DVI_DUALLINK_EN         = 0x08,
+  PIXEL_CLOCK_V7_MISC_REF_DIV_SRC             = 0x30,
+  PIXEL_CLOCK_V7_MISC_REF_DIV_SRC_XTALIN      = 0x00,
+  PIXEL_CLOCK_V7_MISC_REF_DIV_SRC_PCIE        = 0x10,
+  PIXEL_CLOCK_V7_MISC_REF_DIV_SRC_GENLK       = 0x20,
+  PIXEL_CLOCK_V7_MISC_REF_DIV_SRC_REFPAD      = 0x30, 
+  PIXEL_CLOCK_V7_MISC_ATOMIC_UPDATE           = 0x40,
+  PIXEL_CLOCK_V7_MISC_FORCE_SS_DIS            = 0x80,
+};
+
+/* deep_color_ratio */
+enum atom_set_pixel_clock_v1_7_deepcolor_ratio
+{
+  PIXEL_CLOCK_V7_DEEPCOLOR_RATIO_DIS          = 0x00,      //00 - DCCG_DEEP_COLOR_DTO_DISABLE: Disable Deep Color DTO 
+  PIXEL_CLOCK_V7_DEEPCOLOR_RATIO_5_4          = 0x01,      //01 - DCCG_DEEP_COLOR_DTO_5_4_RATIO: Set Deep Color DTO to 5:4 
+  PIXEL_CLOCK_V7_DEEPCOLOR_RATIO_3_2          = 0x02,      //02 - DCCG_DEEP_COLOR_DTO_3_2_RATIO: Set Deep Color DTO to 3:2 
+  PIXEL_CLOCK_V7_DEEPCOLOR_RATIO_2_1          = 0x03,      //03 - DCCG_DEEP_COLOR_DTO_2_1_RATIO: Set Deep Color DTO to 2:1 
+};
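For deep color modes the DTO scales the pixel clock by the selected ratio. The arithmetic, in the 100Hz units used by `set_pixel_clock_parameter_v1_7`, can be sketched as follows (the helper is illustrative, not a VBIOS call):

```c
#include <stdint.h>

/* Illustrative only: scale a pixel clock (100Hz units, as in
 * set_pixel_clock_parameter_v1_7) by the deep color DTO ratio. */
static uint32_t apply_deepcolor_ratio(uint32_t pixclk_100hz, uint8_t ratio)
{
    switch (ratio) {
    case 0x01: return pixclk_100hz * 5 / 4; /* 30bpp, 5:4 */
    case 0x02: return pixclk_100hz * 3 / 2; /* 36bpp, 3:2 */
    case 0x03: return pixclk_100hz * 2;     /* 48bpp, 2:1 */
    default:   return pixclk_100hz;         /* DTO disabled */
    }
}
```

For example, a 148.5MHz mode (1485000 in 100Hz units) at 30bpp scales to 185.625MHz.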
+
+/* 
+  ***************************************************************************
+              Structures used by setdceclock
+  *************************************************************************** 
+*/   
+
+// SetDCEClock input parameter for DCE11.2( ELM and BF ) and above 
+struct set_dce_clock_parameters_v2_1
+{
+  uint32_t dceclk_10khz;                               // target DCE frequency in unit of 10KHz, return real DISPCLK/DPREFCLK frequency.
+  uint8_t  dceclktype;                                 // =0: DISPCLK  =1: DPREFCLK  =2: PIXCLK
+  uint8_t  dceclksrc;                                  // ATOM_PLL0 or ATOM_GCK_DFS or ATOM_FCH_CLK or ATOM_COMBOPHY_PLLx
+  uint8_t  dceclkflag;                                 // Bit [1:0] = PPLL ref clock source ( when ucDCEClkSrc= ATOM_PPLL0 )
+  uint8_t  crtc_id;                                    // ucDisp Pipe Id, ATOM_CRTC0/1/2/..., use only when ucDCEClkType = PIXCLK
+};
+
+//ucDCEClkType
+enum atom_set_dce_clock_clock_type
+{
+  DCE_CLOCK_TYPE_DISPCLK                      = 0,
+  DCE_CLOCK_TYPE_DPREFCLK                     = 1,
+  DCE_CLOCK_TYPE_PIXELCLK                     = 2,        // used by VBIOS internally, called by SetPixelClock 
+};
+
+//ucDCEClkFlag when ucDCEClkType == DPREFCLK 
+enum atom_set_dce_clock_dprefclk_flag
+{
+  DCE_CLOCK_FLAG_PLL_REFCLK_SRC_MASK          = 0x03,
+  DCE_CLOCK_FLAG_PLL_REFCLK_SRC_GENERICA      = 0x00,
+  DCE_CLOCK_FLAG_PLL_REFCLK_SRC_GENLK         = 0x01,
+  DCE_CLOCK_FLAG_PLL_REFCLK_SRC_PCIE          = 0x02,
+  DCE_CLOCK_FLAG_PLL_REFCLK_SRC_XTALIN        = 0x03,
+};
+
+//ucDCEClkFlag when ucDCEClkType == PIXCLK 
+enum atom_set_dce_clock_pixclk_flag
+{
+  DCE_CLOCK_FLAG_PCLK_DEEPCOLOR_RATIO_MASK    = 0x03,
+  DCE_CLOCK_FLAG_PCLK_DEEPCOLOR_RATIO_DIS     = 0x00,      //00 - DCCG_DEEP_COLOR_DTO_DISABLE: Disable Deep Color DTO 
+  DCE_CLOCK_FLAG_PCLK_DEEPCOLOR_RATIO_5_4     = 0x01,      //01 - DCCG_DEEP_COLOR_DTO_5_4_RATIO: Set Deep Color DTO to 5:4 
+  DCE_CLOCK_FLAG_PCLK_DEEPCOLOR_RATIO_3_2     = 0x02,      //02 - DCCG_DEEP_COLOR_DTO_3_2_RATIO: Set Deep Color DTO to 3:2 
+  DCE_CLOCK_FLAG_PCLK_DEEPCOLOR_RATIO_2_1     = 0x03,      //03 - DCCG_DEEP_COLOR_DTO_2_1_RATIO: Set Deep Color DTO to 2:1 
+  DCE_CLOCK_FLAG_PIXCLK_YUV420_MODE           = 0x04,
+};
+
+struct set_dce_clock_ps_allocation_v2_1
+{
+  struct set_dce_clock_parameters_v2_1 param;
+  uint32_t ulReserved[2];
+};
+
+
+/****************************************************************************/   
+// Structures used by BlankCRTC
+/****************************************************************************/   
+struct blank_crtc_parameters
+{
+  uint8_t  crtc_id;                   // enum atom_crtc_def
+  uint8_t  blanking;                  // enum atom_blank_crtc_command
+  uint16_t reserved;
+  uint32_t reserved1;
+};
+
+enum atom_blank_crtc_command
+{
+  ATOM_BLANKING         = 1,
+  ATOM_BLANKING_OFF     = 0,
+};
+
+/****************************************************************************/   
+// Structures used by enablecrtc
+/****************************************************************************/   
+struct enable_crtc_parameters
+{
+  uint8_t crtc_id;                    // enum atom_crtc_def
+  uint8_t enable;                     // ATOM_ENABLE or ATOM_DISABLE 
+  uint8_t padding[2];
+};
+
+
+/****************************************************************************/   
+// Structure used by EnableDispPowerGating
+/****************************************************************************/   
+struct enable_disp_power_gating_parameters_v2_1
+{
+  uint8_t disp_pipe_id;                // ATOM_CRTC1, ATOM_CRTC2, ...
+  uint8_t enable;                     // ATOM_ENABLE or ATOM_DISABLE
+  uint8_t padding[2];
+};
+
+struct enable_disp_power_gating_ps_allocation 
+{
+  struct enable_disp_power_gating_parameters_v2_1 param;
+  uint32_t ulReserved[4];
+};
+
+/****************************************************************************/   
+// Structure used in setcrtc_usingdtdtiming
+/****************************************************************************/   
+struct set_crtc_using_dtd_timing_parameters
+{
+  uint16_t  h_size;
+  uint16_t  h_blanking_time;
+  uint16_t  v_size;
+  uint16_t  v_blanking_time;
+  uint16_t  h_syncoffset;
+  uint16_t  h_syncwidth;
+  uint16_t  v_syncoffset;
+  uint16_t  v_syncwidth;
+  uint16_t  modemiscinfo;  
+  uint8_t   h_border;
+  uint8_t   v_border;
+  uint8_t   crtc_id;                   // enum atom_crtc_def
+  uint8_t   encoder_mode;              // enum atom_encode_mode_def
+  uint8_t   padding[2];
+};
+
+
+/****************************************************************************/   
+// Structures used by processi2cchanneltransaction
+/****************************************************************************/   
+struct process_i2c_channel_transaction_parameters
+{
+  uint8_t i2cspeed_khz;
+  union {
+    uint8_t regindex;
+    uint8_t status;                  /* enum atom_process_i2c_flag */
+  } regind_status;
+  uint16_t  i2c_data_out;
+  uint8_t   flag;                    /* enum atom_process_i2c_status */
+  uint8_t   trans_bytes;
+  uint8_t   slave_addr;
+  uint8_t   i2c_id;
+};
+
+//ucFlag
+enum atom_process_i2c_flag
+{
+  HW_I2C_WRITE          = 1,
+  HW_I2C_READ           = 0,
+  I2C_2BYTE_ADDR        = 0x02,
+  HW_I2C_SMBUS_BYTE_WR  = 0x04,
+};
+
+//status
+enum atom_process_i2c_status
+{
+  HW_ASSISTED_I2C_STATUS_FAILURE     =2,
+  HW_ASSISTED_I2C_STATUS_SUCCESS     =1,
+};
+
+
+/****************************************************************************/   
+// Structures used by processauxchanneltransaction
+/****************************************************************************/   
+
+struct process_aux_channel_transaction_parameters_v1_2
+{
+  uint16_t aux_request;
+  uint16_t dataout;
+  uint8_t  channelid;
+  union {
+    uint8_t   reply_status;
+    uint8_t   aux_delay;
+  } aux_status_delay;
+  uint8_t   dataout_len;
+  uint8_t   hpd_id;                                       //=0: HPD1, =1: HPD2, =2: HPD3, =3: HPD4, =4: HPD5, =5: HPD6
+};
+
+
+/****************************************************************************/   
+// Structures used by selectcrtc_source
+/****************************************************************************/   
+
+struct select_crtc_source_parameters_v2_3
+{
+  uint8_t crtc_id;                        // enum atom_crtc_def
+  uint8_t encoder_id;                     // enum atom_dig_def
+  uint8_t encode_mode;                    // enum atom_encode_mode_def
+  uint8_t dst_bpc;                        // enum atom_panel_bit_per_color
+};
+
+
+/****************************************************************************/   
+// Structures used by digxencodercontrol
+/****************************************************************************/   
+
+// ucAction:
+enum atom_dig_encoder_control_action
+{
+  ATOM_ENCODER_CMD_DISABLE_DIG                  = 0,
+  ATOM_ENCODER_CMD_ENABLE_DIG                   = 1,
+  ATOM_ENCODER_CMD_DP_LINK_TRAINING_START       = 0x08,
+  ATOM_ENCODER_CMD_DP_LINK_TRAINING_PATTERN1    = 0x09,
+  ATOM_ENCODER_CMD_DP_LINK_TRAINING_PATTERN2    = 0x0a,
+  ATOM_ENCODER_CMD_DP_LINK_TRAINING_PATTERN3    = 0x13,
+  ATOM_ENCODER_CMD_DP_LINK_TRAINING_COMPLETE    = 0x0b,
+  ATOM_ENCODER_CMD_DP_VIDEO_OFF                 = 0x0c,
+  ATOM_ENCODER_CMD_DP_VIDEO_ON                  = 0x0d,
+  ATOM_ENCODER_CMD_SETUP_PANEL_MODE             = 0x10,
+  ATOM_ENCODER_CMD_DP_LINK_TRAINING_PATTERN4    = 0x14,
+  ATOM_ENCODER_CMD_STREAM_SETUP                 = 0x0F, 
+  ATOM_ENCODER_CMD_LINK_SETUP                   = 0x11, 
+  ATOM_ENCODER_CMD_ENCODER_BLANK                = 0x12,
+};
+
+//define ucPanelMode
+enum atom_dig_encoder_control_panelmode
+{
+  DP_PANEL_MODE_DISABLE                        = 0x00,
+  DP_PANEL_MODE_ENABLE_eDP_MODE                = 0x01,
+  DP_PANEL_MODE_ENABLE_LVLINK_MODE             = 0x11,
+};
+
+//ucDigId
+enum atom_dig_encoder_control_v5_digid
+{
+  ATOM_ENCODER_CONFIG_V5_DIG0_ENCODER           = 0x00,
+  ATOM_ENCODER_CONFIG_V5_DIG1_ENCODER           = 0x01,
+  ATOM_ENCODER_CONFIG_V5_DIG2_ENCODER           = 0x02,
+  ATOM_ENCODER_CONFIG_V5_DIG3_ENCODER           = 0x03,
+  ATOM_ENCODER_CONFIG_V5_DIG4_ENCODER           = 0x04,
+  ATOM_ENCODER_CONFIG_V5_DIG5_ENCODER           = 0x05,
+  ATOM_ENCODER_CONFIG_V5_DIG6_ENCODER           = 0x06,
+  ATOM_ENCODER_CONFIG_V5_DIG7_ENCODER           = 0x07,
+};
+
+struct dig_encoder_stream_setup_parameters_v1_5
+{
+  uint8_t digid;            // 0~6 map to DIG0~DIG6 enum atom_dig_encoder_control_v5_digid
+  uint8_t action;           // =  ATOM_ENCODER_CMD_STREAM_SETUP
+  uint8_t digmode;          // ATOM_ENCODER_MODE_DP/ATOM_ENCODER_MODE_DVI/ATOM_ENCODER_MODE_HDMI
+  uint8_t lanenum;          // Lane number     
+  uint32_t pclk_10khz;      // Pixel Clock in 10Khz
+  uint8_t bitpercolor;
+  uint8_t dplinkrate_270mhz;// = DP link rate/270MHz, =6: 1.62GHz, =10: 2.7GHz, =20: 5.4GHz, =30: 8.1GHz etc
+  uint8_t reserved[2];
+};
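Because `dplinkrate_270mhz` encodes the DP link rate in 270MHz steps, converting it to a frequency is a single multiply. A sketch (hypothetical helper name):

```c
#include <stdint.h>

/* dplinkrate_270mhz is the link rate divided by 270MHz; return kHz.
 * 270MHz == 270000 kHz per step. */
static uint32_t dp_link_rate_khz(uint8_t rate_270mhz)
{
    return (uint32_t)rate_270mhz * 270000u;
}
```

This reproduces the values in the comment: 6 -> 1.62GHz, 10 -> 2.7GHz, 20 -> 5.4GHz, 30 -> 8.1GHz.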
+
+struct dig_encoder_link_setup_parameters_v1_5
+{
+  uint8_t digid;           // 0~6 map to DIG0~DIG6 enum atom_dig_encoder_control_v5_digid
+  uint8_t action;          // =  ATOM_ENCODER_CMD_LINK_SETUP
+  uint8_t digmode;         // ATOM_ENCODER_MODE_DP/ATOM_ENCODER_MODE_DVI/ATOM_ENCODER_MODE_HDMI
+  uint8_t lanenum;         // Lane number     
+  uint8_t symclk_10khz;    // Symbol Clock in 10Khz
+  uint8_t hpd_sel;
+  uint8_t digfe_sel;       // DIG stream( front-end ) selection, bit0 means DIG0 FE is enabled
+  uint8_t reserved[2];
+};
+
+struct dp_panel_mode_set_parameters_v1_5
+{
+  uint8_t digid;              // 0~6 map to DIG0~DIG6 enum atom_dig_encoder_control_v5_digid
+  uint8_t action;             // = ATOM_ENCODER_CMD_SETUP_PANEL_MODE
+  uint8_t panelmode;      // enum atom_dig_encoder_control_panelmode
+  uint8_t reserved1;    
+  uint32_t reserved2[2];
+};
+
+struct dig_encoder_generic_cmd_parameters_v1_5 
+{
+  uint8_t digid;           // 0~6 map to DIG0~DIG6 enum atom_dig_encoder_control_v5_digid
+  uint8_t action;          // = rest of generic encoder command which does not carry any parameters
+  uint8_t reserved1[2];    
+  uint32_t reserved2[2];
+};
+
+union dig_encoder_control_parameters_v1_5
+{
+  struct dig_encoder_generic_cmd_parameters_v1_5  cmd_param;
+  struct dig_encoder_stream_setup_parameters_v1_5 stream_param;
+  struct dig_encoder_link_setup_parameters_v1_5   link_param;
+  struct dp_panel_mode_set_parameters_v1_5 dppanel_param;
+};
+
+/* 
+  ***************************************************************************
+              Structures used by dig1transmittercontrol
+  *************************************************************************** 
+*/   
+struct dig_transmitter_control_parameters_v1_6
+{
+  uint8_t phyid;           // 0=UNIPHYA, 1=UNIPHYB, 2=UNIPHYC, 3=UNIPHYD, 4= UNIPHYE 5=UNIPHYF
+  uint8_t action;          // define as ATOM_TRANSMITER_ACTION_xxx
+  union {
+    uint8_t digmode;        // enum atom_encode_mode_def
+    uint8_t dplaneset;      // DP voltage swing and pre-emphasis value defined in DPCD DP_LANE_SET, "DP_LANE_SET__xDB_y_zV"
+  } mode_laneset;
+  uint8_t  lanenum;        // Lane number 1, 2, 4, 8    
+  uint32_t symclk_10khz;   // Symbol Clock in 10Khz
+  uint8_t  hpdsel;         // =1: HPD1, =2: HPD2, .... =6: HPD6, =0: HPD is not assigned
+  uint8_t  digfe_sel;      // DIG stream( front-end ) selection, bit0 means DIG0 FE is enabled
+  uint8_t  connobj_id;     // Connector Object Id defined in ObjectId.h
+  uint8_t  reserved;
+  uint32_t reserved1;
+};
+
+struct dig_transmitter_control_ps_allocation_v1_6
+{
+  struct dig_transmitter_control_parameters_v1_6 param;
+  uint32_t reserved[4];
+};
+
+//ucAction
+enum atom_dig_transmitter_control_action
+{
+  ATOM_TRANSMITTER_ACTION_DISABLE                 = 0,
+  ATOM_TRANSMITTER_ACTION_ENABLE                  = 1,
+  ATOM_TRANSMITTER_ACTION_LCD_BLOFF               = 2,
+  ATOM_TRANSMITTER_ACTION_LCD_BLON                = 3,
+  ATOM_TRANSMITTER_ACTION_BL_BRIGHTNESS_CONTROL   = 4,
+  ATOM_TRANSMITTER_ACTION_LCD_SELFTEST_START      = 5,
+  ATOM_TRANSMITTER_ACTION_LCD_SELFTEST_STOP       = 6,
+  ATOM_TRANSMITTER_ACTION_INIT                    = 7,
+  ATOM_TRANSMITTER_ACTION_DISABLE_OUTPUT          = 8,
+  ATOM_TRANSMITTER_ACTION_ENABLE_OUTPUT           = 9,
+  ATOM_TRANSMITTER_ACTION_SETUP                   = 10,
+  ATOM_TRANSMITTER_ACTION_SETUP_VSEMPH            = 11,
+  ATOM_TRANSMITTER_ACTION_POWER_ON                = 12,
+  ATOM_TRANSMITTER_ACTION_POWER_OFF               = 13,
+};
+
+// digfe_sel
+enum atom_dig_transmitter_control_digfe_sel
+{
+  ATOM_TRANMSITTER_V6__DIGA_SEL                   = 0x01,
+  ATOM_TRANMSITTER_V6__DIGB_SEL                   = 0x02,
+  ATOM_TRANMSITTER_V6__DIGC_SEL                   = 0x04,
+  ATOM_TRANMSITTER_V6__DIGD_SEL                   = 0x08,
+  ATOM_TRANMSITTER_V6__DIGE_SEL                   = 0x10,
+  ATOM_TRANMSITTER_V6__DIGF_SEL                   = 0x20,
+  ATOM_TRANMSITTER_V6__DIGG_SEL                   = 0x40,
+};
+
+
+//ucHPDSel
+enum atom_dig_transmitter_control_hpd_sel
+{
+  ATOM_TRANSMITTER_V6_NO_HPD_SEL                  = 0x00,
+  ATOM_TRANSMITTER_V6_HPD1_SEL                    = 0x01,
+  ATOM_TRANSMITTER_V6_HPD2_SEL                    = 0x02,
+  ATOM_TRANSMITTER_V6_HPD3_SEL                    = 0x03,
+  ATOM_TRANSMITTER_V6_HPD4_SEL                    = 0x04,
+  ATOM_TRANSMITTER_V6_HPD5_SEL                    = 0x05,
+  ATOM_TRANSMITTER_V6_HPD6_SEL                    = 0x06,
+};
+
+// ucDPLaneSet
+enum atom_dig_transmitter_control_dplaneset
+{
+  DP_LANE_SET__0DB_0_4V                           = 0x00,
+  DP_LANE_SET__0DB_0_6V                           = 0x01,
+  DP_LANE_SET__0DB_0_8V                           = 0x02,
+  DP_LANE_SET__0DB_1_2V                           = 0x03,
+  DP_LANE_SET__3_5DB_0_4V                         = 0x08, 
+  DP_LANE_SET__3_5DB_0_6V                         = 0x09,
+  DP_LANE_SET__3_5DB_0_8V                         = 0x0a,
+  DP_LANE_SET__6DB_0_4V                           = 0x10,
+  DP_LANE_SET__6DB_0_6V                           = 0x11,
+  DP_LANE_SET__9_5DB_0_4V                         = 0x18, 
+};
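These `DP_LANE_SET__*` values follow the DPCD `TRAINING_LANEx_SET` register layout: voltage swing level in bits 1:0 and pre-emphasis level in bits 4:3. Decoding them is two masks (helper names are hypothetical; the bit layout is an assumption inferred from the value pattern above):

```c
#include <stdint.h>

/* Assumed DPCD TRAINING_LANEx_SET style layout:
 * bits 1:0 = voltage swing level (0.4/0.6/0.8/1.2V),
 * bits 4:3 = pre-emphasis level (0/3.5/6/9.5 dB). */
static uint8_t dp_lane_set_vswing(uint8_t v)  { return v & 0x3;        }
static uint8_t dp_lane_set_preemph(uint8_t v) { return (v >> 3) & 0x3; }
```

For example, `DP_LANE_SET__3_5DB_0_8V` (0x0a) decodes to swing level 2 and pre-emphasis level 1.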
+
+
+
+/****************************************************************************/ 
+// Structures used by ExternalEncoderControl V2.4
+/****************************************************************************/   
+
+struct external_encoder_control_parameters_v2_4
+{
+  uint16_t pixelclock_10khz;  // pixel clock in 10Khz, valid when ucAction=SETUP/ENABLE_OUTPUT 
+  uint8_t  config;            // indicate which encoder, and DP link rate when ucAction = SETUP/ENABLE_OUTPUT  
+  uint8_t  action;            // enum external_encoder_control_action_def
+  uint8_t  encodermode;       // encoder mode, only used when ucAction = SETUP/ENABLE_OUTPUT
+  uint8_t  lanenum;           // lane number, only used when ucAction = SETUP/ENABLE_OUTPUT  
+  uint8_t  bitpercolor;       // output bit per color, only valid when ucAction = SETUP/ENABLE_OUTPUT and ucEncodeMode= DP
+  uint8_t  hpd_id;        
+};
+
+
+// ucAction
+enum external_encoder_control_action_def
+{
+  EXTERNAL_ENCODER_ACTION_V3_DISABLE_OUTPUT           = 0x00,
+  EXTERNAL_ENCODER_ACTION_V3_ENABLE_OUTPUT            = 0x01,
+  EXTERNAL_ENCODER_ACTION_V3_ENCODER_INIT             = 0x07,
+  EXTERNAL_ENCODER_ACTION_V3_ENCODER_SETUP            = 0x0f,
+  EXTERNAL_ENCODER_ACTION_V3_ENCODER_BLANKING_OFF     = 0x10,
+  EXTERNAL_ENCODER_ACTION_V3_ENCODER_BLANKING         = 0x11,
+  EXTERNAL_ENCODER_ACTION_V3_DACLOAD_DETECTION        = 0x12,
+  EXTERNAL_ENCODER_ACTION_V3_DDC_SETUP                = 0x14,
+};
+
+// ucConfig
+enum external_encoder_control_v2_4_config_def
+{
+  EXTERNAL_ENCODER_CONFIG_V3_DPLINKRATE_MASK          = 0x03,
+  EXTERNAL_ENCODER_CONFIG_V3_DPLINKRATE_1_62GHZ       = 0x00,
+  EXTERNAL_ENCODER_CONFIG_V3_DPLINKRATE_2_70GHZ       = 0x01,
+  EXTERNAL_ENCODER_CONFIG_V3_DPLINKRATE_5_40GHZ       = 0x02,
+  EXTERNAL_ENCODER_CONFIG_V3_DPLINKRATE_3_24GHZ       = 0x03,  
+  EXTERNAL_ENCODER_CONFIG_V3_ENCODER_SEL_MAKS         = 0x70,
+  EXTERNAL_ENCODER_CONFIG_V3_ENCODER1                 = 0x00,
+  EXTERNAL_ENCODER_CONFIG_V3_ENCODER2                 = 0x10,
+  EXTERNAL_ENCODER_CONFIG_V3_ENCODER3                 = 0x20,
+};
+
+struct external_encoder_control_ps_allocation_v2_4
+{
+  struct external_encoder_control_parameters_v2_4 sExtEncoder;
+  uint32_t reserved[2];
+};
+
+
+/* 
+  ***************************************************************************
+                           AMD ACPI Table
+  
+  *************************************************************************** 
+*/   
+
+struct amd_acpi_description_header{
+  uint32_t signature;
+  uint32_t tableLength;      //Length
+  uint8_t  revision;
+  uint8_t  checksum;
+  uint8_t  oemId[6];
+  uint8_t  oemTableId[8];    //UINT64  OemTableId;
+  uint32_t oemRevision;
+  uint32_t creatorId;
+  uint32_t creatorRevision;
+};
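Like other ACPI system description tables, a table carrying this header validates when all of its bytes, including the checksum field itself, sum to zero modulo 256. A minimal checker (sketch):

```c
#include <stdint.h>

/* ACPI-style checksum: the table is valid when every byte of the
 * table (header + payload, checksum byte included) sums to 0 mod 256. */
static uint8_t acpi_table_sum(const uint8_t *tbl, uint32_t len)
{
    uint8_t sum = 0;
    while (len--)
        sum += *tbl++;
    return sum; /* 0 => table is valid */
}
```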
+
+struct uefi_acpi_vfct{
+  struct   amd_acpi_description_header sheader;
+  uint8_t  tableUUID[16];    //0x24
+  uint32_t vbiosimageoffset; //0x34. Offset to the first GOP_VBIOS_CONTENT block from the beginning of the structure.
+  uint32_t lib1Imageoffset;  //0x38. Offset to the first GOP_LIB1_CONTENT block from the beginning of the structure.
+  uint32_t reserved[4];      //0x3C
+};
+
+struct vfct_image_header{
+  uint32_t  pcibus;          //0x4C
+  uint32_t  pcidevice;       //0x50
+  uint32_t  pcifunction;     //0x54
+  uint16_t  vendorid;        //0x58
+  uint16_t  deviceid;        //0x5A
+  uint16_t  ssvid;           //0x5C
+  uint16_t  ssid;            //0x5E
+  uint32_t  revision;        //0x60
+  uint32_t  imagelength;     //0x64
+};
+
+
+struct gop_vbios_content {
+  struct vfct_image_header vbiosheader;
+  uint8_t                  vbioscontent[1];
+};
+
+struct gop_lib1_content {
+  struct vfct_image_header lib1header;
+  uint8_t                  lib1content[1];
+};
+
+
+
+/* 
+  ***************************************************************************
+                   Scratch Register definitions
+  Each number below indicates which scratch register to request. Active and
+  Connect all share the same definitions as display_device_tag defines
+  *************************************************************************** 
+*/   
+
+enum scratch_register_def{
+  ATOM_DEVICE_CONNECT_INFO_DEF      = 0,
+  ATOM_BL_BRI_LEVEL_INFO_DEF        = 2,
+  ATOM_ACTIVE_INFO_DEF              = 3,
+  ATOM_LCD_INFO_DEF                 = 4,
+  ATOM_DEVICE_REQ_INFO_DEF          = 5,
+  ATOM_ACC_CHANGE_INFO_DEF          = 6,
+  ATOM_PRE_OS_MODE_INFO_DEF         = 7,
+  ATOM_PRE_OS_ASSERTION_DEF         = 8,    //For GOP to record a 32-bit assertion code; this is enabled by default in production GOP drivers.
+  ATOM_INTERNAL_TIMER_INFO_DEF      = 10,
+};
+
+enum scratch_device_connect_info_bit_def{
+  ATOM_DISPLAY_LCD1_CONNECT           =0x0002,
+  ATOM_DISPLAY_DFP1_CONNECT           =0x0008,
+  ATOM_DISPLAY_DFP2_CONNECT           =0x0080,
+  ATOM_DISPLAY_DFP3_CONNECT           =0x0200,
+  ATOM_DISPLAY_DFP4_CONNECT           =0x0400,
+  ATOM_DISPLAY_DFP5_CONNECT           =0x0800,
+  ATOM_DISPLAY_DFP6_CONNECT           =0x0040,
+  ATOM_DISPLAY_DFPx_CONNECT           =0x0ec8,
+  ATOM_CONNECT_INFO_DEVICE_MASK       =0x0fff,
+};
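`ATOM_DISPLAY_DFPx_CONNECT` (0x0ec8) is simply the OR of the six individual DFP connect bits, so "is any DFP connected" is one mask test. A sketch (the helper is illustrative, not a driver function):

```c
#include <stdint.h>

#define ATOM_DISPLAY_DFPx_CONNECT     0x0ec8
#define ATOM_CONNECT_INFO_DEVICE_MASK 0x0fff

/* Illustrative: test the connect-info scratch bits for any DFP. */
static int any_dfp_connected(uint32_t scratch)
{
    return (scratch & ATOM_DISPLAY_DFPx_CONNECT) != 0;
}
```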
+
+enum scratch_bl_bri_level_info_bit_def{
+  ATOM_CURRENT_BL_LEVEL_SHIFT         =0x8,
+#ifndef _H2INC
+  ATOM_CURRENT_BL_LEVEL_MASK          =0x0000ff00,
+  ATOM_DEVICE_DPMS_STATE              =0x00010000,
+#endif
+};
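Per the mask and shift above, the current backlight level occupies bits 15:8 of the scratch register; pulling it out looks like this (sketch, hypothetical helper name):

```c
#include <stdint.h>

#define ATOM_CURRENT_BL_LEVEL_SHIFT 0x8
#define ATOM_CURRENT_BL_LEVEL_MASK  0x0000ff00

/* Illustrative: extract the 0-255 backlight level from the scratch reg. */
static uint8_t current_bl_level(uint32_t scratch)
{
    return (scratch & ATOM_CURRENT_BL_LEVEL_MASK) >> ATOM_CURRENT_BL_LEVEL_SHIFT;
}
```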
+
+enum scratch_active_info_bits_def{
+  ATOM_DISPLAY_LCD1_ACTIVE            =0x0002,
+  ATOM_DISPLAY_DFP1_ACTIVE            =0x0008,
+  ATOM_DISPLAY_DFP2_ACTIVE            =0x0080,
+  ATOM_DISPLAY_DFP3_ACTIVE            =0x0200,
+  ATOM_DISPLAY_DFP4_ACTIVE            =0x0400,
+  ATOM_DISPLAY_DFP5_ACTIVE            =0x0800,
+  ATOM_DISPLAY_DFP6_ACTIVE            =0x0040,
+  ATOM_ACTIVE_INFO_DEVICE_MASK        =0x0fff,
+};
+
+enum scratch_device_req_info_bits_def{
+  ATOM_DISPLAY_LCD1_REQ               =0x0002,
+  ATOM_DISPLAY_DFP1_REQ               =0x0008,
+  ATOM_DISPLAY_DFP2_REQ               =0x0080,
+  ATOM_DISPLAY_DFP3_REQ               =0x0200,
+  ATOM_DISPLAY_DFP4_REQ               =0x0400,
+  ATOM_DISPLAY_DFP5_REQ               =0x0800,
+  ATOM_DISPLAY_DFP6_REQ               =0x0040,
+  ATOM_REQ_INFO_DEVICE_MASK           =0x0fff,
+};
+
+enum scratch_acc_change_info_bitshift_def{
+  ATOM_ACC_CHANGE_ACC_MODE_SHIFT    =4,
+  ATOM_ACC_CHANGE_LID_STATUS_SHIFT  =6,
+};
+
+enum scratch_acc_change_info_bits_def{
+  ATOM_ACC_CHANGE_ACC_MODE          =0x00000010,
+  ATOM_ACC_CHANGE_LID_STATUS        =0x00000040,
+};
+
+enum scratch_pre_os_mode_info_bits_def{
+  ATOM_PRE_OS_MODE_MASK             =0x00000003,
+  ATOM_PRE_OS_MODE_VGA              =0x00000000,
+  ATOM_PRE_OS_MODE_VESA             =0x00000001,
+  ATOM_PRE_OS_MODE_GOP              =0x00000002,
+  ATOM_PRE_OS_MODE_PIXEL_DEPTH      =0x0000000C,
+  ATOM_PRE_OS_MODE_PIXEL_FORMAT_MASK=0x000000F0,
+  ATOM_PRE_OS_MODE_8BIT_PAL_EN      =0x00000100,
+  ATOM_ASIC_INIT_COMPLETE           =0x00000200,
+#ifndef _H2INC
+  ATOM_PRE_OS_MODE_NUMBER_MASK      =0xFFFF0000,
+#endif
+};
+
+
+
+/* 
+  ***************************************************************************
+                       ATOM firmware ID header file
+              !! Please keep it at end of the atomfirmware.h !!
+  *************************************************************************** 
+*/   
+#include "atomfirmwareid.h"
+#pragma pack()
+
+#endif
+
diff --git a/drivers/gpu/drm/amd/include/atomfirmwareid.h b/drivers/gpu/drm/amd/include/atomfirmwareid.h
new file mode 100644
index 0000000..e6256ef
--- /dev/null
+++ b/drivers/gpu/drm/amd/include/atomfirmwareid.h
@@ -0,0 +1,86 @@
+/****************************************************************************\
+* 
+*  File Name      atomfirmwareid.h
+*
+*  Description    ATOM BIOS command/data table ID definition header file
+*
+*  Copyright 2016 Advanced Micro Devices, Inc.
+*
+* Permission is hereby granted, free of charge, to any person obtaining a copy of this software 
+* and associated documentation files (the "Software"), to deal in the Software without restriction,
+* including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense,
+* and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so,
+* subject to the following conditions:
+*
+* The above copyright notice and this permission notice shall be included in all copies or substantial
+* portions of the Software.
+*
+* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+* THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+* OTHER DEALINGS IN THE SOFTWARE.
+*
+\****************************************************************************/
+
+#ifndef _ATOMFIRMWAREID_H_
+#define _ATOMFIRMWAREID_H_
+
+enum atom_master_data_table_id
+{
+    VBIOS_DATA_TBL_ID__UTILITY_PIPELINE,
+    VBIOS_DATA_TBL_ID__MULTIMEDIA_INF,
+    VBIOS_DATA_TBL_ID__FIRMWARE_INF,
+    VBIOS_DATA_TBL_ID__LCD_INF,
+    VBIOS_DATA_TBL_ID__SMU_INF,
+    VBIOS_DATA_TBL_ID__VRAM_USAGE_BY_FIRMWARE,
+    VBIOS_DATA_TBL_ID__GPIO_PIN_LUT,
+    VBIOS_DATA_TBL_ID__GFX_INF,
+    VBIOS_DATA_TBL_ID__POWER_PLAY_INF,
+    VBIOS_DATA_TBL_ID__DISPLAY_OBJECT_INF,
+    VBIOS_DATA_TBL_ID__INDIRECT_IO_ACCESS,
+    VBIOS_DATA_TBL_ID__UMC_INF,
+    VBIOS_DATA_TBL_ID__DCE_INF,
+    VBIOS_DATA_TBL_ID__VRAM_INF,
+    VBIOS_DATA_TBL_ID__INTEGRATED_SYS_INF,
+    VBIOS_DATA_TBL_ID__ASIC_PROFILING_INF,
+    VBIOS_DATA_TBL_ID__VOLTAGE_OBJ_INF,
+
+    VBIOS_DATA_TBL_ID__UNDEFINED,
+};
+
+enum atom_master_command_table_id
+{
+    VBIOS_CMD_TBL_ID__ASIC_INIT,
+    VBIOS_CMD_TBL_ID__DIGX_ENCODER_CONTROL,
+    VBIOS_CMD_TBL_ID__SET_ENGINE_CLOCK,
+    VBIOS_CMD_TBL_ID__SET_MEMORY_CLOCK,
+    VBIOS_CMD_TBL_ID__SET_PIXEL_CLOCK,
+    VBIOS_CMD_TBL_ID__ENABLE_DISP_POWER_GATING,
+    VBIOS_CMD_TBL_ID__BLANK_CRTC,
+    VBIOS_CMD_TBL_ID__ENABLE_CRTC,
+    VBIOS_CMD_TBL_ID__GET_SMU_CLOCK_INFO,
+    VBIOS_CMD_TBL_ID__SELECT_CRTC_SOURCE,
+    VBIOS_CMD_TBL_ID__SET_DCE_CLOCK,
+    VBIOS_CMD_TBL_ID__GET_MEMORY_CLOCK,
+    VBIOS_CMD_TBL_ID__GET_ENGINE_CLOCK,
+    VBIOS_CMD_TBL_ID__SET_CRTC_USING_DTD_TIMING,
+    VBIOS_CMD_TBL_ID__EXTENAL_ENCODER_CONTROL,
+    VBIOS_CMD_TBL_ID__PROCESS_I2C_CHANNEL_TRANSACTION,
+    VBIOS_CMD_TBL_ID__COMPUTE_GPU_CLOCK_PARAM,
+    VBIOS_CMD_TBL_ID__DYNAMIC_MEMORY_SETTINGS,
+    VBIOS_CMD_TBL_ID__MEMORY_TRAINING,
+    VBIOS_CMD_TBL_ID__SET_VOLTAGE,
+    VBIOS_CMD_TBL_ID__DIG1_TRANSMITTER_CONTROL,
+    VBIOS_CMD_TBL_ID__PROCESS_AUX_CHANNEL_TRANSACTION,
+    VBIOS_CMD_TBL_ID__GET_VOLTAGE_INF,
+
+    VBIOS_CMD_TBL_ID__UNDEFINED,
+};
+
+
+
+#endif  /* _ATOMFIRMWAREID_H_  */
+/* ### EOF ### */
diff --git a/drivers/gpu/drm/amd/include/displayobject.h b/drivers/gpu/drm/amd/include/displayobject.h
new file mode 100644
index 0000000..67e23ff
--- /dev/null
+++ b/drivers/gpu/drm/amd/include/displayobject.h
@@ -0,0 +1,249 @@
+/****************************************************************************\
+* 
+*  Module Name    displayobjectsoc15.h
+*  Project        
+*  Device         
+*
+*  Description    Contains the common definitions for display objects for SoC15 products.
+*
+*  Copyright 2014 Advanced Micro Devices, Inc.
+*
+* Permission is hereby granted, free of charge, to any person obtaining a copy of this software 
+* and associated documentation files (the "Software"), to deal in the Software without restriction,
+* including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense,
+* and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so,
+* subject to the following conditions:
+*
+* The above copyright notice and this permission notice shall be included in all copies or substantial
+* portions of the Software.
+*
+* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+* THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+* OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+* ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+* OTHER DEALINGS IN THE SOFTWARE.
+*
+\****************************************************************************/
+#ifndef _DISPLAY_OBJECT_SOC15_H_
+#define _DISPLAY_OBJECT_SOC15_H_
+
+#if defined(_X86_)
+#pragma pack(1)
+#endif
+
+
+/****************************************************
+* Display Object Type Definition 
+*****************************************************/
+enum display_object_type{
+DISPLAY_OBJECT_TYPE_NONE						=0x00,
+DISPLAY_OBJECT_TYPE_GPU							=0x01,
+DISPLAY_OBJECT_TYPE_ENCODER						=0x02,
+DISPLAY_OBJECT_TYPE_CONNECTOR					=0x03
+};
+
+/****************************************************
+* Encoder Object Type Definition 
+*****************************************************/
+enum encoder_object_type{
+ENCODER_OBJECT_ID_NONE							 =0x00,
+ENCODER_OBJECT_ID_INTERNAL_UNIPHY				 =0x01,
+ENCODER_OBJECT_ID_INTERNAL_UNIPHY1				 =0x02,
+ENCODER_OBJECT_ID_INTERNAL_UNIPHY2				 =0x03,
+};
+
+
+/****************************************************
+* Connector Object ID Definition 
+*****************************************************/
+
+enum connector_object_type{
+CONNECTOR_OBJECT_ID_NONE						  =0x00, 
+CONNECTOR_OBJECT_ID_SINGLE_LINK_DVI_D			  =0x01,
+CONNECTOR_OBJECT_ID_DUAL_LINK_DVI_D				  =0x02,
+CONNECTOR_OBJECT_ID_HDMI_TYPE_A					  =0x03,
+CONNECTOR_OBJECT_ID_LVDS						  =0x04,
+CONNECTOR_OBJECT_ID_DISPLAYPORT					  =0x05,
+CONNECTOR_OBJECT_ID_eDP							  =0x06,
+CONNECTOR_OBJECT_ID_OPM							  =0x07
+};
+
+
+/****************************************************
+* Protection Object ID Definition 
+*****************************************************/
+//No need
+
+/****************************************************
+*  Object ENUM ID Definition 
+*****************************************************/
+
+enum object_enum_id{
+OBJECT_ENUM_ID1									  =0x01,
+OBJECT_ENUM_ID2									  =0x02,
+OBJECT_ENUM_ID3									  =0x03,
+OBJECT_ENUM_ID4									  =0x04,
+OBJECT_ENUM_ID5									  =0x05,
+OBJECT_ENUM_ID6									  =0x06
+};
+
+/****************************************************
+*Object ID Bit definition 
+*****************************************************/
+enum object_id_bit{
+OBJECT_ID_MASK									  =0x00FF,
+ENUM_ID_MASK									  =0x0F00,
+OBJECT_TYPE_MASK								  =0xF000,
+OBJECT_ID_SHIFT									  =0x00,
+ENUM_ID_SHIFT									  =0x08,
+OBJECT_TYPE_SHIFT								  =0x0C
+};
+
+
+/****************************************************
+* GPU Object definition - Shared with BIOS
+*****************************************************/
+enum gpu_objet_def{
+GPU_ENUM_ID1                            =( DISPLAY_OBJECT_TYPE_GPU << OBJECT_TYPE_SHIFT | OBJECT_ENUM_ID1 << ENUM_ID_SHIFT)
+};
+
+/****************************************************
+* Encoder Object definition - Shared with BIOS
+*****************************************************/
+
+enum encoder_objet_def{
+ENCODER_INTERNAL_UNIPHY_ENUM_ID1         =( DISPLAY_OBJECT_TYPE_ENCODER << OBJECT_TYPE_SHIFT |\
+                                                 OBJECT_ENUM_ID1 << ENUM_ID_SHIFT |\
+                                                 ENCODER_OBJECT_ID_INTERNAL_UNIPHY << OBJECT_ID_SHIFT),
+
+ENCODER_INTERNAL_UNIPHY_ENUM_ID2         =( DISPLAY_OBJECT_TYPE_ENCODER << OBJECT_TYPE_SHIFT |\
+                                                 OBJECT_ENUM_ID2 << ENUM_ID_SHIFT |\
+                                                 ENCODER_OBJECT_ID_INTERNAL_UNIPHY << OBJECT_ID_SHIFT),
+
+ENCODER_INTERNAL_UNIPHY1_ENUM_ID1        =( DISPLAY_OBJECT_TYPE_ENCODER << OBJECT_TYPE_SHIFT |\
+                                                 OBJECT_ENUM_ID1 << ENUM_ID_SHIFT |\
+                                                 ENCODER_OBJECT_ID_INTERNAL_UNIPHY1 << OBJECT_ID_SHIFT),
+
+ENCODER_INTERNAL_UNIPHY1_ENUM_ID2        =( DISPLAY_OBJECT_TYPE_ENCODER << OBJECT_TYPE_SHIFT |\
+                                                 OBJECT_ENUM_ID2 << ENUM_ID_SHIFT |\
+                                                 ENCODER_OBJECT_ID_INTERNAL_UNIPHY1 << OBJECT_ID_SHIFT),
+
+ENCODER_INTERNAL_UNIPHY2_ENUM_ID1        =( DISPLAY_OBJECT_TYPE_ENCODER << OBJECT_TYPE_SHIFT |\
+                                                 OBJECT_ENUM_ID1 << ENUM_ID_SHIFT |\
+                                                 ENCODER_OBJECT_ID_INTERNAL_UNIPHY2 << OBJECT_ID_SHIFT),
+
+ENCODER_INTERNAL_UNIPHY2_ENUM_ID2        =( DISPLAY_OBJECT_TYPE_ENCODER << OBJECT_TYPE_SHIFT |\
+                                                 OBJECT_ENUM_ID2 << ENUM_ID_SHIFT |\
+                                                 ENCODER_OBJECT_ID_INTERNAL_UNIPHY2 << OBJECT_ID_SHIFT)
+};
+
+
+/****************************************************
+* Connector Object definition - Shared with BIOS
+*****************************************************/
+
+
+enum connector_objet_def{
+CONNECTOR_LVDS_ENUM_ID1							=( DISPLAY_OBJECT_TYPE_CONNECTOR << OBJECT_TYPE_SHIFT |\
+                                                 OBJECT_ENUM_ID1 << ENUM_ID_SHIFT |\
+                                                 CONNECTOR_OBJECT_ID_LVDS << OBJECT_ID_SHIFT),
+
+
+CONNECTOR_eDP_ENUM_ID1							=( DISPLAY_OBJECT_TYPE_CONNECTOR << OBJECT_TYPE_SHIFT |\
+                                                 OBJECT_ENUM_ID1 << ENUM_ID_SHIFT |\
+                                                 CONNECTOR_OBJECT_ID_eDP << OBJECT_ID_SHIFT),
+
+CONNECTOR_SINGLE_LINK_DVI_D_ENUM_ID1			=( DISPLAY_OBJECT_TYPE_CONNECTOR << OBJECT_TYPE_SHIFT |\
+                                                 OBJECT_ENUM_ID1 << ENUM_ID_SHIFT |\
+                                                 CONNECTOR_OBJECT_ID_SINGLE_LINK_DVI_D << OBJECT_ID_SHIFT),
+
+CONNECTOR_SINGLE_LINK_DVI_D_ENUM_ID2			=( DISPLAY_OBJECT_TYPE_CONNECTOR << OBJECT_TYPE_SHIFT |\
+                                                 OBJECT_ENUM_ID2 << ENUM_ID_SHIFT |\
+                                                 CONNECTOR_OBJECT_ID_SINGLE_LINK_DVI_D << OBJECT_ID_SHIFT),
+
+
+CONNECTOR_DUAL_LINK_DVI_D_ENUM_ID1				=( DISPLAY_OBJECT_TYPE_CONNECTOR << OBJECT_TYPE_SHIFT |\
+                                                 OBJECT_ENUM_ID1 << ENUM_ID_SHIFT |\
+                                                 CONNECTOR_OBJECT_ID_DUAL_LINK_DVI_D << OBJECT_ID_SHIFT),
+
+CONNECTOR_DUAL_LINK_DVI_D_ENUM_ID2				=( DISPLAY_OBJECT_TYPE_CONNECTOR << OBJECT_TYPE_SHIFT |\
+                                                 OBJECT_ENUM_ID2 << ENUM_ID_SHIFT |\
+                                                 CONNECTOR_OBJECT_ID_DUAL_LINK_DVI_D << OBJECT_ID_SHIFT),
+
+CONNECTOR_HDMI_TYPE_A_ENUM_ID1					=( DISPLAY_OBJECT_TYPE_CONNECTOR << OBJECT_TYPE_SHIFT |\
+                                                 OBJECT_ENUM_ID1 << ENUM_ID_SHIFT |\
+                                                 CONNECTOR_OBJECT_ID_HDMI_TYPE_A << OBJECT_ID_SHIFT),
+
+CONNECTOR_HDMI_TYPE_A_ENUM_ID2					=( DISPLAY_OBJECT_TYPE_CONNECTOR << OBJECT_TYPE_SHIFT |\
+                                                 OBJECT_ENUM_ID2 << ENUM_ID_SHIFT |\
+                                                 CONNECTOR_OBJECT_ID_HDMI_TYPE_A << OBJECT_ID_SHIFT),
+
+CONNECTOR_DISPLAYPORT_ENUM_ID1					=( DISPLAY_OBJECT_TYPE_CONNECTOR << OBJECT_TYPE_SHIFT |\
+                                                 OBJECT_ENUM_ID1 << ENUM_ID_SHIFT |\
+                                                 CONNECTOR_OBJECT_ID_DISPLAYPORT << OBJECT_ID_SHIFT),
+
+CONNECTOR_DISPLAYPORT_ENUM_ID2					=( DISPLAY_OBJECT_TYPE_CONNECTOR << OBJECT_TYPE_SHIFT |\
+                                                 OBJECT_ENUM_ID2 << ENUM_ID_SHIFT |\
+                                                 CONNECTOR_OBJECT_ID_DISPLAYPORT << OBJECT_ID_SHIFT),
+
+CONNECTOR_DISPLAYPORT_ENUM_ID3					=( DISPLAY_OBJECT_TYPE_CONNECTOR << OBJECT_TYPE_SHIFT |\
+                                                 OBJECT_ENUM_ID3 << ENUM_ID_SHIFT |\
+                                                 CONNECTOR_OBJECT_ID_DISPLAYPORT << OBJECT_ID_SHIFT),
+
+CONNECTOR_DISPLAYPORT_ENUM_ID4					=( DISPLAY_OBJECT_TYPE_CONNECTOR << OBJECT_TYPE_SHIFT |\
+                                                 OBJECT_ENUM_ID4 << ENUM_ID_SHIFT |\
+                                                 CONNECTOR_OBJECT_ID_DISPLAYPORT << OBJECT_ID_SHIFT),
+
+CONNECTOR_OPM_ENUM_ID1							=( DISPLAY_OBJECT_TYPE_CONNECTOR << OBJECT_TYPE_SHIFT |\
+                                                 OBJECT_ENUM_ID1 << ENUM_ID_SHIFT |\
+                                                 CONNECTOR_OBJECT_ID_OPM << OBJECT_ID_SHIFT),          //Mapping to MXM_DP_A
+
+CONNECTOR_OPM_ENUM_ID2							=( DISPLAY_OBJECT_TYPE_CONNECTOR << OBJECT_TYPE_SHIFT |\
+                                                 OBJECT_ENUM_ID2 << ENUM_ID_SHIFT |\
+                                                 CONNECTOR_OBJECT_ID_OPM << OBJECT_ID_SHIFT),          //Mapping to MXM_DP_B
+
+CONNECTOR_OPM_ENUM_ID3							=( DISPLAY_OBJECT_TYPE_CONNECTOR << OBJECT_TYPE_SHIFT |\
+                                                 OBJECT_ENUM_ID3 << ENUM_ID_SHIFT |\
+                                                 CONNECTOR_OBJECT_ID_OPM << OBJECT_ID_SHIFT),          //Mapping to MXM_DP_C
+
+CONNECTOR_OPM_ENUM_ID4							=( DISPLAY_OBJECT_TYPE_CONNECTOR << OBJECT_TYPE_SHIFT |\
+                                                 OBJECT_ENUM_ID4 << ENUM_ID_SHIFT |\
+                                                 CONNECTOR_OBJECT_ID_OPM << OBJECT_ID_SHIFT),          //Mapping to MXM_DP_D
+
+CONNECTOR_OPM_ENUM_ID5							=( DISPLAY_OBJECT_TYPE_CONNECTOR << OBJECT_TYPE_SHIFT |\
+                                                 OBJECT_ENUM_ID5 << ENUM_ID_SHIFT |\
+                                                 CONNECTOR_OBJECT_ID_OPM << OBJECT_ID_SHIFT),          //Mapping to MXM_LVDS_TXxx
+
+
+CONNECTOR_OPM_ENUM_ID6							=( DISPLAY_OBJECT_TYPE_CONNECTOR << OBJECT_TYPE_SHIFT |\
+                                                 OBJECT_ENUM_ID6 << ENUM_ID_SHIFT |\
+                                                 CONNECTOR_OBJECT_ID_OPM << OBJECT_ID_SHIFT)         //Mapping to MXM_LVDS_TXxx
+};
+
+/****************************************************
+* Router Object ID definition - Shared with BIOS
+*****************************************************/
+//Not needed; if we ever need it in the future, we can define a record in atomfirmwareSoC15.h associated with the object that has this router.
+
+
+/****************************************************
+* PROTECTION Object ID definition - Shared with BIOS
+*****************************************************/
+//Not needed; all display paths are capable of protection now.
+
+/****************************************************
+* Generic Object ID definition - Shared with BIOS
+*****************************************************/
+//Not needed; if we ever need it in the future (e.g. for GLsync), we can define a record in atomfirmwareSoC15.h associated with an object.
+
+
+#if defined(_X86_)
+#pragma pack()
+#endif
+
+#endif
+
+
+
-- 
2.5.5

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 101+ messages in thread

* [PATCH 002/100] amdgpu: detect if we are using atomfirm or atombios for vbios (v2)
       [not found] ` <1490041835-11255-1-git-send-email-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
  2017-03-20 20:29   ` [PATCH 001/100] drm/amdgpu: add the new atomfirmware interface header Alex Deucher
@ 2017-03-20 20:29   ` Alex Deucher
  2017-03-20 20:29   ` [PATCH 003/100] drm/amdgpu: move atom scratch setup into amdgpu_atombios.c Alex Deucher
                     ` (83 subsequent siblings)
  85 siblings, 0 replies; 101+ messages in thread
From: Alex Deucher @ 2017-03-20 20:29 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Alex Deucher

Supposedly the atomfirmware ROM header version is 3.3, while the atombios version is 1.1.

v2: rebased on newer kernel

Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu.h      |  1 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_bios.c | 30 +++++++++++++++++++++++-------
 2 files changed, 24 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
index 3b81ded..15e985e 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
@@ -1314,6 +1314,7 @@ struct amdgpu_device {
 	bool				have_disp_power_ref;
 
 	/* BIOS */
+	bool				is_atom_fw;
 	uint8_t				*bios;
 	uint32_t			bios_size;
 	struct amdgpu_bo		*stollen_vga_memory;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_bios.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_bios.c
index 46ce883..f8d6f7b 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_bios.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_bios.c
@@ -86,6 +86,18 @@ static bool check_atom_bios(uint8_t *bios, size_t size)
 	return false;
 }
 
+static bool is_atom_fw(uint8_t *bios)
+{
+	uint16_t bios_header_start = bios[0x48] | (bios[0x49] << 8);
+	uint8_t frev = bios[bios_header_start + 2];
+	uint8_t crev = bios[bios_header_start + 3];
+
+	if ((frev < 3) ||
+	    ((frev == 3) && (crev < 3)))
+		return false;
+
+	return true;
+}
 
 /* If you boot an IGP board with a discrete card as the primary,
  * the IGP rom is not accessible via the rom bar as the IGP rom is
@@ -418,26 +430,30 @@ static inline bool amdgpu_acpi_vfct_bios(struct amdgpu_device *adev)
 bool amdgpu_get_bios(struct amdgpu_device *adev)
 {
 	if (amdgpu_atrm_get_bios(adev))
-		return true;
+		goto success;
 
 	if (amdgpu_acpi_vfct_bios(adev))
-		return true;
+		goto success;
 
 	if (igp_read_bios_from_vram(adev))
-		return true;
+		goto success;
 
 	if (amdgpu_read_bios(adev))
-		return true;
+		goto success;
 
 	if (amdgpu_read_bios_from_rom(adev))
-		return true;
+		goto success;
 
 	if (amdgpu_read_disabled_bios(adev))
-		return true;
+		goto success;
 
 	if (amdgpu_read_platform_bios(adev))
-		return true;
+		goto success;
 
 	DRM_ERROR("Unable to locate a BIOS ROM\n");
 	return false;
+
+success:
+	adev->is_atom_fw = is_atom_fw(adev->bios);
+	return true;
 }
-- 
2.5.5

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 101+ messages in thread

* [PATCH 003/100] drm/amdgpu: move atom scratch setup into amdgpu_atombios.c
       [not found] ` <1490041835-11255-1-git-send-email-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
  2017-03-20 20:29   ` [PATCH 001/100] drm/amdgpu: add the new atomfirmware interface header Alex Deucher
  2017-03-20 20:29   ` [PATCH 002/100] amdgpu: detect if we are using atomfirm or atombios for vbios (v2) Alex Deucher
@ 2017-03-20 20:29   ` Alex Deucher
  2017-03-20 20:29   ` [PATCH 004/100] drm/amdgpu: add basic support for atomfirmware.h (v3) Alex Deucher
                     ` (82 subsequent siblings)
  85 siblings, 0 replies; 101+ messages in thread
From: Alex Deucher @ 2017-03-20 20:29 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Alex Deucher

There will be a slightly different version for atomfirmware.

Reviewed-by: Ken Wang <Qingqing.Wang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.c | 28 ++++++++++++++++++++++++++++
 drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.h |  3 +++
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c   |  2 +-
 drivers/gpu/drm/amd/amdgpu/atom.c            | 26 --------------------------
 drivers/gpu/drm/amd/amdgpu/atom.h            |  1 -
 5 files changed, 32 insertions(+), 28 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.c
index 56a86dd..f52b1bf 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.c
@@ -1748,3 +1748,31 @@ void amdgpu_atombios_copy_swap(u8 *dst, u8 *src, u8 num_bytes, bool to_le)
 	memcpy(dst, src, num_bytes);
 #endif
 }
+
+int amdgpu_atombios_allocate_fb_scratch(struct amdgpu_device *adev)
+{
+	struct atom_context *ctx = adev->mode_info.atom_context;
+	int index = GetIndexIntoMasterTable(DATA, VRAM_UsageByFirmware);
+	uint16_t data_offset;
+	int usage_bytes = 0;
+	struct _ATOM_VRAM_USAGE_BY_FIRMWARE *firmware_usage;
+
+	if (amdgpu_atom_parse_data_header(ctx, index, NULL, NULL, NULL, &data_offset)) {
+		firmware_usage = (struct _ATOM_VRAM_USAGE_BY_FIRMWARE *)(ctx->bios + data_offset);
+
+		DRM_DEBUG("atom firmware requested %08x %dkb\n",
+			  le32_to_cpu(firmware_usage->asFirmwareVramReserveInfo[0].ulStartAddrUsedByFirmware),
+			  le16_to_cpu(firmware_usage->asFirmwareVramReserveInfo[0].usFirmwareUseInKb));
+
+		usage_bytes = le16_to_cpu(firmware_usage->asFirmwareVramReserveInfo[0].usFirmwareUseInKb) * 1024;
+	}
+	ctx->scratch_size_bytes = 0;
+	if (usage_bytes == 0)
+		usage_bytes = 20 * 1024;
+	/* allocate some scratch memory */
+	ctx->scratch = kzalloc(usage_bytes, GFP_KERNEL);
+	if (!ctx->scratch)
+		return -ENOMEM;
+	ctx->scratch_size_bytes = usage_bytes;
+	return 0;
+}
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.h
index 70e9ace..4e0f488 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.h
@@ -215,4 +215,7 @@ int amdgpu_atombios_get_clock_dividers(struct amdgpu_device *adev,
 int amdgpu_atombios_get_svi2_info(struct amdgpu_device *adev,
 			      u8 voltage_type,
 			      u8 *svd_gpio_id, u8 *svc_gpio_id);
+
+int amdgpu_atombios_allocate_fb_scratch(struct amdgpu_device *adev);
+
 #endif
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index 118f4e6..f87c1cb 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -993,7 +993,7 @@ static int amdgpu_atombios_init(struct amdgpu_device *adev)
 
 	mutex_init(&adev->mode_info.atom_context->mutex);
 	amdgpu_atombios_scratch_regs_init(adev);
-	amdgpu_atom_allocate_fb_scratch(adev->mode_info.atom_context);
+	amdgpu_atombios_allocate_fb_scratch(adev);
 	return 0;
 }
 
diff --git a/drivers/gpu/drm/amd/amdgpu/atom.c b/drivers/gpu/drm/amd/amdgpu/atom.c
index 81c60a2..d69aa2e 100644
--- a/drivers/gpu/drm/amd/amdgpu/atom.c
+++ b/drivers/gpu/drm/amd/amdgpu/atom.c
@@ -1417,29 +1417,3 @@ bool amdgpu_atom_parse_cmd_header(struct atom_context *ctx, int index, uint8_t *
 	return true;
 }
 
-int amdgpu_atom_allocate_fb_scratch(struct atom_context *ctx)
-{
-	int index = GetIndexIntoMasterTable(DATA, VRAM_UsageByFirmware);
-	uint16_t data_offset;
-	int usage_bytes = 0;
-	struct _ATOM_VRAM_USAGE_BY_FIRMWARE *firmware_usage;
-
-	if (amdgpu_atom_parse_data_header(ctx, index, NULL, NULL, NULL, &data_offset)) {
-		firmware_usage = (struct _ATOM_VRAM_USAGE_BY_FIRMWARE *)(ctx->bios + data_offset);
-
-		DRM_DEBUG("atom firmware requested %08x %dkb\n",
-			  le32_to_cpu(firmware_usage->asFirmwareVramReserveInfo[0].ulStartAddrUsedByFirmware),
-			  le16_to_cpu(firmware_usage->asFirmwareVramReserveInfo[0].usFirmwareUseInKb));
-
-		usage_bytes = le16_to_cpu(firmware_usage->asFirmwareVramReserveInfo[0].usFirmwareUseInKb) * 1024;
-	}
-	ctx->scratch_size_bytes = 0;
-	if (usage_bytes == 0)
-		usage_bytes = 20 * 1024;
-	/* allocate some scratch memory */
-	ctx->scratch = kzalloc(usage_bytes, GFP_KERNEL);
-	if (!ctx->scratch)
-		return -ENOMEM;
-	ctx->scratch_size_bytes = usage_bytes;
-	return 0;
-}
diff --git a/drivers/gpu/drm/amd/amdgpu/atom.h b/drivers/gpu/drm/amd/amdgpu/atom.h
index baa2438..ddd8045 100644
--- a/drivers/gpu/drm/amd/amdgpu/atom.h
+++ b/drivers/gpu/drm/amd/amdgpu/atom.h
@@ -152,7 +152,6 @@ bool amdgpu_atom_parse_data_header(struct atom_context *ctx, int index, uint16_t
 			    uint8_t *frev, uint8_t *crev, uint16_t *data_start);
 bool amdgpu_atom_parse_cmd_header(struct atom_context *ctx, int index,
 			   uint8_t *frev, uint8_t *crev);
-int amdgpu_atom_allocate_fb_scratch(struct atom_context *ctx);
 #include "atom-types.h"
 #include "atombios.h"
 #include "ObjectID.h"
-- 
2.5.5

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 101+ messages in thread

* [PATCH 004/100] drm/amdgpu: add basic support for atomfirmware.h (v3)
       [not found] ` <1490041835-11255-1-git-send-email-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (2 preceding siblings ...)
  2017-03-20 20:29   ` [PATCH 003/100] drm/amdgpu: move atom scratch setup into amdgpu_atombios.c Alex Deucher
@ 2017-03-20 20:29   ` Alex Deucher
  2017-03-20 20:29   ` [PATCH 005/100] drm/amdgpu: add soc15ip.h Alex Deucher
                     ` (81 subsequent siblings)
  85 siblings, 0 replies; 101+ messages in thread
From: Alex Deucher @ 2017-03-20 20:29 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Alex Deucher

This adds basic support for asics that use atomfirmware.h
to define their vbios tables.

v2: rebase
v3: squash in num scratch reg fix

Reviewed-by: Ken Wang <Qingqing.Wang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/Makefile              |   2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu.h              |   3 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_atomfirmware.c | 112 +++++++++++++++++++++++
 drivers/gpu/drm/amd/amdgpu/amdgpu_atomfirmware.h |  33 +++++++
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c       |  38 +++++---
 5 files changed, 173 insertions(+), 15 deletions(-)
 create mode 100644 drivers/gpu/drm/amd/amdgpu/amdgpu_atomfirmware.c
 create mode 100644 drivers/gpu/drm/amd/amdgpu/amdgpu_atomfirmware.h

diff --git a/drivers/gpu/drm/amd/amdgpu/Makefile b/drivers/gpu/drm/amd/amdgpu/Makefile
index 8870e2e..fbf6474 100644
--- a/drivers/gpu/drm/amd/amdgpu/Makefile
+++ b/drivers/gpu/drm/amd/amdgpu/Makefile
@@ -30,7 +30,7 @@ amdgpu-y += amdgpu_device.o amdgpu_kms.o \
 	atombios_encoders.o amdgpu_sa.o atombios_i2c.o \
 	amdgpu_prime.o amdgpu_vm.o amdgpu_ib.o amdgpu_pll.o \
 	amdgpu_ucode.o amdgpu_bo_list.o amdgpu_ctx.o amdgpu_sync.o \
-	amdgpu_gtt_mgr.o amdgpu_vram_mgr.o amdgpu_virt.o
+	amdgpu_gtt_mgr.o amdgpu_vram_mgr.o amdgpu_virt.o amdgpu_atomfirmware.o
 
 # add asic specific block
 amdgpu-$(CONFIG_DRM_AMDGPU_CIK)+= cik.o cik_ih.o kv_smc.o kv_dpm.o \
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
index 15e985e..b713f37 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
@@ -113,7 +113,7 @@ extern int amdgpu_vram_page_split;
 #define AMDGPU_IB_POOL_SIZE			16
 #define AMDGPU_DEBUGFS_MAX_COMPONENTS		32
 #define AMDGPUFB_CONN_LIMIT			4
-#define AMDGPU_BIOS_NUM_SCRATCH			8
+#define AMDGPU_BIOS_NUM_SCRATCH			16
 
 /* max number of IP instances */
 #define AMDGPU_MAX_SDMA_INSTANCES		2
@@ -1318,6 +1318,7 @@ struct amdgpu_device {
 	uint8_t				*bios;
 	uint32_t			bios_size;
 	struct amdgpu_bo		*stollen_vga_memory;
+	uint32_t			bios_scratch_reg_offset;
 	uint32_t			bios_scratch[AMDGPU_BIOS_NUM_SCRATCH];
 
 	/* Register/doorbell mmio */
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_atomfirmware.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_atomfirmware.c
new file mode 100644
index 0000000..4b9abd6
--- /dev/null
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_atomfirmware.c
@@ -0,0 +1,112 @@
+/*
+ * Copyright 2016 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+#include <drm/drmP.h>
+#include <drm/amdgpu_drm.h>
+#include "amdgpu.h"
+#include "atomfirmware.h"
+#include "amdgpu_atomfirmware.h"
+#include "atom.h"
+
+#define get_index_into_master_table(master_table, table_name) (offsetof(struct master_table, table_name) / sizeof(uint16_t))
+
+bool amdgpu_atomfirmware_gpu_supports_virtualization(struct amdgpu_device *adev)
+{
+	int index = get_index_into_master_table(atom_master_list_of_data_tables_v2_1,
+						firmwareinfo);
+	uint16_t data_offset;
+
+	if (amdgpu_atom_parse_data_header(adev->mode_info.atom_context, index, NULL,
+					  NULL, NULL, &data_offset)) {
+		struct atom_firmware_info_v3_1 *firmware_info =
+			(struct atom_firmware_info_v3_1 *)(adev->mode_info.atom_context->bios +
+							   data_offset);
+
+		if (le32_to_cpu(firmware_info->firmware_capability) &
+		    ATOM_FIRMWARE_CAP_GPU_VIRTUALIZATION)
+			return true;
+	}
+	return false;
+}
+
+void amdgpu_atomfirmware_scratch_regs_init(struct amdgpu_device *adev)
+{
+	int index = get_index_into_master_table(atom_master_list_of_data_tables_v2_1,
+						firmwareinfo);
+	uint16_t data_offset;
+
+	if (amdgpu_atom_parse_data_header(adev->mode_info.atom_context, index, NULL,
+					  NULL, NULL, &data_offset)) {
+		struct atom_firmware_info_v3_1 *firmware_info =
+			(struct atom_firmware_info_v3_1 *)(adev->mode_info.atom_context->bios +
+							   data_offset);
+
+		adev->bios_scratch_reg_offset =
+			le32_to_cpu(firmware_info->bios_scratch_reg_startaddr);
+	}
+}
+
+void amdgpu_atomfirmware_scratch_regs_save(struct amdgpu_device *adev)
+{
+	int i;
+
+	for (i = 0; i < AMDGPU_BIOS_NUM_SCRATCH; i++)
+		adev->bios_scratch[i] = RREG32(adev->bios_scratch_reg_offset + i);
+}
+
+void amdgpu_atomfirmware_scratch_regs_restore(struct amdgpu_device *adev)
+{
+	int i;
+
+	for (i = 0; i < AMDGPU_BIOS_NUM_SCRATCH; i++)
+		WREG32(adev->bios_scratch_reg_offset + i, adev->bios_scratch[i]);
+}
+
+int amdgpu_atomfirmware_allocate_fb_scratch(struct amdgpu_device *adev)
+{
+	struct atom_context *ctx = adev->mode_info.atom_context;
+	int index = get_index_into_master_table(atom_master_list_of_data_tables_v2_1,
+						vram_usagebyfirmware);
+	uint16_t data_offset;
+	int usage_bytes = 0;
+
+	if (amdgpu_atom_parse_data_header(ctx, index, NULL, NULL, NULL, &data_offset)) {
+		struct vram_usagebyfirmware_v2_1 *firmware_usage =
+			(struct vram_usagebyfirmware_v2_1 *)(ctx->bios + data_offset);
+
+		DRM_DEBUG("atom firmware requested %08x %dkb fw %dkb drv\n",
+			  le32_to_cpu(firmware_usage->start_address_in_kb),
+			  le16_to_cpu(firmware_usage->used_by_firmware_in_kb),
+			  le16_to_cpu(firmware_usage->used_by_driver_in_kb));
+
+		usage_bytes = le16_to_cpu(firmware_usage->used_by_driver_in_kb) * 1024;
+	}
+	ctx->scratch_size_bytes = 0;
+	if (usage_bytes == 0)
+		usage_bytes = 20 * 1024;
+	/* allocate some scratch memory */
+	ctx->scratch = kzalloc(usage_bytes, GFP_KERNEL);
+	if (!ctx->scratch)
+		return -ENOMEM;
+	ctx->scratch_size_bytes = usage_bytes;
+	return 0;
+}
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_atomfirmware.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_atomfirmware.h
new file mode 100644
index 0000000..d0c4dcd
--- /dev/null
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_atomfirmware.h
@@ -0,0 +1,33 @@
+/*
+ * Copyright 2014 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __AMDGPU_ATOMFIRMWARE_H__
+#define __AMDGPU_ATOMFIRMWARE_H__
+
+bool amdgpu_atomfirmware_gpu_supports_virtualization(struct amdgpu_device *adev);
+void amdgpu_atomfirmware_scratch_regs_init(struct amdgpu_device *adev);
+void amdgpu_atomfirmware_scratch_regs_save(struct amdgpu_device *adev);
+void amdgpu_atomfirmware_scratch_regs_restore(struct amdgpu_device *adev);
+int amdgpu_atomfirmware_allocate_fb_scratch(struct amdgpu_device *adev);
+
+#endif
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index f87c1cb..5a17899 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -41,6 +41,7 @@
 #include "amdgpu_i2c.h"
 #include "atom.h"
 #include "amdgpu_atombios.h"
+#include "amdgpu_atomfirmware.h"
 #include "amd_pcie.h"
 #ifdef CONFIG_DRM_AMDGPU_SI
 #include "si.h"
@@ -992,8 +993,13 @@ static int amdgpu_atombios_init(struct amdgpu_device *adev)
 	}
 
 	mutex_init(&adev->mode_info.atom_context->mutex);
-	amdgpu_atombios_scratch_regs_init(adev);
-	amdgpu_atombios_allocate_fb_scratch(adev);
+	if (adev->is_atom_fw) {
+		amdgpu_atomfirmware_scratch_regs_init(adev);
+		amdgpu_atomfirmware_allocate_fb_scratch(adev);
+	} else {
+		amdgpu_atombios_scratch_regs_init(adev);
+		amdgpu_atombios_allocate_fb_scratch(adev);
+	}
 	return 0;
 }
 
@@ -1755,8 +1761,13 @@ static int amdgpu_resume(struct amdgpu_device *adev)
 
 static void amdgpu_device_detect_sriov_bios(struct amdgpu_device *adev)
 {
-	if (amdgpu_atombios_has_gpu_virtualization_table(adev))
-		adev->virt.caps |= AMDGPU_SRIOV_CAPS_SRIOV_VBIOS;
+	if (adev->is_atom_fw) {
+		if (amdgpu_atomfirmware_gpu_supports_virtualization(adev))
+			adev->virt.caps |= AMDGPU_SRIOV_CAPS_SRIOV_VBIOS;
+	} else {
+		if (amdgpu_atombios_has_gpu_virtualization_table(adev))
+			adev->virt.caps |= AMDGPU_SRIOV_CAPS_SRIOV_VBIOS;
+	}
 }
 
 bool amdgpu_device_asic_has_dc_support(enum amd_asic_type asic_type)
@@ -1964,17 +1975,18 @@ int amdgpu_device_init(struct amdgpu_device *adev,
 		DRM_INFO("GPU post is not needed\n");
 	}
 
-	/* Initialize clocks */
-	r = amdgpu_atombios_get_clock_info(adev);
-	if (r) {
-		dev_err(adev->dev, "amdgpu_atombios_get_clock_info failed\n");
-		goto failed;
+	if (!adev->is_atom_fw) {
+		/* Initialize clocks */
+		r = amdgpu_atombios_get_clock_info(adev);
+		if (r) {
+			dev_err(adev->dev, "amdgpu_atombios_get_clock_info failed\n");
+			return r;
+		}
+		/* init i2c buses */
+		if (!amdgpu_device_has_dc_support(adev))
+			amdgpu_atombios_i2c_init(adev);
 	}
 
-	/* init i2c buses */
-	if (!amdgpu_device_has_dc_support(adev))
-		amdgpu_atombios_i2c_init(adev);
-
 	/* Fence driver */
 	r = amdgpu_fence_driver_init(adev);
 	if (r) {
-- 
2.5.5

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


* [PATCH 005/100] drm/amdgpu: add soc15ip.h
       [not found] ` <1490041835-11255-1-git-send-email-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (3 preceding siblings ...)
  2017-03-20 20:29   ` [PATCH 004/100] drm/amdgpu: add basic support for atomfirmware.h (v3) Alex Deucher
@ 2017-03-20 20:29   ` Alex Deucher
  2017-03-20 20:29   ` [PATCH 021/100] drm/amd: Add MQD structs for GFX V9 Alex Deucher
                     ` (80 subsequent siblings)
  85 siblings, 0 replies; 101+ messages in thread
From: Alex Deucher @ 2017-03-20 20:29 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Alex Deucher

This header defines the IP base address layout for soc15-based SoCs.

Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 .../gpu/drm/amd/include/asic_reg/vega10/soc15ip.h  | 1343 ++++++++++++++++++++
 1 file changed, 1343 insertions(+)
 create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/soc15ip.h

diff --git a/drivers/gpu/drm/amd/include/asic_reg/vega10/soc15ip.h b/drivers/gpu/drm/amd/include/asic_reg/vega10/soc15ip.h
new file mode 100644
index 0000000..1767db6
--- /dev/null
+++ b/drivers/gpu/drm/amd/include/asic_reg/vega10/soc15ip.h
@@ -0,0 +1,1343 @@
+/*
+ * Copyright (C) 2017  Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included
+ * in all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+ * OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN
+ * AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
+ */
+#ifndef _soc15ip_new_HEADER
+#define _soc15ip_new_HEADER
+
+// HW ID
+#define MP1_HWID                                           1
+#define MP2_HWID                                           2
+#define THM_HWID                                           3
+#define SMUIO_HWID                                         4
+#define FUSE_HWID                                          5
+#define CLKA_HWID                                          6
+#define PWR_HWID                                          10
+#define GC_HWID                                           11
+#define UVD_HWID                                          12
+#define VCN_HWID                                          UVD_HWID
+#define AUDIO_AZ_HWID                                     13
+#define ACP_HWID                                          14
+#define DCI_HWID                                          15
+#define DMU_HWID                                         271
+#define DCO_HWID                                          16
+#define DIO_HWID                                         272
+#define XDMA_HWID                                         17
+#define DCEAZ_HWID                                        18
+#define DAZ_HWID                                         274
+#define SDPMUX_HWID                                       19
+#define NTB_HWID                                          20
+#define IOHC_HWID                                         24
+#define L2IMU_HWID                                        28
+#define VCE_HWID                                          32
+#define MMHUB_HWID                                        34
+#define ATHUB_HWID                                        35
+#define DBGU_NBIO_HWID                                    36
+#define DFX_HWID                                          37
+#define DBGU0_HWID                                        38
+#define DBGU1_HWID                                        39
+#define OSSSYS_HWID                                       40
+#define HDP_HWID                                          41
+#define SDMA0_HWID                                        42
+#define SDMA1_HWID                                        43
+#define ISP_HWID                                          44
+#define DBGU_IO_HWID                                      45
+#define DF_HWID                                           46
+#define CLKB_HWID                                         47
+#define FCH_HWID                                          48
+#define DFX_DAP_HWID                                      49
+#define L1IMU_PCIE_HWID                                   50
+#define L1IMU_NBIF_HWID                                   51
+#define L1IMU_IOAGR_HWID                                  52
+#define L1IMU3_HWID                                       53
+#define L1IMU4_HWID                                       54
+#define L1IMU5_HWID                                       55
+#define L1IMU6_HWID                                       56
+#define L1IMU7_HWID                                       57
+#define L1IMU8_HWID                                       58
+#define L1IMU9_HWID                                       59
+#define L1IMU10_HWID                                      60
+#define L1IMU11_HWID                                      61
+#define L1IMU12_HWID                                      62
+#define L1IMU13_HWID                                      63
+#define L1IMU14_HWID                                      64
+#define L1IMU15_HWID                                      65
+#define WAFLC_HWID                                        66
+#define FCH_USB_PD_HWID                                   67
+#define PCIE_HWID                                         70
+#define PCS_HWID                                          80
+#define DDCL_HWID                                         89
+#define SST_HWID                                          90
+#define IOAGR_HWID                                       100
+#define NBIF_HWID                                        108
+#define IOAPIC_HWID                                      124
+#define SYSTEMHUB_HWID                                   128
+#define NTBCCP_HWID                                      144
+#define UMC_HWID                                         150
+#define SATA_HWID                                        168
+#define USB_HWID                                         170
+#define CCXSEC_HWID                                      176
+#define XGBE_HWID                                        216
+#define MP0_HWID                                         254
+
+#define MAX_INSTANCE                                       5
+#define MAX_SEGMENT                                        5
+
+
+struct IP_BASE_INSTANCE 
+{
+    unsigned int segment[MAX_SEGMENT];
+};
+ 
+struct IP_BASE 
+{
+    struct IP_BASE_INSTANCE instance[MAX_INSTANCE];
+};
+
+
+static const struct IP_BASE NBIF_BASE			= { { { { 0x00000000, 0x00000014, 0x00000D20, 0x00010400, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } } } };
+static const struct IP_BASE NBIO_BASE			= { { { { 0x00000000, 0x00000014, 0x00000D20, 0x00010400, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } } } };
+static const struct IP_BASE DCE_BASE			= { { { { 0x00000012, 0x000000C0, 0x000034C0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } } } };
+static const struct IP_BASE DCN_BASE			= { { { { 0x00000012, 0x000000C0, 0x000034C0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } } } };
+static const struct IP_BASE MP0_BASE			= { { { { 0x00016000, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } } } };
+static const struct IP_BASE MP1_BASE			= { { { { 0x00016000, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } } } };
+static const struct IP_BASE MP2_BASE			= { { { { 0x00016000, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } } } };
+static const struct IP_BASE DF_BASE			= { { { { 0x00007000, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } } } };
+static const struct IP_BASE UVD_BASE			= { { { { 0x00007800, 0x00007E00, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } } } };  //note: GLN does not use the first segment
+static const struct IP_BASE VCN_BASE			= { { { { 0x00007800, 0x00007E00, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } } } };  //note: GLN does not use the first segment
+static const struct IP_BASE DBGU_BASE			= { { { { 0x00000180, 0x000001A0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } } } }; // not exist
+static const struct IP_BASE DBGU_NBIO_BASE		= { { { { 0x000001C0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } } } }; // not exist
+static const struct IP_BASE DBGU_IO_BASE		= { { { { 0x000001E0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } } } }; // not exist
+static const struct IP_BASE DFX_DAP_BASE		= { { { { 0x000005A0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } } } }; // not exist
+static const struct IP_BASE DFX_BASE			= { { { { 0x00000580, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } } } }; // this file does not contain registers
+static const struct IP_BASE ISP_BASE			= { { { { 0x00018000, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } } } }; // not exist
+static const struct IP_BASE SYSTEMHUB_BASE		= { { { { 0x00000EA0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } } } }; // not exist
+static const struct IP_BASE L2IMU_BASE			= { { { { 0x00007DC0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } } } };
+static const struct IP_BASE IOHC_BASE			= { { { { 0x00010000, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } } } };
+static const struct IP_BASE ATHUB_BASE			= { { { { 0x00000C20, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } } } };
+static const struct IP_BASE VCE_BASE			= { { { { 0x00007E00, 0x00048800, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } } } };
+static const struct IP_BASE GC_BASE			= { { { { 0x00002000, 0x0000A000, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } } } };
+static const struct IP_BASE MMHUB_BASE			= { { { { 0x0001A000, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } } } };
+static const struct IP_BASE RSMU_BASE			= { { { { 0x00012000, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } } } };
+static const struct IP_BASE HDP_BASE			= { { { { 0x00000F20, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } } } };
+static const struct IP_BASE OSSSYS_BASE		= { { { { 0x000010A0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } } } };
+static const struct IP_BASE SDMA0_BASE			= { { { { 0x00001260, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } } } };
+static const struct IP_BASE SDMA1_BASE			= { { { { 0x00001460, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } } } };
+static const struct IP_BASE XDMA_BASE			= { { { { 0x00003400, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } } } };
+static const struct IP_BASE UMC_BASE			= { { { { 0x00014000, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } } } };
+static const struct IP_BASE THM_BASE			= { { { { 0x00016600, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } } } };
+static const struct IP_BASE SMUIO_BASE			= { { { { 0x00016800, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } } } };
+static const struct IP_BASE PWR_BASE			= { { { { 0x00016A00, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } } } };
+static const struct IP_BASE CLK_BASE			= { { { { 0x00016C00, 0, 0, 0, 0 } },
+									    { { 0x00016E00, 0, 0, 0, 0 } }, 
+										{ { 0x00017000, 0, 0, 0, 0 } }, 
+	                                    { { 0x00017200, 0, 0, 0, 0 } }, 
+						                { { 0x00017E00, 0, 0, 0, 0 } } } };  
+static const struct IP_BASE FUSE_BASE			= { { { { 0x00017400, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } },
+										{ { 0, 0, 0, 0, 0 } }, 
+										{ { 0, 0, 0, 0, 0 } } } };
+
+
+#define NBIF_BASE__INST0_SEG0                     0x00000000
+#define NBIF_BASE__INST0_SEG1                     0x00000014
+#define NBIF_BASE__INST0_SEG2                     0x00000D20
+#define NBIF_BASE__INST0_SEG3                     0x00010400
+#define NBIF_BASE__INST0_SEG4                     0
+
+#define NBIF_BASE__INST1_SEG0                     0
+#define NBIF_BASE__INST1_SEG1                     0
+#define NBIF_BASE__INST1_SEG2                     0
+#define NBIF_BASE__INST1_SEG3                     0
+#define NBIF_BASE__INST1_SEG4                     0
+
+#define NBIF_BASE__INST2_SEG0                     0
+#define NBIF_BASE__INST2_SEG1                     0
+#define NBIF_BASE__INST2_SEG2                     0
+#define NBIF_BASE__INST2_SEG3                     0
+#define NBIF_BASE__INST2_SEG4                     0
+
+#define NBIF_BASE__INST3_SEG0                     0
+#define NBIF_BASE__INST3_SEG1                     0
+#define NBIF_BASE__INST3_SEG2                     0
+#define NBIF_BASE__INST3_SEG3                     0
+#define NBIF_BASE__INST3_SEG4                     0
+
+#define NBIF_BASE__INST4_SEG0                     0
+#define NBIF_BASE__INST4_SEG1                     0
+#define NBIF_BASE__INST4_SEG2                     0
+#define NBIF_BASE__INST4_SEG3                     0
+#define NBIF_BASE__INST4_SEG4                     0
+
+#define NBIO_BASE__INST0_SEG0                     0x00000000
+#define NBIO_BASE__INST0_SEG1                     0x00000014
+#define NBIO_BASE__INST0_SEG2                     0x00000D20
+#define NBIO_BASE__INST0_SEG3                     0x00010400
+#define NBIO_BASE__INST0_SEG4                     0
+
+#define NBIO_BASE__INST1_SEG0                     0
+#define NBIO_BASE__INST1_SEG1                     0
+#define NBIO_BASE__INST1_SEG2                     0
+#define NBIO_BASE__INST1_SEG3                     0
+#define NBIO_BASE__INST1_SEG4                     0
+
+#define NBIO_BASE__INST2_SEG0                     0
+#define NBIO_BASE__INST2_SEG1                     0
+#define NBIO_BASE__INST2_SEG2                     0
+#define NBIO_BASE__INST2_SEG3                     0
+#define NBIO_BASE__INST2_SEG4                     0
+
+#define NBIO_BASE__INST3_SEG0                     0
+#define NBIO_BASE__INST3_SEG1                     0
+#define NBIO_BASE__INST3_SEG2                     0
+#define NBIO_BASE__INST3_SEG3                     0
+#define NBIO_BASE__INST3_SEG4                     0
+
+#define NBIO_BASE__INST4_SEG0                     0
+#define NBIO_BASE__INST4_SEG1                     0
+#define NBIO_BASE__INST4_SEG2                     0
+#define NBIO_BASE__INST4_SEG3                     0
+#define NBIO_BASE__INST4_SEG4                     0
+
+#define DCE_BASE__INST0_SEG0                      0x00000012
+#define DCE_BASE__INST0_SEG1                      0x000000C0
+#define DCE_BASE__INST0_SEG2                      0x000034C0
+#define DCE_BASE__INST0_SEG3                      0
+#define DCE_BASE__INST0_SEG4                      0
+
+#define DCE_BASE__INST1_SEG0                      0
+#define DCE_BASE__INST1_SEG1                      0
+#define DCE_BASE__INST1_SEG2                      0
+#define DCE_BASE__INST1_SEG3                      0
+#define DCE_BASE__INST1_SEG4                      0
+
+#define DCE_BASE__INST2_SEG0                      0
+#define DCE_BASE__INST2_SEG1                      0
+#define DCE_BASE__INST2_SEG2                      0
+#define DCE_BASE__INST2_SEG3                      0
+#define DCE_BASE__INST2_SEG4                      0
+
+#define DCE_BASE__INST3_SEG0                      0
+#define DCE_BASE__INST3_SEG1                      0
+#define DCE_BASE__INST3_SEG2                      0
+#define DCE_BASE__INST3_SEG3                      0
+#define DCE_BASE__INST3_SEG4                      0
+
+#define DCE_BASE__INST4_SEG0                      0
+#define DCE_BASE__INST4_SEG1                      0
+#define DCE_BASE__INST4_SEG2                      0
+#define DCE_BASE__INST4_SEG3                      0
+#define DCE_BASE__INST4_SEG4                      0
+
+#define DCN_BASE__INST0_SEG0                      0x00000012
+#define DCN_BASE__INST0_SEG1                      0x000000C0
+#define DCN_BASE__INST0_SEG2                      0x000034C0
+#define DCN_BASE__INST0_SEG3                      0
+#define DCN_BASE__INST0_SEG4                      0
+
+#define DCN_BASE__INST1_SEG0                      0
+#define DCN_BASE__INST1_SEG1                      0
+#define DCN_BASE__INST1_SEG2                      0
+#define DCN_BASE__INST1_SEG3                      0
+#define DCN_BASE__INST1_SEG4                      0
+
+#define DCN_BASE__INST2_SEG0                      0
+#define DCN_BASE__INST2_SEG1                      0
+#define DCN_BASE__INST2_SEG2                      0
+#define DCN_BASE__INST2_SEG3                      0
+#define DCN_BASE__INST2_SEG4                      0
+
+#define DCN_BASE__INST3_SEG0                      0
+#define DCN_BASE__INST3_SEG1                      0
+#define DCN_BASE__INST3_SEG2                      0
+#define DCN_BASE__INST3_SEG3                      0
+#define DCN_BASE__INST3_SEG4                      0
+
+#define DCN_BASE__INST4_SEG0                      0
+#define DCN_BASE__INST4_SEG1                      0
+#define DCN_BASE__INST4_SEG2                      0
+#define DCN_BASE__INST4_SEG3                      0
+#define DCN_BASE__INST4_SEG4                      0
+
+#define MP0_BASE__INST0_SEG0                      0x00016000
+#define MP0_BASE__INST0_SEG1                      0
+#define MP0_BASE__INST0_SEG2                      0
+#define MP0_BASE__INST0_SEG3                      0
+#define MP0_BASE__INST0_SEG4                      0
+
+#define MP0_BASE__INST1_SEG0                      0
+#define MP0_BASE__INST1_SEG1                      0
+#define MP0_BASE__INST1_SEG2                      0
+#define MP0_BASE__INST1_SEG3                      0
+#define MP0_BASE__INST1_SEG4                      0
+
+#define MP0_BASE__INST2_SEG0                      0
+#define MP0_BASE__INST2_SEG1                      0
+#define MP0_BASE__INST2_SEG2                      0
+#define MP0_BASE__INST2_SEG3                      0
+#define MP0_BASE__INST2_SEG4                      0
+
+#define MP0_BASE__INST3_SEG0                      0
+#define MP0_BASE__INST3_SEG1                      0
+#define MP0_BASE__INST3_SEG2                      0
+#define MP0_BASE__INST3_SEG3                      0
+#define MP0_BASE__INST3_SEG4                      0
+
+#define MP0_BASE__INST4_SEG0                      0
+#define MP0_BASE__INST4_SEG1                      0
+#define MP0_BASE__INST4_SEG2                      0
+#define MP0_BASE__INST4_SEG3                      0
+#define MP0_BASE__INST4_SEG4                      0
+
+#define MP1_BASE__INST0_SEG0                      0x00016200
+#define MP1_BASE__INST0_SEG1                      0
+#define MP1_BASE__INST0_SEG2                      0
+#define MP1_BASE__INST0_SEG3                      0
+#define MP1_BASE__INST0_SEG4                      0
+
+#define MP1_BASE__INST1_SEG0                      0
+#define MP1_BASE__INST1_SEG1                      0
+#define MP1_BASE__INST1_SEG2                      0
+#define MP1_BASE__INST1_SEG3                      0
+#define MP1_BASE__INST1_SEG4                      0
+
+#define MP1_BASE__INST2_SEG0                      0
+#define MP1_BASE__INST2_SEG1                      0
+#define MP1_BASE__INST2_SEG2                      0
+#define MP1_BASE__INST2_SEG3                      0
+#define MP1_BASE__INST2_SEG4                      0
+
+#define MP1_BASE__INST3_SEG0                      0
+#define MP1_BASE__INST3_SEG1                      0
+#define MP1_BASE__INST3_SEG2                      0
+#define MP1_BASE__INST3_SEG3                      0
+#define MP1_BASE__INST3_SEG4                      0
+
+#define MP1_BASE__INST4_SEG0                      0
+#define MP1_BASE__INST4_SEG1                      0
+#define MP1_BASE__INST4_SEG2                      0
+#define MP1_BASE__INST4_SEG3                      0
+#define MP1_BASE__INST4_SEG4                      0
+
+#define MP2_BASE__INST0_SEG0                      0x00016400
+#define MP2_BASE__INST0_SEG1                      0
+#define MP2_BASE__INST0_SEG2                      0
+#define MP2_BASE__INST0_SEG3                      0
+#define MP2_BASE__INST0_SEG4                      0
+
+#define MP2_BASE__INST1_SEG0                      0
+#define MP2_BASE__INST1_SEG1                      0
+#define MP2_BASE__INST1_SEG2                      0
+#define MP2_BASE__INST1_SEG3                      0
+#define MP2_BASE__INST1_SEG4                      0
+
+#define MP2_BASE__INST2_SEG0                      0
+#define MP2_BASE__INST2_SEG1                      0
+#define MP2_BASE__INST2_SEG2                      0
+#define MP2_BASE__INST2_SEG3                      0
+#define MP2_BASE__INST2_SEG4                      0
+
+#define MP2_BASE__INST3_SEG0                      0
+#define MP2_BASE__INST3_SEG1                      0
+#define MP2_BASE__INST3_SEG2                      0
+#define MP2_BASE__INST3_SEG3                      0
+#define MP2_BASE__INST3_SEG4                      0
+
+#define MP2_BASE__INST4_SEG0                      0
+#define MP2_BASE__INST4_SEG1                      0
+#define MP2_BASE__INST4_SEG2                      0
+#define MP2_BASE__INST4_SEG3                      0
+#define MP2_BASE__INST4_SEG4                      0
+
+#define DF_BASE__INST0_SEG0                       0x00007000
+#define DF_BASE__INST0_SEG1                       0
+#define DF_BASE__INST0_SEG2                       0
+#define DF_BASE__INST0_SEG3                       0
+#define DF_BASE__INST0_SEG4                       0
+
+#define DF_BASE__INST1_SEG0                       0
+#define DF_BASE__INST1_SEG1                       0
+#define DF_BASE__INST1_SEG2                       0
+#define DF_BASE__INST1_SEG3                       0
+#define DF_BASE__INST1_SEG4                       0
+
+#define DF_BASE__INST2_SEG0                       0
+#define DF_BASE__INST2_SEG1                       0
+#define DF_BASE__INST2_SEG2                       0
+#define DF_BASE__INST2_SEG3                       0
+#define DF_BASE__INST2_SEG4                       0
+
+#define DF_BASE__INST3_SEG0                       0
+#define DF_BASE__INST3_SEG1                       0
+#define DF_BASE__INST3_SEG2                       0
+#define DF_BASE__INST3_SEG3                       0
+#define DF_BASE__INST3_SEG4                       0
+
+#define DF_BASE__INST4_SEG0                       0
+#define DF_BASE__INST4_SEG1                       0
+#define DF_BASE__INST4_SEG2                       0
+#define DF_BASE__INST4_SEG3                       0
+#define DF_BASE__INST4_SEG4                       0
+
+#define UVD_BASE__INST0_SEG0                      0x00007800
+#define UVD_BASE__INST0_SEG1                      0x00007E00
+#define UVD_BASE__INST0_SEG2                      0
+#define UVD_BASE__INST0_SEG3                      0
+#define UVD_BASE__INST0_SEG4                      0
+
+#define UVD_BASE__INST1_SEG0                      0
+#define UVD_BASE__INST1_SEG1                      0
+#define UVD_BASE__INST1_SEG2                      0
+#define UVD_BASE__INST1_SEG3                      0
+#define UVD_BASE__INST1_SEG4                      0
+
+#define UVD_BASE__INST2_SEG0                      0
+#define UVD_BASE__INST2_SEG1                      0
+#define UVD_BASE__INST2_SEG2                      0
+#define UVD_BASE__INST2_SEG3                      0
+#define UVD_BASE__INST2_SEG4                      0
+
+#define UVD_BASE__INST3_SEG0                      0
+#define UVD_BASE__INST3_SEG1                      0
+#define UVD_BASE__INST3_SEG2                      0
+#define UVD_BASE__INST3_SEG3                      0
+#define UVD_BASE__INST3_SEG4                      0
+
+#define UVD_BASE__INST4_SEG0                      0
+#define UVD_BASE__INST4_SEG1                      0
+#define UVD_BASE__INST4_SEG2                      0
+#define UVD_BASE__INST4_SEG3                      0
+#define UVD_BASE__INST4_SEG4                      0
+
+#define VCN_BASE__INST0_SEG0                      0x00007800
+#define VCN_BASE__INST0_SEG1                      0x00007E00
+#define VCN_BASE__INST0_SEG2                      0
+#define VCN_BASE__INST0_SEG3                      0
+#define VCN_BASE__INST0_SEG4                      0
+
+#define VCN_BASE__INST1_SEG0                      0
+#define VCN_BASE__INST1_SEG1                      0
+#define VCN_BASE__INST1_SEG2                      0
+#define VCN_BASE__INST1_SEG3                      0
+#define VCN_BASE__INST1_SEG4                      0
+
+#define VCN_BASE__INST2_SEG0                      0
+#define VCN_BASE__INST2_SEG1                      0
+#define VCN_BASE__INST2_SEG2                      0
+#define VCN_BASE__INST2_SEG3                      0
+#define VCN_BASE__INST2_SEG4                      0
+
+#define VCN_BASE__INST3_SEG0                      0
+#define VCN_BASE__INST3_SEG1                      0
+#define VCN_BASE__INST3_SEG2                      0
+#define VCN_BASE__INST3_SEG3                      0
+#define VCN_BASE__INST3_SEG4                      0
+
+#define VCN_BASE__INST4_SEG0                      0
+#define VCN_BASE__INST4_SEG1                      0
+#define VCN_BASE__INST4_SEG2                      0
+#define VCN_BASE__INST4_SEG3                      0
+#define VCN_BASE__INST4_SEG4                      0
+
+#define DBGU_BASE__INST0_SEG0                     0x00000180
+#define DBGU_BASE__INST0_SEG1                     0x000001A0
+#define DBGU_BASE__INST0_SEG2                     0
+#define DBGU_BASE__INST0_SEG3                     0
+#define DBGU_BASE__INST0_SEG4                     0
+
+#define DBGU_BASE__INST1_SEG0                     0
+#define DBGU_BASE__INST1_SEG1                     0
+#define DBGU_BASE__INST1_SEG2                     0
+#define DBGU_BASE__INST1_SEG3                     0
+#define DBGU_BASE__INST1_SEG4                     0
+
+#define DBGU_BASE__INST2_SEG0                     0
+#define DBGU_BASE__INST2_SEG1                     0
+#define DBGU_BASE__INST2_SEG2                     0
+#define DBGU_BASE__INST2_SEG3                     0
+#define DBGU_BASE__INST2_SEG4                     0
+
+#define DBGU_BASE__INST3_SEG0                     0
+#define DBGU_BASE__INST3_SEG1                     0
+#define DBGU_BASE__INST3_SEG2                     0
+#define DBGU_BASE__INST3_SEG3                     0
+#define DBGU_BASE__INST3_SEG4                     0
+
+#define DBGU_BASE__INST4_SEG0                     0
+#define DBGU_BASE__INST4_SEG1                     0
+#define DBGU_BASE__INST4_SEG2                     0
+#define DBGU_BASE__INST4_SEG3                     0
+#define DBGU_BASE__INST4_SEG4                     0
+
+#define DBGU_NBIO_BASE__INST0_SEG0                0x000001C0
+#define DBGU_NBIO_BASE__INST0_SEG1                0
+#define DBGU_NBIO_BASE__INST0_SEG2                0
+#define DBGU_NBIO_BASE__INST0_SEG3                0
+#define DBGU_NBIO_BASE__INST0_SEG4                0
+
+#define DBGU_NBIO_BASE__INST1_SEG0                0
+#define DBGU_NBIO_BASE__INST1_SEG1                0
+#define DBGU_NBIO_BASE__INST1_SEG2                0
+#define DBGU_NBIO_BASE__INST1_SEG3                0
+#define DBGU_NBIO_BASE__INST1_SEG4                0
+
+#define DBGU_NBIO_BASE__INST2_SEG0                0
+#define DBGU_NBIO_BASE__INST2_SEG1                0
+#define DBGU_NBIO_BASE__INST2_SEG2                0
+#define DBGU_NBIO_BASE__INST2_SEG3                0
+#define DBGU_NBIO_BASE__INST2_SEG4                0
+
+#define DBGU_NBIO_BASE__INST3_SEG0                0
+#define DBGU_NBIO_BASE__INST3_SEG1                0
+#define DBGU_NBIO_BASE__INST3_SEG2                0
+#define DBGU_NBIO_BASE__INST3_SEG3                0
+#define DBGU_NBIO_BASE__INST3_SEG4                0
+
+#define DBGU_NBIO_BASE__INST4_SEG0                0
+#define DBGU_NBIO_BASE__INST4_SEG1                0
+#define DBGU_NBIO_BASE__INST4_SEG2                0
+#define DBGU_NBIO_BASE__INST4_SEG3                0
+#define DBGU_NBIO_BASE__INST4_SEG4                0
+
+#define DBGU_IO_BASE__INST0_SEG0                  0x000001E0
+#define DBGU_IO_BASE__INST0_SEG1                  0
+#define DBGU_IO_BASE__INST0_SEG2                  0
+#define DBGU_IO_BASE__INST0_SEG3                  0
+#define DBGU_IO_BASE__INST0_SEG4                  0
+
+#define DBGU_IO_BASE__INST1_SEG0                  0
+#define DBGU_IO_BASE__INST1_SEG1                  0
+#define DBGU_IO_BASE__INST1_SEG2                  0
+#define DBGU_IO_BASE__INST1_SEG3                  0
+#define DBGU_IO_BASE__INST1_SEG4                  0
+
+#define DBGU_IO_BASE__INST2_SEG0                  0
+#define DBGU_IO_BASE__INST2_SEG1                  0
+#define DBGU_IO_BASE__INST2_SEG2                  0
+#define DBGU_IO_BASE__INST2_SEG3                  0
+#define DBGU_IO_BASE__INST2_SEG4                  0
+
+#define DBGU_IO_BASE__INST3_SEG0                  0
+#define DBGU_IO_BASE__INST3_SEG1                  0
+#define DBGU_IO_BASE__INST3_SEG2                  0
+#define DBGU_IO_BASE__INST3_SEG3                  0
+#define DBGU_IO_BASE__INST3_SEG4                  0
+
+#define DBGU_IO_BASE__INST4_SEG0                  0
+#define DBGU_IO_BASE__INST4_SEG1                  0
+#define DBGU_IO_BASE__INST4_SEG2                  0
+#define DBGU_IO_BASE__INST4_SEG3                  0
+#define DBGU_IO_BASE__INST4_SEG4                  0
+
+#define DFX_DAP_BASE__INST0_SEG0                  0x000005A0
+#define DFX_DAP_BASE__INST0_SEG1                  0
+#define DFX_DAP_BASE__INST0_SEG2                  0
+#define DFX_DAP_BASE__INST0_SEG3                  0
+#define DFX_DAP_BASE__INST0_SEG4                  0
+
+#define DFX_DAP_BASE__INST1_SEG0                  0
+#define DFX_DAP_BASE__INST1_SEG1                  0
+#define DFX_DAP_BASE__INST1_SEG2                  0
+#define DFX_DAP_BASE__INST1_SEG3                  0
+#define DFX_DAP_BASE__INST1_SEG4                  0
+
+#define DFX_DAP_BASE__INST2_SEG0                  0
+#define DFX_DAP_BASE__INST2_SEG1                  0
+#define DFX_DAP_BASE__INST2_SEG2                  0
+#define DFX_DAP_BASE__INST2_SEG3                  0
+#define DFX_DAP_BASE__INST2_SEG4                  0
+
+#define DFX_DAP_BASE__INST3_SEG0                  0
+#define DFX_DAP_BASE__INST3_SEG1                  0
+#define DFX_DAP_BASE__INST3_SEG2                  0
+#define DFX_DAP_BASE__INST3_SEG3                  0
+#define DFX_DAP_BASE__INST3_SEG4                  0
+
+#define DFX_DAP_BASE__INST4_SEG0                  0
+#define DFX_DAP_BASE__INST4_SEG1                  0
+#define DFX_DAP_BASE__INST4_SEG2                  0
+#define DFX_DAP_BASE__INST4_SEG3                  0
+#define DFX_DAP_BASE__INST4_SEG4                  0
+
+#define DFX_BASE__INST0_SEG0                      0x00000580
+#define DFX_BASE__INST0_SEG1                      0
+#define DFX_BASE__INST0_SEG2                      0
+#define DFX_BASE__INST0_SEG3                      0
+#define DFX_BASE__INST0_SEG4                      0
+
+#define DFX_BASE__INST1_SEG0                      0
+#define DFX_BASE__INST1_SEG1                      0
+#define DFX_BASE__INST1_SEG2                      0
+#define DFX_BASE__INST1_SEG3                      0
+#define DFX_BASE__INST1_SEG4                      0
+
+#define DFX_BASE__INST2_SEG0                      0
+#define DFX_BASE__INST2_SEG1                      0
+#define DFX_BASE__INST2_SEG2                      0
+#define DFX_BASE__INST2_SEG3                      0
+#define DFX_BASE__INST2_SEG4                      0
+
+#define DFX_BASE__INST3_SEG0                      0
+#define DFX_BASE__INST3_SEG1                      0
+#define DFX_BASE__INST3_SEG2                      0
+#define DFX_BASE__INST3_SEG3                      0
+#define DFX_BASE__INST3_SEG4                      0
+
+#define DFX_BASE__INST4_SEG0                      0
+#define DFX_BASE__INST4_SEG1                      0
+#define DFX_BASE__INST4_SEG2                      0
+#define DFX_BASE__INST4_SEG3                      0
+#define DFX_BASE__INST4_SEG4                      0
+
+#define ISP_BASE__INST0_SEG0                      0x00018000
+#define ISP_BASE__INST0_SEG1                      0
+#define ISP_BASE__INST0_SEG2                      0
+#define ISP_BASE__INST0_SEG3                      0
+#define ISP_BASE__INST0_SEG4                      0
+
+#define ISP_BASE__INST1_SEG0                      0
+#define ISP_BASE__INST1_SEG1                      0
+#define ISP_BASE__INST1_SEG2                      0
+#define ISP_BASE__INST1_SEG3                      0
+#define ISP_BASE__INST1_SEG4                      0
+
+#define ISP_BASE__INST2_SEG0                      0
+#define ISP_BASE__INST2_SEG1                      0
+#define ISP_BASE__INST2_SEG2                      0
+#define ISP_BASE__INST2_SEG3                      0
+#define ISP_BASE__INST2_SEG4                      0
+
+#define ISP_BASE__INST3_SEG0                      0
+#define ISP_BASE__INST3_SEG1                      0
+#define ISP_BASE__INST3_SEG2                      0
+#define ISP_BASE__INST3_SEG3                      0
+#define ISP_BASE__INST3_SEG4                      0
+
+#define ISP_BASE__INST4_SEG0                      0
+#define ISP_BASE__INST4_SEG1                      0
+#define ISP_BASE__INST4_SEG2                      0
+#define ISP_BASE__INST4_SEG3                      0
+#define ISP_BASE__INST4_SEG4                      0
+
+#define SYSTEMHUB_BASE__INST0_SEG0                0x00000EA0
+#define SYSTEMHUB_BASE__INST0_SEG1                0
+#define SYSTEMHUB_BASE__INST0_SEG2                0
+#define SYSTEMHUB_BASE__INST0_SEG3                0
+#define SYSTEMHUB_BASE__INST0_SEG4                0
+
+#define SYSTEMHUB_BASE__INST1_SEG0                0
+#define SYSTEMHUB_BASE__INST1_SEG1                0
+#define SYSTEMHUB_BASE__INST1_SEG2                0
+#define SYSTEMHUB_BASE__INST1_SEG3                0
+#define SYSTEMHUB_BASE__INST1_SEG4                0
+
+#define SYSTEMHUB_BASE__INST2_SEG0                0
+#define SYSTEMHUB_BASE__INST2_SEG1                0
+#define SYSTEMHUB_BASE__INST2_SEG2                0
+#define SYSTEMHUB_BASE__INST2_SEG3                0
+#define SYSTEMHUB_BASE__INST2_SEG4                0
+
+#define SYSTEMHUB_BASE__INST3_SEG0                0
+#define SYSTEMHUB_BASE__INST3_SEG1                0
+#define SYSTEMHUB_BASE__INST3_SEG2                0
+#define SYSTEMHUB_BASE__INST3_SEG3                0
+#define SYSTEMHUB_BASE__INST3_SEG4                0
+
+#define SYSTEMHUB_BASE__INST4_SEG0                0
+#define SYSTEMHUB_BASE__INST4_SEG1                0
+#define SYSTEMHUB_BASE__INST4_SEG2                0
+#define SYSTEMHUB_BASE__INST4_SEG3                0
+#define SYSTEMHUB_BASE__INST4_SEG4                0
+
+#define L2IMU_BASE__INST0_SEG0                    0x00007DC0
+#define L2IMU_BASE__INST0_SEG1                    0
+#define L2IMU_BASE__INST0_SEG2                    0
+#define L2IMU_BASE__INST0_SEG3                    0
+#define L2IMU_BASE__INST0_SEG4                    0
+
+#define L2IMU_BASE__INST1_SEG0                    0
+#define L2IMU_BASE__INST1_SEG1                    0
+#define L2IMU_BASE__INST1_SEG2                    0
+#define L2IMU_BASE__INST1_SEG3                    0
+#define L2IMU_BASE__INST1_SEG4                    0
+
+#define L2IMU_BASE__INST2_SEG0                    0
+#define L2IMU_BASE__INST2_SEG1                    0
+#define L2IMU_BASE__INST2_SEG2                    0
+#define L2IMU_BASE__INST2_SEG3                    0
+#define L2IMU_BASE__INST2_SEG4                    0
+
+#define L2IMU_BASE__INST3_SEG0                    0
+#define L2IMU_BASE__INST3_SEG1                    0
+#define L2IMU_BASE__INST3_SEG2                    0
+#define L2IMU_BASE__INST3_SEG3                    0
+#define L2IMU_BASE__INST3_SEG4                    0
+
+#define L2IMU_BASE__INST4_SEG0                    0
+#define L2IMU_BASE__INST4_SEG1                    0
+#define L2IMU_BASE__INST4_SEG2                    0
+#define L2IMU_BASE__INST4_SEG3                    0
+#define L2IMU_BASE__INST4_SEG4                    0
+
+#define IOHC_BASE__INST0_SEG0                     0x00010000
+#define IOHC_BASE__INST0_SEG1                     0
+#define IOHC_BASE__INST0_SEG2                     0
+#define IOHC_BASE__INST0_SEG3                     0
+#define IOHC_BASE__INST0_SEG4                     0
+
+#define IOHC_BASE__INST1_SEG0                     0
+#define IOHC_BASE__INST1_SEG1                     0
+#define IOHC_BASE__INST1_SEG2                     0
+#define IOHC_BASE__INST1_SEG3                     0
+#define IOHC_BASE__INST1_SEG4                     0
+
+#define IOHC_BASE__INST2_SEG0                     0
+#define IOHC_BASE__INST2_SEG1                     0
+#define IOHC_BASE__INST2_SEG2                     0
+#define IOHC_BASE__INST2_SEG3                     0
+#define IOHC_BASE__INST2_SEG4                     0
+
+#define IOHC_BASE__INST3_SEG0                     0
+#define IOHC_BASE__INST3_SEG1                     0
+#define IOHC_BASE__INST3_SEG2                     0
+#define IOHC_BASE__INST3_SEG3                     0
+#define IOHC_BASE__INST3_SEG4                     0
+
+#define IOHC_BASE__INST4_SEG0                     0
+#define IOHC_BASE__INST4_SEG1                     0
+#define IOHC_BASE__INST4_SEG2                     0
+#define IOHC_BASE__INST4_SEG3                     0
+#define IOHC_BASE__INST4_SEG4                     0
+
+#define ATHUB_BASE__INST0_SEG0                    0x00000C20
+#define ATHUB_BASE__INST0_SEG1                    0
+#define ATHUB_BASE__INST0_SEG2                    0
+#define ATHUB_BASE__INST0_SEG3                    0
+#define ATHUB_BASE__INST0_SEG4                    0
+
+#define ATHUB_BASE__INST1_SEG0                    0
+#define ATHUB_BASE__INST1_SEG1                    0
+#define ATHUB_BASE__INST1_SEG2                    0
+#define ATHUB_BASE__INST1_SEG3                    0
+#define ATHUB_BASE__INST1_SEG4                    0
+
+#define ATHUB_BASE__INST2_SEG0                    0
+#define ATHUB_BASE__INST2_SEG1                    0
+#define ATHUB_BASE__INST2_SEG2                    0
+#define ATHUB_BASE__INST2_SEG3                    0
+#define ATHUB_BASE__INST2_SEG4                    0
+
+#define ATHUB_BASE__INST3_SEG0                    0
+#define ATHUB_BASE__INST3_SEG1                    0
+#define ATHUB_BASE__INST3_SEG2                    0
+#define ATHUB_BASE__INST3_SEG3                    0
+#define ATHUB_BASE__INST3_SEG4                    0
+
+#define ATHUB_BASE__INST4_SEG0                    0
+#define ATHUB_BASE__INST4_SEG1                    0
+#define ATHUB_BASE__INST4_SEG2                    0
+#define ATHUB_BASE__INST4_SEG3                    0
+#define ATHUB_BASE__INST4_SEG4                    0
+
+#define VCE_BASE__INST0_SEG0                      0x00007E00
+#define VCE_BASE__INST0_SEG1                      0x00048800
+#define VCE_BASE__INST0_SEG2                      0
+#define VCE_BASE__INST0_SEG3                      0
+#define VCE_BASE__INST0_SEG4                      0
+
+#define VCE_BASE__INST1_SEG0                      0
+#define VCE_BASE__INST1_SEG1                      0
+#define VCE_BASE__INST1_SEG2                      0
+#define VCE_BASE__INST1_SEG3                      0
+#define VCE_BASE__INST1_SEG4                      0
+
+#define VCE_BASE__INST2_SEG0                      0
+#define VCE_BASE__INST2_SEG1                      0
+#define VCE_BASE__INST2_SEG2                      0
+#define VCE_BASE__INST2_SEG3                      0
+#define VCE_BASE__INST2_SEG4                      0
+
+#define VCE_BASE__INST3_SEG0                      0
+#define VCE_BASE__INST3_SEG1                      0
+#define VCE_BASE__INST3_SEG2                      0
+#define VCE_BASE__INST3_SEG3                      0
+#define VCE_BASE__INST3_SEG4                      0
+
+#define VCE_BASE__INST4_SEG0                      0
+#define VCE_BASE__INST4_SEG1                      0
+#define VCE_BASE__INST4_SEG2                      0
+#define VCE_BASE__INST4_SEG3                      0
+#define VCE_BASE__INST4_SEG4                      0
+
+#define GC_BASE__INST0_SEG0                       0x00002000
+#define GC_BASE__INST0_SEG1                       0x0000A000
+#define GC_BASE__INST0_SEG2                       0
+#define GC_BASE__INST0_SEG3                       0
+#define GC_BASE__INST0_SEG4                       0
+
+#define GC_BASE__INST1_SEG0                       0
+#define GC_BASE__INST1_SEG1                       0
+#define GC_BASE__INST1_SEG2                       0
+#define GC_BASE__INST1_SEG3                       0
+#define GC_BASE__INST1_SEG4                       0
+
+#define GC_BASE__INST2_SEG0                       0
+#define GC_BASE__INST2_SEG1                       0
+#define GC_BASE__INST2_SEG2                       0
+#define GC_BASE__INST2_SEG3                       0
+#define GC_BASE__INST2_SEG4                       0
+
+#define GC_BASE__INST3_SEG0                       0
+#define GC_BASE__INST3_SEG1                       0
+#define GC_BASE__INST3_SEG2                       0
+#define GC_BASE__INST3_SEG3                       0
+#define GC_BASE__INST3_SEG4                       0
+
+#define GC_BASE__INST4_SEG0                       0
+#define GC_BASE__INST4_SEG1                       0
+#define GC_BASE__INST4_SEG2                       0
+#define GC_BASE__INST4_SEG3                       0
+#define GC_BASE__INST4_SEG4                       0
+
+#define MMHUB_BASE__INST0_SEG0                    0x0001A000
+#define MMHUB_BASE__INST0_SEG1                    0
+#define MMHUB_BASE__INST0_SEG2                    0
+#define MMHUB_BASE__INST0_SEG3                    0
+#define MMHUB_BASE__INST0_SEG4                    0
+
+#define MMHUB_BASE__INST1_SEG0                    0
+#define MMHUB_BASE__INST1_SEG1                    0
+#define MMHUB_BASE__INST1_SEG2                    0
+#define MMHUB_BASE__INST1_SEG3                    0
+#define MMHUB_BASE__INST1_SEG4                    0
+
+#define MMHUB_BASE__INST2_SEG0                    0
+#define MMHUB_BASE__INST2_SEG1                    0
+#define MMHUB_BASE__INST2_SEG2                    0
+#define MMHUB_BASE__INST2_SEG3                    0
+#define MMHUB_BASE__INST2_SEG4                    0
+
+#define MMHUB_BASE__INST3_SEG0                    0
+#define MMHUB_BASE__INST3_SEG1                    0
+#define MMHUB_BASE__INST3_SEG2                    0
+#define MMHUB_BASE__INST3_SEG3                    0
+#define MMHUB_BASE__INST3_SEG4                    0
+
+#define MMHUB_BASE__INST4_SEG0                    0
+#define MMHUB_BASE__INST4_SEG1                    0
+#define MMHUB_BASE__INST4_SEG2                    0
+#define MMHUB_BASE__INST4_SEG3                    0
+#define MMHUB_BASE__INST4_SEG4                    0
+
+#define RSMU_BASE__INST0_SEG0                     0x00012000
+#define RSMU_BASE__INST0_SEG1                     0
+#define RSMU_BASE__INST0_SEG2                     0
+#define RSMU_BASE__INST0_SEG3                     0
+#define RSMU_BASE__INST0_SEG4                     0
+
+#define RSMU_BASE__INST1_SEG0                     0
+#define RSMU_BASE__INST1_SEG1                     0
+#define RSMU_BASE__INST1_SEG2                     0
+#define RSMU_BASE__INST1_SEG3                     0
+#define RSMU_BASE__INST1_SEG4                     0
+
+#define RSMU_BASE__INST2_SEG0                     0
+#define RSMU_BASE__INST2_SEG1                     0
+#define RSMU_BASE__INST2_SEG2                     0
+#define RSMU_BASE__INST2_SEG3                     0
+#define RSMU_BASE__INST2_SEG4                     0
+
+#define RSMU_BASE__INST3_SEG0                     0
+#define RSMU_BASE__INST3_SEG1                     0
+#define RSMU_BASE__INST3_SEG2                     0
+#define RSMU_BASE__INST3_SEG3                     0
+#define RSMU_BASE__INST3_SEG4                     0
+
+#define RSMU_BASE__INST4_SEG0                     0
+#define RSMU_BASE__INST4_SEG1                     0
+#define RSMU_BASE__INST4_SEG2                     0
+#define RSMU_BASE__INST4_SEG3                     0
+#define RSMU_BASE__INST4_SEG4                     0
+
+#define HDP_BASE__INST0_SEG0                      0x00000F20
+#define HDP_BASE__INST0_SEG1                      0
+#define HDP_BASE__INST0_SEG2                      0
+#define HDP_BASE__INST0_SEG3                      0
+#define HDP_BASE__INST0_SEG4                      0
+
+#define HDP_BASE__INST1_SEG0                      0
+#define HDP_BASE__INST1_SEG1                      0
+#define HDP_BASE__INST1_SEG2                      0
+#define HDP_BASE__INST1_SEG3                      0
+#define HDP_BASE__INST1_SEG4                      0
+
+#define HDP_BASE__INST2_SEG0                      0
+#define HDP_BASE__INST2_SEG1                      0
+#define HDP_BASE__INST2_SEG2                      0
+#define HDP_BASE__INST2_SEG3                      0
+#define HDP_BASE__INST2_SEG4                      0
+
+#define HDP_BASE__INST3_SEG0                      0
+#define HDP_BASE__INST3_SEG1                      0
+#define HDP_BASE__INST3_SEG2                      0
+#define HDP_BASE__INST3_SEG3                      0
+#define HDP_BASE__INST3_SEG4                      0
+
+#define HDP_BASE__INST4_SEG0                      0
+#define HDP_BASE__INST4_SEG1                      0
+#define HDP_BASE__INST4_SEG2                      0
+#define HDP_BASE__INST4_SEG3                      0
+#define HDP_BASE__INST4_SEG4                      0
+
+#define OSSSYS_BASE__INST0_SEG0                   0x000010A0
+#define OSSSYS_BASE__INST0_SEG1                   0
+#define OSSSYS_BASE__INST0_SEG2                   0
+#define OSSSYS_BASE__INST0_SEG3                   0
+#define OSSSYS_BASE__INST0_SEG4                   0
+
+#define OSSSYS_BASE__INST1_SEG0                   0
+#define OSSSYS_BASE__INST1_SEG1                   0
+#define OSSSYS_BASE__INST1_SEG2                   0
+#define OSSSYS_BASE__INST1_SEG3                   0
+#define OSSSYS_BASE__INST1_SEG4                   0
+
+#define OSSSYS_BASE__INST2_SEG0                   0
+#define OSSSYS_BASE__INST2_SEG1                   0
+#define OSSSYS_BASE__INST2_SEG2                   0
+#define OSSSYS_BASE__INST2_SEG3                   0
+#define OSSSYS_BASE__INST2_SEG4                   0
+
+#define OSSSYS_BASE__INST3_SEG0                   0
+#define OSSSYS_BASE__INST3_SEG1                   0
+#define OSSSYS_BASE__INST3_SEG2                   0
+#define OSSSYS_BASE__INST3_SEG3                   0
+#define OSSSYS_BASE__INST3_SEG4                   0
+
+#define OSSSYS_BASE__INST4_SEG0                   0
+#define OSSSYS_BASE__INST4_SEG1                   0
+#define OSSSYS_BASE__INST4_SEG2                   0
+#define OSSSYS_BASE__INST4_SEG3                   0
+#define OSSSYS_BASE__INST4_SEG4                   0
+
+#define SDMA0_BASE__INST0_SEG0                    0x00001260
+#define SDMA0_BASE__INST0_SEG1                    0
+#define SDMA0_BASE__INST0_SEG2                    0
+#define SDMA0_BASE__INST0_SEG3                    0
+#define SDMA0_BASE__INST0_SEG4                    0
+
+#define SDMA0_BASE__INST1_SEG0                    0
+#define SDMA0_BASE__INST1_SEG1                    0
+#define SDMA0_BASE__INST1_SEG2                    0
+#define SDMA0_BASE__INST1_SEG3                    0
+#define SDMA0_BASE__INST1_SEG4                    0
+
+#define SDMA0_BASE__INST2_SEG0                    0
+#define SDMA0_BASE__INST2_SEG1                    0
+#define SDMA0_BASE__INST2_SEG2                    0
+#define SDMA0_BASE__INST2_SEG3                    0
+#define SDMA0_BASE__INST2_SEG4                    0
+
+#define SDMA0_BASE__INST3_SEG0                    0
+#define SDMA0_BASE__INST3_SEG1                    0
+#define SDMA0_BASE__INST3_SEG2                    0
+#define SDMA0_BASE__INST3_SEG3                    0
+#define SDMA0_BASE__INST3_SEG4                    0
+
+#define SDMA0_BASE__INST4_SEG0                    0
+#define SDMA0_BASE__INST4_SEG1                    0
+#define SDMA0_BASE__INST4_SEG2                    0
+#define SDMA0_BASE__INST4_SEG3                    0
+#define SDMA0_BASE__INST4_SEG4                    0
+
+#define SDMA1_BASE__INST0_SEG0                    0x00001460
+#define SDMA1_BASE__INST0_SEG1                    0
+#define SDMA1_BASE__INST0_SEG2                    0
+#define SDMA1_BASE__INST0_SEG3                    0
+#define SDMA1_BASE__INST0_SEG4                    0
+
+#define SDMA1_BASE__INST1_SEG0                    0
+#define SDMA1_BASE__INST1_SEG1                    0
+#define SDMA1_BASE__INST1_SEG2                    0
+#define SDMA1_BASE__INST1_SEG3                    0
+#define SDMA1_BASE__INST1_SEG4                    0
+
+#define SDMA1_BASE__INST2_SEG0                    0
+#define SDMA1_BASE__INST2_SEG1                    0
+#define SDMA1_BASE__INST2_SEG2                    0
+#define SDMA1_BASE__INST2_SEG3                    0
+#define SDMA1_BASE__INST2_SEG4                    0
+
+#define SDMA1_BASE__INST3_SEG0                    0
+#define SDMA1_BASE__INST3_SEG1                    0
+#define SDMA1_BASE__INST3_SEG2                    0
+#define SDMA1_BASE__INST3_SEG3                    0
+#define SDMA1_BASE__INST3_SEG4                    0
+
+#define SDMA1_BASE__INST4_SEG0                    0
+#define SDMA1_BASE__INST4_SEG1                    0
+#define SDMA1_BASE__INST4_SEG2                    0
+#define SDMA1_BASE__INST4_SEG3                    0
+#define SDMA1_BASE__INST4_SEG4                    0
+
+#define XDMA_BASE__INST0_SEG0                     0x00003400
+#define XDMA_BASE__INST0_SEG1                     0
+#define XDMA_BASE__INST0_SEG2                     0
+#define XDMA_BASE__INST0_SEG3                     0
+#define XDMA_BASE__INST0_SEG4                     0
+
+#define XDMA_BASE__INST1_SEG0                     0
+#define XDMA_BASE__INST1_SEG1                     0
+#define XDMA_BASE__INST1_SEG2                     0
+#define XDMA_BASE__INST1_SEG3                     0
+#define XDMA_BASE__INST1_SEG4                     0
+
+#define XDMA_BASE__INST2_SEG0                     0
+#define XDMA_BASE__INST2_SEG1                     0
+#define XDMA_BASE__INST2_SEG2                     0
+#define XDMA_BASE__INST2_SEG3                     0
+#define XDMA_BASE__INST2_SEG4                     0
+
+#define XDMA_BASE__INST3_SEG0                     0
+#define XDMA_BASE__INST3_SEG1                     0
+#define XDMA_BASE__INST3_SEG2                     0
+#define XDMA_BASE__INST3_SEG3                     0
+#define XDMA_BASE__INST3_SEG4                     0
+
+#define XDMA_BASE__INST4_SEG0                     0
+#define XDMA_BASE__INST4_SEG1                     0
+#define XDMA_BASE__INST4_SEG2                     0
+#define XDMA_BASE__INST4_SEG3                     0
+#define XDMA_BASE__INST4_SEG4                     0
+
+#define UMC_BASE__INST0_SEG0                      0x00014000
+#define UMC_BASE__INST0_SEG1                      0
+#define UMC_BASE__INST0_SEG2                      0
+#define UMC_BASE__INST0_SEG3                      0
+#define UMC_BASE__INST0_SEG4                      0
+
+#define UMC_BASE__INST1_SEG0                      0
+#define UMC_BASE__INST1_SEG1                      0
+#define UMC_BASE__INST1_SEG2                      0
+#define UMC_BASE__INST1_SEG3                      0
+#define UMC_BASE__INST1_SEG4                      0
+
+#define UMC_BASE__INST2_SEG0                      0
+#define UMC_BASE__INST2_SEG1                      0
+#define UMC_BASE__INST2_SEG2                      0
+#define UMC_BASE__INST2_SEG3                      0
+#define UMC_BASE__INST2_SEG4                      0
+
+#define UMC_BASE__INST3_SEG0                      0
+#define UMC_BASE__INST3_SEG1                      0
+#define UMC_BASE__INST3_SEG2                      0
+#define UMC_BASE__INST3_SEG3                      0
+#define UMC_BASE__INST3_SEG4                      0
+
+#define UMC_BASE__INST4_SEG0                      0
+#define UMC_BASE__INST4_SEG1                      0
+#define UMC_BASE__INST4_SEG2                      0
+#define UMC_BASE__INST4_SEG3                      0
+#define UMC_BASE__INST4_SEG4                      0
+
+#define THM_BASE__INST0_SEG0                      0x00016600
+#define THM_BASE__INST0_SEG1                      0
+#define THM_BASE__INST0_SEG2                      0
+#define THM_BASE__INST0_SEG3                      0
+#define THM_BASE__INST0_SEG4                      0
+
+#define THM_BASE__INST1_SEG0                      0
+#define THM_BASE__INST1_SEG1                      0
+#define THM_BASE__INST1_SEG2                      0
+#define THM_BASE__INST1_SEG3                      0
+#define THM_BASE__INST1_SEG4                      0
+
+#define THM_BASE__INST2_SEG0                      0
+#define THM_BASE__INST2_SEG1                      0
+#define THM_BASE__INST2_SEG2                      0
+#define THM_BASE__INST2_SEG3                      0
+#define THM_BASE__INST2_SEG4                      0
+
+#define THM_BASE__INST3_SEG0                      0
+#define THM_BASE__INST3_SEG1                      0
+#define THM_BASE__INST3_SEG2                      0
+#define THM_BASE__INST3_SEG3                      0
+#define THM_BASE__INST3_SEG4                      0
+
+#define THM_BASE__INST4_SEG0                      0
+#define THM_BASE__INST4_SEG1                      0
+#define THM_BASE__INST4_SEG2                      0
+#define THM_BASE__INST4_SEG3                      0
+#define THM_BASE__INST4_SEG4                      0
+
+#define SMUIO_BASE__INST0_SEG0                    0x00016800
+#define SMUIO_BASE__INST0_SEG1                    0
+#define SMUIO_BASE__INST0_SEG2                    0
+#define SMUIO_BASE__INST0_SEG3                    0
+#define SMUIO_BASE__INST0_SEG4                    0
+
+#define SMUIO_BASE__INST1_SEG0                    0
+#define SMUIO_BASE__INST1_SEG1                    0
+#define SMUIO_BASE__INST1_SEG2                    0
+#define SMUIO_BASE__INST1_SEG3                    0
+#define SMUIO_BASE__INST1_SEG4                    0
+
+#define SMUIO_BASE__INST2_SEG0                    0
+#define SMUIO_BASE__INST2_SEG1                    0
+#define SMUIO_BASE__INST2_SEG2                    0
+#define SMUIO_BASE__INST2_SEG3                    0
+#define SMUIO_BASE__INST2_SEG4                    0
+
+#define SMUIO_BASE__INST3_SEG0                    0
+#define SMUIO_BASE__INST3_SEG1                    0
+#define SMUIO_BASE__INST3_SEG2                    0
+#define SMUIO_BASE__INST3_SEG3                    0
+#define SMUIO_BASE__INST3_SEG4                    0
+
+#define SMUIO_BASE__INST4_SEG0                    0
+#define SMUIO_BASE__INST4_SEG1                    0
+#define SMUIO_BASE__INST4_SEG2                    0
+#define SMUIO_BASE__INST4_SEG3                    0
+#define SMUIO_BASE__INST4_SEG4                    0
+
+#define PWR_BASE__INST0_SEG0                      0x00016A00
+#define PWR_BASE__INST0_SEG1                      0
+#define PWR_BASE__INST0_SEG2                      0
+#define PWR_BASE__INST0_SEG3                      0
+#define PWR_BASE__INST0_SEG4                      0
+
+#define PWR_BASE__INST1_SEG0                      0
+#define PWR_BASE__INST1_SEG1                      0
+#define PWR_BASE__INST1_SEG2                      0
+#define PWR_BASE__INST1_SEG3                      0
+#define PWR_BASE__INST1_SEG4                      0
+
+#define PWR_BASE__INST2_SEG0                      0
+#define PWR_BASE__INST2_SEG1                      0
+#define PWR_BASE__INST2_SEG2                      0
+#define PWR_BASE__INST2_SEG3                      0
+#define PWR_BASE__INST2_SEG4                      0
+
+#define PWR_BASE__INST3_SEG0                      0
+#define PWR_BASE__INST3_SEG1                      0
+#define PWR_BASE__INST3_SEG2                      0
+#define PWR_BASE__INST3_SEG3                      0
+#define PWR_BASE__INST3_SEG4                      0
+
+#define PWR_BASE__INST4_SEG0                      0
+#define PWR_BASE__INST4_SEG1                      0
+#define PWR_BASE__INST4_SEG2                      0
+#define PWR_BASE__INST4_SEG3                      0
+#define PWR_BASE__INST4_SEG4                      0
+
+#define CLK_BASE__INST0_SEG0                      0x00016C00
+#define CLK_BASE__INST0_SEG1                      0
+#define CLK_BASE__INST0_SEG2                      0
+#define CLK_BASE__INST0_SEG3                      0
+#define CLK_BASE__INST0_SEG4                      0
+
+#define CLK_BASE__INST1_SEG0                      0x00016E00
+#define CLK_BASE__INST1_SEG1                      0
+#define CLK_BASE__INST1_SEG2                      0
+#define CLK_BASE__INST1_SEG3                      0
+#define CLK_BASE__INST1_SEG4                      0
+
+#define CLK_BASE__INST2_SEG0                      0x00017000
+#define CLK_BASE__INST2_SEG1                      0
+#define CLK_BASE__INST2_SEG2                      0
+#define CLK_BASE__INST2_SEG3                      0
+#define CLK_BASE__INST2_SEG4                      0
+
+#define CLK_BASE__INST3_SEG0                      0x00017200
+#define CLK_BASE__INST3_SEG1                      0
+#define CLK_BASE__INST3_SEG2                      0
+#define CLK_BASE__INST3_SEG3                      0
+#define CLK_BASE__INST3_SEG4                      0
+
+#define CLK_BASE__INST4_SEG0                      0x00017E00
+#define CLK_BASE__INST4_SEG1                      0
+#define CLK_BASE__INST4_SEG2                      0
+#define CLK_BASE__INST4_SEG3                      0
+#define CLK_BASE__INST4_SEG4                      0
+
+#define FUSE_BASE__INST0_SEG0                     0x00017400
+#define FUSE_BASE__INST0_SEG1                     0
+#define FUSE_BASE__INST0_SEG2                     0
+#define FUSE_BASE__INST0_SEG3                     0
+#define FUSE_BASE__INST0_SEG4                     0
+
+#define FUSE_BASE__INST1_SEG0                     0
+#define FUSE_BASE__INST1_SEG1                     0
+#define FUSE_BASE__INST1_SEG2                     0
+#define FUSE_BASE__INST1_SEG3                     0
+#define FUSE_BASE__INST1_SEG4                     0
+
+#define FUSE_BASE__INST2_SEG0                     0
+#define FUSE_BASE__INST2_SEG1                     0
+#define FUSE_BASE__INST2_SEG2                     0
+#define FUSE_BASE__INST2_SEG3                     0
+#define FUSE_BASE__INST2_SEG4                     0
+
+#define FUSE_BASE__INST3_SEG0                     0
+#define FUSE_BASE__INST3_SEG1                     0
+#define FUSE_BASE__INST3_SEG2                     0
+#define FUSE_BASE__INST3_SEG3                     0
+#define FUSE_BASE__INST3_SEG4                     0
+
+#define FUSE_BASE__INST4_SEG0                     0
+#define FUSE_BASE__INST4_SEG1                     0
+#define FUSE_BASE__INST4_SEG2                     0
+#define FUSE_BASE__INST4_SEG3                     0
+#define FUSE_BASE__INST4_SEG4                     0
+
+
+#endif
+
-- 
2.5.5
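The SEG0..SEG4 values above are per-instance base addresses for each IP block; zero-valued segments are unused for that instance. A driver typically resolves an absolute register address by adding a register's offset within a segment to that segment's base. A minimal sketch of that lookup (the table layout and helper name are illustrative, not the actual amdgpu code):

```python
# Segment base addresses for the CLK block, instance 2, copied from the
# header above (SEG0..SEG4); zero entries are unused segments.
CLK_BASE_INST2 = [0x00017000, 0, 0, 0, 0]

def reg_address(seg_bases, segment, reg_offset):
    """Absolute register address = segment base + offset within the segment."""
    return seg_bases[segment] + reg_offset

# A register at offset 0x10 in segment 0 of CLK instance 2:
addr = reg_address(CLK_BASE_INST2, 0, 0x10)  # 0x00017010
```

The real driver builds an equivalent table per IP, per instance, per segment and indexes it the same way when macros expand a register name to an address.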

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 101+ messages in thread

* [PATCH 021/100] drm/amd: Add MQD structs for GFX V9
       [not found] ` <1490041835-11255-1-git-send-email-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (4 preceding siblings ...)
  2017-03-20 20:29   ` [PATCH 005/100] drm/amdgpu: add soc15ip.h Alex Deucher
@ 2017-03-20 20:29   ` Alex Deucher
  2017-03-20 20:29   ` [PATCH 022/100] drm/amdgpu: add gfx9 clearstate header Alex Deucher
                     ` (79 subsequent siblings)
  85 siblings, 0 replies; 101+ messages in thread
From: Alex Deucher @ 2017-03-20 20:29 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Alex Deucher, Felix Kuehling

From: Felix Kuehling <Felix.Kuehling@amd.com>

This header defines the gfx v9 MQD (memory queue descriptor) structures used by the MEC compute queues (v9_mqd) and the SDMA queues (v9_sdma_mqd).

Signed-off-by: Felix Kuehling <Felix.Kuehling@amd.com>
Reviewed-by: Shaoyun Liu <Shaoyun.Liu@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/include/v9_structs.h | 675 +++++++++++++++++++++++++++++++
 1 file changed, 675 insertions(+)
 create mode 100644 drivers/gpu/drm/amd/include/v9_structs.h

diff --git a/drivers/gpu/drm/amd/include/v9_structs.h b/drivers/gpu/drm/amd/include/v9_structs.h
new file mode 100644
index 0000000..e7508a3
--- /dev/null
+++ b/drivers/gpu/drm/amd/include/v9_structs.h
@@ -0,0 +1,675 @@
+/*
+ * Copyright 2012-2016 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+
+#ifndef V9_STRUCTS_H_
+#define V9_STRUCTS_H_
+
+struct v9_sdma_mqd {
+	uint32_t sdmax_rlcx_rb_cntl;
+	uint32_t sdmax_rlcx_rb_base;
+	uint32_t sdmax_rlcx_rb_base_hi;
+	uint32_t sdmax_rlcx_rb_rptr;
+	uint32_t sdmax_rlcx_rb_wptr;
+	uint32_t sdmax_rlcx_rb_wptr_poll_cntl;
+	uint32_t sdmax_rlcx_rb_wptr_poll_addr_hi;
+	uint32_t sdmax_rlcx_rb_wptr_poll_addr_lo;
+	uint32_t sdmax_rlcx_rb_rptr_addr_hi;
+	uint32_t sdmax_rlcx_rb_rptr_addr_lo;
+	uint32_t sdmax_rlcx_ib_cntl;
+	uint32_t sdmax_rlcx_ib_rptr;
+	uint32_t sdmax_rlcx_ib_offset;
+	uint32_t sdmax_rlcx_ib_base_lo;
+	uint32_t sdmax_rlcx_ib_base_hi;
+	uint32_t sdmax_rlcx_ib_size;
+	uint32_t sdmax_rlcx_skip_cntl;
+	uint32_t sdmax_rlcx_context_status;
+	uint32_t sdmax_rlcx_doorbell;
+	uint32_t sdmax_rlcx_virtual_addr;
+	uint32_t sdmax_rlcx_ape1_cntl;
+	uint32_t sdmax_rlcx_doorbell_log;
+	uint32_t reserved_22;
+	uint32_t reserved_23;
+	uint32_t reserved_24;
+	uint32_t reserved_25;
+	uint32_t reserved_26;
+	uint32_t reserved_27;
+	uint32_t reserved_28;
+	uint32_t reserved_29;
+	uint32_t reserved_30;
+	uint32_t reserved_31;
+	uint32_t reserved_32;
+	uint32_t reserved_33;
+	uint32_t reserved_34;
+	uint32_t reserved_35;
+	uint32_t reserved_36;
+	uint32_t reserved_37;
+	uint32_t reserved_38;
+	uint32_t reserved_39;
+	uint32_t reserved_40;
+	uint32_t reserved_41;
+	uint32_t reserved_42;
+	uint32_t reserved_43;
+	uint32_t reserved_44;
+	uint32_t reserved_45;
+	uint32_t reserved_46;
+	uint32_t reserved_47;
+	uint32_t reserved_48;
+	uint32_t reserved_49;
+	uint32_t reserved_50;
+	uint32_t reserved_51;
+	uint32_t reserved_52;
+	uint32_t reserved_53;
+	uint32_t reserved_54;
+	uint32_t reserved_55;
+	uint32_t reserved_56;
+	uint32_t reserved_57;
+	uint32_t reserved_58;
+	uint32_t reserved_59;
+	uint32_t reserved_60;
+	uint32_t reserved_61;
+	uint32_t reserved_62;
+	uint32_t reserved_63;
+	uint32_t reserved_64;
+	uint32_t reserved_65;
+	uint32_t reserved_66;
+	uint32_t reserved_67;
+	uint32_t reserved_68;
+	uint32_t reserved_69;
+	uint32_t reserved_70;
+	uint32_t reserved_71;
+	uint32_t reserved_72;
+	uint32_t reserved_73;
+	uint32_t reserved_74;
+	uint32_t reserved_75;
+	uint32_t reserved_76;
+	uint32_t reserved_77;
+	uint32_t reserved_78;
+	uint32_t reserved_79;
+	uint32_t reserved_80;
+	uint32_t reserved_81;
+	uint32_t reserved_82;
+	uint32_t reserved_83;
+	uint32_t reserved_84;
+	uint32_t reserved_85;
+	uint32_t reserved_86;
+	uint32_t reserved_87;
+	uint32_t reserved_88;
+	uint32_t reserved_89;
+	uint32_t reserved_90;
+	uint32_t reserved_91;
+	uint32_t reserved_92;
+	uint32_t reserved_93;
+	uint32_t reserved_94;
+	uint32_t reserved_95;
+	uint32_t reserved_96;
+	uint32_t reserved_97;
+	uint32_t reserved_98;
+	uint32_t reserved_99;
+	uint32_t reserved_100;
+	uint32_t reserved_101;
+	uint32_t reserved_102;
+	uint32_t reserved_103;
+	uint32_t reserved_104;
+	uint32_t reserved_105;
+	uint32_t reserved_106;
+	uint32_t reserved_107;
+	uint32_t reserved_108;
+	uint32_t reserved_109;
+	uint32_t reserved_110;
+	uint32_t reserved_111;
+	uint32_t reserved_112;
+	uint32_t reserved_113;
+	uint32_t reserved_114;
+	uint32_t reserved_115;
+	uint32_t reserved_116;
+	uint32_t reserved_117;
+	uint32_t reserved_118;
+	uint32_t reserved_119;
+	uint32_t reserved_120;
+	uint32_t reserved_121;
+	uint32_t reserved_122;
+	uint32_t reserved_123;
+	uint32_t reserved_124;
+	uint32_t reserved_125;
+	uint32_t reserved_126;
+	uint32_t reserved_127;
+	uint32_t sdma_engine_id;
+	uint32_t sdma_queue_id;
+};
+
+struct v9_mqd {
+	uint32_t header;
+	uint32_t compute_dispatch_initiator;
+	uint32_t compute_dim_x;
+	uint32_t compute_dim_y;
+	uint32_t compute_dim_z;
+	uint32_t compute_start_x;
+	uint32_t compute_start_y;
+	uint32_t compute_start_z;
+	uint32_t compute_num_thread_x;
+	uint32_t compute_num_thread_y;
+	uint32_t compute_num_thread_z;
+	uint32_t compute_pipelinestat_enable;
+	uint32_t compute_perfcount_enable;
+	uint32_t compute_pgm_lo;
+	uint32_t compute_pgm_hi;
+	uint32_t compute_tba_lo;
+	uint32_t compute_tba_hi;
+	uint32_t compute_tma_lo;
+	uint32_t compute_tma_hi;
+	uint32_t compute_pgm_rsrc1;
+	uint32_t compute_pgm_rsrc2;
+	uint32_t compute_vmid;
+	uint32_t compute_resource_limits;
+	uint32_t compute_static_thread_mgmt_se0;
+	uint32_t compute_static_thread_mgmt_se1;
+	uint32_t compute_tmpring_size;
+	uint32_t compute_static_thread_mgmt_se2;
+	uint32_t compute_static_thread_mgmt_se3;
+	uint32_t compute_restart_x;
+	uint32_t compute_restart_y;
+	uint32_t compute_restart_z;
+	uint32_t compute_thread_trace_enable;
+	uint32_t compute_misc_reserved;
+	uint32_t compute_dispatch_id;
+	uint32_t compute_threadgroup_id;
+	uint32_t compute_relaunch;
+	uint32_t compute_wave_restore_addr_lo;
+	uint32_t compute_wave_restore_addr_hi;
+	uint32_t compute_wave_restore_control;
+	uint32_t reserved_39;
+	uint32_t reserved_40;
+	uint32_t reserved_41;
+	uint32_t reserved_42;
+	uint32_t reserved_43;
+	uint32_t reserved_44;
+	uint32_t reserved_45;
+	uint32_t reserved_46;
+	uint32_t reserved_47;
+	uint32_t reserved_48;
+	uint32_t reserved_49;
+	uint32_t reserved_50;
+	uint32_t reserved_51;
+	uint32_t reserved_52;
+	uint32_t reserved_53;
+	uint32_t reserved_54;
+	uint32_t reserved_55;
+	uint32_t reserved_56;
+	uint32_t reserved_57;
+	uint32_t reserved_58;
+	uint32_t reserved_59;
+	uint32_t reserved_60;
+	uint32_t reserved_61;
+	uint32_t reserved_62;
+	uint32_t reserved_63;
+	uint32_t reserved_64;
+	uint32_t compute_user_data_0;
+	uint32_t compute_user_data_1;
+	uint32_t compute_user_data_2;
+	uint32_t compute_user_data_3;
+	uint32_t compute_user_data_4;
+	uint32_t compute_user_data_5;
+	uint32_t compute_user_data_6;
+	uint32_t compute_user_data_7;
+	uint32_t compute_user_data_8;
+	uint32_t compute_user_data_9;
+	uint32_t compute_user_data_10;
+	uint32_t compute_user_data_11;
+	uint32_t compute_user_data_12;
+	uint32_t compute_user_data_13;
+	uint32_t compute_user_data_14;
+	uint32_t compute_user_data_15;
+	uint32_t cp_compute_csinvoc_count_lo;
+	uint32_t cp_compute_csinvoc_count_hi;
+	uint32_t reserved_83;
+	uint32_t reserved_84;
+	uint32_t reserved_85;
+	uint32_t cp_mqd_query_time_lo;
+	uint32_t cp_mqd_query_time_hi;
+	uint32_t cp_mqd_connect_start_time_lo;
+	uint32_t cp_mqd_connect_start_time_hi;
+	uint32_t cp_mqd_connect_end_time_lo;
+	uint32_t cp_mqd_connect_end_time_hi;
+	uint32_t cp_mqd_connect_end_wf_count;
+	uint32_t cp_mqd_connect_end_pq_rptr;
+	uint32_t cp_mqd_connect_end_pq_wptr;
+	uint32_t cp_mqd_connect_end_ib_rptr;
+	uint32_t cp_mqd_readindex_lo;
+	uint32_t cp_mqd_readindex_hi;
+	uint32_t cp_mqd_save_start_time_lo;
+	uint32_t cp_mqd_save_start_time_hi;
+	uint32_t cp_mqd_save_end_time_lo;
+	uint32_t cp_mqd_save_end_time_hi;
+	uint32_t cp_mqd_restore_start_time_lo;
+	uint32_t cp_mqd_restore_start_time_hi;
+	uint32_t cp_mqd_restore_end_time_lo;
+	uint32_t cp_mqd_restore_end_time_hi;
+	uint32_t disable_queue;
+	uint32_t reserved_107;
+	uint32_t gds_cs_ctxsw_cnt0;
+	uint32_t gds_cs_ctxsw_cnt1;
+	uint32_t gds_cs_ctxsw_cnt2;
+	uint32_t gds_cs_ctxsw_cnt3;
+	uint32_t reserved_112;
+	uint32_t reserved_113;
+	uint32_t cp_pq_exe_status_lo;
+	uint32_t cp_pq_exe_status_hi;
+	uint32_t cp_packet_id_lo;
+	uint32_t cp_packet_id_hi;
+	uint32_t cp_packet_exe_status_lo;
+	uint32_t cp_packet_exe_status_hi;
+	uint32_t gds_save_base_addr_lo;
+	uint32_t gds_save_base_addr_hi;
+	uint32_t gds_save_mask_lo;
+	uint32_t gds_save_mask_hi;
+	uint32_t ctx_save_base_addr_lo;
+	uint32_t ctx_save_base_addr_hi;
+	uint32_t reserved_126;
+	uint32_t reserved_127;
+	uint32_t cp_mqd_base_addr_lo;
+	uint32_t cp_mqd_base_addr_hi;
+	uint32_t cp_hqd_active;
+	uint32_t cp_hqd_vmid;
+	uint32_t cp_hqd_persistent_state;
+	uint32_t cp_hqd_pipe_priority;
+	uint32_t cp_hqd_queue_priority;
+	uint32_t cp_hqd_quantum;
+	uint32_t cp_hqd_pq_base_lo;
+	uint32_t cp_hqd_pq_base_hi;
+	uint32_t cp_hqd_pq_rptr;
+	uint32_t cp_hqd_pq_rptr_report_addr_lo;
+	uint32_t cp_hqd_pq_rptr_report_addr_hi;
+	uint32_t cp_hqd_pq_wptr_poll_addr_lo;
+	uint32_t cp_hqd_pq_wptr_poll_addr_hi;
+	uint32_t cp_hqd_pq_doorbell_control;
+	uint32_t reserved_144;
+	uint32_t cp_hqd_pq_control;
+	uint32_t cp_hqd_ib_base_addr_lo;
+	uint32_t cp_hqd_ib_base_addr_hi;
+	uint32_t cp_hqd_ib_rptr;
+	uint32_t cp_hqd_ib_control;
+	uint32_t cp_hqd_iq_timer;
+	uint32_t cp_hqd_iq_rptr;
+	uint32_t cp_hqd_dequeue_request;
+	uint32_t cp_hqd_dma_offload;
+	uint32_t cp_hqd_sema_cmd;
+	uint32_t cp_hqd_msg_type;
+	uint32_t cp_hqd_atomic0_preop_lo;
+	uint32_t cp_hqd_atomic0_preop_hi;
+	uint32_t cp_hqd_atomic1_preop_lo;
+	uint32_t cp_hqd_atomic1_preop_hi;
+	uint32_t cp_hqd_hq_status0;
+	uint32_t cp_hqd_hq_control0;
+	uint32_t cp_mqd_control;
+	uint32_t cp_hqd_hq_status1;
+	uint32_t cp_hqd_hq_control1;
+	uint32_t cp_hqd_eop_base_addr_lo;
+	uint32_t cp_hqd_eop_base_addr_hi;
+	uint32_t cp_hqd_eop_control;
+	uint32_t cp_hqd_eop_rptr;
+	uint32_t cp_hqd_eop_wptr;
+	uint32_t cp_hqd_eop_done_events;
+	uint32_t cp_hqd_ctx_save_base_addr_lo;
+	uint32_t cp_hqd_ctx_save_base_addr_hi;
+	uint32_t cp_hqd_ctx_save_control;
+	uint32_t cp_hqd_cntl_stack_offset;
+	uint32_t cp_hqd_cntl_stack_size;
+	uint32_t cp_hqd_wg_state_offset;
+	uint32_t cp_hqd_ctx_save_size;
+	uint32_t cp_hqd_gds_resource_state;
+	uint32_t cp_hqd_error;
+	uint32_t cp_hqd_eop_wptr_mem;
+	uint32_t cp_hqd_aql_control;
+	uint32_t cp_hqd_pq_wptr_lo;
+	uint32_t cp_hqd_pq_wptr_hi;
+	uint32_t reserved_184;
+	uint32_t reserved_185;
+	uint32_t reserved_186;
+	uint32_t reserved_187;
+	uint32_t reserved_188;
+	uint32_t reserved_189;
+	uint32_t reserved_190;
+	uint32_t reserved_191;
+	uint32_t iqtimer_pkt_header;
+	uint32_t iqtimer_pkt_dw0;
+	uint32_t iqtimer_pkt_dw1;
+	uint32_t iqtimer_pkt_dw2;
+	uint32_t iqtimer_pkt_dw3;
+	uint32_t iqtimer_pkt_dw4;
+	uint32_t iqtimer_pkt_dw5;
+	uint32_t iqtimer_pkt_dw6;
+	uint32_t iqtimer_pkt_dw7;
+	uint32_t iqtimer_pkt_dw8;
+	uint32_t iqtimer_pkt_dw9;
+	uint32_t iqtimer_pkt_dw10;
+	uint32_t iqtimer_pkt_dw11;
+	uint32_t iqtimer_pkt_dw12;
+	uint32_t iqtimer_pkt_dw13;
+	uint32_t iqtimer_pkt_dw14;
+	uint32_t iqtimer_pkt_dw15;
+	uint32_t iqtimer_pkt_dw16;
+	uint32_t iqtimer_pkt_dw17;
+	uint32_t iqtimer_pkt_dw18;
+	uint32_t iqtimer_pkt_dw19;
+	uint32_t iqtimer_pkt_dw20;
+	uint32_t iqtimer_pkt_dw21;
+	uint32_t iqtimer_pkt_dw22;
+	uint32_t iqtimer_pkt_dw23;
+	uint32_t iqtimer_pkt_dw24;
+	uint32_t iqtimer_pkt_dw25;
+	uint32_t iqtimer_pkt_dw26;
+	uint32_t iqtimer_pkt_dw27;
+	uint32_t iqtimer_pkt_dw28;
+	uint32_t iqtimer_pkt_dw29;
+	uint32_t iqtimer_pkt_dw30;
+	uint32_t iqtimer_pkt_dw31;
+	uint32_t reserved_225;
+	uint32_t reserved_226;
+	uint32_t reserved_227;
+	uint32_t set_resources_header;
+	uint32_t set_resources_dw1;
+	uint32_t set_resources_dw2;
+	uint32_t set_resources_dw3;
+	uint32_t set_resources_dw4;
+	uint32_t set_resources_dw5;
+	uint32_t set_resources_dw6;
+	uint32_t set_resources_dw7;
+	uint32_t reserved_236;
+	uint32_t reserved_237;
+	uint32_t reserved_238;
+	uint32_t reserved_239;
+	uint32_t queue_doorbell_id0;
+	uint32_t queue_doorbell_id1;
+	uint32_t queue_doorbell_id2;
+	uint32_t queue_doorbell_id3;
+	uint32_t queue_doorbell_id4;
+	uint32_t queue_doorbell_id5;
+	uint32_t queue_doorbell_id6;
+	uint32_t queue_doorbell_id7;
+	uint32_t queue_doorbell_id8;
+	uint32_t queue_doorbell_id9;
+	uint32_t queue_doorbell_id10;
+	uint32_t queue_doorbell_id11;
+	uint32_t queue_doorbell_id12;
+	uint32_t queue_doorbell_id13;
+	uint32_t queue_doorbell_id14;
+	uint32_t queue_doorbell_id15;
+	uint32_t reserved_256;
+	uint32_t reserved_257;
+	uint32_t reserved_258;
+	uint32_t reserved_259;
+	uint32_t reserved_260;
+	uint32_t reserved_261;
+	uint32_t reserved_262;
+	uint32_t reserved_263;
+	uint32_t reserved_264;
+	uint32_t reserved_265;
+	uint32_t reserved_266;
+	uint32_t reserved_267;
+	uint32_t reserved_268;
+	uint32_t reserved_269;
+	uint32_t reserved_270;
+	uint32_t reserved_271;
+	uint32_t reserved_272;
+	uint32_t reserved_273;
+	uint32_t reserved_274;
+	uint32_t reserved_275;
+	uint32_t reserved_276;
+	uint32_t reserved_277;
+	uint32_t reserved_278;
+	uint32_t reserved_279;
+	uint32_t reserved_280;
+	uint32_t reserved_281;
+	uint32_t reserved_282;
+	uint32_t reserved_283;
+	uint32_t reserved_284;
+	uint32_t reserved_285;
+	uint32_t reserved_286;
+	uint32_t reserved_287;
+	uint32_t reserved_288;
+	uint32_t reserved_289;
+	uint32_t reserved_290;
+	uint32_t reserved_291;
+	uint32_t reserved_292;
+	uint32_t reserved_293;
+	uint32_t reserved_294;
+	uint32_t reserved_295;
+	uint32_t reserved_296;
+	uint32_t reserved_297;
+	uint32_t reserved_298;
+	uint32_t reserved_299;
+	uint32_t reserved_300;
+	uint32_t reserved_301;
+	uint32_t reserved_302;
+	uint32_t reserved_303;
+	uint32_t reserved_304;
+	uint32_t reserved_305;
+	uint32_t reserved_306;
+	uint32_t reserved_307;
+	uint32_t reserved_308;
+	uint32_t reserved_309;
+	uint32_t reserved_310;
+	uint32_t reserved_311;
+	uint32_t reserved_312;
+	uint32_t reserved_313;
+	uint32_t reserved_314;
+	uint32_t reserved_315;
+	uint32_t reserved_316;
+	uint32_t reserved_317;
+	uint32_t reserved_318;
+	uint32_t reserved_319;
+	uint32_t reserved_320;
+	uint32_t reserved_321;
+	uint32_t reserved_322;
+	uint32_t reserved_323;
+	uint32_t reserved_324;
+	uint32_t reserved_325;
+	uint32_t reserved_326;
+	uint32_t reserved_327;
+	uint32_t reserved_328;
+	uint32_t reserved_329;
+	uint32_t reserved_330;
+	uint32_t reserved_331;
+	uint32_t reserved_332;
+	uint32_t reserved_333;
+	uint32_t reserved_334;
+	uint32_t reserved_335;
+	uint32_t reserved_336;
+	uint32_t reserved_337;
+	uint32_t reserved_338;
+	uint32_t reserved_339;
+	uint32_t reserved_340;
+	uint32_t reserved_341;
+	uint32_t reserved_342;
+	uint32_t reserved_343;
+	uint32_t reserved_344;
+	uint32_t reserved_345;
+	uint32_t reserved_346;
+	uint32_t reserved_347;
+	uint32_t reserved_348;
+	uint32_t reserved_349;
+	uint32_t reserved_350;
+	uint32_t reserved_351;
+	uint32_t reserved_352;
+	uint32_t reserved_353;
+	uint32_t reserved_354;
+	uint32_t reserved_355;
+	uint32_t reserved_356;
+	uint32_t reserved_357;
+	uint32_t reserved_358;
+	uint32_t reserved_359;
+	uint32_t reserved_360;
+	uint32_t reserved_361;
+	uint32_t reserved_362;
+	uint32_t reserved_363;
+	uint32_t reserved_364;
+	uint32_t reserved_365;
+	uint32_t reserved_366;
+	uint32_t reserved_367;
+	uint32_t reserved_368;
+	uint32_t reserved_369;
+	uint32_t reserved_370;
+	uint32_t reserved_371;
+	uint32_t reserved_372;
+	uint32_t reserved_373;
+	uint32_t reserved_374;
+	uint32_t reserved_375;
+	uint32_t reserved_376;
+	uint32_t reserved_377;
+	uint32_t reserved_378;
+	uint32_t reserved_379;
+	uint32_t reserved_380;
+	uint32_t reserved_381;
+	uint32_t reserved_382;
+	uint32_t reserved_383;
+	uint32_t reserved_384;
+	uint32_t reserved_385;
+	uint32_t reserved_386;
+	uint32_t reserved_387;
+	uint32_t reserved_388;
+	uint32_t reserved_389;
+	uint32_t reserved_390;
+	uint32_t reserved_391;
+	uint32_t reserved_392;
+	uint32_t reserved_393;
+	uint32_t reserved_394;
+	uint32_t reserved_395;
+	uint32_t reserved_396;
+	uint32_t reserved_397;
+	uint32_t reserved_398;
+	uint32_t reserved_399;
+	uint32_t reserved_400;
+	uint32_t reserved_401;
+	uint32_t reserved_402;
+	uint32_t reserved_403;
+	uint32_t reserved_404;
+	uint32_t reserved_405;
+	uint32_t reserved_406;
+	uint32_t reserved_407;
+	uint32_t reserved_408;
+	uint32_t reserved_409;
+	uint32_t reserved_410;
+	uint32_t reserved_411;
+	uint32_t reserved_412;
+	uint32_t reserved_413;
+	uint32_t reserved_414;
+	uint32_t reserved_415;
+	uint32_t reserved_416;
+	uint32_t reserved_417;
+	uint32_t reserved_418;
+	uint32_t reserved_419;
+	uint32_t reserved_420;
+	uint32_t reserved_421;
+	uint32_t reserved_422;
+	uint32_t reserved_423;
+	uint32_t reserved_424;
+	uint32_t reserved_425;
+	uint32_t reserved_426;
+	uint32_t reserved_427;
+	uint32_t reserved_428;
+	uint32_t reserved_429;
+	uint32_t reserved_430;
+	uint32_t reserved_431;
+	uint32_t reserved_432;
+	uint32_t reserved_433;
+	uint32_t reserved_434;
+	uint32_t reserved_435;
+	uint32_t reserved_436;
+	uint32_t reserved_437;
+	uint32_t reserved_438;
+	uint32_t reserved_439;
+	uint32_t reserved_440;
+	uint32_t reserved_441;
+	uint32_t reserved_442;
+	uint32_t reserved_443;
+	uint32_t reserved_444;
+	uint32_t reserved_445;
+	uint32_t reserved_446;
+	uint32_t reserved_447;
+	uint32_t reserved_448;
+	uint32_t reserved_449;
+	uint32_t reserved_450;
+	uint32_t reserved_451;
+	uint32_t reserved_452;
+	uint32_t reserved_453;
+	uint32_t reserved_454;
+	uint32_t reserved_455;
+	uint32_t reserved_456;
+	uint32_t reserved_457;
+	uint32_t reserved_458;
+	uint32_t reserved_459;
+	uint32_t reserved_460;
+	uint32_t reserved_461;
+	uint32_t reserved_462;
+	uint32_t reserved_463;
+	uint32_t reserved_464;
+	uint32_t reserved_465;
+	uint32_t reserved_466;
+	uint32_t reserved_467;
+	uint32_t reserved_468;
+	uint32_t reserved_469;
+	uint32_t reserved_470;
+	uint32_t reserved_471;
+	uint32_t reserved_472;
+	uint32_t reserved_473;
+	uint32_t reserved_474;
+	uint32_t reserved_475;
+	uint32_t reserved_476;
+	uint32_t reserved_477;
+	uint32_t reserved_478;
+	uint32_t reserved_479;
+	uint32_t reserved_480;
+	uint32_t reserved_481;
+	uint32_t reserved_482;
+	uint32_t reserved_483;
+	uint32_t reserved_484;
+	uint32_t reserved_485;
+	uint32_t reserved_486;
+	uint32_t reserved_487;
+	uint32_t reserved_488;
+	uint32_t reserved_489;
+	uint32_t reserved_490;
+	uint32_t reserved_491;
+	uint32_t reserved_492;
+	uint32_t reserved_493;
+	uint32_t reserved_494;
+	uint32_t reserved_495;
+	uint32_t reserved_496;
+	uint32_t reserved_497;
+	uint32_t reserved_498;
+	uint32_t reserved_499;
+	uint32_t reserved_500;
+	uint32_t reserved_501;
+	uint32_t reserved_502;
+	uint32_t reserved_503;
+	uint32_t reserved_504;
+	uint32_t reserved_505;
+	uint32_t reserved_506;
+	uint32_t reserved_507;
+	uint32_t reserved_508;
+	uint32_t reserved_509;
+	uint32_t reserved_510;
+	uint32_t reserved_511;
+};
+
+#endif /* V9_STRUCTS_H_ */
-- 
2.5.5
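The reserved_N field names in the listing above track the dword index of each field, which makes v9_mqd a fixed 512-dword (2048-byte) block with the CP HQD register mirror starting at dword 128. A small sketch of that offset arithmetic (the index constant is read off the struct layout above; the helper is illustrative):

```python
MQD_DWORDS = 512  # v9_mqd ends at reserved_511, i.e. a fixed 512-dword block

def dword_index_to_byte_offset(index):
    """Byte offset of an MQD field given its dword index (fields are uint32_t)."""
    return index * 4

# cp_mqd_base_addr_lo is the field immediately after reserved_127,
# so it sits at dword index 128 (assuming reserved_N names track indices).
CP_MQD_BASE_ADDR_LO_INDEX = 128

total_bytes = MQD_DWORDS * 4                                   # 2048
base_addr_lo_off = dword_index_to_byte_offset(CP_MQD_BASE_ADDR_LO_INDEX)  # 512
```

This fixed layout is what lets the CP fetch the whole descriptor from memory as one page-aligned block.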


* [PATCH 022/100] drm/amdgpu: add gfx9 clearstate header
       [not found] ` <1490041835-11255-1-git-send-email-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (5 preceding siblings ...)
  2017-03-20 20:29   ` [PATCH 021/100] drm/amd: Add MQD structs for GFX V9 Alex Deucher
@ 2017-03-20 20:29   ` Alex Deucher
  2017-03-20 20:29   ` [PATCH 023/100] drm/amdgpu: add SDMA 4.0 packet header Alex Deucher
                     ` (78 subsequent siblings)
  85 siblings, 0 replies; 101+ messages in thread
From: Alex Deucher @ 2017-03-20 20:29 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Alex Deucher

Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/clearstate_gfx9.h | 941 +++++++++++++++++++++++++++
 1 file changed, 941 insertions(+)
 create mode 100644 drivers/gpu/drm/amd/amdgpu/clearstate_gfx9.h

diff --git a/drivers/gpu/drm/amd/amdgpu/clearstate_gfx9.h b/drivers/gpu/drm/amd/amdgpu/clearstate_gfx9.h
new file mode 100644
index 0000000..18fd01f
--- /dev/null
+++ b/drivers/gpu/drm/amd/amdgpu/clearstate_gfx9.h
@@ -0,0 +1,941 @@
+
+/*
+***************************************************************************************************
+*
+*  Trade secret of Advanced Micro Devices, Inc.
+*  Copyright (c) 2010 Advanced Micro Devices, Inc. (unpublished)
+*
+*  All rights reserved.  This notice is intended as a precaution against inadvertent publication and
+*  does not imply publication or any waiver of confidentiality.  The year included in the foregoing
+*  notice is the year of creation of the work.
+*
+***************************************************************************************************
+*/
+/**
+***************************************************************************************************
+* @brief gfx9 Clearstate Definitions
+***************************************************************************************************
+*
+*   Do not edit! This is a machine-generated file!
+*
+*/
+
+static const unsigned int gfx9_SECT_CONTEXT_def_1[] =
+{
+    0x00000000, // DB_RENDER_CONTROL
+    0x00000000, // DB_COUNT_CONTROL
+    0x00000000, // DB_DEPTH_VIEW
+    0x00000000, // DB_RENDER_OVERRIDE
+    0x00000000, // DB_RENDER_OVERRIDE2
+    0x00000000, // DB_HTILE_DATA_BASE
+    0x00000000, // DB_HTILE_DATA_BASE_HI
+    0x00000000, // DB_DEPTH_SIZE
+    0x00000000, // DB_DEPTH_BOUNDS_MIN
+    0x00000000, // DB_DEPTH_BOUNDS_MAX
+    0x00000000, // DB_STENCIL_CLEAR
+    0x00000000, // DB_DEPTH_CLEAR
+    0x00000000, // PA_SC_SCREEN_SCISSOR_TL
+    0x40004000, // PA_SC_SCREEN_SCISSOR_BR
+    0x00000000, // DB_Z_INFO
+    0x00000000, // DB_STENCIL_INFO
+    0x00000000, // DB_Z_READ_BASE
+    0x00000000, // DB_Z_READ_BASE_HI
+    0x00000000, // DB_STENCIL_READ_BASE
+    0x00000000, // DB_STENCIL_READ_BASE_HI
+    0x00000000, // DB_Z_WRITE_BASE
+    0x00000000, // DB_Z_WRITE_BASE_HI
+    0x00000000, // DB_STENCIL_WRITE_BASE
+    0x00000000, // DB_STENCIL_WRITE_BASE_HI
+    0x00000000, // DB_DFSM_CONTROL
+    0x00000000, // DB_RENDER_FILTER
+    0x00000000, // DB_Z_INFO2
+    0x00000000, // DB_STENCIL_INFO2
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0x00000000, // TA_BC_BASE_ADDR
+    0x00000000, // TA_BC_BASE_ADDR_HI
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0x00000000, // COHER_DEST_BASE_HI_0
+    0x00000000, // COHER_DEST_BASE_HI_1
+    0x00000000, // COHER_DEST_BASE_HI_2
+    0x00000000, // COHER_DEST_BASE_HI_3
+    0x00000000, // COHER_DEST_BASE_2
+    0x00000000, // COHER_DEST_BASE_3
+    0x00000000, // PA_SC_WINDOW_OFFSET
+    0x80000000, // PA_SC_WINDOW_SCISSOR_TL
+    0x40004000, // PA_SC_WINDOW_SCISSOR_BR
+    0x0000ffff, // PA_SC_CLIPRECT_RULE
+    0x00000000, // PA_SC_CLIPRECT_0_TL
+    0x40004000, // PA_SC_CLIPRECT_0_BR
+    0x00000000, // PA_SC_CLIPRECT_1_TL
+    0x40004000, // PA_SC_CLIPRECT_1_BR
+    0x00000000, // PA_SC_CLIPRECT_2_TL
+    0x40004000, // PA_SC_CLIPRECT_2_BR
+    0x00000000, // PA_SC_CLIPRECT_3_TL
+    0x40004000, // PA_SC_CLIPRECT_3_BR
+    0xaa99aaaa, // PA_SC_EDGERULE
+    0x00000000, // PA_SU_HARDWARE_SCREEN_OFFSET
+    0xffffffff, // CB_TARGET_MASK
+    0xffffffff, // CB_SHADER_MASK
+    0x80000000, // PA_SC_GENERIC_SCISSOR_TL
+    0x40004000, // PA_SC_GENERIC_SCISSOR_BR
+    0x00000000, // COHER_DEST_BASE_0
+    0x00000000, // COHER_DEST_BASE_1
+    0x80000000, // PA_SC_VPORT_SCISSOR_0_TL
+    0x40004000, // PA_SC_VPORT_SCISSOR_0_BR
+    0x80000000, // PA_SC_VPORT_SCISSOR_1_TL
+    0x40004000, // PA_SC_VPORT_SCISSOR_1_BR
+    0x80000000, // PA_SC_VPORT_SCISSOR_2_TL
+    0x40004000, // PA_SC_VPORT_SCISSOR_2_BR
+    0x80000000, // PA_SC_VPORT_SCISSOR_3_TL
+    0x40004000, // PA_SC_VPORT_SCISSOR_3_BR
+    0x80000000, // PA_SC_VPORT_SCISSOR_4_TL
+    0x40004000, // PA_SC_VPORT_SCISSOR_4_BR
+    0x80000000, // PA_SC_VPORT_SCISSOR_5_TL
+    0x40004000, // PA_SC_VPORT_SCISSOR_5_BR
+    0x80000000, // PA_SC_VPORT_SCISSOR_6_TL
+    0x40004000, // PA_SC_VPORT_SCISSOR_6_BR
+    0x80000000, // PA_SC_VPORT_SCISSOR_7_TL
+    0x40004000, // PA_SC_VPORT_SCISSOR_7_BR
+    0x80000000, // PA_SC_VPORT_SCISSOR_8_TL
+    0x40004000, // PA_SC_VPORT_SCISSOR_8_BR
+    0x80000000, // PA_SC_VPORT_SCISSOR_9_TL
+    0x40004000, // PA_SC_VPORT_SCISSOR_9_BR
+    0x80000000, // PA_SC_VPORT_SCISSOR_10_TL
+    0x40004000, // PA_SC_VPORT_SCISSOR_10_BR
+    0x80000000, // PA_SC_VPORT_SCISSOR_11_TL
+    0x40004000, // PA_SC_VPORT_SCISSOR_11_BR
+    0x80000000, // PA_SC_VPORT_SCISSOR_12_TL
+    0x40004000, // PA_SC_VPORT_SCISSOR_12_BR
+    0x80000000, // PA_SC_VPORT_SCISSOR_13_TL
+    0x40004000, // PA_SC_VPORT_SCISSOR_13_BR
+    0x80000000, // PA_SC_VPORT_SCISSOR_14_TL
+    0x40004000, // PA_SC_VPORT_SCISSOR_14_BR
+    0x80000000, // PA_SC_VPORT_SCISSOR_15_TL
+    0x40004000, // PA_SC_VPORT_SCISSOR_15_BR
+    0x00000000, // PA_SC_VPORT_ZMIN_0
+    0x3f800000, // PA_SC_VPORT_ZMAX_0
+    0x00000000, // PA_SC_VPORT_ZMIN_1
+    0x3f800000, // PA_SC_VPORT_ZMAX_1
+    0x00000000, // PA_SC_VPORT_ZMIN_2
+    0x3f800000, // PA_SC_VPORT_ZMAX_2
+    0x00000000, // PA_SC_VPORT_ZMIN_3
+    0x3f800000, // PA_SC_VPORT_ZMAX_3
+    0x00000000, // PA_SC_VPORT_ZMIN_4
+    0x3f800000, // PA_SC_VPORT_ZMAX_4
+    0x00000000, // PA_SC_VPORT_ZMIN_5
+    0x3f800000, // PA_SC_VPORT_ZMAX_5
+    0x00000000, // PA_SC_VPORT_ZMIN_6
+    0x3f800000, // PA_SC_VPORT_ZMAX_6
+    0x00000000, // PA_SC_VPORT_ZMIN_7
+    0x3f800000, // PA_SC_VPORT_ZMAX_7
+    0x00000000, // PA_SC_VPORT_ZMIN_8
+    0x3f800000, // PA_SC_VPORT_ZMAX_8
+    0x00000000, // PA_SC_VPORT_ZMIN_9
+    0x3f800000, // PA_SC_VPORT_ZMAX_9
+    0x00000000, // PA_SC_VPORT_ZMIN_10
+    0x3f800000, // PA_SC_VPORT_ZMAX_10
+    0x00000000, // PA_SC_VPORT_ZMIN_11
+    0x3f800000, // PA_SC_VPORT_ZMAX_11
+    0x00000000, // PA_SC_VPORT_ZMIN_12
+    0x3f800000, // PA_SC_VPORT_ZMAX_12
+    0x00000000, // PA_SC_VPORT_ZMIN_13
+    0x3f800000, // PA_SC_VPORT_ZMAX_13
+    0x00000000, // PA_SC_VPORT_ZMIN_14
+    0x3f800000, // PA_SC_VPORT_ZMAX_14
+    0x00000000, // PA_SC_VPORT_ZMIN_15
+    0x3f800000, // PA_SC_VPORT_ZMAX_15
+};
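Each entry in a SECT_CONTEXT def array like the one above is the default value for one context register, in register-offset order, with zero-valued "HOLE" entries reserving offsets that have no register to program. A hypothetical sketch of how such a section could be expanded into (offset, value) pairs (the 0xA000 start offset and the explicit hole set are invented for illustration; the real driver pairs each array with its true start offset and hole layout):

```python
def expand_section(start_offset, values, holes):
    """Yield (register_offset, default_value) pairs, skipping HOLE slots."""
    for i, value in enumerate(values):
        if i in holes:
            continue  # HOLE: the offset is reserved, nothing to program
        yield (start_offset + i, value)

# First four entries of a def array (DB_RENDER_CONTROL onward), no holes:
pairs = list(expand_section(0xA000, [0x00000000] * 4, holes=set()))
```

Walking the sections this way is how a clear-state buffer restores every context register to a known default in one pass.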
+static const unsigned int gfx9_SECT_CONTEXT_def_2[] =
+{
+    0x00000000, // PA_SC_SCREEN_EXTENT_CONTROL
+    0x00000000, // PA_SC_TILE_STEERING_OVERRIDE
+    0x00000000, // CP_PERFMON_CNTX_CNTL
+    0x00000000, // CP_RINGID
+    0x00000000, // CP_VMID
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0x00000000, // PA_SC_RIGHT_VERT_GRID
+    0x00000000, // PA_SC_LEFT_VERT_GRID
+    0x00000000, // PA_SC_HORIZ_GRID
+    0x00000000, // PA_SC_FOV_WINDOW_LR
+    0x00000000, // PA_SC_FOV_WINDOW_TB
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0x00000000, // VGT_MULTI_PRIM_IB_RESET_INDX
+    0, // HOLE
+    0x00000000, // CB_BLEND_RED
+    0x00000000, // CB_BLEND_GREEN
+    0x00000000, // CB_BLEND_BLUE
+    0x00000000, // CB_BLEND_ALPHA
+    0x00000000, // CB_DCC_CONTROL
+    0, // HOLE
+    0x00000000, // DB_STENCIL_CONTROL
+    0x01000000, // DB_STENCILREFMASK
+    0x01000000, // DB_STENCILREFMASK_BF
+    0, // HOLE
+    0x00000000, // PA_CL_VPORT_XSCALE
+    0x00000000, // PA_CL_VPORT_XOFFSET
+    0x00000000, // PA_CL_VPORT_YSCALE
+    0x00000000, // PA_CL_VPORT_YOFFSET
+    0x00000000, // PA_CL_VPORT_ZSCALE
+    0x00000000, // PA_CL_VPORT_ZOFFSET
+    0x00000000, // PA_CL_VPORT_XSCALE_1
+    0x00000000, // PA_CL_VPORT_XOFFSET_1
+    0x00000000, // PA_CL_VPORT_YSCALE_1
+    0x00000000, // PA_CL_VPORT_YOFFSET_1
+    0x00000000, // PA_CL_VPORT_ZSCALE_1
+    0x00000000, // PA_CL_VPORT_ZOFFSET_1
+    0x00000000, // PA_CL_VPORT_XSCALE_2
+    0x00000000, // PA_CL_VPORT_XOFFSET_2
+    0x00000000, // PA_CL_VPORT_YSCALE_2
+    0x00000000, // PA_CL_VPORT_YOFFSET_2
+    0x00000000, // PA_CL_VPORT_ZSCALE_2
+    0x00000000, // PA_CL_VPORT_ZOFFSET_2
+    0x00000000, // PA_CL_VPORT_XSCALE_3
+    0x00000000, // PA_CL_VPORT_XOFFSET_3
+    0x00000000, // PA_CL_VPORT_YSCALE_3
+    0x00000000, // PA_CL_VPORT_YOFFSET_3
+    0x00000000, // PA_CL_VPORT_ZSCALE_3
+    0x00000000, // PA_CL_VPORT_ZOFFSET_3
+    0x00000000, // PA_CL_VPORT_XSCALE_4
+    0x00000000, // PA_CL_VPORT_XOFFSET_4
+    0x00000000, // PA_CL_VPORT_YSCALE_4
+    0x00000000, // PA_CL_VPORT_YOFFSET_4
+    0x00000000, // PA_CL_VPORT_ZSCALE_4
+    0x00000000, // PA_CL_VPORT_ZOFFSET_4
+    0x00000000, // PA_CL_VPORT_XSCALE_5
+    0x00000000, // PA_CL_VPORT_XOFFSET_5
+    0x00000000, // PA_CL_VPORT_YSCALE_5
+    0x00000000, // PA_CL_VPORT_YOFFSET_5
+    0x00000000, // PA_CL_VPORT_ZSCALE_5
+    0x00000000, // PA_CL_VPORT_ZOFFSET_5
+    0x00000000, // PA_CL_VPORT_XSCALE_6
+    0x00000000, // PA_CL_VPORT_XOFFSET_6
+    0x00000000, // PA_CL_VPORT_YSCALE_6
+    0x00000000, // PA_CL_VPORT_YOFFSET_6
+    0x00000000, // PA_CL_VPORT_ZSCALE_6
+    0x00000000, // PA_CL_VPORT_ZOFFSET_6
+    0x00000000, // PA_CL_VPORT_XSCALE_7
+    0x00000000, // PA_CL_VPORT_XOFFSET_7
+    0x00000000, // PA_CL_VPORT_YSCALE_7
+    0x00000000, // PA_CL_VPORT_YOFFSET_7
+    0x00000000, // PA_CL_VPORT_ZSCALE_7
+    0x00000000, // PA_CL_VPORT_ZOFFSET_7
+    0x00000000, // PA_CL_VPORT_XSCALE_8
+    0x00000000, // PA_CL_VPORT_XOFFSET_8
+    0x00000000, // PA_CL_VPORT_YSCALE_8
+    0x00000000, // PA_CL_VPORT_YOFFSET_8
+    0x00000000, // PA_CL_VPORT_ZSCALE_8
+    0x00000000, // PA_CL_VPORT_ZOFFSET_8
+    0x00000000, // PA_CL_VPORT_XSCALE_9
+    0x00000000, // PA_CL_VPORT_XOFFSET_9
+    0x00000000, // PA_CL_VPORT_YSCALE_9
+    0x00000000, // PA_CL_VPORT_YOFFSET_9
+    0x00000000, // PA_CL_VPORT_ZSCALE_9
+    0x00000000, // PA_CL_VPORT_ZOFFSET_9
+    0x00000000, // PA_CL_VPORT_XSCALE_10
+    0x00000000, // PA_CL_VPORT_XOFFSET_10
+    0x00000000, // PA_CL_VPORT_YSCALE_10
+    0x00000000, // PA_CL_VPORT_YOFFSET_10
+    0x00000000, // PA_CL_VPORT_ZSCALE_10
+    0x00000000, // PA_CL_VPORT_ZOFFSET_10
+    0x00000000, // PA_CL_VPORT_XSCALE_11
+    0x00000000, // PA_CL_VPORT_XOFFSET_11
+    0x00000000, // PA_CL_VPORT_YSCALE_11
+    0x00000000, // PA_CL_VPORT_YOFFSET_11
+    0x00000000, // PA_CL_VPORT_ZSCALE_11
+    0x00000000, // PA_CL_VPORT_ZOFFSET_11
+    0x00000000, // PA_CL_VPORT_XSCALE_12
+    0x00000000, // PA_CL_VPORT_XOFFSET_12
+    0x00000000, // PA_CL_VPORT_YSCALE_12
+    0x00000000, // PA_CL_VPORT_YOFFSET_12
+    0x00000000, // PA_CL_VPORT_ZSCALE_12
+    0x00000000, // PA_CL_VPORT_ZOFFSET_12
+    0x00000000, // PA_CL_VPORT_XSCALE_13
+    0x00000000, // PA_CL_VPORT_XOFFSET_13
+    0x00000000, // PA_CL_VPORT_YSCALE_13
+    0x00000000, // PA_CL_VPORT_YOFFSET_13
+    0x00000000, // PA_CL_VPORT_ZSCALE_13
+    0x00000000, // PA_CL_VPORT_ZOFFSET_13
+    0x00000000, // PA_CL_VPORT_XSCALE_14
+    0x00000000, // PA_CL_VPORT_XOFFSET_14
+    0x00000000, // PA_CL_VPORT_YSCALE_14
+    0x00000000, // PA_CL_VPORT_YOFFSET_14
+    0x00000000, // PA_CL_VPORT_ZSCALE_14
+    0x00000000, // PA_CL_VPORT_ZOFFSET_14
+    0x00000000, // PA_CL_VPORT_XSCALE_15
+    0x00000000, // PA_CL_VPORT_XOFFSET_15
+    0x00000000, // PA_CL_VPORT_YSCALE_15
+    0x00000000, // PA_CL_VPORT_YOFFSET_15
+    0x00000000, // PA_CL_VPORT_ZSCALE_15
+    0x00000000, // PA_CL_VPORT_ZOFFSET_15
+    0x00000000, // PA_CL_UCP_0_X
+    0x00000000, // PA_CL_UCP_0_Y
+    0x00000000, // PA_CL_UCP_0_Z
+    0x00000000, // PA_CL_UCP_0_W
+    0x00000000, // PA_CL_UCP_1_X
+    0x00000000, // PA_CL_UCP_1_Y
+    0x00000000, // PA_CL_UCP_1_Z
+    0x00000000, // PA_CL_UCP_1_W
+    0x00000000, // PA_CL_UCP_2_X
+    0x00000000, // PA_CL_UCP_2_Y
+    0x00000000, // PA_CL_UCP_2_Z
+    0x00000000, // PA_CL_UCP_2_W
+    0x00000000, // PA_CL_UCP_3_X
+    0x00000000, // PA_CL_UCP_3_Y
+    0x00000000, // PA_CL_UCP_3_Z
+    0x00000000, // PA_CL_UCP_3_W
+    0x00000000, // PA_CL_UCP_4_X
+    0x00000000, // PA_CL_UCP_4_Y
+    0x00000000, // PA_CL_UCP_4_Z
+    0x00000000, // PA_CL_UCP_4_W
+    0x00000000, // PA_CL_UCP_5_X
+    0x00000000, // PA_CL_UCP_5_Y
+    0x00000000, // PA_CL_UCP_5_Z
+    0x00000000, // PA_CL_UCP_5_W
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0x00000000, // SPI_PS_INPUT_CNTL_0
+    0x00000000, // SPI_PS_INPUT_CNTL_1
+    0x00000000, // SPI_PS_INPUT_CNTL_2
+    0x00000000, // SPI_PS_INPUT_CNTL_3
+    0x00000000, // SPI_PS_INPUT_CNTL_4
+    0x00000000, // SPI_PS_INPUT_CNTL_5
+    0x00000000, // SPI_PS_INPUT_CNTL_6
+    0x00000000, // SPI_PS_INPUT_CNTL_7
+    0x00000000, // SPI_PS_INPUT_CNTL_8
+    0x00000000, // SPI_PS_INPUT_CNTL_9
+    0x00000000, // SPI_PS_INPUT_CNTL_10
+    0x00000000, // SPI_PS_INPUT_CNTL_11
+    0x00000000, // SPI_PS_INPUT_CNTL_12
+    0x00000000, // SPI_PS_INPUT_CNTL_13
+    0x00000000, // SPI_PS_INPUT_CNTL_14
+    0x00000000, // SPI_PS_INPUT_CNTL_15
+    0x00000000, // SPI_PS_INPUT_CNTL_16
+    0x00000000, // SPI_PS_INPUT_CNTL_17
+    0x00000000, // SPI_PS_INPUT_CNTL_18
+    0x00000000, // SPI_PS_INPUT_CNTL_19
+    0x00000000, // SPI_PS_INPUT_CNTL_20
+    0x00000000, // SPI_PS_INPUT_CNTL_21
+    0x00000000, // SPI_PS_INPUT_CNTL_22
+    0x00000000, // SPI_PS_INPUT_CNTL_23
+    0x00000000, // SPI_PS_INPUT_CNTL_24
+    0x00000000, // SPI_PS_INPUT_CNTL_25
+    0x00000000, // SPI_PS_INPUT_CNTL_26
+    0x00000000, // SPI_PS_INPUT_CNTL_27
+    0x00000000, // SPI_PS_INPUT_CNTL_28
+    0x00000000, // SPI_PS_INPUT_CNTL_29
+    0x00000000, // SPI_PS_INPUT_CNTL_30
+    0x00000000, // SPI_PS_INPUT_CNTL_31
+    0x00000000, // SPI_VS_OUT_CONFIG
+    0, // HOLE
+    0x00000000, // SPI_PS_INPUT_ENA
+    0x00000000, // SPI_PS_INPUT_ADDR
+    0x00000000, // SPI_INTERP_CONTROL_0
+    0x00000002, // SPI_PS_IN_CONTROL
+    0, // HOLE
+    0x00000000, // SPI_BARYC_CNTL
+    0, // HOLE
+    0x00000000, // SPI_TMPRING_SIZE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0x00000000, // SPI_SHADER_POS_FORMAT
+    0x00000000, // SPI_SHADER_Z_FORMAT
+    0x00000000, // SPI_SHADER_COL_FORMAT
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0x00000000, // SX_PS_DOWNCONVERT
+    0x00000000, // SX_BLEND_OPT_EPSILON
+    0x00000000, // SX_BLEND_OPT_CONTROL
+    0x00000000, // SX_MRT0_BLEND_OPT
+    0x00000000, // SX_MRT1_BLEND_OPT
+    0x00000000, // SX_MRT2_BLEND_OPT
+    0x00000000, // SX_MRT3_BLEND_OPT
+    0x00000000, // SX_MRT4_BLEND_OPT
+    0x00000000, // SX_MRT5_BLEND_OPT
+    0x00000000, // SX_MRT6_BLEND_OPT
+    0x00000000, // SX_MRT7_BLEND_OPT
+    0x00000000, // CB_BLEND0_CONTROL
+    0x00000000, // CB_BLEND1_CONTROL
+    0x00000000, // CB_BLEND2_CONTROL
+    0x00000000, // CB_BLEND3_CONTROL
+    0x00000000, // CB_BLEND4_CONTROL
+    0x00000000, // CB_BLEND5_CONTROL
+    0x00000000, // CB_BLEND6_CONTROL
+    0x00000000, // CB_BLEND7_CONTROL
+    0x00000000, // CB_MRT0_EPITCH
+    0x00000000, // CB_MRT1_EPITCH
+    0x00000000, // CB_MRT2_EPITCH
+    0x00000000, // CB_MRT3_EPITCH
+    0x00000000, // CB_MRT4_EPITCH
+    0x00000000, // CB_MRT5_EPITCH
+    0x00000000, // CB_MRT6_EPITCH
+    0x00000000, // CB_MRT7_EPITCH
+};
+static const unsigned int gfx9_SECT_CONTEXT_def_3[] =
+{
+    0x00000000, // PA_CL_POINT_X_RAD
+    0x00000000, // PA_CL_POINT_Y_RAD
+    0x00000000, // PA_CL_POINT_SIZE
+    0x00000000, // PA_CL_POINT_CULL_RAD
+};
+static const unsigned int gfx9_SECT_CONTEXT_def_4[] =
+{
+    0x00000000, // DB_DEPTH_CONTROL
+    0x00000000, // DB_EQAA
+    0x00000000, // CB_COLOR_CONTROL
+    0x00000000, // DB_SHADER_CONTROL
+    0x00090000, // PA_CL_CLIP_CNTL
+    0x00000004, // PA_SU_SC_MODE_CNTL
+    0x00000000, // PA_CL_VTE_CNTL
+    0x00000000, // PA_CL_VS_OUT_CNTL
+    0x00000000, // PA_CL_NANINF_CNTL
+    0x00000000, // PA_SU_LINE_STIPPLE_CNTL
+    0x00000000, // PA_SU_LINE_STIPPLE_SCALE
+    0x00000000, // PA_SU_PRIM_FILTER_CNTL
+    0x00000000, // PA_SU_SMALL_PRIM_FILTER_CNTL
+    0x00000000, // PA_CL_OBJPRIM_ID_CNTL
+    0x00000000, // PA_CL_NGG_CNTL
+    0x00000000, // PA_SU_OVER_RASTERIZATION_CNTL
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0x00000000, // PA_SU_POINT_SIZE
+    0x00000000, // PA_SU_POINT_MINMAX
+    0x00000000, // PA_SU_LINE_CNTL
+    0x00000000, // PA_SC_LINE_STIPPLE
+    0x00000000, // VGT_OUTPUT_PATH_CNTL
+    0x00000000, // VGT_HOS_CNTL
+    0x00000000, // VGT_HOS_MAX_TESS_LEVEL
+    0x00000000, // VGT_HOS_MIN_TESS_LEVEL
+    0x00000000, // VGT_HOS_REUSE_DEPTH
+    0x00000000, // VGT_GROUP_PRIM_TYPE
+    0x00000000, // VGT_GROUP_FIRST_DECR
+    0x00000000, // VGT_GROUP_DECR
+    0x00000000, // VGT_GROUP_VECT_0_CNTL
+    0x00000000, // VGT_GROUP_VECT_1_CNTL
+    0x00000000, // VGT_GROUP_VECT_0_FMT_CNTL
+    0x00000000, // VGT_GROUP_VECT_1_FMT_CNTL
+    0x00000000, // VGT_GS_MODE
+    0x00000000, // VGT_GS_ONCHIP_CNTL
+    0x00000000, // PA_SC_MODE_CNTL_0
+    0x00000000, // PA_SC_MODE_CNTL_1
+    0x00000000, // VGT_ENHANCE
+    0x00000100, // VGT_GS_PER_ES
+    0x00000080, // VGT_ES_PER_GS
+    0x00000002, // VGT_GS_PER_VS
+    0x00000000, // VGT_GSVS_RING_OFFSET_1
+    0x00000000, // VGT_GSVS_RING_OFFSET_2
+    0x00000000, // VGT_GSVS_RING_OFFSET_3
+    0x00000000, // VGT_GS_OUT_PRIM_TYPE
+    0x00000000, // IA_ENHANCE
+};
+static const unsigned int gfx9_SECT_CONTEXT_def_5[] =
+{
+    0x00000000, // WD_ENHANCE
+    0x00000000, // VGT_PRIMITIVEID_EN
+};
+static const unsigned int gfx9_SECT_CONTEXT_def_6[] =
+{
+    0x00000000, // VGT_PRIMITIVEID_RESET
+};
+static const unsigned int gfx9_SECT_CONTEXT_def_7[] =
+{
+    0x00000000, // VGT_GS_MAX_PRIMS_PER_SUBGROUP
+    0x00000000, // VGT_DRAW_PAYLOAD_CNTL
+    0x00000000, // VGT_INDEX_PAYLOAD_CNTL
+    0x00000000, // VGT_INSTANCE_STEP_RATE_0
+    0x00000000, // VGT_INSTANCE_STEP_RATE_1
+    0, // HOLE
+    0x00000000, // VGT_ESGS_RING_ITEMSIZE
+    0x00000000, // VGT_GSVS_RING_ITEMSIZE
+    0x00000000, // VGT_REUSE_OFF
+    0x00000000, // VGT_VTX_CNT_EN
+    0x00000000, // DB_HTILE_SURFACE
+    0x00000000, // DB_SRESULTS_COMPARE_STATE0
+    0x00000000, // DB_SRESULTS_COMPARE_STATE1
+    0x00000000, // DB_PRELOAD_CONTROL
+    0, // HOLE
+    0x00000000, // VGT_STRMOUT_BUFFER_SIZE_0
+    0x00000000, // VGT_STRMOUT_VTX_STRIDE_0
+    0, // HOLE
+    0x00000000, // VGT_STRMOUT_BUFFER_OFFSET_0
+    0x00000000, // VGT_STRMOUT_BUFFER_SIZE_1
+    0x00000000, // VGT_STRMOUT_VTX_STRIDE_1
+    0, // HOLE
+    0x00000000, // VGT_STRMOUT_BUFFER_OFFSET_1
+    0x00000000, // VGT_STRMOUT_BUFFER_SIZE_2
+    0x00000000, // VGT_STRMOUT_VTX_STRIDE_2
+    0, // HOLE
+    0x00000000, // VGT_STRMOUT_BUFFER_OFFSET_2
+    0x00000000, // VGT_STRMOUT_BUFFER_SIZE_3
+    0x00000000, // VGT_STRMOUT_VTX_STRIDE_3
+    0, // HOLE
+    0x00000000, // VGT_STRMOUT_BUFFER_OFFSET_3
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0x00000000, // VGT_STRMOUT_DRAW_OPAQUE_OFFSET
+    0x00000000, // VGT_STRMOUT_DRAW_OPAQUE_BUFFER_FILLED_SIZE
+    0x00000000, // VGT_STRMOUT_DRAW_OPAQUE_VERTEX_STRIDE
+    0, // HOLE
+    0x00000000, // VGT_GS_MAX_VERT_OUT
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0, // HOLE
+    0x00000000, // VGT_TESS_DISTRIBUTION
+    0x00000000, // VGT_SHADER_STAGES_EN
+    0x00000000, // VGT_LS_HS_CONFIG
+    0x00000000, // VGT_GS_VERT_ITEMSIZE
+    0x00000000, // VGT_GS_VERT_ITEMSIZE_1
+    0x00000000, // VGT_GS_VERT_ITEMSIZE_2
+    0x00000000, // VGT_GS_VERT_ITEMSIZE_3
+    0x00000000, // VGT_TF_PARAM
+    0x00000000, // DB_ALPHA_TO_MASK
+    0x00000000, // VGT_DISPATCH_DRAW_INDEX
+    0x00000000, // PA_SU_POLY_OFFSET_DB_FMT_CNTL
+    0x00000000, // PA_SU_POLY_OFFSET_CLAMP
+    0x00000000, // PA_SU_POLY_OFFSET_FRONT_SCALE
+    0x00000000, // PA_SU_POLY_OFFSET_FRONT_OFFSET
+    0x00000000, // PA_SU_POLY_OFFSET_BACK_SCALE
+    0x00000000, // PA_SU_POLY_OFFSET_BACK_OFFSET
+    0x00000000, // VGT_GS_INSTANCE_CNT
+    0x00000000, // VGT_STRMOUT_CONFIG
+    0x00000000, // VGT_STRMOUT_BUFFER_CONFIG
+};
+static const unsigned int gfx9_SECT_CONTEXT_def_8[] =
+{
+    0x00000000, // PA_SC_CENTROID_PRIORITY_0
+    0x00000000, // PA_SC_CENTROID_PRIORITY_1
+    0x00001000, // PA_SC_LINE_CNTL
+    0x00000000, // PA_SC_AA_CONFIG
+    0x00000005, // PA_SU_VTX_CNTL
+    0x3f800000, // PA_CL_GB_VERT_CLIP_ADJ
+    0x3f800000, // PA_CL_GB_VERT_DISC_ADJ
+    0x3f800000, // PA_CL_GB_HORZ_CLIP_ADJ
+    0x3f800000, // PA_CL_GB_HORZ_DISC_ADJ
+    0x00000000, // PA_SC_AA_SAMPLE_LOCS_PIXEL_X0Y0_0
+    0x00000000, // PA_SC_AA_SAMPLE_LOCS_PIXEL_X0Y0_1
+    0x00000000, // PA_SC_AA_SAMPLE_LOCS_PIXEL_X0Y0_2
+    0x00000000, // PA_SC_AA_SAMPLE_LOCS_PIXEL_X0Y0_3
+    0x00000000, // PA_SC_AA_SAMPLE_LOCS_PIXEL_X1Y0_0
+    0x00000000, // PA_SC_AA_SAMPLE_LOCS_PIXEL_X1Y0_1
+    0x00000000, // PA_SC_AA_SAMPLE_LOCS_PIXEL_X1Y0_2
+    0x00000000, // PA_SC_AA_SAMPLE_LOCS_PIXEL_X1Y0_3
+    0x00000000, // PA_SC_AA_SAMPLE_LOCS_PIXEL_X0Y1_0
+    0x00000000, // PA_SC_AA_SAMPLE_LOCS_PIXEL_X0Y1_1
+    0x00000000, // PA_SC_AA_SAMPLE_LOCS_PIXEL_X0Y1_2
+    0x00000000, // PA_SC_AA_SAMPLE_LOCS_PIXEL_X0Y1_3
+    0x00000000, // PA_SC_AA_SAMPLE_LOCS_PIXEL_X1Y1_0
+    0x00000000, // PA_SC_AA_SAMPLE_LOCS_PIXEL_X1Y1_1
+    0x00000000, // PA_SC_AA_SAMPLE_LOCS_PIXEL_X1Y1_2
+    0x00000000, // PA_SC_AA_SAMPLE_LOCS_PIXEL_X1Y1_3
+    0xffffffff, // PA_SC_AA_MASK_X0Y0_X1Y0
+    0xffffffff, // PA_SC_AA_MASK_X0Y1_X1Y1
+    0x00000000, // PA_SC_SHADER_CONTROL
+    0x00000003, // PA_SC_BINNER_CNTL_0
+    0x00000000, // PA_SC_BINNER_CNTL_1
+    0x00000000, // PA_SC_CONSERVATIVE_RASTERIZATION_CNTL
+    0x00000000, // PA_SC_NGG_MODE_CNTL
+    0, // HOLE
+    0x0000001e, // VGT_VERTEX_REUSE_BLOCK_CNTL
+    0x00000020, // VGT_OUT_DEALLOC_CNTL
+    0x00000000, // CB_COLOR0_BASE
+    0x00000000, // CB_COLOR0_BASE_EXT
+    0x00000000, // CB_COLOR0_ATTRIB2
+    0x00000000, // CB_COLOR0_VIEW
+    0x00000000, // CB_COLOR0_INFO
+    0x00000000, // CB_COLOR0_ATTRIB
+    0x00000000, // CB_COLOR0_DCC_CONTROL
+    0x00000000, // CB_COLOR0_CMASK
+    0x00000000, // CB_COLOR0_CMASK_BASE_EXT
+    0x00000000, // CB_COLOR0_FMASK
+    0x00000000, // CB_COLOR0_FMASK_BASE_EXT
+    0x00000000, // CB_COLOR0_CLEAR_WORD0
+    0x00000000, // CB_COLOR0_CLEAR_WORD1
+    0x00000000, // CB_COLOR0_DCC_BASE
+    0x00000000, // CB_COLOR0_DCC_BASE_EXT
+    0x00000000, // CB_COLOR1_BASE
+    0x00000000, // CB_COLOR1_BASE_EXT
+    0x00000000, // CB_COLOR1_ATTRIB2
+    0x00000000, // CB_COLOR1_VIEW
+    0x00000000, // CB_COLOR1_INFO
+    0x00000000, // CB_COLOR1_ATTRIB
+    0x00000000, // CB_COLOR1_DCC_CONTROL
+    0x00000000, // CB_COLOR1_CMASK
+    0x00000000, // CB_COLOR1_CMASK_BASE_EXT
+    0x00000000, // CB_COLOR1_FMASK
+    0x00000000, // CB_COLOR1_FMASK_BASE_EXT
+    0x00000000, // CB_COLOR1_CLEAR_WORD0
+    0x00000000, // CB_COLOR1_CLEAR_WORD1
+    0x00000000, // CB_COLOR1_DCC_BASE
+    0x00000000, // CB_COLOR1_DCC_BASE_EXT
+    0x00000000, // CB_COLOR2_BASE
+    0x00000000, // CB_COLOR2_BASE_EXT
+    0x00000000, // CB_COLOR2_ATTRIB2
+    0x00000000, // CB_COLOR2_VIEW
+    0x00000000, // CB_COLOR2_INFO
+    0x00000000, // CB_COLOR2_ATTRIB
+    0x00000000, // CB_COLOR2_DCC_CONTROL
+    0x00000000, // CB_COLOR2_CMASK
+    0x00000000, // CB_COLOR2_CMASK_BASE_EXT
+    0x00000000, // CB_COLOR2_FMASK
+    0x00000000, // CB_COLOR2_FMASK_BASE_EXT
+    0x00000000, // CB_COLOR2_CLEAR_WORD0
+    0x00000000, // CB_COLOR2_CLEAR_WORD1
+    0x00000000, // CB_COLOR2_DCC_BASE
+    0x00000000, // CB_COLOR2_DCC_BASE_EXT
+    0x00000000, // CB_COLOR3_BASE
+    0x00000000, // CB_COLOR3_BASE_EXT
+    0x00000000, // CB_COLOR3_ATTRIB2
+    0x00000000, // CB_COLOR3_VIEW
+    0x00000000, // CB_COLOR3_INFO
+    0x00000000, // CB_COLOR3_ATTRIB
+    0x00000000, // CB_COLOR3_DCC_CONTROL
+    0x00000000, // CB_COLOR3_CMASK
+    0x00000000, // CB_COLOR3_CMASK_BASE_EXT
+    0x00000000, // CB_COLOR3_FMASK
+    0x00000000, // CB_COLOR3_FMASK_BASE_EXT
+    0x00000000, // CB_COLOR3_CLEAR_WORD0
+    0x00000000, // CB_COLOR3_CLEAR_WORD1
+    0x00000000, // CB_COLOR3_DCC_BASE
+    0x00000000, // CB_COLOR3_DCC_BASE_EXT
+    0x00000000, // CB_COLOR4_BASE
+    0x00000000, // CB_COLOR4_BASE_EXT
+    0x00000000, // CB_COLOR4_ATTRIB2
+    0x00000000, // CB_COLOR4_VIEW
+    0x00000000, // CB_COLOR4_INFO
+    0x00000000, // CB_COLOR4_ATTRIB
+    0x00000000, // CB_COLOR4_DCC_CONTROL
+    0x00000000, // CB_COLOR4_CMASK
+    0x00000000, // CB_COLOR4_CMASK_BASE_EXT
+    0x00000000, // CB_COLOR4_FMASK
+    0x00000000, // CB_COLOR4_FMASK_BASE_EXT
+    0x00000000, // CB_COLOR4_CLEAR_WORD0
+    0x00000000, // CB_COLOR4_CLEAR_WORD1
+    0x00000000, // CB_COLOR4_DCC_BASE
+    0x00000000, // CB_COLOR4_DCC_BASE_EXT
+    0x00000000, // CB_COLOR5_BASE
+    0x00000000, // CB_COLOR5_BASE_EXT
+    0x00000000, // CB_COLOR5_ATTRIB2
+    0x00000000, // CB_COLOR5_VIEW
+    0x00000000, // CB_COLOR5_INFO
+    0x00000000, // CB_COLOR5_ATTRIB
+    0x00000000, // CB_COLOR5_DCC_CONTROL
+    0x00000000, // CB_COLOR5_CMASK
+    0x00000000, // CB_COLOR5_CMASK_BASE_EXT
+    0x00000000, // CB_COLOR5_FMASK
+    0x00000000, // CB_COLOR5_FMASK_BASE_EXT
+    0x00000000, // CB_COLOR5_CLEAR_WORD0
+    0x00000000, // CB_COLOR5_CLEAR_WORD1
+    0x00000000, // CB_COLOR5_DCC_BASE
+    0x00000000, // CB_COLOR5_DCC_BASE_EXT
+    0x00000000, // CB_COLOR6_BASE
+    0x00000000, // CB_COLOR6_BASE_EXT
+    0x00000000, // CB_COLOR6_ATTRIB2
+    0x00000000, // CB_COLOR6_VIEW
+    0x00000000, // CB_COLOR6_INFO
+    0x00000000, // CB_COLOR6_ATTRIB
+    0x00000000, // CB_COLOR6_DCC_CONTROL
+    0x00000000, // CB_COLOR6_CMASK
+    0x00000000, // CB_COLOR6_CMASK_BASE_EXT
+    0x00000000, // CB_COLOR6_FMASK
+    0x00000000, // CB_COLOR6_FMASK_BASE_EXT
+    0x00000000, // CB_COLOR6_CLEAR_WORD0
+    0x00000000, // CB_COLOR6_CLEAR_WORD1
+    0x00000000, // CB_COLOR6_DCC_BASE
+    0x00000000, // CB_COLOR6_DCC_BASE_EXT
+    0x00000000, // CB_COLOR7_BASE
+    0x00000000, // CB_COLOR7_BASE_EXT
+    0x00000000, // CB_COLOR7_ATTRIB2
+    0x00000000, // CB_COLOR7_VIEW
+    0x00000000, // CB_COLOR7_INFO
+    0x00000000, // CB_COLOR7_ATTRIB
+    0x00000000, // CB_COLOR7_DCC_CONTROL
+    0x00000000, // CB_COLOR7_CMASK
+    0x00000000, // CB_COLOR7_CMASK_BASE_EXT
+    0x00000000, // CB_COLOR7_FMASK
+    0x00000000, // CB_COLOR7_FMASK_BASE_EXT
+    0x00000000, // CB_COLOR7_CLEAR_WORD0
+    0x00000000, // CB_COLOR7_CLEAR_WORD1
+    0x00000000, // CB_COLOR7_DCC_BASE
+    0x00000000, // CB_COLOR7_DCC_BASE_EXT
+};
+static const struct cs_extent_def gfx9_SECT_CONTEXT_defs[] =
+{
+    {gfx9_SECT_CONTEXT_def_1, 0x0000a000, 212 },
+    {gfx9_SECT_CONTEXT_def_2, 0x0000a0d6, 282 },
+    {gfx9_SECT_CONTEXT_def_3, 0x0000a1f5, 4 },
+    {gfx9_SECT_CONTEXT_def_4, 0x0000a200, 157 },
+    {gfx9_SECT_CONTEXT_def_5, 0x0000a2a0, 2 },
+    {gfx9_SECT_CONTEXT_def_6, 0x0000a2a3, 1 },
+    {gfx9_SECT_CONTEXT_def_7, 0x0000a2a5, 66 },
+    {gfx9_SECT_CONTEXT_def_8, 0x0000a2f5, 155 },
+    { 0, 0, 0 }
+};
+static const struct cs_section_def gfx9_cs_data[] = {
+    { gfx9_SECT_CONTEXT_defs, SECT_CONTEXT },
+    { 0, SECT_NONE }
+};
-- 
2.5.5

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


* [PATCH 023/100] drm/amdgpu: add SDMA 4.0 packet header
       [not found] ` <1490041835-11255-1-git-send-email-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (6 preceding siblings ...)
  2017-03-20 20:29   ` [PATCH 022/100] drm/amdgpu: add gfx9 clearstate header Alex Deucher
@ 2017-03-20 20:29   ` Alex Deucher
  2017-03-20 20:29   ` [PATCH 024/100] drm/amdgpu: add common soc15 headers Alex Deucher
                     ` (77 subsequent siblings)
  85 siblings, 0 replies; 101+ messages in thread
From: Alex Deucher @ 2017-03-20 20:29 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Alex Deucher

Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/vega10_sdma_pkt_open.h | 3335 +++++++++++++++++++++
 1 file changed, 3335 insertions(+)
 create mode 100644 drivers/gpu/drm/amd/amdgpu/vega10_sdma_pkt_open.h

diff --git a/drivers/gpu/drm/amd/amdgpu/vega10_sdma_pkt_open.h b/drivers/gpu/drm/amd/amdgpu/vega10_sdma_pkt_open.h
new file mode 100644
index 0000000..8de4ccc
--- /dev/null
+++ b/drivers/gpu/drm/amd/amdgpu/vega10_sdma_pkt_open.h
@@ -0,0 +1,3335 @@
+/*
+ * Copyright (C) 2016  Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included
+ * in all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
+ * OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN
+ * AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __VEGA10_SDMA_PKT_OPEN_H_
+#define __VEGA10_SDMA_PKT_OPEN_H_
+
+#define SDMA_OP_NOP  0
+#define SDMA_OP_COPY  1
+#define SDMA_OP_WRITE  2
+#define SDMA_OP_INDIRECT  4
+#define SDMA_OP_FENCE  5
+#define SDMA_OP_TRAP  6
+#define SDMA_OP_SEM  7
+#define SDMA_OP_POLL_REGMEM  8
+#define SDMA_OP_COND_EXE  9
+#define SDMA_OP_ATOMIC  10
+#define SDMA_OP_CONST_FILL  11
+#define SDMA_OP_PTEPDE  12
+#define SDMA_OP_TIMESTAMP  13
+#define SDMA_OP_SRBM_WRITE  14
+#define SDMA_OP_PRE_EXE  15
+#define SDMA_OP_DUMMY_TRAP  16
+#define SDMA_SUBOP_TIMESTAMP_SET  0
+#define SDMA_SUBOP_TIMESTAMP_GET  1
+#define SDMA_SUBOP_TIMESTAMP_GET_GLOBAL  2
+#define SDMA_SUBOP_COPY_LINEAR  0
+#define SDMA_SUBOP_COPY_LINEAR_SUB_WIND  4
+#define SDMA_SUBOP_COPY_TILED  1
+#define SDMA_SUBOP_COPY_TILED_SUB_WIND  5
+#define SDMA_SUBOP_COPY_T2T_SUB_WIND  6
+#define SDMA_SUBOP_COPY_SOA  3
+#define SDMA_SUBOP_COPY_DIRTY_PAGE  7
+#define SDMA_SUBOP_COPY_LINEAR_PHY  8
+#define SDMA_SUBOP_WRITE_LINEAR  0
+#define SDMA_SUBOP_WRITE_TILED  1
+#define SDMA_SUBOP_PTEPDE_GEN  0
+#define SDMA_SUBOP_PTEPDE_COPY  1
+#define SDMA_SUBOP_PTEPDE_RMW  2
+#define SDMA_SUBOP_PTEPDE_COPY_BACKWARDS  3
+#define SDMA_SUBOP_DATA_FILL_MULTI  1
+#define SDMA_SUBOP_POLL_REG_WRITE_MEM  1
+#define SDMA_SUBOP_POLL_DBIT_WRITE_MEM  2
+#define SDMA_SUBOP_POLL_MEM_VERIFY  3
+#define HEADER_AGENT_DISPATCH  4
+#define HEADER_BARRIER  5
+#define SDMA_OP_AQL_COPY  0
+#define SDMA_OP_AQL_BARRIER_OR  0
+
+/*define for op field*/
+#define SDMA_PKT_HEADER_op_offset 0
+#define SDMA_PKT_HEADER_op_mask   0x000000FF
+#define SDMA_PKT_HEADER_op_shift  0
+#define SDMA_PKT_HEADER_OP(x) (((x) & SDMA_PKT_HEADER_op_mask) << SDMA_PKT_HEADER_op_shift)
+
+/*define for sub_op field*/
+#define SDMA_PKT_HEADER_sub_op_offset 0
+#define SDMA_PKT_HEADER_sub_op_mask   0x000000FF
+#define SDMA_PKT_HEADER_sub_op_shift  8
+#define SDMA_PKT_HEADER_SUB_OP(x) (((x) & SDMA_PKT_HEADER_sub_op_mask) << SDMA_PKT_HEADER_sub_op_shift)
+
+
+/*
+** Definitions for SDMA_PKT_COPY_LINEAR packet
+*/
+
+/*define for HEADER word*/
+/*define for op field*/
+#define SDMA_PKT_COPY_LINEAR_HEADER_op_offset 0
+#define SDMA_PKT_COPY_LINEAR_HEADER_op_mask   0x000000FF
+#define SDMA_PKT_COPY_LINEAR_HEADER_op_shift  0
+#define SDMA_PKT_COPY_LINEAR_HEADER_OP(x) (((x) & SDMA_PKT_COPY_LINEAR_HEADER_op_mask) << SDMA_PKT_COPY_LINEAR_HEADER_op_shift)
+
+/*define for sub_op field*/
+#define SDMA_PKT_COPY_LINEAR_HEADER_sub_op_offset 0
+#define SDMA_PKT_COPY_LINEAR_HEADER_sub_op_mask   0x000000FF
+#define SDMA_PKT_COPY_LINEAR_HEADER_sub_op_shift  8
+#define SDMA_PKT_COPY_LINEAR_HEADER_SUB_OP(x) (((x) & SDMA_PKT_COPY_LINEAR_HEADER_sub_op_mask) << SDMA_PKT_COPY_LINEAR_HEADER_sub_op_shift)
+
+/*define for encrypt field*/
+#define SDMA_PKT_COPY_LINEAR_HEADER_encrypt_offset 0
+#define SDMA_PKT_COPY_LINEAR_HEADER_encrypt_mask   0x00000001
+#define SDMA_PKT_COPY_LINEAR_HEADER_encrypt_shift  16
+#define SDMA_PKT_COPY_LINEAR_HEADER_ENCRYPT(x) (((x) & SDMA_PKT_COPY_LINEAR_HEADER_encrypt_mask) << SDMA_PKT_COPY_LINEAR_HEADER_encrypt_shift)
+
+/*define for tmz field*/
+#define SDMA_PKT_COPY_LINEAR_HEADER_tmz_offset 0
+#define SDMA_PKT_COPY_LINEAR_HEADER_tmz_mask   0x00000001
+#define SDMA_PKT_COPY_LINEAR_HEADER_tmz_shift  18
+#define SDMA_PKT_COPY_LINEAR_HEADER_TMZ(x) (((x) & SDMA_PKT_COPY_LINEAR_HEADER_tmz_mask) << SDMA_PKT_COPY_LINEAR_HEADER_tmz_shift)
+
+/*define for broadcast field*/
+#define SDMA_PKT_COPY_LINEAR_HEADER_broadcast_offset 0
+#define SDMA_PKT_COPY_LINEAR_HEADER_broadcast_mask   0x00000001
+#define SDMA_PKT_COPY_LINEAR_HEADER_broadcast_shift  27
+#define SDMA_PKT_COPY_LINEAR_HEADER_BROADCAST(x) (((x) & SDMA_PKT_COPY_LINEAR_HEADER_broadcast_mask) << SDMA_PKT_COPY_LINEAR_HEADER_broadcast_shift)
+
+/*define for COUNT word*/
+/*define for count field*/
+#define SDMA_PKT_COPY_LINEAR_COUNT_count_offset 1
+#define SDMA_PKT_COPY_LINEAR_COUNT_count_mask   0x003FFFFF
+#define SDMA_PKT_COPY_LINEAR_COUNT_count_shift  0
+#define SDMA_PKT_COPY_LINEAR_COUNT_COUNT(x) (((x) & SDMA_PKT_COPY_LINEAR_COUNT_count_mask) << SDMA_PKT_COPY_LINEAR_COUNT_count_shift)
+
+/*define for PARAMETER word*/
+/*define for dst_sw field*/
+#define SDMA_PKT_COPY_LINEAR_PARAMETER_dst_sw_offset 2
+#define SDMA_PKT_COPY_LINEAR_PARAMETER_dst_sw_mask   0x00000003
+#define SDMA_PKT_COPY_LINEAR_PARAMETER_dst_sw_shift  16
+#define SDMA_PKT_COPY_LINEAR_PARAMETER_DST_SW(x) (((x) & SDMA_PKT_COPY_LINEAR_PARAMETER_dst_sw_mask) << SDMA_PKT_COPY_LINEAR_PARAMETER_dst_sw_shift)
+
+/*define for src_sw field*/
+#define SDMA_PKT_COPY_LINEAR_PARAMETER_src_sw_offset 2
+#define SDMA_PKT_COPY_LINEAR_PARAMETER_src_sw_mask   0x00000003
+#define SDMA_PKT_COPY_LINEAR_PARAMETER_src_sw_shift  24
+#define SDMA_PKT_COPY_LINEAR_PARAMETER_SRC_SW(x) (((x) & SDMA_PKT_COPY_LINEAR_PARAMETER_src_sw_mask) << SDMA_PKT_COPY_LINEAR_PARAMETER_src_sw_shift)
+
+/*define for SRC_ADDR_LO word*/
+/*define for src_addr_31_0 field*/
+#define SDMA_PKT_COPY_LINEAR_SRC_ADDR_LO_src_addr_31_0_offset 3
+#define SDMA_PKT_COPY_LINEAR_SRC_ADDR_LO_src_addr_31_0_mask   0xFFFFFFFF
+#define SDMA_PKT_COPY_LINEAR_SRC_ADDR_LO_src_addr_31_0_shift  0
+#define SDMA_PKT_COPY_LINEAR_SRC_ADDR_LO_SRC_ADDR_31_0(x) (((x) & SDMA_PKT_COPY_LINEAR_SRC_ADDR_LO_src_addr_31_0_mask) << SDMA_PKT_COPY_LINEAR_SRC_ADDR_LO_src_addr_31_0_shift)
+
+/*define for SRC_ADDR_HI word*/
+/*define for src_addr_63_32 field*/
+#define SDMA_PKT_COPY_LINEAR_SRC_ADDR_HI_src_addr_63_32_offset 4
+#define SDMA_PKT_COPY_LINEAR_SRC_ADDR_HI_src_addr_63_32_mask   0xFFFFFFFF
+#define SDMA_PKT_COPY_LINEAR_SRC_ADDR_HI_src_addr_63_32_shift  0
+#define SDMA_PKT_COPY_LINEAR_SRC_ADDR_HI_SRC_ADDR_63_32(x) (((x) & SDMA_PKT_COPY_LINEAR_SRC_ADDR_HI_src_addr_63_32_mask) << SDMA_PKT_COPY_LINEAR_SRC_ADDR_HI_src_addr_63_32_shift)
+
+/*define for DST_ADDR_LO word*/
+/*define for dst_addr_31_0 field*/
+#define SDMA_PKT_COPY_LINEAR_DST_ADDR_LO_dst_addr_31_0_offset 5
+#define SDMA_PKT_COPY_LINEAR_DST_ADDR_LO_dst_addr_31_0_mask   0xFFFFFFFF
+#define SDMA_PKT_COPY_LINEAR_DST_ADDR_LO_dst_addr_31_0_shift  0
+#define SDMA_PKT_COPY_LINEAR_DST_ADDR_LO_DST_ADDR_31_0(x) (((x) & SDMA_PKT_COPY_LINEAR_DST_ADDR_LO_dst_addr_31_0_mask) << SDMA_PKT_COPY_LINEAR_DST_ADDR_LO_dst_addr_31_0_shift)
+
+/*define for DST_ADDR_HI word*/
+/*define for dst_addr_63_32 field*/
+#define SDMA_PKT_COPY_LINEAR_DST_ADDR_HI_dst_addr_63_32_offset 6
+#define SDMA_PKT_COPY_LINEAR_DST_ADDR_HI_dst_addr_63_32_mask   0xFFFFFFFF
+#define SDMA_PKT_COPY_LINEAR_DST_ADDR_HI_dst_addr_63_32_shift  0
+#define SDMA_PKT_COPY_LINEAR_DST_ADDR_HI_DST_ADDR_63_32(x) (((x) & SDMA_PKT_COPY_LINEAR_DST_ADDR_HI_dst_addr_63_32_mask) << SDMA_PKT_COPY_LINEAR_DST_ADDR_HI_dst_addr_63_32_shift)
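/* The pattern above is uniform across packet types: the *_offset value is the
 * dword index of the field within the packet, and the mask/shift pair places
 * the field inside that dword. As an illustrative sketch (not driver code;
 * the helper name is hypothetical, and the two packing macros are repeated
 * here so the example is self-contained), the HEADER dword of a plain linear
 * copy could be assembled as: */

```c
#include <stdint.h>

#define SDMA_OP_COPY            1
#define SDMA_SUBOP_COPY_LINEAR  0

/* Field packing macros with the mask/shift values defined above. */
#define SDMA_PKT_COPY_LINEAR_HEADER_OP(x)     (((x) & 0x000000FF) << 0)
#define SDMA_PKT_COPY_LINEAR_HEADER_SUB_OP(x) (((x) & 0x000000FF) << 8)

/* Hypothetical helper: build packet dword 0 for a plain linear copy
 * (op in bits 7:0, sub_op in bits 15:8, all optional flags clear). */
static inline uint32_t sdma_copy_linear_header(void)
{
	return SDMA_PKT_COPY_LINEAR_HEADER_OP(SDMA_OP_COPY) |
	       SDMA_PKT_COPY_LINEAR_HEADER_SUB_OP(SDMA_SUBOP_COPY_LINEAR);
}
```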
+
+
+/*
+** Definitions for SDMA_PKT_COPY_DIRTY_PAGE packet
+*/
+
+/*define for HEADER word*/
+/*define for op field*/
+#define SDMA_PKT_COPY_DIRTY_PAGE_HEADER_op_offset 0
+#define SDMA_PKT_COPY_DIRTY_PAGE_HEADER_op_mask   0x000000FF
+#define SDMA_PKT_COPY_DIRTY_PAGE_HEADER_op_shift  0
+#define SDMA_PKT_COPY_DIRTY_PAGE_HEADER_OP(x) (((x) & SDMA_PKT_COPY_DIRTY_PAGE_HEADER_op_mask) << SDMA_PKT_COPY_DIRTY_PAGE_HEADER_op_shift)
+
+/*define for sub_op field*/
+#define SDMA_PKT_COPY_DIRTY_PAGE_HEADER_sub_op_offset 0
+#define SDMA_PKT_COPY_DIRTY_PAGE_HEADER_sub_op_mask   0x000000FF
+#define SDMA_PKT_COPY_DIRTY_PAGE_HEADER_sub_op_shift  8
+#define SDMA_PKT_COPY_DIRTY_PAGE_HEADER_SUB_OP(x) (((x) & SDMA_PKT_COPY_DIRTY_PAGE_HEADER_sub_op_mask) << SDMA_PKT_COPY_DIRTY_PAGE_HEADER_sub_op_shift)
+
+/*define for tmz field*/
+#define SDMA_PKT_COPY_DIRTY_PAGE_HEADER_tmz_offset 0
+#define SDMA_PKT_COPY_DIRTY_PAGE_HEADER_tmz_mask   0x00000001
+#define SDMA_PKT_COPY_DIRTY_PAGE_HEADER_tmz_shift  18
+#define SDMA_PKT_COPY_DIRTY_PAGE_HEADER_TMZ(x) (((x) & SDMA_PKT_COPY_DIRTY_PAGE_HEADER_tmz_mask) << SDMA_PKT_COPY_DIRTY_PAGE_HEADER_tmz_shift)
+
+/*define for all field*/
+#define SDMA_PKT_COPY_DIRTY_PAGE_HEADER_all_offset 0
+#define SDMA_PKT_COPY_DIRTY_PAGE_HEADER_all_mask   0x00000001
+#define SDMA_PKT_COPY_DIRTY_PAGE_HEADER_all_shift  31
+#define SDMA_PKT_COPY_DIRTY_PAGE_HEADER_ALL(x) (((x) & SDMA_PKT_COPY_DIRTY_PAGE_HEADER_all_mask) << SDMA_PKT_COPY_DIRTY_PAGE_HEADER_all_shift)
+
+/*define for COUNT word*/
+/*define for count field*/
+#define SDMA_PKT_COPY_DIRTY_PAGE_COUNT_count_offset 1
+#define SDMA_PKT_COPY_DIRTY_PAGE_COUNT_count_mask   0x003FFFFF
+#define SDMA_PKT_COPY_DIRTY_PAGE_COUNT_count_shift  0
+#define SDMA_PKT_COPY_DIRTY_PAGE_COUNT_COUNT(x) (((x) & SDMA_PKT_COPY_DIRTY_PAGE_COUNT_count_mask) << SDMA_PKT_COPY_DIRTY_PAGE_COUNT_count_shift)
+
+/*define for PARAMETER word*/
+/*define for dst_sw field*/
+#define SDMA_PKT_COPY_DIRTY_PAGE_PARAMETER_dst_sw_offset 2
+#define SDMA_PKT_COPY_DIRTY_PAGE_PARAMETER_dst_sw_mask   0x00000003
+#define SDMA_PKT_COPY_DIRTY_PAGE_PARAMETER_dst_sw_shift  16
+#define SDMA_PKT_COPY_DIRTY_PAGE_PARAMETER_DST_SW(x) (((x) & SDMA_PKT_COPY_DIRTY_PAGE_PARAMETER_dst_sw_mask) << SDMA_PKT_COPY_DIRTY_PAGE_PARAMETER_dst_sw_shift)
+
+/*define for dst_gcc field*/
+#define SDMA_PKT_COPY_DIRTY_PAGE_PARAMETER_dst_gcc_offset 2
+#define SDMA_PKT_COPY_DIRTY_PAGE_PARAMETER_dst_gcc_mask   0x00000001
+#define SDMA_PKT_COPY_DIRTY_PAGE_PARAMETER_dst_gcc_shift  19
+#define SDMA_PKT_COPY_DIRTY_PAGE_PARAMETER_DST_GCC(x) (((x) & SDMA_PKT_COPY_DIRTY_PAGE_PARAMETER_dst_gcc_mask) << SDMA_PKT_COPY_DIRTY_PAGE_PARAMETER_dst_gcc_shift)
+
+/*define for dst_sys field*/
+#define SDMA_PKT_COPY_DIRTY_PAGE_PARAMETER_dst_sys_offset 2
+#define SDMA_PKT_COPY_DIRTY_PAGE_PARAMETER_dst_sys_mask   0x00000001
+#define SDMA_PKT_COPY_DIRTY_PAGE_PARAMETER_dst_sys_shift  20
+#define SDMA_PKT_COPY_DIRTY_PAGE_PARAMETER_DST_SYS(x) (((x) & SDMA_PKT_COPY_DIRTY_PAGE_PARAMETER_dst_sys_mask) << SDMA_PKT_COPY_DIRTY_PAGE_PARAMETER_dst_sys_shift)
+
+/*define for dst_snoop field*/
+#define SDMA_PKT_COPY_DIRTY_PAGE_PARAMETER_dst_snoop_offset 2
+#define SDMA_PKT_COPY_DIRTY_PAGE_PARAMETER_dst_snoop_mask   0x00000001
+#define SDMA_PKT_COPY_DIRTY_PAGE_PARAMETER_dst_snoop_shift  22
+#define SDMA_PKT_COPY_DIRTY_PAGE_PARAMETER_DST_SNOOP(x) (((x) & SDMA_PKT_COPY_DIRTY_PAGE_PARAMETER_dst_snoop_mask) << SDMA_PKT_COPY_DIRTY_PAGE_PARAMETER_dst_snoop_shift)
+
+/*define for dst_gpa field*/
+#define SDMA_PKT_COPY_DIRTY_PAGE_PARAMETER_dst_gpa_offset 2
+#define SDMA_PKT_COPY_DIRTY_PAGE_PARAMETER_dst_gpa_mask   0x00000001
+#define SDMA_PKT_COPY_DIRTY_PAGE_PARAMETER_dst_gpa_shift  23
+#define SDMA_PKT_COPY_DIRTY_PAGE_PARAMETER_DST_GPA(x) (((x) & SDMA_PKT_COPY_DIRTY_PAGE_PARAMETER_dst_gpa_mask) << SDMA_PKT_COPY_DIRTY_PAGE_PARAMETER_dst_gpa_shift)
+
+/*define for src_sw field*/
+#define SDMA_PKT_COPY_DIRTY_PAGE_PARAMETER_src_sw_offset 2
+#define SDMA_PKT_COPY_DIRTY_PAGE_PARAMETER_src_sw_mask   0x00000003
+#define SDMA_PKT_COPY_DIRTY_PAGE_PARAMETER_src_sw_shift  24
+#define SDMA_PKT_COPY_DIRTY_PAGE_PARAMETER_SRC_SW(x) (((x) & SDMA_PKT_COPY_DIRTY_PAGE_PARAMETER_src_sw_mask) << SDMA_PKT_COPY_DIRTY_PAGE_PARAMETER_src_sw_shift)
+
+/*define for src_sys field*/
+#define SDMA_PKT_COPY_DIRTY_PAGE_PARAMETER_src_sys_offset 2
+#define SDMA_PKT_COPY_DIRTY_PAGE_PARAMETER_src_sys_mask   0x00000001
+#define SDMA_PKT_COPY_DIRTY_PAGE_PARAMETER_src_sys_shift  28
+#define SDMA_PKT_COPY_DIRTY_PAGE_PARAMETER_SRC_SYS(x) (((x) & SDMA_PKT_COPY_DIRTY_PAGE_PARAMETER_src_sys_mask) << SDMA_PKT_COPY_DIRTY_PAGE_PARAMETER_src_sys_shift)
+
+/*define for src_snoop field*/
+#define SDMA_PKT_COPY_DIRTY_PAGE_PARAMETER_src_snoop_offset 2
+#define SDMA_PKT_COPY_DIRTY_PAGE_PARAMETER_src_snoop_mask   0x00000001
+#define SDMA_PKT_COPY_DIRTY_PAGE_PARAMETER_src_snoop_shift  30
+#define SDMA_PKT_COPY_DIRTY_PAGE_PARAMETER_SRC_SNOOP(x) (((x) & SDMA_PKT_COPY_DIRTY_PAGE_PARAMETER_src_snoop_mask) << SDMA_PKT_COPY_DIRTY_PAGE_PARAMETER_src_snoop_shift)
+
+/*define for src_gpa field*/
+#define SDMA_PKT_COPY_DIRTY_PAGE_PARAMETER_src_gpa_offset 2
+#define SDMA_PKT_COPY_DIRTY_PAGE_PARAMETER_src_gpa_mask   0x00000001
+#define SDMA_PKT_COPY_DIRTY_PAGE_PARAMETER_src_gpa_shift  31
+#define SDMA_PKT_COPY_DIRTY_PAGE_PARAMETER_SRC_GPA(x) (((x) & SDMA_PKT_COPY_DIRTY_PAGE_PARAMETER_src_gpa_mask) << SDMA_PKT_COPY_DIRTY_PAGE_PARAMETER_src_gpa_shift)
+
+/*define for SRC_ADDR_LO word*/
+/*define for src_addr_31_0 field*/
+#define SDMA_PKT_COPY_DIRTY_PAGE_SRC_ADDR_LO_src_addr_31_0_offset 3
+#define SDMA_PKT_COPY_DIRTY_PAGE_SRC_ADDR_LO_src_addr_31_0_mask   0xFFFFFFFF
+#define SDMA_PKT_COPY_DIRTY_PAGE_SRC_ADDR_LO_src_addr_31_0_shift  0
+#define SDMA_PKT_COPY_DIRTY_PAGE_SRC_ADDR_LO_SRC_ADDR_31_0(x) (((x) & SDMA_PKT_COPY_DIRTY_PAGE_SRC_ADDR_LO_src_addr_31_0_mask) << SDMA_PKT_COPY_DIRTY_PAGE_SRC_ADDR_LO_src_addr_31_0_shift)
+
+/*define for SRC_ADDR_HI word*/
+/*define for src_addr_63_32 field*/
+#define SDMA_PKT_COPY_DIRTY_PAGE_SRC_ADDR_HI_src_addr_63_32_offset 4
+#define SDMA_PKT_COPY_DIRTY_PAGE_SRC_ADDR_HI_src_addr_63_32_mask   0xFFFFFFFF
+#define SDMA_PKT_COPY_DIRTY_PAGE_SRC_ADDR_HI_src_addr_63_32_shift  0
+#define SDMA_PKT_COPY_DIRTY_PAGE_SRC_ADDR_HI_SRC_ADDR_63_32(x) (((x) & SDMA_PKT_COPY_DIRTY_PAGE_SRC_ADDR_HI_src_addr_63_32_mask) << SDMA_PKT_COPY_DIRTY_PAGE_SRC_ADDR_HI_src_addr_63_32_shift)
+
+/*define for DST_ADDR_LO word*/
+/*define for dst_addr_31_0 field*/
+#define SDMA_PKT_COPY_DIRTY_PAGE_DST_ADDR_LO_dst_addr_31_0_offset 5
+#define SDMA_PKT_COPY_DIRTY_PAGE_DST_ADDR_LO_dst_addr_31_0_mask   0xFFFFFFFF
+#define SDMA_PKT_COPY_DIRTY_PAGE_DST_ADDR_LO_dst_addr_31_0_shift  0
+#define SDMA_PKT_COPY_DIRTY_PAGE_DST_ADDR_LO_DST_ADDR_31_0(x) (((x) & SDMA_PKT_COPY_DIRTY_PAGE_DST_ADDR_LO_dst_addr_31_0_mask) << SDMA_PKT_COPY_DIRTY_PAGE_DST_ADDR_LO_dst_addr_31_0_shift)
+
+/*define for DST_ADDR_HI word*/
+/*define for dst_addr_63_32 field*/
+#define SDMA_PKT_COPY_DIRTY_PAGE_DST_ADDR_HI_dst_addr_63_32_offset 6
+#define SDMA_PKT_COPY_DIRTY_PAGE_DST_ADDR_HI_dst_addr_63_32_mask   0xFFFFFFFF
+#define SDMA_PKT_COPY_DIRTY_PAGE_DST_ADDR_HI_dst_addr_63_32_shift  0
+#define SDMA_PKT_COPY_DIRTY_PAGE_DST_ADDR_HI_DST_ADDR_63_32(x) (((x) & SDMA_PKT_COPY_DIRTY_PAGE_DST_ADDR_HI_dst_addr_63_32_mask) << SDMA_PKT_COPY_DIRTY_PAGE_DST_ADDR_HI_dst_addr_63_32_shift)
+
+
+/*
+** Definitions for SDMA_PKT_COPY_PHYSICAL_LINEAR packet
+*/
+
+/*define for HEADER word*/
+/*define for op field*/
+#define SDMA_PKT_COPY_PHYSICAL_LINEAR_HEADER_op_offset 0
+#define SDMA_PKT_COPY_PHYSICAL_LINEAR_HEADER_op_mask   0x000000FF
+#define SDMA_PKT_COPY_PHYSICAL_LINEAR_HEADER_op_shift  0
+#define SDMA_PKT_COPY_PHYSICAL_LINEAR_HEADER_OP(x) (((x) & SDMA_PKT_COPY_PHYSICAL_LINEAR_HEADER_op_mask) << SDMA_PKT_COPY_PHYSICAL_LINEAR_HEADER_op_shift)
+
+/*define for sub_op field*/
+#define SDMA_PKT_COPY_PHYSICAL_LINEAR_HEADER_sub_op_offset 0
+#define SDMA_PKT_COPY_PHYSICAL_LINEAR_HEADER_sub_op_mask   0x000000FF
+#define SDMA_PKT_COPY_PHYSICAL_LINEAR_HEADER_sub_op_shift  8
+#define SDMA_PKT_COPY_PHYSICAL_LINEAR_HEADER_SUB_OP(x) (((x) & SDMA_PKT_COPY_PHYSICAL_LINEAR_HEADER_sub_op_mask) << SDMA_PKT_COPY_PHYSICAL_LINEAR_HEADER_sub_op_shift)
+
+/*define for tmz field*/
+#define SDMA_PKT_COPY_PHYSICAL_LINEAR_HEADER_tmz_offset 0
+#define SDMA_PKT_COPY_PHYSICAL_LINEAR_HEADER_tmz_mask   0x00000001
+#define SDMA_PKT_COPY_PHYSICAL_LINEAR_HEADER_tmz_shift  18
+#define SDMA_PKT_COPY_PHYSICAL_LINEAR_HEADER_TMZ(x) (((x) & SDMA_PKT_COPY_PHYSICAL_LINEAR_HEADER_tmz_mask) << SDMA_PKT_COPY_PHYSICAL_LINEAR_HEADER_tmz_shift)
+
+/*define for COUNT word*/
+/*define for count field*/
+#define SDMA_PKT_COPY_PHYSICAL_LINEAR_COUNT_count_offset 1
+#define SDMA_PKT_COPY_PHYSICAL_LINEAR_COUNT_count_mask   0x003FFFFF
+#define SDMA_PKT_COPY_PHYSICAL_LINEAR_COUNT_count_shift  0
+#define SDMA_PKT_COPY_PHYSICAL_LINEAR_COUNT_COUNT(x) (((x) & SDMA_PKT_COPY_PHYSICAL_LINEAR_COUNT_count_mask) << SDMA_PKT_COPY_PHYSICAL_LINEAR_COUNT_count_shift)
+
+/*define for PARAMETER word*/
+/*define for dst_sw field*/
+#define SDMA_PKT_COPY_PHYSICAL_LINEAR_PARAMETER_dst_sw_offset 2
+#define SDMA_PKT_COPY_PHYSICAL_LINEAR_PARAMETER_dst_sw_mask   0x00000003
+#define SDMA_PKT_COPY_PHYSICAL_LINEAR_PARAMETER_dst_sw_shift  16
+#define SDMA_PKT_COPY_PHYSICAL_LINEAR_PARAMETER_DST_SW(x) (((x) & SDMA_PKT_COPY_PHYSICAL_LINEAR_PARAMETER_dst_sw_mask) << SDMA_PKT_COPY_PHYSICAL_LINEAR_PARAMETER_dst_sw_shift)
+
+/*define for dst_gcc field*/
+#define SDMA_PKT_COPY_PHYSICAL_LINEAR_PARAMETER_dst_gcc_offset 2
+#define SDMA_PKT_COPY_PHYSICAL_LINEAR_PARAMETER_dst_gcc_mask   0x00000001
+#define SDMA_PKT_COPY_PHYSICAL_LINEAR_PARAMETER_dst_gcc_shift  19
+#define SDMA_PKT_COPY_PHYSICAL_LINEAR_PARAMETER_DST_GCC(x) (((x) & SDMA_PKT_COPY_PHYSICAL_LINEAR_PARAMETER_dst_gcc_mask) << SDMA_PKT_COPY_PHYSICAL_LINEAR_PARAMETER_dst_gcc_shift)
+
+/*define for dst_sys field*/
+#define SDMA_PKT_COPY_PHYSICAL_LINEAR_PARAMETER_dst_sys_offset 2
+#define SDMA_PKT_COPY_PHYSICAL_LINEAR_PARAMETER_dst_sys_mask   0x00000001
+#define SDMA_PKT_COPY_PHYSICAL_LINEAR_PARAMETER_dst_sys_shift  20
+#define SDMA_PKT_COPY_PHYSICAL_LINEAR_PARAMETER_DST_SYS(x) (((x) & SDMA_PKT_COPY_PHYSICAL_LINEAR_PARAMETER_dst_sys_mask) << SDMA_PKT_COPY_PHYSICAL_LINEAR_PARAMETER_dst_sys_shift)
+
+/*define for dst_log field*/
+#define SDMA_PKT_COPY_PHYSICAL_LINEAR_PARAMETER_dst_log_offset 2
+#define SDMA_PKT_COPY_PHYSICAL_LINEAR_PARAMETER_dst_log_mask   0x00000001
+#define SDMA_PKT_COPY_PHYSICAL_LINEAR_PARAMETER_dst_log_shift  21
+#define SDMA_PKT_COPY_PHYSICAL_LINEAR_PARAMETER_DST_LOG(x) (((x) & SDMA_PKT_COPY_PHYSICAL_LINEAR_PARAMETER_dst_log_mask) << SDMA_PKT_COPY_PHYSICAL_LINEAR_PARAMETER_dst_log_shift)
+
+/*define for dst_snoop field*/
+#define SDMA_PKT_COPY_PHYSICAL_LINEAR_PARAMETER_dst_snoop_offset 2
+#define SDMA_PKT_COPY_PHYSICAL_LINEAR_PARAMETER_dst_snoop_mask   0x00000001
+#define SDMA_PKT_COPY_PHYSICAL_LINEAR_PARAMETER_dst_snoop_shift  22
+#define SDMA_PKT_COPY_PHYSICAL_LINEAR_PARAMETER_DST_SNOOP(x) (((x) & SDMA_PKT_COPY_PHYSICAL_LINEAR_PARAMETER_dst_snoop_mask) << SDMA_PKT_COPY_PHYSICAL_LINEAR_PARAMETER_dst_snoop_shift)
+
+/*define for dst_gpa field*/
+#define SDMA_PKT_COPY_PHYSICAL_LINEAR_PARAMETER_dst_gpa_offset 2
+#define SDMA_PKT_COPY_PHYSICAL_LINEAR_PARAMETER_dst_gpa_mask   0x00000001
+#define SDMA_PKT_COPY_PHYSICAL_LINEAR_PARAMETER_dst_gpa_shift  23
+#define SDMA_PKT_COPY_PHYSICAL_LINEAR_PARAMETER_DST_GPA(x) (((x) & SDMA_PKT_COPY_PHYSICAL_LINEAR_PARAMETER_dst_gpa_mask) << SDMA_PKT_COPY_PHYSICAL_LINEAR_PARAMETER_dst_gpa_shift)
+
+/*define for src_sw field*/
+#define SDMA_PKT_COPY_PHYSICAL_LINEAR_PARAMETER_src_sw_offset 2
+#define SDMA_PKT_COPY_PHYSICAL_LINEAR_PARAMETER_src_sw_mask   0x00000003
+#define SDMA_PKT_COPY_PHYSICAL_LINEAR_PARAMETER_src_sw_shift  24
+#define SDMA_PKT_COPY_PHYSICAL_LINEAR_PARAMETER_SRC_SW(x) (((x) & SDMA_PKT_COPY_PHYSICAL_LINEAR_PARAMETER_src_sw_mask) << SDMA_PKT_COPY_PHYSICAL_LINEAR_PARAMETER_src_sw_shift)
+
+/*define for src_gcc field*/
+#define SDMA_PKT_COPY_PHYSICAL_LINEAR_PARAMETER_src_gcc_offset 2
+#define SDMA_PKT_COPY_PHYSICAL_LINEAR_PARAMETER_src_gcc_mask   0x00000001
+#define SDMA_PKT_COPY_PHYSICAL_LINEAR_PARAMETER_src_gcc_shift  27
+#define SDMA_PKT_COPY_PHYSICAL_LINEAR_PARAMETER_SRC_GCC(x) (((x) & SDMA_PKT_COPY_PHYSICAL_LINEAR_PARAMETER_src_gcc_mask) << SDMA_PKT_COPY_PHYSICAL_LINEAR_PARAMETER_src_gcc_shift)
+
+/*define for src_sys field*/
+#define SDMA_PKT_COPY_PHYSICAL_LINEAR_PARAMETER_src_sys_offset 2
+#define SDMA_PKT_COPY_PHYSICAL_LINEAR_PARAMETER_src_sys_mask   0x00000001
+#define SDMA_PKT_COPY_PHYSICAL_LINEAR_PARAMETER_src_sys_shift  28
+#define SDMA_PKT_COPY_PHYSICAL_LINEAR_PARAMETER_SRC_SYS(x) (((x) & SDMA_PKT_COPY_PHYSICAL_LINEAR_PARAMETER_src_sys_mask) << SDMA_PKT_COPY_PHYSICAL_LINEAR_PARAMETER_src_sys_shift)
+
+/*define for src_snoop field*/
+#define SDMA_PKT_COPY_PHYSICAL_LINEAR_PARAMETER_src_snoop_offset 2
+#define SDMA_PKT_COPY_PHYSICAL_LINEAR_PARAMETER_src_snoop_mask   0x00000001
+#define SDMA_PKT_COPY_PHYSICAL_LINEAR_PARAMETER_src_snoop_shift  30
+#define SDMA_PKT_COPY_PHYSICAL_LINEAR_PARAMETER_SRC_SNOOP(x) (((x) & SDMA_PKT_COPY_PHYSICAL_LINEAR_PARAMETER_src_snoop_mask) << SDMA_PKT_COPY_PHYSICAL_LINEAR_PARAMETER_src_snoop_shift)
+
+/*define for src_gpa field*/
+#define SDMA_PKT_COPY_PHYSICAL_LINEAR_PARAMETER_src_gpa_offset 2
+#define SDMA_PKT_COPY_PHYSICAL_LINEAR_PARAMETER_src_gpa_mask   0x00000001
+#define SDMA_PKT_COPY_PHYSICAL_LINEAR_PARAMETER_src_gpa_shift  31
+#define SDMA_PKT_COPY_PHYSICAL_LINEAR_PARAMETER_SRC_GPA(x) (((x) & SDMA_PKT_COPY_PHYSICAL_LINEAR_PARAMETER_src_gpa_mask) << SDMA_PKT_COPY_PHYSICAL_LINEAR_PARAMETER_src_gpa_shift)
+
+/*define for SRC_ADDR_LO word*/
+/*define for src_addr_31_0 field*/
+#define SDMA_PKT_COPY_PHYSICAL_LINEAR_SRC_ADDR_LO_src_addr_31_0_offset 3
+#define SDMA_PKT_COPY_PHYSICAL_LINEAR_SRC_ADDR_LO_src_addr_31_0_mask   0xFFFFFFFF
+#define SDMA_PKT_COPY_PHYSICAL_LINEAR_SRC_ADDR_LO_src_addr_31_0_shift  0
+#define SDMA_PKT_COPY_PHYSICAL_LINEAR_SRC_ADDR_LO_SRC_ADDR_31_0(x) (((x) & SDMA_PKT_COPY_PHYSICAL_LINEAR_SRC_ADDR_LO_src_addr_31_0_mask) << SDMA_PKT_COPY_PHYSICAL_LINEAR_SRC_ADDR_LO_src_addr_31_0_shift)
+
+/*define for SRC_ADDR_HI word*/
+/*define for src_addr_63_32 field*/
+#define SDMA_PKT_COPY_PHYSICAL_LINEAR_SRC_ADDR_HI_src_addr_63_32_offset 4
+#define SDMA_PKT_COPY_PHYSICAL_LINEAR_SRC_ADDR_HI_src_addr_63_32_mask   0xFFFFFFFF
+#define SDMA_PKT_COPY_PHYSICAL_LINEAR_SRC_ADDR_HI_src_addr_63_32_shift  0
+#define SDMA_PKT_COPY_PHYSICAL_LINEAR_SRC_ADDR_HI_SRC_ADDR_63_32(x) (((x) & SDMA_PKT_COPY_PHYSICAL_LINEAR_SRC_ADDR_HI_src_addr_63_32_mask) << SDMA_PKT_COPY_PHYSICAL_LINEAR_SRC_ADDR_HI_src_addr_63_32_shift)
+
+/*define for DST_ADDR_LO word*/
+/*define for dst_addr_31_0 field*/
+#define SDMA_PKT_COPY_PHYSICAL_LINEAR_DST_ADDR_LO_dst_addr_31_0_offset 5
+#define SDMA_PKT_COPY_PHYSICAL_LINEAR_DST_ADDR_LO_dst_addr_31_0_mask   0xFFFFFFFF
+#define SDMA_PKT_COPY_PHYSICAL_LINEAR_DST_ADDR_LO_dst_addr_31_0_shift  0
+#define SDMA_PKT_COPY_PHYSICAL_LINEAR_DST_ADDR_LO_DST_ADDR_31_0(x) (((x) & SDMA_PKT_COPY_PHYSICAL_LINEAR_DST_ADDR_LO_dst_addr_31_0_mask) << SDMA_PKT_COPY_PHYSICAL_LINEAR_DST_ADDR_LO_dst_addr_31_0_shift)
+
+/*define for DST_ADDR_HI word*/
+/*define for dst_addr_63_32 field*/
+#define SDMA_PKT_COPY_PHYSICAL_LINEAR_DST_ADDR_HI_dst_addr_63_32_offset 6
+#define SDMA_PKT_COPY_PHYSICAL_LINEAR_DST_ADDR_HI_dst_addr_63_32_mask   0xFFFFFFFF
+#define SDMA_PKT_COPY_PHYSICAL_LINEAR_DST_ADDR_HI_dst_addr_63_32_shift  0
+#define SDMA_PKT_COPY_PHYSICAL_LINEAR_DST_ADDR_HI_DST_ADDR_63_32(x) (((x) & SDMA_PKT_COPY_PHYSICAL_LINEAR_DST_ADDR_HI_dst_addr_63_32_mask) << SDMA_PKT_COPY_PHYSICAL_LINEAR_DST_ADDR_HI_dst_addr_63_32_shift)
+
+
+/*
+** Definitions for SDMA_PKT_COPY_BROADCAST_LINEAR packet
+*/
+
+/*define for HEADER word*/
+/*define for op field*/
+#define SDMA_PKT_COPY_BROADCAST_LINEAR_HEADER_op_offset 0
+#define SDMA_PKT_COPY_BROADCAST_LINEAR_HEADER_op_mask   0x000000FF
+#define SDMA_PKT_COPY_BROADCAST_LINEAR_HEADER_op_shift  0
+#define SDMA_PKT_COPY_BROADCAST_LINEAR_HEADER_OP(x) (((x) & SDMA_PKT_COPY_BROADCAST_LINEAR_HEADER_op_mask) << SDMA_PKT_COPY_BROADCAST_LINEAR_HEADER_op_shift)
+
+/*define for sub_op field*/
+#define SDMA_PKT_COPY_BROADCAST_LINEAR_HEADER_sub_op_offset 0
+#define SDMA_PKT_COPY_BROADCAST_LINEAR_HEADER_sub_op_mask   0x000000FF
+#define SDMA_PKT_COPY_BROADCAST_LINEAR_HEADER_sub_op_shift  8
+#define SDMA_PKT_COPY_BROADCAST_LINEAR_HEADER_SUB_OP(x) (((x) & SDMA_PKT_COPY_BROADCAST_LINEAR_HEADER_sub_op_mask) << SDMA_PKT_COPY_BROADCAST_LINEAR_HEADER_sub_op_shift)
+
+/*define for encrypt field*/
+#define SDMA_PKT_COPY_BROADCAST_LINEAR_HEADER_encrypt_offset 0
+#define SDMA_PKT_COPY_BROADCAST_LINEAR_HEADER_encrypt_mask   0x00000001
+#define SDMA_PKT_COPY_BROADCAST_LINEAR_HEADER_encrypt_shift  16
+#define SDMA_PKT_COPY_BROADCAST_LINEAR_HEADER_ENCRYPT(x) (((x) & SDMA_PKT_COPY_BROADCAST_LINEAR_HEADER_encrypt_mask) << SDMA_PKT_COPY_BROADCAST_LINEAR_HEADER_encrypt_shift)
+
+/*define for tmz field*/
+#define SDMA_PKT_COPY_BROADCAST_LINEAR_HEADER_tmz_offset 0
+#define SDMA_PKT_COPY_BROADCAST_LINEAR_HEADER_tmz_mask   0x00000001
+#define SDMA_PKT_COPY_BROADCAST_LINEAR_HEADER_tmz_shift  18
+#define SDMA_PKT_COPY_BROADCAST_LINEAR_HEADER_TMZ(x) (((x) & SDMA_PKT_COPY_BROADCAST_LINEAR_HEADER_tmz_mask) << SDMA_PKT_COPY_BROADCAST_LINEAR_HEADER_tmz_shift)
+
+/*define for broadcast field*/
+#define SDMA_PKT_COPY_BROADCAST_LINEAR_HEADER_broadcast_offset 0
+#define SDMA_PKT_COPY_BROADCAST_LINEAR_HEADER_broadcast_mask   0x00000001
+#define SDMA_PKT_COPY_BROADCAST_LINEAR_HEADER_broadcast_shift  27
+#define SDMA_PKT_COPY_BROADCAST_LINEAR_HEADER_BROADCAST(x) (((x) & SDMA_PKT_COPY_BROADCAST_LINEAR_HEADER_broadcast_mask) << SDMA_PKT_COPY_BROADCAST_LINEAR_HEADER_broadcast_shift)
+
+/*define for COUNT word*/
+/*define for count field*/
+#define SDMA_PKT_COPY_BROADCAST_LINEAR_COUNT_count_offset 1
+#define SDMA_PKT_COPY_BROADCAST_LINEAR_COUNT_count_mask   0x003FFFFF
+#define SDMA_PKT_COPY_BROADCAST_LINEAR_COUNT_count_shift  0
+#define SDMA_PKT_COPY_BROADCAST_LINEAR_COUNT_COUNT(x) (((x) & SDMA_PKT_COPY_BROADCAST_LINEAR_COUNT_count_mask) << SDMA_PKT_COPY_BROADCAST_LINEAR_COUNT_count_shift)
+
+/*define for PARAMETER word*/
+/*define for dst2_sw field*/
+#define SDMA_PKT_COPY_BROADCAST_LINEAR_PARAMETER_dst2_sw_offset 2
+#define SDMA_PKT_COPY_BROADCAST_LINEAR_PARAMETER_dst2_sw_mask   0x00000003
+#define SDMA_PKT_COPY_BROADCAST_LINEAR_PARAMETER_dst2_sw_shift  8
+#define SDMA_PKT_COPY_BROADCAST_LINEAR_PARAMETER_DST2_SW(x) (((x) & SDMA_PKT_COPY_BROADCAST_LINEAR_PARAMETER_dst2_sw_mask) << SDMA_PKT_COPY_BROADCAST_LINEAR_PARAMETER_dst2_sw_shift)
+
+/*define for dst1_sw field*/
+#define SDMA_PKT_COPY_BROADCAST_LINEAR_PARAMETER_dst1_sw_offset 2
+#define SDMA_PKT_COPY_BROADCAST_LINEAR_PARAMETER_dst1_sw_mask   0x00000003
+#define SDMA_PKT_COPY_BROADCAST_LINEAR_PARAMETER_dst1_sw_shift  16
+#define SDMA_PKT_COPY_BROADCAST_LINEAR_PARAMETER_DST1_SW(x) (((x) & SDMA_PKT_COPY_BROADCAST_LINEAR_PARAMETER_dst1_sw_mask) << SDMA_PKT_COPY_BROADCAST_LINEAR_PARAMETER_dst1_sw_shift)
+
+/*define for src_sw field*/
+#define SDMA_PKT_COPY_BROADCAST_LINEAR_PARAMETER_src_sw_offset 2
+#define SDMA_PKT_COPY_BROADCAST_LINEAR_PARAMETER_src_sw_mask   0x00000003
+#define SDMA_PKT_COPY_BROADCAST_LINEAR_PARAMETER_src_sw_shift  24
+#define SDMA_PKT_COPY_BROADCAST_LINEAR_PARAMETER_SRC_SW(x) (((x) & SDMA_PKT_COPY_BROADCAST_LINEAR_PARAMETER_src_sw_mask) << SDMA_PKT_COPY_BROADCAST_LINEAR_PARAMETER_src_sw_shift)
+
+/*define for SRC_ADDR_LO word*/
+/*define for src_addr_31_0 field*/
+#define SDMA_PKT_COPY_BROADCAST_LINEAR_SRC_ADDR_LO_src_addr_31_0_offset 3
+#define SDMA_PKT_COPY_BROADCAST_LINEAR_SRC_ADDR_LO_src_addr_31_0_mask   0xFFFFFFFF
+#define SDMA_PKT_COPY_BROADCAST_LINEAR_SRC_ADDR_LO_src_addr_31_0_shift  0
+#define SDMA_PKT_COPY_BROADCAST_LINEAR_SRC_ADDR_LO_SRC_ADDR_31_0(x) (((x) & SDMA_PKT_COPY_BROADCAST_LINEAR_SRC_ADDR_LO_src_addr_31_0_mask) << SDMA_PKT_COPY_BROADCAST_LINEAR_SRC_ADDR_LO_src_addr_31_0_shift)
+
+/*define for SRC_ADDR_HI word*/
+/*define for src_addr_63_32 field*/
+#define SDMA_PKT_COPY_BROADCAST_LINEAR_SRC_ADDR_HI_src_addr_63_32_offset 4
+#define SDMA_PKT_COPY_BROADCAST_LINEAR_SRC_ADDR_HI_src_addr_63_32_mask   0xFFFFFFFF
+#define SDMA_PKT_COPY_BROADCAST_LINEAR_SRC_ADDR_HI_src_addr_63_32_shift  0
+#define SDMA_PKT_COPY_BROADCAST_LINEAR_SRC_ADDR_HI_SRC_ADDR_63_32(x) (((x) & SDMA_PKT_COPY_BROADCAST_LINEAR_SRC_ADDR_HI_src_addr_63_32_mask) << SDMA_PKT_COPY_BROADCAST_LINEAR_SRC_ADDR_HI_src_addr_63_32_shift)
+
+/*define for DST1_ADDR_LO word*/
+/*define for dst1_addr_31_0 field*/
+#define SDMA_PKT_COPY_BROADCAST_LINEAR_DST1_ADDR_LO_dst1_addr_31_0_offset 5
+#define SDMA_PKT_COPY_BROADCAST_LINEAR_DST1_ADDR_LO_dst1_addr_31_0_mask   0xFFFFFFFF
+#define SDMA_PKT_COPY_BROADCAST_LINEAR_DST1_ADDR_LO_dst1_addr_31_0_shift  0
+#define SDMA_PKT_COPY_BROADCAST_LINEAR_DST1_ADDR_LO_DST1_ADDR_31_0(x) (((x) & SDMA_PKT_COPY_BROADCAST_LINEAR_DST1_ADDR_LO_dst1_addr_31_0_mask) << SDMA_PKT_COPY_BROADCAST_LINEAR_DST1_ADDR_LO_dst1_addr_31_0_shift)
+
+/*define for DST1_ADDR_HI word*/
+/*define for dst1_addr_63_32 field*/
+#define SDMA_PKT_COPY_BROADCAST_LINEAR_DST1_ADDR_HI_dst1_addr_63_32_offset 6
+#define SDMA_PKT_COPY_BROADCAST_LINEAR_DST1_ADDR_HI_dst1_addr_63_32_mask   0xFFFFFFFF
+#define SDMA_PKT_COPY_BROADCAST_LINEAR_DST1_ADDR_HI_dst1_addr_63_32_shift  0
+#define SDMA_PKT_COPY_BROADCAST_LINEAR_DST1_ADDR_HI_DST1_ADDR_63_32(x) (((x) & SDMA_PKT_COPY_BROADCAST_LINEAR_DST1_ADDR_HI_dst1_addr_63_32_mask) << SDMA_PKT_COPY_BROADCAST_LINEAR_DST1_ADDR_HI_dst1_addr_63_32_shift)
+
+/*define for DST2_ADDR_LO word*/
+/*define for dst2_addr_31_0 field*/
+#define SDMA_PKT_COPY_BROADCAST_LINEAR_DST2_ADDR_LO_dst2_addr_31_0_offset 7
+#define SDMA_PKT_COPY_BROADCAST_LINEAR_DST2_ADDR_LO_dst2_addr_31_0_mask   0xFFFFFFFF
+#define SDMA_PKT_COPY_BROADCAST_LINEAR_DST2_ADDR_LO_dst2_addr_31_0_shift  0
+#define SDMA_PKT_COPY_BROADCAST_LINEAR_DST2_ADDR_LO_DST2_ADDR_31_0(x) (((x) & SDMA_PKT_COPY_BROADCAST_LINEAR_DST2_ADDR_LO_dst2_addr_31_0_mask) << SDMA_PKT_COPY_BROADCAST_LINEAR_DST2_ADDR_LO_dst2_addr_31_0_shift)
+
+/*define for DST2_ADDR_HI word*/
+/*define for dst2_addr_63_32 field*/
+#define SDMA_PKT_COPY_BROADCAST_LINEAR_DST2_ADDR_HI_dst2_addr_63_32_offset 8
+#define SDMA_PKT_COPY_BROADCAST_LINEAR_DST2_ADDR_HI_dst2_addr_63_32_mask   0xFFFFFFFF
+#define SDMA_PKT_COPY_BROADCAST_LINEAR_DST2_ADDR_HI_dst2_addr_63_32_shift  0
+#define SDMA_PKT_COPY_BROADCAST_LINEAR_DST2_ADDR_HI_DST2_ADDR_63_32(x) (((x) & SDMA_PKT_COPY_BROADCAST_LINEAR_DST2_ADDR_HI_dst2_addr_63_32_mask) << SDMA_PKT_COPY_BROADCAST_LINEAR_DST2_ADDR_HI_dst2_addr_63_32_shift)
+
+
+/*
+** Definitions for SDMA_PKT_COPY_LINEAR_SUBWIN packet
+*/
+
+/*define for HEADER word*/
+/*define for op field*/
+#define SDMA_PKT_COPY_LINEAR_SUBWIN_HEADER_op_offset 0
+#define SDMA_PKT_COPY_LINEAR_SUBWIN_HEADER_op_mask   0x000000FF
+#define SDMA_PKT_COPY_LINEAR_SUBWIN_HEADER_op_shift  0
+#define SDMA_PKT_COPY_LINEAR_SUBWIN_HEADER_OP(x) (((x) & SDMA_PKT_COPY_LINEAR_SUBWIN_HEADER_op_mask) << SDMA_PKT_COPY_LINEAR_SUBWIN_HEADER_op_shift)
+
+/*define for sub_op field*/
+#define SDMA_PKT_COPY_LINEAR_SUBWIN_HEADER_sub_op_offset 0
+#define SDMA_PKT_COPY_LINEAR_SUBWIN_HEADER_sub_op_mask   0x000000FF
+#define SDMA_PKT_COPY_LINEAR_SUBWIN_HEADER_sub_op_shift  8
+#define SDMA_PKT_COPY_LINEAR_SUBWIN_HEADER_SUB_OP(x) (((x) & SDMA_PKT_COPY_LINEAR_SUBWIN_HEADER_sub_op_mask) << SDMA_PKT_COPY_LINEAR_SUBWIN_HEADER_sub_op_shift)
+
+/*define for tmz field*/
+#define SDMA_PKT_COPY_LINEAR_SUBWIN_HEADER_tmz_offset 0
+#define SDMA_PKT_COPY_LINEAR_SUBWIN_HEADER_tmz_mask   0x00000001
+#define SDMA_PKT_COPY_LINEAR_SUBWIN_HEADER_tmz_shift  18
+#define SDMA_PKT_COPY_LINEAR_SUBWIN_HEADER_TMZ(x) (((x) & SDMA_PKT_COPY_LINEAR_SUBWIN_HEADER_tmz_mask) << SDMA_PKT_COPY_LINEAR_SUBWIN_HEADER_tmz_shift)
+
+/*define for elementsize field*/
+#define SDMA_PKT_COPY_LINEAR_SUBWIN_HEADER_elementsize_offset 0
+#define SDMA_PKT_COPY_LINEAR_SUBWIN_HEADER_elementsize_mask   0x00000007
+#define SDMA_PKT_COPY_LINEAR_SUBWIN_HEADER_elementsize_shift  29
+#define SDMA_PKT_COPY_LINEAR_SUBWIN_HEADER_ELEMENTSIZE(x) (((x) & SDMA_PKT_COPY_LINEAR_SUBWIN_HEADER_elementsize_mask) << SDMA_PKT_COPY_LINEAR_SUBWIN_HEADER_elementsize_shift)
+
+/*define for SRC_ADDR_LO word*/
+/*define for src_addr_31_0 field*/
+#define SDMA_PKT_COPY_LINEAR_SUBWIN_SRC_ADDR_LO_src_addr_31_0_offset 1
+#define SDMA_PKT_COPY_LINEAR_SUBWIN_SRC_ADDR_LO_src_addr_31_0_mask   0xFFFFFFFF
+#define SDMA_PKT_COPY_LINEAR_SUBWIN_SRC_ADDR_LO_src_addr_31_0_shift  0
+#define SDMA_PKT_COPY_LINEAR_SUBWIN_SRC_ADDR_LO_SRC_ADDR_31_0(x) (((x) & SDMA_PKT_COPY_LINEAR_SUBWIN_SRC_ADDR_LO_src_addr_31_0_mask) << SDMA_PKT_COPY_LINEAR_SUBWIN_SRC_ADDR_LO_src_addr_31_0_shift)
+
+/*define for SRC_ADDR_HI word*/
+/*define for src_addr_63_32 field*/
+#define SDMA_PKT_COPY_LINEAR_SUBWIN_SRC_ADDR_HI_src_addr_63_32_offset 2
+#define SDMA_PKT_COPY_LINEAR_SUBWIN_SRC_ADDR_HI_src_addr_63_32_mask   0xFFFFFFFF
+#define SDMA_PKT_COPY_LINEAR_SUBWIN_SRC_ADDR_HI_src_addr_63_32_shift  0
+#define SDMA_PKT_COPY_LINEAR_SUBWIN_SRC_ADDR_HI_SRC_ADDR_63_32(x) (((x) & SDMA_PKT_COPY_LINEAR_SUBWIN_SRC_ADDR_HI_src_addr_63_32_mask) << SDMA_PKT_COPY_LINEAR_SUBWIN_SRC_ADDR_HI_src_addr_63_32_shift)
+
+/*define for DW_3 word*/
+/*define for src_x field*/
+#define SDMA_PKT_COPY_LINEAR_SUBWIN_DW_3_src_x_offset 3
+#define SDMA_PKT_COPY_LINEAR_SUBWIN_DW_3_src_x_mask   0x00003FFF
+#define SDMA_PKT_COPY_LINEAR_SUBWIN_DW_3_src_x_shift  0
+#define SDMA_PKT_COPY_LINEAR_SUBWIN_DW_3_SRC_X(x) (((x) & SDMA_PKT_COPY_LINEAR_SUBWIN_DW_3_src_x_mask) << SDMA_PKT_COPY_LINEAR_SUBWIN_DW_3_src_x_shift)
+
+/*define for src_y field*/
+#define SDMA_PKT_COPY_LINEAR_SUBWIN_DW_3_src_y_offset 3
+#define SDMA_PKT_COPY_LINEAR_SUBWIN_DW_3_src_y_mask   0x00003FFF
+#define SDMA_PKT_COPY_LINEAR_SUBWIN_DW_3_src_y_shift  16
+#define SDMA_PKT_COPY_LINEAR_SUBWIN_DW_3_SRC_Y(x) (((x) & SDMA_PKT_COPY_LINEAR_SUBWIN_DW_3_src_y_mask) << SDMA_PKT_COPY_LINEAR_SUBWIN_DW_3_src_y_shift)
+
+/*define for DW_4 word*/
+/*define for src_z field*/
+#define SDMA_PKT_COPY_LINEAR_SUBWIN_DW_4_src_z_offset 4
+#define SDMA_PKT_COPY_LINEAR_SUBWIN_DW_4_src_z_mask   0x000007FF
+#define SDMA_PKT_COPY_LINEAR_SUBWIN_DW_4_src_z_shift  0
+#define SDMA_PKT_COPY_LINEAR_SUBWIN_DW_4_SRC_Z(x) (((x) & SDMA_PKT_COPY_LINEAR_SUBWIN_DW_4_src_z_mask) << SDMA_PKT_COPY_LINEAR_SUBWIN_DW_4_src_z_shift)
+
+/*define for src_pitch field*/
+#define SDMA_PKT_COPY_LINEAR_SUBWIN_DW_4_src_pitch_offset 4
+#define SDMA_PKT_COPY_LINEAR_SUBWIN_DW_4_src_pitch_mask   0x0007FFFF
+#define SDMA_PKT_COPY_LINEAR_SUBWIN_DW_4_src_pitch_shift  13
+#define SDMA_PKT_COPY_LINEAR_SUBWIN_DW_4_SRC_PITCH(x) (((x) & SDMA_PKT_COPY_LINEAR_SUBWIN_DW_4_src_pitch_mask) << SDMA_PKT_COPY_LINEAR_SUBWIN_DW_4_src_pitch_shift)
+
+/*define for DW_5 word*/
+/*define for src_slice_pitch field*/
+#define SDMA_PKT_COPY_LINEAR_SUBWIN_DW_5_src_slice_pitch_offset 5
+#define SDMA_PKT_COPY_LINEAR_SUBWIN_DW_5_src_slice_pitch_mask   0x0FFFFFFF
+#define SDMA_PKT_COPY_LINEAR_SUBWIN_DW_5_src_slice_pitch_shift  0
+#define SDMA_PKT_COPY_LINEAR_SUBWIN_DW_5_SRC_SLICE_PITCH(x) (((x) & SDMA_PKT_COPY_LINEAR_SUBWIN_DW_5_src_slice_pitch_mask) << SDMA_PKT_COPY_LINEAR_SUBWIN_DW_5_src_slice_pitch_shift)
+
+/*define for DST_ADDR_LO word*/
+/*define for dst_addr_31_0 field*/
+#define SDMA_PKT_COPY_LINEAR_SUBWIN_DST_ADDR_LO_dst_addr_31_0_offset 6
+#define SDMA_PKT_COPY_LINEAR_SUBWIN_DST_ADDR_LO_dst_addr_31_0_mask   0xFFFFFFFF
+#define SDMA_PKT_COPY_LINEAR_SUBWIN_DST_ADDR_LO_dst_addr_31_0_shift  0
+#define SDMA_PKT_COPY_LINEAR_SUBWIN_DST_ADDR_LO_DST_ADDR_31_0(x) (((x) & SDMA_PKT_COPY_LINEAR_SUBWIN_DST_ADDR_LO_dst_addr_31_0_mask) << SDMA_PKT_COPY_LINEAR_SUBWIN_DST_ADDR_LO_dst_addr_31_0_shift)
+
+/*define for DST_ADDR_HI word*/
+/*define for dst_addr_63_32 field*/
+#define SDMA_PKT_COPY_LINEAR_SUBWIN_DST_ADDR_HI_dst_addr_63_32_offset 7
+#define SDMA_PKT_COPY_LINEAR_SUBWIN_DST_ADDR_HI_dst_addr_63_32_mask   0xFFFFFFFF
+#define SDMA_PKT_COPY_LINEAR_SUBWIN_DST_ADDR_HI_dst_addr_63_32_shift  0
+#define SDMA_PKT_COPY_LINEAR_SUBWIN_DST_ADDR_HI_DST_ADDR_63_32(x) (((x) & SDMA_PKT_COPY_LINEAR_SUBWIN_DST_ADDR_HI_dst_addr_63_32_mask) << SDMA_PKT_COPY_LINEAR_SUBWIN_DST_ADDR_HI_dst_addr_63_32_shift)
+
+/*define for DW_8 word*/
+/*define for dst_x field*/
+#define SDMA_PKT_COPY_LINEAR_SUBWIN_DW_8_dst_x_offset 8
+#define SDMA_PKT_COPY_LINEAR_SUBWIN_DW_8_dst_x_mask   0x00003FFF
+#define SDMA_PKT_COPY_LINEAR_SUBWIN_DW_8_dst_x_shift  0
+#define SDMA_PKT_COPY_LINEAR_SUBWIN_DW_8_DST_X(x) (((x) & SDMA_PKT_COPY_LINEAR_SUBWIN_DW_8_dst_x_mask) << SDMA_PKT_COPY_LINEAR_SUBWIN_DW_8_dst_x_shift)
+
+/*define for dst_y field*/
+#define SDMA_PKT_COPY_LINEAR_SUBWIN_DW_8_dst_y_offset 8
+#define SDMA_PKT_COPY_LINEAR_SUBWIN_DW_8_dst_y_mask   0x00003FFF
+#define SDMA_PKT_COPY_LINEAR_SUBWIN_DW_8_dst_y_shift  16
+#define SDMA_PKT_COPY_LINEAR_SUBWIN_DW_8_DST_Y(x) (((x) & SDMA_PKT_COPY_LINEAR_SUBWIN_DW_8_dst_y_mask) << SDMA_PKT_COPY_LINEAR_SUBWIN_DW_8_dst_y_shift)
+
+/*define for DW_9 word*/
+/*define for dst_z field*/
+#define SDMA_PKT_COPY_LINEAR_SUBWIN_DW_9_dst_z_offset 9
+#define SDMA_PKT_COPY_LINEAR_SUBWIN_DW_9_dst_z_mask   0x000007FF
+#define SDMA_PKT_COPY_LINEAR_SUBWIN_DW_9_dst_z_shift  0
+#define SDMA_PKT_COPY_LINEAR_SUBWIN_DW_9_DST_Z(x) (((x) & SDMA_PKT_COPY_LINEAR_SUBWIN_DW_9_dst_z_mask) << SDMA_PKT_COPY_LINEAR_SUBWIN_DW_9_dst_z_shift)
+
+/*define for dst_pitch field*/
+#define SDMA_PKT_COPY_LINEAR_SUBWIN_DW_9_dst_pitch_offset 9
+#define SDMA_PKT_COPY_LINEAR_SUBWIN_DW_9_dst_pitch_mask   0x0007FFFF
+#define SDMA_PKT_COPY_LINEAR_SUBWIN_DW_9_dst_pitch_shift  13
+#define SDMA_PKT_COPY_LINEAR_SUBWIN_DW_9_DST_PITCH(x) (((x) & SDMA_PKT_COPY_LINEAR_SUBWIN_DW_9_dst_pitch_mask) << SDMA_PKT_COPY_LINEAR_SUBWIN_DW_9_dst_pitch_shift)
+
+/*define for DW_10 word*/
+/*define for dst_slice_pitch field*/
+#define SDMA_PKT_COPY_LINEAR_SUBWIN_DW_10_dst_slice_pitch_offset 10
+#define SDMA_PKT_COPY_LINEAR_SUBWIN_DW_10_dst_slice_pitch_mask   0x0FFFFFFF
+#define SDMA_PKT_COPY_LINEAR_SUBWIN_DW_10_dst_slice_pitch_shift  0
+#define SDMA_PKT_COPY_LINEAR_SUBWIN_DW_10_DST_SLICE_PITCH(x) (((x) & SDMA_PKT_COPY_LINEAR_SUBWIN_DW_10_dst_slice_pitch_mask) << SDMA_PKT_COPY_LINEAR_SUBWIN_DW_10_dst_slice_pitch_shift)
+
+/*define for DW_11 word*/
+/*define for rect_x field*/
+#define SDMA_PKT_COPY_LINEAR_SUBWIN_DW_11_rect_x_offset 11
+#define SDMA_PKT_COPY_LINEAR_SUBWIN_DW_11_rect_x_mask   0x00003FFF
+#define SDMA_PKT_COPY_LINEAR_SUBWIN_DW_11_rect_x_shift  0
+#define SDMA_PKT_COPY_LINEAR_SUBWIN_DW_11_RECT_X(x) (((x) & SDMA_PKT_COPY_LINEAR_SUBWIN_DW_11_rect_x_mask) << SDMA_PKT_COPY_LINEAR_SUBWIN_DW_11_rect_x_shift)
+
+/*define for rect_y field*/
+#define SDMA_PKT_COPY_LINEAR_SUBWIN_DW_11_rect_y_offset 11
+#define SDMA_PKT_COPY_LINEAR_SUBWIN_DW_11_rect_y_mask   0x00003FFF
+#define SDMA_PKT_COPY_LINEAR_SUBWIN_DW_11_rect_y_shift  16
+#define SDMA_PKT_COPY_LINEAR_SUBWIN_DW_11_RECT_Y(x) (((x) & SDMA_PKT_COPY_LINEAR_SUBWIN_DW_11_rect_y_mask) << SDMA_PKT_COPY_LINEAR_SUBWIN_DW_11_rect_y_shift)
+
+/*define for DW_12 word*/
+/*define for rect_z field*/
+#define SDMA_PKT_COPY_LINEAR_SUBWIN_DW_12_rect_z_offset 12
+#define SDMA_PKT_COPY_LINEAR_SUBWIN_DW_12_rect_z_mask   0x000007FF
+#define SDMA_PKT_COPY_LINEAR_SUBWIN_DW_12_rect_z_shift  0
+#define SDMA_PKT_COPY_LINEAR_SUBWIN_DW_12_RECT_Z(x) (((x) & SDMA_PKT_COPY_LINEAR_SUBWIN_DW_12_rect_z_mask) << SDMA_PKT_COPY_LINEAR_SUBWIN_DW_12_rect_z_shift)
+
+/*define for dst_sw field*/
+#define SDMA_PKT_COPY_LINEAR_SUBWIN_DW_12_dst_sw_offset 12
+#define SDMA_PKT_COPY_LINEAR_SUBWIN_DW_12_dst_sw_mask   0x00000003
+#define SDMA_PKT_COPY_LINEAR_SUBWIN_DW_12_dst_sw_shift  16
+#define SDMA_PKT_COPY_LINEAR_SUBWIN_DW_12_DST_SW(x) (((x) & SDMA_PKT_COPY_LINEAR_SUBWIN_DW_12_dst_sw_mask) << SDMA_PKT_COPY_LINEAR_SUBWIN_DW_12_dst_sw_shift)
+
+/*define for src_sw field*/
+#define SDMA_PKT_COPY_LINEAR_SUBWIN_DW_12_src_sw_offset 12
+#define SDMA_PKT_COPY_LINEAR_SUBWIN_DW_12_src_sw_mask   0x00000003
+#define SDMA_PKT_COPY_LINEAR_SUBWIN_DW_12_src_sw_shift  24
+#define SDMA_PKT_COPY_LINEAR_SUBWIN_DW_12_SRC_SW(x) (((x) & SDMA_PKT_COPY_LINEAR_SUBWIN_DW_12_src_sw_mask) << SDMA_PKT_COPY_LINEAR_SUBWIN_DW_12_src_sw_shift)
+
+
+/*
+** Definitions for SDMA_PKT_COPY_TILED packet
+*/
+
+/*define for HEADER word*/
+/*define for op field*/
+#define SDMA_PKT_COPY_TILED_HEADER_op_offset 0
+#define SDMA_PKT_COPY_TILED_HEADER_op_mask   0x000000FF
+#define SDMA_PKT_COPY_TILED_HEADER_op_shift  0
+#define SDMA_PKT_COPY_TILED_HEADER_OP(x) (((x) & SDMA_PKT_COPY_TILED_HEADER_op_mask) << SDMA_PKT_COPY_TILED_HEADER_op_shift)
+
+/*define for sub_op field*/
+#define SDMA_PKT_COPY_TILED_HEADER_sub_op_offset 0
+#define SDMA_PKT_COPY_TILED_HEADER_sub_op_mask   0x000000FF
+#define SDMA_PKT_COPY_TILED_HEADER_sub_op_shift  8
+#define SDMA_PKT_COPY_TILED_HEADER_SUB_OP(x) (((x) & SDMA_PKT_COPY_TILED_HEADER_sub_op_mask) << SDMA_PKT_COPY_TILED_HEADER_sub_op_shift)
+
+/*define for encrypt field*/
+#define SDMA_PKT_COPY_TILED_HEADER_encrypt_offset 0
+#define SDMA_PKT_COPY_TILED_HEADER_encrypt_mask   0x00000001
+#define SDMA_PKT_COPY_TILED_HEADER_encrypt_shift  16
+#define SDMA_PKT_COPY_TILED_HEADER_ENCRYPT(x) (((x) & SDMA_PKT_COPY_TILED_HEADER_encrypt_mask) << SDMA_PKT_COPY_TILED_HEADER_encrypt_shift)
+
+/*define for tmz field*/
+#define SDMA_PKT_COPY_TILED_HEADER_tmz_offset 0
+#define SDMA_PKT_COPY_TILED_HEADER_tmz_mask   0x00000001
+#define SDMA_PKT_COPY_TILED_HEADER_tmz_shift  18
+#define SDMA_PKT_COPY_TILED_HEADER_TMZ(x) (((x) & SDMA_PKT_COPY_TILED_HEADER_tmz_mask) << SDMA_PKT_COPY_TILED_HEADER_tmz_shift)
+
+/*define for mip_max field*/
+#define SDMA_PKT_COPY_TILED_HEADER_mip_max_offset 0
+#define SDMA_PKT_COPY_TILED_HEADER_mip_max_mask   0x0000000F
+#define SDMA_PKT_COPY_TILED_HEADER_mip_max_shift  20
+#define SDMA_PKT_COPY_TILED_HEADER_MIP_MAX(x) (((x) & SDMA_PKT_COPY_TILED_HEADER_mip_max_mask) << SDMA_PKT_COPY_TILED_HEADER_mip_max_shift)
+
+/*define for detile field*/
+#define SDMA_PKT_COPY_TILED_HEADER_detile_offset 0
+#define SDMA_PKT_COPY_TILED_HEADER_detile_mask   0x00000001
+#define SDMA_PKT_COPY_TILED_HEADER_detile_shift  31
+#define SDMA_PKT_COPY_TILED_HEADER_DETILE(x) (((x) & SDMA_PKT_COPY_TILED_HEADER_detile_mask) << SDMA_PKT_COPY_TILED_HEADER_detile_shift)
+
+/*define for TILED_ADDR_LO word*/
+/*define for tiled_addr_31_0 field*/
+#define SDMA_PKT_COPY_TILED_TILED_ADDR_LO_tiled_addr_31_0_offset 1
+#define SDMA_PKT_COPY_TILED_TILED_ADDR_LO_tiled_addr_31_0_mask   0xFFFFFFFF
+#define SDMA_PKT_COPY_TILED_TILED_ADDR_LO_tiled_addr_31_0_shift  0
+#define SDMA_PKT_COPY_TILED_TILED_ADDR_LO_TILED_ADDR_31_0(x) (((x) & SDMA_PKT_COPY_TILED_TILED_ADDR_LO_tiled_addr_31_0_mask) << SDMA_PKT_COPY_TILED_TILED_ADDR_LO_tiled_addr_31_0_shift)
+
+/*define for TILED_ADDR_HI word*/
+/*define for tiled_addr_63_32 field*/
+#define SDMA_PKT_COPY_TILED_TILED_ADDR_HI_tiled_addr_63_32_offset 2
+#define SDMA_PKT_COPY_TILED_TILED_ADDR_HI_tiled_addr_63_32_mask   0xFFFFFFFF
+#define SDMA_PKT_COPY_TILED_TILED_ADDR_HI_tiled_addr_63_32_shift  0
+#define SDMA_PKT_COPY_TILED_TILED_ADDR_HI_TILED_ADDR_63_32(x) (((x) & SDMA_PKT_COPY_TILED_TILED_ADDR_HI_tiled_addr_63_32_mask) << SDMA_PKT_COPY_TILED_TILED_ADDR_HI_tiled_addr_63_32_shift)
+
+/*define for DW_3 word*/
+/*define for width field*/
+#define SDMA_PKT_COPY_TILED_DW_3_width_offset 3
+#define SDMA_PKT_COPY_TILED_DW_3_width_mask   0x00003FFF
+#define SDMA_PKT_COPY_TILED_DW_3_width_shift  0
+#define SDMA_PKT_COPY_TILED_DW_3_WIDTH(x) (((x) & SDMA_PKT_COPY_TILED_DW_3_width_mask) << SDMA_PKT_COPY_TILED_DW_3_width_shift)
+
+/*define for DW_4 word*/
+/*define for height field*/
+#define SDMA_PKT_COPY_TILED_DW_4_height_offset 4
+#define SDMA_PKT_COPY_TILED_DW_4_height_mask   0x00003FFF
+#define SDMA_PKT_COPY_TILED_DW_4_height_shift  0
+#define SDMA_PKT_COPY_TILED_DW_4_HEIGHT(x) (((x) & SDMA_PKT_COPY_TILED_DW_4_height_mask) << SDMA_PKT_COPY_TILED_DW_4_height_shift)
+
+/*define for depth field*/
+#define SDMA_PKT_COPY_TILED_DW_4_depth_offset 4
+#define SDMA_PKT_COPY_TILED_DW_4_depth_mask   0x000007FF
+#define SDMA_PKT_COPY_TILED_DW_4_depth_shift  16
+#define SDMA_PKT_COPY_TILED_DW_4_DEPTH(x) (((x) & SDMA_PKT_COPY_TILED_DW_4_depth_mask) << SDMA_PKT_COPY_TILED_DW_4_depth_shift)
+
+/*define for DW_5 word*/
+/*define for element_size field*/
+#define SDMA_PKT_COPY_TILED_DW_5_element_size_offset 5
+#define SDMA_PKT_COPY_TILED_DW_5_element_size_mask   0x00000007
+#define SDMA_PKT_COPY_TILED_DW_5_element_size_shift  0
+#define SDMA_PKT_COPY_TILED_DW_5_ELEMENT_SIZE(x) (((x) & SDMA_PKT_COPY_TILED_DW_5_element_size_mask) << SDMA_PKT_COPY_TILED_DW_5_element_size_shift)
+
+/*define for swizzle_mode field*/
+#define SDMA_PKT_COPY_TILED_DW_5_swizzle_mode_offset 5
+#define SDMA_PKT_COPY_TILED_DW_5_swizzle_mode_mask   0x0000001F
+#define SDMA_PKT_COPY_TILED_DW_5_swizzle_mode_shift  3
+#define SDMA_PKT_COPY_TILED_DW_5_SWIZZLE_MODE(x) (((x) & SDMA_PKT_COPY_TILED_DW_5_swizzle_mode_mask) << SDMA_PKT_COPY_TILED_DW_5_swizzle_mode_shift)
+
+/*define for dimension field*/
+#define SDMA_PKT_COPY_TILED_DW_5_dimension_offset 5
+#define SDMA_PKT_COPY_TILED_DW_5_dimension_mask   0x00000003
+#define SDMA_PKT_COPY_TILED_DW_5_dimension_shift  9
+#define SDMA_PKT_COPY_TILED_DW_5_DIMENSION(x) (((x) & SDMA_PKT_COPY_TILED_DW_5_dimension_mask) << SDMA_PKT_COPY_TILED_DW_5_dimension_shift)
+
+/*define for epitch field*/
+#define SDMA_PKT_COPY_TILED_DW_5_epitch_offset 5
+#define SDMA_PKT_COPY_TILED_DW_5_epitch_mask   0x0000FFFF
+#define SDMA_PKT_COPY_TILED_DW_5_epitch_shift  16
+#define SDMA_PKT_COPY_TILED_DW_5_EPITCH(x) (((x) & SDMA_PKT_COPY_TILED_DW_5_epitch_mask) << SDMA_PKT_COPY_TILED_DW_5_epitch_shift)
+
+/*define for DW_6 word*/
+/*define for x field*/
+#define SDMA_PKT_COPY_TILED_DW_6_x_offset 6
+#define SDMA_PKT_COPY_TILED_DW_6_x_mask   0x00003FFF
+#define SDMA_PKT_COPY_TILED_DW_6_x_shift  0
+#define SDMA_PKT_COPY_TILED_DW_6_X(x) (((x) & SDMA_PKT_COPY_TILED_DW_6_x_mask) << SDMA_PKT_COPY_TILED_DW_6_x_shift)
+
+/*define for y field*/
+#define SDMA_PKT_COPY_TILED_DW_6_y_offset 6
+#define SDMA_PKT_COPY_TILED_DW_6_y_mask   0x00003FFF
+#define SDMA_PKT_COPY_TILED_DW_6_y_shift  16
+#define SDMA_PKT_COPY_TILED_DW_6_Y(x) (((x) & SDMA_PKT_COPY_TILED_DW_6_y_mask) << SDMA_PKT_COPY_TILED_DW_6_y_shift)
+
+/*define for DW_7 word*/
+/*define for z field*/
+#define SDMA_PKT_COPY_TILED_DW_7_z_offset 7
+#define SDMA_PKT_COPY_TILED_DW_7_z_mask   0x000007FF
+#define SDMA_PKT_COPY_TILED_DW_7_z_shift  0
+#define SDMA_PKT_COPY_TILED_DW_7_Z(x) (((x) & SDMA_PKT_COPY_TILED_DW_7_z_mask) << SDMA_PKT_COPY_TILED_DW_7_z_shift)
+
+/*define for linear_sw field*/
+#define SDMA_PKT_COPY_TILED_DW_7_linear_sw_offset 7
+#define SDMA_PKT_COPY_TILED_DW_7_linear_sw_mask   0x00000003
+#define SDMA_PKT_COPY_TILED_DW_7_linear_sw_shift  16
+#define SDMA_PKT_COPY_TILED_DW_7_LINEAR_SW(x) (((x) & SDMA_PKT_COPY_TILED_DW_7_linear_sw_mask) << SDMA_PKT_COPY_TILED_DW_7_linear_sw_shift)
+
+/*define for tile_sw field*/
+#define SDMA_PKT_COPY_TILED_DW_7_tile_sw_offset 7
+#define SDMA_PKT_COPY_TILED_DW_7_tile_sw_mask   0x00000003
+#define SDMA_PKT_COPY_TILED_DW_7_tile_sw_shift  24
+#define SDMA_PKT_COPY_TILED_DW_7_TILE_SW(x) (((x) & SDMA_PKT_COPY_TILED_DW_7_tile_sw_mask) << SDMA_PKT_COPY_TILED_DW_7_tile_sw_shift)
+
+/*define for LINEAR_ADDR_LO word*/
+/*define for linear_addr_31_0 field*/
+#define SDMA_PKT_COPY_TILED_LINEAR_ADDR_LO_linear_addr_31_0_offset 8
+#define SDMA_PKT_COPY_TILED_LINEAR_ADDR_LO_linear_addr_31_0_mask   0xFFFFFFFF
+#define SDMA_PKT_COPY_TILED_LINEAR_ADDR_LO_linear_addr_31_0_shift  0
+#define SDMA_PKT_COPY_TILED_LINEAR_ADDR_LO_LINEAR_ADDR_31_0(x) (((x) & SDMA_PKT_COPY_TILED_LINEAR_ADDR_LO_linear_addr_31_0_mask) << SDMA_PKT_COPY_TILED_LINEAR_ADDR_LO_linear_addr_31_0_shift)
+
+/*define for LINEAR_ADDR_HI word*/
+/*define for linear_addr_63_32 field*/
+#define SDMA_PKT_COPY_TILED_LINEAR_ADDR_HI_linear_addr_63_32_offset 9
+#define SDMA_PKT_COPY_TILED_LINEAR_ADDR_HI_linear_addr_63_32_mask   0xFFFFFFFF
+#define SDMA_PKT_COPY_TILED_LINEAR_ADDR_HI_linear_addr_63_32_shift  0
+#define SDMA_PKT_COPY_TILED_LINEAR_ADDR_HI_LINEAR_ADDR_63_32(x) (((x) & SDMA_PKT_COPY_TILED_LINEAR_ADDR_HI_linear_addr_63_32_mask) << SDMA_PKT_COPY_TILED_LINEAR_ADDR_HI_linear_addr_63_32_shift)
+
+/*define for LINEAR_PITCH word*/
+/*define for linear_pitch field*/
+#define SDMA_PKT_COPY_TILED_LINEAR_PITCH_linear_pitch_offset 10
+#define SDMA_PKT_COPY_TILED_LINEAR_PITCH_linear_pitch_mask   0x0007FFFF
+#define SDMA_PKT_COPY_TILED_LINEAR_PITCH_linear_pitch_shift  0
+#define SDMA_PKT_COPY_TILED_LINEAR_PITCH_LINEAR_PITCH(x) (((x) & SDMA_PKT_COPY_TILED_LINEAR_PITCH_linear_pitch_mask) << SDMA_PKT_COPY_TILED_LINEAR_PITCH_linear_pitch_shift)
+
+/*define for LINEAR_SLICE_PITCH word*/
+/*define for linear_slice_pitch field*/
+#define SDMA_PKT_COPY_TILED_LINEAR_SLICE_PITCH_linear_slice_pitch_offset 11
+#define SDMA_PKT_COPY_TILED_LINEAR_SLICE_PITCH_linear_slice_pitch_mask   0xFFFFFFFF
+#define SDMA_PKT_COPY_TILED_LINEAR_SLICE_PITCH_linear_slice_pitch_shift  0
+#define SDMA_PKT_COPY_TILED_LINEAR_SLICE_PITCH_LINEAR_SLICE_PITCH(x) (((x) & SDMA_PKT_COPY_TILED_LINEAR_SLICE_PITCH_linear_slice_pitch_mask) << SDMA_PKT_COPY_TILED_LINEAR_SLICE_PITCH_linear_slice_pitch_shift)
+
+/*define for COUNT word*/
+/*define for count field*/
+#define SDMA_PKT_COPY_TILED_COUNT_count_offset 12
+#define SDMA_PKT_COPY_TILED_COUNT_count_mask   0x000FFFFF
+#define SDMA_PKT_COPY_TILED_COUNT_count_shift  0
+#define SDMA_PKT_COPY_TILED_COUNT_COUNT(x) (((x) & SDMA_PKT_COPY_TILED_COUNT_count_mask) << SDMA_PKT_COPY_TILED_COUNT_count_shift)
+
+
+/*
+** Definitions for SDMA_PKT_COPY_L2T_BROADCAST packet
+*/
+
+/*define for HEADER word*/
+/*define for op field*/
+#define SDMA_PKT_COPY_L2T_BROADCAST_HEADER_op_offset 0
+#define SDMA_PKT_COPY_L2T_BROADCAST_HEADER_op_mask   0x000000FF
+#define SDMA_PKT_COPY_L2T_BROADCAST_HEADER_op_shift  0
+#define SDMA_PKT_COPY_L2T_BROADCAST_HEADER_OP(x) (((x) & SDMA_PKT_COPY_L2T_BROADCAST_HEADER_op_mask) << SDMA_PKT_COPY_L2T_BROADCAST_HEADER_op_shift)
+
+/*define for sub_op field*/
+#define SDMA_PKT_COPY_L2T_BROADCAST_HEADER_sub_op_offset 0
+#define SDMA_PKT_COPY_L2T_BROADCAST_HEADER_sub_op_mask   0x000000FF
+#define SDMA_PKT_COPY_L2T_BROADCAST_HEADER_sub_op_shift  8
+#define SDMA_PKT_COPY_L2T_BROADCAST_HEADER_SUB_OP(x) (((x) & SDMA_PKT_COPY_L2T_BROADCAST_HEADER_sub_op_mask) << SDMA_PKT_COPY_L2T_BROADCAST_HEADER_sub_op_shift)
+
+/*define for encrypt field*/
+#define SDMA_PKT_COPY_L2T_BROADCAST_HEADER_encrypt_offset 0
+#define SDMA_PKT_COPY_L2T_BROADCAST_HEADER_encrypt_mask   0x00000001
+#define SDMA_PKT_COPY_L2T_BROADCAST_HEADER_encrypt_shift  16
+#define SDMA_PKT_COPY_L2T_BROADCAST_HEADER_ENCRYPT(x) (((x) & SDMA_PKT_COPY_L2T_BROADCAST_HEADER_encrypt_mask) << SDMA_PKT_COPY_L2T_BROADCAST_HEADER_encrypt_shift)
+
+/*define for tmz field*/
+#define SDMA_PKT_COPY_L2T_BROADCAST_HEADER_tmz_offset 0
+#define SDMA_PKT_COPY_L2T_BROADCAST_HEADER_tmz_mask   0x00000001
+#define SDMA_PKT_COPY_L2T_BROADCAST_HEADER_tmz_shift  18
+#define SDMA_PKT_COPY_L2T_BROADCAST_HEADER_TMZ(x) (((x) & SDMA_PKT_COPY_L2T_BROADCAST_HEADER_tmz_mask) << SDMA_PKT_COPY_L2T_BROADCAST_HEADER_tmz_shift)
+
+/*define for mip_max field*/
+#define SDMA_PKT_COPY_L2T_BROADCAST_HEADER_mip_max_offset 0
+#define SDMA_PKT_COPY_L2T_BROADCAST_HEADER_mip_max_mask   0x0000000F
+#define SDMA_PKT_COPY_L2T_BROADCAST_HEADER_mip_max_shift  20
+#define SDMA_PKT_COPY_L2T_BROADCAST_HEADER_MIP_MAX(x) (((x) & SDMA_PKT_COPY_L2T_BROADCAST_HEADER_mip_max_mask) << SDMA_PKT_COPY_L2T_BROADCAST_HEADER_mip_max_shift)
+
+/*define for videocopy field*/
+#define SDMA_PKT_COPY_L2T_BROADCAST_HEADER_videocopy_offset 0
+#define SDMA_PKT_COPY_L2T_BROADCAST_HEADER_videocopy_mask   0x00000001
+#define SDMA_PKT_COPY_L2T_BROADCAST_HEADER_videocopy_shift  26
+#define SDMA_PKT_COPY_L2T_BROADCAST_HEADER_VIDEOCOPY(x) (((x) & SDMA_PKT_COPY_L2T_BROADCAST_HEADER_videocopy_mask) << SDMA_PKT_COPY_L2T_BROADCAST_HEADER_videocopy_shift)
+
+/*define for broadcast field*/
+#define SDMA_PKT_COPY_L2T_BROADCAST_HEADER_broadcast_offset 0
+#define SDMA_PKT_COPY_L2T_BROADCAST_HEADER_broadcast_mask   0x00000001
+#define SDMA_PKT_COPY_L2T_BROADCAST_HEADER_broadcast_shift  27
+#define SDMA_PKT_COPY_L2T_BROADCAST_HEADER_BROADCAST(x) (((x) & SDMA_PKT_COPY_L2T_BROADCAST_HEADER_broadcast_mask) << SDMA_PKT_COPY_L2T_BROADCAST_HEADER_broadcast_shift)
+
+/*define for TILED_ADDR_LO_0 word*/
+/*define for tiled_addr0_31_0 field*/
+#define SDMA_PKT_COPY_L2T_BROADCAST_TILED_ADDR_LO_0_tiled_addr0_31_0_offset 1
+#define SDMA_PKT_COPY_L2T_BROADCAST_TILED_ADDR_LO_0_tiled_addr0_31_0_mask   0xFFFFFFFF
+#define SDMA_PKT_COPY_L2T_BROADCAST_TILED_ADDR_LO_0_tiled_addr0_31_0_shift  0
+#define SDMA_PKT_COPY_L2T_BROADCAST_TILED_ADDR_LO_0_TILED_ADDR0_31_0(x) (((x) & SDMA_PKT_COPY_L2T_BROADCAST_TILED_ADDR_LO_0_tiled_addr0_31_0_mask) << SDMA_PKT_COPY_L2T_BROADCAST_TILED_ADDR_LO_0_tiled_addr0_31_0_shift)
+
+/*define for TILED_ADDR_HI_0 word*/
+/*define for tiled_addr0_63_32 field*/
+#define SDMA_PKT_COPY_L2T_BROADCAST_TILED_ADDR_HI_0_tiled_addr0_63_32_offset 2
+#define SDMA_PKT_COPY_L2T_BROADCAST_TILED_ADDR_HI_0_tiled_addr0_63_32_mask   0xFFFFFFFF
+#define SDMA_PKT_COPY_L2T_BROADCAST_TILED_ADDR_HI_0_tiled_addr0_63_32_shift  0
+#define SDMA_PKT_COPY_L2T_BROADCAST_TILED_ADDR_HI_0_TILED_ADDR0_63_32(x) (((x) & SDMA_PKT_COPY_L2T_BROADCAST_TILED_ADDR_HI_0_tiled_addr0_63_32_mask) << SDMA_PKT_COPY_L2T_BROADCAST_TILED_ADDR_HI_0_tiled_addr0_63_32_shift)
+
+/*define for TILED_ADDR_LO_1 word*/
+/*define for tiled_addr1_31_0 field*/
+#define SDMA_PKT_COPY_L2T_BROADCAST_TILED_ADDR_LO_1_tiled_addr1_31_0_offset 3
+#define SDMA_PKT_COPY_L2T_BROADCAST_TILED_ADDR_LO_1_tiled_addr1_31_0_mask   0xFFFFFFFF
+#define SDMA_PKT_COPY_L2T_BROADCAST_TILED_ADDR_LO_1_tiled_addr1_31_0_shift  0
+#define SDMA_PKT_COPY_L2T_BROADCAST_TILED_ADDR_LO_1_TILED_ADDR1_31_0(x) (((x) & SDMA_PKT_COPY_L2T_BROADCAST_TILED_ADDR_LO_1_tiled_addr1_31_0_mask) << SDMA_PKT_COPY_L2T_BROADCAST_TILED_ADDR_LO_1_tiled_addr1_31_0_shift)
+
+/*define for TILED_ADDR_HI_1 word*/
+/*define for tiled_addr1_63_32 field*/
+#define SDMA_PKT_COPY_L2T_BROADCAST_TILED_ADDR_HI_1_tiled_addr1_63_32_offset 4
+#define SDMA_PKT_COPY_L2T_BROADCAST_TILED_ADDR_HI_1_tiled_addr1_63_32_mask   0xFFFFFFFF
+#define SDMA_PKT_COPY_L2T_BROADCAST_TILED_ADDR_HI_1_tiled_addr1_63_32_shift  0
+#define SDMA_PKT_COPY_L2T_BROADCAST_TILED_ADDR_HI_1_TILED_ADDR1_63_32(x) (((x) & SDMA_PKT_COPY_L2T_BROADCAST_TILED_ADDR_HI_1_tiled_addr1_63_32_mask) << SDMA_PKT_COPY_L2T_BROADCAST_TILED_ADDR_HI_1_tiled_addr1_63_32_shift)
+
+/*define for DW_5 word*/
+/*define for width field*/
+#define SDMA_PKT_COPY_L2T_BROADCAST_DW_5_width_offset 5
+#define SDMA_PKT_COPY_L2T_BROADCAST_DW_5_width_mask   0x00003FFF
+#define SDMA_PKT_COPY_L2T_BROADCAST_DW_5_width_shift  0
+#define SDMA_PKT_COPY_L2T_BROADCAST_DW_5_WIDTH(x) (((x) & SDMA_PKT_COPY_L2T_BROADCAST_DW_5_width_mask) << SDMA_PKT_COPY_L2T_BROADCAST_DW_5_width_shift)
+
+/*define for DW_6 word*/
+/*define for height field*/
+#define SDMA_PKT_COPY_L2T_BROADCAST_DW_6_height_offset 6
+#define SDMA_PKT_COPY_L2T_BROADCAST_DW_6_height_mask   0x00003FFF
+#define SDMA_PKT_COPY_L2T_BROADCAST_DW_6_height_shift  0
+#define SDMA_PKT_COPY_L2T_BROADCAST_DW_6_HEIGHT(x) (((x) & SDMA_PKT_COPY_L2T_BROADCAST_DW_6_height_mask) << SDMA_PKT_COPY_L2T_BROADCAST_DW_6_height_shift)
+
+/*define for depth field*/
+#define SDMA_PKT_COPY_L2T_BROADCAST_DW_6_depth_offset 6
+#define SDMA_PKT_COPY_L2T_BROADCAST_DW_6_depth_mask   0x000007FF
+#define SDMA_PKT_COPY_L2T_BROADCAST_DW_6_depth_shift  16
+#define SDMA_PKT_COPY_L2T_BROADCAST_DW_6_DEPTH(x) (((x) & SDMA_PKT_COPY_L2T_BROADCAST_DW_6_depth_mask) << SDMA_PKT_COPY_L2T_BROADCAST_DW_6_depth_shift)
+
+/*define for DW_7 word*/
+/*define for element_size field*/
+#define SDMA_PKT_COPY_L2T_BROADCAST_DW_7_element_size_offset 7
+#define SDMA_PKT_COPY_L2T_BROADCAST_DW_7_element_size_mask   0x00000007
+#define SDMA_PKT_COPY_L2T_BROADCAST_DW_7_element_size_shift  0
+#define SDMA_PKT_COPY_L2T_BROADCAST_DW_7_ELEMENT_SIZE(x) (((x) & SDMA_PKT_COPY_L2T_BROADCAST_DW_7_element_size_mask) << SDMA_PKT_COPY_L2T_BROADCAST_DW_7_element_size_shift)
+
+/*define for swizzle_mode field*/
+#define SDMA_PKT_COPY_L2T_BROADCAST_DW_7_swizzle_mode_offset 7
+#define SDMA_PKT_COPY_L2T_BROADCAST_DW_7_swizzle_mode_mask   0x0000001F
+#define SDMA_PKT_COPY_L2T_BROADCAST_DW_7_swizzle_mode_shift  3
+#define SDMA_PKT_COPY_L2T_BROADCAST_DW_7_SWIZZLE_MODE(x) (((x) & SDMA_PKT_COPY_L2T_BROADCAST_DW_7_swizzle_mode_mask) << SDMA_PKT_COPY_L2T_BROADCAST_DW_7_swizzle_mode_shift)
+
+/*define for dimension field*/
+#define SDMA_PKT_COPY_L2T_BROADCAST_DW_7_dimension_offset 7
+#define SDMA_PKT_COPY_L2T_BROADCAST_DW_7_dimension_mask   0x00000003
+#define SDMA_PKT_COPY_L2T_BROADCAST_DW_7_dimension_shift  9
+#define SDMA_PKT_COPY_L2T_BROADCAST_DW_7_DIMENSION(x) (((x) & SDMA_PKT_COPY_L2T_BROADCAST_DW_7_dimension_mask) << SDMA_PKT_COPY_L2T_BROADCAST_DW_7_dimension_shift)
+
+/*define for epitch field*/
+#define SDMA_PKT_COPY_L2T_BROADCAST_DW_7_epitch_offset 7
+#define SDMA_PKT_COPY_L2T_BROADCAST_DW_7_epitch_mask   0x0000FFFF
+#define SDMA_PKT_COPY_L2T_BROADCAST_DW_7_epitch_shift  16
+#define SDMA_PKT_COPY_L2T_BROADCAST_DW_7_EPITCH(x) (((x) & SDMA_PKT_COPY_L2T_BROADCAST_DW_7_epitch_mask) << SDMA_PKT_COPY_L2T_BROADCAST_DW_7_epitch_shift)
+
+/*define for DW_8 word*/
+/*define for x field*/
+#define SDMA_PKT_COPY_L2T_BROADCAST_DW_8_x_offset 8
+#define SDMA_PKT_COPY_L2T_BROADCAST_DW_8_x_mask   0x00003FFF
+#define SDMA_PKT_COPY_L2T_BROADCAST_DW_8_x_shift  0
+#define SDMA_PKT_COPY_L2T_BROADCAST_DW_8_X(x) (((x) & SDMA_PKT_COPY_L2T_BROADCAST_DW_8_x_mask) << SDMA_PKT_COPY_L2T_BROADCAST_DW_8_x_shift)
+
+/*define for y field*/
+#define SDMA_PKT_COPY_L2T_BROADCAST_DW_8_y_offset 8
+#define SDMA_PKT_COPY_L2T_BROADCAST_DW_8_y_mask   0x00003FFF
+#define SDMA_PKT_COPY_L2T_BROADCAST_DW_8_y_shift  16
+#define SDMA_PKT_COPY_L2T_BROADCAST_DW_8_Y(x) (((x) & SDMA_PKT_COPY_L2T_BROADCAST_DW_8_y_mask) << SDMA_PKT_COPY_L2T_BROADCAST_DW_8_y_shift)
+
+/*define for DW_9 word*/
+/*define for z field*/
+#define SDMA_PKT_COPY_L2T_BROADCAST_DW_9_z_offset 9
+#define SDMA_PKT_COPY_L2T_BROADCAST_DW_9_z_mask   0x000007FF
+#define SDMA_PKT_COPY_L2T_BROADCAST_DW_9_z_shift  0
+#define SDMA_PKT_COPY_L2T_BROADCAST_DW_9_Z(x) (((x) & SDMA_PKT_COPY_L2T_BROADCAST_DW_9_z_mask) << SDMA_PKT_COPY_L2T_BROADCAST_DW_9_z_shift)
+
+/*define for DW_10 word*/
+/*define for dst2_sw field*/
+#define SDMA_PKT_COPY_L2T_BROADCAST_DW_10_dst2_sw_offset 10
+#define SDMA_PKT_COPY_L2T_BROADCAST_DW_10_dst2_sw_mask   0x00000003
+#define SDMA_PKT_COPY_L2T_BROADCAST_DW_10_dst2_sw_shift  8
+#define SDMA_PKT_COPY_L2T_BROADCAST_DW_10_DST2_SW(x) (((x) & SDMA_PKT_COPY_L2T_BROADCAST_DW_10_dst2_sw_mask) << SDMA_PKT_COPY_L2T_BROADCAST_DW_10_dst2_sw_shift)
+
+/*define for linear_sw field*/
+#define SDMA_PKT_COPY_L2T_BROADCAST_DW_10_linear_sw_offset 10
+#define SDMA_PKT_COPY_L2T_BROADCAST_DW_10_linear_sw_mask   0x00000003
+#define SDMA_PKT_COPY_L2T_BROADCAST_DW_10_linear_sw_shift  16
+#define SDMA_PKT_COPY_L2T_BROADCAST_DW_10_LINEAR_SW(x) (((x) & SDMA_PKT_COPY_L2T_BROADCAST_DW_10_linear_sw_mask) << SDMA_PKT_COPY_L2T_BROADCAST_DW_10_linear_sw_shift)
+
+/*define for tile_sw field*/
+#define SDMA_PKT_COPY_L2T_BROADCAST_DW_10_tile_sw_offset 10
+#define SDMA_PKT_COPY_L2T_BROADCAST_DW_10_tile_sw_mask   0x00000003
+#define SDMA_PKT_COPY_L2T_BROADCAST_DW_10_tile_sw_shift  24
+#define SDMA_PKT_COPY_L2T_BROADCAST_DW_10_TILE_SW(x) (((x) & SDMA_PKT_COPY_L2T_BROADCAST_DW_10_tile_sw_mask) << SDMA_PKT_COPY_L2T_BROADCAST_DW_10_tile_sw_shift)
+
+/*define for LINEAR_ADDR_LO word*/
+/*define for linear_addr_31_0 field*/
+#define SDMA_PKT_COPY_L2T_BROADCAST_LINEAR_ADDR_LO_linear_addr_31_0_offset 11
+#define SDMA_PKT_COPY_L2T_BROADCAST_LINEAR_ADDR_LO_linear_addr_31_0_mask   0xFFFFFFFF
+#define SDMA_PKT_COPY_L2T_BROADCAST_LINEAR_ADDR_LO_linear_addr_31_0_shift  0
+#define SDMA_PKT_COPY_L2T_BROADCAST_LINEAR_ADDR_LO_LINEAR_ADDR_31_0(x) (((x) & SDMA_PKT_COPY_L2T_BROADCAST_LINEAR_ADDR_LO_linear_addr_31_0_mask) << SDMA_PKT_COPY_L2T_BROADCAST_LINEAR_ADDR_LO_linear_addr_31_0_shift)
+
+/*define for LINEAR_ADDR_HI word*/
+/*define for linear_addr_63_32 field*/
+#define SDMA_PKT_COPY_L2T_BROADCAST_LINEAR_ADDR_HI_linear_addr_63_32_offset 12
+#define SDMA_PKT_COPY_L2T_BROADCAST_LINEAR_ADDR_HI_linear_addr_63_32_mask   0xFFFFFFFF
+#define SDMA_PKT_COPY_L2T_BROADCAST_LINEAR_ADDR_HI_linear_addr_63_32_shift  0
+#define SDMA_PKT_COPY_L2T_BROADCAST_LINEAR_ADDR_HI_LINEAR_ADDR_63_32(x) (((x) & SDMA_PKT_COPY_L2T_BROADCAST_LINEAR_ADDR_HI_linear_addr_63_32_mask) << SDMA_PKT_COPY_L2T_BROADCAST_LINEAR_ADDR_HI_linear_addr_63_32_shift)
+
+/*define for LINEAR_PITCH word*/
+/*define for linear_pitch field*/
+#define SDMA_PKT_COPY_L2T_BROADCAST_LINEAR_PITCH_linear_pitch_offset 13
+#define SDMA_PKT_COPY_L2T_BROADCAST_LINEAR_PITCH_linear_pitch_mask   0x0007FFFF
+#define SDMA_PKT_COPY_L2T_BROADCAST_LINEAR_PITCH_linear_pitch_shift  0
+#define SDMA_PKT_COPY_L2T_BROADCAST_LINEAR_PITCH_LINEAR_PITCH(x) (((x) & SDMA_PKT_COPY_L2T_BROADCAST_LINEAR_PITCH_linear_pitch_mask) << SDMA_PKT_COPY_L2T_BROADCAST_LINEAR_PITCH_linear_pitch_shift)
+
+/*define for LINEAR_SLICE_PITCH word*/
+/*define for linear_slice_pitch field*/
+#define SDMA_PKT_COPY_L2T_BROADCAST_LINEAR_SLICE_PITCH_linear_slice_pitch_offset 14
+#define SDMA_PKT_COPY_L2T_BROADCAST_LINEAR_SLICE_PITCH_linear_slice_pitch_mask   0xFFFFFFFF
+#define SDMA_PKT_COPY_L2T_BROADCAST_LINEAR_SLICE_PITCH_linear_slice_pitch_shift  0
+#define SDMA_PKT_COPY_L2T_BROADCAST_LINEAR_SLICE_PITCH_LINEAR_SLICE_PITCH(x) (((x) & SDMA_PKT_COPY_L2T_BROADCAST_LINEAR_SLICE_PITCH_linear_slice_pitch_mask) << SDMA_PKT_COPY_L2T_BROADCAST_LINEAR_SLICE_PITCH_linear_slice_pitch_shift)
+
+/*define for COUNT word*/
+/*define for count field*/
+#define SDMA_PKT_COPY_L2T_BROADCAST_COUNT_count_offset 15
+#define SDMA_PKT_COPY_L2T_BROADCAST_COUNT_count_mask   0x000FFFFF
+#define SDMA_PKT_COPY_L2T_BROADCAST_COUNT_count_shift  0
+#define SDMA_PKT_COPY_L2T_BROADCAST_COUNT_COUNT(x) (((x) & SDMA_PKT_COPY_L2T_BROADCAST_COUNT_count_mask) << SDMA_PKT_COPY_L2T_BROADCAST_COUNT_count_shift)
+
+
+/*
+** Definitions for SDMA_PKT_COPY_T2T packet
+*/
+
+/*define for HEADER word*/
+/*define for op field*/
+#define SDMA_PKT_COPY_T2T_HEADER_op_offset 0
+#define SDMA_PKT_COPY_T2T_HEADER_op_mask   0x000000FF
+#define SDMA_PKT_COPY_T2T_HEADER_op_shift  0
+#define SDMA_PKT_COPY_T2T_HEADER_OP(x) (((x) & SDMA_PKT_COPY_T2T_HEADER_op_mask) << SDMA_PKT_COPY_T2T_HEADER_op_shift)
+
+/*define for sub_op field*/
+#define SDMA_PKT_COPY_T2T_HEADER_sub_op_offset 0
+#define SDMA_PKT_COPY_T2T_HEADER_sub_op_mask   0x000000FF
+#define SDMA_PKT_COPY_T2T_HEADER_sub_op_shift  8
+#define SDMA_PKT_COPY_T2T_HEADER_SUB_OP(x) (((x) & SDMA_PKT_COPY_T2T_HEADER_sub_op_mask) << SDMA_PKT_COPY_T2T_HEADER_sub_op_shift)
+
+/*define for tmz field*/
+#define SDMA_PKT_COPY_T2T_HEADER_tmz_offset 0
+#define SDMA_PKT_COPY_T2T_HEADER_tmz_mask   0x00000001
+#define SDMA_PKT_COPY_T2T_HEADER_tmz_shift  18
+#define SDMA_PKT_COPY_T2T_HEADER_TMZ(x) (((x) & SDMA_PKT_COPY_T2T_HEADER_tmz_mask) << SDMA_PKT_COPY_T2T_HEADER_tmz_shift)
+
+/*define for mip_max field*/
+#define SDMA_PKT_COPY_T2T_HEADER_mip_max_offset 0
+#define SDMA_PKT_COPY_T2T_HEADER_mip_max_mask   0x0000000F
+#define SDMA_PKT_COPY_T2T_HEADER_mip_max_shift  20
+#define SDMA_PKT_COPY_T2T_HEADER_MIP_MAX(x) (((x) & SDMA_PKT_COPY_T2T_HEADER_mip_max_mask) << SDMA_PKT_COPY_T2T_HEADER_mip_max_shift)
+
+/*define for SRC_ADDR_LO word*/
+/*define for src_addr_31_0 field*/
+#define SDMA_PKT_COPY_T2T_SRC_ADDR_LO_src_addr_31_0_offset 1
+#define SDMA_PKT_COPY_T2T_SRC_ADDR_LO_src_addr_31_0_mask   0xFFFFFFFF
+#define SDMA_PKT_COPY_T2T_SRC_ADDR_LO_src_addr_31_0_shift  0
+#define SDMA_PKT_COPY_T2T_SRC_ADDR_LO_SRC_ADDR_31_0(x) (((x) & SDMA_PKT_COPY_T2T_SRC_ADDR_LO_src_addr_31_0_mask) << SDMA_PKT_COPY_T2T_SRC_ADDR_LO_src_addr_31_0_shift)
+
+/*define for SRC_ADDR_HI word*/
+/*define for src_addr_63_32 field*/
+#define SDMA_PKT_COPY_T2T_SRC_ADDR_HI_src_addr_63_32_offset 2
+#define SDMA_PKT_COPY_T2T_SRC_ADDR_HI_src_addr_63_32_mask   0xFFFFFFFF
+#define SDMA_PKT_COPY_T2T_SRC_ADDR_HI_src_addr_63_32_shift  0
+#define SDMA_PKT_COPY_T2T_SRC_ADDR_HI_SRC_ADDR_63_32(x) (((x) & SDMA_PKT_COPY_T2T_SRC_ADDR_HI_src_addr_63_32_mask) << SDMA_PKT_COPY_T2T_SRC_ADDR_HI_src_addr_63_32_shift)
+
+/*define for DW_3 word*/
+/*define for src_x field*/
+#define SDMA_PKT_COPY_T2T_DW_3_src_x_offset 3
+#define SDMA_PKT_COPY_T2T_DW_3_src_x_mask   0x00003FFF
+#define SDMA_PKT_COPY_T2T_DW_3_src_x_shift  0
+#define SDMA_PKT_COPY_T2T_DW_3_SRC_X(x) (((x) & SDMA_PKT_COPY_T2T_DW_3_src_x_mask) << SDMA_PKT_COPY_T2T_DW_3_src_x_shift)
+
+/*define for src_y field*/
+#define SDMA_PKT_COPY_T2T_DW_3_src_y_offset 3
+#define SDMA_PKT_COPY_T2T_DW_3_src_y_mask   0x00003FFF
+#define SDMA_PKT_COPY_T2T_DW_3_src_y_shift  16
+#define SDMA_PKT_COPY_T2T_DW_3_SRC_Y(x) (((x) & SDMA_PKT_COPY_T2T_DW_3_src_y_mask) << SDMA_PKT_COPY_T2T_DW_3_src_y_shift)
+
+/*define for DW_4 word*/
+/*define for src_z field*/
+#define SDMA_PKT_COPY_T2T_DW_4_src_z_offset 4
+#define SDMA_PKT_COPY_T2T_DW_4_src_z_mask   0x000007FF
+#define SDMA_PKT_COPY_T2T_DW_4_src_z_shift  0
+#define SDMA_PKT_COPY_T2T_DW_4_SRC_Z(x) (((x) & SDMA_PKT_COPY_T2T_DW_4_src_z_mask) << SDMA_PKT_COPY_T2T_DW_4_src_z_shift)
+
+/*define for src_width field*/
+#define SDMA_PKT_COPY_T2T_DW_4_src_width_offset 4
+#define SDMA_PKT_COPY_T2T_DW_4_src_width_mask   0x00003FFF
+#define SDMA_PKT_COPY_T2T_DW_4_src_width_shift  16
+#define SDMA_PKT_COPY_T2T_DW_4_SRC_WIDTH(x) (((x) & SDMA_PKT_COPY_T2T_DW_4_src_width_mask) << SDMA_PKT_COPY_T2T_DW_4_src_width_shift)
+
+/*define for DW_5 word*/
+/*define for src_height field*/
+#define SDMA_PKT_COPY_T2T_DW_5_src_height_offset 5
+#define SDMA_PKT_COPY_T2T_DW_5_src_height_mask   0x00003FFF
+#define SDMA_PKT_COPY_T2T_DW_5_src_height_shift  0
+#define SDMA_PKT_COPY_T2T_DW_5_SRC_HEIGHT(x) (((x) & SDMA_PKT_COPY_T2T_DW_5_src_height_mask) << SDMA_PKT_COPY_T2T_DW_5_src_height_shift)
+
+/*define for src_depth field*/
+#define SDMA_PKT_COPY_T2T_DW_5_src_depth_offset 5
+#define SDMA_PKT_COPY_T2T_DW_5_src_depth_mask   0x000007FF
+#define SDMA_PKT_COPY_T2T_DW_5_src_depth_shift  16
+#define SDMA_PKT_COPY_T2T_DW_5_SRC_DEPTH(x) (((x) & SDMA_PKT_COPY_T2T_DW_5_src_depth_mask) << SDMA_PKT_COPY_T2T_DW_5_src_depth_shift)
+
+/*define for DW_6 word*/
+/*define for src_element_size field*/
+#define SDMA_PKT_COPY_T2T_DW_6_src_element_size_offset 6
+#define SDMA_PKT_COPY_T2T_DW_6_src_element_size_mask   0x00000007
+#define SDMA_PKT_COPY_T2T_DW_6_src_element_size_shift  0
+#define SDMA_PKT_COPY_T2T_DW_6_SRC_ELEMENT_SIZE(x) (((x) & SDMA_PKT_COPY_T2T_DW_6_src_element_size_mask) << SDMA_PKT_COPY_T2T_DW_6_src_element_size_shift)
+
+/*define for src_swizzle_mode field*/
+#define SDMA_PKT_COPY_T2T_DW_6_src_swizzle_mode_offset 6
+#define SDMA_PKT_COPY_T2T_DW_6_src_swizzle_mode_mask   0x0000001F
+#define SDMA_PKT_COPY_T2T_DW_6_src_swizzle_mode_shift  3
+#define SDMA_PKT_COPY_T2T_DW_6_SRC_SWIZZLE_MODE(x) (((x) & SDMA_PKT_COPY_T2T_DW_6_src_swizzle_mode_mask) << SDMA_PKT_COPY_T2T_DW_6_src_swizzle_mode_shift)
+
+/*define for src_dimension field*/
+#define SDMA_PKT_COPY_T2T_DW_6_src_dimension_offset 6
+#define SDMA_PKT_COPY_T2T_DW_6_src_dimension_mask   0x00000003
+#define SDMA_PKT_COPY_T2T_DW_6_src_dimension_shift  9
+#define SDMA_PKT_COPY_T2T_DW_6_SRC_DIMENSION(x) (((x) & SDMA_PKT_COPY_T2T_DW_6_src_dimension_mask) << SDMA_PKT_COPY_T2T_DW_6_src_dimension_shift)
+
+/*define for src_epitch field*/
+#define SDMA_PKT_COPY_T2T_DW_6_src_epitch_offset 6
+#define SDMA_PKT_COPY_T2T_DW_6_src_epitch_mask   0x0000FFFF
+#define SDMA_PKT_COPY_T2T_DW_6_src_epitch_shift  16
+#define SDMA_PKT_COPY_T2T_DW_6_SRC_EPITCH(x) (((x) & SDMA_PKT_COPY_T2T_DW_6_src_epitch_mask) << SDMA_PKT_COPY_T2T_DW_6_src_epitch_shift)
+
+/*define for DST_ADDR_LO word*/
+/*define for dst_addr_31_0 field*/
+#define SDMA_PKT_COPY_T2T_DST_ADDR_LO_dst_addr_31_0_offset 7
+#define SDMA_PKT_COPY_T2T_DST_ADDR_LO_dst_addr_31_0_mask   0xFFFFFFFF
+#define SDMA_PKT_COPY_T2T_DST_ADDR_LO_dst_addr_31_0_shift  0
+#define SDMA_PKT_COPY_T2T_DST_ADDR_LO_DST_ADDR_31_0(x) (((x) & SDMA_PKT_COPY_T2T_DST_ADDR_LO_dst_addr_31_0_mask) << SDMA_PKT_COPY_T2T_DST_ADDR_LO_dst_addr_31_0_shift)
+
+/*define for DST_ADDR_HI word*/
+/*define for dst_addr_63_32 field*/
+#define SDMA_PKT_COPY_T2T_DST_ADDR_HI_dst_addr_63_32_offset 8
+#define SDMA_PKT_COPY_T2T_DST_ADDR_HI_dst_addr_63_32_mask   0xFFFFFFFF
+#define SDMA_PKT_COPY_T2T_DST_ADDR_HI_dst_addr_63_32_shift  0
+#define SDMA_PKT_COPY_T2T_DST_ADDR_HI_DST_ADDR_63_32(x) (((x) & SDMA_PKT_COPY_T2T_DST_ADDR_HI_dst_addr_63_32_mask) << SDMA_PKT_COPY_T2T_DST_ADDR_HI_dst_addr_63_32_shift)
+
+/*define for DW_9 word*/
+/*define for dst_x field*/
+#define SDMA_PKT_COPY_T2T_DW_9_dst_x_offset 9
+#define SDMA_PKT_COPY_T2T_DW_9_dst_x_mask   0x00003FFF
+#define SDMA_PKT_COPY_T2T_DW_9_dst_x_shift  0
+#define SDMA_PKT_COPY_T2T_DW_9_DST_X(x) (((x) & SDMA_PKT_COPY_T2T_DW_9_dst_x_mask) << SDMA_PKT_COPY_T2T_DW_9_dst_x_shift)
+
+/*define for dst_y field*/
+#define SDMA_PKT_COPY_T2T_DW_9_dst_y_offset 9
+#define SDMA_PKT_COPY_T2T_DW_9_dst_y_mask   0x00003FFF
+#define SDMA_PKT_COPY_T2T_DW_9_dst_y_shift  16
+#define SDMA_PKT_COPY_T2T_DW_9_DST_Y(x) (((x) & SDMA_PKT_COPY_T2T_DW_9_dst_y_mask) << SDMA_PKT_COPY_T2T_DW_9_dst_y_shift)
+
+/*define for DW_10 word*/
+/*define for dst_z field*/
+#define SDMA_PKT_COPY_T2T_DW_10_dst_z_offset 10
+#define SDMA_PKT_COPY_T2T_DW_10_dst_z_mask   0x000007FF
+#define SDMA_PKT_COPY_T2T_DW_10_dst_z_shift  0
+#define SDMA_PKT_COPY_T2T_DW_10_DST_Z(x) (((x) & SDMA_PKT_COPY_T2T_DW_10_dst_z_mask) << SDMA_PKT_COPY_T2T_DW_10_dst_z_shift)
+
+/*define for dst_width field*/
+#define SDMA_PKT_COPY_T2T_DW_10_dst_width_offset 10
+#define SDMA_PKT_COPY_T2T_DW_10_dst_width_mask   0x00003FFF
+#define SDMA_PKT_COPY_T2T_DW_10_dst_width_shift  16
+#define SDMA_PKT_COPY_T2T_DW_10_DST_WIDTH(x) (((x) & SDMA_PKT_COPY_T2T_DW_10_dst_width_mask) << SDMA_PKT_COPY_T2T_DW_10_dst_width_shift)
+
+/*define for DW_11 word*/
+/*define for dst_height field*/
+#define SDMA_PKT_COPY_T2T_DW_11_dst_height_offset 11
+#define SDMA_PKT_COPY_T2T_DW_11_dst_height_mask   0x00003FFF
+#define SDMA_PKT_COPY_T2T_DW_11_dst_height_shift  0
+#define SDMA_PKT_COPY_T2T_DW_11_DST_HEIGHT(x) (((x) & SDMA_PKT_COPY_T2T_DW_11_dst_height_mask) << SDMA_PKT_COPY_T2T_DW_11_dst_height_shift)
+
+/*define for dst_depth field*/
+#define SDMA_PKT_COPY_T2T_DW_11_dst_depth_offset 11
+#define SDMA_PKT_COPY_T2T_DW_11_dst_depth_mask   0x000007FF
+#define SDMA_PKT_COPY_T2T_DW_11_dst_depth_shift  16
+#define SDMA_PKT_COPY_T2T_DW_11_DST_DEPTH(x) (((x) & SDMA_PKT_COPY_T2T_DW_11_dst_depth_mask) << SDMA_PKT_COPY_T2T_DW_11_dst_depth_shift)
+
+/*define for DW_12 word*/
+/*define for dst_element_size field*/
+#define SDMA_PKT_COPY_T2T_DW_12_dst_element_size_offset 12
+#define SDMA_PKT_COPY_T2T_DW_12_dst_element_size_mask   0x00000007
+#define SDMA_PKT_COPY_T2T_DW_12_dst_element_size_shift  0
+#define SDMA_PKT_COPY_T2T_DW_12_DST_ELEMENT_SIZE(x) (((x) & SDMA_PKT_COPY_T2T_DW_12_dst_element_size_mask) << SDMA_PKT_COPY_T2T_DW_12_dst_element_size_shift)
+
+/*define for dst_swizzle_mode field*/
+#define SDMA_PKT_COPY_T2T_DW_12_dst_swizzle_mode_offset 12
+#define SDMA_PKT_COPY_T2T_DW_12_dst_swizzle_mode_mask   0x0000001F
+#define SDMA_PKT_COPY_T2T_DW_12_dst_swizzle_mode_shift  3
+#define SDMA_PKT_COPY_T2T_DW_12_DST_SWIZZLE_MODE(x) (((x) & SDMA_PKT_COPY_T2T_DW_12_dst_swizzle_mode_mask) << SDMA_PKT_COPY_T2T_DW_12_dst_swizzle_mode_shift)
+
+/*define for dst_dimension field*/
+#define SDMA_PKT_COPY_T2T_DW_12_dst_dimension_offset 12
+#define SDMA_PKT_COPY_T2T_DW_12_dst_dimension_mask   0x00000003
+#define SDMA_PKT_COPY_T2T_DW_12_dst_dimension_shift  9
+#define SDMA_PKT_COPY_T2T_DW_12_DST_DIMENSION(x) (((x) & SDMA_PKT_COPY_T2T_DW_12_dst_dimension_mask) << SDMA_PKT_COPY_T2T_DW_12_dst_dimension_shift)
+
+/*define for dst_epitch field*/
+#define SDMA_PKT_COPY_T2T_DW_12_dst_epitch_offset 12
+#define SDMA_PKT_COPY_T2T_DW_12_dst_epitch_mask   0x0000FFFF
+#define SDMA_PKT_COPY_T2T_DW_12_dst_epitch_shift  16
+#define SDMA_PKT_COPY_T2T_DW_12_DST_EPITCH(x) (((x) & SDMA_PKT_COPY_T2T_DW_12_dst_epitch_mask) << SDMA_PKT_COPY_T2T_DW_12_dst_epitch_shift)
+
+/*define for DW_13 word*/
+/*define for rect_x field*/
+#define SDMA_PKT_COPY_T2T_DW_13_rect_x_offset 13
+#define SDMA_PKT_COPY_T2T_DW_13_rect_x_mask   0x00003FFF
+#define SDMA_PKT_COPY_T2T_DW_13_rect_x_shift  0
+#define SDMA_PKT_COPY_T2T_DW_13_RECT_X(x) (((x) & SDMA_PKT_COPY_T2T_DW_13_rect_x_mask) << SDMA_PKT_COPY_T2T_DW_13_rect_x_shift)
+
+/*define for rect_y field*/
+#define SDMA_PKT_COPY_T2T_DW_13_rect_y_offset 13
+#define SDMA_PKT_COPY_T2T_DW_13_rect_y_mask   0x00003FFF
+#define SDMA_PKT_COPY_T2T_DW_13_rect_y_shift  16
+#define SDMA_PKT_COPY_T2T_DW_13_RECT_Y(x) (((x) & SDMA_PKT_COPY_T2T_DW_13_rect_y_mask) << SDMA_PKT_COPY_T2T_DW_13_rect_y_shift)
+
+/*define for DW_14 word*/
+/*define for rect_z field*/
+#define SDMA_PKT_COPY_T2T_DW_14_rect_z_offset 14
+#define SDMA_PKT_COPY_T2T_DW_14_rect_z_mask   0x000007FF
+#define SDMA_PKT_COPY_T2T_DW_14_rect_z_shift  0
+#define SDMA_PKT_COPY_T2T_DW_14_RECT_Z(x) (((x) & SDMA_PKT_COPY_T2T_DW_14_rect_z_mask) << SDMA_PKT_COPY_T2T_DW_14_rect_z_shift)
+
+/*define for dst_sw field*/
+#define SDMA_PKT_COPY_T2T_DW_14_dst_sw_offset 14
+#define SDMA_PKT_COPY_T2T_DW_14_dst_sw_mask   0x00000003
+#define SDMA_PKT_COPY_T2T_DW_14_dst_sw_shift  16
+#define SDMA_PKT_COPY_T2T_DW_14_DST_SW(x) (((x) & SDMA_PKT_COPY_T2T_DW_14_dst_sw_mask) << SDMA_PKT_COPY_T2T_DW_14_dst_sw_shift)
+
+/*define for src_sw field*/
+#define SDMA_PKT_COPY_T2T_DW_14_src_sw_offset 14
+#define SDMA_PKT_COPY_T2T_DW_14_src_sw_mask   0x00000003
+#define SDMA_PKT_COPY_T2T_DW_14_src_sw_shift  24
+#define SDMA_PKT_COPY_T2T_DW_14_SRC_SW(x) (((x) & SDMA_PKT_COPY_T2T_DW_14_src_sw_mask) << SDMA_PKT_COPY_T2T_DW_14_src_sw_shift)
+
+
+/*
+** Definitions for SDMA_PKT_COPY_TILED_SUBWIN packet
+*/
+
+/*define for HEADER word*/
+/*define for op field*/
+#define SDMA_PKT_COPY_TILED_SUBWIN_HEADER_op_offset 0
+#define SDMA_PKT_COPY_TILED_SUBWIN_HEADER_op_mask   0x000000FF
+#define SDMA_PKT_COPY_TILED_SUBWIN_HEADER_op_shift  0
+#define SDMA_PKT_COPY_TILED_SUBWIN_HEADER_OP(x) (((x) & SDMA_PKT_COPY_TILED_SUBWIN_HEADER_op_mask) << SDMA_PKT_COPY_TILED_SUBWIN_HEADER_op_shift)
+
+/*define for sub_op field*/
+#define SDMA_PKT_COPY_TILED_SUBWIN_HEADER_sub_op_offset 0
+#define SDMA_PKT_COPY_TILED_SUBWIN_HEADER_sub_op_mask   0x000000FF
+#define SDMA_PKT_COPY_TILED_SUBWIN_HEADER_sub_op_shift  8
+#define SDMA_PKT_COPY_TILED_SUBWIN_HEADER_SUB_OP(x) (((x) & SDMA_PKT_COPY_TILED_SUBWIN_HEADER_sub_op_mask) << SDMA_PKT_COPY_TILED_SUBWIN_HEADER_sub_op_shift)
+
+/*define for tmz field*/
+#define SDMA_PKT_COPY_TILED_SUBWIN_HEADER_tmz_offset 0
+#define SDMA_PKT_COPY_TILED_SUBWIN_HEADER_tmz_mask   0x00000001
+#define SDMA_PKT_COPY_TILED_SUBWIN_HEADER_tmz_shift  18
+#define SDMA_PKT_COPY_TILED_SUBWIN_HEADER_TMZ(x) (((x) & SDMA_PKT_COPY_TILED_SUBWIN_HEADER_tmz_mask) << SDMA_PKT_COPY_TILED_SUBWIN_HEADER_tmz_shift)
+
+/*define for mip_max field*/
+#define SDMA_PKT_COPY_TILED_SUBWIN_HEADER_mip_max_offset 0
+#define SDMA_PKT_COPY_TILED_SUBWIN_HEADER_mip_max_mask   0x0000000F
+#define SDMA_PKT_COPY_TILED_SUBWIN_HEADER_mip_max_shift  20
+#define SDMA_PKT_COPY_TILED_SUBWIN_HEADER_MIP_MAX(x) (((x) & SDMA_PKT_COPY_TILED_SUBWIN_HEADER_mip_max_mask) << SDMA_PKT_COPY_TILED_SUBWIN_HEADER_mip_max_shift)
+
+/*define for mip_id field*/
+#define SDMA_PKT_COPY_TILED_SUBWIN_HEADER_mip_id_offset 0
+#define SDMA_PKT_COPY_TILED_SUBWIN_HEADER_mip_id_mask   0x0000000F
+#define SDMA_PKT_COPY_TILED_SUBWIN_HEADER_mip_id_shift  24
+#define SDMA_PKT_COPY_TILED_SUBWIN_HEADER_MIP_ID(x) (((x) & SDMA_PKT_COPY_TILED_SUBWIN_HEADER_mip_id_mask) << SDMA_PKT_COPY_TILED_SUBWIN_HEADER_mip_id_shift)
+
+/*define for detile field*/
+#define SDMA_PKT_COPY_TILED_SUBWIN_HEADER_detile_offset 0
+#define SDMA_PKT_COPY_TILED_SUBWIN_HEADER_detile_mask   0x00000001
+#define SDMA_PKT_COPY_TILED_SUBWIN_HEADER_detile_shift  31
+#define SDMA_PKT_COPY_TILED_SUBWIN_HEADER_DETILE(x) (((x) & SDMA_PKT_COPY_TILED_SUBWIN_HEADER_detile_mask) << SDMA_PKT_COPY_TILED_SUBWIN_HEADER_detile_shift)
+
+/*define for TILED_ADDR_LO word*/
+/*define for tiled_addr_31_0 field*/
+#define SDMA_PKT_COPY_TILED_SUBWIN_TILED_ADDR_LO_tiled_addr_31_0_offset 1
+#define SDMA_PKT_COPY_TILED_SUBWIN_TILED_ADDR_LO_tiled_addr_31_0_mask   0xFFFFFFFF
+#define SDMA_PKT_COPY_TILED_SUBWIN_TILED_ADDR_LO_tiled_addr_31_0_shift  0
+#define SDMA_PKT_COPY_TILED_SUBWIN_TILED_ADDR_LO_TILED_ADDR_31_0(x) (((x) & SDMA_PKT_COPY_TILED_SUBWIN_TILED_ADDR_LO_tiled_addr_31_0_mask) << SDMA_PKT_COPY_TILED_SUBWIN_TILED_ADDR_LO_tiled_addr_31_0_shift)
+
+/*define for TILED_ADDR_HI word*/
+/*define for tiled_addr_63_32 field*/
+#define SDMA_PKT_COPY_TILED_SUBWIN_TILED_ADDR_HI_tiled_addr_63_32_offset 2
+#define SDMA_PKT_COPY_TILED_SUBWIN_TILED_ADDR_HI_tiled_addr_63_32_mask   0xFFFFFFFF
+#define SDMA_PKT_COPY_TILED_SUBWIN_TILED_ADDR_HI_tiled_addr_63_32_shift  0
+#define SDMA_PKT_COPY_TILED_SUBWIN_TILED_ADDR_HI_TILED_ADDR_63_32(x) (((x) & SDMA_PKT_COPY_TILED_SUBWIN_TILED_ADDR_HI_tiled_addr_63_32_mask) << SDMA_PKT_COPY_TILED_SUBWIN_TILED_ADDR_HI_tiled_addr_63_32_shift)
+
+/*define for DW_3 word*/
+/*define for tiled_x field*/
+#define SDMA_PKT_COPY_TILED_SUBWIN_DW_3_tiled_x_offset 3
+#define SDMA_PKT_COPY_TILED_SUBWIN_DW_3_tiled_x_mask   0x00003FFF
+#define SDMA_PKT_COPY_TILED_SUBWIN_DW_3_tiled_x_shift  0
+#define SDMA_PKT_COPY_TILED_SUBWIN_DW_3_TILED_X(x) (((x) & SDMA_PKT_COPY_TILED_SUBWIN_DW_3_tiled_x_mask) << SDMA_PKT_COPY_TILED_SUBWIN_DW_3_tiled_x_shift)
+
+/*define for tiled_y field*/
+#define SDMA_PKT_COPY_TILED_SUBWIN_DW_3_tiled_y_offset 3
+#define SDMA_PKT_COPY_TILED_SUBWIN_DW_3_tiled_y_mask   0x00003FFF
+#define SDMA_PKT_COPY_TILED_SUBWIN_DW_3_tiled_y_shift  16
+#define SDMA_PKT_COPY_TILED_SUBWIN_DW_3_TILED_Y(x) (((x) & SDMA_PKT_COPY_TILED_SUBWIN_DW_3_tiled_y_mask) << SDMA_PKT_COPY_TILED_SUBWIN_DW_3_tiled_y_shift)
+
+/*define for DW_4 word*/
+/*define for tiled_z field*/
+#define SDMA_PKT_COPY_TILED_SUBWIN_DW_4_tiled_z_offset 4
+#define SDMA_PKT_COPY_TILED_SUBWIN_DW_4_tiled_z_mask   0x000007FF
+#define SDMA_PKT_COPY_TILED_SUBWIN_DW_4_tiled_z_shift  0
+#define SDMA_PKT_COPY_TILED_SUBWIN_DW_4_TILED_Z(x) (((x) & SDMA_PKT_COPY_TILED_SUBWIN_DW_4_tiled_z_mask) << SDMA_PKT_COPY_TILED_SUBWIN_DW_4_tiled_z_shift)
+
+/*define for width field*/
+#define SDMA_PKT_COPY_TILED_SUBWIN_DW_4_width_offset 4
+#define SDMA_PKT_COPY_TILED_SUBWIN_DW_4_width_mask   0x00003FFF
+#define SDMA_PKT_COPY_TILED_SUBWIN_DW_4_width_shift  16
+#define SDMA_PKT_COPY_TILED_SUBWIN_DW_4_WIDTH(x) (((x) & SDMA_PKT_COPY_TILED_SUBWIN_DW_4_width_mask) << SDMA_PKT_COPY_TILED_SUBWIN_DW_4_width_shift)
+
+/*define for DW_5 word*/
+/*define for height field*/
+#define SDMA_PKT_COPY_TILED_SUBWIN_DW_5_height_offset 5
+#define SDMA_PKT_COPY_TILED_SUBWIN_DW_5_height_mask   0x00003FFF
+#define SDMA_PKT_COPY_TILED_SUBWIN_DW_5_height_shift  0
+#define SDMA_PKT_COPY_TILED_SUBWIN_DW_5_HEIGHT(x) (((x) & SDMA_PKT_COPY_TILED_SUBWIN_DW_5_height_mask) << SDMA_PKT_COPY_TILED_SUBWIN_DW_5_height_shift)
+
+/*define for depth field*/
+#define SDMA_PKT_COPY_TILED_SUBWIN_DW_5_depth_offset 5
+#define SDMA_PKT_COPY_TILED_SUBWIN_DW_5_depth_mask   0x000007FF
+#define SDMA_PKT_COPY_TILED_SUBWIN_DW_5_depth_shift  16
+#define SDMA_PKT_COPY_TILED_SUBWIN_DW_5_DEPTH(x) (((x) & SDMA_PKT_COPY_TILED_SUBWIN_DW_5_depth_mask) << SDMA_PKT_COPY_TILED_SUBWIN_DW_5_depth_shift)
+
+/*define for DW_6 word*/
+/*define for element_size field*/
+#define SDMA_PKT_COPY_TILED_SUBWIN_DW_6_element_size_offset 6
+#define SDMA_PKT_COPY_TILED_SUBWIN_DW_6_element_size_mask   0x00000007
+#define SDMA_PKT_COPY_TILED_SUBWIN_DW_6_element_size_shift  0
+#define SDMA_PKT_COPY_TILED_SUBWIN_DW_6_ELEMENT_SIZE(x) (((x) & SDMA_PKT_COPY_TILED_SUBWIN_DW_6_element_size_mask) << SDMA_PKT_COPY_TILED_SUBWIN_DW_6_element_size_shift)
+
+/*define for swizzle_mode field*/
+#define SDMA_PKT_COPY_TILED_SUBWIN_DW_6_swizzle_mode_offset 6
+#define SDMA_PKT_COPY_TILED_SUBWIN_DW_6_swizzle_mode_mask   0x0000001F
+#define SDMA_PKT_COPY_TILED_SUBWIN_DW_6_swizzle_mode_shift  3
+#define SDMA_PKT_COPY_TILED_SUBWIN_DW_6_SWIZZLE_MODE(x) (((x) & SDMA_PKT_COPY_TILED_SUBWIN_DW_6_swizzle_mode_mask) << SDMA_PKT_COPY_TILED_SUBWIN_DW_6_swizzle_mode_shift)
+
+/*define for dimension field*/
+#define SDMA_PKT_COPY_TILED_SUBWIN_DW_6_dimension_offset 6
+#define SDMA_PKT_COPY_TILED_SUBWIN_DW_6_dimension_mask   0x00000003
+#define SDMA_PKT_COPY_TILED_SUBWIN_DW_6_dimension_shift  9
+#define SDMA_PKT_COPY_TILED_SUBWIN_DW_6_DIMENSION(x) (((x) & SDMA_PKT_COPY_TILED_SUBWIN_DW_6_dimension_mask) << SDMA_PKT_COPY_TILED_SUBWIN_DW_6_dimension_shift)
+
+/*define for epitch field*/
+#define SDMA_PKT_COPY_TILED_SUBWIN_DW_6_epitch_offset 6
+#define SDMA_PKT_COPY_TILED_SUBWIN_DW_6_epitch_mask   0x0000FFFF
+#define SDMA_PKT_COPY_TILED_SUBWIN_DW_6_epitch_shift  16
+#define SDMA_PKT_COPY_TILED_SUBWIN_DW_6_EPITCH(x) (((x) & SDMA_PKT_COPY_TILED_SUBWIN_DW_6_epitch_mask) << SDMA_PKT_COPY_TILED_SUBWIN_DW_6_epitch_shift)
+
+/*define for LINEAR_ADDR_LO word*/
+/*define for linear_addr_31_0 field*/
+#define SDMA_PKT_COPY_TILED_SUBWIN_LINEAR_ADDR_LO_linear_addr_31_0_offset 7
+#define SDMA_PKT_COPY_TILED_SUBWIN_LINEAR_ADDR_LO_linear_addr_31_0_mask   0xFFFFFFFF
+#define SDMA_PKT_COPY_TILED_SUBWIN_LINEAR_ADDR_LO_linear_addr_31_0_shift  0
+#define SDMA_PKT_COPY_TILED_SUBWIN_LINEAR_ADDR_LO_LINEAR_ADDR_31_0(x) (((x) & SDMA_PKT_COPY_TILED_SUBWIN_LINEAR_ADDR_LO_linear_addr_31_0_mask) << SDMA_PKT_COPY_TILED_SUBWIN_LINEAR_ADDR_LO_linear_addr_31_0_shift)
+
+/*define for LINEAR_ADDR_HI word*/
+/*define for linear_addr_63_32 field*/
+#define SDMA_PKT_COPY_TILED_SUBWIN_LINEAR_ADDR_HI_linear_addr_63_32_offset 8
+#define SDMA_PKT_COPY_TILED_SUBWIN_LINEAR_ADDR_HI_linear_addr_63_32_mask   0xFFFFFFFF
+#define SDMA_PKT_COPY_TILED_SUBWIN_LINEAR_ADDR_HI_linear_addr_63_32_shift  0
+#define SDMA_PKT_COPY_TILED_SUBWIN_LINEAR_ADDR_HI_LINEAR_ADDR_63_32(x) (((x) & SDMA_PKT_COPY_TILED_SUBWIN_LINEAR_ADDR_HI_linear_addr_63_32_mask) << SDMA_PKT_COPY_TILED_SUBWIN_LINEAR_ADDR_HI_linear_addr_63_32_shift)
+
+/*define for DW_9 word*/
+/*define for linear_x field*/
+#define SDMA_PKT_COPY_TILED_SUBWIN_DW_9_linear_x_offset 9
+#define SDMA_PKT_COPY_TILED_SUBWIN_DW_9_linear_x_mask   0x00003FFF
+#define SDMA_PKT_COPY_TILED_SUBWIN_DW_9_linear_x_shift  0
+#define SDMA_PKT_COPY_TILED_SUBWIN_DW_9_LINEAR_X(x) (((x) & SDMA_PKT_COPY_TILED_SUBWIN_DW_9_linear_x_mask) << SDMA_PKT_COPY_TILED_SUBWIN_DW_9_linear_x_shift)
+
+/*define for linear_y field*/
+#define SDMA_PKT_COPY_TILED_SUBWIN_DW_9_linear_y_offset 9
+#define SDMA_PKT_COPY_TILED_SUBWIN_DW_9_linear_y_mask   0x00003FFF
+#define SDMA_PKT_COPY_TILED_SUBWIN_DW_9_linear_y_shift  16
+#define SDMA_PKT_COPY_TILED_SUBWIN_DW_9_LINEAR_Y(x) (((x) & SDMA_PKT_COPY_TILED_SUBWIN_DW_9_linear_y_mask) << SDMA_PKT_COPY_TILED_SUBWIN_DW_9_linear_y_shift)
+
+/*define for DW_10 word*/
+/*define for linear_z field*/
+#define SDMA_PKT_COPY_TILED_SUBWIN_DW_10_linear_z_offset 10
+#define SDMA_PKT_COPY_TILED_SUBWIN_DW_10_linear_z_mask   0x000007FF
+#define SDMA_PKT_COPY_TILED_SUBWIN_DW_10_linear_z_shift  0
+#define SDMA_PKT_COPY_TILED_SUBWIN_DW_10_LINEAR_Z(x) (((x) & SDMA_PKT_COPY_TILED_SUBWIN_DW_10_linear_z_mask) << SDMA_PKT_COPY_TILED_SUBWIN_DW_10_linear_z_shift)
+
+/*define for linear_pitch field*/
+#define SDMA_PKT_COPY_TILED_SUBWIN_DW_10_linear_pitch_offset 10
+#define SDMA_PKT_COPY_TILED_SUBWIN_DW_10_linear_pitch_mask   0x00003FFF
+#define SDMA_PKT_COPY_TILED_SUBWIN_DW_10_linear_pitch_shift  16
+#define SDMA_PKT_COPY_TILED_SUBWIN_DW_10_LINEAR_PITCH(x) (((x) & SDMA_PKT_COPY_TILED_SUBWIN_DW_10_linear_pitch_mask) << SDMA_PKT_COPY_TILED_SUBWIN_DW_10_linear_pitch_shift)
+
+/*define for DW_11 word*/
+/*define for linear_slice_pitch field*/
+#define SDMA_PKT_COPY_TILED_SUBWIN_DW_11_linear_slice_pitch_offset 11
+#define SDMA_PKT_COPY_TILED_SUBWIN_DW_11_linear_slice_pitch_mask   0x0FFFFFFF
+#define SDMA_PKT_COPY_TILED_SUBWIN_DW_11_linear_slice_pitch_shift  0
+#define SDMA_PKT_COPY_TILED_SUBWIN_DW_11_LINEAR_SLICE_PITCH(x) (((x) & SDMA_PKT_COPY_TILED_SUBWIN_DW_11_linear_slice_pitch_mask) << SDMA_PKT_COPY_TILED_SUBWIN_DW_11_linear_slice_pitch_shift)
+
+/*define for DW_12 word*/
+/*define for rect_x field*/
+#define SDMA_PKT_COPY_TILED_SUBWIN_DW_12_rect_x_offset 12
+#define SDMA_PKT_COPY_TILED_SUBWIN_DW_12_rect_x_mask   0x00003FFF
+#define SDMA_PKT_COPY_TILED_SUBWIN_DW_12_rect_x_shift  0
+#define SDMA_PKT_COPY_TILED_SUBWIN_DW_12_RECT_X(x) (((x) & SDMA_PKT_COPY_TILED_SUBWIN_DW_12_rect_x_mask) << SDMA_PKT_COPY_TILED_SUBWIN_DW_12_rect_x_shift)
+
+/*define for rect_y field*/
+#define SDMA_PKT_COPY_TILED_SUBWIN_DW_12_rect_y_offset 12
+#define SDMA_PKT_COPY_TILED_SUBWIN_DW_12_rect_y_mask   0x00003FFF
+#define SDMA_PKT_COPY_TILED_SUBWIN_DW_12_rect_y_shift  16
+#define SDMA_PKT_COPY_TILED_SUBWIN_DW_12_RECT_Y(x) (((x) & SDMA_PKT_COPY_TILED_SUBWIN_DW_12_rect_y_mask) << SDMA_PKT_COPY_TILED_SUBWIN_DW_12_rect_y_shift)
+
+/*define for DW_13 word*/
+/*define for rect_z field*/
+#define SDMA_PKT_COPY_TILED_SUBWIN_DW_13_rect_z_offset 13
+#define SDMA_PKT_COPY_TILED_SUBWIN_DW_13_rect_z_mask   0x000007FF
+#define SDMA_PKT_COPY_TILED_SUBWIN_DW_13_rect_z_shift  0
+#define SDMA_PKT_COPY_TILED_SUBWIN_DW_13_RECT_Z(x) (((x) & SDMA_PKT_COPY_TILED_SUBWIN_DW_13_rect_z_mask) << SDMA_PKT_COPY_TILED_SUBWIN_DW_13_rect_z_shift)
+
+/*define for linear_sw field*/
+#define SDMA_PKT_COPY_TILED_SUBWIN_DW_13_linear_sw_offset 13
+#define SDMA_PKT_COPY_TILED_SUBWIN_DW_13_linear_sw_mask   0x00000003
+#define SDMA_PKT_COPY_TILED_SUBWIN_DW_13_linear_sw_shift  16
+#define SDMA_PKT_COPY_TILED_SUBWIN_DW_13_LINEAR_SW(x) (((x) & SDMA_PKT_COPY_TILED_SUBWIN_DW_13_linear_sw_mask) << SDMA_PKT_COPY_TILED_SUBWIN_DW_13_linear_sw_shift)
+
+/*define for tile_sw field*/
+#define SDMA_PKT_COPY_TILED_SUBWIN_DW_13_tile_sw_offset 13
+#define SDMA_PKT_COPY_TILED_SUBWIN_DW_13_tile_sw_mask   0x00000003
+#define SDMA_PKT_COPY_TILED_SUBWIN_DW_13_tile_sw_shift  24
+#define SDMA_PKT_COPY_TILED_SUBWIN_DW_13_TILE_SW(x) (((x) & SDMA_PKT_COPY_TILED_SUBWIN_DW_13_tile_sw_mask) << SDMA_PKT_COPY_TILED_SUBWIN_DW_13_tile_sw_shift)
+
+
+/*
+** Definitions for SDMA_PKT_COPY_STRUCT packet
+*/
+
+/*define for HEADER word*/
+/*define for op field*/
+#define SDMA_PKT_COPY_STRUCT_HEADER_op_offset 0
+#define SDMA_PKT_COPY_STRUCT_HEADER_op_mask   0x000000FF
+#define SDMA_PKT_COPY_STRUCT_HEADER_op_shift  0
+#define SDMA_PKT_COPY_STRUCT_HEADER_OP(x) (((x) & SDMA_PKT_COPY_STRUCT_HEADER_op_mask) << SDMA_PKT_COPY_STRUCT_HEADER_op_shift)
+
+/*define for sub_op field*/
+#define SDMA_PKT_COPY_STRUCT_HEADER_sub_op_offset 0
+#define SDMA_PKT_COPY_STRUCT_HEADER_sub_op_mask   0x000000FF
+#define SDMA_PKT_COPY_STRUCT_HEADER_sub_op_shift  8
+#define SDMA_PKT_COPY_STRUCT_HEADER_SUB_OP(x) (((x) & SDMA_PKT_COPY_STRUCT_HEADER_sub_op_mask) << SDMA_PKT_COPY_STRUCT_HEADER_sub_op_shift)
+
+/*define for tmz field*/
+#define SDMA_PKT_COPY_STRUCT_HEADER_tmz_offset 0
+#define SDMA_PKT_COPY_STRUCT_HEADER_tmz_mask   0x00000001
+#define SDMA_PKT_COPY_STRUCT_HEADER_tmz_shift  18
+#define SDMA_PKT_COPY_STRUCT_HEADER_TMZ(x) (((x) & SDMA_PKT_COPY_STRUCT_HEADER_tmz_mask) << SDMA_PKT_COPY_STRUCT_HEADER_tmz_shift)
+
+/*define for detile field*/
+#define SDMA_PKT_COPY_STRUCT_HEADER_detile_offset 0
+#define SDMA_PKT_COPY_STRUCT_HEADER_detile_mask   0x00000001
+#define SDMA_PKT_COPY_STRUCT_HEADER_detile_shift  31
+#define SDMA_PKT_COPY_STRUCT_HEADER_DETILE(x) (((x) & SDMA_PKT_COPY_STRUCT_HEADER_detile_mask) << SDMA_PKT_COPY_STRUCT_HEADER_detile_shift)
+
+/*define for SB_ADDR_LO word*/
+/*define for sb_addr_31_0 field*/
+#define SDMA_PKT_COPY_STRUCT_SB_ADDR_LO_sb_addr_31_0_offset 1
+#define SDMA_PKT_COPY_STRUCT_SB_ADDR_LO_sb_addr_31_0_mask   0xFFFFFFFF
+#define SDMA_PKT_COPY_STRUCT_SB_ADDR_LO_sb_addr_31_0_shift  0
+#define SDMA_PKT_COPY_STRUCT_SB_ADDR_LO_SB_ADDR_31_0(x) (((x) & SDMA_PKT_COPY_STRUCT_SB_ADDR_LO_sb_addr_31_0_mask) << SDMA_PKT_COPY_STRUCT_SB_ADDR_LO_sb_addr_31_0_shift)
+
+/*define for SB_ADDR_HI word*/
+/*define for sb_addr_63_32 field*/
+#define SDMA_PKT_COPY_STRUCT_SB_ADDR_HI_sb_addr_63_32_offset 2
+#define SDMA_PKT_COPY_STRUCT_SB_ADDR_HI_sb_addr_63_32_mask   0xFFFFFFFF
+#define SDMA_PKT_COPY_STRUCT_SB_ADDR_HI_sb_addr_63_32_shift  0
+#define SDMA_PKT_COPY_STRUCT_SB_ADDR_HI_SB_ADDR_63_32(x) (((x) & SDMA_PKT_COPY_STRUCT_SB_ADDR_HI_sb_addr_63_32_mask) << SDMA_PKT_COPY_STRUCT_SB_ADDR_HI_sb_addr_63_32_shift)
+
+/*define for START_INDEX word*/
+/*define for start_index field*/
+#define SDMA_PKT_COPY_STRUCT_START_INDEX_start_index_offset 3
+#define SDMA_PKT_COPY_STRUCT_START_INDEX_start_index_mask   0xFFFFFFFF
+#define SDMA_PKT_COPY_STRUCT_START_INDEX_start_index_shift  0
+#define SDMA_PKT_COPY_STRUCT_START_INDEX_START_INDEX(x) (((x) & SDMA_PKT_COPY_STRUCT_START_INDEX_start_index_mask) << SDMA_PKT_COPY_STRUCT_START_INDEX_start_index_shift)
+
+/*define for COUNT word*/
+/*define for count field*/
+#define SDMA_PKT_COPY_STRUCT_COUNT_count_offset 4
+#define SDMA_PKT_COPY_STRUCT_COUNT_count_mask   0xFFFFFFFF
+#define SDMA_PKT_COPY_STRUCT_COUNT_count_shift  0
+#define SDMA_PKT_COPY_STRUCT_COUNT_COUNT(x) (((x) & SDMA_PKT_COPY_STRUCT_COUNT_count_mask) << SDMA_PKT_COPY_STRUCT_COUNT_count_shift)
+
+/*define for DW_5 word*/
+/*define for stride field*/
+#define SDMA_PKT_COPY_STRUCT_DW_5_stride_offset 5
+#define SDMA_PKT_COPY_STRUCT_DW_5_stride_mask   0x000007FF
+#define SDMA_PKT_COPY_STRUCT_DW_5_stride_shift  0
+#define SDMA_PKT_COPY_STRUCT_DW_5_STRIDE(x) (((x) & SDMA_PKT_COPY_STRUCT_DW_5_stride_mask) << SDMA_PKT_COPY_STRUCT_DW_5_stride_shift)
+
+/*define for linear_sw field*/
+#define SDMA_PKT_COPY_STRUCT_DW_5_linear_sw_offset 5
+#define SDMA_PKT_COPY_STRUCT_DW_5_linear_sw_mask   0x00000003
+#define SDMA_PKT_COPY_STRUCT_DW_5_linear_sw_shift  16
+#define SDMA_PKT_COPY_STRUCT_DW_5_LINEAR_SW(x) (((x) & SDMA_PKT_COPY_STRUCT_DW_5_linear_sw_mask) << SDMA_PKT_COPY_STRUCT_DW_5_linear_sw_shift)
+
+/*define for struct_sw field*/
+#define SDMA_PKT_COPY_STRUCT_DW_5_struct_sw_offset 5
+#define SDMA_PKT_COPY_STRUCT_DW_5_struct_sw_mask   0x00000003
+#define SDMA_PKT_COPY_STRUCT_DW_5_struct_sw_shift  24
+#define SDMA_PKT_COPY_STRUCT_DW_5_STRUCT_SW(x) (((x) & SDMA_PKT_COPY_STRUCT_DW_5_struct_sw_mask) << SDMA_PKT_COPY_STRUCT_DW_5_struct_sw_shift)
+
+/*define for LINEAR_ADDR_LO word*/
+/*define for linear_addr_31_0 field*/
+#define SDMA_PKT_COPY_STRUCT_LINEAR_ADDR_LO_linear_addr_31_0_offset 6
+#define SDMA_PKT_COPY_STRUCT_LINEAR_ADDR_LO_linear_addr_31_0_mask   0xFFFFFFFF
+#define SDMA_PKT_COPY_STRUCT_LINEAR_ADDR_LO_linear_addr_31_0_shift  0
+#define SDMA_PKT_COPY_STRUCT_LINEAR_ADDR_LO_LINEAR_ADDR_31_0(x) (((x) & SDMA_PKT_COPY_STRUCT_LINEAR_ADDR_LO_linear_addr_31_0_mask) << SDMA_PKT_COPY_STRUCT_LINEAR_ADDR_LO_linear_addr_31_0_shift)
+
+/*define for LINEAR_ADDR_HI word*/
+/*define for linear_addr_63_32 field*/
+#define SDMA_PKT_COPY_STRUCT_LINEAR_ADDR_HI_linear_addr_63_32_offset 7
+#define SDMA_PKT_COPY_STRUCT_LINEAR_ADDR_HI_linear_addr_63_32_mask   0xFFFFFFFF
+#define SDMA_PKT_COPY_STRUCT_LINEAR_ADDR_HI_linear_addr_63_32_shift  0
+#define SDMA_PKT_COPY_STRUCT_LINEAR_ADDR_HI_LINEAR_ADDR_63_32(x) (((x) & SDMA_PKT_COPY_STRUCT_LINEAR_ADDR_HI_linear_addr_63_32_mask) << SDMA_PKT_COPY_STRUCT_LINEAR_ADDR_HI_linear_addr_63_32_shift)
+
+
+/*
+** Definitions for SDMA_PKT_WRITE_UNTILED packet
+*/
+
+/*define for HEADER word*/
+/*define for op field*/
+#define SDMA_PKT_WRITE_UNTILED_HEADER_op_offset 0
+#define SDMA_PKT_WRITE_UNTILED_HEADER_op_mask   0x000000FF
+#define SDMA_PKT_WRITE_UNTILED_HEADER_op_shift  0
+#define SDMA_PKT_WRITE_UNTILED_HEADER_OP(x) (((x) & SDMA_PKT_WRITE_UNTILED_HEADER_op_mask) << SDMA_PKT_WRITE_UNTILED_HEADER_op_shift)
+
+/*define for sub_op field*/
+#define SDMA_PKT_WRITE_UNTILED_HEADER_sub_op_offset 0
+#define SDMA_PKT_WRITE_UNTILED_HEADER_sub_op_mask   0x000000FF
+#define SDMA_PKT_WRITE_UNTILED_HEADER_sub_op_shift  8
+#define SDMA_PKT_WRITE_UNTILED_HEADER_SUB_OP(x) (((x) & SDMA_PKT_WRITE_UNTILED_HEADER_sub_op_mask) << SDMA_PKT_WRITE_UNTILED_HEADER_sub_op_shift)
+
+/*define for encrypt field*/
+#define SDMA_PKT_WRITE_UNTILED_HEADER_encrypt_offset 0
+#define SDMA_PKT_WRITE_UNTILED_HEADER_encrypt_mask   0x00000001
+#define SDMA_PKT_WRITE_UNTILED_HEADER_encrypt_shift  16
+#define SDMA_PKT_WRITE_UNTILED_HEADER_ENCRYPT(x) (((x) & SDMA_PKT_WRITE_UNTILED_HEADER_encrypt_mask) << SDMA_PKT_WRITE_UNTILED_HEADER_encrypt_shift)
+
+/*define for tmz field*/
+#define SDMA_PKT_WRITE_UNTILED_HEADER_tmz_offset 0
+#define SDMA_PKT_WRITE_UNTILED_HEADER_tmz_mask   0x00000001
+#define SDMA_PKT_WRITE_UNTILED_HEADER_tmz_shift  18
+#define SDMA_PKT_WRITE_UNTILED_HEADER_TMZ(x) (((x) & SDMA_PKT_WRITE_UNTILED_HEADER_tmz_mask) << SDMA_PKT_WRITE_UNTILED_HEADER_tmz_shift)
+
+/*define for DST_ADDR_LO word*/
+/*define for dst_addr_31_0 field*/
+#define SDMA_PKT_WRITE_UNTILED_DST_ADDR_LO_dst_addr_31_0_offset 1
+#define SDMA_PKT_WRITE_UNTILED_DST_ADDR_LO_dst_addr_31_0_mask   0xFFFFFFFF
+#define SDMA_PKT_WRITE_UNTILED_DST_ADDR_LO_dst_addr_31_0_shift  0
+#define SDMA_PKT_WRITE_UNTILED_DST_ADDR_LO_DST_ADDR_31_0(x) (((x) & SDMA_PKT_WRITE_UNTILED_DST_ADDR_LO_dst_addr_31_0_mask) << SDMA_PKT_WRITE_UNTILED_DST_ADDR_LO_dst_addr_31_0_shift)
+
+/*define for DST_ADDR_HI word*/
+/*define for dst_addr_63_32 field*/
+#define SDMA_PKT_WRITE_UNTILED_DST_ADDR_HI_dst_addr_63_32_offset 2
+#define SDMA_PKT_WRITE_UNTILED_DST_ADDR_HI_dst_addr_63_32_mask   0xFFFFFFFF
+#define SDMA_PKT_WRITE_UNTILED_DST_ADDR_HI_dst_addr_63_32_shift  0
+#define SDMA_PKT_WRITE_UNTILED_DST_ADDR_HI_DST_ADDR_63_32(x) (((x) & SDMA_PKT_WRITE_UNTILED_DST_ADDR_HI_dst_addr_63_32_mask) << SDMA_PKT_WRITE_UNTILED_DST_ADDR_HI_dst_addr_63_32_shift)
+
+/*define for DW_3 word*/
+/*define for count field*/
+#define SDMA_PKT_WRITE_UNTILED_DW_3_count_offset 3
+#define SDMA_PKT_WRITE_UNTILED_DW_3_count_mask   0x000FFFFF
+#define SDMA_PKT_WRITE_UNTILED_DW_3_count_shift  0
+#define SDMA_PKT_WRITE_UNTILED_DW_3_COUNT(x) (((x) & SDMA_PKT_WRITE_UNTILED_DW_3_count_mask) << SDMA_PKT_WRITE_UNTILED_DW_3_count_shift)
+
+/*define for sw field*/
+#define SDMA_PKT_WRITE_UNTILED_DW_3_sw_offset 3
+#define SDMA_PKT_WRITE_UNTILED_DW_3_sw_mask   0x00000003
+#define SDMA_PKT_WRITE_UNTILED_DW_3_sw_shift  24
+#define SDMA_PKT_WRITE_UNTILED_DW_3_SW(x) (((x) & SDMA_PKT_WRITE_UNTILED_DW_3_sw_mask) << SDMA_PKT_WRITE_UNTILED_DW_3_sw_shift)
+
+/*define for DATA0 word*/
+/*define for data0 field*/
+#define SDMA_PKT_WRITE_UNTILED_DATA0_data0_offset 4
+#define SDMA_PKT_WRITE_UNTILED_DATA0_data0_mask   0xFFFFFFFF
+#define SDMA_PKT_WRITE_UNTILED_DATA0_data0_shift  0
+#define SDMA_PKT_WRITE_UNTILED_DATA0_DATA0(x) (((x) & SDMA_PKT_WRITE_UNTILED_DATA0_data0_mask) << SDMA_PKT_WRITE_UNTILED_DATA0_data0_shift)
+
+
+/*
+** Definitions for SDMA_PKT_WRITE_TILED packet
+*/
+
+/*define for HEADER word*/
+/*define for op field*/
+#define SDMA_PKT_WRITE_TILED_HEADER_op_offset 0
+#define SDMA_PKT_WRITE_TILED_HEADER_op_mask   0x000000FF
+#define SDMA_PKT_WRITE_TILED_HEADER_op_shift  0
+#define SDMA_PKT_WRITE_TILED_HEADER_OP(x) (((x) & SDMA_PKT_WRITE_TILED_HEADER_op_mask) << SDMA_PKT_WRITE_TILED_HEADER_op_shift)
+
+/*define for sub_op field*/
+#define SDMA_PKT_WRITE_TILED_HEADER_sub_op_offset 0
+#define SDMA_PKT_WRITE_TILED_HEADER_sub_op_mask   0x000000FF
+#define SDMA_PKT_WRITE_TILED_HEADER_sub_op_shift  8
+#define SDMA_PKT_WRITE_TILED_HEADER_SUB_OP(x) (((x) & SDMA_PKT_WRITE_TILED_HEADER_sub_op_mask) << SDMA_PKT_WRITE_TILED_HEADER_sub_op_shift)
+
+/*define for encrypt field*/
+#define SDMA_PKT_WRITE_TILED_HEADER_encrypt_offset 0
+#define SDMA_PKT_WRITE_TILED_HEADER_encrypt_mask   0x00000001
+#define SDMA_PKT_WRITE_TILED_HEADER_encrypt_shift  16
+#define SDMA_PKT_WRITE_TILED_HEADER_ENCRYPT(x) (((x) & SDMA_PKT_WRITE_TILED_HEADER_encrypt_mask) << SDMA_PKT_WRITE_TILED_HEADER_encrypt_shift)
+
+/*define for tmz field*/
+#define SDMA_PKT_WRITE_TILED_HEADER_tmz_offset 0
+#define SDMA_PKT_WRITE_TILED_HEADER_tmz_mask   0x00000001
+#define SDMA_PKT_WRITE_TILED_HEADER_tmz_shift  18
+#define SDMA_PKT_WRITE_TILED_HEADER_TMZ(x) (((x) & SDMA_PKT_WRITE_TILED_HEADER_tmz_mask) << SDMA_PKT_WRITE_TILED_HEADER_tmz_shift)
+
+/*define for mip_max field*/
+#define SDMA_PKT_WRITE_TILED_HEADER_mip_max_offset 0
+#define SDMA_PKT_WRITE_TILED_HEADER_mip_max_mask   0x0000000F
+#define SDMA_PKT_WRITE_TILED_HEADER_mip_max_shift  20
+#define SDMA_PKT_WRITE_TILED_HEADER_MIP_MAX(x) (((x) & SDMA_PKT_WRITE_TILED_HEADER_mip_max_mask) << SDMA_PKT_WRITE_TILED_HEADER_mip_max_shift)
+
+/*define for DST_ADDR_LO word*/
+/*define for dst_addr_31_0 field*/
+#define SDMA_PKT_WRITE_TILED_DST_ADDR_LO_dst_addr_31_0_offset 1
+#define SDMA_PKT_WRITE_TILED_DST_ADDR_LO_dst_addr_31_0_mask   0xFFFFFFFF
+#define SDMA_PKT_WRITE_TILED_DST_ADDR_LO_dst_addr_31_0_shift  0
+#define SDMA_PKT_WRITE_TILED_DST_ADDR_LO_DST_ADDR_31_0(x) (((x) & SDMA_PKT_WRITE_TILED_DST_ADDR_LO_dst_addr_31_0_mask) << SDMA_PKT_WRITE_TILED_DST_ADDR_LO_dst_addr_31_0_shift)
+
+/*define for DST_ADDR_HI word*/
+/*define for dst_addr_63_32 field*/
+#define SDMA_PKT_WRITE_TILED_DST_ADDR_HI_dst_addr_63_32_offset 2
+#define SDMA_PKT_WRITE_TILED_DST_ADDR_HI_dst_addr_63_32_mask   0xFFFFFFFF
+#define SDMA_PKT_WRITE_TILED_DST_ADDR_HI_dst_addr_63_32_shift  0
+#define SDMA_PKT_WRITE_TILED_DST_ADDR_HI_DST_ADDR_63_32(x) (((x) & SDMA_PKT_WRITE_TILED_DST_ADDR_HI_dst_addr_63_32_mask) << SDMA_PKT_WRITE_TILED_DST_ADDR_HI_dst_addr_63_32_shift)
+
+/*define for DW_3 word*/
+/*define for width field*/
+#define SDMA_PKT_WRITE_TILED_DW_3_width_offset 3
+#define SDMA_PKT_WRITE_TILED_DW_3_width_mask   0x00003FFF
+#define SDMA_PKT_WRITE_TILED_DW_3_width_shift  0
+#define SDMA_PKT_WRITE_TILED_DW_3_WIDTH(x) (((x) & SDMA_PKT_WRITE_TILED_DW_3_width_mask) << SDMA_PKT_WRITE_TILED_DW_3_width_shift)
+
+/*define for DW_4 word*/
+/*define for height field*/
+#define SDMA_PKT_WRITE_TILED_DW_4_height_offset 4
+#define SDMA_PKT_WRITE_TILED_DW_4_height_mask   0x00003FFF
+#define SDMA_PKT_WRITE_TILED_DW_4_height_shift  0
+#define SDMA_PKT_WRITE_TILED_DW_4_HEIGHT(x) (((x) & SDMA_PKT_WRITE_TILED_DW_4_height_mask) << SDMA_PKT_WRITE_TILED_DW_4_height_shift)
+
+/*define for depth field*/
+#define SDMA_PKT_WRITE_TILED_DW_4_depth_offset 4
+#define SDMA_PKT_WRITE_TILED_DW_4_depth_mask   0x000007FF
+#define SDMA_PKT_WRITE_TILED_DW_4_depth_shift  16
+#define SDMA_PKT_WRITE_TILED_DW_4_DEPTH(x) (((x) & SDMA_PKT_WRITE_TILED_DW_4_depth_mask) << SDMA_PKT_WRITE_TILED_DW_4_depth_shift)
+
+/*define for DW_5 word*/
+/*define for element_size field*/
+#define SDMA_PKT_WRITE_TILED_DW_5_element_size_offset 5
+#define SDMA_PKT_WRITE_TILED_DW_5_element_size_mask   0x00000007
+#define SDMA_PKT_WRITE_TILED_DW_5_element_size_shift  0
+#define SDMA_PKT_WRITE_TILED_DW_5_ELEMENT_SIZE(x) (((x) & SDMA_PKT_WRITE_TILED_DW_5_element_size_mask) << SDMA_PKT_WRITE_TILED_DW_5_element_size_shift)
+
+/*define for swizzle_mode field*/
+#define SDMA_PKT_WRITE_TILED_DW_5_swizzle_mode_offset 5
+#define SDMA_PKT_WRITE_TILED_DW_5_swizzle_mode_mask   0x0000001F
+#define SDMA_PKT_WRITE_TILED_DW_5_swizzle_mode_shift  3
+#define SDMA_PKT_WRITE_TILED_DW_5_SWIZZLE_MODE(x) (((x) & SDMA_PKT_WRITE_TILED_DW_5_swizzle_mode_mask) << SDMA_PKT_WRITE_TILED_DW_5_swizzle_mode_shift)
+
+/*define for dimension field*/
+#define SDMA_PKT_WRITE_TILED_DW_5_dimension_offset 5
+#define SDMA_PKT_WRITE_TILED_DW_5_dimension_mask   0x00000003
+#define SDMA_PKT_WRITE_TILED_DW_5_dimension_shift  9
+#define SDMA_PKT_WRITE_TILED_DW_5_DIMENSION(x) (((x) & SDMA_PKT_WRITE_TILED_DW_5_dimension_mask) << SDMA_PKT_WRITE_TILED_DW_5_dimension_shift)
+
+/*define for epitch field*/
+#define SDMA_PKT_WRITE_TILED_DW_5_epitch_offset 5
+#define SDMA_PKT_WRITE_TILED_DW_5_epitch_mask   0x0000FFFF
+#define SDMA_PKT_WRITE_TILED_DW_5_epitch_shift  16
+#define SDMA_PKT_WRITE_TILED_DW_5_EPITCH(x) (((x) & SDMA_PKT_WRITE_TILED_DW_5_epitch_mask) << SDMA_PKT_WRITE_TILED_DW_5_epitch_shift)
+
+/*define for DW_6 word*/
+/*define for x field*/
+#define SDMA_PKT_WRITE_TILED_DW_6_x_offset 6
+#define SDMA_PKT_WRITE_TILED_DW_6_x_mask   0x00003FFF
+#define SDMA_PKT_WRITE_TILED_DW_6_x_shift  0
+#define SDMA_PKT_WRITE_TILED_DW_6_X(x) (((x) & SDMA_PKT_WRITE_TILED_DW_6_x_mask) << SDMA_PKT_WRITE_TILED_DW_6_x_shift)
+
+/*define for y field*/
+#define SDMA_PKT_WRITE_TILED_DW_6_y_offset 6
+#define SDMA_PKT_WRITE_TILED_DW_6_y_mask   0x00003FFF
+#define SDMA_PKT_WRITE_TILED_DW_6_y_shift  16
+#define SDMA_PKT_WRITE_TILED_DW_6_Y(x) (((x) & SDMA_PKT_WRITE_TILED_DW_6_y_mask) << SDMA_PKT_WRITE_TILED_DW_6_y_shift)
+
+/*define for DW_7 word*/
+/*define for z field*/
+#define SDMA_PKT_WRITE_TILED_DW_7_z_offset 7
+#define SDMA_PKT_WRITE_TILED_DW_7_z_mask   0x000007FF
+#define SDMA_PKT_WRITE_TILED_DW_7_z_shift  0
+#define SDMA_PKT_WRITE_TILED_DW_7_Z(x) (((x) & SDMA_PKT_WRITE_TILED_DW_7_z_mask) << SDMA_PKT_WRITE_TILED_DW_7_z_shift)
+
+/*define for sw field*/
+#define SDMA_PKT_WRITE_TILED_DW_7_sw_offset 7
+#define SDMA_PKT_WRITE_TILED_DW_7_sw_mask   0x00000003
+#define SDMA_PKT_WRITE_TILED_DW_7_sw_shift  24
+#define SDMA_PKT_WRITE_TILED_DW_7_SW(x) (((x) & SDMA_PKT_WRITE_TILED_DW_7_sw_mask) << SDMA_PKT_WRITE_TILED_DW_7_sw_shift)
+
+/*define for COUNT word*/
+/*define for count field*/
+#define SDMA_PKT_WRITE_TILED_COUNT_count_offset 8
+#define SDMA_PKT_WRITE_TILED_COUNT_count_mask   0x000FFFFF
+#define SDMA_PKT_WRITE_TILED_COUNT_count_shift  0
+#define SDMA_PKT_WRITE_TILED_COUNT_COUNT(x) (((x) & SDMA_PKT_WRITE_TILED_COUNT_count_mask) << SDMA_PKT_WRITE_TILED_COUNT_count_shift)
+
+/*define for DATA0 word*/
+/*define for data0 field*/
+#define SDMA_PKT_WRITE_TILED_DATA0_data0_offset 9
+#define SDMA_PKT_WRITE_TILED_DATA0_data0_mask   0xFFFFFFFF
+#define SDMA_PKT_WRITE_TILED_DATA0_data0_shift  0
+#define SDMA_PKT_WRITE_TILED_DATA0_DATA0(x) (((x) & SDMA_PKT_WRITE_TILED_DATA0_data0_mask) << SDMA_PKT_WRITE_TILED_DATA0_data0_shift)
+
+
+/*
+** Definitions for SDMA_PKT_PTEPDE_COPY packet
+*/
+
+/*define for HEADER word*/
+/*define for op field*/
+#define SDMA_PKT_PTEPDE_COPY_HEADER_op_offset 0
+#define SDMA_PKT_PTEPDE_COPY_HEADER_op_mask   0x000000FF
+#define SDMA_PKT_PTEPDE_COPY_HEADER_op_shift  0
+#define SDMA_PKT_PTEPDE_COPY_HEADER_OP(x) (((x) & SDMA_PKT_PTEPDE_COPY_HEADER_op_mask) << SDMA_PKT_PTEPDE_COPY_HEADER_op_shift)
+
+/*define for sub_op field*/
+#define SDMA_PKT_PTEPDE_COPY_HEADER_sub_op_offset 0
+#define SDMA_PKT_PTEPDE_COPY_HEADER_sub_op_mask   0x000000FF
+#define SDMA_PKT_PTEPDE_COPY_HEADER_sub_op_shift  8
+#define SDMA_PKT_PTEPDE_COPY_HEADER_SUB_OP(x) (((x) & SDMA_PKT_PTEPDE_COPY_HEADER_sub_op_mask) << SDMA_PKT_PTEPDE_COPY_HEADER_sub_op_shift)
+
+/*define for ptepde_op field*/
+#define SDMA_PKT_PTEPDE_COPY_HEADER_ptepde_op_offset 0
+#define SDMA_PKT_PTEPDE_COPY_HEADER_ptepde_op_mask   0x00000001
+#define SDMA_PKT_PTEPDE_COPY_HEADER_ptepde_op_shift  31
+#define SDMA_PKT_PTEPDE_COPY_HEADER_PTEPDE_OP(x) (((x) & SDMA_PKT_PTEPDE_COPY_HEADER_ptepde_op_mask) << SDMA_PKT_PTEPDE_COPY_HEADER_ptepde_op_shift)
+
+/*define for SRC_ADDR_LO word*/
+/*define for src_addr_31_0 field*/
+#define SDMA_PKT_PTEPDE_COPY_SRC_ADDR_LO_src_addr_31_0_offset 1
+#define SDMA_PKT_PTEPDE_COPY_SRC_ADDR_LO_src_addr_31_0_mask   0xFFFFFFFF
+#define SDMA_PKT_PTEPDE_COPY_SRC_ADDR_LO_src_addr_31_0_shift  0
+#define SDMA_PKT_PTEPDE_COPY_SRC_ADDR_LO_SRC_ADDR_31_0(x) (((x) & SDMA_PKT_PTEPDE_COPY_SRC_ADDR_LO_src_addr_31_0_mask) << SDMA_PKT_PTEPDE_COPY_SRC_ADDR_LO_src_addr_31_0_shift)
+
+/*define for SRC_ADDR_HI word*/
+/*define for src_addr_63_32 field*/
+#define SDMA_PKT_PTEPDE_COPY_SRC_ADDR_HI_src_addr_63_32_offset 2
+#define SDMA_PKT_PTEPDE_COPY_SRC_ADDR_HI_src_addr_63_32_mask   0xFFFFFFFF
+#define SDMA_PKT_PTEPDE_COPY_SRC_ADDR_HI_src_addr_63_32_shift  0
+#define SDMA_PKT_PTEPDE_COPY_SRC_ADDR_HI_SRC_ADDR_63_32(x) (((x) & SDMA_PKT_PTEPDE_COPY_SRC_ADDR_HI_src_addr_63_32_mask) << SDMA_PKT_PTEPDE_COPY_SRC_ADDR_HI_src_addr_63_32_shift)
+
+/*define for DST_ADDR_LO word*/
+/*define for dst_addr_31_0 field*/
+#define SDMA_PKT_PTEPDE_COPY_DST_ADDR_LO_dst_addr_31_0_offset 3
+#define SDMA_PKT_PTEPDE_COPY_DST_ADDR_LO_dst_addr_31_0_mask   0xFFFFFFFF
+#define SDMA_PKT_PTEPDE_COPY_DST_ADDR_LO_dst_addr_31_0_shift  0
+#define SDMA_PKT_PTEPDE_COPY_DST_ADDR_LO_DST_ADDR_31_0(x) (((x) & SDMA_PKT_PTEPDE_COPY_DST_ADDR_LO_dst_addr_31_0_mask) << SDMA_PKT_PTEPDE_COPY_DST_ADDR_LO_dst_addr_31_0_shift)
+
+/*define for DST_ADDR_HI word*/
+/*define for dst_addr_63_32 field*/
+#define SDMA_PKT_PTEPDE_COPY_DST_ADDR_HI_dst_addr_63_32_offset 4
+#define SDMA_PKT_PTEPDE_COPY_DST_ADDR_HI_dst_addr_63_32_mask   0xFFFFFFFF
+#define SDMA_PKT_PTEPDE_COPY_DST_ADDR_HI_dst_addr_63_32_shift  0
+#define SDMA_PKT_PTEPDE_COPY_DST_ADDR_HI_DST_ADDR_63_32(x) (((x) & SDMA_PKT_PTEPDE_COPY_DST_ADDR_HI_dst_addr_63_32_mask) << SDMA_PKT_PTEPDE_COPY_DST_ADDR_HI_dst_addr_63_32_shift)
+
+/*define for MASK_DW0 word*/
+/*define for mask_dw0 field*/
+#define SDMA_PKT_PTEPDE_COPY_MASK_DW0_mask_dw0_offset 5
+#define SDMA_PKT_PTEPDE_COPY_MASK_DW0_mask_dw0_mask   0xFFFFFFFF
+#define SDMA_PKT_PTEPDE_COPY_MASK_DW0_mask_dw0_shift  0
+#define SDMA_PKT_PTEPDE_COPY_MASK_DW0_MASK_DW0(x) (((x) & SDMA_PKT_PTEPDE_COPY_MASK_DW0_mask_dw0_mask) << SDMA_PKT_PTEPDE_COPY_MASK_DW0_mask_dw0_shift)
+
+/*define for MASK_DW1 word*/
+/*define for mask_dw1 field*/
+#define SDMA_PKT_PTEPDE_COPY_MASK_DW1_mask_dw1_offset 6
+#define SDMA_PKT_PTEPDE_COPY_MASK_DW1_mask_dw1_mask   0xFFFFFFFF
+#define SDMA_PKT_PTEPDE_COPY_MASK_DW1_mask_dw1_shift  0
+#define SDMA_PKT_PTEPDE_COPY_MASK_DW1_MASK_DW1(x) (((x) & SDMA_PKT_PTEPDE_COPY_MASK_DW1_mask_dw1_mask) << SDMA_PKT_PTEPDE_COPY_MASK_DW1_mask_dw1_shift)
+
+/*define for COUNT word*/
+/*define for count field*/
+#define SDMA_PKT_PTEPDE_COPY_COUNT_count_offset 7
+#define SDMA_PKT_PTEPDE_COPY_COUNT_count_mask   0x0007FFFF
+#define SDMA_PKT_PTEPDE_COPY_COUNT_count_shift  0
+#define SDMA_PKT_PTEPDE_COPY_COUNT_COUNT(x) (((x) & SDMA_PKT_PTEPDE_COPY_COUNT_count_mask) << SDMA_PKT_PTEPDE_COPY_COUNT_count_shift)
+
+
+/*
+** Definitions for SDMA_PKT_PTEPDE_COPY_BACKWARDS packet
+*/
+
+/*define for HEADER word*/
+/*define for op field*/
+#define SDMA_PKT_PTEPDE_COPY_BACKWARDS_HEADER_op_offset 0
+#define SDMA_PKT_PTEPDE_COPY_BACKWARDS_HEADER_op_mask   0x000000FF
+#define SDMA_PKT_PTEPDE_COPY_BACKWARDS_HEADER_op_shift  0
+#define SDMA_PKT_PTEPDE_COPY_BACKWARDS_HEADER_OP(x) (((x) & SDMA_PKT_PTEPDE_COPY_BACKWARDS_HEADER_op_mask) << SDMA_PKT_PTEPDE_COPY_BACKWARDS_HEADER_op_shift)
+
+/*define for sub_op field*/
+#define SDMA_PKT_PTEPDE_COPY_BACKWARDS_HEADER_sub_op_offset 0
+#define SDMA_PKT_PTEPDE_COPY_BACKWARDS_HEADER_sub_op_mask   0x000000FF
+#define SDMA_PKT_PTEPDE_COPY_BACKWARDS_HEADER_sub_op_shift  8
+#define SDMA_PKT_PTEPDE_COPY_BACKWARDS_HEADER_SUB_OP(x) (((x) & SDMA_PKT_PTEPDE_COPY_BACKWARDS_HEADER_sub_op_mask) << SDMA_PKT_PTEPDE_COPY_BACKWARDS_HEADER_sub_op_shift)
+
+/*define for pte_size field*/
+#define SDMA_PKT_PTEPDE_COPY_BACKWARDS_HEADER_pte_size_offset 0
+#define SDMA_PKT_PTEPDE_COPY_BACKWARDS_HEADER_pte_size_mask   0x00000003
+#define SDMA_PKT_PTEPDE_COPY_BACKWARDS_HEADER_pte_size_shift  28
+#define SDMA_PKT_PTEPDE_COPY_BACKWARDS_HEADER_PTE_SIZE(x) (((x) & SDMA_PKT_PTEPDE_COPY_BACKWARDS_HEADER_pte_size_mask) << SDMA_PKT_PTEPDE_COPY_BACKWARDS_HEADER_pte_size_shift)
+
+/*define for direction field*/
+#define SDMA_PKT_PTEPDE_COPY_BACKWARDS_HEADER_direction_offset 0
+#define SDMA_PKT_PTEPDE_COPY_BACKWARDS_HEADER_direction_mask   0x00000001
+#define SDMA_PKT_PTEPDE_COPY_BACKWARDS_HEADER_direction_shift  30
+#define SDMA_PKT_PTEPDE_COPY_BACKWARDS_HEADER_DIRECTION(x) (((x) & SDMA_PKT_PTEPDE_COPY_BACKWARDS_HEADER_direction_mask) << SDMA_PKT_PTEPDE_COPY_BACKWARDS_HEADER_direction_shift)
+
+/*define for ptepde_op field*/
+#define SDMA_PKT_PTEPDE_COPY_BACKWARDS_HEADER_ptepde_op_offset 0
+#define SDMA_PKT_PTEPDE_COPY_BACKWARDS_HEADER_ptepde_op_mask   0x00000001
+#define SDMA_PKT_PTEPDE_COPY_BACKWARDS_HEADER_ptepde_op_shift  31
+#define SDMA_PKT_PTEPDE_COPY_BACKWARDS_HEADER_PTEPDE_OP(x) (((x) & SDMA_PKT_PTEPDE_COPY_BACKWARDS_HEADER_ptepde_op_mask) << SDMA_PKT_PTEPDE_COPY_BACKWARDS_HEADER_ptepde_op_shift)
+
+/*define for SRC_ADDR_LO word*/
+/*define for src_addr_31_0 field*/
+#define SDMA_PKT_PTEPDE_COPY_BACKWARDS_SRC_ADDR_LO_src_addr_31_0_offset 1
+#define SDMA_PKT_PTEPDE_COPY_BACKWARDS_SRC_ADDR_LO_src_addr_31_0_mask   0xFFFFFFFF
+#define SDMA_PKT_PTEPDE_COPY_BACKWARDS_SRC_ADDR_LO_src_addr_31_0_shift  0
+#define SDMA_PKT_PTEPDE_COPY_BACKWARDS_SRC_ADDR_LO_SRC_ADDR_31_0(x) (((x) & SDMA_PKT_PTEPDE_COPY_BACKWARDS_SRC_ADDR_LO_src_addr_31_0_mask) << SDMA_PKT_PTEPDE_COPY_BACKWARDS_SRC_ADDR_LO_src_addr_31_0_shift)
+
+/*define for SRC_ADDR_HI word*/
+/*define for src_addr_63_32 field*/
+#define SDMA_PKT_PTEPDE_COPY_BACKWARDS_SRC_ADDR_HI_src_addr_63_32_offset 2
+#define SDMA_PKT_PTEPDE_COPY_BACKWARDS_SRC_ADDR_HI_src_addr_63_32_mask   0xFFFFFFFF
+#define SDMA_PKT_PTEPDE_COPY_BACKWARDS_SRC_ADDR_HI_src_addr_63_32_shift  0
+#define SDMA_PKT_PTEPDE_COPY_BACKWARDS_SRC_ADDR_HI_SRC_ADDR_63_32(x) (((x) & SDMA_PKT_PTEPDE_COPY_BACKWARDS_SRC_ADDR_HI_src_addr_63_32_mask) << SDMA_PKT_PTEPDE_COPY_BACKWARDS_SRC_ADDR_HI_src_addr_63_32_shift)
+
+/*define for DST_ADDR_LO word*/
+/*define for dst_addr_31_0 field*/
+#define SDMA_PKT_PTEPDE_COPY_BACKWARDS_DST_ADDR_LO_dst_addr_31_0_offset 3
+#define SDMA_PKT_PTEPDE_COPY_BACKWARDS_DST_ADDR_LO_dst_addr_31_0_mask   0xFFFFFFFF
+#define SDMA_PKT_PTEPDE_COPY_BACKWARDS_DST_ADDR_LO_dst_addr_31_0_shift  0
+#define SDMA_PKT_PTEPDE_COPY_BACKWARDS_DST_ADDR_LO_DST_ADDR_31_0(x) (((x) & SDMA_PKT_PTEPDE_COPY_BACKWARDS_DST_ADDR_LO_dst_addr_31_0_mask) << SDMA_PKT_PTEPDE_COPY_BACKWARDS_DST_ADDR_LO_dst_addr_31_0_shift)
+
+/*define for DST_ADDR_HI word*/
+/*define for dst_addr_63_32 field*/
+#define SDMA_PKT_PTEPDE_COPY_BACKWARDS_DST_ADDR_HI_dst_addr_63_32_offset 4
+#define SDMA_PKT_PTEPDE_COPY_BACKWARDS_DST_ADDR_HI_dst_addr_63_32_mask   0xFFFFFFFF
+#define SDMA_PKT_PTEPDE_COPY_BACKWARDS_DST_ADDR_HI_dst_addr_63_32_shift  0
+#define SDMA_PKT_PTEPDE_COPY_BACKWARDS_DST_ADDR_HI_DST_ADDR_63_32(x) (((x) & SDMA_PKT_PTEPDE_COPY_BACKWARDS_DST_ADDR_HI_dst_addr_63_32_mask) << SDMA_PKT_PTEPDE_COPY_BACKWARDS_DST_ADDR_HI_dst_addr_63_32_shift)
+
+/*define for MASK_BIT_FOR_DW word*/
+/*define for mask_first_xfer field*/
+#define SDMA_PKT_PTEPDE_COPY_BACKWARDS_MASK_BIT_FOR_DW_mask_first_xfer_offset 5
+#define SDMA_PKT_PTEPDE_COPY_BACKWARDS_MASK_BIT_FOR_DW_mask_first_xfer_mask   0x000000FF
+#define SDMA_PKT_PTEPDE_COPY_BACKWARDS_MASK_BIT_FOR_DW_mask_first_xfer_shift  0
+#define SDMA_PKT_PTEPDE_COPY_BACKWARDS_MASK_BIT_FOR_DW_MASK_FIRST_XFER(x) (((x) & SDMA_PKT_PTEPDE_COPY_BACKWARDS_MASK_BIT_FOR_DW_mask_first_xfer_mask) << SDMA_PKT_PTEPDE_COPY_BACKWARDS_MASK_BIT_FOR_DW_mask_first_xfer_shift)
+
+/*define for mask_last_xfer field*/
+#define SDMA_PKT_PTEPDE_COPY_BACKWARDS_MASK_BIT_FOR_DW_mask_last_xfer_offset 5
+#define SDMA_PKT_PTEPDE_COPY_BACKWARDS_MASK_BIT_FOR_DW_mask_last_xfer_mask   0x000000FF
+#define SDMA_PKT_PTEPDE_COPY_BACKWARDS_MASK_BIT_FOR_DW_mask_last_xfer_shift  8
+#define SDMA_PKT_PTEPDE_COPY_BACKWARDS_MASK_BIT_FOR_DW_MASK_LAST_XFER(x) (((x) & SDMA_PKT_PTEPDE_COPY_BACKWARDS_MASK_BIT_FOR_DW_mask_last_xfer_mask) << SDMA_PKT_PTEPDE_COPY_BACKWARDS_MASK_BIT_FOR_DW_mask_last_xfer_shift)
+
+/*define for COUNT_IN_32B_XFER word*/
+/*define for count field*/
+#define SDMA_PKT_PTEPDE_COPY_BACKWARDS_COUNT_IN_32B_XFER_count_offset 6
+#define SDMA_PKT_PTEPDE_COPY_BACKWARDS_COUNT_IN_32B_XFER_count_mask   0x0001FFFF
+#define SDMA_PKT_PTEPDE_COPY_BACKWARDS_COUNT_IN_32B_XFER_count_shift  0
+#define SDMA_PKT_PTEPDE_COPY_BACKWARDS_COUNT_IN_32B_XFER_COUNT(x) (((x) & SDMA_PKT_PTEPDE_COPY_BACKWARDS_COUNT_IN_32B_XFER_count_mask) << SDMA_PKT_PTEPDE_COPY_BACKWARDS_COUNT_IN_32B_XFER_count_shift)
+
+
+/*
+** Definitions for SDMA_PKT_PTEPDE_RMW packet
+*/
+
+/*define for HEADER word*/
+/*define for op field*/
+#define SDMA_PKT_PTEPDE_RMW_HEADER_op_offset 0
+#define SDMA_PKT_PTEPDE_RMW_HEADER_op_mask   0x000000FF
+#define SDMA_PKT_PTEPDE_RMW_HEADER_op_shift  0
+#define SDMA_PKT_PTEPDE_RMW_HEADER_OP(x) (((x) & SDMA_PKT_PTEPDE_RMW_HEADER_op_mask) << SDMA_PKT_PTEPDE_RMW_HEADER_op_shift)
+
+/*define for sub_op field*/
+#define SDMA_PKT_PTEPDE_RMW_HEADER_sub_op_offset 0
+#define SDMA_PKT_PTEPDE_RMW_HEADER_sub_op_mask   0x000000FF
+#define SDMA_PKT_PTEPDE_RMW_HEADER_sub_op_shift  8
+#define SDMA_PKT_PTEPDE_RMW_HEADER_SUB_OP(x) (((x) & SDMA_PKT_PTEPDE_RMW_HEADER_sub_op_mask) << SDMA_PKT_PTEPDE_RMW_HEADER_sub_op_shift)
+
+/*define for gcc field*/
+#define SDMA_PKT_PTEPDE_RMW_HEADER_gcc_offset 0
+#define SDMA_PKT_PTEPDE_RMW_HEADER_gcc_mask   0x00000001
+#define SDMA_PKT_PTEPDE_RMW_HEADER_gcc_shift  19
+#define SDMA_PKT_PTEPDE_RMW_HEADER_GCC(x) (((x) & SDMA_PKT_PTEPDE_RMW_HEADER_gcc_mask) << SDMA_PKT_PTEPDE_RMW_HEADER_gcc_shift)
+
+/*define for sys field*/
+#define SDMA_PKT_PTEPDE_RMW_HEADER_sys_offset 0
+#define SDMA_PKT_PTEPDE_RMW_HEADER_sys_mask   0x00000001
+#define SDMA_PKT_PTEPDE_RMW_HEADER_sys_shift  20
+#define SDMA_PKT_PTEPDE_RMW_HEADER_SYS(x) (((x) & SDMA_PKT_PTEPDE_RMW_HEADER_sys_mask) << SDMA_PKT_PTEPDE_RMW_HEADER_sys_shift)
+
+/*define for snp field*/
+#define SDMA_PKT_PTEPDE_RMW_HEADER_snp_offset 0
+#define SDMA_PKT_PTEPDE_RMW_HEADER_snp_mask   0x00000001
+#define SDMA_PKT_PTEPDE_RMW_HEADER_snp_shift  22
+#define SDMA_PKT_PTEPDE_RMW_HEADER_SNP(x) (((x) & SDMA_PKT_PTEPDE_RMW_HEADER_snp_mask) << SDMA_PKT_PTEPDE_RMW_HEADER_snp_shift)
+
+/*define for gpa field*/
+#define SDMA_PKT_PTEPDE_RMW_HEADER_gpa_offset 0
+#define SDMA_PKT_PTEPDE_RMW_HEADER_gpa_mask   0x00000001
+#define SDMA_PKT_PTEPDE_RMW_HEADER_gpa_shift  23
+#define SDMA_PKT_PTEPDE_RMW_HEADER_GPA(x) (((x) & SDMA_PKT_PTEPDE_RMW_HEADER_gpa_mask) << SDMA_PKT_PTEPDE_RMW_HEADER_gpa_shift)
+
+/*define for ADDR_LO word*/
+/*define for addr_31_0 field*/
+#define SDMA_PKT_PTEPDE_RMW_ADDR_LO_addr_31_0_offset 1
+#define SDMA_PKT_PTEPDE_RMW_ADDR_LO_addr_31_0_mask   0xFFFFFFFF
+#define SDMA_PKT_PTEPDE_RMW_ADDR_LO_addr_31_0_shift  0
+#define SDMA_PKT_PTEPDE_RMW_ADDR_LO_ADDR_31_0(x) (((x) & SDMA_PKT_PTEPDE_RMW_ADDR_LO_addr_31_0_mask) << SDMA_PKT_PTEPDE_RMW_ADDR_LO_addr_31_0_shift)
+
+/*define for ADDR_HI word*/
+/*define for addr_63_32 field*/
+#define SDMA_PKT_PTEPDE_RMW_ADDR_HI_addr_63_32_offset 2
+#define SDMA_PKT_PTEPDE_RMW_ADDR_HI_addr_63_32_mask   0xFFFFFFFF
+#define SDMA_PKT_PTEPDE_RMW_ADDR_HI_addr_63_32_shift  0
+#define SDMA_PKT_PTEPDE_RMW_ADDR_HI_ADDR_63_32(x) (((x) & SDMA_PKT_PTEPDE_RMW_ADDR_HI_addr_63_32_mask) << SDMA_PKT_PTEPDE_RMW_ADDR_HI_addr_63_32_shift)
+
+/*define for MASK_LO word*/
+/*define for mask_31_0 field*/
+#define SDMA_PKT_PTEPDE_RMW_MASK_LO_mask_31_0_offset 3
+#define SDMA_PKT_PTEPDE_RMW_MASK_LO_mask_31_0_mask   0xFFFFFFFF
+#define SDMA_PKT_PTEPDE_RMW_MASK_LO_mask_31_0_shift  0
+#define SDMA_PKT_PTEPDE_RMW_MASK_LO_MASK_31_0(x) (((x) & SDMA_PKT_PTEPDE_RMW_MASK_LO_mask_31_0_mask) << SDMA_PKT_PTEPDE_RMW_MASK_LO_mask_31_0_shift)
+
+/*define for MASK_HI word*/
+/*define for mask_63_32 field*/
+#define SDMA_PKT_PTEPDE_RMW_MASK_HI_mask_63_32_offset 4
+#define SDMA_PKT_PTEPDE_RMW_MASK_HI_mask_63_32_mask   0xFFFFFFFF
+#define SDMA_PKT_PTEPDE_RMW_MASK_HI_mask_63_32_shift  0
+#define SDMA_PKT_PTEPDE_RMW_MASK_HI_MASK_63_32(x) (((x) & SDMA_PKT_PTEPDE_RMW_MASK_HI_mask_63_32_mask) << SDMA_PKT_PTEPDE_RMW_MASK_HI_mask_63_32_shift)
+
+/*define for VALUE_LO word*/
+/*define for value_31_0 field*/
+#define SDMA_PKT_PTEPDE_RMW_VALUE_LO_value_31_0_offset 5
+#define SDMA_PKT_PTEPDE_RMW_VALUE_LO_value_31_0_mask   0xFFFFFFFF
+#define SDMA_PKT_PTEPDE_RMW_VALUE_LO_value_31_0_shift  0
+#define SDMA_PKT_PTEPDE_RMW_VALUE_LO_VALUE_31_0(x) (((x) & SDMA_PKT_PTEPDE_RMW_VALUE_LO_value_31_0_mask) << SDMA_PKT_PTEPDE_RMW_VALUE_LO_value_31_0_shift)
+
+/*define for VALUE_HI word*/
+/*define for value_63_32 field*/
+#define SDMA_PKT_PTEPDE_RMW_VALUE_HI_value_63_32_offset 6
+#define SDMA_PKT_PTEPDE_RMW_VALUE_HI_value_63_32_mask   0xFFFFFFFF
+#define SDMA_PKT_PTEPDE_RMW_VALUE_HI_value_63_32_shift  0
+#define SDMA_PKT_PTEPDE_RMW_VALUE_HI_VALUE_63_32(x) (((x) & SDMA_PKT_PTEPDE_RMW_VALUE_HI_value_63_32_mask) << SDMA_PKT_PTEPDE_RMW_VALUE_HI_value_63_32_shift)
+
+
+/*
+** Definitions for SDMA_PKT_WRITE_INCR packet
+*/
+
+/*define for HEADER word*/
+/*define for op field*/
+#define SDMA_PKT_WRITE_INCR_HEADER_op_offset 0
+#define SDMA_PKT_WRITE_INCR_HEADER_op_mask   0x000000FF
+#define SDMA_PKT_WRITE_INCR_HEADER_op_shift  0
+#define SDMA_PKT_WRITE_INCR_HEADER_OP(x) (((x) & SDMA_PKT_WRITE_INCR_HEADER_op_mask) << SDMA_PKT_WRITE_INCR_HEADER_op_shift)
+
+/*define for sub_op field*/
+#define SDMA_PKT_WRITE_INCR_HEADER_sub_op_offset 0
+#define SDMA_PKT_WRITE_INCR_HEADER_sub_op_mask   0x000000FF
+#define SDMA_PKT_WRITE_INCR_HEADER_sub_op_shift  8
+#define SDMA_PKT_WRITE_INCR_HEADER_SUB_OP(x) (((x) & SDMA_PKT_WRITE_INCR_HEADER_sub_op_mask) << SDMA_PKT_WRITE_INCR_HEADER_sub_op_shift)
+
+/*define for DST_ADDR_LO word*/
+/*define for dst_addr_31_0 field*/
+#define SDMA_PKT_WRITE_INCR_DST_ADDR_LO_dst_addr_31_0_offset 1
+#define SDMA_PKT_WRITE_INCR_DST_ADDR_LO_dst_addr_31_0_mask   0xFFFFFFFF
+#define SDMA_PKT_WRITE_INCR_DST_ADDR_LO_dst_addr_31_0_shift  0
+#define SDMA_PKT_WRITE_INCR_DST_ADDR_LO_DST_ADDR_31_0(x) (((x) & SDMA_PKT_WRITE_INCR_DST_ADDR_LO_dst_addr_31_0_mask) << SDMA_PKT_WRITE_INCR_DST_ADDR_LO_dst_addr_31_0_shift)
+
+/*define for DST_ADDR_HI word*/
+/*define for dst_addr_63_32 field*/
+#define SDMA_PKT_WRITE_INCR_DST_ADDR_HI_dst_addr_63_32_offset 2
+#define SDMA_PKT_WRITE_INCR_DST_ADDR_HI_dst_addr_63_32_mask   0xFFFFFFFF
+#define SDMA_PKT_WRITE_INCR_DST_ADDR_HI_dst_addr_63_32_shift  0
+#define SDMA_PKT_WRITE_INCR_DST_ADDR_HI_DST_ADDR_63_32(x) (((x) & SDMA_PKT_WRITE_INCR_DST_ADDR_HI_dst_addr_63_32_mask) << SDMA_PKT_WRITE_INCR_DST_ADDR_HI_dst_addr_63_32_shift)
+
+/*define for MASK_DW0 word*/
+/*define for mask_dw0 field*/
+#define SDMA_PKT_WRITE_INCR_MASK_DW0_mask_dw0_offset 3
+#define SDMA_PKT_WRITE_INCR_MASK_DW0_mask_dw0_mask   0xFFFFFFFF
+#define SDMA_PKT_WRITE_INCR_MASK_DW0_mask_dw0_shift  0
+#define SDMA_PKT_WRITE_INCR_MASK_DW0_MASK_DW0(x) (((x) & SDMA_PKT_WRITE_INCR_MASK_DW0_mask_dw0_mask) << SDMA_PKT_WRITE_INCR_MASK_DW0_mask_dw0_shift)
+
+/*define for MASK_DW1 word*/
+/*define for mask_dw1 field*/
+#define SDMA_PKT_WRITE_INCR_MASK_DW1_mask_dw1_offset 4
+#define SDMA_PKT_WRITE_INCR_MASK_DW1_mask_dw1_mask   0xFFFFFFFF
+#define SDMA_PKT_WRITE_INCR_MASK_DW1_mask_dw1_shift  0
+#define SDMA_PKT_WRITE_INCR_MASK_DW1_MASK_DW1(x) (((x) & SDMA_PKT_WRITE_INCR_MASK_DW1_mask_dw1_mask) << SDMA_PKT_WRITE_INCR_MASK_DW1_mask_dw1_shift)
+
+/*define for INIT_DW0 word*/
+/*define for init_dw0 field*/
+#define SDMA_PKT_WRITE_INCR_INIT_DW0_init_dw0_offset 5
+#define SDMA_PKT_WRITE_INCR_INIT_DW0_init_dw0_mask   0xFFFFFFFF
+#define SDMA_PKT_WRITE_INCR_INIT_DW0_init_dw0_shift  0
+#define SDMA_PKT_WRITE_INCR_INIT_DW0_INIT_DW0(x) (((x) & SDMA_PKT_WRITE_INCR_INIT_DW0_init_dw0_mask) << SDMA_PKT_WRITE_INCR_INIT_DW0_init_dw0_shift)
+
+/*define for INIT_DW1 word*/
+/*define for init_dw1 field*/
+#define SDMA_PKT_WRITE_INCR_INIT_DW1_init_dw1_offset 6
+#define SDMA_PKT_WRITE_INCR_INIT_DW1_init_dw1_mask   0xFFFFFFFF
+#define SDMA_PKT_WRITE_INCR_INIT_DW1_init_dw1_shift  0
+#define SDMA_PKT_WRITE_INCR_INIT_DW1_INIT_DW1(x) (((x) & SDMA_PKT_WRITE_INCR_INIT_DW1_init_dw1_mask) << SDMA_PKT_WRITE_INCR_INIT_DW1_init_dw1_shift)
+
+/*define for INCR_DW0 word*/
+/*define for incr_dw0 field*/
+#define SDMA_PKT_WRITE_INCR_INCR_DW0_incr_dw0_offset 7
+#define SDMA_PKT_WRITE_INCR_INCR_DW0_incr_dw0_mask   0xFFFFFFFF
+#define SDMA_PKT_WRITE_INCR_INCR_DW0_incr_dw0_shift  0
+#define SDMA_PKT_WRITE_INCR_INCR_DW0_INCR_DW0(x) (((x) & SDMA_PKT_WRITE_INCR_INCR_DW0_incr_dw0_mask) << SDMA_PKT_WRITE_INCR_INCR_DW0_incr_dw0_shift)
+
+/*define for INCR_DW1 word*/
+/*define for incr_dw1 field*/
+#define SDMA_PKT_WRITE_INCR_INCR_DW1_incr_dw1_offset 8
+#define SDMA_PKT_WRITE_INCR_INCR_DW1_incr_dw1_mask   0xFFFFFFFF
+#define SDMA_PKT_WRITE_INCR_INCR_DW1_incr_dw1_shift  0
+#define SDMA_PKT_WRITE_INCR_INCR_DW1_INCR_DW1(x) (((x) & SDMA_PKT_WRITE_INCR_INCR_DW1_incr_dw1_mask) << SDMA_PKT_WRITE_INCR_INCR_DW1_incr_dw1_shift)
+
+/*define for COUNT word*/
+/*define for count field*/
+#define SDMA_PKT_WRITE_INCR_COUNT_count_offset 9
+#define SDMA_PKT_WRITE_INCR_COUNT_count_mask   0x0007FFFF
+#define SDMA_PKT_WRITE_INCR_COUNT_count_shift  0
+#define SDMA_PKT_WRITE_INCR_COUNT_COUNT(x) (((x) & SDMA_PKT_WRITE_INCR_COUNT_count_mask) << SDMA_PKT_WRITE_INCR_COUNT_count_shift)
+
+
+/*
+** Definitions for SDMA_PKT_INDIRECT packet
+*/
+
+/*define for HEADER word*/
+/*define for op field*/
+#define SDMA_PKT_INDIRECT_HEADER_op_offset 0
+#define SDMA_PKT_INDIRECT_HEADER_op_mask   0x000000FF
+#define SDMA_PKT_INDIRECT_HEADER_op_shift  0
+#define SDMA_PKT_INDIRECT_HEADER_OP(x) (((x) & SDMA_PKT_INDIRECT_HEADER_op_mask) << SDMA_PKT_INDIRECT_HEADER_op_shift)
+
+/*define for sub_op field*/
+#define SDMA_PKT_INDIRECT_HEADER_sub_op_offset 0
+#define SDMA_PKT_INDIRECT_HEADER_sub_op_mask   0x000000FF
+#define SDMA_PKT_INDIRECT_HEADER_sub_op_shift  8
+#define SDMA_PKT_INDIRECT_HEADER_SUB_OP(x) (((x) & SDMA_PKT_INDIRECT_HEADER_sub_op_mask) << SDMA_PKT_INDIRECT_HEADER_sub_op_shift)
+
+/*define for vmid field*/
+#define SDMA_PKT_INDIRECT_HEADER_vmid_offset 0
+#define SDMA_PKT_INDIRECT_HEADER_vmid_mask   0x0000000F
+#define SDMA_PKT_INDIRECT_HEADER_vmid_shift  16
+#define SDMA_PKT_INDIRECT_HEADER_VMID(x) (((x) & SDMA_PKT_INDIRECT_HEADER_vmid_mask) << SDMA_PKT_INDIRECT_HEADER_vmid_shift)
+
+/*define for BASE_LO word*/
+/*define for ib_base_31_0 field*/
+#define SDMA_PKT_INDIRECT_BASE_LO_ib_base_31_0_offset 1
+#define SDMA_PKT_INDIRECT_BASE_LO_ib_base_31_0_mask   0xFFFFFFFF
+#define SDMA_PKT_INDIRECT_BASE_LO_ib_base_31_0_shift  0
+#define SDMA_PKT_INDIRECT_BASE_LO_IB_BASE_31_0(x) (((x) & SDMA_PKT_INDIRECT_BASE_LO_ib_base_31_0_mask) << SDMA_PKT_INDIRECT_BASE_LO_ib_base_31_0_shift)
+
+/*define for BASE_HI word*/
+/*define for ib_base_63_32 field*/
+#define SDMA_PKT_INDIRECT_BASE_HI_ib_base_63_32_offset 2
+#define SDMA_PKT_INDIRECT_BASE_HI_ib_base_63_32_mask   0xFFFFFFFF
+#define SDMA_PKT_INDIRECT_BASE_HI_ib_base_63_32_shift  0
+#define SDMA_PKT_INDIRECT_BASE_HI_IB_BASE_63_32(x) (((x) & SDMA_PKT_INDIRECT_BASE_HI_ib_base_63_32_mask) << SDMA_PKT_INDIRECT_BASE_HI_ib_base_63_32_shift)
+
+/*define for IB_SIZE word*/
+/*define for ib_size field*/
+#define SDMA_PKT_INDIRECT_IB_SIZE_ib_size_offset 3
+#define SDMA_PKT_INDIRECT_IB_SIZE_ib_size_mask   0x000FFFFF
+#define SDMA_PKT_INDIRECT_IB_SIZE_ib_size_shift  0
+#define SDMA_PKT_INDIRECT_IB_SIZE_IB_SIZE(x) (((x) & SDMA_PKT_INDIRECT_IB_SIZE_ib_size_mask) << SDMA_PKT_INDIRECT_IB_SIZE_ib_size_shift)
+
+/*define for CSA_ADDR_LO word*/
+/*define for csa_addr_31_0 field*/
+#define SDMA_PKT_INDIRECT_CSA_ADDR_LO_csa_addr_31_0_offset 4
+#define SDMA_PKT_INDIRECT_CSA_ADDR_LO_csa_addr_31_0_mask   0xFFFFFFFF
+#define SDMA_PKT_INDIRECT_CSA_ADDR_LO_csa_addr_31_0_shift  0
+#define SDMA_PKT_INDIRECT_CSA_ADDR_LO_CSA_ADDR_31_0(x) (((x) & SDMA_PKT_INDIRECT_CSA_ADDR_LO_csa_addr_31_0_mask) << SDMA_PKT_INDIRECT_CSA_ADDR_LO_csa_addr_31_0_shift)
+
+/*define for CSA_ADDR_HI word*/
+/*define for csa_addr_63_32 field*/
+#define SDMA_PKT_INDIRECT_CSA_ADDR_HI_csa_addr_63_32_offset 5
+#define SDMA_PKT_INDIRECT_CSA_ADDR_HI_csa_addr_63_32_mask   0xFFFFFFFF
+#define SDMA_PKT_INDIRECT_CSA_ADDR_HI_csa_addr_63_32_shift  0
+#define SDMA_PKT_INDIRECT_CSA_ADDR_HI_CSA_ADDR_63_32(x) (((x) & SDMA_PKT_INDIRECT_CSA_ADDR_HI_csa_addr_63_32_mask) << SDMA_PKT_INDIRECT_CSA_ADDR_HI_csa_addr_63_32_shift)
+
+
+/*
+** Definitions for SDMA_PKT_SEMAPHORE packet
+*/
+
+/*define for HEADER word*/
+/*define for op field*/
+#define SDMA_PKT_SEMAPHORE_HEADER_op_offset 0
+#define SDMA_PKT_SEMAPHORE_HEADER_op_mask   0x000000FF
+#define SDMA_PKT_SEMAPHORE_HEADER_op_shift  0
+#define SDMA_PKT_SEMAPHORE_HEADER_OP(x) (((x) & SDMA_PKT_SEMAPHORE_HEADER_op_mask) << SDMA_PKT_SEMAPHORE_HEADER_op_shift)
+
+/*define for sub_op field*/
+#define SDMA_PKT_SEMAPHORE_HEADER_sub_op_offset 0
+#define SDMA_PKT_SEMAPHORE_HEADER_sub_op_mask   0x000000FF
+#define SDMA_PKT_SEMAPHORE_HEADER_sub_op_shift  8
+#define SDMA_PKT_SEMAPHORE_HEADER_SUB_OP(x) (((x) & SDMA_PKT_SEMAPHORE_HEADER_sub_op_mask) << SDMA_PKT_SEMAPHORE_HEADER_sub_op_shift)
+
+/*define for write_one field*/
+#define SDMA_PKT_SEMAPHORE_HEADER_write_one_offset 0
+#define SDMA_PKT_SEMAPHORE_HEADER_write_one_mask   0x00000001
+#define SDMA_PKT_SEMAPHORE_HEADER_write_one_shift  29
+#define SDMA_PKT_SEMAPHORE_HEADER_WRITE_ONE(x) (((x) & SDMA_PKT_SEMAPHORE_HEADER_write_one_mask) << SDMA_PKT_SEMAPHORE_HEADER_write_one_shift)
+
+/*define for signal field*/
+#define SDMA_PKT_SEMAPHORE_HEADER_signal_offset 0
+#define SDMA_PKT_SEMAPHORE_HEADER_signal_mask   0x00000001
+#define SDMA_PKT_SEMAPHORE_HEADER_signal_shift  30
+#define SDMA_PKT_SEMAPHORE_HEADER_SIGNAL(x) (((x) & SDMA_PKT_SEMAPHORE_HEADER_signal_mask) << SDMA_PKT_SEMAPHORE_HEADER_signal_shift)
+
+/*define for mailbox field*/
+#define SDMA_PKT_SEMAPHORE_HEADER_mailbox_offset 0
+#define SDMA_PKT_SEMAPHORE_HEADER_mailbox_mask   0x00000001
+#define SDMA_PKT_SEMAPHORE_HEADER_mailbox_shift  31
+#define SDMA_PKT_SEMAPHORE_HEADER_MAILBOX(x) (((x) & SDMA_PKT_SEMAPHORE_HEADER_mailbox_mask) << SDMA_PKT_SEMAPHORE_HEADER_mailbox_shift)
+
+/*define for ADDR_LO word*/
+/*define for addr_31_0 field*/
+#define SDMA_PKT_SEMAPHORE_ADDR_LO_addr_31_0_offset 1
+#define SDMA_PKT_SEMAPHORE_ADDR_LO_addr_31_0_mask   0xFFFFFFFF
+#define SDMA_PKT_SEMAPHORE_ADDR_LO_addr_31_0_shift  0
+#define SDMA_PKT_SEMAPHORE_ADDR_LO_ADDR_31_0(x) (((x) & SDMA_PKT_SEMAPHORE_ADDR_LO_addr_31_0_mask) << SDMA_PKT_SEMAPHORE_ADDR_LO_addr_31_0_shift)
+
+/*define for ADDR_HI word*/
+/*define for addr_63_32 field*/
+#define SDMA_PKT_SEMAPHORE_ADDR_HI_addr_63_32_offset 2
+#define SDMA_PKT_SEMAPHORE_ADDR_HI_addr_63_32_mask   0xFFFFFFFF
+#define SDMA_PKT_SEMAPHORE_ADDR_HI_addr_63_32_shift  0
+#define SDMA_PKT_SEMAPHORE_ADDR_HI_ADDR_63_32(x) (((x) & SDMA_PKT_SEMAPHORE_ADDR_HI_addr_63_32_mask) << SDMA_PKT_SEMAPHORE_ADDR_HI_addr_63_32_shift)
+
+
+/*
+** Definitions for SDMA_PKT_FENCE packet
+*/
+
+/*define for HEADER word*/
+/*define for op field*/
+#define SDMA_PKT_FENCE_HEADER_op_offset 0
+#define SDMA_PKT_FENCE_HEADER_op_mask   0x000000FF
+#define SDMA_PKT_FENCE_HEADER_op_shift  0
+#define SDMA_PKT_FENCE_HEADER_OP(x) (((x) & SDMA_PKT_FENCE_HEADER_op_mask) << SDMA_PKT_FENCE_HEADER_op_shift)
+
+/*define for sub_op field*/
+#define SDMA_PKT_FENCE_HEADER_sub_op_offset 0
+#define SDMA_PKT_FENCE_HEADER_sub_op_mask   0x000000FF
+#define SDMA_PKT_FENCE_HEADER_sub_op_shift  8
+#define SDMA_PKT_FENCE_HEADER_SUB_OP(x) (((x) & SDMA_PKT_FENCE_HEADER_sub_op_mask) << SDMA_PKT_FENCE_HEADER_sub_op_shift)
+
+/*define for ADDR_LO word*/
+/*define for addr_31_0 field*/
+#define SDMA_PKT_FENCE_ADDR_LO_addr_31_0_offset 1
+#define SDMA_PKT_FENCE_ADDR_LO_addr_31_0_mask   0xFFFFFFFF
+#define SDMA_PKT_FENCE_ADDR_LO_addr_31_0_shift  0
+#define SDMA_PKT_FENCE_ADDR_LO_ADDR_31_0(x) (((x) & SDMA_PKT_FENCE_ADDR_LO_addr_31_0_mask) << SDMA_PKT_FENCE_ADDR_LO_addr_31_0_shift)
+
+/*define for ADDR_HI word*/
+/*define for addr_63_32 field*/
+#define SDMA_PKT_FENCE_ADDR_HI_addr_63_32_offset 2
+#define SDMA_PKT_FENCE_ADDR_HI_addr_63_32_mask   0xFFFFFFFF
+#define SDMA_PKT_FENCE_ADDR_HI_addr_63_32_shift  0
+#define SDMA_PKT_FENCE_ADDR_HI_ADDR_63_32(x) (((x) & SDMA_PKT_FENCE_ADDR_HI_addr_63_32_mask) << SDMA_PKT_FENCE_ADDR_HI_addr_63_32_shift)
+
+/*define for DATA word*/
+/*define for data field*/
+#define SDMA_PKT_FENCE_DATA_data_offset 3
+#define SDMA_PKT_FENCE_DATA_data_mask   0xFFFFFFFF
+#define SDMA_PKT_FENCE_DATA_data_shift  0
+#define SDMA_PKT_FENCE_DATA_DATA(x) (((x) & SDMA_PKT_FENCE_DATA_data_mask) << SDMA_PKT_FENCE_DATA_data_shift)
+
+
+/*
+** Definitions for SDMA_PKT_SRBM_WRITE packet
+*/
+
+/*define for HEADER word*/
+/*define for op field*/
+#define SDMA_PKT_SRBM_WRITE_HEADER_op_offset 0
+#define SDMA_PKT_SRBM_WRITE_HEADER_op_mask   0x000000FF
+#define SDMA_PKT_SRBM_WRITE_HEADER_op_shift  0
+#define SDMA_PKT_SRBM_WRITE_HEADER_OP(x) (((x) & SDMA_PKT_SRBM_WRITE_HEADER_op_mask) << SDMA_PKT_SRBM_WRITE_HEADER_op_shift)
+
+/*define for sub_op field*/
+#define SDMA_PKT_SRBM_WRITE_HEADER_sub_op_offset 0
+#define SDMA_PKT_SRBM_WRITE_HEADER_sub_op_mask   0x000000FF
+#define SDMA_PKT_SRBM_WRITE_HEADER_sub_op_shift  8
+#define SDMA_PKT_SRBM_WRITE_HEADER_SUB_OP(x) (((x) & SDMA_PKT_SRBM_WRITE_HEADER_sub_op_mask) << SDMA_PKT_SRBM_WRITE_HEADER_sub_op_shift)
+
+/*define for byte_en field*/
+#define SDMA_PKT_SRBM_WRITE_HEADER_byte_en_offset 0
+#define SDMA_PKT_SRBM_WRITE_HEADER_byte_en_mask   0x0000000F
+#define SDMA_PKT_SRBM_WRITE_HEADER_byte_en_shift  28
+#define SDMA_PKT_SRBM_WRITE_HEADER_BYTE_EN(x) (((x) & SDMA_PKT_SRBM_WRITE_HEADER_byte_en_mask) << SDMA_PKT_SRBM_WRITE_HEADER_byte_en_shift)
+
+/*define for ADDR word*/
+/*define for addr field*/
+#define SDMA_PKT_SRBM_WRITE_ADDR_addr_offset 1
+#define SDMA_PKT_SRBM_WRITE_ADDR_addr_mask   0x0003FFFF
+#define SDMA_PKT_SRBM_WRITE_ADDR_addr_shift  0
+#define SDMA_PKT_SRBM_WRITE_ADDR_ADDR(x) (((x) & SDMA_PKT_SRBM_WRITE_ADDR_addr_mask) << SDMA_PKT_SRBM_WRITE_ADDR_addr_shift)
+
+/*define for DATA word*/
+/*define for data field*/
+#define SDMA_PKT_SRBM_WRITE_DATA_data_offset 2
+#define SDMA_PKT_SRBM_WRITE_DATA_data_mask   0xFFFFFFFF
+#define SDMA_PKT_SRBM_WRITE_DATA_data_shift  0
+#define SDMA_PKT_SRBM_WRITE_DATA_DATA(x) (((x) & SDMA_PKT_SRBM_WRITE_DATA_data_mask) << SDMA_PKT_SRBM_WRITE_DATA_data_shift)
+
+
+/*
+** Definitions for SDMA_PKT_PRE_EXE packet
+*/
+
+/*define for HEADER word*/
+/*define for op field*/
+#define SDMA_PKT_PRE_EXE_HEADER_op_offset 0
+#define SDMA_PKT_PRE_EXE_HEADER_op_mask   0x000000FF
+#define SDMA_PKT_PRE_EXE_HEADER_op_shift  0
+#define SDMA_PKT_PRE_EXE_HEADER_OP(x) (((x) & SDMA_PKT_PRE_EXE_HEADER_op_mask) << SDMA_PKT_PRE_EXE_HEADER_op_shift)
+
+/*define for sub_op field*/
+#define SDMA_PKT_PRE_EXE_HEADER_sub_op_offset 0
+#define SDMA_PKT_PRE_EXE_HEADER_sub_op_mask   0x000000FF
+#define SDMA_PKT_PRE_EXE_HEADER_sub_op_shift  8
+#define SDMA_PKT_PRE_EXE_HEADER_SUB_OP(x) (((x) & SDMA_PKT_PRE_EXE_HEADER_sub_op_mask) << SDMA_PKT_PRE_EXE_HEADER_sub_op_shift)
+
+/*define for dev_sel field*/
+#define SDMA_PKT_PRE_EXE_HEADER_dev_sel_offset 0
+#define SDMA_PKT_PRE_EXE_HEADER_dev_sel_mask   0x000000FF
+#define SDMA_PKT_PRE_EXE_HEADER_dev_sel_shift  16
+#define SDMA_PKT_PRE_EXE_HEADER_DEV_SEL(x) (((x) & SDMA_PKT_PRE_EXE_HEADER_dev_sel_mask) << SDMA_PKT_PRE_EXE_HEADER_dev_sel_shift)
+
+/*define for EXEC_COUNT word*/
+/*define for exec_count field*/
+#define SDMA_PKT_PRE_EXE_EXEC_COUNT_exec_count_offset 1
+#define SDMA_PKT_PRE_EXE_EXEC_COUNT_exec_count_mask   0x00003FFF
+#define SDMA_PKT_PRE_EXE_EXEC_COUNT_exec_count_shift  0
+#define SDMA_PKT_PRE_EXE_EXEC_COUNT_EXEC_COUNT(x) (((x) & SDMA_PKT_PRE_EXE_EXEC_COUNT_exec_count_mask) << SDMA_PKT_PRE_EXE_EXEC_COUNT_exec_count_shift)
+
+
+/*
+** Definitions for SDMA_PKT_COND_EXE packet
+*/
+
+/*define for HEADER word*/
+/*define for op field*/
+#define SDMA_PKT_COND_EXE_HEADER_op_offset 0
+#define SDMA_PKT_COND_EXE_HEADER_op_mask   0x000000FF
+#define SDMA_PKT_COND_EXE_HEADER_op_shift  0
+#define SDMA_PKT_COND_EXE_HEADER_OP(x) (((x) & SDMA_PKT_COND_EXE_HEADER_op_mask) << SDMA_PKT_COND_EXE_HEADER_op_shift)
+
+/*define for sub_op field*/
+#define SDMA_PKT_COND_EXE_HEADER_sub_op_offset 0
+#define SDMA_PKT_COND_EXE_HEADER_sub_op_mask   0x000000FF
+#define SDMA_PKT_COND_EXE_HEADER_sub_op_shift  8
+#define SDMA_PKT_COND_EXE_HEADER_SUB_OP(x) (((x) & SDMA_PKT_COND_EXE_HEADER_sub_op_mask) << SDMA_PKT_COND_EXE_HEADER_sub_op_shift)
+
+/*define for ADDR_LO word*/
+/*define for addr_31_0 field*/
+#define SDMA_PKT_COND_EXE_ADDR_LO_addr_31_0_offset 1
+#define SDMA_PKT_COND_EXE_ADDR_LO_addr_31_0_mask   0xFFFFFFFF
+#define SDMA_PKT_COND_EXE_ADDR_LO_addr_31_0_shift  0
+#define SDMA_PKT_COND_EXE_ADDR_LO_ADDR_31_0(x) (((x) & SDMA_PKT_COND_EXE_ADDR_LO_addr_31_0_mask) << SDMA_PKT_COND_EXE_ADDR_LO_addr_31_0_shift)
+
+/*define for ADDR_HI word*/
+/*define for addr_63_32 field*/
+#define SDMA_PKT_COND_EXE_ADDR_HI_addr_63_32_offset 2
+#define SDMA_PKT_COND_EXE_ADDR_HI_addr_63_32_mask   0xFFFFFFFF
+#define SDMA_PKT_COND_EXE_ADDR_HI_addr_63_32_shift  0
+#define SDMA_PKT_COND_EXE_ADDR_HI_ADDR_63_32(x) (((x) & SDMA_PKT_COND_EXE_ADDR_HI_addr_63_32_mask) << SDMA_PKT_COND_EXE_ADDR_HI_addr_63_32_shift)
+
+/*define for REFERENCE word*/
+/*define for reference field*/
+#define SDMA_PKT_COND_EXE_REFERENCE_reference_offset 3
+#define SDMA_PKT_COND_EXE_REFERENCE_reference_mask   0xFFFFFFFF
+#define SDMA_PKT_COND_EXE_REFERENCE_reference_shift  0
+#define SDMA_PKT_COND_EXE_REFERENCE_REFERENCE(x) (((x) & SDMA_PKT_COND_EXE_REFERENCE_reference_mask) << SDMA_PKT_COND_EXE_REFERENCE_reference_shift)
+
+/*define for EXEC_COUNT word*/
+/*define for exec_count field*/
+#define SDMA_PKT_COND_EXE_EXEC_COUNT_exec_count_offset 4
+#define SDMA_PKT_COND_EXE_EXEC_COUNT_exec_count_mask   0x00003FFF
+#define SDMA_PKT_COND_EXE_EXEC_COUNT_exec_count_shift  0
+#define SDMA_PKT_COND_EXE_EXEC_COUNT_EXEC_COUNT(x) (((x) & SDMA_PKT_COND_EXE_EXEC_COUNT_exec_count_mask) << SDMA_PKT_COND_EXE_EXEC_COUNT_exec_count_shift)
+
+
+/*
+** Definitions for SDMA_PKT_CONSTANT_FILL packet
+*/
+
+/*define for HEADER word*/
+/*define for op field*/
+#define SDMA_PKT_CONSTANT_FILL_HEADER_op_offset 0
+#define SDMA_PKT_CONSTANT_FILL_HEADER_op_mask   0x000000FF
+#define SDMA_PKT_CONSTANT_FILL_HEADER_op_shift  0
+#define SDMA_PKT_CONSTANT_FILL_HEADER_OP(x) (((x) & SDMA_PKT_CONSTANT_FILL_HEADER_op_mask) << SDMA_PKT_CONSTANT_FILL_HEADER_op_shift)
+
+/*define for sub_op field*/
+#define SDMA_PKT_CONSTANT_FILL_HEADER_sub_op_offset 0
+#define SDMA_PKT_CONSTANT_FILL_HEADER_sub_op_mask   0x000000FF
+#define SDMA_PKT_CONSTANT_FILL_HEADER_sub_op_shift  8
+#define SDMA_PKT_CONSTANT_FILL_HEADER_SUB_OP(x) (((x) & SDMA_PKT_CONSTANT_FILL_HEADER_sub_op_mask) << SDMA_PKT_CONSTANT_FILL_HEADER_sub_op_shift)
+
+/*define for sw field*/
+#define SDMA_PKT_CONSTANT_FILL_HEADER_sw_offset 0
+#define SDMA_PKT_CONSTANT_FILL_HEADER_sw_mask   0x00000003
+#define SDMA_PKT_CONSTANT_FILL_HEADER_sw_shift  16
+#define SDMA_PKT_CONSTANT_FILL_HEADER_SW(x) (((x) & SDMA_PKT_CONSTANT_FILL_HEADER_sw_mask) << SDMA_PKT_CONSTANT_FILL_HEADER_sw_shift)
+
+/*define for fillsize field*/
+#define SDMA_PKT_CONSTANT_FILL_HEADER_fillsize_offset 0
+#define SDMA_PKT_CONSTANT_FILL_HEADER_fillsize_mask   0x00000003
+#define SDMA_PKT_CONSTANT_FILL_HEADER_fillsize_shift  30
+#define SDMA_PKT_CONSTANT_FILL_HEADER_FILLSIZE(x) (((x) & SDMA_PKT_CONSTANT_FILL_HEADER_fillsize_mask) << SDMA_PKT_CONSTANT_FILL_HEADER_fillsize_shift)
+
+/*define for DST_ADDR_LO word*/
+/*define for dst_addr_31_0 field*/
+#define SDMA_PKT_CONSTANT_FILL_DST_ADDR_LO_dst_addr_31_0_offset 1
+#define SDMA_PKT_CONSTANT_FILL_DST_ADDR_LO_dst_addr_31_0_mask   0xFFFFFFFF
+#define SDMA_PKT_CONSTANT_FILL_DST_ADDR_LO_dst_addr_31_0_shift  0
+#define SDMA_PKT_CONSTANT_FILL_DST_ADDR_LO_DST_ADDR_31_0(x) (((x) & SDMA_PKT_CONSTANT_FILL_DST_ADDR_LO_dst_addr_31_0_mask) << SDMA_PKT_CONSTANT_FILL_DST_ADDR_LO_dst_addr_31_0_shift)
+
+/*define for DST_ADDR_HI word*/
+/*define for dst_addr_63_32 field*/
+#define SDMA_PKT_CONSTANT_FILL_DST_ADDR_HI_dst_addr_63_32_offset 2
+#define SDMA_PKT_CONSTANT_FILL_DST_ADDR_HI_dst_addr_63_32_mask   0xFFFFFFFF
+#define SDMA_PKT_CONSTANT_FILL_DST_ADDR_HI_dst_addr_63_32_shift  0
+#define SDMA_PKT_CONSTANT_FILL_DST_ADDR_HI_DST_ADDR_63_32(x) (((x) & SDMA_PKT_CONSTANT_FILL_DST_ADDR_HI_dst_addr_63_32_mask) << SDMA_PKT_CONSTANT_FILL_DST_ADDR_HI_dst_addr_63_32_shift)
+
+/*define for DATA word*/
+/*define for src_data_31_0 field*/
+#define SDMA_PKT_CONSTANT_FILL_DATA_src_data_31_0_offset 3
+#define SDMA_PKT_CONSTANT_FILL_DATA_src_data_31_0_mask   0xFFFFFFFF
+#define SDMA_PKT_CONSTANT_FILL_DATA_src_data_31_0_shift  0
+#define SDMA_PKT_CONSTANT_FILL_DATA_SRC_DATA_31_0(x) (((x) & SDMA_PKT_CONSTANT_FILL_DATA_src_data_31_0_mask) << SDMA_PKT_CONSTANT_FILL_DATA_src_data_31_0_shift)
+
+/*define for COUNT word*/
+/*define for count field*/
+#define SDMA_PKT_CONSTANT_FILL_COUNT_count_offset 4
+#define SDMA_PKT_CONSTANT_FILL_COUNT_count_mask   0x003FFFFF
+#define SDMA_PKT_CONSTANT_FILL_COUNT_count_shift  0
+#define SDMA_PKT_CONSTANT_FILL_COUNT_COUNT(x) (((x) & SDMA_PKT_CONSTANT_FILL_COUNT_count_mask) << SDMA_PKT_CONSTANT_FILL_COUNT_count_shift)
+
+
+/*
+** Definitions for SDMA_PKT_DATA_FILL_MULTI packet
+*/
+
+/*define for HEADER word*/
+/*define for op field*/
+#define SDMA_PKT_DATA_FILL_MULTI_HEADER_op_offset 0
+#define SDMA_PKT_DATA_FILL_MULTI_HEADER_op_mask   0x000000FF
+#define SDMA_PKT_DATA_FILL_MULTI_HEADER_op_shift  0
+#define SDMA_PKT_DATA_FILL_MULTI_HEADER_OP(x) (((x) & SDMA_PKT_DATA_FILL_MULTI_HEADER_op_mask) << SDMA_PKT_DATA_FILL_MULTI_HEADER_op_shift)
+
+/*define for sub_op field*/
+#define SDMA_PKT_DATA_FILL_MULTI_HEADER_sub_op_offset 0
+#define SDMA_PKT_DATA_FILL_MULTI_HEADER_sub_op_mask   0x000000FF
+#define SDMA_PKT_DATA_FILL_MULTI_HEADER_sub_op_shift  8
+#define SDMA_PKT_DATA_FILL_MULTI_HEADER_SUB_OP(x) (((x) & SDMA_PKT_DATA_FILL_MULTI_HEADER_sub_op_mask) << SDMA_PKT_DATA_FILL_MULTI_HEADER_sub_op_shift)
+
+/*define for memlog_clr field*/
+#define SDMA_PKT_DATA_FILL_MULTI_HEADER_memlog_clr_offset 0
+#define SDMA_PKT_DATA_FILL_MULTI_HEADER_memlog_clr_mask   0x00000001
+#define SDMA_PKT_DATA_FILL_MULTI_HEADER_memlog_clr_shift  31
+#define SDMA_PKT_DATA_FILL_MULTI_HEADER_MEMLOG_CLR(x) (((x) & SDMA_PKT_DATA_FILL_MULTI_HEADER_memlog_clr_mask) << SDMA_PKT_DATA_FILL_MULTI_HEADER_memlog_clr_shift)
+
+/*define for BYTE_STRIDE word*/
+/*define for byte_stride field*/
+#define SDMA_PKT_DATA_FILL_MULTI_BYTE_STRIDE_byte_stride_offset 1
+#define SDMA_PKT_DATA_FILL_MULTI_BYTE_STRIDE_byte_stride_mask   0xFFFFFFFF
+#define SDMA_PKT_DATA_FILL_MULTI_BYTE_STRIDE_byte_stride_shift  0
+#define SDMA_PKT_DATA_FILL_MULTI_BYTE_STRIDE_BYTE_STRIDE(x) (((x) & SDMA_PKT_DATA_FILL_MULTI_BYTE_STRIDE_byte_stride_mask) << SDMA_PKT_DATA_FILL_MULTI_BYTE_STRIDE_byte_stride_shift)
+
+/*define for DMA_COUNT word*/
+/*define for dma_count field*/
+#define SDMA_PKT_DATA_FILL_MULTI_DMA_COUNT_dma_count_offset 2
+#define SDMA_PKT_DATA_FILL_MULTI_DMA_COUNT_dma_count_mask   0xFFFFFFFF
+#define SDMA_PKT_DATA_FILL_MULTI_DMA_COUNT_dma_count_shift  0
+#define SDMA_PKT_DATA_FILL_MULTI_DMA_COUNT_DMA_COUNT(x) (((x) & SDMA_PKT_DATA_FILL_MULTI_DMA_COUNT_dma_count_mask) << SDMA_PKT_DATA_FILL_MULTI_DMA_COUNT_dma_count_shift)
+
+/*define for DST_ADDR_LO word*/
+/*define for dst_addr_31_0 field*/
+#define SDMA_PKT_DATA_FILL_MULTI_DST_ADDR_LO_dst_addr_31_0_offset 3
+#define SDMA_PKT_DATA_FILL_MULTI_DST_ADDR_LO_dst_addr_31_0_mask   0xFFFFFFFF
+#define SDMA_PKT_DATA_FILL_MULTI_DST_ADDR_LO_dst_addr_31_0_shift  0
+#define SDMA_PKT_DATA_FILL_MULTI_DST_ADDR_LO_DST_ADDR_31_0(x) (((x) & SDMA_PKT_DATA_FILL_MULTI_DST_ADDR_LO_dst_addr_31_0_mask) << SDMA_PKT_DATA_FILL_MULTI_DST_ADDR_LO_dst_addr_31_0_shift)
+
+/*define for DST_ADDR_HI word*/
+/*define for dst_addr_63_32 field*/
+#define SDMA_PKT_DATA_FILL_MULTI_DST_ADDR_HI_dst_addr_63_32_offset 4
+#define SDMA_PKT_DATA_FILL_MULTI_DST_ADDR_HI_dst_addr_63_32_mask   0xFFFFFFFF
+#define SDMA_PKT_DATA_FILL_MULTI_DST_ADDR_HI_dst_addr_63_32_shift  0
+#define SDMA_PKT_DATA_FILL_MULTI_DST_ADDR_HI_DST_ADDR_63_32(x) (((x) & SDMA_PKT_DATA_FILL_MULTI_DST_ADDR_HI_dst_addr_63_32_mask) << SDMA_PKT_DATA_FILL_MULTI_DST_ADDR_HI_dst_addr_63_32_shift)
+
+/*define for BYTE_COUNT word*/
+/*define for count field*/
+#define SDMA_PKT_DATA_FILL_MULTI_BYTE_COUNT_count_offset 5
+#define SDMA_PKT_DATA_FILL_MULTI_BYTE_COUNT_count_mask   0x03FFFFFF
+#define SDMA_PKT_DATA_FILL_MULTI_BYTE_COUNT_count_shift  0
+#define SDMA_PKT_DATA_FILL_MULTI_BYTE_COUNT_COUNT(x) (((x) & SDMA_PKT_DATA_FILL_MULTI_BYTE_COUNT_count_mask) << SDMA_PKT_DATA_FILL_MULTI_BYTE_COUNT_count_shift)
+
+
+/*
+** Definitions for SDMA_PKT_POLL_REGMEM packet
+*/
+
+/*define for HEADER word*/
+/*define for op field*/
+#define SDMA_PKT_POLL_REGMEM_HEADER_op_offset 0
+#define SDMA_PKT_POLL_REGMEM_HEADER_op_mask   0x000000FF
+#define SDMA_PKT_POLL_REGMEM_HEADER_op_shift  0
+#define SDMA_PKT_POLL_REGMEM_HEADER_OP(x) (((x) & SDMA_PKT_POLL_REGMEM_HEADER_op_mask) << SDMA_PKT_POLL_REGMEM_HEADER_op_shift)
+
+/*define for sub_op field*/
+#define SDMA_PKT_POLL_REGMEM_HEADER_sub_op_offset 0
+#define SDMA_PKT_POLL_REGMEM_HEADER_sub_op_mask   0x000000FF
+#define SDMA_PKT_POLL_REGMEM_HEADER_sub_op_shift  8
+#define SDMA_PKT_POLL_REGMEM_HEADER_SUB_OP(x) (((x) & SDMA_PKT_POLL_REGMEM_HEADER_sub_op_mask) << SDMA_PKT_POLL_REGMEM_HEADER_sub_op_shift)
+
+/*define for hdp_flush field*/
+#define SDMA_PKT_POLL_REGMEM_HEADER_hdp_flush_offset 0
+#define SDMA_PKT_POLL_REGMEM_HEADER_hdp_flush_mask   0x00000001
+#define SDMA_PKT_POLL_REGMEM_HEADER_hdp_flush_shift  26
+#define SDMA_PKT_POLL_REGMEM_HEADER_HDP_FLUSH(x) (((x) & SDMA_PKT_POLL_REGMEM_HEADER_hdp_flush_mask) << SDMA_PKT_POLL_REGMEM_HEADER_hdp_flush_shift)
+
+/*define for func field*/
+#define SDMA_PKT_POLL_REGMEM_HEADER_func_offset 0
+#define SDMA_PKT_POLL_REGMEM_HEADER_func_mask   0x00000007
+#define SDMA_PKT_POLL_REGMEM_HEADER_func_shift  28
+#define SDMA_PKT_POLL_REGMEM_HEADER_FUNC(x) (((x) & SDMA_PKT_POLL_REGMEM_HEADER_func_mask) << SDMA_PKT_POLL_REGMEM_HEADER_func_shift)
+
+/*define for mem_poll field*/
+#define SDMA_PKT_POLL_REGMEM_HEADER_mem_poll_offset 0
+#define SDMA_PKT_POLL_REGMEM_HEADER_mem_poll_mask   0x00000001
+#define SDMA_PKT_POLL_REGMEM_HEADER_mem_poll_shift  31
+#define SDMA_PKT_POLL_REGMEM_HEADER_MEM_POLL(x) (((x) & SDMA_PKT_POLL_REGMEM_HEADER_mem_poll_mask) << SDMA_PKT_POLL_REGMEM_HEADER_mem_poll_shift)
+
+/*define for ADDR_LO word*/
+/*define for addr_31_0 field*/
+#define SDMA_PKT_POLL_REGMEM_ADDR_LO_addr_31_0_offset 1
+#define SDMA_PKT_POLL_REGMEM_ADDR_LO_addr_31_0_mask   0xFFFFFFFF
+#define SDMA_PKT_POLL_REGMEM_ADDR_LO_addr_31_0_shift  0
+#define SDMA_PKT_POLL_REGMEM_ADDR_LO_ADDR_31_0(x) (((x) & SDMA_PKT_POLL_REGMEM_ADDR_LO_addr_31_0_mask) << SDMA_PKT_POLL_REGMEM_ADDR_LO_addr_31_0_shift)
+
+/*define for ADDR_HI word*/
+/*define for addr_63_32 field*/
+#define SDMA_PKT_POLL_REGMEM_ADDR_HI_addr_63_32_offset 2
+#define SDMA_PKT_POLL_REGMEM_ADDR_HI_addr_63_32_mask   0xFFFFFFFF
+#define SDMA_PKT_POLL_REGMEM_ADDR_HI_addr_63_32_shift  0
+#define SDMA_PKT_POLL_REGMEM_ADDR_HI_ADDR_63_32(x) (((x) & SDMA_PKT_POLL_REGMEM_ADDR_HI_addr_63_32_mask) << SDMA_PKT_POLL_REGMEM_ADDR_HI_addr_63_32_shift)
+
+/*define for VALUE word*/
+/*define for value field*/
+#define SDMA_PKT_POLL_REGMEM_VALUE_value_offset 3
+#define SDMA_PKT_POLL_REGMEM_VALUE_value_mask   0xFFFFFFFF
+#define SDMA_PKT_POLL_REGMEM_VALUE_value_shift  0
+#define SDMA_PKT_POLL_REGMEM_VALUE_VALUE(x) (((x) & SDMA_PKT_POLL_REGMEM_VALUE_value_mask) << SDMA_PKT_POLL_REGMEM_VALUE_value_shift)
+
+/*define for MASK word*/
+/*define for mask field*/
+#define SDMA_PKT_POLL_REGMEM_MASK_mask_offset 4
+#define SDMA_PKT_POLL_REGMEM_MASK_mask_mask   0xFFFFFFFF
+#define SDMA_PKT_POLL_REGMEM_MASK_mask_shift  0
+#define SDMA_PKT_POLL_REGMEM_MASK_MASK(x) (((x) & SDMA_PKT_POLL_REGMEM_MASK_mask_mask) << SDMA_PKT_POLL_REGMEM_MASK_mask_shift)
+
+/*define for DW5 word*/
+/*define for interval field*/
+#define SDMA_PKT_POLL_REGMEM_DW5_interval_offset 5
+#define SDMA_PKT_POLL_REGMEM_DW5_interval_mask   0x0000FFFF
+#define SDMA_PKT_POLL_REGMEM_DW5_interval_shift  0
+#define SDMA_PKT_POLL_REGMEM_DW5_INTERVAL(x) (((x) & SDMA_PKT_POLL_REGMEM_DW5_interval_mask) << SDMA_PKT_POLL_REGMEM_DW5_interval_shift)
+
+/*define for retry_count field*/
+#define SDMA_PKT_POLL_REGMEM_DW5_retry_count_offset 5
+#define SDMA_PKT_POLL_REGMEM_DW5_retry_count_mask   0x00000FFF
+#define SDMA_PKT_POLL_REGMEM_DW5_retry_count_shift  16
+#define SDMA_PKT_POLL_REGMEM_DW5_RETRY_COUNT(x) (((x) & SDMA_PKT_POLL_REGMEM_DW5_retry_count_mask) << SDMA_PKT_POLL_REGMEM_DW5_retry_count_shift)
+
+
+/*
+** Definitions for SDMA_PKT_POLL_REG_WRITE_MEM packet
+*/
+
+/*define for HEADER word*/
+/*define for op field*/
+#define SDMA_PKT_POLL_REG_WRITE_MEM_HEADER_op_offset 0
+#define SDMA_PKT_POLL_REG_WRITE_MEM_HEADER_op_mask   0x000000FF
+#define SDMA_PKT_POLL_REG_WRITE_MEM_HEADER_op_shift  0
+#define SDMA_PKT_POLL_REG_WRITE_MEM_HEADER_OP(x) (((x) & SDMA_PKT_POLL_REG_WRITE_MEM_HEADER_op_mask) << SDMA_PKT_POLL_REG_WRITE_MEM_HEADER_op_shift)
+
+/*define for sub_op field*/
+#define SDMA_PKT_POLL_REG_WRITE_MEM_HEADER_sub_op_offset 0
+#define SDMA_PKT_POLL_REG_WRITE_MEM_HEADER_sub_op_mask   0x000000FF
+#define SDMA_PKT_POLL_REG_WRITE_MEM_HEADER_sub_op_shift  8
+#define SDMA_PKT_POLL_REG_WRITE_MEM_HEADER_SUB_OP(x) (((x) & SDMA_PKT_POLL_REG_WRITE_MEM_HEADER_sub_op_mask) << SDMA_PKT_POLL_REG_WRITE_MEM_HEADER_sub_op_shift)
+
+/*define for SRC_ADDR word*/
+/*define for addr_31_2 field*/
+#define SDMA_PKT_POLL_REG_WRITE_MEM_SRC_ADDR_addr_31_2_offset 1
+#define SDMA_PKT_POLL_REG_WRITE_MEM_SRC_ADDR_addr_31_2_mask   0x3FFFFFFF
+#define SDMA_PKT_POLL_REG_WRITE_MEM_SRC_ADDR_addr_31_2_shift  2
+#define SDMA_PKT_POLL_REG_WRITE_MEM_SRC_ADDR_ADDR_31_2(x) (((x) & SDMA_PKT_POLL_REG_WRITE_MEM_SRC_ADDR_addr_31_2_mask) << SDMA_PKT_POLL_REG_WRITE_MEM_SRC_ADDR_addr_31_2_shift)
+
+/*define for DST_ADDR_LO word*/
+/*define for addr_31_0 field*/
+#define SDMA_PKT_POLL_REG_WRITE_MEM_DST_ADDR_LO_addr_31_0_offset 2
+#define SDMA_PKT_POLL_REG_WRITE_MEM_DST_ADDR_LO_addr_31_0_mask   0xFFFFFFFF
+#define SDMA_PKT_POLL_REG_WRITE_MEM_DST_ADDR_LO_addr_31_0_shift  0
+#define SDMA_PKT_POLL_REG_WRITE_MEM_DST_ADDR_LO_ADDR_31_0(x) (((x) & SDMA_PKT_POLL_REG_WRITE_MEM_DST_ADDR_LO_addr_31_0_mask) << SDMA_PKT_POLL_REG_WRITE_MEM_DST_ADDR_LO_addr_31_0_shift)
+
+/*define for DST_ADDR_HI word*/
+/*define for addr_63_32 field*/
+#define SDMA_PKT_POLL_REG_WRITE_MEM_DST_ADDR_HI_addr_63_32_offset 3
+#define SDMA_PKT_POLL_REG_WRITE_MEM_DST_ADDR_HI_addr_63_32_mask   0xFFFFFFFF
+#define SDMA_PKT_POLL_REG_WRITE_MEM_DST_ADDR_HI_addr_63_32_shift  0
+#define SDMA_PKT_POLL_REG_WRITE_MEM_DST_ADDR_HI_ADDR_63_32(x) (((x) & SDMA_PKT_POLL_REG_WRITE_MEM_DST_ADDR_HI_addr_63_32_mask) << SDMA_PKT_POLL_REG_WRITE_MEM_DST_ADDR_HI_addr_63_32_shift)
+
+
+/*
+** Definitions for SDMA_PKT_POLL_DBIT_WRITE_MEM packet
+*/
+
+/*define for HEADER word*/
+/*define for op field*/
+#define SDMA_PKT_POLL_DBIT_WRITE_MEM_HEADER_op_offset 0
+#define SDMA_PKT_POLL_DBIT_WRITE_MEM_HEADER_op_mask   0x000000FF
+#define SDMA_PKT_POLL_DBIT_WRITE_MEM_HEADER_op_shift  0
+#define SDMA_PKT_POLL_DBIT_WRITE_MEM_HEADER_OP(x) (((x) & SDMA_PKT_POLL_DBIT_WRITE_MEM_HEADER_op_mask) << SDMA_PKT_POLL_DBIT_WRITE_MEM_HEADER_op_shift)
+
+/*define for sub_op field*/
+#define SDMA_PKT_POLL_DBIT_WRITE_MEM_HEADER_sub_op_offset 0
+#define SDMA_PKT_POLL_DBIT_WRITE_MEM_HEADER_sub_op_mask   0x000000FF
+#define SDMA_PKT_POLL_DBIT_WRITE_MEM_HEADER_sub_op_shift  8
+#define SDMA_PKT_POLL_DBIT_WRITE_MEM_HEADER_SUB_OP(x) (((x) & SDMA_PKT_POLL_DBIT_WRITE_MEM_HEADER_sub_op_mask) << SDMA_PKT_POLL_DBIT_WRITE_MEM_HEADER_sub_op_shift)
+
+/*define for ea field*/
+#define SDMA_PKT_POLL_DBIT_WRITE_MEM_HEADER_ea_offset 0
+#define SDMA_PKT_POLL_DBIT_WRITE_MEM_HEADER_ea_mask   0x00000003
+#define SDMA_PKT_POLL_DBIT_WRITE_MEM_HEADER_ea_shift  16
+#define SDMA_PKT_POLL_DBIT_WRITE_MEM_HEADER_EA(x) (((x) & SDMA_PKT_POLL_DBIT_WRITE_MEM_HEADER_ea_mask) << SDMA_PKT_POLL_DBIT_WRITE_MEM_HEADER_ea_shift)
+
+/*define for DST_ADDR_LO word*/
+/*define for addr_31_0 field*/
+#define SDMA_PKT_POLL_DBIT_WRITE_MEM_DST_ADDR_LO_addr_31_0_offset 1
+#define SDMA_PKT_POLL_DBIT_WRITE_MEM_DST_ADDR_LO_addr_31_0_mask   0xFFFFFFFF
+#define SDMA_PKT_POLL_DBIT_WRITE_MEM_DST_ADDR_LO_addr_31_0_shift  0
+#define SDMA_PKT_POLL_DBIT_WRITE_MEM_DST_ADDR_LO_ADDR_31_0(x) (((x) & SDMA_PKT_POLL_DBIT_WRITE_MEM_DST_ADDR_LO_addr_31_0_mask) << SDMA_PKT_POLL_DBIT_WRITE_MEM_DST_ADDR_LO_addr_31_0_shift)
+
+/*define for DST_ADDR_HI word*/
+/*define for addr_63_32 field*/
+#define SDMA_PKT_POLL_DBIT_WRITE_MEM_DST_ADDR_HI_addr_63_32_offset 2
+#define SDMA_PKT_POLL_DBIT_WRITE_MEM_DST_ADDR_HI_addr_63_32_mask   0xFFFFFFFF
+#define SDMA_PKT_POLL_DBIT_WRITE_MEM_DST_ADDR_HI_addr_63_32_shift  0
+#define SDMA_PKT_POLL_DBIT_WRITE_MEM_DST_ADDR_HI_ADDR_63_32(x) (((x) & SDMA_PKT_POLL_DBIT_WRITE_MEM_DST_ADDR_HI_addr_63_32_mask) << SDMA_PKT_POLL_DBIT_WRITE_MEM_DST_ADDR_HI_addr_63_32_shift)
+
+/*define for START_PAGE word*/
+/*define for addr_31_4 field*/
+#define SDMA_PKT_POLL_DBIT_WRITE_MEM_START_PAGE_addr_31_4_offset 3
+#define SDMA_PKT_POLL_DBIT_WRITE_MEM_START_PAGE_addr_31_4_mask   0x0FFFFFFF
+#define SDMA_PKT_POLL_DBIT_WRITE_MEM_START_PAGE_addr_31_4_shift  4
+#define SDMA_PKT_POLL_DBIT_WRITE_MEM_START_PAGE_ADDR_31_4(x) (((x) & SDMA_PKT_POLL_DBIT_WRITE_MEM_START_PAGE_addr_31_4_mask) << SDMA_PKT_POLL_DBIT_WRITE_MEM_START_PAGE_addr_31_4_shift)
+
+/*define for PAGE_NUM word*/
+/*define for page_num_31_0 field*/
+#define SDMA_PKT_POLL_DBIT_WRITE_MEM_PAGE_NUM_page_num_31_0_offset 4
+#define SDMA_PKT_POLL_DBIT_WRITE_MEM_PAGE_NUM_page_num_31_0_mask   0xFFFFFFFF
+#define SDMA_PKT_POLL_DBIT_WRITE_MEM_PAGE_NUM_page_num_31_0_shift  0
+#define SDMA_PKT_POLL_DBIT_WRITE_MEM_PAGE_NUM_PAGE_NUM_31_0(x) (((x) & SDMA_PKT_POLL_DBIT_WRITE_MEM_PAGE_NUM_page_num_31_0_mask) << SDMA_PKT_POLL_DBIT_WRITE_MEM_PAGE_NUM_page_num_31_0_shift)
+
+
+/*
+** Definitions for SDMA_PKT_POLL_MEM_VERIFY packet
+*/
+
+/*define for HEADER word*/
+/*define for op field*/
+#define SDMA_PKT_POLL_MEM_VERIFY_HEADER_op_offset 0
+#define SDMA_PKT_POLL_MEM_VERIFY_HEADER_op_mask   0x000000FF
+#define SDMA_PKT_POLL_MEM_VERIFY_HEADER_op_shift  0
+#define SDMA_PKT_POLL_MEM_VERIFY_HEADER_OP(x) (((x) & SDMA_PKT_POLL_MEM_VERIFY_HEADER_op_mask) << SDMA_PKT_POLL_MEM_VERIFY_HEADER_op_shift)
+
+/*define for sub_op field*/
+#define SDMA_PKT_POLL_MEM_VERIFY_HEADER_sub_op_offset 0
+#define SDMA_PKT_POLL_MEM_VERIFY_HEADER_sub_op_mask   0x000000FF
+#define SDMA_PKT_POLL_MEM_VERIFY_HEADER_sub_op_shift  8
+#define SDMA_PKT_POLL_MEM_VERIFY_HEADER_SUB_OP(x) (((x) & SDMA_PKT_POLL_MEM_VERIFY_HEADER_sub_op_mask) << SDMA_PKT_POLL_MEM_VERIFY_HEADER_sub_op_shift)
+
+/*define for mode field*/
+#define SDMA_PKT_POLL_MEM_VERIFY_HEADER_mode_offset 0
+#define SDMA_PKT_POLL_MEM_VERIFY_HEADER_mode_mask   0x00000001
+#define SDMA_PKT_POLL_MEM_VERIFY_HEADER_mode_shift  31
+#define SDMA_PKT_POLL_MEM_VERIFY_HEADER_MODE(x) (((x) & SDMA_PKT_POLL_MEM_VERIFY_HEADER_mode_mask) << SDMA_PKT_POLL_MEM_VERIFY_HEADER_mode_shift)
+
+/*define for PATTERN word*/
+/*define for pattern field*/
+#define SDMA_PKT_POLL_MEM_VERIFY_PATTERN_pattern_offset 1
+#define SDMA_PKT_POLL_MEM_VERIFY_PATTERN_pattern_mask   0xFFFFFFFF
+#define SDMA_PKT_POLL_MEM_VERIFY_PATTERN_pattern_shift  0
+#define SDMA_PKT_POLL_MEM_VERIFY_PATTERN_PATTERN(x) (((x) & SDMA_PKT_POLL_MEM_VERIFY_PATTERN_pattern_mask) << SDMA_PKT_POLL_MEM_VERIFY_PATTERN_pattern_shift)
+
+/*define for CMP0_ADDR_START_LO word*/
+/*define for cmp0_start_31_0 field*/
+#define SDMA_PKT_POLL_MEM_VERIFY_CMP0_ADDR_START_LO_cmp0_start_31_0_offset 2
+#define SDMA_PKT_POLL_MEM_VERIFY_CMP0_ADDR_START_LO_cmp0_start_31_0_mask   0xFFFFFFFF
+#define SDMA_PKT_POLL_MEM_VERIFY_CMP0_ADDR_START_LO_cmp0_start_31_0_shift  0
+#define SDMA_PKT_POLL_MEM_VERIFY_CMP0_ADDR_START_LO_CMP0_START_31_0(x) (((x) & SDMA_PKT_POLL_MEM_VERIFY_CMP0_ADDR_START_LO_cmp0_start_31_0_mask) << SDMA_PKT_POLL_MEM_VERIFY_CMP0_ADDR_START_LO_cmp0_start_31_0_shift)
+
+/*define for CMP0_ADDR_START_HI word*/
+/*define for cmp0_start_63_32 field*/
+#define SDMA_PKT_POLL_MEM_VERIFY_CMP0_ADDR_START_HI_cmp0_start_63_32_offset 3
+#define SDMA_PKT_POLL_MEM_VERIFY_CMP0_ADDR_START_HI_cmp0_start_63_32_mask   0xFFFFFFFF
+#define SDMA_PKT_POLL_MEM_VERIFY_CMP0_ADDR_START_HI_cmp0_start_63_32_shift  0
+#define SDMA_PKT_POLL_MEM_VERIFY_CMP0_ADDR_START_HI_CMP0_START_63_32(x) (((x) & SDMA_PKT_POLL_MEM_VERIFY_CMP0_ADDR_START_HI_cmp0_start_63_32_mask) << SDMA_PKT_POLL_MEM_VERIFY_CMP0_ADDR_START_HI_cmp0_start_63_32_shift)
+
+/*define for CMP0_ADDR_END_LO word*/
+/*define for cmp1_end_31_0 field*/
+#define SDMA_PKT_POLL_MEM_VERIFY_CMP0_ADDR_END_LO_cmp1_end_31_0_offset 4
+#define SDMA_PKT_POLL_MEM_VERIFY_CMP0_ADDR_END_LO_cmp1_end_31_0_mask   0xFFFFFFFF
+#define SDMA_PKT_POLL_MEM_VERIFY_CMP0_ADDR_END_LO_cmp1_end_31_0_shift  0
+#define SDMA_PKT_POLL_MEM_VERIFY_CMP0_ADDR_END_LO_CMP1_END_31_0(x) (((x) & SDMA_PKT_POLL_MEM_VERIFY_CMP0_ADDR_END_LO_cmp1_end_31_0_mask) << SDMA_PKT_POLL_MEM_VERIFY_CMP0_ADDR_END_LO_cmp1_end_31_0_shift)
+
+/*define for CMP0_ADDR_END_HI word*/
+/*define for cmp1_end_63_32 field*/
+#define SDMA_PKT_POLL_MEM_VERIFY_CMP0_ADDR_END_HI_cmp1_end_63_32_offset 5
+#define SDMA_PKT_POLL_MEM_VERIFY_CMP0_ADDR_END_HI_cmp1_end_63_32_mask   0xFFFFFFFF
+#define SDMA_PKT_POLL_MEM_VERIFY_CMP0_ADDR_END_HI_cmp1_end_63_32_shift  0
+#define SDMA_PKT_POLL_MEM_VERIFY_CMP0_ADDR_END_HI_CMP1_END_63_32(x) (((x) & SDMA_PKT_POLL_MEM_VERIFY_CMP0_ADDR_END_HI_cmp1_end_63_32_mask) << SDMA_PKT_POLL_MEM_VERIFY_CMP0_ADDR_END_HI_cmp1_end_63_32_shift)
+
+/*define for CMP1_ADDR_START_LO word*/
+/*define for cmp1_start_31_0 field*/
+#define SDMA_PKT_POLL_MEM_VERIFY_CMP1_ADDR_START_LO_cmp1_start_31_0_offset 6
+#define SDMA_PKT_POLL_MEM_VERIFY_CMP1_ADDR_START_LO_cmp1_start_31_0_mask   0xFFFFFFFF
+#define SDMA_PKT_POLL_MEM_VERIFY_CMP1_ADDR_START_LO_cmp1_start_31_0_shift  0
+#define SDMA_PKT_POLL_MEM_VERIFY_CMP1_ADDR_START_LO_CMP1_START_31_0(x) (((x) & SDMA_PKT_POLL_MEM_VERIFY_CMP1_ADDR_START_LO_cmp1_start_31_0_mask) << SDMA_PKT_POLL_MEM_VERIFY_CMP1_ADDR_START_LO_cmp1_start_31_0_shift)
+
+/*define for CMP1_ADDR_START_HI word*/
+/*define for cmp1_start_63_32 field*/
+#define SDMA_PKT_POLL_MEM_VERIFY_CMP1_ADDR_START_HI_cmp1_start_63_32_offset 7
+#define SDMA_PKT_POLL_MEM_VERIFY_CMP1_ADDR_START_HI_cmp1_start_63_32_mask   0xFFFFFFFF
+#define SDMA_PKT_POLL_MEM_VERIFY_CMP1_ADDR_START_HI_cmp1_start_63_32_shift  0
+#define SDMA_PKT_POLL_MEM_VERIFY_CMP1_ADDR_START_HI_CMP1_START_63_32(x) (((x) & SDMA_PKT_POLL_MEM_VERIFY_CMP1_ADDR_START_HI_cmp1_start_63_32_mask) << SDMA_PKT_POLL_MEM_VERIFY_CMP1_ADDR_START_HI_cmp1_start_63_32_shift)
+
+/*define for CMP1_ADDR_END_LO word*/
+/*define for cmp1_end_31_0 field*/
+#define SDMA_PKT_POLL_MEM_VERIFY_CMP1_ADDR_END_LO_cmp1_end_31_0_offset 8
+#define SDMA_PKT_POLL_MEM_VERIFY_CMP1_ADDR_END_LO_cmp1_end_31_0_mask   0xFFFFFFFF
+#define SDMA_PKT_POLL_MEM_VERIFY_CMP1_ADDR_END_LO_cmp1_end_31_0_shift  0
+#define SDMA_PKT_POLL_MEM_VERIFY_CMP1_ADDR_END_LO_CMP1_END_31_0(x) (((x) & SDMA_PKT_POLL_MEM_VERIFY_CMP1_ADDR_END_LO_cmp1_end_31_0_mask) << SDMA_PKT_POLL_MEM_VERIFY_CMP1_ADDR_END_LO_cmp1_end_31_0_shift)
+
+/*define for CMP1_ADDR_END_HI word*/
+/*define for cmp1_end_63_32 field*/
+#define SDMA_PKT_POLL_MEM_VERIFY_CMP1_ADDR_END_HI_cmp1_end_63_32_offset 9
+#define SDMA_PKT_POLL_MEM_VERIFY_CMP1_ADDR_END_HI_cmp1_end_63_32_mask   0xFFFFFFFF
+#define SDMA_PKT_POLL_MEM_VERIFY_CMP1_ADDR_END_HI_cmp1_end_63_32_shift  0
+#define SDMA_PKT_POLL_MEM_VERIFY_CMP1_ADDR_END_HI_CMP1_END_63_32(x) (((x) & SDMA_PKT_POLL_MEM_VERIFY_CMP1_ADDR_END_HI_cmp1_end_63_32_mask) << SDMA_PKT_POLL_MEM_VERIFY_CMP1_ADDR_END_HI_cmp1_end_63_32_shift)
+
+/*define for REC_ADDR_LO word*/
+/*define for rec_31_0 field*/
+#define SDMA_PKT_POLL_MEM_VERIFY_REC_ADDR_LO_rec_31_0_offset 10
+#define SDMA_PKT_POLL_MEM_VERIFY_REC_ADDR_LO_rec_31_0_mask   0xFFFFFFFF
+#define SDMA_PKT_POLL_MEM_VERIFY_REC_ADDR_LO_rec_31_0_shift  0
+#define SDMA_PKT_POLL_MEM_VERIFY_REC_ADDR_LO_REC_31_0(x) (((x) & SDMA_PKT_POLL_MEM_VERIFY_REC_ADDR_LO_rec_31_0_mask) << SDMA_PKT_POLL_MEM_VERIFY_REC_ADDR_LO_rec_31_0_shift)
+
+/*define for REC_ADDR_HI word*/
+/*define for rec_63_32 field*/
+#define SDMA_PKT_POLL_MEM_VERIFY_REC_ADDR_HI_rec_63_32_offset 11
+#define SDMA_PKT_POLL_MEM_VERIFY_REC_ADDR_HI_rec_63_32_mask   0xFFFFFFFF
+#define SDMA_PKT_POLL_MEM_VERIFY_REC_ADDR_HI_rec_63_32_shift  0
+#define SDMA_PKT_POLL_MEM_VERIFY_REC_ADDR_HI_REC_63_32(x) (((x) & SDMA_PKT_POLL_MEM_VERIFY_REC_ADDR_HI_rec_63_32_mask) << SDMA_PKT_POLL_MEM_VERIFY_REC_ADDR_HI_rec_63_32_shift)
+
+/*define for RESERVED word*/
+/*define for reserved field*/
+#define SDMA_PKT_POLL_MEM_VERIFY_RESERVED_reserved_offset 12
+#define SDMA_PKT_POLL_MEM_VERIFY_RESERVED_reserved_mask   0xFFFFFFFF
+#define SDMA_PKT_POLL_MEM_VERIFY_RESERVED_reserved_shift  0
+#define SDMA_PKT_POLL_MEM_VERIFY_RESERVED_RESERVED(x) (((x) & SDMA_PKT_POLL_MEM_VERIFY_RESERVED_reserved_mask) << SDMA_PKT_POLL_MEM_VERIFY_RESERVED_reserved_shift)
+
+
+/*
+** Definitions for SDMA_PKT_ATOMIC packet
+*/
+
+/*define for HEADER word*/
+/*define for op field*/
+#define SDMA_PKT_ATOMIC_HEADER_op_offset 0
+#define SDMA_PKT_ATOMIC_HEADER_op_mask   0x000000FF
+#define SDMA_PKT_ATOMIC_HEADER_op_shift  0
+#define SDMA_PKT_ATOMIC_HEADER_OP(x) (((x) & SDMA_PKT_ATOMIC_HEADER_op_mask) << SDMA_PKT_ATOMIC_HEADER_op_shift)
+
+/*define for loop field*/
+#define SDMA_PKT_ATOMIC_HEADER_loop_offset 0
+#define SDMA_PKT_ATOMIC_HEADER_loop_mask   0x00000001
+#define SDMA_PKT_ATOMIC_HEADER_loop_shift  16
+#define SDMA_PKT_ATOMIC_HEADER_LOOP(x) (((x) & SDMA_PKT_ATOMIC_HEADER_loop_mask) << SDMA_PKT_ATOMIC_HEADER_loop_shift)
+
+/*define for tmz field*/
+#define SDMA_PKT_ATOMIC_HEADER_tmz_offset 0
+#define SDMA_PKT_ATOMIC_HEADER_tmz_mask   0x00000001
+#define SDMA_PKT_ATOMIC_HEADER_tmz_shift  18
+#define SDMA_PKT_ATOMIC_HEADER_TMZ(x) (((x) & SDMA_PKT_ATOMIC_HEADER_tmz_mask) << SDMA_PKT_ATOMIC_HEADER_tmz_shift)
+
+/*define for atomic_op field*/
+#define SDMA_PKT_ATOMIC_HEADER_atomic_op_offset 0
+#define SDMA_PKT_ATOMIC_HEADER_atomic_op_mask   0x0000007F
+#define SDMA_PKT_ATOMIC_HEADER_atomic_op_shift  25
+#define SDMA_PKT_ATOMIC_HEADER_ATOMIC_OP(x) (((x) & SDMA_PKT_ATOMIC_HEADER_atomic_op_mask) << SDMA_PKT_ATOMIC_HEADER_atomic_op_shift)
+
+/*define for ADDR_LO word*/
+/*define for addr_31_0 field*/
+#define SDMA_PKT_ATOMIC_ADDR_LO_addr_31_0_offset 1
+#define SDMA_PKT_ATOMIC_ADDR_LO_addr_31_0_mask   0xFFFFFFFF
+#define SDMA_PKT_ATOMIC_ADDR_LO_addr_31_0_shift  0
+#define SDMA_PKT_ATOMIC_ADDR_LO_ADDR_31_0(x) (((x) & SDMA_PKT_ATOMIC_ADDR_LO_addr_31_0_mask) << SDMA_PKT_ATOMIC_ADDR_LO_addr_31_0_shift)
+
+/*define for ADDR_HI word*/
+/*define for addr_63_32 field*/
+#define SDMA_PKT_ATOMIC_ADDR_HI_addr_63_32_offset 2
+#define SDMA_PKT_ATOMIC_ADDR_HI_addr_63_32_mask   0xFFFFFFFF
+#define SDMA_PKT_ATOMIC_ADDR_HI_addr_63_32_shift  0
+#define SDMA_PKT_ATOMIC_ADDR_HI_ADDR_63_32(x) (((x) & SDMA_PKT_ATOMIC_ADDR_HI_addr_63_32_mask) << SDMA_PKT_ATOMIC_ADDR_HI_addr_63_32_shift)
+
+/*define for SRC_DATA_LO word*/
+/*define for src_data_31_0 field*/
+#define SDMA_PKT_ATOMIC_SRC_DATA_LO_src_data_31_0_offset 3
+#define SDMA_PKT_ATOMIC_SRC_DATA_LO_src_data_31_0_mask   0xFFFFFFFF
+#define SDMA_PKT_ATOMIC_SRC_DATA_LO_src_data_31_0_shift  0
+#define SDMA_PKT_ATOMIC_SRC_DATA_LO_SRC_DATA_31_0(x) (((x) & SDMA_PKT_ATOMIC_SRC_DATA_LO_src_data_31_0_mask) << SDMA_PKT_ATOMIC_SRC_DATA_LO_src_data_31_0_shift)
+
+/*define for SRC_DATA_HI word*/
+/*define for src_data_63_32 field*/
+#define SDMA_PKT_ATOMIC_SRC_DATA_HI_src_data_63_32_offset 4
+#define SDMA_PKT_ATOMIC_SRC_DATA_HI_src_data_63_32_mask   0xFFFFFFFF
+#define SDMA_PKT_ATOMIC_SRC_DATA_HI_src_data_63_32_shift  0
+#define SDMA_PKT_ATOMIC_SRC_DATA_HI_SRC_DATA_63_32(x) (((x) & SDMA_PKT_ATOMIC_SRC_DATA_HI_src_data_63_32_mask) << SDMA_PKT_ATOMIC_SRC_DATA_HI_src_data_63_32_shift)
+
+/*define for CMP_DATA_LO word*/
+/*define for cmp_data_31_0 field*/
+#define SDMA_PKT_ATOMIC_CMP_DATA_LO_cmp_data_31_0_offset 5
+#define SDMA_PKT_ATOMIC_CMP_DATA_LO_cmp_data_31_0_mask   0xFFFFFFFF
+#define SDMA_PKT_ATOMIC_CMP_DATA_LO_cmp_data_31_0_shift  0
+#define SDMA_PKT_ATOMIC_CMP_DATA_LO_CMP_DATA_31_0(x) (((x) & SDMA_PKT_ATOMIC_CMP_DATA_LO_cmp_data_31_0_mask) << SDMA_PKT_ATOMIC_CMP_DATA_LO_cmp_data_31_0_shift)
+
+/*define for CMP_DATA_HI word*/
+/*define for cmp_data_63_32 field*/
+#define SDMA_PKT_ATOMIC_CMP_DATA_HI_cmp_data_63_32_offset 6
+#define SDMA_PKT_ATOMIC_CMP_DATA_HI_cmp_data_63_32_mask   0xFFFFFFFF
+#define SDMA_PKT_ATOMIC_CMP_DATA_HI_cmp_data_63_32_shift  0
+#define SDMA_PKT_ATOMIC_CMP_DATA_HI_CMP_DATA_63_32(x) (((x) & SDMA_PKT_ATOMIC_CMP_DATA_HI_cmp_data_63_32_mask) << SDMA_PKT_ATOMIC_CMP_DATA_HI_cmp_data_63_32_shift)
+
+/*define for LOOP_INTERVAL word*/
+/*define for loop_interval field*/
+#define SDMA_PKT_ATOMIC_LOOP_INTERVAL_loop_interval_offset 7
+#define SDMA_PKT_ATOMIC_LOOP_INTERVAL_loop_interval_mask   0x00001FFF
+#define SDMA_PKT_ATOMIC_LOOP_INTERVAL_loop_interval_shift  0
+#define SDMA_PKT_ATOMIC_LOOP_INTERVAL_LOOP_INTERVAL(x) (((x) & SDMA_PKT_ATOMIC_LOOP_INTERVAL_loop_interval_mask) << SDMA_PKT_ATOMIC_LOOP_INTERVAL_loop_interval_shift)
+
+
+/*
+** Definitions for SDMA_PKT_TIMESTAMP_SET packet
+*/
+
+/*define for HEADER word*/
+/*define for op field*/
+#define SDMA_PKT_TIMESTAMP_SET_HEADER_op_offset 0
+#define SDMA_PKT_TIMESTAMP_SET_HEADER_op_mask   0x000000FF
+#define SDMA_PKT_TIMESTAMP_SET_HEADER_op_shift  0
+#define SDMA_PKT_TIMESTAMP_SET_HEADER_OP(x) (((x) & SDMA_PKT_TIMESTAMP_SET_HEADER_op_mask) << SDMA_PKT_TIMESTAMP_SET_HEADER_op_shift)
+
+/*define for sub_op field*/
+#define SDMA_PKT_TIMESTAMP_SET_HEADER_sub_op_offset 0
+#define SDMA_PKT_TIMESTAMP_SET_HEADER_sub_op_mask   0x000000FF
+#define SDMA_PKT_TIMESTAMP_SET_HEADER_sub_op_shift  8
+#define SDMA_PKT_TIMESTAMP_SET_HEADER_SUB_OP(x) (((x) & SDMA_PKT_TIMESTAMP_SET_HEADER_sub_op_mask) << SDMA_PKT_TIMESTAMP_SET_HEADER_sub_op_shift)
+
+/*define for INIT_DATA_LO word*/
+/*define for init_data_31_0 field*/
+#define SDMA_PKT_TIMESTAMP_SET_INIT_DATA_LO_init_data_31_0_offset 1
+#define SDMA_PKT_TIMESTAMP_SET_INIT_DATA_LO_init_data_31_0_mask   0xFFFFFFFF
+#define SDMA_PKT_TIMESTAMP_SET_INIT_DATA_LO_init_data_31_0_shift  0
+#define SDMA_PKT_TIMESTAMP_SET_INIT_DATA_LO_INIT_DATA_31_0(x) (((x) & SDMA_PKT_TIMESTAMP_SET_INIT_DATA_LO_init_data_31_0_mask) << SDMA_PKT_TIMESTAMP_SET_INIT_DATA_LO_init_data_31_0_shift)
+
+/*define for INIT_DATA_HI word*/
+/*define for init_data_63_32 field*/
+#define SDMA_PKT_TIMESTAMP_SET_INIT_DATA_HI_init_data_63_32_offset 2
+#define SDMA_PKT_TIMESTAMP_SET_INIT_DATA_HI_init_data_63_32_mask   0xFFFFFFFF
+#define SDMA_PKT_TIMESTAMP_SET_INIT_DATA_HI_init_data_63_32_shift  0
+#define SDMA_PKT_TIMESTAMP_SET_INIT_DATA_HI_INIT_DATA_63_32(x) (((x) & SDMA_PKT_TIMESTAMP_SET_INIT_DATA_HI_init_data_63_32_mask) << SDMA_PKT_TIMESTAMP_SET_INIT_DATA_HI_init_data_63_32_shift)
+
+
+/*
+** Definitions for SDMA_PKT_TIMESTAMP_GET packet
+*/
+
+/*define for HEADER word*/
+/*define for op field*/
+#define SDMA_PKT_TIMESTAMP_GET_HEADER_op_offset 0
+#define SDMA_PKT_TIMESTAMP_GET_HEADER_op_mask   0x000000FF
+#define SDMA_PKT_TIMESTAMP_GET_HEADER_op_shift  0
+#define SDMA_PKT_TIMESTAMP_GET_HEADER_OP(x) (((x) & SDMA_PKT_TIMESTAMP_GET_HEADER_op_mask) << SDMA_PKT_TIMESTAMP_GET_HEADER_op_shift)
+
+/*define for sub_op field*/
+#define SDMA_PKT_TIMESTAMP_GET_HEADER_sub_op_offset 0
+#define SDMA_PKT_TIMESTAMP_GET_HEADER_sub_op_mask   0x000000FF
+#define SDMA_PKT_TIMESTAMP_GET_HEADER_sub_op_shift  8
+#define SDMA_PKT_TIMESTAMP_GET_HEADER_SUB_OP(x) (((x) & SDMA_PKT_TIMESTAMP_GET_HEADER_sub_op_mask) << SDMA_PKT_TIMESTAMP_GET_HEADER_sub_op_shift)
+
+/*define for WRITE_ADDR_LO word*/
+/*define for write_addr_31_3 field*/
+#define SDMA_PKT_TIMESTAMP_GET_WRITE_ADDR_LO_write_addr_31_3_offset 1
+#define SDMA_PKT_TIMESTAMP_GET_WRITE_ADDR_LO_write_addr_31_3_mask   0x1FFFFFFF
+#define SDMA_PKT_TIMESTAMP_GET_WRITE_ADDR_LO_write_addr_31_3_shift  3
+#define SDMA_PKT_TIMESTAMP_GET_WRITE_ADDR_LO_WRITE_ADDR_31_3(x) (((x) & SDMA_PKT_TIMESTAMP_GET_WRITE_ADDR_LO_write_addr_31_3_mask) << SDMA_PKT_TIMESTAMP_GET_WRITE_ADDR_LO_write_addr_31_3_shift)
+
+/*define for WRITE_ADDR_HI word*/
+/*define for write_addr_63_32 field*/
+#define SDMA_PKT_TIMESTAMP_GET_WRITE_ADDR_HI_write_addr_63_32_offset 2
+#define SDMA_PKT_TIMESTAMP_GET_WRITE_ADDR_HI_write_addr_63_32_mask   0xFFFFFFFF
+#define SDMA_PKT_TIMESTAMP_GET_WRITE_ADDR_HI_write_addr_63_32_shift  0
+#define SDMA_PKT_TIMESTAMP_GET_WRITE_ADDR_HI_WRITE_ADDR_63_32(x) (((x) & SDMA_PKT_TIMESTAMP_GET_WRITE_ADDR_HI_write_addr_63_32_mask) << SDMA_PKT_TIMESTAMP_GET_WRITE_ADDR_HI_write_addr_63_32_shift)
+
+
+/*
+** Definitions for SDMA_PKT_TIMESTAMP_GET_GLOBAL packet
+*/
+
+/*define for HEADER word*/
+/*define for op field*/
+#define SDMA_PKT_TIMESTAMP_GET_GLOBAL_HEADER_op_offset 0
+#define SDMA_PKT_TIMESTAMP_GET_GLOBAL_HEADER_op_mask   0x000000FF
+#define SDMA_PKT_TIMESTAMP_GET_GLOBAL_HEADER_op_shift  0
+#define SDMA_PKT_TIMESTAMP_GET_GLOBAL_HEADER_OP(x) (((x) & SDMA_PKT_TIMESTAMP_GET_GLOBAL_HEADER_op_mask) << SDMA_PKT_TIMESTAMP_GET_GLOBAL_HEADER_op_shift)
+
+/*define for sub_op field*/
+#define SDMA_PKT_TIMESTAMP_GET_GLOBAL_HEADER_sub_op_offset 0
+#define SDMA_PKT_TIMESTAMP_GET_GLOBAL_HEADER_sub_op_mask   0x000000FF
+#define SDMA_PKT_TIMESTAMP_GET_GLOBAL_HEADER_sub_op_shift  8
+#define SDMA_PKT_TIMESTAMP_GET_GLOBAL_HEADER_SUB_OP(x) (((x) & SDMA_PKT_TIMESTAMP_GET_GLOBAL_HEADER_sub_op_mask) << SDMA_PKT_TIMESTAMP_GET_GLOBAL_HEADER_sub_op_shift)
+
+/*define for WRITE_ADDR_LO word*/
+/*define for write_addr_31_3 field*/
+#define SDMA_PKT_TIMESTAMP_GET_GLOBAL_WRITE_ADDR_LO_write_addr_31_3_offset 1
+#define SDMA_PKT_TIMESTAMP_GET_GLOBAL_WRITE_ADDR_LO_write_addr_31_3_mask   0x1FFFFFFF
+#define SDMA_PKT_TIMESTAMP_GET_GLOBAL_WRITE_ADDR_LO_write_addr_31_3_shift  3
+#define SDMA_PKT_TIMESTAMP_GET_GLOBAL_WRITE_ADDR_LO_WRITE_ADDR_31_3(x) (((x) & SDMA_PKT_TIMESTAMP_GET_GLOBAL_WRITE_ADDR_LO_write_addr_31_3_mask) << SDMA_PKT_TIMESTAMP_GET_GLOBAL_WRITE_ADDR_LO_write_addr_31_3_shift)
+
+/*define for WRITE_ADDR_HI word*/
+/*define for write_addr_63_32 field*/
+#define SDMA_PKT_TIMESTAMP_GET_GLOBAL_WRITE_ADDR_HI_write_addr_63_32_offset 2
+#define SDMA_PKT_TIMESTAMP_GET_GLOBAL_WRITE_ADDR_HI_write_addr_63_32_mask   0xFFFFFFFF
+#define SDMA_PKT_TIMESTAMP_GET_GLOBAL_WRITE_ADDR_HI_write_addr_63_32_shift  0
+#define SDMA_PKT_TIMESTAMP_GET_GLOBAL_WRITE_ADDR_HI_WRITE_ADDR_63_32(x) (((x) & SDMA_PKT_TIMESTAMP_GET_GLOBAL_WRITE_ADDR_HI_write_addr_63_32_mask) << SDMA_PKT_TIMESTAMP_GET_GLOBAL_WRITE_ADDR_HI_write_addr_63_32_shift)
+
+
+/*
+** Definitions for SDMA_PKT_TRAP packet
+*/
+
+/*define for HEADER word*/
+/*define for op field*/
+#define SDMA_PKT_TRAP_HEADER_op_offset 0
+#define SDMA_PKT_TRAP_HEADER_op_mask   0x000000FF
+#define SDMA_PKT_TRAP_HEADER_op_shift  0
+#define SDMA_PKT_TRAP_HEADER_OP(x) (((x) & SDMA_PKT_TRAP_HEADER_op_mask) << SDMA_PKT_TRAP_HEADER_op_shift)
+
+/*define for sub_op field*/
+#define SDMA_PKT_TRAP_HEADER_sub_op_offset 0
+#define SDMA_PKT_TRAP_HEADER_sub_op_mask   0x000000FF
+#define SDMA_PKT_TRAP_HEADER_sub_op_shift  8
+#define SDMA_PKT_TRAP_HEADER_SUB_OP(x) (((x) & SDMA_PKT_TRAP_HEADER_sub_op_mask) << SDMA_PKT_TRAP_HEADER_sub_op_shift)
+
+/*define for INT_CONTEXT word*/
+/*define for int_context field*/
+#define SDMA_PKT_TRAP_INT_CONTEXT_int_context_offset 1
+#define SDMA_PKT_TRAP_INT_CONTEXT_int_context_mask   0x0FFFFFFF
+#define SDMA_PKT_TRAP_INT_CONTEXT_int_context_shift  0
+#define SDMA_PKT_TRAP_INT_CONTEXT_INT_CONTEXT(x) (((x) & SDMA_PKT_TRAP_INT_CONTEXT_int_context_mask) << SDMA_PKT_TRAP_INT_CONTEXT_int_context_shift)
+
+
+/*
+** Definitions for SDMA_PKT_DUMMY_TRAP packet
+*/
+
+/*define for HEADER word*/
+/*define for op field*/
+#define SDMA_PKT_DUMMY_TRAP_HEADER_op_offset 0
+#define SDMA_PKT_DUMMY_TRAP_HEADER_op_mask   0x000000FF
+#define SDMA_PKT_DUMMY_TRAP_HEADER_op_shift  0
+#define SDMA_PKT_DUMMY_TRAP_HEADER_OP(x) (((x) & SDMA_PKT_DUMMY_TRAP_HEADER_op_mask) << SDMA_PKT_DUMMY_TRAP_HEADER_op_shift)
+
+/*define for sub_op field*/
+#define SDMA_PKT_DUMMY_TRAP_HEADER_sub_op_offset 0
+#define SDMA_PKT_DUMMY_TRAP_HEADER_sub_op_mask   0x000000FF
+#define SDMA_PKT_DUMMY_TRAP_HEADER_sub_op_shift  8
+#define SDMA_PKT_DUMMY_TRAP_HEADER_SUB_OP(x) (((x) & SDMA_PKT_DUMMY_TRAP_HEADER_sub_op_mask) << SDMA_PKT_DUMMY_TRAP_HEADER_sub_op_shift)
+
+/*define for INT_CONTEXT word*/
+/*define for int_context field*/
+#define SDMA_PKT_DUMMY_TRAP_INT_CONTEXT_int_context_offset 1
+#define SDMA_PKT_DUMMY_TRAP_INT_CONTEXT_int_context_mask   0x0FFFFFFF
+#define SDMA_PKT_DUMMY_TRAP_INT_CONTEXT_int_context_shift  0
+#define SDMA_PKT_DUMMY_TRAP_INT_CONTEXT_INT_CONTEXT(x) (((x) & SDMA_PKT_DUMMY_TRAP_INT_CONTEXT_int_context_mask) << SDMA_PKT_DUMMY_TRAP_INT_CONTEXT_int_context_shift)
+
+
+/*
+** Definitions for SDMA_PKT_NOP packet
+*/
+
+/*define for HEADER word*/
+/*define for op field*/
+#define SDMA_PKT_NOP_HEADER_op_offset 0
+#define SDMA_PKT_NOP_HEADER_op_mask   0x000000FF
+#define SDMA_PKT_NOP_HEADER_op_shift  0
+#define SDMA_PKT_NOP_HEADER_OP(x) (((x) & SDMA_PKT_NOP_HEADER_op_mask) << SDMA_PKT_NOP_HEADER_op_shift)
+
+/*define for sub_op field*/
+#define SDMA_PKT_NOP_HEADER_sub_op_offset 0
+#define SDMA_PKT_NOP_HEADER_sub_op_mask   0x000000FF
+#define SDMA_PKT_NOP_HEADER_sub_op_shift  8
+#define SDMA_PKT_NOP_HEADER_SUB_OP(x) (((x) & SDMA_PKT_NOP_HEADER_sub_op_mask) << SDMA_PKT_NOP_HEADER_sub_op_shift)
+
+/*define for count field*/
+#define SDMA_PKT_NOP_HEADER_count_offset 0
+#define SDMA_PKT_NOP_HEADER_count_mask   0x00003FFF
+#define SDMA_PKT_NOP_HEADER_count_shift  16
+#define SDMA_PKT_NOP_HEADER_COUNT(x) (((x) & SDMA_PKT_NOP_HEADER_count_mask) << SDMA_PKT_NOP_HEADER_count_shift)
+
+/*define for DATA0 word*/
+/*define for data0 field*/
+#define SDMA_PKT_NOP_DATA0_data0_offset 1
+#define SDMA_PKT_NOP_DATA0_data0_mask   0xFFFFFFFF
+#define SDMA_PKT_NOP_DATA0_data0_shift  0
+#define SDMA_PKT_NOP_DATA0_DATA0(x) (((x) & SDMA_PKT_NOP_DATA0_data0_mask) << SDMA_PKT_NOP_DATA0_data0_shift)
+
+
+/*
+** Definitions for SDMA_AQL_PKT_HEADER packet
+*/
+
+/*define for HEADER word*/
+/*define for format field*/
+#define SDMA_AQL_PKT_HEADER_HEADER_format_offset 0
+#define SDMA_AQL_PKT_HEADER_HEADER_format_mask   0x000000FF
+#define SDMA_AQL_PKT_HEADER_HEADER_format_shift  0
+#define SDMA_AQL_PKT_HEADER_HEADER_FORMAT(x) (((x) & SDMA_AQL_PKT_HEADER_HEADER_format_mask) << SDMA_AQL_PKT_HEADER_HEADER_format_shift)
+
+/*define for barrier field*/
+#define SDMA_AQL_PKT_HEADER_HEADER_barrier_offset 0
+#define SDMA_AQL_PKT_HEADER_HEADER_barrier_mask   0x00000001
+#define SDMA_AQL_PKT_HEADER_HEADER_barrier_shift  8
+#define SDMA_AQL_PKT_HEADER_HEADER_BARRIER(x) (((x) & SDMA_AQL_PKT_HEADER_HEADER_barrier_mask) << SDMA_AQL_PKT_HEADER_HEADER_barrier_shift)
+
+/*define for acquire_fence_scope field*/
+#define SDMA_AQL_PKT_HEADER_HEADER_acquire_fence_scope_offset 0
+#define SDMA_AQL_PKT_HEADER_HEADER_acquire_fence_scope_mask   0x00000003
+#define SDMA_AQL_PKT_HEADER_HEADER_acquire_fence_scope_shift  9
+#define SDMA_AQL_PKT_HEADER_HEADER_ACQUIRE_FENCE_SCOPE(x) (((x) & SDMA_AQL_PKT_HEADER_HEADER_acquire_fence_scope_mask) << SDMA_AQL_PKT_HEADER_HEADER_acquire_fence_scope_shift)
+
+/*define for release_fence_scope field*/
+#define SDMA_AQL_PKT_HEADER_HEADER_release_fence_scope_offset 0
+#define SDMA_AQL_PKT_HEADER_HEADER_release_fence_scope_mask   0x00000003
+#define SDMA_AQL_PKT_HEADER_HEADER_release_fence_scope_shift  11
+#define SDMA_AQL_PKT_HEADER_HEADER_RELEASE_FENCE_SCOPE(x) (((x) & SDMA_AQL_PKT_HEADER_HEADER_release_fence_scope_mask) << SDMA_AQL_PKT_HEADER_HEADER_release_fence_scope_shift)
+
+/*define for reserved field*/
+#define SDMA_AQL_PKT_HEADER_HEADER_reserved_offset 0
+#define SDMA_AQL_PKT_HEADER_HEADER_reserved_mask   0x00000007
+#define SDMA_AQL_PKT_HEADER_HEADER_reserved_shift  13
+#define SDMA_AQL_PKT_HEADER_HEADER_RESERVED(x) (((x) & SDMA_AQL_PKT_HEADER_HEADER_reserved_mask) << SDMA_AQL_PKT_HEADER_HEADER_reserved_shift)
+
+/*define for op field*/
+#define SDMA_AQL_PKT_HEADER_HEADER_op_offset 0
+#define SDMA_AQL_PKT_HEADER_HEADER_op_mask   0x0000000F
+#define SDMA_AQL_PKT_HEADER_HEADER_op_shift  16
+#define SDMA_AQL_PKT_HEADER_HEADER_OP(x) (((x) & SDMA_AQL_PKT_HEADER_HEADER_op_mask) << SDMA_AQL_PKT_HEADER_HEADER_op_shift)
+
+/*define for subop field*/
+#define SDMA_AQL_PKT_HEADER_HEADER_subop_offset 0
+#define SDMA_AQL_PKT_HEADER_HEADER_subop_mask   0x00000007
+#define SDMA_AQL_PKT_HEADER_HEADER_subop_shift  20
+#define SDMA_AQL_PKT_HEADER_HEADER_SUBOP(x) (((x) & SDMA_AQL_PKT_HEADER_HEADER_subop_mask) << SDMA_AQL_PKT_HEADER_HEADER_subop_shift)
+
+
+/*
+** Definitions for SDMA_AQL_PKT_COPY_LINEAR packet
+*/
+
+/*define for HEADER word*/
+/*define for format field*/
+#define SDMA_AQL_PKT_COPY_LINEAR_HEADER_format_offset 0
+#define SDMA_AQL_PKT_COPY_LINEAR_HEADER_format_mask   0x000000FF
+#define SDMA_AQL_PKT_COPY_LINEAR_HEADER_format_shift  0
+#define SDMA_AQL_PKT_COPY_LINEAR_HEADER_FORMAT(x) (((x) & SDMA_AQL_PKT_COPY_LINEAR_HEADER_format_mask) << SDMA_AQL_PKT_COPY_LINEAR_HEADER_format_shift)
+
+/*define for barrier field*/
+#define SDMA_AQL_PKT_COPY_LINEAR_HEADER_barrier_offset 0
+#define SDMA_AQL_PKT_COPY_LINEAR_HEADER_barrier_mask   0x00000001
+#define SDMA_AQL_PKT_COPY_LINEAR_HEADER_barrier_shift  8
+#define SDMA_AQL_PKT_COPY_LINEAR_HEADER_BARRIER(x) (((x) & SDMA_AQL_PKT_COPY_LINEAR_HEADER_barrier_mask) << SDMA_AQL_PKT_COPY_LINEAR_HEADER_barrier_shift)
+
+/*define for acquire_fence_scope field*/
+#define SDMA_AQL_PKT_COPY_LINEAR_HEADER_acquire_fence_scope_offset 0
+#define SDMA_AQL_PKT_COPY_LINEAR_HEADER_acquire_fence_scope_mask   0x00000003
+#define SDMA_AQL_PKT_COPY_LINEAR_HEADER_acquire_fence_scope_shift  9
+#define SDMA_AQL_PKT_COPY_LINEAR_HEADER_ACQUIRE_FENCE_SCOPE(x) (((x) & SDMA_AQL_PKT_COPY_LINEAR_HEADER_acquire_fence_scope_mask) << SDMA_AQL_PKT_COPY_LINEAR_HEADER_acquire_fence_scope_shift)
+
+/*define for release_fence_scope field*/
+#define SDMA_AQL_PKT_COPY_LINEAR_HEADER_release_fence_scope_offset 0
+#define SDMA_AQL_PKT_COPY_LINEAR_HEADER_release_fence_scope_mask   0x00000003
+#define SDMA_AQL_PKT_COPY_LINEAR_HEADER_release_fence_scope_shift  11
+#define SDMA_AQL_PKT_COPY_LINEAR_HEADER_RELEASE_FENCE_SCOPE(x) (((x) & SDMA_AQL_PKT_COPY_LINEAR_HEADER_release_fence_scope_mask) << SDMA_AQL_PKT_COPY_LINEAR_HEADER_release_fence_scope_shift)
+
+/*define for reserved field*/
+#define SDMA_AQL_PKT_COPY_LINEAR_HEADER_reserved_offset 0
+#define SDMA_AQL_PKT_COPY_LINEAR_HEADER_reserved_mask   0x00000007
+#define SDMA_AQL_PKT_COPY_LINEAR_HEADER_reserved_shift  13
+#define SDMA_AQL_PKT_COPY_LINEAR_HEADER_RESERVED(x) (((x) & SDMA_AQL_PKT_COPY_LINEAR_HEADER_reserved_mask) << SDMA_AQL_PKT_COPY_LINEAR_HEADER_reserved_shift)
+
+/*define for op field*/
+#define SDMA_AQL_PKT_COPY_LINEAR_HEADER_op_offset 0
+#define SDMA_AQL_PKT_COPY_LINEAR_HEADER_op_mask   0x0000000F
+#define SDMA_AQL_PKT_COPY_LINEAR_HEADER_op_shift  16
+#define SDMA_AQL_PKT_COPY_LINEAR_HEADER_OP(x) (((x) & SDMA_AQL_PKT_COPY_LINEAR_HEADER_op_mask) << SDMA_AQL_PKT_COPY_LINEAR_HEADER_op_shift)
+
+/*define for subop field*/
+#define SDMA_AQL_PKT_COPY_LINEAR_HEADER_subop_offset 0
+#define SDMA_AQL_PKT_COPY_LINEAR_HEADER_subop_mask   0x00000007
+#define SDMA_AQL_PKT_COPY_LINEAR_HEADER_subop_shift  20
+#define SDMA_AQL_PKT_COPY_LINEAR_HEADER_SUBOP(x) (((x) & SDMA_AQL_PKT_COPY_LINEAR_HEADER_subop_mask) << SDMA_AQL_PKT_COPY_LINEAR_HEADER_subop_shift)
+
+/*define for RESERVED_DW1 word*/
+/*define for reserved_dw1 field*/
+#define SDMA_AQL_PKT_COPY_LINEAR_RESERVED_DW1_reserved_dw1_offset 1
+#define SDMA_AQL_PKT_COPY_LINEAR_RESERVED_DW1_reserved_dw1_mask   0xFFFFFFFF
+#define SDMA_AQL_PKT_COPY_LINEAR_RESERVED_DW1_reserved_dw1_shift  0
+#define SDMA_AQL_PKT_COPY_LINEAR_RESERVED_DW1_RESERVED_DW1(x) (((x) & SDMA_AQL_PKT_COPY_LINEAR_RESERVED_DW1_reserved_dw1_mask) << SDMA_AQL_PKT_COPY_LINEAR_RESERVED_DW1_reserved_dw1_shift)
+
+/*define for RETURN_ADDR_LO word*/
+/*define for return_addr_31_0 field*/
+#define SDMA_AQL_PKT_COPY_LINEAR_RETURN_ADDR_LO_return_addr_31_0_offset 2
+#define SDMA_AQL_PKT_COPY_LINEAR_RETURN_ADDR_LO_return_addr_31_0_mask   0xFFFFFFFF
+#define SDMA_AQL_PKT_COPY_LINEAR_RETURN_ADDR_LO_return_addr_31_0_shift  0
+#define SDMA_AQL_PKT_COPY_LINEAR_RETURN_ADDR_LO_RETURN_ADDR_31_0(x) (((x) & SDMA_AQL_PKT_COPY_LINEAR_RETURN_ADDR_LO_return_addr_31_0_mask) << SDMA_AQL_PKT_COPY_LINEAR_RETURN_ADDR_LO_return_addr_31_0_shift)
+
+/*define for RETURN_ADDR_HI word*/
+/*define for return_addr_63_32 field*/
+#define SDMA_AQL_PKT_COPY_LINEAR_RETURN_ADDR_HI_return_addr_63_32_offset 3
+#define SDMA_AQL_PKT_COPY_LINEAR_RETURN_ADDR_HI_return_addr_63_32_mask   0xFFFFFFFF
+#define SDMA_AQL_PKT_COPY_LINEAR_RETURN_ADDR_HI_return_addr_63_32_shift  0
+#define SDMA_AQL_PKT_COPY_LINEAR_RETURN_ADDR_HI_RETURN_ADDR_63_32(x) (((x) & SDMA_AQL_PKT_COPY_LINEAR_RETURN_ADDR_HI_return_addr_63_32_mask) << SDMA_AQL_PKT_COPY_LINEAR_RETURN_ADDR_HI_return_addr_63_32_shift)
+
+/*define for COUNT word*/
+/*define for count field*/
+#define SDMA_AQL_PKT_COPY_LINEAR_COUNT_count_offset 4
+#define SDMA_AQL_PKT_COPY_LINEAR_COUNT_count_mask   0x003FFFFF
+#define SDMA_AQL_PKT_COPY_LINEAR_COUNT_count_shift  0
+#define SDMA_AQL_PKT_COPY_LINEAR_COUNT_COUNT(x) (((x) & SDMA_AQL_PKT_COPY_LINEAR_COUNT_count_mask) << SDMA_AQL_PKT_COPY_LINEAR_COUNT_count_shift)
+
+/*define for PARAMETER word*/
+/*define for dst_sw field*/
+#define SDMA_AQL_PKT_COPY_LINEAR_PARAMETER_dst_sw_offset 5
+#define SDMA_AQL_PKT_COPY_LINEAR_PARAMETER_dst_sw_mask   0x00000003
+#define SDMA_AQL_PKT_COPY_LINEAR_PARAMETER_dst_sw_shift  16
+#define SDMA_AQL_PKT_COPY_LINEAR_PARAMETER_DST_SW(x) (((x) & SDMA_AQL_PKT_COPY_LINEAR_PARAMETER_dst_sw_mask) << SDMA_AQL_PKT_COPY_LINEAR_PARAMETER_dst_sw_shift)
+
+/*define for src_sw field*/
+#define SDMA_AQL_PKT_COPY_LINEAR_PARAMETER_src_sw_offset 5
+#define SDMA_AQL_PKT_COPY_LINEAR_PARAMETER_src_sw_mask   0x00000003
+#define SDMA_AQL_PKT_COPY_LINEAR_PARAMETER_src_sw_shift  24
+#define SDMA_AQL_PKT_COPY_LINEAR_PARAMETER_SRC_SW(x) (((x) & SDMA_AQL_PKT_COPY_LINEAR_PARAMETER_src_sw_mask) << SDMA_AQL_PKT_COPY_LINEAR_PARAMETER_src_sw_shift)
+
+/*define for SRC_ADDR_LO word*/
+/*define for src_addr_31_0 field*/
+#define SDMA_AQL_PKT_COPY_LINEAR_SRC_ADDR_LO_src_addr_31_0_offset 6
+#define SDMA_AQL_PKT_COPY_LINEAR_SRC_ADDR_LO_src_addr_31_0_mask   0xFFFFFFFF
+#define SDMA_AQL_PKT_COPY_LINEAR_SRC_ADDR_LO_src_addr_31_0_shift  0
+#define SDMA_AQL_PKT_COPY_LINEAR_SRC_ADDR_LO_SRC_ADDR_31_0(x) (((x) & SDMA_AQL_PKT_COPY_LINEAR_SRC_ADDR_LO_src_addr_31_0_mask) << SDMA_AQL_PKT_COPY_LINEAR_SRC_ADDR_LO_src_addr_31_0_shift)
+
+/*define for SRC_ADDR_HI word*/
+/*define for src_addr_63_32 field*/
+#define SDMA_AQL_PKT_COPY_LINEAR_SRC_ADDR_HI_src_addr_63_32_offset 7
+#define SDMA_AQL_PKT_COPY_LINEAR_SRC_ADDR_HI_src_addr_63_32_mask   0xFFFFFFFF
+#define SDMA_AQL_PKT_COPY_LINEAR_SRC_ADDR_HI_src_addr_63_32_shift  0
+#define SDMA_AQL_PKT_COPY_LINEAR_SRC_ADDR_HI_SRC_ADDR_63_32(x) (((x) & SDMA_AQL_PKT_COPY_LINEAR_SRC_ADDR_HI_src_addr_63_32_mask) << SDMA_AQL_PKT_COPY_LINEAR_SRC_ADDR_HI_src_addr_63_32_shift)
+
+/*define for DST_ADDR_LO word*/
+/*define for dst_addr_31_0 field*/
+#define SDMA_AQL_PKT_COPY_LINEAR_DST_ADDR_LO_dst_addr_31_0_offset 8
+#define SDMA_AQL_PKT_COPY_LINEAR_DST_ADDR_LO_dst_addr_31_0_mask   0xFFFFFFFF
+#define SDMA_AQL_PKT_COPY_LINEAR_DST_ADDR_LO_dst_addr_31_0_shift  0
+#define SDMA_AQL_PKT_COPY_LINEAR_DST_ADDR_LO_DST_ADDR_31_0(x) (((x) & SDMA_AQL_PKT_COPY_LINEAR_DST_ADDR_LO_dst_addr_31_0_mask) << SDMA_AQL_PKT_COPY_LINEAR_DST_ADDR_LO_dst_addr_31_0_shift)
+
+/*define for DST_ADDR_HI word*/
+/*define for dst_addr_63_32 field*/
+#define SDMA_AQL_PKT_COPY_LINEAR_DST_ADDR_HI_dst_addr_63_32_offset 9
+#define SDMA_AQL_PKT_COPY_LINEAR_DST_ADDR_HI_dst_addr_63_32_mask   0xFFFFFFFF
+#define SDMA_AQL_PKT_COPY_LINEAR_DST_ADDR_HI_dst_addr_63_32_shift  0
+#define SDMA_AQL_PKT_COPY_LINEAR_DST_ADDR_HI_DST_ADDR_63_32(x) (((x) & SDMA_AQL_PKT_COPY_LINEAR_DST_ADDR_HI_dst_addr_63_32_mask) << SDMA_AQL_PKT_COPY_LINEAR_DST_ADDR_HI_dst_addr_63_32_shift)
+
+/*define for RESERVED_DW10 word*/
+/*define for reserved_dw10 field*/
+#define SDMA_AQL_PKT_COPY_LINEAR_RESERVED_DW10_reserved_dw10_offset 10
+#define SDMA_AQL_PKT_COPY_LINEAR_RESERVED_DW10_reserved_dw10_mask   0xFFFFFFFF
+#define SDMA_AQL_PKT_COPY_LINEAR_RESERVED_DW10_reserved_dw10_shift  0
+#define SDMA_AQL_PKT_COPY_LINEAR_RESERVED_DW10_RESERVED_DW10(x) (((x) & SDMA_AQL_PKT_COPY_LINEAR_RESERVED_DW10_reserved_dw10_mask) << SDMA_AQL_PKT_COPY_LINEAR_RESERVED_DW10_reserved_dw10_shift)
+
+/*define for RESERVED_DW11 word*/
+/*define for reserved_dw11 field*/
+#define SDMA_AQL_PKT_COPY_LINEAR_RESERVED_DW11_reserved_dw11_offset 11
+#define SDMA_AQL_PKT_COPY_LINEAR_RESERVED_DW11_reserved_dw11_mask   0xFFFFFFFF
+#define SDMA_AQL_PKT_COPY_LINEAR_RESERVED_DW11_reserved_dw11_shift  0
+#define SDMA_AQL_PKT_COPY_LINEAR_RESERVED_DW11_RESERVED_DW11(x) (((x) & SDMA_AQL_PKT_COPY_LINEAR_RESERVED_DW11_reserved_dw11_mask) << SDMA_AQL_PKT_COPY_LINEAR_RESERVED_DW11_reserved_dw11_shift)
+
+/*define for RESERVED_DW12 word*/
+/*define for reserved_dw12 field*/
+#define SDMA_AQL_PKT_COPY_LINEAR_RESERVED_DW12_reserved_dw12_offset 12
+#define SDMA_AQL_PKT_COPY_LINEAR_RESERVED_DW12_reserved_dw12_mask   0xFFFFFFFF
+#define SDMA_AQL_PKT_COPY_LINEAR_RESERVED_DW12_reserved_dw12_shift  0
+#define SDMA_AQL_PKT_COPY_LINEAR_RESERVED_DW12_RESERVED_DW12(x) (((x) & SDMA_AQL_PKT_COPY_LINEAR_RESERVED_DW12_reserved_dw12_mask) << SDMA_AQL_PKT_COPY_LINEAR_RESERVED_DW12_reserved_dw12_shift)
+
+/*define for RESERVED_DW13 word*/
+/*define for reserved_dw13 field*/
+#define SDMA_AQL_PKT_COPY_LINEAR_RESERVED_DW13_reserved_dw13_offset 13
+#define SDMA_AQL_PKT_COPY_LINEAR_RESERVED_DW13_reserved_dw13_mask   0xFFFFFFFF
+#define SDMA_AQL_PKT_COPY_LINEAR_RESERVED_DW13_reserved_dw13_shift  0
+#define SDMA_AQL_PKT_COPY_LINEAR_RESERVED_DW13_RESERVED_DW13(x) (((x) & SDMA_AQL_PKT_COPY_LINEAR_RESERVED_DW13_reserved_dw13_mask) << SDMA_AQL_PKT_COPY_LINEAR_RESERVED_DW13_reserved_dw13_shift)
+
+/*define for COMPLETION_SIGNAL_LO word*/
+/*define for completion_signal_31_0 field*/
+#define SDMA_AQL_PKT_COPY_LINEAR_COMPLETION_SIGNAL_LO_completion_signal_31_0_offset 14
+#define SDMA_AQL_PKT_COPY_LINEAR_COMPLETION_SIGNAL_LO_completion_signal_31_0_mask   0xFFFFFFFF
+#define SDMA_AQL_PKT_COPY_LINEAR_COMPLETION_SIGNAL_LO_completion_signal_31_0_shift  0
+#define SDMA_AQL_PKT_COPY_LINEAR_COMPLETION_SIGNAL_LO_COMPLETION_SIGNAL_31_0(x) (((x) & SDMA_AQL_PKT_COPY_LINEAR_COMPLETION_SIGNAL_LO_completion_signal_31_0_mask) << SDMA_AQL_PKT_COPY_LINEAR_COMPLETION_SIGNAL_LO_completion_signal_31_0_shift)
+
+/*define for COMPLETION_SIGNAL_HI word*/
+/*define for completion_signal_63_32 field*/
+#define SDMA_AQL_PKT_COPY_LINEAR_COMPLETION_SIGNAL_HI_completion_signal_63_32_offset 15
+#define SDMA_AQL_PKT_COPY_LINEAR_COMPLETION_SIGNAL_HI_completion_signal_63_32_mask   0xFFFFFFFF
+#define SDMA_AQL_PKT_COPY_LINEAR_COMPLETION_SIGNAL_HI_completion_signal_63_32_shift  0
+#define SDMA_AQL_PKT_COPY_LINEAR_COMPLETION_SIGNAL_HI_COMPLETION_SIGNAL_63_32(x) (((x) & SDMA_AQL_PKT_COPY_LINEAR_COMPLETION_SIGNAL_HI_completion_signal_63_32_mask) << SDMA_AQL_PKT_COPY_LINEAR_COMPLETION_SIGNAL_HI_completion_signal_63_32_shift)
+
+
+/*
+** Definitions for SDMA_AQL_PKT_BARRIER_OR packet
+*/
+
+/*define for HEADER word*/
+/*define for format field*/
+#define SDMA_AQL_PKT_BARRIER_OR_HEADER_format_offset 0
+#define SDMA_AQL_PKT_BARRIER_OR_HEADER_format_mask   0x000000FF
+#define SDMA_AQL_PKT_BARRIER_OR_HEADER_format_shift  0
+#define SDMA_AQL_PKT_BARRIER_OR_HEADER_FORMAT(x) (((x) & SDMA_AQL_PKT_BARRIER_OR_HEADER_format_mask) << SDMA_AQL_PKT_BARRIER_OR_HEADER_format_shift)
+
+/*define for barrier field*/
+#define SDMA_AQL_PKT_BARRIER_OR_HEADER_barrier_offset 0
+#define SDMA_AQL_PKT_BARRIER_OR_HEADER_barrier_mask   0x00000001
+#define SDMA_AQL_PKT_BARRIER_OR_HEADER_barrier_shift  8
+#define SDMA_AQL_PKT_BARRIER_OR_HEADER_BARRIER(x) (((x) & SDMA_AQL_PKT_BARRIER_OR_HEADER_barrier_mask) << SDMA_AQL_PKT_BARRIER_OR_HEADER_barrier_shift)
+
+/*define for acquire_fence_scope field*/
+#define SDMA_AQL_PKT_BARRIER_OR_HEADER_acquire_fence_scope_offset 0
+#define SDMA_AQL_PKT_BARRIER_OR_HEADER_acquire_fence_scope_mask   0x00000003
+#define SDMA_AQL_PKT_BARRIER_OR_HEADER_acquire_fence_scope_shift  9
+#define SDMA_AQL_PKT_BARRIER_OR_HEADER_ACQUIRE_FENCE_SCOPE(x) (((x) & SDMA_AQL_PKT_BARRIER_OR_HEADER_acquire_fence_scope_mask) << SDMA_AQL_PKT_BARRIER_OR_HEADER_acquire_fence_scope_shift)
+
+/*define for release_fence_scope field*/
+#define SDMA_AQL_PKT_BARRIER_OR_HEADER_release_fence_scope_offset 0
+#define SDMA_AQL_PKT_BARRIER_OR_HEADER_release_fence_scope_mask   0x00000003
+#define SDMA_AQL_PKT_BARRIER_OR_HEADER_release_fence_scope_shift  11
+#define SDMA_AQL_PKT_BARRIER_OR_HEADER_RELEASE_FENCE_SCOPE(x) (((x) & SDMA_AQL_PKT_BARRIER_OR_HEADER_release_fence_scope_mask) << SDMA_AQL_PKT_BARRIER_OR_HEADER_release_fence_scope_shift)
+
+/*define for reserved field*/
+#define SDMA_AQL_PKT_BARRIER_OR_HEADER_reserved_offset 0
+#define SDMA_AQL_PKT_BARRIER_OR_HEADER_reserved_mask   0x00000007
+#define SDMA_AQL_PKT_BARRIER_OR_HEADER_reserved_shift  13
+#define SDMA_AQL_PKT_BARRIER_OR_HEADER_RESERVED(x) (((x) & SDMA_AQL_PKT_BARRIER_OR_HEADER_reserved_mask) << SDMA_AQL_PKT_BARRIER_OR_HEADER_reserved_shift)
+
+/*define for op field*/
+#define SDMA_AQL_PKT_BARRIER_OR_HEADER_op_offset 0
+#define SDMA_AQL_PKT_BARRIER_OR_HEADER_op_mask   0x0000000F
+#define SDMA_AQL_PKT_BARRIER_OR_HEADER_op_shift  16
+#define SDMA_AQL_PKT_BARRIER_OR_HEADER_OP(x) (((x) & SDMA_AQL_PKT_BARRIER_OR_HEADER_op_mask) << SDMA_AQL_PKT_BARRIER_OR_HEADER_op_shift)
+
+/*define for subop field*/
+#define SDMA_AQL_PKT_BARRIER_OR_HEADER_subop_offset 0
+#define SDMA_AQL_PKT_BARRIER_OR_HEADER_subop_mask   0x00000007
+#define SDMA_AQL_PKT_BARRIER_OR_HEADER_subop_shift  20
+#define SDMA_AQL_PKT_BARRIER_OR_HEADER_SUBOP(x) (((x) & SDMA_AQL_PKT_BARRIER_OR_HEADER_subop_mask) << SDMA_AQL_PKT_BARRIER_OR_HEADER_subop_shift)
+
+/*define for RESERVED_DW1 word*/
+/*define for reserved_dw1 field*/
+#define SDMA_AQL_PKT_BARRIER_OR_RESERVED_DW1_reserved_dw1_offset 1
+#define SDMA_AQL_PKT_BARRIER_OR_RESERVED_DW1_reserved_dw1_mask   0xFFFFFFFF
+#define SDMA_AQL_PKT_BARRIER_OR_RESERVED_DW1_reserved_dw1_shift  0
+#define SDMA_AQL_PKT_BARRIER_OR_RESERVED_DW1_RESERVED_DW1(x) (((x) & SDMA_AQL_PKT_BARRIER_OR_RESERVED_DW1_reserved_dw1_mask) << SDMA_AQL_PKT_BARRIER_OR_RESERVED_DW1_reserved_dw1_shift)
+
+/*define for DEPENDENT_ADDR_0_LO word*/
+/*define for dependent_addr_0_31_0 field*/
+#define SDMA_AQL_PKT_BARRIER_OR_DEPENDENT_ADDR_0_LO_dependent_addr_0_31_0_offset 2
+#define SDMA_AQL_PKT_BARRIER_OR_DEPENDENT_ADDR_0_LO_dependent_addr_0_31_0_mask   0xFFFFFFFF
+#define SDMA_AQL_PKT_BARRIER_OR_DEPENDENT_ADDR_0_LO_dependent_addr_0_31_0_shift  0
+#define SDMA_AQL_PKT_BARRIER_OR_DEPENDENT_ADDR_0_LO_DEPENDENT_ADDR_0_31_0(x) (((x) & SDMA_AQL_PKT_BARRIER_OR_DEPENDENT_ADDR_0_LO_dependent_addr_0_31_0_mask) << SDMA_AQL_PKT_BARRIER_OR_DEPENDENT_ADDR_0_LO_dependent_addr_0_31_0_shift)
+
+/*define for DEPENDENT_ADDR_0_HI word*/
+/*define for dependent_addr_0_63_32 field*/
+#define SDMA_AQL_PKT_BARRIER_OR_DEPENDENT_ADDR_0_HI_dependent_addr_0_63_32_offset 3
+#define SDMA_AQL_PKT_BARRIER_OR_DEPENDENT_ADDR_0_HI_dependent_addr_0_63_32_mask   0xFFFFFFFF
+#define SDMA_AQL_PKT_BARRIER_OR_DEPENDENT_ADDR_0_HI_dependent_addr_0_63_32_shift  0
+#define SDMA_AQL_PKT_BARRIER_OR_DEPENDENT_ADDR_0_HI_DEPENDENT_ADDR_0_63_32(x) (((x) & SDMA_AQL_PKT_BARRIER_OR_DEPENDENT_ADDR_0_HI_dependent_addr_0_63_32_mask) << SDMA_AQL_PKT_BARRIER_OR_DEPENDENT_ADDR_0_HI_dependent_addr_0_63_32_shift)
+
+/*define for DEPENDENT_ADDR_1_LO word*/
+/*define for dependent_addr_1_31_0 field*/
+#define SDMA_AQL_PKT_BARRIER_OR_DEPENDENT_ADDR_1_LO_dependent_addr_1_31_0_offset 4
+#define SDMA_AQL_PKT_BARRIER_OR_DEPENDENT_ADDR_1_LO_dependent_addr_1_31_0_mask   0xFFFFFFFF
+#define SDMA_AQL_PKT_BARRIER_OR_DEPENDENT_ADDR_1_LO_dependent_addr_1_31_0_shift  0
+#define SDMA_AQL_PKT_BARRIER_OR_DEPENDENT_ADDR_1_LO_DEPENDENT_ADDR_1_31_0(x) (((x) & SDMA_AQL_PKT_BARRIER_OR_DEPENDENT_ADDR_1_LO_dependent_addr_1_31_0_mask) << SDMA_AQL_PKT_BARRIER_OR_DEPENDENT_ADDR_1_LO_dependent_addr_1_31_0_shift)
+
+/*define for DEPENDENT_ADDR_1_HI word*/
+/*define for dependent_addr_1_63_32 field*/
+#define SDMA_AQL_PKT_BARRIER_OR_DEPENDENT_ADDR_1_HI_dependent_addr_1_63_32_offset 5
+#define SDMA_AQL_PKT_BARRIER_OR_DEPENDENT_ADDR_1_HI_dependent_addr_1_63_32_mask   0xFFFFFFFF
+#define SDMA_AQL_PKT_BARRIER_OR_DEPENDENT_ADDR_1_HI_dependent_addr_1_63_32_shift  0
+#define SDMA_AQL_PKT_BARRIER_OR_DEPENDENT_ADDR_1_HI_DEPENDENT_ADDR_1_63_32(x) (((x) & SDMA_AQL_PKT_BARRIER_OR_DEPENDENT_ADDR_1_HI_dependent_addr_1_63_32_mask) << SDMA_AQL_PKT_BARRIER_OR_DEPENDENT_ADDR_1_HI_dependent_addr_1_63_32_shift)
+
+/*define for DEPENDENT_ADDR_2_LO word*/
+/*define for dependent_addr_2_31_0 field*/
+#define SDMA_AQL_PKT_BARRIER_OR_DEPENDENT_ADDR_2_LO_dependent_addr_2_31_0_offset 6
+#define SDMA_AQL_PKT_BARRIER_OR_DEPENDENT_ADDR_2_LO_dependent_addr_2_31_0_mask   0xFFFFFFFF
+#define SDMA_AQL_PKT_BARRIER_OR_DEPENDENT_ADDR_2_LO_dependent_addr_2_31_0_shift  0
+#define SDMA_AQL_PKT_BARRIER_OR_DEPENDENT_ADDR_2_LO_DEPENDENT_ADDR_2_31_0(x) (((x) & SDMA_AQL_PKT_BARRIER_OR_DEPENDENT_ADDR_2_LO_dependent_addr_2_31_0_mask) << SDMA_AQL_PKT_BARRIER_OR_DEPENDENT_ADDR_2_LO_dependent_addr_2_31_0_shift)
+
+/*define for DEPENDENT_ADDR_2_HI word*/
+/*define for dependent_addr_2_63_32 field*/
+#define SDMA_AQL_PKT_BARRIER_OR_DEPENDENT_ADDR_2_HI_dependent_addr_2_63_32_offset 7
+#define SDMA_AQL_PKT_BARRIER_OR_DEPENDENT_ADDR_2_HI_dependent_addr_2_63_32_mask   0xFFFFFFFF
+#define SDMA_AQL_PKT_BARRIER_OR_DEPENDENT_ADDR_2_HI_dependent_addr_2_63_32_shift  0
+#define SDMA_AQL_PKT_BARRIER_OR_DEPENDENT_ADDR_2_HI_DEPENDENT_ADDR_2_63_32(x) (((x) & SDMA_AQL_PKT_BARRIER_OR_DEPENDENT_ADDR_2_HI_dependent_addr_2_63_32_mask) << SDMA_AQL_PKT_BARRIER_OR_DEPENDENT_ADDR_2_HI_dependent_addr_2_63_32_shift)
+
+/*define for DEPENDENT_ADDR_3_LO word*/
+/*define for dependent_addr_3_31_0 field*/
+#define SDMA_AQL_PKT_BARRIER_OR_DEPENDENT_ADDR_3_LO_dependent_addr_3_31_0_offset 8
+#define SDMA_AQL_PKT_BARRIER_OR_DEPENDENT_ADDR_3_LO_dependent_addr_3_31_0_mask   0xFFFFFFFF
+#define SDMA_AQL_PKT_BARRIER_OR_DEPENDENT_ADDR_3_LO_dependent_addr_3_31_0_shift  0
+#define SDMA_AQL_PKT_BARRIER_OR_DEPENDENT_ADDR_3_LO_DEPENDENT_ADDR_3_31_0(x) (((x) & SDMA_AQL_PKT_BARRIER_OR_DEPENDENT_ADDR_3_LO_dependent_addr_3_31_0_mask) << SDMA_AQL_PKT_BARRIER_OR_DEPENDENT_ADDR_3_LO_dependent_addr_3_31_0_shift)
+
+/*define for DEPENDENT_ADDR_3_HI word*/
+/*define for dependent_addr_3_63_32 field*/
+#define SDMA_AQL_PKT_BARRIER_OR_DEPENDENT_ADDR_3_HI_dependent_addr_3_63_32_offset 9
+#define SDMA_AQL_PKT_BARRIER_OR_DEPENDENT_ADDR_3_HI_dependent_addr_3_63_32_mask   0xFFFFFFFF
+#define SDMA_AQL_PKT_BARRIER_OR_DEPENDENT_ADDR_3_HI_dependent_addr_3_63_32_shift  0
+#define SDMA_AQL_PKT_BARRIER_OR_DEPENDENT_ADDR_3_HI_DEPENDENT_ADDR_3_63_32(x) (((x) & SDMA_AQL_PKT_BARRIER_OR_DEPENDENT_ADDR_3_HI_dependent_addr_3_63_32_mask) << SDMA_AQL_PKT_BARRIER_OR_DEPENDENT_ADDR_3_HI_dependent_addr_3_63_32_shift)
+
+/*define for DEPENDENT_ADDR_4_LO word*/
+/*define for dependent_addr_4_31_0 field*/
+#define SDMA_AQL_PKT_BARRIER_OR_DEPENDENT_ADDR_4_LO_dependent_addr_4_31_0_offset 10
+#define SDMA_AQL_PKT_BARRIER_OR_DEPENDENT_ADDR_4_LO_dependent_addr_4_31_0_mask   0xFFFFFFFF
+#define SDMA_AQL_PKT_BARRIER_OR_DEPENDENT_ADDR_4_LO_dependent_addr_4_31_0_shift  0
+#define SDMA_AQL_PKT_BARRIER_OR_DEPENDENT_ADDR_4_LO_DEPENDENT_ADDR_4_31_0(x) (((x) & SDMA_AQL_PKT_BARRIER_OR_DEPENDENT_ADDR_4_LO_dependent_addr_4_31_0_mask) << SDMA_AQL_PKT_BARRIER_OR_DEPENDENT_ADDR_4_LO_dependent_addr_4_31_0_shift)
+
+/*define for DEPENDENT_ADDR_4_HI word*/
+/*define for dependent_addr_4_63_32 field*/
+#define SDMA_AQL_PKT_BARRIER_OR_DEPENDENT_ADDR_4_HI_dependent_addr_4_63_32_offset 11
+#define SDMA_AQL_PKT_BARRIER_OR_DEPENDENT_ADDR_4_HI_dependent_addr_4_63_32_mask   0xFFFFFFFF
+#define SDMA_AQL_PKT_BARRIER_OR_DEPENDENT_ADDR_4_HI_dependent_addr_4_63_32_shift  0
+#define SDMA_AQL_PKT_BARRIER_OR_DEPENDENT_ADDR_4_HI_DEPENDENT_ADDR_4_63_32(x) (((x) & SDMA_AQL_PKT_BARRIER_OR_DEPENDENT_ADDR_4_HI_dependent_addr_4_63_32_mask) << SDMA_AQL_PKT_BARRIER_OR_DEPENDENT_ADDR_4_HI_dependent_addr_4_63_32_shift)
+
+/*define for RESERVED_DW12 word*/
+/*define for reserved_dw12 field*/
+#define SDMA_AQL_PKT_BARRIER_OR_RESERVED_DW12_reserved_dw12_offset 12
+#define SDMA_AQL_PKT_BARRIER_OR_RESERVED_DW12_reserved_dw12_mask   0xFFFFFFFF
+#define SDMA_AQL_PKT_BARRIER_OR_RESERVED_DW12_reserved_dw12_shift  0
+#define SDMA_AQL_PKT_BARRIER_OR_RESERVED_DW12_RESERVED_DW12(x) (((x) & SDMA_AQL_PKT_BARRIER_OR_RESERVED_DW12_reserved_dw12_mask) << SDMA_AQL_PKT_BARRIER_OR_RESERVED_DW12_reserved_dw12_shift)
+
+/*define for RESERVED_DW13 word*/
+/*define for reserved_dw13 field*/
+#define SDMA_AQL_PKT_BARRIER_OR_RESERVED_DW13_reserved_dw13_offset 13
+#define SDMA_AQL_PKT_BARRIER_OR_RESERVED_DW13_reserved_dw13_mask   0xFFFFFFFF
+#define SDMA_AQL_PKT_BARRIER_OR_RESERVED_DW13_reserved_dw13_shift  0
+#define SDMA_AQL_PKT_BARRIER_OR_RESERVED_DW13_RESERVED_DW13(x) (((x) & SDMA_AQL_PKT_BARRIER_OR_RESERVED_DW13_reserved_dw13_mask) << SDMA_AQL_PKT_BARRIER_OR_RESERVED_DW13_reserved_dw13_shift)
+
+/*define for COMPLETION_SIGNAL_LO word*/
+/*define for completion_signal_31_0 field*/
+#define SDMA_AQL_PKT_BARRIER_OR_COMPLETION_SIGNAL_LO_completion_signal_31_0_offset 14
+#define SDMA_AQL_PKT_BARRIER_OR_COMPLETION_SIGNAL_LO_completion_signal_31_0_mask   0xFFFFFFFF
+#define SDMA_AQL_PKT_BARRIER_OR_COMPLETION_SIGNAL_LO_completion_signal_31_0_shift  0
+#define SDMA_AQL_PKT_BARRIER_OR_COMPLETION_SIGNAL_LO_COMPLETION_SIGNAL_31_0(x) (((x) & SDMA_AQL_PKT_BARRIER_OR_COMPLETION_SIGNAL_LO_completion_signal_31_0_mask) << SDMA_AQL_PKT_BARRIER_OR_COMPLETION_SIGNAL_LO_completion_signal_31_0_shift)
+
+/*define for COMPLETION_SIGNAL_HI word*/
+/*define for completion_signal_63_32 field*/
+#define SDMA_AQL_PKT_BARRIER_OR_COMPLETION_SIGNAL_HI_completion_signal_63_32_offset 15
+#define SDMA_AQL_PKT_BARRIER_OR_COMPLETION_SIGNAL_HI_completion_signal_63_32_mask   0xFFFFFFFF
+#define SDMA_AQL_PKT_BARRIER_OR_COMPLETION_SIGNAL_HI_completion_signal_63_32_shift  0
+#define SDMA_AQL_PKT_BARRIER_OR_COMPLETION_SIGNAL_HI_COMPLETION_SIGNAL_63_32(x) (((x) & SDMA_AQL_PKT_BARRIER_OR_COMPLETION_SIGNAL_HI_completion_signal_63_32_mask) << SDMA_AQL_PKT_BARRIER_OR_COMPLETION_SIGNAL_HI_completion_signal_63_32_shift)
+
+
+#endif /* __SDMA_PKT_OPEN_H_ */
-- 
2.5.5

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


* [PATCH 024/100] drm/amdgpu: add common soc15 headers
       [not found] ` <1490041835-11255-1-git-send-email-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (7 preceding siblings ...)
  2017-03-20 20:29   ` [PATCH 023/100] drm/amdgpu: add SDMA 4.0 packet header Alex Deucher
@ 2017-03-20 20:29   ` Alex Deucher
  2017-03-20 20:29   ` [PATCH 025/100] drm/amdgpu: add vega10 chip name Alex Deucher
                     ` (76 subsequent siblings)
  85 siblings, 0 replies; 101+ messages in thread
From: Alex Deucher @ 2017-03-20 20:29 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Alex Deucher, Ken Wang

From: Ken Wang <Qingqing.Wang@amd.com>

These are used by various IP modules.

Signed-off-by: Ken Wang <Qingqing.Wang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/soc15.h        |  35 ++++
 drivers/gpu/drm/amd/amdgpu/soc15_common.h |  57 ++++++
 drivers/gpu/drm/amd/amdgpu/soc15d.h       | 285 ++++++++++++++++++++++++++++++
 3 files changed, 377 insertions(+)
 create mode 100644 drivers/gpu/drm/amd/amdgpu/soc15.h
 create mode 100644 drivers/gpu/drm/amd/amdgpu/soc15_common.h
 create mode 100644 drivers/gpu/drm/amd/amdgpu/soc15d.h

diff --git a/drivers/gpu/drm/amd/amdgpu/soc15.h b/drivers/gpu/drm/amd/amdgpu/soc15.h
new file mode 100644
index 0000000..378a46d
--- /dev/null
+++ b/drivers/gpu/drm/amd/amdgpu/soc15.h
@@ -0,0 +1,35 @@
+/*
+ * Copyright 2016 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __SOC15_H__
+#define __SOC15_H__
+
+#include "nbio_v6_1.h"
+
+extern const struct amd_ip_funcs soc15_common_ip_funcs;
+
+void soc15_grbm_select(struct amdgpu_device *adev,
+		    u32 me, u32 pipe, u32 queue, u32 vmid);
+int soc15_set_ip_blocks(struct amdgpu_device *adev);
+
+#endif
diff --git a/drivers/gpu/drm/amd/amdgpu/soc15_common.h b/drivers/gpu/drm/amd/amdgpu/soc15_common.h
new file mode 100644
index 0000000..2b96c80
--- /dev/null
+++ b/drivers/gpu/drm/amd/amdgpu/soc15_common.h
@@ -0,0 +1,57 @@
+/*
+ * Copyright 2016 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __SOC15_COMMON_H__
+#define __SOC15_COMMON_H__
+
+struct nbio_hdp_flush_reg {
+	u32 hdp_flush_req_offset;
+	u32 hdp_flush_done_offset;
+	u32 ref_and_mask_cp0;
+	u32 ref_and_mask_cp1;
+	u32 ref_and_mask_cp2;
+	u32 ref_and_mask_cp3;
+	u32 ref_and_mask_cp4;
+	u32 ref_and_mask_cp5;
+	u32 ref_and_mask_cp6;
+	u32 ref_and_mask_cp7;
+	u32 ref_and_mask_cp8;
+	u32 ref_and_mask_cp9;
+	u32 ref_and_mask_sdma0;
+	u32 ref_and_mask_sdma1;
+};
+
+struct nbio_pcie_index_data {
+	u32 index_offset;
+	u32 data_offset;
+};
+/* Register access macro */
+#define SOC15_REG_OFFSET(ip, inst, reg)       (0 == reg##_BASE_IDX ? ip##_BASE__INST##inst##_SEG0 + reg : \
+                                                (1 == reg##_BASE_IDX ? ip##_BASE__INST##inst##_SEG1 + reg : \
+                                                    (2 == reg##_BASE_IDX ? ip##_BASE__INST##inst##_SEG2 + reg : \
+                                                        (3 == reg##_BASE_IDX ? ip##_BASE__INST##inst##_SEG3 + reg : \
+                                                            (ip##_BASE__INST##inst##_SEG4 + reg)))))
+
+#endif
+
+
diff --git a/drivers/gpu/drm/amd/amdgpu/soc15d.h b/drivers/gpu/drm/amd/amdgpu/soc15d.h
new file mode 100644
index 0000000..c47715d
--- /dev/null
+++ b/drivers/gpu/drm/amd/amdgpu/soc15d.h
@@ -0,0 +1,285 @@
+/*
+ * Copyright 2014 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+#ifndef SOC15D_H
+#define SOC15D_H
+
+#define GFX9_NUM_GFX_RINGS     1
+#define GFX9_NUM_COMPUTE_RINGS 8
+
+/*
+ * PM4
+ */
+#define	PACKET_TYPE0	0
+#define	PACKET_TYPE1	1
+#define	PACKET_TYPE2	2
+#define	PACKET_TYPE3	3
+
+#define CP_PACKET_GET_TYPE(h) (((h) >> 30) & 3)
+#define CP_PACKET_GET_COUNT(h) (((h) >> 16) & 0x3FFF)
+#define CP_PACKET0_GET_REG(h) ((h) & 0xFFFF)
+#define CP_PACKET3_GET_OPCODE(h) (((h) >> 8) & 0xFF)
+#define PACKET0(reg, n)	((PACKET_TYPE0 << 30) |				\
+			 ((reg) & 0xFFFF) |			\
+			 ((n) & 0x3FFF) << 16)
+#define CP_PACKET2			0x80000000
+#define		PACKET2_PAD_SHIFT		0
+#define		PACKET2_PAD_MASK		(0x3fffffff << 0)
+
+#define PACKET2(v)	(CP_PACKET2 | REG_SET(PACKET2_PAD, (v)))
+
+#define PACKET3(op, n)	((PACKET_TYPE3 << 30) |				\
+			 (((op) & 0xFF) << 8) |				\
+			 ((n) & 0x3FFF) << 16)
+
+#define PACKET3_COMPUTE(op, n) (PACKET3(op, n) | 1 << 1)
+
+/* Packet 3 types */
+#define	PACKET3_NOP					0x10
+#define	PACKET3_SET_BASE				0x11
+#define		PACKET3_BASE_INDEX(x)                  ((x) << 0)
+#define			CE_PARTITION_BASE		3
+#define	PACKET3_CLEAR_STATE				0x12
+#define	PACKET3_INDEX_BUFFER_SIZE			0x13
+#define	PACKET3_DISPATCH_DIRECT				0x15
+#define	PACKET3_DISPATCH_INDIRECT			0x16
+#define	PACKET3_ATOMIC_GDS				0x1D
+#define	PACKET3_ATOMIC_MEM				0x1E
+#define	PACKET3_OCCLUSION_QUERY				0x1F
+#define	PACKET3_SET_PREDICATION				0x20
+#define	PACKET3_REG_RMW					0x21
+#define	PACKET3_COND_EXEC				0x22
+#define	PACKET3_PRED_EXEC				0x23
+#define	PACKET3_DRAW_INDIRECT				0x24
+#define	PACKET3_DRAW_INDEX_INDIRECT			0x25
+#define	PACKET3_INDEX_BASE				0x26
+#define	PACKET3_DRAW_INDEX_2				0x27
+#define	PACKET3_CONTEXT_CONTROL				0x28
+#define	PACKET3_INDEX_TYPE				0x2A
+#define	PACKET3_DRAW_INDIRECT_MULTI			0x2C
+#define	PACKET3_DRAW_INDEX_AUTO				0x2D
+#define	PACKET3_NUM_INSTANCES				0x2F
+#define	PACKET3_DRAW_INDEX_MULTI_AUTO			0x30
+#define	PACKET3_INDIRECT_BUFFER_CONST			0x33
+#define	PACKET3_STRMOUT_BUFFER_UPDATE			0x34
+#define	PACKET3_DRAW_INDEX_OFFSET_2			0x35
+#define	PACKET3_DRAW_PREAMBLE				0x36
+#define	PACKET3_WRITE_DATA				0x37
+#define		WRITE_DATA_DST_SEL(x)                   ((x) << 8)
+		/* 0 - register
+		 * 1 - memory (sync - via GRBM)
+		 * 2 - gl2
+		 * 3 - gds
+		 * 4 - reserved
+		 * 5 - memory (async - direct)
+		 */
+#define		WR_ONE_ADDR                             (1 << 16)
+#define		WR_CONFIRM                              (1 << 20)
+#define		WRITE_DATA_CACHE_POLICY(x)              ((x) << 25)
+		/* 0 - LRU
+		 * 1 - Stream
+		 */
+#define		WRITE_DATA_ENGINE_SEL(x)                ((x) << 30)
+		/* 0 - me
+		 * 1 - pfp
+		 * 2 - ce
+		 */
+#define	PACKET3_DRAW_INDEX_INDIRECT_MULTI		0x38
+#define	PACKET3_MEM_SEMAPHORE				0x39
+#              define PACKET3_SEM_USE_MAILBOX       (0x1 << 16)
+#              define PACKET3_SEM_SEL_SIGNAL_TYPE   (0x1 << 20) /* 0 = increment, 1 = write 1 */
+#              define PACKET3_SEM_SEL_SIGNAL	    (0x6 << 29)
+#              define PACKET3_SEM_SEL_WAIT	    (0x7 << 29)
+#define	PACKET3_WAIT_REG_MEM				0x3C
+#define		WAIT_REG_MEM_FUNCTION(x)                ((x) << 0)
+		/* 0 - always
+		 * 1 - <
+		 * 2 - <=
+		 * 3 - ==
+		 * 4 - !=
+		 * 5 - >=
+		 * 6 - >
+		 */
+#define		WAIT_REG_MEM_MEM_SPACE(x)               ((x) << 4)
+		/* 0 - reg
+		 * 1 - mem
+		 */
+#define		WAIT_REG_MEM_OPERATION(x)               ((x) << 6)
+		/* 0 - wait_reg_mem
+		 * 1 - wr_wait_wr_reg
+		 */
+#define		WAIT_REG_MEM_ENGINE(x)                  ((x) << 8)
+		/* 0 - me
+		 * 1 - pfp
+		 */
+#define	PACKET3_INDIRECT_BUFFER				0x3F
+#define		INDIRECT_BUFFER_CACHE_POLICY(x)         ((x) << 28)
+		/* 0 - LRU
+		 * 1 - Stream
+		 * 2 - Bypass
+		 */
+#define	PACKET3_COPY_DATA				0x40
+#define	PACKET3_PFP_SYNC_ME				0x42
+#define	PACKET3_COND_WRITE				0x45
+#define	PACKET3_EVENT_WRITE				0x46
+#define		EVENT_TYPE(x)                           ((x) << 0)
+#define		EVENT_INDEX(x)                          ((x) << 8)
+		/* 0 - any non-TS event
+		 * 1 - ZPASS_DONE, PIXEL_PIPE_STAT_*
+		 * 2 - SAMPLE_PIPELINESTAT
+		 * 3 - SAMPLE_STREAMOUTSTAT*
+		 * 4 - *S_PARTIAL_FLUSH
+		 */
+#define	PACKET3_RELEASE_MEM				0x49
+#define		EVENT_TYPE(x)                           ((x) << 0)
+#define		EVENT_INDEX(x)                          ((x) << 8)
+#define		EOP_TCL1_VOL_ACTION_EN                  (1 << 12)
+#define		EOP_TC_VOL_ACTION_EN                    (1 << 13) /* L2 */
+#define		EOP_TC_WB_ACTION_EN                     (1 << 15) /* L2 */
+#define		EOP_TCL1_ACTION_EN                      (1 << 16)
+#define		EOP_TC_ACTION_EN                        (1 << 17) /* L2 */
+#define		EOP_TC_MD_ACTION_EN			(1 << 21) /* L2 metadata */
+
+#define		DATA_SEL(x)                             ((x) << 29)
+		/* 0 - discard
+		 * 1 - send low 32bit data
+		 * 2 - send 64bit data
+		 * 3 - send 64bit GPU counter value
+		 * 4 - send 64bit sys counter value
+		 */
+#define		INT_SEL(x)                              ((x) << 24)
+		/* 0 - none
+		 * 1 - interrupt only (DATA_SEL = 0)
+		 * 2 - interrupt when data write is confirmed
+		 */
+#define		DST_SEL(x)                              ((x) << 16)
+		/* 0 - MC
+		 * 1 - TC/L2
+		 */
+
+
+
+#define	PACKET3_PREAMBLE_CNTL				0x4A
+#              define PACKET3_PREAMBLE_BEGIN_CLEAR_STATE     (2 << 28)
+#              define PACKET3_PREAMBLE_END_CLEAR_STATE       (3 << 28)
+#define	PACKET3_DMA_DATA				0x50
+/* 1. header
+ * 2. CONTROL
+ * 3. SRC_ADDR_LO or DATA [31:0]
+ * 4. SRC_ADDR_HI [31:0]
+ * 5. DST_ADDR_LO [31:0]
+ * 6. DST_ADDR_HI [7:0]
+ * 7. COMMAND [30:21] | BYTE_COUNT [20:0]
+ */
+/* CONTROL */
+#              define PACKET3_DMA_DATA_ENGINE(x)     ((x) << 0)
+		/* 0 - ME
+		 * 1 - PFP
+		 */
+#              define PACKET3_DMA_DATA_SRC_CACHE_POLICY(x) ((x) << 13)
+		/* 0 - LRU
+		 * 1 - Stream
+		 */
+#              define PACKET3_DMA_DATA_DST_SEL(x)  ((x) << 20)
+		/* 0 - DST_ADDR using DAS
+		 * 1 - GDS
+		 * 3 - DST_ADDR using L2
+		 */
+#              define PACKET3_DMA_DATA_DST_CACHE_POLICY(x) ((x) << 25)
+		/* 0 - LRU
+		 * 1 - Stream
+		 */
+#              define PACKET3_DMA_DATA_SRC_SEL(x)  ((x) << 29)
+		/* 0 - SRC_ADDR using SAS
+		 * 1 - GDS
+		 * 2 - DATA
+		 * 3 - SRC_ADDR using L2
+		 */
+#              define PACKET3_DMA_DATA_CP_SYNC     (1 << 31)
+/* COMMAND */
+#              define PACKET3_DMA_DATA_CMD_SAS     (1 << 26)
+		/* 0 - memory
+		 * 1 - register
+		 */
+#              define PACKET3_DMA_DATA_CMD_DAS     (1 << 27)
+		/* 0 - memory
+		 * 1 - register
+		 */
+#              define PACKET3_DMA_DATA_CMD_SAIC    (1 << 28)
+#              define PACKET3_DMA_DATA_CMD_DAIC    (1 << 29)
+#              define PACKET3_DMA_DATA_CMD_RAW_WAIT  (1 << 30)
+#define	PACKET3_AQUIRE_MEM				0x58
+#define	PACKET3_REWIND					0x59
+#define	PACKET3_LOAD_UCONFIG_REG			0x5E
+#define	PACKET3_LOAD_SH_REG				0x5F
+#define	PACKET3_LOAD_CONFIG_REG				0x60
+#define	PACKET3_LOAD_CONTEXT_REG			0x61
+#define	PACKET3_SET_CONFIG_REG				0x68
+#define		PACKET3_SET_CONFIG_REG_START			0x00002000
+#define		PACKET3_SET_CONFIG_REG_END			0x00002c00
+#define	PACKET3_SET_CONTEXT_REG				0x69
+#define		PACKET3_SET_CONTEXT_REG_START			0x0000a000
+#define		PACKET3_SET_CONTEXT_REG_END			0x0000a400
+#define	PACKET3_SET_CONTEXT_REG_INDIRECT		0x73
+#define	PACKET3_SET_SH_REG				0x76
+#define		PACKET3_SET_SH_REG_START			0x00002c00
+#define		PACKET3_SET_SH_REG_END				0x00003000
+#define	PACKET3_SET_SH_REG_OFFSET			0x77
+#define	PACKET3_SET_QUEUE_REG				0x78
+#define	PACKET3_SET_UCONFIG_REG				0x79
+#define		PACKET3_SET_UCONFIG_REG_START			0x0000c000
+#define		PACKET3_SET_UCONFIG_REG_END			0x0000c400
+#define	PACKET3_SCRATCH_RAM_WRITE			0x7D
+#define	PACKET3_SCRATCH_RAM_READ			0x7E
+#define	PACKET3_LOAD_CONST_RAM				0x80
+#define	PACKET3_WRITE_CONST_RAM				0x81
+#define	PACKET3_DUMP_CONST_RAM				0x83
+#define	PACKET3_INCREMENT_CE_COUNTER			0x84
+#define	PACKET3_INCREMENT_DE_COUNTER			0x85
+#define	PACKET3_WAIT_ON_CE_COUNTER			0x86
+#define	PACKET3_WAIT_ON_DE_COUNTER_DIFF			0x88
+#define	PACKET3_SWITCH_BUFFER				0x8B
+
+#define VCE_CMD_NO_OP		0x00000000
+#define VCE_CMD_END		0x00000001
+#define VCE_CMD_IB		0x00000002
+#define VCE_CMD_FENCE		0x00000003
+#define VCE_CMD_TRAP		0x00000004
+#define VCE_CMD_IB_AUTO 	0x00000005
+#define VCE_CMD_SEMAPHORE	0x00000006
+
+#define VCE_CMD_IB_VM           0x00000102
+#define VCE_CMD_WAIT_GE         0x00000106
+#define VCE_CMD_UPDATE_PTB      0x00000107
+#define VCE_CMD_FLUSH_TLB       0x00000108
+#define VCE_CMD_REG_WRITE       0x00000109
+#define VCE_CMD_REG_WAIT        0x0000010a
+
+#define HEVC_ENC_CMD_NO_OP		0x00000000
+#define HEVC_ENC_CMD_END		0x00000001
+#define HEVC_ENC_CMD_FENCE		0x00000003
+#define HEVC_ENC_CMD_TRAP		0x00000004
+#define HEVC_ENC_CMD_IB_VM		0x00000102
+#define HEVC_ENC_CMD_REG_WRITE		0x00000109
+#define HEVC_ENC_CMD_REG_WAIT		0x0000010a
+
+#endif
-- 
2.5.5

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 101+ messages in thread

* [PATCH 025/100] drm/amdgpu: add vega10 chip name
       [not found] ` <1490041835-11255-1-git-send-email-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (8 preceding siblings ...)
  2017-03-20 20:29   ` [PATCH 024/100] drm/amdgpu: add common soc15 headers Alex Deucher
@ 2017-03-20 20:29   ` Alex Deucher
  2017-03-20 20:29   ` [PATCH 026/100] drm/amdgpu: add clientid definition for vega10 Alex Deucher
                     ` (75 subsequent siblings)
  85 siblings, 0 replies; 101+ messages in thread
From: Alex Deucher @ 2017-03-20 20:29 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Alex Deucher, Ken Wang

From: Ken Wang <Qingqing.Wang@amd.com>

Signed-off-by: Ken Wang <Qingqing.Wang@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 1 +
 drivers/gpu/drm/amd/include/amd_shared.h   | 1 +
 2 files changed, 2 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index 5a17899..7295cbc 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -76,6 +76,7 @@ static const char *amdgpu_asic_name[] = {
 	"POLARIS10",
 	"POLARIS11",
 	"POLARIS12",
+	"VEGA10",
 	"LAST",
 };
 
diff --git a/drivers/gpu/drm/amd/include/amd_shared.h b/drivers/gpu/drm/amd/include/amd_shared.h
index 4f61879..717d6be 100644
--- a/drivers/gpu/drm/amd/include/amd_shared.h
+++ b/drivers/gpu/drm/amd/include/amd_shared.h
@@ -47,6 +47,7 @@ enum amd_asic_type {
 	CHIP_POLARIS10,
 	CHIP_POLARIS11,
 	CHIP_POLARIS12,
+	CHIP_VEGA10,
 	CHIP_LAST,
 };
 
-- 
2.5.5


* [PATCH 026/100] drm/amdgpu: add clientid definition for vega10
       [not found] ` <1490041835-11255-1-git-send-email-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (9 preceding siblings ...)
  2017-03-20 20:29   ` [PATCH 025/100] drm/amdgpu: add vega10 chip name Alex Deucher
@ 2017-03-20 20:29   ` Alex Deucher
  2017-03-20 20:29   ` [PATCH 027/100] drm/amdgpu: use new flag to handle different firmware loading method Alex Deucher
                     ` (74 subsequent siblings)
  85 siblings, 0 replies; 101+ messages in thread
From: Alex Deucher @ 2017-03-20 20:29 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Alex Deucher, ken

From: ken <Qingqing.Wang@amd.com>

Signed-off-by: ken <Qingqing.Wang@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_ih.h | 42 ++++++++++++++++++++++++++++++++--
 1 file changed, 40 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ih.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ih.h
index 584136e..043620d 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ih.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ih.h
@@ -25,10 +25,48 @@
 #define __AMDGPU_IH_H__
 
 struct amdgpu_device;
+/*
+ * vega10+ IH clients
+ */
+enum amdgpu_ih_clientid
+{
+    AMDGPU_IH_CLIENTID_IH	    = 0x00,
+    AMDGPU_IH_CLIENTID_ACP	    = 0x01,
+    AMDGPU_IH_CLIENTID_ATHUB	    = 0x02,
+    AMDGPU_IH_CLIENTID_BIF	    = 0x03,
+    AMDGPU_IH_CLIENTID_DCE	    = 0x04,
+    AMDGPU_IH_CLIENTID_ISP	    = 0x05,
+    AMDGPU_IH_CLIENTID_PCIE0	    = 0x06,
+    AMDGPU_IH_CLIENTID_RLC	    = 0x07,
+    AMDGPU_IH_CLIENTID_SDMA0	    = 0x08,
+    AMDGPU_IH_CLIENTID_SDMA1	    = 0x09,
+    AMDGPU_IH_CLIENTID_SE0SH	    = 0x0a,
+    AMDGPU_IH_CLIENTID_SE1SH	    = 0x0b,
+    AMDGPU_IH_CLIENTID_SE2SH	    = 0x0c,
+    AMDGPU_IH_CLIENTID_SE3SH	    = 0x0d,
+    AMDGPU_IH_CLIENTID_SYSHUB	    = 0x0e,
+    AMDGPU_IH_CLIENTID_THM	    = 0x0f,
+    AMDGPU_IH_CLIENTID_UVD	    = 0x10,
+    AMDGPU_IH_CLIENTID_VCE0	    = 0x11,
+    AMDGPU_IH_CLIENTID_VMC	    = 0x12,
+    AMDGPU_IH_CLIENTID_XDMA	    = 0x13,
+    AMDGPU_IH_CLIENTID_GRBM_CP	    = 0x14,
+    AMDGPU_IH_CLIENTID_ATS	    = 0x15,
+    AMDGPU_IH_CLIENTID_ROM_SMUIO    = 0x16,
+    AMDGPU_IH_CLIENTID_DF	    = 0x17,
+    AMDGPU_IH_CLIENTID_VCE1	    = 0x18,
+    AMDGPU_IH_CLIENTID_PWR	    = 0x19,
+    AMDGPU_IH_CLIENTID_UTCL2	    = 0x1b,
+    AMDGPU_IH_CLIENTID_EA	    = 0x1c,
+    AMDGPU_IH_CLIENTID_UTCL2LOG	    = 0x1d,
+    AMDGPU_IH_CLIENTID_MP0	    = 0x1e,
+    AMDGPU_IH_CLIENTID_MP1	    = 0x1f,
 
-#define AMDGPU_IH_CLIENTID_LEGACY 0
+    AMDGPU_IH_CLIENTID_MAX
 
-#define AMDGPU_IH_CLIENTID_MAX 0x1f
+};
+
+#define AMDGPU_IH_CLIENTID_LEGACY 0
 
 /*
  * R6xx+ IH ring
-- 
2.5.5


* [PATCH 027/100] drm/amdgpu: use new flag to handle different firmware loading method
       [not found] ` <1490041835-11255-1-git-send-email-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (10 preceding siblings ...)
  2017-03-20 20:29   ` [PATCH 026/100] drm/amdgpu: add clientid definition for vega10 Alex Deucher
@ 2017-03-20 20:29   ` Alex Deucher
  2017-03-20 20:29   ` [PATCH 028/100] drm/amdgpu: gb_addr_config struct Alex Deucher
                     ` (73 subsequent siblings)
  85 siblings, 0 replies; 101+ messages in thread
From: Alex Deucher @ 2017-03-20 20:29 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Alex Deucher, Huang Rui

From: Huang Rui <ray.huang@amd.com>

This patch introduces a new flag, "amdgpu_fw_load_type", to handle the
different firmware loading methods. Since Vega10, there are three ways
to load firmware. It is better to use a flag and an fw_load_type kernel
parameter to configure it.

Signed-off-by: Huang Rui <ray.huang@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu.h           | 10 +++-
 drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c       |  6 +--
 drivers/gpu/drm/amd/amdgpu/amdgpu_powerplay.c |  4 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c     | 67 +++++++++++++++++++++++++--
 drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h     |  3 ++
 drivers/gpu/drm/amd/amdgpu/cik.c              |  2 +
 drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c         |  6 +--
 drivers/gpu/drm/amd/amdgpu/sdma_v2_4.c        |  4 +-
 drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c        |  4 +-
 drivers/gpu/drm/amd/amdgpu/vi.c               |  4 +-
 10 files changed, 90 insertions(+), 20 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
index b713f37..4d06de8b 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
@@ -82,7 +82,7 @@ extern int amdgpu_pcie_gen2;
 extern int amdgpu_msi;
 extern int amdgpu_lockup_timeout;
 extern int amdgpu_dpm;
-extern int amdgpu_smc_load_fw;
+extern int amdgpu_fw_load_type;
 extern int amdgpu_aspm;
 extern int amdgpu_runtime_pm;
 extern unsigned amdgpu_ip_block_mask;
@@ -1065,9 +1065,15 @@ struct amdgpu_sdma {
 /*
  * Firmware
  */
+enum amdgpu_firmware_load_type {
+	AMDGPU_FW_LOAD_DIRECT = 0,
+	AMDGPU_FW_LOAD_SMU,
+	AMDGPU_FW_LOAD_PSP,
+};
+
 struct amdgpu_firmware {
 	struct amdgpu_firmware_info ucode[AMDGPU_UCODE_ID_MAXIMUM];
-	bool smu_load;
+	enum amdgpu_firmware_load_type load_type;
 	struct amdgpu_bo *fw_buf;
 	unsigned int fw_size;
 };
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
index 296dcb7..3d0e8b1 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
@@ -79,7 +79,7 @@ int amdgpu_pcie_gen2 = -1;
 int amdgpu_msi = -1;
 int amdgpu_lockup_timeout = 0;
 int amdgpu_dpm = -1;
-int amdgpu_smc_load_fw = 1;
+int amdgpu_fw_load_type = -1;
 int amdgpu_aspm = -1;
 int amdgpu_runtime_pm = -1;
 unsigned amdgpu_ip_block_mask = 0xffffffff;
@@ -140,8 +140,8 @@ module_param_named(lockup_timeout, amdgpu_lockup_timeout, int, 0444);
 MODULE_PARM_DESC(dpm, "DPM support (1 = enable, 0 = disable, -1 = auto)");
 module_param_named(dpm, amdgpu_dpm, int, 0444);
 
-MODULE_PARM_DESC(smc_load_fw, "SMC firmware loading(1 = enable, 0 = disable)");
-module_param_named(smc_load_fw, amdgpu_smc_load_fw, int, 0444);
+MODULE_PARM_DESC(fw_load_type, "firmware loading type (0 = direct, 1 = SMU, 2 = PSP, -1 = auto)");
+module_param_named(fw_load_type, amdgpu_fw_load_type, int, 0444);
 
 MODULE_PARM_DESC(aspm, "ASPM support (1 = enable, 0 = disable, -1 = auto)");
 module_param_named(aspm, amdgpu_aspm, int, 0444);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_powerplay.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_powerplay.c
index d56d200..96a5113 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_powerplay.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_powerplay.c
@@ -163,7 +163,7 @@ static int amdgpu_pp_hw_init(void *handle)
 	int ret = 0;
 	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
 
-	if (adev->pp_enabled && adev->firmware.smu_load)
+	if (adev->pp_enabled && adev->firmware.load_type == AMDGPU_FW_LOAD_SMU)
 		amdgpu_ucode_init_bo(adev);
 
 	if (adev->powerplay.ip_funcs->hw_init)
@@ -190,7 +190,7 @@ static int amdgpu_pp_hw_fini(void *handle)
 		ret = adev->powerplay.ip_funcs->hw_fini(
 					adev->powerplay.pp_handle);
 
-	if (adev->pp_enabled && adev->firmware.smu_load)
+	if (adev->pp_enabled && adev->firmware.load_type == AMDGPU_FW_LOAD_SMU)
 		amdgpu_ucode_fini_bo(adev);
 
 	return ret;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
index be16377..73c3e66 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
@@ -217,6 +217,49 @@ bool amdgpu_ucode_hdr_version(union amdgpu_firmware_header *hdr,
 	return true;
 }
 
+enum amdgpu_firmware_load_type
+amdgpu_ucode_get_load_type(struct amdgpu_device *adev, int load_type)
+{
+	switch (adev->asic_type) {
+#ifdef CONFIG_DRM_AMDGPU_SI
+	case CHIP_TAHITI:
+	case CHIP_PITCAIRN:
+	case CHIP_VERDE:
+	case CHIP_OLAND:
+		return AMDGPU_FW_LOAD_DIRECT;
+#endif
+#ifdef CONFIG_DRM_AMDGPU_CIK
+	case CHIP_BONAIRE:
+	case CHIP_KAVERI:
+	case CHIP_KABINI:
+	case CHIP_HAWAII:
+	case CHIP_MULLINS:
+		return AMDGPU_FW_LOAD_DIRECT;
+#endif
+	case CHIP_TOPAZ:
+	case CHIP_TONGA:
+	case CHIP_FIJI:
+	case CHIP_CARRIZO:
+	case CHIP_STONEY:
+	case CHIP_POLARIS10:
+	case CHIP_POLARIS11:
+	case CHIP_POLARIS12:
+		if (!load_type)
+			return AMDGPU_FW_LOAD_DIRECT;
+		else
+			return AMDGPU_FW_LOAD_SMU;
+	case CHIP_VEGA10:
+		if (!load_type)
+			return AMDGPU_FW_LOAD_DIRECT;
+		else
+			return AMDGPU_FW_LOAD_PSP;
+	default:
+		DRM_ERROR("Unknown firmware load type\n");
+	}
+
+	return AMDGPU_FW_LOAD_DIRECT;
+}
+
 static int amdgpu_ucode_init_single_fw(struct amdgpu_firmware_info *ucode,
 				uint64_t mc_addr, void *kptr)
 {
@@ -273,7 +316,7 @@ int amdgpu_ucode_init_bo(struct amdgpu_device *adev)
 	uint64_t fw_mc_addr;
 	void *fw_buf_ptr = NULL;
 	uint64_t fw_offset = 0;
-	int i, err;
+	int i, err, max;
 	struct amdgpu_firmware_info *ucode = NULL;
 	const struct common_firmware_header *header = NULL;
 
@@ -306,7 +349,16 @@ int amdgpu_ucode_init_bo(struct amdgpu_device *adev)
 
 	amdgpu_bo_unreserve(*bo);
 
-	for (i = 0; i < AMDGPU_UCODE_ID_MAXIMUM; i++) {
+	/*
+	 * if the SMU loads the firmware, there is no need to add the
+	 * SMC, UVD, and VCE ucode info here
+	 */
+	if (adev->firmware.load_type != AMDGPU_FW_LOAD_PSP)
+		max = AMDGPU_UCODE_ID_MAXIMUM - 3;
+	else
+		max = AMDGPU_UCODE_ID_MAXIMUM;
+
+	for (i = 0; i < max; i++) {
 		ucode = &adev->firmware.ucode[i];
 		if (ucode->fw) {
 			header = (const struct common_firmware_header *)ucode->fw->data;
@@ -331,7 +383,8 @@ int amdgpu_ucode_init_bo(struct amdgpu_device *adev)
 failed_reserve:
 	amdgpu_bo_unref(bo);
 failed:
-	adev->firmware.smu_load = false;
+	if (err)
+		adev->firmware.load_type = AMDGPU_FW_LOAD_DIRECT;
 
 	return err;
 }
@@ -340,8 +393,14 @@ int amdgpu_ucode_fini_bo(struct amdgpu_device *adev)
 {
 	int i;
 	struct amdgpu_firmware_info *ucode = NULL;
+	int max;
+
+	if (adev->firmware.load_type != AMDGPU_FW_LOAD_PSP)
+		max = AMDGPU_UCODE_ID_MAXIMUM - 3;
+	else
+		max = AMDGPU_UCODE_ID_MAXIMUM;
 
-	for (i = 0; i < AMDGPU_UCODE_ID_MAXIMUM; i++) {
+	for (i = 0; i < max; i++) {
 		ucode = &adev->firmware.ucode[i];
 		if (ucode->fw) {
 			ucode->mc_addr = 0;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h
index 19a584c..2b212b0 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h
@@ -176,4 +176,7 @@ bool amdgpu_ucode_hdr_version(union amdgpu_firmware_header *hdr,
 int amdgpu_ucode_init_bo(struct amdgpu_device *adev);
 int amdgpu_ucode_fini_bo(struct amdgpu_device *adev);
 
+enum amdgpu_firmware_load_type
+amdgpu_ucode_get_load_type(struct amdgpu_device *adev, int load_type);
+
 #endif
diff --git a/drivers/gpu/drm/amd/amdgpu/cik.c b/drivers/gpu/drm/amd/amdgpu/cik.c
index 37b033a..1451594 100644
--- a/drivers/gpu/drm/amd/amdgpu/cik.c
+++ b/drivers/gpu/drm/amd/amdgpu/cik.c
@@ -1786,6 +1786,8 @@ static int cik_common_early_init(void *handle)
 		return -EINVAL;
 	}
 
+	adev->firmware.load_type = amdgpu_ucode_get_load_type(adev, amdgpu_fw_load_type);
+
 	amdgpu_get_pcie_info(adev);
 
 	return 0;
diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
index d0975ac..a53e36c 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
@@ -1040,7 +1040,7 @@ static int gfx_v8_0_init_microcode(struct amdgpu_device *adev)
 		}
 	}
 
-	if (adev->firmware.smu_load) {
+	if (adev->firmware.load_type == AMDGPU_FW_LOAD_SMU) {
 		info = &adev->firmware.ucode[AMDGPU_UCODE_ID_CP_PFP];
 		info->ucode_id = AMDGPU_UCODE_ID_CP_PFP;
 		info->fw = adev->gfx.pfp_fw;
@@ -4246,7 +4246,7 @@ static int gfx_v8_0_rlc_resume(struct amdgpu_device *adev)
 	gfx_v8_0_init_pg(adev);
 
 	if (!adev->pp_enabled) {
-		if (!adev->firmware.smu_load) {
+		if (adev->firmware.load_type != AMDGPU_FW_LOAD_SMU) {
 			/* legacy rlc firmware loading */
 			r = gfx_v8_0_rlc_load_microcode(adev);
 			if (r)
@@ -5266,7 +5266,7 @@ static int gfx_v8_0_cp_resume(struct amdgpu_device *adev)
 		gfx_v8_0_enable_gui_idle_interrupt(adev, false);
 
 	if (!adev->pp_enabled) {
-		if (!adev->firmware.smu_load) {
+		if (adev->firmware.load_type != AMDGPU_FW_LOAD_SMU) {
 			/* legacy firmware loading */
 			r = gfx_v8_0_cp_gfx_load_microcode(adev);
 			if (r)
diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v2_4.c b/drivers/gpu/drm/amd/amdgpu/sdma_v2_4.c
index 7fc4854..bcc14ed 100644
--- a/drivers/gpu/drm/amd/amdgpu/sdma_v2_4.c
+++ b/drivers/gpu/drm/amd/amdgpu/sdma_v2_4.c
@@ -158,7 +158,7 @@ static int sdma_v2_4_init_microcode(struct amdgpu_device *adev)
 		if (adev->sdma.instance[i].feature_version >= 20)
 			adev->sdma.instance[i].burst_nop = true;
 
-		if (adev->firmware.smu_load) {
+		if (adev->firmware.load_type == AMDGPU_FW_LOAD_SMU) {
 			info = &adev->firmware.ucode[AMDGPU_UCODE_ID_SDMA0 + i];
 			info->ucode_id = AMDGPU_UCODE_ID_SDMA0 + i;
 			info->fw = adev->sdma.instance[i].fw;
@@ -562,7 +562,7 @@ static int sdma_v2_4_start(struct amdgpu_device *adev)
 	int r;
 
 	if (!adev->pp_enabled) {
-		if (!adev->firmware.smu_load) {
+		if (adev->firmware.load_type != AMDGPU_FW_LOAD_SMU) {
 			r = sdma_v2_4_load_microcode(adev);
 			if (r)
 				return r;
diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c
index 27a823a..497c00a 100644
--- a/drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c
@@ -310,7 +310,7 @@ static int sdma_v3_0_init_microcode(struct amdgpu_device *adev)
 		if (adev->sdma.instance[i].feature_version >= 20)
 			adev->sdma.instance[i].burst_nop = true;
 
-		if (adev->firmware.smu_load) {
+		if (adev->firmware.load_type == AMDGPU_FW_LOAD_SMU) {
 			info = &adev->firmware.ucode[AMDGPU_UCODE_ID_SDMA0 + i];
 			info->ucode_id = AMDGPU_UCODE_ID_SDMA0 + i;
 			info->fw = adev->sdma.instance[i].fw;
@@ -771,7 +771,7 @@ static int sdma_v3_0_start(struct amdgpu_device *adev)
 	int r, i;
 
 	if (!adev->pp_enabled) {
-		if (!adev->firmware.smu_load) {
+		if (adev->firmware.load_type != AMDGPU_FW_LOAD_SMU) {
 			r = sdma_v3_0_load_microcode(adev);
 			if (r)
 				return r;
diff --git a/drivers/gpu/drm/amd/amdgpu/vi.c b/drivers/gpu/drm/amd/amdgpu/vi.c
index 35d8f36..9d8434e 100644
--- a/drivers/gpu/drm/amd/amdgpu/vi.c
+++ b/drivers/gpu/drm/amd/amdgpu/vi.c
@@ -1097,8 +1097,8 @@ static int vi_common_early_init(void *handle)
 		return -EINVAL;
 	}
 
-	if (amdgpu_smc_load_fw && smc_enabled)
-		adev->firmware.smu_load = true;
+	/* vi uses smc loading by default */
+	adev->firmware.load_type = amdgpu_ucode_get_load_type(adev, amdgpu_fw_load_type);
 
 	amdgpu_get_pcie_info(adev);
 
-- 
2.5.5


* [PATCH 028/100] drm/amdgpu: gb_addr_config struct
       [not found] ` <1490041835-11255-1-git-send-email-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (11 preceding siblings ...)
  2017-03-20 20:29   ` [PATCH 027/100] drm/amdgpu: use new flag to handle different firmware loading method Alex Deucher
@ 2017-03-20 20:29   ` Alex Deucher
  2017-03-20 20:29   ` [PATCH 029/100] drm/amdgpu: add 64bit doorbell assignments Alex Deucher
                     ` (72 subsequent siblings)
  85 siblings, 0 replies; 101+ messages in thread
From: Alex Deucher @ 2017-03-20 20:29 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Alex Deucher, Andrey Grodzovsky

From: Andrey Grodzovsky <Andrey.Grodzovsky@amd.com>

Signed-off-by: Andrey Grodzovsky <Andrey.Grodzovsky@amd.com>
Reviewed-by: Ken Wang <Qingqing.Wang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu.h | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
index 4d06de8b..ae4cb07 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
@@ -831,6 +831,15 @@ struct amdgpu_rb_config {
 	uint32_t raster_config_1;
 };
 
+struct gb_addr_config {
+	uint16_t pipe_interleave_size;
+	uint8_t num_pipes;
+	uint8_t max_compress_frags;
+	uint8_t num_banks;
+	uint8_t num_se;
+	uint8_t num_rb_per_se;
+};
+
 struct amdgpu_gfx_config {
 	unsigned max_shader_engines;
 	unsigned max_tile_pipes;
@@ -860,6 +869,7 @@ struct amdgpu_gfx_config {
 	uint32_t tile_mode_array[32];
 	uint32_t macrotile_mode_array[16];
 
+	struct gb_addr_config gb_addr_config_fields;
 	struct amdgpu_rb_config rb_config[AMDGPU_GFX_MAX_SE][AMDGPU_GFX_MAX_SH_PER_SE];
 
 	/* gfx configure feature */
-- 
2.5.5


* [PATCH 029/100] drm/amdgpu: add 64bit doorbell assignments
       [not found] ` <1490041835-11255-1-git-send-email-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (12 preceding siblings ...)
  2017-03-20 20:29   ` [PATCH 028/100] drm/amdgpu: gb_addr_config struct Alex Deucher
@ 2017-03-20 20:29   ` Alex Deucher
  2017-03-20 20:29   ` [PATCH 030/100] drm/amdgpu: Add MTYPE flags to GPU VM IOCTL interface Alex Deucher
                     ` (71 subsequent siblings)
  85 siblings, 0 replies; 101+ messages in thread
From: Alex Deucher @ 2017-03-20 20:29 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Alex Deucher, Ken Wang

From: Ken Wang <Qingqing.Wang@amd.com>

Change-Id: Ic1859520f98c45f0db982a5093a3207da9fcfa5d
Signed-off-by: Ken Wang <Qingqing.Wang@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu.h | 68 +++++++++++++++++++++++++++++++++++++
 1 file changed, 68 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
index ae4cb07..f605219 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
@@ -620,6 +620,74 @@ struct amdgpu_doorbell {
 	u32			num_doorbells;	/* Number of doorbells actually reserved for amdgpu. */
 };
 
+/*
+ * 64bit doorbells; offsets are in QWORDs and occupy 2KB of doorbell space
+ */
+typedef enum _AMDGPU_DOORBELL64_ASSIGNMENT
+{
+	/*
+	 * All compute related doorbells: kiq, hiq, diq, traditional compute queues, and user queues should be located in
+	 * a contiguous range so that programming CP_MEC_DOORBELL_RANGE_LOWER/UPPER can cover this range.
+	 * Compute related doorbells are allocated from 0x00 to 0x8a
+	 */
+
+
+	/* kernel scheduling */
+	AMDGPU_DOORBELL64_KIQ                     = 0x00,
+
+	/* HSA interface queue and debug queue */
+	AMDGPU_DOORBELL64_HIQ                     = 0x01,
+	AMDGPU_DOORBELL64_DIQ                     = 0x02,
+
+	/* Compute engines */
+	AMDGPU_DOORBELL64_MEC_RING0               = 0x03,
+	AMDGPU_DOORBELL64_MEC_RING1               = 0x04,
+	AMDGPU_DOORBELL64_MEC_RING2               = 0x05,
+	AMDGPU_DOORBELL64_MEC_RING3               = 0x06,
+	AMDGPU_DOORBELL64_MEC_RING4               = 0x07,
+	AMDGPU_DOORBELL64_MEC_RING5               = 0x08,
+	AMDGPU_DOORBELL64_MEC_RING6               = 0x09,
+	AMDGPU_DOORBELL64_MEC_RING7               = 0x0a,
+
+	/* User queue doorbell range (128 doorbells) */
+	AMDGPU_DOORBELL64_USERQUEUE_START         = 0x0b,
+	AMDGPU_DOORBELL64_USERQUEUE_END           = 0x8a,
+
+	/* Graphics engine */
+	AMDGPU_DOORBELL64_GFX_RING0               = 0x8b,
+
+	/*
+	 * Other graphics doorbells can be allocated here: from 0x8c to 0xef
+	 * Graphics voltage island aperture 1
+	 * default non-graphics QWORD index is 0xF0 - 0xFF inclusive
+	 */
+
+	/* sDMA engines */
+	AMDGPU_DOORBELL64_sDMA_ENGINE0            = 0xF0,
+	AMDGPU_DOORBELL64_sDMA_HI_PRI_ENGINE0     = 0xF1,
+	AMDGPU_DOORBELL64_sDMA_ENGINE1            = 0xF2,
+	AMDGPU_DOORBELL64_sDMA_HI_PRI_ENGINE1     = 0xF3,
+
+	/* Interrupt handler */
+	AMDGPU_DOORBELL64_IH                      = 0xF4,  /* For legacy interrupt ring buffer */
+	AMDGPU_DOORBELL64_IH_RING1                = 0xF5,  /* For page migration request log */
+	AMDGPU_DOORBELL64_IH_RING2                = 0xF6,  /* For page migration translation/invalidation log */
+
+	/* VCN engine */
+	AMDGPU_DOORBELL64_VCN0                    = 0xF8,
+	AMDGPU_DOORBELL64_VCN1                    = 0xF9,
+	AMDGPU_DOORBELL64_VCN2                    = 0xFA,
+	AMDGPU_DOORBELL64_VCN3                    = 0xFB,
+	AMDGPU_DOORBELL64_VCN4                    = 0xFC,
+	AMDGPU_DOORBELL64_VCN5                    = 0xFD,
+	AMDGPU_DOORBELL64_VCN6                    = 0xFE,
+	AMDGPU_DOORBELL64_VCN7                    = 0xFF,
+
+	AMDGPU_DOORBELL64_MAX_ASSIGNMENT          = 0xFF,
+	AMDGPU_DOORBELL64_INVALID                 = 0xFFFF
+} AMDGPU_DOORBELL64_ASSIGNMENT;
+
+
 void amdgpu_doorbell_get_kfd_info(struct amdgpu_device *adev,
 				phys_addr_t *aperture_base,
 				size_t *aperture_size,
-- 
2.5.5


* [PATCH 030/100] drm/amdgpu: Add MTYPE flags to GPU VM IOCTL interface
       [not found] ` <1490041835-11255-1-git-send-email-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (13 preceding siblings ...)
  2017-03-20 20:29   ` [PATCH 029/100] drm/amdgpu: add 64bit doorbell assignments Alex Deucher
@ 2017-03-20 20:29   ` Alex Deucher
  2017-03-20 20:29   ` [PATCH 031/100] drm/amdgpu: use atomfirmware interfaces for scratch reg save/restore Alex Deucher
                     ` (70 subsequent siblings)
  85 siblings, 0 replies; 101+ messages in thread
From: Alex Deucher @ 2017-03-20 20:29 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Alex Deucher, Alex Xie

From: Alex Xie <AlexBin.Xie@amd.com>

Signed-off-by: Alex Xie <AlexBin.Xie@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c |  2 +-
 include/uapi/drm/amdgpu_drm.h           | 12 ++++++++++++
 2 files changed, 13 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
index ba9077b..f9bea8b 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
@@ -557,7 +557,7 @@ int amdgpu_gem_va_ioctl(struct drm_device *dev, void *data,
 {
 	const uint32_t valid_flags = AMDGPU_VM_DELAY_UPDATE |
 		AMDGPU_VM_PAGE_READABLE | AMDGPU_VM_PAGE_WRITEABLE |
-		AMDGPU_VM_PAGE_EXECUTABLE;
+		AMDGPU_VM_PAGE_EXECUTABLE | AMDGPU_VM_MTYPE_MASK;
 	const uint32_t prt_flags = AMDGPU_VM_DELAY_UPDATE |
 		AMDGPU_VM_PAGE_PRT;
 
diff --git a/include/uapi/drm/amdgpu_drm.h b/include/uapi/drm/amdgpu_drm.h
index dd6c934..d4ad2a1 100644
--- a/include/uapi/drm/amdgpu_drm.h
+++ b/include/uapi/drm/amdgpu_drm.h
@@ -365,6 +365,18 @@ struct drm_amdgpu_gem_op {
 #define AMDGPU_VM_PAGE_EXECUTABLE	(1 << 3)
 /* partially resident texture */
 #define AMDGPU_VM_PAGE_PRT		(1 << 4)
+/* MTYPE flags use bit 5 to 8 */
+#define AMDGPU_VM_MTYPE_MASK		(0xf << 5)
+/* Default MTYPE. Pre-AI must use this.  Recommended for newer ASICs. */
+#define AMDGPU_VM_MTYPE_DEFAULT		(0 << 5)
+/* Use NC MTYPE instead of default MTYPE */
+#define AMDGPU_VM_MTYPE_NC		(1 << 5)
+/* Use WC MTYPE instead of default MTYPE */
+#define AMDGPU_VM_MTYPE_WC		(2 << 5)
+/* Use CC MTYPE instead of default MTYPE */
+#define AMDGPU_VM_MTYPE_CC		(3 << 5)
+/* Use UC MTYPE instead of default MTYPE */
+#define AMDGPU_VM_MTYPE_UC		(4 << 5)
 
 struct drm_amdgpu_gem_va {
 	/** GEM object handle */
-- 
2.5.5



* [PATCH 031/100] drm/amdgpu: use atomfirmware interfaces for scratch reg save/restore
       [not found] ` <1490041835-11255-1-git-send-email-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (14 preceding siblings ...)
  2017-03-20 20:29   ` [PATCH 030/100] drm/amdgpu: Add MTYPE flags to GPU VM IOCTL interface Alex Deucher
@ 2017-03-20 20:29   ` Alex Deucher
  2017-03-20 20:29   ` [PATCH 032/100] drm/amdgpu: update IH IV ring entry for soc-15 Alex Deucher
                     ` (69 subsequent siblings)
  85 siblings, 0 replies; 101+ messages in thread
From: Alex Deucher @ 2017-03-20 20:29 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Alex Deucher

Use the atomfirmware interfaces for the scratch register save/restore if the board is atomfirmware based.

Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 20 ++++++++++++++++----
 1 file changed, 16 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index 7295cbc..19d37a5 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -2196,7 +2196,10 @@ int amdgpu_device_suspend(struct drm_device *dev, bool suspend, bool fbcon)
 	 */
 	amdgpu_bo_evict_vram(adev);
 
-	amdgpu_atombios_scratch_regs_save(adev);
+	if (adev->is_atom_fw)
+		amdgpu_atomfirmware_scratch_regs_save(adev);
+	else
+		amdgpu_atombios_scratch_regs_save(adev);
 	pci_save_state(dev->pdev);
 	if (suspend) {
 		/* Shut down the device */
@@ -2248,7 +2251,10 @@ int amdgpu_device_resume(struct drm_device *dev, bool resume, bool fbcon)
 			return r;
 		}
 	}
-	amdgpu_atombios_scratch_regs_restore(adev);
+	if (adev->is_atom_fw)
+		amdgpu_atomfirmware_scratch_regs_restore(adev);
+	else
+		amdgpu_atombios_scratch_regs_restore(adev);
 
 	/* post card */
 	if (amdgpu_need_post(adev)) {
@@ -2655,9 +2661,15 @@ int amdgpu_gpu_reset(struct amdgpu_device *adev)
 			amdgpu_display_stop_mc_access(adev, &save);
 			amdgpu_wait_for_idle(adev, AMD_IP_BLOCK_TYPE_GMC);
 		}
-		amdgpu_atombios_scratch_regs_save(adev);
+		if (adev->is_atom_fw)
+			amdgpu_atomfirmware_scratch_regs_save(adev);
+		else
+			amdgpu_atombios_scratch_regs_save(adev);
 		r = amdgpu_asic_reset(adev);
-		amdgpu_atombios_scratch_regs_restore(adev);
+		if (adev->is_atom_fw)
+			amdgpu_atomfirmware_scratch_regs_restore(adev);
+		else
+			amdgpu_atombios_scratch_regs_restore(adev);
 		/* post card */
 		amdgpu_atom_asic_init(adev->mode_info.atom_context);
 
-- 
2.5.5



* [PATCH 032/100] drm/amdgpu: update IH IV ring entry for soc-15
       [not found] ` <1490041835-11255-1-git-send-email-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (15 preceding siblings ...)
  2017-03-20 20:29   ` [PATCH 031/100] drm/amdgpu: use atomfirmware interfaces for scratch reg save/restore Alex Deucher
@ 2017-03-20 20:29   ` Alex Deucher
  2017-03-20 20:29   ` [PATCH 033/100] drm/amdgpu: add IV trace point Alex Deucher
                     ` (68 subsequent siblings)
  85 siblings, 0 replies; 101+ messages in thread
From: Alex Deucher @ 2017-03-20 20:29 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Alex Deucher

Reflect the new IV ring entry format used on SOC-15 ASICs.

Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_ih.h | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ih.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ih.h
index 043620d..a3da1a1 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ih.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ih.h
@@ -93,11 +93,14 @@ struct amdgpu_ih_ring {
 struct amdgpu_iv_entry {
 	unsigned client_id;
 	unsigned src_id;
-	unsigned src_data[AMDGPU_IH_SRC_DATA_MAX_SIZE_DW];
 	unsigned ring_id;
 	unsigned vm_id;
 	unsigned vm_id_src;
+	uint64_t timestamp;
+	unsigned timestamp_src;
 	unsigned pas_id;
+	unsigned pasid_src;
+	unsigned src_data[AMDGPU_IH_SRC_DATA_MAX_SIZE_DW];
 	const uint32_t *iv_entry;
 };
 
-- 
2.5.5



* [PATCH 033/100] drm/amdgpu: add IV trace point
       [not found] ` <1490041835-11255-1-git-send-email-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (16 preceding siblings ...)
  2017-03-20 20:29   ` [PATCH 032/100] drm/amdgpu: update IH IV ring entry for soc-15 Alex Deucher
@ 2017-03-20 20:29   ` Alex Deucher
  2017-03-20 20:29   ` [PATCH 034/100] drm/amdgpu: add PTE defines for MTYPE Alex Deucher
                     ` (67 subsequent siblings)
  85 siblings, 0 replies; 101+ messages in thread
From: Alex Deucher @ 2017-03-20 20:29 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
  Cc: Alex Deucher, Christian König

From: Christian König <christian.koenig@amd.com>

This allows us to grab IVs without spamming the log.

Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c   |  3 +++
 drivers/gpu/drm/amd/amdgpu/amdgpu_trace.h | 37 +++++++++++++++++++++++++++++++
 2 files changed, 40 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c
index 9c98bee..1309886 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c
@@ -33,6 +33,7 @@
 #include "amdgpu_ih.h"
 #include "atom.h"
 #include "amdgpu_connectors.h"
+#include "amdgpu_trace.h"
 
 #include <linux/pm_runtime.h>
 
@@ -367,6 +368,8 @@ void amdgpu_irq_dispatch(struct amdgpu_device *adev,
 	struct amdgpu_irq_src *src;
 	int r;
 
+	trace_amdgpu_iv(entry);
+
 	if (client_id >= AMDGPU_IH_CLIENTID_MAX) {
 		DRM_DEBUG("Invalid client_id in IV: %d\n", client_id);
 		return;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_trace.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_trace.h
index 03f598e..6d0a598 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_trace.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_trace.h
@@ -49,6 +49,43 @@ TRACE_EVENT(amdgpu_mm_wreg,
 		      (unsigned long)__entry->value)
 );
 
+TRACE_EVENT(amdgpu_iv,
+	    TP_PROTO(struct amdgpu_iv_entry *iv),
+	    TP_ARGS(iv),
+	    TP_STRUCT__entry(
+			     __field(unsigned, client_id)
+			     __field(unsigned, src_id)
+			     __field(unsigned, ring_id)
+			     __field(unsigned, vm_id)
+			     __field(unsigned, vm_id_src)
+			     __field(uint64_t, timestamp)
+			     __field(unsigned, timestamp_src)
+			     __field(unsigned, pas_id)
+			     __array(unsigned, src_data, 4)
+			    ),
+	    TP_fast_assign(
+			   __entry->client_id = iv->client_id;
+			   __entry->src_id = iv->src_id;
+			   __entry->ring_id = iv->ring_id;
+			   __entry->vm_id = iv->vm_id;
+			   __entry->vm_id_src = iv->vm_id_src;
+			   __entry->timestamp = iv->timestamp;
+			   __entry->timestamp_src = iv->timestamp_src;
+			   __entry->pas_id = iv->pas_id;
+			   __entry->src_data[0] = iv->src_data[0];
+			   __entry->src_data[1] = iv->src_data[1];
+			   __entry->src_data[2] = iv->src_data[2];
+			   __entry->src_data[3] = iv->src_data[3];
+			   ),
+	    TP_printk("client_id:%u src_id:%u ring:%u vm_id:%u timestamp: %llu pas_id:%u src_data: %08x %08x %08x %08x\n",
+		      __entry->client_id, __entry->src_id,
+		      __entry->ring_id, __entry->vm_id,
+		      __entry->timestamp, __entry->pas_id,
+		      __entry->src_data[0], __entry->src_data[1],
+		      __entry->src_data[2], __entry->src_data[3])
+);
+
+
 TRACE_EVENT(amdgpu_bo_create,
 	    TP_PROTO(struct amdgpu_bo *bo),
 	    TP_ARGS(bo),
-- 
2.5.5



* [PATCH 034/100] drm/amdgpu: add PTE defines for MTYPE
       [not found] ` <1490041835-11255-1-git-send-email-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (17 preceding siblings ...)
  2017-03-20 20:29   ` [PATCH 033/100] drm/amdgpu: add IV trace point Alex Deucher
@ 2017-03-20 20:29   ` Alex Deucher
  2017-03-20 20:29   ` [PATCH 035/100] drm/amdgpu: add NGG parameters Alex Deucher
                     ` (66 subsequent siblings)
  85 siblings, 0 replies; 101+ messages in thread
From: Alex Deucher @ 2017-03-20 20:29 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Alex Deucher

New on SOC-15 asics.

Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
index 8e5abd2..c2e4604 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
@@ -67,6 +67,10 @@ struct amdgpu_bo_list_entry;
 
 #define AMDGPU_PTE_PRT		(1ULL << 63)
 
+/* VEGA10 only */
+#define AMDGPU_PTE_MTYPE(a)    ((uint64_t)(a) << 57)
+#define AMDGPU_PTE_MTYPE_MASK	AMDGPU_PTE_MTYPE(3ULL)
+
 /* How to programm VM fault handling */
 #define AMDGPU_VM_FAULT_STOP_NEVER	0
 #define AMDGPU_VM_FAULT_STOP_FIRST	1
-- 
2.5.5



* [PATCH 035/100] drm/amdgpu: add NGG parameters
       [not found] ` <1490041835-11255-1-git-send-email-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (18 preceding siblings ...)
  2017-03-20 20:29   ` [PATCH 034/100] drm/amdgpu: add PTE defines for MTYPE Alex Deucher
@ 2017-03-20 20:29   ` Alex Deucher
  2017-03-20 20:29   ` [PATCH 036/100] drm/amdgpu: Add asic family for vega10 Alex Deucher
                     ` (65 subsequent siblings)
  85 siblings, 0 replies; 101+ messages in thread
From: Alex Deucher @ 2017-03-20 20:29 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Alex Deucher

NGG (Next Generation Graphics) is a new feature in GFX9.0.  This
adds the relevant parameters.

Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu.h     | 29 +++++++++++++++++++++++++++++
 drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c | 21 +++++++++++++++++++++
 drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c |  7 +++++++
 include/uapi/drm/amdgpu_drm.h           |  8 ++++++++
 4 files changed, 65 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
index f605219..ad0e224 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
@@ -105,6 +105,11 @@ extern char *amdgpu_disable_cu;
 extern char *amdgpu_virtual_display;
 extern unsigned amdgpu_pp_feature_mask;
 extern int amdgpu_vram_page_split;
+extern int amdgpu_ngg;
+extern int amdgpu_prim_buf_per_se;
+extern int amdgpu_pos_buf_per_se;
+extern int amdgpu_cntl_sb_buf_per_se;
+extern int amdgpu_param_buf_per_se;
 
 #define AMDGPU_WAIT_IDLE_TIMEOUT_IN_MS	        3000
 #define AMDGPU_MAX_USEC_TIMEOUT			100000	/* 100 ms */
@@ -959,6 +964,28 @@ struct amdgpu_gfx_funcs {
 	void (*read_wave_sgprs)(struct amdgpu_device *adev, uint32_t simd, uint32_t wave, uint32_t start, uint32_t size, uint32_t *dst);
 };
 
+struct amdgpu_ngg_buf {
+	struct amdgpu_bo	*bo;
+	uint64_t		gpu_addr;
+	uint32_t		size;
+	uint32_t		bo_size;
+};
+
+enum {
+	PRIM = 0,
+	POS,
+	CNTL,
+	PARAM,
+	NGG_BUF_MAX
+};
+
+struct amdgpu_ngg {
+	struct amdgpu_ngg_buf	buf[NGG_BUF_MAX];
+	uint32_t		gds_reserve_addr;
+	uint32_t		gds_reserve_size;
+	bool			init;
+};
+
 struct amdgpu_gfx {
 	struct mutex			gpu_clock_mutex;
 	struct amdgpu_gfx_config	config;
@@ -1002,6 +1029,8 @@ struct amdgpu_gfx {
 	uint32_t                        grbm_soft_reset;
 	uint32_t                        srbm_soft_reset;
 	bool                            in_reset;
+	/* NGG */
+	struct amdgpu_ngg		ngg;
 };
 
 int amdgpu_ib_get(struct amdgpu_device *adev, struct amdgpu_vm *vm,
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
index 3d0e8b1..ef3ed11 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
@@ -103,6 +103,11 @@ unsigned amdgpu_pg_mask = 0xffffffff;
 char *amdgpu_disable_cu = NULL;
 char *amdgpu_virtual_display = NULL;
 unsigned amdgpu_pp_feature_mask = 0xffffffff;
+int amdgpu_ngg = 0;
+int amdgpu_prim_buf_per_se = 0;
+int amdgpu_pos_buf_per_se = 0;
+int amdgpu_cntl_sb_buf_per_se = 0;
+int amdgpu_param_buf_per_se = 0;
 
 MODULE_PARM_DESC(vramlimit, "Restrict VRAM for testing, in megabytes");
 module_param_named(vramlimit, amdgpu_vram_limit, int, 0600);
@@ -213,6 +218,22 @@ MODULE_PARM_DESC(virtual_display,
 		 "Enable virtual display feature (the virtual_display will be set like xxxx:xx:xx.x,x;xxxx:xx:xx.x,x)");
 module_param_named(virtual_display, amdgpu_virtual_display, charp, 0444);
 
+MODULE_PARM_DESC(ngg, "Next Generation Graphics (1 = enable, 0 = disable (default depending on gfx))");
+module_param_named(ngg, amdgpu_ngg, int, 0444);
+
+MODULE_PARM_DESC(prim_buf_per_se, "the size of Primitive Buffer per Shader Engine (default depending on gfx)");
+module_param_named(prim_buf_per_se, amdgpu_prim_buf_per_se, int, 0444);
+
+MODULE_PARM_DESC(pos_buf_per_se, "the size of Position Buffer per Shader Engine (default depending on gfx)");
+module_param_named(pos_buf_per_se, amdgpu_pos_buf_per_se, int, 0444);
+
+MODULE_PARM_DESC(cntl_sb_buf_per_se, "the size of Control Sideband per Shader Engine (default depending on gfx)");
+module_param_named(cntl_sb_buf_per_se, amdgpu_cntl_sb_buf_per_se, int, 0444);
+
+MODULE_PARM_DESC(param_buf_per_se, "the size of Off-Chip Parameter Cache per Shader Engine (default depending on gfx)");
+module_param_named(param_buf_per_se, amdgpu_param_buf_per_se, int, 0444);
+
+
 static const struct pci_device_id pciidlist[] = {
 #ifdef  CONFIG_DRM_AMDGPU_SI
 	{0x1002, 0x6780, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_TAHITI},
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
index 6906322..de0c776 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
@@ -542,6 +542,13 @@ static int amdgpu_info_ioctl(struct drm_device *dev, void *data, struct drm_file
 		dev_info.gc_double_offchip_lds_buf =
 			adev->gfx.config.double_offchip_lds_buf;
 
+		if (amdgpu_ngg) {
+			dev_info.prim_buf_gpu_addr = adev->gfx.ngg.buf[PRIM].gpu_addr;
+			dev_info.pos_buf_gpu_addr = adev->gfx.ngg.buf[POS].gpu_addr;
+			dev_info.cntl_sb_buf_gpu_addr = adev->gfx.ngg.buf[CNTL].gpu_addr;
+			dev_info.param_buf_gpu_addr = adev->gfx.ngg.buf[PARAM].gpu_addr;
+		}
+
 		return copy_to_user(out, &dev_info,
 				    min((size_t)size, sizeof(dev_info))) ? -EFAULT : 0;
 	}
diff --git a/include/uapi/drm/amdgpu_drm.h b/include/uapi/drm/amdgpu_drm.h
index d4ad2a1..1bf6b29 100644
--- a/include/uapi/drm/amdgpu_drm.h
+++ b/include/uapi/drm/amdgpu_drm.h
@@ -746,6 +746,14 @@ struct drm_amdgpu_info_device {
 	__u32 vce_harvest_config;
 	/* gfx double offchip LDS buffers */
 	__u32 gc_double_offchip_lds_buf;
+	/* NGG Primitive Buffer */
+	__u64 prim_buf_gpu_addr;
+	/* NGG Position Buffer */
+	__u64 pos_buf_gpu_addr;
+	/* NGG Control Sideband */
+	__u64 cntl_sb_buf_gpu_addr;
+	/* NGG Parameter Cache */
+	__u64 param_buf_gpu_addr;
 };
 
 struct drm_amdgpu_info_hw_ip {
-- 
2.5.5



* [PATCH 036/100] drm/amdgpu: Add asic family for vega10
       [not found] ` <1490041835-11255-1-git-send-email-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (19 preceding siblings ...)
  2017-03-20 20:29   ` [PATCH 035/100] drm/amdgpu: add NGG parameters Alex Deucher
@ 2017-03-20 20:29   ` Alex Deucher
  2017-03-20 20:29   ` [PATCH 037/100] drm/amdgpu: add tiling flags for GFX9 Alex Deucher
                     ` (64 subsequent siblings)
  85 siblings, 0 replies; 101+ messages in thread
From: Alex Deucher @ 2017-03-20 20:29 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Alex Deucher

Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 include/uapi/drm/amdgpu_drm.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/include/uapi/drm/amdgpu_drm.h b/include/uapi/drm/amdgpu_drm.h
index 1bf6b29..7cfdbd8 100644
--- a/include/uapi/drm/amdgpu_drm.h
+++ b/include/uapi/drm/amdgpu_drm.h
@@ -805,6 +805,7 @@ struct drm_amdgpu_info_vce_clock_table {
 #define AMDGPU_FAMILY_KV			125 /* Kaveri, Kabini, Mullins */
 #define AMDGPU_FAMILY_VI			130 /* Iceland, Tonga */
 #define AMDGPU_FAMILY_CZ			135 /* Carrizo, Stoney */
+#define AMDGPU_FAMILY_AI			141 /* Vega10 */
 
 /*
  * Definition of free sync enter and exit signals
-- 
2.5.5



* [PATCH 037/100] drm/amdgpu: add tiling flags for GFX9
       [not found] ` <1490041835-11255-1-git-send-email-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (20 preceding siblings ...)
  2017-03-20 20:29   ` [PATCH 036/100] drm/amdgpu: Add asic family for vega10 Alex Deucher
@ 2017-03-20 20:29   ` Alex Deucher
  2017-03-20 20:29   ` [PATCH 038/100] drm/amdgpu: don't validate TILE_SPLIT on GFX9 Alex Deucher
                     ` (63 subsequent siblings)
  85 siblings, 0 replies; 101+ messages in thread
From: Alex Deucher @ 2017-03-20 20:29 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Alex Deucher

Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 include/uapi/drm/amdgpu_drm.h | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/include/uapi/drm/amdgpu_drm.h b/include/uapi/drm/amdgpu_drm.h
index 7cfdbd8..289b129 100644
--- a/include/uapi/drm/amdgpu_drm.h
+++ b/include/uapi/drm/amdgpu_drm.h
@@ -228,7 +228,11 @@ struct drm_amdgpu_gem_userptr {
 #define AMDGPU_TILING_MACRO_TILE_ASPECT_MASK		0x3
 #define AMDGPU_TILING_NUM_BANKS_SHIFT			21
 #define AMDGPU_TILING_NUM_BANKS_MASK			0x3
+/* Tiling flags for GFX9. */
+#define AMDGPU_TILING_SWIZZLE_MODE_SHIFT		0
+#define AMDGPU_TILING_SWIZZLE_MODE_MASK			0x1f
 
+/* Set/Get helpers for tiling flags. */
 #define AMDGPU_TILING_SET(field, value) \
 	(((value) & AMDGPU_TILING_##field##_MASK) << AMDGPU_TILING_##field##_SHIFT)
 #define AMDGPU_TILING_GET(value, field) \
-- 
2.5.5



* [PATCH 038/100] drm/amdgpu: don't validate TILE_SPLIT on GFX9
       [not found] ` <1490041835-11255-1-git-send-email-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (21 preceding siblings ...)
  2017-03-20 20:29   ` [PATCH 037/100] drm/amdgpu: add tiling flags for GFX9 Alex Deucher
@ 2017-03-20 20:29   ` Alex Deucher
  2017-03-20 20:29   ` [PATCH 039/100] drm/amdgpu: rework common ucode handling for vega10 Alex Deucher
                     ` (62 subsequent siblings)
  85 siblings, 0 replies; 101+ messages in thread
From: Alex Deucher @ 2017-03-20 20:29 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
  Cc: Alex Deucher, Marek Olšák

From: Marek Olšák <marek.olsak@amd.com>

Signed-off-by: Marek Olšák <marek.olsak@amd.com>
Acked-by: Alex Deucher <alexander.deucher@amd.com>
Acked-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_object.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
index e1e673c..434c931 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
@@ -815,7 +815,10 @@ int amdgpu_bo_fbdev_mmap(struct amdgpu_bo *bo,
 
 int amdgpu_bo_set_tiling_flags(struct amdgpu_bo *bo, u64 tiling_flags)
 {
-	if (AMDGPU_TILING_GET(tiling_flags, TILE_SPLIT) > 6)
+	struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
+
+	if (adev->family <= AMDGPU_FAMILY_CZ &&
+	    AMDGPU_TILING_GET(tiling_flags, TILE_SPLIT) > 6)
 		return -EINVAL;
 
 	bo->tiling_flags = tiling_flags;
-- 
2.5.5



* [PATCH 039/100] drm/amdgpu: rework common ucode handling for vega10
       [not found] ` <1490041835-11255-1-git-send-email-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (22 preceding siblings ...)
  2017-03-20 20:29   ` [PATCH 038/100] drm/amdgpu: don't validate TILE_SPLIT on GFX9 Alex Deucher
@ 2017-03-20 20:29   ` Alex Deucher
  2017-03-20 20:29   ` [PATCH 040/100] drm/amdgpu: add psp firmware header info Alex Deucher
                     ` (61 subsequent siblings)
  85 siblings, 0 replies; 101+ messages in thread
From: Alex Deucher @ 2017-03-20 20:29 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Alex Deucher, Huang Rui

From: Huang Rui <ray.huang@amd.com>

Handle ucode differences in vega10.

Signed-off-by: Huang Rui <ray.huang@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu.h       |  1 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c | 70 +++++++++++++++++++++----------
 drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h |  5 +++
 3 files changed, 53 insertions(+), 23 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
index ad0e224..aaded8d 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
@@ -1183,6 +1183,7 @@ struct amdgpu_firmware {
 	enum amdgpu_firmware_load_type load_type;
 	struct amdgpu_bo *fw_buf;
 	unsigned int fw_size;
+	unsigned int max_ucodes;
 };
 
 /*
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
index 73c3e66..a1891c9 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
@@ -260,10 +260,12 @@ amdgpu_ucode_get_load_type(struct amdgpu_device *adev, int load_type)
 	return AMDGPU_FW_LOAD_DIRECT;
 }
 
-static int amdgpu_ucode_init_single_fw(struct amdgpu_firmware_info *ucode,
-				uint64_t mc_addr, void *kptr)
+static int amdgpu_ucode_init_single_fw(struct amdgpu_device *adev,
+				       struct amdgpu_firmware_info *ucode,
+				       uint64_t mc_addr, void *kptr)
 {
 	const struct common_firmware_header *header = NULL;
+	const struct gfx_firmware_header_v1_0 *cp_hdr = NULL;
 
 	if (NULL == ucode->fw)
 		return 0;
@@ -276,11 +278,35 @@ static int amdgpu_ucode_init_single_fw(struct amdgpu_firmware_info *ucode,
 
 	header = (const struct common_firmware_header *)ucode->fw->data;
 
-	ucode->ucode_size = le32_to_cpu(header->ucode_size_bytes);
-
-	memcpy(ucode->kaddr, (void *)((uint8_t *)ucode->fw->data +
-	       le32_to_cpu(header->ucode_array_offset_bytes)),
-	       ucode->ucode_size);
+	cp_hdr = (const struct gfx_firmware_header_v1_0 *)ucode->fw->data;
+
+	if (adev->firmware.load_type != AMDGPU_FW_LOAD_PSP ||
+	    (ucode->ucode_id != AMDGPU_UCODE_ID_CP_MEC1 &&
+	     ucode->ucode_id != AMDGPU_UCODE_ID_CP_MEC2 &&
+	     ucode->ucode_id != AMDGPU_UCODE_ID_CP_MEC1_JT &&
+	     ucode->ucode_id != AMDGPU_UCODE_ID_CP_MEC2_JT)) {
+		ucode->ucode_size = le32_to_cpu(header->ucode_size_bytes);
+
+		memcpy(ucode->kaddr, (void *)((uint8_t *)ucode->fw->data +
+					      le32_to_cpu(header->ucode_array_offset_bytes)),
+		       ucode->ucode_size);
+	} else if (ucode->ucode_id == AMDGPU_UCODE_ID_CP_MEC1 ||
+		   ucode->ucode_id == AMDGPU_UCODE_ID_CP_MEC2) {
+		ucode->ucode_size = le32_to_cpu(header->ucode_size_bytes) -
+			le32_to_cpu(cp_hdr->jt_size) * 4;
+
+		memcpy(ucode->kaddr, (void *)((uint8_t *)ucode->fw->data +
+					      le32_to_cpu(header->ucode_array_offset_bytes)),
+		       ucode->ucode_size);
+	} else if (ucode->ucode_id == AMDGPU_UCODE_ID_CP_MEC1_JT ||
+		   ucode->ucode_id == AMDGPU_UCODE_ID_CP_MEC2_JT) {
+		ucode->ucode_size = le32_to_cpu(cp_hdr->jt_size) * 4;
+
+		memcpy(ucode->kaddr, (void *)((uint8_t *)ucode->fw->data +
+					      le32_to_cpu(header->ucode_array_offset_bytes) +
+					      le32_to_cpu(cp_hdr->jt_offset) * 4),
+		       ucode->ucode_size);
+	}
 
 	return 0;
 }
@@ -306,17 +332,18 @@ static int amdgpu_ucode_patch_jt(struct amdgpu_firmware_info *ucode,
 			   (le32_to_cpu(header->jt_offset) * 4);
 	memcpy(dst_addr, src_addr, le32_to_cpu(header->jt_size) * 4);
 
+	ucode->ucode_size += le32_to_cpu(header->jt_size) * 4;
+
 	return 0;
 }
 
-
 int amdgpu_ucode_init_bo(struct amdgpu_device *adev)
 {
 	struct amdgpu_bo **bo = &adev->firmware.fw_buf;
 	uint64_t fw_mc_addr;
 	void *fw_buf_ptr = NULL;
 	uint64_t fw_offset = 0;
-	int i, err, max;
+	int i, err;
 	struct amdgpu_firmware_info *ucode = NULL;
 	const struct common_firmware_header *header = NULL;
 
@@ -349,29 +376,32 @@ int amdgpu_ucode_init_bo(struct amdgpu_device *adev)
 
 	amdgpu_bo_unreserve(*bo);
 
+	memset(fw_buf_ptr, 0, adev->firmware.fw_size);
+
 	/*
 	 * if SMU loaded firmware, it needn't add SMC, UVD, and VCE
 	 * ucode info here
 	 */
 	if (adev->firmware.load_type != AMDGPU_FW_LOAD_PSP)
-		max = AMDGPU_UCODE_ID_MAXIMUM - 3;
+		adev->firmware.max_ucodes = AMDGPU_UCODE_ID_MAXIMUM - 4;
 	else
-		max = AMDGPU_UCODE_ID_MAXIMUM;
+		adev->firmware.max_ucodes = AMDGPU_UCODE_ID_MAXIMUM;
 
-	for (i = 0; i < max; i++) {
+	for (i = 0; i < adev->firmware.max_ucodes; i++) {
 		ucode = &adev->firmware.ucode[i];
 		if (ucode->fw) {
 			header = (const struct common_firmware_header *)ucode->fw->data;
-			amdgpu_ucode_init_single_fw(ucode, fw_mc_addr + fw_offset,
-						    fw_buf_ptr + fw_offset);
-			if (i == AMDGPU_UCODE_ID_CP_MEC1) {
+			amdgpu_ucode_init_single_fw(adev, ucode, fw_mc_addr + fw_offset,
+						    (void *)((uint8_t *)fw_buf_ptr + fw_offset));
+			if (i == AMDGPU_UCODE_ID_CP_MEC1 &&
+			    adev->firmware.load_type != AMDGPU_FW_LOAD_PSP) {
 				const struct gfx_firmware_header_v1_0 *cp_hdr;
 				cp_hdr = (const struct gfx_firmware_header_v1_0 *)ucode->fw->data;
 				amdgpu_ucode_patch_jt(ucode, fw_mc_addr + fw_offset,
 						    fw_buf_ptr + fw_offset);
 				fw_offset += ALIGN(le32_to_cpu(cp_hdr->jt_size) << 2, PAGE_SIZE);
 			}
-			fw_offset += ALIGN(le32_to_cpu(header->ucode_size_bytes), PAGE_SIZE);
+			fw_offset += ALIGN(ucode->ucode_size, PAGE_SIZE);
 		}
 	}
 	return 0;
@@ -393,14 +423,8 @@ int amdgpu_ucode_fini_bo(struct amdgpu_device *adev)
 {
 	int i;
 	struct amdgpu_firmware_info *ucode = NULL;
-	int max;
-
-	if (adev->firmware.load_type != AMDGPU_FW_LOAD_PSP)
-		max = AMDGPU_UCODE_ID_MAXIMUM - 3;
-	else
-		max = AMDGPU_UCODE_ID_MAXIMUM;
 
-	for (i = 0; i < max; i++) {
+	for (i = 0; i < adev->firmware.max_ucodes; i++) {
 		ucode = &adev->firmware.ucode[i];
 		if (ucode->fw) {
 			ucode->mc_addr = 0;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h
index 2b212b0..39a0749 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h
@@ -128,9 +128,14 @@ enum AMDGPU_UCODE_ID {
 	AMDGPU_UCODE_ID_CP_PFP,
 	AMDGPU_UCODE_ID_CP_ME,
 	AMDGPU_UCODE_ID_CP_MEC1,
+	AMDGPU_UCODE_ID_CP_MEC1_JT,
 	AMDGPU_UCODE_ID_CP_MEC2,
+	AMDGPU_UCODE_ID_CP_MEC2_JT,
 	AMDGPU_UCODE_ID_RLC_G,
 	AMDGPU_UCODE_ID_STORAGE,
+	AMDGPU_UCODE_ID_SMC,
+	AMDGPU_UCODE_ID_UVD,
+	AMDGPU_UCODE_ID_VCE,
 	AMDGPU_UCODE_ID_MAXIMUM,
 };
 
-- 
2.5.5



* [PATCH 040/100] drm/amdgpu: add psp firmware header info
       [not found] ` <1490041835-11255-1-git-send-email-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (23 preceding siblings ...)
  2017-03-20 20:29   ` [PATCH 039/100] drm/amdgpu: rework common ucode handling for vega10 Alex Deucher
@ 2017-03-20 20:29   ` Alex Deucher
  2017-03-20 20:29   ` [PATCH 041/100] drm/amdgpu: get display info from DC when DC enabled Alex Deucher
                     ` (60 subsequent siblings)
  85 siblings, 0 replies; 101+ messages in thread
From: Alex Deucher @ 2017-03-20 20:29 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Alex Deucher, Huang Rui

From: Huang Rui <ray.huang@amd.com>

Define the header info for the PSP firmware.

Signed-off-by: Huang Rui <ray.huang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h
index 39a0749..758f03a 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h
@@ -50,6 +50,14 @@ struct smc_firmware_header_v1_0 {
 };
 
 /* version_major=1, version_minor=0 */
+struct psp_firmware_header_v1_0 {
+	struct common_firmware_header header;
+	uint32_t ucode_feature_version;
+	uint32_t sos_offset_bytes;
+	uint32_t sos_size_bytes;
+};
+
+/* version_major=1, version_minor=0 */
 struct gfx_firmware_header_v1_0 {
 	struct common_firmware_header header;
 	uint32_t ucode_feature_version;
@@ -110,6 +118,7 @@ union amdgpu_firmware_header {
 	struct common_firmware_header common;
 	struct mc_firmware_header_v1_0 mc;
 	struct smc_firmware_header_v1_0 smc;
+	struct psp_firmware_header_v1_0 psp;
 	struct gfx_firmware_header_v1_0 gfx;
 	struct rlc_firmware_header_v1_0 rlc;
 	struct rlc_firmware_header_v2_0 rlc_v2_0;
-- 
2.5.5


^ permalink raw reply related	[flat|nested] 101+ messages in thread

* [PATCH 041/100] drm/amdgpu: get display info from DC when DC enabled.
       [not found] ` <1490041835-11255-1-git-send-email-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (24 preceding siblings ...)
  2017-03-20 20:29   ` [PATCH 040/100] drm/amdgpu: add psp firmware header info Alex Deucher
@ 2017-03-20 20:29   ` Alex Deucher
  2017-03-20 20:29   ` [PATCH 042/100] drm/amdgpu: gart fixes for vega10 Alex Deucher
                     ` (59 subsequent siblings)
  85 siblings, 0 replies; 101+ messages in thread
From: Alex Deucher @ 2017-03-20 20:29 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Alex Deucher, Rex Zhu

From: Rex Zhu <Rex.Zhu@amd.com>

Signed-off-by: Rex Zhu <Rex.Zhu@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_cgs.c | 59 +++++++++++++++++++--------------
 1 file changed, 34 insertions(+), 25 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cgs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cgs.c
index f0e3624..d42eade 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cgs.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cgs.c
@@ -999,10 +999,6 @@ static int amdgpu_cgs_get_active_displays_info(struct cgs_device *cgs_device,
 					  struct cgs_display_info *info)
 {
 	CGS_FUNC_ADEV;
-	struct amdgpu_crtc *amdgpu_crtc;
-	struct drm_device *ddev = adev->ddev;
-	struct drm_crtc *crtc;
-	uint32_t line_time_us, vblank_lines;
 	struct cgs_mode_info *mode_info;
 
 	if (info == NULL)
@@ -1010,30 +1006,43 @@ static int amdgpu_cgs_get_active_displays_info(struct cgs_device *cgs_device,
 
 	mode_info = info->mode_info;
 
-	if (adev->mode_info.num_crtc && adev->mode_info.mode_config_initialized) {
-		list_for_each_entry(crtc,
-				&ddev->mode_config.crtc_list, head) {
-			amdgpu_crtc = to_amdgpu_crtc(crtc);
-			if (crtc->enabled) {
-				info->active_display_mask |= (1 << amdgpu_crtc->crtc_id);
-				info->display_count++;
-			}
-			if (mode_info != NULL &&
-				crtc->enabled && amdgpu_crtc->enabled &&
-				amdgpu_crtc->hw_mode.clock) {
-				line_time_us = (amdgpu_crtc->hw_mode.crtc_htotal * 1000) /
-							amdgpu_crtc->hw_mode.clock;
-				vblank_lines = amdgpu_crtc->hw_mode.crtc_vblank_end -
-							amdgpu_crtc->hw_mode.crtc_vdisplay +
-							(amdgpu_crtc->v_border * 2);
-				mode_info->vblank_time_us = vblank_lines * line_time_us;
-				mode_info->refresh_rate = drm_mode_vrefresh(&amdgpu_crtc->hw_mode);
-				mode_info->ref_clock = adev->clock.spll.reference_freq;
-				mode_info = NULL;
+	if (!amdgpu_device_has_dc_support(adev)) {
+		struct amdgpu_crtc *amdgpu_crtc;
+		struct drm_device *ddev = adev->ddev;
+		struct drm_crtc *crtc;
+		uint32_t line_time_us, vblank_lines;
+
+		if (adev->mode_info.num_crtc && adev->mode_info.mode_config_initialized) {
+			list_for_each_entry(crtc,
+					&ddev->mode_config.crtc_list, head) {
+				amdgpu_crtc = to_amdgpu_crtc(crtc);
+				if (crtc->enabled) {
+					info->active_display_mask |= (1 << amdgpu_crtc->crtc_id);
+					info->display_count++;
+				}
+				if (mode_info != NULL &&
+					crtc->enabled && amdgpu_crtc->enabled &&
+					amdgpu_crtc->hw_mode.clock) {
+					line_time_us = (amdgpu_crtc->hw_mode.crtc_htotal * 1000) /
+								amdgpu_crtc->hw_mode.clock;
+					vblank_lines = amdgpu_crtc->hw_mode.crtc_vblank_end -
+								amdgpu_crtc->hw_mode.crtc_vdisplay +
+								(amdgpu_crtc->v_border * 2);
+					mode_info->vblank_time_us = vblank_lines * line_time_us;
+					mode_info->refresh_rate = drm_mode_vrefresh(&amdgpu_crtc->hw_mode);
+					mode_info->ref_clock = adev->clock.spll.reference_freq;
+					mode_info = NULL;
+				}
 			}
 		}
+	} else {
+		info->display_count = adev->pm.pm_display_cfg.num_display;
+		if (mode_info != NULL) {
+			mode_info->vblank_time_us = adev->pm.pm_display_cfg.min_vblank_time;
+			mode_info->refresh_rate = adev->pm.pm_display_cfg.vrefresh;
+			mode_info->ref_clock = adev->clock.spll.reference_freq;
+		}
 	}
-
 	return 0;
 }
 
-- 
2.5.5


* [PATCH 042/100] drm/amdgpu: gart fixes for vega10
       [not found] ` <1490041835-11255-1-git-send-email-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (25 preceding siblings ...)
  2017-03-20 20:29   ` [PATCH 041/100] drm/amdgpu: get display info from DC when DC enabled Alex Deucher
@ 2017-03-20 20:29   ` Alex Deucher
  2017-03-20 20:29   ` [PATCH 043/100] drm/amdgpu: handle PTE EXEC in amdgpu_vm_bo_split_mapping Alex Deucher
                     ` (58 subsequent siblings)
  85 siblings, 0 replies; 101+ messages in thread
From: Alex Deucher @ 2017-03-20 20:29 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Alex Deucher

The PTE flags need to be 0 for an entry to be considered invalid.

Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_gart.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gart.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gart.c
index 2916fab..6d691ab 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gart.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gart.c
@@ -229,7 +229,8 @@ void amdgpu_gart_unbind(struct amdgpu_device *adev, uint64_t offset,
 	unsigned p;
 	int i, j;
 	u64 page_base;
-	uint64_t flags = AMDGPU_PTE_SYSTEM;
+	/* Starting from VEGA10, system bit must be 0 to mean invalid. */
+	uint64_t flags = 0;
 
 	if (!adev->gart.ready) {
 		WARN(1, "trying to unbind memory from uninitialized GART !\n");
-- 
2.5.5


* [PATCH 043/100] drm/amdgpu: handle PTE EXEC in amdgpu_vm_bo_split_mapping
       [not found] ` <1490041835-11255-1-git-send-email-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (26 preceding siblings ...)
  2017-03-20 20:29   ` [PATCH 042/100] drm/amdgpu: gart fixes for vega10 Alex Deucher
@ 2017-03-20 20:29   ` Alex Deucher
  2017-03-20 20:29   ` [PATCH 044/100] drm/amdgpu: handle PTE MTYPE " Alex Deucher
                     ` (57 subsequent siblings)
  85 siblings, 0 replies; 101+ messages in thread
From: Alex Deucher @ 2017-03-20 20:29 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Alex Deucher, Alex Xie

From: Alex Xie <AlexBin.Xie@amd.com>

Signed-off-by: Alex Xie <AlexBin.Xie@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index 344b535..52e349a 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -1042,6 +1042,9 @@ static int amdgpu_vm_bo_split_mapping(struct amdgpu_device *adev,
 	if (!(mapping->flags & AMDGPU_PTE_WRITEABLE))
 		flags &= ~AMDGPU_PTE_WRITEABLE;
 
+	flags &= ~AMDGPU_PTE_EXECUTABLE;
+	flags |= mapping->flags & AMDGPU_PTE_EXECUTABLE;
+
 	trace_amdgpu_vm_bo_update(mapping);
 
 	pfn = mapping->offset >> PAGE_SHIFT;
-- 
2.5.5


* [PATCH 044/100] drm/amdgpu: handle PTE MTYPE in amdgpu_vm_bo_split_mapping
       [not found] ` <1490041835-11255-1-git-send-email-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (27 preceding siblings ...)
  2017-03-20 20:29   ` [PATCH 043/100] drm/amdgpu: handle PTE EXEC in amdgpu_vm_bo_split_mapping Alex Deucher
@ 2017-03-20 20:29   ` Alex Deucher
  2017-03-20 20:29   ` [PATCH 045/100] drm/amdgpu: add NBIO 6.1 driver Alex Deucher
                     ` (56 subsequent siblings)
  85 siblings, 0 replies; 101+ messages in thread
From: Alex Deucher @ 2017-03-20 20:29 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Alex Deucher, Alex Xie

From: Alex Xie <AlexBin.Xie@amd.com>

Signed-off-by: Alex Xie <AlexBin.Xie@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index 52e349a..df615d7 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -1045,6 +1045,9 @@ static int amdgpu_vm_bo_split_mapping(struct amdgpu_device *adev,
 	flags &= ~AMDGPU_PTE_EXECUTABLE;
 	flags |= mapping->flags & AMDGPU_PTE_EXECUTABLE;
 
+	flags &= ~AMDGPU_PTE_MTYPE_MASK;
+	flags |= (mapping->flags & AMDGPU_PTE_MTYPE_MASK);
+
 	trace_amdgpu_vm_bo_update(mapping);
 
 	pfn = mapping->offset >> PAGE_SHIFT;
-- 
2.5.5


* [PATCH 045/100] drm/amdgpu: add NBIO 6.1 driver
       [not found] ` <1490041835-11255-1-git-send-email-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (28 preceding siblings ...)
  2017-03-20 20:29   ` [PATCH 044/100] drm/amdgpu: handle PTE MTYPE " Alex Deucher
@ 2017-03-20 20:29   ` Alex Deucher
  2017-03-20 20:29   ` [PATCH 046/100] drm/amdgpu: Add GMC 9.0 support Alex Deucher
                     ` (55 subsequent siblings)
  85 siblings, 0 replies; 101+ messages in thread
From: Alex Deucher @ 2017-03-20 20:29 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
  Cc: Junwei Zhang, Tom St Denis, Alex Deucher, Hawking Zhang

From: Junwei Zhang <Jerry.Zhang@amd.com>

This handles the NBIO 6.1 specific implementations which
are used by various other IP blocks.

Signed-off-by: Hawking Zhang <Hawking.Zhang@amd.com>
Signed-off-by: Junwei Zhang <Jerry.Zhang@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Tom St Denis <tom.stdenis@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/Makefile    |   2 +-
 drivers/gpu/drm/amd/amdgpu/nbio_v6_1.c | 233 +++++++++++++++++++++++++++++++++
 drivers/gpu/drm/amd/amdgpu/nbio_v6_1.h |  52 ++++++++
 3 files changed, 286 insertions(+), 1 deletion(-)
 create mode 100644 drivers/gpu/drm/amd/amdgpu/nbio_v6_1.c
 create mode 100644 drivers/gpu/drm/amd/amdgpu/nbio_v6_1.h

diff --git a/drivers/gpu/drm/amd/amdgpu/Makefile b/drivers/gpu/drm/amd/amdgpu/Makefile
index fbf6474..69823e8 100644
--- a/drivers/gpu/drm/amd/amdgpu/Makefile
+++ b/drivers/gpu/drm/amd/amdgpu/Makefile
@@ -40,7 +40,7 @@ amdgpu-$(CONFIG_DRM_AMDGPU_CIK)+= cik.o cik_ih.o kv_smc.o kv_dpm.o \
 amdgpu-$(CONFIG_DRM_AMDGPU_SI)+= si.o gmc_v6_0.o gfx_v6_0.o si_ih.o si_dma.o dce_v6_0.o si_dpm.o si_smc.o
 
 amdgpu-y += \
-	vi.o mxgpu_vi.o
+	vi.o mxgpu_vi.o nbio_v6_1.o
 
 # add GMC block
 amdgpu-y += \
diff --git a/drivers/gpu/drm/amd/amdgpu/nbio_v6_1.c b/drivers/gpu/drm/amd/amdgpu/nbio_v6_1.c
new file mode 100644
index 0000000..f517e9a
--- /dev/null
+++ b/drivers/gpu/drm/amd/amdgpu/nbio_v6_1.c
@@ -0,0 +1,233 @@
+/*
+ * Copyright 2016 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+#include "amdgpu.h"
+#include "amdgpu_atombios.h"
+#include "nbio_v6_1.h"
+
+#include "vega10/soc15ip.h"
+#include "vega10/NBIO/nbio_6_1_default.h"
+#include "vega10/NBIO/nbio_6_1_offset.h"
+#include "vega10/NBIO/nbio_6_1_sh_mask.h"
+#include "vega10/vega10_enum.h"
+
+#define smnCPM_CONTROL                                                                                  0x11180460
+#define smnPCIE_CNTL2                                                                                   0x11180070
+
+u32 nbio_v6_1_get_rev_id(struct amdgpu_device *adev)
+{
+	u32 tmp = RREG32(SOC15_REG_OFFSET(NBIO, 0, mmRCC_DEV0_EPF0_STRAP0));
+
+	tmp &= RCC_DEV0_EPF0_STRAP0__STRAP_ATI_REV_ID_DEV0_F0_MASK;
+	tmp >>= RCC_DEV0_EPF0_STRAP0__STRAP_ATI_REV_ID_DEV0_F0__SHIFT;
+
+	return tmp;
+}
+
+u32 nbio_v6_1_get_atombios_scratch_regs(struct amdgpu_device *adev,
+					uint32_t idx)
+{
+	return RREG32(SOC15_REG_OFFSET(NBIO, 0, mmBIOS_SCRATCH_0) + idx);
+}
+
+void nbio_v6_1_set_atombios_scratch_regs(struct amdgpu_device *adev,
+					 uint32_t idx, uint32_t val)
+{
+	WREG32(SOC15_REG_OFFSET(NBIO, 0, mmBIOS_SCRATCH_0) + idx, val);
+}
+
+void nbio_v6_1_mc_access_enable(struct amdgpu_device *adev, bool enable)
+{
+	if (enable)
+		WREG32(SOC15_REG_OFFSET(NBIO, 0, mmBIF_FB_EN),
+			BIF_FB_EN__FB_READ_EN_MASK | BIF_FB_EN__FB_WRITE_EN_MASK);
+	else
+		WREG32(SOC15_REG_OFFSET(NBIO, 0, mmBIF_FB_EN), 0);
+}
+
+void nbio_v6_1_hdp_flush(struct amdgpu_device *adev)
+{
+	WREG32(SOC15_REG_OFFSET(NBIO, 0, mmBIF_BX_PF0_HDP_MEM_COHERENCY_FLUSH_CNTL), 0);
+}
+
+u32 nbio_v6_1_get_memsize(struct amdgpu_device *adev)
+{
+	return RREG32(SOC15_REG_OFFSET(NBIO, 0, mmRCC_PF_0_0_RCC_CONFIG_MEMSIZE));
+}
+
+static const u32 nbio_sdma_doorbell_range_reg[] =
+{
+	SOC15_REG_OFFSET(NBIO, 0, mmBIF_SDMA0_DOORBELL_RANGE),
+	SOC15_REG_OFFSET(NBIO, 0, mmBIF_SDMA1_DOORBELL_RANGE)
+};
+
+void nbio_v6_1_sdma_doorbell_range(struct amdgpu_device *adev, int instance,
+				  bool use_doorbell, int doorbell_index)
+{
+	u32 doorbell_range = RREG32(nbio_sdma_doorbell_range_reg[instance]);
+
+	if (use_doorbell) {
+		doorbell_range = REG_SET_FIELD(doorbell_range, BIF_SDMA0_DOORBELL_RANGE, OFFSET, doorbell_index);
+		doorbell_range = REG_SET_FIELD(doorbell_range, BIF_SDMA0_DOORBELL_RANGE, SIZE, 2);
+	} else
+		doorbell_range = REG_SET_FIELD(doorbell_range, BIF_SDMA0_DOORBELL_RANGE, SIZE, 0);
+
+	WREG32(nbio_sdma_doorbell_range_reg[instance], doorbell_range);
+}
+
+void nbio_v6_1_enable_doorbell_aperture(struct amdgpu_device *adev,
+					bool enable)
+{
+	u32 tmp;
+
+	tmp = RREG32(SOC15_REG_OFFSET(NBIO, 0, mmRCC_PF_0_0_RCC_DOORBELL_APER_EN));
+	if (enable)
+		tmp = REG_SET_FIELD(tmp, RCC_PF_0_0_RCC_DOORBELL_APER_EN, BIF_DOORBELL_APER_EN, 1);
+	else
+		tmp = REG_SET_FIELD(tmp, RCC_PF_0_0_RCC_DOORBELL_APER_EN, BIF_DOORBELL_APER_EN, 0);
+
+	WREG32(SOC15_REG_OFFSET(NBIO, 0, mmRCC_PF_0_0_RCC_DOORBELL_APER_EN), tmp);
+}
+
+void nbio_v6_1_enable_doorbell_selfring_aperture(struct amdgpu_device *adev,
+					bool enable)
+{
+	u32 tmp = 0;
+
+	if (enable) {
+		tmp = REG_SET_FIELD(tmp, BIF_BX_PF0_DOORBELL_SELFRING_GPA_APER_CNTL, DOORBELL_SELFRING_GPA_APER_EN, 1) |
+			REG_SET_FIELD(tmp, BIF_BX_PF0_DOORBELL_SELFRING_GPA_APER_CNTL, DOORBELL_SELFRING_GPA_APER_MODE, 1) |
+			REG_SET_FIELD(tmp, BIF_BX_PF0_DOORBELL_SELFRING_GPA_APER_CNTL, DOORBELL_SELFRING_GPA_APER_SIZE, 0);
+
+		WREG32(SOC15_REG_OFFSET(NBIO, 0, mmBIF_BX_PF0_DOORBELL_SELFRING_GPA_APER_BASE_LOW),
+				       lower_32_bits(adev->doorbell.base));
+		WREG32(SOC15_REG_OFFSET(NBIO, 0, mmBIF_BX_PF0_DOORBELL_SELFRING_GPA_APER_BASE_HIGH),
+				       upper_32_bits(adev->doorbell.base));
+	}
+
+	WREG32(SOC15_REG_OFFSET(NBIO, 0, mmBIF_BX_PF0_DOORBELL_SELFRING_GPA_APER_CNTL), tmp);
+}
+
+
+void nbio_v6_1_ih_doorbell_range(struct amdgpu_device *adev,
+				bool use_doorbell, int doorbell_index)
+{
+	u32 ih_doorbell_range = RREG32(SOC15_REG_OFFSET(NBIO, 0, mmBIF_IH_DOORBELL_RANGE));
+
+	if (use_doorbell) {
+		ih_doorbell_range = REG_SET_FIELD(ih_doorbell_range, BIF_IH_DOORBELL_RANGE, OFFSET, doorbell_index);
+		ih_doorbell_range = REG_SET_FIELD(ih_doorbell_range, BIF_IH_DOORBELL_RANGE, SIZE, 2);
+	} else
+		ih_doorbell_range = REG_SET_FIELD(ih_doorbell_range, BIF_IH_DOORBELL_RANGE, SIZE, 0);
+
+	WREG32(SOC15_REG_OFFSET(NBIO, 0, mmBIF_IH_DOORBELL_RANGE), ih_doorbell_range);
+}
+
+void nbio_v6_1_ih_control(struct amdgpu_device *adev)
+{
+	u32 interrupt_cntl;
+
+	/* setup interrupt control */
+	WREG32(SOC15_REG_OFFSET(NBIO, 0, mmINTERRUPT_CNTL2), adev->dummy_page.addr >> 8);
+	interrupt_cntl = RREG32(SOC15_REG_OFFSET(NBIO, 0, mmINTERRUPT_CNTL));
+	/* INTERRUPT_CNTL__IH_DUMMY_RD_OVERRIDE_MASK=0 - dummy read disabled with msi, enabled without msi
+	 * INTERRUPT_CNTL__IH_DUMMY_RD_OVERRIDE_MASK=1 - dummy read controlled by IH_DUMMY_RD_EN
+	 */
+	interrupt_cntl = REG_SET_FIELD(interrupt_cntl, INTERRUPT_CNTL, IH_DUMMY_RD_OVERRIDE, 0);
+	/* INTERRUPT_CNTL__IH_REQ_NONSNOOP_EN_MASK=1 if ring is in non-cacheable memory, e.g., vram */
+	interrupt_cntl = REG_SET_FIELD(interrupt_cntl, INTERRUPT_CNTL, IH_REQ_NONSNOOP_EN, 0);
+	WREG32(SOC15_REG_OFFSET(NBIO, 0, mmINTERRUPT_CNTL), interrupt_cntl);
+}
+
+void nbio_v6_1_update_medium_grain_clock_gating(struct amdgpu_device *adev,
+						bool enable)
+{
+	uint32_t def, data;
+
+	def = data = RREG32_PCIE(smnCPM_CONTROL);
+	if (enable && (adev->cg_flags & AMD_CG_SUPPORT_BIF_MGCG)) {
+		data |= (CPM_CONTROL__LCLK_DYN_GATE_ENABLE_MASK |
+			 CPM_CONTROL__TXCLK_DYN_GATE_ENABLE_MASK |
+			 CPM_CONTROL__TXCLK_PERM_GATE_ENABLE_MASK |
+			 CPM_CONTROL__TXCLK_LCNT_GATE_ENABLE_MASK |
+			 CPM_CONTROL__TXCLK_REGS_GATE_ENABLE_MASK |
+			 CPM_CONTROL__TXCLK_PRBS_GATE_ENABLE_MASK |
+			 CPM_CONTROL__REFCLK_REGS_GATE_ENABLE_MASK);
+	} else {
+		data &= ~(CPM_CONTROL__LCLK_DYN_GATE_ENABLE_MASK |
+			  CPM_CONTROL__TXCLK_DYN_GATE_ENABLE_MASK |
+			  CPM_CONTROL__TXCLK_PERM_GATE_ENABLE_MASK |
+			  CPM_CONTROL__TXCLK_LCNT_GATE_ENABLE_MASK |
+			  CPM_CONTROL__TXCLK_REGS_GATE_ENABLE_MASK |
+			  CPM_CONTROL__TXCLK_PRBS_GATE_ENABLE_MASK |
+			  CPM_CONTROL__REFCLK_REGS_GATE_ENABLE_MASK);
+	}
+
+	if (def != data)
+		WREG32_PCIE(smnCPM_CONTROL, data);
+}
+
+void nbio_v6_1_update_medium_grain_light_sleep(struct amdgpu_device *adev,
+					       bool enable)
+{
+	uint32_t def, data;
+
+	def = data = RREG32_PCIE(smnPCIE_CNTL2);
+	if (enable && (adev->cg_flags & AMD_CG_SUPPORT_BIF_LS)) {
+		data |= (PCIE_CNTL2__SLV_MEM_LS_EN_MASK |
+			 PCIE_CNTL2__MST_MEM_LS_EN_MASK |
+			 PCIE_CNTL2__REPLAY_MEM_LS_EN_MASK);
+	} else {
+		data &= ~(PCIE_CNTL2__SLV_MEM_LS_EN_MASK |
+			  PCIE_CNTL2__MST_MEM_LS_EN_MASK |
+			  PCIE_CNTL2__REPLAY_MEM_LS_EN_MASK);
+	}
+
+	if (def != data)
+		WREG32_PCIE(smnPCIE_CNTL2, data);
+}
+
+struct nbio_hdp_flush_reg nbio_v6_1_hdp_flush_reg;
+struct nbio_pcie_index_data nbio_v6_1_pcie_index_data;
+
+int nbio_v6_1_init(struct amdgpu_device *adev)
+{
+	nbio_v6_1_hdp_flush_reg.hdp_flush_req_offset = SOC15_REG_OFFSET(NBIO, 0, mmBIF_BX_PF0_GPU_HDP_FLUSH_REQ);
+	nbio_v6_1_hdp_flush_reg.hdp_flush_done_offset = SOC15_REG_OFFSET(NBIO, 0, mmBIF_BX_PF0_GPU_HDP_FLUSH_DONE);
+	nbio_v6_1_hdp_flush_reg.ref_and_mask_cp0 = BIF_BX_PF0_GPU_HDP_FLUSH_DONE__CP0_MASK;
+	nbio_v6_1_hdp_flush_reg.ref_and_mask_cp1 = BIF_BX_PF0_GPU_HDP_FLUSH_DONE__CP1_MASK;
+	nbio_v6_1_hdp_flush_reg.ref_and_mask_cp2 = BIF_BX_PF0_GPU_HDP_FLUSH_DONE__CP2_MASK;
+	nbio_v6_1_hdp_flush_reg.ref_and_mask_cp3 = BIF_BX_PF0_GPU_HDP_FLUSH_DONE__CP3_MASK;
+	nbio_v6_1_hdp_flush_reg.ref_and_mask_cp4 = BIF_BX_PF0_GPU_HDP_FLUSH_DONE__CP4_MASK;
+	nbio_v6_1_hdp_flush_reg.ref_and_mask_cp5 = BIF_BX_PF0_GPU_HDP_FLUSH_DONE__CP5_MASK;
+	nbio_v6_1_hdp_flush_reg.ref_and_mask_cp6 = BIF_BX_PF0_GPU_HDP_FLUSH_DONE__CP6_MASK;
+	nbio_v6_1_hdp_flush_reg.ref_and_mask_cp7 = BIF_BX_PF0_GPU_HDP_FLUSH_DONE__CP7_MASK;
+	nbio_v6_1_hdp_flush_reg.ref_and_mask_cp8 = BIF_BX_PF0_GPU_HDP_FLUSH_DONE__CP8_MASK;
+	nbio_v6_1_hdp_flush_reg.ref_and_mask_cp9 = BIF_BX_PF0_GPU_HDP_FLUSH_DONE__CP9_MASK;
+	nbio_v6_1_hdp_flush_reg.ref_and_mask_sdma0 = BIF_BX_PF0_GPU_HDP_FLUSH_DONE__SDMA0_MASK;
+	nbio_v6_1_hdp_flush_reg.ref_and_mask_sdma1 = BIF_BX_PF0_GPU_HDP_FLUSH_DONE__SDMA1_MASK;
+
+	nbio_v6_1_pcie_index_data.index_offset = SOC15_REG_OFFSET(NBIO, 0, mmPCIE_INDEX);
+	nbio_v6_1_pcie_index_data.data_offset = SOC15_REG_OFFSET(NBIO, 0, mmPCIE_DATA);
+
+	return 0;
+}
diff --git a/drivers/gpu/drm/amd/amdgpu/nbio_v6_1.h b/drivers/gpu/drm/amd/amdgpu/nbio_v6_1.h
new file mode 100644
index 0000000..a778d1c
--- /dev/null
+++ b/drivers/gpu/drm/amd/amdgpu/nbio_v6_1.h
@@ -0,0 +1,52 @@
+/*
+ * Copyright 2016 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __NBIO_V6_1_H__
+#define __NBIO_V6_1_H__
+
+#include "soc15_common.h"
+
+extern struct nbio_hdp_flush_reg nbio_v6_1_hdp_flush_reg;
+extern struct nbio_pcie_index_data nbio_v6_1_pcie_index_data;
+int nbio_v6_1_init(struct amdgpu_device *adev);
+u32 nbio_v6_1_get_atombios_scratch_regs(struct amdgpu_device *adev,
+                                        uint32_t idx);
+void nbio_v6_1_set_atombios_scratch_regs(struct amdgpu_device *adev,
+                                         uint32_t idx, uint32_t val);
+void nbio_v6_1_mc_access_enable(struct amdgpu_device *adev, bool enable);
+void nbio_v6_1_hdp_flush(struct amdgpu_device *adev);
+u32 nbio_v6_1_get_memsize(struct amdgpu_device *adev);
+void nbio_v6_1_sdma_doorbell_range(struct amdgpu_device *adev, int instance,
+				  bool use_doorbell, int doorbell_index);
+void nbio_v6_1_enable_doorbell_aperture(struct amdgpu_device *adev,
+					bool enable);
+void nbio_v6_1_enable_doorbell_selfring_aperture(struct amdgpu_device *adev,
+					bool enable);
+void nbio_v6_1_ih_doorbell_range(struct amdgpu_device *adev,
+				bool use_doorbell, int doorbell_index);
+void nbio_v6_1_ih_control(struct amdgpu_device *adev);
+u32 nbio_v6_1_get_rev_id(struct amdgpu_device *adev);
+void nbio_v6_1_update_medium_grain_clock_gating(struct amdgpu_device *adev, bool enable);
+void nbio_v6_1_update_medium_grain_light_sleep(struct amdgpu_device *adev, bool enable);
+
+#endif
-- 
2.5.5


* [PATCH 046/100] drm/amdgpu: Add GMC 9.0 support
       [not found] ` <1490041835-11255-1-git-send-email-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (29 preceding siblings ...)
  2017-03-20 20:29   ` [PATCH 045/100] drm/amdgpu: add NBIO 6.1 driver Alex Deucher
@ 2017-03-20 20:29   ` Alex Deucher
       [not found]     ` <1490041835-11255-32-git-send-email-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
  2017-03-20 20:29   ` [PATCH 047/100] drm/amdgpu: add SDMA v4.0 implementation Alex Deucher
                     ` (54 subsequent siblings)
  85 siblings, 1 reply; 101+ messages in thread
From: Alex Deucher @ 2017-03-20 20:29 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Alex Deucher, Alex Xie

From: Alex Xie <AlexBin.Xie@amd.com>

On SOC-15 parts, the GMC (Graphics Memory Controller) consists
of two hubs: GFX (graphics and compute) and MM (SDMA, UVD, VCE).

Signed-off-by: Alex Xie <AlexBin.Xie@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/Makefile      |   6 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu.h      |  30 ++
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c   |  28 +-
 drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c | 447 +++++++++++++++++
 drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.h |  35 ++
 drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c    | 826 +++++++++++++++++++++++++++++++
 drivers/gpu/drm/amd/amdgpu/gmc_v9_0.h    |  30 ++
 drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c  | 585 ++++++++++++++++++++++
 drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.h  |  35 ++
 drivers/gpu/drm/amd/include/amd_shared.h |   2 +
 10 files changed, 2016 insertions(+), 8 deletions(-)
 create mode 100644 drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c
 create mode 100644 drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.h
 create mode 100644 drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
 create mode 100644 drivers/gpu/drm/amd/amdgpu/gmc_v9_0.h
 create mode 100644 drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c
 create mode 100644 drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.h

diff --git a/drivers/gpu/drm/amd/amdgpu/Makefile b/drivers/gpu/drm/amd/amdgpu/Makefile
index 69823e8..b5046fd 100644
--- a/drivers/gpu/drm/amd/amdgpu/Makefile
+++ b/drivers/gpu/drm/amd/amdgpu/Makefile
@@ -45,7 +45,8 @@ amdgpu-y += \
 # add GMC block
 amdgpu-y += \
 	gmc_v7_0.o \
-	gmc_v8_0.o
+	gmc_v8_0.o \
+	gfxhub_v1_0.o mmhub_v1_0.o gmc_v9_0.o
 
 # add IH block
 amdgpu-y += \
@@ -74,7 +75,8 @@ amdgpu-y += \
 # add async DMA block
 amdgpu-y += \
 	sdma_v2_4.o \
-	sdma_v3_0.o
+	sdma_v3_0.o \
+	sdma_v4_0.o
 
 # add UVD block
 amdgpu-y += \
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
index aaded8d..d7257b6 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
@@ -123,6 +123,11 @@ extern int amdgpu_param_buf_per_se;
 /* max number of IP instances */
 #define AMDGPU_MAX_SDMA_INSTANCES		2
 
+/* max number of VMHUB */
+#define AMDGPU_MAX_VMHUBS			2
+#define AMDGPU_MMHUB				0
+#define AMDGPU_GFXHUB				1
+
 /* hardcode that limit for now */
 #define AMDGPU_VA_RESERVED_SIZE			(8 << 20)
 
@@ -310,6 +315,12 @@ struct amdgpu_gart_funcs {
 				     uint32_t flags);
 };
 
+/* provided by the mc block */
+struct amdgpu_mc_funcs {
+	/* adjust mc addr in fb for APU case */
+	u64 (*adjust_mc_addr)(struct amdgpu_device *adev, u64 addr);
+};
+
 /* provided by the ih block */
 struct amdgpu_ih_funcs {
 	/* ring read/write ptr handling, called from interrupt context */
@@ -559,6 +570,21 @@ int amdgpu_gart_bind(struct amdgpu_device *adev, uint64_t offset,
 int amdgpu_ttm_recover_gart(struct amdgpu_device *adev);
 
 /*
+ * VMHUB structures, functions & helpers
+ */
+struct amdgpu_vmhub {
+	uint32_t	ctx0_ptb_addr_lo32;
+	uint32_t	ctx0_ptb_addr_hi32;
+	uint32_t	vm_inv_eng0_req;
+	uint32_t	vm_inv_eng0_ack;
+	uint32_t	vm_context0_cntl;
+	uint32_t	vm_l2_pro_fault_status;
+	uint32_t	vm_l2_pro_fault_cntl;
+	uint32_t	(*get_invalidate_req)(unsigned int vm_id);
+	uint32_t	(*get_vm_protection_bits)(void);
+};
+
+/*
  * GPU MC structures, functions & helpers
  */
 struct amdgpu_mc {
@@ -591,6 +617,9 @@ struct amdgpu_mc {
 	u64					shared_aperture_end;
 	u64					private_aperture_start;
 	u64					private_aperture_end;
+	/* protects concurrent invalidation */
+	spinlock_t		invalidate_lock;
+	const struct amdgpu_mc_funcs *mc_funcs;
 };
 
 /*
@@ -1479,6 +1508,7 @@ struct amdgpu_device {
 	struct amdgpu_gart		gart;
 	struct amdgpu_dummy_page	dummy_page;
 	struct amdgpu_vm_manager	vm_manager;
+	struct amdgpu_vmhub             vmhub[AMDGPU_MAX_VMHUBS];
 
 	/* memory management */
 	struct amdgpu_mman		mman;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index df615d7..47a8080 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -375,6 +375,16 @@ static bool amdgpu_vm_ring_has_compute_vm_bug(struct amdgpu_ring *ring)
 	return false;
 }
 
+static u64 amdgpu_vm_adjust_mc_addr(struct amdgpu_device *adev, u64 mc_addr)
+{
+	u64 addr = mc_addr;
+
+	if (adev->mc.mc_funcs && adev->mc.mc_funcs->adjust_mc_addr)
+		addr = adev->mc.mc_funcs->adjust_mc_addr(adev, addr);
+
+	return addr;
+}
+
 /**
  * amdgpu_vm_flush - hardware flush the vm
  *
@@ -405,9 +415,10 @@ int amdgpu_vm_flush(struct amdgpu_ring *ring, struct amdgpu_job *job)
 	if (ring->funcs->emit_vm_flush && (job->vm_needs_flush ||
 	    amdgpu_vm_is_gpu_reset(adev, id))) {
 		struct fence *fence;
+		u64 pd_addr = amdgpu_vm_adjust_mc_addr(adev, job->vm_pd_addr);
 
-		trace_amdgpu_vm_flush(job->vm_pd_addr, ring->idx, job->vm_id);
-		amdgpu_ring_emit_vm_flush(ring, job->vm_id, job->vm_pd_addr);
+		trace_amdgpu_vm_flush(pd_addr, ring->idx, job->vm_id);
+		amdgpu_ring_emit_vm_flush(ring, job->vm_id, pd_addr);
 
 		r = amdgpu_fence_emit(ring, &fence);
 		if (r)
@@ -643,15 +654,18 @@ int amdgpu_vm_update_page_directory(struct amdgpu_device *adev,
 		    (count == AMDGPU_VM_MAX_UPDATE_SIZE)) {
 
 			if (count) {
+				uint64_t pt_addr =
+					amdgpu_vm_adjust_mc_addr(adev, last_pt);
+
 				if (shadow)
 					amdgpu_vm_do_set_ptes(&params,
 							      last_shadow,
-							      last_pt, count,
+							      pt_addr, count,
 							      incr,
 							      AMDGPU_PTE_VALID);
 
 				amdgpu_vm_do_set_ptes(&params, last_pde,
-						      last_pt, count, incr,
+						      pt_addr, count, incr,
 						      AMDGPU_PTE_VALID);
 			}
 
@@ -665,11 +679,13 @@ int amdgpu_vm_update_page_directory(struct amdgpu_device *adev,
 	}
 
 	if (count) {
+		uint64_t pt_addr = amdgpu_vm_adjust_mc_addr(adev, last_pt);
+
 		if (vm->page_directory->shadow)
-			amdgpu_vm_do_set_ptes(&params, last_shadow, last_pt,
+			amdgpu_vm_do_set_ptes(&params, last_shadow, pt_addr,
 					      count, incr, AMDGPU_PTE_VALID);
 
-		amdgpu_vm_do_set_ptes(&params, last_pde, last_pt,
+		amdgpu_vm_do_set_ptes(&params, last_pde, pt_addr,
 				      count, incr, AMDGPU_PTE_VALID);
 	}
 
diff --git a/drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c b/drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c
new file mode 100644
index 0000000..1ff019c
--- /dev/null
+++ b/drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c
@@ -0,0 +1,447 @@
+/*
+ * Copyright 2016 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+#include "amdgpu.h"
+#include "gfxhub_v1_0.h"
+
+#include "vega10/soc15ip.h"
+#include "vega10/GC/gc_9_0_offset.h"
+#include "vega10/GC/gc_9_0_sh_mask.h"
+#include "vega10/GC/gc_9_0_default.h"
+#include "vega10/vega10_enum.h"
+
+#include "soc15_common.h"
+
+int gfxhub_v1_0_gart_enable(struct amdgpu_device *adev)
+{
+	u32 tmp;
+	u64 value;
+	u32 i;
+
+	/* Program MC. */
+	/* Update configuration */
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmMC_VM_SYSTEM_APERTURE_LOW_ADDR),
+		adev->mc.vram_start >> 18);
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmMC_VM_SYSTEM_APERTURE_HIGH_ADDR),
+		adev->mc.vram_end >> 18);
+
+	value = adev->vram_scratch.gpu_addr - adev->mc.vram_start
+		+ adev->vm_manager.vram_base_offset;
+	WREG32(SOC15_REG_OFFSET(GC, 0,
+				mmMC_VM_SYSTEM_APERTURE_DEFAULT_ADDR_LSB),
+				(u32)(value >> 12));
+	WREG32(SOC15_REG_OFFSET(GC, 0,
+				mmMC_VM_SYSTEM_APERTURE_DEFAULT_ADDR_MSB),
+				(u32)(value >> 44));
+
+	/* Disable AGP. */
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmMC_VM_AGP_BASE), 0);
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmMC_VM_AGP_TOP), 0);
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmMC_VM_AGP_BOT), 0xFFFFFFFF);
+
+	/* GART Enable. */
+
+	/* Setup TLB control */
+	tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmMC_VM_MX_L1_TLB_CNTL));
+	tmp = REG_SET_FIELD(tmp, MC_VM_MX_L1_TLB_CNTL, ENABLE_L1_TLB, 1);
+	tmp = REG_SET_FIELD(tmp,
+				MC_VM_MX_L1_TLB_CNTL,
+				SYSTEM_ACCESS_MODE,
+				3);
+	tmp = REG_SET_FIELD(tmp,
+				MC_VM_MX_L1_TLB_CNTL,
+				ENABLE_ADVANCED_DRIVER_MODEL,
+				1);
+	tmp = REG_SET_FIELD(tmp,
+				MC_VM_MX_L1_TLB_CNTL,
+				SYSTEM_APERTURE_UNMAPPED_ACCESS,
+				0);
+	tmp = REG_SET_FIELD(tmp,
+				MC_VM_MX_L1_TLB_CNTL,
+				ECO_BITS,
+				0);
+	tmp = REG_SET_FIELD(tmp,
+				MC_VM_MX_L1_TLB_CNTL,
+				MTYPE,
+				MTYPE_UC); /* XXX for emulation. */

+	tmp = REG_SET_FIELD(tmp,
+				MC_VM_MX_L1_TLB_CNTL,
+				ATC_EN,
+				1);
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmMC_VM_MX_L1_TLB_CNTL), tmp);
+
+	/* Setup L2 cache */
+	tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_CNTL));
+	tmp = REG_SET_FIELD(tmp, VM_L2_CNTL, ENABLE_L2_CACHE, 1);
+	tmp = REG_SET_FIELD(tmp,
+				VM_L2_CNTL,
+				ENABLE_L2_FRAGMENT_PROCESSING,
+				0);
+	tmp = REG_SET_FIELD(tmp,
+				VM_L2_CNTL,
+				L2_PDE0_CACHE_TAG_GENERATION_MODE,
+				0); /* XXX for emulation; refer to closed source code. */
+	tmp = REG_SET_FIELD(tmp, VM_L2_CNTL, PDE_FAULT_CLASSIFICATION, 1);
+	tmp = REG_SET_FIELD(tmp,
+				VM_L2_CNTL,
+				CONTEXT1_IDENTITY_ACCESS_MODE,
+				1);
+	tmp = REG_SET_FIELD(tmp,
+				VM_L2_CNTL,
+				IDENTITY_MODE_FRAGMENT_SIZE,
+				0);
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_CNTL), tmp);
+
+	tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_CNTL2));
+	tmp = REG_SET_FIELD(tmp, VM_L2_CNTL2, INVALIDATE_ALL_L1_TLBS, 1);
+	tmp = REG_SET_FIELD(tmp, VM_L2_CNTL2, INVALIDATE_L2_CACHE, 1);
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_CNTL2), tmp);
+
+	tmp = mmVM_L2_CNTL3_DEFAULT;
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_CNTL3), tmp);
+
+	tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_CNTL4));
+	tmp = REG_SET_FIELD(tmp,
+			    VM_L2_CNTL4,
+			    VMC_TAP_PDE_REQUEST_PHYSICAL,
+			    0);
+	tmp = REG_SET_FIELD(tmp,
+			    VM_L2_CNTL4,
+			    VMC_TAP_PTE_REQUEST_PHYSICAL,
+			    0);
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_CNTL4), tmp);
+
+	/* setup context0 */
+	WREG32(SOC15_REG_OFFSET(GC, 0,
+				mmVM_CONTEXT0_PAGE_TABLE_START_ADDR_LO32),
+		(u32)(adev->mc.gtt_start >> 12));
+	WREG32(SOC15_REG_OFFSET(GC, 0,
+				mmVM_CONTEXT0_PAGE_TABLE_START_ADDR_HI32),
+		(u32)(adev->mc.gtt_start >> 44));
+
+	WREG32(SOC15_REG_OFFSET(GC, 0,
+				mmVM_CONTEXT0_PAGE_TABLE_END_ADDR_LO32),
+		(u32)(adev->mc.gtt_end >> 12));
+	WREG32(SOC15_REG_OFFSET(GC, 0,
+				mmVM_CONTEXT0_PAGE_TABLE_END_ADDR_HI32),
+		(u32)(adev->mc.gtt_end >> 44));
+
+	BUG_ON(adev->gart.table_addr & (~0x0000FFFFFFFFF000ULL));
+	value = adev->gart.table_addr - adev->mc.vram_start
+		+ adev->vm_manager.vram_base_offset;
+	value &= 0x0000FFFFFFFFF000ULL;
+	value |= 0x1; /*valid bit*/
+
+	WREG32(SOC15_REG_OFFSET(GC, 0,
+				mmVM_CONTEXT0_PAGE_TABLE_BASE_ADDR_LO32),
+		(u32)value);
+	WREG32(SOC15_REG_OFFSET(GC, 0,
+				mmVM_CONTEXT0_PAGE_TABLE_BASE_ADDR_HI32),
+		(u32)(value >> 32));
+
+	WREG32(SOC15_REG_OFFSET(GC, 0,
+				mmVM_L2_PROTECTION_FAULT_DEFAULT_ADDR_LO32),
+		(u32)(adev->dummy_page.addr >> 12));
+	WREG32(SOC15_REG_OFFSET(GC, 0,
+				mmVM_L2_PROTECTION_FAULT_DEFAULT_ADDR_HI32),
+		(u32)(adev->dummy_page.addr >> 44));
+
+	tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_PROTECTION_FAULT_CNTL2));
+	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL2,
+			    ACTIVE_PAGE_MIGRATION_PTE_READ_RETRY,
+			    1);
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_PROTECTION_FAULT_CNTL2), tmp);
+
+	tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmVM_CONTEXT0_CNTL));
+	tmp = REG_SET_FIELD(tmp, VM_CONTEXT0_CNTL, ENABLE_CONTEXT, 1);
+	tmp = REG_SET_FIELD(tmp, VM_CONTEXT0_CNTL, PAGE_TABLE_DEPTH, 0);
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_CONTEXT0_CNTL), tmp);
+
+	/* Disable identity aperture.*/
+	WREG32(SOC15_REG_OFFSET(GC, 0,
+		mmVM_L2_CONTEXT1_IDENTITY_APERTURE_LOW_ADDR_LO32), 0xFFFFFFFF);
+	WREG32(SOC15_REG_OFFSET(GC, 0,
+		mmVM_L2_CONTEXT1_IDENTITY_APERTURE_LOW_ADDR_HI32), 0x0000000F);
+
+	WREG32(SOC15_REG_OFFSET(GC, 0,
+		mmVM_L2_CONTEXT1_IDENTITY_APERTURE_HIGH_ADDR_LO32), 0);
+	WREG32(SOC15_REG_OFFSET(GC, 0,
+		mmVM_L2_CONTEXT1_IDENTITY_APERTURE_HIGH_ADDR_HI32), 0);
+
+	WREG32(SOC15_REG_OFFSET(GC, 0,
+		mmVM_L2_CONTEXT_IDENTITY_PHYSICAL_OFFSET_LO32), 0);
+	WREG32(SOC15_REG_OFFSET(GC, 0,
+		mmVM_L2_CONTEXT_IDENTITY_PHYSICAL_OFFSET_HI32), 0);
+
+	for (i = 0; i <= 14; i++) {
+		tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmVM_CONTEXT1_CNTL) + i);
+		tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL, ENABLE_CONTEXT, 1);
+		tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL, PAGE_TABLE_DEPTH, 1);
+		tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
+				RANGE_PROTECTION_FAULT_ENABLE_DEFAULT, 1);
+		tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
+				DUMMY_PAGE_PROTECTION_FAULT_ENABLE_DEFAULT, 1);
+		tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
+				PDE0_PROTECTION_FAULT_ENABLE_DEFAULT, 1);
+		tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
+				VALID_PROTECTION_FAULT_ENABLE_DEFAULT, 1);
+		tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
+				READ_PROTECTION_FAULT_ENABLE_DEFAULT, 1);
+		tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
+				WRITE_PROTECTION_FAULT_ENABLE_DEFAULT, 1);
+		tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
+				EXECUTE_PROTECTION_FAULT_ENABLE_DEFAULT, 1);
+		tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
+				PAGE_TABLE_BLOCK_SIZE,
+				amdgpu_vm_block_size - 9);
+		WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_CONTEXT1_CNTL) + i, tmp);
+		WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_CONTEXT1_PAGE_TABLE_START_ADDR_LO32) + i*2, 0);
+		WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_CONTEXT1_PAGE_TABLE_START_ADDR_HI32) + i*2, 0);
+		WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_CONTEXT1_PAGE_TABLE_END_ADDR_LO32) + i*2,
+				adev->vm_manager.max_pfn - 1);
+		WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_CONTEXT1_PAGE_TABLE_END_ADDR_HI32) + i*2, 0);
+	}
+
+
+	return 0;
+}
+
+void gfxhub_v1_0_gart_disable(struct amdgpu_device *adev)
+{
+	u32 tmp;
+	u32 i;
+
+	/* Disable all tables */
+	for (i = 0; i < 16; i++)
+		WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_CONTEXT0_CNTL) + i, 0);
+
+	/* Setup TLB control */
+	tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmMC_VM_MX_L1_TLB_CNTL));
+	tmp = REG_SET_FIELD(tmp, MC_VM_MX_L1_TLB_CNTL, ENABLE_L1_TLB, 0);
+	tmp = REG_SET_FIELD(tmp,
+				MC_VM_MX_L1_TLB_CNTL,
+				ENABLE_ADVANCED_DRIVER_MODEL,
+				0);
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmMC_VM_MX_L1_TLB_CNTL), tmp);
+
+	/* Setup L2 cache */
+	tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_CNTL));
+	tmp = REG_SET_FIELD(tmp, VM_L2_CNTL, ENABLE_L2_CACHE, 0);
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_CNTL), tmp);
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_CNTL3), 0);
+}
+
+/**
+ * gfxhub_v1_0_set_fault_enable_default - update GART/VM fault handling
+ *
+ * @adev: amdgpu_device pointer
+ * @value: true redirects VM faults to the default page
+ */
+void gfxhub_v1_0_set_fault_enable_default(struct amdgpu_device *adev,
+					  bool value)
+{
+	u32 tmp;
+	tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_PROTECTION_FAULT_CNTL));
+	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
+			RANGE_PROTECTION_FAULT_ENABLE_DEFAULT, value);
+	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
+			PDE0_PROTECTION_FAULT_ENABLE_DEFAULT, value);
+	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
+			PDE1_PROTECTION_FAULT_ENABLE_DEFAULT, value);
+	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
+			PDE2_PROTECTION_FAULT_ENABLE_DEFAULT, value);
+	tmp = REG_SET_FIELD(tmp,
+			VM_L2_PROTECTION_FAULT_CNTL,
+			TRANSLATE_FURTHER_PROTECTION_FAULT_ENABLE_DEFAULT,
+			value);
+	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
+			NACK_PROTECTION_FAULT_ENABLE_DEFAULT, value);
+	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
+			DUMMY_PAGE_PROTECTION_FAULT_ENABLE_DEFAULT, value);
+	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
+			VALID_PROTECTION_FAULT_ENABLE_DEFAULT, value);
+	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
+			READ_PROTECTION_FAULT_ENABLE_DEFAULT, value);
+	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
+			WRITE_PROTECTION_FAULT_ENABLE_DEFAULT, value);
+	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
+			EXECUTE_PROTECTION_FAULT_ENABLE_DEFAULT, value);
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_PROTECTION_FAULT_CNTL), tmp);
+}
+
+static uint32_t gfxhub_v1_0_get_invalidate_req(unsigned int vm_id)
+{
+	u32 req = 0;
+
+	/* invalidate using legacy mode on vm_id */
+	req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ,
+			    PER_VMID_INVALIDATE_REQ, 1 << vm_id);
+	req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ, FLUSH_TYPE, 0);
+	req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ, INVALIDATE_L2_PTES, 1);
+	req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ, INVALIDATE_L2_PDE0, 1);
+	req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ, INVALIDATE_L2_PDE1, 1);
+	req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ, INVALIDATE_L2_PDE2, 1);
+	req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ, INVALIDATE_L1_PTES, 1);
+	req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ,
+			    CLEAR_PROTECTION_FAULT_STATUS_ADDR,	0);
+
+	return req;
+}
+
+static uint32_t gfxhub_v1_0_get_vm_protection_bits(void)
+{
+	return (VM_CONTEXT1_CNTL__RANGE_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK |
+		    VM_CONTEXT1_CNTL__DUMMY_PAGE_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK |
+		    VM_CONTEXT1_CNTL__PDE0_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK |
+		    VM_CONTEXT1_CNTL__VALID_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK |
+		    VM_CONTEXT1_CNTL__READ_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK |
+		    VM_CONTEXT1_CNTL__WRITE_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK |
+		    VM_CONTEXT1_CNTL__EXECUTE_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK);
+}
+
+static int gfxhub_v1_0_early_init(void *handle)
+{
+	return 0;
+}
+
+static int gfxhub_v1_0_late_init(void *handle)
+{
+	return 0;
+}
+
+static int gfxhub_v1_0_sw_init(void *handle)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+	struct amdgpu_vmhub *hub = &adev->vmhub[AMDGPU_GFXHUB];
+
+	hub->ctx0_ptb_addr_lo32 =
+		SOC15_REG_OFFSET(GC, 0,
+				 mmVM_CONTEXT0_PAGE_TABLE_BASE_ADDR_LO32);
+	hub->ctx0_ptb_addr_hi32 =
+		SOC15_REG_OFFSET(GC, 0,
+				 mmVM_CONTEXT0_PAGE_TABLE_BASE_ADDR_HI32);
+	hub->vm_inv_eng0_req =
+		SOC15_REG_OFFSET(GC, 0, mmVM_INVALIDATE_ENG0_REQ);
+	hub->vm_inv_eng0_ack =
+		SOC15_REG_OFFSET(GC, 0, mmVM_INVALIDATE_ENG0_ACK);
+	hub->vm_context0_cntl =
+		SOC15_REG_OFFSET(GC, 0, mmVM_CONTEXT0_CNTL);
+	hub->vm_l2_pro_fault_status =
+		SOC15_REG_OFFSET(GC, 0, mmVM_L2_PROTECTION_FAULT_STATUS);
+	hub->vm_l2_pro_fault_cntl =
+		SOC15_REG_OFFSET(GC, 0, mmVM_L2_PROTECTION_FAULT_CNTL);
+
+	hub->get_invalidate_req = gfxhub_v1_0_get_invalidate_req;
+	hub->get_vm_protection_bits = gfxhub_v1_0_get_vm_protection_bits;
+
+	return 0;
+}
+
+static int gfxhub_v1_0_sw_fini(void *handle)
+{
+	return 0;
+}
+
+static int gfxhub_v1_0_hw_init(void *handle)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+	unsigned i;
+
+	for (i = 0; i < 18; ++i) {
+		WREG32(SOC15_REG_OFFSET(GC, 0,
+					mmVM_INVALIDATE_ENG0_ADDR_RANGE_LO32) +
+		       2 * i, 0xffffffff);
+		WREG32(SOC15_REG_OFFSET(GC, 0,
+					mmVM_INVALIDATE_ENG0_ADDR_RANGE_HI32) +
+		       2 * i, 0x1f);
+	}
+
+	return 0;
+}
+
+static int gfxhub_v1_0_hw_fini(void *handle)
+{
+	return 0;
+}
+
+static int gfxhub_v1_0_suspend(void *handle)
+{
+	return 0;
+}
+
+static int gfxhub_v1_0_resume(void *handle)
+{
+	return 0;
+}
+
+static bool gfxhub_v1_0_is_idle(void *handle)
+{
+	return true;
+}
+
+static int gfxhub_v1_0_wait_for_idle(void *handle)
+{
+	return 0;
+}
+
+static int gfxhub_v1_0_soft_reset(void *handle)
+{
+	return 0;
+}
+
+static int gfxhub_v1_0_set_clockgating_state(void *handle,
+					  enum amd_clockgating_state state)
+{
+	return 0;
+}
+
+static int gfxhub_v1_0_set_powergating_state(void *handle,
+					  enum amd_powergating_state state)
+{
+	return 0;
+}
+
+const struct amd_ip_funcs gfxhub_v1_0_ip_funcs = {
+	.name = "gfxhub_v1_0",
+	.early_init = gfxhub_v1_0_early_init,
+	.late_init = gfxhub_v1_0_late_init,
+	.sw_init = gfxhub_v1_0_sw_init,
+	.sw_fini = gfxhub_v1_0_sw_fini,
+	.hw_init = gfxhub_v1_0_hw_init,
+	.hw_fini = gfxhub_v1_0_hw_fini,
+	.suspend = gfxhub_v1_0_suspend,
+	.resume = gfxhub_v1_0_resume,
+	.is_idle = gfxhub_v1_0_is_idle,
+	.wait_for_idle = gfxhub_v1_0_wait_for_idle,
+	.soft_reset = gfxhub_v1_0_soft_reset,
+	.set_clockgating_state = gfxhub_v1_0_set_clockgating_state,
+	.set_powergating_state = gfxhub_v1_0_set_powergating_state,
+};
+
+const struct amdgpu_ip_block_version gfxhub_v1_0_ip_block =
+{
+	.type = AMD_IP_BLOCK_TYPE_GFXHUB,
+	.major = 1,
+	.minor = 0,
+	.rev = 0,
+	.funcs = &gfxhub_v1_0_ip_funcs,
+};
diff --git a/drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.h b/drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.h
new file mode 100644
index 0000000..5129a8f
--- /dev/null
+++ b/drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.h
@@ -0,0 +1,35 @@
+/*
+ * Copyright 2016 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __GFXHUB_V1_0_H__
+#define __GFXHUB_V1_0_H__
+
+int gfxhub_v1_0_gart_enable(struct amdgpu_device *adev);
+void gfxhub_v1_0_gart_disable(struct amdgpu_device *adev);
+void gfxhub_v1_0_set_fault_enable_default(struct amdgpu_device *adev,
+					  bool value);
+
+extern const struct amd_ip_funcs gfxhub_v1_0_ip_funcs;
+extern const struct amdgpu_ip_block_version gfxhub_v1_0_ip_block;
+
+#endif
diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
new file mode 100644
index 0000000..5cf0fc3
--- /dev/null
+++ b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
@@ -0,0 +1,826 @@
+/*
+ * Copyright 2016 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+#include <linux/firmware.h>
+#include "amdgpu.h"
+#include "gmc_v9_0.h"
+
+#include "vega10/soc15ip.h"
+#include "vega10/HDP/hdp_4_0_offset.h"
+#include "vega10/HDP/hdp_4_0_sh_mask.h"
+#include "vega10/GC/gc_9_0_sh_mask.h"
+#include "vega10/vega10_enum.h"
+
+#include "soc15_common.h"
+
+#include "nbio_v6_1.h"
+#include "gfxhub_v1_0.h"
+#include "mmhub_v1_0.h"
+
+#define mmDF_CS_AON0_DramBaseAddress0                                                                  0x0044
+#define mmDF_CS_AON0_DramBaseAddress0_BASE_IDX                                                         0
+//DF_CS_AON0_DramBaseAddress0
+#define DF_CS_AON0_DramBaseAddress0__AddrRngVal__SHIFT                                                        0x0
+#define DF_CS_AON0_DramBaseAddress0__LgcyMmioHoleEn__SHIFT                                                    0x1
+#define DF_CS_AON0_DramBaseAddress0__IntLvNumChan__SHIFT                                                      0x4
+#define DF_CS_AON0_DramBaseAddress0__IntLvAddrSel__SHIFT                                                      0x8
+#define DF_CS_AON0_DramBaseAddress0__DramBaseAddr__SHIFT                                                      0xc
+#define DF_CS_AON0_DramBaseAddress0__AddrRngVal_MASK                                                          0x00000001L
+#define DF_CS_AON0_DramBaseAddress0__LgcyMmioHoleEn_MASK                                                      0x00000002L
+#define DF_CS_AON0_DramBaseAddress0__IntLvNumChan_MASK                                                        0x000000F0L
+#define DF_CS_AON0_DramBaseAddress0__IntLvAddrSel_MASK                                                        0x00000700L
+#define DF_CS_AON0_DramBaseAddress0__DramBaseAddr_MASK                                                        0xFFFFF000L
+
+/* XXX Move this macro to VEGA10 header file, which is like vid.h for VI.*/
+#define AMDGPU_NUM_OF_VMIDS			8
+
+static const u32 golden_settings_vega10_hdp[] =
+{
+	0xf64, 0x0fffffff, 0x00000000,
+	0xf65, 0x0fffffff, 0x00000000,
+	0xf66, 0x0fffffff, 0x00000000,
+	0xf67, 0x0fffffff, 0x00000000,
+	0xf68, 0x0fffffff, 0x00000000,
+	0xf6a, 0x0fffffff, 0x00000000,
+	0xf6b, 0x0fffffff, 0x00000000,
+	0xf6c, 0x0fffffff, 0x00000000,
+	0xf6d, 0x0fffffff, 0x00000000,
+	0xf6e, 0x0fffffff, 0x00000000,
+};
+
+static int gmc_v9_0_vm_fault_interrupt_state(struct amdgpu_device *adev,
+					struct amdgpu_irq_src *src,
+					unsigned type,
+					enum amdgpu_interrupt_state state)
+{
+	struct amdgpu_vmhub *hub;
+	u32 tmp, reg, bits, i;
+
+	switch (state) {
+	case AMDGPU_IRQ_STATE_DISABLE:
+		/* MM HUB */
+		hub = &adev->vmhub[AMDGPU_MMHUB];
+		bits = hub->get_vm_protection_bits();
+		for (i = 0; i < 16; i++) {
+			reg = hub->vm_context0_cntl + i;
+			tmp = RREG32(reg);
+			tmp &= ~bits;
+			WREG32(reg, tmp);
+		}
+
+		/* GFX HUB */
+		hub = &adev->vmhub[AMDGPU_GFXHUB];
+		bits = hub->get_vm_protection_bits();
+		for (i = 0; i < 16; i++) {
+			reg = hub->vm_context0_cntl + i;
+			tmp = RREG32(reg);
+			tmp &= ~bits;
+			WREG32(reg, tmp);
+		}
+		break;
+	case AMDGPU_IRQ_STATE_ENABLE:
+		/* MM HUB */
+		hub = &adev->vmhub[AMDGPU_MMHUB];
+		bits = hub->get_vm_protection_bits();
+		for (i = 0; i < 16; i++) {
+			reg = hub->vm_context0_cntl + i;
+			tmp = RREG32(reg);
+			tmp |= bits;
+			WREG32(reg, tmp);
+		}
+
+		/* GFX HUB */
+		hub = &adev->vmhub[AMDGPU_GFXHUB];
+		bits = hub->get_vm_protection_bits();
+		for (i = 0; i < 16; i++) {
+			reg = hub->vm_context0_cntl + i;
+			tmp = RREG32(reg);
+			tmp |= bits;
+			WREG32(reg, tmp);
+		}
+		break;
+	default:
+		break;
+	}
+
+	return 0;
+}
+
+static int gmc_v9_0_process_interrupt(struct amdgpu_device *adev,
+				struct amdgpu_irq_src *source,
+				struct amdgpu_iv_entry *entry)
+{
+	struct amdgpu_vmhub *gfxhub = &adev->vmhub[AMDGPU_GFXHUB];
+	struct amdgpu_vmhub *mmhub = &adev->vmhub[AMDGPU_MMHUB];
+	uint32_t status;
+	u64 addr;
+
+	addr = (u64)entry->src_data[0] << 12;
+	addr |= ((u64)entry->src_data[1] & 0xf) << 44;
+
+	if (entry->vm_id_src) {
+		status = RREG32(mmhub->vm_l2_pro_fault_status);
+		WREG32_P(mmhub->vm_l2_pro_fault_cntl, 1, ~1);
+	} else {
+		status = RREG32(gfxhub->vm_l2_pro_fault_status);
+		WREG32_P(gfxhub->vm_l2_pro_fault_cntl, 1, ~1);
+	}
+
+	DRM_ERROR("[%s]VMC page fault (src_id:%u ring:%u vm_id:%u pas_id:%u) "
+		  "at page 0x%016llx from %d\n"
+		  "VM_L2_PROTECTION_FAULT_STATUS:0x%08X\n",
+		  entry->vm_id_src ? "mmhub" : "gfxhub",
+		  entry->src_id, entry->ring_id, entry->vm_id, entry->pas_id,
+		  addr, entry->client_id, status);
+
+	return 0;
+}
+
+static const struct amdgpu_irq_src_funcs gmc_v9_0_irq_funcs = {
+	.set = gmc_v9_0_vm_fault_interrupt_state,
+	.process = gmc_v9_0_process_interrupt,
+};
+
+static void gmc_v9_0_set_irq_funcs(struct amdgpu_device *adev)
+{
+	adev->mc.vm_fault.num_types = 1;
+	adev->mc.vm_fault.funcs = &gmc_v9_0_irq_funcs;
+}
+
+/*
+ * GART
+ * VMID 0 is the physical GPU addresses as used by the kernel.
+ * VMIDs 1-15 are used for userspace clients and are handled
+ * by the amdgpu vm/hsa code.
+ */
+
+/**
+ * gmc_v9_0_gart_flush_gpu_tlb - gart tlb flush callback
+ *
+ * @adev: amdgpu_device pointer
+ * @vmid: vm instance to flush
+ *
+ * Flush the TLB for the requested page table.
+ */
+static void gmc_v9_0_gart_flush_gpu_tlb(struct amdgpu_device *adev,
+					uint32_t vmid)
+{
+	/* Use register 17 for GART */
+	const unsigned eng = 17;
+	unsigned i, j;
+
+	/* flush hdp cache */
+	nbio_v6_1_hdp_flush(adev);
+
+	spin_lock(&adev->mc.invalidate_lock);
+
+	for (i = 0; i < AMDGPU_MAX_VMHUBS; ++i) {
+		struct amdgpu_vmhub *hub = &adev->vmhub[i];
+		u32 tmp = hub->get_invalidate_req(vmid);
+
+		WREG32(hub->vm_inv_eng0_req + eng, tmp);
+
+		/* Busy wait for ACK.*/
+		for (j = 0; j < 100; j++) {
+			tmp = RREG32(hub->vm_inv_eng0_ack + eng);
+			tmp &= 1 << vmid;
+			if (tmp)
+				break;
+			cpu_relax();
+		}
+		if (j < 100)
+			continue;
+
+		/* Wait for ACK with a delay.*/
+		for (j = 0; j < adev->usec_timeout; j++) {
+			tmp = RREG32(hub->vm_inv_eng0_ack + eng);
+			tmp &= 1 << vmid;
+			if (tmp)
+				break;
+			udelay(1);
+		}
+		if (j < adev->usec_timeout)
+			continue;
+
+		DRM_ERROR("Timeout waiting for VM flush ACK!\n");
+	}
+
+	spin_unlock(&adev->mc.invalidate_lock);
+}
+
+/**
+ * gmc_v9_0_gart_set_pte_pde - update the page tables using MMIO
+ *
+ * @adev: amdgpu_device pointer
+ * @cpu_pt_addr: cpu address of the page table
+ * @gpu_page_idx: entry in the page table to update
+ * @addr: dst addr to write into pte/pde
+ * @flags: access flags
+ *
+ * Update the page tables using the CPU.
+ */
+static int gmc_v9_0_gart_set_pte_pde(struct amdgpu_device *adev,
+					void *cpu_pt_addr,
+					uint32_t gpu_page_idx,
+					uint64_t addr,
+					uint64_t flags)
+{
+	void __iomem *ptr = (void *)cpu_pt_addr;
+	uint64_t value;
+
+	/*
+	 * PTE format on VEGA 10:
+	 * 63:59 reserved
+	 * 58:57 mtype
+	 * 56 F
+	 * 55 L
+	 * 54 P
+	 * 53 SW
+	 * 52 T
+	 * 50:48 reserved
+	 * 47:12 4k physical page base address
+	 * 11:7 fragment
+	 * 6 write
+	 * 5 read
+	 * 4 exe
+	 * 3 Z
+	 * 2 snooped
+	 * 1 system
+	 * 0 valid
+	 *
+	 * PDE format on VEGA 10:
+	 * 63:59 block fragment size
+	 * 58:55 reserved
+	 * 54 P
+	 * 53:48 reserved
+	 * 47:6 physical base address of PD or PTE
+	 * 5:3 reserved
+	 * 2 C
+	 * 1 system
+	 * 0 valid
+	 */
+
+	/*
+	 * The following is for PTE only. GART does not have PDEs.
+	 */
+	value = addr & 0x0000FFFFFFFFF000ULL;
+	value |= flags;
+	writeq(value, ptr + (gpu_page_idx * 8));
+	return 0;
+}
+
+static uint64_t gmc_v9_0_get_vm_pte_flags(struct amdgpu_device *adev,
+						uint32_t flags)
+
+{
+	uint64_t pte_flag = 0;
+
+	if (flags & AMDGPU_VM_PAGE_EXECUTABLE)
+		pte_flag |= AMDGPU_PTE_EXECUTABLE;
+	if (flags & AMDGPU_VM_PAGE_READABLE)
+		pte_flag |= AMDGPU_PTE_READABLE;
+	if (flags & AMDGPU_VM_PAGE_WRITEABLE)
+		pte_flag |= AMDGPU_PTE_WRITEABLE;
+
+	switch (flags & AMDGPU_VM_MTYPE_MASK) {
+	case AMDGPU_VM_MTYPE_DEFAULT:
+		pte_flag |= AMDGPU_PTE_MTYPE(MTYPE_NC);
+		break;
+	case AMDGPU_VM_MTYPE_NC:
+		pte_flag |= AMDGPU_PTE_MTYPE(MTYPE_NC);
+		break;
+	case AMDGPU_VM_MTYPE_WC:
+		pte_flag |= AMDGPU_PTE_MTYPE(MTYPE_WC);
+		break;
+	case AMDGPU_VM_MTYPE_CC:
+		pte_flag |= AMDGPU_PTE_MTYPE(MTYPE_CC);
+		break;
+	case AMDGPU_VM_MTYPE_UC:
+		pte_flag |= AMDGPU_PTE_MTYPE(MTYPE_UC);
+		break;
+	default:
+		pte_flag |= AMDGPU_PTE_MTYPE(MTYPE_NC);
+		break;
+	}
+
+	if (flags & AMDGPU_VM_PAGE_PRT)
+		pte_flag |= AMDGPU_PTE_PRT;
+
+	return pte_flag;
+}
+
+static const struct amdgpu_gart_funcs gmc_v9_0_gart_funcs = {
+	.flush_gpu_tlb = gmc_v9_0_gart_flush_gpu_tlb,
+	.set_pte_pde = gmc_v9_0_gart_set_pte_pde,
+	.get_vm_pte_flags = gmc_v9_0_get_vm_pte_flags
+};
+
+static void gmc_v9_0_set_gart_funcs(struct amdgpu_device *adev)
+{
+	if (adev->gart.gart_funcs == NULL)
+		adev->gart.gart_funcs = &gmc_v9_0_gart_funcs;
+}
+
+static u64 gmc_v9_0_adjust_mc_addr(struct amdgpu_device *adev, u64 mc_addr)
+{
+	return adev->vm_manager.vram_base_offset + mc_addr - adev->mc.vram_start;
+}
+
+static const struct amdgpu_mc_funcs gmc_v9_0_mc_funcs = {
+	.adjust_mc_addr = gmc_v9_0_adjust_mc_addr,
+};
+
+static void gmc_v9_0_set_mc_funcs(struct amdgpu_device *adev)
+{
+	adev->mc.mc_funcs = &gmc_v9_0_mc_funcs;
+}
+
+static int gmc_v9_0_early_init(void *handle)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	gmc_v9_0_set_gart_funcs(adev);
+	gmc_v9_0_set_mc_funcs(adev);
+	gmc_v9_0_set_irq_funcs(adev);
+
+	return 0;
+}
+
+static int gmc_v9_0_late_init(void *handle)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+	return amdgpu_irq_get(adev, &adev->mc.vm_fault, 0);
+}
+
+static void gmc_v9_0_vram_gtt_location(struct amdgpu_device *adev,
+					struct amdgpu_mc *mc)
+{
+	u64 base = mmhub_v1_0_get_fb_location(adev);
+	amdgpu_vram_location(adev, &adev->mc, base);
+	adev->mc.gtt_base_align = 0;
+	amdgpu_gtt_location(adev, mc);
+}
+
+/**
+ * gmc_v9_0_mc_init - initialize the memory controller driver params
+ *
+ * @adev: amdgpu_device pointer
+ *
+ * Look up the amount of vram, vram width, and decide how to place
+ * vram and gart within the GPU's physical address space.
+ * Returns 0 for success.
+ */
+static int gmc_v9_0_mc_init(struct amdgpu_device *adev)
+{
+	u32 tmp;
+	int chansize, numchan;
+
+	/* hbm memory channel size */
+	chansize = 128;
+
+	tmp = RREG32(SOC15_REG_OFFSET(DF, 0, mmDF_CS_AON0_DramBaseAddress0));
+	tmp &= DF_CS_AON0_DramBaseAddress0__IntLvNumChan_MASK;
+	tmp >>= DF_CS_AON0_DramBaseAddress0__IntLvNumChan__SHIFT;
+	switch (tmp) {
+	case 0:
+	default:
+		numchan = 1;
+		break;
+	case 1:
+		numchan = 2;
+		break;
+	case 2:
+		numchan = 0;
+		break;
+	case 3:
+		numchan = 4;
+		break;
+	case 4:
+		numchan = 0;
+		break;
+	case 5:
+		numchan = 8;
+		break;
+	case 6:
+		numchan = 0;
+		break;
+	case 7:
+		numchan = 16;
+		break;
+	case 8:
+		numchan = 2;
+		break;
+	}
+	adev->mc.vram_width = numchan * chansize;
+
+	/* Can the aperture size ever report 0? */
+	adev->mc.aper_base = pci_resource_start(adev->pdev, 0);
+	adev->mc.aper_size = pci_resource_len(adev->pdev, 0);
+	/* memsize reported by NBIO is in MB */
+	adev->mc.mc_vram_size =
+		nbio_v6_1_get_memsize(adev) * 1024ULL * 1024ULL;
+	adev->mc.real_vram_size = adev->mc.mc_vram_size;
+	adev->mc.visible_vram_size = adev->mc.aper_size;
+
+	/* In case the PCI BAR is larger than the actual amount of vram */
+	if (adev->mc.visible_vram_size > adev->mc.real_vram_size)
+		adev->mc.visible_vram_size = adev->mc.real_vram_size;
+
+	/* Unless the user has overridden it, set the GART size to
+	 * 1024 MB or the VRAM size, whichever is larger.
+	 */
+	if (amdgpu_gart_size == -1)
+		adev->mc.gtt_size = max((1024ULL << 20), adev->mc.mc_vram_size);
+	else
+		adev->mc.gtt_size = (uint64_t)amdgpu_gart_size << 20;
+
+	gmc_v9_0_vram_gtt_location(adev, &adev->mc);
+
+	return 0;
+}
+
+static int gmc_v9_0_gart_init(struct amdgpu_device *adev)
+{
+	int r;
+
+	if (adev->gart.robj) {
+		WARN(1, "VEGA10 PCIE GART already initialized\n");
+		return 0;
+	}
+	/* Initialize common gart structure */
+	r = amdgpu_gart_init(adev);
+	if (r)
+		return r;
+	adev->gart.table_size = adev->gart.num_gpu_pages * 8;
+	adev->gart.gart_pte_flags = AMDGPU_PTE_MTYPE(MTYPE_UC) |
+				 AMDGPU_PTE_EXECUTABLE;
+	return amdgpu_gart_table_vram_alloc(adev);
+}
+
+/*
+ * vm
+ * VMID 0 is the physical GPU addresses as used by the kernel.
+ * VMIDs 1-15 are used for userspace clients and are handled
+ * by the amdgpu vm/hsa code.
+ */
+/**
+ * gmc_v9_0_vm_init - vm init callback
+ *
+ * @adev: amdgpu_device pointer
+ *
+ * Inits vega10 specific vm parameters (number of VMs, base of vram for
+ * VMIDs 1-15) (vega10).
+ * Returns 0 for success.
+ */
+static int gmc_v9_0_vm_init(struct amdgpu_device *adev)
+{
+	/*
+	 * number of VMs
+	 * VMID 0 is reserved for System
+	 * amdgpu graphics/compute will use VMIDs 1-7
+	 * amdkfd will use VMIDs 8-15
+	 */
+	adev->vm_manager.num_ids = AMDGPU_NUM_OF_VMIDS;
+	amdgpu_vm_manager_init(adev);
+
+	/* base offset of vram pages */
+	/*XXX This value is not zero for APU*/
+	adev->vm_manager.vram_base_offset = 0;
+
+	return 0;
+}
+
+/**
+ * gmc_v9_0_vm_fini - vm fini callback
+ *
+ * @adev: amdgpu_device pointer
+ *
+ * Tear down any asic specific VM setup.
+ */
+static void gmc_v9_0_vm_fini(struct amdgpu_device *adev)
+{
+	return;
+}
+
+static int gmc_v9_0_sw_init(void *handle)
+{
+	int r;
+	int dma_bits;
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	spin_lock_init(&adev->mc.invalidate_lock);
+
+	if (adev->flags & AMD_IS_APU) {
+		adev->mc.vram_type = AMDGPU_VRAM_TYPE_UNKNOWN;
+	} else {
+		/* XXX Don't know how to get VRAM type yet. */
+		adev->mc.vram_type = AMDGPU_VRAM_TYPE_HBM;
+	}
+
+	/* This interrupt is VMC page fault.*/
+	r = amdgpu_irq_add_id(adev, AMDGPU_IH_CLIENTID_VMC, 0,
+				&adev->mc.vm_fault);
+
+	if (r)
+		return r;
+
+	/* Adjust VM size here.
+	 * Currently the default is 64GB ((16 << 20) 4K pages).
+	 * Max GPUVM size is 48 bits.
+	 */
+	adev->vm_manager.max_pfn = amdgpu_vm_size << 18;
+
+	/* Set the internal MC address mask
+	 * This is the max address of the GPU's
+	 * internal address space.
+	 */
+	adev->mc.mc_mask = 0xffffffffffffULL; /* 48 bit MC */
+
+	/* set DMA mask + need_dma32 flags.
+	 * PCIE - can handle 44-bits.
+	 * IGP - can handle 44-bits
+	 * PCI - dma32 for legacy pci gart, 44 bits on vega10
+	 */
+	adev->need_dma32 = false;
+	dma_bits = adev->need_dma32 ? 32 : 44;
+	r = pci_set_dma_mask(adev->pdev, DMA_BIT_MASK(dma_bits));
+	if (r) {
+		adev->need_dma32 = true;
+		dma_bits = 32;
+		printk(KERN_WARNING "amdgpu: No suitable DMA available.\n");
+	}
+	r = pci_set_consistent_dma_mask(adev->pdev, DMA_BIT_MASK(dma_bits));
+	if (r) {
+		pci_set_consistent_dma_mask(adev->pdev, DMA_BIT_MASK(32));
+		printk(KERN_WARNING "amdgpu: No coherent DMA available.\n");
+	}
+
+	r = gmc_v9_0_mc_init(adev);
+	if (r)
+		return r;
+
+	/* Memory manager */
+	r = amdgpu_bo_init(adev);
+	if (r)
+		return r;
+
+	r = gmc_v9_0_gart_init(adev);
+	if (r)
+		return r;
+
+	if (!adev->vm_manager.enabled) {
+		r = gmc_v9_0_vm_init(adev);
+		if (r) {
+			dev_err(adev->dev, "vm manager initialization failed (%d).\n", r);
+			return r;
+		}
+		adev->vm_manager.enabled = true;
+	}
+	return r;
+}
+
+/**
+ * gmc_v9_0_gart_fini - gart fini callback
+ *
+ * @adev: amdgpu_device pointer
+ *
+ * Tears down the driver GART/VM setup (VEGA10).
+ */
+static void gmc_v9_0_gart_fini(struct amdgpu_device *adev)
+{
+	amdgpu_gart_table_vram_free(adev);
+	amdgpu_gart_fini(adev);
+}
+
+static int gmc_v9_0_sw_fini(void *handle)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	if (adev->vm_manager.enabled) {
+		amdgpu_vm_manager_fini(adev);
+		gmc_v9_0_vm_fini(adev);
+		adev->vm_manager.enabled = false;
+	}
+	gmc_v9_0_gart_fini(adev);
+	amdgpu_gem_force_release(adev);
+	amdgpu_bo_fini(adev);
+
+	return 0;
+}
+
+static void gmc_v9_0_init_golden_registers(struct amdgpu_device *adev)
+{
+	switch (adev->asic_type) {
+	case CHIP_VEGA10:
+		break;
+	default:
+		break;
+	}
+}
+
+/**
+ * gmc_v9_0_gart_enable - gart enable
+ *
+ * @adev: amdgpu_device pointer
+ */
+static int gmc_v9_0_gart_enable(struct amdgpu_device *adev)
+{
+	int r;
+	bool value;
+	u32 tmp;
+
+	amdgpu_program_register_sequence(adev,
+		golden_settings_vega10_hdp,
+		(const u32)ARRAY_SIZE(golden_settings_vega10_hdp));
+
+	if (adev->gart.robj == NULL) {
+		dev_err(adev->dev, "No VRAM object for PCIE GART.\n");
+		return -EINVAL;
+	}
+	r = amdgpu_gart_table_vram_pin(adev);
+	if (r)
+		return r;
+
+	/* After HDP is initialized, flush HDP. */
+	nbio_v6_1_hdp_flush(adev);
+
+	r = gfxhub_v1_0_gart_enable(adev);
+	if (r)
+		return r;
+
+	r = mmhub_v1_0_gart_enable(adev);
+	if (r)
+		return r;
+
+	tmp = RREG32(SOC15_REG_OFFSET(HDP, 0, mmHDP_MISC_CNTL));
+	tmp |= HDP_MISC_CNTL__FLUSH_INVALIDATE_CACHE_MASK;
+	WREG32(SOC15_REG_OFFSET(HDP, 0, mmHDP_MISC_CNTL), tmp);
+
+	tmp = RREG32(SOC15_REG_OFFSET(HDP, 0, mmHDP_HOST_PATH_CNTL));
+	WREG32(SOC15_REG_OFFSET(HDP, 0, mmHDP_HOST_PATH_CNTL), tmp);
+
+	if (amdgpu_vm_fault_stop == AMDGPU_VM_FAULT_STOP_ALWAYS)
+		value = false;
+	else
+		value = true;
+
+	gfxhub_v1_0_set_fault_enable_default(adev, value);
+	mmhub_v1_0_set_fault_enable_default(adev, value);
+
+	gmc_v9_0_gart_flush_gpu_tlb(adev, 0);
+
+	DRM_INFO("PCIE GART of %uM enabled (table at 0x%016llX).\n",
+		 (unsigned)(adev->mc.gtt_size >> 20),
+		 (unsigned long long)adev->gart.table_addr);
+	adev->gart.ready = true;
+	return 0;
+}
+
+static int gmc_v9_0_hw_init(void *handle)
+{
+	int r;
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	/* The sequence of these two function calls matters.*/
+	gmc_v9_0_init_golden_registers(adev);
+
+	r = gmc_v9_0_gart_enable(adev);
+
+	return r;
+}
+
+/**
+ * gmc_v9_0_gart_disable - gart disable
+ *
+ * @adev: amdgpu_device pointer
+ *
+ * This disables all VM page tables.
+ */
+static void gmc_v9_0_gart_disable(struct amdgpu_device *adev)
+{
+	gfxhub_v1_0_gart_disable(adev);
+	mmhub_v1_0_gart_disable(adev);
+	amdgpu_gart_table_vram_unpin(adev);
+}
+
+static int gmc_v9_0_hw_fini(void *handle)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	amdgpu_irq_put(adev, &adev->mc.vm_fault, 0);
+	gmc_v9_0_gart_disable(adev);
+
+	return 0;
+}
+
+static int gmc_v9_0_suspend(void *handle)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	if (adev->vm_manager.enabled) {
+		gmc_v9_0_vm_fini(adev);
+		adev->vm_manager.enabled = false;
+	}
+	gmc_v9_0_hw_fini(adev);
+
+	return 0;
+}
+
+static int gmc_v9_0_resume(void *handle)
+{
+	int r;
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	r = gmc_v9_0_hw_init(adev);
+	if (r)
+		return r;
+
+	if (!adev->vm_manager.enabled) {
+		r = gmc_v9_0_vm_init(adev);
+		if (r) {
+			dev_err(adev->dev,
+				"vm manager initialization failed (%d).\n", r);
+			return r;
+		}
+		adev->vm_manager.enabled = true;
+	}
+
+	return r;
+}
+
+static bool gmc_v9_0_is_idle(void *handle)
+{
+	/* MC is always ready in GMC v9.*/
+	return true;
+}
+
+static int gmc_v9_0_wait_for_idle(void *handle)
+{
+	/* There is no need to wait for MC idle in GMC v9.*/
+	return 0;
+}
+
+static int gmc_v9_0_soft_reset(void *handle)
+{
+	/* XXX for emulation.*/
+	return 0;
+}
+
+static int gmc_v9_0_set_clockgating_state(void *handle,
+					enum amd_clockgating_state state)
+{
+	return 0;
+}
+
+static int gmc_v9_0_set_powergating_state(void *handle,
+					enum amd_powergating_state state)
+{
+	return 0;
+}
+
+const struct amd_ip_funcs gmc_v9_0_ip_funcs = {
+	.name = "gmc_v9_0",
+	.early_init = gmc_v9_0_early_init,
+	.late_init = gmc_v9_0_late_init,
+	.sw_init = gmc_v9_0_sw_init,
+	.sw_fini = gmc_v9_0_sw_fini,
+	.hw_init = gmc_v9_0_hw_init,
+	.hw_fini = gmc_v9_0_hw_fini,
+	.suspend = gmc_v9_0_suspend,
+	.resume = gmc_v9_0_resume,
+	.is_idle = gmc_v9_0_is_idle,
+	.wait_for_idle = gmc_v9_0_wait_for_idle,
+	.soft_reset = gmc_v9_0_soft_reset,
+	.set_clockgating_state = gmc_v9_0_set_clockgating_state,
+	.set_powergating_state = gmc_v9_0_set_powergating_state,
+};
+
+const struct amdgpu_ip_block_version gmc_v9_0_ip_block =
+{
+	.type = AMD_IP_BLOCK_TYPE_GMC,
+	.major = 9,
+	.minor = 0,
+	.rev = 0,
+	.funcs = &gmc_v9_0_ip_funcs,
+};
diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.h b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.h
new file mode 100644
index 0000000..b030ca5
--- /dev/null
+++ b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.h
@@ -0,0 +1,30 @@
+/*
+ * Copyright 2016 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __GMC_V9_0_H__
+#define __GMC_V9_0_H__
+
+extern const struct amd_ip_funcs gmc_v9_0_ip_funcs;
+extern const struct amdgpu_ip_block_version gmc_v9_0_ip_block;
+
+#endif
diff --git a/drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c b/drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c
new file mode 100644
index 0000000..b1e0e6b
--- /dev/null
+++ b/drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c
@@ -0,0 +1,585 @@
+/*
+ * Copyright 2016 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+#include "amdgpu.h"
+#include "mmhub_v1_0.h"
+
+#include "vega10/soc15ip.h"
+#include "vega10/MMHUB/mmhub_1_0_offset.h"
+#include "vega10/MMHUB/mmhub_1_0_sh_mask.h"
+#include "vega10/MMHUB/mmhub_1_0_default.h"
+#include "vega10/ATHUB/athub_1_0_offset.h"
+#include "vega10/ATHUB/athub_1_0_sh_mask.h"
+#include "vega10/ATHUB/athub_1_0_default.h"
+#include "vega10/vega10_enum.h"
+
+#include "soc15_common.h"
+
+u64 mmhub_v1_0_get_fb_location(struct amdgpu_device *adev)
+{
+	u64 base = RREG32(SOC15_REG_OFFSET(MMHUB, 0, mmMC_VM_FB_LOCATION_BASE));
+
+	base &= MC_VM_FB_LOCATION_BASE__FB_BASE_MASK;
+	base <<= 24;
+
+	return base;
+}
+
+int mmhub_v1_0_gart_enable(struct amdgpu_device *adev)
+{
+	u32 tmp;
+	u64 value;
+	uint64_t addr;
+	u32 i;
+
+	/* Program MC. */
+	/* Update configuration */
+	WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmMC_VM_SYSTEM_APERTURE_LOW_ADDR),
+		adev->mc.vram_start >> 18);
+	WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmMC_VM_SYSTEM_APERTURE_HIGH_ADDR),
+		adev->mc.vram_end >> 18);
+	value = adev->vram_scratch.gpu_addr - adev->mc.vram_start +
+		adev->vm_manager.vram_base_offset;
+	WREG32(SOC15_REG_OFFSET(MMHUB, 0,
+				mmMC_VM_SYSTEM_APERTURE_DEFAULT_ADDR_LSB),
+				(u32)(value >> 12));
+	WREG32(SOC15_REG_OFFSET(MMHUB, 0,
+				mmMC_VM_SYSTEM_APERTURE_DEFAULT_ADDR_MSB),
+				(u32)(value >> 44));
+
+	/* Disable AGP. */
+	WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmMC_VM_AGP_BASE), 0);
+	WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmMC_VM_AGP_TOP), 0);
+	WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmMC_VM_AGP_BOT), 0x00FFFFFF);
+
+	/* GART Enable. */
+
+	/* Setup TLB control */
+	tmp = RREG32(SOC15_REG_OFFSET(MMHUB, 0, mmMC_VM_MX_L1_TLB_CNTL));
+	tmp = REG_SET_FIELD(tmp, MC_VM_MX_L1_TLB_CNTL, ENABLE_L1_TLB, 1);
+	tmp = REG_SET_FIELD(tmp,
+				MC_VM_MX_L1_TLB_CNTL,
+				SYSTEM_ACCESS_MODE,
+				3);
+	tmp = REG_SET_FIELD(tmp,
+				MC_VM_MX_L1_TLB_CNTL,
+				ENABLE_ADVANCED_DRIVER_MODEL,
+				1);
+	tmp = REG_SET_FIELD(tmp,
+				MC_VM_MX_L1_TLB_CNTL,
+				SYSTEM_APERTURE_UNMAPPED_ACCESS,
+				0);
+	tmp = REG_SET_FIELD(tmp,
+				MC_VM_MX_L1_TLB_CNTL,
+				ECO_BITS,
+				0);
+	tmp = REG_SET_FIELD(tmp,
+				MC_VM_MX_L1_TLB_CNTL,
+				MTYPE,
+				MTYPE_UC);/* XXX for emulation. */
+	tmp = REG_SET_FIELD(tmp,
+				MC_VM_MX_L1_TLB_CNTL,
+				ATC_EN,
+				1);
+	WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmMC_VM_MX_L1_TLB_CNTL), tmp);
+
+	/* Setup L2 cache */
+	tmp = RREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_L2_CNTL));
+	tmp = REG_SET_FIELD(tmp, VM_L2_CNTL, ENABLE_L2_CACHE, 1);
+	tmp = REG_SET_FIELD(tmp,
+				VM_L2_CNTL,
+				ENABLE_L2_FRAGMENT_PROCESSING,
+				0);
+	tmp = REG_SET_FIELD(tmp,
+				VM_L2_CNTL,
+				L2_PDE0_CACHE_TAG_GENERATION_MODE,
+				0);/* XXX for emulation, Refer to closed source code.*/
+	tmp = REG_SET_FIELD(tmp, VM_L2_CNTL, PDE_FAULT_CLASSIFICATION, 1);
+	tmp = REG_SET_FIELD(tmp,
+				VM_L2_CNTL,
+				CONTEXT1_IDENTITY_ACCESS_MODE,
+				1);
+	tmp = REG_SET_FIELD(tmp,
+				VM_L2_CNTL,
+				IDENTITY_MODE_FRAGMENT_SIZE,
+				0);
+	WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_L2_CNTL), tmp);
+
+	tmp = RREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_L2_CNTL2));
+	tmp = REG_SET_FIELD(tmp, VM_L2_CNTL2, INVALIDATE_ALL_L1_TLBS, 1);
+	tmp = REG_SET_FIELD(tmp, VM_L2_CNTL2, INVALIDATE_L2_CACHE, 1);
+	WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_L2_CNTL2), tmp);
+
+	tmp = mmVM_L2_CNTL3_DEFAULT;
+	WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_L2_CNTL3), tmp);
+
+	tmp = RREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_L2_CNTL4));
+	tmp = REG_SET_FIELD(tmp,
+			    VM_L2_CNTL4,
+			    VMC_TAP_PDE_REQUEST_PHYSICAL,
+			    0);
+	tmp = REG_SET_FIELD(tmp,
+			    VM_L2_CNTL4,
+			    VMC_TAP_PTE_REQUEST_PHYSICAL,
+			    0);
+	WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_L2_CNTL4), tmp);
+
+	/* setup context0 */
+	WREG32(SOC15_REG_OFFSET(MMHUB, 0,
+				mmVM_CONTEXT0_PAGE_TABLE_START_ADDR_LO32),
+		(u32)(adev->mc.gtt_start >> 12));
+	WREG32(SOC15_REG_OFFSET(MMHUB, 0,
+				mmVM_CONTEXT0_PAGE_TABLE_START_ADDR_HI32),
+		(u32)(adev->mc.gtt_start >> 44));
+
+	WREG32(SOC15_REG_OFFSET(MMHUB, 0,
+				mmVM_CONTEXT0_PAGE_TABLE_END_ADDR_LO32),
+		(u32)(adev->mc.gtt_end >> 12));
+	WREG32(SOC15_REG_OFFSET(MMHUB, 0,
+				mmVM_CONTEXT0_PAGE_TABLE_END_ADDR_HI32),
+		(u32)(adev->mc.gtt_end >> 44));
+
+	BUG_ON(adev->gart.table_addr & (~0x0000FFFFFFFFF000ULL));
+	value = adev->gart.table_addr - adev->mc.vram_start +
+		adev->vm_manager.vram_base_offset;
+	value &= 0x0000FFFFFFFFF000ULL;
+	value |= 0x1; /* valid bit */
+
+	WREG32(SOC15_REG_OFFSET(MMHUB, 0,
+				mmVM_CONTEXT0_PAGE_TABLE_BASE_ADDR_LO32),
+		(u32)value);
+	WREG32(SOC15_REG_OFFSET(MMHUB, 0,
+				mmVM_CONTEXT0_PAGE_TABLE_BASE_ADDR_HI32),
+		(u32)(value >> 32));
+
+	WREG32(SOC15_REG_OFFSET(MMHUB, 0,
+				mmVM_L2_PROTECTION_FAULT_DEFAULT_ADDR_LO32),
+		(u32)(adev->dummy_page.addr >> 12));
+	WREG32(SOC15_REG_OFFSET(MMHUB, 0,
+				mmVM_L2_PROTECTION_FAULT_DEFAULT_ADDR_HI32),
+		(u32)(adev->dummy_page.addr >> 44));
+
+	tmp = RREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_L2_PROTECTION_FAULT_CNTL2));
+	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL2,
+			    ACTIVE_PAGE_MIGRATION_PTE_READ_RETRY,
+			    1);
+	WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_L2_PROTECTION_FAULT_CNTL2), tmp);
+
+	addr = SOC15_REG_OFFSET(MMHUB, 0, mmVM_CONTEXT0_CNTL);
+	tmp = RREG32(addr);
+
+	tmp = REG_SET_FIELD(tmp, VM_CONTEXT0_CNTL, ENABLE_CONTEXT, 1);
+	tmp = REG_SET_FIELD(tmp, VM_CONTEXT0_CNTL, PAGE_TABLE_DEPTH, 0);
+	WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_CONTEXT0_CNTL), tmp);
+
+	tmp = RREG32(addr);
+
+	/* Disable identity aperture.*/
+	WREG32(SOC15_REG_OFFSET(MMHUB, 0,
+		mmVM_L2_CONTEXT1_IDENTITY_APERTURE_LOW_ADDR_LO32), 0xFFFFFFFF);
+	WREG32(SOC15_REG_OFFSET(MMHUB, 0,
+		mmVM_L2_CONTEXT1_IDENTITY_APERTURE_LOW_ADDR_HI32), 0x0000000F);
+
+	WREG32(SOC15_REG_OFFSET(MMHUB, 0,
+		mmVM_L2_CONTEXT1_IDENTITY_APERTURE_HIGH_ADDR_LO32), 0);
+	WREG32(SOC15_REG_OFFSET(MMHUB, 0,
+		mmVM_L2_CONTEXT1_IDENTITY_APERTURE_HIGH_ADDR_HI32), 0);
+
+	WREG32(SOC15_REG_OFFSET(MMHUB, 0,
+		mmVM_L2_CONTEXT_IDENTITY_PHYSICAL_OFFSET_LO32), 0);
+	WREG32(SOC15_REG_OFFSET(MMHUB, 0,
+		mmVM_L2_CONTEXT_IDENTITY_PHYSICAL_OFFSET_HI32), 0);
+
+	for (i = 0; i <= 14; i++) {
+		tmp = RREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_CONTEXT1_CNTL)
+				+ i);
+		tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
+				ENABLE_CONTEXT, 1);
+		tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
+				PAGE_TABLE_DEPTH, 1);
+		tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
+				RANGE_PROTECTION_FAULT_ENABLE_DEFAULT, 1);
+		tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
+				DUMMY_PAGE_PROTECTION_FAULT_ENABLE_DEFAULT, 1);
+		tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
+				PDE0_PROTECTION_FAULT_ENABLE_DEFAULT, 1);
+		tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
+				VALID_PROTECTION_FAULT_ENABLE_DEFAULT, 1);
+		tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
+				READ_PROTECTION_FAULT_ENABLE_DEFAULT, 1);
+		tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
+				WRITE_PROTECTION_FAULT_ENABLE_DEFAULT, 1);
+		tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
+				EXECUTE_PROTECTION_FAULT_ENABLE_DEFAULT, 1);
+		tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
+				PAGE_TABLE_BLOCK_SIZE,
+				amdgpu_vm_block_size - 9);
+		WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_CONTEXT1_CNTL) + i, tmp);
+		WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_CONTEXT1_PAGE_TABLE_START_ADDR_LO32) + i*2, 0);
+		WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_CONTEXT1_PAGE_TABLE_START_ADDR_HI32) + i*2, 0);
+		WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_CONTEXT1_PAGE_TABLE_END_ADDR_LO32) + i*2,
+				adev->vm_manager.max_pfn - 1);
+		WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_CONTEXT1_PAGE_TABLE_END_ADDR_HI32) + i*2, 0);
+	}
+
+	return 0;
+}
+
+void mmhub_v1_0_gart_disable(struct amdgpu_device *adev)
+{
+	u32 tmp;
+	u32 i;
+
+	/* Disable all tables */
+	for (i = 0; i < 16; i++)
+		WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_CONTEXT0_CNTL) + i, 0);
+
+	/* Setup TLB control */
+	tmp = RREG32(SOC15_REG_OFFSET(MMHUB, 0, mmMC_VM_MX_L1_TLB_CNTL));
+	tmp = REG_SET_FIELD(tmp, MC_VM_MX_L1_TLB_CNTL, ENABLE_L1_TLB, 0);
+	tmp = REG_SET_FIELD(tmp,
+				MC_VM_MX_L1_TLB_CNTL,
+				ENABLE_ADVANCED_DRIVER_MODEL,
+				0);
+	WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmMC_VM_MX_L1_TLB_CNTL), tmp);
+
+	/* Setup L2 cache */
+	tmp = RREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_L2_CNTL));
+	tmp = REG_SET_FIELD(tmp, VM_L2_CNTL, ENABLE_L2_CACHE, 0);
+	WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_L2_CNTL), tmp);
+	WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_L2_CNTL3), 0);
+}
+
+/**
+ * mmhub_v1_0_set_fault_enable_default - update GART/VM fault handling
+ *
+ * @adev: amdgpu_device pointer
+ * @value: true redirects VM faults to the default page
+ */
+void mmhub_v1_0_set_fault_enable_default(struct amdgpu_device *adev, bool value)
+{
+	u32 tmp;
+
+	tmp = RREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_L2_PROTECTION_FAULT_CNTL));
+	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
+			RANGE_PROTECTION_FAULT_ENABLE_DEFAULT, value);
+	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
+			PDE0_PROTECTION_FAULT_ENABLE_DEFAULT, value);
+	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
+			PDE1_PROTECTION_FAULT_ENABLE_DEFAULT, value);
+	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
+			PDE2_PROTECTION_FAULT_ENABLE_DEFAULT, value);
+	tmp = REG_SET_FIELD(tmp,
+			VM_L2_PROTECTION_FAULT_CNTL,
+			TRANSLATE_FURTHER_PROTECTION_FAULT_ENABLE_DEFAULT,
+			value);
+	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
+			NACK_PROTECTION_FAULT_ENABLE_DEFAULT, value);
+	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
+			DUMMY_PAGE_PROTECTION_FAULT_ENABLE_DEFAULT, value);
+	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
+			VALID_PROTECTION_FAULT_ENABLE_DEFAULT, value);
+	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
+			READ_PROTECTION_FAULT_ENABLE_DEFAULT, value);
+	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
+			WRITE_PROTECTION_FAULT_ENABLE_DEFAULT, value);
+	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
+			EXECUTE_PROTECTION_FAULT_ENABLE_DEFAULT, value);
+	WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_L2_PROTECTION_FAULT_CNTL), tmp);
+}
+
+static uint32_t mmhub_v1_0_get_invalidate_req(unsigned int vm_id)
+{
+	u32 req = 0;
+
+	/* Invalidate using legacy mode on vm_id. */
+	req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ,
+			    PER_VMID_INVALIDATE_REQ, 1 << vm_id);
+	req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ, FLUSH_TYPE, 0);
+	req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ, INVALIDATE_L2_PTES, 1);
+	req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ, INVALIDATE_L2_PDE0, 1);
+	req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ, INVALIDATE_L2_PDE1, 1);
+	req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ, INVALIDATE_L2_PDE2, 1);
+	req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ, INVALIDATE_L1_PTES, 1);
+	req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ,
+			    CLEAR_PROTECTION_FAULT_STATUS_ADDR,	0);
+
+	return req;
+}
+
+static uint32_t mmhub_v1_0_get_vm_protection_bits(void)
+{
+	return (VM_CONTEXT1_CNTL__RANGE_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK |
+		    VM_CONTEXT1_CNTL__DUMMY_PAGE_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK |
+		    VM_CONTEXT1_CNTL__PDE0_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK |
+		    VM_CONTEXT1_CNTL__VALID_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK |
+		    VM_CONTEXT1_CNTL__READ_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK |
+		    VM_CONTEXT1_CNTL__WRITE_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK |
+		    VM_CONTEXT1_CNTL__EXECUTE_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK);
+}
+
+static int mmhub_v1_0_early_init(void *handle)
+{
+	return 0;
+}
+
+static int mmhub_v1_0_late_init(void *handle)
+{
+	return 0;
+}
+
+static int mmhub_v1_0_sw_init(void *handle)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+	struct amdgpu_vmhub *hub = &adev->vmhub[AMDGPU_MMHUB];
+
+	hub->ctx0_ptb_addr_lo32 =
+		SOC15_REG_OFFSET(MMHUB, 0,
+				 mmVM_CONTEXT0_PAGE_TABLE_BASE_ADDR_LO32);
+	hub->ctx0_ptb_addr_hi32 =
+		SOC15_REG_OFFSET(MMHUB, 0,
+				 mmVM_CONTEXT0_PAGE_TABLE_BASE_ADDR_HI32);
+	hub->vm_inv_eng0_req =
+		SOC15_REG_OFFSET(MMHUB, 0, mmVM_INVALIDATE_ENG0_REQ);
+	hub->vm_inv_eng0_ack =
+		SOC15_REG_OFFSET(MMHUB, 0, mmVM_INVALIDATE_ENG0_ACK);
+	hub->vm_context0_cntl =
+		SOC15_REG_OFFSET(MMHUB, 0, mmVM_CONTEXT0_CNTL);
+	hub->vm_l2_pro_fault_status =
+		SOC15_REG_OFFSET(MMHUB, 0, mmVM_L2_PROTECTION_FAULT_STATUS);
+	hub->vm_l2_pro_fault_cntl =
+		SOC15_REG_OFFSET(MMHUB, 0, mmVM_L2_PROTECTION_FAULT_CNTL);
+
+	hub->get_invalidate_req = mmhub_v1_0_get_invalidate_req;
+	hub->get_vm_protection_bits = mmhub_v1_0_get_vm_protection_bits;
+
+	return 0;
+}
+
+static int mmhub_v1_0_sw_fini(void *handle)
+{
+	return 0;
+}
+
+static int mmhub_v1_0_hw_init(void *handle)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+	unsigned i;
+
+	for (i = 0; i < 18; ++i) {
+		WREG32(SOC15_REG_OFFSET(MMHUB, 0,
+					mmVM_INVALIDATE_ENG0_ADDR_RANGE_LO32) +
+		       2 * i, 0xffffffff);
+		WREG32(SOC15_REG_OFFSET(MMHUB, 0,
+					mmVM_INVALIDATE_ENG0_ADDR_RANGE_HI32) +
+		       2 * i, 0x1f);
+	}
+
+	return 0;
+}
+
+static int mmhub_v1_0_hw_fini(void *handle)
+{
+	return 0;
+}
+
+static int mmhub_v1_0_suspend(void *handle)
+{
+	return 0;
+}
+
+static int mmhub_v1_0_resume(void *handle)
+{
+	return 0;
+}
+
+static bool mmhub_v1_0_is_idle(void *handle)
+{
+	return true;
+}
+
+static int mmhub_v1_0_wait_for_idle(void *handle)
+{
+	return 0;
+}
+
+static int mmhub_v1_0_soft_reset(void *handle)
+{
+	return 0;
+}
+
+static void mmhub_v1_0_update_medium_grain_clock_gating(struct amdgpu_device *adev,
+							bool enable)
+{
+	uint32_t def, data, def1, data1, def2, data2;
+
+	def  = data  = RREG32(SOC15_REG_OFFSET(MMHUB, 0, mmATC_L2_MISC_CG));
+	def1 = data1 = RREG32(SOC15_REG_OFFSET(MMHUB, 0, mmDAGB0_CNTL_MISC2));
+	def2 = data2 = RREG32(SOC15_REG_OFFSET(MMHUB, 0, mmDAGB1_CNTL_MISC2));
+
+	if (enable && (adev->cg_flags & AMD_CG_SUPPORT_MC_MGCG)) {
+		data |= ATC_L2_MISC_CG__ENABLE_MASK;
+
+		data1 &= ~(DAGB0_CNTL_MISC2__DISABLE_WRREQ_CG_MASK |
+		           DAGB0_CNTL_MISC2__DISABLE_WRRET_CG_MASK |
+		           DAGB0_CNTL_MISC2__DISABLE_RDREQ_CG_MASK |
+		           DAGB0_CNTL_MISC2__DISABLE_RDRET_CG_MASK |
+		           DAGB0_CNTL_MISC2__DISABLE_TLBWR_CG_MASK |
+		           DAGB0_CNTL_MISC2__DISABLE_TLBRD_CG_MASK);
+
+		data2 &= ~(DAGB1_CNTL_MISC2__DISABLE_WRREQ_CG_MASK |
+		           DAGB1_CNTL_MISC2__DISABLE_WRRET_CG_MASK |
+		           DAGB1_CNTL_MISC2__DISABLE_RDREQ_CG_MASK |
+		           DAGB1_CNTL_MISC2__DISABLE_RDRET_CG_MASK |
+		           DAGB1_CNTL_MISC2__DISABLE_TLBWR_CG_MASK |
+		           DAGB1_CNTL_MISC2__DISABLE_TLBRD_CG_MASK);
+	} else {
+		data &= ~ATC_L2_MISC_CG__ENABLE_MASK;
+
+		data1 |= (DAGB0_CNTL_MISC2__DISABLE_WRREQ_CG_MASK |
+			  DAGB0_CNTL_MISC2__DISABLE_WRRET_CG_MASK |
+			  DAGB0_CNTL_MISC2__DISABLE_RDREQ_CG_MASK |
+			  DAGB0_CNTL_MISC2__DISABLE_RDRET_CG_MASK |
+			  DAGB0_CNTL_MISC2__DISABLE_TLBWR_CG_MASK |
+			  DAGB0_CNTL_MISC2__DISABLE_TLBRD_CG_MASK);
+
+		data2 |= (DAGB1_CNTL_MISC2__DISABLE_WRREQ_CG_MASK |
+		          DAGB1_CNTL_MISC2__DISABLE_WRRET_CG_MASK |
+		          DAGB1_CNTL_MISC2__DISABLE_RDREQ_CG_MASK |
+		          DAGB1_CNTL_MISC2__DISABLE_RDRET_CG_MASK |
+		          DAGB1_CNTL_MISC2__DISABLE_TLBWR_CG_MASK |
+		          DAGB1_CNTL_MISC2__DISABLE_TLBRD_CG_MASK);
+	}
+
+	if (def != data)
+		WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmATC_L2_MISC_CG), data);
+
+	if (def1 != data1)
+		WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmDAGB0_CNTL_MISC2), data1);
+
+	if (def2 != data2)
+		WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmDAGB1_CNTL_MISC2), data2);
+}
+
+static void athub_update_medium_grain_clock_gating(struct amdgpu_device *adev,
+						   bool enable)
+{
+	uint32_t def, data;
+
+	def = data = RREG32(SOC15_REG_OFFSET(ATHUB, 0, mmATHUB_MISC_CNTL));
+
+	if (enable && (adev->cg_flags & AMD_CG_SUPPORT_MC_MGCG))
+		data |= ATHUB_MISC_CNTL__CG_ENABLE_MASK;
+	else
+		data &= ~ATHUB_MISC_CNTL__CG_ENABLE_MASK;
+
+	if (def != data)
+		WREG32(SOC15_REG_OFFSET(ATHUB, 0, mmATHUB_MISC_CNTL), data);
+}
+
+static void mmhub_v1_0_update_medium_grain_light_sleep(struct amdgpu_device *adev,
+						       bool enable)
+{
+	uint32_t def, data;
+
+	def = data = RREG32(SOC15_REG_OFFSET(MMHUB, 0, mmATC_L2_MISC_CG));
+
+	if (enable && (adev->cg_flags & AMD_CG_SUPPORT_MC_LS))
+		data |= ATC_L2_MISC_CG__MEM_LS_ENABLE_MASK;
+	else
+		data &= ~ATC_L2_MISC_CG__MEM_LS_ENABLE_MASK;
+
+	if (def != data)
+		WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmATC_L2_MISC_CG), data);
+}
+
+static void athub_update_medium_grain_light_sleep(struct amdgpu_device *adev,
+						  bool enable)
+{
+	uint32_t def, data;
+
+	def = data = RREG32(SOC15_REG_OFFSET(ATHUB, 0, mmATHUB_MISC_CNTL));
+
+	if (enable && (adev->cg_flags & AMD_CG_SUPPORT_MC_LS) &&
+	    (adev->cg_flags & AMD_CG_SUPPORT_HDP_LS))
+		data |= ATHUB_MISC_CNTL__CG_MEM_LS_ENABLE_MASK;
+	else
+		data &= ~ATHUB_MISC_CNTL__CG_MEM_LS_ENABLE_MASK;
+
+	if (def != data)
+		WREG32(SOC15_REG_OFFSET(ATHUB, 0, mmATHUB_MISC_CNTL), data);
+}
+
+static int mmhub_v1_0_set_clockgating_state(void *handle,
+					enum amd_clockgating_state state)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	switch (adev->asic_type) {
+	case CHIP_VEGA10:
+		mmhub_v1_0_update_medium_grain_clock_gating(adev,
+				state == AMD_CG_STATE_GATE);
+		athub_update_medium_grain_clock_gating(adev,
+				state == AMD_CG_STATE_GATE);
+		mmhub_v1_0_update_medium_grain_light_sleep(adev,
+				state == AMD_CG_STATE_GATE);
+		athub_update_medium_grain_light_sleep(adev,
+				state == AMD_CG_STATE_GATE);
+		break;
+	default:
+		break;
+	}
+
+	return 0;
+}
+
+static int mmhub_v1_0_set_powergating_state(void *handle,
+					enum amd_powergating_state state)
+{
+	return 0;
+}
+
+const struct amd_ip_funcs mmhub_v1_0_ip_funcs = {
+	.name = "mmhub_v1_0",
+	.early_init = mmhub_v1_0_early_init,
+	.late_init = mmhub_v1_0_late_init,
+	.sw_init = mmhub_v1_0_sw_init,
+	.sw_fini = mmhub_v1_0_sw_fini,
+	.hw_init = mmhub_v1_0_hw_init,
+	.hw_fini = mmhub_v1_0_hw_fini,
+	.suspend = mmhub_v1_0_suspend,
+	.resume = mmhub_v1_0_resume,
+	.is_idle = mmhub_v1_0_is_idle,
+	.wait_for_idle = mmhub_v1_0_wait_for_idle,
+	.soft_reset = mmhub_v1_0_soft_reset,
+	.set_clockgating_state = mmhub_v1_0_set_clockgating_state,
+	.set_powergating_state = mmhub_v1_0_set_powergating_state,
+};
+
+const struct amdgpu_ip_block_version mmhub_v1_0_ip_block =
+{
+	.type = AMD_IP_BLOCK_TYPE_MMHUB,
+	.major = 1,
+	.minor = 0,
+	.rev = 0,
+	.funcs = &mmhub_v1_0_ip_funcs,
+};
diff --git a/drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.h b/drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.h
new file mode 100644
index 0000000..aadedf9
--- /dev/null
+++ b/drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.h
@@ -0,0 +1,35 @@
+/*
+ * Copyright 2016 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+#ifndef __MMHUB_V1_0_H__
+#define __MMHUB_V1_0_H__
+
+u64 mmhub_v1_0_get_fb_location(struct amdgpu_device *adev);
+int mmhub_v1_0_gart_enable(struct amdgpu_device *adev);
+void mmhub_v1_0_gart_disable(struct amdgpu_device *adev);
+void mmhub_v1_0_set_fault_enable_default(struct amdgpu_device *adev,
+					 bool value);
+
+extern const struct amd_ip_funcs mmhub_v1_0_ip_funcs;
+extern const struct amdgpu_ip_block_version mmhub_v1_0_ip_block;
+
+#endif
diff --git a/drivers/gpu/drm/amd/include/amd_shared.h b/drivers/gpu/drm/amd/include/amd_shared.h
index 717d6be..a94420d 100644
--- a/drivers/gpu/drm/amd/include/amd_shared.h
+++ b/drivers/gpu/drm/amd/include/amd_shared.h
@@ -74,6 +74,8 @@ enum amd_ip_block_type {
 	AMD_IP_BLOCK_TYPE_UVD,
 	AMD_IP_BLOCK_TYPE_VCE,
 	AMD_IP_BLOCK_TYPE_ACP,
+	AMD_IP_BLOCK_TYPE_GFXHUB,
+	AMD_IP_BLOCK_TYPE_MMHUB
 };
 
 enum amd_clockgating_state {
-- 
2.5.5

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 101+ messages in thread

* [PATCH 047/100] drm/amdgpu: add SDMA v4.0 implementation
       [not found] ` <1490041835-11255-1-git-send-email-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (30 preceding siblings ...)
  2017-03-20 20:29   ` [PATCH 046/100] drm/amdgpu: Add GMC 9.0 support Alex Deucher
@ 2017-03-20 20:29   ` Alex Deucher
  2017-03-20 20:29   ` [PATCH 048/100] drm/amdgpu: implement GFX 9.0 support Alex Deucher
                     ` (53 subsequent siblings)
  85 siblings, 0 replies; 101+ messages in thread
From: Alex Deucher @ 2017-03-20 20:29 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Alex Deucher, Ken Wang

From: Ken Wang <Qingqing.Wang@amd.com>

Signed-off-by: Ken Wang <Qingqing.Wang@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c | 1553 ++++++++++++++++++++++++++++++++
 drivers/gpu/drm/amd/amdgpu/sdma_v4_0.h |   30 +
 2 files changed, 1583 insertions(+)
 create mode 100644 drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
 create mode 100644 drivers/gpu/drm/amd/amdgpu/sdma_v4_0.h

diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
new file mode 100644
index 0000000..b460e00
--- /dev/null
+++ b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
@@ -0,0 +1,1553 @@
+/*
+ * Copyright 2016 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+
+#include <linux/firmware.h>
+#include <drm/drmP.h>
+#include "amdgpu.h"
+#include "amdgpu_ucode.h"
+#include "amdgpu_trace.h"
+
+#include "vega10/soc15ip.h"
+#include "vega10/SDMA0/sdma0_4_0_offset.h"
+#include "vega10/SDMA0/sdma0_4_0_sh_mask.h"
+#include "vega10/SDMA1/sdma1_4_0_offset.h"
+#include "vega10/SDMA1/sdma1_4_0_sh_mask.h"
+#include "vega10/MMHUB/mmhub_1_0_offset.h"
+#include "vega10/MMHUB/mmhub_1_0_sh_mask.h"
+#include "vega10/HDP/hdp_4_0_offset.h"
+
+#include "soc15_common.h"
+#include "soc15.h"
+#include "vega10_sdma_pkt_open.h"
+
+MODULE_FIRMWARE("amdgpu/vega10_sdma.bin");
+MODULE_FIRMWARE("amdgpu/vega10_sdma1.bin");
+
+static void sdma_v4_0_set_ring_funcs(struct amdgpu_device *adev);
+static void sdma_v4_0_set_buffer_funcs(struct amdgpu_device *adev);
+static void sdma_v4_0_set_vm_pte_funcs(struct amdgpu_device *adev);
+static void sdma_v4_0_set_irq_funcs(struct amdgpu_device *adev);
+
+static const u32 golden_settings_sdma_4[] =
+{
+	SOC15_REG_OFFSET(SDMA0, 0, mmSDMA0_CHICKEN_BITS), 0xfe931f07, 0x02831f07,
+	SOC15_REG_OFFSET(SDMA0, 0, mmSDMA0_CLK_CTRL), 0xff000ff0, 0x3f000100,
+	SOC15_REG_OFFSET(SDMA0, 0, mmSDMA0_GFX_IB_CNTL), 0x800f0100, 0x00000100,
+	SOC15_REG_OFFSET(SDMA0, 0, mmSDMA0_GFX_RB_WPTR_POLL_CNTL), 0xfffffff7, 0x00403000,
+	SOC15_REG_OFFSET(SDMA0, 0, mmSDMA0_PAGE_IB_CNTL), 0x800f0100, 0x00000100,
+	SOC15_REG_OFFSET(SDMA0, 0, mmSDMA0_PAGE_RB_WPTR_POLL_CNTL), 0x0000fff0, 0x00403000,
+	SOC15_REG_OFFSET(SDMA0, 0, mmSDMA0_POWER_CNTL), 0x003ff006, 0x0003c000,
+	SOC15_REG_OFFSET(SDMA0, 0, mmSDMA0_RLC0_IB_CNTL), 0x800f0100, 0x00000100,
+	SOC15_REG_OFFSET(SDMA0, 0, mmSDMA0_RLC0_RB_WPTR_POLL_CNTL), 0x0000fff0, 0x00403000,
+	SOC15_REG_OFFSET(SDMA0, 0, mmSDMA0_RLC1_IB_CNTL), 0x800f0100, 0x00000100,
+	SOC15_REG_OFFSET(SDMA0, 0, mmSDMA0_RLC1_RB_WPTR_POLL_CNTL), 0x0000fff0, 0x00403000,
+	SOC15_REG_OFFSET(SDMA0, 0, mmSDMA0_UTCL1_PAGE), 0x000003ff, 0x000003c0,
+	SOC15_REG_OFFSET(SDMA1, 0, mmSDMA1_CHICKEN_BITS), 0xfe931f07, 0x02831f07,
+	SOC15_REG_OFFSET(SDMA1, 0, mmSDMA1_CLK_CTRL), 0xffffffff, 0x3f000100,
+	SOC15_REG_OFFSET(SDMA1, 0, mmSDMA1_GFX_IB_CNTL), 0x800f0100, 0x00000100,
+	SOC15_REG_OFFSET(SDMA1, 0, mmSDMA1_GFX_RB_WPTR_POLL_CNTL), 0x0000fff0, 0x00403000,
+	SOC15_REG_OFFSET(SDMA1, 0, mmSDMA1_PAGE_IB_CNTL), 0x800f0100, 0x00000100,
+	SOC15_REG_OFFSET(SDMA1, 0, mmSDMA1_PAGE_RB_WPTR_POLL_CNTL), 0x0000fff0, 0x00403000,
+	SOC15_REG_OFFSET(SDMA1, 0, mmSDMA1_POWER_CNTL), 0x003ff000, 0x0003c000,
+	SOC15_REG_OFFSET(SDMA1, 0, mmSDMA1_RLC0_IB_CNTL), 0x800f0100, 0x00000100,
+	SOC15_REG_OFFSET(SDMA1, 0, mmSDMA1_RLC0_RB_WPTR_POLL_CNTL), 0x0000fff0, 0x00403000,
+	SOC15_REG_OFFSET(SDMA1, 0, mmSDMA1_RLC1_IB_CNTL), 0x800f0100, 0x00000100,
+	SOC15_REG_OFFSET(SDMA1, 0, mmSDMA1_RLC1_RB_WPTR_POLL_CNTL), 0x0000fff0, 0x00403000,
+	SOC15_REG_OFFSET(SDMA1, 0, mmSDMA1_UTCL1_PAGE), 0x000003ff, 0x000003c0
+};
+
+static const u32 golden_settings_sdma_vg10[] =
+{
+	SOC15_REG_OFFSET(SDMA0, 0, mmSDMA0_GB_ADDR_CONFIG), 0x0018773f, 0x00104002,
+	SOC15_REG_OFFSET(SDMA0, 0, mmSDMA0_GB_ADDR_CONFIG_READ), 0x0018773f, 0x00104002,
+	SOC15_REG_OFFSET(SDMA1, 0, mmSDMA1_GB_ADDR_CONFIG), 0x0018773f, 0x00104002,
+	SOC15_REG_OFFSET(SDMA1, 0, mmSDMA1_GB_ADDR_CONFIG_READ), 0x0018773f, 0x00104002
+};
+
+static u32 sdma_v4_0_get_reg_offset(u32 instance, u32 internal_offset)
+{
+	u32 base = 0;
+
+	switch (instance) {
+	case 0:
+		base = SDMA0_BASE.instance[0].segment[0];
+		break;
+	case 1:
+		base = SDMA1_BASE.instance[0].segment[0];
+		break;
+	default:
+		BUG();
+		break;
+	}
+
+	return base + internal_offset;
+}
+
+static void sdma_v4_0_init_golden_registers(struct amdgpu_device *adev)
+{
+	switch (adev->asic_type) {
+	case CHIP_VEGA10:
+		amdgpu_program_register_sequence(adev,
+						 golden_settings_sdma_4,
+						 (const u32)ARRAY_SIZE(golden_settings_sdma_4));
+		amdgpu_program_register_sequence(adev,
+						 golden_settings_sdma_vg10,
+						 (const u32)ARRAY_SIZE(golden_settings_sdma_vg10));
+		break;
+	default:
+		break;
+	}
+}
+
+static void sdma_v4_0_print_ucode_regs(void *handle)
+{
+	int i;
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	dev_info(adev->dev, "VEGA10 SDMA ucode registers\n");
+	for (i = 0; i < adev->sdma.num_instances; i++) {
+		dev_info(adev->dev, "  SDMA%d_UCODE_ADDR=0x%08X\n",
+			 i, RREG32(sdma_v4_0_get_reg_offset(i, mmSDMA0_UCODE_ADDR)));
+		dev_info(adev->dev, "  SDMA%d_UCODE_CHECKSUM=0x%08X\n",
+			 i, RREG32(sdma_v4_0_get_reg_offset(i, mmSDMA0_UCODE_CHECKSUM)));
+	}
+}
+
+/**
+ * sdma_v4_0_init_microcode - load ucode images from disk
+ *
+ * @adev: amdgpu_device pointer
+ *
+ * Use the firmware interface to load the ucode images into
+ * the driver (not loaded into hw).
+ * Returns 0 on success, error on failure.
+ */
+
+/* Emulation only: a real Vega10 chip must load the SDMA firmware via PSP. */
+static int sdma_v4_0_init_microcode(struct amdgpu_device *adev)
+{
+	const char *chip_name;
+	char fw_name[30];
+	int err = 0, i;
+	struct amdgpu_firmware_info *info = NULL;
+	const struct common_firmware_header *header = NULL;
+	const struct sdma_firmware_header_v1_0 *hdr;
+
+	DRM_DEBUG("\n");
+
+	switch (adev->asic_type) {
+	case CHIP_VEGA10:
+		chip_name = "vega10";
+		break;
+	default:
+		BUG();
+	}
+
+	for (i = 0; i < adev->sdma.num_instances; i++) {
+		if (i == 0)
+			snprintf(fw_name, sizeof(fw_name), "amdgpu/%s_sdma.bin", chip_name);
+		else
+			snprintf(fw_name, sizeof(fw_name), "amdgpu/%s_sdma1.bin", chip_name);
+		err = request_firmware(&adev->sdma.instance[i].fw, fw_name, adev->dev);
+		if (err)
+			goto out;
+		err = amdgpu_ucode_validate(adev->sdma.instance[i].fw);
+		if (err)
+			goto out;
+		hdr = (const struct sdma_firmware_header_v1_0 *)adev->sdma.instance[i].fw->data;
+		adev->sdma.instance[i].fw_version = le32_to_cpu(hdr->header.ucode_version);
+		adev->sdma.instance[i].feature_version = le32_to_cpu(hdr->ucode_feature_version);
+		if (adev->sdma.instance[i].feature_version >= 20)
+			adev->sdma.instance[i].burst_nop = true;
+		DRM_DEBUG("psp_load == '%s'\n",
+			  adev->firmware.load_type == AMDGPU_FW_LOAD_PSP ? "true" : "false");
+
+		if (adev->firmware.load_type == AMDGPU_FW_LOAD_PSP) {
+			info = &adev->firmware.ucode[AMDGPU_UCODE_ID_SDMA0 + i];
+			info->ucode_id = AMDGPU_UCODE_ID_SDMA0 + i;
+			info->fw = adev->sdma.instance[i].fw;
+			header = (const struct common_firmware_header *)info->fw->data;
+			adev->firmware.fw_size +=
+				ALIGN(le32_to_cpu(header->ucode_size_bytes), PAGE_SIZE);
+		}
+	}
+out:
+	if (err) {
+		printk(KERN_ERR
+		       "sdma_v4_0: Failed to load firmware \"%s\"\n",
+		       fw_name);
+		for (i = 0; i < adev->sdma.num_instances; i++) {
+			release_firmware(adev->sdma.instance[i].fw);
+			adev->sdma.instance[i].fw = NULL;
+		}
+	}
+	return err;
+}
+
+/**
+ * sdma_v4_0_ring_get_rptr - get the current read pointer
+ *
+ * @ring: amdgpu ring pointer
+ *
+ * Get the current rptr from the hardware (VEGA10+).
+ */
+static uint64_t sdma_v4_0_ring_get_rptr(struct amdgpu_ring *ring)
+{
+	u64 *rptr;
+
+	/* XXX check if swapping is necessary on BE */
+	rptr = ((u64 *)&ring->adev->wb.wb[ring->rptr_offs]);
+
+	DRM_DEBUG("rptr before shift == 0x%016llx\n", *rptr);
+	return ((*rptr) >> 2);
+}
+
+/**
+ * sdma_v4_0_ring_get_wptr - get the current write pointer
+ *
+ * @ring: amdgpu ring pointer
+ *
+ * Get the current wptr from the hardware (VEGA10+).
+ */
+static uint64_t sdma_v4_0_ring_get_wptr(struct amdgpu_ring *ring)
+{
+	struct amdgpu_device *adev = ring->adev;
+	u64 *wptr = NULL;
+	uint64_t local_wptr = 0;
+
+	if (ring->use_doorbell) {
+		/* XXX check if swapping is necessary on BE */
+		wptr = ((u64*)&adev->wb.wb[ring->wptr_offs]);
+		DRM_DEBUG("wptr/doorbell before shift == 0x%016llx\n", *wptr);
+		*wptr = (*wptr) >> 2;
+		DRM_DEBUG("wptr/doorbell after shift == 0x%016llx\n", *wptr);
+	} else {
+		u32 lowbit, highbit;
+		int me = (ring == &adev->sdma.instance[0].ring) ? 0 : 1;
+		wptr = &local_wptr;
+		lowbit = RREG32(sdma_v4_0_get_reg_offset(me, mmSDMA0_GFX_RB_WPTR)) >> 2;
+		highbit = RREG32(sdma_v4_0_get_reg_offset(me, mmSDMA0_GFX_RB_WPTR_HI)) >> 2;
+
+		DRM_DEBUG("wptr [%i]high== 0x%08x low==0x%08x\n",
+				me, highbit, lowbit);
+		*wptr = highbit;
+		*wptr = (*wptr) << 32;
+		*wptr |= lowbit;
+	}
+
+	return *wptr;
+}
+
+/**
+ * sdma_v4_0_ring_set_wptr - commit the write pointer
+ *
+ * @ring: amdgpu ring pointer
+ *
+ * Write the wptr back to the hardware (VEGA10+).
+ */
+static void sdma_v4_0_ring_set_wptr(struct amdgpu_ring *ring)
+{
+	struct amdgpu_device *adev = ring->adev;
+
+	DRM_DEBUG("Setting write pointer\n");
+	if (ring->use_doorbell) {
+		DRM_DEBUG("Using doorbell -- "
+				"wptr_offs == 0x%08x "
+				"lower_32_bits(ring->wptr) << 2 == 0x%08x "
+				"upper_32_bits(ring->wptr) << 2 == 0x%08x\n",
+				ring->wptr_offs,
+				lower_32_bits(ring->wptr << 2),
+				upper_32_bits(ring->wptr << 2));
+		/* XXX check if swapping is necessary on BE */
+		adev->wb.wb[ring->wptr_offs] = lower_32_bits(ring->wptr << 2);
+		adev->wb.wb[ring->wptr_offs + 1] = upper_32_bits(ring->wptr << 2);
+		DRM_DEBUG("calling WDOORBELL64(0x%08x, 0x%016llx)\n",
+				ring->doorbell_index, ring->wptr << 2);
+		WDOORBELL64(ring->doorbell_index, ring->wptr << 2);
+	} else {
+		int me = (ring == &ring->adev->sdma.instance[0].ring) ? 0 : 1;
+		DRM_DEBUG("Not using doorbell -- "
+				"mmSDMA%i_GFX_RB_WPTR == 0x%08x "
+				"mmSDMA%i_GFX_RB_WPTR_HI == 0x%08x \n",
+				me,
+				me,
+				lower_32_bits(ring->wptr << 2),
+				upper_32_bits(ring->wptr << 2));
+		WREG32(sdma_v4_0_get_reg_offset(me, mmSDMA0_GFX_RB_WPTR), lower_32_bits(ring->wptr << 2));
+		WREG32(sdma_v4_0_get_reg_offset(me, mmSDMA0_GFX_RB_WPTR_HI), upper_32_bits(ring->wptr << 2));
+	}
+}
+
+static void sdma_v4_0_ring_insert_nop(struct amdgpu_ring *ring, uint32_t count)
+{
+	struct amdgpu_sdma_instance *sdma = amdgpu_get_sdma_instance(ring);
+	int i;
+
+	for (i = 0; i < count; i++)
+		if (sdma && sdma->burst_nop && (i == 0))
+			amdgpu_ring_write(ring, ring->funcs->nop |
+				SDMA_PKT_NOP_HEADER_COUNT(count - 1));
+		else
+			amdgpu_ring_write(ring, ring->funcs->nop);
+}
+
+/**
+ * sdma_v4_0_ring_emit_ib - Schedule an IB on the DMA engine
+ *
+ * @ring: amdgpu ring pointer
+ * @ib: IB object to schedule
+ *
+ * Schedule an IB in the DMA ring (VEGA10).
+ */
+static void sdma_v4_0_ring_emit_ib(struct amdgpu_ring *ring,
+				   struct amdgpu_ib *ib,
+				   unsigned vm_id, bool ctx_switch)
+{
+	u32 vmid = vm_id & 0xf;
+
+	/* IB packets must end on an 8 DW boundary */
+	sdma_v4_0_ring_insert_nop(ring, (10 - (lower_32_bits(ring->wptr) & 7)) % 8);
+
+	amdgpu_ring_write(ring, SDMA_PKT_HEADER_OP(SDMA_OP_INDIRECT) |
+			  SDMA_PKT_INDIRECT_HEADER_VMID(vmid));
+	/* base must be 32 byte aligned */
+	amdgpu_ring_write(ring, lower_32_bits(ib->gpu_addr) & 0xffffffe0);
+	amdgpu_ring_write(ring, upper_32_bits(ib->gpu_addr));
+	amdgpu_ring_write(ring, ib->length_dw);
+	amdgpu_ring_write(ring, 0);
+	amdgpu_ring_write(ring, 0);
+}
+
+/**
+ * sdma_v4_0_ring_emit_hdp_flush - emit an hdp flush on the DMA ring
+ *
+ * @ring: amdgpu ring pointer
+ *
+ * Emit an hdp flush packet on the requested DMA ring.
+ */
+static void sdma_v4_0_ring_emit_hdp_flush(struct amdgpu_ring *ring)
+{
+	u32 ref_and_mask = 0;
+	/* only Vega10 is handled here, which uses the NBIO 6.1 registers */
+	struct nbio_hdp_flush_reg *nbio_hf_reg = &nbio_v6_1_hdp_flush_reg;
+
+	if (ring == &ring->adev->sdma.instance[0].ring)
+		ref_and_mask = nbio_hf_reg->ref_and_mask_sdma0;
+	else
+		ref_and_mask = nbio_hf_reg->ref_and_mask_sdma1;
+
+	amdgpu_ring_write(ring, SDMA_PKT_HEADER_OP(SDMA_OP_POLL_REGMEM) |
+			  SDMA_PKT_POLL_REGMEM_HEADER_HDP_FLUSH(1) |
+			  SDMA_PKT_POLL_REGMEM_HEADER_FUNC(3)); /* == */
+	amdgpu_ring_write(ring, nbio_hf_reg->hdp_flush_done_offset << 2);
+	amdgpu_ring_write(ring, nbio_hf_reg->hdp_flush_req_offset << 2);
+	amdgpu_ring_write(ring, ref_and_mask); /* reference */
+	amdgpu_ring_write(ring, ref_and_mask); /* mask */
+	amdgpu_ring_write(ring, SDMA_PKT_POLL_REGMEM_DW5_RETRY_COUNT(0xfff) |
+			  SDMA_PKT_POLL_REGMEM_DW5_INTERVAL(10)); /* retry count, poll interval */
+}
+
+static void sdma_v4_0_ring_emit_hdp_invalidate(struct amdgpu_ring *ring)
+{
+	amdgpu_ring_write(ring, SDMA_PKT_HEADER_OP(SDMA_OP_SRBM_WRITE) |
+			  SDMA_PKT_SRBM_WRITE_HEADER_BYTE_EN(0xf));
+	amdgpu_ring_write(ring, SOC15_REG_OFFSET(HDP, 0, mmHDP_DEBUG0));
+	amdgpu_ring_write(ring, 1);
+}
+
+/**
+ * sdma_v4_0_ring_emit_fence - emit a fence on the DMA ring
+ *
+ * @ring: amdgpu ring pointer
+ * @addr: address the fence value is written to
+ * @seq: sequence number to write
+ * @flags: fence flags (e.g. AMDGPU_FENCE_FLAG_64BIT)
+ *
+ * Add a DMA fence packet to the ring to write
+ * the fence seq number and DMA trap packet to generate
+ * an interrupt if needed (VEGA10).
+ */
+static void sdma_v4_0_ring_emit_fence(struct amdgpu_ring *ring, u64 addr, u64 seq,
+				      unsigned flags)
+{
+	bool write64bit = flags & AMDGPU_FENCE_FLAG_64BIT;
+	/* write the fence */
+	amdgpu_ring_write(ring, SDMA_PKT_HEADER_OP(SDMA_OP_FENCE));
+	/* zero in first two bits */
+	BUG_ON(addr & 0x3);
+	amdgpu_ring_write(ring, lower_32_bits(addr));
+	amdgpu_ring_write(ring, upper_32_bits(addr));
+	amdgpu_ring_write(ring, lower_32_bits(seq));
+
+	/* optionally write high bits as well */
+	if (write64bit) {
+		addr += 4;
+		amdgpu_ring_write(ring, SDMA_PKT_HEADER_OP(SDMA_OP_FENCE));
+		/* zero in first two bits */
+		BUG_ON(addr & 0x3);
+		amdgpu_ring_write(ring, lower_32_bits(addr));
+		amdgpu_ring_write(ring, upper_32_bits(addr));
+		amdgpu_ring_write(ring, upper_32_bits(seq));
+	}
+
+	/* generate an interrupt */
+	amdgpu_ring_write(ring, SDMA_PKT_HEADER_OP(SDMA_OP_TRAP));
+	amdgpu_ring_write(ring, SDMA_PKT_TRAP_INT_CONTEXT_INT_CONTEXT(0));
+}
+
+/**
+ * sdma_v4_0_gfx_stop - stop the gfx async dma engines
+ *
+ * @adev: amdgpu_device pointer
+ *
+ * Stop the gfx async dma ring buffers (VEGA10).
+ */
+static void sdma_v4_0_gfx_stop(struct amdgpu_device *adev)
+{
+	struct amdgpu_ring *sdma0 = &adev->sdma.instance[0].ring;
+	struct amdgpu_ring *sdma1 = &adev->sdma.instance[1].ring;
+	u32 rb_cntl, ib_cntl;
+	int i;
+
+	if ((adev->mman.buffer_funcs_ring == sdma0) ||
+	    (adev->mman.buffer_funcs_ring == sdma1))
+		amdgpu_ttm_set_active_vram_size(adev, adev->mc.visible_vram_size);
+
+	for (i = 0; i < adev->sdma.num_instances; i++) {
+		rb_cntl = RREG32(sdma_v4_0_get_reg_offset(i, mmSDMA0_GFX_RB_CNTL));
+		rb_cntl = REG_SET_FIELD(rb_cntl, SDMA0_GFX_RB_CNTL, RB_ENABLE, 0);
+		WREG32(sdma_v4_0_get_reg_offset(i, mmSDMA0_GFX_RB_CNTL), rb_cntl);
+		ib_cntl = RREG32(sdma_v4_0_get_reg_offset(i, mmSDMA0_GFX_IB_CNTL));
+		ib_cntl = REG_SET_FIELD(ib_cntl, SDMA0_GFX_IB_CNTL, IB_ENABLE, 0);
+		WREG32(sdma_v4_0_get_reg_offset(i, mmSDMA0_GFX_IB_CNTL), ib_cntl);
+	}
+
+	sdma0->ready = false;
+	sdma1->ready = false;
+}
+
+/**
+ * sdma_v4_0_rlc_stop - stop the compute async dma engines
+ *
+ * @adev: amdgpu_device pointer
+ *
+ * Stop the compute async dma queues (VEGA10).
+ */
+static void sdma_v4_0_rlc_stop(struct amdgpu_device *adev)
+{
+	/* XXX todo */
+}
+
+/**
+ * sdma_v4_0_ctx_switch_enable - enable/disable the async dma engines context switch
+ *
+ * @adev: amdgpu_device pointer
+ * @enable: enable/disable the DMA MEs context switch.
+ *
+ * Halt or unhalt the async dma engines context switch (VEGA10).
+ */
+static void sdma_v4_0_ctx_switch_enable(struct amdgpu_device *adev, bool enable)
+{
+	u32 f32_cntl;
+	int i;
+
+	for (i = 0; i < adev->sdma.num_instances; i++) {
+		f32_cntl = RREG32(sdma_v4_0_get_reg_offset(i, mmSDMA0_CNTL));
+		f32_cntl = REG_SET_FIELD(f32_cntl, SDMA0_CNTL,
+				AUTO_CTXSW_ENABLE, enable ? 1 : 0);
+		WREG32(sdma_v4_0_get_reg_offset(i, mmSDMA0_CNTL), f32_cntl);
+	}
+}
+
+/**
+ * sdma_v4_0_enable - enable/disable the async dma engines
+ *
+ * @adev: amdgpu_device pointer
+ * @enable: enable/disable the DMA MEs.
+ *
+ * Halt or unhalt the async dma engines (VEGA10).
+ */
+static void sdma_v4_0_enable(struct amdgpu_device *adev, bool enable)
+{
+	u32 f32_cntl;
+	int i;
+
+	if (!enable) {
+		sdma_v4_0_gfx_stop(adev);
+		sdma_v4_0_rlc_stop(adev);
+	}
+
+	for (i = 0; i < adev->sdma.num_instances; i++) {
+		f32_cntl = RREG32(sdma_v4_0_get_reg_offset(i, mmSDMA0_F32_CNTL));
+		f32_cntl = REG_SET_FIELD(f32_cntl, SDMA0_F32_CNTL, HALT, enable ? 0 : 1);
+		WREG32(sdma_v4_0_get_reg_offset(i, mmSDMA0_F32_CNTL), f32_cntl);
+	}
+}
+
+/**
+ * sdma_v4_0_gfx_resume - setup and start the async dma engines
+ *
+ * @adev: amdgpu_device pointer
+ *
+ * Set up the gfx DMA ring buffers and enable them (VEGA10).
+ * Returns 0 for success, error for failure.
+ */
+static int sdma_v4_0_gfx_resume(struct amdgpu_device *adev)
+{
+	struct amdgpu_ring *ring;
+	u32 rb_cntl, ib_cntl;
+	u32 rb_bufsz;
+	u32 wb_offset;
+	u32 doorbell;
+	u32 doorbell_offset;
+	int i, r;
+
+	for (i = 0; i < adev->sdma.num_instances; i++) {
+		ring = &adev->sdma.instance[i].ring;
+		wb_offset = (ring->rptr_offs * 4);
+
+		WREG32(sdma_v4_0_get_reg_offset(i, mmSDMA0_SEM_WAIT_FAIL_TIMER_CNTL), 0);
+
+		/* Set ring buffer size in dwords */
+		rb_bufsz = order_base_2(ring->ring_size / 4);
+		rb_cntl = RREG32(sdma_v4_0_get_reg_offset(i, mmSDMA0_GFX_RB_CNTL));
+		rb_cntl = REG_SET_FIELD(rb_cntl, SDMA0_GFX_RB_CNTL, RB_SIZE, rb_bufsz);
+#ifdef __BIG_ENDIAN
+		rb_cntl = REG_SET_FIELD(rb_cntl, SDMA0_GFX_RB_CNTL, RB_SWAP_ENABLE, 1);
+		rb_cntl = REG_SET_FIELD(rb_cntl, SDMA0_GFX_RB_CNTL,
+					RPTR_WRITEBACK_SWAP_ENABLE, 1);
+#endif
+		WREG32(sdma_v4_0_get_reg_offset(i, mmSDMA0_GFX_RB_CNTL), rb_cntl);
+
+		/* Initialize the ring buffer's read and write pointers */
+		WREG32(sdma_v4_0_get_reg_offset(i, mmSDMA0_GFX_RB_RPTR), 0);
+		WREG32(sdma_v4_0_get_reg_offset(i, mmSDMA0_GFX_RB_RPTR_HI), 0);
+		WREG32(sdma_v4_0_get_reg_offset(i, mmSDMA0_GFX_RB_WPTR), 0);
+		WREG32(sdma_v4_0_get_reg_offset(i, mmSDMA0_GFX_RB_WPTR_HI), 0);
+
+		/* set the wb address whether it's enabled or not */
+		WREG32(sdma_v4_0_get_reg_offset(i, mmSDMA0_GFX_RB_RPTR_ADDR_HI),
+		       upper_32_bits(adev->wb.gpu_addr + wb_offset) & 0xFFFFFFFF);
+		WREG32(sdma_v4_0_get_reg_offset(i, mmSDMA0_GFX_RB_RPTR_ADDR_LO),
+		       lower_32_bits(adev->wb.gpu_addr + wb_offset) & 0xFFFFFFFC);
+
+		rb_cntl = REG_SET_FIELD(rb_cntl, SDMA0_GFX_RB_CNTL, RPTR_WRITEBACK_ENABLE, 1);
+
+		WREG32(sdma_v4_0_get_reg_offset(i, mmSDMA0_GFX_RB_BASE), ring->gpu_addr >> 8);
+		WREG32(sdma_v4_0_get_reg_offset(i, mmSDMA0_GFX_RB_BASE_HI), ring->gpu_addr >> 40);
+
+		ring->wptr = 0;
+		WREG32(sdma_v4_0_get_reg_offset(i, mmSDMA0_GFX_RB_WPTR), lower_32_bits(ring->wptr) << 2);
+		WREG32(sdma_v4_0_get_reg_offset(i, mmSDMA0_GFX_RB_WPTR_HI), upper_32_bits(ring->wptr) << 2);
+
+		doorbell = RREG32(sdma_v4_0_get_reg_offset(i, mmSDMA0_GFX_DOORBELL));
+		doorbell_offset = RREG32(sdma_v4_0_get_reg_offset(i, mmSDMA0_GFX_DOORBELL_OFFSET));
+
+		if (ring->use_doorbell) {
+			doorbell = REG_SET_FIELD(doorbell, SDMA0_GFX_DOORBELL, ENABLE, 1);
+			doorbell_offset = REG_SET_FIELD(doorbell_offset, SDMA0_GFX_DOORBELL_OFFSET,
+					OFFSET, ring->doorbell_index);
+		} else {
+			doorbell = REG_SET_FIELD(doorbell, SDMA0_GFX_DOORBELL, ENABLE, 0);
+		}
+		WREG32(sdma_v4_0_get_reg_offset(i, mmSDMA0_GFX_DOORBELL), doorbell);
+		WREG32(sdma_v4_0_get_reg_offset(i, mmSDMA0_GFX_DOORBELL_OFFSET), doorbell_offset);
+		nbio_v6_1_sdma_doorbell_range(adev, i, ring->use_doorbell, ring->doorbell_index);
+
+		/* enable DMA RB */
+		rb_cntl = REG_SET_FIELD(rb_cntl, SDMA0_GFX_RB_CNTL, RB_ENABLE, 1);
+		WREG32(sdma_v4_0_get_reg_offset(i, mmSDMA0_GFX_RB_CNTL), rb_cntl);
+
+		ib_cntl = RREG32(sdma_v4_0_get_reg_offset(i, mmSDMA0_GFX_IB_CNTL));
+		ib_cntl = REG_SET_FIELD(ib_cntl, SDMA0_GFX_IB_CNTL, IB_ENABLE, 1);
+#ifdef __BIG_ENDIAN
+		ib_cntl = REG_SET_FIELD(ib_cntl, SDMA0_GFX_IB_CNTL, IB_SWAP_ENABLE, 1);
+#endif
+		/* enable DMA IBs */
+		WREG32(sdma_v4_0_get_reg_offset(i, mmSDMA0_GFX_IB_CNTL), ib_cntl);
+
+		ring->ready = true;
+
+		r = amdgpu_ring_test_ring(ring);
+		if (r) {
+			ring->ready = false;
+			return r;
+		}
+
+		if (adev->mman.buffer_funcs_ring == ring)
+			amdgpu_ttm_set_active_vram_size(adev, adev->mc.real_vram_size);
+	}
+
+	return 0;
+}
+
+/**
+ * sdma_v4_0_rlc_resume - setup and start the async dma engines
+ *
+ * @adev: amdgpu_device pointer
+ *
+ * Set up the compute DMA queues and enable them (VEGA10).
+ * Returns 0 for success, error for failure.
+ */
+static int sdma_v4_0_rlc_resume(struct amdgpu_device *adev)
+{
+	/* XXX todo */
+	return 0;
+}
+
+/**
+ * sdma_v4_0_load_microcode - load the sDMA ME ucode
+ *
+ * @adev: amdgpu_device pointer
+ *
+ * Loads the sDMA0/1 ucode.
+ * Returns 0 for success, -EINVAL if the ucode is not available.
+ */
+static int sdma_v4_0_load_microcode(struct amdgpu_device *adev)
+{
+	const struct sdma_firmware_header_v1_0 *hdr;
+	const __le32 *fw_data;
+	u32 fw_size;
+	u32 digest_size = 0;
+	int i, j;
+
+	/* halt the MEs */
+	sdma_v4_0_enable(adev, false);
+
+	for (i = 0; i < adev->sdma.num_instances; i++) {
+		uint16_t version_major;
+		uint16_t version_minor;
+		if (!adev->sdma.instance[i].fw)
+			return -EINVAL;
+
+		hdr = (const struct sdma_firmware_header_v1_0 *)adev->sdma.instance[i].fw->data;
+		amdgpu_ucode_print_sdma_hdr(&hdr->header);
+		fw_size = le32_to_cpu(hdr->header.ucode_size_bytes) / 4;
+
+		version_major = le16_to_cpu(hdr->header.header_version_major);
+		version_minor = le16_to_cpu(hdr->header.header_version_minor);
+
+		if (version_major == 1 && version_minor >= 1) {
+			const struct sdma_firmware_header_v1_1 *sdma_v1_1_hdr = (const struct sdma_firmware_header_v1_1 *) hdr;
+			digest_size = le32_to_cpu(sdma_v1_1_hdr->digest_size);
+		}
+
+		fw_size -= digest_size;
+
+		fw_data = (const __le32 *)
+			(adev->sdma.instance[i].fw->data +
+				le32_to_cpu(hdr->header.ucode_array_offset_bytes));
+
+		sdma_v4_0_print_ucode_regs(adev);
+
+		WREG32(sdma_v4_0_get_reg_offset(i, mmSDMA0_UCODE_ADDR), 0);
+
+		for (j = 0; j < fw_size; j++)
+			WREG32(sdma_v4_0_get_reg_offset(i, mmSDMA0_UCODE_DATA), le32_to_cpup(fw_data++));
+
+		WREG32(sdma_v4_0_get_reg_offset(i, mmSDMA0_UCODE_ADDR), adev->sdma.instance[i].fw_version);
+	}
+
+	sdma_v4_0_print_ucode_regs(adev);
+
+	return 0;
+}
+
+/**
+ * sdma_v4_0_start - setup and start the async dma engines
+ *
+ * @adev: amdgpu_device pointer
+ *
+ * Set up the DMA engines and enable them (VEGA10).
+ * Returns 0 for success, error for failure.
+ */
+static int sdma_v4_0_start(struct amdgpu_device *adev)
+{
+	int r;
+
+	if (adev->firmware.load_type != AMDGPU_FW_LOAD_PSP) {
+		DRM_INFO("Loading via direct write\n");
+		r = sdma_v4_0_load_microcode(adev);
+		if (r)
+			return r;
+	}
+
+	/* unhalt the MEs */
+	sdma_v4_0_enable(adev, true);
+	/* enable sdma ring preemption */
+	sdma_v4_0_ctx_switch_enable(adev, true);
+
+	/* start the gfx rings and rlc compute queues */
+	r = sdma_v4_0_gfx_resume(adev);
+	if (r)
+		return r;
+	r = sdma_v4_0_rlc_resume(adev);
+	if (r)
+		return r;
+
+	return 0;
+}
+
+/**
+ * sdma_v4_0_ring_test_ring - simple async dma engine test
+ *
+ * @ring: amdgpu_ring structure holding ring information
+ *
+ * Test the DMA engine by using it to write a value
+ * to memory (VEGA10).
+ * Returns 0 for success, error for failure.
+ */
+static int sdma_v4_0_ring_test_ring(struct amdgpu_ring *ring)
+{
+	struct amdgpu_device *adev = ring->adev;
+	unsigned i;
+	unsigned index;
+	int r;
+	u32 tmp;
+	u64 gpu_addr;
+
+	DRM_DEBUG("In Ring test func\n");
+
+	r = amdgpu_wb_get(adev, &index);
+	if (r) {
+		dev_err(adev->dev, "(%d) failed to allocate wb slot\n", r);
+		return r;
+	}
+
+	gpu_addr = adev->wb.gpu_addr + (index * 4);
+	tmp = 0xCAFEDEAD;
+	adev->wb.wb[index] = cpu_to_le32(tmp);
+
+	r = amdgpu_ring_alloc(ring, 5);
+	if (r) {
+		DRM_ERROR("amdgpu: dma failed to lock ring %d (%d).\n", ring->idx, r);
+		amdgpu_wb_free(adev, index);
+		return r;
+	}
+
+	amdgpu_ring_write(ring, SDMA_PKT_HEADER_OP(SDMA_OP_WRITE) |
+			  SDMA_PKT_HEADER_SUB_OP(SDMA_SUBOP_WRITE_LINEAR));
+	amdgpu_ring_write(ring, lower_32_bits(gpu_addr));
+	amdgpu_ring_write(ring, upper_32_bits(gpu_addr));
+	amdgpu_ring_write(ring, SDMA_PKT_WRITE_UNTILED_DW_3_COUNT(0));
+	amdgpu_ring_write(ring, 0xDEADBEEF);
+	amdgpu_ring_commit(ring);
+
+	for (i = 0; i < adev->usec_timeout; i++) {
+		tmp = le32_to_cpu(adev->wb.wb[index]);
+		if (tmp == 0xDEADBEEF)
+			break;
+		DRM_UDELAY(1);
+	}
+
+	if (i < adev->usec_timeout) {
+		DRM_INFO("ring test on %d succeeded in %d usecs\n", ring->idx, i);
+	} else {
+		DRM_ERROR("amdgpu: ring %d test failed (0x%08X)\n",
+			  ring->idx, tmp);
+		r = -EINVAL;
+	}
+	amdgpu_wb_free(adev, index);
+
+	return r;
+}
+
+/**
+ * sdma_v4_0_ring_test_ib - test an IB on the DMA engine
+ *
+ * @ring: amdgpu_ring structure holding ring information
+ *
+ * Test a simple IB in the DMA ring (VEGA10).
+ * Returns 0 on success, error on failure.
+ */
+static int sdma_v4_0_ring_test_ib(struct amdgpu_ring *ring, long timeout)
+{
+	struct amdgpu_device *adev = ring->adev;
+	struct amdgpu_ib ib;
+	struct fence *f = NULL;
+	unsigned index;
+	long r;
+	u32 tmp = 0;
+	u64 gpu_addr;
+
+	r = amdgpu_wb_get(adev, &index);
+	if (r) {
+		dev_err(adev->dev, "(%ld) failed to allocate wb slot\n", r);
+		return r;
+	}
+
+	gpu_addr = adev->wb.gpu_addr + (index * 4);
+	tmp = 0xCAFEDEAD;
+	adev->wb.wb[index] = cpu_to_le32(tmp);
+	memset(&ib, 0, sizeof(ib));
+	r = amdgpu_ib_get(adev, NULL, 256, &ib);
+	if (r) {
+		DRM_ERROR("amdgpu: failed to get ib (%ld).\n", r);
+		goto err0;
+	}
+
+	ib.ptr[0] = SDMA_PKT_HEADER_OP(SDMA_OP_WRITE) |
+		SDMA_PKT_HEADER_SUB_OP(SDMA_SUBOP_WRITE_LINEAR);
+	ib.ptr[1] = lower_32_bits(gpu_addr);
+	ib.ptr[2] = upper_32_bits(gpu_addr);
+	ib.ptr[3] = SDMA_PKT_WRITE_UNTILED_DW_3_COUNT(0);
+	ib.ptr[4] = 0xDEADBEEF;
+	ib.ptr[5] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP);
+	ib.ptr[6] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP);
+	ib.ptr[7] = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP);
+	ib.length_dw = 8;
+
+	r = amdgpu_ib_schedule(ring, 1, &ib, NULL, &f);
+	if (r)
+		goto err1;
+
+	r = fence_wait_timeout(f, false, timeout);
+	if (r == 0) {
+		DRM_ERROR("amdgpu: IB test timed out\n");
+		r = -ETIMEDOUT;
+		goto err1;
+	} else if (r < 0) {
+		DRM_ERROR("amdgpu: fence wait failed (%ld).\n", r);
+		goto err1;
+	}
+	tmp = le32_to_cpu(adev->wb.wb[index]);
+	if (tmp == 0xDEADBEEF) {
+		DRM_INFO("ib test on ring %d succeeded\n", ring->idx);
+		r = 0;
+	} else {
+		DRM_ERROR("amdgpu: ib test failed (0x%08X)\n", tmp);
+		r = -EINVAL;
+	}
+err1:
+	amdgpu_ib_free(adev, &ib, NULL);
+	fence_put(f);
+err0:
+	amdgpu_wb_free(adev, index);
+	return r;
+}
+
+/**
+ * sdma_v4_0_vm_copy_pte - update PTEs by copying them from the GART
+ *
+ * @ib: indirect buffer to fill with commands
+ * @pe: addr of the page entry
+ * @src: src addr to copy from
+ * @count: number of page entries to update
+ *
+ * Update PTEs by copying them from the GART using sDMA (VEGA10).
+ */
+static void sdma_v4_0_vm_copy_pte(struct amdgpu_ib *ib,
+				  uint64_t pe, uint64_t src,
+				  unsigned count)
+{
+	unsigned bytes = count * 8;
+
+	ib->ptr[ib->length_dw++] = SDMA_PKT_HEADER_OP(SDMA_OP_COPY) |
+		SDMA_PKT_HEADER_SUB_OP(SDMA_SUBOP_COPY_LINEAR);
+	ib->ptr[ib->length_dw++] = bytes - 1;
+	ib->ptr[ib->length_dw++] = 0; /* src/dst endian swap */
+	ib->ptr[ib->length_dw++] = lower_32_bits(src);
+	ib->ptr[ib->length_dw++] = upper_32_bits(src);
+	ib->ptr[ib->length_dw++] = lower_32_bits(pe);
+	ib->ptr[ib->length_dw++] = upper_32_bits(pe);
+}
+
+/**
+ * sdma_v4_0_vm_write_pte - update PTEs by writing them manually
+ *
+ * @ib: indirect buffer to fill with commands
+ * @pe: addr of the page entry
+ * @value: value to write into the ptes
+ * @count: number of page entries to update
+ * @incr: increase next addr by incr bytes
+ *
+ * Update PTEs by writing them manually using sDMA (VEGA10).
+ */
+static void sdma_v4_0_vm_write_pte(struct amdgpu_ib *ib, uint64_t pe,
+				   uint64_t value, unsigned count,
+				   uint32_t incr)
+{
+	unsigned ndw = count * 2;
+
+	ib->ptr[ib->length_dw++] = SDMA_PKT_HEADER_OP(SDMA_OP_WRITE) |
+		SDMA_PKT_HEADER_SUB_OP(SDMA_SUBOP_WRITE_LINEAR);
+	ib->ptr[ib->length_dw++] = lower_32_bits(pe);
+	ib->ptr[ib->length_dw++] = upper_32_bits(pe);
+	ib->ptr[ib->length_dw++] = ndw - 1;
+	for (; ndw > 0; ndw -= 2) {
+		ib->ptr[ib->length_dw++] = lower_32_bits(value);
+		ib->ptr[ib->length_dw++] = upper_32_bits(value);
+		value += incr;
+	}
+}
+
+/**
+ * sdma_v4_0_vm_set_pte_pde - update the page tables using sDMA
+ *
+ * @ib: indirect buffer to fill with commands
+ * @pe: addr of the page entry
+ * @addr: dst addr to write into pe
+ * @count: number of page entries to update
+ * @incr: increase next addr by incr bytes
+ * @flags: access flags
+ *
+ * Update the page tables using sDMA (VEGA10).
+ */
+static void sdma_v4_0_vm_set_pte_pde(struct amdgpu_ib *ib,
+				     uint64_t pe,
+				     uint64_t addr, unsigned count,
+				     uint32_t incr, uint64_t flags)
+{
+	/* for physically contiguous pages (vram) */
+	ib->ptr[ib->length_dw++] = SDMA_PKT_HEADER_OP(SDMA_OP_PTEPDE);
+	ib->ptr[ib->length_dw++] = lower_32_bits(pe); /* dst addr */
+	ib->ptr[ib->length_dw++] = upper_32_bits(pe);
+	ib->ptr[ib->length_dw++] = flags; /* mask */
+	ib->ptr[ib->length_dw++] = 0;
+	ib->ptr[ib->length_dw++] = lower_32_bits(addr); /* value */
+	ib->ptr[ib->length_dw++] = upper_32_bits(addr);
+	ib->ptr[ib->length_dw++] = incr; /* increment size */
+	ib->ptr[ib->length_dw++] = 0;
+	ib->ptr[ib->length_dw++] = count - 1; /* number of entries */
+}
+
+/**
+ * sdma_v4_0_ring_pad_ib - pad the IB to the required number of dw
+ *
+ * @ring: amdgpu_ring pointer
+ * @ib: indirect buffer to fill with padding
+ *
+ * Pad the IB with NOPs to a multiple of 8 dwords.
+ */
+static void sdma_v4_0_ring_pad_ib(struct amdgpu_ring *ring, struct amdgpu_ib *ib)
+{
+	struct amdgpu_sdma_instance *sdma = amdgpu_get_sdma_instance(ring);
+	u32 pad_count;
+	int i;
+
+	pad_count = (8 - (ib->length_dw & 0x7)) % 8;
+	for (i = 0; i < pad_count; i++)
+		if (sdma && sdma->burst_nop && (i == 0))
+			ib->ptr[ib->length_dw++] =
+				SDMA_PKT_HEADER_OP(SDMA_OP_NOP) |
+				SDMA_PKT_NOP_HEADER_COUNT(pad_count - 1);
+		else
+			ib->ptr[ib->length_dw++] =
+				SDMA_PKT_HEADER_OP(SDMA_OP_NOP);
+}
+
+/**
+ * sdma_v4_0_ring_emit_pipeline_sync - sync the pipeline
+ *
+ * @ring: amdgpu_ring pointer
+ *
+ * Make sure all previous operations are completed (VEGA10).
+ */
+static void sdma_v4_0_ring_emit_pipeline_sync(struct amdgpu_ring *ring)
+{
+	uint32_t seq = ring->fence_drv.sync_seq;
+	uint64_t addr = ring->fence_drv.gpu_addr;
+
+	/* wait for idle */
+	amdgpu_ring_write(ring, SDMA_PKT_HEADER_OP(SDMA_OP_POLL_REGMEM) |
+			  SDMA_PKT_POLL_REGMEM_HEADER_HDP_FLUSH(0) |
+			  SDMA_PKT_POLL_REGMEM_HEADER_FUNC(3) | /* equal */
+			  SDMA_PKT_POLL_REGMEM_HEADER_MEM_POLL(1));
+	amdgpu_ring_write(ring, addr & 0xfffffffc);
+	amdgpu_ring_write(ring, upper_32_bits(addr) & 0xffffffff);
+	amdgpu_ring_write(ring, seq); /* reference */
+	amdgpu_ring_write(ring, 0xfffffff); /* mask */
+	amdgpu_ring_write(ring, SDMA_PKT_POLL_REGMEM_DW5_RETRY_COUNT(0xfff) |
+			  SDMA_PKT_POLL_REGMEM_DW5_INTERVAL(4)); /* retry count, poll interval */
+}
+
+/**
+ * sdma_v4_0_ring_emit_vm_flush - vm flush using sDMA
+ *
+ * @ring: amdgpu_ring pointer
+ * @vm: amdgpu_vm pointer
+ *
+ * Update the page table base and flush the VM TLB
+ * using sDMA (VEGA10).
+ */
+static void sdma_v4_0_ring_emit_vm_flush(struct amdgpu_ring *ring,
+					 unsigned vm_id, uint64_t pd_addr)
+{
+	unsigned eng = ring->idx;
+	unsigned i;
+
+	pd_addr = pd_addr | 0x1; /* valid bit */
+	/* now only use physical base address of PDE and valid */
+	BUG_ON(pd_addr & 0xFFFF00000000003EULL);
+
+	for (i = 0; i < AMDGPU_MAX_VMHUBS; ++i) {
+		struct amdgpu_vmhub *hub = &ring->adev->vmhub[i];
+		uint32_t req = hub->get_invalidate_req(vm_id);
+
+		amdgpu_ring_write(ring, SDMA_PKT_HEADER_OP(SDMA_OP_SRBM_WRITE) |
+				  SDMA_PKT_SRBM_WRITE_HEADER_BYTE_EN(0xf));
+		amdgpu_ring_write(ring, hub->ctx0_ptb_addr_lo32 + vm_id * 2);
+		amdgpu_ring_write(ring, lower_32_bits(pd_addr));
+
+		amdgpu_ring_write(ring, SDMA_PKT_HEADER_OP(SDMA_OP_SRBM_WRITE) |
+				  SDMA_PKT_SRBM_WRITE_HEADER_BYTE_EN(0xf));
+		amdgpu_ring_write(ring, hub->ctx0_ptb_addr_hi32 + vm_id * 2);
+		amdgpu_ring_write(ring, upper_32_bits(pd_addr));
+
+		/* flush TLB */
+		amdgpu_ring_write(ring, SDMA_PKT_HEADER_OP(SDMA_OP_SRBM_WRITE) |
+				  SDMA_PKT_SRBM_WRITE_HEADER_BYTE_EN(0xf));
+		amdgpu_ring_write(ring, hub->vm_inv_eng0_req + eng);
+		amdgpu_ring_write(ring, req);
+
+		/* wait for flush */
+		amdgpu_ring_write(ring, SDMA_PKT_HEADER_OP(SDMA_OP_POLL_REGMEM) |
+				  SDMA_PKT_POLL_REGMEM_HEADER_HDP_FLUSH(0) |
+				  SDMA_PKT_POLL_REGMEM_HEADER_FUNC(3)); /* equal */
+		amdgpu_ring_write(ring, (hub->vm_inv_eng0_ack + eng) << 2);
+		amdgpu_ring_write(ring, 0);
+		amdgpu_ring_write(ring, 1 << vm_id); /* reference */
+		amdgpu_ring_write(ring, 1 << vm_id); /* mask */
+		amdgpu_ring_write(ring, SDMA_PKT_POLL_REGMEM_DW5_RETRY_COUNT(0xfff) |
+				  SDMA_PKT_POLL_REGMEM_DW5_INTERVAL(10));
+	}
+}
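The per-VMID register arithmetic above is easy to miss: each VMID owns a lo/hi pair of 32-bit page-table-base registers, so the two dwords live at `base + vm_id * 2`, and the invalidate is complete once bit `vm_id` is set in the ack register. A minimal host-side sketch of that arithmetic (the register offsets used in the test are hypothetical):

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the per-VMID addressing used in
 * sdma_v4_0_ring_emit_vm_flush(): a lo/hi register pair per VMID
 * gives a stride of 2 dwords. */
static uint32_t vmid_ptb_lo_reg(uint32_t ctx0_ptb_addr_lo32, unsigned vm_id)
{
	return ctx0_ptb_addr_lo32 + vm_id * 2;
}

/* The flush wait polls for bit vm_id in the invalidate-ack register. */
static int vm_inv_acked(uint32_t ack_val, unsigned vm_id)
{
	return (ack_val & (1u << vm_id)) != 0;
}
```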
+
+static int sdma_v4_0_early_init(void *handle)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	adev->sdma.num_instances = 2;
+
+	sdma_v4_0_set_ring_funcs(adev);
+	sdma_v4_0_set_buffer_funcs(adev);
+	sdma_v4_0_set_vm_pte_funcs(adev);
+	sdma_v4_0_set_irq_funcs(adev);
+
+	return 0;
+}
+
+static int sdma_v4_0_sw_init(void *handle)
+{
+	struct amdgpu_ring *ring;
+	int r, i;
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	/* SDMA trap event */
+	r = amdgpu_irq_add_id(adev, AMDGPU_IH_CLIENTID_SDMA0, 224,
+			      &adev->sdma.trap_irq);
+	if (r)
+		return r;
+
+	/* SDMA trap event */
+	r = amdgpu_irq_add_id(adev, AMDGPU_IH_CLIENTID_SDMA1, 224,
+			      &adev->sdma.trap_irq);
+	if (r)
+		return r;
+
+	r = sdma_v4_0_init_microcode(adev);
+	if (r) {
+		DRM_ERROR("Failed to load sdma firmware!\n");
+		return r;
+	}
+
+	for (i = 0; i < adev->sdma.num_instances; i++) {
+		ring = &adev->sdma.instance[i].ring;
+		ring->ring_obj = NULL;
+		ring->use_doorbell = true;
+
+		DRM_INFO("use_doorbell being set to: [%s]\n",
+			 ring->use_doorbell ? "true" : "false");
+
+		ring->doorbell_index = (i == 0) ?
+			(AMDGPU_DOORBELL64_sDMA_ENGINE0 << 1) : /* DWORD offset */
+			(AMDGPU_DOORBELL64_sDMA_ENGINE1 << 1);  /* DWORD offset */
+
+		sprintf(ring->name, "sdma%d", i);
+		r = amdgpu_ring_init(adev, ring, 1024,
+				     &adev->sdma.trap_irq,
+				     (i == 0) ?
+				     AMDGPU_SDMA_IRQ_TRAP0 :
+				     AMDGPU_SDMA_IRQ_TRAP1);
+		if (r)
+			return r;
+	}
+
+	return r;
+}
+
+static int sdma_v4_0_sw_fini(void *handle)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+	int i;
+
+	for (i = 0; i < adev->sdma.num_instances; i++)
+		amdgpu_ring_fini(&adev->sdma.instance[i].ring);
+
+	return 0;
+}
+
+static int sdma_v4_0_hw_init(void *handle)
+{
+	int r;
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	sdma_v4_0_init_golden_registers(adev);
+
+	r = sdma_v4_0_start(adev);
+	if (r)
+		return r;
+
+	return r;
+}
+
+static int sdma_v4_0_hw_fini(void *handle)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	sdma_v4_0_ctx_switch_enable(adev, false);
+	sdma_v4_0_enable(adev, false);
+
+	return 0;
+}
+
+static int sdma_v4_0_suspend(void *handle)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	return sdma_v4_0_hw_fini(adev);
+}
+
+static int sdma_v4_0_resume(void *handle)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	return sdma_v4_0_hw_init(adev);
+}
+
+static bool sdma_v4_0_is_idle(void *handle)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+	u32 i;
+
+	for (i = 0; i < adev->sdma.num_instances; i++) {
+		u32 tmp = RREG32(sdma_v4_0_get_reg_offset(i, mmSDMA0_STATUS_REG));
+
+		if (!(tmp & SDMA0_STATUS_REG__IDLE_MASK))
+			return false;
+	}
+
+	return true;
+}
+
+static int sdma_v4_0_wait_for_idle(void *handle)
+{
+	unsigned i;
+	u32 sdma0, sdma1;
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	for (i = 0; i < adev->usec_timeout; i++) {
+		sdma0 = RREG32(sdma_v4_0_get_reg_offset(0, mmSDMA0_STATUS_REG));
+		sdma1 = RREG32(sdma_v4_0_get_reg_offset(1, mmSDMA0_STATUS_REG));
+
+		if (sdma0 & sdma1 & SDMA0_STATUS_REG__IDLE_MASK)
+			return 0;
+		udelay(1);
+	}
+	return -ETIMEDOUT;
+}
+
+static int sdma_v4_0_soft_reset(void *handle)
+{
+	/* todo */
+
+	return 0;
+}
+
+static int sdma_v4_0_set_trap_irq_state(struct amdgpu_device *adev,
+					struct amdgpu_irq_src *source,
+					unsigned type,
+					enum amdgpu_interrupt_state state)
+{
+	u32 sdma_cntl;
+
+	u32 reg_offset = (type == AMDGPU_SDMA_IRQ_TRAP0) ?
+		sdma_v4_0_get_reg_offset(0, mmSDMA0_CNTL) :
+		sdma_v4_0_get_reg_offset(1, mmSDMA0_CNTL);
+
+	sdma_cntl = RREG32(reg_offset);
+	sdma_cntl = REG_SET_FIELD(sdma_cntl, SDMA0_CNTL, TRAP_ENABLE,
+		       state == AMDGPU_IRQ_STATE_ENABLE ? 1 : 0);
+	WREG32(reg_offset, sdma_cntl);
+
+	return 0;
+}
+
+static int sdma_v4_0_process_trap_irq(struct amdgpu_device *adev,
+				      struct amdgpu_irq_src *source,
+				      struct amdgpu_iv_entry *entry)
+{
+	DRM_DEBUG("IH: SDMA trap\n");
+	switch (entry->client_id) {
+	case AMDGPU_IH_CLIENTID_SDMA0:
+		switch (entry->ring_id) {
+		case 0:
+			amdgpu_fence_process(&adev->sdma.instance[0].ring);
+			break;
+		case 1:
+			/* XXX compute */
+			break;
+		case 2:
+			/* XXX compute */
+			break;
+		case 3:
+			/* XXX page queue */
+			break;
+		}
+		break;
+	case AMDGPU_IH_CLIENTID_SDMA1:
+		switch (entry->ring_id) {
+		case 0:
+			amdgpu_fence_process(&adev->sdma.instance[1].ring);
+			break;
+		case 1:
+			/* XXX compute */
+			break;
+		case 2:
+			/* XXX compute */
+			break;
+		case 3:
+			/* XXX page queue */
+			break;
+		}
+		break;
+	}
+	return 0;
+}
+
+static int sdma_v4_0_process_illegal_inst_irq(struct amdgpu_device *adev,
+					      struct amdgpu_irq_src *source,
+					      struct amdgpu_iv_entry *entry)
+{
+	DRM_ERROR("Illegal instruction in SDMA command stream\n");
+	schedule_work(&adev->reset_work);
+	return 0;
+}
+
+static void sdma_v4_0_update_medium_grain_clock_gating(
+		struct amdgpu_device *adev,
+		bool enable)
+{
+	uint32_t data, def;
+
+	if (enable && (adev->cg_flags & AMD_CG_SUPPORT_SDMA_MGCG)) {
+		/* enable sdma0 clock gating */
+		def = data = RREG32(SOC15_REG_OFFSET(SDMA0, 0, mmSDMA0_CLK_CTRL));
+		data &= ~(SDMA0_CLK_CTRL__SOFT_OVERRIDE7_MASK |
+			  SDMA0_CLK_CTRL__SOFT_OVERRIDE6_MASK |
+			  SDMA0_CLK_CTRL__SOFT_OVERRIDE5_MASK |
+			  SDMA0_CLK_CTRL__SOFT_OVERRIDE4_MASK |
+			  SDMA0_CLK_CTRL__SOFT_OVERRIDE3_MASK |
+			  SDMA0_CLK_CTRL__SOFT_OVERRIDE2_MASK |
+			  SDMA0_CLK_CTRL__SOFT_OVERRIDE1_MASK |
+			  SDMA0_CLK_CTRL__SOFT_OVERRIDE0_MASK);
+		if (def != data)
+			WREG32(SOC15_REG_OFFSET(SDMA0, 0, mmSDMA0_CLK_CTRL), data);
+
+		if (adev->asic_type == CHIP_VEGA10) {
+			def = data = RREG32(SOC15_REG_OFFSET(SDMA1, 0, mmSDMA1_CLK_CTRL));
+			data &= ~(SDMA1_CLK_CTRL__SOFT_OVERRIDE7_MASK |
+				  SDMA1_CLK_CTRL__SOFT_OVERRIDE6_MASK |
+				  SDMA1_CLK_CTRL__SOFT_OVERRIDE5_MASK |
+				  SDMA1_CLK_CTRL__SOFT_OVERRIDE4_MASK |
+				  SDMA1_CLK_CTRL__SOFT_OVERRIDE3_MASK |
+				  SDMA1_CLK_CTRL__SOFT_OVERRIDE2_MASK |
+				  SDMA1_CLK_CTRL__SOFT_OVERRIDE1_MASK |
+				  SDMA1_CLK_CTRL__SOFT_OVERRIDE0_MASK);
+			if (def != data)
+				WREG32(SOC15_REG_OFFSET(SDMA1, 0, mmSDMA1_CLK_CTRL), data);
+		}
+	} else {
+		/* disable sdma0 clock gating */
+		def = data = RREG32(SOC15_REG_OFFSET(SDMA0, 0, mmSDMA0_CLK_CTRL));
+		data |= (SDMA0_CLK_CTRL__SOFT_OVERRIDE7_MASK |
+			 SDMA0_CLK_CTRL__SOFT_OVERRIDE6_MASK |
+			 SDMA0_CLK_CTRL__SOFT_OVERRIDE5_MASK |
+			 SDMA0_CLK_CTRL__SOFT_OVERRIDE4_MASK |
+			 SDMA0_CLK_CTRL__SOFT_OVERRIDE3_MASK |
+			 SDMA0_CLK_CTRL__SOFT_OVERRIDE2_MASK |
+			 SDMA0_CLK_CTRL__SOFT_OVERRIDE1_MASK |
+			 SDMA0_CLK_CTRL__SOFT_OVERRIDE0_MASK);
+
+		if (def != data)
+			WREG32(SOC15_REG_OFFSET(SDMA0, 0, mmSDMA0_CLK_CTRL), data);
+
+		if (adev->asic_type == CHIP_VEGA10) {
+			def = data = RREG32(SOC15_REG_OFFSET(SDMA1, 0, mmSDMA1_CLK_CTRL));
+			data |= (SDMA1_CLK_CTRL__SOFT_OVERRIDE7_MASK |
+				 SDMA1_CLK_CTRL__SOFT_OVERRIDE6_MASK |
+				 SDMA1_CLK_CTRL__SOFT_OVERRIDE5_MASK |
+				 SDMA1_CLK_CTRL__SOFT_OVERRIDE4_MASK |
+				 SDMA1_CLK_CTRL__SOFT_OVERRIDE3_MASK |
+				 SDMA1_CLK_CTRL__SOFT_OVERRIDE2_MASK |
+				 SDMA1_CLK_CTRL__SOFT_OVERRIDE1_MASK |
+				 SDMA1_CLK_CTRL__SOFT_OVERRIDE0_MASK);
+			if (def != data)
+				WREG32(SOC15_REG_OFFSET(SDMA1, 0, mmSDMA1_CLK_CTRL), data);
+		}
+	}
+}
+
+static void sdma_v4_0_update_medium_grain_light_sleep(
+		struct amdgpu_device *adev,
+		bool enable)
+{
+	uint32_t data, def;
+
+	if (enable && (adev->cg_flags & AMD_CG_SUPPORT_SDMA_LS)) {
+		/* 1-not override: enable sdma0 mem light sleep */
+		def = data = RREG32(SOC15_REG_OFFSET(SDMA0, 0, mmSDMA0_POWER_CNTL));
+		data |= SDMA0_POWER_CNTL__MEM_POWER_OVERRIDE_MASK;
+		if (def != data)
+			WREG32(SOC15_REG_OFFSET(SDMA0, 0, mmSDMA0_POWER_CNTL), data);
+
+		/* 1-not override: enable sdma1 mem light sleep */
+		if (adev->asic_type == CHIP_VEGA10) {
+			def = data = RREG32(SOC15_REG_OFFSET(SDMA1, 0, mmSDMA1_POWER_CNTL));
+			data |= SDMA1_POWER_CNTL__MEM_POWER_OVERRIDE_MASK;
+			if (def != data)
+				WREG32(SOC15_REG_OFFSET(SDMA1, 0, mmSDMA1_POWER_CNTL), data);
+		}
+	} else {
+		/* 0-override:disable sdma0 mem light sleep */
+		def = data = RREG32(SOC15_REG_OFFSET(SDMA0, 0, mmSDMA0_POWER_CNTL));
+		data &= ~SDMA0_POWER_CNTL__MEM_POWER_OVERRIDE_MASK;
+		if (def != data)
+			WREG32(SOC15_REG_OFFSET(SDMA0, 0, mmSDMA0_POWER_CNTL), data);
+
+		/* 0-override:disable sdma1 mem light sleep */
+		if (adev->asic_type == CHIP_VEGA10) {
+			def = data = RREG32(SOC15_REG_OFFSET(SDMA1, 0, mmSDMA1_POWER_CNTL));
+			data &= ~SDMA1_POWER_CNTL__MEM_POWER_OVERRIDE_MASK;
+			if (def != data)
+				WREG32(SOC15_REG_OFFSET(SDMA1, 0, mmSDMA1_POWER_CNTL), data);
+		}
+	}
+}
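Both clock-gating helpers above use the same read-modify-write idiom: capture the register in `def`, compute `data`, and only issue the write when the value actually changed, which saves a register write on the common no-op path. A generic sketch of the idiom, with the register modeled as plain memory for illustration:

```c
#include <assert.h>
#include <stdint.h>

/* Generic form of the "def = data = RREG32(); ...; if (def != data)
 * WREG32()" pattern used by the gating helpers above.  Returns
 * whether a write was issued. */
static int rmw_update(uint32_t *reg, uint32_t clear_mask, uint32_t set_mask)
{
	uint32_t def = *reg;
	uint32_t data = (def & ~clear_mask) | set_mask;

	if (def != data) {
		*reg = data;
		return 1;	/* value changed: write issued */
	}
	return 0;		/* unchanged: write skipped */
}
```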
+
+static int sdma_v4_0_set_clockgating_state(void *handle,
+					  enum amd_clockgating_state state)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	switch (adev->asic_type) {
+	case CHIP_VEGA10:
+		sdma_v4_0_update_medium_grain_clock_gating(adev,
+				state == AMD_CG_STATE_GATE);
+		sdma_v4_0_update_medium_grain_light_sleep(adev,
+				state == AMD_CG_STATE_GATE);
+		break;
+	default:
+		break;
+	}
+	return 0;
+}
+
+static int sdma_v4_0_set_powergating_state(void *handle,
+					  enum amd_powergating_state state)
+{
+	return 0;
+}
+
+const struct amd_ip_funcs sdma_v4_0_ip_funcs = {
+	.name = "sdma_v4_0",
+	.early_init = sdma_v4_0_early_init,
+	.late_init = NULL,
+	.sw_init = sdma_v4_0_sw_init,
+	.sw_fini = sdma_v4_0_sw_fini,
+	.hw_init = sdma_v4_0_hw_init,
+	.hw_fini = sdma_v4_0_hw_fini,
+	.suspend = sdma_v4_0_suspend,
+	.resume = sdma_v4_0_resume,
+	.is_idle = sdma_v4_0_is_idle,
+	.wait_for_idle = sdma_v4_0_wait_for_idle,
+	.soft_reset = sdma_v4_0_soft_reset,
+	.set_clockgating_state = sdma_v4_0_set_clockgating_state,
+	.set_powergating_state = sdma_v4_0_set_powergating_state,
+};
+
+static const struct amdgpu_ring_funcs sdma_v4_0_ring_funcs = {
+	.type = AMDGPU_RING_TYPE_SDMA,
+	.align_mask = 0xf,
+	.nop = SDMA_PKT_NOP_HEADER_OP(SDMA_OP_NOP),
+	.support_64bit_ptrs = true,
+	.get_rptr = sdma_v4_0_ring_get_rptr,
+	.get_wptr = sdma_v4_0_ring_get_wptr,
+	.set_wptr = sdma_v4_0_ring_set_wptr,
+	.emit_frame_size =
+		6 + /* sdma_v4_0_ring_emit_hdp_flush */
+		3 + /* sdma_v4_0_ring_emit_hdp_invalidate */
+		6 + /* sdma_v4_0_ring_emit_pipeline_sync */
+		36 + /* sdma_v4_0_ring_emit_vm_flush */
+		10 + 10 + 10, /* sdma_v4_0_ring_emit_fence x3 for user fence, vm fence */
+	.emit_ib_size = 7 + 6, /* sdma_v4_0_ring_emit_ib */
+	.emit_ib = sdma_v4_0_ring_emit_ib,
+	.emit_fence = sdma_v4_0_ring_emit_fence,
+	.emit_pipeline_sync = sdma_v4_0_ring_emit_pipeline_sync,
+	.emit_vm_flush = sdma_v4_0_ring_emit_vm_flush,
+	.emit_hdp_flush = sdma_v4_0_ring_emit_hdp_flush,
+	.emit_hdp_invalidate = sdma_v4_0_ring_emit_hdp_invalidate,
+	.test_ring = sdma_v4_0_ring_test_ring,
+	.test_ib = sdma_v4_0_ring_test_ib,
+	.insert_nop = sdma_v4_0_ring_insert_nop,
+	.pad_ib = sdma_v4_0_ring_pad_ib,
+};
+
+static void sdma_v4_0_set_ring_funcs(struct amdgpu_device *adev)
+{
+	int i;
+
+	for (i = 0; i < adev->sdma.num_instances; i++)
+		adev->sdma.instance[i].ring.funcs = &sdma_v4_0_ring_funcs;
+}
+
+static const struct amdgpu_irq_src_funcs sdma_v4_0_trap_irq_funcs = {
+	.set = sdma_v4_0_set_trap_irq_state,
+	.process = sdma_v4_0_process_trap_irq,
+};
+
+static const struct amdgpu_irq_src_funcs sdma_v4_0_illegal_inst_irq_funcs = {
+	.process = sdma_v4_0_process_illegal_inst_irq,
+};
+
+static void sdma_v4_0_set_irq_funcs(struct amdgpu_device *adev)
+{
+	adev->sdma.trap_irq.num_types = AMDGPU_SDMA_IRQ_LAST;
+	adev->sdma.trap_irq.funcs = &sdma_v4_0_trap_irq_funcs;
+	adev->sdma.illegal_inst_irq.funcs = &sdma_v4_0_illegal_inst_irq_funcs;
+}
+
+/**
+ * sdma_v4_0_emit_copy_buffer - copy buffer using the sDMA engine
+ *
+ * @ib: indirect buffer to fill with copy commands
+ * @src_offset: src GPU address
+ * @dst_offset: dst GPU address
+ * @byte_count: number of bytes to xfer
+ *
+ * Copy GPU buffers using the DMA engine (VEGA10).
+ * Used by the amdgpu ttm implementation to move pages if
+ * registered as the asic copy callback.
+ */
+static void sdma_v4_0_emit_copy_buffer(struct amdgpu_ib *ib,
+				       uint64_t src_offset,
+				       uint64_t dst_offset,
+				       uint32_t byte_count)
+{
+	ib->ptr[ib->length_dw++] = SDMA_PKT_HEADER_OP(SDMA_OP_COPY) |
+		SDMA_PKT_HEADER_SUB_OP(SDMA_SUBOP_COPY_LINEAR);
+	ib->ptr[ib->length_dw++] = byte_count - 1;
+	ib->ptr[ib->length_dw++] = 0; /* src/dst endian swap */
+	ib->ptr[ib->length_dw++] = lower_32_bits(src_offset);
+	ib->ptr[ib->length_dw++] = upper_32_bits(src_offset);
+	ib->ptr[ib->length_dw++] = lower_32_bits(dst_offset);
+	ib->ptr[ib->length_dw++] = upper_32_bits(dst_offset);
+}
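The seven dwords written above match `copy_num_dw = 7` in the buffer funcs below; note that the byte count is encoded minus one. A user-space model of the dword layout (the opcode constant is a placeholder, not the real SDMA header encoding):

```c
#include <assert.h>
#include <stdint.h>

#define FAKE_OP_COPY_LINEAR 0x0001	/* placeholder opcode */

/* Mirrors the dword order of sdma_v4_0_emit_copy_buffer(); returns
 * the number of dwords written (7, matching copy_num_dw). */
static int model_copy_linear(uint32_t *ib, uint64_t src, uint64_t dst,
			     uint32_t byte_count)
{
	int n = 0;

	ib[n++] = FAKE_OP_COPY_LINEAR;
	ib[n++] = byte_count - 1;		/* count stored minus one */
	ib[n++] = 0;				/* src/dst endian swap */
	ib[n++] = (uint32_t)src;
	ib[n++] = (uint32_t)(src >> 32);
	ib[n++] = (uint32_t)dst;
	ib[n++] = (uint32_t)(dst >> 32);
	return n;
}
```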
+
+/**
+ * sdma_v4_0_emit_fill_buffer - fill buffer using the sDMA engine
+ *
+ * @ib: indirect buffer to fill with fill commands
+ * @src_data: value to write to buffer
+ * @dst_offset: dst GPU address
+ * @byte_count: number of bytes to xfer
+ *
+ * Fill GPU buffers using the DMA engine (VEGA10).
+ */
+static void sdma_v4_0_emit_fill_buffer(struct amdgpu_ib *ib,
+				       uint32_t src_data,
+				       uint64_t dst_offset,
+				       uint32_t byte_count)
+{
+	ib->ptr[ib->length_dw++] = SDMA_PKT_HEADER_OP(SDMA_OP_CONST_FILL);
+	ib->ptr[ib->length_dw++] = lower_32_bits(dst_offset);
+	ib->ptr[ib->length_dw++] = upper_32_bits(dst_offset);
+	ib->ptr[ib->length_dw++] = src_data;
+	ib->ptr[ib->length_dw++] = byte_count - 1;
+}
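Likewise, the fill packet is five dwords, matching `fill_num_dw = 5` below. The same modelling approach (opcode again a placeholder):

```c
#include <assert.h>
#include <stdint.h>

#define FAKE_OP_CONST_FILL 0x000b	/* placeholder opcode */

/* Mirrors the dword order of sdma_v4_0_emit_fill_buffer(); returns
 * the number of dwords written (5, matching fill_num_dw). */
static int model_const_fill(uint32_t *ib, uint32_t src_data,
			    uint64_t dst, uint32_t byte_count)
{
	int n = 0;

	ib[n++] = FAKE_OP_CONST_FILL;
	ib[n++] = (uint32_t)dst;
	ib[n++] = (uint32_t)(dst >> 32);
	ib[n++] = src_data;
	ib[n++] = byte_count - 1;	/* count stored minus one */
	return n;
}
```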
+
+static const struct amdgpu_buffer_funcs sdma_v4_0_buffer_funcs = {
+	.copy_max_bytes = 0x400000,
+	.copy_num_dw = 7,
+	.emit_copy_buffer = sdma_v4_0_emit_copy_buffer,
+
+	.fill_max_bytes = 0x400000,
+	.fill_num_dw = 5,
+	.emit_fill_buffer = sdma_v4_0_emit_fill_buffer,
+};
+
+static void sdma_v4_0_set_buffer_funcs(struct amdgpu_device *adev)
+{
+	if (adev->mman.buffer_funcs == NULL) {
+		adev->mman.buffer_funcs = &sdma_v4_0_buffer_funcs;
+		adev->mman.buffer_funcs_ring = &adev->sdma.instance[0].ring;
+	}
+}
+
+static const struct amdgpu_vm_pte_funcs sdma_v4_0_vm_pte_funcs = {
+	.copy_pte = sdma_v4_0_vm_copy_pte,
+	.write_pte = sdma_v4_0_vm_write_pte,
+	.set_pte_pde = sdma_v4_0_vm_set_pte_pde,
+};
+
+static void sdma_v4_0_set_vm_pte_funcs(struct amdgpu_device *adev)
+{
+	unsigned i;
+
+	if (adev->vm_manager.vm_pte_funcs == NULL) {
+		adev->vm_manager.vm_pte_funcs = &sdma_v4_0_vm_pte_funcs;
+		for (i = 0; i < adev->sdma.num_instances; i++)
+			adev->vm_manager.vm_pte_rings[i] =
+				&adev->sdma.instance[i].ring;
+
+		adev->vm_manager.vm_pte_num_rings = adev->sdma.num_instances;
+	}
+}
+
+const struct amdgpu_ip_block_version sdma_v4_0_ip_block =
+{
+	.type = AMD_IP_BLOCK_TYPE_SDMA,
+	.major = 4,
+	.minor = 0,
+	.rev = 0,
+	.funcs = &sdma_v4_0_ip_funcs,
+};
diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.h b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.h
new file mode 100644
index 0000000..5c5a747
--- /dev/null
+++ b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.h
@@ -0,0 +1,30 @@
+/*
+ * Copyright 2016 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __SDMA_V4_0_H__
+#define __SDMA_V4_0_H__
+
+extern const struct amd_ip_funcs sdma_v4_0_ip_funcs;
+extern const struct amdgpu_ip_block_version sdma_v4_0_ip_block;
+
+#endif
-- 
2.5.5

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 101+ messages in thread

* [PATCH 048/100] drm/amdgpu: implement GFX 9.0 support
       [not found] ` <1490041835-11255-1-git-send-email-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (31 preceding siblings ...)
  2017-03-20 20:29   ` [PATCH 047/100] drm/amdgpu: add SDMA v4.0 implementation Alex Deucher
@ 2017-03-20 20:29   ` Alex Deucher
  2017-03-20 20:29   ` [PATCH 049/100] drm/amdgpu: add vega10 interrupt handler Alex Deucher
                     ` (52 subsequent siblings)
  85 siblings, 0 replies; 101+ messages in thread
From: Alex Deucher @ 2017-03-20 20:29 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Alex Deucher, Ken Wang

From: Ken Wang <Qingqing.Wang@amd.com>

Add support for gfx v9.0.

Signed-off-by: Ken Wang <Qingqing.Wang@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/Makefile   |    3 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu.h   |    2 +
 drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c | 3291 +++++++++++++++++++++++++++++++++
 drivers/gpu/drm/amd/amdgpu/gfx_v9_0.h |   35 +
 4 files changed, 3330 insertions(+), 1 deletion(-)
 create mode 100644 drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
 create mode 100644 drivers/gpu/drm/amd/amdgpu/gfx_v9_0.h

diff --git a/drivers/gpu/drm/amd/amdgpu/Makefile b/drivers/gpu/drm/amd/amdgpu/Makefile
index b5046fd..61f090f 100644
--- a/drivers/gpu/drm/amd/amdgpu/Makefile
+++ b/drivers/gpu/drm/amd/amdgpu/Makefile
@@ -70,7 +70,8 @@ amdgpu-y += \
 # add GFX block
 amdgpu-y += \
 	amdgpu_gfx.o \
-	gfx_v8_0.o
+	gfx_v8_0.o \
+	gfx_v9_0.o
 
 # add async DMA block
 amdgpu-y += \
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
index d7257b6..c453f5b 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
@@ -898,6 +898,8 @@ struct amdgpu_rlc {
 struct amdgpu_mec {
 	struct amdgpu_bo	*hpd_eop_obj;
 	u64			hpd_eop_gpu_addr;
+	struct amdgpu_bo	*mec_fw_obj;
+	u64			mec_fw_gpu_addr;
 	u32 num_pipe;
 	u32 num_mec;
 	u32 num_queue;
diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
new file mode 100644
index 0000000..bb93b0a
--- /dev/null
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
@@ -0,0 +1,3291 @@
+/*
+ * Copyright 2016 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+#include <linux/firmware.h>
+#include "drmP.h"
+#include "amdgpu.h"
+#include "amdgpu_gfx.h"
+#include "soc15.h"
+#include "soc15d.h"
+
+#include "vega10/soc15ip.h"
+#include "vega10/GC/gc_9_0_offset.h"
+#include "vega10/GC/gc_9_0_sh_mask.h"
+#include "vega10/vega10_enum.h"
+#include "vega10/HDP/hdp_4_0_offset.h"
+
+#include "soc15_common.h"
+#include "clearstate_gfx9.h"
+#include "v9_structs.h"
+
+#define GFX9_NUM_GFX_RINGS     1
+#define GFX9_NUM_COMPUTE_RINGS 8
+#define GFX9_NUM_SE		4
+#define RLCG_UCODE_LOADING_START_ADDRESS 0x2000
+
+MODULE_FIRMWARE("amdgpu/vega10_ce.bin");
+MODULE_FIRMWARE("amdgpu/vega10_pfp.bin");
+MODULE_FIRMWARE("amdgpu/vega10_me.bin");
+MODULE_FIRMWARE("amdgpu/vega10_mec.bin");
+MODULE_FIRMWARE("amdgpu/vega10_mec2.bin");
+MODULE_FIRMWARE("amdgpu/vega10_rlc.bin");
+
+static const struct amdgpu_gds_reg_offset amdgpu_gds_reg_offset[] =
+{
+	{SOC15_REG_OFFSET(GC, 0, mmGDS_VMID0_BASE), SOC15_REG_OFFSET(GC, 0, mmGDS_VMID0_SIZE),
+		SOC15_REG_OFFSET(GC, 0, mmGDS_GWS_VMID0), SOC15_REG_OFFSET(GC, 0, mmGDS_OA_VMID0)},
+	{SOC15_REG_OFFSET(GC, 0, mmGDS_VMID1_BASE), SOC15_REG_OFFSET(GC, 0, mmGDS_VMID1_SIZE),
+		SOC15_REG_OFFSET(GC, 0, mmGDS_GWS_VMID1), SOC15_REG_OFFSET(GC, 0, mmGDS_OA_VMID1)},
+	{SOC15_REG_OFFSET(GC, 0, mmGDS_VMID2_BASE), SOC15_REG_OFFSET(GC, 0, mmGDS_VMID2_SIZE),
+		SOC15_REG_OFFSET(GC, 0, mmGDS_GWS_VMID2), SOC15_REG_OFFSET(GC, 0, mmGDS_OA_VMID2)},
+	{SOC15_REG_OFFSET(GC, 0, mmGDS_VMID3_BASE), SOC15_REG_OFFSET(GC, 0, mmGDS_VMID3_SIZE),
+		SOC15_REG_OFFSET(GC, 0, mmGDS_GWS_VMID3), SOC15_REG_OFFSET(GC, 0, mmGDS_OA_VMID3)},
+	{SOC15_REG_OFFSET(GC, 0, mmGDS_VMID4_BASE), SOC15_REG_OFFSET(GC, 0, mmGDS_VMID4_SIZE),
+		SOC15_REG_OFFSET(GC, 0, mmGDS_GWS_VMID4), SOC15_REG_OFFSET(GC, 0, mmGDS_OA_VMID4)},
+	{SOC15_REG_OFFSET(GC, 0, mmGDS_VMID5_BASE), SOC15_REG_OFFSET(GC, 0, mmGDS_VMID5_SIZE),
+		SOC15_REG_OFFSET(GC, 0, mmGDS_GWS_VMID5), SOC15_REG_OFFSET(GC, 0, mmGDS_OA_VMID5)},
+	{SOC15_REG_OFFSET(GC, 0, mmGDS_VMID6_BASE), SOC15_REG_OFFSET(GC, 0, mmGDS_VMID6_SIZE),
+		SOC15_REG_OFFSET(GC, 0, mmGDS_GWS_VMID6), SOC15_REG_OFFSET(GC, 0, mmGDS_OA_VMID6)},
+	{SOC15_REG_OFFSET(GC, 0, mmGDS_VMID7_BASE), SOC15_REG_OFFSET(GC, 0, mmGDS_VMID7_SIZE),
+		SOC15_REG_OFFSET(GC, 0, mmGDS_GWS_VMID7), SOC15_REG_OFFSET(GC, 0, mmGDS_OA_VMID7)},
+	{SOC15_REG_OFFSET(GC, 0, mmGDS_VMID8_BASE), SOC15_REG_OFFSET(GC, 0, mmGDS_VMID8_SIZE),
+		SOC15_REG_OFFSET(GC, 0, mmGDS_GWS_VMID8), SOC15_REG_OFFSET(GC, 0, mmGDS_OA_VMID8)},
+	{SOC15_REG_OFFSET(GC, 0, mmGDS_VMID9_BASE), SOC15_REG_OFFSET(GC, 0, mmGDS_VMID9_SIZE),
+		SOC15_REG_OFFSET(GC, 0, mmGDS_GWS_VMID9), SOC15_REG_OFFSET(GC, 0, mmGDS_OA_VMID9)},
+	{SOC15_REG_OFFSET(GC, 0, mmGDS_VMID10_BASE), SOC15_REG_OFFSET(GC, 0, mmGDS_VMID10_SIZE),
+		SOC15_REG_OFFSET(GC, 0, mmGDS_GWS_VMID10), SOC15_REG_OFFSET(GC, 0, mmGDS_OA_VMID10)},
+	{SOC15_REG_OFFSET(GC, 0, mmGDS_VMID11_BASE), SOC15_REG_OFFSET(GC, 0, mmGDS_VMID11_SIZE),
+		SOC15_REG_OFFSET(GC, 0, mmGDS_GWS_VMID11), SOC15_REG_OFFSET(GC, 0, mmGDS_OA_VMID11)},
+	{SOC15_REG_OFFSET(GC, 0, mmGDS_VMID12_BASE), SOC15_REG_OFFSET(GC, 0, mmGDS_VMID12_SIZE),
+		SOC15_REG_OFFSET(GC, 0, mmGDS_GWS_VMID12), SOC15_REG_OFFSET(GC, 0, mmGDS_OA_VMID12)},
+	{SOC15_REG_OFFSET(GC, 0, mmGDS_VMID13_BASE), SOC15_REG_OFFSET(GC, 0, mmGDS_VMID13_SIZE),
+		SOC15_REG_OFFSET(GC, 0, mmGDS_GWS_VMID13), SOC15_REG_OFFSET(GC, 0, mmGDS_OA_VMID13)},
+	{SOC15_REG_OFFSET(GC, 0, mmGDS_VMID14_BASE), SOC15_REG_OFFSET(GC, 0, mmGDS_VMID14_SIZE),
+		SOC15_REG_OFFSET(GC, 0, mmGDS_GWS_VMID14), SOC15_REG_OFFSET(GC, 0, mmGDS_OA_VMID14)},
+	{SOC15_REG_OFFSET(GC, 0, mmGDS_VMID15_BASE), SOC15_REG_OFFSET(GC, 0, mmGDS_VMID15_SIZE),
+		SOC15_REG_OFFSET(GC, 0, mmGDS_GWS_VMID15), SOC15_REG_OFFSET(GC, 0, mmGDS_OA_VMID15)}
+};
+
+static const u32 golden_settings_gc_9_0[] =
+{
+	SOC15_REG_OFFSET(GC, 0, mmDB_DEBUG2), 0xf00ffeff, 0x00000400,
+	SOC15_REG_OFFSET(GC, 0, mmGB_GPU_ID), 0x0000000f, 0x00000000,
+	SOC15_REG_OFFSET(GC, 0, mmPA_SC_BINNER_EVENT_CNTL_3), 0x00000003, 0x82400024,
+	SOC15_REG_OFFSET(GC, 0, mmPA_SC_ENHANCE), 0x3fffffff, 0x00000001,
+	SOC15_REG_OFFSET(GC, 0, mmPA_SC_LINE_STIPPLE_STATE), 0x0000ff0f, 0x00000000,
+	SOC15_REG_OFFSET(GC, 0, mmTA_CNTL_AUX), 0xfffffeef, 0x010b0000,
+	SOC15_REG_OFFSET(GC, 0, mmTCP_CHAN_STEER_HI), 0xffffffff, 0x4a2c0e68,
+	SOC15_REG_OFFSET(GC, 0, mmTCP_CHAN_STEER_LO), 0xffffffff, 0xb5d3f197,
+	SOC15_REG_OFFSET(GC, 0, mmVGT_GS_MAX_WAVE_ID), 0x00000fff, 0x000003ff
+};
+
+static const u32 golden_settings_gc_9_0_vg10[] =
+{
+	SOC15_REG_OFFSET(GC, 0, mmCB_HW_CONTROL), 0x0000f000, 0x00012107,
+	SOC15_REG_OFFSET(GC, 0, mmCB_HW_CONTROL_3), 0x30000000, 0x10000000,
+	SOC15_REG_OFFSET(GC, 0, mmGB_ADDR_CONFIG), 0xffff77ff, 0x2a114042,
+	SOC15_REG_OFFSET(GC, 0, mmGB_ADDR_CONFIG_READ), 0xffff77ff, 0x2a114042,
+	SOC15_REG_OFFSET(GC, 0, mmPA_SC_ENHANCE_1), 0x00008000, 0x00048000,
+	SOC15_REG_OFFSET(GC, 0, mmRMI_UTCL1_CNTL2), 0x00030000, 0x00020000,
+	SOC15_REG_OFFSET(GC, 0, mmTD_CNTL), 0x00001800, 0x00000800
+};
+
+#define VEGA10_GB_ADDR_CONFIG_GOLDEN 0x2a114042
+
+static void gfx_v9_0_set_ring_funcs(struct amdgpu_device *adev);
+static void gfx_v9_0_set_irq_funcs(struct amdgpu_device *adev);
+static void gfx_v9_0_set_gds_init(struct amdgpu_device *adev);
+static void gfx_v9_0_set_rlc_funcs(struct amdgpu_device *adev);
+static int gfx_v9_0_get_cu_info(struct amdgpu_device *adev,
+				struct amdgpu_cu_info *cu_info);
+static uint64_t gfx_v9_0_get_gpu_clock_counter(struct amdgpu_device *adev);
+static void gfx_v9_0_select_se_sh(struct amdgpu_device *adev, u32 se_num, u32 sh_num, u32 instance);
+
+static void gfx_v9_0_init_golden_registers(struct amdgpu_device *adev)
+{
+	switch (adev->asic_type) {
+	case CHIP_VEGA10:
+		amdgpu_program_register_sequence(adev,
+						 golden_settings_gc_9_0,
+						 (const u32)ARRAY_SIZE(golden_settings_gc_9_0));
+		amdgpu_program_register_sequence(adev,
+						 golden_settings_gc_9_0_vg10,
+						 (const u32)ARRAY_SIZE(golden_settings_gc_9_0_vg10));
+		break;
+	default:
+		break;
+	}
+}
+
+static void gfx_v9_0_scratch_init(struct amdgpu_device *adev)
+{
+	adev->gfx.scratch.num_reg = 7;
+	adev->gfx.scratch.reg_base = SOC15_REG_OFFSET(GC, 0, mmSCRATCH_REG0);
+	adev->gfx.scratch.free_mask = (1u << adev->gfx.scratch.num_reg) - 1;
+}
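`gfx_v9_0_scratch_init()` seeds the scratch-register allocator's free mask with one bit per register: `(1u << n) - 1` sets exactly the low `n` bits. A one-line sketch of that arithmetic:

```c
#include <assert.h>

/* One free bit per scratch register, as in gfx_v9_0_scratch_init(). */
static unsigned int scratch_free_mask(unsigned int num_reg)
{
	return (1u << num_reg) - 1;
}
```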
+
+static void gfx_v9_0_write_data_to_reg(struct amdgpu_ring *ring, int eng_sel,
+				       bool wc, uint32_t reg, uint32_t val)
+{
+	amdgpu_ring_write(ring, PACKET3(PACKET3_WRITE_DATA, 3));
+	amdgpu_ring_write(ring, WRITE_DATA_ENGINE_SEL(eng_sel) |
+				WRITE_DATA_DST_SEL(0) |
+				(wc ? WR_CONFIRM : 0));
+	amdgpu_ring_write(ring, reg);
+	amdgpu_ring_write(ring, 0);
+	amdgpu_ring_write(ring, val);
+}
+
+static void gfx_v9_0_wait_reg_mem(struct amdgpu_ring *ring, int eng_sel,
+				  int mem_space, int opt, uint32_t addr0,
+				  uint32_t addr1, uint32_t ref, uint32_t mask,
+				  uint32_t inv)
+{
+	amdgpu_ring_write(ring, PACKET3(PACKET3_WAIT_REG_MEM, 5));
+	amdgpu_ring_write(ring,
+				 /* memory (1) or register (0) */
+				 (WAIT_REG_MEM_MEM_SPACE(mem_space) |
+				 WAIT_REG_MEM_OPERATION(opt) | /* wait */
+				 WAIT_REG_MEM_FUNCTION(3) |  /* equal */
+				 WAIT_REG_MEM_ENGINE(eng_sel)));
+
+	if (mem_space)
+		BUG_ON(addr0 & 0x3); /* Dword align */
+	amdgpu_ring_write(ring, addr0);
+	amdgpu_ring_write(ring, addr1);
+	amdgpu_ring_write(ring, ref);
+	amdgpu_ring_write(ring, mask);
+	amdgpu_ring_write(ring, inv); /* poll interval */
+}
+
+static int gfx_v9_0_ring_test_ring(struct amdgpu_ring *ring)
+{
+	struct amdgpu_device *adev = ring->adev;
+	uint32_t scratch;
+	uint32_t tmp = 0;
+	unsigned i;
+	int r;
+
+	r = amdgpu_gfx_scratch_get(adev, &scratch);
+	if (r) {
+		DRM_ERROR("amdgpu: cp failed to get scratch reg (%d).\n", r);
+		return r;
+	}
+	WREG32(scratch, 0xCAFEDEAD);
+	r = amdgpu_ring_alloc(ring, 3);
+	if (r) {
+		DRM_ERROR("amdgpu: cp failed to lock ring %d (%d).\n",
+			  ring->idx, r);
+		amdgpu_gfx_scratch_free(adev, scratch);
+		return r;
+	}
+	amdgpu_ring_write(ring, PACKET3(PACKET3_SET_UCONFIG_REG, 1));
+	amdgpu_ring_write(ring, (scratch - PACKET3_SET_UCONFIG_REG_START));
+	amdgpu_ring_write(ring, 0xDEADBEEF);
+	amdgpu_ring_commit(ring);
+
+	for (i = 0; i < adev->usec_timeout; i++) {
+		tmp = RREG32(scratch);
+		if (tmp == 0xDEADBEEF)
+			break;
+		DRM_UDELAY(1);
+	}
+	if (i < adev->usec_timeout) {
+		DRM_INFO("ring test on %d succeeded in %d usecs\n",
+			 ring->idx, i);
+	} else {
+		DRM_ERROR("amdgpu: ring %d test failed (scratch(0x%04X)=0x%08X)\n",
+			  ring->idx, scratch, tmp);
+		r = -EINVAL;
+	}
+	amdgpu_gfx_scratch_free(adev, scratch);
+	return r;
+}
+
+static int gfx_v9_0_ring_test_ib(struct amdgpu_ring *ring, long timeout)
+{
+	struct amdgpu_device *adev = ring->adev;
+	struct amdgpu_ib ib;
+	struct fence *f = NULL;
+	uint32_t scratch;
+	uint32_t tmp = 0;
+	long r;
+
+	r = amdgpu_gfx_scratch_get(adev, &scratch);
+	if (r) {
+		DRM_ERROR("amdgpu: failed to get scratch reg (%ld).\n", r);
+		return r;
+	}
+	WREG32(scratch, 0xCAFEDEAD);
+	memset(&ib, 0, sizeof(ib));
+	r = amdgpu_ib_get(adev, NULL, 256, &ib);
+	if (r) {
+		DRM_ERROR("amdgpu: failed to get ib (%ld).\n", r);
+		goto err1;
+	}
+	ib.ptr[0] = PACKET3(PACKET3_SET_UCONFIG_REG, 1);
+	ib.ptr[1] = ((scratch - PACKET3_SET_UCONFIG_REG_START));
+	ib.ptr[2] = 0xDEADBEEF;
+	ib.length_dw = 3;
+
+	r = amdgpu_ib_schedule(ring, 1, &ib, NULL, &f);
+	if (r)
+		goto err2;
+
+	r = fence_wait_timeout(f, false, timeout);
+	if (r == 0) {
+		DRM_ERROR("amdgpu: IB test timed out.\n");
+		r = -ETIMEDOUT;
+		goto err2;
+	} else if (r < 0) {
+		DRM_ERROR("amdgpu: fence wait failed (%ld).\n", r);
+		goto err2;
+	}
+	tmp = RREG32(scratch);
+	if (tmp == 0xDEADBEEF) {
+		DRM_INFO("ib test on ring %d succeeded\n", ring->idx);
+		r = 0;
+	} else {
+		DRM_ERROR("amdgpu: ib test failed (scratch(0x%04X)=0x%08X)\n",
+			  scratch, tmp);
+		r = -EINVAL;
+	}
+err2:
+	amdgpu_ib_free(adev, &ib, NULL);
+	fence_put(f);
+err1:
+	amdgpu_gfx_scratch_free(adev, scratch);
+	return r;
+}
+
+static int gfx_v9_0_init_microcode(struct amdgpu_device *adev)
+{
+	const char *chip_name;
+	char fw_name[30];
+	int err;
+	struct amdgpu_firmware_info *info = NULL;
+	const struct common_firmware_header *header = NULL;
+	const struct gfx_firmware_header_v1_0 *cp_hdr;
+
+	DRM_DEBUG("\n");
+
+	switch (adev->asic_type) {
+	case CHIP_VEGA10:
+		chip_name = "vega10";
+		break;
+	default:
+		BUG();
+	}
+
+	snprintf(fw_name, sizeof(fw_name), "amdgpu/%s_pfp.bin", chip_name);
+	err = request_firmware(&adev->gfx.pfp_fw, fw_name, adev->dev);
+	if (err)
+		goto out;
+	err = amdgpu_ucode_validate(adev->gfx.pfp_fw);
+	if (err)
+		goto out;
+	cp_hdr = (const struct gfx_firmware_header_v1_0 *)adev->gfx.pfp_fw->data;
+	adev->gfx.pfp_fw_version = le32_to_cpu(cp_hdr->header.ucode_version);
+	adev->gfx.pfp_feature_version = le32_to_cpu(cp_hdr->ucode_feature_version);
+
+	snprintf(fw_name, sizeof(fw_name), "amdgpu/%s_me.bin", chip_name);
+	err = request_firmware(&adev->gfx.me_fw, fw_name, adev->dev);
+	if (err)
+		goto out;
+	err = amdgpu_ucode_validate(adev->gfx.me_fw);
+	if (err)
+		goto out;
+	cp_hdr = (const struct gfx_firmware_header_v1_0 *)adev->gfx.me_fw->data;
+	adev->gfx.me_fw_version = le32_to_cpu(cp_hdr->header.ucode_version);
+	adev->gfx.me_feature_version = le32_to_cpu(cp_hdr->ucode_feature_version);
+
+	snprintf(fw_name, sizeof(fw_name), "amdgpu/%s_ce.bin", chip_name);
+	err = request_firmware(&adev->gfx.ce_fw, fw_name, adev->dev);
+	if (err)
+		goto out;
+	err = amdgpu_ucode_validate(adev->gfx.ce_fw);
+	if (err)
+		goto out;
+	cp_hdr = (const struct gfx_firmware_header_v1_0 *)adev->gfx.ce_fw->data;
+	adev->gfx.ce_fw_version = le32_to_cpu(cp_hdr->header.ucode_version);
+	adev->gfx.ce_feature_version = le32_to_cpu(cp_hdr->ucode_feature_version);
+
+	snprintf(fw_name, sizeof(fw_name), "amdgpu/%s_rlc.bin", chip_name);
+	err = request_firmware(&adev->gfx.rlc_fw, fw_name, adev->dev);
+	if (err)
+		goto out;
+	err = amdgpu_ucode_validate(adev->gfx.rlc_fw);
+	if (err)
+		goto out;
+	cp_hdr = (const struct gfx_firmware_header_v1_0 *)adev->gfx.rlc_fw->data;
+	adev->gfx.rlc_fw_version = le32_to_cpu(cp_hdr->header.ucode_version);
+	adev->gfx.rlc_feature_version = le32_to_cpu(cp_hdr->ucode_feature_version);
+
+	snprintf(fw_name, sizeof(fw_name), "amdgpu/%s_mec.bin", chip_name);
+	err = request_firmware(&adev->gfx.mec_fw, fw_name, adev->dev);
+	if (err)
+		goto out;
+	err = amdgpu_ucode_validate(adev->gfx.mec_fw);
+	if (err)
+		goto out;
+	cp_hdr = (const struct gfx_firmware_header_v1_0 *)adev->gfx.mec_fw->data;
+	adev->gfx.mec_fw_version = le32_to_cpu(cp_hdr->header.ucode_version);
+	adev->gfx.mec_feature_version = le32_to_cpu(cp_hdr->ucode_feature_version);
+
+	snprintf(fw_name, sizeof(fw_name), "amdgpu/%s_mec2.bin", chip_name);
+	err = request_firmware(&adev->gfx.mec2_fw, fw_name, adev->dev);
+	if (!err) {
+		err = amdgpu_ucode_validate(adev->gfx.mec2_fw);
+		if (err)
+			goto out;
+		cp_hdr = (const struct gfx_firmware_header_v1_0 *)
+			adev->gfx.mec2_fw->data;
+		adev->gfx.mec2_fw_version =
+			le32_to_cpu(cp_hdr->header.ucode_version);
+		adev->gfx.mec2_feature_version =
+			le32_to_cpu(cp_hdr->ucode_feature_version);
+	} else {
+		err = 0;
+		adev->gfx.mec2_fw = NULL;
+	}
+
+	if (adev->firmware.load_type == AMDGPU_FW_LOAD_PSP) {
+		info = &adev->firmware.ucode[AMDGPU_UCODE_ID_CP_PFP];
+		info->ucode_id = AMDGPU_UCODE_ID_CP_PFP;
+		info->fw = adev->gfx.pfp_fw;
+		header = (const struct common_firmware_header *)info->fw->data;
+		adev->firmware.fw_size +=
+			ALIGN(le32_to_cpu(header->ucode_size_bytes), PAGE_SIZE);
+
+		info = &adev->firmware.ucode[AMDGPU_UCODE_ID_CP_ME];
+		info->ucode_id = AMDGPU_UCODE_ID_CP_ME;
+		info->fw = adev->gfx.me_fw;
+		header = (const struct common_firmware_header *)info->fw->data;
+		adev->firmware.fw_size +=
+			ALIGN(le32_to_cpu(header->ucode_size_bytes), PAGE_SIZE);
+
+		info = &adev->firmware.ucode[AMDGPU_UCODE_ID_CP_CE];
+		info->ucode_id = AMDGPU_UCODE_ID_CP_CE;
+		info->fw = adev->gfx.ce_fw;
+		header = (const struct common_firmware_header *)info->fw->data;
+		adev->firmware.fw_size +=
+			ALIGN(le32_to_cpu(header->ucode_size_bytes), PAGE_SIZE);
+
+		info = &adev->firmware.ucode[AMDGPU_UCODE_ID_RLC_G];
+		info->ucode_id = AMDGPU_UCODE_ID_RLC_G;
+		info->fw = adev->gfx.rlc_fw;
+		header = (const struct common_firmware_header *)info->fw->data;
+		adev->firmware.fw_size +=
+			ALIGN(le32_to_cpu(header->ucode_size_bytes), PAGE_SIZE);
+
+		info = &adev->firmware.ucode[AMDGPU_UCODE_ID_CP_MEC1];
+		info->ucode_id = AMDGPU_UCODE_ID_CP_MEC1;
+		info->fw = adev->gfx.mec_fw;
+		header = (const struct common_firmware_header *)info->fw->data;
+		cp_hdr = (const struct gfx_firmware_header_v1_0 *)info->fw->data;
+		adev->firmware.fw_size +=
+			ALIGN(le32_to_cpu(header->ucode_size_bytes) - le32_to_cpu(cp_hdr->jt_size) * 4, PAGE_SIZE);
+
+		info = &adev->firmware.ucode[AMDGPU_UCODE_ID_CP_MEC1_JT];
+		info->ucode_id = AMDGPU_UCODE_ID_CP_MEC1_JT;
+		info->fw = adev->gfx.mec_fw;
+		adev->firmware.fw_size +=
+			ALIGN(le32_to_cpu(cp_hdr->jt_size) * 4, PAGE_SIZE);
+
+		if (adev->gfx.mec2_fw) {
+			info = &adev->firmware.ucode[AMDGPU_UCODE_ID_CP_MEC2];
+			info->ucode_id = AMDGPU_UCODE_ID_CP_MEC2;
+			info->fw = adev->gfx.mec2_fw;
+			header = (const struct common_firmware_header *)info->fw->data;
+			cp_hdr = (const struct gfx_firmware_header_v1_0 *)info->fw->data;
+			adev->firmware.fw_size +=
+				ALIGN(le32_to_cpu(header->ucode_size_bytes) - le32_to_cpu(cp_hdr->jt_size) * 4, PAGE_SIZE);
+			info = &adev->firmware.ucode[AMDGPU_UCODE_ID_CP_MEC2_JT];
+			info->ucode_id = AMDGPU_UCODE_ID_CP_MEC2_JT;
+			info->fw = adev->gfx.mec2_fw;
+			adev->firmware.fw_size +=
+				ALIGN(le32_to_cpu(cp_hdr->jt_size) * 4, PAGE_SIZE);
+		}
+
+	}
+
+out:
+	if (err) {
+		dev_err(adev->dev,
+			"gfx9: Failed to load firmware \"%s\"\n",
+			fw_name);
+		release_firmware(adev->gfx.pfp_fw);
+		adev->gfx.pfp_fw = NULL;
+		release_firmware(adev->gfx.me_fw);
+		adev->gfx.me_fw = NULL;
+		release_firmware(adev->gfx.ce_fw);
+		adev->gfx.ce_fw = NULL;
+		release_firmware(adev->gfx.rlc_fw);
+		adev->gfx.rlc_fw = NULL;
+		release_firmware(adev->gfx.mec_fw);
+		adev->gfx.mec_fw = NULL;
+		release_firmware(adev->gfx.mec2_fw);
+		adev->gfx.mec2_fw = NULL;
+	}
+	return err;
+}
+
+static void gfx_v9_0_mec_fini(struct amdgpu_device *adev)
+{
+	int r;
+
+	if (adev->gfx.mec.hpd_eop_obj) {
+		r = amdgpu_bo_reserve(adev->gfx.mec.hpd_eop_obj, false);
+		if (unlikely(r != 0))
+			dev_warn(adev->dev, "(%d) reserve HPD EOP bo failed\n", r);
+		amdgpu_bo_unpin(adev->gfx.mec.hpd_eop_obj);
+		amdgpu_bo_unreserve(adev->gfx.mec.hpd_eop_obj);
+
+		amdgpu_bo_unref(&adev->gfx.mec.hpd_eop_obj);
+		adev->gfx.mec.hpd_eop_obj = NULL;
+	}
+	if (adev->gfx.mec.mec_fw_obj) {
+		r = amdgpu_bo_reserve(adev->gfx.mec.mec_fw_obj, false);
+		if (unlikely(r != 0))
+			dev_warn(adev->dev, "(%d) reserve mec firmware bo failed\n", r);
+		amdgpu_bo_unpin(adev->gfx.mec.mec_fw_obj);
+		amdgpu_bo_unreserve(adev->gfx.mec.mec_fw_obj);
+
+		amdgpu_bo_unref(&adev->gfx.mec.mec_fw_obj);
+		adev->gfx.mec.mec_fw_obj = NULL;
+	}
+}
+
+#define MEC_HPD_SIZE 2048
+
+static int gfx_v9_0_mec_init(struct amdgpu_device *adev)
+{
+	int r;
+	u32 *hpd;
+	const __le32 *fw_data;
+	unsigned fw_size;
+	u32 *fw;
+
+	const struct gfx_firmware_header_v1_0 *mec_hdr;
+
+	/*
+	 * we assign only 1 pipe because all other pipes will
+	 * be handled by KFD
+	 */
+	adev->gfx.mec.num_mec = 1;
+	adev->gfx.mec.num_pipe = 1;
+	adev->gfx.mec.num_queue = adev->gfx.mec.num_mec * adev->gfx.mec.num_pipe * 8;
+
+	if (adev->gfx.mec.hpd_eop_obj == NULL) {
+		r = amdgpu_bo_create(adev,
+				     adev->gfx.mec.num_queue * MEC_HPD_SIZE,
+				     PAGE_SIZE, true,
+				     AMDGPU_GEM_DOMAIN_GTT, 0, NULL, NULL,
+				     &adev->gfx.mec.hpd_eop_obj);
+		if (r) {
+			dev_warn(adev->dev, "(%d) create HPD EOP bo failed\n", r);
+			return r;
+		}
+	}
+
+	r = amdgpu_bo_reserve(adev->gfx.mec.hpd_eop_obj, false);
+	if (unlikely(r != 0)) {
+		gfx_v9_0_mec_fini(adev);
+		return r;
+	}
+	r = amdgpu_bo_pin(adev->gfx.mec.hpd_eop_obj, AMDGPU_GEM_DOMAIN_GTT,
+			  &adev->gfx.mec.hpd_eop_gpu_addr);
+	if (r) {
+		dev_warn(adev->dev, "(%d) pin HPD EOP bo failed\n", r);
+		gfx_v9_0_mec_fini(adev);
+		return r;
+	}
+	r = amdgpu_bo_kmap(adev->gfx.mec.hpd_eop_obj, (void **)&hpd);
+	if (r) {
+		dev_warn(adev->dev, "(%d) map HPD EOP bo failed\n", r);
+		gfx_v9_0_mec_fini(adev);
+		return r;
+	}
+
+	memset(hpd, 0, adev->gfx.mec.hpd_eop_obj->tbo.mem.size);
+
+	amdgpu_bo_kunmap(adev->gfx.mec.hpd_eop_obj);
+	amdgpu_bo_unreserve(adev->gfx.mec.hpd_eop_obj);
+
+	mec_hdr = (const struct gfx_firmware_header_v1_0 *)adev->gfx.mec_fw->data;
+
+	fw_data = (const __le32 *)
+		(adev->gfx.mec_fw->data +
+		 le32_to_cpu(mec_hdr->header.ucode_array_offset_bytes));
+	fw_size = le32_to_cpu(mec_hdr->header.ucode_size_bytes);
+
+	if (adev->gfx.mec.mec_fw_obj == NULL) {
+		r = amdgpu_bo_create(adev,
+			le32_to_cpu(mec_hdr->header.ucode_size_bytes),
+			PAGE_SIZE, true,
+			AMDGPU_GEM_DOMAIN_GTT, 0, NULL, NULL,
+			&adev->gfx.mec.mec_fw_obj);
+		if (r) {
+			dev_warn(adev->dev, "(%d) create mec firmware bo failed\n", r);
+			return r;
+		}
+	}
+
+	r = amdgpu_bo_reserve(adev->gfx.mec.mec_fw_obj, false);
+	if (unlikely(r != 0)) {
+		gfx_v9_0_mec_fini(adev);
+		return r;
+	}
+	r = amdgpu_bo_pin(adev->gfx.mec.mec_fw_obj, AMDGPU_GEM_DOMAIN_GTT,
+			&adev->gfx.mec.mec_fw_gpu_addr);
+	if (r) {
+		dev_warn(adev->dev, "(%d) pin mec firmware bo failed\n", r);
+		gfx_v9_0_mec_fini(adev);
+		return r;
+	}
+	r = amdgpu_bo_kmap(adev->gfx.mec.mec_fw_obj, (void **)&fw);
+	if (r) {
+		dev_warn(adev->dev, "(%d) map firmware bo failed\n", r);
+		gfx_v9_0_mec_fini(adev);
+		return r;
+	}
+	memcpy(fw, fw_data, fw_size);
+
+	amdgpu_bo_kunmap(adev->gfx.mec.mec_fw_obj);
+	amdgpu_bo_unreserve(adev->gfx.mec.mec_fw_obj);
+
+	return 0;
+}
+
+static uint32_t wave_read_ind(struct amdgpu_device *adev, uint32_t simd, uint32_t wave, uint32_t address)
+{
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmSQ_IND_INDEX),
+		(wave << SQ_IND_INDEX__WAVE_ID__SHIFT) |
+		(simd << SQ_IND_INDEX__SIMD_ID__SHIFT) |
+		(address << SQ_IND_INDEX__INDEX__SHIFT) |
+		(SQ_IND_INDEX__FORCE_READ_MASK));
+	return RREG32(SOC15_REG_OFFSET(GC, 0, mmSQ_IND_DATA));
+}
+
+static void wave_read_regs(struct amdgpu_device *adev, uint32_t simd,
+			   uint32_t wave, uint32_t thread,
+			   uint32_t regno, uint32_t num, uint32_t *out)
+{
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmSQ_IND_INDEX),
+		(wave << SQ_IND_INDEX__WAVE_ID__SHIFT) |
+		(simd << SQ_IND_INDEX__SIMD_ID__SHIFT) |
+		(regno << SQ_IND_INDEX__INDEX__SHIFT) |
+		(thread << SQ_IND_INDEX__THREAD_ID__SHIFT) |
+		(SQ_IND_INDEX__FORCE_READ_MASK) |
+		(SQ_IND_INDEX__AUTO_INCR_MASK));
+	while (num--)
+		*(out++) = RREG32(SOC15_REG_OFFSET(GC, 0, mmSQ_IND_DATA));
+}
+
+static void gfx_v9_0_read_wave_data(struct amdgpu_device *adev, uint32_t simd, uint32_t wave, uint32_t *dst, int *no_fields)
+{
+	/* type 1 wave data */
+	dst[(*no_fields)++] = 1;
+	dst[(*no_fields)++] = wave_read_ind(adev, simd, wave, ixSQ_WAVE_STATUS);
+	dst[(*no_fields)++] = wave_read_ind(adev, simd, wave, ixSQ_WAVE_PC_LO);
+	dst[(*no_fields)++] = wave_read_ind(adev, simd, wave, ixSQ_WAVE_PC_HI);
+	dst[(*no_fields)++] = wave_read_ind(adev, simd, wave, ixSQ_WAVE_EXEC_LO);
+	dst[(*no_fields)++] = wave_read_ind(adev, simd, wave, ixSQ_WAVE_EXEC_HI);
+	dst[(*no_fields)++] = wave_read_ind(adev, simd, wave, ixSQ_WAVE_HW_ID);
+	dst[(*no_fields)++] = wave_read_ind(adev, simd, wave, ixSQ_WAVE_INST_DW0);
+	dst[(*no_fields)++] = wave_read_ind(adev, simd, wave, ixSQ_WAVE_INST_DW1);
+	dst[(*no_fields)++] = wave_read_ind(adev, simd, wave, ixSQ_WAVE_GPR_ALLOC);
+	dst[(*no_fields)++] = wave_read_ind(adev, simd, wave, ixSQ_WAVE_LDS_ALLOC);
+	dst[(*no_fields)++] = wave_read_ind(adev, simd, wave, ixSQ_WAVE_TRAPSTS);
+	dst[(*no_fields)++] = wave_read_ind(adev, simd, wave, ixSQ_WAVE_IB_STS);
+	dst[(*no_fields)++] = wave_read_ind(adev, simd, wave, ixSQ_WAVE_IB_DBG0);
+	dst[(*no_fields)++] = wave_read_ind(adev, simd, wave, ixSQ_WAVE_M0);
+}
+
+static void gfx_v9_0_read_wave_sgprs(struct amdgpu_device *adev, uint32_t simd,
+				     uint32_t wave, uint32_t start,
+				     uint32_t size, uint32_t *dst)
+{
+	wave_read_regs(
+		adev, simd, wave, 0,
+		start + SQIND_WAVE_SGPRS_OFFSET, size, dst);
+}
+
+static const struct amdgpu_gfx_funcs gfx_v9_0_gfx_funcs = {
+	.get_gpu_clock_counter = &gfx_v9_0_get_gpu_clock_counter,
+	.select_se_sh = &gfx_v9_0_select_se_sh,
+	.read_wave_data = &gfx_v9_0_read_wave_data,
+	.read_wave_sgprs = &gfx_v9_0_read_wave_sgprs,
+};
+
+static void gfx_v9_0_gpu_early_init(struct amdgpu_device *adev)
+{
+	u32 gb_addr_config;
+
+	adev->gfx.funcs = &gfx_v9_0_gfx_funcs;
+
+	switch (adev->asic_type) {
+	case CHIP_VEGA10:
+		adev->gfx.config.max_shader_engines = 4;
+		adev->gfx.config.max_tile_pipes = 8; //??
+		adev->gfx.config.max_cu_per_sh = 16;
+		adev->gfx.config.max_sh_per_se = 1;
+		adev->gfx.config.max_backends_per_se = 4;
+		adev->gfx.config.max_texture_channel_caches = 16;
+		adev->gfx.config.max_gprs = 256;
+		adev->gfx.config.max_gs_threads = 32;
+		adev->gfx.config.max_hw_contexts = 8;
+
+		adev->gfx.config.sc_prim_fifo_size_frontend = 0x20;
+		adev->gfx.config.sc_prim_fifo_size_backend = 0x100;
+		adev->gfx.config.sc_hiz_tile_fifo_size = 0x30;
+		adev->gfx.config.sc_earlyz_tile_fifo_size = 0x4C0;
+		gb_addr_config = VEGA10_GB_ADDR_CONFIG_GOLDEN;
+		break;
+	default:
+		BUG();
+		break;
+	}
+
+	adev->gfx.config.gb_addr_config = gb_addr_config;
+
+	adev->gfx.config.gb_addr_config_fields.num_pipes = 1 <<
+			REG_GET_FIELD(
+					adev->gfx.config.gb_addr_config,
+					GB_ADDR_CONFIG,
+					NUM_PIPES);
+	adev->gfx.config.gb_addr_config_fields.num_banks = 1 <<
+			REG_GET_FIELD(
+					adev->gfx.config.gb_addr_config,
+					GB_ADDR_CONFIG,
+					NUM_BANKS);
+	adev->gfx.config.gb_addr_config_fields.max_compress_frags = 1 <<
+			REG_GET_FIELD(
+					adev->gfx.config.gb_addr_config,
+					GB_ADDR_CONFIG,
+					MAX_COMPRESSED_FRAGS);
+	adev->gfx.config.gb_addr_config_fields.num_rb_per_se = 1 <<
+			REG_GET_FIELD(
+					adev->gfx.config.gb_addr_config,
+					GB_ADDR_CONFIG,
+					NUM_RB_PER_SE);
+	adev->gfx.config.gb_addr_config_fields.num_se = 1 <<
+			REG_GET_FIELD(
+					adev->gfx.config.gb_addr_config,
+					GB_ADDR_CONFIG,
+					NUM_SHADER_ENGINES);
+	adev->gfx.config.gb_addr_config_fields.pipe_interleave_size = 1 << (8 +
+			REG_GET_FIELD(
+					adev->gfx.config.gb_addr_config,
+					GB_ADDR_CONFIG,
+					PIPE_INTERLEAVE_SIZE));
+}
+
+static int gfx_v9_0_ngg_create_buf(struct amdgpu_device *adev,
+				   struct amdgpu_ngg_buf *ngg_buf,
+				   int size_se,
+				   int default_size_se)
+{
+	int r;
+
+	if (size_se < 0) {
+		dev_err(adev->dev, "Buffer size is invalid: %d\n", size_se);
+		return -EINVAL;
+	}
+	size_se = size_se ? size_se : default_size_se;
+
+	ngg_buf->size = size_se * GFX9_NUM_SE;
+	r = amdgpu_bo_create_kernel(adev, ngg_buf->size,
+				    PAGE_SIZE, AMDGPU_GEM_DOMAIN_VRAM,
+				    &ngg_buf->bo,
+				    &ngg_buf->gpu_addr,
+				    NULL);
+	if (r) {
+		dev_err(adev->dev, "(%d) failed to create NGG buffer\n", r);
+		return r;
+	}
+	ngg_buf->bo_size = amdgpu_bo_size(ngg_buf->bo);
+
+	return r;
+}
+
+static int gfx_v9_0_ngg_fini(struct amdgpu_device *adev)
+{
+	int i;
+
+	for (i = 0; i < NGG_BUF_MAX; i++)
+		amdgpu_bo_free_kernel(&adev->gfx.ngg.buf[i].bo,
+				      &adev->gfx.ngg.buf[i].gpu_addr,
+				      NULL);
+
+	memset(&adev->gfx.ngg.buf[0], 0,
+			sizeof(struct amdgpu_ngg_buf) * NGG_BUF_MAX);
+
+	adev->gfx.ngg.init = false;
+
+	return 0;
+}
+
+static int gfx_v9_0_ngg_init(struct amdgpu_device *adev)
+{
+	int r;
+
+	if (!amdgpu_ngg || adev->gfx.ngg.init)
+		return 0;
+
+	/* GDS reserve memory: 64 bytes alignment */
+	adev->gfx.ngg.gds_reserve_size = ALIGN(5 * 4, 0x40);
+	adev->gds.mem.total_size -= adev->gfx.ngg.gds_reserve_size;
+	adev->gds.mem.gfx_partition_size -= adev->gfx.ngg.gds_reserve_size;
+	adev->gfx.ngg.gds_reserve_addr = amdgpu_gds_reg_offset[0].mem_base;
+	adev->gfx.ngg.gds_reserve_addr += adev->gds.mem.gfx_partition_size;
+
+	/* Primitive Buffer */
+	r = gfx_v9_0_ngg_create_buf(adev, &adev->gfx.ngg.buf[PRIM],
+				    amdgpu_prim_buf_per_se,
+				    64 * 1024);
+	if (r) {
+		dev_err(adev->dev, "Failed to create Primitive Buffer\n");
+		goto err;
+	}
+
+	/* Position Buffer */
+	r = gfx_v9_0_ngg_create_buf(adev, &adev->gfx.ngg.buf[POS],
+				    amdgpu_pos_buf_per_se,
+				    256 * 1024);
+	if (r) {
+		dev_err(adev->dev, "Failed to create Position Buffer\n");
+		goto err;
+	}
+
+	/* Control Sideband */
+	r = gfx_v9_0_ngg_create_buf(adev, &adev->gfx.ngg.buf[CNTL],
+				    amdgpu_cntl_sb_buf_per_se,
+				    256);
+	if (r) {
+		dev_err(adev->dev, "Failed to create Control Sideband Buffer\n");
+		goto err;
+	}
+
+	/* Parameter Cache, not created by default */
+	if (amdgpu_param_buf_per_se <= 0)
+		goto out;
+
+	r = gfx_v9_0_ngg_create_buf(adev, &adev->gfx.ngg.buf[PARAM],
+				    amdgpu_param_buf_per_se,
+				    512 * 1024);
+	if (r) {
+		dev_err(adev->dev, "Failed to create Parameter Cache\n");
+		goto err;
+	}
+
+out:
+	adev->gfx.ngg.init = true;
+	return 0;
+err:
+	gfx_v9_0_ngg_fini(adev);
+	return r;
+}
+
+static int gfx_v9_0_ngg_en(struct amdgpu_device *adev)
+{
+	struct amdgpu_ring *ring = &adev->gfx.gfx_ring[0];
+	int r;
+	u32 data;
+	u32 size;
+	u32 base;
+
+	if (!amdgpu_ngg)
+		return 0;
+
+	/* Program buffer size */
+	data = 0;
+	size = adev->gfx.ngg.buf[PRIM].size / 256;
+	data = REG_SET_FIELD(data, WD_BUF_RESOURCE_1, INDEX_BUF_SIZE, size);
+
+	size = adev->gfx.ngg.buf[POS].size / 256;
+	data = REG_SET_FIELD(data, WD_BUF_RESOURCE_1, POS_BUF_SIZE, size);
+
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmWD_BUF_RESOURCE_1), data);
+
+	data = 0;
+	size = adev->gfx.ngg.buf[CNTL].size / 256;
+	data = REG_SET_FIELD(data, WD_BUF_RESOURCE_2, CNTL_SB_BUF_SIZE, size);
+
+	size = adev->gfx.ngg.buf[PARAM].size / 1024;
+	data = REG_SET_FIELD(data, WD_BUF_RESOURCE_2, PARAM_BUF_SIZE, size);
+
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmWD_BUF_RESOURCE_2), data);
+
+	/* Program buffer base address */
+	base = lower_32_bits(adev->gfx.ngg.buf[PRIM].gpu_addr);
+	data = REG_SET_FIELD(0, WD_INDEX_BUF_BASE, BASE, base);
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmWD_INDEX_BUF_BASE), data);
+
+	base = upper_32_bits(adev->gfx.ngg.buf[PRIM].gpu_addr);
+	data = REG_SET_FIELD(0, WD_INDEX_BUF_BASE_HI, BASE_HI, base);
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmWD_INDEX_BUF_BASE_HI), data);
+
+	base = lower_32_bits(adev->gfx.ngg.buf[POS].gpu_addr);
+	data = REG_SET_FIELD(0, WD_POS_BUF_BASE, BASE, base);
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmWD_POS_BUF_BASE), data);
+
+	base = upper_32_bits(adev->gfx.ngg.buf[POS].gpu_addr);
+	data = REG_SET_FIELD(0, WD_POS_BUF_BASE_HI, BASE_HI, base);
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmWD_POS_BUF_BASE_HI), data);
+
+	base = lower_32_bits(adev->gfx.ngg.buf[CNTL].gpu_addr);
+	data = REG_SET_FIELD(0, WD_CNTL_SB_BUF_BASE, BASE, base);
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmWD_CNTL_SB_BUF_BASE), data);
+
+	base = upper_32_bits(adev->gfx.ngg.buf[CNTL].gpu_addr);
+	data = REG_SET_FIELD(0, WD_CNTL_SB_BUF_BASE_HI, BASE_HI, base);
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmWD_CNTL_SB_BUF_BASE_HI), data);
+
+	/* Clear GDS reserved memory */
+	r = amdgpu_ring_alloc(ring, 17);
+	if (r) {
+		DRM_ERROR("amdgpu: NGG failed to lock ring %d (%d).\n",
+			  ring->idx, r);
+		return r;
+	}
+
+	gfx_v9_0_write_data_to_reg(ring, 0, false,
+				   amdgpu_gds_reg_offset[0].mem_size,
+				   (adev->gds.mem.total_size +
+				    adev->gfx.ngg.gds_reserve_size) >>
+				   AMDGPU_GDS_SHIFT);
+
+	amdgpu_ring_write(ring, PACKET3(PACKET3_DMA_DATA, 5));
+	amdgpu_ring_write(ring, (PACKET3_DMA_DATA_CP_SYNC |
+				PACKET3_DMA_DATA_SRC_SEL(2)));
+	amdgpu_ring_write(ring, 0);
+	amdgpu_ring_write(ring, 0);
+	amdgpu_ring_write(ring, adev->gfx.ngg.gds_reserve_addr);
+	amdgpu_ring_write(ring, 0);
+	amdgpu_ring_write(ring, adev->gfx.ngg.gds_reserve_size);
+
+	gfx_v9_0_write_data_to_reg(ring, 0, false,
+				   amdgpu_gds_reg_offset[0].mem_size, 0);
+
+	amdgpu_ring_commit(ring);
+
+	return 0;
+}
+
+static int gfx_v9_0_sw_init(void *handle)
+{
+	int i, r;
+	struct amdgpu_ring *ring;
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	/* EOP Event */
+	r = amdgpu_irq_add_id(adev, AMDGPU_IH_CLIENTID_GRBM_CP, 181, &adev->gfx.eop_irq);
+	if (r)
+		return r;
+
+	/* Privileged reg */
+	r = amdgpu_irq_add_id(adev, AMDGPU_IH_CLIENTID_GRBM_CP, 184,
+			      &adev->gfx.priv_reg_irq);
+	if (r)
+		return r;
+
+	/* Privileged inst */
+	r = amdgpu_irq_add_id(adev, AMDGPU_IH_CLIENTID_GRBM_CP, 185,
+			      &adev->gfx.priv_inst_irq);
+	if (r)
+		return r;
+
+	adev->gfx.gfx_current_status = AMDGPU_GFX_NORMAL_MODE;
+
+	gfx_v9_0_scratch_init(adev);
+
+	r = gfx_v9_0_init_microcode(adev);
+	if (r) {
+		DRM_ERROR("Failed to load gfx firmware!\n");
+		return r;
+	}
+
+	r = gfx_v9_0_mec_init(adev);
+	if (r) {
+		DRM_ERROR("Failed to init MEC BOs!\n");
+		return r;
+	}
+
+	/* set up the gfx ring */
+	for (i = 0; i < adev->gfx.num_gfx_rings; i++) {
+		ring = &adev->gfx.gfx_ring[i];
+		ring->ring_obj = NULL;
+		sprintf(ring->name, "gfx");
+		ring->use_doorbell = true;
+		ring->doorbell_index = AMDGPU_DOORBELL64_GFX_RING0 << 1;
+		r = amdgpu_ring_init(adev, ring, 1024,
+				     &adev->gfx.eop_irq, AMDGPU_CP_IRQ_GFX_EOP);
+		if (r)
+			return r;
+	}
+
+	/* set up the compute queues */
+	for (i = 0; i < adev->gfx.num_compute_rings; i++) {
+		unsigned irq_type;
+
+		/* max 32 queues per MEC */
+		if ((i >= 32) || (i >= AMDGPU_MAX_COMPUTE_RINGS)) {
+			DRM_ERROR("Too many (%d) compute rings!\n", i);
+			break;
+		}
+		ring = &adev->gfx.compute_ring[i];
+		ring->ring_obj = NULL;
+		ring->use_doorbell = true;
+		ring->doorbell_index = (AMDGPU_DOORBELL64_MEC_RING0 + i) << 1;
+		ring->me = 1; /* first MEC */
+		ring->pipe = i / 8;
+		ring->queue = i % 8;
+		sprintf(ring->name, "comp %d.%d.%d", ring->me, ring->pipe, ring->queue);
+		irq_type = AMDGPU_CP_IRQ_COMPUTE_MEC1_PIPE0_EOP + ring->pipe;
+		/* type-2 packets are deprecated on MEC, use type-3 instead */
+		r = amdgpu_ring_init(adev, ring, 1024,
+				     &adev->gfx.eop_irq, irq_type);
+		if (r)
+			return r;
+	}
+
+	/* reserve GDS, GWS and OA resource for gfx */
+	r = amdgpu_bo_create_kernel(adev, adev->gds.mem.gfx_partition_size,
+				    PAGE_SIZE, AMDGPU_GEM_DOMAIN_GDS,
+				    &adev->gds.gds_gfx_bo, NULL, NULL);
+	if (r)
+		return r;
+
+	r = amdgpu_bo_create_kernel(adev, adev->gds.gws.gfx_partition_size,
+				    PAGE_SIZE, AMDGPU_GEM_DOMAIN_GWS,
+				    &adev->gds.gws_gfx_bo, NULL, NULL);
+	if (r)
+		return r;
+
+	r = amdgpu_bo_create_kernel(adev, adev->gds.oa.gfx_partition_size,
+				    PAGE_SIZE, AMDGPU_GEM_DOMAIN_OA,
+				    &adev->gds.oa_gfx_bo, NULL, NULL);
+	if (r)
+		return r;
+
+	adev->gfx.ce_ram_size = 0x8000;
+
+	gfx_v9_0_gpu_early_init(adev);
+
+	r = gfx_v9_0_ngg_init(adev);
+	if (r)
+		return r;
+
+	return 0;
+}
+
+static int gfx_v9_0_sw_fini(void *handle)
+{
+	int i;
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	amdgpu_bo_free_kernel(&adev->gds.oa_gfx_bo, NULL, NULL);
+	amdgpu_bo_free_kernel(&adev->gds.gws_gfx_bo, NULL, NULL);
+	amdgpu_bo_free_kernel(&adev->gds.gds_gfx_bo, NULL, NULL);
+
+	for (i = 0; i < adev->gfx.num_gfx_rings; i++)
+		amdgpu_ring_fini(&adev->gfx.gfx_ring[i]);
+	for (i = 0; i < adev->gfx.num_compute_rings; i++)
+		amdgpu_ring_fini(&adev->gfx.compute_ring[i]);
+
+	gfx_v9_0_mec_fini(adev);
+	gfx_v9_0_ngg_fini(adev);
+
+	return 0;
+}
+
+static void gfx_v9_0_tiling_mode_table_init(struct amdgpu_device *adev)
+{
+	/* TODO */
+}
+
+static void gfx_v9_0_select_se_sh(struct amdgpu_device *adev, u32 se_num, u32 sh_num, u32 instance)
+{
+	u32 data = REG_SET_FIELD(0, GRBM_GFX_INDEX, INSTANCE_BROADCAST_WRITES, 1);
+
+	if ((se_num == 0xffffffff) && (sh_num == 0xffffffff)) {
+		data = REG_SET_FIELD(data, GRBM_GFX_INDEX, SH_BROADCAST_WRITES, 1);
+		data = REG_SET_FIELD(data, GRBM_GFX_INDEX, SE_BROADCAST_WRITES, 1);
+	} else if (se_num == 0xffffffff) {
+		data = REG_SET_FIELD(data, GRBM_GFX_INDEX, SH_INDEX, sh_num);
+		data = REG_SET_FIELD(data, GRBM_GFX_INDEX, SE_BROADCAST_WRITES, 1);
+	} else if (sh_num == 0xffffffff) {
+		data = REG_SET_FIELD(data, GRBM_GFX_INDEX, SH_BROADCAST_WRITES, 1);
+		data = REG_SET_FIELD(data, GRBM_GFX_INDEX, SE_INDEX, se_num);
+	} else {
+		data = REG_SET_FIELD(data, GRBM_GFX_INDEX, SH_INDEX, sh_num);
+		data = REG_SET_FIELD(data, GRBM_GFX_INDEX, SE_INDEX, se_num);
+	}
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmGRBM_GFX_INDEX), data);
+}
+
+static u32 gfx_v9_0_create_bitmask(u32 bit_width)
+{
+	return (u32)((1ULL << bit_width) - 1);
+}
+
+static u32 gfx_v9_0_get_rb_active_bitmap(struct amdgpu_device *adev)
+{
+	u32 data, mask;
+
+	data = RREG32(SOC15_REG_OFFSET(GC, 0, mmCC_RB_BACKEND_DISABLE));
+	data |= RREG32(SOC15_REG_OFFSET(GC, 0, mmGC_USER_RB_BACKEND_DISABLE));
+
+	data &= CC_RB_BACKEND_DISABLE__BACKEND_DISABLE_MASK;
+	data >>= GC_USER_RB_BACKEND_DISABLE__BACKEND_DISABLE__SHIFT;
+
+	mask = gfx_v9_0_create_bitmask(adev->gfx.config.max_backends_per_se /
+				       adev->gfx.config.max_sh_per_se);
+
+	return (~data) & mask;
+}
+
+static void gfx_v9_0_setup_rb(struct amdgpu_device *adev)
+{
+	int i, j;
+	u32 data;
+	u32 active_rbs = 0;
+	u32 rb_bitmap_width_per_sh = adev->gfx.config.max_backends_per_se /
+					adev->gfx.config.max_sh_per_se;
+
+	mutex_lock(&adev->grbm_idx_mutex);
+	for (i = 0; i < adev->gfx.config.max_shader_engines; i++) {
+		for (j = 0; j < adev->gfx.config.max_sh_per_se; j++) {
+			gfx_v9_0_select_se_sh(adev, i, j, 0xffffffff);
+			data = gfx_v9_0_get_rb_active_bitmap(adev);
+			active_rbs |= data << ((i * adev->gfx.config.max_sh_per_se + j) *
+					       rb_bitmap_width_per_sh);
+		}
+	}
+	gfx_v9_0_select_se_sh(adev, 0xffffffff, 0xffffffff, 0xffffffff);
+	mutex_unlock(&adev->grbm_idx_mutex);
+
+	adev->gfx.config.backend_enable_mask = active_rbs;
+	adev->gfx.config.num_rbs = hweight32(active_rbs);
+}
+
+#define DEFAULT_SH_MEM_BASES	(0x6000)
+#define FIRST_COMPUTE_VMID	(8)
+#define LAST_COMPUTE_VMID	(16)
+static void gfx_v9_0_init_compute_vmid(struct amdgpu_device *adev)
+{
+	int i;
+	uint32_t sh_mem_config;
+	uint32_t sh_mem_bases;
+
+	/*
+	 * Configure apertures:
+	 * LDS:         0x60000000'00000000 - 0x60000001'00000000 (4GB)
+	 * Scratch:     0x60000001'00000000 - 0x60000002'00000000 (4GB)
+	 * GPUVM:       0x60010000'00000000 - 0x60020000'00000000 (1TB)
+	 */
+	sh_mem_bases = DEFAULT_SH_MEM_BASES | (DEFAULT_SH_MEM_BASES << 16);
+
+	sh_mem_config = SH_MEM_ADDRESS_MODE_64 |
+			SH_MEM_ALIGNMENT_MODE_UNALIGNED <<
+			SH_MEM_CONFIG__ALIGNMENT_MODE__SHIFT;
+
+	mutex_lock(&adev->srbm_mutex);
+	for (i = FIRST_COMPUTE_VMID; i < LAST_COMPUTE_VMID; i++) {
+		soc15_grbm_select(adev, 0, 0, 0, i);
+		/* CP and shaders */
+		WREG32(SOC15_REG_OFFSET(GC, 0, mmSH_MEM_CONFIG), sh_mem_config);
+		WREG32(SOC15_REG_OFFSET(GC, 0, mmSH_MEM_BASES), sh_mem_bases);
+	}
+	soc15_grbm_select(adev, 0, 0, 0, 0);
+	mutex_unlock(&adev->srbm_mutex);
+}
+
+static void gfx_v9_0_gpu_init(struct amdgpu_device *adev)
+{
+	u32 tmp;
+	int i;
+
+	tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmGRBM_CNTL));
+	tmp = REG_SET_FIELD(tmp, GRBM_CNTL, READ_TIMEOUT, 0xff);
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmGRBM_CNTL), tmp);
+
+	gfx_v9_0_tiling_mode_table_init(adev);
+
+	gfx_v9_0_setup_rb(adev);
+	gfx_v9_0_get_cu_info(adev, &adev->gfx.cu_info);
+
+	/* XXX SH_MEM regs */
+	/* where to put LDS, scratch, GPUVM in FSA64 space */
+	mutex_lock(&adev->srbm_mutex);
+	for (i = 0; i < 16; i++) {
+		soc15_grbm_select(adev, 0, 0, 0, i);
+		/* CP and shaders */
+		tmp = 0;
+		tmp = REG_SET_FIELD(tmp, SH_MEM_CONFIG, ALIGNMENT_MODE,
+				    SH_MEM_ALIGNMENT_MODE_UNALIGNED);
+		WREG32(SOC15_REG_OFFSET(GC, 0, mmSH_MEM_CONFIG), tmp);
+		WREG32(SOC15_REG_OFFSET(GC, 0, mmSH_MEM_BASES), 0);
+	}
+	soc15_grbm_select(adev, 0, 0, 0, 0);
+
+	mutex_unlock(&adev->srbm_mutex);
+
+	gfx_v9_0_init_compute_vmid(adev);
+
+	mutex_lock(&adev->grbm_idx_mutex);
+	/*
+	 * making sure that the following register writes will be broadcasted
+	 * to all the shaders
+	 */
+	gfx_v9_0_select_se_sh(adev, 0xffffffff, 0xffffffff, 0xffffffff);
+
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmPA_SC_FIFO_SIZE),
+		   (adev->gfx.config.sc_prim_fifo_size_frontend <<
+			PA_SC_FIFO_SIZE__SC_FRONTEND_PRIM_FIFO_SIZE__SHIFT) |
+		   (adev->gfx.config.sc_prim_fifo_size_backend <<
+			PA_SC_FIFO_SIZE__SC_BACKEND_PRIM_FIFO_SIZE__SHIFT) |
+		   (adev->gfx.config.sc_hiz_tile_fifo_size <<
+			PA_SC_FIFO_SIZE__SC_HIZ_TILE_FIFO_SIZE__SHIFT) |
+		   (adev->gfx.config.sc_earlyz_tile_fifo_size <<
+			PA_SC_FIFO_SIZE__SC_EARLYZ_TILE_FIFO_SIZE__SHIFT));
+	mutex_unlock(&adev->grbm_idx_mutex);
+}
+
+static void gfx_v9_0_wait_for_rlc_serdes(struct amdgpu_device *adev)
+{
+	u32 i, j, k;
+	u32 mask;
+
+	mutex_lock(&adev->grbm_idx_mutex);
+	for (i = 0; i < adev->gfx.config.max_shader_engines; i++) {
+		for (j = 0; j < adev->gfx.config.max_sh_per_se; j++) {
+			gfx_v9_0_select_se_sh(adev, i, j, 0xffffffff);
+			for (k = 0; k < adev->usec_timeout; k++) {
+				if (RREG32(SOC15_REG_OFFSET(GC, 0, mmRLC_SERDES_CU_MASTER_BUSY)) == 0)
+					break;
+				udelay(1);
+			}
+		}
+	}
+	gfx_v9_0_select_se_sh(adev, 0xffffffff, 0xffffffff, 0xffffffff);
+	mutex_unlock(&adev->grbm_idx_mutex);
+
+	mask = RLC_SERDES_NONCU_MASTER_BUSY__SE_MASTER_BUSY_MASK |
+		RLC_SERDES_NONCU_MASTER_BUSY__GC_MASTER_BUSY_MASK |
+		RLC_SERDES_NONCU_MASTER_BUSY__TC0_MASTER_BUSY_MASK |
+		RLC_SERDES_NONCU_MASTER_BUSY__TC1_MASTER_BUSY_MASK;
+	for (k = 0; k < adev->usec_timeout; k++) {
+		if ((RREG32(SOC15_REG_OFFSET(GC, 0, mmRLC_SERDES_NONCU_MASTER_BUSY)) & mask) == 0)
+			break;
+		udelay(1);
+	}
+}
+
+static void gfx_v9_0_enable_gui_idle_interrupt(struct amdgpu_device *adev,
+					       bool enable)
+{
+	u32 tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmCP_INT_CNTL_RING0));
+
+	/* only the disable path is wired up here; enabling is a no-op */
+	if (enable)
+		return;
+
+	tmp = REG_SET_FIELD(tmp, CP_INT_CNTL_RING0, CNTX_BUSY_INT_ENABLE, enable ? 1 : 0);
+	tmp = REG_SET_FIELD(tmp, CP_INT_CNTL_RING0, CNTX_EMPTY_INT_ENABLE, enable ? 1 : 0);
+	tmp = REG_SET_FIELD(tmp, CP_INT_CNTL_RING0, CMP_BUSY_INT_ENABLE, enable ? 1 : 0);
+	tmp = REG_SET_FIELD(tmp, CP_INT_CNTL_RING0, GFX_IDLE_INT_ENABLE, enable ? 1 : 0);
+
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_INT_CNTL_RING0), tmp);
+}
+
+void gfx_v9_0_rlc_stop(struct amdgpu_device *adev)
+{
+	u32 tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmRLC_CNTL));
+
+	tmp = REG_SET_FIELD(tmp, RLC_CNTL, RLC_ENABLE_F32, 0);
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmRLC_CNTL), tmp);
+
+	gfx_v9_0_enable_gui_idle_interrupt(adev, false);
+
+	gfx_v9_0_wait_for_rlc_serdes(adev);
+}
+
+static void gfx_v9_0_rlc_reset(struct amdgpu_device *adev)
+{
+	u32 tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmGRBM_SOFT_RESET));
+
+	tmp = REG_SET_FIELD(tmp, GRBM_SOFT_RESET, SOFT_RESET_RLC, 1);
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmGRBM_SOFT_RESET), tmp);
+	udelay(50);
+	tmp = REG_SET_FIELD(tmp, GRBM_SOFT_RESET, SOFT_RESET_RLC, 0);
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmGRBM_SOFT_RESET), tmp);
+	udelay(50);
+}
+
+static void gfx_v9_0_rlc_start(struct amdgpu_device *adev)
+{
+#ifdef AMDGPU_RLC_DEBUG_RETRY
+	u32 rlc_ucode_ver;
+#endif
+	u32 tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmRLC_CNTL));
+
+	tmp = REG_SET_FIELD(tmp, RLC_CNTL, RLC_ENABLE_F32, 1);
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmRLC_CNTL), tmp);
+
+	/* on APUs (e.g. carrizo) the CP interrupt is enabled after CP init */
+	if (!(adev->flags & AMD_IS_APU))
+		gfx_v9_0_enable_gui_idle_interrupt(adev, true);
+
+	udelay(50);
+
+#ifdef AMDGPU_RLC_DEBUG_RETRY
+	/* RLC_GPM_GENERAL_6 : RLC Ucode version */
+	rlc_ucode_ver = RREG32(SOC15_REG_OFFSET(GC, 0, mmRLC_GPM_GENERAL_6));
+	if (rlc_ucode_ver == 0x108) {
+		DRM_INFO("Using rlc debug ucode. mmRLC_GPM_GENERAL_6 == 0x%08x / fw_ver == %i\n",
+				rlc_ucode_ver, adev->gfx.rlc_fw_version);
+		/* RLC_GPM_TIMER_INT_3 : Timer interval in RefCLK cycles,
+		 * default is 0x9C4 to create a 100us interval */
+		WREG32(SOC15_REG_OFFSET(GC, 0, mmRLC_GPM_TIMER_INT_3), 0x9C4);
+		/* RLC_GPM_GENERAL_12 : Minimum gap between wptr and rptr
+		 * to disable the page fault retry interrupts, default is
+		 * 0x100 (256) */
+		WREG32(SOC15_REG_OFFSET(GC, 0, mmRLC_GPM_GENERAL_12), 0x100);
+	}
+#endif
+}
+
+static int gfx_v9_0_rlc_load_microcode(struct amdgpu_device *adev)
+{
+	const struct rlc_firmware_header_v2_0 *hdr;
+	const __le32 *fw_data;
+	unsigned i, fw_size;
+
+	if (!adev->gfx.rlc_fw)
+		return -EINVAL;
+
+	hdr = (const struct rlc_firmware_header_v2_0 *)adev->gfx.rlc_fw->data;
+	amdgpu_ucode_print_rlc_hdr(&hdr->header);
+
+	fw_data = (const __le32 *)(adev->gfx.rlc_fw->data +
+			   le32_to_cpu(hdr->header.ucode_array_offset_bytes));
+	fw_size = le32_to_cpu(hdr->header.ucode_size_bytes) / 4;
+
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmRLC_GPM_UCODE_ADDR),
+			RLCG_UCODE_LOADING_START_ADDRESS);
+	for (i = 0; i < fw_size; i++)
+		WREG32(SOC15_REG_OFFSET(GC, 0, mmRLC_GPM_UCODE_DATA), le32_to_cpup(fw_data++));
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmRLC_GPM_UCODE_ADDR), adev->gfx.rlc_fw_version);
+
+	return 0;
+}
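The load loop above follows the usual ucode-port pattern: write the start address once, stream all dwords through the auto-incrementing DATA register, then write the firmware version back to the ADDR register. A toy model of the streaming part; the register names and the MMIO model here are invented for illustration:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Toy MMIO model: the "ADDR" register auto-increments as "DATA" is
 * written, which is how the RLC/CP ucode ports behave. */
static uint32_t ucode_mem[64];
static uint32_t ucode_addr;

static void wreg_addr(uint32_t v) { ucode_addr = v; }
static void wreg_data(uint32_t v) { ucode_mem[ucode_addr++] = v; }

/* Stream fw_size dwords into the port starting at a load address,
 * mirroring the WREG32(ADDR)/loop-WREG32(DATA) shape above. */
static void load_ucode(const uint32_t *fw, size_t fw_size, uint32_t start)
{
	size_t i;

	wreg_addr(start);
	for (i = 0; i < fw_size; i++)
		wreg_data(fw[i]);
}
```

The final write of `adev->gfx.rlc_fw_version` to the ADDR register in the real function is a handshake with the hardware loader and is omitted from this sketch.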
+
+static int gfx_v9_0_rlc_resume(struct amdgpu_device *adev)
+{
+	int r;
+
+	gfx_v9_0_rlc_stop(adev);
+
+	/* disable CG */
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmRLC_CGCG_CGLS_CTRL), 0);
+
+	/* disable PG */
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmRLC_PG_CNTL), 0);
+
+	gfx_v9_0_rlc_reset(adev);
+
+	if (adev->firmware.load_type != AMDGPU_FW_LOAD_PSP) {
+		/* legacy rlc firmware loading */
+		r = gfx_v9_0_rlc_load_microcode(adev);
+		if (r)
+			return r;
+	}
+
+	gfx_v9_0_rlc_start(adev);
+
+	return 0;
+}
+
+static void gfx_v9_0_cp_gfx_enable(struct amdgpu_device *adev, bool enable)
+{
+	int i;
+	u32 tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmCP_ME_CNTL));
+
+	if (enable) {
+		tmp = REG_SET_FIELD(tmp, CP_ME_CNTL, ME_HALT, 0);
+		tmp = REG_SET_FIELD(tmp, CP_ME_CNTL, PFP_HALT, 0);
+		tmp = REG_SET_FIELD(tmp, CP_ME_CNTL, CE_HALT, 0);
+	} else {
+		tmp = REG_SET_FIELD(tmp, CP_ME_CNTL, ME_HALT, 1);
+		tmp = REG_SET_FIELD(tmp, CP_ME_CNTL, PFP_HALT, 1);
+		tmp = REG_SET_FIELD(tmp, CP_ME_CNTL, CE_HALT, 1);
+		for (i = 0; i < adev->gfx.num_gfx_rings; i++)
+			adev->gfx.gfx_ring[i].ready = false;
+	}
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_ME_CNTL), tmp);
+	udelay(50);
+}
+
+static int gfx_v9_0_cp_gfx_load_microcode(struct amdgpu_device *adev)
+{
+	const struct gfx_firmware_header_v1_0 *pfp_hdr;
+	const struct gfx_firmware_header_v1_0 *ce_hdr;
+	const struct gfx_firmware_header_v1_0 *me_hdr;
+	const __le32 *fw_data;
+	unsigned i, fw_size;
+
+	if (!adev->gfx.me_fw || !adev->gfx.pfp_fw || !adev->gfx.ce_fw)
+		return -EINVAL;
+
+	pfp_hdr = (const struct gfx_firmware_header_v1_0 *)
+		adev->gfx.pfp_fw->data;
+	ce_hdr = (const struct gfx_firmware_header_v1_0 *)
+		adev->gfx.ce_fw->data;
+	me_hdr = (const struct gfx_firmware_header_v1_0 *)
+		adev->gfx.me_fw->data;
+
+	amdgpu_ucode_print_gfx_hdr(&pfp_hdr->header);
+	amdgpu_ucode_print_gfx_hdr(&ce_hdr->header);
+	amdgpu_ucode_print_gfx_hdr(&me_hdr->header);
+
+	gfx_v9_0_cp_gfx_enable(adev, false);
+
+	/* PFP */
+	fw_data = (const __le32 *)
+		(adev->gfx.pfp_fw->data +
+		 le32_to_cpu(pfp_hdr->header.ucode_array_offset_bytes));
+	fw_size = le32_to_cpu(pfp_hdr->header.ucode_size_bytes) / 4;
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_PFP_UCODE_ADDR), 0);
+	for (i = 0; i < fw_size; i++)
+		WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_PFP_UCODE_DATA), le32_to_cpup(fw_data++));
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_PFP_UCODE_ADDR), adev->gfx.pfp_fw_version);
+
+	/* CE */
+	fw_data = (const __le32 *)
+		(adev->gfx.ce_fw->data +
+		 le32_to_cpu(ce_hdr->header.ucode_array_offset_bytes));
+	fw_size = le32_to_cpu(ce_hdr->header.ucode_size_bytes) / 4;
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_CE_UCODE_ADDR), 0);
+	for (i = 0; i < fw_size; i++)
+		WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_CE_UCODE_DATA), le32_to_cpup(fw_data++));
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_CE_UCODE_ADDR), adev->gfx.ce_fw_version);
+
+	/* ME */
+	fw_data = (const __le32 *)
+		(adev->gfx.me_fw->data +
+		 le32_to_cpu(me_hdr->header.ucode_array_offset_bytes));
+	fw_size = le32_to_cpu(me_hdr->header.ucode_size_bytes) / 4;
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_ME_RAM_WADDR), 0);
+	for (i = 0; i < fw_size; i++)
+		WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_ME_RAM_DATA), le32_to_cpup(fw_data++));
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_ME_RAM_WADDR), adev->gfx.me_fw_version);
+
+	return 0;
+}
+
+static u32 gfx_v9_0_get_csb_size(struct amdgpu_device *adev)
+{
+	u32 count = 0;
+	const struct cs_section_def *sect = NULL;
+	const struct cs_extent_def *ext = NULL;
+
+	/* begin clear state */
+	count += 2;
+	/* context control state */
+	count += 3;
+
+	for (sect = gfx9_cs_data; sect->section != NULL; ++sect) {
+		for (ext = sect->section; ext->extent != NULL; ++ext) {
+			if (sect->id == SECT_CONTEXT)
+				count += 2 + ext->reg_count;
+			else
+				return 0;
+		}
+	}
+	/* pa_sc_raster_config/pa_sc_raster_config1 */
+	count += 4;
+	/* end clear state */
+	count += 2;
+	/* clear state */
+	count += 2;
+
+	return count;
+}
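For reference, the dword accounting in gfx_v9_0_get_csb_size() can be reproduced with a small stand-alone model. The extent table below is made up; only the fixed packet overheads (2+3 preamble, 2 header dwords per SET_CONTEXT_REG run, 4+2+2 trailer) mirror the function above:

```c
#include <assert.h>
#include <stddef.h>

/* Simplified clear-state extents (illustrative, not the gfx9 tables). */
struct ext { int reg_count; };                 /* one SET_CONTEXT_REG run */
static const struct ext exts[] = { {3}, {5} };

/* Same accounting as gfx_v9_0_get_csb_size(): fixed preamble and
 * trailer packets plus 2 header dwords per extent. */
static int csb_size(const struct ext *e, size_t n)
{
	size_t i;
	int count = 2 + 3;	/* begin clear state + context control */

	for (i = 0; i < n; i++)
		count += 2 + e[i].reg_count;

	count += 4 + 2 + 2;	/* raster config + end + clear state */
	return count;
}
```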
+
+static int gfx_v9_0_cp_gfx_start(struct amdgpu_device *adev)
+{
+	struct amdgpu_ring *ring = &adev->gfx.gfx_ring[0];
+	const struct cs_section_def *sect = NULL;
+	const struct cs_extent_def *ext = NULL;
+	int r, i;
+
+	/* init the CP */
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_MAX_CONTEXT), adev->gfx.config.max_hw_contexts - 1);
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_DEVICE_ID), 1);
+
+	gfx_v9_0_cp_gfx_enable(adev, true);
+
+	r = amdgpu_ring_alloc(ring, gfx_v9_0_get_csb_size(adev) + 4);
+	if (r) {
+		DRM_ERROR("amdgpu: cp failed to lock ring (%d).\n", r);
+		return r;
+	}
+
+	amdgpu_ring_write(ring, PACKET3(PACKET3_PREAMBLE_CNTL, 0));
+	amdgpu_ring_write(ring, PACKET3_PREAMBLE_BEGIN_CLEAR_STATE);
+
+	amdgpu_ring_write(ring, PACKET3(PACKET3_CONTEXT_CONTROL, 1));
+	amdgpu_ring_write(ring, 0x80000000);
+	amdgpu_ring_write(ring, 0x80000000);
+
+	for (sect = gfx9_cs_data; sect->section != NULL; ++sect) {
+		for (ext = sect->section; ext->extent != NULL; ++ext) {
+			if (sect->id == SECT_CONTEXT) {
+				amdgpu_ring_write(ring,
+				       PACKET3(PACKET3_SET_CONTEXT_REG,
+					       ext->reg_count));
+				amdgpu_ring_write(ring,
+				       ext->reg_index - PACKET3_SET_CONTEXT_REG_START);
+				for (i = 0; i < ext->reg_count; i++)
+					amdgpu_ring_write(ring, ext->extent[i]);
+			}
+		}
+	}
+
+	amdgpu_ring_write(ring, PACKET3(PACKET3_PREAMBLE_CNTL, 0));
+	amdgpu_ring_write(ring, PACKET3_PREAMBLE_END_CLEAR_STATE);
+
+	amdgpu_ring_write(ring, PACKET3(PACKET3_CLEAR_STATE, 0));
+	amdgpu_ring_write(ring, 0);
+
+	amdgpu_ring_write(ring, PACKET3(PACKET3_SET_BASE, 2));
+	amdgpu_ring_write(ring, PACKET3_BASE_INDEX(CE_PARTITION_BASE));
+	amdgpu_ring_write(ring, 0x8000);
+	amdgpu_ring_write(ring, 0x8000);
+
+	amdgpu_ring_commit(ring);
+
+	return 0;
+}
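The clear-state upload is built from type-3 PM4 packets. A compact sketch of the header encoding behind the PACKET3() macro; the SET_CONTEXT_REG opcode value matches the long-standing radeon/amdgpu definition, but treat the exact numbers here as illustrative:

```c
#include <assert.h>
#include <stdint.h>

/* Type-3 PM4 packet header: bits [31:30] = 3 (packet type),
 * [29:16] = dword count, [15:8] = opcode. */
#define PACKET3(op, n) ((3U << 30) | (((n) & 0x3FFFU) << 16) | \
			(((op) & 0xFFU) << 8))
#define PACKET3_SET_CONTEXT_REG 0x69
```

Each amdgpu_ring_write() of a PACKET3() header above is followed by exactly the number of payload dwords encoded in the count field; the CP parser relies on that to find the next packet boundary.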
+
+static int gfx_v9_0_cp_gfx_resume(struct amdgpu_device *adev)
+{
+	struct amdgpu_ring *ring;
+	u32 tmp;
+	u32 rb_bufsz;
+	u64 rb_addr, rptr_addr;
+
+	/* Set the write pointer delay */
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_RB_WPTR_DELAY), 0);
+
+	/* set the RB to use vmid 0 */
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_RB_VMID), 0);
+
+	/* Set ring buffer size */
+	ring = &adev->gfx.gfx_ring[0];
+	rb_bufsz = order_base_2(ring->ring_size / 8);
+	tmp = REG_SET_FIELD(0, CP_RB0_CNTL, RB_BUFSZ, rb_bufsz);
+	tmp = REG_SET_FIELD(tmp, CP_RB0_CNTL, RB_BLKSZ, rb_bufsz - 2);
+#ifdef __BIG_ENDIAN
+	tmp = REG_SET_FIELD(tmp, CP_RB0_CNTL, BUF_SWAP, 1);
+#endif
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_RB0_CNTL), tmp);
+
+	/* Initialize the ring buffer's write pointers */
+	ring->wptr = 0;
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_RB0_WPTR), lower_32_bits(ring->wptr));
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_RB0_WPTR_HI), upper_32_bits(ring->wptr));
+
+	/* set the wb address whether it's enabled or not */
+	rptr_addr = adev->wb.gpu_addr + (ring->rptr_offs * 4);
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_RB0_RPTR_ADDR), lower_32_bits(rptr_addr));
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_RB0_RPTR_ADDR_HI), upper_32_bits(rptr_addr) & CP_RB_RPTR_ADDR_HI__RB_RPTR_ADDR_HI_MASK);
+
+	mdelay(1);
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_RB0_CNTL), tmp);
+
+	rb_addr = ring->gpu_addr >> 8;
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_RB0_BASE), rb_addr);
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_RB0_BASE_HI), upper_32_bits(rb_addr));
+
+	tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmCP_RB_DOORBELL_CONTROL));
+	if (ring->use_doorbell) {
+		tmp = REG_SET_FIELD(tmp, CP_RB_DOORBELL_CONTROL,
+				    DOORBELL_OFFSET, ring->doorbell_index);
+		tmp = REG_SET_FIELD(tmp, CP_RB_DOORBELL_CONTROL,
+				    DOORBELL_EN, 1);
+	} else {
+		tmp = REG_SET_FIELD(tmp, CP_RB_DOORBELL_CONTROL, DOORBELL_EN, 0);
+	}
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_RB_DOORBELL_CONTROL), tmp);
+
+	tmp = REG_SET_FIELD(0, CP_RB_DOORBELL_RANGE_LOWER,
+			DOORBELL_RANGE_LOWER, ring->doorbell_index);
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_RB_DOORBELL_RANGE_LOWER), tmp);
+
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_RB_DOORBELL_RANGE_UPPER),
+		       CP_RB_DOORBELL_RANGE_UPPER__DOORBELL_RANGE_UPPER_MASK);
+
+	/* start the ring */
+	gfx_v9_0_cp_gfx_start(adev);
+	ring->ready = true;
+
+	return 0;
+}
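RB_BUFSZ above is the log2 of the ring size in qwords, hence order_base_2(ring->ring_size / 8). A stand-in for order_base_2() (the real one lives in <linux/log2.h>) to make the encoding concrete:

```c
#include <assert.h>

/* Minimal order_base_2 stand-in: smallest order such that
 * (1 << order) >= x, i.e. ceil(log2(x)). */
static unsigned int order_base_2_sketch(unsigned long x)
{
	unsigned int order = 0;

	while ((1UL << order) < x)
		order++;
	return order;
}
```

For a 4 KiB ring, ring_size / 8 = 512 qwords and RB_BUFSZ = 9, so the hardware ring wraps at 2^9 qwords.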
+
+static void gfx_v9_0_cp_compute_enable(struct amdgpu_device *adev, bool enable)
+{
+	int i;
+
+	if (enable) {
+		WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_MEC_CNTL), 0);
+	} else {
+		WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_MEC_CNTL),
+			(CP_MEC_CNTL__MEC_ME1_HALT_MASK | CP_MEC_CNTL__MEC_ME2_HALT_MASK));
+		for (i = 0; i < adev->gfx.num_compute_rings; i++)
+			adev->gfx.compute_ring[i].ready = false;
+	}
+	udelay(50);
+}
+
+static int gfx_v9_0_cp_compute_start(struct amdgpu_device *adev)
+{
+	gfx_v9_0_cp_compute_enable(adev, true);
+
+	return 0;
+}
+
+static int gfx_v9_0_cp_compute_load_microcode(struct amdgpu_device *adev)
+{
+	const struct gfx_firmware_header_v1_0 *mec_hdr;
+	const __le32 *fw_data;
+	unsigned i;
+	u32 tmp;
+
+	if (!adev->gfx.mec_fw)
+		return -EINVAL;
+
+	gfx_v9_0_cp_compute_enable(adev, false);
+
+	mec_hdr = (const struct gfx_firmware_header_v1_0 *)adev->gfx.mec_fw->data;
+	amdgpu_ucode_print_gfx_hdr(&mec_hdr->header);
+
+	fw_data = (const __le32 *)
+		(adev->gfx.mec_fw->data +
+		 le32_to_cpu(mec_hdr->header.ucode_array_offset_bytes));
+	tmp = 0;
+	tmp = REG_SET_FIELD(tmp, CP_CPC_IC_BASE_CNTL, VMID, 0);
+	tmp = REG_SET_FIELD(tmp, CP_CPC_IC_BASE_CNTL, CACHE_POLICY, 0);
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_CPC_IC_BASE_CNTL), tmp);
+
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_CPC_IC_BASE_LO),
+		adev->gfx.mec.mec_fw_gpu_addr & 0xFFFFF000);
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_CPC_IC_BASE_HI),
+		upper_32_bits(adev->gfx.mec.mec_fw_gpu_addr));
+
+	/* MEC1 */
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_MEC_ME1_UCODE_ADDR),
+			 mec_hdr->jt_offset);
+	for (i = 0; i < mec_hdr->jt_size; i++)
+		WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_MEC_ME1_UCODE_DATA),
+			le32_to_cpup(fw_data + mec_hdr->jt_offset + i));
+
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_MEC_ME1_UCODE_ADDR),
+			adev->gfx.mec_fw_version);
+	/* TODO: loading MEC2 firmware is only necessary if MEC2 should
+	 * run different microcode than MEC1
+	 */
+
+	return 0;
+}
+
+static void gfx_v9_0_cp_compute_fini(struct amdgpu_device *adev)
+{
+	int i, r;
+
+	for (i = 0; i < adev->gfx.num_compute_rings; i++) {
+		struct amdgpu_ring *ring = &adev->gfx.compute_ring[i];
+
+		if (ring->mqd_obj) {
+			r = amdgpu_bo_reserve(ring->mqd_obj, false);
+			if (unlikely(r != 0))
+				dev_warn(adev->dev, "(%d) reserve MQD bo failed\n", r);
+
+			amdgpu_bo_unpin(ring->mqd_obj);
+			amdgpu_bo_unreserve(ring->mqd_obj);
+
+			amdgpu_bo_unref(&ring->mqd_obj);
+			ring->mqd_obj = NULL;
+		}
+	}
+}
+
+static int gfx_v9_0_init_queue(struct amdgpu_ring *ring);
+
+static int gfx_v9_0_cp_compute_resume(struct amdgpu_device *adev)
+{
+	int i, r;
+
+	for (i = 0; i < adev->gfx.num_compute_rings; i++) {
+		struct amdgpu_ring *ring = &adev->gfx.compute_ring[i];
+
+		if (gfx_v9_0_init_queue(ring))
+			dev_warn(adev->dev, "compute queue %d init failed!\n", i);
+	}
+
+	r = gfx_v9_0_cp_compute_start(adev);
+	if (r)
+		return r;
+
+	return 0;
+}
+
+static int gfx_v9_0_cp_resume(struct amdgpu_device *adev)
+{
+	int r, i;
+	struct amdgpu_ring *ring;
+
+	if (!(adev->flags & AMD_IS_APU))
+		gfx_v9_0_enable_gui_idle_interrupt(adev, false);
+
+	if (adev->firmware.load_type != AMDGPU_FW_LOAD_PSP) {
+		/* legacy firmware loading */
+		r = gfx_v9_0_cp_gfx_load_microcode(adev);
+		if (r)
+			return r;
+
+		r = gfx_v9_0_cp_compute_load_microcode(adev);
+		if (r)
+			return r;
+	}
+
+	r = gfx_v9_0_cp_gfx_resume(adev);
+	if (r)
+		return r;
+
+	r = gfx_v9_0_cp_compute_resume(adev);
+	if (r)
+		return r;
+
+	ring = &adev->gfx.gfx_ring[0];
+	r = amdgpu_ring_test_ring(ring);
+	if (r) {
+		ring->ready = false;
+		return r;
+	}
+	for (i = 0; i < adev->gfx.num_compute_rings; i++) {
+		ring = &adev->gfx.compute_ring[i];
+
+		ring->ready = true;
+		r = amdgpu_ring_test_ring(ring);
+		if (r)
+			ring->ready = false;
+	}
+
+	gfx_v9_0_enable_gui_idle_interrupt(adev, true);
+
+	return 0;
+}
+
+static void gfx_v9_0_cp_enable(struct amdgpu_device *adev, bool enable)
+{
+	gfx_v9_0_cp_gfx_enable(adev, enable);
+	gfx_v9_0_cp_compute_enable(adev, enable);
+}
+
+static int gfx_v9_0_hw_init(void *handle)
+{
+	int r;
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	gfx_v9_0_init_golden_registers(adev);
+
+	gfx_v9_0_gpu_init(adev);
+
+	r = gfx_v9_0_rlc_resume(adev);
+	if (r)
+		return r;
+
+	r = gfx_v9_0_cp_resume(adev);
+	if (r)
+		return r;
+
+	r = gfx_v9_0_ngg_en(adev);
+	if (r)
+		return r;
+
+	return r;
+}
+
+static int gfx_v9_0_hw_fini(void *handle)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	amdgpu_irq_put(adev, &adev->gfx.priv_reg_irq, 0);
+	amdgpu_irq_put(adev, &adev->gfx.priv_inst_irq, 0);
+	gfx_v9_0_cp_enable(adev, false);
+	gfx_v9_0_rlc_stop(adev);
+	gfx_v9_0_cp_compute_fini(adev);
+
+	return 0;
+}
+
+static int gfx_v9_0_suspend(void *handle)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	return gfx_v9_0_hw_fini(adev);
+}
+
+static int gfx_v9_0_resume(void *handle)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	return gfx_v9_0_hw_init(adev);
+}
+
+static bool gfx_v9_0_is_idle(void *handle)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	return !REG_GET_FIELD(RREG32(SOC15_REG_OFFSET(GC, 0, mmGRBM_STATUS)),
+			      GRBM_STATUS, GUI_ACTIVE);
+}
+
+static int gfx_v9_0_wait_for_idle(void *handle)
+{
+	unsigned i;
+	u32 tmp;
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	for (i = 0; i < adev->usec_timeout; i++) {
+		/* read GRBM_STATUS */
+		tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmGRBM_STATUS)) &
+			GRBM_STATUS__GUI_ACTIVE_MASK;
+
+		if (!REG_GET_FIELD(tmp, GRBM_STATUS, GUI_ACTIVE))
+			return 0;
+		udelay(1);
+	}
+	return -ETIMEDOUT;
+}
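The wait loop is the standard bounded busy-poll: sample the status register up to usec_timeout times, then give up with -ETIMEDOUT. A self-contained sketch with a fake status register; the mask value and register model are invented:

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>

#define GUI_ACTIVE_MASK 0x80000000U	/* illustrative busy bit */

/* Fake GRBM_STATUS that reports busy for the first two reads. */
static int reads;
static uint32_t rreg_grbm_status(void)
{
	return (++reads < 3) ? GUI_ACTIVE_MASK : 0;
}

/* Same shape as gfx_v9_0_wait_for_idle(): bounded busy-poll. */
static int wait_for_idle(unsigned int timeout_us)
{
	unsigned int i;

	for (i = 0; i < timeout_us; i++) {
		if (!(rreg_grbm_status() & GUI_ACTIVE_MASK))
			return 0;
		/* udelay(1) in the real driver */
	}
	return -ETIMEDOUT;
}
```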
+
+static void gfx_v9_0_print_status(void *handle)
+{
+	int i;
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	dev_info(adev->dev, "GFX 9.x registers\n");
+	dev_info(adev->dev, "  GRBM_STATUS=0x%08X\n",
+		 RREG32(SOC15_REG_OFFSET(GC, 0, mmGRBM_STATUS)));
+	dev_info(adev->dev, "  GRBM_STATUS2=0x%08X\n",
+		 RREG32(SOC15_REG_OFFSET(GC, 0, mmGRBM_STATUS2)));
+	dev_info(adev->dev, "  GRBM_STATUS_SE0=0x%08X\n",
+		 RREG32(SOC15_REG_OFFSET(GC, 0, mmGRBM_STATUS_SE0)));
+	dev_info(adev->dev, "  GRBM_STATUS_SE1=0x%08X\n",
+		 RREG32(SOC15_REG_OFFSET(GC, 0, mmGRBM_STATUS_SE1)));
+	dev_info(adev->dev, "  GRBM_STATUS_SE2=0x%08X\n",
+		 RREG32(SOC15_REG_OFFSET(GC, 0, mmGRBM_STATUS_SE2)));
+	dev_info(adev->dev, "  GRBM_STATUS_SE3=0x%08X\n",
+		 RREG32(SOC15_REG_OFFSET(GC, 0, mmGRBM_STATUS_SE3)));
+	dev_info(adev->dev, "  CP_STAT = 0x%08x\n", RREG32(SOC15_REG_OFFSET(GC, 0, mmCP_STAT)));
+	dev_info(adev->dev, "  CP_STALLED_STAT1 = 0x%08x\n",
+		 RREG32(SOC15_REG_OFFSET(GC, 0, mmCP_STALLED_STAT1)));
+	dev_info(adev->dev, "  CP_STALLED_STAT2 = 0x%08x\n",
+		 RREG32(SOC15_REG_OFFSET(GC, 0, mmCP_STALLED_STAT2)));
+	dev_info(adev->dev, "  CP_STALLED_STAT3 = 0x%08x\n",
+		 RREG32(SOC15_REG_OFFSET(GC, 0, mmCP_STALLED_STAT3)));
+	dev_info(adev->dev, "  CP_CPF_BUSY_STAT = 0x%08x\n",
+		 RREG32(SOC15_REG_OFFSET(GC, 0, mmCP_CPF_BUSY_STAT)));
+	dev_info(adev->dev, "  CP_CPF_STALLED_STAT1 = 0x%08x\n",
+		 RREG32(SOC15_REG_OFFSET(GC, 0, mmCP_CPF_STALLED_STAT1)));
+	dev_info(adev->dev, "  CP_CPF_STATUS = 0x%08x\n", RREG32(SOC15_REG_OFFSET(GC, 0, mmCP_CPF_STATUS)));
+	dev_info(adev->dev, "  CP_CPC_BUSY_STAT = 0x%08x\n", RREG32(SOC15_REG_OFFSET(GC, 0, mmCP_CPC_BUSY_STAT)));
+	dev_info(adev->dev, "  CP_CPC_STALLED_STAT1 = 0x%08x\n",
+		 RREG32(SOC15_REG_OFFSET(GC, 0, mmCP_CPC_STALLED_STAT1)));
+	dev_info(adev->dev, "  CP_CPC_STATUS = 0x%08x\n", RREG32(SOC15_REG_OFFSET(GC, 0, mmCP_CPC_STATUS)));
+
+	for (i = 0; i < 32; i++) {
+		dev_info(adev->dev, "  GB_TILE_MODE%d=0x%08X\n",
+			 i, RREG32(SOC15_REG_OFFSET(GC, 0, mmGB_TILE_MODE0) + i*4));
+	}
+	for (i = 0; i < 16; i++) {
+		dev_info(adev->dev, "  GB_MACROTILE_MODE%d=0x%08X\n",
+			 i, RREG32(SOC15_REG_OFFSET(GC, 0, mmGB_MACROTILE_MODE0) + i*4));
+	}
+	for (i = 0; i < adev->gfx.config.max_shader_engines; i++) {
+		dev_info(adev->dev, "  se: %d\n", i);
+		gfx_v9_0_select_se_sh(adev, i, 0xffffffff, 0xffffffff);
+		dev_info(adev->dev, "  PA_SC_RASTER_CONFIG=0x%08X\n",
+			 RREG32(SOC15_REG_OFFSET(GC, 0, mmPA_SC_RASTER_CONFIG)));
+		dev_info(adev->dev, "  PA_SC_RASTER_CONFIG_1=0x%08X\n",
+			 RREG32(SOC15_REG_OFFSET(GC, 0, mmPA_SC_RASTER_CONFIG_1)));
+	}
+	gfx_v9_0_select_se_sh(adev, 0xffffffff, 0xffffffff, 0xffffffff);
+
+	dev_info(adev->dev, "  GB_ADDR_CONFIG=0x%08X\n",
+		 RREG32(SOC15_REG_OFFSET(GC, 0, mmGB_ADDR_CONFIG)));
+
+	dev_info(adev->dev, "  CP_MEQ_THRESHOLDS=0x%08X\n",
+		 RREG32(SOC15_REG_OFFSET(GC, 0, mmCP_MEQ_THRESHOLDS)));
+	dev_info(adev->dev, "  SX_DEBUG_1=0x%08X\n",
+		 RREG32(SOC15_REG_OFFSET(GC, 0, mmSX_DEBUG_1)));
+	dev_info(adev->dev, "  TA_CNTL_AUX=0x%08X\n",
+		 RREG32(SOC15_REG_OFFSET(GC, 0, mmTA_CNTL_AUX)));
+	dev_info(adev->dev, "  SPI_CONFIG_CNTL=0x%08X\n",
+		 RREG32(SOC15_REG_OFFSET(GC, 0, mmSPI_CONFIG_CNTL)));
+	dev_info(adev->dev, "  SQ_CONFIG=0x%08X\n",
+		 RREG32(SOC15_REG_OFFSET(GC, 0, mmSQ_CONFIG)));
+	dev_info(adev->dev, "  DB_DEBUG=0x%08X\n",
+		 RREG32(SOC15_REG_OFFSET(GC, 0, mmDB_DEBUG)));
+	dev_info(adev->dev, "  DB_DEBUG2=0x%08X\n",
+		 RREG32(SOC15_REG_OFFSET(GC, 0, mmDB_DEBUG2)));
+	dev_info(adev->dev, "  DB_DEBUG3=0x%08X\n",
+		 RREG32(SOC15_REG_OFFSET(GC, 0, mmDB_DEBUG3)));
+	dev_info(adev->dev, "  CB_HW_CONTROL=0x%08X\n",
+		 RREG32(SOC15_REG_OFFSET(GC, 0, mmCB_HW_CONTROL)));
+	dev_info(adev->dev, "  SPI_CONFIG_CNTL_1=0x%08X\n",
+		 RREG32(SOC15_REG_OFFSET(GC, 0, mmSPI_CONFIG_CNTL_1)));
+	dev_info(adev->dev, "  PA_SC_FIFO_SIZE=0x%08X\n",
+		 RREG32(SOC15_REG_OFFSET(GC, 0, mmPA_SC_FIFO_SIZE)));
+	dev_info(adev->dev, "  VGT_NUM_INSTANCES=0x%08X\n",
+		 RREG32(SOC15_REG_OFFSET(GC, 0, mmVGT_NUM_INSTANCES)));
+	dev_info(adev->dev, "  CP_PERFMON_CNTL=0x%08X\n",
+		 RREG32(SOC15_REG_OFFSET(GC, 0, mmCP_PERFMON_CNTL)));
+	dev_info(adev->dev, "  PA_SC_FORCE_EOV_MAX_CNTS=0x%08X\n",
+		 RREG32(SOC15_REG_OFFSET(GC, 0, mmPA_SC_FORCE_EOV_MAX_CNTS)));
+	dev_info(adev->dev, "  VGT_CACHE_INVALIDATION=0x%08X\n",
+		 RREG32(SOC15_REG_OFFSET(GC, 0, mmVGT_CACHE_INVALIDATION)));
+	dev_info(adev->dev, "  VGT_GS_VERTEX_REUSE=0x%08X\n",
+		 RREG32(SOC15_REG_OFFSET(GC, 0, mmVGT_GS_VERTEX_REUSE)));
+	dev_info(adev->dev, "  PA_SC_LINE_STIPPLE_STATE=0x%08X\n",
+		 RREG32(SOC15_REG_OFFSET(GC, 0, mmPA_SC_LINE_STIPPLE_STATE)));
+	dev_info(adev->dev, "  PA_CL_ENHANCE=0x%08X\n",
+		 RREG32(SOC15_REG_OFFSET(GC, 0, mmPA_CL_ENHANCE)));
+	dev_info(adev->dev, "  PA_SC_ENHANCE=0x%08X\n",
+		 RREG32(SOC15_REG_OFFSET(GC, 0, mmPA_SC_ENHANCE)));
+
+	dev_info(adev->dev, "  CP_ME_CNTL=0x%08X\n",
+		 RREG32(SOC15_REG_OFFSET(GC, 0, mmCP_ME_CNTL)));
+	dev_info(adev->dev, "  CP_MAX_CONTEXT=0x%08X\n",
+		 RREG32(SOC15_REG_OFFSET(GC, 0, mmCP_MAX_CONTEXT)));
+	dev_info(adev->dev, "  CP_DEVICE_ID=0x%08X\n",
+		 RREG32(SOC15_REG_OFFSET(GC, 0, mmCP_DEVICE_ID)));
+
+	dev_info(adev->dev, "  CP_SEM_WAIT_TIMER=0x%08X\n",
+		 RREG32(SOC15_REG_OFFSET(GC, 0, mmCP_SEM_WAIT_TIMER)));
+
+	dev_info(adev->dev, "  CP_RB_WPTR_DELAY=0x%08X\n",
+		 RREG32(SOC15_REG_OFFSET(GC, 0, mmCP_RB_WPTR_DELAY)));
+	dev_info(adev->dev, "  CP_RB_VMID=0x%08X\n",
+		 RREG32(SOC15_REG_OFFSET(GC, 0, mmCP_RB_VMID)));
+	dev_info(adev->dev, "  CP_RB0_CNTL=0x%08X\n",
+		 RREG32(SOC15_REG_OFFSET(GC, 0, mmCP_RB0_CNTL)));
+	dev_info(adev->dev, "  CP_RB0_WPTR=0x%08X\n",
+		 RREG32(SOC15_REG_OFFSET(GC, 0, mmCP_RB0_WPTR)));
+	dev_info(adev->dev, "  CP_RB0_RPTR_ADDR=0x%08X\n",
+		 RREG32(SOC15_REG_OFFSET(GC, 0, mmCP_RB0_RPTR_ADDR)));
+	dev_info(adev->dev, "  CP_RB0_RPTR_ADDR_HI=0x%08X\n",
+		 RREG32(SOC15_REG_OFFSET(GC, 0, mmCP_RB0_RPTR_ADDR_HI)));
+	dev_info(adev->dev, "  CP_RB0_CNTL=0x%08X\n",
+		 RREG32(SOC15_REG_OFFSET(GC, 0, mmCP_RB0_CNTL)));
+	dev_info(adev->dev, "  CP_RB0_BASE=0x%08X\n",
+		 RREG32(SOC15_REG_OFFSET(GC, 0, mmCP_RB0_BASE)));
+	dev_info(adev->dev, "  CP_RB0_BASE_HI=0x%08X\n",
+		 RREG32(SOC15_REG_OFFSET(GC, 0, mmCP_RB0_BASE_HI)));
+	dev_info(adev->dev, "  CP_MEC_CNTL=0x%08X\n",
+		 RREG32(SOC15_REG_OFFSET(GC, 0, mmCP_MEC_CNTL)));
+
+	dev_info(adev->dev, "  SCRATCH_ADDR=0x%08X\n",
+		 RREG32(SOC15_REG_OFFSET(GC, 0, mmSCRATCH_ADDR)));
+	dev_info(adev->dev, "  SCRATCH_UMSK=0x%08X\n",
+		 RREG32(SOC15_REG_OFFSET(GC, 0, mmSCRATCH_UMSK)));
+
+	dev_info(adev->dev, "  CP_INT_CNTL_RING0=0x%08X\n",
+		 RREG32(SOC15_REG_OFFSET(GC, 0, mmCP_INT_CNTL_RING0)));
+	dev_info(adev->dev, "  RLC_LB_CNTL=0x%08X\n",
+		 RREG32(SOC15_REG_OFFSET(GC, 0, mmRLC_LB_CNTL)));
+	dev_info(adev->dev, "  RLC_CNTL=0x%08X\n",
+		 RREG32(SOC15_REG_OFFSET(GC, 0, mmRLC_CNTL)));
+	dev_info(adev->dev, "  RLC_CGCG_CGLS_CTRL=0x%08X\n",
+		 RREG32(SOC15_REG_OFFSET(GC, 0, mmRLC_CGCG_CGLS_CTRL)));
+	dev_info(adev->dev, "  RLC_LB_CNTR_INIT=0x%08X\n",
+		 RREG32(SOC15_REG_OFFSET(GC, 0, mmRLC_LB_CNTR_INIT)));
+	dev_info(adev->dev, "  RLC_LB_CNTR_MAX=0x%08X\n",
+		 RREG32(SOC15_REG_OFFSET(GC, 0, mmRLC_LB_CNTR_MAX)));
+	dev_info(adev->dev, "  RLC_LB_INIT_CU_MASK=0x%08X\n",
+		 RREG32(SOC15_REG_OFFSET(GC, 0, mmRLC_LB_INIT_CU_MASK)));
+	dev_info(adev->dev, "  RLC_LB_PARAMS=0x%08X\n",
+		 RREG32(SOC15_REG_OFFSET(GC, 0, mmRLC_LB_PARAMS)));
+	dev_info(adev->dev, "  RLC_LB_CNTL=0x%08X\n",
+		 RREG32(SOC15_REG_OFFSET(GC, 0, mmRLC_LB_CNTL)));
+	dev_info(adev->dev, "  RLC_UCODE_CNTL=0x%08X\n",
+		 RREG32(SOC15_REG_OFFSET(GC, 0, mmRLC_UCODE_CNTL)));
+
+	dev_info(adev->dev, "  RLC_GPM_GENERAL_6=0x%08X\n",
+		 RREG32(SOC15_REG_OFFSET(GC, 0, mmRLC_GPM_GENERAL_6)));
+	dev_info(adev->dev, "  RLC_GPM_GENERAL_12=0x%08X\n",
+		 RREG32(SOC15_REG_OFFSET(GC, 0, mmRLC_GPM_GENERAL_12)));
+	dev_info(adev->dev, "  RLC_GPM_TIMER_INT_3=0x%08X\n",
+		 RREG32(SOC15_REG_OFFSET(GC, 0, mmRLC_GPM_TIMER_INT_3)));
+	mutex_lock(&adev->srbm_mutex);
+	for (i = 0; i < 16; i++) {
+		soc15_grbm_select(adev, 0, 0, 0, i);
+		dev_info(adev->dev, "  VM %d:\n", i);
+		dev_info(adev->dev, "  SH_MEM_CONFIG=0x%08X\n",
+			 RREG32(SOC15_REG_OFFSET(GC, 0, mmSH_MEM_CONFIG)));
+		dev_info(adev->dev, "  SH_MEM_BASES=0x%08X\n",
+			 RREG32(SOC15_REG_OFFSET(GC, 0, mmSH_MEM_BASES)));
+	}
+	soc15_grbm_select(adev, 0, 0, 0, 0);
+	mutex_unlock(&adev->srbm_mutex);
+}
+
+static int gfx_v9_0_soft_reset(void *handle)
+{
+	u32 grbm_soft_reset = 0;
+	u32 tmp;
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	/* GRBM_STATUS */
+	tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmGRBM_STATUS));
+	if (tmp & (GRBM_STATUS__PA_BUSY_MASK | GRBM_STATUS__SC_BUSY_MASK |
+		   GRBM_STATUS__BCI_BUSY_MASK | GRBM_STATUS__SX_BUSY_MASK |
+		   GRBM_STATUS__TA_BUSY_MASK | GRBM_STATUS__VGT_BUSY_MASK |
+		   GRBM_STATUS__DB_BUSY_MASK | GRBM_STATUS__CB_BUSY_MASK |
+		   GRBM_STATUS__GDS_BUSY_MASK | GRBM_STATUS__SPI_BUSY_MASK |
+		   GRBM_STATUS__IA_BUSY_MASK | GRBM_STATUS__IA_BUSY_NO_DMA_MASK)) {
+		grbm_soft_reset = REG_SET_FIELD(grbm_soft_reset,
+						GRBM_SOFT_RESET, SOFT_RESET_CP, 1);
+		grbm_soft_reset = REG_SET_FIELD(grbm_soft_reset,
+						GRBM_SOFT_RESET, SOFT_RESET_GFX, 1);
+	}
+
+	if (tmp & (GRBM_STATUS__CP_BUSY_MASK | GRBM_STATUS__CP_COHERENCY_BUSY_MASK)) {
+		grbm_soft_reset = REG_SET_FIELD(grbm_soft_reset,
+						GRBM_SOFT_RESET, SOFT_RESET_CP, 1);
+	}
+
+	/* GRBM_STATUS2 */
+	tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmGRBM_STATUS2));
+	if (REG_GET_FIELD(tmp, GRBM_STATUS2, RLC_BUSY))
+		grbm_soft_reset = REG_SET_FIELD(grbm_soft_reset,
+						GRBM_SOFT_RESET, SOFT_RESET_RLC, 1);
+
+	if (grbm_soft_reset) {
+		gfx_v9_0_print_status((void *)adev);
+		/* stop the rlc */
+		gfx_v9_0_rlc_stop(adev);
+
+		/* Disable GFX parsing/prefetching */
+		gfx_v9_0_cp_gfx_enable(adev, false);
+
+		/* Disable MEC parsing/prefetching */
+		gfx_v9_0_cp_compute_enable(adev, false);
+
+		tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmGRBM_SOFT_RESET));
+		tmp |= grbm_soft_reset;
+		dev_info(adev->dev, "GRBM_SOFT_RESET=0x%08X\n", tmp);
+		WREG32(SOC15_REG_OFFSET(GC, 0, mmGRBM_SOFT_RESET), tmp);
+		tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmGRBM_SOFT_RESET));
+
+		udelay(50);
+
+		tmp &= ~grbm_soft_reset;
+		WREG32(SOC15_REG_OFFSET(GC, 0, mmGRBM_SOFT_RESET), tmp);
+		tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmGRBM_SOFT_RESET));
+
+		/* Wait a little for things to settle down */
+		udelay(50);
+		gfx_v9_0_print_status((void *)adev);
+	}
+	return 0;
+}
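The reset path first accumulates per-block reset bits from the busy flags, then pulses them through GRBM_SOFT_RESET. The accumulation step in isolation; the mask values below are illustrative, not the real GRBM bit positions:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative bit positions, not the vega10 register layout. */
#define SOFT_RESET_CP_MASK		0x00000001U
#define SOFT_RESET_GFX_MASK		0x00010000U
#define GRBM_STATUS__CP_BUSY_MASK	0x20000000U
#define GRBM_STATUS__GUI_ACTIVE_MASK	0x80000000U

/* Accumulate reset requests from busy status, as the soft_reset
 * handler above does before pulsing GRBM_SOFT_RESET. */
static uint32_t build_soft_reset(uint32_t grbm_status)
{
	uint32_t reset = 0;

	if (grbm_status & GRBM_STATUS__GUI_ACTIVE_MASK)
		reset |= SOFT_RESET_CP_MASK | SOFT_RESET_GFX_MASK;
	if (grbm_status & GRBM_STATUS__CP_BUSY_MASK)
		reset |= SOFT_RESET_CP_MASK;
	return reset;
}
```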
+
+static uint64_t gfx_v9_0_get_gpu_clock_counter(struct amdgpu_device *adev)
+{
+	uint64_t clock;
+
+	mutex_lock(&adev->gfx.gpu_clock_mutex);
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmRLC_CAPTURE_GPU_CLOCK_COUNT), 1);
+	clock = (uint64_t)RREG32(SOC15_REG_OFFSET(GC, 0, mmRLC_GPU_CLOCK_COUNT_LSB)) |
+		((uint64_t)RREG32(SOC15_REG_OFFSET(GC, 0, mmRLC_GPU_CLOCK_COUNT_MSB)) << 32ULL);
+	mutex_unlock(&adev->gfx.gpu_clock_mutex);
+	return clock;
+}
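After the capture write latches the counter, the two 32-bit halves are combined into one 64-bit value as shown above. The combination step as a tiny helper:

```c
#include <assert.h>
#include <stdint.h>

/* Combine latched LSB/MSB counter halves into a 64-bit value, as
 * gfx_v9_0_get_gpu_clock_counter() does after the capture write. */
static uint64_t combine_clock(uint32_t lsb, uint32_t msb)
{
	return (uint64_t)lsb | ((uint64_t)msb << 32);
}
```

The capture write and the mutex around it matter: without latching, the LSB could wrap between the two reads and produce a value off by 2^32.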
+
+static void gfx_v9_0_ring_emit_gds_switch(struct amdgpu_ring *ring,
+					  uint32_t vmid,
+					  uint32_t gds_base, uint32_t gds_size,
+					  uint32_t gws_base, uint32_t gws_size,
+					  uint32_t oa_base, uint32_t oa_size)
+{
+	gds_base = gds_base >> AMDGPU_GDS_SHIFT;
+	gds_size = gds_size >> AMDGPU_GDS_SHIFT;
+
+	gws_base = gws_base >> AMDGPU_GWS_SHIFT;
+	gws_size = gws_size >> AMDGPU_GWS_SHIFT;
+
+	oa_base = oa_base >> AMDGPU_OA_SHIFT;
+	oa_size = oa_size >> AMDGPU_OA_SHIFT;
+
+	/* GDS Base */
+	gfx_v9_0_write_data_to_reg(ring, 0, false,
+				   amdgpu_gds_reg_offset[vmid].mem_base,
+				   gds_base);
+
+	/* GDS Size */
+	gfx_v9_0_write_data_to_reg(ring, 0, false,
+				   amdgpu_gds_reg_offset[vmid].mem_size,
+				   gds_size);
+
+	/* GWS */
+	gfx_v9_0_write_data_to_reg(ring, 0, false,
+				   amdgpu_gds_reg_offset[vmid].gws,
+				   gws_size << GDS_GWS_VMID0__SIZE__SHIFT | gws_base);
+
+	/* OA */
+	gfx_v9_0_write_data_to_reg(ring, 0, false,
+				   amdgpu_gds_reg_offset[vmid].oa,
+				   (1 << (oa_size + oa_base)) - (1 << oa_base));
+}
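The OA expression `(1 << (oa_size + oa_base)) - (1 << oa_base)` builds a contiguous bitmask of oa_size bits starting at bit oa_base. As a checkable helper:

```c
#include <assert.h>
#include <stdint.h>

/* OA allocation mask: oa_size contiguous bits starting at oa_base,
 * matching the expression in the GDS switch above. Inputs are
 * assumed already shifted down by AMDGPU_OA_SHIFT. */
static uint32_t oa_mask(uint32_t oa_base, uint32_t oa_size)
{
	return (1U << (oa_size + oa_base)) - (1U << oa_base);
}
```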
+
+static int gfx_v9_0_early_init(void *handle)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	adev->gfx.num_gfx_rings = GFX9_NUM_GFX_RINGS;
+	adev->gfx.num_compute_rings = GFX9_NUM_COMPUTE_RINGS;
+	gfx_v9_0_set_ring_funcs(adev);
+	gfx_v9_0_set_irq_funcs(adev);
+	gfx_v9_0_set_gds_init(adev);
+	gfx_v9_0_set_rlc_funcs(adev);
+
+	return 0;
+}
+
+static int gfx_v9_0_late_init(void *handle)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+	int r;
+
+	r = amdgpu_irq_get(adev, &adev->gfx.priv_reg_irq, 0);
+	if (r)
+		return r;
+
+	r = amdgpu_irq_get(adev, &adev->gfx.priv_inst_irq, 0);
+	if (r)
+		return r;
+
+	return 0;
+}
+
+static void gfx_v9_0_enter_rlc_safe_mode(struct amdgpu_device *adev)
+{
+	uint32_t rlc_setting, data;
+	unsigned i;
+
+	if (adev->gfx.rlc.in_safe_mode)
+		return;
+
+	/* if RLC is not enabled, do nothing */
+	rlc_setting = RREG32(SOC15_REG_OFFSET(GC, 0, mmRLC_CNTL));
+	if (!(rlc_setting & RLC_CNTL__RLC_ENABLE_F32_MASK))
+		return;
+
+	if (adev->cg_flags &
+	    (AMD_CG_SUPPORT_GFX_CGCG | AMD_CG_SUPPORT_GFX_MGCG |
+	     AMD_CG_SUPPORT_GFX_3D_CGCG)) {
+		data = RLC_SAFE_MODE__CMD_MASK;
+		data |= (1 << RLC_SAFE_MODE__MESSAGE__SHIFT);
+		WREG32(SOC15_REG_OFFSET(GC, 0, mmRLC_SAFE_MODE), data);
+
+		/* wait for RLC_SAFE_MODE */
+		for (i = 0; i < adev->usec_timeout; i++) {
+			if (!REG_GET_FIELD(RREG32(SOC15_REG_OFFSET(GC, 0, mmRLC_SAFE_MODE)),
+					   RLC_SAFE_MODE, CMD))
+				break;
+			udelay(1);
+		}
+		adev->gfx.rlc.in_safe_mode = true;
+	}
+}
+
+static void gfx_v9_0_exit_rlc_safe_mode(struct amdgpu_device *adev)
+{
+	uint32_t rlc_setting, data;
+
+	if (!adev->gfx.rlc.in_safe_mode)
+		return;
+
+	/* if RLC is not enabled, do nothing */
+	rlc_setting = RREG32(SOC15_REG_OFFSET(GC, 0, mmRLC_CNTL));
+	if (!(rlc_setting & RLC_CNTL__RLC_ENABLE_F32_MASK))
+		return;
+
+	if (adev->cg_flags &
+	    (AMD_CG_SUPPORT_GFX_CGCG | AMD_CG_SUPPORT_GFX_MGCG)) {
+		/*
+		 * Try to exit safe mode only if it is already in safe
+		 * mode.
+		 */
+		data = RLC_SAFE_MODE__CMD_MASK;
+		WREG32(SOC15_REG_OFFSET(GC, 0, mmRLC_SAFE_MODE), data);
+		adev->gfx.rlc.in_safe_mode = false;
+	}
+}
+
+static void gfx_v9_0_update_medium_grain_clock_gating(struct amdgpu_device *adev,
+						      bool enable)
+{
+	uint32_t data, def;
+
+	/* It is disabled by HW by default */
+	if (enable && (adev->cg_flags & AMD_CG_SUPPORT_GFX_MGCG)) {
+		/* 1 - RLC_CGTT_MGCG_OVERRIDE */
+		def = data = RREG32(SOC15_REG_OFFSET(GC, 0, mmRLC_CGTT_MGCG_OVERRIDE));
+		data &= ~(RLC_CGTT_MGCG_OVERRIDE__CPF_CGTT_SCLK_OVERRIDE_MASK |
+			  RLC_CGTT_MGCG_OVERRIDE__GRBM_CGTT_SCLK_OVERRIDE_MASK |
+			  RLC_CGTT_MGCG_OVERRIDE__GFXIP_MGCG_OVERRIDE_MASK |
+			  RLC_CGTT_MGCG_OVERRIDE__GFXIP_MGLS_OVERRIDE_MASK);
+
+		/* only for Vega10 & Raven1 */
+		data |= RLC_CGTT_MGCG_OVERRIDE__RLC_CGTT_SCLK_OVERRIDE_MASK;
+
+		if (def != data)
+			WREG32(SOC15_REG_OFFSET(GC, 0, mmRLC_CGTT_MGCG_OVERRIDE), data);
+
+		/* MGLS is a global flag to control all MGLS in GFX */
+		if (adev->cg_flags & AMD_CG_SUPPORT_GFX_MGLS) {
+			/* 2 - RLC memory Light sleep */
+			if (adev->cg_flags & AMD_CG_SUPPORT_GFX_RLC_LS) {
+				def = data = RREG32(SOC15_REG_OFFSET(GC, 0, mmRLC_MEM_SLP_CNTL));
+				data |= RLC_MEM_SLP_CNTL__RLC_MEM_LS_EN_MASK;
+				if (def != data)
+					WREG32(SOC15_REG_OFFSET(GC, 0, mmRLC_MEM_SLP_CNTL), data);
+			}
+			/* 3 - CP memory Light sleep */
+			if (adev->cg_flags & AMD_CG_SUPPORT_GFX_CP_LS) {
+				def = data = RREG32(SOC15_REG_OFFSET(GC, 0, mmCP_MEM_SLP_CNTL));
+				data |= CP_MEM_SLP_CNTL__CP_MEM_LS_EN_MASK;
+				if (def != data)
+					WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_MEM_SLP_CNTL), data);
+			}
+		}
+	} else {
+		/* 1 - MGCG_OVERRIDE */
+		def = data = RREG32(SOC15_REG_OFFSET(GC, 0, mmRLC_CGTT_MGCG_OVERRIDE));
+		data |= (RLC_CGTT_MGCG_OVERRIDE__CPF_CGTT_SCLK_OVERRIDE_MASK |
+			 RLC_CGTT_MGCG_OVERRIDE__RLC_CGTT_SCLK_OVERRIDE_MASK |
+			 RLC_CGTT_MGCG_OVERRIDE__GRBM_CGTT_SCLK_OVERRIDE_MASK |
+			 RLC_CGTT_MGCG_OVERRIDE__GFXIP_MGCG_OVERRIDE_MASK |
+			 RLC_CGTT_MGCG_OVERRIDE__GFXIP_MGLS_OVERRIDE_MASK);
+		if (def != data)
+			WREG32(SOC15_REG_OFFSET(GC, 0, mmRLC_CGTT_MGCG_OVERRIDE), data);
+
+		/* 2 - disable MGLS in RLC */
+		data = RREG32(SOC15_REG_OFFSET(GC, 0, mmRLC_MEM_SLP_CNTL));
+		if (data & RLC_MEM_SLP_CNTL__RLC_MEM_LS_EN_MASK) {
+			data &= ~RLC_MEM_SLP_CNTL__RLC_MEM_LS_EN_MASK;
+			WREG32(SOC15_REG_OFFSET(GC, 0, mmRLC_MEM_SLP_CNTL), data);
+		}
+
+		/* 3 - disable MGLS in CP */
+		data = RREG32(SOC15_REG_OFFSET(GC, 0, mmCP_MEM_SLP_CNTL));
+		if (data & CP_MEM_SLP_CNTL__CP_MEM_LS_EN_MASK) {
+			data &= ~CP_MEM_SLP_CNTL__CP_MEM_LS_EN_MASK;
+			WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_MEM_SLP_CNTL), data);
+		}
+	}
+}
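The `def = data = RREG32(...); ... if (def != data) WREG32(...)` idiom used throughout the clock-gating code avoids redundant MMIO writes when a register already holds the desired value. A minimal model with a fake MMIO array and an invented mask:

```c
#include <assert.h>
#include <stdint.h>

static uint32_t mmio[1];	/* fake register file */
static int writes;		/* counts WREG32-equivalent calls */

static uint32_t rreg(int r) { return mmio[r]; }
static void wreg(int r, uint32_t v) { mmio[r] = v; writes++; }

#define LS_EN_MASK 0x1U		/* illustrative light-sleep enable bit */

/* def/data idiom: only touch the bus when the value changes. */
static void enable_ls(int reg)
{
	uint32_t def, data;

	def = data = rreg(reg);
	data |= LS_EN_MASK;
	if (def != data)
		wreg(reg, data);
}
```

MMIO writes can be expensive and, for some registers, have side effects, so skipping the no-op write is more than a micro-optimization.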
+
+static void gfx_v9_0_update_3d_clock_gating(struct amdgpu_device *adev,
+					   bool enable)
+{
+	uint32_t data, def;
+
+	adev->gfx.rlc.funcs->enter_safe_mode(adev);
+
+	/* Enable 3D CGCG/CGLS */
+	if (enable && (adev->cg_flags & AMD_CG_SUPPORT_GFX_3D_CGCG)) {
+		/* write cmd to clear cgcg/cgls ov */
+		def = data = RREG32(SOC15_REG_OFFSET(GC, 0, mmRLC_CGTT_MGCG_OVERRIDE));
+		/* unset CGCG override */
+		data &= ~RLC_CGTT_MGCG_OVERRIDE__GFXIP_GFX3D_CG_OVERRIDE_MASK;
+		/* update CGCG and CGLS override bits */
+		if (def != data)
+			WREG32(SOC15_REG_OFFSET(GC, 0, mmRLC_CGTT_MGCG_OVERRIDE), data);
+		/* enable 3Dcgcg FSM(0x0020003f) */
+		def = RREG32(SOC15_REG_OFFSET(GC, 0, mmRLC_CGCG_CGLS_CTRL_3D));
+		data = (0x2000 << RLC_CGCG_CGLS_CTRL_3D__CGCG_GFX_IDLE_THRESHOLD__SHIFT) |
+			RLC_CGCG_CGLS_CTRL_3D__CGCG_EN_MASK;
+		if (adev->cg_flags & AMD_CG_SUPPORT_GFX_3D_CGLS)
+			data |= (0x000F << RLC_CGCG_CGLS_CTRL_3D__CGLS_REP_COMPANSAT_DELAY__SHIFT) |
+				RLC_CGCG_CGLS_CTRL_3D__CGLS_EN_MASK;
+		if (def != data)
+			WREG32(SOC15_REG_OFFSET(GC, 0, mmRLC_CGCG_CGLS_CTRL_3D), data);
+
+		/* set IDLE_POLL_COUNT(0x00900100) */
+		def = RREG32(SOC15_REG_OFFSET(GC, 0, mmCP_RB_WPTR_POLL_CNTL));
+		data = (0x0100 << CP_RB_WPTR_POLL_CNTL__POLL_FREQUENCY__SHIFT) |
+			(0x0090 << CP_RB_WPTR_POLL_CNTL__IDLE_POLL_COUNT__SHIFT);
+		if (def != data)
+			WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_RB_WPTR_POLL_CNTL), data);
+	} else {
+		/* Disable CGCG/CGLS */
+		def = data = RREG32(SOC15_REG_OFFSET(GC, 0, mmRLC_CGCG_CGLS_CTRL_3D));
+		/* disable cgcg, cgls should be disabled */
+		data &= ~(RLC_CGCG_CGLS_CTRL_3D__CGCG_EN_MASK |
+			  RLC_CGCG_CGLS_CTRL_3D__CGLS_EN_MASK);
+		/* disable cgcg and cgls in FSM */
+		if (def != data)
+			WREG32(SOC15_REG_OFFSET(GC, 0, mmRLC_CGCG_CGLS_CTRL_3D), data);
+	}
+
+	adev->gfx.rlc.funcs->exit_safe_mode(adev);
+}
+
+static void gfx_v9_0_update_coarse_grain_clock_gating(struct amdgpu_device *adev,
+						      bool enable)
+{
+	uint32_t def, data;
+
+	adev->gfx.rlc.funcs->enter_safe_mode(adev);
+
+	if (enable && (adev->cg_flags & AMD_CG_SUPPORT_GFX_CGCG)) {
+		def = data = RREG32(SOC15_REG_OFFSET(GC, 0, mmRLC_CGTT_MGCG_OVERRIDE));
+		/* unset CGCG override */
+		data &= ~RLC_CGTT_MGCG_OVERRIDE__GFXIP_CGCG_OVERRIDE_MASK;
+		if (adev->cg_flags & AMD_CG_SUPPORT_GFX_CGLS)
+			data &= ~RLC_CGTT_MGCG_OVERRIDE__GFXIP_CGLS_OVERRIDE_MASK;
+		else
+			data |= RLC_CGTT_MGCG_OVERRIDE__GFXIP_CGLS_OVERRIDE_MASK;
+		/* update CGCG and CGLS override bits */
+		if (def != data)
+			WREG32(SOC15_REG_OFFSET(GC, 0, mmRLC_CGTT_MGCG_OVERRIDE), data);
+
+		/* enable cgcg FSM(0x0020003F) */
+		def = RREG32(SOC15_REG_OFFSET(GC, 0, mmRLC_CGCG_CGLS_CTRL));
+		data = (0x2000 << RLC_CGCG_CGLS_CTRL__CGCG_GFX_IDLE_THRESHOLD__SHIFT) |
+			RLC_CGCG_CGLS_CTRL__CGCG_EN_MASK;
+		if (adev->cg_flags & AMD_CG_SUPPORT_GFX_CGLS)
+			data |= (0x000F << RLC_CGCG_CGLS_CTRL__CGLS_REP_COMPANSAT_DELAY__SHIFT) |
+				RLC_CGCG_CGLS_CTRL__CGLS_EN_MASK;
+		if (def != data)
+			WREG32(SOC15_REG_OFFSET(GC, 0, mmRLC_CGCG_CGLS_CTRL), data);
+
+		/* set IDLE_POLL_COUNT(0x00900100) */
+		def = RREG32(SOC15_REG_OFFSET(GC, 0, mmCP_RB_WPTR_POLL_CNTL));
+		data = (0x0100 << CP_RB_WPTR_POLL_CNTL__POLL_FREQUENCY__SHIFT) |
+			(0x0090 << CP_RB_WPTR_POLL_CNTL__IDLE_POLL_COUNT__SHIFT);
+		if (def != data)
+			WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_RB_WPTR_POLL_CNTL), data);
+	} else {
+		def = data = RREG32(SOC15_REG_OFFSET(GC, 0, mmRLC_CGCG_CGLS_CTRL));
+		/* reset CGCG/CGLS bits */
+		data &= ~(RLC_CGCG_CGLS_CTRL__CGCG_EN_MASK | RLC_CGCG_CGLS_CTRL__CGLS_EN_MASK);
+		/* disable cgcg and cgls in FSM */
+		if (def != data)
+			WREG32(SOC15_REG_OFFSET(GC, 0, mmRLC_CGCG_CGLS_CTRL), data);
+	}
+
+	adev->gfx.rlc.funcs->exit_safe_mode(adev);
+}
+
+static int gfx_v9_0_update_gfx_clock_gating(struct amdgpu_device *adev,
+					    bool enable)
+{
+	if (enable) {
+		/* CGCG/CGLS should be enabled after MGCG/MGLS
+		 * ===  MGCG + MGLS ===
+		 */
+		gfx_v9_0_update_medium_grain_clock_gating(adev, enable);
+		/* ===  CGCG /CGLS for GFX 3D Only === */
+		gfx_v9_0_update_3d_clock_gating(adev, enable);
+		/* ===  CGCG + CGLS === */
+		gfx_v9_0_update_coarse_grain_clock_gating(adev, enable);
+	} else {
+		/* CGCG/CGLS should be disabled before MGCG/MGLS
+		 * ===  CGCG + CGLS ===
+		 */
+		gfx_v9_0_update_coarse_grain_clock_gating(adev, enable);
+		/* ===  CGCG /CGLS for GFX 3D Only === */
+		gfx_v9_0_update_3d_clock_gating(adev, enable);
+		/* ===  MGCG + MGLS === */
+		gfx_v9_0_update_medium_grain_clock_gating(adev, enable);
+	}
+	return 0;
+}
+
+static const struct amdgpu_rlc_funcs gfx_v9_0_rlc_funcs = {
+	.enter_safe_mode = gfx_v9_0_enter_rlc_safe_mode,
+	.exit_safe_mode = gfx_v9_0_exit_rlc_safe_mode
+};
+
+static int gfx_v9_0_set_powergating_state(void *handle,
+					  enum amd_powergating_state state)
+{
+	return 0;
+}
+
+static int gfx_v9_0_set_clockgating_state(void *handle,
+					  enum amd_clockgating_state state)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	switch (adev->asic_type) {
+	case CHIP_VEGA10:
+		gfx_v9_0_update_gfx_clock_gating(adev,
+						 state == AMD_CG_STATE_GATE);
+		break;
+	default:
+		break;
+	}
+	return 0;
+}
+
+static u64 gfx_v9_0_ring_get_rptr_gfx(struct amdgpu_ring *ring)
+{
+	return ring->adev->wb.wb[ring->rptr_offs]; /* gfx9 is 32bit rptr */
+}
+
+static u64 gfx_v9_0_ring_get_wptr_gfx(struct amdgpu_ring *ring)
+{
+	struct amdgpu_device *adev = ring->adev;
+	u64 wptr;
+
+	/* XXX check if swapping is necessary on BE */
+	if (ring->use_doorbell) {
+		wptr = atomic64_read((atomic64_t *)&adev->wb.wb[ring->wptr_offs]);
+	} else {
+		wptr = RREG32(SOC15_REG_OFFSET(GC, 0, mmCP_RB0_WPTR));
+		wptr += (u64)RREG32(SOC15_REG_OFFSET(GC, 0, mmCP_RB0_WPTR_HI)) << 32;
+	}
+
+	return wptr;
+}
+
+static void gfx_v9_0_ring_set_wptr_gfx(struct amdgpu_ring *ring)
+{
+	struct amdgpu_device *adev = ring->adev;
+
+	if (ring->use_doorbell) {
+		/* XXX check if swapping is necessary on BE */
+		atomic64_set((atomic64_t *)&adev->wb.wb[ring->wptr_offs], ring->wptr);
+		WDOORBELL64(ring->doorbell_index, ring->wptr);
+	} else {
+		WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_RB0_WPTR), lower_32_bits(ring->wptr));
+		WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_RB0_WPTR_HI), upper_32_bits(ring->wptr));
+	}
+}
+
+static void gfx_v9_0_ring_emit_hdp_flush(struct amdgpu_ring *ring)
+{
+	u32 ref_and_mask, reg_mem_engine;
+	/* only Vega10 is supported so far; default to its NBIO registers
+	 * so the pointer is never used uninitialized
+	 */
+	struct nbio_hdp_flush_reg *nbio_hf_reg = &nbio_v6_1_hdp_flush_reg;
+
+	if (ring->funcs->type == AMDGPU_RING_TYPE_COMPUTE) {
+		switch (ring->me) {
+		case 1:
+			ref_and_mask = nbio_hf_reg->ref_and_mask_cp2 << ring->pipe;
+			break;
+		case 2:
+			ref_and_mask = nbio_hf_reg->ref_and_mask_cp6 << ring->pipe;
+			break;
+		default:
+			return;
+		}
+		reg_mem_engine = 0;
+	} else {
+		ref_and_mask = nbio_hf_reg->ref_and_mask_cp0;
+		reg_mem_engine = 1; /* pfp */
+	}
+
+	gfx_v9_0_wait_reg_mem(ring, reg_mem_engine, 0, 1,
+			      nbio_hf_reg->hdp_flush_req_offset,
+			      nbio_hf_reg->hdp_flush_done_offset,
+			      ref_and_mask, ref_and_mask, 0x20);
+}
+
+static void gfx_v9_0_ring_emit_hdp_invalidate(struct amdgpu_ring *ring)
+{
+	gfx_v9_0_write_data_to_reg(ring, 0, true,
+				   SOC15_REG_OFFSET(HDP, 0, mmHDP_DEBUG0), 1);
+}
+
+static void gfx_v9_0_ring_emit_ib_gfx(struct amdgpu_ring *ring,
+				      struct amdgpu_ib *ib,
+				      unsigned vm_id, bool ctx_switch)
+{
+	u32 header, control = 0;
+
+	if (ib->flags & AMDGPU_IB_FLAG_CE)
+		header = PACKET3(PACKET3_INDIRECT_BUFFER_CONST, 2);
+	else
+		header = PACKET3(PACKET3_INDIRECT_BUFFER, 2);
+
+	control |= ib->length_dw | (vm_id << 24);
+
+	amdgpu_ring_write(ring, header);
+	BUG_ON(ib->gpu_addr & 0x3); /* Dword align */
+	amdgpu_ring_write(ring,
+#ifdef __BIG_ENDIAN
+			  (2 << 0) |
+#endif
+			  lower_32_bits(ib->gpu_addr));
+	amdgpu_ring_write(ring, upper_32_bits(ib->gpu_addr));
+	amdgpu_ring_write(ring, control);
+}
+
+#define	INDIRECT_BUFFER_VALID                   (1 << 23)
+
+static void gfx_v9_0_ring_emit_ib_compute(struct amdgpu_ring *ring,
+					  struct amdgpu_ib *ib,
+					  unsigned vm_id, bool ctx_switch)
+{
+	u32 control = INDIRECT_BUFFER_VALID | ib->length_dw | (vm_id << 24);
+
+	amdgpu_ring_write(ring, PACKET3(PACKET3_INDIRECT_BUFFER, 2));
+	BUG_ON(ib->gpu_addr & 0x3); /* Dword align */
+	amdgpu_ring_write(ring,
+#ifdef __BIG_ENDIAN
+			  (2 << 0) |
+#endif
+			  lower_32_bits(ib->gpu_addr));
+	amdgpu_ring_write(ring, upper_32_bits(ib->gpu_addr));
+	amdgpu_ring_write(ring, control);
+}
+
+static void gfx_v9_0_ring_emit_fence(struct amdgpu_ring *ring, u64 addr,
+				     u64 seq, unsigned flags)
+{
+	bool write64bit = flags & AMDGPU_FENCE_FLAG_64BIT;
+	bool int_sel = flags & AMDGPU_FENCE_FLAG_INT;
+
+	/* RELEASE_MEM - flush caches, send int */
+	amdgpu_ring_write(ring, PACKET3(PACKET3_RELEASE_MEM, 6));
+	amdgpu_ring_write(ring, (EOP_TCL1_ACTION_EN |
+				 EOP_TC_ACTION_EN |
+				 EOP_TC_WB_ACTION_EN |
+				 EOP_TC_MD_ACTION_EN |
+				 EVENT_TYPE(CACHE_FLUSH_AND_INV_TS_EVENT) |
+				 EVENT_INDEX(5)));
+	amdgpu_ring_write(ring, DATA_SEL(write64bit ? 2 : 1) | INT_SEL(int_sel ? 2 : 0));
+
+	/*
+	 * The address should be Qword aligned for a 64-bit write and
+	 * Dword aligned if only the low 32 bits are sent (data high discarded).
+	 */
+	if (write64bit)
+		BUG_ON(addr & 0x7);
+	else
+		BUG_ON(addr & 0x3);
+	amdgpu_ring_write(ring, lower_32_bits(addr));
+	amdgpu_ring_write(ring, upper_32_bits(addr));
+	amdgpu_ring_write(ring, lower_32_bits(seq));
+	amdgpu_ring_write(ring, upper_32_bits(seq));
+	amdgpu_ring_write(ring, 0);
+}
+
+static void gfx_v9_0_ring_emit_pipeline_sync(struct amdgpu_ring *ring)
+{
+	int usepfp = (ring->funcs->type == AMDGPU_RING_TYPE_GFX);
+	uint32_t seq = ring->fence_drv.sync_seq;
+	uint64_t addr = ring->fence_drv.gpu_addr;
+
+	gfx_v9_0_wait_reg_mem(ring, usepfp, 1, 0,
+			      lower_32_bits(addr), upper_32_bits(addr),
+			      seq, 0xffffffff, 4);
+}
+
+static void gfx_v9_0_ring_emit_vm_flush(struct amdgpu_ring *ring,
+					unsigned vm_id, uint64_t pd_addr)
+{
+	int usepfp = (ring->funcs->type == AMDGPU_RING_TYPE_GFX);
+	unsigned eng = ring->idx;
+	unsigned i;
+
+	pd_addr = pd_addr | 0x1; /* valid bit */
+	/* now only use physical base address of PDE and valid */
+	BUG_ON(pd_addr & 0xFFFF00000000003EULL);
+
+	for (i = 0; i < AMDGPU_MAX_VMHUBS; ++i) {
+		struct amdgpu_vmhub *hub = &ring->adev->vmhub[i];
+		uint32_t req = hub->get_invalidate_req(vm_id);
+
+		gfx_v9_0_write_data_to_reg(ring, usepfp, true,
+					   hub->ctx0_ptb_addr_lo32
+					   + (2 * vm_id),
+					   lower_32_bits(pd_addr));
+
+		gfx_v9_0_write_data_to_reg(ring, usepfp, true,
+					   hub->ctx0_ptb_addr_hi32
+					   + (2 * vm_id),
+					   upper_32_bits(pd_addr));
+
+		gfx_v9_0_write_data_to_reg(ring, usepfp, true,
+					   hub->vm_inv_eng0_req + eng, req);
+
+		/* wait for the invalidate to complete */
+		gfx_v9_0_wait_reg_mem(ring, 0, 0, 0, hub->vm_inv_eng0_ack +
+				      eng, 0, 1 << vm_id, 1 << vm_id, 0x20);
+	}
+
+	/* compute doesn't have PFP */
+	if (usepfp) {
+		/* sync PFP to ME, otherwise we might get invalid PFP reads */
+		amdgpu_ring_write(ring, PACKET3(PACKET3_PFP_SYNC_ME, 0));
+		amdgpu_ring_write(ring, 0x0);
+		/* emit 128 dw of nops to prevent the CE from accessing the VM before vm_flush finishes */
+		amdgpu_ring_insert_nop(ring, 128);
+	}
+}
+
+static u64 gfx_v9_0_ring_get_rptr_compute(struct amdgpu_ring *ring)
+{
+	return ring->adev->wb.wb[ring->rptr_offs]; /* gfx9 hardware is 32bit rptr */
+}
+
+static u64 gfx_v9_0_ring_get_wptr_compute(struct amdgpu_ring *ring)
+{
+	u64 wptr;
+
+	/* XXX check if swapping is necessary on BE */
+	if (ring->use_doorbell)
+		wptr = atomic64_read((atomic64_t *)&ring->adev->wb.wb[ring->wptr_offs]);
+	else
+		BUG();
+	return wptr;
+}
+
+static void gfx_v9_0_ring_set_wptr_compute(struct amdgpu_ring *ring)
+{
+	struct amdgpu_device *adev = ring->adev;
+
+	/* XXX check if swapping is necessary on BE */
+	if (ring->use_doorbell) {
+		atomic64_set((atomic64_t *)&adev->wb.wb[ring->wptr_offs], ring->wptr);
+		WDOORBELL64(ring->doorbell_index, ring->wptr);
+	} else {
+		BUG(); /* only DOORBELL method supported on gfx9 now */
+	}
+}
+
+static void gfx_v9_ring_emit_sb(struct amdgpu_ring *ring)
+{
+	amdgpu_ring_write(ring, PACKET3(PACKET3_SWITCH_BUFFER, 0));
+	amdgpu_ring_write(ring, 0);
+}
+
+static void gfx_v9_ring_emit_cntxcntl(struct amdgpu_ring *ring, uint32_t flags)
+{
+	uint32_t dw2 = 0;
+
+	dw2 |= 0x80000000; /* set load_enable otherwise this packet is just NOPs */
+	if (flags & AMDGPU_HAVE_CTX_SWITCH) {
+		/* set load_global_config & load_global_uconfig */
+		dw2 |= 0x8001;
+		/* set load_cs_sh_regs */
+		dw2 |= 0x01000000;
+		/* set load_per_context_state & load_gfx_sh_regs for GFX */
+		dw2 |= 0x10002;
+
+		/* set load_ce_ram if preamble presented */
+		if (AMDGPU_PREAMBLE_IB_PRESENT & flags)
+			dw2 |= 0x10000000;
+	} else {
+		/* still load_ce_ram the first time a preamble is presented,
+		 * even though no context switch happens.
+		 */
+		if (AMDGPU_PREAMBLE_IB_PRESENT_FIRST & flags)
+			dw2 |= 0x10000000;
+	}
+
+	amdgpu_ring_write(ring, PACKET3(PACKET3_CONTEXT_CONTROL, 1));
+	amdgpu_ring_write(ring, dw2);
+	amdgpu_ring_write(ring, 0);
+}
+
+static void gfx_v9_0_set_gfx_eop_interrupt_state(struct amdgpu_device *adev,
+						 enum amdgpu_interrupt_state state)
+{
+	u32 cp_int_cntl;
+
+	switch (state) {
+	case AMDGPU_IRQ_STATE_DISABLE:
+		cp_int_cntl = RREG32(SOC15_REG_OFFSET(GC, 0, mmCP_INT_CNTL_RING0));
+		cp_int_cntl = REG_SET_FIELD(cp_int_cntl, CP_INT_CNTL_RING0,
+					    TIME_STAMP_INT_ENABLE, 0);
+		WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_INT_CNTL_RING0), cp_int_cntl);
+		break;
+	case AMDGPU_IRQ_STATE_ENABLE:
+		cp_int_cntl = RREG32(SOC15_REG_OFFSET(GC, 0, mmCP_INT_CNTL_RING0));
+		cp_int_cntl =
+			REG_SET_FIELD(cp_int_cntl, CP_INT_CNTL_RING0,
+				      TIME_STAMP_INT_ENABLE, 1);
+		WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_INT_CNTL_RING0), cp_int_cntl);
+		break;
+	default:
+		break;
+	}
+}
+
+static void gfx_v9_0_set_compute_eop_interrupt_state(struct amdgpu_device *adev,
+						     int me, int pipe,
+						     enum amdgpu_interrupt_state state)
+{
+	u32 mec_int_cntl, mec_int_cntl_reg;
+
+	/*
+	 * amdgpu controls only pipe 0 of MEC1. That's why this function only
+	 * handles the setting of interrupts for this specific pipe. All other
+	 * pipes' interrupts are set by amdkfd.
+	 */
+
+	if (me == 1) {
+		switch (pipe) {
+		case 0:
+			mec_int_cntl_reg = SOC15_REG_OFFSET(GC, 0, mmCP_ME1_PIPE0_INT_CNTL);
+			break;
+		default:
+			DRM_DEBUG("invalid pipe %d\n", pipe);
+			return;
+		}
+	} else {
+		DRM_DEBUG("invalid me %d\n", me);
+		return;
+	}
+
+	switch (state) {
+	case AMDGPU_IRQ_STATE_DISABLE:
+		mec_int_cntl = RREG32(mec_int_cntl_reg);
+		mec_int_cntl = REG_SET_FIELD(mec_int_cntl, CP_ME1_PIPE0_INT_CNTL,
+					     TIME_STAMP_INT_ENABLE, 0);
+		WREG32(mec_int_cntl_reg, mec_int_cntl);
+		break;
+	case AMDGPU_IRQ_STATE_ENABLE:
+		mec_int_cntl = RREG32(mec_int_cntl_reg);
+		mec_int_cntl = REG_SET_FIELD(mec_int_cntl, CP_ME1_PIPE0_INT_CNTL,
+					     TIME_STAMP_INT_ENABLE, 1);
+		WREG32(mec_int_cntl_reg, mec_int_cntl);
+		break;
+	default:
+		break;
+	}
+}
+
+static int gfx_v9_0_set_priv_reg_fault_state(struct amdgpu_device *adev,
+					     struct amdgpu_irq_src *source,
+					     unsigned type,
+					     enum amdgpu_interrupt_state state)
+{
+	u32 cp_int_cntl;
+
+	switch (state) {
+	case AMDGPU_IRQ_STATE_DISABLE:
+		cp_int_cntl = RREG32(SOC15_REG_OFFSET(GC, 0, mmCP_INT_CNTL_RING0));
+		cp_int_cntl = REG_SET_FIELD(cp_int_cntl, CP_INT_CNTL_RING0,
+					    PRIV_REG_INT_ENABLE, 0);
+		WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_INT_CNTL_RING0), cp_int_cntl);
+		break;
+	case AMDGPU_IRQ_STATE_ENABLE:
+		cp_int_cntl = RREG32(SOC15_REG_OFFSET(GC, 0, mmCP_INT_CNTL_RING0));
+		cp_int_cntl = REG_SET_FIELD(cp_int_cntl, CP_INT_CNTL_RING0,
+					    PRIV_REG_INT_ENABLE, 1);
+		WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_INT_CNTL_RING0), cp_int_cntl);
+		break;
+	default:
+		break;
+	}
+
+	return 0;
+}
+
+static int gfx_v9_0_set_priv_inst_fault_state(struct amdgpu_device *adev,
+					      struct amdgpu_irq_src *source,
+					      unsigned type,
+					      enum amdgpu_interrupt_state state)
+{
+	u32 cp_int_cntl;
+
+	switch (state) {
+	case AMDGPU_IRQ_STATE_DISABLE:
+		cp_int_cntl = RREG32(SOC15_REG_OFFSET(GC, 0, mmCP_INT_CNTL_RING0));
+		cp_int_cntl = REG_SET_FIELD(cp_int_cntl, CP_INT_CNTL_RING0,
+					    PRIV_INSTR_INT_ENABLE, 0);
+		WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_INT_CNTL_RING0), cp_int_cntl);
+		break;
+	case AMDGPU_IRQ_STATE_ENABLE:
+		cp_int_cntl = RREG32(SOC15_REG_OFFSET(GC, 0, mmCP_INT_CNTL_RING0));
+		cp_int_cntl = REG_SET_FIELD(cp_int_cntl, CP_INT_CNTL_RING0,
+					    PRIV_INSTR_INT_ENABLE, 1);
+		WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_INT_CNTL_RING0), cp_int_cntl);
+		break;
+	default:
+		break;
+	}
+
+	return 0;
+}
+
+static int gfx_v9_0_set_eop_interrupt_state(struct amdgpu_device *adev,
+					    struct amdgpu_irq_src *src,
+					    unsigned type,
+					    enum amdgpu_interrupt_state state)
+{
+	switch (type) {
+	case AMDGPU_CP_IRQ_GFX_EOP:
+		gfx_v9_0_set_gfx_eop_interrupt_state(adev, state);
+		break;
+	case AMDGPU_CP_IRQ_COMPUTE_MEC1_PIPE0_EOP:
+		gfx_v9_0_set_compute_eop_interrupt_state(adev, 1, 0, state);
+		break;
+	case AMDGPU_CP_IRQ_COMPUTE_MEC1_PIPE1_EOP:
+		gfx_v9_0_set_compute_eop_interrupt_state(adev, 1, 1, state);
+		break;
+	case AMDGPU_CP_IRQ_COMPUTE_MEC1_PIPE2_EOP:
+		gfx_v9_0_set_compute_eop_interrupt_state(adev, 1, 2, state);
+		break;
+	case AMDGPU_CP_IRQ_COMPUTE_MEC1_PIPE3_EOP:
+		gfx_v9_0_set_compute_eop_interrupt_state(adev, 1, 3, state);
+		break;
+	case AMDGPU_CP_IRQ_COMPUTE_MEC2_PIPE0_EOP:
+		gfx_v9_0_set_compute_eop_interrupt_state(adev, 2, 0, state);
+		break;
+	case AMDGPU_CP_IRQ_COMPUTE_MEC2_PIPE1_EOP:
+		gfx_v9_0_set_compute_eop_interrupt_state(adev, 2, 1, state);
+		break;
+	case AMDGPU_CP_IRQ_COMPUTE_MEC2_PIPE2_EOP:
+		gfx_v9_0_set_compute_eop_interrupt_state(adev, 2, 2, state);
+		break;
+	case AMDGPU_CP_IRQ_COMPUTE_MEC2_PIPE3_EOP:
+		gfx_v9_0_set_compute_eop_interrupt_state(adev, 2, 3, state);
+		break;
+	default:
+		break;
+	}
+	return 0;
+}
+
+static int gfx_v9_0_eop_irq(struct amdgpu_device *adev,
+			    struct amdgpu_irq_src *source,
+			    struct amdgpu_iv_entry *entry)
+{
+	int i;
+	u8 me_id, pipe_id, queue_id;
+	struct amdgpu_ring *ring;
+
+	DRM_DEBUG("IH: CP EOP\n");
+	me_id = (entry->ring_id & 0x0c) >> 2;
+	pipe_id = (entry->ring_id & 0x03) >> 0;
+	queue_id = (entry->ring_id & 0x70) >> 4;
+
+	switch (me_id) {
+	case 0:
+		amdgpu_fence_process(&adev->gfx.gfx_ring[0]);
+		break;
+	case 1:
+	case 2:
+		for (i = 0; i < adev->gfx.num_compute_rings; i++) {
+			ring = &adev->gfx.compute_ring[i];
+			/* Per-queue interrupt is supported for MEC starting from VI.
+			 * The interrupt can only be enabled/disabled per pipe instead
+			 * of per queue.
+			 */
+			if ((ring->me == me_id) && (ring->pipe == pipe_id) && (ring->queue == queue_id))
+				amdgpu_fence_process(ring);
+		}
+		break;
+	}
+	return 0;
+}
+
+static int gfx_v9_0_priv_reg_irq(struct amdgpu_device *adev,
+				 struct amdgpu_irq_src *source,
+				 struct amdgpu_iv_entry *entry)
+{
+	DRM_ERROR("Illegal register access in command stream\n");
+	schedule_work(&adev->reset_work);
+	return 0;
+}
+
+static int gfx_v9_0_priv_inst_irq(struct amdgpu_device *adev,
+				  struct amdgpu_irq_src *source,
+				  struct amdgpu_iv_entry *entry)
+{
+	DRM_ERROR("Illegal instruction in command stream\n");
+	schedule_work(&adev->reset_work);
+	return 0;
+}
+
+const struct amd_ip_funcs gfx_v9_0_ip_funcs = {
+	.name = "gfx_v9_0",
+	.early_init = gfx_v9_0_early_init,
+	.late_init = gfx_v9_0_late_init,
+	.sw_init = gfx_v9_0_sw_init,
+	.sw_fini = gfx_v9_0_sw_fini,
+	.hw_init = gfx_v9_0_hw_init,
+	.hw_fini = gfx_v9_0_hw_fini,
+	.suspend = gfx_v9_0_suspend,
+	.resume = gfx_v9_0_resume,
+	.is_idle = gfx_v9_0_is_idle,
+	.wait_for_idle = gfx_v9_0_wait_for_idle,
+	.soft_reset = gfx_v9_0_soft_reset,
+	.set_clockgating_state = gfx_v9_0_set_clockgating_state,
+	.set_powergating_state = gfx_v9_0_set_powergating_state,
+};
+
+static const struct amdgpu_ring_funcs gfx_v9_0_ring_funcs_gfx = {
+	.type = AMDGPU_RING_TYPE_GFX,
+	.align_mask = 0xff,
+	.nop = PACKET3(PACKET3_NOP, 0x3FFF),
+	.support_64bit_ptrs = true,
+	.get_rptr = gfx_v9_0_ring_get_rptr_gfx,
+	.get_wptr = gfx_v9_0_ring_get_wptr_gfx,
+	.set_wptr = gfx_v9_0_ring_set_wptr_gfx,
+	.emit_frame_size =
+		20 + /* gfx_v9_0_ring_emit_gds_switch */
+		7 + /* gfx_v9_0_ring_emit_hdp_flush */
+		5 + /* gfx_v9_0_ring_emit_hdp_invalidate */
+		8 + 8 + 8 +/* gfx_v9_0_ring_emit_fence x3 for user fence, vm fence */
+		7 + /* gfx_v9_0_ring_emit_pipeline_sync */
+		128 + 66 + /* gfx_v9_0_ring_emit_vm_flush */
+		2 + /* gfx_v9_ring_emit_sb */
+		3, /* gfx_v9_ring_emit_cntxcntl */
+	.emit_ib_size =	4, /* gfx_v9_0_ring_emit_ib_gfx */
+	.emit_ib = gfx_v9_0_ring_emit_ib_gfx,
+	.emit_fence = gfx_v9_0_ring_emit_fence,
+	.emit_pipeline_sync = gfx_v9_0_ring_emit_pipeline_sync,
+	.emit_vm_flush = gfx_v9_0_ring_emit_vm_flush,
+	.emit_gds_switch = gfx_v9_0_ring_emit_gds_switch,
+	.emit_hdp_flush = gfx_v9_0_ring_emit_hdp_flush,
+	.emit_hdp_invalidate = gfx_v9_0_ring_emit_hdp_invalidate,
+	.test_ring = gfx_v9_0_ring_test_ring,
+	.test_ib = gfx_v9_0_ring_test_ib,
+	.insert_nop = amdgpu_ring_insert_nop,
+	.pad_ib = amdgpu_ring_generic_pad_ib,
+	.emit_switch_buffer = gfx_v9_ring_emit_sb,
+	.emit_cntxcntl = gfx_v9_ring_emit_cntxcntl,
+};
+
+static const struct amdgpu_ring_funcs gfx_v9_0_ring_funcs_compute = {
+	.type = AMDGPU_RING_TYPE_COMPUTE,
+	.align_mask = 0xff,
+	.nop = PACKET3(PACKET3_NOP, 0x3FFF),
+	.support_64bit_ptrs = true,
+	.get_rptr = gfx_v9_0_ring_get_rptr_compute,
+	.get_wptr = gfx_v9_0_ring_get_wptr_compute,
+	.set_wptr = gfx_v9_0_ring_set_wptr_compute,
+	.emit_frame_size =
+		20 + /* gfx_v9_0_ring_emit_gds_switch */
+		7 + /* gfx_v9_0_ring_emit_hdp_flush */
+		5 + /* gfx_v9_0_ring_emit_hdp_invalidate */
+		7 + /* gfx_v9_0_ring_emit_pipeline_sync */
+		64 + /* gfx_v9_0_ring_emit_vm_flush */
+		8 + 8 + 8, /* gfx_v9_0_ring_emit_fence x3 for user fence, vm fence */
+	.emit_ib_size =	4, /* gfx_v9_0_ring_emit_ib_compute */
+	.emit_ib = gfx_v9_0_ring_emit_ib_compute,
+	.emit_fence = gfx_v9_0_ring_emit_fence,
+	.emit_pipeline_sync = gfx_v9_0_ring_emit_pipeline_sync,
+	.emit_vm_flush = gfx_v9_0_ring_emit_vm_flush,
+	.emit_gds_switch = gfx_v9_0_ring_emit_gds_switch,
+	.emit_hdp_flush = gfx_v9_0_ring_emit_hdp_flush,
+	.emit_hdp_invalidate = gfx_v9_0_ring_emit_hdp_invalidate,
+	.test_ring = gfx_v9_0_ring_test_ring,
+	.test_ib = gfx_v9_0_ring_test_ib,
+	.insert_nop = amdgpu_ring_insert_nop,
+	.pad_ib = amdgpu_ring_generic_pad_ib,
+};
+
+
+static void gfx_v9_0_set_ring_funcs(struct amdgpu_device *adev)
+{
+	int i;
+
+	for (i = 0; i < adev->gfx.num_gfx_rings; i++)
+		adev->gfx.gfx_ring[i].funcs = &gfx_v9_0_ring_funcs_gfx;
+
+	for (i = 0; i < adev->gfx.num_compute_rings; i++)
+		adev->gfx.compute_ring[i].funcs = &gfx_v9_0_ring_funcs_compute;
+}
+
+static const struct amdgpu_irq_src_funcs gfx_v9_0_eop_irq_funcs = {
+	.set = gfx_v9_0_set_eop_interrupt_state,
+	.process = gfx_v9_0_eop_irq,
+};
+
+static const struct amdgpu_irq_src_funcs gfx_v9_0_priv_reg_irq_funcs = {
+	.set = gfx_v9_0_set_priv_reg_fault_state,
+	.process = gfx_v9_0_priv_reg_irq,
+};
+
+static const struct amdgpu_irq_src_funcs gfx_v9_0_priv_inst_irq_funcs = {
+	.set = gfx_v9_0_set_priv_inst_fault_state,
+	.process = gfx_v9_0_priv_inst_irq,
+};
+
+static void gfx_v9_0_set_irq_funcs(struct amdgpu_device *adev)
+{
+	adev->gfx.eop_irq.num_types = AMDGPU_CP_IRQ_LAST;
+	adev->gfx.eop_irq.funcs = &gfx_v9_0_eop_irq_funcs;
+
+	adev->gfx.priv_reg_irq.num_types = 1;
+	adev->gfx.priv_reg_irq.funcs = &gfx_v9_0_priv_reg_irq_funcs;
+
+	adev->gfx.priv_inst_irq.num_types = 1;
+	adev->gfx.priv_inst_irq.funcs = &gfx_v9_0_priv_inst_irq_funcs;
+}
+
+static void gfx_v9_0_set_rlc_funcs(struct amdgpu_device *adev)
+{
+	switch (adev->asic_type) {
+	case CHIP_VEGA10:
+		adev->gfx.rlc.funcs = &gfx_v9_0_rlc_funcs;
+		break;
+	default:
+		break;
+	}
+}
+
+static void gfx_v9_0_set_gds_init(struct amdgpu_device *adev)
+{
+	/* init asic gds info */
+	adev->gds.mem.total_size = RREG32(SOC15_REG_OFFSET(GC, 0, mmGDS_VMID0_SIZE));
+	adev->gds.gws.total_size = 64;
+	adev->gds.oa.total_size = 16;
+
+	if (adev->gds.mem.total_size == 64 * 1024) {
+		adev->gds.mem.gfx_partition_size = 4096;
+		adev->gds.mem.cs_partition_size = 4096;
+
+		adev->gds.gws.gfx_partition_size = 4;
+		adev->gds.gws.cs_partition_size = 4;
+
+		adev->gds.oa.gfx_partition_size = 4;
+		adev->gds.oa.cs_partition_size = 1;
+	} else {
+		adev->gds.mem.gfx_partition_size = 1024;
+		adev->gds.mem.cs_partition_size = 1024;
+
+		adev->gds.gws.gfx_partition_size = 16;
+		adev->gds.gws.cs_partition_size = 16;
+
+		adev->gds.oa.gfx_partition_size = 4;
+		adev->gds.oa.cs_partition_size = 4;
+	}
+}
+
+static u32 gfx_v9_0_get_cu_active_bitmap(struct amdgpu_device *adev)
+{
+	u32 data, mask;
+
+	data = RREG32(SOC15_REG_OFFSET(GC, 0, mmCC_GC_SHADER_ARRAY_CONFIG));
+	data |= RREG32(SOC15_REG_OFFSET(GC, 0, mmGC_USER_SHADER_ARRAY_CONFIG));
+
+	data &= CC_GC_SHADER_ARRAY_CONFIG__INACTIVE_CUS_MASK;
+	data >>= CC_GC_SHADER_ARRAY_CONFIG__INACTIVE_CUS__SHIFT;
+
+	mask = gfx_v9_0_create_bitmask(adev->gfx.config.max_cu_per_sh);
+
+	return (~data) & mask;
+}
+
+static int gfx_v9_0_get_cu_info(struct amdgpu_device *adev,
+				 struct amdgpu_cu_info *cu_info)
+{
+	int i, j, k, counter, active_cu_number = 0;
+	u32 mask, bitmap, ao_bitmap, ao_cu_mask = 0;
+
+	if (!adev || !cu_info)
+		return -EINVAL;
+
+	memset(cu_info, 0, sizeof(*cu_info));
+
+	mutex_lock(&adev->grbm_idx_mutex);
+	for (i = 0; i < adev->gfx.config.max_shader_engines; i++) {
+		for (j = 0; j < adev->gfx.config.max_sh_per_se; j++) {
+			mask = 1;
+			ao_bitmap = 0;
+			counter = 0;
+			gfx_v9_0_select_se_sh(adev, i, j, 0xffffffff);
+			bitmap = gfx_v9_0_get_cu_active_bitmap(adev);
+			cu_info->bitmap[i][j] = bitmap;
+
+			for (k = 0; k < 16; k++) {
+				if (bitmap & mask) {
+					if (counter < 2)
+						ao_bitmap |= mask;
+					counter++;
+				}
+				mask <<= 1;
+			}
+			active_cu_number += counter;
+			ao_cu_mask |= (ao_bitmap << (i * 16 + j * 8));
+		}
+	}
+	gfx_v9_0_select_se_sh(adev, 0xffffffff, 0xffffffff, 0xffffffff);
+	mutex_unlock(&adev->grbm_idx_mutex);
+
+	cu_info->number = active_cu_number;
+	cu_info->ao_cu_mask = ao_cu_mask;
+
+	return 0;
+}
+
+static int gfx_v9_0_init_queue(struct amdgpu_ring *ring)
+{
+	int r, j;
+	u32 tmp;
+	bool use_doorbell = true;
+	u64 hqd_gpu_addr;
+	u64 mqd_gpu_addr;
+	u64 eop_gpu_addr;
+	u64 wb_gpu_addr;
+	u32 *buf;
+	struct v9_mqd *mqd;
+	struct amdgpu_device *adev;
+
+	adev = ring->adev;
+	if (ring->mqd_obj == NULL) {
+		r = amdgpu_bo_create(adev,
+				sizeof(struct v9_mqd),
+				PAGE_SIZE, true,
+				AMDGPU_GEM_DOMAIN_GTT, 0, NULL,
+				NULL, &ring->mqd_obj);
+		if (r) {
+			dev_warn(adev->dev, "(%d) create MQD bo failed\n", r);
+			return r;
+		}
+	}
+
+	r = amdgpu_bo_reserve(ring->mqd_obj, false);
+	if (unlikely(r != 0)) {
+		gfx_v9_0_cp_compute_fini(adev);
+		return r;
+	}
+
+	r = amdgpu_bo_pin(ring->mqd_obj, AMDGPU_GEM_DOMAIN_GTT,
+				  &mqd_gpu_addr);
+	if (r) {
+		dev_warn(adev->dev, "(%d) pin MQD bo failed\n", r);
+		gfx_v9_0_cp_compute_fini(adev);
+		return r;
+	}
+	r = amdgpu_bo_kmap(ring->mqd_obj, (void **)&buf);
+	if (r) {
+		dev_warn(adev->dev, "(%d) map MQD bo failed\n", r);
+		gfx_v9_0_cp_compute_fini(adev);
+		return r;
+	}
+
+	/* init the mqd struct */
+	memset(buf, 0, sizeof(struct v9_mqd));
+
+	mqd = (struct v9_mqd *)buf;
+	mqd->header = 0xC0310800;
+	mqd->compute_pipelinestat_enable = 0x00000001;
+	mqd->compute_static_thread_mgmt_se0 = 0xffffffff;
+	mqd->compute_static_thread_mgmt_se1 = 0xffffffff;
+	mqd->compute_static_thread_mgmt_se2 = 0xffffffff;
+	mqd->compute_static_thread_mgmt_se3 = 0xffffffff;
+	mqd->compute_misc_reserved = 0x00000003;
+	mutex_lock(&adev->srbm_mutex);
+	soc15_grbm_select(adev, ring->me,
+			       ring->pipe,
+			       ring->queue, 0);
+	/* disable wptr polling */
+	tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmCP_PQ_WPTR_POLL_CNTL));
+	tmp = REG_SET_FIELD(tmp, CP_PQ_WPTR_POLL_CNTL, EN, 0);
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_PQ_WPTR_POLL_CNTL), tmp);
+
+	/* write the EOP addr */
+	/* EOP addresses for other ME/pipe combinations are not handled */
+	BUG_ON(ring->me != 1 || ring->pipe != 0);
+	eop_gpu_addr = adev->gfx.mec.hpd_eop_gpu_addr + (ring->queue * MEC_HPD_SIZE);
+	eop_gpu_addr >>= 8;
+
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_HQD_EOP_BASE_ADDR), lower_32_bits(eop_gpu_addr));
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_HQD_EOP_BASE_ADDR_HI), upper_32_bits(eop_gpu_addr));
+	mqd->cp_hqd_eop_base_addr_lo = lower_32_bits(eop_gpu_addr);
+	mqd->cp_hqd_eop_base_addr_hi = upper_32_bits(eop_gpu_addr);
+
+	/* set the EOP size, register value is 2^(EOP_SIZE+1) dwords */
+	tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmCP_HQD_EOP_CONTROL));
+	tmp = REG_SET_FIELD(tmp, CP_HQD_EOP_CONTROL, EOP_SIZE,
+				    (order_base_2(MEC_HPD_SIZE / 4) - 1));
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_HQD_EOP_CONTROL), tmp);
+
+	/* enable doorbell? */
+	tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmCP_HQD_PQ_DOORBELL_CONTROL));
+	if (use_doorbell)
+		tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_DOORBELL_CONTROL, DOORBELL_EN, 1);
+	else
+		tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_DOORBELL_CONTROL, DOORBELL_EN, 0);
+
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_HQD_PQ_DOORBELL_CONTROL), tmp);
+	mqd->cp_hqd_pq_doorbell_control = tmp;
+
+	/* disable the queue if it's active */
+	ring->wptr = 0;
+	mqd->cp_hqd_dequeue_request = 0;
+	mqd->cp_hqd_pq_rptr = 0;
+	mqd->cp_hqd_pq_wptr_lo = 0;
+	mqd->cp_hqd_pq_wptr_hi = 0;
+	if (RREG32(SOC15_REG_OFFSET(GC, 0, mmCP_HQD_ACTIVE)) & 1) {
+		WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_HQD_DEQUEUE_REQUEST), 1);
+		for (j = 0; j < adev->usec_timeout; j++) {
+			if (!(RREG32(SOC15_REG_OFFSET(GC, 0, mmCP_HQD_ACTIVE)) & 1))
+				break;
+			udelay(1);
+		}
+		WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_HQD_DEQUEUE_REQUEST), mqd->cp_hqd_dequeue_request);
+		WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_HQD_PQ_RPTR), mqd->cp_hqd_pq_rptr);
+		WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_HQD_PQ_WPTR_LO), mqd->cp_hqd_pq_wptr_lo);
+		WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_HQD_PQ_WPTR_HI), mqd->cp_hqd_pq_wptr_hi);
+	}
+
+	/* set the pointer to the MQD */
+	mqd->cp_mqd_base_addr_lo = mqd_gpu_addr & 0xfffffffc;
+	mqd->cp_mqd_base_addr_hi = upper_32_bits(mqd_gpu_addr);
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_MQD_BASE_ADDR), mqd->cp_mqd_base_addr_lo);
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_MQD_BASE_ADDR_HI), mqd->cp_mqd_base_addr_hi);
+
+	/* set MQD vmid to 0 */
+	tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmCP_MQD_CONTROL));
+	tmp = REG_SET_FIELD(tmp, CP_MQD_CONTROL, VMID, 0);
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_MQD_CONTROL), tmp);
+	mqd->cp_mqd_control = tmp;
+
+	/* set the pointer to the HQD, this is similar to CP_RB0_BASE/_HI */
+	hqd_gpu_addr = ring->gpu_addr >> 8;
+	mqd->cp_hqd_pq_base_lo = hqd_gpu_addr;
+	mqd->cp_hqd_pq_base_hi = upper_32_bits(hqd_gpu_addr);
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_HQD_PQ_BASE), mqd->cp_hqd_pq_base_lo);
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_HQD_PQ_BASE_HI), mqd->cp_hqd_pq_base_hi);
+
+	/* set up the HQD, this is similar to CP_RB0_CNTL */
+	tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmCP_HQD_PQ_CONTROL));
+	tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, QUEUE_SIZE,
+		(order_base_2(ring->ring_size / 4) - 1));
+	tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, RPTR_BLOCK_SIZE,
+		((order_base_2(AMDGPU_GPU_PAGE_SIZE / 4) - 1) << 8));
+#ifdef __BIG_ENDIAN
+	tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, ENDIAN_SWAP, 1);
+#endif
+	tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, UNORD_DISPATCH, 0);
+	tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, ROQ_PQ_IB_FLIP, 0);
+	tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, PRIV_STATE, 1);
+	tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, KMD_QUEUE, 1);
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_HQD_PQ_CONTROL), tmp);
+	mqd->cp_hqd_pq_control = tmp;
+
+	/* set the wb address whether it's enabled or not */
+	wb_gpu_addr = adev->wb.gpu_addr + (ring->rptr_offs * 4);
+	mqd->cp_hqd_pq_rptr_report_addr_lo = wb_gpu_addr & 0xfffffffc;
+	mqd->cp_hqd_pq_rptr_report_addr_hi =
+		upper_32_bits(wb_gpu_addr) & 0xffff;
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_HQD_PQ_RPTR_REPORT_ADDR),
+		mqd->cp_hqd_pq_rptr_report_addr_lo);
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_HQD_PQ_RPTR_REPORT_ADDR_HI),
+		mqd->cp_hqd_pq_rptr_report_addr_hi);
+
+	/* only used if CP_PQ_WPTR_POLL_CNTL.CP_PQ_WPTR_POLL_CNTL__EN_MASK=1 */
+	wb_gpu_addr = adev->wb.gpu_addr + (ring->wptr_offs * 4);
+	mqd->cp_hqd_pq_wptr_poll_addr_lo = wb_gpu_addr & 0xfffffffc;
+	mqd->cp_hqd_pq_wptr_poll_addr_hi = upper_32_bits(wb_gpu_addr) & 0xffff;
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_HQD_PQ_WPTR_POLL_ADDR),
+		mqd->cp_hqd_pq_wptr_poll_addr_lo);
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_HQD_PQ_WPTR_POLL_ADDR_HI),
+		mqd->cp_hqd_pq_wptr_poll_addr_hi);
+
+	/* enable the doorbell if requested */
+	if (use_doorbell) {
+		WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_MEC_DOORBELL_RANGE_LOWER),
+			(AMDGPU_DOORBELL64_KIQ * 2) << 2);
+		WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_MEC_DOORBELL_RANGE_UPPER),
+			(AMDGPU_DOORBELL64_MEC_RING7 * 2) << 2);
+		tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmCP_HQD_PQ_DOORBELL_CONTROL));
+		tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_DOORBELL_CONTROL,
+			DOORBELL_OFFSET, ring->doorbell_index);
+		tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_DOORBELL_CONTROL, DOORBELL_EN, 1);
+		tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_DOORBELL_CONTROL, DOORBELL_SOURCE, 0);
+		tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_DOORBELL_CONTROL, DOORBELL_HIT, 0);
+		mqd->cp_hqd_pq_doorbell_control = tmp;
+
+	} else {
+		mqd->cp_hqd_pq_doorbell_control = 0;
+	}
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_HQD_PQ_DOORBELL_CONTROL),
+		mqd->cp_hqd_pq_doorbell_control);
+
+	/* reset read and write pointers, similar to CP_RB0_WPTR/_RPTR */
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_HQD_PQ_WPTR_LO), mqd->cp_hqd_pq_wptr_lo);
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_HQD_PQ_WPTR_HI), mqd->cp_hqd_pq_wptr_hi);
+
+	/* set the vmid for the queue */
+	mqd->cp_hqd_vmid = 0;
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_HQD_VMID), mqd->cp_hqd_vmid);
+
+	tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmCP_HQD_PERSISTENT_STATE));
+	tmp = REG_SET_FIELD(tmp, CP_HQD_PERSISTENT_STATE, PRELOAD_SIZE, 0x53);
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_HQD_PERSISTENT_STATE), tmp);
+	mqd->cp_hqd_persistent_state = tmp;
+
+	/* activate the queue */
+	mqd->cp_hqd_active = 1;
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_HQD_ACTIVE), mqd->cp_hqd_active);
+
+	soc15_grbm_select(adev, 0, 0, 0, 0);
+	mutex_unlock(&adev->srbm_mutex);
+
+	amdgpu_bo_kunmap(ring->mqd_obj);
+	amdgpu_bo_unreserve(ring->mqd_obj);
+
+	if (use_doorbell) {
+		tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmCP_PQ_STATUS));
+		tmp = REG_SET_FIELD(tmp, CP_PQ_STATUS, DOORBELL_ENABLE, 1);
+		WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_PQ_STATUS), tmp);
+	}
+
+	return 0;
+}
+
+const struct amdgpu_ip_block_version gfx_v9_0_ip_block =
+{
+	.type = AMD_IP_BLOCK_TYPE_GFX,
+	.major = 9,
+	.minor = 0,
+	.rev = 0,
+	.funcs = &gfx_v9_0_ip_funcs,
+};
diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.h b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.h
new file mode 100644
index 0000000..56ef652
--- /dev/null
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.h
@@ -0,0 +1,35 @@
+/*
+ * Copyright 2016 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __GFX_V9_0_H__
+#define __GFX_V9_0_H__
+
+extern const struct amd_ip_funcs gfx_v9_0_ip_funcs;
+extern const struct amdgpu_ip_block_version gfx_v9_0_ip_block;
+
+void gfx_v9_0_select_se_sh(struct amdgpu_device *adev, u32 se_num, u32 sh_num);
+
+uint64_t gfx_v9_0_get_gpu_clock_counter(struct amdgpu_device *adev);
+int gfx_v9_0_get_cu_info(struct amdgpu_device *adev, struct amdgpu_cu_info *cu_info);
+
+#endif
-- 
2.5.5

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 101+ messages in thread

* [PATCH 049/100] drm/amdgpu: add vega10 interrupt handler
       [not found] ` <1490041835-11255-1-git-send-email-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (32 preceding siblings ...)
  2017-03-20 20:29   ` [PATCH 048/100] drm/amdgpu: implement GFX 9.0 support Alex Deucher
@ 2017-03-20 20:29   ` Alex Deucher
  2017-03-20 20:29   ` [PATCH 050/100] drm/amdgpu: add initial uvd 7.0 support for vega10 Alex Deucher
                     ` (51 subsequent siblings)
  85 siblings, 0 replies; 101+ messages in thread
From: Alex Deucher @ 2017-03-20 20:29 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Alex Deucher, Ken Wang

From: Ken Wang <Qingqing.Wang@amd.com>

Signed-off-by: Ken Wang <Qingqing.Wang@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/Makefile    |   3 +-
 drivers/gpu/drm/amd/amdgpu/vega10_ih.c | 424 +++++++++++++++++++++++++++++++++
 drivers/gpu/drm/amd/amdgpu/vega10_ih.h |  30 +++
 3 files changed, 456 insertions(+), 1 deletion(-)
 create mode 100644 drivers/gpu/drm/amd/amdgpu/vega10_ih.c
 create mode 100644 drivers/gpu/drm/amd/amdgpu/vega10_ih.h

diff --git a/drivers/gpu/drm/amd/amdgpu/Makefile b/drivers/gpu/drm/amd/amdgpu/Makefile
index 61f090f..bc29569 100644
--- a/drivers/gpu/drm/amd/amdgpu/Makefile
+++ b/drivers/gpu/drm/amd/amdgpu/Makefile
@@ -54,7 +54,8 @@ amdgpu-y += \
 	amdgpu_ih.o \
 	iceland_ih.o \
 	tonga_ih.o \
-	cz_ih.o
+	cz_ih.o \
+	vega10_ih.o
 
 # add SMC block
 amdgpu-y += \
diff --git a/drivers/gpu/drm/amd/amdgpu/vega10_ih.c b/drivers/gpu/drm/amd/amdgpu/vega10_ih.c
new file mode 100644
index 0000000..23371e1
--- /dev/null
+++ b/drivers/gpu/drm/amd/amdgpu/vega10_ih.c
@@ -0,0 +1,424 @@
+/*
+ * Copyright 2016 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+#include "drmP.h"
+#include "amdgpu.h"
+#include "amdgpu_ih.h"
+#include "soc15.h"
+
+
+#include "vega10/soc15ip.h"
+#include "vega10/OSSSYS/osssys_4_0_offset.h"
+#include "vega10/OSSSYS/osssys_4_0_sh_mask.h"
+
+#include "soc15_common.h"
+#include "vega10_ih.h"
+
+static void vega10_ih_set_interrupt_funcs(struct amdgpu_device *adev);
+
+/**
+ * vega10_ih_enable_interrupts - Enable the interrupt ring buffer
+ *
+ * @adev: amdgpu_device pointer
+ *
+ * Enable the interrupt ring buffer (VEGA10).
+ */
+static void vega10_ih_enable_interrupts(struct amdgpu_device *adev)
+{
+	u32 ih_rb_cntl = RREG32(SOC15_REG_OFFSET(OSSSYS, 0, mmIH_RB_CNTL));
+
+	ih_rb_cntl = REG_SET_FIELD(ih_rb_cntl, IH_RB_CNTL, RB_ENABLE, 1);
+	ih_rb_cntl = REG_SET_FIELD(ih_rb_cntl, IH_RB_CNTL, ENABLE_INTR, 1);
+	WREG32(SOC15_REG_OFFSET(OSSSYS, 0, mmIH_RB_CNTL), ih_rb_cntl);
+	adev->irq.ih.enabled = true;
+}
+
+/**
+ * vega10_ih_disable_interrupts - Disable the interrupt ring buffer
+ *
+ * @adev: amdgpu_device pointer
+ *
+ * Disable the interrupt ring buffer (VEGA10).
+ */
+static void vega10_ih_disable_interrupts(struct amdgpu_device *adev)
+{
+	u32 ih_rb_cntl = RREG32(SOC15_REG_OFFSET(OSSSYS, 0, mmIH_RB_CNTL));
+
+	ih_rb_cntl = REG_SET_FIELD(ih_rb_cntl, IH_RB_CNTL, RB_ENABLE, 0);
+	ih_rb_cntl = REG_SET_FIELD(ih_rb_cntl, IH_RB_CNTL, ENABLE_INTR, 0);
+	WREG32(SOC15_REG_OFFSET(OSSSYS, 0, mmIH_RB_CNTL), ih_rb_cntl);
+	/* set rptr, wptr to 0 */
+	WREG32(SOC15_REG_OFFSET(OSSSYS, 0, mmIH_RB_RPTR), 0);
+	WREG32(SOC15_REG_OFFSET(OSSSYS, 0, mmIH_RB_WPTR), 0);
+	adev->irq.ih.enabled = false;
+	adev->irq.ih.rptr = 0;
+}
+
+/**
+ * vega10_ih_irq_init - init and enable the interrupt ring
+ *
+ * @adev: amdgpu_device pointer
+ *
+ * Allocate a ring buffer for the interrupt controller,
+ * disable interrupts, then set up the IH ring
+ * buffer and enable it (VEGA10).
+ * Called at device load and resume.
+ * Returns 0 for success, errors for failure.
+ */
+static int vega10_ih_irq_init(struct amdgpu_device *adev)
+{
+	int ret = 0;
+	int rb_bufsz;
+	u32 ih_rb_cntl, ih_doorbell_rtpr;
+	u32 tmp;
+	u64 wptr_off;
+
+	/* disable irqs */
+	vega10_ih_disable_interrupts(adev);
+
+	nbio_v6_1_ih_control(adev);
+
+	ih_rb_cntl = RREG32(SOC15_REG_OFFSET(OSSSYS, 0, mmIH_RB_CNTL));
+	/* Ring Buffer base. [39:8] of 40-bit address of the beginning of the ring buffer */
+	if (adev->irq.ih.use_bus_addr) {
+		WREG32(SOC15_REG_OFFSET(OSSSYS, 0, mmIH_RB_BASE), adev->irq.ih.rb_dma_addr >> 8);
+		WREG32(SOC15_REG_OFFSET(OSSSYS, 0, mmIH_RB_BASE_HI), (adev->irq.ih.rb_dma_addr >> 40) & 0xff);
+		ih_rb_cntl = REG_SET_FIELD(ih_rb_cntl, IH_RB_CNTL, MC_SPACE, 1);
+	} else {
+		WREG32(SOC15_REG_OFFSET(OSSSYS, 0, mmIH_RB_BASE), adev->irq.ih.gpu_addr >> 8);
+		WREG32(SOC15_REG_OFFSET(OSSSYS, 0, mmIH_RB_BASE_HI), (adev->irq.ih.gpu_addr >> 40) & 0xff);
+		ih_rb_cntl = REG_SET_FIELD(ih_rb_cntl, IH_RB_CNTL, MC_SPACE, 4);
+	}
+	rb_bufsz = order_base_2(adev->irq.ih.ring_size / 4);
+	ih_rb_cntl = REG_SET_FIELD(ih_rb_cntl, IH_RB_CNTL, WPTR_OVERFLOW_CLEAR, 1);
+	ih_rb_cntl = REG_SET_FIELD(ih_rb_cntl, IH_RB_CNTL, WPTR_OVERFLOW_ENABLE, 1);
+	ih_rb_cntl = REG_SET_FIELD(ih_rb_cntl, IH_RB_CNTL, RB_SIZE, rb_bufsz);
+	/* Ring Buffer write pointer writeback. If enabled, IH_RB_WPTR register value is written to memory */
+	ih_rb_cntl = REG_SET_FIELD(ih_rb_cntl, IH_RB_CNTL, WPTR_WRITEBACK_ENABLE, 1);
+	ih_rb_cntl = REG_SET_FIELD(ih_rb_cntl, IH_RB_CNTL, MC_SNOOP, 1);
+	ih_rb_cntl = REG_SET_FIELD(ih_rb_cntl, IH_RB_CNTL, MC_RO, 0);
+	ih_rb_cntl = REG_SET_FIELD(ih_rb_cntl, IH_RB_CNTL, MC_VMID, 0);
+
+	if (adev->irq.msi_enabled)
+		ih_rb_cntl = REG_SET_FIELD(ih_rb_cntl, IH_RB_CNTL, RPTR_REARM, 1);
+
+	WREG32(SOC15_REG_OFFSET(OSSSYS, 0, mmIH_RB_CNTL), ih_rb_cntl);
+
+	/* set the writeback address whether it's enabled or not */
+	if (adev->irq.ih.use_bus_addr)
+		wptr_off = adev->irq.ih.rb_dma_addr + (adev->irq.ih.wptr_offs * 4);
+	else
+		wptr_off = adev->wb.gpu_addr + (adev->irq.ih.wptr_offs * 4);
+	WREG32(SOC15_REG_OFFSET(OSSSYS, 0, mmIH_RB_WPTR_ADDR_LO), lower_32_bits(wptr_off));
+	WREG32(SOC15_REG_OFFSET(OSSSYS, 0, mmIH_RB_WPTR_ADDR_HI), upper_32_bits(wptr_off) & 0xFF);
+
+	/* set rptr, wptr to 0 */
+	WREG32(SOC15_REG_OFFSET(OSSSYS, 0, mmIH_RB_RPTR), 0);
+	WREG32(SOC15_REG_OFFSET(OSSSYS, 0, mmIH_RB_WPTR), 0);
+
+	ih_doorbell_rtpr = RREG32(SOC15_REG_OFFSET(OSSSYS, 0, mmIH_DOORBELL_RPTR));
+	if (adev->irq.ih.use_doorbell) {
+		ih_doorbell_rtpr = REG_SET_FIELD(ih_doorbell_rtpr, IH_DOORBELL_RPTR,
+						 OFFSET, adev->irq.ih.doorbell_index);
+		ih_doorbell_rtpr = REG_SET_FIELD(ih_doorbell_rtpr, IH_DOORBELL_RPTR,
+						 ENABLE, 1);
+	} else {
+		ih_doorbell_rtpr = REG_SET_FIELD(ih_doorbell_rtpr, IH_DOORBELL_RPTR,
+						 ENABLE, 0);
+	}
+	WREG32(SOC15_REG_OFFSET(OSSSYS, 0, mmIH_DOORBELL_RPTR), ih_doorbell_rtpr);
+	nbio_v6_1_ih_doorbell_range(adev, adev->irq.ih.use_doorbell, adev->irq.ih.doorbell_index);
+
+	tmp = RREG32(SOC15_REG_OFFSET(OSSSYS, 0, mmIH_STORM_CLIENT_LIST_CNTL));
+	tmp = REG_SET_FIELD(tmp, IH_STORM_CLIENT_LIST_CNTL,
+			    CLIENT18_IS_STORM_CLIENT, 1);
+	WREG32(SOC15_REG_OFFSET(OSSSYS, 0, mmIH_STORM_CLIENT_LIST_CNTL), tmp);
+
+	tmp = RREG32(SOC15_REG_OFFSET(OSSSYS, 0, mmIH_INT_FLOOD_CNTL));
+	tmp = REG_SET_FIELD(tmp, IH_INT_FLOOD_CNTL, FLOOD_CNTL_ENABLE, 1);
+	WREG32(SOC15_REG_OFFSET(OSSSYS, 0, mmIH_INT_FLOOD_CNTL), tmp);
+
+	pci_set_master(adev->pdev);
+
+	/* enable interrupts */
+	vega10_ih_enable_interrupts(adev);
+
+	return ret;
+}
+
+/**
+ * vega10_ih_irq_disable - disable interrupts
+ *
+ * @adev: amdgpu_device pointer
+ *
+ * Disable interrupts on the hw (VEGA10).
+ */
+static void vega10_ih_irq_disable(struct amdgpu_device *adev)
+{
+	vega10_ih_disable_interrupts(adev);
+
+	/* Wait and acknowledge irq */
+	mdelay(1);
+}
+
+/**
+ * vega10_ih_get_wptr - get the IH ring buffer wptr
+ *
+ * @adev: amdgpu_device pointer
+ *
+ * Get the IH ring buffer wptr from either the register
+ * or the writeback memory buffer (VEGA10).  Also check for
+ * ring buffer overflow and deal with it.
+ * Returns the value of the wptr.
+ */
+static u32 vega10_ih_get_wptr(struct amdgpu_device *adev)
+{
+	u32 wptr, tmp;
+
+	if (adev->irq.ih.use_bus_addr)
+		wptr = le32_to_cpu(adev->irq.ih.ring[adev->irq.ih.wptr_offs]);
+	else
+		wptr = le32_to_cpu(adev->wb.wb[adev->irq.ih.wptr_offs]);
+
+	if (REG_GET_FIELD(wptr, IH_RB_WPTR, RB_OVERFLOW)) {
+		wptr = REG_SET_FIELD(wptr, IH_RB_WPTR, RB_OVERFLOW, 0);
+
+		/* When a ring buffer overflow happens, start parsing interrupts
+		 * from the last vector that was not overwritten (wptr + 32).
+		 * Hopefully this should allow us to catch up.
+		 */
+		tmp = (wptr + 32) & adev->irq.ih.ptr_mask;
+		dev_warn(adev->dev, "IH ring buffer overflow (0x%08X, 0x%08X, 0x%08X)\n",
+			wptr, adev->irq.ih.rptr, tmp);
+		adev->irq.ih.rptr = tmp;
+
+		tmp = RREG32(SOC15_REG_OFFSET(OSSSYS, 0, mmIH_RB_CNTL));
+		tmp = REG_SET_FIELD(tmp, IH_RB_CNTL, WPTR_OVERFLOW_CLEAR, 1);
+		WREG32(SOC15_REG_OFFSET(OSSSYS, 0, mmIH_RB_CNTL), tmp);
+	}
+	return (wptr & adev->irq.ih.ptr_mask);
+}
+
+/**
+ * vega10_ih_decode_iv - decode an interrupt vector
+ *
+ * @adev: amdgpu_device pointer
+ *
+ * Decodes the interrupt vector at the current rptr
+ * position and advances it.
+ */
+static void vega10_ih_decode_iv(struct amdgpu_device *adev,
+				 struct amdgpu_iv_entry *entry)
+{
+	/* wptr/rptr are in bytes! */
+	u32 ring_index = adev->irq.ih.rptr >> 2;
+	uint32_t dw[8];
+
+	dw[0] = le32_to_cpu(adev->irq.ih.ring[ring_index + 0]);
+	dw[1] = le32_to_cpu(adev->irq.ih.ring[ring_index + 1]);
+	dw[2] = le32_to_cpu(adev->irq.ih.ring[ring_index + 2]);
+	dw[3] = le32_to_cpu(adev->irq.ih.ring[ring_index + 3]);
+	dw[4] = le32_to_cpu(adev->irq.ih.ring[ring_index + 4]);
+	dw[5] = le32_to_cpu(adev->irq.ih.ring[ring_index + 5]);
+	dw[6] = le32_to_cpu(adev->irq.ih.ring[ring_index + 6]);
+	dw[7] = le32_to_cpu(adev->irq.ih.ring[ring_index + 7]);
+
+	entry->client_id = dw[0] & 0xff;
+	entry->src_id = (dw[0] >> 8) & 0xff;
+	entry->ring_id = (dw[0] >> 16) & 0xff;
+	entry->vm_id = (dw[0] >> 24) & 0xf;
+	entry->vm_id_src = (dw[0] >> 31);
+	entry->timestamp = dw[1] | ((u64)(dw[2] & 0xffff) << 32);
+	entry->timestamp_src = dw[2] >> 31;
+	entry->pas_id = dw[3] & 0xffff;
+	entry->pasid_src = dw[3] >> 31;
+	entry->src_data[0] = dw[4];
+	entry->src_data[1] = dw[5];
+	entry->src_data[2] = dw[6];
+	entry->src_data[3] = dw[7];
+
+	/* wptr/rptr are in bytes! */
+	adev->irq.ih.rptr += 32;
+}
+
+/**
+ * vega10_ih_set_rptr - set the IH ring buffer rptr
+ *
+ * @adev: amdgpu_device pointer
+ *
+ * Set the IH ring buffer rptr.
+ */
+static void vega10_ih_set_rptr(struct amdgpu_device *adev)
+{
+	if (adev->irq.ih.use_doorbell) {
+		/* XXX check if swapping is necessary on BE */
+		if (adev->irq.ih.use_bus_addr)
+			adev->irq.ih.ring[adev->irq.ih.rptr_offs] = adev->irq.ih.rptr;
+		else
+			adev->wb.wb[adev->irq.ih.rptr_offs] = adev->irq.ih.rptr;
+		WDOORBELL32(adev->irq.ih.doorbell_index, adev->irq.ih.rptr);
+	} else {
+		WREG32(SOC15_REG_OFFSET(OSSSYS, 0, mmIH_RB_RPTR), adev->irq.ih.rptr);
+	}
+}
+
+static int vega10_ih_early_init(void *handle)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	vega10_ih_set_interrupt_funcs(adev);
+	return 0;
+}
+
+static int vega10_ih_sw_init(void *handle)
+{
+	int r;
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	r = amdgpu_ih_ring_init(adev, 256 * 1024, true);
+	if (r)
+		return r;
+
+	adev->irq.ih.use_doorbell = true;
+	adev->irq.ih.doorbell_index = AMDGPU_DOORBELL64_IH << 1;
+
+	r = amdgpu_irq_init(adev);
+
+	return r;
+}
+
+static int vega10_ih_sw_fini(void *handle)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	amdgpu_irq_fini(adev);
+	amdgpu_ih_ring_fini(adev);
+
+	return 0;
+}
+
+static int vega10_ih_hw_init(void *handle)
+{
+	int r;
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	r = vega10_ih_irq_init(adev);
+	if (r)
+		return r;
+
+	return 0;
+}
+
+static int vega10_ih_hw_fini(void *handle)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	vega10_ih_irq_disable(adev);
+
+	return 0;
+}
+
+static int vega10_ih_suspend(void *handle)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	return vega10_ih_hw_fini(adev);
+}
+
+static int vega10_ih_resume(void *handle)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	return vega10_ih_hw_init(adev);
+}
+
+static bool vega10_ih_is_idle(void *handle)
+{
+	/* todo */
+	return true;
+}
+
+static int vega10_ih_wait_for_idle(void *handle)
+{
+	/* todo */
+	return -ETIMEDOUT;
+}
+
+static int vega10_ih_soft_reset(void *handle)
+{
+	/* todo */
+
+	return 0;
+}
+
+static int vega10_ih_set_clockgating_state(void *handle,
+					  enum amd_clockgating_state state)
+{
+	return 0;
+}
+
+static int vega10_ih_set_powergating_state(void *handle,
+					  enum amd_powergating_state state)
+{
+	return 0;
+}
+
+const struct amd_ip_funcs vega10_ih_ip_funcs = {
+	.name = "vega10_ih",
+	.early_init = vega10_ih_early_init,
+	.late_init = NULL,
+	.sw_init = vega10_ih_sw_init,
+	.sw_fini = vega10_ih_sw_fini,
+	.hw_init = vega10_ih_hw_init,
+	.hw_fini = vega10_ih_hw_fini,
+	.suspend = vega10_ih_suspend,
+	.resume = vega10_ih_resume,
+	.is_idle = vega10_ih_is_idle,
+	.wait_for_idle = vega10_ih_wait_for_idle,
+	.soft_reset = vega10_ih_soft_reset,
+	.set_clockgating_state = vega10_ih_set_clockgating_state,
+	.set_powergating_state = vega10_ih_set_powergating_state,
+};
+
+static const struct amdgpu_ih_funcs vega10_ih_funcs = {
+	.get_wptr = vega10_ih_get_wptr,
+	.decode_iv = vega10_ih_decode_iv,
+	.set_rptr = vega10_ih_set_rptr
+};
+
+static void vega10_ih_set_interrupt_funcs(struct amdgpu_device *adev)
+{
+	if (adev->irq.ih_funcs == NULL)
+		adev->irq.ih_funcs = &vega10_ih_funcs;
+}
+
+const struct amdgpu_ip_block_version vega10_ih_ip_block =
+{
+	.type = AMD_IP_BLOCK_TYPE_IH,
+	.major = 4,
+	.minor = 0,
+	.rev = 0,
+	.funcs = &vega10_ih_ip_funcs,
+};
diff --git a/drivers/gpu/drm/amd/amdgpu/vega10_ih.h b/drivers/gpu/drm/amd/amdgpu/vega10_ih.h
new file mode 100644
index 0000000..82edd28b
--- /dev/null
+++ b/drivers/gpu/drm/amd/amdgpu/vega10_ih.h
@@ -0,0 +1,30 @@
+/*
+ * Copyright 2016 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __VEGA10_IH_H__
+#define __VEGA10_IH_H__
+
+extern const struct amd_ip_funcs vega10_ih_ip_funcs;
+extern const struct amdgpu_ip_block_version vega10_ih_ip_block;
+
+#endif
-- 
2.5.5

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 101+ messages in thread

* [PATCH 050/100] drm/amdgpu: add initial uvd 7.0 support for vega10
       [not found] ` <1490041835-11255-1-git-send-email-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (33 preceding siblings ...)
  2017-03-20 20:29   ` [PATCH 049/100] drm/amdgpu: add vega10 interrupt handler Alex Deucher
@ 2017-03-20 20:29   ` Alex Deucher
  2017-03-20 20:29   ` [PATCH 051/100] drm/amdgpu: add initial vce 4.0 " Alex Deucher
                     ` (50 subsequent siblings)
  85 siblings, 0 replies; 101+ messages in thread
From: Alex Deucher @ 2017-03-20 20:29 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Alex Deucher, Leo Liu

From: Leo Liu <leo.liu@amd.com>

Signed-off-by: Leo Liu <leo.liu@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/Makefile     |    3 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c |   52 +-
 drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c   | 1543 +++++++++++++++++++++++++++++++
 drivers/gpu/drm/amd/amdgpu/uvd_v7_0.h   |   29 +
 4 files changed, 1615 insertions(+), 12 deletions(-)
 create mode 100644 drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c
 create mode 100644 drivers/gpu/drm/amd/amdgpu/uvd_v7_0.h

diff --git a/drivers/gpu/drm/amd/amdgpu/Makefile b/drivers/gpu/drm/amd/amdgpu/Makefile
index bc29569..65829fa 100644
--- a/drivers/gpu/drm/amd/amdgpu/Makefile
+++ b/drivers/gpu/drm/amd/amdgpu/Makefile
@@ -84,7 +84,8 @@ amdgpu-y += \
 amdgpu-y += \
 	amdgpu_uvd.o \
 	uvd_v5_0.o \
-	uvd_v6_0.o
+	uvd_v6_0.o \
+	uvd_v7_0.o
 
 # add VCE block
 amdgpu-y += \
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
index 02b8613..b2e1d3b 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
@@ -67,6 +67,14 @@
 #define FIRMWARE_POLARIS11	"amdgpu/polaris11_uvd.bin"
 #define FIRMWARE_POLARIS12	"amdgpu/polaris12_uvd.bin"
 
+#define FIRMWARE_VEGA10		"amdgpu/vega10_uvd.bin"
+
+#define mmUVD_GPCOM_VCPU_DATA0_VEGA10 (0x03c4 + 0x7e00)
+#define mmUVD_GPCOM_VCPU_DATA1_VEGA10 (0x03c5 + 0x7e00)
+#define mmUVD_GPCOM_VCPU_CMD_VEGA10 (0x03c3 + 0x7e00)
+#define mmUVD_NO_OP_VEGA10 (0x03ff + 0x7e00)
+#define mmUVD_ENGINE_CNTL_VEGA10 (0x03c6 + 0x7e00)
+
 /**
  * amdgpu_uvd_cs_ctx - Command submission parser context
  *
@@ -101,6 +109,8 @@ MODULE_FIRMWARE(FIRMWARE_POLARIS10);
 MODULE_FIRMWARE(FIRMWARE_POLARIS11);
 MODULE_FIRMWARE(FIRMWARE_POLARIS12);
 
+MODULE_FIRMWARE(FIRMWARE_VEGA10);
+
 static void amdgpu_uvd_idle_work_handler(struct work_struct *work);
 
 int amdgpu_uvd_sw_init(struct amdgpu_device *adev)
@@ -151,6 +161,9 @@ int amdgpu_uvd_sw_init(struct amdgpu_device *adev)
 	case CHIP_POLARIS11:
 		fw_name = FIRMWARE_POLARIS11;
 		break;
+	case CHIP_VEGA10:
+		fw_name = FIRMWARE_VEGA10;
+		break;
 	case CHIP_POLARIS12:
 		fw_name = FIRMWARE_POLARIS12;
 		break;
@@ -203,9 +216,11 @@ int amdgpu_uvd_sw_init(struct amdgpu_device *adev)
 		DRM_ERROR("POLARIS10/11 UVD firmware version %hu.%hu is too old.\n",
 			  version_major, version_minor);
 
-	bo_size = AMDGPU_GPU_PAGE_ALIGN(le32_to_cpu(hdr->ucode_size_bytes) + 8)
-		  +  AMDGPU_UVD_STACK_SIZE + AMDGPU_UVD_HEAP_SIZE
+	bo_size = AMDGPU_UVD_STACK_SIZE + AMDGPU_UVD_HEAP_SIZE
 		  +  AMDGPU_UVD_SESSION_SIZE * adev->uvd.max_handles;
+	if (adev->firmware.load_type != AMDGPU_FW_LOAD_PSP)
+		bo_size += AMDGPU_GPU_PAGE_ALIGN(le32_to_cpu(hdr->ucode_size_bytes) + 8);
+
 	r = amdgpu_bo_create_kernel(adev, bo_size, PAGE_SIZE,
 				    AMDGPU_GEM_DOMAIN_VRAM, &adev->uvd.vcpu_bo,
 				    &adev->uvd.gpu_addr, &adev->uvd.cpu_addr);
@@ -319,11 +334,13 @@ int amdgpu_uvd_resume(struct amdgpu_device *adev)
 		unsigned offset;
 
 		hdr = (const struct common_firmware_header *)adev->uvd.fw->data;
-		offset = le32_to_cpu(hdr->ucode_array_offset_bytes);
-		memcpy_toio(adev->uvd.cpu_addr, adev->uvd.fw->data + offset,
-			    le32_to_cpu(hdr->ucode_size_bytes));
-		size -= le32_to_cpu(hdr->ucode_size_bytes);
-		ptr += le32_to_cpu(hdr->ucode_size_bytes);
+		if (adev->firmware.load_type != AMDGPU_FW_LOAD_PSP) {
+			offset = le32_to_cpu(hdr->ucode_array_offset_bytes);
+			memcpy_toio(adev->uvd.cpu_addr, adev->uvd.fw->data + offset,
+				    le32_to_cpu(hdr->ucode_size_bytes));
+			size -= le32_to_cpu(hdr->ucode_size_bytes);
+			ptr += le32_to_cpu(hdr->ucode_size_bytes);
+		}
 		memset_io(ptr, 0, size);
 	}
 
@@ -936,6 +953,7 @@ static int amdgpu_uvd_send_msg(struct amdgpu_ring *ring, struct amdgpu_bo *bo,
 	struct fence *f = NULL;
 	struct amdgpu_device *adev = ring->adev;
 	uint64_t addr;
+	uint32_t data[4];
 	int i, r;
 
 	memset(&tv, 0, sizeof(tv));
@@ -961,16 +979,28 @@ static int amdgpu_uvd_send_msg(struct amdgpu_ring *ring, struct amdgpu_bo *bo,
 	if (r)
 		goto err;
 
+	if (adev->asic_type >= CHIP_VEGA10) {
+		data[0] = PACKET0(mmUVD_GPCOM_VCPU_DATA0_VEGA10, 0);
+		data[1] = PACKET0(mmUVD_GPCOM_VCPU_DATA1_VEGA10, 0);
+		data[2] = PACKET0(mmUVD_GPCOM_VCPU_CMD_VEGA10, 0);
+		data[3] = PACKET0(mmUVD_NO_OP_VEGA10, 0);
+	} else {
+		data[0] = PACKET0(mmUVD_GPCOM_VCPU_DATA0, 0);
+		data[1] = PACKET0(mmUVD_GPCOM_VCPU_DATA1, 0);
+		data[2] = PACKET0(mmUVD_GPCOM_VCPU_CMD, 0);
+		data[3] = PACKET0(mmUVD_NO_OP, 0);
+	}
+
 	ib = &job->ibs[0];
 	addr = amdgpu_bo_gpu_offset(bo);
-	ib->ptr[0] = PACKET0(mmUVD_GPCOM_VCPU_DATA0, 0);
+	ib->ptr[0] = data[0];
 	ib->ptr[1] = addr;
-	ib->ptr[2] = PACKET0(mmUVD_GPCOM_VCPU_DATA1, 0);
+	ib->ptr[2] = data[1];
 	ib->ptr[3] = addr >> 32;
-	ib->ptr[4] = PACKET0(mmUVD_GPCOM_VCPU_CMD, 0);
+	ib->ptr[4] = data[2];
 	ib->ptr[5] = 0;
 	for (i = 6; i < 16; i += 2) {
-		ib->ptr[i] = PACKET0(mmUVD_NO_OP, 0);
+		ib->ptr[i] = data[3];
 		ib->ptr[i+1] = 0;
 	}
 	ib->length_dw = 16;
diff --git a/drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c b/drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c
new file mode 100644
index 0000000..3457546
--- /dev/null
+++ b/drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c
@@ -0,0 +1,1543 @@
+/*
+ * Copyright 2016 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+
+#include <linux/firmware.h>
+#include <drm/drmP.h>
+#include "amdgpu.h"
+#include "amdgpu_uvd.h"
+#include "soc15d.h"
+#include "soc15_common.h"
+
+#include "vega10/soc15ip.h"
+#include "vega10/UVD/uvd_7_0_offset.h"
+#include "vega10/UVD/uvd_7_0_sh_mask.h"
+#include "vega10/NBIF/nbif_6_1_offset.h"
+#include "vega10/HDP/hdp_4_0_offset.h"
+#include "vega10/MMHUB/mmhub_1_0_offset.h"
+#include "vega10/MMHUB/mmhub_1_0_sh_mask.h"
+
+static void uvd_v7_0_set_ring_funcs(struct amdgpu_device *adev);
+static void uvd_v7_0_set_enc_ring_funcs(struct amdgpu_device *adev);
+static void uvd_v7_0_set_irq_funcs(struct amdgpu_device *adev);
+static int uvd_v7_0_start(struct amdgpu_device *adev);
+static void uvd_v7_0_stop(struct amdgpu_device *adev);
+
+/**
+ * uvd_v7_0_ring_get_rptr - get read pointer
+ *
+ * @ring: amdgpu_ring pointer
+ *
+ * Returns the current hardware read pointer
+ */
+static uint64_t uvd_v7_0_ring_get_rptr(struct amdgpu_ring *ring)
+{
+	struct amdgpu_device *adev = ring->adev;
+
+	return RREG32(SOC15_REG_OFFSET(UVD, 0, mmUVD_RBC_RB_RPTR));
+}
+
+/**
+ * uvd_v7_0_enc_ring_get_rptr - get enc read pointer
+ *
+ * @ring: amdgpu_ring pointer
+ *
+ * Returns the current hardware enc read pointer
+ */
+static uint64_t uvd_v7_0_enc_ring_get_rptr(struct amdgpu_ring *ring)
+{
+	struct amdgpu_device *adev = ring->adev;
+
+	if (ring == &adev->uvd.ring_enc[0])
+		return RREG32(SOC15_REG_OFFSET(UVD, 0, mmUVD_RB_RPTR));
+	else
+		return RREG32(SOC15_REG_OFFSET(UVD, 0, mmUVD_RB_RPTR2));
+}
+
+/**
+ * uvd_v7_0_ring_get_wptr - get write pointer
+ *
+ * @ring: amdgpu_ring pointer
+ *
+ * Returns the current hardware write pointer
+ */
+static uint64_t uvd_v7_0_ring_get_wptr(struct amdgpu_ring *ring)
+{
+	struct amdgpu_device *adev = ring->adev;
+
+	return RREG32(SOC15_REG_OFFSET(UVD, 0, mmUVD_RBC_RB_WPTR));
+}
+
+/**
+ * uvd_v7_0_enc_ring_get_wptr - get enc write pointer
+ *
+ * @ring: amdgpu_ring pointer
+ *
+ * Returns the current hardware enc write pointer
+ */
+static uint64_t uvd_v7_0_enc_ring_get_wptr(struct amdgpu_ring *ring)
+{
+	struct amdgpu_device *adev = ring->adev;
+
+	if (ring == &adev->uvd.ring_enc[0])
+		return RREG32(SOC15_REG_OFFSET(UVD, 0, mmUVD_RB_WPTR));
+	else
+		return RREG32(SOC15_REG_OFFSET(UVD, 0, mmUVD_RB_WPTR2));
+}
+
+/**
+ * uvd_v7_0_ring_set_wptr - set write pointer
+ *
+ * @ring: amdgpu_ring pointer
+ *
+ * Commits the write pointer to the hardware
+ */
+static void uvd_v7_0_ring_set_wptr(struct amdgpu_ring *ring)
+{
+	struct amdgpu_device *adev = ring->adev;
+
+	WREG32(SOC15_REG_OFFSET(UVD, 0, mmUVD_RBC_RB_WPTR), lower_32_bits(ring->wptr));
+}
+
+/**
+ * uvd_v7_0_enc_ring_set_wptr - set enc write pointer
+ *
+ * @ring: amdgpu_ring pointer
+ *
+ * Commits the enc write pointer to the hardware
+ */
+static void uvd_v7_0_enc_ring_set_wptr(struct amdgpu_ring *ring)
+{
+	struct amdgpu_device *adev = ring->adev;
+
+	if (ring == &adev->uvd.ring_enc[0])
+		WREG32(SOC15_REG_OFFSET(UVD, 0, mmUVD_RB_WPTR),
+			lower_32_bits(ring->wptr));
+	else
+		WREG32(SOC15_REG_OFFSET(UVD, 0, mmUVD_RB_WPTR2),
+			lower_32_bits(ring->wptr));
+}
+
+/**
+ * uvd_v7_0_enc_ring_test_ring - test if UVD ENC ring is working
+ *
+ * @ring: the engine to test on
+ *
+ */
+static int uvd_v7_0_enc_ring_test_ring(struct amdgpu_ring *ring)
+{
+	struct amdgpu_device *adev = ring->adev;
+	uint32_t rptr = amdgpu_ring_get_rptr(ring);
+	unsigned i;
+	int r;
+
+	r = amdgpu_ring_alloc(ring, 16);
+	if (r) {
+		DRM_ERROR("amdgpu: uvd enc failed to lock ring %d (%d).\n",
+			  ring->idx, r);
+		return r;
+	}
+	amdgpu_ring_write(ring, HEVC_ENC_CMD_END);
+	amdgpu_ring_commit(ring);
+
+	for (i = 0; i < adev->usec_timeout; i++) {
+		if (amdgpu_ring_get_rptr(ring) != rptr)
+			break;
+		DRM_UDELAY(1);
+	}
+
+	if (i < adev->usec_timeout) {
+		DRM_INFO("ring test on %d succeeded in %d usecs\n",
+			 ring->idx, i);
+	} else {
+		DRM_ERROR("amdgpu: ring %d test failed\n",
+			  ring->idx);
+		r = -ETIMEDOUT;
+	}
+
+	return r;
+}
+
+/**
+ * uvd_v7_0_enc_get_create_msg - generate a UVD ENC create msg
+ *
+ * @adev: amdgpu_device pointer
+ * @ring: ring we should submit the msg to
+ * @handle: session handle to use
+ * @fence: optional fence to return
+ *
+ * Open up a stream for HW test
+ */
+static int uvd_v7_0_enc_get_create_msg(struct amdgpu_ring *ring, uint32_t handle,
+			      struct fence **fence)
+{
+	const unsigned ib_size_dw = 16;
+	struct amdgpu_job *job;
+	struct amdgpu_ib *ib;
+	struct fence *f = NULL;
+	uint64_t dummy;
+	int i, r;
+
+	r = amdgpu_job_alloc_with_ib(ring->adev, ib_size_dw * 4, &job);
+	if (r)
+		return r;
+
+	ib = &job->ibs[0];
+	dummy = ib->gpu_addr + 1024;
+
+	ib->length_dw = 0;
+	ib->ptr[ib->length_dw++] = 0x00000018;
+	ib->ptr[ib->length_dw++] = 0x00000001; /* session info */
+	ib->ptr[ib->length_dw++] = handle;
+	ib->ptr[ib->length_dw++] = 0x00000000;
+	ib->ptr[ib->length_dw++] = upper_32_bits(dummy);
+	ib->ptr[ib->length_dw++] = dummy;
+
+	ib->ptr[ib->length_dw++] = 0x00000014;
+	ib->ptr[ib->length_dw++] = 0x00000002; /* task info */
+	ib->ptr[ib->length_dw++] = 0x0000001c;
+	ib->ptr[ib->length_dw++] = 0x00000000;
+	ib->ptr[ib->length_dw++] = 0x00000000;
+
+	ib->ptr[ib->length_dw++] = 0x00000008;
+	ib->ptr[ib->length_dw++] = 0x08000001; /* op initialize */
+
+	for (i = ib->length_dw; i < ib_size_dw; ++i)
+		ib->ptr[i] = 0x0;
+
+	r = amdgpu_ib_schedule(ring, 1, ib, NULL, &f);
+	job->fence = fence_get(f);
+	if (r)
+		goto err;
+
+	amdgpu_job_free(job);
+	if (fence)
+		*fence = fence_get(f);
+	fence_put(f);
+	return 0;
+
+err:
+	amdgpu_job_free(job);
+	return r;
+}
+
+/**
+ * uvd_v7_0_enc_get_destroy_msg - generate a UVD ENC destroy msg
+ *
+ * @ring: ring we should submit the msg to
+ * @handle: session handle to use
+ * @direct: submit the msg directly to the ring, bypassing the job scheduler
+ * @fence: optional fence to return
+ *
+ * Close up a stream for HW test or if userspace failed to do so
+ */
+int uvd_v7_0_enc_get_destroy_msg(struct amdgpu_ring *ring, uint32_t handle,
+			       bool direct, struct fence **fence)
+{
+	const unsigned ib_size_dw = 16;
+	struct amdgpu_job *job;
+	struct amdgpu_ib *ib;
+	struct fence *f = NULL;
+	uint64_t dummy;
+	int i, r;
+
+	r = amdgpu_job_alloc_with_ib(ring->adev, ib_size_dw * 4, &job);
+	if (r)
+		return r;
+
+	ib = &job->ibs[0];
+	dummy = ib->gpu_addr + 1024;
+
+	ib->length_dw = 0;
+	ib->ptr[ib->length_dw++] = 0x00000018;
+	ib->ptr[ib->length_dw++] = 0x00000001;
+	ib->ptr[ib->length_dw++] = handle;
+	ib->ptr[ib->length_dw++] = 0x00000000;
+	ib->ptr[ib->length_dw++] = upper_32_bits(dummy);
+	ib->ptr[ib->length_dw++] = dummy;
+
+	ib->ptr[ib->length_dw++] = 0x00000014;
+	ib->ptr[ib->length_dw++] = 0x00000002;
+	ib->ptr[ib->length_dw++] = 0x0000001c;
+	ib->ptr[ib->length_dw++] = 0x00000000;
+	ib->ptr[ib->length_dw++] = 0x00000000;
+
+	ib->ptr[ib->length_dw++] = 0x00000008;
+	ib->ptr[ib->length_dw++] = 0x08000002; /* op close session */
+
+	for (i = ib->length_dw; i < ib_size_dw; ++i)
+		ib->ptr[i] = 0x0;
+
+	if (direct) {
+		r = amdgpu_ib_schedule(ring, 1, ib, NULL, &f);
+		job->fence = fence_get(f);
+		if (r)
+			goto err;
+
+		amdgpu_job_free(job);
+	} else {
+		r = amdgpu_job_submit(job, ring, &ring->adev->vce.entity,
+				      AMDGPU_FENCE_OWNER_UNDEFINED, &f);
+		if (r)
+			goto err;
+	}
+
+	if (fence)
+		*fence = fence_get(f);
+	fence_put(f);
+	return 0;
+
+err:
+	amdgpu_job_free(job);
+	return r;
+}
+
+/**
+ * uvd_v7_0_enc_ring_test_ib - test if UVD ENC IBs are working
+ *
+ * @ring: the engine to test on
+ * @timeout: how long to wait for the fence, in jiffies
+ *
+ */
+static int uvd_v7_0_enc_ring_test_ib(struct amdgpu_ring *ring, long timeout)
+{
+	struct fence *fence = NULL;
+	long r;
+
+	r = uvd_v7_0_enc_get_create_msg(ring, 1, NULL);
+	if (r) {
+		DRM_ERROR("amdgpu: failed to get create msg (%ld).\n", r);
+		goto error;
+	}
+
+	r = uvd_v7_0_enc_get_destroy_msg(ring, 1, true, &fence);
+	if (r) {
+		DRM_ERROR("amdgpu: failed to get destroy ib (%ld).\n", r);
+		goto error;
+	}
+
+	r = fence_wait_timeout(fence, false, timeout);
+	if (r == 0) {
+		DRM_ERROR("amdgpu: IB test timed out.\n");
+		r = -ETIMEDOUT;
+	} else if (r < 0) {
+		DRM_ERROR("amdgpu: fence wait failed (%ld).\n", r);
+	} else {
+		DRM_INFO("ib test on ring %d succeeded\n", ring->idx);
+		r = 0;
+	}
+error:
+	fence_put(fence);
+	return r;
+}
+
+static int uvd_v7_0_early_init(void *handle)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	adev->uvd.num_enc_rings = 2;
+	uvd_v7_0_set_ring_funcs(adev);
+	uvd_v7_0_set_enc_ring_funcs(adev);
+	uvd_v7_0_set_irq_funcs(adev);
+
+	return 0;
+}
+
+static int uvd_v7_0_sw_init(void *handle)
+{
+	struct amdgpu_ring *ring;
+	struct amd_sched_rq *rq;
+	int i, r;
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	/* UVD TRAP */
+	r = amdgpu_irq_add_id(adev, AMDGPU_IH_CLIENTID_UVD, 124, &adev->uvd.irq);
+	if (r)
+		return r;
+
+	/* UVD ENC TRAP */
+	for (i = 0; i < adev->uvd.num_enc_rings; ++i) {
+		r = amdgpu_irq_add_id(adev, AMDGPU_IH_CLIENTID_UVD, i + 119, &adev->uvd.irq);
+		if (r)
+			return r;
+	}
+
+	r = amdgpu_uvd_sw_init(adev);
+	if (r)
+		return r;
+
+	if (adev->firmware.load_type == AMDGPU_FW_LOAD_PSP) {
+		const struct common_firmware_header *hdr;
+
+		hdr = (const struct common_firmware_header *)adev->uvd.fw->data;
+		adev->firmware.ucode[AMDGPU_UCODE_ID_UVD].ucode_id = AMDGPU_UCODE_ID_UVD;
+		adev->firmware.ucode[AMDGPU_UCODE_ID_UVD].fw = adev->uvd.fw;
+		adev->firmware.fw_size +=
+			ALIGN(le32_to_cpu(hdr->ucode_size_bytes), PAGE_SIZE);
+		DRM_INFO("PSP loading UVD firmware\n");
+	}
+
+	ring = &adev->uvd.ring_enc[0];
+	rq = &ring->sched.sched_rq[AMD_SCHED_PRIORITY_NORMAL];
+	r = amd_sched_entity_init(&ring->sched, &adev->uvd.entity_enc,
+				  rq, amdgpu_sched_jobs);
+	if (r) {
+		DRM_ERROR("Failed setting up UVD ENC run queue.\n");
+		return r;
+	}
+
+	r = amdgpu_uvd_resume(adev);
+	if (r)
+		return r;
+
+	ring = &adev->uvd.ring;
+	sprintf(ring->name, "uvd");
+	r = amdgpu_ring_init(adev, ring, 512, &adev->uvd.irq, 0);
+	if (r)
+		return r;
+
+	for (i = 0; i < adev->uvd.num_enc_rings; ++i) {
+		ring = &adev->uvd.ring_enc[i];
+		sprintf(ring->name, "uvd_enc%d", i);
+		r = amdgpu_ring_init(adev, ring, 512, &adev->uvd.irq, 0);
+		if (r)
+			return r;
+	}
+
+	return r;
+}
+
+static int uvd_v7_0_sw_fini(void *handle)
+{
+	int i, r;
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	r = amdgpu_uvd_suspend(adev);
+	if (r)
+		return r;
+
+	amd_sched_entity_fini(&adev->uvd.ring_enc[0].sched, &adev->uvd.entity_enc);
+
+	for (i = 0; i < adev->uvd.num_enc_rings; ++i)
+		amdgpu_ring_fini(&adev->uvd.ring_enc[i]);
+
+	return amdgpu_uvd_sw_fini(adev);
+}
+
+/**
+ * uvd_v7_0_hw_init - start and test UVD block
+ *
+ * @handle: handle used to identify the IP block
+ *
+ * Initialize the hardware, boot up the VCPU and do some testing
+ */
+static int uvd_v7_0_hw_init(void *handle)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+	struct amdgpu_ring *ring = &adev->uvd.ring;
+	uint32_t tmp;
+	int i, r;
+
+	r = uvd_v7_0_start(adev);
+	if (r)
+		goto done;
+
+	ring->ready = true;
+	r = amdgpu_ring_test_ring(ring);
+	if (r) {
+		ring->ready = false;
+		goto done;
+	}
+
+	r = amdgpu_ring_alloc(ring, 10);
+	if (r) {
+		DRM_ERROR("amdgpu: ring failed to lock UVD ring (%d).\n", r);
+		goto done;
+	}
+
+	tmp = PACKET0(SOC15_REG_OFFSET(UVD, 0,
+		mmUVD_SEMA_WAIT_FAULT_TIMEOUT_CNTL), 0);
+	amdgpu_ring_write(ring, tmp);
+	amdgpu_ring_write(ring, 0xFFFFF);
+
+	tmp = PACKET0(SOC15_REG_OFFSET(UVD, 0,
+		mmUVD_SEMA_WAIT_INCOMPLETE_TIMEOUT_CNTL), 0);
+	amdgpu_ring_write(ring, tmp);
+	amdgpu_ring_write(ring, 0xFFFFF);
+
+	tmp = PACKET0(SOC15_REG_OFFSET(UVD, 0,
+		mmUVD_SEMA_SIGNAL_INCOMPLETE_TIMEOUT_CNTL), 0);
+	amdgpu_ring_write(ring, tmp);
+	amdgpu_ring_write(ring, 0xFFFFF);
+
+	/* Clear timeout status bits */
+	amdgpu_ring_write(ring, PACKET0(SOC15_REG_OFFSET(UVD, 0,
+		mmUVD_SEMA_TIMEOUT_STATUS), 0));
+	amdgpu_ring_write(ring, 0x8);
+
+	amdgpu_ring_write(ring, PACKET0(SOC15_REG_OFFSET(UVD, 0,
+		mmUVD_SEMA_CNTL), 0));
+	amdgpu_ring_write(ring, 3);
+
+	amdgpu_ring_commit(ring);
+
+	for (i = 0; i < adev->uvd.num_enc_rings; ++i) {
+		ring = &adev->uvd.ring_enc[i];
+		ring->ready = true;
+		r = amdgpu_ring_test_ring(ring);
+		if (r) {
+			ring->ready = false;
+			goto done;
+		}
+	}
+
+done:
+	if (!r)
+		DRM_INFO("UVD and UVD ENC initialized successfully.\n");
+
+	return r;
+}
+
+/**
+ * uvd_v7_0_hw_fini - stop the hardware block
+ *
+ * @handle: handle used to identify the IP block
+ *
+ * Stop the UVD block and mark the ring as no longer ready
+ */
+static int uvd_v7_0_hw_fini(void *handle)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+	struct amdgpu_ring *ring = &adev->uvd.ring;
+
+	uvd_v7_0_stop(adev);
+	ring->ready = false;
+
+	return 0;
+}
+
+static int uvd_v7_0_suspend(void *handle)
+{
+	int r;
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	r = uvd_v7_0_hw_fini(adev);
+	if (r)
+		return r;
+
+	/* Skip this for APU for now */
+	if (!(adev->flags & AMD_IS_APU)) {
+		r = amdgpu_uvd_suspend(adev);
+		if (r)
+			return r;
+	}
+
+	return r;
+}
+
+static int uvd_v7_0_resume(void *handle)
+{
+	int r;
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	/* Skip this for APU for now */
+	if (!(adev->flags & AMD_IS_APU)) {
+		r = amdgpu_uvd_resume(adev);
+		if (r)
+			return r;
+	}
+	return uvd_v7_0_hw_init(adev);
+}
+
+/**
+ * uvd_v7_0_mc_resume - memory controller programming
+ *
+ * @adev: amdgpu_device pointer
+ *
+ * Let the UVD memory controller know its offsets
+ */
+static void uvd_v7_0_mc_resume(struct amdgpu_device *adev)
+{
+	uint32_t size = AMDGPU_GPU_PAGE_ALIGN(adev->uvd.fw->size + 4);
+	uint32_t offset;
+
+	if (adev->firmware.load_type == AMDGPU_FW_LOAD_PSP) {
+		WREG32(SOC15_REG_OFFSET(UVD, 0, mmUVD_LMI_VCPU_CACHE_64BIT_BAR_LOW),
+			lower_32_bits(adev->firmware.ucode[AMDGPU_UCODE_ID_UVD].mc_addr));
+		WREG32(SOC15_REG_OFFSET(UVD, 0, mmUVD_LMI_VCPU_CACHE_64BIT_BAR_HIGH),
+			upper_32_bits(adev->firmware.ucode[AMDGPU_UCODE_ID_UVD].mc_addr));
+		offset = 0;
+	} else {
+		WREG32(SOC15_REG_OFFSET(UVD, 0, mmUVD_LMI_VCPU_CACHE_64BIT_BAR_LOW),
+			lower_32_bits(adev->uvd.gpu_addr));
+		WREG32(SOC15_REG_OFFSET(UVD, 0, mmUVD_LMI_VCPU_CACHE_64BIT_BAR_HIGH),
+			upper_32_bits(adev->uvd.gpu_addr));
+		offset = size;
+	}
+
+	WREG32(SOC15_REG_OFFSET(UVD, 0, mmUVD_VCPU_CACHE_OFFSET0),
+				AMDGPU_UVD_FIRMWARE_OFFSET >> 3);
+	WREG32(SOC15_REG_OFFSET(UVD, 0, mmUVD_VCPU_CACHE_SIZE0), size);
+
+	WREG32(SOC15_REG_OFFSET(UVD, 0, mmUVD_LMI_VCPU_CACHE1_64BIT_BAR_LOW),
+			lower_32_bits(adev->uvd.gpu_addr + offset));
+	WREG32(SOC15_REG_OFFSET(UVD, 0, mmUVD_LMI_VCPU_CACHE1_64BIT_BAR_HIGH),
+			upper_32_bits(adev->uvd.gpu_addr + offset));
+	WREG32(SOC15_REG_OFFSET(UVD, 0, mmUVD_VCPU_CACHE_OFFSET1), (1 << 21));
+	WREG32(SOC15_REG_OFFSET(UVD, 0, mmUVD_VCPU_CACHE_SIZE1), AMDGPU_UVD_HEAP_SIZE);
+
+	WREG32(SOC15_REG_OFFSET(UVD, 0, mmUVD_LMI_VCPU_CACHE2_64BIT_BAR_LOW),
+			lower_32_bits(adev->uvd.gpu_addr + offset + AMDGPU_UVD_HEAP_SIZE));
+	WREG32(SOC15_REG_OFFSET(UVD, 0, mmUVD_LMI_VCPU_CACHE2_64BIT_BAR_HIGH),
+			upper_32_bits(adev->uvd.gpu_addr + offset + AMDGPU_UVD_HEAP_SIZE));
+	WREG32(SOC15_REG_OFFSET(UVD, 0, mmUVD_VCPU_CACHE_OFFSET2), (2 << 21));
+	WREG32(SOC15_REG_OFFSET(UVD, 0, mmUVD_VCPU_CACHE_SIZE2),
+			AMDGPU_UVD_STACK_SIZE + (AMDGPU_UVD_SESSION_SIZE * 40));
+
+	WREG32(SOC15_REG_OFFSET(UVD, 0, mmUVD_UDEC_ADDR_CONFIG),
+			adev->gfx.config.gb_addr_config);
+	WREG32(SOC15_REG_OFFSET(UVD, 0, mmUVD_UDEC_DB_ADDR_CONFIG),
+			adev->gfx.config.gb_addr_config);
+	WREG32(SOC15_REG_OFFSET(UVD, 0, mmUVD_UDEC_DBW_ADDR_CONFIG),
+			adev->gfx.config.gb_addr_config);
+
+	WREG32(SOC15_REG_OFFSET(UVD, 0, mmUVD_GP_SCRATCH4), adev->uvd.max_handles);
+}
+
+/**
+ * uvd_v7_0_start - start UVD block
+ *
+ * @adev: amdgpu_device pointer
+ *
+ * Setup and start the UVD block
+ */
+static int uvd_v7_0_start(struct amdgpu_device *adev)
+{
+	struct amdgpu_ring *ring = &adev->uvd.ring;
+	uint32_t rb_bufsz, tmp;
+	uint32_t lmi_swap_cntl;
+	uint32_t mp_swap_cntl;
+	int i, j, r;
+
+	/* disable DPG */
+	WREG32_P(SOC15_REG_OFFSET(UVD, 0, mmUVD_POWER_STATUS), 0,
+			~UVD_POWER_STATUS__UVD_PG_MODE_MASK);
+
+	/* disable byte swapping */
+	lmi_swap_cntl = 0;
+	mp_swap_cntl = 0;
+
+	uvd_v7_0_mc_resume(adev);
+
+	/* disable clock gating */
+	WREG32_P(SOC15_REG_OFFSET(UVD, 0, mmUVD_CGC_CTRL), 0,
+			~UVD_CGC_CTRL__DYN_CLOCK_MODE_MASK);
+
+	/* disable interrupt */
+	WREG32_P(SOC15_REG_OFFSET(UVD, 0, mmUVD_MASTINT_EN), 0,
+			~UVD_MASTINT_EN__VCPU_EN_MASK);
+
+	/* stall UMC and register bus before resetting VCPU */
+	WREG32_P(SOC15_REG_OFFSET(UVD, 0, mmUVD_LMI_CTRL2),
+			UVD_LMI_CTRL2__STALL_ARB_UMC_MASK,
+			~UVD_LMI_CTRL2__STALL_ARB_UMC_MASK);
+	mdelay(1);
+
+	/* put LMI, VCPU, RBC etc... into reset */
+	WREG32(SOC15_REG_OFFSET(UVD, 0, mmUVD_SOFT_RESET),
+		UVD_SOFT_RESET__LMI_SOFT_RESET_MASK |
+		UVD_SOFT_RESET__VCPU_SOFT_RESET_MASK |
+		UVD_SOFT_RESET__LBSI_SOFT_RESET_MASK |
+		UVD_SOFT_RESET__RBC_SOFT_RESET_MASK |
+		UVD_SOFT_RESET__CSM_SOFT_RESET_MASK |
+		UVD_SOFT_RESET__CXW_SOFT_RESET_MASK |
+		UVD_SOFT_RESET__TAP_SOFT_RESET_MASK |
+		UVD_SOFT_RESET__LMI_UMC_SOFT_RESET_MASK);
+	mdelay(5);
+
+	/* initialize UVD memory controller */
+	WREG32(SOC15_REG_OFFSET(UVD, 0, mmUVD_LMI_CTRL),
+		(0x40 << UVD_LMI_CTRL__WRITE_CLEAN_TIMER__SHIFT) |
+		UVD_LMI_CTRL__WRITE_CLEAN_TIMER_EN_MASK |
+		UVD_LMI_CTRL__DATA_COHERENCY_EN_MASK |
+		UVD_LMI_CTRL__VCPU_DATA_COHERENCY_EN_MASK |
+		UVD_LMI_CTRL__REQ_MODE_MASK |
+		0x00100000L);
+
+#ifdef __BIG_ENDIAN
+	/* swap (8 in 32) RB and IB */
+	lmi_swap_cntl = 0xa;
+	mp_swap_cntl = 0;
+#endif
+	WREG32(SOC15_REG_OFFSET(UVD, 0, mmUVD_LMI_SWAP_CNTL), lmi_swap_cntl);
+	WREG32(SOC15_REG_OFFSET(UVD, 0, mmUVD_MP_SWAP_CNTL), mp_swap_cntl);
+
+	WREG32(SOC15_REG_OFFSET(UVD, 0, mmUVD_MPC_SET_MUXA0), 0x40c2040);
+	WREG32(SOC15_REG_OFFSET(UVD, 0, mmUVD_MPC_SET_MUXA1), 0x0);
+	WREG32(SOC15_REG_OFFSET(UVD, 0, mmUVD_MPC_SET_MUXB0), 0x40c2040);
+	WREG32(SOC15_REG_OFFSET(UVD, 0, mmUVD_MPC_SET_MUXB1), 0x0);
+	WREG32(SOC15_REG_OFFSET(UVD, 0, mmUVD_MPC_SET_ALU), 0);
+	WREG32(SOC15_REG_OFFSET(UVD, 0, mmUVD_MPC_SET_MUX), 0x88);
+
+	/* take all subblocks out of reset, except VCPU */
+	WREG32(SOC15_REG_OFFSET(UVD, 0, mmUVD_SOFT_RESET),
+			UVD_SOFT_RESET__VCPU_SOFT_RESET_MASK);
+	mdelay(5);
+
+	/* enable VCPU clock */
+	WREG32(SOC15_REG_OFFSET(UVD, 0, mmUVD_VCPU_CNTL),
+			UVD_VCPU_CNTL__CLK_EN_MASK);
+
+	/* enable UMC */
+	WREG32_P(SOC15_REG_OFFSET(UVD, 0, mmUVD_LMI_CTRL2), 0,
+			~UVD_LMI_CTRL2__STALL_ARB_UMC_MASK);
+
+	/* boot up the VCPU */
+	WREG32(SOC15_REG_OFFSET(UVD, 0, mmUVD_SOFT_RESET), 0);
+	mdelay(10);
+
+	for (i = 0; i < 10; ++i) {
+		uint32_t status;
+
+		for (j = 0; j < 100; ++j) {
+			status = RREG32(SOC15_REG_OFFSET(UVD, 0, mmUVD_STATUS));
+			if (status & 2)
+				break;
+			mdelay(10);
+		}
+		r = 0;
+		if (status & 2)
+			break;
+
+		DRM_ERROR("UVD not responding, trying to reset the VCPU!!!\n");
+		WREG32_P(SOC15_REG_OFFSET(UVD, 0, mmUVD_SOFT_RESET),
+				UVD_SOFT_RESET__VCPU_SOFT_RESET_MASK,
+				~UVD_SOFT_RESET__VCPU_SOFT_RESET_MASK);
+		mdelay(10);
+		WREG32_P(SOC15_REG_OFFSET(UVD, 0, mmUVD_SOFT_RESET), 0,
+				~UVD_SOFT_RESET__VCPU_SOFT_RESET_MASK);
+		mdelay(10);
+		r = -1;
+	}
+
+	if (r) {
+		DRM_ERROR("UVD not responding, giving up!!!\n");
+		return r;
+	}
+	/* enable master interrupt */
+	WREG32_P(SOC15_REG_OFFSET(UVD, 0, mmUVD_MASTINT_EN),
+		(UVD_MASTINT_EN__VCPU_EN_MASK|UVD_MASTINT_EN__SYS_EN_MASK),
+		~(UVD_MASTINT_EN__VCPU_EN_MASK|UVD_MASTINT_EN__SYS_EN_MASK));
+
+	/* clear bit 4 of UVD_STATUS */
+	WREG32_P(SOC15_REG_OFFSET(UVD, 0, mmUVD_STATUS), 0,
+			~(2 << UVD_STATUS__VCPU_REPORT__SHIFT));
+
+	/* force RBC into idle state */
+	rb_bufsz = order_base_2(ring->ring_size);
+	tmp = REG_SET_FIELD(0, UVD_RBC_RB_CNTL, RB_BUFSZ, rb_bufsz);
+	tmp = REG_SET_FIELD(tmp, UVD_RBC_RB_CNTL, RB_BLKSZ, 1);
+	tmp = REG_SET_FIELD(tmp, UVD_RBC_RB_CNTL, RB_NO_FETCH, 1);
+	tmp = REG_SET_FIELD(tmp, UVD_RBC_RB_CNTL, RB_WPTR_POLL_EN, 0);
+	tmp = REG_SET_FIELD(tmp, UVD_RBC_RB_CNTL, RB_NO_UPDATE, 1);
+	tmp = REG_SET_FIELD(tmp, UVD_RBC_RB_CNTL, RB_RPTR_WR_EN, 1);
+	WREG32(SOC15_REG_OFFSET(UVD, 0, mmUVD_RBC_RB_CNTL), tmp);
+
+	/* set the write pointer delay */
+	WREG32(SOC15_REG_OFFSET(UVD, 0, mmUVD_RBC_RB_WPTR_CNTL), 0);
+
+	/* set the wb address */
+	WREG32(SOC15_REG_OFFSET(UVD, 0, mmUVD_RBC_RB_RPTR_ADDR),
+			(upper_32_bits(ring->gpu_addr) >> 2));
+
+	/* program the RB_BASE for ring buffer */
+	WREG32(SOC15_REG_OFFSET(UVD, 0, mmUVD_LMI_RBC_RB_64BIT_BAR_LOW),
+			lower_32_bits(ring->gpu_addr));
+	WREG32(SOC15_REG_OFFSET(UVD, 0, mmUVD_LMI_RBC_RB_64BIT_BAR_HIGH),
+			upper_32_bits(ring->gpu_addr));
+
+	/* Initialize the ring buffer's read and write pointers */
+	WREG32(SOC15_REG_OFFSET(UVD, 0, mmUVD_RBC_RB_RPTR), 0);
+
+	ring->wptr = RREG32(SOC15_REG_OFFSET(UVD, 0, mmUVD_RBC_RB_RPTR));
+	WREG32(SOC15_REG_OFFSET(UVD, 0, mmUVD_RBC_RB_WPTR),
+			lower_32_bits(ring->wptr));
+
+	WREG32_P(SOC15_REG_OFFSET(UVD, 0, mmUVD_RBC_RB_CNTL), 0,
+			~UVD_RBC_RB_CNTL__RB_NO_FETCH_MASK);
+
+	ring = &adev->uvd.ring_enc[0];
+	WREG32(SOC15_REG_OFFSET(UVD, 0, mmUVD_RB_RPTR), lower_32_bits(ring->wptr));
+	WREG32(SOC15_REG_OFFSET(UVD, 0, mmUVD_RB_WPTR), lower_32_bits(ring->wptr));
+	WREG32(SOC15_REG_OFFSET(UVD, 0, mmUVD_RB_BASE_LO), ring->gpu_addr);
+	WREG32(SOC15_REG_OFFSET(UVD, 0, mmUVD_RB_BASE_HI), upper_32_bits(ring->gpu_addr));
+	WREG32(SOC15_REG_OFFSET(UVD, 0, mmUVD_RB_SIZE), ring->ring_size / 4);
+
+	ring = &adev->uvd.ring_enc[1];
+	WREG32(SOC15_REG_OFFSET(UVD, 0, mmUVD_RB_RPTR2), lower_32_bits(ring->wptr));
+	WREG32(SOC15_REG_OFFSET(UVD, 0, mmUVD_RB_WPTR2), lower_32_bits(ring->wptr));
+	WREG32(SOC15_REG_OFFSET(UVD, 0, mmUVD_RB_BASE_LO2), ring->gpu_addr);
+	WREG32(SOC15_REG_OFFSET(UVD, 0, mmUVD_RB_BASE_HI2), upper_32_bits(ring->gpu_addr));
+	WREG32(SOC15_REG_OFFSET(UVD, 0, mmUVD_RB_SIZE2), ring->ring_size / 4);
+
+	return 0;
+}
+
+/**
+ * uvd_v7_0_stop - stop UVD block
+ *
+ * @adev: amdgpu_device pointer
+ *
+ * Stop the UVD block
+ */
+static void uvd_v7_0_stop(struct amdgpu_device *adev)
+{
+	/* force RBC into idle state */
+	WREG32(SOC15_REG_OFFSET(UVD, 0, mmUVD_RBC_RB_CNTL), 0x11010101);
+
+	/* Stall UMC and register bus before resetting VCPU */
+	WREG32_P(SOC15_REG_OFFSET(UVD, 0, mmUVD_LMI_CTRL2),
+			UVD_LMI_CTRL2__STALL_ARB_UMC_MASK,
+			~UVD_LMI_CTRL2__STALL_ARB_UMC_MASK);
+	mdelay(1);
+
+	/* put VCPU into reset */
+	WREG32(SOC15_REG_OFFSET(UVD, 0, mmUVD_SOFT_RESET),
+			UVD_SOFT_RESET__VCPU_SOFT_RESET_MASK);
+	mdelay(5);
+
+	/* disable VCPU clock */
+	WREG32(SOC15_REG_OFFSET(UVD, 0, mmUVD_VCPU_CNTL), 0x0);
+
+	/* Unstall UMC and register bus */
+	WREG32_P(SOC15_REG_OFFSET(UVD, 0, mmUVD_LMI_CTRL2), 0,
+			~UVD_LMI_CTRL2__STALL_ARB_UMC_MASK);
+}
+
+/**
+ * uvd_v7_0_ring_emit_fence - emit a fence & trap command
+ *
+ * @ring: amdgpu_ring pointer
+ * @addr: GPU address to write the fence sequence number to
+ * @seq: fence sequence number
+ * @flags: fence flags
+ *
+ * Write a fence and a trap command to the ring.
+ */
+static void uvd_v7_0_ring_emit_fence(struct amdgpu_ring *ring, u64 addr, u64 seq,
+				     unsigned flags)
+{
+	WARN_ON(flags & AMDGPU_FENCE_FLAG_64BIT);
+
+	amdgpu_ring_write(ring,
+		PACKET0(SOC15_REG_OFFSET(UVD, 0, mmUVD_CONTEXT_ID), 0));
+	amdgpu_ring_write(ring, seq);
+	amdgpu_ring_write(ring,
+		PACKET0(SOC15_REG_OFFSET(UVD, 0, mmUVD_GPCOM_VCPU_DATA0), 0));
+	amdgpu_ring_write(ring, addr & 0xffffffff);
+	amdgpu_ring_write(ring,
+		PACKET0(SOC15_REG_OFFSET(UVD, 0, mmUVD_GPCOM_VCPU_DATA1), 0));
+	amdgpu_ring_write(ring, upper_32_bits(addr) & 0xff);
+	amdgpu_ring_write(ring,
+		PACKET0(SOC15_REG_OFFSET(UVD, 0, mmUVD_GPCOM_VCPU_CMD), 0));
+	amdgpu_ring_write(ring, 0);
+
+	amdgpu_ring_write(ring,
+		PACKET0(SOC15_REG_OFFSET(UVD, 0, mmUVD_GPCOM_VCPU_DATA0), 0));
+	amdgpu_ring_write(ring, 0);
+	amdgpu_ring_write(ring,
+		PACKET0(SOC15_REG_OFFSET(UVD, 0, mmUVD_GPCOM_VCPU_DATA1), 0));
+	amdgpu_ring_write(ring, 0);
+	amdgpu_ring_write(ring,
+		PACKET0(SOC15_REG_OFFSET(UVD, 0, mmUVD_GPCOM_VCPU_CMD), 0));
+	amdgpu_ring_write(ring, 2);
+}
+
+/**
+ * uvd_v7_0_enc_ring_emit_fence - emit an enc fence & trap command
+ *
+ * @ring: amdgpu_ring pointer
+ * @addr: GPU address to write the fence sequence number to
+ * @seq: fence sequence number
+ * @flags: fence flags
+ *
+ * Write an enc fence and a trap command to the ring.
+ */
+static void uvd_v7_0_enc_ring_emit_fence(struct amdgpu_ring *ring, u64 addr,
+			u64 seq, unsigned flags)
+{
+	WARN_ON(flags & AMDGPU_FENCE_FLAG_64BIT);
+
+	amdgpu_ring_write(ring, HEVC_ENC_CMD_FENCE);
+	amdgpu_ring_write(ring, addr);
+	amdgpu_ring_write(ring, upper_32_bits(addr));
+	amdgpu_ring_write(ring, seq);
+	amdgpu_ring_write(ring, HEVC_ENC_CMD_TRAP);
+}
+
+/**
+ * uvd_v7_0_ring_emit_hdp_flush - emit an HDP flush
+ *
+ * @ring: amdgpu_ring pointer
+ *
+ * Emits an HDP flush.
+ */
+static void uvd_v7_0_ring_emit_hdp_flush(struct amdgpu_ring *ring)
+{
+	amdgpu_ring_write(ring, PACKET0(SOC15_REG_OFFSET(NBIF, 0,
+		mmHDP_MEM_COHERENCY_FLUSH_CNTL), 0));
+	amdgpu_ring_write(ring, 0);
+}
+
+/**
+ * uvd_v7_0_ring_emit_hdp_invalidate - emit an HDP invalidate
+ *
+ * @ring: amdgpu_ring pointer
+ *
+ * Emits an HDP invalidate.
+ */
+static void uvd_v7_0_ring_emit_hdp_invalidate(struct amdgpu_ring *ring)
+{
+	amdgpu_ring_write(ring, PACKET0(SOC15_REG_OFFSET(HDP, 0, mmHDP_DEBUG0), 0));
+	amdgpu_ring_write(ring, 1);
+}
+
+/**
+ * uvd_v7_0_ring_test_ring - register write test
+ *
+ * @ring: amdgpu_ring pointer
+ *
+ * Test if we can successfully write to the context register
+ */
+static int uvd_v7_0_ring_test_ring(struct amdgpu_ring *ring)
+{
+	struct amdgpu_device *adev = ring->adev;
+	uint32_t tmp = 0;
+	unsigned i;
+	int r;
+
+	WREG32(SOC15_REG_OFFSET(UVD, 0, mmUVD_CONTEXT_ID), 0xCAFEDEAD);
+	r = amdgpu_ring_alloc(ring, 3);
+	if (r) {
+		DRM_ERROR("amdgpu: cp failed to lock ring %d (%d).\n",
+			  ring->idx, r);
+		return r;
+	}
+	amdgpu_ring_write(ring,
+		PACKET0(SOC15_REG_OFFSET(UVD, 0, mmUVD_CONTEXT_ID), 0));
+	amdgpu_ring_write(ring, 0xDEADBEEF);
+	amdgpu_ring_commit(ring);
+	for (i = 0; i < adev->usec_timeout; i++) {
+		tmp = RREG32(SOC15_REG_OFFSET(UVD, 0, mmUVD_CONTEXT_ID));
+		if (tmp == 0xDEADBEEF)
+			break;
+		DRM_UDELAY(1);
+	}
+
+	if (i < adev->usec_timeout) {
+		DRM_INFO("ring test on %d succeeded in %d usecs\n",
+			 ring->idx, i);
+	} else {
+		DRM_ERROR("amdgpu: ring %d test failed (0x%08X)\n",
+			  ring->idx, tmp);
+		r = -EINVAL;
+	}
+	return r;
+}
+
+/**
+ * uvd_v7_0_ring_emit_ib - execute indirect buffer
+ *
+ * @ring: amdgpu_ring pointer
+ * @ib: indirect buffer to execute
+ *
+ * Write ring commands to execute the indirect buffer
+ */
+static void uvd_v7_0_ring_emit_ib(struct amdgpu_ring *ring,
+				  struct amdgpu_ib *ib,
+				  unsigned vm_id, bool ctx_switch)
+{
+	amdgpu_ring_write(ring,
+		PACKET0(SOC15_REG_OFFSET(UVD, 0, mmUVD_LMI_RBC_IB_VMID), 0));
+	amdgpu_ring_write(ring, vm_id);
+
+	amdgpu_ring_write(ring,
+		PACKET0(SOC15_REG_OFFSET(UVD, 0, mmUVD_LMI_RBC_IB_64BIT_BAR_LOW), 0));
+	amdgpu_ring_write(ring, lower_32_bits(ib->gpu_addr));
+	amdgpu_ring_write(ring,
+		PACKET0(SOC15_REG_OFFSET(UVD, 0, mmUVD_LMI_RBC_IB_64BIT_BAR_HIGH), 0));
+	amdgpu_ring_write(ring, upper_32_bits(ib->gpu_addr));
+	amdgpu_ring_write(ring,
+		PACKET0(SOC15_REG_OFFSET(UVD, 0, mmUVD_RBC_IB_SIZE), 0));
+	amdgpu_ring_write(ring, ib->length_dw);
+}
+
+/**
+ * uvd_v7_0_enc_ring_emit_ib - enc execute indirect buffer
+ *
+ * @ring: amdgpu_ring pointer
+ * @ib: indirect buffer to execute
+ *
+ * Write enc ring commands to execute the indirect buffer
+ */
+static void uvd_v7_0_enc_ring_emit_ib(struct amdgpu_ring *ring,
+		struct amdgpu_ib *ib, unsigned int vm_id, bool ctx_switch)
+{
+	amdgpu_ring_write(ring, HEVC_ENC_CMD_IB_VM);
+	amdgpu_ring_write(ring, vm_id);
+	amdgpu_ring_write(ring, lower_32_bits(ib->gpu_addr));
+	amdgpu_ring_write(ring, upper_32_bits(ib->gpu_addr));
+	amdgpu_ring_write(ring, ib->length_dw);
+}
+
+static void uvd_v7_0_vm_reg_write(struct amdgpu_ring *ring,
+				uint32_t data0, uint32_t data1)
+{
+	amdgpu_ring_write(ring,
+		PACKET0(SOC15_REG_OFFSET(UVD, 0, mmUVD_GPCOM_VCPU_DATA0), 0));
+	amdgpu_ring_write(ring, data0);
+	amdgpu_ring_write(ring,
+		PACKET0(SOC15_REG_OFFSET(UVD, 0, mmUVD_GPCOM_VCPU_DATA1), 0));
+	amdgpu_ring_write(ring, data1);
+	amdgpu_ring_write(ring,
+		PACKET0(SOC15_REG_OFFSET(UVD, 0, mmUVD_GPCOM_VCPU_CMD), 0));
+	amdgpu_ring_write(ring, 8);
+}
+
+static void uvd_v7_0_vm_reg_wait(struct amdgpu_ring *ring,
+				uint32_t data0, uint32_t data1, uint32_t mask)
+{
+	amdgpu_ring_write(ring,
+		PACKET0(SOC15_REG_OFFSET(UVD, 0, mmUVD_GPCOM_VCPU_DATA0), 0));
+	amdgpu_ring_write(ring, data0);
+	amdgpu_ring_write(ring,
+		PACKET0(SOC15_REG_OFFSET(UVD, 0, mmUVD_GPCOM_VCPU_DATA1), 0));
+	amdgpu_ring_write(ring, data1);
+	amdgpu_ring_write(ring,
+		PACKET0(SOC15_REG_OFFSET(UVD, 0, mmUVD_GP_SCRATCH8), 0));
+	amdgpu_ring_write(ring, mask);
+	amdgpu_ring_write(ring,
+		PACKET0(SOC15_REG_OFFSET(UVD, 0, mmUVD_GPCOM_VCPU_CMD), 0));
+	amdgpu_ring_write(ring, 12);
+}
+
+static void uvd_v7_0_ring_emit_vm_flush(struct amdgpu_ring *ring,
+					unsigned vm_id, uint64_t pd_addr)
+{
+	uint32_t data0, data1, mask;
+	unsigned eng = ring->idx;
+	unsigned i;
+
+	pd_addr = pd_addr | 0x1; /* valid bit */
+	/* for now, only use the physical base address of the PDE and the valid bit */
+	BUG_ON(pd_addr & 0xFFFF00000000003EULL);
+
+	for (i = 0; i < AMDGPU_MAX_VMHUBS; ++i) {
+		struct amdgpu_vmhub *hub = &ring->adev->vmhub[i];
+		uint32_t req = hub->get_invalidate_req(vm_id);
+
+		data0 = (hub->ctx0_ptb_addr_hi32 + vm_id * 2) << 2;
+		data1 = upper_32_bits(pd_addr);
+		uvd_v7_0_vm_reg_write(ring, data0, data1);
+
+		data0 = (hub->ctx0_ptb_addr_lo32 + vm_id * 2) << 2;
+		data1 = lower_32_bits(pd_addr);
+		uvd_v7_0_vm_reg_write(ring, data0, data1);
+
+		data0 = (hub->ctx0_ptb_addr_lo32 + vm_id * 2) << 2;
+		data1 = lower_32_bits(pd_addr);
+		mask = 0xffffffff;
+		uvd_v7_0_vm_reg_wait(ring, data0, data1, mask);
+
+		/* flush TLB */
+		data0 = (hub->vm_inv_eng0_req + eng) << 2;
+		data1 = req;
+		uvd_v7_0_vm_reg_write(ring, data0, data1);
+
+		/* wait for flush */
+		data0 = (hub->vm_inv_eng0_ack + eng) << 2;
+		data1 = 1 << vm_id;
+		mask =  1 << vm_id;
+		uvd_v7_0_vm_reg_wait(ring, data0, data1, mask);
+	}
+}
+
+static void uvd_v7_0_enc_ring_insert_end(struct amdgpu_ring *ring)
+{
+	amdgpu_ring_write(ring, HEVC_ENC_CMD_END);
+}
+
+static void uvd_v7_0_enc_ring_emit_vm_flush(struct amdgpu_ring *ring,
+			 unsigned int vm_id, uint64_t pd_addr)
+{
+	unsigned eng = ring->idx;
+	unsigned i;
+
+	pd_addr = pd_addr | 0x1; /* valid bit */
+	/* for now, only use the physical base address of the PDE and the valid bit */
+	BUG_ON(pd_addr & 0xFFFF00000000003EULL);
+
+	for (i = 0; i < AMDGPU_MAX_VMHUBS; ++i) {
+		struct amdgpu_vmhub *hub = &ring->adev->vmhub[i];
+		uint32_t req = hub->get_invalidate_req(vm_id);
+
+		amdgpu_ring_write(ring, HEVC_ENC_CMD_REG_WRITE);
+		amdgpu_ring_write(ring,
+			(hub->ctx0_ptb_addr_hi32 + vm_id * 2) << 2);
+		amdgpu_ring_write(ring, upper_32_bits(pd_addr));
+
+		amdgpu_ring_write(ring, HEVC_ENC_CMD_REG_WRITE);
+		amdgpu_ring_write(ring,
+			(hub->ctx0_ptb_addr_lo32 + vm_id * 2) << 2);
+		amdgpu_ring_write(ring, lower_32_bits(pd_addr));
+
+		amdgpu_ring_write(ring, HEVC_ENC_CMD_REG_WAIT);
+		amdgpu_ring_write(ring,
+			(hub->ctx0_ptb_addr_lo32 + vm_id * 2) << 2);
+		amdgpu_ring_write(ring, 0xffffffff);
+		amdgpu_ring_write(ring, lower_32_bits(pd_addr));
+
+		/* flush TLB */
+		amdgpu_ring_write(ring, HEVC_ENC_CMD_REG_WRITE);
+		amdgpu_ring_write(ring,	(hub->vm_inv_eng0_req + eng) << 2);
+		amdgpu_ring_write(ring, req);
+
+		/* wait for flush */
+		amdgpu_ring_write(ring, HEVC_ENC_CMD_REG_WAIT);
+		amdgpu_ring_write(ring, (hub->vm_inv_eng0_ack + eng) << 2);
+		amdgpu_ring_write(ring, 1 << vm_id);
+		amdgpu_ring_write(ring, 1 << vm_id);
+	}
+}
+
+#if 0
+static bool uvd_v7_0_is_idle(void *handle)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	return !(RREG32(mmSRBM_STATUS) & SRBM_STATUS__UVD_BUSY_MASK);
+}
+
+static int uvd_v7_0_wait_for_idle(void *handle)
+{
+	unsigned i;
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	for (i = 0; i < adev->usec_timeout; i++) {
+		if (uvd_v7_0_is_idle(handle))
+			return 0;
+	}
+	return -ETIMEDOUT;
+}
+
+#define AMDGPU_UVD_STATUS_BUSY_MASK    0xfd
+static bool uvd_v7_0_check_soft_reset(void *handle)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+	u32 srbm_soft_reset = 0;
+	u32 tmp = RREG32(mmSRBM_STATUS);
+
+	if (REG_GET_FIELD(tmp, SRBM_STATUS, UVD_RQ_PENDING) ||
+	    REG_GET_FIELD(tmp, SRBM_STATUS, UVD_BUSY) ||
+	    (RREG32(SOC15_REG_OFFSET(UVD, 0, mmUVD_STATUS) &
+		    AMDGPU_UVD_STATUS_BUSY_MASK)))
+		srbm_soft_reset = REG_SET_FIELD(srbm_soft_reset,
+				SRBM_SOFT_RESET, SOFT_RESET_UVD, 1);
+
+	if (srbm_soft_reset) {
+		adev->uvd.srbm_soft_reset = srbm_soft_reset;
+		return true;
+	} else {
+		adev->uvd.srbm_soft_reset = 0;
+		return false;
+	}
+}
+
+static int uvd_v7_0_pre_soft_reset(void *handle)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	if (!adev->uvd.srbm_soft_reset)
+		return 0;
+
+	uvd_v7_0_stop(adev);
+	return 0;
+}
+
+static int uvd_v7_0_soft_reset(void *handle)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+	u32 srbm_soft_reset;
+
+	if (!adev->uvd.srbm_soft_reset)
+		return 0;
+	srbm_soft_reset = adev->uvd.srbm_soft_reset;
+
+	if (srbm_soft_reset) {
+		u32 tmp;
+
+		tmp = RREG32(mmSRBM_SOFT_RESET);
+		tmp |= srbm_soft_reset;
+		dev_info(adev->dev, "SRBM_SOFT_RESET=0x%08X\n", tmp);
+		WREG32(mmSRBM_SOFT_RESET, tmp);
+		tmp = RREG32(mmSRBM_SOFT_RESET);
+
+		udelay(50);
+
+		tmp &= ~srbm_soft_reset;
+		WREG32(mmSRBM_SOFT_RESET, tmp);
+		tmp = RREG32(mmSRBM_SOFT_RESET);
+
+		/* Wait a little for things to settle down */
+		udelay(50);
+	}
+
+	return 0;
+}
+
+static int uvd_v7_0_post_soft_reset(void *handle)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	if (!adev->uvd.srbm_soft_reset)
+		return 0;
+
+	mdelay(5);
+
+	return uvd_v7_0_start(adev);
+}
+#endif
+
+static int uvd_v7_0_set_interrupt_state(struct amdgpu_device *adev,
+					struct amdgpu_irq_src *source,
+					unsigned type,
+					enum amdgpu_interrupt_state state)
+{
+	/* TODO */
+	return 0;
+}
+
+static int uvd_v7_0_process_interrupt(struct amdgpu_device *adev,
+				      struct amdgpu_irq_src *source,
+				      struct amdgpu_iv_entry *entry)
+{
+	DRM_DEBUG("IH: UVD TRAP\n");
+	switch (entry->src_id) {
+	case 124:
+		amdgpu_fence_process(&adev->uvd.ring);
+		break;
+	case 119:
+		amdgpu_fence_process(&adev->uvd.ring_enc[0]);
+		break;
+	case 120:
+		amdgpu_fence_process(&adev->uvd.ring_enc[1]);
+		break;
+	default:
+		DRM_ERROR("Unhandled interrupt: %d %d\n",
+			  entry->src_id, entry->src_data[0]);
+		break;
+	}
+
+	return 0;
+}
+
+#if 0
+static void uvd_v7_0_set_sw_clock_gating(struct amdgpu_device *adev)
+{
+	uint32_t data, data1, data2, suvd_flags;
+
+	data = RREG32(SOC15_REG_OFFSET(UVD, 0, mmUVD_CGC_CTRL));
+	data1 = RREG32(SOC15_REG_OFFSET(UVD, 0, mmUVD_SUVD_CGC_GATE));
+	data2 = RREG32(SOC15_REG_OFFSET(UVD, 0, mmUVD_SUVD_CGC_CTRL));
+
+	data &= ~(UVD_CGC_CTRL__CLK_OFF_DELAY_MASK |
+		  UVD_CGC_CTRL__CLK_GATE_DLY_TIMER_MASK);
+
+	suvd_flags = UVD_SUVD_CGC_GATE__SRE_MASK |
+		     UVD_SUVD_CGC_GATE__SIT_MASK |
+		     UVD_SUVD_CGC_GATE__SMP_MASK |
+		     UVD_SUVD_CGC_GATE__SCM_MASK |
+		     UVD_SUVD_CGC_GATE__SDB_MASK;
+
+	data |= UVD_CGC_CTRL__DYN_CLOCK_MODE_MASK |
+		(1 << REG_FIELD_SHIFT(UVD_CGC_CTRL, CLK_GATE_DLY_TIMER)) |
+		(4 << REG_FIELD_SHIFT(UVD_CGC_CTRL, CLK_OFF_DELAY));
+
+	data &= ~(UVD_CGC_CTRL__UDEC_RE_MODE_MASK |
+			UVD_CGC_CTRL__UDEC_CM_MODE_MASK |
+			UVD_CGC_CTRL__UDEC_IT_MODE_MASK |
+			UVD_CGC_CTRL__UDEC_DB_MODE_MASK |
+			UVD_CGC_CTRL__UDEC_MP_MODE_MASK |
+			UVD_CGC_CTRL__SYS_MODE_MASK |
+			UVD_CGC_CTRL__UDEC_MODE_MASK |
+			UVD_CGC_CTRL__MPEG2_MODE_MASK |
+			UVD_CGC_CTRL__REGS_MODE_MASK |
+			UVD_CGC_CTRL__RBC_MODE_MASK |
+			UVD_CGC_CTRL__LMI_MC_MODE_MASK |
+			UVD_CGC_CTRL__LMI_UMC_MODE_MASK |
+			UVD_CGC_CTRL__IDCT_MODE_MASK |
+			UVD_CGC_CTRL__MPRD_MODE_MASK |
+			UVD_CGC_CTRL__MPC_MODE_MASK |
+			UVD_CGC_CTRL__LBSI_MODE_MASK |
+			UVD_CGC_CTRL__LRBBM_MODE_MASK |
+			UVD_CGC_CTRL__WCB_MODE_MASK |
+			UVD_CGC_CTRL__VCPU_MODE_MASK |
+			UVD_CGC_CTRL__JPEG_MODE_MASK |
+			UVD_CGC_CTRL__JPEG2_MODE_MASK |
+			UVD_CGC_CTRL__SCPU_MODE_MASK);
+	data2 &= ~(UVD_SUVD_CGC_CTRL__SRE_MODE_MASK |
+			UVD_SUVD_CGC_CTRL__SIT_MODE_MASK |
+			UVD_SUVD_CGC_CTRL__SMP_MODE_MASK |
+			UVD_SUVD_CGC_CTRL__SCM_MODE_MASK |
+			UVD_SUVD_CGC_CTRL__SDB_MODE_MASK);
+	data1 |= suvd_flags;
+
+	WREG32(SOC15_REG_OFFSET(UVD, 0, mmUVD_CGC_CTRL), data);
+	WREG32(SOC15_REG_OFFSET(UVD, 0, mmUVD_CGC_GATE), 0);
+	WREG32(SOC15_REG_OFFSET(UVD, 0, mmUVD_SUVD_CGC_GATE), data1);
+	WREG32(SOC15_REG_OFFSET(UVD, 0, mmUVD_SUVD_CGC_CTRL), data2);
+}
+
+static void uvd_v7_0_set_hw_clock_gating(struct amdgpu_device *adev)
+{
+	uint32_t data, data1, cgc_flags, suvd_flags;
+
+	data = RREG32(SOC15_REG_OFFSET(UVD, 0, mmUVD_CGC_GATE));
+	data1 = RREG32(SOC15_REG_OFFSET(UVD, 0, mmUVD_SUVD_CGC_GATE));
+
+	cgc_flags = UVD_CGC_GATE__SYS_MASK |
+		UVD_CGC_GATE__UDEC_MASK |
+		UVD_CGC_GATE__MPEG2_MASK |
+		UVD_CGC_GATE__RBC_MASK |
+		UVD_CGC_GATE__LMI_MC_MASK |
+		UVD_CGC_GATE__IDCT_MASK |
+		UVD_CGC_GATE__MPRD_MASK |
+		UVD_CGC_GATE__MPC_MASK |
+		UVD_CGC_GATE__LBSI_MASK |
+		UVD_CGC_GATE__LRBBM_MASK |
+		UVD_CGC_GATE__UDEC_RE_MASK |
+		UVD_CGC_GATE__UDEC_CM_MASK |
+		UVD_CGC_GATE__UDEC_IT_MASK |
+		UVD_CGC_GATE__UDEC_DB_MASK |
+		UVD_CGC_GATE__UDEC_MP_MASK |
+		UVD_CGC_GATE__WCB_MASK |
+		UVD_CGC_GATE__VCPU_MASK |
+		UVD_CGC_GATE__SCPU_MASK |
+		UVD_CGC_GATE__JPEG_MASK |
+		UVD_CGC_GATE__JPEG2_MASK;
+
+	suvd_flags = UVD_SUVD_CGC_GATE__SRE_MASK |
+				UVD_SUVD_CGC_GATE__SIT_MASK |
+				UVD_SUVD_CGC_GATE__SMP_MASK |
+				UVD_SUVD_CGC_GATE__SCM_MASK |
+				UVD_SUVD_CGC_GATE__SDB_MASK;
+
+	data |= cgc_flags;
+	data1 |= suvd_flags;
+
+	WREG32(SOC15_REG_OFFSET(UVD, 0, mmUVD_CGC_GATE), data);
+	WREG32(SOC15_REG_OFFSET(UVD, 0, mmUVD_SUVD_CGC_GATE), data1);
+}
+
+static void uvd_v7_0_set_bypass_mode(struct amdgpu_device *adev, bool enable)
+{
+	u32 tmp = RREG32_SMC(ixGCK_DFS_BYPASS_CNTL);
+
+	if (enable)
+		tmp |= (GCK_DFS_BYPASS_CNTL__BYPASSDCLK_MASK |
+			GCK_DFS_BYPASS_CNTL__BYPASSVCLK_MASK);
+	else
+		tmp &= ~(GCK_DFS_BYPASS_CNTL__BYPASSDCLK_MASK |
+			 GCK_DFS_BYPASS_CNTL__BYPASSVCLK_MASK);
+
+	WREG32_SMC(ixGCK_DFS_BYPASS_CNTL, tmp);
+}
+
+
+static int uvd_v7_0_set_clockgating_state(void *handle,
+					  enum amd_clockgating_state state)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+	bool enable = (state == AMD_CG_STATE_GATE);
+
+	uvd_v7_0_set_bypass_mode(adev, enable);
+
+	if (!(adev->cg_flags & AMD_CG_SUPPORT_UVD_MGCG))
+		return 0;
+
+	if (enable) {
+		/* disable HW gating and enable SW gating */
+		uvd_v7_0_set_sw_clock_gating(adev);
+	} else {
+		/* wait for STATUS to clear */
+		if (uvd_v7_0_wait_for_idle(handle))
+			return -EBUSY;
+
+		/* enable HW gates because UVD is idle */
+		/* uvd_v7_0_set_hw_clock_gating(adev); */
+	}
+
+	return 0;
+}
+
+static int uvd_v7_0_set_powergating_state(void *handle,
+					  enum amd_powergating_state state)
+{
+	/* This doesn't actually powergate the UVD block.
+	 * That's done in the dpm code via the SMC.  This
+	 * just re-inits the block as necessary.  The actual
+	 * gating still happens in the dpm code.  We should
+	 * revisit this when there is a cleaner separation
+	 * between the SMC and the HW blocks.
+	 */
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	if (!(adev->pg_flags & AMD_PG_SUPPORT_UVD))
+		return 0;
+
+	WREG32(SOC15_REG_OFFSET(UVD, 0, mmUVD_POWER_STATUS), UVD_POWER_STATUS__UVD_PG_EN_MASK);
+
+	if (state == AMD_PG_STATE_GATE) {
+		uvd_v7_0_stop(adev);
+		return 0;
+	} else {
+		return uvd_v7_0_start(adev);
+	}
+}
+#endif
+
+static int uvd_v7_0_set_clockgating_state(void *handle,
+					  enum amd_clockgating_state state)
+{
+	/* needed for driver unload */
+	return 0;
+}
+
+const struct amd_ip_funcs uvd_v7_0_ip_funcs = {
+	.name = "uvd_v7_0",
+	.early_init = uvd_v7_0_early_init,
+	.late_init = NULL,
+	.sw_init = uvd_v7_0_sw_init,
+	.sw_fini = uvd_v7_0_sw_fini,
+	.hw_init = uvd_v7_0_hw_init,
+	.hw_fini = uvd_v7_0_hw_fini,
+	.suspend = uvd_v7_0_suspend,
+	.resume = uvd_v7_0_resume,
+	.is_idle = NULL /* uvd_v7_0_is_idle */,
+	.wait_for_idle = NULL /* uvd_v7_0_wait_for_idle */,
+	.check_soft_reset = NULL /* uvd_v7_0_check_soft_reset */,
+	.pre_soft_reset = NULL /* uvd_v7_0_pre_soft_reset */,
+	.soft_reset = NULL /* uvd_v7_0_soft_reset */,
+	.post_soft_reset = NULL /* uvd_v7_0_post_soft_reset */,
+	.set_clockgating_state = uvd_v7_0_set_clockgating_state,
+	.set_powergating_state = NULL /* uvd_v7_0_set_powergating_state */,
+};
+
+static const struct amdgpu_ring_funcs uvd_v7_0_ring_vm_funcs = {
+	.type = AMDGPU_RING_TYPE_UVD,
+	.align_mask = 0xf,
+	.nop = PACKET0(SOC15_REG_OFFSET(UVD, 0, mmUVD_NO_OP), 0),
+	.support_64bit_ptrs = false,
+	.get_rptr = uvd_v7_0_ring_get_rptr,
+	.get_wptr = uvd_v7_0_ring_get_wptr,
+	.set_wptr = uvd_v7_0_ring_set_wptr,
+	.emit_frame_size =
+		2 + /* uvd_v7_0_ring_emit_hdp_flush */
+		2 + /* uvd_v7_0_ring_emit_hdp_invalidate */
+		34 * AMDGPU_MAX_VMHUBS + /* uvd_v7_0_ring_emit_vm_flush */
+		14 + 14, /* uvd_v7_0_ring_emit_fence x2 vm fence */
+	.emit_ib_size = 8, /* uvd_v7_0_ring_emit_ib */
+	.emit_ib = uvd_v7_0_ring_emit_ib,
+	.emit_fence = uvd_v7_0_ring_emit_fence,
+	.emit_vm_flush = uvd_v7_0_ring_emit_vm_flush,
+	.emit_hdp_flush = uvd_v7_0_ring_emit_hdp_flush,
+	.emit_hdp_invalidate = uvd_v7_0_ring_emit_hdp_invalidate,
+	.test_ring = uvd_v7_0_ring_test_ring,
+	.test_ib = amdgpu_uvd_ring_test_ib,
+	.insert_nop = amdgpu_ring_insert_nop,
+	.pad_ib = amdgpu_ring_generic_pad_ib,
+	.begin_use = amdgpu_uvd_ring_begin_use,
+	.end_use = amdgpu_uvd_ring_end_use,
+};
+
+static const struct amdgpu_ring_funcs uvd_v7_0_enc_ring_vm_funcs = {
+	.type = AMDGPU_RING_TYPE_UVD_ENC,
+	.align_mask = 0x3f,
+	.nop = HEVC_ENC_CMD_NO_OP,
+	.support_64bit_ptrs = false,
+	.get_rptr = uvd_v7_0_enc_ring_get_rptr,
+	.get_wptr = uvd_v7_0_enc_ring_get_wptr,
+	.set_wptr = uvd_v7_0_enc_ring_set_wptr,
+	.emit_frame_size =
+		17 * AMDGPU_MAX_VMHUBS + /* uvd_v7_0_enc_ring_emit_vm_flush */
+		5 + 5 + /* uvd_v7_0_enc_ring_emit_fence x2 vm fence */
+		1, /* uvd_v7_0_enc_ring_insert_end */
+	.emit_ib_size = 5, /* uvd_v7_0_enc_ring_emit_ib */
+	.emit_ib = uvd_v7_0_enc_ring_emit_ib,
+	.emit_fence = uvd_v7_0_enc_ring_emit_fence,
+	.emit_vm_flush = uvd_v7_0_enc_ring_emit_vm_flush,
+	.test_ring = uvd_v7_0_enc_ring_test_ring,
+	.test_ib = uvd_v7_0_enc_ring_test_ib,
+	.insert_nop = amdgpu_ring_insert_nop,
+	.insert_end = uvd_v7_0_enc_ring_insert_end,
+	.pad_ib = amdgpu_ring_generic_pad_ib,
+	.begin_use = amdgpu_uvd_ring_begin_use,
+	.end_use = amdgpu_uvd_ring_end_use,
+};
+
+static void uvd_v7_0_set_ring_funcs(struct amdgpu_device *adev)
+{
+	adev->uvd.ring.funcs = &uvd_v7_0_ring_vm_funcs;
+	DRM_INFO("UVD is enabled in VM mode\n");
+}
+
+static void uvd_v7_0_set_enc_ring_funcs(struct amdgpu_device *adev)
+{
+	int i;
+
+	for (i = 0; i < adev->uvd.num_enc_rings; ++i)
+		adev->uvd.ring_enc[i].funcs = &uvd_v7_0_enc_ring_vm_funcs;
+
+	DRM_INFO("UVD ENC is enabled in VM mode\n");
+}
+
+static const struct amdgpu_irq_src_funcs uvd_v7_0_irq_funcs = {
+	.set = uvd_v7_0_set_interrupt_state,
+	.process = uvd_v7_0_process_interrupt,
+};
+
+static void uvd_v7_0_set_irq_funcs(struct amdgpu_device *adev)
+{
+	adev->uvd.irq.num_types = adev->uvd.num_enc_rings + 1;
+	adev->uvd.irq.funcs = &uvd_v7_0_irq_funcs;
+}
+
+const struct amdgpu_ip_block_version uvd_v7_0_ip_block =
+{
+		.type = AMD_IP_BLOCK_TYPE_UVD,
+		.major = 7,
+		.minor = 0,
+		.rev = 0,
+		.funcs = &uvd_v7_0_ip_funcs,
+};
diff --git a/drivers/gpu/drm/amd/amdgpu/uvd_v7_0.h b/drivers/gpu/drm/amd/amdgpu/uvd_v7_0.h
new file mode 100644
index 0000000..cbe82ab
--- /dev/null
+++ b/drivers/gpu/drm/amd/amdgpu/uvd_v7_0.h
@@ -0,0 +1,29 @@
+/*
+ * Copyright 2016 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __UVD_V7_0_H__
+#define __UVD_V7_0_H__
+
+extern const struct amdgpu_ip_block_version uvd_v7_0_ip_block;
+
+#endif
-- 
2.5.5

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 101+ messages in thread

* [PATCH 051/100] drm/amdgpu: add initial vce 4.0 support for vega10
       [not found] ` <1490041835-11255-1-git-send-email-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (34 preceding siblings ...)
  2017-03-20 20:29   ` [PATCH 050/100] drm/amdgpu: add initial uvd 7.0 support for vega10 Alex Deucher
@ 2017-03-20 20:29   ` Alex Deucher
  2017-03-20 20:29   ` [PATCH 052/100] drm/amdgpu: add PSP driver " Alex Deucher
                     ` (49 subsequent siblings)
  85 siblings, 0 replies; 101+ messages in thread
From: Alex Deucher @ 2017-03-20 20:29 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Alex Deucher, Leo Liu

From: Leo Liu <leo.liu@amd.com>

Signed-off-by: Leo Liu <leo.liu@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/Makefile     |   3 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c |   7 +
 drivers/gpu/drm/amd/amdgpu/vce_v4_0.c   | 894 ++++++++++++++++++++++++++++++++
 drivers/gpu/drm/amd/amdgpu/vce_v4_0.h   |  29 ++
 4 files changed, 932 insertions(+), 1 deletion(-)
 create mode 100644 drivers/gpu/drm/amd/amdgpu/vce_v4_0.c
 create mode 100644 drivers/gpu/drm/amd/amdgpu/vce_v4_0.h

diff --git a/drivers/gpu/drm/amd/amdgpu/Makefile b/drivers/gpu/drm/amd/amdgpu/Makefile
index 65829fa..2ba5671 100644
--- a/drivers/gpu/drm/amd/amdgpu/Makefile
+++ b/drivers/gpu/drm/amd/amdgpu/Makefile
@@ -90,7 +90,8 @@ amdgpu-y += \
 # add VCE block
 amdgpu-y += \
 	amdgpu_vce.o \
-	vce_v3_0.o
+	vce_v3_0.o \
+	vce_v4_0.o
 
 # add amdkfd interfaces
 amdgpu-y += \
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c
index c46116c..647944b 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c
@@ -54,6 +54,8 @@
 #define FIRMWARE_POLARIS11         "amdgpu/polaris11_vce.bin"
 #define FIRMWARE_POLARIS12         "amdgpu/polaris12_vce.bin"
 
+#define FIRMWARE_VEGA10		"amdgpu/vega10_vce.bin"
+
 #ifdef CONFIG_DRM_AMDGPU_CIK
 MODULE_FIRMWARE(FIRMWARE_BONAIRE);
 MODULE_FIRMWARE(FIRMWARE_KABINI);
@@ -69,6 +71,8 @@ MODULE_FIRMWARE(FIRMWARE_POLARIS10);
 MODULE_FIRMWARE(FIRMWARE_POLARIS11);
 MODULE_FIRMWARE(FIRMWARE_POLARIS12);
 
+MODULE_FIRMWARE(FIRMWARE_VEGA10);
+
 static void amdgpu_vce_idle_work_handler(struct work_struct *work);
 
 /**
@@ -123,6 +127,9 @@ int amdgpu_vce_sw_init(struct amdgpu_device *adev, unsigned long size)
 	case CHIP_POLARIS11:
 		fw_name = FIRMWARE_POLARIS11;
 		break;
+	case CHIP_VEGA10:
+		fw_name = FIRMWARE_VEGA10;
+		break;
 	case CHIP_POLARIS12:
 		fw_name = FIRMWARE_POLARIS12;
 		break;
diff --git a/drivers/gpu/drm/amd/amdgpu/vce_v4_0.c b/drivers/gpu/drm/amd/amdgpu/vce_v4_0.c
new file mode 100644
index 0000000..74146be
--- /dev/null
+++ b/drivers/gpu/drm/amd/amdgpu/vce_v4_0.c
@@ -0,0 +1,894 @@
+/*
+ * Copyright 2016 Advanced Micro Devices, Inc.
+ * All Rights Reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the
+ * "Software"), to deal in the Software without restriction, including
+ * without limitation the rights to use, copy, modify, merge, publish,
+ * distribute, sub license, and/or sell copies of the Software, and to
+ * permit persons to whom the Software is furnished to do so, subject to
+ * the following conditions:
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM,
+ * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
+ * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE
+ * USE OR OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * The above copyright notice and this permission notice (including the
+ * next paragraph) shall be included in all copies or substantial portions
+ * of the Software.
+ *
+ */
+
+#include <linux/firmware.h>
+#include <drm/drmP.h>
+#include "amdgpu.h"
+#include "amdgpu_vce.h"
+#include "soc15d.h"
+#include "soc15_common.h"
+
+#include "vega10/soc15ip.h"
+#include "vega10/VCE/vce_4_0_offset.h"
+#include "vega10/VCE/vce_4_0_default.h"
+#include "vega10/VCE/vce_4_0_sh_mask.h"
+#include "vega10/MMHUB/mmhub_1_0_offset.h"
+#include "vega10/MMHUB/mmhub_1_0_sh_mask.h"
+
+#define VCE_STATUS_VCPU_REPORT_FW_LOADED_MASK	0x02
+
+#define VCE_V4_0_FW_SIZE	(384 * 1024)
+#define VCE_V4_0_STACK_SIZE	(64 * 1024)
+#define VCE_V4_0_DATA_SIZE	((16 * 1024 * AMDGPU_MAX_VCE_HANDLES) + (52 * 1024))
+
+static void vce_v4_0_mc_resume(struct amdgpu_device *adev);
+static void vce_v4_0_set_ring_funcs(struct amdgpu_device *adev);
+static void vce_v4_0_set_irq_funcs(struct amdgpu_device *adev);
+
+/**
+ * vce_v4_0_ring_get_rptr - get read pointer
+ *
+ * @ring: amdgpu_ring pointer
+ *
+ * Returns the current hardware read pointer
+ */
+static uint64_t vce_v4_0_ring_get_rptr(struct amdgpu_ring *ring)
+{
+	struct amdgpu_device *adev = ring->adev;
+
+	if (ring == &adev->vce.ring[0])
+		return RREG32(SOC15_REG_OFFSET(VCE, 0, mmVCE_RB_RPTR));
+	else if (ring == &adev->vce.ring[1])
+		return RREG32(SOC15_REG_OFFSET(VCE, 0, mmVCE_RB_RPTR2));
+	else
+		return RREG32(SOC15_REG_OFFSET(VCE, 0, mmVCE_RB_RPTR3));
+}
+
+/**
+ * vce_v4_0_ring_get_wptr - get write pointer
+ *
+ * @ring: amdgpu_ring pointer
+ *
+ * Returns the current hardware write pointer
+ */
+static uint64_t vce_v4_0_ring_get_wptr(struct amdgpu_ring *ring)
+{
+	struct amdgpu_device *adev = ring->adev;
+
+	if (ring == &adev->vce.ring[0])
+		return RREG32(SOC15_REG_OFFSET(VCE, 0, mmVCE_RB_WPTR));
+	else if (ring == &adev->vce.ring[1])
+		return RREG32(SOC15_REG_OFFSET(VCE, 0, mmVCE_RB_WPTR2));
+	else
+		return RREG32(SOC15_REG_OFFSET(VCE, 0, mmVCE_RB_WPTR3));
+}
+
+/**
+ * vce_v4_0_ring_set_wptr - set write pointer
+ *
+ * @ring: amdgpu_ring pointer
+ *
+ * Commits the write pointer to the hardware
+ */
+static void vce_v4_0_ring_set_wptr(struct amdgpu_ring *ring)
+{
+	struct amdgpu_device *adev = ring->adev;
+
+	if (ring == &adev->vce.ring[0])
+		WREG32(SOC15_REG_OFFSET(VCE, 0, mmVCE_RB_WPTR),
+			lower_32_bits(ring->wptr));
+	else if (ring == &adev->vce.ring[1])
+		WREG32(SOC15_REG_OFFSET(VCE, 0, mmVCE_RB_WPTR2),
+			lower_32_bits(ring->wptr));
+	else
+		WREG32(SOC15_REG_OFFSET(VCE, 0, mmVCE_RB_WPTR3),
+			lower_32_bits(ring->wptr));
+}
+
+static int vce_v4_0_firmware_loaded(struct amdgpu_device *adev)
+{
+	int i, j;
+
+	for (i = 0; i < 10; ++i) {
+		for (j = 0; j < 100; ++j) {
+			uint32_t status =
+				RREG32(SOC15_REG_OFFSET(VCE, 0, mmVCE_STATUS));
+
+			if (status & VCE_STATUS_VCPU_REPORT_FW_LOADED_MASK)
+				return 0;
+			mdelay(10);
+		}
+
+		DRM_ERROR("VCE not responding, trying to reset the ECPU!!!\n");
+		WREG32_P(SOC15_REG_OFFSET(VCE, 0, mmVCE_SOFT_RESET),
+				VCE_SOFT_RESET__ECPU_SOFT_RESET_MASK,
+				~VCE_SOFT_RESET__ECPU_SOFT_RESET_MASK);
+		mdelay(10);
+		WREG32_P(SOC15_REG_OFFSET(VCE, 0, mmVCE_SOFT_RESET), 0,
+				~VCE_SOFT_RESET__ECPU_SOFT_RESET_MASK);
+		mdelay(10);
+
+	}
+
+	return -ETIMEDOUT;
+}
+
+/**
+ * vce_v4_0_start - start VCE block
+ *
+ * @adev: amdgpu_device pointer
+ *
+ * Setup and start the VCE block
+ */
+static int vce_v4_0_start(struct amdgpu_device *adev)
+{
+	struct amdgpu_ring *ring;
+	int r;
+
+	ring = &adev->vce.ring[0];
+
+	WREG32(SOC15_REG_OFFSET(VCE, 0, mmVCE_RB_RPTR), lower_32_bits(ring->wptr));
+	WREG32(SOC15_REG_OFFSET(VCE, 0, mmVCE_RB_WPTR), lower_32_bits(ring->wptr));
+	WREG32(SOC15_REG_OFFSET(VCE, 0, mmVCE_RB_BASE_LO), ring->gpu_addr);
+	WREG32(SOC15_REG_OFFSET(VCE, 0, mmVCE_RB_BASE_HI), upper_32_bits(ring->gpu_addr));
+	WREG32(SOC15_REG_OFFSET(VCE, 0, mmVCE_RB_SIZE), ring->ring_size / 4);
+
+	ring = &adev->vce.ring[1];
+
+	WREG32(SOC15_REG_OFFSET(VCE, 0, mmVCE_RB_RPTR2), lower_32_bits(ring->wptr));
+	WREG32(SOC15_REG_OFFSET(VCE, 0, mmVCE_RB_WPTR2), lower_32_bits(ring->wptr));
+	WREG32(SOC15_REG_OFFSET(VCE, 0, mmVCE_RB_BASE_LO2), ring->gpu_addr);
+	WREG32(SOC15_REG_OFFSET(VCE, 0, mmVCE_RB_BASE_HI2), upper_32_bits(ring->gpu_addr));
+	WREG32(SOC15_REG_OFFSET(VCE, 0, mmVCE_RB_SIZE2), ring->ring_size / 4);
+
+	ring = &adev->vce.ring[2];
+
+	WREG32(SOC15_REG_OFFSET(VCE, 0, mmVCE_RB_RPTR3), lower_32_bits(ring->wptr));
+	WREG32(SOC15_REG_OFFSET(VCE, 0, mmVCE_RB_WPTR3), lower_32_bits(ring->wptr));
+	WREG32(SOC15_REG_OFFSET(VCE, 0, mmVCE_RB_BASE_LO3), ring->gpu_addr);
+	WREG32(SOC15_REG_OFFSET(VCE, 0, mmVCE_RB_BASE_HI3), upper_32_bits(ring->gpu_addr));
+	WREG32(SOC15_REG_OFFSET(VCE, 0, mmVCE_RB_SIZE3), ring->ring_size / 4);
+
+	vce_v4_0_mc_resume(adev);
+	WREG32_P(SOC15_REG_OFFSET(VCE, 0, mmVCE_STATUS), VCE_STATUS__JOB_BUSY_MASK,
+			~VCE_STATUS__JOB_BUSY_MASK);
+
+	WREG32_P(SOC15_REG_OFFSET(VCE, 0, mmVCE_VCPU_CNTL), 1, ~0x200001);
+
+	WREG32_P(SOC15_REG_OFFSET(VCE, 0, mmVCE_SOFT_RESET), 0,
+			~VCE_SOFT_RESET__ECPU_SOFT_RESET_MASK);
+	mdelay(100);
+
+	r = vce_v4_0_firmware_loaded(adev);
+
+	/* clear BUSY flag */
+	WREG32_P(SOC15_REG_OFFSET(VCE, 0, mmVCE_STATUS), 0, ~VCE_STATUS__JOB_BUSY_MASK);
+
+	if (r) {
+		DRM_ERROR("VCE not responding, giving up!!!\n");
+		return r;
+	}
+
+	return 0;
+}
+
+static int vce_v4_0_stop(struct amdgpu_device *adev)
+{
+
+	WREG32_P(SOC15_REG_OFFSET(VCE, 0, mmVCE_VCPU_CNTL), 0, ~0x200001);
+
+	/* hold on ECPU */
+	WREG32_P(SOC15_REG_OFFSET(VCE, 0, mmVCE_SOFT_RESET),
+			VCE_SOFT_RESET__ECPU_SOFT_RESET_MASK,
+			~VCE_SOFT_RESET__ECPU_SOFT_RESET_MASK);
+
+	/* clear BUSY flag */
+	WREG32_P(SOC15_REG_OFFSET(VCE, 0, mmVCE_STATUS), 0, ~VCE_STATUS__JOB_BUSY_MASK);
+
+	/* Set Clock-Gating off */
+	/* if (adev->cg_flags & AMD_CG_SUPPORT_VCE_MGCG)
+		vce_v4_0_set_vce_sw_clock_gating(adev, false);
+	*/
+
+	return 0;
+}
+
+static int vce_v4_0_early_init(void *handle)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	adev->vce.num_rings = 3;
+
+	vce_v4_0_set_ring_funcs(adev);
+	vce_v4_0_set_irq_funcs(adev);
+
+	return 0;
+}
+
+static int vce_v4_0_sw_init(void *handle)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+	struct amdgpu_ring *ring;
+	unsigned size;
+	int r, i;
+
+	r = amdgpu_irq_add_id(adev, AMDGPU_IH_CLIENTID_VCE0, 167, &adev->vce.irq);
+	if (r)
+		return r;
+
+	size  = (VCE_V4_0_STACK_SIZE + VCE_V4_0_DATA_SIZE) * 2;
+	if (adev->firmware.load_type != AMDGPU_FW_LOAD_PSP)
+		size += VCE_V4_0_FW_SIZE;
+
+	r = amdgpu_vce_sw_init(adev, size);
+	if (r)
+		return r;
+
+	if (adev->firmware.load_type == AMDGPU_FW_LOAD_PSP) {
+		const struct common_firmware_header *hdr;
+		hdr = (const struct common_firmware_header *)adev->vce.fw->data;
+		adev->firmware.ucode[AMDGPU_UCODE_ID_VCE].ucode_id = AMDGPU_UCODE_ID_VCE;
+		adev->firmware.ucode[AMDGPU_UCODE_ID_VCE].fw = adev->vce.fw;
+		adev->firmware.fw_size +=
+			ALIGN(le32_to_cpu(hdr->ucode_size_bytes), PAGE_SIZE);
+		DRM_INFO("PSP loading VCE firmware\n");
+	}
+
+	if (adev->firmware.load_type != AMDGPU_FW_LOAD_PSP) {
+		r = amdgpu_vce_resume(adev);
+		if (r)
+			return r;
+	}
+
+	for (i = 0; i < adev->vce.num_rings; i++) {
+		ring = &adev->vce.ring[i];
+		sprintf(ring->name, "vce%d", i);
+		r = amdgpu_ring_init(adev, ring, 512, &adev->vce.irq, 0);
+		if (r)
+			return r;
+	}
+
+	return r;
+}
+
+static int vce_v4_0_sw_fini(void *handle)
+{
+	int r;
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	r = amdgpu_vce_suspend(adev);
+	if (r)
+		return r;
+
+	r = amdgpu_vce_sw_fini(adev);
+	if (r)
+		return r;
+
+	return r;
+}
+
+static int vce_v4_0_hw_init(void *handle)
+{
+	int r, i;
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	r = vce_v4_0_start(adev);
+	if (r)
+		return r;
+
+	for (i = 0; i < adev->vce.num_rings; i++)
+		adev->vce.ring[i].ready = false;
+
+	for (i = 0; i < adev->vce.num_rings; i++) {
+		r = amdgpu_ring_test_ring(&adev->vce.ring[i]);
+		if (r)
+			return r;
+		else
+			adev->vce.ring[i].ready = true;
+	}
+
+	DRM_INFO("VCE initialized successfully.\n");
+
+	return 0;
+}
+
+static int vce_v4_0_hw_fini(void *handle)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+	int i;
+
+	/* vce_v4_0_wait_for_idle(handle); */
+	vce_v4_0_stop(adev);
+	for (i = 0; i < adev->vce.num_rings; i++)
+		adev->vce.ring[i].ready = false;
+
+	return 0;
+}
+
+static int vce_v4_0_suspend(void *handle)
+{
+	int r;
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	r = vce_v4_0_hw_fini(adev);
+	if (r)
+		return r;
+
+	r = amdgpu_vce_suspend(adev);
+	if (r)
+		return r;
+
+	return r;
+}
+
+static int vce_v4_0_resume(void *handle)
+{
+	int r;
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	r = amdgpu_vce_resume(adev);
+	if (r)
+		return r;
+
+	r = vce_v4_0_hw_init(adev);
+	if (r)
+		return r;
+
+	return r;
+}
+
+static void vce_v4_0_mc_resume(struct amdgpu_device *adev)
+{
+	uint32_t offset, size;
+
+	WREG32_P(SOC15_REG_OFFSET(VCE, 0, mmVCE_CLOCK_GATING_A), 0, ~(1 << 16));
+	WREG32_P(SOC15_REG_OFFSET(VCE, 0, mmVCE_UENC_CLOCK_GATING), 0x1FF000, ~0xFF9FF000);
+	WREG32_P(SOC15_REG_OFFSET(VCE, 0, mmVCE_UENC_REG_CLOCK_GATING), 0x3F, ~0x3F);
+	WREG32(SOC15_REG_OFFSET(VCE, 0, mmVCE_CLOCK_GATING_B), 0x1FF);
+
+	WREG32(SOC15_REG_OFFSET(VCE, 0, mmVCE_LMI_CTRL), 0x00398000);
+	WREG32_P(SOC15_REG_OFFSET(VCE, 0, mmVCE_LMI_CACHE_CTRL), 0x0, ~0x1);
+	WREG32(SOC15_REG_OFFSET(VCE, 0, mmVCE_LMI_SWAP_CNTL), 0);
+	WREG32(SOC15_REG_OFFSET(VCE, 0, mmVCE_LMI_SWAP_CNTL1), 0);
+	WREG32(SOC15_REG_OFFSET(VCE, 0, mmVCE_LMI_VM_CTRL), 0);
+
+	if (adev->firmware.load_type == AMDGPU_FW_LOAD_PSP) {
+		WREG32(SOC15_REG_OFFSET(VCE, 0, mmVCE_LMI_VCPU_CACHE_40BIT_BAR0),
+			(adev->firmware.ucode[AMDGPU_UCODE_ID_VCE].mc_addr >> 8));
+		WREG32(SOC15_REG_OFFSET(VCE, 0, mmVCE_LMI_VCPU_CACHE_64BIT_BAR0),
+			(adev->firmware.ucode[AMDGPU_UCODE_ID_VCE].mc_addr >> 40) & 0xff);
+	} else {
+		WREG32(SOC15_REG_OFFSET(VCE, 0, mmVCE_LMI_VCPU_CACHE_40BIT_BAR0),
+			(adev->vce.gpu_addr >> 8));
+		WREG32(SOC15_REG_OFFSET(VCE, 0, mmVCE_LMI_VCPU_CACHE_64BIT_BAR0),
+			(adev->vce.gpu_addr >> 40) & 0xff);
+	}
+
+	offset = AMDGPU_VCE_FIRMWARE_OFFSET;
+	size = VCE_V4_0_FW_SIZE;
+	WREG32(SOC15_REG_OFFSET(VCE, 0, mmVCE_VCPU_CACHE_OFFSET0), offset & ~0x0f000000);
+	WREG32(SOC15_REG_OFFSET(VCE, 0, mmVCE_VCPU_CACHE_SIZE0), size);
+
+	WREG32(SOC15_REG_OFFSET(VCE, 0, mmVCE_LMI_VCPU_CACHE_40BIT_BAR1), (adev->vce.gpu_addr >> 8));
+	WREG32(SOC15_REG_OFFSET(VCE, 0, mmVCE_LMI_VCPU_CACHE_64BIT_BAR1), (adev->vce.gpu_addr >> 40) & 0xff);
+	offset = (adev->firmware.load_type != AMDGPU_FW_LOAD_PSP) ? offset + size : 0;
+	size = VCE_V4_0_STACK_SIZE;
+	WREG32(SOC15_REG_OFFSET(VCE, 0, mmVCE_VCPU_CACHE_OFFSET1), (offset & ~0x0f000000) | (1 << 24));
+	WREG32(SOC15_REG_OFFSET(VCE, 0, mmVCE_VCPU_CACHE_SIZE1), size);
+
+	WREG32(SOC15_REG_OFFSET(VCE, 0, mmVCE_LMI_VCPU_CACHE_40BIT_BAR2), (adev->vce.gpu_addr >> 8));
+	WREG32(SOC15_REG_OFFSET(VCE, 0, mmVCE_LMI_VCPU_CACHE_64BIT_BAR2), (adev->vce.gpu_addr >> 40) & 0xff);
+	offset += size;
+	size = VCE_V4_0_DATA_SIZE;
+	WREG32(SOC15_REG_OFFSET(VCE, 0, mmVCE_VCPU_CACHE_OFFSET2), (offset & ~0x0f000000) | (2 << 24));
+	WREG32(SOC15_REG_OFFSET(VCE, 0, mmVCE_VCPU_CACHE_SIZE2), size);
+
+	WREG32_P(SOC15_REG_OFFSET(VCE, 0, mmVCE_LMI_CTRL2), 0x0, ~0x100);
+	WREG32_P(SOC15_REG_OFFSET(VCE, 0, mmVCE_SYS_INT_EN),
+			VCE_SYS_INT_EN__VCE_SYS_INT_TRAP_INTERRUPT_EN_MASK,
+			~VCE_SYS_INT_EN__VCE_SYS_INT_TRAP_INTERRUPT_EN_MASK);
+}
+
+static int vce_v4_0_set_clockgating_state(void *handle,
+					  enum amd_clockgating_state state)
+{
+	/* needed for driver unload */
+	return 0;
+}
+
+#if 0
+static bool vce_v4_0_is_idle(void *handle)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+	u32 mask = 0;
+
+	mask |= (adev->vce.harvest_config & AMDGPU_VCE_HARVEST_VCE0) ? 0 : SRBM_STATUS2__VCE0_BUSY_MASK;
+	mask |= (adev->vce.harvest_config & AMDGPU_VCE_HARVEST_VCE1) ? 0 : SRBM_STATUS2__VCE1_BUSY_MASK;
+
+	return !(RREG32(mmSRBM_STATUS2) & mask);
+}
+
+static int vce_v4_0_wait_for_idle(void *handle)
+{
+	unsigned i;
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	for (i = 0; i < adev->usec_timeout; i++)
+		if (vce_v4_0_is_idle(handle))
+			return 0;
+
+	return -ETIMEDOUT;
+}
+
+#define  VCE_STATUS_VCPU_REPORT_AUTO_BUSY_MASK  0x00000008L   /* AUTO_BUSY */
+#define  VCE_STATUS_VCPU_REPORT_RB0_BUSY_MASK   0x00000010L   /* RB0_BUSY */
+#define  VCE_STATUS_VCPU_REPORT_RB1_BUSY_MASK   0x00000020L   /* RB1_BUSY */
+#define  AMDGPU_VCE_STATUS_BUSY_MASK (VCE_STATUS_VCPU_REPORT_AUTO_BUSY_MASK | \
+				      VCE_STATUS_VCPU_REPORT_RB0_BUSY_MASK)
+
+static bool vce_v4_0_check_soft_reset(void *handle)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+	u32 srbm_soft_reset = 0;
+
+	/* According to the VCE team, we should use the VCE_STATUS
+	 * register instead of the SRBM_STATUS.VCE_BUSY bit for busy
+	 * status checking.  GRBM_GFX_INDEX.INSTANCE_INDEX is used to
+	 * specify which VCE instance's registers are accessed
+	 * (0 for the 1st instance, 0x10 for the 2nd instance).
+	 *
+	 * VCE_STATUS
+	 * |UENC|ACPI|AUTO ACTIVE|RB1 |RB0 |RB2 |          |FW_LOADED|JOB |
+	 * |----+----+-----------+----+----+----+----------+---------+----|
+	 * |bit8|bit7|    bit6   |bit5|bit4|bit3|   bit2   |  bit1   |bit0|
+	 *
+	 * The VCE team suggests using bits 3-6 for the busy status check.
+	 */
+	mutex_lock(&adev->grbm_idx_mutex);
+	WREG32_FIELD(GRBM_GFX_INDEX, INSTANCE_INDEX, 0);
+	if (RREG32(SOC15_REG_OFFSET(VCE, 0, mmVCE_STATUS)) & AMDGPU_VCE_STATUS_BUSY_MASK) {
+		srbm_soft_reset = REG_SET_FIELD(srbm_soft_reset, SRBM_SOFT_RESET, SOFT_RESET_VCE0, 1);
+		srbm_soft_reset = REG_SET_FIELD(srbm_soft_reset, SRBM_SOFT_RESET, SOFT_RESET_VCE1, 1);
+	}
+	WREG32_FIELD(GRBM_GFX_INDEX, INSTANCE_INDEX, 0x10);
+	if (RREG32(SOC15_REG_OFFSET(VCE, 0, mmVCE_STATUS)) & AMDGPU_VCE_STATUS_BUSY_MASK) {
+		srbm_soft_reset = REG_SET_FIELD(srbm_soft_reset, SRBM_SOFT_RESET, SOFT_RESET_VCE0, 1);
+		srbm_soft_reset = REG_SET_FIELD(srbm_soft_reset, SRBM_SOFT_RESET, SOFT_RESET_VCE1, 1);
+	}
+	WREG32_FIELD(GRBM_GFX_INDEX, INSTANCE_INDEX, 0);
+	mutex_unlock(&adev->grbm_idx_mutex);
+
+	if (srbm_soft_reset) {
+		adev->vce.srbm_soft_reset = srbm_soft_reset;
+		return true;
+	} else {
+		adev->vce.srbm_soft_reset = 0;
+		return false;
+	}
+}
+
+static int vce_v4_0_soft_reset(void *handle)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+	u32 srbm_soft_reset;
+
+	if (!adev->vce.srbm_soft_reset)
+		return 0;
+	srbm_soft_reset = adev->vce.srbm_soft_reset;
+
+	if (srbm_soft_reset) {
+		u32 tmp;
+
+		tmp = RREG32(mmSRBM_SOFT_RESET);
+		tmp |= srbm_soft_reset;
+		dev_info(adev->dev, "SRBM_SOFT_RESET=0x%08X\n", tmp);
+		WREG32(mmSRBM_SOFT_RESET, tmp);
+		tmp = RREG32(mmSRBM_SOFT_RESET);
+
+		udelay(50);
+
+		tmp &= ~srbm_soft_reset;
+		WREG32(mmSRBM_SOFT_RESET, tmp);
+		tmp = RREG32(mmSRBM_SOFT_RESET);
+
+		/* Wait a little for things to settle down */
+		udelay(50);
+	}
+
+	return 0;
+}
+
+static int vce_v4_0_pre_soft_reset(void *handle)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	if (!adev->vce.srbm_soft_reset)
+		return 0;
+
+	mdelay(5);
+
+	return vce_v4_0_suspend(adev);
+}
+
+
+static int vce_v4_0_post_soft_reset(void *handle)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	if (!adev->vce.srbm_soft_reset)
+		return 0;
+
+	mdelay(5);
+
+	return vce_v4_0_resume(adev);
+}
+
+static void vce_v4_0_override_vce_clock_gating(struct amdgpu_device *adev, bool override)
+{
+	u32 tmp, data;
+
+	tmp = data = RREG32(SOC15_REG_OFFSET(VCE, 0, mmVCE_RB_ARB_CTRL));
+	if (override)
+		data |= VCE_RB_ARB_CTRL__VCE_CGTT_OVERRIDE_MASK;
+	else
+		data &= ~VCE_RB_ARB_CTRL__VCE_CGTT_OVERRIDE_MASK;
+
+	if (tmp != data)
+		WREG32(SOC15_REG_OFFSET(VCE, 0, mmVCE_RB_ARB_CTRL), data);
+}
+
+static void vce_v4_0_set_vce_sw_clock_gating(struct amdgpu_device *adev,
+					     bool gated)
+{
+	u32 data;
+
+	/* Set Override to disable Clock Gating */
+	vce_v4_0_override_vce_clock_gating(adev, true);
+
+	/* This function enables MGCG which is controlled by firmware.
+	 * With the clocks in the gated state the core is still
+	 * accessible but the firmware will throttle the clocks on the
+	 * fly as necessary.
+	 */
+	if (gated) {
+		data = RREG32(SOC15_REG_OFFSET(VCE, 0, mmVCE_CLOCK_GATING_B));
+		data |= 0x1ff;
+		data &= ~0xef0000;
+		WREG32(SOC15_REG_OFFSET(VCE, 0, mmVCE_CLOCK_GATING_B), data);
+
+		data = RREG32(SOC15_REG_OFFSET(VCE, 0, mmVCE_UENC_CLOCK_GATING));
+		data |= 0x3ff000;
+		data &= ~0xffc00000;
+		WREG32(SOC15_REG_OFFSET(VCE, 0, mmVCE_UENC_CLOCK_GATING), data);
+
+		data = RREG32(SOC15_REG_OFFSET(VCE, 0, mmVCE_UENC_CLOCK_GATING_2));
+		data |= 0x2;
+		data &= ~0x00010000;
+		WREG32(SOC15_REG_OFFSET(VCE, 0, mmVCE_UENC_CLOCK_GATING_2), data);
+
+		data = RREG32(SOC15_REG_OFFSET(VCE, 0, mmVCE_UENC_REG_CLOCK_GATING));
+		data |= 0x37f;
+		WREG32(SOC15_REG_OFFSET(VCE, 0, mmVCE_UENC_REG_CLOCK_GATING), data);
+
+		data = RREG32(SOC15_REG_OFFSET(VCE, 0, mmVCE_UENC_DMA_DCLK_CTRL));
+		data |= VCE_UENC_DMA_DCLK_CTRL__WRDMCLK_FORCEON_MASK |
+			VCE_UENC_DMA_DCLK_CTRL__RDDMCLK_FORCEON_MASK |
+			VCE_UENC_DMA_DCLK_CTRL__REGCLK_FORCEON_MASK  |
+			0x8;
+		WREG32(SOC15_REG_OFFSET(VCE, 0, mmVCE_UENC_DMA_DCLK_CTRL), data);
+	} else {
+		data = RREG32(SOC15_REG_OFFSET(VCE, 0, mmVCE_CLOCK_GATING_B));
+		data &= ~0x80010;
+		data |= 0xe70008;
+		WREG32(SOC15_REG_OFFSET(VCE, 0, mmVCE_CLOCK_GATING_B), data);
+
+		data = RREG32(SOC15_REG_OFFSET(VCE, 0, mmVCE_UENC_CLOCK_GATING));
+		data |= 0xffc00000;
+		WREG32(SOC15_REG_OFFSET(VCE, 0, mmVCE_UENC_CLOCK_GATING), data);
+
+		data = RREG32(SOC15_REG_OFFSET(VCE, 0, mmVCE_UENC_CLOCK_GATING_2));
+		data |= 0x10000;
+		WREG32(SOC15_REG_OFFSET(VCE, 0, mmVCE_UENC_CLOCK_GATING_2), data);
+
+		data = RREG32(SOC15_REG_OFFSET(VCE, 0, mmVCE_UENC_REG_CLOCK_GATING));
+		data &= ~0xffc00000;
+		WREG32(SOC15_REG_OFFSET(VCE, 0, mmVCE_UENC_REG_CLOCK_GATING), data);
+
+		data = RREG32(SOC15_REG_OFFSET(VCE, 0, mmVCE_UENC_DMA_DCLK_CTRL));
+		data &= ~(VCE_UENC_DMA_DCLK_CTRL__WRDMCLK_FORCEON_MASK |
+			  VCE_UENC_DMA_DCLK_CTRL__RDDMCLK_FORCEON_MASK |
+			  VCE_UENC_DMA_DCLK_CTRL__REGCLK_FORCEON_MASK  |
+			  0x8);
+		WREG32(SOC15_REG_OFFSET(VCE, 0, mmVCE_UENC_DMA_DCLK_CTRL), data);
+	}
+	vce_v4_0_override_vce_clock_gating(adev, false);
+}
+
+static void vce_v4_0_set_bypass_mode(struct amdgpu_device *adev, bool enable)
+{
+	u32 tmp = RREG32_SMC(ixGCK_DFS_BYPASS_CNTL);
+
+	if (enable)
+		tmp |= GCK_DFS_BYPASS_CNTL__BYPASSECLK_MASK;
+	else
+		tmp &= ~GCK_DFS_BYPASS_CNTL__BYPASSECLK_MASK;
+
+	WREG32_SMC(ixGCK_DFS_BYPASS_CNTL, tmp);
+}
+
+static int vce_v4_0_set_clockgating_state(void *handle,
+					  enum amd_clockgating_state state)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+	bool enable = (state == AMD_CG_STATE_GATE);
+	int i;
+
+	if ((adev->asic_type == CHIP_POLARIS10) ||
+		(adev->asic_type == CHIP_TONGA) ||
+		(adev->asic_type == CHIP_FIJI))
+		vce_v4_0_set_bypass_mode(adev, enable);
+
+	if (!(adev->cg_flags & AMD_CG_SUPPORT_VCE_MGCG))
+		return 0;
+
+	mutex_lock(&adev->grbm_idx_mutex);
+	for (i = 0; i < 2; i++) {
+		/* Program VCE Instance 0 or 1 if not harvested */
+		if (adev->vce.harvest_config & (1 << i))
+			continue;
+
+		WREG32_FIELD(GRBM_GFX_INDEX, VCE_INSTANCE, i);
+
+		if (enable) {
+			/* initialize VCE_CLOCK_GATING_A: Clock ON/OFF delay */
+			uint32_t data = RREG32(SOC15_REG_OFFSET(VCE, 0, mmVCE_CLOCK_GATING_A));
+			data &= ~(0xf | 0xff0);
+			data |= ((0x0 << 0) | (0x04 << 4));
+			WREG32(SOC15_REG_OFFSET(VCE, 0, mmVCE_CLOCK_GATING_A), data);
+
+			/* initialize VCE_UENC_CLOCK_GATING: Clock ON/OFF delay */
+			data = RREG32(SOC15_REG_OFFSET(VCE, 0, mmVCE_UENC_CLOCK_GATING));
+			data &= ~(0xf | 0xff0);
+			data |= ((0x0 << 0) | (0x04 << 4));
+			WREG32(SOC15_REG_OFFSET(VCE, 0, mmVCE_UENC_CLOCK_GATING), data);
+		}
+
+		vce_v4_0_set_vce_sw_clock_gating(adev, enable);
+	}
+
+	WREG32_FIELD(GRBM_GFX_INDEX, VCE_INSTANCE, 0);
+	mutex_unlock(&adev->grbm_idx_mutex);
+
+	return 0;
+}
+
+static int vce_v4_0_set_powergating_state(void *handle,
+					  enum amd_powergating_state state)
+{
+	/* This doesn't actually powergate the VCE block.
+	 * That's done in the dpm code via the SMC.  This
+	 * just re-inits the block as necessary.  The actual
+	 * gating still happens in the dpm code.  We should
+	 * revisit this when there is a cleaner line between
+	 * the SMC and the HW blocks.
+	 */
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	if (!(adev->pg_flags & AMD_PG_SUPPORT_VCE))
+		return 0;
+
+	if (state == AMD_PG_STATE_GATE)
+		/* XXX do we need a vce_v4_0_stop()? */
+		return 0;
+	else
+		return vce_v4_0_start(adev);
+}
+#endif
+
+static void vce_v4_0_ring_emit_ib(struct amdgpu_ring *ring,
+		struct amdgpu_ib *ib, unsigned int vm_id, bool ctx_switch)
+{
+	amdgpu_ring_write(ring, VCE_CMD_IB_VM);
+	amdgpu_ring_write(ring, vm_id);
+	amdgpu_ring_write(ring, lower_32_bits(ib->gpu_addr));
+	amdgpu_ring_write(ring, upper_32_bits(ib->gpu_addr));
+	amdgpu_ring_write(ring, ib->length_dw);
+}
+
+static void vce_v4_0_ring_emit_fence(struct amdgpu_ring *ring, u64 addr,
+			u64 seq, unsigned flags)
+{
+	WARN_ON(flags & AMDGPU_FENCE_FLAG_64BIT);
+
+	amdgpu_ring_write(ring, VCE_CMD_FENCE);
+	amdgpu_ring_write(ring, addr);
+	amdgpu_ring_write(ring, upper_32_bits(addr));
+	amdgpu_ring_write(ring, seq);
+	amdgpu_ring_write(ring, VCE_CMD_TRAP);
+}
+
+static void vce_v4_0_ring_insert_end(struct amdgpu_ring *ring)
+{
+	amdgpu_ring_write(ring, VCE_CMD_END);
+}
+
+static void vce_v4_0_emit_vm_flush(struct amdgpu_ring *ring,
+			 unsigned int vm_id, uint64_t pd_addr)
+{
+	unsigned eng = ring->idx;
+	unsigned i;
+
+	pd_addr = pd_addr | 0x1; /* valid bit */
+	/* now only use physical base address of PDE and valid */
+	BUG_ON(pd_addr & 0xFFFF00000000003EULL);
+
+	for (i = 0; i < AMDGPU_MAX_VMHUBS; ++i) {
+		struct amdgpu_vmhub *hub = &ring->adev->vmhub[i];
+		uint32_t req = hub->get_invalidate_req(vm_id);
+
+		amdgpu_ring_write(ring, VCE_CMD_REG_WRITE);
+		amdgpu_ring_write(ring,
+			(hub->ctx0_ptb_addr_hi32 + vm_id * 2) << 2);
+		amdgpu_ring_write(ring, upper_32_bits(pd_addr));
+
+		amdgpu_ring_write(ring, VCE_CMD_REG_WRITE);
+		amdgpu_ring_write(ring,
+			(hub->ctx0_ptb_addr_lo32 + vm_id * 2) << 2);
+		amdgpu_ring_write(ring, lower_32_bits(pd_addr));
+
+		amdgpu_ring_write(ring, VCE_CMD_REG_WAIT);
+		amdgpu_ring_write(ring,
+			(hub->ctx0_ptb_addr_lo32 + vm_id * 2) << 2);
+		amdgpu_ring_write(ring, 0xffffffff);
+		amdgpu_ring_write(ring, lower_32_bits(pd_addr));
+
+		/* flush TLB */
+		amdgpu_ring_write(ring, VCE_CMD_REG_WRITE);
+		amdgpu_ring_write(ring,	(hub->vm_inv_eng0_req + eng) << 2);
+		amdgpu_ring_write(ring, req);
+
+		/* wait for flush */
+		amdgpu_ring_write(ring, VCE_CMD_REG_WAIT);
+		amdgpu_ring_write(ring, (hub->vm_inv_eng0_ack + eng) << 2);
+		amdgpu_ring_write(ring, 1 << vm_id);
+		amdgpu_ring_write(ring, 1 << vm_id);
+	}
+}
+
+static int vce_v4_0_set_interrupt_state(struct amdgpu_device *adev,
+					struct amdgpu_irq_src *source,
+					unsigned type,
+					enum amdgpu_interrupt_state state)
+{
+	uint32_t val = 0;
+
+	if (state == AMDGPU_IRQ_STATE_ENABLE)
+		val |= VCE_SYS_INT_EN__VCE_SYS_INT_TRAP_INTERRUPT_EN_MASK;
+
+	WREG32_P(SOC15_REG_OFFSET(VCE, 0, mmVCE_SYS_INT_EN), val,
+			~VCE_SYS_INT_EN__VCE_SYS_INT_TRAP_INTERRUPT_EN_MASK);
+	return 0;
+}
+
+static int vce_v4_0_process_interrupt(struct amdgpu_device *adev,
+				      struct amdgpu_irq_src *source,
+				      struct amdgpu_iv_entry *entry)
+{
+	DRM_DEBUG("IH: VCE\n");
+
+	WREG32_P(SOC15_REG_OFFSET(VCE, 0, mmVCE_SYS_INT_STATUS),
+			VCE_SYS_INT_STATUS__VCE_SYS_INT_TRAP_INTERRUPT_INT_MASK,
+			~VCE_SYS_INT_STATUS__VCE_SYS_INT_TRAP_INTERRUPT_INT_MASK);
+
+	switch (entry->src_data[0]) {
+	case 0:
+	case 1:
+	case 2:
+		amdgpu_fence_process(&adev->vce.ring[entry->src_data[0]]);
+		break;
+	default:
+		DRM_ERROR("Unhandled interrupt: %d %d\n",
+			  entry->src_id, entry->src_data[0]);
+		break;
+	}
+
+	return 0;
+}
+
+const struct amd_ip_funcs vce_v4_0_ip_funcs = {
+	.name = "vce_v4_0",
+	.early_init = vce_v4_0_early_init,
+	.late_init = NULL,
+	.sw_init = vce_v4_0_sw_init,
+	.sw_fini = vce_v4_0_sw_fini,
+	.hw_init = vce_v4_0_hw_init,
+	.hw_fini = vce_v4_0_hw_fini,
+	.suspend = vce_v4_0_suspend,
+	.resume = vce_v4_0_resume,
+	.is_idle = NULL /* vce_v4_0_is_idle */,
+	.wait_for_idle = NULL /* vce_v4_0_wait_for_idle */,
+	.check_soft_reset = NULL /* vce_v4_0_check_soft_reset */,
+	.pre_soft_reset = NULL /* vce_v4_0_pre_soft_reset */,
+	.soft_reset = NULL /* vce_v4_0_soft_reset */,
+	.post_soft_reset = NULL /* vce_v4_0_post_soft_reset */,
+	.set_clockgating_state = vce_v4_0_set_clockgating_state,
+	.set_powergating_state = NULL /* vce_v4_0_set_powergating_state */,
+};
+
+static const struct amdgpu_ring_funcs vce_v4_0_ring_vm_funcs = {
+	.type = AMDGPU_RING_TYPE_VCE,
+	.align_mask = 0x3f,
+	.nop = VCE_CMD_NO_OP,
+	.support_64bit_ptrs = false,
+	.get_rptr = vce_v4_0_ring_get_rptr,
+	.get_wptr = vce_v4_0_ring_get_wptr,
+	.set_wptr = vce_v4_0_ring_set_wptr,
+	.parse_cs = amdgpu_vce_ring_parse_cs_vm,
+	.emit_frame_size =
+		17 * AMDGPU_MAX_VMHUBS + /* vce_v4_0_emit_vm_flush */
+		5 + 5 + /* amdgpu_vce_ring_emit_fence x2 vm fence */
+		1, /* vce_v4_0_ring_insert_end */
+	.emit_ib_size = 5, /* vce_v4_0_ring_emit_ib */
+	.emit_ib = vce_v4_0_ring_emit_ib,
+	.emit_vm_flush = vce_v4_0_emit_vm_flush,
+	.emit_fence = vce_v4_0_ring_emit_fence,
+	.test_ring = amdgpu_vce_ring_test_ring,
+	.test_ib = amdgpu_vce_ring_test_ib,
+	.insert_nop = amdgpu_ring_insert_nop,
+	.insert_end = vce_v4_0_ring_insert_end,
+	.pad_ib = amdgpu_ring_generic_pad_ib,
+	.begin_use = amdgpu_vce_ring_begin_use,
+	.end_use = amdgpu_vce_ring_end_use,
+};
+
+static void vce_v4_0_set_ring_funcs(struct amdgpu_device *adev)
+{
+	int i;
+
+	for (i = 0; i < adev->vce.num_rings; i++)
+		adev->vce.ring[i].funcs = &vce_v4_0_ring_vm_funcs;
+	DRM_INFO("VCE enabled in VM mode\n");
+}
+
+static const struct amdgpu_irq_src_funcs vce_v4_0_irq_funcs = {
+	.set = vce_v4_0_set_interrupt_state,
+	.process = vce_v4_0_process_interrupt,
+};
+
+static void vce_v4_0_set_irq_funcs(struct amdgpu_device *adev)
+{
+	adev->vce.irq.num_types = 1;
+	adev->vce.irq.funcs = &vce_v4_0_irq_funcs;
+}
+
+const struct amdgpu_ip_block_version vce_v4_0_ip_block =
+{
+	.type = AMD_IP_BLOCK_TYPE_VCE,
+	.major = 4,
+	.minor = 0,
+	.rev = 0,
+	.funcs = &vce_v4_0_ip_funcs,
+};
diff --git a/drivers/gpu/drm/amd/amdgpu/vce_v4_0.h b/drivers/gpu/drm/amd/amdgpu/vce_v4_0.h
new file mode 100644
index 0000000..a32beda
--- /dev/null
+++ b/drivers/gpu/drm/amd/amdgpu/vce_v4_0.h
@@ -0,0 +1,29 @@
+/*
+ * Copyright 2016 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __VCE_V4_0_H__
+#define __VCE_V4_0_H__
+
+extern const struct amdgpu_ip_block_version vce_v4_0_ip_block;
+
+#endif
-- 
2.5.5

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 101+ messages in thread

* [PATCH 052/100] drm/amdgpu: add PSP driver for vega10
       [not found] ` <1490041835-11255-1-git-send-email-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (35 preceding siblings ...)
  2017-03-20 20:29   ` [PATCH 051/100] drm/amdgpu: add initial vce 4.0 " Alex Deucher
@ 2017-03-20 20:29   ` Alex Deucher
  2017-03-20 20:29   ` [PATCH 053/100] drm/amdgpu: add psp firmware info into info query and debugfs Alex Deucher
                     ` (48 subsequent siblings)
  85 siblings, 0 replies; 101+ messages in thread
From: Alex Deucher @ 2017-03-20 20:29 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Alex Deucher, Huang Rui

From: Huang Rui <ray.huang@amd.com>

The PSP (Platform Security Processor) is responsible for firmware loading on SOC-15 ASICs.

Signed-off-by: Huang Rui <ray.huang@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
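As background for review: psp_wait_for() in amdgpu_psp.c polls a PSP register until it matches an expected value under a mask, or until a timeout expires. A minimal, self-contained sketch of that poll-with-timeout idiom follows; fake_rreg32(), wait_for_reg() and the ready-after-three-polls threshold are made-up stand-ins for illustration, not driver code. The key point is that the register must be re-read on every iteration of the loop.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative stand-in for RREG32(): pretends the hardware becomes
 * ready on the third poll. */
static int poll_count;

static uint32_t fake_rreg32(void)
{
	return (++poll_count >= 3) ? 0x80000000u : 0x0u;
}

/* Same shape as psp_wait_for(): re-read the register on every
 * iteration and compare under a mask; 0 on success, -1 on timeout. */
static int wait_for_reg(uint32_t expected, uint32_t mask, int max_tries)
{
	int i;

	for (i = 0; i < max_tries; i++) {
		if ((fake_rreg32() & mask) == expected)
			return 0;
		/* the real helper does udelay(1) here */
	}
	return -1;
}
```

The check_changed variant in the driver is the same loop with the comparison inverted: it returns as soon as the freshly read value differs from the one passed in.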
 drivers/gpu/drm/amd/amdgpu/Makefile        |   5 +
 drivers/gpu/drm/amd/amdgpu/amdgpu.h        |   9 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c |   1 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c    | 473 +++++++++++++++++++++++++++
 drivers/gpu/drm/amd/amdgpu/amdgpu_psp.h    | 127 ++++++++
 drivers/gpu/drm/amd/amdgpu/psp_gfx_if.h    | 269 +++++++++++++++
 drivers/gpu/drm/amd/amdgpu/psp_v3_1.c      | 507 +++++++++++++++++++++++++++++
 drivers/gpu/drm/amd/amdgpu/psp_v3_1.h      |  50 +++
 drivers/gpu/drm/amd/include/amd_shared.h   |   1 +
 9 files changed, 1442 insertions(+)
 create mode 100644 drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
 create mode 100644 drivers/gpu/drm/amd/amdgpu/amdgpu_psp.h
 create mode 100644 drivers/gpu/drm/amd/amdgpu/psp_gfx_if.h
 create mode 100644 drivers/gpu/drm/amd/amdgpu/psp_v3_1.c
 create mode 100644 drivers/gpu/drm/amd/amdgpu/psp_v3_1.h

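One design note worth calling out in review: psp_context stores its per-ASIC entry points (init_microcode, cmd_submit, smu_reload_quirk, ...) as function pointers, and the psp_* wrapper macros in amdgpu_psp.h fall back to a benign default when an optional hook is unset. A hedged sketch of that pattern, using made-up names (struct ctx and v3_1_init_microcode are illustrative, not the driver's types):

```c
#include <assert.h>
#include <stddef.h>

/* Made-up miniature of the psp_context dispatch table: one hook that
 * is always wired up and one that may legitimately be NULL. */
struct ctx {
	int (*init_microcode)(struct ctx *c);	/* always wired up */
	int (*reload_quirk)(struct ctx *c);	/* may be NULL */
};

/* Mirrors the psp_init_microcode()/psp_smu_reload_quirk() macros:
 * invoke the hook when present, otherwise return a harmless default. */
#define ctx_init_microcode(c) \
	((c)->init_microcode ? (c)->init_microcode((c)) : 0)
#define ctx_reload_quirk(c) \
	((c)->reload_quirk ? (c)->reload_quirk((c)) : 0)

/* Hypothetical per-ASIC implementation, analogous to the psp_v3_1_*
 * functions wired up in psp_sw_init(). */
static int v3_1_init_microcode(struct ctx *c)
{
	(void)c;
	return 42;	/* stand-in success marker */
}
```

In psp_sw_init() the CHIP_VEGA10 case fills these pointers with the psp_v3_1_* implementations, so supporting a future ASIC is just another switch case plus a new set of callbacks.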
diff --git a/drivers/gpu/drm/amd/amdgpu/Makefile b/drivers/gpu/drm/amd/amdgpu/Makefile
index 2ba5671..bad4658 100644
--- a/drivers/gpu/drm/amd/amdgpu/Makefile
+++ b/drivers/gpu/drm/amd/amdgpu/Makefile
@@ -57,6 +57,11 @@ amdgpu-y += \
 	cz_ih.o \
 	vega10_ih.o
 
+# add PSP block
+amdgpu-y += \
+	amdgpu_psp.o \
+	psp_v3_1.o
+
 # add SMC block
 amdgpu-y += \
 	amdgpu_dpm.o \
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
index c453f5b..2675480 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
@@ -52,6 +52,7 @@
 #include "amdgpu_irq.h"
 #include "amdgpu_ucode.h"
 #include "amdgpu_ttm.h"
+#include "amdgpu_psp.h"
 #include "amdgpu_gds.h"
 #include "amdgpu_sync.h"
 #include "amdgpu_ring.h"
@@ -1215,6 +1216,10 @@ struct amdgpu_firmware {
 	struct amdgpu_bo *fw_buf;
 	unsigned int fw_size;
 	unsigned int max_ucodes;
+	/* from vega10 onward, firmware is loaded by the PSP instead of the SMU */
+	const struct amdgpu_psp_funcs *funcs;
+	struct amdgpu_bo *rbuf;
+	struct mutex mutex;
 };
 
 /*
@@ -1578,6 +1583,9 @@ struct amdgpu_device {
 	/* firmwares */
 	struct amdgpu_firmware		firmware;
 
+	/* PSP */
+	struct psp_context		psp;
+
 	/* GDS */
 	struct amdgpu_gds		gds;
 
@@ -1838,6 +1846,7 @@ amdgpu_get_sdma_instance(struct amdgpu_ring *ring)
 #define amdgpu_gfx_get_gpu_clock_counter(adev) (adev)->gfx.funcs->get_gpu_clock_counter((adev))
 #define amdgpu_gfx_select_se_sh(adev, se, sh, instance) (adev)->gfx.funcs->select_se_sh((adev), (se), (sh), (instance))
 #define amdgpu_gds_switch(adev, r, v, d, w, a) (adev)->gds.funcs->patch_gds_switch((r), (v), (d), (w), (a))
+#define amdgpu_psp_check_fw_loading_status(adev, i) (adev)->firmware.funcs->check_fw_loading_status((adev), (i))
 
 /* Common functions */
 int amdgpu_gpu_reset(struct amdgpu_device *adev);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index 19d37a5..82e42ef 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -1870,6 +1870,7 @@ int amdgpu_device_init(struct amdgpu_device *adev,
 	 * can recall function without having locking issues */
 	mutex_init(&adev->vm_manager.lock);
 	atomic_set(&adev->irq.ih.lock, 0);
+	mutex_init(&adev->firmware.mutex);
 	mutex_init(&adev->pm.mutex);
 	mutex_init(&adev->gfx.gpu_clock_mutex);
 	mutex_init(&adev->srbm_mutex);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
new file mode 100644
index 0000000..89d1d2f
--- /dev/null
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
@@ -0,0 +1,473 @@
+/*
+ * Copyright 2016 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Author: Huang Rui
+ *
+ */
+
+#include <linux/firmware.h>
+#include "drmP.h"
+#include "amdgpu.h"
+#include "amdgpu_psp.h"
+#include "amdgpu_ucode.h"
+#include "soc15_common.h"
+#include "psp_v3_1.h"
+
+static void psp_set_funcs(struct amdgpu_device *adev);
+
+static int psp_early_init(void *handle)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	psp_set_funcs(adev);
+
+	return 0;
+}
+
+static int psp_sw_init(void *handle)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+	struct psp_context *psp = &adev->psp;
+	int ret;
+
+	switch (adev->asic_type) {
+	case CHIP_VEGA10:
+		psp->init_microcode = psp_v3_1_init_microcode;
+		psp->bootloader_load_sysdrv = psp_v3_1_bootloader_load_sysdrv;
+		psp->bootloader_load_sos = psp_v3_1_bootloader_load_sos;
+		psp->prep_cmd_buf = psp_v3_1_prep_cmd_buf;
+		psp->ring_init = psp_v3_1_ring_init;
+		psp->cmd_submit = psp_v3_1_cmd_submit;
+		psp->compare_sram_data = psp_v3_1_compare_sram_data;
+		psp->smu_reload_quirk = psp_v3_1_smu_reload_quirk;
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	psp->adev = adev;
+
+	ret = psp_init_microcode(psp);
+	if (ret) {
+		DRM_ERROR("Failed to load psp firmware!\n");
+		return ret;
+	}
+
+	return 0;
+}
+
+static int psp_sw_fini(void *handle)
+{
+	return 0;
+}
+
+int psp_wait_for(struct psp_context *psp, uint32_t reg_index,
+		 uint32_t reg_val, uint32_t mask, bool check_changed)
+{
+	uint32_t val;
+	int i;
+	struct amdgpu_device *adev = psp->adev;
+
+	for (i = 0; i < adev->usec_timeout; i++) {
+		val = RREG32(reg_index);
+
+		if (check_changed) {
+			if (val != reg_val)
+				return 0;
+		} else {
+			if ((val & mask) == reg_val)
+				return 0;
+		}
+		udelay(1);
+	}
+
+	return -ETIME;
+}
+
+static int
+psp_cmd_submit_buf(struct psp_context *psp,
+		   struct amdgpu_firmware_info *ucode,
+		   struct psp_gfx_cmd_resp *cmd, uint64_t fence_mc_addr,
+		   int index)
+{
+	int ret;
+	struct amdgpu_bo *cmd_buf_bo;
+	uint64_t cmd_buf_mc_addr;
+	struct psp_gfx_cmd_resp *cmd_buf_mem;
+	struct amdgpu_device *adev = psp->adev;
+
+	ret = amdgpu_bo_create_kernel(adev, PSP_CMD_BUFFER_SIZE, PAGE_SIZE,
+				      AMDGPU_GEM_DOMAIN_VRAM,
+				      &cmd_buf_bo, &cmd_buf_mc_addr,
+				      (void **)&cmd_buf_mem);
+	if (ret)
+		return ret;
+
+	memset(cmd_buf_mem, 0, PSP_CMD_BUFFER_SIZE);
+
+	memcpy(cmd_buf_mem, cmd, sizeof(struct psp_gfx_cmd_resp));
+
+	ret = psp_cmd_submit(psp, ucode, cmd_buf_mc_addr,
+			     fence_mc_addr, index);
+
+	while (*((unsigned int *)psp->fence_buf) != index) {
+		msleep(1);
+	}
+
+	amdgpu_bo_free_kernel(&cmd_buf_bo, &cmd_buf_mc_addr,
+			      (void **)&cmd_buf_mem);
+
+	return ret;
+}
+
+static void psp_prep_tmr_cmd_buf(struct psp_gfx_cmd_resp *cmd,
+				 uint64_t tmr_mc, uint32_t size)
+{
+	cmd->cmd_id = GFX_CMD_ID_SETUP_TMR;
+	cmd->cmd.cmd_setup_tmr.buf_phy_addr_lo = lower_32_bits(tmr_mc);
+	cmd->cmd.cmd_setup_tmr.buf_phy_addr_hi = upper_32_bits(tmr_mc);
+	cmd->cmd.cmd_setup_tmr.buf_size = size;
+}
+
+/* Set up Trusted Memory Region */
+static int psp_tmr_init(struct psp_context *psp)
+{
+	int ret;
+	struct psp_gfx_cmd_resp *cmd;
+
+	cmd = kzalloc(sizeof(struct psp_gfx_cmd_resp), GFP_KERNEL);
+	if (!cmd)
+		return -ENOMEM;
+
+	/*
+	 * Allocate 3M memory aligned to 1M from Frame Buffer (local
+	 * physical).
+	 *
+	 * Note: this memory needs to be reserved until the
+	 * driver is unloaded.
+	 */
+	ret = amdgpu_bo_create_kernel(psp->adev, 0x300000, 0x100000,
+				      AMDGPU_GEM_DOMAIN_VRAM,
+				      &psp->tmr_bo, &psp->tmr_mc_addr, &psp->tmr_buf);
+	if (ret)
+		goto failed;
+
+	psp_prep_tmr_cmd_buf(cmd, psp->tmr_mc_addr, 0x300000);
+
+	ret = psp_cmd_submit_buf(psp, NULL, cmd,
+				 psp->fence_buf_mc_addr, 1);
+	if (ret)
+		goto failed_mem;
+
+	kfree(cmd);
+	return 0;
+
+failed_mem:
+	amdgpu_bo_free_kernel(&psp->tmr_bo, &psp->tmr_mc_addr, &psp->tmr_buf);
+failed:
+	kfree(cmd);
+	return ret;
+}
+
+static void psp_prep_asd_cmd_buf(struct psp_gfx_cmd_resp *cmd,
+				 uint64_t asd_mc, uint64_t asd_mc_shared,
+				 uint32_t size, uint32_t shared_size)
+{
+	cmd->cmd_id = GFX_CMD_ID_LOAD_ASD;
+	cmd->cmd.cmd_load_ta.app_phy_addr_lo = lower_32_bits(asd_mc);
+	cmd->cmd.cmd_load_ta.app_phy_addr_hi = upper_32_bits(asd_mc);
+	cmd->cmd.cmd_load_ta.app_len = size;
+
+	cmd->cmd.cmd_load_ta.cmd_buf_phy_addr_lo = lower_32_bits(asd_mc_shared);
+	cmd->cmd.cmd_load_ta.cmd_buf_phy_addr_hi = upper_32_bits(asd_mc_shared);
+	cmd->cmd.cmd_load_ta.cmd_buf_len = shared_size;
+}
+
+static int psp_asd_load(struct psp_context *psp)
+{
+	int ret;
+	struct amdgpu_bo *asd_bo, *asd_shared_bo;
+	uint64_t asd_mc_addr, asd_shared_mc_addr;
+	void *asd_buf, *asd_shared_buf;
+	struct psp_gfx_cmd_resp *cmd;
+
+	cmd = kzalloc(sizeof(struct psp_gfx_cmd_resp), GFP_KERNEL);
+	if (!cmd)
+		return -ENOMEM;
+
+	/*
+	 * Allocate 16k memory aligned to 4k from Frame Buffer (local
+	 * physical) for shared ASD <-> Driver
+	 */
+	ret = amdgpu_bo_create_kernel(psp->adev, PSP_ASD_SHARED_MEM_SIZE, PAGE_SIZE,
+				      AMDGPU_GEM_DOMAIN_VRAM,
+				      &asd_shared_bo, &asd_shared_mc_addr, &asd_shared_buf);
+	if (ret)
+		goto failed;
+
+	/*
+	 * Allocate 256k memory aligned to 4k from Frame Buffer (local
+	 * physical) for ASD firmware
+	 */
+	ret = amdgpu_bo_create_kernel(psp->adev, PSP_ASD_BIN_SIZE, PAGE_SIZE,
+				      AMDGPU_GEM_DOMAIN_VRAM,
+				      &asd_bo, &asd_mc_addr, &asd_buf);
+	if (ret)
+		goto failed_mem;
+
+	memcpy(asd_buf, psp->asd_start_addr, psp->asd_ucode_size);
+
+	psp_prep_asd_cmd_buf(cmd, asd_mc_addr, asd_shared_mc_addr,
+			     psp->asd_ucode_size, PSP_ASD_SHARED_MEM_SIZE);
+
+	ret = psp_cmd_submit_buf(psp, NULL, cmd,
+				 psp->fence_buf_mc_addr, 2);
+	if (ret)
+		goto failed_mem1;
+
+	amdgpu_bo_free_kernel(&asd_bo, &asd_mc_addr, &asd_buf);
+	amdgpu_bo_free_kernel(&asd_shared_bo, &asd_shared_mc_addr, &asd_shared_buf);
+
+	kfree(cmd);
+	return 0;
+
+failed_mem1:
+	amdgpu_bo_free_kernel(&asd_bo, &asd_mc_addr, &asd_buf);
+failed_mem:
+	amdgpu_bo_free_kernel(&asd_shared_bo, &asd_shared_mc_addr, &asd_shared_buf);
+failed:
+	kfree(cmd);
+	return ret;
+}
+
+static int psp_load_fw(struct amdgpu_device *adev)
+{
+	int ret;
+	struct psp_gfx_cmd_resp *cmd;
+	int i;
+	struct amdgpu_firmware_info *ucode;
+	struct psp_context *psp = &adev->psp;
+
+	cmd = kzalloc(sizeof(struct psp_gfx_cmd_resp), GFP_KERNEL);
+	if (!cmd)
+		return -ENOMEM;
+
+	ret = psp_bootloader_load_sysdrv(psp);
+	if (ret)
+		goto failed;
+
+	ret = psp_bootloader_load_sos(psp);
+	if (ret)
+		goto failed;
+
+	ret = psp_ring_init(psp, PSP_RING_TYPE__KM);
+	if (ret)
+		goto failed;
+
+	ret = amdgpu_bo_create_kernel(adev, PSP_FENCE_BUFFER_SIZE, PAGE_SIZE,
+				      AMDGPU_GEM_DOMAIN_VRAM,
+				      &psp->fence_buf_bo,
+				      &psp->fence_buf_mc_addr,
+				      &psp->fence_buf);
+	if (ret)
+		goto failed;
+
+	memset(psp->fence_buf, 0, PSP_FENCE_BUFFER_SIZE);
+
+	ret = psp_tmr_init(psp);
+	if (ret)
+		goto failed;
+
+	ret = psp_asd_load(psp);
+	if (ret)
+		goto failed;
+
+	for (i = 0; i < adev->firmware.max_ucodes; i++) {
+		ucode = &adev->firmware.ucode[i];
+		if (!ucode->fw)
+			continue;
+
+		if (ucode->ucode_id == AMDGPU_UCODE_ID_SMC &&
+		    psp_smu_reload_quirk(psp))
+			continue;
+
+		ret = psp_prep_cmd_buf(ucode, cmd);
+		if (ret)
+			goto failed_mem;
+
+		ret = psp_cmd_submit_buf(psp, ucode, cmd,
+					 psp->fence_buf_mc_addr, i + 3);
+		if (ret)
+			goto failed_mem;
+
+#if 0
+		/* check if firmware loaded successfully */
+		if (!amdgpu_psp_check_fw_loading_status(adev, i))
+			return -EINVAL;
+#endif
+	}
+
+	amdgpu_bo_free_kernel(&psp->fence_buf_bo,
+			      &psp->fence_buf_mc_addr, &psp->fence_buf);
+
+	kfree(cmd);
+	return 0;
+
+failed_mem:
+	amdgpu_bo_free_kernel(&psp->fence_buf_bo,
+			      &psp->fence_buf_mc_addr, &psp->fence_buf);
+failed:
+	kfree(cmd);
+	return ret;
+}
+
+static int psp_hw_init(void *handle)
+{
+	int ret;
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	if (adev->firmware.load_type != AMDGPU_FW_LOAD_PSP)
+		return 0;
+
+	mutex_lock(&adev->firmware.mutex);
+	/*
+	 * This sequence is used only once during hw_init; it is
+	 * not needed on resume.
+	 */
+	ret = amdgpu_ucode_init_bo(adev);
+	if (ret)
+		goto failed;
+
+	ret = psp_load_fw(adev);
+	if (ret) {
+		DRM_ERROR("PSP firmware loading failed\n");
+		goto failed;
+	}
+
+	mutex_unlock(&adev->firmware.mutex);
+	return 0;
+
+failed:
+	adev->firmware.load_type = AMDGPU_FW_LOAD_DIRECT;
+	mutex_unlock(&adev->firmware.mutex);
+	return -EINVAL;
+}
+
+static int psp_hw_fini(void *handle)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+	struct psp_context *psp = &adev->psp;
+
+	if (adev->firmware.load_type == AMDGPU_FW_LOAD_PSP)
+		amdgpu_ucode_fini_bo(adev);
+
+	if (psp->tmr_buf)
+		amdgpu_bo_free_kernel(&psp->tmr_bo, &psp->tmr_mc_addr, &psp->tmr_buf);
+
+	return 0;
+}
+
+static int psp_suspend(void *handle)
+{
+	return 0;
+}
+
+static int psp_resume(void *handle)
+{
+	int ret;
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	if (adev->firmware.load_type != AMDGPU_FW_LOAD_PSP)
+		return 0;
+
+	mutex_lock(&adev->firmware.mutex);
+
+	ret = psp_load_fw(adev);
+	if (ret)
+		DRM_ERROR("PSP resume failed\n");
+
+	mutex_unlock(&adev->firmware.mutex);
+
+	return ret;
+}
+
+static bool psp_check_fw_loading_status(struct amdgpu_device *adev,
+					enum AMDGPU_UCODE_ID ucode_type)
+{
+	struct amdgpu_firmware_info *ucode = NULL;
+
+	if (adev->firmware.load_type != AMDGPU_FW_LOAD_PSP) {
+		DRM_INFO("firmware is not loaded by PSP\n");
+		return true;
+	}
+
+	if (!adev->firmware.fw_size)
+		return false;
+
+	ucode = &adev->firmware.ucode[ucode_type];
+	if (!ucode->fw || !ucode->ucode_size)
+		return false;
+
+	return psp_compare_sram_data(&adev->psp, ucode, ucode_type);
+}
+
+static int psp_set_clockgating_state(void *handle,
+				     enum amd_clockgating_state state)
+{
+	return 0;
+}
+
+static int psp_set_powergating_state(void *handle,
+				     enum amd_powergating_state state)
+{
+	return 0;
+}
+
+const struct amd_ip_funcs psp_ip_funcs = {
+	.name = "psp",
+	.early_init = psp_early_init,
+	.late_init = NULL,
+	.sw_init = psp_sw_init,
+	.sw_fini = psp_sw_fini,
+	.hw_init = psp_hw_init,
+	.hw_fini = psp_hw_fini,
+	.suspend = psp_suspend,
+	.resume = psp_resume,
+	.is_idle = NULL,
+	.wait_for_idle = NULL,
+	.soft_reset = NULL,
+	.set_clockgating_state = psp_set_clockgating_state,
+	.set_powergating_state = psp_set_powergating_state,
+};
+
+static const struct amdgpu_psp_funcs psp_funcs = {
+	.check_fw_loading_status = psp_check_fw_loading_status,
+};
+
+static void psp_set_funcs(struct amdgpu_device *adev)
+{
+	if (!adev->firmware.funcs)
+		adev->firmware.funcs = &psp_funcs;
+}
+
+const struct amdgpu_ip_block_version psp_v3_1_ip_block =
+{
+	.type = AMD_IP_BLOCK_TYPE_PSP,
+	.major = 3,
+	.minor = 1,
+	.rev = 0,
+	.funcs = &psp_ip_funcs,
+};
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.h
new file mode 100644
index 0000000..e9f35e0
--- /dev/null
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.h
@@ -0,0 +1,127 @@
+/*
+ * Copyright 2016 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Author: Huang Rui
+ *
+ */
+#ifndef __AMDGPU_PSP_H__
+#define __AMDGPU_PSP_H__
+
+#include "amdgpu.h"
+#include "psp_gfx_if.h"
+
+#define PSP_FENCE_BUFFER_SIZE	0x1000
+#define PSP_CMD_BUFFER_SIZE	0x1000
+#define PSP_ASD_BIN_SIZE	0x40000
+#define PSP_ASD_SHARED_MEM_SIZE	0x4000
+
+enum psp_ring_type
+{
+	PSP_RING_TYPE__INVALID = 0,
+	/*
+	 * These values map to the way the PSP kernel identifies the
+	 * rings.
+	 */
+	PSP_RING_TYPE__UM = 1, /* User mode ring (formerly called RBI) */
+	PSP_RING_TYPE__KM = 2  /* Kernel mode ring (formerly called GPCOM) */
+};
+
+struct psp_ring
+{
+	enum psp_ring_type		ring_type;
+	struct psp_gfx_rb_frame		*ring_mem;
+	uint64_t			ring_mem_mc_addr;
+	void				*ring_mem_handle;
+	uint32_t			ring_size;
+};
+
+struct psp_context
+{
+	struct amdgpu_device            *adev;
+	struct psp_ring                 km_ring;
+
+	int (*init_microcode)(struct psp_context *psp);
+	int (*bootloader_load_sysdrv)(struct psp_context *psp);
+	int (*bootloader_load_sos)(struct psp_context *psp);
+	int (*prep_cmd_buf)(struct amdgpu_firmware_info *ucode,
+			    struct psp_gfx_cmd_resp *cmd);
+	int (*ring_init)(struct psp_context *psp, enum psp_ring_type ring_type);
+	int (*cmd_submit)(struct psp_context *psp, struct amdgpu_firmware_info *ucode,
+			  uint64_t cmd_buf_mc_addr, uint64_t fence_mc_addr, int index);
+	bool (*compare_sram_data)(struct psp_context *psp,
+				  struct amdgpu_firmware_info *ucode,
+				  enum AMDGPU_UCODE_ID ucode_type);
+	bool (*smu_reload_quirk)(struct psp_context *psp);
+
+	/* sos firmware */
+	const struct firmware		*sos_fw;
+	uint32_t			sos_fw_version;
+	uint32_t			sos_feature_version;
+	uint32_t			sys_bin_size;
+	uint32_t			sos_bin_size;
+	uint8_t				*sys_start_addr;
+	uint8_t				*sos_start_addr;
+
+	/* tmr buffer */
+	struct amdgpu_bo 		*tmr_bo;
+	uint64_t 			tmr_mc_addr;
+	void				*tmr_buf;
+
+	/* asd firmware */
+	const struct firmware		*asd_fw;
+	uint32_t			asd_fw_version;
+	uint32_t			asd_feature_version;
+	uint32_t			asd_ucode_size;
+	uint8_t				*asd_start_addr;
+
+	/* fence buffer */
+	struct amdgpu_bo 		*fence_buf_bo;
+	uint64_t 			fence_buf_mc_addr;
+	void				*fence_buf;
+};
+
+struct amdgpu_psp_funcs {
+	bool (*check_fw_loading_status)(struct amdgpu_device *adev,
+					enum AMDGPU_UCODE_ID);
+};
+
+#define psp_prep_cmd_buf(ucode, cmd) (psp)->prep_cmd_buf((ucode), (cmd))
+#define psp_ring_init(psp, type) (psp)->ring_init((psp), (type))
+#define psp_cmd_submit(psp, ucode, cmd_mc, fence_mc, index) \
+		(psp)->cmd_submit((psp), (ucode), (cmd_mc), (fence_mc), (index))
+#define psp_compare_sram_data(psp, ucode, type) \
+		(psp)->compare_sram_data((psp), (ucode), (type))
+#define psp_init_microcode(psp) \
+		((psp)->init_microcode ? (psp)->init_microcode((psp)) : 0)
+#define psp_bootloader_load_sysdrv(psp) \
+		((psp)->bootloader_load_sysdrv ? (psp)->bootloader_load_sysdrv((psp)) : 0)
+#define psp_bootloader_load_sos(psp) \
+		((psp)->bootloader_load_sos ? (psp)->bootloader_load_sos((psp)) : 0)
+#define psp_smu_reload_quirk(psp) \
+		((psp)->smu_reload_quirk ? (psp)->smu_reload_quirk((psp)) : false)
+
+extern const struct amd_ip_funcs psp_ip_funcs;
+
+extern const struct amdgpu_ip_block_version psp_v3_1_ip_block;
+extern int psp_wait_for(struct psp_context *psp, uint32_t reg_index,
+			uint32_t field_val, uint32_t mask, bool check_changed);
+
+#endif
diff --git a/drivers/gpu/drm/amd/amdgpu/psp_gfx_if.h b/drivers/gpu/drm/amd/amdgpu/psp_gfx_if.h
new file mode 100644
index 0000000..8da6da9
--- /dev/null
+++ b/drivers/gpu/drm/amd/amdgpu/psp_gfx_if.h
@@ -0,0 +1,269 @@
+/*
+ * Copyright 2017 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+
+#ifndef _PSP_TEE_GFX_IF_H_
+#define _PSP_TEE_GFX_IF_H_
+
+#define PSP_GFX_CMD_BUF_VERSION     0x00000001
+
+#define GFX_CMD_STATUS_MASK         0x0000FFFF
+#define GFX_CMD_ID_MASK             0x000F0000
+#define GFX_CMD_RESERVED_MASK       0x7FF00000
+#define GFX_CMD_RESPONSE_MASK       0x80000000
+
+/* TEE Gfx Command IDs for the register interface.
+*  Command ID must be between 0x00010000 and 0x000F0000.
+*/
+enum psp_gfx_crtl_cmd_id
+{
+    GFX_CTRL_CMD_ID_INIT_RBI_RING   = 0x00010000,   /* initialize RBI ring */
+    GFX_CTRL_CMD_ID_INIT_GPCOM_RING = 0x00020000,   /* initialize GPCOM ring */
+    GFX_CTRL_CMD_ID_DESTROY_RINGS   = 0x00030000,   /* destroy rings */
+    GFX_CTRL_CMD_ID_CAN_INIT_RINGS  = 0x00040000,   /* is it allowed to initialize the rings */
+
+    GFX_CTRL_CMD_ID_MAX             = 0x000F0000,   /* max command ID */
+};
+
+
+/* Control registers of the TEE Gfx interface. These are located in
+*  SRBM-to-PSP mailbox registers (total 8 registers).
+*/
+struct psp_gfx_ctrl
+{
+    volatile uint32_t   cmd_resp;         /* +0   Command/Response register for Gfx commands */
+    volatile uint32_t   rbi_wptr;         /* +4   Write pointer (index) of RBI ring */
+    volatile uint32_t   rbi_rptr;         /* +8   Read pointer (index) of RBI ring */
+    volatile uint32_t   gpcom_wptr;       /* +12  Write pointer (index) of GPCOM ring */
+    volatile uint32_t   gpcom_rptr;       /* +16  Read pointer (index) of GPCOM ring */
+    volatile uint32_t   ring_addr_lo;     /* +20  bits [31:0] of physical address of ring buffer */
+    volatile uint32_t   ring_addr_hi;     /* +24  bits [63:32] of physical address of ring buffer */
+    volatile uint32_t   ring_buf_size;    /* +28  Ring buffer size (in bytes) */
+
+};
+
+
+/* Response flag is set in the command when the command is completed by
+*  the PSP. Used in GFX_CTRL.CmdResp.
+*  The flag is also set when the PSP GFX I/F is initialized.
+*/
+#define GFX_FLAG_RESPONSE               0x80000000
+
+
+/* TEE Gfx Command IDs for the ring buffer interface. */
+enum psp_gfx_cmd_id
+{
+    GFX_CMD_ID_LOAD_TA      = 0x00000001,   /* load TA */
+    GFX_CMD_ID_UNLOAD_TA    = 0x00000002,   /* unload TA */
+    GFX_CMD_ID_INVOKE_CMD   = 0x00000003,   /* send command to TA */
+    GFX_CMD_ID_LOAD_ASD     = 0x00000004,   /* load ASD Driver */
+    GFX_CMD_ID_SETUP_TMR    = 0x00000005,   /* setup TMR region */
+    GFX_CMD_ID_LOAD_IP_FW   = 0x00000006,   /* load HW IP FW */
+
+};
+
+
+/* Command to load Trusted Application binary into PSP OS. */
+struct psp_gfx_cmd_load_ta
+{
+    uint32_t        app_phy_addr_lo;        /* bits [31:0] of the physical address of the TA binary (must be 4 KB aligned) */
+    uint32_t        app_phy_addr_hi;        /* bits [63:32] of the physical address of the TA binary */
+    uint32_t        app_len;                /* length of the TA binary in bytes */
+    uint32_t        cmd_buf_phy_addr_lo;    /* bits [31:0] of the physical address of CMD buffer (must be 4 KB aligned) */
+    uint32_t        cmd_buf_phy_addr_hi;    /* bits [63:32] of the physical address of CMD buffer */
+    uint32_t        cmd_buf_len;            /* length of the CMD buffer in bytes; must be multiple of 4 KB */
+
+    /* Note: CmdBufLen can be set to 0. In this case no persistent CMD buffer is provided
+    *       for the TA. Each InvokeCommand can have a dynamically mapped CMD buffer instead
+    *       of using the global persistent buffer.
+    */
+};
+
+
+/* Command to Unload Trusted Application binary from PSP OS. */
+struct psp_gfx_cmd_unload_ta
+{
+    uint32_t        session_id;          /* Session ID of the loaded TA to be unloaded */
+
+};
+
+
+/* Shared buffers for InvokeCommand.
+*/
+struct psp_gfx_buf_desc
+{
+    uint32_t        buf_phy_addr_lo;       /* bits [31:0] of physical address of the buffer (must be 4 KB aligned) */
+    uint32_t        buf_phy_addr_hi;       /* bits [63:32] of physical address of the buffer */
+    uint32_t        buf_size;              /* buffer size in bytes (must be multiple of 4 KB and no bigger than 64 MB) */
+
+};
+
+/* Max number of descriptors for one shared buffer, i.e. the number of
+*  different physical locations in which one shared buffer can be stored.
+*  If the buffer is more fragmented than this, an error is returned.
+*/
+#define GFX_BUF_MAX_DESC        64
+
+struct psp_gfx_buf_list
+{
+    uint32_t                num_desc;                    /* number of buffer descriptors in the list */
+    uint32_t                total_size;                  /* total size of all buffers in the list in bytes (must be multiple of 4 KB) */
+    struct psp_gfx_buf_desc buf_desc[GFX_BUF_MAX_DESC];  /* list of buffer descriptors */
+
+    /* total 776 bytes */
+};
+
+/* Command to execute InvokeCommand entry point of the TA. */
+struct psp_gfx_cmd_invoke_cmd
+{
+    uint32_t                session_id;           /* Session ID of the TA to be executed */
+    uint32_t                ta_cmd_id;            /* Command ID to be sent to TA */
+    struct psp_gfx_buf_list buf;                  /* one indirect buffer (scatter/gather list) */
+
+};
+
+
+/* Command to setup TMR region. */
+struct psp_gfx_cmd_setup_tmr
+{
+    uint32_t        buf_phy_addr_lo;       /* bits [31:0] of physical address of TMR buffer (must be 4 KB aligned) */
+    uint32_t        buf_phy_addr_hi;       /* bits [63:32] of physical address of TMR buffer */
+    uint32_t        buf_size;              /* buffer size in bytes (must be multiple of 4 KB) */
+
+};
+
+
+/* FW types for GFX_CMD_ID_LOAD_IP_FW command. Limit 31. */
+enum psp_gfx_fw_type
+{
+    GFX_FW_TYPE_NONE        = 0,
+    GFX_FW_TYPE_CP_ME       = 1,
+    GFX_FW_TYPE_CP_PFP      = 2,
+    GFX_FW_TYPE_CP_CE       = 3,
+    GFX_FW_TYPE_CP_MEC      = 4,
+    GFX_FW_TYPE_CP_MEC_ME1  = 5,
+    GFX_FW_TYPE_CP_MEC_ME2  = 6,
+    GFX_FW_TYPE_RLC_V       = 7,
+    GFX_FW_TYPE_RLC_G       = 8,
+    GFX_FW_TYPE_SDMA0       = 9,
+    GFX_FW_TYPE_SDMA1       = 10,
+    GFX_FW_TYPE_DMCU_ERAM   = 11,
+    GFX_FW_TYPE_DMCU_ISR    = 12,
+    GFX_FW_TYPE_VCN         = 13,
+    GFX_FW_TYPE_UVD         = 14,
+    GFX_FW_TYPE_VCE         = 15,
+    GFX_FW_TYPE_ISP         = 16,
+    GFX_FW_TYPE_ACP         = 17,
+    GFX_FW_TYPE_SMU         = 18,
+};
+
+/* Command to load HW IP FW. */
+struct psp_gfx_cmd_load_ip_fw
+{
+    uint32_t                fw_phy_addr_lo;    /* bits [31:0] of physical address of FW location (must be 4 KB aligned) */
+    uint32_t                fw_phy_addr_hi;    /* bits [63:32] of physical address of FW location */
+    uint32_t                fw_size;           /* FW buffer size in bytes */
+    enum psp_gfx_fw_type    fw_type;           /* FW type */
+
+};
+
+
+/* All GFX ring buffer commands. */
+union psp_gfx_commands
+{
+    struct psp_gfx_cmd_load_ta          cmd_load_ta;
+    struct psp_gfx_cmd_unload_ta        cmd_unload_ta;
+    struct psp_gfx_cmd_invoke_cmd       cmd_invoke_cmd;
+    struct psp_gfx_cmd_setup_tmr        cmd_setup_tmr;
+    struct psp_gfx_cmd_load_ip_fw       cmd_load_ip_fw;
+
+};
+
+
+/* Structure of the GFX Response buffer.
+* For the GPCOM I/F it is part of the GFX_CMD_RESP buffer; for RBI
+* it is a separate buffer.
+*/
+struct psp_gfx_resp
+{
+    uint32_t    status;             /* +0  status of command execution */
+    uint32_t    session_id;         /* +4  session ID in response to LoadTa command */
+    uint32_t    fw_addr_lo;         /* +8  bits [31:0] of FW address within TMR (in response to cmd_load_ip_fw command) */
+    uint32_t    fw_addr_hi;         /* +12 bits [63:32] of FW address within TMR (in response to cmd_load_ip_fw command) */
+
+    uint32_t    reserved[4];
+
+    /* total 32 bytes */
+};
+
+/* Structure of Command buffer pointed by psp_gfx_rb_frame.cmd_buf_addr_hi
+*  and psp_gfx_rb_frame.cmd_buf_addr_lo.
+*/
+struct psp_gfx_cmd_resp
+{
+    uint32_t        buf_size;           /* +0  total size of the buffer in bytes */
+    uint32_t        buf_version;        /* +4  version of the buffer structure; must be PSP_GFX_CMD_BUF_VERSION */
+    uint32_t        cmd_id;             /* +8  command ID */
+
+    /* These fields are used for RBI only. They are all 0 in GPCOM commands.
+    */
+    uint32_t        resp_buf_addr_lo;   /* +12 bits [31:0] of physical address of response buffer (must be 4 KB aligned) */
+    uint32_t        resp_buf_addr_hi;   /* +16 bits [63:32] of physical address of response buffer */
+    uint32_t        resp_offset;        /* +20 offset within response buffer */
+    uint32_t        resp_buf_size;      /* +24 total size of the response buffer in bytes */
+
+    union psp_gfx_commands  cmd;        /* +28 command specific structures */
+
+    uint8_t         reserved_1[864 - sizeof(union psp_gfx_commands) - 28];
+
+    /* Note: Resp is part of this buffer for the GPCOM ring. For the RBI ring the
+    *        response is a separate buffer pointed to by resp_buf_addr_hi and resp_buf_addr_lo.
+    */
+    struct psp_gfx_resp     resp;       /* +864 response */
+
+    uint8_t         reserved_2[1024 - 864 - sizeof(struct psp_gfx_resp)];
+
+    /* total size 1024 bytes */
+};
+
+
+#define FRAME_TYPE_DESTROY          1   /* frame sent by KMD driver when UMD Scheduler context is destroyed */
+
+/* Structure of the Ring Buffer Frame */
+struct psp_gfx_rb_frame
+{
+    uint32_t    cmd_buf_addr_lo;    /* +0  bits [31:0] of physical address of command buffer (must be 4 KB aligned) */
+    uint32_t    cmd_buf_addr_hi;    /* +4  bits [63:32] of physical address of command buffer */
+    uint32_t    cmd_buf_size;       /* +8  command buffer size in bytes */
+    uint32_t    fence_addr_lo;      /* +12 bits [31:0] of physical address of Fence for this frame */
+    uint32_t    fence_addr_hi;      /* +16 bits [63:32] of physical address of Fence for this frame */
+    uint32_t    fence_value;        /* +20 Fence value */
+    uint32_t    sid_lo;             /* +24 bits [31:0] of SID value (used only for RBI frames) */
+    uint32_t    sid_hi;             /* +28 bits [63:32] of SID value (used only for RBI frames) */
+    uint8_t     vmid;               /* +32 VMID value used for mapping of all addresses for this frame */
+    uint8_t     frame_type;         /* +33 1: destroy context frame, 0: all other frames; used only for RBI frames */
+    uint8_t     reserved1[2];       /* +34 reserved, must be 0 */
+    uint32_t    reserved2[7];       /* +40 reserved, must be 0 */
+    /* total 64 bytes */
+};
+
+#endif /* _PSP_TEE_GFX_IF_H_ */
diff --git a/drivers/gpu/drm/amd/amdgpu/psp_v3_1.c b/drivers/gpu/drm/amd/amdgpu/psp_v3_1.c
new file mode 100644
index 0000000..49c3844
--- /dev/null
+++ b/drivers/gpu/drm/amd/amdgpu/psp_v3_1.c
@@ -0,0 +1,507 @@
+/*
+ * Copyright 2016 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Author: Huang Rui
+ *
+ */
+
+#include <linux/firmware.h>
+#include "drmP.h"
+#include "amdgpu.h"
+#include "amdgpu_psp.h"
+#include "amdgpu_ucode.h"
+#include "soc15_common.h"
+#include "psp_v3_1.h"
+
+#include "vega10/soc15ip.h"
+#include "vega10/MP/mp_9_0_offset.h"
+#include "vega10/MP/mp_9_0_sh_mask.h"
+#include "vega10/GC/gc_9_0_offset.h"
+#include "vega10/SDMA0/sdma0_4_0_offset.h"
+#include "vega10/NBIO/nbio_6_1_offset.h"
+
+MODULE_FIRMWARE("amdgpu/vega10_sos.bin");
+MODULE_FIRMWARE("amdgpu/vega10_asd.bin");
+
+#define smnMP1_FIRMWARE_FLAGS 0x3010028
+
+static int
+psp_v3_1_get_fw_type(struct amdgpu_firmware_info *ucode, enum psp_gfx_fw_type *type)
+{
+	switch (ucode->ucode_id) {
+	case AMDGPU_UCODE_ID_SDMA0:
+		*type = GFX_FW_TYPE_SDMA0;
+		break;
+	case AMDGPU_UCODE_ID_SDMA1:
+		*type = GFX_FW_TYPE_SDMA1;
+		break;
+	case AMDGPU_UCODE_ID_CP_CE:
+		*type = GFX_FW_TYPE_CP_CE;
+		break;
+	case AMDGPU_UCODE_ID_CP_PFP:
+		*type = GFX_FW_TYPE_CP_PFP;
+		break;
+	case AMDGPU_UCODE_ID_CP_ME:
+		*type = GFX_FW_TYPE_CP_ME;
+		break;
+	case AMDGPU_UCODE_ID_CP_MEC1:
+		*type = GFX_FW_TYPE_CP_MEC;
+		break;
+	case AMDGPU_UCODE_ID_CP_MEC1_JT:
+		*type = GFX_FW_TYPE_CP_MEC_ME1;
+		break;
+	case AMDGPU_UCODE_ID_CP_MEC2:
+		*type = GFX_FW_TYPE_CP_MEC;
+		break;
+	case AMDGPU_UCODE_ID_CP_MEC2_JT:
+		*type = GFX_FW_TYPE_CP_MEC_ME2;
+		break;
+	case AMDGPU_UCODE_ID_RLC_G:
+		*type = GFX_FW_TYPE_RLC_G;
+		break;
+	case AMDGPU_UCODE_ID_SMC:
+		*type = GFX_FW_TYPE_SMU;
+		break;
+	case AMDGPU_UCODE_ID_UVD:
+		*type = GFX_FW_TYPE_UVD;
+		break;
+	case AMDGPU_UCODE_ID_VCE:
+		*type = GFX_FW_TYPE_VCE;
+		break;
+	case AMDGPU_UCODE_ID_MAXIMUM:
+	default:
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+int psp_v3_1_init_microcode(struct psp_context *psp)
+{
+	struct amdgpu_device *adev = psp->adev;
+	const char *chip_name;
+	char fw_name[30];
+	int err = 0;
+	const struct psp_firmware_header_v1_0 *hdr;
+
+	DRM_DEBUG("\n");
+
+	switch (adev->asic_type) {
+	case CHIP_VEGA10:
+		chip_name = "vega10";
+		break;
+	default: BUG();
+	}
+
+	snprintf(fw_name, sizeof(fw_name), "amdgpu/%s_sos.bin", chip_name);
+	err = request_firmware(&adev->psp.sos_fw, fw_name, adev->dev);
+	if (err)
+		goto out;
+
+	err = amdgpu_ucode_validate(adev->psp.sos_fw);
+	if (err)
+		goto out;
+
+	hdr = (const struct psp_firmware_header_v1_0 *)adev->psp.sos_fw->data;
+	adev->psp.sos_fw_version = le32_to_cpu(hdr->header.ucode_version);
+	adev->psp.sos_feature_version = le32_to_cpu(hdr->ucode_feature_version);
+	adev->psp.sos_bin_size = le32_to_cpu(hdr->sos_size_bytes);
+	adev->psp.sys_bin_size = le32_to_cpu(hdr->header.ucode_size_bytes) -
+					le32_to_cpu(hdr->sos_size_bytes);
+	adev->psp.sys_start_addr = (uint8_t *)hdr +
+				le32_to_cpu(hdr->header.ucode_array_offset_bytes);
+	adev->psp.sos_start_addr = (uint8_t *)adev->psp.sys_start_addr +
+				le32_to_cpu(hdr->sos_offset_bytes);
+
+	snprintf(fw_name, sizeof(fw_name), "amdgpu/%s_asd.bin", chip_name);
+	err = request_firmware(&adev->psp.asd_fw, fw_name, adev->dev);
+	if (err)
+		goto out;
+
+	err = amdgpu_ucode_validate(adev->psp.asd_fw);
+	if (err)
+		goto out;
+
+	hdr = (const struct psp_firmware_header_v1_0 *)adev->psp.asd_fw->data;
+	adev->psp.asd_fw_version = le32_to_cpu(hdr->header.ucode_version);
+	adev->psp.asd_feature_version = le32_to_cpu(hdr->ucode_feature_version);
+	adev->psp.asd_ucode_size = le32_to_cpu(hdr->header.ucode_size_bytes);
+	adev->psp.asd_start_addr = (uint8_t *)hdr +
+				le32_to_cpu(hdr->header.ucode_array_offset_bytes);
+
+	return 0;
+out:
+	if (err) {
+		dev_err(adev->dev,
+			"psp v3.1: Failed to load firmware \"%s\"\n",
+			fw_name);
+		release_firmware(adev->psp.sos_fw);
+		adev->psp.sos_fw = NULL;
+		release_firmware(adev->psp.asd_fw);
+		adev->psp.asd_fw = NULL;
+	}
+
+	return err;
+}
+
+int psp_v3_1_bootloader_load_sysdrv(struct psp_context *psp)
+{
+	int ret;
+	uint32_t psp_gfxdrv_command_reg = 0;
+	struct amdgpu_bo *psp_sysdrv;
+	void *psp_sysdrv_virt = NULL;
+	uint64_t psp_sysdrv_mem;
+	struct amdgpu_device *adev = psp->adev;
+	uint32_t size;
+
+	/* Wait for the bootloader to signal that it is ready by setting bit 31 of C2PMSG_35 to 1 */
+	ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_35),
+			   0x80000000, 0x80000000, false);
+	if (ret)
+		return ret;
+
+	/*
+	 * Allocate GART memory to hold the psp sys driver binary,
+	 * with both size and address aligned to 1 MB
+	 */
+	size = (psp->sys_bin_size + (PSP_BOOTLOADER_1_MEG_ALIGNMENT - 1)) &
+		(~(PSP_BOOTLOADER_1_MEG_ALIGNMENT - 1));
+
+	ret = amdgpu_bo_create_kernel(adev, size, PSP_BOOTLOADER_1_MEG_ALIGNMENT,
+				      AMDGPU_GEM_DOMAIN_GTT,
+				      &psp_sysdrv,
+				      &psp_sysdrv_mem,
+				      &psp_sysdrv_virt);
+	if (ret)
+		return ret;
+
+	/* Copy PSP System Driver binary to memory */
+	memcpy(psp_sysdrv_virt, psp->sys_start_addr, psp->sys_bin_size);
+
+	/* Provide the sys driver to bootrom */
+	WREG32(SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_36),
+	       (uint32_t)(psp_sysdrv_mem >> 20));
+	psp_gfxdrv_command_reg = 1 << 16;
+	WREG32(SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_35),
+	       psp_gfxdrv_command_reg);
+
+	/* there might be a handshake issue with the hardware which needs a delay */
+	mdelay(20);
+
+	ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_35),
+			   0x80000000, 0x80000000, false);
+
+	amdgpu_bo_free_kernel(&psp_sysdrv, &psp_sysdrv_mem, &psp_sysdrv_virt);
+
+	return ret;
+}
+
+int psp_v3_1_bootloader_load_sos(struct psp_context *psp)
+{
+	int ret;
+	unsigned int psp_gfxdrv_command_reg = 0;
+	struct amdgpu_bo *psp_sos;
+	void *psp_sos_virt = NULL;
+	uint64_t psp_sos_mem;
+	struct amdgpu_device *adev = psp->adev;
+	uint32_t size;
+
+	/* Wait for the bootloader to signal that it is ready by setting bit 31 of C2PMSG_35 to 1 */
+	ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_35),
+			   0x80000000, 0x80000000, false);
+	if (ret)
+		return ret;
+
+	size = (psp->sos_bin_size + (PSP_BOOTLOADER_1_MEG_ALIGNMENT - 1)) &
+		(~((uint64_t)PSP_BOOTLOADER_1_MEG_ALIGNMENT - 1));
+
+	ret = amdgpu_bo_create_kernel(adev, size, PSP_BOOTLOADER_1_MEG_ALIGNMENT,
+				      AMDGPU_GEM_DOMAIN_GTT,
+				      &psp_sos,
+				      &psp_sos_mem,
+				      &psp_sos_virt);
+	if (ret)
+		return ret;
+
+	/* Copy Secure OS binary to PSP memory */
+	memcpy(psp_sos_virt, psp->sos_start_addr, psp->sos_bin_size);
+
+	/* Provide the PSP secure OS to bootrom */
+	WREG32(SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_36),
+	       (uint32_t)(psp_sos_mem >> 20));
+	psp_gfxdrv_command_reg = 2 << 16;
+	WREG32(SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_35),
+	       psp_gfxdrv_command_reg);
+
+	/* there might be a handshake issue with the hardware which needs a delay */
+	mdelay(20);
+#if 0
+	ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_81),
+			   RREG32(SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_81)),
+			   0, true);
+#endif
+
+	amdgpu_bo_free_kernel(&psp_sos, &psp_sos_mem, &psp_sos_virt);
+
+	return ret;
+}
+
+int psp_v3_1_prep_cmd_buf(struct amdgpu_firmware_info *ucode, struct psp_gfx_cmd_resp *cmd)
+{
+	int ret;
+	uint64_t fw_mem_mc_addr = ucode->mc_addr;
+
+	memset(cmd, 0, sizeof(struct psp_gfx_cmd_resp));
+
+	cmd->cmd_id = GFX_CMD_ID_LOAD_IP_FW;
+	cmd->cmd.cmd_load_ip_fw.fw_phy_addr_lo = (uint32_t)fw_mem_mc_addr;
+	cmd->cmd.cmd_load_ip_fw.fw_phy_addr_hi = (uint32_t)((uint64_t)fw_mem_mc_addr >> 32);
+	cmd->cmd.cmd_load_ip_fw.fw_size = ucode->ucode_size;
+
+	ret = psp_v3_1_get_fw_type(ucode, &cmd->cmd.cmd_load_ip_fw.fw_type);
+	if (ret)
+		DRM_ERROR("Unknown firmware type\n");
+
+	return ret;
+}
+
+int psp_v3_1_ring_init(struct psp_context *psp, enum psp_ring_type ring_type)
+{
+	int ret = 0;
+	unsigned int psp_ring_reg = 0;
+	struct psp_ring *ring;
+	struct amdgpu_device *adev = psp->adev;
+
+	ring = &psp->km_ring;
+
+	ring->ring_type = ring_type;
+
+	/* allocate a 4 KB page of local frame buffer (VRAM) memory for the ring */
+	ring->ring_size = 0x1000;
+	ret = amdgpu_bo_create_kernel(adev, ring->ring_size, PAGE_SIZE,
+				      AMDGPU_GEM_DOMAIN_VRAM,
+				      &adev->firmware.rbuf,
+				      &ring->ring_mem_mc_addr,
+				      (void **)&ring->ring_mem);
+	if (ret) {
+		ring->ring_size = 0;
+		return ret;
+	}
+
+	/* Write low address of the ring to C2PMSG_69 */
+	psp_ring_reg = lower_32_bits(ring->ring_mem_mc_addr);
+	WREG32(SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_69), psp_ring_reg);
+	/* Write high address of the ring to C2PMSG_70 */
+	psp_ring_reg = upper_32_bits(ring->ring_mem_mc_addr);
+	WREG32(SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_70), psp_ring_reg);
+	/* Write size of ring to C2PMSG_71 */
+	psp_ring_reg = ring->ring_size;
+	WREG32(SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_71), psp_ring_reg);
+	/* Write the ring initialization command to C2PMSG_64 */
+	psp_ring_reg = ring_type;
+	psp_ring_reg = psp_ring_reg << 16;
+	WREG32(SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_64), psp_ring_reg);
+
+	/* there might be a handshake issue with the hardware which needs a delay */
+	mdelay(20);
+
+	/* Wait for response flag (bit 31) in C2PMSG_64 */
+	ret = psp_wait_for(psp, SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_64),
+			   0x80000000, 0x8000FFFF, false);
+
+	return ret;
+}
+
+int psp_v3_1_cmd_submit(struct psp_context *psp,
+		        struct amdgpu_firmware_info *ucode,
+		        uint64_t cmd_buf_mc_addr, uint64_t fence_mc_addr,
+		        int index)
+{
+	unsigned int psp_write_ptr_reg = 0;
+	struct psp_gfx_rb_frame *write_frame = psp->km_ring.ring_mem;
+	struct psp_ring *ring = &psp->km_ring;
+	struct amdgpu_device *adev = psp->adev;
+	uint32_t ring_size_dw = ring->ring_size / 4;
+	uint32_t rb_frame_size_dw = sizeof(struct psp_gfx_rb_frame) / 4;
+
+	/* KM (GPCOM) prepare write pointer */
+	psp_write_ptr_reg = RREG32(SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_67));
+
+	/* Update KM RB frame pointer to new frame */
+	/* write_frame ptr increments by size of rb_frame in bytes */
+	/* psp_write_ptr_reg increments by size of rb_frame in DWORDs */
+	if ((psp_write_ptr_reg % ring_size_dw) == 0)
+		write_frame = ring->ring_mem;
+	else
+		write_frame = ring->ring_mem + (psp_write_ptr_reg / rb_frame_size_dw);
+
+	/* Initialize KM RB frame */
+	memset(write_frame, 0, sizeof(struct psp_gfx_rb_frame));
+
+	/* Update KM RB frame */
+	write_frame->cmd_buf_addr_hi = (unsigned int)(cmd_buf_mc_addr >> 32);
+	write_frame->cmd_buf_addr_lo = (unsigned int)(cmd_buf_mc_addr);
+	write_frame->fence_addr_hi = (unsigned int)(fence_mc_addr >> 32);
+	write_frame->fence_addr_lo = (unsigned int)(fence_mc_addr);
+	write_frame->fence_value = index;
+
+	/* Update the write Pointer in DWORDs */
+	psp_write_ptr_reg = (psp_write_ptr_reg + rb_frame_size_dw) % ring_size_dw;
+	WREG32(SOC15_REG_OFFSET(MP0, 0, mmMP0_SMN_C2PMSG_67), psp_write_ptr_reg);
+
+	return 0;
+}
+
+static int
+psp_v3_1_sram_map(unsigned int *sram_offset, unsigned int *sram_addr_reg_offset,
+		  unsigned int *sram_data_reg_offset,
+		  enum AMDGPU_UCODE_ID ucode_id)
+{
+	int ret = 0;
+
+	switch (ucode_id) {
+/* TODO: needs to confirm */
+#if 0
+	case AMDGPU_UCODE_ID_SMC:
+		*sram_offset = 0;
+		*sram_addr_reg_offset = 0;
+		*sram_data_reg_offset = 0;
+		break;
+#endif
+
+	case AMDGPU_UCODE_ID_CP_CE:
+		*sram_offset = 0x0;
+		*sram_addr_reg_offset = SOC15_REG_OFFSET(GC, 0, mmCP_CE_UCODE_ADDR);
+		*sram_data_reg_offset = SOC15_REG_OFFSET(GC, 0, mmCP_CE_UCODE_DATA);
+		break;
+
+	case AMDGPU_UCODE_ID_CP_PFP:
+		*sram_offset = 0x0;
+		*sram_addr_reg_offset = SOC15_REG_OFFSET(GC, 0, mmCP_PFP_UCODE_ADDR);
+		*sram_data_reg_offset = SOC15_REG_OFFSET(GC, 0, mmCP_PFP_UCODE_DATA);
+		break;
+
+	case AMDGPU_UCODE_ID_CP_ME:
+		*sram_offset = 0x0;
+		*sram_addr_reg_offset = SOC15_REG_OFFSET(GC, 0, mmCP_HYP_ME_UCODE_ADDR);
+		*sram_data_reg_offset = SOC15_REG_OFFSET(GC, 0, mmCP_HYP_ME_UCODE_DATA);
+		break;
+
+	case AMDGPU_UCODE_ID_CP_MEC1:
+		*sram_offset = 0x10000;
+		*sram_addr_reg_offset = SOC15_REG_OFFSET(GC, 0, mmCP_MEC_ME1_UCODE_ADDR);
+		*sram_data_reg_offset = SOC15_REG_OFFSET(GC, 0, mmCP_MEC_ME1_UCODE_DATA);
+		break;
+
+	case AMDGPU_UCODE_ID_CP_MEC2:
+		*sram_offset = 0x10000;
+		*sram_addr_reg_offset = SOC15_REG_OFFSET(GC, 0, mmCP_HYP_MEC2_UCODE_ADDR);
+		*sram_data_reg_offset = SOC15_REG_OFFSET(GC, 0, mmCP_HYP_MEC2_UCODE_DATA);
+		break;
+
+	case AMDGPU_UCODE_ID_RLC_G:
+		*sram_offset = 0x2000;
+		*sram_addr_reg_offset = SOC15_REG_OFFSET(GC, 0, mmRLC_GPM_UCODE_ADDR);
+		*sram_data_reg_offset = SOC15_REG_OFFSET(GC, 0, mmRLC_GPM_UCODE_DATA);
+		break;
+
+	case AMDGPU_UCODE_ID_SDMA0:
+		*sram_offset = 0x0;
+		*sram_addr_reg_offset = SOC15_REG_OFFSET(SDMA0, 0, mmSDMA0_UCODE_ADDR);
+		*sram_data_reg_offset = SOC15_REG_OFFSET(SDMA0, 0, mmSDMA0_UCODE_DATA);
+		break;
+
+/* TODO: needs to confirm */
+#if 0
+	case AMDGPU_UCODE_ID_SDMA1:
+		*sram_offset = ;
+		*sram_addr_reg_offset = ;
+		break;
+
+	case AMDGPU_UCODE_ID_UVD:
+		*sram_offset = ;
+		*sram_addr_reg_offset = ;
+		break;
+
+	case AMDGPU_UCODE_ID_VCE:
+		*sram_offset = ;
+		*sram_addr_reg_offset = ;
+		break;
+#endif
+
+	case AMDGPU_UCODE_ID_MAXIMUM:
+	default:
+		ret = -EINVAL;
+		break;
+	}
+
+	return ret;
+}
+
+bool psp_v3_1_compare_sram_data(struct psp_context *psp,
+				struct amdgpu_firmware_info *ucode,
+				enum AMDGPU_UCODE_ID ucode_type)
+{
+	int err = 0;
+	unsigned int fw_sram_reg_val = 0;
+	unsigned int fw_sram_addr_reg_offset = 0;
+	unsigned int fw_sram_data_reg_offset = 0;
+	unsigned int ucode_size;
+	uint32_t *ucode_mem = NULL;
+	struct amdgpu_device *adev = psp->adev;
+
+	err = psp_v3_1_sram_map(&fw_sram_reg_val, &fw_sram_addr_reg_offset,
+				&fw_sram_data_reg_offset, ucode_type);
+	if (err)
+		return false;
+
+	WREG32(fw_sram_addr_reg_offset, fw_sram_reg_val);
+
+	ucode_size = ucode->ucode_size;
+	ucode_mem = (uint32_t *)ucode->kaddr;
+	while (ucode_size) {
+		fw_sram_reg_val = RREG32(fw_sram_data_reg_offset);
+
+		if (*ucode_mem != fw_sram_reg_val)
+			return false;
+
+		ucode_mem++;
+		/* 4 bytes */
+		ucode_size -= 4;
+	}
+
+	return true;
+}
+
+bool psp_v3_1_smu_reload_quirk(struct psp_context *psp)
+{
+	struct amdgpu_device *adev = psp->adev;
+	uint32_t reg, reg_val;
+
+	reg_val = (smnMP1_FIRMWARE_FLAGS & 0xffffffff) | 0x03b00000;
+	WREG32(SOC15_REG_OFFSET(NBIO, 0, mmPCIE_INDEX2), reg_val);
+	reg = RREG32(SOC15_REG_OFFSET(NBIO, 0, mmPCIE_DATA2));
+	if ((reg & MP1_FIRMWARE_FLAGS__INTERRUPTS_ENABLED_MASK) >>
+	     MP1_FIRMWARE_FLAGS__INTERRUPTS_ENABLED__SHIFT)
+		return true;
+
+	return false;
+}
diff --git a/drivers/gpu/drm/amd/amdgpu/psp_v3_1.h b/drivers/gpu/drm/amd/amdgpu/psp_v3_1.h
new file mode 100644
index 0000000..e82eff7
--- /dev/null
+++ b/drivers/gpu/drm/amd/amdgpu/psp_v3_1.h
@@ -0,0 +1,50 @@
+/*
+ * Copyright 2016 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Author: Huang Rui
+ *
+ */
+#ifndef __PSP_V3_1_H__
+#define __PSP_V3_1_H__
+
+#include "amdgpu_psp.h"
+
+enum { PSP_DIRECTORY_TABLE_ENTRIES = 4 };
+enum { PSP_BINARY_ALIGNMENT = 64 };
+enum { PSP_BOOTLOADER_1_MEG_ALIGNMENT = 0x100000 };
+enum { PSP_BOOTLOADER_8_MEM_ALIGNMENT = 0x800000 };
+
+extern int psp_v3_1_init_microcode(struct psp_context *psp);
+extern int psp_v3_1_bootloader_load_sysdrv(struct psp_context *psp);
+extern int psp_v3_1_bootloader_load_sos(struct psp_context *psp);
+extern int psp_v3_1_prep_cmd_buf(struct amdgpu_firmware_info *ucode,
+				 struct psp_gfx_cmd_resp *cmd);
+extern int psp_v3_1_ring_init(struct psp_context *psp,
+			      enum psp_ring_type ring_type);
+extern int psp_v3_1_cmd_submit(struct psp_context *psp,
+			       struct amdgpu_firmware_info *ucode,
+			       uint64_t cmd_buf_mc_addr, uint64_t fence_mc_addr,
+			       int index);
+extern bool psp_v3_1_compare_sram_data(struct psp_context *psp,
+				       struct amdgpu_firmware_info *ucode,
+				       enum AMDGPU_UCODE_ID ucode_type);
+extern bool psp_v3_1_smu_reload_quirk(struct psp_context *psp);
+#endif
diff --git a/drivers/gpu/drm/amd/include/amd_shared.h b/drivers/gpu/drm/amd/include/amd_shared.h
index a94420d..2ccf44e 100644
--- a/drivers/gpu/drm/amd/include/amd_shared.h
+++ b/drivers/gpu/drm/amd/include/amd_shared.h
@@ -68,6 +68,7 @@ enum amd_ip_block_type {
 	AMD_IP_BLOCK_TYPE_GMC,
 	AMD_IP_BLOCK_TYPE_IH,
 	AMD_IP_BLOCK_TYPE_SMC,
+	AMD_IP_BLOCK_TYPE_PSP,
 	AMD_IP_BLOCK_TYPE_DCE,
 	AMD_IP_BLOCK_TYPE_GFX,
 	AMD_IP_BLOCK_TYPE_SDMA,
-- 
2.5.5

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 101+ messages in thread

* [PATCH 053/100] drm/amdgpu: add psp firmware info into info query and debugfs
@ 2017-03-20 20:29   ` Alex Deucher
  85 siblings, 0 replies; 101+ messages in thread
From: Alex Deucher @ 2017-03-20 20:29 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Alex Deucher, Huang Rui

From: Huang Rui <ray.huang@amd.com>

Signed-off-by: Huang Rui <ray.huang@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c | 25 +++++++++++++++++++++++++
 include/uapi/drm/amdgpu_drm.h           |  4 ++++
 2 files changed, 29 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
index de0c776..a2f2b7f 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
@@ -209,6 +209,14 @@ static int amdgpu_firmware_info(struct drm_amdgpu_info_firmware *fw_info,
 		fw_info->ver = adev->sdma.instance[query_fw->index].fw_version;
 		fw_info->feature = adev->sdma.instance[query_fw->index].feature_version;
 		break;
+	case AMDGPU_INFO_FW_SOS:
+		fw_info->ver = adev->psp.sos_fw_version;
+		fw_info->feature = adev->psp.sos_feature_version;
+		break;
+	case AMDGPU_INFO_FW_ASD:
+		fw_info->ver = adev->psp.asd_fw_version;
+		fw_info->feature = adev->psp.asd_feature_version;
+		break;
 	default:
 		return -EINVAL;
 	}
@@ -1095,6 +1103,23 @@ static int amdgpu_debugfs_firmware_info(struct seq_file *m, void *data)
 			   fw_info.feature, fw_info.ver);
 	}
 
+	/* PSP SOS */
+	query_fw.fw_type = AMDGPU_INFO_FW_SOS;
+	ret = amdgpu_firmware_info(&fw_info, &query_fw, adev);
+	if (ret)
+		return ret;
+	seq_printf(m, "SOS feature version: %u, firmware version: 0x%08x\n",
+		   fw_info.feature, fw_info.ver);
+
+
+	/* PSP ASD */
+	query_fw.fw_type = AMDGPU_INFO_FW_ASD;
+	ret = amdgpu_firmware_info(&fw_info, &query_fw, adev);
+	if (ret)
+		return ret;
+	seq_printf(m, "ASD feature version: %u, firmware version: 0x%08x\n",
+		   fw_info.feature, fw_info.ver);
+
 	/* SMC */
 	query_fw.fw_type = AMDGPU_INFO_FW_SMC;
 	ret = amdgpu_firmware_info(&fw_info, &query_fw, adev);
diff --git a/include/uapi/drm/amdgpu_drm.h b/include/uapi/drm/amdgpu_drm.h
index 289b129..7da19cd 100644
--- a/include/uapi/drm/amdgpu_drm.h
+++ b/include/uapi/drm/amdgpu_drm.h
@@ -524,6 +524,10 @@ struct drm_amdgpu_cs_chunk_data {
 	#define AMDGPU_INFO_FW_SMC		0x0a
 	/* Subquery id: Query SDMA firmware version */
 	#define AMDGPU_INFO_FW_SDMA		0x0b
+	/* Subquery id: Query PSP SOS firmware version */
+	#define AMDGPU_INFO_FW_SOS		0x0c
+	/* Subquery id: Query PSP ASD firmware version */
+	#define AMDGPU_INFO_FW_ASD		0x0d
 /* number of bytes moved for TTM migration */
 #define AMDGPU_INFO_NUM_BYTES_MOVED		0x0f
 /* the used VRAM size */
-- 
2.5.5


* [PATCH 054/100] drm/amdgpu: add SMC firmware into global ucode list for psp loading
       [not found] ` <1490041835-11255-1-git-send-email-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (37 preceding siblings ...)
  2017-03-20 20:29   ` [PATCH 053/100] drm/amdgpu: add psp firmware info into info query and debugfs Alex Deucher
@ 2017-03-20 20:29   ` Alex Deucher
  2017-03-20 20:29   ` [PATCH 055/100] drm/amd/powerplay: add smu9 header files for Vega10 Alex Deucher
                     ` (46 subsequent siblings)
  85 siblings, 0 replies; 101+ messages in thread
From: Alex Deucher @ 2017-03-20 20:29 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Alex Deucher, Huang Rui

From: Huang Rui <ray.huang@amd.com>

Signed-off-by: Huang Rui <ray.huang@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_cgs.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cgs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cgs.c
index d42eade..1524d90 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cgs.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cgs.c
@@ -837,6 +837,8 @@ static int amdgpu_cgs_get_firmware_info(struct cgs_device *cgs_device,
 		uint32_t ucode_start_address;
 		const uint8_t *src;
 		const struct smc_firmware_header_v1_0 *hdr;
+		const struct common_firmware_header *header;
+		struct amdgpu_firmware_info *ucode = NULL;
 
 		if (CGS_UCODE_ID_SMU_SK == type)
 			amdgpu_cgs_rel_firmware(cgs_device, CGS_UCODE_ID_SMU);
@@ -919,6 +921,15 @@ static int amdgpu_cgs_get_firmware_info(struct cgs_device *cgs_device,
 				adev->pm.fw = NULL;
 				return err;
 			}
+
+			if (adev->firmware.load_type == AMDGPU_FW_LOAD_PSP) {
+				ucode = &adev->firmware.ucode[AMDGPU_UCODE_ID_SMC];
+				ucode->ucode_id = AMDGPU_UCODE_ID_SMC;
+				ucode->fw = adev->pm.fw;
+				header = (const struct common_firmware_header *)ucode->fw->data;
+				adev->firmware.fw_size +=
+					ALIGN(le32_to_cpu(header->ucode_size_bytes), PAGE_SIZE);
+			}
 		}
 
 		hdr = (const struct smc_firmware_header_v1_0 *)	adev->pm.fw->data;
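
The fw_size bookkeeping above rounds the SMC ucode image up to a whole page before adding it to the running total that PSP will map. A standalone restatement of that arithmetic (the kernel's ALIGN() macro is re-declared here, and a 4 KiB PAGE_SIZE is assumed):

```c
#include <stdint.h>

#define PAGE_SIZE 4096u
#define ALIGN(x, a) (((x) + (a) - 1) & ~((a) - 1))

/* Mirrors: adev->firmware.fw_size += ALIGN(ucode_size_bytes, PAGE_SIZE);
 * each firmware image occupies a page-aligned slot in the ucode buffer. */
static uint32_t add_ucode(uint32_t fw_size, uint32_t ucode_size_bytes)
{
	return fw_size + ALIGN(ucode_size_bytes, PAGE_SIZE);
}
```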
-- 
2.5.5


* [PATCH 055/100] drm/amd/powerplay: add smu9 header files for Vega10
       [not found] ` <1490041835-11255-1-git-send-email-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (38 preceding siblings ...)
  2017-03-20 20:29   ` [PATCH 054/100] drm/amdgpu: add SMC firmware into global ucode list for psp loading Alex Deucher
@ 2017-03-20 20:29   ` Alex Deucher
  2017-03-20 20:29   ` [PATCH 056/100] drm/amd/powerplay: add new Vega10's ppsmc header file Alex Deucher
                     ` (45 subsequent siblings)
  85 siblings, 0 replies; 101+ messages in thread
From: Alex Deucher @ 2017-03-20 20:29 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Eric Huang, Alex Deucher

From: Eric Huang <JinHuiEric.Huang@amd.com>

Signed-off-by: Eric Huang <JinHuiEric.Huang@amd.com>
Reviewed-by: Ken Wang <Qingqing.Wang@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/powerplay/inc/smu9.h           | 147 ++++++++
 drivers/gpu/drm/amd/powerplay/inc/smu9_driver_if.h | 418 +++++++++++++++++++++
 2 files changed, 565 insertions(+)
 create mode 100644 drivers/gpu/drm/amd/powerplay/inc/smu9.h
 create mode 100644 drivers/gpu/drm/amd/powerplay/inc/smu9_driver_if.h

diff --git a/drivers/gpu/drm/amd/powerplay/inc/smu9.h b/drivers/gpu/drm/amd/powerplay/inc/smu9.h
new file mode 100644
index 0000000..9ef2490
--- /dev/null
+++ b/drivers/gpu/drm/amd/powerplay/inc/smu9.h
@@ -0,0 +1,147 @@
+/*
+ * Copyright 2016 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+
+#ifndef SMU9_H
+#define SMU9_H
+
+#pragma pack(push, 1)
+
+#define ENABLE_DEBUG_FEATURES
+
+/* Feature Control Defines */
+#define FEATURE_DPM_PREFETCHER_BIT      0
+#define FEATURE_DPM_GFXCLK_BIT          1
+#define FEATURE_DPM_UCLK_BIT            2
+#define FEATURE_DPM_SOCCLK_BIT          3
+#define FEATURE_DPM_UVD_BIT             4
+#define FEATURE_DPM_VCE_BIT             5
+#define FEATURE_ULV_BIT                 6
+#define FEATURE_DPM_MP0CLK_BIT          7
+#define FEATURE_DPM_LINK_BIT            8
+#define FEATURE_DPM_DCEFCLK_BIT         9
+#define FEATURE_AVFS_BIT                10
+#define FEATURE_DS_GFXCLK_BIT           11
+#define FEATURE_DS_SOCCLK_BIT           12
+#define FEATURE_DS_LCLK_BIT             13
+#define FEATURE_PPT_BIT                 14
+#define FEATURE_TDC_BIT                 15
+#define FEATURE_THERMAL_BIT             16
+#define FEATURE_GFX_PER_CU_CG_BIT       17
+#define FEATURE_RM_BIT                  18
+#define FEATURE_DS_DCEFCLK_BIT          19
+#define FEATURE_ACDC_BIT                20
+#define FEATURE_VR0HOT_BIT              21
+#define FEATURE_VR1HOT_BIT              22
+#define FEATURE_FW_CTF_BIT              23
+#define FEATURE_LED_DISPLAY_BIT         24
+#define FEATURE_FAN_CONTROL_BIT         25
+#define FEATURE_VOLTAGE_CONTROLLER_BIT  26
+#define FEATURE_SPARE_27_BIT            27
+#define FEATURE_SPARE_28_BIT            28
+#define FEATURE_SPARE_29_BIT            29
+#define FEATURE_SPARE_30_BIT            30
+#define FEATURE_SPARE_31_BIT            31
+
+#define NUM_FEATURES                    32
+
+#define FFEATURE_DPM_PREFETCHER_MASK     (1 << FEATURE_DPM_PREFETCHER_BIT     )
+#define FFEATURE_DPM_GFXCLK_MASK         (1 << FEATURE_DPM_GFXCLK_BIT         )
+#define FFEATURE_DPM_UCLK_MASK           (1 << FEATURE_DPM_UCLK_BIT           )
+#define FFEATURE_DPM_SOCCLK_MASK         (1 << FEATURE_DPM_SOCCLK_BIT         )
+#define FFEATURE_DPM_UVD_MASK            (1 << FEATURE_DPM_UVD_BIT            )
+#define FFEATURE_DPM_VCE_MASK            (1 << FEATURE_DPM_VCE_BIT            )
+#define FFEATURE_ULV_MASK                (1 << FEATURE_ULV_BIT                )
+#define FFEATURE_DPM_MP0CLK_MASK         (1 << FEATURE_DPM_MP0CLK_BIT         )
+#define FFEATURE_DPM_LINK_MASK           (1 << FEATURE_DPM_LINK_BIT           )
+#define FFEATURE_DPM_DCEFCLK_MASK        (1 << FEATURE_DPM_DCEFCLK_BIT        )
+#define FFEATURE_AVFS_MASK               (1 << FEATURE_AVFS_BIT               )
+#define FFEATURE_DS_GFXCLK_MASK          (1 << FEATURE_DS_GFXCLK_BIT          )
+#define FFEATURE_DS_SOCCLK_MASK          (1 << FEATURE_DS_SOCCLK_BIT          )
+#define FFEATURE_DS_LCLK_MASK            (1 << FEATURE_DS_LCLK_BIT            )
+#define FFEATURE_PPT_MASK                (1 << FEATURE_PPT_BIT                )
+#define FFEATURE_TDC_MASK                (1 << FEATURE_TDC_BIT                )
+#define FFEATURE_THERMAL_MASK            (1 << FEATURE_THERMAL_BIT            )
+#define FFEATURE_GFX_PER_CU_CG_MASK      (1 << FEATURE_GFX_PER_CU_CG_BIT      )
+#define FFEATURE_RM_MASK                 (1 << FEATURE_RM_BIT                 )
+#define FFEATURE_DS_DCEFCLK_MASK         (1 << FEATURE_DS_DCEFCLK_BIT         )
+#define FFEATURE_ACDC_MASK               (1 << FEATURE_ACDC_BIT               )
+#define FFEATURE_VR0HOT_MASK             (1 << FEATURE_VR0HOT_BIT             )
+#define FFEATURE_VR1HOT_MASK             (1 << FEATURE_VR1HOT_BIT             )
+#define FFEATURE_FW_CTF_MASK             (1 << FEATURE_FW_CTF_BIT             )
+#define FFEATURE_LED_DISPLAY_MASK        (1 << FEATURE_LED_DISPLAY_BIT        )
+#define FFEATURE_FAN_CONTROL_MASK        (1 << FEATURE_FAN_CONTROL_BIT        )
+#define FFEATURE_VOLTAGE_CONTROLLER_MASK (1 << FEATURE_VOLTAGE_CONTROLLER_BIT )
+#define FFEATURE_SPARE_27_MASK           (1 << FEATURE_SPARE_27_BIT           )
+#define FFEATURE_SPARE_28_MASK           (1 << FEATURE_SPARE_28_BIT           )
+#define FFEATURE_SPARE_29_MASK           (1 << FEATURE_SPARE_29_BIT           )
+#define FFEATURE_SPARE_30_MASK           (1 << FEATURE_SPARE_30_BIT           )
+#define FFEATURE_SPARE_31_MASK           (1 << FEATURE_SPARE_31_BIT           )
+/* Workload types */
+#define WORKLOAD_VR_BIT                 0
+#define WORKLOAD_FRTC_BIT               1
+#define WORKLOAD_VIDEO_BIT              2
+#define WORKLOAD_COMPUTE_BIT            3
+#define NUM_WORKLOADS                   4
+
+/* ULV Client Masks */
+#define ULV_CLIENT_RLC_MASK         0x00000001
+#define ULV_CLIENT_UVD_MASK         0x00000002
+#define ULV_CLIENT_VCE_MASK         0x00000004
+#define ULV_CLIENT_SDMA0_MASK       0x00000008
+#define ULV_CLIENT_SDMA1_MASK       0x00000010
+#define ULV_CLIENT_JPEG_MASK        0x00000020
+#define ULV_CLIENT_GFXCLK_DPM_MASK  0x00000040
+#define ULV_CLIENT_UVD_DPM_MASK     0x00000080
+#define ULV_CLIENT_VCE_DPM_MASK     0x00000100
+#define ULV_CLIENT_MP0CLK_DPM_MASK  0x00000200
+#define ULV_CLIENT_UCLK_DPM_MASK    0x00000400
+#define ULV_CLIENT_SOCCLK_DPM_MASK  0x00000800
+#define ULV_CLIENT_DCEFCLK_DPM_MASK 0x00001000
+
+typedef struct {
+	/* MP1_EXT_SCRATCH0 */
+	uint32_t CurrLevel_GFXCLK  : 4;
+	uint32_t CurrLevel_UVD     : 4;
+	uint32_t CurrLevel_VCE     : 4;
+	uint32_t CurrLevel_LCLK    : 4;
+	uint32_t CurrLevel_MP0CLK  : 4;
+	uint32_t CurrLevel_UCLK    : 4;
+	uint32_t CurrLevel_SOCCLK  : 4;
+	uint32_t CurrLevel_DCEFCLK : 4;
+	/* MP1_EXT_SCRATCH1 */
+	uint32_t TargLevel_GFXCLK  : 4;
+	uint32_t TargLevel_UVD     : 4;
+	uint32_t TargLevel_VCE     : 4;
+	uint32_t TargLevel_LCLK    : 4;
+	uint32_t TargLevel_MP0CLK  : 4;
+	uint32_t TargLevel_UCLK    : 4;
+	uint32_t TargLevel_SOCCLK  : 4;
+	uint32_t TargLevel_DCEFCLK : 4;
+	/* MP1_EXT_SCRATCH2-7 */
+	uint32_t Reserved[6];
+} FwStatus_t;
+
+#pragma pack(pop)
+
+#endif
+
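
The FEATURE_*_BIT / FFEATURE_*_MASK pairs above (the double-F spelling is as shipped) pack the 32 SMU feature switches into a single dword, e.g. as reported via PPSMC_MSG_GetEnabledSmuFeatures. A minimal sketch of testing such a word; the defines are restated so the sketch stands alone, and the sample values in the assertions are made up:

```c
#include <stdint.h>

/* A few of the smu9.h defines, restated for a standalone sketch. */
#define FEATURE_DPM_GFXCLK_BIT  1
#define FEATURE_DPM_UCLK_BIT    2
#define FEATURE_AVFS_BIT        10

#define FFEATURE_DPM_GFXCLK_MASK (1 << FEATURE_DPM_GFXCLK_BIT)
#define FFEATURE_AVFS_MASK       (1 << FEATURE_AVFS_BIT)

/* Returns nonzero if the given feature mask is set in the
 * enabled-features word reported by the SMU. */
static int smu_feature_enabled(uint32_t enabled, uint32_t mask)
{
	return (enabled & mask) != 0;
}
```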
diff --git a/drivers/gpu/drm/amd/powerplay/inc/smu9_driver_if.h b/drivers/gpu/drm/amd/powerplay/inc/smu9_driver_if.h
new file mode 100644
index 0000000..aee0214
--- /dev/null
+++ b/drivers/gpu/drm/amd/powerplay/inc/smu9_driver_if.h
@@ -0,0 +1,418 @@
+/*
+ * Copyright 2016 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+
+#ifndef SMU9_DRIVER_IF_H
+#define SMU9_DRIVER_IF_H
+
+#include "smu9.h"
+
+/**** IMPORTANT ***
+ * SMU TEAM: Always increment the interface version if
+ * any structure is changed in this file
+ */
+#define SMU9_DRIVER_IF_VERSION 0xa
+
+#define NUM_GFXCLK_DPM_LEVELS  8
+#define NUM_UVD_DPM_LEVELS     8
+#define NUM_VCE_DPM_LEVELS     8
+#define NUM_MP0CLK_DPM_LEVELS  8
+#define NUM_UCLK_DPM_LEVELS    4
+#define NUM_SOCCLK_DPM_LEVELS  8
+#define NUM_DCEFCLK_DPM_LEVELS 8
+#define NUM_LINK_LEVELS        2
+
+#define MAX_GFXCLK_DPM_LEVEL  (NUM_GFXCLK_DPM_LEVELS  - 1)
+#define MAX_UVD_DPM_LEVEL     (NUM_UVD_DPM_LEVELS     - 1)
+#define MAX_VCE_DPM_LEVEL     (NUM_VCE_DPM_LEVELS     - 1)
+#define MAX_MP0CLK_DPM_LEVEL  (NUM_MP0CLK_DPM_LEVELS  - 1)
+#define MAX_UCLK_DPM_LEVEL    (NUM_UCLK_DPM_LEVELS    - 1)
+#define MAX_SOCCLK_DPM_LEVEL  (NUM_SOCCLK_DPM_LEVELS  - 1)
+#define MAX_DCEFCLK_DPM_LEVEL (NUM_DCEFCLK_DPM_LEVELS - 1)
+#define MAX_LINK_DPM_LEVEL    (NUM_LINK_LEVELS        - 1)
+
+#define MIN_GFXCLK_DPM_LEVEL  0
+#define MIN_UVD_DPM_LEVEL     0
+#define MIN_VCE_DPM_LEVEL     0
+#define MIN_MP0CLK_DPM_LEVEL  0
+#define MIN_UCLK_DPM_LEVEL    0
+#define MIN_SOCCLK_DPM_LEVEL  0
+#define MIN_DCEFCLK_DPM_LEVEL 0
+#define MIN_LINK_DPM_LEVEL    0
+
+#define NUM_EVV_VOLTAGE_LEVELS 8
+#define MAX_EVV_VOLTAGE_LEVEL (NUM_EVV_VOLTAGE_LEVELS - 1)
+#define MIN_EVV_VOLTAGE_LEVEL 0
+
+#define NUM_PSP_LEVEL_MAP 4
+
+/* Gemini Modes */
+#define PPSMC_GeminiModeNone   0  /* Single GPU board */
+#define PPSMC_GeminiModeMaster 1  /* Master GPU on a Gemini board */
+#define PPSMC_GeminiModeSlave  2  /* Slave GPU on a Gemini board */
+
+/* Voltage Modes for DPMs */
+#define VOLTAGE_MODE_AVFS_INTERPOLATE 0
+#define VOLTAGE_MODE_AVFS_WORST_CASE  1
+#define VOLTAGE_MODE_STATIC           2
+
+typedef struct {
+  uint32_t FbMult; /* Feedback Multiplier, bit 8:0 int, bit 15:12 post_div, bit 31:16 frac */
+  uint32_t SsFbMult; /* Spread FB Mult: bit 8:0 int, bit 31:16 frac */
+  uint16_t SsSlewFrac;
+  uint8_t  SsOn;
+  uint8_t  Did;      /* DID */
+} PllSetting_t;
+
+typedef struct {
+  int32_t a0;
+  int32_t a1;
+  int32_t a2;
+} GbVdroopTable_t;
+
+typedef struct {
+  int32_t m1;
+  int32_t m2;
+  int32_t b;
+
+  uint8_t m1_shift;
+  uint8_t m2_shift;
+  uint8_t b_shift;
+  uint8_t padding;
+} QuadraticInt_t;
+
+#define NUM_DSPCLK_LEVELS 8
+
+typedef enum {
+  DSPCLK_DCEFCLK = 0,
+  DSPCLK_DISPCLK,
+  DSPCLK_PIXCLK,
+  DSPCLK_PHYCLK,
+  DSPCLK_COUNT,
+} DSPCLK_e;
+
+typedef struct {
+  uint16_t Freq; /* in MHz */
+  uint16_t Vid;  /* min voltage in SVI2 VID */
+} DisplayClockTable_t;
+
+typedef struct {
+  /* PowerTune */
+  uint16_t SocketPowerLimit; /* Watts */
+  uint16_t TdcLimit;         /* Amps */
+  uint16_t EdcLimit;         /* Amps */
+  uint16_t TedgeLimit;       /* Celsius */
+  uint16_t ThotspotLimit;    /* Celsius */
+  uint16_t ThbmLimit;        /* Celsius */
+  uint16_t Tvr_socLimit;     /* Celsius */
+  uint16_t Tvr_memLimit;     /* Celsius */
+  uint16_t Tliquid1Limit;    /* Celsius */
+  uint16_t Tliquid2Limit;    /* Celsius */
+  uint16_t TplxLimit;        /* Celsius */
+  uint16_t LoadLineResistance; /* in mOhms */
+  uint32_t FitLimit;         /* Failures in time (failures per million parts over the defined lifetime) */
+
+  /* External Component Communication Settings */
+  uint8_t  Liquid1_I2C_address;
+  uint8_t  Liquid2_I2C_address;
+  uint8_t  Vr_I2C_address;
+  uint8_t  Plx_I2C_address;
+
+  uint8_t  GeminiMode;
+  uint8_t  spare17[3];
+  uint32_t GeminiApertureHigh;
+  uint32_t GeminiApertureLow;
+
+  uint8_t  Liquid_I2C_LineSCL;
+  uint8_t  Liquid_I2C_LineSDA;
+  uint8_t  Vr_I2C_LineSCL;
+  uint8_t  Vr_I2C_LineSDA;
+  uint8_t  Plx_I2C_LineSCL;
+  uint8_t  Plx_I2C_LineSDA;
+  uint8_t  paddingx[2];
+
+  /* ULV Settings */
+  uint8_t  UlvOffsetVid;     /* SVI2 VID */
+  uint8_t  UlvSmnclkDid;     /* DID for ULV mode. 0 means CLK will not be modified in ULV. */
+  uint8_t  UlvMp1clkDid;     /* DID for ULV mode. 0 means CLK will not be modified in ULV. */
+  uint8_t  UlvGfxclkBypass;  /* 1 to turn off/bypass Gfxclk during ULV, 0 to leave Gfxclk on during ULV */
+
+  /* VDDCR_SOC Voltages */
+  uint8_t      SocVid[NUM_EVV_VOLTAGE_LEVELS];
+
+  /* This is the minimum voltage needed to run the SOC. */
+  uint8_t      MinVoltageVid; /* Minimum Voltage ("Vmin") of ASIC */
+  uint8_t      MaxVoltageVid; /* Maximum Voltage allowable */
+  uint8_t      MaxVidStep; /* Max VID step that SMU will request. Multiple steps are taken if voltage change exceeds this value. */
+  uint8_t      padding8;
+
+  uint8_t      UlvPhaseSheddingPsi0; /* set this to 1 to set PSI0/1 to 1 in ULV mode */
+  uint8_t      UlvPhaseSheddingPsi1; /* set this to 1 to set PSI0/1 to 1 in ULV mode */
+  uint8_t      padding8_2[2];
+
+  /* SOC Frequencies */
+  PllSetting_t GfxclkLevel        [NUM_GFXCLK_DPM_LEVELS];
+
+  uint8_t      SocclkDid          [NUM_SOCCLK_DPM_LEVELS];          /* DID */
+  uint8_t      SocDpmVoltageIndex [NUM_SOCCLK_DPM_LEVELS];
+
+  uint8_t      VclkDid            [NUM_UVD_DPM_LEVELS];            /* DID */
+  uint8_t      DclkDid            [NUM_UVD_DPM_LEVELS];            /* DID */
+  uint8_t      UvdDpmVoltageIndex [NUM_UVD_DPM_LEVELS];
+
+  uint8_t      EclkDid            [NUM_VCE_DPM_LEVELS];            /* DID */
+  uint8_t      VceDpmVoltageIndex [NUM_VCE_DPM_LEVELS];
+
+  uint8_t      Mp0clkDid          [NUM_MP0CLK_DPM_LEVELS];          /* DID */
+  uint8_t      Mp0DpmVoltageIndex [NUM_MP0CLK_DPM_LEVELS];
+
+  DisplayClockTable_t DisplayClockTable[DSPCLK_COUNT][NUM_DSPCLK_LEVELS];
+  QuadraticInt_t      DisplayClock2Gfxclk[DSPCLK_COUNT];
+
+  uint8_t      GfxDpmVoltageMode;
+  uint8_t      SocDpmVoltageMode;
+  uint8_t      UclkDpmVoltageMode;
+  uint8_t      UvdDpmVoltageMode;
+
+  uint8_t      VceDpmVoltageMode;
+  uint8_t      Mp0DpmVoltageMode;
+  uint8_t      DisplayDpmVoltageMode;
+  uint8_t      padding8_3;
+
+  uint16_t     GfxclkSlewRate;
+  uint16_t     padding;
+
+  uint32_t     LowGfxclkInterruptThreshold;  /* in units of 10KHz */
+
+  /* Alpha parameters for clock averages. ("255"=1) */
+  uint8_t      GfxclkAverageAlpha;
+  uint8_t      SocclkAverageAlpha;
+  uint8_t      UclkAverageAlpha;
+  uint8_t      GfxActivityAverageAlpha;
+
+  /* UCLK States */
+  uint8_t      MemVid[NUM_UCLK_DPM_LEVELS];    /* VID */
+  PllSetting_t UclkLevel[NUM_UCLK_DPM_LEVELS];   /* Full PLL settings */
+  uint8_t      MemSocVoltageIndex[NUM_UCLK_DPM_LEVELS];
+  uint8_t      LowestUclkReservedForUlv; /* Set this to 1 if UCLK DPM0 is reserved for ULV-mode only */
+  uint8_t      paddingUclk[3];
+  uint16_t     NumMemoryChannels;  /* Used for memory bandwidth calculations */
+  uint16_t     MemoryChannelWidth; /* Used for memory bandwidth calculations */
+
+  /* CKS Settings */
+  uint8_t      CksEnable[NUM_GFXCLK_DPM_LEVELS];
+  uint8_t      CksVidOffset[NUM_GFXCLK_DPM_LEVELS];
+
+  /* MP0 Mapping Table */
+  uint8_t      PspLevelMap[NUM_PSP_LEVEL_MAP];
+
+  /* Link DPM Settings */
+  uint8_t     PcieGenSpeed[NUM_LINK_LEVELS];           /* 0:PciE-gen1 1:PciE-gen2 2:PciE-gen3 */
+  uint8_t     PcieLaneCount[NUM_LINK_LEVELS];          /* 1=x1, 2=x2, 3=x4, 4=x8, 5=x12, 6=x16 */
+  uint8_t     LclkDid[NUM_LINK_LEVELS];                /* Leave at 0 to use hardcoded values in FW */
+  uint8_t     paddingLinkDpm[2];
+
+  /* Fan Control */
+  uint16_t     FanStopTemp;          /* Celsius */
+  uint16_t     FanStartTemp;         /* Celsius */
+
+  uint16_t     FanGainEdge;
+  uint16_t     FanGainHotspot;
+  uint16_t     FanGainLiquid;
+  uint16_t     FanGainVrVddc;
+  uint16_t     FanGainVrMvdd;
+  uint16_t     FanGainPlx;
+  uint16_t     FanGainHbm;
+  uint16_t     FanPwmMin;
+  uint16_t     FanAcousticLimitRpm;
+  uint16_t     FanThrottlingRpm;
+  uint16_t     FanMaximumRpm;
+  uint16_t     FanTargetTemperature;
+  uint16_t     FanTargetGfxclk;
+  uint8_t      FanZeroRpmEnable;
+  uint8_t      FanSpare;
+
+  /* The following are AFC override parameters. Leave at 0 to use FW defaults. */
+  int16_t      FuzzyFan_ErrorSetDelta;
+  int16_t      FuzzyFan_ErrorRateSetDelta;
+  int16_t      FuzzyFan_PwmSetDelta;
+  uint16_t     FuzzyFan_Reserved;
+
+  /* GPIO Settings */
+  uint8_t      AcDcGpio;        /* GPIO pin configured for AC/DC switching */
+  uint8_t      AcDcPolarity;    /* GPIO polarity for AC/DC switching */
+  uint8_t      VR0HotGpio;      /* GPIO pin configured for VR0 HOT event */
+  uint8_t      VR0HotPolarity;  /* GPIO polarity for VR0 HOT event */
+  uint8_t      VR1HotGpio;      /* GPIO pin configured for VR1 HOT event */
+  uint8_t      VR1HotPolarity;  /* GPIO polarity for VR1 HOT event */
+  uint8_t      Padding1;       /* replace GPIO pin configured for CTF */
+  uint8_t      Padding2;       /* replace GPIO polarity for CTF */
+
+  /* LED Display Settings */
+  uint8_t      LedPin0;         /* GPIO number for LedPin[0] */
+  uint8_t      LedPin1;         /* GPIO number for LedPin[1] */
+  uint8_t      LedPin2;         /* GPIO number for LedPin[2] */
+  uint8_t      padding8_4;
+
+  /* AVFS */
+  uint8_t      OverrideBtcGbCksOn;
+  uint8_t      OverrideAvfsGbCksOn;
+  uint8_t      PaddingAvfs8[2];
+
+  GbVdroopTable_t BtcGbVdroopTableCksOn;
+  GbVdroopTable_t BtcGbVdroopTableCksOff;
+
+  QuadraticInt_t  AvfsGbCksOn;  /* Replacement equation */
+  QuadraticInt_t  AvfsGbCksOff; /* Replacement equation */
+
+  uint8_t      StaticVoltageOffsetVid[NUM_GFXCLK_DPM_LEVELS]; /* These values are added to the final voltage calculation */
+
+  /* Ageing Guardband Parameters */
+  uint32_t     AConstant[3];
+  uint16_t     DC_tol_sigma;
+  uint16_t     Platform_mean;
+  uint16_t     Platform_sigma;
+  uint16_t     PSM_Age_CompFactor;
+
+  uint32_t     Reserved[20];
+
+  /* Padding - ignore */
+  uint32_t     MmHubPadding[7]; /* SMU internal use */
+
+} PPTable_t;
+
+typedef struct {
+  uint16_t MinClock; /* This is either DCEFCLK or SOCCLK (in MHz) */
+  uint16_t MaxClock; /* This is either DCEFCLK or SOCCLK (in MHz) */
+  uint16_t MinUclk;
+  uint16_t MaxUclk;
+
+  uint8_t  WmSetting;
+  uint8_t  Padding[3];
+} WatermarkRowGeneric_t;
+
+#define NUM_WM_RANGES 4
+
+typedef enum {
+  WM_SOCCLK = 0,
+  WM_DCEFCLK,
+  WM_COUNT,
+} WM_CLOCK_e;
+
+typedef struct {
+  /* Watermarks */
+  WatermarkRowGeneric_t WatermarkRow[WM_COUNT][NUM_WM_RANGES];
+
+  uint32_t     MmHubPadding[7]; /* SMU internal use */
+} Watermarks_t;
+
+#ifdef PPTABLE_V10_SMU_VERSION
+typedef struct {
+  float        AvfsGbCksOn[NUM_GFXCLK_DPM_LEVELS];
+  float        AcBtcGbCksOn[NUM_GFXCLK_DPM_LEVELS];
+  float        AvfsGbCksOff[NUM_GFXCLK_DPM_LEVELS];
+  float        AcBtcGbCksOff[NUM_GFXCLK_DPM_LEVELS];
+  float        DcBtcGb;
+
+  uint32_t     MmHubPadding[7]; /* SMU internal use */
+} AvfsTable_t;
+#else
+typedef struct {
+  uint32_t     AvfsGbCksOn[NUM_GFXCLK_DPM_LEVELS];
+  uint32_t     AcBtcGbCksOn[NUM_GFXCLK_DPM_LEVELS];
+  uint32_t     AvfsGbCksOff[NUM_GFXCLK_DPM_LEVELS];
+  uint32_t     AcBtcGbCksOff[NUM_GFXCLK_DPM_LEVELS];
+  uint32_t     DcBtcGb;
+
+  uint32_t     MmHubPadding[7]; /* SMU internal use */
+} AvfsTable_t;
+#endif
+
+typedef struct {
+  uint16_t avgPsmCount[30];
+  uint16_t minPsmCount[30];
+  uint16_t avgPsmVoltage[30]; /* in mV with 2 fractional bits */
+  uint16_t minPsmVoltage[30]; /* in mV with 2 fractional bits */
+
+  uint32_t MmHubPadding[7]; /* SMU internal use */
+} AvfsDebugTable_t;
+
+typedef struct {
+  uint8_t  AvfsEn;
+  uint8_t  AvfsVersion;
+  uint8_t  Padding[2];
+
+  uint32_t VFT0_m1; /* Q16.16 */
+  uint32_t VFT0_m2; /* Q16.16 */
+  uint32_t VFT0_b;  /* Q16.16 */
+
+  uint32_t VFT1_m1; /* Q16.16 */
+  uint32_t VFT1_m2; /* Q16.16 */
+  uint32_t VFT1_b;  /* Q16.16 */
+
+  uint32_t VFT2_m1; /* Q16.16 */
+  uint32_t VFT2_m2; /* Q16.16 */
+  uint32_t VFT2_b;  /* Q16.16 */
+
+  uint32_t AvfsGb0_m1; /* Q16.16 */
+  uint32_t AvfsGb0_m2; /* Q16.16 */
+  uint32_t AvfsGb0_b;  /* Q16.16 */
+
+  uint32_t AcBtcGb_m1; /* Q16.16 */
+  uint32_t AcBtcGb_m2; /* Q16.16 */
+  uint32_t AcBtcGb_b;  /* Q16.16 */
+
+  uint32_t AvfsTempCold;
+  uint32_t AvfsTempMid;
+  uint32_t AvfsTempHot;
+
+  uint32_t InversionVoltage; /*  in mV with 2 fractional bits */
+
+  uint32_t P2V_m1; /* Q16.16 */
+  uint32_t P2V_m2; /* Q16.16 */
+  uint32_t P2V_b;  /* Q16.16 */
+
+  uint32_t P2VCharzFreq; /* in 10KHz units */
+
+  uint32_t EnabledAvfsModules;
+
+  uint32_t MmHubPadding[7]; /* SMU internal use */
+} AvfsFuseOverride_t;
+
+/* These defines are used with the following messages:
+ * SMC_MSG_TransferTableDram2Smu
+ * SMC_MSG_TransferTableSmu2Dram
+ */
+#define TABLE_PPTABLE            0
+#define TABLE_WATERMARKS         1
+#define TABLE_AVFS               2
+#define TABLE_AVFS_PSM_DEBUG     3
+#define TABLE_AVFS_FUSE_OVERRIDE 4
+#define TABLE_PMSTATUSLOG        5
+#define TABLE_COUNT              6
+
+/* These defines are used with the SMC_MSG_SetUclkFastSwitch message. */
+#define UCLK_SWITCH_SLOW 0
+#define UCLK_SWITCH_FAST 1
+
+
+#endif
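
Many AvfsFuseOverride_t fields above are documented as Q16.16 fixed point: 16 integer bits and 16 fractional bits. A sketch of the conversions a table-filling tool might use; the helper names are ours, and non-negative values are assumed:

```c
#include <stdint.h>

/* Encode a non-negative value as Q16.16: scale by 2^16 and round. */
static uint32_t to_q16_16(double v)
{
	return (uint32_t)(v * 65536.0 + 0.5);
}

/* Decode Q16.16 back to a double: divide by 2^16. */
static double from_q16_16(uint32_t q)
{
	return (double)q / 65536.0;
}
```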
-- 
2.5.5


* [PATCH 056/100] drm/amd/powerplay: add new Vega10's ppsmc header file
       [not found] ` <1490041835-11255-1-git-send-email-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (39 preceding siblings ...)
  2017-03-20 20:29   ` [PATCH 055/100] drm/amd/powerplay: add smu9 header files for Vega10 Alex Deucher
@ 2017-03-20 20:29   ` Alex Deucher
  2017-03-20 20:29   ` [PATCH 057/100] drm/amdgpu: add new atomfirmware based helpers for powerplay Alex Deucher
                     ` (44 subsequent siblings)
  85 siblings, 0 replies; 101+ messages in thread
From: Alex Deucher @ 2017-03-20 20:29 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Eric Huang, Alex Deucher

From: Eric Huang <JinHuiEric.Huang@amd.com>

Signed-off-by: Eric Huang <JinHuiEric.Huang@amd.com>
Reviewed-by: Ken Wang <Qingqing.Wang@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/powerplay/inc/vega10_ppsmc.h | 131 +++++++++++++++++++++++
 1 file changed, 131 insertions(+)
 create mode 100644 drivers/gpu/drm/amd/powerplay/inc/vega10_ppsmc.h

diff --git a/drivers/gpu/drm/amd/powerplay/inc/vega10_ppsmc.h b/drivers/gpu/drm/amd/powerplay/inc/vega10_ppsmc.h
new file mode 100644
index 0000000..90beef3
--- /dev/null
+++ b/drivers/gpu/drm/amd/powerplay/inc/vega10_ppsmc.h
@@ -0,0 +1,131 @@
+/*
+ * Copyright 2016 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+
+#ifndef PP_SMC_H
+#define PP_SMC_H
+
+#pragma pack(push, 1)
+
+#define SMU_UCODE_VERSION                  0x001c0800
+
+/* SMU Response Codes: */
+#define PPSMC_Result_OK                    0x1
+#define PPSMC_Result_Failed                0xFF
+#define PPSMC_Result_UnknownCmd            0xFE
+#define PPSMC_Result_CmdRejectedPrereq     0xFD
+#define PPSMC_Result_CmdRejectedBusy       0xFC
+
+typedef uint16_t PPSMC_Result;
+
+/* Message Definitions */
+#define PPSMC_MSG_TestMessage                    0x1
+#define PPSMC_MSG_GetSmuVersion                  0x2
+#define PPSMC_MSG_GetDriverIfVersion             0x3
+#define PPSMC_MSG_EnableSmuFeatures              0x4
+#define PPSMC_MSG_DisableSmuFeatures             0x5
+#define PPSMC_MSG_GetEnabledSmuFeatures          0x6
+#define PPSMC_MSG_SetWorkloadMask                0x7
+#define PPSMC_MSG_SetPptLimit                    0x8
+#define PPSMC_MSG_SetDriverDramAddrHigh          0x9
+#define PPSMC_MSG_SetDriverDramAddrLow           0xA
+#define PPSMC_MSG_SetToolsDramAddrHigh           0xB
+#define PPSMC_MSG_SetToolsDramAddrLow            0xC
+#define PPSMC_MSG_TransferTableSmu2Dram          0xD
+#define PPSMC_MSG_TransferTableDram2Smu          0xE
+#define PPSMC_MSG_UseDefaultPPTable              0xF
+#define PPSMC_MSG_UseBackupPPTable               0x10
+#define PPSMC_MSG_RunBtc                         0x11
+#define PPSMC_MSG_RequestI2CBus                  0x12
+#define PPSMC_MSG_ReleaseI2CBus                  0x13
+#define PPSMC_MSG_ConfigureTelemetry             0x14
+#define PPSMC_MSG_SetUlvIpMask                   0x15
+#define PPSMC_MSG_SetSocVidOffset                0x16
+#define PPSMC_MSG_SetMemVidOffset                0x17
+#define PPSMC_MSG_GetSocVidOffset                0x18
+#define PPSMC_MSG_GetMemVidOffset                0x19
+#define PPSMC_MSG_SetFloorSocVoltage             0x1A
+#define PPSMC_MSG_SoftReset                      0x1B
+#define PPSMC_MSG_StartBacoMonitor               0x1C
+#define PPSMC_MSG_CancelBacoMonitor              0x1D
+#define PPSMC_MSG_EnterBaco                      0x1E
+#define PPSMC_MSG_AllowLowGfxclkInterrupt        0x1F
+#define PPSMC_MSG_SetLowGfxclkInterruptThreshold 0x20
+#define PPSMC_MSG_SetSoftMinGfxclkByIndex        0x21
+#define PPSMC_MSG_SetSoftMaxGfxclkByIndex        0x22
+#define PPSMC_MSG_GetCurrentGfxclkIndex          0x23
+#define PPSMC_MSG_SetSoftMinUclkByIndex          0x24
+#define PPSMC_MSG_SetSoftMaxUclkByIndex          0x25
+#define PPSMC_MSG_GetCurrentUclkIndex            0x26
+#define PPSMC_MSG_SetSoftMinUvdByIndex           0x27
+#define PPSMC_MSG_SetSoftMaxUvdByIndex           0x28
+#define PPSMC_MSG_GetCurrentUvdIndex             0x29
+#define PPSMC_MSG_SetSoftMinVceByIndex           0x2A
+#define PPSMC_MSG_SetSoftMaxVceByIndex           0x2B
+#define PPSMC_MSG_SetHardMinVceByIndex           0x2C
+#define PPSMC_MSG_GetCurrentVceIndex             0x2D
+#define PPSMC_MSG_SetSoftMinSocclkByIndex        0x2E
+#define PPSMC_MSG_SetHardMinSocclkByIndex        0x2F
+#define PPSMC_MSG_SetSoftMaxSocclkByIndex        0x30
+#define PPSMC_MSG_GetCurrentSocclkIndex          0x31
+#define PPSMC_MSG_SetMinLinkDpmByIndex           0x32
+#define PPSMC_MSG_GetCurrentLinkIndex            0x33
+#define PPSMC_MSG_GetAverageGfxclkFrequency      0x34
+#define PPSMC_MSG_GetAverageSocclkFrequency      0x35
+#define PPSMC_MSG_GetAverageUclkFrequency        0x36
+#define PPSMC_MSG_GetAverageGfxActivity          0x37
+#define PPSMC_MSG_GetTemperatureEdge             0x38
+#define PPSMC_MSG_GetTemperatureHotspot          0x39
+#define PPSMC_MSG_GetTemperatureHBM              0x3A
+#define PPSMC_MSG_GetTemperatureVrSoc            0x3B
+#define PPSMC_MSG_GetTemperatureVrMem            0x3C
+#define PPSMC_MSG_GetTemperatureLiquid           0x3D
+#define PPSMC_MSG_GetTemperaturePlx              0x3E
+#define PPSMC_MSG_OverDriveSetPercentage         0x3F
+#define PPSMC_MSG_SetMinDeepSleepDcefclk         0x40
+#define PPSMC_MSG_SwitchToAC                     0x41
+#define PPSMC_MSG_SetUclkFastSwitch              0x42
+#define PPSMC_MSG_SetUclkDownHyst                0x43
+#define PPSMC_MSG_RemoveDCClamp                  0x44
+#define PPSMC_MSG_GfxDeviceDriverReset           0x45
+#define PPSMC_MSG_GetCurrentRpm                  0x46
+#define PPSMC_MSG_SetVideoFps                    0x47
+#define PPSMC_MSG_SetCustomGfxDpmParameters      0x48
+#define PPSMC_MSG_SetTjMax                       0x49
+#define PPSMC_MSG_SetFanTemperatureTarget        0x4A
+#define PPSMC_MSG_PrepareMp1ForUnload            0x4B
+#define PPSMC_MSG_RequestDisplayClockByFreq      0x4C
+#define PPSMC_MSG_GetClockFreqMHz                0x4D
+#define PPSMC_MSG_DramLogSetDramAddrHigh         0x4E
+#define PPSMC_MSG_DramLogSetDramAddrLow          0x4F
+#define PPSMC_MSG_DramLogSetDramSize             0x50
+#define PPSMC_MSG_SetFanMaxRpm                   0x51
+#define PPSMC_MSG_SetFanMinPwm                   0x52
+#define PPSMC_MSG_ConfigureGfxDidt               0x55
+#define PPSMC_MSG_NumOfDisplays                  0x56
+#define PPSMC_Message_Count                      0x57
+
+typedef int PPSMC_Msg;
+
+#pragma pack(pop)
+
+#endif
-- 
2.5.5

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 101+ messages in thread

* [PATCH 057/100] drm/amdgpu: add new atomfirmware based helpers for powerplay
       [not found] ` <1490041835-11255-1-git-send-email-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (40 preceding siblings ...)
  2017-03-20 20:29   ` [PATCH 056/100] drm/amd/powerplay: add new Vega10's ppsmc header file Alex Deucher
@ 2017-03-20 20:29   ` Alex Deucher
  2017-03-20 20:29   ` [PATCH 058/100] drm/amd/powerplay: add global PowerPlay mutex Alex Deucher
                     ` (43 subsequent siblings)
  85 siblings, 0 replies; 101+ messages in thread
From: Alex Deucher @ 2017-03-20 20:29 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Eric Huang, Alex Deucher

From: Eric Huang <JinHuiEric.Huang@amd.com>

New helpers for fetching info out of atomfirmware.

Signed-off-by: Eric Huang <JinHuiEric.Huang@amd.com>
Reviewed-by: Ken Wang <Ken.Wang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/powerplay/hwmgr/Makefile       |   2 +-
 drivers/gpu/drm/amd/powerplay/hwmgr/ppatomfwctrl.c | 396 +++++++++++++++++++++
 drivers/gpu/drm/amd/powerplay/hwmgr/ppatomfwctrl.h | 140 ++++++++
 3 files changed, 537 insertions(+), 1 deletion(-)
 create mode 100644 drivers/gpu/drm/amd/powerplay/hwmgr/ppatomfwctrl.c
 create mode 100644 drivers/gpu/drm/amd/powerplay/hwmgr/ppatomfwctrl.h

diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/Makefile b/drivers/gpu/drm/amd/powerplay/hwmgr/Makefile
index 5fff1d6..ccb51c2 100644
--- a/drivers/gpu/drm/amd/powerplay/hwmgr/Makefile
+++ b/drivers/gpu/drm/amd/powerplay/hwmgr/Makefile
@@ -5,7 +5,7 @@
 HARDWARE_MGR = hwmgr.o processpptables.o functiontables.o \
 		hardwaremanager.o pp_acpi.o cz_hwmgr.o \
 		cz_clockpowergating.o pppcielanes.o\
-		process_pptables_v1_0.o ppatomctrl.o \
+		process_pptables_v1_0.o ppatomctrl.o ppatomfwctrl.o \
 		smu7_hwmgr.o smu7_powertune.o smu7_thermal.o \
 		smu7_clockpowergating.o
 
diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/ppatomfwctrl.c b/drivers/gpu/drm/amd/powerplay/hwmgr/ppatomfwctrl.c
new file mode 100644
index 0000000..b71525f
--- /dev/null
+++ b/drivers/gpu/drm/amd/powerplay/hwmgr/ppatomfwctrl.c
@@ -0,0 +1,396 @@
+/*
+ * Copyright 2016 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+
+#include "ppatomfwctrl.h"
+#include "atomfirmware.h"
+#include "pp_debug.h"
+
+
+static const union atom_voltage_object_v4 *pp_atomfwctrl_lookup_voltage_type_v4(
+		const struct atom_voltage_objects_info_v4_1 *voltage_object_info_table,
+		uint8_t voltage_type, uint8_t voltage_mode)
+{
+	unsigned int size = le16_to_cpu(
+			voltage_object_info_table->table_header.structuresize);
+	unsigned int offset =
+			offsetof(struct atom_voltage_objects_info_v4_1, voltage_object[0]);
+	unsigned long start = (unsigned long)voltage_object_info_table;
+
+	while (offset < size) {
+		const union atom_voltage_object_v4 *voltage_object =
+				(const union atom_voltage_object_v4 *)(start + offset);
+
+		if (voltage_type == voltage_object->gpio_voltage_obj.header.voltage_type &&
+		    voltage_mode == voltage_object->gpio_voltage_obj.header.voltage_mode)
+			return voltage_object;
+
+		offset += le16_to_cpu(voltage_object->gpio_voltage_obj.header.object_size);
+
+	}
+
+	return NULL;
+}
+
+static struct atom_voltage_objects_info_v4_1 *pp_atomfwctrl_get_voltage_info_table(
+		struct pp_hwmgr *hwmgr)
+{
+	const void *table_address;
+	uint16_t idx;
+
+	idx = GetIndexIntoMasterDataTable(voltageobject_info);
+	table_address = cgs_atom_get_data_table(hwmgr->device,
+			idx, NULL, NULL, NULL);
+
+	PP_ASSERT_WITH_CODE(
+			table_address,
+			"Error retrieving BIOS Table Address!",
+			return NULL);
+
+	return (struct atom_voltage_objects_info_v4_1 *)table_address;
+}
+
+/**
+ * Returns true if the given voltage type is controlled by GPIO pins.
+ * voltage_type is one of SET_VOLTAGE_TYPE_ASIC_VDDC, SET_VOLTAGE_TYPE_ASIC_MVDDC, SET_VOLTAGE_TYPE_ASIC_MVDDQ.
+ * voltage_mode is one of ATOM_SET_VOLTAGE, ATOM_SET_VOLTAGE_PHASE.
+ */
+bool pp_atomfwctrl_is_voltage_controlled_by_gpio_v4(struct pp_hwmgr *hwmgr,
+		uint8_t voltage_type, uint8_t voltage_mode)
+{
+	struct atom_voltage_objects_info_v4_1 *voltage_info =
+			(struct atom_voltage_objects_info_v4_1 *)
+			pp_atomfwctrl_get_voltage_info_table(hwmgr);
+	bool ret;
+
+	/* If we cannot find the table do NOT try to control this voltage. */
+	PP_ASSERT_WITH_CODE(voltage_info,
+			"Could not find Voltage Table in BIOS.",
+			return false);
+
+	ret = (pp_atomfwctrl_lookup_voltage_type_v4(voltage_info,
+			voltage_type, voltage_mode)) ? true : false;
+
+	return ret;
+}
+
+int pp_atomfwctrl_get_voltage_table_v4(struct pp_hwmgr *hwmgr,
+		uint8_t voltage_type, uint8_t voltage_mode,
+		struct pp_atomfwctrl_voltage_table *voltage_table)
+{
+	struct atom_voltage_objects_info_v4_1 *voltage_info =
+			(struct atom_voltage_objects_info_v4_1 *)
+			pp_atomfwctrl_get_voltage_info_table(hwmgr);
+	const union atom_voltage_object_v4 *voltage_object;
+	unsigned int i;
+	int result = 0;
+
+	PP_ASSERT_WITH_CODE(voltage_info,
+			"Could not find Voltage Table in BIOS.",
+			return -1);
+
+	voltage_object = pp_atomfwctrl_lookup_voltage_type_v4(voltage_info,
+			voltage_type, voltage_mode);
+
+	if (!voltage_object)
+		return -1;
+
+	voltage_table->count = 0;
+	if (voltage_mode == VOLTAGE_OBJ_GPIO_LUT) {
+		PP_ASSERT_WITH_CODE(
+				(voltage_object->gpio_voltage_obj.gpio_entry_num <=
+				PP_ATOMFWCTRL_MAX_VOLTAGE_ENTRIES),
+				"Too many voltage entries!",
+				result = -1);
+
+		if (!result) {
+			for (i = 0; i < voltage_object->gpio_voltage_obj.
+							gpio_entry_num; i++) {
+				voltage_table->entries[i].value =
+						le16_to_cpu(voltage_object->gpio_voltage_obj.
+						voltage_gpio_lut[i].voltage_level_mv);
+				voltage_table->entries[i].smio_low =
+						le32_to_cpu(voltage_object->gpio_voltage_obj.
+						voltage_gpio_lut[i].voltage_gpio_reg_val);
+			}
+			voltage_table->count =
+					voltage_object->gpio_voltage_obj.gpio_entry_num;
+			voltage_table->mask_low =
+					le32_to_cpu(
+					voltage_object->gpio_voltage_obj.gpio_mask_val);
+			voltage_table->phase_delay =
+					voltage_object->gpio_voltage_obj.phase_delay_us;
+		}
+	} else if (voltage_mode == VOLTAGE_OBJ_SVID2) {
+		voltage_table->psi1_enable =
+			voltage_object->svid2_voltage_obj.loadline_psi1 & 0x1;
+		voltage_table->psi0_enable =
+			voltage_object->svid2_voltage_obj.psi0_enable & 0x1;
+		voltage_table->max_vid_step =
+			voltage_object->svid2_voltage_obj.maxvstep;
+		voltage_table->telemetry_offset =
+			voltage_object->svid2_voltage_obj.telemetry_offset;
+		voltage_table->telemetry_slope =
+			voltage_object->svid2_voltage_obj.telemetry_gain;
+	} else
+		PP_ASSERT_WITH_CODE(false,
+				"Unsupported Voltage Object Mode!",
+				result = -1);
+
+	return result;
+}
+
+
+static struct atom_gpio_pin_lut_v2_1 *pp_atomfwctrl_get_gpio_lookup_table(
+		struct pp_hwmgr *hwmgr)
+{
+	const void *table_address;
+	uint16_t idx;
+
+	idx = GetIndexIntoMasterDataTable(gpio_pin_lut);
+	table_address = cgs_atom_get_data_table(hwmgr->device,
+			idx, NULL, NULL, NULL);
+	PP_ASSERT_WITH_CODE(table_address,
+			"Error retrieving BIOS Table Address!",
+			return NULL);
+
+	return (struct atom_gpio_pin_lut_v2_1 *)table_address;
+}
+
+static bool pp_atomfwctrl_lookup_gpio_pin(
+		struct atom_gpio_pin_lut_v2_1 *gpio_lookup_table,
+		const uint32_t pin_id,
+		struct pp_atomfwctrl_gpio_pin_assignment *gpio_pin_assignment)
+{
+	unsigned int size = le16_to_cpu(
+			gpio_lookup_table->table_header.structuresize);
+	unsigned int offset =
+			offsetof(struct atom_gpio_pin_lut_v2_1, gpio_pin[0]);
+	unsigned long start = (unsigned long)gpio_lookup_table;
+
+	while (offset < size) {
+		const struct atom_gpio_pin_assignment *pin_assignment =
+				(const struct atom_gpio_pin_assignment *)(start + offset);
+
+		if (pin_id == pin_assignment->gpio_id)  {
+			gpio_pin_assignment->uc_gpio_pin_bit_shift =
+					pin_assignment->gpio_bitshift;
+			gpio_pin_assignment->us_gpio_pin_aindex =
+					le16_to_cpu(pin_assignment->data_a_reg_index);
+			return true;
+		}
+		offset += offsetof(struct atom_gpio_pin_assignment, gpio_id) + 1;
+	}
+	return false;
+}
+
+/**
+ * Returns true if the given pin_id is found in the lookup table.
+ */
+bool pp_atomfwctrl_get_pp_assign_pin(struct pp_hwmgr *hwmgr,
+		const uint32_t pin_id,
+		struct pp_atomfwctrl_gpio_pin_assignment *gpio_pin_assignment)
+{
+	bool ret = false;
+	struct atom_gpio_pin_lut_v2_1 *gpio_lookup_table =
+			pp_atomfwctrl_get_gpio_lookup_table(hwmgr);
+
+	/* If we cannot find the table do NOT try to control this voltage. */
+	PP_ASSERT_WITH_CODE(gpio_lookup_table,
+			"Could not find GPIO lookup Table in BIOS.",
+			return false);
+
+	ret = pp_atomfwctrl_lookup_gpio_pin(gpio_lookup_table,
+			pin_id, gpio_pin_assignment);
+
+	return ret;
+}
+
+/**
+ * Enter self-refresh mode.
+ * @param hwmgr
+ */
+int pp_atomfwctrl_enter_self_refresh(struct pp_hwmgr *hwmgr)
+{
+	/* 0 - no action
+	 * 1 - leave power to video memory always on
+	 */
+	return 0;
+}
+
+/** pp_atomfwctrl_get_gpu_pll_dividers_vega10().
+ *
+ * @param hwmgr       input parameter: pointer to HwMgr
+ * @param clock_type  input parameter: Clock type: 1 - GFXCLK, 2 - UCLK, 0 - All other clocks
+ * @param clock_value input parameter: Clock value (in 10 kHz units)
+ * @param dividers    output parameter: Clock dividers
+ */
+int pp_atomfwctrl_get_gpu_pll_dividers_vega10(struct pp_hwmgr *hwmgr,
+		uint32_t clock_type, uint32_t clock_value,
+		struct pp_atomfwctrl_clock_dividers_soc15 *dividers)
+{
+	struct compute_gpu_clock_input_parameter_v1_8 pll_parameters;
+	struct compute_gpu_clock_output_parameter_v1_8 *pll_output;
+	int result;
+	uint32_t idx;
+
+	pll_parameters.gpuclock_10khz = (uint32_t)clock_value;
+	pll_parameters.gpu_clock_type = clock_type;
+
+	idx = GetIndexIntoMasterCmdTable(computegpuclockparam);
+	result = cgs_atom_exec_cmd_table(hwmgr->device, idx, &pll_parameters);
+
+	if (!result) {
+		pll_output = (struct compute_gpu_clock_output_parameter_v1_8 *)
+				&pll_parameters;
+		dividers->ulClock = le32_to_cpu(pll_output->gpuclock_10khz);
+		dividers->ulDid = le32_to_cpu(pll_output->dfs_did);
+		dividers->ulPll_fb_mult = le32_to_cpu(pll_output->pll_fb_mult);
+		dividers->ulPll_ss_fbsmult = le32_to_cpu(pll_output->pll_ss_fbsmult);
+		dividers->usPll_ss_slew_frac = le16_to_cpu(pll_output->pll_ss_slew_frac);
+		dividers->ucPll_ss_enable = pll_output->pll_ss_enable;
+	}
+	return result;
+}
+
+int pp_atomfwctrl_get_avfs_information(struct pp_hwmgr *hwmgr,
+		struct pp_atomfwctrl_avfs_parameters *param)
+{
+	uint16_t idx;
+	struct atom_asic_profiling_info_v4_1 *profile;
+
+	idx = GetIndexIntoMasterDataTable(asic_profiling_info);
+	profile = (struct atom_asic_profiling_info_v4_1 *)
+			cgs_atom_get_data_table(hwmgr->device,
+					idx, NULL, NULL, NULL);
+
+	if (!profile)
+		return -1;
+
+	param->ulMaxVddc = le32_to_cpu(profile->maxvddc);
+	param->ulMinVddc = le32_to_cpu(profile->minvddc);
+	param->ulMeanNsigmaAcontant0 =
+			le32_to_cpu(profile->avfs_meannsigma_acontant0);
+	param->ulMeanNsigmaAcontant1 =
+			le32_to_cpu(profile->avfs_meannsigma_acontant1);
+	param->ulMeanNsigmaAcontant2 =
+			le32_to_cpu(profile->avfs_meannsigma_acontant2);
+	param->usMeanNsigmaDcTolSigma =
+			le16_to_cpu(profile->avfs_meannsigma_dc_tol_sigma);
+	param->usMeanNsigmaPlatformMean =
+			le16_to_cpu(profile->avfs_meannsigma_platform_mean);
+	param->usMeanNsigmaPlatformSigma =
+			le16_to_cpu(profile->avfs_meannsigma_platform_sigma);
+	param->ulGbVdroopTableCksoffA0 =
+			le32_to_cpu(profile->gb_vdroop_table_cksoff_a0);
+	param->ulGbVdroopTableCksoffA1 =
+			le32_to_cpu(profile->gb_vdroop_table_cksoff_a1);
+	param->ulGbVdroopTableCksoffA2 =
+			le32_to_cpu(profile->gb_vdroop_table_cksoff_a2);
+	param->ulGbVdroopTableCksonA0 =
+			le32_to_cpu(profile->gb_vdroop_table_ckson_a0);
+	param->ulGbVdroopTableCksonA1 =
+			le32_to_cpu(profile->gb_vdroop_table_ckson_a1);
+	param->ulGbVdroopTableCksonA2 =
+			le32_to_cpu(profile->gb_vdroop_table_ckson_a2);
+	param->ulGbFuseTableCksoffM1 =
+			le32_to_cpu(profile->avfsgb_fuse_table_cksoff_m1);
+	param->usGbFuseTableCksoffM2 =
+			le16_to_cpu(profile->avfsgb_fuse_table_cksoff_m2);
+	param->ulGbFuseTableCksoffB =
+			le32_to_cpu(profile->avfsgb_fuse_table_cksoff_b);
+	param->ulGbFuseTableCksonM1 =
+			le32_to_cpu(profile->avfsgb_fuse_table_ckson_m1);
+	param->usGbFuseTableCksonM2 =
+			le16_to_cpu(profile->avfsgb_fuse_table_ckson_m2);
+	param->ulGbFuseTableCksonB =
+			le32_to_cpu(profile->avfsgb_fuse_table_ckson_b);
+	param->usMaxVoltage025mv =
+			le16_to_cpu(profile->max_voltage_0_25mv);
+	param->ucEnableGbVdroopTableCksoff =
+			profile->enable_gb_vdroop_table_cksoff;
+	param->ucEnableGbVdroopTableCkson =
+			profile->enable_gb_vdroop_table_ckson;
+	param->ucEnableGbFuseTableCksoff =
+			profile->enable_gb_fuse_table_cksoff;
+	param->ucEnableGbFuseTableCkson =
+			profile->enable_gb_fuse_table_ckson;
+	param->usPsmAgeComfactor =
+			le16_to_cpu(profile->psm_age_comfactor);
+	param->ucEnableApplyAvfsCksoffVoltage =
+			profile->enable_apply_avfs_cksoff_voltage;
+
+	param->ulDispclk2GfxclkM1 =
+			le32_to_cpu(profile->dispclk2gfxclk_a);
+	param->usDispclk2GfxclkM2 =
+			le16_to_cpu(profile->dispclk2gfxclk_b);
+	param->ulDispclk2GfxclkB =
+			le32_to_cpu(profile->dispclk2gfxclk_c);
+	param->ulDcefclk2GfxclkM1 =
+			le32_to_cpu(profile->dcefclk2gfxclk_a);
+	param->usDcefclk2GfxclkM2 =
+			le16_to_cpu(profile->dcefclk2gfxclk_b);
+	param->ulDcefclk2GfxclkB =
+			le32_to_cpu(profile->dcefclk2gfxclk_c);
+	param->ulPixelclk2GfxclkM1 =
+			le32_to_cpu(profile->pixclk2gfxclk_a);
+	param->usPixelclk2GfxclkM2 =
+			le16_to_cpu(profile->pixclk2gfxclk_b);
+	param->ulPixelclk2GfxclkB =
+			le32_to_cpu(profile->pixclk2gfxclk_c);
+	param->ulPhyclk2GfxclkM1 =
+			le32_to_cpu(profile->phyclk2gfxclk_a);
+	param->usPhyclk2GfxclkM2 =
+			le16_to_cpu(profile->phyclk2gfxclk_b);
+	param->ulPhyclk2GfxclkB =
+			le32_to_cpu(profile->phyclk2gfxclk_c);
+
+	return 0;
+}
+
+int pp_atomfwctrl_get_gpio_information(struct pp_hwmgr *hwmgr,
+		struct pp_atomfwctrl_gpio_parameters *param)
+{
+	struct atom_smu_info_v3_1 *info;
+	uint16_t idx;
+
+	idx = GetIndexIntoMasterDataTable(smu_info);
+	info = (struct atom_smu_info_v3_1 *)
+		cgs_atom_get_data_table(hwmgr->device,
+				idx, NULL, NULL, NULL);
+
+	if (!info) {
+		pr_info("Error retrieving BIOS smu_info Table Address!\n");
+		return -1;
+	}
+
+	param->ucAcDcGpio       = info->ac_dc_gpio_bit;
+	param->ucAcDcPolarity   = info->ac_dc_polarity;
+	param->ucVR0HotGpio     = info->vr0hot_gpio_bit;
+	param->ucVR0HotPolarity = info->vr0hot_polarity;
+	param->ucVR1HotGpio     = info->vr1hot_gpio_bit;
+	param->ucVR1HotPolarity = info->vr1hot_polarity;
+	param->ucFwCtfGpio      = info->fw_ctf_gpio_bit;
+	param->ucFwCtfPolarity  = info->fw_ctf_polarity;
+
+	return 0;
+}
diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/ppatomfwctrl.h b/drivers/gpu/drm/amd/powerplay/hwmgr/ppatomfwctrl.h
new file mode 100644
index 0000000..7efe9b9
--- /dev/null
+++ b/drivers/gpu/drm/amd/powerplay/hwmgr/ppatomfwctrl.h
@@ -0,0 +1,140 @@
+/*
+ * Copyright 2016 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+
+#ifndef PP_ATOMFWCTRL_H
+#define PP_ATOMFWCTRL_H
+
+#include "hwmgr.h"
+
+#define GetIndexIntoMasterCmdTable(FieldName) \
+	(((char*)(&((struct atom_master_list_of_command_functions_v2_1*)0)->FieldName)-(char*)0)/sizeof(uint16_t))
+#define GetIndexIntoMasterDataTable(FieldName) \
+	(((char*)(&((struct atom_master_list_of_data_tables_v2_1*)0)->FieldName)-(char*)0)/sizeof(uint16_t))
+
+#define PP_ATOMFWCTRL_MAX_VOLTAGE_ENTRIES 32
+
+struct pp_atomfwctrl_voltage_table_entry {
+	uint16_t value;
+	uint32_t  smio_low;
+};
+
+struct pp_atomfwctrl_voltage_table {
+	uint32_t count;
+	uint32_t mask_low;
+	uint32_t phase_delay;
+	uint8_t psi0_enable;
+	uint8_t psi1_enable;
+	uint8_t max_vid_step;
+	uint8_t telemetry_offset;
+	uint8_t telemetry_slope;
+	struct pp_atomfwctrl_voltage_table_entry entries[PP_ATOMFWCTRL_MAX_VOLTAGE_ENTRIES];
+};
+
+struct pp_atomfwctrl_gpio_pin_assignment {
+	uint16_t us_gpio_pin_aindex;
+	uint8_t uc_gpio_pin_bit_shift;
+};
+
+struct pp_atomfwctrl_clock_dividers_soc15 {
+	uint32_t   ulClock;           /* the actual clock */
+	uint32_t   ulDid;             /* DFS divider */
+	uint32_t   ulPll_fb_mult;     /* Feedback Multiplier:  bit 8:0 int, bit 15:12 post_div, bit 31:16 frac */
+	uint32_t   ulPll_ss_fbsmult;  /* Spread FB Multiplier: bit 8:0 int, bit 31:16 frac */
+	uint16_t   usPll_ss_slew_frac;
+	uint8_t    ucPll_ss_enable;
+	uint8_t    ucReserve;
+	uint32_t   ulReserve[2];
+};
+
+struct pp_atomfwctrl_avfs_parameters {
+	uint32_t   ulMaxVddc;
+	uint32_t   ulMinVddc;
+	uint8_t    ucMaxVidStep;
+	uint32_t   ulMeanNsigmaAcontant0;
+	uint32_t   ulMeanNsigmaAcontant1;
+	uint32_t   ulMeanNsigmaAcontant2;
+	uint16_t   usMeanNsigmaDcTolSigma;
+	uint16_t   usMeanNsigmaPlatformMean;
+	uint16_t   usMeanNsigmaPlatformSigma;
+	uint32_t   ulGbVdroopTableCksoffA0;
+	uint32_t   ulGbVdroopTableCksoffA1;
+	uint32_t   ulGbVdroopTableCksoffA2;
+	uint32_t   ulGbVdroopTableCksonA0;
+	uint32_t   ulGbVdroopTableCksonA1;
+	uint32_t   ulGbVdroopTableCksonA2;
+	uint32_t   ulGbFuseTableCksoffM1;
+	uint16_t   usGbFuseTableCksoffM2;
+	uint32_t   ulGbFuseTableCksoffB;
+	uint32_t   ulGbFuseTableCksonM1;
+	uint16_t   usGbFuseTableCksonM2;
+	uint32_t   ulGbFuseTableCksonB;
+	uint16_t   usMaxVoltage025mv;
+	uint8_t    ucEnableGbVdroopTableCksoff;
+	uint8_t    ucEnableGbVdroopTableCkson;
+	uint8_t    ucEnableGbFuseTableCksoff;
+	uint8_t    ucEnableGbFuseTableCkson;
+	uint16_t   usPsmAgeComfactor;
+	uint8_t    ucEnableApplyAvfsCksoffVoltage;
+	uint32_t   ulDispclk2GfxclkM1;
+	uint16_t   usDispclk2GfxclkM2;
+	uint32_t   ulDispclk2GfxclkB;
+	uint32_t   ulDcefclk2GfxclkM1;
+	uint16_t   usDcefclk2GfxclkM2;
+	uint32_t   ulDcefclk2GfxclkB;
+	uint32_t   ulPixelclk2GfxclkM1;
+	uint16_t   usPixelclk2GfxclkM2;
+	uint32_t   ulPixelclk2GfxclkB;
+	uint32_t   ulPhyclk2GfxclkM1;
+	uint16_t   usPhyclk2GfxclkM2;
+	uint32_t   ulPhyclk2GfxclkB;
+};
+
+struct pp_atomfwctrl_gpio_parameters {
+	uint8_t   ucAcDcGpio;
+	uint8_t   ucAcDcPolarity;
+	uint8_t   ucVR0HotGpio;
+	uint8_t   ucVR0HotPolarity;
+	uint8_t   ucVR1HotGpio;
+	uint8_t   ucVR1HotPolarity;
+	uint8_t   ucFwCtfGpio;
+	uint8_t   ucFwCtfPolarity;
+};
+int pp_atomfwctrl_get_gpu_pll_dividers_vega10(struct pp_hwmgr *hwmgr,
+		uint32_t clock_type, uint32_t clock_value,
+		struct pp_atomfwctrl_clock_dividers_soc15 *dividers);
+int pp_atomfwctrl_enter_self_refresh(struct pp_hwmgr *hwmgr);
+bool pp_atomfwctrl_get_pp_assign_pin(struct pp_hwmgr *hwmgr, const uint32_t pin_id,
+		struct pp_atomfwctrl_gpio_pin_assignment *gpio_pin_assignment);
+
+int pp_atomfwctrl_get_voltage_table_v4(struct pp_hwmgr *hwmgr, uint8_t voltage_type,
+		uint8_t voltage_mode, struct pp_atomfwctrl_voltage_table *voltage_table);
+bool pp_atomfwctrl_is_voltage_controlled_by_gpio_v4(struct pp_hwmgr *hwmgr,
+		uint8_t voltage_type, uint8_t voltage_mode);
+
+int pp_atomfwctrl_get_avfs_information(struct pp_hwmgr *hwmgr,
+		struct pp_atomfwctrl_avfs_parameters *param);
+int pp_atomfwctrl_get_gpio_information(struct pp_hwmgr *hwmgr,
+		struct pp_atomfwctrl_gpio_parameters *param);
+
+#endif
+
-- 
2.5.5


* [PATCH 058/100] drm/amd/powerplay: add global PowerPlay mutex.
       [not found] ` <1490041835-11255-1-git-send-email-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (41 preceding siblings ...)
  2017-03-20 20:29   ` [PATCH 057/100] drm/amdgpu: add new atomfirmware based helpers for powerplay Alex Deucher
@ 2017-03-20 20:29   ` Alex Deucher
  2017-03-20 20:29   ` [PATCH 059/100] drm/amd/powerplay: add some new structures for Vega10 Alex Deucher
                     ` (42 subsequent siblings)
  85 siblings, 0 replies; 101+ messages in thread
From: Alex Deucher @ 2017-03-20 20:29 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Alex Deucher, Rex Zhu

From: Rex Zhu <Rex.Zhu@amd.com>

Signed-off-by: Rex Zhu <Rex.Zhu@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/powerplay/amd_powerplay.c   | 192 ++++++++++++++++--------
 drivers/gpu/drm/amd/powerplay/inc/pp_instance.h |   1 +
 2 files changed, 132 insertions(+), 61 deletions(-)

diff --git a/drivers/gpu/drm/amd/powerplay/amd_powerplay.c b/drivers/gpu/drm/amd/powerplay/amd_powerplay.c
index 8132d46..985ed21 100644
--- a/drivers/gpu/drm/amd/powerplay/amd_powerplay.c
+++ b/drivers/gpu/drm/amd/powerplay/amd_powerplay.c
@@ -341,8 +341,9 @@ static int pp_dpm_force_performance_level(void *handle,
 		return 0;
 	}
 
+	mutex_lock(&pp_handle->pp_lock);
 	hwmgr->hwmgr_func->force_dpm_level(hwmgr, level);
-
+	mutex_unlock(&pp_handle->pp_lock);
 	return 0;
 }
 
@@ -352,6 +353,7 @@ static enum amd_dpm_forced_level pp_dpm_get_performance_level(
 	struct pp_hwmgr  *hwmgr;
 	struct pp_instance *pp_handle = (struct pp_instance *)handle;
 	int ret = 0;
+	enum amd_dpm_forced_level level;
 
 	ret = pp_check(pp_handle);
 
@@ -359,8 +361,10 @@ static enum amd_dpm_forced_level pp_dpm_get_performance_level(
 		return ret;
 
 	hwmgr = pp_handle->hwmgr;
-
-	return hwmgr->dpm_level;
+	mutex_lock(&pp_handle->pp_lock);
+	level = hwmgr->dpm_level;
+	mutex_unlock(&pp_handle->pp_lock);
+	return level;
 }
 
 static int pp_dpm_get_sclk(void *handle, bool low)
@@ -380,8 +384,10 @@ static int pp_dpm_get_sclk(void *handle, bool low)
 		pr_info("%s was not implemented.\n", __func__);
 		return 0;
 	}
-
-	return hwmgr->hwmgr_func->get_sclk(hwmgr, low);
+	mutex_lock(&pp_handle->pp_lock);
+	ret = hwmgr->hwmgr_func->get_sclk(hwmgr, low);
+	mutex_unlock(&pp_handle->pp_lock);
+	return ret;
 }
 
 static int pp_dpm_get_mclk(void *handle, bool low)
@@ -401,8 +407,10 @@ static int pp_dpm_get_mclk(void *handle, bool low)
 		pr_info("%s was not implemented.\n", __func__);
 		return 0;
 	}
-
-	return hwmgr->hwmgr_func->get_mclk(hwmgr, low);
+	mutex_lock(&pp_handle->pp_lock);
+	ret = hwmgr->hwmgr_func->get_mclk(hwmgr, low);
+	mutex_unlock(&pp_handle->pp_lock);
+	return ret;
 }
 
 static int pp_dpm_powergate_vce(void *handle, bool gate)
@@ -422,8 +430,10 @@ static int pp_dpm_powergate_vce(void *handle, bool gate)
 		pr_info("%s was not implemented.\n", __func__);
 		return 0;
 	}
-
-	return hwmgr->hwmgr_func->powergate_vce(hwmgr, gate);
+	mutex_lock(&pp_handle->pp_lock);
+	ret = hwmgr->hwmgr_func->powergate_vce(hwmgr, gate);
+	mutex_unlock(&pp_handle->pp_lock);
+	return ret;
 }
 
 static int pp_dpm_powergate_uvd(void *handle, bool gate)
@@ -443,8 +453,10 @@ static int pp_dpm_powergate_uvd(void *handle, bool gate)
 		pr_info("%s was not implemented.\n", __func__);
 		return 0;
 	}
-
-	return hwmgr->hwmgr_func->powergate_uvd(hwmgr, gate);
+	mutex_lock(&pp_handle->pp_lock);
+	ret = hwmgr->hwmgr_func->powergate_uvd(hwmgr, gate);
+	mutex_unlock(&pp_handle->pp_lock);
+	return ret;
 }
 
 static enum PP_StateUILabel power_state_convert(enum amd_pm_state_type  state)
@@ -472,7 +484,7 @@ static int pp_dpm_dispatch_tasks(void *handle, enum amd_pp_event event_id,
 
 	if (ret != 0)
 		return ret;
-
+	mutex_lock(&pp_handle->pp_lock);
 	switch (event_id) {
 	case AMD_PP_EVENT_DISPLAY_CONFIG_CHANGE:
 		ret = pem_handle_event(pp_handle->eventmgr, event_id, &data);
@@ -498,6 +510,7 @@ static int pp_dpm_dispatch_tasks(void *handle, enum amd_pp_event event_id,
 	default:
 		break;
 	}
+	mutex_unlock(&pp_handle->pp_lock);
 	return ret;
 }
 
@@ -507,6 +520,7 @@ static enum amd_pm_state_type pp_dpm_get_current_power_state(void *handle)
 	struct pp_power_state *state;
 	struct pp_instance *pp_handle = (struct pp_instance *)handle;
 	int ret = 0;
+	enum amd_pm_state_type pm_type;
 
 	ret = pp_check(pp_handle);
 
@@ -518,21 +532,30 @@ static enum amd_pm_state_type pp_dpm_get_current_power_state(void *handle)
 	if (hwmgr->current_ps == NULL)
 		return -EINVAL;
 
+	mutex_lock(&pp_handle->pp_lock);
+
 	state = hwmgr->current_ps;
 
 	switch (state->classification.ui_label) {
 	case PP_StateUILabel_Battery:
-		return POWER_STATE_TYPE_BATTERY;
+		pm_type = POWER_STATE_TYPE_BATTERY;
+		break;
 	case PP_StateUILabel_Balanced:
-		return POWER_STATE_TYPE_BALANCED;
+		pm_type = POWER_STATE_TYPE_BALANCED;
+		break;
 	case PP_StateUILabel_Performance:
-		return POWER_STATE_TYPE_PERFORMANCE;
+		pm_type = POWER_STATE_TYPE_PERFORMANCE;
+		break;
 	default:
 		if (state->classification.flags & PP_StateClassificationFlag_Boot)
-			return  POWER_STATE_TYPE_INTERNAL_BOOT;
+			pm_type = POWER_STATE_TYPE_INTERNAL_BOOT;
 		else
-			return POWER_STATE_TYPE_DEFAULT;
+			pm_type = POWER_STATE_TYPE_DEFAULT;
+		break;
 	}
+	mutex_unlock(&pp_handle->pp_lock);
+
+	return pm_type;
 }
 
 static int pp_dpm_set_fan_control_mode(void *handle, uint32_t mode)
@@ -552,8 +571,10 @@ static int pp_dpm_set_fan_control_mode(void *handle, uint32_t mode)
 		pr_info("%s was not implemented.\n", __func__);
 		return 0;
 	}
-
-	return hwmgr->hwmgr_func->set_fan_control_mode(hwmgr, mode);
+	mutex_lock(&pp_handle->pp_lock);
+	ret = hwmgr->hwmgr_func->set_fan_control_mode(hwmgr, mode);
+	mutex_unlock(&pp_handle->pp_lock);
+	return ret;
 }
 
 static int pp_dpm_get_fan_control_mode(void *handle)
@@ -573,8 +594,10 @@ static int pp_dpm_get_fan_control_mode(void *handle)
 		pr_info("%s was not implemented.\n", __func__);
 		return 0;
 	}
-
-	return hwmgr->hwmgr_func->get_fan_control_mode(hwmgr);
+	mutex_lock(&pp_handle->pp_lock);
+	ret = hwmgr->hwmgr_func->get_fan_control_mode(hwmgr);
+	mutex_unlock(&pp_handle->pp_lock);
+	return ret;
 }
 
 static int pp_dpm_set_fan_speed_percent(void *handle, uint32_t percent)
@@ -594,8 +617,10 @@ static int pp_dpm_set_fan_speed_percent(void *handle, uint32_t percent)
 		pr_info("%s was not implemented.\n", __func__);
 		return 0;
 	}
-
-	return hwmgr->hwmgr_func->set_fan_speed_percent(hwmgr, percent);
+	mutex_lock(&pp_handle->pp_lock);
+	ret = hwmgr->hwmgr_func->set_fan_speed_percent(hwmgr, percent);
+	mutex_unlock(&pp_handle->pp_lock);
+	return ret;
 }
 
 static int pp_dpm_get_fan_speed_percent(void *handle, uint32_t *speed)
@@ -616,7 +641,10 @@ static int pp_dpm_get_fan_speed_percent(void *handle, uint32_t *speed)
 		return 0;
 	}
 
-	return hwmgr->hwmgr_func->get_fan_speed_percent(hwmgr, speed);
+	mutex_lock(&pp_handle->pp_lock);
+	ret = hwmgr->hwmgr_func->get_fan_speed_percent(hwmgr, speed);
+	mutex_unlock(&pp_handle->pp_lock);
+	return ret;
 }
 
 static int pp_dpm_get_fan_speed_rpm(void *handle, uint32_t *rpm)
@@ -635,7 +663,10 @@ static int pp_dpm_get_fan_speed_rpm(void *handle, uint32_t *rpm)
 	if (hwmgr->hwmgr_func->get_fan_speed_rpm == NULL)
 		return -EINVAL;
 
-	return hwmgr->hwmgr_func->get_fan_speed_rpm(hwmgr, rpm);
+	mutex_lock(&pp_handle->pp_lock);
+	ret = hwmgr->hwmgr_func->get_fan_speed_rpm(hwmgr, rpm);
+	mutex_unlock(&pp_handle->pp_lock);
+	return ret;
 }
 
 static int pp_dpm_get_temperature(void *handle)
@@ -655,8 +686,10 @@ static int pp_dpm_get_temperature(void *handle)
 		pr_info("%s was not implemented.\n", __func__);
 		return 0;
 	}
-
-	return hwmgr->hwmgr_func->get_temperature(hwmgr);
+	mutex_lock(&pp_handle->pp_lock);
+	ret = hwmgr->hwmgr_func->get_temperature(hwmgr);
+	mutex_unlock(&pp_handle->pp_lock);
+	return ret;
 }
 
 static int pp_dpm_get_pp_num_states(void *handle,
@@ -677,6 +710,8 @@ static int pp_dpm_get_pp_num_states(void *handle,
 	if (hwmgr->ps == NULL)
 		return -EINVAL;
 
+	mutex_lock(&pp_handle->pp_lock);
+
 	data->nums = hwmgr->num_ps;
 
 	for (i = 0; i < hwmgr->num_ps; i++) {
@@ -699,7 +734,7 @@ static int pp_dpm_get_pp_num_states(void *handle,
 				data->states[i] = POWER_STATE_TYPE_DEFAULT;
 		}
 	}
-
+	mutex_unlock(&pp_handle->pp_lock);
 	return 0;
 }
 
@@ -708,6 +743,7 @@ static int pp_dpm_get_pp_table(void *handle, char **table)
 	struct pp_hwmgr *hwmgr;
 	struct pp_instance *pp_handle = (struct pp_instance *)handle;
 	int ret = 0;
+	int size = 0;
 
 	ret = pp_check(pp_handle);
 
@@ -719,9 +755,11 @@ static int pp_dpm_get_pp_table(void *handle, char **table)
 	if (!hwmgr->soft_pp_table)
 		return -EINVAL;
 
+	mutex_lock(&pp_handle->pp_lock);
 	*table = (char *)hwmgr->soft_pp_table;
-
-	return hwmgr->soft_pp_table_size;
+	size = hwmgr->soft_pp_table_size;
+	mutex_unlock(&pp_handle->pp_lock);
+	return size;
 }
 
 static int pp_dpm_set_pp_table(void *handle, const char *buf, size_t size)
@@ -736,19 +774,21 @@ static int pp_dpm_set_pp_table(void *handle, const char *buf, size_t size)
 		return ret;
 
 	hwmgr = pp_handle->hwmgr;
-
+	mutex_lock(&pp_handle->pp_lock);
 	if (!hwmgr->hardcode_pp_table) {
 		hwmgr->hardcode_pp_table = kmemdup(hwmgr->soft_pp_table,
 						   hwmgr->soft_pp_table_size,
 						   GFP_KERNEL);
-
-		if (!hwmgr->hardcode_pp_table)
+		if (!hwmgr->hardcode_pp_table) {
+			mutex_unlock(&pp_handle->pp_lock);
 			return -ENOMEM;
+		}
 	}
 
 	memcpy(hwmgr->hardcode_pp_table, buf, size);
 
 	hwmgr->soft_pp_table = hwmgr->hardcode_pp_table;
+	mutex_unlock(&pp_handle->pp_lock);
 
 	ret = amd_powerplay_reset(handle);
 	if (ret)
@@ -781,8 +821,10 @@ static int pp_dpm_force_clock_level(void *handle,
 		pr_info("%s was not implemented.\n", __func__);
 		return 0;
 	}
-
-	return hwmgr->hwmgr_func->force_clock_level(hwmgr, type, mask);
+	mutex_lock(&pp_handle->pp_lock);
+	ret = hwmgr->hwmgr_func->force_clock_level(hwmgr, type, mask);
+	mutex_unlock(&pp_handle->pp_lock);
+	return ret;
 }
 
 static int pp_dpm_print_clock_levels(void *handle,
@@ -803,7 +845,10 @@ static int pp_dpm_print_clock_levels(void *handle,
 		pr_info("%s was not implemented.\n", __func__);
 		return 0;
 	}
-	return hwmgr->hwmgr_func->print_clock_levels(hwmgr, type, buf);
+	mutex_lock(&pp_handle->pp_lock);
+	ret = hwmgr->hwmgr_func->print_clock_levels(hwmgr, type, buf);
+	mutex_unlock(&pp_handle->pp_lock);
+	return ret;
 }
 
 static int pp_dpm_get_sclk_od(void *handle)
@@ -823,8 +868,10 @@ static int pp_dpm_get_sclk_od(void *handle)
 		pr_info("%s was not implemented.\n", __func__);
 		return 0;
 	}
-
-	return hwmgr->hwmgr_func->get_sclk_od(hwmgr);
+	mutex_lock(&pp_handle->pp_lock);
+	ret = hwmgr->hwmgr_func->get_sclk_od(hwmgr);
+	mutex_unlock(&pp_handle->pp_lock);
+	return ret;
 }
 
 static int pp_dpm_set_sclk_od(void *handle, uint32_t value)
@@ -845,7 +892,10 @@ static int pp_dpm_set_sclk_od(void *handle, uint32_t value)
 		return 0;
 	}
 
-	return hwmgr->hwmgr_func->set_sclk_od(hwmgr, value);
+	mutex_lock(&pp_handle->pp_lock);
+	ret = hwmgr->hwmgr_func->set_sclk_od(hwmgr, value);
+	mutex_unlock(&pp_handle->pp_lock);
+	return ret;
 }
 
 static int pp_dpm_get_mclk_od(void *handle)
@@ -865,8 +915,10 @@ static int pp_dpm_get_mclk_od(void *handle)
 		pr_info("%s was not implemented.\n", __func__);
 		return 0;
 	}
-
-	return hwmgr->hwmgr_func->get_mclk_od(hwmgr);
+	mutex_lock(&pp_handle->pp_lock);
+	ret = hwmgr->hwmgr_func->get_mclk_od(hwmgr);
+	mutex_unlock(&pp_handle->pp_lock);
+	return ret;
 }
 
 static int pp_dpm_set_mclk_od(void *handle, uint32_t value)
@@ -886,8 +938,10 @@ static int pp_dpm_set_mclk_od(void *handle, uint32_t value)
 		pr_info("%s was not implemented.\n", __func__);
 		return 0;
 	}
-
-	return hwmgr->hwmgr_func->set_mclk_od(hwmgr, value);
+	mutex_lock(&pp_handle->pp_lock);
+	ret = hwmgr->hwmgr_func->set_mclk_od(hwmgr, value);
+	mutex_unlock(&pp_handle->pp_lock);
+	return ret;
 }
 
 static int pp_dpm_read_sensor(void *handle, int idx,
@@ -909,7 +963,11 @@ static int pp_dpm_read_sensor(void *handle, int idx,
 		return 0;
 	}
 
-	return hwmgr->hwmgr_func->read_sensor(hwmgr, idx, value, size);
+	mutex_lock(&pp_handle->pp_lock);
+	ret = hwmgr->hwmgr_func->read_sensor(hwmgr, idx, value, size);
+	mutex_unlock(&pp_handle->pp_lock);
+
+	return ret;
 }
 
 static struct amd_vce_state*
@@ -1114,8 +1172,8 @@ int amd_powerplay_create(struct amd_pp_init *pp_init,
 	instance->pm_en = pp_init->pm_en;
 	instance->feature_mask = pp_init->feature_mask;
 	instance->device = pp_init->device;
+	mutex_init(&instance->pp_lock);
 	*handle = instance;
-
 	return 0;
 }
 
@@ -1186,9 +1244,9 @@ int amd_powerplay_display_configuration_change(void *handle,
 		return ret;
 
 	hwmgr = pp_handle->hwmgr;
-
+	mutex_lock(&pp_handle->pp_lock);
 	phm_store_dal_configuration_data(hwmgr, display_config);
-
+	mutex_unlock(&pp_handle->pp_lock);
 	return 0;
 }
 
@@ -1209,7 +1267,10 @@ int amd_powerplay_get_display_power_level(void *handle,
 	if (output == NULL)
 		return -EINVAL;
 
-	return phm_get_dal_power_level(hwmgr, output);
+	mutex_lock(&pp_handle->pp_lock);
+	ret = phm_get_dal_power_level(hwmgr, output);
+	mutex_unlock(&pp_handle->pp_lock);
+	return ret;
 }
 
 int amd_powerplay_get_current_clocks(void *handle,
@@ -1228,14 +1289,22 @@ int amd_powerplay_get_current_clocks(void *handle,
 
 	hwmgr = pp_handle->hwmgr;
 
+	mutex_lock(&pp_handle->pp_lock);
+
 	phm_get_dal_power_level(hwmgr, &simple_clocks);
 
-	if (phm_cap_enabled(hwmgr->platform_descriptor.platformCaps, PHM_PlatformCaps_PowerContainment)) {
-		if (0 != phm_get_clock_info(hwmgr, &hwmgr->current_ps->hardware, &hw_clocks, PHM_PerformanceLevelDesignation_PowerContainment))
-			PP_ASSERT_WITH_CODE(0, "Error in PHM_GetPowerContainmentClockInfo", return -1);
-	} else {
-		if (0 != phm_get_clock_info(hwmgr, &hwmgr->current_ps->hardware, &hw_clocks, PHM_PerformanceLevelDesignation_Activity))
-			PP_ASSERT_WITH_CODE(0, "Error in PHM_GetClockInfo", return -1);
+	if (phm_cap_enabled(hwmgr->platform_descriptor.platformCaps,
+					PHM_PlatformCaps_PowerContainment))
+		ret = phm_get_clock_info(hwmgr, &hwmgr->current_ps->hardware,
+					&hw_clocks, PHM_PerformanceLevelDesignation_PowerContainment);
+	else
+		ret = phm_get_clock_info(hwmgr, &hwmgr->current_ps->hardware,
+					&hw_clocks, PHM_PerformanceLevelDesignation_Activity);
+
+	if (ret != 0) {
+		pr_info("Error in phm_get_clock_info\n");
+		mutex_unlock(&pp_handle->pp_lock);
+		return -EINVAL;
 	}
 
 	clocks->min_engine_clock = hw_clocks.min_eng_clk;
@@ -1254,14 +1323,12 @@ int amd_powerplay_get_current_clocks(void *handle,
 		clocks->max_engine_clock_in_sr = hw_clocks.max_eng_clk;
 		clocks->min_engine_clock_in_sr = hw_clocks.min_eng_clk;
 	}
-
+	mutex_unlock(&pp_handle->pp_lock);
 	return 0;
-
 }
 
 int amd_powerplay_get_clock_by_type(void *handle, enum amd_pp_clock_type type, struct amd_pp_clocks *clocks)
 {
-	int result = -1;
 	struct pp_hwmgr  *hwmgr;
 	struct pp_instance *pp_handle = (struct pp_instance *)handle;
 	int ret = 0;
@@ -1276,9 +1343,10 @@ int amd_powerplay_get_clock_by_type(void *handle, enum amd_pp_clock_type type, s
 	if (clocks == NULL)
 		return -EINVAL;
 
-	result = phm_get_clock_by_type(hwmgr, type, clocks);
-
-	return result;
+	mutex_lock(&pp_handle->pp_lock);
+	ret = phm_get_clock_by_type(hwmgr, type, clocks);
+	mutex_unlock(&pp_handle->pp_lock);
+	return ret;
 }
 
 int amd_powerplay_get_display_mode_validation_clocks(void *handle,
@@ -1295,13 +1363,15 @@ int amd_powerplay_get_display_mode_validation_clocks(void *handle,
 
 	hwmgr = pp_handle->hwmgr;
 
-
 	if (clocks == NULL)
 		return -EINVAL;
 
+	mutex_lock(&pp_handle->pp_lock);
+
 	if (phm_cap_enabled(hwmgr->platform_descriptor.platformCaps, PHM_PlatformCaps_DynamicPatchPowerState))
 		ret = phm_get_max_high_clocks(hwmgr, clocks);
 
+	mutex_unlock(&pp_handle->pp_lock);
 	return ret;
 }
 
diff --git a/drivers/gpu/drm/amd/powerplay/inc/pp_instance.h b/drivers/gpu/drm/amd/powerplay/inc/pp_instance.h
index ab8494f..4c3b537 100644
--- a/drivers/gpu/drm/amd/powerplay/inc/pp_instance.h
+++ b/drivers/gpu/drm/amd/powerplay/inc/pp_instance.h
@@ -39,6 +39,7 @@ struct pp_instance {
 	struct pp_smumgr *smu_mgr;
 	struct pp_hwmgr *hwmgr;
 	struct pp_eventmgr *eventmgr;
+	struct mutex pp_lock;
 };
 
 #endif
-- 
2.5.5

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 101+ messages in thread

* [PATCH 059/100] drm/amd/powerplay: add some new structures for Vega10
       [not found] ` <1490041835-11255-1-git-send-email-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (42 preceding siblings ...)
  2017-03-20 20:29   ` [PATCH 058/100] drm/amd/powerplay: add global PowerPlay mutex Alex Deucher
@ 2017-03-20 20:29   ` Alex Deucher
  2017-03-20 20:29   ` [PATCH 060/100] drm/amd: add structures for display/powerplay interface Alex Deucher
                     ` (41 subsequent siblings)
  85 siblings, 0 replies; 101+ messages in thread
From: Alex Deucher @ 2017-03-20 20:29 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Eric Huang, Alex Deucher

From: Eric Huang <JinHuiEric.Huang@amd.com>

Signed-off-by: Eric Huang <JinHuiEric.Huang@amd.com>
Reviewed-by: Ken Wang <Qingqing.Wang@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/powerplay/hwmgr/hwmgr_ppt.h    |  16 ++-
 .../gpu/drm/amd/powerplay/inc/hardwaremanager.h    |  32 ++++++
 drivers/gpu/drm/amd/powerplay/inc/hwmgr.h          | 112 ++++++++++++++++++++-
 3 files changed, 155 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/hwmgr_ppt.h b/drivers/gpu/drm/amd/powerplay/hwmgr/hwmgr_ppt.h
index 2930a33..c0193e0 100644
--- a/drivers/gpu/drm/amd/powerplay/hwmgr/hwmgr_ppt.h
+++ b/drivers/gpu/drm/amd/powerplay/hwmgr/hwmgr_ppt.h
@@ -30,15 +30,17 @@
 
 struct phm_ppt_v1_clock_voltage_dependency_record {
 	uint32_t clk;
-	uint8_t vddInd;
+	uint8_t  vddInd;
+	uint8_t  vddciInd;
+	uint8_t  mvddInd;
 	uint16_t vdd_offset;
 	uint16_t vddc;
 	uint16_t vddgfx;
 	uint16_t vddci;
 	uint16_t mvdd;
-	uint8_t phases;
-	uint8_t cks_enable;
-	uint8_t cks_voffset;
+	uint8_t  phases;
+	uint8_t  cks_enable;
+	uint8_t  cks_voffset;
 	uint32_t sclk_offset;
 };
 
@@ -94,6 +96,7 @@ struct phm_ppt_v1_pcie_record {
 	uint8_t gen_speed;
 	uint8_t lane_width;
 	uint16_t usreserved;
+	uint16_t reserved;
 	uint32_t pcie_sclk;
 };
 typedef struct phm_ppt_v1_pcie_record phm_ppt_v1_pcie_record;
@@ -104,5 +107,10 @@ struct phm_ppt_v1_pcie_table {
 };
 typedef struct phm_ppt_v1_pcie_table phm_ppt_v1_pcie_table;
 
+struct phm_ppt_v1_gpio_table {
+	uint8_t vrhot_triggered_sclk_dpm_index;           /* SCLK DPM level index to switch to when VRHot is triggered */
+};
+typedef struct phm_ppt_v1_gpio_table phm_ppt_v1_gpio_table;
+
 #endif
 
diff --git a/drivers/gpu/drm/amd/powerplay/inc/hardwaremanager.h b/drivers/gpu/drm/amd/powerplay/inc/hardwaremanager.h
index 2612997..c5279c2 100644
--- a/drivers/gpu/drm/amd/powerplay/inc/hardwaremanager.h
+++ b/drivers/gpu/drm/amd/powerplay/inc/hardwaremanager.h
@@ -182,6 +182,7 @@ enum phm_platform_caps {
 	PHM_PlatformCaps_Thermal2GPIO17,                        /* indicates thermal2GPIO17 table support */
 	PHM_PlatformCaps_ThermalOutGPIO,                        /* indicates ThermalOutGPIO support, pin number is assigned by VBIOS */
 	PHM_PlatformCaps_DisableMclkSwitchingForFrameLock,      /* Disable memory clock switch during Framelock */
+	PHM_PlatformCaps_ForceMclkHigh,                         /* Disable memory clock switching by forcing memory clock high */
 	PHM_PlatformCaps_VRHotGPIOConfigurable,                 /* indicates VR_HOT GPIO configurable */
 	PHM_PlatformCaps_TempInversion,                         /* enable Temp Inversion feature */
 	PHM_PlatformCaps_IOIC3,
@@ -212,6 +213,20 @@ enum phm_platform_caps {
 	PHM_PlatformCaps_TablelessHardwareInterface,
 	PHM_PlatformCaps_EnableDriverEVV,
 	PHM_PlatformCaps_SPLLShutdownSupport,
+	PHM_PlatformCaps_VirtualBatteryState,
+	PHM_PlatformCaps_IgnoreForceHighClockRequestsInAPUs,
+	PHM_PlatformCaps_DisableMclkSwitchForVR,
+	PHM_PlatformCaps_SMU8,
+	PHM_PlatformCaps_VRHotPolarityHigh,
+	PHM_PlatformCaps_IPS_UlpsExclusive,
+	PHM_PlatformCaps_SMCtoPPLIBAcdcGpioScheme,
+	PHM_PlatformCaps_GeminiAsymmetricPower,
+	PHM_PlatformCaps_OCLPowerOptimization,
+	PHM_PlatformCaps_MaxPCIEBandWidth,
+	PHM_PlatformCaps_PerfPerWattOptimizationSupport,
+	PHM_PlatformCaps_UVDClientMCTuning,
+	PHM_PlatformCaps_ODNinACSupport,
+	PHM_PlatformCaps_ODNinDCSupport,
 	PHM_PlatformCaps_Max
 };
 
@@ -290,6 +305,8 @@ struct PP_Clocks {
 	uint32_t memoryClock;
 	uint32_t BusBandwidth;
 	uint32_t engineClockInSR;
+	uint32_t dcefClock;
+	uint32_t dcefClockInSR;
 };
 
 struct pp_clock_info {
@@ -334,6 +351,21 @@ struct phm_clocks {
 	uint32_t clock[MAX_NUM_CLOCKS];
 };
 
+struct phm_odn_performance_level {
+	uint32_t clock;
+	uint32_t vddc;
+	bool enabled;
+};
+
+struct phm_odn_clock_levels {
+	uint32_t size;
+	uint32_t options;
+	uint32_t flags;
+	uint32_t number_of_performance_levels;
+	/* fixed-size array; valid entry count given by number_of_performance_levels */
+	struct phm_odn_performance_level performance_level_entries[8];
+};
+
 extern int phm_disable_clock_power_gatings(struct pp_hwmgr *hwmgr);
 extern int phm_enable_clock_power_gatings(struct pp_hwmgr *hwmgr);
 extern int phm_powergate_uvd(struct pp_hwmgr *hwmgr, bool gate);
diff --git a/drivers/gpu/drm/amd/powerplay/inc/hwmgr.h b/drivers/gpu/drm/amd/powerplay/inc/hwmgr.h
index d5aa6cd..02185d4 100644
--- a/drivers/gpu/drm/amd/powerplay/inc/hwmgr.h
+++ b/drivers/gpu/drm/amd/powerplay/inc/hwmgr.h
@@ -83,7 +83,8 @@ enum PP_FEATURE_MASK {
 	PP_ULV_MASK = 0x100,
 	PP_ENABLE_GFX_CG_THRU_SMU = 0x200,
 	PP_CLOCK_STRETCH_MASK = 0x400,
-	PP_OD_FUZZY_FAN_CONTROL_MASK = 0x800
+	PP_OD_FUZZY_FAN_CONTROL_MASK = 0x800,
+	PP_SOCCLK_DPM_MASK = 0x1000,
 };
 
 enum PHM_BackEnd_Magic {
@@ -412,6 +413,7 @@ struct phm_cac_tdp_table {
 	uint16_t usLowCACLeakage;
 	uint16_t usHighCACLeakage;
 	uint16_t usMaximumPowerDeliveryLimit;
+	uint16_t usEDCLimit;
 	uint16_t usOperatingTempMinLimit;
 	uint16_t usOperatingTempMaxLimit;
 	uint16_t usOperatingTempStep;
@@ -438,6 +440,46 @@ struct phm_cac_tdp_table {
 	uint8_t  ucCKS_LDO_REFSEL;
 };
 
+struct phm_tdp_table {
+	uint16_t usTDP;
+	uint16_t usConfigurableTDP;
+	uint16_t usTDC;
+	uint16_t usBatteryPowerLimit;
+	uint16_t usSmallPowerLimit;
+	uint16_t usLowCACLeakage;
+	uint16_t usHighCACLeakage;
+	uint16_t usMaximumPowerDeliveryLimit;
+	uint16_t usEDCLimit;
+	uint16_t usOperatingTempMinLimit;
+	uint16_t usOperatingTempMaxLimit;
+	uint16_t usOperatingTempStep;
+	uint16_t usOperatingTempHyst;
+	uint16_t usDefaultTargetOperatingTemp;
+	uint16_t usTargetOperatingTemp;
+	uint16_t usPowerTuneDataSetID;
+	uint16_t usSoftwareShutdownTemp;
+	uint16_t usClockStretchAmount;
+	uint16_t usTemperatureLimitTedge;
+	uint16_t usTemperatureLimitHotspot;
+	uint16_t usTemperatureLimitLiquid1;
+	uint16_t usTemperatureLimitLiquid2;
+	uint16_t usTemperatureLimitHBM;
+	uint16_t usTemperatureLimitVrVddc;
+	uint16_t usTemperatureLimitVrMvdd;
+	uint16_t usTemperatureLimitPlx;
+	uint8_t  ucLiquid1_I2C_address;
+	uint8_t  ucLiquid2_I2C_address;
+	uint8_t  ucLiquid_I2C_Line;
+	uint8_t  ucVr_I2C_address;
+	uint8_t  ucVr_I2C_Line;
+	uint8_t  ucPlx_I2C_address;
+	uint8_t  ucPlx_I2C_Line;
+	uint8_t  ucLiquid_I2C_LineSDA;
+	uint8_t  ucVr_I2C_LineSDA;
+	uint8_t  ucPlx_I2C_LineSDA;
+	uint32_t usBoostPowerLimit;
+};
+
 struct phm_ppm_table {
 	uint8_t   ppm_design;
 	uint16_t  cpu_core_number;
@@ -472,9 +514,11 @@ struct phm_vq_budgeting_table {
 struct phm_clock_and_voltage_limits {
 	uint32_t sclk;
 	uint32_t mclk;
+	uint32_t gfxclk;
 	uint16_t vddc;
 	uint16_t vddci;
 	uint16_t vddgfx;
+	uint16_t vddmem;
 };
 
 /* Structure to hold PPTable information */
@@ -482,18 +526,77 @@ struct phm_clock_and_voltage_limits {
 struct phm_ppt_v1_information {
 	struct phm_ppt_v1_clock_voltage_dependency_table *vdd_dep_on_sclk;
 	struct phm_ppt_v1_clock_voltage_dependency_table *vdd_dep_on_mclk;
+	struct phm_ppt_v1_clock_voltage_dependency_table *vdd_dep_on_socclk;
+	struct phm_ppt_v1_clock_voltage_dependency_table *vdd_dep_on_dcefclk;
 	struct phm_clock_array *valid_sclk_values;
 	struct phm_clock_array *valid_mclk_values;
+	struct phm_clock_array *valid_socclk_values;
+	struct phm_clock_array *valid_dcefclk_values;
 	struct phm_clock_and_voltage_limits max_clock_voltage_on_dc;
 	struct phm_clock_and_voltage_limits max_clock_voltage_on_ac;
 	struct phm_clock_voltage_dependency_table *vddc_dep_on_dal_pwrl;
 	struct phm_ppm_table *ppm_parameter_table;
 	struct phm_cac_tdp_table *cac_dtp_table;
+	struct phm_tdp_table *tdp_table;
+	struct phm_ppt_v1_mm_clock_voltage_dependency_table *mm_dep_table;
+	struct phm_ppt_v1_voltage_lookup_table *vddc_lookup_table;
+	struct phm_ppt_v1_voltage_lookup_table *vddgfx_lookup_table;
+	struct phm_ppt_v1_voltage_lookup_table *vddmem_lookup_table;
+	struct phm_ppt_v1_pcie_table *pcie_table;
+	struct phm_ppt_v1_gpio_table *gpio_table;
+	uint16_t us_ulv_voltage_offset;
+	uint16_t us_ulv_smnclk_did;
+	uint16_t us_ulv_mp1clk_did;
+	uint16_t us_ulv_gfxclk_bypass;
+	uint16_t us_gfxclk_slew_rate;
+	uint16_t us_min_gfxclk_freq_limit;
+};
+
+struct phm_ppt_v2_information {
+	struct phm_ppt_v1_clock_voltage_dependency_table *vdd_dep_on_sclk;
+	struct phm_ppt_v1_clock_voltage_dependency_table *vdd_dep_on_mclk;
+	struct phm_ppt_v1_clock_voltage_dependency_table *vdd_dep_on_socclk;
+	struct phm_ppt_v1_clock_voltage_dependency_table *vdd_dep_on_dcefclk;
+	struct phm_ppt_v1_clock_voltage_dependency_table *vdd_dep_on_pixclk;
+	struct phm_ppt_v1_clock_voltage_dependency_table *vdd_dep_on_dispclk;
+	struct phm_ppt_v1_clock_voltage_dependency_table *vdd_dep_on_phyclk;
 	struct phm_ppt_v1_mm_clock_voltage_dependency_table *mm_dep_table;
+
+	struct phm_clock_voltage_dependency_table *vddc_dep_on_dalpwrl;
+
+	struct phm_clock_array *valid_sclk_values;
+	struct phm_clock_array *valid_mclk_values;
+	struct phm_clock_array *valid_socclk_values;
+	struct phm_clock_array *valid_dcefclk_values;
+
+	struct phm_clock_and_voltage_limits max_clock_voltage_on_dc;
+	struct phm_clock_and_voltage_limits max_clock_voltage_on_ac;
+
+	struct phm_ppm_table *ppm_parameter_table;
+	struct phm_cac_tdp_table *cac_dtp_table;
+	struct phm_tdp_table *tdp_table;
+
 	struct phm_ppt_v1_voltage_lookup_table *vddc_lookup_table;
 	struct phm_ppt_v1_voltage_lookup_table *vddgfx_lookup_table;
+	struct phm_ppt_v1_voltage_lookup_table *vddmem_lookup_table;
+	struct phm_ppt_v1_voltage_lookup_table *vddci_lookup_table;
+
 	struct phm_ppt_v1_pcie_table *pcie_table;
+
 	uint16_t us_ulv_voltage_offset;
+	uint16_t us_ulv_smnclk_did;
+	uint16_t us_ulv_mp1clk_did;
+	uint16_t us_ulv_gfxclk_bypass;
+	uint16_t us_gfxclk_slew_rate;
+	uint16_t us_min_gfxclk_freq_limit;
+
+	uint8_t  uc_gfx_dpm_voltage_mode;
+	uint8_t  uc_soc_dpm_voltage_mode;
+	uint8_t  uc_uclk_dpm_voltage_mode;
+	uint8_t  uc_uvd_dpm_voltage_mode;
+	uint8_t  uc_vce_dpm_voltage_mode;
+	uint8_t  uc_mp0_dpm_voltage_mode;
+	uint8_t  uc_dcef_dpm_voltage_mode;
 };
 
 struct phm_dynamic_state_info {
@@ -572,6 +675,13 @@ struct pp_advance_fan_control_parameters {
 	uint16_t  usFanGainVrMvdd;
 	uint16_t  usFanGainPlx;
 	uint16_t  usFanGainHbm;
+	uint8_t   ucEnableZeroRPM;
+	uint8_t   ucFanStopTemperature;
+	uint8_t   ucFanStartTemperature;
+	uint32_t  ulMaxFanSCLKAcousticLimit;       /* Maximum Fan Controller SCLK Frequency Acoustic Limit. */
+	uint32_t  ulTargetGfxClk;
+	uint16_t  usZeroRPMStartTemperature;
+	uint16_t  usZeroRPMStopTemperature;
 };
 
 struct pp_thermal_controller_info {
-- 
2.5.5


^ permalink raw reply related	[flat|nested] 101+ messages in thread

* [PATCH 060/100] drm/amd: add structures for display/powerplay interface
       [not found] ` <1490041835-11255-1-git-send-email-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (43 preceding siblings ...)
  2017-03-20 20:29   ` [PATCH 059/100] drm/amd/powerplay: add some new structures for Vega10 Alex Deucher
@ 2017-03-20 20:29   ` Alex Deucher
  2017-03-20 20:29   ` [PATCH 061/100] drm/amd/powerplay: add some display/powerplay interfaces Alex Deucher
                     ` (40 subsequent siblings)
  85 siblings, 0 replies; 101+ messages in thread
From: Alex Deucher @ 2017-03-20 20:29 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Eric Huang, Alex Deucher

From: Eric Huang <JinHuiEric.Huang@amd.com>

Signed-off-by: Eric Huang <JinHuiEric.Huang@amd.com>
Acked-by: Alex Deucher <alexander.deucher@amd.com>
Acked-by: Tony Cheng <tony.cheng@amd.com>
Acked-by: Harry Wentland <harry.wentland@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/include/dm_pp_interface.h | 83 +++++++++++++++++++++++++++
 1 file changed, 83 insertions(+)
 create mode 100644 drivers/gpu/drm/amd/include/dm_pp_interface.h

diff --git a/drivers/gpu/drm/amd/include/dm_pp_interface.h b/drivers/gpu/drm/amd/include/dm_pp_interface.h
new file mode 100644
index 0000000..7343aed
--- /dev/null
+++ b/drivers/gpu/drm/amd/include/dm_pp_interface.h
@@ -0,0 +1,83 @@
+/*
+ * Copyright 2016 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+#ifndef _DM_PP_INTERFACE_
+#define _DM_PP_INTERFACE_
+
+#define PP_MAX_CLOCK_LEVELS 8
+
+struct pp_clock_with_latency {
+	uint32_t clocks_in_khz;
+	uint32_t latency_in_us;
+};
+
+struct pp_clock_levels_with_latency {
+	uint32_t num_levels;
+	struct pp_clock_with_latency data[PP_MAX_CLOCK_LEVELS];
+};
+
+struct pp_clock_with_voltage {
+	uint32_t clocks_in_khz;
+	uint32_t voltage_in_mv;
+};
+
+struct pp_clock_levels_with_voltage {
+	uint32_t num_levels;
+	struct pp_clock_with_voltage data[PP_MAX_CLOCK_LEVELS];
+};
+
+#define PP_MAX_WM_SETS 4
+
+enum pp_wm_set_id {
+	DC_WM_SET_A = 0,
+	DC_WM_SET_B,
+	DC_WM_SET_C,
+	DC_WM_SET_D,
+	DC_WM_SET_INVALID = 0xffff,
+};
+
+struct pp_wm_set_with_dmif_clock_range_soc15 {
+	enum pp_wm_set_id wm_set_id;
+	uint32_t wm_min_dcefclk_in_khz;
+	uint32_t wm_max_dcefclk_in_khz;
+	uint32_t wm_min_memclk_in_khz;
+	uint32_t wm_max_memclk_in_khz;
+};
+
+struct pp_wm_set_with_mcif_clock_range_soc15 {
+	enum pp_wm_set_id wm_set_id;
+	uint32_t wm_min_socclk_in_khz;
+	uint32_t wm_max_socclk_in_khz;
+	uint32_t wm_min_memclk_in_khz;
+	uint32_t wm_max_memclk_in_khz;
+};
+
+struct pp_wm_sets_with_clock_ranges_soc15 {
+	uint32_t num_wm_sets_dmif;
+	uint32_t num_wm_sets_mcif;
+	struct pp_wm_set_with_dmif_clock_range_soc15
+		wm_sets_dmif[PP_MAX_WM_SETS];
+	struct pp_wm_set_with_mcif_clock_range_soc15
+		wm_sets_mcif[PP_MAX_WM_SETS];
+};
+
+#endif /* _DM_PP_INTERFACE_ */
-- 
2.5.5


^ permalink raw reply related	[flat|nested] 101+ messages in thread

* [PATCH 061/100] drm/amd/powerplay: add some display/powerplay interfaces
       [not found] ` <1490041835-11255-1-git-send-email-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (44 preceding siblings ...)
  2017-03-20 20:29   ` [PATCH 060/100] drm/amd: add structures for display/powerplay interface Alex Deucher
@ 2017-03-20 20:29   ` Alex Deucher
  2017-03-20 20:29   ` [PATCH 062/100] drm/amd/powerplay: add Vega10 powerplay support Alex Deucher
                     ` (39 subsequent siblings)
  85 siblings, 0 replies; 101+ messages in thread
From: Alex Deucher @ 2017-03-20 20:29 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Eric Huang, Alex Deucher

From: Eric Huang <JinHuiEric.Huang@amd.com>

New interfaces needed to handle the new clock trees and
bandwidth requirements on vega10.

Signed-off-by: Eric Huang <JinHuiEric.Huang@amd.com>
Acked-by: Alex Deucher <alexander.deucher@amd.com>
Acked-by: Tony Cheng <tony.cheng@amd.com>
Acked-by: Harry Wentland <harry.wentland@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/powerplay/amd_powerplay.c      | 94 ++++++++++++++++++++++
 .../gpu/drm/amd/powerplay/hwmgr/hardwaremanager.c  | 49 +++++++++++
 drivers/gpu/drm/amd/powerplay/inc/amd_powerplay.h  | 28 ++++++-
 .../gpu/drm/amd/powerplay/inc/hardwaremanager.h    | 11 +++
 drivers/gpu/drm/amd/powerplay/inc/hwmgr.h          | 10 +++
 5 files changed, 191 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/powerplay/amd_powerplay.c b/drivers/gpu/drm/amd/powerplay/amd_powerplay.c
index 985ed21..9e84031 100644
--- a/drivers/gpu/drm/amd/powerplay/amd_powerplay.c
+++ b/drivers/gpu/drm/amd/powerplay/amd_powerplay.c
@@ -1349,6 +1349,100 @@ int amd_powerplay_get_clock_by_type(void *handle, enum amd_pp_clock_type type, s
 	return ret;
 }
 
+int amd_powerplay_get_clock_by_type_with_latency(void *handle,
+		enum amd_pp_clock_type type,
+		struct pp_clock_levels_with_latency *clocks)
+{
+	struct pp_hwmgr *hwmgr;
+	struct pp_instance *pp_handle = (struct pp_instance *)handle;
+	int ret = 0;
+
+	ret = pp_check(pp_handle);
+	if (ret != 0)
+		return ret;
+
+	if (!clocks)
+		return -EINVAL;
+
+	mutex_lock(&pp_handle->pp_lock);
+	hwmgr = ((struct pp_instance *)handle)->hwmgr;
+	ret = phm_get_clock_by_type_with_latency(hwmgr, type, clocks);
+	mutex_unlock(&pp_handle->pp_lock);
+	return ret;
+}
+
+int amd_powerplay_get_clock_by_type_with_voltage(void *handle,
+		enum amd_pp_clock_type type,
+		struct pp_clock_levels_with_voltage *clocks)
+{
+	struct pp_hwmgr *hwmgr;
+	struct pp_instance *pp_handle = (struct pp_instance *)handle;
+	int ret = 0;
+
+	ret = pp_check(pp_handle);
+	if (ret != 0)
+		return ret;
+
+	if (!clocks)
+		return -EINVAL;
+
+	hwmgr = ((struct pp_instance *)handle)->hwmgr;
+
+	mutex_lock(&pp_handle->pp_lock);
+
+	ret = phm_get_clock_by_type_with_voltage(hwmgr, type, clocks);
+
+	mutex_unlock(&pp_handle->pp_lock);
+	return ret;
+}
+
+int amd_powerplay_set_watermarks_for_clocks_ranges(void *handle,
+		struct pp_wm_sets_with_clock_ranges_soc15 *wm_with_clock_ranges)
+{
+	struct pp_hwmgr *hwmgr;
+	struct pp_instance *pp_handle = (struct pp_instance *)handle;
+	int ret = 0;
+
+	ret = pp_check(pp_handle);
+	if (ret != 0)
+		return ret;
+
+	if (!wm_with_clock_ranges)
+		return -EINVAL;
+
+	hwmgr = ((struct pp_instance *)handle)->hwmgr;
+
+	mutex_lock(&pp_handle->pp_lock);
+	ret = phm_set_watermarks_for_clocks_ranges(hwmgr,
+			wm_with_clock_ranges);
+	mutex_unlock(&pp_handle->pp_lock);
+
+	return ret;
+}
+
+int amd_powerplay_display_clock_voltage_request(void *handle,
+		struct pp_display_clock_request *clock)
+{
+	struct pp_hwmgr *hwmgr;
+	struct pp_instance *pp_handle = (struct pp_instance *)handle;
+	int ret = 0;
+
+	ret = pp_check(pp_handle);
+	if (ret != 0)
+		return ret;
+
+	if (!clock)
+		return -EINVAL;
+
+	hwmgr = ((struct pp_instance *)handle)->hwmgr;
+
+	mutex_lock(&pp_handle->pp_lock);
+	ret = phm_display_clock_voltage_request(hwmgr, clock);
+	mutex_unlock(&pp_handle->pp_lock);
+
+	return ret;
+}
+
 int amd_powerplay_get_display_mode_validation_clocks(void *handle,
 		struct amd_pp_simple_clock_info *clocks)
 {
diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/hardwaremanager.c b/drivers/gpu/drm/amd/powerplay/hwmgr/hardwaremanager.c
index 6013ef1..0a2076e 100644
--- a/drivers/gpu/drm/amd/powerplay/hwmgr/hardwaremanager.c
+++ b/drivers/gpu/drm/amd/powerplay/hwmgr/hardwaremanager.c
@@ -443,6 +443,55 @@ int phm_get_clock_by_type(struct pp_hwmgr *hwmgr, enum amd_pp_clock_type type, s
 
 }
 
+int phm_get_clock_by_type_with_latency(struct pp_hwmgr *hwmgr,
+		enum amd_pp_clock_type type,
+		struct pp_clock_levels_with_latency *clocks)
+{
+	PHM_FUNC_CHECK(hwmgr);
+
+	if (hwmgr->hwmgr_func->get_clock_by_type_with_latency == NULL)
+		return -EINVAL;
+
+	return hwmgr->hwmgr_func->get_clock_by_type_with_latency(hwmgr, type, clocks);
+
+}
+
+int phm_get_clock_by_type_with_voltage(struct pp_hwmgr *hwmgr,
+		enum amd_pp_clock_type type,
+		struct pp_clock_levels_with_voltage *clocks)
+{
+	PHM_FUNC_CHECK(hwmgr);
+
+	if (hwmgr->hwmgr_func->get_clock_by_type_with_voltage == NULL)
+		return -EINVAL;
+
+	return hwmgr->hwmgr_func->get_clock_by_type_with_voltage(hwmgr, type, clocks);
+
+}
+
+int phm_set_watermarks_for_clocks_ranges(struct pp_hwmgr *hwmgr,
+		struct pp_wm_sets_with_clock_ranges_soc15 *wm_with_clock_ranges)
+{
+	PHM_FUNC_CHECK(hwmgr);
+
+	if (!hwmgr->hwmgr_func->set_watermarks_for_clocks_ranges)
+		return -EINVAL;
+
+	return hwmgr->hwmgr_func->set_watermarks_for_clocks_ranges(hwmgr,
+			wm_with_clock_ranges);
+}
+
+int phm_display_clock_voltage_request(struct pp_hwmgr *hwmgr,
+		struct pp_display_clock_request *clock)
+{
+	PHM_FUNC_CHECK(hwmgr);
+
+	if (!hwmgr->hwmgr_func->display_clock_voltage_request)
+		return -EINVAL;
+
+	return hwmgr->hwmgr_func->display_clock_voltage_request(hwmgr, clock);
+}
+
 int phm_get_max_high_clocks(struct pp_hwmgr *hwmgr, struct amd_pp_simple_clock_info *clocks)
 {
 	PHM_FUNC_CHECK(hwmgr);
diff --git a/drivers/gpu/drm/amd/powerplay/inc/amd_powerplay.h b/drivers/gpu/drm/amd/powerplay/inc/amd_powerplay.h
index c0bf3af..4e39f35 100644
--- a/drivers/gpu/drm/amd/powerplay/inc/amd_powerplay.h
+++ b/drivers/gpu/drm/amd/powerplay/inc/amd_powerplay.h
@@ -28,6 +28,7 @@
 #include <linux/errno.h>
 #include "amd_shared.h"
 #include "cgs_common.h"
+#include "dm_pp_interface.h"
 
 extern const struct amd_ip_funcs pp_ip_funcs;
 extern const struct amd_powerplay_funcs pp_dpm_funcs;
@@ -226,6 +227,8 @@ struct amd_pp_display_configuration {
 	 * higher latency not allowed.
 	 */
 	uint32_t dce_tolerable_mclk_in_active_latency;
+	uint32_t min_dcef_set_clk;
+	uint32_t min_dcef_deep_sleep_set_clk;
 };
 
 struct amd_pp_simple_clock_info {
@@ -266,7 +269,11 @@ struct amd_pp_clock_info {
 enum amd_pp_clock_type {
 	amd_pp_disp_clock = 1,
 	amd_pp_sys_clock,
-	amd_pp_mem_clock
+	amd_pp_mem_clock,
+	amd_pp_dcef_clock,
+	amd_pp_soc_clock,
+	amd_pp_pixel_clock,
+	amd_pp_phy_clock
 };
 
 #define MAX_NUM_CLOCKS 16
@@ -303,6 +310,11 @@ struct pp_gpu_power {
 	uint32_t average_gpu_power;
 };
 
+struct pp_display_clock_request {
+	enum amd_pp_clock_type clock_type;
+	uint32_t clock_freq_in_khz;
+};
+
 #define PP_GROUP_MASK        0xF0000000
 #define PP_GROUP_SHIFT       28
 
@@ -405,6 +417,20 @@ int amd_powerplay_get_clock_by_type(void *handle,
 		enum amd_pp_clock_type type,
 		struct amd_pp_clocks *clocks);
 
+int amd_powerplay_get_clock_by_type_with_latency(void *handle,
+		enum amd_pp_clock_type type,
+		struct pp_clock_levels_with_latency *clocks);
+
+int amd_powerplay_get_clock_by_type_with_voltage(void *handle,
+		enum amd_pp_clock_type type,
+		struct pp_clock_levels_with_voltage *clocks);
+
+int amd_powerplay_set_watermarks_for_clocks_ranges(void *handle,
+		struct pp_wm_sets_with_clock_ranges_soc15 *wm_with_clock_ranges);
+
+int amd_powerplay_display_clock_voltage_request(void *handle,
+		struct pp_display_clock_request *clock);
+
 int amd_powerplay_get_display_mode_validation_clocks(void *handle,
 		struct amd_pp_simple_clock_info *output);
 
diff --git a/drivers/gpu/drm/amd/powerplay/inc/hardwaremanager.h b/drivers/gpu/drm/amd/powerplay/inc/hardwaremanager.h
index c5279c2..ee58e4c 100644
--- a/drivers/gpu/drm/amd/powerplay/inc/hardwaremanager.h
+++ b/drivers/gpu/drm/amd/powerplay/inc/hardwaremanager.h
@@ -419,6 +419,17 @@ extern int phm_get_current_shallow_sleep_clocks(struct pp_hwmgr *hwmgr, const st
 
 extern int phm_get_clock_by_type(struct pp_hwmgr *hwmgr, enum amd_pp_clock_type type, struct amd_pp_clocks *clocks);
 
+extern int phm_get_clock_by_type_with_latency(struct pp_hwmgr *hwmgr,
+		enum amd_pp_clock_type type,
+		struct pp_clock_levels_with_latency *clocks);
+extern int phm_get_clock_by_type_with_voltage(struct pp_hwmgr *hwmgr,
+		enum amd_pp_clock_type type,
+		struct pp_clock_levels_with_voltage *clocks);
+extern int phm_set_watermarks_for_clocks_ranges(struct pp_hwmgr *hwmgr,
+		struct pp_wm_sets_with_clock_ranges_soc15 *wm_with_clock_ranges);
+extern int phm_display_clock_voltage_request(struct pp_hwmgr *hwmgr,
+		struct pp_display_clock_request *clock);
+
 extern int phm_get_max_high_clocks(struct pp_hwmgr *hwmgr, struct amd_pp_simple_clock_info *clocks);
 
 #endif /* _HARDWARE_MANAGER_H_ */
diff --git a/drivers/gpu/drm/amd/powerplay/inc/hwmgr.h b/drivers/gpu/drm/amd/powerplay/inc/hwmgr.h
index 02185d4..7de9bea 100644
--- a/drivers/gpu/drm/amd/powerplay/inc/hwmgr.h
+++ b/drivers/gpu/drm/amd/powerplay/inc/hwmgr.h
@@ -347,6 +347,16 @@ struct pp_hwmgr_func {
 	int (*get_current_shallow_sleep_clocks)(struct pp_hwmgr *hwmgr,
 				const struct pp_hw_power_state *state, struct pp_clock_info *clock_info);
 	int (*get_clock_by_type)(struct pp_hwmgr *hwmgr, enum amd_pp_clock_type type, struct amd_pp_clocks *clocks);
+	int (*get_clock_by_type_with_latency)(struct pp_hwmgr *hwmgr,
+			enum amd_pp_clock_type type,
+			struct pp_clock_levels_with_latency *clocks);
+	int (*get_clock_by_type_with_voltage)(struct pp_hwmgr *hwmgr,
+			enum amd_pp_clock_type type,
+			struct pp_clock_levels_with_voltage *clocks);
+	int (*set_watermarks_for_clocks_ranges)(struct pp_hwmgr *hwmgr,
+			struct pp_wm_sets_with_clock_ranges_soc15 *wm_with_clock_ranges);
+	int (*display_clock_voltage_request)(struct pp_hwmgr *hwmgr,
+			struct pp_display_clock_request *clock);
 	int (*get_max_high_clocks)(struct pp_hwmgr *hwmgr, struct amd_pp_simple_clock_info *clocks);
 	int (*power_off_asic)(struct pp_hwmgr *hwmgr);
 	int (*force_clock_level)(struct pp_hwmgr *hwmgr, enum pp_clock_type type, uint32_t mask);
-- 
2.5.5


* [PATCH 062/100] drm/amd/powerplay: add Vega10 powerplay support
       [not found] ` <1490041835-11255-1-git-send-email-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (45 preceding siblings ...)
  2017-03-20 20:29   ` [PATCH 061/100] drm/amd/powerplay: add some display/powerplay interfaces Alex Deucher
@ 2017-03-20 20:29   ` Alex Deucher
  2017-03-20 20:29   ` [PATCH 063/100] drm/amd/display: Add DCE12 bios parser support Alex Deucher
                     ` (38 subsequent siblings)
  85 siblings, 0 replies; 101+ messages in thread
From: Alex Deucher @ 2017-03-20 20:29 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Eric Huang, Alex Deucher

From: Eric Huang <JinHuiEric.Huang@amd.com>

Signed-off-by: Eric Huang <JinHuiEric.Huang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_powerplay.c      |    1 +
 drivers/gpu/drm/amd/powerplay/hwmgr/Makefile       |    4 +-
 drivers/gpu/drm/amd/powerplay/hwmgr/hwmgr.c        |    9 +
 drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c | 4378 ++++++++++++++++++++
 drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.h |  434 ++
 drivers/gpu/drm/amd/powerplay/hwmgr/vega10_inc.h   |   44 +
 .../gpu/drm/amd/powerplay/hwmgr/vega10_powertune.c |  137 +
 .../gpu/drm/amd/powerplay/hwmgr/vega10_powertune.h |   65 +
 .../gpu/drm/amd/powerplay/hwmgr/vega10_pptable.h   |  331 ++
 .../amd/powerplay/hwmgr/vega10_processpptables.c   | 1056 +++++
 .../amd/powerplay/hwmgr/vega10_processpptables.h   |   34 +
 .../gpu/drm/amd/powerplay/hwmgr/vega10_thermal.c   |  761 ++++
 .../gpu/drm/amd/powerplay/hwmgr/vega10_thermal.h   |   83 +
 drivers/gpu/drm/amd/powerplay/inc/hwmgr.h          |    3 +
 drivers/gpu/drm/amd/powerplay/inc/pp_soc15.h       |   48 +
 drivers/gpu/drm/amd/powerplay/inc/smumgr.h         |    3 +
 drivers/gpu/drm/amd/powerplay/smumgr/Makefile      |    2 +-
 drivers/gpu/drm/amd/powerplay/smumgr/smumgr.c      |    9 +
 .../gpu/drm/amd/powerplay/smumgr/vega10_smumgr.c   |  564 +++
 .../gpu/drm/amd/powerplay/smumgr/vega10_smumgr.h   |   70 +
 20 files changed, 8034 insertions(+), 2 deletions(-)
 create mode 100644 drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c
 create mode 100644 drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.h
 create mode 100644 drivers/gpu/drm/amd/powerplay/hwmgr/vega10_inc.h
 create mode 100644 drivers/gpu/drm/amd/powerplay/hwmgr/vega10_powertune.c
 create mode 100644 drivers/gpu/drm/amd/powerplay/hwmgr/vega10_powertune.h
 create mode 100644 drivers/gpu/drm/amd/powerplay/hwmgr/vega10_pptable.h
 create mode 100644 drivers/gpu/drm/amd/powerplay/hwmgr/vega10_processpptables.c
 create mode 100644 drivers/gpu/drm/amd/powerplay/hwmgr/vega10_processpptables.h
 create mode 100644 drivers/gpu/drm/amd/powerplay/hwmgr/vega10_thermal.c
 create mode 100644 drivers/gpu/drm/amd/powerplay/hwmgr/vega10_thermal.h
 create mode 100644 drivers/gpu/drm/amd/powerplay/inc/pp_soc15.h
 create mode 100644 drivers/gpu/drm/amd/powerplay/smumgr/vega10_smumgr.c
 create mode 100644 drivers/gpu/drm/amd/powerplay/smumgr/vega10_smumgr.h

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_powerplay.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_powerplay.c
index 96a5113..f5ae871 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_powerplay.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_powerplay.c
@@ -71,6 +71,7 @@ static int amdgpu_pp_early_init(void *handle)
 	case CHIP_TOPAZ:
 	case CHIP_CARRIZO:
 	case CHIP_STONEY:
+	case CHIP_VEGA10:
 		adev->pp_enabled = true;
 		if (amdgpu_create_pp_handle(adev))
 			return -EINVAL;
diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/Makefile b/drivers/gpu/drm/amd/powerplay/hwmgr/Makefile
index ccb51c2..27db2b7 100644
--- a/drivers/gpu/drm/amd/powerplay/hwmgr/Makefile
+++ b/drivers/gpu/drm/amd/powerplay/hwmgr/Makefile
@@ -7,7 +7,9 @@ HARDWARE_MGR = hwmgr.o processpptables.o functiontables.o \
 		cz_clockpowergating.o pppcielanes.o\
 		process_pptables_v1_0.o ppatomctrl.o ppatomfwctrl.o \
 		smu7_hwmgr.o smu7_powertune.o smu7_thermal.o \
-		smu7_clockpowergating.o
+		smu7_clockpowergating.o \
+		vega10_processpptables.o vega10_hwmgr.o vega10_powertune.o \
+		vega10_thermal.o
 
 
 AMD_PP_HWMGR = $(addprefix $(AMD_PP_PATH)/hwmgr/,$(HARDWARE_MGR))
diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/hwmgr.c b/drivers/gpu/drm/amd/powerplay/hwmgr/hwmgr.c
index 2ea9c0e..ff4ae3d 100644
--- a/drivers/gpu/drm/amd/powerplay/hwmgr/hwmgr.c
+++ b/drivers/gpu/drm/amd/powerplay/hwmgr/hwmgr.c
@@ -106,6 +106,15 @@ int hwmgr_early_init(struct pp_instance *handle)
 		}
 		smu7_init_function_pointers(hwmgr);
 		break;
+	case AMDGPU_FAMILY_AI:
+		switch (hwmgr->chip_id) {
+		case CHIP_VEGA10:
+			vega10_hwmgr_init(hwmgr);
+			break;
+		default:
+			return -EINVAL;
+		}
+		break;
 	default:
 		return -EINVAL;
 	}
diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c b/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c
new file mode 100644
index 0000000..91fa0b5
--- /dev/null
+++ b/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c
@@ -0,0 +1,4378 @@
+/*
+ * Copyright 2016 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+#include <linux/module.h>
+#include <linux/slab.h>
+#include <linux/fb.h>
+#include "linux/delay.h"
+
+#include "hwmgr.h"
+#include "amd_powerplay.h"
+#include "vega10_smumgr.h"
+#include "hardwaremanager.h"
+#include "ppatomfwctrl.h"
+#include "atomfirmware.h"
+#include "cgs_common.h"
+#include "vega10_powertune.h"
+#include "smu9.h"
+#include "smu9_driver_if.h"
+#include "vega10_inc.h"
+#include "pp_soc15.h"
+#include "pppcielanes.h"
+#include "vega10_hwmgr.h"
+#include "vega10_processpptables.h"
+#include "vega10_pptable.h"
+#include "vega10_thermal.h"
+#include "pp_debug.h"
+#include "pp_acpi.h"
+#include "amd_pcie_helpers.h"
+#include "cgs_linux.h"
+#include "ppinterrupt.h"
+
+
+#define VOLTAGE_SCALE  4
+#define VOLTAGE_VID_OFFSET_SCALE1   625
+#define VOLTAGE_VID_OFFSET_SCALE2   100
+
+#define HBM_MEMORY_CHANNEL_WIDTH    128
+
+static const uint32_t channel_number[] = {1, 2, 0, 4, 0, 8, 0, 16, 2};
+
+#define MEM_FREQ_LOW_LATENCY        25000
+#define MEM_FREQ_HIGH_LATENCY       80000
+#define MEM_LATENCY_HIGH            245
+#define MEM_LATENCY_LOW             35
+#define MEM_LATENCY_ERR             0xFFFF
+
+#define mmDF_CS_AON0_DramBaseAddress0                                                                  0x0044
+#define mmDF_CS_AON0_DramBaseAddress0_BASE_IDX                                                         0
+
+//DF_CS_AON0_DramBaseAddress0
+#define DF_CS_AON0_DramBaseAddress0__AddrRngVal__SHIFT                                                        0x0
+#define DF_CS_AON0_DramBaseAddress0__LgcyMmioHoleEn__SHIFT                                                    0x1
+#define DF_CS_AON0_DramBaseAddress0__IntLvNumChan__SHIFT                                                      0x4
+#define DF_CS_AON0_DramBaseAddress0__IntLvAddrSel__SHIFT                                                      0x8
+#define DF_CS_AON0_DramBaseAddress0__DramBaseAddr__SHIFT                                                      0xc
+#define DF_CS_AON0_DramBaseAddress0__AddrRngVal_MASK                                                          0x00000001L
+#define DF_CS_AON0_DramBaseAddress0__LgcyMmioHoleEn_MASK                                                      0x00000002L
+#define DF_CS_AON0_DramBaseAddress0__IntLvNumChan_MASK                                                        0x000000F0L
+#define DF_CS_AON0_DramBaseAddress0__IntLvAddrSel_MASK                                                        0x00000700L
+#define DF_CS_AON0_DramBaseAddress0__DramBaseAddr_MASK                                                        0xFFFFF000L
+
+const ULONG PhwVega10_Magic = (ULONG)(PHM_VIslands_Magic);
+
+struct vega10_power_state *cast_phw_vega10_power_state(
+				  struct pp_hw_power_state *hw_ps)
+{
+	PP_ASSERT_WITH_CODE((PhwVega10_Magic == hw_ps->magic),
+				"Invalid Powerstate Type!",
+				 return NULL;);
+
+	return (struct vega10_power_state *)hw_ps;
+}
+
+const struct vega10_power_state *cast_const_phw_vega10_power_state(
+				 const struct pp_hw_power_state *hw_ps)
+{
+	PP_ASSERT_WITH_CODE((PhwVega10_Magic == hw_ps->magic),
+				"Invalid Powerstate Type!",
+				 return NULL;);
+
+	return (const struct vega10_power_state *)hw_ps;
+}
+
+static void vega10_set_default_registry_data(struct pp_hwmgr *hwmgr)
+{
+	struct vega10_hwmgr *data =
+			(struct vega10_hwmgr *)(hwmgr->backend);
+
+	data->registry_data.sclk_dpm_key_disabled =
+			hwmgr->feature_mask & PP_SCLK_DPM_MASK ? false : true;
+	data->registry_data.socclk_dpm_key_disabled =
+			hwmgr->feature_mask & PP_SOCCLK_DPM_MASK ? false : true;
+	data->registry_data.mclk_dpm_key_disabled =
+			hwmgr->feature_mask & PP_MCLK_DPM_MASK ? false : true;
+
+	data->registry_data.dcefclk_dpm_key_disabled =
+			hwmgr->feature_mask & PP_DCEFCLK_DPM_MASK ? false : true;
+
+	if (hwmgr->feature_mask & PP_POWER_CONTAINMENT_MASK) {
+		data->registry_data.power_containment_support = 1;
+		data->registry_data.enable_pkg_pwr_tracking_feature = 1;
+		data->registry_data.enable_tdc_limit_feature = 1;
+	}
+
+	data->registry_data.pcie_dpm_key_disabled = 1;
+	data->registry_data.disable_water_mark = 0;
+
+	data->registry_data.fan_control_support = 1;
+	data->registry_data.thermal_support = 1;
+	data->registry_data.fw_ctf_enabled = 1;
+
+	data->registry_data.avfs_support = 1;
+	data->registry_data.led_dpm_enabled = 1;
+
+	data->registry_data.vr0hot_enabled = 1;
+	data->registry_data.vr1hot_enabled = 1;
+	data->registry_data.regulator_hot_gpio_support = 1;
+
+	data->display_voltage_mode = PPVEGA10_VEGA10DISPLAYVOLTAGEMODE_DFLT;
+	data->dcef_clk_quad_eqn_a = PPREGKEY_VEGA10QUADRATICEQUATION_DFLT;
+	data->dcef_clk_quad_eqn_b = PPREGKEY_VEGA10QUADRATICEQUATION_DFLT;
+	data->dcef_clk_quad_eqn_c = PPREGKEY_VEGA10QUADRATICEQUATION_DFLT;
+	data->disp_clk_quad_eqn_a = PPREGKEY_VEGA10QUADRATICEQUATION_DFLT;
+	data->disp_clk_quad_eqn_b = PPREGKEY_VEGA10QUADRATICEQUATION_DFLT;
+	data->disp_clk_quad_eqn_c = PPREGKEY_VEGA10QUADRATICEQUATION_DFLT;
+	data->pixel_clk_quad_eqn_a = PPREGKEY_VEGA10QUADRATICEQUATION_DFLT;
+	data->pixel_clk_quad_eqn_b = PPREGKEY_VEGA10QUADRATICEQUATION_DFLT;
+	data->pixel_clk_quad_eqn_c = PPREGKEY_VEGA10QUADRATICEQUATION_DFLT;
+	data->phy_clk_quad_eqn_a = PPREGKEY_VEGA10QUADRATICEQUATION_DFLT;
+	data->phy_clk_quad_eqn_b = PPREGKEY_VEGA10QUADRATICEQUATION_DFLT;
+	data->phy_clk_quad_eqn_c = PPREGKEY_VEGA10QUADRATICEQUATION_DFLT;
+
+	data->gfxclk_average_alpha = PPVEGA10_VEGA10GFXCLKAVERAGEALPHA_DFLT;
+	data->socclk_average_alpha = PPVEGA10_VEGA10SOCCLKAVERAGEALPHA_DFLT;
+	data->uclk_average_alpha = PPVEGA10_VEGA10UCLKCLKAVERAGEALPHA_DFLT;
+	data->gfx_activity_average_alpha = PPVEGA10_VEGA10GFXACTIVITYAVERAGEALPHA_DFLT;
+}
+
+static int vega10_set_features_platform_caps(struct pp_hwmgr *hwmgr)
+{
+	struct vega10_hwmgr *data =
+			(struct vega10_hwmgr *)(hwmgr->backend);
+	struct phm_ppt_v2_information *table_info =
+			(struct phm_ppt_v2_information *)hwmgr->pptable;
+	struct cgs_system_info sys_info = {0};
+	int result;
+
+	phm_cap_set(hwmgr->platform_descriptor.platformCaps,
+			PHM_PlatformCaps_SclkDeepSleep);
+
+	phm_cap_set(hwmgr->platform_descriptor.platformCaps,
+			PHM_PlatformCaps_DynamicPatchPowerState);
+
+	if (data->vddci_control == VEGA10_VOLTAGE_CONTROL_NONE)
+		phm_cap_unset(hwmgr->platform_descriptor.platformCaps,
+				PHM_PlatformCaps_ControlVDDCI);
+
+	phm_cap_set(hwmgr->platform_descriptor.platformCaps,
+			PHM_PlatformCaps_TablelessHardwareInterface);
+
+	phm_cap_set(hwmgr->platform_descriptor.platformCaps,
+			PHM_PlatformCaps_EnableSMU7ThermalManagement);
+
+	sys_info.size = sizeof(struct cgs_system_info);
+	sys_info.info_id = CGS_SYSTEM_INFO_PG_FLAGS;
+	result = cgs_query_system_info(hwmgr->device, &sys_info);
+
+	if (!result && (sys_info.value & AMD_PG_SUPPORT_UVD))
+		phm_cap_set(hwmgr->platform_descriptor.platformCaps,
+				PHM_PlatformCaps_UVDPowerGating);
+
+	if (!result && (sys_info.value & AMD_PG_SUPPORT_VCE))
+		phm_cap_set(hwmgr->platform_descriptor.platformCaps,
+				PHM_PlatformCaps_VCEPowerGating);
+
+	phm_cap_set(hwmgr->platform_descriptor.platformCaps,
+			PHM_PlatformCaps_UnTabledHardwareInterface);
+
+	phm_cap_set(hwmgr->platform_descriptor.platformCaps,
+			PHM_PlatformCaps_FanSpeedInTableIsRPM);
+
+	phm_cap_set(hwmgr->platform_descriptor.platformCaps,
+			PHM_PlatformCaps_ODFuzzyFanControlSupport);
+
+	phm_cap_set(hwmgr->platform_descriptor.platformCaps,
+				PHM_PlatformCaps_DynamicPowerManagement);
+
+	phm_cap_set(hwmgr->platform_descriptor.platformCaps,
+			PHM_PlatformCaps_SMC);
+
+	/* power tune caps */
+	/* assume disabled */
+	phm_cap_unset(hwmgr->platform_descriptor.platformCaps,
+			PHM_PlatformCaps_PowerContainment);
+	phm_cap_unset(hwmgr->platform_descriptor.platformCaps,
+			PHM_PlatformCaps_SQRamping);
+	phm_cap_unset(hwmgr->platform_descriptor.platformCaps,
+			PHM_PlatformCaps_DBRamping);
+	phm_cap_unset(hwmgr->platform_descriptor.platformCaps,
+			PHM_PlatformCaps_TDRamping);
+	phm_cap_unset(hwmgr->platform_descriptor.platformCaps,
+			PHM_PlatformCaps_TCPRamping);
+
+	if (data->registry_data.power_containment_support)
+		phm_cap_set(hwmgr->platform_descriptor.platformCaps,
+				PHM_PlatformCaps_PowerContainment);
+	phm_cap_set(hwmgr->platform_descriptor.platformCaps,
+			PHM_PlatformCaps_CAC);
+
+	if (table_info->tdp_table->usClockStretchAmount &&
+			data->registry_data.clock_stretcher_support)
+		phm_cap_set(hwmgr->platform_descriptor.platformCaps,
+				PHM_PlatformCaps_ClockStretcher);
+
+	phm_cap_set(hwmgr->platform_descriptor.platformCaps,
+			PHM_PlatformCaps_RegulatorHot);
+	phm_cap_set(hwmgr->platform_descriptor.platformCaps,
+			PHM_PlatformCaps_AutomaticDCTransition);
+
+	phm_cap_set(hwmgr->platform_descriptor.platformCaps,
+			PHM_PlatformCaps_UVDDPM);
+	phm_cap_set(hwmgr->platform_descriptor.platformCaps,
+			PHM_PlatformCaps_VCEDPM);
+
+	return 0;
+}
+
+static void vega10_init_dpm_defaults(struct pp_hwmgr *hwmgr)
+{
+	struct vega10_hwmgr *data = (struct vega10_hwmgr *)(hwmgr->backend);
+	int i;
+
+	vega10_initialize_power_tune_defaults(hwmgr);
+
+	for (i = 0; i < GNLD_FEATURES_MAX; i++) {
+		data->smu_features[i].smu_feature_id = 0xffff;
+		data->smu_features[i].smu_feature_bitmap = 1 << i;
+		data->smu_features[i].enabled = false;
+		data->smu_features[i].supported = false;
+	}
+
+	data->smu_features[GNLD_DPM_PREFETCHER].smu_feature_id =
+			FEATURE_DPM_PREFETCHER_BIT;
+	data->smu_features[GNLD_DPM_GFXCLK].smu_feature_id =
+			FEATURE_DPM_GFXCLK_BIT;
+	data->smu_features[GNLD_DPM_UCLK].smu_feature_id =
+			FEATURE_DPM_UCLK_BIT;
+	data->smu_features[GNLD_DPM_SOCCLK].smu_feature_id =
+			FEATURE_DPM_SOCCLK_BIT;
+	data->smu_features[GNLD_DPM_UVD].smu_feature_id =
+			FEATURE_DPM_UVD_BIT;
+	data->smu_features[GNLD_DPM_VCE].smu_feature_id =
+			FEATURE_DPM_VCE_BIT;
+	data->smu_features[GNLD_DPM_MP0CLK].smu_feature_id =
+			FEATURE_DPM_MP0CLK_BIT;
+	data->smu_features[GNLD_DPM_LINK].smu_feature_id =
+			FEATURE_DPM_LINK_BIT;
+	data->smu_features[GNLD_DPM_DCEFCLK].smu_feature_id =
+			FEATURE_DPM_DCEFCLK_BIT;
+	data->smu_features[GNLD_ULV].smu_feature_id =
+			FEATURE_ULV_BIT;
+	data->smu_features[GNLD_AVFS].smu_feature_id =
+			FEATURE_AVFS_BIT;
+	data->smu_features[GNLD_DS_GFXCLK].smu_feature_id =
+			FEATURE_DS_GFXCLK_BIT;
+	data->smu_features[GNLD_DS_SOCCLK].smu_feature_id =
+			FEATURE_DS_SOCCLK_BIT;
+	data->smu_features[GNLD_DS_LCLK].smu_feature_id =
+			FEATURE_DS_LCLK_BIT;
+	data->smu_features[GNLD_PPT].smu_feature_id =
+			FEATURE_PPT_BIT;
+	data->smu_features[GNLD_TDC].smu_feature_id =
+			FEATURE_TDC_BIT;
+	data->smu_features[GNLD_THERMAL].smu_feature_id =
+			FEATURE_THERMAL_BIT;
+	data->smu_features[GNLD_GFX_PER_CU_CG].smu_feature_id =
+			FEATURE_GFX_PER_CU_CG_BIT;
+	data->smu_features[GNLD_RM].smu_feature_id =
+			FEATURE_RM_BIT;
+	data->smu_features[GNLD_DS_DCEFCLK].smu_feature_id =
+			FEATURE_DS_DCEFCLK_BIT;
+	data->smu_features[GNLD_ACDC].smu_feature_id =
+			FEATURE_ACDC_BIT;
+	data->smu_features[GNLD_VR0HOT].smu_feature_id =
+			FEATURE_VR0HOT_BIT;
+	data->smu_features[GNLD_VR1HOT].smu_feature_id =
+			FEATURE_VR1HOT_BIT;
+	data->smu_features[GNLD_FW_CTF].smu_feature_id =
+			FEATURE_FW_CTF_BIT;
+	data->smu_features[GNLD_LED_DISPLAY].smu_feature_id =
+			FEATURE_LED_DISPLAY_BIT;
+	data->smu_features[GNLD_FAN_CONTROL].smu_feature_id =
+			FEATURE_FAN_CONTROL_BIT;
+	data->smu_features[GNLD_VOLTAGE_CONTROLLER].smu_feature_id =
+			FEATURE_VOLTAGE_CONTROLLER_BIT;
+
+	if (!data->registry_data.prefetcher_dpm_key_disabled)
+		data->smu_features[GNLD_DPM_PREFETCHER].supported = true;
+
+	if (!data->registry_data.sclk_dpm_key_disabled)
+		data->smu_features[GNLD_DPM_GFXCLK].supported = true;
+
+	if (!data->registry_data.mclk_dpm_key_disabled)
+		data->smu_features[GNLD_DPM_UCLK].supported = true;
+
+	if (!data->registry_data.socclk_dpm_key_disabled)
+		data->smu_features[GNLD_DPM_SOCCLK].supported = true;
+
+	if (phm_cap_enabled(hwmgr->platform_descriptor.platformCaps,
+			PHM_PlatformCaps_UVDDPM))
+		data->smu_features[GNLD_DPM_UVD].supported = true;
+
+	if (phm_cap_enabled(hwmgr->platform_descriptor.platformCaps,
+			PHM_PlatformCaps_VCEDPM))
+		data->smu_features[GNLD_DPM_VCE].supported = true;
+
+	if (!data->registry_data.pcie_dpm_key_disabled)
+		data->smu_features[GNLD_DPM_LINK].supported = true;
+
+	if (!data->registry_data.dcefclk_dpm_key_disabled)
+		data->smu_features[GNLD_DPM_DCEFCLK].supported = true;
+
+	if (phm_cap_enabled(hwmgr->platform_descriptor.platformCaps,
+			PHM_PlatformCaps_SclkDeepSleep) &&
+			data->registry_data.sclk_deep_sleep_support) {
+		data->smu_features[GNLD_DS_GFXCLK].supported = true;
+		data->smu_features[GNLD_DS_SOCCLK].supported = true;
+		data->smu_features[GNLD_DS_LCLK].supported = true;
+	}
+
+	if (data->registry_data.enable_pkg_pwr_tracking_feature)
+		data->smu_features[GNLD_PPT].supported = true;
+
+	if (data->registry_data.enable_tdc_limit_feature)
+		data->smu_features[GNLD_TDC].supported = true;
+
+	if (data->registry_data.thermal_support)
+		data->smu_features[GNLD_THERMAL].supported = true;
+
+	if (data->registry_data.fan_control_support)
+		data->smu_features[GNLD_FAN_CONTROL].supported = true;
+
+	if (data->registry_data.fw_ctf_enabled)
+		data->smu_features[GNLD_FW_CTF].supported = true;
+
+	if (data->registry_data.avfs_support)
+		data->smu_features[GNLD_AVFS].supported = true;
+
+	if (data->registry_data.led_dpm_enabled)
+		data->smu_features[GNLD_LED_DISPLAY].supported = true;
+
+	if (data->registry_data.vr1hot_enabled)
+		data->smu_features[GNLD_VR1HOT].supported = true;
+
+	if (data->registry_data.vr0hot_enabled)
+		data->smu_features[GNLD_VR0HOT].supported = true;
+
+}
+
+#ifdef PPLIB_VEGA10_EVV_SUPPORT
+static int vega10_get_socclk_for_voltage_evv(struct pp_hwmgr *hwmgr,
+	phm_ppt_v1_voltage_lookup_table *lookup_table,
+	uint16_t virtual_voltage_id, int32_t *socclk)
+{
+	uint8_t entry_id;
+	uint8_t voltage_id;
+	struct phm_ppt_v2_information *table_info =
+			(struct phm_ppt_v2_information *)(hwmgr->pptable);
+
+	PP_ASSERT_WITH_CODE(lookup_table->count != 0,
+			"Lookup table is empty",
+			return -EINVAL);
+
+	/* search for leakage voltage ID 0xff01 ~ 0xff08 and socclk */
+	for (entry_id = 0; entry_id < table_info->vdd_dep_on_socclk->count; entry_id++) {
+		voltage_id = table_info->vdd_dep_on_socclk->entries[entry_id].vddInd;
+		if (lookup_table->entries[voltage_id].us_vdd == virtual_voltage_id)
+			break;
+	}
+
+	PP_ASSERT_WITH_CODE(entry_id < table_info->vdd_dep_on_socclk->count,
+			"Can't find requested voltage id in vdd_dep_on_socclk table!",
+			return -EINVAL);
+
+	*socclk = table_info->vdd_dep_on_socclk->entries[entry_id].clk;
+
+	return 0;
+}
+
+#define ATOM_VIRTUAL_VOLTAGE_ID0             0xff01
+/**
+ * Get Leakage VDDC based on leakage ID.
+ *
+ * @param    hwmgr  the address of the powerplay hardware manager.
+ * @return   always 0.
+ */
+static int vega10_get_evv_voltages(struct pp_hwmgr *hwmgr)
+{
+	struct vega10_hwmgr *data = (struct vega10_hwmgr *)(hwmgr->backend);
+	uint16_t vv_id;
+	uint32_t vddc = 0;
+	uint16_t i, j;
+	uint32_t sclk = 0;
+	struct phm_ppt_v2_information *table_info =
+			(struct phm_ppt_v2_information *)hwmgr->pptable;
+	struct phm_ppt_v1_clock_voltage_dependency_table *socclk_table =
+			table_info->vdd_dep_on_socclk;
+	int result;
+
+	for (i = 0; i < VEGA10_MAX_LEAKAGE_COUNT; i++) {
+		vv_id = ATOM_VIRTUAL_VOLTAGE_ID0 + i;
+
+		if (!vega10_get_socclk_for_voltage_evv(hwmgr,
+				table_info->vddc_lookup_table, vv_id, &sclk)) {
+			if (phm_cap_enabled(hwmgr->platform_descriptor.platformCaps,
+					PHM_PlatformCaps_ClockStretcher)) {
+				for (j = 1; j < socclk_table->count; j++) {
+					if (socclk_table->entries[j].clk == sclk &&
+							socclk_table->entries[j].cks_enable == 0) {
+						sclk += 5000;
+						break;
+					}
+				}
+			}
+
+			PP_ASSERT_WITH_CODE(!atomctrl_get_voltage_evv_on_sclk_ai(hwmgr,
+					VOLTAGE_TYPE_VDDC, sclk, vv_id, &vddc),
+					"Error retrieving EVV voltage value!",
+					continue);
+
+
+			/* need to make sure vddc is less than 2V or else it could burn the ASIC */
+			PP_ASSERT_WITH_CODE((vddc < 2000 && vddc != 0),
+					"Invalid VDDC value", result = -EINVAL;);
+
+			/* the voltage should not be zero nor equal to leakage ID */
+			if (vddc != 0 && vddc != vv_id) {
+				data->vddc_leakage.actual_voltage[data->vddc_leakage.count] = (uint16_t)(vddc/100);
+				data->vddc_leakage.leakage_id[data->vddc_leakage.count] = vv_id;
+				data->vddc_leakage.count++;
+			}
+		}
+	}
+
+	return 0;
+}
+
+/**
+ * Change virtual leakage voltage to actual value.
+ *
+ * @param     hwmgr          the address of the powerplay hardware manager.
+ * @param     voltage        pointer to the voltage to patch
+ * @param     leakage_table  pointer to the leakage table
+ */
+static void vega10_patch_with_vdd_leakage(struct pp_hwmgr *hwmgr,
+		uint16_t *voltage, struct vega10_leakage_voltage *leakage_table)
+{
+	uint32_t index;
+
+	/* search for leakage voltage ID 0xff01 ~ 0xff08 */
+	for (index = 0; index < leakage_table->count; index++) {
+		/* if this voltage matches a leakage voltage ID */
+		/* patch with actual leakage voltage */
+		if (leakage_table->leakage_id[index] == *voltage) {
+			*voltage = leakage_table->actual_voltage[index];
+			break;
+		}
+	}
+
+	if (*voltage > ATOM_VIRTUAL_VOLTAGE_ID0)
+		pr_info("Voltage value looks like a Leakage ID but it's not patched\n");
+}
+
+/**
+ * Patch voltage lookup table by EVV leakages.
+ *
+ * @param     hwmgr          the address of the powerplay hardware manager.
+ * @param     lookup_table   pointer to voltage lookup table
+ * @param     leakage_table  pointer to leakage table
+ * @return    always 0
+ */
+static int vega10_patch_lookup_table_with_leakage(struct pp_hwmgr *hwmgr,
+		phm_ppt_v1_voltage_lookup_table *lookup_table,
+		struct vega10_leakage_voltage *leakage_table)
+{
+	uint32_t i;
+
+	for (i = 0; i < lookup_table->count; i++)
+		vega10_patch_with_vdd_leakage(hwmgr,
+				&lookup_table->entries[i].us_vdd, leakage_table);
+
+	return 0;
+}
+
+static int vega10_patch_clock_voltage_limits_with_vddc_leakage(
+		struct pp_hwmgr *hwmgr, struct vega10_leakage_voltage *leakage_table,
+		uint16_t *vddc)
+{
+	vega10_patch_with_vdd_leakage(hwmgr, (uint16_t *)vddc, leakage_table);
+
+	return 0;
+}
+#endif
+
+static int vega10_patch_voltage_dependency_tables_with_lookup_table(
+		struct pp_hwmgr *hwmgr)
+{
+	uint8_t entry_id;
+	uint8_t voltage_id;
+	struct phm_ppt_v2_information *table_info =
+			(struct phm_ppt_v2_information *)(hwmgr->pptable);
+	struct phm_ppt_v1_clock_voltage_dependency_table *socclk_table =
+			table_info->vdd_dep_on_socclk;
+	struct phm_ppt_v1_clock_voltage_dependency_table *gfxclk_table =
+			table_info->vdd_dep_on_sclk;
+	struct phm_ppt_v1_clock_voltage_dependency_table *dcefclk_table =
+			table_info->vdd_dep_on_dcefclk;
+	struct phm_ppt_v1_clock_voltage_dependency_table *pixclk_table =
+			table_info->vdd_dep_on_pixclk;
+	struct phm_ppt_v1_clock_voltage_dependency_table *dspclk_table =
+			table_info->vdd_dep_on_dispclk;
+	struct phm_ppt_v1_clock_voltage_dependency_table *phyclk_table =
+			table_info->vdd_dep_on_phyclk;
+	struct phm_ppt_v1_clock_voltage_dependency_table *mclk_table =
+			table_info->vdd_dep_on_mclk;
+	struct phm_ppt_v1_mm_clock_voltage_dependency_table *mm_table =
+			table_info->mm_dep_table;
+
+	for (entry_id = 0; entry_id < socclk_table->count; entry_id++) {
+		voltage_id = socclk_table->entries[entry_id].vddInd;
+		socclk_table->entries[entry_id].vddc =
+				table_info->vddc_lookup_table->entries[voltage_id].us_vdd;
+	}
+
+	for (entry_id = 0; entry_id < gfxclk_table->count; entry_id++) {
+		voltage_id = gfxclk_table->entries[entry_id].vddInd;
+		gfxclk_table->entries[entry_id].vddc =
+				table_info->vddc_lookup_table->entries[voltage_id].us_vdd;
+	}
+
+	for (entry_id = 0; entry_id < dcefclk_table->count; entry_id++) {
+		voltage_id = dcefclk_table->entries[entry_id].vddInd;
+		dcefclk_table->entries[entry_id].vddc =
+				table_info->vddc_lookup_table->entries[voltage_id].us_vdd;
+	}
+
+	for (entry_id = 0; entry_id < pixclk_table->count; entry_id++) {
+		voltage_id = pixclk_table->entries[entry_id].vddInd;
+		pixclk_table->entries[entry_id].vddc =
+				table_info->vddc_lookup_table->entries[voltage_id].us_vdd;
+	}
+
+	for (entry_id = 0; entry_id < dspclk_table->count; entry_id++) {
+		voltage_id = dspclk_table->entries[entry_id].vddInd;
+		dspclk_table->entries[entry_id].vddc =
+				table_info->vddc_lookup_table->entries[voltage_id].us_vdd;
+	}
+
+	for (entry_id = 0; entry_id < phyclk_table->count; entry_id++) {
+		voltage_id = phyclk_table->entries[entry_id].vddInd;
+		phyclk_table->entries[entry_id].vddc =
+				table_info->vddc_lookup_table->entries[voltage_id].us_vdd;
+	}
+
+	for (entry_id = 0; entry_id < mclk_table->count; ++entry_id) {
+		voltage_id = mclk_table->entries[entry_id].vddInd;
+		mclk_table->entries[entry_id].vddc =
+				table_info->vddc_lookup_table->entries[voltage_id].us_vdd;
+		voltage_id = mclk_table->entries[entry_id].vddciInd;
+		mclk_table->entries[entry_id].vddci =
+				table_info->vddci_lookup_table->entries[voltage_id].us_vdd;
+		voltage_id = mclk_table->entries[entry_id].mvddInd;
+		mclk_table->entries[entry_id].mvdd =
+				table_info->vddmem_lookup_table->entries[voltage_id].us_vdd;
+	}
+
+	for (entry_id = 0; entry_id < mm_table->count; ++entry_id) {
+		voltage_id = mm_table->entries[entry_id].vddcInd;
+		mm_table->entries[entry_id].vddc =
+			table_info->vddc_lookup_table->entries[voltage_id].us_vdd;
+	}
+
+	return 0;
+
+}
+
+static int vega10_sort_lookup_table(struct pp_hwmgr *hwmgr,
+		struct phm_ppt_v1_voltage_lookup_table *lookup_table)
+{
+	uint32_t table_size, i, j;
+	struct phm_ppt_v1_voltage_lookup_record tmp_voltage_lookup_record;
+
+	PP_ASSERT_WITH_CODE(lookup_table && lookup_table->count,
+		"Lookup table is empty", return -EINVAL);
+
+	table_size = lookup_table->count;
+
+	/* Sorting voltages */
+	for (i = 0; i < table_size - 1; i++) {
+		for (j = i + 1; j > 0; j--) {
+			if (lookup_table->entries[j].us_vdd <
+					lookup_table->entries[j - 1].us_vdd) {
+				tmp_voltage_lookup_record = lookup_table->entries[j - 1];
+				lookup_table->entries[j - 1] = lookup_table->entries[j];
+				lookup_table->entries[j] = tmp_voltage_lookup_record;
+			}
+		}
+	}
+
+	return 0;
+}
+
+static int vega10_complete_dependency_tables(struct pp_hwmgr *hwmgr)
+{
+	int result = 0;
+	int tmp_result;
+	struct phm_ppt_v2_information *table_info =
+			(struct phm_ppt_v2_information *)(hwmgr->pptable);
+#ifdef PPLIB_VEGA10_EVV_SUPPORT
+	struct vega10_hwmgr *data = (struct vega10_hwmgr *)(hwmgr->backend);
+
+	tmp_result = vega10_patch_lookup_table_with_leakage(hwmgr,
+			table_info->vddc_lookup_table, &(data->vddc_leakage));
+	if (tmp_result)
+		result = tmp_result;
+
+	tmp_result = vega10_patch_clock_voltage_limits_with_vddc_leakage(hwmgr,
+			&(data->vddc_leakage), &table_info->max_clock_voltage_on_dc.vddc);
+	if (tmp_result)
+		result = tmp_result;
+#endif
+
+	tmp_result = vega10_patch_voltage_dependency_tables_with_lookup_table(hwmgr);
+	if (tmp_result)
+		result = tmp_result;
+
+	tmp_result = vega10_sort_lookup_table(hwmgr, table_info->vddc_lookup_table);
+	if (tmp_result)
+		result = tmp_result;
+
+	return result;
+}
+
+static int vega10_set_private_data_based_on_pptable(struct pp_hwmgr *hwmgr)
+{
+	struct phm_ppt_v2_information *table_info =
+			(struct phm_ppt_v2_information *)(hwmgr->pptable);
+	struct phm_ppt_v1_clock_voltage_dependency_table *allowed_sclk_vdd_table =
+			table_info->vdd_dep_on_socclk;
+	struct phm_ppt_v1_clock_voltage_dependency_table *allowed_mclk_vdd_table =
+			table_info->vdd_dep_on_mclk;
+
+	PP_ASSERT_WITH_CODE(allowed_sclk_vdd_table,
+		"VDD dependency on SCLK table is missing. This table is mandatory",
+		return -EINVAL);
+	PP_ASSERT_WITH_CODE(allowed_sclk_vdd_table->count >= 1,
+		"VDD dependency on SCLK table is empty. This table is mandatory",
+		return -EINVAL);
+
+	PP_ASSERT_WITH_CODE(allowed_mclk_vdd_table,
+		"VDD dependency on MCLK table is missing. This table is mandatory",
+		return -EINVAL);
+	PP_ASSERT_WITH_CODE(allowed_mclk_vdd_table->count >= 1,
+		"VDD dependency on MCLK table is empty. This table is mandatory",
+		return -EINVAL);
+
+	table_info->max_clock_voltage_on_ac.sclk =
+		allowed_sclk_vdd_table->entries[allowed_sclk_vdd_table->count - 1].clk;
+	table_info->max_clock_voltage_on_ac.mclk =
+		allowed_mclk_vdd_table->entries[allowed_mclk_vdd_table->count - 1].clk;
+	table_info->max_clock_voltage_on_ac.vddc =
+		allowed_sclk_vdd_table->entries[allowed_sclk_vdd_table->count - 1].vddc;
+	table_info->max_clock_voltage_on_ac.vddci =
+		allowed_mclk_vdd_table->entries[allowed_mclk_vdd_table->count - 1].vddci;
+
+	hwmgr->dyn_state.max_clock_voltage_on_ac.sclk =
+		table_info->max_clock_voltage_on_ac.sclk;
+	hwmgr->dyn_state.max_clock_voltage_on_ac.mclk =
+		table_info->max_clock_voltage_on_ac.mclk;
+	hwmgr->dyn_state.max_clock_voltage_on_ac.vddc =
+		table_info->max_clock_voltage_on_ac.vddc;
+	hwmgr->dyn_state.max_clock_voltage_on_ac.vddci =
+		table_info->max_clock_voltage_on_ac.vddci;
+
+	return 0;
+}
+
+static int vega10_hwmgr_backend_fini(struct pp_hwmgr *hwmgr)
+{
+	kfree(hwmgr->dyn_state.vddc_dep_on_dal_pwrl);
+	hwmgr->dyn_state.vddc_dep_on_dal_pwrl = NULL;
+
+	kfree(hwmgr->backend);
+	hwmgr->backend = NULL;
+
+	return 0;
+}
+
+static int vega10_hwmgr_backend_init(struct pp_hwmgr *hwmgr)
+{
+	int result = 0;
+	struct vega10_hwmgr *data;
+	uint32_t config_telemetry = 0;
+	struct pp_atomfwctrl_voltage_table vol_table;
+	struct cgs_system_info sys_info = {0};
+
+	data = kzalloc(sizeof(struct vega10_hwmgr), GFP_KERNEL);
+	if (data == NULL)
+		return -ENOMEM;
+
+	hwmgr->backend = data;
+
+	vega10_set_default_registry_data(hwmgr);
+
+	data->disable_dpm_mask = 0xff;
+	data->workload_mask = 0xff;
+
+	/* need to set voltage control types before EVV patching */
+	data->vddc_control = VEGA10_VOLTAGE_CONTROL_NONE;
+	data->mvdd_control = VEGA10_VOLTAGE_CONTROL_NONE;
+	data->vddci_control = VEGA10_VOLTAGE_CONTROL_NONE;
+
+	/* VDDCR_SOC */
+	if (pp_atomfwctrl_is_voltage_controlled_by_gpio_v4(hwmgr,
+			VOLTAGE_TYPE_VDDC, VOLTAGE_OBJ_SVID2)) {
+		if (!pp_atomfwctrl_get_voltage_table_v4(hwmgr,
+				VOLTAGE_TYPE_VDDC, VOLTAGE_OBJ_SVID2,
+				&vol_table)) {
+			config_telemetry = ((vol_table.telemetry_slope << 8) & 0xff00) |
+					(vol_table.telemetry_offset & 0xff);
+			data->vddc_control = VEGA10_VOLTAGE_CONTROL_BY_SVID2;
+		}
+	} else {
+		kfree(hwmgr->backend);
+		hwmgr->backend = NULL;
+		PP_ASSERT_WITH_CODE(false,
+				"VDDCR_SOC is not SVID2!",
+				return -EINVAL);
+	}
+
+	/* MVDDC */
+	if (pp_atomfwctrl_is_voltage_controlled_by_gpio_v4(hwmgr,
+			VOLTAGE_TYPE_MVDDC, VOLTAGE_OBJ_SVID2)) {
+		if (!pp_atomfwctrl_get_voltage_table_v4(hwmgr,
+				VOLTAGE_TYPE_MVDDC, VOLTAGE_OBJ_SVID2,
+				&vol_table)) {
+			config_telemetry |=
+					((vol_table.telemetry_slope << 24) & 0xff000000) |
+					((vol_table.telemetry_offset << 16) & 0xff0000);
+			data->mvdd_control = VEGA10_VOLTAGE_CONTROL_BY_SVID2;
+		}
+	}
+
+	/* VDDCI_MEM */
+	if (phm_cap_enabled(hwmgr->platform_descriptor.platformCaps,
+			PHM_PlatformCaps_ControlVDDCI)) {
+		if (pp_atomfwctrl_is_voltage_controlled_by_gpio_v4(hwmgr,
+				VOLTAGE_TYPE_VDDCI, VOLTAGE_OBJ_GPIO_LUT))
+			data->vddci_control = VEGA10_VOLTAGE_CONTROL_BY_GPIO;
+	}
+
+	data->config_telemetry = config_telemetry;
+
+	vega10_set_features_platform_caps(hwmgr);
+
+	vega10_init_dpm_defaults(hwmgr);
+
+#ifdef PPLIB_VEGA10_EVV_SUPPORT
+	/* Get leakage voltage based on leakage ID. */
+	PP_ASSERT_WITH_CODE(!vega10_get_evv_voltages(hwmgr),
+			"Get EVV Voltage Failed.  Abort Driver loading!",
+			return -EINVAL);
+#endif
+
+	/* Patch our voltage dependency table with actual leakage voltage
+	 * We need to perform leakage translation before it's used by other functions
+	 */
+	vega10_complete_dependency_tables(hwmgr);
+
+	/* Parse pptable data read from VBIOS */
+	vega10_set_private_data_based_on_pptable(hwmgr);
+
+	data->is_tlu_enabled = false;
+
+	hwmgr->platform_descriptor.hardwareActivityPerformanceLevels =
+			VEGA10_MAX_HARDWARE_POWERLEVELS;
+	hwmgr->platform_descriptor.hardwarePerformanceLevels = 2;
+	hwmgr->platform_descriptor.minimumClocksReductionPercentage = 50;
+
+	hwmgr->platform_descriptor.vbiosInterruptId = 0x20000400; /* IRQ_SOURCE1_SW_INT */
+	/* The true clock step depends on the frequency, typically 4.5 or 9 MHz. Here we use 5. */
+	hwmgr->platform_descriptor.clockStep.engineClock = 500;
+	hwmgr->platform_descriptor.clockStep.memoryClock = 500;
+
+	sys_info.size = sizeof(struct cgs_system_info);
+	sys_info.info_id = CGS_SYSTEM_INFO_GFX_CU_INFO;
+	result = cgs_query_system_info(hwmgr->device, &sys_info);
+	data->total_active_cus = sys_info.value;
+	/* Setup default Overdrive Fan control settings */
+	data->odn_fan_table.target_fan_speed =
+			hwmgr->thermal_controller.advanceFanControlParameters.usMaxFanRPM;
+	data->odn_fan_table.target_temperature =
+			hwmgr->thermal_controller.
+			advanceFanControlParameters.ucTargetTemperature;
+	data->odn_fan_table.min_performance_clock =
+			hwmgr->thermal_controller.advanceFanControlParameters.
+			ulMinFanSCLKAcousticLimit;
+	data->odn_fan_table.min_fan_limit =
+			hwmgr->thermal_controller.
+			advanceFanControlParameters.usFanPWMMinLimit *
+			hwmgr->thermal_controller.fanInfo.ulMaxRPM / 100;
+
+	return result;
+}
+
+static int vega10_init_sclk_threshold(struct pp_hwmgr *hwmgr)
+{
+	struct vega10_hwmgr *data =
+			(struct vega10_hwmgr *)(hwmgr->backend);
+
+	data->low_sclk_interrupt_threshold = 0;
+
+	return 0;
+}
+
+static int vega10_setup_dpm_led_config(struct pp_hwmgr *hwmgr)
+{
+	struct vega10_hwmgr *data =
+			(struct vega10_hwmgr *)(hwmgr->backend);
+	PPTable_t *pp_table = &(data->smc_state_table.pp_table);
+
+	struct pp_atomfwctrl_voltage_table table;
+	uint8_t i, j;
+	uint32_t mask = 0;
+	uint32_t tmp;
+	int32_t ret = 0;
+
+	ret = pp_atomfwctrl_get_voltage_table_v4(hwmgr, VOLTAGE_TYPE_LEDDPM,
+						VOLTAGE_OBJ_GPIO_LUT, &table);
+
+	if (!ret) {
+		tmp = table.mask_low;
+		for (i = 0, j = 0; i < 32; i++) {
+			if (tmp & 1) {
+				mask |= (uint32_t)(i << (8 * j));
+				if (++j >= 3)
+					break;
+			}
+			tmp >>= 1;
+		}
+	}
+
+	pp_table->LedPin0 = (uint8_t)(mask & 0xff);
+	pp_table->LedPin1 = (uint8_t)((mask >> 8) & 0xff);
+	pp_table->LedPin2 = (uint8_t)((mask >> 16) & 0xff);
+	return ret;
+}
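The LED config loop above scans `mask_low` for the first three set bits and packs each bit position into one byte of a 32-bit value (pin 0 in byte 0, pin 1 in byte 1, pin 2 in byte 2). A standalone sketch of just that bit-packing step; `pack_led_pins` is a hypothetical name:

```c
#include <stdint.h>

/* Pack the positions of the first three set bits of gpio_mask into
 * bytes 0, 1 and 2 of the result, mirroring the loop in
 * vega10_setup_dpm_led_config. Unset trailing pins stay 0. */
static uint32_t pack_led_pins(uint32_t gpio_mask)
{
	uint32_t mask = 0;
	uint32_t tmp = gpio_mask;
	uint8_t i, j;

	for (i = 0, j = 0; i < 32; i++) {
		if (tmp & 1) {
			mask |= (uint32_t)(i << (8 * j));
			if (++j >= 3)
				break;
		}
		tmp >>= 1;
	}
	return mask;
}
```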
+
+static int vega10_setup_asic_task(struct pp_hwmgr *hwmgr)
+{
+	PP_ASSERT_WITH_CODE(!vega10_init_sclk_threshold(hwmgr),
+			"Failed to init sclk threshold!",
+			return -EINVAL);
+
+	PP_ASSERT_WITH_CODE(!vega10_setup_dpm_led_config(hwmgr),
+			"Failed to set up led dpm config!",
+			return -EINVAL);
+
+	return 0;
+}
+
+static bool vega10_is_dpm_running(struct pp_hwmgr *hwmgr)
+{
+	uint32_t features_enabled;
+
+	if (!vega10_get_smc_features(hwmgr->smumgr, &features_enabled)) {
+		if (features_enabled & SMC_DPM_FEATURES)
+			return true;
+	}
+	return false;
+}
+
+/**
+* Remove repeated voltage values and create table with unique values.
+*
+* @param    hwmgr  the address of the powerplay hardware manager.
+* @param    vol_table  the pointer to changing voltage table
+* @return    0 on success
+*/
+
+static int vega10_trim_voltage_table(struct pp_hwmgr *hwmgr,
+		struct pp_atomfwctrl_voltage_table *vol_table)
+{
+	uint32_t i, j;
+	uint16_t vvalue;
+	bool found = false;
+	struct pp_atomfwctrl_voltage_table *table;
+
+	PP_ASSERT_WITH_CODE(vol_table,
+			"Voltage Table empty.", return -EINVAL);
+	table = kzalloc(sizeof(struct pp_atomfwctrl_voltage_table),
+			GFP_KERNEL);
+
+	if (!table)
+		return -ENOMEM;
+
+	table->mask_low = vol_table->mask_low;
+	table->phase_delay = vol_table->phase_delay;
+
+	for (i = 0; i < vol_table->count; i++) {
+		vvalue = vol_table->entries[i].value;
+		found = false;
+
+		for (j = 0; j < table->count; j++) {
+			if (vvalue == table->entries[j].value) {
+				found = true;
+				break;
+			}
+		}
+
+		if (!found) {
+			table->entries[table->count].value = vvalue;
+			table->entries[table->count].smio_low =
+					vol_table->entries[i].smio_low;
+			table->count++;
+		}
+	}
+
+	memcpy(vol_table, table, sizeof(struct pp_atomfwctrl_voltage_table));
+	kfree(table);
+
+	return 0;
+}
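vega10_trim_voltage_table keeps the first occurrence of each voltage value and preserves order. The core deduplication can be sketched in isolation on a bare value array; `trim_duplicates` is a hypothetical helper, not a driver symbol:

```c
#include <stdint.h>

/* Remove duplicate voltage values in place, keeping the first
 * occurrence of each value in its original order; returns the
 * new entry count, as vega10_trim_voltage_table does. */
static uint32_t trim_duplicates(uint16_t *vals, uint32_t count)
{
	uint32_t i, j, out = 0;

	for (i = 0; i < count; i++) {
		for (j = 0; j < out; j++) {
			if (vals[j] == vals[i])
				break;
		}
		if (j == out)	/* value not seen before */
			vals[out++] = vals[i];
	}
	return out;
}
```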
+
+static int vega10_get_mvdd_voltage_table(struct pp_hwmgr *hwmgr,
+		phm_ppt_v1_clock_voltage_dependency_table *dep_table,
+		struct pp_atomfwctrl_voltage_table *vol_table)
+{
+	int i;
+
+	PP_ASSERT_WITH_CODE(dep_table->count,
+			"Voltage Dependency Table empty.",
+			return -EINVAL);
+
+	vol_table->mask_low = 0;
+	vol_table->phase_delay = 0;
+	vol_table->count = dep_table->count;
+
+	for (i = 0; i < vol_table->count; i++) {
+		vol_table->entries[i].value = dep_table->entries[i].mvdd;
+		vol_table->entries[i].smio_low = 0;
+	}
+
+	PP_ASSERT_WITH_CODE(!vega10_trim_voltage_table(hwmgr,
+			vol_table),
+			"Failed to trim MVDD Table!",
+			return -EINVAL);
+
+	return 0;
+}
+
+static int vega10_get_vddci_voltage_table(struct pp_hwmgr *hwmgr,
+		phm_ppt_v1_clock_voltage_dependency_table *dep_table,
+		struct pp_atomfwctrl_voltage_table *vol_table)
+{
+	uint32_t i;
+
+	PP_ASSERT_WITH_CODE(dep_table->count,
+			"Voltage Dependency Table empty.",
+			return -EINVAL);
+
+	vol_table->mask_low = 0;
+	vol_table->phase_delay = 0;
+	vol_table->count = dep_table->count;
+
+	for (i = 0; i < dep_table->count; i++) {
+		vol_table->entries[i].value = dep_table->entries[i].vddci;
+		vol_table->entries[i].smio_low = 0;
+	}
+
+	PP_ASSERT_WITH_CODE(!vega10_trim_voltage_table(hwmgr, vol_table),
+			"Failed to trim VDDCI table.",
+			return -EINVAL);
+
+	return 0;
+}
+
+static int vega10_get_vdd_voltage_table(struct pp_hwmgr *hwmgr,
+		phm_ppt_v1_clock_voltage_dependency_table *dep_table,
+		struct pp_atomfwctrl_voltage_table *vol_table)
+{
+	int i;
+
+	PP_ASSERT_WITH_CODE(dep_table->count,
+			"Voltage Dependency Table empty.",
+			return -EINVAL);
+
+	vol_table->mask_low = 0;
+	vol_table->phase_delay = 0;
+	vol_table->count = dep_table->count;
+
+	for (i = 0; i < vol_table->count; i++) {
+		vol_table->entries[i].value = dep_table->entries[i].vddc;
+		vol_table->entries[i].smio_low = 0;
+	}
+
+	return 0;
+}
+
+/* ---- Voltage Tables ----
+ * If the voltage table would be bigger than
+ * what will fit into the state table on
+ * the SMC, keep only the higher entries.
+ */
+static void vega10_trim_voltage_table_to_fit_state_table(
+		struct pp_hwmgr *hwmgr,
+		uint32_t max_vol_steps,
+		struct pp_atomfwctrl_voltage_table *vol_table)
+{
+	unsigned int i, diff;
+
+	if (vol_table->count <= max_vol_steps)
+		return;
+
+	diff = vol_table->count - max_vol_steps;
+
+	for (i = 0; i < max_vol_steps; i++)
+		vol_table->entries[i] = vol_table->entries[i + diff];
+
+	vol_table->count = max_vol_steps;
+}
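Because the voltage table is sorted ascending, dropping the first `diff` entries keeps the highest voltages, which is what the shift loop above does. A minimal sketch with hypothetical names (`trim_to_fit` operating on a bare value array):

```c
#include <stdint.h>

/* Keep only the last max_steps entries of an ascending table,
 * mirroring vega10_trim_voltage_table_to_fit_state_table;
 * returns the resulting count. */
static uint32_t trim_to_fit(uint16_t *vals, uint32_t count,
			    uint32_t max_steps)
{
	uint32_t i, diff;

	if (count <= max_steps)
		return count;

	diff = count - max_steps;
	for (i = 0; i < max_steps; i++)
		vals[i] = vals[i + diff];
	return max_steps;
}
```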
+
+/**
+* Create Voltage Tables.
+*
+* @param    hwmgr  the address of the powerplay hardware manager.
+* @return   0 on success; a negative error code on failure
+*/
+static int vega10_construct_voltage_tables(struct pp_hwmgr *hwmgr)
+{
+	struct vega10_hwmgr *data = (struct vega10_hwmgr *)(hwmgr->backend);
+	struct phm_ppt_v2_information *table_info =
+			(struct phm_ppt_v2_information *)hwmgr->pptable;
+	int result;
+
+	if (data->mvdd_control == VEGA10_VOLTAGE_CONTROL_BY_SVID2 ||
+			data->mvdd_control == VEGA10_VOLTAGE_CONTROL_NONE) {
+		result = vega10_get_mvdd_voltage_table(hwmgr,
+				table_info->vdd_dep_on_mclk,
+				&(data->mvdd_voltage_table));
+		PP_ASSERT_WITH_CODE(!result,
+				"Failed to retrieve MVDDC table!",
+				return result);
+	}
+
+	if (data->vddci_control == VEGA10_VOLTAGE_CONTROL_NONE) {
+		result = vega10_get_vddci_voltage_table(hwmgr,
+				table_info->vdd_dep_on_mclk,
+				&(data->vddci_voltage_table));
+		PP_ASSERT_WITH_CODE(!result,
+				"Failed to retrieve VDDCI_MEM table!",
+				return result);
+	}
+
+	if (data->vddc_control == VEGA10_VOLTAGE_CONTROL_BY_SVID2 ||
+			data->vddc_control == VEGA10_VOLTAGE_CONTROL_NONE) {
+		result = vega10_get_vdd_voltage_table(hwmgr,
+				table_info->vdd_dep_on_sclk,
+				&(data->vddc_voltage_table));
+		PP_ASSERT_WITH_CODE(!result,
+				"Failed to retrieve VDDCR_SOC table!",
+				return result);
+	}
+
+	PP_ASSERT_WITH_CODE(data->vddc_voltage_table.count <= 16,
+			"Too many voltage values for VDDC. Trimming to fit state table.",
+			vega10_trim_voltage_table_to_fit_state_table(hwmgr,
+					16, &(data->vddc_voltage_table)));
+
+	PP_ASSERT_WITH_CODE(data->vddci_voltage_table.count <= 16,
+			"Too many voltage values for VDDCI. Trimming to fit state table.",
+			vega10_trim_voltage_table_to_fit_state_table(hwmgr,
+					16, &(data->vddci_voltage_table)));
+
+	PP_ASSERT_WITH_CODE(data->mvdd_voltage_table.count <= 16,
+			"Too many voltage values for MVDD. Trimming to fit state table.",
+			vega10_trim_voltage_table_to_fit_state_table(hwmgr,
+					16, &(data->mvdd_voltage_table)));
+
+	return 0;
+}
+
+/*
+ * @fn vega10_init_dpm_state
+ * @brief Function to initialize all Soft Min/Max and Hard Min/Max to 0xff.
+ *
+ * @param    dpm_state - the address of the DPM Table to initialize.
+ * @return   None.
+ */
+static void vega10_init_dpm_state(struct vega10_dpm_state *dpm_state)
+{
+	dpm_state->soft_min_level = 0xff;
+	dpm_state->soft_max_level = 0xff;
+	dpm_state->hard_min_level = 0xff;
+	dpm_state->hard_max_level = 0xff;
+}
+
+static void vega10_setup_default_single_dpm_table(struct pp_hwmgr *hwmgr,
+		struct vega10_single_dpm_table *dpm_table,
+		struct phm_ppt_v1_clock_voltage_dependency_table *dep_table)
+{
+	int i;
+
+	for (i = 0; i < dep_table->count; i++) {
+		if (i == 0 || dpm_table->dpm_levels[dpm_table->count - 1].value !=
+				dep_table->entries[i].clk) {
+			dpm_table->dpm_levels[dpm_table->count].value =
+					dep_table->entries[i].clk;
+			dpm_table->dpm_levels[dpm_table->count].enabled = true;
+			dpm_table->count++;
+		}
+	}
+}
+
+static int vega10_setup_default_pcie_table(struct pp_hwmgr *hwmgr)
+{
+	struct vega10_hwmgr *data =
+			(struct vega10_hwmgr *)(hwmgr->backend);
+	struct vega10_pcie_table *pcie_table = &(data->dpm_table.pcie_table);
+	struct phm_ppt_v2_information *table_info =
+			(struct phm_ppt_v2_information *)(hwmgr->pptable);
+	struct phm_ppt_v1_pcie_table *bios_pcie_table =
+			table_info->pcie_table;
+	uint32_t i;
+
+	PP_ASSERT_WITH_CODE(bios_pcie_table->count,
+			"Incorrect number of PCIE States from VBIOS!",
+			return -EINVAL);
+
+	for (i = 0; i < NUM_LINK_LEVELS - 1; i++) {
+		if (data->registry_data.pcieSpeedOverride)
+			pcie_table->pcie_gen[i] =
+					data->registry_data.pcieSpeedOverride;
+		else
+			pcie_table->pcie_gen[i] =
+					bios_pcie_table->entries[i].gen_speed;
+
+		if (data->registry_data.pcieLaneOverride)
+			pcie_table->pcie_lane[i] =
+					data->registry_data.pcieLaneOverride;
+		else
+			pcie_table->pcie_lane[i] =
+					bios_pcie_table->entries[i].lane_width;
+
+		if (data->registry_data.pcieClockOverride)
+			pcie_table->lclk[i] =
+					data->registry_data.pcieClockOverride;
+		else
+			pcie_table->lclk[i] =
+					bios_pcie_table->entries[i].pcie_sclk;
+
+		pcie_table->count++;
+	}
+
+	if (data->registry_data.pcieSpeedOverride)
+		pcie_table->pcie_gen[i] = data->registry_data.pcieSpeedOverride;
+	else
+		pcie_table->pcie_gen[i] =
+			bios_pcie_table->entries[bios_pcie_table->count - 1].gen_speed;
+
+	if (data->registry_data.pcieLaneOverride)
+		pcie_table->pcie_lane[i] = data->registry_data.pcieLaneOverride;
+	else
+		pcie_table->pcie_lane[i] =
+			bios_pcie_table->entries[bios_pcie_table->count - 1].lane_width;
+
+	if (data->registry_data.pcieClockOverride)
+		pcie_table->lclk[i] = data->registry_data.pcieClockOverride;
+	else
+		pcie_table->lclk[i] =
+			bios_pcie_table->entries[bios_pcie_table->count - 1].pcie_sclk;
+
+	pcie_table->count++;
+
+	return 0;
+}
+
+/*
+ * Initialize all DPM state tables for the SMU based on the
+ * dependency tables. The dynamic state patching function will then
+ * trim these state tables to the allowed range based on the power
+ * policy or external client requests, such as UVD requests.
+ */
+static int vega10_setup_default_dpm_tables(struct pp_hwmgr *hwmgr)
+{
+	struct vega10_hwmgr *data =
+			(struct vega10_hwmgr *)(hwmgr->backend);
+	struct phm_ppt_v2_information *table_info =
+			(struct phm_ppt_v2_information *)(hwmgr->pptable);
+	struct vega10_single_dpm_table *dpm_table;
+	uint32_t i;
+
+	struct phm_ppt_v1_clock_voltage_dependency_table *dep_soc_table =
+			table_info->vdd_dep_on_socclk;
+	struct phm_ppt_v1_clock_voltage_dependency_table *dep_gfx_table =
+			table_info->vdd_dep_on_sclk;
+	struct phm_ppt_v1_clock_voltage_dependency_table *dep_mclk_table =
+			table_info->vdd_dep_on_mclk;
+	struct phm_ppt_v1_mm_clock_voltage_dependency_table *dep_mm_table =
+			table_info->mm_dep_table;
+	struct phm_ppt_v1_clock_voltage_dependency_table *dep_dcef_table =
+			table_info->vdd_dep_on_dcefclk;
+	struct phm_ppt_v1_clock_voltage_dependency_table *dep_pix_table =
+			table_info->vdd_dep_on_pixclk;
+	struct phm_ppt_v1_clock_voltage_dependency_table *dep_disp_table =
+			table_info->vdd_dep_on_dispclk;
+	struct phm_ppt_v1_clock_voltage_dependency_table *dep_phy_table =
+			table_info->vdd_dep_on_phyclk;
+
+	PP_ASSERT_WITH_CODE(dep_soc_table,
+			"SOCCLK dependency table is missing. This table is mandatory",
+			return -EINVAL);
+	PP_ASSERT_WITH_CODE(dep_soc_table->count >= 1,
+			"SOCCLK dependency table is empty. This table is mandatory",
+			return -EINVAL);
+
+	PP_ASSERT_WITH_CODE(dep_gfx_table,
+			"GFXCLK dependency table is missing. This table is mandatory",
+			return -EINVAL);
+	PP_ASSERT_WITH_CODE(dep_gfx_table->count >= 1,
+			"GFXCLK dependency table is empty. This table is mandatory",
+			return -EINVAL);
+
+	PP_ASSERT_WITH_CODE(dep_mclk_table,
+			"MCLK dependency table is missing. This table is mandatory",
+			return -EINVAL);
+	PP_ASSERT_WITH_CODE(dep_mclk_table->count >= 1,
+			"MCLK dependency table is empty. This table is mandatory",
+			return -EINVAL);
+
+	/* Initialize Sclk DPM table based on allowed Sclk values */
+	data->dpm_table.soc_table.count = 0;
+	data->dpm_table.gfx_table.count = 0;
+	data->dpm_table.dcef_table.count = 0;
+
+	dpm_table = &(data->dpm_table.soc_table);
+	vega10_setup_default_single_dpm_table(hwmgr,
+			dpm_table,
+			dep_soc_table);
+
+	vega10_init_dpm_state(&(dpm_table->dpm_state));
+
+	dpm_table = &(data->dpm_table.gfx_table);
+	vega10_setup_default_single_dpm_table(hwmgr,
+			dpm_table,
+			dep_gfx_table);
+	vega10_init_dpm_state(&(dpm_table->dpm_state));
+
+	/* Initialize Mclk DPM table based on allowed Mclk values */
+	data->dpm_table.mem_table.count = 0;
+	dpm_table = &(data->dpm_table.mem_table);
+	vega10_setup_default_single_dpm_table(hwmgr,
+			dpm_table,
+			dep_mclk_table);
+	vega10_init_dpm_state(&(dpm_table->dpm_state));
+
+	data->dpm_table.eclk_table.count = 0;
+	dpm_table = &(data->dpm_table.eclk_table);
+	for (i = 0; i < dep_mm_table->count; i++) {
+		if (i == 0 || dpm_table->dpm_levels
+				[dpm_table->count - 1].value !=
+						dep_mm_table->entries[i].eclk) {
+			dpm_table->dpm_levels[dpm_table->count].value =
+					dep_mm_table->entries[i].eclk;
+			dpm_table->dpm_levels[dpm_table->count].enabled = (i == 0);
+			dpm_table->count++;
+		}
+	}
+	vega10_init_dpm_state(&(dpm_table->dpm_state));
+
+	data->dpm_table.vclk_table.count = 0;
+	data->dpm_table.dclk_table.count = 0;
+	dpm_table = &(data->dpm_table.vclk_table);
+	for (i = 0; i < dep_mm_table->count; i++) {
+		if (i == 0 || dpm_table->dpm_levels
+				[dpm_table->count - 1].value !=
+						dep_mm_table->entries[i].vclk) {
+			dpm_table->dpm_levels[dpm_table->count].value =
+					dep_mm_table->entries[i].vclk;
+			dpm_table->dpm_levels[dpm_table->count].enabled = (i == 0);
+			dpm_table->count++;
+		}
+	}
+	vega10_init_dpm_state(&(dpm_table->dpm_state));
+
+	dpm_table = &(data->dpm_table.dclk_table);
+	for (i = 0; i < dep_mm_table->count; i++) {
+		if (i == 0 || dpm_table->dpm_levels
+				[dpm_table->count - 1].value !=
+						dep_mm_table->entries[i].dclk) {
+			dpm_table->dpm_levels[dpm_table->count].value =
+					dep_mm_table->entries[i].dclk;
+			dpm_table->dpm_levels[dpm_table->count].enabled = (i == 0);
+			dpm_table->count++;
+		}
+	}
+	vega10_init_dpm_state(&(dpm_table->dpm_state));
+
+	/* Assume there is no headless Vega10 for now */
+	dpm_table = &(data->dpm_table.dcef_table);
+	vega10_setup_default_single_dpm_table(hwmgr,
+			dpm_table,
+			dep_dcef_table);
+
+	vega10_init_dpm_state(&(dpm_table->dpm_state));
+
+	dpm_table = &(data->dpm_table.pixel_table);
+	vega10_setup_default_single_dpm_table(hwmgr,
+			dpm_table,
+			dep_pix_table);
+
+	vega10_init_dpm_state(&(dpm_table->dpm_state));
+
+	dpm_table = &(data->dpm_table.display_table);
+	vega10_setup_default_single_dpm_table(hwmgr,
+			dpm_table,
+			dep_disp_table);
+
+	vega10_init_dpm_state(&(dpm_table->dpm_state));
+
+	dpm_table = &(data->dpm_table.phy_table);
+	vega10_setup_default_single_dpm_table(hwmgr,
+			dpm_table,
+			dep_phy_table);
+
+	vega10_init_dpm_state(&(dpm_table->dpm_state));
+
+	vega10_setup_default_pcie_table(hwmgr);
+
+	/* save a copy of the default DPM table */
+	memcpy(&(data->golden_dpm_table), &(data->dpm_table),
+			sizeof(struct vega10_dpm_table));
+
+	if (phm_cap_enabled(hwmgr->platform_descriptor.platformCaps,
+			PHM_PlatformCaps_ODNinACSupport) ||
+		phm_cap_enabled(hwmgr->platform_descriptor.platformCaps,
+			PHM_PlatformCaps_ODNinDCSupport)) {
+		data->odn_dpm_table.odn_core_clock_dpm_levels.
+		number_of_performance_levels = data->dpm_table.gfx_table.count;
+		for (i = 0; i < data->dpm_table.gfx_table.count; i++) {
+			data->odn_dpm_table.odn_core_clock_dpm_levels.
+			performance_level_entries[i].clock =
+					data->dpm_table.gfx_table.dpm_levels[i].value;
+			data->odn_dpm_table.odn_core_clock_dpm_levels.
+			performance_level_entries[i].enabled = true;
+		}
+
+		data->odn_dpm_table.vdd_dependency_on_sclk.count =
+				dep_gfx_table->count;
+		for (i = 0; i < dep_gfx_table->count; i++) {
+			data->odn_dpm_table.vdd_dependency_on_sclk.entries[i].clk =
+					dep_gfx_table->entries[i].clk;
+			data->odn_dpm_table.vdd_dependency_on_sclk.entries[i].vddInd =
+					dep_gfx_table->entries[i].vddInd;
+			data->odn_dpm_table.vdd_dependency_on_sclk.entries[i].cks_enable =
+					dep_gfx_table->entries[i].cks_enable;
+			data->odn_dpm_table.vdd_dependency_on_sclk.entries[i].cks_voffset =
+					dep_gfx_table->entries[i].cks_voffset;
+		}
+
+		data->odn_dpm_table.odn_memory_clock_dpm_levels.
+		number_of_performance_levels = data->dpm_table.mem_table.count;
+		for (i = 0; i < data->dpm_table.mem_table.count; i++) {
+			data->odn_dpm_table.odn_memory_clock_dpm_levels.
+			performance_level_entries[i].clock =
+					data->dpm_table.mem_table.dpm_levels[i].value;
+			data->odn_dpm_table.odn_memory_clock_dpm_levels.
+			performance_level_entries[i].enabled = true;
+		}
+
+		data->odn_dpm_table.vdd_dependency_on_mclk.count = dep_mclk_table->count;
+		for (i = 0; i < dep_mclk_table->count; i++) {
+			data->odn_dpm_table.vdd_dependency_on_mclk.entries[i].clk =
+					dep_mclk_table->entries[i].clk;
+			data->odn_dpm_table.vdd_dependency_on_mclk.entries[i].vddInd =
+					dep_mclk_table->entries[i].vddInd;
+			data->odn_dpm_table.vdd_dependency_on_mclk.entries[i].vddci =
+					dep_mclk_table->entries[i].vddci;
+		}
+	}
+
+	return 0;
+}
+
+/*
+ * @fn vega10_populate_ulv_state
+ * @brief Function to provide parameters for Ultra Low Voltage state to SMC.
+ *
+ * @param    hwmgr - the address of the hardware manager.
+ * @return   Always 0.
+ */
+static int vega10_populate_ulv_state(struct pp_hwmgr *hwmgr)
+{
+	struct vega10_hwmgr *data =
+			(struct vega10_hwmgr *)(hwmgr->backend);
+	struct phm_ppt_v2_information *table_info =
+			(struct phm_ppt_v2_information *)(hwmgr->pptable);
+
+	data->smc_state_table.pp_table.UlvOffsetVid =
+			(uint8_t)(table_info->us_ulv_voltage_offset *
+					VOLTAGE_VID_OFFSET_SCALE2 /
+					VOLTAGE_VID_OFFSET_SCALE1);
+
+	data->smc_state_table.pp_table.UlvSmnclkDid =
+			(uint8_t)(table_info->us_ulv_smnclk_did);
+	data->smc_state_table.pp_table.UlvMp1clkDid =
+			(uint8_t)(table_info->us_ulv_mp1clk_did);
+	data->smc_state_table.pp_table.UlvGfxclkBypass =
+			(uint8_t)(table_info->us_ulv_gfxclk_bypass);
+	data->smc_state_table.pp_table.UlvPhaseSheddingPsi0 =
+			(uint8_t)(data->vddc_voltage_table.psi0_enable);
+	data->smc_state_table.pp_table.UlvPhaseSheddingPsi1 =
+			(uint8_t)(data->vddc_voltage_table.psi1_enable);
+
+	return 0;
+}
+
+static int vega10_populate_single_lclk_level(struct pp_hwmgr *hwmgr,
+		uint32_t lclock, uint8_t *curr_lclk_did)
+{
+	struct pp_atomfwctrl_clock_dividers_soc15 dividers;
+
+	PP_ASSERT_WITH_CODE(!pp_atomfwctrl_get_gpu_pll_dividers_vega10(
+			hwmgr,
+			COMPUTE_GPUCLK_INPUT_FLAG_DEFAULT_GPUCLK,
+			lclock, &dividers),
+			"Failed to get LCLK clock settings from VBIOS!",
+			return -EINVAL);
+
+	*curr_lclk_did = dividers.ulDid;
+
+	return 0;
+}
+
+static int vega10_populate_smc_link_levels(struct pp_hwmgr *hwmgr)
+{
+	int result = -1;
+	struct vega10_hwmgr *data =
+			(struct vega10_hwmgr *)(hwmgr->backend);
+	PPTable_t *pp_table = &(data->smc_state_table.pp_table);
+	struct vega10_pcie_table *pcie_table =
+			&(data->dpm_table.pcie_table);
+	uint32_t i, j;
+
+	for (i = 0; i < pcie_table->count; i++) {
+		pp_table->PcieGenSpeed[i] = pcie_table->pcie_gen[i];
+		pp_table->PcieLaneCount[i] = pcie_table->pcie_lane[i];
+
+		result = vega10_populate_single_lclk_level(hwmgr,
+				pcie_table->lclk[i], &(pp_table->LclkDid[i]));
+		if (result) {
+			pr_info("Populate LClock Level %d Failed!\n", i);
+			return result;
+		}
+	}
+
+	j = i - 1;
+	while (i < NUM_LINK_LEVELS) {
+		pp_table->PcieGenSpeed[i] = pcie_table->pcie_gen[j];
+		pp_table->PcieLaneCount[i] = pcie_table->pcie_lane[j];
+
+		result = vega10_populate_single_lclk_level(hwmgr,
+				pcie_table->lclk[j], &(pp_table->LclkDid[i]));
+		if (result) {
+			pr_info("Populate LClock Level %d Failed!\n", i);
+			return result;
+		}
+		i++;
+	}
+
+	return result;
+}
+
+/**
+* Populates single SMC GFXCLK structure using the provided engine clock.
+*
+* @param    hwmgr      the address of the hardware manager
+* @param    gfx_clock  the GFX clock to use to populate the structure.
+* @param    current_gfxclk_level  location in PPTable for the SMC GFXCLK structure.
+*/
+
+static int vega10_populate_single_gfx_level(struct pp_hwmgr *hwmgr,
+		uint32_t gfx_clock, PllSetting_t *current_gfxclk_level)
+{
+	struct phm_ppt_v2_information *table_info =
+			(struct phm_ppt_v2_information *)(hwmgr->pptable);
+	struct phm_ppt_v1_clock_voltage_dependency_table *dep_on_sclk =
+			table_info->vdd_dep_on_sclk;
+	struct vega10_hwmgr *data =
+			(struct vega10_hwmgr *)(hwmgr->backend);
+	struct pp_atomfwctrl_clock_dividers_soc15 dividers;
+	uint32_t i;
+
+	if (data->apply_overdrive_next_settings_mask &
+			DPMTABLE_OD_UPDATE_VDDC)
+		dep_on_sclk = (struct phm_ppt_v1_clock_voltage_dependency_table *)
+						&(data->odn_dpm_table.vdd_dependency_on_sclk);
+
+	PP_ASSERT_WITH_CODE(dep_on_sclk,
+			"Invalid SOC_VDD-GFX_CLK Dependency Table!",
+			return -EINVAL);
+
+	for (i = 0; i < dep_on_sclk->count; i++) {
+		if (dep_on_sclk->entries[i].clk == gfx_clock)
+			break;
+	}
+
+	PP_ASSERT_WITH_CODE(dep_on_sclk->count > i,
+			"Cannot find gfx_clk in SOC_VDD-GFX_CLK!",
+			return -EINVAL);
+	PP_ASSERT_WITH_CODE(!pp_atomfwctrl_get_gpu_pll_dividers_vega10(hwmgr,
+			COMPUTE_GPUCLK_INPUT_FLAG_GFXCLK,
+			gfx_clock, &dividers),
+			"Failed to get GFX Clock settings from VBIOS!",
+			return -EINVAL);
+
+	/* Feedback Multiplier: bits 8:0 int, bits 15:12 post_div, bits 31:16 frac */
+	current_gfxclk_level->FbMult =
+			cpu_to_le32(dividers.ulPll_fb_mult);
+	/* Spread FB Multiplier: bits 8:0 int, bits 31:16 frac */
+	current_gfxclk_level->SsOn = dividers.ucPll_ss_enable;
+	current_gfxclk_level->SsFbMult =
+			cpu_to_le32(dividers.ulPll_ss_fbsmult);
+	current_gfxclk_level->SsSlewFrac =
+			cpu_to_le16(dividers.usPll_ss_slew_frac);
+	current_gfxclk_level->Did = (uint8_t)(dividers.ulDid);
+
+	return 0;
+}
+
+/**
+ * @brief Populates single SMC SOCCLK structure using the provided clock.
+ *
+ * @param    hwmgr - the address of the hardware manager.
+ * @param    soc_clock - the SOC clock to use to populate the structure.
+ * @param    current_socclk_level - location in PPTable for the SMC SOCCLK structure.
+ * @return   0 on success.
+ */
+static int vega10_populate_single_soc_level(struct pp_hwmgr *hwmgr,
+		uint32_t soc_clock, uint8_t *current_soc_did,
+		uint8_t *current_vol_index)
+{
+	struct phm_ppt_v2_information *table_info =
+			(struct phm_ppt_v2_information *)(hwmgr->pptable);
+	struct phm_ppt_v1_clock_voltage_dependency_table *dep_on_soc =
+			table_info->vdd_dep_on_socclk;
+	struct pp_atomfwctrl_clock_dividers_soc15 dividers;
+	uint32_t i;
+
+	PP_ASSERT_WITH_CODE(dep_on_soc,
+			"Invalid SOC_VDD-SOC_CLK Dependency Table!",
+			return -EINVAL);
+	for (i = 0; i < dep_on_soc->count; i++) {
+		if (dep_on_soc->entries[i].clk == soc_clock)
+			break;
+	}
+	PP_ASSERT_WITH_CODE(dep_on_soc->count > i,
+			"Cannot find SOC_CLK in SOC_VDD-SOC_CLK Dependency Table",
+			return -EINVAL);
+	PP_ASSERT_WITH_CODE(!pp_atomfwctrl_get_gpu_pll_dividers_vega10(hwmgr,
+			COMPUTE_GPUCLK_INPUT_FLAG_DEFAULT_GPUCLK,
+			soc_clock, &dividers),
+			"Failed to get SOC Clock settings from VBIOS!",
+			return -EINVAL);
+
+	*current_soc_did = (uint8_t)dividers.ulDid;
+	*current_vol_index = (uint8_t)(dep_on_soc->entries[i].vddInd);
+
+	return 0;
+}
+
+uint16_t vega10_locate_vddc_given_clock(struct pp_hwmgr *hwmgr,
+		uint32_t clk,
+		struct phm_ppt_v1_clock_voltage_dependency_table *dep_table)
+{
+	uint16_t i;
+
+	for (i = 0; i < dep_table->count; i++) {
+		if (dep_table->entries[i].clk == clk)
+			return dep_table->entries[i].vddc;
+	}
+
+	pr_info("[LocateVddcGivenClock] Cannot locate SOC Vddc for this clock!");
+	return 0;
+}
+
+/**
+* Populates all SMC SCLK levels' structure based on the trimmed allowed dpm engine clock states
+*
+* @param    hwmgr      the address of the hardware manager
+*/
+static int vega10_populate_all_graphic_levels(struct pp_hwmgr *hwmgr)
+{
+	struct vega10_hwmgr *data =
+			(struct vega10_hwmgr *)(hwmgr->backend);
+	struct phm_ppt_v2_information *table_info =
+			(struct phm_ppt_v2_information *)(hwmgr->pptable);
+	struct phm_ppt_v1_clock_voltage_dependency_table *dep_table =
+			table_info->vdd_dep_on_socclk;
+	PPTable_t *pp_table = &(data->smc_state_table.pp_table);
+	struct vega10_single_dpm_table *dpm_table = &(data->dpm_table.gfx_table);
+	int result = 0;
+	uint32_t i, j;
+
+	for (i = 0; i < dpm_table->count; i++) {
+		result = vega10_populate_single_gfx_level(hwmgr,
+				dpm_table->dpm_levels[i].value,
+				&(pp_table->GfxclkLevel[i]));
+		if (result)
+			return result;
+	}
+
+	j = i - 1;
+	while (i < NUM_GFXCLK_DPM_LEVELS) {
+		result = vega10_populate_single_gfx_level(hwmgr,
+				dpm_table->dpm_levels[j].value,
+				&(pp_table->GfxclkLevel[i]));
+		if (result)
+			return result;
+		i++;
+	}
+
+	pp_table->GfxclkSlewRate =
+			cpu_to_le16(table_info->us_gfxclk_slew_rate);
+
+	dpm_table = &(data->dpm_table.soc_table);
+	for (i = 0; i < dpm_table->count; i++) {
+		pp_table->SocVid[i] =
+				(uint8_t)convert_to_vid(
+				vega10_locate_vddc_given_clock(hwmgr,
+						dpm_table->dpm_levels[i].value,
+						dep_table));
+		result = vega10_populate_single_soc_level(hwmgr,
+				dpm_table->dpm_levels[i].value,
+				&(pp_table->SocclkDid[i]),
+				&(pp_table->SocDpmVoltageIndex[i]));
+		if (result)
+			return result;
+	}
+
+	j = i - 1;
+	while (i < NUM_SOCCLK_DPM_LEVELS) {
+		pp_table->SocVid[i] = pp_table->SocVid[j];
+		result = vega10_populate_single_soc_level(hwmgr,
+				dpm_table->dpm_levels[j].value,
+				&(pp_table->SocclkDid[i]),
+				&(pp_table->SocDpmVoltageIndex[i]));
+		if (result)
+			return result;
+		i++;
+	}
+
+	return result;
+}
+
+/**
+ * @brief Populates a single SMC MCLK structure using the provided memory clock.
+ *
+ * @param    hwmgr - the address of the hardware manager.
+ * @param    mem_clock - the memory clock to use to populate the structure.
+ * @return   0 on success.
+ */
+static int vega10_populate_single_memory_level(struct pp_hwmgr *hwmgr,
+		uint32_t mem_clock, uint8_t *current_mem_vid,
+		PllSetting_t *current_memclk_level, uint8_t *current_mem_soc_vind)
+{
+	struct vega10_hwmgr *data =
+			(struct vega10_hwmgr *)(hwmgr->backend);
+	struct phm_ppt_v2_information *table_info =
+			(struct phm_ppt_v2_information *)(hwmgr->pptable);
+	struct phm_ppt_v1_clock_voltage_dependency_table *dep_on_mclk =
+			table_info->vdd_dep_on_mclk;
+	struct pp_atomfwctrl_clock_dividers_soc15 dividers;
+	uint32_t i;
+
+	if (data->apply_overdrive_next_settings_mask &
+			DPMTABLE_OD_UPDATE_VDDC)
+		dep_on_mclk = (struct phm_ppt_v1_clock_voltage_dependency_table *)
+					&data->odn_dpm_table.vdd_dependency_on_mclk;
+
+	PP_ASSERT_WITH_CODE(dep_on_mclk,
+			"Invalid SOC_VDD-UCLK Dependency Table!",
+			return -EINVAL);
+
+	for (i = 0; i < dep_on_mclk->count; i++) {
+		if (dep_on_mclk->entries[i].clk == mem_clock)
+			break;
+	}
+
+	PP_ASSERT_WITH_CODE(dep_on_mclk->count > i,
+			"Cannot find UCLK in SOC_VDD-UCLK Dependency Table!",
+			return -EINVAL);
+
+	PP_ASSERT_WITH_CODE(!pp_atomfwctrl_get_gpu_pll_dividers_vega10(
+			hwmgr, COMPUTE_GPUCLK_INPUT_FLAG_UCLK, mem_clock, &dividers),
+			"Failed to get UCLK settings from VBIOS!",
+			return -1);
+
+	*current_mem_vid =
+			(uint8_t)(convert_to_vid(dep_on_mclk->entries[i].mvdd));
+	*current_mem_soc_vind =
+			(uint8_t)(dep_on_mclk->entries[i].vddInd);
+	current_memclk_level->FbMult = cpu_to_le32(dividers.ulPll_fb_mult);
+	current_memclk_level->Did = (uint8_t)(dividers.ulDid);
+
+	PP_ASSERT_WITH_CODE(current_memclk_level->Did >= 1,
+			"Invalid Divider ID!",
+			return -EINVAL);
+
+	return 0;
+}
+
+/**
+ * @brief Populates all SMC MCLK levels' structure based on the trimmed allowed dpm memory clock states.
+ *
+ * @param    hwmgr - the address of the hardware manager.
+ * @return   0 on success.
+ */
+static int vega10_populate_all_memory_levels(struct pp_hwmgr *hwmgr)
+{
+	struct vega10_hwmgr *data =
+			(struct vega10_hwmgr *)(hwmgr->backend);
+	PPTable_t *pp_table = &(data->smc_state_table.pp_table);
+	struct vega10_single_dpm_table *dpm_table =
+			&(data->dpm_table.mem_table);
+	int result = 0;
+	uint32_t i, j, reg, mem_channels;
+
+	for (i = 0; i < dpm_table->count; i++) {
+		result = vega10_populate_single_memory_level(hwmgr,
+				dpm_table->dpm_levels[i].value,
+				&(pp_table->MemVid[i]),
+				&(pp_table->UclkLevel[i]),
+				&(pp_table->MemSocVoltageIndex[i]));
+		if (result)
+			return result;
+	}
+
+	j = i - 1;
+	while (i < NUM_UCLK_DPM_LEVELS) {
+		result = vega10_populate_single_memory_level(hwmgr,
+				dpm_table->dpm_levels[j].value,
+				&(pp_table->MemVid[i]),
+				&(pp_table->UclkLevel[i]),
+				&(pp_table->MemSocVoltageIndex[i]));
+		if (result)
+			return result;
+		i++;
+	}
+
+	reg = soc15_get_register_offset(DF_HWID, 0,
+			mmDF_CS_AON0_DramBaseAddress0_BASE_IDX,
+			mmDF_CS_AON0_DramBaseAddress0);
+	mem_channels = (cgs_read_register(hwmgr->device, reg) &
+			DF_CS_AON0_DramBaseAddress0__IntLvNumChan_MASK) >>
+			DF_CS_AON0_DramBaseAddress0__IntLvNumChan__SHIFT;
+	pp_table->NumMemoryChannels = cpu_to_le16(mem_channels);
+	pp_table->MemoryChannelWidth =
+			cpu_to_le16(HBM_MEMORY_CHANNEL_WIDTH *
+					channel_number[mem_channels]);
+
+	pp_table->LowestUclkReservedForUlv =
+			(uint8_t)(data->lowest_uclk_reserved_for_ulv);
+
+	return result;
+}
+
+static int vega10_populate_single_display_type(struct pp_hwmgr *hwmgr,
+		DSPCLK_e disp_clock)
+{
+	struct vega10_hwmgr *data =
+			(struct vega10_hwmgr *)(hwmgr->backend);
+	PPTable_t *pp_table = &(data->smc_state_table.pp_table);
+	struct phm_ppt_v2_information *table_info =
+			(struct phm_ppt_v2_information *)
+			(hwmgr->pptable);
+	struct phm_ppt_v1_clock_voltage_dependency_table *dep_table;
+	uint32_t i;
+	uint16_t clk = 0, vddc = 0;
+	uint8_t vid = 0;
+
+	switch (disp_clock) {
+	case DSPCLK_DCEFCLK:
+		dep_table = table_info->vdd_dep_on_dcefclk;
+		break;
+	case DSPCLK_DISPCLK:
+		dep_table = table_info->vdd_dep_on_dispclk;
+		break;
+	case DSPCLK_PIXCLK:
+		dep_table = table_info->vdd_dep_on_pixclk;
+		break;
+	case DSPCLK_PHYCLK:
+		dep_table = table_info->vdd_dep_on_phyclk;
+		break;
+	default:
+		return -1;
+	}
+
+	PP_ASSERT_WITH_CODE(dep_table->count <= NUM_DSPCLK_LEVELS,
+			"Number Of Entries Exceeded maximum!",
+			return -1);
+
+	for (i = 0; i < dep_table->count; i++) {
+		clk = (uint16_t)(dep_table->entries[i].clk / 100);
+		vddc = table_info->vddc_lookup_table->
+				entries[dep_table->entries[i].vddInd].us_vdd;
+		vid = (uint8_t)convert_to_vid(vddc);
+		pp_table->DisplayClockTable[disp_clock][i].Freq =
+				cpu_to_le16(clk);
+		pp_table->DisplayClockTable[disp_clock][i].Vid =
+				cpu_to_le16(vid);
+	}
+
+	while (i < NUM_DSPCLK_LEVELS) {
+		pp_table->DisplayClockTable[disp_clock][i].Freq =
+				cpu_to_le16(clk);
+		pp_table->DisplayClockTable[disp_clock][i].Vid =
+				cpu_to_le16(vid);
+		i++;
+	}
+
+	return 0;
+}
+
+static int vega10_populate_all_display_clock_levels(struct pp_hwmgr *hwmgr)
+{
+	uint32_t i;
+
+	for (i = 0; i < DSPCLK_COUNT; i++) {
+		PP_ASSERT_WITH_CODE(!vega10_populate_single_display_type(hwmgr, i),
+				"Failed to populate Clock in DisplayClockTable!",
+				return -1);
+	}
+
+	return 0;
+}
+
+static int vega10_populate_single_eclock_level(struct pp_hwmgr *hwmgr,
+		uint32_t eclock, uint8_t *current_eclk_did,
+		uint8_t *current_soc_vol)
+{
+	struct phm_ppt_v2_information *table_info =
+			(struct phm_ppt_v2_information *)(hwmgr->pptable);
+	struct phm_ppt_v1_mm_clock_voltage_dependency_table *dep_table =
+			table_info->mm_dep_table;
+	struct pp_atomfwctrl_clock_dividers_soc15 dividers;
+	uint32_t i;
+
+	PP_ASSERT_WITH_CODE(!pp_atomfwctrl_get_gpu_pll_dividers_vega10(hwmgr,
+			COMPUTE_GPUCLK_INPUT_FLAG_DEFAULT_GPUCLK,
+			eclock, &dividers),
+			"Failed to get ECLK clock settings from VBIOS!",
+			return -1);
+
+	*current_eclk_did = (uint8_t)dividers.ulDid;
+
+	for (i = 0; i < dep_table->count; i++) {
+		if (dep_table->entries[i].eclk == eclock)
+			*current_soc_vol = dep_table->entries[i].vddcInd;
+	}
+
+	return 0;
+}
+
+static int vega10_populate_smc_vce_levels(struct pp_hwmgr *hwmgr)
+{
+	struct vega10_hwmgr *data =
+			(struct vega10_hwmgr *)(hwmgr->backend);
+	PPTable_t *pp_table = &(data->smc_state_table.pp_table);
+	struct vega10_single_dpm_table *dpm_table = &(data->dpm_table.eclk_table);
+	int result = -EINVAL;
+	uint32_t i, j;
+
+	for (i = 0; i < dpm_table->count; i++) {
+		result = vega10_populate_single_eclock_level(hwmgr,
+				dpm_table->dpm_levels[i].value,
+				&(pp_table->EclkDid[i]),
+				&(pp_table->VceDpmVoltageIndex[i]));
+		if (result)
+			return result;
+	}
+
+	j = i - 1;
+	while (i < NUM_VCE_DPM_LEVELS) {
+		result = vega10_populate_single_eclock_level(hwmgr,
+				dpm_table->dpm_levels[j].value,
+				&(pp_table->EclkDid[i]),
+				&(pp_table->VceDpmVoltageIndex[i]));
+		if (result)
+			return result;
+		i++;
+	}
+
+	return result;
+}
+
+static int vega10_populate_single_vclock_level(struct pp_hwmgr *hwmgr,
+		uint32_t vclock, uint8_t *current_vclk_did)
+{
+	struct pp_atomfwctrl_clock_dividers_soc15 dividers;
+
+	PP_ASSERT_WITH_CODE(!pp_atomfwctrl_get_gpu_pll_dividers_vega10(hwmgr,
+			COMPUTE_GPUCLK_INPUT_FLAG_DEFAULT_GPUCLK,
+			vclock, &dividers),
+			"Failed to get VCLK clock settings from VBIOS!",
+			return -EINVAL);
+
+	*current_vclk_did = (uint8_t)dividers.ulDid;
+
+	return 0;
+}
+
+static int vega10_populate_single_dclock_level(struct pp_hwmgr *hwmgr,
+		uint32_t dclock, uint8_t *current_dclk_did)
+{
+	struct pp_atomfwctrl_clock_dividers_soc15 dividers;
+
+	PP_ASSERT_WITH_CODE(!pp_atomfwctrl_get_gpu_pll_dividers_vega10(hwmgr,
+			COMPUTE_GPUCLK_INPUT_FLAG_DEFAULT_GPUCLK,
+			dclock, &dividers),
+			"Failed to get DCLK clock settings from VBIOS!",
+			return -EINVAL);
+
+	*current_dclk_did = (uint8_t)dividers.ulDid;
+
+	return 0;
+}
+
+static int vega10_populate_smc_uvd_levels(struct pp_hwmgr *hwmgr)
+{
+	struct vega10_hwmgr *data =
+			(struct vega10_hwmgr *)(hwmgr->backend);
+	PPTable_t *pp_table = &(data->smc_state_table.pp_table);
+	struct vega10_single_dpm_table *vclk_dpm_table =
+			&(data->dpm_table.vclk_table);
+	struct vega10_single_dpm_table *dclk_dpm_table =
+			&(data->dpm_table.dclk_table);
+	struct phm_ppt_v2_information *table_info =
+			(struct phm_ppt_v2_information *)(hwmgr->pptable);
+	struct phm_ppt_v1_mm_clock_voltage_dependency_table *dep_table =
+			table_info->mm_dep_table;
+	int result = -EINVAL;
+	uint32_t i, j;
+
+	for (i = 0; i < vclk_dpm_table->count; i++) {
+		result = vega10_populate_single_vclock_level(hwmgr,
+				vclk_dpm_table->dpm_levels[i].value,
+				&(pp_table->VclkDid[i]));
+		if (result)
+			return result;
+	}
+
+	j = i - 1;
+	while (i < NUM_UVD_DPM_LEVELS) {
+		result = vega10_populate_single_vclock_level(hwmgr,
+				vclk_dpm_table->dpm_levels[j].value,
+				&(pp_table->VclkDid[i]));
+		if (result)
+			return result;
+		i++;
+	}
+
+	for (i = 0; i < dclk_dpm_table->count; i++) {
+		result = vega10_populate_single_dclock_level(hwmgr,
+				dclk_dpm_table->dpm_levels[i].value,
+				&(pp_table->DclkDid[i]));
+		if (result)
+			return result;
+	}
+
+	j = i - 1;
+	while (i < NUM_UVD_DPM_LEVELS) {
+		result = vega10_populate_single_dclock_level(hwmgr,
+				dclk_dpm_table->dpm_levels[j].value,
+				&(pp_table->DclkDid[i]));
+		if (result)
+			return result;
+		i++;
+	}
+
+	for (i = 0; i < dep_table->count; i++) {
+		if (dep_table->entries[i].vclk ==
+				vclk_dpm_table->dpm_levels[i].value &&
+			dep_table->entries[i].dclk ==
+				dclk_dpm_table->dpm_levels[i].value)
+			pp_table->UvdDpmVoltageIndex[i] =
+					dep_table->entries[i].vddcInd;
+		else
+			return -1;
+	}
+
+	j = i - 1;
+	while (i < NUM_UVD_DPM_LEVELS) {
+		pp_table->UvdDpmVoltageIndex[i] = dep_table->entries[j].vddcInd;
+		i++;
+	}
+
+	return 0;
+}
+
+static int vega10_populate_clock_stretcher_table(struct pp_hwmgr *hwmgr)
+{
+	struct vega10_hwmgr *data =
+			(struct vega10_hwmgr *)(hwmgr->backend);
+	PPTable_t *pp_table = &(data->smc_state_table.pp_table);
+	struct phm_ppt_v2_information *table_info =
+			(struct phm_ppt_v2_information *)(hwmgr->pptable);
+	struct phm_ppt_v1_clock_voltage_dependency_table *dep_table =
+			table_info->vdd_dep_on_sclk;
+	uint32_t i;
+
+	for (i = 0; i < dep_table->count; i++) {
+		pp_table->CksEnable[i] = dep_table->entries[i].cks_enable;
+		pp_table->CksVidOffset[i] = convert_to_vid(
+				dep_table->entries[i].cks_voffset);
+	}
+
+	return 0;
+}
+
+static int vega10_populate_avfs_parameters(struct pp_hwmgr *hwmgr)
+{
+	struct vega10_hwmgr *data =
+			(struct vega10_hwmgr *)(hwmgr->backend);
+	PPTable_t *pp_table = &(data->smc_state_table.pp_table);
+	struct phm_ppt_v2_information *table_info =
+			(struct phm_ppt_v2_information *)(hwmgr->pptable);
+	struct phm_ppt_v1_clock_voltage_dependency_table *dep_table =
+			table_info->vdd_dep_on_sclk;
+	struct pp_atomfwctrl_avfs_parameters avfs_params = {0};
+	int result = 0;
+	uint32_t i;
+
+	pp_table->MinVoltageVid = (uint8_t)0xff;
+	pp_table->MaxVoltageVid = (uint8_t)0;
+
+	if (data->smu_features[GNLD_AVFS].supported) {
+		result = pp_atomfwctrl_get_avfs_information(hwmgr, &avfs_params);
+		if (!result) {
+			pp_table->MinVoltageVid = (uint8_t)
+					convert_to_vid((uint16_t)(avfs_params.ulMaxVddc));
+			pp_table->MaxVoltageVid = (uint8_t)
+					convert_to_vid((uint16_t)(avfs_params.ulMinVddc));
+			pp_table->BtcGbVdroopTableCksOn.a0 =
+					cpu_to_le32(avfs_params.ulGbVdroopTableCksonA0);
+			pp_table->BtcGbVdroopTableCksOn.a1 =
+					cpu_to_le32(avfs_params.ulGbVdroopTableCksonA1);
+			pp_table->BtcGbVdroopTableCksOn.a2 =
+					cpu_to_le32(avfs_params.ulGbVdroopTableCksonA2);
+
+			pp_table->BtcGbVdroopTableCksOff.a0 =
+					cpu_to_le32(avfs_params.ulGbVdroopTableCksoffA0);
+			pp_table->BtcGbVdroopTableCksOff.a1 =
+					cpu_to_le32(avfs_params.ulGbVdroopTableCksoffA1);
+			pp_table->BtcGbVdroopTableCksOff.a2 =
+					cpu_to_le32(avfs_params.ulGbVdroopTableCksoffA2);
+
+			pp_table->AvfsGbCksOn.m1 =
+					cpu_to_le32(avfs_params.ulGbFuseTableCksonM1);
+			pp_table->AvfsGbCksOn.m2 =
+					cpu_to_le16(avfs_params.usGbFuseTableCksonM2);
+			pp_table->AvfsGbCksOn.b =
+					cpu_to_le32(avfs_params.ulGbFuseTableCksonB);
+			pp_table->AvfsGbCksOn.m1_shift = 24;
+			pp_table->AvfsGbCksOn.m2_shift = 12;
+
+			pp_table->AvfsGbCksOff.m1 =
+					cpu_to_le32(avfs_params.ulGbFuseTableCksoffM1);
+			pp_table->AvfsGbCksOff.m2 =
+					cpu_to_le16(avfs_params.usGbFuseTableCksoffM2);
+			pp_table->AvfsGbCksOff.b =
+					cpu_to_le32(avfs_params.ulGbFuseTableCksoffB);
+			pp_table->AvfsGbCksOff.m1_shift = 24;
+			pp_table->AvfsGbCksOff.m2_shift = 12;
+
+			pp_table->AConstant[0] =
+					cpu_to_le32(avfs_params.ulMeanNsigmaAcontant0);
+			pp_table->AConstant[1] =
+					cpu_to_le32(avfs_params.ulMeanNsigmaAcontant1);
+			pp_table->AConstant[2] =
+					cpu_to_le32(avfs_params.ulMeanNsigmaAcontant2);
+			pp_table->DC_tol_sigma =
+					cpu_to_le16(avfs_params.usMeanNsigmaDcTolSigma);
+			pp_table->Platform_mean =
+					cpu_to_le16(avfs_params.usMeanNsigmaPlatformMean);
+			pp_table->PSM_Age_CompFactor =
+					cpu_to_le16(avfs_params.usPsmAgeComfactor);
+			pp_table->Platform_sigma =
+					cpu_to_le16(avfs_params.usMeanNsigmaPlatformSigma);
+
+			for (i = 0; i < dep_table->count; i++)
+				pp_table->StaticVoltageOffsetVid[i] = (uint8_t)
+						(dep_table->entries[i].sclk_offset *
+								VOLTAGE_VID_OFFSET_SCALE2 /
+								VOLTAGE_VID_OFFSET_SCALE1);
+
+			pp_table->OverrideBtcGbCksOn =
+					avfs_params.ucEnableGbVdroopTableCkson;
+			pp_table->OverrideAvfsGbCksOn =
+					avfs_params.ucEnableGbFuseTableCkson;
+
+			if ((PPREGKEY_VEGA10QUADRATICEQUATION_DFLT !=
+					data->disp_clk_quad_eqn_a) &&
+				(PPREGKEY_VEGA10QUADRATICEQUATION_DFLT !=
+					data->disp_clk_quad_eqn_b)) {
+				pp_table->DisplayClock2Gfxclk[DSPCLK_DISPCLK].m1 =
+						(int32_t)data->disp_clk_quad_eqn_a;
+				pp_table->DisplayClock2Gfxclk[DSPCLK_DISPCLK].m2 =
+						(int16_t)data->disp_clk_quad_eqn_b;
+				pp_table->DisplayClock2Gfxclk[DSPCLK_DISPCLK].b =
+						(int32_t)data->disp_clk_quad_eqn_c;
+			} else {
+				pp_table->DisplayClock2Gfxclk[DSPCLK_DISPCLK].m1 =
+						(int32_t)avfs_params.ulDispclk2GfxclkM1;
+				pp_table->DisplayClock2Gfxclk[DSPCLK_DISPCLK].m2 =
+						(int16_t)avfs_params.usDispclk2GfxclkM2;
+				pp_table->DisplayClock2Gfxclk[DSPCLK_DISPCLK].b =
+						(int32_t)avfs_params.ulDispclk2GfxclkB;
+			}
+
+			pp_table->DisplayClock2Gfxclk[DSPCLK_DISPCLK].m1_shift = 24;
+			pp_table->DisplayClock2Gfxclk[DSPCLK_DISPCLK].m2_shift = 12;
+
+			if ((PPREGKEY_VEGA10QUADRATICEQUATION_DFLT !=
+					data->dcef_clk_quad_eqn_a) &&
+				(PPREGKEY_VEGA10QUADRATICEQUATION_DFLT !=
+					data->dcef_clk_quad_eqn_b)) {
+				pp_table->DisplayClock2Gfxclk[DSPCLK_DCEFCLK].m1 =
+						(int32_t)data->dcef_clk_quad_eqn_a;
+				pp_table->DisplayClock2Gfxclk[DSPCLK_DCEFCLK].m2 =
+						(int16_t)data->dcef_clk_quad_eqn_b;
+				pp_table->DisplayClock2Gfxclk[DSPCLK_DCEFCLK].b =
+						(int32_t)data->dcef_clk_quad_eqn_c;
+			} else {
+				pp_table->DisplayClock2Gfxclk[DSPCLK_DCEFCLK].m1 =
+						(int32_t)avfs_params.ulDcefclk2GfxclkM1;
+				pp_table->DisplayClock2Gfxclk[DSPCLK_DCEFCLK].m2 =
+						(int16_t)avfs_params.usDcefclk2GfxclkM2;
+				pp_table->DisplayClock2Gfxclk[DSPCLK_DCEFCLK].b =
+						(int32_t)avfs_params.ulDcefclk2GfxclkB;
+			}
+
+			pp_table->DisplayClock2Gfxclk[DSPCLK_DCEFCLK].m1_shift = 24;
+			pp_table->DisplayClock2Gfxclk[DSPCLK_DCEFCLK].m2_shift = 12;
+
+			if ((PPREGKEY_VEGA10QUADRATICEQUATION_DFLT !=
+					data->pixel_clk_quad_eqn_a) &&
+				(PPREGKEY_VEGA10QUADRATICEQUATION_DFLT !=
+					data->pixel_clk_quad_eqn_b)) {
+				pp_table->DisplayClock2Gfxclk[DSPCLK_PIXCLK].m1 =
+						(int32_t)data->pixel_clk_quad_eqn_a;
+				pp_table->DisplayClock2Gfxclk[DSPCLK_PIXCLK].m2 =
+						(int16_t)data->pixel_clk_quad_eqn_b;
+				pp_table->DisplayClock2Gfxclk[DSPCLK_PIXCLK].b =
+						(int32_t)data->pixel_clk_quad_eqn_c;
+			} else {
+				pp_table->DisplayClock2Gfxclk[DSPCLK_PIXCLK].m1 =
+						(int32_t)avfs_params.ulPixelclk2GfxclkM1;
+				pp_table->DisplayClock2Gfxclk[DSPCLK_PIXCLK].m2 =
+						(int16_t)avfs_params.usPixelclk2GfxclkM2;
+				pp_table->DisplayClock2Gfxclk[DSPCLK_PIXCLK].b =
+						(int32_t)avfs_params.ulPixelclk2GfxclkB;
+			}
+
+			pp_table->DisplayClock2Gfxclk[DSPCLK_PIXCLK].m1_shift = 24;
+			pp_table->DisplayClock2Gfxclk[DSPCLK_PIXCLK].m2_shift = 12;
+
+			if ((PPREGKEY_VEGA10QUADRATICEQUATION_DFLT !=
+					data->phy_clk_quad_eqn_a) &&
+				(PPREGKEY_VEGA10QUADRATICEQUATION_DFLT !=
+					data->phy_clk_quad_eqn_b)) {
+				pp_table->DisplayClock2Gfxclk[DSPCLK_PHYCLK].m1 =
+						(int32_t)data->phy_clk_quad_eqn_a;
+				pp_table->DisplayClock2Gfxclk[DSPCLK_PHYCLK].m2 =
+						(int16_t)data->phy_clk_quad_eqn_b;
+				pp_table->DisplayClock2Gfxclk[DSPCLK_PHYCLK].b =
+						(int32_t)data->phy_clk_quad_eqn_c;
+			} else {
+				pp_table->DisplayClock2Gfxclk[DSPCLK_PHYCLK].m1 =
+						(int32_t)avfs_params.ulPhyclk2GfxclkM1;
+				pp_table->DisplayClock2Gfxclk[DSPCLK_PHYCLK].m2 =
+						(int16_t)avfs_params.usPhyclk2GfxclkM2;
+				pp_table->DisplayClock2Gfxclk[DSPCLK_PHYCLK].b =
+						(int32_t)avfs_params.ulPhyclk2GfxclkB;
+			}
+
+			pp_table->DisplayClock2Gfxclk[DSPCLK_PHYCLK].m1_shift = 24;
+			pp_table->DisplayClock2Gfxclk[DSPCLK_PHYCLK].m2_shift = 12;
+		} else {
+			data->smu_features[GNLD_AVFS].supported = false;
+		}
+	}
+
+	return 0;
+}
+
+static int vega10_populate_gpio_parameters(struct pp_hwmgr *hwmgr)
+{
+	struct vega10_hwmgr *data =
+			(struct vega10_hwmgr *)(hwmgr->backend);
+	PPTable_t *pp_table = &(data->smc_state_table.pp_table);
+	struct pp_atomfwctrl_gpio_parameters gpio_params = {0};
+	int result;
+
+	result = pp_atomfwctrl_get_gpio_information(hwmgr, &gpio_params);
+	if (!result) {
+		if (phm_cap_enabled(hwmgr->platform_descriptor.platformCaps,
+				PHM_PlatformCaps_RegulatorHot) &&
+				(data->registry_data.regulator_hot_gpio_support)) {
+			pp_table->VR0HotGpio = gpio_params.ucVR0HotGpio;
+			pp_table->VR0HotPolarity = gpio_params.ucVR0HotPolarity;
+			pp_table->VR1HotGpio = gpio_params.ucVR1HotGpio;
+			pp_table->VR1HotPolarity = gpio_params.ucVR1HotPolarity;
+		} else {
+			pp_table->VR0HotGpio = 0;
+			pp_table->VR0HotPolarity = 0;
+			pp_table->VR1HotGpio = 0;
+			pp_table->VR1HotPolarity = 0;
+		}
+
+		if (phm_cap_enabled(hwmgr->platform_descriptor.platformCaps,
+				PHM_PlatformCaps_AutomaticDCTransition) &&
+				(data->registry_data.ac_dc_switch_gpio_support)) {
+			pp_table->AcDcGpio = gpio_params.ucAcDcGpio;
+			pp_table->AcDcPolarity = gpio_params.ucAcDcPolarity;
+		} else {
+			pp_table->AcDcGpio = 0;
+			pp_table->AcDcPolarity = 0;
+		}
+	}
+
+	return result;
+}
+
+static int vega10_avfs_enable(struct pp_hwmgr *hwmgr, bool enable)
+{
+	struct vega10_hwmgr *data =
+			(struct vega10_hwmgr *)(hwmgr->backend);
+
+	if (data->smu_features[GNLD_AVFS].supported) {
+		if (enable) {
+			PP_ASSERT_WITH_CODE(!vega10_enable_smc_features(hwmgr->smumgr,
+					true,
+					data->smu_features[GNLD_AVFS].smu_feature_bitmap),
+					"[avfs_control] Attempt to Enable AVFS feature Failed!",
+					return -1);
+			data->smu_features[GNLD_AVFS].enabled = true;
+		} else {
+			PP_ASSERT_WITH_CODE(!vega10_enable_smc_features(hwmgr->smumgr,
+					false,
+					data->smu_features[GNLD_AVFS].smu_feature_bitmap),
+					"[avfs_control] Attempt to Disable AVFS feature Failed!",
+					return -1);
+			data->smu_features[GNLD_AVFS].enabled = false;
+		}
+	}
+
+	return 0;
+}
+
+/**
+ * Initializes the SMC table and uploads it.
+ *
+ * @param    hwmgr  the address of the powerplay hardware manager.
+ * @return   0 on success.
+ */
+static int vega10_init_smc_table(struct pp_hwmgr *hwmgr)
+{
+	int result;
+	struct vega10_hwmgr *data =
+			(struct vega10_hwmgr *)(hwmgr->backend);
+	struct phm_ppt_v2_information *table_info =
+			(struct phm_ppt_v2_information *)(hwmgr->pptable);
+	PPTable_t *pp_table = &(data->smc_state_table.pp_table);
+	struct pp_atomfwctrl_voltage_table voltage_table;
+
+	result = vega10_setup_default_dpm_tables(hwmgr);
+	PP_ASSERT_WITH_CODE(!result,
+			"Failed to setup default DPM tables!",
+			return result);
+
+	pp_atomfwctrl_get_voltage_table_v4(hwmgr, VOLTAGE_TYPE_VDDC,
+			VOLTAGE_OBJ_SVID2,  &voltage_table);
+	pp_table->MaxVidStep = voltage_table.max_vid_step;
+
+	pp_table->GfxDpmVoltageMode =
+			(uint8_t)(table_info->uc_gfx_dpm_voltage_mode);
+	pp_table->SocDpmVoltageMode =
+			(uint8_t)(table_info->uc_soc_dpm_voltage_mode);
+	pp_table->UclkDpmVoltageMode =
+			(uint8_t)(table_info->uc_uclk_dpm_voltage_mode);
+	pp_table->UvdDpmVoltageMode =
+			(uint8_t)(table_info->uc_uvd_dpm_voltage_mode);
+	pp_table->VceDpmVoltageMode =
+			(uint8_t)(table_info->uc_vce_dpm_voltage_mode);
+	pp_table->Mp0DpmVoltageMode =
+			(uint8_t)(table_info->uc_mp0_dpm_voltage_mode);
+	pp_table->DisplayDpmVoltageMode =
+			(uint8_t)(table_info->uc_dcef_dpm_voltage_mode);
+
+	if (data->registry_data.ulv_support &&
+			table_info->us_ulv_voltage_offset) {
+		result = vega10_populate_ulv_state(hwmgr);
+		PP_ASSERT_WITH_CODE(!result,
+				"Failed to initialize ULV state!",
+				return result);
+	}
+
+	result = vega10_populate_smc_link_levels(hwmgr);
+	PP_ASSERT_WITH_CODE(!result,
+			"Failed to initialize Link Level!",
+			return result);
+
+	result = vega10_populate_all_graphic_levels(hwmgr);
+	PP_ASSERT_WITH_CODE(!result,
+			"Failed to initialize Graphics Level!",
+			return result);
+
+	result = vega10_populate_all_memory_levels(hwmgr);
+	PP_ASSERT_WITH_CODE(!result,
+			"Failed to initialize Memory Level!",
+			return result);
+
+	result = vega10_populate_all_display_clock_levels(hwmgr);
+	PP_ASSERT_WITH_CODE(!result,
+			"Failed to initialize Display Level!",
+			return result);
+
+	result = vega10_populate_smc_vce_levels(hwmgr);
+	PP_ASSERT_WITH_CODE(!result,
+			"Failed to initialize VCE Level!",
+			return result);
+
+	result = vega10_populate_smc_uvd_levels(hwmgr);
+	PP_ASSERT_WITH_CODE(!result,
+			"Failed to initialize UVD Level!",
+			return result);
+
+	if (phm_cap_enabled(hwmgr->platform_descriptor.platformCaps,
+			PHM_PlatformCaps_ClockStretcher)) {
+		result = vega10_populate_clock_stretcher_table(hwmgr);
+		PP_ASSERT_WITH_CODE(!result,
+				"Failed to populate Clock Stretcher Table!",
+				return result);
+	}
+
+	result = vega10_populate_avfs_parameters(hwmgr);
+	PP_ASSERT_WITH_CODE(!result,
+			"Failed to initialize AVFS Parameters!",
+			return result);
+
+	result = vega10_populate_gpio_parameters(hwmgr);
+	PP_ASSERT_WITH_CODE(!result,
+			"Failed to initialize GPIO Parameters!",
+			return result);
+
+	pp_table->GfxclkAverageAlpha = (uint8_t)
+			(data->gfxclk_average_alpha);
+	pp_table->SocclkAverageAlpha = (uint8_t)
+			(data->socclk_average_alpha);
+	pp_table->UclkAverageAlpha = (uint8_t)
+			(data->uclk_average_alpha);
+	pp_table->GfxActivityAverageAlpha = (uint8_t)
+			(data->gfx_activity_average_alpha);
+
+	result = vega10_copy_table_to_smc(hwmgr->smumgr,
+			(uint8_t *)pp_table, PPTABLE);
+	PP_ASSERT_WITH_CODE(!result,
+			"Failed to upload PPtable!", return result);
+
+	if (data->smu_features[GNLD_AVFS].supported) {
+		uint32_t features_enabled;
+		result = vega10_get_smc_features(hwmgr->smumgr, &features_enabled);
+		PP_ASSERT_WITH_CODE(!result,
+				"Failed to Retrieve Enabled Features!",
+				return result);
+		if (!(features_enabled & (1 << FEATURE_AVFS_BIT))) {
+			result = vega10_perform_btc(hwmgr->smumgr);
+			PP_ASSERT_WITH_CODE(!result,
+					"Failed to Perform BTC!",
+					return result);
+			result = vega10_avfs_enable(hwmgr, true);
+			PP_ASSERT_WITH_CODE(!result,
+					"Attempt to enable AVFS feature Failed!",
+					return result);
+			result = vega10_save_vft_table(hwmgr->smumgr,
+					(uint8_t *)&(data->smc_state_table.avfs_table));
+			PP_ASSERT_WITH_CODE(!result,
+					"Attempt to save VFT table Failed!",
+					return result);
+		} else {
+			data->smu_features[GNLD_AVFS].enabled = true;
+			result = vega10_restore_vft_table(hwmgr->smumgr,
+					(uint8_t *)&(data->smc_state_table.avfs_table));
+			PP_ASSERT_WITH_CODE(!result,
+					"Attempt to restore VFT table Failed!",
+					return result);
+		}
+	}
+
+	return 0;
+}
+
+static int vega10_enable_thermal_protection(struct pp_hwmgr *hwmgr)
+{
+	struct vega10_hwmgr *data = (struct vega10_hwmgr *)(hwmgr->backend);
+
+	if (data->smu_features[GNLD_THERMAL].supported) {
+		if (data->smu_features[GNLD_THERMAL].enabled)
+			pr_info("THERMAL Feature Already enabled!");
+
+		PP_ASSERT_WITH_CODE(
+				!vega10_enable_smc_features(hwmgr->smumgr,
+				true,
+				data->smu_features[GNLD_THERMAL].smu_feature_bitmap),
+				"Enable THERMAL Feature Failed!",
+				return -1);
+		data->smu_features[GNLD_THERMAL].enabled = true;
+	}
+
+	return 0;
+}
+
+static int vega10_enable_vrhot_feature(struct pp_hwmgr *hwmgr)
+{
+	struct vega10_hwmgr *data =
+			(struct vega10_hwmgr *)(hwmgr->backend);
+
+	if (phm_cap_enabled(hwmgr->platform_descriptor.platformCaps,
+			PHM_PlatformCaps_RegulatorHot)) {
+		if (data->smu_features[GNLD_VR0HOT].supported) {
+			PP_ASSERT_WITH_CODE(
+					!vega10_enable_smc_features(hwmgr->smumgr,
+					true,
+					data->smu_features[GNLD_VR0HOT].smu_feature_bitmap),
+					"Attempt to Enable VR0 Hot feature Failed!",
+					return -1);
+			data->smu_features[GNLD_VR0HOT].enabled = true;
+		} else {
+			if (data->smu_features[GNLD_VR1HOT].supported) {
+				PP_ASSERT_WITH_CODE(
+						!vega10_enable_smc_features(hwmgr->smumgr,
+						true,
+						data->smu_features[GNLD_VR1HOT].smu_feature_bitmap),
+						"Attempt to Enable VR1 Hot feature Failed!",
+						return -1);
+				data->smu_features[GNLD_VR1HOT].enabled = true;
+			}
+		}
+	}
+	return 0;
+}
+
+static int vega10_enable_ulv(struct pp_hwmgr *hwmgr)
+{
+	struct vega10_hwmgr *data =
+			(struct vega10_hwmgr *)(hwmgr->backend);
+
+	if (data->registry_data.ulv_support) {
+		PP_ASSERT_WITH_CODE(!vega10_enable_smc_features(hwmgr->smumgr,
+				true, data->smu_features[GNLD_ULV].smu_feature_bitmap),
+				"Enable ULV Feature Failed!",
+				return -1);
+		data->smu_features[GNLD_ULV].enabled = true;
+	}
+
+	return 0;
+}
+
+static int vega10_enable_deep_sleep_master_switch(struct pp_hwmgr *hwmgr)
+{
+	struct vega10_hwmgr *data =
+			(struct vega10_hwmgr *)(hwmgr->backend);
+
+	if (data->smu_features[GNLD_DS_GFXCLK].supported) {
+		PP_ASSERT_WITH_CODE(!vega10_enable_smc_features(hwmgr->smumgr,
+				true, data->smu_features[GNLD_DS_GFXCLK].smu_feature_bitmap),
+				"Attempt to Enable DS_GFXCLK Feature Failed!",
+				return -1);
+		data->smu_features[GNLD_DS_GFXCLK].enabled = true;
+	}
+
+	if (data->smu_features[GNLD_DS_SOCCLK].supported) {
+		PP_ASSERT_WITH_CODE(!vega10_enable_smc_features(hwmgr->smumgr,
+				true, data->smu_features[GNLD_DS_SOCCLK].smu_feature_bitmap),
+				"Attempt to Enable DS_SOCCLK Feature Failed!",
+				return -1);
+		data->smu_features[GNLD_DS_SOCCLK].enabled = true;
+	}
+
+	if (data->smu_features[GNLD_DS_LCLK].supported) {
+		PP_ASSERT_WITH_CODE(!vega10_enable_smc_features(hwmgr->smumgr,
+				true, data->smu_features[GNLD_DS_LCLK].smu_feature_bitmap),
+				"Attempt to Enable DS_LCLK Feature Failed!",
+				return -1);
+		data->smu_features[GNLD_DS_LCLK].enabled = true;
+	}
+
+	return 0;
+}
+
+/**
+ * @brief Tell SMC to enable the supported DPMs.
+ *
+ * @param    hwmgr - the address of the powerplay hardware manager.
+ * @param    bitmap - bitmap of the features to enable.
+ * @return   0 if at least one DPM was successfully enabled.
+ */
+static int vega10_start_dpm(struct pp_hwmgr *hwmgr, uint32_t bitmap)
+{
+	struct vega10_hwmgr *data =
+			(struct vega10_hwmgr *)(hwmgr->backend);
+	uint32_t i, feature_mask = 0;
+
+	for (i = 0; i < GNLD_DPM_MAX; i++) {
+		if (data->smu_features[i].smu_feature_bitmap & bitmap) {
+			if (data->smu_features[i].supported) {
+				if (!data->smu_features[i].enabled) {
+					feature_mask |= data->smu_features[i].
+							smu_feature_bitmap;
+					data->smu_features[i].enabled = true;
+				}
+			}
+		}
+	}
+
+	if (vega10_enable_smc_features(hwmgr->smumgr,
+			true, feature_mask)) {
+		for (i = 0; i < GNLD_DPM_MAX; i++) {
+			if (data->smu_features[i].smu_feature_bitmap &
+					feature_mask)
+				data->smu_features[i].enabled = false;
+		}
+	}
+
+	if (data->smu_features[GNLD_LED_DISPLAY].supported) {
+		PP_ASSERT_WITH_CODE(!vega10_enable_smc_features(hwmgr->smumgr,
+				true, data->smu_features[GNLD_LED_DISPLAY].smu_feature_bitmap),
+		"Attempt to Enable LED DPM feature Failed!", return -EINVAL);
+		data->smu_features[GNLD_LED_DISPLAY].enabled = true;
+	}
+
+	if (phm_cap_enabled(hwmgr->platform_descriptor.platformCaps,
+			PHM_PlatformCaps_Falcon_QuickTransition)) {
+		if (data->smu_features[GNLD_ACDC].supported) {
+			PP_ASSERT_WITH_CODE(!vega10_enable_smc_features(hwmgr->smumgr,
+					true, data->smu_features[GNLD_ACDC].smu_feature_bitmap),
+					"Attempt to Enable ACDC Feature Failed!",
+					return -1);
+			data->smu_features[GNLD_ACDC].enabled = true;
+		}
+	}
+
+	return 0;
+}
+
+static int vega10_enable_dpm_tasks(struct pp_hwmgr *hwmgr)
+{
+	struct vega10_hwmgr *data =
+			(struct vega10_hwmgr *)(hwmgr->backend);
+	int tmp_result, result = 0;
+
+	tmp_result = smum_send_msg_to_smc_with_parameter(hwmgr->smumgr,
+			PPSMC_MSG_ConfigureTelemetry, data->config_telemetry);
+	PP_ASSERT_WITH_CODE(!tmp_result,
+			"Failed to configure telemetry!",
+			return tmp_result);
+
+	vega10_set_tools_address(hwmgr->smumgr);
+
+	tmp_result = (!vega10_is_dpm_running(hwmgr)) ? 0 : -1;
+	PP_ASSERT_WITH_CODE(!tmp_result,
+			"DPM is already running, skipping re-enablement!",
+			return 0);
+
+	tmp_result = vega10_construct_voltage_tables(hwmgr);
+	PP_ASSERT_WITH_CODE(!tmp_result,
+			"Failed to construct voltage tables!",
+			result = tmp_result);
+
+	tmp_result = vega10_init_smc_table(hwmgr);
+	PP_ASSERT_WITH_CODE(!tmp_result,
+			"Failed to initialize SMC table!",
+			result = tmp_result);
+
+	if (phm_cap_enabled(hwmgr->platform_descriptor.platformCaps,
+			PHM_PlatformCaps_ThermalController)) {
+		tmp_result = vega10_enable_thermal_protection(hwmgr);
+		PP_ASSERT_WITH_CODE(!tmp_result,
+				"Failed to enable thermal protection!",
+				result = tmp_result);
+	}
+
+	tmp_result = vega10_enable_vrhot_feature(hwmgr);
+	PP_ASSERT_WITH_CODE(!tmp_result,
+			"Failed to enable VR hot feature!",
+			result = tmp_result);
+
+	tmp_result = vega10_enable_ulv(hwmgr);
+	PP_ASSERT_WITH_CODE(!tmp_result,
+			"Failed to enable ULV!",
+			result = tmp_result);
+
+	tmp_result = vega10_enable_deep_sleep_master_switch(hwmgr);
+	PP_ASSERT_WITH_CODE(!tmp_result,
+			"Failed to enable deep sleep master switch!",
+			result = tmp_result);
+
+	tmp_result = vega10_start_dpm(hwmgr, SMC_DPM_FEATURES);
+	PP_ASSERT_WITH_CODE(!tmp_result,
+			"Failed to start DPM!", result = tmp_result);
+
+	tmp_result = vega10_enable_power_containment(hwmgr);
+	PP_ASSERT_WITH_CODE(!tmp_result,
+			"Failed to enable power containment!",
+			result = tmp_result);
+
+	tmp_result = vega10_power_control_set_level(hwmgr);
+	PP_ASSERT_WITH_CODE(!tmp_result,
+			"Failed to power control set level!",
+			result = tmp_result);
+
+	return result;
+}
+
+static int vega10_get_power_state_size(struct pp_hwmgr *hwmgr)
+{
+	return sizeof(struct vega10_power_state);
+}
+
+static int vega10_get_pp_table_entry_callback_func(struct pp_hwmgr *hwmgr,
+		void *state, struct pp_power_state *power_state,
+		void *pp_table, uint32_t classification_flag)
+{
+	struct vega10_power_state *vega10_power_state =
+			cast_phw_vega10_power_state(&(power_state->hardware));
+	struct vega10_performance_level *performance_level;
+	ATOM_Vega10_State *state_entry = (ATOM_Vega10_State *)state;
+	ATOM_Vega10_POWERPLAYTABLE *powerplay_table =
+			(ATOM_Vega10_POWERPLAYTABLE *)pp_table;
+	ATOM_Vega10_SOCCLK_Dependency_Table *socclk_dep_table =
+			(ATOM_Vega10_SOCCLK_Dependency_Table *)
+			(((unsigned long)powerplay_table) +
+			le16_to_cpu(powerplay_table->usSocclkDependencyTableOffset));
+	ATOM_Vega10_GFXCLK_Dependency_Table *gfxclk_dep_table =
+			(ATOM_Vega10_GFXCLK_Dependency_Table *)
+			(((unsigned long)powerplay_table) +
+			le16_to_cpu(powerplay_table->usGfxclkDependencyTableOffset));
+	ATOM_Vega10_MCLK_Dependency_Table *mclk_dep_table =
+			(ATOM_Vega10_MCLK_Dependency_Table *)
+			(((unsigned long)powerplay_table) +
+			le16_to_cpu(powerplay_table->usMclkDependencyTableOffset));
+
+	/* The following fields are not initialized here:
+	 * id, orderedList, allStatesList
+	 */
+	power_state->classification.ui_label =
+			(le16_to_cpu(state_entry->usClassification) &
+			ATOM_PPLIB_CLASSIFICATION_UI_MASK) >>
+			ATOM_PPLIB_CLASSIFICATION_UI_SHIFT;
+	power_state->classification.flags = classification_flag;
+	/* NOTE: There is a classification2 flag in BIOS
+	 * that is not being used right now
+	 */
+	power_state->classification.temporary_state = false;
+	power_state->classification.to_be_deleted = false;
+
+	power_state->validation.disallowOnDC =
+			((le32_to_cpu(state_entry->ulCapsAndSettings) &
+					ATOM_Vega10_DISALLOW_ON_DC) != 0);
+
+	power_state->display.disableFrameModulation = false;
+	power_state->display.limitRefreshrate = false;
+	power_state->display.enableVariBright =
+			((le32_to_cpu(state_entry->ulCapsAndSettings) &
+					ATOM_Vega10_ENABLE_VARIBRIGHT) != 0);
+
+	power_state->validation.supportedPowerLevels = 0;
+	power_state->uvd_clocks.VCLK = 0;
+	power_state->uvd_clocks.DCLK = 0;
+	power_state->temperatures.min = 0;
+	power_state->temperatures.max = 0;
+
+	performance_level = &(vega10_power_state->performance_levels
+			[vega10_power_state->performance_level_count++]);
+
+	PP_ASSERT_WITH_CODE(
+			(vega10_power_state->performance_level_count <
+					NUM_GFXCLK_DPM_LEVELS),
+			"Performance levels exceed SMC limit!",
+			return -1);
+
+	PP_ASSERT_WITH_CODE(
+			(vega10_power_state->performance_level_count <=
+					hwmgr->platform_descriptor.
+					hardwareActivityPerformanceLevels),
+			"Performance levels exceed driver limit!",
+			return -1);
+
+	/* Performance levels are arranged from low to high. */
+	performance_level->soc_clock = socclk_dep_table->entries
+			[state_entry->ucSocClockIndexLow].ulClk;
+	performance_level->gfx_clock = gfxclk_dep_table->entries
+			[state_entry->ucGfxClockIndexLow].ulClk;
+	performance_level->mem_clock = mclk_dep_table->entries
+			[state_entry->ucMemClockIndexLow].ulMemClk;
+
+	performance_level = &(vega10_power_state->performance_levels
+				[vega10_power_state->performance_level_count++]);
+
+	performance_level->soc_clock = socclk_dep_table->entries
+			[state_entry->ucSocClockIndexHigh].ulClk;
+	performance_level->gfx_clock = gfxclk_dep_table->entries
+			[state_entry->ucGfxClockIndexHigh].ulClk;
+	performance_level->mem_clock = mclk_dep_table->entries
+			[state_entry->ucMemClockIndexHigh].ulMemClk;
+	return 0;
+}
+
+static int vega10_get_pp_table_entry(struct pp_hwmgr *hwmgr,
+		unsigned long entry_index, struct pp_power_state *state)
+{
+	int result;
+	struct vega10_power_state *ps;
+
+	state->hardware.magic = PhwVega10_Magic;
+
+	ps = cast_phw_vega10_power_state(&state->hardware);
+
+	result = vega10_get_powerplay_table_entry(hwmgr, entry_index, state,
+			vega10_get_pp_table_entry_callback_func);
+	if (result)
+		return result;
+
+	/*
+	 * This is the earliest time we have all the dependency tables
+	 * and the VBIOS boot state.
+	 */
+	/* Set the DC compatible flag if this state supports DC. */
+	if (!state->validation.disallowOnDC)
+		ps->dc_compatible = true;
+
+	ps->uvd_clks.vclk = state->uvd_clocks.VCLK;
+	ps->uvd_clks.dclk = state->uvd_clocks.DCLK;
+
+	return 0;
+}
+
+static int vega10_patch_boot_state(struct pp_hwmgr *hwmgr,
+	     struct pp_hw_power_state *hw_ps)
+{
+	return 0;
+}
+
+static int vega10_apply_state_adjust_rules(struct pp_hwmgr *hwmgr,
+				struct pp_power_state  *request_ps,
+			const struct pp_power_state *current_ps)
+{
+	struct vega10_power_state *vega10_ps =
+				cast_phw_vega10_power_state(&request_ps->hardware);
+	uint32_t sclk;
+	uint32_t mclk;
+	struct PP_Clocks minimum_clocks = {0};
+	bool disable_mclk_switching;
+	bool disable_mclk_switching_for_frame_lock;
+	bool disable_mclk_switching_for_vr;
+	bool force_mclk_high;
+	struct cgs_display_info info = {0};
+	const struct phm_clock_and_voltage_limits *max_limits;
+	uint32_t i;
+	struct vega10_hwmgr *data = (struct vega10_hwmgr *)(hwmgr->backend);
+	struct phm_ppt_v2_information *table_info =
+			(struct phm_ppt_v2_information *)(hwmgr->pptable);
+	int32_t count;
+	uint32_t stable_pstate_sclk_dpm_percentage;
+	uint32_t stable_pstate_sclk = 0, stable_pstate_mclk = 0;
+	uint32_t latency;
+
+	data->battery_state = (PP_StateUILabel_Battery ==
+			request_ps->classification.ui_label);
+
+	if (vega10_ps->performance_level_count != 2)
+		pr_info("Vega10 should always have 2 performance levels\n");
+
+	max_limits = (PP_PowerSource_AC == hwmgr->power_source) ?
+			&(hwmgr->dyn_state.max_clock_voltage_on_ac) :
+			&(hwmgr->dyn_state.max_clock_voltage_on_dc);
+
+	/* Cap clock DPM tables at DC MAX if it is in DC. */
+	if (PP_PowerSource_DC == hwmgr->power_source) {
+		for (i = 0; i < vega10_ps->performance_level_count; i++) {
+			if (vega10_ps->performance_levels[i].mem_clock >
+				max_limits->mclk)
+				vega10_ps->performance_levels[i].mem_clock =
+						max_limits->mclk;
+			if (vega10_ps->performance_levels[i].gfx_clock >
+				max_limits->sclk)
+				vega10_ps->performance_levels[i].gfx_clock =
+						max_limits->sclk;
+		}
+	}
+
+	vega10_ps->vce_clks.evclk = hwmgr->vce_arbiter.evclk;
+	vega10_ps->vce_clks.ecclk = hwmgr->vce_arbiter.ecclk;
+
+	cgs_get_active_displays_info(hwmgr->device, &info);
+
+	/* result = PHM_CheckVBlankTime(hwmgr, &vblankTooShort);*/
+	minimum_clocks.engineClock = hwmgr->display_config.min_core_set_clock;
+	/* minimum_clocks.memoryClock = hwmgr->display_config.min_mem_set_clock; */
+
+	if (phm_cap_enabled(hwmgr->platform_descriptor.platformCaps,
+			PHM_PlatformCaps_StablePState)) {
+		PP_ASSERT_WITH_CODE(
+			data->registry_data.stable_pstate_sclk_dpm_percentage >= 1 &&
+			data->registry_data.stable_pstate_sclk_dpm_percentage <= 100,
+			"percent sclk value must range from 1% to 100%, setting default value",
+			stable_pstate_sclk_dpm_percentage = 75);
+
+		max_limits = &(hwmgr->dyn_state.max_clock_voltage_on_ac);
+		stable_pstate_sclk = (max_limits->sclk *
+				stable_pstate_sclk_dpm_percentage) / 100;
+
+		for (count = table_info->vdd_dep_on_sclk->count - 1;
+				count >= 0; count--) {
+			if (stable_pstate_sclk >=
+					table_info->vdd_dep_on_sclk->entries[count].clk) {
+				stable_pstate_sclk =
+						table_info->vdd_dep_on_sclk->entries[count].clk;
+				break;
+			}
+		}
+
+		if (count < 0)
+			stable_pstate_sclk = table_info->vdd_dep_on_sclk->entries[0].clk;
+
+		stable_pstate_mclk = max_limits->mclk;
+
+		minimum_clocks.engineClock = stable_pstate_sclk;
+		minimum_clocks.memoryClock = stable_pstate_mclk;
+	}
+
+	if (minimum_clocks.engineClock < hwmgr->gfx_arbiter.sclk)
+		minimum_clocks.engineClock = hwmgr->gfx_arbiter.sclk;
+
+	if (minimum_clocks.memoryClock < hwmgr->gfx_arbiter.mclk)
+		minimum_clocks.memoryClock = hwmgr->gfx_arbiter.mclk;
+
+	vega10_ps->sclk_threshold = hwmgr->gfx_arbiter.sclk_threshold;
+
+	if (hwmgr->gfx_arbiter.sclk_over_drive) {
+		PP_ASSERT_WITH_CODE((hwmgr->gfx_arbiter.sclk_over_drive <=
+				hwmgr->platform_descriptor.overdriveLimit.engineClock),
+				"Overdrive sclk exceeds limit",
+				hwmgr->gfx_arbiter.sclk_over_drive =
+						hwmgr->platform_descriptor.overdriveLimit.engineClock);
+
+		if (hwmgr->gfx_arbiter.sclk_over_drive >= hwmgr->gfx_arbiter.sclk)
+			vega10_ps->performance_levels[1].gfx_clock =
+					hwmgr->gfx_arbiter.sclk_over_drive;
+	}
+
+	if (hwmgr->gfx_arbiter.mclk_over_drive) {
+		PP_ASSERT_WITH_CODE((hwmgr->gfx_arbiter.mclk_over_drive <=
+				hwmgr->platform_descriptor.overdriveLimit.memoryClock),
+				"Overdrive mclk exceeds limit",
+				hwmgr->gfx_arbiter.mclk_over_drive =
+						hwmgr->platform_descriptor.overdriveLimit.memoryClock);
+
+		if (hwmgr->gfx_arbiter.mclk_over_drive >= hwmgr->gfx_arbiter.mclk)
+			vega10_ps->performance_levels[1].mem_clock =
+					hwmgr->gfx_arbiter.mclk_over_drive;
+	}
+
+	disable_mclk_switching_for_frame_lock = phm_cap_enabled(
+				    hwmgr->platform_descriptor.platformCaps,
+				    PHM_PlatformCaps_DisableMclkSwitchingForFrameLock);
+	disable_mclk_switching_for_vr = phm_cap_enabled(hwmgr->platform_descriptor.platformCaps,
+			PHM_PlatformCaps_DisableMclkSwitchForVR);
+	force_mclk_high = phm_cap_enabled(hwmgr->platform_descriptor.platformCaps,
+			PHM_PlatformCaps_ForceMclkHigh);
+
+	disable_mclk_switching = (info.display_count > 1) ||
+				    disable_mclk_switching_for_frame_lock ||
+				    disable_mclk_switching_for_vr ||
+				    force_mclk_high;
+
+	sclk = vega10_ps->performance_levels[0].gfx_clock;
+	mclk = vega10_ps->performance_levels[0].mem_clock;
+
+	if (sclk < minimum_clocks.engineClock)
+		sclk = (minimum_clocks.engineClock > max_limits->sclk) ?
+				max_limits->sclk : minimum_clocks.engineClock;
+
+	if (mclk < minimum_clocks.memoryClock)
+		mclk = (minimum_clocks.memoryClock > max_limits->mclk) ?
+				max_limits->mclk : minimum_clocks.memoryClock;
+
+	vega10_ps->performance_levels[0].gfx_clock = sclk;
+	vega10_ps->performance_levels[0].mem_clock = mclk;
+
+	vega10_ps->performance_levels[1].gfx_clock =
+		(vega10_ps->performance_levels[1].gfx_clock >=
+				vega10_ps->performance_levels[0].gfx_clock) ?
+						vega10_ps->performance_levels[1].gfx_clock :
+						vega10_ps->performance_levels[0].gfx_clock;
+
+	if (disable_mclk_switching) {
+		/* Set Mclk the max of level 0 and level 1 */
+		if (mclk < vega10_ps->performance_levels[1].mem_clock)
+			mclk = vega10_ps->performance_levels[1].mem_clock;
+
+		/* Find the lowest MCLK frequency that is within
+		 * the tolerable latency defined in DAL
+		 */
+		latency = 0;
+		for (i = 0; i < data->mclk_latency_table.count; i++) {
+			if ((data->mclk_latency_table.entries[i].latency <= latency) &&
+				(data->mclk_latency_table.entries[i].frequency >=
+						vega10_ps->performance_levels[0].mem_clock) &&
+				(data->mclk_latency_table.entries[i].frequency <=
+						vega10_ps->performance_levels[1].mem_clock))
+				mclk = data->mclk_latency_table.entries[i].frequency;
+		}
+		vega10_ps->performance_levels[0].mem_clock = mclk;
+	} else {
+		if (vega10_ps->performance_levels[1].mem_clock <
+				vega10_ps->performance_levels[0].mem_clock)
+			vega10_ps->performance_levels[1].mem_clock =
+					vega10_ps->performance_levels[0].mem_clock;
+	}
+
+	if (phm_cap_enabled(hwmgr->platform_descriptor.platformCaps,
+			PHM_PlatformCaps_StablePState)) {
+		for (i = 0; i < vega10_ps->performance_level_count; i++) {
+			vega10_ps->performance_levels[i].gfx_clock = stable_pstate_sclk;
+			vega10_ps->performance_levels[i].mem_clock = stable_pstate_mclk;
+		}
+	}
+
+	return 0;
+}
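The clamping applied to the low performance level above (raise it to the display or stable-pstate minimum, but never past the AC/DC cap) can be sketched as a standalone helper. This is a minimal, out-of-tree illustration; `clamp_clock` is a hypothetical name, not a function in the driver.

```c
#include <assert.h>
#include <stdint.h>

/* Raise clk to min_clk if it is below it, but never exceed max_clk.
 * Mirrors the sclk/mclk adjustment in vega10_apply_state_adjust_rules(). */
static uint32_t clamp_clock(uint32_t clk, uint32_t min_clk, uint32_t max_clk)
{
	if (clk < min_clk)
		clk = min_clk > max_clk ? max_clk : min_clk;
	return clk;
}
```

Note that when the requested minimum itself exceeds the platform cap, the cap wins, matching the ternary in the patch.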
+
+static int vega10_find_dpm_states_clocks_in_dpm_table(struct pp_hwmgr *hwmgr, const void *input)
+{
+	const struct phm_set_power_state_input *states =
+			(const struct phm_set_power_state_input *)input;
+	const struct vega10_power_state *vega10_ps =
+			cast_const_phw_vega10_power_state(states->pnew_state);
+	struct vega10_hwmgr *data =
+			(struct vega10_hwmgr *)(hwmgr->backend);
+	struct vega10_single_dpm_table *sclk_table =
+			&(data->dpm_table.gfx_table);
+	uint32_t sclk = vega10_ps->performance_levels
+			[vega10_ps->performance_level_count - 1].gfx_clock;
+	struct vega10_single_dpm_table *mclk_table =
+			&(data->dpm_table.mem_table);
+	uint32_t mclk = vega10_ps->performance_levels
+			[vega10_ps->performance_level_count - 1].mem_clock;
+	struct PP_Clocks min_clocks = {0};
+	uint32_t i;
+	struct cgs_display_info info = {0};
+
+	data->need_update_dpm_table = 0;
+
+	if (phm_cap_enabled(hwmgr->platform_descriptor.platformCaps,
+			PHM_PlatformCaps_ODNinACSupport) ||
+		phm_cap_enabled(hwmgr->platform_descriptor.platformCaps,
+				PHM_PlatformCaps_ODNinDCSupport)) {
+		for (i = 0; i < sclk_table->count; i++) {
+			if (sclk == sclk_table->dpm_levels[i].value)
+				break;
+		}
+
+		if (!(data->apply_overdrive_next_settings_mask &
+				DPMTABLE_OD_UPDATE_SCLK) && i >= sclk_table->count) {
+			/* Check SCLK in DAL's minimum clocks
+			 * in case DeepSleep divider update is required.
+			 */
+			if (data->display_timing.min_clock_in_sr !=
+					min_clocks.engineClockInSR &&
+				(min_clocks.engineClockInSR >=
+						VEGA10_MINIMUM_ENGINE_CLOCK ||
+					data->display_timing.min_clock_in_sr >=
+						VEGA10_MINIMUM_ENGINE_CLOCK))
+				data->need_update_dpm_table |= DPMTABLE_UPDATE_SCLK;
+		}
+
+		cgs_get_active_displays_info(hwmgr->device, &info);
+
+		if (data->display_timing.num_existing_displays !=
+				info.display_count)
+			data->need_update_dpm_table |= DPMTABLE_UPDATE_MCLK;
+	} else {
+		for (i = 0; i < sclk_table->count; i++) {
+			if (sclk == sclk_table->dpm_levels[i].value)
+				break;
+		}
+
+		if (i >= sclk_table->count) {
+			data->need_update_dpm_table |= DPMTABLE_OD_UPDATE_SCLK;
+		} else {
+			/* Check SCLK in DAL's minimum clocks
+			 * in case DeepSleep divider update is required.
+			 */
+			if (data->display_timing.min_clock_in_sr !=
+					min_clocks.engineClockInSR &&
+				(min_clocks.engineClockInSR >=
+						VEGA10_MINIMUM_ENGINE_CLOCK ||
+					data->display_timing.min_clock_in_sr >=
+						VEGA10_MINIMUM_ENGINE_CLOCK))
+				data->need_update_dpm_table |= DPMTABLE_UPDATE_SCLK;
+		}
+
+		for (i = 0; i < mclk_table->count; i++) {
+			if (mclk == mclk_table->dpm_levels[i].value)
+				break;
+		}
+
+		cgs_get_active_displays_info(hwmgr->device, &info);
+
+		if (i >= mclk_table->count)
+			data->need_update_dpm_table |= DPMTABLE_OD_UPDATE_MCLK;
+
+		if (data->display_timing.num_existing_displays !=
+				info.display_count ||
+				i >= mclk_table->count)
+			data->need_update_dpm_table |= DPMTABLE_UPDATE_MCLK;
+	}
+	return 0;
+}
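The function above decides between the OD_UPDATE path (rebuild the table) and the plain UPDATE path (only soft limits change) by checking whether the requested clock is one of the table's level values. A minimal sketch of that membership test, assuming a plain array of level values (`clock_in_table` is a hypothetical helper, not driver API):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* True if clk exactly matches one of the DPM level values;
 * a miss means the table itself must be repopulated (OD update). */
static bool clock_in_table(const uint32_t *values, uint32_t count,
			   uint32_t clk)
{
	uint32_t i;

	for (i = 0; i < count; i++)
		if (values[i] == clk)
			return true;
	return false;
}
```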
+
+static int vega10_populate_and_upload_sclk_mclk_dpm_levels(
+		struct pp_hwmgr *hwmgr, const void *input)
+{
+	int result = 0;
+	const struct phm_set_power_state_input *states =
+			(const struct phm_set_power_state_input *)input;
+	const struct vega10_power_state *vega10_ps =
+			cast_const_phw_vega10_power_state(states->pnew_state);
+	struct vega10_hwmgr *data =
+			(struct vega10_hwmgr *)(hwmgr->backend);
+	uint32_t sclk = vega10_ps->performance_levels
+			[vega10_ps->performance_level_count - 1].gfx_clock;
+	uint32_t mclk = vega10_ps->performance_levels
+			[vega10_ps->performance_level_count - 1].mem_clock;
+	struct vega10_dpm_table *dpm_table = &data->dpm_table;
+	struct vega10_dpm_table *golden_dpm_table =
+			&data->golden_dpm_table;
+	uint32_t dpm_count, clock_percent;
+	uint32_t i;
+
+	if (phm_cap_enabled(hwmgr->platform_descriptor.platformCaps,
+			PHM_PlatformCaps_ODNinACSupport) ||
+		phm_cap_enabled(hwmgr->platform_descriptor.platformCaps,
+			PHM_PlatformCaps_ODNinDCSupport)) {
+
+		if (!data->need_update_dpm_table &&
+			!data->apply_optimized_settings &&
+			!data->apply_overdrive_next_settings_mask)
+			return 0;
+
+		if (data->apply_overdrive_next_settings_mask &
+				DPMTABLE_OD_UPDATE_SCLK) {
+			for (dpm_count = 0;
+					dpm_count < dpm_table->gfx_table.count;
+					dpm_count++) {
+				dpm_table->gfx_table.dpm_levels[dpm_count].enabled =
+						data->odn_dpm_table.odn_core_clock_dpm_levels.
+						performance_level_entries[dpm_count].enabled;
+				dpm_table->gfx_table.dpm_levels[dpm_count].value =
+						data->odn_dpm_table.odn_core_clock_dpm_levels.
+						performance_level_entries[dpm_count].clock;
+			}
+		}
+
+		if (data->apply_overdrive_next_settings_mask &
+				DPMTABLE_OD_UPDATE_MCLK) {
+			for (dpm_count = 0;
+					dpm_count < dpm_table->mem_table.count;
+					dpm_count++) {
+				dpm_table->mem_table.dpm_levels[dpm_count].enabled =
+						data->odn_dpm_table.odn_memory_clock_dpm_levels.
+						performance_level_entries[dpm_count].enabled;
+				dpm_table->mem_table.dpm_levels[dpm_count].value =
+						data->odn_dpm_table.odn_memory_clock_dpm_levels.
+						performance_level_entries[dpm_count].clock;
+			}
+		}
+
+		if ((data->need_update_dpm_table & DPMTABLE_UPDATE_SCLK) ||
+			data->apply_optimized_settings ||
+			(data->apply_overdrive_next_settings_mask &
+					DPMTABLE_OD_UPDATE_SCLK)) {
+			result = vega10_populate_all_graphic_levels(hwmgr);
+			PP_ASSERT_WITH_CODE(!result,
+					"Failed to populate SCLK during "
+					"PopulateNewDPMClocksStates Function!",
+					return result);
+		}
+
+		if ((data->need_update_dpm_table & DPMTABLE_UPDATE_MCLK) ||
+			(data->apply_overdrive_next_settings_mask &
+					DPMTABLE_OD_UPDATE_MCLK)) {
+			result = vega10_populate_all_memory_levels(hwmgr);
+			PP_ASSERT_WITH_CODE(!result,
+					"Failed to populate MCLK during "
+					"PopulateNewDPMClocksStates Function!",
+					return result);
+		}
+	} else {
+		if (!data->need_update_dpm_table &&
+				!data->apply_optimized_settings)
+			return 0;
+
+		if (data->need_update_dpm_table & DPMTABLE_OD_UPDATE_SCLK &&
+				data->smu_features[GNLD_DPM_GFXCLK].supported) {
+				dpm_table->
+				gfx_table.dpm_levels[dpm_table->gfx_table.count - 1].
+				value = sclk;
+
+				if (phm_cap_enabled(hwmgr->platform_descriptor.platformCaps,
+						PHM_PlatformCaps_OD6PlusinACSupport) ||
+					phm_cap_enabled(hwmgr->platform_descriptor.platformCaps,
+							PHM_PlatformCaps_OD6PlusinDCSupport)) {
+					/* Need to do calculation based on the golden DPM table
+					 * as the Heatmap GPU Clock axis is also based on
+					 * the default values
+					 */
+					PP_ASSERT_WITH_CODE(
+							golden_dpm_table->gfx_table.dpm_levels
+							[golden_dpm_table->gfx_table.count - 1].value,
+							"Divide by 0!",
+							return -1);
+
+					dpm_count = dpm_table->gfx_table.count < 2 ?
+							0 : dpm_table->gfx_table.count - 2;
+					for (i = dpm_count; i > 1; i--) {
+						if (sclk > golden_dpm_table->gfx_table.dpm_levels
+							[golden_dpm_table->gfx_table.count - 1].value) {
+							clock_percent =
+								((sclk - golden_dpm_table->gfx_table.dpm_levels
+								[golden_dpm_table->gfx_table.count - 1].value) *
+								100) /
+								golden_dpm_table->gfx_table.dpm_levels
+								[golden_dpm_table->gfx_table.count - 1].value;
+
+							dpm_table->gfx_table.dpm_levels[i].value =
+								golden_dpm_table->gfx_table.dpm_levels[i].value +
+								(golden_dpm_table->gfx_table.dpm_levels[i].value *
+								clock_percent) / 100;
+						} else if (golden_dpm_table->gfx_table.dpm_levels
+								[golden_dpm_table->gfx_table.count - 1].value >
+								sclk) {
+							clock_percent =
+								((golden_dpm_table->gfx_table.dpm_levels
+								[golden_dpm_table->gfx_table.count - 1].value -
+								sclk) *	100) /
+								golden_dpm_table->gfx_table.dpm_levels
+								[golden_dpm_table->gfx_table.count-1].value;
+
+							dpm_table->gfx_table.dpm_levels[i].value =
+								golden_dpm_table->gfx_table.dpm_levels[i].value -
+								(golden_dpm_table->gfx_table.dpm_levels[i].value *
+								clock_percent) / 100;
+						} else {
+							dpm_table->gfx_table.dpm_levels[i].value =
+								golden_dpm_table->gfx_table.dpm_levels[i].value;
+						}
+					}
+				}
+			}
+
+		if (data->need_update_dpm_table & DPMTABLE_OD_UPDATE_MCLK &&
+				data->smu_features[GNLD_DPM_UCLK].supported) {
+			dpm_table->
+			mem_table.dpm_levels[dpm_table->mem_table.count - 1].
+			value = mclk;
+
+			if (phm_cap_enabled(hwmgr->platform_descriptor.platformCaps,
+					PHM_PlatformCaps_OD6PlusinACSupport) ||
+				phm_cap_enabled(hwmgr->platform_descriptor.platformCaps,
+						PHM_PlatformCaps_OD6PlusinDCSupport)) {
+
+				PP_ASSERT_WITH_CODE(
+					golden_dpm_table->mem_table.dpm_levels
+					[golden_dpm_table->mem_table.count - 1].value,
+					"Divide by 0!",
+					return -1);
+
+				dpm_count = dpm_table->mem_table.count < 2 ?
+						0 : dpm_table->mem_table.count - 2;
+				for (i = dpm_count; i > 1; i--) {
+					if (mclk > golden_dpm_table->mem_table.dpm_levels
+						[golden_dpm_table->mem_table.count-1].value) {
+						clock_percent = ((mclk -
+							golden_dpm_table->mem_table.dpm_levels
+							[golden_dpm_table->mem_table.count-1].value) *
+							100) /
+							golden_dpm_table->mem_table.dpm_levels
+							[golden_dpm_table->mem_table.count-1].value;
+
+						dpm_table->mem_table.dpm_levels[i].value =
+							golden_dpm_table->mem_table.dpm_levels[i].value +
+							(golden_dpm_table->mem_table.dpm_levels[i].value *
+							clock_percent) / 100;
+					} else if (golden_dpm_table->mem_table.dpm_levels
+							[golden_dpm_table->mem_table.count - 1].value > mclk) {
+						clock_percent = ((golden_dpm_table->mem_table.dpm_levels
+							[golden_dpm_table->mem_table.count-1].value - mclk) *
+							100) /
+							golden_dpm_table->mem_table.dpm_levels
+							[golden_dpm_table->mem_table.count-1].value;
+
+						dpm_table->mem_table.dpm_levels[i].value =
+							golden_dpm_table->mem_table.dpm_levels[i].value -
+							(golden_dpm_table->mem_table.dpm_levels[i].value *
+							clock_percent) / 100;
+					} else {
+						dpm_table->mem_table.dpm_levels[i].value =
+							golden_dpm_table->mem_table.dpm_levels[i].value;
+					}
+				}
+			}
+		}
+
+		if ((data->need_update_dpm_table &
+			(DPMTABLE_OD_UPDATE_SCLK | DPMTABLE_UPDATE_SCLK)) ||
+			data->apply_optimized_settings) {
+			result = vega10_populate_all_graphic_levels(hwmgr);
+			PP_ASSERT_WITH_CODE(!result,
+					"Failed to populate SCLK during "
+					"PopulateNewDPMClocksStates Function!",
+					return result);
+		}
+
+		if (data->need_update_dpm_table &
+				(DPMTABLE_OD_UPDATE_MCLK | DPMTABLE_UPDATE_MCLK)) {
+			result = vega10_populate_all_memory_levels(hwmgr);
+			PP_ASSERT_WITH_CODE(!result,
+					"Failed to populate MCLK during "
+					"PopulateNewDPMClocksStates Function!",
+					return result);
+		}
+	}
+
+	return result;
+}
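The golden-table scaling in the function above shifts every intermediate DPM level by the same percentage that the top level was overdriven away from its golden (default) value. A minimal sketch of that arithmetic, using the same integer percent math as the patch (`scale_level` is a hypothetical helper, not part of the driver):

```c
#include <assert.h>
#include <stdint.h>

/* Shift an intermediate golden level by the percentage the top level
 * moved: up when new_max > golden_max, down when it was underclocked. */
static uint32_t scale_level(uint32_t golden_level,
			    uint32_t golden_max, uint32_t new_max)
{
	uint32_t percent;

	if (new_max > golden_max) {
		percent = ((new_max - golden_max) * 100) / golden_max;
		return golden_level + (golden_level * percent) / 100;
	} else if (golden_max > new_max) {
		percent = ((golden_max - new_max) * 100) / golden_max;
		return golden_level - (golden_level * percent) / 100;
	}
	return golden_level;
}
```

With a golden top level of 1500 MHz overdriven to 1650 MHz (+10%), a 1000 MHz intermediate level scales to 1100 MHz, which is why the patch guards against a zero golden top value before dividing.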
+
+static int vega10_trim_single_dpm_states(struct pp_hwmgr *hwmgr,
+		struct vega10_single_dpm_table *dpm_table,
+		uint32_t low_limit, uint32_t high_limit)
+{
+	uint32_t i;
+
+	for (i = 0; i < dpm_table->count; i++) {
+		if ((dpm_table->dpm_levels[i].value < low_limit) ||
+		    (dpm_table->dpm_levels[i].value > high_limit))
+			dpm_table->dpm_levels[i].enabled = false;
+		else
+			dpm_table->dpm_levels[i].enabled = true;
+	}
+	return 0;
+}
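The trim loop above can be reduced to a window test: a level stays enabled only if its clock falls inside the [low, high] range of the requested power state. A self-contained sketch under simplified types (`struct dpm_level` and `trim_levels` are illustrative names, not the driver's):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

struct dpm_level {
	uint32_t value;		/* clock in the driver's units */
	bool enabled;
};

/* Enable exactly the levels whose value lies within [low, high]. */
static void trim_levels(struct dpm_level *levels, uint32_t count,
			uint32_t low, uint32_t high)
{
	uint32_t i;

	for (i = 0; i < count; i++)
		levels[i].enabled = (levels[i].value >= low &&
				     levels[i].value <= high);
}
```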
+
+static int vega10_trim_single_dpm_states_with_mask(struct pp_hwmgr *hwmgr,
+		struct vega10_single_dpm_table *dpm_table,
+		uint32_t low_limit, uint32_t high_limit,
+		uint32_t disable_dpm_mask)
+{
+	uint32_t i;
+
+	for (i = 0; i < dpm_table->count; i++) {
+		if ((dpm_table->dpm_levels[i].value < low_limit) ||
+		    (dpm_table->dpm_levels[i].value > high_limit))
+			dpm_table->dpm_levels[i].enabled = false;
+		else if (!((1 << i) & disable_dpm_mask))
+			dpm_table->dpm_levels[i].enabled = false;
+		else
+			dpm_table->dpm_levels[i].enabled = true;
+	}
+	return 0;
+}
+
+static int vega10_trim_dpm_states(struct pp_hwmgr *hwmgr,
+		const struct vega10_power_state *vega10_ps)
+{
+	struct vega10_hwmgr *data =
+			(struct vega10_hwmgr *)(hwmgr->backend);
+	uint32_t high_limit_count;
+
+	PP_ASSERT_WITH_CODE((vega10_ps->performance_level_count >= 1),
+			"power state did not have any performance level",
+			return -1);
+
+	high_limit_count = (vega10_ps->performance_level_count == 1) ? 0 : 1;
+
+	vega10_trim_single_dpm_states(hwmgr,
+			&(data->dpm_table.soc_table),
+			vega10_ps->performance_levels[0].soc_clock,
+			vega10_ps->performance_levels[high_limit_count].soc_clock);
+
+	vega10_trim_single_dpm_states_with_mask(hwmgr,
+			&(data->dpm_table.gfx_table),
+			vega10_ps->performance_levels[0].gfx_clock,
+			vega10_ps->performance_levels[high_limit_count].gfx_clock,
+			data->disable_dpm_mask);
+
+	vega10_trim_single_dpm_states(hwmgr,
+			&(data->dpm_table.mem_table),
+			vega10_ps->performance_levels[0].mem_clock,
+			vega10_ps->performance_levels[high_limit_count].mem_clock);
+
+	return 0;
+}
+
+static uint32_t vega10_find_lowest_dpm_level(
+		struct vega10_single_dpm_table *table)
+{
+	uint32_t i;
+
+	for (i = 0; i < table->count; i++) {
+		if (table->dpm_levels[i].enabled)
+			break;
+	}
+
+	return i;
+}
+
+static uint32_t vega10_find_highest_dpm_level(
+		struct vega10_single_dpm_table *table)
+{
+	uint32_t i = 0;
+
+	if (table->count <= MAX_REGULAR_DPM_NUMBER) {
+		for (i = table->count; i > 0; i--) {
+			if (table->dpm_levels[i - 1].enabled)
+				return i - 1;
+		}
+	} else {
+		pr_info("DPM table has too many entries!\n");
+		return MAX_REGULAR_DPM_NUMBER - 1;
+	}
+
+	return i;
+}
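After trimming, the boot and max levels are simply the first and last enabled entries, as the two search functions above implement. A compact sketch of that bracket search with simplified types (names here are illustrative, not driver API):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

struct dpm_level {
	uint32_t value;
	bool enabled;
};

/* Index of the first enabled level; returns count if none are enabled,
 * matching the fall-through behaviour of the loop in the patch. */
static uint32_t lowest_enabled(const struct dpm_level *levels, uint32_t count)
{
	uint32_t i;

	for (i = 0; i < count; i++)
		if (levels[i].enabled)
			break;
	return i;
}

/* Index of the last enabled level, scanning from the top down. */
static uint32_t highest_enabled(const struct dpm_level *levels, uint32_t count)
{
	uint32_t i;

	for (i = count; i > 0; i--)
		if (levels[i - 1].enabled)
			return i - 1;
	return 0;
}
```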
+
+static void vega10_apply_dal_minimum_voltage_request(
+		struct pp_hwmgr *hwmgr)
+{
+}
+
+static int vega10_upload_dpm_bootup_level(struct pp_hwmgr *hwmgr)
+{
+	struct vega10_hwmgr *data =
+			(struct vega10_hwmgr *)(hwmgr->backend);
+
+	vega10_apply_dal_minimum_voltage_request(hwmgr);
+
+	if (!data->registry_data.sclk_dpm_key_disabled) {
+		if (data->smc_state_table.gfx_boot_level !=
+				data->dpm_table.gfx_table.dpm_state.soft_min_level)
+			data->dpm_table.gfx_table.dpm_state.soft_min_level =
+					data->smc_state_table.gfx_boot_level;
+	}
+
+	if (!data->registry_data.mclk_dpm_key_disabled) {
+		if (data->smc_state_table.mem_boot_level !=
+				data->dpm_table.mem_table.dpm_state.soft_min_level)
+			data->dpm_table.mem_table.dpm_state.soft_min_level =
+					data->smc_state_table.mem_boot_level;
+	}
+
+	return 0;
+}
+
+static int vega10_upload_dpm_max_level(struct pp_hwmgr *hwmgr)
+{
+	struct vega10_hwmgr *data =
+			(struct vega10_hwmgr *)(hwmgr->backend);
+
+	vega10_apply_dal_minimum_voltage_request(hwmgr);
+
+	if (!data->registry_data.sclk_dpm_key_disabled) {
+		if (data->smc_state_table.gfx_max_level !=
+				data->dpm_table.gfx_table.dpm_state.soft_max_level) {
+			data->dpm_table.gfx_table.dpm_state.soft_max_level =
+					data->smc_state_table.gfx_max_level;
+		}
+	}
+
+	if (!data->registry_data.mclk_dpm_key_disabled) {
+		if (data->smc_state_table.mem_max_level !=
+				data->dpm_table.mem_table.dpm_state.soft_max_level) {
+			data->dpm_table.mem_table.dpm_state.soft_max_level =
+					data->smc_state_table.mem_max_level;
+		}
+	}
+
+	return 0;
+}
+
+static int vega10_generate_dpm_level_enable_mask(
+		struct pp_hwmgr *hwmgr, const void *input)
+{
+	struct vega10_hwmgr *data =
+			(struct vega10_hwmgr *)(hwmgr->backend);
+	const struct phm_set_power_state_input *states =
+			(const struct phm_set_power_state_input *)input;
+	const struct vega10_power_state *vega10_ps =
+			cast_const_phw_vega10_power_state(states->pnew_state);
+
+	PP_ASSERT_WITH_CODE(!vega10_trim_dpm_states(hwmgr, vega10_ps),
+			"Attempt to Trim DPM States Failed!",
+			return -1);
+
+	data->smc_state_table.gfx_boot_level =
+			vega10_find_lowest_dpm_level(&(data->dpm_table.gfx_table));
+	data->smc_state_table.gfx_max_level =
+			vega10_find_highest_dpm_level(&(data->dpm_table.gfx_table));
+	data->smc_state_table.mem_boot_level =
+			vega10_find_lowest_dpm_level(&(data->dpm_table.mem_table));
+	data->smc_state_table.mem_max_level =
+			vega10_find_highest_dpm_level(&(data->dpm_table.mem_table));
+
+	PP_ASSERT_WITH_CODE(!vega10_upload_dpm_bootup_level(hwmgr),
+			"Attempt to upload DPM Bootup Levels Failed!",
+			return -1);
+	PP_ASSERT_WITH_CODE(!vega10_upload_dpm_max_level(hwmgr),
+			"Attempt to upload DPM Max Levels Failed!",
+			return -1);
+
+	return 0;
+}
+
+int vega10_enable_disable_vce_dpm(struct pp_hwmgr *hwmgr, bool enable)
+{
+	struct vega10_hwmgr *data =
+			(struct vega10_hwmgr *)(hwmgr->backend);
+
+	if (data->smu_features[GNLD_DPM_VCE].supported) {
+		PP_ASSERT_WITH_CODE(!vega10_enable_smc_features(hwmgr->smumgr,
+				enable,
+				data->smu_features[GNLD_DPM_VCE].smu_feature_bitmap),
+				"Attempt to Enable/Disable DPM VCE Failed!",
+				return -1);
+		data->smu_features[GNLD_DPM_VCE].enabled = enable;
+	}
+
+	return 0;
+}
+
+int vega10_update_vce_dpm(struct pp_hwmgr *hwmgr, const void *input)
+{
+	const struct phm_set_power_state_input *states =
+			(const struct phm_set_power_state_input *)input;
+	const struct vega10_power_state *vega10_nps =
+			cast_const_phw_vega10_power_state(states->pnew_state);
+	const struct vega10_power_state *vega10_cps =
+			cast_const_phw_vega10_power_state(states->pcurrent_state);
+	int result = 0;
+
+	if (!phm_cap_enabled(hwmgr->platform_descriptor.platformCaps,
+			PHM_PlatformCaps_VCEDPM))
+		return 0;
+
+	if (vega10_nps->vce_clks.evclk > 0 &&
+			(vega10_cps == NULL ||
+			vega10_cps->vce_clks.evclk == 0))
+		result = vega10_enable_disable_vce_dpm(hwmgr, true);
+	else if (!vega10_nps->vce_clks.evclk &&
+			(vega10_cps && vega10_cps->vce_clks.evclk))
+		result = vega10_enable_disable_vce_dpm(hwmgr, false);
+
+	return result;
+}
+
+static int vega10_update_sclk_threshold(struct pp_hwmgr *hwmgr)
+{
+	struct vega10_hwmgr *data =
+			(struct vega10_hwmgr *)(hwmgr->backend);
+	int result = 0;
+	uint32_t low_sclk_interrupt_threshold = 0;
+
+	if (phm_cap_enabled(hwmgr->platform_descriptor.platformCaps,
+			PHM_PlatformCaps_SclkThrottleLowNotification)
+		&& (hwmgr->gfx_arbiter.sclk_threshold !=
+				data->low_sclk_interrupt_threshold)) {
+		data->low_sclk_interrupt_threshold =
+				hwmgr->gfx_arbiter.sclk_threshold;
+		low_sclk_interrupt_threshold =
+				data->low_sclk_interrupt_threshold;
+
+		data->smc_state_table.pp_table.LowGfxclkInterruptThreshold =
+				cpu_to_le32(low_sclk_interrupt_threshold);
+
+		/* This message will also enable SmcToHost Interrupt */
+		result = smum_send_msg_to_smc_with_parameter(hwmgr->smumgr,
+				PPSMC_MSG_SetLowGfxclkInterruptThreshold,
+				(uint32_t)low_sclk_interrupt_threshold);
+	}
+
+	return result;
+}
+
+static int vega10_set_power_state_tasks(struct pp_hwmgr *hwmgr,
+		const void *input)
+{
+	int tmp_result, result = 0;
+	struct vega10_hwmgr *data =
+			(struct vega10_hwmgr *)(hwmgr->backend);
+	PPTable_t *pp_table = &(data->smc_state_table.pp_table);
+
+	tmp_result = vega10_find_dpm_states_clocks_in_dpm_table(hwmgr, input);
+	PP_ASSERT_WITH_CODE(!tmp_result,
+			"Failed to find DPM states clocks in DPM table!",
+			result = tmp_result);
+
+	tmp_result = vega10_populate_and_upload_sclk_mclk_dpm_levels(hwmgr, input);
+	PP_ASSERT_WITH_CODE(!tmp_result,
+			"Failed to populate and upload SCLK MCLK DPM levels!",
+			result = tmp_result);
+
+	tmp_result = vega10_generate_dpm_level_enable_mask(hwmgr, input);
+	PP_ASSERT_WITH_CODE(!tmp_result,
+			"Failed to generate DPM level enabled mask!",
+			result = tmp_result);
+
+	tmp_result = vega10_update_vce_dpm(hwmgr, input);
+	PP_ASSERT_WITH_CODE(!tmp_result,
+			"Failed to update VCE DPM!",
+			result = tmp_result);
+
+	tmp_result = vega10_update_sclk_threshold(hwmgr);
+	PP_ASSERT_WITH_CODE(!tmp_result,
+			"Failed to update SCLK threshold!",
+			result = tmp_result);
+
+	result = vega10_copy_table_to_smc(hwmgr->smumgr,
+			(uint8_t *)pp_table, PPTABLE);
+	PP_ASSERT_WITH_CODE(!result,
+			"Failed to upload PPtable!", return result);
+
+	data->apply_optimized_settings = false;
+	data->apply_overdrive_next_settings_mask = 0;
+
+	return 0;
+}
+
+static int vega10_dpm_get_sclk(struct pp_hwmgr *hwmgr, bool low)
+{
+	struct pp_power_state *ps;
+	struct vega10_power_state *vega10_ps;
+
+	if (hwmgr == NULL)
+		return -EINVAL;
+
+	ps = hwmgr->request_ps;
+
+	if (ps == NULL)
+		return -EINVAL;
+
+	vega10_ps = cast_phw_vega10_power_state(&ps->hardware);
+
+	if (low)
+		return vega10_ps->performance_levels[0].gfx_clock;
+	else
+		return vega10_ps->performance_levels
+				[vega10_ps->performance_level_count - 1].gfx_clock;
+}
+
+static int vega10_dpm_get_mclk(struct pp_hwmgr *hwmgr, bool low)
+{
+	struct pp_power_state *ps;
+	struct vega10_power_state *vega10_ps;
+
+	if (hwmgr == NULL)
+		return -EINVAL;
+
+	ps = hwmgr->request_ps;
+
+	if (ps == NULL)
+		return -EINVAL;
+
+	vega10_ps = cast_phw_vega10_power_state(&ps->hardware);
+
+	if (low)
+		return vega10_ps->performance_levels[0].mem_clock;
+	else
+		return vega10_ps->performance_levels
+				[vega10_ps->performance_level_count-1].mem_clock;
+}
+
+static int vega10_read_sensor(struct pp_hwmgr *hwmgr, int idx,
+			      void *value, int *size)
+{
+	uint32_t sclk_idx, mclk_idx, activity_percent = 0;
+	struct vega10_hwmgr *data = (struct vega10_hwmgr *)(hwmgr->backend);
+	struct vega10_dpm_table *dpm_table = &data->dpm_table;
+	int ret = 0;
+
+	switch (idx) {
+	case AMDGPU_PP_SENSOR_GFX_SCLK:
+		ret = smum_send_msg_to_smc(hwmgr->smumgr, PPSMC_MSG_GetCurrentGfxclkIndex);
+		if (!ret) {
+			vega10_read_arg_from_smc(hwmgr->smumgr, &sclk_idx);
+			*((uint32_t *)value) = dpm_table->gfx_table.dpm_levels[sclk_idx].value;
+			*size = 4;
+		}
+		break;
+	case AMDGPU_PP_SENSOR_GFX_MCLK:
+		ret = smum_send_msg_to_smc(hwmgr->smumgr, PPSMC_MSG_GetCurrentUclkIndex);
+		if (!ret) {
+			vega10_read_arg_from_smc(hwmgr->smumgr, &mclk_idx);
+			*((uint32_t *)value) = dpm_table->mem_table.dpm_levels[mclk_idx].value;
+			*size = 4;
+		}
+		break;
+	case AMDGPU_PP_SENSOR_GPU_LOAD:
+		ret = smum_send_msg_to_smc(hwmgr->smumgr, PPSMC_MSG_GetAverageGfxActivity);
+		if (!ret) {
+			vega10_read_arg_from_smc(hwmgr->smumgr, &activity_percent);
+
+			activity_percent += 0x80;
+			activity_percent >>= 8;
+			*((uint32_t *)value) = activity_percent > 100 ? 100 : activity_percent;
+			*size = 4;
+		}
+		break;
+	case AMDGPU_PP_SENSOR_GPU_TEMP:
+		*((uint32_t *)value) = vega10_thermal_get_temperature(hwmgr);
+		*size = 4;
+		break;
+	case AMDGPU_PP_SENSOR_UVD_POWER:
+		*((uint32_t *)value) = data->uvd_power_gated ? 0 : 1;
+		*size = 4;
+		break;
+	case AMDGPU_PP_SENSOR_VCE_POWER:
+		*((uint32_t *)value) = data->vce_power_gated ? 0 : 1;
+		*size = 4;
+		break;
+	default:
+		ret = -EINVAL;
+		break;
+	}
+	return ret;
+}
+
+static int vega10_notify_smc_display_change(struct pp_hwmgr *hwmgr,
+		bool has_disp)
+{
+	return smum_send_msg_to_smc_with_parameter(hwmgr->smumgr,
+			PPSMC_MSG_SetUclkFastSwitch,
+			has_disp ? 0 : 1);
+}
+
+int vega10_display_clock_voltage_request(struct pp_hwmgr *hwmgr,
+		struct pp_display_clock_request *clock_req)
+{
+	int result = 0;
+	enum amd_pp_clock_type clk_type = clock_req->clock_type;
+	uint32_t clk_freq = clock_req->clock_freq_in_khz / 100;
+	DSPCLK_e clk_select = 0;
+	uint32_t clk_request = 0;
+
+	switch (clk_type) {
+	case amd_pp_dcef_clock:
+		clk_select = DSPCLK_DCEFCLK;
+		break;
+	case amd_pp_disp_clock:
+		clk_select = DSPCLK_DISPCLK;
+		break;
+	case amd_pp_pixel_clock:
+		clk_select = DSPCLK_PIXCLK;
+		break;
+	case amd_pp_phy_clock:
+		clk_select = DSPCLK_PHYCLK;
+		break;
+	default:
+		pr_info("[DisplayClockVoltageRequest] Invalid Clock Type!");
+		result = -1;
+		break;
+	}
+
+	if (!result) {
+		clk_request = (clk_freq << 16) | clk_select;
+		result = smum_send_msg_to_smc_with_parameter(hwmgr->smumgr,
+				PPSMC_MSG_RequestDisplayClockByFreq,
+				clk_request);
+	}
+
+	return result;
+}
+
+static int vega10_notify_smc_display_config_after_ps_adjustment(
+		struct pp_hwmgr *hwmgr)
+{
+	struct vega10_hwmgr *data =
+			(struct vega10_hwmgr *)(hwmgr->backend);
+	struct vega10_single_dpm_table *dpm_table =
+			&data->dpm_table.dcef_table;
+	uint32_t num_active_disps = 0;
+	struct cgs_display_info info = {0};
+	struct PP_Clocks min_clocks = {0};
+	uint32_t i;
+	struct pp_display_clock_request clock_req;
+
+	info.mode_info = NULL;
+
+	cgs_get_active_displays_info(hwmgr->device, &info);
+
+	num_active_disps = info.display_count;
+
+	if (num_active_disps > 1)
+		vega10_notify_smc_display_change(hwmgr, false);
+	else
+		vega10_notify_smc_display_change(hwmgr, true);
+
+	min_clocks.dcefClock = hwmgr->display_config.min_dcef_set_clk;
+	min_clocks.dcefClockInSR = hwmgr->display_config.min_dcef_deep_sleep_set_clk;
+
+	for (i = 0; i < dpm_table->count; i++) {
+		if (dpm_table->dpm_levels[i].value == min_clocks.dcefClock)
+			break;
+	}
+
+	if (i < dpm_table->count) {
+		clock_req.clock_type = amd_pp_dcef_clock;
+		clock_req.clock_freq_in_khz = dpm_table->dpm_levels[i].value;
+		if (!vega10_display_clock_voltage_request(hwmgr, &clock_req)) {
+			PP_ASSERT_WITH_CODE(!smum_send_msg_to_smc_with_parameter(
+					hwmgr->smumgr, PPSMC_MSG_SetMinDeepSleepDcefclk,
+					min_clocks.dcefClockInSR),
+					"Attempt to set divider for DCEFCLK Failed!",);
+		} else {
+			pr_info("Attempt to set Hard Min for DCEFCLK Failed!");
+		}
+	} else {
+		pr_info("Cannot find requested DCEFCLK!");
+	}
+
+	return 0;
+}
+
+static int vega10_force_dpm_highest(struct pp_hwmgr *hwmgr)
+{
+	struct vega10_hwmgr *data =
+			(struct vega10_hwmgr *)(hwmgr->backend);
+
+	data->smc_state_table.gfx_boot_level =
+	data->smc_state_table.gfx_max_level =
+			vega10_find_highest_dpm_level(&(data->dpm_table.gfx_table));
+	data->smc_state_table.mem_boot_level =
+	data->smc_state_table.mem_max_level =
+			vega10_find_highest_dpm_level(&(data->dpm_table.mem_table));
+
+	PP_ASSERT_WITH_CODE(!vega10_upload_dpm_bootup_level(hwmgr),
+			"Failed to upload boot level to highest!",
+			return -1);
+
+	PP_ASSERT_WITH_CODE(!vega10_upload_dpm_max_level(hwmgr),
+			"Failed to upload dpm max level to highest!",
+			return -1);
+
+	return 0;
+}
+
+static int vega10_force_dpm_lowest(struct pp_hwmgr *hwmgr)
+{
+	struct vega10_hwmgr *data =
+			(struct vega10_hwmgr *)(hwmgr->backend);
+
+	data->smc_state_table.gfx_boot_level =
+	data->smc_state_table.gfx_max_level =
+			vega10_find_lowest_dpm_level(&(data->dpm_table.gfx_table));
+	data->smc_state_table.mem_boot_level =
+	data->smc_state_table.mem_max_level =
+			vega10_find_lowest_dpm_level(&(data->dpm_table.mem_table));
+
+	PP_ASSERT_WITH_CODE(!vega10_upload_dpm_bootup_level(hwmgr),
+			"Failed to upload boot level to lowest!",
+			return -1);
+
+	PP_ASSERT_WITH_CODE(!vega10_upload_dpm_max_level(hwmgr),
+			"Failed to upload dpm max level to lowest!",
+			return -1);
+
+	return 0;
+}
+
+static int vega10_unforce_dpm_levels(struct pp_hwmgr *hwmgr)
+{
+	struct vega10_hwmgr *data = (struct vega10_hwmgr *)(hwmgr->backend);
+
+	data->smc_state_table.gfx_boot_level =
+			vega10_find_lowest_dpm_level(&(data->dpm_table.gfx_table));
+	data->smc_state_table.gfx_max_level =
+			vega10_find_highest_dpm_level(&(data->dpm_table.gfx_table));
+	data->smc_state_table.mem_boot_level =
+			vega10_find_lowest_dpm_level(&(data->dpm_table.mem_table));
+	data->smc_state_table.mem_max_level =
+			vega10_find_highest_dpm_level(&(data->dpm_table.mem_table));
+
+	PP_ASSERT_WITH_CODE(!vega10_upload_dpm_bootup_level(hwmgr),
+			"Failed to upload DPM Bootup Levels!",
+			return -1);
+
+	PP_ASSERT_WITH_CODE(!vega10_upload_dpm_max_level(hwmgr),
+			"Failed to upload DPM Max Levels!",
+			return -1);
+	return 0;
+}
+
+static int vega10_dpm_force_dpm_level(struct pp_hwmgr *hwmgr,
+				enum amd_dpm_forced_level level)
+{
+	int ret = 0;
+
+	switch (level) {
+	case AMD_DPM_FORCED_LEVEL_HIGH:
+		ret = vega10_force_dpm_highest(hwmgr);
+		if (ret)
+			return ret;
+		break;
+	case AMD_DPM_FORCED_LEVEL_LOW:
+		ret = vega10_force_dpm_lowest(hwmgr);
+		if (ret)
+			return ret;
+		break;
+	case AMD_DPM_FORCED_LEVEL_AUTO:
+		ret = vega10_unforce_dpm_levels(hwmgr);
+		if (ret)
+			return ret;
+		break;
+	default:
+		break;
+	}
+
+	hwmgr->dpm_level = level;
+
+	return ret;
+}
+
+static int vega10_set_fan_control_mode(struct pp_hwmgr *hwmgr, uint32_t mode)
+{
+	if (mode) {
+		/* stop auto-manage */
+		if (phm_cap_enabled(hwmgr->platform_descriptor.platformCaps,
+				PHM_PlatformCaps_MicrocodeFanControl))
+			vega10_fan_ctrl_stop_smc_fan_control(hwmgr);
+		vega10_fan_ctrl_set_static_mode(hwmgr, mode);
+	} else {
+		/* restart auto-manage */
+		vega10_fan_ctrl_reset_fan_speed_to_default(hwmgr);
+	}
+
+	return 0;
+}
+
+static int vega10_get_fan_control_mode(struct pp_hwmgr *hwmgr)
+{
+	uint32_t reg;
+
+	if (hwmgr->fan_ctrl_is_in_default_mode) {
+		return hwmgr->fan_ctrl_default_mode;
+	} else {
+		reg = soc15_get_register_offset(THM_HWID, 0,
+				mmCG_FDO_CTRL2_BASE_IDX, mmCG_FDO_CTRL2);
+		return (cgs_read_register(hwmgr->device, reg) &
+				CG_FDO_CTRL2__FDO_PWM_MODE_MASK) >>
+				CG_FDO_CTRL2__FDO_PWM_MODE__SHIFT;
+	}
+
+static int vega10_get_dal_power_level(struct pp_hwmgr *hwmgr,
+		struct amd_pp_simple_clock_info *info)
+{
+	struct phm_ppt_v2_information *table_info =
+			(struct phm_ppt_v2_information *)hwmgr->pptable;
+	struct phm_clock_and_voltage_limits *max_limits =
+			&table_info->max_clock_voltage_on_ac;
+
+	info->engine_max_clock = max_limits->sclk;
+	info->memory_max_clock = max_limits->mclk;
+
+	return 0;
+}
+
+static void vega10_get_sclks(struct pp_hwmgr *hwmgr,
+		struct pp_clock_levels_with_latency *clocks)
+{
+	struct phm_ppt_v2_information *table_info =
+			(struct phm_ppt_v2_information *)hwmgr->pptable;
+	struct phm_ppt_v1_clock_voltage_dependency_table *dep_table =
+			table_info->vdd_dep_on_sclk;
+	uint32_t i;
+
+	for (i = 0; i < dep_table->count; i++) {
+		if (dep_table->entries[i].clk) {
+			clocks->data[clocks->num_levels].clocks_in_khz =
+					dep_table->entries[i].clk;
+			clocks->num_levels++;
+		}
+	}
+}
+
+static uint32_t vega10_get_mem_latency(struct pp_hwmgr *hwmgr,
+		uint32_t clock)
+{
+	if (clock >= MEM_FREQ_LOW_LATENCY &&
+			clock < MEM_FREQ_HIGH_LATENCY)
+		return MEM_LATENCY_HIGH;
+	else if (clock >= MEM_FREQ_HIGH_LATENCY)
+		return MEM_LATENCY_LOW;
+	else
+		return MEM_LATENCY_ERR;
+}
+
+static void vega10_get_memclocks(struct pp_hwmgr *hwmgr,
+		struct pp_clock_levels_with_latency *clocks)
+{
+	struct phm_ppt_v2_information *table_info =
+			(struct phm_ppt_v2_information *)hwmgr->pptable;
+	struct phm_ppt_v1_clock_voltage_dependency_table *dep_table =
+			table_info->vdd_dep_on_mclk;
+	struct vega10_hwmgr *data = (struct vega10_hwmgr *)(hwmgr->backend);
+	uint32_t i;
+
+	clocks->num_levels = 0;
+	data->mclk_latency_table.count = 0;
+
+	for (i = 0; i < dep_table->count; i++) {
+		if (dep_table->entries[i].clk) {
+			clocks->data[clocks->num_levels].clocks_in_khz =
+			data->mclk_latency_table.entries
+			[data->mclk_latency_table.count].frequency =
+					dep_table->entries[i].clk;
+			clocks->data[clocks->num_levels].latency_in_us =
+			data->mclk_latency_table.entries
+			[data->mclk_latency_table.count].latency =
+					vega10_get_mem_latency(hwmgr,
+						dep_table->entries[i].clk);
+			clocks->num_levels++;
+			data->mclk_latency_table.count++;
+		}
+	}
+}
+
+static void vega10_get_dcefclocks(struct pp_hwmgr *hwmgr,
+		struct pp_clock_levels_with_latency *clocks)
+{
+	struct phm_ppt_v2_information *table_info =
+			(struct phm_ppt_v2_information *)hwmgr->pptable;
+	struct phm_ppt_v1_clock_voltage_dependency_table *dep_table =
+			table_info->vdd_dep_on_dcefclk;
+	uint32_t i;
+
+	for (i = 0; i < dep_table->count; i++) {
+		clocks->data[i].clocks_in_khz = dep_table->entries[i].clk;
+		clocks->data[i].latency_in_us = 0;
+		clocks->num_levels++;
+	}
+}
+
+static void vega10_get_socclocks(struct pp_hwmgr *hwmgr,
+		struct pp_clock_levels_with_latency *clocks)
+{
+	struct phm_ppt_v2_information *table_info =
+			(struct phm_ppt_v2_information *)hwmgr->pptable;
+	struct phm_ppt_v1_clock_voltage_dependency_table *dep_table =
+			table_info->vdd_dep_on_socclk;
+	uint32_t i;
+
+	for (i = 0; i < dep_table->count; i++) {
+		clocks->data[i].clocks_in_khz = dep_table->entries[i].clk;
+		clocks->data[i].latency_in_us = 0;
+		clocks->num_levels++;
+	}
+}
+
+static int vega10_get_clock_by_type_with_latency(struct pp_hwmgr *hwmgr,
+		enum amd_pp_clock_type type,
+		struct pp_clock_levels_with_latency *clocks)
+{
+	switch (type) {
+	case amd_pp_sys_clock:
+		vega10_get_sclks(hwmgr, clocks);
+		break;
+	case amd_pp_mem_clock:
+		vega10_get_memclocks(hwmgr, clocks);
+		break;
+	case amd_pp_dcef_clock:
+		vega10_get_dcefclocks(hwmgr, clocks);
+		break;
+	case amd_pp_soc_clock:
+		vega10_get_socclocks(hwmgr, clocks);
+		break;
+	default:
+		return -1;
+	}
+
+	return 0;
+}
+
+static int vega10_get_clock_by_type_with_voltage(struct pp_hwmgr *hwmgr,
+		enum amd_pp_clock_type type,
+		struct pp_clock_levels_with_voltage *clocks)
+{
+	struct phm_ppt_v2_information *table_info =
+			(struct phm_ppt_v2_information *)hwmgr->pptable;
+	struct phm_ppt_v1_clock_voltage_dependency_table *dep_table;
+	uint32_t i;
+
+	switch (type) {
+	case amd_pp_mem_clock:
+		dep_table = table_info->vdd_dep_on_mclk;
+		break;
+	case amd_pp_dcef_clock:
+		dep_table = table_info->vdd_dep_on_dcefclk;
+		break;
+	case amd_pp_disp_clock:
+		dep_table = table_info->vdd_dep_on_dispclk;
+		break;
+	case amd_pp_pixel_clock:
+		dep_table = table_info->vdd_dep_on_pixclk;
+		break;
+	case amd_pp_phy_clock:
+		dep_table = table_info->vdd_dep_on_phyclk;
+		break;
+	default:
+		return -1;
+	}
+
+	for (i = 0; i < dep_table->count; i++) {
+		clocks->data[i].clocks_in_khz = dep_table->entries[i].clk;
+		clocks->data[i].voltage_in_mv = (uint32_t)(table_info->vddc_lookup_table->
+				entries[dep_table->entries[i].vddInd].us_vdd);
+		clocks->num_levels++;
+	}
+
+	return 0;
+}
+
+static int vega10_set_watermarks_for_clocks_ranges(struct pp_hwmgr *hwmgr,
+		struct pp_wm_sets_with_clock_ranges_soc15 *wm_with_clock_ranges)
+{
+	struct vega10_hwmgr *data = (struct vega10_hwmgr *)(hwmgr->backend);
+	Watermarks_t *table = &(data->smc_state_table.water_marks_table);
+	int result = 0;
+	uint32_t i;
+
+	if (!data->registry_data.disable_water_mark) {
+		for (i = 0; i < wm_with_clock_ranges->num_wm_sets_dmif; i++) {
+			table->WatermarkRow[WM_DCEFCLK][i].MinClock =
+				cpu_to_le16((uint16_t)
+				(wm_with_clock_ranges->wm_sets_dmif[i].wm_min_dcefclk_in_khz) /
+				100);
+			table->WatermarkRow[WM_DCEFCLK][i].MaxClock =
+				cpu_to_le16((uint16_t)
+				(wm_with_clock_ranges->wm_sets_dmif[i].wm_max_dcefclk_in_khz) /
+				100);
+			table->WatermarkRow[WM_DCEFCLK][i].MinUclk =
+				cpu_to_le16((uint16_t)
+				(wm_with_clock_ranges->wm_sets_dmif[i].wm_min_memclk_in_khz) /
+				100);
+			table->WatermarkRow[WM_DCEFCLK][i].MaxUclk =
+				cpu_to_le16((uint16_t)
+				(wm_with_clock_ranges->wm_sets_dmif[i].wm_max_memclk_in_khz) /
+				100);
+			table->WatermarkRow[WM_DCEFCLK][i].WmSetting = (uint8_t)
+					wm_with_clock_ranges->wm_sets_dmif[i].wm_set_id;
+		}
+
+		for (i = 0; i < wm_with_clock_ranges->num_wm_sets_mcif; i++) {
+			table->WatermarkRow[WM_SOCCLK][i].MinClock =
+				cpu_to_le16((uint16_t)
+				(wm_with_clock_ranges->wm_sets_mcif[i].wm_min_socclk_in_khz) /
+				100);
+			table->WatermarkRow[WM_SOCCLK][i].MaxClock =
+				cpu_to_le16((uint16_t)
+				(wm_with_clock_ranges->wm_sets_mcif[i].wm_max_socclk_in_khz) /
+				100);
+			table->WatermarkRow[WM_SOCCLK][i].MinUclk =
+				cpu_to_le16((uint16_t)
+				(wm_with_clock_ranges->wm_sets_mcif[i].wm_min_memclk_in_khz) /
+				100);
+			table->WatermarkRow[WM_SOCCLK][i].MaxUclk =
+				cpu_to_le16((uint16_t)
+				(wm_with_clock_ranges->wm_sets_mcif[i].wm_max_memclk_in_khz) /
+				100);
+			table->WatermarkRow[WM_SOCCLK][i].WmSetting = (uint8_t)
+					wm_with_clock_ranges->wm_sets_mcif[i].wm_set_id;
+		}
+		data->water_marks_bitmap = WaterMarksExist;
+	}
+
+	return result;
+}
+
+static int vega10_force_clock_level(struct pp_hwmgr *hwmgr,
+		enum pp_clock_type type, uint32_t mask)
+{
+	struct vega10_hwmgr *data = (struct vega10_hwmgr *)(hwmgr->backend);
+	uint32_t i;
+
+	if (hwmgr->dpm_level != AMD_DPM_FORCED_LEVEL_MANUAL)
+		return -EINVAL;
+
+	switch (type) {
+	case PP_SCLK:
+		if (data->registry_data.sclk_dpm_key_disabled)
+			break;
+
+		for (i = 0; i < 32; i++) {
+			if (mask & (1 << i))
+				break;
+		}
+
+		PP_ASSERT_WITH_CODE(!smum_send_msg_to_smc_with_parameter(
+				hwmgr->smumgr,
+				PPSMC_MSG_SetSoftMinGfxclkByIndex,
+				i),
+				"Failed to set soft min sclk index!",
+				return -1);
+		break;
+
+	case PP_MCLK:
+		if (data->registry_data.mclk_dpm_key_disabled)
+			break;
+
+		for (i = 0; i < 32; i++) {
+			if (mask & (1 << i))
+				break;
+		}
+
+		PP_ASSERT_WITH_CODE(!smum_send_msg_to_smc_with_parameter(
+				hwmgr->smumgr,
+				PPSMC_MSG_SetSoftMinUclkByIndex,
+				i),
+				"Failed to set soft min mclk index!",
+				return -1);
+		break;
+
+	case PP_PCIE:
+		if (data->registry_data.pcie_dpm_key_disabled)
+			break;
+
+		for (i = 0; i < 32; i++) {
+			if (mask & (1 << i))
+				break;
+		}
+
+		PP_ASSERT_WITH_CODE(!smum_send_msg_to_smc_with_parameter(
+				hwmgr->smumgr,
+				PPSMC_MSG_SetMinLinkDpmByIndex,
+				i),
+				"Failed to set min pcie index!",
+				return -1);
+		break;
+	default:
+		break;
+	}
+
+	return 0;
+}
+
+static int vega10_print_clock_levels(struct pp_hwmgr *hwmgr,
+		enum pp_clock_type type, char *buf)
+{
+	struct vega10_hwmgr *data = (struct vega10_hwmgr *)(hwmgr->backend);
+	struct vega10_single_dpm_table *sclk_table = &(data->dpm_table.gfx_table);
+	struct vega10_single_dpm_table *mclk_table = &(data->dpm_table.mem_table);
+	struct vega10_pcie_table *pcie_table = &(data->dpm_table.pcie_table);
+	int i, size = 0;
+	uint32_t now;
+
+	switch (type) {
+	case PP_SCLK:
+		if (data->registry_data.sclk_dpm_key_disabled)
+			break;
+
+		PP_ASSERT_WITH_CODE(!smum_send_msg_to_smc(hwmgr->smumgr,
+				PPSMC_MSG_GetCurrentGfxclkIndex),
+				"Attempt to get current sclk index Failed!",
+				return -1);
+		PP_ASSERT_WITH_CODE(!vega10_read_arg_from_smc(hwmgr->smumgr,
+				&now),
+				"Attempt to read sclk index Failed!",
+				return -1);
+
+		for (i = 0; i < sclk_table->count; i++)
+			size += sprintf(buf + size, "%d: %uMHz %s\n",
+					i, sclk_table->dpm_levels[i].value / 100,
+					(i == now) ? "*" : "");
+		break;
+	case PP_MCLK:
+		if (data->registry_data.mclk_dpm_key_disabled)
+			break;
+
+		PP_ASSERT_WITH_CODE(!smum_send_msg_to_smc(hwmgr->smumgr,
+				PPSMC_MSG_GetCurrentUclkIndex),
+				"Attempt to get current mclk index Failed!",
+				return -1);
+		PP_ASSERT_WITH_CODE(!vega10_read_arg_from_smc(hwmgr->smumgr,
+				&now),
+				"Attempt to read mclk index Failed!",
+				return -1);
+
+		for (i = 0; i < mclk_table->count; i++)
+			size += sprintf(buf + size, "%d: %uMHz %s\n",
+					i, mclk_table->dpm_levels[i].value / 100,
+					(i == now) ? "*" : "");
+		break;
+	case PP_PCIE:
+		PP_ASSERT_WITH_CODE(!smum_send_msg_to_smc(hwmgr->smumgr,
+				PPSMC_MSG_GetCurrentLinkIndex),
+				"Attempt to get current pcie index Failed!",
+				return -1);
+		PP_ASSERT_WITH_CODE(!vega10_read_arg_from_smc(hwmgr->smumgr,
+				&now),
+				"Attempt to read pcie index Failed!",
+				return -1);
+
+		for (i = 0; i < pcie_table->count; i++)
+			size += sprintf(buf + size, "%d: %s %s\n", i,
+					(pcie_table->pcie_gen[i] == 0) ? "2.5GT/s, x1" :
+					(pcie_table->pcie_gen[i] == 1) ? "5.0GT/s, x16" :
+					(pcie_table->pcie_gen[i] == 2) ? "8.0GT/s, x16" : "",
+					(i == now) ? "*" : "");
+		break;
+	default:
+		break;
+	}
+	return size;
+}
+
+static int vega10_display_configuration_changed_task(struct pp_hwmgr *hwmgr)
+{
+	struct vega10_hwmgr *data = (struct vega10_hwmgr *)(hwmgr->backend);
+	int result = 0;
+	uint32_t num_turned_on_displays = 1;
+	Watermarks_t *wm_table = &(data->smc_state_table.water_marks_table);
+	struct cgs_display_info info = {0};
+
+	cgs_get_active_displays_info(hwmgr->device, &info);
+	num_turned_on_displays = info.display_count;
+
+	smum_send_msg_to_smc_with_parameter(hwmgr->smumgr,
+			PPSMC_MSG_NumOfDisplays, num_turned_on_displays);
+
+	if ((data->water_marks_bitmap & WaterMarksExist) &&
+			!(data->water_marks_bitmap & WaterMarksLoaded)) {
+		result = vega10_copy_table_to_smc(hwmgr->smumgr,
+			(uint8_t *)wm_table, WMTABLE);
+		PP_ASSERT_WITH_CODE(!result, "Failed to update WMTABLE!", return -EINVAL);
+		data->water_marks_bitmap |= WaterMarksLoaded;
+	}
+
+	return result;
+}
+
+int vega10_enable_disable_uvd_dpm(struct pp_hwmgr *hwmgr, bool enable)
+{
+	struct vega10_hwmgr *data =
+			(struct vega10_hwmgr *)(hwmgr->backend);
+
+	if (data->smu_features[GNLD_DPM_UVD].supported) {
+		PP_ASSERT_WITH_CODE(!vega10_enable_smc_features(hwmgr->smumgr,
+				enable,
+				data->smu_features[GNLD_DPM_UVD].smu_feature_bitmap),
+				"Attempt to Enable/Disable DPM UVD Failed!",
+				return -1);
+		data->smu_features[GNLD_DPM_UVD].enabled = enable;
+	}
+	return 0;
+}
+
+static int vega10_power_gate_vce(struct pp_hwmgr *hwmgr, bool bgate)
+{
+	struct vega10_hwmgr *data = (struct vega10_hwmgr *)(hwmgr->backend);
+
+	data->vce_power_gated = bgate;
+	return vega10_enable_disable_vce_dpm(hwmgr, !bgate);
+}
+
+static int vega10_power_gate_uvd(struct pp_hwmgr *hwmgr, bool bgate)
+{
+	struct vega10_hwmgr *data = (struct vega10_hwmgr *)(hwmgr->backend);
+
+	data->uvd_power_gated = bgate;
+	return vega10_enable_disable_uvd_dpm(hwmgr, !bgate);
+}
+
+
+static const struct pp_hwmgr_func vega10_hwmgr_funcs = {
+	.backend_init = vega10_hwmgr_backend_init,
+	.backend_fini = vega10_hwmgr_backend_fini,
+	.asic_setup = vega10_setup_asic_task,
+	.dynamic_state_management_enable = vega10_enable_dpm_tasks,
+	.get_num_of_pp_table_entries =
+			vega10_get_number_of_powerplay_table_entries,
+	.get_power_state_size = vega10_get_power_state_size,
+	.get_pp_table_entry = vega10_get_pp_table_entry,
+	.patch_boot_state = vega10_patch_boot_state,
+	.apply_state_adjust_rules = vega10_apply_state_adjust_rules,
+	.power_state_set = vega10_set_power_state_tasks,
+	.get_sclk = vega10_dpm_get_sclk,
+	.get_mclk = vega10_dpm_get_mclk,
+	.notify_smc_display_config_after_ps_adjustment =
+			vega10_notify_smc_display_config_after_ps_adjustment,
+	.force_dpm_level = vega10_dpm_force_dpm_level,
+	.get_temperature = vega10_thermal_get_temperature,
+	.stop_thermal_controller = vega10_thermal_stop_thermal_controller,
+	.get_fan_speed_info = vega10_fan_ctrl_get_fan_speed_info,
+	.get_fan_speed_percent = vega10_fan_ctrl_get_fan_speed_percent,
+	.set_fan_speed_percent = vega10_fan_ctrl_set_fan_speed_percent,
+	.reset_fan_speed_to_default =
+			vega10_fan_ctrl_reset_fan_speed_to_default,
+	.get_fan_speed_rpm = vega10_fan_ctrl_get_fan_speed_rpm,
+	.set_fan_speed_rpm = vega10_fan_ctrl_set_fan_speed_rpm,
+	.uninitialize_thermal_controller =
+			vega10_thermal_ctrl_uninitialize_thermal_controller,
+	.set_fan_control_mode = vega10_set_fan_control_mode,
+	.get_fan_control_mode = vega10_get_fan_control_mode,
+	.read_sensor = vega10_read_sensor,
+	.get_dal_power_level = vega10_get_dal_power_level,
+	.get_clock_by_type_with_latency = vega10_get_clock_by_type_with_latency,
+	.get_clock_by_type_with_voltage = vega10_get_clock_by_type_with_voltage,
+	.set_watermarks_for_clocks_ranges = vega10_set_watermarks_for_clocks_ranges,
+	.display_clock_voltage_request = vega10_display_clock_voltage_request,
+	.force_clock_level = vega10_force_clock_level,
+	.print_clock_levels = vega10_print_clock_levels,
+	.display_config_changed = vega10_display_configuration_changed_task,
+	.powergate_uvd = vega10_power_gate_uvd,
+	.powergate_vce = vega10_power_gate_vce,
+};
+
+int vega10_hwmgr_init(struct pp_hwmgr *hwmgr)
+{
+	hwmgr->hwmgr_func = &vega10_hwmgr_funcs;
+	hwmgr->pptable_func = &vega10_pptable_funcs;
+	pp_vega10_thermal_initialize(hwmgr);
+	return 0;
+}
diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.h b/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.h
new file mode 100644
index 0000000..83c67b9
--- /dev/null
+++ b/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.h
@@ -0,0 +1,434 @@
+/*
+ * Copyright 2016 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+
+#ifndef _VEGA10_HWMGR_H_
+#define _VEGA10_HWMGR_H_
+
+#include "hwmgr.h"
+#include "smu9_driver_if.h"
+#include "ppatomctrl.h"
+#include "ppatomfwctrl.h"
+#include "vega10_ppsmc.h"
+#include "vega10_powertune.h"
+
+extern const uint32_t PhwVega10_Magic;
+#define VEGA10_MAX_HARDWARE_POWERLEVELS 2
+
+#define WaterMarksExist  1
+#define WaterMarksLoaded 2
+
+enum {
+	GNLD_DPM_PREFETCHER = 0,
+	GNLD_DPM_GFXCLK,
+	GNLD_DPM_UCLK,
+	GNLD_DPM_SOCCLK,
+	GNLD_DPM_UVD,
+	GNLD_DPM_VCE,
+	GNLD_ULV,
+	GNLD_DPM_MP0CLK,
+	GNLD_DPM_LINK,
+	GNLD_DPM_DCEFCLK,
+	GNLD_AVFS,
+	GNLD_DS_GFXCLK,
+	GNLD_DS_SOCCLK,
+	GNLD_DS_LCLK,
+	GNLD_PPT,
+	GNLD_TDC,
+	GNLD_THERMAL,
+	GNLD_GFX_PER_CU_CG,
+	GNLD_RM,
+	GNLD_DS_DCEFCLK,
+	GNLD_ACDC,
+	GNLD_VR0HOT,
+	GNLD_VR1HOT,
+	GNLD_FW_CTF,
+	GNLD_LED_DISPLAY,
+	GNLD_FAN_CONTROL,
+	GNLD_VOLTAGE_CONTROLLER,
+	GNLD_FEATURES_MAX
+};
+
+#define GNLD_DPM_MAX    (GNLD_DPM_DCEFCLK + 1)
+
+#define SMC_DPM_FEATURES    0x30F
+
+struct smu_features {
+	bool supported;
+	bool enabled;
+	uint32_t smu_feature_id;
+	uint32_t smu_feature_bitmap;
+};
+
+struct vega10_performance_level {
+	uint32_t  soc_clock;
+	uint32_t  gfx_clock;
+	uint32_t  mem_clock;
+};
+
+struct vega10_bacos {
+	uint32_t                       baco_flags;
+	/* struct vega10_performance_level  performance_level; */
+};
+
+struct vega10_uvd_clocks {
+	uint32_t  vclk;
+	uint32_t  dclk;
+};
+
+struct vega10_vce_clocks {
+	uint32_t  evclk;
+	uint32_t  ecclk;
+};
+
+struct vega10_power_state {
+	uint32_t                  magic;
+	struct vega10_uvd_clocks    uvd_clks;
+	struct vega10_vce_clocks    vce_clks;
+	uint16_t                  performance_level_count;
+	bool                      dc_compatible;
+	uint32_t                  sclk_threshold;
+	struct vega10_performance_level  performance_levels[VEGA10_MAX_HARDWARE_POWERLEVELS];
+};
+
+struct vega10_dpm_level {
+	bool	enabled;
+	uint32_t	value;
+	uint32_t	param1;
+};
+
+#define VEGA10_MAX_DEEPSLEEP_DIVIDER_ID 5
+#define MAX_REGULAR_DPM_NUMBER 8
+#define MAX_PCIE_CONF 2
+#define VEGA10_MINIMUM_ENGINE_CLOCK 2500
+
+struct vega10_dpm_state {
+	uint32_t  soft_min_level;
+	uint32_t  soft_max_level;
+	uint32_t  hard_min_level;
+	uint32_t  hard_max_level;
+};
+
+struct vega10_single_dpm_table {
+	uint32_t		count;
+	struct vega10_dpm_state	dpm_state;
+	struct vega10_dpm_level	dpm_levels[MAX_REGULAR_DPM_NUMBER];
+};
+
+struct vega10_pcie_table {
+	uint16_t count;
+	uint8_t  pcie_gen[MAX_PCIE_CONF];
+	uint8_t  pcie_lane[MAX_PCIE_CONF];
+	uint32_t lclk[MAX_PCIE_CONF];
+};
+
+struct vega10_dpm_table {
+	struct vega10_single_dpm_table  soc_table;
+	struct vega10_single_dpm_table  gfx_table;
+	struct vega10_single_dpm_table  mem_table;
+	struct vega10_single_dpm_table  eclk_table;
+	struct vega10_single_dpm_table  vclk_table;
+	struct vega10_single_dpm_table  dclk_table;
+	struct vega10_single_dpm_table  dcef_table;
+	struct vega10_single_dpm_table  pixel_table;
+	struct vega10_single_dpm_table  display_table;
+	struct vega10_single_dpm_table  phy_table;
+	struct vega10_pcie_table        pcie_table;
+};
+
+#define VEGA10_MAX_LEAKAGE_COUNT  8
+struct vega10_leakage_voltage {
+	uint16_t  count;
+	uint16_t  leakage_id[VEGA10_MAX_LEAKAGE_COUNT];
+	uint16_t  actual_voltage[VEGA10_MAX_LEAKAGE_COUNT];
+};
+
+struct vega10_display_timing {
+	uint32_t  min_clock_in_sr;
+	uint32_t  num_existing_displays;
+};
+
+struct vega10_dpmlevel_enable_mask {
+	uint32_t  uvd_dpm_enable_mask;
+	uint32_t  vce_dpm_enable_mask;
+	uint32_t  acp_dpm_enable_mask;
+	uint32_t  samu_dpm_enable_mask;
+	uint32_t  sclk_dpm_enable_mask;
+	uint32_t  mclk_dpm_enable_mask;
+};
+
+struct vega10_vbios_boot_state {
+	uint16_t    vddc;
+	uint16_t    vddci;
+	uint32_t    gfx_clock;
+	uint32_t    mem_clock;
+	uint32_t    soc_clock;
+};
+
+#define DPMTABLE_OD_UPDATE_SCLK     0x00000001
+#define DPMTABLE_OD_UPDATE_MCLK     0x00000002
+#define DPMTABLE_UPDATE_SCLK        0x00000004
+#define DPMTABLE_UPDATE_MCLK        0x00000008
+#define DPMTABLE_OD_UPDATE_VDDC     0x00000010
+
+struct vega10_smc_state_table {
+	uint32_t        soc_boot_level;
+	uint32_t        gfx_boot_level;
+	uint32_t        dcef_boot_level;
+	uint32_t        mem_boot_level;
+	uint32_t        uvd_boot_level;
+	uint32_t        vce_boot_level;
+	uint32_t        gfx_max_level;
+	uint32_t        mem_max_level;
+	uint8_t         vr_hot_gpio;
+	uint8_t         ac_dc_gpio;
+	uint8_t         therm_out_gpio;
+	uint8_t         therm_out_polarity;
+	uint8_t         therm_out_mode;
+	PPTable_t       pp_table;
+	Watermarks_t    water_marks_table;
+	AvfsTable_t     avfs_table;
+};
+
+struct vega10_mclk_latency_entries {
+	uint32_t  frequency;
+	uint32_t  latency;
+};
+
+struct vega10_mclk_latency_table {
+	uint32_t  count;
+	struct vega10_mclk_latency_entries  entries[MAX_REGULAR_DPM_NUMBER];
+};
+
+struct vega10_registry_data {
+	uint8_t   ac_dc_switch_gpio_support;
+	uint8_t   avfs_support;
+	uint8_t   cac_support;
+	uint8_t   clock_stretcher_support;
+	uint8_t   db_ramping_support;
+	uint8_t   didt_support;
+	uint8_t   dynamic_state_patching_support;
+	uint8_t   enable_pkg_pwr_tracking_feature;
+	uint8_t   enable_tdc_limit_feature;
+	uint32_t  fast_watermark_threshold;
+	uint8_t   force_dpm_high;
+	uint8_t   fuzzy_fan_control_support;
+	uint8_t   long_idle_baco_support;
+	uint8_t   mclk_dpm_key_disabled;
+	uint8_t   od_state_in_dc_support;
+	uint8_t   pcieLaneOverride;
+	uint8_t   pcieSpeedOverride;
+	uint32_t  pcieClockOverride;
+	uint8_t   pcie_dpm_key_disabled;
+	uint8_t   dcefclk_dpm_key_disabled;
+	uint8_t   power_containment_support;
+	uint8_t   ppt_support;
+	uint8_t   prefetcher_dpm_key_disabled;
+	uint8_t   quick_transition_support;
+	uint8_t   regulator_hot_gpio_support;
+	uint8_t   sclk_deep_sleep_support;
+	uint8_t   sclk_dpm_key_disabled;
+	uint8_t   sclk_from_vbios;
+	uint8_t   sclk_throttle_low_notification;
+	uint8_t   show_baco_dbg_info;
+	uint8_t   skip_baco_hardware;
+	uint8_t   socclk_dpm_key_disabled;
+	uint8_t   spll_shutdown_support;
+	uint8_t   sq_ramping_support;
+	uint32_t  stable_pstate_sclk_dpm_percentage;
+	uint8_t   tcp_ramping_support;
+	uint8_t   tdc_support;
+	uint8_t   td_ramping_support;
+	uint8_t   thermal_out_gpio_support;
+	uint8_t   thermal_support;
+	uint8_t   fw_ctf_enabled;
+	uint8_t   fan_control_support;
+	uint8_t   ulps_support;
+	uint8_t   ulv_support;
+	uint32_t  vddc_vddci_delta;
+	uint8_t   odn_feature_enable;
+	uint8_t   disable_water_mark;
+	uint8_t   zrpm_stop_temp;
+	uint8_t   zrpm_start_temp;
+	uint8_t   led_dpm_enabled;
+	uint8_t   vr0hot_enabled;
+	uint8_t   vr1hot_enabled;
+};
+
+struct vega10_odn_clock_voltage_dependency_table {
+	uint32_t count;
+	struct phm_ppt_v1_clock_voltage_dependency_record
+		entries[MAX_REGULAR_DPM_NUMBER];
+};
+
+struct vega10_odn_dpm_table {
+	struct phm_odn_clock_levels		odn_core_clock_dpm_levels;
+	struct phm_odn_clock_levels		odn_memory_clock_dpm_levels;
+	struct vega10_odn_clock_voltage_dependency_table		vdd_dependency_on_sclk;
+	struct vega10_odn_clock_voltage_dependency_table		vdd_dependency_on_mclk;
+};
+
+struct vega10_odn_fan_table {
+	uint32_t	target_fan_speed;
+	uint32_t	target_temperature;
+	uint32_t	min_performance_clock;
+	uint32_t	min_fan_limit;
+};
+
+struct vega10_hwmgr {
+	struct vega10_dpm_table			dpm_table;
+	struct vega10_dpm_table			golden_dpm_table;
+	struct vega10_registry_data      registry_data;
+	struct vega10_vbios_boot_state   vbios_boot_state;
+	struct vega10_mclk_latency_table mclk_latency_table;
+
+	struct vega10_leakage_voltage    vddc_leakage;
+
+	uint32_t                           vddc_control;
+	struct pp_atomfwctrl_voltage_table vddc_voltage_table;
+	uint32_t                           mvdd_control;
+	struct pp_atomfwctrl_voltage_table mvdd_voltage_table;
+	uint32_t                           vddci_control;
+	struct pp_atomfwctrl_voltage_table vddci_voltage_table;
+
+	uint32_t                           active_auto_throttle_sources;
+	uint32_t                           water_marks_bitmap;
+	struct vega10_bacos                bacos;
+
+	struct vega10_odn_dpm_table       odn_dpm_table;
+	struct vega10_odn_fan_table       odn_fan_table;
+
+	/* ---- General data ---- */
+	uint8_t                           need_update_dpm_table;
+
+	bool                           cac_enabled;
+	bool                           battery_state;
+	bool                           is_tlu_enabled;
+
+	uint32_t                       low_sclk_interrupt_threshold;
+
+	uint32_t                       total_active_cus;
+
+	struct vega10_display_timing display_timing;
+
+	/* ---- Vega10 Dyn Register Settings ---- */
+
+	uint32_t                       debug_settings;
+	uint32_t                       lowest_uclk_reserved_for_ulv;
+	uint32_t                       gfxclk_average_alpha;
+	uint32_t                       socclk_average_alpha;
+	uint32_t                       uclk_average_alpha;
+	uint32_t                       gfx_activity_average_alpha;
+	uint32_t                       display_voltage_mode;
+	uint32_t                       dcef_clk_quad_eqn_a;
+	uint32_t                       dcef_clk_quad_eqn_b;
+	uint32_t                       dcef_clk_quad_eqn_c;
+	uint32_t                       disp_clk_quad_eqn_a;
+	uint32_t                       disp_clk_quad_eqn_b;
+	uint32_t                       disp_clk_quad_eqn_c;
+	uint32_t                       pixel_clk_quad_eqn_a;
+	uint32_t                       pixel_clk_quad_eqn_b;
+	uint32_t                       pixel_clk_quad_eqn_c;
+	uint32_t                       phy_clk_quad_eqn_a;
+	uint32_t                       phy_clk_quad_eqn_b;
+	uint32_t                       phy_clk_quad_eqn_c;
+
+	/* ---- Thermal Temperature Setting ---- */
+	struct vega10_dpmlevel_enable_mask     dpm_level_enable_mask;
+
+	/* ---- Power Gating States ---- */
+	bool                           uvd_power_gated;
+	bool                           vce_power_gated;
+	bool                           samu_power_gated;
+	bool                           need_long_memory_training;
+
+	/* Internal settings to apply the application power optimization parameters */
+	bool                           apply_optimized_settings;
+	uint32_t                       disable_dpm_mask;
+
+	/* ---- Overdrive next setting ---- */
+	uint32_t                       apply_overdrive_next_settings_mask;
+
+	/* ---- Workload Mask ---- */
+	uint32_t                       workload_mask;
+
+	/* ---- SMU9 ---- */
+	struct smu_features            smu_features[GNLD_FEATURES_MAX];
+	struct vega10_smc_state_table  smc_state_table;
+
+	uint32_t                       config_telemetry;
+};
+
+#define VEGA10_DPM2_NEAR_TDP_DEC                      10
+#define VEGA10_DPM2_ABOVE_SAFE_INC                    5
+#define VEGA10_DPM2_BELOW_SAFE_INC                    20
+
+#define VEGA10_DPM2_LTA_WINDOW_SIZE                   7
+
+#define VEGA10_DPM2_LTS_TRUNCATE                      0
+
+#define VEGA10_DPM2_TDP_SAFE_LIMIT_PERCENT            80
+
+#define VEGA10_DPM2_MAXPS_PERCENT_M                   90
+#define VEGA10_DPM2_MAXPS_PERCENT_H                   90
+
+#define VEGA10_DPM2_PWREFFICIENCYRATIO_MARGIN         50
+
+#define VEGA10_DPM2_SQ_RAMP_MAX_POWER                 0x3FFF
+#define VEGA10_DPM2_SQ_RAMP_MIN_POWER                 0x12
+#define VEGA10_DPM2_SQ_RAMP_MAX_POWER_DELTA           0x15
+#define VEGA10_DPM2_SQ_RAMP_SHORT_TERM_INTERVAL_SIZE  0x1E
+#define VEGA10_DPM2_SQ_RAMP_LONG_TERM_INTERVAL_RATIO  0xF
+
+#define VEGA10_VOLTAGE_CONTROL_NONE                   0x0
+#define VEGA10_VOLTAGE_CONTROL_BY_GPIO                0x1
+#define VEGA10_VOLTAGE_CONTROL_BY_SVID2               0x2
+#define VEGA10_VOLTAGE_CONTROL_MERGED                 0x3
+/* To convert to Q8.8 format for firmware */
+#define VEGA10_Q88_FORMAT_CONVERSION_UNIT             256
+
+#define VEGA10_UNUSED_GPIO_PIN       0x7F
+
+#define VEGA10_THERM_OUT_MODE_DISABLE       0x0
+#define VEGA10_THERM_OUT_MODE_THERM_ONLY    0x1
+#define VEGA10_THERM_OUT_MODE_THERM_VRHOT   0x2
+
+#define PPVEGA10_VEGA10DISPLAYVOLTAGEMODE_DFLT   0xffffffff
+#define PPREGKEY_VEGA10QUADRATICEQUATION_DFLT    0xffffffff
+
+#define PPVEGA10_VEGA10GFXCLKAVERAGEALPHA_DFLT       25 /* 10% * 255 = 25 */
+#define PPVEGA10_VEGA10SOCCLKAVERAGEALPHA_DFLT       25 /* 10% * 255 = 25 */
+#define PPVEGA10_VEGA10UCLKCLKAVERAGEALPHA_DFLT      25 /* 10% * 255 = 25 */
+#define PPVEGA10_VEGA10GFXACTIVITYAVERAGEALPHA_DFLT  25 /* 10% * 255 = 25 */
+
+extern int tonga_initializa_dynamic_state_adjustment_rule_settings(struct pp_hwmgr *hwmgr);
+extern int tonga_hwmgr_backend_fini(struct pp_hwmgr *hwmgr);
+extern int tonga_get_mc_microcode_version (struct pp_hwmgr *hwmgr);
+extern int tonga_notify_smc_display_config_after_ps_adjustment(struct pp_hwmgr *hwmgr);
+extern int tonga_notify_smc_display_change(struct pp_hwmgr *hwmgr, bool has_display);
+int vega10_update_vce_dpm(struct pp_hwmgr *hwmgr, const void *input);
+int vega10_update_uvd_dpm(struct pp_hwmgr *hwmgr, bool bgate);
+int vega10_update_samu_dpm(struct pp_hwmgr *hwmgr, bool bgate);
+int vega10_update_acp_dpm(struct pp_hwmgr *hwmgr, bool bgate);
+int vega10_enable_disable_vce_dpm(struct pp_hwmgr *hwmgr, bool enable);
+
+#endif /* _VEGA10_HWMGR_H_ */
diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_inc.h b/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_inc.h
new file mode 100644
index 0000000..8c55eaa
--- /dev/null
+++ b/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_inc.h
@@ -0,0 +1,44 @@
+/*
+ * Copyright 2016 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+
+#ifndef VEGA10_INC_H
+#define VEGA10_INC_H
+
+#include "asic_reg/vega10/THM/thm_9_0_default.h"
+#include "asic_reg/vega10/THM/thm_9_0_offset.h"
+#include "asic_reg/vega10/THM/thm_9_0_sh_mask.h"
+
+#include "asic_reg/vega10/MP/mp_9_0_default.h"
+#include "asic_reg/vega10/MP/mp_9_0_offset.h"
+#include "asic_reg/vega10/MP/mp_9_0_sh_mask.h"
+
+#include "asic_reg/vega10/GC/gc_9_0_default.h"
+#include "asic_reg/vega10/GC/gc_9_0_offset.h"
+#include "asic_reg/vega10/GC/gc_9_0_sh_mask.h"
+
+#include "asic_reg/vega10/NBIO/nbio_6_1_default.h"
+#include "asic_reg/vega10/NBIO/nbio_6_1_offset.h"
+#include "asic_reg/vega10/NBIO/nbio_6_1_sh_mask.h"
+
+
+#endif
diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_powertune.c b/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_powertune.c
new file mode 100644
index 0000000..f1e244c
--- /dev/null
+++ b/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_powertune.c
@@ -0,0 +1,137 @@
+/*
+ * Copyright 2016 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+
+#include "hwmgr.h"
+#include "vega10_hwmgr.h"
+#include "vega10_powertune.h"
+#include "vega10_smumgr.h"
+#include "vega10_ppsmc.h"
+#include "pp_debug.h"
+
+void vega10_initialize_power_tune_defaults(struct pp_hwmgr *hwmgr)
+{
+	struct vega10_hwmgr *data = (struct vega10_hwmgr *)(hwmgr->backend);
+	struct phm_ppt_v2_information *table_info =
+			(struct phm_ppt_v2_information *)(hwmgr->pptable);
+	struct phm_tdp_table *tdp_table = table_info->tdp_table;
+	PPTable_t *table = &(data->smc_state_table.pp_table);
+
+	table->SocketPowerLimit = cpu_to_le16(
+			tdp_table->usMaximumPowerDeliveryLimit);
+	table->TdcLimit = cpu_to_le16(tdp_table->usTDC);
+	table->EdcLimit = cpu_to_le16(tdp_table->usEDCLimit);
+	table->TedgeLimit = cpu_to_le16(tdp_table->usTemperatureLimitTedge);
+	table->ThotspotLimit = cpu_to_le16(tdp_table->usTemperatureLimitHotspot);
+	table->ThbmLimit = cpu_to_le16(tdp_table->usTemperatureLimitHBM);
+	table->Tvr_socLimit = cpu_to_le16(tdp_table->usTemperatureLimitVrVddc);
+	table->Tvr_memLimit = cpu_to_le16(tdp_table->usTemperatureLimitVrMvdd);
+	table->Tliquid1Limit = cpu_to_le16(tdp_table->usTemperatureLimitLiquid1);
+	table->Tliquid2Limit = cpu_to_le16(tdp_table->usTemperatureLimitLiquid2);
+	table->TplxLimit = cpu_to_le16(tdp_table->usTemperatureLimitPlx);
+	table->LoadLineResistance = cpu_to_le16(
+			hwmgr->platform_descriptor.LoadLineSlope);
+	table->FitLimit = 0; /* Not used for Vega10 */
+
+	table->Liquid1_I2C_address = tdp_table->ucLiquid1_I2C_address;
+	table->Liquid2_I2C_address = tdp_table->ucLiquid2_I2C_address;
+	table->Vr_I2C_address = tdp_table->ucVr_I2C_address;
+	table->Plx_I2C_address = tdp_table->ucPlx_I2C_address;
+
+	table->Liquid_I2C_LineSCL = tdp_table->ucLiquid_I2C_Line;
+	table->Liquid_I2C_LineSDA = tdp_table->ucLiquid_I2C_LineSDA;
+
+	table->Vr_I2C_LineSCL = tdp_table->ucVr_I2C_Line;
+	table->Vr_I2C_LineSDA = tdp_table->ucVr_I2C_LineSDA;
+
+	table->Plx_I2C_LineSCL = tdp_table->ucPlx_I2C_Line;
+	table->Plx_I2C_LineSDA = tdp_table->ucPlx_I2C_LineSDA;
+}
+
+int vega10_set_power_limit(struct pp_hwmgr *hwmgr, uint32_t n)
+{
+	struct vega10_hwmgr *data =
+			(struct vega10_hwmgr *)(hwmgr->backend);
+
+	if (data->registry_data.enable_pkg_pwr_tracking_feature)
+		return smum_send_msg_to_smc_with_parameter(hwmgr->smumgr,
+				PPSMC_MSG_SetPptLimit, n);
+
+	return 0;
+}
+
+int vega10_enable_power_containment(struct pp_hwmgr *hwmgr)
+{
+	struct vega10_hwmgr *data =
+			(struct vega10_hwmgr *)(hwmgr->backend);
+	struct phm_ppt_v2_information *table_info =
+			(struct phm_ppt_v2_information *)(hwmgr->pptable);
+	struct phm_tdp_table *tdp_table = table_info->tdp_table;
+	uint32_t default_pwr_limit =
+			(uint32_t)(tdp_table->usMaximumPowerDeliveryLimit);
+	int result = 0;
+
+	if (phm_cap_enabled(hwmgr->platform_descriptor.platformCaps,
+			PHM_PlatformCaps_PowerContainment)) {
+		if (data->smu_features[GNLD_PPT].supported)
+			PP_ASSERT_WITH_CODE(!vega10_enable_smc_features(hwmgr->smumgr,
+					true, data->smu_features[GNLD_PPT].smu_feature_bitmap),
+					"Attempt to enable PPT feature Failed!",
+					data->smu_features[GNLD_PPT].supported = false);
+
+		if (data->smu_features[GNLD_TDC].supported)
+			PP_ASSERT_WITH_CODE(!vega10_enable_smc_features(hwmgr->smumgr,
+					true, data->smu_features[GNLD_TDC].smu_feature_bitmap),
+					"Attempt to enable TDC feature Failed!",
+					data->smu_features[GNLD_TDC].supported = false);
+
+		result = vega10_set_power_limit(hwmgr, default_pwr_limit);
+		PP_ASSERT_WITH_CODE(!result,
+				"Failed to set Default Power Limit in SMC!",
+				return result);
+	}
+
+	return result;
+}
+
+static int vega10_set_overdrive_target_percentage(struct pp_hwmgr *hwmgr,
+		uint32_t adjust_percent)
+{
+	return smum_send_msg_to_smc_with_parameter(hwmgr->smumgr,
+			PPSMC_MSG_OverDriveSetPercentage, adjust_percent);
+}
+
+int vega10_power_control_set_level(struct pp_hwmgr *hwmgr)
+{
+	int adjust_percent, result = 0;
+
+	if (phm_cap_enabled(hwmgr->platform_descriptor.platformCaps,
+			PHM_PlatformCaps_PowerContainment)) {
+		adjust_percent =
+				hwmgr->platform_descriptor.TDPAdjustmentPolarity ?
+				hwmgr->platform_descriptor.TDPAdjustment :
+				(-1 * hwmgr->platform_descriptor.TDPAdjustment);
+		result = vega10_set_overdrive_target_percentage(hwmgr,
+				(uint32_t)adjust_percent);
+	}
+	return result;
+}
diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_powertune.h b/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_powertune.h
new file mode 100644
index 0000000..d9662bf
--- /dev/null
+++ b/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_powertune.h
@@ -0,0 +1,65 @@
+/*
+ * Copyright 2016 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+#ifndef _VEGA10_POWERTUNE_H_
+#define _VEGA10_POWERTUNE_H_
+
+enum vega10_pt_config_reg_type {
+	VEGA10_CONFIGREG_MMR = 0,
+	VEGA10_CONFIGREG_SMC_IND,
+	VEGA10_CONFIGREG_DIDT_IND,
+	VEGA10_CONFIGREG_CACHE,
+	VEGA10_CONFIGREG_MAX
+};
+
+/* PowerContainment Features */
+#define POWERCONTAINMENT_FEATURE_DTE             0x00000001
+#define POWERCONTAINMENT_FEATURE_TDCLimit        0x00000002
+#define POWERCONTAINMENT_FEATURE_PkgPwrLimit     0x00000004
+
+struct vega10_pt_config_reg {
+	uint32_t                           offset;
+	uint32_t                           mask;
+	uint32_t                           shift;
+	uint32_t                           value;
+	enum vega10_pt_config_reg_type       type;
+};
+
+struct vega10_pt_defaults {
+	uint8_t   SviLoadLineEn;
+	uint8_t   SviLoadLineVddC;
+	uint8_t   TDC_VDDC_ThrottleReleaseLimitPerc;
+	uint8_t   TDC_MAWt;
+	uint8_t   TdcWaterfallCtl;
+	uint8_t   DTEAmbientTempBase;
+};
+
+void vega10_initialize_power_tune_defaults(struct pp_hwmgr *hwmgr);
+int vega10_populate_bapm_parameters_in_dpm_table(struct pp_hwmgr *hwmgr);
+int vega10_populate_pm_fuses(struct pp_hwmgr *hwmgr);
+int vega10_enable_smc_cac(struct pp_hwmgr *hwmgr);
+int vega10_enable_power_containment(struct pp_hwmgr *hwmgr);
+int vega10_set_power_limit(struct pp_hwmgr *hwmgr, uint32_t n);
+int vega10_power_control_set_level(struct pp_hwmgr *hwmgr);
+
+#endif  /* _VEGA10_POWERTUNE_H_ */
+
diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_pptable.h b/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_pptable.h
new file mode 100644
index 0000000..8e53d3a
--- /dev/null
+++ b/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_pptable.h
@@ -0,0 +1,331 @@
+/*
+ * Copyright 2016 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+#ifndef _VEGA10_PPTABLE_H_
+#define _VEGA10_PPTABLE_H_
+
+#pragma pack(push, 1)
+
+#define ATOM_VEGA10_PP_FANPARAMETERS_TACHOMETER_PULSES_PER_REVOLUTION_MASK 0x0f
+#define ATOM_VEGA10_PP_FANPARAMETERS_NOFAN                                 0x80
+
+#define ATOM_VEGA10_PP_THERMALCONTROLLER_NONE      0
+#define ATOM_VEGA10_PP_THERMALCONTROLLER_LM96163   17
+#define ATOM_VEGA10_PP_THERMALCONTROLLER_VEGA10    24
+
+#define ATOM_VEGA10_PP_THERMALCONTROLLER_ADT7473_WITH_INTERNAL   0x89
+#define ATOM_VEGA10_PP_THERMALCONTROLLER_EMC2103_WITH_INTERNAL   0x8D
+
+#define ATOM_VEGA10_PP_PLATFORM_CAP_POWERPLAY                   0x1
+#define ATOM_VEGA10_PP_PLATFORM_CAP_SBIOSPOWERSOURCE            0x2
+#define ATOM_VEGA10_PP_PLATFORM_CAP_HARDWAREDC                  0x4
+#define ATOM_VEGA10_PP_PLATFORM_CAP_BACO                        0x8
+#define ATOM_VEGA10_PP_PLATFORM_COMBINE_PCC_WITH_THERMAL_SIGNAL 0x10
+
+
+/* ATOM_PPLIB_NONCLOCK_INFO::usClassification */
+#define ATOM_PPLIB_CLASSIFICATION_UI_MASK               0x0007
+#define ATOM_PPLIB_CLASSIFICATION_UI_SHIFT              0
+#define ATOM_PPLIB_CLASSIFICATION_UI_NONE               0
+#define ATOM_PPLIB_CLASSIFICATION_UI_BATTERY            1
+#define ATOM_PPLIB_CLASSIFICATION_UI_BALANCED           3
+#define ATOM_PPLIB_CLASSIFICATION_UI_PERFORMANCE        5
+/* 2, 4, 6, 7 are reserved */
+
+#define ATOM_PPLIB_CLASSIFICATION_BOOT                  0x0008
+#define ATOM_PPLIB_CLASSIFICATION_THERMAL               0x0010
+#define ATOM_PPLIB_CLASSIFICATION_LIMITEDPOWERSOURCE    0x0020
+#define ATOM_PPLIB_CLASSIFICATION_REST                  0x0040
+#define ATOM_PPLIB_CLASSIFICATION_FORCED                0x0080
+#define ATOM_PPLIB_CLASSIFICATION_ACPI                  0x1000
+
+/* ATOM_PPLIB_NONCLOCK_INFO::usClassification2 */
+#define ATOM_PPLIB_CLASSIFICATION2_LIMITEDPOWERSOURCE_2 0x0001
+
+#define ATOM_Vega10_DISALLOW_ON_DC                   0x00004000
+#define ATOM_Vega10_ENABLE_VARIBRIGHT                0x00008000
+
+#define ATOM_Vega10_TABLE_REVISION_VEGA10         8
+
+#define ATOM_Vega10_VoltageMode_AVFS_Interpolate     0
+#define ATOM_Vega10_VoltageMode_AVFS_WorstCase       1
+#define ATOM_Vega10_VoltageMode_Static               2
+
+typedef struct _ATOM_Vega10_POWERPLAYTABLE {
+	struct atom_common_table_header sHeader;
+	UCHAR  ucTableRevision;
+	USHORT usTableSize;                        /* the size of header structure */
+	ULONG  ulGoldenPPID;                       /* PPGen use only */
+	ULONG  ulGoldenRevision;                   /* PPGen use only */
+	USHORT usFormatID;                         /* PPGen use only */
+	ULONG  ulPlatformCaps;                     /* See ATOM_VEGA10_PP_PLATFORM_CAP_* */
+	ULONG  ulMaxODEngineClock;                 /* For Overdrive. */
+	ULONG  ulMaxODMemoryClock;                 /* For Overdrive. */
+	USHORT usPowerControlLimit;
+	USHORT usUlvVoltageOffset;                 /* in mv units */
+	USHORT usUlvSmnclkDid;
+	USHORT usUlvMp1clkDid;
+	USHORT usUlvGfxclkBypass;
+	USHORT usGfxclkSlewRate;
+	UCHAR  ucGfxVoltageMode;
+	UCHAR  ucSocVoltageMode;
+	UCHAR  ucUclkVoltageMode;
+	UCHAR  ucUvdVoltageMode;
+	UCHAR  ucVceVoltageMode;
+	UCHAR  ucMp0VoltageMode;
+	UCHAR  ucDcefVoltageMode;
+	USHORT usStateArrayOffset;                 /* points to ATOM_Vega10_State_Array */
+	USHORT usFanTableOffset;                   /* points to ATOM_Vega10_Fan_Table */
+	USHORT usThermalControllerOffset;          /* points to ATOM_Vega10_Thermal_Controller */
+	USHORT usSocclkDependencyTableOffset;      /* points to ATOM_Vega10_SOCCLK_Dependency_Table */
+	USHORT usMclkDependencyTableOffset;        /* points to ATOM_Vega10_MCLK_Dependency_Table */
+	USHORT usGfxclkDependencyTableOffset;      /* points to ATOM_Vega10_GFXCLK_Dependency_Table */
+	USHORT usDcefclkDependencyTableOffset;     /* points to ATOM_Vega10_DCEFCLK_Dependency_Table */
+	USHORT usVddcLookupTableOffset;            /* points to ATOM_Vega10_Voltage_Lookup_Table */
+	USHORT usVddmemLookupTableOffset;          /* points to ATOM_Vega10_Voltage_Lookup_Table */
+	USHORT usMMDependencyTableOffset;          /* points to ATOM_Vega10_MM_Dependency_Table */
+	USHORT usVCEStateTableOffset;              /* points to ATOM_Vega10_VCE_State_Table */
+	USHORT usReserve;                          /* No PPM Support for Vega10 */
+	USHORT usPowerTuneTableOffset;             /* points to ATOM_Vega10_PowerTune_Table */
+	USHORT usHardLimitTableOffset;             /* points to ATOM_Vega10_Hard_Limit_Table */
+	USHORT usVddciLookupTableOffset;           /* points to ATOM_Vega10_Voltage_Lookup_Table */
+	USHORT usPCIETableOffset;                  /* points to ATOM_Vega10_PCIE_Table */
+	USHORT usPixclkDependencyTableOffset;      /* points to ATOM_Vega10_PIXCLK_Dependency_Table */
+	USHORT usDispClkDependencyTableOffset;     /* points to ATOM_Vega10_DISPCLK_Dependency_Table */
+	USHORT usPhyClkDependencyTableOffset;      /* points to ATOM_Vega10_PHYCLK_Dependency_Table */
+} ATOM_Vega10_POWERPLAYTABLE;
+
+typedef struct _ATOM_Vega10_State {
+	UCHAR  ucSocClockIndexHigh;
+	UCHAR  ucSocClockIndexLow;
+	UCHAR  ucGfxClockIndexHigh;
+	UCHAR  ucGfxClockIndexLow;
+	UCHAR  ucMemClockIndexHigh;
+	UCHAR  ucMemClockIndexLow;
+	USHORT usClassification;
+	ULONG  ulCapsAndSettings;
+	USHORT usClassification2;
+} ATOM_Vega10_State;
+
+typedef struct _ATOM_Vega10_State_Array {
+	UCHAR ucRevId;
+	UCHAR ucNumEntries;                                         /* Number of entries. */
+	ATOM_Vega10_State states[1];                             /* Dynamically allocate entries. */
+} ATOM_Vega10_State_Array;
+
+typedef struct _ATOM_Vega10_CLK_Dependency_Record {
+	ULONG  ulClk;                                               /* Frequency of Clock */
+	UCHAR  ucVddInd;                                            /* Base voltage */
+} ATOM_Vega10_CLK_Dependency_Record;
+
+typedef struct _ATOM_Vega10_GFXCLK_Dependency_Record {
+	ULONG  ulClk;                                               /* Clock Frequency */
+	UCHAR  ucVddInd;                                            /* SOC_VDD index */
+	USHORT usCKSVOffsetandDisable;                              /* Bits 0~30: Voltage offset for CKS, Bit 31: Disable/enable for the GFXCLK level. */
+	USHORT usAVFSOffset;                                        /* AVFS Voltage offset */
+} ATOM_Vega10_GFXCLK_Dependency_Record;
+
+typedef struct _ATOM_Vega10_MCLK_Dependency_Record {
+	ULONG  ulMemClk;                                            /* Clock Frequency */
+	UCHAR  ucVddInd;                                            /* SOC_VDD index */
+	UCHAR  ucVddMemInd;                                         /* MEM_VDD - only non zero for MCLK record */
+	UCHAR  ucVddciInd;                                          /* VDDCI   - only non zero for MCLK record */
+} ATOM_Vega10_MCLK_Dependency_Record;
+
+typedef struct _ATOM_Vega10_GFXCLK_Dependency_Table {
+	UCHAR ucRevId;
+	UCHAR ucNumEntries;                                         /* Number of entries. */
+	ATOM_Vega10_GFXCLK_Dependency_Record entries[1];            /* Dynamically allocate entries. */
+} ATOM_Vega10_GFXCLK_Dependency_Table;
+
+typedef struct _ATOM_Vega10_MCLK_Dependency_Table {
+	UCHAR ucRevId;
+	UCHAR ucNumEntries;                                         /* Number of entries. */
+	ATOM_Vega10_MCLK_Dependency_Record entries[1];              /* Dynamically allocate entries. */
+} ATOM_Vega10_MCLK_Dependency_Table;
+
+typedef struct _ATOM_Vega10_SOCCLK_Dependency_Table {
+	UCHAR ucRevId;
+	UCHAR ucNumEntries;                                         /* Number of entries. */
+	ATOM_Vega10_CLK_Dependency_Record entries[1];               /* Dynamically allocate entries. */
+} ATOM_Vega10_SOCCLK_Dependency_Table;
+
+typedef struct _ATOM_Vega10_DCEFCLK_Dependency_Table {
+	UCHAR ucRevId;
+	UCHAR ucNumEntries;                                         /* Number of entries. */
+	ATOM_Vega10_CLK_Dependency_Record entries[1];               /* Dynamically allocate entries. */
+} ATOM_Vega10_DCEFCLK_Dependency_Table;
+
+typedef struct _ATOM_Vega10_PIXCLK_Dependency_Table {
+	UCHAR ucRevId;
+	UCHAR ucNumEntries;                                         /* Number of entries. */
+	ATOM_Vega10_CLK_Dependency_Record entries[1];            /* Dynamically allocate entries. */
+} ATOM_Vega10_PIXCLK_Dependency_Table;
+
+typedef struct _ATOM_Vega10_DISPCLK_Dependency_Table {
+	UCHAR ucRevId;
+	UCHAR ucNumEntries;                                         /* Number of entries.*/
+	ATOM_Vega10_CLK_Dependency_Record entries[1];            /* Dynamically allocate entries. */
+} ATOM_Vega10_DISPCLK_Dependency_Table;
+
+typedef struct _ATOM_Vega10_PHYCLK_Dependency_Table {
+	UCHAR ucRevId;
+	UCHAR ucNumEntries;                                         /* Number of entries. */
+	ATOM_Vega10_CLK_Dependency_Record entries[1];            /* Dynamically allocate entries. */
+} ATOM_Vega10_PHYCLK_Dependency_Table;
+
+typedef struct _ATOM_Vega10_MM_Dependency_Record {
+	UCHAR  ucVddcInd;                                           /* SOC_VDD voltage */
+	ULONG  ulDClk;                                              /* UVD D-clock */
+	ULONG  ulVClk;                                              /* UVD V-clock */
+	ULONG  ulEClk;                                              /* VCE clock */
+	ULONG  ulPSPClk;                                            /* PSP clock */
+} ATOM_Vega10_MM_Dependency_Record;
+
+typedef struct _ATOM_Vega10_MM_Dependency_Table {
+	UCHAR ucRevId;
+	UCHAR ucNumEntries;                                         /* Number of entries */
+	ATOM_Vega10_MM_Dependency_Record entries[1];             /* Dynamically allocate entries */
+} ATOM_Vega10_MM_Dependency_Table;
+
+typedef struct _ATOM_Vega10_PCIE_Record {
+	ULONG ulLCLK;                                               /* LClock */
+	UCHAR ucPCIEGenSpeed;                                       /* PCIE Speed */
+	UCHAR ucPCIELaneWidth;                                      /* PCIE Lane Width */
+} ATOM_Vega10_PCIE_Record;
+
+typedef struct _ATOM_Vega10_PCIE_Table {
+	UCHAR  ucRevId;
+	UCHAR  ucNumEntries;                                        /* Number of entries */
+	ATOM_Vega10_PCIE_Record entries[1];                      /* Dynamically allocate entries. */
+} ATOM_Vega10_PCIE_Table;
+
+typedef struct _ATOM_Vega10_Voltage_Lookup_Record {
+	USHORT usVdd;                                               /* Base voltage */
+} ATOM_Vega10_Voltage_Lookup_Record;
+
+typedef struct _ATOM_Vega10_Voltage_Lookup_Table {
+	UCHAR ucRevId;
+	UCHAR ucNumEntries;                                          /* Number of entries */
+	ATOM_Vega10_Voltage_Lookup_Record entries[1];             /* Dynamically allocate entries */
+} ATOM_Vega10_Voltage_Lookup_Table;
+
+typedef struct _ATOM_Vega10_Fan_Table {
+	UCHAR   ucRevId;                         /* Change this if the table format changes or version changes so that the other fields are not the same. */
+	USHORT  usFanOutputSensitivity;          /* Sensitivity of fan reaction to temperature changes. */
+	USHORT  usFanRPMMax;                     /* The default value in RPM. */
+	USHORT  usThrottlingRPM;
+	USHORT  usFanAcousticLimit;              /* Minimum Fan Controller Frequency Acoustic Limit. */
+	USHORT  usTargetTemperature;             /* The default ideal temperature in Celsius. */
+	USHORT  usMinimumPWMLimit;               /* The minimum PWM that the advanced fan controller can set. */
+	USHORT  usTargetGfxClk;                   /* The ideal Fan Controller GFXCLK Frequency Acoustic Limit. */
+	USHORT  usFanGainEdge;
+	USHORT  usFanGainHotspot;
+	USHORT  usFanGainLiquid;
+	USHORT  usFanGainVrVddc;
+	USHORT  usFanGainVrMvdd;
+	USHORT  usFanGainPlx;
+	USHORT  usFanGainHbm;
+	UCHAR   ucEnableZeroRPM;
+	USHORT  usFanStopTemperature;
+	USHORT  usFanStartTemperature;
+} ATOM_Vega10_Fan_Table;
+
+typedef struct _ATOM_Vega10_Thermal_Controller {
+	UCHAR ucRevId;
+	UCHAR ucType;           /* one of ATOM_VEGA10_PP_THERMALCONTROLLER_* */
+	UCHAR ucI2cLine;        /* as interpreted by DAL I2C */
+	UCHAR ucI2cAddress;
+	UCHAR ucFanParameters;  /* Fan Control Parameters. */
+	UCHAR ucFanMinRPM;      /* Fan Minimum RPM (hundreds) -- for display purposes only.*/
+	UCHAR ucFanMaxRPM;      /* Fan Maximum RPM (hundreds) -- for display purposes only.*/
+	UCHAR ucFlags;          /* to be defined */
+} ATOM_Vega10_Thermal_Controller;
+
+typedef struct _ATOM_Vega10_VCE_State_Record
+{
+	UCHAR  ucVCEClockIndex;         /* index into usVCEDependencyTableOffset of 'ATOM_Vega10_MM_Dependency_Table' type */
+	UCHAR  ucFlag;                  /* 2 bits indicates memory p-states */
+	UCHAR  ucSCLKIndex;             /* index into ATOM_Vega10_SCLK_Dependency_Table */
+	UCHAR  ucMCLKIndex;             /* index into ATOM_Vega10_MCLK_Dependency_Table */
+} ATOM_Vega10_VCE_State_Record;
+
+typedef struct _ATOM_Vega10_VCE_State_Table
+{
+	UCHAR ucRevId;
+	UCHAR ucNumEntries;
+	ATOM_Vega10_VCE_State_Record entries[1];
+} ATOM_Vega10_VCE_State_Table;
+
+typedef struct _ATOM_Vega10_PowerTune_Table {
+	UCHAR  ucRevId;
+	USHORT usSocketPowerLimit;
+	USHORT usBatteryPowerLimit;
+	USHORT usSmallPowerLimit;
+	USHORT usTdcLimit;
+	USHORT usEdcLimit;
+	USHORT usSoftwareShutdownTemp;
+	USHORT usTemperatureLimitHotSpot;
+	USHORT usTemperatureLimitLiquid1;
+	USHORT usTemperatureLimitLiquid2;
+	USHORT usTemperatureLimitHBM;
+	USHORT usTemperatureLimitVrSoc;
+	USHORT usTemperatureLimitVrMem;
+	USHORT usTemperatureLimitPlx;
+	USHORT usLoadLineResistance;
+	UCHAR  ucLiquid1_I2C_address;
+	UCHAR  ucLiquid2_I2C_address;
+	UCHAR  ucVr_I2C_address;
+	UCHAR  ucPlx_I2C_address;
+	UCHAR  ucLiquid_I2C_LineSCL;
+	UCHAR  ucLiquid_I2C_LineSDA;
+	UCHAR  ucVr_I2C_LineSCL;
+	UCHAR  ucVr_I2C_LineSDA;
+	UCHAR  ucPlx_I2C_LineSCL;
+	UCHAR  ucPlx_I2C_LineSDA;
+	USHORT usTemperatureLimitTedge;
+} ATOM_Vega10_PowerTune_Table;
+
+typedef struct _ATOM_Vega10_Hard_Limit_Record {
+	ULONG  ulSOCCLKLimit;
+	ULONG  ulGFXCLKLimit;
+	ULONG  ulMCLKLimit;
+	USHORT usVddcLimit;
+	USHORT usVddciLimit;
+	USHORT usVddMemLimit;
+} ATOM_Vega10_Hard_Limit_Record;
+
+typedef struct _ATOM_Vega10_Hard_Limit_Table
+{
+	UCHAR ucRevId;
+	UCHAR ucNumEntries;
+	ATOM_Vega10_Hard_Limit_Record entries[1];
+} ATOM_Vega10_Hard_Limit_Table;
+
+typedef struct _Vega10_PPTable_Generic_SubTable_Header
+{
+	UCHAR  ucRevId;
+} Vega10_PPTable_Generic_SubTable_Header;
+
+#pragma pack(pop)
+
+#endif
diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_processpptables.c b/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_processpptables.c
new file mode 100644
index 0000000..518634f
--- /dev/null
+++ b/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_processpptables.c
@@ -0,0 +1,1056 @@
+/*
+ * Copyright 2016 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+#include <linux/module.h>
+#include <linux/slab.h>
+#include <linux/fb.h>
+
+#include "vega10_processpptables.h"
+#include "ppatomfwctrl.h"
+#include "atomfirmware.h"
+#include "pp_debug.h"
+#include "cgs_common.h"
+#include "vega10_pptable.h"
+
+static void set_hw_cap(struct pp_hwmgr *hwmgr, bool enable,
+		enum phm_platform_caps cap)
+{
+	if (enable)
+		phm_cap_set(hwmgr->platform_descriptor.platformCaps, cap);
+	else
+		phm_cap_unset(hwmgr->platform_descriptor.platformCaps, cap);
+}
+
+static const void *get_powerplay_table(struct pp_hwmgr *hwmgr)
+{
+	int index = GetIndexIntoMasterDataTable(powerplayinfo);
+
+	u16 size;
+	u8 frev, crev;
+	const void *table_address = hwmgr->soft_pp_table;
+
+	if (!table_address) {
+		table_address = cgs_atom_get_data_table(hwmgr->device, index,
+				&size, &frev, &crev);
+
+		hwmgr->soft_pp_table = table_address;	/* Cache the result in RAM. */
+	}
+
+	return table_address;
+}
+
+static int check_powerplay_tables(
+		struct pp_hwmgr *hwmgr,
+		const ATOM_Vega10_POWERPLAYTABLE *powerplay_table)
+{
+	const ATOM_Vega10_State_Array *state_arrays;
+
+	state_arrays = (ATOM_Vega10_State_Array *)(((unsigned long)powerplay_table) +
+		le16_to_cpu(powerplay_table->usStateArrayOffset));
+
+	PP_ASSERT_WITH_CODE((powerplay_table->sHeader.format_revision >=
+			ATOM_Vega10_TABLE_REVISION_VEGA10),
+		"Unsupported PPTable format!", return -1);
+	PP_ASSERT_WITH_CODE(powerplay_table->usStateArrayOffset,
+		"State table is not set!", return -1);
+	PP_ASSERT_WITH_CODE(powerplay_table->sHeader.structuresize > 0,
+		"Invalid PowerPlay Table!", return -1);
+	PP_ASSERT_WITH_CODE(state_arrays->ucNumEntries > 0,
+		"Invalid PowerPlay Table!", return -1);
+
+	return 0;
+}
+
+static int set_platform_caps(struct pp_hwmgr *hwmgr, uint32_t powerplay_caps)
+{
+	set_hw_cap(
+			hwmgr,
+			0 != (powerplay_caps & ATOM_VEGA10_PP_PLATFORM_CAP_POWERPLAY),
+			PHM_PlatformCaps_PowerPlaySupport);
+
+	set_hw_cap(
+			hwmgr,
+			0 != (powerplay_caps & ATOM_VEGA10_PP_PLATFORM_CAP_SBIOSPOWERSOURCE),
+			PHM_PlatformCaps_BiosPowerSourceControl);
+
+	set_hw_cap(
+			hwmgr,
+			0 != (powerplay_caps & ATOM_VEGA10_PP_PLATFORM_CAP_HARDWAREDC),
+			PHM_PlatformCaps_AutomaticDCTransition);
+
+	set_hw_cap(
+			hwmgr,
+			0 != (powerplay_caps & ATOM_VEGA10_PP_PLATFORM_CAP_BACO),
+			PHM_PlatformCaps_BACO);
+
+	set_hw_cap(
+			hwmgr,
+			0 != (powerplay_caps & ATOM_VEGA10_PP_PLATFORM_COMBINE_PCC_WITH_THERMAL_SIGNAL),
+			PHM_PlatformCaps_CombinePCCWithThermalSignal);
+
+	return 0;
+}
+
+static int init_thermal_controller(
+		struct pp_hwmgr *hwmgr,
+		const ATOM_Vega10_POWERPLAYTABLE *powerplay_table)
+{
+	const ATOM_Vega10_Thermal_Controller *thermal_controller;
+	const ATOM_Vega10_Fan_Table *fan_table;
+
+	thermal_controller = (ATOM_Vega10_Thermal_Controller *)
+			(((unsigned long)powerplay_table) +
+			le16_to_cpu(powerplay_table->usThermalControllerOffset));
+
+	PP_ASSERT_WITH_CODE((powerplay_table->usThermalControllerOffset != 0),
+			"Thermal controller table not set!", return -1);
+
+	hwmgr->thermal_controller.ucType = thermal_controller->ucType;
+	hwmgr->thermal_controller.ucI2cLine = thermal_controller->ucI2cLine;
+	hwmgr->thermal_controller.ucI2cAddress = thermal_controller->ucI2cAddress;
+
+	hwmgr->thermal_controller.fanInfo.bNoFan =
+			(0 != (thermal_controller->ucFanParameters &
+			ATOM_VEGA10_PP_FANPARAMETERS_NOFAN));
+
+	hwmgr->thermal_controller.fanInfo.ucTachometerPulsesPerRevolution =
+			thermal_controller->ucFanParameters &
+			ATOM_VEGA10_PP_FANPARAMETERS_TACHOMETER_PULSES_PER_REVOLUTION_MASK;
+
+	hwmgr->thermal_controller.fanInfo.ulMinRPM =
+			thermal_controller->ucFanMinRPM * 100UL;
+	hwmgr->thermal_controller.fanInfo.ulMaxRPM =
+			thermal_controller->ucFanMaxRPM * 100UL;
+
+	set_hw_cap(
+			hwmgr,
+			ATOM_VEGA10_PP_THERMALCONTROLLER_NONE != hwmgr->thermal_controller.ucType,
+			PHM_PlatformCaps_ThermalController);
+
+	if (!powerplay_table->usFanTableOffset)
+		return 0;
+
+	fan_table = (const ATOM_Vega10_Fan_Table *)
+			(((unsigned long)powerplay_table) +
+			le16_to_cpu(powerplay_table->usFanTableOffset));
+
+	PP_ASSERT_WITH_CODE((fan_table->ucRevId >= 8),
+		"Invalid Input Fan Table!", return -1);
+
+	hwmgr->thermal_controller.advanceFanControlParameters.ulCycleDelay
+		= 100000;
+	phm_cap_set(hwmgr->platform_descriptor.platformCaps,
+		PHM_PlatformCaps_MicrocodeFanControl);
+
+	hwmgr->thermal_controller.advanceFanControlParameters.usFanOutputSensitivity =
+			le16_to_cpu(fan_table->usFanOutputSensitivity);
+	hwmgr->thermal_controller.advanceFanControlParameters.usMaxFanRPM =
+			le16_to_cpu(fan_table->usFanRPMMax);
+	hwmgr->thermal_controller.advanceFanControlParameters.usFanRPMMaxLimit =
+			le16_to_cpu(fan_table->usThrottlingRPM);
+	hwmgr->thermal_controller.advanceFanControlParameters.ulMinFanSCLKAcousticLimit =
+			le16_to_cpu(fan_table->usFanAcousticLimit);
+	hwmgr->thermal_controller.advanceFanControlParameters.usTMax =
+			le16_to_cpu(fan_table->usTargetTemperature);
+	hwmgr->thermal_controller.advanceFanControlParameters.usPWMMin =
+			le16_to_cpu(fan_table->usMinimumPWMLimit);
+	hwmgr->thermal_controller.advanceFanControlParameters.ulTargetGfxClk =
+			le16_to_cpu(fan_table->usTargetGfxClk);
+	hwmgr->thermal_controller.advanceFanControlParameters.usFanGainEdge =
+			le16_to_cpu(fan_table->usFanGainEdge);
+	hwmgr->thermal_controller.advanceFanControlParameters.usFanGainHotspot =
+			le16_to_cpu(fan_table->usFanGainHotspot);
+	hwmgr->thermal_controller.advanceFanControlParameters.usFanGainLiquid =
+			le16_to_cpu(fan_table->usFanGainLiquid);
+	hwmgr->thermal_controller.advanceFanControlParameters.usFanGainVrVddc =
+			le16_to_cpu(fan_table->usFanGainVrVddc);
+	hwmgr->thermal_controller.advanceFanControlParameters.usFanGainVrMvdd =
+			le16_to_cpu(fan_table->usFanGainVrMvdd);
+	hwmgr->thermal_controller.advanceFanControlParameters.usFanGainPlx =
+			le16_to_cpu(fan_table->usFanGainPlx);
+	hwmgr->thermal_controller.advanceFanControlParameters.usFanGainHbm =
+			le16_to_cpu(fan_table->usFanGainHbm);
+
+	hwmgr->thermal_controller.advanceFanControlParameters.ucEnableZeroRPM =
+			fan_table->ucEnableZeroRPM;
+	hwmgr->thermal_controller.advanceFanControlParameters.usZeroRPMStopTemperature =
+			le16_to_cpu(fan_table->usFanStopTemperature);
+	hwmgr->thermal_controller.advanceFanControlParameters.usZeroRPMStartTemperature =
+			le16_to_cpu(fan_table->usFanStartTemperature);
+
+	return 0;
+}
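All multi-byte fields read from the vBIOS table above go through le16_to_cpu()/le32_to_cpu(), because ATOM tables are stored little-endian regardless of host byte order. A minimal standalone sketch of that conversion (hypothetical helper names, not the kernel's byteorder implementation):

```c
#include <stdint.h>

/* Decode little-endian integers from a raw byte buffer, independent of
 * host endianness -- the same job le16_to_cpu()/le32_to_cpu() do for
 * fields read out of the ATOM powerplay table. */
static uint16_t get_le16(const uint8_t *p)
{
	return (uint16_t)(p[0] | ((uint16_t)p[1] << 8));
}

static uint32_t get_le32(const uint8_t *p)
{
	return (uint32_t)p[0] | ((uint32_t)p[1] << 8) |
	       ((uint32_t)p[2] << 16) | ((uint32_t)p[3] << 24);
}
```

On a little-endian host the kernel macros compile to no-ops; the explicit byte assembly above is what a big-endian build must effectively perform.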
+
+static int init_over_drive_limits(
+		struct pp_hwmgr *hwmgr,
+		const ATOM_Vega10_POWERPLAYTABLE *powerplay_table)
+{
+	hwmgr->platform_descriptor.overdriveLimit.engineClock =
+			le32_to_cpu(powerplay_table->ulMaxODEngineClock);
+	hwmgr->platform_descriptor.overdriveLimit.memoryClock =
+			le32_to_cpu(powerplay_table->ulMaxODMemoryClock);
+
+	hwmgr->platform_descriptor.minOverdriveVDDC = 0;
+	hwmgr->platform_descriptor.maxOverdriveVDDC = 0;
+	hwmgr->platform_descriptor.overdriveVDDCStep = 0;
+
+	if (hwmgr->platform_descriptor.overdriveLimit.engineClock > 0 &&
+		hwmgr->platform_descriptor.overdriveLimit.memoryClock > 0) {
+		phm_cap_set(hwmgr->platform_descriptor.platformCaps,
+			PHM_PlatformCaps_ACOverdriveSupport);
+	}
+
+	return 0;
+}
+
+static int get_mm_clock_voltage_table(
+		struct pp_hwmgr *hwmgr,
+		phm_ppt_v1_mm_clock_voltage_dependency_table **vega10_mm_table,
+		const ATOM_Vega10_MM_Dependency_Table *mm_dependency_table)
+{
+	uint32_t table_size, i;
+	const ATOM_Vega10_MM_Dependency_Record *mm_dependency_record;
+	phm_ppt_v1_mm_clock_voltage_dependency_table *mm_table;
+
+	PP_ASSERT_WITH_CODE((mm_dependency_table->ucNumEntries != 0),
+			"Invalid PowerPlay Table!", return -1);
+
+	table_size = sizeof(uint32_t) +
+			sizeof(phm_ppt_v1_mm_clock_voltage_dependency_record) *
+			mm_dependency_table->ucNumEntries;
+	mm_table = (phm_ppt_v1_mm_clock_voltage_dependency_table *)
+			kzalloc(table_size, GFP_KERNEL);
+
+	if (!mm_table)
+		return -ENOMEM;
+
+	mm_table->count = mm_dependency_table->ucNumEntries;
+
+	for (i = 0; i < mm_dependency_table->ucNumEntries; i++) {
+		mm_dependency_record = &mm_dependency_table->entries[i];
+		mm_table->entries[i].vddcInd = mm_dependency_record->ucVddcInd;
+		mm_table->entries[i].samclock =
+				le32_to_cpu(mm_dependency_record->ulPSPClk);
+		mm_table->entries[i].eclk = le32_to_cpu(mm_dependency_record->ulEClk);
+		mm_table->entries[i].vclk = le32_to_cpu(mm_dependency_record->ulVClk);
+		mm_table->entries[i].dclk = le32_to_cpu(mm_dependency_record->ulDClk);
+	}
+
+	*vega10_mm_table = mm_table;
+
+	return 0;
+}
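Each conversion helper in this file follows the same allocation pattern: size the destination as a 32-bit count word plus N fixed-size records, allocate once, then copy entry by entry. A user-space sketch of the sizing logic (simplified hypothetical record type, calloc standing in for kzalloc):

```c
#include <stdint.h>
#include <stdlib.h>

struct clk_record {
	uint32_t clk;
	uint8_t  vdd_index;
};

struct clk_table {
	uint32_t count;
	struct clk_record entries[];	/* flexible array member */
};

/* Allocate a table sized for n records in one shot, mirroring the
 * "sizeof(uint32_t) + n * sizeof(record)" computation used by the
 * pptable conversion helpers. */
static struct clk_table *alloc_clk_table(uint32_t n)
{
	struct clk_table *t;

	t = calloc(1, sizeof(*t) + n * sizeof(t->entries[0]));
	if (!t)
		return NULL;
	t->count = n;
	return t;
}
```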
+
+static int get_tdp_table(
+		struct pp_hwmgr *hwmgr,
+		struct phm_tdp_table **info_tdp_table,
+		const Vega10_PPTable_Generic_SubTable_Header *table)
+{
+	uint32_t table_size;
+	struct phm_tdp_table *tdp_table;
+
+	const ATOM_Vega10_PowerTune_Table *power_tune_table =
+			(ATOM_Vega10_PowerTune_Table *)table;
+
+	table_size = sizeof(uint32_t) + sizeof(struct phm_cac_tdp_table);
+	hwmgr->dyn_state.cac_dtp_table = (struct phm_cac_tdp_table *)
+			kzalloc(table_size, GFP_KERNEL);
+
+	if (!hwmgr->dyn_state.cac_dtp_table)
+		return -ENOMEM;
+
+	table_size = sizeof(uint32_t) + sizeof(struct phm_tdp_table);
+	tdp_table = kzalloc(table_size, GFP_KERNEL);
+
+	if (!tdp_table) {
+		kfree(hwmgr->dyn_state.cac_dtp_table);
+		hwmgr->dyn_state.cac_dtp_table = NULL;
+		return -ENOMEM;
+	}
+
+	tdp_table->usMaximumPowerDeliveryLimit = le16_to_cpu(power_tune_table->usSocketPowerLimit);
+	tdp_table->usTDC = le16_to_cpu(power_tune_table->usTdcLimit);
+	tdp_table->usEDCLimit = le16_to_cpu(power_tune_table->usEdcLimit);
+	tdp_table->usSoftwareShutdownTemp =
+			le16_to_cpu(power_tune_table->usSoftwareShutdownTemp);
+	tdp_table->usTemperatureLimitTedge =
+			le16_to_cpu(power_tune_table->usTemperatureLimitTedge);
+	tdp_table->usTemperatureLimitHotspot =
+			le16_to_cpu(power_tune_table->usTemperatureLimitHotSpot);
+	tdp_table->usTemperatureLimitLiquid1 =
+			le16_to_cpu(power_tune_table->usTemperatureLimitLiquid1);
+	tdp_table->usTemperatureLimitLiquid2 =
+			le16_to_cpu(power_tune_table->usTemperatureLimitLiquid2);
+	tdp_table->usTemperatureLimitHBM =
+			le16_to_cpu(power_tune_table->usTemperatureLimitHBM);
+	tdp_table->usTemperatureLimitVrVddc =
+			le16_to_cpu(power_tune_table->usTemperatureLimitVrSoc);
+	tdp_table->usTemperatureLimitVrMvdd =
+			le16_to_cpu(power_tune_table->usTemperatureLimitVrMem);
+	tdp_table->usTemperatureLimitPlx =
+			le16_to_cpu(power_tune_table->usTemperatureLimitPlx);
+	tdp_table->ucLiquid1_I2C_address = power_tune_table->ucLiquid1_I2C_address;
+	tdp_table->ucLiquid2_I2C_address = power_tune_table->ucLiquid2_I2C_address;
+	tdp_table->ucLiquid_I2C_Line = power_tune_table->ucLiquid_I2C_LineSCL;
+	tdp_table->ucLiquid_I2C_LineSDA = power_tune_table->ucLiquid_I2C_LineSDA;
+	tdp_table->ucVr_I2C_address = power_tune_table->ucVr_I2C_address;
+	tdp_table->ucVr_I2C_Line = power_tune_table->ucVr_I2C_LineSCL;
+	tdp_table->ucVr_I2C_LineSDA = power_tune_table->ucVr_I2C_LineSDA;
+	tdp_table->ucPlx_I2C_address = power_tune_table->ucPlx_I2C_address;
+	tdp_table->ucPlx_I2C_Line = power_tune_table->ucPlx_I2C_LineSCL;
+	tdp_table->ucPlx_I2C_LineSDA = power_tune_table->ucPlx_I2C_LineSDA;
+
+	hwmgr->platform_descriptor.LoadLineSlope =
+			le16_to_cpu(power_tune_table->usLoadLineResistance);
+
+	*info_tdp_table = tdp_table;
+
+	return 0;
+}
+
+static int get_socclk_voltage_dependency_table(
+		struct pp_hwmgr *hwmgr,
+		phm_ppt_v1_clock_voltage_dependency_table **pp_vega10_clk_dep_table,
+		const ATOM_Vega10_SOCCLK_Dependency_Table *clk_dep_table)
+{
+	uint32_t table_size, i;
+	phm_ppt_v1_clock_voltage_dependency_table *clk_table;
+
+	PP_ASSERT_WITH_CODE(clk_dep_table->ucNumEntries,
+		"Invalid PowerPlay Table!", return -1);
+
+	table_size = sizeof(uint32_t) +
+			sizeof(phm_ppt_v1_clock_voltage_dependency_record) *
+			clk_dep_table->ucNumEntries;
+
+	clk_table = (phm_ppt_v1_clock_voltage_dependency_table *)
+			kzalloc(table_size, GFP_KERNEL);
+
+	if (!clk_table)
+		return -ENOMEM;
+
+	clk_table->count = (uint32_t)clk_dep_table->ucNumEntries;
+
+	for (i = 0; i < clk_dep_table->ucNumEntries; i++) {
+		clk_table->entries[i].vddInd =
+				clk_dep_table->entries[i].ucVddInd;
+		clk_table->entries[i].clk =
+				le32_to_cpu(clk_dep_table->entries[i].ulClk);
+	}
+
+	*pp_vega10_clk_dep_table = clk_table;
+
+	return 0;
+}
+
+static int get_mclk_voltage_dependency_table(
+		struct pp_hwmgr *hwmgr,
+		phm_ppt_v1_clock_voltage_dependency_table **pp_vega10_mclk_dep_table,
+		const ATOM_Vega10_MCLK_Dependency_Table *mclk_dep_table)
+{
+	uint32_t table_size, i;
+	phm_ppt_v1_clock_voltage_dependency_table *mclk_table;
+
+	PP_ASSERT_WITH_CODE(mclk_dep_table->ucNumEntries,
+		"Invalid PowerPlay Table!", return -1);
+
+	table_size = sizeof(uint32_t) +
+			sizeof(phm_ppt_v1_clock_voltage_dependency_record) *
+			mclk_dep_table->ucNumEntries;
+
+	mclk_table = (phm_ppt_v1_clock_voltage_dependency_table *)
+			kzalloc(table_size, GFP_KERNEL);
+
+	if (!mclk_table)
+		return -ENOMEM;
+
+	mclk_table->count = (uint32_t)mclk_dep_table->ucNumEntries;
+
+	for (i = 0; i < mclk_dep_table->ucNumEntries; i++) {
+		mclk_table->entries[i].vddInd =
+				mclk_dep_table->entries[i].ucVddInd;
+		mclk_table->entries[i].vddciInd =
+				mclk_dep_table->entries[i].ucVddciInd;
+		mclk_table->entries[i].mvddInd =
+				mclk_dep_table->entries[i].ucVddMemInd;
+		mclk_table->entries[i].clk =
+				le32_to_cpu(mclk_dep_table->entries[i].ulMemClk);
+	}
+
+	*pp_vega10_mclk_dep_table = mclk_table;
+
+	return 0;
+}
+
+static int get_gfxclk_voltage_dependency_table(
+		struct pp_hwmgr *hwmgr,
+		struct phm_ppt_v1_clock_voltage_dependency_table
+			**pp_vega10_clk_dep_table,
+		const ATOM_Vega10_GFXCLK_Dependency_Table *clk_dep_table)
+{
+	uint32_t table_size, i;
+	struct phm_ppt_v1_clock_voltage_dependency_table
+				*clk_table;
+
+	PP_ASSERT_WITH_CODE((clk_dep_table->ucNumEntries != 0),
+			"Invalid PowerPlay Table!", return -1);
+
+	table_size = sizeof(uint32_t) +
+			sizeof(phm_ppt_v1_clock_voltage_dependency_record) *
+			clk_dep_table->ucNumEntries;
+
+	clk_table = (struct phm_ppt_v1_clock_voltage_dependency_table *)
+			kzalloc(table_size, GFP_KERNEL);
+
+	if (!clk_table)
+		return -ENOMEM;
+
+	clk_table->count = clk_dep_table->ucNumEntries;
+
+	for (i = 0; i < clk_table->count; i++) {
+		clk_table->entries[i].vddInd =
+				clk_dep_table->entries[i].ucVddInd;
+		clk_table->entries[i].clk =
+				le32_to_cpu(clk_dep_table->entries[i].ulClk);
+		clk_table->entries[i].cks_enable =
+				(((le16_to_cpu(clk_dep_table->entries[i].usCKSVOffsetandDisable) &
+						0x8000) >> 15) == 0) ? 1 : 0;
+		clk_table->entries[i].cks_voffset =
+				(le16_to_cpu(clk_dep_table->entries[i].usCKSVOffsetandDisable) & 0x7F);
+		clk_table->entries[i].sclk_offset =
+				le16_to_cpu(clk_dep_table->entries[i].usAVFSOffset);
+	}
+
+	*pp_vega10_clk_dep_table = clk_table;
+
+	return 0;
+}
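usCKSVOffsetandDisable packs two values into one 16-bit field: bit 15 is the clock-stretcher disable flag (hence the `>> 15` shift) and the low seven bits are the voltage offset. The unpacking in isolation (field layout inferred from the masks used above; hypothetical struct and function names):

```c
#include <stdint.h>

struct cks_info {
	uint8_t enabled;	/* 1 when bit 15 (the disable bit) is clear */
	uint8_t voffset;	/* bits 0-6: clock-stretcher voltage offset */
};

static struct cks_info decode_cksv(uint16_t packed)
{
	struct cks_info info;

	info.enabled = (((packed & 0x8000) >> 15) == 0) ? 1 : 0;
	info.voffset = packed & 0x7F;
	return info;
}
```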
+
+static int get_dcefclk_voltage_dependency_table(
+		struct pp_hwmgr *hwmgr,
+		struct phm_ppt_v1_clock_voltage_dependency_table
+			**pp_vega10_clk_dep_table,
+		const ATOM_Vega10_DCEFCLK_Dependency_Table *clk_dep_table)
+{
+	uint32_t table_size, i;
+	struct phm_ppt_v1_clock_voltage_dependency_table
+				*clk_table;
+
+	PP_ASSERT_WITH_CODE((clk_dep_table->ucNumEntries != 0),
+			"Invalid PowerPlay Table!", return -1);
+
+	table_size = sizeof(uint32_t) +
+			sizeof(phm_ppt_v1_clock_voltage_dependency_record) *
+			clk_dep_table->ucNumEntries;
+
+	clk_table = (struct phm_ppt_v1_clock_voltage_dependency_table *)
+			kzalloc(table_size, GFP_KERNEL);
+
+	if (!clk_table)
+		return -ENOMEM;
+
+	clk_table->count = clk_dep_table->ucNumEntries;
+
+	for (i = 0; i < clk_table->count; i++) {
+		clk_table->entries[i].vddInd =
+				clk_dep_table->entries[i].ucVddInd;
+		clk_table->entries[i].clk =
+				le32_to_cpu(clk_dep_table->entries[i].ulClk);
+	}
+
+	*pp_vega10_clk_dep_table = clk_table;
+
+	return 0;
+}
+
+static int get_pcie_table(struct pp_hwmgr *hwmgr,
+		struct phm_ppt_v1_pcie_table **vega10_pcie_table,
+		const Vega10_PPTable_Generic_SubTable_Header *table)
+{
+	uint32_t table_size, i, pcie_count;
+	struct phm_ppt_v1_pcie_table *pcie_table;
+	struct phm_ppt_v2_information *table_info =
+			(struct phm_ppt_v2_information *)(hwmgr->pptable);
+	const ATOM_Vega10_PCIE_Table *atom_pcie_table =
+			(ATOM_Vega10_PCIE_Table *)table;
+
+	PP_ASSERT_WITH_CODE(atom_pcie_table->ucNumEntries,
+			"Invalid PowerPlay Table!",
+			return 0);
+
+	table_size = sizeof(uint32_t) +
+			sizeof(struct phm_ppt_v1_pcie_record) *
+			atom_pcie_table->ucNumEntries;
+
+	pcie_table = (struct phm_ppt_v1_pcie_table *)
+			kzalloc(table_size, GFP_KERNEL);
+
+	if (!pcie_table)
+		return -ENOMEM;
+
+	pcie_count = table_info->vdd_dep_on_sclk->count;
+	if (atom_pcie_table->ucNumEntries <= pcie_count)
+		pcie_count = atom_pcie_table->ucNumEntries;
+	else
+		pr_info("Number of PCIe entries exceeds the number of GFXCLK DPM levels! Disregarding the excess entries...\n");
+
+	pcie_table->count = pcie_count;
+
+	for (i = 0; i < pcie_count; i++) {
+		pcie_table->entries[i].gen_speed =
+				atom_pcie_table->entries[i].ucPCIEGenSpeed;
+		pcie_table->entries[i].lane_width =
+				atom_pcie_table->entries[i].ucPCIELaneWidth;
+		pcie_table->entries[i].pcie_sclk =
+				le32_to_cpu(atom_pcie_table->entries[i].ulLCLK);
+	}
+
+	*vega10_pcie_table = pcie_table;
+
+	return 0;
+}
+
+static int get_hard_limits(
+		struct pp_hwmgr *hwmgr,
+		struct phm_clock_and_voltage_limits *limits,
+		const ATOM_Vega10_Hard_Limit_Table *limit_table)
+{
+	PP_ASSERT_WITH_CODE(limit_table->ucNumEntries,
+			"Invalid PowerPlay Table!", return -1);
+
+	/* currently we always take entries[0] parameters */
+	limits->sclk = le32_to_cpu(limit_table->entries[0].ulSOCCLKLimit);
+	limits->mclk = le32_to_cpu(limit_table->entries[0].ulMCLKLimit);
+	limits->gfxclk = le32_to_cpu(limit_table->entries[0].ulGFXCLKLimit);
+	limits->vddc = le16_to_cpu(limit_table->entries[0].usVddcLimit);
+	limits->vddci = le16_to_cpu(limit_table->entries[0].usVddciLimit);
+	limits->vddmem = le16_to_cpu(limit_table->entries[0].usVddMemLimit);
+
+	return 0;
+}
+
+static int get_valid_clk(
+		struct pp_hwmgr *hwmgr,
+		struct phm_clock_array **clk_table,
+		const phm_ppt_v1_clock_voltage_dependency_table *clk_volt_pp_table)
+{
+	uint32_t table_size, i;
+	struct phm_clock_array *table;
+
+	PP_ASSERT_WITH_CODE(clk_volt_pp_table->count,
+			"Invalid PowerPlay Table!", return -1);
+
+	table_size = sizeof(uint32_t) +
+			sizeof(uint32_t) * clk_volt_pp_table->count;
+
+	table = kzalloc(table_size, GFP_KERNEL);
+
+	if (!table)
+		return -ENOMEM;
+
+	table->count = (uint32_t)clk_volt_pp_table->count;
+
+	for (i = 0; i < table->count; i++)
+		table->values[i] = (uint32_t)clk_volt_pp_table->entries[i].clk;
+
+	*clk_table = table;
+
+	return 0;
+}
+
+static int init_powerplay_extended_tables(
+		struct pp_hwmgr *hwmgr,
+		const ATOM_Vega10_POWERPLAYTABLE *powerplay_table)
+{
+	int result = 0;
+	struct phm_ppt_v2_information *pp_table_info =
+		(struct phm_ppt_v2_information *)(hwmgr->pptable);
+
+	const ATOM_Vega10_MM_Dependency_Table *mm_dependency_table =
+			(const ATOM_Vega10_MM_Dependency_Table *)
+			(((unsigned long) powerplay_table) +
+			le16_to_cpu(powerplay_table->usMMDependencyTableOffset));
+	const Vega10_PPTable_Generic_SubTable_Header *power_tune_table =
+			(const Vega10_PPTable_Generic_SubTable_Header *)
+			(((unsigned long) powerplay_table) +
+			le16_to_cpu(powerplay_table->usPowerTuneTableOffset));
+	const ATOM_Vega10_SOCCLK_Dependency_Table *socclk_dep_table =
+			(const ATOM_Vega10_SOCCLK_Dependency_Table *)
+			(((unsigned long) powerplay_table) +
+			le16_to_cpu(powerplay_table->usSocclkDependencyTableOffset));
+	const ATOM_Vega10_GFXCLK_Dependency_Table *gfxclk_dep_table =
+			(const ATOM_Vega10_GFXCLK_Dependency_Table *)
+			(((unsigned long) powerplay_table) +
+			le16_to_cpu(powerplay_table->usGfxclkDependencyTableOffset));
+	const ATOM_Vega10_DCEFCLK_Dependency_Table *dcefclk_dep_table =
+			(const ATOM_Vega10_DCEFCLK_Dependency_Table *)
+			(((unsigned long) powerplay_table) +
+			le16_to_cpu(powerplay_table->usDcefclkDependencyTableOffset));
+	const ATOM_Vega10_MCLK_Dependency_Table *mclk_dep_table =
+			(const ATOM_Vega10_MCLK_Dependency_Table *)
+			(((unsigned long) powerplay_table) +
+			le16_to_cpu(powerplay_table->usMclkDependencyTableOffset));
+	const ATOM_Vega10_Hard_Limit_Table *hard_limits =
+			(const ATOM_Vega10_Hard_Limit_Table *)
+			(((unsigned long) powerplay_table) +
+			le16_to_cpu(powerplay_table->usHardLimitTableOffset));
+	const Vega10_PPTable_Generic_SubTable_Header *pcie_table =
+			(const Vega10_PPTable_Generic_SubTable_Header *)
+			(((unsigned long) powerplay_table) +
+			le16_to_cpu(powerplay_table->usPCIETableOffset));
+	const ATOM_Vega10_PIXCLK_Dependency_Table *pixclk_dep_table =
+			(const ATOM_Vega10_PIXCLK_Dependency_Table *)
+			(((unsigned long) powerplay_table) +
+			le16_to_cpu(powerplay_table->usPixclkDependencyTableOffset));
+	const ATOM_Vega10_PHYCLK_Dependency_Table *phyclk_dep_table =
+			(const ATOM_Vega10_PHYCLK_Dependency_Table *)
+			(((unsigned long) powerplay_table) +
+			le16_to_cpu(powerplay_table->usPhyClkDependencyTableOffset));
+	const ATOM_Vega10_DISPCLK_Dependency_Table *dispclk_dep_table =
+			(const ATOM_Vega10_DISPCLK_Dependency_Table *)
+			(((unsigned long) powerplay_table) +
+			le16_to_cpu(powerplay_table->usDispClkDependencyTableOffset));
+
+	pp_table_info->vdd_dep_on_socclk = NULL;
+	pp_table_info->vdd_dep_on_sclk = NULL;
+	pp_table_info->vdd_dep_on_mclk = NULL;
+	pp_table_info->vdd_dep_on_dcefclk = NULL;
+	pp_table_info->mm_dep_table = NULL;
+	pp_table_info->tdp_table = NULL;
+	pp_table_info->vdd_dep_on_pixclk = NULL;
+	pp_table_info->vdd_dep_on_phyclk = NULL;
+	pp_table_info->vdd_dep_on_dispclk = NULL;
+
+	if (powerplay_table->usMMDependencyTableOffset)
+		result = get_mm_clock_voltage_table(hwmgr,
+				&pp_table_info->mm_dep_table,
+				mm_dependency_table);
+
+	if (!result && powerplay_table->usPowerTuneTableOffset)
+		result = get_tdp_table(hwmgr,
+				&pp_table_info->tdp_table,
+				power_tune_table);
+
+	if (!result && powerplay_table->usSocclkDependencyTableOffset)
+		result = get_socclk_voltage_dependency_table(hwmgr,
+				&pp_table_info->vdd_dep_on_socclk,
+				socclk_dep_table);
+
+	if (!result && powerplay_table->usGfxclkDependencyTableOffset)
+		result = get_gfxclk_voltage_dependency_table(hwmgr,
+				&pp_table_info->vdd_dep_on_sclk,
+				gfxclk_dep_table);
+
+	if (!result && powerplay_table->usPixclkDependencyTableOffset)
+		result = get_dcefclk_voltage_dependency_table(hwmgr,
+				&pp_table_info->vdd_dep_on_pixclk,
+				(const ATOM_Vega10_DCEFCLK_Dependency_Table*)
+				pixclk_dep_table);
+
+	if (!result && powerplay_table->usPhyClkDependencyTableOffset)
+		result = get_dcefclk_voltage_dependency_table(hwmgr,
+				&pp_table_info->vdd_dep_on_phyclk,
+				(const ATOM_Vega10_DCEFCLK_Dependency_Table *)
+				phyclk_dep_table);
+
+	if (!result && powerplay_table->usDispClkDependencyTableOffset)
+		result = get_dcefclk_voltage_dependency_table(hwmgr,
+				&pp_table_info->vdd_dep_on_dispclk,
+				(const ATOM_Vega10_DCEFCLK_Dependency_Table *)
+				dispclk_dep_table);
+
+	if (!result && powerplay_table->usDcefclkDependencyTableOffset)
+		result = get_dcefclk_voltage_dependency_table(hwmgr,
+				&pp_table_info->vdd_dep_on_dcefclk,
+				dcefclk_dep_table);
+
+	if (!result && powerplay_table->usMclkDependencyTableOffset)
+		result = get_mclk_voltage_dependency_table(hwmgr,
+				&pp_table_info->vdd_dep_on_mclk,
+				mclk_dep_table);
+
+	if (!result && powerplay_table->usPCIETableOffset)
+		result = get_pcie_table(hwmgr,
+				&pp_table_info->pcie_table,
+				pcie_table);
+
+	if (!result && powerplay_table->usHardLimitTableOffset)
+		result = get_hard_limits(hwmgr,
+				&pp_table_info->max_clock_voltage_on_dc,
+				hard_limits);
+
+	hwmgr->dyn_state.max_clock_voltage_on_dc.sclk =
+			pp_table_info->max_clock_voltage_on_dc.sclk;
+	hwmgr->dyn_state.max_clock_voltage_on_dc.mclk =
+			pp_table_info->max_clock_voltage_on_dc.mclk;
+	hwmgr->dyn_state.max_clock_voltage_on_dc.vddc =
+			pp_table_info->max_clock_voltage_on_dc.vddc;
+	hwmgr->dyn_state.max_clock_voltage_on_dc.vddci =
+			pp_table_info->max_clock_voltage_on_dc.vddci;
+
+	if (!result &&
+		pp_table_info->vdd_dep_on_socclk &&
+		pp_table_info->vdd_dep_on_socclk->count)
+		result = get_valid_clk(hwmgr,
+				&pp_table_info->valid_socclk_values,
+				pp_table_info->vdd_dep_on_socclk);
+
+	if (!result &&
+		pp_table_info->vdd_dep_on_sclk &&
+		pp_table_info->vdd_dep_on_sclk->count)
+		result = get_valid_clk(hwmgr,
+				&pp_table_info->valid_sclk_values,
+				pp_table_info->vdd_dep_on_sclk);
+
+	if (!result &&
+		pp_table_info->vdd_dep_on_dcefclk &&
+		pp_table_info->vdd_dep_on_dcefclk->count)
+		result = get_valid_clk(hwmgr,
+				&pp_table_info->valid_dcefclk_values,
+				pp_table_info->vdd_dep_on_dcefclk);
+
+	if (!result &&
+		pp_table_info->vdd_dep_on_mclk &&
+		pp_table_info->vdd_dep_on_mclk->count)
+		result = get_valid_clk(hwmgr,
+				&pp_table_info->valid_mclk_values,
+				pp_table_info->vdd_dep_on_mclk);
+
+	return result;
+}
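init_powerplay_extended_tables() locates every subtable the same way: the main table header stores a little-endian 16-bit byte offset, and the subtable pointer is the table base plus that offset, with an offset of zero meaning the subtable is absent. The addressing scheme in isolation (hypothetical helper, user-space sketch):

```c
#include <stdint.h>
#include <stddef.h>

/* Resolve a subtable pointer from a base pointer plus a byte offset,
 * as the powerplay parser does for every us*TableOffset field.
 * An offset of 0 means "subtable not present".  The offset is assumed
 * already converted to host order (le16_to_cpu() in the kernel). */
static const void *subtable_at(const void *base, uint16_t offset)
{
	if (!offset)
		return NULL;
	return (const uint8_t *)base + offset;
}
```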
+
+static int get_vddc_lookup_table(
+		struct pp_hwmgr	*hwmgr,
+		phm_ppt_v1_voltage_lookup_table	**lookup_table,
+		const ATOM_Vega10_Voltage_Lookup_Table *vddc_lookup_pp_tables,
+		uint32_t max_levels)
+{
+	uint32_t table_size, i;
+	phm_ppt_v1_voltage_lookup_table *table;
+
+	PP_ASSERT_WITH_CODE((vddc_lookup_pp_tables->ucNumEntries != 0),
+			"Invalid SOC_VDDC Lookup Table!", return -1);
+
+	table_size = sizeof(uint32_t) +
+			sizeof(phm_ppt_v1_voltage_lookup_record) * max_levels;
+
+	table = (phm_ppt_v1_voltage_lookup_table *)
+			kzalloc(table_size, GFP_KERNEL);
+
+	if (!table)
+		return -ENOMEM;
+
+	table->count = vddc_lookup_pp_tables->ucNumEntries;
+
+	for (i = 0; i < vddc_lookup_pp_tables->ucNumEntries; i++)
+		table->entries[i].us_vdd =
+				le16_to_cpu(vddc_lookup_pp_tables->entries[i].usVdd);
+
+	*lookup_table = table;
+
+	return 0;
+}
+
+static int init_dpm_2_parameters(
+		struct pp_hwmgr *hwmgr,
+		const ATOM_Vega10_POWERPLAYTABLE *powerplay_table)
+{
+	int result = 0;
+	struct phm_ppt_v2_information *pp_table_info =
+			(struct phm_ppt_v2_information *)(hwmgr->pptable);
+	uint32_t disable_power_control = 0;
+
+	pp_table_info->us_ulv_voltage_offset =
+		le16_to_cpu(powerplay_table->usUlvVoltageOffset);
+
+	pp_table_info->us_ulv_smnclk_did =
+			le16_to_cpu(powerplay_table->usUlvSmnclkDid);
+	pp_table_info->us_ulv_mp1clk_did =
+			le16_to_cpu(powerplay_table->usUlvMp1clkDid);
+	pp_table_info->us_ulv_gfxclk_bypass =
+			le16_to_cpu(powerplay_table->usUlvGfxclkBypass);
+	pp_table_info->us_gfxclk_slew_rate =
+			le16_to_cpu(powerplay_table->usGfxclkSlewRate);
+	pp_table_info->uc_gfx_dpm_voltage_mode  =
+			powerplay_table->ucGfxVoltageMode;
+	pp_table_info->uc_soc_dpm_voltage_mode  =
+			powerplay_table->ucSocVoltageMode;
+	pp_table_info->uc_uclk_dpm_voltage_mode =
+			powerplay_table->ucUclkVoltageMode;
+	pp_table_info->uc_uvd_dpm_voltage_mode  =
+			powerplay_table->ucUvdVoltageMode;
+	pp_table_info->uc_vce_dpm_voltage_mode  =
+			powerplay_table->ucVceVoltageMode;
+	pp_table_info->uc_mp0_dpm_voltage_mode  =
+			powerplay_table->ucMp0VoltageMode;
+	pp_table_info->uc_dcef_dpm_voltage_mode =
+			powerplay_table->ucDcefVoltageMode;
+
+	pp_table_info->ppm_parameter_table = NULL;
+	pp_table_info->vddc_lookup_table = NULL;
+	pp_table_info->vddmem_lookup_table = NULL;
+	pp_table_info->vddci_lookup_table = NULL;
+
+	/* TDP limits */
+	hwmgr->platform_descriptor.TDPODLimit =
+		le16_to_cpu(powerplay_table->usPowerControlLimit);
+	hwmgr->platform_descriptor.TDPAdjustment = 0;
+	hwmgr->platform_descriptor.VidAdjustment = 0;
+	hwmgr->platform_descriptor.VidAdjustmentPolarity = 0;
+	hwmgr->platform_descriptor.VidMinLimit = 0;
+	hwmgr->platform_descriptor.VidMaxLimit = 1500000;
+	hwmgr->platform_descriptor.VidStep = 6250;
+
+	disable_power_control = 0;
+	if (!disable_power_control) {
+		/* enable TDP overdrive (PowerControl) feature as well if supported */
+		if (hwmgr->platform_descriptor.TDPODLimit)
+			phm_cap_set(hwmgr->platform_descriptor.platformCaps,
+			PHM_PlatformCaps_PowerControl);
+	}
+
+	if (powerplay_table->usVddcLookupTableOffset) {
+		const ATOM_Vega10_Voltage_Lookup_Table *vddc_table =
+				(ATOM_Vega10_Voltage_Lookup_Table *)
+				(((unsigned long)powerplay_table) +
+				le16_to_cpu(powerplay_table->usVddcLookupTableOffset));
+		result = get_vddc_lookup_table(hwmgr,
+				&pp_table_info->vddc_lookup_table, vddc_table, 16);
+	}
+
+	if (powerplay_table->usVddmemLookupTableOffset) {
+		const ATOM_Vega10_Voltage_Lookup_Table *vdd_mem_table =
+				(ATOM_Vega10_Voltage_Lookup_Table *)
+				(((unsigned long)powerplay_table) +
+				le16_to_cpu(powerplay_table->usVddmemLookupTableOffset));
+		result = get_vddc_lookup_table(hwmgr,
+				&pp_table_info->vddmem_lookup_table, vdd_mem_table, 16);
+	}
+
+	if (powerplay_table->usVddciLookupTableOffset) {
+		const ATOM_Vega10_Voltage_Lookup_Table *vddci_table =
+				(ATOM_Vega10_Voltage_Lookup_Table *)
+				(((unsigned long)powerplay_table) +
+				le16_to_cpu(powerplay_table->usVddciLookupTableOffset));
+		result = get_vddc_lookup_table(hwmgr,
+				&pp_table_info->vddci_lookup_table, vddci_table, 16);
+	}
+
+	return result;
+}
+
+int vega10_pp_tables_initialize(struct pp_hwmgr *hwmgr)
+{
+	int result = 0;
+	const ATOM_Vega10_POWERPLAYTABLE *powerplay_table;
+
+	hwmgr->pptable = kzalloc(sizeof(struct phm_ppt_v2_information), GFP_KERNEL);
+
+	PP_ASSERT_WITH_CODE((NULL != hwmgr->pptable),
+			    "Failed to allocate hwmgr->pptable!", return -ENOMEM);
+
+	powerplay_table = get_powerplay_table(hwmgr);
+
+	PP_ASSERT_WITH_CODE((NULL != powerplay_table),
+		"Missing PowerPlay Table!", return -1);
+
+	result = check_powerplay_tables(hwmgr, powerplay_table);
+
+	PP_ASSERT_WITH_CODE((result == 0),
+			    "check_powerplay_tables failed", return result);
+
+	result = set_platform_caps(hwmgr,
+				   le32_to_cpu(powerplay_table->ulPlatformCaps));
+
+	PP_ASSERT_WITH_CODE((result == 0),
+			    "set_platform_caps failed", return result);
+
+	result = init_thermal_controller(hwmgr, powerplay_table);
+
+	PP_ASSERT_WITH_CODE((result == 0),
+			    "init_thermal_controller failed", return result);
+
+	result = init_over_drive_limits(hwmgr, powerplay_table);
+
+	PP_ASSERT_WITH_CODE((result == 0),
+			    "init_over_drive_limits failed", return result);
+
+	result = init_powerplay_extended_tables(hwmgr, powerplay_table);
+
+	PP_ASSERT_WITH_CODE((result == 0),
+			    "init_powerplay_extended_tables failed", return result);
+
+	result = init_dpm_2_parameters(hwmgr, powerplay_table);
+
+	PP_ASSERT_WITH_CODE((result == 0),
+			    "init_dpm_2_parameters failed", return result);
+
+	return result;
+}
+
+static int vega10_pp_tables_uninitialize(struct pp_hwmgr *hwmgr)
+{
+	int result = 0;
+	struct phm_ppt_v2_information *pp_table_info =
+			(struct phm_ppt_v2_information *)(hwmgr->pptable);
+
+	kfree(pp_table_info->vdd_dep_on_sclk);
+	pp_table_info->vdd_dep_on_sclk = NULL;
+
+	kfree(pp_table_info->vdd_dep_on_mclk);
+	pp_table_info->vdd_dep_on_mclk = NULL;
+
+	kfree(pp_table_info->valid_mclk_values);
+	pp_table_info->valid_mclk_values = NULL;
+
+	kfree(pp_table_info->valid_sclk_values);
+	pp_table_info->valid_sclk_values = NULL;
+
+	kfree(pp_table_info->vddc_lookup_table);
+	pp_table_info->vddc_lookup_table = NULL;
+
+	kfree(pp_table_info->vddmem_lookup_table);
+	pp_table_info->vddmem_lookup_table = NULL;
+
+	kfree(pp_table_info->vddci_lookup_table);
+	pp_table_info->vddci_lookup_table = NULL;
+
+	kfree(pp_table_info->ppm_parameter_table);
+	pp_table_info->ppm_parameter_table = NULL;
+
+	kfree(pp_table_info->mm_dep_table);
+	pp_table_info->mm_dep_table = NULL;
+
+	kfree(pp_table_info->cac_dtp_table);
+	pp_table_info->cac_dtp_table = NULL;
+
+	kfree(hwmgr->dyn_state.cac_dtp_table);
+	hwmgr->dyn_state.cac_dtp_table = NULL;
+
+	kfree(pp_table_info->tdp_table);
+	pp_table_info->tdp_table = NULL;
+
+	kfree(hwmgr->pptable);
+	hwmgr->pptable = NULL;
+
+	return result;
+}
+
+const struct pp_table_func vega10_pptable_funcs = {
+	.pptable_init = vega10_pp_tables_initialize,
+	.pptable_fini = vega10_pp_tables_uninitialize,
+};
+
+int vega10_get_number_of_powerplay_table_entries(struct pp_hwmgr *hwmgr)
+{
+	const ATOM_Vega10_State_Array *state_arrays;
+	const ATOM_Vega10_POWERPLAYTABLE *pp_table = get_powerplay_table(hwmgr);
+
+	PP_ASSERT_WITH_CODE((NULL != pp_table),
+			"Missing PowerPlay Table!", return -1);
+	PP_ASSERT_WITH_CODE((pp_table->sHeader.format_revision >=
+			ATOM_Vega10_TABLE_REVISION_VEGA10),
+			"Incorrect PowerPlay table revision!", return -1);
+
+	state_arrays = (ATOM_Vega10_State_Array *)(((unsigned long)pp_table) +
+			le16_to_cpu(pp_table->usStateArrayOffset));
+
+	return (uint32_t)(state_arrays->ucNumEntries);
+}
+
+static uint32_t make_classification_flags(struct pp_hwmgr *hwmgr,
+		uint16_t classification, uint16_t classification2)
+{
+	uint32_t result = 0;
+
+	if (classification & ATOM_PPLIB_CLASSIFICATION_BOOT)
+		result |= PP_StateClassificationFlag_Boot;
+
+	if (classification & ATOM_PPLIB_CLASSIFICATION_THERMAL)
+		result |= PP_StateClassificationFlag_Thermal;
+
+	if (classification & ATOM_PPLIB_CLASSIFICATION_LIMITEDPOWERSOURCE)
+		result |= PP_StateClassificationFlag_LimitedPowerSource;
+
+	if (classification & ATOM_PPLIB_CLASSIFICATION_REST)
+		result |= PP_StateClassificationFlag_Rest;
+
+	if (classification & ATOM_PPLIB_CLASSIFICATION_FORCED)
+		result |= PP_StateClassificationFlag_Forced;
+
+	if (classification & ATOM_PPLIB_CLASSIFICATION_ACPI)
+		result |= PP_StateClassificationFlag_ACPI;
+
+	if (classification2 & ATOM_PPLIB_CLASSIFICATION2_LIMITEDPOWERSOURCE_2)
+		result |= PP_StateClassificationFlag_LimitedPowerSource_2;
+
+	return result;
+}
+
+int vega10_get_powerplay_table_entry(struct pp_hwmgr *hwmgr,
+		uint32_t entry_index, struct pp_power_state *power_state,
+		int (*call_back_func)(struct pp_hwmgr *, void *,
+				struct pp_power_state *, void *, uint32_t))
+{
+	int result = 0;
+	const ATOM_Vega10_State_Array *state_arrays;
+	const ATOM_Vega10_State *state_entry;
+	const ATOM_Vega10_POWERPLAYTABLE *pp_table =
+			get_powerplay_table(hwmgr);
+
+	PP_ASSERT_WITH_CODE(pp_table, "Missing PowerPlay Table!",
+			return -1);
+	power_state->classification.bios_index = entry_index;
+
+	if (pp_table->sHeader.format_revision >=
+			ATOM_Vega10_TABLE_REVISION_VEGA10) {
+		state_arrays = (ATOM_Vega10_State_Array *)
+				(((unsigned long)pp_table) +
+				le16_to_cpu(pp_table->usStateArrayOffset));
+
+		PP_ASSERT_WITH_CODE(pp_table->usStateArrayOffset > 0,
+				"Invalid PowerPlay Table State Array Offset.",
+				return -1);
+		PP_ASSERT_WITH_CODE(state_arrays->ucNumEntries > 0,
+				"Invalid PowerPlay Table State Array.",
+				return -1);
+		PP_ASSERT_WITH_CODE((entry_index < state_arrays->ucNumEntries),
+				"Invalid PowerPlay Table State Array Entry.",
+				return -1);
+
+		state_entry = &(state_arrays->states[entry_index]);
+
+		result = call_back_func(hwmgr, (void *)state_entry, power_state,
+				(void *)pp_table,
+				make_classification_flags(hwmgr,
+					le16_to_cpu(state_entry->usClassification),
+					le16_to_cpu(state_entry->usClassification2)));
+	}
+
+	if (!result && (power_state->classification.flags &
+			PP_StateClassificationFlag_Boot))
+		result = hwmgr->hwmgr_func->patch_boot_state(hwmgr, &(power_state->hardware));
+
+	return result;
+}
diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_processpptables.h b/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_processpptables.h
new file mode 100644
index 0000000..995d133
--- /dev/null
+++ b/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_processpptables.h
@@ -0,0 +1,34 @@
+/*
+ * Copyright 2016 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+
+#ifndef VEGA10_PROCESSPPTABLES_H
+#define VEGA10_PROCESSPPTABLES_H
+
+#include "hwmgr.h"
+
+extern const struct pp_table_func vega10_pptable_funcs;
+extern int vega10_get_number_of_powerplay_table_entries(struct pp_hwmgr *hwmgr);
+extern int vega10_get_powerplay_table_entry(struct pp_hwmgr *hwmgr, uint32_t entry_index,
+		struct pp_power_state *power_state, int (*call_back_func)(struct pp_hwmgr *, void *,
+				struct pp_power_state *, void *, uint32_t));
+#endif
diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_thermal.c b/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_thermal.c
new file mode 100644
index 0000000..f4d77b6
--- /dev/null
+++ b/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_thermal.c
@@ -0,0 +1,761 @@
+/*
+ * Copyright 2016 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+
+#include "vega10_thermal.h"
+#include "vega10_hwmgr.h"
+#include "vega10_smumgr.h"
+#include "vega10_ppsmc.h"
+#include "vega10_inc.h"
+#include "pp_soc15.h"
+#include "pp_debug.h"
+
+static int vega10_get_current_rpm(struct pp_hwmgr *hwmgr, uint32_t *current_rpm)
+{
+	PP_ASSERT_WITH_CODE(!smum_send_msg_to_smc(hwmgr->smumgr,
+				PPSMC_MSG_GetCurrentRpm),
+			"Attempt to get current RPM from SMC Failed!",
+			return -1);
+	PP_ASSERT_WITH_CODE(!vega10_read_arg_from_smc(hwmgr->smumgr,
+			current_rpm),
+			"Attempt to read current RPM from SMC Failed!",
+			return -1);
+	return 0;
+}
+
+int vega10_fan_ctrl_get_fan_speed_info(struct pp_hwmgr *hwmgr,
+		struct phm_fan_speed_info *fan_speed_info)
+{
+
+	if (hwmgr->thermal_controller.fanInfo.bNoFan)
+		return 0;
+
+	fan_speed_info->supports_percent_read = true;
+	fan_speed_info->supports_percent_write = true;
+	fan_speed_info->min_percent = 0;
+	fan_speed_info->max_percent = 100;
+
+	if (phm_cap_enabled(hwmgr->platform_descriptor.platformCaps,
+			PHM_PlatformCaps_FanSpeedInTableIsRPM) &&
+		hwmgr->thermal_controller.fanInfo.
+		ucTachometerPulsesPerRevolution) {
+		fan_speed_info->supports_rpm_read = true;
+		fan_speed_info->supports_rpm_write = true;
+		fan_speed_info->min_rpm =
+				hwmgr->thermal_controller.fanInfo.ulMinRPM;
+		fan_speed_info->max_rpm =
+				hwmgr->thermal_controller.fanInfo.ulMaxRPM;
+	} else {
+		fan_speed_info->min_rpm = 0;
+		fan_speed_info->max_rpm = 0;
+	}
+
+	return 0;
+}
+
+int vega10_fan_ctrl_get_fan_speed_percent(struct pp_hwmgr *hwmgr,
+		uint32_t *speed)
+{
+	uint32_t current_rpm;
+	uint32_t percent = 0;
+
+	if (hwmgr->thermal_controller.fanInfo.bNoFan)
+		return 0;
+
+	if (vega10_get_current_rpm(hwmgr, &current_rpm))
+		return -1;
+
+	if (hwmgr->thermal_controller.
+			advanceFanControlParameters.usMaxFanRPM != 0)
+		percent = current_rpm * 100 /
+			hwmgr->thermal_controller.
+			advanceFanControlParameters.usMaxFanRPM;
+
+	*speed = percent > 100 ? 100 : percent;
+
+	return 0;
+}
+
+int vega10_fan_ctrl_get_fan_speed_rpm(struct pp_hwmgr *hwmgr, uint32_t *speed)
+{
+	struct vega10_hwmgr *data = (struct vega10_hwmgr *)(hwmgr->backend);
+	uint32_t tach_period;
+	uint32_t crystal_clock_freq;
+	int result = 0;
+
+	if (hwmgr->thermal_controller.fanInfo.bNoFan)
+		return -1;
+
+	if (data->smu_features[GNLD_FAN_CONTROL].supported) {
+		result = vega10_get_current_rpm(hwmgr, speed);
+	} else {
+		uint32_t reg = soc15_get_register_offset(THM_HWID, 0,
+				mmCG_TACH_STATUS_BASE_IDX, mmCG_TACH_STATUS);
+		tach_period = (cgs_read_register(hwmgr->device,
+				reg) & CG_TACH_STATUS__TACH_PERIOD_MASK) >>
+				CG_TACH_STATUS__TACH_PERIOD__SHIFT;
+
+		if (tach_period == 0)
+			return -EINVAL;
+
+		crystal_clock_freq = smu7_get_xclk(hwmgr);
+
+		*speed = 60 * crystal_clock_freq * 10000 / tach_period;
+	}
+
+	return result;
+}
+
+/**
+* Set Fan Speed Control to static mode,
+* so that the user can decide what speed to use.
+* @param    hwmgr  the address of the powerplay hardware manager.
+* @param    mode   the fan control mode (0: default, 1: by percent, 5: by RPM)
+* @exception Should always succeed.
+*/
+int vega10_fan_ctrl_set_static_mode(struct pp_hwmgr *hwmgr, uint32_t mode)
+{
+	uint32_t reg;
+
+	reg = soc15_get_register_offset(THM_HWID, 0,
+			mmCG_FDO_CTRL2_BASE_IDX, mmCG_FDO_CTRL2);
+
+	if (hwmgr->fan_ctrl_is_in_default_mode) {
+		hwmgr->fan_ctrl_default_mode =
+				(cgs_read_register(hwmgr->device, reg) &
+				CG_FDO_CTRL2__FDO_PWM_MODE_MASK) >>
+				CG_FDO_CTRL2__FDO_PWM_MODE__SHIFT;
+		hwmgr->tmin = (cgs_read_register(hwmgr->device, reg) &
+				CG_FDO_CTRL2__TMIN_MASK) >>
+				CG_FDO_CTRL2__TMIN__SHIFT;
+		hwmgr->fan_ctrl_is_in_default_mode = false;
+	}
+
+	cgs_write_register(hwmgr->device, reg,
+			(cgs_read_register(hwmgr->device, reg) &
+			~CG_FDO_CTRL2__TMIN_MASK) |
+			(0 << CG_FDO_CTRL2__TMIN__SHIFT));
+	cgs_write_register(hwmgr->device, reg,
+			(cgs_read_register(hwmgr->device, reg) &
+			~CG_FDO_CTRL2__FDO_PWM_MODE_MASK) |
+			(mode << CG_FDO_CTRL2__FDO_PWM_MODE__SHIFT));
+
+	return 0;
+}
+
+/**
+* Reset Fan Speed Control to default mode.
+* @param    hwmgr  the address of the powerplay hardware manager.
+* @exception Should always succeed.
+*/
+int vega10_fan_ctrl_set_default_mode(struct pp_hwmgr *hwmgr)
+{
+	uint32_t reg;
+
+	reg = soc15_get_register_offset(THM_HWID, 0,
+			mmCG_FDO_CTRL2_BASE_IDX, mmCG_FDO_CTRL2);
+
+	if (!hwmgr->fan_ctrl_is_in_default_mode) {
+		cgs_write_register(hwmgr->device, reg,
+				(cgs_read_register(hwmgr->device, reg) &
+				~CG_FDO_CTRL2__FDO_PWM_MODE_MASK) |
+				(hwmgr->fan_ctrl_default_mode <<
+				CG_FDO_CTRL2__FDO_PWM_MODE__SHIFT));
+		cgs_write_register(hwmgr->device, reg,
+				(cgs_read_register(hwmgr->device, reg) &
+				~CG_FDO_CTRL2__TMIN_MASK) |
+				(hwmgr->tmin << CG_FDO_CTRL2__TMIN__SHIFT));
+		hwmgr->fan_ctrl_is_in_default_mode = true;
+	}
+
+	return 0;
+}
+
+/**
+ * @fn vega10_enable_fan_control_feature
+ * @brief Enables the SMC Fan Control Feature.
+ *
+ * @param    hwmgr - the address of the powerplay hardware manager.
+ * @return   0 on success. -1 otherwise.
+ */
+static int vega10_enable_fan_control_feature(struct pp_hwmgr *hwmgr)
+{
+	struct vega10_hwmgr *data = (struct vega10_hwmgr *)(hwmgr->backend);
+
+	if (data->smu_features[GNLD_FAN_CONTROL].supported) {
+		PP_ASSERT_WITH_CODE(!vega10_enable_smc_features(
+				hwmgr->smumgr, true,
+				data->smu_features[GNLD_FAN_CONTROL].
+				smu_feature_bitmap),
+				"Attempt to Enable FAN CONTROL feature Failed!",
+				return -1);
+		data->smu_features[GNLD_FAN_CONTROL].enabled = true;
+	}
+
+	return 0;
+}
+
+static int vega10_disable_fan_control_feature(struct pp_hwmgr *hwmgr)
+{
+	struct vega10_hwmgr *data = (struct vega10_hwmgr *)(hwmgr->backend);
+
+	if (data->smu_features[GNLD_FAN_CONTROL].supported) {
+		PP_ASSERT_WITH_CODE(!vega10_enable_smc_features(
+				hwmgr->smumgr, false,
+				data->smu_features[GNLD_FAN_CONTROL].
+				smu_feature_bitmap),
+				"Attempt to Disable FAN CONTROL feature Failed!",
+				return -1);
+		data->smu_features[GNLD_FAN_CONTROL].enabled = false;
+	}
+
+	return 0;
+}
+
+int vega10_fan_ctrl_start_smc_fan_control(struct pp_hwmgr *hwmgr)
+{
+	if (hwmgr->thermal_controller.fanInfo.bNoFan)
+		return -1;
+
+	PP_ASSERT_WITH_CODE(!vega10_enable_fan_control_feature(hwmgr),
+			"Attempt to Enable SMC FAN CONTROL Feature Failed!",
+			return -1);
+
+	return 0;
+}
+
+
+int vega10_fan_ctrl_stop_smc_fan_control(struct pp_hwmgr *hwmgr)
+{
+	struct vega10_hwmgr *data = (struct vega10_hwmgr *)(hwmgr->backend);
+
+	if (hwmgr->thermal_controller.fanInfo.bNoFan)
+		return -1;
+
+	if (data->smu_features[GNLD_FAN_CONTROL].supported) {
+		PP_ASSERT_WITH_CODE(!vega10_disable_fan_control_feature(hwmgr),
+				"Attempt to Disable SMC FAN CONTROL Feature Failed!",
+				return -1);
+	}
+	return 0;
+}
+
+/**
+* Set Fan Speed in percent.
+* @param    hwmgr  the address of the powerplay hardware manager.
+* @param    speed is the percentage value (0% - 100%) to be set.
+* @exception Fails if the 100% duty-cycle reading (FMAX_DUTY100) is 0.
+*/
+int vega10_fan_ctrl_set_fan_speed_percent(struct pp_hwmgr *hwmgr,
+		uint32_t speed)
+{
+	uint32_t duty100;
+	uint32_t duty;
+	uint64_t tmp64;
+	uint32_t reg;
+
+	if (hwmgr->thermal_controller.fanInfo.bNoFan)
+		return 0;
+
+	if (speed > 100)
+		speed = 100;
+
+	if (phm_cap_enabled(hwmgr->platform_descriptor.platformCaps,
+			PHM_PlatformCaps_MicrocodeFanControl))
+		vega10_fan_ctrl_stop_smc_fan_control(hwmgr);
+
+	reg = soc15_get_register_offset(THM_HWID, 0,
+			mmCG_FDO_CTRL1_BASE_IDX, mmCG_FDO_CTRL1);
+
+	duty100 = (cgs_read_register(hwmgr->device, reg) &
+			CG_FDO_CTRL1__FMAX_DUTY100_MASK) >>
+			CG_FDO_CTRL1__FMAX_DUTY100__SHIFT;
+
+	if (duty100 == 0)
+		return -EINVAL;
+
+	tmp64 = (uint64_t)speed * duty100;
+	do_div(tmp64, 100);
+	duty = (uint32_t)tmp64;
+
+	reg = soc15_get_register_offset(THM_HWID, 0,
+			mmCG_FDO_CTRL0_BASE_IDX, mmCG_FDO_CTRL0);
+	cgs_write_register(hwmgr->device, reg,
+			(cgs_read_register(hwmgr->device, reg) &
+			~CG_FDO_CTRL0__FDO_STATIC_DUTY_MASK) |
+			(duty << CG_FDO_CTRL0__FDO_STATIC_DUTY__SHIFT));
+
+	return vega10_fan_ctrl_set_static_mode(hwmgr, FDO_PWM_MODE_STATIC);
+}
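The percent-to-duty conversion above can be modeled outside the driver; a minimal sketch of the arithmetic in vega10_fan_ctrl_set_fan_speed_percent() (the helper name is made up for illustration; the register field names in the comments come from the code above):

```python
def pwm_duty_from_percent(speed_percent, duty100):
    """Model of the percent -> FDO_STATIC_DUTY conversion.

    duty100 is the FMAX_DUTY100 field read from CG_FDO_CTRL1 and must be
    non-zero; speed is clamped to 100%, mirroring the driver.
    """
    if duty100 == 0:
        raise ValueError("duty100 must be non-zero")  # driver returns -EINVAL
    speed = min(speed_percent, 100)
    # 64-bit multiply followed by do_div(tmp64, 100) in the driver
    return speed * duty100 // 100
```

With an 8-bit duty100 of 255, a 50% request lands on duty 127, and over-range requests saturate at duty100.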
+
+/**
+* Reset Fan Speed to default.
+* @param    hwmgr  the address of the powerplay hardware manager.
+* @exception Always succeeds.
+*/
+int vega10_fan_ctrl_reset_fan_speed_to_default(struct pp_hwmgr *hwmgr)
+{
+	int result;
+
+	if (hwmgr->thermal_controller.fanInfo.bNoFan)
+		return 0;
+
+	if (phm_cap_enabled(hwmgr->platform_descriptor.platformCaps,
+			PHM_PlatformCaps_MicrocodeFanControl)) {
+		result = vega10_fan_ctrl_set_static_mode(hwmgr,
+				FDO_PWM_MODE_STATIC);
+		if (!result)
+			result = vega10_fan_ctrl_start_smc_fan_control(hwmgr);
+	} else {
+		result = vega10_fan_ctrl_set_default_mode(hwmgr);
+	}
+
+	return result;
+}
+
+/**
+* Set Fan Speed in RPM.
+* @param    hwmgr  the address of the powerplay hardware manager.
+* @param    speed is the RPM value (min - max) to be set.
+* @exception Fails if the speed does not lie between min and max.
+*/
+int vega10_fan_ctrl_set_fan_speed_rpm(struct pp_hwmgr *hwmgr, uint32_t speed)
+{
+	uint32_t tach_period;
+	uint32_t crystal_clock_freq;
+	int result = 0;
+	uint32_t reg;
+
+	if (hwmgr->thermal_controller.fanInfo.bNoFan ||
+			(speed < hwmgr->thermal_controller.fanInfo.ulMinRPM) ||
+			(speed > hwmgr->thermal_controller.fanInfo.ulMaxRPM))
+		return -1;
+
+	if (phm_cap_enabled(hwmgr->platform_descriptor.platformCaps,
+			PHM_PlatformCaps_MicrocodeFanControl))
+		result = vega10_fan_ctrl_stop_smc_fan_control(hwmgr);
+
+	if (!result) {
+		crystal_clock_freq = smu7_get_xclk(hwmgr);
+		tach_period = 60 * crystal_clock_freq * 10000 / (8 * speed);
+		reg = soc15_get_register_offset(THM_HWID, 0,
+				mmCG_TACH_STATUS_BASE_IDX, mmCG_TACH_STATUS);
+		cgs_write_register(hwmgr->device, reg,
+				(cgs_read_register(hwmgr->device, reg) &
+				~CG_TACH_STATUS__TACH_PERIOD_MASK) |
+				(tach_period << CG_TACH_STATUS__TACH_PERIOD__SHIFT));
+	}
+	return vega10_fan_ctrl_set_static_mode(hwmgr, FDO_PWM_MODE_STATIC_RPM);
+}
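The tachometer arithmetic in the read path (vega10_fan_ctrl_get_fan_speed_rpm) and the write path above are inverses up to the factor of 8 (tach edges per revolution). A sketch of both conversions, with the unit scaling of smu7_get_xclk() taken as an assumption and the helper names made up for illustration:

```python
def fan_rpm_from_tach_period(xclk, tach_period):
    """Read path: speed = 60 * xclk * 10000 / tach_period.

    xclk is the crystal clock value returned by smu7_get_xclk() (its unit
    scaling is assumed here); tach_period is the CG_TACH_STATUS.TACH_PERIOD
    field and must be non-zero.
    """
    if tach_period == 0:
        raise ValueError("tach_period must be non-zero")  # driver: -EINVAL
    return 60 * xclk * 10000 // tach_period

def tach_period_from_rpm(xclk, rpm):
    """Write path: tach_period = 60 * xclk * 10000 / (8 * rpm)."""
    return 60 * xclk * 10000 // (8 * rpm)
```

Note the asymmetry: only the write path divides by 8, so a round trip through both helpers scales the RPM by 8.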
+
+/**
+* Reads the remote temperature from the Vega10 thermal controller.
+*
+* @param    hwmgr The address of the hardware manager.
+*/
+int vega10_thermal_get_temperature(struct pp_hwmgr *hwmgr)
+{
+	int temp;
+	uint32_t reg;
+
+	reg = soc15_get_register_offset(THM_HWID, 0,
+			mmCG_MULT_THERMAL_STATUS_BASE_IDX, mmCG_MULT_THERMAL_STATUS);
+
+	temp = cgs_read_register(hwmgr->device, reg);
+
+	temp = (temp & CG_MULT_THERMAL_STATUS__CTF_TEMP_MASK) >>
+			CG_MULT_THERMAL_STATUS__CTF_TEMP__SHIFT;
+
+	/* Bit 9 means the reading is lower than the lowest usable value. */
+	if (temp & 0x200)
+		temp = VEGA10_THERMAL_MAXIMUM_TEMP_READING;
+	else
+		temp = temp & 0x1ff;
+
+	temp *= PP_TEMPERATURE_UNITS_PER_CENTIGRADES;
+
+	return temp;
+}
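The CTF_TEMP decode above can be modeled as follows; PP_TEMPERATURE_UNITS_PER_CENTIGRADES is assumed to be 1000 (millidegrees) and the helper name is hypothetical:

```python
VEGA10_THERMAL_MAXIMUM_TEMP_READING = 255
PP_TEMPERATURE_UNITS_PER_CENTIGRADES = 1000  # assumed millidegree scaling

def decode_ctf_temp(field):
    """Decode CG_MULT_THERMAL_STATUS.CTF_TEMP as in
    vega10_thermal_get_temperature(): bit 9 flags a reading below the
    lowest usable value, which the driver clamps to the maximum reading;
    otherwise the low 9 bits hold the temperature in degrees C."""
    if field & 0x200:
        temp = VEGA10_THERMAL_MAXIMUM_TEMP_READING
    else:
        temp = field & 0x1ff
    return temp * PP_TEMPERATURE_UNITS_PER_CENTIGRADES
```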
+
+/**
+* Set the requested temperature range for high and low alert signals
+*
+* @param    hwmgr The address of the hardware manager.
+* @param    range Temperature range to be programmed for
+*           high and low alert signals
+* @exception PP_Result_BadInput if the input data is not valid.
+*/
+static int vega10_thermal_set_temperature_range(struct pp_hwmgr *hwmgr,
+		struct PP_TemperatureRange *range)
+{
+	uint32_t low = VEGA10_THERMAL_MINIMUM_ALERT_TEMP *
+			PP_TEMPERATURE_UNITS_PER_CENTIGRADES;
+	uint32_t high = VEGA10_THERMAL_MAXIMUM_ALERT_TEMP *
+			PP_TEMPERATURE_UNITS_PER_CENTIGRADES;
+	uint32_t val, reg;
+
+	if (low < range->min)
+		low = range->min;
+	if (high > range->max)
+		high = range->max;
+
+	if (low > high)
+		return -EINVAL;
+
+	reg = soc15_get_register_offset(THM_HWID, 0,
+			mmTHM_THERMAL_INT_CTRL_BASE_IDX, mmTHM_THERMAL_INT_CTRL);
+
+	val = cgs_read_register(hwmgr->device, reg);
+	val &= ~(THM_THERMAL_INT_CTRL__DIG_THERM_INTH_MASK);
+	val |= (high / PP_TEMPERATURE_UNITS_PER_CENTIGRADES) <<
+			THM_THERMAL_INT_CTRL__DIG_THERM_INTH__SHIFT;
+	val &= ~(THM_THERMAL_INT_CTRL__DIG_THERM_INTL_MASK);
+	val |= (low / PP_TEMPERATURE_UNITS_PER_CENTIGRADES) <<
+			THM_THERMAL_INT_CTRL__DIG_THERM_INTL__SHIFT;
+	cgs_write_register(hwmgr->device, reg, val);
+
+	reg = soc15_get_register_offset(THM_HWID, 0,
+			mmTHM_TCON_HTC_BASE_IDX, mmTHM_TCON_HTC);
+
+	val = cgs_read_register(hwmgr->device, reg);
+	val &= ~(THM_TCON_HTC__HTC_TMP_LMT_MASK);
+	val |= (high / PP_TEMPERATURE_UNITS_PER_CENTIGRADES) <<
+			THM_TCON_HTC__HTC_TMP_LMT__SHIFT;
+	cgs_write_register(hwmgr->device, reg, val);
+
+	return 0;
+}
+
+/**
+* Programs thermal controller one-time setting registers
+*
+* @param    hwmgr The address of the hardware manager.
+*/
+static int vega10_thermal_initialize(struct pp_hwmgr *hwmgr)
+{
+	uint32_t reg;
+
+	if (hwmgr->thermal_controller.fanInfo.ucTachometerPulsesPerRevolution) {
+		reg = soc15_get_register_offset(THM_HWID, 0,
+				mmCG_TACH_CTRL_BASE_IDX, mmCG_TACH_CTRL);
+		cgs_write_register(hwmgr->device, reg,
+				(cgs_read_register(hwmgr->device, reg) &
+				~CG_TACH_CTRL__EDGE_PER_REV_MASK) |
+				((hwmgr->thermal_controller.fanInfo.
+				ucTachometerPulsesPerRevolution - 1) <<
+				CG_TACH_CTRL__EDGE_PER_REV__SHIFT));
+	}
+
+	reg = soc15_get_register_offset(THM_HWID, 0,
+			mmCG_FDO_CTRL2_BASE_IDX, mmCG_FDO_CTRL2);
+	cgs_write_register(hwmgr->device, reg,
+			(cgs_read_register(hwmgr->device, reg) &
+			~CG_FDO_CTRL2__TACH_PWM_RESP_RATE_MASK) |
+			(0x28 << CG_FDO_CTRL2__TACH_PWM_RESP_RATE__SHIFT));
+
+	return 0;
+}
+
+/**
+* Enable thermal alerts on the Vega10 thermal controller.
+*
+* @param    hwmgr The address of the hardware manager.
+*/
+static int vega10_thermal_enable_alert(struct pp_hwmgr *hwmgr)
+{
+	struct vega10_hwmgr *data = (struct vega10_hwmgr *)(hwmgr->backend);
+
+	if (data->smu_features[GNLD_FW_CTF].supported) {
+		if (data->smu_features[GNLD_FW_CTF].enabled)
+			pr_info("[Thermal_EnableAlert] FW CTF Already Enabled!\n");
+	}
+
+	PP_ASSERT_WITH_CODE(!vega10_enable_smc_features(hwmgr->smumgr,
+			true,
+			data->smu_features[GNLD_FW_CTF].smu_feature_bitmap),
+			"Attempt to Enable FW CTF feature Failed!",
+			return -1);
+	data->smu_features[GNLD_FW_CTF].enabled = true;
+	return 0;
+}
+
+/**
+* Disable thermal alerts on the Vega10 thermal controller.
+* @param    hwmgr The address of the hardware manager.
+*/
+static int vega10_thermal_disable_alert(struct pp_hwmgr *hwmgr)
+{
+	struct vega10_hwmgr *data = (struct vega10_hwmgr *)(hwmgr->backend);
+
+	if (data->smu_features[GNLD_FW_CTF].supported) {
+		if (!data->smu_features[GNLD_FW_CTF].enabled)
+			pr_info("[Thermal_DisableAlert] FW CTF Already disabled!\n");
+	}
+
+	PP_ASSERT_WITH_CODE(!vega10_enable_smc_features(hwmgr->smumgr,
+			false,
+			data->smu_features[GNLD_FW_CTF].smu_feature_bitmap),
+			"Attempt to disable FW CTF feature Failed!",
+			return -1);
+	data->smu_features[GNLD_FW_CTF].enabled = false;
+	return 0;
+}
+
+/**
+* Uninitialize the thermal controller.
+* Currently just disables alerts.
+* @param    hwmgr The address of the hardware manager.
+*/
+int vega10_thermal_stop_thermal_controller(struct pp_hwmgr *hwmgr)
+{
+	int result = vega10_thermal_disable_alert(hwmgr);
+
+	if (!hwmgr->thermal_controller.fanInfo.bNoFan)
+		vega10_fan_ctrl_set_default_mode(hwmgr);
+
+	return result;
+}
+
+/**
+* Set up the fan table to control the fan using the SMC.
+* @param    hwmgr  the address of the powerplay hardware manager.
+* @param    input the pointer to input data
+* @param    output the pointer to output data
+* @param    storage the pointer to temporary storage
+* @param    result the last failure code
+* @return   result of copying the fan table to the SMC
+*/
+int tf_vega10_thermal_setup_fan_table(struct pp_hwmgr *hwmgr,
+		void *input, void *output, void *storage, int result)
+{
+	int ret;
+	struct vega10_hwmgr *data = (struct vega10_hwmgr *)(hwmgr->backend);
+	PPTable_t *table = &(data->smc_state_table.pp_table);
+
+	if (!data->smu_features[GNLD_FAN_CONTROL].supported)
+		return 0;
+
+	table->FanMaximumRpm = (uint16_t)hwmgr->thermal_controller.
+			advanceFanControlParameters.usMaxFanRPM;
+	table->FanThrottlingRpm = hwmgr->thermal_controller.
+			advanceFanControlParameters.usFanRPMMaxLimit;
+	table->FanAcousticLimitRpm = (uint16_t)(hwmgr->thermal_controller.
+			advanceFanControlParameters.ulMinFanSCLKAcousticLimit);
+	table->FanTargetTemperature = hwmgr->thermal_controller.
+			advanceFanControlParameters.usTMax;
+	table->FanPwmMin = hwmgr->thermal_controller.
+			advanceFanControlParameters.usPWMMin * 255 / 100;
+	table->FanTargetGfxclk = (uint16_t)(hwmgr->thermal_controller.
+			advanceFanControlParameters.ulTargetGfxClk);
+	table->FanGainEdge = hwmgr->thermal_controller.
+			advanceFanControlParameters.usFanGainEdge;
+	table->FanGainHotspot = hwmgr->thermal_controller.
+			advanceFanControlParameters.usFanGainHotspot;
+	table->FanGainLiquid = hwmgr->thermal_controller.
+			advanceFanControlParameters.usFanGainLiquid;
+	table->FanGainVrVddc = hwmgr->thermal_controller.
+			advanceFanControlParameters.usFanGainVrVddc;
+	table->FanGainVrMvdd = hwmgr->thermal_controller.
+			advanceFanControlParameters.usFanGainVrMvdd;
+	table->FanGainPlx = hwmgr->thermal_controller.
+			advanceFanControlParameters.usFanGainPlx;
+	table->FanGainHbm = hwmgr->thermal_controller.
+			advanceFanControlParameters.usFanGainHbm;
+	table->FanZeroRpmEnable = hwmgr->thermal_controller.
+			advanceFanControlParameters.ucEnableZeroRPM;
+	table->FanStopTemp = hwmgr->thermal_controller.
+			advanceFanControlParameters.usZeroRPMStopTemperature;
+	table->FanStartTemp = hwmgr->thermal_controller.
+			advanceFanControlParameters.usZeroRPMStartTemperature;
+
+	ret = vega10_copy_table_to_smc(hwmgr->smumgr,
+			(uint8_t *)(&(data->smc_state_table.pp_table)), PPTABLE);
+	if (ret)
+		pr_info("Failed to update Fan Control Table in PPTable!\n");
+
+	return ret;
+}
+
+/**
+* Start the fan control on the SMC.
+* @param    hwmgr  the address of the powerplay hardware manager.
+* @param    input the pointer to input data
+* @param    output the pointer to output data
+* @param    storage the pointer to temporary storage
+* @param    result the last failure code
+* @return   always 0
+*/
+int tf_vega10_thermal_start_smc_fan_control(struct pp_hwmgr *hwmgr,
+		void *input, void *output, void *storage, int result)
+{
+	/* If the fan table setup has failed, we could have disabled
+	 * PHM_PlatformCaps_MicrocodeFanControl even after this function
+	 * was included in the table. Make sure that we still think
+	 * controlling the fan is OK.
+	 */
+	if (phm_cap_enabled(hwmgr->platform_descriptor.platformCaps,
+			PHM_PlatformCaps_MicrocodeFanControl)) {
+		vega10_fan_ctrl_start_smc_fan_control(hwmgr);
+		vega10_fan_ctrl_set_static_mode(hwmgr, FDO_PWM_MODE_STATIC);
+	}
+
+	return 0;
+}
+
+/**
+* Set temperature range for high and low alerts
+* @param    hwmgr  the address of the powerplay hardware manager.
+* @param    input the pointer to input data
+* @param    output the pointer to output data
+* @param    storage the pointer to temporary storage
+* @param    result the last failure code
+* @return   result from set temperature range routine
+*/
+int tf_vega10_thermal_set_temperature_range(struct pp_hwmgr *hwmgr,
+		void *input, void *output, void *storage, int result)
+{
+	struct PP_TemperatureRange *range = (struct PP_TemperatureRange *)input;
+
+	if (range == NULL)
+		return -EINVAL;
+
+	return vega10_thermal_set_temperature_range(hwmgr, range);
+}
+
+/**
+* Programs one-time setting registers
+* @param    hwmgr  the address of the powerplay hardware manager.
+* @param    input the pointer to input data
+* @param    output the pointer to output data
+* @param    storage the pointer to temporary storage
+* @param    result the last failure code
+* @return   result from initialize thermal controller routine
+*/
+int tf_vega10_thermal_initialize(struct pp_hwmgr *hwmgr,
+		void *input, void *output, void *storage, int result)
+{
+	return vega10_thermal_initialize(hwmgr);
+}
+
+/**
+* Enable high and low alerts
+* @param    hwmgr  the address of the powerplay hardware manager.
+* @param    input the pointer to input data
+* @param    output the pointer to output data
+* @param    storage the pointer to temporary storage
+* @param    result the last failure code
+* @return   result from enable alert routine
+*/
+int tf_vega10_thermal_enable_alert(struct pp_hwmgr *hwmgr,
+		void *input, void *output, void *storage, int result)
+{
+	return vega10_thermal_enable_alert(hwmgr);
+}
+
+/**
+* Disable high and low alerts
+* @param    hwmgr  the address of the powerplay hardware manager.
+* @param    input the pointer to input data
+* @param    output the pointer to output data
+* @param    storage the pointer to temporary storage
+* @param    result the last failure code
+* @return   result from disable alert routine
+*/
+static int tf_vega10_thermal_disable_alert(struct pp_hwmgr *hwmgr,
+		void *input, void *output, void *storage, int result)
+{
+	return vega10_thermal_disable_alert(hwmgr);
+}
+
+static struct phm_master_table_item
+vega10_thermal_start_thermal_controller_master_list[] = {
+	{NULL, tf_vega10_thermal_initialize},
+	{NULL, tf_vega10_thermal_set_temperature_range},
+	{NULL, tf_vega10_thermal_enable_alert},
+/* We should restrict performance levels to low before we halt the SMC.
+ * On the other hand we are still in boot state when we do this
+ * so it would be pointless.
+ * If this assumption changes we have to revisit this table.
+ */
+	{NULL, tf_vega10_thermal_setup_fan_table},
+	{NULL, tf_vega10_thermal_start_smc_fan_control},
+	{NULL, NULL}
+};
+
+static struct phm_master_table_header
+vega10_thermal_start_thermal_controller_master = {
+	0,
+	PHM_MasterTableFlag_None,
+	vega10_thermal_start_thermal_controller_master_list
+};
+
+static struct phm_master_table_item
+vega10_thermal_set_temperature_range_master_list[] = {
+	{NULL, tf_vega10_thermal_disable_alert},
+	{NULL, tf_vega10_thermal_set_temperature_range},
+	{NULL, tf_vega10_thermal_enable_alert},
+	{NULL, NULL}
+};
+
+struct phm_master_table_header
+vega10_thermal_set_temperature_range_master = {
+	0,
+	PHM_MasterTableFlag_None,
+	vega10_thermal_set_temperature_range_master_list
+};
+
+int vega10_thermal_ctrl_uninitialize_thermal_controller(struct pp_hwmgr *hwmgr)
+{
+	if (!hwmgr->thermal_controller.fanInfo.bNoFan) {
+		vega10_fan_ctrl_set_default_mode(hwmgr);
+		vega10_fan_ctrl_stop_smc_fan_control(hwmgr);
+	}
+	return 0;
+}
+
+/**
+* Initializes the thermal controller related functions
+* in the Hardware Manager structure.
+* @param    hwmgr The address of the hardware manager.
+* @exception Any error code from the low-level communication.
+*/
+int pp_vega10_thermal_initialize(struct pp_hwmgr *hwmgr)
+{
+	int result;
+
+	result = phm_construct_table(hwmgr,
+			&vega10_thermal_set_temperature_range_master,
+			&(hwmgr->set_temperature_range));
+
+	if (!result) {
+		result = phm_construct_table(hwmgr,
+				&vega10_thermal_start_thermal_controller_master,
+				&(hwmgr->start_thermal_controller));
+		if (result)
+			phm_destroy_table(hwmgr,
+					&(hwmgr->set_temperature_range));
+	}
+
+	if (!result)
+		hwmgr->fan_ctrl_is_in_default_mode = true;
+	return result;
+}
+
diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_thermal.h b/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_thermal.h
new file mode 100644
index 0000000..8036808
--- /dev/null
+++ b/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_thermal.h
@@ -0,0 +1,83 @@
+/*
+ * Copyright 2015 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+
+#ifndef VEGA10_THERMAL_H
+#define VEGA10_THERMAL_H
+
+#include "hwmgr.h"
+
+struct vega10_temperature {
+	uint16_t edge_temp;
+	uint16_t hot_spot_temp;
+	uint16_t hbm_temp;
+	uint16_t vr_soc_temp;
+	uint16_t vr_mem_temp;
+	uint16_t liquid1_temp;
+	uint16_t liquid2_temp;
+	uint16_t plx_temp;
+};
+
+#define VEGA10_THERMAL_HIGH_ALERT_MASK         0x1
+#define VEGA10_THERMAL_LOW_ALERT_MASK          0x2
+
+#define VEGA10_THERMAL_MINIMUM_TEMP_READING    -256
+#define VEGA10_THERMAL_MAXIMUM_TEMP_READING    255
+
+#define VEGA10_THERMAL_MINIMUM_ALERT_TEMP      0
+#define VEGA10_THERMAL_MAXIMUM_ALERT_TEMP      255
+
+#define FDO_PWM_MODE_STATIC  1
+#define FDO_PWM_MODE_STATIC_RPM 5
+
+
+extern int tf_vega10_thermal_initialize(struct pp_hwmgr *hwmgr,
+		void *input, void *output, void *storage, int result);
+extern int tf_vega10_thermal_set_temperature_range(struct pp_hwmgr *hwmgr,
+		void *input, void *output, void *storage, int result);
+extern int tf_vega10_thermal_enable_alert(struct pp_hwmgr *hwmgr,
+		void *input, void *output, void *storage, int result);
+
+extern int vega10_thermal_get_temperature(struct pp_hwmgr *hwmgr);
+extern int vega10_thermal_stop_thermal_controller(struct pp_hwmgr *hwmgr);
+extern int vega10_fan_ctrl_get_fan_speed_info(struct pp_hwmgr *hwmgr,
+		struct phm_fan_speed_info *fan_speed_info);
+extern int vega10_fan_ctrl_get_fan_speed_percent(struct pp_hwmgr *hwmgr,
+		uint32_t *speed);
+extern int vega10_fan_ctrl_set_default_mode(struct pp_hwmgr *hwmgr);
+extern int vega10_fan_ctrl_set_static_mode(struct pp_hwmgr *hwmgr,
+		uint32_t mode);
+extern int vega10_fan_ctrl_set_fan_speed_percent(struct pp_hwmgr *hwmgr,
+		uint32_t speed);
+extern int vega10_fan_ctrl_reset_fan_speed_to_default(struct pp_hwmgr *hwmgr);
+extern int pp_vega10_thermal_initialize(struct pp_hwmgr *hwmgr);
+extern int vega10_thermal_ctrl_uninitialize_thermal_controller(
+		struct pp_hwmgr *hwmgr);
+extern int vega10_fan_ctrl_set_fan_speed_rpm(struct pp_hwmgr *hwmgr,
+		uint32_t speed);
+extern int vega10_fan_ctrl_get_fan_speed_rpm(struct pp_hwmgr *hwmgr,
+		uint32_t *speed);
+extern int vega10_fan_ctrl_stop_smc_fan_control(struct pp_hwmgr *hwmgr);
+extern uint32_t smu7_get_xclk(struct pp_hwmgr *hwmgr);
+
+#endif
+
diff --git a/drivers/gpu/drm/amd/powerplay/inc/hwmgr.h b/drivers/gpu/drm/amd/powerplay/inc/hwmgr.h
index 7de9bea..320225d 100644
--- a/drivers/gpu/drm/amd/powerplay/inc/hwmgr.h
+++ b/drivers/gpu/drm/amd/powerplay/inc/hwmgr.h
@@ -85,6 +85,7 @@ enum PP_FEATURE_MASK {
 	PP_CLOCK_STRETCH_MASK = 0x400,
 	PP_OD_FUZZY_FAN_CONTROL_MASK = 0x800,
 	PP_SOCCLK_DPM_MASK = 0x1000,
+	PP_DCEFCLK_DPM_MASK = 0x2000,
 };
 
 enum PHM_BackEnd_Magic {
@@ -820,6 +821,8 @@ extern uint32_t phm_get_lowest_enabled_level(struct pp_hwmgr *hwmgr, uint32_t ma
 extern void phm_apply_dal_min_voltage_request(struct pp_hwmgr *hwmgr);
 
 extern int smu7_init_function_pointers(struct pp_hwmgr *hwmgr);
+extern int vega10_hwmgr_init(struct pp_hwmgr *hwmgr);
+
 extern int phm_get_voltage_evv_on_sclk(struct pp_hwmgr *hwmgr, uint8_t voltage_type,
 				uint32_t sclk, uint16_t id, uint16_t *voltage);
 
diff --git a/drivers/gpu/drm/amd/powerplay/inc/pp_soc15.h b/drivers/gpu/drm/amd/powerplay/inc/pp_soc15.h
new file mode 100644
index 0000000..227d999
--- /dev/null
+++ b/drivers/gpu/drm/amd/powerplay/inc/pp_soc15.h
@@ -0,0 +1,48 @@
+/*
+ * Copyright 2016 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+#ifndef PP_SOC15_H
+#define PP_SOC15_H
+
+#include "vega10/soc15ip.h"
+
+static inline uint32_t soc15_get_register_offset(
+		uint32_t hw_id,
+		uint32_t inst,
+		uint32_t segment,
+		uint32_t offset)
+{
+	uint32_t reg = 0;
+
+	if (hw_id == THM_HWID)
+		reg = THM_BASE.instance[inst].segment[segment] + offset;
+	else if (hw_id == NBIF_HWID)
+		reg = NBIF_BASE.instance[inst].segment[segment] + offset;
+	else if (hw_id == MP1_HWID)
+		reg = MP1_BASE.instance[inst].segment[segment] + offset;
+	else if (hw_id == DF_HWID)
+		reg = DF_BASE.instance[inst].segment[segment] + offset;
+
+	return reg;
+}
+
+#endif
diff --git a/drivers/gpu/drm/amd/powerplay/inc/smumgr.h b/drivers/gpu/drm/amd/powerplay/inc/smumgr.h
index 52f56f4..37f4121 100644
--- a/drivers/gpu/drm/amd/powerplay/inc/smumgr.h
+++ b/drivers/gpu/drm/amd/powerplay/inc/smumgr.h
@@ -38,6 +38,7 @@ extern const struct pp_smumgr_func iceland_smu_funcs;
 extern const struct pp_smumgr_func tonga_smu_funcs;
 extern const struct pp_smumgr_func fiji_smu_funcs;
 extern const struct pp_smumgr_func polaris10_smu_funcs;
+extern const struct pp_smumgr_func vega10_smu_funcs;
 
 enum AVFS_BTC_STATUS {
 	AVFS_BTC_BOOT = 0,
@@ -177,6 +178,8 @@ extern int smu_allocate_memory(void *device, uint32_t size,
 			 void **kptr, void *handle);
 
 extern int smu_free_memory(void *device, void *handle);
+extern int vega10_smum_init(struct pp_smumgr *smumgr);
+
 extern int smum_update_sclk_threshold(struct pp_hwmgr *hwmgr);
 
 extern int smum_update_smc_table(struct pp_hwmgr *hwmgr, uint32_t type);
diff --git a/drivers/gpu/drm/amd/powerplay/smumgr/Makefile b/drivers/gpu/drm/amd/powerplay/smumgr/Makefile
index 51ff083..68b01b5 100644
--- a/drivers/gpu/drm/amd/powerplay/smumgr/Makefile
+++ b/drivers/gpu/drm/amd/powerplay/smumgr/Makefile
@@ -4,7 +4,7 @@
 
 SMU_MGR = smumgr.o cz_smumgr.o tonga_smumgr.o fiji_smumgr.o fiji_smc.o \
 	  polaris10_smumgr.o iceland_smumgr.o polaris10_smc.o tonga_smc.o \
-	  smu7_smumgr.o iceland_smc.o
+	  smu7_smumgr.o iceland_smc.o vega10_smumgr.o
 
 AMD_PP_SMUMGR = $(addprefix $(AMD_PP_PATH)/smumgr/,$(SMU_MGR))
 
diff --git a/drivers/gpu/drm/amd/powerplay/smumgr/smumgr.c b/drivers/gpu/drm/amd/powerplay/smumgr/smumgr.c
index 454f445..c0d7576 100644
--- a/drivers/gpu/drm/amd/powerplay/smumgr/smumgr.c
+++ b/drivers/gpu/drm/amd/powerplay/smumgr/smumgr.c
@@ -86,6 +86,15 @@ int smum_early_init(struct pp_instance *handle)
 			return -EINVAL;
 		}
 		break;
+	case AMDGPU_FAMILY_AI:
+		switch (smumgr->chip_id) {
+		case CHIP_VEGA10:
+			smumgr->smumgr_funcs = &vega10_smu_funcs;
+			break;
+		default:
+			return -EINVAL;
+		}
+		break;
 	default:
 		kfree(smumgr);
 		return -EINVAL;
diff --git a/drivers/gpu/drm/amd/powerplay/smumgr/vega10_smumgr.c b/drivers/gpu/drm/amd/powerplay/smumgr/vega10_smumgr.c
new file mode 100644
index 0000000..2685f02
--- /dev/null
+++ b/drivers/gpu/drm/amd/powerplay/smumgr/vega10_smumgr.c
@@ -0,0 +1,564 @@
+/*
+ * Copyright 2016 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+
+#include "smumgr.h"
+#include "vega10_inc.h"
+#include "pp_soc15.h"
+#include "vega10_smumgr.h"
+#include "vega10_ppsmc.h"
+#include "smu9_driver_if.h"
+
+#include "ppatomctrl.h"
+#include "pp_debug.h"
+#include "smu_ucode_xfer_vi.h"
+#include "smu7_smumgr.h"
+
+#define AVFS_EN_MSB		1568
+#define AVFS_EN_LSB		1568
+
+#define VOLTAGE_SCALE	4
+
+/* Microcode file is stored in this buffer */
+#define BUFFER_SIZE                 80000
+#define MAX_STRING_SIZE             15
+#define BUFFER_SIZETWO              131072 /* 128 * 1024 */
+
+/* MP Apertures */
+#define MP0_Public                  0x03800000
+#define MP0_SRAM                    0x03900000
+#define MP1_Public                  0x03b00000
+#define MP1_SRAM                    0x03c00004
+
+#define smnMP1_FIRMWARE_FLAGS                                                                           0x3010028
+#define smnMP0_FW_INTF                                                                                  0x3010104
+#define smnMP1_PUB_CTRL                                                                                 0x3010b14
+
+static bool vega10_is_smc_ram_running(struct pp_smumgr *smumgr)
+{
+	uint32_t mp1_fw_flags, reg;
+
+	reg = soc15_get_register_offset(NBIF_HWID, 0,
+			mmPCIE_INDEX2_BASE_IDX, mmPCIE_INDEX2);
+
+	cgs_write_register(smumgr->device, reg,
+			(MP1_Public | (smnMP1_FIRMWARE_FLAGS & 0xffffffff)));
+
+	reg = soc15_get_register_offset(NBIF_HWID, 0,
+			mmPCIE_DATA2_BASE_IDX, mmPCIE_DATA2);
+
+	mp1_fw_flags = cgs_read_register(smumgr->device, reg);
+
+	if (mp1_fw_flags & MP1_FIRMWARE_FLAGS__INTERRUPTS_ENABLED_MASK)
+		return true;
+
+	return false;
+}
+
+/**
+* Check if SMC has responded to the previous message.
+*
+* @param    smumgr  the address of the powerplay hardware manager.
+* @return   The contents of the SMC response register, or -1 if SMC RAM
+*           is not running.
+*/
+static uint32_t vega10_wait_for_response(struct pp_smumgr *smumgr)
+{
+	uint32_t reg;
+
+	if (!vega10_is_smc_ram_running(smumgr))
+		return -1;
+
+	reg = soc15_get_register_offset(MP1_HWID, 0,
+			mmMP1_SMN_C2PMSG_90_BASE_IDX, mmMP1_SMN_C2PMSG_90);
+
+	smum_wait_for_register_unequal(smumgr, reg,
+			0, MP1_C2PMSG_90__CONTENT_MASK);
+
+	return cgs_read_register(smumgr->device, reg);
+}
+
+/**
+* Send a message to the SMC, and do not wait for its response.
+*
+* @param    smumgr  the address of the powerplay hardware manager.
+* @param    msg the message to send.
+* @return   0 on success, -1 if SMC RAM is not running.
+*/
+int vega10_send_msg_to_smc_without_waiting(struct pp_smumgr *smumgr,
+		uint16_t msg)
+{
+	uint32_t reg;
+
+	if (!vega10_is_smc_ram_running(smumgr))
+		return -1;
+
+	reg = soc15_get_register_offset(MP1_HWID, 0,
+			mmMP1_SMN_C2PMSG_66_BASE_IDX, mmMP1_SMN_C2PMSG_66);
+	cgs_write_register(smumgr->device, reg, msg);
+
+	return 0;
+}
+
+/**
+* Send a message to the SMC, and wait for its response.
+*
+* @param    smumgr  the address of the powerplay hardware manager.
+* @param    msg the message to send.
+* @return   0 on success, -1 on failure.
+*/
+int vega10_send_msg_to_smc(struct pp_smumgr *smumgr, uint16_t msg)
+{
+	uint32_t reg;
+
+	if (!vega10_is_smc_ram_running(smumgr))
+		return -1;
+
+	vega10_wait_for_response(smumgr);
+
+	reg = soc15_get_register_offset(MP1_HWID, 0,
+			mmMP1_SMN_C2PMSG_90_BASE_IDX, mmMP1_SMN_C2PMSG_90);
+	cgs_write_register(smumgr->device, reg, 0);
+
+	vega10_send_msg_to_smc_without_waiting(smumgr, msg);
+
+	PP_ASSERT_WITH_CODE(vega10_wait_for_response(smumgr) == 1,
+			"Failed to send Message.",
+			return -1);
+
+	return 0;
+}
+
+/**
+ * Send a message to the SMC with parameter
+ * @param    smumgr:  the address of the powerplay hardware manager.
+ * @param    msg: the message to send.
+ * @param    parameter: the parameter to send
+ * @return   0 on success, -1 on failure.
+ */
+int vega10_send_msg_to_smc_with_parameter(struct pp_smumgr *smumgr,
+		uint16_t msg, uint32_t parameter)
+{
+	uint32_t reg;
+
+	if (!vega10_is_smc_ram_running(smumgr))
+		return -1;
+
+	vega10_wait_for_response(smumgr);
+
+	reg = soc15_get_register_offset(MP1_HWID, 0,
+			mmMP1_SMN_C2PMSG_90_BASE_IDX, mmMP1_SMN_C2PMSG_90);
+	cgs_write_register(smumgr->device, reg, 0);
+
+	reg = soc15_get_register_offset(MP1_HWID, 0,
+			mmMP1_SMN_C2PMSG_82_BASE_IDX, mmMP1_SMN_C2PMSG_82);
+	cgs_write_register(smumgr->device, reg, parameter);
+
+	vega10_send_msg_to_smc_without_waiting(smumgr, msg);
+
+	PP_ASSERT_WITH_CODE(vega10_wait_for_response(smumgr) == 1,
+			"Failed to send Message.",
+			return -1);
+
+	return 0;
+}
+
+
+/**
+* Send a message to the SMC with parameter, do not wait for response
+*
+* @param    smumgr:  the address of the powerplay hardware manager.
+* @param    msg: the message to send.
+* @param    parameter: the parameter to send
+* @return   0 on success, -1 if SMC RAM is not running.
+*/
+int vega10_send_msg_to_smc_with_parameter_without_waiting(
+		struct pp_smumgr *smumgr, uint16_t msg, uint32_t parameter)
+{
+	uint32_t reg;
+
+	reg = soc15_get_register_offset(MP1_HWID, 0,
+			mmMP1_SMN_C2PMSG_82_BASE_IDX, mmMP1_SMN_C2PMSG_82);
+	cgs_write_register(smumgr->device, reg, parameter);
+
+	return vega10_send_msg_to_smc_without_waiting(smumgr, msg);
+}
+
+/**
+* Retrieve an argument from SMC.
+*
+* @param    smumgr  the address of the powerplay hardware manager.
+* @param    arg     pointer to store the argument from SMC.
+* @return   Always return 0.
+*/
+int vega10_read_arg_from_smc(struct pp_smumgr *smumgr, uint32_t *arg)
+{
+	uint32_t reg;
+
+	reg = soc15_get_register_offset(MP1_HWID, 0,
+			mmMP1_SMN_C2PMSG_82_BASE_IDX, mmMP1_SMN_C2PMSG_82);
+
+	*arg = cgs_read_register(smumgr->device, reg);
+
+	return 0;
+}
+
+/**
+* Copy table from SMC into driver FB
+* @param   smumgr    the address of the SMC manager
+* @param   table    the driver buffer to copy into
+* @param   table_id    the driver's table ID to copy from
+*/
+int vega10_copy_table_from_smc(struct pp_smumgr *smumgr,
+		uint8_t *table, int16_t table_id)
+{
+	struct vega10_smumgr *priv =
+			(struct vega10_smumgr *)(smumgr->backend);
+
+	PP_ASSERT_WITH_CODE(table_id < MAX_SMU_TABLE,
+			"Invalid SMU Table ID!", return -1;);
+	PP_ASSERT_WITH_CODE(priv->smu_tables.entry[table_id].version != 0,
+			"Invalid SMU Table version!", return -1;);
+	PP_ASSERT_WITH_CODE(priv->smu_tables.entry[table_id].size != 0,
+			"Invalid SMU Table Length!", return -1;);
+	PP_ASSERT_WITH_CODE(vega10_send_msg_to_smc_with_parameter(smumgr,
+			PPSMC_MSG_SetDriverDramAddrHigh,
+			priv->smu_tables.entry[table_id].table_addr_high) == 0,
+			"[CopyTableFromSMC] Attempt to Set Dram Addr High Failed!", return -1;);
+	PP_ASSERT_WITH_CODE(vega10_send_msg_to_smc_with_parameter(smumgr,
+			PPSMC_MSG_SetDriverDramAddrLow,
+			priv->smu_tables.entry[table_id].table_addr_low) == 0,
+			"[CopyTableFromSMC] Attempt to Set Dram Addr Low Failed!",
+			return -1;);
+	PP_ASSERT_WITH_CODE(vega10_send_msg_to_smc_with_parameter(smumgr,
+			PPSMC_MSG_TransferTableSmu2Dram,
+			priv->smu_tables.entry[table_id].table_id) == 0,
+			"[CopyTableFromSMC] Attempt to Transfer Table From SMU Failed!",
+			return -1;);
+
+	memcpy(table, priv->smu_tables.entry[table_id].table,
+			priv->smu_tables.entry[table_id].size);
+
+	return 0;
+}
+
+/**
+* Copy table from driver FB into SMC
+* @param   smumgr    the address of the SMC manager
+* @param   table    the driver buffer to copy from
+* @param   table_id    the driver's table ID to copy to
+*/
+int vega10_copy_table_to_smc(struct pp_smumgr *smumgr,
+		uint8_t *table, int16_t table_id)
+{
+	struct vega10_smumgr *priv =
+			(struct vega10_smumgr *)(smumgr->backend);
+
+	PP_ASSERT_WITH_CODE(table_id < MAX_SMU_TABLE,
+			"Invalid SMU Table ID!", return -1;);
+	PP_ASSERT_WITH_CODE(priv->smu_tables.entry[table_id].version != 0,
+			"Invalid SMU Table version!", return -1;);
+	PP_ASSERT_WITH_CODE(priv->smu_tables.entry[table_id].size != 0,
+			"Invalid SMU Table Length!", return -1;);
+
+	memcpy(priv->smu_tables.entry[table_id].table, table,
+			priv->smu_tables.entry[table_id].size);
+
+	PP_ASSERT_WITH_CODE(vega10_send_msg_to_smc_with_parameter(smumgr,
+			PPSMC_MSG_SetDriverDramAddrHigh,
+			priv->smu_tables.entry[table_id].table_addr_high) == 0,
+			"[CopyTableToSMC] Attempt to Set Dram Addr High Failed!",
+			return -1;);
+	PP_ASSERT_WITH_CODE(vega10_send_msg_to_smc_with_parameter(smumgr,
+			PPSMC_MSG_SetDriverDramAddrLow,
+			priv->smu_tables.entry[table_id].table_addr_low) == 0,
+			"[CopyTableToSMC] Attempt to Set Dram Addr Low Failed!",
+			return -1;);
+	PP_ASSERT_WITH_CODE(vega10_send_msg_to_smc_with_parameter(smumgr,
+			PPSMC_MSG_TransferTableDram2Smu,
+			priv->smu_tables.entry[table_id].table_id) == 0,
+			"[CopyTableToSMC] Attempt to Transfer Table To SMU Failed!",
+			return -1;);
+
+	return 0;
+}
+
+int vega10_perform_btc(struct pp_smumgr *smumgr)
+{
+	PP_ASSERT_WITH_CODE(!vega10_send_msg_to_smc_with_parameter(
+			smumgr, PPSMC_MSG_RunBtc, 0),
+			"Attempt to run DC BTC Failed!",
+			return -1);
+	return 0;
+}
+
+int vega10_save_vft_table(struct pp_smumgr *smumgr, uint8_t *avfs_table)
+{
+	PP_ASSERT_WITH_CODE(avfs_table,
+			"No access to SMC AVFS Table",
+			return -1);
+
+	return vega10_copy_table_from_smc(smumgr, avfs_table, AVFSTABLE);
+}
+
+int vega10_restore_vft_table(struct pp_smumgr *smumgr, uint8_t *avfs_table)
+{
+	PP_ASSERT_WITH_CODE(avfs_table,
+			"No access to SMC AVFS Table",
+			return -1);
+
+	return vega10_copy_table_to_smc(smumgr, avfs_table, AVFSTABLE);
+}
+
+int vega10_enable_smc_features(struct pp_smumgr *smumgr,
+		bool enable, uint32_t feature_mask)
+{
+	int msg = enable ? PPSMC_MSG_EnableSmuFeatures :
+			PPSMC_MSG_DisableSmuFeatures;
+
+	return vega10_send_msg_to_smc_with_parameter(smumgr,
+			msg, feature_mask);
+}
+
+int vega10_get_smc_features(struct pp_smumgr *smumgr,
+		uint32_t *features_enabled)
+{
+	if (!vega10_send_msg_to_smc(smumgr,
+			PPSMC_MSG_GetEnabledSmuFeatures)) {
+		if (!vega10_read_arg_from_smc(smumgr, features_enabled))
+			return 0;
+	}
+
+	return -1;
+}
+
+int vega10_set_tools_address(struct pp_smumgr *smumgr)
+{
+	struct vega10_smumgr *priv =
+			(struct vega10_smumgr *)(smumgr->backend);
+
+	if (priv->smu_tables.entry[TOOLSTABLE].table_addr_high ||
+			priv->smu_tables.entry[TOOLSTABLE].table_addr_low) {
+		if (!vega10_send_msg_to_smc_with_parameter(smumgr,
+				PPSMC_MSG_SetToolsDramAddrHigh,
+				priv->smu_tables.entry[TOOLSTABLE].table_addr_high))
+			vega10_send_msg_to_smc_with_parameter(smumgr,
+					PPSMC_MSG_SetToolsDramAddrLow,
+					priv->smu_tables.entry[TOOLSTABLE].table_addr_low);
+	}
+	return 0;
+}
+
+static int vega10_verify_smc_interface(struct pp_smumgr *smumgr)
+{
+	uint32_t smc_driver_if_version;
+
+	PP_ASSERT_WITH_CODE(!vega10_send_msg_to_smc(smumgr,
+			PPSMC_MSG_GetDriverIfVersion),
+			"Attempt to get SMC IF Version Number Failed!",
+			return -1);
+	PP_ASSERT_WITH_CODE(!vega10_read_arg_from_smc(smumgr,
+			&smc_driver_if_version),
+			"Attempt to read SMC IF Version Number Failed!",
+			return -1);
+
+	if (smc_driver_if_version != SMU9_DRIVER_IF_VERSION)
+		return -1;
+
+	return 0;
+}
+
+/**
+* Allocate the SMU manager backend and the driver/SMC shared tables.
+*
+* @param    smumgr  the address of the powerplay hardware manager.
+* @return   0 on success, negative error code on failure.
+*/
+static int vega10_smu_init(struct pp_smumgr *smumgr)
+{
+	struct vega10_smumgr *priv;
+	uint64_t mc_addr;
+	void *kaddr = NULL;
+	unsigned long handle, tools_size;
+	int ret;
+	struct cgs_firmware_info info = {0};
+
+	ret = cgs_get_firmware_info(smumgr->device,
+				    smu7_convert_fw_type_to_cgs(UCODE_ID_SMU),
+				    &info);
+	if (ret || !info.kptr)
+		return -EINVAL;
+
+	priv = kzalloc(sizeof(struct vega10_smumgr), GFP_KERNEL);
+
+	if (!priv)
+		return -ENOMEM;
+
+	smumgr->backend = priv;
+
+	/* allocate space for pptable */
+	smu_allocate_memory(smumgr->device,
+			sizeof(PPTable_t),
+			CGS_GPU_MEM_TYPE__VISIBLE_CONTIG_FB,
+			PAGE_SIZE,
+			&mc_addr,
+			&kaddr,
+			&handle);
+
+	PP_ASSERT_WITH_CODE(kaddr,
+			"[vega10_smu_init] Out of memory for pptable.",
+			kfree(smumgr->backend);
+			cgs_free_gpu_mem(smumgr->device,
+			(cgs_handle_t)handle);
+			return -1);
+
+	priv->smu_tables.entry[PPTABLE].version = 0x01;
+	priv->smu_tables.entry[PPTABLE].size = sizeof(PPTable_t);
+	priv->smu_tables.entry[PPTABLE].table_id = TABLE_PPTABLE;
+	priv->smu_tables.entry[PPTABLE].table_addr_high =
+			smu_upper_32_bits(mc_addr);
+	priv->smu_tables.entry[PPTABLE].table_addr_low =
+			smu_lower_32_bits(mc_addr);
+	priv->smu_tables.entry[PPTABLE].table = kaddr;
+	priv->smu_tables.entry[PPTABLE].handle = handle;
+
+	/* allocate space for watermarks table */
+	smu_allocate_memory(smumgr->device,
+			sizeof(Watermarks_t),
+			CGS_GPU_MEM_TYPE__VISIBLE_CONTIG_FB,
+			PAGE_SIZE,
+			&mc_addr,
+			&kaddr,
+			&handle);
+
+	PP_ASSERT_WITH_CODE(kaddr,
+			"[vega10_smu_init] Out of memory for wmtable.",
+			kfree(smumgr->backend);
+			cgs_free_gpu_mem(smumgr->device,
+			(cgs_handle_t)priv->smu_tables.entry[PPTABLE].handle);
+			cgs_free_gpu_mem(smumgr->device,
+			(cgs_handle_t)handle);
+			return -1);
+
+	priv->smu_tables.entry[WMTABLE].version = 0x01;
+	priv->smu_tables.entry[WMTABLE].size = sizeof(Watermarks_t);
+	priv->smu_tables.entry[WMTABLE].table_id = TABLE_WATERMARKS;
+	priv->smu_tables.entry[WMTABLE].table_addr_high =
+			smu_upper_32_bits(mc_addr);
+	priv->smu_tables.entry[WMTABLE].table_addr_low =
+			smu_lower_32_bits(mc_addr);
+	priv->smu_tables.entry[WMTABLE].table = kaddr;
+	priv->smu_tables.entry[WMTABLE].handle = handle;
+
+	/* allocate space for AVFS table */
+	smu_allocate_memory(smumgr->device,
+			sizeof(AvfsTable_t),
+			CGS_GPU_MEM_TYPE__VISIBLE_CONTIG_FB,
+			PAGE_SIZE,
+			&mc_addr,
+			&kaddr,
+			&handle);
+
+	PP_ASSERT_WITH_CODE(kaddr,
+			"[vega10_smu_init] Out of memory for avfs table.",
+			kfree(smumgr->backend);
+			cgs_free_gpu_mem(smumgr->device,
+			(cgs_handle_t)priv->smu_tables.entry[PPTABLE].handle);
+			cgs_free_gpu_mem(smumgr->device,
+			(cgs_handle_t)priv->smu_tables.entry[WMTABLE].handle);
+			cgs_free_gpu_mem(smumgr->device,
+			(cgs_handle_t)handle);
+			return -1);
+
+	priv->smu_tables.entry[AVFSTABLE].version = 0x01;
+	priv->smu_tables.entry[AVFSTABLE].size = sizeof(AvfsTable_t);
+	priv->smu_tables.entry[AVFSTABLE].table_id = TABLE_AVFS;
+	priv->smu_tables.entry[AVFSTABLE].table_addr_high =
+			smu_upper_32_bits(mc_addr);
+	priv->smu_tables.entry[AVFSTABLE].table_addr_low =
+			smu_lower_32_bits(mc_addr);
+	priv->smu_tables.entry[AVFSTABLE].table = kaddr;
+	priv->smu_tables.entry[AVFSTABLE].handle = handle;
+
+	tools_size = 0;
+	if (tools_size) {
+		smu_allocate_memory(smumgr->device,
+				tools_size,
+				CGS_GPU_MEM_TYPE__VISIBLE_CONTIG_FB,
+				PAGE_SIZE,
+				&mc_addr,
+				&kaddr,
+				&handle);
+
+		if (kaddr) {
+			priv->smu_tables.entry[TOOLSTABLE].version = 0x01;
+			priv->smu_tables.entry[TOOLSTABLE].size = tools_size;
+			priv->smu_tables.entry[TOOLSTABLE].table_id = TABLE_PMSTATUSLOG;
+			priv->smu_tables.entry[TOOLSTABLE].table_addr_high =
+					smu_upper_32_bits(mc_addr);
+			priv->smu_tables.entry[TOOLSTABLE].table_addr_low =
+					smu_lower_32_bits(mc_addr);
+			priv->smu_tables.entry[TOOLSTABLE].table = kaddr;
+			priv->smu_tables.entry[TOOLSTABLE].handle = handle;
+		}
+	}
+
+	return 0;
+}
+
+static int vega10_smu_fini(struct pp_smumgr *smumgr)
+{
+	struct vega10_smumgr *priv =
+			(struct vega10_smumgr *)(smumgr->backend);
+
+	if (priv) {
+		cgs_free_gpu_mem(smumgr->device,
+				(cgs_handle_t)priv->smu_tables.entry[PPTABLE].handle);
+		cgs_free_gpu_mem(smumgr->device,
+				(cgs_handle_t)priv->smu_tables.entry[WMTABLE].handle);
+		cgs_free_gpu_mem(smumgr->device,
+				(cgs_handle_t)priv->smu_tables.entry[AVFSTABLE].handle);
+		if (priv->smu_tables.entry[TOOLSTABLE].table)
+			cgs_free_gpu_mem(smumgr->device,
+					(cgs_handle_t)priv->smu_tables.entry[TOOLSTABLE].handle);
+		kfree(smumgr->backend);
+		smumgr->backend = NULL;
+	}
+	return 0;
+}
+
+static int vega10_start_smu(struct pp_smumgr *smumgr)
+{
+	PP_ASSERT_WITH_CODE(!vega10_verify_smc_interface(smumgr),
+			"Failed to verify SMC interface!",
+			return -1);
+	return 0;
+}
+
+const struct pp_smumgr_func vega10_smu_funcs = {
+	.smu_init = &vega10_smu_init,
+	.smu_fini = &vega10_smu_fini,
+	.start_smu = &vega10_start_smu,
+	.request_smu_load_specific_fw = NULL,
+	.send_msg_to_smc = &vega10_send_msg_to_smc,
+	.send_msg_to_smc_with_parameter = &vega10_send_msg_to_smc_with_parameter,
+	.download_pptable_settings = NULL,
+	.upload_pptable_settings = NULL,
+};
diff --git a/drivers/gpu/drm/amd/powerplay/smumgr/vega10_smumgr.h b/drivers/gpu/drm/amd/powerplay/smumgr/vega10_smumgr.h
new file mode 100644
index 0000000..ad05021
--- /dev/null
+++ b/drivers/gpu/drm/amd/powerplay/smumgr/vega10_smumgr.h
@@ -0,0 +1,70 @@
+/*
+ * Copyright 2016 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+#ifndef _VEGA10_SMUMANAGER_H_
+#define _VEGA10_SMUMANAGER_H_
+
+#include "vega10_hwmgr.h"
+
+enum smu_table_id {
+	PPTABLE = 0,
+	WMTABLE,
+	AVFSTABLE,
+	TOOLSTABLE,
+	MAX_SMU_TABLE,
+};
+
+struct smu_table_entry {
+	uint32_t version;
+	uint32_t size;
+	uint32_t table_id;
+	uint32_t table_addr_high;
+	uint32_t table_addr_low;
+	uint8_t *table;
+	unsigned long handle;
+};
+
+struct smu_table_array {
+	struct smu_table_entry entry[MAX_SMU_TABLE];
+};
+
+struct vega10_smumgr {
+	struct smu_table_array            smu_tables;
+};
+
+int vega10_read_arg_from_smc(struct pp_smumgr *smumgr, uint32_t *arg);
+int vega10_copy_table_from_smc(struct pp_smumgr *smumgr,
+		uint8_t *table, int16_t table_id);
+int vega10_copy_table_to_smc(struct pp_smumgr *smumgr,
+		uint8_t *table, int16_t table_id);
+int vega10_enable_smc_features(struct pp_smumgr *smumgr,
+		bool enable, uint32_t feature_mask);
+int vega10_get_smc_features(struct pp_smumgr *smumgr,
+		uint32_t *features_enabled);
+int vega10_save_vft_table(struct pp_smumgr *smumgr, uint8_t *avfs_table);
+int vega10_restore_vft_table(struct pp_smumgr *smumgr, uint8_t *avfs_table);
+int vega10_perform_btc(struct pp_smumgr *smumgr);
+
+int vega10_set_tools_address(struct pp_smumgr *smumgr);
+
+#endif
+
-- 
2.5.5



* [PATCH 063/100] drm/amd/display: Add DCE12 bios parser support
       [not found] ` <1490041835-11255-1-git-send-email-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (46 preceding siblings ...)
  2017-03-20 20:29   ` [PATCH 062/100] drm/amd/powerplay: add Vega10 powerplay support Alex Deucher
@ 2017-03-20 20:29   ` Alex Deucher
       [not found]     ` <1490041835-11255-49-git-send-email-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
  2017-03-20 20:29   ` [PATCH 064/100] drm/amd/display: Add DCE12 gpio support Alex Deucher
                     ` (37 subsequent siblings)
  85 siblings, 1 reply; 101+ messages in thread
From: Alex Deucher @ 2017-03-20 20:29 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Alex Deucher, Harry Wentland

From: Harry Wentland <harry.wentland@amd.com>

Signed-off-by: Harry Wentland <harry.wentland@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c | 2085 ++++++++++++++++++++
 drivers/gpu/drm/amd/display/dc/bios/bios_parser2.h |   33 +
 .../display/dc/bios/bios_parser_types_internal2.h  |   74 +
 .../gpu/drm/amd/display/dc/bios/command_table2.c   |  813 ++++++++
 .../gpu/drm/amd/display/dc/bios/command_table2.h   |  105 +
 .../amd/display/dc/bios/command_table_helper2.c    |  260 +++
 .../amd/display/dc/bios/command_table_helper2.h    |   82 +
 .../dc/bios/dce112/command_table_helper2_dce112.c  |  418 ++++
 .../dc/bios/dce112/command_table_helper2_dce112.h  |   34 +
 9 files changed, 3904 insertions(+)
 create mode 100644 drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c
 create mode 100644 drivers/gpu/drm/amd/display/dc/bios/bios_parser2.h
 create mode 100644 drivers/gpu/drm/amd/display/dc/bios/bios_parser_types_internal2.h
 create mode 100644 drivers/gpu/drm/amd/display/dc/bios/command_table2.c
 create mode 100644 drivers/gpu/drm/amd/display/dc/bios/command_table2.h
 create mode 100644 drivers/gpu/drm/amd/display/dc/bios/command_table_helper2.c
 create mode 100644 drivers/gpu/drm/amd/display/dc/bios/command_table_helper2.h
 create mode 100644 drivers/gpu/drm/amd/display/dc/bios/dce112/command_table_helper2_dce112.c
 create mode 100644 drivers/gpu/drm/amd/display/dc/bios/dce112/command_table_helper2_dce112.h

diff --git a/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c b/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c
new file mode 100644
index 0000000..f6e77da
--- /dev/null
+++ b/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c
@@ -0,0 +1,2085 @@
+/*
+ * Copyright 2012-15 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: AMD
+ *
+ */
+
+#include "dm_services.h"
+
+#define _BIOS_PARSER_2_
+
+#include "ObjectID.h"
+#include "atomfirmware.h"
+#include "atomfirmwareid.h"
+
+#include "dc_bios_types.h"
+#include "include/grph_object_ctrl_defs.h"
+#include "include/bios_parser_interface.h"
+#include "include/i2caux_interface.h"
+#include "include/logger_interface.h"
+
+#include "command_table2.h"
+
+#include "bios_parser_helper.h"
+#include "command_table_helper2.h"
+#include "bios_parser2.h"
+#include "bios_parser_types_internal2.h"
+#include "bios_parser_interface.h"
+
+#define LAST_RECORD_TYPE 0xff
+
+struct i2c_id_config_access {
+	uint8_t bfI2C_LineMux:4;
+	uint8_t bfHW_EngineID:3;
+	uint8_t bfHW_Capable:1;
+	uint8_t ucAccess;
+};
+
+static enum object_type object_type_from_bios_object_id(
+	uint32_t bios_object_id);
+
+static enum object_enum_id enum_id_from_bios_object_id(uint32_t bios_object_id);
+
+static struct graphics_object_id object_id_from_bios_object_id(
+	uint32_t bios_object_id);
+
+static uint32_t id_from_bios_object_id(enum object_type type,
+	uint32_t bios_object_id);
+
+static uint32_t gpu_id_from_bios_object_id(uint32_t bios_object_id);
+
+static enum encoder_id encoder_id_from_bios_object_id(uint32_t bios_object_id);
+
+static enum connector_id connector_id_from_bios_object_id(
+						uint32_t bios_object_id);
+
+static enum generic_id generic_id_from_bios_object_id(uint32_t bios_object_id);
+
+static enum bp_result get_gpio_i2c_info(struct bios_parser *bp,
+	struct atom_i2c_record *record,
+	struct graphics_object_i2c_info *info);
+
+static enum bp_result bios_parser_get_firmware_info(
+	struct dc_bios *dcb,
+	struct firmware_info *info);
+
+static enum bp_result bios_parser_get_encoder_cap_info(
+	struct dc_bios *dcb,
+	struct graphics_object_id object_id,
+	struct bp_encoder_cap_info *info);
+
+static enum bp_result get_firmware_info_v3_1(
+	struct bios_parser *bp,
+	struct firmware_info *info);
+
+static struct atom_hpd_int_record *get_hpd_record(struct bios_parser *bp,
+		struct atom_display_object_path_v2 *object);
+
+static struct atom_encoder_caps_record *get_encoder_cap_record(
+	struct bios_parser *bp,
+	struct atom_display_object_path_v2 *object);
+
+#define BIOS_IMAGE_SIZE_OFFSET 2
+#define BIOS_IMAGE_SIZE_UNIT 512
+
+#define DATA_TABLES(table) (bp->master_data_tbl->listOfdatatables.table)
+
+static void destruct(struct bios_parser *bp)
+{
+	if (bp->base.bios_local_image)
+		dm_free(bp->base.bios_local_image);
+
+	if (bp->base.integrated_info)
+		dm_free(bp->base.integrated_info);
+}
+
+static void firmware_parser_destroy(struct dc_bios **dcb)
+{
+	struct bios_parser *bp = BP_FROM_DCB(*dcb);
+
+	if (!bp) {
+		BREAK_TO_DEBUGGER();
+		return;
+	}
+
+	destruct(bp);
+
+	dm_free(bp);
+	*dcb = NULL;
+}
+
+static void get_atom_data_table_revision(
+	struct atom_common_table_header *atom_data_tbl,
+	struct atom_data_revision *tbl_revision)
+{
+	if (!tbl_revision)
+		return;
+
+	/* initialize the revision to 0, which is an invalid revision */
+	tbl_revision->major = 0;
+	tbl_revision->minor = 0;
+
+	if (!atom_data_tbl)
+		return;
+
+	tbl_revision->major =
+			(uint32_t) atom_data_tbl->format_revision & 0x3f;
+	tbl_revision->minor =
+			(uint32_t) atom_data_tbl->content_revision & 0x3f;
+}
+
+static struct graphics_object_id object_id_from_bios_object_id(
+	uint32_t bios_object_id)
+{
+	enum object_type type;
+	enum object_enum_id enum_id;
+	struct graphics_object_id go_id = { 0 };
+
+	type = object_type_from_bios_object_id(bios_object_id);
+
+	if (type == OBJECT_TYPE_UNKNOWN)
+		return go_id;
+
+	enum_id = enum_id_from_bios_object_id(bios_object_id);
+
+	if (enum_id == ENUM_ID_UNKNOWN)
+		return go_id;
+
+	go_id = dal_graphics_object_id_init(
+			id_from_bios_object_id(type, bios_object_id),
+								enum_id, type);
+
+	return go_id;
+}
+
+static enum object_type object_type_from_bios_object_id(uint32_t bios_object_id)
+{
+	uint32_t bios_object_type = (bios_object_id & OBJECT_TYPE_MASK)
+				>> OBJECT_TYPE_SHIFT;
+	enum object_type object_type;
+
+	switch (bios_object_type) {
+	case GRAPH_OBJECT_TYPE_GPU:
+		object_type = OBJECT_TYPE_GPU;
+		break;
+	case GRAPH_OBJECT_TYPE_ENCODER:
+		object_type = OBJECT_TYPE_ENCODER;
+		break;
+	case GRAPH_OBJECT_TYPE_CONNECTOR:
+		object_type = OBJECT_TYPE_CONNECTOR;
+		break;
+	case GRAPH_OBJECT_TYPE_ROUTER:
+		object_type = OBJECT_TYPE_ROUTER;
+		break;
+	case GRAPH_OBJECT_TYPE_GENERIC:
+		object_type = OBJECT_TYPE_GENERIC;
+		break;
+	default:
+		object_type = OBJECT_TYPE_UNKNOWN;
+		break;
+	}
+
+	return object_type;
+}
+
+static enum object_enum_id enum_id_from_bios_object_id(uint32_t bios_object_id)
+{
+	uint32_t bios_enum_id =
+			(bios_object_id & ENUM_ID_MASK) >> ENUM_ID_SHIFT;
+	enum object_enum_id id;
+
+	switch (bios_enum_id) {
+	case GRAPH_OBJECT_ENUM_ID1:
+		id = ENUM_ID_1;
+		break;
+	case GRAPH_OBJECT_ENUM_ID2:
+		id = ENUM_ID_2;
+		break;
+	case GRAPH_OBJECT_ENUM_ID3:
+		id = ENUM_ID_3;
+		break;
+	case GRAPH_OBJECT_ENUM_ID4:
+		id = ENUM_ID_4;
+		break;
+	case GRAPH_OBJECT_ENUM_ID5:
+		id = ENUM_ID_5;
+		break;
+	case GRAPH_OBJECT_ENUM_ID6:
+		id = ENUM_ID_6;
+		break;
+	case GRAPH_OBJECT_ENUM_ID7:
+		id = ENUM_ID_7;
+		break;
+	default:
+		id = ENUM_ID_UNKNOWN;
+		break;
+	}
+
+	return id;
+}
+
+static uint32_t id_from_bios_object_id(enum object_type type,
+	uint32_t bios_object_id)
+{
+	switch (type) {
+	case OBJECT_TYPE_GPU:
+		return gpu_id_from_bios_object_id(bios_object_id);
+	case OBJECT_TYPE_ENCODER:
+		return (uint32_t)encoder_id_from_bios_object_id(bios_object_id);
+	case OBJECT_TYPE_CONNECTOR:
+		return (uint32_t)connector_id_from_bios_object_id(
+				bios_object_id);
+	case OBJECT_TYPE_GENERIC:
+		return generic_id_from_bios_object_id(bios_object_id);
+	default:
+		return 0;
+	}
+}
+
+static uint32_t gpu_id_from_bios_object_id(uint32_t bios_object_id)
+{
+	return (bios_object_id & OBJECT_ID_MASK) >> OBJECT_ID_SHIFT;
+}
+
+static enum encoder_id encoder_id_from_bios_object_id(uint32_t bios_object_id)
+{
+	uint32_t bios_encoder_id = gpu_id_from_bios_object_id(bios_object_id);
+	enum encoder_id id;
+
+	switch (bios_encoder_id) {
+	case ENCODER_OBJECT_ID_INTERNAL_LVDS:
+		id = ENCODER_ID_INTERNAL_LVDS;
+		break;
+	case ENCODER_OBJECT_ID_INTERNAL_TMDS1:
+		id = ENCODER_ID_INTERNAL_TMDS1;
+		break;
+	case ENCODER_OBJECT_ID_INTERNAL_TMDS2:
+		id = ENCODER_ID_INTERNAL_TMDS2;
+		break;
+	case ENCODER_OBJECT_ID_INTERNAL_DAC1:
+		id = ENCODER_ID_INTERNAL_DAC1;
+		break;
+	case ENCODER_OBJECT_ID_INTERNAL_DAC2:
+		id = ENCODER_ID_INTERNAL_DAC2;
+		break;
+	case ENCODER_OBJECT_ID_INTERNAL_LVTM1:
+		id = ENCODER_ID_INTERNAL_LVTM1;
+		break;
+	case ENCODER_OBJECT_ID_HDMI_INTERNAL:
+		id = ENCODER_ID_INTERNAL_HDMI;
+		break;
+	case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_TMDS1:
+		id = ENCODER_ID_INTERNAL_KLDSCP_TMDS1;
+		break;
+	case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DAC1:
+		id = ENCODER_ID_INTERNAL_KLDSCP_DAC1;
+		break;
+	case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DAC2:
+		id = ENCODER_ID_INTERNAL_KLDSCP_DAC2;
+		break;
+	case ENCODER_OBJECT_ID_MVPU_FPGA:
+		id = ENCODER_ID_EXTERNAL_MVPU_FPGA;
+		break;
+	case ENCODER_OBJECT_ID_INTERNAL_DDI:
+		id = ENCODER_ID_INTERNAL_DDI;
+		break;
+	case ENCODER_OBJECT_ID_INTERNAL_UNIPHY:
+		id = ENCODER_ID_INTERNAL_UNIPHY;
+		break;
+	case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_LVTMA:
+		id = ENCODER_ID_INTERNAL_KLDSCP_LVTMA;
+		break;
+	case ENCODER_OBJECT_ID_INTERNAL_UNIPHY1:
+		id = ENCODER_ID_INTERNAL_UNIPHY1;
+		break;
+	case ENCODER_OBJECT_ID_INTERNAL_UNIPHY2:
+		id = ENCODER_ID_INTERNAL_UNIPHY2;
+		break;
+	case ENCODER_OBJECT_ID_ALMOND: /* ENCODER_OBJECT_ID_NUTMEG */
+		id = ENCODER_ID_EXTERNAL_NUTMEG;
+		break;
+	case ENCODER_OBJECT_ID_TRAVIS:
+		id = ENCODER_ID_EXTERNAL_TRAVIS;
+		break;
+	case ENCODER_OBJECT_ID_INTERNAL_UNIPHY3:
+		id = ENCODER_ID_INTERNAL_UNIPHY3;
+		break;
+	default:
+		id = ENCODER_ID_UNKNOWN;
+		ASSERT(0);
+		break;
+	}
+
+	return id;
+}
+
+static enum connector_id connector_id_from_bios_object_id(
+	uint32_t bios_object_id)
+{
+	uint32_t bios_connector_id = gpu_id_from_bios_object_id(bios_object_id);
+
+	enum connector_id id;
+
+	switch (bios_connector_id) {
+	case CONNECTOR_OBJECT_ID_SINGLE_LINK_DVI_I:
+		id = CONNECTOR_ID_SINGLE_LINK_DVII;
+		break;
+	case CONNECTOR_OBJECT_ID_DUAL_LINK_DVI_I:
+		id = CONNECTOR_ID_DUAL_LINK_DVII;
+		break;
+	case CONNECTOR_OBJECT_ID_SINGLE_LINK_DVI_D:
+		id = CONNECTOR_ID_SINGLE_LINK_DVID;
+		break;
+	case CONNECTOR_OBJECT_ID_DUAL_LINK_DVI_D:
+		id = CONNECTOR_ID_DUAL_LINK_DVID;
+		break;
+	case CONNECTOR_OBJECT_ID_VGA:
+		id = CONNECTOR_ID_VGA;
+		break;
+	case CONNECTOR_OBJECT_ID_HDMI_TYPE_A:
+		id = CONNECTOR_ID_HDMI_TYPE_A;
+		break;
+	case CONNECTOR_OBJECT_ID_LVDS:
+		id = CONNECTOR_ID_LVDS;
+		break;
+	case CONNECTOR_OBJECT_ID_PCIE_CONNECTOR:
+		id = CONNECTOR_ID_PCIE;
+		break;
+	case CONNECTOR_OBJECT_ID_HARDCODE_DVI:
+		id = CONNECTOR_ID_HARDCODE_DVI;
+		break;
+	case CONNECTOR_OBJECT_ID_DISPLAYPORT:
+		id = CONNECTOR_ID_DISPLAY_PORT;
+		break;
+	case CONNECTOR_OBJECT_ID_eDP:
+		id = CONNECTOR_ID_EDP;
+		break;
+	case CONNECTOR_OBJECT_ID_MXM:
+		id = CONNECTOR_ID_MXM;
+		break;
+	default:
+		id = CONNECTOR_ID_UNKNOWN;
+		break;
+	}
+
+	return id;
+}
+
+static enum generic_id generic_id_from_bios_object_id(uint32_t bios_object_id)
+{
+	uint32_t bios_generic_id = gpu_id_from_bios_object_id(bios_object_id);
+
+	enum generic_id id;
+
+	switch (bios_generic_id) {
+	case GENERIC_OBJECT_ID_MXM_OPM:
+		id = GENERIC_ID_MXM_OPM;
+		break;
+	case GENERIC_OBJECT_ID_GLSYNC:
+		id = GENERIC_ID_GLSYNC;
+		break;
+	case GENERIC_OBJECT_ID_STEREO_PIN:
+		id = GENERIC_ID_STEREO;
+		break;
+	default:
+		id = GENERIC_ID_UNKNOWN;
+		break;
+	}
+
+	return id;
+}
+
+static uint8_t bios_parser_get_connectors_number(struct dc_bios *dcb)
+{
+	struct bios_parser *bp = BP_FROM_DCB(dcb);
+	unsigned int count = 0;
+	unsigned int i;
+
+	for (i = 0; i < bp->object_info_tbl.v1_4->number_of_path; i++) {
+		if (bp->object_info_tbl.v1_4->display_path[i].encoderobjid != 0
+				&&
+		bp->object_info_tbl.v1_4->display_path[i].display_objid != 0)
+			count++;
+	}
+	return count;
+}
+
+static struct graphics_object_id bios_parser_get_encoder_id(
+	struct dc_bios *dcb,
+	uint32_t i)
+{
+	struct bios_parser *bp = BP_FROM_DCB(dcb);
+	struct graphics_object_id object_id = dal_graphics_object_id_init(
+		0, ENUM_ID_UNKNOWN, OBJECT_TYPE_UNKNOWN);
+
+	if (bp->object_info_tbl.v1_4->number_of_path > i)
+		object_id = object_id_from_bios_object_id(
+		bp->object_info_tbl.v1_4->display_path[i].encoderobjid);
+
+	return object_id;
+}
+
+static struct graphics_object_id bios_parser_get_connector_id(
+	struct dc_bios *dcb,
+	uint8_t i)
+{
+	struct bios_parser *bp = BP_FROM_DCB(dcb);
+	struct graphics_object_id object_id = dal_graphics_object_id_init(
+		0, ENUM_ID_UNKNOWN, OBJECT_TYPE_UNKNOWN);
+	struct object_info_table *tbl = &bp->object_info_tbl;
+	struct display_object_info_table_v1_4 *v1_4 = tbl->v1_4;
+
+	if (v1_4->number_of_path > i) {
+		/* If display_objid is a generic object id, encoderobjid
+		 * and extencoderobjid should be 0
+		 */
+		if (v1_4->display_path[i].encoderobjid != 0 &&
+				v1_4->display_path[i].display_objid != 0)
+			object_id = object_id_from_bios_object_id(
+					v1_4->display_path[i].display_objid);
+	}
+
+	return object_id;
+}
+
+/* TODO: GetNumberOfSrc */
+
+static uint32_t bios_parser_get_dst_number(struct dc_bios *dcb,
+	struct graphics_object_id id)
+{
+	/* connector has 1 Dest, encoder has 0 Dest */
+	switch (id.type) {
+	case OBJECT_TYPE_ENCODER:
+		return 0;
+	case OBJECT_TYPE_CONNECTOR:
+		return 1;
+	default:
+		return 0;
+	}
+}
+
+/* removed getSrcObjList, getDestObjList */
+
+static enum bp_result bios_parser_get_src_obj(struct dc_bios *dcb,
+	struct graphics_object_id object_id, uint32_t index,
+	struct graphics_object_id *src_object_id)
+{
+	struct bios_parser *bp = BP_FROM_DCB(dcb);
+	unsigned int i;
+	enum bp_result  bp_result = BP_RESULT_BADINPUT;
+	struct graphics_object_id obj_id = {0};
+	struct object_info_table *tbl = &bp->object_info_tbl;
+
+	if (!src_object_id)
+		return bp_result;
+
+	switch (object_id.type) {
+	/* An encoder's source is the GPU.  The BIOS does not provide a GPU
+	 * object, since all display paths point to the same GPU (0x1100).
+	 * Hardcode the GPU object type.
+	 */
+	case OBJECT_TYPE_ENCODER:
+		/* TODO: the number of sources must be less than 2.
+		 * Once a match is found in the loop below, we should break.
+		 * The DAL2 implementation may need to change too.
+		 */
+		for (i = 0; i < tbl->v1_4->number_of_path; i++) {
+			obj_id = object_id_from_bios_object_id(
+			tbl->v1_4->display_path[i].encoderobjid);
+			if (object_id.type == obj_id.type &&
+					object_id.id == obj_id.id &&
+						object_id.enum_id ==
+							obj_id.enum_id) {
+				*src_object_id =
+				object_id_from_bios_object_id(0x1100);
+				/* break; */
+			}
+		}
+		bp_result = BP_RESULT_OK;
+		break;
+	case OBJECT_TYPE_CONNECTOR:
+		for (i = 0; i < tbl->v1_4->number_of_path; i++) {
+			obj_id = object_id_from_bios_object_id(
+				tbl->v1_4->display_path[i].display_objid);
+
+			if (object_id.type == obj_id.type &&
+				object_id.id == obj_id.id &&
+					object_id.enum_id == obj_id.enum_id) {
+				*src_object_id =
+				object_id_from_bios_object_id(
+				tbl->v1_4->display_path[i].encoderobjid);
+				/* break; */
+			}
+		}
+		bp_result = BP_RESULT_OK;
+		break;
+	default:
+		break;
+	}
+
+	return bp_result;
+}
+
+static enum bp_result bios_parser_get_dst_obj(struct dc_bios *dcb,
+	struct graphics_object_id object_id, uint32_t index,
+	struct graphics_object_id *dest_object_id)
+{
+	struct bios_parser *bp = BP_FROM_DCB(dcb);
+	unsigned int i;
+	enum bp_result  bp_result = BP_RESULT_BADINPUT;
+	struct graphics_object_id obj_id = {0};
+	struct object_info_table *tbl = &bp->object_info_tbl;
+
+	if (!dest_object_id)
+		return BP_RESULT_BADINPUT;
+
+	switch (object_id.type) {
+	case OBJECT_TYPE_ENCODER:
+		/* TODO: the number of sources must be less than 2.
+		 * Once a match is found in the loop below, we should break.
+		 * The DAL2 implementation may need to change too.
+		 */
+		for (i = 0; i < tbl->v1_4->number_of_path; i++) {
+			obj_id = object_id_from_bios_object_id(
+				tbl->v1_4->display_path[i].encoderobjid);
+			if (object_id.type == obj_id.type &&
+					object_id.id == obj_id.id &&
+						object_id.enum_id ==
+							obj_id.enum_id) {
+				*dest_object_id =
+					object_id_from_bios_object_id(
+				tbl->v1_4->display_path[i].display_objid);
+				/* break; */
+			}
+		}
+		bp_result = BP_RESULT_OK;
+		break;
+	default:
+		break;
+	}
+
+	return bp_result;
+}
+
+
+/* from graphics_object_id, find display path which includes the object_id */
+static struct atom_display_object_path_v2 *get_bios_object(
+	struct bios_parser *bp,
+	struct graphics_object_id id)
+{
+	unsigned int i;
+	struct graphics_object_id obj_id = {0};
+
+	switch (id.type) {
+	case OBJECT_TYPE_ENCODER:
+		for (i = 0; i < bp->object_info_tbl.v1_4->number_of_path; i++) {
+			obj_id = object_id_from_bios_object_id(
+			bp->object_info_tbl.v1_4->display_path[i].encoderobjid);
+			if (id.type == obj_id.type &&
+					id.id == obj_id.id &&
+						id.enum_id == obj_id.enum_id)
+				return
+				&bp->object_info_tbl.v1_4->display_path[i];
+		}
+	case OBJECT_TYPE_CONNECTOR:
+	case OBJECT_TYPE_GENERIC:
+		/* Both generic and connector object IDs
+		 * are stored in display_objid
+		 */
+		for (i = 0; i < bp->object_info_tbl.v1_4->number_of_path; i++) {
+			obj_id = object_id_from_bios_object_id(
+			bp->object_info_tbl.v1_4->display_path[i].display_objid
+			);
+			if (id.type == obj_id.type &&
+					id.id == obj_id.id &&
+						id.enum_id == obj_id.enum_id)
+				return
+				&bp->object_info_tbl.v1_4->display_path[i];
+		}
+	default:
+		return NULL;
+	}
+}
+
+static enum bp_result bios_parser_get_i2c_info(struct dc_bios *dcb,
+	struct graphics_object_id id,
+	struct graphics_object_i2c_info *info)
+{
+	uint32_t offset;
+	struct atom_display_object_path_v2 *object;
+	struct atom_common_record_header *header;
+	struct atom_i2c_record *record;
+	struct bios_parser *bp = BP_FROM_DCB(dcb);
+
+	if (!info)
+		return BP_RESULT_BADINPUT;
+
+	object = get_bios_object(bp, id);
+
+	if (!object)
+		return BP_RESULT_BADINPUT;
+
+	offset = object->disp_recordoffset + bp->object_info_tbl_offset;
+
+	for (;;) {
+		header = GET_IMAGE(struct atom_common_record_header, offset);
+
+		if (!header)
+			return BP_RESULT_BADBIOSTABLE;
+
+		if (header->record_type == LAST_RECORD_TYPE ||
+			!header->record_size)
+			break;
+
+		if (header->record_type == ATOM_I2C_RECORD_TYPE
+			&& sizeof(struct atom_i2c_record) <=
+							header->record_size) {
+			/* get the I2C info */
+			record = (struct atom_i2c_record *) header;
+
+			if (get_gpio_i2c_info(bp, record, info) ==
+								BP_RESULT_OK)
+				return BP_RESULT_OK;
+		}
+
+		offset += header->record_size;
+	}
+
+	return BP_RESULT_NORECORD;
+}
+
+static enum bp_result get_gpio_i2c_info(
+	struct bios_parser *bp,
+	struct atom_i2c_record *record,
+	struct graphics_object_i2c_info *info)
+{
+	struct atom_gpio_pin_lut_v2_1 *header;
+	uint32_t count = 0;
+	unsigned int table_index = 0;
+
+	if (!info)
+		return BP_RESULT_BADINPUT;
+
+	/* get the GPIO_I2C info */
+	if (!DATA_TABLES(gpio_pin_lut))
+		return BP_RESULT_BADBIOSTABLE;
+
+	header = GET_IMAGE(struct atom_gpio_pin_lut_v2_1,
+					DATA_TABLES(gpio_pin_lut));
+	if (!header)
+		return BP_RESULT_BADBIOSTABLE;
+
+	if (sizeof(struct atom_common_table_header) +
+			sizeof(struct atom_gpio_pin_assignment)	>
+			le16_to_cpu(header->table_header.structuresize))
+		return BP_RESULT_BADBIOSTABLE;
+
+	/* TODO: what to do when the table version changes? */
+	if (header->table_header.content_revision != 1)
+		return BP_RESULT_UNSUPPORTED;
+
+	/* get data count */
+	count = (le16_to_cpu(header->table_header.structuresize)
+			- sizeof(struct atom_common_table_header))
+				/ sizeof(struct atom_gpio_pin_assignment);
+
+	table_index = record->i2c_id  & I2C_HW_LANE_MUX;
+
+	if (table_index >= count) {
+		bool find_valid = false;
+
+		for (table_index = 0; table_index < count; table_index++) {
+			if (((record->i2c_id & I2C_HW_CAP) == (
+			header->gpio_pin[table_index].gpio_id &
+							I2C_HW_CAP)) &&
+			((record->i2c_id & I2C_HW_ENGINE_ID_MASK)  ==
+			(header->gpio_pin[table_index].gpio_id &
+						I2C_HW_ENGINE_ID_MASK)) &&
+			((record->i2c_id & I2C_HW_LANE_MUX) ==
+			(header->gpio_pin[table_index].gpio_id &
+							I2C_HW_LANE_MUX))) {
+				/* still valid */
+				find_valid = true;
+				break;
+			}
+		}
+		/* If we don't find the entry that we are looking for then
+		 * we will return BP_RESULT_BADBIOSTABLE.
+		 */
+		if (find_valid == false)
+			return BP_RESULT_BADBIOSTABLE;
+	}
+
+	/* get the GPIO_I2C_INFO */
+	info->i2c_hw_assist = (record->i2c_id & I2C_HW_CAP) ? true : false;
+	info->i2c_line = record->i2c_id & I2C_HW_LANE_MUX;
+	info->i2c_engine_id = (record->i2c_id & I2C_HW_ENGINE_ID_MASK) >> 4;
+	info->i2c_slave_address = record->i2c_slave_addr;
+
+	/* TODO: check how to get register offset for en, Y, etc. */
+	info->gpio_info.clk_a_register_index =
+			le16_to_cpu(
+			header->gpio_pin[table_index].data_a_reg_index);
+	info->gpio_info.clk_a_shift =
+			header->gpio_pin[table_index].gpio_bitshift;
+
+	return BP_RESULT_OK;
+}
+
+static enum bp_result get_voltage_ddc_info_v4(
+	uint8_t *i2c_line,
+	uint32_t index,
+	struct atom_common_table_header *header,
+	uint8_t *address)
+{
+	enum bp_result result = BP_RESULT_NORECORD;
+	struct atom_voltage_objects_info_v4_1 *info =
+		(struct atom_voltage_objects_info_v4_1 *) address;
+
+	uint8_t *voltage_current_object =
+		(uint8_t *) (&(info->voltage_object[0]));
+
+	while ((address + le16_to_cpu(header->structuresize)) >
+						voltage_current_object) {
+		struct atom_i2c_voltage_object_v4 *object =
+			(struct atom_i2c_voltage_object_v4 *)
+						voltage_current_object;
+
+		if (object->header.voltage_mode ==
+			ATOM_INIT_VOLTAGE_REGULATOR) {
+			if (object->header.voltage_type == index) {
+				*i2c_line = object->i2c_id ^ 0x90;
+				result = BP_RESULT_OK;
+				break;
+			}
+		}
+
+		voltage_current_object +=
+				le16_to_cpu(object->header.object_size);
+	}
+	return result;
+}
+
+static enum bp_result bios_parser_get_thermal_ddc_info(
+	struct dc_bios *dcb,
+	uint32_t i2c_channel_id,
+	struct graphics_object_i2c_info *info)
+{
+	struct bios_parser *bp = BP_FROM_DCB(dcb);
+	struct i2c_id_config_access *config;
+	struct atom_i2c_record record;
+
+	if (!info)
+		return BP_RESULT_BADINPUT;
+
+	config = (struct i2c_id_config_access *) &i2c_channel_id;
+
+	record.i2c_id = config->bfHW_Capable;
+	record.i2c_id |= config->bfI2C_LineMux;
+	record.i2c_id |= config->bfHW_EngineID;
+
+	return get_gpio_i2c_info(bp, &record, info);
+}
+
+static enum bp_result bios_parser_get_voltage_ddc_info(struct dc_bios *dcb,
+	uint32_t index,
+	struct graphics_object_i2c_info *info)
+{
+	uint8_t i2c_line = 0;
+	enum bp_result result = BP_RESULT_NORECORD;
+	uint8_t *voltage_info_address;
+	struct atom_common_table_header *header;
+	struct atom_data_revision revision = {0};
+	struct bios_parser *bp = BP_FROM_DCB(dcb);
+
+	if (!DATA_TABLES(voltageobject_info))
+		return result;
+
+	voltage_info_address = get_image(&bp->base,
+			DATA_TABLES(voltageobject_info),
+			sizeof(struct atom_common_table_header));
+
+	header = (struct atom_common_table_header *) voltage_info_address;
+
+	get_atom_data_table_revision(header, &revision);
+
+	switch (revision.major) {
+	case 4:
+		if (revision.minor != 1)
+			break;
+		result = get_voltage_ddc_info_v4(&i2c_line, index, header,
+			voltage_info_address);
+		break;
+	}
+
+	if (result == BP_RESULT_OK)
+		result = bios_parser_get_thermal_ddc_info(dcb,
+			i2c_line, info);
+
+	return result;
+}
+
+static enum bp_result bios_parser_get_hpd_info(
+	struct dc_bios *dcb,
+	struct graphics_object_id id,
+	struct graphics_object_hpd_info *info)
+{
+	struct bios_parser *bp = BP_FROM_DCB(dcb);
+	struct atom_display_object_path_v2 *object;
+	struct atom_hpd_int_record *record = NULL;
+
+	if (!info)
+		return BP_RESULT_BADINPUT;
+
+	object = get_bios_object(bp, id);
+
+	if (!object)
+		return BP_RESULT_BADINPUT;
+
+	record = get_hpd_record(bp, object);
+
+	if (record != NULL) {
+		info->hpd_int_gpio_uid = record->pin_id;
+		info->hpd_active = record->plugin_pin_state;
+		return BP_RESULT_OK;
+	}
+
+	return BP_RESULT_NORECORD;
+}
+
+static struct atom_hpd_int_record *get_hpd_record(
+	struct bios_parser *bp,
+	struct atom_display_object_path_v2 *object)
+{
+	struct atom_common_record_header *header;
+	uint32_t offset;
+
+	if (!object) {
+		BREAK_TO_DEBUGGER(); /* Invalid object */
+		return NULL;
+	}
+
+	offset = le16_to_cpu(object->disp_recordoffset)
+			+ bp->object_info_tbl_offset;
+
+	for (;;) {
+		header = GET_IMAGE(struct atom_common_record_header, offset);
+
+		if (!header)
+			return NULL;
+
+		if (header->record_type == LAST_RECORD_TYPE ||
+			!header->record_size)
+			break;
+
+		if (header->record_type == ATOM_HPD_INT_RECORD_TYPE
+			&& sizeof(struct atom_hpd_int_record) <=
+							header->record_size)
+			return (struct atom_hpd_int_record *) header;
+
+		offset += header->record_size;
+	}
+
+	return NULL;
+}
+
+/**
+ * bios_parser_get_gpio_pin_info
+ * Get GpioPin information of input gpio id
+ *
+ * @param gpio_id, GPIO ID
+ * @param info, GpioPin information structure
+ * @return Bios parser result code
+ * @note
+ *  to get the GPIO PIN INFO, we need:
+ *  1. get the GPIO_ID from another object table, see GetHPDInfo()
+ *  2. in DATA_TABLE.GPIO_Pin_LUT, search all records
+ *	to get the register A offset/mask
+ */
+static enum bp_result bios_parser_get_gpio_pin_info(
+	struct dc_bios *dcb,
+	uint32_t gpio_id,
+	struct gpio_pin_info *info)
+{
+	struct bios_parser *bp = BP_FROM_DCB(dcb);
+	struct atom_gpio_pin_lut_v2_1 *header;
+	uint32_t count = 0;
+	uint32_t i = 0;
+
+	if (!DATA_TABLES(gpio_pin_lut))
+		return BP_RESULT_BADBIOSTABLE;
+
+	header = GET_IMAGE(struct atom_gpio_pin_lut_v2_1,
+						DATA_TABLES(gpio_pin_lut));
+	if (!header)
+		return BP_RESULT_BADBIOSTABLE;
+
+	if (sizeof(struct atom_common_table_header) +
+			sizeof(struct atom_gpio_pin_lut_v2_1)
+			> le16_to_cpu(header->table_header.structuresize))
+		return BP_RESULT_BADBIOSTABLE;
+
+	if (header->table_header.content_revision != 1)
+		return BP_RESULT_UNSUPPORTED;
+
+	/* Temporarily hard-code gpio pin info */
+#if defined(FOR_SIMNOW_BOOT)
+	{
+		struct  atom_gpio_pin_assignment  gpio_pin[8] = {
+				{0x5db5, 0, 0, 1, 0},
+				{0x5db5, 8, 8, 2, 0},
+				{0x5db5, 0x10, 0x10, 3, 0},
+				{0x5db5, 0x18, 0x14, 4, 0},
+				{0x5db5, 0x1A, 0x18, 5, 0},
+				{0x5db5, 0x1C, 0x1C, 6, 0},
+		};
+
+		count = 6;
+		memmove(header->gpio_pin, gpio_pin, sizeof(gpio_pin));
+	}
+#else
+	count = (le16_to_cpu(header->table_header.structuresize)
+			- sizeof(struct atom_common_table_header))
+				/ sizeof(struct atom_gpio_pin_assignment);
+#endif
+	for (i = 0; i < count; ++i) {
+		if (header->gpio_pin[i].gpio_id != gpio_id)
+			continue;
+
+		info->offset =
+			(uint32_t) le16_to_cpu(
+					header->gpio_pin[i].data_a_reg_index);
+		info->offset_y = info->offset + 2;
+		info->offset_en = info->offset + 1;
+		info->offset_mask = info->offset - 1;
+
+		info->mask = (uint32_t) (1 <<
+			header->gpio_pin[i].gpio_bitshift);
+		info->mask_y = info->mask + 2;
+		info->mask_en = info->mask + 1;
+		info->mask_mask = info->mask - 1;
+
+		return BP_RESULT_OK;
+	}
+
+	return BP_RESULT_NORECORD;
+}
+
+static struct device_id device_type_from_device_id(uint16_t device_id)
+{
+	struct device_id result_device_id;
+
+	switch (device_id) {
+	case ATOM_DISPLAY_LCD1_SUPPORT:
+		result_device_id.device_type = DEVICE_TYPE_LCD;
+		result_device_id.enum_id = 1;
+		break;
+
+	case ATOM_DISPLAY_DFP1_SUPPORT:
+		result_device_id.device_type = DEVICE_TYPE_DFP;
+		result_device_id.enum_id = 1;
+		break;
+
+	case ATOM_DISPLAY_DFP2_SUPPORT:
+		result_device_id.device_type = DEVICE_TYPE_DFP;
+		result_device_id.enum_id = 2;
+		break;
+
+	case ATOM_DISPLAY_DFP3_SUPPORT:
+		result_device_id.device_type = DEVICE_TYPE_DFP;
+		result_device_id.enum_id = 3;
+		break;
+
+	case ATOM_DISPLAY_DFP4_SUPPORT:
+		result_device_id.device_type = DEVICE_TYPE_DFP;
+		result_device_id.enum_id = 4;
+		break;
+
+	case ATOM_DISPLAY_DFP5_SUPPORT:
+		result_device_id.device_type = DEVICE_TYPE_DFP;
+		result_device_id.enum_id = 5;
+		break;
+
+	case ATOM_DISPLAY_DFP6_SUPPORT:
+		result_device_id.device_type = DEVICE_TYPE_DFP;
+		result_device_id.enum_id = 6;
+		break;
+
+	default:
+		BREAK_TO_DEBUGGER(); /* Invalid device Id */
+		result_device_id.device_type = DEVICE_TYPE_UNKNOWN;
+		result_device_id.enum_id = 0;
+	}
+	return result_device_id;
+}
+
+static enum bp_result bios_parser_get_device_tag(
+	struct dc_bios *dcb,
+	struct graphics_object_id connector_object_id,
+	uint32_t device_tag_index,
+	struct connector_device_tag_info *info)
+{
+	struct bios_parser *bp = BP_FROM_DCB(dcb);
+	struct atom_display_object_path_v2 *object;
+
+	if (!info)
+		return BP_RESULT_BADINPUT;
+
+	/* getBiosObject will return MXM object */
+	object = get_bios_object(bp, connector_object_id);
+
+	if (!object) {
+		BREAK_TO_DEBUGGER(); /* Invalid object id */
+		return BP_RESULT_BADINPUT;
+	}
+
+	info->acpi_device = 0; /* BIOS no longer provides this */
+	info->dev_id = device_type_from_device_id(object->device_tag);
+
+	return BP_RESULT_OK;
+}
+
+static enum bp_result get_ss_info_v4_1(
+	struct bios_parser *bp,
+	uint32_t id,
+	uint32_t index,
+	struct spread_spectrum_info *ss_info)
+{
+	enum bp_result result = BP_RESULT_OK;
+	struct atom_display_controller_info_v4_1 *disp_cntl_tbl = NULL;
+	struct atom_smu_info_v3_1 *smu_tbl = NULL;
+
+	if (!ss_info)
+		return BP_RESULT_BADINPUT;
+
+	if (!DATA_TABLES(dce_info))
+		return BP_RESULT_BADBIOSTABLE;
+
+	if (!DATA_TABLES(smu_info))
+		return BP_RESULT_BADBIOSTABLE;
+
+	disp_cntl_tbl =  GET_IMAGE(struct atom_display_controller_info_v4_1,
+							DATA_TABLES(dce_info));
+	if (!disp_cntl_tbl)
+		return BP_RESULT_BADBIOSTABLE;
+
+	smu_tbl =  GET_IMAGE(struct atom_smu_info_v3_1, DATA_TABLES(smu_info));
+	if (!smu_tbl)
+		return BP_RESULT_BADBIOSTABLE;
+
+	ss_info->type.STEP_AND_DELAY_INFO = false;
+	ss_info->spread_percentage_divider = 1000;
+	/* BIOS no longer uses target clock.  Always enable for now */
+	ss_info->target_clock_range = 0xffffffff;
+
+	switch (id) {
+	case AS_SIGNAL_TYPE_DVI:
+		ss_info->spread_spectrum_percentage =
+				disp_cntl_tbl->dvi_ss_percentage;
+		ss_info->spread_spectrum_range =
+				disp_cntl_tbl->dvi_ss_rate_10hz * 10;
+		if (disp_cntl_tbl->dvi_ss_mode & ATOM_SS_CENTRE_SPREAD_MODE)
+			ss_info->type.CENTER_MODE = true;
+		break;
+	case AS_SIGNAL_TYPE_HDMI:
+		ss_info->spread_spectrum_percentage =
+				disp_cntl_tbl->hdmi_ss_percentage;
+		ss_info->spread_spectrum_range =
+				disp_cntl_tbl->hdmi_ss_rate_10hz * 10;
+		if (disp_cntl_tbl->hdmi_ss_mode & ATOM_SS_CENTRE_SPREAD_MODE)
+			ss_info->type.CENTER_MODE = true;
+		break;
+	/* TODO: LVDS no longer supported? */
+	case AS_SIGNAL_TYPE_DISPLAY_PORT:
+		ss_info->spread_spectrum_percentage =
+				disp_cntl_tbl->dp_ss_percentage;
+		ss_info->spread_spectrum_range =
+				disp_cntl_tbl->dp_ss_rate_10hz * 10;
+		if (disp_cntl_tbl->dp_ss_mode & ATOM_SS_CENTRE_SPREAD_MODE)
+			ss_info->type.CENTER_MODE = true;
+		break;
+	case AS_SIGNAL_TYPE_GPU_PLL:
+		ss_info->spread_spectrum_percentage =
+				smu_tbl->gpuclk_ss_percentage;
+		ss_info->spread_spectrum_range =
+				smu_tbl->gpuclk_ss_rate_10hz * 10;
+		if (smu_tbl->gpuclk_ss_mode & ATOM_SS_CENTRE_SPREAD_MODE)
+			ss_info->type.CENTER_MODE = true;
+		break;
+	default:
+		result = BP_RESULT_UNSUPPORTED;
+	}
+
+	return result;
+}
+
+/**
+ * bios_parser_get_spread_spectrum_info
+ * Get spread spectrum information from the ASIC_InternalSS_Info (ver 2.1 or
+ * ver 3.1) or SS_Info table from the VBIOS.  Currently ASIC_InternalSS_Info
+ * ver 2.1 can co-exist with the SS_Info table.  Except for
+ * ASIC_InternalSS_Info ver 3.1, there is only one entry for each signal/ss
+ * id.  However, there is no plan to support multiple spread spectrum
+ * entries for EverGreen.
+ * @param [in] this
+ * @param [in] signal, ASSignalType to be converted to an info index
+ * @param [in] index, number of entries that match the converted info index
+ * @param [out] ss_info, spread spectrum information structure
+ * @return Bios parser result code
+ */
+static enum bp_result bios_parser_get_spread_spectrum_info(
+	struct dc_bios *dcb,
+	enum as_signal_type signal,
+	uint32_t index,
+	struct spread_spectrum_info *ss_info)
+{
+	struct bios_parser *bp = BP_FROM_DCB(dcb);
+	enum bp_result result = BP_RESULT_UNSUPPORTED;
+	struct atom_common_table_header *header;
+	struct atom_data_revision tbl_revision;
+
+	if (!ss_info) /* check for bad input */
+		return BP_RESULT_BADINPUT;
+
+	if (!DATA_TABLES(dce_info))
+		return BP_RESULT_UNSUPPORTED;
+
+	header = GET_IMAGE(struct atom_common_table_header,
+						DATA_TABLES(dce_info));
+	get_atom_data_table_revision(header, &tbl_revision);
+
+	switch (tbl_revision.major) {
+	case 4:
+		switch (tbl_revision.minor) {
+		case 1:
+			return get_ss_info_v4_1(bp, signal, index, ss_info);
+		default:
+			break;
+		}
+		break;
+	default:
+		break;
+	}
+	/* there cannot be more than one entry in the SS Info table */
+	return result;
+}
+
+static enum bp_result get_embedded_panel_info_v2_1(
+	struct bios_parser *bp,
+	struct embedded_panel_info *info)
+{
+	struct lcd_info_v2_1 *lvds;
+
+	if (!info)
+		return BP_RESULT_BADINPUT;
+
+	if (!DATA_TABLES(lcd_info))
+		return BP_RESULT_UNSUPPORTED;
+
+	lvds = GET_IMAGE(struct lcd_info_v2_1, DATA_TABLES(lcd_info));
+
+	if (!lvds)
+		return BP_RESULT_BADBIOSTABLE;
+
+	/* TODO: previously v1_3, should be v2_1 */
+	if (!((lvds->table_header.format_revision == 2)
+			&& (lvds->table_header.content_revision >= 1)))
+		return BP_RESULT_UNSUPPORTED;
+
+	memset(info, 0, sizeof(struct embedded_panel_info));
+
+	/* We need to convert from 10KHz units into KHz units */
+	info->lcd_timing.pixel_clk =
+			le16_to_cpu(lvds->lcd_timing.pixclk) * 10;
+	/* usHActive does not include borders, according to VBIOS team */
+	info->lcd_timing.horizontal_addressable =
+			le16_to_cpu(lvds->lcd_timing.h_active);
+	/* usHBlanking_Time includes borders, so we should really be
+	 * subtracting borders during this translation, but LVDS generally
+	 * doesn't have borders, so we should be okay leaving this as is for
+	 * now.  May need to revisit if we ever have LVDS with borders
+	 */
+	info->lcd_timing.horizontal_blanking_time =
+		le16_to_cpu(lvds->lcd_timing.h_blanking_time);
+	/* usVActive does not include borders, according to VBIOS team*/
+	info->lcd_timing.vertical_addressable =
+		le16_to_cpu(lvds->lcd_timing.v_active);
+	/* usVBlanking_Time includes borders, so we should really be
+	 * subtracting borders during this translation, but LVDS generally
+	 * doesn't have borders, so we should be okay leaving this as is for
+	 * now. May need to revisit if we ever have LVDS with borders
+	 */
+	info->lcd_timing.vertical_blanking_time =
+		le16_to_cpu(lvds->lcd_timing.v_blanking_time);
+	info->lcd_timing.horizontal_sync_offset =
+		le16_to_cpu(lvds->lcd_timing.h_sync_offset);
+	info->lcd_timing.horizontal_sync_width =
+		le16_to_cpu(lvds->lcd_timing.h_sync_width);
+	info->lcd_timing.vertical_sync_offset =
+		le16_to_cpu(lvds->lcd_timing.v_sync_offset);
+	info->lcd_timing.vertical_sync_width =
+		le16_to_cpu(lvds->lcd_timing.v_syncwidth);
+	info->lcd_timing.horizontal_border = lvds->lcd_timing.h_border;
+	info->lcd_timing.vertical_border = lvds->lcd_timing.v_border;
+
+	/* not provided by VBIOS */
+	info->lcd_timing.misc_info.HORIZONTAL_CUT_OFF = 0;
+
+	info->lcd_timing.misc_info.H_SYNC_POLARITY =
+		~(uint32_t)
+		(lvds->lcd_timing.miscinfo & ATOM_HSYNC_POLARITY);
+	info->lcd_timing.misc_info.V_SYNC_POLARITY =
+		~(uint32_t)
+		(lvds->lcd_timing.miscinfo & ATOM_VSYNC_POLARITY);
+
+	/* not provided by VBIOS */
+	info->lcd_timing.misc_info.VERTICAL_CUT_OFF = 0;
+
+	info->lcd_timing.misc_info.H_REPLICATION_BY2 =
+		lvds->lcd_timing.miscinfo & ATOM_H_REPLICATIONBY2;
+	info->lcd_timing.misc_info.V_REPLICATION_BY2 =
+		lvds->lcd_timing.miscinfo & ATOM_V_REPLICATIONBY2;
+	info->lcd_timing.misc_info.COMPOSITE_SYNC =
+		lvds->lcd_timing.miscinfo & ATOM_COMPOSITESYNC;
+	info->lcd_timing.misc_info.INTERLACE =
+		lvds->lcd_timing.miscinfo & ATOM_INTERLACE;
+
+	/* not provided by VBIOS*/
+	info->lcd_timing.misc_info.DOUBLE_CLOCK = 0;
+	/* not provided by VBIOS*/
+	info->ss_id = 0;
+
+	info->realtek_eDPToLVDS =
+			(lvds->dplvdsrxid == eDP_TO_LVDS_REALTEK_ID ? 1 : 0);
+
+	return BP_RESULT_OK;
+}
+
+static enum bp_result bios_parser_get_embedded_panel_info(
+	struct dc_bios *dcb,
+	struct embedded_panel_info *info)
+{
+	struct bios_parser *bp = BP_FROM_DCB(dcb);
+	struct atom_common_table_header *header;
+	struct atom_data_revision tbl_revision;
+
+	if (!DATA_TABLES(lcd_info))
+		return BP_RESULT_FAILURE;
+
+	header = GET_IMAGE(struct atom_common_table_header,
+					DATA_TABLES(lcd_info));
+
+	if (!header)
+		return BP_RESULT_BADBIOSTABLE;
+
+	get_atom_data_table_revision(header, &tbl_revision);
+
+	switch (tbl_revision.major) {
+	case 2:
+		switch (tbl_revision.minor) {
+		case 1:
+			return get_embedded_panel_info_v2_1(bp, info);
+		default:
+			break;
+		}
+		break;
+	default:
+		break;
+	}
+
+	return BP_RESULT_FAILURE;
+}
+
+static uint32_t get_support_mask_for_device_id(struct device_id device_id)
+{
+	enum dal_device_type device_type = device_id.device_type;
+	uint32_t enum_id = device_id.enum_id;
+
+	switch (device_type) {
+	case DEVICE_TYPE_LCD:
+		switch (enum_id) {
+		case 1:
+			return ATOM_DISPLAY_LCD1_SUPPORT;
+		default:
+			break;
+		}
+		break;
+	case DEVICE_TYPE_DFP:
+		switch (enum_id) {
+		case 1:
+			return ATOM_DISPLAY_DFP1_SUPPORT;
+		case 2:
+			return ATOM_DISPLAY_DFP2_SUPPORT;
+		case 3:
+			return ATOM_DISPLAY_DFP3_SUPPORT;
+		case 4:
+			return ATOM_DISPLAY_DFP4_SUPPORT;
+		case 5:
+			return ATOM_DISPLAY_DFP5_SUPPORT;
+		case 6:
+			return ATOM_DISPLAY_DFP6_SUPPORT;
+		default:
+			break;
+		}
+		break;
+	default:
+		break;
+	}
+
+	/* Unidentified device ID, return empty support mask. */
+	return 0;
+}
+
+static bool bios_parser_is_device_id_supported(
+	struct dc_bios *dcb,
+	struct device_id id)
+{
+	struct bios_parser *bp = BP_FROM_DCB(dcb);
+
+	uint32_t mask = get_support_mask_for_device_id(id);
+
+	return (le16_to_cpu(bp->object_info_tbl.v1_4->supporteddevices) &
+								mask) != 0;
+}
+
+static void bios_parser_post_init(
+	struct dc_bios *dcb)
+{
+	/* TODO for OPM module. Need implement later */
+}
+
+static uint32_t bios_parser_get_ss_entry_number(
+	struct dc_bios *dcb,
+	enum as_signal_type signal)
+{
+	/* TODO: the DAL2 atomfirmware implementation does not need this.
+	 * Why does DAL3 need it?
+	 */
+	return 1;
+}
+
+static enum bp_result bios_parser_transmitter_control(
+	struct dc_bios *dcb,
+	struct bp_transmitter_control *cntl)
+{
+	struct bios_parser *bp = BP_FROM_DCB(dcb);
+
+	if (!bp->cmd_tbl.transmitter_control)
+		return BP_RESULT_FAILURE;
+
+	return bp->cmd_tbl.transmitter_control(bp, cntl);
+}
+
+static enum bp_result bios_parser_encoder_control(
+	struct dc_bios *dcb,
+	struct bp_encoder_control *cntl)
+{
+	struct bios_parser *bp = BP_FROM_DCB(dcb);
+
+	if (!bp->cmd_tbl.dig_encoder_control)
+		return BP_RESULT_FAILURE;
+
+	return bp->cmd_tbl.dig_encoder_control(bp, cntl);
+}
+
+static enum bp_result bios_parser_set_pixel_clock(
+	struct dc_bios *dcb,
+	struct bp_pixel_clock_parameters *bp_params)
+{
+	struct bios_parser *bp = BP_FROM_DCB(dcb);
+
+	if (!bp->cmd_tbl.set_pixel_clock)
+		return BP_RESULT_FAILURE;
+
+	return bp->cmd_tbl.set_pixel_clock(bp, bp_params);
+}
+
+static enum bp_result bios_parser_set_dce_clock(
+	struct dc_bios *dcb,
+	struct bp_set_dce_clock_parameters *bp_params)
+{
+	struct bios_parser *bp = BP_FROM_DCB(dcb);
+
+	if (!bp->cmd_tbl.set_dce_clock)
+		return BP_RESULT_FAILURE;
+
+	return bp->cmd_tbl.set_dce_clock(bp, bp_params);
+}
+
+static unsigned int bios_parser_get_smu_clock_info(
+	struct dc_bios *dcb)
+{
+	struct bios_parser *bp = BP_FROM_DCB(dcb);
+
+	if (!bp->cmd_tbl.get_smu_clock_info)
+		return BP_RESULT_FAILURE;
+
+	return bp->cmd_tbl.get_smu_clock_info(bp);
+}
+
+static enum bp_result bios_parser_program_crtc_timing(
+	struct dc_bios *dcb,
+	struct bp_hw_crtc_timing_parameters *bp_params)
+{
+	struct bios_parser *bp = BP_FROM_DCB(dcb);
+
+	if (!bp->cmd_tbl.set_crtc_timing)
+		return BP_RESULT_FAILURE;
+
+	return bp->cmd_tbl.set_crtc_timing(bp, bp_params);
+}
+
+static enum bp_result bios_parser_enable_crtc(
+	struct dc_bios *dcb,
+	enum controller_id id,
+	bool enable)
+{
+	struct bios_parser *bp = BP_FROM_DCB(dcb);
+
+	if (!bp->cmd_tbl.enable_crtc)
+		return BP_RESULT_FAILURE;
+
+	return bp->cmd_tbl.enable_crtc(bp, id, enable);
+}
+
+static enum bp_result bios_parser_crtc_source_select(
+	struct dc_bios *dcb,
+	struct bp_crtc_source_select *bp_params)
+{
+	struct bios_parser *bp = BP_FROM_DCB(dcb);
+
+	if (!bp->cmd_tbl.select_crtc_source)
+		return BP_RESULT_FAILURE;
+
+	return bp->cmd_tbl.select_crtc_source(bp, bp_params);
+}
+
+static enum bp_result bios_parser_enable_disp_power_gating(
+	struct dc_bios *dcb,
+	enum controller_id controller_id,
+	enum bp_pipe_control_action action)
+{
+	struct bios_parser *bp = BP_FROM_DCB(dcb);
+
+	if (!bp->cmd_tbl.enable_disp_power_gating)
+		return BP_RESULT_FAILURE;
+
+	return bp->cmd_tbl.enable_disp_power_gating(bp, controller_id,
+		action);
+}
+
+static bool bios_parser_is_accelerated_mode(
+	struct dc_bios *dcb)
+{
+	return bios_is_accelerated_mode(dcb);
+}
+
+/**
+ * bios_parser_set_scratch_critical_state
+ *
+ * @brief
+ *  update critical state bit in VBIOS scratch register
+ *
+ * @param
+ *  state - true to set the critical state, false to clear it
+ */
+static void bios_parser_set_scratch_critical_state(
+	struct dc_bios *dcb,
+	bool state)
+{
+	bios_set_scratch_critical_state(dcb, state);
+}
+
+static enum bp_result bios_parser_get_firmware_info(
+	struct dc_bios *dcb,
+	struct firmware_info *info)
+{
+	struct bios_parser *bp = BP_FROM_DCB(dcb);
+	enum bp_result result = BP_RESULT_BADBIOSTABLE;
+	struct atom_common_table_header *header;
+	struct atom_data_revision revision;
+
+	if (info && DATA_TABLES(firmwareinfo)) {
+		header = GET_IMAGE(struct atom_common_table_header,
+				DATA_TABLES(firmwareinfo));
+		get_atom_data_table_revision(header, &revision);
+		switch (revision.major) {
+		case 3:
+			switch (revision.minor) {
+			case 1:
+				result = get_firmware_info_v3_1(bp, info);
+				break;
+			default:
+				break;
+			}
+			break;
+		default:
+			break;
+		}
+	}
+
+	return result;
+}
+
+static enum bp_result get_firmware_info_v3_1(
+	struct bios_parser *bp,
+	struct firmware_info *info)
+{
+	struct atom_firmware_info_v3_1 *firmware_info;
+	struct atom_display_controller_info_v4_1 *dce_info = NULL;
+
+	if (!info)
+		return BP_RESULT_BADINPUT;
+
+	firmware_info = GET_IMAGE(struct atom_firmware_info_v3_1,
+			DATA_TABLES(firmwareinfo));
+
+	dce_info = GET_IMAGE(struct atom_display_controller_info_v4_1,
+			DATA_TABLES(dce_info));
+
+	if (!firmware_info || !dce_info)
+		return BP_RESULT_BADBIOSTABLE;
+
+	memset(info, 0, sizeof(*info));
+
+	/* Pixel clock pll information. */
+	/* We need to convert from 10KHz units into KHz units */
+	info->default_memory_clk = firmware_info->bootup_mclk_in10khz * 10;
+	info->default_engine_clk = firmware_info->bootup_sclk_in10khz * 10;
+
+	/* 27MHz for Vega10: */
+	info->pll_info.crystal_frequency = dce_info->dce_refclk_10khz * 10;
+
+	/* Hardcode frequency if BIOS gives no DCE Ref Clk */
+	if (info->pll_info.crystal_frequency == 0)
+		info->pll_info.crystal_frequency = 27000;
+
+	info->dp_phy_ref_clk     = dce_info->dpphy_refclk_10khz * 10;
+	info->i2c_engine_ref_clk = dce_info->i2c_engine_refclk_10khz * 10;
+
+	/* Get GPU PLL VCO Clock */
+
+	if (bp->cmd_tbl.get_smu_clock_info != NULL) {
+		/* VBIOS gives in 10KHz */
+		info->smu_gpu_pll_output_freq =
+				bp->cmd_tbl.get_smu_clock_info(bp) * 10;
+	}
+
+	return BP_RESULT_OK;
+}
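The conversions above are a recurring detail in this file: atomfirmware reports clocks in 10 kHz units, the parser works in kHz, and a missing DCE refclk falls back to the hardcoded 27 MHz Vega10 crystal. A sketch of just that rule (the `sk_` helper name is invented for illustration):

```c
#include <assert.h>
#include <stdint.h>

/* Convert a VBIOS clock in 10 kHz units to kHz, substituting the
 * 27 MHz crystal when the BIOS reports no DCE refclk at all,
 * mirroring the fallback in get_firmware_info_v3_1(). */
static uint32_t sk_crystal_freq_khz(uint32_t refclk_10khz)
{
	uint32_t khz = refclk_10khz * 10;

	return khz ? khz : 27000; /* hardcoded fallback, as in the parser */
}
```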
+
+static enum bp_result bios_parser_get_encoder_cap_info(
+	struct dc_bios *dcb,
+	struct graphics_object_id object_id,
+	struct bp_encoder_cap_info *info)
+{
+	struct bios_parser *bp = BP_FROM_DCB(dcb);
+	struct atom_display_object_path_v2 *object;
+	struct atom_encoder_caps_record *record = NULL;
+
+	if (!info)
+		return BP_RESULT_BADINPUT;
+
+	object = get_bios_object(bp, object_id);
+
+	if (!object)
+		return BP_RESULT_BADINPUT;
+
+	record = get_encoder_cap_record(bp, object);
+	if (!record)
+		return BP_RESULT_NORECORD;
+
+	info->DP_HBR2_CAP = (record->encodercaps &
+			ATOM_ENCODER_CAP_RECORD_HBR2) ? 1 : 0;
+	info->DP_HBR2_EN = (record->encodercaps &
+			ATOM_ENCODER_CAP_RECORD_HBR2_EN) ? 1 : 0;
+	info->DP_HBR3_EN = (record->encodercaps &
+			ATOM_ENCODER_CAP_RECORD_HBR3_EN) ? 1 : 0;
+	info->HDMI_6GB_EN = (record->encodercaps &
+			ATOM_ENCODER_CAP_RECORD_HDMI6Gbps_EN) ? 1 : 0;
+
+	return BP_RESULT_OK;
+}
+
+static struct atom_encoder_caps_record *get_encoder_cap_record(
+	struct bios_parser *bp,
+	struct atom_display_object_path_v2 *object)
+{
+	struct atom_common_record_header *header;
+	uint32_t offset;
+
+	if (!object) {
+		BREAK_TO_DEBUGGER(); /* Invalid object */
+		return NULL;
+	}
+
+	offset = object->encoder_recordoffset + bp->object_info_tbl_offset;
+
+	for (;;) {
+		header = GET_IMAGE(struct atom_common_record_header, offset);
+
+		if (!header)
+			return NULL;
+
+		offset += header->record_size;
+
+		if (header->record_type == LAST_RECORD_TYPE ||
+				!header->record_size)
+			break;
+
+		if (header->record_type != ATOM_ENCODER_CAP_RECORD_TYPE)
+			continue;
+
+		if (sizeof(struct atom_encoder_caps_record) <=
+							header->record_size)
+			return (struct atom_encoder_caps_record *)header;
+	}
+
+	return NULL;
+}
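The loop above walks a packed list of variable-size records: each record starts with a common header carrying a type and a size, and the list ends at a sentinel type or a zero size. A standalone sketch of that traversal over a flat byte buffer (all `sk_` names and type values are invented; the driver additionally checks that the record is large enough for `atom_encoder_caps_record` before using it, which the sketch omits for brevity):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define SK_LAST_RECORD	0xff	/* plays the role of LAST_RECORD_TYPE */
#define SK_CAPS_RECORD	0x02	/* plays the role of the caps record type */

/* Common header at the start of every record; size includes the header. */
struct sk_record_header {
	uint8_t record_type;
	uint8_t record_size;
};

/* A sample stream: one 4-byte record, the record we want, a terminator. */
static const uint8_t sk_sample[] = {
	0x01, 4, 0, 0,
	SK_CAPS_RECORD, 2,
	SK_LAST_RECORD, 2,
};

static const struct sk_record_header *sk_find_record(const uint8_t *buf,
						     size_t len, uint8_t type)
{
	size_t offset = 0;

	while (offset + sizeof(struct sk_record_header) <= len) {
		const struct sk_record_header *hdr =
			(const struct sk_record_header *)(buf + offset);

		/* Terminator or corrupt zero-size record ends the walk. */
		if (hdr->record_type == SK_LAST_RECORD || !hdr->record_size)
			break;

		if (hdr->record_type == type)
			return hdr;

		offset += hdr->record_size;
	}
	return NULL;
}
```

Checking `record_size` for zero before advancing is what keeps a corrupt table from spinning the loop forever, the same guard the driver's loop applies.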
+
+/*
+ * get_integrated_info_v11
+ *
+ * @brief
+ * Get integrated BIOS information for data table revision 1.11
+ *
+ * @param
+ * bios_parser *bp - [in] BIOS parser handler to get master data table
+ * integrated_info *info - [out] store and output integrated info
+ *
+ * @return
+ * enum bp_result - BP_RESULT_OK if information is available,
+ *                  BP_RESULT_BADBIOSTABLE otherwise.
+ */
+static enum bp_result get_integrated_info_v11(
+	struct bios_parser *bp,
+	struct integrated_info *info)
+{
+	struct atom_integrated_system_info_v1_11 *info_v11;
+	uint32_t i;
+
+	info_v11 = GET_IMAGE(struct atom_integrated_system_info_v1_11,
+					DATA_TABLES(integratedsysteminfo));
+
+	if (info_v11 == NULL)
+		return BP_RESULT_BADBIOSTABLE;
+
+	info->gpu_cap_info =
+		le32_to_cpu(info_v11->gpucapinfo);
+	/*
+	 * system_config: Bit[0] = 0 : PCIE power gating disabled
+	 *                       = 1 : PCIE power gating enabled
+	 *                Bit[1] = 0 : DDR-PLL shut down disabled
+	 *                       = 1 : DDR-PLL shut down enabled
+	 *                Bit[2] = 0 : DDR-PLL power down disabled
+	 *                       = 1 : DDR-PLL power down enabled
+	 */
+	info->system_config = le32_to_cpu(info_v11->system_config);
+	info->cpu_cap_info = le32_to_cpu(info_v11->cpucapinfo);
+	info->memory_type = info_v11->memorytype;
+	info->ma_channel_number = info_v11->umachannelnumber;
+	info->lvds_ss_percentage =
+		le16_to_cpu(info_v11->lvds_ss_percentage);
+	info->lvds_sspread_rate_in_10hz =
+		le16_to_cpu(info_v11->lvds_ss_rate_10hz);
+	info->hdmi_ss_percentage =
+		le16_to_cpu(info_v11->hdmi_ss_percentage);
+	info->hdmi_sspread_rate_in_10hz =
+		le16_to_cpu(info_v11->hdmi_ss_rate_10hz);
+	info->dvi_ss_percentage =
+		le16_to_cpu(info_v11->dvi_ss_percentage);
+	info->dvi_sspread_rate_in_10_hz =
+		le16_to_cpu(info_v11->dvi_ss_rate_10hz);
+	info->lvds_misc = info_v11->lvds_misc;
+	for (i = 0; i < NUMBER_OF_UCHAR_FOR_GUID; ++i) {
+		info->ext_disp_conn_info.gu_id[i] =
+				info_v11->extdispconninfo.guid[i];
+	}
+
+	for (i = 0; i < MAX_NUMBER_OF_EXT_DISPLAY_PATH; ++i) {
+		info->ext_disp_conn_info.path[i].device_connector_id =
+		object_id_from_bios_object_id(
+		le16_to_cpu(info_v11->extdispconninfo.path[i].connectorobjid));
+
+		info->ext_disp_conn_info.path[i].ext_encoder_obj_id =
+		object_id_from_bios_object_id(
+			le16_to_cpu(
+			info_v11->extdispconninfo.path[i].ext_encoder_objid));
+
+		info->ext_disp_conn_info.path[i].device_tag =
+			le16_to_cpu(
+				info_v11->extdispconninfo.path[i].device_tag);
+		info->ext_disp_conn_info.path[i].device_acpi_enum =
+		le16_to_cpu(
+			info_v11->extdispconninfo.path[i].device_acpi_enum);
+		info->ext_disp_conn_info.path[i].ext_aux_ddc_lut_index =
+			info_v11->extdispconninfo.path[i].auxddclut_index;
+		info->ext_disp_conn_info.path[i].ext_hpd_pin_lut_index =
+			info_v11->extdispconninfo.path[i].hpdlut_index;
+		info->ext_disp_conn_info.path[i].channel_mapping.raw =
+			info_v11->extdispconninfo.path[i].channelmapping;
+	}
+	info->ext_disp_conn_info.checksum =
+		info_v11->extdispconninfo.checksum;
+
+	/* TODO - review */
+#if 0
+	info->boot_up_engine_clock = le32_to_cpu(info_v11->ulBootUpEngineClock)
+									* 10;
+	info->dentist_vco_freq = le32_to_cpu(info_v11->ulDentistVCOFreq) * 10;
+	info->boot_up_uma_clock = le32_to_cpu(info_v11->ulBootUpUMAClock) * 10;
+
+	for (i = 0; i < NUMBER_OF_DISP_CLK_VOLTAGE; ++i) {
+		/* Convert [10KHz] into [KHz] */
+		info->disp_clk_voltage[i].max_supported_clk =
+		le32_to_cpu(info_v11->sDISPCLK_Voltage[i].
+			ulMaximumSupportedCLK) * 10;
+		info->disp_clk_voltage[i].voltage_index =
+		le32_to_cpu(info_v11->sDISPCLK_Voltage[i].ulVoltageIndex);
+	}
+
+	info->boot_up_req_display_vector =
+			le32_to_cpu(info_v11->ulBootUpReqDisplayVector);
+	info->boot_up_nb_voltage =
+			le16_to_cpu(info_v11->usBootUpNBVoltage);
+	info->ext_disp_conn_info_offset =
+			le16_to_cpu(info_v11->usExtDispConnInfoOffset);
+	info->gmc_restore_reset_time =
+			le32_to_cpu(info_v11->ulGMCRestoreResetTime);
+	info->minimum_n_clk =
+			le32_to_cpu(info_v11->ulNbpStateNClkFreq[0]);
+	for (i = 1; i < 4; ++i)
+		info->minimum_n_clk =
+				info->minimum_n_clk <
+				le32_to_cpu(info_v11->ulNbpStateNClkFreq[i]) ?
+				info->minimum_n_clk : le32_to_cpu(
+					info_v11->ulNbpStateNClkFreq[i]);
+
+	info->idle_n_clk = le32_to_cpu(info_v11->ulIdleNClk);
+	info->ddr_dll_power_up_time =
+	    le32_to_cpu(info_v11->ulDDR_DLL_PowerUpTime);
+	info->ddr_pll_power_up_time =
+		le32_to_cpu(info_v11->ulDDR_PLL_PowerUpTime);
+	info->pcie_clk_ss_type = le16_to_cpu(info_v11->usPCIEClkSSType);
+	info->max_lvds_pclk_freq_in_single_link =
+		le16_to_cpu(info_v11->usMaxLVDSPclkFreqInSingleLink);
+	info->lvds_pwr_on_seq_dig_on_to_de_in_4ms =
+		info_v11->ucLVDSPwrOnSeqDIGONtoDE_in4Ms;
+	info->lvds_pwr_on_seq_de_to_vary_bl_in_4ms =
+		info_v11->ucLVDSPwrOnSeqDEtoVARY_BL_in4Ms;
+	info->lvds_pwr_on_seq_vary_bl_to_blon_in_4ms =
+		info_v11->ucLVDSPwrOnSeqVARY_BLtoBLON_in4Ms;
+	info->lvds_pwr_off_seq_vary_bl_to_de_in4ms =
+		info_v11->ucLVDSPwrOffSeqVARY_BLtoDE_in4Ms;
+	info->lvds_pwr_off_seq_de_to_dig_on_in4ms =
+		info_v11->ucLVDSPwrOffSeqDEtoDIGON_in4Ms;
+	info->lvds_pwr_off_seq_blon_to_vary_bl_in_4ms =
+		info_v11->ucLVDSPwrOffSeqBLONtoVARY_BL_in4Ms;
+	info->lvds_off_to_on_delay_in_4ms =
+		info_v11->ucLVDSOffToOnDelay_in4Ms;
+	info->lvds_bit_depth_control_val =
+		le32_to_cpu(info_v11->ulLCDBitDepthControlVal);
+
+	for (i = 0; i < NUMBER_OF_AVAILABLE_SCLK; ++i) {
+		/* Convert [10KHz] into [KHz] */
+		info->avail_s_clk[i].supported_s_clk =
+			le32_to_cpu(info_v11->sAvail_SCLK[i].ulSupportedSCLK)
+									* 10;
+		info->avail_s_clk[i].voltage_index =
+			le16_to_cpu(info_v11->sAvail_SCLK[i].usVoltageIndex);
+		info->avail_s_clk[i].voltage_id =
+			le16_to_cpu(info_v11->sAvail_SCLK[i].usVoltageID);
+	}
+#endif /* TODO */
+
+	return BP_RESULT_OK;
+}
+
+/*
+ * construct_integrated_info
+ *
+ * @brief
+ * Get integrated BIOS information based on table revision
+ *
+ * @param
+ * bios_parser *bp - [in] BIOS parser handler to get master data table
+ * integrated_info *info - [out] store and output integrated info
+ *
+ * @return
+ * enum bp_result - BP_RESULT_OK if information is available,
+ *                  BP_RESULT_BADBIOSTABLE otherwise.
+ */
+static enum bp_result construct_integrated_info(
+	struct bios_parser *bp,
+	struct integrated_info *info)
+{
+	enum bp_result result = BP_RESULT_BADBIOSTABLE;
+
+	struct atom_common_table_header *header;
+	struct atom_data_revision revision;
+
+	struct clock_voltage_caps temp = {0, 0};
+	uint32_t i;
+	uint32_t j;
+
+	if (info && DATA_TABLES(integratedsysteminfo)) {
+		header = GET_IMAGE(struct atom_common_table_header,
+					DATA_TABLES(integratedsysteminfo));
+
+		get_atom_data_table_revision(header, &revision);
+
+		/* Don't need to check major revision as they are all 1 */
+		switch (revision.minor) {
+		case 11:
+			result = get_integrated_info_v11(bp, info);
+			break;
+		default:
+			return result;
+		}
+	}
+
+	if (result != BP_RESULT_OK)
+		return result;
+
+	/* Sort voltage table from low to high */
+	for (i = 1; i < NUMBER_OF_DISP_CLK_VOLTAGE; ++i) {
+		for (j = i; j > 0; --j) {
+			if (info->disp_clk_voltage[j].max_supported_clk <
+				info->disp_clk_voltage[j-1].max_supported_clk
+				) {
+				/* swap j and j - 1*/
+				temp = info->disp_clk_voltage[j-1];
+				info->disp_clk_voltage[j-1] =
+					info->disp_clk_voltage[j];
+				info->disp_clk_voltage[j] = temp;
+			}
+		}
+	}
+
+	return result;
+}
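The nested loop above is a plain insertion sort over the disp-clk/voltage table, swapping adjacent out-of-order entries so clocks end up ascending. As a standalone sketch with a fixed sample table (struct and field names are illustrative, not the driver's):

```c
#include <assert.h>
#include <stdint.h>

#define SK_NUM_LEVELS 4	/* plays the role of NUMBER_OF_DISP_CLK_VOLTAGE */

struct sk_clk_volt {
	uint32_t max_supported_clk;
	uint32_t voltage_index;
};

/* Order (clock, voltage) pairs by ascending clock using the same
 * adjacent-swap insertion sort the parser uses. */
static void sk_sort_by_clock(struct sk_clk_volt *tbl, unsigned int n)
{
	unsigned int i, j;

	for (i = 1; i < n; ++i) {
		for (j = i; j > 0; --j) {
			if (tbl[j].max_supported_clk <
					tbl[j - 1].max_supported_clk) {
				struct sk_clk_volt tmp = tbl[j - 1];

				tbl[j - 1] = tbl[j];
				tbl[j] = tmp;
			}
		}
	}
}

/* Sort a fixed sample table and report the clock at a given slot. */
static uint32_t sk_sorted_clk(unsigned int idx)
{
	struct sk_clk_volt t[SK_NUM_LEVELS] = {
		{600, 2}, {300, 0}, {900, 3}, {450, 1}
	};

	sk_sort_by_clock(t, SK_NUM_LEVELS);
	return t[idx].max_supported_clk;
}
```

For a table this small (a handful of entries), the O(n²) pass is cheaper and simpler than pulling in a general sort.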
+
+static struct integrated_info *bios_parser_create_integrated_info(
+	struct dc_bios *dcb)
+{
+	struct bios_parser *bp = BP_FROM_DCB(dcb);
+	struct integrated_info *info = NULL;
+
+	info = dm_alloc(sizeof(struct integrated_info));
+
+	if (info == NULL) {
+		ASSERT_CRITICAL(0);
+		return NULL;
+	}
+
+	if (construct_integrated_info(bp, info) == BP_RESULT_OK)
+		return info;
+
+	dm_free(info);
+
+	return NULL;
+}
+
+static const struct dc_vbios_funcs vbios_funcs = {
+	.get_connectors_number = bios_parser_get_connectors_number,
+
+	.get_encoder_id = bios_parser_get_encoder_id,
+
+	.get_connector_id = bios_parser_get_connector_id,
+
+	.get_dst_number = bios_parser_get_dst_number,
+
+	.get_src_obj = bios_parser_get_src_obj,
+
+	.get_dst_obj = bios_parser_get_dst_obj,
+
+	.get_i2c_info = bios_parser_get_i2c_info,
+
+	.get_voltage_ddc_info = bios_parser_get_voltage_ddc_info,
+
+	.get_thermal_ddc_info = bios_parser_get_thermal_ddc_info,
+
+	.get_hpd_info = bios_parser_get_hpd_info,
+
+	.get_device_tag = bios_parser_get_device_tag,
+
+	.get_firmware_info = bios_parser_get_firmware_info,
+
+	.get_spread_spectrum_info = bios_parser_get_spread_spectrum_info,
+
+	.get_ss_entry_number = bios_parser_get_ss_entry_number,
+
+	.get_embedded_panel_info = bios_parser_get_embedded_panel_info,
+
+	.get_gpio_pin_info = bios_parser_get_gpio_pin_info,
+
+	.get_encoder_cap_info = bios_parser_get_encoder_cap_info,
+
+	.is_device_id_supported = bios_parser_is_device_id_supported,
+
+	.is_accelerated_mode = bios_parser_is_accelerated_mode,
+
+	.set_scratch_critical_state = bios_parser_set_scratch_critical_state,
+
+	/* COMMANDS */
+	.encoder_control = bios_parser_encoder_control,
+
+	.transmitter_control = bios_parser_transmitter_control,
+
+	.enable_crtc = bios_parser_enable_crtc,
+
+	.set_pixel_clock = bios_parser_set_pixel_clock,
+
+	.set_dce_clock = bios_parser_set_dce_clock,
+
+	.program_crtc_timing = bios_parser_program_crtc_timing,
+
+	/* .blank_crtc = bios_parser_blank_crtc, */
+
+	.crtc_source_select = bios_parser_crtc_source_select,
+
+	/* .external_encoder_control = bios_parser_external_encoder_control, */
+
+	.enable_disp_power_gating = bios_parser_enable_disp_power_gating,
+
+	.post_init = bios_parser_post_init,
+
+	.bios_parser_destroy = firmware_parser_destroy,
+
+	.get_smu_clock_info = bios_parser_get_smu_clock_info,
+};
+
+static bool bios_parser_construct(
+	struct bios_parser *bp,
+	struct bp_init_data *init,
+	enum dce_version dce_version)
+{
+	uint16_t *rom_header_offset = NULL;
+	struct atom_rom_header_v2_2 *rom_header = NULL;
+	struct display_object_info_table_v1_4 *object_info_tbl;
+	struct atom_data_revision tbl_rev = {0};
+
+	if (!init)
+		return false;
+
+	if (!init->bios)
+		return false;
+
+	bp->base.funcs = &vbios_funcs;
+	bp->base.bios = init->bios;
+	bp->base.bios_size =
+		bp->base.bios[OFFSET_TO_ATOM_ROM_IMAGE_SIZE] *
+			BIOS_IMAGE_SIZE_UNIT;
+
+	bp->base.ctx = init->ctx;
+
+	bp->base.bios_local_image = NULL;
+
+	rom_header_offset =
+			GET_IMAGE(uint16_t, OFFSET_TO_ATOM_ROM_HEADER_POINTER);
+
+	if (!rom_header_offset)
+		return false;
+
+	rom_header = GET_IMAGE(struct atom_rom_header_v2_2, *rom_header_offset);
+
+	if (!rom_header)
+		return false;
+
+	get_atom_data_table_revision(&rom_header->table_header, &tbl_rev);
+	if (!(tbl_rev.major >= 2 && tbl_rev.minor >= 2))
+		return false;
+
+	bp->master_data_tbl =
+		GET_IMAGE(struct atom_master_data_table_v2_1,
+				rom_header->masterdatatable_offset);
+
+	if (!bp->master_data_tbl)
+		return false;
+
+	bp->object_info_tbl_offset = DATA_TABLES(displayobjectinfo);
+
+	if (!bp->object_info_tbl_offset)
+		return false;
+
+	object_info_tbl =
+			GET_IMAGE(struct display_object_info_table_v1_4,
+						bp->object_info_tbl_offset);
+
+	if (!object_info_tbl)
+		return false;
+
+	get_atom_data_table_revision(&object_info_tbl->table_header,
+		&bp->object_info_tbl.revision);
+
+	if (bp->object_info_tbl.revision.major == 1
+		&& bp->object_info_tbl.revision.minor >= 4) {
+		struct display_object_info_table_v1_4 *tbl_v1_4;
+
+		tbl_v1_4 = GET_IMAGE(struct display_object_info_table_v1_4,
+			bp->object_info_tbl_offset);
+		if (!tbl_v1_4)
+			return false;
+
+		bp->object_info_tbl.v1_4 = tbl_v1_4;
+	} else {
+		return false;
+	}
+
+	dal_firmware_parser_init_cmd_tbl(bp);
+	dal_bios_parser_init_cmd_tbl_helper2(&bp->cmd_helper, dce_version);
+
+	bp->base.integrated_info = bios_parser_create_integrated_info(&bp->base);
+
+	return true;
+}
+
+struct dc_bios *firmware_parser_create(
+	struct bp_init_data *init,
+	enum dce_version dce_version)
+{
+	struct bios_parser *bp = NULL;
+
+	bp = dm_alloc(sizeof(struct bios_parser));
+	if (!bp)
+		return NULL;
+
+	if (bios_parser_construct(bp, init, dce_version))
+		return &bp->base;
+
+	dm_free(bp);
+	return NULL;
+}
+
+
diff --git a/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.h b/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.h
new file mode 100644
index 0000000..cb40546
--- /dev/null
+++ b/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.h
@@ -0,0 +1,33 @@
+/*
+ * Copyright 2012-15 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: AMD
+ *
+ */
+
+#ifndef __DAL_BIOS_PARSER2_H__
+#define __DAL_BIOS_PARSER2_H__
+
+struct dc_bios *firmware_parser_create(
+	struct bp_init_data *init,
+	enum dce_version dce_version);
+
+#endif
diff --git a/drivers/gpu/drm/amd/display/dc/bios/bios_parser_types_internal2.h b/drivers/gpu/drm/amd/display/dc/bios/bios_parser_types_internal2.h
new file mode 100644
index 0000000..bf1f5c8
--- /dev/null
+++ b/drivers/gpu/drm/amd/display/dc/bios/bios_parser_types_internal2.h
@@ -0,0 +1,74 @@
+/*
+ * Copyright 2012-15 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: AMD
+ *
+ */
+
+#ifndef __DAL_BIOS_PARSER_TYPES_BIOS2_H__
+#define __DAL_BIOS_PARSER_TYPES_BIOS2_H__
+
+#include "dc_bios_types.h"
+#include "bios_parser_helper.h"
+
+/* use atomfirmware_bringup.h only. Not atombios.h anymore */
+
+struct atom_data_revision {
+	uint32_t major;
+	uint32_t minor;
+};
+
+struct object_info_table {
+	struct atom_data_revision revision;
+	union {
+		struct display_object_info_table_v1_4 *v1_4;
+	};
+};
+
+enum spread_spectrum_id {
+	SS_ID_UNKNOWN = 0,
+	SS_ID_DP1 = 0xf1,
+	SS_ID_DP2 = 0xf2,
+	SS_ID_LVLINK_2700MHZ = 0xf3,
+	SS_ID_LVLINK_1620MHZ = 0xf4
+};
+
+struct bios_parser {
+	struct dc_bios base;
+
+	struct object_info_table object_info_tbl;
+	uint32_t object_info_tbl_offset;
+	struct atom_master_data_table_v2_1 *master_data_tbl;
+
+
+	const struct bios_parser_helper *bios_helper;
+
+	const struct command_table_helper *cmd_helper;
+	struct cmd_tbl cmd_tbl;
+
+	bool remap_device_tags;
+};
+
+/* Bios Parser from DC Bios */
+#define BP_FROM_DCB(dc_bios) \
+	container_of(dc_bios, struct bios_parser, base)
+
+#endif
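The `BP_FROM_DCB` macro above is the kernel's `container_of` idiom: given a pointer to the embedded `dc_bios` base member, it recovers the enclosing `bios_parser`. A userspace sketch of the round trip (all `sk_` names are invented; the macro body is the standard offsetof-based `container_of`):

```c
#include <assert.h>
#include <stddef.h>

/* Standard container_of: subtract the member's offset from the
 * member pointer to get back to the start of the enclosing struct. */
#define sk_container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct sk_dc_bios {
	int dummy;
};

struct sk_bios_parser {
	int tag;
	struct sk_dc_bios base;	/* embedded base, as in struct bios_parser */
};

static struct sk_bios_parser *sk_bp_from_dcb(struct sk_dc_bios *dcb)
{
	return sk_container_of(dcb, struct sk_bios_parser, base);
}

/* Round-trip: hand out &bp->base, get bp (and its fields) back. */
static int sk_roundtrip_tag(void)
{
	static struct sk_bios_parser bp = { .tag = 42 };

	return sk_bp_from_dcb(&bp.base)->tag;
}
```

This is why every `bios_parser_*` callback can take a plain `struct dc_bios *` yet still reach parser-private state: the base member's address pins down the whole object.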
diff --git a/drivers/gpu/drm/amd/display/dc/bios/command_table2.c b/drivers/gpu/drm/amd/display/dc/bios/command_table2.c
new file mode 100644
index 0000000..36d1582
--- /dev/null
+++ b/drivers/gpu/drm/amd/display/dc/bios/command_table2.c
@@ -0,0 +1,813 @@
+/*
+ * Copyright 2012-15 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: AMD
+ *
+ */
+
+#include "dm_services.h"
+
+#include "ObjectID.h"
+#include "atomfirmware.h"
+#include "atomfirmwareid.h"
+
+#include "include/bios_parser_interface.h"
+
+#include "command_table2.h"
+#include "command_table_helper2.h"
+#include "bios_parser_helper.h"
+#include "bios_parser_types_internal2.h"
+
+#define GET_INDEX_INTO_MASTER_TABLE(MasterOrData, FieldName)\
+	(((char *)(&((\
+		struct atom_master_list_of_##MasterOrData##_functions_v2_1 *)0)\
+		->FieldName)-(char *)0)/sizeof(uint16_t))
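The master command table is laid out as a struct of `uint16_t` entries, so a command's index is just its field offset divided by `sizeof(uint16_t)`. The macro above open-codes that with the classic null-pointer cast; modern `offsetof` expresses the same computation. A sketch with an invented three-entry table (field names are illustrative, not atomfirmware's actual list):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Invented stand-in for atom_master_list_of_command_functions_v2_1. */
struct sk_master_list_of_command_functions {
	uint16_t asic_init;
	uint16_t digxencodercontrol;
	uint16_t setpixelclock;
};

/* Field offset in uint16_t slots == the command's table index. */
#define SK_INDEX_INTO_MASTER_TABLE(FieldName) \
	(offsetof(struct sk_master_list_of_command_functions, FieldName) / \
	 sizeof(uint16_t))
```

Because the index is computed at compile time from the struct layout, renumbering commands in the firmware header automatically renumbers every call site.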
+
+#define EXEC_BIOS_CMD_TABLE(fname, params)\
+	(cgs_atom_exec_cmd_table(bp->base.ctx->cgs_device, \
+		GET_INDEX_INTO_MASTER_TABLE(command, fname), \
+		&params) == 0)
+
+#define BIOS_CMD_TABLE_REVISION(fname, frev, crev)\
+	cgs_atom_get_cmd_table_revs(bp->base.ctx->cgs_device, \
+		GET_INDEX_INTO_MASTER_TABLE(command, fname), &frev, &crev)
+
+#define BIOS_CMD_TABLE_PARA_REVISION(fname)\
+	bios_cmd_table_para_revision(bp->base.ctx->cgs_device, \
+			GET_INDEX_INTO_MASTER_TABLE(command, fname))
+
+static void init_dig_encoder_control(struct bios_parser *bp);
+static void init_transmitter_control(struct bios_parser *bp);
+static void init_set_pixel_clock(struct bios_parser *bp);
+
+static void init_set_crtc_timing(struct bios_parser *bp);
+
+static void init_select_crtc_source(struct bios_parser *bp);
+static void init_enable_crtc(struct bios_parser *bp);
+
+static void init_external_encoder_control(struct bios_parser *bp);
+static void init_enable_disp_power_gating(struct bios_parser *bp);
+static void init_set_dce_clock(struct bios_parser *bp);
+static void init_get_smu_clock_info(struct bios_parser *bp);
+
+void dal_firmware_parser_init_cmd_tbl(struct bios_parser *bp)
+{
+	init_dig_encoder_control(bp);
+	init_transmitter_control(bp);
+	init_set_pixel_clock(bp);
+
+	init_set_crtc_timing(bp);
+
+	init_select_crtc_source(bp);
+	init_enable_crtc(bp);
+
+	init_external_encoder_control(bp);
+	init_enable_disp_power_gating(bp);
+	init_set_dce_clock(bp);
+	init_get_smu_clock_info(bp);
+}
+
+static uint32_t bios_cmd_table_para_revision(void *cgs_device,
+					     uint32_t index)
+{
+	uint8_t frev, crev;
+
+	if (cgs_atom_get_cmd_table_revs(cgs_device,
+					index,
+					&frev, &crev) != 0)
+		return 0;
+	return crev;
+}
+
+/******************************************************************************
+ ******************************************************************************
+ **
+ **                  D I G E N C O D E R C O N T R O L
+ **
+ ******************************************************************************
+ *****************************************************************************/
+
+static enum bp_result encoder_control_digx_v1_5(
+	struct bios_parser *bp,
+	struct bp_encoder_control *cntl);
+
+static void init_dig_encoder_control(struct bios_parser *bp)
+{
+	uint32_t version =
+		BIOS_CMD_TABLE_PARA_REVISION(digxencodercontrol);
+
+	switch (version) {
+	case 5:
+		bp->cmd_tbl.dig_encoder_control = encoder_control_digx_v1_5;
+		break;
+	default:
+		bp->cmd_tbl.dig_encoder_control = NULL;
+		break;
+	}
+}
+
+static enum bp_result encoder_control_digx_v1_5(
+	struct bios_parser *bp,
+	struct bp_encoder_control *cntl)
+{
+	enum bp_result result = BP_RESULT_FAILURE;
+	struct dig_encoder_stream_setup_parameters_v1_5 params = {0};
+
+	params.digid = (uint8_t)(cntl->engine_id);
+	params.action = bp->cmd_helper->encoder_action_to_atom(cntl->action);
+
+	params.pclk_10khz = cntl->pixel_clock / 10;
+	params.digmode =
+			(uint8_t)(bp->cmd_helper->encoder_mode_bp_to_atom(
+					cntl->signal,
+					cntl->enable_dp_audio));
+	params.lanenum = (uint8_t)(cntl->lanes_number);
+
+	switch (cntl->color_depth) {
+	case COLOR_DEPTH_888:
+		params.bitpercolor = PANEL_8BIT_PER_COLOR;
+		break;
+	case COLOR_DEPTH_101010:
+		params.bitpercolor = PANEL_10BIT_PER_COLOR;
+		break;
+	case COLOR_DEPTH_121212:
+		params.bitpercolor = PANEL_12BIT_PER_COLOR;
+		break;
+	case COLOR_DEPTH_161616:
+		params.bitpercolor = PANEL_16BIT_PER_COLOR;
+		break;
+	default:
+		break;
+	}
+
+	if (cntl->signal == SIGNAL_TYPE_HDMI_TYPE_A)
+		switch (cntl->color_depth) {
+		case COLOR_DEPTH_101010:
+			params.pclk_10khz =
+				(params.pclk_10khz * 30) / 24;
+			break;
+		case COLOR_DEPTH_121212:
+			params.pclk_10khz =
+				(params.pclk_10khz * 36) / 24;
+			break;
+		case COLOR_DEPTH_161616:
+			params.pclk_10khz =
+				(params.pclk_10khz * 48) / 24;
+			break;
+		default:
+			break;
+		}
+
+	if (EXEC_BIOS_CMD_TABLE(digxencodercontrol, params))
+		result = BP_RESULT_OK;
+
+	return result;
+}
+
+/*****************************************************************************
+ ******************************************************************************
+ **
+ **                  TRANSMITTER CONTROL
+ **
+ ******************************************************************************
+ *****************************************************************************/
+
+static enum bp_result transmitter_control_v1_6(
+	struct bios_parser *bp,
+	struct bp_transmitter_control *cntl);
+
+static void init_transmitter_control(struct bios_parser *bp)
+{
+	uint8_t frev;
+	uint8_t crev;
+
+	if (BIOS_CMD_TABLE_REVISION(dig1transmittercontrol, frev, crev) != 0)
+		BREAK_TO_DEBUGGER();
+	switch (crev) {
+	case 6:
+		bp->cmd_tbl.transmitter_control = transmitter_control_v1_6;
+		break;
+	default:
+		bp->cmd_tbl.transmitter_control = NULL;
+		break;
+	}
+}
+
+static enum bp_result transmitter_control_v1_6(
+	struct bios_parser *bp,
+	struct bp_transmitter_control *cntl)
+{
+	enum bp_result result = BP_RESULT_FAILURE;
+	const struct command_table_helper *cmd = bp->cmd_helper;
+	struct dig_transmitter_control_ps_allocation_v1_6 ps = { { 0 } };
+
+	ps.param.phyid = cmd->phy_id_to_atom(cntl->transmitter);
+	ps.param.action = (uint8_t)cntl->action;
+
+	if (cntl->action == TRANSMITTER_CONTROL_SET_VOLTAGE_AND_PREEMPASIS)
+		ps.param.mode_laneset.dplaneset = (uint8_t)cntl->lane_settings;
+	else
+		ps.param.mode_laneset.digmode =
+				cmd->signal_type_to_atom_dig_mode(cntl->signal);
+
+	ps.param.lanenum = (uint8_t)cntl->lanes_number;
+	ps.param.hpdsel = cmd->hpd_sel_to_atom(cntl->hpd_sel);
+	ps.param.digfe_sel = cmd->dig_encoder_sel_to_atom(cntl->engine_id);
+	ps.param.connobj_id = (uint8_t)cntl->connector_obj_id.id;
+	ps.param.symclk_10khz = cntl->pixel_clock/10;
+
+
+	if (cntl->action == TRANSMITTER_CONTROL_ENABLE ||
+		cntl->action == TRANSMITTER_CONTROL_ACTIAVATE ||
+		cntl->action == TRANSMITTER_CONTROL_DEACTIVATE) {
+		dm_logger_write(bp->base.ctx->logger, LOG_HW_SET_MODE,
+			"************************%s: ps.param.symclk_10khz = %d\n",
+			__func__, ps.param.symclk_10khz);
+	}
+
+
+	/* color_depth is not used any more; the driver has the deep color
+	 * factor in the Phyclk
+	 */
+	if (EXEC_BIOS_CMD_TABLE(dig1transmittercontrol, ps))
+		result = BP_RESULT_OK;
+	return result;
+}
+
+/******************************************************************************
+ ******************************************************************************
+ **
+ **                  SET PIXEL CLOCK
+ **
+ ******************************************************************************
+ *****************************************************************************/
+
+static enum bp_result set_pixel_clock_v7(
+	struct bios_parser *bp,
+	struct bp_pixel_clock_parameters *bp_params);
+
+static void init_set_pixel_clock(struct bios_parser *bp)
+{
+	switch (BIOS_CMD_TABLE_PARA_REVISION(setpixelclock)) {
+	case 7:
+		bp->cmd_tbl.set_pixel_clock = set_pixel_clock_v7;
+		break;
+	default:
+		bp->cmd_tbl.set_pixel_clock = NULL;
+		break;
+	}
+}
+
+
+
+static enum bp_result set_pixel_clock_v7(
+	struct bios_parser *bp,
+	struct bp_pixel_clock_parameters *bp_params)
+{
+	enum bp_result result = BP_RESULT_FAILURE;
+	struct set_pixel_clock_parameter_v1_7 clk;
+	uint8_t controller_id;
+	uint32_t pll_id;
+
+	memset(&clk, 0, sizeof(clk));
+
+	if (bp->cmd_helper->clock_source_id_to_atom(bp_params->pll_id, &pll_id)
+			&& bp->cmd_helper->controller_id_to_atom(bp_params->
+					controller_id, &controller_id)) {
+		/* Note: VBIOS still wants to use the ucCRTC name, which is
+		 * now 1 byte in a ULONG:
+		 *typedef struct _CRTC_PIXEL_CLOCK_FREQ
+		 *{
+		 * target pixel clock to drive the CRTC timing;
+		 * 0 means disable PPLL/DCPLL. Expanded to 24 bits compared to
+		 * the previous version.
+		 * ULONG ulPixelClock:24;
+		 * ATOM_CRTC1~6, indicates the CRTC controller to drive the
+		 * pixel clock; not used for the DCPLL case.
+		 * ULONG ucCRTC:8;
+		 *}CRTC_PIXEL_CLOCK_FREQ;
+		 *union
+		 *{
+		 * CRTC_PIXEL_CLOCK_FREQ ulCrtcPclkFreq; pixel clock and CRTC id
+		 * ULONG ulDispEngClkFreq; dispclk frequency
+		 *};
+		 */
+		clk.crtc_id = controller_id;
+		clk.pll_id = (uint8_t) pll_id;
+		clk.encoderobjid =
+			bp->cmd_helper->encoder_id_to_atom(
+				dal_graphics_object_id_get_encoder_id(
+					bp_params->encoder_object_id));
+
+		clk.encoder_mode = (uint8_t) bp->
+			cmd_helper->encoder_mode_bp_to_atom(
+				bp_params->signal_type, false);
+
+		/* We need to convert from KHz units into 100Hz units */
+		clk.pixclk_100hz = cpu_to_le32(bp_params->target_pixel_clock *
+				10);
+
+		clk.deep_color_ratio =
+			(uint8_t) bp->cmd_helper->
+				transmitter_color_depth_to_atom(
+					bp_params->color_depth);
+		dm_logger_write(bp->base.ctx->logger, LOG_HW_SET_MODE,
+				"************************%s: program display clock = %d, "
+				"colorDepth = %d\n", __func__,
+				bp_params->target_pixel_clock,
+				bp_params->color_depth);
+
+		if (bp_params->flags.FORCE_PROGRAMMING_OF_PLL)
+			clk.miscinfo |= PIXEL_CLOCK_V7_MISC_FORCE_PROG_PPLL;
+
+		if (bp_params->flags.PROGRAM_PHY_PLL_ONLY)
+			clk.miscinfo |= PIXEL_CLOCK_V7_MISC_PROG_PHYPLL;
+
+		if (bp_params->flags.SUPPORT_YUV_420)
+			clk.miscinfo |= PIXEL_CLOCK_V7_MISC_YUV420_MODE;
+
+		if (bp_params->flags.SET_XTALIN_REF_SRC)
+			clk.miscinfo |= PIXEL_CLOCK_V7_MISC_REF_DIV_SRC_XTALIN;
+
+		if (bp_params->flags.SET_GENLOCK_REF_DIV_SRC)
+			clk.miscinfo |= PIXEL_CLOCK_V7_MISC_REF_DIV_SRC_GENLK;
+
+		if (bp_params->signal_type == SIGNAL_TYPE_DVI_DUAL_LINK)
+			clk.miscinfo |= PIXEL_CLOCK_V7_MISC_DVI_DUALLINK_EN;
+
+		if (EXEC_BIOS_CMD_TABLE(setpixelclock, clk))
+			result = BP_RESULT_OK;
+	}
+	return result;
+}
+
+/******************************************************************************
+ ******************************************************************************
+ **
+ **                  SET CRTC TIMING
+ **
+ ******************************************************************************
+ *****************************************************************************/
+
+static enum bp_result set_crtc_using_dtd_timing_v3(
+	struct bios_parser *bp,
+	struct bp_hw_crtc_timing_parameters *bp_params);
+
+static void init_set_crtc_timing(struct bios_parser *bp)
+{
+	uint32_t dtd_version =
+			BIOS_CMD_TABLE_PARA_REVISION(setcrtc_usingdtdtiming);
+
+	switch (dtd_version) {
+	case 3:
+		bp->cmd_tbl.set_crtc_timing = set_crtc_using_dtd_timing_v3;
+		break;
+	default:
+		bp->cmd_tbl.set_crtc_timing = NULL;
+		break;
+	}
+}
+
+static enum bp_result set_crtc_using_dtd_timing_v3(
+	struct bios_parser *bp,
+	struct bp_hw_crtc_timing_parameters *bp_params)
+{
+	enum bp_result result = BP_RESULT_FAILURE;
+	struct set_crtc_using_dtd_timing_parameters params = {0};
+	uint8_t atom_controller_id;
+
+	if (bp->cmd_helper->controller_id_to_atom(
+			bp_params->controller_id, &atom_controller_id))
+		params.crtc_id = atom_controller_id;
+
+	/* bios usH_Size wants h addressable size */
+	params.h_size = cpu_to_le16((uint16_t)bp_params->h_addressable);
+	/* bios usH_Blanking_Time wants borders included in blanking */
+	params.h_blanking_time =
+			cpu_to_le16((uint16_t)(bp_params->h_total -
+					bp_params->h_addressable));
+	/* bios usV_Size wants v addressable size */
+	params.v_size = cpu_to_le16((uint16_t)bp_params->v_addressable);
+	/* bios usV_Blanking_Time wants borders included in blanking */
+	params.v_blanking_time =
+			cpu_to_le16((uint16_t)(bp_params->v_total -
+					bp_params->v_addressable));
+	/* bios usHSyncOffset is the offset from the end of h addressable,
+	 * our horizontalSyncStart is the offset from the beginning
+	 * of h addressable
+	 */
+	params.h_syncoffset =
+			cpu_to_le16((uint16_t)(bp_params->h_sync_start -
+					bp_params->h_addressable));
+	params.h_syncwidth = cpu_to_le16((uint16_t)bp_params->h_sync_width);
+	/* bios usVSyncOffset is the offset from the end of v addressable,
+	 * our verticalSyncStart is the offset from the beginning of
+	 * v addressable
+	 */
+	params.v_syncoffset =
+			cpu_to_le16((uint16_t)(bp_params->v_sync_start -
+					bp_params->v_addressable));
+	params.v_syncwidth = cpu_to_le16((uint16_t)bp_params->v_sync_width);
+
+	/* we assume that overscan from original timing does not get bigger
+	 * than 255
+	 * we will program all the borders in the Set CRTC Overscan call below
+	 */
+
+	if (bp_params->flags.HSYNC_POSITIVE_POLARITY == 0)
+		params.modemiscinfo =
+				cpu_to_le16(le16_to_cpu(params.modemiscinfo) |
+						ATOM_HSYNC_POLARITY);
+
+	if (bp_params->flags.VSYNC_POSITIVE_POLARITY == 0)
+		params.modemiscinfo =
+				cpu_to_le16(le16_to_cpu(params.modemiscinfo) |
+						ATOM_VSYNC_POLARITY);
+
+	if (bp_params->flags.INTERLACE) {
+		params.modemiscinfo =
+				cpu_to_le16(le16_to_cpu(params.modemiscinfo) |
+						ATOM_INTERLACE);
+
+		/* original DAL code has this condition to apply this
+		 * for non-TV/CV only
+		 * due to complex MV testing for possible impact
+		 * if ( pACParameters->signal != SignalType_YPbPr &&
+		 *  pACParameters->signal != SignalType_Composite &&
+		 *  pACParameters->signal != SignalType_SVideo)
+		 */
+		{
+			/* HW will deduct 0.5 lines from the 2nd field,
+			 * i.e. for 1080i it is 2 lines for the 1st field and
+			 * 2.5 lines for the 2nd field. We need an input of 5
+			 * instead of 4, but it is 4 whether it comes from
+			 * EDID data (CEA-861 spec) or the CEA timing table.
+			 */
+			params.v_syncoffset =
+				cpu_to_le16(le16_to_cpu(params.v_syncoffset) +
+						1);
+
+		}
+	}
+
+	if (bp_params->flags.HORZ_COUNT_BY_TWO)
+		params.modemiscinfo =
+			cpu_to_le16(le16_to_cpu(params.modemiscinfo) |
+					0x100); /* ATOM_DOUBLE_CLOCK_MODE */
+
+	if (EXEC_BIOS_CMD_TABLE(setcrtc_usingdtdtiming, params))
+		result = BP_RESULT_OK;
+
+	return result;
+}
+
+/******************************************************************************
+ ******************************************************************************
+ **
+ **                  SELECT CRTC SOURCE
+ **
+ ******************************************************************************
+ *****************************************************************************/
+
+
+static enum bp_result select_crtc_source_v3(
+	struct bios_parser *bp,
+	struct bp_crtc_source_select *bp_params);
+
+static void init_select_crtc_source(struct bios_parser *bp)
+{
+	switch (BIOS_CMD_TABLE_PARA_REVISION(selectcrtc_source)) {
+	case 3:
+		bp->cmd_tbl.select_crtc_source = select_crtc_source_v3;
+		break;
+	default:
+		bp->cmd_tbl.select_crtc_source = NULL;
+		break;
+	}
+}
+
+
+static enum bp_result select_crtc_source_v3(
+	struct bios_parser *bp,
+	struct bp_crtc_source_select *bp_params)
+{
+	enum bp_result result = BP_RESULT_FAILURE;
+	struct select_crtc_source_parameters_v2_3 params;
+	uint8_t atom_controller_id;
+	uint32_t atom_engine_id;
+	enum signal_type s = bp_params->signal;
+
+	memset(&params, 0, sizeof(params));
+
+	if (bp->cmd_helper->controller_id_to_atom(bp_params->controller_id,
+			&atom_controller_id))
+		params.crtc_id = atom_controller_id;
+	else
+		return result;
+
+	if (bp->cmd_helper->engine_bp_to_atom(bp_params->engine_id,
+			&atom_engine_id))
+		params.encoder_id = (uint8_t)atom_engine_id;
+	else
+		return result;
+
+	if (s == SIGNAL_TYPE_EDP ||
+		(s == SIGNAL_TYPE_DISPLAY_PORT && bp_params->sink_signal ==
+							SIGNAL_TYPE_LVDS))
+		s = SIGNAL_TYPE_LVDS;
+
+	params.encode_mode =
+			bp->cmd_helper->encoder_mode_bp_to_atom(
+					s, bp_params->enable_dp_audio);
+	/* Needed for VBIOS Random Spatial Dithering feature */
+	params.dst_bpc = (uint8_t)(bp_params->display_output_bit_depth);
+
+	if (EXEC_BIOS_CMD_TABLE(selectcrtc_source, params))
+		result = BP_RESULT_OK;
+
+	return result;
+}
+
+/******************************************************************************
+ ******************************************************************************
+ **
+ **                  ENABLE CRTC
+ **
+ ******************************************************************************
+ *****************************************************************************/
+
+static enum bp_result enable_crtc_v1(
+	struct bios_parser *bp,
+	enum controller_id controller_id,
+	bool enable);
+
+static void init_enable_crtc(struct bios_parser *bp)
+{
+	switch (BIOS_CMD_TABLE_PARA_REVISION(enablecrtc)) {
+	case 1:
+		bp->cmd_tbl.enable_crtc = enable_crtc_v1;
+		break;
+	default:
+		bp->cmd_tbl.enable_crtc = NULL;
+		break;
+	}
+}
+
+static enum bp_result enable_crtc_v1(
+	struct bios_parser *bp,
+	enum controller_id controller_id,
+	bool enable)
+{
+	enum bp_result result = BP_RESULT_FAILURE;
+	struct enable_crtc_parameters params = {0};
+	uint8_t id;
+
+	if (bp->cmd_helper->controller_id_to_atom(controller_id, &id))
+		params.crtc_id = id;
+	else
+		return BP_RESULT_BADINPUT;
+
+	if (enable)
+		params.enable = ATOM_ENABLE;
+	else
+		params.enable = ATOM_DISABLE;
+
+	if (EXEC_BIOS_CMD_TABLE(enablecrtc, params))
+		result = BP_RESULT_OK;
+
+	return result;
+}
+
+/******************************************************************************
+ ******************************************************************************
+ **
+ **                  DISPLAY PLL
+ **
+ ******************************************************************************
+ *****************************************************************************/
+
+
+
+/******************************************************************************
+ ******************************************************************************
+ **
+ **                  EXTERNAL ENCODER CONTROL
+ **
+ ******************************************************************************
+ *****************************************************************************/
+
+static enum bp_result external_encoder_control_v3(
+	struct bios_parser *bp,
+	struct bp_external_encoder_control *cntl);
+
+static void init_external_encoder_control(
+	struct bios_parser *bp)
+{
+	switch (BIOS_CMD_TABLE_PARA_REVISION(externalencodercontrol)) {
+	case 3:
+		bp->cmd_tbl.external_encoder_control =
+				external_encoder_control_v3;
+		break;
+	default:
+		bp->cmd_tbl.external_encoder_control = NULL;
+		break;
+	}
+}
+
+static enum bp_result external_encoder_control_v3(
+	struct bios_parser *bp,
+	struct bp_external_encoder_control *cntl)
+{
+	/* TODO */
+	return BP_RESULT_OK;
+}
+
+/******************************************************************************
+ ******************************************************************************
+ **
+ **                  ENABLE DISPLAY POWER GATING
+ **
+ ******************************************************************************
+ *****************************************************************************/
+
+static enum bp_result enable_disp_power_gating_v2_1(
+	struct bios_parser *bp,
+	enum controller_id crtc_id,
+	enum bp_pipe_control_action action);
+
+static void init_enable_disp_power_gating(
+	struct bios_parser *bp)
+{
+	switch (BIOS_CMD_TABLE_PARA_REVISION(enabledisppowergating)) {
+	case 1:
+		bp->cmd_tbl.enable_disp_power_gating =
+				enable_disp_power_gating_v2_1;
+		break;
+	default:
+		bp->cmd_tbl.enable_disp_power_gating = NULL;
+		break;
+	}
+}
+
+static enum bp_result enable_disp_power_gating_v2_1(
+	struct bios_parser *bp,
+	enum controller_id crtc_id,
+	enum bp_pipe_control_action action)
+{
+	enum bp_result result = BP_RESULT_FAILURE;
+
+
+	struct enable_disp_power_gating_ps_allocation ps = { { 0 } };
+	uint8_t atom_crtc_id;
+
+	if (bp->cmd_helper->controller_id_to_atom(crtc_id, &atom_crtc_id))
+		ps.param.disp_pipe_id = atom_crtc_id;
+	else
+		return BP_RESULT_BADINPUT;
+
+	ps.param.enable =
+		bp->cmd_helper->disp_power_gating_action_to_atom(action);
+
+	if (EXEC_BIOS_CMD_TABLE(enabledisppowergating, ps.param))
+		result = BP_RESULT_OK;
+
+	return result;
+}
+
+/******************************************************************************
+ ******************************************************************************
+ **
+ **                  SET DCE CLOCK
+ **
+ ******************************************************************************
+ *****************************************************************************/
+
+static enum bp_result set_dce_clock_v2_1(
+	struct bios_parser *bp,
+	struct bp_set_dce_clock_parameters *bp_params);
+
+static void init_set_dce_clock(struct bios_parser *bp)
+{
+	switch (BIOS_CMD_TABLE_PARA_REVISION(setdceclock)) {
+	case 1:
+		bp->cmd_tbl.set_dce_clock = set_dce_clock_v2_1;
+		break;
+	default:
+		bp->cmd_tbl.set_dce_clock = NULL;
+		break;
+	}
+}
+
+static enum bp_result set_dce_clock_v2_1(
+	struct bios_parser *bp,
+	struct bp_set_dce_clock_parameters *bp_params)
+{
+	enum bp_result result = BP_RESULT_FAILURE;
+
+	struct set_dce_clock_ps_allocation_v2_1 params;
+	uint32_t atom_pll_id;
+	uint32_t atom_clock_type;
+	const struct command_table_helper *cmd = bp->cmd_helper;
+
+	memset(&params, 0, sizeof(params));
+
+	if (!cmd->clock_source_id_to_atom(bp_params->pll_id, &atom_pll_id) ||
+			!cmd->dc_clock_type_to_atom(bp_params->clock_type,
+					&atom_clock_type))
+		return BP_RESULT_BADINPUT;
+
+	params.param.dceclksrc  = atom_pll_id;
+	params.param.dceclktype = atom_clock_type;
+
+	if (bp_params->clock_type == DCECLOCK_TYPE_DPREFCLK) {
+		if (bp_params->flags.USE_GENLOCK_AS_SOURCE_FOR_DPREFCLK)
+			params.param.dceclkflag |=
+					DCE_CLOCK_FLAG_PLL_REFCLK_SRC_GENLK;
+
+		if (bp_params->flags.USE_PCIE_AS_SOURCE_FOR_DPREFCLK)
+			params.param.dceclkflag |=
+					DCE_CLOCK_FLAG_PLL_REFCLK_SRC_PCIE;
+
+		if (bp_params->flags.USE_XTALIN_AS_SOURCE_FOR_DPREFCLK)
+			params.param.dceclkflag |=
+					DCE_CLOCK_FLAG_PLL_REFCLK_SRC_XTALIN;
+
+		if (bp_params->flags.USE_GENERICA_AS_SOURCE_FOR_DPREFCLK)
+			params.param.dceclkflag |=
+					DCE_CLOCK_FLAG_PLL_REFCLK_SRC_GENERICA;
+	} else
+		/* only program clock frequency if display clock is used;
+		 * VBIOS will program DPREFCLK
+		 * We need to convert from KHz units into 10KHz units
+		 */
+		params.param.dceclk_10khz = cpu_to_le32(
+				bp_params->target_clock_frequency / 10);
+	dm_logger_write(bp->base.ctx->logger, LOG_HW_SET_MODE,
+			"************************%s: target_clock_frequency = %d, "
+			"clock_type = %d\n", __func__,
+			bp_params->target_clock_frequency,
+			bp_params->clock_type);
+
+	if (EXEC_BIOS_CMD_TABLE(setdceclock, params)) {
+		/* Convert from 10KHz units back to KHz */
+		bp_params->target_clock_frequency = le32_to_cpu(
+				params.param.dceclk_10khz) * 10;
+		result = BP_RESULT_OK;
+	}
+
+	return result;
+}
+
+
+/******************************************************************************
+ ******************************************************************************
+ **
+ **                  GET SMU CLOCK INFO
+ **
+ ******************************************************************************
+ *****************************************************************************/
+
+static unsigned int get_smu_clock_info_v3_1(struct bios_parser *bp);
+
+static void init_get_smu_clock_info(struct bios_parser *bp)
+{
+	/* TODO: add switch for table version */
+	bp->cmd_tbl.get_smu_clock_info = get_smu_clock_info_v3_1;
+}
+
+static unsigned int get_smu_clock_info_v3_1(struct bios_parser *bp)
+{
+	struct atom_get_smu_clock_info_parameters_v3_1 smu_input = {0};
+	struct atom_get_smu_clock_info_output_parameters_v3_1 smu_output;
+
+	smu_input.command = GET_SMU_CLOCK_INFO_V3_1_GET_PLLVCO_FREQ;
+
+	/* Get Specific Clock */
+	if (EXEC_BIOS_CMD_TABLE(getsmuclockinfo, smu_input)) {
+		memmove(&smu_output, &smu_input, sizeof(
+			struct atom_get_smu_clock_info_parameters_v3_1));
+		return smu_output.atom_smu_outputclkfreq.syspllvcofreq_10khz;
+	}
+
+	return 0;
+}
+
diff --git a/drivers/gpu/drm/amd/display/dc/bios/command_table2.h b/drivers/gpu/drm/amd/display/dc/bios/command_table2.h
new file mode 100644
index 0000000..59061b8
--- /dev/null
+++ b/drivers/gpu/drm/amd/display/dc/bios/command_table2.h
@@ -0,0 +1,105 @@
+/*
+ * Copyright 2012-15 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: AMD
+ *
+ */
+
+#ifndef __DAL_COMMAND_TABLE2_H__
+#define __DAL_COMMAND_TABLE2_H__
+
+struct bios_parser;
+struct bp_encoder_control;
+
+struct cmd_tbl {
+	enum bp_result (*dig_encoder_control)(
+		struct bios_parser *bp,
+		struct bp_encoder_control *control);
+	enum bp_result (*encoder_control_dig1)(
+		struct bios_parser *bp,
+		struct bp_encoder_control *control);
+	enum bp_result (*encoder_control_dig2)(
+		struct bios_parser *bp,
+		struct bp_encoder_control *control);
+	enum bp_result (*transmitter_control)(
+		struct bios_parser *bp,
+		struct bp_transmitter_control *control);
+	enum bp_result (*set_pixel_clock)(
+		struct bios_parser *bp,
+		struct bp_pixel_clock_parameters *bp_params);
+	enum bp_result (*enable_spread_spectrum_on_ppll)(
+		struct bios_parser *bp,
+		struct bp_spread_spectrum_parameters *bp_params,
+		bool enable);
+	enum bp_result (*adjust_display_pll)(
+		struct bios_parser *bp,
+		struct bp_adjust_pixel_clock_parameters *bp_params);
+	enum bp_result (*dac1_encoder_control)(
+		struct bios_parser *bp,
+		bool enable,
+		uint32_t pixel_clock,
+		uint8_t dac_standard);
+	enum bp_result (*dac2_encoder_control)(
+		struct bios_parser *bp,
+		bool enable,
+		uint32_t pixel_clock,
+		uint8_t dac_standard);
+	enum bp_result (*dac1_output_control)(
+		struct bios_parser *bp,
+		bool enable);
+	enum bp_result (*dac2_output_control)(
+		struct bios_parser *bp,
+		bool enable);
+	enum bp_result (*set_crtc_timing)(
+		struct bios_parser *bp,
+		struct bp_hw_crtc_timing_parameters *bp_params);
+	enum bp_result (*select_crtc_source)(
+		struct bios_parser *bp,
+		struct bp_crtc_source_select *bp_params);
+	enum bp_result (*enable_crtc)(
+		struct bios_parser *bp,
+		enum controller_id controller_id,
+		bool enable);
+	enum bp_result (*enable_crtc_mem_req)(
+		struct bios_parser *bp,
+		enum controller_id controller_id,
+		bool enable);
+	enum bp_result (*program_clock)(
+		struct bios_parser *bp,
+		struct bp_pixel_clock_parameters *bp_params);
+	enum bp_result (*external_encoder_control)(
+			struct bios_parser *bp,
+			struct bp_external_encoder_control *cntl);
+	enum bp_result (*enable_disp_power_gating)(
+		struct bios_parser *bp,
+		enum controller_id crtc_id,
+		enum bp_pipe_control_action action);
+	enum bp_result (*set_dce_clock)(
+		struct bios_parser *bp,
+		struct bp_set_dce_clock_parameters *bp_params);
+	unsigned int (*get_smu_clock_info)(
+			struct bios_parser *bp);
+
+};
+
+void dal_firmware_parser_init_cmd_tbl(struct bios_parser *bp);
+
+#endif
diff --git a/drivers/gpu/drm/amd/display/dc/bios/command_table_helper2.c b/drivers/gpu/drm/amd/display/dc/bios/command_table_helper2.c
new file mode 100644
index 0000000..b0dcad2
--- /dev/null
+++ b/drivers/gpu/drm/amd/display/dc/bios/command_table_helper2.c
@@ -0,0 +1,260 @@
+/*
+ * Copyright 2012-15 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: AMD
+ *
+ */
+
+#include "dm_services.h"
+
+#include "ObjectID.h"
+#include "atomfirmware.h"
+#include "atomfirmwareid.h"
+
+#include "include/bios_parser_types.h"
+
+#include "command_table_helper2.h"
+
+bool dal_bios_parser_init_cmd_tbl_helper2(
+	const struct command_table_helper **h,
+	enum dce_version dce)
+{
+	switch (dce) {
+	case DCE_VERSION_8_0:
+		*h = dal_cmd_tbl_helper_dce80_get_table();
+		return true;
+
+	case DCE_VERSION_10_0:
+		*h = dal_cmd_tbl_helper_dce110_get_table();
+		return true;
+
+	case DCE_VERSION_11_0:
+		*h = dal_cmd_tbl_helper_dce110_get_table();
+		return true;
+
+	case DCE_VERSION_11_2:
+		*h = dal_cmd_tbl_helper_dce112_get_table2();
+		return true;
+#if defined(CONFIG_DRM_AMD_DC_DCE12_0)
+	case DCE_VERSION_12_0:
+		*h = dal_cmd_tbl_helper_dce112_get_table2();
+		return true;
+#endif
+
+	default:
+		/* Unsupported DCE */
+		BREAK_TO_DEBUGGER();
+		return false;
+	}
+}
+
+/* real implementations */
+
+bool dal_cmd_table_helper_controller_id_to_atom2(
+	enum controller_id id,
+	uint8_t *atom_id)
+{
+	if (atom_id == NULL) {
+		BREAK_TO_DEBUGGER();
+		return false;
+	}
+
+	switch (id) {
+	case CONTROLLER_ID_D0:
+		*atom_id = ATOM_CRTC1;
+		return true;
+	case CONTROLLER_ID_D1:
+		*atom_id = ATOM_CRTC2;
+		return true;
+	case CONTROLLER_ID_D2:
+		*atom_id = ATOM_CRTC3;
+		return true;
+	case CONTROLLER_ID_D3:
+		*atom_id = ATOM_CRTC4;
+		return true;
+	case CONTROLLER_ID_D4:
+		*atom_id = ATOM_CRTC5;
+		return true;
+	case CONTROLLER_ID_D5:
+		*atom_id = ATOM_CRTC6;
+		return true;
+	/* TODO :case CONTROLLER_ID_UNDERLAY0:
+		*atom_id = ATOM_UNDERLAY_PIPE0;
+		return true;
+	*/
+	case CONTROLLER_ID_UNDEFINED:
+		*atom_id = ATOM_CRTC_INVALID;
+		return true;
+	default:
+		/* Wrong controller id */
+		BREAK_TO_DEBUGGER();
+		return false;
+	}
+}
+
+/**
+ * dal_cmd_table_helper_transmitter_bp_to_atom2
+ *
+ * @brief
+ *  Translate the transmitter to the corresponding ATOM BIOS value
+ *
+ * @param
+ *   input transmitter
+ *   output digitalTransmitter
+ *    // =00: Digital Transmitter1 ( UNIPHY linkAB )
+ *    // =01: Digital Transmitter2 ( UNIPHY linkCD )
+ *    // =02: Digital Transmitter3 ( UNIPHY linkEF )
+ */
+uint8_t dal_cmd_table_helper_transmitter_bp_to_atom2(
+	enum transmitter t)
+{
+	switch (t) {
+	case TRANSMITTER_UNIPHY_A:
+	case TRANSMITTER_UNIPHY_B:
+	case TRANSMITTER_TRAVIS_LCD:
+		return 0;
+	case TRANSMITTER_UNIPHY_C:
+	case TRANSMITTER_UNIPHY_D:
+		return 1;
+	case TRANSMITTER_UNIPHY_E:
+	case TRANSMITTER_UNIPHY_F:
+		return 2;
+	default:
+		/* Invalid Transmitter Type! */
+		BREAK_TO_DEBUGGER();
+		return 0;
+	}
+}
+
+uint32_t dal_cmd_table_helper_encoder_mode_bp_to_atom2(
+	enum signal_type s,
+	bool enable_dp_audio)
+{
+	switch (s) {
+	case SIGNAL_TYPE_DVI_SINGLE_LINK:
+	case SIGNAL_TYPE_DVI_DUAL_LINK:
+		return ATOM_ENCODER_MODE_DVI;
+	case SIGNAL_TYPE_HDMI_TYPE_A:
+		return ATOM_ENCODER_MODE_HDMI;
+	case SIGNAL_TYPE_LVDS:
+		return ATOM_ENCODER_MODE_LVDS;
+	case SIGNAL_TYPE_EDP:
+	case SIGNAL_TYPE_DISPLAY_PORT_MST:
+	case SIGNAL_TYPE_DISPLAY_PORT:
+	case SIGNAL_TYPE_VIRTUAL:
+		if (enable_dp_audio)
+			return ATOM_ENCODER_MODE_DP_AUDIO;
+		else
+			return ATOM_ENCODER_MODE_DP;
+	case SIGNAL_TYPE_RGB:
+		return ATOM_ENCODER_MODE_CRT;
+	default:
+		return ATOM_ENCODER_MODE_CRT;
+	}
+}
+
+bool dal_cmd_table_helper_clock_source_id_to_ref_clk_src2(
+	enum clock_source_id id,
+	uint32_t *ref_clk_src_id)
+{
+	if (ref_clk_src_id == NULL) {
+		BREAK_TO_DEBUGGER();
+		return false;
+	}
+
+	switch (id) {
+	case CLOCK_SOURCE_ID_PLL1:
+		*ref_clk_src_id = ENCODER_REFCLK_SRC_P1PLL;
+		return true;
+	case CLOCK_SOURCE_ID_PLL2:
+		*ref_clk_src_id = ENCODER_REFCLK_SRC_P2PLL;
+		return true;
+	/*TODO:case CLOCK_SOURCE_ID_DCPLL:
+		*ref_clk_src_id = ENCODER_REFCLK_SRC_DCPLL;
+		return true;
+	*/
+	case CLOCK_SOURCE_ID_EXTERNAL:
+		*ref_clk_src_id = ENCODER_REFCLK_SRC_EXTCLK;
+		return true;
+	case CLOCK_SOURCE_ID_UNDEFINED:
+		*ref_clk_src_id = ENCODER_REFCLK_SRC_INVALID;
+		return true;
+	default:
+		/* Unsupported clock source id */
+		BREAK_TO_DEBUGGER();
+		return false;
+	}
+}
+
+uint8_t dal_cmd_table_helper_encoder_id_to_atom2(
+	enum encoder_id id)
+{
+	switch (id) {
+	case ENCODER_ID_INTERNAL_LVDS:
+		return ENCODER_OBJECT_ID_INTERNAL_LVDS;
+	case ENCODER_ID_INTERNAL_TMDS1:
+		return ENCODER_OBJECT_ID_INTERNAL_TMDS1;
+	case ENCODER_ID_INTERNAL_TMDS2:
+		return ENCODER_OBJECT_ID_INTERNAL_TMDS2;
+	case ENCODER_ID_INTERNAL_DAC1:
+		return ENCODER_OBJECT_ID_INTERNAL_DAC1;
+	case ENCODER_ID_INTERNAL_DAC2:
+		return ENCODER_OBJECT_ID_INTERNAL_DAC2;
+	case ENCODER_ID_INTERNAL_LVTM1:
+		return ENCODER_OBJECT_ID_INTERNAL_LVTM1;
+	case ENCODER_ID_INTERNAL_HDMI:
+		return ENCODER_OBJECT_ID_HDMI_INTERNAL;
+	case ENCODER_ID_EXTERNAL_TRAVIS:
+		return ENCODER_OBJECT_ID_TRAVIS;
+	case ENCODER_ID_EXTERNAL_NUTMEG:
+		return ENCODER_OBJECT_ID_NUTMEG;
+	case ENCODER_ID_INTERNAL_KLDSCP_TMDS1:
+		return ENCODER_OBJECT_ID_INTERNAL_KLDSCP_TMDS1;
+	case ENCODER_ID_INTERNAL_KLDSCP_DAC1:
+		return ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DAC1;
+	case ENCODER_ID_INTERNAL_KLDSCP_DAC2:
+		return ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DAC2;
+	case ENCODER_ID_EXTERNAL_MVPU_FPGA:
+		return ENCODER_OBJECT_ID_MVPU_FPGA;
+	case ENCODER_ID_INTERNAL_DDI:
+		return ENCODER_OBJECT_ID_INTERNAL_DDI;
+	case ENCODER_ID_INTERNAL_UNIPHY:
+		return ENCODER_OBJECT_ID_INTERNAL_UNIPHY;
+	case ENCODER_ID_INTERNAL_KLDSCP_LVTMA:
+		return ENCODER_OBJECT_ID_INTERNAL_KLDSCP_LVTMA;
+	case ENCODER_ID_INTERNAL_UNIPHY1:
+		return ENCODER_OBJECT_ID_INTERNAL_UNIPHY1;
+	case ENCODER_ID_INTERNAL_UNIPHY2:
+		return ENCODER_OBJECT_ID_INTERNAL_UNIPHY2;
+	case ENCODER_ID_INTERNAL_UNIPHY3:
+		return ENCODER_OBJECT_ID_INTERNAL_UNIPHY3;
+	case ENCODER_ID_INTERNAL_WIRELESS:
+		return ENCODER_OBJECT_ID_INTERNAL_VCE;
+	case ENCODER_ID_INTERNAL_VIRTUAL:
+		return ENCODER_OBJECT_ID_NONE;
+	case ENCODER_ID_UNKNOWN:
+		return ENCODER_OBJECT_ID_NONE;
+	default:
+		/* Invalid encoder id */
+		BREAK_TO_DEBUGGER();
+		return ENCODER_OBJECT_ID_NONE;
+	}
+}
diff --git a/drivers/gpu/drm/amd/display/dc/bios/command_table_helper2.h b/drivers/gpu/drm/amd/display/dc/bios/command_table_helper2.h
new file mode 100644
index 0000000..9f587c9
--- /dev/null
+++ b/drivers/gpu/drm/amd/display/dc/bios/command_table_helper2.h
@@ -0,0 +1,82 @@
+/*
+ * Copyright 2012-15 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: AMD
+ *
+ */
+
+#ifndef __DAL_COMMAND_TABLE_HELPER2_H__
+#define __DAL_COMMAND_TABLE_HELPER2_H__
+
+#include "dce80/command_table_helper_dce80.h"
+#include "dce110/command_table_helper_dce110.h"
+#include "dce112/command_table_helper2_dce112.h"
+
+struct command_table_helper {
+	bool (*controller_id_to_atom)(enum controller_id id, uint8_t *atom_id);
+	uint8_t (*encoder_action_to_atom)(
+			enum bp_encoder_control_action action);
+	uint32_t (*encoder_mode_bp_to_atom)(enum signal_type s,
+			bool enable_dp_audio);
+	bool (*engine_bp_to_atom)(enum engine_id engine_id,
+			uint32_t *atom_engine_id);
+	bool (*clock_source_id_to_atom)(enum clock_source_id id,
+			uint32_t *atom_pll_id);
+	bool (*clock_source_id_to_ref_clk_src)(
+			enum clock_source_id id,
+			uint32_t *ref_clk_src_id);
+	uint8_t (*transmitter_bp_to_atom)(enum transmitter t);
+	uint8_t (*encoder_id_to_atom)(enum encoder_id id);
+	uint8_t (*clock_source_id_to_atom_phy_clk_src_id)(
+			enum clock_source_id id);
+	uint8_t (*signal_type_to_atom_dig_mode)(enum signal_type s);
+	uint8_t (*hpd_sel_to_atom)(enum hpd_source_id id);
+	uint8_t (*dig_encoder_sel_to_atom)(enum engine_id engine_id);
+	uint8_t (*phy_id_to_atom)(enum transmitter t);
+	uint8_t (*disp_power_gating_action_to_atom)(
+			enum bp_pipe_control_action action);
+	bool (*dc_clock_type_to_atom)(enum bp_dce_clock_type id,
+			uint32_t *atom_clock_type);
+	uint8_t (*transmitter_color_depth_to_atom)(
+			enum transmitter_color_depth id);
+};
+
+bool dal_bios_parser_init_cmd_tbl_helper2(const struct command_table_helper **h,
+	enum dce_version dce);
+
+bool dal_cmd_table_helper_controller_id_to_atom2(
+	enum controller_id id,
+	uint8_t *atom_id);
+
+uint32_t dal_cmd_table_helper_encoder_mode_bp_to_atom2(
+	enum signal_type s,
+	bool enable_dp_audio);
+
+bool dal_cmd_table_helper_clock_source_id_to_ref_clk_src2(
+	enum clock_source_id id,
+	uint32_t *ref_clk_src_id);
+
+uint8_t dal_cmd_table_helper_transmitter_bp_to_atom2(
+	enum transmitter t);
+
+uint8_t dal_cmd_table_helper_encoder_id_to_atom2(
+	enum encoder_id id);
+#endif
diff --git a/drivers/gpu/drm/amd/display/dc/bios/dce112/command_table_helper2_dce112.c b/drivers/gpu/drm/amd/display/dc/bios/dce112/command_table_helper2_dce112.c
new file mode 100644
index 0000000..d342cdec
--- /dev/null
+++ b/drivers/gpu/drm/amd/display/dc/bios/dce112/command_table_helper2_dce112.c
@@ -0,0 +1,418 @@
+/*
+ * Copyright 2012-15 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: AMD
+ *
+ */
+
+#include "dm_services.h"
+
+#include "atom.h"
+
+#include "include/bios_parser_types.h"
+
+#include "../command_table_helper2.h"
+
+static uint8_t phy_id_to_atom(enum transmitter t)
+{
+	uint8_t atom_phy_id;
+
+	switch (t) {
+	case TRANSMITTER_UNIPHY_A:
+		atom_phy_id = ATOM_PHY_ID_UNIPHYA;
+		break;
+	case TRANSMITTER_UNIPHY_B:
+		atom_phy_id = ATOM_PHY_ID_UNIPHYB;
+		break;
+	case TRANSMITTER_UNIPHY_C:
+		atom_phy_id = ATOM_PHY_ID_UNIPHYC;
+		break;
+	case TRANSMITTER_UNIPHY_D:
+		atom_phy_id = ATOM_PHY_ID_UNIPHYD;
+		break;
+	case TRANSMITTER_UNIPHY_E:
+		atom_phy_id = ATOM_PHY_ID_UNIPHYE;
+		break;
+	case TRANSMITTER_UNIPHY_F:
+		atom_phy_id = ATOM_PHY_ID_UNIPHYF;
+		break;
+	case TRANSMITTER_UNIPHY_G:
+		atom_phy_id = ATOM_PHY_ID_UNIPHYG;
+		break;
+	default:
+		atom_phy_id = ATOM_PHY_ID_UNIPHYA;
+		break;
+	}
+	return atom_phy_id;
+}
+
+static uint8_t signal_type_to_atom_dig_mode(enum signal_type s)
+{
+	uint8_t atom_dig_mode = ATOM_TRANSMITTER_DIGMODE_V6_DP;
+
+	switch (s) {
+	case SIGNAL_TYPE_DISPLAY_PORT:
+	case SIGNAL_TYPE_EDP:
+		atom_dig_mode = ATOM_TRANSMITTER_DIGMODE_V6_DP;
+		break;
+	case SIGNAL_TYPE_DVI_SINGLE_LINK:
+	case SIGNAL_TYPE_DVI_DUAL_LINK:
+		atom_dig_mode = ATOM_TRANSMITTER_DIGMODE_V6_DVI;
+		break;
+	case SIGNAL_TYPE_HDMI_TYPE_A:
+		atom_dig_mode = ATOM_TRANSMITTER_DIGMODE_V6_HDMI;
+		break;
+	case SIGNAL_TYPE_DISPLAY_PORT_MST:
+		atom_dig_mode = ATOM_TRANSMITTER_DIGMODE_V6_DP_MST;
+		break;
+	default:
+		atom_dig_mode = ATOM_TRANSMITTER_DIGMODE_V6_DVI;
+		break;
+	}
+
+	return atom_dig_mode;
+}
+
+static uint8_t clock_source_id_to_atom_phy_clk_src_id(
+		enum clock_source_id id)
+{
+	uint8_t atom_phy_clk_src_id = 0;
+
+	switch (id) {
+	case CLOCK_SOURCE_ID_PLL0:
+		atom_phy_clk_src_id = ATOM_TRANSMITTER_CONFIG_V5_P0PLL;
+		break;
+	case CLOCK_SOURCE_ID_PLL1:
+		atom_phy_clk_src_id = ATOM_TRANSMITTER_CONFIG_V5_P1PLL;
+		break;
+	case CLOCK_SOURCE_ID_PLL2:
+		atom_phy_clk_src_id = ATOM_TRANSMITTER_CONFIG_V5_P2PLL;
+		break;
+	case CLOCK_SOURCE_ID_EXTERNAL:
+		atom_phy_clk_src_id = ATOM_TRANSMITTER_CONFIG_V5_REFCLK_SRC_EXT;
+		break;
+	default:
+		atom_phy_clk_src_id = ATOM_TRANSMITTER_CONFIG_V5_P1PLL;
+		break;
+	}
+
+	return atom_phy_clk_src_id >> 2;
+}
+
+static uint8_t hpd_sel_to_atom(enum hpd_source_id id)
+{
+	uint8_t atom_hpd_sel = 0;
+
+	switch (id) {
+	case HPD_SOURCEID1:
+		atom_hpd_sel = ATOM_TRANSMITTER_V6_HPD1_SEL;
+		break;
+	case HPD_SOURCEID2:
+		atom_hpd_sel = ATOM_TRANSMITTER_V6_HPD2_SEL;
+		break;
+	case HPD_SOURCEID3:
+		atom_hpd_sel = ATOM_TRANSMITTER_V6_HPD3_SEL;
+		break;
+	case HPD_SOURCEID4:
+		atom_hpd_sel = ATOM_TRANSMITTER_V6_HPD4_SEL;
+		break;
+	case HPD_SOURCEID5:
+		atom_hpd_sel = ATOM_TRANSMITTER_V6_HPD5_SEL;
+		break;
+	case HPD_SOURCEID6:
+		atom_hpd_sel = ATOM_TRANSMITTER_V6_HPD6_SEL;
+		break;
+	case HPD_SOURCEID_UNKNOWN:
+	default:
+		atom_hpd_sel = 0;
+		break;
+	}
+	return atom_hpd_sel;
+}
+
+static uint8_t dig_encoder_sel_to_atom(enum engine_id id)
+{
+	uint8_t atom_dig_encoder_sel = 0;
+
+	switch (id) {
+	case ENGINE_ID_DIGA:
+		atom_dig_encoder_sel = ATOM_TRANMSITTER_V6__DIGA_SEL;
+		break;
+	case ENGINE_ID_DIGB:
+		atom_dig_encoder_sel = ATOM_TRANMSITTER_V6__DIGB_SEL;
+		break;
+	case ENGINE_ID_DIGC:
+		atom_dig_encoder_sel = ATOM_TRANMSITTER_V6__DIGC_SEL;
+		break;
+	case ENGINE_ID_DIGD:
+		atom_dig_encoder_sel = ATOM_TRANMSITTER_V6__DIGD_SEL;
+		break;
+	case ENGINE_ID_DIGE:
+		atom_dig_encoder_sel = ATOM_TRANMSITTER_V6__DIGE_SEL;
+		break;
+	case ENGINE_ID_DIGF:
+		atom_dig_encoder_sel = ATOM_TRANMSITTER_V6__DIGF_SEL;
+		break;
+	case ENGINE_ID_DIGG:
+		atom_dig_encoder_sel = ATOM_TRANMSITTER_V6__DIGG_SEL;
+		break;
+	case ENGINE_ID_UNKNOWN:
+		/* No DIG_FRONT is associated with the DIG_BACKEND */
+		atom_dig_encoder_sel = 0;
+		break;
+	default:
+		atom_dig_encoder_sel = ATOM_TRANMSITTER_V6__DIGA_SEL;
+		break;
+	}
+
+	return atom_dig_encoder_sel;
+}
+
+static bool clock_source_id_to_atom(
+	enum clock_source_id id,
+	uint32_t *atom_pll_id)
+{
+	bool result = true;
+
+	if (atom_pll_id != NULL)
+		switch (id) {
+		case CLOCK_SOURCE_COMBO_PHY_PLL0:
+			*atom_pll_id = ATOM_COMBOPHY_PLL0;
+			break;
+		case CLOCK_SOURCE_COMBO_PHY_PLL1:
+			*atom_pll_id = ATOM_COMBOPHY_PLL1;
+			break;
+		case CLOCK_SOURCE_COMBO_PHY_PLL2:
+			*atom_pll_id = ATOM_COMBOPHY_PLL2;
+			break;
+		case CLOCK_SOURCE_COMBO_PHY_PLL3:
+			*atom_pll_id = ATOM_COMBOPHY_PLL3;
+			break;
+		case CLOCK_SOURCE_COMBO_PHY_PLL4:
+			*atom_pll_id = ATOM_COMBOPHY_PLL4;
+			break;
+		case CLOCK_SOURCE_COMBO_PHY_PLL5:
+			*atom_pll_id = ATOM_COMBOPHY_PLL5;
+			break;
+		case CLOCK_SOURCE_COMBO_DISPLAY_PLL0:
+			*atom_pll_id = ATOM_PPLL0;
+			break;
+		case CLOCK_SOURCE_ID_DFS:
+			*atom_pll_id = ATOM_GCK_DFS;
+			break;
+		case CLOCK_SOURCE_ID_VCE:
+			*atom_pll_id = ATOM_DP_DTO;
+			break;
+		case CLOCK_SOURCE_ID_DP_DTO:
+			*atom_pll_id = ATOM_DP_DTO;
+			break;
+		case CLOCK_SOURCE_ID_UNDEFINED:
+			/* Should not happen */
+			*atom_pll_id = ATOM_PPLL_INVALID;
+			result = false;
+			break;
+		default:
+			result = false;
+			break;
+		}
+
+	return result;
+}
+
+static bool engine_bp_to_atom(enum engine_id id, uint32_t *atom_engine_id)
+{
+	bool result = false;
+
+	if (atom_engine_id != NULL)
+		switch (id) {
+		case ENGINE_ID_DIGA:
+			*atom_engine_id = ASIC_INT_DIG1_ENCODER_ID;
+			result = true;
+			break;
+		case ENGINE_ID_DIGB:
+			*atom_engine_id = ASIC_INT_DIG2_ENCODER_ID;
+			result = true;
+			break;
+		case ENGINE_ID_DIGC:
+			*atom_engine_id = ASIC_INT_DIG3_ENCODER_ID;
+			result = true;
+			break;
+		case ENGINE_ID_DIGD:
+			*atom_engine_id = ASIC_INT_DIG4_ENCODER_ID;
+			result = true;
+			break;
+		case ENGINE_ID_DIGE:
+			*atom_engine_id = ASIC_INT_DIG5_ENCODER_ID;
+			result = true;
+			break;
+		case ENGINE_ID_DIGF:
+			*atom_engine_id = ASIC_INT_DIG6_ENCODER_ID;
+			result = true;
+			break;
+		case ENGINE_ID_DIGG:
+			*atom_engine_id = ASIC_INT_DIG7_ENCODER_ID;
+			result = true;
+			break;
+		case ENGINE_ID_DACA:
+			*atom_engine_id = ASIC_INT_DAC1_ENCODER_ID;
+			result = true;
+			break;
+		default:
+			break;
+		}
+
+	return result;
+}
+
+static uint8_t encoder_action_to_atom(enum bp_encoder_control_action action)
+{
+	uint8_t atom_action = 0;
+
+	switch (action) {
+	case ENCODER_CONTROL_ENABLE:
+		atom_action = ATOM_ENABLE;
+		break;
+	case ENCODER_CONTROL_DISABLE:
+		atom_action = ATOM_DISABLE;
+		break;
+	case ENCODER_CONTROL_SETUP:
+		atom_action = ATOM_ENCODER_CMD_STREAM_SETUP;
+		break;
+	case ENCODER_CONTROL_INIT:
+		atom_action = ATOM_ENCODER_INIT;
+		break;
+	default:
+		BREAK_TO_DEBUGGER(); /* Unhandled action in driver! */
+		break;
+	}
+
+	return atom_action;
+}
+
+static uint8_t disp_power_gating_action_to_atom(
+	enum bp_pipe_control_action action)
+{
+	uint8_t atom_pipe_action = 0;
+
+	switch (action) {
+	case ASIC_PIPE_DISABLE:
+		atom_pipe_action = ATOM_DISABLE;
+		break;
+	case ASIC_PIPE_ENABLE:
+		atom_pipe_action = ATOM_ENABLE;
+		break;
+	case ASIC_PIPE_INIT:
+		atom_pipe_action = ATOM_INIT;
+		break;
+	default:
+		ASSERT_CRITICAL(false); /* Unhandled action in driver! */
+		break;
+	}
+
+	return atom_pipe_action;
+}
+
+static bool dc_clock_type_to_atom(
+		enum bp_dce_clock_type id,
+		uint32_t *atom_clock_type)
+{
+	bool retCode = true;
+
+	if (atom_clock_type != NULL) {
+		switch (id) {
+		case DCECLOCK_TYPE_DISPLAY_CLOCK:
+			*atom_clock_type = DCE_CLOCK_TYPE_DISPCLK;
+			break;
+
+		case DCECLOCK_TYPE_DPREFCLK:
+			*atom_clock_type = DCE_CLOCK_TYPE_DPREFCLK;
+			break;
+
+		default:
+			ASSERT_CRITICAL(false); /* Unhandled action in driver! */
+			break;
+		}
+	}
+
+	return retCode;
+}
+
+static uint8_t transmitter_color_depth_to_atom(enum transmitter_color_depth id)
+{
+	uint8_t atomColorDepth = 0;
+
+	switch (id) {
+	case TRANSMITTER_COLOR_DEPTH_24:
+		atomColorDepth = PIXEL_CLOCK_V7_DEEPCOLOR_RATIO_DIS;
+		break;
+	case TRANSMITTER_COLOR_DEPTH_30:
+		atomColorDepth = PIXEL_CLOCK_V7_DEEPCOLOR_RATIO_5_4;
+		break;
+	case TRANSMITTER_COLOR_DEPTH_36:
+		atomColorDepth = PIXEL_CLOCK_V7_DEEPCOLOR_RATIO_3_2;
+		break;
+	case TRANSMITTER_COLOR_DEPTH_48:
+		atomColorDepth = PIXEL_CLOCK_V7_DEEPCOLOR_RATIO_2_1;
+		break;
+	default:
+		ASSERT_CRITICAL(false); /* Unhandled action in driver! */
+		break;
+	}
+
+	return atomColorDepth;
+}
+
+/* function table */
+static const struct command_table_helper command_table_helper_funcs = {
+	.controller_id_to_atom = dal_cmd_table_helper_controller_id_to_atom2,
+	.encoder_action_to_atom = encoder_action_to_atom,
+	.engine_bp_to_atom = engine_bp_to_atom,
+	.clock_source_id_to_atom = clock_source_id_to_atom,
+	.clock_source_id_to_atom_phy_clk_src_id =
+			clock_source_id_to_atom_phy_clk_src_id,
+	.signal_type_to_atom_dig_mode = signal_type_to_atom_dig_mode,
+	.hpd_sel_to_atom = hpd_sel_to_atom,
+	.dig_encoder_sel_to_atom = dig_encoder_sel_to_atom,
+	.phy_id_to_atom = phy_id_to_atom,
+	.disp_power_gating_action_to_atom = disp_power_gating_action_to_atom,
+	.clock_source_id_to_ref_clk_src = NULL,
+	.transmitter_bp_to_atom = NULL,
+	.encoder_id_to_atom = dal_cmd_table_helper_encoder_id_to_atom2,
+	.encoder_mode_bp_to_atom =
+			dal_cmd_table_helper_encoder_mode_bp_to_atom2,
+	.dc_clock_type_to_atom = dc_clock_type_to_atom,
+	.transmitter_color_depth_to_atom = transmitter_color_depth_to_atom,
+};
+
+/*
+ * dal_cmd_tbl_helper_dce112_get_table2
+ *
+ * @brief
+ * Initialize command table helper functions
+ *
+ * @param
+ * const struct command_table_helper **h - [out] struct of functions
+ *
+ */
+const struct command_table_helper *dal_cmd_tbl_helper_dce112_get_table2(void)
+{
+	return &command_table_helper_funcs;
+}
diff --git a/drivers/gpu/drm/amd/display/dc/bios/dce112/command_table_helper2_dce112.h b/drivers/gpu/drm/amd/display/dc/bios/dce112/command_table_helper2_dce112.h
new file mode 100644
index 0000000..abf28a0
--- /dev/null
+++ b/drivers/gpu/drm/amd/display/dc/bios/dce112/command_table_helper2_dce112.h
@@ -0,0 +1,34 @@
+/*
+ * Copyright 2012-15 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: AMD
+ *
+ */
+
+#ifndef __DAL_COMMAND_TABLE_HELPER2_DCE112_H__
+#define __DAL_COMMAND_TABLE_HELPER2_DCE112_H__
+
+struct command_table_helper;
+
+/* Initialize command table helper functions */
+const struct command_table_helper *dal_cmd_tbl_helper_dce112_get_table2(void);
+
+#endif /* __DAL_COMMAND_TABLE_HELPER2_DCE112_H__ */
-- 
2.5.5

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 101+ messages in thread

* [PATCH 064/100] drm/amd/display: Add DCE12 gpio support
       [not found] ` <1490041835-11255-1-git-send-email-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (47 preceding siblings ...)
  2017-03-20 20:29   ` [PATCH 063/100] drm/amd/display: Add DCE12 bios parser support Alex Deucher
@ 2017-03-20 20:29   ` Alex Deucher
  2017-03-20 20:30   ` [PATCH 065/100] drm/amd/display: Add DCE12 i2c/aux support Alex Deucher
                     ` (36 subsequent siblings)
  85 siblings, 0 replies; 101+ messages in thread
From: Alex Deucher @ 2017-03-20 20:29 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Alex Deucher, Harry Wentland

From: Harry Wentland <harry.wentland@amd.com>

Signed-off-by: Harry Wentland <harry.wentland@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 .../amd/display/dc/gpio/dce120/hw_factory_dce120.c | 197 ++++++++++
 .../amd/display/dc/gpio/dce120/hw_factory_dce120.h |  32 ++
 .../display/dc/gpio/dce120/hw_translate_dce120.c   | 408 +++++++++++++++++++++
 .../display/dc/gpio/dce120/hw_translate_dce120.h   |  34 ++
 4 files changed, 671 insertions(+)
 create mode 100644 drivers/gpu/drm/amd/display/dc/gpio/dce120/hw_factory_dce120.c
 create mode 100644 drivers/gpu/drm/amd/display/dc/gpio/dce120/hw_factory_dce120.h
 create mode 100644 drivers/gpu/drm/amd/display/dc/gpio/dce120/hw_translate_dce120.c
 create mode 100644 drivers/gpu/drm/amd/display/dc/gpio/dce120/hw_translate_dce120.h

diff --git a/drivers/gpu/drm/amd/display/dc/gpio/dce120/hw_factory_dce120.c b/drivers/gpu/drm/amd/display/dc/gpio/dce120/hw_factory_dce120.c
new file mode 100644
index 0000000..4ced9a7
--- /dev/null
+++ b/drivers/gpu/drm/amd/display/dc/gpio/dce120/hw_factory_dce120.c
@@ -0,0 +1,197 @@
+/*
+ * Copyright 2013-15 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: AMD
+ *
+ */
+
+#include "dm_services.h"
+#include "include/gpio_types.h"
+#include "../hw_factory.h"
+
+
+#include "../hw_gpio.h"
+#include "../hw_ddc.h"
+#include "../hw_hpd.h"
+
+#include "hw_factory_dce120.h"
+
+#include "vega10/DC/dce_12_0_offset.h"
+#include "vega10/DC/dce_12_0_sh_mask.h"
+#include "vega10/soc15ip.h"
+
+#define block HPD
+#define reg_num 0
+
+/* set field name */
+#define SF_HPD(reg_name, field_name, post_fix)\
+	.field_name = HPD0_ ## reg_name ## __ ## field_name ## post_fix
+
+#define BASE_INNER(seg) \
+	DCE_BASE__INST0_SEG ## seg
+
+/* compile time expand base address. */
+#define BASE(seg) \
+	BASE_INNER(seg)
+
+#define REG(reg_name)\
+		BASE(mm ## reg_name ## _BASE_IDX) + mm ## reg_name
+
+#define REGI(reg_name, block, id)\
+	BASE(mm ## block ## id ## _ ## reg_name ## _BASE_IDX) + \
+				mm ## block ## id ## _ ## reg_name
+
+
+#include "reg_helper.h"
+#include "../hpd_regs.h"
+
+#define hpd_regs(id) \
+{\
+	HPD_REG_LIST(id)\
+}
+
+static const struct hpd_registers hpd_regs[] = {
+	hpd_regs(0),
+	hpd_regs(1),
+	hpd_regs(2),
+	hpd_regs(3),
+	hpd_regs(4),
+	hpd_regs(5)
+};
+
+static const struct hpd_sh_mask hpd_shift = {
+		HPD_MASK_SH_LIST(__SHIFT)
+};
+
+static const struct hpd_sh_mask hpd_mask = {
+		HPD_MASK_SH_LIST(_MASK)
+};
+
+#include "../ddc_regs.h"
+
+ /* set field name */
+#define SF_DDC(reg_name, field_name, post_fix)\
+	.field_name = reg_name ## __ ## field_name ## post_fix
+
+static const struct ddc_registers ddc_data_regs[] = {
+	ddc_data_regs(1),
+	ddc_data_regs(2),
+	ddc_data_regs(3),
+	ddc_data_regs(4),
+	ddc_data_regs(5),
+	ddc_data_regs(6),
+	ddc_vga_data_regs,
+	ddc_i2c_data_regs
+};
+
+static const struct ddc_registers ddc_clk_regs[] = {
+	ddc_clk_regs(1),
+	ddc_clk_regs(2),
+	ddc_clk_regs(3),
+	ddc_clk_regs(4),
+	ddc_clk_regs(5),
+	ddc_clk_regs(6),
+	ddc_vga_clk_regs,
+	ddc_i2c_clk_regs
+};
+
+static const struct ddc_sh_mask ddc_shift = {
+		DDC_MASK_SH_LIST(__SHIFT)
+};
+
+static const struct ddc_sh_mask ddc_mask = {
+		DDC_MASK_SH_LIST(_MASK)
+};
+
+static void define_ddc_registers(
+		struct hw_gpio_pin *pin,
+		uint32_t en)
+{
+	struct hw_ddc *ddc = HW_DDC_FROM_BASE(pin);
+
+	switch (pin->id) {
+	case GPIO_ID_DDC_DATA:
+		ddc->regs = &ddc_data_regs[en];
+		ddc->base.regs = &ddc_data_regs[en].gpio;
+		break;
+	case GPIO_ID_DDC_CLOCK:
+		ddc->regs = &ddc_clk_regs[en];
+		ddc->base.regs = &ddc_clk_regs[en].gpio;
+		break;
+	default:
+		ASSERT_CRITICAL(false);
+		return;
+	}
+
+	ddc->shifts = &ddc_shift;
+	ddc->masks = &ddc_mask;
+
+}
+
+static void define_hpd_registers(struct hw_gpio_pin *pin, uint32_t en)
+{
+	struct hw_hpd *hpd = HW_HPD_FROM_BASE(pin);
+
+	hpd->regs = &hpd_regs[en];
+	hpd->shifts = &hpd_shift;
+	hpd->masks = &hpd_mask;
+	hpd->base.regs = &hpd_regs[en].gpio;
+}
+
+
+/* function table */
+static const struct hw_factory_funcs funcs = {
+	.create_ddc_data = dal_hw_ddc_create,
+	.create_ddc_clock = dal_hw_ddc_create,
+	.create_generic = NULL,
+	.create_hpd = dal_hw_hpd_create,
+	.create_sync = NULL,
+	.create_gsl = NULL,
+	.define_hpd_registers = define_hpd_registers,
+	.define_ddc_registers = define_ddc_registers
+};
+/*
+ * dal_hw_factory_dce120_init
+ *
+ * @brief
+ * Initialize HW factory function pointers and pin info
+ *
+ * @param
+ * struct hw_factory *factory - [out] struct of function pointers
+ */
+void dal_hw_factory_dce120_init(struct hw_factory *factory)
+{
+	/*TODO check ASIC CAPs*/
+	factory->number_of_pins[GPIO_ID_DDC_DATA] = 8;
+	factory->number_of_pins[GPIO_ID_DDC_CLOCK] = 8;
+	factory->number_of_pins[GPIO_ID_GENERIC] = 7;
+	factory->number_of_pins[GPIO_ID_HPD] = 6;
+	factory->number_of_pins[GPIO_ID_GPIO_PAD] = 31;
+	factory->number_of_pins[GPIO_ID_VIP_PAD] = 0;
+	factory->number_of_pins[GPIO_ID_SYNC] = 2;
+	factory->number_of_pins[GPIO_ID_GSL] = 4;
+
+	factory->funcs = &funcs;
+}
diff --git a/drivers/gpu/drm/amd/display/dc/gpio/dce120/hw_factory_dce120.h b/drivers/gpu/drm/amd/display/dc/gpio/dce120/hw_factory_dce120.h
new file mode 100644
index 0000000..db260c3
--- /dev/null
+++ b/drivers/gpu/drm/amd/display/dc/gpio/dce120/hw_factory_dce120.h
@@ -0,0 +1,32 @@
+/*
+ * Copyright 2013-15 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: AMD
+ *
+ */
+
+#ifndef __DAL_HW_FACTORY_DCE120_H__
+#define __DAL_HW_FACTORY_DCE120_H__
+
+/* Initialize HW factory function pointers and pin info */
+void dal_hw_factory_dce120_init(struct hw_factory *factory);
+
+#endif /* __DAL_HW_FACTORY_DCE120_H__ */
diff --git a/drivers/gpu/drm/amd/display/dc/gpio/dce120/hw_translate_dce120.c b/drivers/gpu/drm/amd/display/dc/gpio/dce120/hw_translate_dce120.c
new file mode 100644
index 0000000..af3843a
--- /dev/null
+++ b/drivers/gpu/drm/amd/display/dc/gpio/dce120/hw_translate_dce120.c
@@ -0,0 +1,408 @@
+/*
+ * Copyright 2013-15 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: AMD
+ *
+ */
+
+/*
+ * Pre-requisites: headers required by header of this unit
+ */
+
+#include "hw_translate_dce120.h"
+
+#include "dm_services.h"
+#include "include/gpio_types.h"
+#include "../hw_translate.h"
+
+#include "vega10/DC/dce_12_0_offset.h"
+#include "vega10/DC/dce_12_0_sh_mask.h"
+#include "vega10/soc15ip.h"
+
+/* begin *********************
+ * macros to expand the register list macros defined in the HW object header file */
+
+#define BASE_INNER(seg) \
+	DCE_BASE__INST0_SEG ## seg
+
+/* compile time expand base address. */
+#define BASE(seg) \
+	BASE_INNER(seg)
+
+#define REG(reg_name)\
+		BASE(mm ## reg_name ## _BASE_IDX) + mm ## reg_name
+
+#define REGI(reg_name, block, id)\
+	BASE(mm ## block ## id ## _ ## reg_name ## _BASE_IDX) + \
+				mm ## block ## id ## _ ## reg_name
+
+/* macros to expand the register list macros defined in the HW object header file
+ * end *********************/
+
+static bool offset_to_id(
+	uint32_t offset,
+	uint32_t mask,
+	enum gpio_id *id,
+	uint32_t *en)
+{
+	switch (offset) {
+	/* GENERIC */
+	case REG(DC_GPIO_GENERIC_A):
+		*id = GPIO_ID_GENERIC;
+		switch (mask) {
+		case DC_GPIO_GENERIC_A__DC_GPIO_GENERICA_A_MASK:
+			*en = GPIO_GENERIC_A;
+			return true;
+		case DC_GPIO_GENERIC_A__DC_GPIO_GENERICB_A_MASK:
+			*en = GPIO_GENERIC_B;
+			return true;
+		case DC_GPIO_GENERIC_A__DC_GPIO_GENERICC_A_MASK:
+			*en = GPIO_GENERIC_C;
+			return true;
+		case DC_GPIO_GENERIC_A__DC_GPIO_GENERICD_A_MASK:
+			*en = GPIO_GENERIC_D;
+			return true;
+		case DC_GPIO_GENERIC_A__DC_GPIO_GENERICE_A_MASK:
+			*en = GPIO_GENERIC_E;
+			return true;
+		case DC_GPIO_GENERIC_A__DC_GPIO_GENERICF_A_MASK:
+			*en = GPIO_GENERIC_F;
+			return true;
+		case DC_GPIO_GENERIC_A__DC_GPIO_GENERICG_A_MASK:
+			*en = GPIO_GENERIC_G;
+			return true;
+		default:
+			ASSERT_CRITICAL(false);
+			return false;
+		}
+	break;
+	/* HPD */
+	case REG(DC_GPIO_HPD_A):
+		*id = GPIO_ID_HPD;
+		switch (mask) {
+		case DC_GPIO_HPD_A__DC_GPIO_HPD1_A_MASK:
+			*en = GPIO_HPD_1;
+			return true;
+		case DC_GPIO_HPD_A__DC_GPIO_HPD2_A_MASK:
+			*en = GPIO_HPD_2;
+			return true;
+		case DC_GPIO_HPD_A__DC_GPIO_HPD3_A_MASK:
+			*en = GPIO_HPD_3;
+			return true;
+		case DC_GPIO_HPD_A__DC_GPIO_HPD4_A_MASK:
+			*en = GPIO_HPD_4;
+			return true;
+		case DC_GPIO_HPD_A__DC_GPIO_HPD5_A_MASK:
+			*en = GPIO_HPD_5;
+			return true;
+		case DC_GPIO_HPD_A__DC_GPIO_HPD6_A_MASK:
+			*en = GPIO_HPD_6;
+			return true;
+		default:
+			ASSERT_CRITICAL(false);
+			return false;
+		}
+	break;
+	/* SYNCA */
+	case REG(DC_GPIO_SYNCA_A):
+		*id = GPIO_ID_SYNC;
+		switch (mask) {
+		case DC_GPIO_SYNCA_A__DC_GPIO_HSYNCA_A_MASK:
+			*en = GPIO_SYNC_HSYNC_A;
+			return true;
+		case DC_GPIO_SYNCA_A__DC_GPIO_VSYNCA_A_MASK:
+			*en = GPIO_SYNC_VSYNC_A;
+			return true;
+		default:
+			ASSERT_CRITICAL(false);
+			return false;
+		}
+	break;
+	/* REG(DC_GPIO_GENLK_MASK */
+	case REG(DC_GPIO_GENLK_A):
+		*id = GPIO_ID_GSL;
+		switch (mask) {
+		case DC_GPIO_GENLK_A__DC_GPIO_GENLK_CLK_A_MASK:
+			*en = GPIO_GSL_GENLOCK_CLOCK;
+			return true;
+		case DC_GPIO_GENLK_A__DC_GPIO_GENLK_VSYNC_A_MASK:
+			*en = GPIO_GSL_GENLOCK_VSYNC;
+			return true;
+		case DC_GPIO_GENLK_A__DC_GPIO_SWAPLOCK_A_A_MASK:
+			*en = GPIO_GSL_SWAPLOCK_A;
+			return true;
+		case DC_GPIO_GENLK_A__DC_GPIO_SWAPLOCK_B_A_MASK:
+			*en = GPIO_GSL_SWAPLOCK_B;
+			return true;
+		default:
+			ASSERT_CRITICAL(false);
+			return false;
+		}
+	break;
+	/* DDC */
+	/* we don't care about the GPIO_ID for DDC
+	 * in DdcHandle it will use GPIO_ID_DDC_DATA/GPIO_ID_DDC_CLOCK
+	 * directly in the create method */
+	case REG(DC_GPIO_DDC1_A):
+		*en = GPIO_DDC_LINE_DDC1;
+		return true;
+	case REG(DC_GPIO_DDC2_A):
+		*en = GPIO_DDC_LINE_DDC2;
+		return true;
+	case REG(DC_GPIO_DDC3_A):
+		*en = GPIO_DDC_LINE_DDC3;
+		return true;
+	case REG(DC_GPIO_DDC4_A):
+		*en = GPIO_DDC_LINE_DDC4;
+		return true;
+	case REG(DC_GPIO_DDC5_A):
+		*en = GPIO_DDC_LINE_DDC5;
+		return true;
+	case REG(DC_GPIO_DDC6_A):
+		*en = GPIO_DDC_LINE_DDC6;
+		return true;
+	case REG(DC_GPIO_DDCVGA_A):
+		*en = GPIO_DDC_LINE_DDC_VGA;
+		return true;
+	/* GPIO_I2CPAD */
+	case REG(DC_GPIO_I2CPAD_A):
+		*en = GPIO_DDC_LINE_I2C_PAD;
+		return true;
+	/* Not implemented */
+	case REG(DC_GPIO_PWRSEQ_A):
+	case REG(DC_GPIO_PAD_STRENGTH_1):
+	case REG(DC_GPIO_PAD_STRENGTH_2):
+	case REG(DC_GPIO_DEBUG):
+		return false;
+	/* UNEXPECTED */
+	default:
+		ASSERT_CRITICAL(false);
+		return false;
+	}
+}
+
+static bool id_to_offset(
+	enum gpio_id id,
+	uint32_t en,
+	struct gpio_pin_info *info)
+{
+	bool result = true;
+
+	switch (id) {
+	case GPIO_ID_DDC_DATA:
+		info->mask = DC_GPIO_DDC6_A__DC_GPIO_DDC6DATA_A_MASK;
+		switch (en) {
+		case GPIO_DDC_LINE_DDC1:
+			info->offset = REG(DC_GPIO_DDC1_A);
+		break;
+		case GPIO_DDC_LINE_DDC2:
+			info->offset = REG(DC_GPIO_DDC2_A);
+		break;
+		case GPIO_DDC_LINE_DDC3:
+			info->offset = REG(DC_GPIO_DDC3_A);
+		break;
+		case GPIO_DDC_LINE_DDC4:
+			info->offset = REG(DC_GPIO_DDC4_A);
+		break;
+		case GPIO_DDC_LINE_DDC5:
+			info->offset = REG(DC_GPIO_DDC5_A);
+		break;
+		case GPIO_DDC_LINE_DDC6:
+			info->offset = REG(DC_GPIO_DDC6_A);
+		break;
+		case GPIO_DDC_LINE_DDC_VGA:
+			info->offset = REG(DC_GPIO_DDCVGA_A);
+		break;
+		case GPIO_DDC_LINE_I2C_PAD:
+			info->offset = REG(DC_GPIO_I2CPAD_A);
+		break;
+		default:
+			ASSERT_CRITICAL(false);
+			result = false;
+		}
+	break;
+	case GPIO_ID_DDC_CLOCK:
+		info->mask = DC_GPIO_DDC6_A__DC_GPIO_DDC6CLK_A_MASK;
+		switch (en) {
+		case GPIO_DDC_LINE_DDC1:
+			info->offset = REG(DC_GPIO_DDC1_A);
+		break;
+		case GPIO_DDC_LINE_DDC2:
+			info->offset = REG(DC_GPIO_DDC2_A);
+		break;
+		case GPIO_DDC_LINE_DDC3:
+			info->offset = REG(DC_GPIO_DDC3_A);
+		break;
+		case GPIO_DDC_LINE_DDC4:
+			info->offset = REG(DC_GPIO_DDC4_A);
+		break;
+		case GPIO_DDC_LINE_DDC5:
+			info->offset = REG(DC_GPIO_DDC5_A);
+		break;
+		case GPIO_DDC_LINE_DDC6:
+			info->offset = REG(DC_GPIO_DDC6_A);
+		break;
+		case GPIO_DDC_LINE_DDC_VGA:
+			info->offset = REG(DC_GPIO_DDCVGA_A);
+		break;
+		case GPIO_DDC_LINE_I2C_PAD:
+			info->offset = REG(DC_GPIO_I2CPAD_A);
+		break;
+		default:
+			ASSERT_CRITICAL(false);
+			result = false;
+		}
+	break;
+	case GPIO_ID_GENERIC:
+		info->offset = REG(DC_GPIO_GENERIC_A);
+		switch (en) {
+		case GPIO_GENERIC_A:
+			info->mask = DC_GPIO_GENERIC_A__DC_GPIO_GENERICA_A_MASK;
+		break;
+		case GPIO_GENERIC_B:
+			info->mask = DC_GPIO_GENERIC_A__DC_GPIO_GENERICB_A_MASK;
+		break;
+		case GPIO_GENERIC_C:
+			info->mask = DC_GPIO_GENERIC_A__DC_GPIO_GENERICC_A_MASK;
+		break;
+		case GPIO_GENERIC_D:
+			info->mask = DC_GPIO_GENERIC_A__DC_GPIO_GENERICD_A_MASK;
+		break;
+		case GPIO_GENERIC_E:
+			info->mask = DC_GPIO_GENERIC_A__DC_GPIO_GENERICE_A_MASK;
+		break;
+		case GPIO_GENERIC_F:
+			info->mask = DC_GPIO_GENERIC_A__DC_GPIO_GENERICF_A_MASK;
+		break;
+		case GPIO_GENERIC_G:
+			info->mask = DC_GPIO_GENERIC_A__DC_GPIO_GENERICG_A_MASK;
+		break;
+		default:
+			ASSERT_CRITICAL(false);
+			result = false;
+		}
+	break;
+	case GPIO_ID_HPD:
+		info->offset = REG(DC_GPIO_HPD_A);
+		switch (en) {
+		case GPIO_HPD_1:
+			info->mask = DC_GPIO_HPD_A__DC_GPIO_HPD1_A_MASK;
+		break;
+		case GPIO_HPD_2:
+			info->mask = DC_GPIO_HPD_A__DC_GPIO_HPD2_A_MASK;
+		break;
+		case GPIO_HPD_3:
+			info->mask = DC_GPIO_HPD_A__DC_GPIO_HPD3_A_MASK;
+		break;
+		case GPIO_HPD_4:
+			info->mask = DC_GPIO_HPD_A__DC_GPIO_HPD4_A_MASK;
+		break;
+		case GPIO_HPD_5:
+			info->mask = DC_GPIO_HPD_A__DC_GPIO_HPD5_A_MASK;
+		break;
+		case GPIO_HPD_6:
+			info->mask = DC_GPIO_HPD_A__DC_GPIO_HPD6_A_MASK;
+		break;
+		default:
+			ASSERT_CRITICAL(false);
+			result = false;
+		}
+	break;
+	case GPIO_ID_SYNC:
+		switch (en) {
+		case GPIO_SYNC_HSYNC_A:
+			info->offset = REG(DC_GPIO_SYNCA_A);
+			info->mask = DC_GPIO_SYNCA_A__DC_GPIO_HSYNCA_A_MASK;
+		break;
+		case GPIO_SYNC_VSYNC_A:
+			info->offset = REG(DC_GPIO_SYNCA_A);
+			info->mask = DC_GPIO_SYNCA_A__DC_GPIO_VSYNCA_A_MASK;
+		break;
+		case GPIO_SYNC_HSYNC_B:
+		case GPIO_SYNC_VSYNC_B:
+		default:
+			ASSERT_CRITICAL(false);
+			result = false;
+		}
+	break;
+	case GPIO_ID_GSL:
+		switch (en) {
+		case GPIO_GSL_GENLOCK_CLOCK:
+			info->offset = REG(DC_GPIO_GENLK_A);
+			info->mask = DC_GPIO_GENLK_A__DC_GPIO_GENLK_CLK_A_MASK;
+		break;
+		case GPIO_GSL_GENLOCK_VSYNC:
+			info->offset = REG(DC_GPIO_GENLK_A);
+			info->mask =
+				DC_GPIO_GENLK_A__DC_GPIO_GENLK_VSYNC_A_MASK;
+		break;
+		case GPIO_GSL_SWAPLOCK_A:
+			info->offset = REG(DC_GPIO_GENLK_A);
+			info->mask = DC_GPIO_GENLK_A__DC_GPIO_SWAPLOCK_A_A_MASK;
+		break;
+		case GPIO_GSL_SWAPLOCK_B:
+			info->offset = REG(DC_GPIO_GENLK_A);
+			info->mask = DC_GPIO_GENLK_A__DC_GPIO_SWAPLOCK_B_A_MASK;
+		break;
+		default:
+			ASSERT_CRITICAL(false);
+			result = false;
+		}
+	break;
+	case GPIO_ID_VIP_PAD:
+	default:
+		ASSERT_CRITICAL(false);
+		result = false;
+	}
+
+	if (result) {
+		info->offset_y = info->offset + 2;
+		info->offset_en = info->offset + 1;
+		info->offset_mask = info->offset - 1;
+
+		info->mask_y = info->mask;
+		info->mask_en = info->mask;
+		info->mask_mask = info->mask;
+	}
+
+	return result;
+}
+
+/* function table */
+static const struct hw_translate_funcs funcs = {
+	.offset_to_id = offset_to_id,
+	.id_to_offset = id_to_offset,
+};
+
+/*
+ * dal_hw_translate_dce120_init
+ *
+ * @brief
+ * Initialize HW translate function pointers.
+ *
+ * @param
+ * struct hw_translate *tr - [out] struct of function pointers
+ *
+ */
+void dal_hw_translate_dce120_init(struct hw_translate *tr)
+{
+	tr->funcs = &funcs;
+}
diff --git a/drivers/gpu/drm/amd/display/dc/gpio/dce120/hw_translate_dce120.h b/drivers/gpu/drm/amd/display/dc/gpio/dce120/hw_translate_dce120.h
new file mode 100644
index 0000000..c217668
--- /dev/null
+++ b/drivers/gpu/drm/amd/display/dc/gpio/dce120/hw_translate_dce120.h
@@ -0,0 +1,34 @@
+/*
+ * Copyright 2013-15 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: AMD
+ *
+ */
+
+#ifndef __DAL_HW_TRANSLATE_DCE120_H__
+#define __DAL_HW_TRANSLATE_DCE120_H__
+
+struct hw_translate;
+
+/* Initialize HW translate function pointers */
+void dal_hw_translate_dce120_init(struct hw_translate *tr);
+
+#endif /* __DAL_HW_TRANSLATE_DCE120_H__ */
-- 
2.5.5

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 101+ messages in thread

* [PATCH 065/100] drm/amd/display: Add DCE12 i2c/aux support
       [not found] ` <1490041835-11255-1-git-send-email-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (48 preceding siblings ...)
  2017-03-20 20:29   ` [PATCH 064/100] drm/amd/display: Add DCE12 gpio support Alex Deucher
@ 2017-03-20 20:30   ` Alex Deucher
  2017-03-20 20:30   ` [PATCH 066/100] drm/amd/display: Add DCE12 irq support Alex Deucher
                     ` (35 subsequent siblings)
  85 siblings, 0 replies; 101+ messages in thread
From: Alex Deucher @ 2017-03-20 20:30 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Alex Deucher, Harry Wentland

From: Harry Wentland <harry.wentland@amd.com>

Signed-off-by: Harry Wentland <harry.wentland@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 .../amd/display/dc/i2caux/dce120/i2caux_dce120.c   | 125 +++++++++++++++++++++
 .../amd/display/dc/i2caux/dce120/i2caux_dce120.h   |  32 ++++++
 2 files changed, 157 insertions(+)
 create mode 100644 drivers/gpu/drm/amd/display/dc/i2caux/dce120/i2caux_dce120.c
 create mode 100644 drivers/gpu/drm/amd/display/dc/i2caux/dce120/i2caux_dce120.h

diff --git a/drivers/gpu/drm/amd/display/dc/i2caux/dce120/i2caux_dce120.c b/drivers/gpu/drm/amd/display/dc/i2caux/dce120/i2caux_dce120.c
new file mode 100644
index 0000000..9119829
--- /dev/null
+++ b/drivers/gpu/drm/amd/display/dc/i2caux/dce120/i2caux_dce120.c
@@ -0,0 +1,125 @@
+/*
+ * Copyright 2012-16 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: AMD
+ *
+ */
+
+#include "dm_services.h"
+
+#include "include/i2caux_interface.h"
+#include "../i2caux.h"
+#include "../engine.h"
+#include "../i2c_engine.h"
+#include "../i2c_sw_engine.h"
+#include "../i2c_hw_engine.h"
+
+#include "../dce110/i2c_hw_engine_dce110.h"
+#include "../dce110/aux_engine_dce110.h"
+#include "../dce110/i2caux_dce110.h"
+
+#include "vega10/DC/dce_12_0_offset.h"
+#include "vega10/DC/dce_12_0_sh_mask.h"
+#include "vega10/soc15ip.h"
+
+/* begin *********************
+ * macros to expand the register list macros defined in the HW object header file */
+
+#define BASE_INNER(seg) \
+	DCE_BASE__INST0_SEG ## seg
+
+/* compile-time expansion of the base address */
+#define BASE(seg) \
+	BASE_INNER(seg)
+
+#define SR(reg_name)\
+		.reg_name = BASE(mm ## reg_name ## _BASE_IDX) +  \
+					mm ## reg_name
+
+#define SRI(reg_name, block, id)\
+	.reg_name = BASE(mm ## block ## id ## _ ## reg_name ## _BASE_IDX) + \
+					mm ## block ## id ## _ ## reg_name
+/* macros to expand the register list macros defined in the HW object header file
+ * end *********************/
+
+#define aux_regs(id)\
+[id] = {\
+	AUX_COMMON_REG_LIST(id), \
+	.AUX_RESET_MASK = DP_AUX0_AUX_CONTROL__AUX_RESET_MASK \
+}
+
+static const struct dce110_aux_registers dce120_aux_regs[] = {
+		aux_regs(0),
+		aux_regs(1),
+		aux_regs(2),
+		aux_regs(3),
+		aux_regs(4),
+		aux_regs(5),
+};
+
+#define hw_engine_regs(id)\
+{\
+		I2C_HW_ENGINE_COMMON_REG_LIST(id) \
+}
+
+static const struct dce110_i2c_hw_engine_registers dce120_hw_engine_regs[] = {
+		hw_engine_regs(1),
+		hw_engine_regs(2),
+		hw_engine_regs(3),
+		hw_engine_regs(4),
+		hw_engine_regs(5),
+		hw_engine_regs(6)
+};
+
+static const struct dce110_i2c_hw_engine_shift i2c_shift = {
+		I2C_COMMON_MASK_SH_LIST_DCE110(__SHIFT)
+};
+
+static const struct dce110_i2c_hw_engine_mask i2c_mask = {
+		I2C_COMMON_MASK_SH_LIST_DCE110(_MASK)
+};
+
+struct i2caux *dal_i2caux_dce120_create(
+	struct dc_context *ctx)
+{
+	struct i2caux_dce110 *i2caux_dce110 =
+		dm_alloc(sizeof(struct i2caux_dce110));
+
+	if (!i2caux_dce110) {
+		ASSERT_CRITICAL(false);
+		return NULL;
+	}
+
+	if (dal_i2caux_dce110_construct(
+			i2caux_dce110,
+			ctx,
+			dce120_aux_regs,
+			dce120_hw_engine_regs,
+			&i2c_shift,
+			&i2c_mask))
+		return &i2caux_dce110->base;
+
+	ASSERT_CRITICAL(false);
+
+	dm_free(i2caux_dce110);
+
+	return NULL;
+}
diff --git a/drivers/gpu/drm/amd/display/dc/i2caux/dce120/i2caux_dce120.h b/drivers/gpu/drm/amd/display/dc/i2caux/dce120/i2caux_dce120.h
new file mode 100644
index 0000000..b6ac476
--- /dev/null
+++ b/drivers/gpu/drm/amd/display/dc/i2caux/dce120/i2caux_dce120.h
@@ -0,0 +1,32 @@
+/*
+ * Copyright 2012-16 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: AMD
+ *
+ */
+
+#ifndef __DAL_I2C_AUX_DCE120_H__
+#define __DAL_I2C_AUX_DCE120_H__
+
+struct i2caux *dal_i2caux_dce120_create(
+	struct dc_context *ctx);
+
+#endif /* __DAL_I2C_AUX_DCE120_H__ */
-- 
2.5.5


* [PATCH 066/100] drm/amd/display: Add DCE12 irq support
       [not found] ` <1490041835-11255-1-git-send-email-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (49 preceding siblings ...)
  2017-03-20 20:30   ` [PATCH 065/100] drm/amd/display: Add DCE12 i2c/aux support Alex Deucher
@ 2017-03-20 20:30   ` Alex Deucher
  2017-03-20 20:30   ` [PATCH 067/100] drm/amd/display: Add DCE12 core support Alex Deucher
                     ` (34 subsequent siblings)
  85 siblings, 0 replies; 101+ messages in thread
From: Alex Deucher @ 2017-03-20 20:30 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Alex Deucher, Harry Wentland

From: Harry Wentland <harry.wentland@amd.com>

Signed-off-by: Harry Wentland <harry.wentland@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 .../amd/display/dc/irq/dce120/irq_service_dce120.c | 293 +++++++++++++++++++++
 .../amd/display/dc/irq/dce120/irq_service_dce120.h |  34 +++
 2 files changed, 327 insertions(+)
 create mode 100644 drivers/gpu/drm/amd/display/dc/irq/dce120/irq_service_dce120.c
 create mode 100644 drivers/gpu/drm/amd/display/dc/irq/dce120/irq_service_dce120.h

diff --git a/drivers/gpu/drm/amd/display/dc/irq/dce120/irq_service_dce120.c b/drivers/gpu/drm/amd/display/dc/irq/dce120/irq_service_dce120.c
new file mode 100644
index 0000000..5a263b2
--- /dev/null
+++ b/drivers/gpu/drm/amd/display/dc/irq/dce120/irq_service_dce120.c
@@ -0,0 +1,293 @@
+/*
+ * Copyright 2012-15 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: AMD
+ *
+ */
+
+#include "dm_services.h"
+
+#include "include/logger_interface.h"
+
+#include "irq_service_dce120.h"
+#include "../dce110/irq_service_dce110.h"
+
+#include "vega10/DC/dce_12_0_offset.h"
+#include "vega10/DC/dce_12_0_sh_mask.h"
+#include "vega10/soc15ip.h"
+
+#include "ivsrcid/ivsrcid_vislands30.h"
+
+static bool hpd_ack(
+	struct irq_service *irq_service,
+	const struct irq_source_info *info)
+{
+	uint32_t addr = info->status_reg;
+	uint32_t value = dm_read_reg(irq_service->ctx, addr);
+	uint32_t current_status =
+		get_reg_field_value(
+			value,
+			HPD0_DC_HPD_INT_STATUS,
+			DC_HPD_SENSE_DELAYED);
+
+	dal_irq_service_ack_generic(irq_service, info);
+
+	value = dm_read_reg(irq_service->ctx, info->enable_reg);
+
+	set_reg_field_value(
+		value,
+		current_status ? 0 : 1,
+		HPD0_DC_HPD_INT_CONTROL,
+		DC_HPD_INT_POLARITY);
+
+	dm_write_reg(irq_service->ctx, info->enable_reg, value);
+
+	return true;
+}
+
+static const struct irq_source_info_funcs hpd_irq_info_funcs = {
+	.set = NULL,
+	.ack = hpd_ack
+};
+
+static const struct irq_source_info_funcs hpd_rx_irq_info_funcs = {
+	.set = NULL,
+	.ack = NULL
+};
+
+static const struct irq_source_info_funcs pflip_irq_info_funcs = {
+	.set = NULL,
+	.ack = NULL
+};
+
+static const struct irq_source_info_funcs vblank_irq_info_funcs = {
+	.set = NULL,
+	.ack = NULL
+};
+
+#define BASE_INNER(seg) \
+	DCE_BASE__INST0_SEG ## seg
+
+#define BASE(seg) \
+	BASE_INNER(seg)
+
+#define SRI(reg_name, block, id)\
+	BASE(mm ## block ## id ## _ ## reg_name ## _BASE_IDX) + \
+			mm ## block ## id ## _ ## reg_name
+
+
+#define IRQ_REG_ENTRY(block, reg_num, reg1, mask1, reg2, mask2)\
+	.enable_reg = SRI(reg1, block, reg_num),\
+	.enable_mask = \
+		block ## reg_num ## _ ## reg1 ## __ ## mask1 ## _MASK,\
+	.enable_value = {\
+		block ## reg_num ## _ ## reg1 ## __ ## mask1 ## _MASK,\
+		~block ## reg_num ## _ ## reg1 ## __ ## mask1 ## _MASK \
+	},\
+	.ack_reg = SRI(reg2, block, reg_num),\
+	.ack_mask = \
+		block ## reg_num ## _ ## reg2 ## __ ## mask2 ## _MASK,\
+	.ack_value = \
+		block ## reg_num ## _ ## reg2 ## __ ## mask2 ## _MASK \
+
+#define hpd_int_entry(reg_num)\
+	[DC_IRQ_SOURCE_HPD1 + reg_num] = {\
+		IRQ_REG_ENTRY(HPD, reg_num,\
+			DC_HPD_INT_CONTROL, DC_HPD_INT_EN,\
+			DC_HPD_INT_CONTROL, DC_HPD_INT_ACK),\
+		.status_reg = SRI(DC_HPD_INT_STATUS, HPD, reg_num),\
+		.funcs = &hpd_irq_info_funcs\
+	}
+
+#define hpd_rx_int_entry(reg_num)\
+	[DC_IRQ_SOURCE_HPD1RX + reg_num] = {\
+		IRQ_REG_ENTRY(HPD, reg_num,\
+			DC_HPD_INT_CONTROL, DC_HPD_RX_INT_EN,\
+			DC_HPD_INT_CONTROL, DC_HPD_RX_INT_ACK),\
+		.status_reg = SRI(DC_HPD_INT_STATUS, HPD, reg_num),\
+		.funcs = &hpd_rx_irq_info_funcs\
+	}
+#define pflip_int_entry(reg_num)\
+	[DC_IRQ_SOURCE_PFLIP1 + reg_num] = {\
+		IRQ_REG_ENTRY(DCP, reg_num, \
+			GRPH_INTERRUPT_CONTROL, GRPH_PFLIP_INT_MASK, \
+			GRPH_INTERRUPT_STATUS, GRPH_PFLIP_INT_CLEAR),\
+		.status_reg = SRI(GRPH_INTERRUPT_STATUS, DCP, reg_num),\
+		.funcs = &pflip_irq_info_funcs\
+	}
+
+#define vupdate_int_entry(reg_num)\
+	[DC_IRQ_SOURCE_VUPDATE1 + reg_num] = {\
+		IRQ_REG_ENTRY(CRTC, reg_num,\
+			CRTC_INTERRUPT_CONTROL, CRTC_V_UPDATE_INT_MSK,\
+			CRTC_V_UPDATE_INT_STATUS, CRTC_V_UPDATE_INT_CLEAR),\
+		.funcs = &vblank_irq_info_funcs\
+	}
+
+#define vblank_int_entry(reg_num)\
+	[DC_IRQ_SOURCE_VBLANK1 + reg_num] = {\
+		IRQ_REG_ENTRY(LB, reg_num,\
+			LB_INTERRUPT_MASK, VBLANK_INTERRUPT_MASK,\
+			LB_VBLANK_STATUS, VBLANK_ACK),\
+		.funcs = &vblank_irq_info_funcs\
+	}
+
+#define dummy_irq_entry() \
+	{\
+		.funcs = &dummy_irq_info_funcs\
+	}
+
+#define i2c_int_entry(reg_num) \
+	[DC_IRQ_SOURCE_I2C_DDC ## reg_num] = dummy_irq_entry()
+
+#define dp_sink_int_entry(reg_num) \
+	[DC_IRQ_SOURCE_DPSINK ## reg_num] = dummy_irq_entry()
+
+#define gpio_pad_int_entry(reg_num) \
+	[DC_IRQ_SOURCE_GPIOPAD ## reg_num] = dummy_irq_entry()
+
+#define dc_underflow_int_entry(reg_num) \
+	[DC_IRQ_SOURCE_DC ## reg_num ## UNDERFLOW] = dummy_irq_entry()
+
+static const struct irq_source_info_funcs dummy_irq_info_funcs = {
+	.set = dal_irq_service_dummy_set,
+	.ack = dal_irq_service_dummy_ack
+};
+
+static const struct irq_source_info
+irq_source_info_dce120[DAL_IRQ_SOURCES_NUMBER] = {
+	[DC_IRQ_SOURCE_INVALID] = dummy_irq_entry(),
+	hpd_int_entry(0),
+	hpd_int_entry(1),
+	hpd_int_entry(2),
+	hpd_int_entry(3),
+	hpd_int_entry(4),
+	hpd_int_entry(5),
+	hpd_rx_int_entry(0),
+	hpd_rx_int_entry(1),
+	hpd_rx_int_entry(2),
+	hpd_rx_int_entry(3),
+	hpd_rx_int_entry(4),
+	hpd_rx_int_entry(5),
+	i2c_int_entry(1),
+	i2c_int_entry(2),
+	i2c_int_entry(3),
+	i2c_int_entry(4),
+	i2c_int_entry(5),
+	i2c_int_entry(6),
+	dp_sink_int_entry(1),
+	dp_sink_int_entry(2),
+	dp_sink_int_entry(3),
+	dp_sink_int_entry(4),
+	dp_sink_int_entry(5),
+	dp_sink_int_entry(6),
+	[DC_IRQ_SOURCE_TIMER] = dummy_irq_entry(),
+	pflip_int_entry(0),
+	pflip_int_entry(1),
+	pflip_int_entry(2),
+	pflip_int_entry(3),
+	pflip_int_entry(4),
+	pflip_int_entry(5),
+	[DC_IRQ_SOURCE_PFLIP_UNDERLAY0] = dummy_irq_entry(),
+	gpio_pad_int_entry(0),
+	gpio_pad_int_entry(1),
+	gpio_pad_int_entry(2),
+	gpio_pad_int_entry(3),
+	gpio_pad_int_entry(4),
+	gpio_pad_int_entry(5),
+	gpio_pad_int_entry(6),
+	gpio_pad_int_entry(7),
+	gpio_pad_int_entry(8),
+	gpio_pad_int_entry(9),
+	gpio_pad_int_entry(10),
+	gpio_pad_int_entry(11),
+	gpio_pad_int_entry(12),
+	gpio_pad_int_entry(13),
+	gpio_pad_int_entry(14),
+	gpio_pad_int_entry(15),
+	gpio_pad_int_entry(16),
+	gpio_pad_int_entry(17),
+	gpio_pad_int_entry(18),
+	gpio_pad_int_entry(19),
+	gpio_pad_int_entry(20),
+	gpio_pad_int_entry(21),
+	gpio_pad_int_entry(22),
+	gpio_pad_int_entry(23),
+	gpio_pad_int_entry(24),
+	gpio_pad_int_entry(25),
+	gpio_pad_int_entry(26),
+	gpio_pad_int_entry(27),
+	gpio_pad_int_entry(28),
+	gpio_pad_int_entry(29),
+	gpio_pad_int_entry(30),
+	dc_underflow_int_entry(1),
+	dc_underflow_int_entry(2),
+	dc_underflow_int_entry(3),
+	dc_underflow_int_entry(4),
+	dc_underflow_int_entry(5),
+	dc_underflow_int_entry(6),
+	[DC_IRQ_SOURCE_DMCU_SCP] = dummy_irq_entry(),
+	[DC_IRQ_SOURCE_VBIOS_SW] = dummy_irq_entry(),
+	vupdate_int_entry(0),
+	vupdate_int_entry(1),
+	vupdate_int_entry(2),
+	vupdate_int_entry(3),
+	vupdate_int_entry(4),
+	vupdate_int_entry(5),
+	vblank_int_entry(0),
+	vblank_int_entry(1),
+	vblank_int_entry(2),
+	vblank_int_entry(3),
+	vblank_int_entry(4),
+	vblank_int_entry(5),
+};
+
+static const struct irq_service_funcs irq_service_funcs_dce120 = {
+		.to_dal_irq_source = to_dal_irq_source_dce110
+};
+
+static bool construct(
+	struct irq_service *irq_service,
+	struct irq_service_init_data *init_data)
+{
+	if (!dal_irq_service_construct(irq_service, init_data))
+		return false;
+
+	irq_service->info = irq_source_info_dce120;
+	irq_service->funcs = &irq_service_funcs_dce120;
+
+	return true;
+}
+
+struct irq_service *dal_irq_service_dce120_create(
+	struct irq_service_init_data *init_data)
+{
+	struct irq_service *irq_service = dm_alloc(sizeof(*irq_service));
+
+	if (!irq_service)
+		return NULL;
+
+	if (construct(irq_service, init_data))
+		return irq_service;
+
+	dm_free(irq_service);
+	return NULL;
+}
diff --git a/drivers/gpu/drm/amd/display/dc/irq/dce120/irq_service_dce120.h b/drivers/gpu/drm/amd/display/dc/irq/dce120/irq_service_dce120.h
new file mode 100644
index 0000000..420c96e
--- /dev/null
+++ b/drivers/gpu/drm/amd/display/dc/irq/dce120/irq_service_dce120.h
@@ -0,0 +1,34 @@
+/*
+ * Copyright 2012-15 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: AMD
+ *
+ */
+
+#ifndef __DAL_IRQ_SERVICE_DCE120_H__
+#define __DAL_IRQ_SERVICE_DCE120_H__
+
+#include "../irq_service.h"
+
+struct irq_service *dal_irq_service_dce120_create(
+	struct irq_service_init_data *init_data);
+
+#endif
-- 
2.5.5


* [PATCH 067/100] drm/amd/display: Add DCE12 core support
       [not found] ` <1490041835-11255-1-git-send-email-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (50 preceding siblings ...)
  2017-03-20 20:30   ` [PATCH 066/100] drm/amd/display: Add DCE12 irq support Alex Deucher
@ 2017-03-20 20:30   ` Alex Deucher
  2017-03-20 20:30   ` [PATCH 068/100] drm/amd/display: Enable DCE12 support Alex Deucher
                     ` (33 subsequent siblings)
  85 siblings, 0 replies; 101+ messages in thread
From: Alex Deucher @ 2017-03-20 20:30 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Alex Deucher, Harry Wentland

From: Harry Wentland <harry.wentland@amd.com>

Signed-off-by: Harry Wentland <harry.wentland@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 .../amd/display/dc/dce120/dce120_hw_sequencer.c    |  197 ++++
 .../amd/display/dc/dce120/dce120_hw_sequencer.h    |   36 +
 drivers/gpu/drm/amd/display/dc/dce120/dce120_ipp.c |   58 +
 drivers/gpu/drm/amd/display/dc/dce120/dce120_ipp.h |   62 ++
 .../drm/amd/display/dc/dce120/dce120_ipp_cursor.c  |  202 ++++
 .../drm/amd/display/dc/dce120/dce120_ipp_gamma.c   |  167 +++
 .../drm/amd/display/dc/dce120/dce120_mem_input.c   |  340 ++++++
 .../drm/amd/display/dc/dce120/dce120_mem_input.h   |   37 +
 .../drm/amd/display/dc/dce120/dce120_resource.c    | 1099 +++++++++++++++++++
 .../drm/amd/display/dc/dce120/dce120_resource.h    |   39 +
 .../display/dc/dce120/dce120_timing_generator.c    | 1109 ++++++++++++++++++++
 .../display/dc/dce120/dce120_timing_generator.h    |   41 +
 12 files changed, 3387 insertions(+)
 create mode 100644 drivers/gpu/drm/amd/display/dc/dce120/dce120_hw_sequencer.c
 create mode 100644 drivers/gpu/drm/amd/display/dc/dce120/dce120_hw_sequencer.h
 create mode 100644 drivers/gpu/drm/amd/display/dc/dce120/dce120_ipp.c
 create mode 100644 drivers/gpu/drm/amd/display/dc/dce120/dce120_ipp.h
 create mode 100644 drivers/gpu/drm/amd/display/dc/dce120/dce120_ipp_cursor.c
 create mode 100644 drivers/gpu/drm/amd/display/dc/dce120/dce120_ipp_gamma.c
 create mode 100644 drivers/gpu/drm/amd/display/dc/dce120/dce120_mem_input.c
 create mode 100644 drivers/gpu/drm/amd/display/dc/dce120/dce120_mem_input.h
 create mode 100644 drivers/gpu/drm/amd/display/dc/dce120/dce120_resource.c
 create mode 100644 drivers/gpu/drm/amd/display/dc/dce120/dce120_resource.h
 create mode 100644 drivers/gpu/drm/amd/display/dc/dce120/dce120_timing_generator.c
 create mode 100644 drivers/gpu/drm/amd/display/dc/dce120/dce120_timing_generator.h

diff --git a/drivers/gpu/drm/amd/display/dc/dce120/dce120_hw_sequencer.c b/drivers/gpu/drm/amd/display/dc/dce120/dce120_hw_sequencer.c
new file mode 100644
index 0000000..f5ffd8f6
--- /dev/null
+++ b/drivers/gpu/drm/amd/display/dc/dce120/dce120_hw_sequencer.c
@@ -0,0 +1,197 @@
+/*
+ * Copyright 2015 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: AMD
+ *
+ */
+
+#include "dm_services.h"
+#include "dc.h"
+#include "core_dc.h"
+#include "core_types.h"
+#include "dce120_hw_sequencer.h"
+
+#include "dce110/dce110_hw_sequencer.h"
+
+/* include DCE12.0 register header files */
+#include "vega10/DC/dce_12_0_offset.h"
+#include "vega10/DC/dce_12_0_sh_mask.h"
+#include "vega10/soc15ip.h"
+#include "reg_helper.h"
+
+struct dce120_hw_seq_reg_offsets {
+	uint32_t crtc;
+};
+
+static const struct dce120_hw_seq_reg_offsets reg_offsets[] = {
+{
+	.crtc = (mmCRTC0_CRTC_GSL_CONTROL - mmCRTC0_CRTC_GSL_CONTROL),
+},
+{
+	.crtc = (mmCRTC1_CRTC_GSL_CONTROL - mmCRTC0_CRTC_GSL_CONTROL),
+},
+{
+	.crtc = (mmCRTC2_CRTC_GSL_CONTROL - mmCRTC0_CRTC_GSL_CONTROL),
+},
+{
+	.crtc = (mmCRTC3_CRTC_GSL_CONTROL - mmCRTC0_CRTC_GSL_CONTROL),
+},
+{
+	.crtc = (mmCRTC4_CRTC_GSL_CONTROL - mmCRTC0_CRTC_GSL_CONTROL),
+},
+{
+	.crtc = (mmCRTC5_CRTC_GSL_CONTROL - mmCRTC0_CRTC_GSL_CONTROL),
+}
+};
+
+#define HW_REG_CRTC(reg, id)\
+	(reg + reg_offsets[id].crtc)
+
+#define CNTL_ID(controller_id)\
+	controller_id
+/*******************************************************************************
+ * Private definitions
+ ******************************************************************************/
+#if 0
+static void dce120_init_pte(struct dc_context *ctx, uint8_t controller_id)
+{
+	uint32_t addr;
+	uint32_t value = 0;
+	uint32_t chunk_int = 0;
+	uint32_t chunk_mul = 0;
+/*
+	addr = mmDCP0_DVMM_PTE_CONTROL + controller_id *
+			(mmDCP1_DVMM_PTE_CONTROL- mmDCP0_DVMM_PTE_CONTROL);
+
+	value = dm_read_reg(ctx, addr);
+
+	set_reg_field_value(
+			value, 0, DCP, controller_id,
+			DVMM_PTE_CONTROL,
+			DVMM_USE_SINGLE_PTE);
+
+	set_reg_field_value_soc15(
+			value, 1, DCP, controller_id,
+			DVMM_PTE_CONTROL,
+			DVMM_PTE_BUFFER_MODE0);
+
+	set_reg_field_value_soc15(
+			value, 1, DCP, controller_id,
+			DVMM_PTE_CONTROL,
+			DVMM_PTE_BUFFER_MODE1);
+
+	dm_write_reg(ctx, addr, value);*/
+
+	addr = mmDVMM_PTE_REQ;
+	value = dm_read_reg(ctx, addr);
+
+	chunk_int = get_reg_field_value(
+		value,
+		DVMM_PTE_REQ,
+		HFLIP_PTEREQ_PER_CHUNK_INT);
+
+	chunk_mul = get_reg_field_value(
+		value,
+		DVMM_PTE_REQ,
+		HFLIP_PTEREQ_PER_CHUNK_MULTIPLIER);
+
+	if (chunk_int != 0x4 || chunk_mul != 0x4) {
+
+		set_reg_field_value(
+			value,
+			255,
+			DVMM_PTE_REQ,
+			MAX_PTEREQ_TO_ISSUE);
+
+		set_reg_field_value(
+			value,
+			4,
+			DVMM_PTE_REQ,
+			HFLIP_PTEREQ_PER_CHUNK_INT);
+
+		set_reg_field_value(
+			value,
+			4,
+			DVMM_PTE_REQ,
+			HFLIP_PTEREQ_PER_CHUNK_MULTIPLIER);
+
+		dm_write_reg(ctx, addr, value);
+	}
+}
+#endif
+
+static bool dce120_enable_display_power_gating(
+	struct core_dc *dc,
+	uint8_t controller_id,
+	struct dc_bios *dcb,
+	enum pipe_gating_control power_gating)
+{
+	/* disable for bringup */
+#if 0
+	enum bp_result bp_result = BP_RESULT_OK;
+	enum bp_pipe_control_action cntl;
+	struct dc_context *ctx = dc->ctx;
+
+	if (IS_FPGA_MAXIMUS_DC(ctx->dce_environment))
+		return true;
+
+	if (power_gating == PIPE_GATING_CONTROL_INIT)
+		cntl = ASIC_PIPE_INIT;
+	else if (power_gating == PIPE_GATING_CONTROL_ENABLE)
+		cntl = ASIC_PIPE_ENABLE;
+	else
+		cntl = ASIC_PIPE_DISABLE;
+
+	if (power_gating != PIPE_GATING_CONTROL_INIT || controller_id == 0) {
+
+		bp_result = dcb->funcs->enable_disp_power_gating(
+						dcb, controller_id + 1, cntl);
+
+		/* Revert MASTER_UPDATE_MODE to 0 because the BIOS sets it to 2
+		 * by default when the command table is called
+		 */
+		dm_write_reg(ctx,
+			HW_REG_CRTC(mmCRTC0_CRTC_MASTER_UPDATE_MODE, controller_id),
+			0);
+	}
+
+	if (power_gating != PIPE_GATING_CONTROL_ENABLE)
+		dce120_init_pte(ctx, controller_id);
+
+	if (bp_result == BP_RESULT_OK)
+		return true;
+	else
+		return false;
+#endif
+	return false;
+}
+
+bool dce120_hw_sequencer_construct(struct core_dc *dc)
+{
+	/* All registers used by dce12 match those in dce11 in offset and
+	 * structure
+	 */
+	dce110_hw_sequencer_construct(dc);
+	dc->hwss.enable_display_power_gating = dce120_enable_display_power_gating;
+
+	return true;
+}
+
diff --git a/drivers/gpu/drm/amd/display/dc/dce120/dce120_hw_sequencer.h b/drivers/gpu/drm/amd/display/dc/dce120/dce120_hw_sequencer.h
new file mode 100644
index 0000000..3402413
--- /dev/null
+++ b/drivers/gpu/drm/amd/display/dc/dce120/dce120_hw_sequencer.h
@@ -0,0 +1,36 @@
+/*
+ * Copyright 2012-15 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: AMD
+ *
+ */
+
+#ifndef __DC_HWSS_DCE120_H__
+#define __DC_HWSS_DCE120_H__
+
+#include "core_types.h"
+
+struct core_dc;
+
+bool dce120_hw_sequencer_construct(struct core_dc *dc);
+
+#endif /* __DC_HWSS_DCE120_H__ */
+
diff --git a/drivers/gpu/drm/amd/display/dc/dce120/dce120_ipp.c b/drivers/gpu/drm/amd/display/dc/dce120/dce120_ipp.c
new file mode 100644
index 0000000..f450569
--- /dev/null
+++ b/drivers/gpu/drm/amd/display/dc/dce120/dce120_ipp.c
@@ -0,0 +1,58 @@
+/*
+ * Copyright 2012-15 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: AMD
+ *
+ */
+
+#include "dm_services.h"
+#include "include/logger_interface.h"
+
+#include "vega10/DC/dce_12_0_offset.h"
+#include "vega10/DC/dce_12_0_sh_mask.h"
+#include "vega10/soc15ip.h"
+
+#include "dce120_ipp.h"
+
+static const struct ipp_funcs funcs = {
+		.ipp_cursor_set_attributes = dce120_ipp_cursor_set_attributes,
+		.ipp_cursor_set_position = dce120_ipp_cursor_set_position,
+		.ipp_program_prescale = dce120_ipp_program_prescale,
+		.ipp_program_input_lut = dce120_ipp_program_input_lut,
+		.ipp_set_degamma = dce120_ipp_set_degamma,
+};
+
+bool dce120_ipp_construct(
+	struct dce110_ipp *ipp,
+	struct dc_context *ctx,
+	uint32_t inst,
+	const struct dce110_ipp_reg_offsets *offset)
+{
+	if (!dce110_ipp_construct(ipp, ctx, inst, offset)) {
+		ASSERT_CRITICAL(false);
+		return false;
+	}
+
+	ipp->base.funcs = &funcs;
+
+	return true;
+}
+
diff --git a/drivers/gpu/drm/amd/display/dc/dce120/dce120_ipp.h b/drivers/gpu/drm/amd/display/dc/dce120/dce120_ipp.h
new file mode 100644
index 0000000..4b326bc
--- /dev/null
+++ b/drivers/gpu/drm/amd/display/dc/dce120/dce120_ipp.h
@@ -0,0 +1,62 @@
+/*
+ * Copyright 2012-15 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: AMD
+ *
+ */
+
+#ifndef __DC_IPP_DCE120_H__
+#define __DC_IPP_DCE120_H__
+
+#include "ipp.h"
+#include "../dce110/dce110_ipp.h"
+
+
+bool dce120_ipp_construct(
+	struct dce110_ipp *ipp,
+	struct dc_context *ctx,
+	uint32_t inst,
+	const struct dce110_ipp_reg_offsets *offset);
+
+/* CURSOR RELATED */
+void dce120_ipp_cursor_set_position(
+	struct input_pixel_processor *ipp,
+	const struct dc_cursor_position *position,
+	const struct dc_cursor_mi_param *param);
+
+bool dce120_ipp_cursor_set_attributes(
+	struct input_pixel_processor *ipp,
+	const struct dc_cursor_attributes *attributes);
+
+/* DEGAMMA RELATED */
+bool dce120_ipp_set_degamma(
+	struct input_pixel_processor *ipp,
+	enum ipp_degamma_mode mode);
+
+void dce120_ipp_program_prescale(
+	struct input_pixel_processor *ipp,
+	struct ipp_prescale_params *params);
+
+void dce120_ipp_program_input_lut(
+	struct input_pixel_processor *ipp,
+	const struct dc_gamma *gamma);
+
+#endif /*__DC_IPP_DCE120_H__*/
diff --git a/drivers/gpu/drm/amd/display/dc/dce120/dce120_ipp_cursor.c b/drivers/gpu/drm/amd/display/dc/dce120/dce120_ipp_cursor.c
new file mode 100644
index 0000000..d520b5d
--- /dev/null
+++ b/drivers/gpu/drm/amd/display/dc/dce120/dce120_ipp_cursor.c
@@ -0,0 +1,202 @@
+/*
+ * Copyright 2012-15 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: AMD
+ *
+ */
+
+#include "dm_services.h"
+#include "include/logger_interface.h"
+
+#include "vega10/DC/dce_12_0_offset.h"
+#include "vega10/DC/dce_12_0_sh_mask.h"
+#include "vega10/soc15ip.h"
+
+#include "../dce110/dce110_ipp.h"
+
+
+#define DCP_REG_UPDATE_N(reg_name, n, ...)	\
+		generic_reg_update_soc15(ipp110->base.ctx, ipp110->offsets.dcp_offset, reg_name, n, __VA_ARGS__)
+
+#define DCP_REG_SET_N(reg_name, n, ...)	\
+		generic_reg_set_soc15(ipp110->base.ctx, ipp110->offsets.dcp_offset, reg_name, n, __VA_ARGS__)
+
+#define DCP_REG_UPDATE(reg, field, val)	\
+		DCP_REG_UPDATE_N(reg, 1, FD(reg##__##field), val)
+
+#define DCP_REG_UPDATE_2(reg, field1, val1, field2, val2)	\
+		DCP_REG_UPDATE_N(reg, 2, FD(reg##__##field1), val1, FD(reg##__##field2), val2)
+
+#define DCP_REG_UPDATE_3(reg, field1, val1, field2, val2, field3, val3)	\
+		DCP_REG_UPDATE_N(reg, 3, FD(reg##__##field1), val1, FD(reg##__##field2), val2, FD(reg##__##field3), val3)
+
+#define DCP_REG_SET(reg, field, val)	\
+		DCP_REG_SET_N(reg, 1, FD(reg##__##field), val)
+
+#define DCP_REG_SET_2(reg, field1, val1, field2, val2)	\
+		DCP_REG_SET_N(reg, 2, FD(reg##__##field1), val1, FD(reg##__##field2), val2)
+
+#define DCP_REG_SET_3(reg, field1, val1, field2, val2, field3, val3)	\
+		DCP_REG_SET_N(reg, 3, FD(reg##__##field1), val1, FD(reg##__##field2), val2, FD(reg##__##field3), val3)
+
+/* TODO: DAL3 does not implement cursor memory control
+ * MCIF_MEM_CONTROL, DMIF_CURSOR_MEM_CONTROL
+ */
+static void lock(
+	struct dce110_ipp *ipp110, bool lock)
+{
+	DCP_REG_UPDATE(DCP0_CUR_UPDATE, CURSOR_UPDATE_LOCK, lock);
+}
+
+static bool program_control(
+	struct dce110_ipp *ipp110,
+	enum dc_cursor_color_format color_format,
+	bool enable_magnification,
+	bool inverse_transparent_clamping)
+{
+	uint32_t mode = 0;
+
+	switch (color_format) {
+	case CURSOR_MODE_MONO:
+		mode = 0;
+		break;
+	case CURSOR_MODE_COLOR_1BIT_AND:
+		mode = 1;
+		break;
+	case CURSOR_MODE_COLOR_PRE_MULTIPLIED_ALPHA:
+		mode = 2;
+		break;
+	case CURSOR_MODE_COLOR_UN_PRE_MULTIPLIED_ALPHA:
+		mode = 3;
+		break;
+	default:
+		return false;
+	}
+
+	DCP_REG_UPDATE_3(
+		DCP0_CUR_CONTROL,
+		CURSOR_MODE, mode,
+		CURSOR_2X_MAGNIFY, enable_magnification,
+		CUR_INV_TRANS_CLAMP, inverse_transparent_clamping);
+
+	if (color_format == CURSOR_MODE_MONO) {
+		DCP_REG_SET_3(
+			DCP0_CUR_COLOR1,
+			CUR_COLOR1_BLUE, 0,
+			CUR_COLOR1_GREEN, 0,
+			CUR_COLOR1_RED, 0);
+
+		DCP_REG_SET_3(
+			DCP0_CUR_COLOR2,
+			CUR_COLOR2_BLUE, 0xff,
+			CUR_COLOR2_GREEN, 0xff,
+			CUR_COLOR2_RED, 0xff);
+	}
+	return true;
+}
+
+static void program_address(
+	struct dce110_ipp *ipp110,
+	PHYSICAL_ADDRESS_LOC address)
+{
+	/* SURFACE_ADDRESS_HIGH: higher-order bits (39:32) of the hardware cursor
+	 * surface base address, in bytes; it is 4 KB aligned.
+	 * The correct way to program the cursor surface address is to write
+	 * CUR_SURFACE_ADDRESS_HIGH first, then CUR_SURFACE_ADDRESS.
+	 */
+
+	DCP_REG_SET(
+		DCP0_CUR_SURFACE_ADDRESS_HIGH,
+		CURSOR_SURFACE_ADDRESS_HIGH, address.high_part);
+
+	DCP_REG_SET(
+		DCP0_CUR_SURFACE_ADDRESS,
+		CURSOR_SURFACE_ADDRESS, address.low_part);
+}
+
+void dce120_ipp_cursor_set_position(
+	struct input_pixel_processor *ipp,
+	const struct dc_cursor_position *position,
+	const struct dc_cursor_mi_param *param)
+{
+	struct dce110_ipp *ipp110 = TO_DCE110_IPP(ipp);
+
+	/* lock cursor registers */
+	lock(ipp110, true);
+
+	/* The enable flag passed in the structure differentiates cursor
+	 * enable/disable; update it if it differs from the cached state.
+	 */
+	DCP_REG_UPDATE(DCP0_CUR_CONTROL, CURSOR_EN, position->enable);
+
+	DCP_REG_SET_2(
+		DCP0_CUR_POSITION,
+		CURSOR_X_POSITION, position->x,
+		CURSOR_Y_POSITION, position->y);
+
+	if (position->hot_spot_enable)
+		DCP_REG_SET_2(
+			DCP0_CUR_HOT_SPOT,
+			CURSOR_HOT_SPOT_X, position->x_hotspot,
+			CURSOR_HOT_SPOT_Y, position->y_hotspot);
+
+	/* unlock cursor registers */
+	lock(ipp110, false);
+}
+
+bool dce120_ipp_cursor_set_attributes(
+	struct input_pixel_processor *ipp,
+	const struct dc_cursor_attributes *attributes)
+{
+	struct dce110_ipp *ipp110 = TO_DCE110_IPP(ipp);
+	/* Lock cursor registers */
+	lock(ipp110, true);
+
+	/* Program cursor control */
+	program_control(
+		ipp110,
+		attributes->color_format,
+		attributes->attribute_flags.bits.ENABLE_MAGNIFICATION,
+		attributes->attribute_flags.bits.INVERSE_TRANSPARENT_CLAMPING);
+
+	/* Program hot spot coordinates */
+	DCP_REG_SET_2(
+		DCP0_CUR_HOT_SPOT,
+		CURSOR_HOT_SPOT_X, attributes->x_hot,
+		CURSOR_HOT_SPOT_Y, attributes->y_hot);
+
+	/*
+	 * Program cursor size -- NOTE: HW spec specifies that HW register
+	 * stores size as (height - 1, width - 1)
+	 */
+	DCP_REG_SET_2(
+		DCP0_CUR_SIZE,
+		CURSOR_WIDTH, attributes->width - 1,
+		CURSOR_HEIGHT, attributes->height - 1);
+
+	/* Program cursor surface address */
+	program_address(ipp110, attributes->address);
+
+	/* Unlock Cursor registers. */
+	lock(ipp110, false);
+
+	return true;
+}
+
diff --git a/drivers/gpu/drm/amd/display/dc/dce120/dce120_ipp_gamma.c b/drivers/gpu/drm/amd/display/dc/dce120/dce120_ipp_gamma.c
new file mode 100644
index 0000000..7aa5a49
--- /dev/null
+++ b/drivers/gpu/drm/amd/display/dc/dce120/dce120_ipp_gamma.c
@@ -0,0 +1,167 @@
+/*
+ * Copyright 2012-15 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: AMD
+ *
+ */
+
+#include "dm_services.h"
+#include "include/logger_interface.h"
+#include "include/fixed31_32.h"
+#include "basics/conversion.h"
+
+#include "vega10/DC/dce_12_0_offset.h"
+#include "vega10/DC/dce_12_0_sh_mask.h"
+#include "vega10/soc15ip.h"
+
+#include "../dce110/dce110_ipp.h"
+
+#define DCP_REG_UPDATE_N(reg_name, n, ...)	\
+		generic_reg_update_soc15(ipp110->base.ctx, ipp110->offsets.dcp_offset, reg_name, n, __VA_ARGS__)
+
+#define DCP_REG_SET_N(reg_name, n, ...)	\
+		generic_reg_set_soc15(ipp110->base.ctx, ipp110->offsets.dcp_offset, reg_name, n, __VA_ARGS__)
+
+#define DCP_REG_UPDATE(reg, field, val)	\
+		DCP_REG_UPDATE_N(reg, 1, FD(reg##__##field), val)
+
+#define DCP_REG_UPDATE_2(reg, field1, val1, field2, val2)	\
+		DCP_REG_UPDATE_N(reg, 2, FD(reg##__##field1), val1, FD(reg##__##field2), val2)
+
+#define DCP_REG_UPDATE_3(reg, field1, val1, field2, val2, field3, val3)	\
+		DCP_REG_UPDATE_N(reg, 3, FD(reg##__##field1), val1, FD(reg##__##field2), val2, FD(reg##__##field3), val3)
+
+#define DCP_REG_SET(reg, field, val)	\
+		DCP_REG_SET_N(reg, 1, FD(reg##__##field), val)
+
+#define DCP_REG_SET_2(reg, field1, val1, field2, val2)	\
+		DCP_REG_SET_N(reg, 2, FD(reg##__##field1), val1, FD(reg##__##field2), val2)
+
+#define DCP_REG_SET_3(reg, field1, val1, field2, val2, field3, val3)	\
+		DCP_REG_SET_N(reg, 3, FD(reg##__##field1), val1, FD(reg##__##field2), val2, FD(reg##__##field3), val3)
+
+
+bool dce120_ipp_set_degamma(
+	struct input_pixel_processor *ipp,
+	enum ipp_degamma_mode mode)
+{
+	struct dce110_ipp *ipp110 = TO_DCE110_IPP(ipp);
+	uint32_t degamma_type = (mode == IPP_DEGAMMA_MODE_HW_sRGB) ? 1 : 0;
+
+	ASSERT(mode == IPP_DEGAMMA_MODE_BYPASS ||
+			mode == IPP_DEGAMMA_MODE_HW_sRGB);
+
+	DCP_REG_SET_3(
+		DCP0_DEGAMMA_CONTROL,
+		GRPH_DEGAMMA_MODE, degamma_type,
+		CURSOR_DEGAMMA_MODE, degamma_type,
+		CURSOR2_DEGAMMA_MODE, degamma_type);
+
+	return true;
+}
+
+void dce120_ipp_program_prescale(
+	struct input_pixel_processor *ipp,
+	struct ipp_prescale_params *params)
+{
+	struct dce110_ipp *ipp110 = TO_DCE110_IPP(ipp);
+
+	/* set to bypass mode before changing the prescale values */
+	DCP_REG_UPDATE(
+		DCP0_PRESCALE_GRPH_CONTROL,
+		GRPH_PRESCALE_BYPASS,
+		1);
+
+	DCP_REG_SET_2(
+		DCP0_PRESCALE_VALUES_GRPH_R,
+		GRPH_PRESCALE_SCALE_R, params->scale,
+		GRPH_PRESCALE_BIAS_R, params->bias);
+
+	DCP_REG_SET_2(
+		DCP0_PRESCALE_VALUES_GRPH_G,
+		GRPH_PRESCALE_SCALE_G, params->scale,
+		GRPH_PRESCALE_BIAS_G, params->bias);
+
+	DCP_REG_SET_2(
+		DCP0_PRESCALE_VALUES_GRPH_B,
+		GRPH_PRESCALE_SCALE_B, params->scale,
+		GRPH_PRESCALE_BIAS_B, params->bias);
+
+	if (params->mode != IPP_PRESCALE_MODE_BYPASS) {
+		DCP_REG_UPDATE(DCP0_PRESCALE_GRPH_CONTROL,
+			       GRPH_PRESCALE_BYPASS, 0);
+
+		/* If prescale is in use, then legacy lut should be bypassed */
+		DCP_REG_UPDATE(DCP0_INPUT_GAMMA_CONTROL,
+			       GRPH_INPUT_GAMMA_MODE, 1);
+	}
+}
+
+static void dce120_helper_select_lut(struct dce110_ipp *ipp110)
+{
+	/* enable all */
+	DCP_REG_SET(
+		DCP0_DC_LUT_WRITE_EN_MASK,
+		DC_LUT_WRITE_EN_MASK,
+		0x7);
+
+	/* 256 entry mode */
+	DCP_REG_UPDATE(DCP0_DC_LUT_RW_MODE, DC_LUT_RW_MODE, 0);
+
+	/* LUT-256, unsigned, integer, new u0.12 format */
+	DCP_REG_SET_3(
+		DCP0_DC_LUT_CONTROL,
+		DC_LUT_DATA_R_FORMAT, 3,
+		DC_LUT_DATA_G_FORMAT, 3,
+		DC_LUT_DATA_B_FORMAT, 3);
+
+	/* start from index 0 */
+	DCP_REG_SET(
+		DCP0_DC_LUT_RW_INDEX,
+		DC_LUT_RW_INDEX,
+		0);
+}
+
+void dce120_ipp_program_input_lut(
+	struct input_pixel_processor *ipp,
+	const struct dc_gamma *gamma)
+{
+	int i;
+	struct dce110_ipp *ipp110 = TO_DCE110_IPP(ipp);
+
+	/* power on LUT memory */
+	DCP_REG_SET(DCFE0_DCFE_MEM_PWR_CTRL, DCP_LUT_MEM_PWR_DIS, 1);
+
+	dce120_helper_select_lut(ipp110);
+
+	for (i = 0; i < INPUT_LUT_ENTRIES; i++) {
+		DCP_REG_SET(DCP0_DC_LUT_SEQ_COLOR, DC_LUT_SEQ_COLOR, gamma->red[i]);
+		DCP_REG_SET(DCP0_DC_LUT_SEQ_COLOR, DC_LUT_SEQ_COLOR, gamma->green[i]);
+		DCP_REG_SET(DCP0_DC_LUT_SEQ_COLOR, DC_LUT_SEQ_COLOR, gamma->blue[i]);
+	}
+
+	/* power off LUT memory */
+	DCP_REG_SET(DCFE0_DCFE_MEM_PWR_CTRL, DCP_LUT_MEM_PWR_DIS, 0);
+
+	/* bypass prescale, enable legacy LUT */
+	DCP_REG_UPDATE(DCP0_PRESCALE_GRPH_CONTROL, GRPH_PRESCALE_BYPASS, 1);
+	DCP_REG_UPDATE(DCP0_INPUT_GAMMA_CONTROL, GRPH_INPUT_GAMMA_MODE, 0);
+}
diff --git a/drivers/gpu/drm/amd/display/dc/dce120/dce120_mem_input.c b/drivers/gpu/drm/amd/display/dc/dce120/dce120_mem_input.c
new file mode 100644
index 0000000..c067721
--- /dev/null
+++ b/drivers/gpu/drm/amd/display/dc/dce120/dce120_mem_input.c
@@ -0,0 +1,340 @@
+/*
+ * Copyright 2012-15 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: AMD
+ *
+ */
+#include "dm_services.h"
+#include "dce120_mem_input.h"
+
+
+#include "vega10/DC/dce_12_0_offset.h"
+#include "vega10/DC/dce_12_0_sh_mask.h"
+#include "vega10/soc15ip.h"
+
+#define GENERAL_REG_UPDATE_N(reg_name, n, ...)	\
+		generic_reg_update_soc15(mem_input110->base.ctx, 0, reg_name, n, __VA_ARGS__)
+
+#define GENERAL_REG_UPDATE(reg, field, val)	\
+		GENERAL_REG_UPDATE_N(reg, 1, FD(reg##__##field), val)
+
+#define GENERAL_REG_UPDATE_2(reg, field1, val1, field2, val2)	\
+		GENERAL_REG_UPDATE_N(reg, 2, FD(reg##__##field1), val1, FD(reg##__##field2), val2)
+
+
+
+#define DCP_REG_UPDATE_N(reg_name, n, ...)	\
+		generic_reg_update_soc15(mem_input110->base.ctx, mem_input110->offsets.dcp, reg_name, n, __VA_ARGS__)
+
+#define DCP_REG_SET_N(reg_name, n, ...)	\
+		generic_reg_set_soc15(mem_input110->base.ctx, mem_input110->offsets.dcp, reg_name, n, __VA_ARGS__)
+
+#define DCP_REG_UPDATE(reg, field, val)	\
+		DCP_REG_UPDATE_N(reg, 1, FD(reg##__##field), val)
+
+#define DCP_REG_UPDATE_2(reg, field1, val1, field2, val2)	\
+		DCP_REG_UPDATE_N(reg, 2, FD(reg##__##field1), val1, FD(reg##__##field2), val2)
+
+#define DCP_REG_UPDATE_3(reg, field1, val1, field2, val2, field3, val3)	\
+		DCP_REG_UPDATE_N(reg, 3, FD(reg##__##field1), val1, FD(reg##__##field2), val2, FD(reg##__##field3), val3)
+
+#define DCP_REG_SET(reg, field, val)	\
+		DCP_REG_SET_N(reg, 1, FD(reg##__##field), val)
+
+#define DCP_REG_SET_2(reg, field1, val1, field2, val2)	\
+		DCP_REG_SET_N(reg, 2, FD(reg##__##field1), val1, FD(reg##__##field2), val2)
+
+#define DCP_REG_SET_3(reg, field1, val1, field2, val2, field3, val3)	\
+		DCP_REG_SET_N(reg, 3, FD(reg##__##field1), val1, FD(reg##__##field2), val2, FD(reg##__##field3), val3)
+
+
+
+#define DMIF_REG_UPDATE_N(reg_name, n, ...)	\
+		generic_reg_update_soc15(mem_input110->base.ctx, mem_input110->offsets.dmif, reg_name, n, __VA_ARGS__)
+
+#define DMIF_REG_SET_N(reg_name, n, ...)	\
+		generic_reg_set_soc15(mem_input110->base.ctx, mem_input110->offsets.dmif, reg_name, n, __VA_ARGS__)
+
+#define DMIF_REG_UPDATE(reg, field, val)	\
+		DMIF_REG_UPDATE_N(reg, 1, FD(reg##__##field), val)
+
+#define DMIF_REG_UPDATE_2(reg, field1, val1, field2, val2)	\
+		DMIF_REG_UPDATE_N(reg, 2, FD(reg##__##field1), val1, FD(reg##__##field2), val2)
+
+#define DMIF_REG_UPDATE_3(reg, field1, val1, field2, val2, field3, val3)	\
+		DMIF_REG_UPDATE_N(reg, 3, FD(reg##__##field1), val1, FD(reg##__##field2), val2, FD(reg##__##field3), val3)
+
+#define DMIF_REG_SET(reg, field, val)	\
+		DMIF_REG_SET_N(reg, 1, FD(reg##__##field), val)
+
+#define DMIF_REG_SET_2(reg, field1, val1, field2, val2)	\
+		DMIF_REG_SET_N(reg, 2, FD(reg##__##field1), val1, FD(reg##__##field2), val2)
+
+#define DMIF_REG_SET_3(reg, field1, val1, field2, val2, field3, val3)	\
+		DMIF_REG_SET_N(reg, 3, FD(reg##__##field1), val1, FD(reg##__##field2), val2, FD(reg##__##field3), val3)
+
+
+
+#define PIPE_REG_UPDATE_N(reg_name, n, ...)	\
+		generic_reg_update_soc15(mem_input110->base.ctx, mem_input110->offsets.pipe, reg_name, n, __VA_ARGS__)
+
+#define PIPE_REG_SET_N(reg_name, n, ...)	\
+		generic_reg_set_soc15(mem_input110->base.ctx, mem_input110->offsets.pipe, reg_name, n, __VA_ARGS__)
+
+#define PIPE_REG_UPDATE(reg, field, val)	\
+		PIPE_REG_UPDATE_N(reg, 1, FD(reg##__##field), val)
+
+#define PIPE_REG_UPDATE_2(reg, field1, val1, field2, val2)	\
+		PIPE_REG_UPDATE_N(reg, 2, FD(reg##__##field1), val1, FD(reg##__##field2), val2)
+
+#define PIPE_REG_UPDATE_3(reg, field1, val1, field2, val2, field3, val3)	\
+		PIPE_REG_UPDATE_N(reg, 3, FD(reg##__##field1), val1, FD(reg##__##field2), val2, FD(reg##__##field3), val3)
+
+#define PIPE_REG_SET(reg, field, val)	\
+		PIPE_REG_SET_N(reg, 1, FD(reg##__##field), val)
+
+#define PIPE_REG_SET_2(reg, field1, val1, field2, val2)	\
+		PIPE_REG_SET_N(reg, 2, FD(reg##__##field1), val1, FD(reg##__##field2), val2)
+
+#define PIPE_REG_SET_3(reg, field1, val1, field2, val2, field3, val3)	\
+		PIPE_REG_SET_N(reg, 3, FD(reg##__##field1), val1, FD(reg##__##field2), val2, FD(reg##__##field3), val3)
+
+
+
+static void program_sec_addr(
+	struct dce110_mem_input *mem_input110,
+	PHYSICAL_ADDRESS_LOC address)
+{
+	uint32_t temp;
+
+	/* the high register MUST be programmed first */
+	temp = address.high_part &
+		DCP0_GRPH_SECONDARY_SURFACE_ADDRESS_HIGH__GRPH_SECONDARY_SURFACE_ADDRESS_HIGH_MASK;
+
+	DCP_REG_SET(
+		DCP0_GRPH_SECONDARY_SURFACE_ADDRESS_HIGH,
+		GRPH_SECONDARY_SURFACE_ADDRESS_HIGH,
+		temp);
+
+	temp = address.low_part >>
+		DCP0_GRPH_SECONDARY_SURFACE_ADDRESS__GRPH_SECONDARY_SURFACE_ADDRESS__SHIFT;
+
+	DCP_REG_SET_2(
+		DCP0_GRPH_SECONDARY_SURFACE_ADDRESS,
+		GRPH_SECONDARY_SURFACE_ADDRESS, temp,
+		GRPH_SECONDARY_DFQ_ENABLE, 0);
+}
+
+static void program_pri_addr(
+	struct dce110_mem_input *mem_input110,
+	PHYSICAL_ADDRESS_LOC address)
+{
+	uint32_t temp;
+
+	/* the high register MUST be programmed first */
+	temp = address.high_part &
+		DCP0_GRPH_PRIMARY_SURFACE_ADDRESS_HIGH__GRPH_PRIMARY_SURFACE_ADDRESS_HIGH_MASK;
+
+	DCP_REG_SET(
+		DCP0_GRPH_PRIMARY_SURFACE_ADDRESS_HIGH,
+		GRPH_PRIMARY_SURFACE_ADDRESS_HIGH,
+		temp);
+
+	temp = address.low_part >>
+		DCP0_GRPH_PRIMARY_SURFACE_ADDRESS__GRPH_PRIMARY_SURFACE_ADDRESS__SHIFT;
+
+	DCP_REG_SET(
+		DCP0_GRPH_PRIMARY_SURFACE_ADDRESS,
+		GRPH_PRIMARY_SURFACE_ADDRESS,
+		temp);
+}
+
+
+static bool mem_input_is_flip_pending(struct mem_input *mem_input)
+{
+	struct dce110_mem_input *mem_input110 = TO_DCE110_MEM_INPUT(mem_input);
+	uint32_t value;
+
+	value = dm_read_reg_soc15(mem_input110->base.ctx,
+			mmDCP0_GRPH_UPDATE, mem_input110->offsets.dcp);
+
+	if (get_reg_field_value(value, DCP0_GRPH_UPDATE,
+			GRPH_SURFACE_UPDATE_PENDING))
+		return true;
+
+	mem_input->current_address = mem_input->request_address;
+	return false;
+}
+
+static bool mem_input_program_surface_flip_and_addr(
+	struct mem_input *mem_input,
+	const struct dc_plane_address *address,
+	bool flip_immediate)
+{
+	struct dce110_mem_input *mem_input110 = TO_DCE110_MEM_INPUT(mem_input);
+
+	/* TODO: Figure out if two modes are needed:
+	 * non-XDMA Mode: GRPH_SURFACE_UPDATE_IMMEDIATE_EN = 1
+	 * XDMA Mode: GRPH_SURFACE_UPDATE_H_RETRACE_EN = 1
+	 */
+	DCP_REG_UPDATE(DCP0_GRPH_UPDATE,
+			GRPH_UPDATE_LOCK, 1);
+
+	if (flip_immediate) {
+		DCP_REG_UPDATE_2(
+			DCP0_GRPH_FLIP_CONTROL,
+			GRPH_SURFACE_UPDATE_IMMEDIATE_EN, 0,
+			GRPH_SURFACE_UPDATE_H_RETRACE_EN, 1);
+	} else {
+		DCP_REG_UPDATE_2(
+			DCP0_GRPH_FLIP_CONTROL,
+			GRPH_SURFACE_UPDATE_IMMEDIATE_EN, 0,
+			GRPH_SURFACE_UPDATE_H_RETRACE_EN, 0);
+	}
+
+	switch (address->type) {
+	case PLN_ADDR_TYPE_GRAPHICS:
+		if (address->grph.addr.quad_part == 0)
+			break;
+		program_pri_addr(mem_input110, address->grph.addr);
+		break;
+	case PLN_ADDR_TYPE_GRPH_STEREO:
+		if (address->grph_stereo.left_addr.quad_part == 0
+			|| address->grph_stereo.right_addr.quad_part == 0)
+			break;
+		program_pri_addr(mem_input110, address->grph_stereo.left_addr);
+		program_sec_addr(mem_input110, address->grph_stereo.right_addr);
+		break;
+	default:
+		/* not supported */
+		BREAK_TO_DEBUGGER();
+		break;
+	}
+
+	mem_input->request_address = *address;
+
+	if (flip_immediate)
+		mem_input->current_address = *address;
+
+	DCP_REG_UPDATE(DCP0_GRPH_UPDATE,
+			GRPH_UPDATE_LOCK, 0);
+
+	return true;
+}
+
+static void mem_input_update_dchub(struct mem_input *mi,
+		struct dchub_init_data *dh_data)
+{
+	struct dce110_mem_input *mem_input110 = TO_DCE110_MEM_INPUT(mi);
+	/* TODO: port code from dal2 */
+	switch (dh_data->fb_mode) {
+	case FRAME_BUFFER_MODE_ZFB_ONLY:
+		/* For the ZFB case, program DCHUB FB BASE and TOP inverted
+		 * (top below base) to indicate ZFB mode
+		 */
+		GENERAL_REG_UPDATE_2(
+				DCHUB_FB_LOCATION,
+				FB_TOP, 0,
+				FB_BASE, 0x0FFFF);
+
+		GENERAL_REG_UPDATE(
+				DCHUB_AGP_BASE,
+				AGP_BASE, dh_data->zfb_phys_addr_base >> 22);
+
+		GENERAL_REG_UPDATE(
+				DCHUB_AGP_BOT,
+				AGP_BOT, dh_data->zfb_mc_base_addr >> 22);
+
+		GENERAL_REG_UPDATE(
+				DCHUB_AGP_TOP,
+				AGP_TOP, (dh_data->zfb_mc_base_addr + dh_data->zfb_size_in_byte - 1) >> 22);
+		break;
+	case FRAME_BUFFER_MODE_MIXED_ZFB_AND_LOCAL:
+		/* Do not touch FB LOCATION (set by VBIOS in the AsicInit table) */
+		GENERAL_REG_UPDATE(
+				DCHUB_AGP_BASE,
+				AGP_BASE, dh_data->zfb_phys_addr_base >> 22);
+
+		GENERAL_REG_UPDATE(
+				DCHUB_AGP_BOT,
+				AGP_BOT, dh_data->zfb_mc_base_addr >> 22);
+
+		GENERAL_REG_UPDATE(
+				DCHUB_AGP_TOP,
+				AGP_TOP, (dh_data->zfb_mc_base_addr + dh_data->zfb_size_in_byte - 1) >> 22);
+		break;
+	case FRAME_BUFFER_MODE_LOCAL_ONLY:
+		/* Do not touch FB LOCATION (set by VBIOS in the AsicInit table) */
+		GENERAL_REG_UPDATE(
+				DCHUB_AGP_BASE,
+				AGP_BASE, 0);
+
+		GENERAL_REG_UPDATE(
+				DCHUB_AGP_BOT,
+				AGP_BOT, 0x03FFFF);
+
+		GENERAL_REG_UPDATE(
+				DCHUB_AGP_TOP,
+				AGP_TOP, 0);
+		break;
+	default:
+		break;
+	}
+
+	dh_data->dchub_initialzied = true;
+	dh_data->dchub_info_valid = false;
+}
+
+static struct mem_input_funcs dce120_mem_input_funcs = {
+	.mem_input_program_display_marks = dce_mem_input_program_display_marks,
+	.allocate_mem_input = dce_mem_input_allocate_dmif,
+	.free_mem_input = dce_mem_input_free_dmif,
+	.mem_input_program_surface_flip_and_addr =
+			mem_input_program_surface_flip_and_addr,
+	.mem_input_program_pte_vm = dce_mem_input_program_pte_vm,
+	.mem_input_program_surface_config =
+			dce_mem_input_program_surface_config,
+	.mem_input_is_flip_pending = mem_input_is_flip_pending,
+	.mem_input_update_dchub = mem_input_update_dchub
+};
+
+/*****************************************/
+/* Constructor, Destructor               */
+/*****************************************/
+
+bool dce120_mem_input_construct(
+	struct dce110_mem_input *mem_input110,
+	struct dc_context *ctx,
+	uint32_t inst,
+	const struct dce110_mem_input_reg_offsets *offsets)
+{
+	/* supported stutter method
+	 * STUTTER_MODE_ENHANCED
+	 * STUTTER_MODE_QUAD_DMIF_BUFFER
+	 * STUTTER_MODE_WATERMARK_NBP_STATE
+	 */
+
+	if (!dce110_mem_input_construct(mem_input110, ctx, inst, offsets))
+		return false;
+
+	mem_input110->base.funcs = &dce120_mem_input_funcs;
+	mem_input110->offsets = *offsets;
+
+	return true;
+}
diff --git a/drivers/gpu/drm/amd/display/dc/dce120/dce120_mem_input.h b/drivers/gpu/drm/amd/display/dc/dce120/dce120_mem_input.h
new file mode 100644
index 0000000..379fd72
--- /dev/null
+++ b/drivers/gpu/drm/amd/display/dc/dce120/dce120_mem_input.h
@@ -0,0 +1,37 @@
+/* Copyright 2012-15 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: AMD
+ *
+ */
+
+#ifndef __DC_MEM_INPUT_DCE120_H__
+#define __DC_MEM_INPUT_DCE120_H__
+
+#include "mem_input.h"
+#include "dce110/dce110_mem_input.h"
+
+bool dce120_mem_input_construct(
+	struct dce110_mem_input *mem_input110,
+	struct dc_context *ctx,
+	uint32_t inst,
+	const struct dce110_mem_input_reg_offsets *offsets);
+
+#endif
diff --git a/drivers/gpu/drm/amd/display/dc/dce120/dce120_resource.c b/drivers/gpu/drm/amd/display/dc/dce120/dce120_resource.c
new file mode 100644
index 0000000..9a1984b
--- /dev/null
+++ b/drivers/gpu/drm/amd/display/dc/dce120/dce120_resource.c
@@ -0,0 +1,1099 @@
+/*
+ * Copyright 2012-15 Advanced Micro Devices, Inc.
+ *
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: AMD
+ *
+ */
+
+#include "dm_services.h"
+
+
+#include "stream_encoder.h"
+#include "resource.h"
+#include "include/irq_service_interface.h"
+#include "dce120_resource.h"
+#include "dce112/dce112_resource.h"
+
+#include "dce110/dce110_resource.h"
+#include "../virtual/virtual_stream_encoder.h"
+#include "dce120_timing_generator.h"
+#include "irq/dce120/irq_service_dce120.h"
+#include "dce/dce_opp.h"
+#include "dce/dce_clock_source.h"
+#include "dce/dce_clocks.h"
+#include "dce120_ipp.h"
+#include "dce110/dce110_mem_input.h"
+#include "dce120/dce120_mem_input.h"
+
+#include "dce110/dce110_hw_sequencer.h"
+#include "dce120/dce120_hw_sequencer.h"
+#include "dce/dce_transform.h"
+
+#include "dce/dce_audio.h"
+#include "dce/dce_link_encoder.h"
+#include "dce/dce_stream_encoder.h"
+#include "dce/dce_hwseq.h"
+#include "dce/dce_abm.h"
+#include "dce/dce_dmcu.h"
+
+#include "vega10/DC/dce_12_0_offset.h"
+#include "vega10/DC/dce_12_0_sh_mask.h"
+#include "vega10/soc15ip.h"
+#include "vega10/NBIO/nbio_6_1_offset.h"
+#include "reg_helper.h"
+
+#ifndef mmDP0_DP_DPHY_INTERNAL_CTRL
+	#define mmDP0_DP_DPHY_INTERNAL_CTRL		0x210f
+	#define mmDP0_DP_DPHY_INTERNAL_CTRL_BASE_IDX	2
+	#define mmDP1_DP_DPHY_INTERNAL_CTRL		0x220f
+	#define mmDP1_DP_DPHY_INTERNAL_CTRL_BASE_IDX	2
+	#define mmDP2_DP_DPHY_INTERNAL_CTRL		0x230f
+	#define mmDP2_DP_DPHY_INTERNAL_CTRL_BASE_IDX	2
+	#define mmDP3_DP_DPHY_INTERNAL_CTRL		0x240f
+	#define mmDP3_DP_DPHY_INTERNAL_CTRL_BASE_IDX	2
+	#define mmDP4_DP_DPHY_INTERNAL_CTRL		0x250f
+	#define mmDP4_DP_DPHY_INTERNAL_CTRL_BASE_IDX	2
+	#define mmDP5_DP_DPHY_INTERNAL_CTRL		0x260f
+	#define mmDP5_DP_DPHY_INTERNAL_CTRL_BASE_IDX	2
+	#define mmDP6_DP_DPHY_INTERNAL_CTRL		0x270f
+	#define mmDP6_DP_DPHY_INTERNAL_CTRL_BASE_IDX	2
+#endif
+
+enum dce120_clk_src_array_id {
+	DCE120_CLK_SRC_PLL0,
+	DCE120_CLK_SRC_PLL1,
+	DCE120_CLK_SRC_PLL2,
+	DCE120_CLK_SRC_PLL3,
+	DCE120_CLK_SRC_PLL4,
+	DCE120_CLK_SRC_PLL5,
+
+	DCE120_CLK_SRC_TOTAL
+};
+
+static const struct dce110_timing_generator_offsets dce120_tg_offsets[] = {
+	{
+		.crtc = (mmCRTC0_CRTC_CONTROL - mmCRTC0_CRTC_CONTROL),
+	},
+	{
+		.crtc = (mmCRTC1_CRTC_CONTROL - mmCRTC0_CRTC_CONTROL),
+	},
+	{
+		.crtc = (mmCRTC2_CRTC_CONTROL - mmCRTC0_CRTC_CONTROL),
+	},
+	{
+		.crtc = (mmCRTC3_CRTC_CONTROL - mmCRTC0_CRTC_CONTROL),
+	},
+	{
+		.crtc = (mmCRTC4_CRTC_CONTROL - mmCRTC0_CRTC_CONTROL),
+	},
+	{
+		.crtc = (mmCRTC5_CRTC_CONTROL - mmCRTC0_CRTC_CONTROL),
+	}
+};
+
+/* begin *********************
+ * macros to expand register list macros defined in HW object header file */
+
+#define BASE_INNER(seg) \
+	DCE_BASE__INST0_SEG ## seg
+
+#define NBIO_BASE_INNER(seg) \
+	NBIF_BASE__INST0_SEG ## seg
+
+#define NBIO_BASE(seg) \
+	NBIO_BASE_INNER(seg)
+
+/* compile time expand base address. */
+#define BASE(seg) \
+	BASE_INNER(seg)
+
+#define SR(reg_name)\
+		.reg_name = BASE(mm ## reg_name ## _BASE_IDX) +  \
+					mm ## reg_name
+
+#define SRI(reg_name, block, id)\
+	.reg_name = BASE(mm ## block ## id ## _ ## reg_name ## _BASE_IDX) + \
+					mm ## block ## id ## _ ## reg_name
+
+/* macros to expand register list macros defined in HW object header file
+ * end *********************/
+
+
+static const struct dce_disp_clk_registers disp_clk_regs = {
+		CLK_COMMON_REG_LIST_DCE_BASE()
+};
+
+static const struct dce_disp_clk_shift disp_clk_shift = {
+		CLK_COMMON_MASK_SH_LIST_DCE_COMMON_BASE(__SHIFT)
+};
+
+static const struct dce_disp_clk_mask disp_clk_mask = {
+		CLK_COMMON_MASK_SH_LIST_DCE_COMMON_BASE(_MASK)
+};
+
+static const struct dce_dmcu_registers dmcu_regs = {
+		DMCU_DCE110_COMMON_REG_LIST()
+};
+
+static const struct dce_dmcu_shift dmcu_shift = {
+		DMCU_MASK_SH_LIST_DCE110(__SHIFT)
+};
+
+static const struct dce_dmcu_mask dmcu_mask = {
+		DMCU_MASK_SH_LIST_DCE110(_MASK)
+};
+
+static const struct dce_abm_registers abm_regs = {
+		ABM_DCE110_COMMON_REG_LIST()
+};
+
+static const struct dce_abm_shift abm_shift = {
+		ABM_MASK_SH_LIST_DCE110(__SHIFT)
+};
+
+static const struct dce_abm_mask abm_mask = {
+		ABM_MASK_SH_LIST_DCE110(_MASK)
+};
+
+#define transform_regs(id)\
+[id] = {\
+		XFM_COMMON_REG_LIST_DCE110(id)\
+}
+
+static const struct dce_transform_registers xfm_regs[] = {
+		transform_regs(0),
+		transform_regs(1),
+		transform_regs(2),
+		transform_regs(3),
+		transform_regs(4),
+		transform_regs(5)
+};
+
+static const struct dce_transform_shift xfm_shift = {
+		XFM_COMMON_MASK_SH_LIST_SOC_BASE(__SHIFT)
+};
+
+static const struct dce_transform_mask xfm_mask = {
+		XFM_COMMON_MASK_SH_LIST_SOC_BASE(_MASK)
+};
+
+#define aux_regs(id)\
+[id] = {\
+	AUX_REG_LIST(id)\
+}
+
+static const struct dce110_link_enc_aux_registers link_enc_aux_regs[] = {
+		aux_regs(0),
+		aux_regs(1),
+		aux_regs(2),
+		aux_regs(3),
+		aux_regs(4),
+		aux_regs(5)
+};
+
+#define hpd_regs(id)\
+[id] = {\
+	HPD_REG_LIST(id)\
+}
+
+static const struct dce110_link_enc_hpd_registers link_enc_hpd_regs[] = {
+		hpd_regs(0),
+		hpd_regs(1),
+		hpd_regs(2),
+		hpd_regs(3),
+		hpd_regs(4),
+		hpd_regs(5)
+};
+
+#define link_regs(id)\
+[id] = {\
+	LE_DCE120_REG_LIST(id), \
+	SRI(DP_DPHY_INTERNAL_CTRL, DP, id) \
+}
+
+static const struct dce110_link_enc_registers link_enc_regs[] = {
+	link_regs(0),
+	link_regs(1),
+	link_regs(2),
+	link_regs(3),
+	link_regs(4),
+	link_regs(5),
+	link_regs(6),
+};
+
+
+#define stream_enc_regs(id)\
+[id] = {\
+	SE_COMMON_REG_LIST(id),\
+	.TMDS_CNTL = 0,\
+}
+
+static const struct dce110_stream_enc_registers stream_enc_regs[] = {
+	stream_enc_regs(0),
+	stream_enc_regs(1),
+	stream_enc_regs(2),
+	stream_enc_regs(3),
+	stream_enc_regs(4),
+	stream_enc_regs(5)
+};
+
+static const struct dce_stream_encoder_shift se_shift = {
+		SE_COMMON_MASK_SH_LIST_DCE120(__SHIFT)
+};
+
+static const struct dce_stream_encoder_mask se_mask = {
+		SE_COMMON_MASK_SH_LIST_DCE120(_MASK)
+};
+
+#define opp_regs(id)\
+[id] = {\
+	OPP_DCE_120_REG_LIST(id),\
+}
+
+static const struct dce_opp_registers opp_regs[] = {
+	opp_regs(0),
+	opp_regs(1),
+	opp_regs(2),
+	opp_regs(3),
+	opp_regs(4),
+	opp_regs(5)
+};
+
+static const struct dce_opp_shift opp_shift = {
+	OPP_COMMON_MASK_SH_LIST_DCE_120(__SHIFT)
+};
+
+static const struct dce_opp_mask opp_mask = {
+	OPP_COMMON_MASK_SH_LIST_DCE_120(_MASK)
+};
+
+#define audio_regs(id)\
+[id] = {\
+	AUD_COMMON_REG_LIST(id)\
+}
+
+static struct dce_audio_registers audio_regs[] = {
+	audio_regs(0),
+	audio_regs(1),
+	audio_regs(2),
+	audio_regs(3),
+	audio_regs(4),
+	audio_regs(5)
+};
+
+#define DCE120_AUD_COMMON_MASK_SH_LIST(mask_sh)\
+		SF(AZF0ENDPOINT0_AZALIA_F0_CODEC_ENDPOINT_INDEX, AZALIA_ENDPOINT_REG_INDEX, mask_sh),\
+		SF(AZF0ENDPOINT0_AZALIA_F0_CODEC_ENDPOINT_DATA, AZALIA_ENDPOINT_REG_DATA, mask_sh),\
+		AUD_COMMON_MASK_SH_LIST_BASE(mask_sh)
+
+static const struct dce_audio_shift audio_shift = {
+		DCE120_AUD_COMMON_MASK_SH_LIST(__SHIFT)
+};
+
+static const struct dce_aduio_mask audio_mask = {
+		DCE120_AUD_COMMON_MASK_SH_LIST(_MASK)
+};
+
+#define clk_src_regs(index, id)\
+[index] = {\
+	CS_COMMON_REG_LIST_DCE_112(id),\
+}
+
+static const struct dce110_clk_src_regs clk_src_regs[] = {
+	clk_src_regs(0, A),
+	clk_src_regs(1, B),
+	clk_src_regs(2, C),
+	clk_src_regs(3, D),
+	clk_src_regs(4, E),
+	clk_src_regs(5, F)
+};
+
+static const struct dce110_clk_src_shift cs_shift = {
+		CS_COMMON_MASK_SH_LIST_DCE_112(__SHIFT)
+};
+
+static const struct dce110_clk_src_mask cs_mask = {
+		CS_COMMON_MASK_SH_LIST_DCE_112(_MASK)
+};
+
+struct output_pixel_processor *dce120_opp_create(
+	struct dc_context *ctx,
+	uint32_t inst)
+{
+	struct dce110_opp *opp =
+		dm_alloc(sizeof(struct dce110_opp));
+
+	if (!opp)
+		return NULL;
+
+	if (dce110_opp_construct(opp,
+			ctx, inst, &opp_regs[inst], &opp_shift, &opp_mask))
+		return &opp->base;
+
+	BREAK_TO_DEBUGGER();
+	dm_free(opp);
+	return NULL;
+}
+
+static const struct dce110_ipp_reg_offsets dce120_ipp_reg_offsets[] = {
+	{
+		.dcp_offset = (mmDCP0_CUR_CONTROL - mmDCP0_CUR_CONTROL),
+	},
+	{
+		.dcp_offset = (mmDCP1_CUR_CONTROL - mmDCP0_CUR_CONTROL),
+	},
+	{
+		.dcp_offset = (mmDCP2_CUR_CONTROL - mmDCP0_CUR_CONTROL),
+	},
+	{
+		.dcp_offset = (mmDCP3_CUR_CONTROL - mmDCP0_CUR_CONTROL),
+	},
+	{
+		.dcp_offset = (mmDCP4_CUR_CONTROL - mmDCP0_CUR_CONTROL),
+	},
+	{
+		.dcp_offset = (mmDCP5_CUR_CONTROL - mmDCP0_CUR_CONTROL),
+	}
+};
+
+static const struct dce110_mem_input_reg_offsets dce120_mi_reg_offsets[] = {
+	{
+		.dcp = (mmDCP0_GRPH_CONTROL - mmDCP0_GRPH_CONTROL),
+		.dmif = (mmDMIF_PG0_DPG_WATERMARK_MASK_CONTROL
+				- mmDMIF_PG0_DPG_WATERMARK_MASK_CONTROL),
+		.pipe = (mmPIPE0_DMIF_BUFFER_CONTROL
+				- mmPIPE0_DMIF_BUFFER_CONTROL),
+	},
+	{
+		.dcp = (mmDCP1_GRPH_CONTROL - mmDCP0_GRPH_CONTROL),
+		.dmif = (mmDMIF_PG1_DPG_WATERMARK_MASK_CONTROL
+				- mmDMIF_PG0_DPG_WATERMARK_MASK_CONTROL),
+		.pipe = (mmPIPE1_DMIF_BUFFER_CONTROL
+				- mmPIPE0_DMIF_BUFFER_CONTROL),
+	},
+	{
+		.dcp = (mmDCP2_GRPH_CONTROL - mmDCP0_GRPH_CONTROL),
+		.dmif = (mmDMIF_PG2_DPG_WATERMARK_MASK_CONTROL
+				- mmDMIF_PG0_DPG_WATERMARK_MASK_CONTROL),
+		.pipe = (mmPIPE2_DMIF_BUFFER_CONTROL
+				- mmPIPE0_DMIF_BUFFER_CONTROL),
+	},
+	{
+		.dcp = (mmDCP3_GRPH_CONTROL - mmDCP0_GRPH_CONTROL),
+		.dmif = (mmDMIF_PG3_DPG_WATERMARK_MASK_CONTROL
+				- mmDMIF_PG0_DPG_WATERMARK_MASK_CONTROL),
+		.pipe = (mmPIPE3_DMIF_BUFFER_CONTROL
+				- mmPIPE0_DMIF_BUFFER_CONTROL),
+	},
+	{
+		.dcp = (mmDCP4_GRPH_CONTROL - mmDCP0_GRPH_CONTROL),
+		.dmif = (mmDMIF_PG4_DPG_WATERMARK_MASK_CONTROL
+				- mmDMIF_PG0_DPG_WATERMARK_MASK_CONTROL),
+		.pipe = (mmPIPE4_DMIF_BUFFER_CONTROL
+				- mmPIPE0_DMIF_BUFFER_CONTROL),
+	},
+	{
+		.dcp = (mmDCP5_GRPH_CONTROL - mmDCP0_GRPH_CONTROL),
+		.dmif = (mmDMIF_PG5_DPG_WATERMARK_MASK_CONTROL
+				- mmDMIF_PG0_DPG_WATERMARK_MASK_CONTROL),
+		.pipe = (mmPIPE5_DMIF_BUFFER_CONTROL
+				- mmPIPE0_DMIF_BUFFER_CONTROL),
+	}
+};
+
+static const struct bios_registers bios_regs = {
+	.BIOS_SCRATCH_6 = mmBIOS_SCRATCH_6 + NBIO_BASE(mmBIOS_SCRATCH_6_BASE_IDX)
+};
+
+static const struct resource_caps res_cap = {
+		.num_timing_generator = 3,
+		.num_audio = 7,
+		.num_stream_encoder = 6,
+		.num_pll = 6,
+};
+
+static const struct dc_debug debug_defaults = {
+		.disable_clock_gate = true,
+};
+
+struct clock_source *dce120_clock_source_create(
+	struct dc_context *ctx,
+	struct dc_bios *bios,
+	enum clock_source_id id,
+	const struct dce110_clk_src_regs *regs,
+	bool dp_clk_src)
+{
+	struct dce110_clk_src *clk_src =
+		dm_alloc(sizeof(struct dce110_clk_src));
+
+	if (!clk_src)
+		return NULL;
+
+	if (dce110_clk_src_construct(clk_src, ctx, bios, id,
+			regs, &cs_shift, &cs_mask)) {
+		clk_src->base.dp_clk_src = dp_clk_src;
+		return &clk_src->base;
+	}
+
+	BREAK_TO_DEBUGGER();
+	return NULL;
+}
+
+void dce120_clock_source_destroy(struct clock_source **clk_src)
+{
+	dm_free(TO_DCE110_CLK_SRC(*clk_src));
+	*clk_src = NULL;
+}
+
+
+bool dce120_hw_sequencer_create(struct core_dc *dc)
+{
+	/* All registers used by dce12 match those in dce11 in offset and
+	 * structure
+	 */
+	dce120_hw_sequencer_construct(dc);
+
+	/* TODO: move to a separate file and override what is needed */
+
+	return true;
+}
+
+static struct timing_generator *dce120_timing_generator_create(
+		struct dc_context *ctx,
+		uint32_t instance,
+		const struct dce110_timing_generator_offsets *offsets)
+{
+	struct dce110_timing_generator *tg110 =
+		dm_alloc(sizeof(struct dce110_timing_generator));
+
+	if (!tg110)
+		return NULL;
+
+	if (dce120_timing_generator_construct(tg110, ctx, instance, offsets))
+		return &tg110->base;
+
+	BREAK_TO_DEBUGGER();
+	dm_free(tg110);
+	return NULL;
+}
+
+static void dce120_ipp_destroy(struct input_pixel_processor **ipp)
+{
+	dm_free(TO_DCE110_IPP(*ipp));
+	*ipp = NULL;
+}
+
+static void dce120_transform_destroy(struct transform **xfm)
+{
+	dm_free(TO_DCE_TRANSFORM(*xfm));
+	*xfm = NULL;
+}
+
+static void destruct(struct dce110_resource_pool *pool)
+{
+	unsigned int i;
+
+	for (i = 0; i < pool->base.pipe_count; i++) {
+		if (pool->base.opps[i] != NULL)
+			dce110_opp_destroy(&pool->base.opps[i]);
+
+		if (pool->base.transforms[i] != NULL)
+			dce120_transform_destroy(&pool->base.transforms[i]);
+
+		if (pool->base.ipps[i] != NULL)
+			dce120_ipp_destroy(&pool->base.ipps[i]);
+
+		if (pool->base.mis[i] != NULL) {
+			dm_free(TO_DCE110_MEM_INPUT(pool->base.mis[i]));
+			pool->base.mis[i] = NULL;
+		}
+
+		if (pool->base.irqs != NULL) {
+			dal_irq_service_destroy(&pool->base.irqs);
+		}
+
+		if (pool->base.timing_generators[i] != NULL) {
+			dm_free(DCE110TG_FROM_TG(pool->base.timing_generators[i]));
+			pool->base.timing_generators[i] = NULL;
+		}
+	}
+
+	for (i = 0; i < pool->base.audio_count; i++) {
+		if (pool->base.audios[i])
+			dce_aud_destroy(&pool->base.audios[i]);
+	}
+
+	for (i = 0; i < pool->base.stream_enc_count; i++) {
+		if (pool->base.stream_enc[i] != NULL)
+			dm_free(DCE110STRENC_FROM_STRENC(pool->base.stream_enc[i]));
+	}
+
+	for (i = 0; i < pool->base.clk_src_count; i++) {
+		if (pool->base.clock_sources[i] != NULL)
+			dce120_clock_source_destroy(
+				&pool->base.clock_sources[i]);
+	}
+
+	if (pool->base.dp_clock_source != NULL)
+		dce120_clock_source_destroy(&pool->base.dp_clock_source);
+
+	if (pool->base.abm != NULL)
+		dce_abm_destroy(&pool->base.abm);
+
+	if (pool->base.dmcu != NULL)
+		dce_dmcu_destroy(&pool->base.dmcu);
+
+	if (pool->base.display_clock != NULL)
+		dce_disp_clk_destroy(&pool->base.display_clock);
+}
+
+static void read_dce_straps(
+	struct dc_context *ctx,
+	struct resource_straps *straps)
+{
+	/* TODO: Registers are missing */
+	/*REG_GET_2(CC_DC_HDMI_STRAPS,
+			HDMI_DISABLE, &straps->hdmi_disable,
+			AUDIO_STREAM_NUMBER, &straps->audio_stream_number);
+
+	REG_GET(DC_PINSTRAPS, DC_PINSTRAPS_AUDIO, &straps->dc_pinstraps_audio);*/
+}
+
+static struct audio *create_audio(
+		struct dc_context *ctx, unsigned int inst)
+{
+	return dce_audio_create(ctx, inst,
+			&audio_regs[inst], &audio_shift, &audio_mask);
+}
+
+static const struct encoder_feature_support link_enc_feature = {
+		.max_hdmi_deep_color = COLOR_DEPTH_121212,
+		.max_hdmi_pixel_clock = 600000,
+		.ycbcr420_supported = true,
+		.flags.bits.IS_HBR2_CAPABLE = true,
+		.flags.bits.IS_HBR3_CAPABLE = true,
+		.flags.bits.IS_TPS3_CAPABLE = true,
+		.flags.bits.IS_TPS4_CAPABLE = true,
+		.flags.bits.IS_YCBCR_CAPABLE = true
+};
+
+struct link_encoder *dce120_link_encoder_create(
+	const struct encoder_init_data *enc_init_data)
+{
+	struct dce110_link_encoder *enc110 =
+		dm_alloc(sizeof(struct dce110_link_encoder));
+
+	if (!enc110)
+		return NULL;
+
+	if (dce110_link_encoder_construct(
+			enc110,
+			enc_init_data,
+			&link_enc_feature,
+			&link_enc_regs[enc_init_data->transmitter],
+			&link_enc_aux_regs[enc_init_data->channel - 1],
+			&link_enc_hpd_regs[enc_init_data->hpd_source])) {
+
+		return &enc110->base;
+	}
+
+	BREAK_TO_DEBUGGER();
+	dm_free(enc110);
+	return NULL;
+}
+
+static struct input_pixel_processor *dce120_ipp_create(
+	struct dc_context *ctx,
+	uint32_t inst,
+	const struct dce110_ipp_reg_offsets *offset)
+{
+	struct dce110_ipp *ipp = dm_alloc(sizeof(struct dce110_ipp));
+
+	if (!ipp)
+		return NULL;
+
+	if (dce120_ipp_construct(ipp, ctx, inst, offset))
+		return &ipp->base;
+
+	BREAK_TO_DEBUGGER();
+	dm_free(ipp);
+	return NULL;
+}
+
+static struct stream_encoder *dce120_stream_encoder_create(
+	enum engine_id eng_id,
+	struct dc_context *ctx)
+{
+	struct dce110_stream_encoder *enc110 =
+		dm_alloc(sizeof(struct dce110_stream_encoder));
+
+	if (!enc110)
+		return NULL;
+
+	if (dce110_stream_encoder_construct(
+			enc110, ctx, ctx->dc_bios, eng_id,
+			&stream_enc_regs[eng_id], &se_shift, &se_mask))
+		return &enc110->base;
+
+	BREAK_TO_DEBUGGER();
+	dm_free(enc110);
+	return NULL;
+}
+
+#define SRII(reg_name, block, id)\
+	.reg_name[id] = BASE(mm ## block ## id ## _ ## reg_name ## _BASE_IDX) + \
+					mm ## block ## id ## _ ## reg_name
+
+static const struct dce_hwseq_registers hwseq_reg = {
+		HWSEQ_DCE112_REG_LIST()
+};
+
+static const struct dce_hwseq_shift hwseq_shift = {
+		HWSEQ_DCE12_MASK_SH_LIST(__SHIFT)
+};
+
+static const struct dce_hwseq_mask hwseq_mask = {
+		HWSEQ_DCE12_MASK_SH_LIST(_MASK)
+};
+
+static struct dce_hwseq *dce120_hwseq_create(
+	struct dc_context *ctx)
+{
+	struct dce_hwseq *hws = dm_alloc(sizeof(struct dce_hwseq));
+
+	if (hws) {
+		hws->ctx = ctx;
+		hws->regs = &hwseq_reg;
+		hws->shifts = &hwseq_shift;
+		hws->masks = &hwseq_mask;
+	}
+	return hws;
+}
+
+static const struct resource_create_funcs res_create_funcs = {
+	.read_dce_straps = read_dce_straps,
+	.create_audio = create_audio,
+	.create_stream_encoder = dce120_stream_encoder_create,
+	.create_hwseq = dce120_hwseq_create,
+};
+
+#define mi_inst_regs(id) { MI_DCE12_REG_LIST(id) }
+static const struct dce_mem_input_registers mi_regs[] = {
+		mi_inst_regs(0),
+		mi_inst_regs(1),
+		mi_inst_regs(2),
+		mi_inst_regs(3),
+		mi_inst_regs(4),
+		mi_inst_regs(5),
+};
+
+static const struct dce_mem_input_shift mi_shifts = {
+		MI_DCE12_MASK_SH_LIST(__SHIFT)
+};
+
+static const struct dce_mem_input_mask mi_masks = {
+		MI_DCE12_MASK_SH_LIST(_MASK)
+};
+
+static struct mem_input *dce120_mem_input_create(
+	struct dc_context *ctx,
+	uint32_t inst,
+	const struct dce110_mem_input_reg_offsets *offset)
+{
+	struct dce110_mem_input *mem_input110 =
+		dm_alloc(sizeof(struct dce110_mem_input));
+
+	if (!mem_input110)
+		return NULL;
+
+	if (dce120_mem_input_construct(mem_input110, ctx, inst, offset)) {
+		struct mem_input *mi = &mem_input110->base;
+
+		mi->regs = &mi_regs[inst];
+		mi->shifts = &mi_shifts;
+		mi->masks = &mi_masks;
+		return mi;
+	}
+
+	BREAK_TO_DEBUGGER();
+	dm_free(mem_input110);
+	return NULL;
+}
+
+static struct transform *dce120_transform_create(
+	struct dc_context *ctx,
+	uint32_t inst)
+{
+	struct dce_transform *transform =
+		dm_alloc(sizeof(struct dce_transform));
+
+	if (!transform)
+		return NULL;
+
+	if (dce_transform_construct(transform, ctx, inst,
+			&xfm_regs[inst], &xfm_shift, &xfm_mask)) {
+		transform->lb_memory_size = 0x1404; /*5124*/
+		return &transform->base;
+	}
+
+	BREAK_TO_DEBUGGER();
+	dm_free(transform);
+	return NULL;
+}
+
+static void dce120_destroy_resource_pool(struct resource_pool **pool)
+{
+	struct dce110_resource_pool *dce110_pool = TO_DCE110_RES_POOL(*pool);
+
+	destruct(dce110_pool);
+	dm_free(dce110_pool);
+	*pool = NULL;
+}
+
+static const struct resource_funcs dce120_res_pool_funcs = {
+	.destroy = dce120_destroy_resource_pool,
+	.link_enc_create = dce120_link_encoder_create,
+	.validate_with_context = dce112_validate_with_context,
+	.validate_guaranteed = dce112_validate_guaranteed,
+	.validate_bandwidth = dce112_validate_bandwidth
+};
+
+static void bw_calcs_data_update_from_pplib(struct core_dc *dc)
+{
+	struct dm_pp_clock_levels_with_latency eng_clks = {0};
+	struct dm_pp_clock_levels_with_latency mem_clks = {0};
+	struct dm_pp_wm_sets_with_clock_ranges clk_ranges = {0};
+	int i;
+	unsigned int clk;
+	unsigned int latency;
+
+	/*do system clock*/
+	if (!dm_pp_get_clock_levels_by_type_with_latency(
+				dc->ctx,
+				DM_PP_CLOCK_TYPE_ENGINE_CLK,
+				&eng_clks) || eng_clks.num_levels == 0) {
+
+		eng_clks.num_levels = 8;
+		clk = 300000;
+
+		for (i = 0; i < eng_clks.num_levels; i++) {
+			eng_clks.data[i].clocks_in_khz = clk;
+			clk += 100000;
+		}
+	}
+
+	/* convert all the clocks from kHz to fixed point MHz  TODO: wloop data */
+	dc->bw_vbios.high_sclk = bw_frc_to_fixed(
+		eng_clks.data[eng_clks.num_levels-1].clocks_in_khz, 1000);
+	dc->bw_vbios.mid1_sclk  = bw_frc_to_fixed(
+		eng_clks.data[eng_clks.num_levels/8].clocks_in_khz, 1000);
+	dc->bw_vbios.mid2_sclk  = bw_frc_to_fixed(
+		eng_clks.data[eng_clks.num_levels*2/8].clocks_in_khz, 1000);
+	dc->bw_vbios.mid3_sclk  = bw_frc_to_fixed(
+		eng_clks.data[eng_clks.num_levels*3/8].clocks_in_khz, 1000);
+	dc->bw_vbios.mid4_sclk  = bw_frc_to_fixed(
+		eng_clks.data[eng_clks.num_levels*4/8].clocks_in_khz, 1000);
+	dc->bw_vbios.mid5_sclk  = bw_frc_to_fixed(
+		eng_clks.data[eng_clks.num_levels*5/8].clocks_in_khz, 1000);
+	dc->bw_vbios.mid6_sclk  = bw_frc_to_fixed(
+		eng_clks.data[eng_clks.num_levels*6/8].clocks_in_khz, 1000);
+	dc->bw_vbios.low_sclk  = bw_frc_to_fixed(
+			eng_clks.data[0].clocks_in_khz, 1000);
+
+	/*do memory clock*/
+	if (!dm_pp_get_clock_levels_by_type_with_latency(
+			dc->ctx,
+			DM_PP_CLOCK_TYPE_MEMORY_CLK,
+			&mem_clks) || mem_clks.num_levels == 0) {
+
+		mem_clks.num_levels = 3;
+		clk = 250000;
+		latency = 45;
+
+		for (i = 0; i < mem_clks.num_levels; i++) {
+			mem_clks.data[i].clocks_in_khz = clk;
+			mem_clks.data[i].latency_in_us = latency;
+			clk += 500000;
+			latency -= 5;
+		}
+
+	}
+
+	/* We don't need to call PPLIB for validation clocks since PPLIB
+	 * already gives us the highest sclk and highest mclk (UMA clock).
+	 * Also always convert the UMA clock (from PPLIB) to YCLK (HW formula):
+	 * YCLK = UMACLK * m_memoryTypeMultiplier
+	 */
+	dc->bw_vbios.low_yclk = bw_frc_to_fixed(
+		mem_clks.data[0].clocks_in_khz * MEMORY_TYPE_MULTIPLIER, 1000);
+	dc->bw_vbios.mid_yclk = bw_frc_to_fixed(
+		mem_clks.data[mem_clks.num_levels>>1].clocks_in_khz * MEMORY_TYPE_MULTIPLIER,
+		1000);
+	dc->bw_vbios.high_yclk = bw_frc_to_fixed(
+		mem_clks.data[mem_clks.num_levels-1].clocks_in_khz * MEMORY_TYPE_MULTIPLIER,
+		1000);
+
+	/* Now notify PPLib/SMU which watermark sets they should select
+	 * depending on the DPM state they are in, and update BW MGR GFX engine
+	 * and memory clock member variables for watermark calculations for
+	 * each watermark set.
+	 */
+	clk_ranges.num_wm_sets = 4;
+	clk_ranges.wm_clk_ranges[0].wm_set_id = WM_SET_A;
+	clk_ranges.wm_clk_ranges[0].wm_min_eng_clk_in_khz =
+			eng_clks.data[0].clocks_in_khz;
+	clk_ranges.wm_clk_ranges[0].wm_max_eng_clk_in_khz =
+			eng_clks.data[eng_clks.num_levels*3/8].clocks_in_khz - 1;
+	clk_ranges.wm_clk_ranges[0].wm_min_memg_clk_in_khz =
+			mem_clks.data[0].clocks_in_khz;
+	clk_ranges.wm_clk_ranges[0].wm_max_mem_clk_in_khz =
+			mem_clks.data[mem_clks.num_levels>>1].clocks_in_khz - 1;
+
+	clk_ranges.wm_clk_ranges[1].wm_set_id = WM_SET_B;
+	clk_ranges.wm_clk_ranges[1].wm_min_eng_clk_in_khz =
+			eng_clks.data[eng_clks.num_levels*3/8].clocks_in_khz;
+	/* 5 GHz instead of data[7].clockInKHz to cover Overdrive */
+	clk_ranges.wm_clk_ranges[1].wm_max_eng_clk_in_khz = 5000000;
+	clk_ranges.wm_clk_ranges[1].wm_min_memg_clk_in_khz =
+			mem_clks.data[0].clocks_in_khz;
+	clk_ranges.wm_clk_ranges[1].wm_max_mem_clk_in_khz =
+			mem_clks.data[mem_clks.num_levels>>1].clocks_in_khz - 1;
+
+	clk_ranges.wm_clk_ranges[2].wm_set_id = WM_SET_C;
+	clk_ranges.wm_clk_ranges[2].wm_min_eng_clk_in_khz =
+			eng_clks.data[0].clocks_in_khz;
+	clk_ranges.wm_clk_ranges[2].wm_max_eng_clk_in_khz =
+			eng_clks.data[eng_clks.num_levels*3/8].clocks_in_khz - 1;
+	clk_ranges.wm_clk_ranges[2].wm_min_memg_clk_in_khz =
+			mem_clks.data[mem_clks.num_levels>>1].clocks_in_khz;
+	/* 5 GHz instead of data[2].clockInKHz to cover Overdrive */
+	clk_ranges.wm_clk_ranges[2].wm_max_mem_clk_in_khz = 5000000;
+
+	clk_ranges.wm_clk_ranges[3].wm_set_id = WM_SET_D;
+	clk_ranges.wm_clk_ranges[3].wm_min_eng_clk_in_khz =
+			eng_clks.data[eng_clks.num_levels*3/8].clocks_in_khz;
+	/* 5 GHz instead of data[7].clockInKHz to cover Overdrive */
+	clk_ranges.wm_clk_ranges[3].wm_max_eng_clk_in_khz = 5000000;
+	clk_ranges.wm_clk_ranges[3].wm_min_memg_clk_in_khz =
+			mem_clks.data[mem_clks.num_levels>>1].clocks_in_khz;
+	/* 5 GHz instead of data[2].clockInKHz to cover Overdrive */
+	clk_ranges.wm_clk_ranges[3].wm_max_mem_clk_in_khz = 5000000;
+
+	/* Notify PP Lib/SMU which Watermarks to use for which clock ranges */
+	dm_pp_notify_wm_clock_changes(dc->ctx, &clk_ranges);
+}
+
+static bool construct(
+	uint8_t num_virtual_links,
+	struct core_dc *dc,
+	struct dce110_resource_pool *pool)
+{
+	unsigned int i;
+	struct dc_context *ctx = dc->ctx;
+
+	ctx->dc_bios->regs = &bios_regs;
+
+	pool->base.res_cap = &res_cap;
+	pool->base.funcs = &dce120_res_pool_funcs;
+
+	/* TODO: Fill more data from GreenlandAsicCapability.cpp */
+	pool->base.pipe_count = 6;
+	pool->base.underlay_pipe_index = NO_UNDERLAY_PIPE;
+
+	dc->public.caps.max_downscale_ratio = 200;
+	dc->public.caps.i2c_speed_in_khz = 100;
+	dc->public.caps.max_cursor_size = 128;
+	dc->public.debug = debug_defaults;
+
+	/*************************************************
+	 *  Create resources                             *
+	 *************************************************/
+
+	pool->base.clock_sources[DCE120_CLK_SRC_PLL0] =
+			dce120_clock_source_create(ctx, ctx->dc_bios,
+				CLOCK_SOURCE_COMBO_PHY_PLL0,
+				&clk_src_regs[0], false);
+	pool->base.clock_sources[DCE120_CLK_SRC_PLL1] =
+			dce120_clock_source_create(ctx, ctx->dc_bios,
+				CLOCK_SOURCE_COMBO_PHY_PLL1,
+				&clk_src_regs[1], false);
+	pool->base.clock_sources[DCE120_CLK_SRC_PLL2] =
+			dce120_clock_source_create(ctx, ctx->dc_bios,
+				CLOCK_SOURCE_COMBO_PHY_PLL2,
+				&clk_src_regs[2], false);
+	pool->base.clock_sources[DCE120_CLK_SRC_PLL3] =
+			dce120_clock_source_create(ctx, ctx->dc_bios,
+				CLOCK_SOURCE_COMBO_PHY_PLL3,
+				&clk_src_regs[3], false);
+	pool->base.clock_sources[DCE120_CLK_SRC_PLL4] =
+			dce120_clock_source_create(ctx, ctx->dc_bios,
+				CLOCK_SOURCE_COMBO_PHY_PLL4,
+				&clk_src_regs[4], false);
+	pool->base.clock_sources[DCE120_CLK_SRC_PLL5] =
+			dce120_clock_source_create(ctx, ctx->dc_bios,
+				CLOCK_SOURCE_COMBO_PHY_PLL5,
+				&clk_src_regs[5], false);
+	pool->base.clk_src_count = DCE120_CLK_SRC_TOTAL;
+
+	pool->base.dp_clock_source =
+			dce120_clock_source_create(ctx, ctx->dc_bios,
+				CLOCK_SOURCE_ID_DP_DTO,
+				&clk_src_regs[0], true);
+
+	for (i = 0; i < pool->base.clk_src_count; i++) {
+		if (pool->base.clock_sources[i] == NULL) {
+			dm_error("DC: failed to create clock sources!\n");
+			BREAK_TO_DEBUGGER();
+			goto clk_src_create_fail;
+		}
+	}
+
+	pool->base.display_clock = dce120_disp_clk_create(ctx,
+			&disp_clk_regs,
+			&disp_clk_shift,
+			&disp_clk_mask);
+	if (pool->base.display_clock == NULL) {
+		dm_error("DC: failed to create display clock!\n");
+		BREAK_TO_DEBUGGER();
+		goto disp_clk_create_fail;
+	}
+
+	pool->base.dmcu = dce_dmcu_create(ctx,
+			&dmcu_regs,
+			&dmcu_shift,
+			&dmcu_mask);
+	if (pool->base.dmcu == NULL) {
+		dm_error("DC: failed to create dmcu!\n");
+		BREAK_TO_DEBUGGER();
+		goto res_create_fail;
+	}
+
+	pool->base.abm = dce_abm_create(ctx,
+			&abm_regs,
+			&abm_shift,
+			&abm_mask);
+	if (pool->base.abm == NULL) {
+		dm_error("DC: failed to create abm!\n");
+		BREAK_TO_DEBUGGER();
+		goto res_create_fail;
+	}
+
+	{
+	#if defined(CONFIG_DRM_AMD_DC_DCE12_0)
+		struct irq_service_init_data init_data;
+		init_data.ctx = dc->ctx;
+		pool->base.irqs = dal_irq_service_dce120_create(&init_data);
+		if (!pool->base.irqs)
+			goto irqs_create_fail;
+	#endif
+	}
+
+	for (i = 0; i < pool->base.pipe_count; i++) {
+		pool->base.timing_generators[i] =
+				dce120_timing_generator_create(
+					ctx,
+					i,
+					&dce120_tg_offsets[i]);
+		if (pool->base.timing_generators[i] == NULL) {
+			BREAK_TO_DEBUGGER();
+			dm_error("DC: failed to create tg!\n");
+			goto controller_create_fail;
+		}
+
+		pool->base.mis[i] = dce120_mem_input_create(ctx,
+				i, &dce120_mi_reg_offsets[i]);
+
+		if (pool->base.mis[i] == NULL) {
+			BREAK_TO_DEBUGGER();
+			dm_error(
+				"DC: failed to create memory input!\n");
+			goto controller_create_fail;
+		}
+
+		pool->base.ipps[i] = dce120_ipp_create(ctx, i,
+				&dce120_ipp_reg_offsets[i]);
+		if (pool->base.ipps[i] == NULL) {
+			BREAK_TO_DEBUGGER();
+			dm_error(
+				"DC: failed to create input pixel processor!\n");
+			goto controller_create_fail;
+		}
+
+		pool->base.transforms[i] = dce120_transform_create(ctx, i);
+		if (pool->base.transforms[i] == NULL) {
+			BREAK_TO_DEBUGGER();
+			dm_error(
+				"DC: failed to create transform!\n");
+			goto res_create_fail;
+		}
+
+		pool->base.opps[i] = dce120_opp_create(
+			ctx,
+			i);
+		if (pool->base.opps[i] == NULL) {
+			BREAK_TO_DEBUGGER();
+			dm_error(
+				"DC: failed to create output pixel processor!\n");
+		}
+	}
+
+	if (!resource_construct(num_virtual_links, dc, &pool->base,
+			 &res_create_funcs))
+		goto res_create_fail;
+
+	/* Create hardware sequencer */
+	if (!dce120_hw_sequencer_create(dc))
+		goto controller_create_fail;
+
+	bw_calcs_init(&dc->bw_dceip, &dc->bw_vbios, dc->ctx->asic_id);
+
+	bw_calcs_data_update_from_pplib(dc);
+
+	return true;
+
+irqs_create_fail:
+controller_create_fail:
+disp_clk_create_fail:
+clk_src_create_fail:
+res_create_fail:
+
+	destruct(pool);
+
+	return false;
+}
+
+struct resource_pool *dce120_create_resource_pool(
+	uint8_t num_virtual_links,
+	struct core_dc *dc)
+{
+	struct dce110_resource_pool *pool =
+		dm_alloc(sizeof(struct dce110_resource_pool));
+
+	if (!pool)
+		return NULL;
+
+	if (construct(num_virtual_links, dc, pool))
+		return &pool->base;
+
+	BREAK_TO_DEBUGGER();
+	return NULL;
+}
diff --git a/drivers/gpu/drm/amd/display/dc/dce120/dce120_resource.h b/drivers/gpu/drm/amd/display/dc/dce120/dce120_resource.h
new file mode 100644
index 0000000..038c78d
--- /dev/null
+++ b/drivers/gpu/drm/amd/display/dc/dce120/dce120_resource.h
@@ -0,0 +1,39 @@
+/*
+ * Copyright 2012-15 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: AMD
+ *
+ */
+
+#ifndef __DC_RESOURCE_DCE120_H__
+#define __DC_RESOURCE_DCE120_H__
+
+#include "core_types.h"
+
+struct core_dc;
+struct resource_pool;
+
+struct resource_pool *dce120_create_resource_pool(
+	uint8_t num_virtual_links,
+	struct core_dc *dc);
+
+#endif /* __DC_RESOURCE_DCE120_H__ */
+
diff --git a/drivers/gpu/drm/amd/display/dc/dce120/dce120_timing_generator.c b/drivers/gpu/drm/amd/display/dc/dce120/dce120_timing_generator.c
new file mode 100644
index 0000000..d7e787b
--- /dev/null
+++ b/drivers/gpu/drm/amd/display/dc/dce120/dce120_timing_generator.c
@@ -0,0 +1,1109 @@
+/*
+ * Copyright 2012-15 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: AMD
+ *
+ */
+
+#include "dm_services.h"
+
+#include "vega10/DC/dce_12_0_offset.h"
+#include "vega10/DC/dce_12_0_sh_mask.h"
+#include "vega10/soc15ip.h"
+
+#include "dc_types.h"
+#include "dc_bios_types.h"
+
+#include "include/grph_object_id.h"
+#include "include/logger_interface.h"
+#include "dce120_timing_generator.h"
+
+#include "timing_generator.h"
+
+#define CRTC_REG_UPDATE_N(reg_name, n, ...)	\
+		generic_reg_update_soc15(tg110->base.ctx, tg110->offsets.crtc, reg_name, n, __VA_ARGS__)
+
+#define CRTC_REG_SET_N(reg_name, n, ...)	\
+		generic_reg_set_soc15(tg110->base.ctx, tg110->offsets.crtc, reg_name, n, __VA_ARGS__)
+
+#define CRTC_REG_UPDATE(reg, field, val)	\
+		CRTC_REG_UPDATE_N(reg, 1, FD(reg##__##field), val)
+
+#define CRTC_REG_UPDATE_2(reg, field1, val1, field2, val2)	\
+		CRTC_REG_UPDATE_N(reg, 2, FD(reg##__##field1), val1, FD(reg##__##field2), val2)
+
+#define CRTC_REG_UPDATE_3(reg, field1, val1, field2, val2, field3, val3)	\
+		CRTC_REG_UPDATE_N(reg, 3, FD(reg##__##field1), val1, FD(reg##__##field2), val2, FD(reg##__##field3), val3)
+
+#define CRTC_REG_UPDATE_4(reg, field1, val1, field2, val2, field3, val3, field4, val4)	\
+		CRTC_REG_UPDATE_N(reg, 4, FD(reg##__##field1), val1, FD(reg##__##field2), val2, FD(reg##__##field3), val3, FD(reg##__##field4), val4)
+
+#define CRTC_REG_UPDATE_5(reg, field1, val1, field2, val2, field3, val3, field4, val4, field5, val5)	\
+		CRTC_REG_UPDATE_N(reg, 5, FD(reg##__##field1), val1, FD(reg##__##field2), val2, FD(reg##__##field3), val3, FD(reg##__##field4), val4, FD(reg##__##field5), val5)
+
+#define CRTC_REG_SET(reg, field, val)	\
+		CRTC_REG_SET_N(reg, 1, FD(reg##__##field), val)
+
+#define CRTC_REG_SET_2(reg, field1, val1, field2, val2)	\
+		CRTC_REG_SET_N(reg, 2, FD(reg##__##field1), val1, FD(reg##__##field2), val2)
+
+#define CRTC_REG_SET_3(reg, field1, val1, field2, val2, field3, val3)	\
+		CRTC_REG_SET_N(reg, 3, FD(reg##__##field1), val1, FD(reg##__##field2), val2, FD(reg##__##field3), val3)
+
+/**
+ *****************************************************************************
+ *  Function: is_in_vertical_blank
+ *
+ *  @brief
+ *     check the current status of the CRTC to determine whether we are in
+ *     the vertical blank region
+ *
+ *  @return
+ *     true if currently in blank region, false otherwise
+ *
+ *****************************************************************************
+ */
+static bool dce120_timing_generator_is_in_vertical_blank(
+		struct timing_generator *tg)
+{
+	uint32_t field = 0;
+	struct dce110_timing_generator *tg110 = DCE110TG_FROM_TG(tg);
+	uint32_t value = dm_read_reg_soc15(
+					tg->ctx,
+					mmCRTC0_CRTC_STATUS,
+					tg110->offsets.crtc);
+
+	field = get_reg_field_value(value, CRTC0_CRTC_STATUS, CRTC_V_BLANK);
+	return field == 1;
+}
+
+/* determine if given timing can be supported by TG */
+bool dce120_timing_generator_validate_timing(
+	struct timing_generator *tg,
+	const struct dc_crtc_timing *timing,
+	enum signal_type signal)
+{
+	uint32_t interlace_factor = timing->flags.INTERLACE ? 2 : 1;
+	uint32_t v_blank =
+					(timing->v_total - timing->v_addressable -
+					timing->v_border_top - timing->v_border_bottom) *
+					interlace_factor;
+	struct dce110_timing_generator *tg110 = DCE110TG_FROM_TG(tg);
+
+	if (!dce110_timing_generator_validate_timing(
+					tg,
+					timing,
+					signal))
+		return false;
+
+	if (v_blank < tg110->min_v_blank	||
+		 timing->h_sync_width  < tg110->min_h_sync_width ||
+		 timing->v_sync_width  < tg110->min_v_sync_width)
+		return false;
+
+	return true;
+}
+
+bool dce120_tg_validate_timing(struct timing_generator *tg,
+	const struct dc_crtc_timing *timing)
+{
+	return dce120_timing_generator_validate_timing(tg, timing, SIGNAL_TYPE_NONE);
+}
+
+/******** HW programming ************/
+/* Disable/Enable Timing Generator */
+bool dce120_timing_generator_enable_crtc(struct timing_generator *tg)
+{
+	enum bp_result result;
+	struct dce110_timing_generator *tg110 = DCE110TG_FROM_TG(tg);
+
+	/* Set MASTER_UPDATE_MODE to 0.
+	 * This is needed for DRR, and was also suggested as the default value by Syed.
+	 */
+
+	CRTC_REG_UPDATE(CRTC0_CRTC_MASTER_UPDATE_MODE,
+			MASTER_UPDATE_MODE, 0);
+
+	CRTC_REG_UPDATE(CRTC0_CRTC_MASTER_UPDATE_LOCK,
+			UNDERFLOW_UPDATE_LOCK, 0);
+
+	/* TODO API for AtomFirmware didn't change*/
+	result = tg->bp->funcs->enable_crtc(tg->bp, tg110->controller_id, true);
+
+	return result == BP_RESULT_OK;
+}
+
+void dce120_timing_generator_set_early_control(
+		struct timing_generator *tg,
+		uint32_t early_cntl)
+{
+	struct dce110_timing_generator *tg110 = DCE110TG_FROM_TG(tg);
+
+	CRTC_REG_UPDATE(CRTC0_CRTC_CONTROL,
+			CRTC_HBLANK_EARLY_CONTROL, early_cntl);
+}
+
+/**************** TG current status ******************/
+
+/* return the current frame counter. Used by Linux kernel DRM */
+uint32_t dce120_timing_generator_get_vblank_counter(
+		struct timing_generator *tg)
+{
+	struct dce110_timing_generator *tg110 = DCE110TG_FROM_TG(tg);
+	uint32_t value = dm_read_reg_soc15(
+				tg->ctx,
+				mmCRTC0_CRTC_STATUS_FRAME_COUNT,
+				tg110->offsets.crtc);
+	uint32_t field = get_reg_field_value(
+				value, CRTC0_CRTC_STATUS_FRAME_COUNT, CRTC_FRAME_COUNT);
+
+	return field;
+}
+
+/* Get current H and V position */
+void dce120_timing_generator_get_crtc_positions(
+	struct timing_generator *tg,
+	int32_t *h_position,
+	int32_t *v_position)
+{
+	struct dce110_timing_generator *tg110 = DCE110TG_FROM_TG(tg);
+	uint32_t value = dm_read_reg_soc15(
+				tg->ctx,
+				mmCRTC0_CRTC_STATUS_POSITION,
+				tg110->offsets.crtc);
+
+	*h_position = get_reg_field_value(
+					value, CRTC0_CRTC_STATUS_POSITION, CRTC_HORZ_COUNT);
+
+	*v_position = get_reg_field_value(
+						value, CRTC0_CRTC_STATUS_POSITION, CRTC_VERT_COUNT);
+}
+
+/* wait until TG is in beginning of vertical blank region */
+void dce120_timing_generator_wait_for_vblank(struct timing_generator *tg)
+{
+	/* We want to catch the beginning of VBlank here, so if the first
+	 * check already finds us in VBlank, we might be very close to Active;
+	 * in that case wait for another frame
+	 */
+	while (dce120_timing_generator_is_in_vertical_blank(tg)) {
+		if (!tg->funcs->is_counter_moving(tg)) {
+			/* error - no point to wait if counter is not moving */
+			break;
+		}
+	}
+
+	while (!dce120_timing_generator_is_in_vertical_blank(tg)) {
+		if (!tg->funcs->is_counter_moving(tg)) {
+			/* error - no point to wait if counter is not moving */
+			break;
+		}
+	}
+}
+
+/* wait until TG is in beginning of active region */
+void dce120_timing_generator_wait_for_vactive(struct timing_generator *tg)
+{
+	while (dce120_timing_generator_is_in_vertical_blank(tg)) {
+		if (!tg->funcs->is_counter_moving(tg)) {
+			/* error - no point to wait if counter is not moving */
+			break;
+		}
+	}
+}
+
+/*********** Timing Generator Synchronization routines ****/
+
+/* Set up the Global Swap Lock group as Timing Server or Timing Client */
+void dce120_timing_generator_setup_global_swap_lock(
+	struct timing_generator *tg,
+	const struct dcp_gsl_params *gsl_params)
+{
+	struct dce110_timing_generator *tg110 = DCE110TG_FROM_TG(tg);
+	uint32_t value_crtc_vtotal =
+							dm_read_reg_soc15(tg->ctx,
+							mmCRTC0_CRTC_V_TOTAL,
+							tg110->offsets.crtc);
+	/* Checkpoint relative to end of frame */
+	uint32_t check_point =
+							get_reg_field_value(value_crtc_vtotal,
+							CRTC0_CRTC_V_TOTAL,
+							CRTC_V_TOTAL);
+
+	dm_write_reg_soc15(tg->ctx, mmCRTC0_CRTC_GSL_WINDOW, tg110->offsets.crtc, 0);
+
+	CRTC_REG_UPDATE_N(DCP0_DCP_GSL_CONTROL, 6,
+		/* This pipe will belong to GSL Group zero. */
+		FD(DCP0_DCP_GSL_CONTROL__DCP_GSL0_EN), 1,
+		FD(DCP0_DCP_GSL_CONTROL__DCP_GSL_MASTER_EN), gsl_params->gsl_master == tg->inst,
+		FD(DCP0_DCP_GSL_CONTROL__DCP_GSL_HSYNC_FLIP_FORCE_DELAY), HFLIP_READY_DELAY,
+		/* Keep the signal low (pending high) for 6 lines.
+		 * Also defines the minimum interval before re-checking the signal. */
+		FD(DCP0_DCP_GSL_CONTROL__DCP_GSL_HSYNC_FLIP_CHECK_DELAY), HFLIP_CHECK_DELAY,
+		/* DCP_GSL_PURPOSE_SURFACE_FLIP */
+		FD(DCP0_DCP_GSL_CONTROL__DCP_GSL_SYNC_SOURCE), 0,
+		FD(DCP0_DCP_GSL_CONTROL__DCP_GSL_DELAY_SURFACE_UPDATE_PENDING), 1);
+
+	CRTC_REG_SET_2(
+			CRTC0_CRTC_GSL_CONTROL,
+			CRTC_GSL_CHECK_LINE_NUM, check_point - FLIP_READY_BACK_LOOKUP,
+			CRTC_GSL_FORCE_DELAY, VFLIP_READY_DELAY);
+}
+
+/* Clear all the register writes done by setup_global_swap_lock */
+void dce120_timing_generator_tear_down_global_swap_lock(
+	struct timing_generator *tg)
+{
+	struct dce110_timing_generator *tg110 = DCE110TG_FROM_TG(tg);
+
+	/* Setting HW default values from reg specs */
+	CRTC_REG_SET_N(DCP0_DCP_GSL_CONTROL, 6,
+			FD(DCP0_DCP_GSL_CONTROL__DCP_GSL0_EN), 0,
+			FD(DCP0_DCP_GSL_CONTROL__DCP_GSL_MASTER_EN), 0,
+			FD(DCP0_DCP_GSL_CONTROL__DCP_GSL_HSYNC_FLIP_FORCE_DELAY), HFLIP_READY_DELAY,
+			FD(DCP0_DCP_GSL_CONTROL__DCP_GSL_HSYNC_FLIP_CHECK_DELAY), HFLIP_CHECK_DELAY,
+			/* DCP_GSL_PURPOSE_SURFACE_FLIP */
+			FD(DCP0_DCP_GSL_CONTROL__DCP_GSL_SYNC_SOURCE), 0,
+			FD(DCP0_DCP_GSL_CONTROL__DCP_GSL_DELAY_SURFACE_UPDATE_PENDING), 0);
+
+	CRTC_REG_SET_2(
+		CRTC0_CRTC_GSL_CONTROL,
+		CRTC_GSL_CHECK_LINE_NUM, 0,
+		CRTC_GSL_FORCE_DELAY, 0x2); /* TODO: why this value? */
+}
+
+/* Reset slave controllers on master VSync */
+void dce120_timing_generator_enable_reset_trigger(
+	struct timing_generator *tg,
+	int source)
+{
+	enum trigger_source_select trig_src_select = TRIGGER_SOURCE_SELECT_LOGIC_ZERO;
+	struct dce110_timing_generator *tg110 = DCE110TG_FROM_TG(tg);
+	uint32_t rising_edge = 0;
+	uint32_t falling_edge = 0;
+	/* Setup trigger edge */
+	uint32_t pol_value = dm_read_reg_soc15(
+									tg->ctx,
+									mmCRTC0_CRTC_V_SYNC_A_CNTL,
+									tg110->offsets.crtc);
+
+	/* Register spec has reversed definition:
+	 *	0 for positive, 1 for negative */
+	if (get_reg_field_value(pol_value,
+			CRTC0_CRTC_V_SYNC_A_CNTL,
+			CRTC_V_SYNC_A_POL) == 0) {
+		rising_edge = 1;
+	} else {
+		falling_edge = 1;
+	}
+
+	/* TODO What about other sources ?*/
+	trig_src_select = TRIGGER_SOURCE_SELECT_GSL_GROUP0;
+
+	CRTC_REG_UPDATE_N(CRTC0_CRTC_TRIGB_CNTL, 7,
+		FD(CRTC0_CRTC_TRIGB_CNTL__CRTC_TRIGB_SOURCE_SELECT), trig_src_select,
+		FD(CRTC0_CRTC_TRIGB_CNTL__CRTC_TRIGB_POLARITY_SELECT), TRIGGER_POLARITY_SELECT_LOGIC_ZERO,
+		FD(CRTC0_CRTC_TRIGB_CNTL__CRTC_TRIGB_RISING_EDGE_DETECT_CNTL), rising_edge,
+		FD(CRTC0_CRTC_TRIGB_CNTL__CRTC_TRIGB_FALLING_EDGE_DETECT_CNTL), falling_edge,
+		/* send every signal */
+		FD(CRTC0_CRTC_TRIGB_CNTL__CRTC_TRIGB_FREQUENCY_SELECT), 0,
+		/* no delay */
+		FD(CRTC0_CRTC_TRIGB_CNTL__CRTC_TRIGB_DELAY), 0,
+		/* clear trigger status */
+		FD(CRTC0_CRTC_TRIGB_CNTL__CRTC_TRIGB_CLEAR), 1);
+
+	CRTC_REG_UPDATE_3(
+			CRTC0_CRTC_FORCE_COUNT_NOW_CNTL,
+			CRTC_FORCE_COUNT_NOW_MODE, 2,
+			CRTC_FORCE_COUNT_NOW_TRIG_SEL, 1,
+			CRTC_FORCE_COUNT_NOW_CLEAR, 1);
+}
+
+/* disabling trigger-reset */
+void dce120_timing_generator_disable_reset_trigger(
+	struct timing_generator *tg)
+{
+	struct dce110_timing_generator *tg110 = DCE110TG_FROM_TG(tg);
+
+	CRTC_REG_UPDATE_2(
+		CRTC0_CRTC_FORCE_COUNT_NOW_CNTL,
+		CRTC_FORCE_COUNT_NOW_MODE, 0,
+		CRTC_FORCE_COUNT_NOW_CLEAR, 1);
+
+	CRTC_REG_UPDATE_3(
+		CRTC0_CRTC_TRIGB_CNTL,
+		CRTC_TRIGB_SOURCE_SELECT, TRIGGER_SOURCE_SELECT_LOGIC_ZERO,
+		CRTC_TRIGB_POLARITY_SELECT, TRIGGER_POLARITY_SELECT_LOGIC_ZERO,
+		/* clear trigger status */
+		CRTC_TRIGB_CLEAR, 1);
+}
+
+/* Checks whether CRTC triggered reset occurred */
+bool dce120_timing_generator_did_triggered_reset_occur(
+	struct timing_generator *tg)
+{
+	struct dce110_timing_generator *tg110 = DCE110TG_FROM_TG(tg);
+	uint32_t value = dm_read_reg_soc15(
+			tg->ctx,
+			mmCRTC0_CRTC_FORCE_COUNT_NOW_CNTL,
+			tg110->offsets.crtc);
+
+	return get_reg_field_value(value,
+			CRTC0_CRTC_FORCE_COUNT_NOW_CNTL,
+			CRTC_FORCE_COUNT_NOW_OCCURRED) != 0;
+}
+
+/******** Stuff to move to other virtual HW objects *****************/
+/* Move to enable accelerated mode */
+void dce120_timing_generator_disable_vga(struct timing_generator *tg)
+{
+	uint32_t offset = 0;
+	uint32_t value = 0;
+	struct dce110_timing_generator *tg110 = DCE110TG_FROM_TG(tg);
+
+	switch (tg110->controller_id) {
+	case CONTROLLER_ID_D0:
+		offset = 0;
+		break;
+	case CONTROLLER_ID_D1:
+		offset = mmD2VGA_CONTROL - mmD1VGA_CONTROL;
+		break;
+	case CONTROLLER_ID_D2:
+		offset = mmD3VGA_CONTROL - mmD1VGA_CONTROL;
+		break;
+	case CONTROLLER_ID_D3:
+		offset = mmD4VGA_CONTROL - mmD1VGA_CONTROL;
+		break;
+	case CONTROLLER_ID_D4:
+		offset = mmD5VGA_CONTROL - mmD1VGA_CONTROL;
+		break;
+	case CONTROLLER_ID_D5:
+		offset = mmD6VGA_CONTROL - mmD1VGA_CONTROL;
+		break;
+	default:
+		break;
+	}
+
+	value = dm_read_reg_soc15(tg->ctx, mmD1VGA_CONTROL, offset);
+
+	set_reg_field_value(value, 0, D1VGA_CONTROL, D1VGA_MODE_ENABLE);
+	set_reg_field_value(value, 0, D1VGA_CONTROL, D1VGA_TIMING_SELECT);
+	set_reg_field_value(
+			value, 0, D1VGA_CONTROL, D1VGA_SYNC_POLARITY_SELECT);
+	set_reg_field_value(value, 0, D1VGA_CONTROL, D1VGA_OVERSCAN_COLOR_EN);
+
+	dm_write_reg_soc15(tg->ctx, mmD1VGA_CONTROL, offset, value);
+}
+/* TODO: Should we move it to transform */
+/* Fully program CRTC timing in timing generator */
+void dce120_timing_generator_program_blanking(
+	struct timing_generator *tg,
+	const struct dc_crtc_timing *timing)
+{
+	uint32_t tmp1 = 0;
+	uint32_t tmp2 = 0;
+	uint32_t vsync_offset = timing->v_border_bottom +
+			timing->v_front_porch;
+	uint32_t v_sync_start = timing->v_addressable + vsync_offset;
+
+	uint32_t hsync_offset = timing->h_border_right +
+			timing->h_front_porch;
+	uint32_t h_sync_start = timing->h_addressable + hsync_offset;
+	struct dce110_timing_generator *tg110 = DCE110TG_FROM_TG(tg);
+
+	CRTC_REG_UPDATE(
+			CRTC0_CRTC_H_TOTAL,
+			CRTC_H_TOTAL,
+			timing->h_total - 1);
+
+	CRTC_REG_UPDATE(
+		CRTC0_CRTC_V_TOTAL,
+		CRTC_V_TOTAL,
+		timing->v_total - 1);
+
+	tmp1 = timing->h_total -
+			(h_sync_start + timing->h_border_left);
+	tmp2 = tmp1 + timing->h_addressable +
+			timing->h_border_left + timing->h_border_right;
+
+	CRTC_REG_UPDATE_2(
+			CRTC0_CRTC_H_BLANK_START_END,
+			CRTC_H_BLANK_END, tmp1,
+			CRTC_H_BLANK_START, tmp2);
+
+	tmp1 = timing->v_total - (v_sync_start + timing->v_border_top);
+	tmp2 = tmp1 + timing->v_addressable + timing->v_border_top +
+			timing->v_border_bottom;
+
+	CRTC_REG_UPDATE_2(
+		CRTC0_CRTC_V_BLANK_START_END,
+		CRTC_V_BLANK_END, tmp1,
+		CRTC_V_BLANK_START, tmp2);
+}
+
+/* TODO: Should we move it to opp? */
+/* Combine with below and move YUV/RGB color conversion to SW layer */
+void dce120_timing_generator_program_blank_color(
+	struct timing_generator *tg,
+	const struct tg_color *black_color)
+{
+	struct dce110_timing_generator *tg110 = DCE110TG_FROM_TG(tg);
+
+	CRTC_REG_UPDATE_3(
+		CRTC0_CRTC_BLACK_COLOR,
+		CRTC_BLACK_COLOR_B_CB, black_color->color_b_cb,
+		CRTC_BLACK_COLOR_G_Y, black_color->color_g_y,
+		CRTC_BLACK_COLOR_R_CR, black_color->color_r_cr);
+}
+/* Combine with above and move YUV/RGB color conversion to SW layer */
+void dce120_timing_generator_set_overscan_color_black(
+	struct timing_generator *tg,
+	const struct tg_color *color)
+{
+	struct dce110_timing_generator *tg110 = DCE110TG_FROM_TG(tg);
+	uint32_t value = 0;
+
+	CRTC_REG_SET_3(
+		CRTC0_CRTC_OVERSCAN_COLOR,
+		CRTC_OVERSCAN_COLOR_BLUE, color->color_b_cb,
+		CRTC_OVERSCAN_COLOR_GREEN, color->color_g_y,
+		CRTC_OVERSCAN_COLOR_RED, color->color_r_cr);
+
+	value = dm_read_reg_soc15(
+			tg->ctx,
+			mmCRTC0_CRTC_OVERSCAN_COLOR,
+			tg110->offsets.crtc);
+
+	dm_write_reg_soc15(
+			tg->ctx,
+			mmCRTC0_CRTC_BLACK_COLOR,
+			tg110->offsets.crtc,
+			value);
+
+	/* It is desirable to have a constant DAC output voltage during the
+	 * blank time that is higher than the 0-volt reference level that the
+	 * DAC outputs when the NBLANK signal
+	 * is asserted low, such as for output to an analog TV. */
+	dm_write_reg_soc15(
+		tg->ctx,
+		mmCRTC0_CRTC_BLANK_DATA_COLOR,
+		tg110->offsets.crtc,
+		value);
+
+	/* TODO: we have to program the EXT registers, and we need to know the
+	 * LB DATA format because it is used with more than 10 (i.e. 12) bits per color
+	 *
+	 * m_mmDxCRTC_OVERSCAN_COLOR_EXT
+	 * m_mmDxCRTC_BLACK_COLOR_EXT
+	 * m_mmDxCRTC_BLANK_DATA_COLOR_EXT
+	 */
+}
+
+void dce120_timing_generator_set_drr(
+	struct timing_generator *tg,
+	const struct drr_params *params)
+{
+	struct dce110_timing_generator *tg110 = DCE110TG_FROM_TG(tg);
+
+	if (params != NULL &&
+		params->vertical_total_max > 0 &&
+		params->vertical_total_min > 0) {
+
+		CRTC_REG_UPDATE(
+				CRTC0_CRTC_V_TOTAL_MIN,
+				CRTC_V_TOTAL_MIN, params->vertical_total_min - 1);
+		CRTC_REG_UPDATE(
+				CRTC0_CRTC_V_TOTAL_MAX,
+				CRTC_V_TOTAL_MAX, params->vertical_total_max - 1);
+		CRTC_REG_SET_N(CRTC0_CRTC_V_TOTAL_CONTROL, 6,
+				FD(CRTC0_CRTC_V_TOTAL_CONTROL__CRTC_V_TOTAL_MIN_SEL), 1,
+				FD(CRTC0_CRTC_V_TOTAL_CONTROL__CRTC_V_TOTAL_MAX_SEL), 1,
+				FD(CRTC0_CRTC_V_TOTAL_CONTROL__CRTC_FORCE_LOCK_ON_EVENT), 0,
+				FD(CRTC0_CRTC_V_TOTAL_CONTROL__CRTC_FORCE_LOCK_TO_MASTER_VSYNC), 0,
+				FD(CRTC0_CRTC_V_TOTAL_CONTROL__CRTC_SET_V_TOTAL_MIN_MASK_EN), 0,
+				FD(CRTC0_CRTC_V_TOTAL_CONTROL__CRTC_SET_V_TOTAL_MIN_MASK), 0);
+		CRTC_REG_UPDATE(
+				CRTC0_CRTC_STATIC_SCREEN_CONTROL,
+				CRTC_STATIC_SCREEN_EVENT_MASK,
+				0x180);
+
+	} else {
+		CRTC_REG_UPDATE(
+				CRTC0_CRTC_V_TOTAL_MIN,
+				CRTC_V_TOTAL_MIN, 0);
+		CRTC_REG_UPDATE(
+				CRTC0_CRTC_V_TOTAL_MAX,
+				CRTC_V_TOTAL_MAX, 0);
+		CRTC_REG_SET_N(CRTC0_CRTC_V_TOTAL_CONTROL, 5,
+				FD(CRTC0_CRTC_V_TOTAL_CONTROL__CRTC_V_TOTAL_MIN_SEL), 0,
+				FD(CRTC0_CRTC_V_TOTAL_CONTROL__CRTC_V_TOTAL_MAX_SEL), 0,
+				FD(CRTC0_CRTC_V_TOTAL_CONTROL__CRTC_FORCE_LOCK_ON_EVENT), 0,
+				FD(CRTC0_CRTC_V_TOTAL_CONTROL__CRTC_FORCE_LOCK_TO_MASTER_VSYNC), 0,
+				FD(CRTC0_CRTC_V_TOTAL_CONTROL__CRTC_SET_V_TOTAL_MIN_MASK), 0);
+		CRTC_REG_UPDATE(
+				CRTC0_CRTC_STATIC_SCREEN_CONTROL,
+				CRTC_STATIC_SCREEN_EVENT_MASK,
+				0);
+	}
+}
+
+uint32_t dce120_timing_generator_get_crtc_scanoutpos(
+	struct timing_generator *tg,
+	uint32_t *vbl,
+	uint32_t *position)
+{
+	struct dce110_timing_generator *tg110 = DCE110TG_FROM_TG(tg);
+
+	*vbl = dm_read_reg_soc15(
+			tg->ctx,
+			mmCRTC0_CRTC_V_BLANK_START_END,
+			tg110->offsets.crtc);
+
+	*position = dm_read_reg_soc15(
+				tg->ctx,
+				mmCRTC0_CRTC_STATUS_POSITION,
+				tg110->offsets.crtc);
+
+	return 0;
+}
+
+void dce120_timing_generator_enable_advanced_request(
+	struct timing_generator *tg,
+	bool enable,
+	const struct dc_crtc_timing *timing)
+{
+	struct dce110_timing_generator *tg110 = DCE110TG_FROM_TG(tg);
+	uint32_t v_sync_width_and_b_porch =
+				timing->v_total - timing->v_addressable -
+				timing->v_border_bottom - timing->v_front_porch;
+	uint32_t value = dm_read_reg_soc15(
+				tg->ctx,
+				mmCRTC0_CRTC_START_LINE_CONTROL,
+				tg110->offsets.crtc);
+
+	if (enable) {
+		set_reg_field_value(
+			value,
+			0,
+			CRTC0_CRTC_START_LINE_CONTROL,
+			CRTC_LEGACY_REQUESTOR_EN);
+	} else {
+		set_reg_field_value(
+			value,
+			1,
+			CRTC0_CRTC_START_LINE_CONTROL,
+			CRTC_LEGACY_REQUESTOR_EN);
+	}
+
+	/* Program the advanced line position according to the best case from a
+	 * data-fetching perspective, to hide MC latency and to prefill the
+	 * line buffer in VBlank (up to 10 lines, as the LB can store at most 10 lines)
+	 */
+	if (v_sync_width_and_b_porch > 10)
+		set_reg_field_value(
+			value,
+			10,
+			CRTC0_CRTC_START_LINE_CONTROL,
+			CRTC_ADVANCED_START_LINE_POSITION);
+	else
+		set_reg_field_value(
+			value,
+			v_sync_width_and_b_porch,
+			CRTC0_CRTC_START_LINE_CONTROL,
+			CRTC_ADVANCED_START_LINE_POSITION);
+
+	dm_write_reg_soc15(tg->ctx,
+			mmCRTC0_CRTC_START_LINE_CONTROL,
+			tg110->offsets.crtc,
+			value);
+}
+
+void dce120_tg_program_blank_color(struct timing_generator *tg,
+	const struct tg_color *black_color)
+{
+	struct dce110_timing_generator *tg110 = DCE110TG_FROM_TG(tg);
+	uint32_t value = 0;
+
+	CRTC_REG_UPDATE_3(
+		CRTC0_CRTC_BLACK_COLOR,
+		CRTC_BLACK_COLOR_B_CB, black_color->color_b_cb,
+		CRTC_BLACK_COLOR_G_Y, black_color->color_g_y,
+		CRTC_BLACK_COLOR_R_CR, black_color->color_r_cr);
+
+	value = dm_read_reg_soc15(
+				tg->ctx,
+				mmCRTC0_CRTC_BLACK_COLOR,
+				tg110->offsets.crtc);
+	dm_write_reg_soc15(
+		tg->ctx,
+		mmCRTC0_CRTC_BLANK_DATA_COLOR,
+		tg110->offsets.crtc,
+		value);
+}
+
+void dce120_tg_set_overscan_color(struct timing_generator *tg,
+	const struct tg_color *overscan_color)
+{
+	struct dce110_timing_generator *tg110 = DCE110TG_FROM_TG(tg);
+
+	CRTC_REG_SET_3(
+		CRTC0_CRTC_OVERSCAN_COLOR,
+		CRTC_OVERSCAN_COLOR_BLUE, overscan_color->color_b_cb,
+		CRTC_OVERSCAN_COLOR_GREEN, overscan_color->color_g_y,
+		CRTC_OVERSCAN_COLOR_RED, overscan_color->color_r_cr);
+}
+
+void dce120_tg_program_timing(struct timing_generator *tg,
+	const struct dc_crtc_timing *timing,
+	bool use_vbios)
+{
+	if (use_vbios)
+		dce110_timing_generator_program_timing_generator(tg, timing);
+	else
+		dce120_timing_generator_program_blanking(tg, timing);
+}
+
+bool dce120_tg_is_blanked(struct timing_generator *tg)
+{
+	struct dce110_timing_generator *tg110 = DCE110TG_FROM_TG(tg);
+	uint32_t value = dm_read_reg_soc15(
+			tg->ctx,
+			mmCRTC0_CRTC_BLANK_CONTROL,
+			tg110->offsets.crtc);
+
+	if (get_reg_field_value(
+			value,
+			CRTC0_CRTC_BLANK_CONTROL,
+			CRTC_BLANK_DATA_EN) == 1 &&
+	    get_reg_field_value(
+			value,
+			CRTC0_CRTC_BLANK_CONTROL,
+			CRTC_CURRENT_BLANK_STATE) == 1)
+		return true;
+
+	return false;
+}
+
+void dce120_tg_set_blank(struct timing_generator *tg,
+		bool enable_blanking)
+{
+	struct dce110_timing_generator *tg110 = DCE110TG_FROM_TG(tg);
+
+	CRTC_REG_SET(
+		CRTC0_CRTC_DOUBLE_BUFFER_CONTROL,
+		CRTC_BLANK_DATA_DOUBLE_BUFFER_EN, 0);
+
+	if (enable_blanking) {
+		CRTC_REG_SET(
+			CRTC0_CRTC_BLANK_CONTROL,
+			CRTC_BLANK_DATA_EN, 1);
+	} else {
+		dm_write_reg_soc15(
+			tg->ctx,
+			mmCRTC0_CRTC_BLANK_CONTROL,
+			tg110->offsets.crtc,
+			0);
+	}
+}
+
+void dce120_tg_wait_for_state(struct timing_generator *tg,
+	enum crtc_state state)
+{
+	switch (state) {
+	case CRTC_STATE_VBLANK:
+		dce120_timing_generator_wait_for_vblank(tg);
+		break;
+
+	case CRTC_STATE_VACTIVE:
+		dce120_timing_generator_wait_for_vactive(tg);
+		break;
+
+	default:
+		break;
+	}
+}
+
+void dce120_tg_set_colors(struct timing_generator *tg,
+	const struct tg_color *blank_color,
+	const struct tg_color *overscan_color)
+{
+	if (blank_color != NULL)
+		dce120_tg_program_blank_color(tg, blank_color);
+
+	if (overscan_color != NULL)
+		dce120_tg_set_overscan_color(tg, overscan_color);
+}
+
+static void dce120_timing_generator_set_static_screen_control(
+	struct timing_generator *tg,
+	uint32_t value)
+{
+	struct dce110_timing_generator *tg110 = DCE110TG_FROM_TG(tg);
+
+	CRTC_REG_UPDATE_2(CRTC0_CRTC_STATIC_SCREEN_CONTROL,
+			CRTC_STATIC_SCREEN_EVENT_MASK, value,
+			CRTC_STATIC_SCREEN_FRAME_COUNT, 2);
+}
+
+void dce120_timing_generator_set_test_pattern(
+	struct timing_generator *tg,
+	/* TODO: replace 'controller_dp_test_pattern' with 'test_pattern_mode'
+	 * because this is not DP-specific (DP-specific patterns probably
+	 * belong in the DP encoder) */
+	enum controller_dp_test_pattern test_pattern,
+	enum dc_color_depth color_depth)
+{
+	struct dc_context *ctx = tg->ctx;
+	uint32_t value;
+	struct dce110_timing_generator *tg110 = DCE110TG_FROM_TG(tg);
+	enum test_pattern_color_format bit_depth;
+	enum test_pattern_dyn_range dyn_range;
+	enum test_pattern_mode mode;
+	/* color ramp generator mixes 16-bits color */
+	uint32_t src_bpc = 16;
+	/* requested bpc */
+	uint32_t dst_bpc;
+	uint32_t index;
+	/* RGB values of the color bars.
+	 * Produce two RGB colors: RGB0 - white (all Fs)
+	 * and RGB1 - black (all 0s)
+	 * (three RGB components for two colors)
+	 */
+	uint16_t src_color[6] = {0xFFFF, 0xFFFF, 0xFFFF, 0x0000,
+						0x0000, 0x0000};
+	/* dest color (converted to the specified color format) */
+	uint16_t dst_color[6];
+	uint32_t inc_base;
+
+	/* translate to bit depth */
+	switch (color_depth) {
+	case COLOR_DEPTH_666:
+		bit_depth = TEST_PATTERN_COLOR_FORMAT_BPC_6;
+	break;
+	case COLOR_DEPTH_888:
+		bit_depth = TEST_PATTERN_COLOR_FORMAT_BPC_8;
+	break;
+	case COLOR_DEPTH_101010:
+		bit_depth = TEST_PATTERN_COLOR_FORMAT_BPC_10;
+	break;
+	case COLOR_DEPTH_121212:
+		bit_depth = TEST_PATTERN_COLOR_FORMAT_BPC_12;
+	break;
+	default:
+		bit_depth = TEST_PATTERN_COLOR_FORMAT_BPC_8;
+	break;
+	}
+
+	switch (test_pattern) {
+	case CONTROLLER_DP_TEST_PATTERN_COLORSQUARES:
+	case CONTROLLER_DP_TEST_PATTERN_COLORSQUARES_CEA:
+	{
+		dyn_range = (test_pattern ==
+				CONTROLLER_DP_TEST_PATTERN_COLORSQUARES_CEA ?
+				TEST_PATTERN_DYN_RANGE_CEA :
+				TEST_PATTERN_DYN_RANGE_VESA);
+		mode = TEST_PATTERN_MODE_COLORSQUARES_RGB;
+
+		CRTC_REG_UPDATE_2(CRTC0_CRTC_TEST_PATTERN_PARAMETERS,
+				CRTC_TEST_PATTERN_VRES, 6,
+				CRTC_TEST_PATTERN_HRES, 6);
+
+		CRTC_REG_UPDATE_4(CRTC0_CRTC_TEST_PATTERN_CONTROL,
+				CRTC_TEST_PATTERN_EN, 1,
+				CRTC_TEST_PATTERN_MODE, mode,
+				CRTC_TEST_PATTERN_DYNAMIC_RANGE, dyn_range,
+				CRTC_TEST_PATTERN_COLOR_FORMAT, bit_depth);
+	}
+	break;
+
+	case CONTROLLER_DP_TEST_PATTERN_VERTICALBARS:
+	case CONTROLLER_DP_TEST_PATTERN_HORIZONTALBARS:
+	{
+		mode = (test_pattern ==
+			CONTROLLER_DP_TEST_PATTERN_VERTICALBARS ?
+			TEST_PATTERN_MODE_VERTICALBARS :
+			TEST_PATTERN_MODE_HORIZONTALBARS);
+
+		switch (bit_depth) {
+		case TEST_PATTERN_COLOR_FORMAT_BPC_6:
+			dst_bpc = 6;
+		break;
+		case TEST_PATTERN_COLOR_FORMAT_BPC_8:
+			dst_bpc = 8;
+		break;
+		case TEST_PATTERN_COLOR_FORMAT_BPC_10:
+			dst_bpc = 10;
+		break;
+		default:
+			dst_bpc = 8;
+		break;
+		}
+
+		/* adjust color to the required colorFormat */
+		for (index = 0; index < 6; index++) {
+			/* dst = 2^dstBpc * src / 2^srcBpc = src >>
+			 * (srcBpc - dstBpc);
+			 */
+			dst_color[index] =
+				src_color[index] >> (src_bpc - dst_bpc);
+		/* CRTC_TEST_PATTERN_DATA has 16 bits,
+		 * lowest 6 are hardwired to ZERO
+		 * color bits should be left-aligned to the MSB:
+		 * XXXXXXXXXX000000 for 10 bit,
+		 * XXXXXXXX00000000 for 8 bit and XXXXXX0000000000 for 6 bit
+		 */
+			dst_color[index] <<= (16 - dst_bpc);
+		}
+
+		dm_write_reg_soc15(ctx, mmCRTC0_CRTC_TEST_PATTERN_PARAMETERS, tg110->offsets.crtc, 0);
+
+		/* We have to write the mask before data, similar to pipeline.
+		 * For example, for 8 bpc, if we want RGB0 to be magenta,
+		 * and RGB1 to be cyan,
+		 * we need to make 7 writes:
+		 * MASK   DATA
+		 * 000001 00000000 00000000                     set mask to R0
+		 * 000010 11111111 00000000     R0 255, 0xFF00, set mask to G0
+		 * 000100 00000000 00000000     G0 0,   0x0000, set mask to B0
+		 * 001000 11111111 00000000     B0 255, 0xFF00, set mask to R1
+		 * 010000 00000000 00000000     R1 0,   0x0000, set mask to G1
+		 * 100000 11111111 00000000     G1 255, 0xFF00, set mask to B1
+		 * 100000 11111111 00000000     B1 255, 0xFF00
+		 *
+		 * we will make a loop of 6 iterations in which we prepare the
+		 * mask, then write, then prepare the color for the next write.
+		 * The first iteration writes the mask only,
+		 * but in each subsequent iteration the color prepared in the
+		 * previous iteration is written within the new mask;
+		 * the last component will be written separately,
+		 * since the mask does not change between the 6th and 7th write
+		 * and its color will have been prepared by the last iteration
+		 */
+
+		/* write color, color values mask in CRTC_TEST_PATTERN_MASK
+		 * is B1, G1, R1, B0, G0, R0
+		 */
+		value = 0;
+		for (index = 0; index < 6; index++) {
+			/* prepare color mask, first write PATTERN_DATA
+			 * will have all zeros
+			 */
+			set_reg_field_value(
+				value,
+				(1 << index),
+				CRTC0_CRTC_TEST_PATTERN_COLOR,
+				CRTC_TEST_PATTERN_MASK);
+			/* write color component */
+			dm_write_reg_soc15(ctx, mmCRTC0_CRTC_TEST_PATTERN_COLOR, tg110->offsets.crtc, value);
+			/* prepare next color component,
+			 * will be written in the next iteration
+			 */
+			set_reg_field_value(
+				value,
+				dst_color[index],
+				CRTC0_CRTC_TEST_PATTERN_COLOR,
+				CRTC_TEST_PATTERN_DATA);
+		}
+		/* write last color component,
+		 * it's been already prepared in the loop
+		 */
+		dm_write_reg_soc15(ctx, mmCRTC0_CRTC_TEST_PATTERN_COLOR, tg110->offsets.crtc, value);
+
+		/* enable test pattern */
+		CRTC_REG_UPDATE_4(CRTC0_CRTC_TEST_PATTERN_CONTROL,
+				CRTC_TEST_PATTERN_EN, 1,
+				CRTC_TEST_PATTERN_MODE, mode,
+				CRTC_TEST_PATTERN_DYNAMIC_RANGE, 0,
+				CRTC_TEST_PATTERN_COLOR_FORMAT, bit_depth);
+	}
+	break;
+
+	case CONTROLLER_DP_TEST_PATTERN_COLORRAMP:
+	{
+		mode = (bit_depth ==
+			TEST_PATTERN_COLOR_FORMAT_BPC_10 ?
+			TEST_PATTERN_MODE_DUALRAMP_RGB :
+			TEST_PATTERN_MODE_SINGLERAMP_RGB);
+
+		switch (bit_depth) {
+		case TEST_PATTERN_COLOR_FORMAT_BPC_6:
+			dst_bpc = 6;
+		break;
+		case TEST_PATTERN_COLOR_FORMAT_BPC_8:
+			dst_bpc = 8;
+		break;
+		case TEST_PATTERN_COLOR_FORMAT_BPC_10:
+			dst_bpc = 10;
+		break;
+		default:
+			dst_bpc = 8;
+		break;
+		}
+
+		/* increment for the first ramp, for one color gradation:
+		 * 1 gradation of 6-bit color corresponds to 2^10
+		 * gradations of 16-bit color
+		 */
+		inc_base = (src_bpc - dst_bpc);
+
+		switch (bit_depth) {
+		case TEST_PATTERN_COLOR_FORMAT_BPC_6:
+		{
+			CRTC_REG_UPDATE_5(CRTC0_CRTC_TEST_PATTERN_PARAMETERS,
+					CRTC_TEST_PATTERN_INC0, inc_base,
+					CRTC_TEST_PATTERN_INC1, 0,
+					CRTC_TEST_PATTERN_HRES, 6,
+					CRTC_TEST_PATTERN_VRES, 6,
+					CRTC_TEST_PATTERN_RAMP0_OFFSET, 0);
+		}
+		break;
+		case TEST_PATTERN_COLOR_FORMAT_BPC_8:
+		{
+			CRTC_REG_UPDATE_5(CRTC0_CRTC_TEST_PATTERN_PARAMETERS,
+					CRTC_TEST_PATTERN_INC0, inc_base,
+					CRTC_TEST_PATTERN_INC1, 0,
+					CRTC_TEST_PATTERN_HRES, 8,
+					CRTC_TEST_PATTERN_VRES, 6,
+					CRTC_TEST_PATTERN_RAMP0_OFFSET, 0);
+		}
+		break;
+		case TEST_PATTERN_COLOR_FORMAT_BPC_10:
+		{
+			CRTC_REG_UPDATE_5(CRTC0_CRTC_TEST_PATTERN_PARAMETERS,
+					CRTC_TEST_PATTERN_INC0, inc_base,
+					CRTC_TEST_PATTERN_INC1, inc_base + 2,
+					CRTC_TEST_PATTERN_HRES, 8,
+					CRTC_TEST_PATTERN_VRES, 5,
+					CRTC_TEST_PATTERN_RAMP0_OFFSET, 384 << 6);
+		}
+		break;
+		default:
+		break;
+		}
+
+		dm_write_reg_soc15(ctx, mmCRTC0_CRTC_TEST_PATTERN_COLOR, tg110->offsets.crtc, 0);
+
+		/* enable test pattern */
+		dm_write_reg_soc15(ctx, mmCRTC0_CRTC_TEST_PATTERN_CONTROL, tg110->offsets.crtc, 0);
+
+		CRTC_REG_UPDATE_4(CRTC0_CRTC_TEST_PATTERN_CONTROL,
+				CRTC_TEST_PATTERN_EN, 1,
+				CRTC_TEST_PATTERN_MODE, mode,
+				CRTC_TEST_PATTERN_DYNAMIC_RANGE, 0,
+				CRTC_TEST_PATTERN_COLOR_FORMAT, bit_depth);
+	}
+	break;
+	case CONTROLLER_DP_TEST_PATTERN_VIDEOMODE:
+	{
+		value = 0;
+		dm_write_reg_soc15(ctx, mmCRTC0_CRTC_TEST_PATTERN_CONTROL, tg110->offsets.crtc,  value);
+		dm_write_reg_soc15(ctx, mmCRTC0_CRTC_TEST_PATTERN_COLOR, tg110->offsets.crtc, value);
+		dm_write_reg_soc15(ctx, mmCRTC0_CRTC_TEST_PATTERN_PARAMETERS, tg110->offsets.crtc, value);
+	}
+	break;
+	default:
+	break;
+	}
+}
+
+static struct timing_generator_funcs dce120_tg_funcs = {
+		.validate_timing = dce120_tg_validate_timing,
+		.program_timing = dce120_tg_program_timing,
+		.enable_crtc = dce120_timing_generator_enable_crtc,
+		.disable_crtc = dce110_timing_generator_disable_crtc,
		/* used by enable_timing_synchronization. Not needed for FPGA */
+		.is_counter_moving = dce110_timing_generator_is_counter_moving,
		/* never called */
+		.get_position = dce120_timing_generator_get_crtc_positions,
+		.get_frame_count = dce120_timing_generator_get_vblank_counter,
+		.get_scanoutpos = dce120_timing_generator_get_crtc_scanoutpos,
+		.set_early_control = dce120_timing_generator_set_early_control,
		/* used by enable_timing_synchronization. Not needed for FPGA */
+		.wait_for_state = dce120_tg_wait_for_state,
+		.set_blank = dce120_tg_set_blank,
+		.is_blanked = dce120_tg_is_blanked,
		/* never called */
+		.set_colors = dce120_tg_set_colors,
+		.set_overscan_blank_color = dce120_timing_generator_set_overscan_color_black,
+		.set_blank_color = dce120_timing_generator_program_blank_color,
+		.disable_vga = dce120_timing_generator_disable_vga,
+		.did_triggered_reset_occur = dce120_timing_generator_did_triggered_reset_occur,
+		.setup_global_swap_lock = dce120_timing_generator_setup_global_swap_lock,
+		.enable_reset_trigger = dce120_timing_generator_enable_reset_trigger,
+		.disable_reset_trigger = dce120_timing_generator_disable_reset_trigger,
+		.tear_down_global_swap_lock = dce120_timing_generator_tear_down_global_swap_lock,
+		.enable_advanced_request = dce120_timing_generator_enable_advanced_request,
+		.set_drr = dce120_timing_generator_set_drr,
+		.set_static_screen_control = dce120_timing_generator_set_static_screen_control,
+		.set_test_pattern = dce120_timing_generator_set_test_pattern
+};
+
+
+bool dce120_timing_generator_construct(
+	struct dce110_timing_generator *tg110,
+	struct dc_context *ctx,
+	uint32_t instance,
+	const struct dce110_timing_generator_offsets *offsets)
+{
+	if (!tg110)
+		return false;
+
+	tg110->controller_id = CONTROLLER_ID_D0 + instance;
+	tg110->base.inst = instance;
+
+	tg110->offsets = *offsets;
+
+	tg110->base.funcs = &dce120_tg_funcs;
+
+	tg110->base.ctx = ctx;
+	tg110->base.bp = ctx->dc_bios;
+
+	tg110->max_h_total = CRTC0_CRTC_H_TOTAL__CRTC_H_TOTAL_MASK + 1;
+	tg110->max_v_total = CRTC0_CRTC_V_TOTAL__CRTC_V_TOTAL_MASK + 1;
+
+	/* CRTC requires a minimum HBLANK of 32 pixels
+	 * and a minimum HSYNC of 8 pixels */
+	tg110->min_h_blank = 32;
+	/*DCE12_CRTC_Block_ARch.doc*/
+	tg110->min_h_front_porch = 0;
+	tg110->min_h_back_porch = 0;
+
+	tg110->min_h_sync_width = 8;
+	tg110->min_v_sync_width = 1;
+	tg110->min_v_blank = 3;
+
+	return true;
+}
diff --git a/drivers/gpu/drm/amd/display/dc/dce120/dce120_timing_generator.h b/drivers/gpu/drm/amd/display/dc/dce120/dce120_timing_generator.h
new file mode 100644
index 0000000..243c0a3
--- /dev/null
+++ b/drivers/gpu/drm/amd/display/dc/dce120/dce120_timing_generator.h
@@ -0,0 +1,41 @@
+/*
+ * Copyright 2012-15 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ *  and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: AMD
+ *
+ */
+
+#ifndef __DC_TIMING_GENERATOR_DCE120_H__
+#define __DC_TIMING_GENERATOR_DCE120_H__
+
+#include "timing_generator.h"
+#include "../include/grph_object_id.h"
+#include "../include/hw_sequencer_types.h"
+#include "dce110/dce110_timing_generator.h"
+
+
+bool dce120_timing_generator_construct(
+	struct dce110_timing_generator *tg110,
+	struct dc_context *ctx,
+	uint32_t instance,
+	const struct dce110_timing_generator_offsets *offsets);
+
+#endif /* __DC_TIMING_GENERATOR_DCE120_H__ */
-- 
2.5.5

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 101+ messages in thread

* [PATCH 068/100] drm/amd/display: Enable DCE12 support
       [not found] ` <1490041835-11255-1-git-send-email-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (51 preceding siblings ...)
  2017-03-20 20:30   ` [PATCH 067/100] drm/amd/display: Add DCE12 core support Alex Deucher
@ 2017-03-20 20:30   ` Alex Deucher
  2017-03-20 20:30   ` [PATCH 069/100] drm/amd/display: need to handle DCE_Info table ver4.2 Alex Deucher
                     ` (32 subsequent siblings)
  85 siblings, 0 replies; 101+ messages in thread
From: Alex Deucher @ 2017-03-20 20:30 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Alex Deucher, Harry Wentland

From: Harry Wentland <harry.wentland@amd.com>

This wires DCE12 support into DC and enables it.

Signed-off-by: Harry Wentland <harry.wentland@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c         |   5 +-
 drivers/gpu/drm/amd/display/Kconfig                |   7 +
 drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c  | 145 +++++++++++++++++++-
 .../drm/amd/display/amdgpu_dm/amdgpu_dm_services.c |  10 ++
 .../drm/amd/display/amdgpu_dm/amdgpu_dm_types.c    |  20 ++-
 drivers/gpu/drm/amd/display/dc/Makefile            |   4 +
 drivers/gpu/drm/amd/display/dc/bios/Makefile       |   8 ++
 .../amd/display/dc/bios/bios_parser_interface.c    |  14 ++
 drivers/gpu/drm/amd/display/dc/calcs/dce_calcs.c   | 117 ++++++++++++++++
 drivers/gpu/drm/amd/display/dc/core/dc.c           |  29 ++++
 drivers/gpu/drm/amd/display/dc/core/dc_debug.c     |  11 ++
 drivers/gpu/drm/amd/display/dc/core/dc_link.c      |  19 +++
 drivers/gpu/drm/amd/display/dc/core/dc_resource.c  |  14 ++
 drivers/gpu/drm/amd/display/dc/dc.h                |  27 ++++
 drivers/gpu/drm/amd/display/dc/dc_hw_types.h       |  46 +++++++
 .../gpu/drm/amd/display/dc/dce/dce_clock_source.c  |   6 +
 drivers/gpu/drm/amd/display/dc/dce/dce_clocks.c    | 149 +++++++++++++++++++++
 drivers/gpu/drm/amd/display/dc/dce/dce_clocks.h    |  20 +++
 drivers/gpu/drm/amd/display/dc/dce/dce_hwseq.h     |   8 ++
 .../gpu/drm/amd/display/dc/dce/dce_link_encoder.h  |  14 ++
 drivers/gpu/drm/amd/display/dc/dce/dce_mem_input.c |  35 +++++
 drivers/gpu/drm/amd/display/dc/dce/dce_mem_input.h |  34 +++++
 drivers/gpu/drm/amd/display/dc/dce/dce_opp.h       |  72 ++++++++++
 .../drm/amd/display/dc/dce/dce_stream_encoder.h    | 100 ++++++++++++++
 drivers/gpu/drm/amd/display/dc/dce/dce_transform.h |  68 ++++++++++
 .../amd/display/dc/dce110/dce110_hw_sequencer.c    |  53 +++++++-
 .../drm/amd/display/dc/dce110/dce110_mem_input.c   |   3 +
 .../display/dc/dce110/dce110_timing_generator.h    |   3 +
 drivers/gpu/drm/amd/display/dc/dce120/Makefile     |  12 ++
 .../gpu/drm/amd/display/dc/dce80/dce80_mem_input.c |   3 +
 drivers/gpu/drm/amd/display/dc/dm_services.h       |  89 ++++++++++++
 drivers/gpu/drm/amd/display/dc/dm_services_types.h |  27 ++++
 drivers/gpu/drm/amd/display/dc/gpio/Makefile       |  11 ++
 drivers/gpu/drm/amd/display/dc/gpio/hw_factory.c   |   9 ++
 drivers/gpu/drm/amd/display/dc/gpio/hw_translate.c |   9 +-
 drivers/gpu/drm/amd/display/dc/i2caux/Makefile     |  11 ++
 drivers/gpu/drm/amd/display/dc/i2caux/i2caux.c     |   8 ++
 .../gpu/drm/amd/display/dc/inc/bandwidth_calcs.h   |   3 +
 .../gpu/drm/amd/display/dc/inc/hw/display_clock.h  |  23 ++++
 drivers/gpu/drm/amd/display/dc/inc/hw/mem_input.h  |   4 +
 drivers/gpu/drm/amd/display/dc/irq/Makefile        |  12 ++
 drivers/gpu/drm/amd/display/dc/irq/irq_service.c   |   3 +
 drivers/gpu/drm/amd/display/include/dal_asic_id.h  |   4 +
 drivers/gpu/drm/amd/display/include/dal_types.h    |   3 +
 44 files changed, 1262 insertions(+), 10 deletions(-)
 create mode 100644 drivers/gpu/drm/amd/display/dc/dce120/Makefile

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index 82e42ef..8e64437 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -1777,15 +1777,16 @@ bool amdgpu_device_asic_has_dc_support(enum amd_asic_type asic_type)
 #if defined(CONFIG_DRM_AMD_DC)
 	case CHIP_BONAIRE:
 	case CHIP_HAWAII:
-		return amdgpu_dc != 0;
 	case CHIP_CARRIZO:
 	case CHIP_STONEY:
 	case CHIP_POLARIS11:
 	case CHIP_POLARIS10:
 	case CHIP_POLARIS12:
-		return amdgpu_dc != 0;
 	case CHIP_TONGA:
 	case CHIP_FIJI:
+#if defined(CONFIG_DRM_AMD_DC_DCE12_0)
+	case CHIP_VEGA10:
+#endif
 		return amdgpu_dc != 0;
 #endif
 	default:
diff --git a/drivers/gpu/drm/amd/display/Kconfig b/drivers/gpu/drm/amd/display/Kconfig
index f652cc3..40d6386 100644
--- a/drivers/gpu/drm/amd/display/Kconfig
+++ b/drivers/gpu/drm/amd/display/Kconfig
@@ -9,6 +9,13 @@ config DRM_AMD_DC
 
           Will be deprecated when the DC component is upstream.
 
+config DRM_AMD_DC_DCE12_0
+        bool "Vega10 family"
+        depends on DRM_AMD_DC
+        help
+         Choose this option if you want display engine
+         support for the Vega (VG) family of ASICs.
+
 config DEBUG_KERNEL_DC
         bool "Enable kgdb break in DC"
         depends on DRM_AMD_DC
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index da12e23..b570a18 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -894,6 +894,10 @@ static int dce110_register_irq_handlers(struct amdgpu_device *adev)
 	struct dc_interrupt_params int_params = {0};
 	int r;
 	int i;
+	unsigned client_id = AMDGPU_IH_CLIENTID_LEGACY;
+
+	if (adev->asic_type == CHIP_VEGA10)
+		client_id = AMDGPU_IH_CLIENTID_DCE;
 
 	int_params.requested_polarity = INTERRUPT_POLARITY_DEFAULT;
 	int_params.current_polarity = INTERRUPT_POLARITY_DEFAULT;
@@ -910,7 +914,7 @@ static int dce110_register_irq_handlers(struct amdgpu_device *adev)
 
 	/* Use VBLANK interrupt */
 	for (i = 1; i <= adev->mode_info.num_crtc; i++) {
-		r = amdgpu_irq_add_id(adev, AMDGPU_IH_CLIENTID_LEGACY, i, &adev->crtc_irq);
+		r = amdgpu_irq_add_id(adev, client_id, i, &adev->crtc_irq);
 
 		if (r) {
 			DRM_ERROR("Failed to add crtc irq id!\n");
@@ -933,7 +937,7 @@ static int dce110_register_irq_handlers(struct amdgpu_device *adev)
 	/* Use GRPH_PFLIP interrupt */
 	for (i = VISLANDS30_IV_SRCID_D1_GRPH_PFLIP;
 			i <= VISLANDS30_IV_SRCID_D6_GRPH_PFLIP; i += 2) {
-		r = amdgpu_irq_add_id(adev, AMDGPU_IH_CLIENTID_LEGACY, i, &adev->pageflip_irq);
+		r = amdgpu_irq_add_id(adev, client_id, i, &adev->pageflip_irq);
 		if (r) {
 			DRM_ERROR("Failed to add page flip irq id!\n");
 			return r;
@@ -954,8 +958,8 @@ static int dce110_register_irq_handlers(struct amdgpu_device *adev)
 	}
 
 	/* HPD */
-	r = amdgpu_irq_add_id(adev, AMDGPU_IH_CLIENTID_LEGACY, VISLANDS30_IV_SRCID_HOTPLUG_DETECT_A,
-			&adev->hpd_irq);
+	r = amdgpu_irq_add_id(adev, client_id,
+			VISLANDS30_IV_SRCID_HOTPLUG_DETECT_A, &adev->hpd_irq);
 	if (r) {
 		DRM_ERROR("Failed to add hpd irq id!\n");
 		return r;
@@ -1125,6 +1129,9 @@ int amdgpu_dm_initialize_drm_device(struct amdgpu_device *adev)
 	case CHIP_POLARIS11:
 	case CHIP_POLARIS10:
 	case CHIP_POLARIS12:
+#if defined(CONFIG_DRM_AMD_DC_DCE12_0)
+	case CHIP_VEGA10:
+#endif
 		if (dce110_register_irq_handlers(dm->adev)) {
 			DRM_ERROR("DM: Failed to initialize IRQ\n");
 			return -1;
@@ -1303,6 +1310,101 @@ static int amdgpu_notify_freesync(struct drm_device *dev, void *data,
 	return r;
 }
 
+#if  defined(CONFIG_DRM_AMD_DC_DCE12_0)
+void dce_v12_0_stop_mc_access(struct amdgpu_device *adev,
+			      struct amdgpu_mode_mc_save *save)
+{
+#if 0
+	u32 crtc_enabled, tmp;
+	int i;
+
+	save->vga_render_control = RREG32(mmVGA_RENDER_CONTROL);
+	save->vga_hdp_control = RREG32(mmVGA_HDP_CONTROL);
+
+	/* disable VGA render */
+	tmp = RREG32(mmVGA_RENDER_CONTROL);
+	tmp = REG_SET_FIELD(tmp, VGA_RENDER_CONTROL, VGA_VSTATUS_CNTL, 0);
+	WREG32(mmVGA_RENDER_CONTROL, tmp);
+
+	/* blank the display controllers */
+	for (i = 0; i < adev->mode_info.num_crtc; i++) {
+		crtc_enabled = REG_GET_FIELD(RREG32(mmCRTC_CONTROL + crtc_offsets[i]),
+					     CRTC_CONTROL, CRTC_MASTER_EN);
+		if (crtc_enabled) {
+			save->crtc_enabled[i] = true;
+			tmp = RREG32(mmCRTC_BLANK_CONTROL + crtc_offsets[i]);
+			if (REG_GET_FIELD(tmp, CRTC_BLANK_CONTROL, CRTC_BLANK_DATA_EN) == 0) {
+				/* correct only for RGB; black is 0 */
+				WREG32(mmCRTC_BLANK_DATA_COLOR + crtc_offsets[i], 0);
+				tmp = REG_SET_FIELD(tmp, CRTC_BLANK_CONTROL, CRTC_BLANK_DATA_EN, 1);
+				WREG32(mmCRTC_BLANK_CONTROL + crtc_offsets[i], tmp);
+			}
+		} else {
+			save->crtc_enabled[i] = false;
+		}
+	}
+#endif
+}
+
+void dce_v12_0_resume_mc_access(struct amdgpu_device *adev,
+				struct amdgpu_mode_mc_save *save)
+{
+#if 0
+	u32 tmp;
+	int i;
+
+	/* update crtc base addresses */
+	for (i = 0; i < adev->mode_info.num_crtc; i++) {
+		WREG32(mmGRPH_PRIMARY_SURFACE_ADDRESS_HIGH + crtc_offsets[i],
+		       upper_32_bits(adev->mc.vram_start));
+		WREG32(mmGRPH_PRIMARY_SURFACE_ADDRESS + crtc_offsets[i],
+		       (u32)adev->mc.vram_start);
+
+		if (save->crtc_enabled[i]) {
+			tmp = RREG32(mmCRTC_BLANK_CONTROL + crtc_offsets[i]);
+			tmp = REG_SET_FIELD(tmp, CRTC_BLANK_CONTROL, CRTC_BLANK_DATA_EN, 0);
+			WREG32(mmCRTC_BLANK_CONTROL + crtc_offsets[i], tmp);
+		}
+	}
+
+	WREG32(mmVGA_MEMORY_BASE_ADDRESS_HIGH, upper_32_bits(adev->mc.vram_start));
+	WREG32(mmVGA_MEMORY_BASE_ADDRESS, lower_32_bits(adev->mc.vram_start));
+
+	/* Unlock vga access */
+	WREG32(mmVGA_HDP_CONTROL, save->vga_hdp_control);
+	mdelay(1);
+	WREG32(mmVGA_RENDER_CONTROL, save->vga_render_control);
+#endif
+}
+
+void dce_v12_0_set_vga_render_state(struct amdgpu_device *adev,
+				    bool render)
+{
+	u32 tmp;
+
+	/* Lock out access through the VGA aperture */
+	tmp = RREG32(0xCA);
+	if (render) {
+		tmp = tmp & 0xFFFFFFEF;
+		WREG32(0xCA, tmp);
+	} else {
+		tmp |= 0x10;
+		WREG32(0xCA, tmp);
+	}
+
+	/* disable VGA render */
+	tmp = RREG32(0xC0);
+	if (render) {
+		tmp |=  0x10000;
+		WREG32(0xC0, tmp);
+	} else {
+		tmp = tmp & 0xFFFCFFFF;
+		WREG32(0xC0, tmp);
+	}
+}
+
+#endif
+
 #ifdef CONFIG_DRM_AMDGPU_CIK
 static const struct amdgpu_display_funcs dm_dce_v8_0_display_funcs = {
 	.set_vga_render_state = dce_v8_0_set_vga_render_state,
@@ -1373,6 +1475,32 @@ static const struct amdgpu_display_funcs dm_dce_v11_0_display_funcs = {
 
 };
 
+#ifdef CONFIG_DRM_AMD_DC_DCE12_0
+static const struct amdgpu_display_funcs dm_dce_v12_0_display_funcs = {
+	.set_vga_render_state = dce_v12_0_set_vga_render_state,
+	.bandwidth_update = dm_bandwidth_update, /* called unconditionally */
+	.vblank_get_counter = dm_vblank_get_counter,/* called unconditionally */
+	.vblank_wait = NULL,
+	.backlight_set_level =
+		dm_set_backlight_level,/* called unconditionally */
+	.backlight_get_level =
+		dm_get_backlight_level,/* called unconditionally */
+	.hpd_sense = NULL,/* called unconditionally */
+	.hpd_set_polarity = NULL, /* called unconditionally */
+	.hpd_get_gpio_reg = NULL, /* VBIOS parsing. DAL does it. */
+	.page_flip = dm_page_flip, /* called unconditionally */
+	.page_flip_get_scanoutpos =
+		dm_crtc_get_scanoutpos,/* called unconditionally */
+	.add_encoder = NULL, /* VBIOS parsing. DAL does it. */
+	.add_connector = NULL, /* VBIOS parsing. DAL does it. */
+	.stop_mc_access = dce_v12_0_stop_mc_access, /* called unconditionally */
+	.resume_mc_access = dce_v12_0_resume_mc_access, /* called unconditionally */
+	.notify_freesync = amdgpu_notify_freesync,
+
+};
+#endif
+
+
 #if defined(CONFIG_DEBUG_KERNEL_DC)
 
 static ssize_t s3_debug_store(
@@ -1459,6 +1587,15 @@ static int dm_early_init(void *handle)
 		if (adev->mode_info.funcs == NULL)
 			adev->mode_info.funcs = &dm_dce_v11_0_display_funcs;
 		break;
+#if defined(CONFIG_DRM_AMD_DC_DCE12_0)
+	case CHIP_VEGA10:
+		adev->mode_info.num_crtc = 6;
+		adev->mode_info.num_hpd = 6;
+		adev->mode_info.num_dig = 6;
+		if (adev->mode_info.funcs == NULL)
+			adev->mode_info.funcs = &dm_dce_v12_0_display_funcs;
+		break;
+#endif
 	default:
 		DRM_ERROR("Usupported ASIC type: 0x%X\n", adev->asic_type);
 		return -EINVAL;
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_services.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_services.c
index 1ddc56c..df53092 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_services.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_services.c
@@ -402,6 +402,16 @@ bool dm_pp_notify_wm_clock_changes(
 	return false;
 }
 
+#if defined(CONFIG_DRM_AMD_DC_DCE12_0)
+bool dm_pp_notify_wm_clock_changes_soc15(
+	const struct dc_context *ctx,
+	struct dm_pp_wm_sets_with_clock_ranges_soc15 *wm_with_clock_ranges)
+{
+	/* TODO: to be implemented */
+	return false;
+}
+#endif
+
 bool dm_pp_apply_power_level_change_request(
 	const struct dc_context *ctx,
 	struct dm_pp_power_level_change_request *level_change_req)
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_types.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_types.c
index d4f8f81..ede8955 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_types.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_types.c
@@ -493,7 +493,7 @@ static void fill_plane_attributes_from_fb(
 
 	memset(&surface->tiling_info, 0, sizeof(surface->tiling_info));
 
-	/* Fill GFX8 params */
+	/* Fill GFX params */
 	if (AMDGPU_TILING_GET(tiling_flags, ARRAY_MODE) == DC_ARRAY_2D_TILED_THIN1)
 	{
 		unsigned bankw, bankh, mtaspect, tile_split, num_banks;
@@ -522,6 +522,24 @@ static void fill_plane_attributes_from_fb(
 	surface->tiling_info.gfx8.pipe_config =
 			AMDGPU_TILING_GET(tiling_flags, PIPE_CONFIG);
 
+	if (adev->asic_type == CHIP_VEGA10) {
+		/* Fill GFX9 params */
+		surface->tiling_info.gfx9.num_pipes =
+			adev->gfx.config.gb_addr_config_fields.num_pipes;
+		surface->tiling_info.gfx9.num_banks =
+			adev->gfx.config.gb_addr_config_fields.num_banks;
+		surface->tiling_info.gfx9.pipe_interleave =
+			adev->gfx.config.gb_addr_config_fields.pipe_interleave_size;
+		surface->tiling_info.gfx9.num_shader_engines =
+			adev->gfx.config.gb_addr_config_fields.num_se;
+		surface->tiling_info.gfx9.max_compressed_frags =
+			adev->gfx.config.gb_addr_config_fields.max_compress_frags;
+		surface->tiling_info.gfx9.swizzle =
+			AMDGPU_TILING_GET(tiling_flags, SWIZZLE_MODE);
+		surface->tiling_info.gfx9.shaderEnable = 1;
+	}
+
+
 	surface->plane_size.grph.surface_size.x = 0;
 	surface->plane_size.grph.surface_size.y = 0;
 	surface->plane_size.grph.surface_size.width = fb->width;
diff --git a/drivers/gpu/drm/amd/display/dc/Makefile b/drivers/gpu/drm/amd/display/dc/Makefile
index 2df163b..a580cab 100644
--- a/drivers/gpu/drm/amd/display/dc/Makefile
+++ b/drivers/gpu/drm/amd/display/dc/Makefile
@@ -4,6 +4,10 @@
 
 DC_LIBS = basics bios calcs dce gpio i2caux irq virtual
 
+ifdef CONFIG_DRM_AMD_DC_DCE12_0
+DC_LIBS += dce120
+endif
+
 DC_LIBS += dce112
 DC_LIBS += dce110
 DC_LIBS += dce100
diff --git a/drivers/gpu/drm/amd/display/dc/bios/Makefile b/drivers/gpu/drm/amd/display/dc/bios/Makefile
index 876614d..7702484 100644
--- a/drivers/gpu/drm/amd/display/dc/bios/Makefile
+++ b/drivers/gpu/drm/amd/display/dc/bios/Makefile
@@ -4,6 +4,10 @@
 
 BIOS = bios_parser.o bios_parser_interface.o  bios_parser_helper.o command_table.o command_table_helper.o
 
+ifdef CONFIG_DRM_AMD_DC_DCE12_0
+BIOS += command_table2.o command_table_helper2.o bios_parser2.o
+endif
+
 AMD_DAL_BIOS = $(addprefix $(AMDDALPATH)/dc/bios/,$(BIOS))
 
 AMD_DISPLAY_FILES += $(AMD_DAL_BIOS)
@@ -21,3 +25,7 @@ AMD_DISPLAY_FILES += $(AMDDALPATH)/dc/bios/dce80/command_table_helper_dce80.o
 AMD_DISPLAY_FILES += $(AMDDALPATH)/dc/bios/dce110/command_table_helper_dce110.o
 
 AMD_DISPLAY_FILES += $(AMDDALPATH)/dc/bios/dce112/command_table_helper_dce112.o
+
+ifdef CONFIG_DRM_AMD_DC_DCE12_0
+AMD_DISPLAY_FILES += $(AMDDALPATH)/dc/bios/dce112/command_table_helper2_dce112.o
+endif
diff --git a/drivers/gpu/drm/amd/display/dc/bios/bios_parser_interface.c b/drivers/gpu/drm/amd/display/dc/bios/bios_parser_interface.c
index 42272c3..7fe2a79 100644
--- a/drivers/gpu/drm/amd/display/dc/bios/bios_parser_interface.c
+++ b/drivers/gpu/drm/amd/display/dc/bios/bios_parser_interface.c
@@ -29,6 +29,10 @@
 #include "bios_parser_interface.h"
 #include "bios_parser.h"
 
+#if defined(CONFIG_DRM_AMD_DC_DCE12_0)
+#include "bios_parser2.h"
+#endif
+
 
 struct dc_bios *dal_bios_parser_create(
 	struct bp_init_data *init,
@@ -36,7 +40,17 @@ struct dc_bios *dal_bios_parser_create(
 {
 	struct dc_bios *bios = NULL;
 
+#if defined(CONFIG_DRM_AMD_DC_DCE12_0)
+	bios = firmware_parser_create(init, dce_version);
+
+	if (bios == NULL)
+		/* TODO: remove dce_version from bios_parser. It cannot be
+		 * removed today: the dal to bp enum translation is DCE specific
+		 */
+		bios = bios_parser_create(init, dce_version);
+#else
 	bios = bios_parser_create(init, dce_version);
+#endif
 
 	return bios;
 }
diff --git a/drivers/gpu/drm/amd/display/dc/calcs/dce_calcs.c b/drivers/gpu/drm/amd/display/dc/calcs/dce_calcs.c
index ab8d1e9..aa98762 100644
--- a/drivers/gpu/drm/amd/display/dc/calcs/dce_calcs.c
+++ b/drivers/gpu/drm/amd/display/dc/calcs/dce_calcs.c
@@ -50,6 +50,11 @@ static enum bw_calcs_version bw_calcs_version_from_asic_id(struct hw_asic_id asi
 			return BW_CALCS_VERSION_POLARIS11;
 		return BW_CALCS_VERSION_INVALID;
 
+#if defined(CONFIG_DRM_AMD_DC_DCE12_0)
+	case FAMILY_AI:
+		return BW_CALCS_VERSION_VEGA10;
+#endif
+
 	default:
 		return BW_CALCS_VERSION_INVALID;
 	}
@@ -2430,6 +2435,118 @@ void bw_calcs_init(struct bw_calcs_dceip *bw_dceip,
 		dceip.scatter_gather_pte_request_rows_in_tiling_mode = 2;
 		dceip.mcifwr_all_surfaces_burst_time = bw_int_to_fixed(0);
 		break;
+#if defined(CONFIG_DRM_AMD_DC_DCE12_0)
+	case BW_CALCS_VERSION_VEGA10:
+		vbios.memory_type = bw_def_hbm;
+		vbios.dram_channel_width_in_bits = 128;
+		vbios.number_of_dram_channels = asic_id.vram_width / vbios.dram_channel_width_in_bits;
+		vbios.number_of_dram_banks = 16;
+		vbios.high_yclk = bw_int_to_fixed(2400);
+		vbios.mid_yclk = bw_int_to_fixed(1700);
+		vbios.low_yclk = bw_int_to_fixed(1000);
+		vbios.low_sclk = bw_int_to_fixed(300);
+		vbios.mid1_sclk = bw_int_to_fixed(350);
+		vbios.mid2_sclk = bw_int_to_fixed(400);
+		vbios.mid3_sclk = bw_int_to_fixed(500);
+		vbios.mid4_sclk = bw_int_to_fixed(600);
+		vbios.mid5_sclk = bw_int_to_fixed(700);
+		vbios.mid6_sclk = bw_int_to_fixed(760);
+		vbios.high_sclk = bw_int_to_fixed(776);
+		vbios.low_voltage_max_dispclk = bw_int_to_fixed(460);
+		vbios.mid_voltage_max_dispclk = bw_int_to_fixed(670);
+		vbios.high_voltage_max_dispclk = bw_int_to_fixed(1133);
+		vbios.low_voltage_max_phyclk = bw_int_to_fixed(540);
+		vbios.mid_voltage_max_phyclk = bw_int_to_fixed(810);
+		vbios.high_voltage_max_phyclk = bw_int_to_fixed(810);
+		vbios.data_return_bus_width = bw_int_to_fixed(32);
+		vbios.trc = bw_int_to_fixed(48);
+		vbios.dmifmc_urgent_latency = bw_int_to_fixed(3);
+		vbios.stutter_self_refresh_exit_latency = bw_frc_to_fixed(75, 10);
+		vbios.stutter_self_refresh_entry_latency = bw_frc_to_fixed(19, 10);
+		vbios.nbp_state_change_latency = bw_int_to_fixed(39);
+		vbios.mcifwrmc_urgent_latency = bw_int_to_fixed(10);
+		vbios.scatter_gather_enable = false;
+		vbios.down_spread_percentage = bw_frc_to_fixed(5, 10);
+		vbios.cursor_width = 32;
+		vbios.average_compression_rate = 4;
+		vbios.number_of_request_slots_gmc_reserves_for_dmif_per_channel = 8;
+		vbios.blackout_duration = bw_int_to_fixed(0); /* us */
+		vbios.maximum_blackout_recovery_time = bw_int_to_fixed(0);
+
+		dceip.large_cursor = false;
+		dceip.dmif_request_buffer_size = bw_int_to_fixed(2304);
+		dceip.dmif_pipe_en_fbc_chunk_tracker = true;
+		dceip.cursor_max_outstanding_group_num = 1;
+		dceip.lines_interleaved_into_lb = 2;
+		dceip.chunk_width = 256;
+		dceip.number_of_graphics_pipes = 6;
+		dceip.number_of_underlay_pipes = 0;
+		dceip.low_power_tiling_mode = 0;
+		dceip.display_write_back_supported = true;
+		dceip.argb_compression_support = true;
+		dceip.underlay_vscaler_efficiency6_bit_per_component =
+			bw_frc_to_fixed(35556, 10000);
+		dceip.underlay_vscaler_efficiency8_bit_per_component =
+			bw_frc_to_fixed(34286, 10000);
+		dceip.underlay_vscaler_efficiency10_bit_per_component =
+			bw_frc_to_fixed(32, 10);
+		dceip.underlay_vscaler_efficiency12_bit_per_component =
+			bw_int_to_fixed(3);
+		dceip.graphics_vscaler_efficiency6_bit_per_component =
+			bw_frc_to_fixed(35, 10);
+		dceip.graphics_vscaler_efficiency8_bit_per_component =
+			bw_frc_to_fixed(34286, 10000);
+		dceip.graphics_vscaler_efficiency10_bit_per_component =
+			bw_frc_to_fixed(32, 10);
+		dceip.graphics_vscaler_efficiency12_bit_per_component =
+			bw_int_to_fixed(3);
+		dceip.alpha_vscaler_efficiency = bw_int_to_fixed(3);
+		dceip.max_dmif_buffer_allocated = 4;
+		dceip.graphics_dmif_size = 24576;
+		dceip.underlay_luma_dmif_size = 19456;
+		dceip.underlay_chroma_dmif_size = 23552;
+		dceip.pre_downscaler_enabled = true;
+		dceip.underlay_downscale_prefetch_enabled = false;
+		dceip.lb_write_pixels_per_dispclk = bw_int_to_fixed(1);
+		dceip.lb_size_per_component444 = bw_int_to_fixed(245952);
+		dceip.graphics_lb_nodownscaling_multi_line_prefetching = true;
+		dceip.stutter_and_dram_clock_state_change_gated_before_cursor =
+			bw_int_to_fixed(1);
+		dceip.underlay420_luma_lb_size_per_component = bw_int_to_fixed(
+			82176);
+		dceip.underlay420_chroma_lb_size_per_component =
+			bw_int_to_fixed(164352);
+		dceip.underlay422_lb_size_per_component = bw_int_to_fixed(
+			82176);
+		dceip.cursor_chunk_width = bw_int_to_fixed(64);
+		dceip.cursor_dcp_buffer_lines = bw_int_to_fixed(4);
+		dceip.underlay_maximum_width_efficient_for_tiling =
+			bw_int_to_fixed(1920);
+		dceip.underlay_maximum_height_efficient_for_tiling =
+			bw_int_to_fixed(1080);
+		dceip.peak_pte_request_to_eviction_ratio_limiting_multiple_displays_or_single_rotated_display =
+			bw_frc_to_fixed(3, 10);
+		dceip.peak_pte_request_to_eviction_ratio_limiting_single_display_no_rotation =
+			bw_int_to_fixed(25);
+		dceip.minimum_outstanding_pte_request_limit = bw_int_to_fixed(
+			2);
+		dceip.maximum_total_outstanding_pte_requests_allowed_by_saw =
+			bw_int_to_fixed(128);
+		dceip.limit_excessive_outstanding_dmif_requests = true;
+		dceip.linear_mode_line_request_alternation_slice =
+			bw_int_to_fixed(64);
+		dceip.scatter_gather_lines_of_pte_prefetching_in_linear_mode =
+			32;
+		dceip.display_write_back420_luma_mcifwr_buffer_size = 12288;
+		dceip.display_write_back420_chroma_mcifwr_buffer_size = 8192;
+		dceip.request_efficiency = bw_frc_to_fixed(8, 10);
+		dceip.dispclk_per_request = bw_int_to_fixed(2);
+		dceip.dispclk_ramping_factor = bw_frc_to_fixed(105, 100);
+		dceip.display_pipe_throughput_factor = bw_frc_to_fixed(105, 100);
+		dceip.scatter_gather_pte_request_rows_in_tiling_mode = 2;
+		dceip.mcifwr_all_surfaces_burst_time = bw_int_to_fixed(0);
+		break;
+#endif
 	default:
 		break;
 	}
diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c b/drivers/gpu/drm/amd/display/dc/core/dc.c
index b9ca968..28ed8ea 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc.c
@@ -1815,3 +1815,32 @@ void dc_link_remove_remote_sink(const struct dc_link *link, const struct dc_sink
 	}
 }
 
+#if defined(CONFIG_DRM_AMD_DC_DCE12_0)
+bool dc_init_dchub(struct dc *dc, struct dchub_init_data *dh_data)
+{
+	int i;
+	struct core_dc *core_dc = DC_TO_CORE(dc);
+	struct mem_input *mi = NULL;
+
+	for (i = 0; i < core_dc->res_pool->pipe_count; i++) {
+		if (core_dc->res_pool->mis[i] != NULL) {
+			mi = core_dc->res_pool->mis[i];
+			break;
+		}
+	}
+	if (mi == NULL) {
+		dm_error("no mem_input!\n");
+		return false;
+	}
+
+	if (mi->funcs->mem_input_update_dchub)
+		mi->funcs->mem_input_update_dchub(mi, dh_data);
+	else
+		ASSERT(mi->funcs->mem_input_update_dchub);
+
+
+	return true;
+
+}
+#endif
+
diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_debug.c b/drivers/gpu/drm/amd/display/dc/core/dc_debug.c
index 85ddf5f..079558a 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc_debug.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc_debug.c
@@ -141,6 +141,12 @@ void pre_surface_trace(
 				surface->format,
 				surface->rotation,
 				surface->stereo_format);
+
+#if defined (CONFIG_DRM_AMD_DC_DCE12_0)
+		SURFACE_TRACE("surface->tiling_info.gfx9.swizzle = %d;\n",
+				surface->tiling_info.gfx9.swizzle);
+#endif
+
 		SURFACE_TRACE("\n");
 	}
 	SURFACE_TRACE("\n");
@@ -221,6 +227,11 @@ void update_surface_trace(
 					update->plane_info->tiling_info.gfx8.pipe_config,
 					update->plane_info->tiling_info.gfx8.array_mode,
 					update->plane_info->visible);
+
+			#if defined (CONFIG_DRM_AMD_DC_DCE12_0)
+					SURFACE_TRACE("surface->tiling_info.gfx9.swizzle = %d;\n",
+					update->plane_info->tiling_info.gfx9.swizzle);
+			#endif
 		}
 
 		if (update->scaling_info) {
diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link.c b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
index 5ca72af..f13da7c 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc_link.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc_link.c
@@ -1217,6 +1217,25 @@ static enum dc_status enable_link_dp(struct pipe_ctx *pipe_ctx)
 				pipe_ctx->dis_clk->funcs->set_min_clocks_state(
 					pipe_ctx->dis_clk, DM_PP_CLOCKS_STATE_NOMINAL);
 		} else {
+#if defined(CONFIG_DRM_AMD_DC_DCE12_0)
+			uint32_t dp_phyclk_in_khz;
+			const struct clocks_value clocks_value =
+					pipe_ctx->dis_clk->cur_clocks_value;
+
+			/* each link_rate unit is 27 MHz = 27000 kHz */
+			dp_phyclk_in_khz = link_settings.link_rate * 27000;
+
+			if (((clocks_value.max_non_dp_phyclk_in_khz != 0) &&
+				(dp_phyclk_in_khz > clocks_value.max_non_dp_phyclk_in_khz)) ||
+				(dp_phyclk_in_khz > clocks_value.max_dp_phyclk_in_khz)) {
+				pipe_ctx->dis_clk->funcs->apply_clock_voltage_request(
+						pipe_ctx->dis_clk,
+						DM_PP_CLOCK_TYPE_DISPLAYPHYCLK,
+						dp_phyclk_in_khz,
+						false,
+						true);
+			}
+#endif
 		}
 	}
 
diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
index 6119973..77ef330 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
@@ -39,6 +39,9 @@
 #include "dce100/dce100_resource.h"
 #include "dce110/dce110_resource.h"
 #include "dce112/dce112_resource.h"
+#if defined(CONFIG_DRM_AMD_DC_DCE12_0)
+#include "dce120/dce120_resource.h"
+#endif
 
 enum dce_version resource_parse_asic_id(struct hw_asic_id asic_id)
 {
@@ -65,6 +68,11 @@ enum dce_version resource_parse_asic_id(struct hw_asic_id asic_id)
 			dc_version = DCE_VERSION_11_2;
 		}
 		break;
+#if defined(CONFIG_DRM_AMD_DC_DCE12_0)
+	case FAMILY_AI:
+		dc_version = DCE_VERSION_12_0;
+		break;
+#endif
 	default:
 		dc_version = DCE_VERSION_UNKNOWN;
 		break;
@@ -97,6 +105,12 @@ struct resource_pool *dc_create_resource_pool(
 		res_pool = dce112_create_resource_pool(
 			num_virtual_links, dc);
 		break;
+#if defined(CONFIG_DRM_AMD_DC_DCE12_0)
+	case DCE_VERSION_12_0:
+		res_pool = dce120_create_resource_pool(
+			num_virtual_links, dc);
+		break;
+#endif
 	default:
 		break;
 	}
diff --git a/drivers/gpu/drm/amd/display/dc/dc.h b/drivers/gpu/drm/amd/display/dc/dc.h
index c6c0cf5..bc15065 100644
--- a/drivers/gpu/drm/amd/display/dc/dc.h
+++ b/drivers/gpu/drm/amd/display/dc/dc.h
@@ -55,6 +55,9 @@ struct dc_caps {
 struct dc_dcc_surface_param {
 	enum surface_pixel_format format;
 	struct dc_size surface_size;
+#if defined(CONFIG_DRM_AMD_DC_DCE12_0)
+	enum swizzle_mode_values swizzle_mode;
+#endif
 	enum dc_scan_direction scan;
 };
 
@@ -143,6 +146,9 @@ struct dc_debug {
 	bool disable_stutter;
 	bool disable_dcc;
 	bool disable_dfs_bypass;
+#if defined(CONFIG_DRM_AMD_DC_DCE12_0)
+	bool disable_pplib_clock_request;
+#endif
 	bool disable_clock_gate;
 	bool disable_dmcu;
 	bool force_abm_enable;
@@ -157,6 +163,23 @@ struct dc {
 	struct dc_debug debug;
 };
 
+#if defined(CONFIG_DRM_AMD_DC_DCE12_0)
+enum frame_buffer_mode {
+	FRAME_BUFFER_MODE_LOCAL_ONLY = 0,
+	FRAME_BUFFER_MODE_ZFB_ONLY,
+	FRAME_BUFFER_MODE_MIXED_ZFB_AND_LOCAL,
+};
+
+struct dchub_init_data {
+	bool dchub_initialzied;
+	bool dchub_info_valid;
+	int64_t zfb_phys_addr_base;
+	int64_t zfb_mc_base_addr;
+	uint64_t zfb_size_in_byte;
+	enum frame_buffer_mode fb_mode;
+};
+#endif
+
 struct dc_init_data {
 	struct hw_asic_id asic_id;
 	void *driver; /* ctx */
@@ -177,6 +200,10 @@ struct dc *dc_create(const struct dc_init_data *init_params);
 
 void dc_destroy(struct dc **dc);
 
+#if defined(CONFIG_DRM_AMD_DC_DCE12_0)
+bool dc_init_dchub(struct dc *dc, struct dchub_init_data *dh_data);
+#endif
+
 /*******************************************************************************
  * Surface Interfaces
  ******************************************************************************/
diff --git a/drivers/gpu/drm/amd/display/dc/dc_hw_types.h b/drivers/gpu/drm/amd/display/dc/dc_hw_types.h
index 75e16ac..6381340 100644
--- a/drivers/gpu/drm/amd/display/dc/dc_hw_types.h
+++ b/drivers/gpu/drm/amd/display/dc/dc_hw_types.h
@@ -259,6 +259,36 @@ enum tile_mode_values {
 	DC_ADDR_SURF_MICRO_TILING_NON_DISPLAY = 0x1,
 };
 
+#if defined(CONFIG_DRM_AMD_DC_DCE12_0)
+enum swizzle_mode_values {
+	DC_SW_LINEAR = 0,
+	DC_SW_256B_S = 1,
+	DC_SW_256_D = 2,
+	DC_SW_256_R = 3,
+	DC_SW_4KB_S = 5,
+	DC_SW_4KB_D = 6,
+	DC_SW_4KB_R = 7,
+	DC_SW_64KB_S = 9,
+	DC_SW_64KB_D = 10,
+	DC_SW_64KB_R = 11,
+	DC_SW_VAR_S = 13,
+	DC_SW_VAR_D = 14,
+	DC_SW_VAR_R = 15,
+	DC_SW_64KB_S_T = 17,
+	DC_SW_64KB_D_T = 18,
+	DC_SW_4KB_S_X = 21,
+	DC_SW_4KB_D_X = 22,
+	DC_SW_4KB_R_X = 23,
+	DC_SW_64KB_S_X = 25,
+	DC_SW_64KB_D_X = 26,
+	DC_SW_64KB_R_X = 27,
+	DC_SW_VAR_S_X = 29,
+	DC_SW_VAR_D_X = 30,
+	DC_SW_VAR_R_X = 31,
+	DC_SW_MAX
+};
+#endif
+
 union dc_tiling_info {
 
 	struct {
@@ -323,6 +353,22 @@ union dc_tiling_info {
 		enum array_mode_values array_mode;
 	} gfx8;
 
+#if defined(CONFIG_DRM_AMD_DC_DCE12_0)
+	struct {
+		unsigned int num_pipes;
+		unsigned int num_banks;
+		unsigned int pipe_interleave;
+		unsigned int num_shader_engines;
+		unsigned int num_rb_per_se;
+		unsigned int max_compressed_frags;
+		bool shaderEnable;
+
+		enum swizzle_mode_values swizzle;
+		bool meta_linear;
+		bool rb_aligned;
+		bool pipe_aligned;
+	} gfx9;
+#endif
 };
 
 /* Rotation angle */
diff --git a/drivers/gpu/drm/amd/display/dc/dce/dce_clock_source.c b/drivers/gpu/drm/amd/display/dc/dce/dce_clock_source.c
index 1d6a9da..f53dc15 100644
--- a/drivers/gpu/drm/amd/display/dc/dce/dce_clock_source.c
+++ b/drivers/gpu/drm/amd/display/dc/dce/dce_clock_source.c
@@ -585,6 +585,9 @@ static uint32_t dce110_get_pix_clk_dividers(
 			pll_settings, pix_clk_params);
 		break;
 	case DCE_VERSION_11_2:
+#if defined(CONFIG_DRM_AMD_DC_DCE12_0)
+	case DCE_VERSION_12_0:
+#endif
 		dce112_get_pix_clk_dividers_helper(clk_src,
 				pll_settings, pix_clk_params);
 		break;
@@ -868,6 +871,9 @@ static bool dce110_program_pix_clk(
 
 		break;
 	case DCE_VERSION_11_2:
+#if defined(CONFIG_DRM_AMD_DC_DCE12_0)
+	case DCE_VERSION_12_0:
+#endif
 		if (clock_source->id != CLOCK_SOURCE_ID_DP_DTO) {
 			bp_pc_params.flags.SET_GENLOCK_REF_DIV_SRC =
 							pll_settings->use_external_clk;
diff --git a/drivers/gpu/drm/amd/display/dc/dce/dce_clocks.c b/drivers/gpu/drm/amd/display/dc/dce/dce_clocks.c
index ac1feba..9c743e5 100644
--- a/drivers/gpu/drm/amd/display/dc/dce/dce_clocks.c
+++ b/drivers/gpu/drm/amd/display/dc/dce/dce_clocks.c
@@ -80,6 +80,20 @@ static struct state_dependent_clocks dce112_max_clks_by_state[] = {
 /*ClocksStatePerformance*/
 { .display_clk_khz = 1132000, .pixel_clk_khz = 600000 } };
 
+#if defined(CONFIG_DRM_AMD_DC_DCE12_0)
+static struct state_dependent_clocks dce120_max_clks_by_state[] = {
+/*ClocksStateInvalid - should not be used*/
+{ .display_clk_khz = 0, .pixel_clk_khz = 0 },
+/*ClocksStateUltraLow - not supposed to be used, per the HW design team*/
+{ .display_clk_khz = 0, .pixel_clk_khz = 0 },
+/*ClocksStateLow*/
+{ .display_clk_khz = 460000, .pixel_clk_khz = 400000 },
+/*ClocksStateNominal*/
+{ .display_clk_khz = 670000, .pixel_clk_khz = 600000 },
+/*ClocksStatePerformance*/
+{ .display_clk_khz = 1133000, .pixel_clk_khz = 600000 } };
+#endif
+
 /* Starting point for each divider range.*/
 enum dce_divider_range_start {
 	DIVIDER_RANGE_01_START = 200, /* 2.00*/
@@ -483,6 +497,103 @@ static void dce_clock_read_ss_info(struct dce_disp_clk *clk_dce)
 	}
 }
 
+#if defined(CONFIG_DRM_AMD_DC_DCE12_0)
+static bool dce_apply_clock_voltage_request(
+	struct display_clock *clk,
+	enum dm_pp_clock_type clocks_type,
+	int clocks_in_khz,
+	bool pre_mode_set,
+	bool update_dp_phyclk)
+{
+	struct dm_pp_clock_for_voltage_req clock_voltage_req = {0};
+
+	switch (clocks_type) {
+	case DM_PP_CLOCK_TYPE_DISPLAY_CLK:
+	case DM_PP_CLOCK_TYPE_PIXELCLK:
+	case DM_PP_CLOCK_TYPE_DISPLAYPHYCLK:
+		break;
+	default:
+		BREAK_TO_DEBUGGER();
+		return false;
+	}
+
+	clock_voltage_req.clk_type = clocks_type;
+	clock_voltage_req.clocks_in_khz = clocks_in_khz;
+
+	/* to pplib */
+	if (pre_mode_set) {
+		switch (clocks_type) {
+		case DM_PP_CLOCK_TYPE_DISPLAY_CLK:
+			if (clocks_in_khz > clk->cur_clocks_value.dispclk_in_khz) {
+				dm_pp_apply_clock_for_voltage_request(
+						clk->ctx, &clock_voltage_req);
+				clk->cur_clocks_value.dispclk_notify_pplib_done = true;
+			} else
+				clk->cur_clocks_value.dispclk_notify_pplib_done = false;
+			/* update current clock value whether the clock increases or decreases */
+			clk->cur_clocks_value.dispclk_in_khz = clocks_in_khz;
+			break;
+		case DM_PP_CLOCK_TYPE_PIXELCLK:
+			if (clocks_in_khz > clk->cur_clocks_value.max_pixelclk_in_khz) {
+				dm_pp_apply_clock_for_voltage_request(
+						clk->ctx, &clock_voltage_req);
+				clk->cur_clocks_value.pixelclk_notify_pplib_done = true;
+			} else
+				clk->cur_clocks_value.pixelclk_notify_pplib_done = false;
+			/* update current clock value whether the clock increases or decreases */
+			clk->cur_clocks_value.max_pixelclk_in_khz = clocks_in_khz;
+			break;
+		case DM_PP_CLOCK_TYPE_DISPLAYPHYCLK:
+			if (clocks_in_khz > clk->cur_clocks_value.max_non_dp_phyclk_in_khz) {
+				dm_pp_apply_clock_for_voltage_request(
+						clk->ctx, &clock_voltage_req);
+				clk->cur_clocks_value.phyclk_notigy_pplib_done = true;
+			} else
+				clk->cur_clocks_value.phyclk_notigy_pplib_done = false;
+			/* update current clock value whether the clock increases or decreases */
+			clk->cur_clocks_value.max_non_dp_phyclk_in_khz = clocks_in_khz;
+			break;
+		default:
+			ASSERT(0);
+			break;
+		}
+	} else {
+		switch (clocks_type) {
+		case DM_PP_CLOCK_TYPE_DISPLAY_CLK:
+			if (!clk->cur_clocks_value.dispclk_notify_pplib_done)
+				dm_pp_apply_clock_for_voltage_request(
+						clk->ctx, &clock_voltage_req);
+			break;
+		case DM_PP_CLOCK_TYPE_PIXELCLK:
+			if (!clk->cur_clocks_value.pixelclk_notify_pplib_done)
+				dm_pp_apply_clock_for_voltage_request(
+						clk->ctx, &clock_voltage_req);
+			break;
+		case DM_PP_CLOCK_TYPE_DISPLAYPHYCLK:
+			if (!clk->cur_clocks_value.phyclk_notigy_pplib_done)
+				dm_pp_apply_clock_for_voltage_request(
+						clk->ctx, &clock_voltage_req);
+			break;
+		default:
+			ASSERT(0);
+			break;
+		}
+	}
+
+	if (update_dp_phyclk && (clocks_in_khz >
+			clk->cur_clocks_value.max_dp_phyclk_in_khz))
+		clk->cur_clocks_value.max_dp_phyclk_in_khz = clocks_in_khz;
+
+	return true;
+}
+
+static const struct display_clock_funcs dce120_funcs = {
+	.get_dp_ref_clk_frequency = dce_clocks_get_dp_ref_freq,
+	.apply_clock_voltage_request = dce_apply_clock_voltage_request,
+	.set_clock = dce112_set_clock
+};
+#endif
+
 static const struct display_clock_funcs dce112_funcs = {
 	.get_dp_ref_clk_frequency = dce_clocks_get_dp_ref_freq,
 	.get_required_clocks_state = dce_get_required_clocks_state,
@@ -623,6 +734,44 @@ struct display_clock *dce112_disp_clk_create(
 	return &clk_dce->base;
 }
 
+#if defined(CONFIG_DRM_AMD_DC_DCE12_0)
+struct display_clock *dce120_disp_clk_create(
+		struct dc_context *ctx,
+		const struct dce_disp_clk_registers *regs,
+		const struct dce_disp_clk_shift *clk_shift,
+		const struct dce_disp_clk_mask *clk_mask)
+{
+	struct dce_disp_clk *clk_dce = dm_alloc(sizeof(*clk_dce));
+	struct dm_pp_clock_levels_with_voltage clk_level_info = {0};
+
+	if (clk_dce == NULL) {
+		BREAK_TO_DEBUGGER();
+		return NULL;
+	}
+
+	memcpy(clk_dce->max_clks_by_state,
+		dce120_max_clks_by_state,
+		sizeof(dce120_max_clks_by_state));
+
+	dce_disp_clk_construct(
+		clk_dce, ctx, regs, clk_shift, clk_mask);
+
+	clk_dce->base.funcs = &dce120_funcs;
+
+	/* new in dce120 */
+	if (!ctx->dc->debug.disable_pplib_clock_request &&
+			dm_pp_get_clock_levels_by_type_with_voltage(
+				ctx, DM_PP_CLOCK_TYPE_DISPLAY_CLK, &clk_level_info) &&
+			clk_level_info.num_levels)
+		clk_dce->max_displ_clk_in_khz =
+			clk_level_info.data[clk_level_info.num_levels - 1].clocks_in_khz;
+	else
+		clk_dce->max_displ_clk_in_khz = 1133000;
+
+	return &clk_dce->base;
+}
+#endif
+
 void dce_disp_clk_destroy(struct display_clock **disp_clk)
 {
 	struct dce_disp_clk *clk_dce = TO_DCE_CLOCKS(*disp_clk);
diff --git a/drivers/gpu/drm/amd/display/dc/dce/dce_clocks.h b/drivers/gpu/drm/amd/display/dc/dce/dce_clocks.h
index 020ab9d..18787f6 100644
--- a/drivers/gpu/drm/amd/display/dc/dce/dce_clocks.h
+++ b/drivers/gpu/drm/amd/display/dc/dce/dce_clocks.h
@@ -45,6 +45,14 @@
 	CLK_SF(MASTER_COMM_CMD_REG, MASTER_COMM_CMD_REG_BYTE0, mask_sh), \
 	CLK_SF(MASTER_COMM_CNTL_REG, MASTER_COMM_INTERRUPT, mask_sh)
 
+#if defined(CONFIG_DRM_AMD_DC_DCE12_0)
+#define CLK_COMMON_MASK_SH_LIST_SOC_BASE(mask_sh) \
+	CLK_SF(DCCG_DFS_DPREFCLK_CNTL, DPREFCLK_SRC_SEL, mask_sh), \
+	CLK_SF(DCCG_DFS_DENTIST_DISPCLK_CNTL, DENTIST_DPREFCLK_WDIVIDER, mask_sh), \
+	CLK_SF(DCCG_DFS_MASTER_COMM_CMD_REG, MASTER_COMM_CMD_REG_BYTE0, mask_sh), \
+	CLK_SF(DCCG_DFS_MASTER_COMM_CNTL_REG, MASTER_COMM_INTERRUPT, mask_sh)
+#endif
+
 #define CLK_REG_FIELD_LIST(type) \
 	type DPREFCLK_SRC_SEL; \
 	type DENTIST_DPREFCLK_WDIVIDER; \
@@ -118,6 +126,10 @@ struct dce_disp_clk {
 	int gpu_pll_ss_divider;
 
 
+#if defined(CONFIG_DRM_AMD_DC_DCE12_0)
+	/* max disp_clk from PPLIB for max validation display clock*/
+	int max_displ_clk_in_khz;
+#endif
 };
 
 
@@ -139,6 +151,14 @@ struct display_clock *dce112_disp_clk_create(
 	const struct dce_disp_clk_shift *clk_shift,
 	const struct dce_disp_clk_mask *clk_mask);
 
+#if defined(CONFIG_DRM_AMD_DC_DCE12_0)
+struct display_clock *dce120_disp_clk_create(
+	struct dc_context *ctx,
+	const struct dce_disp_clk_registers *regs,
+	const struct dce_disp_clk_shift *clk_shift,
+	const struct dce_disp_clk_mask *clk_mask);
+#endif
+
 void dce_disp_clk_destroy(struct display_clock **disp_clk);
 
 #endif /* _DCE_CLOCKS_H_ */
diff --git a/drivers/gpu/drm/amd/display/dc/dce/dce_hwseq.h b/drivers/gpu/drm/amd/display/dc/dce/dce_hwseq.h
index 70e0652..ff7984b 100644
--- a/drivers/gpu/drm/amd/display/dc/dce/dce_hwseq.h
+++ b/drivers/gpu/drm/amd/display/dc/dce/dce_hwseq.h
@@ -186,6 +186,14 @@ struct dce_hwseq_registers {
 	HWSEQ_DCE10_MASK_SH_LIST(mask_sh),\
 	HWSEQ_PHYPLL_MASK_SH_LIST(mask_sh, CRTC0_)
 
+#if defined(CONFIG_DRM_AMD_DC_DCE12_0)
+#define HWSEQ_DCE12_MASK_SH_LIST(mask_sh)\
+	HWSEQ_DCEF_MASK_SH_LIST(mask_sh, DCFE0_DCFE_),\
+	HWSEQ_BLND_MASK_SH_LIST(mask_sh, BLND0_BLND_),\
+	HWSEQ_PIXEL_RATE_MASK_SH_LIST(mask_sh, CRTC0_),\
+	HWSEQ_PHYPLL_MASK_SH_LIST(mask_sh, CRTC0_)
+#endif
+
 #define HWSEQ_REG_FIED_LIST(type) \
 	type DCFE_CLOCK_ENABLE; \
 	type DCFEV_CLOCK_ENABLE; \
diff --git a/drivers/gpu/drm/amd/display/dc/dce/dce_link_encoder.h b/drivers/gpu/drm/amd/display/dc/dce/dce_link_encoder.h
index f337d60..f6a1006 100644
--- a/drivers/gpu/drm/amd/display/dc/dce/dce_link_encoder.h
+++ b/drivers/gpu/drm/amd/display/dc/dce/dce_link_encoder.h
@@ -31,6 +31,13 @@
 #define TO_DCE110_LINK_ENC(link_encoder)\
 	container_of(link_encoder, struct dce110_link_encoder, base)
 
+#if defined(CONFIG_DRM_AMD_DC_DCE12_0)
+/* Registers not found in the DCE 12.0 spec:
+ * BIOS_SCRATCH_2
+ * DP_DPHY_INTERNAL_CTRL
+ */
+#endif
+
 #define AUX_REG_LIST(id)\
 	SRI(AUX_CONTROL, DP_AUX, id), \
 	SRI(AUX_DPHY_RX_CONTROL0, DP_AUX, id)
@@ -79,6 +86,13 @@
 	SRI(DP_DPHY_INTERNAL_CTRL, DP, id), \
 	SR(DCI_MEM_PWR_STATUS)
 
+#if defined(CONFIG_DRM_AMD_DC_DCE12_0)
+	#define LE_DCE120_REG_LIST(id)\
+		LE_COMMON_REG_LIST_BASE(id), \
+		SRI(DP_DPHY_BS_SR_SWAP_CNTL, DP, id), \
+		SR(DCI_MEM_PWR_STATUS)
+#endif
+
 	#define LE_DCE80_REG_LIST(id)\
 		SRI(DP_DPHY_INTERNAL_CTRL, DP, id), \
 		LE_COMMON_REG_LIST_BASE(id)
diff --git a/drivers/gpu/drm/amd/display/dc/dce/dce_mem_input.c b/drivers/gpu/drm/amd/display/dc/dce/dce_mem_input.c
index e14a21c..c494f71 100644
--- a/drivers/gpu/drm/amd/display/dc/dce/dce_mem_input.c
+++ b/drivers/gpu/drm/amd/display/dc/dce/dce_mem_input.c
@@ -187,6 +187,20 @@ static void program_nbp_watermark(struct mem_input *mi,
 		REG_UPDATE(DPG_PIPE_NB_PSTATE_CHANGE_CONTROL,
 				NB_PSTATE_CHANGE_WATERMARK, nbp_wm);
 	}
+#if defined(CONFIG_DRM_AMD_DC_DCE12_0)
+	if (REG(DPG_PIPE_LOW_POWER_CONTROL)) {
+		REG_UPDATE(DPG_WATERMARK_MASK_CONTROL,
+				PSTATE_CHANGE_WATERMARK_MASK, wm_select);
+
+		REG_UPDATE_3(DPG_PIPE_LOW_POWER_CONTROL,
+				PSTATE_CHANGE_ENABLE, 1,
+				PSTATE_CHANGE_URGENT_DURING_REQUEST, 1,
+				PSTATE_CHANGE_NOT_SELF_REFRESH_DURING_REQUEST, 1);
+
+		REG_UPDATE(DPG_PIPE_LOW_POWER_CONTROL,
+				PSTATE_CHANGE_WATERMARK, nbp_wm);
+	}
+#endif
 }
 
 static void program_stutter_watermark(struct mem_input *mi,
@@ -196,6 +210,12 @@ static void program_stutter_watermark(struct mem_input *mi,
 	REG_UPDATE(DPG_WATERMARK_MASK_CONTROL,
 		STUTTER_EXIT_SELF_REFRESH_WATERMARK_MASK, wm_select);
 
+#if defined(CONFIG_DRM_AMD_DC_DCE12_0)
+	if (REG(DPG_PIPE_STUTTER_CONTROL2))
+		REG_UPDATE(DPG_PIPE_STUTTER_CONTROL2,
+				STUTTER_EXIT_SELF_REFRESH_WATERMARK, stutter_mark);
+	else
+#endif
 		REG_UPDATE(DPG_PIPE_STUTTER_CONTROL,
 				STUTTER_EXIT_SELF_REFRESH_WATERMARK, stutter_mark);
 }
@@ -234,6 +254,21 @@ void dce_mem_input_program_display_marks(struct mem_input *mi,
 static void program_tiling(struct mem_input *mi,
 	const union dc_tiling_info *info)
 {
+#if defined(CONFIG_DRM_AMD_DC_DCE12_0)
+	if (mi->masks->GRPH_SW_MODE) { /* GFX9 */
+		REG_UPDATE_6(GRPH_CONTROL,
+				GRPH_SW_MODE, info->gfx9.swizzle,
+				GRPH_NUM_BANKS, log_2(info->gfx9.num_banks),
+				GRPH_NUM_SHADER_ENGINES, log_2(info->gfx9.num_shader_engines),
+				GRPH_NUM_PIPES, log_2(info->gfx9.num_pipes),
+				GRPH_COLOR_EXPANSION_MODE, 1,
+				GRPH_SE_ENABLE, info->gfx9.shaderEnable);
+		/* TODO: where does DCP0_GRPH_CONTROL__GRPH_SE_ENABLE info come from?
+		GRPH_SE_ENABLE, 1,
+		GRPH_Z, 0);
+		 */
+	}
+#endif
 	if (mi->masks->GRPH_ARRAY_MODE) { /* GFX8 */
 		REG_UPDATE_9(GRPH_CONTROL,
 				GRPH_NUM_BANKS, info->gfx8.num_banks,
diff --git a/drivers/gpu/drm/amd/display/dc/dce/dce_mem_input.h b/drivers/gpu/drm/amd/display/dc/dce/dce_mem_input.h
index ec053c2..9e18c2a 100644
--- a/drivers/gpu/drm/amd/display/dc/dce/dce_mem_input.h
+++ b/drivers/gpu/drm/amd/display/dc/dce/dce_mem_input.h
@@ -58,6 +58,15 @@
 	MI_DCE11_2_REG_LIST(id),\
 	MI_DCE_PTE_REG_LIST(id)
 
+#if defined(CONFIG_DRM_AMD_DC_DCE12_0)
+#define MI_DCE12_REG_LIST(id)\
+	MI_DCE_BASE_REG_LIST(id),\
+	MI_DCE_PTE_REG_LIST(id),\
+	SRI(GRPH_PIPE_OUTSTANDING_REQUEST_LIMIT, DCP, id),\
+	SRI(DPG_PIPE_STUTTER_CONTROL2, DMIF_PG, id),\
+	SRI(DPG_PIPE_LOW_POWER_CONTROL, DMIF_PG, id)
+#endif
+
 struct dce_mem_input_registers {
 	/* DCP */
 	uint32_t GRPH_ENABLE;
@@ -163,6 +172,31 @@ struct dce_mem_input_registers {
 	MI_DCE11_2_MASK_SH_LIST(mask_sh),\
 	MI_DCP_PTE_MASK_SH_LIST(mask_sh, )
 
+#if defined(CONFIG_DRM_AMD_DC_DCE12_0)
+#define MI_GFX9_TILE_MASK_SH_LIST(mask_sh, blk)\
+	SFB(blk, GRPH_CONTROL, GRPH_SW_MODE, mask_sh),\
+	SFB(blk, GRPH_CONTROL, GRPH_SE_ENABLE, mask_sh),\
+	SFB(blk, GRPH_CONTROL, GRPH_NUM_SHADER_ENGINES, mask_sh),\
+	SFB(blk, GRPH_CONTROL, GRPH_NUM_PIPES, mask_sh),\
+	SFB(blk, GRPH_CONTROL, GRPH_COLOR_EXPANSION_MODE, mask_sh)
+
+#define MI_DCE12_DMIF_PG_MASK_SH_LIST(mask_sh, blk)\
+	SFB(blk, DPG_PIPE_STUTTER_CONTROL2, STUTTER_EXIT_SELF_REFRESH_WATERMARK, mask_sh),\
+	SFB(blk, DPG_WATERMARK_MASK_CONTROL, PSTATE_CHANGE_WATERMARK_MASK, mask_sh),\
+	SFB(blk, DPG_PIPE_LOW_POWER_CONTROL, PSTATE_CHANGE_ENABLE, mask_sh),\
+	SFB(blk, DPG_PIPE_LOW_POWER_CONTROL, PSTATE_CHANGE_URGENT_DURING_REQUEST, mask_sh),\
+	SFB(blk, DPG_PIPE_LOW_POWER_CONTROL, PSTATE_CHANGE_NOT_SELF_REFRESH_DURING_REQUEST, mask_sh),\
+	SFB(blk, DPG_PIPE_LOW_POWER_CONTROL, PSTATE_CHANGE_WATERMARK, mask_sh)
+
+#define MI_DCE12_MASK_SH_LIST(mask_sh)\
+	MI_DCP_MASK_SH_LIST(mask_sh, DCP0_),\
+	MI_DCP_DCE11_MASK_SH_LIST(mask_sh, DCP0_),\
+	MI_DCP_PTE_MASK_SH_LIST(mask_sh, DCP0_),\
+	MI_DMIF_PG_MASK_SH_LIST(mask_sh, DMIF_PG0_),\
+	MI_DCE12_DMIF_PG_MASK_SH_LIST(mask_sh, DMIF_PG0_),\
+	MI_GFX9_TILE_MASK_SH_LIST(mask_sh, DCP0_)
+#endif
+
 #define MI_REG_FIELD_LIST(type) \
 	type GRPH_ENABLE; \
 	type GRPH_X_START; \
diff --git a/drivers/gpu/drm/amd/display/dc/dce/dce_opp.h b/drivers/gpu/drm/amd/display/dc/dce/dce_opp.h
index a5afc02..4784ced 100644
--- a/drivers/gpu/drm/amd/display/dc/dce/dce_opp.h
+++ b/drivers/gpu/drm/amd/display/dc/dce/dce_opp.h
@@ -107,6 +107,14 @@ enum dce110_opp_reg_type {
 	SRI(FMT_TEMPORAL_DITHER_PROGRAMMABLE_PATTERN_T_MATRIX, FMT, id), \
 	SRI(CONTROL, FMT_MEMORY, id)
 
+#if defined(CONFIG_DRM_AMD_DC_DCE12_0)
+#define OPP_DCE_120_REG_LIST(id) \
+	OPP_COMMON_REG_LIST_BASE(id), \
+	SRI(DCFE_MEM_PWR_CTRL, DCFE, id), \
+	SRI(DCFE_MEM_PWR_STATUS, DCFE, id), \
+	SRI(CONTROL, FMT_MEMORY, id)
+#endif
+
 #define OPP_SF(reg_name, field_name, post_fix)\
 	.field_name = reg_name ## __ ## field_name ## post_fix
 
@@ -197,6 +205,70 @@ enum dce110_opp_reg_type {
 	OPP_SF(DCFE_MEM_LIGHT_SLEEP_CNTL, DCP_LUT_LIGHT_SLEEP_DIS, mask_sh),\
 	OPP_SF(DCFE_MEM_LIGHT_SLEEP_CNTL, REGAMMA_LUT_MEM_PWR_STATE, mask_sh)
 
+#if defined(CONFIG_DRM_AMD_DC_DCE12_0)
+#define OPP_COMMON_MASK_SH_LIST_DCE_120(mask_sh)\
+	OPP_SF(DCFE0_DCFE_MEM_PWR_CTRL, DCP_REGAMMA_MEM_PWR_DIS, mask_sh),\
+	OPP_SF(DCFE0_DCFE_MEM_PWR_CTRL, DCP_LUT_MEM_PWR_DIS, mask_sh),\
+	OPP_SF(DCP0_REGAMMA_CNTLA_START_CNTL, REGAMMA_CNTLA_EXP_REGION_START, mask_sh),\
+	OPP_SF(DCP0_REGAMMA_CNTLA_START_CNTL, REGAMMA_CNTLA_EXP_REGION_START_SEGMENT, mask_sh),\
+	OPP_SF(DCP0_REGAMMA_CNTLA_SLOPE_CNTL, REGAMMA_CNTLA_EXP_REGION_LINEAR_SLOPE, mask_sh),\
+	OPP_SF(DCP0_REGAMMA_CNTLA_END_CNTL1, REGAMMA_CNTLA_EXP_REGION_END, mask_sh),\
+	OPP_SF(DCP0_REGAMMA_CNTLA_END_CNTL2, REGAMMA_CNTLA_EXP_REGION_END_BASE, mask_sh),\
+	OPP_SF(DCP0_REGAMMA_CNTLA_END_CNTL2, REGAMMA_CNTLA_EXP_REGION_END_SLOPE, mask_sh),\
+	OPP_SF(DCP0_REGAMMA_CNTLA_REGION_0_1, REGAMMA_CNTLA_EXP_REGION0_LUT_OFFSET, mask_sh),\
+	OPP_SF(DCP0_REGAMMA_CNTLA_REGION_0_1, REGAMMA_CNTLA_EXP_REGION0_NUM_SEGMENTS, mask_sh),\
+	OPP_SF(DCP0_REGAMMA_CNTLA_REGION_0_1, REGAMMA_CNTLA_EXP_REGION1_LUT_OFFSET, mask_sh),\
+	OPP_SF(DCP0_REGAMMA_CNTLA_REGION_0_1, REGAMMA_CNTLA_EXP_REGION1_NUM_SEGMENTS, mask_sh),\
+	OPP_SF(DCFE0_DCFE_MEM_PWR_STATUS, DCP_REGAMMA_MEM_PWR_STATE, mask_sh),\
+	OPP_SF(DCP0_REGAMMA_LUT_WRITE_EN_MASK, REGAMMA_LUT_WRITE_EN_MASK, mask_sh),\
+	OPP_SF(DCP0_REGAMMA_CONTROL, GRPH_REGAMMA_MODE, mask_sh),\
+	OPP_SF(DCP0_OUTPUT_CSC_C11_C12, OUTPUT_CSC_C11, mask_sh),\
+	OPP_SF(DCP0_OUTPUT_CSC_C11_C12, OUTPUT_CSC_C12, mask_sh),\
+	OPP_SF(DCP0_OUTPUT_CSC_CONTROL, OUTPUT_CSC_GRPH_MODE, mask_sh),\
+	OPP_SF(FMT0_FMT_DYNAMIC_EXP_CNTL, FMT_DYNAMIC_EXP_EN, mask_sh),\
+	OPP_SF(FMT0_FMT_DYNAMIC_EXP_CNTL, FMT_DYNAMIC_EXP_MODE, mask_sh),\
+	OPP_SF(FMT0_FMT_BIT_DEPTH_CONTROL, FMT_TRUNCATE_EN, mask_sh),\
+	OPP_SF(FMT0_FMT_BIT_DEPTH_CONTROL, FMT_TRUNCATE_DEPTH, mask_sh),\
+	OPP_SF(FMT0_FMT_BIT_DEPTH_CONTROL, FMT_TRUNCATE_MODE, mask_sh),\
+	OPP_SF(FMT0_FMT_BIT_DEPTH_CONTROL, FMT_SPATIAL_DITHER_EN, mask_sh),\
+	OPP_SF(FMT0_FMT_BIT_DEPTH_CONTROL, FMT_SPATIAL_DITHER_DEPTH, mask_sh),\
+	OPP_SF(FMT0_FMT_BIT_DEPTH_CONTROL, FMT_SPATIAL_DITHER_MODE, mask_sh),\
+	OPP_SF(FMT0_FMT_BIT_DEPTH_CONTROL, FMT_TEMPORAL_DITHER_EN, mask_sh),\
+	OPP_SF(FMT0_FMT_BIT_DEPTH_CONTROL, FMT_TEMPORAL_DITHER_RESET, mask_sh),\
+	OPP_SF(FMT0_FMT_BIT_DEPTH_CONTROL, FMT_TEMPORAL_DITHER_OFFSET, mask_sh),\
+	OPP_SF(FMT0_FMT_BIT_DEPTH_CONTROL, FMT_TEMPORAL_DITHER_DEPTH, mask_sh),\
+	OPP_SF(FMT0_FMT_BIT_DEPTH_CONTROL, FMT_TEMPORAL_LEVEL, mask_sh),\
+	OPP_SF(FMT0_FMT_BIT_DEPTH_CONTROL, FMT_25FRC_SEL, mask_sh),\
+	OPP_SF(FMT0_FMT_BIT_DEPTH_CONTROL, FMT_50FRC_SEL, mask_sh),\
+	OPP_SF(FMT0_FMT_BIT_DEPTH_CONTROL, FMT_75FRC_SEL, mask_sh),\
+	OPP_SF(FMT0_FMT_BIT_DEPTH_CONTROL, FMT_HIGHPASS_RANDOM_ENABLE, mask_sh),\
+	OPP_SF(FMT0_FMT_BIT_DEPTH_CONTROL, FMT_FRAME_RANDOM_ENABLE, mask_sh),\
+	OPP_SF(FMT0_FMT_BIT_DEPTH_CONTROL, FMT_RGB_RANDOM_ENABLE, mask_sh),\
+	OPP_SF(FMT0_FMT_BIT_DEPTH_CONTROL, FMT_TEMPORAL_DITHER_EN, mask_sh),\
+	OPP_SF(FMT0_FMT_CONTROL, FMT_SPATIAL_DITHER_FRAME_COUNTER_MAX, mask_sh),\
+	OPP_SF(FMT0_FMT_CONTROL, FMT_SPATIAL_DITHER_FRAME_COUNTER_BIT_SWAP, mask_sh),\
+	OPP_SF(FMT0_FMT_DITHER_RAND_R_SEED, FMT_RAND_R_SEED, mask_sh),\
+	OPP_SF(FMT0_FMT_DITHER_RAND_G_SEED, FMT_RAND_G_SEED, mask_sh),\
+	OPP_SF(FMT0_FMT_DITHER_RAND_B_SEED, FMT_RAND_B_SEED, mask_sh),\
+	OPP_SF(FMT_MEMORY0_CONTROL, FMT420_MEM0_SOURCE_SEL, mask_sh),\
+	OPP_SF(FMT_MEMORY0_CONTROL, FMT420_MEM0_PWR_FORCE, mask_sh),\
+	OPP_SF(FMT0_FMT_CONTROL, FMT_SRC_SELECT, mask_sh),\
+	OPP_SF(FMT0_FMT_CONTROL, FMT_420_PIXEL_PHASE_LOCKED_CLEAR, mask_sh),\
+	OPP_SF(FMT0_FMT_CONTROL, FMT_420_PIXEL_PHASE_LOCKED, mask_sh),\
+	OPP_SF(FMT0_FMT_CLAMP_CNTL, FMT_CLAMP_DATA_EN, mask_sh),\
+	OPP_SF(FMT0_FMT_CLAMP_CNTL, FMT_CLAMP_COLOR_FORMAT, mask_sh),\
+	OPP_SF(FMT0_FMT_CLAMP_COMPONENT_R, FMT_CLAMP_LOWER_R, mask_sh),\
+	OPP_SF(FMT0_FMT_CLAMP_COMPONENT_R, FMT_CLAMP_UPPER_R, mask_sh),\
+	OPP_SF(FMT0_FMT_CLAMP_COMPONENT_G, FMT_CLAMP_LOWER_G, mask_sh),\
+	OPP_SF(FMT0_FMT_CLAMP_COMPONENT_G, FMT_CLAMP_UPPER_G, mask_sh),\
+	OPP_SF(FMT0_FMT_CLAMP_COMPONENT_B, FMT_CLAMP_LOWER_B, mask_sh),\
+	OPP_SF(FMT0_FMT_CLAMP_COMPONENT_B, FMT_CLAMP_UPPER_B, mask_sh),\
+	OPP_SF(FMT0_FMT_CONTROL, FMT_PIXEL_ENCODING, mask_sh),\
+	OPP_SF(FMT0_FMT_CONTROL, FMT_SUBSAMPLING_MODE, mask_sh),\
+	OPP_SF(FMT0_FMT_CONTROL, FMT_SUBSAMPLING_ORDER, mask_sh),\
+	OPP_SF(FMT0_FMT_CONTROL, FMT_CBCR_BIT_REDUCTION_BYPASS, mask_sh)
+#endif
+
 #define OPP_REG_FIELD_LIST(type) \
 	type DCP_REGAMMA_MEM_PWR_DIS; \
 	type DCP_LUT_MEM_PWR_DIS; \
diff --git a/drivers/gpu/drm/amd/display/dc/dce/dce_stream_encoder.h b/drivers/gpu/drm/amd/display/dc/dce/dce_stream_encoder.h
index 458a370..c784c1b 100644
--- a/drivers/gpu/drm/amd/display/dc/dce/dce_stream_encoder.h
+++ b/drivers/gpu/drm/amd/display/dc/dce/dce_stream_encoder.h
@@ -187,6 +187,91 @@
 #define SE_COMMON_MASK_SH_LIST_DCE_COMMON(mask_sh)\
 	SE_COMMON_MASK_SH_LIST_DCE_COMMON_BASE(mask_sh)
 
+#if defined(CONFIG_DRM_AMD_DC_DCE12_0)
+#define SE_COMMON_MASK_SH_LIST_SOC_BASE(mask_sh)\
+	SE_SF(DIG0_AFMT_VBI_PACKET_CONTROL, AFMT_GENERIC_INDEX, mask_sh),\
+	SE_SF(DIG0_AFMT_GENERIC_HDR, AFMT_GENERIC_HB0, mask_sh),\
+	SE_SF(DIG0_AFMT_GENERIC_HDR, AFMT_GENERIC_HB1, mask_sh),\
+	SE_SF(DIG0_AFMT_GENERIC_HDR, AFMT_GENERIC_HB2, mask_sh),\
+	SE_SF(DIG0_AFMT_GENERIC_HDR, AFMT_GENERIC_HB3, mask_sh),\
+	SE_SF(DIG0_HDMI_GENERIC_PACKET_CONTROL0, HDMI_GENERIC0_CONT, mask_sh),\
+	SE_SF(DIG0_HDMI_GENERIC_PACKET_CONTROL0, HDMI_GENERIC0_SEND, mask_sh),\
+	SE_SF(DIG0_HDMI_GENERIC_PACKET_CONTROL0, HDMI_GENERIC0_LINE, mask_sh),\
+	SE_SF(DIG0_HDMI_GENERIC_PACKET_CONTROL0, HDMI_GENERIC1_CONT, mask_sh),\
+	SE_SF(DIG0_HDMI_GENERIC_PACKET_CONTROL0, HDMI_GENERIC1_SEND, mask_sh),\
+	SE_SF(DIG0_HDMI_GENERIC_PACKET_CONTROL0, HDMI_GENERIC1_LINE, mask_sh),\
+	SE_SF(DP0_DP_PIXEL_FORMAT, DP_PIXEL_ENCODING, mask_sh),\
+	SE_SF(DP0_DP_PIXEL_FORMAT, DP_COMPONENT_DEPTH, mask_sh),\
+	SE_SF(DIG0_HDMI_CONTROL, HDMI_PACKET_GEN_VERSION, mask_sh),\
+	SE_SF(DIG0_HDMI_CONTROL, HDMI_KEEPOUT_MODE, mask_sh),\
+	SE_SF(DIG0_HDMI_CONTROL, HDMI_DEEP_COLOR_ENABLE, mask_sh),\
+	SE_SF(DIG0_HDMI_CONTROL, HDMI_DEEP_COLOR_DEPTH, mask_sh),\
+	SE_SF(DIG0_HDMI_CONTROL, HDMI_DATA_SCRAMBLE_EN, mask_sh),\
+	SE_SF(DIG0_HDMI_VBI_PACKET_CONTROL, HDMI_GC_CONT, mask_sh),\
+	SE_SF(DIG0_HDMI_VBI_PACKET_CONTROL, HDMI_GC_SEND, mask_sh),\
+	SE_SF(DIG0_HDMI_VBI_PACKET_CONTROL, HDMI_NULL_SEND, mask_sh),\
+	SE_SF(DIG0_HDMI_INFOFRAME_CONTROL0, HDMI_AUDIO_INFO_SEND, mask_sh),\
+	SE_SF(DIG0_AFMT_INFOFRAME_CONTROL0, AFMT_AUDIO_INFO_UPDATE, mask_sh),\
+	SE_SF(DIG0_HDMI_INFOFRAME_CONTROL1, HDMI_AUDIO_INFO_LINE, mask_sh),\
+	SE_SF(DIG0_HDMI_GC, HDMI_GC_AVMUTE, mask_sh),\
+	SE_SF(DP0_DP_MSE_RATE_CNTL, DP_MSE_RATE_X, mask_sh),\
+	SE_SF(DP0_DP_MSE_RATE_CNTL, DP_MSE_RATE_Y, mask_sh),\
+	SE_SF(DP0_DP_MSE_RATE_UPDATE, DP_MSE_RATE_UPDATE_PENDING, mask_sh),\
+	SE_SF(DP0_DP_SEC_CNTL, DP_SEC_GSP0_ENABLE, mask_sh),\
+	SE_SF(DP0_DP_SEC_CNTL, DP_SEC_STREAM_ENABLE, mask_sh),\
+	SE_SF(DP0_DP_SEC_CNTL, DP_SEC_GSP1_ENABLE, mask_sh),\
+	SE_SF(DP0_DP_SEC_CNTL, DP_SEC_GSP2_ENABLE, mask_sh),\
+	SE_SF(DP0_DP_SEC_CNTL, DP_SEC_GSP3_ENABLE, mask_sh),\
+	SE_SF(DP0_DP_SEC_CNTL, DP_SEC_MPG_ENABLE, mask_sh),\
+	SE_SF(DP0_DP_VID_STREAM_CNTL, DP_VID_STREAM_DIS_DEFER, mask_sh),\
+	SE_SF(DP0_DP_VID_STREAM_CNTL, DP_VID_STREAM_ENABLE, mask_sh),\
+	SE_SF(DP0_DP_VID_STREAM_CNTL, DP_VID_STREAM_STATUS, mask_sh),\
+	SE_SF(DP0_DP_STEER_FIFO, DP_STEER_FIFO_RESET, mask_sh),\
+	SE_SF(DP0_DP_VID_TIMING, DP_VID_M_N_GEN_EN, mask_sh),\
+	SE_SF(DP0_DP_VID_N, DP_VID_N, mask_sh),\
+	SE_SF(DP0_DP_VID_M, DP_VID_M, mask_sh),\
+	SE_SF(DIG0_DIG_FE_CNTL, DIG_START, mask_sh),\
+	SE_SF(DIG0_AFMT_AUDIO_SRC_CONTROL, AFMT_AUDIO_SRC_SELECT, mask_sh),\
+	SE_SF(DIG0_AFMT_AUDIO_PACKET_CONTROL2, AFMT_AUDIO_CHANNEL_ENABLE, mask_sh),\
+	SE_SF(DIG0_HDMI_AUDIO_PACKET_CONTROL, HDMI_AUDIO_PACKETS_PER_LINE, mask_sh),\
+	SE_SF(DIG0_HDMI_AUDIO_PACKET_CONTROL, HDMI_AUDIO_DELAY_EN, mask_sh),\
+	SE_SF(DIG0_AFMT_AUDIO_PACKET_CONTROL, AFMT_60958_CS_UPDATE, mask_sh),\
+	SE_SF(DIG0_AFMT_AUDIO_PACKET_CONTROL2, AFMT_AUDIO_LAYOUT_OVRD, mask_sh),\
+	SE_SF(DIG0_AFMT_AUDIO_PACKET_CONTROL2, AFMT_60958_OSF_OVRD, mask_sh),\
+	SE_SF(DIG0_HDMI_ACR_PACKET_CONTROL, HDMI_ACR_AUTO_SEND, mask_sh),\
+	SE_SF(DIG0_HDMI_ACR_PACKET_CONTROL, HDMI_ACR_SOURCE, mask_sh),\
+	SE_SF(DIG0_HDMI_ACR_PACKET_CONTROL, HDMI_ACR_AUDIO_PRIORITY, mask_sh),\
+	SE_SF(DIG0_HDMI_ACR_32_0, HDMI_ACR_CTS_32, mask_sh),\
+	SE_SF(DIG0_HDMI_ACR_32_1, HDMI_ACR_N_32, mask_sh),\
+	SE_SF(DIG0_HDMI_ACR_44_0, HDMI_ACR_CTS_44, mask_sh),\
+	SE_SF(DIG0_HDMI_ACR_44_1, HDMI_ACR_N_44, mask_sh),\
+	SE_SF(DIG0_HDMI_ACR_48_0, HDMI_ACR_CTS_48, mask_sh),\
+	SE_SF(DIG0_HDMI_ACR_48_1, HDMI_ACR_N_48, mask_sh),\
+	SE_SF(DIG0_AFMT_60958_0, AFMT_60958_CS_CHANNEL_NUMBER_L, mask_sh),\
+	SE_SF(DIG0_AFMT_60958_0, AFMT_60958_CS_CLOCK_ACCURACY, mask_sh),\
+	SE_SF(DIG0_AFMT_60958_1, AFMT_60958_CS_CHANNEL_NUMBER_R, mask_sh),\
+	SE_SF(DIG0_AFMT_60958_2, AFMT_60958_CS_CHANNEL_NUMBER_2, mask_sh),\
+	SE_SF(DIG0_AFMT_60958_2, AFMT_60958_CS_CHANNEL_NUMBER_3, mask_sh),\
+	SE_SF(DIG0_AFMT_60958_2, AFMT_60958_CS_CHANNEL_NUMBER_4, mask_sh),\
+	SE_SF(DIG0_AFMT_60958_2, AFMT_60958_CS_CHANNEL_NUMBER_5, mask_sh),\
+	SE_SF(DIG0_AFMT_60958_2, AFMT_60958_CS_CHANNEL_NUMBER_6, mask_sh),\
+	SE_SF(DIG0_AFMT_60958_2, AFMT_60958_CS_CHANNEL_NUMBER_7, mask_sh),\
+	SE_SF(DP0_DP_SEC_AUD_N, DP_SEC_AUD_N, mask_sh),\
+	SE_SF(DP0_DP_SEC_TIMESTAMP, DP_SEC_TIMESTAMP_MODE, mask_sh),\
+	SE_SF(DP0_DP_SEC_CNTL, DP_SEC_ASP_ENABLE, mask_sh),\
+	SE_SF(DP0_DP_SEC_CNTL, DP_SEC_ATP_ENABLE, mask_sh),\
+	SE_SF(DP0_DP_SEC_CNTL, DP_SEC_AIP_ENABLE, mask_sh),\
+	SE_SF(DP0_DP_SEC_CNTL, DP_SEC_ACM_ENABLE, mask_sh),\
+	SE_SF(DIG0_AFMT_AUDIO_PACKET_CONTROL, AFMT_AUDIO_SAMPLE_SEND, mask_sh),\
+	SE_SF(DIG0_AFMT_CNTL, AFMT_AUDIO_CLOCK_EN, mask_sh),\
+	SE_SF(DIG0_HDMI_CONTROL, HDMI_CLOCK_CHANNEL_RATE, mask_sh),\
+	SE_SF(DIG0_DIG_FE_CNTL, TMDS_PIXEL_ENCODING, mask_sh),\
+	SE_SF(DIG0_DIG_FE_CNTL, TMDS_COLOR_FORMAT, mask_sh)
+#endif
+
+#define SE_COMMON_MASK_SH_LIST_SOC(mask_sh)\
+	SE_COMMON_MASK_SH_LIST_SOC_BASE(mask_sh)
+
 #define SE_COMMON_MASK_SH_LIST_DCE80_100(mask_sh)\
 	SE_COMMON_MASK_SH_LIST_DCE_COMMON(mask_sh),\
 	SE_SF(TMDS_CNTL, TMDS_PIXEL_ENCODING, mask_sh),\
@@ -209,6 +294,21 @@
 	SE_SF(DIG_FE_CNTL, TMDS_COLOR_FORMAT, mask_sh),\
 	SE_SF(DP_VID_TIMING, DP_VID_M_DOUBLE_VALUE_EN, mask_sh)
 
+#if defined(CONFIG_DRM_AMD_DC_DCE12_0)
+#define SE_COMMON_MASK_SH_LIST_DCE120(mask_sh)\
+	SE_COMMON_MASK_SH_LIST_SOC(mask_sh),\
+	SE_SF(DIG0_AFMT_VBI_PACKET_CONTROL, AFMT_GENERIC0_UPDATE, mask_sh),\
+	SE_SF(DIG0_AFMT_VBI_PACKET_CONTROL, AFMT_GENERIC2_UPDATE, mask_sh),\
+	SE_SF(DP0_DP_PIXEL_FORMAT, DP_DYN_RANGE, mask_sh),\
+	SE_SF(DP0_DP_PIXEL_FORMAT, DP_YCBCR_RANGE, mask_sh),\
+	SE_SF(DIG0_HDMI_INFOFRAME_CONTROL0, HDMI_AVI_INFO_SEND, mask_sh),\
+	SE_SF(DIG0_HDMI_INFOFRAME_CONTROL0, HDMI_AVI_INFO_CONT, mask_sh),\
+	SE_SF(DIG0_HDMI_INFOFRAME_CONTROL1, HDMI_AVI_INFO_LINE, mask_sh),\
+	SE_SF(DP0_DP_SEC_CNTL, DP_SEC_AVI_ENABLE, mask_sh),\
+	SE_SF(DIG0_AFMT_AVI_INFO3, AFMT_AVI_INFO_VERSION, mask_sh),\
+	SE_SF(DP0_DP_VID_TIMING, DP_VID_M_DOUBLE_VALUE_EN, mask_sh)
+#endif
+
 struct dce_stream_encoder_shift {
 	uint8_t AFMT_GENERIC_INDEX;
 	uint8_t AFMT_GENERIC0_UPDATE;
diff --git a/drivers/gpu/drm/amd/display/dc/dce/dce_transform.h b/drivers/gpu/drm/amd/display/dc/dce/dce_transform.h
index b2cf9bf..aa6bc4f 100644
--- a/drivers/gpu/drm/amd/display/dc/dce/dce_transform.h
+++ b/drivers/gpu/drm/amd/display/dc/dce/dce_transform.h
@@ -153,6 +153,74 @@
 	XFM_SF(DCFE_MEM_PWR_STATUS, SCL_COEFF_MEM_PWR_STATE, mask_sh), \
 	XFM_SF(SCL_MODE, SCL_PSCL_EN, mask_sh)
 
+#if defined(CONFIG_DRM_AMD_DC_DCE12_0)
+#define XFM_COMMON_MASK_SH_LIST_SOC_BASE(mask_sh) \
+	XFM_SF(DCP0_OUT_CLAMP_CONTROL_B_CB, OUT_CLAMP_MIN_B_CB, mask_sh), \
+	XFM_SF(DCP0_OUT_CLAMP_CONTROL_B_CB, OUT_CLAMP_MAX_B_CB, mask_sh), \
+	XFM_SF(DCP0_OUT_CLAMP_CONTROL_G_Y, OUT_CLAMP_MIN_G_Y, mask_sh), \
+	XFM_SF(DCP0_OUT_CLAMP_CONTROL_G_Y, OUT_CLAMP_MAX_G_Y, mask_sh), \
+	XFM_SF(DCP0_OUT_CLAMP_CONTROL_R_CR, OUT_CLAMP_MIN_R_CR, mask_sh), \
+	XFM_SF(DCP0_OUT_CLAMP_CONTROL_R_CR, OUT_CLAMP_MAX_R_CR, mask_sh), \
+	XFM_SF(DCP0_OUT_ROUND_CONTROL, OUT_ROUND_TRUNC_MODE, mask_sh), \
+	XFM_SF(DCP0_DCP_SPATIAL_DITHER_CNTL, DCP_SPATIAL_DITHER_EN, mask_sh), \
+	XFM_SF(DCP0_DCP_SPATIAL_DITHER_CNTL, DCP_SPATIAL_DITHER_MODE, mask_sh), \
+	XFM_SF(DCP0_DCP_SPATIAL_DITHER_CNTL, DCP_SPATIAL_DITHER_DEPTH, mask_sh), \
+	XFM_SF(DCP0_DCP_SPATIAL_DITHER_CNTL, DCP_FRAME_RANDOM_ENABLE, mask_sh), \
+	XFM_SF(DCP0_DCP_SPATIAL_DITHER_CNTL, DCP_RGB_RANDOM_ENABLE, mask_sh), \
+	XFM_SF(DCP0_DCP_SPATIAL_DITHER_CNTL, DCP_HIGHPASS_RANDOM_ENABLE, mask_sh), \
+	XFM_SF(DCP0_DENORM_CONTROL, DENORM_MODE, mask_sh), \
+	XFM_SF(LB0_LB_DATA_FORMAT, PIXEL_DEPTH, mask_sh), \
+	XFM_SF(LB0_LB_DATA_FORMAT, PIXEL_EXPAN_MODE, mask_sh), \
+	XFM_SF(DCP0_GAMUT_REMAP_C11_C12, GAMUT_REMAP_C11, mask_sh), \
+	XFM_SF(DCP0_GAMUT_REMAP_C11_C12, GAMUT_REMAP_C12, mask_sh), \
+	XFM_SF(DCP0_GAMUT_REMAP_C13_C14, GAMUT_REMAP_C13, mask_sh), \
+	XFM_SF(DCP0_GAMUT_REMAP_C13_C14, GAMUT_REMAP_C14, mask_sh), \
+	XFM_SF(DCP0_GAMUT_REMAP_C21_C22, GAMUT_REMAP_C21, mask_sh), \
+	XFM_SF(DCP0_GAMUT_REMAP_C21_C22, GAMUT_REMAP_C22, mask_sh), \
+	XFM_SF(DCP0_GAMUT_REMAP_C23_C24, GAMUT_REMAP_C23, mask_sh), \
+	XFM_SF(DCP0_GAMUT_REMAP_C23_C24, GAMUT_REMAP_C24, mask_sh), \
+	XFM_SF(DCP0_GAMUT_REMAP_C31_C32, GAMUT_REMAP_C31, mask_sh), \
+	XFM_SF(DCP0_GAMUT_REMAP_C31_C32, GAMUT_REMAP_C32, mask_sh), \
+	XFM_SF(DCP0_GAMUT_REMAP_C33_C34, GAMUT_REMAP_C33, mask_sh), \
+	XFM_SF(DCP0_GAMUT_REMAP_C33_C34, GAMUT_REMAP_C34, mask_sh), \
+	XFM_SF(DCP0_GAMUT_REMAP_CONTROL, GRPH_GAMUT_REMAP_MODE, mask_sh), \
+	XFM_SF(SCL0_SCL_MODE, SCL_MODE, mask_sh), \
+	XFM_SF(SCL0_SCL_TAP_CONTROL, SCL_H_NUM_OF_TAPS, mask_sh), \
+	XFM_SF(SCL0_SCL_TAP_CONTROL, SCL_V_NUM_OF_TAPS, mask_sh), \
+	XFM_SF(SCL0_SCL_CONTROL, SCL_BOUNDARY_MODE, mask_sh), \
+	XFM_SF(SCL0_SCL_BYPASS_CONTROL, SCL_BYPASS_MODE, mask_sh), \
+	XFM_SF(SCL0_EXT_OVERSCAN_LEFT_RIGHT, EXT_OVERSCAN_LEFT, mask_sh), \
+	XFM_SF(SCL0_EXT_OVERSCAN_LEFT_RIGHT, EXT_OVERSCAN_RIGHT, mask_sh), \
+	XFM_SF(SCL0_EXT_OVERSCAN_TOP_BOTTOM, EXT_OVERSCAN_TOP, mask_sh), \
+	XFM_SF(SCL0_EXT_OVERSCAN_TOP_BOTTOM, EXT_OVERSCAN_BOTTOM, mask_sh), \
+	XFM_SF(SCL0_SCL_COEF_RAM_SELECT, SCL_C_RAM_FILTER_TYPE, mask_sh), \
+	XFM_SF(SCL0_SCL_COEF_RAM_SELECT, SCL_C_RAM_PHASE, mask_sh), \
+	XFM_SF(SCL0_SCL_COEF_RAM_SELECT, SCL_C_RAM_TAP_PAIR_IDX, mask_sh), \
+	XFM_SF(SCL0_SCL_COEF_RAM_TAP_DATA, SCL_C_RAM_EVEN_TAP_COEF_EN, mask_sh), \
+	XFM_SF(SCL0_SCL_COEF_RAM_TAP_DATA, SCL_C_RAM_EVEN_TAP_COEF, mask_sh), \
+	XFM_SF(SCL0_SCL_COEF_RAM_TAP_DATA, SCL_C_RAM_ODD_TAP_COEF_EN, mask_sh), \
+	XFM_SF(SCL0_SCL_COEF_RAM_TAP_DATA, SCL_C_RAM_ODD_TAP_COEF, mask_sh), \
+	XFM_SF(SCL0_VIEWPORT_START, VIEWPORT_X_START, mask_sh), \
+	XFM_SF(SCL0_VIEWPORT_START, VIEWPORT_Y_START, mask_sh), \
+	XFM_SF(SCL0_VIEWPORT_SIZE, VIEWPORT_HEIGHT, mask_sh), \
+	XFM_SF(SCL0_VIEWPORT_SIZE, VIEWPORT_WIDTH, mask_sh), \
+	XFM_SF(SCL0_SCL_HORZ_FILTER_SCALE_RATIO, SCL_H_SCALE_RATIO, mask_sh), \
+	XFM_SF(SCL0_SCL_VERT_FILTER_SCALE_RATIO, SCL_V_SCALE_RATIO, mask_sh), \
+	XFM_SF(SCL0_SCL_HORZ_FILTER_INIT, SCL_H_INIT_INT, mask_sh), \
+	XFM_SF(SCL0_SCL_HORZ_FILTER_INIT, SCL_H_INIT_FRAC, mask_sh), \
+	XFM_SF(SCL0_SCL_VERT_FILTER_INIT, SCL_V_INIT_INT, mask_sh), \
+	XFM_SF(SCL0_SCL_VERT_FILTER_INIT, SCL_V_INIT_FRAC, mask_sh), \
+	XFM_SF(LB0_LB_MEMORY_CTRL, LB_MEMORY_CONFIG, mask_sh), \
+	XFM_SF(LB0_LB_MEMORY_CTRL, LB_MEMORY_SIZE, mask_sh), \
+	XFM_SF(SCL0_SCL_VERT_FILTER_CONTROL, SCL_V_2TAP_HARDCODE_COEF_EN, mask_sh), \
+	XFM_SF(SCL0_SCL_HORZ_FILTER_CONTROL, SCL_H_2TAP_HARDCODE_COEF_EN, mask_sh), \
+	XFM_SF(SCL0_SCL_UPDATE, SCL_COEF_UPDATE_COMPLETE, mask_sh), \
+	XFM_SF(LB0_LB_DATA_FORMAT, ALPHA_EN, mask_sh), \
+	XFM_SF(DCFE0_DCFE_MEM_PWR_CTRL, SCL_COEFF_MEM_PWR_DIS, mask_sh), \
+	XFM_SF(DCFE0_DCFE_MEM_PWR_STATUS, SCL_COEFF_MEM_PWR_STATE, mask_sh), \
+	XFM_SF(SCL0_SCL_MODE, SCL_PSCL_EN, mask_sh)
+#endif
+
 #define XFM_REG_FIELD_LIST(type) \
 	type OUT_CLAMP_MIN_B_CB; \
 	type OUT_CLAMP_MAX_B_CB; \
diff --git a/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c b/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c
index 041830e..66d5f34 100644
--- a/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c
+++ b/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c
@@ -1396,9 +1396,12 @@ static uint32_t get_max_pixel_clock_for_all_paths(
 	return max_pix_clk;
 }
 
-/*
- * Find clock state based on clock requested. if clock value is 0, simply
+/* Find clock state based on clock requested. if clock value is 0, simply
  * set clock state as requested without finding clock state by clock value
+ *TODO: when dce120_hw_sequencer.c is created, override apply_min_clock.
+ *
+ * TODOFPGA  remove TODO after implement dal_display_clock_get_cur_clocks_value
+ * etc support for dcn1.0
  */
 static void apply_min_clocks(
 	struct core_dc *dc,
@@ -1425,6 +1428,30 @@ static void apply_min_clocks(
 		}
 
 		/* TODOFPGA */
+#if defined(CONFIG_DRM_AMD_DC_DCE12_0)
+		/* TODO: This is incorrect. Figure out how to fix. */
+		pipe_ctx->dis_clk->funcs->apply_clock_voltage_request(
+				pipe_ctx->dis_clk,
+				DM_PP_CLOCK_TYPE_DISPLAY_CLK,
+				pipe_ctx->dis_clk->cur_clocks_value.dispclk_in_khz,
+				pre_mode_set,
+				false);
+
+		pipe_ctx->dis_clk->funcs->apply_clock_voltage_request(
+				pipe_ctx->dis_clk,
+				DM_PP_CLOCK_TYPE_PIXELCLK,
+				pipe_ctx->dis_clk->cur_clocks_value.max_pixelclk_in_khz,
+				pre_mode_set,
+				false);
+
+		pipe_ctx->dis_clk->funcs->apply_clock_voltage_request(
+				pipe_ctx->dis_clk,
+				DM_PP_CLOCK_TYPE_DISPLAYPHYCLK,
+				pipe_ctx->dis_clk->cur_clocks_value.max_non_dp_phyclk_in_khz,
+				pre_mode_set,
+				false);
+		return;
+#endif
 	}
 
 	/* get the required state based on state dependent clocks:
@@ -1441,6 +1468,28 @@ static void apply_min_clocks(
 		pipe_ctx->dis_clk->funcs->set_min_clocks_state(
 			pipe_ctx->dis_clk, *clocks_state);
 	} else {
+#if defined(CONFIG_DRM_AMD_DC_DCE12_0)
+		pipe_ctx->dis_clk->funcs->apply_clock_voltage_request(
+				pipe_ctx->dis_clk,
+				DM_PP_CLOCK_TYPE_DISPLAY_CLK,
+				req_clocks.display_clk_khz,
+				pre_mode_set,
+				false);
+
+		pipe_ctx->dis_clk->funcs->apply_clock_voltage_request(
+				pipe_ctx->dis_clk,
+				DM_PP_CLOCK_TYPE_PIXELCLK,
+				req_clocks.pixel_clk_khz,
+				pre_mode_set,
+				false);
+
+		pipe_ctx->dis_clk->funcs->apply_clock_voltage_request(
+				pipe_ctx->dis_clk,
+				DM_PP_CLOCK_TYPE_DISPLAYPHYCLK,
+				req_clocks.pixel_clk_khz,
+				pre_mode_set,
+				false);
+#endif
 	}
 }
 
diff --git a/drivers/gpu/drm/amd/display/dc/dce110/dce110_mem_input.c b/drivers/gpu/drm/amd/display/dc/dce110/dce110_mem_input.c
index 1643fb5..3ffb845f 100644
--- a/drivers/gpu/drm/amd/display/dc/dce110/dce110_mem_input.c
+++ b/drivers/gpu/drm/amd/display/dc/dce110/dce110_mem_input.c
@@ -409,6 +409,9 @@ static struct mem_input_funcs dce110_mem_input_funcs = {
 			dce_mem_input_program_surface_config,
 	.mem_input_is_flip_pending =
 			dce110_mem_input_is_flip_pending,
+#if defined(CONFIG_DRM_AMD_DC_DCE12_0)
+	.mem_input_update_dchub = NULL
+#endif
 };
 /*****************************************/
 /* Constructor, Destructor               */
diff --git a/drivers/gpu/drm/amd/display/dc/dce110/dce110_timing_generator.h b/drivers/gpu/drm/amd/display/dc/dce110/dce110_timing_generator.h
index dcb49fe..55f0a94 100644
--- a/drivers/gpu/drm/amd/display/dc/dce110/dce110_timing_generator.h
+++ b/drivers/gpu/drm/amd/display/dc/dce110/dce110_timing_generator.h
@@ -108,6 +108,9 @@ struct dce110_timing_generator {
 	uint32_t min_h_front_porch;
 	uint32_t min_h_back_porch;
 
+#if defined(CONFIG_DRM_AMD_DC_DCE12_0)
+	/* DCE 12 */
+#endif
 	uint32_t min_h_sync_width;
 	uint32_t min_v_sync_width;
 	uint32_t min_v_blank;
diff --git a/drivers/gpu/drm/amd/display/dc/dce120/Makefile b/drivers/gpu/drm/amd/display/dc/dce120/Makefile
new file mode 100644
index 0000000..3c6b3fa
--- /dev/null
+++ b/drivers/gpu/drm/amd/display/dc/dce120/Makefile
@@ -0,0 +1,12 @@
+#
+# Makefile for the 'controller' sub-component of DAL.
+# It provides the control and status of HW CRTC block.
+
+
+DCE120 = dce120_resource.o dce120_timing_generator.o \
+dce120_ipp.o dce120_ipp_cursor.o dce120_ipp_gamma.o \
+dce120_mem_input.o dce120_hw_sequencer.o
+
+AMD_DAL_DCE120 = $(addprefix $(AMDDALPATH)/dc/dce120/,$(DCE120))
+
+AMD_DISPLAY_FILES += $(AMD_DAL_DCE120)
\ No newline at end of file
diff --git a/drivers/gpu/drm/amd/display/dc/dce80/dce80_mem_input.c b/drivers/gpu/drm/amd/display/dc/dce80/dce80_mem_input.c
index 704a7ce..2f22931 100644
--- a/drivers/gpu/drm/amd/display/dc/dce80/dce80_mem_input.c
+++ b/drivers/gpu/drm/amd/display/dc/dce80/dce80_mem_input.c
@@ -54,6 +54,9 @@ static struct mem_input_funcs dce80_mem_input_funcs = {
 			dce_mem_input_program_surface_config,
 	.mem_input_is_flip_pending =
 			dce110_mem_input_is_flip_pending,
+#if defined(CONFIG_DRM_AMD_DC_DCE12_0)
+	.mem_input_update_dchub = NULL
+#endif
 };
 
 /*****************************************/
diff --git a/drivers/gpu/drm/amd/display/dc/dm_services.h b/drivers/gpu/drm/amd/display/dc/dm_services.h
index 73c0f1f..bdc7cb0 100644
--- a/drivers/gpu/drm/amd/display/dc/dm_services.h
+++ b/drivers/gpu/drm/amd/display/dc/dm_services.h
@@ -192,6 +192,89 @@ unsigned int generic_reg_wait(const struct dc_context *ctx,
 	unsigned int delay_between_poll_us, unsigned int time_out_num_tries,
 	const char *func_name);
 
+#if defined(CONFIG_DRM_AMD_DC_DCE12_0)
+
+/* These macros need to be used with soc15 registers in order to retrieve
+ * the actual offset.
+ */
+#define REG_OFFSET(reg) (reg + DCE_BASE.instance[0].segment[reg##_BASE_IDX])
+#define REG_BIF_OFFSET(reg) (reg + NBIF_BASE.instance[0].segment[reg##_BASE_IDX])
+
+#define dm_write_reg_soc15(ctx, reg, inst_offset, value)	\
+		dm_write_reg_func(ctx, reg + DCE_BASE.instance[0].segment[reg##_BASE_IDX] + inst_offset, value, __func__)
+
+#define dm_read_reg_soc15(ctx, reg, inst_offset)	\
+		dm_read_reg_func(ctx, reg + DCE_BASE.instance[0].segment[reg##_BASE_IDX] + inst_offset, __func__)
+
+#define generic_reg_update_soc15(ctx, inst_offset, reg_name, n, ...)\
+		generic_reg_update_ex(ctx, DCE_BASE.instance[0].segment[mm##reg_name##_BASE_IDX] +  mm##reg_name + inst_offset, \
+		dm_read_reg_func(ctx, mm##reg_name + DCE_BASE.instance[0].segment[mm##reg_name##_BASE_IDX] + inst_offset, __func__), \
+		n, __VA_ARGS__)
+
+#define generic_reg_set_soc15(ctx, inst_offset, reg_name, n, ...)\
+		generic_reg_update_ex(ctx, DCE_BASE.instance[0].segment[mm##reg_name##_BASE_IDX] + mm##reg_name + inst_offset, 0, \
+		n, __VA_ARGS__)
+
+#define get_reg_field_value_soc15(reg_value, block, reg_num, reg_name, reg_field)\
+	get_reg_field_value_ex(\
+		(reg_value),\
+		block ## reg_num ## _ ## reg_name ## __ ## reg_field ## _MASK,\
+		block ## reg_num ## _ ## reg_name ## __ ## reg_field ## __SHIFT)
+
+#define set_reg_field_value_soc15(reg_value, value, block, reg_num, reg_name, reg_field)\
+	(reg_value) = set_reg_field_value_ex(\
+		(reg_value),\
+		(value),\
+		block ## reg_num ## _ ## reg_name ## __ ## reg_field ## _MASK,\
+		block ## reg_num ## _ ## reg_name ## __ ## reg_field ## __SHIFT)
+
+/* TODO get rid of this pos*/
+static inline bool wait_reg_func(
+	const struct dc_context *ctx,
+	uint32_t addr,
+	uint32_t mask,
+	uint8_t shift,
+	uint32_t condition_value,
+	unsigned int interval_us,
+	unsigned int timeout_us)
+{
+	uint32_t field_value;
+	uint32_t reg_val;
+	unsigned int count = 0;
+
+	if (IS_FPGA_MAXIMUS_DC(ctx->dce_environment))
+		timeout_us *= 655;  /* 6553 give about 30 second before time out */
+
+	do {
+		/* try once without sleeping */
+		if (count > 0) {
+			if (interval_us >= 1000)
+				msleep(interval_us/1000);
+			else
+				udelay(interval_us);
+		}
+		reg_val = dm_read_reg(ctx, addr);
+		field_value = get_reg_field_value_ex(reg_val, mask, shift);
+		count += interval_us;
+
+	} while (field_value != condition_value && count <= timeout_us);
+
+	ASSERT(count <= timeout_us);
+
+	return count <= timeout_us;
+}
+
+#define wait_reg(ctx, inst_offset, reg_name, reg_field, condition_value)\
+	wait_reg_func(\
+		ctx,\
+		mm##reg_name + inst_offset + DCE_BASE.instance[0].segment[mm##reg_name##_BASE_IDX],\
+		reg_name ## __ ## reg_field ## _MASK,\
+		reg_name ## __ ## reg_field ## __SHIFT,\
+		condition_value,\
+		20000,\
+		200000)
+
+#endif
 /**************************************
  * Power Play (PP) interfaces
  **************************************/
@@ -254,6 +337,12 @@ bool dm_pp_notify_wm_clock_changes(
 	const struct dc_context *ctx,
 	struct dm_pp_wm_sets_with_clock_ranges *wm_with_clock_ranges);
 
+#if defined(CONFIG_DRM_AMD_DC_DCE12_0)
+bool dm_pp_notify_wm_clock_changes_soc15(
+	const struct dc_context *ctx,
+	struct dm_pp_wm_sets_with_clock_ranges_soc15 *wm_with_clock_ranges);
+#endif
+
 /* DAL calls this function to notify PP about completion of Mode Set.
  * For PP it means that current DCE clocks are those which were returned
  * by dc_service_pp_pre_dce_clock_change(), in the 'output' parameter.
diff --git a/drivers/gpu/drm/amd/display/dc/dm_services_types.h b/drivers/gpu/drm/amd/display/dc/dm_services_types.h
index 460971d..8d26615 100644
--- a/drivers/gpu/drm/amd/display/dc/dm_services_types.h
+++ b/drivers/gpu/drm/amd/display/dc/dm_services_types.h
@@ -141,6 +141,33 @@ struct dm_pp_wm_sets_with_clock_ranges {
 	struct dm_pp_clock_range_for_wm_set wm_clk_ranges[MAX_WM_SETS];
 };
 
+#if defined(CONFIG_DRM_AMD_DC_DCE12_0)
+struct dm_pp_clock_range_for_dmif_wm_set_soc15 {
+	enum dm_pp_wm_set_id wm_set_id;
+	uint32_t wm_min_dcfclk_clk_in_khz;
+	uint32_t wm_max_dcfclk_clk_in_khz;
+	uint32_t wm_min_memg_clk_in_khz;
+	uint32_t wm_max_mem_clk_in_khz;
+};
+
+struct dm_pp_clock_range_for_mcif_wm_set_soc15 {
+	enum dm_pp_wm_set_id wm_set_id;
+	uint32_t wm_min_socclk_clk_in_khz;
+	uint32_t wm_max_socclk_clk_in_khz;
+	uint32_t wm_min_memg_clk_in_khz;
+	uint32_t wm_max_mem_clk_in_khz;
+};
+
+struct dm_pp_wm_sets_with_clock_ranges_soc15 {
+	uint32_t num_wm_dmif_sets;
+	uint32_t num_wm_mcif_sets;
+	struct dm_pp_clock_range_for_dmif_wm_set_soc15
+		wm_dmif_clocks_ranges[MAX_WM_SETS];
+	struct dm_pp_clock_range_for_mcif_wm_set_soc15
+		wm_mcif_clocks_ranges[MAX_WM_SETS];
+};
+#endif
+
 #define MAX_DISPLAY_CONFIGS 6
 
 struct dm_pp_display_configuration {
diff --git a/drivers/gpu/drm/amd/display/dc/gpio/Makefile b/drivers/gpu/drm/amd/display/dc/gpio/Makefile
index a15c257..8cf12a8 100644
--- a/drivers/gpu/drm/amd/display/dc/gpio/Makefile
+++ b/drivers/gpu/drm/amd/display/dc/gpio/Makefile
@@ -29,6 +29,17 @@ AMD_DAL_GPIO_DCE110 = $(addprefix $(AMDDALPATH)/dc/gpio/dce110/,$(GPIO_DCE110))
 AMD_DISPLAY_FILES += $(AMD_DAL_GPIO_DCE110)
 
 ###############################################################################
+# DCE 12x
+###############################################################################
+ifdef CONFIG_DRM_AMD_DC_DCE12_0
+GPIO_DCE120 = hw_translate_dce120.o hw_factory_dce120.o
+
+AMD_DAL_GPIO_DCE120 = $(addprefix $(AMDDALPATH)/dc/gpio/dce120/,$(GPIO_DCE120))
+
+AMD_DISPLAY_FILES += $(AMD_DAL_GPIO_DCE120)
+endif
+
+###############################################################################
 # Diagnostics on FPGA
 ###############################################################################
 GPIO_DIAG_FPGA = hw_translate_diag.o hw_factory_diag.o
diff --git a/drivers/gpu/drm/amd/display/dc/gpio/hw_factory.c b/drivers/gpu/drm/amd/display/dc/gpio/hw_factory.c
index f1a6fa7..66ea3b3 100644
--- a/drivers/gpu/drm/amd/display/dc/gpio/hw_factory.c
+++ b/drivers/gpu/drm/amd/display/dc/gpio/hw_factory.c
@@ -44,6 +44,10 @@
 
 #include "dce110/hw_factory_dce110.h"
 
+#if defined(CONFIG_DRM_AMD_DC_DCE12_0)
+#include "dce120/hw_factory_dce120.h"
+#endif
+
 #include "diagnostics/hw_factory_diag.h"
 
 /*
@@ -72,6 +76,11 @@ bool dal_hw_factory_init(
 	case DCE_VERSION_11_2:
 		dal_hw_factory_dce110_init(factory);
 		return true;
+#if defined(CONFIG_DRM_AMD_DC_DCE12_0)
+	case DCE_VERSION_12_0:
+		dal_hw_factory_dce120_init(factory);
+		return true;
+#endif
 	default:
 		ASSERT_CRITICAL(false);
 		return false;
diff --git a/drivers/gpu/drm/amd/display/dc/gpio/hw_translate.c b/drivers/gpu/drm/amd/display/dc/gpio/hw_translate.c
index 23e097f..10e8644 100644
--- a/drivers/gpu/drm/amd/display/dc/gpio/hw_translate.c
+++ b/drivers/gpu/drm/amd/display/dc/gpio/hw_translate.c
@@ -42,7 +42,9 @@
 
 #include "dce80/hw_translate_dce80.h"
 #include "dce110/hw_translate_dce110.h"
-
+#if defined(CONFIG_DRM_AMD_DC_DCE12_0)
+#include "dce120/hw_translate_dce120.h"
+#endif
 #include "diagnostics/hw_translate_diag.h"
 
 /*
@@ -68,6 +70,11 @@ bool dal_hw_translate_init(
 	case DCE_VERSION_11_2:
 		dal_hw_translate_dce110_init(translate);
 		return true;
+#if defined(CONFIG_DRM_AMD_DC_DCE12_0)
+	case DCE_VERSION_12_0:
+		dal_hw_translate_dce120_init(translate);
+		return true;
+#endif
 	default:
 		BREAK_TO_DEBUGGER();
 		return false;
diff --git a/drivers/gpu/drm/amd/display/dc/i2caux/Makefile b/drivers/gpu/drm/amd/display/dc/i2caux/Makefile
index 83dfc43..99aa5d8 100644
--- a/drivers/gpu/drm/amd/display/dc/i2caux/Makefile
+++ b/drivers/gpu/drm/amd/display/dc/i2caux/Makefile
@@ -48,6 +48,17 @@ AMD_DAL_I2CAUX_DCE112 = $(addprefix $(AMDDALPATH)/dc/i2caux/dce112/,$(I2CAUX_DCE
 AMD_DISPLAY_FILES += $(AMD_DAL_I2CAUX_DCE112)
 
 ###############################################################################
+# DCE 120 family
+###############################################################################
+ifdef CONFIG_DRM_AMD_DC_DCE12_0
+I2CAUX_DCE120 = i2caux_dce120.o
+
+AMD_DAL_I2CAUX_DCE120 = $(addprefix $(AMDDALPATH)/dc/i2caux/dce120/,$(I2CAUX_DCE120))
+
+AMD_DISPLAY_FILES += $(AMD_DAL_I2CAUX_DCE120)
+endif
+
+###############################################################################
 # Diagnostics on FPGA
 ###############################################################################
 I2CAUX_DIAG = i2caux_diag.o
diff --git a/drivers/gpu/drm/amd/display/dc/i2caux/i2caux.c b/drivers/gpu/drm/amd/display/dc/i2caux/i2caux.c
index 5391655..ea3bd75 100644
--- a/drivers/gpu/drm/amd/display/dc/i2caux/i2caux.c
+++ b/drivers/gpu/drm/amd/display/dc/i2caux/i2caux.c
@@ -57,6 +57,10 @@
 
 #include "dce112/i2caux_dce112.h"
 
+#if defined(CONFIG_DRM_AMD_DC_DCE12_0)
+#include "dce120/i2caux_dce120.h"
+#endif
+
 #include "diagnostics/i2caux_diag.h"
 
 /*
@@ -80,6 +84,10 @@ struct i2caux *dal_i2caux_create(
 		return dal_i2caux_dce110_create(ctx);
 	case DCE_VERSION_10_0:
 		return dal_i2caux_dce100_create(ctx);
+	#if defined(CONFIG_DRM_AMD_DC_DCE12_0)
+	case DCE_VERSION_12_0:
+		return dal_i2caux_dce120_create(ctx);
+	#endif
 	default:
 		BREAK_TO_DEBUGGER();
 		return NULL;
diff --git a/drivers/gpu/drm/amd/display/dc/inc/bandwidth_calcs.h b/drivers/gpu/drm/amd/display/dc/inc/bandwidth_calcs.h
index 16f06fa..a7eaecd 100644
--- a/drivers/gpu/drm/amd/display/dc/inc/bandwidth_calcs.h
+++ b/drivers/gpu/drm/amd/display/dc/inc/bandwidth_calcs.h
@@ -40,6 +40,9 @@ enum bw_calcs_version {
 	BW_CALCS_VERSION_POLARIS10,
 	BW_CALCS_VERSION_POLARIS11,
 	BW_CALCS_VERSION_STONEY,
+#if defined(CONFIG_DRM_AMD_DC_DCE12_0)
+	BW_CALCS_VERSION_VEGA10
+#endif
 };
 
 /*******************************************************************************
diff --git a/drivers/gpu/drm/amd/display/dc/inc/hw/display_clock.h b/drivers/gpu/drm/amd/display/dc/inc/hw/display_clock.h
index e163f58..bf77aa6 100644
--- a/drivers/gpu/drm/amd/display/dc/inc/hw/display_clock.h
+++ b/drivers/gpu/drm/amd/display/dc/inc/hw/display_clock.h
@@ -28,6 +28,18 @@
 
 #include "dm_services_types.h"
 
+#if defined(CONFIG_DRM_AMD_DC_DCE12_0)
+struct clocks_value {
+	int dispclk_in_khz;
+	int max_pixelclk_in_khz;
+	int max_non_dp_phyclk_in_khz;
+	int max_dp_phyclk_in_khz;
+	bool dispclk_notify_pplib_done;
+	bool pixelclk_notify_pplib_done;
+	bool phyclk_notigy_pplib_done;
+};
+#endif
+
 /* Structure containing all state-dependent clocks
  * (dependent on "enum clocks_state") */
 struct state_dependent_clocks {
@@ -41,6 +53,9 @@ struct display_clock {
 
 	enum dm_pp_clocks_state max_clks_state;
 	enum dm_pp_clocks_state cur_min_clks_state;
+#if defined(CONFIG_DRM_AMD_DC_DCE12_0)
+	struct clocks_value cur_clocks_value;
+#endif
 };
 
 struct display_clock_funcs {
@@ -56,6 +71,14 @@ struct display_clock_funcs {
 
 	int (*get_dp_ref_clk_frequency)(struct display_clock *disp_clk);
 
+#if defined(CONFIG_DRM_AMD_DC_DCE12_0)
+	bool (*apply_clock_voltage_request)(
+		struct display_clock *disp_clk,
+		enum dm_pp_clock_type clocks_type,
+		int clocks_in_khz,
+		bool pre_mode_set,
+		bool update_dp_phyclk);
+#endif
 };
 
 #endif /* __DISPLAY_CLOCK_H__ */
diff --git a/drivers/gpu/drm/amd/display/dc/inc/hw/mem_input.h b/drivers/gpu/drm/amd/display/dc/inc/hw/mem_input.h
index ed980ae..6c06006 100644
--- a/drivers/gpu/drm/amd/display/dc/inc/hw/mem_input.h
+++ b/drivers/gpu/drm/amd/display/dc/inc/hw/mem_input.h
@@ -100,6 +100,10 @@ struct mem_input_funcs {
 
 	bool (*mem_input_is_flip_pending)(struct mem_input *mem_input);
 
+#if defined(CONFIG_DRM_AMD_DC_DCE12_0)
+	void (*mem_input_update_dchub)(struct mem_input *mem_input,
+			struct dchub_init_data *dh_data);
+#endif
 };
 
 #endif
diff --git a/drivers/gpu/drm/amd/display/dc/irq/Makefile b/drivers/gpu/drm/amd/display/dc/irq/Makefile
index 0271033..140e498 100644
--- a/drivers/gpu/drm/amd/display/dc/irq/Makefile
+++ b/drivers/gpu/drm/amd/display/dc/irq/Makefile
@@ -26,3 +26,15 @@ IRQ_DCE11 = irq_service_dce110.o
 AMD_DAL_IRQ_DCE11 = $(addprefix $(AMDDALPATH)/dc/irq/dce110/,$(IRQ_DCE11))
 
 AMD_DISPLAY_FILES += $(AMD_DAL_IRQ_DCE11)
+
+###############################################################################
+# DCE 12x
+###############################################################################
+ifdef CONFIG_DRM_AMD_DC_DCE12_0
+IRQ_DCE12 = irq_service_dce120.o
+
+AMD_DAL_IRQ_DCE12 = $(addprefix $(AMDDALPATH)/dc/irq/dce120/,$(IRQ_DCE12))
+
+AMD_DISPLAY_FILES += $(AMD_DAL_IRQ_DCE12)
+endif
+
diff --git a/drivers/gpu/drm/amd/display/dc/irq/irq_service.c b/drivers/gpu/drm/amd/display/dc/irq/irq_service.c
index fbaa2fc..a1b6d83 100644
--- a/drivers/gpu/drm/amd/display/dc/irq/irq_service.c
+++ b/drivers/gpu/drm/amd/display/dc/irq/irq_service.c
@@ -33,6 +33,9 @@
 
 #include "dce80/irq_service_dce80.h"
 
+#if defined(CONFIG_DRM_AMD_DC_DCE12_0)
+#include "dce120/irq_service_dce120.h"
+#endif
 
 #include "reg_helper.h"
 #include "irq_service.h"
diff --git a/drivers/gpu/drm/amd/display/include/dal_asic_id.h b/drivers/gpu/drm/amd/display/include/dal_asic_id.h
index 46f1e88..15c0b8c 100644
--- a/drivers/gpu/drm/amd/display/include/dal_asic_id.h
+++ b/drivers/gpu/drm/amd/display/include/dal_asic_id.h
@@ -123,6 +123,10 @@
 #define FAMILY_VI 130 /* Volcanic Islands: Iceland (V), Tonga (M) */
 #define FAMILY_CZ 135 /* Carrizo */
 
+#if defined(CONFIG_DRM_AMD_DC_DCE12_0)
+#define FAMILY_AI 141
+#endif
+
 #define	FAMILY_UNKNOWN 0xFF
 
 #endif /* __DAL_ASIC_ID_H__ */
diff --git a/drivers/gpu/drm/amd/display/include/dal_types.h b/drivers/gpu/drm/amd/display/include/dal_types.h
index ada5b19..e24c1ef 100644
--- a/drivers/gpu/drm/amd/display/include/dal_types.h
+++ b/drivers/gpu/drm/amd/display/include/dal_types.h
@@ -38,6 +38,9 @@ enum dce_version {
 	DCE_VERSION_10_0,
 	DCE_VERSION_11_0,
 	DCE_VERSION_11_2,
+#if defined(CONFIG_DRM_AMD_DC_DCE12_0)
+	DCE_VERSION_12_0,
+#endif
 	DCE_VERSION_MAX,
 };
 
-- 
2.5.5

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 101+ messages in thread

* [PATCH 069/100] drm/amd/display: need to handle DCE_Info table ver4.2
       [not found] ` <1490041835-11255-1-git-send-email-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (52 preceding siblings ...)
  2017-03-20 20:30   ` [PATCH 068/100] drm/amd/display: Enable DCE12 support Alex Deucher
@ 2017-03-20 20:30   ` Alex Deucher
  2017-03-20 20:30   ` [PATCH 070/100] drm/amd/display: Less log spam Alex Deucher
                     ` (31 subsequent siblings)
  85 siblings, 0 replies; 101+ messages in thread
From: Alex Deucher @ 2017-03-20 20:30 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Alex Deucher, Charlene Liu

From: Charlene Liu <charlene.liu@amd.com>

Signed-off-by: Charlene Liu <charlene.liu@amd.com>
Acked-by: Harry Wentland <Harry.Wentland@amd.com>
Reviewed-by: Krunoslav Kovac <Krunoslav.Kovac@amd.com>
Acked-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c | 79 +++++++++++++++++++++-
 1 file changed, 78 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c b/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c
index f6e77da..123942f 100644
--- a/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c
+++ b/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c
@@ -1137,6 +1137,81 @@ static enum bp_result get_ss_info_v4_1(
 	return result;
 }
 
+static enum bp_result get_ss_info_v4_2(
+	struct bios_parser *bp,
+	uint32_t id,
+	uint32_t index,
+	struct spread_spectrum_info *ss_info)
+{
+	enum bp_result result = BP_RESULT_OK;
+	struct atom_display_controller_info_v4_2 *disp_cntl_tbl = NULL;
+	struct atom_smu_info_v3_1 *smu_tbl = NULL;
+
+	if (!ss_info)
+		return BP_RESULT_BADINPUT;
+
+	if (!DATA_TABLES(dce_info))
+		return BP_RESULT_BADBIOSTABLE;
+
+	if (!DATA_TABLES(smu_info))
+		return BP_RESULT_BADBIOSTABLE;
+
+	disp_cntl_tbl =  GET_IMAGE(struct atom_display_controller_info_v4_2,
+							DATA_TABLES(dce_info));
+	if (!disp_cntl_tbl)
+		return BP_RESULT_BADBIOSTABLE;
+
+	smu_tbl =  GET_IMAGE(struct atom_smu_info_v3_1, DATA_TABLES(smu_info));
+	if (!smu_tbl)
+		return BP_RESULT_BADBIOSTABLE;
+
+
+	ss_info->type.STEP_AND_DELAY_INFO = false;
+	ss_info->spread_percentage_divider = 1000;
+	/* BIOS no longer uses target clock.  Always enable for now */
+	ss_info->target_clock_range = 0xffffffff;
+
+	switch (id) {
+	case AS_SIGNAL_TYPE_DVI:
+		ss_info->spread_spectrum_percentage =
+				disp_cntl_tbl->dvi_ss_percentage;
+		ss_info->spread_spectrum_range =
+				disp_cntl_tbl->dvi_ss_rate_10hz * 10;
+		if (disp_cntl_tbl->dvi_ss_mode & ATOM_SS_CENTRE_SPREAD_MODE)
+			ss_info->type.CENTER_MODE = true;
+		break;
+	case AS_SIGNAL_TYPE_HDMI:
+		ss_info->spread_spectrum_percentage =
+				disp_cntl_tbl->hdmi_ss_percentage;
+		ss_info->spread_spectrum_range =
+				disp_cntl_tbl->hdmi_ss_rate_10hz * 10;
+		if (disp_cntl_tbl->hdmi_ss_mode & ATOM_SS_CENTRE_SPREAD_MODE)
+			ss_info->type.CENTER_MODE = true;
+		break;
+	/* TODO LVDS not support anymore? */
+	case AS_SIGNAL_TYPE_DISPLAY_PORT:
+		ss_info->spread_spectrum_percentage =
+				disp_cntl_tbl->dp_ss_percentage;
+		ss_info->spread_spectrum_range =
+				disp_cntl_tbl->dp_ss_rate_10hz * 10;
+		if (disp_cntl_tbl->dp_ss_mode & ATOM_SS_CENTRE_SPREAD_MODE)
+			ss_info->type.CENTER_MODE = true;
+		break;
+	case AS_SIGNAL_TYPE_GPU_PLL:
+		ss_info->spread_spectrum_percentage =
+				smu_tbl->gpuclk_ss_percentage;
+		ss_info->spread_spectrum_range =
+				smu_tbl->gpuclk_ss_rate_10hz * 10;
+		if (smu_tbl->gpuclk_ss_mode & ATOM_SS_CENTRE_SPREAD_MODE)
+			ss_info->type.CENTER_MODE = true;
+		break;
+	default:
+		result = BP_RESULT_UNSUPPORTED;
+	}
+
+	return result;
+}
+
 /**
  * bios_parser_get_spread_spectrum_info
  * Get spread spectrum information from the ASIC_InternalSS_Info(ver 2.1 or
@@ -1177,6 +1252,8 @@ static enum bp_result bios_parser_get_spread_spectrum_info(
 		switch (tbl_revision.minor) {
 		case 1:
 			return get_ss_info_v4_1(bp, signal, index, ss_info);
+		case 2:
+			return get_ss_info_v4_2(bp, signal, index, ss_info);
 		default:
 			break;
 		}
@@ -1579,7 +1656,7 @@ static enum bp_result get_firmware_info_v3_1(
 	/* Hardcode frequency if BIOS gives no DCE Ref Clk */
 	if (info->pll_info.crystal_frequency == 0)
 		info->pll_info.crystal_frequency = 27000;
-
+	/*dp_phy_ref_clk is not correct for atom_display_controller_info_v4_2, but we don't use it*/
 	info->dp_phy_ref_clk     = dce_info->dpphy_refclk_10khz * 10;
 	info->i2c_engine_ref_clk = dce_info->i2c_engine_refclk_10khz * 10;
 
-- 
2.5.5


* [PATCH 070/100] drm/amd/display: Less log spam
       [not found] ` <1490041835-11255-1-git-send-email-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (53 preceding siblings ...)
  2017-03-20 20:30   ` [PATCH 069/100] drm/amd/display: need to handle DCE_Info table ver4.2 Alex Deucher
@ 2017-03-20 20:30   ` Alex Deucher
  2017-03-20 20:30   ` [PATCH 071/100] drm/amdgpu: soc15 enable (v2) Alex Deucher
                     ` (30 subsequent siblings)
  85 siblings, 0 replies; 101+ messages in thread
From: Alex Deucher @ 2017-03-20 20:30 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Alex Deucher, Jordan Lazare

From: Jordan Lazare <Jordan.Lazare@amd.com>

Signed-off-by: Jordan Lazare <Jordan.Lazare@amd.com>
Acked-by: Harry Wentland <Harry.Wentland@amd.com>
Reviewed-by: Charlene Liu <Charlene.Liu@amd.com>
Acked-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/display/dc/bios/command_table2.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/bios/command_table2.c b/drivers/gpu/drm/amd/display/dc/bios/command_table2.c
index 36d1582..e33e6bf 100644
--- a/drivers/gpu/drm/amd/display/dc/bios/command_table2.c
+++ b/drivers/gpu/drm/amd/display/dc/bios/command_table2.c
@@ -238,8 +238,8 @@ static enum bp_result transmitter_control_v1_6(
 	if (cntl->action == TRANSMITTER_CONTROL_ENABLE ||
 		cntl->action == TRANSMITTER_CONTROL_ACTIAVATE ||
 		cntl->action == TRANSMITTER_CONTROL_DEACTIVATE) {
-		dm_logger_write(bp->base.ctx->logger, LOG_HW_SET_MODE,\
-		"************************%s:ps.param.symclk_10khz = %d\n",\
+		dm_logger_write(bp->base.ctx->logger, LOG_BIOS,\
+		"%s:ps.param.symclk_10khz = %d\n",\
 		__func__, ps.param.symclk_10khz);
 	}
 
@@ -328,8 +328,8 @@ static enum bp_result set_pixel_clock_v7(
 			(uint8_t) bp->cmd_helper->
 				transmitter_color_depth_to_atom(
 					bp_params->color_depth);
-		dm_logger_write(bp->base.ctx->logger, LOG_HW_SET_MODE,\
-				"************************%s:program display clock = %d"\
+		dm_logger_write(bp->base.ctx->logger, LOG_BIOS,\
+				"%s:program display clock = %d"\
 				"colorDepth = %d\n", __func__,\
 				bp_params->target_pixel_clock, bp_params->color_depth);
 
@@ -760,8 +760,8 @@ static enum bp_result set_dce_clock_v2_1(
 		 */
 		params.param.dceclk_10khz = cpu_to_le32(
 				bp_params->target_clock_frequency / 10);
-	dm_logger_write(bp->base.ctx->logger, LOG_HW_SET_MODE,
-			"************************%s:target_clock_frequency = %d"\
+	dm_logger_write(bp->base.ctx->logger, LOG_BIOS,
+			"%s:target_clock_frequency = %d"\
 			"clock_type = %d \n", __func__,\
 			bp_params->target_clock_frequency,\
 			bp_params->clock_type);
-- 
2.5.5


* [PATCH 071/100] drm/amdgpu: soc15 enable (v2)
       [not found] ` <1490041835-11255-1-git-send-email-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (54 preceding siblings ...)
  2017-03-20 20:30   ` [PATCH 070/100] drm/amd/display: Less log spam Alex Deucher
@ 2017-03-20 20:30   ` Alex Deucher
  2017-03-20 20:30   ` [PATCH 072/100] drm/amdgpu: Set the IP blocks for vega10 Alex Deucher
                     ` (29 subsequent siblings)
  85 siblings, 0 replies; 101+ messages in thread
From: Alex Deucher @ 2017-03-20 20:30 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Alex Deucher, Ken Wang

From: Ken Wang <Qingqing.Wang@amd.com>

Add soc15 support and enable all the IPs for vega10.

v2: squash in xclk fix

Signed-off-by: Ken Wang <Qingqing.Wang@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/Makefile     |   2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_cgs.c |   3 +
 drivers/gpu/drm/amd/amdgpu/soc15.c      | 813 ++++++++++++++++++++++++++++++++
 3 files changed, 817 insertions(+), 1 deletion(-)
 create mode 100644 drivers/gpu/drm/amd/amdgpu/soc15.c

diff --git a/drivers/gpu/drm/amd/amdgpu/Makefile b/drivers/gpu/drm/amd/amdgpu/Makefile
index bad4658..a377fdb 100644
--- a/drivers/gpu/drm/amd/amdgpu/Makefile
+++ b/drivers/gpu/drm/amd/amdgpu/Makefile
@@ -40,7 +40,7 @@ amdgpu-$(CONFIG_DRM_AMDGPU_CIK)+= cik.o cik_ih.o kv_smc.o kv_dpm.o \
 amdgpu-$(CONFIG_DRM_AMDGPU_SI)+= si.o gmc_v6_0.o gfx_v6_0.o si_ih.o si_dma.o dce_v6_0.o si_dpm.o si_smc.o
 
 amdgpu-y += \
-	vi.o mxgpu_vi.o nbio_v6_1.o
+	vi.o mxgpu_vi.o nbio_v6_1.o soc15.o
 
 # add GMC block
 amdgpu-y += \
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cgs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cgs.c
index 1524d90..d6cbdbe 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cgs.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cgs.c
@@ -903,6 +903,9 @@ static int amdgpu_cgs_get_firmware_info(struct cgs_device *cgs_device,
 			case CHIP_POLARIS12:
 				strcpy(fw_name, "amdgpu/polaris12_smc.bin");
 				break;
+			case CHIP_VEGA10:
+				strcpy(fw_name, "amdgpu/vega10_smc.bin");
+				break;
 			default:
 				DRM_ERROR("SMC firmware not supported\n");
 				return -EINVAL;
diff --git a/drivers/gpu/drm/amd/amdgpu/soc15.c b/drivers/gpu/drm/amd/amdgpu/soc15.c
new file mode 100644
index 0000000..07e10f3
--- /dev/null
+++ b/drivers/gpu/drm/amd/amdgpu/soc15.c
@@ -0,0 +1,813 @@
+/*
+ * Copyright 2016 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+#include <linux/firmware.h>
+#include <linux/slab.h>
+#include <linux/module.h>
+#include "drmP.h"
+#include "amdgpu.h"
+#include "amdgpu_atombios.h"
+#include "amdgpu_ih.h"
+#include "amdgpu_uvd.h"
+#include "amdgpu_vce.h"
+#include "amdgpu_ucode.h"
+#include "amdgpu_psp.h"
+#include "atom.h"
+#include "amd_pcie.h"
+
+#include "vega10/soc15ip.h"
+#include "vega10/UVD/uvd_7_0_offset.h"
+#include "vega10/GC/gc_9_0_offset.h"
+#include "vega10/GC/gc_9_0_sh_mask.h"
+#include "vega10/SDMA0/sdma0_4_0_offset.h"
+#include "vega10/SDMA1/sdma1_4_0_offset.h"
+#include "vega10/HDP/hdp_4_0_offset.h"
+#include "vega10/HDP/hdp_4_0_sh_mask.h"
+#include "vega10/MP/mp_9_0_offset.h"
+#include "vega10/MP/mp_9_0_sh_mask.h"
+#include "vega10/SMUIO/smuio_9_0_offset.h"
+#include "vega10/SMUIO/smuio_9_0_sh_mask.h"
+
+#include "soc15.h"
+#include "soc15_common.h"
+#include "gfx_v9_0.h"
+#include "gmc_v9_0.h"
+#include "gfxhub_v1_0.h"
+#include "mmhub_v1_0.h"
+#include "vega10_ih.h"
+#include "sdma_v4_0.h"
+#include "uvd_v7_0.h"
+#include "vce_v4_0.h"
+#include "amdgpu_powerplay.h"
+
+MODULE_FIRMWARE("amdgpu/vega10_smc.bin");
+
+#define mmFabricConfigAccessControl                                                                    0x0410
+#define mmFabricConfigAccessControl_BASE_IDX                                                           0
+#define mmFabricConfigAccessControl_DEFAULT                                      0x00000000
+//FabricConfigAccessControl
+#define FabricConfigAccessControl__CfgRegInstAccEn__SHIFT                                                     0x0
+#define FabricConfigAccessControl__CfgRegInstAccRegLock__SHIFT                                                0x1
+#define FabricConfigAccessControl__CfgRegInstID__SHIFT                                                        0x10
+#define FabricConfigAccessControl__CfgRegInstAccEn_MASK                                                       0x00000001L
+#define FabricConfigAccessControl__CfgRegInstAccRegLock_MASK                                                  0x00000002L
+#define FabricConfigAccessControl__CfgRegInstID_MASK                                                          0x00FF0000L
+
+
+#define mmDF_PIE_AON0_DfGlobalClkGater                                                                 0x00fc
+#define mmDF_PIE_AON0_DfGlobalClkGater_BASE_IDX                                                        0
+//DF_PIE_AON0_DfGlobalClkGater
+#define DF_PIE_AON0_DfGlobalClkGater__MGCGMode__SHIFT                                                         0x0
+#define DF_PIE_AON0_DfGlobalClkGater__MGCGMode_MASK                                                           0x0000000FL
+
+enum {
+	DF_MGCG_DISABLE = 0,
+	DF_MGCG_ENABLE_00_CYCLE_DELAY =1,
+	DF_MGCG_ENABLE_01_CYCLE_DELAY =2,
+	DF_MGCG_ENABLE_15_CYCLE_DELAY =13,
+	DF_MGCG_ENABLE_31_CYCLE_DELAY =14,
+	DF_MGCG_ENABLE_63_CYCLE_DELAY =15
+};
+
+#define mmMP0_MISC_CGTT_CTRL0                                                                   0x01b9
+#define mmMP0_MISC_CGTT_CTRL0_BASE_IDX                                                          0
+#define mmMP0_MISC_LIGHT_SLEEP_CTRL                                                             0x01ba
+#define mmMP0_MISC_LIGHT_SLEEP_CTRL_BASE_IDX                                                    0
+
+/*
+ * Indirect registers accessor
+ */
+static u32 soc15_pcie_rreg(struct amdgpu_device *adev, u32 reg)
+{
+	unsigned long flags, address, data;
+	u32 r;
+	struct nbio_pcie_index_data *nbio_pcie_id;
+
+	if (adev->asic_type == CHIP_VEGA10)
+		nbio_pcie_id = &nbio_v6_1_pcie_index_data;
+
+	address = nbio_pcie_id->index_offset;
+	data = nbio_pcie_id->data_offset;
+
+	spin_lock_irqsave(&adev->pcie_idx_lock, flags);
+	WREG32(address, reg);
+	(void)RREG32(address);
+	r = RREG32(data);
+	spin_unlock_irqrestore(&adev->pcie_idx_lock, flags);
+	return r;
+}
+
+static void soc15_pcie_wreg(struct amdgpu_device *adev, u32 reg, u32 v)
+{
+	unsigned long flags, address, data;
+	struct nbio_pcie_index_data *nbio_pcie_id;
+
+	if (adev->asic_type == CHIP_VEGA10)
+		nbio_pcie_id = &nbio_v6_1_pcie_index_data;
+
+	address = nbio_pcie_id->index_offset;
+	data = nbio_pcie_id->data_offset;
+
+	spin_lock_irqsave(&adev->pcie_idx_lock, flags);
+	WREG32(address, reg);
+	(void)RREG32(address);
+	WREG32(data, v);
+	(void)RREG32(data);
+	spin_unlock_irqrestore(&adev->pcie_idx_lock, flags);
+}
+
+static u32 soc15_uvd_ctx_rreg(struct amdgpu_device *adev, u32 reg)
+{
+	unsigned long flags, address, data;
+	u32 r;
+
+	address = SOC15_REG_OFFSET(UVD, 0, mmUVD_CTX_INDEX);
+	data = SOC15_REG_OFFSET(UVD, 0, mmUVD_CTX_DATA);
+
+	spin_lock_irqsave(&adev->uvd_ctx_idx_lock, flags);
+	WREG32(address, ((reg) & 0x1ff));
+	r = RREG32(data);
+	spin_unlock_irqrestore(&adev->uvd_ctx_idx_lock, flags);
+	return r;
+}
+
+static void soc15_uvd_ctx_wreg(struct amdgpu_device *adev, u32 reg, u32 v)
+{
+	unsigned long flags, address, data;
+
+	address = SOC15_REG_OFFSET(UVD, 0, mmUVD_CTX_INDEX);
+	data = SOC15_REG_OFFSET(UVD, 0, mmUVD_CTX_DATA);
+
+	spin_lock_irqsave(&adev->uvd_ctx_idx_lock, flags);
+	WREG32(address, ((reg) & 0x1ff));
+	WREG32(data, (v));
+	spin_unlock_irqrestore(&adev->uvd_ctx_idx_lock, flags);
+}
+
+static u32 soc15_didt_rreg(struct amdgpu_device *adev, u32 reg)
+{
+	unsigned long flags, address, data;
+	u32 r;
+
+	address = SOC15_REG_OFFSET(GC, 0, mmDIDT_IND_INDEX);
+	data = SOC15_REG_OFFSET(GC, 0, mmDIDT_IND_DATA);
+
+	spin_lock_irqsave(&adev->didt_idx_lock, flags);
+	WREG32(address, (reg));
+	r = RREG32(data);
+	spin_unlock_irqrestore(&adev->didt_idx_lock, flags);
+	return r;
+}
+
+static void soc15_didt_wreg(struct amdgpu_device *adev, u32 reg, u32 v)
+{
+	unsigned long flags, address, data;
+
+	address = SOC15_REG_OFFSET(GC, 0, mmDIDT_IND_INDEX);
+	data = SOC15_REG_OFFSET(GC, 0, mmDIDT_IND_DATA);
+
+	spin_lock_irqsave(&adev->didt_idx_lock, flags);
+	WREG32(address, (reg));
+	WREG32(data, (v));
+	spin_unlock_irqrestore(&adev->didt_idx_lock, flags);
+}
+
+static u32 soc15_get_config_memsize(struct amdgpu_device *adev)
+{
+	return nbio_v6_1_get_memsize(adev);
+}
+
+static const u32 vega10_golden_init[] =
+{
+};
+
+static void soc15_init_golden_registers(struct amdgpu_device *adev)
+{
+	/* Some of the registers might be dependent on GRBM_GFX_INDEX */
+	mutex_lock(&adev->grbm_idx_mutex);
+
+	switch (adev->asic_type) {
+	case CHIP_VEGA10:
+		amdgpu_program_register_sequence(adev,
+						 vega10_golden_init,
+						 (const u32)ARRAY_SIZE(vega10_golden_init));
+		break;
+	default:
+		break;
+	}
+	mutex_unlock(&adev->grbm_idx_mutex);
+}
+static u32 soc15_get_xclk(struct amdgpu_device *adev)
+{
+	if (adev->asic_type == CHIP_VEGA10)
+		return adev->clock.spll.reference_freq/4;
+	else
+		return adev->clock.spll.reference_freq;
+}
+
+
+void soc15_grbm_select(struct amdgpu_device *adev,
+		     u32 me, u32 pipe, u32 queue, u32 vmid)
+{
+	u32 grbm_gfx_cntl = 0;
+	grbm_gfx_cntl = REG_SET_FIELD(grbm_gfx_cntl, GRBM_GFX_CNTL, PIPEID, pipe);
+	grbm_gfx_cntl = REG_SET_FIELD(grbm_gfx_cntl, GRBM_GFX_CNTL, MEID, me);
+	grbm_gfx_cntl = REG_SET_FIELD(grbm_gfx_cntl, GRBM_GFX_CNTL, VMID, vmid);
+	grbm_gfx_cntl = REG_SET_FIELD(grbm_gfx_cntl, GRBM_GFX_CNTL, QUEUEID, queue);
+
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmGRBM_GFX_CNTL), grbm_gfx_cntl);
+}
+
+static void soc15_vga_set_state(struct amdgpu_device *adev, bool state)
+{
+	/* todo */
+}
+
+static bool soc15_read_disabled_bios(struct amdgpu_device *adev)
+{
+	/* todo */
+	return false;
+}
+
+static bool soc15_read_bios_from_rom(struct amdgpu_device *adev,
+				     u8 *bios, u32 length_bytes)
+{
+	u32 *dw_ptr;
+	u32 i, length_dw;
+
+	if (bios == NULL)
+		return false;
+	if (length_bytes == 0)
+		return false;
+	/* APU vbios image is part of sbios image */
+	if (adev->flags & AMD_IS_APU)
+		return false;
+
+	dw_ptr = (u32 *)bios;
+	length_dw = ALIGN(length_bytes, 4) / 4;
+
+	/* set rom index to 0 */
+	WREG32(SOC15_REG_OFFSET(SMUIO, 0, mmROM_INDEX), 0);
+	/* read out the rom data */
+	for (i = 0; i < length_dw; i++)
+		dw_ptr[i] = RREG32(SOC15_REG_OFFSET(SMUIO, 0, mmROM_DATA));
+
+	return true;
+}
+
+static struct amdgpu_allowed_register_entry vega10_allowed_read_registers[] = {
+	/* todo */
+};
+
+static struct amdgpu_allowed_register_entry soc15_allowed_read_registers[] = {
+	{ SOC15_REG_OFFSET(GC, 0, mmGRBM_STATUS), false},
+	{ SOC15_REG_OFFSET(GC, 0, mmGRBM_STATUS2), false},
+	{ SOC15_REG_OFFSET(GC, 0, mmGRBM_STATUS_SE0), false},
+	{ SOC15_REG_OFFSET(GC, 0, mmGRBM_STATUS_SE1), false},
+	{ SOC15_REG_OFFSET(GC, 0, mmGRBM_STATUS_SE2), false},
+	{ SOC15_REG_OFFSET(GC, 0, mmGRBM_STATUS_SE3), false},
+	{ SOC15_REG_OFFSET(SDMA0, 0, mmSDMA0_STATUS_REG), false},
+	{ SOC15_REG_OFFSET(SDMA1, 0, mmSDMA1_STATUS_REG), false},
+	{ SOC15_REG_OFFSET(GC, 0, mmCP_STAT), false},
+	{ SOC15_REG_OFFSET(GC, 0, mmCP_STALLED_STAT1), false},
+	{ SOC15_REG_OFFSET(GC, 0, mmCP_STALLED_STAT2), false},
+	{ SOC15_REG_OFFSET(GC, 0, mmCP_STALLED_STAT3), false},
+	{ SOC15_REG_OFFSET(GC, 0, mmCP_CPF_BUSY_STAT), false},
+	{ SOC15_REG_OFFSET(GC, 0, mmCP_CPF_STALLED_STAT1), false},
+	{ SOC15_REG_OFFSET(GC, 0, mmCP_CPF_STATUS), false},
+	{ SOC15_REG_OFFSET(GC, 0, mmCP_CPF_BUSY_STAT), false},
+	{ SOC15_REG_OFFSET(GC, 0, mmCP_CPC_STALLED_STAT1), false},
+	{ SOC15_REG_OFFSET(GC, 0, mmCP_CPC_STATUS), false},
+	{ SOC15_REG_OFFSET(GC, 0, mmGB_ADDR_CONFIG), false},
+	{ SOC15_REG_OFFSET(GC, 0, mmCC_RB_BACKEND_DISABLE), false, true},
+	{ SOC15_REG_OFFSET(GC, 0, mmGC_USER_RB_BACKEND_DISABLE), false, true},
+	{ SOC15_REG_OFFSET(GC, 0, mmGB_BACKEND_MAP), false, false},
+};
+
+static uint32_t soc15_read_indexed_register(struct amdgpu_device *adev, u32 se_num,
+					 u32 sh_num, u32 reg_offset)
+{
+	uint32_t val;
+
+	mutex_lock(&adev->grbm_idx_mutex);
+	if (se_num != 0xffffffff || sh_num != 0xffffffff)
+		amdgpu_gfx_select_se_sh(adev, se_num, sh_num, 0xffffffff);
+
+	val = RREG32(reg_offset);
+
+	if (se_num != 0xffffffff || sh_num != 0xffffffff)
+		amdgpu_gfx_select_se_sh(adev, 0xffffffff, 0xffffffff, 0xffffffff);
+	mutex_unlock(&adev->grbm_idx_mutex);
+	return val;
+}
+
+static int soc15_read_register(struct amdgpu_device *adev, u32 se_num,
+			    u32 sh_num, u32 reg_offset, u32 *value)
+{
+	struct amdgpu_allowed_register_entry *asic_register_table = NULL;
+	struct amdgpu_allowed_register_entry *asic_register_entry;
+	uint32_t size, i;
+
+	*value = 0;
+	switch (adev->asic_type) {
+	case CHIP_VEGA10:
+		asic_register_table = vega10_allowed_read_registers;
+		size = ARRAY_SIZE(vega10_allowed_read_registers);
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	if (asic_register_table) {
+		for (i = 0; i < size; i++) {
+			asic_register_entry = asic_register_table + i;
+			if (reg_offset != asic_register_entry->reg_offset)
+				continue;
+			if (!asic_register_entry->untouched)
+				*value = asic_register_entry->grbm_indexed ?
+					soc15_read_indexed_register(adev, se_num,
+								 sh_num, reg_offset) :
+					RREG32(reg_offset);
+			return 0;
+		}
+	}
+
+	for (i = 0; i < ARRAY_SIZE(soc15_allowed_read_registers); i++) {
+		if (reg_offset != soc15_allowed_read_registers[i].reg_offset)
+			continue;
+
+		if (!soc15_allowed_read_registers[i].untouched)
+			*value = soc15_allowed_read_registers[i].grbm_indexed ?
+				soc15_read_indexed_register(adev, se_num,
+							 sh_num, reg_offset) :
+				RREG32(reg_offset);
+		return 0;
+	}
+	return -EINVAL;
+}
+
+static void soc15_gpu_pci_config_reset(struct amdgpu_device *adev)
+{
+	u32 i;
+
+	dev_info(adev->dev, "GPU pci config reset\n");
+
+	/* disable BM */
+	pci_clear_master(adev->pdev);
+	/* reset */
+	amdgpu_pci_config_reset(adev);
+
+	udelay(100);
+
+	/* wait for asic to come out of reset */
+	for (i = 0; i < adev->usec_timeout; i++) {
+		if (nbio_v6_1_get_memsize(adev) != 0xffffffff)
+			break;
+		udelay(1);
+	}
+
+}
+
+static int soc15_asic_reset(struct amdgpu_device *adev)
+{
+	amdgpu_atombios_scratch_regs_engine_hung(adev, true);
+
+	soc15_gpu_pci_config_reset(adev);
+
+	amdgpu_atombios_scratch_regs_engine_hung(adev, false);
+
+	return 0;
+}
+
+/*static int soc15_set_uvd_clock(struct amdgpu_device *adev, u32 clock,
+			u32 cntl_reg, u32 status_reg)
+{
+	return 0;
+}*/
+
+static int soc15_set_uvd_clocks(struct amdgpu_device *adev, u32 vclk, u32 dclk)
+{
+	/*int r;
+
+	r = soc15_set_uvd_clock(adev, vclk, ixCG_VCLK_CNTL, ixCG_VCLK_STATUS);
+	if (r)
+		return r;
+
+	r = soc15_set_uvd_clock(adev, dclk, ixCG_DCLK_CNTL, ixCG_DCLK_STATUS);
+	*/
+	return 0;
+}
+
+static int soc15_set_vce_clocks(struct amdgpu_device *adev, u32 evclk, u32 ecclk)
+{
+	/* todo */
+
+	return 0;
+}
+
+static void soc15_pcie_gen3_enable(struct amdgpu_device *adev)
+{
+	if (pci_is_root_bus(adev->pdev->bus))
+		return;
+
+	if (amdgpu_pcie_gen2 == 0)
+		return;
+
+	if (adev->flags & AMD_IS_APU)
+		return;
+
+	if (!(adev->pm.pcie_gen_mask & (CAIL_PCIE_LINK_SPEED_SUPPORT_GEN2 |
+					CAIL_PCIE_LINK_SPEED_SUPPORT_GEN3)))
+		return;
+
+	/* todo */
+}
+
+static void soc15_program_aspm(struct amdgpu_device *adev)
+{
+
+	if (amdgpu_aspm == 0)
+		return;
+
+	/* todo */
+}
+
+static void soc15_enable_doorbell_aperture(struct amdgpu_device *adev,
+					bool enable)
+{
+	nbio_v6_1_enable_doorbell_aperture(adev, enable);
+	nbio_v6_1_enable_doorbell_selfring_aperture(adev, enable);
+}
+
+static const struct amdgpu_ip_block_version vega10_common_ip_block =
+{
+	.type = AMD_IP_BLOCK_TYPE_COMMON,
+	.major = 2,
+	.minor = 0,
+	.rev = 0,
+	.funcs = &soc15_common_ip_funcs,
+};
+
+int soc15_set_ip_blocks(struct amdgpu_device *adev)
+{
+	switch (adev->asic_type) {
+	case CHIP_VEGA10:
+		amdgpu_ip_block_add(adev, &vega10_common_ip_block);
+		amdgpu_ip_block_add(adev, &gfxhub_v1_0_ip_block);
+		amdgpu_ip_block_add(adev, &mmhub_v1_0_ip_block);
+		amdgpu_ip_block_add(adev, &gmc_v9_0_ip_block);
+		amdgpu_ip_block_add(adev, &vega10_ih_ip_block);
+		amdgpu_ip_block_add(adev, &psp_v3_1_ip_block);
+		amdgpu_ip_block_add(adev, &amdgpu_pp_ip_block);
+#if defined(CONFIG_DRM_AMD_DC)
+		if (amdgpu_device_has_dc_support(adev))
+			amdgpu_ip_block_add(adev, &dm_ip_block);
+#else
+#	warning "Enable CONFIG_DRM_AMD_DC for display support on SOC15."
+#endif
+		amdgpu_ip_block_add(adev, &gfx_v9_0_ip_block);
+		amdgpu_ip_block_add(adev, &sdma_v4_0_ip_block);
+		amdgpu_ip_block_add(adev, &uvd_v7_0_ip_block);
+		amdgpu_ip_block_add(adev, &vce_v4_0_ip_block);
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static uint32_t soc15_get_rev_id(struct amdgpu_device *adev)
+{
+	return nbio_v6_1_get_rev_id(adev);
+}
+
+
+int gmc_v9_0_mc_wait_for_idle(struct amdgpu_device *adev)
+{
+	/* to be implemented in MC IP*/
+	return 0;
+}
+
+static const struct amdgpu_asic_funcs soc15_asic_funcs =
+{
+	.read_disabled_bios = &soc15_read_disabled_bios,
+	.read_bios_from_rom = &soc15_read_bios_from_rom,
+	.read_register = &soc15_read_register,
+	.reset = &soc15_asic_reset,
+	.set_vga_state = &soc15_vga_set_state,
+	.get_xclk = &soc15_get_xclk,
+	.set_uvd_clocks = &soc15_set_uvd_clocks,
+	.set_vce_clocks = &soc15_set_vce_clocks,
+	.get_config_memsize = &soc15_get_config_memsize,
+};
+
+static int soc15_common_early_init(void *handle)
+{
+	bool psp_enabled = false;
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	adev->smc_rreg = NULL;
+	adev->smc_wreg = NULL;
+	adev->pcie_rreg = &soc15_pcie_rreg;
+	adev->pcie_wreg = &soc15_pcie_wreg;
+	adev->uvd_ctx_rreg = &soc15_uvd_ctx_rreg;
+	adev->uvd_ctx_wreg = &soc15_uvd_ctx_wreg;
+	adev->didt_rreg = &soc15_didt_rreg;
+	adev->didt_wreg = &soc15_didt_wreg;
+
+	adev->asic_funcs = &soc15_asic_funcs;
+
+	if (amdgpu_get_ip_block(adev, AMD_IP_BLOCK_TYPE_PSP) &&
+		(amdgpu_ip_block_mask & (1 << AMD_IP_BLOCK_TYPE_PSP)))
+		psp_enabled = true;
+
+	/*
+	 * nbio need be used for both sdma and gfx9, but only
+	 * initializes once
+	 */
+	switch(adev->asic_type) {
+	case CHIP_VEGA10:
+		nbio_v6_1_init(adev);
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	adev->rev_id = soc15_get_rev_id(adev);
+	adev->external_rev_id = 0xFF;
+	switch (adev->asic_type) {
+	case CHIP_VEGA10:
+		adev->cg_flags = AMD_CG_SUPPORT_GFX_MGCG |
+			AMD_CG_SUPPORT_GFX_MGLS |
+			AMD_CG_SUPPORT_GFX_RLC_LS |
+			AMD_CG_SUPPORT_GFX_CP_LS |
+			AMD_CG_SUPPORT_GFX_3D_CGCG |
+			AMD_CG_SUPPORT_GFX_3D_CGLS |
+			AMD_CG_SUPPORT_GFX_CGCG |
+			AMD_CG_SUPPORT_GFX_CGLS |
+			AMD_CG_SUPPORT_BIF_MGCG |
+			AMD_CG_SUPPORT_BIF_LS |
+			AMD_CG_SUPPORT_HDP_MGCG |
+			AMD_CG_SUPPORT_HDP_LS |
+			AMD_CG_SUPPORT_DRM_MGCG |
+			AMD_CG_SUPPORT_DRM_LS |
+			AMD_CG_SUPPORT_ROM_MGCG |
+			AMD_CG_SUPPORT_DF_MGCG |
+			AMD_CG_SUPPORT_SDMA_MGCG |
+			AMD_CG_SUPPORT_SDMA_LS |
+			AMD_CG_SUPPORT_MC_MGCG |
+			AMD_CG_SUPPORT_MC_LS;
+		adev->pg_flags = 0;
+		adev->external_rev_id = 0x1;
+		break;
+	default:
+		/* FIXME: not supported yet */
+		return -EINVAL;
+	}
+
+	adev->firmware.load_type = amdgpu_ucode_get_load_type(adev, amdgpu_fw_load_type);
+
+	amdgpu_get_pcie_info(adev);
+
+	return 0;
+}
+
+static int soc15_common_sw_init(void *handle)
+{
+	return 0;
+}
+
+static int soc15_common_sw_fini(void *handle)
+{
+	return 0;
+}
+
+static int soc15_common_hw_init(void *handle)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	/* move the golden regs per IP block */
+	soc15_init_golden_registers(adev);
+	/* enable pcie gen2/3 link */
+	soc15_pcie_gen3_enable(adev);
+	/* enable aspm */
+	soc15_program_aspm(adev);
+	/* enable the doorbell aperture */
+	soc15_enable_doorbell_aperture(adev, true);
+
+	return 0;
+}
+
+static int soc15_common_hw_fini(void *handle)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	/* disable the doorbell aperture */
+	soc15_enable_doorbell_aperture(adev, false);
+
+	return 0;
+}
+
+static int soc15_common_suspend(void *handle)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	return soc15_common_hw_fini(adev);
+}
+
+static int soc15_common_resume(void *handle)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	return soc15_common_hw_init(adev);
+}
+
+static bool soc15_common_is_idle(void *handle)
+{
+	return true;
+}
+
+static int soc15_common_wait_for_idle(void *handle)
+{
+	return 0;
+}
+
+static int soc15_common_soft_reset(void *handle)
+{
+	return 0;
+}
+
+static void soc15_update_hdp_light_sleep(struct amdgpu_device *adev, bool enable)
+{
+	uint32_t def, data;
+
+	def = data = RREG32(SOC15_REG_OFFSET(HDP, 0, mmHDP_MEM_POWER_LS));
+
+	if (enable && (adev->cg_flags & AMD_CG_SUPPORT_HDP_LS))
+		data |= HDP_MEM_POWER_LS__LS_ENABLE_MASK;
+	else
+		data &= ~HDP_MEM_POWER_LS__LS_ENABLE_MASK;
+
+	if (def != data)
+		WREG32(SOC15_REG_OFFSET(HDP, 0, mmHDP_MEM_POWER_LS), data);
+}
+
+static void soc15_update_drm_clock_gating(struct amdgpu_device *adev, bool enable)
+{
+	uint32_t def, data;
+
+	def = data = RREG32(SOC15_REG_OFFSET(MP0, 0, mmMP0_MISC_CGTT_CTRL0));
+
+	if (enable && (adev->cg_flags & AMD_CG_SUPPORT_DRM_MGCG))
+		data &= ~(0x01000000 |
+			  0x02000000 |
+			  0x04000000 |
+			  0x08000000 |
+			  0x10000000 |
+			  0x20000000 |
+			  0x40000000 |
+			  0x80000000);
+	else
+		data |= (0x01000000 |
+			 0x02000000 |
+			 0x04000000 |
+			 0x08000000 |
+			 0x10000000 |
+			 0x20000000 |
+			 0x40000000 |
+			 0x80000000);
+
+	if (def != data)
+		WREG32(SOC15_REG_OFFSET(MP0, 0, mmMP0_MISC_CGTT_CTRL0), data);
+}
+
+static void soc15_update_drm_light_sleep(struct amdgpu_device *adev, bool enable)
+{
+	uint32_t def, data;
+
+	def = data = RREG32(SOC15_REG_OFFSET(MP0, 0, mmMP0_MISC_LIGHT_SLEEP_CTRL));
+
+	if (enable && (adev->cg_flags & AMD_CG_SUPPORT_DRM_LS))
+		data |= 1;
+	else
+		data &= ~1;
+
+	if (def != data)
+		WREG32(SOC15_REG_OFFSET(MP0, 0, mmMP0_MISC_LIGHT_SLEEP_CTRL), data);
+}
+
+static void soc15_update_rom_medium_grain_clock_gating(struct amdgpu_device *adev,
+						       bool enable)
+{
+	uint32_t def, data;
+
+	def = data = RREG32(SOC15_REG_OFFSET(SMUIO, 0, mmCGTT_ROM_CLK_CTRL0));
+
+	if (enable && (adev->cg_flags & AMD_CG_SUPPORT_ROM_MGCG))
+		data &= ~(CGTT_ROM_CLK_CTRL0__SOFT_OVERRIDE0_MASK |
+			CGTT_ROM_CLK_CTRL0__SOFT_OVERRIDE1_MASK);
+	else
+		data |= CGTT_ROM_CLK_CTRL0__SOFT_OVERRIDE0_MASK |
+			CGTT_ROM_CLK_CTRL0__SOFT_OVERRIDE1_MASK;
+
+	if (def != data)
+		WREG32(SOC15_REG_OFFSET(SMUIO, 0, mmCGTT_ROM_CLK_CTRL0), data);
+}
+
+static void soc15_update_df_medium_grain_clock_gating(struct amdgpu_device *adev,
+						       bool enable)
+{
+	uint32_t data;
+
+	/* Put DF on broadcast mode */
+	data = RREG32(SOC15_REG_OFFSET(DF, 0, mmFabricConfigAccessControl));
+	data &= ~FabricConfigAccessControl__CfgRegInstAccEn_MASK;
+	WREG32(SOC15_REG_OFFSET(DF, 0, mmFabricConfigAccessControl), data);
+
+	if (enable && (adev->cg_flags & AMD_CG_SUPPORT_DF_MGCG)) {
+		data = RREG32(SOC15_REG_OFFSET(DF, 0, mmDF_PIE_AON0_DfGlobalClkGater));
+		data &= ~DF_PIE_AON0_DfGlobalClkGater__MGCGMode_MASK;
+		data |= DF_MGCG_ENABLE_15_CYCLE_DELAY;
+		WREG32(SOC15_REG_OFFSET(DF, 0, mmDF_PIE_AON0_DfGlobalClkGater), data);
+	} else {
+		data = RREG32(SOC15_REG_OFFSET(DF, 0, mmDF_PIE_AON0_DfGlobalClkGater));
+		data &= ~DF_PIE_AON0_DfGlobalClkGater__MGCGMode_MASK;
+		data |= DF_MGCG_DISABLE;
+		WREG32(SOC15_REG_OFFSET(DF, 0, mmDF_PIE_AON0_DfGlobalClkGater), data);
+	}
+
+	WREG32(SOC15_REG_OFFSET(DF, 0, mmFabricConfigAccessControl),
+	       mmFabricConfigAccessControl_DEFAULT);
+}
+
+static int soc15_common_set_clockgating_state(void *handle,
+					    enum amd_clockgating_state state)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	switch (adev->asic_type) {
+	case CHIP_VEGA10:
+		nbio_v6_1_update_medium_grain_clock_gating(adev,
+				state == AMD_CG_STATE_GATE ? true : false);
+		nbio_v6_1_update_medium_grain_light_sleep(adev,
+				state == AMD_CG_STATE_GATE ? true : false);
+		soc15_update_hdp_light_sleep(adev,
+				state == AMD_CG_STATE_GATE ? true : false);
+		soc15_update_drm_clock_gating(adev,
+				state == AMD_CG_STATE_GATE ? true : false);
+		soc15_update_drm_light_sleep(adev,
+				state == AMD_CG_STATE_GATE ? true : false);
+		soc15_update_rom_medium_grain_clock_gating(adev,
+				state == AMD_CG_STATE_GATE ? true : false);
+		soc15_update_df_medium_grain_clock_gating(adev,
+				state == AMD_CG_STATE_GATE ? true : false);
+		break;
+	default:
+		break;
+	}
+	return 0;
+}
+
+static int soc15_common_set_powergating_state(void *handle,
+					    enum amd_powergating_state state)
+{
+	/* todo */
+	return 0;
+}
+
+const struct amd_ip_funcs soc15_common_ip_funcs = {
+	.name = "soc15_common",
+	.early_init = soc15_common_early_init,
+	.late_init = NULL,
+	.sw_init = soc15_common_sw_init,
+	.sw_fini = soc15_common_sw_fini,
+	.hw_init = soc15_common_hw_init,
+	.hw_fini = soc15_common_hw_fini,
+	.suspend = soc15_common_suspend,
+	.resume = soc15_common_resume,
+	.is_idle = soc15_common_is_idle,
+	.wait_for_idle = soc15_common_wait_for_idle,
+	.soft_reset = soc15_common_soft_reset,
+	.set_clockgating_state = soc15_common_set_clockgating_state,
+	.set_powergating_state = soc15_common_set_powergating_state,
+};
-- 
2.5.5


^ permalink raw reply related	[flat|nested] 101+ messages in thread

* [PATCH 072/100] drm/amdgpu: Set the IP blocks for vega10
       [not found] ` <1490041835-11255-1-git-send-email-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (55 preceding siblings ...)
  2017-03-20 20:30   ` [PATCH 071/100] drm/amdgpu: soc15 enable (v2) Alex Deucher
@ 2017-03-20 20:30   ` Alex Deucher
  2017-03-20 20:30   ` [PATCH 073/100] drm/amdgpu: add Vega10 Device IDs Alex Deucher
                     ` (28 subsequent siblings)
  85 siblings, 0 replies; 101+ messages in thread
From: Alex Deucher @ 2017-03-20 20:30 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Alex Deucher, Ken Wang

From: Ken Wang <Qingqing.Wang@amd.com>

Signed-off-by: Ken Wang <Qingqing.Wang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index 8e64437..47d1dcc 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -50,6 +50,7 @@
 #include "cik.h"
 #endif
 #include "vi.h"
+#include "soc15.h"
 #include "bif/bif_4_1_d.h"
 #include <linux/pci.h>
 #include <linux/firmware.h>
@@ -1433,6 +1434,13 @@ static int amdgpu_early_init(struct amdgpu_device *adev)
 			return r;
 		break;
 #endif
+	case CHIP_VEGA10:
+		adev->family = AMDGPU_FAMILY_AI;
+
+		r = soc15_set_ip_blocks(adev);
+		if (r)
+			return r;
+		break;
 	default:
 		/* FIXME: not supported yet */
 		return -EINVAL;
-- 
2.5.5


^ permalink raw reply related	[flat|nested] 101+ messages in thread

* [PATCH 073/100] drm/amdgpu: add Vega10 Device IDs
       [not found] ` <1490041835-11255-1-git-send-email-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (56 preceding siblings ...)
  2017-03-20 20:30   ` [PATCH 072/100] drm/amdgpu: Set the IP blocks for vega10 Alex Deucher
@ 2017-03-20 20:30   ` Alex Deucher
  2017-03-20 20:30   ` [PATCH 074/100] drm/amdgpu/gfx9: programming wptr_poll_addr register Alex Deucher
                     ` (27 subsequent siblings)
  85 siblings, 0 replies; 101+ messages in thread
From: Alex Deucher @ 2017-03-20 20:30 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Junwei Zhang, Alex Deucher

From: Junwei Zhang <Jerry.Zhang@amd.com>

Signed-off-by: Junwei Zhang <Jerry.Zhang@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
index ef3ed11..d7f286d 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
@@ -450,7 +450,14 @@ static const struct pci_device_id pciidlist[] = {
 	{0x1002, 0x6986, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_POLARIS12},
 	{0x1002, 0x6987, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_POLARIS12},
 	{0x1002, 0x699F, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_POLARIS12},
-
+	/* Vega 10 */
+	{0x1002, 0x6860, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_VEGA10},
+	{0x1002, 0x6861, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_VEGA10},
+	{0x1002, 0x6862, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_VEGA10},
+	{0x1002, 0x6863, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_VEGA10},
+	{0x1002, 0x6867, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_VEGA10},
+	{0x1002, 0x686c, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_VEGA10},
+	{0x1002, 0x687f, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_VEGA10},
 	{0, 0, 0}
 };
 
-- 
2.5.5


^ permalink raw reply related	[flat|nested] 101+ messages in thread

* [PATCH 074/100] drm/amdgpu/gfx9: programming wptr_poll_addr register
       [not found] ` <1490041835-11255-1-git-send-email-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (57 preceding siblings ...)
  2017-03-20 20:30   ` [PATCH 073/100] drm/amdgpu: add Vega10 Device IDs Alex Deucher
@ 2017-03-20 20:30   ` Alex Deucher
  2017-03-20 20:30   ` [PATCH 075/100] drm/amdgpu: impl sriov detection for vega10 Alex Deucher
                     ` (26 subsequent siblings)
  85 siblings, 0 replies; 101+ messages in thread
From: Alex Deucher @ 2017-03-20 20:30 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
  Cc: Alex Deucher, Xiangliang Yu, Monk Liu

From: Monk Liu <Monk.Liu@amd.com>

Required for SR-IOV.

Signed-off-by: Monk Liu <Monk.Liu@amd.com>
Signed-off-by: Xiangliang Yu <Xiangliang.Yu@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
index bb93b0a..4704524 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
@@ -1501,7 +1501,7 @@ static int gfx_v9_0_cp_gfx_resume(struct amdgpu_device *adev)
 	struct amdgpu_ring *ring;
 	u32 tmp;
 	u32 rb_bufsz;
-	u64 rb_addr, rptr_addr;
+	u64 rb_addr, rptr_addr, wptr_gpu_addr;
 
 	/* Set the write pointer delay */
 	WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_RB_WPTR_DELAY), 0);
@@ -1529,6 +1529,10 @@ static int gfx_v9_0_cp_gfx_resume(struct amdgpu_device *adev)
 	WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_RB0_RPTR_ADDR), lower_32_bits(rptr_addr));
 	WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_RB0_RPTR_ADDR_HI), upper_32_bits(rptr_addr) & CP_RB_RPTR_ADDR_HI__RB_RPTR_ADDR_HI_MASK);
 
+	wptr_gpu_addr = adev->wb.gpu_addr + (ring->wptr_offs * 4);
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_RB_WPTR_POLL_ADDR_LO), lower_32_bits(wptr_gpu_addr));
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_RB_WPTR_POLL_ADDR_HI), upper_32_bits(wptr_gpu_addr));
+
 	mdelay(1);
 	WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_RB0_CNTL), tmp);
 
-- 
2.5.5
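The hunk above computes a 64-bit writeback address and splits it across a LO/HI register pair. A small sketch of that split, reimplementing the kernel's `lower_32_bits()`/`upper_32_bits()` helpers for userspace (the `wb_gpu_addr`/`wptr_offs` parameters stand in for the `adev`/`ring` fields used in the patch):

```c
#include <assert.h>
#include <stdint.h>

/* Userspace re-implementations of the kernel helpers; the double
 * shift in upper_32_bits avoids undefined behaviour on 32-bit types. */
#define lower_32_bits(n) ((uint32_t)((n) & 0xffffffffULL))
#define upper_32_bits(n) ((uint32_t)(((n) >> 16) >> 16))

/* Mimics the patch: the wptr writeback slot lives at
 * wb.gpu_addr + wptr_offs * 4, and the address is programmed into
 * CP_RB_WPTR_POLL_ADDR_LO/_HI as two 32-bit halves. */
static void wptr_poll_addr(uint64_t wb_gpu_addr, uint32_t wptr_offs,
			   uint32_t *lo, uint32_t *hi)
{
	uint64_t wptr_gpu_addr = wb_gpu_addr + ((uint64_t)wptr_offs * 4);

	*lo = lower_32_bits(wptr_gpu_addr);
	*hi = upper_32_bits(wptr_gpu_addr);
}
```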


* [PATCH 075/100] drm/amdgpu: impl sriov detection for vega10
       [not found] ` <1490041835-11255-1-git-send-email-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (58 preceding siblings ...)
  2017-03-20 20:30   ` [PATCH 074/100] drm/amdgpu/gfx9: program the wptr_poll_addr register Alex Deucher
@ 2017-03-20 20:30   ` Alex Deucher
  2017-03-20 20:30   ` [PATCH 076/100] drm/amdgpu: add kiq ring for gfx9 Alex Deucher
                     ` (25 subsequent siblings)
  85 siblings, 0 replies; 101+ messages in thread
From: Alex Deucher @ 2017-03-20 20:30 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Alex Deucher, Xiangliang Yu

From: Xiangliang Yu <Xiangliang.Yu@amd.com>

Read a vega10 hardware register to detect whether SR-IOV is enabled,
and call the detection before the IP blocks are set up.

Signed-off-by: Xiangliang Yu <Xiangliang.Yu@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Acked-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Monk Liu <Monk.Liu@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/nbio_v6_1.c | 18 ++++++++++++++++++
 drivers/gpu/drm/amd/amdgpu/nbio_v6_1.h |  1 +
 drivers/gpu/drm/amd/amdgpu/soc15.c     |  2 ++
 3 files changed, 21 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/nbio_v6_1.c b/drivers/gpu/drm/amd/amdgpu/nbio_v6_1.c
index f517e9a..9021872 100644
--- a/drivers/gpu/drm/amd/amdgpu/nbio_v6_1.c
+++ b/drivers/gpu/drm/amd/amdgpu/nbio_v6_1.c
@@ -231,3 +231,21 @@ int nbio_v6_1_init(struct amdgpu_device *adev)
 
 	return 0;
 }
+
+void nbio_v6_1_detect_hw_virt(struct amdgpu_device *adev)
+{
+	uint32_t reg;
+
+	reg = RREG32(SOC15_REG_OFFSET(NBIO, 0,
+				      mmRCC_PF_0_0_RCC_IOV_FUNC_IDENTIFIER));
+	if (reg & 1)
+		adev->virt.caps |= AMDGPU_SRIOV_CAPS_IS_VF;
+
+	if (reg & 0x80000000)
+		adev->virt.caps |= AMDGPU_SRIOV_CAPS_ENABLE_IOV;
+
+	if (!reg) {
+		if (is_virtual_machine())	/* passthrough mode excludes sriov mode */
+			adev->virt.caps |= AMDGPU_PASSTHROUGH_MODE;
+	}
+}
diff --git a/drivers/gpu/drm/amd/amdgpu/nbio_v6_1.h b/drivers/gpu/drm/amd/amdgpu/nbio_v6_1.h
index a778d1c..3e04093 100644
--- a/drivers/gpu/drm/amd/amdgpu/nbio_v6_1.h
+++ b/drivers/gpu/drm/amd/amdgpu/nbio_v6_1.h
@@ -48,5 +48,6 @@ void nbio_v6_1_ih_control(struct amdgpu_device *adev);
 u32 nbio_v6_1_get_rev_id(struct amdgpu_device *adev);
 void nbio_v6_1_update_medium_grain_clock_gating(struct amdgpu_device *adev, bool enable);
 void nbio_v6_1_update_medium_grain_light_sleep(struct amdgpu_device *adev, bool enable);
+void nbio_v6_1_detect_hw_virt(struct amdgpu_device *adev);
 
 #endif
diff --git a/drivers/gpu/drm/amd/amdgpu/soc15.c b/drivers/gpu/drm/amd/amdgpu/soc15.c
index 07e10f3..263f602 100644
--- a/drivers/gpu/drm/amd/amdgpu/soc15.c
+++ b/drivers/gpu/drm/amd/amdgpu/soc15.c
@@ -469,6 +469,8 @@ static const struct amdgpu_ip_block_version vega10_common_ip_block =
 
 int soc15_set_ip_blocks(struct amdgpu_device *adev)
 {
+	nbio_v6_1_detect_hw_virt(adev);
+
 	switch (adev->asic_type) {
 	case CHIP_VEGA10:
 		amdgpu_ip_block_add(adev, &vega10_common_ip_block);
-- 
2.5.5
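The detection above boils down to decoding two bits of RCC_IOV_FUNC_IDENTIFIER: bit 0 says we are running as a virtual function, bit 31 says SR-IOV is enabled on the physical function, and an all-zero register inside a VM means plain passthrough. A self-contained sketch of that decode (the flag values are illustrative; the real definitions live in the driver's virt headers):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative flag values; the real ones are defined in amdgpu_virt.h. */
#define SRIOV_CAPS_IS_VF       (1u << 0)
#define SRIOV_CAPS_ENABLE_IOV  (1u << 1)
#define PASSTHROUGH_MODE       (1u << 2)

/* Decode RCC_IOV_FUNC_IDENTIFIER the same way the patch does. */
static uint32_t decode_virt_caps(uint32_t reg, bool in_virtual_machine)
{
	uint32_t caps = 0;

	if (reg & 1)			/* bit 0: we are a VF */
		caps |= SRIOV_CAPS_IS_VF;
	if (reg & 0x80000000)		/* bit 31: IOV enabled on the PF */
		caps |= SRIOV_CAPS_ENABLE_IOV;
	if (!reg && in_virtual_machine)	/* passthrough excludes SR-IOV */
		caps |= PASSTHROUGH_MODE;
	return caps;
}
```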


* [PATCH 076/100] drm/amdgpu: add kiq ring for gfx9
       [not found] ` <1490041835-11255-1-git-send-email-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (59 preceding siblings ...)
  2017-03-20 20:30   ` [PATCH 075/100] drm/amdgpu: impl sriov detection for vega10 Alex Deucher
@ 2017-03-20 20:30   ` Alex Deucher
  2017-03-20 20:30   ` [PATCH 077/100] drm/amdgpu/gfx9: fullfill kiq funcs Alex Deucher
                     ` (24 subsequent siblings)
  85 siblings, 0 replies; 101+ messages in thread
From: Alex Deucher @ 2017-03-20 20:30 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
  Cc: Alex Deucher, Xiangliang Yu, Monk Liu

From: Xiangliang Yu <Xiangliang.Yu@amd.com>

Allocate KIQ ring in sw_init for gfx9.

Signed-off-by: Xiangliang Yu <Xiangliang.Yu@amd.com>
Signed-off-by: Monk Liu <Monk.Liu@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c | 88 +++++++++++++++++++++++++++++++++++
 1 file changed, 88 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
index 4704524..ad88c4b 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
@@ -576,6 +576,74 @@ static int gfx_v9_0_mec_init(struct amdgpu_device *adev)
 	return 0;
 }
 
+static void gfx_v9_0_kiq_fini(struct amdgpu_device *adev)
+{
+	struct amdgpu_kiq *kiq = &adev->gfx.kiq;
+
+	amdgpu_bo_free_kernel(&kiq->eop_obj, &kiq->eop_gpu_addr, NULL);
+}
+
+static int gfx_v9_0_kiq_init(struct amdgpu_device *adev)
+{
+	int r;
+	u32 *hpd;
+	struct amdgpu_kiq *kiq = &adev->gfx.kiq;
+
+	r = amdgpu_bo_create_kernel(adev, MEC_HPD_SIZE, PAGE_SIZE,
+				    AMDGPU_GEM_DOMAIN_GTT, &kiq->eop_obj,
+				    &kiq->eop_gpu_addr, (void **)&hpd);
+	if (r) {
+		dev_warn(adev->dev, "failed to create KIQ bo (%d).\n", r);
+		return r;
+	}
+
+	memset(hpd, 0, MEC_HPD_SIZE);
+
+	amdgpu_bo_kunmap(kiq->eop_obj);
+
+	return 0;
+}
+
+static int gfx_v9_0_kiq_init_ring(struct amdgpu_device *adev,
+				  struct amdgpu_ring *ring,
+				  struct amdgpu_irq_src *irq)
+{
+	int r = 0;
+
+	r = amdgpu_wb_get(adev, &adev->virt.reg_val_offs);
+	if (r)
+		return r;
+
+	ring->adev = NULL;
+	ring->ring_obj = NULL;
+	ring->use_doorbell = true;
+	ring->doorbell_index = AMDGPU_DOORBELL_KIQ;
+	if (adev->gfx.mec2_fw) {
+		ring->me = 2;
+		ring->pipe = 0;
+	} else {
+		ring->me = 1;
+		ring->pipe = 1;
+	}
+
+	irq->data = ring;
+	ring->queue = 0;
+	sprintf(ring->name, "kiq %d.%d.%d", ring->me, ring->pipe, ring->queue);
+	r = amdgpu_ring_init(adev, ring, 1024,
+			     irq, AMDGPU_CP_KIQ_IRQ_DRIVER0);
+	if (r)
+		dev_warn(adev->dev, "(%d) failed to init kiq ring\n", r);
+
+	return r;
+}
+static void gfx_v9_0_kiq_free_ring(struct amdgpu_ring *ring,
+				   struct amdgpu_irq_src *irq)
+{
+	amdgpu_wb_free(ring->adev, ring->adev->virt.reg_val_offs);
+	amdgpu_ring_fini(ring);
+	irq->data = NULL;
+}
+
 static uint32_t wave_read_ind(struct amdgpu_device *adev, uint32_t simd, uint32_t wave, uint32_t address)
 {
 	WREG32(SOC15_REG_OFFSET(GC, 0, mmSQ_IND_INDEX),
@@ -898,6 +966,7 @@ static int gfx_v9_0_sw_init(void *handle)
 {
 	int i, r;
 	struct amdgpu_ring *ring;
+	struct amdgpu_kiq *kiq;
 	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
 
 	/* EOP Event */
@@ -971,6 +1040,19 @@ static int gfx_v9_0_sw_init(void *handle)
 			return r;
 	}
 
+	if (amdgpu_sriov_vf(adev)) {
+		r = gfx_v9_0_kiq_init(adev);
+		if (r) {
+			DRM_ERROR("Failed to init KIQ BOs!\n");
+			return r;
+		}
+
+		kiq = &adev->gfx.kiq;
+		r = gfx_v9_0_kiq_init_ring(adev, &kiq->ring, &kiq->irq);
+		if (r)
+			return r;
+	}
+
 	/* reserve GDS, GWS and OA resource for gfx */
 	r = amdgpu_bo_create_kernel(adev, adev->gds.mem.gfx_partition_size,
 				    PAGE_SIZE, AMDGPU_GEM_DOMAIN_GDS,
@@ -1016,6 +1098,11 @@ static int gfx_v9_0_sw_fini(void *handle)
 	for (i = 0; i < adev->gfx.num_compute_rings; i++)
 		amdgpu_ring_fini(&adev->gfx.compute_ring[i]);
 
+	if (amdgpu_sriov_vf(adev)) {
+		gfx_v9_0_kiq_free_ring(&adev->gfx.kiq.ring, &adev->gfx.kiq.irq);
+		gfx_v9_0_kiq_fini(adev);
+	}
+
 	gfx_v9_0_mec_fini(adev);
 	gfx_v9_0_ngg_fini(adev);
 
@@ -1577,6 +1664,7 @@ static void gfx_v9_0_cp_compute_enable(struct amdgpu_device *adev, bool enable)
 			(CP_MEC_CNTL__MEC_ME1_HALT_MASK | CP_MEC_CNTL__MEC_ME2_HALT_MASK));
 		for (i = 0; i < adev->gfx.num_compute_rings; i++)
 			adev->gfx.compute_ring[i].ready = false;
+		adev->gfx.kiq.ring.ready = false;
 	}
 	udelay(50);
 }
-- 
2.5.5
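One detail of `gfx_v9_0_kiq_init_ring()` worth calling out is the placement choice: when a second MEC firmware is available the KIQ goes on ME2 pipe 0, otherwise it shares ME1 but on pipe 1, away from the default compute rings. A sketch of just that decision (struct and function names are invented for illustration):

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical helper mirroring the placement logic in the patch. */
struct kiq_placement { int me, pipe, queue; };

static struct kiq_placement pick_kiq_placement(bool have_mec2_fw)
{
	struct kiq_placement p = { .queue = 0 };

	if (have_mec2_fw) {
		p.me = 2;	/* dedicate the second micro engine */
		p.pipe = 0;
	} else {
		p.me = 1;	/* share ME1, but keep off pipe 0 */
		p.pipe = 1;
	}
	return p;
}
```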


* [PATCH 077/100] drm/amdgpu/gfx9: fulfill kiq funcs
       [not found] ` <1490041835-11255-1-git-send-email-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (60 preceding siblings ...)
  2017-03-20 20:30   ` [PATCH 076/100] drm/amdgpu: add kiq ring for gfx9 Alex Deucher
@ 2017-03-20 20:30   ` Alex Deucher
  2017-03-20 20:30   ` [PATCH 078/100] drm/amdgpu/gfx9: fulfill kiq irq funcs Alex Deucher
                     ` (23 subsequent siblings)
  85 siblings, 0 replies; 101+ messages in thread
From: Alex Deucher @ 2017-03-20 20:30 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
  Cc: Alex Deucher, Xiangliang Yu, Monk Liu

From: Xiangliang Yu <Xiangliang.Yu@amd.com>

Fulfill the kiq funcs to support the kiq ring.

Signed-off-by: Xiangliang Yu <Xiangliang.Yu@amd.com>
Signed-off-by: Monk Liu <Monk.Liu@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c | 79 +++++++++++++++++++++++++++++++++++
 1 file changed, 79 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
index ad88c4b..987587a 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
@@ -2696,6 +2696,31 @@ static void gfx_v9_0_ring_set_wptr_compute(struct amdgpu_ring *ring)
 	}
 }
 
+static void gfx_v9_0_ring_emit_fence_kiq(struct amdgpu_ring *ring, u64 addr,
+					 u64 seq, unsigned int flags)
+{
+	/* we only allocate 32bit for each seq wb address */
+	BUG_ON(flags & AMDGPU_FENCE_FLAG_64BIT);
+
+	/* write fence seq to the "addr" */
+	amdgpu_ring_write(ring, PACKET3(PACKET3_WRITE_DATA, 3));
+	amdgpu_ring_write(ring, (WRITE_DATA_ENGINE_SEL(0) |
+				 WRITE_DATA_DST_SEL(5) | WR_CONFIRM));
+	amdgpu_ring_write(ring, lower_32_bits(addr));
+	amdgpu_ring_write(ring, upper_32_bits(addr));
+	amdgpu_ring_write(ring, lower_32_bits(seq));
+
+	if (flags & AMDGPU_FENCE_FLAG_INT) {
+		/* set register to trigger INT */
+		amdgpu_ring_write(ring, PACKET3(PACKET3_WRITE_DATA, 3));
+		amdgpu_ring_write(ring, (WRITE_DATA_ENGINE_SEL(0) |
+					 WRITE_DATA_DST_SEL(0) | WR_CONFIRM));
+		amdgpu_ring_write(ring, SOC15_REG_OFFSET(GC, 0, mmCPC_INT_STATUS));
+		amdgpu_ring_write(ring, 0);
+		amdgpu_ring_write(ring, 0x20000000); /* src_id is 178 */
+	}
+}
+
 static void gfx_v9_ring_emit_sb(struct amdgpu_ring *ring)
 {
 	amdgpu_ring_write(ring, PACKET3(PACKET3_SWITCH_BUFFER, 0));
@@ -2731,6 +2756,32 @@ static void gfx_v9_ring_emit_cntxcntl(struct amdgpu_ring *ring, uint32_t flags)
 	amdgpu_ring_write(ring, 0);
 }
 
+static void gfx_v9_0_ring_emit_rreg(struct amdgpu_ring *ring, uint32_t reg)
+{
+	struct amdgpu_device *adev = ring->adev;
+
+	amdgpu_ring_write(ring, PACKET3(PACKET3_COPY_DATA, 4));
+	amdgpu_ring_write(ring, 0 |	/* src: register*/
+				(5 << 8) |	/* dst: memory */
+				(1 << 20));	/* write confirm */
+	amdgpu_ring_write(ring, reg);
+	amdgpu_ring_write(ring, 0);
+	amdgpu_ring_write(ring, lower_32_bits(adev->wb.gpu_addr +
+				adev->virt.reg_val_offs * 4));
+	amdgpu_ring_write(ring, upper_32_bits(adev->wb.gpu_addr +
+				adev->virt.reg_val_offs * 4));
+}
+
+static void gfx_v9_0_ring_emit_wreg(struct amdgpu_ring *ring, uint32_t reg,
+				  uint32_t val)
+{
+	amdgpu_ring_write(ring, PACKET3(PACKET3_WRITE_DATA, 3));
+	amdgpu_ring_write(ring, (1 << 16)); /* no inc addr */
+	amdgpu_ring_write(ring, reg);
+	amdgpu_ring_write(ring, 0);
+	amdgpu_ring_write(ring, val);
+}
+
 static void gfx_v9_0_set_gfx_eop_interrupt_state(struct amdgpu_device *adev,
 						 enum amdgpu_interrupt_state state)
 {
@@ -3021,11 +3072,39 @@ static const struct amdgpu_ring_funcs gfx_v9_0_ring_funcs_compute = {
 	.pad_ib = amdgpu_ring_generic_pad_ib,
 };
 
+static const struct amdgpu_ring_funcs gfx_v9_0_ring_funcs_kiq = {
+	.type = AMDGPU_RING_TYPE_KIQ,
+	.align_mask = 0xff,
+	.nop = PACKET3(PACKET3_NOP, 0x3FFF),
+	.get_rptr = gfx_v9_0_ring_get_rptr_compute,
+	.get_wptr = gfx_v9_0_ring_get_wptr_compute,
+	.set_wptr = gfx_v9_0_ring_set_wptr_compute,
+	.emit_frame_size =
+		20 + /* gfx_v9_0_ring_emit_gds_switch */
+		7 + /* gfx_v9_0_ring_emit_hdp_flush */
+		5 + /* gfx_v9_0_ring_emit_hdp_invalidate */
+		7 + /* gfx_v9_0_ring_emit_pipeline_sync */
+		64 + /* gfx_v9_0_ring_emit_vm_flush */
+		8 + 8 + 8, /* gfx_v9_0_ring_emit_fence_kiq x3 for user fence, vm fence */
+	.emit_ib_size =	4, /* gfx_v9_0_ring_emit_ib_compute */
+	.emit_ib = gfx_v9_0_ring_emit_ib_compute,
+	.emit_fence = gfx_v9_0_ring_emit_fence_kiq,
+	.emit_hdp_flush = gfx_v9_0_ring_emit_hdp_flush,
+	.emit_hdp_invalidate = gfx_v9_0_ring_emit_hdp_invalidate,
+	.test_ring = gfx_v9_0_ring_test_ring,
+	.test_ib = gfx_v9_0_ring_test_ib,
+	.insert_nop = amdgpu_ring_insert_nop,
+	.pad_ib = amdgpu_ring_generic_pad_ib,
+	.emit_rreg = gfx_v9_0_ring_emit_rreg,
+	.emit_wreg = gfx_v9_0_ring_emit_wreg,
+};
 
 static void gfx_v9_0_set_ring_funcs(struct amdgpu_device *adev)
 {
 	int i;
 
+	adev->gfx.kiq.ring.funcs = &gfx_v9_0_ring_funcs_kiq;
+
 	for (i = 0; i < adev->gfx.num_gfx_rings; i++)
 		adev->gfx.gfx_ring[i].funcs = &gfx_v9_0_ring_funcs_gfx;
 
-- 
2.5.5
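The fence and register read/write emitters above all start from a PM4 type-3 packet header built by `PACKET3()`. As a reminder of that encoding, a standalone sketch (to the best of my reading of the amdgpu headers: bits 31:30 carry the packet type, bits 29:16 the payload dword count minus one, bits 15:8 the opcode):

```c
#include <assert.h>
#include <stdint.h>

/* PM4 type-3 packet header, as encoded by the PACKET3() macro in the
 * amdgpu headers (soc15d.h and friends). */
#define PACKET3(op, n)	((uint32_t)(3u << 30) |			\
			 (((uint32_t)(n) & 0x3FFFu) << 16) |	\
			 (((uint32_t)(op) & 0xFFu) << 8))

#define PACKET3_WRITE_DATA	0x37	/* opcode from the PM4 packet set */
```

So `PACKET3(PACKET3_WRITE_DATA, 3)`, the header the fence emitter writes, is one header dword followed by four payload dwords (count is "dwords minus one").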


* [PATCH 078/100] drm/amdgpu/gfx9: fulfill kiq irq funcs
       [not found] ` <1490041835-11255-1-git-send-email-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (61 preceding siblings ...)
  2017-03-20 20:30   ` [PATCH 077/100] drm/amdgpu/gfx9: fulfill kiq funcs Alex Deucher
@ 2017-03-20 20:30   ` Alex Deucher
  2017-03-20 20:30   ` [PATCH 079/100] drm/amdgpu: init kiq and kcq for vega10 Alex Deucher
                     ` (22 subsequent siblings)
  85 siblings, 0 replies; 101+ messages in thread
From: Alex Deucher @ 2017-03-20 20:30 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
  Cc: Alex Deucher, Xiangliang Yu, Monk Liu

From: Xiangliang Yu <Xiangliang.Yu@amd.com>

Fulfill the KIQ irq funcs to support the KIQ interrupt.

Signed-off-by: Xiangliang Yu <Xiangliang.Yu@amd.com>
Signed-off-by: Monk Liu <Monk.Liu@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c | 74 +++++++++++++++++++++++++++++++++++
 1 file changed, 74 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
index 987587a..d694af1 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
@@ -2993,6 +2993,72 @@ static int gfx_v9_0_priv_inst_irq(struct amdgpu_device *adev,
 	return 0;
 }
 
+static int gfx_v9_0_kiq_set_interrupt_state(struct amdgpu_device *adev,
+					    struct amdgpu_irq_src *src,
+					    unsigned int type,
+					    enum amdgpu_interrupt_state state)
+{
+	uint32_t tmp, target;
+	struct amdgpu_ring *ring = (struct amdgpu_ring *)src->data;
+
+	BUG_ON(!ring || (ring->funcs->type != AMDGPU_RING_TYPE_KIQ));
+
+	if (ring->me == 1)
+		target = SOC15_REG_OFFSET(GC, 0, mmCP_ME1_PIPE0_INT_CNTL);
+	else
+		target = SOC15_REG_OFFSET(GC, 0, mmCP_ME2_PIPE0_INT_CNTL);
+	target += ring->pipe;
+
+	switch (type) {
+	case AMDGPU_CP_KIQ_IRQ_DRIVER0:
+		if (state == AMDGPU_IRQ_STATE_DISABLE) {
+			tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmCPC_INT_CNTL));
+			tmp = REG_SET_FIELD(tmp, CPC_INT_CNTL,
+						 GENERIC2_INT_ENABLE, 0);
+			WREG32(SOC15_REG_OFFSET(GC, 0, mmCPC_INT_CNTL), tmp);
+
+			tmp = RREG32(target);
+			tmp = REG_SET_FIELD(tmp, CP_ME2_PIPE0_INT_CNTL,
+						 GENERIC2_INT_ENABLE, 0);
+			WREG32(target, tmp);
+		} else {
+			tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmCPC_INT_CNTL));
+			tmp = REG_SET_FIELD(tmp, CPC_INT_CNTL,
+						 GENERIC2_INT_ENABLE, 1);
+			WREG32(SOC15_REG_OFFSET(GC, 0, mmCPC_INT_CNTL), tmp);
+
+			tmp = RREG32(target);
+			tmp = REG_SET_FIELD(tmp, CP_ME2_PIPE0_INT_CNTL,
+						 GENERIC2_INT_ENABLE, 1);
+			WREG32(target, tmp);
+		}
+		break;
+	default:
+		BUG(); /* kiq only support GENERIC2_INT now */
+		break;
+	}
+	return 0;
+}
+
+static int gfx_v9_0_kiq_irq(struct amdgpu_device *adev,
+			    struct amdgpu_irq_src *source,
+			    struct amdgpu_iv_entry *entry)
+{
+	u8 me_id, pipe_id, queue_id;
+	struct amdgpu_ring *ring = (struct amdgpu_ring *)source->data;
+
+	BUG_ON(!ring || (ring->funcs->type != AMDGPU_RING_TYPE_KIQ));
+
+	me_id = (entry->ring_id & 0x0c) >> 2;
+	pipe_id = (entry->ring_id & 0x03) >> 0;
+	queue_id = (entry->ring_id & 0x70) >> 4;
+	DRM_DEBUG("IH: CPC GENERIC2_INT, me:%d, pipe:%d, queue:%d\n",
+		   me_id, pipe_id, queue_id);
+
+	amdgpu_fence_process(ring);
+	return 0;
+}
+
 const struct amd_ip_funcs gfx_v9_0_ip_funcs = {
 	.name = "gfx_v9_0",
 	.early_init = gfx_v9_0_early_init,
@@ -3112,6 +3178,11 @@ static void gfx_v9_0_set_ring_funcs(struct amdgpu_device *adev)
 		adev->gfx.compute_ring[i].funcs = &gfx_v9_0_ring_funcs_compute;
 }
 
+static const struct amdgpu_irq_src_funcs gfx_v9_0_kiq_irq_funcs = {
+	.set = gfx_v9_0_kiq_set_interrupt_state,
+	.process = gfx_v9_0_kiq_irq,
+};
+
 static const struct amdgpu_irq_src_funcs gfx_v9_0_eop_irq_funcs = {
 	.set = gfx_v9_0_set_eop_interrupt_state,
 	.process = gfx_v9_0_eop_irq,
@@ -3137,6 +3208,9 @@ static void gfx_v9_0_set_irq_funcs(struct amdgpu_device *adev)
 
 	adev->gfx.priv_inst_irq.num_types = 1;
 	adev->gfx.priv_inst_irq.funcs = &gfx_v9_0_priv_inst_irq_funcs;
+
+	adev->gfx.kiq.irq.num_types = AMDGPU_CP_KIQ_IRQ_LAST;
+	adev->gfx.kiq.irq.funcs = &gfx_v9_0_kiq_irq_funcs;
 }
 
 static void gfx_v9_0_set_rlc_funcs(struct amdgpu_device *adev)
-- 
2.5.5
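In `gfx_v9_0_kiq_irq()` above, the handler recovers which hardware queue raised the interrupt from the IV entry's `ring_id` byte. The bitfield layout used by the patch (pipe in bits 1:0, me in bits 3:2, queue in bits 6:4) can be sketched standalone:

```c
#include <assert.h>
#include <stdint.h>

/* Same masks and shifts as gfx_v9_0_kiq_irq(); the struct is just a
 * convenient container for the three decoded fields. */
struct queue_id { uint8_t me, pipe, queue; };

static struct queue_id decode_ring_id(uint8_t ring_id)
{
	struct queue_id q;

	q.me    = (ring_id & 0x0c) >> 2;
	q.pipe  = (ring_id & 0x03) >> 0;
	q.queue = (ring_id & 0x70) >> 4;
	return q;
}
```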


* [PATCH 079/100] drm/amdgpu: init kiq and kcq for vega10
       [not found] ` <1490041835-11255-1-git-send-email-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (62 preceding siblings ...)
  2017-03-20 20:30   ` [PATCH 078/100] drm/amdgpu/gfx9: fulfill kiq irq funcs Alex Deucher
@ 2017-03-20 20:30   ` Alex Deucher
  2017-03-20 20:30   ` [PATCH 080/100] drm/amdgpu:impl gfx9 cond_exec Alex Deucher
                     ` (21 subsequent siblings)
  85 siblings, 0 replies; 101+ messages in thread
From: Alex Deucher @ 2017-03-20 20:30 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
  Cc: Alex Deucher, Xiangliang Yu, Monk Liu

From: Xiangliang Yu <Xiangliang.Yu@amd.com>

Init the KIQ via CPU MMIO and init the KCQs through the KIQ.

Signed-off-by: Xiangliang Yu <Xiangliang.Yu@amd.com>
Signed-off-by: Monk Liu <Monk.Liu@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Acked-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c | 464 +++++++++++++++++++++++++++++++++-
 drivers/gpu/drm/amd/amdgpu/soc15d.h   |   2 +
 2 files changed, 465 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
index d694af1..2f833ca 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
@@ -644,6 +644,60 @@ static void gfx_v9_0_kiq_free_ring(struct amdgpu_ring *ring,
 	irq->data = NULL;
 }
 
+/* create MQD for each compute queue */
+static int gfx_v9_0_compute_mqd_soft_init(struct amdgpu_device *adev)
+{
+	struct amdgpu_ring *ring = NULL;
+	int r, i;
+
+	/* create MQD for KIQ */
+	ring = &adev->gfx.kiq.ring;
+	if (!ring->mqd_obj) {
+		r = amdgpu_bo_create_kernel(adev, sizeof(struct v9_mqd), PAGE_SIZE,
+						AMDGPU_GEM_DOMAIN_GTT, &ring->mqd_obj,
+						&ring->mqd_gpu_addr, (void **)&ring->mqd_ptr);
+		if (r) {
+			dev_warn(adev->dev, "failed to create ring mqd bo (%d)", r);
+			return r;
+		}
+
+		/*TODO: prepare MQD backup */
+	}
+
+	/* create MQD for each KCQ */
+	for (i = 0; i < adev->gfx.num_compute_rings; i++)
+	{
+		ring = &adev->gfx.compute_ring[i];
+		if (!ring->mqd_obj) {
+			r = amdgpu_bo_create_kernel(adev, sizeof(struct v9_mqd), PAGE_SIZE,
+							AMDGPU_GEM_DOMAIN_GTT, &ring->mqd_obj,
+							&ring->mqd_gpu_addr, (void **)&ring->mqd_ptr);
+			if (r) {
+				dev_warn(adev->dev, "failed to create ring mqd bo (%d)", r);
+				return r;
+			}
+
+			/* TODO: prepare MQD backup */
+		}
+	}
+
+	return 0;
+}
+
+static void gfx_v9_0_compute_mqd_soft_fini(struct amdgpu_device *adev)
+{
+	struct amdgpu_ring *ring = NULL;
+	int i;
+
+	for (i = 0; i < adev->gfx.num_compute_rings; i++) {
+		ring = &adev->gfx.compute_ring[i];
+		amdgpu_bo_free_kernel(&ring->mqd_obj, &ring->mqd_gpu_addr, (void **)&ring->mqd_ptr);
+	}
+
+	ring = &adev->gfx.kiq.ring;
+	amdgpu_bo_free_kernel(&ring->mqd_obj, &ring->mqd_gpu_addr, (void **)&ring->mqd_ptr);
+}
+
 static uint32_t wave_read_ind(struct amdgpu_device *adev, uint32_t simd, uint32_t wave, uint32_t address)
 {
 	WREG32(SOC15_REG_OFFSET(GC, 0, mmSQ_IND_INDEX),
@@ -1051,6 +1105,11 @@ static int gfx_v9_0_sw_init(void *handle)
 		r = gfx_v9_0_kiq_init_ring(adev, &kiq->ring, &kiq->irq);
 		if (r)
 			return r;
+
+		/* create MQD for all compute queues as well as KIQ for SRIOV case */
+		r = gfx_v9_0_compute_mqd_soft_init(adev);
+		if (r)
+			return r;
 	}
 
 	/* reserve GDS, GWS and OA resource for gfx */
@@ -1099,6 +1158,7 @@ static int gfx_v9_0_sw_fini(void *handle)
 		amdgpu_ring_fini(&adev->gfx.compute_ring[i]);
 
 	if (amdgpu_sriov_vf(adev)) {
+		gfx_v9_0_compute_mqd_soft_fini(adev);
 		gfx_v9_0_kiq_free_ring(&adev->gfx.kiq.ring, &adev->gfx.kiq.irq);
 		gfx_v9_0_kiq_fini(adev);
 	}
@@ -1757,6 +1817,393 @@ static int gfx_v9_0_cp_compute_resume(struct amdgpu_device *adev)
 	return 0;
 }
 
+/* KIQ functions */
+static void gfx_v9_0_kiq_setting(struct amdgpu_ring *ring)
+{
+	uint32_t tmp;
+	struct amdgpu_device *adev = ring->adev;
+
+	/* tell RLC which is KIQ queue */
+	tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmRLC_CP_SCHEDULERS));
+	tmp &= 0xffffff00;
+	tmp |= (ring->me << 5) | (ring->pipe << 3) | (ring->queue);
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmRLC_CP_SCHEDULERS), tmp);
+	tmp |= 0x80;
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmRLC_CP_SCHEDULERS), tmp);
+}
+
+static void gfx_v9_0_kiq_enable(struct amdgpu_ring *ring)
+{
+	amdgpu_ring_alloc(ring, 8);
+	/* set resources */
+	amdgpu_ring_write(ring, PACKET3(PACKET3_SET_RESOURCES, 6));
+	amdgpu_ring_write(ring, 0);	/* vmid_mask:0 queue_type:0 (KIQ) */
+	amdgpu_ring_write(ring, 0x000000FF);	/* queue mask lo */
+	amdgpu_ring_write(ring, 0);	/* queue mask hi */
+	amdgpu_ring_write(ring, 0);	/* gws mask lo */
+	amdgpu_ring_write(ring, 0);	/* gws mask hi */
+	amdgpu_ring_write(ring, 0);	/* oac mask */
+	amdgpu_ring_write(ring, 0);	/* gds heap base:0, gds heap size:0 */
+	amdgpu_ring_commit(ring);
+	udelay(50);
+}
+
+static void gfx_v9_0_map_queue_enable(struct amdgpu_ring *kiq_ring,
+				   struct amdgpu_ring *ring)
+{
+	struct amdgpu_device *adev = kiq_ring->adev;
+	uint64_t mqd_addr, wptr_addr;
+
+	mqd_addr = amdgpu_bo_gpu_offset(ring->mqd_obj);
+	wptr_addr = adev->wb.gpu_addr + (ring->wptr_offs * 4);
+	amdgpu_ring_alloc(kiq_ring, 8);
+
+	amdgpu_ring_write(kiq_ring, PACKET3(PACKET3_MAP_QUEUES, 5));
+	/* Q_sel:0, vmid:0, vidmem: 1, engine:0, num_Q:1*/
+	amdgpu_ring_write(kiq_ring, /* Q_sel: 0, vmid: 0, engine: 0, num_Q: 1 */
+			  (0 << 4) | /* Queue_Sel */
+			  (0 << 8) | /* VMID */
+			  (ring->queue << 13) |
+			  (ring->pipe << 16) |
+			  ((ring->me == 1 ? 0 : 1) << 18) |
+			  (0 << 21) | /*queue_type: normal compute queue */
+			  (1 << 24) | /* alloc format: all_on_one_pipe */
+			  (0 << 26) | /* engine_sel: compute */
+			  (1 << 29)); /* num_queues: must be 1 */
+	amdgpu_ring_write(kiq_ring, (ring->doorbell_index << 2));
+	amdgpu_ring_write(kiq_ring, lower_32_bits(mqd_addr));
+	amdgpu_ring_write(kiq_ring, upper_32_bits(mqd_addr));
+	amdgpu_ring_write(kiq_ring, lower_32_bits(wptr_addr));
+	amdgpu_ring_write(kiq_ring, upper_32_bits(wptr_addr));
+	amdgpu_ring_commit(kiq_ring);
+	udelay(50);
+}
+
+static int gfx_v9_0_mqd_init(struct amdgpu_device *adev,
+			     struct v9_mqd *mqd,
+			     uint64_t mqd_gpu_addr,
+			     uint64_t eop_gpu_addr,
+			     struct amdgpu_ring *ring)
+{
+	uint64_t hqd_gpu_addr, wb_gpu_addr, eop_base_addr;
+	uint32_t tmp;
+
+	mqd->header = 0xC0310800;
+	mqd->compute_pipelinestat_enable = 0x00000001;
+	mqd->compute_static_thread_mgmt_se0 = 0xffffffff;
+	mqd->compute_static_thread_mgmt_se1 = 0xffffffff;
+	mqd->compute_static_thread_mgmt_se2 = 0xffffffff;
+	mqd->compute_static_thread_mgmt_se3 = 0xffffffff;
+	mqd->compute_misc_reserved = 0x00000003;
+
+	eop_base_addr = eop_gpu_addr >> 8;
+	mqd->cp_hqd_eop_base_addr_lo = eop_base_addr;
+	mqd->cp_hqd_eop_base_addr_hi = upper_32_bits(eop_base_addr);
+
+	/* set the EOP size, register value is 2^(EOP_SIZE+1) dwords */
+	tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmCP_HQD_EOP_CONTROL));
+	tmp = REG_SET_FIELD(tmp, CP_HQD_EOP_CONTROL, EOP_SIZE,
+			(order_base_2(MEC_HPD_SIZE / 4) - 1));
+
+	mqd->cp_hqd_eop_control = tmp;
+
+	/* enable doorbell? */
+	tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmCP_HQD_PQ_DOORBELL_CONTROL));
+
+	if (ring->use_doorbell) {
+		tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_DOORBELL_CONTROL,
+				    DOORBELL_OFFSET, ring->doorbell_index);
+		tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_DOORBELL_CONTROL,
+				    DOORBELL_EN, 1);
+		tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_DOORBELL_CONTROL,
+				    DOORBELL_SOURCE, 0);
+		tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_DOORBELL_CONTROL,
+				    DOORBELL_HIT, 0);
+	}
+	else
+		tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_DOORBELL_CONTROL,
+					 DOORBELL_EN, 0);
+
+	mqd->cp_hqd_pq_doorbell_control = tmp;
+
+	/* disable the queue if it's active */
+	ring->wptr = 0;
+	mqd->cp_hqd_dequeue_request = 0;
+	mqd->cp_hqd_pq_rptr = 0;
+	mqd->cp_hqd_pq_wptr_lo = 0;
+	mqd->cp_hqd_pq_wptr_hi = 0;
+
+	/* set the pointer to the MQD */
+	mqd->cp_mqd_base_addr_lo = mqd_gpu_addr & 0xfffffffc;
+	mqd->cp_mqd_base_addr_hi = upper_32_bits(mqd_gpu_addr);
+
+	/* set MQD vmid to 0 */
+	tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmCP_MQD_CONTROL));
+	tmp = REG_SET_FIELD(tmp, CP_MQD_CONTROL, VMID, 0);
+	mqd->cp_mqd_control = tmp;
+
+	/* set the pointer to the HQD, this is similar CP_RB0_BASE/_HI */
+	hqd_gpu_addr = ring->gpu_addr >> 8;
+	mqd->cp_hqd_pq_base_lo = hqd_gpu_addr;
+	mqd->cp_hqd_pq_base_hi = upper_32_bits(hqd_gpu_addr);
+
+	/* set up the HQD, this is similar to CP_RB0_CNTL */
+	tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmCP_HQD_PQ_CONTROL));
+	tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, QUEUE_SIZE,
+			    (order_base_2(ring->ring_size / 4) - 1));
+	tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, RPTR_BLOCK_SIZE,
+			((order_base_2(AMDGPU_GPU_PAGE_SIZE / 4) - 1) << 8));
+#ifdef __BIG_ENDIAN
+	tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, ENDIAN_SWAP, 1);
+#endif
+	tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, UNORD_DISPATCH, 0);
+	tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, ROQ_PQ_IB_FLIP, 0);
+	tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, PRIV_STATE, 1);
+	tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, KMD_QUEUE, 1);
+	mqd->cp_hqd_pq_control = tmp;
+
+	/* set the wb address whether it's enabled or not */
+	wb_gpu_addr = adev->wb.gpu_addr + (ring->rptr_offs * 4);
+	mqd->cp_hqd_pq_rptr_report_addr_lo = wb_gpu_addr & 0xfffffffc;
+	mqd->cp_hqd_pq_rptr_report_addr_hi =
+		upper_32_bits(wb_gpu_addr) & 0xffff;
+
+	/* only used if CP_PQ_WPTR_POLL_CNTL.CP_PQ_WPTR_POLL_CNTL__EN_MASK=1 */
+	wb_gpu_addr = adev->wb.gpu_addr + (ring->wptr_offs * 4);
+	mqd->cp_hqd_pq_wptr_poll_addr_lo = wb_gpu_addr & 0xfffffffc;
+	mqd->cp_hqd_pq_wptr_poll_addr_hi = upper_32_bits(wb_gpu_addr) & 0xffff;
+
+	tmp = 0;
+	/* enable the doorbell if requested */
+	if (ring->use_doorbell) {
+		tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmCP_HQD_PQ_DOORBELL_CONTROL));
+		tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_DOORBELL_CONTROL,
+				DOORBELL_OFFSET, ring->doorbell_index);
+
+		tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_DOORBELL_CONTROL,
+					 DOORBELL_EN, 1);
+		tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_DOORBELL_CONTROL,
+					 DOORBELL_SOURCE, 0);
+		tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_DOORBELL_CONTROL,
+					 DOORBELL_HIT, 0);
+	}
+
+	mqd->cp_hqd_pq_doorbell_control = tmp;
+
+	/* reset read and write pointers, similar to CP_RB0_WPTR/_RPTR */
+	ring->wptr = 0;
+	mqd->cp_hqd_pq_rptr = RREG32(mmCP_HQD_PQ_RPTR);
+
+	/* set the vmid for the queue */
+	mqd->cp_hqd_vmid = 0;
+
+	tmp = RREG32(mmCP_HQD_PERSISTENT_STATE);
+	tmp = REG_SET_FIELD(tmp, CP_HQD_PERSISTENT_STATE, PRELOAD_SIZE, 0x53);
+	mqd->cp_hqd_persistent_state = tmp;
+
+	/* activate the queue */
+	mqd->cp_hqd_active = 1;
+
+	return 0;
+}
+
+static int gfx_v9_0_kiq_init_register(struct amdgpu_device *adev,
+				      struct v9_mqd *mqd,
+				      struct amdgpu_ring *ring)
+{
+	uint32_t tmp;
+	int j;
+
+	/* disable wptr polling */
+	tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmCP_PQ_WPTR_POLL_CNTL));
+	tmp = REG_SET_FIELD(tmp, CP_PQ_WPTR_POLL_CNTL, EN, 0);
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_PQ_WPTR_POLL_CNTL), tmp);
+
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_HQD_EOP_BASE_ADDR),
+	       mqd->cp_hqd_eop_base_addr_lo);
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_HQD_EOP_BASE_ADDR_HI),
+	       mqd->cp_hqd_eop_base_addr_hi);
+
+	/* set the EOP size, register value is 2^(EOP_SIZE+1) dwords */
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_HQD_EOP_CONTROL),
+	       mqd->cp_hqd_eop_control);
+
+	/* enable doorbell? */
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_HQD_PQ_DOORBELL_CONTROL),
+	       mqd->cp_hqd_pq_doorbell_control);
+
+	/* disable the queue if it's active */
+	if (RREG32(SOC15_REG_OFFSET(GC, 0, mmCP_HQD_ACTIVE)) & 1) {
+		WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_HQD_DEQUEUE_REQUEST), 1);
+		for (j = 0; j < adev->usec_timeout; j++) {
+			if (!(RREG32(SOC15_REG_OFFSET(GC, 0, mmCP_HQD_ACTIVE)) & 1))
+				break;
+			udelay(1);
+		}
+		WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_HQD_DEQUEUE_REQUEST),
+		       mqd->cp_hqd_dequeue_request);
+		WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_HQD_PQ_RPTR),
+		       mqd->cp_hqd_pq_rptr);
+		WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_HQD_PQ_WPTR_LO),
+		       mqd->cp_hqd_pq_wptr_lo);
+		WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_HQD_PQ_WPTR_HI),
+		       mqd->cp_hqd_pq_wptr_hi);
+	}
+
+	/* set the pointer to the MQD */
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_MQD_BASE_ADDR),
+	       mqd->cp_mqd_base_addr_lo);
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_MQD_BASE_ADDR_HI),
+	       mqd->cp_mqd_base_addr_hi);
+
+	/* set MQD vmid to 0 */
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_MQD_CONTROL),
+	       mqd->cp_mqd_control);
+
+	/* set the pointer to the HQD, this is similar CP_RB0_BASE/_HI */
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_HQD_PQ_BASE),
+	       mqd->cp_hqd_pq_base_lo);
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_HQD_PQ_BASE_HI),
+	       mqd->cp_hqd_pq_base_hi);
+
+	/* set up the HQD, this is similar to CP_RB0_CNTL */
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_HQD_PQ_CONTROL),
+	       mqd->cp_hqd_pq_control);
+
+	/* set the wb address whether it's enabled or not */
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_HQD_PQ_RPTR_REPORT_ADDR),
+				mqd->cp_hqd_pq_rptr_report_addr_lo);
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_HQD_PQ_RPTR_REPORT_ADDR_HI),
+				mqd->cp_hqd_pq_rptr_report_addr_hi);
+
+	/* only used if CP_PQ_WPTR_POLL_CNTL.CP_PQ_WPTR_POLL_CNTL__EN_MASK=1 */
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_HQD_PQ_WPTR_POLL_ADDR),
+	       mqd->cp_hqd_pq_wptr_poll_addr_lo);
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_HQD_PQ_WPTR_POLL_ADDR_HI),
+	       mqd->cp_hqd_pq_wptr_poll_addr_hi);
+
+	/* enable the doorbell if requested */
+	if (ring->use_doorbell) {
+		WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_MEC_DOORBELL_RANGE_LOWER),
+					(AMDGPU_DOORBELL64_KIQ *2) << 2);
+		WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_MEC_DOORBELL_RANGE_UPPER),
+					(AMDGPU_DOORBELL64_USERQUEUE_END * 2) << 2);
+	}
+
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_HQD_PQ_DOORBELL_CONTROL),
+	       mqd->cp_hqd_pq_doorbell_control);
+
+	/* reset read and write pointers, similar to CP_RB0_WPTR/_RPTR */
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_HQD_PQ_WPTR_LO),
+	       mqd->cp_hqd_pq_wptr_lo);
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_HQD_PQ_WPTR_HI),
+	       mqd->cp_hqd_pq_wptr_hi);
+
+	/* set the vmid for the queue */
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_HQD_VMID), mqd->cp_hqd_vmid);
+
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_HQD_PERSISTENT_STATE),
+	       mqd->cp_hqd_persistent_state);
+
+	/* activate the queue */
+	WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_HQD_ACTIVE),
+	       mqd->cp_hqd_active);
+
+	if (ring->use_doorbell) {
+		tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmCP_PQ_STATUS));
+		tmp = REG_SET_FIELD(tmp, CP_PQ_STATUS, DOORBELL_ENABLE, 1);
+		WREG32(SOC15_REG_OFFSET(GC, 0, mmCP_PQ_STATUS), tmp);
+	}
+
+	return 0;
+}
+
+static int gfx_v9_0_kiq_init_queue(struct amdgpu_ring *ring,
+				   struct v9_mqd *mqd,
+				   u64 mqd_gpu_addr)
+{
+	struct amdgpu_device *adev = ring->adev;
+	struct amdgpu_kiq *kiq = &adev->gfx.kiq;
+	uint64_t eop_gpu_addr;
+	bool is_kiq = (ring->funcs->type == AMDGPU_RING_TYPE_KIQ);
+	int mqd_idx = AMDGPU_MAX_COMPUTE_RINGS;
+
+	if (is_kiq) {
+		eop_gpu_addr = kiq->eop_gpu_addr;
+		gfx_v9_0_kiq_setting(&kiq->ring);
+	} else {
+		eop_gpu_addr = adev->gfx.mec.hpd_eop_gpu_addr +
+					ring->queue * MEC_HPD_SIZE;
+		mqd_idx = ring - &adev->gfx.compute_ring[0];
+	}
+
+	if (!adev->gfx.in_reset) {
+		memset((void *)mqd, 0, sizeof(*mqd));
+		mutex_lock(&adev->srbm_mutex);
+		soc15_grbm_select(adev, ring->me, ring->pipe, ring->queue, 0);
+		gfx_v9_0_mqd_init(adev, mqd, mqd_gpu_addr, eop_gpu_addr, ring);
+		if (is_kiq)
+			gfx_v9_0_kiq_init_register(adev, mqd, ring);
+		soc15_grbm_select(adev, 0, 0, 0, 0);
+		mutex_unlock(&adev->srbm_mutex);
+
+	} else { /* for GPU_RESET case */
+		/* reset MQD to a clean status */
+
+		/* reset ring buffer */
+		ring->wptr = 0;
+
+		if (is_kiq) {
+		    mutex_lock(&adev->srbm_mutex);
+		    soc15_grbm_select(adev, ring->me, ring->pipe, ring->queue, 0);
+		    gfx_v9_0_kiq_init_register(adev, mqd, ring);
+		    soc15_grbm_select(adev, 0, 0, 0, 0);
+		    mutex_unlock(&adev->srbm_mutex);
+		}
+	}
+
+	if (is_kiq)
+		gfx_v9_0_kiq_enable(ring);
+	else
+		gfx_v9_0_map_queue_enable(&kiq->ring, ring);
+
+	return 0;
+}
+
+static int gfx_v9_0_kiq_resume(struct amdgpu_device *adev)
+{
+	struct amdgpu_ring *ring = NULL;
+	int r = 0, i;
+
+	gfx_v9_0_cp_compute_enable(adev, true);
+
+	ring = &adev->gfx.kiq.ring;
+	if (!amdgpu_bo_kmap(ring->mqd_obj, (void **)&ring->mqd_ptr)) {
+		r = gfx_v9_0_kiq_init_queue(ring, ring->mqd_ptr, ring->mqd_gpu_addr);
+		amdgpu_bo_kunmap(ring->mqd_obj);
+		ring->mqd_ptr = NULL;
+		if (r)
+			return r;
+	} else {
+		return r;
+	}
+
+	for (i = 0; i < adev->gfx.num_compute_rings; i++) {
+		ring = &adev->gfx.compute_ring[i];
+		if (!amdgpu_bo_kmap(ring->mqd_obj, (void **)&ring->mqd_ptr)) {
+			r = gfx_v9_0_kiq_init_queue(ring, ring->mqd_ptr, ring->mqd_gpu_addr);
+			amdgpu_bo_kunmap(ring->mqd_obj);
+			ring->mqd_ptr = NULL;
+			if (r)
+				return r;
+		} else {
+			return r;
+		}
+	}
+
+	return 0;
+}
+
 static int gfx_v9_0_cp_resume(struct amdgpu_device *adev)
 {
 	int r,i;
@@ -1780,7 +2227,10 @@ static int gfx_v9_0_cp_resume(struct amdgpu_device *adev)
 	if (r)
 		return r;
 
-	r = gfx_v9_0_cp_compute_resume(adev);
+	if (amdgpu_sriov_vf(adev))
+		r = gfx_v9_0_kiq_resume(adev);
+	else
+		r = gfx_v9_0_cp_compute_resume(adev);
 	if (r)
 		return r;
 
@@ -1799,6 +2249,14 @@ static int gfx_v9_0_cp_resume(struct amdgpu_device *adev)
 			ring->ready = false;
 	}
 
+	if (amdgpu_sriov_vf(adev)) {
+		ring = &adev->gfx.kiq.ring;
+		ring->ready = true;
+		r = amdgpu_ring_test_ring(ring);
+		if (r)
+			ring->ready = false;
+	}
+
 	gfx_v9_0_enable_gui_idle_interrupt(adev, true);
 
 	return 0;
@@ -1840,6 +2298,10 @@ static int gfx_v9_0_hw_fini(void *handle)
 
 	amdgpu_irq_put(adev, &adev->gfx.priv_reg_irq, 0);
 	amdgpu_irq_put(adev, &adev->gfx.priv_inst_irq, 0);
+	if (amdgpu_sriov_vf(adev)) {
+		pr_debug("For SRIOV client, shouldn't do anything.\n");
+		return 0;
+	}
 	gfx_v9_0_cp_enable(adev, false);
 	gfx_v9_0_rlc_stop(adev);
 	gfx_v9_0_cp_compute_fini(adev);
diff --git a/drivers/gpu/drm/amd/amdgpu/soc15d.h b/drivers/gpu/drm/amd/amdgpu/soc15d.h
index c47715d..7d29329 100644
--- a/drivers/gpu/drm/amd/amdgpu/soc15d.h
+++ b/drivers/gpu/drm/amd/amdgpu/soc15d.h
@@ -258,6 +258,8 @@
 #define	PACKET3_WAIT_ON_CE_COUNTER			0x86
 #define	PACKET3_WAIT_ON_DE_COUNTER_DIFF			0x88
 #define	PACKET3_SWITCH_BUFFER				0x8B
+#define PACKET3_SET_RESOURCES				0xA0
+#define PACKET3_MAP_QUEUES				0xA2
 
 #define VCE_CMD_NO_OP		0x00000000
 #define VCE_CMD_END		0x00000001
-- 
2.5.5

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 101+ messages in thread

* [PATCH 080/100] drm/amdgpu:impl gfx9 cond_exec
       [not found] ` <1490041835-11255-1-git-send-email-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (63 preceding siblings ...)
  2017-03-20 20:30   ` [PATCH 079/100] drm/amdgpu: init kiq and kcq for vega10 Alex Deucher
@ 2017-03-20 20:30   ` Alex Deucher
  2017-03-20 20:30   ` [PATCH 081/100] drm/amdgpu/gfx9: impl gfx9 meta data emit Alex Deucher
                     ` (20 subsequent siblings)
  85 siblings, 0 replies; 101+ messages in thread
From: Alex Deucher @ 2017-03-20 20:30 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
  Cc: Alex Deucher, Xiangliang Yu, Monk Liu

From: Monk Liu <Monk.Liu@amd.com>

Conditional execution (COND_EXEC) support is needed for virtualization.
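As a sketch of the conditional-execution pattern this patch implements: emit the COND_EXEC packet with a dummy DW count, remember the slot, and patch the real count in once the conditional span has been written. The ring model below is a made-up stand-in for `struct amdgpu_ring`, not the driver's actual type; only the offset arithmetic mirrors the patch.

```c
#include <assert.h>
#include <stdint.h>

/* Toy ring model: just a DWORD buffer and a write pointer. */
struct toy_ring {
	uint32_t buf[256];
	unsigned wptr;     /* in DWORDs */
	unsigned size_dw;  /* ring size in DWORDs */
};

static void ring_write(struct toy_ring *r, uint32_t v)
{
	r->buf[r->wptr++ % r->size_dw] = v;
}

/* Emit a COND_EXEC packet and return the offset of the DW-count slot,
 * which holds a dummy value until the size of the conditional span is
 * known (mirrors gfx_v9_0_ring_emit_init_cond_exec). */
static unsigned init_cond_exec(struct toy_ring *r, uint64_t gpu_addr)
{
	unsigned ret;

	ring_write(r, 0x00000000); /* PACKET3(COND_EXEC, 3) header placeholder */
	ring_write(r, (uint32_t)gpu_addr);         /* cond_exe_gpu_addr lo */
	ring_write(r, (uint32_t)(gpu_addr >> 32)); /* cond_exe_gpu_addr hi */
	ring_write(r, 0);
	ret = r->wptr;
	ring_write(r, 0x55aa55aa); /* dummy value, patched later */
	return ret;
}

/* Patch in the number of DWs emitted since the dummy slot, handling the
 * case where the write pointer wrapped past the end of the ring
 * (mirrors gfx_v9_0_ring_emit_patch_cond_exec). */
static void patch_cond_exec(struct toy_ring *r, unsigned offset)
{
	unsigned cur = r->wptr - 1;

	if (cur > offset)
		r->buf[offset % r->size_dw] = cur - offset;
	else
		r->buf[offset % r->size_dw] = r->size_dw - offset + cur;
}
```

With this model, emitting ten DWs between init and patch leaves 10 in the count slot, matching the `cur - offset` arithmetic in the patch.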

Signed-off-by: Monk Liu <Monk.Liu@amd.com>
Signed-off-by: Xiangliang Yu <Xiangliang.Yu@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c | 26 ++++++++++++++++++++++++++
 1 file changed, 26 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
index 2f833ca..6f266d0 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
@@ -3218,6 +3218,30 @@ static void gfx_v9_ring_emit_cntxcntl(struct amdgpu_ring *ring, uint32_t flags)
 	amdgpu_ring_write(ring, 0);
 }
 
+static unsigned gfx_v9_0_ring_emit_init_cond_exec(struct amdgpu_ring *ring)
+{
+	unsigned ret;
+	amdgpu_ring_write(ring, PACKET3(PACKET3_COND_EXEC, 3));
+	amdgpu_ring_write(ring, lower_32_bits(ring->cond_exe_gpu_addr));
+	amdgpu_ring_write(ring, upper_32_bits(ring->cond_exe_gpu_addr));
+	amdgpu_ring_write(ring, 0); /* discard following DWs if *cond_exe_gpu_addr == 0 */
+	ret = ring->wptr;
+	amdgpu_ring_write(ring, 0x55aa55aa); /* patch dummy value later */
+	return ret;
+}
+
+static void gfx_v9_0_ring_emit_patch_cond_exec(struct amdgpu_ring *ring, unsigned offset)
+{
+	unsigned cur;
+	BUG_ON(ring->ring[offset] != 0x55aa55aa);
+
+	cur = ring->wptr - 1;
+	if (likely(cur > offset))
+		ring->ring[offset] = cur - offset;
+	else
+		ring->ring[offset] = (ring->ring_size>>2) - offset + cur;
+}
+
 static void gfx_v9_0_ring_emit_rreg(struct amdgpu_ring *ring, uint32_t reg)
 {
 	struct amdgpu_device *adev = ring->adev;
@@ -3569,6 +3593,8 @@ static const struct amdgpu_ring_funcs gfx_v9_0_ring_funcs_gfx = {
 	.pad_ib = amdgpu_ring_generic_pad_ib,
 	.emit_switch_buffer = gfx_v9_ring_emit_sb,
 	.emit_cntxcntl = gfx_v9_ring_emit_cntxcntl,
+	.init_cond_exec = gfx_v9_0_ring_emit_init_cond_exec,
+	.patch_cond_exec = gfx_v9_0_ring_emit_patch_cond_exec,
 };
 
 static const struct amdgpu_ring_funcs gfx_v9_0_ring_funcs_compute = {
-- 
2.5.5


* [PATCH 081/100] drm/amdgpu/gfx9: impl gfx9 meta data emit
       [not found] ` <1490041835-11255-1-git-send-email-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (64 preceding siblings ...)
  2017-03-20 20:30   ` [PATCH 080/100] drm/amdgpu:impl gfx9 cond_exec Alex Deucher
@ 2017-03-20 20:30   ` Alex Deucher
  2017-03-20 20:30   ` [PATCH 082/100] drm/amdgpu:bypass RLC init for SRIOV Alex Deucher
                     ` (19 subsequent siblings)
  85 siblings, 0 replies; 101+ messages in thread
From: Alex Deucher @ 2017-03-20 20:30 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
  Cc: Alex Deucher, Xiangliang Yu, Monk Liu

From: Xiangliang Yu <Xiangliang.Yu@amd.com>

Insert the CE meta data before the cntx_cntl packet and the DE meta data after it.
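The `cnt` arithmetic in the new emit helpers can be sketched as a small stand-alone check (not driver code). A PACKET3 count field encodes the number of DWs following the header minus one, and a WRITE_DATA body is one control DW plus two address DWs plus the payload, which is where the `+ 4 - 2` in the patch comes from.

```c
#include <assert.h>

/* WRITE_DATA body = 1 control DW + 2 address DWs + payload;
 * PACKET3 count = (DWs after the header) - 1, hence payload + 4 - 2. */
static unsigned write_data_count(unsigned payload_dw)
{
	return payload_dw + 4 - 2;
}
```

Per the comments in `v9_structs.h`, `ce_payload` is 10 DWs (count 12) and `de_payload` is 27 DWs (count 29).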

Signed-off-by: Xiangliang Yu <Xiangliang.Yu@amd.com>
Signed-off-by: Monk Liu <Monk.Liu@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Acked-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c    | 47 ++++++++++++++++++++++
 drivers/gpu/drm/amd/include/v9_structs.h | 68 ++++++++++++++++++++++++++++++++
 2 files changed, 115 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
index 6f266d0..2241075 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
@@ -3189,10 +3189,54 @@ static void gfx_v9_ring_emit_sb(struct amdgpu_ring *ring)
 	amdgpu_ring_write(ring, 0);
 }
 
+static void gfx_v9_0_ring_emit_ce_meta(struct amdgpu_ring *ring)
+{
+	static struct v9_ce_ib_state ce_payload = {0};
+	uint64_t csa_addr;
+	int cnt;
+
+	cnt = (sizeof(ce_payload) >> 2) + 4 - 2;
+	csa_addr = AMDGPU_VA_RESERVED_SIZE - 2 * 4096;
+
+	amdgpu_ring_write(ring, PACKET3(PACKET3_WRITE_DATA, cnt));
+	amdgpu_ring_write(ring, (WRITE_DATA_ENGINE_SEL(2) |
+				 WRITE_DATA_DST_SEL(8) |
+				 WR_CONFIRM) |
+				 WRITE_DATA_CACHE_POLICY(0));
+	amdgpu_ring_write(ring, lower_32_bits(csa_addr + offsetof(struct v9_gfx_meta_data, ce_payload)));
+	amdgpu_ring_write(ring, upper_32_bits(csa_addr + offsetof(struct v9_gfx_meta_data, ce_payload)));
+	amdgpu_ring_write_multiple(ring, (void *)&ce_payload, sizeof(ce_payload) >> 2);
+}
+
+static void gfx_v9_0_ring_emit_de_meta(struct amdgpu_ring *ring)
+{
+	static struct v9_de_ib_state de_payload = {0};
+	uint64_t csa_addr, gds_addr;
+	int cnt;
+
+	csa_addr = AMDGPU_VA_RESERVED_SIZE - 2 * 4096;
+	gds_addr = csa_addr + 4096;
+	de_payload.gds_backup_addrlo = lower_32_bits(gds_addr);
+	de_payload.gds_backup_addrhi = upper_32_bits(gds_addr);
+
+	cnt = (sizeof(de_payload) >> 2) + 4 - 2;
+	amdgpu_ring_write(ring, PACKET3(PACKET3_WRITE_DATA, cnt));
+	amdgpu_ring_write(ring, (WRITE_DATA_ENGINE_SEL(1) |
+				 WRITE_DATA_DST_SEL(8) |
+				 WR_CONFIRM) |
+				 WRITE_DATA_CACHE_POLICY(0));
+	amdgpu_ring_write(ring, lower_32_bits(csa_addr + offsetof(struct v9_gfx_meta_data, de_payload)));
+	amdgpu_ring_write(ring, upper_32_bits(csa_addr + offsetof(struct v9_gfx_meta_data, de_payload)));
+	amdgpu_ring_write_multiple(ring, (void *)&de_payload, sizeof(de_payload) >> 2);
+}
+
 static void gfx_v9_ring_emit_cntxcntl(struct amdgpu_ring *ring, uint32_t flags)
 {
 	uint32_t dw2 = 0;
 
+	if (amdgpu_sriov_vf(ring->adev))
+		gfx_v9_0_ring_emit_ce_meta(ring);
+
 	dw2 |= 0x80000000; /* set load_enable otherwise this package is just NOPs */
 	if (flags & AMDGPU_HAVE_CTX_SWITCH) {
 		/* set load_global_config & load_global_uconfig */
@@ -3216,6 +3260,9 @@ static void gfx_v9_ring_emit_cntxcntl(struct amdgpu_ring *ring, uint32_t flags)
 	amdgpu_ring_write(ring, PACKET3(PACKET3_CONTEXT_CONTROL, 1));
 	amdgpu_ring_write(ring, dw2);
 	amdgpu_ring_write(ring, 0);
+
+	if (amdgpu_sriov_vf(ring->adev))
+		gfx_v9_0_ring_emit_de_meta(ring);
 }
 
 static unsigned gfx_v9_0_ring_emit_init_cond_exec(struct amdgpu_ring *ring)
diff --git a/drivers/gpu/drm/amd/include/v9_structs.h b/drivers/gpu/drm/amd/include/v9_structs.h
index e7508a3..9a9e6c7 100644
--- a/drivers/gpu/drm/amd/include/v9_structs.h
+++ b/drivers/gpu/drm/amd/include/v9_structs.h
@@ -672,4 +672,72 @@ struct v9_mqd {
 	uint32_t reserved_511;
 };
 
+/* from vega10 all CSA format is shifted to chain ib compatible mode */
+struct v9_ce_ib_state {
+    /* section of non chained ib part */
+    uint32_t ce_ib_completion_status;
+    uint32_t ce_constegnine_count;
+    uint32_t ce_ibOffset_ib1;
+    uint32_t ce_ibOffset_ib2;
+
+    /* section of chained ib */
+    uint32_t ce_chainib_addrlo_ib1;
+    uint32_t ce_chainib_addrlo_ib2;
+    uint32_t ce_chainib_addrhi_ib1;
+    uint32_t ce_chainib_addrhi_ib2;
+    uint32_t ce_chainib_size_ib1;
+    uint32_t ce_chainib_size_ib2;
+}; /* total 10 DWORD */
+
+struct v9_de_ib_state {
+    /* section of non chained ib part */
+    uint32_t ib_completion_status;
+    uint32_t de_constEngine_count;
+    uint32_t ib_offset_ib1;
+    uint32_t ib_offset_ib2;
+
+    /* section of chained ib */
+    uint32_t chain_ib_addrlo_ib1;
+    uint32_t chain_ib_addrlo_ib2;
+    uint32_t chain_ib_addrhi_ib1;
+    uint32_t chain_ib_addrhi_ib2;
+    uint32_t chain_ib_size_ib1;
+    uint32_t chain_ib_size_ib2;
+
+    /* section of non chained ib part */
+    uint32_t preamble_begin_ib1;
+    uint32_t preamble_begin_ib2;
+    uint32_t preamble_end_ib1;
+    uint32_t preamble_end_ib2;
+
+    /* section of chained ib */
+    uint32_t chain_ib_pream_addrlo_ib1;
+    uint32_t chain_ib_pream_addrlo_ib2;
+    uint32_t chain_ib_pream_addrhi_ib1;
+    uint32_t chain_ib_pream_addrhi_ib2;
+
+    /* section of non chained ib part */
+    uint32_t draw_indirect_baseLo;
+    uint32_t draw_indirect_baseHi;
+    uint32_t disp_indirect_baseLo;
+    uint32_t disp_indirect_baseHi;
+    uint32_t gds_backup_addrlo;
+    uint32_t gds_backup_addrhi;
+    uint32_t index_base_addrlo;
+    uint32_t index_base_addrhi;
+    uint32_t sample_cntl;
+}; /* Total of 27 DWORD */
+
+struct v9_gfx_meta_data {
+    /* 10 DWORD, address must be 4KB aligned */
+    struct v9_ce_ib_state ce_payload;
+    uint32_t reserved1[54];
+    /* 27 DWORD, address must be 64B aligned */
+    struct v9_de_ib_state de_payload;
+    /* PFP IB base address which get pre-empted */
+    uint32_t DeIbBaseAddrLo;
+    uint32_t DeIbBaseAddrHi;
+    uint32_t reserved2[931];
+}; /* Total of 4K Bytes */
+
 #endif /* V9_STRUCTS_H_ */
-- 
2.5.5


* [PATCH 082/100] drm/amdgpu:bypass RLC init for SRIOV
       [not found] ` <1490041835-11255-1-git-send-email-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (65 preceding siblings ...)
  2017-03-20 20:30   ` [PATCH 081/100] drm/amdgpu/gfx9: impl gfx9 meta data emit Alex Deucher
@ 2017-03-20 20:30   ` Alex Deucher
  2017-03-20 20:30   ` [PATCH 083/100] drm/amdgpu/sdma4:re-org SDMA initial steps for sriov Alex Deucher
                     ` (18 subsequent siblings)
  85 siblings, 0 replies; 101+ messages in thread
From: Alex Deucher @ 2017-03-20 20:30 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
  Cc: Alex Deucher, Xiangliang Yu, Monk Liu

From: Monk Liu <Monk.Liu@amd.com>

One issue remains unresolved for the RLC:
the RLC misbehaves completely if a soft reset happens
before the RLC ucode is loaded.

To work around this, the guest driver can skip RLC
initialization entirely, since GIM has already fully
initialized the RLC.

Signed-off-by: Monk Liu <Monk.Liu@amd.com>
Signed-off-by: Xiangliang Yu <Xiangliang.Yu@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Acked-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
index 2241075..8e5367d 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
@@ -1459,6 +1459,9 @@ static int gfx_v9_0_rlc_resume(struct amdgpu_device *adev)
 {
 	int r;
 
+	if (amdgpu_sriov_vf(adev))
+		return 0;
+
 	gfx_v9_0_rlc_stop(adev);
 
 	/* disable CG */
-- 
2.5.5


* [PATCH 083/100] drm/amdgpu/sdma4:re-org SDMA initial steps for sriov
       [not found] ` <1490041835-11255-1-git-send-email-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (66 preceding siblings ...)
  2017-03-20 20:30   ` [PATCH 082/100] drm/amdgpu:bypass RLC init for SRIOV Alex Deucher
@ 2017-03-20 20:30   ` Alex Deucher
  2017-03-20 20:30   ` [PATCH 084/100] drm/amdgpu/soc15: bypass PSP for VF Alex Deucher
                     ` (17 subsequent siblings)
  85 siblings, 0 replies; 101+ messages in thread
From: Alex Deucher @ 2017-03-20 20:30 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
  Cc: Alex Deucher, Xiangliang Yu, Monk Liu

From: Monk Liu <Monk.Liu@amd.com>

Rework sdma init to support SR-IOV.

Signed-off-by: Monk Liu <Monk.Liu@amd.com>
Signed-off-by: Xiangliang Yu <Xiangliang.Yu@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Acked-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
index b460e00..ee3b4a9 100644
--- a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
@@ -522,6 +522,7 @@ static int sdma_v4_0_gfx_resume(struct amdgpu_device *adev)
 	u32 wb_offset;
 	u32 doorbell;
 	u32 doorbell_offset;
+	u32 temp;
 	int i,r;
 
 	for (i = 0; i < adev->sdma.num_instances; i++) {
@@ -576,6 +577,16 @@ static int sdma_v4_0_gfx_resume(struct amdgpu_device *adev)
 		WREG32(sdma_v4_0_get_reg_offset(i, mmSDMA0_GFX_DOORBELL_OFFSET), doorbell_offset);
 		nbio_v6_1_sdma_doorbell_range(adev, i, ring->use_doorbell, ring->doorbell_index);
 
+		/* set utc l1 enable flag always to 1 */
+		temp = RREG32(sdma_v4_0_get_reg_offset(i, mmSDMA0_CNTL));
+		temp = REG_SET_FIELD(temp, SDMA0_CNTL, UTC_L1_ENABLE, 1);
+		WREG32(sdma_v4_0_get_reg_offset(i, mmSDMA0_CNTL), temp);
+
+		/* unhalt engine */
+		temp = RREG32(sdma_v4_0_get_reg_offset(i, mmSDMA0_F32_CNTL));
+		temp = REG_SET_FIELD(temp, SDMA0_F32_CNTL, HALT, 0);
+		WREG32(sdma_v4_0_get_reg_offset(i, mmSDMA0_F32_CNTL), temp);
+
 		/* enable DMA RB */
 		rb_cntl = REG_SET_FIELD(rb_cntl, SDMA0_GFX_RB_CNTL, RB_ENABLE, 1);
 		WREG32(sdma_v4_0_get_reg_offset(i, mmSDMA0_GFX_RB_CNTL), rb_cntl);
@@ -690,6 +701,15 @@ static int sdma_v4_0_start(struct amdgpu_device *adev)
 {
 	int r;
 
+	if (amdgpu_sriov_vf(adev)) {
+		/* disable RB and halt engine */
+		sdma_v4_0_enable(adev, false);
+
+		/* set RB registers */
+		r = sdma_v4_0_gfx_resume(adev);
+		return r;
+	}
+
 	if (adev->firmware.load_type != AMDGPU_FW_LOAD_PSP) {
 		DRM_INFO("Loading via direct write\n");
 		r = sdma_v4_0_load_microcode(adev);
-- 
2.5.5


* [PATCH 084/100] drm/amdgpu/soc15: bypass PSP for VF
       [not found] ` <1490041835-11255-1-git-send-email-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (67 preceding siblings ...)
  2017-03-20 20:30   ` [PATCH 083/100] drm/amdgpu/sdma4:re-org SDMA initial steps for sriov Alex Deucher
@ 2017-03-20 20:30   ` Alex Deucher
  2017-03-20 20:30   ` [PATCH 085/100] drm/amdgpu/gmc9: no need use kiq in vega10 tlb flush Alex Deucher
                     ` (16 subsequent siblings)
  85 siblings, 0 replies; 101+ messages in thread
From: Alex Deucher @ 2017-03-20 20:30 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
  Cc: Alex Deucher, Xiangliang Yu, Monk Liu

From: Xiangliang Yu <Xiangliang.Yu@amd.com>

Bypass PSP block for VF device.

Signed-off-by: Xiangliang Yu <Xiangliang.Yu@amd.com>
Signed-off-by: Monk Liu <Monk.Liu@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/soc15.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/soc15.c b/drivers/gpu/drm/amd/amdgpu/soc15.c
index 263f602..b197288 100644
--- a/drivers/gpu/drm/amd/amdgpu/soc15.c
+++ b/drivers/gpu/drm/amd/amdgpu/soc15.c
@@ -478,7 +478,8 @@ int soc15_set_ip_blocks(struct amdgpu_device *adev)
 		amdgpu_ip_block_add(adev, &mmhub_v1_0_ip_block);
 		amdgpu_ip_block_add(adev, &gmc_v9_0_ip_block);
 		amdgpu_ip_block_add(adev, &vega10_ih_ip_block);
-		amdgpu_ip_block_add(adev, &psp_v3_1_ip_block);
+		if (!amdgpu_sriov_vf(adev))
+			amdgpu_ip_block_add(adev, &psp_v3_1_ip_block);
 		amdgpu_ip_block_add(adev, &amdgpu_pp_ip_block);
 #if defined(CONFIG_DRM_AMD_DC)
 		if (amdgpu_device_has_dc_support(adev))
-- 
2.5.5


* [PATCH 085/100] drm/amdgpu/gmc9: no need use kiq in vega10 tlb flush
       [not found] ` <1490041835-11255-1-git-send-email-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (68 preceding siblings ...)
  2017-03-20 20:30   ` [PATCH 084/100] drm/amdgpu/soc15: bypass PSP for VF Alex Deucher
@ 2017-03-20 20:30   ` Alex Deucher
  2017-03-20 20:30   ` [PATCH 086/100] drm/amdgpu/dce_virtual: bypass DPM for vf Alex Deucher
                     ` (15 subsequent siblings)
  85 siblings, 0 replies; 101+ messages in thread
From: Alex Deucher @ 2017-03-20 20:30 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
  Cc: Alex Deucher, Xiangliang Yu, Monk Liu

From: Xiangliang Yu <Xiangliang.Yu@amd.com>

Two reasons:
1. there is a spinlock held around the flush;
2. the VM registers are PF/VF copies, so the VF can access them safely via MMIO.
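The busy-wait in the flush checks one ACK bit per VMID; a minimal model of that test follows, with the bit layout assumed from the patch's `tmp &= 1 << vmid` check rather than taken from a register spec.

```c
#include <assert.h>
#include <stdint.h>

/* Returns nonzero once the invalidate engine has acknowledged the
 * flush for `vmid`; mirrors the `tmp &= 1 << vmid` test in the loop. */
static int inv_ack_done(uint32_t ack_reg, unsigned vmid)
{
	return (ack_reg & (1u << vmid)) != 0;
}
```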

Signed-off-by: Xiangliang Yu <Xiangliang.Yu@amd.com>
Signed-off-by: Monk Liu <Monk.Liu@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
index 5cf0fc3..51a1919 100644
--- a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
@@ -198,11 +198,11 @@ static void gmc_v9_0_gart_flush_gpu_tlb(struct amdgpu_device *adev,
 		struct amdgpu_vmhub *hub = &adev->vmhub[i];
 		u32 tmp = hub->get_invalidate_req(vmid);
 
-		WREG32(hub->vm_inv_eng0_req + eng, tmp);
+		WREG32_NO_KIQ(hub->vm_inv_eng0_req + eng, tmp);
 
 		/* Busy wait for ACK.*/
 		for (j = 0; j < 100; j++) {
-			tmp = RREG32(hub->vm_inv_eng0_ack + eng);
+			tmp = RREG32_NO_KIQ(hub->vm_inv_eng0_ack + eng);
 			tmp &= 1 << vmid;
 			if (tmp)
 				break;
@@ -213,7 +213,7 @@ static void gmc_v9_0_gart_flush_gpu_tlb(struct amdgpu_device *adev,
 
 		/* Wait for ACK with a delay.*/
 		for (j = 0; j < adev->usec_timeout; j++) {
-			tmp = RREG32(hub->vm_inv_eng0_ack + eng);
+			tmp = RREG32_NO_KIQ(hub->vm_inv_eng0_ack + eng);
 			tmp &= 1 << vmid;
 			if (tmp)
 				break;
-- 
2.5.5


* [PATCH 086/100] drm/amdgpu/dce_virtual: bypass DPM for vf
       [not found] ` <1490041835-11255-1-git-send-email-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (69 preceding siblings ...)
  2017-03-20 20:30   ` [PATCH 085/100] drm/amdgpu/gmc9: no need use kiq in vega10 tlb flush Alex Deucher
@ 2017-03-20 20:30   ` Alex Deucher
  2017-03-20 20:30   ` [PATCH 087/100] drm/amdgpu/virt: impl mailbox for ai Alex Deucher
                     ` (14 subsequent siblings)
  85 siblings, 0 replies; 101+ messages in thread
From: Alex Deucher @ 2017-03-20 20:30 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
  Cc: Alex Deucher, Xiangliang Yu, Monk Liu

From: Xiangliang Yu <Xiangliang.Yu@amd.com>

Enabling DPM on a VF floods dmesg with warn_slowpath_null
warnings, and VFs do not support DPM anyway.

Signed-off-by: Xiangliang Yu <Xiangliang.Yu@amd.com>
Signed-off-by: Monk Liu <Monk.Liu@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Acked-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/dce_virtual.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/dce_virtual.c b/drivers/gpu/drm/amd/amdgpu/dce_virtual.c
index 5ee139c..8bb9cfd 100644
--- a/drivers/gpu/drm/amd/amdgpu/dce_virtual.c
+++ b/drivers/gpu/drm/amd/amdgpu/dce_virtual.c
@@ -204,6 +204,9 @@ static void dce_virtual_crtc_dpms(struct drm_crtc *crtc, int mode)
 	struct amdgpu_crtc *amdgpu_crtc = to_amdgpu_crtc(crtc);
 	unsigned type;
 
+	if (amdgpu_sriov_vf(adev))
+		return;
+
 	switch (mode) {
 	case DRM_MODE_DPMS_ON:
 		amdgpu_crtc->enabled = true;
-- 
2.5.5


* [PATCH 087/100] drm/amdgpu/virt: impl mailbox for ai
       [not found] ` <1490041835-11255-1-git-send-email-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (70 preceding siblings ...)
  2017-03-20 20:30   ` [PATCH 086/100] drm/amdgpu/dce_virtual: bypass DPM for vf Alex Deucher
@ 2017-03-20 20:30   ` Alex Deucher
  2017-03-20 20:30   ` [PATCH 088/100] drm/amdgpu/soc15: init virt ops for vf Alex Deucher
                     ` (13 subsequent siblings)
  85 siblings, 0 replies; 101+ messages in thread
From: Alex Deucher @ 2017-03-20 20:30 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Alex Deucher, Xiangliang Yu

From: Xiangliang Yu <Xiangliang.Yu@amd.com>

Implement the mailbox protocol for AI so that the guest VF can
communicate with the GPU hypervisor.
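The request/ack handshake this patch adds boils down to polling a flag with a millisecond timeout. Below is a stand-alone model of that loop; the register access and the bit position are simplified stand-ins for the `BIF_BX_PF0_MAILBOX_CONTROL` fields, not the real layout.

```c
#include <assert.h>
#include <stdint.h>

#define TRN_MSG_ACK (1u << 1)  /* assumed bit position, for the model only */

/* Poll *reg until the ACK bit is set or `timeout` ticks elapse; the
 * real xgpu_ai_poll_ack sleeps 1 ms per iteration and returns -ETIME. */
static int poll_ack(const uint32_t *reg, int timeout)
{
	while (!(*reg & TRN_MSG_ACK)) {
		if (timeout-- <= 0)
			return -1;
	}
	return 0;
}
```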

Signed-off-by: Xiangliang Yu <Xiangliang.Yu@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Monk Liu <Monk.Liu@amd.com>
Acked-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/Makefile   |   2 +-
 drivers/gpu/drm/amd/amdgpu/mxgpu_ai.c | 207 ++++++++++++++++++++++++++++++++++
 drivers/gpu/drm/amd/amdgpu/mxgpu_ai.h |  47 ++++++++
 3 files changed, 255 insertions(+), 1 deletion(-)
 create mode 100644 drivers/gpu/drm/amd/amdgpu/mxgpu_ai.c
 create mode 100644 drivers/gpu/drm/amd/amdgpu/mxgpu_ai.h

diff --git a/drivers/gpu/drm/amd/amdgpu/Makefile b/drivers/gpu/drm/amd/amdgpu/Makefile
index a377fdb..d227695 100644
--- a/drivers/gpu/drm/amd/amdgpu/Makefile
+++ b/drivers/gpu/drm/amd/amdgpu/Makefile
@@ -40,7 +40,7 @@ amdgpu-$(CONFIG_DRM_AMDGPU_CIK)+= cik.o cik_ih.o kv_smc.o kv_dpm.o \
 amdgpu-$(CONFIG_DRM_AMDGPU_SI)+= si.o gmc_v6_0.o gfx_v6_0.o si_ih.o si_dma.o dce_v6_0.o si_dpm.o si_smc.o
 
 amdgpu-y += \
-	vi.o mxgpu_vi.o nbio_v6_1.o soc15.o
+	vi.o mxgpu_vi.o nbio_v6_1.o soc15.o mxgpu_ai.o
 
 # add GMC block
 amdgpu-y += \
diff --git a/drivers/gpu/drm/amd/amdgpu/mxgpu_ai.c b/drivers/gpu/drm/amd/amdgpu/mxgpu_ai.c
new file mode 100644
index 0000000..cfd5e54
--- /dev/null
+++ b/drivers/gpu/drm/amd/amdgpu/mxgpu_ai.c
@@ -0,0 +1,207 @@
+/*
+ * Copyright 2014 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+
+#include "amdgpu.h"
+#include "vega10/soc15ip.h"
+#include "vega10/NBIO/nbio_6_1_offset.h"
+#include "vega10/NBIO/nbio_6_1_sh_mask.h"
+#include "vega10/GC/gc_9_0_offset.h"
+#include "vega10/GC/gc_9_0_sh_mask.h"
+#include "soc15.h"
+#include "soc15_common.h"
+#include "mxgpu_ai.h"
+
+static void xgpu_ai_mailbox_send_ack(struct amdgpu_device *adev)
+{
+	u32 reg;
+	int timeout = AI_MAILBOX_TIMEDOUT;
+	u32 mask = REG_FIELD_MASK(BIF_BX_PF0_MAILBOX_CONTROL, RCV_MSG_VALID);
+
+	reg = RREG32_NO_KIQ(SOC15_REG_OFFSET(NBIO, 0,
+					     mmBIF_BX_PF0_MAILBOX_CONTROL));
+	reg = REG_SET_FIELD(reg, BIF_BX_PF0_MAILBOX_CONTROL, RCV_MSG_ACK, 1);
+	WREG32_NO_KIQ(SOC15_REG_OFFSET(NBIO, 0,
+				       mmBIF_BX_PF0_MAILBOX_CONTROL), reg);
+
+	/*Wait for RCV_MSG_VALID to be 0*/
+	reg = RREG32_NO_KIQ(SOC15_REG_OFFSET(NBIO, 0,
+					     mmBIF_BX_PF0_MAILBOX_CONTROL));
+	while (reg & mask) {
+		if (timeout <= 0) {
+			pr_err("RCV_MSG_VALID is not cleared\n");
+			break;
+		}
+		mdelay(1);
+		timeout -= 1;
+
+		reg = RREG32_NO_KIQ(SOC15_REG_OFFSET(NBIO, 0,
+						     mmBIF_BX_PF0_MAILBOX_CONTROL));
+	}
+}
+
+static void xgpu_ai_mailbox_set_valid(struct amdgpu_device *adev, bool val)
+{
+	u32 reg;
+
+	reg = RREG32_NO_KIQ(SOC15_REG_OFFSET(NBIO, 0,
+					     mmBIF_BX_PF0_MAILBOX_CONTROL));
+	reg = REG_SET_FIELD(reg, BIF_BX_PF0_MAILBOX_CONTROL,
+			    TRN_MSG_VALID, val ? 1 : 0);
+	WREG32_NO_KIQ(SOC15_REG_OFFSET(NBIO, 0, mmBIF_BX_PF0_MAILBOX_CONTROL),
+		      reg);
+}
+
+static void xgpu_ai_mailbox_trans_msg(struct amdgpu_device *adev,
+				      enum idh_request req)
+{
+	u32 reg;
+
+	reg = RREG32_NO_KIQ(SOC15_REG_OFFSET(NBIO, 0,
+					     mmBIF_BX_PF0_MAILBOX_MSGBUF_TRN_DW0));
+	reg = REG_SET_FIELD(reg, BIF_BX_PF0_MAILBOX_MSGBUF_TRN_DW0,
+			    MSGBUF_DATA, req);
+	WREG32_NO_KIQ(SOC15_REG_OFFSET(NBIO, 0, mmBIF_BX_PF0_MAILBOX_MSGBUF_TRN_DW0),
+		      reg);
+
+	xgpu_ai_mailbox_set_valid(adev, true);
+}
+
+static int xgpu_ai_mailbox_rcv_msg(struct amdgpu_device *adev,
+				   enum idh_event event)
+{
+	u32 reg;
+	u32 mask = REG_FIELD_MASK(BIF_BX_PF0_MAILBOX_CONTROL, RCV_MSG_VALID);
+
+	if (event != IDH_FLR_NOTIFICATION_CMPL) {
+		reg = RREG32_NO_KIQ(SOC15_REG_OFFSET(NBIO, 0,
+						     mmBIF_BX_PF0_MAILBOX_CONTROL));
+		if (!(reg & mask))
+			return -ENOENT;
+	}
+
+	reg = RREG32_NO_KIQ(SOC15_REG_OFFSET(NBIO, 0,
+					     mmBIF_BX_PF0_MAILBOX_MSGBUF_RCV_DW0));
+	if (reg != event)
+		return -ENOENT;
+
+	xgpu_ai_mailbox_send_ack(adev);
+
+	return 0;
+}
+
+static int xgpu_ai_poll_ack(struct amdgpu_device *adev)
+{
+	int r = 0, timeout = AI_MAILBOX_TIMEDOUT;
+	u32 mask = REG_FIELD_MASK(BIF_BX_PF0_MAILBOX_CONTROL, TRN_MSG_ACK);
+	u32 reg;
+
+	reg = RREG32_NO_KIQ(SOC15_REG_OFFSET(NBIO, 0,
+					     mmBIF_BX_PF0_MAILBOX_CONTROL));
+	while (!(reg & mask)) {
+		if (timeout <= 0) {
+			pr_err("Didn't get ack from pf.\n");
+			r = -ETIME;
+			break;
+		}
+		msleep(1);
+		timeout -= 1;
+
+		reg = RREG32_NO_KIQ(SOC15_REG_OFFSET(NBIO, 0,
+						     mmBIF_BX_PF0_MAILBOX_CONTROL));
+	}
+
+	return r;
+}
+
+static int xgpu_vi_poll_msg(struct amdgpu_device *adev, enum idh_event event)
+{
+	int r = 0, timeout = AI_MAILBOX_TIMEDOUT;
+
+	r = xgpu_ai_mailbox_rcv_msg(adev, event);
+	while (r) {
+		if (timeout <= 0) {
+			pr_err("Didn't get msg from pf.\n");
+			r = -ETIME;
+			break;
+		}
+		msleep(1);
+		timeout -= 1;
+
+		r = xgpu_ai_mailbox_rcv_msg(adev, event);
+	}
+
+	return r;
+}
+
+
+static int xgpu_ai_send_access_requests(struct amdgpu_device *adev,
+					enum idh_request req)
+{
+	int r;
+
+	xgpu_ai_mailbox_trans_msg(adev, req);
+
+	/* start to poll ack */
+	r = xgpu_ai_poll_ack(adev);
+	if (r)
+		return r;
+
+	xgpu_ai_mailbox_set_valid(adev, false);
+
+	/* wait for the ready-to-access reply on init/fini/reset requests */
+	if (req == IDH_REQ_GPU_INIT_ACCESS ||
+		req == IDH_REQ_GPU_FINI_ACCESS ||
+		req == IDH_REQ_GPU_RESET_ACCESS) {
+		r = xgpu_vi_poll_msg(adev, IDH_READY_TO_ACCESS_GPU);
+		if (r)
+			return r;
+	}
+
+	return 0;
+}
+
+static int xgpu_ai_request_full_gpu_access(struct amdgpu_device *adev,
+					   bool init)
+{
+	enum idh_request req;
+
+	req = init ? IDH_REQ_GPU_INIT_ACCESS : IDH_REQ_GPU_FINI_ACCESS;
+	return xgpu_ai_send_access_requests(adev, req);
+}
+
+static int xgpu_ai_release_full_gpu_access(struct amdgpu_device *adev,
+					   bool init)
+{
+	enum idh_request req;
+	int r = 0;
+
+	req = init ? IDH_REL_GPU_INIT_ACCESS : IDH_REL_GPU_FINI_ACCESS;
+	r = xgpu_ai_send_access_requests(adev, req);
+
+	return r;
+}
+
+const struct amdgpu_virt_ops xgpu_ai_virt_ops = {
+	.req_full_gpu	= xgpu_ai_request_full_gpu_access,
+	.rel_full_gpu	= xgpu_ai_release_full_gpu_access,
+};
diff --git a/drivers/gpu/drm/amd/amdgpu/mxgpu_ai.h b/drivers/gpu/drm/amd/amdgpu/mxgpu_ai.h
new file mode 100644
index 0000000..bf8ab8f
--- /dev/null
+++ b/drivers/gpu/drm/amd/amdgpu/mxgpu_ai.h
@@ -0,0 +1,47 @@
+/*
+ * Copyright 2014 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __MXGPU_AI_H__
+#define __MXGPU_AI_H__
+
+#define AI_MAILBOX_TIMEDOUT	150000
+
+enum idh_request {
+	IDH_REQ_GPU_INIT_ACCESS = 1,
+	IDH_REL_GPU_INIT_ACCESS,
+	IDH_REQ_GPU_FINI_ACCESS,
+	IDH_REL_GPU_FINI_ACCESS,
+	IDH_REQ_GPU_RESET_ACCESS
+};
+
+enum idh_event {
+	IDH_CLR_MSG_BUF	= 0,
+	IDH_READY_TO_ACCESS_GPU,
+	IDH_FLR_NOTIFICATION,
+	IDH_FLR_NOTIFICATION_CMPL,
+	IDH_EVENT_MAX
+};
+
+extern const struct amdgpu_virt_ops xgpu_ai_virt_ops;
+
+#endif
-- 
2.5.5

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply related	[flat|nested] 101+ messages in thread

* [PATCH 088/100] drm/amdgpu/soc15: init virt ops for vf
       [not found] ` <1490041835-11255-1-git-send-email-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (71 preceding siblings ...)
  2017-03-20 20:30   ` [PATCH 087/100] drm/amdgpu/virt: impl mailbox for ai Alex Deucher
@ 2017-03-20 20:30   ` Alex Deucher
  2017-03-20 20:30   ` [PATCH 089/100] drm/amdgpu/soc15: enable virtual dce " Alex Deucher
                     ` (12 subsequent siblings)
  85 siblings, 0 replies; 101+ messages in thread
From: Alex Deucher @ 2017-03-20 20:30 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Alex Deucher, Xiangliang Yu

From: Xiangliang Yu <Xiangliang.Yu@amd.com>

If the GPU device is a VF, set the virt ops so that the guest driver can
talk to the GPU hypervisor.

Signed-off-by: Xiangliang Yu <Xiangliang.Yu@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Monk Liu <Monk.Liu@amd.com>
Acked-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/soc15.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/soc15.c b/drivers/gpu/drm/amd/amdgpu/soc15.c
index b197288..46ccd60 100644
--- a/drivers/gpu/drm/amd/amdgpu/soc15.c
+++ b/drivers/gpu/drm/amd/amdgpu/soc15.c
@@ -58,6 +58,7 @@
 #include "uvd_v7_0.h"
 #include "vce_v4_0.h"
 #include "amdgpu_powerplay.h"
+#include "mxgpu_ai.h"
 
 MODULE_FIRMWARE("amdgpu/vega10_smc.bin");
 
@@ -471,6 +472,9 @@ int soc15_set_ip_blocks(struct amdgpu_device *adev)
 {
 	nbio_v6_1_detect_hw_virt(adev);
 
+	if (amdgpu_sriov_vf(adev))
+		adev->virt.ops = &xgpu_ai_virt_ops;
+
 	switch (adev->asic_type) {
 	case CHIP_VEGA10:
 		amdgpu_ip_block_add(adev, &vega10_common_ip_block);
-- 
2.5.5


* [PATCH 089/100] drm/amdgpu/soc15: enable virtual dce for vf
       [not found] ` <1490041835-11255-1-git-send-email-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (72 preceding siblings ...)
  2017-03-20 20:30   ` [PATCH 088/100] drm/amdgpu/soc15: init virt ops for vf Alex Deucher
@ 2017-03-20 20:30   ` Alex Deucher
  2017-03-20 20:30   ` [PATCH 090/100] drm/amdgpu/vega10:fix DOORBELL64 scheme Alex Deucher
                     ` (11 subsequent siblings)
  85 siblings, 0 replies; 101+ messages in thread
From: Alex Deucher @ 2017-03-20 20:30 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Alex Deucher, Xiangliang Yu

From: Xiangliang Yu <Xiangliang.Yu@amd.com>

VFs need the virtual DCE display block; enable it if the device is a VF.

Signed-off-by: Xiangliang Yu <Xiangliang.Yu@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Monk Liu <Monk.Liu@amd.com>
Acked-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/soc15.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/soc15.c b/drivers/gpu/drm/amd/amdgpu/soc15.c
index 46ccd60..a7b5338 100644
--- a/drivers/gpu/drm/amd/amdgpu/soc15.c
+++ b/drivers/gpu/drm/amd/amdgpu/soc15.c
@@ -58,6 +58,7 @@
 #include "uvd_v7_0.h"
 #include "vce_v4_0.h"
 #include "amdgpu_powerplay.h"
+#include "dce_virtual.h"
 #include "mxgpu_ai.h"
 
 MODULE_FIRMWARE("amdgpu/vega10_smc.bin");
@@ -485,8 +486,10 @@ int soc15_set_ip_blocks(struct amdgpu_device *adev)
 		if (!amdgpu_sriov_vf(adev))
 			amdgpu_ip_block_add(adev, &psp_v3_1_ip_block);
 		amdgpu_ip_block_add(adev, &amdgpu_pp_ip_block);
+		if (amdgpu_sriov_vf(adev))
+			amdgpu_ip_block_add(adev, &dce_virtual_ip_block);
 #if defined(CONFIG_DRM_AMD_DC)
-		if (amdgpu_device_has_dc_support(adev))
+		else if (amdgpu_device_has_dc_support(adev))
 			amdgpu_ip_block_add(adev, &dm_ip_block);
 #else
 #	warning "Enable CONFIG_DRM_AMD_DC for display support on SOC15."
-- 
2.5.5


* [PATCH 090/100] drm/amdgpu/vega10:fix DOORBELL64 scheme
       [not found] ` <1490041835-11255-1-git-send-email-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (73 preceding siblings ...)
  2017-03-20 20:30   ` [PATCH 089/100] drm/amdgpu/soc15: enable virtual dce " Alex Deucher
@ 2017-03-20 20:30   ` Alex Deucher
  2017-03-20 20:30   ` [PATCH 091/100] drm/amdgpu: Don't touch PG&CG for SRIOV MM Alex Deucher
                     ` (10 subsequent siblings)
  85 siblings, 0 replies; 101+ messages in thread
From: Alex Deucher @ 2017-03-20 20:30 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
  Cc: Alex Deucher, Xiangliang Yu, Monk Liu

From: Monk Liu <Monk.Liu@amd.com>

Signed-off-by: Monk Liu <Monk.Liu@amd.com>
Signed-off-by: Xiangliang Yu <Xiangliang.Yu@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Acked-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu.h | 27 ++++++++++++++++++---------
 1 file changed, 18 insertions(+), 9 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
index 2675480..2e1c782 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
@@ -708,15 +708,24 @@ typedef enum _AMDGPU_DOORBELL64_ASSIGNMENT
 	AMDGPU_DOORBELL64_IH_RING1                = 0xF5,  /* For page migration request log */
 	AMDGPU_DOORBELL64_IH_RING2                = 0xF6,  /* For page migration translation/invalidation log */
 
-	/* VCN engine */
-	AMDGPU_DOORBELL64_VCN0                    = 0xF8,
-	AMDGPU_DOORBELL64_VCN1                    = 0xF9,
-	AMDGPU_DOORBELL64_VCN2                    = 0xFA,
-	AMDGPU_DOORBELL64_VCN3                    = 0xFB,
-	AMDGPU_DOORBELL64_VCN4                    = 0xFC,
-	AMDGPU_DOORBELL64_VCN5                    = 0xFD,
-	AMDGPU_DOORBELL64_VCN6                    = 0xFE,
-	AMDGPU_DOORBELL64_VCN7                    = 0xFF,
+	/* VCN engines use 32-bit doorbells */
+	AMDGPU_DOORBELL64_VCN0_1                  = 0xF8, /* lower 32 bits for VCN0 and upper 32 bits for VCN1 */
+	AMDGPU_DOORBELL64_VCN2_3                  = 0xF9,
+	AMDGPU_DOORBELL64_VCN4_5                  = 0xFA,
+	AMDGPU_DOORBELL64_VCN6_7                  = 0xFB,
+
+	/* Overlap the doorbell assignment with VCN as they are mutually exclusive.
+	 * VCE engine doorbells are 32 bit and two VCE rings share one QWORD.
+	 */
+	AMDGPU_DOORBELL64_RING0_1                 = 0xF8,
+	AMDGPU_DOORBELL64_RING2_3                 = 0xF9,
+	AMDGPU_DOORBELL64_RING4_5                 = 0xFA,
+	AMDGPU_DOORBELL64_RING6_7                 = 0xFB,
+
+	AMDGPU_DOORBELL64_UVD_RING0_1             = 0xFC,
+	AMDGPU_DOORBELL64_UVD_RING2_3             = 0xFD,
+	AMDGPU_DOORBELL64_UVD_RING4_5             = 0xFE,
+	AMDGPU_DOORBELL64_UVD_RING6_7             = 0xFF,
 
 	AMDGPU_DOORBELL64_MAX_ASSIGNMENT          = 0xFF,
 	AMDGPU_DOORBELL64_INVALID                 = 0xFFFF
-- 
2.5.5


* [PATCH 091/100] drm/amdgpu: Don't touch PG&CG for SRIOV MM
       [not found] ` <1490041835-11255-1-git-send-email-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (74 preceding siblings ...)
  2017-03-20 20:30   ` [PATCH 090/100] drm/amdgpu/vega10:fix DOORBELL64 scheme Alex Deucher
@ 2017-03-20 20:30   ` Alex Deucher
  2017-03-20 20:30   ` [PATCH 092/100] drm/amdgpu/vce4: enable doorbell for SRIOV Alex Deucher
                     ` (9 subsequent siblings)
  85 siblings, 0 replies; 101+ messages in thread
From: Alex Deucher @ 2017-03-20 20:30 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Alex Deucher, Xiangliang Yu

From: Xiangliang Yu <Xiangliang.Yu@amd.com>

For SRIOV, the multimedia blocks don't need to care about PG & CG, so skip them.

Signed-off-by: Xiangliang Yu <Xiangliang.Yu@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Monk Liu <Monk.Liu@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c | 6 ++++++
 drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c | 6 ++++++
 2 files changed, 12 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
index b2e1d3b..e1a838e 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
@@ -1138,6 +1138,9 @@ static void amdgpu_uvd_idle_work_handler(struct work_struct *work)
 		container_of(work, struct amdgpu_device, uvd.idle_work.work);
 	unsigned fences = amdgpu_fence_count_emitted(&adev->uvd.ring);
 
+	if (amdgpu_sriov_vf(adev))
+		return;
+
 	if (fences == 0) {
 		if (adev->pm.dpm_enabled) {
 			amdgpu_dpm_enable_uvd(adev, false);
@@ -1159,6 +1162,9 @@ void amdgpu_uvd_ring_begin_use(struct amdgpu_ring *ring)
 	struct amdgpu_device *adev = ring->adev;
 	bool set_clocks = !cancel_delayed_work_sync(&adev->uvd.idle_work);
 
+	if (amdgpu_sriov_vf(adev))
+		return;
+
 	if (set_clocks) {
 		if (adev->pm.dpm_enabled) {
 			amdgpu_dpm_enable_uvd(adev, true);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c
index 647944b..f9e45d2 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c
@@ -320,6 +320,9 @@ static void amdgpu_vce_idle_work_handler(struct work_struct *work)
 		container_of(work, struct amdgpu_device, vce.idle_work.work);
 	unsigned i, count = 0;
 
+	if (amdgpu_sriov_vf(adev))
+		return;
+
 	for (i = 0; i < adev->vce.num_rings; i++)
 		count += amdgpu_fence_count_emitted(&adev->vce.ring[i]);
 
@@ -350,6 +353,9 @@ void amdgpu_vce_ring_begin_use(struct amdgpu_ring *ring)
 	struct amdgpu_device *adev = ring->adev;
 	bool set_clocks;
 
+	if (amdgpu_sriov_vf(adev))
+		return;
+
 	mutex_lock(&adev->vce.idle_mutex);
 	set_clocks = !cancel_delayed_work_sync(&adev->vce.idle_work);
 	if (set_clocks) {
-- 
2.5.5


* [PATCH 092/100] drm/amdgpu/vce4: enable doorbell for SRIOV
       [not found] ` <1490041835-11255-1-git-send-email-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (75 preceding siblings ...)
  2017-03-20 20:30   ` [PATCH 091/100] drm/amdgpu: Don't touch PG&CG for SRIOV MM Alex Deucher
@ 2017-03-20 20:30   ` Alex Deucher
  2017-03-20 20:30   ` [PATCH 093/100] drm/amdgpu: disable uvd for sriov Alex Deucher
                     ` (8 subsequent siblings)
  85 siblings, 0 replies; 101+ messages in thread
From: Alex Deucher @ 2017-03-20 20:30 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
  Cc: Alex Deucher, Xiangliang Yu, Monk Liu

From: Xiangliang Yu <Xiangliang.Yu@amd.com>

VCE under SRIOV needs to use doorbells, and currently only works on VCE ring 0.

Signed-off-by: Xiangliang Yu <Xiangliang.Yu@amd.com>
Signed-off-by: Monk Liu <Monk.Liu@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Acked-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/vce_v4_0.c | 25 ++++++++++++++++++++++++-
 1 file changed, 24 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/vce_v4_0.c b/drivers/gpu/drm/amd/amdgpu/vce_v4_0.c
index 74146be..21a86d8 100644
--- a/drivers/gpu/drm/amd/amdgpu/vce_v4_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/vce_v4_0.c
@@ -78,6 +78,9 @@ static uint64_t vce_v4_0_ring_get_wptr(struct amdgpu_ring *ring)
 {
 	struct amdgpu_device *adev = ring->adev;
 
+	if (ring->use_doorbell)
+		return adev->wb.wb[ring->wptr_offs];
+
 	if (ring == &adev->vce.ring[0])
 		return RREG32(SOC15_REG_OFFSET(VCE, 0, mmVCE_RB_WPTR));
 	else if (ring == &adev->vce.ring[1])
@@ -97,6 +100,13 @@ static void vce_v4_0_ring_set_wptr(struct amdgpu_ring *ring)
 {
 	struct amdgpu_device *adev = ring->adev;
 
+	if (ring->use_doorbell) {
+		/* XXX check if swapping is necessary on BE */
+		adev->wb.wb[ring->wptr_offs] = lower_32_bits(ring->wptr);
+		WDOORBELL32(ring->doorbell_index, lower_32_bits(ring->wptr));
+		return;
+	}
+
 	if (ring == &adev->vce.ring[0])
 		WREG32(SOC15_REG_OFFSET(VCE, 0, mmVCE_RB_WPTR),
 			lower_32_bits(ring->wptr));
@@ -220,7 +230,10 @@ static int vce_v4_0_early_init(void *handle)
 {
 	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
 
-	adev->vce.num_rings = 3;
+	if (amdgpu_sriov_vf(adev)) /* currently only VCE ring 0 supports SRIOV */
+		adev->vce.num_rings = 1;
+	else
+		adev->vce.num_rings = 3;
 
 	vce_v4_0_set_ring_funcs(adev);
 	vce_v4_0_set_irq_funcs(adev);
@@ -266,6 +279,16 @@ static int vce_v4_0_sw_init(void *handle)
 	for (i = 0; i < adev->vce.num_rings; i++) {
 		ring = &adev->vce.ring[i];
 		sprintf(ring->name, "vce%d", i);
+		if (amdgpu_sriov_vf(adev)) {
+			/* DOORBELL only works under SRIOV */
+			ring->use_doorbell = true;
+			if (i == 0)
+				ring->doorbell_index = AMDGPU_DOORBELL64_RING0_1 * 2;
+			else if (i == 1)
+				ring->doorbell_index = AMDGPU_DOORBELL64_RING2_3 * 2;
+			else
+				ring->doorbell_index = AMDGPU_DOORBELL64_RING2_3 * 2 + 1;
+		}
 		r = amdgpu_ring_init(adev, ring, 512, &adev->vce.irq, 0);
 		if (r)
 			return r;
-- 
2.5.5


* [PATCH 093/100] drm/amdgpu: disable uvd for sriov
       [not found] ` <1490041835-11255-1-git-send-email-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (76 preceding siblings ...)
  2017-03-20 20:30   ` [PATCH 092/100] drm/amdgpu/vce4: enable doorbell for SRIOV Alex Deucher
@ 2017-03-20 20:30   ` Alex Deucher
  2017-03-20 20:30   ` [PATCH 094/100] drm/amdgpu/soc15: bypass pp block for vf Alex Deucher
                     ` (7 subsequent siblings)
  85 siblings, 0 replies; 101+ messages in thread
From: Alex Deucher @ 2017-03-20 20:30 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
  Cc: Alex Deucher, Xiangliang Yu, Monk Liu

From: Xiangliang Yu <Xiangliang.Yu@amd.com>

Disable UVD for SRIOV temporarily.

Signed-off-by: Xiangliang Yu <Xiangliang.Yu@amd.com>
Signed-off-by: Monk Liu <Monk.Liu@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Acked-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/soc15.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/soc15.c b/drivers/gpu/drm/amd/amdgpu/soc15.c
index a7b5338..54cb0b5 100644
--- a/drivers/gpu/drm/amd/amdgpu/soc15.c
+++ b/drivers/gpu/drm/amd/amdgpu/soc15.c
@@ -496,7 +496,8 @@ int soc15_set_ip_blocks(struct amdgpu_device *adev)
 #endif
 		amdgpu_ip_block_add(adev, &gfx_v9_0_ip_block);
 		amdgpu_ip_block_add(adev, &sdma_v4_0_ip_block);
-		amdgpu_ip_block_add(adev, &uvd_v7_0_ip_block);
+		if (!amdgpu_sriov_vf(adev))
+			amdgpu_ip_block_add(adev, &uvd_v7_0_ip_block);
 		amdgpu_ip_block_add(adev, &vce_v4_0_ip_block);
 		break;
 	default:
-- 
2.5.5


* [PATCH 094/100] drm/amdgpu/soc15: bypass pp block for vf
       [not found] ` <1490041835-11255-1-git-send-email-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (77 preceding siblings ...)
  2017-03-20 20:30   ` [PATCH 093/100] drm/amdgpu: disable uvd for sriov Alex Deucher
@ 2017-03-20 20:30   ` Alex Deucher
  2017-03-20 20:30   ` [PATCH 095/100] drm/amdgpu/virt: add structure for MM table Alex Deucher
                     ` (6 subsequent siblings)
  85 siblings, 0 replies; 101+ messages in thread
From: Alex Deucher @ 2017-03-20 20:30 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Alex Deucher, Xiangliang Yu

From: Xiangliang Yu <Xiangliang.Yu@amd.com>

Disable the powerplay block if the device is a VF.

Signed-off-by: Xiangliang Yu <Xiangliang.Yu@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Monk Liu <Monk.Liu@amd.com>
Acked-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/soc15.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/soc15.c b/drivers/gpu/drm/amd/amdgpu/soc15.c
index 54cb0b5..7e54d9dc 100644
--- a/drivers/gpu/drm/amd/amdgpu/soc15.c
+++ b/drivers/gpu/drm/amd/amdgpu/soc15.c
@@ -483,9 +483,10 @@ int soc15_set_ip_blocks(struct amdgpu_device *adev)
 		amdgpu_ip_block_add(adev, &mmhub_v1_0_ip_block);
 		amdgpu_ip_block_add(adev, &gmc_v9_0_ip_block);
 		amdgpu_ip_block_add(adev, &vega10_ih_ip_block);
-		if (!amdgpu_sriov_vf(adev))
+		if (!amdgpu_sriov_vf(adev)) {
 			amdgpu_ip_block_add(adev, &psp_v3_1_ip_block);
-		amdgpu_ip_block_add(adev, &amdgpu_pp_ip_block);
+			amdgpu_ip_block_add(adev, &amdgpu_pp_ip_block);
+		}
 		if (amdgpu_sriov_vf(adev))
 			amdgpu_ip_block_add(adev, &dce_virtual_ip_block);
 #if defined(CONFIG_DRM_AMD_DC)
-- 
2.5.5


* [PATCH 095/100] drm/amdgpu/virt: add structure for MM table
       [not found] ` <1490041835-11255-1-git-send-email-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (78 preceding siblings ...)
  2017-03-20 20:30   ` [PATCH 094/100] drm/amdgpu/soc15: bypass pp block for vf Alex Deucher
@ 2017-03-20 20:30   ` Alex Deucher
  2017-03-20 20:30   ` [PATCH 096/100] drm/amdgpu/vce4: alloc mm table for MM sriov Alex Deucher
                     ` (5 subsequent siblings)
  85 siblings, 0 replies; 101+ messages in thread
From: Alex Deucher @ 2017-03-20 20:30 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Alex Deucher, Xiangliang Yu

From: Xiangliang Yu <Xiangliang.Yu@amd.com>

Add a new structure for the MM table used by the multimedia scheduler (MMSCH) under SRIOV.

Signed-off-by: Xiangliang Yu <Xiangliang.Yu@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Monk Liu <Monk.Liu@amd.com>
Acked-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_virt.h | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.h
index 846f29c..1ee0a19 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.h
@@ -30,6 +30,12 @@
 #define AMDGPU_PASSTHROUGH_MODE        (1 << 3) /* the whole GPU is passed through to the VM */
 #define AMDGPU_SRIOV_CAPS_RUNTIME      (1 << 4) /* is out of full access mode */
 
+struct amdgpu_mm_table {
+	struct amdgpu_bo	*bo;
+	uint32_t		*cpu_addr;
+	uint64_t		gpu_addr;
+};
+
 /**
  * struct amdgpu_virt_ops - amdgpu device virt operations
  */
@@ -51,6 +57,7 @@ struct amdgpu_virt {
 	struct amdgpu_irq_src		ack_irq;
 	struct amdgpu_irq_src		rcv_irq;
 	struct work_struct		flr_work;
+	struct amdgpu_mm_table		mm_table;
 	const struct amdgpu_virt_ops	*ops;
 };
 
-- 
2.5.5


* [PATCH 096/100] drm/amdgpu/vce4: alloc mm table for MM sriov
       [not found] ` <1490041835-11255-1-git-send-email-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (79 preceding siblings ...)
  2017-03-20 20:30   ` [PATCH 095/100] drm/amdgpu/virt: add structure for MM table Alex Deucher
@ 2017-03-20 20:30   ` Alex Deucher
  2017-03-20 20:30   ` [PATCH 097/100] drm/amdgpu/vce4: Ignore vce ring/ib test temporarily Alex Deucher
                     ` (4 subsequent siblings)
  85 siblings, 0 replies; 101+ messages in thread
From: Alex Deucher @ 2017-03-20 20:30 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
  Cc: Alex Deucher, Xiangliang Yu, Monk Liu

From: Xiangliang Yu <Xiangliang.Yu@amd.com>

Allocate the MM table for SRIOV devices.

Signed-off-by: Xiangliang Yu <Xiangliang.Yu@amd.com>
Signed-off-by: Monk Liu <Monk.Liu@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Monk Liu <Monk.Liu@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/vce_v4_0.c | 21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/vce_v4_0.c b/drivers/gpu/drm/amd/amdgpu/vce_v4_0.c
index 21a86d8..b1b887e 100644
--- a/drivers/gpu/drm/amd/amdgpu/vce_v4_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/vce_v4_0.c
@@ -294,6 +294,21 @@ static int vce_v4_0_sw_init(void *handle)
 			return r;
 	}
 
+	if (amdgpu_sriov_vf(adev)) {
+		r = amdgpu_bo_create_kernel(adev, PAGE_SIZE, PAGE_SIZE,
+					    AMDGPU_GEM_DOMAIN_VRAM,
+					    &adev->virt.mm_table.bo,
+					    &adev->virt.mm_table.gpu_addr,
+					    (void *)&adev->virt.mm_table.cpu_addr);
+		if (!r) {
+			memset((void *)adev->virt.mm_table.cpu_addr, 0, PAGE_SIZE);
+			printk("mm table gpu addr = 0x%llx, cpu addr = %p\n",
+			       adev->virt.mm_table.gpu_addr,
+			       adev->virt.mm_table.cpu_addr);
+		}
+		return r;
+	}
+
 	return r;
 }
 
@@ -302,6 +317,12 @@ static int vce_v4_0_sw_fini(void *handle)
 	int r;
 	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
 
+	/* free MM table */
+	if (amdgpu_sriov_vf(adev))
+		amdgpu_bo_free_kernel(&adev->virt.mm_table.bo,
+				      &adev->virt.mm_table.gpu_addr,
+				      (void *)&adev->virt.mm_table.cpu_addr);
+
 	r = amdgpu_vce_suspend(adev);
 	if (r)
 		return r;
-- 
2.5.5


* [PATCH 097/100] drm/amdgpu/vce4: Ignore vce ring/ib test temporarily
       [not found] ` <1490041835-11255-1-git-send-email-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (80 preceding siblings ...)
  2017-03-20 20:30   ` [PATCH 096/100] drm/amdgpu/vce4: alloc mm table for MM sriov Alex Deucher
@ 2017-03-20 20:30   ` Alex Deucher
  2017-03-20 20:30   ` [PATCH 098/100] drm/amdgpu: add mmsch structures Alex Deucher
                     ` (3 subsequent siblings)
  85 siblings, 0 replies; 101+ messages in thread
From: Alex Deucher @ 2017-03-20 20:30 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Alex Deucher, Xiangliang Yu

From: Xiangliang Yu <Xiangliang.Yu@amd.com>

Skip the VCE ring/IB tests for now so as not to break SRIOV gfx
development; this patch will be reverted once VCE is proven working.

Signed-off-by: Xiangliang Yu <Xiangliang.Yu@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Monk Liu <Monk.Liu@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c
index f9e45d2..eccd70a 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c
@@ -957,6 +957,10 @@ int amdgpu_vce_ring_test_ring(struct amdgpu_ring *ring)
 	unsigned i;
 	int r;
 
+	/* TODO: remove it if VCE can work for sriov */
+	if (amdgpu_sriov_vf(adev))
+		return 0;
+
 	r = amdgpu_ring_alloc(ring, 16);
 	if (r) {
 		DRM_ERROR("amdgpu: vce failed to lock ring %d (%d).\n",
@@ -995,6 +999,10 @@ int amdgpu_vce_ring_test_ib(struct amdgpu_ring *ring, long timeout)
 	struct fence *fence = NULL;
 	long r;
 
+	/* TODO: remove it if VCE can work for sriov */
+	if (amdgpu_sriov_vf(ring->adev))
+		return 0;
+
 	/* skip vce ring1/2 ib test for now, since it's not reliable */
 	if (ring != &ring->adev->vce.ring[0])
 		return 0;
-- 
2.5.5


* [PATCH 098/100] drm/amdgpu: add mmsch structures
       [not found] ` <1490041835-11255-1-git-send-email-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (81 preceding siblings ...)
  2017-03-20 20:30   ` [PATCH 097/100] drm/amdgpu/vce4: Ignore vce ring/ib test temporarily Alex Deucher
@ 2017-03-20 20:30   ` Alex Deucher
  2017-03-20 20:30   ` [PATCH 099/100] drm/amdgpu/vce4: impl vce & mmsch sriov start Alex Deucher
                     ` (2 subsequent siblings)
  85 siblings, 0 replies; 101+ messages in thread
From: Alex Deucher @ 2017-03-20 20:30 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
  Cc: Alex Deucher, Xiangliang Yu, Monk Liu

From: Xiangliang Yu <Xiangliang.Yu@amd.com>

For MM SRIOV, we need to prepare an MM table and send it to the MMSCH to
initialize the UVD & VCE engines. Create a new header file for the structures.

Signed-off-by: Xiangliang Yu <Xiangliang.Yu@amd.com>
Signed-off-by: Monk Liu <Monk.Liu@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Acked-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/mmsch_v1_0.h | 87 +++++++++++++++++++++++++++++++++
 1 file changed, 87 insertions(+)
 create mode 100644 drivers/gpu/drm/amd/amdgpu/mmsch_v1_0.h

diff --git a/drivers/gpu/drm/amd/amdgpu/mmsch_v1_0.h b/drivers/gpu/drm/amd/amdgpu/mmsch_v1_0.h
new file mode 100644
index 0000000..5f0fc8b
--- /dev/null
+++ b/drivers/gpu/drm/amd/amdgpu/mmsch_v1_0.h
@@ -0,0 +1,87 @@
+/*
+ * Copyright 2017 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __MMSCH_V1_0_H__
+#define __MMSCH_V1_0_H__
+
+#define MMSCH_VERSION_MAJOR	1
+#define MMSCH_VERSION_MINOR	0
+#define MMSCH_VERSION	(MMSCH_VERSION_MAJOR << 16 | MMSCH_VERSION_MINOR)
+
+enum mmsch_v1_0_command_type {
+	MMSCH_COMMAND__DIRECT_REG_WRITE = 0,
+	MMSCH_COMMAND__DIRECT_REG_POLLING = 2,
+	MMSCH_COMMAND__DIRECT_REG_READ_MODIFY_WRITE = 3,
+	MMSCH_COMMAND__INDIRECT_REG_WRITE = 8,
+	MMSCH_COMMAND__END = 0xf
+};
+
+struct mmsch_v1_0_init_header {
+	uint32_t version;
+	uint32_t header_size;
+	uint32_t vce_init_status;
+	uint32_t uvd_init_status;
+	uint32_t vce_table_offset;
+	uint32_t vce_table_size;
+	uint32_t uvd_table_offset;
+	uint32_t uvd_table_size;
+};
+
+struct mmsch_v1_0_cmd_direct_reg_header {
+	uint32_t reg_offset   : 28;
+	uint32_t command_type : 4;
+};
+
+struct mmsch_v1_0_cmd_indirect_reg_header {
+	uint32_t reg_offset    : 20;
+	uint32_t reg_idx_space : 8;
+	uint32_t command_type  : 4;
+};
+
+struct mmsch_v1_0_cmd_direct_write {
+	struct mmsch_v1_0_cmd_direct_reg_header cmd_header;
+	uint32_t reg_value;
+};
+
+struct mmsch_v1_0_cmd_direct_read_modify_write {
+	struct mmsch_v1_0_cmd_direct_reg_header cmd_header;
+	uint32_t write_data;
+	uint32_t mask_value;
+};
+
+struct mmsch_v1_0_cmd_direct_polling {
+	struct mmsch_v1_0_cmd_direct_reg_header cmd_header;
+	uint32_t mask_value;
+	uint32_t wait_value;
+};
+
+struct mmsch_v1_0_cmd_end {
+	struct mmsch_v1_0_cmd_direct_reg_header cmd_header;
+};
+
+struct mmsch_v1_0_cmd_indirect_write {
+	struct mmsch_v1_0_cmd_indirect_reg_header cmd_header;
+	uint32_t reg_value;
+};
+
+#endif
-- 
2.5.5


* [PATCH 099/100] drm/amdgpu/vce4: impl vce & mmsch sriov start
       [not found] ` <1490041835-11255-1-git-send-email-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (82 preceding siblings ...)
  2017-03-20 20:30   ` [PATCH 098/100] drm/amdgpu: add mmsch structures Alex Deucher
@ 2017-03-20 20:30   ` Alex Deucher
  2017-03-20 20:30   ` [PATCH 100/100] drm/amdgpu/gfx9: correct wptr pointer value Alex Deucher
  2017-03-21  7:42   ` [PATCH 000/100] Add Vega10 Support Christian König
  85 siblings, 0 replies; 101+ messages in thread
From: Alex Deucher @ 2017-03-20 20:30 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
  Cc: Alex Deucher, Xiangliang Yu, Monk Liu

From: Xiangliang Yu <Xiangliang.Yu@amd.com>

For MM SR-IOV, the MMSCH must be used to initialize the engines; the init
procedures are all saved in the MM table.

Signed-off-by: Xiangliang Yu <Xiangliang.Yu@amd.com>
Signed-off-by: Monk Liu <Monk.Liu@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Acked-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/vce_v4_0.c | 205 +++++++++++++++++++++++++++++++++-
 1 file changed, 204 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/vce_v4_0.c b/drivers/gpu/drm/amd/amdgpu/vce_v4_0.c
index b1b887e..15321495 100644
--- a/drivers/gpu/drm/amd/amdgpu/vce_v4_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/vce_v4_0.c
@@ -30,6 +30,7 @@
 #include "amdgpu_vce.h"
 #include "soc15d.h"
 #include "soc15_common.h"
+#include "mmsch_v1_0.h"
 
 #include "vega10/soc15ip.h"
 #include "vega10/VCE/vce_4_0_offset.h"
@@ -48,6 +49,63 @@ static void vce_v4_0_mc_resume(struct amdgpu_device *adev);
 static void vce_v4_0_set_ring_funcs(struct amdgpu_device *adev);
 static void vce_v4_0_set_irq_funcs(struct amdgpu_device *adev);
 
+static inline void mmsch_insert_direct_wt(struct mmsch_v1_0_cmd_direct_write *direct_wt,
+					  uint32_t *init_table,
+					  uint32_t reg_offset,
+					  uint32_t value)
+{
+	direct_wt->cmd_header.reg_offset = reg_offset;
+	direct_wt->reg_value = value;
+	memcpy((void *)init_table, direct_wt, sizeof(struct mmsch_v1_0_cmd_direct_write));
+}
+
+static inline void mmsch_insert_direct_rd_mod_wt(struct mmsch_v1_0_cmd_direct_read_modify_write *direct_rd_mod_wt,
+						 uint32_t *init_table,
+						 uint32_t reg_offset,
+						 uint32_t mask, uint32_t data)
+{
+	direct_rd_mod_wt->cmd_header.reg_offset = reg_offset;
+	direct_rd_mod_wt->mask_value = mask;
+	direct_rd_mod_wt->write_data = data;
+	memcpy((void *)init_table, direct_rd_mod_wt,
+	       sizeof(struct mmsch_v1_0_cmd_direct_read_modify_write));
+}
+
+static inline void mmsch_insert_direct_poll(struct mmsch_v1_0_cmd_direct_polling *direct_poll,
+					    uint32_t *init_table,
+					    uint32_t reg_offset,
+					    uint32_t mask, uint32_t wait)
+{
+	direct_poll->cmd_header.reg_offset = reg_offset;
+	direct_poll->mask_value = mask;
+	direct_poll->wait_value = wait;
+	memcpy((void *)init_table, direct_poll, sizeof(struct mmsch_v1_0_cmd_direct_polling));
+}
+
+#define INSERT_DIRECT_RD_MOD_WT(reg, mask, data) { \
+	mmsch_insert_direct_rd_mod_wt(&direct_rd_mod_wt, \
+				      init_table, (reg), \
+				      (mask), (data)); \
+	init_table += sizeof(struct mmsch_v1_0_cmd_direct_read_modify_write)/4; \
+	table_size += sizeof(struct mmsch_v1_0_cmd_direct_read_modify_write)/4; \
+}
+
+#define INSERT_DIRECT_WT(reg, value) { \
+	mmsch_insert_direct_wt(&direct_wt, \
+			       init_table, (reg), \
+			       (value)); \
+	init_table += sizeof(struct mmsch_v1_0_cmd_direct_write)/4; \
+	table_size += sizeof(struct mmsch_v1_0_cmd_direct_write)/4; \
+}
+
+#define INSERT_DIRECT_POLL(reg, mask, wait) { \
+	mmsch_insert_direct_poll(&direct_poll, \
+				 init_table, (reg), \
+				 (mask), (wait)); \
+	init_table += sizeof(struct mmsch_v1_0_cmd_direct_polling)/4; \
+	table_size += sizeof(struct mmsch_v1_0_cmd_direct_polling)/4; \
+}
+
 /**
  * vce_v4_0_ring_get_rptr - get read pointer
  *
@@ -146,6 +204,148 @@ static int vce_v4_0_firmware_loaded(struct amdgpu_device *adev)
 	return -ETIMEDOUT;
 }
 
+static int vce_v4_0_mmsch_start(struct amdgpu_device *adev,
+				struct amdgpu_mm_table *table)
+{
+	uint32_t data = 0, loop;
+	uint64_t addr = table->gpu_addr;
+	struct mmsch_v1_0_init_header *header = (struct mmsch_v1_0_init_header *)table->cpu_addr;
+	uint32_t size;
+
+	size = header->header_size + header->vce_table_size + header->uvd_table_size;
+
+	/* 1, write to vce_mmsch_vf_ctx_addr_lo/hi register with GPU mc addr of memory descriptor location */
+	WREG32(SOC15_REG_OFFSET(VCE, 0, mmVCE_MMSCH_VF_CTX_ADDR_LO), lower_32_bits(addr));
+	WREG32(SOC15_REG_OFFSET(VCE, 0, mmVCE_MMSCH_VF_CTX_ADDR_HI), upper_32_bits(addr));
+
+	/* 2, update vmid of descriptor */
+	data = RREG32(SOC15_REG_OFFSET(VCE, 0, mmVCE_MMSCH_VF_VMID));
+	data &= ~VCE_MMSCH_VF_VMID__VF_CTX_VMID_MASK;
+	data |= (0 << VCE_MMSCH_VF_VMID__VF_CTX_VMID__SHIFT); /* use domain0 for MM scheduler */
+	WREG32(SOC15_REG_OFFSET(VCE, 0, mmVCE_MMSCH_VF_VMID), data);
+
+	/* 3, notify mmsch about the size of this descriptor */
+	WREG32(SOC15_REG_OFFSET(VCE, 0, mmVCE_MMSCH_VF_CTX_SIZE), size);
+
+	/* 4, set resp to zero */
+	WREG32(SOC15_REG_OFFSET(VCE, 0, mmVCE_MMSCH_VF_MAILBOX_RESP), 0);
+
+	/* 5, kick off the initialization and wait until VCE_MMSCH_VF_MAILBOX_RESP becomes non-zero */
+	WREG32(SOC15_REG_OFFSET(VCE, 0, mmVCE_MMSCH_VF_MAILBOX_HOST), 0x10000001);
+
+	data = RREG32(SOC15_REG_OFFSET(VCE, 0, mmVCE_MMSCH_VF_MAILBOX_RESP));
+	loop = 1000;
+	while ((data & 0x10000002) != 0x10000002) {
+		udelay(10);
+		data = RREG32(SOC15_REG_OFFSET(VCE, 0, mmVCE_MMSCH_VF_MAILBOX_RESP));
+		loop--;
+		if (!loop)
+			break;
+	}
+
+	if (!loop) {
+		dev_err(adev->dev, "failed to init MMSCH, mmVCE_MMSCH_VF_MAILBOX_RESP = %x\n", data);
+		return -EBUSY;
+	}
+
+	return 0;
+}
+
+static int vce_v4_0_sriov_start(struct amdgpu_device *adev)
+{
+	struct amdgpu_ring *ring;
+	uint32_t offset, size;
+	uint32_t table_size = 0;
+	struct mmsch_v1_0_cmd_direct_write direct_wt = {0};
+	struct mmsch_v1_0_cmd_direct_read_modify_write direct_rd_mod_wt = {0};
+	struct mmsch_v1_0_cmd_direct_polling direct_poll = {0};
+	struct mmsch_v1_0_cmd_end end = {0};
+	uint32_t *init_table = adev->virt.mm_table.cpu_addr;
+	struct mmsch_v1_0_init_header *header = (struct mmsch_v1_0_init_header *)init_table;
+
+	direct_wt.cmd_header.command_type = MMSCH_COMMAND__DIRECT_REG_WRITE;
+	direct_rd_mod_wt.cmd_header.command_type = MMSCH_COMMAND__DIRECT_REG_READ_MODIFY_WRITE;
+	direct_poll.cmd_header.command_type = MMSCH_COMMAND__DIRECT_REG_POLLING;
+	end.cmd_header.command_type = MMSCH_COMMAND__END;
+
+	if (header->vce_table_offset == 0 && header->vce_table_size == 0) {
+		header->version = MMSCH_VERSION;
+		header->header_size = sizeof(struct mmsch_v1_0_init_header) >> 2;
+
+		if (header->uvd_table_offset == 0 && header->uvd_table_size == 0)
+			header->vce_table_offset = header->header_size;
+		else
+			header->vce_table_offset = header->uvd_table_size + header->uvd_table_offset;
+
+		init_table += header->vce_table_offset;
+
+		ring = &adev->vce.ring[0];
+		INSERT_DIRECT_WT(SOC15_REG_OFFSET(VCE, 0, mmVCE_RB_RPTR), ring->wptr);
+		INSERT_DIRECT_WT(SOC15_REG_OFFSET(VCE, 0, mmVCE_RB_WPTR), ring->wptr);
+		INSERT_DIRECT_WT(SOC15_REG_OFFSET(VCE, 0, mmVCE_RB_BASE_LO), lower_32_bits(ring->gpu_addr));
+		INSERT_DIRECT_WT(SOC15_REG_OFFSET(VCE, 0, mmVCE_RB_BASE_HI), upper_32_bits(ring->gpu_addr));
+		INSERT_DIRECT_WT(SOC15_REG_OFFSET(VCE, 0, mmVCE_RB_SIZE), ring->ring_size / 4);
+
+		/* BEGIN OF MC_RESUME */
+		INSERT_DIRECT_RD_MOD_WT(SOC15_REG_OFFSET(VCE, 0, mmVCE_CLOCK_GATING_A), ~(1 << 16), 0);
+		INSERT_DIRECT_RD_MOD_WT(SOC15_REG_OFFSET(VCE, 0, mmVCE_UENC_CLOCK_GATING), ~0xFF9FF000, 0x1FF000);
+		INSERT_DIRECT_RD_MOD_WT(SOC15_REG_OFFSET(VCE, 0, mmVCE_UENC_REG_CLOCK_GATING), ~0x3F, 0x3F);
+		INSERT_DIRECT_WT(SOC15_REG_OFFSET(VCE, 0, mmVCE_CLOCK_GATING_B), 0x1FF);
+
+		INSERT_DIRECT_WT(SOC15_REG_OFFSET(VCE, 0, mmVCE_LMI_CTRL), 0x398000);
+		INSERT_DIRECT_RD_MOD_WT(SOC15_REG_OFFSET(VCE, 0, mmVCE_LMI_CACHE_CTRL), ~0x1, 0);
+		INSERT_DIRECT_WT(SOC15_REG_OFFSET(VCE, 0, mmVCE_LMI_SWAP_CNTL), 0);
+		INSERT_DIRECT_WT(SOC15_REG_OFFSET(VCE, 0, mmVCE_LMI_SWAP_CNTL1), 0);
+		INSERT_DIRECT_WT(SOC15_REG_OFFSET(VCE, 0, mmVCE_LMI_VM_CTRL), 0);
+
+		INSERT_DIRECT_WT(SOC15_REG_OFFSET(VCE, 0, mmVCE_LMI_VCPU_CACHE_40BIT_BAR0), adev->vce.gpu_addr >> 8);
+		INSERT_DIRECT_WT(SOC15_REG_OFFSET(VCE, 0, mmVCE_LMI_VCPU_CACHE_40BIT_BAR1), adev->vce.gpu_addr >> 8);
+		INSERT_DIRECT_WT(SOC15_REG_OFFSET(VCE, 0, mmVCE_LMI_VCPU_CACHE_40BIT_BAR2), adev->vce.gpu_addr >> 8);
+
+		offset = AMDGPU_VCE_FIRMWARE_OFFSET;
+		size = VCE_V4_0_FW_SIZE;
+		INSERT_DIRECT_WT(SOC15_REG_OFFSET(VCE, 0, mmVCE_VCPU_CACHE_OFFSET0), offset & 0x7FFFFFFF);
+		INSERT_DIRECT_WT(SOC15_REG_OFFSET(VCE, 0, mmVCE_VCPU_CACHE_SIZE0), size);
+
+		offset += size;
+		size = VCE_V4_0_STACK_SIZE;
+		INSERT_DIRECT_WT(SOC15_REG_OFFSET(VCE, 0, mmVCE_VCPU_CACHE_OFFSET1), offset & 0x7FFFFFFF);
+		INSERT_DIRECT_WT(SOC15_REG_OFFSET(VCE, 0, mmVCE_VCPU_CACHE_SIZE1), size);
+
+		offset += size;
+		size = VCE_V4_0_DATA_SIZE;
+		INSERT_DIRECT_WT(SOC15_REG_OFFSET(VCE, 0, mmVCE_VCPU_CACHE_OFFSET2), offset & 0x7FFFFFFF);
+		INSERT_DIRECT_WT(SOC15_REG_OFFSET(VCE, 0, mmVCE_VCPU_CACHE_SIZE2), size);
+
+		INSERT_DIRECT_RD_MOD_WT(SOC15_REG_OFFSET(VCE, 0, mmVCE_LMI_CTRL2), ~0x100, 0);
+		INSERT_DIRECT_RD_MOD_WT(SOC15_REG_OFFSET(VCE, 0, mmVCE_SYS_INT_EN),
+				0xffffffff, VCE_SYS_INT_EN__VCE_SYS_INT_TRAP_INTERRUPT_EN_MASK);
+
+		/* end of MC_RESUME */
+		INSERT_DIRECT_RD_MOD_WT(SOC15_REG_OFFSET(VCE, 0, mmVCE_VCPU_CNTL),
+				~0x200001, VCE_VCPU_CNTL__CLK_EN_MASK);
+		INSERT_DIRECT_RD_MOD_WT(SOC15_REG_OFFSET(VCE, 0, mmVCE_SOFT_RESET),
+				~VCE_SOFT_RESET__ECPU_SOFT_RESET_MASK, 0);
+
+		INSERT_DIRECT_POLL(SOC15_REG_OFFSET(VCE, 0, mmVCE_STATUS),
+				VCE_STATUS_VCPU_REPORT_FW_LOADED_MASK,
+				VCE_STATUS_VCPU_REPORT_FW_LOADED_MASK);
+
+		/* clear BUSY flag */
+		INSERT_DIRECT_RD_MOD_WT(SOC15_REG_OFFSET(VCE, 0, mmVCE_STATUS),
+				~VCE_STATUS__JOB_BUSY_MASK, 0);
+
+		/* add end packet */
+		memcpy((void *)init_table, &end, sizeof(struct mmsch_v1_0_cmd_end));
+		table_size += sizeof(struct mmsch_v1_0_cmd_end) / 4;
+		header->vce_table_size = table_size;
+
+		return vce_v4_0_mmsch_start(adev, &adev->virt.mm_table);
+	}
+
+	return -EINVAL; /* already initialized? */
+}
+
 /**
  * vce_v4_0_start - start VCE block
  *
@@ -339,7 +539,10 @@ static int vce_v4_0_hw_init(void *handle)
 	int r, i;
 	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
 
-	r = vce_v4_0_start(adev);
+	if (amdgpu_sriov_vf(adev))
+		r = vce_v4_0_sriov_start(adev);
+	else
+		r = vce_v4_0_start(adev);
 	if (r)
 		return r;
 
-- 
2.5.5


* [PATCH 100/100] drm/amdgpu/gfx9: correct wptr pointer value
       [not found] ` <1490041835-11255-1-git-send-email-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (83 preceding siblings ...)
  2017-03-20 20:30   ` [PATCH 099/100] drm/amdgpu/vce4: impl vce & mmsch sriov start Alex Deucher
@ 2017-03-20 20:30   ` Alex Deucher
  2017-03-21  7:42   ` [PATCH 000/100] Add Vega10 Support Christian König
  85 siblings, 0 replies; 101+ messages in thread
From: Alex Deucher @ 2017-03-20 20:30 UTC (permalink / raw)
  To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
  Cc: Alex Deucher, Xiangliang Yu, Monk Liu

From: Xiangliang Yu <Xiangliang.Yu@amd.com>

The wptr value should be masked with buf_mask, otherwise it will point
to the wrong place.

Signed-off-by: Xiangliang Yu <Xiangliang.Yu@amd.com>
Signed-off-by: Monk Liu <Monk.Liu@amd.com>
Reviewed-by: Ken Wang <Qingqing.Wang@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
index 8e5367d..ad82ab7 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
@@ -3275,7 +3275,7 @@ static unsigned gfx_v9_0_ring_emit_init_cond_exec(struct amdgpu_ring *ring)
 	amdgpu_ring_write(ring, lower_32_bits(ring->cond_exe_gpu_addr));
 	amdgpu_ring_write(ring, upper_32_bits(ring->cond_exe_gpu_addr));
 	amdgpu_ring_write(ring, 0); /* discard following DWs if *cond_exec_gpu_addr==0 */
-	ret = ring->wptr;
+	ret = ring->wptr & ring->buf_mask;
 	amdgpu_ring_write(ring, 0x55aa55aa); /* patch dummy value later */
 	return ret;
 }
@@ -3283,9 +3283,10 @@ static unsigned gfx_v9_0_ring_emit_init_cond_exec(struct amdgpu_ring *ring)
 static void gfx_v9_0_ring_emit_patch_cond_exec(struct amdgpu_ring *ring, unsigned offset)
 {
 	unsigned cur;
+	BUG_ON(offset > ring->buf_mask);
 	BUG_ON(ring->ring[offset] != 0x55aa55aa);
 
-	cur = ring->wptr - 1;
+	cur = (ring->wptr & ring->buf_mask) - 1;
 	if (likely(cur > offset))
 		ring->ring[offset] = cur - offset;
 	else
-- 
2.5.5


* Re: [PATCH 000/100] Add Vega10 Support
       [not found] ` <1490041835-11255-1-git-send-email-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
                     ` (84 preceding siblings ...)
  2017-03-20 20:30   ` [PATCH 100/100] drm/amdgpu/gfx9: correct wptr pointer value Alex Deucher
@ 2017-03-21  7:42   ` Christian König
       [not found]     ` <50d03274-5a6e-fb77-9741-b6700a9949bd-ANTagKRnAhcb1SvskN2V4Q@public.gmane.org>
  85 siblings, 1 reply; 101+ messages in thread
From: Christian König @ 2017-03-21  7:42 UTC (permalink / raw)
  To: Alex Deucher, amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Alex Deucher

Patches #1 - #5, #21, #23, #25, #27, #28, #31, #35-#38, #40, #41, #45 
are Acked-by: Christian König.

Patches #6-#20, #22, #24, #32, #39, #42 didn't make it to the list 
(probably too large).

Patches #43, #44 are Reviewed-by: Christian König 
<christian.koenig@amd.com>.

Patch #26: That stuff actually belongs into vega10 specific code, doesn't it?

Patch #29: We shouldn't use typedefs for enums.

Going to take a look at the rest later today,
Christian.

Am 20.03.2017 um 21:29 schrieb Alex Deucher:
> This patch set adds support for vega10.  Major changes and supported
> features:
> - new vbios interface
> - Lots of new hw IPs
> - Support for video decode using UVD
> - Support for video encode using VCE
> - Support for 3D via radeonsi
> - Power management
> - Full display support via DC
> - Support for SR-IOV
>
> I did not send out the register headers since they are huge.  You can find them
> along with all the other patches in this series here:
> https://cgit.freedesktop.org/~agd5f/linux/log/?h=amd-staging-4.9
>
> Please review.
>
> Thanks,
>
> Alex
>
> Alex Deucher (29):
>    drm/amdgpu: add the new atomfirmware interface header
>    amdgpu: detect if we are using atomfirm or atombios for vbios (v2)
>    drm/amdgpu: move atom scratch setup into amdgpu_atombios.c
>    drm/amdgpu: add basic support for atomfirmware.h (v3)
>    drm/amdgpu: add soc15ip.h
>    drm/amdgpu: add vega10_enum.h
>    drm/amdgpu: Add ATHUB 1.0 register headers
>    drm/amdgpu: Add the DCE 12.0 register headers
>    drm/amdgpu: add the GC 9.0 register headers
>    drm/amdgpu: add the HDP 4.0 register headers
>    drm/amdgpu: add the MMHUB 1.0 register headers
>    drm/amdgpu: add MP 9.0 register headers
>    drm/amdgpu: add NBIF 6.1 register headers
>    drm/amdgpu: add NBIO 6.1 register headers
>    drm/amdgpu: add OSSSYS 4.0 register headers
>    drm/amdgpu: add SDMA 4.0 register headers
>    drm/amdgpu: add SMUIO 9.0 register headers
>    drm/amdgpu: add THM 9.0 register headers
>    drm/amdgpu: add the UVD 7.0 register headers
>    drm/amdgpu: add the VCE 4.0 register headers
>    drm/amdgpu: add gfx9 clearstate header
>    drm/amdgpu: add SDMA 4.0 packet header
>    drm/amdgpu: use atomfirmware interfaces for scratch reg save/restore
>    drm/amdgpu: update IH IV ring entry for soc-15
>    drm/amdgpu: add PTE defines for MTYPE
>    drm/amdgpu: add NGG parameters
>    drm/amdgpu: Add asic family for vega10
>    drm/amdgpu: add tiling flags for GFX9
>    drm/amdgpu: gart fixes for vega10
>
> Alex Xie (4):
>    drm/amdgpu: Add MTYPE flags to GPU VM IOCTL interface
>    drm/amdgpu: handle PTE EXEC in amdgpu_vm_bo_split_mapping
>    drm/amdgpu: handle PTE MTYPE in amdgpu_vm_bo_split_mapping
>    drm/amdgpu: Add GMC 9.0 support
>
> Andrey Grodzovsky (1):
>    drm/amdgpu: gb_addr_config struct
>
> Charlene Liu (1):
>    drm/amd/display: need to handle DCE_Info table ver4.2
>
> Christian König (1):
>    drm/amdgpu: add IV trace point
>
> Eric Huang (7):
>    drm/amd/powerplay: add smu9 header files for Vega10
>    drm/amd/powerplay: add new Vega10's ppsmc header file
>    drm/amdgpu: add new atomfirmware based helpers for powerplay
>    drm/amd/powerplay: add some new structures for Vega10
>    drm/amd: add structures for display/powerplay interface
>    drm/amd/powerplay: add some display/powerplay interfaces
>    drm/amd/powerplay: add Vega10 powerplay support
>
> Felix Kuehling (1):
>    drm/amd: Add MQD structs for GFX V9
>
> Harry Wentland (6):
>    drm/amd/display: Add DCE12 bios parser support
>    drm/amd/display: Add DCE12 gpio support
>    drm/amd/display: Add DCE12 i2c/aux support
>    drm/amd/display: Add DCE12 irq support
>    drm/amd/display: Add DCE12 core support
>    drm/amd/display: Enable DCE12 support
>
> Huang Rui (6):
>    drm/amdgpu: use new flag to handle different firmware loading method
>    drm/amdgpu: rework common ucode handling for vega10
>    drm/amdgpu: add psp firmware header info
>    drm/amdgpu: add PSP driver for vega10
>    drm/amdgpu: add psp firmware info into info query and debugfs
>    drm/amdgpu: add SMC firmware into global ucode list for psp loading
>
> Jordan Lazare (1):
>    drm/amd/display: Less log spam
>
> Junwei Zhang (2):
>    drm/amdgpu: add NBIO 6.1 driver
>    drm/amdgpu: add Vega10 Device IDs
>
> Ken Wang (8):
>    drm/amdgpu: add common soc15 headers
>    drm/amdgpu: add vega10 chip name
>    drm/amdgpu: add 64bit doorbell assignments
>    drm/amdgpu: add SDMA v4.0 implementation
>    drm/amdgpu: implement GFX 9.0 support
>    drm/amdgpu: add vega10 interrupt handler
>    drm/amdgpu: soc15 enable (v2)
>    drm/amdgpu: Set the IP blocks for vega10
>
> Leo Liu (2):
>    drm/amdgpu: add initial uvd 7.0 support for vega10
>    drm/amdgpu: add initial vce 4.0 support for vega10
>
> Marek Olšák (1):
>    drm/amdgpu: don't validate TILE_SPLIT on GFX9
>
> Monk Liu (5):
>    drm/amdgpu/gfx9: programing wptr_poll_addr register
>    drm/amdgpu:impl gfx9 cond_exec
>    drm/amdgpu:bypass RLC init for SRIOV
>    drm/amdgpu/sdma4:re-org SDMA initial steps for sriov
>    drm/amdgpu/vega10:fix DOORBELL64 scheme
>
> Rex Zhu (2):
>    drm/amdgpu: get display info from DC when DC enabled.
>    drm/amd/powerplay: add global PowerPlay mutex.
>
> Xiangliang Yu (22):
>    drm/amdgpu: impl sriov detection for vega10
>    drm/amdgpu: add kiq ring for gfx9
>    drm/amdgpu/gfx9: fullfill kiq funcs
>    drm/amdgpu/gfx9: fullfill kiq irq funcs
>    drm/amdgpu: init kiq and kcq for vega10
>    drm/amdgpu/gfx9: impl gfx9 meta data emit
>    drm/amdgpu/soc15: bypass PSP for VF
>    drm/amdgpu/gmc9: no need use kiq in vega10 tlb flush
>    drm/amdgpu/dce_virtual: bypass DPM for vf
>    drm/amdgpu/virt: impl mailbox for ai
>    drm/amdgpu/soc15: init virt ops for vf
>    drm/amdgpu/soc15: enable virtual dce for vf
>    drm/amdgpu: Don't touch PG&CG for SRIOV MM
>    drm/amdgpu/vce4: enable doorbell for SRIOV
>    drm/amdgpu: disable uvd for sriov
>    drm/amdgpu/soc15: bypass pp block for vf
>    drm/amdgpu/virt: add structure for MM table
>    drm/amdgpu/vce4: alloc mm table for MM sriov
>    drm/amdgpu/vce4: Ignore vce ring/ib test temporarily
>    drm/amdgpu: add mmsch structures
>    drm/amdgpu/vce4: impl vce & mmsch sriov start
>    drm/amdgpu/gfx9: correct wptr pointer value
>
> ken (1):
>    drm/amdgpu: add clinetid definition for vega10
>
>   drivers/gpu/drm/amd/amdgpu/Makefile                |     27 +-
>   drivers/gpu/drm/amd/amdgpu/amdgpu.h                |    172 +-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.c       |     28 +
>   drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.h       |      3 +
>   drivers/gpu/drm/amd/amdgpu/amdgpu_atomfirmware.c   |    112 +
>   drivers/gpu/drm/amd/amdgpu/amdgpu_atomfirmware.h   |     33 +
>   drivers/gpu/drm/amd/amdgpu/amdgpu_bios.c           |     30 +-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_cgs.c            |     73 +-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_device.c         |     73 +-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c            |     36 +-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_gart.c           |      3 +-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c            |      2 +-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_ih.h             |     47 +-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c            |      3 +
>   drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c            |     32 +
>   drivers/gpu/drm/amd/amdgpu/amdgpu_object.c         |      5 +-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_powerplay.c      |      5 +-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c            |    473 +
>   drivers/gpu/drm/amd/amdgpu/amdgpu_psp.h            |    127 +
>   drivers/gpu/drm/amd/amdgpu/amdgpu_trace.h          |     37 +
>   drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c          |    113 +-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h          |     17 +
>   drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c            |     58 +-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c            |     21 +
>   drivers/gpu/drm/amd/amdgpu/amdgpu_virt.h           |      7 +
>   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c             |     34 +-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h             |      4 +
>   drivers/gpu/drm/amd/amdgpu/atom.c                  |     26 -
>   drivers/gpu/drm/amd/amdgpu/atom.h                  |      1 -
>   drivers/gpu/drm/amd/amdgpu/cik.c                   |      2 +
>   drivers/gpu/drm/amd/amdgpu/clearstate_gfx9.h       |    941 +
>   drivers/gpu/drm/amd/amdgpu/dce_virtual.c           |      3 +
>   drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c              |      6 +-
>   drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c              |   4075 +
>   drivers/gpu/drm/amd/amdgpu/gfx_v9_0.h              |     35 +
>   drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c           |    447 +
>   drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.h           |     35 +
>   drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c              |    826 +
>   drivers/gpu/drm/amd/amdgpu/gmc_v9_0.h              |     30 +
>   drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c            |    585 +
>   drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.h            |     35 +
>   drivers/gpu/drm/amd/amdgpu/mmsch_v1_0.h            |     87 +
>   drivers/gpu/drm/amd/amdgpu/mxgpu_ai.c              |    207 +
>   drivers/gpu/drm/amd/amdgpu/mxgpu_ai.h              |     47 +
>   drivers/gpu/drm/amd/amdgpu/nbio_v6_1.c             |    251 +
>   drivers/gpu/drm/amd/amdgpu/nbio_v6_1.h             |     53 +
>   drivers/gpu/drm/amd/amdgpu/psp_gfx_if.h            |    269 +
>   drivers/gpu/drm/amd/amdgpu/psp_v3_1.c              |    507 +
>   drivers/gpu/drm/amd/amdgpu/psp_v3_1.h              |     50 +
>   drivers/gpu/drm/amd/amdgpu/sdma_v2_4.c             |      4 +-
>   drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c             |      4 +-
>   drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c             |   1573 +
>   drivers/gpu/drm/amd/amdgpu/sdma_v4_0.h             |     30 +
>   drivers/gpu/drm/amd/amdgpu/soc15.c                 |    825 +
>   drivers/gpu/drm/amd/amdgpu/soc15.h                 |     35 +
>   drivers/gpu/drm/amd/amdgpu/soc15_common.h          |     57 +
>   drivers/gpu/drm/amd/amdgpu/soc15d.h                |    287 +
>   drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c              |   1543 +
>   drivers/gpu/drm/amd/amdgpu/uvd_v7_0.h              |     29 +
>   drivers/gpu/drm/amd/amdgpu/vce_v4_0.c              |   1141 +
>   drivers/gpu/drm/amd/amdgpu/vce_v4_0.h              |     29 +
>   drivers/gpu/drm/amd/amdgpu/vega10_ih.c             |    424 +
>   drivers/gpu/drm/amd/amdgpu/vega10_ih.h             |     30 +
>   drivers/gpu/drm/amd/amdgpu/vega10_sdma_pkt_open.h  |   3335 +
>   drivers/gpu/drm/amd/amdgpu/vi.c                    |      4 +-
>   drivers/gpu/drm/amd/display/Kconfig                |      7 +
>   drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c  |    145 +-
>   .../drm/amd/display/amdgpu_dm/amdgpu_dm_services.c |     10 +
>   .../drm/amd/display/amdgpu_dm/amdgpu_dm_types.c    |     20 +-
>   drivers/gpu/drm/amd/display/dc/Makefile            |      4 +
>   drivers/gpu/drm/amd/display/dc/bios/Makefile       |      8 +
>   drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c |   2162 +
>   drivers/gpu/drm/amd/display/dc/bios/bios_parser2.h |     33 +
>   .../amd/display/dc/bios/bios_parser_interface.c    |     14 +
>   .../display/dc/bios/bios_parser_types_internal2.h  |     74 +
>   .../gpu/drm/amd/display/dc/bios/command_table2.c   |    813 +
>   .../gpu/drm/amd/display/dc/bios/command_table2.h   |    105 +
>   .../amd/display/dc/bios/command_table_helper2.c    |    260 +
>   .../amd/display/dc/bios/command_table_helper2.h    |     82 +
>   .../dc/bios/dce112/command_table_helper2_dce112.c  |    418 +
>   .../dc/bios/dce112/command_table_helper2_dce112.h  |     34 +
>   drivers/gpu/drm/amd/display/dc/calcs/dce_calcs.c   |    117 +
>   drivers/gpu/drm/amd/display/dc/core/dc.c           |     29 +
>   drivers/gpu/drm/amd/display/dc/core/dc_debug.c     |     11 +
>   drivers/gpu/drm/amd/display/dc/core/dc_link.c      |     19 +
>   drivers/gpu/drm/amd/display/dc/core/dc_resource.c  |     14 +
>   drivers/gpu/drm/amd/display/dc/dc.h                |     27 +
>   drivers/gpu/drm/amd/display/dc/dc_hw_types.h       |     46 +
>   .../gpu/drm/amd/display/dc/dce/dce_clock_source.c  |      6 +
>   drivers/gpu/drm/amd/display/dc/dce/dce_clocks.c    |    149 +
>   drivers/gpu/drm/amd/display/dc/dce/dce_clocks.h    |     20 +
>   drivers/gpu/drm/amd/display/dc/dce/dce_hwseq.h     |      8 +
>   .../gpu/drm/amd/display/dc/dce/dce_link_encoder.h  |     14 +
>   drivers/gpu/drm/amd/display/dc/dce/dce_mem_input.c |     35 +
>   drivers/gpu/drm/amd/display/dc/dce/dce_mem_input.h |     34 +
>   drivers/gpu/drm/amd/display/dc/dce/dce_opp.h       |     72 +
>   .../drm/amd/display/dc/dce/dce_stream_encoder.h    |    100 +
>   drivers/gpu/drm/amd/display/dc/dce/dce_transform.h |     68 +
>   .../amd/display/dc/dce110/dce110_hw_sequencer.c    |     53 +-
>   .../drm/amd/display/dc/dce110/dce110_mem_input.c   |      3 +
>   .../display/dc/dce110/dce110_timing_generator.h    |      3 +
>   drivers/gpu/drm/amd/display/dc/dce120/Makefile     |     12 +
>   .../amd/display/dc/dce120/dce120_hw_sequencer.c    |    197 +
>   .../amd/display/dc/dce120/dce120_hw_sequencer.h    |     36 +
>   drivers/gpu/drm/amd/display/dc/dce120/dce120_ipp.c |     58 +
>   drivers/gpu/drm/amd/display/dc/dce120/dce120_ipp.h |     62 +
>   .../drm/amd/display/dc/dce120/dce120_ipp_cursor.c  |    202 +
>   .../drm/amd/display/dc/dce120/dce120_ipp_gamma.c   |    167 +
>   .../drm/amd/display/dc/dce120/dce120_mem_input.c   |    340 +
>   .../drm/amd/display/dc/dce120/dce120_mem_input.h   |     37 +
>   .../drm/amd/display/dc/dce120/dce120_resource.c    |   1099 +
>   .../drm/amd/display/dc/dce120/dce120_resource.h    |     39 +
>   .../display/dc/dce120/dce120_timing_generator.c    |   1109 +
>   .../display/dc/dce120/dce120_timing_generator.h    |     41 +
>   .../gpu/drm/amd/display/dc/dce80/dce80_mem_input.c |      3 +
>   drivers/gpu/drm/amd/display/dc/dm_services.h       |     89 +
>   drivers/gpu/drm/amd/display/dc/dm_services_types.h |     27 +
>   drivers/gpu/drm/amd/display/dc/gpio/Makefile       |     11 +
>   .../amd/display/dc/gpio/dce120/hw_factory_dce120.c |    197 +
>   .../amd/display/dc/gpio/dce120/hw_factory_dce120.h |     32 +
>   .../display/dc/gpio/dce120/hw_translate_dce120.c   |    408 +
>   .../display/dc/gpio/dce120/hw_translate_dce120.h   |     34 +
>   drivers/gpu/drm/amd/display/dc/gpio/hw_factory.c   |      9 +
>   drivers/gpu/drm/amd/display/dc/gpio/hw_translate.c |      9 +-
>   drivers/gpu/drm/amd/display/dc/i2caux/Makefile     |     11 +
>   .../amd/display/dc/i2caux/dce120/i2caux_dce120.c   |    125 +
>   .../amd/display/dc/i2caux/dce120/i2caux_dce120.h   |     32 +
>   drivers/gpu/drm/amd/display/dc/i2caux/i2caux.c     |      8 +
>   .../gpu/drm/amd/display/dc/inc/bandwidth_calcs.h   |      3 +
>   .../gpu/drm/amd/display/dc/inc/hw/display_clock.h  |     23 +
>   drivers/gpu/drm/amd/display/dc/inc/hw/mem_input.h  |      4 +
>   drivers/gpu/drm/amd/display/dc/irq/Makefile        |     12 +
>   .../amd/display/dc/irq/dce120/irq_service_dce120.c |    293 +
>   .../amd/display/dc/irq/dce120/irq_service_dce120.h |     34 +
>   drivers/gpu/drm/amd/display/dc/irq/irq_service.c   |      3 +
>   drivers/gpu/drm/amd/display/include/dal_asic_id.h  |      4 +
>   drivers/gpu/drm/amd/display/include/dal_types.h    |      3 +
>   drivers/gpu/drm/amd/include/amd_shared.h           |      4 +
>   .../asic_reg/vega10/ATHUB/athub_1_0_default.h      |    241 +
>   .../asic_reg/vega10/ATHUB/athub_1_0_offset.h       |    453 +
>   .../asic_reg/vega10/ATHUB/athub_1_0_sh_mask.h      |   2045 +
>   .../include/asic_reg/vega10/DC/dce_12_0_default.h  |   9868 ++
>   .../include/asic_reg/vega10/DC/dce_12_0_offset.h   |  18193 +++
>   .../include/asic_reg/vega10/DC/dce_12_0_sh_mask.h  |  64636 +++++++++
>   .../include/asic_reg/vega10/GC/gc_9_0_default.h    |   3873 +
>   .../amd/include/asic_reg/vega10/GC/gc_9_0_offset.h |   7230 +
>   .../include/asic_reg/vega10/GC/gc_9_0_sh_mask.h    |  29868 ++++
>   .../include/asic_reg/vega10/HDP/hdp_4_0_default.h  |    117 +
>   .../include/asic_reg/vega10/HDP/hdp_4_0_offset.h   |    209 +
>   .../include/asic_reg/vega10/HDP/hdp_4_0_sh_mask.h  |    601 +
>   .../asic_reg/vega10/MMHUB/mmhub_1_0_default.h      |   1011 +
>   .../asic_reg/vega10/MMHUB/mmhub_1_0_offset.h       |   1967 +
>   .../asic_reg/vega10/MMHUB/mmhub_1_0_sh_mask.h      |  10127 ++
>   .../include/asic_reg/vega10/MP/mp_9_0_default.h    |    342 +
>   .../amd/include/asic_reg/vega10/MP/mp_9_0_offset.h |    375 +
>   .../include/asic_reg/vega10/MP/mp_9_0_sh_mask.h    |   1463 +
>   .../asic_reg/vega10/NBIF/nbif_6_1_default.h        |   1271 +
>   .../include/asic_reg/vega10/NBIF/nbif_6_1_offset.h |   1688 +
>   .../asic_reg/vega10/NBIF/nbif_6_1_sh_mask.h        |  10281 ++
>   .../asic_reg/vega10/NBIO/nbio_6_1_default.h        |  22340 +++
>   .../include/asic_reg/vega10/NBIO/nbio_6_1_offset.h |   3649 +
>   .../asic_reg/vega10/NBIO/nbio_6_1_sh_mask.h        | 133884 ++++++++++++++++++
>   .../asic_reg/vega10/OSSSYS/osssys_4_0_default.h    |    176 +
>   .../asic_reg/vega10/OSSSYS/osssys_4_0_offset.h     |    327 +
>   .../asic_reg/vega10/OSSSYS/osssys_4_0_sh_mask.h    |   1196 +
>   .../asic_reg/vega10/SDMA0/sdma0_4_0_default.h      |    286 +
>   .../asic_reg/vega10/SDMA0/sdma0_4_0_offset.h       |    547 +
>   .../asic_reg/vega10/SDMA0/sdma0_4_0_sh_mask.h      |   1852 +
>   .../asic_reg/vega10/SDMA1/sdma1_4_0_default.h      |    282 +
>   .../asic_reg/vega10/SDMA1/sdma1_4_0_offset.h       |    539 +
>   .../asic_reg/vega10/SDMA1/sdma1_4_0_sh_mask.h      |   1810 +
>   .../asic_reg/vega10/SMUIO/smuio_9_0_default.h      |    100 +
>   .../asic_reg/vega10/SMUIO/smuio_9_0_offset.h       |    175 +
>   .../asic_reg/vega10/SMUIO/smuio_9_0_sh_mask.h      |    258 +
>   .../include/asic_reg/vega10/THM/thm_9_0_default.h  |    194 +
>   .../include/asic_reg/vega10/THM/thm_9_0_offset.h   |    363 +
>   .../include/asic_reg/vega10/THM/thm_9_0_sh_mask.h  |   1314 +
>   .../include/asic_reg/vega10/UVD/uvd_7_0_default.h  |    127 +
>   .../include/asic_reg/vega10/UVD/uvd_7_0_offset.h   |    222 +
>   .../include/asic_reg/vega10/UVD/uvd_7_0_sh_mask.h  |    811 +
>   .../include/asic_reg/vega10/VCE/vce_4_0_default.h  |    122 +
>   .../include/asic_reg/vega10/VCE/vce_4_0_offset.h   |    208 +
>   .../include/asic_reg/vega10/VCE/vce_4_0_sh_mask.h  |    488 +
>   .../gpu/drm/amd/include/asic_reg/vega10/soc15ip.h  |   1343 +
>   .../drm/amd/include/asic_reg/vega10/vega10_enum.h  |  22531 +++
>   drivers/gpu/drm/amd/include/atomfirmware.h         |   2385 +
>   drivers/gpu/drm/amd/include/atomfirmwareid.h       |     86 +
>   drivers/gpu/drm/amd/include/displayobject.h        |    249 +
>   drivers/gpu/drm/amd/include/dm_pp_interface.h      |     83 +
>   drivers/gpu/drm/amd/include/v9_structs.h           |    743 +
>   drivers/gpu/drm/amd/powerplay/amd_powerplay.c      |    284 +-
>   drivers/gpu/drm/amd/powerplay/hwmgr/Makefile       |      6 +-
>   .../gpu/drm/amd/powerplay/hwmgr/hardwaremanager.c  |     49 +
>   drivers/gpu/drm/amd/powerplay/hwmgr/hwmgr.c        |      9 +
>   drivers/gpu/drm/amd/powerplay/hwmgr/hwmgr_ppt.h    |     16 +-
>   drivers/gpu/drm/amd/powerplay/hwmgr/ppatomfwctrl.c |    396 +
>   drivers/gpu/drm/amd/powerplay/hwmgr/ppatomfwctrl.h |    140 +
>   drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c |   4378 +
>   drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.h |    434 +
>   drivers/gpu/drm/amd/powerplay/hwmgr/vega10_inc.h   |     44 +
>   .../gpu/drm/amd/powerplay/hwmgr/vega10_powertune.c |    137 +
>   .../gpu/drm/amd/powerplay/hwmgr/vega10_powertune.h |     65 +
>   .../gpu/drm/amd/powerplay/hwmgr/vega10_pptable.h   |    331 +
>   .../amd/powerplay/hwmgr/vega10_processpptables.c   |   1056 +
>   .../amd/powerplay/hwmgr/vega10_processpptables.h   |     34 +
>   .../gpu/drm/amd/powerplay/hwmgr/vega10_thermal.c   |    761 +
>   .../gpu/drm/amd/powerplay/hwmgr/vega10_thermal.h   |     83 +
>   drivers/gpu/drm/amd/powerplay/inc/amd_powerplay.h  |     28 +-
>   .../gpu/drm/amd/powerplay/inc/hardwaremanager.h    |     43 +
>   drivers/gpu/drm/amd/powerplay/inc/hwmgr.h          |    125 +-
>   drivers/gpu/drm/amd/powerplay/inc/pp_instance.h    |      1 +
>   drivers/gpu/drm/amd/powerplay/inc/pp_soc15.h       |     48 +
>   drivers/gpu/drm/amd/powerplay/inc/smu9.h           |    147 +
>   drivers/gpu/drm/amd/powerplay/inc/smu9_driver_if.h |    418 +
>   drivers/gpu/drm/amd/powerplay/inc/smumgr.h         |      3 +
>   drivers/gpu/drm/amd/powerplay/inc/vega10_ppsmc.h   |    131 +
>   drivers/gpu/drm/amd/powerplay/smumgr/Makefile      |      2 +-
>   drivers/gpu/drm/amd/powerplay/smumgr/smumgr.c      |      9 +
>   .../gpu/drm/amd/powerplay/smumgr/vega10_smumgr.c   |    564 +
>   .../gpu/drm/amd/powerplay/smumgr/vega10_smumgr.h   |     70 +
>   include/uapi/drm/amdgpu_drm.h                      |     29 +
>   221 files changed, 403408 insertions(+), 219 deletions(-)
>   create mode 100644 drivers/gpu/drm/amd/amdgpu/amdgpu_atomfirmware.c
>   create mode 100644 drivers/gpu/drm/amd/amdgpu/amdgpu_atomfirmware.h
>   create mode 100644 drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
>   create mode 100644 drivers/gpu/drm/amd/amdgpu/amdgpu_psp.h
>   create mode 100644 drivers/gpu/drm/amd/amdgpu/clearstate_gfx9.h
>   create mode 100644 drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
>   create mode 100644 drivers/gpu/drm/amd/amdgpu/gfx_v9_0.h
>   create mode 100644 drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c
>   create mode 100644 drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.h
>   create mode 100644 drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
>   create mode 100644 drivers/gpu/drm/amd/amdgpu/gmc_v9_0.h
>   create mode 100644 drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c
>   create mode 100644 drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.h
>   create mode 100644 drivers/gpu/drm/amd/amdgpu/mmsch_v1_0.h
>   create mode 100644 drivers/gpu/drm/amd/amdgpu/mxgpu_ai.c
>   create mode 100644 drivers/gpu/drm/amd/amdgpu/mxgpu_ai.h
>   create mode 100644 drivers/gpu/drm/amd/amdgpu/nbio_v6_1.c
>   create mode 100644 drivers/gpu/drm/amd/amdgpu/nbio_v6_1.h
>   create mode 100644 drivers/gpu/drm/amd/amdgpu/psp_gfx_if.h
>   create mode 100644 drivers/gpu/drm/amd/amdgpu/psp_v3_1.c
>   create mode 100644 drivers/gpu/drm/amd/amdgpu/psp_v3_1.h
>   create mode 100644 drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
>   create mode 100644 drivers/gpu/drm/amd/amdgpu/sdma_v4_0.h
>   create mode 100644 drivers/gpu/drm/amd/amdgpu/soc15.c
>   create mode 100644 drivers/gpu/drm/amd/amdgpu/soc15.h
>   create mode 100644 drivers/gpu/drm/amd/amdgpu/soc15_common.h
>   create mode 100644 drivers/gpu/drm/amd/amdgpu/soc15d.h
>   create mode 100644 drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c
>   create mode 100644 drivers/gpu/drm/amd/amdgpu/uvd_v7_0.h
>   create mode 100644 drivers/gpu/drm/amd/amdgpu/vce_v4_0.c
>   create mode 100644 drivers/gpu/drm/amd/amdgpu/vce_v4_0.h
>   create mode 100644 drivers/gpu/drm/amd/amdgpu/vega10_ih.c
>   create mode 100644 drivers/gpu/drm/amd/amdgpu/vega10_ih.h
>   create mode 100644 drivers/gpu/drm/amd/amdgpu/vega10_sdma_pkt_open.h
>   create mode 100644 drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c
>   create mode 100644 drivers/gpu/drm/amd/display/dc/bios/bios_parser2.h
>   create mode 100644 drivers/gpu/drm/amd/display/dc/bios/bios_parser_types_internal2.h
>   create mode 100644 drivers/gpu/drm/amd/display/dc/bios/command_table2.c
>   create mode 100644 drivers/gpu/drm/amd/display/dc/bios/command_table2.h
>   create mode 100644 drivers/gpu/drm/amd/display/dc/bios/command_table_helper2.c
>   create mode 100644 drivers/gpu/drm/amd/display/dc/bios/command_table_helper2.h
>   create mode 100644 drivers/gpu/drm/amd/display/dc/bios/dce112/command_table_helper2_dce112.c
>   create mode 100644 drivers/gpu/drm/amd/display/dc/bios/dce112/command_table_helper2_dce112.h
>   create mode 100644 drivers/gpu/drm/amd/display/dc/dce120/Makefile
>   create mode 100644 drivers/gpu/drm/amd/display/dc/dce120/dce120_hw_sequencer.c
>   create mode 100644 drivers/gpu/drm/amd/display/dc/dce120/dce120_hw_sequencer.h
>   create mode 100644 drivers/gpu/drm/amd/display/dc/dce120/dce120_ipp.c
>   create mode 100644 drivers/gpu/drm/amd/display/dc/dce120/dce120_ipp.h
>   create mode 100644 drivers/gpu/drm/amd/display/dc/dce120/dce120_ipp_cursor.c
>   create mode 100644 drivers/gpu/drm/amd/display/dc/dce120/dce120_ipp_gamma.c
>   create mode 100644 drivers/gpu/drm/amd/display/dc/dce120/dce120_mem_input.c
>   create mode 100644 drivers/gpu/drm/amd/display/dc/dce120/dce120_mem_input.h
>   create mode 100644 drivers/gpu/drm/amd/display/dc/dce120/dce120_resource.c
>   create mode 100644 drivers/gpu/drm/amd/display/dc/dce120/dce120_resource.h
>   create mode 100644 drivers/gpu/drm/amd/display/dc/dce120/dce120_timing_generator.c
>   create mode 100644 drivers/gpu/drm/amd/display/dc/dce120/dce120_timing_generator.h
>   create mode 100644 drivers/gpu/drm/amd/display/dc/gpio/dce120/hw_factory_dce120.c
>   create mode 100644 drivers/gpu/drm/amd/display/dc/gpio/dce120/hw_factory_dce120.h
>   create mode 100644 drivers/gpu/drm/amd/display/dc/gpio/dce120/hw_translate_dce120.c
>   create mode 100644 drivers/gpu/drm/amd/display/dc/gpio/dce120/hw_translate_dce120.h
>   create mode 100644 drivers/gpu/drm/amd/display/dc/i2caux/dce120/i2caux_dce120.c
>   create mode 100644 drivers/gpu/drm/amd/display/dc/i2caux/dce120/i2caux_dce120.h
>   create mode 100644 drivers/gpu/drm/amd/display/dc/irq/dce120/irq_service_dce120.c
>   create mode 100644 drivers/gpu/drm/amd/display/dc/irq/dce120/irq_service_dce120.h
>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/ATHUB/athub_1_0_default.h
>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/ATHUB/athub_1_0_offset.h
>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/ATHUB/athub_1_0_sh_mask.h
>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/DC/dce_12_0_default.h
>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/DC/dce_12_0_offset.h
>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/DC/dce_12_0_sh_mask.h
>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/GC/gc_9_0_default.h
>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/GC/gc_9_0_offset.h
>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/GC/gc_9_0_sh_mask.h
>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/HDP/hdp_4_0_default.h
>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/HDP/hdp_4_0_offset.h
>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/HDP/hdp_4_0_sh_mask.h
>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/MMHUB/mmhub_1_0_default.h
>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/MMHUB/mmhub_1_0_offset.h
>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/MMHUB/mmhub_1_0_sh_mask.h
>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/MP/mp_9_0_default.h
>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/MP/mp_9_0_offset.h
>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/MP/mp_9_0_sh_mask.h
>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/NBIF/nbif_6_1_default.h
>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/NBIF/nbif_6_1_offset.h
>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/NBIF/nbif_6_1_sh_mask.h
>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/NBIO/nbio_6_1_default.h
>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/NBIO/nbio_6_1_offset.h
>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/NBIO/nbio_6_1_sh_mask.h
>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/OSSSYS/osssys_4_0_default.h
>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/OSSSYS/osssys_4_0_offset.h
>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/OSSSYS/osssys_4_0_sh_mask.h
>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/SDMA0/sdma0_4_0_default.h
>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/SDMA0/sdma0_4_0_offset.h
>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/SDMA0/sdma0_4_0_sh_mask.h
>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/SDMA1/sdma1_4_0_default.h
>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/SDMA1/sdma1_4_0_offset.h
>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/SDMA1/sdma1_4_0_sh_mask.h
>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/SMUIO/smuio_9_0_default.h
>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/SMUIO/smuio_9_0_offset.h
>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/SMUIO/smuio_9_0_sh_mask.h
>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/THM/thm_9_0_default.h
>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/THM/thm_9_0_offset.h
>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/THM/thm_9_0_sh_mask.h
>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/UVD/uvd_7_0_default.h
>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/UVD/uvd_7_0_offset.h
>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/UVD/uvd_7_0_sh_mask.h
>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/VCE/vce_4_0_default.h
>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/VCE/vce_4_0_offset.h
>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/VCE/vce_4_0_sh_mask.h
>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/soc15ip.h
>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/vega10_enum.h
>   create mode 100644 drivers/gpu/drm/amd/include/atomfirmware.h
>   create mode 100644 drivers/gpu/drm/amd/include/atomfirmwareid.h
>   create mode 100644 drivers/gpu/drm/amd/include/displayobject.h
>   create mode 100644 drivers/gpu/drm/amd/include/dm_pp_interface.h
>   create mode 100644 drivers/gpu/drm/amd/include/v9_structs.h
>   create mode 100644 drivers/gpu/drm/amd/powerplay/hwmgr/ppatomfwctrl.c
>   create mode 100644 drivers/gpu/drm/amd/powerplay/hwmgr/ppatomfwctrl.h
>   create mode 100644 drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c
>   create mode 100644 drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.h
>   create mode 100644 drivers/gpu/drm/amd/powerplay/hwmgr/vega10_inc.h
>   create mode 100644 drivers/gpu/drm/amd/powerplay/hwmgr/vega10_powertune.c
>   create mode 100644 drivers/gpu/drm/amd/powerplay/hwmgr/vega10_powertune.h
>   create mode 100644 drivers/gpu/drm/amd/powerplay/hwmgr/vega10_pptable.h
>   create mode 100644 drivers/gpu/drm/amd/powerplay/hwmgr/vega10_processpptables.c
>   create mode 100644 drivers/gpu/drm/amd/powerplay/hwmgr/vega10_processpptables.h
>   create mode 100644 drivers/gpu/drm/amd/powerplay/hwmgr/vega10_thermal.c
>   create mode 100644 drivers/gpu/drm/amd/powerplay/hwmgr/vega10_thermal.h
>   create mode 100644 drivers/gpu/drm/amd/powerplay/inc/pp_soc15.h
>   create mode 100644 drivers/gpu/drm/amd/powerplay/inc/smu9.h
>   create mode 100644 drivers/gpu/drm/amd/powerplay/inc/smu9_driver_if.h
>   create mode 100644 drivers/gpu/drm/amd/powerplay/inc/vega10_ppsmc.h
>   create mode 100644 drivers/gpu/drm/amd/powerplay/smumgr/vega10_smumgr.c
>   create mode 100644 drivers/gpu/drm/amd/powerplay/smumgr/vega10_smumgr.h
>

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


* Re: [PATCH 046/100] drm/amdgpu: Add GMC 9.0 support
       [not found]     ` <1490041835-11255-32-git-send-email-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
@ 2017-03-21  8:49       ` Christian König
       [not found]         ` <003f0fba-4792-a32a-c982-73457dfbd1aa-ANTagKRnAhcb1SvskN2V4Q@public.gmane.org>
  2017-03-22 19:48       ` Dave Airlie
  2017-03-23  2:42       ` Zhang, Jerry (Junwei)
  2 siblings, 1 reply; 101+ messages in thread
From: Christian König @ 2017-03-21  8:49 UTC (permalink / raw)
  To: Alex Deucher, amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
  Cc: Alex Deucher, Alex Xie

Am 20.03.2017 um 21:29 schrieb Alex Deucher:
> From: Alex Xie <AlexBin.Xie@amd.com>
>
> On SOC-15 parts, the GMC (Graphics Memory Controller) consists
> of two hubs: GFX (graphics and compute) and MM (sdma, uvd, vce).
>
> Signed-off-by: Alex Xie <AlexBin.Xie@amd.com>
> Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
> ---
>   drivers/gpu/drm/amd/amdgpu/Makefile      |   6 +-
>   drivers/gpu/drm/amd/amdgpu/amdgpu.h      |  30 ++
>   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c   |  28 +-
>   drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c | 447 +++++++++++++++++
>   drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.h |  35 ++
>   drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c    | 826 +++++++++++++++++++++++++++++++
>   drivers/gpu/drm/amd/amdgpu/gmc_v9_0.h    |  30 ++
>   drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c  | 585 ++++++++++++++++++++++
>   drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.h  |  35 ++
>   drivers/gpu/drm/amd/include/amd_shared.h |   2 +
>   10 files changed, 2016 insertions(+), 8 deletions(-)
>   create mode 100644 drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c
>   create mode 100644 drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.h
>   create mode 100644 drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
>   create mode 100644 drivers/gpu/drm/amd/amdgpu/gmc_v9_0.h
>   create mode 100644 drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c
>   create mode 100644 drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.h
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/Makefile b/drivers/gpu/drm/amd/amdgpu/Makefile
> index 69823e8..b5046fd 100644
> --- a/drivers/gpu/drm/amd/amdgpu/Makefile
> +++ b/drivers/gpu/drm/amd/amdgpu/Makefile
> @@ -45,7 +45,8 @@ amdgpu-y += \
>   # add GMC block
>   amdgpu-y += \
>   	gmc_v7_0.o \
> -	gmc_v8_0.o
> +	gmc_v8_0.o \
> +	gfxhub_v1_0.o mmhub_v1_0.o gmc_v9_0.o
>   
>   # add IH block
>   amdgpu-y += \
> @@ -74,7 +75,8 @@ amdgpu-y += \
>   # add async DMA block
>   amdgpu-y += \
>   	sdma_v2_4.o \
> -	sdma_v3_0.o
> +	sdma_v3_0.o \
> +	sdma_v4_0.o

That change doesn't belong in this patch.

>   
>   # add UVD block
>   amdgpu-y += \
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> index aaded8d..d7257b6 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> @@ -123,6 +123,11 @@ extern int amdgpu_param_buf_per_se;
>   /* max number of IP instances */
>   #define AMDGPU_MAX_SDMA_INSTANCES		2
>   
> +/* max number of VMHUB */
> +#define AMDGPU_MAX_VMHUBS			2
> +#define AMDGPU_MMHUB				0
> +#define AMDGPU_GFXHUB				1
> +
>   /* hardcode that limit for now */
>   #define AMDGPU_VA_RESERVED_SIZE			(8 << 20)
>   
> @@ -310,6 +315,12 @@ struct amdgpu_gart_funcs {
>   				     uint32_t flags);
>   };
>   
> +/* provided by the mc block */
> +struct amdgpu_mc_funcs {
> +	/* adjust mc addr in fb for APU case */
> +	u64 (*adjust_mc_addr)(struct amdgpu_device *adev, u64 addr);
> +};
> +

That isn't hardware specific and is actually incorrectly implemented.

The calculation depends on the NB on APUs, not the GPU part, and the 
current implementation probably breaks it for Carrizo and other APUs.

I suggest just removing the callback and moving the calculation into 
amdgpu_vm_adjust_mc_addr().

Then rename amdgpu_vm_adjust_mc_addr() to amdgpu_vm_get_pde() and call 
it from amdgpu_vm_update_page_directory() as well as the GFX9 specific 
flush functions.

>   /* provided by the ih block */
>   struct amdgpu_ih_funcs {
>   	/* ring read/write ptr handling, called from interrupt context */
> @@ -559,6 +570,21 @@ int amdgpu_gart_bind(struct amdgpu_device *adev, uint64_t offset,
>   int amdgpu_ttm_recover_gart(struct amdgpu_device *adev);
>   
>   /*
> + * VMHUB structures, functions & helpers
> + */
> +struct amdgpu_vmhub {
> +	uint32_t	ctx0_ptb_addr_lo32;
> +	uint32_t	ctx0_ptb_addr_hi32;
> +	uint32_t	vm_inv_eng0_req;
> +	uint32_t	vm_inv_eng0_ack;
> +	uint32_t	vm_context0_cntl;
> +	uint32_t	vm_l2_pro_fault_status;
> +	uint32_t	vm_l2_pro_fault_cntl;
> +	uint32_t	(*get_invalidate_req)(unsigned int vm_id);
> +	uint32_t	(*get_vm_protection_bits)(void);

Those two callbacks aren't a good idea either.

The invalidation request bits are defined by the RTL of the HUB, which 
is just instantiated twice; see the register database for details.

We should probably turn those into functions in gmc_v9_0.c which are 
called from the device specific flush methods.

Regards,
Christian.

> +};
> +
> +/*
>    * GPU MC structures, functions & helpers
>    */
>   struct amdgpu_mc {
> @@ -591,6 +617,9 @@ struct amdgpu_mc {
>   	u64					shared_aperture_end;
>   	u64					private_aperture_start;
>   	u64					private_aperture_end;
> +	/* protects concurrent invalidation */
> +	spinlock_t		invalidate_lock;
> +	const struct amdgpu_mc_funcs *mc_funcs;
>   };
>   
>   /*
> @@ -1479,6 +1508,7 @@ struct amdgpu_device {
>   	struct amdgpu_gart		gart;
>   	struct amdgpu_dummy_page	dummy_page;
>   	struct amdgpu_vm_manager	vm_manager;
> +	struct amdgpu_vmhub             vmhub[AMDGPU_MAX_VMHUBS];
>   
>   	/* memory management */
>   	struct amdgpu_mman		mman;
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> index df615d7..47a8080 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> @@ -375,6 +375,16 @@ static bool amdgpu_vm_ring_has_compute_vm_bug(struct amdgpu_ring *ring)
>   	return false;
>   }
>   
> +static u64 amdgpu_vm_adjust_mc_addr(struct amdgpu_device *adev, u64 mc_addr)
> +{
> +	u64 addr = mc_addr;
> +
> +	if (adev->mc.mc_funcs && adev->mc.mc_funcs->adjust_mc_addr)
> +		addr = adev->mc.mc_funcs->adjust_mc_addr(adev, addr);
> +
> +	return addr;
> +}
> +
>   /**
>    * amdgpu_vm_flush - hardware flush the vm
>    *
> @@ -405,9 +415,10 @@ int amdgpu_vm_flush(struct amdgpu_ring *ring, struct amdgpu_job *job)
>   	if (ring->funcs->emit_vm_flush && (job->vm_needs_flush ||
>   	    amdgpu_vm_is_gpu_reset(adev, id))) {
>   		struct fence *fence;
> +		u64 pd_addr = amdgpu_vm_adjust_mc_addr(adev, job->vm_pd_addr);
>   
> -		trace_amdgpu_vm_flush(job->vm_pd_addr, ring->idx, job->vm_id);
> -		amdgpu_ring_emit_vm_flush(ring, job->vm_id, job->vm_pd_addr);
> +		trace_amdgpu_vm_flush(pd_addr, ring->idx, job->vm_id);
> +		amdgpu_ring_emit_vm_flush(ring, job->vm_id, pd_addr);
>   
>   		r = amdgpu_fence_emit(ring, &fence);
>   		if (r)
> @@ -643,15 +654,18 @@ int amdgpu_vm_update_page_directory(struct amdgpu_device *adev,
>   		    (count == AMDGPU_VM_MAX_UPDATE_SIZE)) {
>   
>   			if (count) {
> +				uint64_t pt_addr =
> +					amdgpu_vm_adjust_mc_addr(adev, last_pt);
> +
>   				if (shadow)
>   					amdgpu_vm_do_set_ptes(&params,
>   							      last_shadow,
> -							      last_pt, count,
> +							      pt_addr, count,
>   							      incr,
>   							      AMDGPU_PTE_VALID);
>   
>   				amdgpu_vm_do_set_ptes(&params, last_pde,
> -						      last_pt, count, incr,
> +						      pt_addr, count, incr,
>   						      AMDGPU_PTE_VALID);
>   			}
>   
> @@ -665,11 +679,13 @@ int amdgpu_vm_update_page_directory(struct amdgpu_device *adev,
>   	}
>   
>   	if (count) {
> +		uint64_t pt_addr = amdgpu_vm_adjust_mc_addr(adev, last_pt);
> +
>   		if (vm->page_directory->shadow)
> -			amdgpu_vm_do_set_ptes(&params, last_shadow, last_pt,
> +			amdgpu_vm_do_set_ptes(&params, last_shadow, pt_addr,
>   					      count, incr, AMDGPU_PTE_VALID);
>   
> -		amdgpu_vm_do_set_ptes(&params, last_pde, last_pt,
> +		amdgpu_vm_do_set_ptes(&params, last_pde, pt_addr,
>   				      count, incr, AMDGPU_PTE_VALID);
>   	}
>   
> diff --git a/drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c b/drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c
> new file mode 100644
> index 0000000..1ff019c
> --- /dev/null
> +++ b/drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c
> @@ -0,0 +1,447 @@
> +/*
> + * Copyright 2016 Advanced Micro Devices, Inc.
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a
> + * copy of this software and associated documentation files (the "Software"),
> + * to deal in the Software without restriction, including without limitation
> + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
> + * and/or sell copies of the Software, and to permit persons to whom the
> + * Software is furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice shall be included in
> + * all copies or substantial portions of the Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
> + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
> + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
> + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
> + * OTHER DEALINGS IN THE SOFTWARE.
> + *
> + */
> +#include "amdgpu.h"
> +#include "gfxhub_v1_0.h"
> +
> +#include "vega10/soc15ip.h"
> +#include "vega10/GC/gc_9_0_offset.h"
> +#include "vega10/GC/gc_9_0_sh_mask.h"
> +#include "vega10/GC/gc_9_0_default.h"
> +#include "vega10/vega10_enum.h"
> +
> +#include "soc15_common.h"
> +
> +int gfxhub_v1_0_gart_enable(struct amdgpu_device *adev)
> +{
> +	u32 tmp;
> +	u64 value;
> +	u32 i;
> +
> +	/* Program MC. */
> +	/* Update configuration */
> +	WREG32(SOC15_REG_OFFSET(GC, 0, mmMC_VM_SYSTEM_APERTURE_LOW_ADDR),
> +		adev->mc.vram_start >> 18);
> +	WREG32(SOC15_REG_OFFSET(GC, 0, mmMC_VM_SYSTEM_APERTURE_HIGH_ADDR),
> +		adev->mc.vram_end >> 18);
> +
> +	value = adev->vram_scratch.gpu_addr - adev->mc.vram_start
> +		+ adev->vm_manager.vram_base_offset;
> +	WREG32(SOC15_REG_OFFSET(GC, 0,
> +				mmMC_VM_SYSTEM_APERTURE_DEFAULT_ADDR_LSB),
> +				(u32)(value >> 12));
> +	WREG32(SOC15_REG_OFFSET(GC, 0,
> +				mmMC_VM_SYSTEM_APERTURE_DEFAULT_ADDR_MSB),
> +				(u32)(value >> 44));
> +
> +	/* Disable AGP. */
> +	WREG32(SOC15_REG_OFFSET(GC, 0, mmMC_VM_AGP_BASE), 0);
> +	WREG32(SOC15_REG_OFFSET(GC, 0, mmMC_VM_AGP_TOP), 0);
> +	WREG32(SOC15_REG_OFFSET(GC, 0, mmMC_VM_AGP_BOT), 0xFFFFFFFF);
> +
> +	/* GART Enable. */
> +
> +	/* Setup TLB control */
> +	tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmMC_VM_MX_L1_TLB_CNTL));
> +	tmp = REG_SET_FIELD(tmp, MC_VM_MX_L1_TLB_CNTL, ENABLE_L1_TLB, 1);
> +	tmp = REG_SET_FIELD(tmp,
> +				MC_VM_MX_L1_TLB_CNTL,
> +				SYSTEM_ACCESS_MODE,
> +				3);
> +	tmp = REG_SET_FIELD(tmp,
> +				MC_VM_MX_L1_TLB_CNTL,
> +				ENABLE_ADVANCED_DRIVER_MODEL,
> +				1);
> +	tmp = REG_SET_FIELD(tmp,
> +				MC_VM_MX_L1_TLB_CNTL,
> +				SYSTEM_APERTURE_UNMAPPED_ACCESS,
> +				0);
> +	tmp = REG_SET_FIELD(tmp,
> +				MC_VM_MX_L1_TLB_CNTL,
> +				ECO_BITS,
> +				0);
> +	tmp = REG_SET_FIELD(tmp,
> +				MC_VM_MX_L1_TLB_CNTL,
> +				MTYPE,
> +				MTYPE_UC);/* XXX for emulation. */
> +	tmp = REG_SET_FIELD(tmp,
> +				MC_VM_MX_L1_TLB_CNTL,
> +				ATC_EN,
> +				1);
> +	WREG32(SOC15_REG_OFFSET(GC, 0, mmMC_VM_MX_L1_TLB_CNTL), tmp);
> +
> +	/* Setup L2 cache */
> +	tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_CNTL));
> +	tmp = REG_SET_FIELD(tmp, VM_L2_CNTL, ENABLE_L2_CACHE, 1);
> +	tmp = REG_SET_FIELD(tmp,
> +				VM_L2_CNTL,
> +				ENABLE_L2_FRAGMENT_PROCESSING,
> +				0);
> +	tmp = REG_SET_FIELD(tmp,
> +				VM_L2_CNTL,
> +				L2_PDE0_CACHE_TAG_GENERATION_MODE,
> +				0);/* XXX for emulation, Refer to closed source code.*/
> +	tmp = REG_SET_FIELD(tmp, VM_L2_CNTL, PDE_FAULT_CLASSIFICATION, 1);
> +	tmp = REG_SET_FIELD(tmp,
> +				VM_L2_CNTL,
> +				CONTEXT1_IDENTITY_ACCESS_MODE,
> +				1);
> +	tmp = REG_SET_FIELD(tmp,
> +				VM_L2_CNTL,
> +				IDENTITY_MODE_FRAGMENT_SIZE,
> +				0);
> +	WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_CNTL), tmp);
> +
> +	tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_CNTL2));
> +	tmp = REG_SET_FIELD(tmp, VM_L2_CNTL2, INVALIDATE_ALL_L1_TLBS, 1);
> +	tmp = REG_SET_FIELD(tmp, VM_L2_CNTL2, INVALIDATE_L2_CACHE, 1);
> +	WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_CNTL2), tmp);
> +
> +	tmp = mmVM_L2_CNTL3_DEFAULT;
> +	WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_CNTL3), tmp);
> +
> +	tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_CNTL4));
> +	tmp = REG_SET_FIELD(tmp,
> +			    VM_L2_CNTL4,
> +			    VMC_TAP_PDE_REQUEST_PHYSICAL,
> +			    0);
> +	tmp = REG_SET_FIELD(tmp,
> +			    VM_L2_CNTL4,
> +			    VMC_TAP_PTE_REQUEST_PHYSICAL,
> +			    0);
> +	WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_CNTL4), tmp);
> +
> +	/* setup context0 */
> +	WREG32(SOC15_REG_OFFSET(GC, 0,
> +				mmVM_CONTEXT0_PAGE_TABLE_START_ADDR_LO32),
> +		(u32)(adev->mc.gtt_start >> 12));
> +	WREG32(SOC15_REG_OFFSET(GC, 0,
> +				mmVM_CONTEXT0_PAGE_TABLE_START_ADDR_HI32),
> +		(u32)(adev->mc.gtt_start >> 44));
> +
> +	WREG32(SOC15_REG_OFFSET(GC, 0,
> +				mmVM_CONTEXT0_PAGE_TABLE_END_ADDR_LO32),
> +		(u32)(adev->mc.gtt_end >> 12));
> +	WREG32(SOC15_REG_OFFSET(GC, 0,
> +				mmVM_CONTEXT0_PAGE_TABLE_END_ADDR_HI32),
> +		(u32)(adev->mc.gtt_end >> 44));
> +
> +	BUG_ON(adev->gart.table_addr & (~0x0000FFFFFFFFF000ULL));
> +	value = adev->gart.table_addr - adev->mc.vram_start
> +		+ adev->vm_manager.vram_base_offset;
> +	value &= 0x0000FFFFFFFFF000ULL;
> +	value |= 0x1; /*valid bit*/
> +
> +	WREG32(SOC15_REG_OFFSET(GC, 0,
> +				mmVM_CONTEXT0_PAGE_TABLE_BASE_ADDR_LO32),
> +		(u32)value);
> +	WREG32(SOC15_REG_OFFSET(GC, 0,
> +				mmVM_CONTEXT0_PAGE_TABLE_BASE_ADDR_HI32),
> +		(u32)(value >> 32));
> +
> +	WREG32(SOC15_REG_OFFSET(GC, 0,
> +				mmVM_L2_PROTECTION_FAULT_DEFAULT_ADDR_LO32),
> +		(u32)(adev->dummy_page.addr >> 12));
> +	WREG32(SOC15_REG_OFFSET(GC, 0,
> +				mmVM_L2_PROTECTION_FAULT_DEFAULT_ADDR_HI32),
> +		(u32)(adev->dummy_page.addr >> 44));
> +
> +	tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_PROTECTION_FAULT_CNTL2));
> +	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL2,
> +			    ACTIVE_PAGE_MIGRATION_PTE_READ_RETRY,
> +			    1);
> +	WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_PROTECTION_FAULT_CNTL2), tmp);
> +
> +	tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmVM_CONTEXT0_CNTL));
> +	tmp = REG_SET_FIELD(tmp, VM_CONTEXT0_CNTL, ENABLE_CONTEXT, 1);
> +	tmp = REG_SET_FIELD(tmp, VM_CONTEXT0_CNTL, PAGE_TABLE_DEPTH, 0);
> +	WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_CONTEXT0_CNTL), tmp);
> +
> +	/* Disable identity aperture. */
> +	WREG32(SOC15_REG_OFFSET(GC, 0,
> +		mmVM_L2_CONTEXT1_IDENTITY_APERTURE_LOW_ADDR_LO32), 0xFFFFFFFF);
> +	WREG32(SOC15_REG_OFFSET(GC, 0,
> +		mmVM_L2_CONTEXT1_IDENTITY_APERTURE_LOW_ADDR_HI32), 0x0000000F);
> +
> +	WREG32(SOC15_REG_OFFSET(GC, 0,
> +		mmVM_L2_CONTEXT1_IDENTITY_APERTURE_HIGH_ADDR_LO32), 0);
> +	WREG32(SOC15_REG_OFFSET(GC, 0,
> +		mmVM_L2_CONTEXT1_IDENTITY_APERTURE_HIGH_ADDR_HI32), 0);
> +
> +	WREG32(SOC15_REG_OFFSET(GC, 0,
> +		mmVM_L2_CONTEXT_IDENTITY_PHYSICAL_OFFSET_LO32), 0);
> +	WREG32(SOC15_REG_OFFSET(GC, 0,
> +		mmVM_L2_CONTEXT_IDENTITY_PHYSICAL_OFFSET_HI32), 0);
> +
> +	for (i = 0; i <= 14; i++) {
> +		tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmVM_CONTEXT1_CNTL) + i);
> +		tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL, ENABLE_CONTEXT, 1);
> +		tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL, PAGE_TABLE_DEPTH, 1);
> +		tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
> +				RANGE_PROTECTION_FAULT_ENABLE_DEFAULT, 1);
> +		tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
> +				DUMMY_PAGE_PROTECTION_FAULT_ENABLE_DEFAULT, 1);
> +		tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
> +				PDE0_PROTECTION_FAULT_ENABLE_DEFAULT, 1);
> +		tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
> +				VALID_PROTECTION_FAULT_ENABLE_DEFAULT, 1);
> +		tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
> +				READ_PROTECTION_FAULT_ENABLE_DEFAULT, 1);
> +		tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
> +				WRITE_PROTECTION_FAULT_ENABLE_DEFAULT, 1);
> +		tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
> +				EXECUTE_PROTECTION_FAULT_ENABLE_DEFAULT, 1);
> +		tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
> +				PAGE_TABLE_BLOCK_SIZE,
> +				amdgpu_vm_block_size - 9);
> +		WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_CONTEXT1_CNTL) + i, tmp);
> +		WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_CONTEXT1_PAGE_TABLE_START_ADDR_LO32) + i*2, 0);
> +		WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_CONTEXT1_PAGE_TABLE_START_ADDR_HI32) + i*2, 0);
> +		WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_CONTEXT1_PAGE_TABLE_END_ADDR_LO32) + i*2,
> +				adev->vm_manager.max_pfn - 1);
> +		WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_CONTEXT1_PAGE_TABLE_END_ADDR_HI32) + i*2, 0);
> +	}
> +
> +	return 0;
> +}
> +
> +void gfxhub_v1_0_gart_disable(struct amdgpu_device *adev)
> +{
> +	u32 tmp;
> +	u32 i;
> +
> +	/* Disable all tables */
> +	for (i = 0; i < 16; i++)
> +		WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_CONTEXT0_CNTL) + i, 0);
> +
> +	/* Setup TLB control */
> +	tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmMC_VM_MX_L1_TLB_CNTL));
> +	tmp = REG_SET_FIELD(tmp, MC_VM_MX_L1_TLB_CNTL, ENABLE_L1_TLB, 0);
> +	tmp = REG_SET_FIELD(tmp,
> +				MC_VM_MX_L1_TLB_CNTL,
> +				ENABLE_ADVANCED_DRIVER_MODEL,
> +				0);
> +	WREG32(SOC15_REG_OFFSET(GC, 0, mmMC_VM_MX_L1_TLB_CNTL), tmp);
> +
> +	/* Setup L2 cache */
> +	tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_CNTL));
> +	tmp = REG_SET_FIELD(tmp, VM_L2_CNTL, ENABLE_L2_CACHE, 0);
> +	WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_CNTL), tmp);
> +	WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_CNTL3), 0);
> +}
> +
> +/**
> + * gfxhub_v1_0_set_fault_enable_default - update GART/VM fault handling
> + *
> + * @adev: amdgpu_device pointer
> + * @value: true redirects VM faults to the default page
> + */
> +void gfxhub_v1_0_set_fault_enable_default(struct amdgpu_device *adev,
> +					  bool value)
> +{
> +	u32 tmp;
> +	tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_PROTECTION_FAULT_CNTL));
> +	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
> +			RANGE_PROTECTION_FAULT_ENABLE_DEFAULT, value);
> +	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
> +			PDE0_PROTECTION_FAULT_ENABLE_DEFAULT, value);
> +	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
> +			PDE1_PROTECTION_FAULT_ENABLE_DEFAULT, value);
> +	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
> +			PDE2_PROTECTION_FAULT_ENABLE_DEFAULT, value);
> +	tmp = REG_SET_FIELD(tmp,
> +			VM_L2_PROTECTION_FAULT_CNTL,
> +			TRANSLATE_FURTHER_PROTECTION_FAULT_ENABLE_DEFAULT,
> +			value);
> +	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
> +			NACK_PROTECTION_FAULT_ENABLE_DEFAULT, value);
> +	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
> +			DUMMY_PAGE_PROTECTION_FAULT_ENABLE_DEFAULT, value);
> +	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
> +			VALID_PROTECTION_FAULT_ENABLE_DEFAULT, value);
> +	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
> +			READ_PROTECTION_FAULT_ENABLE_DEFAULT, value);
> +	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
> +			WRITE_PROTECTION_FAULT_ENABLE_DEFAULT, value);
> +	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
> +			EXECUTE_PROTECTION_FAULT_ENABLE_DEFAULT, value);
> +	WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_PROTECTION_FAULT_CNTL), tmp);
> +}
> +
> +static uint32_t gfxhub_v1_0_get_invalidate_req(unsigned int vm_id)
> +{
> +	u32 req = 0;
> +
> +	/* invalidate using legacy mode on vm_id*/
> +	req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ,
> +			    PER_VMID_INVALIDATE_REQ, 1 << vm_id);
> +	req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ, FLUSH_TYPE, 0);
> +	req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ, INVALIDATE_L2_PTES, 1);
> +	req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ, INVALIDATE_L2_PDE0, 1);
> +	req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ, INVALIDATE_L2_PDE1, 1);
> +	req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ, INVALIDATE_L2_PDE2, 1);
> +	req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ, INVALIDATE_L1_PTES, 1);
> +	req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ,
> +			    CLEAR_PROTECTION_FAULT_STATUS_ADDR,	0);
> +
> +	return req;
> +}
> +
> +static uint32_t gfxhub_v1_0_get_vm_protection_bits(void)
> +{
> +	return (VM_CONTEXT1_CNTL__RANGE_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK |
> +		    VM_CONTEXT1_CNTL__DUMMY_PAGE_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK |
> +		    VM_CONTEXT1_CNTL__PDE0_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK |
> +		    VM_CONTEXT1_CNTL__VALID_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK |
> +		    VM_CONTEXT1_CNTL__READ_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK |
> +		    VM_CONTEXT1_CNTL__WRITE_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK |
> +		    VM_CONTEXT1_CNTL__EXECUTE_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK);
> +}
> +
> +static int gfxhub_v1_0_early_init(void *handle)
> +{
> +	return 0;
> +}
> +
> +static int gfxhub_v1_0_late_init(void *handle)
> +{
> +	return 0;
> +}
> +
> +static int gfxhub_v1_0_sw_init(void *handle)
> +{
> +	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
> +	struct amdgpu_vmhub *hub = &adev->vmhub[AMDGPU_GFXHUB];
> +
> +	hub->ctx0_ptb_addr_lo32 =
> +		SOC15_REG_OFFSET(GC, 0,
> +				 mmVM_CONTEXT0_PAGE_TABLE_BASE_ADDR_LO32);
> +	hub->ctx0_ptb_addr_hi32 =
> +		SOC15_REG_OFFSET(GC, 0,
> +				 mmVM_CONTEXT0_PAGE_TABLE_BASE_ADDR_HI32);
> +	hub->vm_inv_eng0_req =
> +		SOC15_REG_OFFSET(GC, 0, mmVM_INVALIDATE_ENG0_REQ);
> +	hub->vm_inv_eng0_ack =
> +		SOC15_REG_OFFSET(GC, 0, mmVM_INVALIDATE_ENG0_ACK);
> +	hub->vm_context0_cntl =
> +		SOC15_REG_OFFSET(GC, 0, mmVM_CONTEXT0_CNTL);
> +	hub->vm_l2_pro_fault_status =
> +		SOC15_REG_OFFSET(GC, 0, mmVM_L2_PROTECTION_FAULT_STATUS);
> +	hub->vm_l2_pro_fault_cntl =
> +		SOC15_REG_OFFSET(GC, 0, mmVM_L2_PROTECTION_FAULT_CNTL);
> +
> +	hub->get_invalidate_req = gfxhub_v1_0_get_invalidate_req;
> +	hub->get_vm_protection_bits = gfxhub_v1_0_get_vm_protection_bits;
> +
> +	return 0;
> +}
> +
> +static int gfxhub_v1_0_sw_fini(void *handle)
> +{
> +	return 0;
> +}
> +
> +static int gfxhub_v1_0_hw_init(void *handle)
> +{
> +	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
> +	unsigned i;
> +
> +	for (i = 0; i < 18; ++i) {
> +		WREG32(SOC15_REG_OFFSET(GC, 0,
> +					mmVM_INVALIDATE_ENG0_ADDR_RANGE_LO32) +
> +		       2 * i, 0xffffffff);
> +		WREG32(SOC15_REG_OFFSET(GC, 0,
> +					mmVM_INVALIDATE_ENG0_ADDR_RANGE_HI32) +
> +		       2 * i, 0x1f);
> +	}
> +
> +	return 0;
> +}
> +
> +static int gfxhub_v1_0_hw_fini(void *handle)
> +{
> +	return 0;
> +}
> +
> +static int gfxhub_v1_0_suspend(void *handle)
> +{
> +	return 0;
> +}
> +
> +static int gfxhub_v1_0_resume(void *handle)
> +{
> +	return 0;
> +}
> +
> +static bool gfxhub_v1_0_is_idle(void *handle)
> +{
> +	return true;
> +}
> +
> +static int gfxhub_v1_0_wait_for_idle(void *handle)
> +{
> +	return 0;
> +}
> +
> +static int gfxhub_v1_0_soft_reset(void *handle)
> +{
> +	return 0;
> +}
> +
> +static int gfxhub_v1_0_set_clockgating_state(void *handle,
> +					  enum amd_clockgating_state state)
> +{
> +	return 0;
> +}
> +
> +static int gfxhub_v1_0_set_powergating_state(void *handle,
> +					  enum amd_powergating_state state)
> +{
> +	return 0;
> +}
> +
> +const struct amd_ip_funcs gfxhub_v1_0_ip_funcs = {
> +	.name = "gfxhub_v1_0",
> +	.early_init = gfxhub_v1_0_early_init,
> +	.late_init = gfxhub_v1_0_late_init,
> +	.sw_init = gfxhub_v1_0_sw_init,
> +	.sw_fini = gfxhub_v1_0_sw_fini,
> +	.hw_init = gfxhub_v1_0_hw_init,
> +	.hw_fini = gfxhub_v1_0_hw_fini,
> +	.suspend = gfxhub_v1_0_suspend,
> +	.resume = gfxhub_v1_0_resume,
> +	.is_idle = gfxhub_v1_0_is_idle,
> +	.wait_for_idle = gfxhub_v1_0_wait_for_idle,
> +	.soft_reset = gfxhub_v1_0_soft_reset,
> +	.set_clockgating_state = gfxhub_v1_0_set_clockgating_state,
> +	.set_powergating_state = gfxhub_v1_0_set_powergating_state,
> +};
> +
> +const struct amdgpu_ip_block_version gfxhub_v1_0_ip_block =
> +{
> +	.type = AMD_IP_BLOCK_TYPE_GFXHUB,
> +	.major = 1,
> +	.minor = 0,
> +	.rev = 0,
> +	.funcs = &gfxhub_v1_0_ip_funcs,
> +};
> diff --git a/drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.h b/drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.h
> new file mode 100644
> index 0000000..5129a8f
> --- /dev/null
> +++ b/drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.h
> @@ -0,0 +1,35 @@
> +/*
> + * Copyright 2016 Advanced Micro Devices, Inc.
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a
> + * copy of this software and associated documentation files (the "Software"),
> + * to deal in the Software without restriction, including without limitation
> + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
> + * and/or sell copies of the Software, and to permit persons to whom the
> + * Software is furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice shall be included in
> + * all copies or substantial portions of the Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
> + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
> + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
> + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
> + * OTHER DEALINGS IN THE SOFTWARE.
> + *
> + */
> +
> +#ifndef __GFXHUB_V1_0_H__
> +#define __GFXHUB_V1_0_H__
> +
> +int gfxhub_v1_0_gart_enable(struct amdgpu_device *adev);
> +void gfxhub_v1_0_gart_disable(struct amdgpu_device *adev);
> +void gfxhub_v1_0_set_fault_enable_default(struct amdgpu_device *adev,
> +					  bool value);
> +
> +extern const struct amd_ip_funcs gfxhub_v1_0_ip_funcs;
> +extern const struct amdgpu_ip_block_version gfxhub_v1_0_ip_block;
> +
> +#endif
> diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
> new file mode 100644
> index 0000000..5cf0fc3
> --- /dev/null
> +++ b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
> @@ -0,0 +1,826 @@
> +/*
> + * Copyright 2016 Advanced Micro Devices, Inc.
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a
> + * copy of this software and associated documentation files (the "Software"),
> + * to deal in the Software without restriction, including without limitation
> + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
> + * and/or sell copies of the Software, and to permit persons to whom the
> + * Software is furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice shall be included in
> + * all copies or substantial portions of the Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
> + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
> + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
> + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
> + * OTHER DEALINGS IN THE SOFTWARE.
> + *
> + */
> +#include <linux/firmware.h>
> +#include "amdgpu.h"
> +#include "gmc_v9_0.h"
> +
> +#include "vega10/soc15ip.h"
> +#include "vega10/HDP/hdp_4_0_offset.h"
> +#include "vega10/HDP/hdp_4_0_sh_mask.h"
> +#include "vega10/GC/gc_9_0_sh_mask.h"
> +#include "vega10/vega10_enum.h"
> +
> +#include "soc15_common.h"
> +
> +#include "nbio_v6_1.h"
> +#include "gfxhub_v1_0.h"
> +#include "mmhub_v1_0.h"
> +
> +#define mmDF_CS_AON0_DramBaseAddress0                                                                  0x0044
> +#define mmDF_CS_AON0_DramBaseAddress0_BASE_IDX                                                         0
> +//DF_CS_AON0_DramBaseAddress0
> +#define DF_CS_AON0_DramBaseAddress0__AddrRngVal__SHIFT                                                        0x0
> +#define DF_CS_AON0_DramBaseAddress0__LgcyMmioHoleEn__SHIFT                                                    0x1
> +#define DF_CS_AON0_DramBaseAddress0__IntLvNumChan__SHIFT                                                      0x4
> +#define DF_CS_AON0_DramBaseAddress0__IntLvAddrSel__SHIFT                                                      0x8
> +#define DF_CS_AON0_DramBaseAddress0__DramBaseAddr__SHIFT                                                      0xc
> +#define DF_CS_AON0_DramBaseAddress0__AddrRngVal_MASK                                                          0x00000001L
> +#define DF_CS_AON0_DramBaseAddress0__LgcyMmioHoleEn_MASK                                                      0x00000002L
> +#define DF_CS_AON0_DramBaseAddress0__IntLvNumChan_MASK                                                        0x000000F0L
> +#define DF_CS_AON0_DramBaseAddress0__IntLvAddrSel_MASK                                                        0x00000700L
> +#define DF_CS_AON0_DramBaseAddress0__DramBaseAddr_MASK                                                        0xFFFFF000L
> +
> +/* XXX Move this macro to VEGA10 header file, which is like vid.h for VI.*/
> +#define AMDGPU_NUM_OF_VMIDS			8
> +
> +static const u32 golden_settings_vega10_hdp[] =
> +{
> +	0xf64, 0x0fffffff, 0x00000000,
> +	0xf65, 0x0fffffff, 0x00000000,
> +	0xf66, 0x0fffffff, 0x00000000,
> +	0xf67, 0x0fffffff, 0x00000000,
> +	0xf68, 0x0fffffff, 0x00000000,
> +	0xf6a, 0x0fffffff, 0x00000000,
> +	0xf6b, 0x0fffffff, 0x00000000,
> +	0xf6c, 0x0fffffff, 0x00000000,
> +	0xf6d, 0x0fffffff, 0x00000000,
> +	0xf6e, 0x0fffffff, 0x00000000,
> +};
> +
> +static int gmc_v9_0_vm_fault_interrupt_state(struct amdgpu_device *adev,
> +					struct amdgpu_irq_src *src,
> +					unsigned type,
> +					enum amdgpu_interrupt_state state)
> +{
> +	struct amdgpu_vmhub *hub;
> +	u32 tmp, reg, bits, i;
> +
> +	switch (state) {
> +	case AMDGPU_IRQ_STATE_DISABLE:
> +		/* MM HUB */
> +		hub = &adev->vmhub[AMDGPU_MMHUB];
> +		bits = hub->get_vm_protection_bits();
> +		for (i = 0; i < 16; i++) {
> +			reg = hub->vm_context0_cntl + i;
> +			tmp = RREG32(reg);
> +			tmp &= ~bits;
> +			WREG32(reg, tmp);
> +		}
> +
> +		/* GFX HUB */
> +		hub = &adev->vmhub[AMDGPU_GFXHUB];
> +		bits = hub->get_vm_protection_bits();
> +		for (i = 0; i < 16; i++) {
> +			reg = hub->vm_context0_cntl + i;
> +			tmp = RREG32(reg);
> +			tmp &= ~bits;
> +			WREG32(reg, tmp);
> +		}
> +		break;
> +	case AMDGPU_IRQ_STATE_ENABLE:
> +		/* MM HUB */
> +		hub = &adev->vmhub[AMDGPU_MMHUB];
> +		bits = hub->get_vm_protection_bits();
> +		for (i = 0; i < 16; i++) {
> +			reg = hub->vm_context0_cntl + i;
> +			tmp = RREG32(reg);
> +			tmp |= bits;
> +			WREG32(reg, tmp);
> +		}
> +
> +		/* GFX HUB */
> +		hub = &adev->vmhub[AMDGPU_GFXHUB];
> +		bits = hub->get_vm_protection_bits();
> +		for (i = 0; i < 16; i++) {
> +			reg = hub->vm_context0_cntl + i;
> +			tmp = RREG32(reg);
> +			tmp |= bits;
> +			WREG32(reg, tmp);
> +		}
> +		break;
> +	default:
> +		break;
> +	}
> +
> +	return 0;
> +}
> +
> +static int gmc_v9_0_process_interrupt(struct amdgpu_device *adev,
> +				struct amdgpu_irq_src *source,
> +				struct amdgpu_iv_entry *entry)
> +{
> +	struct amdgpu_vmhub *gfxhub = &adev->vmhub[AMDGPU_GFXHUB];
> +	struct amdgpu_vmhub *mmhub = &adev->vmhub[AMDGPU_MMHUB];
> +	uint32_t status;
> +	u64 addr;
> +
> +	addr = (u64)entry->src_data[0] << 12;
> +	addr |= ((u64)entry->src_data[1] & 0xf) << 44;
> +
> +	if (entry->vm_id_src) {
> +		status = RREG32(mmhub->vm_l2_pro_fault_status);
> +		WREG32_P(mmhub->vm_l2_pro_fault_cntl, 1, ~1);
> +	} else {
> +		status = RREG32(gfxhub->vm_l2_pro_fault_status);
> +		WREG32_P(gfxhub->vm_l2_pro_fault_cntl, 1, ~1);
> +	}
> +
> +	DRM_ERROR("[%s]VMC page fault (src_id:%u ring:%u vm_id:%u pas_id:%u) "
> +		  "at page 0x%016llx from %d\n"
> +		  "VM_L2_PROTECTION_FAULT_STATUS:0x%08X\n",
> +		  entry->vm_id_src ? "mmhub" : "gfxhub",
> +		  entry->src_id, entry->ring_id, entry->vm_id, entry->pas_id,
> +		  addr, entry->client_id, status);
> +
> +	return 0;
> +}
> +
> +static const struct amdgpu_irq_src_funcs gmc_v9_0_irq_funcs = {
> +	.set = gmc_v9_0_vm_fault_interrupt_state,
> +	.process = gmc_v9_0_process_interrupt,
> +};
> +
> +static void gmc_v9_0_set_irq_funcs(struct amdgpu_device *adev)
> +{
> +	adev->mc.vm_fault.num_types = 1;
> +	adev->mc.vm_fault.funcs = &gmc_v9_0_irq_funcs;
> +}
> +
> +/*
> + * GART
> + * VMID 0 is the physical GPU addresses as used by the kernel.
> + * VMIDs 1-15 are used for userspace clients and are handled
> + * by the amdgpu vm/hsa code.
> + */
> +
> +/**
> + * gmc_v9_0_gart_flush_gpu_tlb - gart tlb flush callback
> + *
> + * @adev: amdgpu_device pointer
> + * @vmid: vm instance to flush
> + *
> + * Flush the TLB for the requested page table.
> + */
> +static void gmc_v9_0_gart_flush_gpu_tlb(struct amdgpu_device *adev,
> +					uint32_t vmid)
> +{
> +	/* Use register 17 for GART */
> +	const unsigned eng = 17;
> +	unsigned i, j;
> +
> +	/* flush hdp cache */
> +	nbio_v6_1_hdp_flush(adev);
> +
> +	spin_lock(&adev->mc.invalidate_lock);
> +
> +	for (i = 0; i < AMDGPU_MAX_VMHUBS; ++i) {
> +		struct amdgpu_vmhub *hub = &adev->vmhub[i];
> +		u32 tmp = hub->get_invalidate_req(vmid);
> +
> +		WREG32(hub->vm_inv_eng0_req + eng, tmp);
> +
> +		/* Busy wait for ACK.*/
> +		for (j = 0; j < 100; j++) {
> +			tmp = RREG32(hub->vm_inv_eng0_ack + eng);
> +			tmp &= 1 << vmid;
> +			if (tmp)
> +				break;
> +			cpu_relax();
> +		}
> +		if (j < 100)
> +			continue;
> +
> +		/* Wait for ACK with a delay.*/
> +		for (j = 0; j < adev->usec_timeout; j++) {
> +			tmp = RREG32(hub->vm_inv_eng0_ack + eng);
> +			tmp &= 1 << vmid;
> +			if (tmp)
> +				break;
> +			udelay(1);
> +		}
> +		if (j < adev->usec_timeout)
> +			continue;
> +
> +		DRM_ERROR("Timeout waiting for VM flush ACK!\n");
> +	}
> +
> +	spin_unlock(&adev->mc.invalidate_lock);
> +}
> +
> +/**
> + * gmc_v9_0_gart_set_pte_pde - update the page tables using MMIO
> + *
> + * @adev: amdgpu_device pointer
> + * @cpu_pt_addr: cpu address of the page table
> + * @gpu_page_idx: entry in the page table to update
> + * @addr: dst addr to write into pte/pde
> + * @flags: access flags
> + *
> + * Update the page tables using the CPU.
> + */
> +static int gmc_v9_0_gart_set_pte_pde(struct amdgpu_device *adev,
> +					void *cpu_pt_addr,
> +					uint32_t gpu_page_idx,
> +					uint64_t addr,
> +					uint64_t flags)
> +{
> +	void __iomem *ptr = (void *)cpu_pt_addr;
> +	uint64_t value;
> +
> +	/*
> +	 * PTE format on VEGA 10:
> +	 * 63:59 reserved
> +	 * 58:57 mtype
> +	 * 56 F
> +	 * 55 L
> +	 * 54 P
> +	 * 53 SW
> +	 * 52 T
> +	 * 50:48 reserved
> +	 * 47:12 4k physical page base address
> +	 * 11:7 fragment
> +	 * 6 write
> +	 * 5 read
> +	 * 4 exe
> +	 * 3 Z
> +	 * 2 snooped
> +	 * 1 system
> +	 * 0 valid
> +	 *
> +	 * PDE format on VEGA 10:
> +	 * 63:59 block fragment size
> +	 * 58:55 reserved
> +	 * 54 P
> +	 * 53:48 reserved
> +	 * 47:6 physical base address of PD or PTE
> +	 * 5:3 reserved
> +	 * 2 C
> +	 * 1 system
> +	 * 0 valid
> +	 */
> +
> +	/*
> +	 * The following is for PTE only. GART does not have PDEs.
> +	*/
> +	value = addr & 0x0000FFFFFFFFF000ULL;
> +	value |= flags;
> +	writeq(value, ptr + (gpu_page_idx * 8));
> +	return 0;
> +}
> +
> +static uint64_t gmc_v9_0_get_vm_pte_flags(struct amdgpu_device *adev,
> +						uint32_t flags)
> +{
> +	uint64_t pte_flag = 0;
> +
> +	if (flags & AMDGPU_VM_PAGE_EXECUTABLE)
> +		pte_flag |= AMDGPU_PTE_EXECUTABLE;
> +	if (flags & AMDGPU_VM_PAGE_READABLE)
> +		pte_flag |= AMDGPU_PTE_READABLE;
> +	if (flags & AMDGPU_VM_PAGE_WRITEABLE)
> +		pte_flag |= AMDGPU_PTE_WRITEABLE;
> +
> +	switch (flags & AMDGPU_VM_MTYPE_MASK) {
> +	case AMDGPU_VM_MTYPE_DEFAULT:
> +		pte_flag |= AMDGPU_PTE_MTYPE(MTYPE_NC);
> +		break;
> +	case AMDGPU_VM_MTYPE_NC:
> +		pte_flag |= AMDGPU_PTE_MTYPE(MTYPE_NC);
> +		break;
> +	case AMDGPU_VM_MTYPE_WC:
> +		pte_flag |= AMDGPU_PTE_MTYPE(MTYPE_WC);
> +		break;
> +	case AMDGPU_VM_MTYPE_CC:
> +		pte_flag |= AMDGPU_PTE_MTYPE(MTYPE_CC);
> +		break;
> +	case AMDGPU_VM_MTYPE_UC:
> +		pte_flag |= AMDGPU_PTE_MTYPE(MTYPE_UC);
> +		break;
> +	default:
> +		pte_flag |= AMDGPU_PTE_MTYPE(MTYPE_NC);
> +		break;
> +	}
> +
> +	if (flags & AMDGPU_VM_PAGE_PRT)
> +		pte_flag |= AMDGPU_PTE_PRT;
> +
> +	return pte_flag;
> +}
> +
> +static const struct amdgpu_gart_funcs gmc_v9_0_gart_funcs = {
> +	.flush_gpu_tlb = gmc_v9_0_gart_flush_gpu_tlb,
> +	.set_pte_pde = gmc_v9_0_gart_set_pte_pde,
> +	.get_vm_pte_flags = gmc_v9_0_get_vm_pte_flags
> +};
> +
> +static void gmc_v9_0_set_gart_funcs(struct amdgpu_device *adev)
> +{
> +	if (adev->gart.gart_funcs == NULL)
> +		adev->gart.gart_funcs = &gmc_v9_0_gart_funcs;
> +}
> +
> +static u64 gmc_v9_0_adjust_mc_addr(struct amdgpu_device *adev, u64 mc_addr)
> +{
> +	return adev->vm_manager.vram_base_offset + mc_addr - adev->mc.vram_start;
> +}
> +
> +static const struct amdgpu_mc_funcs gmc_v9_0_mc_funcs = {
> +	.adjust_mc_addr = gmc_v9_0_adjust_mc_addr,
> +};
> +
> +static void gmc_v9_0_set_mc_funcs(struct amdgpu_device *adev)
> +{
> +	adev->mc.mc_funcs = &gmc_v9_0_mc_funcs;
> +}
> +
> +static int gmc_v9_0_early_init(void *handle)
> +{
> +	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
> +
> +	gmc_v9_0_set_gart_funcs(adev);
> +	gmc_v9_0_set_mc_funcs(adev);
> +	gmc_v9_0_set_irq_funcs(adev);
> +
> +	return 0;
> +}
> +
> +static int gmc_v9_0_late_init(void *handle)
> +{
> +	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
> +	return amdgpu_irq_get(adev, &adev->mc.vm_fault, 0);
> +}
> +
> +static void gmc_v9_0_vram_gtt_location(struct amdgpu_device *adev,
> +					struct amdgpu_mc *mc)
> +{
> +	u64 base = mmhub_v1_0_get_fb_location(adev);
> +	amdgpu_vram_location(adev, &adev->mc, base);
> +	adev->mc.gtt_base_align = 0;
> +	amdgpu_gtt_location(adev, mc);
> +}
> +
> +/**
> + * gmc_v9_0_mc_init - initialize the memory controller driver params
> + *
> + * @adev: amdgpu_device pointer
> + *
> + * Look up the amount of vram, vram width, and decide how to place
> + * vram and gart within the GPU's physical address space.
> + * Returns 0 for success.
> + */
> +static int gmc_v9_0_mc_init(struct amdgpu_device *adev)
> +{
> +	u32 tmp;
> +	int chansize, numchan;
> +
> +	/* hbm memory channel size */
> +	chansize = 128;
> +
> +	tmp = RREG32(SOC15_REG_OFFSET(DF, 0, mmDF_CS_AON0_DramBaseAddress0));
> +	tmp &= DF_CS_AON0_DramBaseAddress0__IntLvNumChan_MASK;
> +	tmp >>= DF_CS_AON0_DramBaseAddress0__IntLvNumChan__SHIFT;
> +	switch (tmp) {
> +	case 0:
> +	default:
> +		numchan = 1;
> +		break;
> +	case 1:
> +		numchan = 2;
> +		break;
> +	case 2:
> +		numchan = 0;
> +		break;
> +	case 3:
> +		numchan = 4;
> +		break;
> +	case 4:
> +		numchan = 0;
> +		break;
> +	case 5:
> +		numchan = 8;
> +		break;
> +	case 6:
> +		numchan = 0;
> +		break;
> +	case 7:
> +		numchan = 16;
> +		break;
> +	case 8:
> +		numchan = 2;
> +		break;
> +	}
> +	adev->mc.vram_width = numchan * chansize;
> +
> +	/* Could the aperture size report 0? */
> +	adev->mc.aper_base = pci_resource_start(adev->pdev, 0);
> +	adev->mc.aper_size = pci_resource_len(adev->pdev, 0);
> +	/* size in MB */
> +	adev->mc.mc_vram_size =
> +		nbio_v6_1_get_memsize(adev) * 1024ULL * 1024ULL;
> +	adev->mc.real_vram_size = adev->mc.mc_vram_size;
> +	adev->mc.visible_vram_size = adev->mc.aper_size;
> +
> +	/* In case the PCI BAR is larger than the actual amount of vram */
> +	if (adev->mc.visible_vram_size > adev->mc.real_vram_size)
> +		adev->mc.visible_vram_size = adev->mc.real_vram_size;
> +
> +	/* unless the user has overridden it, set the gart
> +	 * size equal to 1024 MB or the vram size, whichever is larger.
> +	 */
> +	if (amdgpu_gart_size == -1)
> +		adev->mc.gtt_size = max((1024ULL << 20), adev->mc.mc_vram_size);
> +	else
> +		adev->mc.gtt_size = (uint64_t)amdgpu_gart_size << 20;
> +
> +	gmc_v9_0_vram_gtt_location(adev, &adev->mc);
> +
> +	return 0;
> +}
> +
> +static int gmc_v9_0_gart_init(struct amdgpu_device *adev)
> +{
> +	int r;
> +
> +	if (adev->gart.robj) {
> +		WARN(1, "VEGA10 PCIE GART already initialized\n");
> +		return 0;
> +	}
> +	/* Initialize common gart structure */
> +	r = amdgpu_gart_init(adev);
> +	if (r)
> +		return r;
> +	adev->gart.table_size = adev->gart.num_gpu_pages * 8;
> +	adev->gart.gart_pte_flags = AMDGPU_PTE_MTYPE(MTYPE_UC) |
> +				 AMDGPU_PTE_EXECUTABLE;
> +	return amdgpu_gart_table_vram_alloc(adev);
> +}
> +
> +/*
> + * vm
> + * VMID 0 is the physical GPU addresses as used by the kernel.
> + * VMIDs 1-15 are used for userspace clients and are handled
> + * by the amdgpu vm/hsa code.
> + */
> +/**
> + * gmc_v9_0_vm_init - vm init callback
> + *
> + * @adev: amdgpu_device pointer
> + *
> + * Inits vega10 specific vm parameters (number of VMs, base of vram for
> + * VMIDs 1-15) (vega10).
> + * Returns 0 for success.
> + */
> +static int gmc_v9_0_vm_init(struct amdgpu_device *adev)
> +{
> +	/*
> +	 * number of VMs
> +	 * VMID 0 is reserved for System
> +	 * amdgpu graphics/compute will use VMIDs 1-7
> +	 * amdkfd will use VMIDs 8-15
> +	 */
> +	adev->vm_manager.num_ids = AMDGPU_NUM_OF_VMIDS;
> +	amdgpu_vm_manager_init(adev);
> +
> +	/* base offset of vram pages */
> +	/* XXX This value is not zero for APUs. */
> +	adev->vm_manager.vram_base_offset = 0;
> +
> +	return 0;
> +}
> +
> +/**
> + * gmc_v9_0_vm_fini - vm fini callback
> + *
> + * @adev: amdgpu_device pointer
> + *
> + * Tear down any asic specific VM setup.
> + */
> +static void gmc_v9_0_vm_fini(struct amdgpu_device *adev)
> +{
> +	return;
> +}
> +
> +static int gmc_v9_0_sw_init(void *handle)
> +{
> +	int r;
> +	int dma_bits;
> +	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
> +
> +	spin_lock_init(&adev->mc.invalidate_lock);
> +
> +	if (adev->flags & AMD_IS_APU) {
> +		adev->mc.vram_type = AMDGPU_VRAM_TYPE_UNKNOWN;
> +	} else {
> +		/* XXX Don't know how to get VRAM type yet. */
> +		adev->mc.vram_type = AMDGPU_VRAM_TYPE_HBM;
> +	}
> +
> +	/* This interrupt is VMC page fault.*/
> +	r = amdgpu_irq_add_id(adev, AMDGPU_IH_CLIENTID_VMC, 0,
> +				&adev->mc.vm_fault);
> +
> +	if (r)
> +		return r;
> +
> +	/* Adjust VM size here.
> +	 * Currently defaults to 64GB ((16 << 20) 4k pages).
> +	 * Max GPUVM size is 48 bits.
> +	 */
> +	adev->vm_manager.max_pfn = amdgpu_vm_size << 18;
> +
> +	/* Set the internal MC address mask
> +	 * This is the max address of the GPU's
> +	 * internal address space.
> +	 */
> +	adev->mc.mc_mask = 0xffffffffffffULL; /* 48 bit MC */
> +
> +	/* set DMA mask + need_dma32 flags.
> +	 * PCIE - can handle 44-bits.
> +	 * IGP - can handle 44-bits
> +	 * PCI - dma32 for legacy pci gart, 44 bits on vega10
> +	 */
> +	adev->need_dma32 = false;
> +	dma_bits = adev->need_dma32 ? 32 : 44;
> +	r = pci_set_dma_mask(adev->pdev, DMA_BIT_MASK(dma_bits));
> +	if (r) {
> +		adev->need_dma32 = true;
> +		dma_bits = 32;
> +		printk(KERN_WARNING "amdgpu: No suitable DMA available.\n");
> +	}
> +	r = pci_set_consistent_dma_mask(adev->pdev, DMA_BIT_MASK(dma_bits));
> +	if (r) {
> +		pci_set_consistent_dma_mask(adev->pdev, DMA_BIT_MASK(32));
> +		printk(KERN_WARNING "amdgpu: No coherent DMA available.\n");
> +	}
> +
> +	r = gmc_v9_0_mc_init(adev);
> +	if (r)
> +		return r;
> +
> +	/* Memory manager */
> +	r = amdgpu_bo_init(adev);
> +	if (r)
> +		return r;
> +
> +	r = gmc_v9_0_gart_init(adev);
> +	if (r)
> +		return r;
> +
> +	if (!adev->vm_manager.enabled) {
> +		r = gmc_v9_0_vm_init(adev);
> +		if (r) {
> +			dev_err(adev->dev, "vm manager initialization failed (%d).\n", r);
> +			return r;
> +		}
> +		adev->vm_manager.enabled = true;
> +	}
> +	return r;
> +}
> +
> +/**
> + * gmc_v9_0_gart_fini - gart fini callback
> + *
> + * @adev: amdgpu_device pointer
> + *
> + * Tears down the driver GART/VM setup (vega10).
> + */
> +static void gmc_v9_0_gart_fini(struct amdgpu_device *adev)
> +{
> +	amdgpu_gart_table_vram_free(adev);
> +	amdgpu_gart_fini(adev);
> +}
> +
> +static int gmc_v9_0_sw_fini(void *handle)
> +{
> +	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
> +
> +	if (adev->vm_manager.enabled) {
> +		amdgpu_vm_manager_fini(adev);
> +		gmc_v9_0_vm_fini(adev);
> +		adev->vm_manager.enabled = false;
> +	}
> +	gmc_v9_0_gart_fini(adev);
> +	amdgpu_gem_force_release(adev);
> +	amdgpu_bo_fini(adev);
> +
> +	return 0;
> +}
> +
> +static void gmc_v9_0_init_golden_registers(struct amdgpu_device *adev)
> +{
> +	switch (adev->asic_type) {
> +	case CHIP_VEGA10:
> +		break;
> +	default:
> +		break;
> +	}
> +}
> +
> +/**
> + * gmc_v9_0_gart_enable - gart enable
> + *
> + * @adev: amdgpu_device pointer
> + */
> +static int gmc_v9_0_gart_enable(struct amdgpu_device *adev)
> +{
> +	int r;
> +	bool value;
> +	u32 tmp;
> +
> +	amdgpu_program_register_sequence(adev,
> +		golden_settings_vega10_hdp,
> +		(const u32)ARRAY_SIZE(golden_settings_vega10_hdp));
> +
> +	if (adev->gart.robj == NULL) {
> +		dev_err(adev->dev, "No VRAM object for PCIE GART.\n");
> +		return -EINVAL;
> +	}
> +	r = amdgpu_gart_table_vram_pin(adev);
> +	if (r)
> +		return r;
> +
> +	/* After HDP is initialized, flush HDP.*/
> +	nbio_v6_1_hdp_flush(adev);
> +
> +	r = gfxhub_v1_0_gart_enable(adev);
> +	if (r)
> +		return r;
> +
> +	r = mmhub_v1_0_gart_enable(adev);
> +	if (r)
> +		return r;
> +
> +	tmp = RREG32(SOC15_REG_OFFSET(HDP, 0, mmHDP_MISC_CNTL));
> +	tmp |= HDP_MISC_CNTL__FLUSH_INVALIDATE_CACHE_MASK;
> +	WREG32(SOC15_REG_OFFSET(HDP, 0, mmHDP_MISC_CNTL), tmp);
> +
> +	tmp = RREG32(SOC15_REG_OFFSET(HDP, 0, mmHDP_HOST_PATH_CNTL));
> +	WREG32(SOC15_REG_OFFSET(HDP, 0, mmHDP_HOST_PATH_CNTL), tmp);
> +
> +	if (amdgpu_vm_fault_stop == AMDGPU_VM_FAULT_STOP_ALWAYS)
> +		value = false;
> +	else
> +		value = true;
> +
> +	gfxhub_v1_0_set_fault_enable_default(adev, value);
> +	mmhub_v1_0_set_fault_enable_default(adev, value);
> +
> +	gmc_v9_0_gart_flush_gpu_tlb(adev, 0);
> +
> +	DRM_INFO("PCIE GART of %uM enabled (table at 0x%016llX).\n",
> +		 (unsigned)(adev->mc.gtt_size >> 20),
> +		 (unsigned long long)adev->gart.table_addr);
> +	adev->gart.ready = true;
> +	return 0;
> +}
> +
> +static int gmc_v9_0_hw_init(void *handle)
> +{
> +	int r;
> +	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
> +
> +	/* The sequence of these two function calls matters.*/
> +	gmc_v9_0_init_golden_registers(adev);
> +
> +	r = gmc_v9_0_gart_enable(adev);
> +
> +	return r;
> +}
> +
> +/**
> + * gmc_v9_0_gart_disable - gart disable
> + *
> + * @adev: amdgpu_device pointer
> + *
> + * This disables all VM page tables.
> + */
> +static void gmc_v9_0_gart_disable(struct amdgpu_device *adev)
> +{
> +	gfxhub_v1_0_gart_disable(adev);
> +	mmhub_v1_0_gart_disable(adev);
> +	amdgpu_gart_table_vram_unpin(adev);
> +}
> +
> +static int gmc_v9_0_hw_fini(void *handle)
> +{
> +	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
> +
> +	amdgpu_irq_put(adev, &adev->mc.vm_fault, 0);
> +	gmc_v9_0_gart_disable(adev);
> +
> +	return 0;
> +}
> +
> +static int gmc_v9_0_suspend(void *handle)
> +{
> +	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
> +
> +	if (adev->vm_manager.enabled) {
> +		gmc_v9_0_vm_fini(adev);
> +		adev->vm_manager.enabled = false;
> +	}
> +	gmc_v9_0_hw_fini(adev);
> +
> +	return 0;
> +}
> +
> +static int gmc_v9_0_resume(void *handle)
> +{
> +	int r;
> +	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
> +
> +	r = gmc_v9_0_hw_init(adev);
> +	if (r)
> +		return r;
> +
> +	if (!adev->vm_manager.enabled) {
> +		r = gmc_v9_0_vm_init(adev);
> +		if (r) {
> +			dev_err(adev->dev,
> +				"vm manager initialization failed (%d).\n", r);
> +			return r;
> +		}
> +		adev->vm_manager.enabled = true;
> +	}
> +
> +	return r;
> +}
> +
> +static bool gmc_v9_0_is_idle(void *handle)
> +{
> +	/* MC is always ready in GMC v9. */
> +	return true;
> +}
> +
> +static int gmc_v9_0_wait_for_idle(void *handle)
> +{
> +	/* There is no need to wait for MC idle in GMC v9. */
> +	return 0;
> +}
> +
> +static int gmc_v9_0_soft_reset(void *handle)
> +{
> +	/* XXX for emulation. */
> +	return 0;
> +}
> +
> +static int gmc_v9_0_set_clockgating_state(void *handle,
> +					enum amd_clockgating_state state)
> +{
> +	return 0;
> +}
> +
> +static int gmc_v9_0_set_powergating_state(void *handle,
> +					enum amd_powergating_state state)
> +{
> +	return 0;
> +}
> +
> +const struct amd_ip_funcs gmc_v9_0_ip_funcs = {
> +	.name = "gmc_v9_0",
> +	.early_init = gmc_v9_0_early_init,
> +	.late_init = gmc_v9_0_late_init,
> +	.sw_init = gmc_v9_0_sw_init,
> +	.sw_fini = gmc_v9_0_sw_fini,
> +	.hw_init = gmc_v9_0_hw_init,
> +	.hw_fini = gmc_v9_0_hw_fini,
> +	.suspend = gmc_v9_0_suspend,
> +	.resume = gmc_v9_0_resume,
> +	.is_idle = gmc_v9_0_is_idle,
> +	.wait_for_idle = gmc_v9_0_wait_for_idle,
> +	.soft_reset = gmc_v9_0_soft_reset,
> +	.set_clockgating_state = gmc_v9_0_set_clockgating_state,
> +	.set_powergating_state = gmc_v9_0_set_powergating_state,
> +};
> +
> +const struct amdgpu_ip_block_version gmc_v9_0_ip_block =
> +{
> +	.type = AMD_IP_BLOCK_TYPE_GMC,
> +	.major = 9,
> +	.minor = 0,
> +	.rev = 0,
> +	.funcs = &gmc_v9_0_ip_funcs,
> +};
> diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.h b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.h
> new file mode 100644
> index 0000000..b030ca5
> --- /dev/null
> +++ b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.h
> @@ -0,0 +1,30 @@
> +/*
> + * Copyright 2016 Advanced Micro Devices, Inc.
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a
> + * copy of this software and associated documentation files (the "Software"),
> + * to deal in the Software without restriction, including without limitation
> + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
> + * and/or sell copies of the Software, and to permit persons to whom the
> + * Software is furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice shall be included in
> + * all copies or substantial portions of the Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
> + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
> + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
> + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
> + * OTHER DEALINGS IN THE SOFTWARE.
> + *
> + */
> +
> +#ifndef __GMC_V9_0_H__
> +#define __GMC_V9_0_H__
> +
> +extern const struct amd_ip_funcs gmc_v9_0_ip_funcs;
> +extern const struct amdgpu_ip_block_version gmc_v9_0_ip_block;
> +
> +#endif
> diff --git a/drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c b/drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c
> new file mode 100644
> index 0000000..b1e0e6b
> --- /dev/null
> +++ b/drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c
> @@ -0,0 +1,585 @@
> +/*
> + * Copyright 2016 Advanced Micro Devices, Inc.
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a
> + * copy of this software and associated documentation files (the "Software"),
> + * to deal in the Software without restriction, including without limitation
> + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
> + * and/or sell copies of the Software, and to permit persons to whom the
> + * Software is furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice shall be included in
> + * all copies or substantial portions of the Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
> + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
> + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
> + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
> + * OTHER DEALINGS IN THE SOFTWARE.
> + *
> + */
> +#include "amdgpu.h"
> +#include "mmhub_v1_0.h"
> +
> +#include "vega10/soc15ip.h"
> +#include "vega10/MMHUB/mmhub_1_0_offset.h"
> +#include "vega10/MMHUB/mmhub_1_0_sh_mask.h"
> +#include "vega10/MMHUB/mmhub_1_0_default.h"
> +#include "vega10/ATHUB/athub_1_0_offset.h"
> +#include "vega10/ATHUB/athub_1_0_sh_mask.h"
> +#include "vega10/ATHUB/athub_1_0_default.h"
> +#include "vega10/vega10_enum.h"
> +
> +#include "soc15_common.h"
> +
> +u64 mmhub_v1_0_get_fb_location(struct amdgpu_device *adev)
> +{
> +	u64 base = RREG32(SOC15_REG_OFFSET(MMHUB, 0, mmMC_VM_FB_LOCATION_BASE));
> +
> +	base &= MC_VM_FB_LOCATION_BASE__FB_BASE_MASK;
> +	base <<= 24;
> +
> +	return base;
> +}
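For context on the mask-and-shift in mmhub_v1_0_get_fb_location(): the FB_BASE field holds the framebuffer base in 16 MB granularity, so shifting the extracted field left by 24 yields the byte address. A sketch with a made-up field mask (the real one lives in mmhub_1_0_sh_mask.h):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative field mask; not the real MC_VM_FB_LOCATION_BASE layout. */
#define FB_BASE_MASK 0x0000FFFFu

/* FB_BASE is stored in 16 MB units, so the byte address is field << 24. */
static uint64_t fb_location(uint32_t reg)
{
	uint64_t base = reg & FB_BASE_MASK;

	return base << 24;
}
```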
> +
> +int mmhub_v1_0_gart_enable(struct amdgpu_device *adev)
> +{
> +	u32 tmp;
> +	u64 value;
> +	uint64_t addr;
> +	u32 i;
> +
> +	/* Program MC. */
> +	/* Update configuration */
> +	DRM_INFO("%s -- in\n", __func__);
> +	WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmMC_VM_SYSTEM_APERTURE_LOW_ADDR),
> +		adev->mc.vram_start >> 18);
> +	WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmMC_VM_SYSTEM_APERTURE_HIGH_ADDR),
> +		adev->mc.vram_end >> 18);
> +	value = adev->vram_scratch.gpu_addr - adev->mc.vram_start +
> +		adev->vm_manager.vram_base_offset;
> +	WREG32(SOC15_REG_OFFSET(MMHUB, 0,
> +				mmMC_VM_SYSTEM_APERTURE_DEFAULT_ADDR_LSB),
> +				(u32)(value >> 12));
> +	WREG32(SOC15_REG_OFFSET(MMHUB, 0,
> +				mmMC_VM_SYSTEM_APERTURE_DEFAULT_ADDR_MSB),
> +				(u32)(value >> 44));
> +
> +	/* Disable AGP. */
> +	WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmMC_VM_AGP_BASE), 0);
> +	WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmMC_VM_AGP_TOP), 0);
> +	WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmMC_VM_AGP_BOT), 0x00FFFFFF);
> +
> +	/* GART Enable. */
> +
> +	/* Setup TLB control */
> +	tmp = RREG32(SOC15_REG_OFFSET(MMHUB, 0, mmMC_VM_MX_L1_TLB_CNTL));
> +	tmp = REG_SET_FIELD(tmp, MC_VM_MX_L1_TLB_CNTL, ENABLE_L1_TLB, 1);
> +	tmp = REG_SET_FIELD(tmp,
> +				MC_VM_MX_L1_TLB_CNTL,
> +				SYSTEM_ACCESS_MODE,
> +				3);
> +	tmp = REG_SET_FIELD(tmp,
> +				MC_VM_MX_L1_TLB_CNTL,
> +				ENABLE_ADVANCED_DRIVER_MODEL,
> +				1);
> +	tmp = REG_SET_FIELD(tmp,
> +				MC_VM_MX_L1_TLB_CNTL,
> +				SYSTEM_APERTURE_UNMAPPED_ACCESS,
> +				0);
> +	tmp = REG_SET_FIELD(tmp,
> +				MC_VM_MX_L1_TLB_CNTL,
> +				ECO_BITS,
> +				0);
> +	tmp = REG_SET_FIELD(tmp,
> +				MC_VM_MX_L1_TLB_CNTL,
> +				MTYPE,
> +				MTYPE_UC); /* XXX for emulation. */
> +	tmp = REG_SET_FIELD(tmp,
> +				MC_VM_MX_L1_TLB_CNTL,
> +				ATC_EN,
> +				1);
> +	WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmMC_VM_MX_L1_TLB_CNTL), tmp);
> +
> +	/* Setup L2 cache */
> +	tmp = RREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_L2_CNTL));
> +	tmp = REG_SET_FIELD(tmp, VM_L2_CNTL, ENABLE_L2_CACHE, 1);
> +	tmp = REG_SET_FIELD(tmp,
> +				VM_L2_CNTL,
> +				ENABLE_L2_FRAGMENT_PROCESSING,
> +				0);
> +	tmp = REG_SET_FIELD(tmp,
> +				VM_L2_CNTL,
> +				L2_PDE0_CACHE_TAG_GENERATION_MODE,
> +				0); /* XXX for emulation; refer to closed source code. */
> +	tmp = REG_SET_FIELD(tmp, VM_L2_CNTL, PDE_FAULT_CLASSIFICATION, 1);
> +	tmp = REG_SET_FIELD(tmp,
> +				VM_L2_CNTL,
> +				CONTEXT1_IDENTITY_ACCESS_MODE,
> +				1);
> +	tmp = REG_SET_FIELD(tmp,
> +				VM_L2_CNTL,
> +				IDENTITY_MODE_FRAGMENT_SIZE,
> +				0);
> +	WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_L2_CNTL), tmp);
> +
> +	tmp = RREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_L2_CNTL2));
> +	tmp = REG_SET_FIELD(tmp, VM_L2_CNTL2, INVALIDATE_ALL_L1_TLBS, 1);
> +	tmp = REG_SET_FIELD(tmp, VM_L2_CNTL2, INVALIDATE_L2_CACHE, 1);
> +	WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_L2_CNTL2), tmp);
> +
> +	tmp = mmVM_L2_CNTL3_DEFAULT;
> +	WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_L2_CNTL3), tmp);
> +
> +	tmp = RREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_L2_CNTL4));
> +	tmp = REG_SET_FIELD(tmp,
> +			    VM_L2_CNTL4,
> +			    VMC_TAP_PDE_REQUEST_PHYSICAL,
> +			    0);
> +	tmp = REG_SET_FIELD(tmp,
> +			    VM_L2_CNTL4,
> +			    VMC_TAP_PTE_REQUEST_PHYSICAL,
> +			    0);
> +	WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_L2_CNTL4), tmp);
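The REG_SET_FIELD() calls in the TLB and L2 setup above each update one named field inside a register value via per-field MASK/SHIFT constants (the real macro token-pastes reg##__##field##_MASK from the sh_mask headers). A self-contained approximation of what it expands to, using a hypothetical 2-bit field at bits [4:3]:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical layout: a 2-bit SYSTEM_ACCESS_MODE field at bits [4:3]. */
#define TLB_CNTL__SYSTEM_ACCESS_MODE_MASK  0x00000018u
#define TLB_CNTL__SYSTEM_ACCESS_MODE_SHIFT 3

/* Clear the field, then insert the new value at the field's position. */
static uint32_t set_field(uint32_t reg, uint32_t mask,
			  uint32_t shift, uint32_t val)
{
	return (reg & ~mask) | ((val << shift) & mask);
}
```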
> +
> +	/* setup context0 */
> +	WREG32(SOC15_REG_OFFSET(MMHUB, 0,
> +				mmVM_CONTEXT0_PAGE_TABLE_START_ADDR_LO32),
> +		(u32)(adev->mc.gtt_start >> 12));
> +	WREG32(SOC15_REG_OFFSET(MMHUB, 0,
> +				mmVM_CONTEXT0_PAGE_TABLE_START_ADDR_HI32),
> +		(u32)(adev->mc.gtt_start >> 44));
> +
> +	WREG32(SOC15_REG_OFFSET(MMHUB, 0,
> +				mmVM_CONTEXT0_PAGE_TABLE_END_ADDR_LO32),
> +		(u32)(adev->mc.gtt_end >> 12));
> +	WREG32(SOC15_REG_OFFSET(MMHUB, 0,
> +				mmVM_CONTEXT0_PAGE_TABLE_END_ADDR_HI32),
> +		(u32)(adev->mc.gtt_end >> 44));
> +
> +	BUG_ON(adev->gart.table_addr & (~0x0000FFFFFFFFF000ULL));
> +	value = adev->gart.table_addr - adev->mc.vram_start +
> +		adev->vm_manager.vram_base_offset;
> +	value &= 0x0000FFFFFFFFF000ULL;
> +	value |= 0x1; /* valid bit */
> +
> +	WREG32(SOC15_REG_OFFSET(MMHUB, 0,
> +				mmVM_CONTEXT0_PAGE_TABLE_BASE_ADDR_LO32),
> +		(u32)value);
> +	WREG32(SOC15_REG_OFFSET(MMHUB, 0,
> +				mmVM_CONTEXT0_PAGE_TABLE_BASE_ADDR_HI32),
> +		(u32)(value >> 32));
> +
> +	WREG32(SOC15_REG_OFFSET(MMHUB, 0,
> +				mmVM_L2_PROTECTION_FAULT_DEFAULT_ADDR_LO32),
> +		(u32)(adev->dummy_page.addr >> 12));
> +	WREG32(SOC15_REG_OFFSET(MMHUB, 0,
> +				mmVM_L2_PROTECTION_FAULT_DEFAULT_ADDR_HI32),
> +		(u32)(adev->dummy_page.addr >> 44));
> +
> +	tmp = RREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_L2_PROTECTION_FAULT_CNTL2));
> +	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL2,
> +			    ACTIVE_PAGE_MIGRATION_PTE_READ_RETRY,
> +			    1);
> +	WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_L2_PROTECTION_FAULT_CNTL2), tmp);
> +
> +	addr = SOC15_REG_OFFSET(MMHUB, 0, mmVM_CONTEXT0_CNTL);
> +	tmp = RREG32(addr);
> +
> +	tmp = REG_SET_FIELD(tmp, VM_CONTEXT0_CNTL, ENABLE_CONTEXT, 1);
> +	tmp = REG_SET_FIELD(tmp, VM_CONTEXT0_CNTL, PAGE_TABLE_DEPTH, 0);
> +	WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_CONTEXT0_CNTL), tmp);
> +
> +	tmp = RREG32(addr);
> +
> +	/* Disable identity aperture.*/
> +	WREG32(SOC15_REG_OFFSET(MMHUB, 0,
> +		mmVM_L2_CONTEXT1_IDENTITY_APERTURE_LOW_ADDR_LO32), 0xFFFFFFFF);
> +	WREG32(SOC15_REG_OFFSET(MMHUB, 0,
> +		mmVM_L2_CONTEXT1_IDENTITY_APERTURE_LOW_ADDR_HI32), 0x0000000F);
> +
> +	WREG32(SOC15_REG_OFFSET(MMHUB, 0,
> +		mmVM_L2_CONTEXT1_IDENTITY_APERTURE_HIGH_ADDR_LO32), 0);
> +	WREG32(SOC15_REG_OFFSET(MMHUB, 0,
> +		mmVM_L2_CONTEXT1_IDENTITY_APERTURE_HIGH_ADDR_HI32), 0);
> +
> +	WREG32(SOC15_REG_OFFSET(MMHUB, 0,
> +		mmVM_L2_CONTEXT_IDENTITY_PHYSICAL_OFFSET_LO32), 0);
> +	WREG32(SOC15_REG_OFFSET(MMHUB, 0,
> +		mmVM_L2_CONTEXT_IDENTITY_PHYSICAL_OFFSET_HI32), 0);
> +
> +	for (i = 0; i <= 14; i++) {
> +		tmp = RREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_CONTEXT1_CNTL)
> +				+ i);
> +		tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
> +				ENABLE_CONTEXT, 1);
> +		tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
> +				PAGE_TABLE_DEPTH, 1);
> +		tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
> +				RANGE_PROTECTION_FAULT_ENABLE_DEFAULT, 1);
> +		tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
> +				DUMMY_PAGE_PROTECTION_FAULT_ENABLE_DEFAULT, 1);
> +		tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
> +				PDE0_PROTECTION_FAULT_ENABLE_DEFAULT, 1);
> +		tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
> +				VALID_PROTECTION_FAULT_ENABLE_DEFAULT, 1);
> +		tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
> +				READ_PROTECTION_FAULT_ENABLE_DEFAULT, 1);
> +		tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
> +				WRITE_PROTECTION_FAULT_ENABLE_DEFAULT, 1);
> +		tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
> +				EXECUTE_PROTECTION_FAULT_ENABLE_DEFAULT, 1);
> +		tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
> +				PAGE_TABLE_BLOCK_SIZE,
> +				amdgpu_vm_block_size - 9);
> +		WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_CONTEXT1_CNTL) + i, tmp);
> +		WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_CONTEXT1_PAGE_TABLE_START_ADDR_LO32) + i*2, 0);
> +		WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_CONTEXT1_PAGE_TABLE_START_ADDR_HI32) + i*2, 0);
> +		WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_CONTEXT1_PAGE_TABLE_END_ADDR_LO32) + i*2,
> +				adev->vm_manager.max_pfn - 1);
> +		WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_CONTEXT1_PAGE_TABLE_END_ADDR_HI32) + i*2, 0);
> +	}
> +
> +	return 0;
> +}
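A note on the >> 12 / >> 44 splits used when programming the context start/end addresses above: GPU virtual addresses are 48 bits with 4 KB pages, so the page frame number occupies bits [47:12]. The LO32 register takes the PFN truncated to 32 bits (addr >> 12) and the HI32 register takes the remaining high bits (addr >> 44). A sketch of that split:

```c
#include <assert.h>
#include <stdint.h>

/* Split a 48-bit GPU VA into the LO32/HI32 register values used for the
 * VM_CONTEXTn_PAGE_TABLE_{START,END}_ADDR registers. */
static void split_gpu_addr(uint64_t addr, uint32_t *lo, uint32_t *hi)
{
	*lo = (uint32_t)(addr >> 12);	/* PFN bits [43:12] */
	*hi = (uint32_t)(addr >> 44);	/* PFN bits [47:44] */
}
```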
> +
> +void mmhub_v1_0_gart_disable(struct amdgpu_device *adev)
> +{
> +	u32 tmp;
> +	u32 i;
> +
> +	/* Disable all tables */
> +	for (i = 0; i < 16; i++)
> +		WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_CONTEXT0_CNTL) + i, 0);
> +
> +	/* Setup TLB control */
> +	tmp = RREG32(SOC15_REG_OFFSET(MMHUB, 0, mmMC_VM_MX_L1_TLB_CNTL));
> +	tmp = REG_SET_FIELD(tmp, MC_VM_MX_L1_TLB_CNTL, ENABLE_L1_TLB, 0);
> +	tmp = REG_SET_FIELD(tmp,
> +				MC_VM_MX_L1_TLB_CNTL,
> +				ENABLE_ADVANCED_DRIVER_MODEL,
> +				0);
> +	WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmMC_VM_MX_L1_TLB_CNTL), tmp);
> +
> +	/* Setup L2 cache */
> +	tmp = RREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_L2_CNTL));
> +	tmp = REG_SET_FIELD(tmp, VM_L2_CNTL, ENABLE_L2_CACHE, 0);
> +	WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_L2_CNTL), tmp);
> +	WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_L2_CNTL3), 0);
> +}
> +
> +/**
> + * mmhub_v1_0_set_fault_enable_default - update GART/VM fault handling
> + *
> + * @adev: amdgpu_device pointer
> + * @value: true redirects VM faults to the default page
> + */
> +void mmhub_v1_0_set_fault_enable_default(struct amdgpu_device *adev, bool value)
> +{
> +	u32 tmp;
> +
> +	tmp = RREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_L2_PROTECTION_FAULT_CNTL));
> +	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
> +			RANGE_PROTECTION_FAULT_ENABLE_DEFAULT, value);
> +	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
> +			PDE0_PROTECTION_FAULT_ENABLE_DEFAULT, value);
> +	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
> +			PDE1_PROTECTION_FAULT_ENABLE_DEFAULT, value);
> +	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
> +			PDE2_PROTECTION_FAULT_ENABLE_DEFAULT, value);
> +	tmp = REG_SET_FIELD(tmp,
> +			VM_L2_PROTECTION_FAULT_CNTL,
> +			TRANSLATE_FURTHER_PROTECTION_FAULT_ENABLE_DEFAULT,
> +			value);
> +	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
> +			NACK_PROTECTION_FAULT_ENABLE_DEFAULT, value);
> +	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
> +			DUMMY_PAGE_PROTECTION_FAULT_ENABLE_DEFAULT, value);
> +	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
> +			VALID_PROTECTION_FAULT_ENABLE_DEFAULT, value);
> +	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
> +			READ_PROTECTION_FAULT_ENABLE_DEFAULT, value);
> +	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
> +			WRITE_PROTECTION_FAULT_ENABLE_DEFAULT, value);
> +	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
> +			EXECUTE_PROTECTION_FAULT_ENABLE_DEFAULT, value);
> +	WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_L2_PROTECTION_FAULT_CNTL), tmp);
> +}
> +
> +static uint32_t mmhub_v1_0_get_invalidate_req(unsigned int vm_id)
> +{
> +	u32 req = 0;
> +
> +	/* Invalidate using legacy mode on vm_id. */
> +	req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ,
> +			    PER_VMID_INVALIDATE_REQ, 1 << vm_id);
> +	req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ, FLUSH_TYPE, 0);
> +	req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ, INVALIDATE_L2_PTES, 1);
> +	req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ, INVALIDATE_L2_PDE0, 1);
> +	req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ, INVALIDATE_L2_PDE1, 1);
> +	req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ, INVALIDATE_L2_PDE2, 1);
> +	req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ, INVALIDATE_L1_PTES, 1);
> +	req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ,
> +			    CLEAR_PROTECTION_FAULT_STATUS_ADDR,	0);
> +
> +	return req;
> +}
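mmhub_v1_0_get_invalidate_req() builds one 32-bit request word: a per-VMID bitmask (1 << vm_id) in one field plus flag bits selecting which cache levels to invalidate. A sketch of the idea with illustrative bit positions (the real layout comes from VM_INVALIDATE_ENG0_REQ in the sh_mask header):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative bit positions, not the real VM_INVALIDATE_ENG0_REQ layout. */
#define INV_REQ__PER_VMID_SHIFT 0		/* VMID bitmask, bits [15:0] */
#define INV_REQ__L2_PTES	(1u << 16)
#define INV_REQ__L1_PTES	(1u << 17)

/* Request a legacy invalidate of L1/L2 PTEs for a single VMID. */
static uint32_t make_invalidate_req(unsigned int vmid)
{
	uint32_t req = (1u << vmid) << INV_REQ__PER_VMID_SHIFT;

	req |= INV_REQ__L2_PTES | INV_REQ__L1_PTES;
	return req;
}
```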
> +
> +static uint32_t mmhub_v1_0_get_vm_protection_bits(void)
> +{
> +	return (VM_CONTEXT1_CNTL__RANGE_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK |
> +		    VM_CONTEXT1_CNTL__DUMMY_PAGE_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK |
> +		    VM_CONTEXT1_CNTL__PDE0_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK |
> +		    VM_CONTEXT1_CNTL__VALID_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK |
> +		    VM_CONTEXT1_CNTL__READ_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK |
> +		    VM_CONTEXT1_CNTL__WRITE_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK |
> +		    VM_CONTEXT1_CNTL__EXECUTE_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK);
> +}
> +
> +static int mmhub_v1_0_early_init(void *handle)
> +{
> +	return 0;
> +}
> +
> +static int mmhub_v1_0_late_init(void *handle)
> +{
> +	return 0;
> +}
> +
> +static int mmhub_v1_0_sw_init(void *handle)
> +{
> +	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
> +	struct amdgpu_vmhub *hub = &adev->vmhub[AMDGPU_MMHUB];
> +
> +	hub->ctx0_ptb_addr_lo32 =
> +		SOC15_REG_OFFSET(MMHUB, 0,
> +				 mmVM_CONTEXT0_PAGE_TABLE_BASE_ADDR_LO32);
> +	hub->ctx0_ptb_addr_hi32 =
> +		SOC15_REG_OFFSET(MMHUB, 0,
> +				 mmVM_CONTEXT0_PAGE_TABLE_BASE_ADDR_HI32);
> +	hub->vm_inv_eng0_req =
> +		SOC15_REG_OFFSET(MMHUB, 0, mmVM_INVALIDATE_ENG0_REQ);
> +	hub->vm_inv_eng0_ack =
> +		SOC15_REG_OFFSET(MMHUB, 0, mmVM_INVALIDATE_ENG0_ACK);
> +	hub->vm_context0_cntl =
> +		SOC15_REG_OFFSET(MMHUB, 0, mmVM_CONTEXT0_CNTL);
> +	hub->vm_l2_pro_fault_status =
> +		SOC15_REG_OFFSET(MMHUB, 0, mmVM_L2_PROTECTION_FAULT_STATUS);
> +	hub->vm_l2_pro_fault_cntl =
> +		SOC15_REG_OFFSET(MMHUB, 0, mmVM_L2_PROTECTION_FAULT_CNTL);
> +
> +	hub->get_invalidate_req = mmhub_v1_0_get_invalidate_req;
> +	hub->get_vm_protection_bits = mmhub_v1_0_get_vm_protection_bits;
> +
> +	return 0;
> +}
> +
> +static int mmhub_v1_0_sw_fini(void *handle)
> +{
> +	return 0;
> +}
> +
> +static int mmhub_v1_0_hw_init(void *handle)
> +{
> +	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
> +	unsigned i;
> +
> +	for (i = 0; i < 18; ++i) {
> +		WREG32(SOC15_REG_OFFSET(MMHUB, 0,
> +					mmVM_INVALIDATE_ENG0_ADDR_RANGE_LO32) +
> +		       2 * i, 0xffffffff);
> +		WREG32(SOC15_REG_OFFSET(MMHUB, 0,
> +					mmVM_INVALIDATE_ENG0_ADDR_RANGE_HI32) +
> +		       2 * i, 0x1f);
> +	}
> +
> +	return 0;
> +}
> +
> +static int mmhub_v1_0_hw_fini(void *handle)
> +{
> +	return 0;
> +}
> +
> +static int mmhub_v1_0_suspend(void *handle)
> +{
> +	return 0;
> +}
> +
> +static int mmhub_v1_0_resume(void *handle)
> +{
> +	return 0;
> +}
> +
> +static bool mmhub_v1_0_is_idle(void *handle)
> +{
> +	return true;
> +}
> +
> +static int mmhub_v1_0_wait_for_idle(void *handle)
> +{
> +	return 0;
> +}
> +
> +static int mmhub_v1_0_soft_reset(void *handle)
> +{
> +	return 0;
> +}
> +
> +static void mmhub_v1_0_update_medium_grain_clock_gating(struct amdgpu_device *adev,
> +							bool enable)
> +{
> +	uint32_t def, data, def1, data1, def2, data2;
> +
> +	def  = data  = RREG32(SOC15_REG_OFFSET(MMHUB, 0, mmATC_L2_MISC_CG));
> +	def1 = data1 = RREG32(SOC15_REG_OFFSET(MMHUB, 0, mmDAGB0_CNTL_MISC2));
> +	def2 = data2 = RREG32(SOC15_REG_OFFSET(MMHUB, 0, mmDAGB1_CNTL_MISC2));
> +
> +	if (enable && (adev->cg_flags & AMD_CG_SUPPORT_MC_MGCG)) {
> +		data |= ATC_L2_MISC_CG__ENABLE_MASK;
> +
> +		data1 &= ~(DAGB0_CNTL_MISC2__DISABLE_WRREQ_CG_MASK |
> +		           DAGB0_CNTL_MISC2__DISABLE_WRRET_CG_MASK |
> +		           DAGB0_CNTL_MISC2__DISABLE_RDREQ_CG_MASK |
> +		           DAGB0_CNTL_MISC2__DISABLE_RDRET_CG_MASK |
> +		           DAGB0_CNTL_MISC2__DISABLE_TLBWR_CG_MASK |
> +		           DAGB0_CNTL_MISC2__DISABLE_TLBRD_CG_MASK);
> +
> +		data2 &= ~(DAGB1_CNTL_MISC2__DISABLE_WRREQ_CG_MASK |
> +		           DAGB1_CNTL_MISC2__DISABLE_WRRET_CG_MASK |
> +		           DAGB1_CNTL_MISC2__DISABLE_RDREQ_CG_MASK |
> +		           DAGB1_CNTL_MISC2__DISABLE_RDRET_CG_MASK |
> +		           DAGB1_CNTL_MISC2__DISABLE_TLBWR_CG_MASK |
> +		           DAGB1_CNTL_MISC2__DISABLE_TLBRD_CG_MASK);
> +	} else {
> +		data &= ~ATC_L2_MISC_CG__ENABLE_MASK;
> +
> +		data1 |= (DAGB0_CNTL_MISC2__DISABLE_WRREQ_CG_MASK |
> +			  DAGB0_CNTL_MISC2__DISABLE_WRRET_CG_MASK |
> +			  DAGB0_CNTL_MISC2__DISABLE_RDREQ_CG_MASK |
> +			  DAGB0_CNTL_MISC2__DISABLE_RDRET_CG_MASK |
> +			  DAGB0_CNTL_MISC2__DISABLE_TLBWR_CG_MASK |
> +			  DAGB0_CNTL_MISC2__DISABLE_TLBRD_CG_MASK);
> +
> +		data2 |= (DAGB1_CNTL_MISC2__DISABLE_WRREQ_CG_MASK |
> +		          DAGB1_CNTL_MISC2__DISABLE_WRRET_CG_MASK |
> +		          DAGB1_CNTL_MISC2__DISABLE_RDREQ_CG_MASK |
> +		          DAGB1_CNTL_MISC2__DISABLE_RDRET_CG_MASK |
> +		          DAGB1_CNTL_MISC2__DISABLE_TLBWR_CG_MASK |
> +		          DAGB1_CNTL_MISC2__DISABLE_TLBRD_CG_MASK);
> +	}
> +
> +	if (def != data)
> +		WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmATC_L2_MISC_CG), data);
> +
> +	if (def1 != data1)
> +		WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmDAGB0_CNTL_MISC2), data1);
> +
> +	if (def2 != data2)
> +		WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmDAGB1_CNTL_MISC2), data2);
> +}
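The def/data bookkeeping above is the usual clockgating idiom: keep the original value (def), compute the new one (data), and only issue the MMIO write when they differ, since register writes can be expensive and this path runs on every gating-state change. A standalone sketch of the pattern with a write counter to show the skip:

```c
#include <assert.h>
#include <stdint.h>

static uint32_t reg_val;	/* mocked register */
static int write_count;		/* counts actual MMIO writes */

static uint32_t rreg(void) { return reg_val; }
static void wreg(uint32_t v) { reg_val = v; write_count++; }

#define CG_ENABLE_MASK 0x1u	/* illustrative field mask */

/* Only touch the register if the computed value actually changed. */
static void update_cg(int enable)
{
	uint32_t def, data;

	def = data = rreg();
	if (enable)
		data |= CG_ENABLE_MASK;
	else
		data &= ~CG_ENABLE_MASK;

	if (def != data)
		wreg(data);
}
```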
> +
> +static void athub_update_medium_grain_clock_gating(struct amdgpu_device *adev,
> +						   bool enable)
> +{
> +	uint32_t def, data;
> +
> +	def = data = RREG32(SOC15_REG_OFFSET(ATHUB, 0, mmATHUB_MISC_CNTL));
> +
> +	if (enable && (adev->cg_flags & AMD_CG_SUPPORT_MC_MGCG))
> +		data |= ATHUB_MISC_CNTL__CG_ENABLE_MASK;
> +	else
> +		data &= ~ATHUB_MISC_CNTL__CG_ENABLE_MASK;
> +
> +	if (def != data)
> +		WREG32(SOC15_REG_OFFSET(ATHUB, 0, mmATHUB_MISC_CNTL), data);
> +}
> +
> +static void mmhub_v1_0_update_medium_grain_light_sleep(struct amdgpu_device *adev,
> +						       bool enable)
> +{
> +	uint32_t def, data;
> +
> +	def = data = RREG32(SOC15_REG_OFFSET(MMHUB, 0, mmATC_L2_MISC_CG));
> +
> +	if (enable && (adev->cg_flags & AMD_CG_SUPPORT_MC_LS))
> +		data |= ATC_L2_MISC_CG__MEM_LS_ENABLE_MASK;
> +	else
> +		data &= ~ATC_L2_MISC_CG__MEM_LS_ENABLE_MASK;
> +
> +	if (def != data)
> +		WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmATC_L2_MISC_CG), data);
> +}
> +
> +static void athub_update_medium_grain_light_sleep(struct amdgpu_device *adev,
> +						  bool enable)
> +{
> +	uint32_t def, data;
> +
> +	def = data = RREG32(SOC15_REG_OFFSET(ATHUB, 0, mmATHUB_MISC_CNTL));
> +
> +	if (enable && (adev->cg_flags & AMD_CG_SUPPORT_MC_LS) &&
> +	    (adev->cg_flags & AMD_CG_SUPPORT_HDP_LS))
> +		data |= ATHUB_MISC_CNTL__CG_MEM_LS_ENABLE_MASK;
> +	else
> +		data &= ~ATHUB_MISC_CNTL__CG_MEM_LS_ENABLE_MASK;
> +
> +	if (def != data)
> +		WREG32(SOC15_REG_OFFSET(ATHUB, 0, mmATHUB_MISC_CNTL), data);
> +}
> +
> +static int mmhub_v1_0_set_clockgating_state(void *handle,
> +					enum amd_clockgating_state state)
> +{
> +	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
> +
> +	switch (adev->asic_type) {
> +	case CHIP_VEGA10:
> +		mmhub_v1_0_update_medium_grain_clock_gating(adev,
> +				state == AMD_CG_STATE_GATE);
> +		athub_update_medium_grain_clock_gating(adev,
> +				state == AMD_CG_STATE_GATE);
> +		mmhub_v1_0_update_medium_grain_light_sleep(adev,
> +				state == AMD_CG_STATE_GATE);
> +		athub_update_medium_grain_light_sleep(adev,
> +				state == AMD_CG_STATE_GATE);
> +		break;
> +	default:
> +		break;
> +	}
> +
> +	return 0;
> +}
> +
> +static int mmhub_v1_0_set_powergating_state(void *handle,
> +					enum amd_powergating_state state)
> +{
> +	return 0;
> +}
> +
> +const struct amd_ip_funcs mmhub_v1_0_ip_funcs = {
> +	.name = "mmhub_v1_0",
> +	.early_init = mmhub_v1_0_early_init,
> +	.late_init = mmhub_v1_0_late_init,
> +	.sw_init = mmhub_v1_0_sw_init,
> +	.sw_fini = mmhub_v1_0_sw_fini,
> +	.hw_init = mmhub_v1_0_hw_init,
> +	.hw_fini = mmhub_v1_0_hw_fini,
> +	.suspend = mmhub_v1_0_suspend,
> +	.resume = mmhub_v1_0_resume,
> +	.is_idle = mmhub_v1_0_is_idle,
> +	.wait_for_idle = mmhub_v1_0_wait_for_idle,
> +	.soft_reset = mmhub_v1_0_soft_reset,
> +	.set_clockgating_state = mmhub_v1_0_set_clockgating_state,
> +	.set_powergating_state = mmhub_v1_0_set_powergating_state,
> +};
> +
> +const struct amdgpu_ip_block_version mmhub_v1_0_ip_block =
> +{
> +	.type = AMD_IP_BLOCK_TYPE_MMHUB,
> +	.major = 1,
> +	.minor = 0,
> +	.rev = 0,
> +	.funcs = &mmhub_v1_0_ip_funcs,
> +};
> diff --git a/drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.h b/drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.h
> new file mode 100644
> index 0000000..aadedf9
> --- /dev/null
> +++ b/drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.h
> @@ -0,0 +1,35 @@
> +/*
> + * Copyright 2016 Advanced Micro Devices, Inc.
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a
> + * copy of this software and associated documentation files (the "Software"),
> + * to deal in the Software without restriction, including without limitation
> + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
> + * and/or sell copies of the Software, and to permit persons to whom the
> + * Software is furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice shall be included in
> + * all copies or substantial portions of the Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
> + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
> + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
> + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
> + * OTHER DEALINGS IN THE SOFTWARE.
> + *
> + */
> +#ifndef __MMHUB_V1_0_H__
> +#define __MMHUB_V1_0_H__
> +
> +u64 mmhub_v1_0_get_fb_location(struct amdgpu_device *adev);
> +int mmhub_v1_0_gart_enable(struct amdgpu_device *adev);
> +void mmhub_v1_0_gart_disable(struct amdgpu_device *adev);
> +void mmhub_v1_0_set_fault_enable_default(struct amdgpu_device *adev,
> +					 bool value);
> +
> +extern const struct amd_ip_funcs mmhub_v1_0_ip_funcs;
> +extern const struct amdgpu_ip_block_version mmhub_v1_0_ip_block;
> +
> +#endif
> diff --git a/drivers/gpu/drm/amd/include/amd_shared.h b/drivers/gpu/drm/amd/include/amd_shared.h
> index 717d6be..a94420d 100644
> --- a/drivers/gpu/drm/amd/include/amd_shared.h
> +++ b/drivers/gpu/drm/amd/include/amd_shared.h
> @@ -74,6 +74,8 @@ enum amd_ip_block_type {
>   	AMD_IP_BLOCK_TYPE_UVD,
>   	AMD_IP_BLOCK_TYPE_VCE,
>   	AMD_IP_BLOCK_TYPE_ACP,
> +	AMD_IP_BLOCK_TYPE_GFXHUB,
> +	AMD_IP_BLOCK_TYPE_MMHUB
>   };
>   
>   enum amd_clockgating_state {


_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


* Re: [PATCH 000/100] Add Vega10 Support
       [not found]     ` <50d03274-5a6e-fb77-9741-b6700a9949bd-ANTagKRnAhcb1SvskN2V4Q@public.gmane.org>
@ 2017-03-21 11:51       ` Christian König
       [not found]         ` <b717602f-7573-6c20-ca68-491e3fe847c0-ANTagKRnAhcb1SvskN2V4Q@public.gmane.org>
  2017-03-21 22:00       ` Alex Deucher
  1 sibling, 1 reply; 101+ messages in thread
From: Christian König @ 2017-03-21 11:51 UTC (permalink / raw)
  To: Alex Deucher, amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Alex Deucher

Patches #48, #49, #52-#63, #65-#68, #70-#72, #74, #76, #77, #79, #81-#84 
are Acked-by: Christian König <christian.koenig@amd.com>.

Patches #50, #64, #69, #75, #78, #80, #85, #89-#91, #100 didn't make it
to the list.

Patch #73 probably needs to be moved to the end of the set or at least 
after the wptr_poll fix.

Apart from those everything should already have my reviewed-by or acked-by.

What worries me a bit are the ones that didn't make it to the list. I'm
going to check my spam folder, but that is a bit disturbing.

Regards,
Christian.

On 21.03.2017 at 08:42, Christian König wrote:
> Patches #1 - #5, #21, #23, #25, #27, #28, #31, #35-#38, #40, #41, #45 
> are Acked-by: Christian König.
>
> Patches #6-#20, #22, #24, #32, #39, #42 didn't make it to the list
> (probably too large).
>
> Patches #43, #44 are Reviewed-by: Christian König 
> <christian.koenig@amd.com>.
>
> Patch #26: That stuff actually belongs into vega10 specifc code, 
> doesn't it?
>
> Patch #29: We shouldn't use typedefs for enums.
>
> Going to take a look at the rest later today,
> Christian.
>
> On 20.03.2017 at 21:29, Alex Deucher wrote:
>> This patch set adds support for vega10. Major changes and supported
>> features:
>> - new vbios interface
>> - Lots of new hw IPs
>> - Support for video decode using UVD
>> - Support for video encode using VCE
>> - Support for 3D via radeonsi
>> - Power management
>> - Full display support via DC
>> - Support for SR-IOV
>>
>> I did not send out the register headers since they are huge. You can 
>> find them
>> along with all the other patches in this series here:
>> https://cgit.freedesktop.org/~agd5f/linux/log/?h=amd-staging-4.9
>>
>> Please review.
>>
>> Thanks,
>>
>> Alex
>>
>> Alex Deucher (29):
>>    drm/amdgpu: add the new atomfirmware interface header
>>    amdgpu: detect if we are using atomfirm or atombios for vbios (v2)
>>    drm/amdgpu: move atom scratch setup into amdgpu_atombios.c
>>    drm/amdgpu: add basic support for atomfirmware.h (v3)
>>    drm/amdgpu: add soc15ip.h
>>    drm/amdgpu: add vega10_enum.h
>>    drm/amdgpu: Add ATHUB 1.0 register headers
>>    drm/amdgpu: Add the DCE 12.0 register headers
>>    drm/amdgpu: add the GC 9.0 register headers
>>    drm/amdgpu: add the HDP 4.0 register headers
>>    drm/amdgpu: add the MMHUB 1.0 register headers
>>    drm/amdgpu: add MP 9.0 register headers
>>    drm/amdgpu: add NBIF 6.1 register headers
>>    drm/amdgpu: add NBIO 6.1 register headers
>>    drm/amdgpu: add OSSSYS 4.0 register headers
>>    drm/amdgpu: add SDMA 4.0 register headers
>>    drm/amdgpu: add SMUIO 9.0 register headers
>>    drm/amdgpu: add THM 9.0 register headers
>>    drm/amdgpu: add the UVD 7.0 register headers
>>    drm/amdgpu: add the VCE 4.0 register headers
>>    drm/amdgpu: add gfx9 clearstate header
>>    drm/amdgpu: add SDMA 4.0 packet header
>>    drm/amdgpu: use atomfirmware interfaces for scratch reg save/restore
>>    drm/amdgpu: update IH IV ring entry for soc-15
>>    drm/amdgpu: add PTE defines for MTYPE
>>    drm/amdgpu: add NGG parameters
>>    drm/amdgpu: Add asic family for vega10
>>    drm/amdgpu: add tiling flags for GFX9
>>    drm/amdgpu: gart fixes for vega10
>>
>> Alex Xie (4):
>>    drm/amdgpu: Add MTYPE flags to GPU VM IOCTL interface
>>    drm/amdgpu: handle PTE EXEC in amdgpu_vm_bo_split_mapping
>>    drm/amdgpu: handle PTE MTYPE in amdgpu_vm_bo_split_mapping
>>    drm/amdgpu: Add GMC 9.0 support
>>
>> Andrey Grodzovsky (1):
>>    drm/amdgpu: gb_addr_config struct
>>
>> Charlene Liu (1):
>>    drm/amd/display: need to handle DCE_Info table ver4.2
>>
>> Christian König (1):
>>    drm/amdgpu: add IV trace point
>>
>> Eric Huang (7):
>>    drm/amd/powerplay: add smu9 header files for Vega10
>>    drm/amd/powerplay: add new Vega10's ppsmc header file
>>    drm/amdgpu: add new atomfirmware based helpers for powerplay
>>    drm/amd/powerplay: add some new structures for Vega10
>>    drm/amd: add structures for display/powerplay interface
>>    drm/amd/powerplay: add some display/powerplay interfaces
>>    drm/amd/powerplay: add Vega10 powerplay support
>>
>> Felix Kuehling (1):
>>    drm/amd: Add MQD structs for GFX V9
>>
>> Harry Wentland (6):
>>    drm/amd/display: Add DCE12 bios parser support
>>    drm/amd/display: Add DCE12 gpio support
>>    drm/amd/display: Add DCE12 i2c/aux support
>>    drm/amd/display: Add DCE12 irq support
>>    drm/amd/display: Add DCE12 core support
>>    drm/amd/display: Enable DCE12 support
>>
>> Huang Rui (6):
>>    drm/amdgpu: use new flag to handle different firmware loading method
>>    drm/amdgpu: rework common ucode handling for vega10
>>    drm/amdgpu: add psp firmware header info
>>    drm/amdgpu: add PSP driver for vega10
>>    drm/amdgpu: add psp firmware info into info query and debugfs
>>    drm/amdgpu: add SMC firmware into global ucode list for psp loading
>>
>> Jordan Lazare (1):
>>    drm/amd/display: Less log spam
>>
>> Junwei Zhang (2):
>>    drm/amdgpu: add NBIO 6.1 driver
>>    drm/amdgpu: add Vega10 Device IDs
>>
>> Ken Wang (8):
>>    drm/amdgpu: add common soc15 headers
>>    drm/amdgpu: add vega10 chip name
>>    drm/amdgpu: add 64bit doorbell assignments
>>    drm/amdgpu: add SDMA v4.0 implementation
>>    drm/amdgpu: implement GFX 9.0 support
>>    drm/amdgpu: add vega10 interrupt handler
>>    drm/amdgpu: soc15 enable (v2)
>>    drm/amdgpu: Set the IP blocks for vega10
>>
>> Leo Liu (2):
>>    drm/amdgpu: add initial uvd 7.0 support for vega10
>>    drm/amdgpu: add initial vce 4.0 support for vega10
>>
>> Marek Olšák (1):
>>    drm/amdgpu: don't validate TILE_SPLIT on GFX9
>>
>> Monk Liu (5):
>>    drm/amdgpu/gfx9: programing wptr_poll_addr register
>>    drm/amdgpu:impl gfx9 cond_exec
>>    drm/amdgpu:bypass RLC init for SRIOV
>>    drm/amdgpu/sdma4:re-org SDMA initial steps for sriov
>>    drm/amdgpu/vega10:fix DOORBELL64 scheme
>>
>> Rex Zhu (2):
>>    drm/amdgpu: get display info from DC when DC enabled.
>>    drm/amd/powerplay: add global PowerPlay mutex.
>>
>> Xiangliang Yu (22):
>>    drm/amdgpu: impl sriov detection for vega10
>>    drm/amdgpu: add kiq ring for gfx9
>>    drm/amdgpu/gfx9: fullfill kiq funcs
>>    drm/amdgpu/gfx9: fullfill kiq irq funcs
>>    drm/amdgpu: init kiq and kcq for vega10
>>    drm/amdgpu/gfx9: impl gfx9 meta data emit
>>    drm/amdgpu/soc15: bypass PSP for VF
>>    drm/amdgpu/gmc9: no need use kiq in vega10 tlb flush
>>    drm/amdgpu/dce_virtual: bypass DPM for vf
>>    drm/amdgpu/virt: impl mailbox for ai
>>    drm/amdgpu/soc15: init virt ops for vf
>>    drm/amdgpu/soc15: enable virtual dce for vf
>>    drm/amdgpu: Don't touch PG&CG for SRIOV MM
>>    drm/amdgpu/vce4: enable doorbell for SRIOV
>>    drm/amdgpu: disable uvd for sriov
>>    drm/amdgpu/soc15: bypass pp block for vf
>>    drm/amdgpu/virt: add structure for MM table
>>    drm/amdgpu/vce4: alloc mm table for MM sriov
>>    drm/amdgpu/vce4: Ignore vce ring/ib test temporarily
>>    drm/amdgpu: add mmsch structures
>>    drm/amdgpu/vce4: impl vce & mmsch sriov start
>>    drm/amdgpu/gfx9: correct wptr pointer value
>>
>> ken (1):
>>    drm/amdgpu: add clinetid definition for vega10
>>
>>   drivers/gpu/drm/amd/amdgpu/Makefile                |     27 +-
>>   drivers/gpu/drm/amd/amdgpu/amdgpu.h                |    172 +-
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.c       |     28 +
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.h       |      3 +
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_atomfirmware.c   |    112 +
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_atomfirmware.h   |     33 +
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_bios.c           |     30 +-
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_cgs.c            |     73 +-
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_device.c         |     73 +-
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c            |     36 +-
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_gart.c           |      3 +-
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c            |      2 +-
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_ih.h             |     47 +-
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c            |      3 +
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c            |     32 +
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_object.c         |      5 +-
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_powerplay.c      |      5 +-
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c            |    473 +
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_psp.h            |    127 +
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_trace.h          |     37 +
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c          |    113 +-
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h          |     17 +
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c            |     58 +-
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c            |     21 +
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_virt.h           |      7 +
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c             |     34 +-
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h             |      4 +
>>   drivers/gpu/drm/amd/amdgpu/atom.c                  |     26 -
>>   drivers/gpu/drm/amd/amdgpu/atom.h                  |      1 -
>>   drivers/gpu/drm/amd/amdgpu/cik.c                   |      2 +
>>   drivers/gpu/drm/amd/amdgpu/clearstate_gfx9.h       |    941 +
>>   drivers/gpu/drm/amd/amdgpu/dce_virtual.c           |      3 +
>>   drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c              |      6 +-
>>   drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c              |   4075 +
>>   drivers/gpu/drm/amd/amdgpu/gfx_v9_0.h              |     35 +
>>   drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c           |    447 +
>>   drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.h           |     35 +
>>   drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c              |    826 +
>>   drivers/gpu/drm/amd/amdgpu/gmc_v9_0.h              |     30 +
>>   drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c            |    585 +
>>   drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.h            |     35 +
>>   drivers/gpu/drm/amd/amdgpu/mmsch_v1_0.h            |     87 +
>>   drivers/gpu/drm/amd/amdgpu/mxgpu_ai.c              |    207 +
>>   drivers/gpu/drm/amd/amdgpu/mxgpu_ai.h              |     47 +
>>   drivers/gpu/drm/amd/amdgpu/nbio_v6_1.c             |    251 +
>>   drivers/gpu/drm/amd/amdgpu/nbio_v6_1.h             |     53 +
>>   drivers/gpu/drm/amd/amdgpu/psp_gfx_if.h            |    269 +
>>   drivers/gpu/drm/amd/amdgpu/psp_v3_1.c              |    507 +
>>   drivers/gpu/drm/amd/amdgpu/psp_v3_1.h              |     50 +
>>   drivers/gpu/drm/amd/amdgpu/sdma_v2_4.c             |      4 +-
>>   drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c             |      4 +-
>>   drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c             |   1573 +
>>   drivers/gpu/drm/amd/amdgpu/sdma_v4_0.h             |     30 +
>>   drivers/gpu/drm/amd/amdgpu/soc15.c                 |    825 +
>>   drivers/gpu/drm/amd/amdgpu/soc15.h                 |     35 +
>>   drivers/gpu/drm/amd/amdgpu/soc15_common.h          |     57 +
>>   drivers/gpu/drm/amd/amdgpu/soc15d.h                |    287 +
>>   drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c              |   1543 +
>>   drivers/gpu/drm/amd/amdgpu/uvd_v7_0.h              |     29 +
>>   drivers/gpu/drm/amd/amdgpu/vce_v4_0.c              |   1141 +
>>   drivers/gpu/drm/amd/amdgpu/vce_v4_0.h              |     29 +
>>   drivers/gpu/drm/amd/amdgpu/vega10_ih.c             |    424 +
>>   drivers/gpu/drm/amd/amdgpu/vega10_ih.h             |     30 +
>>   drivers/gpu/drm/amd/amdgpu/vega10_sdma_pkt_open.h  |   3335 +
>>   drivers/gpu/drm/amd/amdgpu/vi.c                    |      4 +-
>>   drivers/gpu/drm/amd/display/Kconfig                |      7 +
>>   drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c  |    145 +-
>>   .../drm/amd/display/amdgpu_dm/amdgpu_dm_services.c |     10 +
>>   .../drm/amd/display/amdgpu_dm/amdgpu_dm_types.c    |     20 +-
>>   drivers/gpu/drm/amd/display/dc/Makefile            |      4 +
>>   drivers/gpu/drm/amd/display/dc/bios/Makefile       |      8 +
>>   drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c |   2162 +
>>   drivers/gpu/drm/amd/display/dc/bios/bios_parser2.h |     33 +
>>   .../amd/display/dc/bios/bios_parser_interface.c    |     14 +
>>   .../display/dc/bios/bios_parser_types_internal2.h  |     74 +
>>   .../gpu/drm/amd/display/dc/bios/command_table2.c   |    813 +
>>   .../gpu/drm/amd/display/dc/bios/command_table2.h   |    105 +
>>   .../amd/display/dc/bios/command_table_helper2.c    |    260 +
>>   .../amd/display/dc/bios/command_table_helper2.h    |     82 +
>>   .../dc/bios/dce112/command_table_helper2_dce112.c  |    418 +
>>   .../dc/bios/dce112/command_table_helper2_dce112.h  |     34 +
>>   drivers/gpu/drm/amd/display/dc/calcs/dce_calcs.c   |    117 +
>>   drivers/gpu/drm/amd/display/dc/core/dc.c           |     29 +
>>   drivers/gpu/drm/amd/display/dc/core/dc_debug.c     |     11 +
>>   drivers/gpu/drm/amd/display/dc/core/dc_link.c      |     19 +
>>   drivers/gpu/drm/amd/display/dc/core/dc_resource.c  |     14 +
>>   drivers/gpu/drm/amd/display/dc/dc.h                |     27 +
>>   drivers/gpu/drm/amd/display/dc/dc_hw_types.h       |     46 +
>>   .../gpu/drm/amd/display/dc/dce/dce_clock_source.c  |      6 +
>>   drivers/gpu/drm/amd/display/dc/dce/dce_clocks.c    |    149 +
>>   drivers/gpu/drm/amd/display/dc/dce/dce_clocks.h    |     20 +
>>   drivers/gpu/drm/amd/display/dc/dce/dce_hwseq.h     |      8 +
>>   .../gpu/drm/amd/display/dc/dce/dce_link_encoder.h  |     14 +
>>   drivers/gpu/drm/amd/display/dc/dce/dce_mem_input.c |     35 +
>>   drivers/gpu/drm/amd/display/dc/dce/dce_mem_input.h |     34 +
>>   drivers/gpu/drm/amd/display/dc/dce/dce_opp.h       |     72 +
>>   .../drm/amd/display/dc/dce/dce_stream_encoder.h    |    100 +
>>   drivers/gpu/drm/amd/display/dc/dce/dce_transform.h |     68 +
>>   .../amd/display/dc/dce110/dce110_hw_sequencer.c    |     53 +-
>>   .../drm/amd/display/dc/dce110/dce110_mem_input.c   |      3 +
>>   .../display/dc/dce110/dce110_timing_generator.h    |      3 +
>>   drivers/gpu/drm/amd/display/dc/dce120/Makefile     |     12 +
>>   .../amd/display/dc/dce120/dce120_hw_sequencer.c    |    197 +
>>   .../amd/display/dc/dce120/dce120_hw_sequencer.h    |     36 +
>>   drivers/gpu/drm/amd/display/dc/dce120/dce120_ipp.c |     58 +
>>   drivers/gpu/drm/amd/display/dc/dce120/dce120_ipp.h |     62 +
>>   .../drm/amd/display/dc/dce120/dce120_ipp_cursor.c  |    202 +
>>   .../drm/amd/display/dc/dce120/dce120_ipp_gamma.c   |    167 +
>>   .../drm/amd/display/dc/dce120/dce120_mem_input.c   |    340 +
>>   .../drm/amd/display/dc/dce120/dce120_mem_input.h   |     37 +
>>   .../drm/amd/display/dc/dce120/dce120_resource.c    |   1099 +
>>   .../drm/amd/display/dc/dce120/dce120_resource.h    |     39 +
>>   .../display/dc/dce120/dce120_timing_generator.c    |   1109 +
>>   .../display/dc/dce120/dce120_timing_generator.h    |     41 +
>>   .../gpu/drm/amd/display/dc/dce80/dce80_mem_input.c |      3 +
>>   drivers/gpu/drm/amd/display/dc/dm_services.h       |     89 +
>>   drivers/gpu/drm/amd/display/dc/dm_services_types.h |     27 +
>>   drivers/gpu/drm/amd/display/dc/gpio/Makefile       |     11 +
>>   .../amd/display/dc/gpio/dce120/hw_factory_dce120.c |    197 +
>>   .../amd/display/dc/gpio/dce120/hw_factory_dce120.h |     32 +
>>   .../display/dc/gpio/dce120/hw_translate_dce120.c   |    408 +
>>   .../display/dc/gpio/dce120/hw_translate_dce120.h   |     34 +
>>   drivers/gpu/drm/amd/display/dc/gpio/hw_factory.c   |      9 +
>>   drivers/gpu/drm/amd/display/dc/gpio/hw_translate.c |      9 +-
>>   drivers/gpu/drm/amd/display/dc/i2caux/Makefile     |     11 +
>>   .../amd/display/dc/i2caux/dce120/i2caux_dce120.c   |    125 +
>>   .../amd/display/dc/i2caux/dce120/i2caux_dce120.h   |     32 +
>>   drivers/gpu/drm/amd/display/dc/i2caux/i2caux.c     |      8 +
>>   .../gpu/drm/amd/display/dc/inc/bandwidth_calcs.h   |      3 +
>>   .../gpu/drm/amd/display/dc/inc/hw/display_clock.h  |     23 +
>>   drivers/gpu/drm/amd/display/dc/inc/hw/mem_input.h  |      4 +
>>   drivers/gpu/drm/amd/display/dc/irq/Makefile        |     12 +
>>   .../amd/display/dc/irq/dce120/irq_service_dce120.c |    293 +
>>   .../amd/display/dc/irq/dce120/irq_service_dce120.h |     34 +
>>   drivers/gpu/drm/amd/display/dc/irq/irq_service.c   |      3 +
>>   drivers/gpu/drm/amd/display/include/dal_asic_id.h  |      4 +
>>   drivers/gpu/drm/amd/display/include/dal_types.h    |      3 +
>>   drivers/gpu/drm/amd/include/amd_shared.h           |      4 +
>>   .../asic_reg/vega10/ATHUB/athub_1_0_default.h      |    241 +
>>   .../asic_reg/vega10/ATHUB/athub_1_0_offset.h       |    453 +
>>   .../asic_reg/vega10/ATHUB/athub_1_0_sh_mask.h      |   2045 +
>>   .../include/asic_reg/vega10/DC/dce_12_0_default.h  |   9868 ++
>>   .../include/asic_reg/vega10/DC/dce_12_0_offset.h   |  18193 +++
>>   .../include/asic_reg/vega10/DC/dce_12_0_sh_mask.h  |  64636 +++++++++
>>   .../include/asic_reg/vega10/GC/gc_9_0_default.h    |   3873 +
>>   .../amd/include/asic_reg/vega10/GC/gc_9_0_offset.h |   7230 +
>>   .../include/asic_reg/vega10/GC/gc_9_0_sh_mask.h    |  29868 ++++
>>   .../include/asic_reg/vega10/HDP/hdp_4_0_default.h  |    117 +
>>   .../include/asic_reg/vega10/HDP/hdp_4_0_offset.h   |    209 +
>>   .../include/asic_reg/vega10/HDP/hdp_4_0_sh_mask.h  |    601 +
>>   .../asic_reg/vega10/MMHUB/mmhub_1_0_default.h      |   1011 +
>>   .../asic_reg/vega10/MMHUB/mmhub_1_0_offset.h       |   1967 +
>>   .../asic_reg/vega10/MMHUB/mmhub_1_0_sh_mask.h      |  10127 ++
>>   .../include/asic_reg/vega10/MP/mp_9_0_default.h    |    342 +
>>   .../amd/include/asic_reg/vega10/MP/mp_9_0_offset.h |    375 +
>>   .../include/asic_reg/vega10/MP/mp_9_0_sh_mask.h    |   1463 +
>>   .../asic_reg/vega10/NBIF/nbif_6_1_default.h        |   1271 +
>>   .../include/asic_reg/vega10/NBIF/nbif_6_1_offset.h |   1688 +
>>   .../asic_reg/vega10/NBIF/nbif_6_1_sh_mask.h        |  10281 ++
>>   .../asic_reg/vega10/NBIO/nbio_6_1_default.h        |  22340 +++
>>   .../include/asic_reg/vega10/NBIO/nbio_6_1_offset.h |   3649 +
>>   .../asic_reg/vega10/NBIO/nbio_6_1_sh_mask.h        | 133884 ++++++++++++++++++
>>   .../asic_reg/vega10/OSSSYS/osssys_4_0_default.h    |    176 +
>>   .../asic_reg/vega10/OSSSYS/osssys_4_0_offset.h     |    327 +
>>   .../asic_reg/vega10/OSSSYS/osssys_4_0_sh_mask.h    |   1196 +
>>   .../asic_reg/vega10/SDMA0/sdma0_4_0_default.h      |    286 +
>>   .../asic_reg/vega10/SDMA0/sdma0_4_0_offset.h       |    547 +
>>   .../asic_reg/vega10/SDMA0/sdma0_4_0_sh_mask.h      |   1852 +
>>   .../asic_reg/vega10/SDMA1/sdma1_4_0_default.h      |    282 +
>>   .../asic_reg/vega10/SDMA1/sdma1_4_0_offset.h       |    539 +
>>   .../asic_reg/vega10/SDMA1/sdma1_4_0_sh_mask.h      |   1810 +
>>   .../asic_reg/vega10/SMUIO/smuio_9_0_default.h      |    100 +
>>   .../asic_reg/vega10/SMUIO/smuio_9_0_offset.h       |    175 +
>>   .../asic_reg/vega10/SMUIO/smuio_9_0_sh_mask.h      |    258 +
>>   .../include/asic_reg/vega10/THM/thm_9_0_default.h  |    194 +
>>   .../include/asic_reg/vega10/THM/thm_9_0_offset.h   |    363 +
>>   .../include/asic_reg/vega10/THM/thm_9_0_sh_mask.h  |   1314 +
>>   .../include/asic_reg/vega10/UVD/uvd_7_0_default.h  |    127 +
>>   .../include/asic_reg/vega10/UVD/uvd_7_0_offset.h   |    222 +
>>   .../include/asic_reg/vega10/UVD/uvd_7_0_sh_mask.h  |    811 +
>>   .../include/asic_reg/vega10/VCE/vce_4_0_default.h  |    122 +
>>   .../include/asic_reg/vega10/VCE/vce_4_0_offset.h   |    208 +
>>   .../include/asic_reg/vega10/VCE/vce_4_0_sh_mask.h  |    488 +
>>   .../gpu/drm/amd/include/asic_reg/vega10/soc15ip.h  |   1343 +
>>   .../drm/amd/include/asic_reg/vega10/vega10_enum.h  |  22531 +++
>>   drivers/gpu/drm/amd/include/atomfirmware.h         |   2385 +
>>   drivers/gpu/drm/amd/include/atomfirmwareid.h       |     86 +
>>   drivers/gpu/drm/amd/include/displayobject.h        |    249 +
>>   drivers/gpu/drm/amd/include/dm_pp_interface.h      |     83 +
>>   drivers/gpu/drm/amd/include/v9_structs.h           |    743 +
>>   drivers/gpu/drm/amd/powerplay/amd_powerplay.c      |    284 +-
>>   drivers/gpu/drm/amd/powerplay/hwmgr/Makefile       |      6 +-
>>   .../gpu/drm/amd/powerplay/hwmgr/hardwaremanager.c  |     49 +
>>   drivers/gpu/drm/amd/powerplay/hwmgr/hwmgr.c        |      9 +
>>   drivers/gpu/drm/amd/powerplay/hwmgr/hwmgr_ppt.h    |     16 +-
>>   drivers/gpu/drm/amd/powerplay/hwmgr/ppatomfwctrl.c |    396 +
>>   drivers/gpu/drm/amd/powerplay/hwmgr/ppatomfwctrl.h |    140 +
>>   drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c |   4378 +
>>   drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.h |    434 +
>>   drivers/gpu/drm/amd/powerplay/hwmgr/vega10_inc.h   |     44 +
>>   .../gpu/drm/amd/powerplay/hwmgr/vega10_powertune.c |    137 +
>>   .../gpu/drm/amd/powerplay/hwmgr/vega10_powertune.h |     65 +
>>   .../gpu/drm/amd/powerplay/hwmgr/vega10_pptable.h   |    331 +
>>   .../amd/powerplay/hwmgr/vega10_processpptables.c   |   1056 +
>>   .../amd/powerplay/hwmgr/vega10_processpptables.h   |     34 +
>>   .../gpu/drm/amd/powerplay/hwmgr/vega10_thermal.c   |    761 +
>>   .../gpu/drm/amd/powerplay/hwmgr/vega10_thermal.h   |     83 +
>>   drivers/gpu/drm/amd/powerplay/inc/amd_powerplay.h  |     28 +-
>>   .../gpu/drm/amd/powerplay/inc/hardwaremanager.h    |     43 +
>>   drivers/gpu/drm/amd/powerplay/inc/hwmgr.h          |    125 +-
>>   drivers/gpu/drm/amd/powerplay/inc/pp_instance.h    |      1 +
>>   drivers/gpu/drm/amd/powerplay/inc/pp_soc15.h       |     48 +
>>   drivers/gpu/drm/amd/powerplay/inc/smu9.h           |    147 +
>>   drivers/gpu/drm/amd/powerplay/inc/smu9_driver_if.h |    418 +
>>   drivers/gpu/drm/amd/powerplay/inc/smumgr.h         |      3 +
>>   drivers/gpu/drm/amd/powerplay/inc/vega10_ppsmc.h   |    131 +
>>   drivers/gpu/drm/amd/powerplay/smumgr/Makefile      |      2 +-
>>   drivers/gpu/drm/amd/powerplay/smumgr/smumgr.c      |      9 +
>>   .../gpu/drm/amd/powerplay/smumgr/vega10_smumgr.c   |    564 +
>>   .../gpu/drm/amd/powerplay/smumgr/vega10_smumgr.h   |     70 +
>>   include/uapi/drm/amdgpu_drm.h                      |     29 +
>>   221 files changed, 403408 insertions(+), 219 deletions(-)
>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/amdgpu_atomfirmware.c
>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/amdgpu_atomfirmware.h
>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/amdgpu_psp.h
>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/clearstate_gfx9.h
>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/gfx_v9_0.h
>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c
>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.h
>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/gmc_v9_0.h
>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c
>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.h
>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/mmsch_v1_0.h
>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/mxgpu_ai.c
>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/mxgpu_ai.h
>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/nbio_v6_1.c
>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/nbio_v6_1.h
>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/psp_gfx_if.h
>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/psp_v3_1.c
>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/psp_v3_1.h
>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/sdma_v4_0.h
>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/soc15.c
>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/soc15.h
>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/soc15_common.h
>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/soc15d.h
>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c
>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/uvd_v7_0.h
>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/vce_v4_0.c
>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/vce_v4_0.h
>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/vega10_ih.c
>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/vega10_ih.h
>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/vega10_sdma_pkt_open.h
>>   create mode 100644 drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c
>>   create mode 100644 drivers/gpu/drm/amd/display/dc/bios/bios_parser2.h
>>   create mode 100644 drivers/gpu/drm/amd/display/dc/bios/bios_parser_types_internal2.h
>>   create mode 100644 drivers/gpu/drm/amd/display/dc/bios/command_table2.c
>>   create mode 100644 drivers/gpu/drm/amd/display/dc/bios/command_table2.h
>>   create mode 100644 drivers/gpu/drm/amd/display/dc/bios/command_table_helper2.c
>>   create mode 100644 drivers/gpu/drm/amd/display/dc/bios/command_table_helper2.h
>>   create mode 100644 drivers/gpu/drm/amd/display/dc/bios/dce112/command_table_helper2_dce112.c
>>   create mode 100644 drivers/gpu/drm/amd/display/dc/bios/dce112/command_table_helper2_dce112.h
>>   create mode 100644 drivers/gpu/drm/amd/display/dc/dce120/Makefile
>>   create mode 100644 drivers/gpu/drm/amd/display/dc/dce120/dce120_hw_sequencer.c
>>   create mode 100644 drivers/gpu/drm/amd/display/dc/dce120/dce120_hw_sequencer.h
>>   create mode 100644 drivers/gpu/drm/amd/display/dc/dce120/dce120_ipp.c
>>   create mode 100644 drivers/gpu/drm/amd/display/dc/dce120/dce120_ipp.h
>>   create mode 100644 drivers/gpu/drm/amd/display/dc/dce120/dce120_ipp_cursor.c
>>   create mode 100644 drivers/gpu/drm/amd/display/dc/dce120/dce120_ipp_gamma.c
>>   create mode 100644 drivers/gpu/drm/amd/display/dc/dce120/dce120_mem_input.c
>>   create mode 100644 drivers/gpu/drm/amd/display/dc/dce120/dce120_mem_input.h
>>   create mode 100644 drivers/gpu/drm/amd/display/dc/dce120/dce120_resource.c
>>   create mode 100644 drivers/gpu/drm/amd/display/dc/dce120/dce120_resource.h
>>   create mode 100644 drivers/gpu/drm/amd/display/dc/dce120/dce120_timing_generator.c
>>   create mode 100644 drivers/gpu/drm/amd/display/dc/dce120/dce120_timing_generator.h
>>   create mode 100644 drivers/gpu/drm/amd/display/dc/gpio/dce120/hw_factory_dce120.c
>>   create mode 100644 drivers/gpu/drm/amd/display/dc/gpio/dce120/hw_factory_dce120.h
>>   create mode 100644 drivers/gpu/drm/amd/display/dc/gpio/dce120/hw_translate_dce120.c
>>   create mode 100644 drivers/gpu/drm/amd/display/dc/gpio/dce120/hw_translate_dce120.h
>>   create mode 100644 drivers/gpu/drm/amd/display/dc/i2caux/dce120/i2caux_dce120.c
>>   create mode 100644 drivers/gpu/drm/amd/display/dc/i2caux/dce120/i2caux_dce120.h
>>   create mode 100644 drivers/gpu/drm/amd/display/dc/irq/dce120/irq_service_dce120.c
>>   create mode 100644 drivers/gpu/drm/amd/display/dc/irq/dce120/irq_service_dce120.h
>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/ATHUB/athub_1_0_default.h
>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/ATHUB/athub_1_0_offset.h
>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/ATHUB/athub_1_0_sh_mask.h
>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/DC/dce_12_0_default.h
>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/DC/dce_12_0_offset.h
>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/DC/dce_12_0_sh_mask.h
>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/GC/gc_9_0_default.h
>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/GC/gc_9_0_offset.h
>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/GC/gc_9_0_sh_mask.h
>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/HDP/hdp_4_0_default.h
>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/HDP/hdp_4_0_offset.h
>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/HDP/hdp_4_0_sh_mask.h
>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/MMHUB/mmhub_1_0_default.h
>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/MMHUB/mmhub_1_0_offset.h
>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/MMHUB/mmhub_1_0_sh_mask.h
>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/MP/mp_9_0_default.h
>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/MP/mp_9_0_offset.h
>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/MP/mp_9_0_sh_mask.h
>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/NBIF/nbif_6_1_default.h
>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/NBIF/nbif_6_1_offset.h
>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/NBIF/nbif_6_1_sh_mask.h
>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/NBIO/nbio_6_1_default.h
>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/NBIO/nbio_6_1_offset.h
>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/NBIO/nbio_6_1_sh_mask.h
>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/OSSSYS/osssys_4_0_default.h
>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/OSSSYS/osssys_4_0_offset.h
>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/OSSSYS/osssys_4_0_sh_mask.h
>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/SDMA0/sdma0_4_0_default.h
>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/SDMA0/sdma0_4_0_offset.h
>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/SDMA0/sdma0_4_0_sh_mask.h
>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/SDMA1/sdma1_4_0_default.h
>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/SDMA1/sdma1_4_0_offset.h
>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/SDMA1/sdma1_4_0_sh_mask.h
>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/SMUIO/smuio_9_0_default.h
>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/SMUIO/smuio_9_0_offset.h
>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/SMUIO/smuio_9_0_sh_mask.h
>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/THM/thm_9_0_default.h
>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/THM/thm_9_0_offset.h
>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/THM/thm_9_0_sh_mask.h
>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/UVD/uvd_7_0_default.h
>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/UVD/uvd_7_0_offset.h
>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/UVD/uvd_7_0_sh_mask.h
>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/VCE/vce_4_0_default.h
>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/VCE/vce_4_0_offset.h
>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/VCE/vce_4_0_sh_mask.h
>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/soc15ip.h
>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/vega10_enum.h
>>   create mode 100644 drivers/gpu/drm/amd/include/atomfirmware.h
>>   create mode 100644 drivers/gpu/drm/amd/include/atomfirmwareid.h
>>   create mode 100644 drivers/gpu/drm/amd/include/displayobject.h
>>   create mode 100644 drivers/gpu/drm/amd/include/dm_pp_interface.h
>>   create mode 100644 drivers/gpu/drm/amd/include/v9_structs.h
>>   create mode 100644 drivers/gpu/drm/amd/powerplay/hwmgr/ppatomfwctrl.c
>>   create mode 100644 drivers/gpu/drm/amd/powerplay/hwmgr/ppatomfwctrl.h
>>   create mode 100644 drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c
>>   create mode 100644 drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.h
>>   create mode 100644 drivers/gpu/drm/amd/powerplay/hwmgr/vega10_inc.h
>>   create mode 100644 drivers/gpu/drm/amd/powerplay/hwmgr/vega10_powertune.c
>>   create mode 100644 drivers/gpu/drm/amd/powerplay/hwmgr/vega10_powertune.h
>>   create mode 100644 drivers/gpu/drm/amd/powerplay/hwmgr/vega10_pptable.h
>>   create mode 100644 drivers/gpu/drm/amd/powerplay/hwmgr/vega10_processpptables.c
>>   create mode 100644 drivers/gpu/drm/amd/powerplay/hwmgr/vega10_processpptables.h
>>   create mode 100644 drivers/gpu/drm/amd/powerplay/hwmgr/vega10_thermal.c
>>   create mode 100644 drivers/gpu/drm/amd/powerplay/hwmgr/vega10_thermal.h
>>   create mode 100644 drivers/gpu/drm/amd/powerplay/inc/pp_soc15.h
>>   create mode 100644 drivers/gpu/drm/amd/powerplay/inc/smu9.h
>>   create mode 100644 drivers/gpu/drm/amd/powerplay/inc/smu9_driver_if.h
>>   create mode 100644 drivers/gpu/drm/amd/powerplay/inc/vega10_ppsmc.h
>>   create mode 100644 drivers/gpu/drm/amd/powerplay/smumgr/vega10_smumgr.c
>>   create mode 100644 drivers/gpu/drm/amd/powerplay/smumgr/vega10_smumgr.h
>>
>
> _______________________________________________
> amd-gfx mailing list
> amd-gfx@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/amd-gfx




* Re: [PATCH 000/100] Add Vega10 Support
       [not found]         ` <b717602f-7573-6c20-ca68-491e3fe847c0-ANTagKRnAhcb1SvskN2V4Q@public.gmane.org>
@ 2017-03-21 12:18           ` Christian König
       [not found]             ` <15b7d1b4-8ac7-d14b-40f6-aba529b301ea-ANTagKRnAhcb1SvskN2V4Q@public.gmane.org>
  0 siblings, 1 reply; 101+ messages in thread
From: Christian König @ 2017-03-21 12:18 UTC (permalink / raw)
  To: Alex Deucher, amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Alex Deucher

In my Spam folder I've found:

Patches #22, #24, #39, #50, #64, #69, #75, #78 which are Acked-by: 
Christian König <christian.koenig@amd.com>.

Patches #32, #42 which are Reviewed-by: Christian König 
<christian.koenig@amd.com>.

And patches #80, #85, #89, #90, #91, #100 which already had either my rb 
or ackb.

So still missing are #6-#20, which are probably just too large for the list.

Regards,
Christian.

Am 21.03.2017 um 12:51 schrieb Christian König:
> Patches #48, #49, #52-#63, #65-#68, #70-#72, #74, #76, #77, #79, 
> #81-#84 are Acked-by: Christian König <christian.koenig@amd.com>.
>
> Patches #50, #64, #69, #75, #78, #80, #85, #89-#91, #100 didn't make it
> to the list.
>
> Patch #73 probably needs to be moved to the end of the set or at least 
> after the wptr_poll fix.
>
> Apart from those everything should already have my reviewed-by or 
> acked-by.
>
> What worries me a bit are the ones that didn't make it to the list.
> Going to check my spam folder, but that is a bit disturbing.
>
> Regards,
> Christian.
>
> Am 21.03.2017 um 08:42 schrieb Christian König:
>> Patches #1 - #5, #21, #23, #25, #27, #28, #31, #35-#38, #40, #41, #45 
>> are Acked-by: Christian König.
>>
>> Patches #6-#20, #22, #24, #32, #39, #42 didn't make it to the list
>> (probably too large).
>>
>> Patches #43, #44 are Reviewed-by: Christian König 
>> <christian.koenig@amd.com>.
>>
>> Patch #26: That stuff actually belongs in vega10-specific code,
>> doesn't it?
>>
>> Patch #29: We shouldn't use typedefs for enums.
>>
>> Going to take a look at the rest later today,
>> Christian.
>>
>> Am 20.03.2017 um 21:29 schrieb Alex Deucher:
>>> This patch set adds support for vega10. Major changes and supported
>>> features:
>>> - new vbios interface
>>> - Lots of new hw IPs
>>> - Support for video decode using UVD
>>> - Support for video encode using VCE
>>> - Support for 3D via radeonsi
>>> - Power management
>>> - Full display support via DC
>>> - Support for SR-IOV
>>>
>>> I did not send out the register headers since they are huge. You can 
>>> find them
>>> along with all the other patches in this series here:
>>> https://cgit.freedesktop.org/~agd5f/linux/log/?h=amd-staging-4.9
>>>
>>> Please review.
>>>
>>> Thanks,
>>>
>>> Alex
>>>
>>> Alex Deucher (29):
>>>    drm/amdgpu: add the new atomfirmware interface header
>>>    amdgpu: detect if we are using atomfirm or atombios for vbios (v2)
>>>    drm/amdgpu: move atom scratch setup into amdgpu_atombios.c
>>>    drm/amdgpu: add basic support for atomfirmware.h (v3)
>>>    drm/amdgpu: add soc15ip.h
>>>    drm/amdgpu: add vega10_enum.h
>>>    drm/amdgpu: Add ATHUB 1.0 register headers
>>>    drm/amdgpu: Add the DCE 12.0 register headers
>>>    drm/amdgpu: add the GC 9.0 register headers
>>>    drm/amdgpu: add the HDP 4.0 register headers
>>>    drm/amdgpu: add the MMHUB 1.0 register headers
>>>    drm/amdgpu: add MP 9.0 register headers
>>>    drm/amdgpu: add NBIF 6.1 register headers
>>>    drm/amdgpu: add NBIO 6.1 register headers
>>>    drm/amdgpu: add OSSSYS 4.0 register headers
>>>    drm/amdgpu: add SDMA 4.0 register headers
>>>    drm/amdgpu: add SMUIO 9.0 register headers
>>>    drm/amdgpu: add THM 9.0 register headers
>>>    drm/amdgpu: add the UVD 7.0 register headers
>>>    drm/amdgpu: add the VCE 4.0 register headers
>>>    drm/amdgpu: add gfx9 clearstate header
>>>    drm/amdgpu: add SDMA 4.0 packet header
>>>    drm/amdgpu: use atomfirmware interfaces for scratch reg save/restore
>>>    drm/amdgpu: update IH IV ring entry for soc-15
>>>    drm/amdgpu: add PTE defines for MTYPE
>>>    drm/amdgpu: add NGG parameters
>>>    drm/amdgpu: Add asic family for vega10
>>>    drm/amdgpu: add tiling flags for GFX9
>>>    drm/amdgpu: gart fixes for vega10
>>>
>>> Alex Xie (4):
>>>    drm/amdgpu: Add MTYPE flags to GPU VM IOCTL interface
>>>    drm/amdgpu: handle PTE EXEC in amdgpu_vm_bo_split_mapping
>>>    drm/amdgpu: handle PTE MTYPE in amdgpu_vm_bo_split_mapping
>>>    drm/amdgpu: Add GMC 9.0 support
>>>
>>> Andrey Grodzovsky (1):
>>>    drm/amdgpu: gb_addr_config struct
>>>
>>> Charlene Liu (1):
>>>    drm/amd/display: need to handle DCE_Info table ver4.2
>>>
>>> Christian König (1):
>>>    drm/amdgpu: add IV trace point
>>>
>>> Eric Huang (7):
>>>    drm/amd/powerplay: add smu9 header files for Vega10
>>>    drm/amd/powerplay: add new Vega10's ppsmc header file
>>>    drm/amdgpu: add new atomfirmware based helpers for powerplay
>>>    drm/amd/powerplay: add some new structures for Vega10
>>>    drm/amd: add structures for display/powerplay interface
>>>    drm/amd/powerplay: add some display/powerplay interfaces
>>>    drm/amd/powerplay: add Vega10 powerplay support
>>>
>>> Felix Kuehling (1):
>>>    drm/amd: Add MQD structs for GFX V9
>>>
>>> Harry Wentland (6):
>>>    drm/amd/display: Add DCE12 bios parser support
>>>    drm/amd/display: Add DCE12 gpio support
>>>    drm/amd/display: Add DCE12 i2c/aux support
>>>    drm/amd/display: Add DCE12 irq support
>>>    drm/amd/display: Add DCE12 core support
>>>    drm/amd/display: Enable DCE12 support
>>>
>>> Huang Rui (6):
>>>    drm/amdgpu: use new flag to handle different firmware loading method
>>>    drm/amdgpu: rework common ucode handling for vega10
>>>    drm/amdgpu: add psp firmware header info
>>>    drm/amdgpu: add PSP driver for vega10
>>>    drm/amdgpu: add psp firmware info into info query and debugfs
>>>    drm/amdgpu: add SMC firmware into global ucode list for psp loading
>>>
>>> Jordan Lazare (1):
>>>    drm/amd/display: Less log spam
>>>
>>> Junwei Zhang (2):
>>>    drm/amdgpu: add NBIO 6.1 driver
>>>    drm/amdgpu: add Vega10 Device IDs
>>>
>>> Ken Wang (8):
>>>    drm/amdgpu: add common soc15 headers
>>>    drm/amdgpu: add vega10 chip name
>>>    drm/amdgpu: add 64bit doorbell assignments
>>>    drm/amdgpu: add SDMA v4.0 implementation
>>>    drm/amdgpu: implement GFX 9.0 support
>>>    drm/amdgpu: add vega10 interrupt handler
>>>    drm/amdgpu: soc15 enable (v2)
>>>    drm/amdgpu: Set the IP blocks for vega10
>>>
>>> Leo Liu (2):
>>>    drm/amdgpu: add initial uvd 7.0 support for vega10
>>>    drm/amdgpu: add initial vce 4.0 support for vega10
>>>
>>> Marek Olšák (1):
>>>    drm/amdgpu: don't validate TILE_SPLIT on GFX9
>>>
>>> Monk Liu (5):
>>>    drm/amdgpu/gfx9: programing wptr_poll_addr register
>>>    drm/amdgpu:impl gfx9 cond_exec
>>>    drm/amdgpu:bypass RLC init for SRIOV
>>>    drm/amdgpu/sdma4:re-org SDMA initial steps for sriov
>>>    drm/amdgpu/vega10:fix DOORBELL64 scheme
>>>
>>> Rex Zhu (2):
>>>    drm/amdgpu: get display info from DC when DC enabled.
>>>    drm/amd/powerplay: add global PowerPlay mutex.
>>>
>>> Xiangliang Yu (22):
>>>    drm/amdgpu: impl sriov detection for vega10
>>>    drm/amdgpu: add kiq ring for gfx9
>>>    drm/amdgpu/gfx9: fullfill kiq funcs
>>>    drm/amdgpu/gfx9: fullfill kiq irq funcs
>>>    drm/amdgpu: init kiq and kcq for vega10
>>>    drm/amdgpu/gfx9: impl gfx9 meta data emit
>>>    drm/amdgpu/soc15: bypass PSP for VF
>>>    drm/amdgpu/gmc9: no need use kiq in vega10 tlb flush
>>>    drm/amdgpu/dce_virtual: bypass DPM for vf
>>>    drm/amdgpu/virt: impl mailbox for ai
>>>    drm/amdgpu/soc15: init virt ops for vf
>>>    drm/amdgpu/soc15: enable virtual dce for vf
>>>    drm/amdgpu: Don't touch PG&CG for SRIOV MM
>>>    drm/amdgpu/vce4: enable doorbell for SRIOV
>>>    drm/amdgpu: disable uvd for sriov
>>>    drm/amdgpu/soc15: bypass pp block for vf
>>>    drm/amdgpu/virt: add structure for MM table
>>>    drm/amdgpu/vce4: alloc mm table for MM sriov
>>>    drm/amdgpu/vce4: Ignore vce ring/ib test temporarily
>>>    drm/amdgpu: add mmsch structures
>>>    drm/amdgpu/vce4: impl vce & mmsch sriov start
>>>    drm/amdgpu/gfx9: correct wptr pointer value
>>>
>>> ken (1):
>>>    drm/amdgpu: add clinetid definition for vega10
>>>
>>>   drivers/gpu/drm/amd/amdgpu/Makefile                |     27 +-
>>>   drivers/gpu/drm/amd/amdgpu/amdgpu.h                |    172 +-
>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.c       |     28 +
>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.h       |      3 +
>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_atomfirmware.c   |    112 +
>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_atomfirmware.h   |     33 +
>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_bios.c           |     30 +-
>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_cgs.c            |     73 +-
>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_device.c         |     73 +-
>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c            |     36 +-
>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_gart.c           |      3 +-
>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c            |      2 +-
>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_ih.h             |     47 +-
>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c            |      3 +
>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c            |     32 +
>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_object.c         |      5 +-
>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_powerplay.c      |      5 +-
>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c            |    473 +
>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_psp.h            |    127 +
>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_trace.h          |     37 +
>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c          |    113 +-
>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h          |     17 +
>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c            |     58 +-
>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c            |     21 +
>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_virt.h           |      7 +
>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c             |     34 +-
>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h             |      4 +
>>>   drivers/gpu/drm/amd/amdgpu/atom.c                  |     26 -
>>>   drivers/gpu/drm/amd/amdgpu/atom.h                  |      1 -
>>>   drivers/gpu/drm/amd/amdgpu/cik.c                   |      2 +
>>>   drivers/gpu/drm/amd/amdgpu/clearstate_gfx9.h       |    941 +
>>>   drivers/gpu/drm/amd/amdgpu/dce_virtual.c           |      3 +
>>>   drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c              |      6 +-
>>>   drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c              |   4075 +
>>>   drivers/gpu/drm/amd/amdgpu/gfx_v9_0.h              |     35 +
>>>   drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c           |    447 +
>>>   drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.h           |     35 +
>>>   drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c              |    826 +
>>>   drivers/gpu/drm/amd/amdgpu/gmc_v9_0.h              |     30 +
>>>   drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c            |    585 +
>>>   drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.h            |     35 +
>>>   drivers/gpu/drm/amd/amdgpu/mmsch_v1_0.h            |     87 +
>>>   drivers/gpu/drm/amd/amdgpu/mxgpu_ai.c              |    207 +
>>>   drivers/gpu/drm/amd/amdgpu/mxgpu_ai.h              |     47 +
>>>   drivers/gpu/drm/amd/amdgpu/nbio_v6_1.c             |    251 +
>>>   drivers/gpu/drm/amd/amdgpu/nbio_v6_1.h             |     53 +
>>>   drivers/gpu/drm/amd/amdgpu/psp_gfx_if.h            |    269 +
>>>   drivers/gpu/drm/amd/amdgpu/psp_v3_1.c              |    507 +
>>>   drivers/gpu/drm/amd/amdgpu/psp_v3_1.h              |     50 +
>>>   drivers/gpu/drm/amd/amdgpu/sdma_v2_4.c             |      4 +-
>>>   drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c             |      4 +-
>>>   drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c             |   1573 +
>>>   drivers/gpu/drm/amd/amdgpu/sdma_v4_0.h             |     30 +
>>>   drivers/gpu/drm/amd/amdgpu/soc15.c                 |    825 +
>>>   drivers/gpu/drm/amd/amdgpu/soc15.h                 |     35 +
>>>   drivers/gpu/drm/amd/amdgpu/soc15_common.h          |     57 +
>>>   drivers/gpu/drm/amd/amdgpu/soc15d.h                |    287 +
>>>   drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c              |   1543 +
>>>   drivers/gpu/drm/amd/amdgpu/uvd_v7_0.h              |     29 +
>>>   drivers/gpu/drm/amd/amdgpu/vce_v4_0.c              |   1141 +
>>>   drivers/gpu/drm/amd/amdgpu/vce_v4_0.h              |     29 +
>>>   drivers/gpu/drm/amd/amdgpu/vega10_ih.c             |    424 +
>>>   drivers/gpu/drm/amd/amdgpu/vega10_ih.h             |     30 +
>>>   drivers/gpu/drm/amd/amdgpu/vega10_sdma_pkt_open.h  |   3335 +
>>>   drivers/gpu/drm/amd/amdgpu/vi.c                    |      4 +-
>>>   drivers/gpu/drm/amd/display/Kconfig                |      7 +
>>>   drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c  |    145 +-
>>>   .../drm/amd/display/amdgpu_dm/amdgpu_dm_services.c |     10 +
>>>   .../drm/amd/display/amdgpu_dm/amdgpu_dm_types.c    |     20 +-
>>>   drivers/gpu/drm/amd/display/dc/Makefile            |      4 +
>>>   drivers/gpu/drm/amd/display/dc/bios/Makefile       |      8 +
>>>   drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c |   2162 +
>>>   drivers/gpu/drm/amd/display/dc/bios/bios_parser2.h |     33 +
>>>   .../amd/display/dc/bios/bios_parser_interface.c    |     14 +
>>>   .../display/dc/bios/bios_parser_types_internal2.h  |     74 +
>>>   .../gpu/drm/amd/display/dc/bios/command_table2.c   |    813 +
>>>   .../gpu/drm/amd/display/dc/bios/command_table2.h   |    105 +
>>>   .../amd/display/dc/bios/command_table_helper2.c    |    260 +
>>>   .../amd/display/dc/bios/command_table_helper2.h    |     82 +
>>>   .../dc/bios/dce112/command_table_helper2_dce112.c  |    418 +
>>>   .../dc/bios/dce112/command_table_helper2_dce112.h  |     34 +
>>>   drivers/gpu/drm/amd/display/dc/calcs/dce_calcs.c   |    117 +
>>>   drivers/gpu/drm/amd/display/dc/core/dc.c           |     29 +
>>>   drivers/gpu/drm/amd/display/dc/core/dc_debug.c     |     11 +
>>>   drivers/gpu/drm/amd/display/dc/core/dc_link.c      |     19 +
>>>   drivers/gpu/drm/amd/display/dc/core/dc_resource.c  |     14 +
>>>   drivers/gpu/drm/amd/display/dc/dc.h                |     27 +
>>>   drivers/gpu/drm/amd/display/dc/dc_hw_types.h       |     46 +
>>>   .../gpu/drm/amd/display/dc/dce/dce_clock_source.c  |      6 +
>>>   drivers/gpu/drm/amd/display/dc/dce/dce_clocks.c    |    149 +
>>>   drivers/gpu/drm/amd/display/dc/dce/dce_clocks.h    |     20 +
>>>   drivers/gpu/drm/amd/display/dc/dce/dce_hwseq.h     |      8 +
>>>   .../gpu/drm/amd/display/dc/dce/dce_link_encoder.h  |     14 +
>>>   drivers/gpu/drm/amd/display/dc/dce/dce_mem_input.c |     35 +
>>>   drivers/gpu/drm/amd/display/dc/dce/dce_mem_input.h |     34 +
>>>   drivers/gpu/drm/amd/display/dc/dce/dce_opp.h       |     72 +
>>>   .../drm/amd/display/dc/dce/dce_stream_encoder.h    |    100 +
>>>   drivers/gpu/drm/amd/display/dc/dce/dce_transform.h |     68 +
>>>   .../amd/display/dc/dce110/dce110_hw_sequencer.c    |     53 +-
>>>   .../drm/amd/display/dc/dce110/dce110_mem_input.c   |      3 +
>>>   .../display/dc/dce110/dce110_timing_generator.h    |      3 +
>>>   drivers/gpu/drm/amd/display/dc/dce120/Makefile     |     12 +
>>>   .../amd/display/dc/dce120/dce120_hw_sequencer.c    |    197 +
>>>   .../amd/display/dc/dce120/dce120_hw_sequencer.h    |     36 +
>>>   drivers/gpu/drm/amd/display/dc/dce120/dce120_ipp.c |     58 +
>>>   drivers/gpu/drm/amd/display/dc/dce120/dce120_ipp.h |     62 +
>>>   .../drm/amd/display/dc/dce120/dce120_ipp_cursor.c  |    202 +
>>>   .../drm/amd/display/dc/dce120/dce120_ipp_gamma.c   |    167 +
>>>   .../drm/amd/display/dc/dce120/dce120_mem_input.c   |    340 +
>>>   .../drm/amd/display/dc/dce120/dce120_mem_input.h   |     37 +
>>>   .../drm/amd/display/dc/dce120/dce120_resource.c    |   1099 +
>>>   .../drm/amd/display/dc/dce120/dce120_resource.h    |     39 +
>>>   .../display/dc/dce120/dce120_timing_generator.c    |   1109 +
>>>   .../display/dc/dce120/dce120_timing_generator.h    |     41 +
>>>   .../gpu/drm/amd/display/dc/dce80/dce80_mem_input.c |      3 +
>>>   drivers/gpu/drm/amd/display/dc/dm_services.h       |     89 +
>>>   drivers/gpu/drm/amd/display/dc/dm_services_types.h |     27 +
>>>   drivers/gpu/drm/amd/display/dc/gpio/Makefile       |     11 +
>>>   .../amd/display/dc/gpio/dce120/hw_factory_dce120.c |    197 +
>>>   .../amd/display/dc/gpio/dce120/hw_factory_dce120.h |     32 +
>>>   .../display/dc/gpio/dce120/hw_translate_dce120.c   |    408 +
>>>   .../display/dc/gpio/dce120/hw_translate_dce120.h   |     34 +
>>>   drivers/gpu/drm/amd/display/dc/gpio/hw_factory.c   |      9 +
>>>   drivers/gpu/drm/amd/display/dc/gpio/hw_translate.c |      9 +-
>>>   drivers/gpu/drm/amd/display/dc/i2caux/Makefile     |     11 +
>>>   .../amd/display/dc/i2caux/dce120/i2caux_dce120.c   |    125 +
>>>   .../amd/display/dc/i2caux/dce120/i2caux_dce120.h   |     32 +
>>>   drivers/gpu/drm/amd/display/dc/i2caux/i2caux.c     |      8 +
>>>   .../gpu/drm/amd/display/dc/inc/bandwidth_calcs.h   |      3 +
>>>   .../gpu/drm/amd/display/dc/inc/hw/display_clock.h  |     23 +
>>>   drivers/gpu/drm/amd/display/dc/inc/hw/mem_input.h  |      4 +
>>>   drivers/gpu/drm/amd/display/dc/irq/Makefile        |     12 +
>>>   .../amd/display/dc/irq/dce120/irq_service_dce120.c |    293 +
>>>   .../amd/display/dc/irq/dce120/irq_service_dce120.h |     34 +
>>>   drivers/gpu/drm/amd/display/dc/irq/irq_service.c   |      3 +
>>>   drivers/gpu/drm/amd/display/include/dal_asic_id.h  |      4 +
>>>   drivers/gpu/drm/amd/display/include/dal_types.h    |      3 +
>>>   drivers/gpu/drm/amd/include/amd_shared.h           |      4 +
>>>   .../asic_reg/vega10/ATHUB/athub_1_0_default.h      |    241 +
>>>   .../asic_reg/vega10/ATHUB/athub_1_0_offset.h       |    453 +
>>>   .../asic_reg/vega10/ATHUB/athub_1_0_sh_mask.h      |   2045 +
>>>   .../include/asic_reg/vega10/DC/dce_12_0_default.h  |   9868 ++
>>>   .../include/asic_reg/vega10/DC/dce_12_0_offset.h   |  18193 +++
>>>   .../include/asic_reg/vega10/DC/dce_12_0_sh_mask.h  |  64636 +++++++++
>>>   .../include/asic_reg/vega10/GC/gc_9_0_default.h    |   3873 +
>>>   .../amd/include/asic_reg/vega10/GC/gc_9_0_offset.h |   7230 +
>>>   .../include/asic_reg/vega10/GC/gc_9_0_sh_mask.h    |  29868 ++++
>>>   .../include/asic_reg/vega10/HDP/hdp_4_0_default.h  |    117 +
>>>   .../include/asic_reg/vega10/HDP/hdp_4_0_offset.h   |    209 +
>>>   .../include/asic_reg/vega10/HDP/hdp_4_0_sh_mask.h  |    601 +
>>>   .../asic_reg/vega10/MMHUB/mmhub_1_0_default.h      |   1011 +
>>>   .../asic_reg/vega10/MMHUB/mmhub_1_0_offset.h       |   1967 +
>>>   .../asic_reg/vega10/MMHUB/mmhub_1_0_sh_mask.h      |  10127 ++
>>>   .../include/asic_reg/vega10/MP/mp_9_0_default.h    |    342 +
>>>   .../amd/include/asic_reg/vega10/MP/mp_9_0_offset.h |    375 +
>>>   .../include/asic_reg/vega10/MP/mp_9_0_sh_mask.h    |   1463 +
>>>   .../asic_reg/vega10/NBIF/nbif_6_1_default.h        |   1271 +
>>>   .../include/asic_reg/vega10/NBIF/nbif_6_1_offset.h |   1688 +
>>>   .../asic_reg/vega10/NBIF/nbif_6_1_sh_mask.h        |  10281 ++
>>>   .../asic_reg/vega10/NBIO/nbio_6_1_default.h        |  22340 +++
>>>   .../include/asic_reg/vega10/NBIO/nbio_6_1_offset.h |   3649 +
>>>   .../asic_reg/vega10/NBIO/nbio_6_1_sh_mask.h        | 133884 ++++++++++++++++++
>>>   .../asic_reg/vega10/OSSSYS/osssys_4_0_default.h    |    176 +
>>>   .../asic_reg/vega10/OSSSYS/osssys_4_0_offset.h     |    327 +
>>>   .../asic_reg/vega10/OSSSYS/osssys_4_0_sh_mask.h    |   1196 +
>>>   .../asic_reg/vega10/SDMA0/sdma0_4_0_default.h      |    286 +
>>>   .../asic_reg/vega10/SDMA0/sdma0_4_0_offset.h       |    547 +
>>>   .../asic_reg/vega10/SDMA0/sdma0_4_0_sh_mask.h      |   1852 +
>>>   .../asic_reg/vega10/SDMA1/sdma1_4_0_default.h      |    282 +
>>>   .../asic_reg/vega10/SDMA1/sdma1_4_0_offset.h       |    539 +
>>>   .../asic_reg/vega10/SDMA1/sdma1_4_0_sh_mask.h      |   1810 +
>>>   .../asic_reg/vega10/SMUIO/smuio_9_0_default.h      |    100 +
>>>   .../asic_reg/vega10/SMUIO/smuio_9_0_offset.h       |    175 +
>>>   .../asic_reg/vega10/SMUIO/smuio_9_0_sh_mask.h      |    258 +
>>>   .../include/asic_reg/vega10/THM/thm_9_0_default.h  |    194 +
>>>   .../include/asic_reg/vega10/THM/thm_9_0_offset.h   |    363 +
>>>   .../include/asic_reg/vega10/THM/thm_9_0_sh_mask.h  |   1314 +
>>>   .../include/asic_reg/vega10/UVD/uvd_7_0_default.h  |    127 +
>>>   .../include/asic_reg/vega10/UVD/uvd_7_0_offset.h   |    222 +
>>>   .../include/asic_reg/vega10/UVD/uvd_7_0_sh_mask.h  |    811 +
>>>   .../include/asic_reg/vega10/VCE/vce_4_0_default.h  |    122 +
>>>   .../include/asic_reg/vega10/VCE/vce_4_0_offset.h   |    208 +
>>>   .../include/asic_reg/vega10/VCE/vce_4_0_sh_mask.h  |    488 +
>>>   .../gpu/drm/amd/include/asic_reg/vega10/soc15ip.h  |   1343 +
>>>   .../drm/amd/include/asic_reg/vega10/vega10_enum.h  |  22531 +++
>>>   drivers/gpu/drm/amd/include/atomfirmware.h         |   2385 +
>>>   drivers/gpu/drm/amd/include/atomfirmwareid.h       |     86 +
>>>   drivers/gpu/drm/amd/include/displayobject.h        |    249 +
>>>   drivers/gpu/drm/amd/include/dm_pp_interface.h      |     83 +
>>>   drivers/gpu/drm/amd/include/v9_structs.h           |    743 +
>>>   drivers/gpu/drm/amd/powerplay/amd_powerplay.c      |    284 +-
>>>   drivers/gpu/drm/amd/powerplay/hwmgr/Makefile       |      6 +-
>>>   .../gpu/drm/amd/powerplay/hwmgr/hardwaremanager.c  |     49 +
>>>   drivers/gpu/drm/amd/powerplay/hwmgr/hwmgr.c        |      9 +
>>>   drivers/gpu/drm/amd/powerplay/hwmgr/hwmgr_ppt.h    |     16 +-
>>>   drivers/gpu/drm/amd/powerplay/hwmgr/ppatomfwctrl.c |    396 +
>>>   drivers/gpu/drm/amd/powerplay/hwmgr/ppatomfwctrl.h |    140 +
>>>   drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c |   4378 +
>>>   drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.h |    434 +
>>>   drivers/gpu/drm/amd/powerplay/hwmgr/vega10_inc.h   |     44 +
>>>   .../gpu/drm/amd/powerplay/hwmgr/vega10_powertune.c |    137 +
>>>   .../gpu/drm/amd/powerplay/hwmgr/vega10_powertune.h |     65 +
>>>   .../gpu/drm/amd/powerplay/hwmgr/vega10_pptable.h   |    331 +
>>>   .../amd/powerplay/hwmgr/vega10_processpptables.c   |   1056 +
>>>   .../amd/powerplay/hwmgr/vega10_processpptables.h   |     34 +
>>>   .../gpu/drm/amd/powerplay/hwmgr/vega10_thermal.c   |    761 +
>>>   .../gpu/drm/amd/powerplay/hwmgr/vega10_thermal.h   |     83 +
>>>   drivers/gpu/drm/amd/powerplay/inc/amd_powerplay.h  |     28 +-
>>>   .../gpu/drm/amd/powerplay/inc/hardwaremanager.h    |     43 +
>>>   drivers/gpu/drm/amd/powerplay/inc/hwmgr.h          |    125 +-
>>>   drivers/gpu/drm/amd/powerplay/inc/pp_instance.h    |      1 +
>>>   drivers/gpu/drm/amd/powerplay/inc/pp_soc15.h       |     48 +
>>>   drivers/gpu/drm/amd/powerplay/inc/smu9.h           |    147 +
>>>   drivers/gpu/drm/amd/powerplay/inc/smu9_driver_if.h |    418 +
>>>   drivers/gpu/drm/amd/powerplay/inc/smumgr.h         |      3 +
>>>   drivers/gpu/drm/amd/powerplay/inc/vega10_ppsmc.h   |    131 +
>>>   drivers/gpu/drm/amd/powerplay/smumgr/Makefile      |      2 +-
>>>   drivers/gpu/drm/amd/powerplay/smumgr/smumgr.c      |      9 +
>>>   .../gpu/drm/amd/powerplay/smumgr/vega10_smumgr.c   |    564 +
>>>   .../gpu/drm/amd/powerplay/smumgr/vega10_smumgr.h   |     70 +
>>>   include/uapi/drm/amdgpu_drm.h                      |     29 +
>>>   221 files changed, 403408 insertions(+), 219 deletions(-)
>>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/amdgpu_atomfirmware.c
>>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/amdgpu_atomfirmware.h
>>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
>>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/amdgpu_psp.h
>>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/clearstate_gfx9.h
>>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
>>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/gfx_v9_0.h
>>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c
>>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.h
>>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
>>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/gmc_v9_0.h
>>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c
>>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.h
>>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/mmsch_v1_0.h
>>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/mxgpu_ai.c
>>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/mxgpu_ai.h
>>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/nbio_v6_1.c
>>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/nbio_v6_1.h
>>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/psp_gfx_if.h
>>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/psp_v3_1.c
>>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/psp_v3_1.h
>>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
>>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/sdma_v4_0.h
>>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/soc15.c
>>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/soc15.h
>>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/soc15_common.h
>>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/soc15d.h
>>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c
>>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/uvd_v7_0.h
>>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/vce_v4_0.c
>>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/vce_v4_0.h
>>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/vega10_ih.c
>>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/vega10_ih.h
>>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/vega10_sdma_pkt_open.h
>>>   create mode 100644 drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c
>>>   create mode 100644 drivers/gpu/drm/amd/display/dc/bios/bios_parser2.h
>>>   create mode 100644 
>>> drivers/gpu/drm/amd/display/dc/bios/bios_parser_types_internal2.h
>>>   create mode 100644 
>>> drivers/gpu/drm/amd/display/dc/bios/command_table2.c
>>>   create mode 100644 
>>> drivers/gpu/drm/amd/display/dc/bios/command_table2.h
>>>   create mode 100644 
>>> drivers/gpu/drm/amd/display/dc/bios/command_table_helper2.c
>>>   create mode 100644 
>>> drivers/gpu/drm/amd/display/dc/bios/command_table_helper2.h
>>>   create mode 100644 
>>> drivers/gpu/drm/amd/display/dc/bios/dce112/command_table_helper2_dce112.c
>>>   create mode 100644 
>>> drivers/gpu/drm/amd/display/dc/bios/dce112/command_table_helper2_dce112.h
>>>   create mode 100644 drivers/gpu/drm/amd/display/dc/dce120/Makefile
>>>   create mode 100644 
>>> drivers/gpu/drm/amd/display/dc/dce120/dce120_hw_sequencer.c
>>>   create mode 100644 
>>> drivers/gpu/drm/amd/display/dc/dce120/dce120_hw_sequencer.h
>>>   create mode 100644 drivers/gpu/drm/amd/display/dc/dce120/dce120_ipp.c
>>>   create mode 100644 drivers/gpu/drm/amd/display/dc/dce120/dce120_ipp.h
>>>   create mode 100644 
>>> drivers/gpu/drm/amd/display/dc/dce120/dce120_ipp_cursor.c
>>>   create mode 100644 
>>> drivers/gpu/drm/amd/display/dc/dce120/dce120_ipp_gamma.c
>>>   create mode 100644 
>>> drivers/gpu/drm/amd/display/dc/dce120/dce120_mem_input.c
>>>   create mode 100644 
>>> drivers/gpu/drm/amd/display/dc/dce120/dce120_mem_input.h
>>>   create mode 100644 
>>> drivers/gpu/drm/amd/display/dc/dce120/dce120_resource.c
>>>   create mode 100644 
>>> drivers/gpu/drm/amd/display/dc/dce120/dce120_resource.h
>>>   create mode 100644 
>>> drivers/gpu/drm/amd/display/dc/dce120/dce120_timing_generator.c
>>>   create mode 100644 
>>> drivers/gpu/drm/amd/display/dc/dce120/dce120_timing_generator.h
>>>   create mode 100644 
>>> drivers/gpu/drm/amd/display/dc/gpio/dce120/hw_factory_dce120.c
>>>   create mode 100644 
>>> drivers/gpu/drm/amd/display/dc/gpio/dce120/hw_factory_dce120.h
>>>   create mode 100644 
>>> drivers/gpu/drm/amd/display/dc/gpio/dce120/hw_translate_dce120.c
>>>   create mode 100644 
>>> drivers/gpu/drm/amd/display/dc/gpio/dce120/hw_translate_dce120.h
>>>   create mode 100644 
>>> drivers/gpu/drm/amd/display/dc/i2caux/dce120/i2caux_dce120.c
>>>   create mode 100644 
>>> drivers/gpu/drm/amd/display/dc/i2caux/dce120/i2caux_dce120.h
>>>   create mode 100644 
>>> drivers/gpu/drm/amd/display/dc/irq/dce120/irq_service_dce120.c
>>>   create mode 100644 
>>> drivers/gpu/drm/amd/display/dc/irq/dce120/irq_service_dce120.h
>>>   create mode 100644 
>>> drivers/gpu/drm/amd/include/asic_reg/vega10/ATHUB/athub_1_0_default.h
>>>   create mode 100644 
>>> drivers/gpu/drm/amd/include/asic_reg/vega10/ATHUB/athub_1_0_offset.h
>>>   create mode 100644 
>>> drivers/gpu/drm/amd/include/asic_reg/vega10/ATHUB/athub_1_0_sh_mask.h
>>>   create mode 100644 
>>> drivers/gpu/drm/amd/include/asic_reg/vega10/DC/dce_12_0_default.h
>>>   create mode 100644 
>>> drivers/gpu/drm/amd/include/asic_reg/vega10/DC/dce_12_0_offset.h
>>>   create mode 100644 
>>> drivers/gpu/drm/amd/include/asic_reg/vega10/DC/dce_12_0_sh_mask.h
>>>   create mode 100644 
>>> drivers/gpu/drm/amd/include/asic_reg/vega10/GC/gc_9_0_default.h
>>>   create mode 100644 
>>> drivers/gpu/drm/amd/include/asic_reg/vega10/GC/gc_9_0_offset.h
>>>   create mode 100644 
>>> drivers/gpu/drm/amd/include/asic_reg/vega10/GC/gc_9_0_sh_mask.h
>>>   create mode 100644 
>>> drivers/gpu/drm/amd/include/asic_reg/vega10/HDP/hdp_4_0_default.h
>>>   create mode 100644 
>>> drivers/gpu/drm/amd/include/asic_reg/vega10/HDP/hdp_4_0_offset.h
>>>   create mode 100644 
>>> drivers/gpu/drm/amd/include/asic_reg/vega10/HDP/hdp_4_0_sh_mask.h
>>>   create mode 100644 
>>> drivers/gpu/drm/amd/include/asic_reg/vega10/MMHUB/mmhub_1_0_default.h
>>>   create mode 100644 
>>> drivers/gpu/drm/amd/include/asic_reg/vega10/MMHUB/mmhub_1_0_offset.h
>>>   create mode 100644 
>>> drivers/gpu/drm/amd/include/asic_reg/vega10/MMHUB/mmhub_1_0_sh_mask.h
>>>   create mode 100644 
>>> drivers/gpu/drm/amd/include/asic_reg/vega10/MP/mp_9_0_default.h
>>>   create mode 100644 
>>> drivers/gpu/drm/amd/include/asic_reg/vega10/MP/mp_9_0_offset.h
>>>   create mode 100644 
>>> drivers/gpu/drm/amd/include/asic_reg/vega10/MP/mp_9_0_sh_mask.h
>>>   create mode 100644 
>>> drivers/gpu/drm/amd/include/asic_reg/vega10/NBIF/nbif_6_1_default.h
>>>   create mode 100644 
>>> drivers/gpu/drm/amd/include/asic_reg/vega10/NBIF/nbif_6_1_offset.h
>>>   create mode 100644 
>>> drivers/gpu/drm/amd/include/asic_reg/vega10/NBIF/nbif_6_1_sh_mask.h
>>>   create mode 100644 
>>> drivers/gpu/drm/amd/include/asic_reg/vega10/NBIO/nbio_6_1_default.h
>>>   create mode 100644 
>>> drivers/gpu/drm/amd/include/asic_reg/vega10/NBIO/nbio_6_1_offset.h
>>>   create mode 100644 
>>> drivers/gpu/drm/amd/include/asic_reg/vega10/NBIO/nbio_6_1_sh_mask.h
>>>   create mode 100644 
>>> drivers/gpu/drm/amd/include/asic_reg/vega10/OSSSYS/osssys_4_0_default.h
>>>   create mode 100644 
>>> drivers/gpu/drm/amd/include/asic_reg/vega10/OSSSYS/osssys_4_0_offset.h
>>>   create mode 100644 
>>> drivers/gpu/drm/amd/include/asic_reg/vega10/OSSSYS/osssys_4_0_sh_mask.h
>>>   create mode 100644 
>>> drivers/gpu/drm/amd/include/asic_reg/vega10/SDMA0/sdma0_4_0_default.h
>>>   create mode 100644 
>>> drivers/gpu/drm/amd/include/asic_reg/vega10/SDMA0/sdma0_4_0_offset.h
>>>   create mode 100644 
>>> drivers/gpu/drm/amd/include/asic_reg/vega10/SDMA0/sdma0_4_0_sh_mask.h
>>>   create mode 100644 
>>> drivers/gpu/drm/amd/include/asic_reg/vega10/SDMA1/sdma1_4_0_default.h
>>>   create mode 100644 
>>> drivers/gpu/drm/amd/include/asic_reg/vega10/SDMA1/sdma1_4_0_offset.h
>>>   create mode 100644 
>>> drivers/gpu/drm/amd/include/asic_reg/vega10/SDMA1/sdma1_4_0_sh_mask.h
>>>   create mode 100644 
>>> drivers/gpu/drm/amd/include/asic_reg/vega10/SMUIO/smuio_9_0_default.h
>>>   create mode 100644 
>>> drivers/gpu/drm/amd/include/asic_reg/vega10/SMUIO/smuio_9_0_offset.h
>>>   create mode 100644 
>>> drivers/gpu/drm/amd/include/asic_reg/vega10/SMUIO/smuio_9_0_sh_mask.h
>>>   create mode 100644 
>>> drivers/gpu/drm/amd/include/asic_reg/vega10/THM/thm_9_0_default.h
>>>   create mode 100644 
>>> drivers/gpu/drm/amd/include/asic_reg/vega10/THM/thm_9_0_offset.h
>>>   create mode 100644 
>>> drivers/gpu/drm/amd/include/asic_reg/vega10/THM/thm_9_0_sh_mask.h
>>>   create mode 100644 
>>> drivers/gpu/drm/amd/include/asic_reg/vega10/UVD/uvd_7_0_default.h
>>>   create mode 100644 
>>> drivers/gpu/drm/amd/include/asic_reg/vega10/UVD/uvd_7_0_offset.h
>>>   create mode 100644 
>>> drivers/gpu/drm/amd/include/asic_reg/vega10/UVD/uvd_7_0_sh_mask.h
>>>   create mode 100644 
>>> drivers/gpu/drm/amd/include/asic_reg/vega10/VCE/vce_4_0_default.h
>>>   create mode 100644 
>>> drivers/gpu/drm/amd/include/asic_reg/vega10/VCE/vce_4_0_offset.h
>>>   create mode 100644 
>>> drivers/gpu/drm/amd/include/asic_reg/vega10/VCE/vce_4_0_sh_mask.h
>>>   create mode 100644 
>>> drivers/gpu/drm/amd/include/asic_reg/vega10/soc15ip.h
>>>   create mode 100644 
>>> drivers/gpu/drm/amd/include/asic_reg/vega10/vega10_enum.h
>>>   create mode 100644 drivers/gpu/drm/amd/include/atomfirmware.h
>>>   create mode 100644 drivers/gpu/drm/amd/include/atomfirmwareid.h
>>>   create mode 100644 drivers/gpu/drm/amd/include/displayobject.h
>>>   create mode 100644 drivers/gpu/drm/amd/include/dm_pp_interface.h
>>>   create mode 100644 drivers/gpu/drm/amd/include/v9_structs.h
>>>   create mode 100644 drivers/gpu/drm/amd/powerplay/hwmgr/ppatomfwctrl.c
>>>   create mode 100644 drivers/gpu/drm/amd/powerplay/hwmgr/ppatomfwctrl.h
>>>   create mode 100644 drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c
>>>   create mode 100644 drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.h
>>>   create mode 100644 drivers/gpu/drm/amd/powerplay/hwmgr/vega10_inc.h
>>>   create mode 100644 
>>> drivers/gpu/drm/amd/powerplay/hwmgr/vega10_powertune.c
>>>   create mode 100644 
>>> drivers/gpu/drm/amd/powerplay/hwmgr/vega10_powertune.h
>>>   create mode 100644 
>>> drivers/gpu/drm/amd/powerplay/hwmgr/vega10_pptable.h
>>>   create mode 100644 
>>> drivers/gpu/drm/amd/powerplay/hwmgr/vega10_processpptables.c
>>>   create mode 100644 
>>> drivers/gpu/drm/amd/powerplay/hwmgr/vega10_processpptables.h
>>>   create mode 100644 
>>> drivers/gpu/drm/amd/powerplay/hwmgr/vega10_thermal.c
>>>   create mode 100644 
>>> drivers/gpu/drm/amd/powerplay/hwmgr/vega10_thermal.h
>>>   create mode 100644 drivers/gpu/drm/amd/powerplay/inc/pp_soc15.h
>>>   create mode 100644 drivers/gpu/drm/amd/powerplay/inc/smu9.h
>>>   create mode 100644 drivers/gpu/drm/amd/powerplay/inc/smu9_driver_if.h
>>>   create mode 100644 drivers/gpu/drm/amd/powerplay/inc/vega10_ppsmc.h
>>>   create mode 100644 
>>> drivers/gpu/drm/amd/powerplay/smumgr/vega10_smumgr.c
>>>   create mode 100644 
>>> drivers/gpu/drm/amd/powerplay/smumgr/vega10_smumgr.h
>>>
>>
>> _______________________________________________
>> amd-gfx mailing list
>> amd-gfx@lists.freedesktop.org
>> https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply	[flat|nested] 101+ messages in thread

* RE: [PATCH 046/100] drm/amdgpu: Add GMC 9.0 support
       [not found]         ` <003f0fba-4792-a32a-c982-73457dfbd1aa-ANTagKRnAhcb1SvskN2V4Q@public.gmane.org>
@ 2017-03-21 15:09           ` Deucher, Alexander
       [not found]             ` <BN6PR12MB1652E0D9C22360FF77A4360AF73D0-/b2+HYfkarQqUD6E6FAiowdYzm3356FpvxpqHgZTriW3zl9H0oFU5g@public.gmane.org>
  2017-03-22 19:41           ` Alex Deucher
  1 sibling, 1 reply; 101+ messages in thread
From: Deucher, Alexander @ 2017-03-21 15:09 UTC (permalink / raw)
  To: 'Christian König',
	Alex Deucher, amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
  Cc: Xie, AlexBin

> -----Original Message-----
> From: Christian König [mailto:deathsimple@vodafone.de]
> Sent: Tuesday, March 21, 2017 4:50 AM
> To: Alex Deucher; amd-gfx@lists.freedesktop.org
> Cc: Deucher, Alexander; Xie, AlexBin
> Subject: Re: [PATCH 046/100] drm/amdgpu: Add GMC 9.0 support
> 
> Am 20.03.2017 um 21:29 schrieb Alex Deucher:
> > From: Alex Xie <AlexBin.Xie@amd.com>
> >
> > On SOC-15 parts, the GMC (Graphics Memory Controller) consists
> > of two hubs: GFX (graphics and compute) and MM (sdma, uvd, vce).
> >
> > Signed-off-by: Alex Xie <AlexBin.Xie@amd.com>
> > Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
> > Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
> > ---
> >   drivers/gpu/drm/amd/amdgpu/Makefile      |   6 +-
> >   drivers/gpu/drm/amd/amdgpu/amdgpu.h      |  30 ++
> >   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c   |  28 +-
> >   drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c | 447 +++++
> >   drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.h |  35 ++
> >   drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c    | 826 +++++++++++++++++++++++++++++++
> >   drivers/gpu/drm/amd/amdgpu/gmc_v9_0.h    |  30 ++
> >   drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c  | 585 ++++++++++++++++++++++
> >   drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.h  |  35 ++
> >   drivers/gpu/drm/amd/include/amd_shared.h |   2 +
> >   10 files changed, 2016 insertions(+), 8 deletions(-)
> >   create mode 100644 drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c
> >   create mode 100644 drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.h
> >   create mode 100644 drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
> >   create mode 100644 drivers/gpu/drm/amd/amdgpu/gmc_v9_0.h
> >   create mode 100644 drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c
> >   create mode 100644 drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.h
> >
> > diff --git a/drivers/gpu/drm/amd/amdgpu/Makefile b/drivers/gpu/drm/amd/amdgpu/Makefile
> > index 69823e8..b5046fd 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/Makefile
> > +++ b/drivers/gpu/drm/amd/amdgpu/Makefile
> > @@ -45,7 +45,8 @@ amdgpu-y += \
> >   # add GMC block
> >   amdgpu-y += \
> >   	gmc_v7_0.o \
> > -	gmc_v8_0.o
> > +	gmc_v8_0.o \
> > +	gfxhub_v1_0.o mmhub_v1_0.o gmc_v9_0.o
> >
> >   # add IH block
> >   amdgpu-y += \
> > @@ -74,7 +75,8 @@ amdgpu-y += \
> >   # add async DMA block
> >   amdgpu-y += \
> >   	sdma_v2_4.o \
> > -	sdma_v3_0.o
> > +	sdma_v3_0.o \
> > +	sdma_v4_0.o
> 
> That change doesn't belong in this patch.
> 
> >
> >   # add UVD block
> >   amdgpu-y += \
> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> > index aaded8d..d7257b6 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> > @@ -123,6 +123,11 @@ extern int amdgpu_param_buf_per_se;
> >   /* max number of IP instances */
> >   #define AMDGPU_MAX_SDMA_INSTANCES		2
> >
> > +/* max number of VMHUB */
> > +#define AMDGPU_MAX_VMHUBS			2
> > +#define AMDGPU_MMHUB				0
> > +#define AMDGPU_GFXHUB				1
> > +
> >   /* hardcode that limit for now */
> >   #define AMDGPU_VA_RESERVED_SIZE			(8 << 20)
> >
> > @@ -310,6 +315,12 @@ struct amdgpu_gart_funcs {
> >   				     uint32_t flags);
> >   };
> >
> > +/* provided by the mc block */
> > +struct amdgpu_mc_funcs {
> > +	/* adjust mc addr in fb for APU case */
> > +	u64 (*adjust_mc_addr)(struct amdgpu_device *adev, u64 addr);
> > +};
> > +
> 
> That isn't hardware specific and is actually incorrectly implemented.
> 
> The calculation depends on the NB on APUs, not on the GPU part, and the
> current implementation probably breaks it for Carrizo and other APUs.
> 
> I suggest just removing the callback and moving the calculation into
> amdgpu_vm_adjust_mc_addr().
> 
> Then rename amdgpu_vm_adjust_mc_addr() to amdgpu_vm_get_pde() and call
> it from amdgpu_vm_update_page_directory() as well as the GFX9-specific
> flush functions.
> 
> >   /* provided by the ih block */
> >   struct amdgpu_ih_funcs {
> >   	/* ring read/write ptr handling, called from interrupt context */
> > @@ -559,6 +570,21 @@ int amdgpu_gart_bind(struct amdgpu_device *adev, uint64_t offset,
> >   int amdgpu_ttm_recover_gart(struct amdgpu_device *adev);
> >
> >   /*
> > + * VMHUB structures, functions & helpers
> > + */
> > +struct amdgpu_vmhub {
> > +	uint32_t	ctx0_ptb_addr_lo32;
> > +	uint32_t	ctx0_ptb_addr_hi32;
> > +	uint32_t	vm_inv_eng0_req;
> > +	uint32_t	vm_inv_eng0_ack;
> > +	uint32_t	vm_context0_cntl;
> > +	uint32_t	vm_l2_pro_fault_status;
> > +	uint32_t	vm_l2_pro_fault_cntl;
> > +	uint32_t	(*get_invalidate_req)(unsigned int vm_id);
> > +	uint32_t	(*get_vm_protection_bits)(void);
> 
> Those two callbacks aren't a good idea either.
> 
> The invalidation request bits are defined by the RTL of the HUB which is
> just instantiated twice, see the register database for details.
> 
> We should probably put those functions in gmc_v9_0.c and call them
> from the device-specific flush methods.

Didn't you have some patches to clean up the gfxhub/mmhub split?  I don't think they ever landed.

Alex
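For context, Christian's suggested refactor would look roughly like the
sketch below. This is an illustrative stand-alone program, not driver code:
the struct, the APU offset handling, and the 48-bit address mask are
assumptions based on the quoted patch and review comments.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical stand-in for struct amdgpu_device; only the fields the
 * sketch needs.  On real APUs the offset depends on the NB, which is
 * exactly why Christian objects to a per-GPU-IP callback. */
struct amdgpu_device_sketch {
	bool     is_apu;
	uint64_t vram_base_offset;
};

#define AMDGPU_PTE_VALID  (1ULL << 0)
/* 48-bit, 4K-aligned address mask, as used in the quoted patch. */
#define MC_ADDR_MASK      0x0000FFFFFFFFF000ULL

/* Sketch of the suggested amdgpu_vm_get_pde(): adjust the page-table
 * address for the APU carve-out, mask it, and set the valid bit, so the
 * same helper serves update_page_directory and the flush paths. */
static uint64_t amdgpu_vm_get_pde(struct amdgpu_device_sketch *adev,
				  uint64_t pt_addr)
{
	if (adev->is_apu)
		pt_addr += adev->vram_base_offset;
	return (pt_addr & MC_ADDR_MASK) | AMDGPU_PTE_VALID;
}
```

The point of the refactor is that callers no longer need to know whether
an adjustment happens; they just ask for the PDE value.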

> 
> Regards,
> Christian.
> 
> > +};
> > +
> > +/*
> >    * GPU MC structures, functions & helpers
> >    */
> >   struct amdgpu_mc {
> > @@ -591,6 +617,9 @@ struct amdgpu_mc {
> >   	u64					shared_aperture_end;
> >   	u64					private_aperture_start;
> >   	u64					private_aperture_end;
> > +	/* protects concurrent invalidation */
> > +	spinlock_t		invalidate_lock;
> > +	const struct amdgpu_mc_funcs *mc_funcs;
> >   };
> >
> >   /*
> > @@ -1479,6 +1508,7 @@ struct amdgpu_device {
> >   	struct amdgpu_gart		gart;
> >   	struct amdgpu_dummy_page	dummy_page;
> >   	struct amdgpu_vm_manager	vm_manager;
> > +	struct amdgpu_vmhub             vmhub[AMDGPU_MAX_VMHUBS];
> >
> >   	/* memory management */
> >   	struct amdgpu_mman		mman;
> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> > index df615d7..47a8080 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> > @@ -375,6 +375,16 @@ static bool amdgpu_vm_ring_has_compute_vm_bug(struct amdgpu_ring *ring)
> >   	return false;
> >   }
> >
> > +static u64 amdgpu_vm_adjust_mc_addr(struct amdgpu_device *adev, u64 mc_addr)
> > +{
> > +	u64 addr = mc_addr;
> > +
> > +	if (adev->mc.mc_funcs && adev->mc.mc_funcs->adjust_mc_addr)
> > +		addr = adev->mc.mc_funcs->adjust_mc_addr(adev, addr);
> > +
> > +	return addr;
> > +}
> > +
> >   /**
> >    * amdgpu_vm_flush - hardware flush the vm
> >    *
> > @@ -405,9 +415,10 @@ int amdgpu_vm_flush(struct amdgpu_ring *ring, struct amdgpu_job *job)
> >   	if (ring->funcs->emit_vm_flush && (job->vm_needs_flush ||
> >   	    amdgpu_vm_is_gpu_reset(adev, id))) {
> >   		struct fence *fence;
> > +		u64 pd_addr = amdgpu_vm_adjust_mc_addr(adev, job->vm_pd_addr);
> >
> > -		trace_amdgpu_vm_flush(job->vm_pd_addr, ring->idx, job->vm_id);
> > -		amdgpu_ring_emit_vm_flush(ring, job->vm_id, job->vm_pd_addr);
> > +		trace_amdgpu_vm_flush(pd_addr, ring->idx, job->vm_id);
> > +		amdgpu_ring_emit_vm_flush(ring, job->vm_id, pd_addr);
> >
> >   		r = amdgpu_fence_emit(ring, &fence);
> >   		if (r)
> > @@ -643,15 +654,18 @@ int amdgpu_vm_update_page_directory(struct amdgpu_device *adev,
> >   		    (count == AMDGPU_VM_MAX_UPDATE_SIZE)) {
> >
> >   			if (count) {
> > +				uint64_t pt_addr =
> > +					amdgpu_vm_adjust_mc_addr(adev, last_pt);
> > +
> >   				if (shadow)
> >   					amdgpu_vm_do_set_ptes(&params,
> >   							      last_shadow,
> > -							      last_pt, count,
> > +							      pt_addr, count,
> >   							      incr,
> >   							      AMDGPU_PTE_VALID);
> >
> >   				amdgpu_vm_do_set_ptes(&params, last_pde,
> > -						      last_pt, count, incr,
> > +						      pt_addr, count, incr,
> >   						      AMDGPU_PTE_VALID);
> >   			}
> >
> > @@ -665,11 +679,13 @@ int amdgpu_vm_update_page_directory(struct amdgpu_device *adev,
> >   	}
> >
> >   	if (count) {
> > +		uint64_t pt_addr = amdgpu_vm_adjust_mc_addr(adev, last_pt);
> > +
> >   		if (vm->page_directory->shadow)
> > -			amdgpu_vm_do_set_ptes(&params, last_shadow, last_pt,
> > +			amdgpu_vm_do_set_ptes(&params, last_shadow, pt_addr,
> >   					      count, incr, AMDGPU_PTE_VALID);
> >
> > -		amdgpu_vm_do_set_ptes(&params, last_pde, last_pt,
> > +		amdgpu_vm_do_set_ptes(&params, last_pde, pt_addr,
> >   				      count, incr, AMDGPU_PTE_VALID);
> >   	}
> >
> > diff --git a/drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c b/drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c
> > new file mode 100644
> > index 0000000..1ff019c
> > --- /dev/null
> > +++ b/drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c
> > @@ -0,0 +1,447 @@
> > +/*
> > + * Copyright 2016 Advanced Micro Devices, Inc.
> > + *
> > + * Permission is hereby granted, free of charge, to any person obtaining a
> > + * copy of this software and associated documentation files (the
> "Software"),
> > + * to deal in the Software without restriction, including without limitation
> > + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
> > + * and/or sell copies of the Software, and to permit persons to whom the
> > + * Software is furnished to do so, subject to the following conditions:
> > + *
> > + * The above copyright notice and this permission notice shall be included
> in
> > + * all copies or substantial portions of the Software.
> > + *
> > + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY
> KIND, EXPRESS OR
> > + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
> MERCHANTABILITY,
> > + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN
> NO EVENT SHALL
> > + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM,
> DAMAGES OR
> > + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
> OTHERWISE,
> > + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR
> THE USE OR
> > + * OTHER DEALINGS IN THE SOFTWARE.
> > + *
> > + */
> > +#include "amdgpu.h"
> > +#include "gfxhub_v1_0.h"
> > +
> > +#include "vega10/soc15ip.h"
> > +#include "vega10/GC/gc_9_0_offset.h"
> > +#include "vega10/GC/gc_9_0_sh_mask.h"
> > +#include "vega10/GC/gc_9_0_default.h"
> > +#include "vega10/vega10_enum.h"
> > +
> > +#include "soc15_common.h"
> > +
> > +int gfxhub_v1_0_gart_enable(struct amdgpu_device *adev)
> > +{
> > +	u32 tmp;
> > +	u64 value;
> > +	u32 i;
> > +
> > +	/* Program MC. */
> > +	/* Update configuration */
> > +	WREG32(SOC15_REG_OFFSET(GC, 0, mmMC_VM_SYSTEM_APERTURE_LOW_ADDR),
> > +		adev->mc.vram_start >> 18);
> > +	WREG32(SOC15_REG_OFFSET(GC, 0, mmMC_VM_SYSTEM_APERTURE_HIGH_ADDR),
> > +		adev->mc.vram_end >> 18);
> > +
> > +	value = adev->vram_scratch.gpu_addr - adev->mc.vram_start
> > +		+ adev->vm_manager.vram_base_offset;
> > +	WREG32(SOC15_REG_OFFSET(GC, 0,
> > +		mmMC_VM_SYSTEM_APERTURE_DEFAULT_ADDR_LSB),
> > +		(u32)(value >> 12));
> > +	WREG32(SOC15_REG_OFFSET(GC, 0,
> > +		mmMC_VM_SYSTEM_APERTURE_DEFAULT_ADDR_MSB),
> > +		(u32)(value >> 44));
> > +
> > +	/* Disable AGP. */
> > +	WREG32(SOC15_REG_OFFSET(GC, 0, mmMC_VM_AGP_BASE), 0);
> > +	WREG32(SOC15_REG_OFFSET(GC, 0, mmMC_VM_AGP_TOP), 0);
> > +	WREG32(SOC15_REG_OFFSET(GC, 0, mmMC_VM_AGP_BOT), 0xFFFFFFFF);
> > +
> > +	/* GART Enable. */
> > +
> > +	/* Setup TLB control */
> > +	tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmMC_VM_MX_L1_TLB_CNTL));
> > +	tmp = REG_SET_FIELD(tmp, MC_VM_MX_L1_TLB_CNTL, ENABLE_L1_TLB, 1);
> > +	tmp = REG_SET_FIELD(tmp, MC_VM_MX_L1_TLB_CNTL,
> > +			    SYSTEM_ACCESS_MODE, 3);
> > +	tmp = REG_SET_FIELD(tmp, MC_VM_MX_L1_TLB_CNTL,
> > +			    ENABLE_ADVANCED_DRIVER_MODEL, 1);
> > +	tmp = REG_SET_FIELD(tmp, MC_VM_MX_L1_TLB_CNTL,
> > +			    SYSTEM_APERTURE_UNMAPPED_ACCESS, 0);
> > +	tmp = REG_SET_FIELD(tmp, MC_VM_MX_L1_TLB_CNTL, ECO_BITS, 0);
> > +	tmp = REG_SET_FIELD(tmp, MC_VM_MX_L1_TLB_CNTL,
> > +			    MTYPE, MTYPE_UC); /* XXX for emulation. */
> > +	tmp = REG_SET_FIELD(tmp, MC_VM_MX_L1_TLB_CNTL, ATC_EN, 1);
> > +	WREG32(SOC15_REG_OFFSET(GC, 0, mmMC_VM_MX_L1_TLB_CNTL), tmp);
> > +
> > +	/* Setup L2 cache */
> > +	tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_CNTL));
> > +	tmp = REG_SET_FIELD(tmp, VM_L2_CNTL, ENABLE_L2_CACHE, 1);
> > +	tmp = REG_SET_FIELD(tmp,
> > +				VM_L2_CNTL,
> > +				ENABLE_L2_FRAGMENT_PROCESSING,
> > +				0);
> > +	tmp = REG_SET_FIELD(tmp,
> > +				VM_L2_CNTL,
> > +				L2_PDE0_CACHE_TAG_GENERATION_MODE,
> > +				0);/* XXX for emulation, Refer to closed
> source code.*/
> > +	tmp = REG_SET_FIELD(tmp, VM_L2_CNTL,
> PDE_FAULT_CLASSIFICATION, 1);
> > +	tmp = REG_SET_FIELD(tmp,
> > +				VM_L2_CNTL,
> > +				CONTEXT1_IDENTITY_ACCESS_MODE,
> > +				1);
> > +	tmp = REG_SET_FIELD(tmp,
> > +				VM_L2_CNTL,
> > +				IDENTITY_MODE_FRAGMENT_SIZE,
> > +				0);
> > +	WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_CNTL), tmp);
> > +
> > +	tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_CNTL2));
> > +	tmp = REG_SET_FIELD(tmp, VM_L2_CNTL2,
> INVALIDATE_ALL_L1_TLBS, 1);
> > +	tmp = REG_SET_FIELD(tmp, VM_L2_CNTL2, INVALIDATE_L2_CACHE,
> 1);
> > +	WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_CNTL2), tmp);
> > +
> > +	tmp = mmVM_L2_CNTL3_DEFAULT;
> > +	WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_CNTL3), tmp);
> > +
> > +	tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_CNTL4));
> > +	tmp = REG_SET_FIELD(tmp,
> > +			    VM_L2_CNTL4,
> > +			    VMC_TAP_PDE_REQUEST_PHYSICAL,
> > +			    0);
> > +	tmp = REG_SET_FIELD(tmp,
> > +			    VM_L2_CNTL4,
> > +			    VMC_TAP_PTE_REQUEST_PHYSICAL,
> > +			    0);
> > +	WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_CNTL4), tmp);
> > +
> > +	/* setup context0 */
> > +	WREG32(SOC15_REG_OFFSET(GC, 0,
> > +
> 	mmVM_CONTEXT0_PAGE_TABLE_START_ADDR_LO32),
> > +		(u32)(adev->mc.gtt_start >> 12));
> > +	WREG32(SOC15_REG_OFFSET(GC, 0,
> > +
> 	mmVM_CONTEXT0_PAGE_TABLE_START_ADDR_HI32),
> > +		(u32)(adev->mc.gtt_start >> 44));
> > +
> > +	WREG32(SOC15_REG_OFFSET(GC, 0,
> > +
> 	mmVM_CONTEXT0_PAGE_TABLE_END_ADDR_LO32),
> > +		(u32)(adev->mc.gtt_end >> 12));
> > +	WREG32(SOC15_REG_OFFSET(GC, 0,
> > +
> 	mmVM_CONTEXT0_PAGE_TABLE_END_ADDR_HI32),
> > +		(u32)(adev->mc.gtt_end >> 44));
> > +
> > +	BUG_ON(adev->gart.table_addr & (~0x0000FFFFFFFFF000ULL));
> > +	value = adev->gart.table_addr - adev->mc.vram_start
> > +		+ adev->vm_manager.vram_base_offset;
> > +	value &= 0x0000FFFFFFFFF000ULL;
> > +	value |= 0x1; /*valid bit*/
> > +
> > +	WREG32(SOC15_REG_OFFSET(GC, 0,
> > +
> 	mmVM_CONTEXT0_PAGE_TABLE_BASE_ADDR_LO32),
> > +		(u32)value);
> > +	WREG32(SOC15_REG_OFFSET(GC, 0,
> > +
> 	mmVM_CONTEXT0_PAGE_TABLE_BASE_ADDR_HI32),
> > +		(u32)(value >> 32));
> > +
> > +	WREG32(SOC15_REG_OFFSET(GC, 0,
> > +
> 	mmVM_L2_PROTECTION_FAULT_DEFAULT_ADDR_LO32),
> > +		(u32)(adev->dummy_page.addr >> 12));
> > +	WREG32(SOC15_REG_OFFSET(GC, 0,
> > +
> 	mmVM_L2_PROTECTION_FAULT_DEFAULT_ADDR_HI32),
> > +		(u32)(adev->dummy_page.addr >> 44));
> > +
> > +	tmp = RREG32(SOC15_REG_OFFSET(GC, 0,
> mmVM_L2_PROTECTION_FAULT_CNTL2));
> > +	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL2,
> > +			    ACTIVE_PAGE_MIGRATION_PTE_READ_RETRY,
> > +			    1);
> > +	WREG32(SOC15_REG_OFFSET(GC, 0,
> mmVM_L2_PROTECTION_FAULT_CNTL2), tmp);
> > +
> > +	tmp = RREG32(SOC15_REG_OFFSET(GC, 0,
> mmVM_CONTEXT0_CNTL));
> > +	tmp = REG_SET_FIELD(tmp, VM_CONTEXT0_CNTL,
> ENABLE_CONTEXT, 1);
> > +	tmp = REG_SET_FIELD(tmp, VM_CONTEXT0_CNTL,
> PAGE_TABLE_DEPTH, 0);
> > +	WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_CONTEXT0_CNTL),
> tmp);
> > +
> > +	/* Disable identity aperture.*/
> > +	WREG32(SOC15_REG_OFFSET(GC, 0,
> > +
> 	mmVM_L2_CONTEXT1_IDENTITY_APERTURE_LOW_ADDR_LO32),
> 0XFFFFFFFF);
> > +	WREG32(SOC15_REG_OFFSET(GC, 0,
> > +
> 	mmVM_L2_CONTEXT1_IDENTITY_APERTURE_LOW_ADDR_HI32),
> 0x0000000F);
> > +
> > +	WREG32(SOC15_REG_OFFSET(GC, 0,
> > +
> 	mmVM_L2_CONTEXT1_IDENTITY_APERTURE_HIGH_ADDR_LO32),
> 0);
> > +	WREG32(SOC15_REG_OFFSET(GC, 0,
> > +
> 	mmVM_L2_CONTEXT1_IDENTITY_APERTURE_HIGH_ADDR_HI32), 0);
> > +
> > +	WREG32(SOC15_REG_OFFSET(GC, 0,
> > +		mmVM_L2_CONTEXT_IDENTITY_PHYSICAL_OFFSET_LO32),
> 0);
> > +	WREG32(SOC15_REG_OFFSET(GC, 0,
> > +		mmVM_L2_CONTEXT_IDENTITY_PHYSICAL_OFFSET_HI32),
> 0);
> > +
> > +	for (i = 0; i <= 14; i++) {
> > +		tmp = RREG32(SOC15_REG_OFFSET(GC, 0,
> mmVM_CONTEXT1_CNTL) + i);
> > +		tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
> ENABLE_CONTEXT, 1);
> > +		tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
> PAGE_TABLE_DEPTH, 1);
> > +		tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
> > +
> 	RANGE_PROTECTION_FAULT_ENABLE_DEFAULT, 1);
> > +		tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
> > +
> 	DUMMY_PAGE_PROTECTION_FAULT_ENABLE_DEFAULT, 1);
> > +		tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
> > +
> 	PDE0_PROTECTION_FAULT_ENABLE_DEFAULT, 1);
> > +		tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
> > +
> 	VALID_PROTECTION_FAULT_ENABLE_DEFAULT, 1);
> > +		tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
> > +
> 	READ_PROTECTION_FAULT_ENABLE_DEFAULT, 1);
> > +		tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
> > +
> 	WRITE_PROTECTION_FAULT_ENABLE_DEFAULT, 1);
> > +		tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
> > +
> 	EXECUTE_PROTECTION_FAULT_ENABLE_DEFAULT, 1);
> > +		tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
> > +				PAGE_TABLE_BLOCK_SIZE,
> > +				    amdgpu_vm_block_size - 9);
> > +		WREG32(SOC15_REG_OFFSET(GC, 0,
> mmVM_CONTEXT1_CNTL) + i, tmp);
> > +		WREG32(SOC15_REG_OFFSET(GC, 0,
> mmVM_CONTEXT1_PAGE_TABLE_START_ADDR_LO32) + i*2, 0);
> > +		WREG32(SOC15_REG_OFFSET(GC, 0,
> mmVM_CONTEXT1_PAGE_TABLE_START_ADDR_HI32) + i*2, 0);
> > +		WREG32(SOC15_REG_OFFSET(GC, 0,
> mmVM_CONTEXT1_PAGE_TABLE_END_ADDR_LO32) + i*2,
> > +				adev->vm_manager.max_pfn - 1);
> > +		WREG32(SOC15_REG_OFFSET(GC, 0,
> mmVM_CONTEXT1_PAGE_TABLE_END_ADDR_HI32) + i*2, 0);
> > +	}
> > +
> > +
> > +	return 0;
> > +}
> > +
> > +void gfxhub_v1_0_gart_disable(struct amdgpu_device *adev)
> > +{
> > +	u32 tmp;
> > +	u32 i;
> > +
> > +	/* Disable all tables */
> > +	for (i = 0; i < 16; i++)
> > +		WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_CONTEXT0_CNTL) + i, 0);
> > +
> > +	/* Setup TLB control */
> > +	tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmMC_VM_MX_L1_TLB_CNTL));
> > +	tmp = REG_SET_FIELD(tmp, MC_VM_MX_L1_TLB_CNTL, ENABLE_L1_TLB, 0);
> > +	tmp = REG_SET_FIELD(tmp, MC_VM_MX_L1_TLB_CNTL,
> > +			    ENABLE_ADVANCED_DRIVER_MODEL, 0);
> > +	WREG32(SOC15_REG_OFFSET(GC, 0, mmMC_VM_MX_L1_TLB_CNTL), tmp);
> > +
> > +	/* Setup L2 cache */
> > +	tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_CNTL));
> > +	tmp = REG_SET_FIELD(tmp, VM_L2_CNTL, ENABLE_L2_CACHE, 0);
> > +	WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_CNTL), tmp);
> > +	WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_CNTL3), 0);
> > +}
> > +
> > +/**
> > + * gfxhub_v1_0_set_fault_enable_default - update GART/VM fault
> handling
> > + *
> > + * @adev: amdgpu_device pointer
> > + * @value: true redirects VM faults to the default page
> > + */
> > +void gfxhub_v1_0_set_fault_enable_default(struct amdgpu_device *adev,
> > +					  bool value)
> > +{
> > +	u32 tmp;
> > +
> > +	tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_PROTECTION_FAULT_CNTL));
> > +	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
> > +			RANGE_PROTECTION_FAULT_ENABLE_DEFAULT, value);
> > +	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
> > +			PDE0_PROTECTION_FAULT_ENABLE_DEFAULT, value);
> > +	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
> > +			PDE1_PROTECTION_FAULT_ENABLE_DEFAULT, value);
> > +	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
> > +			PDE2_PROTECTION_FAULT_ENABLE_DEFAULT, value);
> > +	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
> > +			TRANSLATE_FURTHER_PROTECTION_FAULT_ENABLE_DEFAULT, value);
> > +	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
> > +			NACK_PROTECTION_FAULT_ENABLE_DEFAULT, value);
> > +	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
> > +			DUMMY_PAGE_PROTECTION_FAULT_ENABLE_DEFAULT, value);
> > +	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
> > +			VALID_PROTECTION_FAULT_ENABLE_DEFAULT, value);
> > +	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
> > +			READ_PROTECTION_FAULT_ENABLE_DEFAULT, value);
> > +	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
> > +			WRITE_PROTECTION_FAULT_ENABLE_DEFAULT, value);
> > +	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
> > +			EXECUTE_PROTECTION_FAULT_ENABLE_DEFAULT, value);
> > +	WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_PROTECTION_FAULT_CNTL), tmp);
> > +}
> > +
> > +static uint32_t gfxhub_v1_0_get_invalidate_req(unsigned int vm_id)
> > +{
> > +	u32 req = 0;
> > +
> > +	/* invalidate using legacy mode on vm_id */
> > +	req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ,
> > +			    PER_VMID_INVALIDATE_REQ, 1 << vm_id);
> > +	req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ, FLUSH_TYPE, 0);
> > +	req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ, INVALIDATE_L2_PTES, 1);
> > +	req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ, INVALIDATE_L2_PDE0, 1);
> > +	req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ, INVALIDATE_L2_PDE1, 1);
> > +	req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ, INVALIDATE_L2_PDE2, 1);
> > +	req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ, INVALIDATE_L1_PTES, 1);
> > +	req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ,
> > +			    CLEAR_PROTECTION_FAULT_STATUS_ADDR, 0);
> > +
> > +	return req;
> > +}
> > +
> > +static uint32_t gfxhub_v1_0_get_vm_protection_bits(void)
> > +{
> > +	return (VM_CONTEXT1_CNTL__RANGE_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK |
> > +		VM_CONTEXT1_CNTL__DUMMY_PAGE_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK |
> > +		VM_CONTEXT1_CNTL__PDE0_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK |
> > +		VM_CONTEXT1_CNTL__VALID_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK |
> > +		VM_CONTEXT1_CNTL__READ_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK |
> > +		VM_CONTEXT1_CNTL__WRITE_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK |
> > +		VM_CONTEXT1_CNTL__EXECUTE_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK);
> > +}
> > +
> > +static int gfxhub_v1_0_early_init(void *handle)
> > +{
> > +	return 0;
> > +}
> > +
> > +static int gfxhub_v1_0_late_init(void *handle)
> > +{
> > +	return 0;
> > +}
> > +
> > +static int gfxhub_v1_0_sw_init(void *handle)
> > +{
> > +	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
> > +	struct amdgpu_vmhub *hub = &adev->vmhub[AMDGPU_GFXHUB];
> > +
> > +	hub->ctx0_ptb_addr_lo32 =
> > +		SOC15_REG_OFFSET(GC, 0, mmVM_CONTEXT0_PAGE_TABLE_BASE_ADDR_LO32);
> > +	hub->ctx0_ptb_addr_hi32 =
> > +		SOC15_REG_OFFSET(GC, 0, mmVM_CONTEXT0_PAGE_TABLE_BASE_ADDR_HI32);
> > +	hub->vm_inv_eng0_req =
> > +		SOC15_REG_OFFSET(GC, 0, mmVM_INVALIDATE_ENG0_REQ);
> > +	hub->vm_inv_eng0_ack =
> > +		SOC15_REG_OFFSET(GC, 0, mmVM_INVALIDATE_ENG0_ACK);
> > +	hub->vm_context0_cntl =
> > +		SOC15_REG_OFFSET(GC, 0, mmVM_CONTEXT0_CNTL);
> > +	hub->vm_l2_pro_fault_status =
> > +		SOC15_REG_OFFSET(GC, 0, mmVM_L2_PROTECTION_FAULT_STATUS);
> > +	hub->vm_l2_pro_fault_cntl =
> > +		SOC15_REG_OFFSET(GC, 0, mmVM_L2_PROTECTION_FAULT_CNTL);
> > +
> > +	hub->get_invalidate_req = gfxhub_v1_0_get_invalidate_req;
> > +	hub->get_vm_protection_bits = gfxhub_v1_0_get_vm_protection_bits;
> > +
> > +	return 0;
> > +}
> > +
> > +static int gfxhub_v1_0_sw_fini(void *handle)
> > +{
> > +	return 0;
> > +}
> > +
> > +static int gfxhub_v1_0_hw_init(void *handle)
> > +{
> > +	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
> > +	unsigned i;
> > +
> > +	for (i = 0; i < 18; ++i) {
> > +		WREG32(SOC15_REG_OFFSET(GC, 0,
> > +			mmVM_INVALIDATE_ENG0_ADDR_RANGE_LO32) +
> > +		       2 * i, 0xffffffff);
> > +		WREG32(SOC15_REG_OFFSET(GC, 0,
> > +			mmVM_INVALIDATE_ENG0_ADDR_RANGE_HI32) +
> > +		       2 * i, 0x1f);
> > +	}
> > +
> > +	return 0;
> > +}
> > +
> > +static int gfxhub_v1_0_hw_fini(void *handle)
> > +{
> > +	return 0;
> > +}
> > +
> > +static int gfxhub_v1_0_suspend(void *handle)
> > +{
> > +	return 0;
> > +}
> > +
> > +static int gfxhub_v1_0_resume(void *handle)
> > +{
> > +	return 0;
> > +}
> > +
> > +static bool gfxhub_v1_0_is_idle(void *handle)
> > +{
> > +	return true;
> > +}
> > +
> > +static int gfxhub_v1_0_wait_for_idle(void *handle)
> > +{
> > +	return 0;
> > +}
> > +
> > +static int gfxhub_v1_0_soft_reset(void *handle)
> > +{
> > +	return 0;
> > +}
> > +
> > +static int gfxhub_v1_0_set_clockgating_state(void *handle,
> > +					  enum amd_clockgating_state state)
> > +{
> > +	return 0;
> > +}
> > +
> > +static int gfxhub_v1_0_set_powergating_state(void *handle,
> > +					  enum amd_powergating_state
> state)
> > +{
> > +	return 0;
> > +}
> > +
> > +const struct amd_ip_funcs gfxhub_v1_0_ip_funcs = {
> > +	.name = "gfxhub_v1_0",
> > +	.early_init = gfxhub_v1_0_early_init,
> > +	.late_init = gfxhub_v1_0_late_init,
> > +	.sw_init = gfxhub_v1_0_sw_init,
> > +	.sw_fini = gfxhub_v1_0_sw_fini,
> > +	.hw_init = gfxhub_v1_0_hw_init,
> > +	.hw_fini = gfxhub_v1_0_hw_fini,
> > +	.suspend = gfxhub_v1_0_suspend,
> > +	.resume = gfxhub_v1_0_resume,
> > +	.is_idle = gfxhub_v1_0_is_idle,
> > +	.wait_for_idle = gfxhub_v1_0_wait_for_idle,
> > +	.soft_reset = gfxhub_v1_0_soft_reset,
> > +	.set_clockgating_state = gfxhub_v1_0_set_clockgating_state,
> > +	.set_powergating_state = gfxhub_v1_0_set_powergating_state,
> > +};
> > +
> > +const struct amdgpu_ip_block_version gfxhub_v1_0_ip_block =
> > +{
> > +	.type = AMD_IP_BLOCK_TYPE_GFXHUB,
> > +	.major = 1,
> > +	.minor = 0,
> > +	.rev = 0,
> > +	.funcs = &gfxhub_v1_0_ip_funcs,
> > +};
> > diff --git a/drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.h b/drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.h
> > new file mode 100644
> > index 0000000..5129a8f
> > --- /dev/null
> > +++ b/drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.h
> > @@ -0,0 +1,35 @@
> > +/*
> > + * Copyright 2016 Advanced Micro Devices, Inc.
> > + *
> > + * Permission is hereby granted, free of charge, to any person obtaining a
> > + * copy of this software and associated documentation files (the
> "Software"),
> > + * to deal in the Software without restriction, including without limitation
> > + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
> > + * and/or sell copies of the Software, and to permit persons to whom the
> > + * Software is furnished to do so, subject to the following conditions:
> > + *
> > + * The above copyright notice and this permission notice shall be included
> in
> > + * all copies or substantial portions of the Software.
> > + *
> > + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY
> KIND, EXPRESS OR
> > + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
> MERCHANTABILITY,
> > + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN
> NO EVENT SHALL
> > + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM,
> DAMAGES OR
> > + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
> OTHERWISE,
> > + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR
> THE USE OR
> > + * OTHER DEALINGS IN THE SOFTWARE.
> > + *
> > + */
> > +
> > +#ifndef __GFXHUB_V1_0_H__
> > +#define __GFXHUB_V1_0_H__
> > +
> > +int gfxhub_v1_0_gart_enable(struct amdgpu_device *adev);
> > +void gfxhub_v1_0_gart_disable(struct amdgpu_device *adev);
> > +void gfxhub_v1_0_set_fault_enable_default(struct amdgpu_device
> *adev,
> > +					  bool value);
> > +
> > +extern const struct amd_ip_funcs gfxhub_v1_0_ip_funcs;
> > +extern const struct amdgpu_ip_block_version gfxhub_v1_0_ip_block;
> > +
> > +#endif
> > diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
> > new file mode 100644
> > index 0000000..5cf0fc3
> > --- /dev/null
> > +++ b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
> > @@ -0,0 +1,826 @@
> > +/*
> > + * Copyright 2016 Advanced Micro Devices, Inc.
> > + *
> > + * Permission is hereby granted, free of charge, to any person obtaining a
> > + * copy of this software and associated documentation files (the
> "Software"),
> > + * to deal in the Software without restriction, including without limitation
> > + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
> > + * and/or sell copies of the Software, and to permit persons to whom the
> > + * Software is furnished to do so, subject to the following conditions:
> > + *
> > + * The above copyright notice and this permission notice shall be included
> in
> > + * all copies or substantial portions of the Software.
> > + *
> > + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY
> KIND, EXPRESS OR
> > + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
> MERCHANTABILITY,
> > + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN
> NO EVENT SHALL
> > + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM,
> DAMAGES OR
> > + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
> OTHERWISE,
> > + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR
> THE USE OR
> > + * OTHER DEALINGS IN THE SOFTWARE.
> > + *
> > + */
> > +#include <linux/firmware.h>
> > +#include "amdgpu.h"
> > +#include "gmc_v9_0.h"
> > +
> > +#include "vega10/soc15ip.h"
> > +#include "vega10/HDP/hdp_4_0_offset.h"
> > +#include "vega10/HDP/hdp_4_0_sh_mask.h"
> > +#include "vega10/GC/gc_9_0_sh_mask.h"
> > +#include "vega10/vega10_enum.h"
> > +
> > +#include "soc15_common.h"
> > +
> > +#include "nbio_v6_1.h"
> > +#include "gfxhub_v1_0.h"
> > +#include "mmhub_v1_0.h"
> > +
> > +#define mmDF_CS_AON0_DramBaseAddress0				0x0044
> > +#define mmDF_CS_AON0_DramBaseAddress0_BASE_IDX			0
> > +//DF_CS_AON0_DramBaseAddress0
> > +#define DF_CS_AON0_DramBaseAddress0__AddrRngVal__SHIFT		0x0
> > +#define DF_CS_AON0_DramBaseAddress0__LgcyMmioHoleEn__SHIFT	0x1
> > +#define DF_CS_AON0_DramBaseAddress0__IntLvNumChan__SHIFT	0x4
> > +#define DF_CS_AON0_DramBaseAddress0__IntLvAddrSel__SHIFT	0x8
> > +#define DF_CS_AON0_DramBaseAddress0__DramBaseAddr__SHIFT	0xc
> > +#define DF_CS_AON0_DramBaseAddress0__AddrRngVal_MASK		0x00000001L
> > +#define DF_CS_AON0_DramBaseAddress0__LgcyMmioHoleEn_MASK	0x00000002L
> > +#define DF_CS_AON0_DramBaseAddress0__IntLvNumChan_MASK		0x000000F0L
> > +#define DF_CS_AON0_DramBaseAddress0__IntLvAddrSel_MASK		0x00000700L
> > +#define DF_CS_AON0_DramBaseAddress0__DramBaseAddr_MASK		0xFFFFF000L
> > +
> > +/* XXX Move this macro to VEGA10 header file, which is like vid.h for VI.*/
> > +#define AMDGPU_NUM_OF_VMIDS			8
> > +
> > +static const u32 golden_settings_vega10_hdp[] =
> > +{
> > +	0xf64, 0x0fffffff, 0x00000000,
> > +	0xf65, 0x0fffffff, 0x00000000,
> > +	0xf66, 0x0fffffff, 0x00000000,
> > +	0xf67, 0x0fffffff, 0x00000000,
> > +	0xf68, 0x0fffffff, 0x00000000,
> > +	0xf6a, 0x0fffffff, 0x00000000,
> > +	0xf6b, 0x0fffffff, 0x00000000,
> > +	0xf6c, 0x0fffffff, 0x00000000,
> > +	0xf6d, 0x0fffffff, 0x00000000,
> > +	0xf6e, 0x0fffffff, 0x00000000,
> > +};
> > +
> > +static int gmc_v9_0_vm_fault_interrupt_state(struct amdgpu_device
> *adev,
> > +					struct amdgpu_irq_src *src,
> > +					unsigned type,
> > +					enum amdgpu_interrupt_state state)
> > +{
> > +	struct amdgpu_vmhub *hub;
> > +	u32 tmp, reg, bits, i;
> > +
> > +	switch (state) {
> > +	case AMDGPU_IRQ_STATE_DISABLE:
> > +		/* MM HUB */
> > +		hub = &adev->vmhub[AMDGPU_MMHUB];
> > +		bits = hub->get_vm_protection_bits();
> > +		for (i = 0; i < 16; i++) {
> > +			reg = hub->vm_context0_cntl + i;
> > +			tmp = RREG32(reg);
> > +			tmp &= ~bits;
> > +			WREG32(reg, tmp);
> > +		}
> > +
> > +		/* GFX HUB */
> > +		hub = &adev->vmhub[AMDGPU_GFXHUB];
> > +		bits = hub->get_vm_protection_bits();
> > +		for (i = 0; i < 16; i++) {
> > +			reg = hub->vm_context0_cntl + i;
> > +			tmp = RREG32(reg);
> > +			tmp &= ~bits;
> > +			WREG32(reg, tmp);
> > +		}
> > +		break;
> > +	case AMDGPU_IRQ_STATE_ENABLE:
> > +		/* MM HUB */
> > +		hub = &adev->vmhub[AMDGPU_MMHUB];
> > +		bits = hub->get_vm_protection_bits();
> > +		for (i = 0; i < 16; i++) {
> > +			reg = hub->vm_context0_cntl + i;
> > +			tmp = RREG32(reg);
> > +			tmp |= bits;
> > +			WREG32(reg, tmp);
> > +		}
> > +
> > +		/* GFX HUB */
> > +		hub = &adev->vmhub[AMDGPU_GFXHUB];
> > +		bits = hub->get_vm_protection_bits();
> > +		for (i = 0; i < 16; i++) {
> > +			reg = hub->vm_context0_cntl + i;
> > +			tmp = RREG32(reg);
> > +			tmp |= bits;
> > +			WREG32(reg, tmp);
> > +		}
> > +		break;
> > +	default:
> > +		break;
> > +	}
> > +
> > +	return 0;
> > +}
> > +
> > +static int gmc_v9_0_process_interrupt(struct amdgpu_device *adev,
> > +				struct amdgpu_irq_src *source,
> > +				struct amdgpu_iv_entry *entry)
> > +{
> > +	struct amdgpu_vmhub *gfxhub = &adev->vmhub[AMDGPU_GFXHUB];
> > +	struct amdgpu_vmhub *mmhub = &adev->vmhub[AMDGPU_MMHUB];
> > +	uint32_t status;
> > +	u64 addr;
> > +
> > +	addr = (u64)entry->src_data[0] << 12;
> > +	addr |= ((u64)entry->src_data[1] & 0xf) << 44;
> > +
> > +	if (entry->vm_id_src) {
> > +		status = RREG32(mmhub->vm_l2_pro_fault_status);
> > +		WREG32_P(mmhub->vm_l2_pro_fault_cntl, 1, ~1);
> > +	} else {
> > +		status = RREG32(gfxhub->vm_l2_pro_fault_status);
> > +		WREG32_P(gfxhub->vm_l2_pro_fault_cntl, 1, ~1);
> > +	}
> > +
> > +	DRM_ERROR("[%s]VMC page fault (src_id:%u ring:%u vm_id:%u pas_id:%u) "
> > +		  "at page 0x%016llx from %d\n"
> > +		  "VM_L2_PROTECTION_FAULT_STATUS:0x%08X\n",
> > +		  entry->vm_id_src ? "mmhub" : "gfxhub",
> > +		  entry->src_id, entry->ring_id, entry->vm_id, entry->pas_id,
> > +		  addr, entry->client_id, status);
> > +
> > +	return 0;
> > +}
> > +
> > +static const struct amdgpu_irq_src_funcs gmc_v9_0_irq_funcs = {
> > +	.set = gmc_v9_0_vm_fault_interrupt_state,
> > +	.process = gmc_v9_0_process_interrupt,
> > +};
> > +
> > +static void gmc_v9_0_set_irq_funcs(struct amdgpu_device *adev)
> > +{
> > +	adev->mc.vm_fault.num_types = 1;
> > +	adev->mc.vm_fault.funcs = &gmc_v9_0_irq_funcs;
> > +}
> > +
> > +/*
> > + * GART
> > + * VMID 0 is the physical GPU addresses as used by the kernel.
> > + * VMIDs 1-15 are used for userspace clients and are handled
> > + * by the amdgpu vm/hsa code.
> > + */
> > +
> > +/**
> > + * gmc_v9_0_gart_flush_gpu_tlb - gart tlb flush callback
> > + *
> > + * @adev: amdgpu_device pointer
> > + * @vmid: vm instance to flush
> > + *
> > + * Flush the TLB for the requested page table.
> > + */
> > +static void gmc_v9_0_gart_flush_gpu_tlb(struct amdgpu_device *adev,
> > +					uint32_t vmid)
> > +{
> > +	/* Use register 17 for GART */
> > +	const unsigned eng = 17;
> > +	unsigned i, j;
> > +
> > +	/* flush hdp cache */
> > +	nbio_v6_1_hdp_flush(adev);
> > +
> > +	spin_lock(&adev->mc.invalidate_lock);
> > +
> > +	for (i = 0; i < AMDGPU_MAX_VMHUBS; ++i) {
> > +		struct amdgpu_vmhub *hub = &adev->vmhub[i];
> > +		u32 tmp = hub->get_invalidate_req(vmid);
> > +
> > +		WREG32(hub->vm_inv_eng0_req + eng, tmp);
> > +
> > +		/* Busy wait for ACK.*/
> > +		for (j = 0; j < 100; j++) {
> > +			tmp = RREG32(hub->vm_inv_eng0_ack + eng);
> > +			tmp &= 1 << vmid;
> > +			if (tmp)
> > +				break;
> > +			cpu_relax();
> > +		}
> > +		if (j < 100)
> > +			continue;
> > +
> > +		/* Wait for ACK with a delay.*/
> > +		for (j = 0; j < adev->usec_timeout; j++) {
> > +			tmp = RREG32(hub->vm_inv_eng0_ack + eng);
> > +			tmp &= 1 << vmid;
> > +			if (tmp)
> > +				break;
> > +			udelay(1);
> > +		}
> > +		if (j < adev->usec_timeout)
> > +			continue;
> > +
> > +		DRM_ERROR("Timeout waiting for VM flush ACK!\n");
> > +	}
> > +
> > +	spin_unlock(&adev->mc.invalidate_lock);
> > +}
> > +
> > +/**
> > + * gmc_v9_0_gart_set_pte_pde - update the page tables using MMIO
> > + *
> > + * @adev: amdgpu_device pointer
> > + * @cpu_pt_addr: cpu address of the page table
> > + * @gpu_page_idx: entry in the page table to update
> > + * @addr: dst addr to write into pte/pde
> > + * @flags: access flags
> > + *
> > + * Update the page tables using the CPU.
> > + */
> > +static int gmc_v9_0_gart_set_pte_pde(struct amdgpu_device *adev,
> > +					void *cpu_pt_addr,
> > +					uint32_t gpu_page_idx,
> > +					uint64_t addr,
> > +					uint64_t flags)
> > +{
> > +	void __iomem *ptr = (void *)cpu_pt_addr;
> > +	uint64_t value;
> > +
> > +	/*
> > +	 * PTE format on VEGA 10:
> > +	 * 63:59 reserved
> > +	 * 58:57 mtype
> > +	 * 56 F
> > +	 * 55 L
> > +	 * 54 P
> > +	 * 53 SW
> > +	 * 52 T
> > +	 * 50:48 reserved
> > +	 * 47:12 4k physical page base address
> > +	 * 11:7 fragment
> > +	 * 6 write
> > +	 * 5 read
> > +	 * 4 exe
> > +	 * 3 Z
> > +	 * 2 snooped
> > +	 * 1 system
> > +	 * 0 valid
> > +	 *
> > +	 * PDE format on VEGA 10:
> > +	 * 63:59 block fragment size
> > +	 * 58:55 reserved
> > +	 * 54 P
> > +	 * 53:48 reserved
> > +	 * 47:6 physical base address of PD or PTE
> > +	 * 5:3 reserved
> > +	 * 2 C
> > +	 * 1 system
> > +	 * 0 valid
> > +	 */
> > +
> > +	/*
> > +	 * The following is for PTE only. GART does not have PDEs.
> > +	 */
> > +	value = addr & 0x0000FFFFFFFFF000ULL;
> > +	value |= flags;
> > +	writeq(value, ptr + (gpu_page_idx * 8));
> > +	return 0;
> > +}
> > +
> > +static uint64_t gmc_v9_0_get_vm_pte_flags(struct amdgpu_device *adev,
> > +						uint32_t flags)
> > +{
> > +	uint64_t pte_flag = 0;
> > +
> > +	if (flags & AMDGPU_VM_PAGE_EXECUTABLE)
> > +		pte_flag |= AMDGPU_PTE_EXECUTABLE;
> > +	if (flags & AMDGPU_VM_PAGE_READABLE)
> > +		pte_flag |= AMDGPU_PTE_READABLE;
> > +	if (flags & AMDGPU_VM_PAGE_WRITEABLE)
> > +		pte_flag |= AMDGPU_PTE_WRITEABLE;
> > +
> > +	switch (flags & AMDGPU_VM_MTYPE_MASK) {
> > +	case AMDGPU_VM_MTYPE_DEFAULT:
> > +		pte_flag |= AMDGPU_PTE_MTYPE(MTYPE_NC);
> > +		break;
> > +	case AMDGPU_VM_MTYPE_NC:
> > +		pte_flag |= AMDGPU_PTE_MTYPE(MTYPE_NC);
> > +		break;
> > +	case AMDGPU_VM_MTYPE_WC:
> > +		pte_flag |= AMDGPU_PTE_MTYPE(MTYPE_WC);
> > +		break;
> > +	case AMDGPU_VM_MTYPE_CC:
> > +		pte_flag |= AMDGPU_PTE_MTYPE(MTYPE_CC);
> > +		break;
> > +	case AMDGPU_VM_MTYPE_UC:
> > +		pte_flag |= AMDGPU_PTE_MTYPE(MTYPE_UC);
> > +		break;
> > +	default:
> > +		pte_flag |= AMDGPU_PTE_MTYPE(MTYPE_NC);
> > +		break;
> > +	}
> > +
> > +	if (flags & AMDGPU_VM_PAGE_PRT)
> > +		pte_flag |= AMDGPU_PTE_PRT;
> > +
> > +	return pte_flag;
> > +}
> > +
> > +static const struct amdgpu_gart_funcs gmc_v9_0_gart_funcs = {
> > +	.flush_gpu_tlb = gmc_v9_0_gart_flush_gpu_tlb,
> > +	.set_pte_pde = gmc_v9_0_gart_set_pte_pde,
> > +	.get_vm_pte_flags = gmc_v9_0_get_vm_pte_flags
> > +};
> > +
> > +static void gmc_v9_0_set_gart_funcs(struct amdgpu_device *adev)
> > +{
> > +	if (adev->gart.gart_funcs == NULL)
> > +		adev->gart.gart_funcs = &gmc_v9_0_gart_funcs;
> > +}
> > +
> > +static u64 gmc_v9_0_adjust_mc_addr(struct amdgpu_device *adev, u64 mc_addr)
> > +{
> > +	return adev->vm_manager.vram_base_offset + mc_addr - adev->mc.vram_start;
> > +}
> > +
> > +static const struct amdgpu_mc_funcs gmc_v9_0_mc_funcs = {
> > +	.adjust_mc_addr = gmc_v9_0_adjust_mc_addr,
> > +};
> > +
> > +static void gmc_v9_0_set_mc_funcs(struct amdgpu_device *adev)
> > +{
> > +	adev->mc.mc_funcs = &gmc_v9_0_mc_funcs;
> > +}
> > +
> > +static int gmc_v9_0_early_init(void *handle)
> > +{
> > +	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
> > +
> > +	gmc_v9_0_set_gart_funcs(adev);
> > +	gmc_v9_0_set_mc_funcs(adev);
> > +	gmc_v9_0_set_irq_funcs(adev);
> > +
> > +	return 0;
> > +}
> > +
> > +static int gmc_v9_0_late_init(void *handle)
> > +{
> > +	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
> > +	return amdgpu_irq_get(adev, &adev->mc.vm_fault, 0);
> > +}
> > +
> > +static void gmc_v9_0_vram_gtt_location(struct amdgpu_device *adev,
> > +					struct amdgpu_mc *mc)
> > +{
> > +	u64 base = mmhub_v1_0_get_fb_location(adev);
> > +	amdgpu_vram_location(adev, &adev->mc, base);
> > +	adev->mc.gtt_base_align = 0;
> > +	amdgpu_gtt_location(adev, mc);
> > +}
> > +
> > +/**
> > + * gmc_v9_0_mc_init - initialize the memory controller driver params
> > + *
> > + * @adev: amdgpu_device pointer
> > + *
> > + * Look up the amount of vram, vram width, and decide how to place
> > + * vram and gart within the GPU's physical address space.
> > + * Returns 0 for success.
> > + */
> > +static int gmc_v9_0_mc_init(struct amdgpu_device *adev)
> > +{
> > +	u32 tmp;
> > +	int chansize, numchan;
> > +
> > +	/* hbm memory channel size */
> > +	chansize = 128;
> > +
> > +	tmp = RREG32(SOC15_REG_OFFSET(DF, 0, mmDF_CS_AON0_DramBaseAddress0));
> > +	tmp &= DF_CS_AON0_DramBaseAddress0__IntLvNumChan_MASK;
> > +	tmp >>= DF_CS_AON0_DramBaseAddress0__IntLvNumChan__SHIFT;
> > +	switch (tmp) {
> > +	case 0:
> > +	default:
> > +		numchan = 1;
> > +		break;
> > +	case 1:
> > +		numchan = 2;
> > +		break;
> > +	case 2:
> > +		numchan = 0;
> > +		break;
> > +	case 3:
> > +		numchan = 4;
> > +		break;
> > +	case 4:
> > +		numchan = 0;
> > +		break;
> > +	case 5:
> > +		numchan = 8;
> > +		break;
> > +	case 6:
> > +		numchan = 0;
> > +		break;
> > +	case 7:
> > +		numchan = 16;
> > +		break;
> > +	case 8:
> > +		numchan = 2;
> > +		break;
> > +	}
> > +	adev->mc.vram_width = numchan * chansize;
> > +
> > +	/* Could the aperture size report 0? */
> > +	adev->mc.aper_base = pci_resource_start(adev->pdev, 0);
> > +	adev->mc.aper_size = pci_resource_len(adev->pdev, 0);
> > +	/* size in MB on si */
> > +	adev->mc.mc_vram_size =
> > +		nbio_v6_1_get_memsize(adev) * 1024ULL * 1024ULL;
> > +	adev->mc.real_vram_size = adev->mc.mc_vram_size;
> > +	adev->mc.visible_vram_size = adev->mc.aper_size;
> > +
> > +	/* In case the PCI BAR is larger than the actual amount of vram */
> > +	if (adev->mc.visible_vram_size > adev->mc.real_vram_size)
> > +		adev->mc.visible_vram_size = adev->mc.real_vram_size;
> > +
> > +	/* unless the user has overridden it, set the gart
> > +	 * size equal to 1024 MB or the vram size, whichever is larger.
> > +	 */
> > +	if (amdgpu_gart_size == -1)
> > +		adev->mc.gtt_size = max((1024ULL << 20), adev->mc.mc_vram_size);
> > +	else
> > +		adev->mc.gtt_size = (uint64_t)amdgpu_gart_size << 20;
> > +
> > +	gmc_v9_0_vram_gtt_location(adev, &adev->mc);
> > +
> > +	return 0;
> > +}
> > +
> > +static int gmc_v9_0_gart_init(struct amdgpu_device *adev)
> > +{
> > +	int r;
> > +
> > +	if (adev->gart.robj) {
> > +		WARN(1, "VEGA10 PCIE GART already initialized\n");
> > +		return 0;
> > +	}
> > +	/* Initialize common gart structure */
> > +	r = amdgpu_gart_init(adev);
> > +	if (r)
> > +		return r;
> > +	adev->gart.table_size = adev->gart.num_gpu_pages * 8;
> > +	adev->gart.gart_pte_flags = AMDGPU_PTE_MTYPE(MTYPE_UC) |
> > +				 AMDGPU_PTE_EXECUTABLE;
> > +	return amdgpu_gart_table_vram_alloc(adev);
> > +}
> > +
> > +/*
> > + * vm
> > + * VMID 0 is the physical GPU addresses as used by the kernel.
> > + * VMIDs 1-15 are used for userspace clients and are handled
> > + * by the amdgpu vm/hsa code.
> > + */
> > +/**
> > + * gmc_v9_0_vm_init - vm init callback
> > + *
> > + * @adev: amdgpu_device pointer
> > + *
> > + * Inits vega10 specific vm parameters (number of VMs, base of vram for
> > + * VMIDs 1-15) (vega10).
> > + * Returns 0 for success.
> > + */
> > +static int gmc_v9_0_vm_init(struct amdgpu_device *adev)
> > +{
> > +	/*
> > +	 * number of VMs
> > +	 * VMID 0 is reserved for System
> > +	 * amdgpu graphics/compute will use VMIDs 1-7
> > +	 * amdkfd will use VMIDs 8-15
> > +	 */
> > +	adev->vm_manager.num_ids = AMDGPU_NUM_OF_VMIDS;
> > +	amdgpu_vm_manager_init(adev);
> > +
> > +	/* base offset of vram pages */
> > +	/*XXX This value is not zero for APU*/
> > +	adev->vm_manager.vram_base_offset = 0;
> > +
> > +	return 0;
> > +}
> > +
> > +/**
> > + * gmc_v9_0_vm_fini - vm fini callback
> > + *
> > + * @adev: amdgpu_device pointer
> > + *
> > + * Tear down any asic specific VM setup.
> > + */
> > +static void gmc_v9_0_vm_fini(struct amdgpu_device *adev)
> > +{
> > +	return;
> > +}
> > +
> > +static int gmc_v9_0_sw_init(void *handle)
> > +{
> > +	int r;
> > +	int dma_bits;
> > +	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
> > +
> > +	spin_lock_init(&adev->mc.invalidate_lock);
> > +
> > +	if (adev->flags & AMD_IS_APU) {
> > +		adev->mc.vram_type = AMDGPU_VRAM_TYPE_UNKNOWN;
> > +	} else {
> > +		/* XXX Don't know how to get VRAM type yet. */
> > +		adev->mc.vram_type = AMDGPU_VRAM_TYPE_HBM;
> > +	}
> > +
> > +	/* This interrupt is VMC page fault.*/
> > +	r = amdgpu_irq_add_id(adev, AMDGPU_IH_CLIENTID_VMC, 0,
> > +				&adev->mc.vm_fault);
> > +
> > +	if (r)
> > +		return r;
> > +
> > +	/* Adjust VM size here.
> > +	 * Currently default to 64GB ((16 << 20) 4k pages).
> > +	 * Max GPUVM size is 48 bits.
> > +	 */
> > +	adev->vm_manager.max_pfn = amdgpu_vm_size << 18;
> > +
> > +	/* Set the internal MC address mask
> > +	 * This is the max address of the GPU's
> > +	 * internal address space.
> > +	 */
> > +	adev->mc.mc_mask = 0xffffffffffffULL; /* 48 bit MC */
> > +
> > +	/* set DMA mask + need_dma32 flags.
> > +	 * PCIE - can handle 44-bits.
> > +	 * IGP - can handle 44-bits
> > +	 * PCI - dma32 for legacy pci gart, 44 bits on vega10
> > +	 */
> > +	adev->need_dma32 = false;
> > +	dma_bits = adev->need_dma32 ? 32 : 44;
> > +	r = pci_set_dma_mask(adev->pdev, DMA_BIT_MASK(dma_bits));
> > +	if (r) {
> > +		adev->need_dma32 = true;
> > +		dma_bits = 32;
> > +		printk(KERN_WARNING "amdgpu: No suitable DMA available.\n");
> > +	}
> > +	r = pci_set_consistent_dma_mask(adev->pdev, DMA_BIT_MASK(dma_bits));
> > +	if (r) {
> > +		pci_set_consistent_dma_mask(adev->pdev, DMA_BIT_MASK(32));
> > +		printk(KERN_WARNING "amdgpu: No coherent DMA available.\n");
> > +	}
> > +
> > +	r = gmc_v9_0_mc_init(adev);
> > +	if (r)
> > +		return r;
> > +
> > +	/* Memory manager */
> > +	r = amdgpu_bo_init(adev);
> > +	if (r)
> > +		return r;
> > +
> > +	r = gmc_v9_0_gart_init(adev);
> > +	if (r)
> > +		return r;
> > +
> > +	if (!adev->vm_manager.enabled) {
> > +		r = gmc_v9_0_vm_init(adev);
> > +		if (r) {
> > +			dev_err(adev->dev, "vm manager initialization failed (%d).\n", r);
> > +			return r;
> > +		}
> > +		adev->vm_manager.enabled = true;
> > +	}
> > +	return r;
> > +}
> > +
> > +/**
> > + * gmc_v9_0_gart_fini - gart fini callback
> > + *
> > + * @adev: amdgpu_device pointer
> > + *
> > + * Tears down the driver GART/VM setup (vega10).
> > + */
> > +static void gmc_v9_0_gart_fini(struct amdgpu_device *adev)
> > +{
> > +	amdgpu_gart_table_vram_free(adev);
> > +	amdgpu_gart_fini(adev);
> > +}
> > +
> > +static int gmc_v9_0_sw_fini(void *handle)
> > +{
> > +	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
> > +
> > +	if (adev->vm_manager.enabled) {
> > +		amdgpu_vm_manager_fini(adev);
> > +		gmc_v9_0_vm_fini(adev);
> > +		adev->vm_manager.enabled = false;
> > +	}
> > +	gmc_v9_0_gart_fini(adev);
> > +	amdgpu_gem_force_release(adev);
> > +	amdgpu_bo_fini(adev);
> > +
> > +	return 0;
> > +}
> > +
> > +static void gmc_v9_0_init_golden_registers(struct amdgpu_device *adev)
> > +{
> > +	switch (adev->asic_type) {
> > +	case CHIP_VEGA10:
> > +		break;
> > +	default:
> > +		break;
> > +	}
> > +}
> > +
> > +/**
> > + * gmc_v9_0_gart_enable - gart enable
> > + *
> > + * @adev: amdgpu_device pointer
> > + */
> > +static int gmc_v9_0_gart_enable(struct amdgpu_device *adev)
> > +{
> > +	int r;
> > +	bool value;
> > +	u32 tmp;
> > +
> > +	amdgpu_program_register_sequence(adev,
> > +		golden_settings_vega10_hdp,
> > +		(const u32)ARRAY_SIZE(golden_settings_vega10_hdp));
> > +
> > +	if (adev->gart.robj == NULL) {
> > +		dev_err(adev->dev, "No VRAM object for PCIE GART.\n");
> > +		return -EINVAL;
> > +	}
> > +	r = amdgpu_gart_table_vram_pin(adev);
> > +	if (r)
> > +		return r;
> > +
> > +	/* After HDP is initialized, flush HDP.*/
> > +	nbio_v6_1_hdp_flush(adev);
> > +
> > +	r = gfxhub_v1_0_gart_enable(adev);
> > +	if (r)
> > +		return r;
> > +
> > +	r = mmhub_v1_0_gart_enable(adev);
> > +	if (r)
> > +		return r;
> > +
> > +	tmp = RREG32(SOC15_REG_OFFSET(HDP, 0, mmHDP_MISC_CNTL));
> > +	tmp |= HDP_MISC_CNTL__FLUSH_INVALIDATE_CACHE_MASK;
> > +	WREG32(SOC15_REG_OFFSET(HDP, 0, mmHDP_MISC_CNTL), tmp);
> > +
> > +	tmp = RREG32(SOC15_REG_OFFSET(HDP, 0, mmHDP_HOST_PATH_CNTL));
> > +	WREG32(SOC15_REG_OFFSET(HDP, 0, mmHDP_HOST_PATH_CNTL), tmp);
> > +
> > +	if (amdgpu_vm_fault_stop == AMDGPU_VM_FAULT_STOP_ALWAYS)
> > +		value = false;
> > +	else
> > +		value = true;
> > +
> > +	gfxhub_v1_0_set_fault_enable_default(adev, value);
> > +	mmhub_v1_0_set_fault_enable_default(adev, value);
> > +
> > +	gmc_v9_0_gart_flush_gpu_tlb(adev, 0);
> > +
> > +	DRM_INFO("PCIE GART of %uM enabled (table at 0x%016llX).\n",
> > +		 (unsigned)(adev->mc.gtt_size >> 20),
> > +		 (unsigned long long)adev->gart.table_addr);
> > +	adev->gart.ready = true;
> > +	return 0;
> > +}
> > +
> > +static int gmc_v9_0_hw_init(void *handle)
> > +{
> > +	int r;
> > +	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
> > +
> > +	/* The sequence of these two function calls matters.*/
> > +	gmc_v9_0_init_golden_registers(adev);
> > +
> > +	r = gmc_v9_0_gart_enable(adev);
> > +
> > +	return r;
> > +}
> > +
> > +/**
> > + * gmc_v9_0_gart_disable - gart disable
> > + *
> > + * @adev: amdgpu_device pointer
> > + *
> > + * This disables all VM page table.
> > + */
> > +static void gmc_v9_0_gart_disable(struct amdgpu_device *adev)
> > +{
> > +	gfxhub_v1_0_gart_disable(adev);
> > +	mmhub_v1_0_gart_disable(adev);
> > +	amdgpu_gart_table_vram_unpin(adev);
> > +}
> > +
> > +static int gmc_v9_0_hw_fini(void *handle)
> > +{
> > +	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
> > +
> > +	amdgpu_irq_put(adev, &adev->mc.vm_fault, 0);
> > +	gmc_v9_0_gart_disable(adev);
> > +
> > +	return 0;
> > +}
> > +
> > +static int gmc_v9_0_suspend(void *handle)
> > +{
> > +	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
> > +
> > +	if (adev->vm_manager.enabled) {
> > +		gmc_v9_0_vm_fini(adev);
> > +		adev->vm_manager.enabled = false;
> > +	}
> > +	gmc_v9_0_hw_fini(adev);
> > +
> > +	return 0;
> > +}
> > +
> > +static int gmc_v9_0_resume(void *handle)
> > +{
> > +	int r;
> > +	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
> > +
> > +	r = gmc_v9_0_hw_init(adev);
> > +	if (r)
> > +		return r;
> > +
> > +	if (!adev->vm_manager.enabled) {
> > +		r = gmc_v9_0_vm_init(adev);
> > +		if (r) {
> > +			dev_err(adev->dev,
> > +				"vm manager initialization failed (%d).\n", r);
> > +			return r;
> > +		}
> > +		adev->vm_manager.enabled = true;
> > +	}
> > +
> > +	return r;
> > +}
> > +
> > +static bool gmc_v9_0_is_idle(void *handle)
> > +{
> > +	/* MC is always ready in GMC v9.*/
> > +	return true;
> > +}
> > +
> > +static int gmc_v9_0_wait_for_idle(void *handle)
> > +{
> > +	/* There is no need to wait for MC idle in GMC v9.*/
> > +	return 0;
> > +}
> > +
> > +static int gmc_v9_0_soft_reset(void *handle)
> > +{
> > +	/* XXX for emulation.*/
> > +	return 0;
> > +}
> > +
> > +static int gmc_v9_0_set_clockgating_state(void *handle,
> > +					enum amd_clockgating_state state)
> > +{
> > +	return 0;
> > +}
> > +
> > +static int gmc_v9_0_set_powergating_state(void *handle,
> > +					enum amd_powergating_state state)
> > +{
> > +	return 0;
> > +}
> > +
> > +const struct amd_ip_funcs gmc_v9_0_ip_funcs = {
> > +	.name = "gmc_v9_0",
> > +	.early_init = gmc_v9_0_early_init,
> > +	.late_init = gmc_v9_0_late_init,
> > +	.sw_init = gmc_v9_0_sw_init,
> > +	.sw_fini = gmc_v9_0_sw_fini,
> > +	.hw_init = gmc_v9_0_hw_init,
> > +	.hw_fini = gmc_v9_0_hw_fini,
> > +	.suspend = gmc_v9_0_suspend,
> > +	.resume = gmc_v9_0_resume,
> > +	.is_idle = gmc_v9_0_is_idle,
> > +	.wait_for_idle = gmc_v9_0_wait_for_idle,
> > +	.soft_reset = gmc_v9_0_soft_reset,
> > +	.set_clockgating_state = gmc_v9_0_set_clockgating_state,
> > +	.set_powergating_state = gmc_v9_0_set_powergating_state,
> > +};
> > +
> > +const struct amdgpu_ip_block_version gmc_v9_0_ip_block =
> > +{
> > +	.type = AMD_IP_BLOCK_TYPE_GMC,
> > +	.major = 9,
> > +	.minor = 0,
> > +	.rev = 0,
> > +	.funcs = &gmc_v9_0_ip_funcs,
> > +};
> > diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.h b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.h
> > new file mode 100644
> > index 0000000..b030ca5
> > --- /dev/null
> > +++ b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.h
> > @@ -0,0 +1,30 @@
> > +/*
> > + * Copyright 2016 Advanced Micro Devices, Inc.
> > + *
> > + * Permission is hereby granted, free of charge, to any person obtaining a
> > + * copy of this software and associated documentation files (the "Software"),
> > + * to deal in the Software without restriction, including without limitation
> > + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
> > + * and/or sell copies of the Software, and to permit persons to whom the
> > + * Software is furnished to do so, subject to the following conditions:
> > + *
> > + * The above copyright notice and this permission notice shall be included in
> > + * all copies or substantial portions of the Software.
> > + *
> > + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> > + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> > + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
> > + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
> > + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
> > + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
> > + * OTHER DEALINGS IN THE SOFTWARE.
> > + *
> > + */
> > +
> > +#ifndef __GMC_V9_0_H__
> > +#define __GMC_V9_0_H__
> > +
> > +extern const struct amd_ip_funcs gmc_v9_0_ip_funcs;
> > +extern const struct amdgpu_ip_block_version gmc_v9_0_ip_block;
> > +
> > +#endif
> > diff --git a/drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c b/drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c
> > new file mode 100644
> > index 0000000..b1e0e6b
> > --- /dev/null
> > +++ b/drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c
> > @@ -0,0 +1,585 @@
> > +/*
> > + * Copyright 2016 Advanced Micro Devices, Inc.
> > + *
> > + * Permission is hereby granted, free of charge, to any person obtaining a
> > + * copy of this software and associated documentation files (the "Software"),
> > + * to deal in the Software without restriction, including without limitation
> > + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
> > + * and/or sell copies of the Software, and to permit persons to whom the
> > + * Software is furnished to do so, subject to the following conditions:
> > + *
> > + * The above copyright notice and this permission notice shall be included in
> > + * all copies or substantial portions of the Software.
> > + *
> > + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> > + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> > + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
> > + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
> > + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
> > + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
> > + * OTHER DEALINGS IN THE SOFTWARE.
> > + *
> > + */
> > +#include "amdgpu.h"
> > +#include "mmhub_v1_0.h"
> > +
> > +#include "vega10/soc15ip.h"
> > +#include "vega10/MMHUB/mmhub_1_0_offset.h"
> > +#include "vega10/MMHUB/mmhub_1_0_sh_mask.h"
> > +#include "vega10/MMHUB/mmhub_1_0_default.h"
> > +#include "vega10/ATHUB/athub_1_0_offset.h"
> > +#include "vega10/ATHUB/athub_1_0_sh_mask.h"
> > +#include "vega10/ATHUB/athub_1_0_default.h"
> > +#include "vega10/vega10_enum.h"
> > +
> > +#include "soc15_common.h"
> > +
> > +u64 mmhub_v1_0_get_fb_location(struct amdgpu_device *adev)
> > +{
> > +	u64 base = RREG32(SOC15_REG_OFFSET(MMHUB, 0, mmMC_VM_FB_LOCATION_BASE));
> > +
> > +	base &= MC_VM_FB_LOCATION_BASE__FB_BASE_MASK;
> > +	base <<= 24;
> > +
> > +	return base;
> > +}
> > +
> > +int mmhub_v1_0_gart_enable(struct amdgpu_device *adev)
> > +{
> > +	u32 tmp;
> > +	u64 value;
> > +	uint64_t addr;
> > +	u32 i;
> > +
> > +	/* Program MC. */
> > +	/* Update configuration */
> > +	DRM_INFO("%s -- in\n", __func__);
> > +	WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmMC_VM_SYSTEM_APERTURE_LOW_ADDR),
> > +		adev->mc.vram_start >> 18);
> > +	WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmMC_VM_SYSTEM_APERTURE_HIGH_ADDR),
> > +		adev->mc.vram_end >> 18);
> > +	value = adev->vram_scratch.gpu_addr - adev->mc.vram_start +
> > +		adev->vm_manager.vram_base_offset;
> > +	WREG32(SOC15_REG_OFFSET(MMHUB, 0,
> > +		mmMC_VM_SYSTEM_APERTURE_DEFAULT_ADDR_LSB),
> > +		(u32)(value >> 12));
> > +	WREG32(SOC15_REG_OFFSET(MMHUB, 0,
> > +		mmMC_VM_SYSTEM_APERTURE_DEFAULT_ADDR_MSB),
> > +		(u32)(value >> 44));
> > +
> > +	/* Disable AGP. */
> > +	WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmMC_VM_AGP_BASE), 0);
> > +	WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmMC_VM_AGP_TOP), 0);
> > +	WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmMC_VM_AGP_BOT), 0x00FFFFFF);
> > +
> > +	/* GART Enable. */
> > +
> > +	/* Setup TLB control */
> > +	tmp = RREG32(SOC15_REG_OFFSET(MMHUB, 0, mmMC_VM_MX_L1_TLB_CNTL));
> > +	tmp = REG_SET_FIELD(tmp, MC_VM_MX_L1_TLB_CNTL, ENABLE_L1_TLB, 1);
> > +	tmp = REG_SET_FIELD(tmp,
> > +				MC_VM_MX_L1_TLB_CNTL,
> > +				SYSTEM_ACCESS_MODE,
> > +				3);
> > +	tmp = REG_SET_FIELD(tmp,
> > +				MC_VM_MX_L1_TLB_CNTL,
> > +				ENABLE_ADVANCED_DRIVER_MODEL,
> > +				1);
> > +	tmp = REG_SET_FIELD(tmp,
> > +				MC_VM_MX_L1_TLB_CNTL,
> > +				SYSTEM_APERTURE_UNMAPPED_ACCESS,
> > +				0);
> > +	tmp = REG_SET_FIELD(tmp,
> > +				MC_VM_MX_L1_TLB_CNTL,
> > +				ECO_BITS,
> > +				0);
> > +	tmp = REG_SET_FIELD(tmp,
> > +				MC_VM_MX_L1_TLB_CNTL,
> > +				MTYPE,
> > +				MTYPE_UC);/* XXX for emulation. */
> > +	tmp = REG_SET_FIELD(tmp,
> > +				MC_VM_MX_L1_TLB_CNTL,
> > +				ATC_EN,
> > +				1);
> > +	WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmMC_VM_MX_L1_TLB_CNTL), tmp);
> > +
> > +	/* Setup L2 cache */
> > +	tmp = RREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_L2_CNTL));
> > +	tmp = REG_SET_FIELD(tmp, VM_L2_CNTL, ENABLE_L2_CACHE, 1);
> > +	tmp = REG_SET_FIELD(tmp,
> > +				VM_L2_CNTL,
> > +				ENABLE_L2_FRAGMENT_PROCESSING,
> > +				0);
> > +	tmp = REG_SET_FIELD(tmp,
> > +				VM_L2_CNTL,
> > +				L2_PDE0_CACHE_TAG_GENERATION_MODE,
> > +				0);/* XXX for emulation, Refer to closed source code.*/
> > +	tmp = REG_SET_FIELD(tmp, VM_L2_CNTL, PDE_FAULT_CLASSIFICATION, 1);
> > +	tmp = REG_SET_FIELD(tmp,
> > +				VM_L2_CNTL,
> > +				CONTEXT1_IDENTITY_ACCESS_MODE,
> > +				1);
> > +	tmp = REG_SET_FIELD(tmp,
> > +				VM_L2_CNTL,
> > +				IDENTITY_MODE_FRAGMENT_SIZE,
> > +				0);
> > +	WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_L2_CNTL), tmp);
> > +
> > +	tmp = RREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_L2_CNTL2));
> > +	tmp = REG_SET_FIELD(tmp, VM_L2_CNTL2, INVALIDATE_ALL_L1_TLBS, 1);
> > +	tmp = REG_SET_FIELD(tmp, VM_L2_CNTL2, INVALIDATE_L2_CACHE, 1);
> > +	WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_L2_CNTL2), tmp);
> > +
> > +	tmp = mmVM_L2_CNTL3_DEFAULT;
> > +	WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_L2_CNTL3), tmp);
> > +
> > +	tmp = RREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_L2_CNTL4));
> > +	tmp = REG_SET_FIELD(tmp,
> > +			    VM_L2_CNTL4,
> > +			    VMC_TAP_PDE_REQUEST_PHYSICAL,
> > +			    0);
> > +	tmp = REG_SET_FIELD(tmp,
> > +			    VM_L2_CNTL4,
> > +			    VMC_TAP_PTE_REQUEST_PHYSICAL,
> > +			    0);
> > +	WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_L2_CNTL4), tmp);
> > +
> > +	/* setup context0 */
> > +	WREG32(SOC15_REG_OFFSET(MMHUB, 0,
> > +		mmVM_CONTEXT0_PAGE_TABLE_START_ADDR_LO32),
> > +		(u32)(adev->mc.gtt_start >> 12));
> > +	WREG32(SOC15_REG_OFFSET(MMHUB, 0,
> > +		mmVM_CONTEXT0_PAGE_TABLE_START_ADDR_HI32),
> > +		(u32)(adev->mc.gtt_start >> 44));
> > +
> > +	WREG32(SOC15_REG_OFFSET(MMHUB, 0,
> > +		mmVM_CONTEXT0_PAGE_TABLE_END_ADDR_LO32),
> > +		(u32)(adev->mc.gtt_end >> 12));
> > +	WREG32(SOC15_REG_OFFSET(MMHUB, 0,
> > +		mmVM_CONTEXT0_PAGE_TABLE_END_ADDR_HI32),
> > +		(u32)(adev->mc.gtt_end >> 44));
> > +
> > +	BUG_ON(adev->gart.table_addr & (~0x0000FFFFFFFFF000ULL));
> > +	value = adev->gart.table_addr - adev->mc.vram_start +
> > +		adev->vm_manager.vram_base_offset;
> > +	value &= 0x0000FFFFFFFFF000ULL;
> > +	value |= 0x1; /* valid bit */
> > +
> > +	WREG32(SOC15_REG_OFFSET(MMHUB, 0,
> > +		mmVM_CONTEXT0_PAGE_TABLE_BASE_ADDR_LO32),
> > +		(u32)value);
> > +	WREG32(SOC15_REG_OFFSET(MMHUB, 0,
> > +		mmVM_CONTEXT0_PAGE_TABLE_BASE_ADDR_HI32),
> > +		(u32)(value >> 32));
> > +
> > +	WREG32(SOC15_REG_OFFSET(MMHUB, 0,
> > +		mmVM_L2_PROTECTION_FAULT_DEFAULT_ADDR_LO32),
> > +		(u32)(adev->dummy_page.addr >> 12));
> > +	WREG32(SOC15_REG_OFFSET(MMHUB, 0,
> > +		mmVM_L2_PROTECTION_FAULT_DEFAULT_ADDR_HI32),
> > +		(u32)(adev->dummy_page.addr >> 44));
> > +
> > +	tmp = RREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_L2_PROTECTION_FAULT_CNTL2));
> > +	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL2,
> > +			    ACTIVE_PAGE_MIGRATION_PTE_READ_RETRY,
> > +			    1);
> > +	WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_L2_PROTECTION_FAULT_CNTL2), tmp);
> > +
> > +	addr = SOC15_REG_OFFSET(MMHUB, 0, mmVM_CONTEXT0_CNTL);
> > +	tmp = RREG32(addr);
> > +
> > +	tmp = REG_SET_FIELD(tmp, VM_CONTEXT0_CNTL, ENABLE_CONTEXT, 1);
> > +	tmp = REG_SET_FIELD(tmp, VM_CONTEXT0_CNTL, PAGE_TABLE_DEPTH, 0);
> > +	WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_CONTEXT0_CNTL), tmp);
> > +
> > +	tmp = RREG32(addr);
> > +
> > +	/* Disable identity aperture.*/
> > +	WREG32(SOC15_REG_OFFSET(MMHUB, 0,
> > +		mmVM_L2_CONTEXT1_IDENTITY_APERTURE_LOW_ADDR_LO32), 0XFFFFFFFF);
> > +	WREG32(SOC15_REG_OFFSET(MMHUB, 0,
> > +		mmVM_L2_CONTEXT1_IDENTITY_APERTURE_LOW_ADDR_HI32), 0x0000000F);
> > +
> > +	WREG32(SOC15_REG_OFFSET(MMHUB, 0,
> > +		mmVM_L2_CONTEXT1_IDENTITY_APERTURE_HIGH_ADDR_LO32), 0);
> > +	WREG32(SOC15_REG_OFFSET(MMHUB, 0,
> > +		mmVM_L2_CONTEXT1_IDENTITY_APERTURE_HIGH_ADDR_HI32), 0);
> > +
> > +	WREG32(SOC15_REG_OFFSET(MMHUB, 0,
> > +		mmVM_L2_CONTEXT_IDENTITY_PHYSICAL_OFFSET_LO32), 0);
> > +	WREG32(SOC15_REG_OFFSET(MMHUB, 0,
> > +		mmVM_L2_CONTEXT_IDENTITY_PHYSICAL_OFFSET_HI32), 0);
> > +
> > +	for (i = 0; i <= 14; i++) {
> > +		tmp = RREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_CONTEXT1_CNTL) + i);
> > +		tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
> > +				ENABLE_CONTEXT, 1);
> > +		tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
> > +				PAGE_TABLE_DEPTH, 1);
> > +		tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
> > +				RANGE_PROTECTION_FAULT_ENABLE_DEFAULT, 1);
> > +		tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
> > +				DUMMY_PAGE_PROTECTION_FAULT_ENABLE_DEFAULT, 1);
> > +		tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
> > +				PDE0_PROTECTION_FAULT_ENABLE_DEFAULT, 1);
> > +		tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
> > +				VALID_PROTECTION_FAULT_ENABLE_DEFAULT, 1);
> > +		tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
> > +				READ_PROTECTION_FAULT_ENABLE_DEFAULT, 1);
> > +		tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
> > +				WRITE_PROTECTION_FAULT_ENABLE_DEFAULT, 1);
> > +		tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
> > +				EXECUTE_PROTECTION_FAULT_ENABLE_DEFAULT, 1);
> > +		tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
> > +				PAGE_TABLE_BLOCK_SIZE,
> > +				amdgpu_vm_block_size - 9);
> > +		WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_CONTEXT1_CNTL) + i, tmp);
> > +		WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_CONTEXT1_PAGE_TABLE_START_ADDR_LO32) + i*2, 0);
> > +		WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_CONTEXT1_PAGE_TABLE_START_ADDR_HI32) + i*2, 0);
> > +		WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_CONTEXT1_PAGE_TABLE_END_ADDR_LO32) + i*2,
> > +				adev->vm_manager.max_pfn - 1);
> > +		WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_CONTEXT1_PAGE_TABLE_END_ADDR_HI32) + i*2, 0);
> > +	}
> > +
> > +	return 0;
> > +}
> > +
> > +void mmhub_v1_0_gart_disable(struct amdgpu_device *adev)
> > +{
> > +	u32 tmp;
> > +	u32 i;
> > +
> > +	/* Disable all tables */
> > +	for (i = 0; i < 16; i++)
> > +		WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_CONTEXT0_CNTL) + i, 0);
> > +
> > +	/* Setup TLB control */
> > +	tmp = RREG32(SOC15_REG_OFFSET(MMHUB, 0, mmMC_VM_MX_L1_TLB_CNTL));
> > +	tmp = REG_SET_FIELD(tmp, MC_VM_MX_L1_TLB_CNTL, ENABLE_L1_TLB, 0);
> > +	tmp = REG_SET_FIELD(tmp,
> > +				MC_VM_MX_L1_TLB_CNTL,
> > +				ENABLE_ADVANCED_DRIVER_MODEL,
> > +				0);
> > +	WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmMC_VM_MX_L1_TLB_CNTL), tmp);
> > +
> > +	/* Setup L2 cache */
> > +	tmp = RREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_L2_CNTL));
> > +	tmp = REG_SET_FIELD(tmp, VM_L2_CNTL, ENABLE_L2_CACHE, 0);
> > +	WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_L2_CNTL), tmp);
> > +	WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_L2_CNTL3), 0);
> > +}
> > +
> > +/**
> > + * mmhub_v1_0_set_fault_enable_default - update GART/VM fault handling
> > + *
> > + * @adev: amdgpu_device pointer
> > + * @value: true redirects VM faults to the default page
> > + */
> > +void mmhub_v1_0_set_fault_enable_default(struct amdgpu_device *adev, bool value)
> > +{
> > +	u32 tmp;
> > +	tmp = RREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_L2_PROTECTION_FAULT_CNTL));
> > +	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
> > +			RANGE_PROTECTION_FAULT_ENABLE_DEFAULT, value);
> > +	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
> > +			PDE0_PROTECTION_FAULT_ENABLE_DEFAULT, value);
> > +	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
> > +			PDE1_PROTECTION_FAULT_ENABLE_DEFAULT, value);
> > +	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
> > +			PDE2_PROTECTION_FAULT_ENABLE_DEFAULT, value);
> > +	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
> > +			TRANSLATE_FURTHER_PROTECTION_FAULT_ENABLE_DEFAULT,
> > +			value);
> > +	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
> > +			NACK_PROTECTION_FAULT_ENABLE_DEFAULT, value);
> > +	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
> > +			DUMMY_PAGE_PROTECTION_FAULT_ENABLE_DEFAULT, value);
> > +	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
> > +			VALID_PROTECTION_FAULT_ENABLE_DEFAULT, value);
> > +	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
> > +			READ_PROTECTION_FAULT_ENABLE_DEFAULT, value);
> > +	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
> > +			WRITE_PROTECTION_FAULT_ENABLE_DEFAULT, value);
> > +	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
> > +			EXECUTE_PROTECTION_FAULT_ENABLE_DEFAULT, value);
> > +	WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_L2_PROTECTION_FAULT_CNTL), tmp);
> > +}
> > +
> > +static uint32_t mmhub_v1_0_get_invalidate_req(unsigned int vm_id)
> > +{
> > +	u32 req = 0;
> > +
> > +	/* invalidate using legacy mode on vm_id */
> > +	req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ,
> > +			    PER_VMID_INVALIDATE_REQ, 1 << vm_id);
> > +	req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ, FLUSH_TYPE, 0);
> > +	req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ, INVALIDATE_L2_PTES, 1);
> > +	req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ, INVALIDATE_L2_PDE0, 1);
> > +	req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ, INVALIDATE_L2_PDE1, 1);
> > +	req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ, INVALIDATE_L2_PDE2, 1);
> > +	req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ, INVALIDATE_L1_PTES, 1);
> > +	req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ,
> > +			    CLEAR_PROTECTION_FAULT_STATUS_ADDR, 0);
> > +
> > +	return req;
> > +}
> > +
> > +static uint32_t mmhub_v1_0_get_vm_protection_bits(void)
> > +{
> > +	return (VM_CONTEXT1_CNTL__RANGE_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK |
> > +		VM_CONTEXT1_CNTL__DUMMY_PAGE_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK |
> > +		VM_CONTEXT1_CNTL__PDE0_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK |
> > +		VM_CONTEXT1_CNTL__VALID_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK |
> > +		VM_CONTEXT1_CNTL__READ_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK |
> > +		VM_CONTEXT1_CNTL__WRITE_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK |
> > +		VM_CONTEXT1_CNTL__EXECUTE_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK);
> > +}
> > +
> > +static int mmhub_v1_0_early_init(void *handle)
> > +{
> > +	return 0;
> > +}
> > +
> > +static int mmhub_v1_0_late_init(void *handle)
> > +{
> > +	return 0;
> > +}
> > +
> > +static int mmhub_v1_0_sw_init(void *handle)
> > +{
> > +	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
> > +	struct amdgpu_vmhub *hub = &adev->vmhub[AMDGPU_MMHUB];
> > +
> > +	hub->ctx0_ptb_addr_lo32 =
> > +		SOC15_REG_OFFSET(MMHUB, 0,
> > +				 mmVM_CONTEXT0_PAGE_TABLE_BASE_ADDR_LO32);
> > +	hub->ctx0_ptb_addr_hi32 =
> > +		SOC15_REG_OFFSET(MMHUB, 0,
> > +				 mmVM_CONTEXT0_PAGE_TABLE_BASE_ADDR_HI32);
> > +	hub->vm_inv_eng0_req =
> > +		SOC15_REG_OFFSET(MMHUB, 0, mmVM_INVALIDATE_ENG0_REQ);
> > +	hub->vm_inv_eng0_ack =
> > +		SOC15_REG_OFFSET(MMHUB, 0, mmVM_INVALIDATE_ENG0_ACK);
> > +	hub->vm_context0_cntl =
> > +		SOC15_REG_OFFSET(MMHUB, 0, mmVM_CONTEXT0_CNTL);
> > +	hub->vm_l2_pro_fault_status =
> > +		SOC15_REG_OFFSET(MMHUB, 0, mmVM_L2_PROTECTION_FAULT_STATUS);
> > +	hub->vm_l2_pro_fault_cntl =
> > +		SOC15_REG_OFFSET(MMHUB, 0, mmVM_L2_PROTECTION_FAULT_CNTL);
> > +
> > +	hub->get_invalidate_req = mmhub_v1_0_get_invalidate_req;
> > +	hub->get_vm_protection_bits = mmhub_v1_0_get_vm_protection_bits;
> > +
> > +	return 0;
> > +}
> > +
> > +static int mmhub_v1_0_sw_fini(void *handle)
> > +{
> > +	return 0;
> > +}
> > +
> > +static int mmhub_v1_0_hw_init(void *handle)
> > +{
> > +	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
> > +	unsigned i;
> > +
> > +	for (i = 0; i < 18; ++i) {
> > +		WREG32(SOC15_REG_OFFSET(MMHUB, 0,
> > +			mmVM_INVALIDATE_ENG0_ADDR_RANGE_LO32) +
> > +		       2 * i, 0xffffffff);
> > +		WREG32(SOC15_REG_OFFSET(MMHUB, 0,
> > +			mmVM_INVALIDATE_ENG0_ADDR_RANGE_HI32) +
> > +		       2 * i, 0x1f);
> > +	}
> > +
> > +	return 0;
> > +}
> > +
> > +static int mmhub_v1_0_hw_fini(void *handle)
> > +{
> > +	return 0;
> > +}
> > +
> > +static int mmhub_v1_0_suspend(void *handle)
> > +{
> > +	return 0;
> > +}
> > +
> > +static int mmhub_v1_0_resume(void *handle)
> > +{
> > +	return 0;
> > +}
> > +
> > +static bool mmhub_v1_0_is_idle(void *handle)
> > +{
> > +	return true;
> > +}
> > +
> > +static int mmhub_v1_0_wait_for_idle(void *handle)
> > +{
> > +	return 0;
> > +}
> > +
> > +static int mmhub_v1_0_soft_reset(void *handle)
> > +{
> > +	return 0;
> > +}
> > +
> > +static void mmhub_v1_0_update_medium_grain_clock_gating(struct amdgpu_device *adev,
> > +							bool enable)
> > +{
> > +	uint32_t def, data, def1, data1, def2, data2;
> > +
> > +	def  = data  = RREG32(SOC15_REG_OFFSET(MMHUB, 0, mmATC_L2_MISC_CG));
> > +	def1 = data1 = RREG32(SOC15_REG_OFFSET(MMHUB, 0, mmDAGB0_CNTL_MISC2));
> > +	def2 = data2 = RREG32(SOC15_REG_OFFSET(MMHUB, 0, mmDAGB1_CNTL_MISC2));
> > +
> > +	if (enable && (adev->cg_flags & AMD_CG_SUPPORT_MC_MGCG)) {
> > +		data |= ATC_L2_MISC_CG__ENABLE_MASK;
> > +
> > +		data1 &= ~(DAGB0_CNTL_MISC2__DISABLE_WRREQ_CG_MASK |
> > +			   DAGB0_CNTL_MISC2__DISABLE_WRRET_CG_MASK |
> > +			   DAGB0_CNTL_MISC2__DISABLE_RDREQ_CG_MASK |
> > +			   DAGB0_CNTL_MISC2__DISABLE_RDRET_CG_MASK |
> > +			   DAGB0_CNTL_MISC2__DISABLE_TLBWR_CG_MASK |
> > +			   DAGB0_CNTL_MISC2__DISABLE_TLBRD_CG_MASK);
> > +
> > +		data2 &= ~(DAGB1_CNTL_MISC2__DISABLE_WRREQ_CG_MASK |
> > +			   DAGB1_CNTL_MISC2__DISABLE_WRRET_CG_MASK |
> > +			   DAGB1_CNTL_MISC2__DISABLE_RDREQ_CG_MASK |
> > +			   DAGB1_CNTL_MISC2__DISABLE_RDRET_CG_MASK |
> > +			   DAGB1_CNTL_MISC2__DISABLE_TLBWR_CG_MASK |
> > +			   DAGB1_CNTL_MISC2__DISABLE_TLBRD_CG_MASK);
> > +	} else {
> > +		data &= ~ATC_L2_MISC_CG__ENABLE_MASK;
> > +
> > +		data1 |= (DAGB0_CNTL_MISC2__DISABLE_WRREQ_CG_MASK |
> > +			  DAGB0_CNTL_MISC2__DISABLE_WRRET_CG_MASK |
> > +			  DAGB0_CNTL_MISC2__DISABLE_RDREQ_CG_MASK |
> > +			  DAGB0_CNTL_MISC2__DISABLE_RDRET_CG_MASK |
> > +			  DAGB0_CNTL_MISC2__DISABLE_TLBWR_CG_MASK |
> > +			  DAGB0_CNTL_MISC2__DISABLE_TLBRD_CG_MASK);
> > +
> > +		data2 |= (DAGB1_CNTL_MISC2__DISABLE_WRREQ_CG_MASK |
> > +			  DAGB1_CNTL_MISC2__DISABLE_WRRET_CG_MASK |
> > +			  DAGB1_CNTL_MISC2__DISABLE_RDREQ_CG_MASK |
> > +			  DAGB1_CNTL_MISC2__DISABLE_RDRET_CG_MASK |
> > +			  DAGB1_CNTL_MISC2__DISABLE_TLBWR_CG_MASK |
> > +			  DAGB1_CNTL_MISC2__DISABLE_TLBRD_CG_MASK);
> > +	}
> > +
> > +	if (def != data)
> > +		WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmATC_L2_MISC_CG), data);
> > +
> > +	if (def1 != data1)
> > +		WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmDAGB0_CNTL_MISC2), data1);
> > +
> > +	if (def2 != data2)
> > +		WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmDAGB1_CNTL_MISC2), data2);
> > +}
> > +
> > +static void athub_update_medium_grain_clock_gating(struct amdgpu_device *adev,
> > +						   bool enable)
> > +{
> > +	uint32_t def, data;
> > +
> > +	def = data = RREG32(SOC15_REG_OFFSET(ATHUB, 0, mmATHUB_MISC_CNTL));
> > +
> > +	if (enable && (adev->cg_flags & AMD_CG_SUPPORT_MC_MGCG))
> > +		data |= ATHUB_MISC_CNTL__CG_ENABLE_MASK;
> > +	else
> > +		data &= ~ATHUB_MISC_CNTL__CG_ENABLE_MASK;
> > +
> > +	if (def != data)
> > +		WREG32(SOC15_REG_OFFSET(ATHUB, 0, mmATHUB_MISC_CNTL), data);
> > +}
> > +
> > +static void mmhub_v1_0_update_medium_grain_light_sleep(struct amdgpu_device *adev,
> > +						       bool enable)
> > +{
> > +	uint32_t def, data;
> > +
> > +	def = data = RREG32(SOC15_REG_OFFSET(MMHUB, 0, mmATC_L2_MISC_CG));
> > +
> > +	if (enable && (adev->cg_flags & AMD_CG_SUPPORT_MC_LS))
> > +		data |= ATC_L2_MISC_CG__MEM_LS_ENABLE_MASK;
> > +	else
> > +		data &= ~ATC_L2_MISC_CG__MEM_LS_ENABLE_MASK;
> > +
> > +	if (def != data)
> > +		WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmATC_L2_MISC_CG), data);
> > +}
> > +
> > +static void athub_update_medium_grain_light_sleep(struct amdgpu_device *adev,
> > +						  bool enable)
> > +{
> > +	uint32_t def, data;
> > +
> > +	def = data = RREG32(SOC15_REG_OFFSET(ATHUB, 0, mmATHUB_MISC_CNTL));
> > +
> > +	if (enable && (adev->cg_flags & AMD_CG_SUPPORT_MC_LS) &&
> > +	    (adev->cg_flags & AMD_CG_SUPPORT_HDP_LS))
> > +		data |= ATHUB_MISC_CNTL__CG_MEM_LS_ENABLE_MASK;
> > +	else
> > +		data &= ~ATHUB_MISC_CNTL__CG_MEM_LS_ENABLE_MASK;
> > +
> > +	if (def != data)
> > +		WREG32(SOC15_REG_OFFSET(ATHUB, 0, mmATHUB_MISC_CNTL), data);
> > +}
> > +
> > +static int mmhub_v1_0_set_clockgating_state(void *handle,
> > +					enum amd_clockgating_state state)
> > +{
> > +	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
> > +
> > +	switch (adev->asic_type) {
> > +	case CHIP_VEGA10:
> > +		mmhub_v1_0_update_medium_grain_clock_gating(adev,
> > +				state == AMD_CG_STATE_GATE ? true : false);
> > +		athub_update_medium_grain_clock_gating(adev,
> > +				state == AMD_CG_STATE_GATE ? true : false);
> > +		mmhub_v1_0_update_medium_grain_light_sleep(adev,
> > +				state == AMD_CG_STATE_GATE ? true : false);
> > +		athub_update_medium_grain_light_sleep(adev,
> > +				state == AMD_CG_STATE_GATE ? true : false);
> > +		break;
> > +	default:
> > +		break;
> > +	}
> > +
> > +	return 0;
> > +}
> > +
> > +static int mmhub_v1_0_set_powergating_state(void *handle,
> > +					enum amd_powergating_state state)
> > +{
> > +	return 0;
> > +}
> > +
> > +const struct amd_ip_funcs mmhub_v1_0_ip_funcs = {
> > +	.name = "mmhub_v1_0",
> > +	.early_init = mmhub_v1_0_early_init,
> > +	.late_init = mmhub_v1_0_late_init,
> > +	.sw_init = mmhub_v1_0_sw_init,
> > +	.sw_fini = mmhub_v1_0_sw_fini,
> > +	.hw_init = mmhub_v1_0_hw_init,
> > +	.hw_fini = mmhub_v1_0_hw_fini,
> > +	.suspend = mmhub_v1_0_suspend,
> > +	.resume = mmhub_v1_0_resume,
> > +	.is_idle = mmhub_v1_0_is_idle,
> > +	.wait_for_idle = mmhub_v1_0_wait_for_idle,
> > +	.soft_reset = mmhub_v1_0_soft_reset,
> > +	.set_clockgating_state = mmhub_v1_0_set_clockgating_state,
> > +	.set_powergating_state = mmhub_v1_0_set_powergating_state,
> > +};
> > +
> > +const struct amdgpu_ip_block_version mmhub_v1_0_ip_block =
> > +{
> > +	.type = AMD_IP_BLOCK_TYPE_MMHUB,
> > +	.major = 1,
> > +	.minor = 0,
> > +	.rev = 0,
> > +	.funcs = &mmhub_v1_0_ip_funcs,
> > +};
> > diff --git a/drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.h b/drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.h
> > new file mode 100644
> > index 0000000..aadedf9
> > --- /dev/null
> > +++ b/drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.h
> > @@ -0,0 +1,35 @@
> > +/*
> > + * Copyright 2016 Advanced Micro Devices, Inc.
> > + *
> > + * Permission is hereby granted, free of charge, to any person obtaining a
> > + * copy of this software and associated documentation files (the "Software"),
> > + * to deal in the Software without restriction, including without limitation
> > + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
> > + * and/or sell copies of the Software, and to permit persons to whom the
> > + * Software is furnished to do so, subject to the following conditions:
> > + *
> > + * The above copyright notice and this permission notice shall be included in
> > + * all copies or substantial portions of the Software.
> > + *
> > + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> > + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> > + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
> > + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
> > + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
> > + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
> > + * OTHER DEALINGS IN THE SOFTWARE.
> > + *
> > + */
> > +#ifndef __MMHUB_V1_0_H__
> > +#define __MMHUB_V1_0_H__
> > +
> > +u64 mmhub_v1_0_get_fb_location(struct amdgpu_device *adev);
> > +int mmhub_v1_0_gart_enable(struct amdgpu_device *adev);
> > +void mmhub_v1_0_gart_disable(struct amdgpu_device *adev);
> > +void mmhub_v1_0_set_fault_enable_default(struct amdgpu_device *adev,
> > +					 bool value);
> > +
> > +extern const struct amd_ip_funcs mmhub_v1_0_ip_funcs;
> > +extern const struct amdgpu_ip_block_version mmhub_v1_0_ip_block;
> > +
> > +#endif
> > diff --git a/drivers/gpu/drm/amd/include/amd_shared.h b/drivers/gpu/drm/amd/include/amd_shared.h
> > index 717d6be..a94420d 100644
> > --- a/drivers/gpu/drm/amd/include/amd_shared.h
> > +++ b/drivers/gpu/drm/amd/include/amd_shared.h
> > @@ -74,6 +74,8 @@ enum amd_ip_block_type {
> >   	AMD_IP_BLOCK_TYPE_UVD,
> >   	AMD_IP_BLOCK_TYPE_VCE,
> >   	AMD_IP_BLOCK_TYPE_ACP,
> > +	AMD_IP_BLOCK_TYPE_GFXHUB,
> > +	AMD_IP_BLOCK_TYPE_MMHUB
> >   };
> >
> >   enum amd_clockgating_state {
> 

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


* Re: [PATCH 000/100] Add Vega10 Support
       [not found]             ` <15b7d1b4-8ac7-d14b-40f6-aba529b301ea-ANTagKRnAhcb1SvskN2V4Q@public.gmane.org>
@ 2017-03-21 15:54               ` Alex Deucher
  0 siblings, 0 replies; 101+ messages in thread
From: Alex Deucher @ 2017-03-21 15:54 UTC (permalink / raw)
  To: Christian König; +Cc: Alex Deucher, amd-gfx list

On Tue, Mar 21, 2017 at 8:18 AM, Christian König
<deathsimple@vodafone.de> wrote:
> In my Spam folder I've found:
>
> Patches #22, #24, #39, #50, #64, #69, #75, #78 which are Acked-by: Christian
> König <christian.koenig@amd.com>.
>
> Patches #32, #42 which are Reviewed-by: Christian König
> <christian.koenig@amd.com>.
>
> And patches #80, #85, #89, #90, #91, #100 which already had either my rb or
> ackb.
>
> So still missing are #6-#20 which are probably just too large for the list.

As I mentioned below, 6-20 are the register headers which I didn't
send because they are too big.

Alex

>
> Regards,
> Christian.
>
>
> Am 21.03.2017 um 12:51 schrieb Christian König:
>>
>> Patches #48, #49, #52-#63, #65-#68, #70-#72, #74, #76, #77, #79, #81-#84
>> are Acked-by: Christian König <christian.koenig@amd.com>.
>>
>> Patches #50, #64, #69, #75, #78, #80, #85, #89-#91, #100 didn't make it to
>> the list.
>>
>> Patch #73 probably needs to be moved to the end of the set or at least
>> after the wptr_poll fix.
>>
>> Apart from those everything should already have my reviewed-by or
>> acked-by.
>>
>> What worries me a bit are the ones that didn't make it to the list. Going
>> to check my spam folder, but that is a bit disturbing.
>>
>> Regards,
>> Christian.
>>
>> Am 21.03.2017 um 08:42 schrieb Christian König:
>>>
>>> Patches #1 - #5, #21, #23, #25, #27, #28, #31, #35-#38, #40, #41, #45 are
>>> Acked-by: Christian König.
>>>
>>> Patches #6-#20, #22, #24, #32, #39, #42 didn't make it to the list
>>> (probably too large).
>>>
>>> Patches #43, #44 are Reviewed-by: Christian König
>>> <christian.koenig@amd.com>.
>>>
>>> Patch #26: That stuff actually belongs into vega10 specific code, doesn't
>>> it?
>>>
>>> Patch #29: We shouldn't use typedefs for enums.
>>>
>>> Going to take a look at the rest later today,
>>> Christian.
>>>
>>> Am 20.03.2017 um 21:29 schrieb Alex Deucher:
>>>>
>>>> This patch set adds support for vega10. Major changes and supported
>>>> features:
>>>> - new vbios interface
>>>> - Lots of new hw IPs
>>>> - Support for video decode using UVD
>>>> - Support for video encode using VCE
>>>> - Support for 3D via radeonsi
>>>> - Power management
>>>> - Full display support via DC
>>>> - Support for SR-IOV
>>>>
>>>> I did not send out the register headers since they are huge. You can
>>>> find them
>>>> along with all the other patches in this series here:
>>>> https://cgit.freedesktop.org/~agd5f/linux/log/?h=amd-staging-4.9
>>>>
>>>> Please review.
>>>>
>>>> Thanks,
>>>>
>>>> Alex
>>>>
>>>> Alex Deucher (29):
>>>>    drm/amdgpu: add the new atomfirmware interface header
>>>>    amdgpu: detect if we are using atomfirm or atombios for vbios (v2)
>>>>    drm/amdgpu: move atom scratch setup into amdgpu_atombios.c
>>>>    drm/amdgpu: add basic support for atomfirmware.h (v3)
>>>>    drm/amdgpu: add soc15ip.h
>>>>    drm/amdgpu: add vega10_enum.h
>>>>    drm/amdgpu: Add ATHUB 1.0 register headers
>>>>    drm/amdgpu: Add the DCE 12.0 register headers
>>>>    drm/amdgpu: add the GC 9.0 register headers
>>>>    drm/amdgpu: add the HDP 4.0 register headers
>>>>    drm/amdgpu: add the MMHUB 1.0 register headers
>>>>    drm/amdgpu: add MP 9.0 register headers
>>>>    drm/amdgpu: add NBIF 6.1 register headers
>>>>    drm/amdgpu: add NBIO 6.1 register headers
>>>>    drm/amdgpu: add OSSSYS 4.0 register headers
>>>>    drm/amdgpu: add SDMA 4.0 register headers
>>>>    drm/amdgpu: add SMUIO 9.0 register headers
>>>>    drm/amdgpu: add THM 9.0 register headers
>>>>    drm/amdgpu: add the UVD 7.0 register headers
>>>>    drm/amdgpu: add the VCE 4.0 register headers
>>>>    drm/amdgpu: add gfx9 clearstate header
>>>>    drm/amdgpu: add SDMA 4.0 packet header
>>>>    drm/amdgpu: use atomfirmware interfaces for scratch reg save/restore
>>>>    drm/amdgpu: update IH IV ring entry for soc-15
>>>>    drm/amdgpu: add PTE defines for MTYPE
>>>>    drm/amdgpu: add NGG parameters
>>>>    drm/amdgpu: Add asic family for vega10
>>>>    drm/amdgpu: add tiling flags for GFX9
>>>>    drm/amdgpu: gart fixes for vega10
>>>>
>>>> Alex Xie (4):
>>>>    drm/amdgpu: Add MTYPE flags to GPU VM IOCTL interface
>>>>    drm/amdgpu: handle PTE EXEC in amdgpu_vm_bo_split_mapping
>>>>    drm/amdgpu: handle PTE MTYPE in amdgpu_vm_bo_split_mapping
>>>>    drm/amdgpu: Add GMC 9.0 support
>>>>
>>>> Andrey Grodzovsky (1):
>>>>    drm/amdgpu: gb_addr_config struct
>>>>
>>>> Charlene Liu (1):
>>>>    drm/amd/display: need to handle DCE_Info table ver4.2
>>>>
>>>> Christian König (1):
>>>>    drm/amdgpu: add IV trace point
>>>>
>>>> Eric Huang (7):
>>>>    drm/amd/powerplay: add smu9 header files for Vega10
>>>>    drm/amd/powerplay: add new Vega10's ppsmc header file
>>>>    drm/amdgpu: add new atomfirmware based helpers for powerplay
>>>>    drm/amd/powerplay: add some new structures for Vega10
>>>>    drm/amd: add structures for display/powerplay interface
>>>>    drm/amd/powerplay: add some display/powerplay interfaces
>>>>    drm/amd/powerplay: add Vega10 powerplay support
>>>>
>>>> Felix Kuehling (1):
>>>>    drm/amd: Add MQD structs for GFX V9
>>>>
>>>> Harry Wentland (6):
>>>>    drm/amd/display: Add DCE12 bios parser support
>>>>    drm/amd/display: Add DCE12 gpio support
>>>>    drm/amd/display: Add DCE12 i2c/aux support
>>>>    drm/amd/display: Add DCE12 irq support
>>>>    drm/amd/display: Add DCE12 core support
>>>>    drm/amd/display: Enable DCE12 support
>>>>
>>>> Huang Rui (6):
>>>>    drm/amdgpu: use new flag to handle different firmware loading method
>>>>    drm/amdgpu: rework common ucode handling for vega10
>>>>    drm/amdgpu: add psp firmware header info
>>>>    drm/amdgpu: add PSP driver for vega10
>>>>    drm/amdgpu: add psp firmware info into info query and debugfs
>>>>    drm/amdgpu: add SMC firmware into global ucode list for psp loading
>>>>
>>>> Jordan Lazare (1):
>>>>    drm/amd/display: Less log spam
>>>>
>>>> Junwei Zhang (2):
>>>>    drm/amdgpu: add NBIO 6.1 driver
>>>>    drm/amdgpu: add Vega10 Device IDs
>>>>
>>>> Ken Wang (8):
>>>>    drm/amdgpu: add common soc15 headers
>>>>    drm/amdgpu: add vega10 chip name
>>>>    drm/amdgpu: add 64bit doorbell assignments
>>>>    drm/amdgpu: add SDMA v4.0 implementation
>>>>    drm/amdgpu: implement GFX 9.0 support
>>>>    drm/amdgpu: add vega10 interrupt handler
>>>>    drm/amdgpu: soc15 enable (v2)
>>>>    drm/amdgpu: Set the IP blocks for vega10
>>>>
>>>> Leo Liu (2):
>>>>    drm/amdgpu: add initial uvd 7.0 support for vega10
>>>>    drm/amdgpu: add initial vce 4.0 support for vega10
>>>>
>>>> Marek Olšák (1):
>>>>    drm/amdgpu: don't validate TILE_SPLIT on GFX9
>>>>
>>>> Monk Liu (5):
>>>>    drm/amdgpu/gfx9: programing wptr_poll_addr register
>>>>    drm/amdgpu:impl gfx9 cond_exec
>>>>    drm/amdgpu:bypass RLC init for SRIOV
>>>>    drm/amdgpu/sdma4:re-org SDMA initial steps for sriov
>>>>    drm/amdgpu/vega10:fix DOORBELL64 scheme
>>>>
>>>> Rex Zhu (2):
>>>>    drm/amdgpu: get display info from DC when DC enabled.
>>>>    drm/amd/powerplay: add global PowerPlay mutex.
>>>>
>>>> Xiangliang Yu (22):
>>>>    drm/amdgpu: impl sriov detection for vega10
>>>>    drm/amdgpu: add kiq ring for gfx9
>>>>    drm/amdgpu/gfx9: fullfill kiq funcs
>>>>    drm/amdgpu/gfx9: fullfill kiq irq funcs
>>>>    drm/amdgpu: init kiq and kcq for vega10
>>>>    drm/amdgpu/gfx9: impl gfx9 meta data emit
>>>>    drm/amdgpu/soc15: bypass PSP for VF
>>>>    drm/amdgpu/gmc9: no need use kiq in vega10 tlb flush
>>>>    drm/amdgpu/dce_virtual: bypass DPM for vf
>>>>    drm/amdgpu/virt: impl mailbox for ai
>>>>    drm/amdgpu/soc15: init virt ops for vf
>>>>    drm/amdgpu/soc15: enable virtual dce for vf
>>>>    drm/amdgpu: Don't touch PG&CG for SRIOV MM
>>>>    drm/amdgpu/vce4: enable doorbell for SRIOV
>>>>    drm/amdgpu: disable uvd for sriov
>>>>    drm/amdgpu/soc15: bypass pp block for vf
>>>>    drm/amdgpu/virt: add structure for MM table
>>>>    drm/amdgpu/vce4: alloc mm table for MM sriov
>>>>    drm/amdgpu/vce4: Ignore vce ring/ib test temporarily
>>>>    drm/amdgpu: add mmsch structures
>>>>    drm/amdgpu/vce4: impl vce & mmsch sriov start
>>>>    drm/amdgpu/gfx9: correct wptr pointer value
>>>>
>>>> ken (1):
>>>>    drm/amdgpu: add clinetid definition for vega10
>>>>
>>>>   drivers/gpu/drm/amd/amdgpu/Makefile                |     27 +-
>>>>   drivers/gpu/drm/amd/amdgpu/amdgpu.h                |    172 +-
>>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.c       |     28 +
>>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.h       |      3 +
>>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_atomfirmware.c   |    112 +
>>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_atomfirmware.h   |     33 +
>>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_bios.c           |     30 +-
>>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_cgs.c            |     73 +-
>>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_device.c         |     73 +-
>>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c            |     36 +-
>>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_gart.c           |      3 +-
>>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c            |      2 +-
>>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_ih.h             |     47 +-
>>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c            |      3 +
>>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c            |     32 +
>>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_object.c         |      5 +-
>>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_powerplay.c      |      5 +-
>>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c            |    473 +
>>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_psp.h            |    127 +
>>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_trace.h          |     37 +
>>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c          |    113 +-
>>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h          |     17 +
>>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c            |     58 +-
>>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c            |     21 +
>>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_virt.h           |      7 +
>>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c             |     34 +-
>>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h             |      4 +
>>>>   drivers/gpu/drm/amd/amdgpu/atom.c                  |     26 -
>>>>   drivers/gpu/drm/amd/amdgpu/atom.h                  |      1 -
>>>>   drivers/gpu/drm/amd/amdgpu/cik.c                   |      2 +
>>>>   drivers/gpu/drm/amd/amdgpu/clearstate_gfx9.h       |    941 +
>>>>   drivers/gpu/drm/amd/amdgpu/dce_virtual.c           |      3 +
>>>>   drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c              |      6 +-
>>>>   drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c              |   4075 +
>>>>   drivers/gpu/drm/amd/amdgpu/gfx_v9_0.h              |     35 +
>>>>   drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c           |    447 +
>>>>   drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.h           |     35 +
>>>>   drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c              |    826 +
>>>>   drivers/gpu/drm/amd/amdgpu/gmc_v9_0.h              |     30 +
>>>>   drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c            |    585 +
>>>>   drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.h            |     35 +
>>>>   drivers/gpu/drm/amd/amdgpu/mmsch_v1_0.h            |     87 +
>>>>   drivers/gpu/drm/amd/amdgpu/mxgpu_ai.c              |    207 +
>>>>   drivers/gpu/drm/amd/amdgpu/mxgpu_ai.h              |     47 +
>>>>   drivers/gpu/drm/amd/amdgpu/nbio_v6_1.c             |    251 +
>>>>   drivers/gpu/drm/amd/amdgpu/nbio_v6_1.h             |     53 +
>>>>   drivers/gpu/drm/amd/amdgpu/psp_gfx_if.h            |    269 +
>>>>   drivers/gpu/drm/amd/amdgpu/psp_v3_1.c              |    507 +
>>>>   drivers/gpu/drm/amd/amdgpu/psp_v3_1.h              |     50 +
>>>>   drivers/gpu/drm/amd/amdgpu/sdma_v2_4.c             |      4 +-
>>>>   drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c             |      4 +-
>>>>   drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c             |   1573 +
>>>>   drivers/gpu/drm/amd/amdgpu/sdma_v4_0.h             |     30 +
>>>>   drivers/gpu/drm/amd/amdgpu/soc15.c                 |    825 +
>>>>   drivers/gpu/drm/amd/amdgpu/soc15.h                 |     35 +
>>>>   drivers/gpu/drm/amd/amdgpu/soc15_common.h          |     57 +
>>>>   drivers/gpu/drm/amd/amdgpu/soc15d.h                |    287 +
>>>>   drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c              |   1543 +
>>>>   drivers/gpu/drm/amd/amdgpu/uvd_v7_0.h              |     29 +
>>>>   drivers/gpu/drm/amd/amdgpu/vce_v4_0.c              |   1141 +
>>>>   drivers/gpu/drm/amd/amdgpu/vce_v4_0.h              |     29 +
>>>>   drivers/gpu/drm/amd/amdgpu/vega10_ih.c             |    424 +
>>>>   drivers/gpu/drm/amd/amdgpu/vega10_ih.h             |     30 +
>>>>   drivers/gpu/drm/amd/amdgpu/vega10_sdma_pkt_open.h  |   3335 +
>>>>   drivers/gpu/drm/amd/amdgpu/vi.c                    |      4 +-
>>>>   drivers/gpu/drm/amd/display/Kconfig                |      7 +
>>>>   drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c  |    145 +-
>>>>   .../drm/amd/display/amdgpu_dm/amdgpu_dm_services.c |     10 +
>>>>   .../drm/amd/display/amdgpu_dm/amdgpu_dm_types.c    |     20 +-
>>>>   drivers/gpu/drm/amd/display/dc/Makefile            |      4 +
>>>>   drivers/gpu/drm/amd/display/dc/bios/Makefile       |      8 +
>>>>   drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c |   2162 +
>>>>   drivers/gpu/drm/amd/display/dc/bios/bios_parser2.h |     33 +
>>>>   .../amd/display/dc/bios/bios_parser_interface.c    |     14 +
>>>>   .../display/dc/bios/bios_parser_types_internal2.h  |     74 +
>>>>   .../gpu/drm/amd/display/dc/bios/command_table2.c   |    813 +
>>>>   .../gpu/drm/amd/display/dc/bios/command_table2.h   |    105 +
>>>>   .../amd/display/dc/bios/command_table_helper2.c    |    260 +
>>>>   .../amd/display/dc/bios/command_table_helper2.h    |     82 +
>>>>   .../dc/bios/dce112/command_table_helper2_dce112.c  |    418 +
>>>>   .../dc/bios/dce112/command_table_helper2_dce112.h  |     34 +
>>>>   drivers/gpu/drm/amd/display/dc/calcs/dce_calcs.c   |    117 +
>>>>   drivers/gpu/drm/amd/display/dc/core/dc.c           |     29 +
>>>>   drivers/gpu/drm/amd/display/dc/core/dc_debug.c     |     11 +
>>>>   drivers/gpu/drm/amd/display/dc/core/dc_link.c      |     19 +
>>>>   drivers/gpu/drm/amd/display/dc/core/dc_resource.c  |     14 +
>>>>   drivers/gpu/drm/amd/display/dc/dc.h                |     27 +
>>>>   drivers/gpu/drm/amd/display/dc/dc_hw_types.h       |     46 +
>>>>   .../gpu/drm/amd/display/dc/dce/dce_clock_source.c  |      6 +
>>>>   drivers/gpu/drm/amd/display/dc/dce/dce_clocks.c    |    149 +
>>>>   drivers/gpu/drm/amd/display/dc/dce/dce_clocks.h    |     20 +
>>>>   drivers/gpu/drm/amd/display/dc/dce/dce_hwseq.h     |      8 +
>>>>   .../gpu/drm/amd/display/dc/dce/dce_link_encoder.h  |     14 +
>>>>   drivers/gpu/drm/amd/display/dc/dce/dce_mem_input.c |     35 +
>>>>   drivers/gpu/drm/amd/display/dc/dce/dce_mem_input.h |     34 +
>>>>   drivers/gpu/drm/amd/display/dc/dce/dce_opp.h       |     72 +
>>>>   .../drm/amd/display/dc/dce/dce_stream_encoder.h    |    100 +
>>>>   drivers/gpu/drm/amd/display/dc/dce/dce_transform.h |     68 +
>>>>   .../amd/display/dc/dce110/dce110_hw_sequencer.c    |     53 +-
>>>>   .../drm/amd/display/dc/dce110/dce110_mem_input.c   |      3 +
>>>>   .../display/dc/dce110/dce110_timing_generator.h    |      3 +
>>>>   drivers/gpu/drm/amd/display/dc/dce120/Makefile     |     12 +
>>>>   .../amd/display/dc/dce120/dce120_hw_sequencer.c    |    197 +
>>>>   .../amd/display/dc/dce120/dce120_hw_sequencer.h    |     36 +
>>>>   drivers/gpu/drm/amd/display/dc/dce120/dce120_ipp.c |     58 +
>>>>   drivers/gpu/drm/amd/display/dc/dce120/dce120_ipp.h |     62 +
>>>>   .../drm/amd/display/dc/dce120/dce120_ipp_cursor.c  |    202 +
>>>>   .../drm/amd/display/dc/dce120/dce120_ipp_gamma.c   |    167 +
>>>>   .../drm/amd/display/dc/dce120/dce120_mem_input.c   |    340 +
>>>>   .../drm/amd/display/dc/dce120/dce120_mem_input.h   |     37 +
>>>>   .../drm/amd/display/dc/dce120/dce120_resource.c    |   1099 +
>>>>   .../drm/amd/display/dc/dce120/dce120_resource.h    |     39 +
>>>>   .../display/dc/dce120/dce120_timing_generator.c    |   1109 +
>>>>   .../display/dc/dce120/dce120_timing_generator.h    |     41 +
>>>>   .../gpu/drm/amd/display/dc/dce80/dce80_mem_input.c |      3 +
>>>>   drivers/gpu/drm/amd/display/dc/dm_services.h       |     89 +
>>>>   drivers/gpu/drm/amd/display/dc/dm_services_types.h |     27 +
>>>>   drivers/gpu/drm/amd/display/dc/gpio/Makefile       |     11 +
>>>>   .../amd/display/dc/gpio/dce120/hw_factory_dce120.c |    197 +
>>>>   .../amd/display/dc/gpio/dce120/hw_factory_dce120.h |     32 +
>>>>   .../display/dc/gpio/dce120/hw_translate_dce120.c   |    408 +
>>>>   .../display/dc/gpio/dce120/hw_translate_dce120.h   |     34 +
>>>>   drivers/gpu/drm/amd/display/dc/gpio/hw_factory.c   |      9 +
>>>>   drivers/gpu/drm/amd/display/dc/gpio/hw_translate.c |      9 +-
>>>>   drivers/gpu/drm/amd/display/dc/i2caux/Makefile     |     11 +
>>>>   .../amd/display/dc/i2caux/dce120/i2caux_dce120.c   |    125 +
>>>>   .../amd/display/dc/i2caux/dce120/i2caux_dce120.h   |     32 +
>>>>   drivers/gpu/drm/amd/display/dc/i2caux/i2caux.c     |      8 +
>>>>   .../gpu/drm/amd/display/dc/inc/bandwidth_calcs.h   |      3 +
>>>>   .../gpu/drm/amd/display/dc/inc/hw/display_clock.h  |     23 +
>>>>   drivers/gpu/drm/amd/display/dc/inc/hw/mem_input.h  |      4 +
>>>>   drivers/gpu/drm/amd/display/dc/irq/Makefile        |     12 +
>>>>   .../amd/display/dc/irq/dce120/irq_service_dce120.c |    293 +
>>>>   .../amd/display/dc/irq/dce120/irq_service_dce120.h |     34 +
>>>>   drivers/gpu/drm/amd/display/dc/irq/irq_service.c   |      3 +
>>>>   drivers/gpu/drm/amd/display/include/dal_asic_id.h  |      4 +
>>>>   drivers/gpu/drm/amd/display/include/dal_types.h    |      3 +
>>>>   drivers/gpu/drm/amd/include/amd_shared.h           |      4 +
>>>>   .../asic_reg/vega10/ATHUB/athub_1_0_default.h      |    241 +
>>>>   .../asic_reg/vega10/ATHUB/athub_1_0_offset.h       |    453 +
>>>>   .../asic_reg/vega10/ATHUB/athub_1_0_sh_mask.h      |   2045 +
>>>>   .../include/asic_reg/vega10/DC/dce_12_0_default.h  |   9868 ++
>>>>   .../include/asic_reg/vega10/DC/dce_12_0_offset.h   |  18193 +++
>>>>   .../include/asic_reg/vega10/DC/dce_12_0_sh_mask.h  |  64636 +++++++++
>>>>   .../include/asic_reg/vega10/GC/gc_9_0_default.h    |   3873 +
>>>>   .../amd/include/asic_reg/vega10/GC/gc_9_0_offset.h |   7230 +
>>>>   .../include/asic_reg/vega10/GC/gc_9_0_sh_mask.h    |  29868 ++++
>>>>   .../include/asic_reg/vega10/HDP/hdp_4_0_default.h  |    117 +
>>>>   .../include/asic_reg/vega10/HDP/hdp_4_0_offset.h   |    209 +
>>>>   .../include/asic_reg/vega10/HDP/hdp_4_0_sh_mask.h  |    601 +
>>>>   .../asic_reg/vega10/MMHUB/mmhub_1_0_default.h      |   1011 +
>>>>   .../asic_reg/vega10/MMHUB/mmhub_1_0_offset.h       |   1967 +
>>>>   .../asic_reg/vega10/MMHUB/mmhub_1_0_sh_mask.h      |  10127 ++
>>>>   .../include/asic_reg/vega10/MP/mp_9_0_default.h    |    342 +
>>>>   .../amd/include/asic_reg/vega10/MP/mp_9_0_offset.h |    375 +
>>>>   .../include/asic_reg/vega10/MP/mp_9_0_sh_mask.h    |   1463 +
>>>>   .../asic_reg/vega10/NBIF/nbif_6_1_default.h        |   1271 +
>>>>   .../include/asic_reg/vega10/NBIF/nbif_6_1_offset.h |   1688 +
>>>>   .../asic_reg/vega10/NBIF/nbif_6_1_sh_mask.h        |  10281 ++
>>>>   .../asic_reg/vega10/NBIO/nbio_6_1_default.h        |  22340 +++
>>>>   .../include/asic_reg/vega10/NBIO/nbio_6_1_offset.h |   3649 +
>>>>   .../asic_reg/vega10/NBIO/nbio_6_1_sh_mask.h        | 133884 ++++++++++++++++++
>>>>   .../asic_reg/vega10/OSSSYS/osssys_4_0_default.h    |    176 +
>>>>   .../asic_reg/vega10/OSSSYS/osssys_4_0_offset.h     |    327 +
>>>>   .../asic_reg/vega10/OSSSYS/osssys_4_0_sh_mask.h    |   1196 +
>>>>   .../asic_reg/vega10/SDMA0/sdma0_4_0_default.h      |    286 +
>>>>   .../asic_reg/vega10/SDMA0/sdma0_4_0_offset.h       |    547 +
>>>>   .../asic_reg/vega10/SDMA0/sdma0_4_0_sh_mask.h      |   1852 +
>>>>   .../asic_reg/vega10/SDMA1/sdma1_4_0_default.h      |    282 +
>>>>   .../asic_reg/vega10/SDMA1/sdma1_4_0_offset.h       |    539 +
>>>>   .../asic_reg/vega10/SDMA1/sdma1_4_0_sh_mask.h      |   1810 +
>>>>   .../asic_reg/vega10/SMUIO/smuio_9_0_default.h      |    100 +
>>>>   .../asic_reg/vega10/SMUIO/smuio_9_0_offset.h       |    175 +
>>>>   .../asic_reg/vega10/SMUIO/smuio_9_0_sh_mask.h      |    258 +
>>>>   .../include/asic_reg/vega10/THM/thm_9_0_default.h  |    194 +
>>>>   .../include/asic_reg/vega10/THM/thm_9_0_offset.h   |    363 +
>>>>   .../include/asic_reg/vega10/THM/thm_9_0_sh_mask.h  |   1314 +
>>>>   .../include/asic_reg/vega10/UVD/uvd_7_0_default.h  |    127 +
>>>>   .../include/asic_reg/vega10/UVD/uvd_7_0_offset.h   |    222 +
>>>>   .../include/asic_reg/vega10/UVD/uvd_7_0_sh_mask.h  |    811 +
>>>>   .../include/asic_reg/vega10/VCE/vce_4_0_default.h  |    122 +
>>>>   .../include/asic_reg/vega10/VCE/vce_4_0_offset.h   |    208 +
>>>>   .../include/asic_reg/vega10/VCE/vce_4_0_sh_mask.h  |    488 +
>>>>   .../gpu/drm/amd/include/asic_reg/vega10/soc15ip.h  |   1343 +
>>>>   .../drm/amd/include/asic_reg/vega10/vega10_enum.h  |  22531 +++
>>>>   drivers/gpu/drm/amd/include/atomfirmware.h         |   2385 +
>>>>   drivers/gpu/drm/amd/include/atomfirmwareid.h       |     86 +
>>>>   drivers/gpu/drm/amd/include/displayobject.h        |    249 +
>>>>   drivers/gpu/drm/amd/include/dm_pp_interface.h      |     83 +
>>>>   drivers/gpu/drm/amd/include/v9_structs.h           |    743 +
>>>>   drivers/gpu/drm/amd/powerplay/amd_powerplay.c      |    284 +-
>>>>   drivers/gpu/drm/amd/powerplay/hwmgr/Makefile       |      6 +-
>>>>   .../gpu/drm/amd/powerplay/hwmgr/hardwaremanager.c  |     49 +
>>>>   drivers/gpu/drm/amd/powerplay/hwmgr/hwmgr.c        |      9 +
>>>>   drivers/gpu/drm/amd/powerplay/hwmgr/hwmgr_ppt.h    |     16 +-
>>>>   drivers/gpu/drm/amd/powerplay/hwmgr/ppatomfwctrl.c |    396 +
>>>>   drivers/gpu/drm/amd/powerplay/hwmgr/ppatomfwctrl.h |    140 +
>>>>   drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c |   4378 +
>>>>   drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.h |    434 +
>>>>   drivers/gpu/drm/amd/powerplay/hwmgr/vega10_inc.h   |     44 +
>>>>   .../gpu/drm/amd/powerplay/hwmgr/vega10_powertune.c |    137 +
>>>>   .../gpu/drm/amd/powerplay/hwmgr/vega10_powertune.h |     65 +
>>>>   .../gpu/drm/amd/powerplay/hwmgr/vega10_pptable.h   |    331 +
>>>>   .../amd/powerplay/hwmgr/vega10_processpptables.c   |   1056 +
>>>>   .../amd/powerplay/hwmgr/vega10_processpptables.h   |     34 +
>>>>   .../gpu/drm/amd/powerplay/hwmgr/vega10_thermal.c   |    761 +
>>>>   .../gpu/drm/amd/powerplay/hwmgr/vega10_thermal.h   |     83 +
>>>>   drivers/gpu/drm/amd/powerplay/inc/amd_powerplay.h  |     28 +-
>>>>   .../gpu/drm/amd/powerplay/inc/hardwaremanager.h    |     43 +
>>>>   drivers/gpu/drm/amd/powerplay/inc/hwmgr.h          |    125 +-
>>>>   drivers/gpu/drm/amd/powerplay/inc/pp_instance.h    |      1 +
>>>>   drivers/gpu/drm/amd/powerplay/inc/pp_soc15.h       |     48 +
>>>>   drivers/gpu/drm/amd/powerplay/inc/smu9.h           |    147 +
>>>>   drivers/gpu/drm/amd/powerplay/inc/smu9_driver_if.h |    418 +
>>>>   drivers/gpu/drm/amd/powerplay/inc/smumgr.h         |      3 +
>>>>   drivers/gpu/drm/amd/powerplay/inc/vega10_ppsmc.h   |    131 +
>>>>   drivers/gpu/drm/amd/powerplay/smumgr/Makefile      |      2 +-
>>>>   drivers/gpu/drm/amd/powerplay/smumgr/smumgr.c      |      9 +
>>>>   .../gpu/drm/amd/powerplay/smumgr/vega10_smumgr.c   |    564 +
>>>>   .../gpu/drm/amd/powerplay/smumgr/vega10_smumgr.h   |     70 +
>>>>   include/uapi/drm/amdgpu_drm.h                      |     29 +
>>>>   221 files changed, 403408 insertions(+), 219 deletions(-)
>>>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/amdgpu_atomfirmware.c
>>>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/amdgpu_atomfirmware.h
>>>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
>>>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/amdgpu_psp.h
>>>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/clearstate_gfx9.h
>>>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
>>>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/gfx_v9_0.h
>>>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c
>>>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.h
>>>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
>>>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/gmc_v9_0.h
>>>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c
>>>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.h
>>>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/mmsch_v1_0.h
>>>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/mxgpu_ai.c
>>>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/mxgpu_ai.h
>>>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/nbio_v6_1.c
>>>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/nbio_v6_1.h
>>>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/psp_gfx_if.h
>>>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/psp_v3_1.c
>>>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/psp_v3_1.h
>>>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
>>>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/sdma_v4_0.h
>>>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/soc15.c
>>>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/soc15.h
>>>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/soc15_common.h
>>>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/soc15d.h
>>>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c
>>>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/uvd_v7_0.h
>>>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/vce_v4_0.c
>>>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/vce_v4_0.h
>>>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/vega10_ih.c
>>>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/vega10_ih.h
>>>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/vega10_sdma_pkt_open.h
>>>>   create mode 100644 drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c
>>>>   create mode 100644 drivers/gpu/drm/amd/display/dc/bios/bios_parser2.h
>>>>   create mode 100644 drivers/gpu/drm/amd/display/dc/bios/bios_parser_types_internal2.h
>>>>   create mode 100644 drivers/gpu/drm/amd/display/dc/bios/command_table2.c
>>>>   create mode 100644 drivers/gpu/drm/amd/display/dc/bios/command_table2.h
>>>>   create mode 100644 drivers/gpu/drm/amd/display/dc/bios/command_table_helper2.c
>>>>   create mode 100644 drivers/gpu/drm/amd/display/dc/bios/command_table_helper2.h
>>>>   create mode 100644 drivers/gpu/drm/amd/display/dc/bios/dce112/command_table_helper2_dce112.c
>>>>   create mode 100644 drivers/gpu/drm/amd/display/dc/bios/dce112/command_table_helper2_dce112.h
>>>>   create mode 100644 drivers/gpu/drm/amd/display/dc/dce120/Makefile
>>>>   create mode 100644 drivers/gpu/drm/amd/display/dc/dce120/dce120_hw_sequencer.c
>>>>   create mode 100644 drivers/gpu/drm/amd/display/dc/dce120/dce120_hw_sequencer.h
>>>>   create mode 100644 drivers/gpu/drm/amd/display/dc/dce120/dce120_ipp.c
>>>>   create mode 100644 drivers/gpu/drm/amd/display/dc/dce120/dce120_ipp.h
>>>>   create mode 100644 drivers/gpu/drm/amd/display/dc/dce120/dce120_ipp_cursor.c
>>>>   create mode 100644 drivers/gpu/drm/amd/display/dc/dce120/dce120_ipp_gamma.c
>>>>   create mode 100644 drivers/gpu/drm/amd/display/dc/dce120/dce120_mem_input.c
>>>>   create mode 100644 drivers/gpu/drm/amd/display/dc/dce120/dce120_mem_input.h
>>>>   create mode 100644 drivers/gpu/drm/amd/display/dc/dce120/dce120_resource.c
>>>>   create mode 100644 drivers/gpu/drm/amd/display/dc/dce120/dce120_resource.h
>>>>   create mode 100644 drivers/gpu/drm/amd/display/dc/dce120/dce120_timing_generator.c
>>>>   create mode 100644 drivers/gpu/drm/amd/display/dc/dce120/dce120_timing_generator.h
>>>>   create mode 100644 drivers/gpu/drm/amd/display/dc/gpio/dce120/hw_factory_dce120.c
>>>>   create mode 100644 drivers/gpu/drm/amd/display/dc/gpio/dce120/hw_factory_dce120.h
>>>>   create mode 100644 drivers/gpu/drm/amd/display/dc/gpio/dce120/hw_translate_dce120.c
>>>>   create mode 100644 drivers/gpu/drm/amd/display/dc/gpio/dce120/hw_translate_dce120.h
>>>>   create mode 100644 drivers/gpu/drm/amd/display/dc/i2caux/dce120/i2caux_dce120.c
>>>>   create mode 100644 drivers/gpu/drm/amd/display/dc/i2caux/dce120/i2caux_dce120.h
>>>>   create mode 100644 drivers/gpu/drm/amd/display/dc/irq/dce120/irq_service_dce120.c
>>>>   create mode 100644 drivers/gpu/drm/amd/display/dc/irq/dce120/irq_service_dce120.h
>>>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/ATHUB/athub_1_0_default.h
>>>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/ATHUB/athub_1_0_offset.h
>>>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/ATHUB/athub_1_0_sh_mask.h
>>>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/DC/dce_12_0_default.h
>>>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/DC/dce_12_0_offset.h
>>>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/DC/dce_12_0_sh_mask.h
>>>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/GC/gc_9_0_default.h
>>>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/GC/gc_9_0_offset.h
>>>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/GC/gc_9_0_sh_mask.h
>>>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/HDP/hdp_4_0_default.h
>>>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/HDP/hdp_4_0_offset.h
>>>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/HDP/hdp_4_0_sh_mask.h
>>>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/MMHUB/mmhub_1_0_default.h
>>>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/MMHUB/mmhub_1_0_offset.h
>>>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/MMHUB/mmhub_1_0_sh_mask.h
>>>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/MP/mp_9_0_default.h
>>>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/MP/mp_9_0_offset.h
>>>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/MP/mp_9_0_sh_mask.h
>>>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/NBIF/nbif_6_1_default.h
>>>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/NBIF/nbif_6_1_offset.h
>>>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/NBIF/nbif_6_1_sh_mask.h
>>>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/NBIO/nbio_6_1_default.h
>>>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/NBIO/nbio_6_1_offset.h
>>>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/NBIO/nbio_6_1_sh_mask.h
>>>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/OSSSYS/osssys_4_0_default.h
>>>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/OSSSYS/osssys_4_0_offset.h
>>>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/OSSSYS/osssys_4_0_sh_mask.h
>>>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/SDMA0/sdma0_4_0_default.h
>>>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/SDMA0/sdma0_4_0_offset.h
>>>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/SDMA0/sdma0_4_0_sh_mask.h
>>>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/SDMA1/sdma1_4_0_default.h
>>>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/SDMA1/sdma1_4_0_offset.h
>>>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/SDMA1/sdma1_4_0_sh_mask.h
>>>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/SMUIO/smuio_9_0_default.h
>>>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/SMUIO/smuio_9_0_offset.h
>>>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/SMUIO/smuio_9_0_sh_mask.h
>>>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/THM/thm_9_0_default.h
>>>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/THM/thm_9_0_offset.h
>>>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/THM/thm_9_0_sh_mask.h
>>>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/UVD/uvd_7_0_default.h
>>>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/UVD/uvd_7_0_offset.h
>>>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/UVD/uvd_7_0_sh_mask.h
>>>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/VCE/vce_4_0_default.h
>>>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/VCE/vce_4_0_offset.h
>>>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/VCE/vce_4_0_sh_mask.h
>>>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/soc15ip.h
>>>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/vega10_enum.h
>>>>   create mode 100644 drivers/gpu/drm/amd/include/atomfirmware.h
>>>>   create mode 100644 drivers/gpu/drm/amd/include/atomfirmwareid.h
>>>>   create mode 100644 drivers/gpu/drm/amd/include/displayobject.h
>>>>   create mode 100644 drivers/gpu/drm/amd/include/dm_pp_interface.h
>>>>   create mode 100644 drivers/gpu/drm/amd/include/v9_structs.h
>>>>   create mode 100644 drivers/gpu/drm/amd/powerplay/hwmgr/ppatomfwctrl.c
>>>>   create mode 100644 drivers/gpu/drm/amd/powerplay/hwmgr/ppatomfwctrl.h
>>>>   create mode 100644 drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c
>>>>   create mode 100644 drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.h
>>>>   create mode 100644 drivers/gpu/drm/amd/powerplay/hwmgr/vega10_inc.h
>>>>   create mode 100644 drivers/gpu/drm/amd/powerplay/hwmgr/vega10_powertune.c
>>>>   create mode 100644 drivers/gpu/drm/amd/powerplay/hwmgr/vega10_powertune.h
>>>>   create mode 100644 drivers/gpu/drm/amd/powerplay/hwmgr/vega10_pptable.h
>>>>   create mode 100644 drivers/gpu/drm/amd/powerplay/hwmgr/vega10_processpptables.c
>>>>   create mode 100644 drivers/gpu/drm/amd/powerplay/hwmgr/vega10_processpptables.h
>>>>   create mode 100644 drivers/gpu/drm/amd/powerplay/hwmgr/vega10_thermal.c
>>>>   create mode 100644 drivers/gpu/drm/amd/powerplay/hwmgr/vega10_thermal.h
>>>>   create mode 100644 drivers/gpu/drm/amd/powerplay/inc/pp_soc15.h
>>>>   create mode 100644 drivers/gpu/drm/amd/powerplay/inc/smu9.h
>>>>   create mode 100644 drivers/gpu/drm/amd/powerplay/inc/smu9_driver_if.h
>>>>   create mode 100644 drivers/gpu/drm/amd/powerplay/inc/vega10_ppsmc.h
>>>>   create mode 100644 drivers/gpu/drm/amd/powerplay/smumgr/vega10_smumgr.c
>>>>   create mode 100644 drivers/gpu/drm/amd/powerplay/smumgr/vega10_smumgr.h
>>>>
>>>
>>> _______________________________________________
>>> amd-gfx mailing list
>>> amd-gfx@lists.freedesktop.org
>>> https://lists.freedesktop.org/mailman/listinfo/amd-gfx
>>
>
>
>

^ permalink raw reply	[flat|nested] 101+ messages in thread

* Re: [PATCH 000/100] Add Vega10 Support
       [not found]     ` <50d03274-5a6e-fb77-9741-b6700a9949bd-ANTagKRnAhcb1SvskN2V4Q@public.gmane.org>
  2017-03-21 11:51       ` Christian König
@ 2017-03-21 22:00       ` Alex Deucher
  1 sibling, 0 replies; 101+ messages in thread
From: Alex Deucher @ 2017-03-21 22:00 UTC (permalink / raw)
  To: Christian König; +Cc: Alex Deucher, amd-gfx list

On Tue, Mar 21, 2017 at 3:42 AM, Christian König
<deathsimple@vodafone.de> wrote:
> Patches #1 - #5, #21, #23, #25, #27, #28, #31, #35-#38, #40, #41, #45 are
> Acked-by: Christian König.
>
> Patches #6-#20, #22, #24, #32, #39, #42 didn't make it to the list (probably
> too large).
>
> Patches #43, #44 are Reviewed-by: Christian König
> <christian.koenig@amd.com>.
>
> Patch #26: That stuff actually belongs in vega10-specific code, doesn't it?

It's common to all soc15 parts for the foreseeable future.

>
> Patch #29: We shouldn't use typedefs for enums.

The existing doorbell assignments use a typedef as well.  Should
probably fix both up.

>
> Going to take a look at the rest later today,
> Christian.
>
>
> Am 20.03.2017 um 21:29 schrieb Alex Deucher:
>>
>> This patch set adds support for vega10.  Major changes and supported
>> features:
>> - new vbios interface
>> - Lots of new hw IPs
>> - Support for video decode using UVD
>> - Support for video encode using VCE
>> - Support for 3D via radeonsi
>> - Power management
>> - Full display support via DC
>> - Support for SR-IOV
>>
>> I did not send out the register headers since they are huge.  You can find
>> them
>> along with all the other patches in this series here:
>> https://cgit.freedesktop.org/~agd5f/linux/log/?h=amd-staging-4.9
>>
>> Please review.
>>
>> Thanks,
>>
>> Alex
>>
>> Alex Deucher (29):
>>    drm/amdgpu: add the new atomfirmware interface header
>>    amdgpu: detect if we are using atomfirm or atombios for vbios (v2)
>>    drm/amdgpu: move atom scratch setup into amdgpu_atombios.c
>>    drm/amdgpu: add basic support for atomfirmware.h (v3)
>>    drm/amdgpu: add soc15ip.h
>>    drm/amdgpu: add vega10_enum.h
>>    drm/amdgpu: Add ATHUB 1.0 register headers
>>    drm/amdgpu: Add the DCE 12.0 register headers
>>    drm/amdgpu: add the GC 9.0 register headers
>>    drm/amdgpu: add the HDP 4.0 register headers
>>    drm/amdgpu: add the MMHUB 1.0 register headers
>>    drm/amdgpu: add MP 9.0 register headers
>>    drm/amdgpu: add NBIF 6.1 register headers
>>    drm/amdgpu: add NBIO 6.1 register headers
>>    drm/amdgpu: add OSSSYS 4.0 register headers
>>    drm/amdgpu: add SDMA 4.0 register headers
>>    drm/amdgpu: add SMUIO 9.0 register headers
>>    drm/amdgpu: add THM 9.0 register headers
>>    drm/amdgpu: add the UVD 7.0 register headers
>>    drm/amdgpu: add the VCE 4.0 register headers
>>    drm/amdgpu: add gfx9 clearstate header
>>    drm/amdgpu: add SDMA 4.0 packet header
>>    drm/amdgpu: use atomfirmware interfaces for scratch reg save/restore
>>    drm/amdgpu: update IH IV ring entry for soc-15
>>    drm/amdgpu: add PTE defines for MTYPE
>>    drm/amdgpu: add NGG parameters
>>    drm/amdgpu: Add asic family for vega10
>>    drm/amdgpu: add tiling flags for GFX9
>>    drm/amdgpu: gart fixes for vega10
>>
>> Alex Xie (4):
>>    drm/amdgpu: Add MTYPE flags to GPU VM IOCTL interface
>>    drm/amdgpu: handle PTE EXEC in amdgpu_vm_bo_split_mapping
>>    drm/amdgpu: handle PTE MTYPE in amdgpu_vm_bo_split_mapping
>>    drm/amdgpu: Add GMC 9.0 support
>>
>> Andrey Grodzovsky (1):
>>    drm/amdgpu: gb_addr_config struct
>>
>> Charlene Liu (1):
>>    drm/amd/display: need to handle DCE_Info table ver4.2
>>
>> Christian König (1):
>>    drm/amdgpu: add IV trace point
>>
>> Eric Huang (7):
>>    drm/amd/powerplay: add smu9 header files for Vega10
>>    drm/amd/powerplay: add new Vega10's ppsmc header file
>>    drm/amdgpu: add new atomfirmware based helpers for powerplay
>>    drm/amd/powerplay: add some new structures for Vega10
>>    drm/amd: add structures for display/powerplay interface
>>    drm/amd/powerplay: add some display/powerplay interfaces
>>    drm/amd/powerplay: add Vega10 powerplay support
>>
>> Felix Kuehling (1):
>>    drm/amd: Add MQD structs for GFX V9
>>
>> Harry Wentland (6):
>>    drm/amd/display: Add DCE12 bios parser support
>>    drm/amd/display: Add DCE12 gpio support
>>    drm/amd/display: Add DCE12 i2c/aux support
>>    drm/amd/display: Add DCE12 irq support
>>    drm/amd/display: Add DCE12 core support
>>    drm/amd/display: Enable DCE12 support
>>
>> Huang Rui (6):
>>    drm/amdgpu: use new flag to handle different firmware loading method
>>    drm/amdgpu: rework common ucode handling for vega10
>>    drm/amdgpu: add psp firmware header info
>>    drm/amdgpu: add PSP driver for vega10
>>    drm/amdgpu: add psp firmware info into info query and debugfs
>>    drm/amdgpu: add SMC firmware into global ucode list for psp loading
>>
>> Jordan Lazare (1):
>>    drm/amd/display: Less log spam
>>
>> Junwei Zhang (2):
>>    drm/amdgpu: add NBIO 6.1 driver
>>    drm/amdgpu: add Vega10 Device IDs
>>
>> Ken Wang (8):
>>    drm/amdgpu: add common soc15 headers
>>    drm/amdgpu: add vega10 chip name
>>    drm/amdgpu: add 64bit doorbell assignments
>>    drm/amdgpu: add SDMA v4.0 implementation
>>    drm/amdgpu: implement GFX 9.0 support
>>    drm/amdgpu: add vega10 interrupt handler
>>    drm/amdgpu: soc15 enable (v2)
>>    drm/amdgpu: Set the IP blocks for vega10
>>
>> Leo Liu (2):
>>    drm/amdgpu: add initial uvd 7.0 support for vega10
>>    drm/amdgpu: add initial vce 4.0 support for vega10
>>
>> Marek Olšák (1):
>>    drm/amdgpu: don't validate TILE_SPLIT on GFX9
>>
>> Monk Liu (5):
>>    drm/amdgpu/gfx9: programing wptr_poll_addr register
>>    drm/amdgpu:impl gfx9 cond_exec
>>    drm/amdgpu:bypass RLC init for SRIOV
>>    drm/amdgpu/sdma4:re-org SDMA initial steps for sriov
>>    drm/amdgpu/vega10:fix DOORBELL64 scheme
>>
>> Rex Zhu (2):
>>    drm/amdgpu: get display info from DC when DC enabled.
>>    drm/amd/powerplay: add global PowerPlay mutex.
>>
>> Xiangliang Yu (22):
>>    drm/amdgpu: impl sriov detection for vega10
>>    drm/amdgpu: add kiq ring for gfx9
>>    drm/amdgpu/gfx9: fullfill kiq funcs
>>    drm/amdgpu/gfx9: fullfill kiq irq funcs
>>    drm/amdgpu: init kiq and kcq for vega10
>>    drm/amdgpu/gfx9: impl gfx9 meta data emit
>>    drm/amdgpu/soc15: bypass PSP for VF
>>    drm/amdgpu/gmc9: no need use kiq in vega10 tlb flush
>>    drm/amdgpu/dce_virtual: bypass DPM for vf
>>    drm/amdgpu/virt: impl mailbox for ai
>>    drm/amdgpu/soc15: init virt ops for vf
>>    drm/amdgpu/soc15: enable virtual dce for vf
>>    drm/amdgpu: Don't touch PG&CG for SRIOV MM
>>    drm/amdgpu/vce4: enable doorbell for SRIOV
>>    drm/amdgpu: disable uvd for sriov
>>    drm/amdgpu/soc15: bypass pp block for vf
>>    drm/amdgpu/virt: add structure for MM table
>>    drm/amdgpu/vce4: alloc mm table for MM sriov
>>    drm/amdgpu/vce4: Ignore vce ring/ib test temporarily
>>    drm/amdgpu: add mmsch structures
>>    drm/amdgpu/vce4: impl vce & mmsch sriov start
>>    drm/amdgpu/gfx9: correct wptr pointer value
>>
>> ken (1):
>>    drm/amdgpu: add clinetid definition for vega10
>>
>>   drivers/gpu/drm/amd/amdgpu/Makefile                |     27 +-
>>   drivers/gpu/drm/amd/amdgpu/amdgpu.h                |    172 +-
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.c       |     28 +
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.h       |      3 +
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_atomfirmware.c   |    112 +
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_atomfirmware.h   |     33 +
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_bios.c           |     30 +-
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_cgs.c            |     73 +-
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_device.c         |     73 +-
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c            |     36 +-
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_gart.c           |      3 +-
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c            |      2 +-
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_ih.h             |     47 +-
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c            |      3 +
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c            |     32 +
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_object.c         |      5 +-
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_powerplay.c      |      5 +-
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c            |    473 +
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_psp.h            |    127 +
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_trace.h          |     37 +
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c          |    113 +-
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h          |     17 +
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c            |     58 +-
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c            |     21 +
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_virt.h           |      7 +
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c             |     34 +-
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h             |      4 +
>>   drivers/gpu/drm/amd/amdgpu/atom.c                  |     26 -
>>   drivers/gpu/drm/amd/amdgpu/atom.h                  |      1 -
>>   drivers/gpu/drm/amd/amdgpu/cik.c                   |      2 +
>>   drivers/gpu/drm/amd/amdgpu/clearstate_gfx9.h       |    941 +
>>   drivers/gpu/drm/amd/amdgpu/dce_virtual.c           |      3 +
>>   drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c              |      6 +-
>>   drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c              |   4075 +
>>   drivers/gpu/drm/amd/amdgpu/gfx_v9_0.h              |     35 +
>>   drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c           |    447 +
>>   drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.h           |     35 +
>>   drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c              |    826 +
>>   drivers/gpu/drm/amd/amdgpu/gmc_v9_0.h              |     30 +
>>   drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c            |    585 +
>>   drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.h            |     35 +
>>   drivers/gpu/drm/amd/amdgpu/mmsch_v1_0.h            |     87 +
>>   drivers/gpu/drm/amd/amdgpu/mxgpu_ai.c              |    207 +
>>   drivers/gpu/drm/amd/amdgpu/mxgpu_ai.h              |     47 +
>>   drivers/gpu/drm/amd/amdgpu/nbio_v6_1.c             |    251 +
>>   drivers/gpu/drm/amd/amdgpu/nbio_v6_1.h             |     53 +
>>   drivers/gpu/drm/amd/amdgpu/psp_gfx_if.h            |    269 +
>>   drivers/gpu/drm/amd/amdgpu/psp_v3_1.c              |    507 +
>>   drivers/gpu/drm/amd/amdgpu/psp_v3_1.h              |     50 +
>>   drivers/gpu/drm/amd/amdgpu/sdma_v2_4.c             |      4 +-
>>   drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c             |      4 +-
>>   drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c             |   1573 +
>>   drivers/gpu/drm/amd/amdgpu/sdma_v4_0.h             |     30 +
>>   drivers/gpu/drm/amd/amdgpu/soc15.c                 |    825 +
>>   drivers/gpu/drm/amd/amdgpu/soc15.h                 |     35 +
>>   drivers/gpu/drm/amd/amdgpu/soc15_common.h          |     57 +
>>   drivers/gpu/drm/amd/amdgpu/soc15d.h                |    287 +
>>   drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c              |   1543 +
>>   drivers/gpu/drm/amd/amdgpu/uvd_v7_0.h              |     29 +
>>   drivers/gpu/drm/amd/amdgpu/vce_v4_0.c              |   1141 +
>>   drivers/gpu/drm/amd/amdgpu/vce_v4_0.h              |     29 +
>>   drivers/gpu/drm/amd/amdgpu/vega10_ih.c             |    424 +
>>   drivers/gpu/drm/amd/amdgpu/vega10_ih.h             |     30 +
>>   drivers/gpu/drm/amd/amdgpu/vega10_sdma_pkt_open.h  |   3335 +
>>   drivers/gpu/drm/amd/amdgpu/vi.c                    |      4 +-
>>   drivers/gpu/drm/amd/display/Kconfig                |      7 +
>>   drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c  |    145 +-
>>   .../drm/amd/display/amdgpu_dm/amdgpu_dm_services.c |     10 +
>>   .../drm/amd/display/amdgpu_dm/amdgpu_dm_types.c    |     20 +-
>>   drivers/gpu/drm/amd/display/dc/Makefile            |      4 +
>>   drivers/gpu/drm/amd/display/dc/bios/Makefile       |      8 +
>>   drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c |   2162 +
>>   drivers/gpu/drm/amd/display/dc/bios/bios_parser2.h |     33 +
>>   .../amd/display/dc/bios/bios_parser_interface.c    |     14 +
>>   .../display/dc/bios/bios_parser_types_internal2.h  |     74 +
>>   .../gpu/drm/amd/display/dc/bios/command_table2.c   |    813 +
>>   .../gpu/drm/amd/display/dc/bios/command_table2.h   |    105 +
>>   .../amd/display/dc/bios/command_table_helper2.c    |    260 +
>>   .../amd/display/dc/bios/command_table_helper2.h    |     82 +
>>   .../dc/bios/dce112/command_table_helper2_dce112.c  |    418 +
>>   .../dc/bios/dce112/command_table_helper2_dce112.h  |     34 +
>>   drivers/gpu/drm/amd/display/dc/calcs/dce_calcs.c   |    117 +
>>   drivers/gpu/drm/amd/display/dc/core/dc.c           |     29 +
>>   drivers/gpu/drm/amd/display/dc/core/dc_debug.c     |     11 +
>>   drivers/gpu/drm/amd/display/dc/core/dc_link.c      |     19 +
>>   drivers/gpu/drm/amd/display/dc/core/dc_resource.c  |     14 +
>>   drivers/gpu/drm/amd/display/dc/dc.h                |     27 +
>>   drivers/gpu/drm/amd/display/dc/dc_hw_types.h       |     46 +
>>   .../gpu/drm/amd/display/dc/dce/dce_clock_source.c  |      6 +
>>   drivers/gpu/drm/amd/display/dc/dce/dce_clocks.c    |    149 +
>>   drivers/gpu/drm/amd/display/dc/dce/dce_clocks.h    |     20 +
>>   drivers/gpu/drm/amd/display/dc/dce/dce_hwseq.h     |      8 +
>>   .../gpu/drm/amd/display/dc/dce/dce_link_encoder.h  |     14 +
>>   drivers/gpu/drm/amd/display/dc/dce/dce_mem_input.c |     35 +
>>   drivers/gpu/drm/amd/display/dc/dce/dce_mem_input.h |     34 +
>>   drivers/gpu/drm/amd/display/dc/dce/dce_opp.h       |     72 +
>>   .../drm/amd/display/dc/dce/dce_stream_encoder.h    |    100 +
>>   drivers/gpu/drm/amd/display/dc/dce/dce_transform.h |     68 +
>>   .../amd/display/dc/dce110/dce110_hw_sequencer.c    |     53 +-
>>   .../drm/amd/display/dc/dce110/dce110_mem_input.c   |      3 +
>>   .../display/dc/dce110/dce110_timing_generator.h    |      3 +
>>   drivers/gpu/drm/amd/display/dc/dce120/Makefile     |     12 +
>>   .../amd/display/dc/dce120/dce120_hw_sequencer.c    |    197 +
>>   .../amd/display/dc/dce120/dce120_hw_sequencer.h    |     36 +
>>   drivers/gpu/drm/amd/display/dc/dce120/dce120_ipp.c |     58 +
>>   drivers/gpu/drm/amd/display/dc/dce120/dce120_ipp.h |     62 +
>>   .../drm/amd/display/dc/dce120/dce120_ipp_cursor.c  |    202 +
>>   .../drm/amd/display/dc/dce120/dce120_ipp_gamma.c   |    167 +
>>   .../drm/amd/display/dc/dce120/dce120_mem_input.c   |    340 +
>>   .../drm/amd/display/dc/dce120/dce120_mem_input.h   |     37 +
>>   .../drm/amd/display/dc/dce120/dce120_resource.c    |   1099 +
>>   .../drm/amd/display/dc/dce120/dce120_resource.h    |     39 +
>>   .../display/dc/dce120/dce120_timing_generator.c    |   1109 +
>>   .../display/dc/dce120/dce120_timing_generator.h    |     41 +
>>   .../gpu/drm/amd/display/dc/dce80/dce80_mem_input.c |      3 +
>>   drivers/gpu/drm/amd/display/dc/dm_services.h       |     89 +
>>   drivers/gpu/drm/amd/display/dc/dm_services_types.h |     27 +
>>   drivers/gpu/drm/amd/display/dc/gpio/Makefile       |     11 +
>>   .../amd/display/dc/gpio/dce120/hw_factory_dce120.c |    197 +
>>   .../amd/display/dc/gpio/dce120/hw_factory_dce120.h |     32 +
>>   .../display/dc/gpio/dce120/hw_translate_dce120.c   |    408 +
>>   .../display/dc/gpio/dce120/hw_translate_dce120.h   |     34 +
>>   drivers/gpu/drm/amd/display/dc/gpio/hw_factory.c   |      9 +
>>   drivers/gpu/drm/amd/display/dc/gpio/hw_translate.c |      9 +-
>>   drivers/gpu/drm/amd/display/dc/i2caux/Makefile     |     11 +
>>   .../amd/display/dc/i2caux/dce120/i2caux_dce120.c   |    125 +
>>   .../amd/display/dc/i2caux/dce120/i2caux_dce120.h   |     32 +
>>   drivers/gpu/drm/amd/display/dc/i2caux/i2caux.c     |      8 +
>>   .../gpu/drm/amd/display/dc/inc/bandwidth_calcs.h   |      3 +
>>   .../gpu/drm/amd/display/dc/inc/hw/display_clock.h  |     23 +
>>   drivers/gpu/drm/amd/display/dc/inc/hw/mem_input.h  |      4 +
>>   drivers/gpu/drm/amd/display/dc/irq/Makefile        |     12 +
>>   .../amd/display/dc/irq/dce120/irq_service_dce120.c |    293 +
>>   .../amd/display/dc/irq/dce120/irq_service_dce120.h |     34 +
>>   drivers/gpu/drm/amd/display/dc/irq/irq_service.c   |      3 +
>>   drivers/gpu/drm/amd/display/include/dal_asic_id.h  |      4 +
>>   drivers/gpu/drm/amd/display/include/dal_types.h    |      3 +
>>   drivers/gpu/drm/amd/include/amd_shared.h           |      4 +
>>   .../asic_reg/vega10/ATHUB/athub_1_0_default.h      |    241 +
>>   .../asic_reg/vega10/ATHUB/athub_1_0_offset.h       |    453 +
>>   .../asic_reg/vega10/ATHUB/athub_1_0_sh_mask.h      |   2045 +
>>   .../include/asic_reg/vega10/DC/dce_12_0_default.h  |   9868 ++
>>   .../include/asic_reg/vega10/DC/dce_12_0_offset.h   |  18193 +++
>>   .../include/asic_reg/vega10/DC/dce_12_0_sh_mask.h  |  64636 +++++++++
>>   .../include/asic_reg/vega10/GC/gc_9_0_default.h    |   3873 +
>>   .../amd/include/asic_reg/vega10/GC/gc_9_0_offset.h |   7230 +
>>   .../include/asic_reg/vega10/GC/gc_9_0_sh_mask.h    |  29868 ++++
>>   .../include/asic_reg/vega10/HDP/hdp_4_0_default.h  |    117 +
>>   .../include/asic_reg/vega10/HDP/hdp_4_0_offset.h   |    209 +
>>   .../include/asic_reg/vega10/HDP/hdp_4_0_sh_mask.h  |    601 +
>>   .../asic_reg/vega10/MMHUB/mmhub_1_0_default.h      |   1011 +
>>   .../asic_reg/vega10/MMHUB/mmhub_1_0_offset.h       |   1967 +
>>   .../asic_reg/vega10/MMHUB/mmhub_1_0_sh_mask.h      |  10127 ++
>>   .../include/asic_reg/vega10/MP/mp_9_0_default.h    |    342 +
>>   .../amd/include/asic_reg/vega10/MP/mp_9_0_offset.h |    375 +
>>   .../include/asic_reg/vega10/MP/mp_9_0_sh_mask.h    |   1463 +
>>   .../asic_reg/vega10/NBIF/nbif_6_1_default.h        |   1271 +
>>   .../include/asic_reg/vega10/NBIF/nbif_6_1_offset.h |   1688 +
>>   .../asic_reg/vega10/NBIF/nbif_6_1_sh_mask.h        |  10281 ++
>>   .../asic_reg/vega10/NBIO/nbio_6_1_default.h        |  22340 +++
>>   .../include/asic_reg/vega10/NBIO/nbio_6_1_offset.h |   3649 +
>>   .../asic_reg/vega10/NBIO/nbio_6_1_sh_mask.h        | 133884 ++++++++++++++++++
>>   .../asic_reg/vega10/OSSSYS/osssys_4_0_default.h    |    176 +
>>   .../asic_reg/vega10/OSSSYS/osssys_4_0_offset.h     |    327 +
>>   .../asic_reg/vega10/OSSSYS/osssys_4_0_sh_mask.h    |   1196 +
>>   .../asic_reg/vega10/SDMA0/sdma0_4_0_default.h      |    286 +
>>   .../asic_reg/vega10/SDMA0/sdma0_4_0_offset.h       |    547 +
>>   .../asic_reg/vega10/SDMA0/sdma0_4_0_sh_mask.h      |   1852 +
>>   .../asic_reg/vega10/SDMA1/sdma1_4_0_default.h      |    282 +
>>   .../asic_reg/vega10/SDMA1/sdma1_4_0_offset.h       |    539 +
>>   .../asic_reg/vega10/SDMA1/sdma1_4_0_sh_mask.h      |   1810 +
>>   .../asic_reg/vega10/SMUIO/smuio_9_0_default.h      |    100 +
>>   .../asic_reg/vega10/SMUIO/smuio_9_0_offset.h       |    175 +
>>   .../asic_reg/vega10/SMUIO/smuio_9_0_sh_mask.h      |    258 +
>>   .../include/asic_reg/vega10/THM/thm_9_0_default.h  |    194 +
>>   .../include/asic_reg/vega10/THM/thm_9_0_offset.h   |    363 +
>>   .../include/asic_reg/vega10/THM/thm_9_0_sh_mask.h  |   1314 +
>>   .../include/asic_reg/vega10/UVD/uvd_7_0_default.h  |    127 +
>>   .../include/asic_reg/vega10/UVD/uvd_7_0_offset.h   |    222 +
>>   .../include/asic_reg/vega10/UVD/uvd_7_0_sh_mask.h  |    811 +
>>   .../include/asic_reg/vega10/VCE/vce_4_0_default.h  |    122 +
>>   .../include/asic_reg/vega10/VCE/vce_4_0_offset.h   |    208 +
>>   .../include/asic_reg/vega10/VCE/vce_4_0_sh_mask.h  |    488 +
>>   .../gpu/drm/amd/include/asic_reg/vega10/soc15ip.h  |   1343 +
>>   .../drm/amd/include/asic_reg/vega10/vega10_enum.h  |  22531 +++
>>   drivers/gpu/drm/amd/include/atomfirmware.h         |   2385 +
>>   drivers/gpu/drm/amd/include/atomfirmwareid.h       |     86 +
>>   drivers/gpu/drm/amd/include/displayobject.h        |    249 +
>>   drivers/gpu/drm/amd/include/dm_pp_interface.h      |     83 +
>>   drivers/gpu/drm/amd/include/v9_structs.h           |    743 +
>>   drivers/gpu/drm/amd/powerplay/amd_powerplay.c      |    284 +-
>>   drivers/gpu/drm/amd/powerplay/hwmgr/Makefile       |      6 +-
>>   .../gpu/drm/amd/powerplay/hwmgr/hardwaremanager.c  |     49 +
>>   drivers/gpu/drm/amd/powerplay/hwmgr/hwmgr.c        |      9 +
>>   drivers/gpu/drm/amd/powerplay/hwmgr/hwmgr_ppt.h    |     16 +-
>>   drivers/gpu/drm/amd/powerplay/hwmgr/ppatomfwctrl.c |    396 +
>>   drivers/gpu/drm/amd/powerplay/hwmgr/ppatomfwctrl.h |    140 +
>>   drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c |   4378 +
>>   drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.h |    434 +
>>   drivers/gpu/drm/amd/powerplay/hwmgr/vega10_inc.h   |     44 +
>>   .../gpu/drm/amd/powerplay/hwmgr/vega10_powertune.c |    137 +
>>   .../gpu/drm/amd/powerplay/hwmgr/vega10_powertune.h |     65 +
>>   .../gpu/drm/amd/powerplay/hwmgr/vega10_pptable.h   |    331 +
>>   .../amd/powerplay/hwmgr/vega10_processpptables.c   |   1056 +
>>   .../amd/powerplay/hwmgr/vega10_processpptables.h   |     34 +
>>   .../gpu/drm/amd/powerplay/hwmgr/vega10_thermal.c   |    761 +
>>   .../gpu/drm/amd/powerplay/hwmgr/vega10_thermal.h   |     83 +
>>   drivers/gpu/drm/amd/powerplay/inc/amd_powerplay.h  |     28 +-
>>   .../gpu/drm/amd/powerplay/inc/hardwaremanager.h    |     43 +
>>   drivers/gpu/drm/amd/powerplay/inc/hwmgr.h          |    125 +-
>>   drivers/gpu/drm/amd/powerplay/inc/pp_instance.h    |      1 +
>>   drivers/gpu/drm/amd/powerplay/inc/pp_soc15.h       |     48 +
>>   drivers/gpu/drm/amd/powerplay/inc/smu9.h           |    147 +
>>   drivers/gpu/drm/amd/powerplay/inc/smu9_driver_if.h |    418 +
>>   drivers/gpu/drm/amd/powerplay/inc/smumgr.h         |      3 +
>>   drivers/gpu/drm/amd/powerplay/inc/vega10_ppsmc.h   |    131 +
>>   drivers/gpu/drm/amd/powerplay/smumgr/Makefile      |      2 +-
>>   drivers/gpu/drm/amd/powerplay/smumgr/smumgr.c      |      9 +
>>   .../gpu/drm/amd/powerplay/smumgr/vega10_smumgr.c   |    564 +
>>   .../gpu/drm/amd/powerplay/smumgr/vega10_smumgr.h   |     70 +
>>   include/uapi/drm/amdgpu_drm.h                      |     29 +
>>   221 files changed, 403408 insertions(+), 219 deletions(-)
>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/amdgpu_atomfirmware.c
>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/amdgpu_atomfirmware.h
>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/amdgpu_psp.h
>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/clearstate_gfx9.h
>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/gfx_v9_0.h
>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c
>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.h
>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/gmc_v9_0.h
>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c
>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.h
>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/mmsch_v1_0.h
>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/mxgpu_ai.c
>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/mxgpu_ai.h
>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/nbio_v6_1.c
>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/nbio_v6_1.h
>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/psp_gfx_if.h
>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/psp_v3_1.c
>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/psp_v3_1.h
>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/sdma_v4_0.h
>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/soc15.c
>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/soc15.h
>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/soc15_common.h
>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/soc15d.h
>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c
>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/uvd_v7_0.h
>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/vce_v4_0.c
>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/vce_v4_0.h
>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/vega10_ih.c
>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/vega10_ih.h
>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/vega10_sdma_pkt_open.h
>>   create mode 100644 drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c
>>   create mode 100644 drivers/gpu/drm/amd/display/dc/bios/bios_parser2.h
>>   create mode 100644 drivers/gpu/drm/amd/display/dc/bios/bios_parser_types_internal2.h
>>   create mode 100644 drivers/gpu/drm/amd/display/dc/bios/command_table2.c
>>   create mode 100644 drivers/gpu/drm/amd/display/dc/bios/command_table2.h
>>   create mode 100644 drivers/gpu/drm/amd/display/dc/bios/command_table_helper2.c
>>   create mode 100644 drivers/gpu/drm/amd/display/dc/bios/command_table_helper2.h
>>   create mode 100644 drivers/gpu/drm/amd/display/dc/bios/dce112/command_table_helper2_dce112.c
>>   create mode 100644 drivers/gpu/drm/amd/display/dc/bios/dce112/command_table_helper2_dce112.h
>>   create mode 100644 drivers/gpu/drm/amd/display/dc/dce120/Makefile
>>   create mode 100644 drivers/gpu/drm/amd/display/dc/dce120/dce120_hw_sequencer.c
>>   create mode 100644 drivers/gpu/drm/amd/display/dc/dce120/dce120_hw_sequencer.h
>>   create mode 100644 drivers/gpu/drm/amd/display/dc/dce120/dce120_ipp.c
>>   create mode 100644 drivers/gpu/drm/amd/display/dc/dce120/dce120_ipp.h
>>   create mode 100644 drivers/gpu/drm/amd/display/dc/dce120/dce120_ipp_cursor.c
>>   create mode 100644 drivers/gpu/drm/amd/display/dc/dce120/dce120_ipp_gamma.c
>>   create mode 100644 drivers/gpu/drm/amd/display/dc/dce120/dce120_mem_input.c
>>   create mode 100644 drivers/gpu/drm/amd/display/dc/dce120/dce120_mem_input.h
>>   create mode 100644 drivers/gpu/drm/amd/display/dc/dce120/dce120_resource.c
>>   create mode 100644 drivers/gpu/drm/amd/display/dc/dce120/dce120_resource.h
>>   create mode 100644 drivers/gpu/drm/amd/display/dc/dce120/dce120_timing_generator.c
>>   create mode 100644 drivers/gpu/drm/amd/display/dc/dce120/dce120_timing_generator.h
>>   create mode 100644 drivers/gpu/drm/amd/display/dc/gpio/dce120/hw_factory_dce120.c
>>   create mode 100644 drivers/gpu/drm/amd/display/dc/gpio/dce120/hw_factory_dce120.h
>>   create mode 100644 drivers/gpu/drm/amd/display/dc/gpio/dce120/hw_translate_dce120.c
>>   create mode 100644 drivers/gpu/drm/amd/display/dc/gpio/dce120/hw_translate_dce120.h
>>   create mode 100644 drivers/gpu/drm/amd/display/dc/i2caux/dce120/i2caux_dce120.c
>>   create mode 100644 drivers/gpu/drm/amd/display/dc/i2caux/dce120/i2caux_dce120.h
>>   create mode 100644 drivers/gpu/drm/amd/display/dc/irq/dce120/irq_service_dce120.c
>>   create mode 100644 drivers/gpu/drm/amd/display/dc/irq/dce120/irq_service_dce120.h
>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/ATHUB/athub_1_0_default.h
>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/ATHUB/athub_1_0_offset.h
>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/ATHUB/athub_1_0_sh_mask.h
>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/DC/dce_12_0_default.h
>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/DC/dce_12_0_offset.h
>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/DC/dce_12_0_sh_mask.h
>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/GC/gc_9_0_default.h
>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/GC/gc_9_0_offset.h
>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/GC/gc_9_0_sh_mask.h
>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/HDP/hdp_4_0_default.h
>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/HDP/hdp_4_0_offset.h
>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/HDP/hdp_4_0_sh_mask.h
>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/MMHUB/mmhub_1_0_default.h
>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/MMHUB/mmhub_1_0_offset.h
>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/MMHUB/mmhub_1_0_sh_mask.h
>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/MP/mp_9_0_default.h
>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/MP/mp_9_0_offset.h
>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/MP/mp_9_0_sh_mask.h
>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/NBIF/nbif_6_1_default.h
>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/NBIF/nbif_6_1_offset.h
>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/NBIF/nbif_6_1_sh_mask.h
>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/NBIO/nbio_6_1_default.h
>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/NBIO/nbio_6_1_offset.h
>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/NBIO/nbio_6_1_sh_mask.h
>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/OSSSYS/osssys_4_0_default.h
>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/OSSSYS/osssys_4_0_offset.h
>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/OSSSYS/osssys_4_0_sh_mask.h
>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/SDMA0/sdma0_4_0_default.h
>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/SDMA0/sdma0_4_0_offset.h
>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/SDMA0/sdma0_4_0_sh_mask.h
>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/SDMA1/sdma1_4_0_default.h
>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/SDMA1/sdma1_4_0_offset.h
>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/SDMA1/sdma1_4_0_sh_mask.h
>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/SMUIO/smuio_9_0_default.h
>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/SMUIO/smuio_9_0_offset.h
>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/SMUIO/smuio_9_0_sh_mask.h
>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/THM/thm_9_0_default.h
>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/THM/thm_9_0_offset.h
>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/THM/thm_9_0_sh_mask.h
>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/UVD/uvd_7_0_default.h
>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/UVD/uvd_7_0_offset.h
>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/UVD/uvd_7_0_sh_mask.h
>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/VCE/vce_4_0_default.h
>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/VCE/vce_4_0_offset.h
>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/VCE/vce_4_0_sh_mask.h
>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/soc15ip.h
>>   create mode 100644 drivers/gpu/drm/amd/include/asic_reg/vega10/vega10_enum.h
>>   create mode 100644 drivers/gpu/drm/amd/include/atomfirmware.h
>>   create mode 100644 drivers/gpu/drm/amd/include/atomfirmwareid.h
>>   create mode 100644 drivers/gpu/drm/amd/include/displayobject.h
>>   create mode 100644 drivers/gpu/drm/amd/include/dm_pp_interface.h
>>   create mode 100644 drivers/gpu/drm/amd/include/v9_structs.h
>>   create mode 100644 drivers/gpu/drm/amd/powerplay/hwmgr/ppatomfwctrl.c
>>   create mode 100644 drivers/gpu/drm/amd/powerplay/hwmgr/ppatomfwctrl.h
>>   create mode 100644 drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c
>>   create mode 100644 drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.h
>>   create mode 100644 drivers/gpu/drm/amd/powerplay/hwmgr/vega10_inc.h
>>   create mode 100644 drivers/gpu/drm/amd/powerplay/hwmgr/vega10_powertune.c
>>   create mode 100644 drivers/gpu/drm/amd/powerplay/hwmgr/vega10_powertune.h
>>   create mode 100644 drivers/gpu/drm/amd/powerplay/hwmgr/vega10_pptable.h
>>   create mode 100644 drivers/gpu/drm/amd/powerplay/hwmgr/vega10_processpptables.c
>>   create mode 100644 drivers/gpu/drm/amd/powerplay/hwmgr/vega10_processpptables.h
>>   create mode 100644 drivers/gpu/drm/amd/powerplay/hwmgr/vega10_thermal.c
>>   create mode 100644 drivers/gpu/drm/amd/powerplay/hwmgr/vega10_thermal.h
>>   create mode 100644 drivers/gpu/drm/amd/powerplay/inc/pp_soc15.h
>>   create mode 100644 drivers/gpu/drm/amd/powerplay/inc/smu9.h
>>   create mode 100644 drivers/gpu/drm/amd/powerplay/inc/smu9_driver_if.h
>>   create mode 100644 drivers/gpu/drm/amd/powerplay/inc/vega10_ppsmc.h
>>   create mode 100644 drivers/gpu/drm/amd/powerplay/smumgr/vega10_smumgr.c
>>   create mode 100644 drivers/gpu/drm/amd/powerplay/smumgr/vega10_smumgr.h
>>
>
_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply	[flat|nested] 101+ messages in thread

* Re: [PATCH 046/100] drm/amdgpu: Add GMC 9.0 support
       [not found]             ` <BN6PR12MB1652E0D9C22360FF77A4360AF73D0-/b2+HYfkarQqUD6E6FAiowdYzm3356FpvxpqHgZTriW3zl9H0oFU5g@public.gmane.org>
@ 2017-03-22 16:51               ` Christian König
  0 siblings, 0 replies; 101+ messages in thread
From: Christian König @ 2017-03-22 16:51 UTC (permalink / raw)
  To: Deucher, Alexander, Alex Deucher,
	amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
  Cc: Xie, AlexBin

On 21.03.2017 at 16:09, Deucher, Alexander wrote:
>> [SNIP]
>> Those two callbacks aren't a good idea either.
>>
>> The invalidation request bits are defined by the RTL of the HUB, which
>> is just instantiated twice; see the register database for details.
>>
>> We should probably make those common functions in gmc_v9_0.c and call
>> them from the device-specific flush methods.
> Didn't you have some patches to clean up the gfxhub/mmhub split?  I don't think they ever landed.

Yeah, a good part of that landed.

But some patches broke horribly because I didn't have hardware to test on 
at the time, so I didn't pursue that work until we have hardware.

Most of this can land later on; the concern is that it looks like it 
will break Carrizo/Kaveri/Kabini because of the incorrect handling.

We should make sure those still work before we push this upstream.
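For illustration, the common helper suggested above might look roughly like this. This is purely a sketch: the function name and every bit position below are invented for the example, not taken from the series or from the real VM_INVALIDATE_ENG*_REQ layout.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical field encodings -- the real ones come from the hub RTL,
 * which is instantiated twice (GFXHUB and MMHUB), so one function can
 * build the request word for both hubs. */
#define PER_VMID_INVALIDATE_REQ(vmid)	(1u << (vmid))
#define INVALIDATE_L2_PTES		(1u << 16)
#define INVALIDATE_L2_PDE0		(1u << 17)
#define INVALIDATE_L1_TLBS		(1u << 18)

/* Shared helper (sketch): identical request word for both hub instances. */
static uint32_t gmc_v9_0_get_invalidate_req(unsigned int vmid)
{
	return PER_VMID_INVALIDATE_REQ(vmid) |
	       INVALIDATE_L2_PTES | INVALIDATE_L2_PDE0 |
	       INVALIDATE_L1_TLBS;
}

/* Each hub's flush path then differs only in where it writes the word. */
static void hub_flush(uint32_t *hub_req_reg, unsigned int vmid)
{
	*hub_req_reg = gmc_v9_0_get_invalidate_req(vmid);
}
```

The point of the design is that the encoding lives in one place; a per-hub callback would duplicate knowledge that the RTL already shares.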

Christian.

>
> Alex
>
>> Regards,
>> Christian.
>>
>>> +};
>>> +
>>> +/*
>>>     * GPU MC structures, functions & helpers
>>>     */
>>>    struct amdgpu_mc {
>>> @@ -591,6 +617,9 @@ struct amdgpu_mc {
>>>    	u64					shared_aperture_end;
>>>    	u64					private_aperture_start;
>>>    	u64					private_aperture_end;
>>> +	/* protects concurrent invalidation */
>>> +	spinlock_t		invalidate_lock;
>>> +	const struct amdgpu_mc_funcs *mc_funcs;
>>>    };
>>>
>>>    /*
>>> @@ -1479,6 +1508,7 @@ struct amdgpu_device {
>>>    	struct amdgpu_gart		gart;
>>>    	struct amdgpu_dummy_page	dummy_page;
>>>    	struct amdgpu_vm_manager	vm_manager;
>>> +	struct amdgpu_vmhub             vmhub[AMDGPU_MAX_VMHUBS];
>>>
>>>    	/* memory management */
>>>    	struct amdgpu_mman		mman;
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>> index df615d7..47a8080 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>> @@ -375,6 +375,16 @@ static bool amdgpu_vm_ring_has_compute_vm_bug(struct amdgpu_ring *ring)
>>>    	return false;
>>>    }
>>>
>>> +static u64 amdgpu_vm_adjust_mc_addr(struct amdgpu_device *adev, u64 mc_addr)
>>> +{
>>> +	u64 addr = mc_addr;
>>> +
>>> +	if (adev->mc.mc_funcs && adev->mc.mc_funcs->adjust_mc_addr)
>>> +		addr = adev->mc.mc_funcs->adjust_mc_addr(adev, addr);
>>> +
>>> +	return addr;
>>> +}
>>> +
>>>    /**
>>>     * amdgpu_vm_flush - hardware flush the vm
>>>     *
>>> @@ -405,9 +415,10 @@ int amdgpu_vm_flush(struct amdgpu_ring *ring, struct amdgpu_job *job)
>>>    	if (ring->funcs->emit_vm_flush && (job->vm_needs_flush ||
>>>    	    amdgpu_vm_is_gpu_reset(adev, id))) {
>>>    		struct fence *fence;
>>> +		u64 pd_addr = amdgpu_vm_adjust_mc_addr(adev, job->vm_pd_addr);
>>>
>>> -		trace_amdgpu_vm_flush(job->vm_pd_addr, ring->idx, job->vm_id);
>>> -		amdgpu_ring_emit_vm_flush(ring, job->vm_id, job->vm_pd_addr);
>>> +		trace_amdgpu_vm_flush(pd_addr, ring->idx, job->vm_id);
>>> +		amdgpu_ring_emit_vm_flush(ring, job->vm_id, pd_addr);
>>>
>>>    		r = amdgpu_fence_emit(ring, &fence);
>>>    		if (r)
>>> @@ -643,15 +654,18 @@ int amdgpu_vm_update_page_directory(struct amdgpu_device *adev,
>>>    		    (count == AMDGPU_VM_MAX_UPDATE_SIZE)) {
>>>
>>>    			if (count) {
>>> +				uint64_t pt_addr =
>>> +					amdgpu_vm_adjust_mc_addr(adev, last_pt);
>>> +
>>>    				if (shadow)
>>>    					amdgpu_vm_do_set_ptes(&params,
>>>    							      last_shadow,
>>> -							      last_pt, count,
>>> +							      pt_addr, count,
>>>    							      incr,
>>>    							      AMDGPU_PTE_VALID);
>>>    				amdgpu_vm_do_set_ptes(&params, last_pde,
>>> -						      last_pt, count, incr,
>>> +						      pt_addr, count, incr,
>>>    						      AMDGPU_PTE_VALID);
>>>    			}
>>>
>>> @@ -665,11 +679,13 @@ int amdgpu_vm_update_page_directory(struct amdgpu_device *adev,
>>>    	}
>>>
>>>    	if (count) {
>>> +		uint64_t pt_addr = amdgpu_vm_adjust_mc_addr(adev, last_pt);
>>> +
>>>    		if (vm->page_directory->shadow)
>>> -			amdgpu_vm_do_set_ptes(&params, last_shadow, last_pt,
>>> +			amdgpu_vm_do_set_ptes(&params, last_shadow, pt_addr,
>>>    					      count, incr, AMDGPU_PTE_VALID);
>>> -		amdgpu_vm_do_set_ptes(&params, last_pde, last_pt,
>>> +		amdgpu_vm_do_set_ptes(&params, last_pde, pt_addr,
>>>    				      count, incr, AMDGPU_PTE_VALID);
>>>    	}
>>>
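The adjust_mc_addr hook in the hunk above follows the driver's usual optional-callback pattern: fall through unchanged when no hook is installed, otherwise let the ASIC-specific code rewrite the address. A minimal standalone sketch of that pattern (the structure layout, function names, and the "adjusted" bit are simplified and invented for the example):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative stand-ins for the amdgpu structures. */
struct mc_funcs {
	uint64_t (*adjust_mc_addr)(uint64_t mc_addr);
};

struct mc {
	const struct mc_funcs *mc_funcs;
};

/* Same shape as amdgpu_vm_adjust_mc_addr(): both the function table and
 * the individual hook are NULL-checked, so older ASICs that install
 * neither keep their addresses untouched. */
static uint64_t adjust_mc_addr(const struct mc *mc, uint64_t addr)
{
	if (mc->mc_funcs && mc->mc_funcs->adjust_mc_addr)
		addr = mc->mc_funcs->adjust_mc_addr(addr);
	return addr;
}

/* Hypothetical GMC 9 implementation: tag the address with a made-up bit. */
static uint64_t gmc9_adjust(uint64_t addr)
{
	return addr | (1ULL << 63);
}
```

The double NULL check is what lets the same amdgpu_vm.c code path serve every generation without #ifdefs.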
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c b/drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c
>>> new file mode 100644
>>> index 0000000..1ff019c
>>> --- /dev/null
>>> +++ b/drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c
>>> @@ -0,0 +1,447 @@
>>> +/*
>>> + * Copyright 2016 Advanced Micro Devices, Inc.
>>> + *
>>> + * Permission is hereby granted, free of charge, to any person obtaining a
>>> + * copy of this software and associated documentation files (the "Software"),
>>> + * to deal in the Software without restriction, including without limitation
>>> + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
>>> + * and/or sell copies of the Software, and to permit persons to whom the
>>> + * Software is furnished to do so, subject to the following conditions:
>>> + *
>>> + * The above copyright notice and this permission notice shall be included in
>>> + * all copies or substantial portions of the Software.
>>> + *
>>> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
>>> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
>>> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
>>> + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
>>> + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
>>> + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
>>> + * OTHER DEALINGS IN THE SOFTWARE.
>>> + *
>>> + */
>>> +#include "amdgpu.h"
>>> +#include "gfxhub_v1_0.h"
>>> +
>>> +#include "vega10/soc15ip.h"
>>> +#include "vega10/GC/gc_9_0_offset.h"
>>> +#include "vega10/GC/gc_9_0_sh_mask.h"
>>> +#include "vega10/GC/gc_9_0_default.h"
>>> +#include "vega10/vega10_enum.h"
>>> +
>>> +#include "soc15_common.h"
>>> +
>>> +int gfxhub_v1_0_gart_enable(struct amdgpu_device *adev)
>>> +{
>>> +	u32 tmp;
>>> +	u64 value;
>>> +	u32 i;
>>> +
>>> +	/* Program MC. */
>>> +	/* Update configuration */
>>> +	WREG32(SOC15_REG_OFFSET(GC, 0, mmMC_VM_SYSTEM_APERTURE_LOW_ADDR),
>>> +		adev->mc.vram_start >> 18);
>>> +	WREG32(SOC15_REG_OFFSET(GC, 0, mmMC_VM_SYSTEM_APERTURE_HIGH_ADDR),
>>> +		adev->mc.vram_end >> 18);
>>> +
>>> +	value = adev->vram_scratch.gpu_addr - adev->mc.vram_start
>>> +		+ adev->vm_manager.vram_base_offset;
>>> +	WREG32(SOC15_REG_OFFSET(GC, 0,
>>> +				mmMC_VM_SYSTEM_APERTURE_DEFAULT_ADDR_LSB),
>>> +				(u32)(value >> 12));
>>> +	WREG32(SOC15_REG_OFFSET(GC, 0,
>>> +				mmMC_VM_SYSTEM_APERTURE_DEFAULT_ADDR_MSB),
>>> +				(u32)(value >> 44));
>>> +
>>> +	/* Disable AGP. */
>>> +	WREG32(SOC15_REG_OFFSET(GC, 0, mmMC_VM_AGP_BASE), 0);
>>> +	WREG32(SOC15_REG_OFFSET(GC, 0, mmMC_VM_AGP_TOP), 0);
>>> +	WREG32(SOC15_REG_OFFSET(GC, 0, mmMC_VM_AGP_BOT), 0xFFFFFFFF);
>>> +
>>> +	/* GART Enable. */
>>> +
>>> +	/* Setup TLB control */
>>> +	tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmMC_VM_MX_L1_TLB_CNTL));
>>> +	tmp = REG_SET_FIELD(tmp, MC_VM_MX_L1_TLB_CNTL, ENABLE_L1_TLB, 1);
>>> +	tmp = REG_SET_FIELD(tmp,
>>> +				MC_VM_MX_L1_TLB_CNTL,
>>> +				SYSTEM_ACCESS_MODE,
>>> +				3);
>>> +	tmp = REG_SET_FIELD(tmp,
>>> +				MC_VM_MX_L1_TLB_CNTL,
>>> +				ENABLE_ADVANCED_DRIVER_MODEL,
>>> +				1);
>>> +	tmp = REG_SET_FIELD(tmp,
>>> +				MC_VM_MX_L1_TLB_CNTL,
>>> +				SYSTEM_APERTURE_UNMAPPED_ACCESS,
>>> +				0);
>>> +	tmp = REG_SET_FIELD(tmp,
>>> +				MC_VM_MX_L1_TLB_CNTL,
>>> +				ECO_BITS,
>>> +				0);
>>> +	tmp = REG_SET_FIELD(tmp,
>>> +				MC_VM_MX_L1_TLB_CNTL,
>>> +				MTYPE,
>>> +				MTYPE_UC);/* XXX for emulation. */
>>> +	tmp = REG_SET_FIELD(tmp,
>>> +				MC_VM_MX_L1_TLB_CNTL,
>>> +				ATC_EN,
>>> +				1);
>>> +	WREG32(SOC15_REG_OFFSET(GC, 0, mmMC_VM_MX_L1_TLB_CNTL), tmp);
>>> +
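The REG_SET_FIELD() calls above are the driver's read-modify-write idiom over the generated shift/mask constants. A self-contained sketch of how the macro works; the two register fields and their shift/mask values below are made up for the example, the real ones come from gc_9_0_sh_mask.h:

```c
#include <assert.h>
#include <stdint.h>

/* Made-up field definitions in the *_sh_mask.h naming convention. */
#define VM_L2_CNTL__ENABLE_L2_CACHE__SHIFT	0x0
#define VM_L2_CNTL__ENABLE_L2_CACHE_MASK	0x00000001u
#define VM_L2_CNTL__CACHE_DEPTH__SHIFT		0x4
#define VM_L2_CNTL__CACHE_DEPTH_MASK		0x000000F0u

/* Clear the field, then OR in the new value shifted into place. */
#define REG_SET_FIELD(orig, reg, field, val) \
	(((orig) & ~reg##__##field##_MASK) | \
	 (((uint32_t)(val) << reg##__##field##__SHIFT) & reg##__##field##_MASK))

/* Extract a field back out of a register value. */
#define REG_GET_FIELD(value, reg, field) \
	(((value) & reg##__##field##_MASK) >> reg##__##field##__SHIFT)
```

Because each macro call only touches its own field, the long chains of REG_SET_FIELD() in the hunk compose into a single register value that is written back once with WREG32().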
>>> +	/* Setup L2 cache */
>>> +	tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_CNTL));
>>> +	tmp = REG_SET_FIELD(tmp, VM_L2_CNTL, ENABLE_L2_CACHE, 1);
>>> +	tmp = REG_SET_FIELD(tmp,
>>> +				VM_L2_CNTL,
>>> +				ENABLE_L2_FRAGMENT_PROCESSING,
>>> +				0);
>>> +	tmp = REG_SET_FIELD(tmp,
>>> +				VM_L2_CNTL,
>>> +				L2_PDE0_CACHE_TAG_GENERATION_MODE,
>>> +				0);/* XXX for emulation, Refer to closed source code.*/
>>> +	tmp = REG_SET_FIELD(tmp, VM_L2_CNTL, PDE_FAULT_CLASSIFICATION, 1);
>>> +	tmp = REG_SET_FIELD(tmp,
>>> +				VM_L2_CNTL,
>>> +				CONTEXT1_IDENTITY_ACCESS_MODE,
>>> +				1);
>>> +	tmp = REG_SET_FIELD(tmp,
>>> +				VM_L2_CNTL,
>>> +				IDENTITY_MODE_FRAGMENT_SIZE,
>>> +				0);
>>> +	WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_CNTL), tmp);
>>> +
>>> +	tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_CNTL2));
>>> +	tmp = REG_SET_FIELD(tmp, VM_L2_CNTL2, INVALIDATE_ALL_L1_TLBS, 1);
>>> +	tmp = REG_SET_FIELD(tmp, VM_L2_CNTL2, INVALIDATE_L2_CACHE, 1);
>>> +	WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_CNTL2), tmp);
>>> +
>>> +	tmp = mmVM_L2_CNTL3_DEFAULT;
>>> +	WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_CNTL3), tmp);
>>> +
>>> +	tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_CNTL4));
>>> +	tmp = REG_SET_FIELD(tmp,
>>> +			    VM_L2_CNTL4,
>>> +			    VMC_TAP_PDE_REQUEST_PHYSICAL,
>>> +			    0);
>>> +	tmp = REG_SET_FIELD(tmp,
>>> +			    VM_L2_CNTL4,
>>> +			    VMC_TAP_PTE_REQUEST_PHYSICAL,
>>> +			    0);
>>> +	WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_CNTL4), tmp);
>>> +
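SOC15_REG_OFFSET() in these hunks resolves a register by adding the per-register offset from the generated *_offset.h header to the IP instance's base address from the soc15ip.h tables. A rough sketch of that idea; the table contents, the register offset, and the macro body here are invented for illustration and simpler than the real driver's:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical IP enumeration and per-IP, per-instance base addresses,
 * standing in for the soc15ip.h tables. */
enum soc15_ip { GC, MMHUB, NUM_IP };

static const uint32_t ip_base[NUM_IP][2] = {
	[GC]    = { 0x2000, 0x0 },	/* invented segment bases */
	[MMHUB] = { 0x1A00, 0x0 },
};

/* Invented register offset within the GC segment; the real value would
 * come from gc_9_0_offset.h. */
#define mmVM_L2_CNTL 0x058c

/* Final dword offset = instance base + register offset. */
#define SOC15_REG_OFFSET(ip, inst, reg) (ip_base[ip][inst] + (reg))
```

The same register name can therefore be programmed on either hub instance just by changing the IP/instance arguments, which is what makes the twice-instantiated hub RTL easy to drive from shared code.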
>>> +	/* setup context0 */
>>> +	WREG32(SOC15_REG_OFFSET(GC, 0,
>>> +				mmVM_CONTEXT0_PAGE_TABLE_START_ADDR_LO32),
>>> +		(u32)(adev->mc.gtt_start >> 12));
>>> +	WREG32(SOC15_REG_OFFSET(GC, 0,
>>> +				mmVM_CONTEXT0_PAGE_TABLE_START_ADDR_HI32),
>>> +		(u32)(adev->mc.gtt_start >> 44));
>>> +
>>> +	WREG32(SOC15_REG_OFFSET(GC, 0,
>>> +				mmVM_CONTEXT0_PAGE_TABLE_END_ADDR_LO32),
>>> +		(u32)(adev->mc.gtt_end >> 12));
>>> +	WREG32(SOC15_REG_OFFSET(GC, 0,
>>> +				mmVM_CONTEXT0_PAGE_TABLE_END_ADDR_HI32),
>>> +		(u32)(adev->mc.gtt_end >> 44));
>>> +
>>> +	BUG_ON(adev->gart.table_addr & (~0x0000FFFFFFFFF000ULL));
>>> +	value = adev->gart.table_addr - adev->mc.vram_start
>>> +		+ adev->vm_manager.vram_base_offset;
>>> +	value &= 0x0000FFFFFFFFF000ULL;
>>> +	value |= 0x1; /*valid bit*/
>>> +
>>> +	WREG32(SOC15_REG_OFFSET(GC, 0,
>>> +				mmVM_CONTEXT0_PAGE_TABLE_BASE_ADDR_LO32),
>>> +		(u32)value);
>>> +	WREG32(SOC15_REG_OFFSET(GC, 0,
>>> +				mmVM_CONTEXT0_PAGE_TABLE_BASE_ADDR_HI32),
>>> +		(u32)(value >> 32));
>>> +
>>> +	WREG32(SOC15_REG_OFFSET(GC, 0,
>>> +				mmVM_L2_PROTECTION_FAULT_DEFAULT_ADDR_LO32),
>>> +		(u32)(adev->dummy_page.addr >> 12));
>>> +	WREG32(SOC15_REG_OFFSET(GC, 0,
>>> +				mmVM_L2_PROTECTION_FAULT_DEFAULT_ADDR_HI32),
>>> +		(u32)(adev->dummy_page.addr >> 44));
>>> +
>>> +	tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_PROTECTION_FAULT_CNTL2));
>>> +	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL2,
>>> +			    ACTIVE_PAGE_MIGRATION_PTE_READ_RETRY,
>>> +			    1);
>>> +	WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_PROTECTION_FAULT_CNTL2), tmp);
>>> +
>>> +	tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmVM_CONTEXT0_CNTL));
>>> +	tmp = REG_SET_FIELD(tmp, VM_CONTEXT0_CNTL, ENABLE_CONTEXT, 1);
>>> +	tmp = REG_SET_FIELD(tmp, VM_CONTEXT0_CNTL, PAGE_TABLE_DEPTH, 0);
>>> +	WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_CONTEXT0_CNTL), tmp);
>>> +
>>> +	/* Disable identity aperture. */
>>> +	WREG32(SOC15_REG_OFFSET(GC, 0,
>>> +		mmVM_L2_CONTEXT1_IDENTITY_APERTURE_LOW_ADDR_LO32), 0xFFFFFFFF);
>>> +	WREG32(SOC15_REG_OFFSET(GC, 0,
>>> +		mmVM_L2_CONTEXT1_IDENTITY_APERTURE_LOW_ADDR_HI32), 0x0000000F);
>>> +
>>> +	WREG32(SOC15_REG_OFFSET(GC, 0,
>>> +		mmVM_L2_CONTEXT1_IDENTITY_APERTURE_HIGH_ADDR_LO32), 0);
>>> +	WREG32(SOC15_REG_OFFSET(GC, 0,
>>> +		mmVM_L2_CONTEXT1_IDENTITY_APERTURE_HIGH_ADDR_HI32), 0);
>>> +
>>> +	WREG32(SOC15_REG_OFFSET(GC, 0,
>>> +		mmVM_L2_CONTEXT_IDENTITY_PHYSICAL_OFFSET_LO32), 0);
>>> +	WREG32(SOC15_REG_OFFSET(GC, 0,
>>> +		mmVM_L2_CONTEXT_IDENTITY_PHYSICAL_OFFSET_HI32), 0);
>>> +
>>> +	for (i = 0; i <= 14; i++) {
>>> +		tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmVM_CONTEXT1_CNTL) + i);
>>> +		tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL, ENABLE_CONTEXT, 1);
>>> +		tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL, PAGE_TABLE_DEPTH, 1);
>>> +		tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
>>> +				    RANGE_PROTECTION_FAULT_ENABLE_DEFAULT, 1);
>>> +		tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
>>> +				    DUMMY_PAGE_PROTECTION_FAULT_ENABLE_DEFAULT, 1);
>>> +		tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
>>> +				    PDE0_PROTECTION_FAULT_ENABLE_DEFAULT, 1);
>>> +		tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
>>> +				    VALID_PROTECTION_FAULT_ENABLE_DEFAULT, 1);
>>> +		tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
>>> +				    READ_PROTECTION_FAULT_ENABLE_DEFAULT, 1);
>>> +		tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
>>> +				    WRITE_PROTECTION_FAULT_ENABLE_DEFAULT, 1);
>>> +		tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
>>> +				    EXECUTE_PROTECTION_FAULT_ENABLE_DEFAULT, 1);
>>> +		tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
>>> +				    PAGE_TABLE_BLOCK_SIZE,
>>> +				    amdgpu_vm_block_size - 9);
>>> +		WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_CONTEXT1_CNTL) + i, tmp);
>>> +		WREG32(SOC15_REG_OFFSET(GC, 0,
>>> +			mmVM_CONTEXT1_PAGE_TABLE_START_ADDR_LO32) + i*2, 0);
>>> +		WREG32(SOC15_REG_OFFSET(GC, 0,
>>> +			mmVM_CONTEXT1_PAGE_TABLE_START_ADDR_HI32) + i*2, 0);
>>> +		WREG32(SOC15_REG_OFFSET(GC, 0,
>>> +			mmVM_CONTEXT1_PAGE_TABLE_END_ADDR_LO32) + i*2,
>>> +			adev->vm_manager.max_pfn - 1);
>>> +		WREG32(SOC15_REG_OFFSET(GC, 0,
>>> +			mmVM_CONTEXT1_PAGE_TABLE_END_ADDR_HI32) + i*2, 0);
>>> +	}
>>> +
>>> +	return 0;
>>> +}
>>> +
>>> +void gfxhub_v1_0_gart_disable(struct amdgpu_device *adev)
>>> +{
>>> +	u32 tmp;
>>> +	u32 i;
>>> +
>>> +	/* Disable all tables */
>>> +	for (i = 0; i < 16; i++)
>>> +		WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_CONTEXT0_CNTL) + i, 0);
>>> +
>>> +	/* Setup TLB control */
>>> +	tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmMC_VM_MX_L1_TLB_CNTL));
>>> +	tmp = REG_SET_FIELD(tmp, MC_VM_MX_L1_TLB_CNTL, ENABLE_L1_TLB, 0);
>>> +	tmp = REG_SET_FIELD(tmp,
>>> +			    MC_VM_MX_L1_TLB_CNTL,
>>> +			    ENABLE_ADVANCED_DRIVER_MODEL,
>>> +			    0);
>>> +	WREG32(SOC15_REG_OFFSET(GC, 0, mmMC_VM_MX_L1_TLB_CNTL), tmp);
>>> +
>>> +	/* Setup L2 cache */
>>> +	tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_CNTL));
>>> +	tmp = REG_SET_FIELD(tmp, VM_L2_CNTL, ENABLE_L2_CACHE, 0);
>>> +	WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_CNTL), tmp);
>>> +	WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_CNTL3), 0);
>>> +}
>>> +
>>> +/**
>>> + * gfxhub_v1_0_set_fault_enable_default - update GART/VM fault handling
>>> + *
>>> + * @adev: amdgpu_device pointer
>>> + * @value: true redirects VM faults to the default page
>>> + */
>>> +void gfxhub_v1_0_set_fault_enable_default(struct amdgpu_device *adev,
>>> +					  bool value)
>>> +{
>>> +	u32 tmp;
>>> +
>>> +	tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_PROTECTION_FAULT_CNTL));
>>> +	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
>>> +			RANGE_PROTECTION_FAULT_ENABLE_DEFAULT, value);
>>> +	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
>>> +			PDE0_PROTECTION_FAULT_ENABLE_DEFAULT, value);
>>> +	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
>>> +			PDE1_PROTECTION_FAULT_ENABLE_DEFAULT, value);
>>> +	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
>>> +			PDE2_PROTECTION_FAULT_ENABLE_DEFAULT, value);
>>> +	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
>>> +			TRANSLATE_FURTHER_PROTECTION_FAULT_ENABLE_DEFAULT,
>>> +			value);
>>> +	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
>>> +			NACK_PROTECTION_FAULT_ENABLE_DEFAULT, value);
>>> +	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
>>> +			DUMMY_PAGE_PROTECTION_FAULT_ENABLE_DEFAULT, value);
>>> +	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
>>> +			VALID_PROTECTION_FAULT_ENABLE_DEFAULT, value);
>>> +	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
>>> +			READ_PROTECTION_FAULT_ENABLE_DEFAULT, value);
>>> +	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
>>> +			WRITE_PROTECTION_FAULT_ENABLE_DEFAULT, value);
>>> +	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
>>> +			EXECUTE_PROTECTION_FAULT_ENABLE_DEFAULT, value);
>>> +	WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_PROTECTION_FAULT_CNTL), tmp);
>>> +}
>>> +
>>> +static uint32_t gfxhub_v1_0_get_invalidate_req(unsigned int vm_id)
>>> +{
>>> +	u32 req = 0;
>>> +
>>> +	/* invalidate using legacy mode on vm_id */
>>> +	req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ,
>>> +			    PER_VMID_INVALIDATE_REQ, 1 << vm_id);
>>> +	req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ, FLUSH_TYPE, 0);
>>> +	req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ, INVALIDATE_L2_PTES, 1);
>>> +	req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ, INVALIDATE_L2_PDE0, 1);
>>> +	req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ, INVALIDATE_L2_PDE1, 1);
>>> +	req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ, INVALIDATE_L2_PDE2, 1);
>>> +	req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ, INVALIDATE_L1_PTES, 1);
>>> +	req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ,
>>> +			    CLEAR_PROTECTION_FAULT_STATUS_ADDR, 0);
>>> +
>>> +	return req;
>>> +}
>>> +
>>> +static uint32_t gfxhub_v1_0_get_vm_protection_bits(void)
>>> +{
>>> +	return (VM_CONTEXT1_CNTL__RANGE_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK |
>>> +		VM_CONTEXT1_CNTL__DUMMY_PAGE_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK |
>>> +		VM_CONTEXT1_CNTL__PDE0_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK |
>>> +		VM_CONTEXT1_CNTL__VALID_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK |
>>> +		VM_CONTEXT1_CNTL__READ_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK |
>>> +		VM_CONTEXT1_CNTL__WRITE_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK |
>>> +		VM_CONTEXT1_CNTL__EXECUTE_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK);
>>> +}
>>> +
>>> +static int gfxhub_v1_0_early_init(void *handle)
>>> +{
>>> +	return 0;
>>> +}
>>> +
>>> +static int gfxhub_v1_0_late_init(void *handle)
>>> +{
>>> +	return 0;
>>> +}
>>> +
>>> +static int gfxhub_v1_0_sw_init(void *handle)
>>> +{
>>> +	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
>>> +	struct amdgpu_vmhub *hub = &adev->vmhub[AMDGPU_GFXHUB];
>>> +
>>> +	hub->ctx0_ptb_addr_lo32 =
>>> +		SOC15_REG_OFFSET(GC, 0,
>>> +				 mmVM_CONTEXT0_PAGE_TABLE_BASE_ADDR_LO32);
>>> +	hub->ctx0_ptb_addr_hi32 =
>>> +		SOC15_REG_OFFSET(GC, 0,
>>> +				 mmVM_CONTEXT0_PAGE_TABLE_BASE_ADDR_HI32);
>>> +	hub->vm_inv_eng0_req =
>>> +		SOC15_REG_OFFSET(GC, 0, mmVM_INVALIDATE_ENG0_REQ);
>>> +	hub->vm_inv_eng0_ack =
>>> +		SOC15_REG_OFFSET(GC, 0, mmVM_INVALIDATE_ENG0_ACK);
>>> +	hub->vm_context0_cntl =
>>> +		SOC15_REG_OFFSET(GC, 0, mmVM_CONTEXT0_CNTL);
>>> +	hub->vm_l2_pro_fault_status =
>>> +		SOC15_REG_OFFSET(GC, 0, mmVM_L2_PROTECTION_FAULT_STATUS);
>>> +	hub->vm_l2_pro_fault_cntl =
>>> +		SOC15_REG_OFFSET(GC, 0, mmVM_L2_PROTECTION_FAULT_CNTL);
>>> +
>>> +	hub->get_invalidate_req = gfxhub_v1_0_get_invalidate_req;
>>> +	hub->get_vm_protection_bits = gfxhub_v1_0_get_vm_protection_bits;
>>> +
>>> +	return 0;
>>> +}
>>> +
>>> +static int gfxhub_v1_0_sw_fini(void *handle)
>>> +{
>>> +	return 0;
>>> +}
>>> +
>>> +static int gfxhub_v1_0_hw_init(void *handle)
>>> +{
>>> +	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
>>> +	unsigned i;
>>> +
>>> +	for (i = 0; i < 18; ++i) {
>>> +		WREG32(SOC15_REG_OFFSET(GC, 0,
>>> +			mmVM_INVALIDATE_ENG0_ADDR_RANGE_LO32) +
>>> +		       2 * i, 0xffffffff);
>>> +		WREG32(SOC15_REG_OFFSET(GC, 0,
>>> +			mmVM_INVALIDATE_ENG0_ADDR_RANGE_HI32) +
>>> +		       2 * i, 0x1f);
>>> +	}
>>> +
>>> +	return 0;
>>> +}
>>> +
>>> +static int gfxhub_v1_0_hw_fini(void *handle)
>>> +{
>>> +	return 0;
>>> +}
>>> +
>>> +static int gfxhub_v1_0_suspend(void *handle)
>>> +{
>>> +	return 0;
>>> +}
>>> +
>>> +static int gfxhub_v1_0_resume(void *handle)
>>> +{
>>> +	return 0;
>>> +}
>>> +
>>> +static bool gfxhub_v1_0_is_idle(void *handle)
>>> +{
>>> +	return true;
>>> +}
>>> +
>>> +static int gfxhub_v1_0_wait_for_idle(void *handle)
>>> +{
>>> +	return 0;
>>> +}
>>> +
>>> +static int gfxhub_v1_0_soft_reset(void *handle)
>>> +{
>>> +	return 0;
>>> +}
>>> +
>>> +static int gfxhub_v1_0_set_clockgating_state(void *handle,
>>> +					  enum amd_clockgating_state state)
>>> +{
>>> +	return 0;
>>> +}
>>> +
>>> +static int gfxhub_v1_0_set_powergating_state(void *handle,
>>> +					  enum amd_powergating_state state)
>>> +{
>>> +	return 0;
>>> +}
>>> +
>>> +const struct amd_ip_funcs gfxhub_v1_0_ip_funcs = {
>>> +	.name = "gfxhub_v1_0",
>>> +	.early_init = gfxhub_v1_0_early_init,
>>> +	.late_init = gfxhub_v1_0_late_init,
>>> +	.sw_init = gfxhub_v1_0_sw_init,
>>> +	.sw_fini = gfxhub_v1_0_sw_fini,
>>> +	.hw_init = gfxhub_v1_0_hw_init,
>>> +	.hw_fini = gfxhub_v1_0_hw_fini,
>>> +	.suspend = gfxhub_v1_0_suspend,
>>> +	.resume = gfxhub_v1_0_resume,
>>> +	.is_idle = gfxhub_v1_0_is_idle,
>>> +	.wait_for_idle = gfxhub_v1_0_wait_for_idle,
>>> +	.soft_reset = gfxhub_v1_0_soft_reset,
>>> +	.set_clockgating_state = gfxhub_v1_0_set_clockgating_state,
>>> +	.set_powergating_state = gfxhub_v1_0_set_powergating_state,
>>> +};
>>> +
>>> +const struct amdgpu_ip_block_version gfxhub_v1_0_ip_block =
>>> +{
>>> +	.type = AMD_IP_BLOCK_TYPE_GFXHUB,
>>> +	.major = 1,
>>> +	.minor = 0,
>>> +	.rev = 0,
>>> +	.funcs = &gfxhub_v1_0_ip_funcs,
>>> +};
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.h b/drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.h
>>> new file mode 100644
>>> index 0000000..5129a8f
>>> --- /dev/null
>>> +++ b/drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.h
>>> @@ -0,0 +1,35 @@
>>> +/*
>>> + * Copyright 2016 Advanced Micro Devices, Inc.
>>> + *
>>> + * Permission is hereby granted, free of charge, to any person obtaining a
>>> + * copy of this software and associated documentation files (the "Software"),
>>> + * to deal in the Software without restriction, including without limitation
>>> + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
>>> + * and/or sell copies of the Software, and to permit persons to whom the
>>> + * Software is furnished to do so, subject to the following conditions:
>>> + *
>>> + * The above copyright notice and this permission notice shall be included in
>>> + * all copies or substantial portions of the Software.
>>> + *
>>> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
>>> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
>>> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
>>> + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
>>> + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
>>> + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
>>> + * OTHER DEALINGS IN THE SOFTWARE.
>>> + *
>>> + */
>>> +
>>> +#ifndef __GFXHUB_V1_0_H__
>>> +#define __GFXHUB_V1_0_H__
>>> +
>>> +int gfxhub_v1_0_gart_enable(struct amdgpu_device *adev);
>>> +void gfxhub_v1_0_gart_disable(struct amdgpu_device *adev);
>>> +void gfxhub_v1_0_set_fault_enable_default(struct amdgpu_device *adev,
>>> +					  bool value);
>>> +
>>> +extern const struct amd_ip_funcs gfxhub_v1_0_ip_funcs;
>>> +extern const struct amdgpu_ip_block_version gfxhub_v1_0_ip_block;
>>> +
>>> +#endif
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
>>> new file mode 100644
>>> index 0000000..5cf0fc3
>>> --- /dev/null
>>> +++ b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
>>> @@ -0,0 +1,826 @@
>>> +/*
>>> + * Copyright 2016 Advanced Micro Devices, Inc.
>>> + *
>>> + * Permission is hereby granted, free of charge, to any person obtaining a
>>> + * copy of this software and associated documentation files (the "Software"),
>>> + * to deal in the Software without restriction, including without limitation
>>> + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
>>> + * and/or sell copies of the Software, and to permit persons to whom the
>>> + * Software is furnished to do so, subject to the following conditions:
>>> + *
>>> + * The above copyright notice and this permission notice shall be included in
>>> + * all copies or substantial portions of the Software.
>>> + *
>>> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
>>> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
>>> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
>>> + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
>>> + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
>>> + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
>>> + * OTHER DEALINGS IN THE SOFTWARE.
>>> + *
>>> + */
>>> +#include <linux/firmware.h>
>>> +#include "amdgpu.h"
>>> +#include "gmc_v9_0.h"
>>> +
>>> +#include "vega10/soc15ip.h"
>>> +#include "vega10/HDP/hdp_4_0_offset.h"
>>> +#include "vega10/HDP/hdp_4_0_sh_mask.h"
>>> +#include "vega10/GC/gc_9_0_sh_mask.h"
>>> +#include "vega10/vega10_enum.h"
>>> +
>>> +#include "soc15_common.h"
>>> +
>>> +#include "nbio_v6_1.h"
>>> +#include "gfxhub_v1_0.h"
>>> +#include "mmhub_v1_0.h"
>>> +
>>> +#define mmDF_CS_AON0_DramBaseAddress0				0x0044
>>> +#define mmDF_CS_AON0_DramBaseAddress0_BASE_IDX			0
>>> +//DF_CS_AON0_DramBaseAddress0
>>> +#define DF_CS_AON0_DramBaseAddress0__AddrRngVal__SHIFT		0x0
>>> +#define DF_CS_AON0_DramBaseAddress0__LgcyMmioHoleEn__SHIFT	0x1
>>> +#define DF_CS_AON0_DramBaseAddress0__IntLvNumChan__SHIFT	0x4
>>> +#define DF_CS_AON0_DramBaseAddress0__IntLvAddrSel__SHIFT	0x8
>>> +#define DF_CS_AON0_DramBaseAddress0__DramBaseAddr__SHIFT	0xc
>>> +#define DF_CS_AON0_DramBaseAddress0__AddrRngVal_MASK		0x00000001L
>>> +#define DF_CS_AON0_DramBaseAddress0__LgcyMmioHoleEn_MASK	0x00000002L
>>> +#define DF_CS_AON0_DramBaseAddress0__IntLvNumChan_MASK		0x000000F0L
>>> +#define DF_CS_AON0_DramBaseAddress0__IntLvAddrSel_MASK		0x00000700L
>>> +#define DF_CS_AON0_DramBaseAddress0__DramBaseAddr_MASK		0xFFFFF000L
>>> +
>>> +/* XXX Move this macro to VEGA10 header file, which is like vid.h for VI.*/
>>> +#define AMDGPU_NUM_OF_VMIDS			8
>>> +
>>> +static const u32 golden_settings_vega10_hdp[] =
>>> +{
>>> +	0xf64, 0x0fffffff, 0x00000000,
>>> +	0xf65, 0x0fffffff, 0x00000000,
>>> +	0xf66, 0x0fffffff, 0x00000000,
>>> +	0xf67, 0x0fffffff, 0x00000000,
>>> +	0xf68, 0x0fffffff, 0x00000000,
>>> +	0xf6a, 0x0fffffff, 0x00000000,
>>> +	0xf6b, 0x0fffffff, 0x00000000,
>>> +	0xf6c, 0x0fffffff, 0x00000000,
>>> +	0xf6d, 0x0fffffff, 0x00000000,
>>> +	0xf6e, 0x0fffffff, 0x00000000,
>>> +};
>>> +
>>> +static int gmc_v9_0_vm_fault_interrupt_state(struct amdgpu_device *adev,
>>> +					struct amdgpu_irq_src *src,
>>> +					unsigned type,
>>> +					enum amdgpu_interrupt_state state)
>>> +{
>>> +	struct amdgpu_vmhub *hub;
>>> +	u32 tmp, reg, bits, i;
>>> +
>>> +	switch (state) {
>>> +	case AMDGPU_IRQ_STATE_DISABLE:
>>> +		/* MM HUB */
>>> +		hub = &adev->vmhub[AMDGPU_MMHUB];
>>> +		bits = hub->get_vm_protection_bits();
>>> +		for (i = 0; i < 16; i++) {
>>> +			reg = hub->vm_context0_cntl + i;
>>> +			tmp = RREG32(reg);
>>> +			tmp &= ~bits;
>>> +			WREG32(reg, tmp);
>>> +		}
>>> +
>>> +		/* GFX HUB */
>>> +		hub = &adev->vmhub[AMDGPU_GFXHUB];
>>> +		bits = hub->get_vm_protection_bits();
>>> +		for (i = 0; i < 16; i++) {
>>> +			reg = hub->vm_context0_cntl + i;
>>> +			tmp = RREG32(reg);
>>> +			tmp &= ~bits;
>>> +			WREG32(reg, tmp);
>>> +		}
>>> +		break;
>>> +	case AMDGPU_IRQ_STATE_ENABLE:
>>> +		/* MM HUB */
>>> +		hub = &adev->vmhub[AMDGPU_MMHUB];
>>> +		bits = hub->get_vm_protection_bits();
>>> +		for (i = 0; i < 16; i++) {
>>> +			reg = hub->vm_context0_cntl + i;
>>> +			tmp = RREG32(reg);
>>> +			tmp |= bits;
>>> +			WREG32(reg, tmp);
>>> +		}
>>> +
>>> +		/* GFX HUB */
>>> +		hub = &adev->vmhub[AMDGPU_GFXHUB];
>>> +		bits = hub->get_vm_protection_bits();
>>> +		for (i = 0; i < 16; i++) {
>>> +			reg = hub->vm_context0_cntl + i;
>>> +			tmp = RREG32(reg);
>>> +			tmp |= bits;
>>> +			WREG32(reg, tmp);
>>> +		}
>>> +		break;
>>> +	default:
>>> +		break;
>>> +	}
>>> +
>>> +	return 0;
>>> +}
>>> +
>>> +static int gmc_v9_0_process_interrupt(struct amdgpu_device *adev,
>>> +				struct amdgpu_irq_src *source,
>>> +				struct amdgpu_iv_entry *entry)
>>> +{
>>> +	struct amdgpu_vmhub *gfxhub = &adev->vmhub[AMDGPU_GFXHUB];
>>> +	struct amdgpu_vmhub *mmhub = &adev->vmhub[AMDGPU_MMHUB];
>>> +	uint32_t status;
>>> +	u64 addr;
>>> +
>>> +	addr = (u64)entry->src_data[0] << 12;
>>> +	addr |= ((u64)entry->src_data[1] & 0xf) << 44;
>>> +
>>> +	if (entry->vm_id_src) {
>>> +		status = RREG32(mmhub->vm_l2_pro_fault_status);
>>> +		WREG32_P(mmhub->vm_l2_pro_fault_cntl, 1, ~1);
>>> +	} else {
>>> +		status = RREG32(gfxhub->vm_l2_pro_fault_status);
>>> +		WREG32_P(gfxhub->vm_l2_pro_fault_cntl, 1, ~1);
>>> +	}
>>> +
>>> +	DRM_ERROR("[%s]VMC page fault (src_id:%u ring:%u vm_id:%u pas_id:%u) "
>>> +		  "at page 0x%016llx from %d\n"
>>> +		  "VM_L2_PROTECTION_FAULT_STATUS:0x%08X\n",
>>> +		  entry->vm_id_src ? "mmhub" : "gfxhub",
>>> +		  entry->src_id, entry->ring_id, entry->vm_id, entry->pas_id,
>>> +		  addr, entry->client_id, status);
>>> +
>>> +	return 0;
>>> +}
>>> +
>>> +static const struct amdgpu_irq_src_funcs gmc_v9_0_irq_funcs = {
>>> +	.set = gmc_v9_0_vm_fault_interrupt_state,
>>> +	.process = gmc_v9_0_process_interrupt,
>>> +};
>>> +
>>> +static void gmc_v9_0_set_irq_funcs(struct amdgpu_device *adev)
>>> +{
>>> +	adev->mc.vm_fault.num_types = 1;
>>> +	adev->mc.vm_fault.funcs = &gmc_v9_0_irq_funcs;
>>> +}
>>> +
>>> +/*
>>> + * GART
>>> + * VMID 0 is the physical GPU addresses as used by the kernel.
>>> + * VMIDs 1-15 are used for userspace clients and are handled
>>> + * by the amdgpu vm/hsa code.
>>> + */
>>> +
>>> +/**
>>> + * gmc_v9_0_gart_flush_gpu_tlb - gart tlb flush callback
>>> + *
>>> + * @adev: amdgpu_device pointer
>>> + * @vmid: vm instance to flush
>>> + *
>>> + * Flush the TLB for the requested page table.
>>> + */
>>> +static void gmc_v9_0_gart_flush_gpu_tlb(struct amdgpu_device *adev,
>>> +					uint32_t vmid)
>>> +{
>>> +	/* Use register 17 for GART */
>>> +	const unsigned eng = 17;
>>> +	unsigned i, j;
>>> +
>>> +	/* flush hdp cache */
>>> +	nbio_v6_1_hdp_flush(adev);
>>> +
>>> +	spin_lock(&adev->mc.invalidate_lock);
>>> +
>>> +	for (i = 0; i < AMDGPU_MAX_VMHUBS; ++i) {
>>> +		struct amdgpu_vmhub *hub = &adev->vmhub[i];
>>> +		u32 tmp = hub->get_invalidate_req(vmid);
>>> +
>>> +		WREG32(hub->vm_inv_eng0_req + eng, tmp);
>>> +
>>> +		/* Busy wait for ACK.*/
>>> +		for (j = 0; j < 100; j++) {
>>> +			tmp = RREG32(hub->vm_inv_eng0_ack + eng);
>>> +			tmp &= 1 << vmid;
>>> +			if (tmp)
>>> +				break;
>>> +			cpu_relax();
>>> +		}
>>> +		if (j < 100)
>>> +			continue;
>>> +
>>> +		/* Wait for ACK with a delay.*/
>>> +		for (j = 0; j < adev->usec_timeout; j++) {
>>> +			tmp = RREG32(hub->vm_inv_eng0_ack + eng);
>>> +			tmp &= 1 << vmid;
>>> +			if (tmp)
>>> +				break;
>>> +			udelay(1);
>>> +		}
>>> +		if (j < adev->usec_timeout)
>>> +			continue;
>>> +
>>> +		DRM_ERROR("Timeout waiting for VM flush ACK!\n");
>>> +	}
>>> +
>>> +	spin_unlock(&adev->mc.invalidate_lock);
>>> +}
>>> +
>>> +/**
>>> + * gmc_v9_0_gart_set_pte_pde - update the page tables using MMIO
>>> + *
>>> + * @adev: amdgpu_device pointer
>>> + * @cpu_pt_addr: cpu address of the page table
>>> + * @gpu_page_idx: entry in the page table to update
>>> + * @addr: dst addr to write into pte/pde
>>> + * @flags: access flags
>>> + *
>>> + * Update the page tables using the CPU.
>>> + */
>>> +static int gmc_v9_0_gart_set_pte_pde(struct amdgpu_device *adev,
>>> +					void *cpu_pt_addr,
>>> +					uint32_t gpu_page_idx,
>>> +					uint64_t addr,
>>> +					uint64_t flags)
>>> +{
>>> +	void __iomem *ptr = (void *)cpu_pt_addr;
>>> +	uint64_t value;
>>> +
>>> +	/*
>>> +	 * PTE format on VEGA 10:
>>> +	 * 63:59 reserved
>>> +	 * 58:57 mtype
>>> +	 * 56 F
>>> +	 * 55 L
>>> +	 * 54 P
>>> +	 * 53 SW
>>> +	 * 52 T
>>> +	 * 50:48 reserved
>>> +	 * 47:12 4k physical page base address
>>> +	 * 11:7 fragment
>>> +	 * 6 write
>>> +	 * 5 read
>>> +	 * 4 exe
>>> +	 * 3 Z
>>> +	 * 2 snooped
>>> +	 * 1 system
>>> +	 * 0 valid
>>> +	 *
>>> +	 * PDE format on VEGA 10:
>>> +	 * 63:59 block fragment size
>>> +	 * 58:55 reserved
>>> +	 * 54 P
>>> +	 * 53:48 reserved
>>> +	 * 47:6 physical base address of PD or PTE
>>> +	 * 5:3 reserved
>>> +	 * 2 C
>>> +	 * 1 system
>>> +	 * 0 valid
>>> +	 */
>>> +
>>> +	/*
>>> +	 * The following is for PTE only. GART does not have PDEs.
>>> +	 */
>>> +	value = addr & 0x0000FFFFFFFFF000ULL;
>>> +	value |= flags;
>>> +	writeq(value, ptr + (gpu_page_idx * 8));
>>> +	return 0;
>>> +}
>>> +
>>> +static uint64_t gmc_v9_0_get_vm_pte_flags(struct amdgpu_device *adev,
>>> +					  uint32_t flags)
>>> +{
>>> +	uint64_t pte_flag = 0;
>>> +
>>> +	if (flags & AMDGPU_VM_PAGE_EXECUTABLE)
>>> +		pte_flag |= AMDGPU_PTE_EXECUTABLE;
>>> +	if (flags & AMDGPU_VM_PAGE_READABLE)
>>> +		pte_flag |= AMDGPU_PTE_READABLE;
>>> +	if (flags & AMDGPU_VM_PAGE_WRITEABLE)
>>> +		pte_flag |= AMDGPU_PTE_WRITEABLE;
>>> +
>>> +	switch (flags & AMDGPU_VM_MTYPE_MASK) {
>>> +	case AMDGPU_VM_MTYPE_DEFAULT:
>>> +		pte_flag |= AMDGPU_PTE_MTYPE(MTYPE_NC);
>>> +		break;
>>> +	case AMDGPU_VM_MTYPE_NC:
>>> +		pte_flag |= AMDGPU_PTE_MTYPE(MTYPE_NC);
>>> +		break;
>>> +	case AMDGPU_VM_MTYPE_WC:
>>> +		pte_flag |= AMDGPU_PTE_MTYPE(MTYPE_WC);
>>> +		break;
>>> +	case AMDGPU_VM_MTYPE_CC:
>>> +		pte_flag |= AMDGPU_PTE_MTYPE(MTYPE_CC);
>>> +		break;
>>> +	case AMDGPU_VM_MTYPE_UC:
>>> +		pte_flag |= AMDGPU_PTE_MTYPE(MTYPE_UC);
>>> +		break;
>>> +	default:
>>> +		pte_flag |= AMDGPU_PTE_MTYPE(MTYPE_NC);
>>> +		break;
>>> +	}
>>> +
>>> +	if (flags & AMDGPU_VM_PAGE_PRT)
>>> +		pte_flag |= AMDGPU_PTE_PRT;
>>> +
>>> +	return pte_flag;
>>> +}
>>> +
>>> +static const struct amdgpu_gart_funcs gmc_v9_0_gart_funcs = {
>>> +	.flush_gpu_tlb = gmc_v9_0_gart_flush_gpu_tlb,
>>> +	.set_pte_pde = gmc_v9_0_gart_set_pte_pde,
>>> +	.get_vm_pte_flags = gmc_v9_0_get_vm_pte_flags
>>> +};
>>> +
>>> +static void gmc_v9_0_set_gart_funcs(struct amdgpu_device *adev)
>>> +{
>>> +	if (adev->gart.gart_funcs == NULL)
>>> +		adev->gart.gart_funcs = &gmc_v9_0_gart_funcs;
>>> +}
>>> +
>>> +static u64 gmc_v9_0_adjust_mc_addr(struct amdgpu_device *adev, u64 mc_addr)
>>> +{
>>> +	return adev->vm_manager.vram_base_offset + mc_addr -
>>> +		adev->mc.vram_start;
>>> +}
>>> +
>>> +static const struct amdgpu_mc_funcs gmc_v9_0_mc_funcs = {
>>> +	.adjust_mc_addr = gmc_v9_0_adjust_mc_addr,
>>> +};
>>> +
>>> +static void gmc_v9_0_set_mc_funcs(struct amdgpu_device *adev)
>>> +{
>>> +	adev->mc.mc_funcs = &gmc_v9_0_mc_funcs;
>>> +}
>>> +
>>> +static int gmc_v9_0_early_init(void *handle)
>>> +{
>>> +	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
>>> +
>>> +	gmc_v9_0_set_gart_funcs(adev);
>>> +	gmc_v9_0_set_mc_funcs(adev);
>>> +	gmc_v9_0_set_irq_funcs(adev);
>>> +
>>> +	return 0;
>>> +}
>>> +
>>> +static int gmc_v9_0_late_init(void *handle)
>>> +{
>>> +	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
>>> +	return amdgpu_irq_get(adev, &adev->mc.vm_fault, 0);
>>> +}
>>> +
>>> +static void gmc_v9_0_vram_gtt_location(struct amdgpu_device *adev,
>>> +					struct amdgpu_mc *mc)
>>> +{
>>> +	u64 base = mmhub_v1_0_get_fb_location(adev);
>>> +
>>> +	amdgpu_vram_location(adev, &adev->mc, base);
>>> +	adev->mc.gtt_base_align = 0;
>>> +	amdgpu_gtt_location(adev, mc);
>>> +}
>>> +
>>> +/**
>>> + * gmc_v9_0_mc_init - initialize the memory controller driver params
>>> + *
>>> + * @adev: amdgpu_device pointer
>>> + *
>>> + * Look up the amount of vram, vram width, and decide how to place
>>> + * vram and gart within the GPU's physical address space.
>>> + * Returns 0 for success.
>>> + */
>>> +static int gmc_v9_0_mc_init(struct amdgpu_device *adev)
>>> +{
>>> +	u32 tmp;
>>> +	int chansize, numchan;
>>> +
>>> +	/* hbm memory channel size */
>>> +	chansize = 128;
>>> +
>>> +	tmp = RREG32(SOC15_REG_OFFSET(DF, 0, mmDF_CS_AON0_DramBaseAddress0));
>>> +	tmp &= DF_CS_AON0_DramBaseAddress0__IntLvNumChan_MASK;
>>> +	tmp >>= DF_CS_AON0_DramBaseAddress0__IntLvNumChan__SHIFT;
>>> +	switch (tmp) {
>>> +	case 0:
>>> +	default:
>>> +		numchan = 1;
>>> +		break;
>>> +	case 1:
>>> +		numchan = 2;
>>> +		break;
>>> +	case 2:
>>> +		numchan = 0;
>>> +		break;
>>> +	case 3:
>>> +		numchan = 4;
>>> +		break;
>>> +	case 4:
>>> +		numchan = 0;
>>> +		break;
>>> +	case 5:
>>> +		numchan = 8;
>>> +		break;
>>> +	case 6:
>>> +		numchan = 0;
>>> +		break;
>>> +	case 7:
>>> +		numchan = 16;
>>> +		break;
>>> +	case 8:
>>> +		numchan = 2;
>>> +		break;
>>> +	}
>>> +	adev->mc.vram_width = numchan * chansize;
>>> +
>>> +	/* Could aper size report 0 ? */
>>> +	adev->mc.aper_base = pci_resource_start(adev->pdev, 0);
>>> +	adev->mc.aper_size = pci_resource_len(adev->pdev, 0);
>>> +	/* size in MB on si */
>>> +	adev->mc.mc_vram_size =
>>> +		nbio_v6_1_get_memsize(adev) * 1024ULL * 1024ULL;
>>> +	adev->mc.real_vram_size = adev->mc.mc_vram_size;
>>> +	adev->mc.visible_vram_size = adev->mc.aper_size;
>>> +
>>> +	/* In case the PCI BAR is larger than the actual amount of vram */
>>> +	if (adev->mc.visible_vram_size > adev->mc.real_vram_size)
>>> +		adev->mc.visible_vram_size = adev->mc.real_vram_size;
>>> +
>>> +	/* unless the user has overridden it, set the gart
>>> +	 * size equal to 1024 MB or the vram size, whichever is larger.
>>> +	 */
>>> +	if (amdgpu_gart_size == -1)
>>> +		adev->mc.gtt_size = max((1024ULL << 20),
>>> +					adev->mc.mc_vram_size);
>>> +	else
>>> +		adev->mc.gtt_size = (uint64_t)amdgpu_gart_size << 20;
>>> +
>>> +	gmc_v9_0_vram_gtt_location(adev, &adev->mc);
>>> +
>>> +	return 0;
>>> +}
>>> +
>>> +static int gmc_v9_0_gart_init(struct amdgpu_device *adev)
>>> +{
>>> +	int r;
>>> +
>>> +	if (adev->gart.robj) {
>>> +		WARN(1, "VEGA10 PCIE GART already initialized\n");
>>> +		return 0;
>>> +	}
>>> +	/* Initialize common gart structure */
>>> +	r = amdgpu_gart_init(adev);
>>> +	if (r)
>>> +		return r;
>>> +	adev->gart.table_size = adev->gart.num_gpu_pages * 8;
>>> +	adev->gart.gart_pte_flags = AMDGPU_PTE_MTYPE(MTYPE_UC) |
>>> +				 AMDGPU_PTE_EXECUTABLE;
>>> +	return amdgpu_gart_table_vram_alloc(adev);
>>> +}
>>> +
>>> +/*
>>> + * vm
>>> + * VMID 0 is the physical GPU addresses as used by the kernel.
>>> + * VMIDs 1-15 are used for userspace clients and are handled
>>> + * by the amdgpu vm/hsa code.
>>> + */
>>> +/**
>>> + * gmc_v9_0_vm_init - vm init callback
>>> + *
>>> + * @adev: amdgpu_device pointer
>>> + *
>>> + * Inits vega10 specific vm parameters (number of VMs, base of vram for
>>> + * VMIDs 1-15) (vega10).
>>> + * Returns 0 for success.
>>> + */
>>> +static int gmc_v9_0_vm_init(struct amdgpu_device *adev)
>>> +{
>>> +	/*
>>> +	 * number of VMs
>>> +	 * VMID 0 is reserved for System
>>> +	 * amdgpu graphics/compute will use VMIDs 1-7
>>> +	 * amdkfd will use VMIDs 8-15
>>> +	 */
>>> +	adev->vm_manager.num_ids = AMDGPU_NUM_OF_VMIDS;
>>> +	amdgpu_vm_manager_init(adev);
>>> +
>>> +	/* base offset of vram pages */
>>> +	/*XXX This value is not zero for APU*/
>>> +	adev->vm_manager.vram_base_offset = 0;
>>> +
>>> +	return 0;
>>> +}
>>> +
>>> +/**
>>> + * gmc_v9_0_vm_fini - vm fini callback
>>> + *
>>> + * @adev: amdgpu_device pointer
>>> + *
>>> + * Tear down any asic specific VM setup.
>>> + */
>>> +static void gmc_v9_0_vm_fini(struct amdgpu_device *adev)
>>> +{
>>> +	return;
>>> +}
>>> +
>>> +static int gmc_v9_0_sw_init(void *handle)
>>> +{
>>> +	int r;
>>> +	int dma_bits;
>>> +	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
>>> +
>>> +	spin_lock_init(&adev->mc.invalidate_lock);
>>> +
>>> +	if (adev->flags & AMD_IS_APU) {
>>> +		adev->mc.vram_type = AMDGPU_VRAM_TYPE_UNKNOWN;
>>> +	} else {
>>> +		/* XXX Don't know how to get VRAM type yet. */
>>> +		adev->mc.vram_type = AMDGPU_VRAM_TYPE_HBM;
>>> +	}
>>> +
>>> +	/* This interrupt is VMC page fault.*/
>>> +	r = amdgpu_irq_add_id(adev, AMDGPU_IH_CLIENTID_VMC, 0,
>>> +				&adev->mc.vm_fault);
>>> +	if (r)
>>> +		return r;
>>> +
>>> +	/* Adjust VM size here.
>>> +	 * Currently default to 64GB ((16 << 20) 4k pages).
>>> +	 * Max GPUVM size is 48 bits.
>>> +	 */
>>> +	adev->vm_manager.max_pfn = amdgpu_vm_size << 18;
>>> +
>>> +	/* Set the internal MC address mask
>>> +	 * This is the max address of the GPU's
>>> +	 * internal address space.
>>> +	 */
>>> +	adev->mc.mc_mask = 0xffffffffffffULL; /* 48 bit MC */
>>> +
>>> +	/* set DMA mask + need_dma32 flags.
>>> +	 * PCIE - can handle 44-bits.
>>> +	 * IGP - can handle 44-bits
>>> +	 * PCI - dma32 for legacy pci gart, 44 bits on vega10
>>> +	 */
>>> +	adev->need_dma32 = false;
>>> +	dma_bits = adev->need_dma32 ? 32 : 44;
>>> +	r = pci_set_dma_mask(adev->pdev, DMA_BIT_MASK(dma_bits));
>>> +	if (r) {
>>> +		adev->need_dma32 = true;
>>> +		dma_bits = 32;
>>> +		printk(KERN_WARNING "amdgpu: No suitable DMA available.\n");
>>> +	}
>>> +	r = pci_set_consistent_dma_mask(adev->pdev, DMA_BIT_MASK(dma_bits));
>>> +	if (r) {
>>> +		pci_set_consistent_dma_mask(adev->pdev, DMA_BIT_MASK(32));
>>> +		printk(KERN_WARNING "amdgpu: No coherent DMA available.\n");
>>> +	}
>>> +
>>> +	r = gmc_v9_0_mc_init(adev);
>>> +	if (r)
>>> +		return r;
>>> +
>>> +	/* Memory manager */
>>> +	r = amdgpu_bo_init(adev);
>>> +	if (r)
>>> +		return r;
>>> +
>>> +	r = gmc_v9_0_gart_init(adev);
>>> +	if (r)
>>> +		return r;
>>> +
>>> +	if (!adev->vm_manager.enabled) {
>>> +		r = gmc_v9_0_vm_init(adev);
>>> +		if (r) {
>>> +			dev_err(adev->dev,
>>> +				"vm manager initialization failed (%d).\n", r);
>>> +			return r;
>>> +		}
>>> +		adev->vm_manager.enabled = true;
>>> +	}
>>> +	return r;
>>> +}
>>> +
>>> +/**
>>> + * gmc_v9_0_gart_fini - gart fini callback
>>> + *
>>> + * @adev: amdgpu_device pointer
>>> + *
>>> + * Tears down the driver GART/VM setup.
>>> + */
>>> +static void gmc_v9_0_gart_fini(struct amdgpu_device *adev)
>>> +{
>>> +	amdgpu_gart_table_vram_free(adev);
>>> +	amdgpu_gart_fini(adev);
>>> +}
>>> +
>>> +static int gmc_v9_0_sw_fini(void *handle)
>>> +{
>>> +	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
>>> +
>>> +	if (adev->vm_manager.enabled) {
>>> +		amdgpu_vm_manager_fini(adev);
>>> +		gmc_v9_0_vm_fini(adev);
>>> +		adev->vm_manager.enabled = false;
>>> +	}
>>> +	gmc_v9_0_gart_fini(adev);
>>> +	amdgpu_gem_force_release(adev);
>>> +	amdgpu_bo_fini(adev);
>>> +
>>> +	return 0;
>>> +}
>>> +
>>> +static void gmc_v9_0_init_golden_registers(struct amdgpu_device *adev)
>>> +{
>>> +	switch (adev->asic_type) {
>>> +	case CHIP_VEGA10:
>>> +		break;
>>> +	default:
>>> +		break;
>>> +	}
>>> +}
>>> +
>>> +/**
>>> + * gmc_v9_0_gart_enable - gart enable
>>> + *
>>> + * @adev: amdgpu_device pointer
>>> + */
>>> +static int gmc_v9_0_gart_enable(struct amdgpu_device *adev)
>>> +{
>>> +	int r;
>>> +	bool value;
>>> +	u32 tmp;
>>> +
>>> +	amdgpu_program_register_sequence(adev,
>>> +		golden_settings_vega10_hdp,
>>> +		(const u32)ARRAY_SIZE(golden_settings_vega10_hdp));
>>> +
>>> +	if (adev->gart.robj == NULL) {
>>> +		dev_err(adev->dev, "No VRAM object for PCIE GART.\n");
>>> +		return -EINVAL;
>>> +	}
>>> +	r = amdgpu_gart_table_vram_pin(adev);
>>> +	if (r)
>>> +		return r;
>>> +
>>> +	/* After HDP is initialized, flush HDP.*/
>>> +	nbio_v6_1_hdp_flush(adev);
>>> +
>>> +	r = gfxhub_v1_0_gart_enable(adev);
>>> +	if (r)
>>> +		return r;
>>> +
>>> +	r = mmhub_v1_0_gart_enable(adev);
>>> +	if (r)
>>> +		return r;
>>> +
>>> +	tmp = RREG32(SOC15_REG_OFFSET(HDP, 0, mmHDP_MISC_CNTL));
>>> +	tmp |= HDP_MISC_CNTL__FLUSH_INVALIDATE_CACHE_MASK;
>>> +	WREG32(SOC15_REG_OFFSET(HDP, 0, mmHDP_MISC_CNTL), tmp);
>>> +
>>> +	tmp = RREG32(SOC15_REG_OFFSET(HDP, 0, mmHDP_HOST_PATH_CNTL));
>>> +	WREG32(SOC15_REG_OFFSET(HDP, 0, mmHDP_HOST_PATH_CNTL), tmp);
>>> +
>>> +	if (amdgpu_vm_fault_stop == AMDGPU_VM_FAULT_STOP_ALWAYS)
>>> +		value = false;
>>> +	else
>>> +		value = true;
>>> +
>>> +	gfxhub_v1_0_set_fault_enable_default(adev, value);
>>> +	mmhub_v1_0_set_fault_enable_default(adev, value);
>>> +
>>> +	gmc_v9_0_gart_flush_gpu_tlb(adev, 0);
>>> +
>>> +	DRM_INFO("PCIE GART of %uM enabled (table at 0x%016llX).\n",
>>> +		 (unsigned)(adev->mc.gtt_size >> 20),
>>> +		 (unsigned long long)adev->gart.table_addr);
>>> +	adev->gart.ready = true;
>>> +	return 0;
>>> +}
>>> +
>>> +static int gmc_v9_0_hw_init(void *handle)
>>> +{
>>> +	int r;
>>> +	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
>>> +
>>> +	/* The sequence of these two function calls matters.*/
>>> +	gmc_v9_0_init_golden_registers(adev);
>>> +
>>> +	r = gmc_v9_0_gart_enable(adev);
>>> +
>>> +	return r;
>>> +}
>>> +
>>> +/**
>>> + * gmc_v9_0_gart_disable - gart disable
>>> + *
>>> + * @adev: amdgpu_device pointer
>>> + *
>>> + * This disables all VM page tables.
>>> + */
>>> +static void gmc_v9_0_gart_disable(struct amdgpu_device *adev)
>>> +{
>>> +	gfxhub_v1_0_gart_disable(adev);
>>> +	mmhub_v1_0_gart_disable(adev);
>>> +	amdgpu_gart_table_vram_unpin(adev);
>>> +}
>>> +
>>> +static int gmc_v9_0_hw_fini(void *handle)
>>> +{
>>> +	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
>>> +
>>> +	amdgpu_irq_put(adev, &adev->mc.vm_fault, 0);
>>> +	gmc_v9_0_gart_disable(adev);
>>> +
>>> +	return 0;
>>> +}
>>> +
>>> +static int gmc_v9_0_suspend(void *handle)
>>> +{
>>> +	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
>>> +
>>> +	if (adev->vm_manager.enabled) {
>>> +		gmc_v9_0_vm_fini(adev);
>>> +		adev->vm_manager.enabled = false;
>>> +	}
>>> +	gmc_v9_0_hw_fini(adev);
>>> +
>>> +	return 0;
>>> +}
>>> +
>>> +static int gmc_v9_0_resume(void *handle)
>>> +{
>>> +	int r;
>>> +	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
>>> +
>>> +	r = gmc_v9_0_hw_init(adev);
>>> +	if (r)
>>> +		return r;
>>> +
>>> +	if (!adev->vm_manager.enabled) {
>>> +		r = gmc_v9_0_vm_init(adev);
>>> +		if (r) {
>>> +			dev_err(adev->dev,
>>> +				"vm manager initialization failed (%d).\n", r);
>>> +			return r;
>>> +		}
>>> +		adev->vm_manager.enabled = true;
>>> +	}
>>> +
>>> +	return r;
>>> +}
>>> +
>>> +static bool gmc_v9_0_is_idle(void *handle)
>>> +{
>>> +	/* MC is always ready in GMC v9.*/
>>> +	return true;
>>> +}
>>> +
>>> +static int gmc_v9_0_wait_for_idle(void *handle)
>>> +{
>>> +	/* There is no need to wait for MC idle in GMC v9.*/
>>> +	return 0;
>>> +}
>>> +
>>> +static int gmc_v9_0_soft_reset(void *handle)
>>> +{
>>> +	/* XXX for emulation.*/
>>> +	return 0;
>>> +}
>>> +
>>> +static int gmc_v9_0_set_clockgating_state(void *handle,
>>> +					enum amd_clockgating_state state)
>>> +{
>>> +	return 0;
>>> +}
>>> +
>>> +static int gmc_v9_0_set_powergating_state(void *handle,
>>> +					enum amd_powergating_state state)
>>> +{
>>> +	return 0;
>>> +}
>>> +
>>> +const struct amd_ip_funcs gmc_v9_0_ip_funcs = {
>>> +	.name = "gmc_v9_0",
>>> +	.early_init = gmc_v9_0_early_init,
>>> +	.late_init = gmc_v9_0_late_init,
>>> +	.sw_init = gmc_v9_0_sw_init,
>>> +	.sw_fini = gmc_v9_0_sw_fini,
>>> +	.hw_init = gmc_v9_0_hw_init,
>>> +	.hw_fini = gmc_v9_0_hw_fini,
>>> +	.suspend = gmc_v9_0_suspend,
>>> +	.resume = gmc_v9_0_resume,
>>> +	.is_idle = gmc_v9_0_is_idle,
>>> +	.wait_for_idle = gmc_v9_0_wait_for_idle,
>>> +	.soft_reset = gmc_v9_0_soft_reset,
>>> +	.set_clockgating_state = gmc_v9_0_set_clockgating_state,
>>> +	.set_powergating_state = gmc_v9_0_set_powergating_state,
>>> +};
>>> +
>>> +const struct amdgpu_ip_block_version gmc_v9_0_ip_block =
>>> +{
>>> +	.type = AMD_IP_BLOCK_TYPE_GMC,
>>> +	.major = 9,
>>> +	.minor = 0,
>>> +	.rev = 0,
>>> +	.funcs = &gmc_v9_0_ip_funcs,
>>> +};
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.h b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.h
>>> new file mode 100644
>>> index 0000000..b030ca5
>>> --- /dev/null
>>> +++ b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.h
>>> @@ -0,0 +1,30 @@
>>> +/*
>>> + * Copyright 2016 Advanced Micro Devices, Inc.
>>> + *
>>> + * Permission is hereby granted, free of charge, to any person obtaining a
>>> + * copy of this software and associated documentation files (the "Software"),
>>> + * to deal in the Software without restriction, including without limitation
>>> + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
>>> + * and/or sell copies of the Software, and to permit persons to whom the
>>> + * Software is furnished to do so, subject to the following conditions:
>>> + *
>>> + * The above copyright notice and this permission notice shall be included in
>>> + * all copies or substantial portions of the Software.
>>> + *
>>> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
>>> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
>>> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
>>> + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
>>> + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
>>> + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
>>> + * OTHER DEALINGS IN THE SOFTWARE.
>>> + *
>>> + */
>>> +
>>> +#ifndef __GMC_V9_0_H__
>>> +#define __GMC_V9_0_H__
>>> +
>>> +extern const struct amd_ip_funcs gmc_v9_0_ip_funcs;
>>> +extern const struct amdgpu_ip_block_version gmc_v9_0_ip_block;
>>> +
>>> +#endif
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c b/drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c
>>> new file mode 100644
>>> index 0000000..b1e0e6b
>>> --- /dev/null
>>> +++ b/drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c
>>> @@ -0,0 +1,585 @@
>>> +/*
>>> + * Copyright 2016 Advanced Micro Devices, Inc.
>>> + *
>>> + * Permission is hereby granted, free of charge, to any person obtaining a
>>> + * copy of this software and associated documentation files (the "Software"),
>>> + * to deal in the Software without restriction, including without limitation
>>> + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
>>> + * and/or sell copies of the Software, and to permit persons to whom the
>>> + * Software is furnished to do so, subject to the following conditions:
>>> + *
>>> + * The above copyright notice and this permission notice shall be included in
>>> + * all copies or substantial portions of the Software.
>>> + *
>>> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
>>> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
>>> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
>>> + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
>>> + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
>>> + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
>>> + * OTHER DEALINGS IN THE SOFTWARE.
>>> + *
>>> + */
>>> +#include "amdgpu.h"
>>> +#include "mmhub_v1_0.h"
>>> +
>>> +#include "vega10/soc15ip.h"
>>> +#include "vega10/MMHUB/mmhub_1_0_offset.h"
>>> +#include "vega10/MMHUB/mmhub_1_0_sh_mask.h"
>>> +#include "vega10/MMHUB/mmhub_1_0_default.h"
>>> +#include "vega10/ATHUB/athub_1_0_offset.h"
>>> +#include "vega10/ATHUB/athub_1_0_sh_mask.h"
>>> +#include "vega10/ATHUB/athub_1_0_default.h"
>>> +#include "vega10/vega10_enum.h"
>>> +
>>> +#include "soc15_common.h"
>>> +
>>> +u64 mmhub_v1_0_get_fb_location(struct amdgpu_device *adev)
>>> +{
>>> +	u64 base = RREG32(SOC15_REG_OFFSET(MMHUB, 0, mmMC_VM_FB_LOCATION_BASE));
>>> +
>>> +	base &= MC_VM_FB_LOCATION_BASE__FB_BASE_MASK;
>>> +	base <<= 24;
>>> +
>>> +	return base;
>>> +}
>>> +
>>> +int mmhub_v1_0_gart_enable(struct amdgpu_device *adev)
>>> +{
>>> +	u32 tmp;
>>> +	u64 value;
>>> +	uint64_t addr;
>>> +	u32 i;
>>> +
>>> +	/* Program MC. */
>>> +	/* Update configuration */
>>> +	DRM_INFO("%s -- in\n", __func__);
>>> +	WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmMC_VM_SYSTEM_APERTURE_LOW_ADDR),
>>> +		adev->mc.vram_start >> 18);
>>> +	WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmMC_VM_SYSTEM_APERTURE_HIGH_ADDR),
>>> +		adev->mc.vram_end >> 18);
>>> +	value = adev->vram_scratch.gpu_addr - adev->mc.vram_start +
>>> +		adev->vm_manager.vram_base_offset;
>>> +	WREG32(SOC15_REG_OFFSET(MMHUB, 0,
>>> +				mmMC_VM_SYSTEM_APERTURE_DEFAULT_ADDR_LSB),
>>> +				(u32)(value >> 12));
>>> +	WREG32(SOC15_REG_OFFSET(MMHUB, 0,
>>> +				mmMC_VM_SYSTEM_APERTURE_DEFAULT_ADDR_MSB),
>>> +				(u32)(value >> 44));
>>> +
>>> +	/* Disable AGP. */
>>> +	WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmMC_VM_AGP_BASE), 0);
>>> +	WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmMC_VM_AGP_TOP), 0);
>>> +	WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmMC_VM_AGP_BOT), 0x00FFFFFF);
>>> +
>>> +	/* GART Enable. */
>>> +
>>> +	/* Setup TLB control */
>>> +	tmp = RREG32(SOC15_REG_OFFSET(MMHUB, 0, mmMC_VM_MX_L1_TLB_CNTL));
>>> +	tmp = REG_SET_FIELD(tmp, MC_VM_MX_L1_TLB_CNTL, ENABLE_L1_TLB, 1);
>>> +	tmp = REG_SET_FIELD(tmp, MC_VM_MX_L1_TLB_CNTL,
>>> +			    SYSTEM_ACCESS_MODE, 3);
>>> +	tmp = REG_SET_FIELD(tmp, MC_VM_MX_L1_TLB_CNTL,
>>> +			    ENABLE_ADVANCED_DRIVER_MODEL, 1);
>>> +	tmp = REG_SET_FIELD(tmp, MC_VM_MX_L1_TLB_CNTL,
>>> +			    SYSTEM_APERTURE_UNMAPPED_ACCESS, 0);
>>> +	tmp = REG_SET_FIELD(tmp, MC_VM_MX_L1_TLB_CNTL, ECO_BITS, 0);
>>> +	tmp = REG_SET_FIELD(tmp, MC_VM_MX_L1_TLB_CNTL,
>>> +			    MTYPE, MTYPE_UC);/* XXX for emulation. */
>>> +	tmp = REG_SET_FIELD(tmp, MC_VM_MX_L1_TLB_CNTL, ATC_EN, 1);
>>> +	WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmMC_VM_MX_L1_TLB_CNTL), tmp);
>>> +
>>> +	/* Setup L2 cache */
>>> +	tmp = RREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_L2_CNTL));
>>> +	tmp = REG_SET_FIELD(tmp, VM_L2_CNTL, ENABLE_L2_CACHE, 1);
>>> +	tmp = REG_SET_FIELD(tmp, VM_L2_CNTL,
>>> +			    ENABLE_L2_FRAGMENT_PROCESSING, 0);
>>> +	tmp = REG_SET_FIELD(tmp, VM_L2_CNTL,
>>> +			    L2_PDE0_CACHE_TAG_GENERATION_MODE,
>>> +			    0);/* XXX for emulation, Refer to closed source code.*/
>>> +	tmp = REG_SET_FIELD(tmp, VM_L2_CNTL, PDE_FAULT_CLASSIFICATION, 1);
>>> +	tmp = REG_SET_FIELD(tmp, VM_L2_CNTL,
>>> +			    CONTEXT1_IDENTITY_ACCESS_MODE, 1);
>>> +	tmp = REG_SET_FIELD(tmp, VM_L2_CNTL,
>>> +			    IDENTITY_MODE_FRAGMENT_SIZE, 0);
>>> +	WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_L2_CNTL), tmp);
>>> +
>>> +	tmp = RREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_L2_CNTL2));
>>> +	tmp = REG_SET_FIELD(tmp, VM_L2_CNTL2, INVALIDATE_ALL_L1_TLBS, 1);
>>> +	tmp = REG_SET_FIELD(tmp, VM_L2_CNTL2, INVALIDATE_L2_CACHE, 1);
>>> +	WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_L2_CNTL2), tmp);
>>> +
>>> +	tmp = mmVM_L2_CNTL3_DEFAULT;
>>> +	WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_L2_CNTL3), tmp);
>>> +
>>> +	tmp = RREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_L2_CNTL4));
>>> +	tmp = REG_SET_FIELD(tmp, VM_L2_CNTL4,
>>> +			    VMC_TAP_PDE_REQUEST_PHYSICAL, 0);
>>> +	tmp = REG_SET_FIELD(tmp, VM_L2_CNTL4,
>>> +			    VMC_TAP_PTE_REQUEST_PHYSICAL, 0);
>>> +	WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_L2_CNTL4), tmp);
>>> +
>>> +	/* setup context0 */
>>> +	WREG32(SOC15_REG_OFFSET(MMHUB, 0,
>>> +				mmVM_CONTEXT0_PAGE_TABLE_START_ADDR_LO32),
>>> +		(u32)(adev->mc.gtt_start >> 12));
>>> +	WREG32(SOC15_REG_OFFSET(MMHUB, 0,
>>> +				mmVM_CONTEXT0_PAGE_TABLE_START_ADDR_HI32),
>>> +		(u32)(adev->mc.gtt_start >> 44));
>>> +
>>> +	WREG32(SOC15_REG_OFFSET(MMHUB, 0,
>>> +				mmVM_CONTEXT0_PAGE_TABLE_END_ADDR_LO32),
>>> +		(u32)(adev->mc.gtt_end >> 12));
>>> +	WREG32(SOC15_REG_OFFSET(MMHUB, 0,
>>> +				mmVM_CONTEXT0_PAGE_TABLE_END_ADDR_HI32),
>>> +		(u32)(adev->mc.gtt_end >> 44));
>>> +
>>> +	BUG_ON(adev->gart.table_addr & (~0x0000FFFFFFFFF000ULL));
>>> +	value = adev->gart.table_addr - adev->mc.vram_start +
>>> +		adev->vm_manager.vram_base_offset;
>>> +	value &= 0x0000FFFFFFFFF000ULL;
>>> +	value |= 0x1; /* valid bit */
>>> +
>>> +	WREG32(SOC15_REG_OFFSET(MMHUB, 0,
>>> +				mmVM_CONTEXT0_PAGE_TABLE_BASE_ADDR_LO32),
>>> +		(u32)value);
>>> +	WREG32(SOC15_REG_OFFSET(MMHUB, 0,
>>> +				mmVM_CONTEXT0_PAGE_TABLE_BASE_ADDR_HI32),
>>> +		(u32)(value >> 32));
>>> +
>>> +	WREG32(SOC15_REG_OFFSET(MMHUB, 0,
>>> +				mmVM_L2_PROTECTION_FAULT_DEFAULT_ADDR_LO32),
>>> +		(u32)(adev->dummy_page.addr >> 12));
>>> +	WREG32(SOC15_REG_OFFSET(MMHUB, 0,
>>> +				mmVM_L2_PROTECTION_FAULT_DEFAULT_ADDR_HI32),
>>> +		(u32)(adev->dummy_page.addr >> 44));
>>> +
>>> +	tmp = RREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_L2_PROTECTION_FAULT_CNTL2));
>>> +	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL2,
>>> +			    ACTIVE_PAGE_MIGRATION_PTE_READ_RETRY, 1);
>>> +	WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_L2_PROTECTION_FAULT_CNTL2), tmp);
>>> +
>>> +	addr = SOC15_REG_OFFSET(MMHUB, 0, mmVM_CONTEXT0_CNTL);
>>> +	tmp = RREG32(addr);
>>> +
>>> +	tmp = REG_SET_FIELD(tmp, VM_CONTEXT0_CNTL, ENABLE_CONTEXT, 1);
>>> +	tmp = REG_SET_FIELD(tmp, VM_CONTEXT0_CNTL, PAGE_TABLE_DEPTH, 0);
>>> +	WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_CONTEXT0_CNTL), tmp);
>>> +
>>> +	tmp = RREG32(addr);
>>> +
>>> +	/* Disable identity aperture.*/
>>> +	WREG32(SOC15_REG_OFFSET(MMHUB, 0,
>>> +		mmVM_L2_CONTEXT1_IDENTITY_APERTURE_LOW_ADDR_LO32), 0xFFFFFFFF);
>>> +	WREG32(SOC15_REG_OFFSET(MMHUB, 0,
>>> +		mmVM_L2_CONTEXT1_IDENTITY_APERTURE_LOW_ADDR_HI32), 0x0000000F);
>>> +
>>> +	WREG32(SOC15_REG_OFFSET(MMHUB, 0,
>>> +		mmVM_L2_CONTEXT1_IDENTITY_APERTURE_HIGH_ADDR_LO32), 0);
>>> +	WREG32(SOC15_REG_OFFSET(MMHUB, 0,
>>> +		mmVM_L2_CONTEXT1_IDENTITY_APERTURE_HIGH_ADDR_HI32), 0);
>>> +
>>> +	WREG32(SOC15_REG_OFFSET(MMHUB, 0,
>>> +		mmVM_L2_CONTEXT_IDENTITY_PHYSICAL_OFFSET_LO32), 0);
>>> +	WREG32(SOC15_REG_OFFSET(MMHUB, 0,
>>> +		mmVM_L2_CONTEXT_IDENTITY_PHYSICAL_OFFSET_HI32), 0);
>>> +
>>> +	for (i = 0; i <= 14; i++) {
>>> +		tmp = RREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_CONTEXT1_CNTL) + i);
>>> +		tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
>>> +				ENABLE_CONTEXT, 1);
>>> +		tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
>>> +				PAGE_TABLE_DEPTH, 1);
>>> +		tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
>>> +				RANGE_PROTECTION_FAULT_ENABLE_DEFAULT, 1);
>>> +		tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
>>> +				DUMMY_PAGE_PROTECTION_FAULT_ENABLE_DEFAULT, 1);
>>> +		tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
>>> +				PDE0_PROTECTION_FAULT_ENABLE_DEFAULT, 1);
>>> +		tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
>>> +				VALID_PROTECTION_FAULT_ENABLE_DEFAULT, 1);
>>> +		tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
>>> +				READ_PROTECTION_FAULT_ENABLE_DEFAULT, 1);
>>> +		tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
>>> +				WRITE_PROTECTION_FAULT_ENABLE_DEFAULT, 1);
>>> +		tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
>>> +				EXECUTE_PROTECTION_FAULT_ENABLE_DEFAULT, 1);
>>> +		tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
>>> +				PAGE_TABLE_BLOCK_SIZE,
>>> +				amdgpu_vm_block_size - 9);
>>> +		WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_CONTEXT1_CNTL) + i, tmp);
>>> +		WREG32(SOC15_REG_OFFSET(MMHUB, 0,
>>> +			mmVM_CONTEXT1_PAGE_TABLE_START_ADDR_LO32) + i*2, 0);
>>> +		WREG32(SOC15_REG_OFFSET(MMHUB, 0,
>>> +			mmVM_CONTEXT1_PAGE_TABLE_START_ADDR_HI32) + i*2, 0);
>>> +		WREG32(SOC15_REG_OFFSET(MMHUB, 0,
>>> +			mmVM_CONTEXT1_PAGE_TABLE_END_ADDR_LO32) + i*2,
>>> +				adev->vm_manager.max_pfn - 1);
>>> +		WREG32(SOC15_REG_OFFSET(MMHUB, 0,
>>> +			mmVM_CONTEXT1_PAGE_TABLE_END_ADDR_HI32) + i*2, 0);
>>> +	}
>>> +
>>> +	return 0;
>>> +}
>>> +
>>> +void mmhub_v1_0_gart_disable(struct amdgpu_device *adev)
>>> +{
>>> +	u32 tmp;
>>> +	u32 i;
>>> +
>>> +	/* Disable all tables */
>>> +	for (i = 0; i < 16; i++)
>>> +		WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_CONTEXT0_CNTL) + i, 0);
>>> +
>>> +	/* Setup TLB control */
>>> +	tmp = RREG32(SOC15_REG_OFFSET(MMHUB, 0, mmMC_VM_MX_L1_TLB_CNTL));
>>> +	tmp = REG_SET_FIELD(tmp, MC_VM_MX_L1_TLB_CNTL, ENABLE_L1_TLB, 0);
>>> +	tmp = REG_SET_FIELD(tmp, MC_VM_MX_L1_TLB_CNTL,
>>> +			    ENABLE_ADVANCED_DRIVER_MODEL, 0);
>>> +	WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmMC_VM_MX_L1_TLB_CNTL), tmp);
>>> +
>>> +	/* Setup L2 cache */
>>> +	tmp = RREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_L2_CNTL));
>>> +	tmp = REG_SET_FIELD(tmp, VM_L2_CNTL, ENABLE_L2_CACHE, 0);
>>> +	WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_L2_CNTL), tmp);
>>> +	WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_L2_CNTL3), 0);
>>> +}
>>> +
>>> +/**
>>> + * mmhub_v1_0_set_fault_enable_default - update GART/VM fault handling
>>> + *
>>> + * @adev: amdgpu_device pointer
>>> + * @value: true redirects VM faults to the default page
>>> + */
>>> +void mmhub_v1_0_set_fault_enable_default(struct amdgpu_device *adev, bool value)
>>> +{
>>> +	u32 tmp;
>>> +
>>> +	tmp = RREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_L2_PROTECTION_FAULT_CNTL));
>>> +	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
>>> +			RANGE_PROTECTION_FAULT_ENABLE_DEFAULT, value);
>>> +	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
>>> +			PDE0_PROTECTION_FAULT_ENABLE_DEFAULT, value);
>>> +	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
>>> +			PDE1_PROTECTION_FAULT_ENABLE_DEFAULT, value);
>>> +	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
>>> +			PDE2_PROTECTION_FAULT_ENABLE_DEFAULT, value);
>>> +	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
>>> +			TRANSLATE_FURTHER_PROTECTION_FAULT_ENABLE_DEFAULT, value);
>>> +	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
>>> +			NACK_PROTECTION_FAULT_ENABLE_DEFAULT, value);
>>> +	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
>>> +			DUMMY_PAGE_PROTECTION_FAULT_ENABLE_DEFAULT, value);
>>> +	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
>>> +			VALID_PROTECTION_FAULT_ENABLE_DEFAULT, value);
>>> +	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
>>> +			READ_PROTECTION_FAULT_ENABLE_DEFAULT, value);
>>> +	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
>>> +			WRITE_PROTECTION_FAULT_ENABLE_DEFAULT, value);
>>> +	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
>>> +			EXECUTE_PROTECTION_FAULT_ENABLE_DEFAULT, value);
>>> +	WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_L2_PROTECTION_FAULT_CNTL), tmp);
>>> +}
>>> +
>>> +static uint32_t mmhub_v1_0_get_invalidate_req(unsigned int vm_id)
>>> +{
>>> +	u32 req = 0;
>>> +
>>> +	/* invalidate using legacy mode on vm_id*/
>>> +	req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ,
>>> +			    PER_VMID_INVALIDATE_REQ, 1 << vm_id);
>>> +	req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ, FLUSH_TYPE, 0);
>>> +	req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ, INVALIDATE_L2_PTES, 1);
>>> +	req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ, INVALIDATE_L2_PDE0, 1);
>>> +	req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ, INVALIDATE_L2_PDE1, 1);
>>> +	req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ, INVALIDATE_L2_PDE2, 1);
>>> +	req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ, INVALIDATE_L1_PTES, 1);
>>> +	req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ,
>>> +			    CLEAR_PROTECTION_FAULT_STATUS_ADDR, 0);
>>> +
>>> +	return req;
>>> +}
>>> +
>>> +static uint32_t mmhub_v1_0_get_vm_protection_bits(void)
>>> +{
>>> +	return (VM_CONTEXT1_CNTL__RANGE_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK |
>>> +		VM_CONTEXT1_CNTL__DUMMY_PAGE_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK |
>>> +		VM_CONTEXT1_CNTL__PDE0_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK |
>>> +		VM_CONTEXT1_CNTL__VALID_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK |
>>> +		VM_CONTEXT1_CNTL__READ_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK |
>>> +		VM_CONTEXT1_CNTL__WRITE_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK |
>>> +		VM_CONTEXT1_CNTL__EXECUTE_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK);
>>> +}
>>> +
>>> +static int mmhub_v1_0_early_init(void *handle)
>>> +{
>>> +	return 0;
>>> +}
>>> +
>>> +static int mmhub_v1_0_late_init(void *handle)
>>> +{
>>> +	return 0;
>>> +}
>>> +
>>> +static int mmhub_v1_0_sw_init(void *handle)
>>> +{
>>> +	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
>>> +	struct amdgpu_vmhub *hub = &adev->vmhub[AMDGPU_MMHUB];
>>> +
>>> +	hub->ctx0_ptb_addr_lo32 =
>>> +		SOC15_REG_OFFSET(MMHUB, 0,
>>> +				 mmVM_CONTEXT0_PAGE_TABLE_BASE_ADDR_LO32);
>>> +	hub->ctx0_ptb_addr_hi32 =
>>> +		SOC15_REG_OFFSET(MMHUB, 0,
>>> +				 mmVM_CONTEXT0_PAGE_TABLE_BASE_ADDR_HI32);
>>> +	hub->vm_inv_eng0_req =
>>> +		SOC15_REG_OFFSET(MMHUB, 0, mmVM_INVALIDATE_ENG0_REQ);
>>> +	hub->vm_inv_eng0_ack =
>>> +		SOC15_REG_OFFSET(MMHUB, 0, mmVM_INVALIDATE_ENG0_ACK);
>>> +	hub->vm_context0_cntl =
>>> +		SOC15_REG_OFFSET(MMHUB, 0, mmVM_CONTEXT0_CNTL);
>>> +	hub->vm_l2_pro_fault_status =
>>> +		SOC15_REG_OFFSET(MMHUB, 0, mmVM_L2_PROTECTION_FAULT_STATUS);
>>> +	hub->vm_l2_pro_fault_cntl =
>>> +		SOC15_REG_OFFSET(MMHUB, 0, mmVM_L2_PROTECTION_FAULT_CNTL);
>>> +
>>> +	hub->get_invalidate_req = mmhub_v1_0_get_invalidate_req;
>>> +	hub->get_vm_protection_bits = mmhub_v1_0_get_vm_protection_bits;
>>> +
>>> +	return 0;
>>> +}
>>> +
>>> +static int mmhub_v1_0_sw_fini(void *handle)
>>> +{
>>> +	return 0;
>>> +}
>>> +
>>> +static int mmhub_v1_0_hw_init(void *handle)
>>> +{
>>> +	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
>>> +	unsigned i;
>>> +
>>> +	for (i = 0; i < 18; ++i) {
>>> +		WREG32(SOC15_REG_OFFSET(MMHUB, 0,
>>> +					mmVM_INVALIDATE_ENG0_ADDR_RANGE_LO32) +
>>> +		       2 * i, 0xffffffff);
>>> +		WREG32(SOC15_REG_OFFSET(MMHUB, 0,
>>> +					mmVM_INVALIDATE_ENG0_ADDR_RANGE_HI32) +
>>> +		       2 * i, 0x1f);
>>> +	}
>>> +
>>> +	return 0;
>>> +}
>>> +
>>> +static int mmhub_v1_0_hw_fini(void *handle)
>>> +{
>>> +	return 0;
>>> +}
>>> +
>>> +static int mmhub_v1_0_suspend(void *handle)
>>> +{
>>> +	return 0;
>>> +}
>>> +
>>> +static int mmhub_v1_0_resume(void *handle)
>>> +{
>>> +	return 0;
>>> +}
>>> +
>>> +static bool mmhub_v1_0_is_idle(void *handle)
>>> +{
>>> +	return true;
>>> +}
>>> +
>>> +static int mmhub_v1_0_wait_for_idle(void *handle)
>>> +{
>>> +	return 0;
>>> +}
>>> +
>>> +static int mmhub_v1_0_soft_reset(void *handle)
>>> +{
>>> +	return 0;
>>> +}
>>> +
>>> +static void mmhub_v1_0_update_medium_grain_clock_gating(struct amdgpu_device *adev,
>>> +							bool enable)
>>> +{
>>> +	uint32_t def, data, def1, data1, def2, data2;
>>> +
>>> +	def  = data  = RREG32(SOC15_REG_OFFSET(MMHUB, 0, mmATC_L2_MISC_CG));
>>> +	def1 = data1 = RREG32(SOC15_REG_OFFSET(MMHUB, 0, mmDAGB0_CNTL_MISC2));
>>> +	def2 = data2 = RREG32(SOC15_REG_OFFSET(MMHUB, 0, mmDAGB1_CNTL_MISC2));
>>> +
>>> +	if (enable && (adev->cg_flags & AMD_CG_SUPPORT_MC_MGCG)) {
>>> +		data |= ATC_L2_MISC_CG__ENABLE_MASK;
>>> +
>>> +		data1 &= ~(DAGB0_CNTL_MISC2__DISABLE_WRREQ_CG_MASK |
>>> +			   DAGB0_CNTL_MISC2__DISABLE_WRRET_CG_MASK |
>>> +			   DAGB0_CNTL_MISC2__DISABLE_RDREQ_CG_MASK |
>>> +			   DAGB0_CNTL_MISC2__DISABLE_RDRET_CG_MASK |
>>> +			   DAGB0_CNTL_MISC2__DISABLE_TLBWR_CG_MASK |
>>> +			   DAGB0_CNTL_MISC2__DISABLE_TLBRD_CG_MASK);
>>> +
>>> +		data2 &= ~(DAGB1_CNTL_MISC2__DISABLE_WRREQ_CG_MASK |
>>> +			   DAGB1_CNTL_MISC2__DISABLE_WRRET_CG_MASK |
>>> +			   DAGB1_CNTL_MISC2__DISABLE_RDREQ_CG_MASK |
>>> +			   DAGB1_CNTL_MISC2__DISABLE_RDRET_CG_MASK |
>>> +			   DAGB1_CNTL_MISC2__DISABLE_TLBWR_CG_MASK |
>>> +			   DAGB1_CNTL_MISC2__DISABLE_TLBRD_CG_MASK);
>>> +	} else {
>>> +		data &= ~ATC_L2_MISC_CG__ENABLE_MASK;
>>> +
>>> +		data1 |= (DAGB0_CNTL_MISC2__DISABLE_WRREQ_CG_MASK |
>>> +			  DAGB0_CNTL_MISC2__DISABLE_WRRET_CG_MASK |
>>> +			  DAGB0_CNTL_MISC2__DISABLE_RDREQ_CG_MASK |
>>> +			  DAGB0_CNTL_MISC2__DISABLE_RDRET_CG_MASK |
>>> +			  DAGB0_CNTL_MISC2__DISABLE_TLBWR_CG_MASK |
>>> +			  DAGB0_CNTL_MISC2__DISABLE_TLBRD_CG_MASK);
>>> +
>>> +		data2 |= (DAGB1_CNTL_MISC2__DISABLE_WRREQ_CG_MASK |
>>> +			  DAGB1_CNTL_MISC2__DISABLE_WRRET_CG_MASK |
>>> +			  DAGB1_CNTL_MISC2__DISABLE_RDREQ_CG_MASK |
>>> +			  DAGB1_CNTL_MISC2__DISABLE_RDRET_CG_MASK |
>>> +			  DAGB1_CNTL_MISC2__DISABLE_TLBWR_CG_MASK |
>>> +			  DAGB1_CNTL_MISC2__DISABLE_TLBRD_CG_MASK);
>>> +	}
>>> +
>>> +	if (def != data)
>>> +		WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmATC_L2_MISC_CG), data);
>>> +
>>> +	if (def1 != data1)
>>> +		WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmDAGB0_CNTL_MISC2), data1);
>>> +
>>> +	if (def2 != data2)
>>> +		WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmDAGB1_CNTL_MISC2), data2);
>>> +}
>>> +
>>> +static void athub_update_medium_grain_clock_gating(struct amdgpu_device *adev,
>>> +						   bool enable)
>>> +{
>>> +	uint32_t def, data;
>>> +
>>> +	def = data = RREG32(SOC15_REG_OFFSET(ATHUB, 0, mmATHUB_MISC_CNTL));
>>> +
>>> +	if (enable && (adev->cg_flags & AMD_CG_SUPPORT_MC_MGCG))
>>> +		data |= ATHUB_MISC_CNTL__CG_ENABLE_MASK;
>>> +	else
>>> +		data &= ~ATHUB_MISC_CNTL__CG_ENABLE_MASK;
>>> +
>>> +	if (def != data)
>>> +		WREG32(SOC15_REG_OFFSET(ATHUB, 0, mmATHUB_MISC_CNTL), data);
>>> +}
>>> +
>>> +static void mmhub_v1_0_update_medium_grain_light_sleep(struct amdgpu_device *adev,
>>> +						       bool enable)
>>> +{
>>> +	uint32_t def, data;
>>> +
>>> +	def = data = RREG32(SOC15_REG_OFFSET(MMHUB, 0, mmATC_L2_MISC_CG));
>>> +
>>> +	if (enable && (adev->cg_flags & AMD_CG_SUPPORT_MC_LS))
>>> +		data |= ATC_L2_MISC_CG__MEM_LS_ENABLE_MASK;
>>> +	else
>>> +		data &= ~ATC_L2_MISC_CG__MEM_LS_ENABLE_MASK;
>>> +
>>> +	if (def != data)
>>> +		WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmATC_L2_MISC_CG), data);
>>> +}
>>> +
>>> +static void athub_update_medium_grain_light_sleep(struct amdgpu_device *adev,
>>> +						  bool enable)
>>> +{
>>> +	uint32_t def, data;
>>> +
>>> +	def = data = RREG32(SOC15_REG_OFFSET(ATHUB, 0, mmATHUB_MISC_CNTL));
>>> +
>>> +	if (enable && (adev->cg_flags & AMD_CG_SUPPORT_MC_LS) &&
>>> +	    (adev->cg_flags & AMD_CG_SUPPORT_HDP_LS))
>>> +		data |= ATHUB_MISC_CNTL__CG_MEM_LS_ENABLE_MASK;
>>> +	else
>>> +		data &= ~ATHUB_MISC_CNTL__CG_MEM_LS_ENABLE_MASK;
>>> +
>>> +	if (def != data)
>>> +		WREG32(SOC15_REG_OFFSET(ATHUB, 0, mmATHUB_MISC_CNTL), data);
>>> +}
>>> +
>>> +static int mmhub_v1_0_set_clockgating_state(void *handle,
>>> +					enum amd_clockgating_state state)
>>> +{
>>> +	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
>>> +
>>> +	switch (adev->asic_type) {
>>> +	case CHIP_VEGA10:
>>> +		mmhub_v1_0_update_medium_grain_clock_gating(adev,
>>> +				state == AMD_CG_STATE_GATE ? true : false);
>>> +		athub_update_medium_grain_clock_gating(adev,
>>> +				state == AMD_CG_STATE_GATE ? true : false);
>>> +		mmhub_v1_0_update_medium_grain_light_sleep(adev,
>>> +				state == AMD_CG_STATE_GATE ? true : false);
>>> +		athub_update_medium_grain_light_sleep(adev,
>>> +				state == AMD_CG_STATE_GATE ? true : false);
>>> +		break;
>>> +	default:
>>> +		break;
>>> +	}
>>> +
>>> +	return 0;
>>> +}
>>> +
>>> +static int mmhub_v1_0_set_powergating_state(void *handle,
>>> +					enum amd_powergating_state state)
>>> +{
>>> +	return 0;
>>> +}
>>> +
>>> +const struct amd_ip_funcs mmhub_v1_0_ip_funcs = {
>>> +	.name = "mmhub_v1_0",
>>> +	.early_init = mmhub_v1_0_early_init,
>>> +	.late_init = mmhub_v1_0_late_init,
>>> +	.sw_init = mmhub_v1_0_sw_init,
>>> +	.sw_fini = mmhub_v1_0_sw_fini,
>>> +	.hw_init = mmhub_v1_0_hw_init,
>>> +	.hw_fini = mmhub_v1_0_hw_fini,
>>> +	.suspend = mmhub_v1_0_suspend,
>>> +	.resume = mmhub_v1_0_resume,
>>> +	.is_idle = mmhub_v1_0_is_idle,
>>> +	.wait_for_idle = mmhub_v1_0_wait_for_idle,
>>> +	.soft_reset = mmhub_v1_0_soft_reset,
>>> +	.set_clockgating_state = mmhub_v1_0_set_clockgating_state,
>>> +	.set_powergating_state = mmhub_v1_0_set_powergating_state,
>>> +};
>>> +
>>> +const struct amdgpu_ip_block_version mmhub_v1_0_ip_block =
>>> +{
>>> +	.type = AMD_IP_BLOCK_TYPE_MMHUB,
>>> +	.major = 1,
>>> +	.minor = 0,
>>> +	.rev = 0,
>>> +	.funcs = &mmhub_v1_0_ip_funcs,
>>> +};
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.h b/drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.h
>>> new file mode 100644
>>> index 0000000..aadedf9
>>> --- /dev/null
>>> +++ b/drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.h
>>> @@ -0,0 +1,35 @@
>>> +/*
>>> + * Copyright 2016 Advanced Micro Devices, Inc.
>>> + *
>>> + * Permission is hereby granted, free of charge, to any person obtaining a
>>> + * copy of this software and associated documentation files (the "Software"),
>>> + * to deal in the Software without restriction, including without limitation
>>> + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
>>> + * and/or sell copies of the Software, and to permit persons to whom the
>>> + * Software is furnished to do so, subject to the following conditions:
>>> + *
>>> + * The above copyright notice and this permission notice shall be included in
>>> + * all copies or substantial portions of the Software.
>>> + *
>>> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
>>> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
>>> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
>>> + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
>>> + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
>>> + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
>>> + * OTHER DEALINGS IN THE SOFTWARE.
>>> + *
>>> + */
>>> +#ifndef __MMHUB_V1_0_H__
>>> +#define __MMHUB_V1_0_H__
>>> +
>>> +u64 mmhub_v1_0_get_fb_location(struct amdgpu_device *adev);
>>> +int mmhub_v1_0_gart_enable(struct amdgpu_device *adev);
>>> +void mmhub_v1_0_gart_disable(struct amdgpu_device *adev);
>>> +void mmhub_v1_0_set_fault_enable_default(struct amdgpu_device *adev,
>>> +					 bool value);
>>> +
>>> +extern const struct amd_ip_funcs mmhub_v1_0_ip_funcs;
>>> +extern const struct amdgpu_ip_block_version mmhub_v1_0_ip_block;
>>> +
>>> +#endif
>>> diff --git a/drivers/gpu/drm/amd/include/amd_shared.h b/drivers/gpu/drm/amd/include/amd_shared.h
>>> index 717d6be..a94420d 100644
>>> --- a/drivers/gpu/drm/amd/include/amd_shared.h
>>> +++ b/drivers/gpu/drm/amd/include/amd_shared.h
>>> @@ -74,6 +74,8 @@ enum amd_ip_block_type {
>>>    	AMD_IP_BLOCK_TYPE_UVD,
>>>    	AMD_IP_BLOCK_TYPE_VCE,
>>>    	AMD_IP_BLOCK_TYPE_ACP,
>>> +	AMD_IP_BLOCK_TYPE_GFXHUB,
>>> +	AMD_IP_BLOCK_TYPE_MMHUB
>>>    };
>>>
>>>    enum amd_clockgating_state {


_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply	[flat|nested] 101+ messages in thread

* Re: [PATCH 046/100] drm/amdgpu: Add GMC 9.0 support
       [not found]         ` <003f0fba-4792-a32a-c982-73457dfbd1aa-ANTagKRnAhcb1SvskN2V4Q@public.gmane.org>
  2017-03-21 15:09           ` Deucher, Alexander
@ 2017-03-22 19:41           ` Alex Deucher
  1 sibling, 0 replies; 101+ messages in thread
From: Alex Deucher @ 2017-03-22 19:41 UTC (permalink / raw)
  To: Christian König; +Cc: Alex Deucher, amd-gfx list, Alex Xie

On Tue, Mar 21, 2017 at 4:49 AM, Christian König
<deathsimple@vodafone.de> wrote:
> On 20.03.2017 at 21:29, Alex Deucher wrote:
>>
>> From: Alex Xie <AlexBin.Xie@amd.com>
>>
>> On SOC-15 parts, the GMC (Graphics Memory Controller) consists
>> of two hubs: GFX (graphics and compute) and MM (sdma, uvd, vce).
>>
>> Signed-off-by: Alex Xie <AlexBin.Xie@amd.com>
>> Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
>> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
>> ---
>>   drivers/gpu/drm/amd/amdgpu/Makefile      |   6 +-
>>   drivers/gpu/drm/amd/amdgpu/amdgpu.h      |  30 ++
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c   |  28 +-
>>   drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c | 447 +++++++++++++++++
>>   drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.h |  35 ++
>>   drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c    | 826
>> +++++++++++++++++++++++++++++++
>>   drivers/gpu/drm/amd/amdgpu/gmc_v9_0.h    |  30 ++
>>   drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c  | 585 ++++++++++++++++++++++
>>   drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.h  |  35 ++
>>   drivers/gpu/drm/amd/include/amd_shared.h |   2 +
>>   10 files changed, 2016 insertions(+), 8 deletions(-)
>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c
>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.h
>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/gmc_v9_0.h
>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c
>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.h
>>
>> diff --git a/drivers/gpu/drm/amd/amdgpu/Makefile
>> b/drivers/gpu/drm/amd/amdgpu/Makefile
>> index 69823e8..b5046fd 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/Makefile
>> +++ b/drivers/gpu/drm/amd/amdgpu/Makefile
>> @@ -45,7 +45,8 @@ amdgpu-y += \
>>   # add GMC block
>>   amdgpu-y += \
>>         gmc_v7_0.o \
>> -       gmc_v8_0.o
>> +       gmc_v8_0.o \
>> +       gfxhub_v1_0.o mmhub_v1_0.o gmc_v9_0.o
>>     # add IH block
>>   amdgpu-y += \
>> @@ -74,7 +75,8 @@ amdgpu-y += \
>>   # add async DMA block
>>   amdgpu-y += \
>>         sdma_v2_4.o \
>> -       sdma_v3_0.o
>> +       sdma_v3_0.o \
>> +       sdma_v4_0.o
>
>
> That change doesn't belong into this patch.

Fixed.

>
>>     # add UVD block
>>   amdgpu-y += \
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
>> b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
>> index aaded8d..d7257b6 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
>> @@ -123,6 +123,11 @@ extern int amdgpu_param_buf_per_se;
>>   /* max number of IP instances */
>>   #define AMDGPU_MAX_SDMA_INSTANCES             2
>>   +/* max number of VMHUB */
>> +#define AMDGPU_MAX_VMHUBS                      2
>> +#define AMDGPU_MMHUB                           0
>> +#define AMDGPU_GFXHUB                          1
>> +
>>   /* hardcode that limit for now */
>>   #define AMDGPU_VA_RESERVED_SIZE                       (8 << 20)
>>   @@ -310,6 +315,12 @@ struct amdgpu_gart_funcs {
>>                                      uint32_t flags);
>>   };
>>   +/* provided by the mc block */
>> +struct amdgpu_mc_funcs {
>> +       /* adjust mc addr in fb for APU case */
>> +       u64 (*adjust_mc_addr)(struct amdgpu_device *adev, u64 addr);
>> +};
>> +
>
>
> That isn't hardware-specific, and it is actually incorrectly implemented.
>
> The calculation depends on the NB on APUs, not the GPU part, and the current
> implementation probably breaks it for Carrizo and other APUs.

The behavior shouldn't change for existing asics.
amdgpu_vm_adjust_mc_addr() just passes the address through unchanged if
no callback is defined, which matches the current behavior.  If that is
wrong, the current code would already be broken.

>
> I suggest just removing the callback and moving the calculation into
> amdgpu_vm_adjust_mc_addr().
>
> Then rename amdgpu_vm_adjust_mc_addr() to amdgpu_vm_get_pde() and call it
> from amdgpu_vm_update_page_directory() as well as the GFX9 specifc flush
> functions.

I'm not sure I follow what you are suggesting.  The rename makes sense,
but why move the logic from asic-specific code into generic code?  That
would break older asics.

>
>>   /* provided by the ih block */
>>   struct amdgpu_ih_funcs {
>>         /* ring read/write ptr handling, called from interrupt context */
>> @@ -559,6 +570,21 @@ int amdgpu_gart_bind(struct amdgpu_device *adev,
>> uint64_t offset,
>>   int amdgpu_ttm_recover_gart(struct amdgpu_device *adev);
>>     /*
>> + * VMHUB structures, functions & helpers
>> + */
>> +struct amdgpu_vmhub {
>> +       uint32_t        ctx0_ptb_addr_lo32;
>> +       uint32_t        ctx0_ptb_addr_hi32;
>> +       uint32_t        vm_inv_eng0_req;
>> +       uint32_t        vm_inv_eng0_ack;
>> +       uint32_t        vm_context0_cntl;
>> +       uint32_t        vm_l2_pro_fault_status;
>> +       uint32_t        vm_l2_pro_fault_cntl;
>> +       uint32_t        (*get_invalidate_req)(unsigned int vm_id);
>> +       uint32_t        (*get_vm_protection_bits)(void);
>
>
> Those two callbacks aren't a good idea either.
>
> The invalidation request bits are defined by the RTL of the HUB, which is
> just instantiated twice; see the register database for details.
>
> We should probably make those functions in gmc_v9_0.c and call them
> from the device-specific flush methods.

I think these callbacks make sense.  We already have to fetch a bunch
of hub-specific data, which is why we have the structure in the first
place.  It seems logical to fetch these bits via that interface as
well, even if they happen to be the same in this particular set of IPs.
Otherwise the flush code in all the IPs would be a mix of fetches from
the hub structure and direct calls into gmc9 code, which seems messier
than just fetching everything from the hub structure.

Alex

>
> Regards,
> Christian.
>
>> +};
>> +
>> +/*
>>    * GPU MC structures, functions & helpers
>>    */
>>   struct amdgpu_mc {
>> @@ -591,6 +617,9 @@ struct amdgpu_mc {
>>         u64                                     shared_aperture_end;
>>         u64                                     private_aperture_start;
>>         u64                                     private_aperture_end;
>> +       /* protects concurrent invalidation */
>> +       spinlock_t              invalidate_lock;
>> +       const struct amdgpu_mc_funcs *mc_funcs;
>>   };
>>     /*
>> @@ -1479,6 +1508,7 @@ struct amdgpu_device {
>>         struct amdgpu_gart              gart;
>>         struct amdgpu_dummy_page        dummy_page;
>>         struct amdgpu_vm_manager        vm_manager;
>> +       struct amdgpu_vmhub             vmhub[AMDGPU_MAX_VMHUBS];
>>         /* memory management */
>>         struct amdgpu_mman              mman;
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>> index df615d7..47a8080 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>> @@ -375,6 +375,16 @@ static bool amdgpu_vm_ring_has_compute_vm_bug(struct
>> amdgpu_ring *ring)
>>         return false;
>>   }
>>   +static u64 amdgpu_vm_adjust_mc_addr(struct amdgpu_device *adev, u64
>> mc_addr)
>> +{
>> +       u64 addr = mc_addr;
>> +
>> +       if (adev->mc.mc_funcs && adev->mc.mc_funcs->adjust_mc_addr)
>> +               addr = adev->mc.mc_funcs->adjust_mc_addr(adev, addr);
>> +
>> +       return addr;
>> +}
>> +
>>   /**
>>    * amdgpu_vm_flush - hardware flush the vm
>>    *
>> @@ -405,9 +415,10 @@ int amdgpu_vm_flush(struct amdgpu_ring *ring, struct
>> amdgpu_job *job)
>>         if (ring->funcs->emit_vm_flush && (job->vm_needs_flush ||
>>             amdgpu_vm_is_gpu_reset(adev, id))) {
>>                 struct fence *fence;
>> +               u64 pd_addr = amdgpu_vm_adjust_mc_addr(adev,
>> job->vm_pd_addr);
>>   -             trace_amdgpu_vm_flush(job->vm_pd_addr, ring->idx,
>> job->vm_id);
>> -               amdgpu_ring_emit_vm_flush(ring, job->vm_id,
>> job->vm_pd_addr);
>> +               trace_amdgpu_vm_flush(pd_addr, ring->idx, job->vm_id);
>> +               amdgpu_ring_emit_vm_flush(ring, job->vm_id, pd_addr);
>>                 r = amdgpu_fence_emit(ring, &fence);
>>                 if (r)
>> @@ -643,15 +654,18 @@ int amdgpu_vm_update_page_directory(struct
>> amdgpu_device *adev,
>>                     (count == AMDGPU_VM_MAX_UPDATE_SIZE)) {
>>                         if (count) {
>> +                               uint64_t pt_addr =
>> +                                       amdgpu_vm_adjust_mc_addr(adev,
>> last_pt);
>> +
>>                                 if (shadow)
>>                                         amdgpu_vm_do_set_ptes(&params,
>>                                                               last_shadow,
>> -                                                             last_pt,
>> count,
>> +                                                             pt_addr,
>> count,
>>                                                               incr,
>>
>> AMDGPU_PTE_VALID);
>>                                 amdgpu_vm_do_set_ptes(&params, last_pde,
>> -                                                     last_pt, count,
>> incr,
>> +                                                     pt_addr, count,
>> incr,
>>                                                       AMDGPU_PTE_VALID);
>>                         }
>>   @@ -665,11 +679,13 @@ int amdgpu_vm_update_page_directory(struct
>> amdgpu_device *adev,
>>         }
>>         if (count) {
>> +               uint64_t pt_addr = amdgpu_vm_adjust_mc_addr(adev,
>> last_pt);
>> +
>>                 if (vm->page_directory->shadow)
>> -                       amdgpu_vm_do_set_ptes(&params, last_shadow,
>> last_pt,
>> +                       amdgpu_vm_do_set_ptes(&params, last_shadow,
>> pt_addr,
>>                                               count, incr,
>> AMDGPU_PTE_VALID);
>>   -             amdgpu_vm_do_set_ptes(&params, last_pde, last_pt,
>> +               amdgpu_vm_do_set_ptes(&params, last_pde, pt_addr,
>>                                       count, incr, AMDGPU_PTE_VALID);
>>         }
>>   diff --git a/drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c
>> b/drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c
>> new file mode 100644
>> index 0000000..1ff019c
>> --- /dev/null
>> +++ b/drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c
>> @@ -0,0 +1,447 @@
>> +/*
>> + * Copyright 2016 Advanced Micro Devices, Inc.
>> + *
>> + * Permission is hereby granted, free of charge, to any person obtaining
>> a
>> + * copy of this software and associated documentation files (the
>> "Software"),
>> + * to deal in the Software without restriction, including without
>> limitation
>> + * the rights to use, copy, modify, merge, publish, distribute,
>> sublicense,
>> + * and/or sell copies of the Software, and to permit persons to whom the
>> + * Software is furnished to do so, subject to the following conditions:
>> + *
>> + * The above copyright notice and this permission notice shall be
>> included in
>> + * all copies or substantial portions of the Software.
>> + *
>> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
>> EXPRESS OR
>> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
>> MERCHANTABILITY,
>> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT
>> SHALL
>> + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES
>> OR
>> + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
>> + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
>> + * OTHER DEALINGS IN THE SOFTWARE.
>> + *
>> + */
>> +#include "amdgpu.h"
>> +#include "gfxhub_v1_0.h"
>> +
>> +#include "vega10/soc15ip.h"
>> +#include "vega10/GC/gc_9_0_offset.h"
>> +#include "vega10/GC/gc_9_0_sh_mask.h"
>> +#include "vega10/GC/gc_9_0_default.h"
>> +#include "vega10/vega10_enum.h"
>> +
>> +#include "soc15_common.h"
>> +
>> +int gfxhub_v1_0_gart_enable(struct amdgpu_device *adev)
>> +{
>> +       u32 tmp;
>> +       u64 value;
>> +       u32 i;
>> +
>> +       /* Program MC. */
>> +       /* Update configuration */
>> +       WREG32(SOC15_REG_OFFSET(GC, 0, mmMC_VM_SYSTEM_APERTURE_LOW_ADDR),
>> +               adev->mc.vram_start >> 18);
>> +       WREG32(SOC15_REG_OFFSET(GC, 0, mmMC_VM_SYSTEM_APERTURE_HIGH_ADDR),
>> +               adev->mc.vram_end >> 18);
>> +
>> +       value = adev->vram_scratch.gpu_addr - adev->mc.vram_start
>> +               + adev->vm_manager.vram_base_offset;
>> +       WREG32(SOC15_REG_OFFSET(GC, 0,
>> +                               mmMC_VM_SYSTEM_APERTURE_DEFAULT_ADDR_LSB),
>> +                               (u32)(value >> 12));
>> +       WREG32(SOC15_REG_OFFSET(GC, 0,
>> +                               mmMC_VM_SYSTEM_APERTURE_DEFAULT_ADDR_MSB),
>> +                               (u32)(value >> 44));
>> +
>> +       /* Disable AGP. */
>> +       WREG32(SOC15_REG_OFFSET(GC, 0, mmMC_VM_AGP_BASE), 0);
>> +       WREG32(SOC15_REG_OFFSET(GC, 0, mmMC_VM_AGP_TOP), 0);
>> +       WREG32(SOC15_REG_OFFSET(GC, 0, mmMC_VM_AGP_BOT), 0xFFFFFFFF);
>> +
>> +       /* GART Enable. */
>> +
>> +       /* Setup TLB control */
>> +       tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmMC_VM_MX_L1_TLB_CNTL));
>> +       tmp = REG_SET_FIELD(tmp, MC_VM_MX_L1_TLB_CNTL, ENABLE_L1_TLB, 1);
>> +       tmp = REG_SET_FIELD(tmp,
>> +                               MC_VM_MX_L1_TLB_CNTL,
>> +                               SYSTEM_ACCESS_MODE,
>> +                               3);
>> +       tmp = REG_SET_FIELD(tmp,
>> +                               MC_VM_MX_L1_TLB_CNTL,
>> +                               ENABLE_ADVANCED_DRIVER_MODEL,
>> +                               1);
>> +       tmp = REG_SET_FIELD(tmp,
>> +                               MC_VM_MX_L1_TLB_CNTL,
>> +                               SYSTEM_APERTURE_UNMAPPED_ACCESS,
>> +                               0);
>> +       tmp = REG_SET_FIELD(tmp,
>> +                               MC_VM_MX_L1_TLB_CNTL,
>> +                               ECO_BITS,
>> +                               0);
>> +       tmp = REG_SET_FIELD(tmp,
>> +                               MC_VM_MX_L1_TLB_CNTL,
>> +                               MTYPE,
>> +                               MTYPE_UC);/* XXX for emulation. */
>> +       tmp = REG_SET_FIELD(tmp,
>> +                               MC_VM_MX_L1_TLB_CNTL,
>> +                               ATC_EN,
>> +                               1);
>> +       WREG32(SOC15_REG_OFFSET(GC, 0, mmMC_VM_MX_L1_TLB_CNTL), tmp);
>> +
>> +       /* Setup L2 cache */
>> +       tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_CNTL));
>> +       tmp = REG_SET_FIELD(tmp, VM_L2_CNTL, ENABLE_L2_CACHE, 1);
>> +       tmp = REG_SET_FIELD(tmp,
>> +                               VM_L2_CNTL,
>> +                               ENABLE_L2_FRAGMENT_PROCESSING,
>> +                               0);
>> +       tmp = REG_SET_FIELD(tmp,
>> +                               VM_L2_CNTL,
>> +                               L2_PDE0_CACHE_TAG_GENERATION_MODE,
>> +                               0);/* XXX for emulation, Refer to closed
>> source code.*/
>> +       tmp = REG_SET_FIELD(tmp, VM_L2_CNTL, PDE_FAULT_CLASSIFICATION, 1);
>> +       tmp = REG_SET_FIELD(tmp,
>> +                               VM_L2_CNTL,
>> +                               CONTEXT1_IDENTITY_ACCESS_MODE,
>> +                               1);
>> +       tmp = REG_SET_FIELD(tmp,
>> +                               VM_L2_CNTL,
>> +                               IDENTITY_MODE_FRAGMENT_SIZE,
>> +                               0);
>> +       WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_CNTL), tmp);
>> +
>> +       tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_CNTL2));
>> +       tmp = REG_SET_FIELD(tmp, VM_L2_CNTL2, INVALIDATE_ALL_L1_TLBS, 1);
>> +       tmp = REG_SET_FIELD(tmp, VM_L2_CNTL2, INVALIDATE_L2_CACHE, 1);
>> +       WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_CNTL2), tmp);
>> +
>> +       tmp = mmVM_L2_CNTL3_DEFAULT;
>> +       WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_CNTL3), tmp);
>> +
>> +       tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_CNTL4));
>> +       tmp = REG_SET_FIELD(tmp,
>> +                           VM_L2_CNTL4,
>> +                           VMC_TAP_PDE_REQUEST_PHYSICAL,
>> +                           0);
>> +       tmp = REG_SET_FIELD(tmp,
>> +                           VM_L2_CNTL4,
>> +                           VMC_TAP_PTE_REQUEST_PHYSICAL,
>> +                           0);
>> +       WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_CNTL4), tmp);
>> +
>> +       /* setup context0 */
>> +       WREG32(SOC15_REG_OFFSET(GC, 0,
>> +                               mmVM_CONTEXT0_PAGE_TABLE_START_ADDR_LO32),
>> +               (u32)(adev->mc.gtt_start >> 12));
>> +       WREG32(SOC15_REG_OFFSET(GC, 0,
>> +                               mmVM_CONTEXT0_PAGE_TABLE_START_ADDR_HI32),
>> +               (u32)(adev->mc.gtt_start >> 44));
>> +
>> +       WREG32(SOC15_REG_OFFSET(GC, 0,
>> +                               mmVM_CONTEXT0_PAGE_TABLE_END_ADDR_LO32),
>> +               (u32)(adev->mc.gtt_end >> 12));
>> +       WREG32(SOC15_REG_OFFSET(GC, 0,
>> +                               mmVM_CONTEXT0_PAGE_TABLE_END_ADDR_HI32),
>> +               (u32)(adev->mc.gtt_end >> 44));
>> +
>> +       BUG_ON(adev->gart.table_addr & (~0x0000FFFFFFFFF000ULL));
>> +       value = adev->gart.table_addr - adev->mc.vram_start
>> +               + adev->vm_manager.vram_base_offset;
>> +       value &= 0x0000FFFFFFFFF000ULL;
>> +       value |= 0x1; /*valid bit*/
>> +
>> +       WREG32(SOC15_REG_OFFSET(GC, 0,
>> +                               mmVM_CONTEXT0_PAGE_TABLE_BASE_ADDR_LO32),
>> +               (u32)value);
>> +       WREG32(SOC15_REG_OFFSET(GC, 0,
>> +                               mmVM_CONTEXT0_PAGE_TABLE_BASE_ADDR_HI32),
>> +               (u32)(value >> 32));
>> +
>> +       WREG32(SOC15_REG_OFFSET(GC, 0,
>> +
>> mmVM_L2_PROTECTION_FAULT_DEFAULT_ADDR_LO32),
>> +               (u32)(adev->dummy_page.addr >> 12));
>> +       WREG32(SOC15_REG_OFFSET(GC, 0,
>> +
>> mmVM_L2_PROTECTION_FAULT_DEFAULT_ADDR_HI32),
>> +               (u32)(adev->dummy_page.addr >> 44));
>> +
>> +       tmp = RREG32(SOC15_REG_OFFSET(GC, 0,
>> mmVM_L2_PROTECTION_FAULT_CNTL2));
>> +       tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL2,
>> +                           ACTIVE_PAGE_MIGRATION_PTE_READ_RETRY,
>> +                           1);
>> +       WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_PROTECTION_FAULT_CNTL2),
>> tmp);
>> +
>> +       tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmVM_CONTEXT0_CNTL));
>> +       tmp = REG_SET_FIELD(tmp, VM_CONTEXT0_CNTL, ENABLE_CONTEXT, 1);
>> +       tmp = REG_SET_FIELD(tmp, VM_CONTEXT0_CNTL, PAGE_TABLE_DEPTH, 0);
>> +       WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_CONTEXT0_CNTL), tmp);
>> +
>> +       /* Disable identity aperture.*/
>> +       WREG32(SOC15_REG_OFFSET(GC, 0,
>> +               mmVM_L2_CONTEXT1_IDENTITY_APERTURE_LOW_ADDR_LO32),
>> 0XFFFFFFFF);
>> +       WREG32(SOC15_REG_OFFSET(GC, 0,
>> +               mmVM_L2_CONTEXT1_IDENTITY_APERTURE_LOW_ADDR_HI32),
>> 0x0000000F);
>> +
>> +       WREG32(SOC15_REG_OFFSET(GC, 0,
>> +               mmVM_L2_CONTEXT1_IDENTITY_APERTURE_HIGH_ADDR_LO32), 0);
>> +       WREG32(SOC15_REG_OFFSET(GC, 0,
>> +               mmVM_L2_CONTEXT1_IDENTITY_APERTURE_HIGH_ADDR_HI32), 0);
>> +
>> +       WREG32(SOC15_REG_OFFSET(GC, 0,
>> +               mmVM_L2_CONTEXT_IDENTITY_PHYSICAL_OFFSET_LO32), 0);
>> +       WREG32(SOC15_REG_OFFSET(GC, 0,
>> +               mmVM_L2_CONTEXT_IDENTITY_PHYSICAL_OFFSET_HI32), 0);
>> +
>> +       for (i = 0; i <= 14; i++) {
>> +               tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmVM_CONTEXT1_CNTL) +
>> i);
>> +               tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL, ENABLE_CONTEXT,
>> 1);
>> +               tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
>> PAGE_TABLE_DEPTH, 1);
>> +               tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
>> +                               RANGE_PROTECTION_FAULT_ENABLE_DEFAULT, 1);
>> +               tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
>> +
>> DUMMY_PAGE_PROTECTION_FAULT_ENABLE_DEFAULT, 1);
>> +               tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
>> +                               PDE0_PROTECTION_FAULT_ENABLE_DEFAULT, 1);
>> +               tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
>> +                               VALID_PROTECTION_FAULT_ENABLE_DEFAULT, 1);
>> +               tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
>> +                               READ_PROTECTION_FAULT_ENABLE_DEFAULT, 1);
>> +               tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
>> +                               WRITE_PROTECTION_FAULT_ENABLE_DEFAULT, 1);
>> +               tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
>> +                               EXECUTE_PROTECTION_FAULT_ENABLE_DEFAULT,
>> 1);
>> +               tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
>> +                               PAGE_TABLE_BLOCK_SIZE,
>> +                                   amdgpu_vm_block_size - 9);
>> +               WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_CONTEXT1_CNTL) + i,
>> tmp);
>> +               WREG32(SOC15_REG_OFFSET(GC, 0,
>> mmVM_CONTEXT1_PAGE_TABLE_START_ADDR_LO32) + i*2, 0);
>> +               WREG32(SOC15_REG_OFFSET(GC, 0,
>> mmVM_CONTEXT1_PAGE_TABLE_START_ADDR_HI32) + i*2, 0);
>> +               WREG32(SOC15_REG_OFFSET(GC, 0,
>> mmVM_CONTEXT1_PAGE_TABLE_END_ADDR_LO32) + i*2,
>> +                               adev->vm_manager.max_pfn - 1);
>> +               WREG32(SOC15_REG_OFFSET(GC, 0,
>> mmVM_CONTEXT1_PAGE_TABLE_END_ADDR_HI32) + i*2, 0);
>> +       }
>> +
>> +
>> +       return 0;
>> +}
>> +
>> +void gfxhub_v1_0_gart_disable(struct amdgpu_device *adev)
>> +{
>> +       u32 tmp;
>> +       u32 i;
>> +
>> +       /* Disable all tables */
>> +       for (i = 0; i < 16; i++)
>> +               WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_CONTEXT0_CNTL) + i,
>> 0);
>> +
>> +       /* Setup TLB control */
>> +       tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmMC_VM_MX_L1_TLB_CNTL));
>> +       tmp = REG_SET_FIELD(tmp, MC_VM_MX_L1_TLB_CNTL, ENABLE_L1_TLB, 0);
>> +       tmp = REG_SET_FIELD(tmp,
>> +                               MC_VM_MX_L1_TLB_CNTL,
>> +                               ENABLE_ADVANCED_DRIVER_MODEL,
>> +                               0);
>> +       WREG32(SOC15_REG_OFFSET(GC, 0, mmMC_VM_MX_L1_TLB_CNTL), tmp);
>> +
>> +       /* Setup L2 cache */
>> +       tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_CNTL));
>> +       tmp = REG_SET_FIELD(tmp, VM_L2_CNTL, ENABLE_L2_CACHE, 0);
>> +       WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_CNTL), tmp);
>> +       WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_CNTL3), 0);
>> +}
>> +
>> +/**
>> + * gfxhub_v1_0_set_fault_enable_default - update GART/VM fault handling
>> + *
>> + * @adev: amdgpu_device pointer
>> + * @value: true redirects VM faults to the default page
>> + */
>> +void gfxhub_v1_0_set_fault_enable_default(struct amdgpu_device *adev,
>> +                                         bool value)
>> +{
>> +       u32 tmp;
>> +       tmp = RREG32(SOC15_REG_OFFSET(GC, 0,
>> mmVM_L2_PROTECTION_FAULT_CNTL));
>> +       tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
>> +                       RANGE_PROTECTION_FAULT_ENABLE_DEFAULT, value);
>> +       tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
>> +                       PDE0_PROTECTION_FAULT_ENABLE_DEFAULT, value);
>> +       tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
>> +                       PDE1_PROTECTION_FAULT_ENABLE_DEFAULT, value);
>> +       tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
>> +                       PDE2_PROTECTION_FAULT_ENABLE_DEFAULT, value);
>> +       tmp = REG_SET_FIELD(tmp,
>> +                       VM_L2_PROTECTION_FAULT_CNTL,
>> +                       TRANSLATE_FURTHER_PROTECTION_FAULT_ENABLE_DEFAULT,
>> +                       value);
>> +       tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
>> +                       NACK_PROTECTION_FAULT_ENABLE_DEFAULT, value);
>> +       tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
>> +                       DUMMY_PAGE_PROTECTION_FAULT_ENABLE_DEFAULT,
>> value);
>> +       tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
>> +                       VALID_PROTECTION_FAULT_ENABLE_DEFAULT, value);
>> +       tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
>> +                       READ_PROTECTION_FAULT_ENABLE_DEFAULT, value);
>> +       tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
>> +                       WRITE_PROTECTION_FAULT_ENABLE_DEFAULT, value);
>> +       tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
>> +                       EXECUTE_PROTECTION_FAULT_ENABLE_DEFAULT, value);
>> +       WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_PROTECTION_FAULT_CNTL),
>> tmp);
>> +}
>> +
>> +static uint32_t gfxhub_v1_0_get_invalidate_req(unsigned int vm_id)
>> +{
>> +       u32 req = 0;
>> +
>> +       /* invalidate using legacy mode on vm_id*/
>> +       req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ,
>> +                           PER_VMID_INVALIDATE_REQ, 1 << vm_id);
>> +       req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ, FLUSH_TYPE, 0);
>> +       req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ,
>> INVALIDATE_L2_PTES, 1);
>> +       req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ,
>> INVALIDATE_L2_PDE0, 1);
>> +       req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ,
>> INVALIDATE_L2_PDE1, 1);
>> +       req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ,
>> INVALIDATE_L2_PDE2, 1);
>> +       req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ,
>> INVALIDATE_L1_PTES, 1);
>> +       req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ,
>> +                           CLEAR_PROTECTION_FAULT_STATUS_ADDR, 0);
>> +
>> +       return req;
>> +}
>> +
>> +static uint32_t gfxhub_v1_0_get_vm_protection_bits(void)
>> +{
>> +       return
>> (VM_CONTEXT1_CNTL__RANGE_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK |
>> +
>> VM_CONTEXT1_CNTL__DUMMY_PAGE_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK |
>> +
>> VM_CONTEXT1_CNTL__PDE0_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK |
>> +
>> VM_CONTEXT1_CNTL__VALID_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK |
>> +
>> VM_CONTEXT1_CNTL__READ_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK |
>> +
>> VM_CONTEXT1_CNTL__WRITE_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK |
>> +
>> VM_CONTEXT1_CNTL__EXECUTE_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK);
>> +}
>> +
>> +static int gfxhub_v1_0_early_init(void *handle)
>> +{
>> +       return 0;
>> +}
>> +
>> +static int gfxhub_v1_0_late_init(void *handle)
>> +{
>> +       return 0;
>> +}
>> +
>> +static int gfxhub_v1_0_sw_init(void *handle)
>> +{
>> +       struct amdgpu_device *adev = (struct amdgpu_device *)handle;
>> +       struct amdgpu_vmhub *hub = &adev->vmhub[AMDGPU_GFXHUB];
>> +
>> +       hub->ctx0_ptb_addr_lo32 =
>> +               SOC15_REG_OFFSET(GC, 0,
>> +                                mmVM_CONTEXT0_PAGE_TABLE_BASE_ADDR_LO32);
>> +       hub->ctx0_ptb_addr_hi32 =
>> +               SOC15_REG_OFFSET(GC, 0,
>> +                                mmVM_CONTEXT0_PAGE_TABLE_BASE_ADDR_HI32);
>> +       hub->vm_inv_eng0_req =
>> +               SOC15_REG_OFFSET(GC, 0, mmVM_INVALIDATE_ENG0_REQ);
>> +       hub->vm_inv_eng0_ack =
>> +               SOC15_REG_OFFSET(GC, 0, mmVM_INVALIDATE_ENG0_ACK);
>> +       hub->vm_context0_cntl =
>> +               SOC15_REG_OFFSET(GC, 0, mmVM_CONTEXT0_CNTL);
>> +       hub->vm_l2_pro_fault_status =
>> +               SOC15_REG_OFFSET(GC, 0, mmVM_L2_PROTECTION_FAULT_STATUS);
>> +       hub->vm_l2_pro_fault_cntl =
>> +               SOC15_REG_OFFSET(GC, 0, mmVM_L2_PROTECTION_FAULT_CNTL);
>> +
>> +       hub->get_invalidate_req = gfxhub_v1_0_get_invalidate_req;
>> +       hub->get_vm_protection_bits = gfxhub_v1_0_get_vm_protection_bits;
>> +
>> +       return 0;
>> +}
>> +
>> +static int gfxhub_v1_0_sw_fini(void *handle)
>> +{
>> +       return 0;
>> +}
>> +
>> +static int gfxhub_v1_0_hw_init(void *handle)
>> +{
>> +       struct amdgpu_device *adev = (struct amdgpu_device *)handle;
>> +       unsigned i;
>> +
>> +       for (i = 0 ; i < 18; ++i) {
>> +               WREG32(SOC15_REG_OFFSET(GC, 0,
>> +
>> mmVM_INVALIDATE_ENG0_ADDR_RANGE_LO32) +
>> +                      2 * i, 0xffffffff);
>> +               WREG32(SOC15_REG_OFFSET(GC, 0,
>> +
>> mmVM_INVALIDATE_ENG0_ADDR_RANGE_HI32) +
>> +                      2 * i, 0x1f);
>> +       }
>> +
>> +       return 0;
>> +}
>> +
>> +static int gfxhub_v1_0_hw_fini(void *handle)
>> +{
>> +       return 0;
>> +}
>> +
>> +static int gfxhub_v1_0_suspend(void *handle)
>> +{
>> +       return 0;
>> +}
>> +
>> +static int gfxhub_v1_0_resume(void *handle)
>> +{
>> +       return 0;
>> +}
>> +
>> +static bool gfxhub_v1_0_is_idle(void *handle)
>> +{
>> +       return true;
>> +}
>> +
>> +static int gfxhub_v1_0_wait_for_idle(void *handle)
>> +{
>> +       return 0;
>> +}
>> +
>> +static int gfxhub_v1_0_soft_reset(void *handle)
>> +{
>> +       return 0;
>> +}
>> +
>> +static int gfxhub_v1_0_set_clockgating_state(void *handle,
>> +                                         enum amd_clockgating_state
>> state)
>> +{
>> +       return 0;
>> +}
>> +
>> +static int gfxhub_v1_0_set_powergating_state(void *handle,
>> +                                         enum amd_powergating_state
>> state)
>> +{
>> +       return 0;
>> +}
>> +
>> +const struct amd_ip_funcs gfxhub_v1_0_ip_funcs = {
>> +       .name = "gfxhub_v1_0",
>> +       .early_init = gfxhub_v1_0_early_init,
>> +       .late_init = gfxhub_v1_0_late_init,
>> +       .sw_init = gfxhub_v1_0_sw_init,
>> +       .sw_fini = gfxhub_v1_0_sw_fini,
>> +       .hw_init = gfxhub_v1_0_hw_init,
>> +       .hw_fini = gfxhub_v1_0_hw_fini,
>> +       .suspend = gfxhub_v1_0_suspend,
>> +       .resume = gfxhub_v1_0_resume,
>> +       .is_idle = gfxhub_v1_0_is_idle,
>> +       .wait_for_idle = gfxhub_v1_0_wait_for_idle,
>> +       .soft_reset = gfxhub_v1_0_soft_reset,
>> +       .set_clockgating_state = gfxhub_v1_0_set_clockgating_state,
>> +       .set_powergating_state = gfxhub_v1_0_set_powergating_state,
>> +};
>> +
>> +const struct amdgpu_ip_block_version gfxhub_v1_0_ip_block =
>> +{
>> +       .type = AMD_IP_BLOCK_TYPE_GFXHUB,
>> +       .major = 1,
>> +       .minor = 0,
>> +       .rev = 0,
>> +       .funcs = &gfxhub_v1_0_ip_funcs,
>> +};
>> diff --git a/drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.h
>> b/drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.h
>> new file mode 100644
>> index 0000000..5129a8f
>> --- /dev/null
>> +++ b/drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.h
>> @@ -0,0 +1,35 @@
>> +/*
>> + * Copyright 2016 Advanced Micro Devices, Inc.
>> + *
>> + * Permission is hereby granted, free of charge, to any person obtaining
>> a
>> + * copy of this software and associated documentation files (the
>> "Software"),
>> + * to deal in the Software without restriction, including without
>> limitation
>> + * the rights to use, copy, modify, merge, publish, distribute,
>> sublicense,
>> + * and/or sell copies of the Software, and to permit persons to whom the
>> + * Software is furnished to do so, subject to the following conditions:
>> + *
>> + * The above copyright notice and this permission notice shall be
>> included in
>> + * all copies or substantial portions of the Software.
>> + *
>> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
>> EXPRESS OR
>> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
>> MERCHANTABILITY,
>> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT
>> SHALL
>> + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES
>> OR
>> + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
>> + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
>> + * OTHER DEALINGS IN THE SOFTWARE.
>> + *
>> + */
>> +
>> +#ifndef __GFXHUB_V1_0_H__
>> +#define __GFXHUB_V1_0_H__
>> +
>> +int gfxhub_v1_0_gart_enable(struct amdgpu_device *adev);
>> +void gfxhub_v1_0_gart_disable(struct amdgpu_device *adev);
>> +void gfxhub_v1_0_set_fault_enable_default(struct amdgpu_device *adev,
>> +                                         bool value);
>> +
>> +extern const struct amd_ip_funcs gfxhub_v1_0_ip_funcs;
>> +extern const struct amdgpu_ip_block_version gfxhub_v1_0_ip_block;
>> +
>> +#endif
>> diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
>> new file mode 100644
>> index 0000000..5cf0fc3
>> --- /dev/null
>> +++ b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
>> @@ -0,0 +1,826 @@
>> +/*
>> + * Copyright 2016 Advanced Micro Devices, Inc.
>> + *
>> + * Permission is hereby granted, free of charge, to any person obtaining a
>> + * copy of this software and associated documentation files (the "Software"),
>> + * to deal in the Software without restriction, including without limitation
>> + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
>> + * and/or sell copies of the Software, and to permit persons to whom the
>> + * Software is furnished to do so, subject to the following conditions:
>> + *
>> + * The above copyright notice and this permission notice shall be included in
>> + * all copies or substantial portions of the Software.
>> + *
>> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
>> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
>> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
>> + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
>> + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
>> + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
>> + * OTHER DEALINGS IN THE SOFTWARE.
>> + *
>> + */
>> +#include <linux/firmware.h>
>> +#include "amdgpu.h"
>> +#include "gmc_v9_0.h"
>> +
>> +#include "vega10/soc15ip.h"
>> +#include "vega10/HDP/hdp_4_0_offset.h"
>> +#include "vega10/HDP/hdp_4_0_sh_mask.h"
>> +#include "vega10/GC/gc_9_0_sh_mask.h"
>> +#include "vega10/vega10_enum.h"
>> +
>> +#include "soc15_common.h"
>> +
>> +#include "nbio_v6_1.h"
>> +#include "gfxhub_v1_0.h"
>> +#include "mmhub_v1_0.h"
>> +
>> +#define mmDF_CS_AON0_DramBaseAddress0                          0x0044
>> +#define mmDF_CS_AON0_DramBaseAddress0_BASE_IDX                 0
>> +//DF_CS_AON0_DramBaseAddress0
>> +#define DF_CS_AON0_DramBaseAddress0__AddrRngVal__SHIFT         0x0
>> +#define DF_CS_AON0_DramBaseAddress0__LgcyMmioHoleEn__SHIFT     0x1
>> +#define DF_CS_AON0_DramBaseAddress0__IntLvNumChan__SHIFT       0x4
>> +#define DF_CS_AON0_DramBaseAddress0__IntLvAddrSel__SHIFT       0x8
>> +#define DF_CS_AON0_DramBaseAddress0__DramBaseAddr__SHIFT       0xc
>> +#define DF_CS_AON0_DramBaseAddress0__AddrRngVal_MASK           0x00000001L
>> +#define DF_CS_AON0_DramBaseAddress0__LgcyMmioHoleEn_MASK       0x00000002L
>> +#define DF_CS_AON0_DramBaseAddress0__IntLvNumChan_MASK         0x000000F0L
>> +#define DF_CS_AON0_DramBaseAddress0__IntLvAddrSel_MASK         0x00000700L
>> +#define DF_CS_AON0_DramBaseAddress0__DramBaseAddr_MASK         0xFFFFF000L
>> +
>> +/* XXX Move this macro to VEGA10 header file, which is like vid.h for VI.*/
>> +#define AMDGPU_NUM_OF_VMIDS                    8
>> +
>> +static const u32 golden_settings_vega10_hdp[] =
>> +{
>> +       0xf64, 0x0fffffff, 0x00000000,
>> +       0xf65, 0x0fffffff, 0x00000000,
>> +       0xf66, 0x0fffffff, 0x00000000,
>> +       0xf67, 0x0fffffff, 0x00000000,
>> +       0xf68, 0x0fffffff, 0x00000000,
>> +       0xf6a, 0x0fffffff, 0x00000000,
>> +       0xf6b, 0x0fffffff, 0x00000000,
>> +       0xf6c, 0x0fffffff, 0x00000000,
>> +       0xf6d, 0x0fffffff, 0x00000000,
>> +       0xf6e, 0x0fffffff, 0x00000000,
>> +};
>> +
>> +static int gmc_v9_0_vm_fault_interrupt_state(struct amdgpu_device *adev,
>> +                                       struct amdgpu_irq_src *src,
>> +                                       unsigned type,
>> +                                       enum amdgpu_interrupt_state state)
>> +{
>> +       struct amdgpu_vmhub *hub;
>> +       u32 tmp, reg, bits, i;
>> +
>> +       switch (state) {
>> +       case AMDGPU_IRQ_STATE_DISABLE:
>> +               /* MM HUB */
>> +               hub = &adev->vmhub[AMDGPU_MMHUB];
>> +               bits = hub->get_vm_protection_bits();
>> +               for (i = 0; i < 16; i++) {
>> +                       reg = hub->vm_context0_cntl + i;
>> +                       tmp = RREG32(reg);
>> +                       tmp &= ~bits;
>> +                       WREG32(reg, tmp);
>> +               }
>> +
>> +               /* GFX HUB */
>> +               hub = &adev->vmhub[AMDGPU_GFXHUB];
>> +               bits = hub->get_vm_protection_bits();
>> +               for (i = 0; i < 16; i++) {
>> +                       reg = hub->vm_context0_cntl + i;
>> +                       tmp = RREG32(reg);
>> +                       tmp &= ~bits;
>> +                       WREG32(reg, tmp);
>> +               }
>> +               break;
>> +       case AMDGPU_IRQ_STATE_ENABLE:
>> +               /* MM HUB */
>> +               hub = &adev->vmhub[AMDGPU_MMHUB];
>> +               bits = hub->get_vm_protection_bits();
>> +               for (i = 0; i < 16; i++) {
>> +                       reg = hub->vm_context0_cntl + i;
>> +                       tmp = RREG32(reg);
>> +                       tmp |= bits;
>> +                       WREG32(reg, tmp);
>> +               }
>> +
>> +               /* GFX HUB */
>> +               hub = &adev->vmhub[AMDGPU_GFXHUB];
>> +               bits = hub->get_vm_protection_bits();
>> +               for (i = 0; i < 16; i++) {
>> +                       reg = hub->vm_context0_cntl + i;
>> +                       tmp = RREG32(reg);
>> +                       tmp |= bits;
>> +                       WREG32(reg, tmp);
>> +               }
>> +               break;
>> +       default:
>> +               break;
>> +       }
>> +
>> +       return 0;
>> +}
>> +
>> +static int gmc_v9_0_process_interrupt(struct amdgpu_device *adev,
>> +                               struct amdgpu_irq_src *source,
>> +                               struct amdgpu_iv_entry *entry)
>> +{
>> +       struct amdgpu_vmhub *gfxhub = &adev->vmhub[AMDGPU_GFXHUB];
>> +       struct amdgpu_vmhub *mmhub = &adev->vmhub[AMDGPU_MMHUB];
>> +       uint32_t status;
>> +       u64 addr;
>> +
>> +       addr = (u64)entry->src_data[0] << 12;
>> +       addr |= ((u64)entry->src_data[1] & 0xf) << 44;
>> +
>> +       if (entry->vm_id_src) {
>> +               status = RREG32(mmhub->vm_l2_pro_fault_status);
>> +               WREG32_P(mmhub->vm_l2_pro_fault_cntl, 1, ~1);
>> +       } else {
>> +               status = RREG32(gfxhub->vm_l2_pro_fault_status);
>> +               WREG32_P(gfxhub->vm_l2_pro_fault_cntl, 1, ~1);
>> +       }
>> +
>> +       DRM_ERROR("[%s]VMC page fault (src_id:%u ring:%u vm_id:%u pas_id:%u) "
>> +                 "at page 0x%016llx from %d\n"
>> +                 "VM_L2_PROTECTION_FAULT_STATUS:0x%08X\n",
>> +                 entry->vm_id_src ? "mmhub" : "gfxhub",
>> +                 entry->src_id, entry->ring_id, entry->vm_id, entry->pas_id,
>> +                 addr, entry->client_id, status);
>> +
>> +       return 0;
>> +}
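A side note for readers of this hunk: the address reassembly at the top of the handler is plain bit-splicing — `src_data[0]` carries fault-address bits 43:12 and the low nibble of `src_data[1]` carries bits 47:44. A minimal standalone sketch (function name is local to the sketch, not a driver symbol):

```c
#include <stdint.h>

/* Reassemble a 48-bit fault address the way gmc_v9_0_process_interrupt()
 * does: lo holds address bits 43:12, the low nibble of hi holds 47:44. */
static uint64_t fault_addr(uint32_t lo, uint32_t hi)
{
	return ((uint64_t)lo << 12) | (((uint64_t)hi & 0xf) << 44);
}
```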
>> +
>> +static const struct amdgpu_irq_src_funcs gmc_v9_0_irq_funcs = {
>> +       .set = gmc_v9_0_vm_fault_interrupt_state,
>> +       .process = gmc_v9_0_process_interrupt,
>> +};
>> +
>> +static void gmc_v9_0_set_irq_funcs(struct amdgpu_device *adev)
>> +{
>> +       adev->mc.vm_fault.num_types = 1;
>> +       adev->mc.vm_fault.funcs = &gmc_v9_0_irq_funcs;
>> +}
>> +
>> +/*
>> + * GART
>> + * VMID 0 is reserved for the physical GPU addresses used by the kernel.
>> + * VMIDs 1-15 are used for userspace clients and are handled
>> + * by the amdgpu vm/hsa code.
>> + */
>> +
>> +/**
>> + * gmc_v9_0_gart_flush_gpu_tlb - gart tlb flush callback
>> + *
>> + * @adev: amdgpu_device pointer
>> + * @vmid: vm instance to flush
>> + *
>> + * Flush the TLB for the requested page table.
>> + */
>> +static void gmc_v9_0_gart_flush_gpu_tlb(struct amdgpu_device *adev,
>> +                                       uint32_t vmid)
>> +{
>> +       /* Use register 17 for GART */
>> +       const unsigned eng = 17;
>> +       unsigned i, j;
>> +
>> +       /* flush hdp cache */
>> +       nbio_v6_1_hdp_flush(adev);
>> +
>> +       spin_lock(&adev->mc.invalidate_lock);
>> +
>> +       for (i = 0; i < AMDGPU_MAX_VMHUBS; ++i) {
>> +               struct amdgpu_vmhub *hub = &adev->vmhub[i];
>> +               u32 tmp = hub->get_invalidate_req(vmid);
>> +
>> +               WREG32(hub->vm_inv_eng0_req + eng, tmp);
>> +
>> +               /* Busy wait for ACK.*/
>> +               for (j = 0; j < 100; j++) {
>> +                       tmp = RREG32(hub->vm_inv_eng0_ack + eng);
>> +                       tmp &= 1 << vmid;
>> +                       if (tmp)
>> +                               break;
>> +                       cpu_relax();
>> +               }
>> +               if (j < 100)
>> +                       continue;
>> +
>> +               /* Wait for ACK with a delay.*/
>> +               for (j = 0; j < adev->usec_timeout; j++) {
>> +                       tmp = RREG32(hub->vm_inv_eng0_ack + eng);
>> +                       tmp &= 1 << vmid;
>> +                       if (tmp)
>> +                               break;
>> +                       udelay(1);
>> +               }
>> +               if (j < adev->usec_timeout)
>> +                       continue;
>> +
>> +               DRM_ERROR("Timeout waiting for VM flush ACK!\n");
>> +       }
>> +
>> +       spin_unlock(&adev->mc.invalidate_lock);
>> +}
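For reference, the two-phase wait above — a short busy-wait with cpu_relax(), then a 1 us poll loop up to adev->usec_timeout — can be sketched in plain C. The "register" below is simulated by a read counter; in the driver each read is an RREG32 of vm_inv_eng0_ack, and all names are local to this sketch:

```c
#include <stdbool.h>

/* Sketch of the flush-ack wait in gmc_v9_0_gart_flush_gpu_tlb():
 * busy-wait first (cheap if the ack lands quickly), then fall back to a
 * slower delay loop bounded by a timeout.  acks_after simulates how many
 * register reads happen before the ack bit is set. */
static bool wait_for_ack_sim(unsigned int acks_after, unsigned int usec_timeout)
{
	unsigned int reads = 0, j;

	for (j = 0; j < 100; j++)		/* busy-wait phase (cpu_relax) */
		if (++reads >= acks_after)
			return true;
	for (j = 0; j < usec_timeout; j++)	/* slow phase (udelay(1)) */
		if (++reads >= acks_after)
			return true;
	return false;				/* timeout: DRM_ERROR in the driver */
}
```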
>> +
>> +/**
>> + * gmc_v9_0_gart_set_pte_pde - update the page tables using MMIO
>> + *
>> + * @adev: amdgpu_device pointer
>> + * @cpu_pt_addr: cpu address of the page table
>> + * @gpu_page_idx: entry in the page table to update
>> + * @addr: dst addr to write into pte/pde
>> + * @flags: access flags
>> + *
>> + * Update the page tables using the CPU.
>> + */
>> +static int gmc_v9_0_gart_set_pte_pde(struct amdgpu_device *adev,
>> +                                       void *cpu_pt_addr,
>> +                                       uint32_t gpu_page_idx,
>> +                                       uint64_t addr,
>> +                                       uint64_t flags)
>> +{
>> +       void __iomem *ptr = (void *)cpu_pt_addr;
>> +       uint64_t value;
>> +
>> +       /*
>> +        * PTE format on VEGA 10:
>> +        * 63:59 reserved
>> +        * 58:57 mtype
>> +        * 56 F
>> +        * 55 L
>> +        * 54 P
>> +        * 53 SW
>> +        * 52 T
>> +        * 50:48 reserved
>> +        * 47:12 4k physical page base address
>> +        * 11:7 fragment
>> +        * 6 write
>> +        * 5 read
>> +        * 4 exe
>> +        * 3 Z
>> +        * 2 snooped
>> +        * 1 system
>> +        * 0 valid
>> +        *
>> +        * PDE format on VEGA 10:
>> +        * 63:59 block fragment size
>> +        * 58:55 reserved
>> +        * 54 P
>> +        * 53:48 reserved
>> +        * 47:6 physical base address of PD or PTE
>> +        * 5:3 reserved
>> +        * 2 C
>> +        * 1 system
>> +        * 0 valid
>> +        */
>> +
>> +       /*
>> +        * The following is for PTE only. GART does not have PDEs.
>> +        */
>> +       value = addr & 0x0000FFFFFFFFF000ULL;
>> +       value |= flags;
>> +       writeq(value, ptr + (gpu_page_idx * 8));
>> +       return 0;
>> +}
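To make the PTE layout comment above concrete, here is a minimal packing sketch following the documented bit positions (valid=0, system=1, snooped=2, exe=4, read=5, write=6, fragment 11:7, page base 47:12). The macro and function names are local to the sketch, not driver symbols:

```c
#include <stdint.h>

/* Illustrative Vega10 PTE packing per the layout comment in
 * gmc_v9_0_gart_set_pte_pde(). */
#define PTE_VALID      (1ULL << 0)
#define PTE_SYSTEM     (1ULL << 1)
#define PTE_SNOOPED    (1ULL << 2)
#define PTE_EXECUTABLE (1ULL << 4)
#define PTE_READABLE   (1ULL << 5)
#define PTE_WRITEABLE  (1ULL << 6)
#define PTE_ADDR_MASK  0x0000FFFFFFFFF000ULL	/* 4K page base, bits 47:12 */

static uint64_t make_pte(uint64_t phys_addr, uint64_t flags)
{
	/* Same shape as the function above: mask the address, OR the flags. */
	return (phys_addr & PTE_ADDR_MASK) | flags;
}
```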
>> +
>> +static uint64_t gmc_v9_0_get_vm_pte_flags(struct amdgpu_device *adev,
>> +                                               uint32_t flags)
>> +
>> +{
>> +       uint64_t pte_flag = 0;
>> +
>> +       if (flags & AMDGPU_VM_PAGE_EXECUTABLE)
>> +               pte_flag |= AMDGPU_PTE_EXECUTABLE;
>> +       if (flags & AMDGPU_VM_PAGE_READABLE)
>> +               pte_flag |= AMDGPU_PTE_READABLE;
>> +       if (flags & AMDGPU_VM_PAGE_WRITEABLE)
>> +               pte_flag |= AMDGPU_PTE_WRITEABLE;
>> +
>> +       switch (flags & AMDGPU_VM_MTYPE_MASK) {
>> +       case AMDGPU_VM_MTYPE_DEFAULT:
>> +               pte_flag |= AMDGPU_PTE_MTYPE(MTYPE_NC);
>> +               break;
>> +       case AMDGPU_VM_MTYPE_NC:
>> +               pte_flag |= AMDGPU_PTE_MTYPE(MTYPE_NC);
>> +               break;
>> +       case AMDGPU_VM_MTYPE_WC:
>> +               pte_flag |= AMDGPU_PTE_MTYPE(MTYPE_WC);
>> +               break;
>> +       case AMDGPU_VM_MTYPE_CC:
>> +               pte_flag |= AMDGPU_PTE_MTYPE(MTYPE_CC);
>> +               break;
>> +       case AMDGPU_VM_MTYPE_UC:
>> +               pte_flag |= AMDGPU_PTE_MTYPE(MTYPE_UC);
>> +               break;
>> +       default:
>> +               pte_flag |= AMDGPU_PTE_MTYPE(MTYPE_NC);
>> +               break;
>> +       }
>> +
>> +       if (flags & AMDGPU_VM_PAGE_PRT)
>> +               pte_flag |= AMDGPU_PTE_PRT;
>> +
>> +       return pte_flag;
>> +}
>> +
>> +static const struct amdgpu_gart_funcs gmc_v9_0_gart_funcs = {
>> +       .flush_gpu_tlb = gmc_v9_0_gart_flush_gpu_tlb,
>> +       .set_pte_pde = gmc_v9_0_gart_set_pte_pde,
>> +       .get_vm_pte_flags = gmc_v9_0_get_vm_pte_flags
>> +};
>> +
>> +static void gmc_v9_0_set_gart_funcs(struct amdgpu_device *adev)
>> +{
>> +       if (adev->gart.gart_funcs == NULL)
>> +               adev->gart.gart_funcs = &gmc_v9_0_gart_funcs;
>> +}
>> +
>> +static u64 gmc_v9_0_adjust_mc_addr(struct amdgpu_device *adev, u64 mc_addr)
>> +{
>> +       return adev->vm_manager.vram_base_offset + mc_addr - adev->mc.vram_start;
>> +}
>> +
>> +static const struct amdgpu_mc_funcs gmc_v9_0_mc_funcs = {
>> +       .adjust_mc_addr = gmc_v9_0_adjust_mc_addr,
>> +};
>> +
>> +static void gmc_v9_0_set_mc_funcs(struct amdgpu_device *adev)
>> +{
>> +       adev->mc.mc_funcs = &gmc_v9_0_mc_funcs;
>> +}
>> +
>> +static int gmc_v9_0_early_init(void *handle)
>> +{
>> +       struct amdgpu_device *adev = (struct amdgpu_device *)handle;
>> +
>> +       gmc_v9_0_set_gart_funcs(adev);
>> +       gmc_v9_0_set_mc_funcs(adev);
>> +       gmc_v9_0_set_irq_funcs(adev);
>> +
>> +       return 0;
>> +}
>> +
>> +static int gmc_v9_0_late_init(void *handle)
>> +{
>> +       struct amdgpu_device *adev = (struct amdgpu_device *)handle;
>> +       return amdgpu_irq_get(adev, &adev->mc.vm_fault, 0);
>> +}
>> +
>> +static void gmc_v9_0_vram_gtt_location(struct amdgpu_device *adev,
>> +                                       struct amdgpu_mc *mc)
>> +{
>> +       u64 base = mmhub_v1_0_get_fb_location(adev);
>> +       amdgpu_vram_location(adev, &adev->mc, base);
>> +       adev->mc.gtt_base_align = 0;
>> +       amdgpu_gtt_location(adev, mc);
>> +}
>> +
>> +/**
>> + * gmc_v9_0_mc_init - initialize the memory controller driver params
>> + *
>> + * @adev: amdgpu_device pointer
>> + *
>> + * Look up the amount of vram, vram width, and decide how to place
>> + * vram and gart within the GPU's physical address space.
>> + * Returns 0 for success.
>> + */
>> +static int gmc_v9_0_mc_init(struct amdgpu_device *adev)
>> +{
>> +       u32 tmp;
>> +       int chansize, numchan;
>> +
>> +       /* hbm memory channel size */
>> +       chansize = 128;
>> +
>> +       tmp = RREG32(SOC15_REG_OFFSET(DF, 0, mmDF_CS_AON0_DramBaseAddress0));
>> +       tmp &= DF_CS_AON0_DramBaseAddress0__IntLvNumChan_MASK;
>> +       tmp >>= DF_CS_AON0_DramBaseAddress0__IntLvNumChan__SHIFT;
>> +       switch (tmp) {
>> +       case 0:
>> +       default:
>> +               numchan = 1;
>> +               break;
>> +       case 1:
>> +               numchan = 2;
>> +               break;
>> +       case 2:
>> +               numchan = 0;
>> +               break;
>> +       case 3:
>> +               numchan = 4;
>> +               break;
>> +       case 4:
>> +               numchan = 0;
>> +               break;
>> +       case 5:
>> +               numchan = 8;
>> +               break;
>> +       case 6:
>> +               numchan = 0;
>> +               break;
>> +       case 7:
>> +               numchan = 16;
>> +               break;
>> +       case 8:
>> +               numchan = 2;
>> +               break;
>> +       }
>> +       adev->mc.vram_width = numchan * chansize;
>> +
>> +       /* Could the aperture size report 0 ? */
>> +       adev->mc.aper_base = pci_resource_start(adev->pdev, 0);
>> +       adev->mc.aper_size = pci_resource_len(adev->pdev, 0);
>> +       /* the memsize from nbio is reported in MB */
>> +       adev->mc.mc_vram_size =
>> +               nbio_v6_1_get_memsize(adev) * 1024ULL * 1024ULL;
>> +       adev->mc.real_vram_size = adev->mc.mc_vram_size;
>> +       adev->mc.visible_vram_size = adev->mc.aper_size;
>> +
>> +       /* In case the PCI BAR is larger than the actual amount of vram */
>> +       if (adev->mc.visible_vram_size > adev->mc.real_vram_size)
>> +               adev->mc.visible_vram_size = adev->mc.real_vram_size;
>> +
>> +       /* unless the user has overridden it, set the GART
>> +        * size to the larger of 1024MB or the vram size.
>> +        */
>> +       if (amdgpu_gart_size == -1)
>> +               adev->mc.gtt_size = max((1024ULL << 20), adev->mc.mc_vram_size);
>> +       else
>> +               adev->mc.gtt_size = (uint64_t)amdgpu_gart_size << 20;
>> +
>> +       gmc_v9_0_vram_gtt_location(adev, &adev->mc);
>> +
>> +       return 0;
>> +}
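One reviewer-side observation on the IntLvNumChan switch above: it is effectively a lookup table (encodings 2, 4 and 6 decode to 0 channels; out-of-range values fall back to 1 channel like the `default` case). A table-driven sketch, with names local to the sketch:

```c
#include <stdint.h>

/* Channel count per DF IntLvNumChan encoding, mirroring the switch in
 * gmc_v9_0_mc_init(); zeros correspond to encodings the switch maps to 0. */
static const int vega10_numchan[] = { 1, 2, 0, 4, 0, 8, 0, 16, 2 };

static uint32_t vram_width_bits(uint32_t intlv_num_chan)
{
	const uint32_t chansize = 128;	/* HBM memory channel size in bits */

	if (intlv_num_chan >= sizeof(vega10_numchan) / sizeof(vega10_numchan[0]))
		intlv_num_chan = 0;	/* same fallback as the switch default */
	return vega10_numchan[intlv_num_chan] * chansize;
}
```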
>> +
>> +static int gmc_v9_0_gart_init(struct amdgpu_device *adev)
>> +{
>> +       int r;
>> +
>> +       if (adev->gart.robj) {
>> +               WARN(1, "VEGA10 PCIE GART already initialized\n");
>> +               return 0;
>> +       }
>> +       /* Initialize common gart structure */
>> +       r = amdgpu_gart_init(adev);
>> +       if (r)
>> +               return r;
>> +       adev->gart.table_size = adev->gart.num_gpu_pages * 8;
>> +       adev->gart.gart_pte_flags = AMDGPU_PTE_MTYPE(MTYPE_UC) |
>> +                                AMDGPU_PTE_EXECUTABLE;
>> +       return amdgpu_gart_table_vram_alloc(adev);
>> +}
>> +
>> +/*
>> + * vm
>> + * VMID 0 is reserved for the physical GPU addresses used by the kernel.
>> + * VMIDs 1-15 are used for userspace clients and are handled
>> + * by the amdgpu vm/hsa code.
>> + */
>> +/**
>> + * gmc_v9_0_vm_init - vm init callback
>> + *
>> + * @adev: amdgpu_device pointer
>> + *
>> + * Inits vega10 specific vm parameters (number of VMs, base of vram for
>> + * VMIDs 1-15) (vega10).
>> + * Returns 0 for success.
>> + */
>> +static int gmc_v9_0_vm_init(struct amdgpu_device *adev)
>> +{
>> +       /*
>> +        * number of VMs
>> +        * VMID 0 is reserved for System
>> +        * amdgpu graphics/compute will use VMIDs 1-7
>> +        * amdkfd will use VMIDs 8-15
>> +        */
>> +       adev->vm_manager.num_ids = AMDGPU_NUM_OF_VMIDS;
>> +       amdgpu_vm_manager_init(adev);
>> +
>> +       /* base offset of vram pages */
>> +       /*XXX This value is not zero for APU*/
>> +       adev->vm_manager.vram_base_offset = 0;
>> +
>> +       return 0;
>> +}
>> +
>> +/**
>> + * gmc_v9_0_vm_fini - vm fini callback
>> + *
>> + * @adev: amdgpu_device pointer
>> + *
>> + * Tear down any asic specific VM setup.
>> + */
>> +static void gmc_v9_0_vm_fini(struct amdgpu_device *adev)
>> +{
>> +       return;
>> +}
>> +
>> +static int gmc_v9_0_sw_init(void *handle)
>> +{
>> +       int r;
>> +       int dma_bits;
>> +       struct amdgpu_device *adev = (struct amdgpu_device *)handle;
>> +
>> +       spin_lock_init(&adev->mc.invalidate_lock);
>> +
>> +       if (adev->flags & AMD_IS_APU) {
>> +               adev->mc.vram_type = AMDGPU_VRAM_TYPE_UNKNOWN;
>> +       } else {
>> +               /* XXX Don't know how to get VRAM type yet. */
>> +               adev->mc.vram_type = AMDGPU_VRAM_TYPE_HBM;
>> +       }
>> +
>> +       /* This interrupt is VMC page fault.*/
>> +       r = amdgpu_irq_add_id(adev, AMDGPU_IH_CLIENTID_VMC, 0,
>> +                               &adev->mc.vm_fault);
>> +
>> +       if (r)
>> +               return r;
>> +
>> +       /* Adjust VM size here.
>> +        * Currently defaults to 64GB ((16 << 20) 4K pages).
>> +        * Max GPUVM size is 48 bits.
>> +        */
>> +       adev->vm_manager.max_pfn = amdgpu_vm_size << 18;
>> +
>> +       /* Set the internal MC address mask
>> +        * This is the max address of the GPU's
>> +        * internal address space.
>> +        */
>> +       adev->mc.mc_mask = 0xffffffffffffULL; /* 48 bit MC */
>> +
>> +       /* set DMA mask + need_dma32 flags.
>> +        * PCIE - can handle 44-bits.
>> +        * IGP - can handle 44-bits
>> +        * PCI - dma32 for legacy pci gart, 44 bits on vega10
>> +        */
>> +       adev->need_dma32 = false;
>> +       dma_bits = adev->need_dma32 ? 32 : 44;
>> +       r = pci_set_dma_mask(adev->pdev, DMA_BIT_MASK(dma_bits));
>> +       if (r) {
>> +               adev->need_dma32 = true;
>> +               dma_bits = 32;
>> +               printk(KERN_WARNING "amdgpu: No suitable DMA available.\n");
>> +       }
>> +       r = pci_set_consistent_dma_mask(adev->pdev, DMA_BIT_MASK(dma_bits));
>> +       if (r) {
>> +               pci_set_consistent_dma_mask(adev->pdev, DMA_BIT_MASK(32));
>> +               printk(KERN_WARNING "amdgpu: No coherent DMA available.\n");
>> +       }
>> +
>> +       r = gmc_v9_0_mc_init(adev);
>> +       if (r)
>> +               return r;
>> +
>> +       /* Memory manager */
>> +       r = amdgpu_bo_init(adev);
>> +       if (r)
>> +               return r;
>> +
>> +       r = gmc_v9_0_gart_init(adev);
>> +       if (r)
>> +               return r;
>> +
>> +       if (!adev->vm_manager.enabled) {
>> +               r = gmc_v9_0_vm_init(adev);
>> +               if (r) {
>> +                       dev_err(adev->dev, "vm manager initialization failed (%d).\n", r);
>> +                       return r;
>> +               }
>> +               adev->vm_manager.enabled = true;
>> +       }
>> +       return r;
>> +}
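The `amdgpu_vm_size << 18` line above deserves a note for readers: amdgpu_vm_size is in GB, and one GB holds 1 << 18 pages of 4 KiB, so the shift converts GB straight to a page-frame count (64GB gives the 16 << 20 pages mentioned in the comment). A tiny sketch of that arithmetic, names local to the sketch:

```c
#include <stdint.h>

/* Convert a VM size in GB to a max page-frame number, as in
 * gmc_v9_0_sw_init(): 1 GB = 2^30 bytes = 2^18 pages of 4 KiB. */
static uint64_t max_pfn_from_vm_size_gb(uint64_t vm_size_gb)
{
	return vm_size_gb << 18;
}
```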
>> +
>> +/**
>> + * gmc_v9_0_gart_fini - gart fini callback
>> + *
>> + * @adev: amdgpu_device pointer
>> + *
>> + * Tears down the driver GART/VM setup.
>> + */
>> +static void gmc_v9_0_gart_fini(struct amdgpu_device *adev)
>> +{
>> +       amdgpu_gart_table_vram_free(adev);
>> +       amdgpu_gart_fini(adev);
>> +}
>> +
>> +static int gmc_v9_0_sw_fini(void *handle)
>> +{
>> +       struct amdgpu_device *adev = (struct amdgpu_device *)handle;
>> +
>> +       if (adev->vm_manager.enabled) {
>> +               amdgpu_vm_manager_fini(adev);
>> +               gmc_v9_0_vm_fini(adev);
>> +               adev->vm_manager.enabled = false;
>> +       }
>> +       gmc_v9_0_gart_fini(adev);
>> +       amdgpu_gem_force_release(adev);
>> +       amdgpu_bo_fini(adev);
>> +
>> +       return 0;
>> +}
>> +
>> +static void gmc_v9_0_init_golden_registers(struct amdgpu_device *adev)
>> +{
>> +       switch (adev->asic_type) {
>> +       case CHIP_VEGA10:
>> +               break;
>> +       default:
>> +               break;
>> +       }
>> +}
>> +
>> +/**
>> + * gmc_v9_0_gart_enable - gart enable
>> + *
>> + * @adev: amdgpu_device pointer
>> + */
>> +static int gmc_v9_0_gart_enable(struct amdgpu_device *adev)
>> +{
>> +       int r;
>> +       bool value;
>> +       u32 tmp;
>> +
>> +       amdgpu_program_register_sequence(adev,
>> +               golden_settings_vega10_hdp,
>> +               (const u32)ARRAY_SIZE(golden_settings_vega10_hdp));
>> +
>> +       if (adev->gart.robj == NULL) {
>> +               dev_err(adev->dev, "No VRAM object for PCIE GART.\n");
>> +               return -EINVAL;
>> +       }
>> +       r = amdgpu_gart_table_vram_pin(adev);
>> +       if (r)
>> +               return r;
>> +
>> +       /* After HDP is initialized, flush HDP.*/
>> +       nbio_v6_1_hdp_flush(adev);
>> +
>> +       r = gfxhub_v1_0_gart_enable(adev);
>> +       if (r)
>> +               return r;
>> +
>> +       r = mmhub_v1_0_gart_enable(adev);
>> +       if (r)
>> +               return r;
>> +
>> +       tmp = RREG32(SOC15_REG_OFFSET(HDP, 0, mmHDP_MISC_CNTL));
>> +       tmp |= HDP_MISC_CNTL__FLUSH_INVALIDATE_CACHE_MASK;
>> +       WREG32(SOC15_REG_OFFSET(HDP, 0, mmHDP_MISC_CNTL), tmp);
>> +
>> +       tmp = RREG32(SOC15_REG_OFFSET(HDP, 0, mmHDP_HOST_PATH_CNTL));
>> +       WREG32(SOC15_REG_OFFSET(HDP, 0, mmHDP_HOST_PATH_CNTL), tmp);
>> +
>> +
>> +       if (amdgpu_vm_fault_stop == AMDGPU_VM_FAULT_STOP_ALWAYS)
>> +               value = false;
>> +       else
>> +               value = true;
>> +
>> +       gfxhub_v1_0_set_fault_enable_default(adev, value);
>> +       mmhub_v1_0_set_fault_enable_default(adev, value);
>> +
>> +       gmc_v9_0_gart_flush_gpu_tlb(adev, 0);
>> +
>> +       DRM_INFO("PCIE GART of %uM enabled (table at 0x%016llX).\n",
>> +                (unsigned)(adev->mc.gtt_size >> 20),
>> +                (unsigned long long)adev->gart.table_addr);
>> +       adev->gart.ready = true;
>> +       return 0;
>> +}
>> +
>> +static int gmc_v9_0_hw_init(void *handle)
>> +{
>> +       int r;
>> +       struct amdgpu_device *adev = (struct amdgpu_device *)handle;
>> +
>> +       /* The sequence of these two function calls matters.*/
>> +       gmc_v9_0_init_golden_registers(adev);
>> +
>> +       r = gmc_v9_0_gart_enable(adev);
>> +
>> +       return r;
>> +}
>> +
>> +/**
>> + * gmc_v9_0_gart_disable - gart disable
>> + *
>> + * @adev: amdgpu_device pointer
>> + *
>> + * This disables all VM page tables.
>> + */
>> +static void gmc_v9_0_gart_disable(struct amdgpu_device *adev)
>> +{
>> +       gfxhub_v1_0_gart_disable(adev);
>> +       mmhub_v1_0_gart_disable(adev);
>> +       amdgpu_gart_table_vram_unpin(adev);
>> +}
>> +
>> +static int gmc_v9_0_hw_fini(void *handle)
>> +{
>> +       struct amdgpu_device *adev = (struct amdgpu_device *)handle;
>> +
>> +       amdgpu_irq_put(adev, &adev->mc.vm_fault, 0);
>> +       gmc_v9_0_gart_disable(adev);
>> +
>> +       return 0;
>> +}
>> +
>> +static int gmc_v9_0_suspend(void *handle)
>> +{
>> +       struct amdgpu_device *adev = (struct amdgpu_device *)handle;
>> +
>> +       if (adev->vm_manager.enabled) {
>> +               gmc_v9_0_vm_fini(adev);
>> +               adev->vm_manager.enabled = false;
>> +       }
>> +       gmc_v9_0_hw_fini(adev);
>> +
>> +       return 0;
>> +}
>> +
>> +static int gmc_v9_0_resume(void *handle)
>> +{
>> +       int r;
>> +       struct amdgpu_device *adev = (struct amdgpu_device *)handle;
>> +
>> +       r = gmc_v9_0_hw_init(adev);
>> +       if (r)
>> +               return r;
>> +
>> +       if (!adev->vm_manager.enabled) {
>> +               r = gmc_v9_0_vm_init(adev);
>> +               if (r) {
>> +                       dev_err(adev->dev,
>> +                               "vm manager initialization failed (%d).\n", r);
>> +                       return r;
>> +               }
>> +               adev->vm_manager.enabled = true;
>> +       }
>> +
>> +       return r;
>> +}
>> +
>> +static bool gmc_v9_0_is_idle(void *handle)
>> +{
>> +       /* MC is always ready in GMC v9.*/
>> +       return true;
>> +}
>> +
>> +static int gmc_v9_0_wait_for_idle(void *handle)
>> +{
>> +       /* There is no need to wait for MC idle in GMC v9.*/
>> +       return 0;
>> +}
>> +
>> +static int gmc_v9_0_soft_reset(void *handle)
>> +{
>> +       /* XXX for emulation.*/
>> +       return 0;
>> +}
>> +
>> +static int gmc_v9_0_set_clockgating_state(void *handle,
>> +                                       enum amd_clockgating_state state)
>> +{
>> +       return 0;
>> +}
>> +
>> +static int gmc_v9_0_set_powergating_state(void *handle,
>> +                                       enum amd_powergating_state state)
>> +{
>> +       return 0;
>> +}
>> +
>> +const struct amd_ip_funcs gmc_v9_0_ip_funcs = {
>> +       .name = "gmc_v9_0",
>> +       .early_init = gmc_v9_0_early_init,
>> +       .late_init = gmc_v9_0_late_init,
>> +       .sw_init = gmc_v9_0_sw_init,
>> +       .sw_fini = gmc_v9_0_sw_fini,
>> +       .hw_init = gmc_v9_0_hw_init,
>> +       .hw_fini = gmc_v9_0_hw_fini,
>> +       .suspend = gmc_v9_0_suspend,
>> +       .resume = gmc_v9_0_resume,
>> +       .is_idle = gmc_v9_0_is_idle,
>> +       .wait_for_idle = gmc_v9_0_wait_for_idle,
>> +       .soft_reset = gmc_v9_0_soft_reset,
>> +       .set_clockgating_state = gmc_v9_0_set_clockgating_state,
>> +       .set_powergating_state = gmc_v9_0_set_powergating_state,
>> +};
>> +
>> +const struct amdgpu_ip_block_version gmc_v9_0_ip_block =
>> +{
>> +       .type = AMD_IP_BLOCK_TYPE_GMC,
>> +       .major = 9,
>> +       .minor = 0,
>> +       .rev = 0,
>> +       .funcs = &gmc_v9_0_ip_funcs,
>> +};
>> diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.h b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.h
>> new file mode 100644
>> index 0000000..b030ca5
>> --- /dev/null
>> +++ b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.h
>> @@ -0,0 +1,30 @@
>> +/*
>> + * Copyright 2016 Advanced Micro Devices, Inc.
>> + *
>> + * Permission is hereby granted, free of charge, to any person obtaining a
>> + * copy of this software and associated documentation files (the "Software"),
>> + * to deal in the Software without restriction, including without limitation
>> + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
>> + * and/or sell copies of the Software, and to permit persons to whom the
>> + * Software is furnished to do so, subject to the following conditions:
>> + *
>> + * The above copyright notice and this permission notice shall be included in
>> + * all copies or substantial portions of the Software.
>> + *
>> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
>> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
>> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
>> + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
>> + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
>> + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
>> + * OTHER DEALINGS IN THE SOFTWARE.
>> + *
>> + */
>> +
>> +#ifndef __GMC_V9_0_H__
>> +#define __GMC_V9_0_H__
>> +
>> +extern const struct amd_ip_funcs gmc_v9_0_ip_funcs;
>> +extern const struct amdgpu_ip_block_version gmc_v9_0_ip_block;
>> +
>> +#endif
>> diff --git a/drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c b/drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c
>> new file mode 100644
>> index 0000000..b1e0e6b
>> --- /dev/null
>> +++ b/drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c
>> @@ -0,0 +1,585 @@
>> +/*
>> + * Copyright 2016 Advanced Micro Devices, Inc.
>> + *
>> + * Permission is hereby granted, free of charge, to any person obtaining a
>> + * copy of this software and associated documentation files (the "Software"),
>> + * to deal in the Software without restriction, including without limitation
>> + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
>> + * and/or sell copies of the Software, and to permit persons to whom the
>> + * Software is furnished to do so, subject to the following conditions:
>> + *
>> + * The above copyright notice and this permission notice shall be included in
>> + * all copies or substantial portions of the Software.
>> + *
>> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
>> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
>> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
>> + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
>> + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
>> + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
>> + * OTHER DEALINGS IN THE SOFTWARE.
>> + *
>> + */
>> +#include "amdgpu.h"
>> +#include "mmhub_v1_0.h"
>> +
>> +#include "vega10/soc15ip.h"
>> +#include "vega10/MMHUB/mmhub_1_0_offset.h"
>> +#include "vega10/MMHUB/mmhub_1_0_sh_mask.h"
>> +#include "vega10/MMHUB/mmhub_1_0_default.h"
>> +#include "vega10/ATHUB/athub_1_0_offset.h"
_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply	[flat|nested] 101+ messages in thread

* Re: [PATCH 046/100] drm/amdgpu: Add GMC 9.0 support
       [not found]     ` <1490041835-11255-32-git-send-email-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
  2017-03-21  8:49       ` Christian König
@ 2017-03-22 19:48       ` Dave Airlie
       [not found]         ` <CAPM=9tyv3RT0Q8i5zan_iaM7XxoTdb6nXuX=aT4C9vPPKjYfww-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
  2017-03-23  2:42       ` Zhang, Jerry (Junwei)
  2 siblings, 1 reply; 101+ messages in thread
From: Dave Airlie @ 2017-03-22 19:48 UTC (permalink / raw)
  To: Alex Deucher; +Cc: Alex Deucher, amd-gfx mailing list, Alex Xie

> +};
> +
> +static int gmc_v9_0_vm_fault_interrupt_state(struct amdgpu_device *adev,
> +                                       struct amdgpu_irq_src *src,
> +                                       unsigned type,
> +                                       enum amdgpu_interrupt_state state)
> +{
> +       struct amdgpu_vmhub *hub;
> +       u32 tmp, reg, bits, i;
> +
> +       switch (state) {
> +       case AMDGPU_IRQ_STATE_DISABLE:
> +               /* MM HUB */
> +               hub = &adev->vmhub[AMDGPU_MMHUB];
> +               bits = hub->get_vm_protection_bits();
> +               for (i = 0; i< 16; i++) {
> +                       reg = hub->vm_context0_cntl + i;
> +                       tmp = RREG32(reg);
> +                       tmp &= ~bits;
> +                       WREG32(reg, tmp);
> +               }
> +
> +               /* GFX HUB */
> +               hub = &adev->vmhub[AMDGPU_GFXHUB];
> +               bits = hub->get_vm_protection_bits();
> +               for (i = 0; i < 16; i++) {
> +                       reg = hub->vm_context0_cntl + i;
> +                       tmp = RREG32(reg);
> +                       tmp &= ~bits;
> +                       WREG32(reg, tmp);
> +               }
> +               break;
> +       case AMDGPU_IRQ_STATE_ENABLE:
> +               /* MM HUB */
> +               hub = &adev->vmhub[AMDGPU_MMHUB];
> +               bits = hub->get_vm_protection_bits();
> +               for (i = 0; i< 16; i++) {
> +                       reg = hub->vm_context0_cntl + i;
> +                       tmp = RREG32(reg);
> +                       tmp |= bits;
> +                       WREG32(reg, tmp);
> +               }
> +
> +               /* GFX HUB */
> +               hub = &adev->vmhub[AMDGPU_GFXHUB];
> +               bits = hub->get_vm_protection_bits();
> +               for (i = 0; i < 16; i++) {
> +                       reg = hub->vm_context0_cntl + i;
> +                       tmp = RREG32(reg);
> +                       tmp |= bits;
> +                       WREG32(reg, tmp);
> +               }
> +               break;
> +       default:
> +               break;
> +       }
> +
> +       return 0;
> +       return 0;
> +}
> +

Probably only need one :-)

Dave.


* RE: [PATCH 046/100] drm/amdgpu: Add GMC 9.0 support
       [not found]         ` <CAPM=9tyv3RT0Q8i5zan_iaM7XxoTdb6nXuX=aT4C9vPPKjYfww-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
@ 2017-03-22 19:53           ` Deucher, Alexander
  0 siblings, 0 replies; 101+ messages in thread
From: Deucher, Alexander @ 2017-03-22 19:53 UTC (permalink / raw)
  To: 'Dave Airlie', Alex Deucher; +Cc: amd-gfx mailing list, Xie, AlexBin

> -----Original Message-----
> From: Dave Airlie [mailto:airlied@gmail.com]
> Sent: Wednesday, March 22, 2017 3:48 PM
> To: Alex Deucher
> Cc: amd-gfx mailing list; Deucher, Alexander; Xie, AlexBin
> Subject: Re: [PATCH 046/100] drm/amdgpu: Add GMC 9.0 support
> 
> > +};
> > +
> > +static int gmc_v9_0_vm_fault_interrupt_state(struct amdgpu_device
> *adev,
> > +                                       struct amdgpu_irq_src *src,
> > +                                       unsigned type,
> > +                                       enum amdgpu_interrupt_state state)
> > +{
> > +       struct amdgpu_vmhub *hub;
> > +       u32 tmp, reg, bits, i;
> > +
> > +       switch (state) {
> > +       case AMDGPU_IRQ_STATE_DISABLE:
> > +               /* MM HUB */
> > +               hub = &adev->vmhub[AMDGPU_MMHUB];
> > +               bits = hub->get_vm_protection_bits();
> > +               for (i = 0; i< 16; i++) {
> > +                       reg = hub->vm_context0_cntl + i;
> > +                       tmp = RREG32(reg);
> > +                       tmp &= ~bits;
> > +                       WREG32(reg, tmp);
> > +               }
> > +
> > +               /* GFX HUB */
> > +               hub = &adev->vmhub[AMDGPU_GFXHUB];
> > +               bits = hub->get_vm_protection_bits();
> > +               for (i = 0; i < 16; i++) {
> > +                       reg = hub->vm_context0_cntl + i;
> > +                       tmp = RREG32(reg);
> > +                       tmp &= ~bits;
> > +                       WREG32(reg, tmp);
> > +               }
> > +               break;
> > +       case AMDGPU_IRQ_STATE_ENABLE:
> > +               /* MM HUB */
> > +               hub = &adev->vmhub[AMDGPU_MMHUB];
> > +               bits = hub->get_vm_protection_bits();
> > +               for (i = 0; i< 16; i++) {
> > +                       reg = hub->vm_context0_cntl + i;
> > +                       tmp = RREG32(reg);
> > +                       tmp |= bits;
> > +                       WREG32(reg, tmp);
> > +               }
> > +
> > +               /* GFX HUB */
> > +               hub = &adev->vmhub[AMDGPU_GFXHUB];
> > +               bits = hub->get_vm_protection_bits();
> > +               for (i = 0; i < 16; i++) {
> > +                       reg = hub->vm_context0_cntl + i;
> > +                       tmp = RREG32(reg);
> > +                       tmp |= bits;
> > +                       WREG32(reg, tmp);
> > +               }
> > +               break;
> > +       default:
> > +               break;
> > +       }
> > +
> > +       return 0;
> > +       return 0;
> > +}
> > +
> 
> Probably only need one :-)

Fixed locally.  Thanks!

Alex

> 
> Dave.


* Re: [PATCH 046/100] drm/amdgpu: Add GMC 9.0 support
       [not found]     ` <1490041835-11255-32-git-send-email-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
  2017-03-21  8:49       ` Christian König
  2017-03-22 19:48       ` Dave Airlie
@ 2017-03-23  2:42       ` Zhang, Jerry (Junwei)
       [not found]         ` <58D33609.3060304-5C7GfCeVMHo@public.gmane.org>
  2 siblings, 1 reply; 101+ messages in thread
From: Zhang, Jerry (Junwei) @ 2017-03-23  2:42 UTC (permalink / raw)
  To: Alex Deucher, amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
  Cc: Alex Deucher, Alex Xie

Hi Alex,

I remember we had a patch to remove the FB location programming in gmc/vmhub.
I see it's not in gmc v9 in this patch, but pre-gmc v9 still programs the FB registers.

Is anything missing from the sync here?
Or is it only supported for gmc v9 now?

Jerry

On 03/21/2017 04:29 AM, Alex Deucher wrote:
> From: Alex Xie <AlexBin.Xie@amd.com>
>
> On SOC-15 parts, the GMC (Graphics Memory Controller) consists
> of two hubs: GFX (graphics and compute) and MM (sdma, uvd, vce).
>
> Signed-off-by: Alex Xie <AlexBin.Xie@amd.com>
> Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
> ---
>   drivers/gpu/drm/amd/amdgpu/Makefile      |   6 +-
>   drivers/gpu/drm/amd/amdgpu/amdgpu.h      |  30 ++
>   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c   |  28 +-
>   drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c | 447 +++++++++++++++++
>   drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.h |  35 ++
>   drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c    | 826 +++++++++++++++++++++++++++++++
>   drivers/gpu/drm/amd/amdgpu/gmc_v9_0.h    |  30 ++
>   drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c  | 585 ++++++++++++++++++++++
>   drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.h  |  35 ++
>   drivers/gpu/drm/amd/include/amd_shared.h |   2 +
>   10 files changed, 2016 insertions(+), 8 deletions(-)
>   create mode 100644 drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c
>   create mode 100644 drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.h
>   create mode 100644 drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
>   create mode 100644 drivers/gpu/drm/amd/amdgpu/gmc_v9_0.h
>   create mode 100644 drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c
>   create mode 100644 drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.h
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/Makefile b/drivers/gpu/drm/amd/amdgpu/Makefile
> index 69823e8..b5046fd 100644
> --- a/drivers/gpu/drm/amd/amdgpu/Makefile
> +++ b/drivers/gpu/drm/amd/amdgpu/Makefile
> @@ -45,7 +45,8 @@ amdgpu-y += \
>   # add GMC block
>   amdgpu-y += \
>   	gmc_v7_0.o \
> -	gmc_v8_0.o
> +	gmc_v8_0.o \
> +	gfxhub_v1_0.o mmhub_v1_0.o gmc_v9_0.o
>
>   # add IH block
>   amdgpu-y += \
> @@ -74,7 +75,8 @@ amdgpu-y += \
>   # add async DMA block
>   amdgpu-y += \
>   	sdma_v2_4.o \
> -	sdma_v3_0.o
> +	sdma_v3_0.o \
> +	sdma_v4_0.o
>
>   # add UVD block
>   amdgpu-y += \
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> index aaded8d..d7257b6 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> @@ -123,6 +123,11 @@ extern int amdgpu_param_buf_per_se;
>   /* max number of IP instances */
>   #define AMDGPU_MAX_SDMA_INSTANCES		2
>
> +/* max number of VMHUB */
> +#define AMDGPU_MAX_VMHUBS			2
> +#define AMDGPU_MMHUB				0
> +#define AMDGPU_GFXHUB				1
> +
>   /* hardcode that limit for now */
>   #define AMDGPU_VA_RESERVED_SIZE			(8 << 20)
>
> @@ -310,6 +315,12 @@ struct amdgpu_gart_funcs {
>   				     uint32_t flags);
>   };
>
> +/* provided by the mc block */
> +struct amdgpu_mc_funcs {
> +	/* adjust mc addr in fb for APU case */
> +	u64 (*adjust_mc_addr)(struct amdgpu_device *adev, u64 addr);
> +};
> +
>   /* provided by the ih block */
>   struct amdgpu_ih_funcs {
>   	/* ring read/write ptr handling, called from interrupt context */
> @@ -559,6 +570,21 @@ int amdgpu_gart_bind(struct amdgpu_device *adev, uint64_t offset,
>   int amdgpu_ttm_recover_gart(struct amdgpu_device *adev);
>
>   /*
> + * VMHUB structures, functions & helpers
> + */
> +struct amdgpu_vmhub {
> +	uint32_t	ctx0_ptb_addr_lo32;
> +	uint32_t	ctx0_ptb_addr_hi32;
> +	uint32_t	vm_inv_eng0_req;
> +	uint32_t	vm_inv_eng0_ack;
> +	uint32_t	vm_context0_cntl;
> +	uint32_t	vm_l2_pro_fault_status;
> +	uint32_t	vm_l2_pro_fault_cntl;
> +	uint32_t	(*get_invalidate_req)(unsigned int vm_id);
> +	uint32_t	(*get_vm_protection_bits)(void);
> +};
> +
> +/*
>    * GPU MC structures, functions & helpers
>    */
>   struct amdgpu_mc {
> @@ -591,6 +617,9 @@ struct amdgpu_mc {
>   	u64					shared_aperture_end;
>   	u64					private_aperture_start;
>   	u64					private_aperture_end;
> +	/* protects concurrent invalidation */
> +	spinlock_t		invalidate_lock;
> +	const struct amdgpu_mc_funcs *mc_funcs;
>   };
>
>   /*
> @@ -1479,6 +1508,7 @@ struct amdgpu_device {
>   	struct amdgpu_gart		gart;
>   	struct amdgpu_dummy_page	dummy_page;
>   	struct amdgpu_vm_manager	vm_manager;
> +	struct amdgpu_vmhub             vmhub[AMDGPU_MAX_VMHUBS];
>
>   	/* memory management */
>   	struct amdgpu_mman		mman;
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> index df615d7..47a8080 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> @@ -375,6 +375,16 @@ static bool amdgpu_vm_ring_has_compute_vm_bug(struct amdgpu_ring *ring)
>   	return false;
>   }
>
> +static u64 amdgpu_vm_adjust_mc_addr(struct amdgpu_device *adev, u64 mc_addr)
> +{
> +	u64 addr = mc_addr;
> +
> +	if (adev->mc.mc_funcs && adev->mc.mc_funcs->adjust_mc_addr)
> +		addr = adev->mc.mc_funcs->adjust_mc_addr(adev, addr);
> +
> +	return addr;
> +}
> +
>   /**
>    * amdgpu_vm_flush - hardware flush the vm
>    *
> @@ -405,9 +415,10 @@ int amdgpu_vm_flush(struct amdgpu_ring *ring, struct amdgpu_job *job)
>   	if (ring->funcs->emit_vm_flush && (job->vm_needs_flush ||
>   	    amdgpu_vm_is_gpu_reset(adev, id))) {
>   		struct fence *fence;
> +		u64 pd_addr = amdgpu_vm_adjust_mc_addr(adev, job->vm_pd_addr);
>
> -		trace_amdgpu_vm_flush(job->vm_pd_addr, ring->idx, job->vm_id);
> -		amdgpu_ring_emit_vm_flush(ring, job->vm_id, job->vm_pd_addr);
> +		trace_amdgpu_vm_flush(pd_addr, ring->idx, job->vm_id);
> +		amdgpu_ring_emit_vm_flush(ring, job->vm_id, pd_addr);
>
>   		r = amdgpu_fence_emit(ring, &fence);
>   		if (r)
> @@ -643,15 +654,18 @@ int amdgpu_vm_update_page_directory(struct amdgpu_device *adev,
>   		    (count == AMDGPU_VM_MAX_UPDATE_SIZE)) {
>
>   			if (count) {
> +				uint64_t pt_addr =
> +					amdgpu_vm_adjust_mc_addr(adev, last_pt);
> +
>   				if (shadow)
>   					amdgpu_vm_do_set_ptes(&params,
>   							      last_shadow,
> -							      last_pt, count,
> +							      pt_addr, count,
>   							      incr,
>   							      AMDGPU_PTE_VALID);
>
>   				amdgpu_vm_do_set_ptes(&params, last_pde,
> -						      last_pt, count, incr,
> +						      pt_addr, count, incr,
>   						      AMDGPU_PTE_VALID);
>   			}
>
> @@ -665,11 +679,13 @@ int amdgpu_vm_update_page_directory(struct amdgpu_device *adev,
>   	}
>
>   	if (count) {
> +		uint64_t pt_addr = amdgpu_vm_adjust_mc_addr(adev, last_pt);
> +
>   		if (vm->page_directory->shadow)
> -			amdgpu_vm_do_set_ptes(&params, last_shadow, last_pt,
> +			amdgpu_vm_do_set_ptes(&params, last_shadow, pt_addr,
>   					      count, incr, AMDGPU_PTE_VALID);
>
> -		amdgpu_vm_do_set_ptes(&params, last_pde, last_pt,
> +		amdgpu_vm_do_set_ptes(&params, last_pde, pt_addr,
>   				      count, incr, AMDGPU_PTE_VALID);
>   	}
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c b/drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c
> new file mode 100644
> index 0000000..1ff019c
> --- /dev/null
> +++ b/drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c
> @@ -0,0 +1,447 @@
> +/*
> + * Copyright 2016 Advanced Micro Devices, Inc.
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a
> + * copy of this software and associated documentation files (the "Software"),
> + * to deal in the Software without restriction, including without limitation
> + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
> + * and/or sell copies of the Software, and to permit persons to whom the
> + * Software is furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice shall be included in
> + * all copies or substantial portions of the Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
> + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
> + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
> + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
> + * OTHER DEALINGS IN THE SOFTWARE.
> + *
> + */
> +#include "amdgpu.h"
> +#include "gfxhub_v1_0.h"
> +
> +#include "vega10/soc15ip.h"
> +#include "vega10/GC/gc_9_0_offset.h"
> +#include "vega10/GC/gc_9_0_sh_mask.h"
> +#include "vega10/GC/gc_9_0_default.h"
> +#include "vega10/vega10_enum.h"
> +
> +#include "soc15_common.h"
> +
> +int gfxhub_v1_0_gart_enable(struct amdgpu_device *adev)
> +{
> +	u32 tmp;
> +	u64 value;
> +	u32 i;
> +
> +	/* Program MC. */
> +	/* Update configuration */
> +	WREG32(SOC15_REG_OFFSET(GC, 0, mmMC_VM_SYSTEM_APERTURE_LOW_ADDR),
> +		adev->mc.vram_start >> 18);
> +	WREG32(SOC15_REG_OFFSET(GC, 0, mmMC_VM_SYSTEM_APERTURE_HIGH_ADDR),
> +		adev->mc.vram_end >> 18);
> +
> +	value = adev->vram_scratch.gpu_addr - adev->mc.vram_start
> +		+ adev->vm_manager.vram_base_offset;
> +	WREG32(SOC15_REG_OFFSET(GC, 0,
> +				mmMC_VM_SYSTEM_APERTURE_DEFAULT_ADDR_LSB),
> +				(u32)(value >> 12));
> +	WREG32(SOC15_REG_OFFSET(GC, 0,
> +				mmMC_VM_SYSTEM_APERTURE_DEFAULT_ADDR_MSB),
> +				(u32)(value >> 44));
> +
> +	/* Disable AGP. */
> +	WREG32(SOC15_REG_OFFSET(GC, 0, mmMC_VM_AGP_BASE), 0);
> +	WREG32(SOC15_REG_OFFSET(GC, 0, mmMC_VM_AGP_TOP), 0);
> +	WREG32(SOC15_REG_OFFSET(GC, 0, mmMC_VM_AGP_BOT), 0xFFFFFFFF);
> +
> +	/* GART Enable. */
> +
> +	/* Setup TLB control */
> +	tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmMC_VM_MX_L1_TLB_CNTL));
> +	tmp = REG_SET_FIELD(tmp, MC_VM_MX_L1_TLB_CNTL, ENABLE_L1_TLB, 1);
> +	tmp = REG_SET_FIELD(tmp,
> +				MC_VM_MX_L1_TLB_CNTL,
> +				SYSTEM_ACCESS_MODE,
> +				3);
> +	tmp = REG_SET_FIELD(tmp,
> +				MC_VM_MX_L1_TLB_CNTL,
> +				ENABLE_ADVANCED_DRIVER_MODEL,
> +				1);
> +	tmp = REG_SET_FIELD(tmp,
> +				MC_VM_MX_L1_TLB_CNTL,
> +				SYSTEM_APERTURE_UNMAPPED_ACCESS,
> +				0);
> +	tmp = REG_SET_FIELD(tmp,
> +				MC_VM_MX_L1_TLB_CNTL,
> +				ECO_BITS,
> +				0);
> +	tmp = REG_SET_FIELD(tmp,
> +				MC_VM_MX_L1_TLB_CNTL,
> +				MTYPE,
> +				MTYPE_UC);/* XXX for emulation. */
> +	tmp = REG_SET_FIELD(tmp,
> +				MC_VM_MX_L1_TLB_CNTL,
> +				ATC_EN,
> +				1);
> +	WREG32(SOC15_REG_OFFSET(GC, 0, mmMC_VM_MX_L1_TLB_CNTL), tmp);
> +
> +	/* Setup L2 cache */
> +	tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_CNTL));
> +	tmp = REG_SET_FIELD(tmp, VM_L2_CNTL, ENABLE_L2_CACHE, 1);
> +	tmp = REG_SET_FIELD(tmp,
> +				VM_L2_CNTL,
> +				ENABLE_L2_FRAGMENT_PROCESSING,
> +				0);
> +	tmp = REG_SET_FIELD(tmp,
> +				VM_L2_CNTL,
> +				L2_PDE0_CACHE_TAG_GENERATION_MODE,
> +				0);/* XXX for emulation, Refer to closed source code.*/
> +	tmp = REG_SET_FIELD(tmp, VM_L2_CNTL, PDE_FAULT_CLASSIFICATION, 1);
> +	tmp = REG_SET_FIELD(tmp,
> +				VM_L2_CNTL,
> +				CONTEXT1_IDENTITY_ACCESS_MODE,
> +				1);
> +	tmp = REG_SET_FIELD(tmp,
> +				VM_L2_CNTL,
> +				IDENTITY_MODE_FRAGMENT_SIZE,
> +				0);
> +	WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_CNTL), tmp);
> +
> +	tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_CNTL2));
> +	tmp = REG_SET_FIELD(tmp, VM_L2_CNTL2, INVALIDATE_ALL_L1_TLBS, 1);
> +	tmp = REG_SET_FIELD(tmp, VM_L2_CNTL2, INVALIDATE_L2_CACHE, 1);
> +	WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_CNTL2), tmp);
> +
> +	tmp = mmVM_L2_CNTL3_DEFAULT;
> +	WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_CNTL3), tmp);
> +
> +	tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_CNTL4));
> +	tmp = REG_SET_FIELD(tmp,
> +			    VM_L2_CNTL4,
> +			    VMC_TAP_PDE_REQUEST_PHYSICAL,
> +			    0);
> +	tmp = REG_SET_FIELD(tmp,
> +			    VM_L2_CNTL4,
> +			    VMC_TAP_PTE_REQUEST_PHYSICAL,
> +			    0);
> +	WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_CNTL4), tmp);
> +
> +	/* setup context0 */
> +	WREG32(SOC15_REG_OFFSET(GC, 0,
> +				mmVM_CONTEXT0_PAGE_TABLE_START_ADDR_LO32),
> +		(u32)(adev->mc.gtt_start >> 12));
> +	WREG32(SOC15_REG_OFFSET(GC, 0,
> +				mmVM_CONTEXT0_PAGE_TABLE_START_ADDR_HI32),
> +		(u32)(adev->mc.gtt_start >> 44));
> +
> +	WREG32(SOC15_REG_OFFSET(GC, 0,
> +				mmVM_CONTEXT0_PAGE_TABLE_END_ADDR_LO32),
> +		(u32)(adev->mc.gtt_end >> 12));
> +	WREG32(SOC15_REG_OFFSET(GC, 0,
> +				mmVM_CONTEXT0_PAGE_TABLE_END_ADDR_HI32),
> +		(u32)(adev->mc.gtt_end >> 44));
> +
> +	BUG_ON(adev->gart.table_addr & (~0x0000FFFFFFFFF000ULL));
> +	value = adev->gart.table_addr - adev->mc.vram_start
> +		+ adev->vm_manager.vram_base_offset;
> +	value &= 0x0000FFFFFFFFF000ULL;
> +	value |= 0x1; /*valid bit*/
> +
> +	WREG32(SOC15_REG_OFFSET(GC, 0,
> +				mmVM_CONTEXT0_PAGE_TABLE_BASE_ADDR_LO32),
> +		(u32)value);
> +	WREG32(SOC15_REG_OFFSET(GC, 0,
> +				mmVM_CONTEXT0_PAGE_TABLE_BASE_ADDR_HI32),
> +		(u32)(value >> 32));
> +
> +	WREG32(SOC15_REG_OFFSET(GC, 0,
> +				mmVM_L2_PROTECTION_FAULT_DEFAULT_ADDR_LO32),
> +		(u32)(adev->dummy_page.addr >> 12));
> +	WREG32(SOC15_REG_OFFSET(GC, 0,
> +				mmVM_L2_PROTECTION_FAULT_DEFAULT_ADDR_HI32),
> +		(u32)(adev->dummy_page.addr >> 44));
> +
> +	tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_PROTECTION_FAULT_CNTL2));
> +	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL2,
> +			    ACTIVE_PAGE_MIGRATION_PTE_READ_RETRY,
> +			    1);
> +	WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_PROTECTION_FAULT_CNTL2), tmp);
> +
> +	tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmVM_CONTEXT0_CNTL));
> +	tmp = REG_SET_FIELD(tmp, VM_CONTEXT0_CNTL, ENABLE_CONTEXT, 1);
> +	tmp = REG_SET_FIELD(tmp, VM_CONTEXT0_CNTL, PAGE_TABLE_DEPTH, 0);
> +	WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_CONTEXT0_CNTL), tmp);
> +
> +	/* Disable identity aperture.*/
> +	WREG32(SOC15_REG_OFFSET(GC, 0,
> +		mmVM_L2_CONTEXT1_IDENTITY_APERTURE_LOW_ADDR_LO32), 0XFFFFFFFF);
> +	WREG32(SOC15_REG_OFFSET(GC, 0,
> +		mmVM_L2_CONTEXT1_IDENTITY_APERTURE_LOW_ADDR_HI32), 0x0000000F);
> +
> +	WREG32(SOC15_REG_OFFSET(GC, 0,
> +		mmVM_L2_CONTEXT1_IDENTITY_APERTURE_HIGH_ADDR_LO32), 0);
> +	WREG32(SOC15_REG_OFFSET(GC, 0,
> +		mmVM_L2_CONTEXT1_IDENTITY_APERTURE_HIGH_ADDR_HI32), 0);
> +
> +	WREG32(SOC15_REG_OFFSET(GC, 0,
> +		mmVM_L2_CONTEXT_IDENTITY_PHYSICAL_OFFSET_LO32), 0);
> +	WREG32(SOC15_REG_OFFSET(GC, 0,
> +		mmVM_L2_CONTEXT_IDENTITY_PHYSICAL_OFFSET_HI32), 0);
> +
> +	for (i = 0; i <= 14; i++) {
> +		tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmVM_CONTEXT1_CNTL) + i);
> +		tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL, ENABLE_CONTEXT, 1);
> +		tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL, PAGE_TABLE_DEPTH, 1);
> +		tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
> +				RANGE_PROTECTION_FAULT_ENABLE_DEFAULT, 1);
> +		tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
> +				DUMMY_PAGE_PROTECTION_FAULT_ENABLE_DEFAULT, 1);
> +		tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
> +				PDE0_PROTECTION_FAULT_ENABLE_DEFAULT, 1);
> +		tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
> +				VALID_PROTECTION_FAULT_ENABLE_DEFAULT, 1);
> +		tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
> +				READ_PROTECTION_FAULT_ENABLE_DEFAULT, 1);
> +		tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
> +				WRITE_PROTECTION_FAULT_ENABLE_DEFAULT, 1);
> +		tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
> +				EXECUTE_PROTECTION_FAULT_ENABLE_DEFAULT, 1);
> +		tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
> +				PAGE_TABLE_BLOCK_SIZE,
> +				    amdgpu_vm_block_size - 9);
> +		WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_CONTEXT1_CNTL) + i, tmp);
> +		WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_CONTEXT1_PAGE_TABLE_START_ADDR_LO32) + i*2, 0);
> +		WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_CONTEXT1_PAGE_TABLE_START_ADDR_HI32) + i*2, 0);
> +		WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_CONTEXT1_PAGE_TABLE_END_ADDR_LO32) + i*2,
> +				adev->vm_manager.max_pfn - 1);
> +		WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_CONTEXT1_PAGE_TABLE_END_ADDR_HI32) + i*2, 0);
> +	}
> +
> +
> +	return 0;
> +}
> +
> +void gfxhub_v1_0_gart_disable(struct amdgpu_device *adev)
> +{
> +	u32 tmp;
> +	u32 i;
> +
> +	/* Disable all tables */
> +	for (i = 0; i < 16; i++)
> +		WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_CONTEXT0_CNTL) + i, 0);
> +
> +	/* Setup TLB control */
> +	tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmMC_VM_MX_L1_TLB_CNTL));
> +	tmp = REG_SET_FIELD(tmp, MC_VM_MX_L1_TLB_CNTL, ENABLE_L1_TLB, 0);
> +	tmp = REG_SET_FIELD(tmp,
> +				MC_VM_MX_L1_TLB_CNTL,
> +				ENABLE_ADVANCED_DRIVER_MODEL,
> +				0);
> +	WREG32(SOC15_REG_OFFSET(GC, 0, mmMC_VM_MX_L1_TLB_CNTL), tmp);
> +
> +	/* Setup L2 cache */
> +	tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_CNTL));
> +	tmp = REG_SET_FIELD(tmp, VM_L2_CNTL, ENABLE_L2_CACHE, 0);
> +	WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_CNTL), tmp);
> +	WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_CNTL3), 0);
> +}
> +
> +/**
> + * gfxhub_v1_0_set_fault_enable_default - update GART/VM fault handling
> + *
> + * @adev: amdgpu_device pointer
> + * @value: true redirects VM faults to the default page
> + */
> +void gfxhub_v1_0_set_fault_enable_default(struct amdgpu_device *adev,
> +					  bool value)
> +{
> +	u32 tmp;
> +	tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_PROTECTION_FAULT_CNTL));
> +	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
> +			RANGE_PROTECTION_FAULT_ENABLE_DEFAULT, value);
> +	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
> +			PDE0_PROTECTION_FAULT_ENABLE_DEFAULT, value);
> +	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
> +			PDE1_PROTECTION_FAULT_ENABLE_DEFAULT, value);
> +	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
> +			PDE2_PROTECTION_FAULT_ENABLE_DEFAULT, value);
> +	tmp = REG_SET_FIELD(tmp,
> +			VM_L2_PROTECTION_FAULT_CNTL,
> +			TRANSLATE_FURTHER_PROTECTION_FAULT_ENABLE_DEFAULT,
> +			value);
> +	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
> +			NACK_PROTECTION_FAULT_ENABLE_DEFAULT, value);
> +	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
> +			DUMMY_PAGE_PROTECTION_FAULT_ENABLE_DEFAULT, value);
> +	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
> +			VALID_PROTECTION_FAULT_ENABLE_DEFAULT, value);
> +	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
> +			READ_PROTECTION_FAULT_ENABLE_DEFAULT, value);
> +	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
> +			WRITE_PROTECTION_FAULT_ENABLE_DEFAULT, value);
> +	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
> +			EXECUTE_PROTECTION_FAULT_ENABLE_DEFAULT, value);
> +	WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_PROTECTION_FAULT_CNTL), tmp);
> +}
> +
> +static uint32_t gfxhub_v1_0_get_invalidate_req(unsigned int vm_id)
> +{
> +	u32 req = 0;
> +
> +	/* invalidate using legacy mode on vm_id*/
> +	req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ,
> +			    PER_VMID_INVALIDATE_REQ, 1 << vm_id);
> +	req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ, FLUSH_TYPE, 0);
> +	req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ, INVALIDATE_L2_PTES, 1);
> +	req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ, INVALIDATE_L2_PDE0, 1);
> +	req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ, INVALIDATE_L2_PDE1, 1);
> +	req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ, INVALIDATE_L2_PDE2, 1);
> +	req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ, INVALIDATE_L1_PTES, 1);
> +	req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ,
> +			    CLEAR_PROTECTION_FAULT_STATUS_ADDR,	0);
> +
> +	return req;
> +}
> +
> +static uint32_t gfxhub_v1_0_get_vm_protection_bits(void)
> +{
> +	return (VM_CONTEXT1_CNTL__RANGE_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK |
> +		    VM_CONTEXT1_CNTL__DUMMY_PAGE_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK |
> +		    VM_CONTEXT1_CNTL__PDE0_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK |
> +		    VM_CONTEXT1_CNTL__VALID_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK |
> +		    VM_CONTEXT1_CNTL__READ_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK |
> +		    VM_CONTEXT1_CNTL__WRITE_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK |
> +		    VM_CONTEXT1_CNTL__EXECUTE_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK);
> +}
> +
> +static int gfxhub_v1_0_early_init(void *handle)
> +{
> +	return 0;
> +}
> +
> +static int gfxhub_v1_0_late_init(void *handle)
> +{
> +	return 0;
> +}
> +
> +static int gfxhub_v1_0_sw_init(void *handle)
> +{
> +	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
> +	struct amdgpu_vmhub *hub = &adev->vmhub[AMDGPU_GFXHUB];
> +
> +	hub->ctx0_ptb_addr_lo32 =
> +		SOC15_REG_OFFSET(GC, 0,
> +				 mmVM_CONTEXT0_PAGE_TABLE_BASE_ADDR_LO32);
> +	hub->ctx0_ptb_addr_hi32 =
> +		SOC15_REG_OFFSET(GC, 0,
> +				 mmVM_CONTEXT0_PAGE_TABLE_BASE_ADDR_HI32);
> +	hub->vm_inv_eng0_req =
> +		SOC15_REG_OFFSET(GC, 0, mmVM_INVALIDATE_ENG0_REQ);
> +	hub->vm_inv_eng0_ack =
> +		SOC15_REG_OFFSET(GC, 0, mmVM_INVALIDATE_ENG0_ACK);
> +	hub->vm_context0_cntl =
> +		SOC15_REG_OFFSET(GC, 0, mmVM_CONTEXT0_CNTL);
> +	hub->vm_l2_pro_fault_status =
> +		SOC15_REG_OFFSET(GC, 0, mmVM_L2_PROTECTION_FAULT_STATUS);
> +	hub->vm_l2_pro_fault_cntl =
> +		SOC15_REG_OFFSET(GC, 0, mmVM_L2_PROTECTION_FAULT_CNTL);
> +
> +	hub->get_invalidate_req = gfxhub_v1_0_get_invalidate_req;
> +	hub->get_vm_protection_bits = gfxhub_v1_0_get_vm_protection_bits;
> +
> +	return 0;
> +}
> +
> +static int gfxhub_v1_0_sw_fini(void *handle)
> +{
> +	return 0;
> +}
> +
> +static int gfxhub_v1_0_hw_init(void *handle)
> +{
> +	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
> +	unsigned i;
> +
> +	for (i = 0 ; i < 18; ++i) {
> +		WREG32(SOC15_REG_OFFSET(GC, 0,
> +					mmVM_INVALIDATE_ENG0_ADDR_RANGE_LO32) +
> +		       2 * i, 0xffffffff);
> +		WREG32(SOC15_REG_OFFSET(GC, 0,
> +					mmVM_INVALIDATE_ENG0_ADDR_RANGE_HI32) +
> +		       2 * i, 0x1f);
> +	}
> +
> +	return 0;
> +}
> +
> +static int gfxhub_v1_0_hw_fini(void *handle)
> +{
> +	return 0;
> +}
> +
> +static int gfxhub_v1_0_suspend(void *handle)
> +{
> +	return 0;
> +}
> +
> +static int gfxhub_v1_0_resume(void *handle)
> +{
> +	return 0;
> +}
> +
> +static bool gfxhub_v1_0_is_idle(void *handle)
> +{
> +	return true;
> +}
> +
> +static int gfxhub_v1_0_wait_for_idle(void *handle)
> +{
> +	return 0;
> +}
> +
> +static int gfxhub_v1_0_soft_reset(void *handle)
> +{
> +	return 0;
> +}
> +
> +static int gfxhub_v1_0_set_clockgating_state(void *handle,
> +					  enum amd_clockgating_state state)
> +{
> +	return 0;
> +}
> +
> +static int gfxhub_v1_0_set_powergating_state(void *handle,
> +					  enum amd_powergating_state state)
> +{
> +	return 0;
> +}
> +
> +const struct amd_ip_funcs gfxhub_v1_0_ip_funcs = {
> +	.name = "gfxhub_v1_0",
> +	.early_init = gfxhub_v1_0_early_init,
> +	.late_init = gfxhub_v1_0_late_init,
> +	.sw_init = gfxhub_v1_0_sw_init,
> +	.sw_fini = gfxhub_v1_0_sw_fini,
> +	.hw_init = gfxhub_v1_0_hw_init,
> +	.hw_fini = gfxhub_v1_0_hw_fini,
> +	.suspend = gfxhub_v1_0_suspend,
> +	.resume = gfxhub_v1_0_resume,
> +	.is_idle = gfxhub_v1_0_is_idle,
> +	.wait_for_idle = gfxhub_v1_0_wait_for_idle,
> +	.soft_reset = gfxhub_v1_0_soft_reset,
> +	.set_clockgating_state = gfxhub_v1_0_set_clockgating_state,
> +	.set_powergating_state = gfxhub_v1_0_set_powergating_state,
> +};
> +
> +const struct amdgpu_ip_block_version gfxhub_v1_0_ip_block =
> +{
> +	.type = AMD_IP_BLOCK_TYPE_GFXHUB,
> +	.major = 1,
> +	.minor = 0,
> +	.rev = 0,
> +	.funcs = &gfxhub_v1_0_ip_funcs,
> +};
> diff --git a/drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.h b/drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.h
> new file mode 100644
> index 0000000..5129a8f
> --- /dev/null
> +++ b/drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.h
> @@ -0,0 +1,35 @@
> +/*
> + * Copyright 2016 Advanced Micro Devices, Inc.
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a
> + * copy of this software and associated documentation files (the "Software"),
> + * to deal in the Software without restriction, including without limitation
> + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
> + * and/or sell copies of the Software, and to permit persons to whom the
> + * Software is furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice shall be included in
> + * all copies or substantial portions of the Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
> + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
> + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
> + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
> + * OTHER DEALINGS IN THE SOFTWARE.
> + *
> + */
> +
> +#ifndef __GFXHUB_V1_0_H__
> +#define __GFXHUB_V1_0_H__
> +
> +int gfxhub_v1_0_gart_enable(struct amdgpu_device *adev);
> +void gfxhub_v1_0_gart_disable(struct amdgpu_device *adev);
> +void gfxhub_v1_0_set_fault_enable_default(struct amdgpu_device *adev,
> +					  bool value);
> +
> +extern const struct amd_ip_funcs gfxhub_v1_0_ip_funcs;
> +extern const struct amdgpu_ip_block_version gfxhub_v1_0_ip_block;
> +
> +#endif
> diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
> new file mode 100644
> index 0000000..5cf0fc3
> --- /dev/null
> +++ b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
> @@ -0,0 +1,826 @@
> +/*
> + * Copyright 2016 Advanced Micro Devices, Inc.
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a
> + * copy of this software and associated documentation files (the "Software"),
> + * to deal in the Software without restriction, including without limitation
> + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
> + * and/or sell copies of the Software, and to permit persons to whom the
> + * Software is furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice shall be included in
> + * all copies or substantial portions of the Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
> + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
> + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
> + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
> + * OTHER DEALINGS IN THE SOFTWARE.
> + *
> + */
> +#include <linux/firmware.h>
> +#include "amdgpu.h"
> +#include "gmc_v9_0.h"
> +
> +#include "vega10/soc15ip.h"
> +#include "vega10/HDP/hdp_4_0_offset.h"
> +#include "vega10/HDP/hdp_4_0_sh_mask.h"
> +#include "vega10/GC/gc_9_0_sh_mask.h"
> +#include "vega10/vega10_enum.h"
> +
> +#include "soc15_common.h"
> +
> +#include "nbio_v6_1.h"
> +#include "gfxhub_v1_0.h"
> +#include "mmhub_v1_0.h"
> +
> +#define mmDF_CS_AON0_DramBaseAddress0                                                                  0x0044
> +#define mmDF_CS_AON0_DramBaseAddress0_BASE_IDX                                                         0
> +//DF_CS_AON0_DramBaseAddress0
> +#define DF_CS_AON0_DramBaseAddress0__AddrRngVal__SHIFT                                                        0x0
> +#define DF_CS_AON0_DramBaseAddress0__LgcyMmioHoleEn__SHIFT                                                    0x1
> +#define DF_CS_AON0_DramBaseAddress0__IntLvNumChan__SHIFT                                                      0x4
> +#define DF_CS_AON0_DramBaseAddress0__IntLvAddrSel__SHIFT                                                      0x8
> +#define DF_CS_AON0_DramBaseAddress0__DramBaseAddr__SHIFT                                                      0xc
> +#define DF_CS_AON0_DramBaseAddress0__AddrRngVal_MASK                                                          0x00000001L
> +#define DF_CS_AON0_DramBaseAddress0__LgcyMmioHoleEn_MASK                                                      0x00000002L
> +#define DF_CS_AON0_DramBaseAddress0__IntLvNumChan_MASK                                                        0x000000F0L
> +#define DF_CS_AON0_DramBaseAddress0__IntLvAddrSel_MASK                                                        0x00000700L
> +#define DF_CS_AON0_DramBaseAddress0__DramBaseAddr_MASK                                                        0xFFFFF000L
> +
> +/* XXX Move this macro to VEGA10 header file, which is like vid.h for VI.*/
> +#define AMDGPU_NUM_OF_VMIDS			8
> +
> +static const u32 golden_settings_vega10_hdp[] =
> +{
> +	0xf64, 0x0fffffff, 0x00000000,
> +	0xf65, 0x0fffffff, 0x00000000,
> +	0xf66, 0x0fffffff, 0x00000000,
> +	0xf67, 0x0fffffff, 0x00000000,
> +	0xf68, 0x0fffffff, 0x00000000,
> +	0xf6a, 0x0fffffff, 0x00000000,
> +	0xf6b, 0x0fffffff, 0x00000000,
> +	0xf6c, 0x0fffffff, 0x00000000,
> +	0xf6d, 0x0fffffff, 0x00000000,
> +	0xf6e, 0x0fffffff, 0x00000000,
> +};
> +
> +static int gmc_v9_0_vm_fault_interrupt_state(struct amdgpu_device *adev,
> +					struct amdgpu_irq_src *src,
> +					unsigned type,
> +					enum amdgpu_interrupt_state state)
> +{
> +	struct amdgpu_vmhub *hub;
> +	u32 tmp, reg, bits, i;
> +
> +	switch (state) {
> +	case AMDGPU_IRQ_STATE_DISABLE:
> +		/* MM HUB */
> +		hub = &adev->vmhub[AMDGPU_MMHUB];
> +		bits = hub->get_vm_protection_bits();
> +		for (i = 0; i < 16; i++) {
> +			reg = hub->vm_context0_cntl + i;
> +			tmp = RREG32(reg);
> +			tmp &= ~bits;
> +			WREG32(reg, tmp);
> +		}
> +
> +		/* GFX HUB */
> +		hub = &adev->vmhub[AMDGPU_GFXHUB];
> +		bits = hub->get_vm_protection_bits();
> +		for (i = 0; i < 16; i++) {
> +			reg = hub->vm_context0_cntl + i;
> +			tmp = RREG32(reg);
> +			tmp &= ~bits;
> +			WREG32(reg, tmp);
> +		}
> +		break;
> +	case AMDGPU_IRQ_STATE_ENABLE:
> +		/* MM HUB */
> +		hub = &adev->vmhub[AMDGPU_MMHUB];
> +		bits = hub->get_vm_protection_bits();
> +		for (i = 0; i < 16; i++) {
> +			reg = hub->vm_context0_cntl + i;
> +			tmp = RREG32(reg);
> +			tmp |= bits;
> +			WREG32(reg, tmp);
> +		}
> +
> +		/* GFX HUB */
> +		hub = &adev->vmhub[AMDGPU_GFXHUB];
> +		bits = hub->get_vm_protection_bits();
> +		for (i = 0; i < 16; i++) {
> +			reg = hub->vm_context0_cntl + i;
> +			tmp = RREG32(reg);
> +			tmp |= bits;
> +			WREG32(reg, tmp);
> +		}
> +		break;
> +	default:
> +		break;
> +	}
> +
> +	return 0;
> +}
> +
> +static int gmc_v9_0_process_interrupt(struct amdgpu_device *adev,
> +				struct amdgpu_irq_src *source,
> +				struct amdgpu_iv_entry *entry)
> +{
> +	struct amdgpu_vmhub *gfxhub = &adev->vmhub[AMDGPU_GFXHUB];
> +	struct amdgpu_vmhub *mmhub = &adev->vmhub[AMDGPU_MMHUB];
> +	uint32_t status;
> +	u64 addr;
> +
> +	addr = (u64)entry->src_data[0] << 12;
> +	addr |= ((u64)entry->src_data[1] & 0xf) << 44;
> +
> +	if (entry->vm_id_src) {
> +		status = RREG32(mmhub->vm_l2_pro_fault_status);
> +		WREG32_P(mmhub->vm_l2_pro_fault_cntl, 1, ~1);
> +	} else {
> +		status = RREG32(gfxhub->vm_l2_pro_fault_status);
> +		WREG32_P(gfxhub->vm_l2_pro_fault_cntl, 1, ~1);
> +	}
> +
> +	DRM_ERROR("[%s]VMC page fault (src_id:%u ring:%u vm_id:%u pas_id:%u) "
> +		  "at page 0x%016llx from %d\n"
> +		  "VM_L2_PROTECTION_FAULT_STATUS:0x%08X\n",
> +		  entry->vm_id_src ? "mmhub" : "gfxhub",
> +		  entry->src_id, entry->ring_id, entry->vm_id, entry->pas_id,
> +		  addr, entry->client_id, status);
> +
> +	return 0;
> +}
> +
> +static const struct amdgpu_irq_src_funcs gmc_v9_0_irq_funcs = {
> +	.set = gmc_v9_0_vm_fault_interrupt_state,
> +	.process = gmc_v9_0_process_interrupt,
> +};
> +
> +static void gmc_v9_0_set_irq_funcs(struct amdgpu_device *adev)
> +{
> +	adev->mc.vm_fault.num_types = 1;
> +	adev->mc.vm_fault.funcs = &gmc_v9_0_irq_funcs;
> +}
> +
> +/*
> + * GART
> + * VMID 0 is the physical GPU addresses as used by the kernel.
> + * VMIDs 1-15 are used for userspace clients and are handled
> + * by the amdgpu vm/hsa code.
> + */
> +
> +/**
> + * gmc_v9_0_gart_flush_gpu_tlb - gart tlb flush callback
> + *
> + * @adev: amdgpu_device pointer
> + * @vmid: vm instance to flush
> + *
> + * Flush the TLB for the requested page table.
> + */
> +static void gmc_v9_0_gart_flush_gpu_tlb(struct amdgpu_device *adev,
> +					uint32_t vmid)
> +{
> +	/* Use register 17 for GART */
> +	const unsigned eng = 17;
> +	unsigned i, j;
> +
> +	/* flush hdp cache */
> +	nbio_v6_1_hdp_flush(adev);
> +
> +	spin_lock(&adev->mc.invalidate_lock);
> +
> +	for (i = 0; i < AMDGPU_MAX_VMHUBS; ++i) {
> +		struct amdgpu_vmhub *hub = &adev->vmhub[i];
> +		u32 tmp = hub->get_invalidate_req(vmid);
> +
> +		WREG32(hub->vm_inv_eng0_req + eng, tmp);
> +
> +		/* Busy wait for ACK.*/
> +		for (j = 0; j < 100; j++) {
> +			tmp = RREG32(hub->vm_inv_eng0_ack + eng);
> +			tmp &= 1 << vmid;
> +			if (tmp)
> +				break;
> +			cpu_relax();
> +		}
> +		if (j < 100)
> +			continue;
> +
> +		/* Wait for ACK with a delay.*/
> +		for (j = 0; j < adev->usec_timeout; j++) {
> +			tmp = RREG32(hub->vm_inv_eng0_ack + eng);
> +			tmp &= 1 << vmid;
> +			if (tmp)
> +				break;
> +			udelay(1);
> +		}
> +		if (j < adev->usec_timeout)
> +			continue;
> +
> +		DRM_ERROR("Timeout waiting for VM flush ACK!\n");
> +	}
> +
> +	spin_unlock(&adev->mc.invalidate_lock);
> +}
> +
> +/**
> + * gmc_v9_0_gart_set_pte_pde - update the page tables using MMIO
> + *
> + * @adev: amdgpu_device pointer
> + * @cpu_pt_addr: cpu address of the page table
> + * @gpu_page_idx: entry in the page table to update
> + * @addr: dst addr to write into pte/pde
> + * @flags: access flags
> + *
> + * Update the page tables using the CPU.
> + */
> +static int gmc_v9_0_gart_set_pte_pde(struct amdgpu_device *adev,
> +					void *cpu_pt_addr,
> +					uint32_t gpu_page_idx,
> +					uint64_t addr,
> +					uint64_t flags)
> +{
> +	void __iomem *ptr = (void *)cpu_pt_addr;
> +	uint64_t value;
> +
> +	/*
> +	 * PTE format on VEGA 10:
> +	 * 63:59 reserved
> +	 * 58:57 mtype
> +	 * 56 F
> +	 * 55 L
> +	 * 54 P
> +	 * 53 SW
> +	 * 52 T
> +	 * 50:48 reserved
> +	 * 47:12 4k physical page base address
> +	 * 11:7 fragment
> +	 * 6 write
> +	 * 5 read
> +	 * 4 exe
> +	 * 3 Z
> +	 * 2 snooped
> +	 * 1 system
> +	 * 0 valid
> +	 *
> +	 * PDE format on VEGA 10:
> +	 * 63:59 block fragment size
> +	 * 58:55 reserved
> +	 * 54 P
> +	 * 53:48 reserved
> +	 * 47:6 physical base address of PD or PTE
> +	 * 5:3 reserved
> +	 * 2 C
> +	 * 1 system
> +	 * 0 valid
> +	 */
> +
> +	/*
> +	 * The following is for PTE only. GART does not have PDEs.
> +	 */
> +	value = addr & 0x0000FFFFFFFFF000ULL;
> +	value |= flags;
> +	writeq(value, ptr + (gpu_page_idx * 8));
> +	return 0;
> +}
> +
> +static uint64_t gmc_v9_0_get_vm_pte_flags(struct amdgpu_device *adev,
> +						uint32_t flags)
> +
> +{
> +	uint64_t pte_flag = 0;
> +
> +	if (flags & AMDGPU_VM_PAGE_EXECUTABLE)
> +		pte_flag |= AMDGPU_PTE_EXECUTABLE;
> +	if (flags & AMDGPU_VM_PAGE_READABLE)
> +		pte_flag |= AMDGPU_PTE_READABLE;
> +	if (flags & AMDGPU_VM_PAGE_WRITEABLE)
> +		pte_flag |= AMDGPU_PTE_WRITEABLE;
> +
> +	switch (flags & AMDGPU_VM_MTYPE_MASK) {
> +	case AMDGPU_VM_MTYPE_DEFAULT:
> +		pte_flag |= AMDGPU_PTE_MTYPE(MTYPE_NC);
> +		break;
> +	case AMDGPU_VM_MTYPE_NC:
> +		pte_flag |= AMDGPU_PTE_MTYPE(MTYPE_NC);
> +		break;
> +	case AMDGPU_VM_MTYPE_WC:
> +		pte_flag |= AMDGPU_PTE_MTYPE(MTYPE_WC);
> +		break;
> +	case AMDGPU_VM_MTYPE_CC:
> +		pte_flag |= AMDGPU_PTE_MTYPE(MTYPE_CC);
> +		break;
> +	case AMDGPU_VM_MTYPE_UC:
> +		pte_flag |= AMDGPU_PTE_MTYPE(MTYPE_UC);
> +		break;
> +	default:
> +		pte_flag |= AMDGPU_PTE_MTYPE(MTYPE_NC);
> +		break;
> +	}
> +
> +	if (flags & AMDGPU_VM_PAGE_PRT)
> +		pte_flag |= AMDGPU_PTE_PRT;
> +
> +	return pte_flag;
> +}
> +
> +static const struct amdgpu_gart_funcs gmc_v9_0_gart_funcs = {
> +	.flush_gpu_tlb = gmc_v9_0_gart_flush_gpu_tlb,
> +	.set_pte_pde = gmc_v9_0_gart_set_pte_pde,
> +	.get_vm_pte_flags = gmc_v9_0_get_vm_pte_flags
> +};
> +
> +static void gmc_v9_0_set_gart_funcs(struct amdgpu_device *adev)
> +{
> +	if (adev->gart.gart_funcs == NULL)
> +		adev->gart.gart_funcs = &gmc_v9_0_gart_funcs;
> +}
> +
> +static u64 gmc_v9_0_adjust_mc_addr(struct amdgpu_device *adev, u64 mc_addr)
> +{
> +	return adev->vm_manager.vram_base_offset + mc_addr - adev->mc.vram_start;
> +}
> +
> +static const struct amdgpu_mc_funcs gmc_v9_0_mc_funcs = {
> +	.adjust_mc_addr = gmc_v9_0_adjust_mc_addr,
> +};
> +
> +static void gmc_v9_0_set_mc_funcs(struct amdgpu_device *adev)
> +{
> +	adev->mc.mc_funcs = &gmc_v9_0_mc_funcs;
> +}
> +
> +static int gmc_v9_0_early_init(void *handle)
> +{
> +	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
> +
> +	gmc_v9_0_set_gart_funcs(adev);
> +	gmc_v9_0_set_mc_funcs(adev);
> +	gmc_v9_0_set_irq_funcs(adev);
> +
> +	return 0;
> +}
> +
> +static int gmc_v9_0_late_init(void *handle)
> +{
> +	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
> +	return amdgpu_irq_get(adev, &adev->mc.vm_fault, 0);
> +}
> +
> +static void gmc_v9_0_vram_gtt_location(struct amdgpu_device *adev,
> +					struct amdgpu_mc *mc)
> +{
> +	u64 base = mmhub_v1_0_get_fb_location(adev);
> +	amdgpu_vram_location(adev, &adev->mc, base);
> +	adev->mc.gtt_base_align = 0;
> +	amdgpu_gtt_location(adev, mc);
> +}
> +
> +/**
> + * gmc_v9_0_mc_init - initialize the memory controller driver params
> + *
> + * @adev: amdgpu_device pointer
> + *
> + * Look up the amount of vram, vram width, and decide how to place
> + * vram and gart within the GPU's physical address space.
> + * Returns 0 for success.
> + */
> +static int gmc_v9_0_mc_init(struct amdgpu_device *adev)
> +{
> +	u32 tmp;
> +	int chansize, numchan;
> +
> +	/* hbm memory channel size */
> +	chansize = 128;
> +
> +	tmp = RREG32(SOC15_REG_OFFSET(DF, 0, mmDF_CS_AON0_DramBaseAddress0));
> +	tmp &= DF_CS_AON0_DramBaseAddress0__IntLvNumChan_MASK;
> +	tmp >>= DF_CS_AON0_DramBaseAddress0__IntLvNumChan__SHIFT;
> +	switch (tmp) {
> +	case 0:
> +	default:
> +		numchan = 1;
> +		break;
> +	case 1:
> +		numchan = 2;
> +		break;
> +	case 2:
> +		numchan = 0;
> +		break;
> +	case 3:
> +		numchan = 4;
> +		break;
> +	case 4:
> +		numchan = 0;
> +		break;
> +	case 5:
> +		numchan = 8;
> +		break;
> +	case 6:
> +		numchan = 0;
> +		break;
> +	case 7:
> +		numchan = 16;
> +		break;
> +	case 8:
> +		numchan = 2;
> +		break;
> +	}
> +	adev->mc.vram_width = numchan * chansize;
> +
> +	/* Could the aperture size report 0? */
> +	adev->mc.aper_base = pci_resource_start(adev->pdev, 0);
> +	adev->mc.aper_size = pci_resource_len(adev->pdev, 0);
> +	/* size in MB */
> +	adev->mc.mc_vram_size =
> +		nbio_v6_1_get_memsize(adev) * 1024ULL * 1024ULL;
> +	adev->mc.real_vram_size = adev->mc.mc_vram_size;
> +	adev->mc.visible_vram_size = adev->mc.aper_size;
> +
> +	/* In case the PCI BAR is larger than the actual amount of vram */
> +	if (adev->mc.visible_vram_size > adev->mc.real_vram_size)
> +		adev->mc.visible_vram_size = adev->mc.real_vram_size;
> +
> +	/* unless the user has overridden it, set the gart
> +	 * size to 1024 MB or the vram size, whichever is larger.
> +	 */
> +	if (amdgpu_gart_size == -1)
> +		adev->mc.gtt_size = max((1024ULL << 20), adev->mc.mc_vram_size);
> +	else
> +		adev->mc.gtt_size = (uint64_t)amdgpu_gart_size << 20;
> +
> +	gmc_v9_0_vram_gtt_location(adev, &adev->mc);
> +
> +	return 0;
> +}
> +
> +static int gmc_v9_0_gart_init(struct amdgpu_device *adev)
> +{
> +	int r;
> +
> +	if (adev->gart.robj) {
> +		WARN(1, "VEGA10 PCIE GART already initialized\n");
> +		return 0;
> +	}
> +	/* Initialize common gart structure */
> +	r = amdgpu_gart_init(adev);
> +	if (r)
> +		return r;
> +	adev->gart.table_size = adev->gart.num_gpu_pages * 8;
> +	adev->gart.gart_pte_flags = AMDGPU_PTE_MTYPE(MTYPE_UC) |
> +				 AMDGPU_PTE_EXECUTABLE;
> +	return amdgpu_gart_table_vram_alloc(adev);
> +}
> +
> +/*
> + * vm
> + * VMID 0 is the physical GPU addresses as used by the kernel.
> + * VMIDs 1-15 are used for userspace clients and are handled
> + * by the amdgpu vm/hsa code.
> + */
> +/**
> + * gmc_v9_0_vm_init - vm init callback
> + *
> + * @adev: amdgpu_device pointer
> + *
> + * Inits vega10 specific vm parameters (number of VMs, base of vram for
> + * VMIDs 1-15).
> + * Returns 0 for success.
> + */
> +static int gmc_v9_0_vm_init(struct amdgpu_device *adev)
> +{
> +	/*
> +	 * number of VMs
> +	 * VMID 0 is reserved for System
> +	 * amdgpu graphics/compute will use VMIDs 1-7
> +	 * amdkfd will use VMIDs 8-15
> +	 */
> +	adev->vm_manager.num_ids = AMDGPU_NUM_OF_VMIDS;
> +	amdgpu_vm_manager_init(adev);
> +
> +	/* base offset of vram pages */
> +	/*XXX This value is not zero for APU*/
> +	adev->vm_manager.vram_base_offset = 0;
> +
> +	return 0;
> +}
> +
> +/**
> + * gmc_v9_0_vm_fini - vm fini callback
> + *
> + * @adev: amdgpu_device pointer
> + *
> + * Tear down any asic specific VM setup.
> + */
> +static void gmc_v9_0_vm_fini(struct amdgpu_device *adev)
> +{
> +	return;
> +}
> +
> +static int gmc_v9_0_sw_init(void *handle)
> +{
> +	int r;
> +	int dma_bits;
> +	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
> +
> +	spin_lock_init(&adev->mc.invalidate_lock);
> +
> +	if (adev->flags & AMD_IS_APU) {
> +		adev->mc.vram_type = AMDGPU_VRAM_TYPE_UNKNOWN;
> +	} else {
> +		/* XXX Don't know how to get VRAM type yet. */
> +		adev->mc.vram_type = AMDGPU_VRAM_TYPE_HBM;
> +	}
> +
> +	/* This interrupt is VMC page fault.*/
> +	r = amdgpu_irq_add_id(adev, AMDGPU_IH_CLIENTID_VMC, 0,
> +				&adev->mc.vm_fault);
> +
> +	if (r)
> +		return r;
> +
> +	/* Adjust VM size here.
> +	 * Currently defaults to 64GB ((16 << 20) 4k pages).
> +	 * Max GPUVM size is 48 bits.
> +	 */
> +	adev->vm_manager.max_pfn = amdgpu_vm_size << 18;
> +
> +	/* Set the internal MC address mask
> +	 * This is the max address of the GPU's
> +	 * internal address space.
> +	 */
> +	adev->mc.mc_mask = 0xffffffffffffULL; /* 48 bit MC */
> +
> +	/* set DMA mask + need_dma32 flags.
> +	 * PCIE - can handle 44-bits.
> +	 * IGP - can handle 44-bits
> +	 * PCI - dma32 for legacy pci gart, 44 bits on vega10
> +	 */
> +	adev->need_dma32 = false;
> +	dma_bits = adev->need_dma32 ? 32 : 44;
> +	r = pci_set_dma_mask(adev->pdev, DMA_BIT_MASK(dma_bits));
> +	if (r) {
> +		adev->need_dma32 = true;
> +		dma_bits = 32;
> +		printk(KERN_WARNING "amdgpu: No suitable DMA available.\n");
> +	}
> +	r = pci_set_consistent_dma_mask(adev->pdev, DMA_BIT_MASK(dma_bits));
> +	if (r) {
> +		pci_set_consistent_dma_mask(adev->pdev, DMA_BIT_MASK(32));
> +		printk(KERN_WARNING "amdgpu: No coherent DMA available.\n");
> +	}
> +
> +	r = gmc_v9_0_mc_init(adev);
> +	if (r)
> +		return r;
> +
> +	/* Memory manager */
> +	r = amdgpu_bo_init(adev);
> +	if (r)
> +		return r;
> +
> +	r = gmc_v9_0_gart_init(adev);
> +	if (r)
> +		return r;
> +
> +	if (!adev->vm_manager.enabled) {
> +		r = gmc_v9_0_vm_init(adev);
> +		if (r) {
> +			dev_err(adev->dev, "vm manager initialization failed (%d).\n", r);
> +			return r;
> +		}
> +		adev->vm_manager.enabled = true;
> +	}
> +	return r;
> +}
> +
> +/**
> + * gmc_v9_0_gart_fini - gart fini callback
> + *
> + * @adev: amdgpu_device pointer
> + *
> + * Tears down the driver GART/VM setup (vega10).
> + */
> +static void gmc_v9_0_gart_fini(struct amdgpu_device *adev)
> +{
> +	amdgpu_gart_table_vram_free(adev);
> +	amdgpu_gart_fini(adev);
> +}
> +
> +static int gmc_v9_0_sw_fini(void *handle)
> +{
> +	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
> +
> +	if (adev->vm_manager.enabled) {
> +		amdgpu_vm_manager_fini(adev);
> +		gmc_v9_0_vm_fini(adev);
> +		adev->vm_manager.enabled = false;
> +	}
> +	gmc_v9_0_gart_fini(adev);
> +	amdgpu_gem_force_release(adev);
> +	amdgpu_bo_fini(adev);
> +
> +	return 0;
> +}
> +
> +static void gmc_v9_0_init_golden_registers(struct amdgpu_device *adev)
> +{
> +	switch (adev->asic_type) {
> +	case CHIP_VEGA10:
> +		break;
> +	default:
> +		break;
> +	}
> +}
> +
> +/**
> + * gmc_v9_0_gart_enable - gart enable
> + *
> + * @adev: amdgpu_device pointer
> + */
> +static int gmc_v9_0_gart_enable(struct amdgpu_device *adev)
> +{
> +	int r;
> +	bool value;
> +	u32 tmp;
> +
> +	amdgpu_program_register_sequence(adev,
> +		golden_settings_vega10_hdp,
> +		(const u32)ARRAY_SIZE(golden_settings_vega10_hdp));
> +
> +	if (adev->gart.robj == NULL) {
> +		dev_err(adev->dev, "No VRAM object for PCIE GART.\n");
> +		return -EINVAL;
> +	}
> +	r = amdgpu_gart_table_vram_pin(adev);
> +	if (r)
> +		return r;
> +
> +	/* After HDP is initialized, flush HDP.*/
> +	nbio_v6_1_hdp_flush(adev);
> +
> +	r = gfxhub_v1_0_gart_enable(adev);
> +	if (r)
> +		return r;
> +
> +	r = mmhub_v1_0_gart_enable(adev);
> +	if (r)
> +		return r;
> +
> +	tmp = RREG32(SOC15_REG_OFFSET(HDP, 0, mmHDP_MISC_CNTL));
> +	tmp |= HDP_MISC_CNTL__FLUSH_INVALIDATE_CACHE_MASK;
> +	WREG32(SOC15_REG_OFFSET(HDP, 0, mmHDP_MISC_CNTL), tmp);
> +
> +	tmp = RREG32(SOC15_REG_OFFSET(HDP, 0, mmHDP_HOST_PATH_CNTL));
> +	WREG32(SOC15_REG_OFFSET(HDP, 0, mmHDP_HOST_PATH_CNTL), tmp);
> +
> +	if (amdgpu_vm_fault_stop == AMDGPU_VM_FAULT_STOP_ALWAYS)
> +		value = false;
> +	else
> +		value = true;
> +
> +	gfxhub_v1_0_set_fault_enable_default(adev, value);
> +	mmhub_v1_0_set_fault_enable_default(adev, value);
> +
> +	gmc_v9_0_gart_flush_gpu_tlb(adev, 0);
> +
> +	DRM_INFO("PCIE GART of %uM enabled (table at 0x%016llX).\n",
> +		 (unsigned)(adev->mc.gtt_size >> 20),
> +		 (unsigned long long)adev->gart.table_addr);
> +	adev->gart.ready = true;
> +	return 0;
> +}
> +
> +static int gmc_v9_0_hw_init(void *handle)
> +{
> +	int r;
> +	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
> +
> +	/* The sequence of these two function calls matters.*/
> +	gmc_v9_0_init_golden_registers(adev);
> +
> +	r = gmc_v9_0_gart_enable(adev);
> +
> +	return r;
> +}
> +
> +/**
> + * gmc_v9_0_gart_disable - gart disable
> + *
> + * @adev: amdgpu_device pointer
> + *
> + * This disables all VM page tables.
> + */
> +static void gmc_v9_0_gart_disable(struct amdgpu_device *adev)
> +{
> +	gfxhub_v1_0_gart_disable(adev);
> +	mmhub_v1_0_gart_disable(adev);
> +	amdgpu_gart_table_vram_unpin(adev);
> +}
> +
> +static int gmc_v9_0_hw_fini(void *handle)
> +{
> +	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
> +
> +	amdgpu_irq_put(adev, &adev->mc.vm_fault, 0);
> +	gmc_v9_0_gart_disable(adev);
> +
> +	return 0;
> +}
> +
> +static int gmc_v9_0_suspend(void *handle)
> +{
> +	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
> +
> +	if (adev->vm_manager.enabled) {
> +		gmc_v9_0_vm_fini(adev);
> +		adev->vm_manager.enabled = false;
> +	}
> +	gmc_v9_0_hw_fini(adev);
> +
> +	return 0;
> +}
> +
> +static int gmc_v9_0_resume(void *handle)
> +{
> +	int r;
> +	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
> +
> +	r = gmc_v9_0_hw_init(adev);
> +	if (r)
> +		return r;
> +
> +	if (!adev->vm_manager.enabled) {
> +		r = gmc_v9_0_vm_init(adev);
> +		if (r) {
> +			dev_err(adev->dev,
> +				"vm manager initialization failed (%d).\n", r);
> +			return r;
> +		}
> +		adev->vm_manager.enabled = true;
> +	}
> +
> +	return r;
> +}
> +
> +static bool gmc_v9_0_is_idle(void *handle)
> +{
> +	/* MC is always ready in GMC v9.*/
> +	return true;
> +}
> +
> +static int gmc_v9_0_wait_for_idle(void *handle)
> +{
> +	/* There is no need to wait for MC idle in GMC v9.*/
> +	return 0;
> +}
> +
> +static int gmc_v9_0_soft_reset(void *handle)
> +{
> +	/* XXX for emulation.*/
> +	return 0;
> +}
> +
> +static int gmc_v9_0_set_clockgating_state(void *handle,
> +					enum amd_clockgating_state state)
> +{
> +	return 0;
> +}
> +
> +static int gmc_v9_0_set_powergating_state(void *handle,
> +					enum amd_powergating_state state)
> +{
> +	return 0;
> +}
> +
> +const struct amd_ip_funcs gmc_v9_0_ip_funcs = {
> +	.name = "gmc_v9_0",
> +	.early_init = gmc_v9_0_early_init,
> +	.late_init = gmc_v9_0_late_init,
> +	.sw_init = gmc_v9_0_sw_init,
> +	.sw_fini = gmc_v9_0_sw_fini,
> +	.hw_init = gmc_v9_0_hw_init,
> +	.hw_fini = gmc_v9_0_hw_fini,
> +	.suspend = gmc_v9_0_suspend,
> +	.resume = gmc_v9_0_resume,
> +	.is_idle = gmc_v9_0_is_idle,
> +	.wait_for_idle = gmc_v9_0_wait_for_idle,
> +	.soft_reset = gmc_v9_0_soft_reset,
> +	.set_clockgating_state = gmc_v9_0_set_clockgating_state,
> +	.set_powergating_state = gmc_v9_0_set_powergating_state,
> +};
> +
> +const struct amdgpu_ip_block_version gmc_v9_0_ip_block =
> +{
> +	.type = AMD_IP_BLOCK_TYPE_GMC,
> +	.major = 9,
> +	.minor = 0,
> +	.rev = 0,
> +	.funcs = &gmc_v9_0_ip_funcs,
> +};
> diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.h b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.h
> new file mode 100644
> index 0000000..b030ca5
> --- /dev/null
> +++ b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.h
> @@ -0,0 +1,30 @@
> +/*
> + * Copyright 2016 Advanced Micro Devices, Inc.
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a
> + * copy of this software and associated documentation files (the "Software"),
> + * to deal in the Software without restriction, including without limitation
> + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
> + * and/or sell copies of the Software, and to permit persons to whom the
> + * Software is furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice shall be included in
> + * all copies or substantial portions of the Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
> + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
> + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
> + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
> + * OTHER DEALINGS IN THE SOFTWARE.
> + *
> + */
> +
> +#ifndef __GMC_V9_0_H__
> +#define __GMC_V9_0_H__
> +
> +extern const struct amd_ip_funcs gmc_v9_0_ip_funcs;
> +extern const struct amdgpu_ip_block_version gmc_v9_0_ip_block;
> +
> +#endif
> diff --git a/drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c b/drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c
> new file mode 100644
> index 0000000..b1e0e6b
> --- /dev/null
> +++ b/drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c
> @@ -0,0 +1,585 @@
> +/*
> + * Copyright 2016 Advanced Micro Devices, Inc.
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a
> + * copy of this software and associated documentation files (the "Software"),
> + * to deal in the Software without restriction, including without limitation
> + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
> + * and/or sell copies of the Software, and to permit persons to whom the
> + * Software is furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice shall be included in
> + * all copies or substantial portions of the Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
> + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
> + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
> + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
> + * OTHER DEALINGS IN THE SOFTWARE.
> + *
> + */
> +#include "amdgpu.h"
> +#include "mmhub_v1_0.h"
> +
> +#include "vega10/soc15ip.h"
> +#include "vega10/MMHUB/mmhub_1_0_offset.h"
> +#include "vega10/MMHUB/mmhub_1_0_sh_mask.h"
> +#include "vega10/MMHUB/mmhub_1_0_default.h"
> +#include "vega10/ATHUB/athub_1_0_offset.h"
> +#include "vega10/ATHUB/athub_1_0_sh_mask.h"
> +#include "vega10/ATHUB/athub_1_0_default.h"
> +#include "vega10/vega10_enum.h"
> +
> +#include "soc15_common.h"
> +
> +u64 mmhub_v1_0_get_fb_location(struct amdgpu_device *adev)
> +{
> +	u64 base = RREG32(SOC15_REG_OFFSET(MMHUB, 0, mmMC_VM_FB_LOCATION_BASE));
> +
> +	base &= MC_VM_FB_LOCATION_BASE__FB_BASE_MASK;
> +	base <<= 24;
> +
> +	return base;
> +}
> +
> +int mmhub_v1_0_gart_enable(struct amdgpu_device *adev)
> +{
> +	u32 tmp;
> +	u64 value;
> +	uint64_t addr;
> +	u32 i;
> +
> +	/* Program MC: update the system aperture configuration. */
> +	WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmMC_VM_SYSTEM_APERTURE_LOW_ADDR),
> +		adev->mc.vram_start >> 18);
> +	WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmMC_VM_SYSTEM_APERTURE_HIGH_ADDR),
> +		adev->mc.vram_end >> 18);
> +	value = adev->vram_scratch.gpu_addr - adev->mc.vram_start +
> +		adev->vm_manager.vram_base_offset;
> +	WREG32(SOC15_REG_OFFSET(MMHUB, 0,
> +				mmMC_VM_SYSTEM_APERTURE_DEFAULT_ADDR_LSB),
> +				(u32)(value >> 12));
> +	WREG32(SOC15_REG_OFFSET(MMHUB, 0,
> +				mmMC_VM_SYSTEM_APERTURE_DEFAULT_ADDR_MSB),
> +				(u32)(value >> 44));
> +
> +	/* Disable AGP. */
> +	WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmMC_VM_AGP_BASE), 0);
> +	WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmMC_VM_AGP_TOP), 0);
> +	WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmMC_VM_AGP_BOT), 0x00FFFFFF);
> +
> +	/* GART Enable. */
> +
> +	/* Setup TLB control */
> +	tmp = RREG32(SOC15_REG_OFFSET(MMHUB, 0, mmMC_VM_MX_L1_TLB_CNTL));
> +	tmp = REG_SET_FIELD(tmp, MC_VM_MX_L1_TLB_CNTL, ENABLE_L1_TLB, 1);
> +	tmp = REG_SET_FIELD(tmp,
> +				MC_VM_MX_L1_TLB_CNTL,
> +				SYSTEM_ACCESS_MODE,
> +				3);
> +	tmp = REG_SET_FIELD(tmp,
> +				MC_VM_MX_L1_TLB_CNTL,
> +				ENABLE_ADVANCED_DRIVER_MODEL,
> +				1);
> +	tmp = REG_SET_FIELD(tmp,
> +				MC_VM_MX_L1_TLB_CNTL,
> +				SYSTEM_APERTURE_UNMAPPED_ACCESS,
> +				0);
> +	tmp = REG_SET_FIELD(tmp,
> +				MC_VM_MX_L1_TLB_CNTL,
> +				ECO_BITS,
> +				0);
> +	tmp = REG_SET_FIELD(tmp,
> +				MC_VM_MX_L1_TLB_CNTL,
> +				MTYPE,
> +				MTYPE_UC);/* XXX for emulation. */
> +	tmp = REG_SET_FIELD(tmp,
> +				MC_VM_MX_L1_TLB_CNTL,
> +				ATC_EN,
> +				1);
> +	WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmMC_VM_MX_L1_TLB_CNTL), tmp);
> +
> +	/* Setup L2 cache */
> +	tmp = RREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_L2_CNTL));
> +	tmp = REG_SET_FIELD(tmp, VM_L2_CNTL, ENABLE_L2_CACHE, 1);
> +	tmp = REG_SET_FIELD(tmp,
> +				VM_L2_CNTL,
> +				ENABLE_L2_FRAGMENT_PROCESSING,
> +				0);
> +	tmp = REG_SET_FIELD(tmp,
> +				VM_L2_CNTL,
> +				L2_PDE0_CACHE_TAG_GENERATION_MODE,
> +				0);/* XXX for emulation, Refer to closed source code.*/
> +	tmp = REG_SET_FIELD(tmp, VM_L2_CNTL, PDE_FAULT_CLASSIFICATION, 1);
> +	tmp = REG_SET_FIELD(tmp,
> +				VM_L2_CNTL,
> +				CONTEXT1_IDENTITY_ACCESS_MODE,
> +				1);
> +	tmp = REG_SET_FIELD(tmp,
> +				VM_L2_CNTL,
> +				IDENTITY_MODE_FRAGMENT_SIZE,
> +				0);
> +	WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_L2_CNTL), tmp);
> +
> +	tmp = RREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_L2_CNTL2));
> +	tmp = REG_SET_FIELD(tmp, VM_L2_CNTL2, INVALIDATE_ALL_L1_TLBS, 1);
> +	tmp = REG_SET_FIELD(tmp, VM_L2_CNTL2, INVALIDATE_L2_CACHE, 1);
> +	WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_L2_CNTL2), tmp);
> +
> +	tmp = mmVM_L2_CNTL3_DEFAULT;
> +	WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_L2_CNTL3), tmp);
> +
> +	tmp = RREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_L2_CNTL4));
> +	tmp = REG_SET_FIELD(tmp,
> +			    VM_L2_CNTL4,
> +			    VMC_TAP_PDE_REQUEST_PHYSICAL,
> +			    0);
> +	tmp = REG_SET_FIELD(tmp,
> +			    VM_L2_CNTL4,
> +			    VMC_TAP_PTE_REQUEST_PHYSICAL,
> +			    0);
> +	WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_L2_CNTL4), tmp);
> +
> +	/* setup context0 */
> +	WREG32(SOC15_REG_OFFSET(MMHUB, 0,
> +				mmVM_CONTEXT0_PAGE_TABLE_START_ADDR_LO32),
> +		(u32)(adev->mc.gtt_start >> 12));
> +	WREG32(SOC15_REG_OFFSET(MMHUB, 0,
> +				mmVM_CONTEXT0_PAGE_TABLE_START_ADDR_HI32),
> +		(u32)(adev->mc.gtt_start >> 44));
> +
> +	WREG32(SOC15_REG_OFFSET(MMHUB, 0,
> +				mmVM_CONTEXT0_PAGE_TABLE_END_ADDR_LO32),
> +		(u32)(adev->mc.gtt_end >> 12));
> +	WREG32(SOC15_REG_OFFSET(MMHUB, 0,
> +				mmVM_CONTEXT0_PAGE_TABLE_END_ADDR_HI32),
> +		(u32)(adev->mc.gtt_end >> 44));
> +
> +	BUG_ON(adev->gart.table_addr & (~0x0000FFFFFFFFF000ULL));
> +	value = adev->gart.table_addr - adev->mc.vram_start +
> +		adev->vm_manager.vram_base_offset;
> +	value &= 0x0000FFFFFFFFF000ULL;
> +	value |= 0x1; /* valid bit */
> +
> +	WREG32(SOC15_REG_OFFSET(MMHUB, 0,
> +				mmVM_CONTEXT0_PAGE_TABLE_BASE_ADDR_LO32),
> +		(u32)value);
> +	WREG32(SOC15_REG_OFFSET(MMHUB, 0,
> +				mmVM_CONTEXT0_PAGE_TABLE_BASE_ADDR_HI32),
> +		(u32)(value >> 32));
> +
> +	WREG32(SOC15_REG_OFFSET(MMHUB, 0,
> +				mmVM_L2_PROTECTION_FAULT_DEFAULT_ADDR_LO32),
> +		(u32)(adev->dummy_page.addr >> 12));
> +	WREG32(SOC15_REG_OFFSET(MMHUB, 0,
> +				mmVM_L2_PROTECTION_FAULT_DEFAULT_ADDR_HI32),
> +		(u32)(adev->dummy_page.addr >> 44));
> +
> +	tmp = RREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_L2_PROTECTION_FAULT_CNTL2));
> +	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL2,
> +			    ACTIVE_PAGE_MIGRATION_PTE_READ_RETRY,
> +			    1);
> +	WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_L2_PROTECTION_FAULT_CNTL2), tmp);
> +
> +	addr = SOC15_REG_OFFSET(MMHUB, 0, mmVM_CONTEXT0_CNTL);
> +	tmp = RREG32(addr);
> +
> +	tmp = REG_SET_FIELD(tmp, VM_CONTEXT0_CNTL, ENABLE_CONTEXT, 1);
> +	tmp = REG_SET_FIELD(tmp, VM_CONTEXT0_CNTL, PAGE_TABLE_DEPTH, 0);
> +	WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_CONTEXT0_CNTL), tmp);
> +
> +	/* Disable identity aperture.*/
> +	WREG32(SOC15_REG_OFFSET(MMHUB, 0,
> +		mmVM_L2_CONTEXT1_IDENTITY_APERTURE_LOW_ADDR_LO32), 0XFFFFFFFF);
> +	WREG32(SOC15_REG_OFFSET(MMHUB, 0,
> +		mmVM_L2_CONTEXT1_IDENTITY_APERTURE_LOW_ADDR_HI32), 0x0000000F);
> +
> +	WREG32(SOC15_REG_OFFSET(MMHUB, 0,
> +		mmVM_L2_CONTEXT1_IDENTITY_APERTURE_HIGH_ADDR_LO32), 0);
> +	WREG32(SOC15_REG_OFFSET(MMHUB, 0,
> +		mmVM_L2_CONTEXT1_IDENTITY_APERTURE_HIGH_ADDR_HI32), 0);
> +
> +	WREG32(SOC15_REG_OFFSET(MMHUB, 0,
> +		mmVM_L2_CONTEXT_IDENTITY_PHYSICAL_OFFSET_LO32), 0);
> +	WREG32(SOC15_REG_OFFSET(MMHUB, 0,
> +		mmVM_L2_CONTEXT_IDENTITY_PHYSICAL_OFFSET_HI32), 0);
> +
> +	for (i = 0; i <= 14; i++) {
> +		tmp = RREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_CONTEXT1_CNTL)
> +				+ i);
> +		tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
> +				ENABLE_CONTEXT, 1);
> +		tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
> +				PAGE_TABLE_DEPTH, 1);
> +		tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
> +				RANGE_PROTECTION_FAULT_ENABLE_DEFAULT, 1);
> +		tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
> +				DUMMY_PAGE_PROTECTION_FAULT_ENABLE_DEFAULT, 1);
> +		tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
> +				PDE0_PROTECTION_FAULT_ENABLE_DEFAULT, 1);
> +		tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
> +				VALID_PROTECTION_FAULT_ENABLE_DEFAULT, 1);
> +		tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
> +				READ_PROTECTION_FAULT_ENABLE_DEFAULT, 1);
> +		tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
> +				WRITE_PROTECTION_FAULT_ENABLE_DEFAULT, 1);
> +		tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
> +				EXECUTE_PROTECTION_FAULT_ENABLE_DEFAULT, 1);
> +		tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
> +				PAGE_TABLE_BLOCK_SIZE,
> +				amdgpu_vm_block_size - 9);
> +		WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_CONTEXT1_CNTL) + i, tmp);
> +		WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_CONTEXT1_PAGE_TABLE_START_ADDR_LO32) + i*2, 0);
> +		WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_CONTEXT1_PAGE_TABLE_START_ADDR_HI32) + i*2, 0);
> +		WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_CONTEXT1_PAGE_TABLE_END_ADDR_LO32) + i*2,
> +				adev->vm_manager.max_pfn - 1);
> +		WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_CONTEXT1_PAGE_TABLE_END_ADDR_HI32) + i*2, 0);
> +	}
> +
> +	return 0;
> +}
> +
> +void mmhub_v1_0_gart_disable(struct amdgpu_device *adev)
> +{
> +	u32 tmp;
> +	u32 i;
> +
> +	/* Disable all tables */
> +	for (i = 0; i < 16; i++)
> +		WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_CONTEXT0_CNTL) + i, 0);
> +
> +	/* Setup TLB control */
> +	tmp = RREG32(SOC15_REG_OFFSET(MMHUB, 0, mmMC_VM_MX_L1_TLB_CNTL));
> +	tmp = REG_SET_FIELD(tmp, MC_VM_MX_L1_TLB_CNTL, ENABLE_L1_TLB, 0);
> +	tmp = REG_SET_FIELD(tmp,
> +				MC_VM_MX_L1_TLB_CNTL,
> +				ENABLE_ADVANCED_DRIVER_MODEL,
> +				0);
> +	WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmMC_VM_MX_L1_TLB_CNTL), tmp);
> +
> +	/* Setup L2 cache */
> +	tmp = RREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_L2_CNTL));
> +	tmp = REG_SET_FIELD(tmp, VM_L2_CNTL, ENABLE_L2_CACHE, 0);
> +	WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_L2_CNTL), tmp);
> +	WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_L2_CNTL3), 0);
> +}
> +
> +/**
> + * mmhub_v1_0_set_fault_enable_default - update GART/VM fault handling
> + *
> + * @adev: amdgpu_device pointer
> + * @value: true redirects VM faults to the default page
> + */
> +void mmhub_v1_0_set_fault_enable_default(struct amdgpu_device *adev, bool value)
> +{
> +	u32 tmp;
> +	tmp = RREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_L2_PROTECTION_FAULT_CNTL));
> +	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
> +			RANGE_PROTECTION_FAULT_ENABLE_DEFAULT, value);
> +	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
> +			PDE0_PROTECTION_FAULT_ENABLE_DEFAULT, value);
> +	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
> +			PDE1_PROTECTION_FAULT_ENABLE_DEFAULT, value);
> +	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
> +			PDE2_PROTECTION_FAULT_ENABLE_DEFAULT, value);
> +	tmp = REG_SET_FIELD(tmp,
> +			VM_L2_PROTECTION_FAULT_CNTL,
> +			TRANSLATE_FURTHER_PROTECTION_FAULT_ENABLE_DEFAULT,
> +			value);
> +	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
> +			NACK_PROTECTION_FAULT_ENABLE_DEFAULT, value);
> +	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
> +			DUMMY_PAGE_PROTECTION_FAULT_ENABLE_DEFAULT, value);
> +	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
> +			VALID_PROTECTION_FAULT_ENABLE_DEFAULT, value);
> +	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
> +			READ_PROTECTION_FAULT_ENABLE_DEFAULT, value);
> +	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
> +			WRITE_PROTECTION_FAULT_ENABLE_DEFAULT, value);
> +	tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
> +			EXECUTE_PROTECTION_FAULT_ENABLE_DEFAULT, value);
> +	WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmVM_L2_PROTECTION_FAULT_CNTL), tmp);
> +}
> +
> +static uint32_t mmhub_v1_0_get_invalidate_req(unsigned int vm_id)
> +{
> +	u32 req = 0;
> +
> +	/* invalidate using legacy mode on vm_id */
> +	req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ,
> +			    PER_VMID_INVALIDATE_REQ, 1 << vm_id);
> +	req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ, FLUSH_TYPE, 0);
> +	req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ, INVALIDATE_L2_PTES, 1);
> +	req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ, INVALIDATE_L2_PDE0, 1);
> +	req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ, INVALIDATE_L2_PDE1, 1);
> +	req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ, INVALIDATE_L2_PDE2, 1);
> +	req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ, INVALIDATE_L1_PTES, 1);
> +	req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ,
> +			    CLEAR_PROTECTION_FAULT_STATUS_ADDR,	0);
> +
> +	return req;
> +}
> +
> +static uint32_t mmhub_v1_0_get_vm_protection_bits(void)
> +{
> +	return (VM_CONTEXT1_CNTL__RANGE_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK |
> +		    VM_CONTEXT1_CNTL__DUMMY_PAGE_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK |
> +		    VM_CONTEXT1_CNTL__PDE0_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK |
> +		    VM_CONTEXT1_CNTL__VALID_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK |
> +		    VM_CONTEXT1_CNTL__READ_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK |
> +		    VM_CONTEXT1_CNTL__WRITE_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK |
> +		    VM_CONTEXT1_CNTL__EXECUTE_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK);
> +}
> +
> +static int mmhub_v1_0_early_init(void *handle)
> +{
> +	return 0;
> +}
> +
> +static int mmhub_v1_0_late_init(void *handle)
> +{
> +	return 0;
> +}
> +
> +static int mmhub_v1_0_sw_init(void *handle)
> +{
> +	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
> +	struct amdgpu_vmhub *hub = &adev->vmhub[AMDGPU_MMHUB];
> +
> +	hub->ctx0_ptb_addr_lo32 =
> +		SOC15_REG_OFFSET(MMHUB, 0,
> +				 mmVM_CONTEXT0_PAGE_TABLE_BASE_ADDR_LO32);
> +	hub->ctx0_ptb_addr_hi32 =
> +		SOC15_REG_OFFSET(MMHUB, 0,
> +				 mmVM_CONTEXT0_PAGE_TABLE_BASE_ADDR_HI32);
> +	hub->vm_inv_eng0_req =
> +		SOC15_REG_OFFSET(MMHUB, 0, mmVM_INVALIDATE_ENG0_REQ);
> +	hub->vm_inv_eng0_ack =
> +		SOC15_REG_OFFSET(MMHUB, 0, mmVM_INVALIDATE_ENG0_ACK);
> +	hub->vm_context0_cntl =
> +		SOC15_REG_OFFSET(MMHUB, 0, mmVM_CONTEXT0_CNTL);
> +	hub->vm_l2_pro_fault_status =
> +		SOC15_REG_OFFSET(MMHUB, 0, mmVM_L2_PROTECTION_FAULT_STATUS);
> +	hub->vm_l2_pro_fault_cntl =
> +		SOC15_REG_OFFSET(MMHUB, 0, mmVM_L2_PROTECTION_FAULT_CNTL);
> +
> +	hub->get_invalidate_req = mmhub_v1_0_get_invalidate_req;
> +	hub->get_vm_protection_bits = mmhub_v1_0_get_vm_protection_bits;
> +
> +	return 0;
> +}
> +
> +static int mmhub_v1_0_sw_fini(void *handle)
> +{
> +	return 0;
> +}
> +
> +static int mmhub_v1_0_hw_init(void *handle)
> +{
> +	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
> +	unsigned i;
> +
> +	for (i = 0; i < 18; ++i) {
> +		WREG32(SOC15_REG_OFFSET(MMHUB, 0,
> +					mmVM_INVALIDATE_ENG0_ADDR_RANGE_LO32) +
> +		       2 * i, 0xffffffff);
> +		WREG32(SOC15_REG_OFFSET(MMHUB, 0,
> +					mmVM_INVALIDATE_ENG0_ADDR_RANGE_HI32) +
> +		       2 * i, 0x1f);
> +	}
> +
> +	return 0;
> +}
> +
> +static int mmhub_v1_0_hw_fini(void *handle)
> +{
> +	return 0;
> +}
> +
> +static int mmhub_v1_0_suspend(void *handle)
> +{
> +	return 0;
> +}
> +
> +static int mmhub_v1_0_resume(void *handle)
> +{
> +	return 0;
> +}
> +
> +static bool mmhub_v1_0_is_idle(void *handle)
> +{
> +	return true;
> +}
> +
> +static int mmhub_v1_0_wait_for_idle(void *handle)
> +{
> +	return 0;
> +}
> +
> +static int mmhub_v1_0_soft_reset(void *handle)
> +{
> +	return 0;
> +}
> +
> +static void mmhub_v1_0_update_medium_grain_clock_gating(struct amdgpu_device *adev,
> +							bool enable)
> +{
> +	uint32_t def, data, def1, data1, def2, data2;
> +
> +	def  = data  = RREG32(SOC15_REG_OFFSET(MMHUB, 0, mmATC_L2_MISC_CG));
> +	def1 = data1 = RREG32(SOC15_REG_OFFSET(MMHUB, 0, mmDAGB0_CNTL_MISC2));
> +	def2 = data2 = RREG32(SOC15_REG_OFFSET(MMHUB, 0, mmDAGB1_CNTL_MISC2));
> +
> +	if (enable && (adev->cg_flags & AMD_CG_SUPPORT_MC_MGCG)) {
> +		data |= ATC_L2_MISC_CG__ENABLE_MASK;
> +
> +		data1 &= ~(DAGB0_CNTL_MISC2__DISABLE_WRREQ_CG_MASK |
> +		           DAGB0_CNTL_MISC2__DISABLE_WRRET_CG_MASK |
> +		           DAGB0_CNTL_MISC2__DISABLE_RDREQ_CG_MASK |
> +		           DAGB0_CNTL_MISC2__DISABLE_RDRET_CG_MASK |
> +		           DAGB0_CNTL_MISC2__DISABLE_TLBWR_CG_MASK |
> +		           DAGB0_CNTL_MISC2__DISABLE_TLBRD_CG_MASK);
> +
> +		data2 &= ~(DAGB1_CNTL_MISC2__DISABLE_WRREQ_CG_MASK |
> +		           DAGB1_CNTL_MISC2__DISABLE_WRRET_CG_MASK |
> +		           DAGB1_CNTL_MISC2__DISABLE_RDREQ_CG_MASK |
> +		           DAGB1_CNTL_MISC2__DISABLE_RDRET_CG_MASK |
> +		           DAGB1_CNTL_MISC2__DISABLE_TLBWR_CG_MASK |
> +		           DAGB1_CNTL_MISC2__DISABLE_TLBRD_CG_MASK);
> +	} else {
> +		data &= ~ATC_L2_MISC_CG__ENABLE_MASK;
> +
> +		data1 |= (DAGB0_CNTL_MISC2__DISABLE_WRREQ_CG_MASK |
> +			  DAGB0_CNTL_MISC2__DISABLE_WRRET_CG_MASK |
> +			  DAGB0_CNTL_MISC2__DISABLE_RDREQ_CG_MASK |
> +			  DAGB0_CNTL_MISC2__DISABLE_RDRET_CG_MASK |
> +			  DAGB0_CNTL_MISC2__DISABLE_TLBWR_CG_MASK |
> +			  DAGB0_CNTL_MISC2__DISABLE_TLBRD_CG_MASK);
> +
> +		data2 |= (DAGB1_CNTL_MISC2__DISABLE_WRREQ_CG_MASK |
> +		          DAGB1_CNTL_MISC2__DISABLE_WRRET_CG_MASK |
> +		          DAGB1_CNTL_MISC2__DISABLE_RDREQ_CG_MASK |
> +		          DAGB1_CNTL_MISC2__DISABLE_RDRET_CG_MASK |
> +		          DAGB1_CNTL_MISC2__DISABLE_TLBWR_CG_MASK |
> +		          DAGB1_CNTL_MISC2__DISABLE_TLBRD_CG_MASK);
> +	}
> +
> +	if (def != data)
> +		WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmATC_L2_MISC_CG), data);
> +
> +	if (def1 != data1)
> +		WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmDAGB0_CNTL_MISC2), data1);
> +
> +	if (def2 != data2)
> +		WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmDAGB1_CNTL_MISC2), data2);
> +}
> +
> +static void athub_update_medium_grain_clock_gating(struct amdgpu_device *adev,
> +						   bool enable)
> +{
> +	uint32_t def, data;
> +
> +	def = data = RREG32(SOC15_REG_OFFSET(ATHUB, 0, mmATHUB_MISC_CNTL));
> +
> +	if (enable && (adev->cg_flags & AMD_CG_SUPPORT_MC_MGCG))
> +		data |= ATHUB_MISC_CNTL__CG_ENABLE_MASK;
> +	else
> +		data &= ~ATHUB_MISC_CNTL__CG_ENABLE_MASK;
> +
> +	if (def != data)
> +		WREG32(SOC15_REG_OFFSET(ATHUB, 0, mmATHUB_MISC_CNTL), data);
> +}
> +
> +static void mmhub_v1_0_update_medium_grain_light_sleep(struct amdgpu_device *adev,
> +						       bool enable)
> +{
> +	uint32_t def, data;
> +
> +	def = data = RREG32(SOC15_REG_OFFSET(MMHUB, 0, mmATC_L2_MISC_CG));
> +
> +	if (enable && (adev->cg_flags & AMD_CG_SUPPORT_MC_LS))
> +		data |= ATC_L2_MISC_CG__MEM_LS_ENABLE_MASK;
> +	else
> +		data &= ~ATC_L2_MISC_CG__MEM_LS_ENABLE_MASK;
> +
> +	if (def != data)
> +		WREG32(SOC15_REG_OFFSET(MMHUB, 0, mmATC_L2_MISC_CG), data);
> +}
> +
> +static void athub_update_medium_grain_light_sleep(struct amdgpu_device *adev,
> +						  bool enable)
> +{
> +	uint32_t def, data;
> +
> +	def = data = RREG32(SOC15_REG_OFFSET(ATHUB, 0, mmATHUB_MISC_CNTL));
> +
> +	if (enable && (adev->cg_flags & AMD_CG_SUPPORT_MC_LS) &&
> +	    (adev->cg_flags & AMD_CG_SUPPORT_HDP_LS))
> +		data |= ATHUB_MISC_CNTL__CG_MEM_LS_ENABLE_MASK;
> +	else
> +		data &= ~ATHUB_MISC_CNTL__CG_MEM_LS_ENABLE_MASK;
> +
> +	if (def != data)
> +		WREG32(SOC15_REG_OFFSET(ATHUB, 0, mmATHUB_MISC_CNTL), data);
> +}
> +
> +static int mmhub_v1_0_set_clockgating_state(void *handle,
> +					enum amd_clockgating_state state)
> +{
> +	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
> +
> +	switch (adev->asic_type) {
> +	case CHIP_VEGA10:
> +		mmhub_v1_0_update_medium_grain_clock_gating(adev,
> +				state == AMD_CG_STATE_GATE);
> +		athub_update_medium_grain_clock_gating(adev,
> +				state == AMD_CG_STATE_GATE);
> +		mmhub_v1_0_update_medium_grain_light_sleep(adev,
> +				state == AMD_CG_STATE_GATE);
> +		athub_update_medium_grain_light_sleep(adev,
> +				state == AMD_CG_STATE_GATE);
> +		break;
> +	default:
> +		break;
> +	}
> +
> +	return 0;
> +}
> +
> +static int mmhub_v1_0_set_powergating_state(void *handle,
> +					enum amd_powergating_state state)
> +{
> +	return 0;
> +}
> +
> +const struct amd_ip_funcs mmhub_v1_0_ip_funcs = {
> +	.name = "mmhub_v1_0",
> +	.early_init = mmhub_v1_0_early_init,
> +	.late_init = mmhub_v1_0_late_init,
> +	.sw_init = mmhub_v1_0_sw_init,
> +	.sw_fini = mmhub_v1_0_sw_fini,
> +	.hw_init = mmhub_v1_0_hw_init,
> +	.hw_fini = mmhub_v1_0_hw_fini,
> +	.suspend = mmhub_v1_0_suspend,
> +	.resume = mmhub_v1_0_resume,
> +	.is_idle = mmhub_v1_0_is_idle,
> +	.wait_for_idle = mmhub_v1_0_wait_for_idle,
> +	.soft_reset = mmhub_v1_0_soft_reset,
> +	.set_clockgating_state = mmhub_v1_0_set_clockgating_state,
> +	.set_powergating_state = mmhub_v1_0_set_powergating_state,
> +};
> +
> +const struct amdgpu_ip_block_version mmhub_v1_0_ip_block =
> +{
> +	.type = AMD_IP_BLOCK_TYPE_MMHUB,
> +	.major = 1,
> +	.minor = 0,
> +	.rev = 0,
> +	.funcs = &mmhub_v1_0_ip_funcs,
> +};
> diff --git a/drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.h b/drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.h
> new file mode 100644
> index 0000000..aadedf9
> --- /dev/null
> +++ b/drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.h
> @@ -0,0 +1,35 @@
> +/*
> + * Copyright 2016 Advanced Micro Devices, Inc.
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a
> + * copy of this software and associated documentation files (the "Software"),
> + * to deal in the Software without restriction, including without limitation
> + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
> + * and/or sell copies of the Software, and to permit persons to whom the
> + * Software is furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice shall be included in
> + * all copies or substantial portions of the Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
> + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
> + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
> + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
> + * OTHER DEALINGS IN THE SOFTWARE.
> + *
> + */
> +#ifndef __MMHUB_V1_0_H__
> +#define __MMHUB_V1_0_H__
> +
> +u64 mmhub_v1_0_get_fb_location(struct amdgpu_device *adev);
> +int mmhub_v1_0_gart_enable(struct amdgpu_device *adev);
> +void mmhub_v1_0_gart_disable(struct amdgpu_device *adev);
> +void mmhub_v1_0_set_fault_enable_default(struct amdgpu_device *adev,
> +					 bool value);
> +
> +extern const struct amd_ip_funcs mmhub_v1_0_ip_funcs;
> +extern const struct amdgpu_ip_block_version mmhub_v1_0_ip_block;
> +
> +#endif
> diff --git a/drivers/gpu/drm/amd/include/amd_shared.h b/drivers/gpu/drm/amd/include/amd_shared.h
> index 717d6be..a94420d 100644
> --- a/drivers/gpu/drm/amd/include/amd_shared.h
> +++ b/drivers/gpu/drm/amd/include/amd_shared.h
> @@ -74,6 +74,8 @@ enum amd_ip_block_type {
>   	AMD_IP_BLOCK_TYPE_UVD,
>   	AMD_IP_BLOCK_TYPE_VCE,
>   	AMD_IP_BLOCK_TYPE_ACP,
> +	AMD_IP_BLOCK_TYPE_GFXHUB,
> +	AMD_IP_BLOCK_TYPE_MMHUB
>   };
>
>   enum amd_clockgating_state {
>
_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx


* Re: [PATCH 046/100] drm/amdgpu: Add GMC 9.0 support
       [not found]         ` <58D33609.3060304-5C7GfCeVMHo@public.gmane.org>
@ 2017-03-23  2:53           ` Alex Deucher
       [not found]             ` <CADnq5_N_8r-pesBn1NwDrHJcOx8jnryhtcWow01Cbndj8B9N6w-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
  0 siblings, 1 reply; 101+ messages in thread
From: Alex Deucher @ 2017-03-23  2:53 UTC (permalink / raw)
  To: Zhang, Jerry (Junwei); +Cc: Alex Deucher, amd-gfx list, Alex Xie

On Wed, Mar 22, 2017 at 10:42 PM, Zhang, Jerry (Junwei)
<Jerry.Zhang@amd.com> wrote:
> Hi Alex,
>
> I remember we had a patch to remove the FB location programming in
> gmc/vmhub. I see it's not in GMC v9 in this patch, but pre-GMC v9
> still programs the FB registers.
>
> Is anything missing from the sync here, or is this only supported for
> GMC v9 now?

I never landed the patches for older ASICs because I couldn't get VCE
to work properly with DAL disabled without reprogramming the FB
location.

Alex


>
> Jerry
>
> On 03/21/2017 04:29 AM, Alex Deucher wrote:
>>
>> From: Alex Xie <AlexBin.Xie@amd.com>
>>
>> On SOC-15 parts, the GMC (Graphics Memory Controller) consists
>> of two hubs: GFX (graphics and compute) and MM (SDMA, UVD, VCE).
>>
>> Signed-off-by: Alex Xie <AlexBin.Xie@amd.com>
>> Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
>> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
>> ---
>>   drivers/gpu/drm/amd/amdgpu/Makefile      |   6 +-
>>   drivers/gpu/drm/amd/amdgpu/amdgpu.h      |  30 ++
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c   |  28 +-
>>   drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c | 447 +++++++++++++++++
>>   drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.h |  35 ++
>>   drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c    | 826 +++++++++++++++++++++++++++++++
>>   drivers/gpu/drm/amd/amdgpu/gmc_v9_0.h    |  30 ++
>>   drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c  | 585 ++++++++++++++++++++++
>>   drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.h  |  35 ++
>>   drivers/gpu/drm/amd/include/amd_shared.h |   2 +
>>   10 files changed, 2016 insertions(+), 8 deletions(-)
>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c
>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.h
>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/gmc_v9_0.h
>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c
>>   create mode 100644 drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.h
>>
>> diff --git a/drivers/gpu/drm/amd/amdgpu/Makefile b/drivers/gpu/drm/amd/amdgpu/Makefile
>> index 69823e8..b5046fd 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/Makefile
>> +++ b/drivers/gpu/drm/amd/amdgpu/Makefile
>> @@ -45,7 +45,8 @@ amdgpu-y += \
>>   # add GMC block
>>   amdgpu-y += \
>>         gmc_v7_0.o \
>> -       gmc_v8_0.o
>> +       gmc_v8_0.o \
>> +       gfxhub_v1_0.o mmhub_v1_0.o gmc_v9_0.o
>>
>>   # add IH block
>>   amdgpu-y += \
>> @@ -74,7 +75,8 @@ amdgpu-y += \
>>   # add async DMA block
>>   amdgpu-y += \
>>         sdma_v2_4.o \
>> -       sdma_v3_0.o
>> +       sdma_v3_0.o \
>> +       sdma_v4_0.o
>>
>>   # add UVD block
>>   amdgpu-y += \
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
>> index aaded8d..d7257b6 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
>> @@ -123,6 +123,11 @@ extern int amdgpu_param_buf_per_se;
>>   /* max number of IP instances */
>>   #define AMDGPU_MAX_SDMA_INSTANCES             2
>>
>> +/* max number of VMHUB */
>> +#define AMDGPU_MAX_VMHUBS                      2
>> +#define AMDGPU_MMHUB                           0
>> +#define AMDGPU_GFXHUB                          1
>> +
>>   /* hardcode that limit for now */
>>   #define AMDGPU_VA_RESERVED_SIZE                       (8 << 20)
>>
>> @@ -310,6 +315,12 @@ struct amdgpu_gart_funcs {
>>                                      uint32_t flags);
>>   };
>>
>> +/* provided by the mc block */
>> +struct amdgpu_mc_funcs {
>> +       /* adjust mc addr in fb for APU case */
>> +       u64 (*adjust_mc_addr)(struct amdgpu_device *adev, u64 addr);
>> +};
>> +
>>   /* provided by the ih block */
>>   struct amdgpu_ih_funcs {
>>         /* ring read/write ptr handling, called from interrupt context */
>> @@ -559,6 +570,21 @@ int amdgpu_gart_bind(struct amdgpu_device *adev, uint64_t offset,
>>   int amdgpu_ttm_recover_gart(struct amdgpu_device *adev);
>>
>>   /*
>> + * VMHUB structures, functions & helpers
>> + */
>> +struct amdgpu_vmhub {
>> +       uint32_t        ctx0_ptb_addr_lo32;
>> +       uint32_t        ctx0_ptb_addr_hi32;
>> +       uint32_t        vm_inv_eng0_req;
>> +       uint32_t        vm_inv_eng0_ack;
>> +       uint32_t        vm_context0_cntl;
>> +       uint32_t        vm_l2_pro_fault_status;
>> +       uint32_t        vm_l2_pro_fault_cntl;
>> +       uint32_t        (*get_invalidate_req)(unsigned int vm_id);
>> +       uint32_t        (*get_vm_protection_bits)(void);
>> +};
>> +
>> +/*
>>    * GPU MC structures, functions & helpers
>>    */
>>   struct amdgpu_mc {
>> @@ -591,6 +617,9 @@ struct amdgpu_mc {
>>         u64                                     shared_aperture_end;
>>         u64                                     private_aperture_start;
>>         u64                                     private_aperture_end;
>> +       /* protects concurrent invalidation */
>> +       spinlock_t              invalidate_lock;
>> +       const struct amdgpu_mc_funcs *mc_funcs;
>>   };
>>
>>   /*
>> @@ -1479,6 +1508,7 @@ struct amdgpu_device {
>>         struct amdgpu_gart              gart;
>>         struct amdgpu_dummy_page        dummy_page;
>>         struct amdgpu_vm_manager        vm_manager;
>> +       struct amdgpu_vmhub             vmhub[AMDGPU_MAX_VMHUBS];
>>
>>         /* memory management */
>>         struct amdgpu_mman              mman;
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>> index df615d7..47a8080 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>> @@ -375,6 +375,16 @@ static bool amdgpu_vm_ring_has_compute_vm_bug(struct amdgpu_ring *ring)
>>         return false;
>>   }
>>
>> +static u64 amdgpu_vm_adjust_mc_addr(struct amdgpu_device *adev, u64 mc_addr)
>> +{
>> +       u64 addr = mc_addr;
>> +
>> +       if (adev->mc.mc_funcs && adev->mc.mc_funcs->adjust_mc_addr)
>> +               addr = adev->mc.mc_funcs->adjust_mc_addr(adev, addr);
>> +
>> +       return addr;
>> +}
>> +
>>   /**
>>    * amdgpu_vm_flush - hardware flush the vm
>>    *
>> @@ -405,9 +415,10 @@ int amdgpu_vm_flush(struct amdgpu_ring *ring, struct amdgpu_job *job)
>>         if (ring->funcs->emit_vm_flush && (job->vm_needs_flush ||
>>             amdgpu_vm_is_gpu_reset(adev, id))) {
>>                 struct fence *fence;
>> +               u64 pd_addr = amdgpu_vm_adjust_mc_addr(adev, job->vm_pd_addr);
>>
>> -               trace_amdgpu_vm_flush(job->vm_pd_addr, ring->idx, job->vm_id);
>> -               amdgpu_ring_emit_vm_flush(ring, job->vm_id, job->vm_pd_addr);
>> +               trace_amdgpu_vm_flush(pd_addr, ring->idx, job->vm_id);
>> +               amdgpu_ring_emit_vm_flush(ring, job->vm_id, pd_addr);
>>
>>                 r = amdgpu_fence_emit(ring, &fence);
>>                 if (r)
>> @@ -643,15 +654,18 @@ int amdgpu_vm_update_page_directory(struct amdgpu_device *adev,
>>                     (count == AMDGPU_VM_MAX_UPDATE_SIZE)) {
>>
>>                         if (count) {
>> +                               uint64_t pt_addr =
>> +                                       amdgpu_vm_adjust_mc_addr(adev, last_pt);
>> +
>>                                 if (shadow)
>>                                         amdgpu_vm_do_set_ptes(&params,
>>                                                               last_shadow,
>> -                                                             last_pt, count,
>> +                                                             pt_addr, count,
>>                                                               incr,
>>                                                               AMDGPU_PTE_VALID);
>>
>>                                 amdgpu_vm_do_set_ptes(&params, last_pde,
>> -                                                     last_pt, count, incr,
>> +                                                     pt_addr, count, incr,
>>                                                       AMDGPU_PTE_VALID);
>>                         }
>>
>> @@ -665,11 +679,13 @@ int amdgpu_vm_update_page_directory(struct amdgpu_device *adev,
>>         }
>>
>>         if (count) {
>> +               uint64_t pt_addr = amdgpu_vm_adjust_mc_addr(adev, last_pt);
>> +
>>                 if (vm->page_directory->shadow)
>> -                       amdgpu_vm_do_set_ptes(&params, last_shadow, last_pt,
>> +                       amdgpu_vm_do_set_ptes(&params, last_shadow, pt_addr,
>>                                               count, incr, AMDGPU_PTE_VALID);
>>
>> -               amdgpu_vm_do_set_ptes(&params, last_pde, last_pt,
>> +               amdgpu_vm_do_set_ptes(&params, last_pde, pt_addr,
>>                                       count, incr, AMDGPU_PTE_VALID);
>>         }
>>
>> diff --git a/drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c b/drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c
>> new file mode 100644
>> index 0000000..1ff019c
>> --- /dev/null
>> +++ b/drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c
>> @@ -0,0 +1,447 @@
>> +/*
>> + * Copyright 2016 Advanced Micro Devices, Inc.
>> + *
>> + * Permission is hereby granted, free of charge, to any person obtaining a
>> + * copy of this software and associated documentation files (the "Software"),
>> + * to deal in the Software without restriction, including without limitation
>> + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
>> + * and/or sell copies of the Software, and to permit persons to whom the
>> + * Software is furnished to do so, subject to the following conditions:
>> + *
>> + * The above copyright notice and this permission notice shall be included in
>> + * all copies or substantial portions of the Software.
>> + *
>> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
>> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
>> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
>> + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
>> + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
>> + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
>> + * OTHER DEALINGS IN THE SOFTWARE.
>> + *
>> + */
>> +#include "amdgpu.h"
>> +#include "gfxhub_v1_0.h"
>> +
>> +#include "vega10/soc15ip.h"
>> +#include "vega10/GC/gc_9_0_offset.h"
>> +#include "vega10/GC/gc_9_0_sh_mask.h"
>> +#include "vega10/GC/gc_9_0_default.h"
>> +#include "vega10/vega10_enum.h"
>> +
>> +#include "soc15_common.h"
>> +
>> +int gfxhub_v1_0_gart_enable(struct amdgpu_device *adev)
>> +{
>> +       u32 tmp;
>> +       u64 value;
>> +       u32 i;
>> +
>> +       /* Program MC. */
>> +       /* Update configuration */
>> +       WREG32(SOC15_REG_OFFSET(GC, 0, mmMC_VM_SYSTEM_APERTURE_LOW_ADDR),
>> +               adev->mc.vram_start >> 18);
>> +       WREG32(SOC15_REG_OFFSET(GC, 0, mmMC_VM_SYSTEM_APERTURE_HIGH_ADDR),
>> +               adev->mc.vram_end >> 18);
>> +
>> +       value = adev->vram_scratch.gpu_addr - adev->mc.vram_start
>> +               + adev->vm_manager.vram_base_offset;
>> +       WREG32(SOC15_REG_OFFSET(GC, 0,
>> +                               mmMC_VM_SYSTEM_APERTURE_DEFAULT_ADDR_LSB),
>> +                               (u32)(value >> 12));
>> +       WREG32(SOC15_REG_OFFSET(GC, 0,
>> +                               mmMC_VM_SYSTEM_APERTURE_DEFAULT_ADDR_MSB),
>> +                               (u32)(value >> 44));
>> +
>> +       /* Disable AGP. */
>> +       WREG32(SOC15_REG_OFFSET(GC, 0, mmMC_VM_AGP_BASE), 0);
>> +       WREG32(SOC15_REG_OFFSET(GC, 0, mmMC_VM_AGP_TOP), 0);
>> +       WREG32(SOC15_REG_OFFSET(GC, 0, mmMC_VM_AGP_BOT), 0xFFFFFFFF);
>> +
>> +       /* GART Enable. */
>> +
>> +       /* Setup TLB control */
>> +       tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmMC_VM_MX_L1_TLB_CNTL));
>> +       tmp = REG_SET_FIELD(tmp, MC_VM_MX_L1_TLB_CNTL, ENABLE_L1_TLB, 1);
>> +       tmp = REG_SET_FIELD(tmp,
>> +                               MC_VM_MX_L1_TLB_CNTL,
>> +                               SYSTEM_ACCESS_MODE,
>> +                               3);
>> +       tmp = REG_SET_FIELD(tmp,
>> +                               MC_VM_MX_L1_TLB_CNTL,
>> +                               ENABLE_ADVANCED_DRIVER_MODEL,
>> +                               1);
>> +       tmp = REG_SET_FIELD(tmp,
>> +                               MC_VM_MX_L1_TLB_CNTL,
>> +                               SYSTEM_APERTURE_UNMAPPED_ACCESS,
>> +                               0);
>> +       tmp = REG_SET_FIELD(tmp,
>> +                               MC_VM_MX_L1_TLB_CNTL,
>> +                               ECO_BITS,
>> +                               0);
>> +       tmp = REG_SET_FIELD(tmp,
>> +                               MC_VM_MX_L1_TLB_CNTL,
>> +                               MTYPE,
>> +                               MTYPE_UC);/* XXX for emulation. */
>> +       tmp = REG_SET_FIELD(tmp,
>> +                               MC_VM_MX_L1_TLB_CNTL,
>> +                               ATC_EN,
>> +                               1);
>> +       WREG32(SOC15_REG_OFFSET(GC, 0, mmMC_VM_MX_L1_TLB_CNTL), tmp);
>> +
>> +       /* Setup L2 cache */
>> +       tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_CNTL));
>> +       tmp = REG_SET_FIELD(tmp, VM_L2_CNTL, ENABLE_L2_CACHE, 1);
>> +       tmp = REG_SET_FIELD(tmp,
>> +                               VM_L2_CNTL,
>> +                               ENABLE_L2_FRAGMENT_PROCESSING,
>> +                               0);
>> +       tmp = REG_SET_FIELD(tmp,
>> +                               VM_L2_CNTL,
>> +                               L2_PDE0_CACHE_TAG_GENERATION_MODE,
>> +                               0);/* XXX for emulation, Refer to closed source code.*/
>> +       tmp = REG_SET_FIELD(tmp, VM_L2_CNTL, PDE_FAULT_CLASSIFICATION, 1);
>> +       tmp = REG_SET_FIELD(tmp,
>> +                               VM_L2_CNTL,
>> +                               CONTEXT1_IDENTITY_ACCESS_MODE,
>> +                               1);
>> +       tmp = REG_SET_FIELD(tmp,
>> +                               VM_L2_CNTL,
>> +                               IDENTITY_MODE_FRAGMENT_SIZE,
>> +                               0);
>> +       WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_CNTL), tmp);
>> +
>> +       tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_CNTL2));
>> +       tmp = REG_SET_FIELD(tmp, VM_L2_CNTL2, INVALIDATE_ALL_L1_TLBS, 1);
>> +       tmp = REG_SET_FIELD(tmp, VM_L2_CNTL2, INVALIDATE_L2_CACHE, 1);
>> +       WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_CNTL2), tmp);
>> +
>> +       tmp = mmVM_L2_CNTL3_DEFAULT;
>> +       WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_CNTL3), tmp);
>> +
>> +       tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_CNTL4));
>> +       tmp = REG_SET_FIELD(tmp,
>> +                           VM_L2_CNTL4,
>> +                           VMC_TAP_PDE_REQUEST_PHYSICAL,
>> +                           0);
>> +       tmp = REG_SET_FIELD(tmp,
>> +                           VM_L2_CNTL4,
>> +                           VMC_TAP_PTE_REQUEST_PHYSICAL,
>> +                           0);
>> +       WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_CNTL4), tmp);
>> +
>> +       /* setup context0 */
>> +       WREG32(SOC15_REG_OFFSET(GC, 0,
>> +                               mmVM_CONTEXT0_PAGE_TABLE_START_ADDR_LO32),
>> +               (u32)(adev->mc.gtt_start >> 12));
>> +       WREG32(SOC15_REG_OFFSET(GC, 0,
>> +                               mmVM_CONTEXT0_PAGE_TABLE_START_ADDR_HI32),
>> +               (u32)(adev->mc.gtt_start >> 44));
>> +
>> +       WREG32(SOC15_REG_OFFSET(GC, 0,
>> +                               mmVM_CONTEXT0_PAGE_TABLE_END_ADDR_LO32),
>> +               (u32)(adev->mc.gtt_end >> 12));
>> +       WREG32(SOC15_REG_OFFSET(GC, 0,
>> +                               mmVM_CONTEXT0_PAGE_TABLE_END_ADDR_HI32),
>> +               (u32)(adev->mc.gtt_end >> 44));
>> +
>> +       BUG_ON(adev->gart.table_addr & (~0x0000FFFFFFFFF000ULL));
>> +       value = adev->gart.table_addr - adev->mc.vram_start
>> +               + adev->vm_manager.vram_base_offset;
>> +       value &= 0x0000FFFFFFFFF000ULL;
>> +       value |= 0x1; /*valid bit*/
>> +
>> +       WREG32(SOC15_REG_OFFSET(GC, 0,
>> +                               mmVM_CONTEXT0_PAGE_TABLE_BASE_ADDR_LO32),
>> +               (u32)value);
>> +       WREG32(SOC15_REG_OFFSET(GC, 0,
>> +                               mmVM_CONTEXT0_PAGE_TABLE_BASE_ADDR_HI32),
>> +               (u32)(value >> 32));
>> +
>> +       WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_PROTECTION_FAULT_DEFAULT_ADDR_LO32),
>> +               (u32)(adev->dummy_page.addr >> 12));
>> +       WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_PROTECTION_FAULT_DEFAULT_ADDR_HI32),
>> +               (u32)(adev->dummy_page.addr >> 44));
>> +
>> +       tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_PROTECTION_FAULT_CNTL2));
>> +       tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL2,
>> +                           ACTIVE_PAGE_MIGRATION_PTE_READ_RETRY, 1);
>> +       WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_PROTECTION_FAULT_CNTL2), tmp);
>> +
>> +       tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmVM_CONTEXT0_CNTL));
>> +       tmp = REG_SET_FIELD(tmp, VM_CONTEXT0_CNTL, ENABLE_CONTEXT, 1);
>> +       tmp = REG_SET_FIELD(tmp, VM_CONTEXT0_CNTL, PAGE_TABLE_DEPTH, 0);
>> +       WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_CONTEXT0_CNTL), tmp);
>> +
>> +       /* Disable identity aperture.*/
>> +       WREG32(SOC15_REG_OFFSET(GC, 0,
>> +               mmVM_L2_CONTEXT1_IDENTITY_APERTURE_LOW_ADDR_LO32), 0xFFFFFFFF);
>> +       WREG32(SOC15_REG_OFFSET(GC, 0,
>> +               mmVM_L2_CONTEXT1_IDENTITY_APERTURE_LOW_ADDR_HI32), 0x0000000F);
>> +
>> +       WREG32(SOC15_REG_OFFSET(GC, 0,
>> +               mmVM_L2_CONTEXT1_IDENTITY_APERTURE_HIGH_ADDR_LO32), 0);
>> +       WREG32(SOC15_REG_OFFSET(GC, 0,
>> +               mmVM_L2_CONTEXT1_IDENTITY_APERTURE_HIGH_ADDR_HI32), 0);
>> +
>> +       WREG32(SOC15_REG_OFFSET(GC, 0,
>> +               mmVM_L2_CONTEXT_IDENTITY_PHYSICAL_OFFSET_LO32), 0);
>> +       WREG32(SOC15_REG_OFFSET(GC, 0,
>> +               mmVM_L2_CONTEXT_IDENTITY_PHYSICAL_OFFSET_HI32), 0);
>> +
>> +       for (i = 0; i <= 14; i++) {
>> +               tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmVM_CONTEXT1_CNTL) + i);
>> +               tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL, ENABLE_CONTEXT, 1);
>> +               tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL, PAGE_TABLE_DEPTH, 1);
>> +               tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
>> +                               RANGE_PROTECTION_FAULT_ENABLE_DEFAULT, 1);
>> +               tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
>> +                               DUMMY_PAGE_PROTECTION_FAULT_ENABLE_DEFAULT, 1);
>> +               tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
>> +                               PDE0_PROTECTION_FAULT_ENABLE_DEFAULT, 1);
>> +               tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
>> +                               VALID_PROTECTION_FAULT_ENABLE_DEFAULT, 1);
>> +               tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
>> +                               READ_PROTECTION_FAULT_ENABLE_DEFAULT, 1);
>> +               tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
>> +                               WRITE_PROTECTION_FAULT_ENABLE_DEFAULT, 1);
>> +               tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
>> +                               EXECUTE_PROTECTION_FAULT_ENABLE_DEFAULT, 1);
>> +               tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
>> +                               PAGE_TABLE_BLOCK_SIZE,
>> +                               amdgpu_vm_block_size - 9);
>> +               WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_CONTEXT1_CNTL) + i, tmp);
>> +               WREG32(SOC15_REG_OFFSET(GC, 0,
>> +                       mmVM_CONTEXT1_PAGE_TABLE_START_ADDR_LO32) + i * 2, 0);
>> +               WREG32(SOC15_REG_OFFSET(GC, 0,
>> +                       mmVM_CONTEXT1_PAGE_TABLE_START_ADDR_HI32) + i * 2, 0);
>> +               WREG32(SOC15_REG_OFFSET(GC, 0,
>> +                       mmVM_CONTEXT1_PAGE_TABLE_END_ADDR_LO32) + i * 2,
>> +                       adev->vm_manager.max_pfn - 1);
>> +               WREG32(SOC15_REG_OFFSET(GC, 0,
>> +                       mmVM_CONTEXT1_PAGE_TABLE_END_ADDR_HI32) + i * 2, 0);
>> +       }
>> +
>> +       return 0;
>> +}
>> +
>> +void gfxhub_v1_0_gart_disable(struct amdgpu_device *adev)
>> +{
>> +       u32 tmp;
>> +       u32 i;
>> +
>> +       /* Disable all tables */
>> +       for (i = 0; i < 16; i++)
>> +               WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_CONTEXT0_CNTL) + i, 0);
>> +
>> +       /* Setup TLB control */
>> +       tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmMC_VM_MX_L1_TLB_CNTL));
>> +       tmp = REG_SET_FIELD(tmp, MC_VM_MX_L1_TLB_CNTL, ENABLE_L1_TLB, 0);
>> +       tmp = REG_SET_FIELD(tmp,
>> +                               MC_VM_MX_L1_TLB_CNTL,
>> +                               ENABLE_ADVANCED_DRIVER_MODEL,
>> +                               0);
>> +       WREG32(SOC15_REG_OFFSET(GC, 0, mmMC_VM_MX_L1_TLB_CNTL), tmp);
>> +
>> +       /* Setup L2 cache */
>> +       tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_CNTL));
>> +       tmp = REG_SET_FIELD(tmp, VM_L2_CNTL, ENABLE_L2_CACHE, 0);
>> +       WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_CNTL), tmp);
>> +       WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_CNTL3), 0);
>> +}
>> +
>> +/**
>> + * gfxhub_v1_0_set_fault_enable_default - update GART/VM fault handling
>> + *
>> + * @adev: amdgpu_device pointer
>> + * @value: true redirects VM faults to the default page
>> + */
>> +void gfxhub_v1_0_set_fault_enable_default(struct amdgpu_device *adev,
>> +                                         bool value)
>> +{
>> +       u32 tmp;
>> +
>> +       tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_PROTECTION_FAULT_CNTL));
>> +       tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
>> +                       RANGE_PROTECTION_FAULT_ENABLE_DEFAULT, value);
>> +       tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
>> +                       PDE0_PROTECTION_FAULT_ENABLE_DEFAULT, value);
>> +       tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
>> +                       PDE1_PROTECTION_FAULT_ENABLE_DEFAULT, value);
>> +       tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
>> +                       PDE2_PROTECTION_FAULT_ENABLE_DEFAULT, value);
>> +       tmp = REG_SET_FIELD(tmp,
>> +                       VM_L2_PROTECTION_FAULT_CNTL,
>> +                       TRANSLATE_FURTHER_PROTECTION_FAULT_ENABLE_DEFAULT,
>> +                       value);
>> +       tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
>> +                       NACK_PROTECTION_FAULT_ENABLE_DEFAULT, value);
>> +       tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
>> +                       DUMMY_PAGE_PROTECTION_FAULT_ENABLE_DEFAULT, value);
>> +       tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
>> +                       VALID_PROTECTION_FAULT_ENABLE_DEFAULT, value);
>> +       tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
>> +                       READ_PROTECTION_FAULT_ENABLE_DEFAULT, value);
>> +       tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
>> +                       WRITE_PROTECTION_FAULT_ENABLE_DEFAULT, value);
>> +       tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
>> +                       EXECUTE_PROTECTION_FAULT_ENABLE_DEFAULT, value);
>> +       WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_PROTECTION_FAULT_CNTL), tmp);
>> +}
>> +
>> +static uint32_t gfxhub_v1_0_get_invalidate_req(unsigned int vm_id)
>> +{
>> +       u32 req = 0;
>> +
>> +       /* invalidate using legacy mode on vm_id */
>> +       req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ,
>> +                           PER_VMID_INVALIDATE_REQ, 1 << vm_id);
>> +       req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ, FLUSH_TYPE, 0);
>> +       req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ, INVALIDATE_L2_PTES, 1);
>> +       req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ, INVALIDATE_L2_PDE0, 1);
>> +       req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ, INVALIDATE_L2_PDE1, 1);
>> +       req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ, INVALIDATE_L2_PDE2, 1);
>> +       req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ, INVALIDATE_L1_PTES, 1);
>> +       req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ,
>> +                           CLEAR_PROTECTION_FAULT_STATUS_ADDR, 0);
>> +
>> +       return req;
>> +}
>> +
>> +static uint32_t gfxhub_v1_0_get_vm_protection_bits(void)
>> +{
>> +       return (VM_CONTEXT1_CNTL__RANGE_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK |
>> +               VM_CONTEXT1_CNTL__DUMMY_PAGE_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK |
>> +               VM_CONTEXT1_CNTL__PDE0_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK |
>> +               VM_CONTEXT1_CNTL__VALID_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK |
>> +               VM_CONTEXT1_CNTL__READ_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK |
>> +               VM_CONTEXT1_CNTL__WRITE_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK |
>> +               VM_CONTEXT1_CNTL__EXECUTE_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK);
>> +}
>> +
>> +static int gfxhub_v1_0_early_init(void *handle)
>> +{
>> +       return 0;
>> +}
>> +
>> +static int gfxhub_v1_0_late_init(void *handle)
>> +{
>> +       return 0;
>> +}
>> +
>> +static int gfxhub_v1_0_sw_init(void *handle)
>> +{
>> +       struct amdgpu_device *adev = (struct amdgpu_device *)handle;
>> +       struct amdgpu_vmhub *hub = &adev->vmhub[AMDGPU_GFXHUB];
>> +
>> +       hub->ctx0_ptb_addr_lo32 =
>> +               SOC15_REG_OFFSET(GC, 0,
>> +                                mmVM_CONTEXT0_PAGE_TABLE_BASE_ADDR_LO32);
>> +       hub->ctx0_ptb_addr_hi32 =
>> +               SOC15_REG_OFFSET(GC, 0,
>> +                                mmVM_CONTEXT0_PAGE_TABLE_BASE_ADDR_HI32);
>> +       hub->vm_inv_eng0_req =
>> +               SOC15_REG_OFFSET(GC, 0, mmVM_INVALIDATE_ENG0_REQ);
>> +       hub->vm_inv_eng0_ack =
>> +               SOC15_REG_OFFSET(GC, 0, mmVM_INVALIDATE_ENG0_ACK);
>> +       hub->vm_context0_cntl =
>> +               SOC15_REG_OFFSET(GC, 0, mmVM_CONTEXT0_CNTL);
>> +       hub->vm_l2_pro_fault_status =
>> +               SOC15_REG_OFFSET(GC, 0, mmVM_L2_PROTECTION_FAULT_STATUS);
>> +       hub->vm_l2_pro_fault_cntl =
>> +               SOC15_REG_OFFSET(GC, 0, mmVM_L2_PROTECTION_FAULT_CNTL);
>> +
>> +       hub->get_invalidate_req = gfxhub_v1_0_get_invalidate_req;
>> +       hub->get_vm_protection_bits = gfxhub_v1_0_get_vm_protection_bits;
>> +
>> +       return 0;
>> +}
>> +
>> +static int gfxhub_v1_0_sw_fini(void *handle)
>> +{
>> +       return 0;
>> +}
>> +
>> +static int gfxhub_v1_0_hw_init(void *handle)
>> +{
>> +       struct amdgpu_device *adev = (struct amdgpu_device *)handle;
>> +       unsigned i;
>> +
>> +       for (i = 0; i < 18; ++i) {
>> +               WREG32(SOC15_REG_OFFSET(GC, 0,
>> +                       mmVM_INVALIDATE_ENG0_ADDR_RANGE_LO32) + 2 * i,
>> +                      0xffffffff);
>> +               WREG32(SOC15_REG_OFFSET(GC, 0,
>> +                       mmVM_INVALIDATE_ENG0_ADDR_RANGE_HI32) + 2 * i,
>> +                      0x1f);
>> +       }
>> +
>> +       return 0;
>> +}
>> +
>> +static int gfxhub_v1_0_hw_fini(void *handle)
>> +{
>> +       return 0;
>> +}
>> +
>> +static int gfxhub_v1_0_suspend(void *handle)
>> +{
>> +       return 0;
>> +}
>> +
>> +static int gfxhub_v1_0_resume(void *handle)
>> +{
>> +       return 0;
>> +}
>> +
>> +static bool gfxhub_v1_0_is_idle(void *handle)
>> +{
>> +       return true;
>> +}
>> +
>> +static int gfxhub_v1_0_wait_for_idle(void *handle)
>> +{
>> +       return 0;
>> +}
>> +
>> +static int gfxhub_v1_0_soft_reset(void *handle)
>> +{
>> +       return 0;
>> +}
>> +
>> +static int gfxhub_v1_0_set_clockgating_state(void *handle,
>> +                                            enum amd_clockgating_state state)
>> +{
>> +       return 0;
>> +}
>> +
>> +static int gfxhub_v1_0_set_powergating_state(void *handle,
>> +                                            enum amd_powergating_state state)
>> +{
>> +       return 0;
>> +}
>> +
>> +const struct amd_ip_funcs gfxhub_v1_0_ip_funcs = {
>> +       .name = "gfxhub_v1_0",
>> +       .early_init = gfxhub_v1_0_early_init,
>> +       .late_init = gfxhub_v1_0_late_init,
>> +       .sw_init = gfxhub_v1_0_sw_init,
>> +       .sw_fini = gfxhub_v1_0_sw_fini,
>> +       .hw_init = gfxhub_v1_0_hw_init,
>> +       .hw_fini = gfxhub_v1_0_hw_fini,
>> +       .suspend = gfxhub_v1_0_suspend,
>> +       .resume = gfxhub_v1_0_resume,
>> +       .is_idle = gfxhub_v1_0_is_idle,
>> +       .wait_for_idle = gfxhub_v1_0_wait_for_idle,
>> +       .soft_reset = gfxhub_v1_0_soft_reset,
>> +       .set_clockgating_state = gfxhub_v1_0_set_clockgating_state,
>> +       .set_powergating_state = gfxhub_v1_0_set_powergating_state,
>> +};
>> +
>> +const struct amdgpu_ip_block_version gfxhub_v1_0_ip_block = {
>> +       .type = AMD_IP_BLOCK_TYPE_GFXHUB,
>> +       .major = 1,
>> +       .minor = 0,
>> +       .rev = 0,
>> +       .funcs = &gfxhub_v1_0_ip_funcs,
>> +};
>> diff --git a/drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.h
>> b/drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.h
>> new file mode 100644
>> index 0000000..5129a8f
>> --- /dev/null
>> +++ b/drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.h
>> @@ -0,0 +1,35 @@
>> +/*
>> + * Copyright 2016 Advanced Micro Devices, Inc.
>> + *
>> + * Permission is hereby granted, free of charge, to any person obtaining a
>> + * copy of this software and associated documentation files (the "Software"),
>> + * to deal in the Software without restriction, including without limitation
>> + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
>> + * and/or sell copies of the Software, and to permit persons to whom the
>> + * Software is furnished to do so, subject to the following conditions:
>> + *
>> + * The above copyright notice and this permission notice shall be included in
>> + * all copies or substantial portions of the Software.
>> + *
>> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
>> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
>> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
>> + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
>> + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
>> + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
>> + * OTHER DEALINGS IN THE SOFTWARE.
>> + *
>> + */
>> +
>> +#ifndef __GFXHUB_V1_0_H__
>> +#define __GFXHUB_V1_0_H__
>> +
>> +int gfxhub_v1_0_gart_enable(struct amdgpu_device *adev);
>> +void gfxhub_v1_0_gart_disable(struct amdgpu_device *adev);
>> +void gfxhub_v1_0_set_fault_enable_default(struct amdgpu_device *adev,
>> +                                         bool value);
>> +
>> +extern const struct amd_ip_funcs gfxhub_v1_0_ip_funcs;
>> +extern const struct amdgpu_ip_block_version gfxhub_v1_0_ip_block;
>> +
>> +#endif
>> diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
>> b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
>> new file mode 100644
>> index 0000000..5cf0fc3
>> --- /dev/null
>> +++ b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
>> @@ -0,0 +1,826 @@
>> +/*
>> + * Copyright 2016 Advanced Micro Devices, Inc.
>> + *
>> + * Permission is hereby granted, free of charge, to any person obtaining a
>> + * copy of this software and associated documentation files (the "Software"),
>> + * to deal in the Software without restriction, including without limitation
>> + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
>> + * and/or sell copies of the Software, and to permit persons to whom the
>> + * Software is furnished to do so, subject to the following conditions:
>> + *
>> + * The above copyright notice and this permission notice shall be included in
>> + * all copies or substantial portions of the Software.
>> + *
>> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
>> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
>> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
>> + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
>> + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
>> + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
>> + * OTHER DEALINGS IN THE SOFTWARE.
>> + *
>> + */
>> +#include <linux/firmware.h>
>> +#include "amdgpu.h"
>> +#include "gmc_v9_0.h"
>> +
>> +#include "vega10/soc15ip.h"
>> +#include "vega10/HDP/hdp_4_0_offset.h"
>> +#include "vega10/HDP/hdp_4_0_sh_mask.h"
>> +#include "vega10/GC/gc_9_0_sh_mask.h"
>> +#include "vega10/vega10_enum.h"
>> +
>> +#include "soc15_common.h"
>> +
>> +#include "nbio_v6_1.h"
>> +#include "gfxhub_v1_0.h"
>> +#include "mmhub_v1_0.h"
>> +
>> +#define mmDF_CS_AON0_DramBaseAddress0                           0x0044
>> +#define mmDF_CS_AON0_DramBaseAddress0_BASE_IDX                  0
>> +//DF_CS_AON0_DramBaseAddress0
>> +#define DF_CS_AON0_DramBaseAddress0__AddrRngVal__SHIFT          0x0
>> +#define DF_CS_AON0_DramBaseAddress0__LgcyMmioHoleEn__SHIFT      0x1
>> +#define DF_CS_AON0_DramBaseAddress0__IntLvNumChan__SHIFT        0x4
>> +#define DF_CS_AON0_DramBaseAddress0__IntLvAddrSel__SHIFT        0x8
>> +#define DF_CS_AON0_DramBaseAddress0__DramBaseAddr__SHIFT        0xc
>> +#define DF_CS_AON0_DramBaseAddress0__AddrRngVal_MASK            0x00000001L
>> +#define DF_CS_AON0_DramBaseAddress0__LgcyMmioHoleEn_MASK        0x00000002L
>> +#define DF_CS_AON0_DramBaseAddress0__IntLvNumChan_MASK          0x000000F0L
>> +#define DF_CS_AON0_DramBaseAddress0__IntLvAddrSel_MASK          0x00000700L
>> +#define DF_CS_AON0_DramBaseAddress0__DramBaseAddr_MASK          0xFFFFF000L
>> +
>> +/* XXX Move this macro to a VEGA10 header file, which is like vid.h for VI. */
>> +#define AMDGPU_NUM_OF_VMIDS                    8
>> +
>> +static const u32 golden_settings_vega10_hdp[] =
>> +{
>> +       0xf64, 0x0fffffff, 0x00000000,
>> +       0xf65, 0x0fffffff, 0x00000000,
>> +       0xf66, 0x0fffffff, 0x00000000,
>> +       0xf67, 0x0fffffff, 0x00000000,
>> +       0xf68, 0x0fffffff, 0x00000000,
>> +       0xf6a, 0x0fffffff, 0x00000000,
>> +       0xf6b, 0x0fffffff, 0x00000000,
>> +       0xf6c, 0x0fffffff, 0x00000000,
>> +       0xf6d, 0x0fffffff, 0x00000000,
>> +       0xf6e, 0x0fffffff, 0x00000000,
>> +};
>> +
>> +static int gmc_v9_0_vm_fault_interrupt_state(struct amdgpu_device *adev,
>> +                                       struct amdgpu_irq_src *src,
>> +                                       unsigned type,
>> +                                       enum amdgpu_interrupt_state state)
>> +{
>> +       struct amdgpu_vmhub *hub;
>> +       u32 tmp, reg, bits, i;
>> +
>> +       switch (state) {
>> +       case AMDGPU_IRQ_STATE_DISABLE:
>> +               /* MM HUB */
>> +               hub = &adev->vmhub[AMDGPU_MMHUB];
>> +               bits = hub->get_vm_protection_bits();
>> +               for (i = 0; i < 16; i++) {
>> +                       reg = hub->vm_context0_cntl + i;
>> +                       tmp = RREG32(reg);
>> +                       tmp &= ~bits;
>> +                       WREG32(reg, tmp);
>> +               }
>> +
>> +               /* GFX HUB */
>> +               hub = &adev->vmhub[AMDGPU_GFXHUB];
>> +               bits = hub->get_vm_protection_bits();
>> +               for (i = 0; i < 16; i++) {
>> +                       reg = hub->vm_context0_cntl + i;
>> +                       tmp = RREG32(reg);
>> +                       tmp &= ~bits;
>> +                       WREG32(reg, tmp);
>> +               }
>> +               break;
>> +       case AMDGPU_IRQ_STATE_ENABLE:
>> +               /* MM HUB */
>> +               hub = &adev->vmhub[AMDGPU_MMHUB];
>> +               bits = hub->get_vm_protection_bits();
>> +               for (i = 0; i < 16; i++) {
>> +                       reg = hub->vm_context0_cntl + i;
>> +                       tmp = RREG32(reg);
>> +                       tmp |= bits;
>> +                       WREG32(reg, tmp);
>> +               }
>> +
>> +               /* GFX HUB */
>> +               hub = &adev->vmhub[AMDGPU_GFXHUB];
>> +               bits = hub->get_vm_protection_bits();
>> +               for (i = 0; i < 16; i++) {
>> +                       reg = hub->vm_context0_cntl + i;
>> +                       tmp = RREG32(reg);
>> +                       tmp |= bits;
>> +                       WREG32(reg, tmp);
>> +               }
>> +               break;
>> +       default:
>> +               break;
>> +       }
>> +
>> +       return 0;
>> +}
>> +
>> +static int gmc_v9_0_process_interrupt(struct amdgpu_device *adev,
>> +                               struct amdgpu_irq_src *source,
>> +                               struct amdgpu_iv_entry *entry)
>> +{
>> +       struct amdgpu_vmhub *gfxhub = &adev->vmhub[AMDGPU_GFXHUB];
>> +       struct amdgpu_vmhub *mmhub = &adev->vmhub[AMDGPU_MMHUB];
>> +       uint32_t status;
>> +       u64 addr;
>> +
>> +       addr = (u64)entry->src_data[0] << 12;
>> +       addr |= ((u64)entry->src_data[1] & 0xf) << 44;
>> +
>> +       if (entry->vm_id_src) {
>> +               status = RREG32(mmhub->vm_l2_pro_fault_status);
>> +               WREG32_P(mmhub->vm_l2_pro_fault_cntl, 1, ~1);
>> +       } else {
>> +               status = RREG32(gfxhub->vm_l2_pro_fault_status);
>> +               WREG32_P(gfxhub->vm_l2_pro_fault_cntl, 1, ~1);
>> +       }
>> +
>> +       DRM_ERROR("[%s]VMC page fault (src_id:%u ring:%u vm_id:%u pas_id:%u) "
>> +                 "at page 0x%016llx from %d\n"
>> +                 "VM_L2_PROTECTION_FAULT_STATUS:0x%08X\n",
>> +                 entry->vm_id_src ? "mmhub" : "gfxhub",
>> +                 entry->src_id, entry->ring_id, entry->vm_id, entry->pas_id,
>> +                 addr, entry->client_id, status);
>> +
>> +       return 0;
>> +}
>> +
>> +static const struct amdgpu_irq_src_funcs gmc_v9_0_irq_funcs = {
>> +       .set = gmc_v9_0_vm_fault_interrupt_state,
>> +       .process = gmc_v9_0_process_interrupt,
>> +};
>> +
>> +static void gmc_v9_0_set_irq_funcs(struct amdgpu_device *adev)
>> +{
>> +       adev->mc.vm_fault.num_types = 1;
>> +       adev->mc.vm_fault.funcs = &gmc_v9_0_irq_funcs;
>> +}
>> +
>> +/*
>> + * GART
>> + * VMID 0 maps the physical GPU address space used by the kernel.
>> + * VMIDs 1-15 are used for userspace clients and are handled
>> + * by the amdgpu vm/hsa code.
>> + */
>> +
>> +/**
>> + * gmc_v9_0_gart_flush_gpu_tlb - gart tlb flush callback
>> + *
>> + * @adev: amdgpu_device pointer
>> + * @vmid: vm instance to flush
>> + *
>> + * Flush the TLB for the requested page table.
>> + */
>> +static void gmc_v9_0_gart_flush_gpu_tlb(struct amdgpu_device *adev,
>> +                                       uint32_t vmid)
>> +{
>> +       /* Use register 17 for GART */
>> +       const unsigned eng = 17;
>> +       unsigned i, j;
>> +
>> +       /* flush hdp cache */
>> +       nbio_v6_1_hdp_flush(adev);
>> +
>> +       spin_lock(&adev->mc.invalidate_lock);
>> +
>> +       for (i = 0; i < AMDGPU_MAX_VMHUBS; ++i) {
>> +               struct amdgpu_vmhub *hub = &adev->vmhub[i];
>> +               u32 tmp = hub->get_invalidate_req(vmid);
>> +
>> +               WREG32(hub->vm_inv_eng0_req + eng, tmp);
>> +
>> +               /* Busy-wait for ACK. */
>> +               for (j = 0; j < 100; j++) {
>> +                       tmp = RREG32(hub->vm_inv_eng0_ack + eng);
>> +                       tmp &= 1 << vmid;
>> +                       if (tmp)
>> +                               break;
>> +                       cpu_relax();
>> +               }
>> +               if (j < 100)
>> +                       continue;
>> +
>> +               /* Wait for ACK with a delay. */
>> +               for (j = 0; j < adev->usec_timeout; j++) {
>> +                       tmp = RREG32(hub->vm_inv_eng0_ack + eng);
>> +                       tmp &= 1 << vmid;
>> +                       if (tmp)
>> +                               break;
>> +                       udelay(1);
>> +               }
>> +               if (j < adev->usec_timeout)
>> +                       continue;
>> +
>> +               DRM_ERROR("Timeout waiting for VM flush ACK!\n");
>> +       }
>> +
>> +       spin_unlock(&adev->mc.invalidate_lock);
>> +}
>> +
>> +/**
>> + * gmc_v9_0_gart_set_pte_pde - update the page tables using MMIO
>> + *
>> + * @adev: amdgpu_device pointer
>> + * @cpu_pt_addr: cpu address of the page table
>> + * @gpu_page_idx: entry in the page table to update
>> + * @addr: dst addr to write into pte/pde
>> + * @flags: access flags
>> + *
>> + * Update the page tables using the CPU.
>> + */
>> +static int gmc_v9_0_gart_set_pte_pde(struct amdgpu_device *adev,
>> +                                       void *cpu_pt_addr,
>> +                                       uint32_t gpu_page_idx,
>> +                                       uint64_t addr,
>> +                                       uint64_t flags)
>> +{
>> +       void __iomem *ptr = (void *)cpu_pt_addr;
>> +       uint64_t value;
>> +
>> +       /*
>> +        * PTE format on VEGA 10:
>> +        * 63:59 reserved
>> +        * 58:57 mtype
>> +        * 56 F
>> +        * 55 L
>> +        * 54 P
>> +        * 53 SW
>> +        * 52 T
>> +        * 50:48 reserved
>> +        * 47:12 4k physical page base address
>> +        * 11:7 fragment
>> +        * 6 write
>> +        * 5 read
>> +        * 4 exe
>> +        * 3 Z
>> +        * 2 snooped
>> +        * 1 system
>> +        * 0 valid
>> +        *
>> +        * PDE format on VEGA 10:
>> +        * 63:59 block fragment size
>> +        * 58:55 reserved
>> +        * 54 P
>> +        * 53:48 reserved
>> +        * 47:6 physical base address of PD or PTE
>> +        * 5:3 reserved
>> +        * 2 C
>> +        * 1 system
>> +        * 0 valid
>> +        */
>> +
>> +       /*
>> +        * The following is for PTE only. GART does not have PDEs.
>> +        */
>> +       value = addr & 0x0000FFFFFFFFF000ULL;
>> +       value |= flags;
>> +       writeq(value, ptr + (gpu_page_idx * 8));
>> +       return 0;
>> +}
>> +
>> +static uint64_t gmc_v9_0_get_vm_pte_flags(struct amdgpu_device *adev,
>> +                                          uint32_t flags)
>> +{
>> +       uint64_t pte_flag = 0;
>> +
>> +       if (flags & AMDGPU_VM_PAGE_EXECUTABLE)
>> +               pte_flag |= AMDGPU_PTE_EXECUTABLE;
>> +       if (flags & AMDGPU_VM_PAGE_READABLE)
>> +               pte_flag |= AMDGPU_PTE_READABLE;
>> +       if (flags & AMDGPU_VM_PAGE_WRITEABLE)
>> +               pte_flag |= AMDGPU_PTE_WRITEABLE;
>> +
>> +       switch (flags & AMDGPU_VM_MTYPE_MASK) {
>> +       case AMDGPU_VM_MTYPE_DEFAULT:
>> +               pte_flag |= AMDGPU_PTE_MTYPE(MTYPE_NC);
>> +               break;
>> +       case AMDGPU_VM_MTYPE_NC:
>> +               pte_flag |= AMDGPU_PTE_MTYPE(MTYPE_NC);
>> +               break;
>> +       case AMDGPU_VM_MTYPE_WC:
>> +               pte_flag |= AMDGPU_PTE_MTYPE(MTYPE_WC);
>> +               break;
>> +       case AMDGPU_VM_MTYPE_CC:
>> +               pte_flag |= AMDGPU_PTE_MTYPE(MTYPE_CC);
>> +               break;
>> +       case AMDGPU_VM_MTYPE_UC:
>> +               pte_flag |= AMDGPU_PTE_MTYPE(MTYPE_UC);
>> +               break;
>> +       default:
>> +               pte_flag |= AMDGPU_PTE_MTYPE(MTYPE_NC);
>> +               break;
>> +       }
>> +
>> +       if (flags & AMDGPU_VM_PAGE_PRT)
>> +               pte_flag |= AMDGPU_PTE_PRT;
>> +
>> +       return pte_flag;
>> +}
>> +
>> +static const struct amdgpu_gart_funcs gmc_v9_0_gart_funcs = {
>> +       .flush_gpu_tlb = gmc_v9_0_gart_flush_gpu_tlb,
>> +       .set_pte_pde = gmc_v9_0_gart_set_pte_pde,
>> +       .get_vm_pte_flags = gmc_v9_0_get_vm_pte_flags
>> +};
>> +
>> +static void gmc_v9_0_set_gart_funcs(struct amdgpu_device *adev)
>> +{
>> +       if (adev->gart.gart_funcs == NULL)
>> +               adev->gart.gart_funcs = &gmc_v9_0_gart_funcs;
>> +}
>> +
>> +static u64 gmc_v9_0_adjust_mc_addr(struct amdgpu_device *adev, u64 mc_addr)
>> +{
>> +       return adev->vm_manager.vram_base_offset + mc_addr - adev->mc.vram_start;
>> +}
>> +
>> +static const struct amdgpu_mc_funcs gmc_v9_0_mc_funcs = {
>> +       .adjust_mc_addr = gmc_v9_0_adjust_mc_addr,
>> +};
>> +
>> +static void gmc_v9_0_set_mc_funcs(struct amdgpu_device *adev)
>> +{
>> +       adev->mc.mc_funcs = &gmc_v9_0_mc_funcs;
>> +}
>> +
>> +static int gmc_v9_0_early_init(void *handle)
>> +{
>> +       struct amdgpu_device *adev = (struct amdgpu_device *)handle;
>> +
>> +       gmc_v9_0_set_gart_funcs(adev);
>> +       gmc_v9_0_set_mc_funcs(adev);
>> +       gmc_v9_0_set_irq_funcs(adev);
>> +
>> +       return 0;
>> +}
>> +
>> +static int gmc_v9_0_late_init(void *handle)
>> +{
>> +       struct amdgpu_device *adev = (struct amdgpu_device *)handle;
>> +
>> +       return amdgpu_irq_get(adev, &adev->mc.vm_fault, 0);
>> +}
>> +
>> +static void gmc_v9_0_vram_gtt_location(struct amdgpu_device *adev,
>> +                                       struct amdgpu_mc *mc)
>> +{
>> +       u64 base = mmhub_v1_0_get_fb_location(adev);
>> +
>> +       amdgpu_vram_location(adev, &adev->mc, base);
>> +       adev->mc.gtt_base_align = 0;
>> +       amdgpu_gtt_location(adev, mc);
>> +}
>> +
>> +/**
>> + * gmc_v9_0_mc_init - initialize the memory controller driver params
>> + *
>> + * @adev: amdgpu_device pointer
>> + *
>> + * Look up the amount of vram, vram width, and decide how to place
>> + * vram and gart within the GPU's physical address space.
>> + * Returns 0 for success.
>> + */
>> +static int gmc_v9_0_mc_init(struct amdgpu_device *adev)
>> +{
>> +       u32 tmp;
>> +       int chansize, numchan;
>> +
>> +       /* hbm memory channel size */
>> +       chansize = 128;
>> +
>> +       tmp = RREG32(SOC15_REG_OFFSET(DF, 0, mmDF_CS_AON0_DramBaseAddress0));
>> +       tmp &= DF_CS_AON0_DramBaseAddress0__IntLvNumChan_MASK;
>> +       tmp >>= DF_CS_AON0_DramBaseAddress0__IntLvNumChan__SHIFT;
>> +       switch (tmp) {
>> +       case 0:
>> +       default:
>> +               numchan = 1;
>> +               break;
>> +       case 1:
>> +               numchan = 2;
>> +               break;
>> +       case 2:
>> +               numchan = 0;
>> +               break;
>> +       case 3:
>> +               numchan = 4;
>> +               break;
>> +       case 4:
>> +               numchan = 0;
>> +               break;
>> +       case 5:
>> +               numchan = 8;
>> +               break;
>> +       case 6:
>> +               numchan = 0;
>> +               break;
>> +       case 7:
>> +               numchan = 16;
>> +               break;
>> +       case 8:
>> +               numchan = 2;
>> +               break;
>> +       }
>> +       adev->mc.vram_width = numchan * chansize;
>> +
>> +       /* Could the aperture size report 0? */
>> +       adev->mc.aper_base = pci_resource_start(adev->pdev, 0);
>> +       adev->mc.aper_size = pci_resource_len(adev->pdev, 0);
>> +       /* size in MB */
>> +       adev->mc.mc_vram_size =
>> +               nbio_v6_1_get_memsize(adev) * 1024ULL * 1024ULL;
>> +       adev->mc.real_vram_size = adev->mc.mc_vram_size;
>> +       adev->mc.visible_vram_size = adev->mc.aper_size;
>> +
>> +       /* In case the PCI BAR is larger than the actual amount of vram */
>> +       if (adev->mc.visible_vram_size > adev->mc.real_vram_size)
>> +               adev->mc.visible_vram_size = adev->mc.real_vram_size;
>> +
>> +       /* Unless the user has overridden it, set the GART size to
>> +        * 1024 MB or the VRAM size, whichever is larger.
>> +        */
>> +       if (amdgpu_gart_size == -1)
>> +               adev->mc.gtt_size = max((1024ULL << 20), adev->mc.mc_vram_size);
>> +       else
>> +               adev->mc.gtt_size = (uint64_t)amdgpu_gart_size << 20;
>> +
>> +       gmc_v9_0_vram_gtt_location(adev, &adev->mc);
>> +
>> +       return 0;
>> +}
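The switch above decodes the IntLvNumChan field of DF_CS_AON0_DramBaseAddress0 into a physical HBM channel count, with the reserved encodings (2, 4, 6) decoding to 0 channels, and then multiplies by the 128-bit channel size. A standalone sketch of that decode (function names are mine, the mapping table is taken from the switch):

```c
#include <stdint.h>

/* Model of the IntLvNumChan decode above: the field selects the DF
 * channel-interleave setting, which maps to an HBM channel count.
 * Encodings 2, 4 and 6 are reserved and decode to 0 channels. */
unsigned int df_numchan(uint32_t intlv_num_chan)
{
	static const unsigned int map[] = { 1, 2, 0, 4, 0, 8, 0, 16, 2 };

	if (intlv_num_chan >= sizeof(map) / sizeof(map[0]))
		return 1; /* matches the default case in the switch */
	return map[intlv_num_chan];
}

/* vram_width = numchan * chansize, with chansize hard-coded to 128
 * bits for HBM as in the patch. */
unsigned int vega10_vram_width(uint32_t intlv_num_chan)
{
	return df_numchan(intlv_num_chan) * 128;
}
```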
>> +
>> +static int gmc_v9_0_gart_init(struct amdgpu_device *adev)
>> +{
>> +       int r;
>> +
>> +       if (adev->gart.robj) {
>> +               WARN(1, "VEGA10 PCIE GART already initialized\n");
>> +               return 0;
>> +       }
>> +       /* Initialize common gart structure */
>> +       r = amdgpu_gart_init(adev);
>> +       if (r)
>> +               return r;
>> +       adev->gart.table_size = adev->gart.num_gpu_pages * 8;
>> +       adev->gart.gart_pte_flags = AMDGPU_PTE_MTYPE(MTYPE_UC) |
>> +                                AMDGPU_PTE_EXECUTABLE;
>> +       return amdgpu_gart_table_vram_alloc(adev);
>> +}
>> +
>> +/*
>> + * vm
>> + * VMID 0 is for the physical GPU addresses used by the kernel.
>> + * VMIDs 1-15 are used for userspace clients and are handled
>> + * by the amdgpu vm/hsa code.
>> + */
>> +/**
>> + * gmc_v9_0_vm_init - vm init callback
>> + *
>> + * @adev: amdgpu_device pointer
>> + *
>> + * Inits vega10 specific vm parameters (number of VMs, base of vram for
>> + * VMIDs 1-15) (vega10).
>> + * Returns 0 for success.
>> + */
>> +static int gmc_v9_0_vm_init(struct amdgpu_device *adev)
>> +{
>> +       /*
>> +        * number of VMs
>> +        * VMID 0 is reserved for System
>> +        * amdgpu graphics/compute will use VMIDs 1-7
>> +        * amdkfd will use VMIDs 8-15
>> +        */
>> +       adev->vm_manager.num_ids = AMDGPU_NUM_OF_VMIDS;
>> +       amdgpu_vm_manager_init(adev);
>> +
>> +       /* base offset of vram pages */
>> +       /*XXX This value is not zero for APU*/
>> +       adev->vm_manager.vram_base_offset = 0;
>> +
>> +       return 0;
>> +}
>> +
>> +/**
>> + * gmc_v9_0_vm_fini - vm fini callback
>> + *
>> + * @adev: amdgpu_device pointer
>> + *
>> + * Tear down any asic specific VM setup.
>> + */
>> +static void gmc_v9_0_vm_fini(struct amdgpu_device *adev)
>> +{
>> +       return;
>> +}
>> +
>> +static int gmc_v9_0_sw_init(void *handle)
>> +{
>> +       int r;
>> +       int dma_bits;
>> +       struct amdgpu_device *adev = (struct amdgpu_device *)handle;
>> +
>> +       spin_lock_init(&adev->mc.invalidate_lock);
>> +
>> +       if (adev->flags & AMD_IS_APU) {
>> +               adev->mc.vram_type = AMDGPU_VRAM_TYPE_UNKNOWN;
>> +       } else {
>> +               /* XXX Don't know how to get VRAM type yet. */
>> +               adev->mc.vram_type = AMDGPU_VRAM_TYPE_HBM;
>> +       }
>> +
>> +       /* This interrupt is for VMC page faults. */
>> +       r = amdgpu_irq_add_id(adev, AMDGPU_IH_CLIENTID_VMC, 0,
>> +                               &adev->mc.vm_fault);
>> +
>> +       if (r)
>> +               return r;
>> +
>> +       /* Adjust VM size here.
>> +        * Currently default to 64GB ((16 << 20) 4k pages).
>> +        * Max GPUVM size is 48 bits.
>> +        */
>> +       adev->vm_manager.max_pfn = amdgpu_vm_size << 18;
>> +
>> +       /* Set the internal MC address mask
>> +        * This is the max address of the GPU's
>> +        * internal address space.
>> +        */
>> +       adev->mc.mc_mask = 0xffffffffffffULL; /* 48 bit MC */
>> +
>> +       /* set DMA mask + need_dma32 flags.
>> +        * PCIE - can handle 44-bits.
>> +        * IGP - can handle 44-bits
>> +        * PCI - dma32 for legacy pci gart, 44 bits on vega10
>> +        */
>> +       adev->need_dma32 = false;
>> +       dma_bits = adev->need_dma32 ? 32 : 44;
>> +       r = pci_set_dma_mask(adev->pdev, DMA_BIT_MASK(dma_bits));
>> +       if (r) {
>> +               adev->need_dma32 = true;
>> +               dma_bits = 32;
>> +               printk(KERN_WARNING "amdgpu: No suitable DMA available.\n");
>> +       }
>> +       r = pci_set_consistent_dma_mask(adev->pdev, DMA_BIT_MASK(dma_bits));
>> +       if (r) {
>> +               pci_set_consistent_dma_mask(adev->pdev, DMA_BIT_MASK(32));
>> +               printk(KERN_WARNING "amdgpu: No coherent DMA available.\n");
>> +       }
>> +
>> +       r = gmc_v9_0_mc_init(adev);
>> +       if (r)
>> +               return r;
>> +
>> +       /* Memory manager */
>> +       r = amdgpu_bo_init(adev);
>> +       if (r)
>> +               return r;
>> +
>> +       r = gmc_v9_0_gart_init(adev);
>> +       if (r)
>> +               return r;
>> +
>> +       if (!adev->vm_manager.enabled) {
>> +               r = gmc_v9_0_vm_init(adev);
>> +               if (r) {
>> +                       dev_err(adev->dev, "vm manager initialization failed (%d).\n", r);
>> +                       return r;
>> +               }
>> +               adev->vm_manager.enabled = true;
>> +       }
>> +       return r;
>> +}
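Two bits of arithmetic in the sw_init above are easy to misread: `max_pfn` is `amdgpu_vm_size` (in GB) shifted left by 18, since one GiB holds 2^18 pages of 4 KiB; and the DMA masks come from the kernel's DMA_BIT_MASK(n). A minimal standalone sketch of both computations (helper names are mine):

```c
#include <stdint.h>

/* 1 GiB = 2^30 bytes = 2^18 pages of 4 KiB, hence the << 18 above. */
uint64_t vm_max_pfn(uint64_t vm_size_gb)
{
	return vm_size_gb << 18;
}

/* Equivalent of the kernel's DMA_BIT_MASK(n) for n < 64:
 * a mask with the lowest n bits set. */
uint64_t dma_bit_mask(unsigned int bits)
{
	return (1ULL << bits) - 1;
}
```

With the 64 GB default, `vm_max_pfn(64)` gives the `(16 << 20)` 4K pages mentioned in the comment.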
>> +
>> +/**
>> + * gmc_v9_0_gart_fini - gart fini callback
>> + *
>> + * @adev: amdgpu_device pointer
>> + *
>> + * Tears down the driver GART/VM setup (vega10).
>> + */
>> +static void gmc_v9_0_gart_fini(struct amdgpu_device *adev)
>> +{
>> +       amdgpu_gart_table_vram_free(adev);
>> +       amdgpu_gart_fini(adev);
>> +}
>> +
>> +static int gmc_v9_0_sw_fini(void *handle)
>> +{
>> +       struct amdgpu_device *adev = (struct amdgpu_device *)handle;
>> +
>> +       if (adev->vm_manager.enabled) {
>> +               amdgpu_vm_manager_fini(adev);
>> +               gmc_v9_0_vm_fini(adev);
>> +               adev->vm_manager.enabled = false;
>> +       }
>> +       gmc_v9_0_gart_fini(adev);
>> +       amdgpu_gem_force_release(adev);
>> +       amdgpu_bo_fini(adev);
>> +
>> +       return 0;
>> +}
>> +
>> +static void gmc_v9_0_init_golden_registers(struct amdgpu_device *adev)
>> +{
>> +       switch (adev->asic_type) {
>> +       case CHIP_VEGA10:
>> +               break;
>> +       default:
>> +               break;
>> +       }
>> +}
>> +
>> +/**
>> + * gmc_v9_0_gart_enable - gart enable
>> + *
>> + * @adev: amdgpu_device pointer
>> + */
>> +static int gmc_v9_0_gart_enable(struct amdgpu_device *adev)
>> +{
>> +       int r;
>> +       bool value;
>> +       u32 tmp;
>> +
>> +       amdgpu_program_register_sequence(adev,
>> +               golden_settings_vega10_hdp,
>> +               (const u32)ARRAY_SIZE(golden_settings_vega10_hdp));
>> +
>> +       if (adev->gart.robj == NULL) {
>> +               dev_err(adev->dev, "No VRAM object for PCIE GART.\n");
>> +               return -EINVAL;
>> +       }
>> +       r = amdgpu_gart_table_vram_pin(adev);
>> +       if (r)
>> +               return r;
>> +
>> +       /* After HDP is initialized, flush HDP.*/
>> +       nbio_v6_1_hdp_flush(adev);
>> +
>> +       r = gfxhub_v1_0_gart_enable(adev);
>> +       if (r)
>> +               return r;
>> +
>> +       r = mmhub_v1_0_gart_enable(adev);
>> +       if (r)
>> +               return r;
>> +
>> +       tmp = RREG32(SOC15_REG_OFFSET(HDP, 0, mmHDP_MISC_CNTL));
>> +       tmp |= HDP_MISC_CNTL__FLUSH_INVALIDATE_CACHE_MASK;
>> +       WREG32(SOC15_REG_OFFSET(HDP, 0, mmHDP_MISC_CNTL), tmp);
>> +
>> +       tmp = RREG32(SOC15_REG_OFFSET(HDP, 0, mmHDP_HOST_PATH_CNTL));
>> +       WREG32(SOC15_REG_OFFSET(HDP, 0, mmHDP_HOST_PATH_CNTL), tmp);
>> +
>> +
>> +       if (amdgpu_vm_fault_stop == AMDGPU_VM_FAULT_STOP_ALWAYS)
>> +               value = false;
>> +       else
>> +               value = true;
>> +
>> +       gfxhub_v1_0_set_fault_enable_default(adev, value);
>> +       mmhub_v1_0_set_fault_enable_default(adev, value);
>> +
>> +       gmc_v9_0_gart_flush_gpu_tlb(adev, 0);
>> +
>> +       DRM_INFO("PCIE GART of %uM enabled (table at 0x%016llX).\n",
>> +                (unsigned)(adev->mc.gtt_size >> 20),
>> +                (unsigned long long)adev->gart.table_addr);
>> +       adev->gart.ready = true;
>> +       return 0;
>> +}
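The HDP_MISC_CNTL update above is the usual read-modify-write register pattern, and the TLB/L2 setup elsewhere in the patch uses REG_SET_FIELD, which clears a field's bits and ORs in the shifted value. A standalone model of that macro's semantics (the mask and shift here are invented for illustration, not taken from any real Vega10 register):

```c
#include <stdint.h>

/* Model of REG_SET_FIELD(): clear the field's bits in the register
 * value read back with RREG32, then OR in the new value shifted into
 * place before writing it back with WREG32. */
uint32_t reg_set_field(uint32_t reg, uint32_t mask,
		       unsigned int shift, uint32_t val)
{
	return (reg & ~mask) | ((val << shift) & mask);
}
```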
>> +
>> +static int gmc_v9_0_hw_init(void *handle)
>> +{
>> +       int r;
>> +       struct amdgpu_device *adev = (struct amdgpu_device *)handle;
>> +
>> +       /* The sequence of these two function calls matters.*/
>> +       gmc_v9_0_init_golden_registers(adev);
>> +
>> +       r = gmc_v9_0_gart_enable(adev);
>> +
>> +       return r;
>> +}
>> +
>> +/**
>> + * gmc_v9_0_gart_disable - gart disable
>> + *
>> + * @adev: amdgpu_device pointer
>> + *
>> + * This disables all VM page tables.
>> + */
>> +static void gmc_v9_0_gart_disable(struct amdgpu_device *adev)
>> +{
>> +       gfxhub_v1_0_gart_disable(adev);
>> +       mmhub_v1_0_gart_disable(adev);
>> +       amdgpu_gart_table_vram_unpin(adev);
>> +}
>> +
>> +static int gmc_v9_0_hw_fini(void *handle)
>> +{
>> +       struct amdgpu_device *adev = (struct amdgpu_device *)handle;
>> +
>> +       amdgpu_irq_put(adev, &adev->mc.vm_fault, 0);
>> +       gmc_v9_0_gart_disable(adev);
>> +
>> +       return 0;
>> +}
>> +
>> +static int gmc_v9_0_suspend(void *handle)
>> +{
>> +       struct amdgpu_device *adev = (struct amdgpu_device *)handle;
>> +
>> +       if (adev->vm_manager.enabled) {
>> +               gmc_v9_0_vm_fini(adev);
>> +               adev->vm_manager.enabled = false;
>> +       }
>> +       gmc_v9_0_hw_fini(adev);
>> +
>> +       return 0;
>> +}
>> +
>> +static int gmc_v9_0_resume(void *handle)
>> +{
>> +       int r;
>> +       struct amdgpu_device *adev = (struct amdgpu_device *)handle;
>> +
>> +       r = gmc_v9_0_hw_init(adev);
>> +       if (r)
>> +               return r;
>> +
>> +       if (!adev->vm_manager.enabled) {
>> +               r = gmc_v9_0_vm_init(adev);
>> +               if (r) {
>> +                       dev_err(adev->dev,
>> +                               "vm manager initialization failed (%d).\n", r);
>> +                       return r;
>> +               }
>> +               adev->vm_manager.enabled = true;
>> +       }
>> +
>> +       return r;
>> +}
>> +
>> +static bool gmc_v9_0_is_idle(void *handle)
>> +{
>> +       /* MC is always ready in GMC v9.*/
>> +       return true;
>> +}
>> +
>> +static int gmc_v9_0_wait_for_idle(void *handle)
>> +{
>> +       /* There is no need to wait for MC idle in GMC v9.*/
>> +       return 0;
>> +}
>> +
>> +static int gmc_v9_0_soft_reset(void *handle)
>> +{
>> +       /* XXX for emulation.*/
>> +       return 0;
>> +}
>> +
>> +static int gmc_v9_0_set_clockgating_state(void *handle,
>> +                                       enum amd_clockgating_state state)
>> +{
>> +       return 0;
>> +}
>> +
>> +static int gmc_v9_0_set_powergating_state(void *handle,
>> +                                       enum amd_powergating_state state)
>> +{
>> +       return 0;
>> +}
>> +
>> +const struct amd_ip_funcs gmc_v9_0_ip_funcs = {
>> +       .name = "gmc_v9_0",
>> +       .early_init = gmc_v9_0_early_init,
>> +       .late_init = gmc_v9_0_late_init,
>> +       .sw_init = gmc_v9_0_sw_init,
>> +       .sw_fini = gmc_v9_0_sw_fini,
>> +       .hw_init = gmc_v9_0_hw_init,
>> +       .hw_fini = gmc_v9_0_hw_fini,
>> +       .suspend = gmc_v9_0_suspend,
>> +       .resume = gmc_v9_0_resume,
>> +       .is_idle = gmc_v9_0_is_idle,
>> +       .wait_for_idle = gmc_v9_0_wait_for_idle,
>> +       .soft_reset = gmc_v9_0_soft_reset,
>> +       .set_clockgating_state = gmc_v9_0_set_clockgating_state,
>> +       .set_powergating_state = gmc_v9_0_set_powergating_state,
>> +};
>> +
>> +const struct amdgpu_ip_block_version gmc_v9_0_ip_block =
>> +{
>> +       .type = AMD_IP_BLOCK_TYPE_GMC,
>> +       .major = 9,
>> +       .minor = 0,
>> +       .rev = 0,
>> +       .funcs = &gmc_v9_0_ip_funcs,
>> +};
>> diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.h b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.h
>> new file mode 100644
>> index 0000000..b030ca5
>> --- /dev/null
>> +++ b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.h
>> @@ -0,0 +1,30 @@
>> +/*
>> + * Copyright 2016 Advanced Micro Devices, Inc.
>> + *
>> + * Permission is hereby granted, free of charge, to any person obtaining
>> a
>> + * copy of this software and associated documentation files (the
>> "Software"),
>> + * to deal in the Software without restriction, including without
>> limitation
>> + * the rights to use, copy, modify, merge, publish, distribute,
>> sublicense,
>> + * and/or sell copies of the Software, and to permit persons to whom the
>> + * Software is furnished to do so, subject to the following conditions:
>> + *
>> + * The above copyright notice and this permission notice shall be
>> included in
>> + * all copies or substantial portions of the Software.
>> + *
>> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
>> EXPRESS OR
>> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
>> MERCHANTABILITY,
>> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT
>> SHALL
>> + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES
>> OR
>> + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
>> + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
>> + * OTHER DEALINGS IN THE SOFTWARE.
>> + *
>> + */
>> +
>> +#ifndef __GMC_V9_0_H__
>> +#define __GMC_V9_0_H__
>> +
>> +extern const struct amd_ip_funcs gmc_v9_0_ip_funcs;
>> +extern const struct amdgpu_ip_block_version gmc_v9_0_ip_block;
>> +
>> +#endif
>> diff --git a/drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c b/drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c
>> new file mode 100644
>> index 0000000..b1e0e6b
>> --- /dev/null
>> +++ b/drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c
>> @@ -0,0 +1,585 @@
>> +/*
>> + * Copyright 2016 Advanced Micro Devices, Inc.
>> + *
>> + * Permission is hereby granted, free of charge, to any person obtaining
>> a
>> + * copy of this software and associated documentation files (the
>> "Software"),
>> + * to deal in the Software without restriction, including without
>> limitation
>> + * the rights to use, copy, modify, merge, publish, distribute,
>> sublicense,
>> + * and/or sell copies of the Software, and to permit persons to whom the
>> + * Software is furnished to do so, subject to the following conditions:
>> + *
>> + * The above copyright notice and this permission notice shall be
>> included in
>> + * all copies or substantial portions of the Software.
>> + *
>> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
>> EXPRESS OR
>> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
>> MERCHANTABILITY,
>> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT
>> SHALL
>> + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES
>> OR
>> + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
>> + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
>> + * OTHER DEALINGS IN THE SOFTWARE.
>> + *
>> + */
>> +#include "amdgpu.h"
>> +#include "mmhub_v1_0.h"
>> +
>> +#include "vega10/soc15ip.h"
>> +#include "vega10/MMHUB/mmhub_1_0_offset.h"
>> +#include "vega10/MMHUB/mmhub_1_0_sh_mask.h"
>> +#include "vega10/MMHUB/mmhub_1_0_default.h"
>> +#include "vega10/ATHUB/athub_1_0_offset.h"
>> +#include "vega10/ATHUB/athub_1_0_sh_mask.h"
>> +#include "vega10/ATHUB/athub_1_0_default.h"
>> +#include "vega10/vega10_enum.h"
>> +
>> +#include "soc15_common.h"
>> +
>> +u64 mmhub_v1_0_get_fb_location(struct amdgpu_device *adev)
>> +{
>> +       u64 base = RREG32(SOC15_REG_OFFSET(MMHUB, 0, mmMC_VM_FB_LOCATION_BASE));
>> +
>> +       base &= MC_VM_FB_LOCATION_BASE__FB_BASE_MASK;
>> +       base <<= 24;
>> +
>> +       return base;
>> +}
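mmhub_v1_0_get_fb_location above masks out the FB_BASE field and shifts it left by 24, i.e. the register stores the framebuffer base in 16 MiB units. A standalone sketch of that decode (the mask value here is assumed for illustration; the real one comes from mmhub_1_0_sh_mask.h):

```c
#include <stdint.h>

/* Assumed layout for illustration: FB_BASE in the low 24 bits of
 * MC_VM_FB_LOCATION_BASE, in units of 16 MiB (hence the << 24). */
#define DEMO_FB_BASE_MASK 0x00FFFFFFu

uint64_t fb_location_from_reg(uint32_t reg)
{
	uint64_t base = reg & DEMO_FB_BASE_MASK;

	return base << 24; /* 16 MiB units -> byte address */
}
```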
>> +
>> +int mmhub_v1_0_gart_enable(struct amdgpu_device *adev)
>> +{
>> +       u32 tmp;
>> +       u64 value;
>> +       uint64_t addr;
>> +       u32 i;
>> +
>> +       /* Program MC. */
>> +       /* Update configuration */
_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply	[flat|nested] 101+ messages in thread

* Re: [PATCH 046/100] drm/amdgpu: Add GMC 9.0 support
       [not found]             ` <CADnq5_N_8r-pesBn1NwDrHJcOx8jnryhtcWow01Cbndj8B9N6w-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
@ 2017-03-23  2:54               ` Zhang, Jerry (Junwei)
  0 siblings, 0 replies; 101+ messages in thread
From: Zhang, Jerry (Junwei) @ 2017-03-23  2:54 UTC (permalink / raw)
  To: Alex Deucher; +Cc: Alex Deucher, amd-gfx list, Alex Xie

On 03/23/2017 10:53 AM, Alex Deucher wrote:
> On Wed, Mar 22, 2017 at 10:42 PM, Zhang, Jerry (Junwei)
> <Jerry.Zhang@amd.com> wrote:
>> Hi Alex,
>>
>> I remember we had a patch to remove the FB location programming in
>> gmc/vmhub.
>> I saw it's not done in gmc v9 in this patch, but pre-gmc-v9 code still
>> programs the FB register.
>>
>> Is anything missing from the sync here?
>> Or is it only supported for gmc v9 now?
>
> I never landed the patches for older asics because I couldn't get vce
> to work properly with dal disabled without reprogramming the fb
> location.

Thanks, got the latest info.

Jerry

>
> Alex
>
>
>>
>> Jerry
>>
>> On 03/21/2017 04:29 AM, Alex Deucher wrote:
>>>
>>> From: Alex Xie <AlexBin.Xie@amd.com>
>>>
>>> On SOC-15 parts, the GMC (Graphics Memory Controller) consists
>>> of two hubs: GFX (graphics and compute) and MM (sdma, uvd, vce).
>>>
>>> Signed-off-by: Alex Xie <AlexBin.Xie@amd.com>
>>> Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
>>> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
>>> ---
>>>    drivers/gpu/drm/amd/amdgpu/Makefile      |   6 +-
>>>    drivers/gpu/drm/amd/amdgpu/amdgpu.h      |  30 ++
>>>    drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c   |  28 +-
>>>    drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c | 447 +++++++++++++++++
>>>    drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.h |  35 ++
>>>    drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c    | 826 +++++++++++++++++++++++++++++++
>>>    drivers/gpu/drm/amd/amdgpu/gmc_v9_0.h    |  30 ++
>>>    drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c  | 585 ++++++++++++++++++++++
>>>    drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.h  |  35 ++
>>>    drivers/gpu/drm/amd/include/amd_shared.h |   2 +
>>>    10 files changed, 2016 insertions(+), 8 deletions(-)
>>>    create mode 100644 drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c
>>>    create mode 100644 drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.h
>>>    create mode 100644 drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
>>>    create mode 100644 drivers/gpu/drm/amd/amdgpu/gmc_v9_0.h
>>>    create mode 100644 drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c
>>>    create mode 100644 drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.h
>>>
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/Makefile b/drivers/gpu/drm/amd/amdgpu/Makefile
>>> index 69823e8..b5046fd 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/Makefile
>>> +++ b/drivers/gpu/drm/amd/amdgpu/Makefile
>>> @@ -45,7 +45,8 @@ amdgpu-y += \
>>>    # add GMC block
>>>    amdgpu-y += \
>>>          gmc_v7_0.o \
>>> -       gmc_v8_0.o
>>> +       gmc_v8_0.o \
>>> +       gfxhub_v1_0.o mmhub_v1_0.o gmc_v9_0.o
>>>
>>>    # add IH block
>>>    amdgpu-y += \
>>> @@ -74,7 +75,8 @@ amdgpu-y += \
>>>    # add async DMA block
>>>    amdgpu-y += \
>>>          sdma_v2_4.o \
>>> -       sdma_v3_0.o
>>> +       sdma_v3_0.o \
>>> +       sdma_v4_0.o
>>>
>>>    # add UVD block
>>>    amdgpu-y += \
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
>>> index aaded8d..d7257b6 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
>>> @@ -123,6 +123,11 @@ extern int amdgpu_param_buf_per_se;
>>>    /* max number of IP instances */
>>>    #define AMDGPU_MAX_SDMA_INSTANCES             2
>>>
>>> +/* max number of VMHUB */
>>> +#define AMDGPU_MAX_VMHUBS                      2
>>> +#define AMDGPU_MMHUB                           0
>>> +#define AMDGPU_GFXHUB                          1
>>> +
>>>    /* hardcode that limit for now */
>>>    #define AMDGPU_VA_RESERVED_SIZE                       (8 << 20)
>>>
>>> @@ -310,6 +315,12 @@ struct amdgpu_gart_funcs {
>>>                                       uint32_t flags);
>>>    };
>>>
>>> +/* provided by the mc block */
>>> +struct amdgpu_mc_funcs {
>>> +       /* adjust mc addr in fb for APU case */
>>> +       u64 (*adjust_mc_addr)(struct amdgpu_device *adev, u64 addr);
>>> +};
>>> +
>>>    /* provided by the ih block */
>>>    struct amdgpu_ih_funcs {
>>>          /* ring read/write ptr handling, called from interrupt context */
>>> @@ -559,6 +570,21 @@ int amdgpu_gart_bind(struct amdgpu_device *adev, uint64_t offset,
>>>    int amdgpu_ttm_recover_gart(struct amdgpu_device *adev);
>>>
>>>    /*
>>> + * VMHUB structures, functions & helpers
>>> + */
>>> +struct amdgpu_vmhub {
>>> +       uint32_t        ctx0_ptb_addr_lo32;
>>> +       uint32_t        ctx0_ptb_addr_hi32;
>>> +       uint32_t        vm_inv_eng0_req;
>>> +       uint32_t        vm_inv_eng0_ack;
>>> +       uint32_t        vm_context0_cntl;
>>> +       uint32_t        vm_l2_pro_fault_status;
>>> +       uint32_t        vm_l2_pro_fault_cntl;
>>> +       uint32_t        (*get_invalidate_req)(unsigned int vm_id);
>>> +       uint32_t        (*get_vm_protection_bits)(void);
>>> +};
>>> +
>>> +/*
>>>     * GPU MC structures, functions & helpers
>>>     */
>>>    struct amdgpu_mc {
>>> @@ -591,6 +617,9 @@ struct amdgpu_mc {
>>>          u64                                     shared_aperture_end;
>>>          u64                                     private_aperture_start;
>>>          u64                                     private_aperture_end;
>>> +       /* protects concurrent invalidation */
>>> +       spinlock_t              invalidate_lock;
>>> +       const struct amdgpu_mc_funcs *mc_funcs;
>>>    };
>>>
>>>    /*
>>> @@ -1479,6 +1508,7 @@ struct amdgpu_device {
>>>          struct amdgpu_gart              gart;
>>>          struct amdgpu_dummy_page        dummy_page;
>>>          struct amdgpu_vm_manager        vm_manager;
>>> +       struct amdgpu_vmhub             vmhub[AMDGPU_MAX_VMHUBS];
>>>
>>>          /* memory management */
>>>          struct amdgpu_mman              mman;
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>> index df615d7..47a8080 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>> @@ -375,6 +375,16 @@ static bool amdgpu_vm_ring_has_compute_vm_bug(struct amdgpu_ring *ring)
>>>          return false;
>>>    }
>>>
>>> +static u64 amdgpu_vm_adjust_mc_addr(struct amdgpu_device *adev, u64 mc_addr)
>>> +{
>>> +       u64 addr = mc_addr;
>>> +
>>> +       if (adev->mc.mc_funcs && adev->mc.mc_funcs->adjust_mc_addr)
>>> +               addr = adev->mc.mc_funcs->adjust_mc_addr(adev, addr);
>>> +
>>> +       return addr;
>>> +}
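amdgpu_vm_adjust_mc_addr above is a null-tolerant hook: ASICs that need to translate MC addresses (APUs, via the mc_funcs callback) install it, everyone else leaves it NULL and the address passes through unchanged. A minimal standalone sketch of the same pattern (struct and names simplified; the APU hook here is hypothetical):

```c
#include <stddef.h>
#include <stdint.h>

struct mc_funcs {
	uint64_t (*adjust_mc_addr)(uint64_t addr, uint64_t base_offset);
};

struct device {
	const struct mc_funcs *mc_funcs;
	uint64_t vram_base_offset;
};

/* Hypothetical APU hook: offset addresses by the VRAM base. */
static uint64_t apu_adjust(uint64_t addr, uint64_t base_offset)
{
	return addr + base_offset;
}

static const struct mc_funcs apu_mc_funcs = { .adjust_mc_addr = apu_adjust };
const struct device apu_dev  = { &apu_mc_funcs, 0x100000000ULL };
const struct device dgpu_dev = { NULL, 0 };

/* Pass the address through unchanged when no hook is installed. */
uint64_t adjust_mc_addr(const struct device *dev, uint64_t addr)
{
	if (dev->mc_funcs && dev->mc_funcs->adjust_mc_addr)
		return dev->mc_funcs->adjust_mc_addr(addr,
						     dev->vram_base_offset);
	return addr;
}
```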
>>> +
>>>    /**
>>>     * amdgpu_vm_flush - hardware flush the vm
>>>     *
>>> @@ -405,9 +415,10 @@ int amdgpu_vm_flush(struct amdgpu_ring *ring, struct amdgpu_job *job)
>>>          if (ring->funcs->emit_vm_flush && (job->vm_needs_flush ||
>>>              amdgpu_vm_is_gpu_reset(adev, id))) {
>>>                  struct fence *fence;
>>> +               u64 pd_addr = amdgpu_vm_adjust_mc_addr(adev, job->vm_pd_addr);
>>>
>>> -               trace_amdgpu_vm_flush(job->vm_pd_addr, ring->idx, job->vm_id);
>>> -               amdgpu_ring_emit_vm_flush(ring, job->vm_id, job->vm_pd_addr);
>>> +               trace_amdgpu_vm_flush(pd_addr, ring->idx, job->vm_id);
>>> +               amdgpu_ring_emit_vm_flush(ring, job->vm_id, pd_addr);
>>>
>>>                  r = amdgpu_fence_emit(ring, &fence);
>>>                  if (r)
>>> @@ -643,15 +654,18 @@ int amdgpu_vm_update_page_directory(struct amdgpu_device *adev,
>>>                      (count == AMDGPU_VM_MAX_UPDATE_SIZE)) {
>>>
>>>                          if (count) {
>>> +                               uint64_t pt_addr =
>>> +                                       amdgpu_vm_adjust_mc_addr(adev, last_pt);
>>> +
>>>                                  if (shadow)
>>>                                          amdgpu_vm_do_set_ptes(&params,
>>>                                                                last_shadow,
>>> -                                                             last_pt, count,
>>> +                                                             pt_addr, count,
>>>                                                                incr,
>>>                                                                AMDGPU_PTE_VALID);
>>>
>>>                                  amdgpu_vm_do_set_ptes(&params, last_pde,
>>> -                                                     last_pt, count, incr,
>>> +                                                     pt_addr, count, incr,
>>>                                                        AMDGPU_PTE_VALID);
>>>                          }
>>>
>>> @@ -665,11 +679,13 @@ int amdgpu_vm_update_page_directory(struct amdgpu_device *adev,
>>>          }
>>>
>>>          if (count) {
>>> +               uint64_t pt_addr = amdgpu_vm_adjust_mc_addr(adev, last_pt);
>>> +
>>>                  if (vm->page_directory->shadow)
>>> -                       amdgpu_vm_do_set_ptes(&params, last_shadow,
>>> last_pt,
>>> +                       amdgpu_vm_do_set_ptes(&params, last_shadow,
>>> pt_addr,
>>>                                                count, incr, AMDGPU_PTE_VALID);
>>>
>>> -               amdgpu_vm_do_set_ptes(&params, last_pde, last_pt,
>>> +               amdgpu_vm_do_set_ptes(&params, last_pde, pt_addr,
>>>                                        count, incr, AMDGPU_PTE_VALID);
>>>          }
>>>
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c b/drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c
>>> new file mode 100644
>>> index 0000000..1ff019c
>>> --- /dev/null
>>> +++ b/drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.c
>>> @@ -0,0 +1,447 @@
>>> +/*
>>> + * Copyright 2016 Advanced Micro Devices, Inc.
>>> + *
>>> + * Permission is hereby granted, free of charge, to any person obtaining
>>> a
>>> + * copy of this software and associated documentation files (the
>>> "Software"),
>>> + * to deal in the Software without restriction, including without
>>> limitation
>>> + * the rights to use, copy, modify, merge, publish, distribute,
>>> sublicense,
>>> + * and/or sell copies of the Software, and to permit persons to whom the
>>> + * Software is furnished to do so, subject to the following conditions:
>>> + *
>>> + * The above copyright notice and this permission notice shall be
>>> included in
>>> + * all copies or substantial portions of the Software.
>>> + *
>>> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
>>> EXPRESS OR
>>> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
>>> MERCHANTABILITY,
>>> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT
>>> SHALL
>>> + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES
>>> OR
>>> + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
>>> + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
>>> + * OTHER DEALINGS IN THE SOFTWARE.
>>> + *
>>> + */
>>> +#include "amdgpu.h"
>>> +#include "gfxhub_v1_0.h"
>>> +
>>> +#include "vega10/soc15ip.h"
>>> +#include "vega10/GC/gc_9_0_offset.h"
>>> +#include "vega10/GC/gc_9_0_sh_mask.h"
>>> +#include "vega10/GC/gc_9_0_default.h"
>>> +#include "vega10/vega10_enum.h"
>>> +
>>> +#include "soc15_common.h"
>>> +
>>> +int gfxhub_v1_0_gart_enable(struct amdgpu_device *adev)
>>> +{
>>> +       u32 tmp;
>>> +       u64 value;
>>> +       u32 i;
>>> +
>>> +       /* Program MC. */
>>> +       /* Update configuration */
>>> +       WREG32(SOC15_REG_OFFSET(GC, 0, mmMC_VM_SYSTEM_APERTURE_LOW_ADDR),
>>> +               adev->mc.vram_start >> 18);
>>> +       WREG32(SOC15_REG_OFFSET(GC, 0, mmMC_VM_SYSTEM_APERTURE_HIGH_ADDR),
>>> +               adev->mc.vram_end >> 18);
>>> +
>>> +       value = adev->vram_scratch.gpu_addr - adev->mc.vram_start
>>> +               + adev->vm_manager.vram_base_offset;
>>> +       WREG32(SOC15_REG_OFFSET(GC, 0,
>>> +                               mmMC_VM_SYSTEM_APERTURE_DEFAULT_ADDR_LSB),
>>> +                               (u32)(value >> 12));
>>> +       WREG32(SOC15_REG_OFFSET(GC, 0,
>>> +                               mmMC_VM_SYSTEM_APERTURE_DEFAULT_ADDR_MSB),
>>> +                               (u32)(value >> 44));
>>> +
>>> +       /* Disable AGP. */
>>> +       WREG32(SOC15_REG_OFFSET(GC, 0, mmMC_VM_AGP_BASE), 0);
>>> +       WREG32(SOC15_REG_OFFSET(GC, 0, mmMC_VM_AGP_TOP), 0);
>>> +       WREG32(SOC15_REG_OFFSET(GC, 0, mmMC_VM_AGP_BOT), 0xFFFFFFFF);
>>> +
>>> +       /* GART Enable. */
>>> +
>>> +       /* Setup TLB control */
>>> +       tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmMC_VM_MX_L1_TLB_CNTL));
>>> +       tmp = REG_SET_FIELD(tmp, MC_VM_MX_L1_TLB_CNTL, ENABLE_L1_TLB, 1);
>>> +       tmp = REG_SET_FIELD(tmp,
>>> +                               MC_VM_MX_L1_TLB_CNTL,
>>> +                               SYSTEM_ACCESS_MODE,
>>> +                               3);
>>> +       tmp = REG_SET_FIELD(tmp,
>>> +                               MC_VM_MX_L1_TLB_CNTL,
>>> +                               ENABLE_ADVANCED_DRIVER_MODEL,
>>> +                               1);
>>> +       tmp = REG_SET_FIELD(tmp,
>>> +                               MC_VM_MX_L1_TLB_CNTL,
>>> +                               SYSTEM_APERTURE_UNMAPPED_ACCESS,
>>> +                               0);
>>> +       tmp = REG_SET_FIELD(tmp,
>>> +                               MC_VM_MX_L1_TLB_CNTL,
>>> +                               ECO_BITS,
>>> +                               0);
>>> +       tmp = REG_SET_FIELD(tmp,
>>> +                               MC_VM_MX_L1_TLB_CNTL,
>>> +                               MTYPE,
>>> +                               MTYPE_UC);/* XXX for emulation. */
>>> +       tmp = REG_SET_FIELD(tmp,
>>> +                               MC_VM_MX_L1_TLB_CNTL,
>>> +                               ATC_EN,
>>> +                               1);
>>> +       WREG32(SOC15_REG_OFFSET(GC, 0, mmMC_VM_MX_L1_TLB_CNTL), tmp);
>>> +
>>> +       /* Setup L2 cache */
>>> +       tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_CNTL));
>>> +       tmp = REG_SET_FIELD(tmp, VM_L2_CNTL, ENABLE_L2_CACHE, 1);
>>> +       tmp = REG_SET_FIELD(tmp,
>>> +                               VM_L2_CNTL,
>>> +                               ENABLE_L2_FRAGMENT_PROCESSING,
>>> +                               0);
>>> +       tmp = REG_SET_FIELD(tmp,
>>> +                               VM_L2_CNTL,
>>> +                               L2_PDE0_CACHE_TAG_GENERATION_MODE,
>>> +                               0);/* XXX for emulation, Refer to closed source code.*/
>>> +       tmp = REG_SET_FIELD(tmp, VM_L2_CNTL, PDE_FAULT_CLASSIFICATION, 1);
>>> +       tmp = REG_SET_FIELD(tmp,
>>> +                               VM_L2_CNTL,
>>> +                               CONTEXT1_IDENTITY_ACCESS_MODE,
>>> +                               1);
>>> +       tmp = REG_SET_FIELD(tmp,
>>> +                               VM_L2_CNTL,
>>> +                               IDENTITY_MODE_FRAGMENT_SIZE,
>>> +                               0);
>>> +       WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_CNTL), tmp);
>>> +
>>> +       tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_CNTL2));
>>> +       tmp = REG_SET_FIELD(tmp, VM_L2_CNTL2, INVALIDATE_ALL_L1_TLBS, 1);
>>> +       tmp = REG_SET_FIELD(tmp, VM_L2_CNTL2, INVALIDATE_L2_CACHE, 1);
>>> +       WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_CNTL2), tmp);
>>> +
>>> +       tmp = mmVM_L2_CNTL3_DEFAULT;
>>> +       WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_CNTL3), tmp);
>>> +
>>> +       tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_CNTL4));
>>> +       tmp = REG_SET_FIELD(tmp,
>>> +                           VM_L2_CNTL4,
>>> +                           VMC_TAP_PDE_REQUEST_PHYSICAL,
>>> +                           0);
>>> +       tmp = REG_SET_FIELD(tmp,
>>> +                           VM_L2_CNTL4,
>>> +                           VMC_TAP_PTE_REQUEST_PHYSICAL,
>>> +                           0);
>>> +       WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_CNTL4), tmp);
>>> +
>>> +       /* setup context0 */
>>> +       WREG32(SOC15_REG_OFFSET(GC, 0,
>>> +                               mmVM_CONTEXT0_PAGE_TABLE_START_ADDR_LO32),
>>> +               (u32)(adev->mc.gtt_start >> 12));
>>> +       WREG32(SOC15_REG_OFFSET(GC, 0,
>>> +                               mmVM_CONTEXT0_PAGE_TABLE_START_ADDR_HI32),
>>> +               (u32)(adev->mc.gtt_start >> 44));
>>> +
>>> +       WREG32(SOC15_REG_OFFSET(GC, 0,
>>> +                               mmVM_CONTEXT0_PAGE_TABLE_END_ADDR_LO32),
>>> +               (u32)(adev->mc.gtt_end >> 12));
>>> +       WREG32(SOC15_REG_OFFSET(GC, 0,
>>> +                               mmVM_CONTEXT0_PAGE_TABLE_END_ADDR_HI32),
>>> +               (u32)(adev->mc.gtt_end >> 44));
>>> +
>>> +       BUG_ON(adev->gart.table_addr & (~0x0000FFFFFFFFF000ULL));
>>> +       value = adev->gart.table_addr - adev->mc.vram_start
>>> +               + adev->vm_manager.vram_base_offset;
>>> +       value &= 0x0000FFFFFFFFF000ULL;
>>> +       value |= 0x1; /*valid bit*/
>>> +
>>> +       WREG32(SOC15_REG_OFFSET(GC, 0,
>>> +                               mmVM_CONTEXT0_PAGE_TABLE_BASE_ADDR_LO32),
>>> +               (u32)value);
>>> +       WREG32(SOC15_REG_OFFSET(GC, 0,
>>> +                               mmVM_CONTEXT0_PAGE_TABLE_BASE_ADDR_HI32),
>>> +               (u32)(value >> 32));
>>> +
>>> +       WREG32(SOC15_REG_OFFSET(GC, 0,
>>> +                               mmVM_L2_PROTECTION_FAULT_DEFAULT_ADDR_LO32),
>>> +               (u32)(adev->dummy_page.addr >> 12));
>>> +       WREG32(SOC15_REG_OFFSET(GC, 0,
>>> +                               mmVM_L2_PROTECTION_FAULT_DEFAULT_ADDR_HI32),
>>> +               (u32)(adev->dummy_page.addr >> 44));
>>> +
>>> +       tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_PROTECTION_FAULT_CNTL2));
>>> +       tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL2,
>>> +                           ACTIVE_PAGE_MIGRATION_PTE_READ_RETRY, 1);
>>> +       WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_PROTECTION_FAULT_CNTL2), tmp);
>>> +
>>> +       tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmVM_CONTEXT0_CNTL));
>>> +       tmp = REG_SET_FIELD(tmp, VM_CONTEXT0_CNTL, ENABLE_CONTEXT, 1);
>>> +       tmp = REG_SET_FIELD(tmp, VM_CONTEXT0_CNTL, PAGE_TABLE_DEPTH, 0);
>>> +       WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_CONTEXT0_CNTL), tmp);
>>> +
>>> +       /* Disable identity aperture. */
>>> +       WREG32(SOC15_REG_OFFSET(GC, 0,
>>> +               mmVM_L2_CONTEXT1_IDENTITY_APERTURE_LOW_ADDR_LO32), 0xFFFFFFFF);
>>> +       WREG32(SOC15_REG_OFFSET(GC, 0,
>>> +               mmVM_L2_CONTEXT1_IDENTITY_APERTURE_LOW_ADDR_HI32), 0x0000000F);
>>> +
>>> +       WREG32(SOC15_REG_OFFSET(GC, 0,
>>> +               mmVM_L2_CONTEXT1_IDENTITY_APERTURE_HIGH_ADDR_LO32), 0);
>>> +       WREG32(SOC15_REG_OFFSET(GC, 0,
>>> +               mmVM_L2_CONTEXT1_IDENTITY_APERTURE_HIGH_ADDR_HI32), 0);
>>> +
>>> +       WREG32(SOC15_REG_OFFSET(GC, 0,
>>> +               mmVM_L2_CONTEXT_IDENTITY_PHYSICAL_OFFSET_LO32), 0);
>>> +       WREG32(SOC15_REG_OFFSET(GC, 0,
>>> +               mmVM_L2_CONTEXT_IDENTITY_PHYSICAL_OFFSET_HI32), 0);
>>> +
>>> +       for (i = 0; i <= 14; i++) {
>>> +               tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmVM_CONTEXT1_CNTL) + i);
>>> +               tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL, ENABLE_CONTEXT, 1);
>>> +               tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL, PAGE_TABLE_DEPTH, 1);
>>> +               tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
>>> +                               RANGE_PROTECTION_FAULT_ENABLE_DEFAULT, 1);
>>> +               tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
>>> +                               DUMMY_PAGE_PROTECTION_FAULT_ENABLE_DEFAULT, 1);
>>> +               tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
>>> +                               PDE0_PROTECTION_FAULT_ENABLE_DEFAULT, 1);
>>> +               tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
>>> +                               VALID_PROTECTION_FAULT_ENABLE_DEFAULT, 1);
>>> +               tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
>>> +                               READ_PROTECTION_FAULT_ENABLE_DEFAULT, 1);
>>> +               tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
>>> +                               WRITE_PROTECTION_FAULT_ENABLE_DEFAULT, 1);
>>> +               tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
>>> +                               EXECUTE_PROTECTION_FAULT_ENABLE_DEFAULT, 1);
>>> +               tmp = REG_SET_FIELD(tmp, VM_CONTEXT1_CNTL,
>>> +                               PAGE_TABLE_BLOCK_SIZE,
>>> +                               amdgpu_vm_block_size - 9);
>>> +               WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_CONTEXT1_CNTL) + i, tmp);
>>> +               WREG32(SOC15_REG_OFFSET(GC, 0,
>>> +                       mmVM_CONTEXT1_PAGE_TABLE_START_ADDR_LO32) + i*2, 0);
>>> +               WREG32(SOC15_REG_OFFSET(GC, 0,
>>> +                       mmVM_CONTEXT1_PAGE_TABLE_START_ADDR_HI32) + i*2, 0);
>>> +               WREG32(SOC15_REG_OFFSET(GC, 0,
>>> +                       mmVM_CONTEXT1_PAGE_TABLE_END_ADDR_LO32) + i*2,
>>> +                       adev->vm_manager.max_pfn - 1);
>>> +               WREG32(SOC15_REG_OFFSET(GC, 0,
>>> +                       mmVM_CONTEXT1_PAGE_TABLE_END_ADDR_HI32) + i*2, 0);
>>> +       }
>>> +
>>> +       return 0;
>>> +}
>>> +
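Nearly every hunk in this file uses the same read-modify-write idiom: RREG32, a chain of REG_SET_FIELD() updates, then WREG32. A standalone model of the shift/mask arithmetic the macro expands to (the mask/shift pair below is illustrative, not the real VM_L2_CNTL field layout):

```c
#include <assert.h>
#include <stdint.h>

/* Model of the REG_SET_FIELD() idiom: clear the field's bits with its
 * mask, then OR in the new value at the field's shift. In the kernel,
 * the mask/shift come from the generated _MASK/__SHIFT defines. */
static uint32_t reg_set_field(uint32_t reg, uint32_t mask,
			      uint32_t shift, uint32_t val)
{
	return (reg & ~mask) | ((val << shift) & mask);
}
```

The kernel macro takes register and field names and expands to exactly this arithmetic using the autogenerated `__SHIFT`/`_MASK` constants from the register headers.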
>>> +void gfxhub_v1_0_gart_disable(struct amdgpu_device *adev)
>>> +{
>>> +       u32 tmp;
>>> +       u32 i;
>>> +
>>> +       /* Disable all tables */
>>> +       for (i = 0; i < 16; i++)
>>> +               WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_CONTEXT0_CNTL) + i, 0);
>>> +
>>> +       /* Setup TLB control */
>>> +       tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmMC_VM_MX_L1_TLB_CNTL));
>>> +       tmp = REG_SET_FIELD(tmp, MC_VM_MX_L1_TLB_CNTL, ENABLE_L1_TLB, 0);
>>> +       tmp = REG_SET_FIELD(tmp,
>>> +                               MC_VM_MX_L1_TLB_CNTL,
>>> +                               ENABLE_ADVANCED_DRIVER_MODEL,
>>> +                               0);
>>> +       WREG32(SOC15_REG_OFFSET(GC, 0, mmMC_VM_MX_L1_TLB_CNTL), tmp);
>>> +
>>> +       /* Setup L2 cache */
>>> +       tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_CNTL));
>>> +       tmp = REG_SET_FIELD(tmp, VM_L2_CNTL, ENABLE_L2_CACHE, 0);
>>> +       WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_CNTL), tmp);
>>> +       WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_CNTL3), 0);
>>> +}
>>> +
>>> +/**
>>> + * gfxhub_v1_0_set_fault_enable_default - update GART/VM fault handling
>>> + *
>>> + * @adev: amdgpu_device pointer
>>> + * @value: true redirects VM faults to the default page
>>> + */
>>> +void gfxhub_v1_0_set_fault_enable_default(struct amdgpu_device *adev,
>>> +                                         bool value)
>>> +{
>>> +       u32 tmp;
>>> +
>>> +       tmp = RREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_PROTECTION_FAULT_CNTL));
>>> +       tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
>>> +                       RANGE_PROTECTION_FAULT_ENABLE_DEFAULT, value);
>>> +       tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
>>> +                       PDE0_PROTECTION_FAULT_ENABLE_DEFAULT, value);
>>> +       tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
>>> +                       PDE1_PROTECTION_FAULT_ENABLE_DEFAULT, value);
>>> +       tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
>>> +                       PDE2_PROTECTION_FAULT_ENABLE_DEFAULT, value);
>>> +       tmp = REG_SET_FIELD(tmp,
>>> +                       VM_L2_PROTECTION_FAULT_CNTL,
>>> +                       TRANSLATE_FURTHER_PROTECTION_FAULT_ENABLE_DEFAULT,
>>> +                       value);
>>> +       tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
>>> +                       NACK_PROTECTION_FAULT_ENABLE_DEFAULT, value);
>>> +       tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
>>> +                       DUMMY_PAGE_PROTECTION_FAULT_ENABLE_DEFAULT, value);
>>> +       tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
>>> +                       VALID_PROTECTION_FAULT_ENABLE_DEFAULT, value);
>>> +       tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
>>> +                       READ_PROTECTION_FAULT_ENABLE_DEFAULT, value);
>>> +       tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
>>> +                       WRITE_PROTECTION_FAULT_ENABLE_DEFAULT, value);
>>> +       tmp = REG_SET_FIELD(tmp, VM_L2_PROTECTION_FAULT_CNTL,
>>> +                       EXECUTE_PROTECTION_FAULT_ENABLE_DEFAULT, value);
>>> +       WREG32(SOC15_REG_OFFSET(GC, 0, mmVM_L2_PROTECTION_FAULT_CNTL), tmp);
>>> +}
>>> +
>>> +static uint32_t gfxhub_v1_0_get_invalidate_req(unsigned int vm_id)
>>> +{
>>> +       u32 req = 0;
>>> +
>>> +       /* invalidate using legacy mode on vm_id */
>>> +       req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ,
>>> +                           PER_VMID_INVALIDATE_REQ, 1 << vm_id);
>>> +       req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ, FLUSH_TYPE, 0);
>>> +       req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ, INVALIDATE_L2_PTES, 1);
>>> +       req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ, INVALIDATE_L2_PDE0, 1);
>>> +       req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ, INVALIDATE_L2_PDE1, 1);
>>> +       req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ, INVALIDATE_L2_PDE2, 1);
>>> +       req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ, INVALIDATE_L1_PTES, 1);
>>> +       req = REG_SET_FIELD(req, VM_INVALIDATE_ENG0_REQ,
>>> +                           CLEAR_PROTECTION_FAULT_STATUS_ADDR, 0);
>>> +
>>> +       return req;
>>> +}
>>> +
>>> +static uint32_t gfxhub_v1_0_get_vm_protection_bits(void)
>>> +{
>>> +       return (VM_CONTEXT1_CNTL__RANGE_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK |
>>> +               VM_CONTEXT1_CNTL__DUMMY_PAGE_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK |
>>> +               VM_CONTEXT1_CNTL__PDE0_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK |
>>> +               VM_CONTEXT1_CNTL__VALID_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK |
>>> +               VM_CONTEXT1_CNTL__READ_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK |
>>> +               VM_CONTEXT1_CNTL__WRITE_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK |
>>> +               VM_CONTEXT1_CNTL__EXECUTE_PROTECTION_FAULT_ENABLE_INTERRUPT_MASK);
>>> +}
>>> +
>>> +static int gfxhub_v1_0_early_init(void *handle)
>>> +{
>>> +       return 0;
>>> +}
>>> +
>>> +static int gfxhub_v1_0_late_init(void *handle)
>>> +{
>>> +       return 0;
>>> +}
>>> +
>>> +static int gfxhub_v1_0_sw_init(void *handle)
>>> +{
>>> +       struct amdgpu_device *adev = (struct amdgpu_device *)handle;
>>> +       struct amdgpu_vmhub *hub = &adev->vmhub[AMDGPU_GFXHUB];
>>> +
>>> +       hub->ctx0_ptb_addr_lo32 =
>>> +               SOC15_REG_OFFSET(GC, 0,
>>> +                                mmVM_CONTEXT0_PAGE_TABLE_BASE_ADDR_LO32);
>>> +       hub->ctx0_ptb_addr_hi32 =
>>> +               SOC15_REG_OFFSET(GC, 0,
>>> +                                mmVM_CONTEXT0_PAGE_TABLE_BASE_ADDR_HI32);
>>> +       hub->vm_inv_eng0_req =
>>> +               SOC15_REG_OFFSET(GC, 0, mmVM_INVALIDATE_ENG0_REQ);
>>> +       hub->vm_inv_eng0_ack =
>>> +               SOC15_REG_OFFSET(GC, 0, mmVM_INVALIDATE_ENG0_ACK);
>>> +       hub->vm_context0_cntl =
>>> +               SOC15_REG_OFFSET(GC, 0, mmVM_CONTEXT0_CNTL);
>>> +       hub->vm_l2_pro_fault_status =
>>> +               SOC15_REG_OFFSET(GC, 0, mmVM_L2_PROTECTION_FAULT_STATUS);
>>> +       hub->vm_l2_pro_fault_cntl =
>>> +               SOC15_REG_OFFSET(GC, 0, mmVM_L2_PROTECTION_FAULT_CNTL);
>>> +
>>> +       hub->get_invalidate_req = gfxhub_v1_0_get_invalidate_req;
>>> +       hub->get_vm_protection_bits = gfxhub_v1_0_get_vm_protection_bits;
>>> +
>>> +       return 0;
>>> +}
>>> +
>>> +static int gfxhub_v1_0_sw_fini(void *handle)
>>> +{
>>> +       return 0;
>>> +}
>>> +
>>> +static int gfxhub_v1_0_hw_init(void *handle)
>>> +{
>>> +       struct amdgpu_device *adev = (struct amdgpu_device *)handle;
>>> +       unsigned i;
>>> +
>>> +       for (i = 0; i < 18; ++i) {
>>> +               WREG32(SOC15_REG_OFFSET(GC, 0,
>>> +                               mmVM_INVALIDATE_ENG0_ADDR_RANGE_LO32) +
>>> +                      2 * i, 0xffffffff);
>>> +               WREG32(SOC15_REG_OFFSET(GC, 0,
>>> +                               mmVM_INVALIDATE_ENG0_ADDR_RANGE_HI32) +
>>> +                      2 * i, 0x1f);
>>> +       }
>>> +
>>> +       return 0;
>>> +}
>>> +
>>> +static int gfxhub_v1_0_hw_fini(void *handle)
>>> +{
>>> +       return 0;
>>> +}
>>> +
>>> +static int gfxhub_v1_0_suspend(void *handle)
>>> +{
>>> +       return 0;
>>> +}
>>> +
>>> +static int gfxhub_v1_0_resume(void *handle)
>>> +{
>>> +       return 0;
>>> +}
>>> +
>>> +static bool gfxhub_v1_0_is_idle(void *handle)
>>> +{
>>> +       return true;
>>> +}
>>> +
>>> +static int gfxhub_v1_0_wait_for_idle(void *handle)
>>> +{
>>> +       return 0;
>>> +}
>>> +
>>> +static int gfxhub_v1_0_soft_reset(void *handle)
>>> +{
>>> +       return 0;
>>> +}
>>> +
>>> +static int gfxhub_v1_0_set_clockgating_state(void *handle,
>>> +                                            enum amd_clockgating_state state)
>>> +{
>>> +       return 0;
>>> +}
>>> +
>>> +static int gfxhub_v1_0_set_powergating_state(void *handle,
>>> +                                            enum amd_powergating_state state)
>>> +{
>>> +       return 0;
>>> +}
>>> +
>>> +const struct amd_ip_funcs gfxhub_v1_0_ip_funcs = {
>>> +       .name = "gfxhub_v1_0",
>>> +       .early_init = gfxhub_v1_0_early_init,
>>> +       .late_init = gfxhub_v1_0_late_init,
>>> +       .sw_init = gfxhub_v1_0_sw_init,
>>> +       .sw_fini = gfxhub_v1_0_sw_fini,
>>> +       .hw_init = gfxhub_v1_0_hw_init,
>>> +       .hw_fini = gfxhub_v1_0_hw_fini,
>>> +       .suspend = gfxhub_v1_0_suspend,
>>> +       .resume = gfxhub_v1_0_resume,
>>> +       .is_idle = gfxhub_v1_0_is_idle,
>>> +       .wait_for_idle = gfxhub_v1_0_wait_for_idle,
>>> +       .soft_reset = gfxhub_v1_0_soft_reset,
>>> +       .set_clockgating_state = gfxhub_v1_0_set_clockgating_state,
>>> +       .set_powergating_state = gfxhub_v1_0_set_powergating_state,
>>> +};
>>> +
>>> +const struct amdgpu_ip_block_version gfxhub_v1_0_ip_block =
>>> +{
>>> +       .type = AMD_IP_BLOCK_TYPE_GFXHUB,
>>> +       .major = 1,
>>> +       .minor = 0,
>>> +       .rev = 0,
>>> +       .funcs = &gfxhub_v1_0_ip_funcs,
>>> +};
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.h b/drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.h
>>> new file mode 100644
>>> index 0000000..5129a8f
>>> --- /dev/null
>>> +++ b/drivers/gpu/drm/amd/amdgpu/gfxhub_v1_0.h
>>> @@ -0,0 +1,35 @@
>>> +/*
>>> + * Copyright 2016 Advanced Micro Devices, Inc.
>>> + *
>>> + * Permission is hereby granted, free of charge, to any person obtaining a
>>> + * copy of this software and associated documentation files (the "Software"),
>>> + * to deal in the Software without restriction, including without limitation
>>> + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
>>> + * and/or sell copies of the Software, and to permit persons to whom the
>>> + * Software is furnished to do so, subject to the following conditions:
>>> + *
>>> + * The above copyright notice and this permission notice shall be included in
>>> + * all copies or substantial portions of the Software.
>>> + *
>>> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
>>> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
>>> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
>>> + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
>>> + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
>>> + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
>>> + * OTHER DEALINGS IN THE SOFTWARE.
>>> + *
>>> + */
>>> +
>>> +#ifndef __GFXHUB_V1_0_H__
>>> +#define __GFXHUB_V1_0_H__
>>> +
>>> +int gfxhub_v1_0_gart_enable(struct amdgpu_device *adev);
>>> +void gfxhub_v1_0_gart_disable(struct amdgpu_device *adev);
>>> +void gfxhub_v1_0_set_fault_enable_default(struct amdgpu_device *adev,
>>> +                                         bool value);
>>> +
>>> +extern const struct amd_ip_funcs gfxhub_v1_0_ip_funcs;
>>> +extern const struct amdgpu_ip_block_version gfxhub_v1_0_ip_block;
>>> +
>>> +#endif
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
>>> new file mode 100644
>>> index 0000000..5cf0fc3
>>> --- /dev/null
>>> +++ b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
>>> @@ -0,0 +1,826 @@
>>> +/*
>>> + * Copyright 2016 Advanced Micro Devices, Inc.
>>> + *
>>> + * Permission is hereby granted, free of charge, to any person obtaining a
>>> + * copy of this software and associated documentation files (the "Software"),
>>> + * to deal in the Software without restriction, including without limitation
>>> + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
>>> + * and/or sell copies of the Software, and to permit persons to whom the
>>> + * Software is furnished to do so, subject to the following conditions:
>>> + *
>>> + * The above copyright notice and this permission notice shall be included in
>>> + * all copies or substantial portions of the Software.
>>> + *
>>> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
>>> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
>>> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
>>> + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
>>> + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
>>> + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
>>> + * OTHER DEALINGS IN THE SOFTWARE.
>>> + *
>>> + */
>>> +#include <linux/firmware.h>
>>> +#include "amdgpu.h"
>>> +#include "gmc_v9_0.h"
>>> +
>>> +#include "vega10/soc15ip.h"
>>> +#include "vega10/HDP/hdp_4_0_offset.h"
>>> +#include "vega10/HDP/hdp_4_0_sh_mask.h"
>>> +#include "vega10/GC/gc_9_0_sh_mask.h"
>>> +#include "vega10/vega10_enum.h"
>>> +
>>> +#include "soc15_common.h"
>>> +
>>> +#include "nbio_v6_1.h"
>>> +#include "gfxhub_v1_0.h"
>>> +#include "mmhub_v1_0.h"
>>> +
>>> +#define mmDF_CS_AON0_DramBaseAddress0                          0x0044
>>> +#define mmDF_CS_AON0_DramBaseAddress0_BASE_IDX                 0
>>> +//DF_CS_AON0_DramBaseAddress0
>>> +#define DF_CS_AON0_DramBaseAddress0__AddrRngVal__SHIFT         0x0
>>> +#define DF_CS_AON0_DramBaseAddress0__LgcyMmioHoleEn__SHIFT     0x1
>>> +#define DF_CS_AON0_DramBaseAddress0__IntLvNumChan__SHIFT       0x4
>>> +#define DF_CS_AON0_DramBaseAddress0__IntLvAddrSel__SHIFT       0x8
>>> +#define DF_CS_AON0_DramBaseAddress0__DramBaseAddr__SHIFT       0xc
>>> +#define DF_CS_AON0_DramBaseAddress0__AddrRngVal_MASK           0x00000001L
>>> +#define DF_CS_AON0_DramBaseAddress0__LgcyMmioHoleEn_MASK       0x00000002L
>>> +#define DF_CS_AON0_DramBaseAddress0__IntLvNumChan_MASK         0x000000F0L
>>> +#define DF_CS_AON0_DramBaseAddress0__IntLvAddrSel_MASK         0x00000700L
>>> +#define DF_CS_AON0_DramBaseAddress0__DramBaseAddr_MASK         0xFFFFF000L
>>> +
>>> +/* XXX Move this macro to a VEGA10 header file, which is like vid.h for VI. */
>>> +#define AMDGPU_NUM_OF_VMIDS                    8
>>> +
>>> +static const u32 golden_settings_vega10_hdp[] =
>>> +{
>>> +       0xf64, 0x0fffffff, 0x00000000,
>>> +       0xf65, 0x0fffffff, 0x00000000,
>>> +       0xf66, 0x0fffffff, 0x00000000,
>>> +       0xf67, 0x0fffffff, 0x00000000,
>>> +       0xf68, 0x0fffffff, 0x00000000,
>>> +       0xf6a, 0x0fffffff, 0x00000000,
>>> +       0xf6b, 0x0fffffff, 0x00000000,
>>> +       0xf6c, 0x0fffffff, 0x00000000,
>>> +       0xf6d, 0x0fffffff, 0x00000000,
>>> +       0xf6e, 0x0fffffff, 0x00000000,
>>> +};
>>> +
>>> +static int gmc_v9_0_vm_fault_interrupt_state(struct amdgpu_device *adev,
>>> +                                       struct amdgpu_irq_src *src,
>>> +                                       unsigned type,
>>> +                                       enum amdgpu_interrupt_state state)
>>> +{
>>> +       struct amdgpu_vmhub *hub;
>>> +       u32 tmp, reg, bits, i;
>>> +
>>> +       switch (state) {
>>> +       case AMDGPU_IRQ_STATE_DISABLE:
>>> +               /* MM HUB */
>>> +               hub = &adev->vmhub[AMDGPU_MMHUB];
>>> +               bits = hub->get_vm_protection_bits();
>>> +               for (i = 0; i < 16; i++) {
>>> +                       reg = hub->vm_context0_cntl + i;
>>> +                       tmp = RREG32(reg);
>>> +                       tmp &= ~bits;
>>> +                       WREG32(reg, tmp);
>>> +               }
>>> +
>>> +               /* GFX HUB */
>>> +               hub = &adev->vmhub[AMDGPU_GFXHUB];
>>> +               bits = hub->get_vm_protection_bits();
>>> +               for (i = 0; i < 16; i++) {
>>> +                       reg = hub->vm_context0_cntl + i;
>>> +                       tmp = RREG32(reg);
>>> +                       tmp &= ~bits;
>>> +                       WREG32(reg, tmp);
>>> +               }
>>> +               break;
>>> +       case AMDGPU_IRQ_STATE_ENABLE:
>>> +               /* MM HUB */
>>> +               hub = &adev->vmhub[AMDGPU_MMHUB];
>>> +               bits = hub->get_vm_protection_bits();
>>> +               for (i = 0; i < 16; i++) {
>>> +                       reg = hub->vm_context0_cntl + i;
>>> +                       tmp = RREG32(reg);
>>> +                       tmp |= bits;
>>> +                       WREG32(reg, tmp);
>>> +               }
>>> +
>>> +               /* GFX HUB */
>>> +               hub = &adev->vmhub[AMDGPU_GFXHUB];
>>> +               bits = hub->get_vm_protection_bits();
>>> +               for (i = 0; i < 16; i++) {
>>> +                       reg = hub->vm_context0_cntl + i;
>>> +                       tmp = RREG32(reg);
>>> +                       tmp |= bits;
>>> +                       WREG32(reg, tmp);
>>> +               }
>>> +               break;
>>> +       default:
>>> +               break;
>>> +       }
>>> +
>>> +       return 0;
>>> +}
>>> +
>>> +static int gmc_v9_0_process_interrupt(struct amdgpu_device *adev,
>>> +                               struct amdgpu_irq_src *source,
>>> +                               struct amdgpu_iv_entry *entry)
>>> +{
>>> +       struct amdgpu_vmhub *gfxhub = &adev->vmhub[AMDGPU_GFXHUB];
>>> +       struct amdgpu_vmhub *mmhub = &adev->vmhub[AMDGPU_MMHUB];
>>> +       uint32_t status;
>>> +       u64 addr;
>>> +
>>> +       addr = (u64)entry->src_data[0] << 12;
>>> +       addr |= ((u64)entry->src_data[1] & 0xf) << 44;
>>> +
>>> +       if (entry->vm_id_src) {
>>> +               status = RREG32(mmhub->vm_l2_pro_fault_status);
>>> +               WREG32_P(mmhub->vm_l2_pro_fault_cntl, 1, ~1);
>>> +       } else {
>>> +               status = RREG32(gfxhub->vm_l2_pro_fault_status);
>>> +               WREG32_P(gfxhub->vm_l2_pro_fault_cntl, 1, ~1);
>>> +       }
>>> +
>>> +       DRM_ERROR("[%s] VMC page fault (src_id:%u ring:%u vm_id:%u pas_id:%u) "
>>> +                 "at page 0x%016llx from %d\n"
>>> +                 "VM_L2_PROTECTION_FAULT_STATUS:0x%08X\n",
>>> +                 entry->vm_id_src ? "mmhub" : "gfxhub",
>>> +                 entry->src_id, entry->ring_id, entry->vm_id, entry->pas_id,
>>> +                 addr, entry->client_id, status);
>>> +
>>> +       return 0;
>>> +}
>>> +
>>> +static const struct amdgpu_irq_src_funcs gmc_v9_0_irq_funcs = {
>>> +       .set = gmc_v9_0_vm_fault_interrupt_state,
>>> +       .process = gmc_v9_0_process_interrupt,
>>> +};
>>> +
>>> +static void gmc_v9_0_set_irq_funcs(struct amdgpu_device *adev)
>>> +{
>>> +       adev->mc.vm_fault.num_types = 1;
>>> +       adev->mc.vm_fault.funcs = &gmc_v9_0_irq_funcs;
>>> +}
>>> +
>>> +/*
>>> + * GART
>>> + * VMID 0 is the physical GPU addresses as used by the kernel.
>>> + * VMIDs 1-15 are used for userspace clients and are handled
>>> + * by the amdgpu vm/hsa code.
>>> + */
>>> +
>>> +/**
>>> + * gmc_v9_0_gart_flush_gpu_tlb - gart tlb flush callback
>>> + *
>>> + * @adev: amdgpu_device pointer
>>> + * @vmid: vm instance to flush
>>> + *
>>> + * Flush the TLB for the requested page table.
>>> + */
>>> +static void gmc_v9_0_gart_flush_gpu_tlb(struct amdgpu_device *adev,
>>> +                                       uint32_t vmid)
>>> +{
>>> +       /* Use register 17 for GART */
>>> +       const unsigned eng = 17;
>>> +       unsigned i, j;
>>> +
>>> +       /* flush hdp cache */
>>> +       nbio_v6_1_hdp_flush(adev);
>>> +
>>> +       spin_lock(&adev->mc.invalidate_lock);
>>> +
>>> +       for (i = 0; i < AMDGPU_MAX_VMHUBS; ++i) {
>>> +               struct amdgpu_vmhub *hub = &adev->vmhub[i];
>>> +               u32 tmp = hub->get_invalidate_req(vmid);
>>> +
>>> +               WREG32(hub->vm_inv_eng0_req + eng, tmp);
>>> +
>>> +               /* Busy wait for ACK.*/
>>> +               for (j = 0; j < 100; j++) {
>>> +                       tmp = RREG32(hub->vm_inv_eng0_ack + eng);
>>> +                       tmp &= 1 << vmid;
>>> +                       if (tmp)
>>> +                               break;
>>> +                       cpu_relax();
>>> +               }
>>> +               if (j < 100)
>>> +                       continue;
>>> +
>>> +               /* Wait for ACK with a delay.*/
>>> +               for (j = 0; j < adev->usec_timeout; j++) {
>>> +                       tmp = RREG32(hub->vm_inv_eng0_ack + eng);
>>> +                       tmp &= 1 << vmid;
>>> +                       if (tmp)
>>> +                               break;
>>> +                       udelay(1);
>>> +               }
>>> +               if (j < adev->usec_timeout)
>>> +                       continue;
>>> +
>>> +               DRM_ERROR("Timeout waiting for VM flush ACK!\n");
>>> +       }
>>> +
>>> +       spin_unlock(&adev->mc.invalidate_lock);
>>> +}
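The flush loop above polls the VM_INVALIDATE_ENG0_ACK bit twice: first a short cpu_relax() busy-wait, then a slower udelay(1) loop up to the usec timeout. A minimal model of that two-phase poll (`done()` and the try counts are stand-ins for the register read and the kernel's limits):

```c
#include <assert.h>
#include <stdbool.h>

/* Two-phase completion poll: a fast busy-wait phase followed by a
 * slower delayed phase, returning 0 on completion or -1 on timeout. */
static int two_phase_wait(bool (*done)(void *ctx), void *ctx,
			  int fast_tries, int slow_tries)
{
	int j;

	for (j = 0; j < fast_tries; j++)	/* cpu_relax() phase */
		if (done(ctx))
			return 0;
	for (j = 0; j < slow_tries; j++)	/* udelay(1) phase */
		if (done(ctx))
			return 0;
	return -1;				/* timeout */
}

/* Toy completion source: reports done after a fixed number of polls. */
static bool done_after(void *ctx)
{
	int *remaining = ctx;

	return --(*remaining) <= 0;
}

/* Run one wait against a source that completes after polls_needed. */
static int demo(int polls_needed, int fast, int slow)
{
	int n = polls_needed;

	return two_phase_wait(done_after, &n, fast, slow);
}
```

The fast phase avoids paying the delay cost for flushes that complete quickly; only stragglers fall through to the udelay loop.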
>>> +
>>> +/**
>>> + * gmc_v9_0_gart_set_pte_pde - update the page tables using MMIO
>>> + *
>>> + * @adev: amdgpu_device pointer
>>> + * @cpu_pt_addr: cpu address of the page table
>>> + * @gpu_page_idx: entry in the page table to update
>>> + * @addr: dst addr to write into pte/pde
>>> + * @flags: access flags
>>> + *
>>> + * Update the page tables using the CPU.
>>> + */
>>> +static int gmc_v9_0_gart_set_pte_pde(struct amdgpu_device *adev,
>>> +                                       void *cpu_pt_addr,
>>> +                                       uint32_t gpu_page_idx,
>>> +                                       uint64_t addr,
>>> +                                       uint64_t flags)
>>> +{
>>> +       void __iomem *ptr = (void *)cpu_pt_addr;
>>> +       uint64_t value;
>>> +
>>> +       /*
>>> +        * PTE format on VEGA 10:
>>> +        * 63:59 reserved
>>> +        * 58:57 mtype
>>> +        * 56 F
>>> +        * 55 L
>>> +        * 54 P
>>> +        * 53 SW
>>> +        * 52 T
>>> +        * 50:48 reserved
>>> +        * 47:12 4k physical page base address
>>> +        * 11:7 fragment
>>> +        * 6 write
>>> +        * 5 read
>>> +        * 4 exe
>>> +        * 3 Z
>>> +        * 2 snooped
>>> +        * 1 system
>>> +        * 0 valid
>>> +        *
>>> +        * PDE format on VEGA 10:
>>> +        * 63:59 block fragment size
>>> +        * 58:55 reserved
>>> +        * 54 P
>>> +        * 53:48 reserved
>>> +        * 47:6 physical base address of PD or PTE
>>> +        * 5:3 reserved
>>> +        * 2 C
>>> +        * 1 system
>>> +        * 0 valid
>>> +        */
>>> +
>>> +       /*
>>> +        * The following is for PTE only. GART does not have PDEs.
>>> +        */
>>> +       value = addr & 0x0000FFFFFFFFF000ULL;
>>> +       value |= flags;
>>> +       writeq(value, ptr + (gpu_page_idx * 8));
>>> +       return 0;
>>> +}
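The packing in gmc_v9_0_gart_set_pte_pde() can be sketched standalone: keep bits 47:12 of the physical address and OR in the access flags. The PTE_* values below mirror the bit layout documented in the comment above (valid=0, read=5, write=6) but are illustrative stand-ins, not the kernel's AMDGPU_PTE_* flags:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative flag bits matching the documented VEGA 10 PTE layout */
#define PTE_VALID	(1ULL << 0)
#define PTE_READ	(1ULL << 5)
#define PTE_WRITE	(1ULL << 6)

/* Pack a GART PTE: 4K page base in bits 47:12, flags in the low bits */
static uint64_t make_gart_pte(uint64_t addr, uint64_t flags)
{
	return (addr & 0x0000FFFFFFFFF000ULL) | flags;
}
```

Note the mask both drops the sub-page offset (bits 11:0) and clears anything above bit 47, so a caller can pass an unaligned address safely.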
>>> +
>>> +static uint64_t gmc_v9_0_get_vm_pte_flags(struct amdgpu_device *adev,
>>> +                                         uint32_t flags)
>>> +{
>>> +       uint64_t pte_flag = 0;
>>> +
>>> +       if (flags & AMDGPU_VM_PAGE_EXECUTABLE)
>>> +               pte_flag |= AMDGPU_PTE_EXECUTABLE;
>>> +       if (flags & AMDGPU_VM_PAGE_READABLE)
>>> +               pte_flag |= AMDGPU_PTE_READABLE;
>>> +       if (flags & AMDGPU_VM_PAGE_WRITEABLE)
>>> +               pte_flag |= AMDGPU_PTE_WRITEABLE;
>>> +
>>> +       switch (flags & AMDGPU_VM_MTYPE_MASK) {
>>> +       case AMDGPU_VM_MTYPE_DEFAULT:
>>> +       case AMDGPU_VM_MTYPE_NC:
>>> +               pte_flag |= AMDGPU_PTE_MTYPE(MTYPE_NC);
>>> +               break;
>>> +       case AMDGPU_VM_MTYPE_WC:
>>> +               pte_flag |= AMDGPU_PTE_MTYPE(MTYPE_WC);
>>> +               break;
>>> +       case AMDGPU_VM_MTYPE_CC:
>>> +               pte_flag |= AMDGPU_PTE_MTYPE(MTYPE_CC);
>>> +               break;
>>> +       case AMDGPU_VM_MTYPE_UC:
>>> +               pte_flag |= AMDGPU_PTE_MTYPE(MTYPE_UC);
>>> +               break;
>>> +       default:
>>> +               pte_flag |= AMDGPU_PTE_MTYPE(MTYPE_NC);
>>> +               break;
>>> +       }
>>> +
>>> +       if (flags & AMDGPU_VM_PAGE_PRT)
>>> +               pte_flag |= AMDGPU_PTE_PRT;
>>> +
>>> +       return pte_flag;
>>> +}
>>> +
>>> +static const struct amdgpu_gart_funcs gmc_v9_0_gart_funcs = {
>>> +       .flush_gpu_tlb = gmc_v9_0_gart_flush_gpu_tlb,
>>> +       .set_pte_pde = gmc_v9_0_gart_set_pte_pde,
>>> +       .get_vm_pte_flags = gmc_v9_0_get_vm_pte_flags
>>> +};
>>> +
>>> +static void gmc_v9_0_set_gart_funcs(struct amdgpu_device *adev)
>>> +{
>>> +       if (adev->gart.gart_funcs == NULL)
>>> +               adev->gart.gart_funcs = &gmc_v9_0_gart_funcs;
>>> +}
>>> +
>>> +static u64 gmc_v9_0_adjust_mc_addr(struct amdgpu_device *adev, u64 mc_addr)
>>> +{
>>> +       return adev->vm_manager.vram_base_offset + mc_addr - adev->mc.vram_start;
>>> +}
>>> +
>>> +static const struct amdgpu_mc_funcs gmc_v9_0_mc_funcs = {
>>> +       .adjust_mc_addr = gmc_v9_0_adjust_mc_addr,
>>> +};
>>> +
>>> +static void gmc_v9_0_set_mc_funcs(struct amdgpu_device *adev)
>>> +{
>>> +       adev->mc.mc_funcs = &gmc_v9_0_mc_funcs;
>>> +}
>>> +
>>> +static int gmc_v9_0_early_init(void *handle)
>>> +{
>>> +       struct amdgpu_device *adev = (struct amdgpu_device *)handle;
>>> +
>>> +       gmc_v9_0_set_gart_funcs(adev);
>>> +       gmc_v9_0_set_mc_funcs(adev);
>>> +       gmc_v9_0_set_irq_funcs(adev);
>>> +
>>> +       return 0;
>>> +}
>>> +
>>> +static int gmc_v9_0_late_init(void *handle)
>>> +{
>>> +       struct amdgpu_device *adev = (struct amdgpu_device *)handle;
>>> +       return amdgpu_irq_get(adev, &adev->mc.vm_fault, 0);
>>> +}
>>> +
>>> +static void gmc_v9_0_vram_gtt_location(struct amdgpu_device *adev,
>>> +                                       struct amdgpu_mc *mc)
>>> +{
>>> +       u64 base = mmhub_v1_0_get_fb_location(adev);
>>> +       amdgpu_vram_location(adev, &adev->mc, base);
>>> +       adev->mc.gtt_base_align = 0;
>>> +       amdgpu_gtt_location(adev, mc);
>>> +}
>>> +
>>> +/**
>>> + * gmc_v9_0_mc_init - initialize the memory controller driver params
>>> + *
>>> + * @adev: amdgpu_device pointer
>>> + *
>>> + * Look up the amount of vram, vram width, and decide how to place
>>> + * vram and gart within the GPU's physical address space.
>>> + * Returns 0 for success.
>>> + */
>>> +static int gmc_v9_0_mc_init(struct amdgpu_device *adev)
>>> +{
>>> +       u32 tmp;
>>> +       int chansize, numchan;
>>> +
>>> +       /* hbm memory channel size */
>>> +       chansize = 128;
>>> +
>>> +       tmp = RREG32(SOC15_REG_OFFSET(DF, 0, mmDF_CS_AON0_DramBaseAddress0));
>>> +       tmp &= DF_CS_AON0_DramBaseAddress0__IntLvNumChan_MASK;
>>> +       tmp >>= DF_CS_AON0_DramBaseAddress0__IntLvNumChan__SHIFT;
>>> +       switch (tmp) {
>>> +       case 0:
>>> +       default:
>>> +               numchan = 1;
>>> +               break;
>>> +       case 1:
>>> +               numchan = 2;
>>> +               break;
>>> +       case 2:
>>> +               numchan = 0;
>>> +               break;
>>> +       case 3:
>>> +               numchan = 4;
>>> +               break;
>>> +       case 4:
>>> +               numchan = 0;
>>> +               break;
>>> +       case 5:
>>> +               numchan = 8;
>>> +               break;
>>> +       case 6:
>>> +               numchan = 0;
>>> +               break;
>>> +       case 7:
>>> +               numchan = 16;
>>> +               break;
>>> +       case 8:
>>> +               numchan = 2;
>>> +               break;
>>> +       }
>>> +       adev->mc.vram_width = numchan * chansize;
>>> +
>>> +       /* Could the aperture size report 0 ? */
>>> +       adev->mc.aper_base = pci_resource_start(adev->pdev, 0);
>>> +       adev->mc.aper_size = pci_resource_len(adev->pdev, 0);
>>> +       /* size is reported in MB */
>>> +       adev->mc.mc_vram_size =
>>> +               nbio_v6_1_get_memsize(adev) * 1024ULL * 1024ULL;
>>> +       adev->mc.real_vram_size = adev->mc.mc_vram_size;
>>> +       adev->mc.visible_vram_size = adev->mc.aper_size;
>>> +
>>> +       /* In case the PCI BAR is larger than the actual amount of vram */
>>> +       if (adev->mc.visible_vram_size > adev->mc.real_vram_size)
>>> +               adev->mc.visible_vram_size = adev->mc.real_vram_size;
>>> +
>>> +       /* unless the user has overridden it, set the gart
>>> +        * size equal to 1024 MB or the vram size, whichever is larger.
>>> +        */
>>> +       if (amdgpu_gart_size == -1)
>>> +               adev->mc.gtt_size = max((1024ULL << 20), adev->mc.mc_vram_size);
>>> +       else
>>> +               adev->mc.gtt_size = (uint64_t)amdgpu_gart_size << 20;
>>> +
>>> +       gmc_v9_0_vram_gtt_location(adev, &adev->mc);
>>> +
>>> +       return 0;
>>> +}
>>> +
>>> +static int gmc_v9_0_gart_init(struct amdgpu_device *adev)
>>> +{
>>> +       int r;
>>> +
>>> +       if (adev->gart.robj) {
>>> +               WARN(1, "VEGA10 PCIE GART already initialized\n");
>>> +               return 0;
>>> +       }
>>> +       /* Initialize common gart structure */
>>> +       r = amdgpu_gart_init(adev);
>>> +       if (r)
>>> +               return r;
>>> +       adev->gart.table_size = adev->gart.num_gpu_pages * 8;
>>> +       adev->gart.gart_pte_flags = AMDGPU_PTE_MTYPE(MTYPE_UC) |
>>> +                                AMDGPU_PTE_EXECUTABLE;
>>> +       return amdgpu_gart_table_vram_alloc(adev);
>>> +}
>>> +
>>> +/*
>>> + * vm
>>> + * VMID 0 is the physical GPU addresses as used by the kernel.
>>> + * VMIDs 1-15 are used for userspace clients and are handled
>>> + * by the amdgpu vm/hsa code.
>>> + */
>>> +/**
>>> + * gmc_v9_0_vm_init - vm init callback
>>> + *
>>> + * @adev: amdgpu_device pointer
>>> + *
>>> + * Inits vega10-specific vm parameters (number of VMs, base of vram for
>>> + * VMIDs 1-15).
>>> + * Returns 0 for success.
>>> + */
>>> +static int gmc_v9_0_vm_init(struct amdgpu_device *adev)
>>> +{
>>> +       /*
>>> +        * number of VMs
>>> +        * VMID 0 is reserved for System
>>> +        * amdgpu graphics/compute will use VMIDs 1-7
>>> +        * amdkfd will use VMIDs 8-15
>>> +        */
>>> +       adev->vm_manager.num_ids = AMDGPU_NUM_OF_VMIDS;
>>> +       amdgpu_vm_manager_init(adev);
>>> +
>>> +       /* base offset of vram pages */
>>> +       /*XXX This value is not zero for APU*/
>>> +       adev->vm_manager.vram_base_offset = 0;
>>> +
>>> +       return 0;
>>> +}
>>> +
>>> +/**
>>> + * gmc_v9_0_vm_fini - vm fini callback
>>> + *
>>> + * @adev: amdgpu_device pointer
>>> + *
>>> + * Tear down any asic specific VM setup.
>>> + */
>>> +static void gmc_v9_0_vm_fini(struct amdgpu_device *adev)
>>> +{
>>> +       return;
>>> +}
>>> +
>>> +static int gmc_v9_0_sw_init(void *handle)
>>> +{
>>> +       int r;
>>> +       int dma_bits;
>>> +       struct amdgpu_device *adev = (struct amdgpu_device *)handle;
>>> +
>>> +       spin_lock_init(&adev->mc.invalidate_lock);
>>> +
>>> +       if (adev->flags & AMD_IS_APU) {
>>> +               adev->mc.vram_type = AMDGPU_VRAM_TYPE_UNKNOWN;
>>> +       } else {
>>> +               /* XXX Don't know how to get VRAM type yet. */
>>> +               adev->mc.vram_type = AMDGPU_VRAM_TYPE_HBM;
>>> +       }
>>> +
>>> +       /* This interrupt is VMC page fault.*/
>>> +       r = amdgpu_irq_add_id(adev, AMDGPU_IH_CLIENTID_VMC, 0,
>>> +                               &adev->mc.vm_fault);
>>> +
>>> +       if (r)
>>> +               return r;
>>> +
>>> +       /* Adjust VM size here.
>>> +        * Currently defaults to 64GB ((16 << 20) 4k pages).
>>> +        * Max GPUVM size is 48 bits.
>>> +        */
>>> +       adev->vm_manager.max_pfn = amdgpu_vm_size << 18;
>>> +
>>> +       /* Set the internal MC address mask
>>> +        * This is the max address of the GPU's
>>> +        * internal address space.
>>> +        */
>>> +       adev->mc.mc_mask = 0xffffffffffffULL; /* 48 bit MC */
>>> +
>>> +       /* set DMA mask + need_dma32 flags.
>>> +        * PCIE - can handle 44-bits.
>>> +        * IGP - can handle 44-bits
>>> +        * PCI - dma32 for legacy pci gart, 44 bits on vega10
>>> +        */
>>> +       adev->need_dma32 = false;
>>> +       dma_bits = adev->need_dma32 ? 32 : 44;
>>> +       r = pci_set_dma_mask(adev->pdev, DMA_BIT_MASK(dma_bits));
>>> +       if (r) {
>>> +               adev->need_dma32 = true;
>>> +               dma_bits = 32;
>>> +               printk(KERN_WARNING "amdgpu: No suitable DMA available.\n");
>>> +       }
>>> +       r = pci_set_consistent_dma_mask(adev->pdev, DMA_BIT_MASK(dma_bits));
>>> +       if (r) {
>>> +               pci_set_consistent_dma_mask(adev->pdev, DMA_BIT_MASK(32));
>>> +               printk(KERN_WARNING "amdgpu: No coherent DMA available.\n");
>>> +       }
>>> +
>>> +       r = gmc_v9_0_mc_init(adev);
>>> +       if (r)
>>> +               return r;
>>> +
>>> +       /* Memory manager */
>>> +       r = amdgpu_bo_init(adev);
>>> +       if (r)
>>> +               return r;
>>> +
>>> +       r = gmc_v9_0_gart_init(adev);
>>> +       if (r)
>>> +               return r;
>>> +
>>> +       if (!adev->vm_manager.enabled) {
>>> +               r = gmc_v9_0_vm_init(adev);
>>> +               if (r) {
>>> +                       dev_err(adev->dev, "vm manager initialization failed (%d).\n", r);
>>> +                       return r;
>>> +               }
>>> +               adev->vm_manager.enabled = true;
>>> +       }
>>> +       return r;
>>> +}
>>> +
>>> +/**
>>> + * gmc_v9_0_gart_fini - gart fini callback
>>> + *
>>> + * @adev: amdgpu_device pointer
>>> + *
>>> + * Tears down the driver GART/VM setup (vega10).
>>> + */
>>> +static void gmc_v9_0_gart_fini(struct amdgpu_device *adev)
>>> +{
>>> +       amdgpu_gart_table_vram_free(adev);
>>> +       amdgpu_gart_fini(adev);
>>> +}
>>> +
>>> +static int gmc_v9_0_sw_fini(void *handle)
>>> +{
>>> +       struct amdgpu_device *adev = (struct amdgpu_device *)handle;
>>> +
>>> +       if (adev->vm_manager.enabled) {
>>> +               amdgpu_vm_manager_fini(adev);
>>> +               gmc_v9_0_vm_fini(adev);
>>> +               adev->vm_manager.enabled = false;
>>> +       }
>>> +       gmc_v9_0_gart_fini(adev);
>>> +       amdgpu_gem_force_release(adev);
>>> +       amdgpu_bo_fini(adev);
>>> +
>>> +       return 0;
>>> +}
>>> +
>>> +static void gmc_v9_0_init_golden_registers(struct amdgpu_device *adev)
>>> +{
>>> +       switch (adev->asic_type) {
>>> +       case CHIP_VEGA10:
>>> +               break;
>>> +       default:
>>> +               break;
>>> +       }
>>> +}
>>> +
>>> +/**
>>> + * gmc_v9_0_gart_enable - gart enable
>>> + *
>>> + * @adev: amdgpu_device pointer
>>> + */
>>> +static int gmc_v9_0_gart_enable(struct amdgpu_device *adev)
>>> +{
>>> +       int r;
>>> +       bool value;
>>> +       u32 tmp;
>>> +
>>> +       amdgpu_program_register_sequence(adev,
>>> +               golden_settings_vega10_hdp,
>>> +               (const u32)ARRAY_SIZE(golden_settings_vega10_hdp));
>>> +
>>> +       if (adev->gart.robj == NULL) {
>>> +               dev_err(adev->dev, "No VRAM object for PCIE GART.\n");
>>> +               return -EINVAL;
>>> +       }
>>> +       r = amdgpu_gart_table_vram_pin(adev);
>>> +       if (r)
>>> +               return r;
>>> +
>>> +       /* After HDP is initialized, flush HDP.*/
>>> +       nbio_v6_1_hdp_flush(adev);
>>> +
>>> +       r = gfxhub_v1_0_gart_enable(adev);
>>> +       if (r)
>>> +               return r;
>>> +
>>> +       r = mmhub_v1_0_gart_enable(adev);
>>> +       if (r)
>>> +               return r;
>>> +
>>> +       tmp = RREG32(SOC15_REG_OFFSET(HDP, 0, mmHDP_MISC_CNTL));
>>> +       tmp |= HDP_MISC_CNTL__FLUSH_INVALIDATE_CACHE_MASK;
>>> +       WREG32(SOC15_REG_OFFSET(HDP, 0, mmHDP_MISC_CNTL), tmp);
>>> +
>>> +       tmp = RREG32(SOC15_REG_OFFSET(HDP, 0, mmHDP_HOST_PATH_CNTL));
>>> +       WREG32(SOC15_REG_OFFSET(HDP, 0, mmHDP_HOST_PATH_CNTL), tmp);
>>> +
>>> +
>>> +       if (amdgpu_vm_fault_stop == AMDGPU_VM_FAULT_STOP_ALWAYS)
>>> +               value = false;
>>> +       else
>>> +               value = true;
>>> +
>>> +       gfxhub_v1_0_set_fault_enable_default(adev, value);
>>> +       mmhub_v1_0_set_fault_enable_default(adev, value);
>>> +
>>> +       gmc_v9_0_gart_flush_gpu_tlb(adev, 0);
>>> +
>>> +       DRM_INFO("PCIE GART of %uM enabled (table at 0x%016llX).\n",
>>> +                (unsigned)(adev->mc.gtt_size >> 20),
>>> +                (unsigned long long)adev->gart.table_addr);
>>> +       adev->gart.ready = true;
>>> +       return 0;
>>> +}
>>> +
>>> +static int gmc_v9_0_hw_init(void *handle)
>>> +{
>>> +       int r;
>>> +       struct amdgpu_device *adev = (struct amdgpu_device *)handle;
>>> +
>>> +       /* The sequence of these two function calls matters.*/
>>> +       gmc_v9_0_init_golden_registers(adev);
>>> +
>>> +       r = gmc_v9_0_gart_enable(adev);
>>> +
>>> +       return r;
>>> +}
>>> +
>>> +/**
>>> + * gmc_v9_0_gart_disable - gart disable
>>> + *
>>> + * @adev: amdgpu_device pointer
>>> + *
>>> + * This disables all VM page tables.
>>> + */
>>> +static void gmc_v9_0_gart_disable(struct amdgpu_device *adev)
>>> +{
>>> +       gfxhub_v1_0_gart_disable(adev);
>>> +       mmhub_v1_0_gart_disable(adev);
>>> +       amdgpu_gart_table_vram_unpin(adev);
>>> +}
>>> +
>>> +static int gmc_v9_0_hw_fini(void *handle)
>>> +{
>>> +       struct amdgpu_device *adev = (struct amdgpu_device *)handle;
>>> +
>>> +       amdgpu_irq_put(adev, &adev->mc.vm_fault, 0);
>>> +       gmc_v9_0_gart_disable(adev);
>>> +
>>> +       return 0;
>>> +}
>>> +
>>> +static int gmc_v9_0_suspend(void *handle)
>>> +{
>>> +       struct amdgpu_device *adev = (struct amdgpu_device *)handle;
>>> +
>>> +       if (adev->vm_manager.enabled) {
>>> +               gmc_v9_0_vm_fini(adev);
>>> +               adev->vm_manager.enabled = false;
>>> +       }
>>> +       gmc_v9_0_hw_fini(adev);
>>> +
>>> +       return 0;
>>> +}
>>> +
>>> +static int gmc_v9_0_resume(void *handle)
>>> +{
>>> +       int r;
>>> +       struct amdgpu_device *adev = (struct amdgpu_device *)handle;
>>> +
>>> +       r = gmc_v9_0_hw_init(adev);
>>> +       if (r)
>>> +               return r;
>>> +
>>> +       if (!adev->vm_manager.enabled) {
>>> +               r = gmc_v9_0_vm_init(adev);
>>> +               if (r) {
>>> +                       dev_err(adev->dev,
>>> +                               "vm manager initialization failed (%d).\n", r);
>>> +                       return r;
>>> +               }
>>> +               adev->vm_manager.enabled = true;
>>> +       }
>>> +
>>> +       return r;
>>> +}
>>> +
>>> +static bool gmc_v9_0_is_idle(void *handle)
>>> +{
>>> +       /* MC is always ready in GMC v9.*/
>>> +       return true;
>>> +}
>>> +
>>> +static int gmc_v9_0_wait_for_idle(void *handle)
>>> +{
>>> +       /* There is no need to wait for MC idle in GMC v9.*/
>>> +       return 0;
>>> +}
>>> +
>>> +static int gmc_v9_0_soft_reset(void *handle)
>>> +{
>>> +       /* XXX for emulation.*/
>>> +       return 0;
>>> +}
>>> +
>>> +static int gmc_v9_0_set_clockgating_state(void *handle,
>>> +                                       enum amd_clockgating_state state)
>>> +{
>>> +       return 0;
>>> +}
>>> +
>>> +static int gmc_v9_0_set_powergating_state(void *handle,
>>> +                                       enum amd_powergating_state state)
>>> +{
>>> +       return 0;
>>> +}
>>> +
>>> +const struct amd_ip_funcs gmc_v9_0_ip_funcs = {
>>> +       .name = "gmc_v9_0",
>>> +       .early_init = gmc_v9_0_early_init,
>>> +       .late_init = gmc_v9_0_late_init,
>>> +       .sw_init = gmc_v9_0_sw_init,
>>> +       .sw_fini = gmc_v9_0_sw_fini,
>>> +       .hw_init = gmc_v9_0_hw_init,
>>> +       .hw_fini = gmc_v9_0_hw_fini,
>>> +       .suspend = gmc_v9_0_suspend,
>>> +       .resume = gmc_v9_0_resume,
>>> +       .is_idle = gmc_v9_0_is_idle,
>>> +       .wait_for_idle = gmc_v9_0_wait_for_idle,
>>> +       .soft_reset = gmc_v9_0_soft_reset,
>>> +       .set_clockgating_state = gmc_v9_0_set_clockgating_state,
>>> +       .set_powergating_state = gmc_v9_0_set_powergating_state,
>>> +};
>>> +
>>> +const struct amdgpu_ip_block_version gmc_v9_0_ip_block =
>>> +{
>>> +       .type = AMD_IP_BLOCK_TYPE_GMC,
>>> +       .major = 9,
>>> +       .minor = 0,
>>> +       .rev = 0,
>>> +       .funcs = &gmc_v9_0_ip_funcs,
>>> +};
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.h b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.h
>>> new file mode 100644
>>> index 0000000..b030ca5
>>> --- /dev/null
>>> +++ b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.h
>>> @@ -0,0 +1,30 @@
>>> +/*
>>> + * Copyright 2016 Advanced Micro Devices, Inc.
>>> + *
>>> + * Permission is hereby granted, free of charge, to any person obtaining a
>>> + * copy of this software and associated documentation files (the "Software"),
>>> + * to deal in the Software without restriction, including without limitation
>>> + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
>>> + * and/or sell copies of the Software, and to permit persons to whom the
>>> + * Software is furnished to do so, subject to the following conditions:
>>> + *
>>> + * The above copyright notice and this permission notice shall be included in
>>> + * all copies or substantial portions of the Software.
>>> + *
>>> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
>>> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
>>> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
>>> + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
>>> + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
>>> + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
>>> + * OTHER DEALINGS IN THE SOFTWARE.
>>> + *
>>> + */
>>> +
>>> +#ifndef __GMC_V9_0_H__
>>> +#define __GMC_V9_0_H__
>>> +
>>> +extern const struct amd_ip_funcs gmc_v9_0_ip_funcs;
>>> +extern const struct amdgpu_ip_block_version gmc_v9_0_ip_block;
>>> +
>>> +#endif
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c b/drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c
>>> new file mode 100644
>>> index 0000000..b1e0e6b
>>> --- /dev/null
>>> +++ b/drivers/gpu/drm/amd/amdgpu/mmhub_v1_0.c
>>> @@ -0,0 +1,585 @@
>>> +/*
>>> + * Copyright 2016 Advanced Micro Devices, Inc.
>>> + *
>>> + * Permission is hereby granted, free of charge, to any person obtaining a
>>> + * copy of this software and associated documentation files (the "Software"),
>>> + * to deal in the Software without restriction, including without limitation
>>> + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
>>> + * and/or sell copies of the Software, and to permit persons to whom the
>>> + * Software is furnished to do so, subject to the following conditions:
>>> + *
>>> + * The above copyright notice and this permission notice shall be included in
>>> + * all copies or substantial portions of the Software.
>>> + *
>>> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
>>> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
>>> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
>>> + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
>>> + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
>>> + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
>>> + * OTHER DEALINGS IN THE SOFTWARE.
>>> + *
>>> + */
>>> +#include "amdgpu.h"
>>> +#include "mmhub_v1_0.h"
>>> +
>>> +#include "vega10/soc15ip.h"
>>> +#include "vega10/MMHUB/mmhub_1_0_offset.h"
>>> +#include "vega10/MMHUB/mmhub_1_0_sh_mask.h"
>>> +#include "vega10/MMHUB/mmhub_1_0_default.h"
>>> +#include "vega10/ATHUB/athub_1_0_offset.h"
>>> +#include "vega10/ATHUB/athub_1_0_sh_mask.h"
>>> +#include "vega10/ATHUB/athub_1_0_default.h"
>>> +#include "vega10/vega10_enum.h"
>>> +
>>> +#include "soc15_common.h"
>>> +
>>> +u64 mmhub_v1_0_get_fb_location(struct amdgpu_device *adev)
>>> +{
>>> +       u64 base = RREG32(SOC15_REG_OFFSET(MMHUB, 0, mmMC_VM_FB_LOCATION_BASE));
>>> +
>>> +       base &= MC_VM_FB_LOCATION_BASE__FB_BASE_MASK;
>>> +       base <<= 24;
>>> +
>>> +       return base;
>>> +}
>>> +
>>> +int mmhub_v1_0_gart_enable(struct amdgpu_device *adev)
>>> +{
>>> +       u32 tmp;
>>> +       u64 value;
>>> +       uint64_t addr;
>>> +       u32 i;
>>> +
>>> +       /* Program MC. */
>>> +       /* Update configuration */
_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply	[flat|nested] 101+ messages in thread

* Re: [PATCH 063/100] drm/amd/display: Add DCE12 bios parser support
       [not found]     ` <1490041835-11255-49-git-send-email-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
@ 2017-08-03 21:04       ` Mike Lothian
  0 siblings, 0 replies; 101+ messages in thread
From: Mike Lothian @ 2017-08-03 21:04 UTC (permalink / raw)
  To: Alex Deucher, amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
  Cc: Alex Deucher, Harry Wentland


[-- Attachment #1.1: Type: text/plain, Size: 58524 bytes --]

Hi

I've just tested out the new amd-staging-drm-next branch and noticed the
following warning caused by this patch

drivers/gpu/drm/amd/amdgpu/../display/dc/bios/bios_parser2.c: In function ‘get_embedded_panel_info_v2_1’:
drivers/gpu/drm/amd/amdgpu/../display/dc/bios/bios_parser2.c:1335:3: warning: overflow in implicit constant conversion [-Woverflow]
   lvds->lcd_timing.miscinfo & ATOM_INTERLACE;
   ^~~~

Cheers

Mike

On Mon, 20 Mar 2017 at 20:35 Alex Deucher <alexdeucher-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org> wrote:

> From: Harry Wentland <harry.wentland-5C7GfCeVMHo@public.gmane.org>
>
> Signed-off-by: Harry Wentland <harry.wentland-5C7GfCeVMHo@public.gmane.org>
> Signed-off-by: Alex Deucher <alexander.deucher-5C7GfCeVMHo@public.gmane.org>
> ---
>  drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c | 2085 ++++++++++++++++++++
>  drivers/gpu/drm/amd/display/dc/bios/bios_parser2.h |   33 +
>  .../display/dc/bios/bios_parser_types_internal2.h  |   74 +
>  .../gpu/drm/amd/display/dc/bios/command_table2.c   |  813 ++++++++
>  .../gpu/drm/amd/display/dc/bios/command_table2.h   |  105 +
>  .../amd/display/dc/bios/command_table_helper2.c    |  260 +++
>  .../amd/display/dc/bios/command_table_helper2.h    |   82 +
>  .../dc/bios/dce112/command_table_helper2_dce112.c  |  418 ++++
>  .../dc/bios/dce112/command_table_helper2_dce112.h  |   34 +
>  9 files changed, 3904 insertions(+)
>  create mode 100644 drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c
>  create mode 100644 drivers/gpu/drm/amd/display/dc/bios/bios_parser2.h
>  create mode 100644 drivers/gpu/drm/amd/display/dc/bios/bios_parser_types_internal2.h
>  create mode 100644 drivers/gpu/drm/amd/display/dc/bios/command_table2.c
>  create mode 100644 drivers/gpu/drm/amd/display/dc/bios/command_table2.h
>  create mode 100644 drivers/gpu/drm/amd/display/dc/bios/command_table_helper2.c
>  create mode 100644 drivers/gpu/drm/amd/display/dc/bios/command_table_helper2.h
>  create mode 100644 drivers/gpu/drm/amd/display/dc/bios/dce112/command_table_helper2_dce112.c
>  create mode 100644 drivers/gpu/drm/amd/display/dc/bios/dce112/command_table_helper2_dce112.h
>
> diff --git a/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c b/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c
> new file mode 100644
> index 0000000..f6e77da
> --- /dev/null
> +++ b/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c
> @@ -0,0 +1,2085 @@
> +/*
> + * Copyright 2012-15 Advanced Micro Devices, Inc.
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a
> + * copy of this software and associated documentation files (the "Software"),
> + * to deal in the Software without restriction, including without limitation
> + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
> + * and/or sell copies of the Software, and to permit persons to whom the
> + * Software is furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice shall be included in
> + * all copies or substantial portions of the Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
> + * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
> + * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
> + * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
> + * OTHER DEALINGS IN THE SOFTWARE.
> + *
> + * Authors: AMD
> + *
> + */
> +
> +#include "dm_services.h"
> +
> +#define _BIOS_PARSER_2_
> +
> +#include "ObjectID.h"
> +#include "atomfirmware.h"
> +#include "atomfirmwareid.h"
> +
> +#include "dc_bios_types.h"
> +#include "include/grph_object_ctrl_defs.h"
> +#include "include/bios_parser_interface.h"
> +#include "include/i2caux_interface.h"
> +#include "include/logger_interface.h"
> +
> +#include "command_table2.h"
> +
> +#include "bios_parser_helper.h"
> +#include "command_table_helper2.h"
> +#include "bios_parser2.h"
> +#include "bios_parser_types_internal2.h"
> +#include "bios_parser_interface.h"
> +
> +#define LAST_RECORD_TYPE 0xff
> +
> +
> +struct i2c_id_config_access {
> +       uint8_t bfI2C_LineMux:4;
> +       uint8_t bfHW_EngineID:3;
> +       uint8_t bfHW_Capable:1;
> +       uint8_t ucAccess;
> +};
> +
> +static enum object_type object_type_from_bios_object_id(
> +       uint32_t bios_object_id);
> +
> +static enum object_enum_id enum_id_from_bios_object_id(uint32_t bios_object_id);
> +
> +static struct graphics_object_id object_id_from_bios_object_id(
> +       uint32_t bios_object_id);
> +
> +static uint32_t id_from_bios_object_id(enum object_type type,
> +       uint32_t bios_object_id);
> +
> +static uint32_t gpu_id_from_bios_object_id(uint32_t bios_object_id);
> +
> +static enum encoder_id encoder_id_from_bios_object_id(uint32_t bios_object_id);
> +
> +static enum connector_id connector_id_from_bios_object_id(
> +                                               uint32_t bios_object_id);
> +
> +static enum generic_id generic_id_from_bios_object_id(uint32_t bios_object_id);
> +
> +static enum bp_result get_gpio_i2c_info(struct bios_parser *bp,
> +       struct atom_i2c_record *record,
> +       struct graphics_object_i2c_info *info);
> +
> +static enum bp_result bios_parser_get_firmware_info(
> +       struct dc_bios *dcb,
> +       struct firmware_info *info);
> +
> +static enum bp_result bios_parser_get_encoder_cap_info(
> +       struct dc_bios *dcb,
> +       struct graphics_object_id object_id,
> +       struct bp_encoder_cap_info *info);
> +
> +static enum bp_result get_firmware_info_v3_1(
> +       struct bios_parser *bp,
> +       struct firmware_info *info);
> +
> +static struct atom_hpd_int_record *get_hpd_record(struct bios_parser *bp,
> +               struct atom_display_object_path_v2 *object);
> +
> +static struct atom_encoder_caps_record *get_encoder_cap_record(
> +       struct bios_parser *bp,
> +       struct atom_display_object_path_v2 *object);
> +
> +#define BIOS_IMAGE_SIZE_OFFSET 2
> +#define BIOS_IMAGE_SIZE_UNIT 512
> +
> +#define DATA_TABLES(table) (bp->master_data_tbl->listOfdatatables.table)
> +
> +
> +static void destruct(struct bios_parser *bp)
> +{
> +       if (bp->base.bios_local_image)
> +               dm_free(bp->base.bios_local_image);
> +
> +       if (bp->base.integrated_info)
> +               dm_free(bp->base.integrated_info);
> +}
> +
> +static void firmware_parser_destroy(struct dc_bios **dcb)
> +{
> +       struct bios_parser *bp = BP_FROM_DCB(*dcb);
> +
> +       if (!bp) {
> +               BREAK_TO_DEBUGGER();
> +               return;
> +       }
> +
> +       destruct(bp);
> +
> +       dm_free(bp);
> +       *dcb = NULL;
> +}
> +
> +static void get_atom_data_table_revision(
> +       struct atom_common_table_header *atom_data_tbl,
> +       struct atom_data_revision *tbl_revision)
> +{
> +       if (!tbl_revision)
> +               return;
> +
> +       /* initialize the revision to 0 which is invalid revision */
> +       tbl_revision->major = 0;
> +       tbl_revision->minor = 0;
> +
> +       if (!atom_data_tbl)
> +               return;
> +
> +       tbl_revision->major =
> +                       (uint32_t) atom_data_tbl->format_revision & 0x3f;
> +       tbl_revision->minor =
> +                       (uint32_t) atom_data_tbl->content_revision & 0x3f;
> +}
> +
> +static struct graphics_object_id object_id_from_bios_object_id(
> +       uint32_t bios_object_id)
> +{
> +       enum object_type type;
> +       enum object_enum_id enum_id;
> +       struct graphics_object_id go_id = { 0 };
> +
> +       type = object_type_from_bios_object_id(bios_object_id);
> +
> +       if (type == OBJECT_TYPE_UNKNOWN)
> +               return go_id;
> +
> +       enum_id = enum_id_from_bios_object_id(bios_object_id);
> +
> +       if (enum_id == ENUM_ID_UNKNOWN)
> +               return go_id;
> +
> +       go_id = dal_graphics_object_id_init(
> +                       id_from_bios_object_id(type, bios_object_id),
> +                                                               enum_id, type);
> +
> +       return go_id;
> +}
> +
> +static enum object_type object_type_from_bios_object_id(uint32_t bios_object_id)
> +{
> +       uint32_t bios_object_type = (bios_object_id & OBJECT_TYPE_MASK)
> +                               >> OBJECT_TYPE_SHIFT;
> +       enum object_type object_type;
> +
> +       switch (bios_object_type) {
> +       case GRAPH_OBJECT_TYPE_GPU:
> +               object_type = OBJECT_TYPE_GPU;
> +               break;
> +       case GRAPH_OBJECT_TYPE_ENCODER:
> +               object_type = OBJECT_TYPE_ENCODER;
> +               break;
> +       case GRAPH_OBJECT_TYPE_CONNECTOR:
> +               object_type = OBJECT_TYPE_CONNECTOR;
> +               break;
> +       case GRAPH_OBJECT_TYPE_ROUTER:
> +               object_type = OBJECT_TYPE_ROUTER;
> +               break;
> +       case GRAPH_OBJECT_TYPE_GENERIC:
> +               object_type = OBJECT_TYPE_GENERIC;
> +               break;
> +       default:
> +               object_type = OBJECT_TYPE_UNKNOWN;
> +               break;
> +       }
> +
> +       return object_type;
> +}
> +
> +static enum object_enum_id enum_id_from_bios_object_id(uint32_t bios_object_id)
> +{
> +       uint32_t bios_enum_id =
> +                       (bios_object_id & ENUM_ID_MASK) >> ENUM_ID_SHIFT;
> +       enum object_enum_id id;
> +
> +       switch (bios_enum_id) {
> +       case GRAPH_OBJECT_ENUM_ID1:
> +               id = ENUM_ID_1;
> +               break;
> +       case GRAPH_OBJECT_ENUM_ID2:
> +               id = ENUM_ID_2;
> +               break;
> +       case GRAPH_OBJECT_ENUM_ID3:
> +               id = ENUM_ID_3;
> +               break;
> +       case GRAPH_OBJECT_ENUM_ID4:
> +               id = ENUM_ID_4;
> +               break;
> +       case GRAPH_OBJECT_ENUM_ID5:
> +               id = ENUM_ID_5;
> +               break;
> +       case GRAPH_OBJECT_ENUM_ID6:
> +               id = ENUM_ID_6;
> +               break;
> +       case GRAPH_OBJECT_ENUM_ID7:
> +               id = ENUM_ID_7;
> +               break;
> +       default:
> +               id = ENUM_ID_UNKNOWN;
> +               break;
> +       }
> +
> +       return id;
> +}
> +
> +static uint32_t id_from_bios_object_id(enum object_type type,
> +       uint32_t bios_object_id)
> +{
> +       switch (type) {
> +       case OBJECT_TYPE_GPU:
> +               return gpu_id_from_bios_object_id(bios_object_id);
> +       case OBJECT_TYPE_ENCODER:
> +               return (uint32_t)encoder_id_from_bios_object_id(bios_object_id);
> +       case OBJECT_TYPE_CONNECTOR:
> +               return (uint32_t)connector_id_from_bios_object_id(
> +                               bios_object_id);
> +       case OBJECT_TYPE_GENERIC:
> +               return generic_id_from_bios_object_id(bios_object_id);
> +       default:
> +               return 0;
> +       }
> +}
> +
> +uint32_t gpu_id_from_bios_object_id(uint32_t bios_object_id)
> +{
> +       return (bios_object_id & OBJECT_ID_MASK) >> OBJECT_ID_SHIFT;
> +}
> +
> +static enum encoder_id encoder_id_from_bios_object_id(uint32_t bios_object_id)
> +{
> +       uint32_t bios_encoder_id = gpu_id_from_bios_object_id(bios_object_id);
> +       enum encoder_id id;
> +
> +       switch (bios_encoder_id) {
> +       case ENCODER_OBJECT_ID_INTERNAL_LVDS:
> +               id = ENCODER_ID_INTERNAL_LVDS;
> +               break;
> +       case ENCODER_OBJECT_ID_INTERNAL_TMDS1:
> +               id = ENCODER_ID_INTERNAL_TMDS1;
> +               break;
> +       case ENCODER_OBJECT_ID_INTERNAL_TMDS2:
> +               id = ENCODER_ID_INTERNAL_TMDS2;
> +               break;
> +       case ENCODER_OBJECT_ID_INTERNAL_DAC1:
> +               id = ENCODER_ID_INTERNAL_DAC1;
> +               break;
> +       case ENCODER_OBJECT_ID_INTERNAL_DAC2:
> +               id = ENCODER_ID_INTERNAL_DAC2;
> +               break;
> +       case ENCODER_OBJECT_ID_INTERNAL_LVTM1:
> +               id = ENCODER_ID_INTERNAL_LVTM1;
> +               break;
> +       case ENCODER_OBJECT_ID_HDMI_INTERNAL:
> +               id = ENCODER_ID_INTERNAL_HDMI;
> +               break;
> +       case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_TMDS1:
> +               id = ENCODER_ID_INTERNAL_KLDSCP_TMDS1;
> +               break;
> +       case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DAC1:
> +               id = ENCODER_ID_INTERNAL_KLDSCP_DAC1;
> +               break;
> +       case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_DAC2:
> +               id = ENCODER_ID_INTERNAL_KLDSCP_DAC2;
> +               break;
> +       case ENCODER_OBJECT_ID_MVPU_FPGA:
> +               id = ENCODER_ID_EXTERNAL_MVPU_FPGA;
> +               break;
> +       case ENCODER_OBJECT_ID_INTERNAL_DDI:
> +               id = ENCODER_ID_INTERNAL_DDI;
> +               break;
> +       case ENCODER_OBJECT_ID_INTERNAL_UNIPHY:
> +               id = ENCODER_ID_INTERNAL_UNIPHY;
> +               break;
> +       case ENCODER_OBJECT_ID_INTERNAL_KLDSCP_LVTMA:
> +               id = ENCODER_ID_INTERNAL_KLDSCP_LVTMA;
> +               break;
> +       case ENCODER_OBJECT_ID_INTERNAL_UNIPHY1:
> +               id = ENCODER_ID_INTERNAL_UNIPHY1;
> +               break;
> +       case ENCODER_OBJECT_ID_INTERNAL_UNIPHY2:
> +               id = ENCODER_ID_INTERNAL_UNIPHY2;
> +               break;
> +       case ENCODER_OBJECT_ID_ALMOND: /* ENCODER_OBJECT_ID_NUTMEG */
> +               id = ENCODER_ID_EXTERNAL_NUTMEG;
> +               break;
> +       case ENCODER_OBJECT_ID_TRAVIS:
> +               id = ENCODER_ID_EXTERNAL_TRAVIS;
> +               break;
> +       case ENCODER_OBJECT_ID_INTERNAL_UNIPHY3:
> +               id = ENCODER_ID_INTERNAL_UNIPHY3;
> +               break;
> +       default:
> +               id = ENCODER_ID_UNKNOWN;
> +               ASSERT(0);
> +               break;
> +       }
> +
> +       return id;
> +}
> +
> +static enum connector_id connector_id_from_bios_object_id(
> +       uint32_t bios_object_id)
> +{
> +       uint32_t bios_connector_id = gpu_id_from_bios_object_id(bios_object_id);
> +
> +       enum connector_id id;
> +
> +       switch (bios_connector_id) {
> +       case CONNECTOR_OBJECT_ID_SINGLE_LINK_DVI_I:
> +               id = CONNECTOR_ID_SINGLE_LINK_DVII;
> +               break;
> +       case CONNECTOR_OBJECT_ID_DUAL_LINK_DVI_I:
> +               id = CONNECTOR_ID_DUAL_LINK_DVII;
> +               break;
> +       case CONNECTOR_OBJECT_ID_SINGLE_LINK_DVI_D:
> +               id = CONNECTOR_ID_SINGLE_LINK_DVID;
> +               break;
> +       case CONNECTOR_OBJECT_ID_DUAL_LINK_DVI_D:
> +               id = CONNECTOR_ID_DUAL_LINK_DVID;
> +               break;
> +       case CONNECTOR_OBJECT_ID_VGA:
> +               id = CONNECTOR_ID_VGA;
> +               break;
> +       case CONNECTOR_OBJECT_ID_HDMI_TYPE_A:
> +               id = CONNECTOR_ID_HDMI_TYPE_A;
> +               break;
> +       case CONNECTOR_OBJECT_ID_LVDS:
> +               id = CONNECTOR_ID_LVDS;
> +               break;
> +       case CONNECTOR_OBJECT_ID_PCIE_CONNECTOR:
> +               id = CONNECTOR_ID_PCIE;
> +               break;
> +       case CONNECTOR_OBJECT_ID_HARDCODE_DVI:
> +               id = CONNECTOR_ID_HARDCODE_DVI;
> +               break;
> +       case CONNECTOR_OBJECT_ID_DISPLAYPORT:
> +               id = CONNECTOR_ID_DISPLAY_PORT;
> +               break;
> +       case CONNECTOR_OBJECT_ID_eDP:
> +               id = CONNECTOR_ID_EDP;
> +               break;
> +       case CONNECTOR_OBJECT_ID_MXM:
> +               id = CONNECTOR_ID_MXM;
> +               break;
> +       default:
> +               id = CONNECTOR_ID_UNKNOWN;
> +               break;
> +       }
> +
> +       return id;
> +}
> +
> +enum generic_id generic_id_from_bios_object_id(uint32_t bios_object_id)
> +{
> +       uint32_t bios_generic_id = gpu_id_from_bios_object_id(bios_object_id);
> +
> +       enum generic_id id;
> +
> +       switch (bios_generic_id) {
> +       case GENERIC_OBJECT_ID_MXM_OPM:
> +               id = GENERIC_ID_MXM_OPM;
> +               break;
> +       case GENERIC_OBJECT_ID_GLSYNC:
> +               id = GENERIC_ID_GLSYNC;
> +               break;
> +       case GENERIC_OBJECT_ID_STEREO_PIN:
> +               id = GENERIC_ID_STEREO;
> +               break;
> +       default:
> +               id = GENERIC_ID_UNKNOWN;
> +               break;
> +       }
> +
> +       return id;
> +}
> +
> +static uint8_t bios_parser_get_connectors_number(struct dc_bios *dcb)
> +{
> +       struct bios_parser *bp = BP_FROM_DCB(dcb);
> +       unsigned int count = 0;
> +       unsigned int i;
> +
> +       for (i = 0; i < bp->object_info_tbl.v1_4->number_of_path; i++) {
> +               if (bp->object_info_tbl.v1_4->display_path[i].encoderobjid != 0
> +                               &&
> +               bp->object_info_tbl.v1_4->display_path[i].display_objid != 0)
> +                       count++;
> +       }
> +       return count;
> +}
> +
> +static struct graphics_object_id bios_parser_get_encoder_id(
> +       struct dc_bios *dcb,
> +       uint32_t i)
> +{
> +       struct bios_parser *bp = BP_FROM_DCB(dcb);
> +       struct graphics_object_id object_id = dal_graphics_object_id_init(
> +               0, ENUM_ID_UNKNOWN, OBJECT_TYPE_UNKNOWN);
> +
> +       if (bp->object_info_tbl.v1_4->number_of_path > i)
> +               object_id = object_id_from_bios_object_id(
> +               bp->object_info_tbl.v1_4->display_path[i].encoderobjid);
> +
> +       return object_id;
> +}
> +
> +static struct graphics_object_id bios_parser_get_connector_id(
> +       struct dc_bios *dcb,
> +       uint8_t i)
> +{
> +       struct bios_parser *bp = BP_FROM_DCB(dcb);
> +       struct graphics_object_id object_id = dal_graphics_object_id_init(
> +               0, ENUM_ID_UNKNOWN, OBJECT_TYPE_UNKNOWN);
> +       struct object_info_table *tbl = &bp->object_info_tbl;
> +       struct display_object_info_table_v1_4 *v1_4 = tbl->v1_4;
> +
> +       if (v1_4->number_of_path > i) {
> +               /* If display_objid is generic object id,  the encoderObj
> +                * /extencoderobjId should be 0
> +                */
> +               if (v1_4->display_path[i].encoderobjid != 0 &&
> +                               v1_4->display_path[i].display_objid != 0)
> +                       object_id = object_id_from_bios_object_id(
> +                                       v1_4->display_path[i].display_objid);
> +       }
> +
> +       return object_id;
> +}
> +
> +
> +/*  TODO:  GetNumberOfSrc*/
> +
> +static uint32_t bios_parser_get_dst_number(struct dc_bios *dcb,
> +       struct graphics_object_id id)
> +{
> +       /* connector has 1 Dest, encoder has 0 Dest */
> +       switch (id.type) {
> +       case OBJECT_TYPE_ENCODER:
> +               return 0;
> +       case OBJECT_TYPE_CONNECTOR:
> +               return 1;
> +       default:
> +               return 0;
> +       }
> +}
> +
> +/*  removed getSrcObjList, getDestObjList*/
> +
> +
> +static enum bp_result bios_parser_get_src_obj(struct dc_bios *dcb,
> +       struct graphics_object_id object_id, uint32_t index,
> +       struct graphics_object_id *src_object_id)
> +{
> +       struct bios_parser *bp = BP_FROM_DCB(dcb);
> +       unsigned int i;
> +       enum bp_result  bp_result = BP_RESULT_BADINPUT;
> +       struct graphics_object_id obj_id = {0};
> +       struct object_info_table *tbl = &bp->object_info_tbl;
> +
> +       if (!src_object_id)
> +               return bp_result;
> +
> +       switch (object_id.type) {
> +       /* Encoder's Source is GPU.  BIOS does not provide GPU, since all
> +        * display paths point to same GPU (0x1100).  Hardcode GPU object type
> +        */
> +       case OBJECT_TYPE_ENCODER:
> +               /* TODO: since num of src must be less than 2.
> +                * If found in for loop, should break.
> +                * DAL2 implementation may be changed too
> +                */
> +               for (i = 0; i < tbl->v1_4->number_of_path; i++) {
> +                       obj_id = object_id_from_bios_object_id(
> +                       tbl->v1_4->display_path[i].encoderobjid);
> +                       if (object_id.type == obj_id.type &&
> +                                       object_id.id == obj_id.id &&
> +                                               object_id.enum_id ==
> +                                                       obj_id.enum_id) {
> +                               *src_object_id =
> +                               object_id_from_bios_object_id(0x1100);
> +                               /* break; */
> +                       }
> +               }
> +               bp_result = BP_RESULT_OK;
> +               break;
> +       case OBJECT_TYPE_CONNECTOR:
> +               for (i = 0; i < tbl->v1_4->number_of_path; i++) {
> +                       obj_id = object_id_from_bios_object_id(
> +                               tbl->v1_4->display_path[i].display_objid);
> +
> +                       if (object_id.type == obj_id.type &&
> +                               object_id.id == obj_id.id &&
> +                                       object_id.enum_id == obj_id.enum_id) {
> +                               *src_object_id =
> +                               object_id_from_bios_object_id(
> +                               tbl->v1_4->display_path[i].encoderobjid);
> +                               /* break; */
> +                       }
> +               }
> +               bp_result = BP_RESULT_OK;
> +               break;
> +       default:
> +               break;
> +       }
> +
> +       return bp_result;
> +}
> +
> +static enum bp_result bios_parser_get_dst_obj(struct dc_bios *dcb,
> +       struct graphics_object_id object_id, uint32_t index,
> +       struct graphics_object_id *dest_object_id)
> +{
> +       struct bios_parser *bp = BP_FROM_DCB(dcb);
> +       unsigned int i;
> +       enum bp_result  bp_result = BP_RESULT_BADINPUT;
> +       struct graphics_object_id obj_id = {0};
> +       struct object_info_table *tbl = &bp->object_info_tbl;
> +
> +       if (!dest_object_id)
> +               return BP_RESULT_BADINPUT;
> +
> +       switch (object_id.type) {
> +       case OBJECT_TYPE_ENCODER:
> +               /* TODO: since num of src must be less than 2.
> +                * If found in for loop, should break.
> +                * DAL2 implementation may be changed too
> +                */
> +               for (i = 0; i < tbl->v1_4->number_of_path; i++) {
> +                       obj_id = object_id_from_bios_object_id(
> +                               tbl->v1_4->display_path[i].encoderobjid);
> +                       if (object_id.type == obj_id.type &&
> +                                       object_id.id == obj_id.id &&
> +                                               object_id.enum_id ==
> +                                                       obj_id.enum_id) {
> +                               *dest_object_id =
> +                                       object_id_from_bios_object_id(
> +                               tbl->v1_4->display_path[i].display_objid);
> +                               /* break; */
> +                       }
> +               }
> +               bp_result = BP_RESULT_OK;
> +               break;
> +       default:
> +               break;
> +       }
> +
> +       return bp_result;
> +}
> +
> +
> +/* from graphics_object_id, find display path which includes the
> object_id */
> +static struct atom_display_object_path_v2 *get_bios_object(
> +       struct bios_parser *bp,
> +       struct graphics_object_id id)
> +{
> +       unsigned int i;
> +       struct graphics_object_id obj_id = {0};
> +
> +       switch (id.type) {
> +       case OBJECT_TYPE_ENCODER:
> +               for (i = 0; i < bp->object_info_tbl.v1_4->number_of_path; i++) {
> +                       obj_id = object_id_from_bios_object_id(
> +                       bp->object_info_tbl.v1_4->display_path[i].encoderobjid);
> +                       if (id.type == obj_id.type &&
> +                                       id.id == obj_id.id &&
> +                                               id.enum_id == obj_id.enum_id)
> +                               return
> +                               &bp->object_info_tbl.v1_4->display_path[i];
> +               }
> +               /* fall through */
> +       case OBJECT_TYPE_CONNECTOR:
> +       case OBJECT_TYPE_GENERIC:
> +               /* Both Generic and Connector Object ID
> +                * will be stored on display_objid
> +                */
> +               for (i = 0; i < bp->object_info_tbl.v1_4->number_of_path; i++) {
> +                       obj_id = object_id_from_bios_object_id(
> +                       bp->object_info_tbl.v1_4->display_path[i].display_objid
> +                       );
> +                       if (id.type == obj_id.type &&
> +                                       id.id == obj_id.id &&
> +                                               id.enum_id == obj_id.enum_id)
> +                               return
> +                               &bp->object_info_tbl.v1_4->display_path[i];
> +               }
> +       default:
> +               return NULL;
> +       }
> +}
> +
> +static enum bp_result bios_parser_get_i2c_info(struct dc_bios *dcb,
> +       struct graphics_object_id id,
> +       struct graphics_object_i2c_info *info)
> +{
> +       uint32_t offset;
> +       struct atom_display_object_path_v2 *object;
> +       struct atom_common_record_header *header;
> +       struct atom_i2c_record *record;
> +       struct bios_parser *bp = BP_FROM_DCB(dcb);
> +
> +       if (!info)
> +               return BP_RESULT_BADINPUT;
> +
> +       object = get_bios_object(bp, id);
> +
> +       if (!object)
> +               return BP_RESULT_BADINPUT;
> +
> +       offset = object->disp_recordoffset + bp->object_info_tbl_offset;
> +
> +       for (;;) {
> +               header = GET_IMAGE(struct atom_common_record_header, offset);
> +
> +               if (!header)
> +                       return BP_RESULT_BADBIOSTABLE;
> +
> +               if (header->record_type == LAST_RECORD_TYPE ||
> +                       !header->record_size)
> +                       break;
> +
> +               if (header->record_type == ATOM_I2C_RECORD_TYPE
> +                       && sizeof(struct atom_i2c_record) <=
> +                                               header->record_size) {
> +                       /* get the I2C info */
> +                       record = (struct atom_i2c_record *) header;
> +
> +                       if (get_gpio_i2c_info(bp, record, info) ==
> +                                                       BP_RESULT_OK)
> +                               return BP_RESULT_OK;
> +               }
> +
> +               offset += header->record_size;
> +       }
> +
> +       return BP_RESULT_NORECORD;
> +}
> +
> +static enum bp_result get_gpio_i2c_info(
> +       struct bios_parser *bp,
> +       struct atom_i2c_record *record,
> +       struct graphics_object_i2c_info *info)
> +{
> +       struct atom_gpio_pin_lut_v2_1 *header;
> +       uint32_t count = 0;
> +       unsigned int table_index = 0;
> +
> +       if (!info)
> +               return BP_RESULT_BADINPUT;
> +
> +       /* get the GPIO_I2C info */
> +       if (!DATA_TABLES(gpio_pin_lut))
> +               return BP_RESULT_BADBIOSTABLE;
> +
> +       header = GET_IMAGE(struct atom_gpio_pin_lut_v2_1,
> +                                       DATA_TABLES(gpio_pin_lut));
> +       if (!header)
> +               return BP_RESULT_BADBIOSTABLE;
> +
> +       if (sizeof(struct atom_common_table_header) +
> +                       sizeof(struct atom_gpio_pin_assignment) >
> +                       le16_to_cpu(header->table_header.structuresize))
> +               return BP_RESULT_BADBIOSTABLE;
> +
> +       /* TODO: what if the table version changes? */
> +       if (header->table_header.content_revision != 1)
> +               return BP_RESULT_UNSUPPORTED;
> +
> +       /* get data count */
> +       count = (le16_to_cpu(header->table_header.structuresize)
> +                       - sizeof(struct atom_common_table_header))
> +                               / sizeof(struct atom_gpio_pin_assignment);
> +
> +       table_index = record->i2c_id & I2C_HW_LANE_MUX;
> +
> +       if (count < table_index) {
> +               bool find_valid = false;
> +
> +               for (table_index = 0; table_index < count; table_index++) {
> +                       if (((record->i2c_id & I2C_HW_CAP) == (
> +                       header->gpio_pin[table_index].gpio_id &
> +                                                       I2C_HW_CAP)) &&
> +                       ((record->i2c_id & I2C_HW_ENGINE_ID_MASK)  ==
> +                       (header->gpio_pin[table_index].gpio_id &
> +                                               I2C_HW_ENGINE_ID_MASK)) &&
> +                       ((record->i2c_id & I2C_HW_LANE_MUX) ==
> +                       (header->gpio_pin[table_index].gpio_id &
> +                                                       I2C_HW_LANE_MUX))) {
> +                               /* still valid */
> +                               find_valid = true;
> +                               break;
> +                       }
> +               }
> +               /* If we don't find the entry that we are looking for then
> +                *  we will return BP_Result_BadBiosTable.
> +                */
> +               if (find_valid == false)
> +                       return BP_RESULT_BADBIOSTABLE;
> +       }
> +
> +       /* get the GPIO_I2C_INFO */
> +       info->i2c_hw_assist = (record->i2c_id & I2C_HW_CAP) ? true : false;
> +       info->i2c_line = record->i2c_id & I2C_HW_LANE_MUX;
> +       info->i2c_engine_id = (record->i2c_id & I2C_HW_ENGINE_ID_MASK) >> 4;
> +       info->i2c_slave_address = record->i2c_slave_addr;
> +
> +       /* TODO: check how to get register offset for en, Y, etc. */
> +       info->gpio_info.clk_a_register_index =
> +                       le16_to_cpu(
> +                       header->gpio_pin[table_index].data_a_reg_index);
> +       info->gpio_info.clk_a_shift =
> +                       header->gpio_pin[table_index].gpio_bitshift;
> +
> +       return BP_RESULT_OK;
> +}
> +
> +static enum bp_result get_voltage_ddc_info_v4(
> +       uint8_t *i2c_line,
> +       uint32_t index,
> +       struct atom_common_table_header *header,
> +       uint8_t *address)
> +{
> +       enum bp_result result = BP_RESULT_NORECORD;
> +       struct atom_voltage_objects_info_v4_1 *info =
> +               (struct atom_voltage_objects_info_v4_1 *) address;
> +
> +       uint8_t *voltage_current_object =
> +               (uint8_t *) (&(info->voltage_object[0]));
> +
> +       while ((address + le16_to_cpu(header->structuresize)) >
> +                                               voltage_current_object) {
> +               struct atom_i2c_voltage_object_v4 *object =
> +                       (struct atom_i2c_voltage_object_v4 *)
> +                                               voltage_current_object;
> +
> +               if (object->header.voltage_mode ==
> +                       ATOM_INIT_VOLTAGE_REGULATOR) {
> +                       if (object->header.voltage_type == index) {
> +                               *i2c_line = object->i2c_id ^ 0x90;
> +                               result = BP_RESULT_OK;
> +                               break;
> +                       }
> +               }
> +
> +               voltage_current_object +=
> +                               le16_to_cpu(object->header.object_size);
> +       }
> +       return result;
> +}
> +
> +static enum bp_result bios_parser_get_thermal_ddc_info(
> +       struct dc_bios *dcb,
> +       uint32_t i2c_channel_id,
> +       struct graphics_object_i2c_info *info)
> +{
> +       struct bios_parser *bp = BP_FROM_DCB(dcb);
> +       struct i2c_id_config_access *config;
> +       struct atom_i2c_record record;
> +
> +       if (!info)
> +               return BP_RESULT_BADINPUT;
> +
> +       config = (struct i2c_id_config_access *) &i2c_channel_id;
> +
> +       record.i2c_id = config->bfHW_Capable;
> +       record.i2c_id |= config->bfI2C_LineMux;
> +       record.i2c_id |= config->bfHW_EngineID;
> +
> +       return get_gpio_i2c_info(bp, &record, info);
> +}
> +
> +static enum bp_result bios_parser_get_voltage_ddc_info(struct dc_bios *dcb,
> +       uint32_t index,
> +       struct graphics_object_i2c_info *info)
> +{
> +       uint8_t i2c_line = 0;
> +       enum bp_result result = BP_RESULT_NORECORD;
> +       uint8_t *voltage_info_address;
> +       struct atom_common_table_header *header;
> +       struct atom_data_revision revision = {0};
> +       struct bios_parser *bp = BP_FROM_DCB(dcb);
> +
> +       if (!DATA_TABLES(voltageobject_info))
> +               return result;
> +
> +       voltage_info_address = get_image(&bp->base,
> +                       DATA_TABLES(voltageobject_info),
> +                       sizeof(struct atom_common_table_header));
> +
> +       header = (struct atom_common_table_header *) voltage_info_address;
> +
> +       get_atom_data_table_revision(header, &revision);
> +
> +       switch (revision.major) {
> +       case 4:
> +               if (revision.minor != 1)
> +                       break;
> +               result = get_voltage_ddc_info_v4(&i2c_line, index, header,
> +                       voltage_info_address);
> +               break;
> +       }
> +
> +       if (result == BP_RESULT_OK)
> +               result = bios_parser_get_thermal_ddc_info(dcb,
> +                       i2c_line, info);
> +
> +       return result;
> +}
> +
> +static enum bp_result bios_parser_get_hpd_info(
> +       struct dc_bios *dcb,
> +       struct graphics_object_id id,
> +       struct graphics_object_hpd_info *info)
> +{
> +       struct bios_parser *bp = BP_FROM_DCB(dcb);
> +       struct atom_display_object_path_v2 *object;
> +       struct atom_hpd_int_record *record = NULL;
> +
> +       if (!info)
> +               return BP_RESULT_BADINPUT;
> +
> +       object = get_bios_object(bp, id);
> +
> +       if (!object)
> +               return BP_RESULT_BADINPUT;
> +
> +       record = get_hpd_record(bp, object);
> +
> +       if (record != NULL) {
> +               info->hpd_int_gpio_uid = record->pin_id;
> +               info->hpd_active = record->plugin_pin_state;
> +               return BP_RESULT_OK;
> +       }
> +
> +       return BP_RESULT_NORECORD;
> +}
> +
> +static struct atom_hpd_int_record *get_hpd_record(
> +       struct bios_parser *bp,
> +       struct atom_display_object_path_v2 *object)
> +{
> +       struct atom_common_record_header *header;
> +       uint32_t offset;
> +
> +       if (!object) {
> +               BREAK_TO_DEBUGGER(); /* Invalid object */
> +               return NULL;
> +       }
> +
> +       offset = le16_to_cpu(object->disp_recordoffset)
> +                       + bp->object_info_tbl_offset;
> +
> +       for (;;) {
> +               header = GET_IMAGE(struct atom_common_record_header, offset);
> +
> +               if (!header)
> +                       return NULL;
> +
> +               if (header->record_type == LAST_RECORD_TYPE ||
> +                       !header->record_size)
> +                       break;
> +
> +               if (header->record_type == ATOM_HPD_INT_RECORD_TYPE
> +                       && sizeof(struct atom_hpd_int_record) <=
> +                                               header->record_size)
> +                       return (struct atom_hpd_int_record *) header;
> +
> +               offset += header->record_size;
> +       }
> +
> +       return NULL;
> +}
> +
> +/**
> + * bios_parser_get_gpio_pin_info
> + * Get GpioPin information of input gpio id
> + *
> + * @param gpio_id, GPIO ID
> + * @param info, GpioPin information structure
> + * @return Bios parser result code
> + * @note
> + *  to get the GPIO PIN INFO, we need:
> + *  1. get the GPIO_ID from other object table, see GetHPDInfo()
> + *  2. in DATA_TABLE.GPIO_Pin_LUT, search all records,
> + *     to get the registerA offset/mask
> + */
> +static enum bp_result bios_parser_get_gpio_pin_info(
> +       struct dc_bios *dcb,
> +       uint32_t gpio_id,
> +       struct gpio_pin_info *info)
> +{
> +       struct bios_parser *bp = BP_FROM_DCB(dcb);
> +       struct atom_gpio_pin_lut_v2_1 *header;
> +       uint32_t count = 0;
> +       uint32_t i = 0;
> +
> +       if (!DATA_TABLES(gpio_pin_lut))
> +               return BP_RESULT_BADBIOSTABLE;
> +
> +       header = GET_IMAGE(struct atom_gpio_pin_lut_v2_1,
> +                                               DATA_TABLES(gpio_pin_lut));
> +       if (!header)
> +               return BP_RESULT_BADBIOSTABLE;
> +
> +       if (sizeof(struct atom_common_table_header) +
> +                       sizeof(struct atom_gpio_pin_lut_v2_1)
> +                       > le16_to_cpu(header->table_header.structuresize))
> +               return BP_RESULT_BADBIOSTABLE;
> +
> +       if (header->table_header.content_revision != 1)
> +               return BP_RESULT_UNSUPPORTED;
> +
> +       /* Temporary hard code gpio pin info */
> +#if defined(FOR_SIMNOW_BOOT)
> +       {
> +               struct  atom_gpio_pin_assignment  gpio_pin[8] = {
> +                               {0x5db5, 0, 0, 1, 0},
> +                               {0x5db5, 8, 8, 2, 0},
> +                               {0x5db5, 0x10, 0x10, 3, 0},
> +                               {0x5db5, 0x18, 0x14, 4, 0},
> +                               {0x5db5, 0x1A, 0x18, 5, 0},
> +                               {0x5db5, 0x1C, 0x1C, 6, 0},
> +               };
> +
> +               count = 6;
> +               memmove(header->gpio_pin, gpio_pin, sizeof(gpio_pin));
> +       }
> +#else
> +       count = (le16_to_cpu(header->table_header.structuresize)
> +                       - sizeof(struct atom_common_table_header))
> +                               / sizeof(struct atom_gpio_pin_assignment);
> +#endif
> +       for (i = 0; i < count; ++i) {
> +               if (header->gpio_pin[i].gpio_id != gpio_id)
> +                       continue;
> +
> +               info->offset =
> +                       (uint32_t) le16_to_cpu(
> +                               header->gpio_pin[i].data_a_reg_index);
> +               info->offset_y = info->offset + 2;
> +               info->offset_en = info->offset + 1;
> +               info->offset_mask = info->offset - 1;
> +
> +               info->mask = (uint32_t) (1 <<
> +                       header->gpio_pin[i].gpio_bitshift);
> +               info->mask_y = info->mask + 2;
> +               info->mask_en = info->mask + 1;
> +               info->mask_mask = info->mask - 1;
> +
> +               return BP_RESULT_OK;
> +       }
> +
> +       return BP_RESULT_NORECORD;
> +}
> +
> +static struct device_id device_type_from_device_id(uint16_t device_id)
> +{
> +
> +       struct device_id result_device_id;
> +
> +       switch (device_id) {
> +       case ATOM_DISPLAY_LCD1_SUPPORT:
> +               result_device_id.device_type = DEVICE_TYPE_LCD;
> +               result_device_id.enum_id = 1;
> +               break;
> +
> +       case ATOM_DISPLAY_DFP1_SUPPORT:
> +               result_device_id.device_type = DEVICE_TYPE_DFP;
> +               result_device_id.enum_id = 1;
> +               break;
> +
> +       case ATOM_DISPLAY_DFP2_SUPPORT:
> +               result_device_id.device_type = DEVICE_TYPE_DFP;
> +               result_device_id.enum_id = 2;
> +               break;
> +
> +       case ATOM_DISPLAY_DFP3_SUPPORT:
> +               result_device_id.device_type = DEVICE_TYPE_DFP;
> +               result_device_id.enum_id = 3;
> +               break;
> +
> +       case ATOM_DISPLAY_DFP4_SUPPORT:
> +               result_device_id.device_type = DEVICE_TYPE_DFP;
> +               result_device_id.enum_id = 4;
> +               break;
> +
> +       case ATOM_DISPLAY_DFP5_SUPPORT:
> +               result_device_id.device_type = DEVICE_TYPE_DFP;
> +               result_device_id.enum_id = 5;
> +               break;
> +
> +       case ATOM_DISPLAY_DFP6_SUPPORT:
> +               result_device_id.device_type = DEVICE_TYPE_DFP;
> +               result_device_id.enum_id = 6;
> +               break;
> +
> +       default:
> +               BREAK_TO_DEBUGGER(); /* Invalid device Id */
> +               result_device_id.device_type = DEVICE_TYPE_UNKNOWN;
> +               result_device_id.enum_id = 0;
> +       }
> +       return result_device_id;
> +}
> +
> +static enum bp_result bios_parser_get_device_tag(
> +       struct dc_bios *dcb,
> +       struct graphics_object_id connector_object_id,
> +       uint32_t device_tag_index,
> +       struct connector_device_tag_info *info)
> +{
> +       struct bios_parser *bp = BP_FROM_DCB(dcb);
> +       struct atom_display_object_path_v2 *object;
> +
> +       if (!info)
> +               return BP_RESULT_BADINPUT;
> +
> +       /* getBiosObject will return MXM object */
> +       object = get_bios_object(bp, connector_object_id);
> +
> +       if (!object) {
> +               BREAK_TO_DEBUGGER(); /* Invalid object id */
> +               return BP_RESULT_BADINPUT;
> +       }
> +
> +       info->acpi_device = 0; /* BIOS no longer provides this */
> +       info->dev_id = device_type_from_device_id(object->device_tag);
> +
> +       return BP_RESULT_OK;
> +}
> +
> +static enum bp_result get_ss_info_v4_1(
> +       struct bios_parser *bp,
> +       uint32_t id,
> +       uint32_t index,
> +       struct spread_spectrum_info *ss_info)
> +{
> +       enum bp_result result = BP_RESULT_OK;
> +       struct atom_display_controller_info_v4_1 *disp_cntl_tbl = NULL;
> +       struct atom_smu_info_v3_1 *smu_tbl = NULL;
> +
> +       if (!ss_info)
> +               return BP_RESULT_BADINPUT;
> +
> +       if (!DATA_TABLES(dce_info))
> +               return BP_RESULT_BADBIOSTABLE;
> +
> +       if (!DATA_TABLES(smu_info))
> +               return BP_RESULT_BADBIOSTABLE;
> +
> +       disp_cntl_tbl = GET_IMAGE(struct atom_display_controller_info_v4_1,
> +                                       DATA_TABLES(dce_info));
> +       if (!disp_cntl_tbl)
> +               return BP_RESULT_BADBIOSTABLE;
> +
> +       smu_tbl = GET_IMAGE(struct atom_smu_info_v3_1, DATA_TABLES(smu_info));
> +       if (!smu_tbl)
> +               return BP_RESULT_BADBIOSTABLE;
> +
> +
> +       ss_info->type.STEP_AND_DELAY_INFO = false;
> +       ss_info->spread_percentage_divider = 1000;
> +       /* BIOS no longer uses target clock.  Always enable for now */
> +       ss_info->target_clock_range = 0xffffffff;
> +
> +       switch (id) {
> +       case AS_SIGNAL_TYPE_DVI:
> +               ss_info->spread_spectrum_percentage =
> +                               disp_cntl_tbl->dvi_ss_percentage;
> +               ss_info->spread_spectrum_range =
> +                               disp_cntl_tbl->dvi_ss_rate_10hz * 10;
> +               if (disp_cntl_tbl->dvi_ss_mode & ATOM_SS_CENTRE_SPREAD_MODE)
> +                       ss_info->type.CENTER_MODE = true;
> +               break;
> +       case AS_SIGNAL_TYPE_HDMI:
> +               ss_info->spread_spectrum_percentage =
> +                               disp_cntl_tbl->hdmi_ss_percentage;
> +               ss_info->spread_spectrum_range =
> +                               disp_cntl_tbl->hdmi_ss_rate_10hz * 10;
> +               if (disp_cntl_tbl->hdmi_ss_mode & ATOM_SS_CENTRE_SPREAD_MODE)
> +                       ss_info->type.CENTER_MODE = true;
> +               break;
> +       /* TODO LVDS not support anymore? */
> +       case AS_SIGNAL_TYPE_DISPLAY_PORT:
> +               ss_info->spread_spectrum_percentage =
> +                               disp_cntl_tbl->dp_ss_percentage;
> +               ss_info->spread_spectrum_range =
> +                               disp_cntl_tbl->dp_ss_rate_10hz * 10;
> +               if (disp_cntl_tbl->dp_ss_mode & ATOM_SS_CENTRE_SPREAD_MODE)
> +                       ss_info->type.CENTER_MODE = true;
> +               break;
> +       case AS_SIGNAL_TYPE_GPU_PLL:
> +               ss_info->spread_spectrum_percentage =
> +                               smu_tbl->gpuclk_ss_percentage;
> +               ss_info->spread_spectrum_range =
> +                               smu_tbl->gpuclk_ss_rate_10hz * 10;
> +               if (smu_tbl->gpuclk_ss_mode & ATOM_SS_CENTRE_SPREAD_MODE)
> +                       ss_info->type.CENTER_MODE = true;
> +               break;
> +       default:
> +               result = BP_RESULT_UNSUPPORTED;
> +       }
> +
> +       return result;
> +}
> +
> +/**
> + * bios_parser_get_spread_spectrum_info
> + * Get spread spectrum information from the ASIC_InternalSS_Info (ver 2.1
> + * or ver 3.1) or SS_Info table from the VBIOS. Currently
> + * ASIC_InternalSS_Info ver 2.1 can co-exist with the SS_Info table. Except
> + * for ASIC_InternalSS_Info ver 3.1, there is only one entry for each
> + * signal/ss id.  However, there is no plan to support multiple spread
> + * spectrum entries for EverGreen.
> + * @param [in] this
> + * @param [in] signal, ASSignalType to be converted to info index
> + * @param [in] index, number of entries that match the converted info index
> + * @param [out] ss_info, spread spectrum information structure
> + * @return Bios parser result code
> + */
> +static enum bp_result bios_parser_get_spread_spectrum_info(
> +       struct dc_bios *dcb,
> +       enum as_signal_type signal,
> +       uint32_t index,
> +       struct spread_spectrum_info *ss_info)
> +{
> +       struct bios_parser *bp = BP_FROM_DCB(dcb);
> +       enum bp_result result = BP_RESULT_UNSUPPORTED;
> +       struct atom_common_table_header *header;
> +       struct atom_data_revision tbl_revision;
> +
> +       if (!ss_info) /* check for bad input */
> +               return BP_RESULT_BADINPUT;
> +
> +       if (!DATA_TABLES(dce_info))
> +               return BP_RESULT_UNSUPPORTED;
> +
> +       header = GET_IMAGE(struct atom_common_table_header,
> +                                               DATA_TABLES(dce_info));
> +       get_atom_data_table_revision(header, &tbl_revision);
> +
> +       switch (tbl_revision.major) {
> +       case 4:
> +               switch (tbl_revision.minor) {
> +               case 1:
> +                       return get_ss_info_v4_1(bp, signal, index, ss_info);
> +               default:
> +                       break;
> +               }
> +               break;
> +       default:
> +               break;
> +       }
> +       /* there cannot be more than one entry for the SS Info table */
> +       return result;
> +}
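For reference, the major/minor dispatch above follows the usual atomfirmware pattern: read the common table header, decode its revision, and route to a version-specific parser, defaulting to "unsupported". A minimal stand-alone sketch of that pattern (the struct and `parse_v4_1` here are simplified stand-ins for `atom_common_table_header` and `get_ss_info_v4_1`, not the patch's actual definitions):

```c
#include <assert.h>
#include <stdint.h>

/* Simplified stand-in for struct atom_common_table_header. */
struct common_header {
	uint16_t structuresize;
	uint8_t format_revision;   /* major */
	uint8_t content_revision;  /* minor */
};

enum result { RES_OK, RES_UNSUPPORTED };

/* Hypothetical v4.1 parser standing in for get_ss_info_v4_1(). */
static enum result parse_v4_1(void)
{
	return RES_OK;
}

/* Dispatch on (major, minor); unknown revisions fall through to
 * RES_UNSUPPORTED, as in bios_parser_get_spread_spectrum_info(). */
static enum result dispatch(const struct common_header *h)
{
	if (h->format_revision == 4 && h->content_revision == 1)
		return parse_v4_1();
	return RES_UNSUPPORTED;
}
```

Keeping the default path as "unsupported" rather than an error lets newer VBIOS table revisions degrade gracefully instead of failing hard.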
> +
> +static enum bp_result get_embedded_panel_info_v2_1(
> +       struct bios_parser *bp,
> +       struct embedded_panel_info *info)
> +{
> +       struct lcd_info_v2_1 *lvds;
> +
> +       if (!info)
> +               return BP_RESULT_BADINPUT;
> +
> +       if (!DATA_TABLES(lcd_info))
> +               return BP_RESULT_UNSUPPORTED;
> +
> +       lvds = GET_IMAGE(struct lcd_info_v2_1, DATA_TABLES(lcd_info));
> +
> +       if (!lvds)
> +               return BP_RESULT_BADBIOSTABLE;
> +
> +       /* TODO: previously checked v1_3, should check v2_1 */
> +       if (!((lvds->table_header.format_revision == 2)
> +                       && (lvds->table_header.content_revision >= 1)))
> +               return BP_RESULT_UNSUPPORTED;
> +
> +       memset(info, 0, sizeof(struct embedded_panel_info));
> +
> +       /* We need to convert from 10KHz units into KHz units */
> +       info->lcd_timing.pixel_clk =
> +                       le16_to_cpu(lvds->lcd_timing.pixclk) * 10;
> +       /* usHActive does not include borders, according to VBIOS team */
> +       info->lcd_timing.horizontal_addressable =
> +                       le16_to_cpu(lvds->lcd_timing.h_active);
> +       /* usHBlanking_Time includes borders, so we should really be
> +        * subtracting borders during this translation, but LVDS generally
> +        * doesn't have borders, so we should be okay leaving this as is
> +        * for now.  May need to revisit if we ever have LVDS with borders
> +        */
> +       info->lcd_timing.horizontal_blanking_time =
> +               le16_to_cpu(lvds->lcd_timing.h_blanking_time);
> +       /* usVActive does not include borders, according to VBIOS team*/
> +       info->lcd_timing.vertical_addressable =
> +               le16_to_cpu(lvds->lcd_timing.v_active);
> +       /* usVBlanking_Time includes borders, so we should really be
> +        * subtracting borders during this translation, but LVDS generally
> +        * doesn't have borders, so we should be okay leaving this as is
> +        * for now. May need to revisit if we ever have LVDS with borders
> +        */
> +       info->lcd_timing.vertical_blanking_time =
> +               le16_to_cpu(lvds->lcd_timing.v_blanking_time);
> +       info->lcd_timing.horizontal_sync_offset =
> +               le16_to_cpu(lvds->lcd_timing.h_sync_offset);
> +       info->lcd_timing.horizontal_sync_width =
> +               le16_to_cpu(lvds->lcd_timing.h_sync_width);
> +       info->lcd_timing.vertical_sync_offset =
> +               le16_to_cpu(lvds->lcd_timing.v_sync_offset);
> +       info->lcd_timing.vertical_sync_width =
> +               le16_to_cpu(lvds->lcd_timing.v_syncwidth);
> +       info->lcd_timing.horizontal_border = lvds->lcd_timing.h_border;
> +       info->lcd_timing.vertical_border = lvds->lcd_timing.v_border;
> +
> +       /* not provided by VBIOS */
> +       info->lcd_timing.misc_info.HORIZONTAL_CUT_OFF = 0;
> +
> +       info->lcd_timing.misc_info.H_SYNC_POLARITY =
> +               ~(uint32_t)
> +               (lvds->lcd_timing.miscinfo & ATOM_HSYNC_POLARITY);
> +       info->lcd_timing.misc_info.V_SYNC_POLARITY =
> +               ~(uint32_t)
> +               (lvds->lcd_timing.miscinfo & ATOM_VSYNC_POLARITY);
> +
> +       /* not provided by VBIOS */
> +       info->lcd_timing.misc_info.VERTICAL_CUT_OFF = 0;
> +
> +       info->lcd_timing.misc_info.H_REPLICATION_BY2 =
> +               lvds->lcd_timing.miscinfo & ATOM_H_REPLICATIONBY2;
> +       info->lcd_timing.misc_info.V_REPLICATION_BY2 =
> +               lvds->lcd_timing.miscinfo & ATOM_V_REPLICATIONBY2;
> +       info->lcd_timing.misc_info.COMPOSITE_SYNC =
> +               lvds->lcd_timing.miscinfo & ATOM_COMPOSITESYNC;
> +       info->lcd_timing.misc_info.INTERLACE =
> +               lvds->lcd_timing.miscinfo & ATOM_INTERLACE;
> +
> +       /* not provided by VBIOS*/
> +       info->lcd_timing.misc_info.DOUBLE_CLOCK = 0;
> +       /* not provided by VBIOS*/
> +       info->ss_id = 0;
> +
> +       info->realtek_eDPToLVDS =
> +                       (lvds->dplvdsrxid == eDP_TO_LVDS_REALTEK_ID ? 1:0);
> +
> +       return BP_RESULT_OK;
> +}
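The VBIOS stores the panel's pixel clock as a little-endian 16-bit value in 10 kHz units, so the parser both byte-swaps and rescales it, as in the `le16_to_cpu(...) * 10` line above. A stand-alone sketch of that conversion (the simplified `sketch_le16_to_cpu` is included only so the example compiles outside the kernel; the real `le16_to_cpu()` also handles big-endian hosts):

```c
#include <assert.h>
#include <stdint.h>

/* Simplified little-endian 16-bit decode from raw table bytes. */
static uint16_t sketch_le16_to_cpu(const uint8_t raw[2])
{
	return (uint16_t)(raw[0] | (raw[1] << 8));
}

/* Convert the VBIOS pixel clock field (10 kHz units) to kHz,
 * mirroring get_embedded_panel_info_v2_1() above. */
static uint32_t pixel_clk_khz(const uint8_t raw[2])
{
	return (uint32_t)sketch_le16_to_cpu(raw) * 10;
}
```

For example, raw bytes {0x10, 0x27} decode to 10000 (10 kHz units), i.e. a 100 MHz pixel clock reported as 100000 kHz.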
> +
> +static enum bp_result bios_parser_get_embedded_panel_info(
> +       struct dc_bios *dcb,
> +       struct embedded_panel_info *info)
> +{
> +       struct bios_parser *bp = BP_FROM_DCB(dcb);
> +       struct atom_common_table_header *header;
> +       struct atom_data_revision tbl_revision;
> +
> +       if (!DATA_TABLES(lcd_info))
> +               return BP_RESULT_FAILURE;
> +
> +       header = GET_IMAGE(struct atom_common_table_header,
> +                                       DATA_TABLES(lcd_info));
> +
> +       if (!header)
> +               return BP_RESULT_BADBIOSTABLE;
> +
> +       get_atom_data_table_revision(header, &tbl_revision);
> +
> +
> +       switch (tbl_revision.major) {
> +       case 2:
> +               switch (tbl_revision.minor) {
> +               case 1:
> +                       return get_embedded_panel_info_v2_1(bp, info);
> +               default:
> +                       break;
> +               }
> +               break;
> +       default:
> +               break;
> +       }
> +
> +       return BP_RESULT_FAILURE;
> +}
> +
> +static uint32_t get_support_mask_for_device_id(struct device_id device_id)
> +{
> +       enum dal_device_type device_type = device_id.device_type;
> +       uint32_t enum_id = device_id.enum_id;
> +
> +       switch (device_type) {
> +       case DEVICE_TYPE_LCD:
> +               switch (enum_id) {
> +               case 1:
> +                       return ATOM_DISPLAY_LCD1_SUPPORT;
> +               default:
> +                       break;
> +               }
> +               break;
> +       case DEVICE_TYPE_DFP:
> +               switch (enum_id) {
> +               case 1:
> +                       return ATOM_DISPLAY_DFP1_SUPPORT;
> +               case 2:
> +                       return ATOM_DISPLAY_DFP2_SUPPORT;
> +               case 3:
> +                       return ATOM_DISPLAY_DFP3_SUPPORT;
> +               case 4:
> +                       return ATOM_DISPLAY_DFP4_SUPPORT;
> +               case 5:
> +                       return ATOM_DISPLAY_DFP5_SUPPORT;
> +               case 6:
> +                       return ATOM_DISPLAY_DFP6_SUPPORT;
> +               default:
> +                       break;
> +               }
> +               break;
> +       default:
> +               break;
> +       }
> +
> +       /* Unidentified device ID, return empty support mask. */
> +       return 0;
> +}
> +
> +static bool bios_parser_is_device_id_supported(
> +       struct dc_bios *dcb,
> +       struct device_id id)
> +{
> +       struct bios_parser *bp = BP_FROM_DCB(dcb);
> +
> +       uint32_t mask = get_support_mask_for_device_id(id);
> +
> +       return (le16_to_cpu(bp->object_info_tbl.v1_4->supporteddevices) &
> +                                                               mask) != 0;
> +}
> +
> +static void bios_parser_post_init(
> +       struct dc_bios *dcb)
> +{
> +       /* TODO for OPM module. Need to implement later. */
> +}
> +
> +static uint32_t bios_parser_get_ss_entry_number(
> +       struct dc_bios *dcb,
> +       enum as_signal_type signal)
> +{
> +       /* TODO: the DAL2 atomfirmware implementation does not need this.
> +        * Why does DAL3 need it?
> +        */
> +       return 1;
> +}
> +
> +static enum bp_result bios_parser_transmitter_control(
> +       struct dc_bios *dcb,
> +       struct bp_transmitter_control *cntl)
> +{
> +       struct bios_parser *bp = BP_FROM_DCB(dcb);
> +
> +       if (!bp->cmd_tbl.transmitter_control)
> +               return BP_RESULT_FAILURE;
> +
> +       return bp->cmd_tbl.transmitter_control(bp, cntl);
> +}
> +
> +static enum bp_result bios_parser_encoder_control(
> +       struct dc_bios *dcb,
> +       struct bp_encoder_control *cntl)
> +{
> +       struct bios_parser *bp = BP_FROM_DCB(dcb);
> +
> +       if (!bp->cmd_tbl.dig_encoder_control)
> +               return BP_RESULT_FAILURE;
> +
> +       return bp->cmd_tbl.dig_encoder_control(bp, cntl);
> +}
> +
> +static enum bp_result bios_parser_set_pixel_clock(
> +       struct dc_bios *dcb,
> +       struct bp_pixel_clock_parameters *bp_params)
> +{
> +       struct bios_parser *bp = BP_FROM_DCB(dcb);
> +
> +       if (!bp->cmd_tbl.set_pixel_clock)
> +               return BP_RESULT_FAILURE;
> +
> +       return bp->cmd_tbl.set_pixel_clock(bp, bp_params);
> +}
> +
> +static enum bp_result bios_parser_set_dce_clock(
> +       struct dc_bios *dcb,
> +       struct bp_set_dce_clock_parameters *bp_params)
> +{
> +       struct bios_parser *bp = BP_FROM_DCB(dcb);
> +
> +       if (!bp->cmd_tbl.set_dce_clock)
> +               return BP_RESULT_FAILURE;
> +
> +       return bp->cmd_tbl.set_dce_clock(bp, bp_params);
> +}
> +
> +static unsigned int bios_parser_get_smu_clock_info(
> +       struct dc_bios *dcb)
> +{
> +       struct bios_parser *bp = BP_FROM_DCB(dcb);
> +
> +       if (!bp->cmd_tbl.get_smu_clock_info)
> +               return 0; /* this returns a clock value, not an enum bp_result */
> +
> +       return bp->cmd_tbl.get_smu_clock_info(bp);
> +}
> +
> +static enum bp_result bios_parser_program_crtc_timing(
> +       struct dc_bios *dcb,
> +       struct bp_hw_crtc_timing_parameters *bp_params)
> +{
> +       struct bios_parser *bp = BP_FROM_DCB(dcb);
> +
> +       if (!bp->cmd_tbl.set_crtc_timing)
> +               return BP_RESULT_FAILURE;
> +
> +       return bp->cmd_tbl.set_crtc_timing(bp, bp_params);
> +}
> +
> +static enum bp_result bios_parser_enable_crtc(
> +       struct dc_bios *dcb,
> +       enum controller_id id,
> +       bool enable)
> +{
> +       struct bios_parser *bp = BP_FROM_DCB(dcb);
> +
> +       if (!bp->cmd_tbl.enable_crtc)
> +               return BP_RESULT_FAILURE;
> +
> +       return bp->cmd_tbl.enable_crtc(bp, id, enable);
> +}
> +
> +static enum bp_result bios_parser_crtc_source_select(
> +       struct dc_bios *dcb,
> +       struct bp_crtc_source_select *bp_params)
> +{
> +       struct bios_parser *bp = BP_FROM_DCB(dcb);
> +
> +       if (!bp->cmd_tbl.select_crtc_source)
> +               return BP_RESULT_FAILURE;
> +
> +       return bp->cmd_tbl.select_crtc_source(bp, bp_params);
> +}
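The wrappers above (transmitter control, encoder control, pixel clock, CRTC enable, and so on) all share one shape: check whether the command table populated the function pointer for this ASIC, fail gracefully if not, otherwise delegate. A minimal sketch of that guard pattern (the `cmd_tbl_sketch` layout and names are illustrative, not the patch's actual `struct cmd_tbl`):

```c
#include <assert.h>
#include <stddef.h>

enum bp_result_sketch { BP_OK, BP_FAILURE };

/* Illustrative command table: entries may be left NULL on ASICs
 * that lack the corresponding atomfirmware command. */
struct cmd_tbl_sketch {
	enum bp_result_sketch (*enable_crtc)(int id, int enable);
};

/* Hypothetical backing implementation. */
static enum bp_result_sketch do_enable(int id, int enable)
{
	(void)id;
	(void)enable;
	return BP_OK;
}

/* Guarded delegation, as in bios_parser_enable_crtc() above. */
static enum bp_result_sketch enable_crtc(const struct cmd_tbl_sketch *t,
					 int id, int enable)
{
	if (!t->enable_crtc)
		return BP_FAILURE;
	return t->enable_crtc(id, enable);
}
```

Centralizing the NULL check in each wrapper keeps callers from having to know which commands a given VBIOS revision actually exposes.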
> +
> +static enum bp_result bios_parser_enable_di

[-- Attachment #1.2: Type: text/html, Size: 71324 bytes --]

[-- Attachment #2: Type: text/plain, Size: 154 bytes --]

_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

^ permalink raw reply	[flat|nested] 101+ messages in thread

end of thread, other threads:[~2017-08-03 21:04 UTC | newest]

Thread overview: 101+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-03-20 20:29 [PATCH 000/100] Add Vega10 Support Alex Deucher
     [not found] ` <1490041835-11255-1-git-send-email-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
2017-03-20 20:29   ` [PATCH 001/100] drm/amdgpu: add the new atomfirmware interface header Alex Deucher
2017-03-20 20:29   ` [PATCH 002/100] amdgpu: detect if we are using atomfirm or atombios for vbios (v2) Alex Deucher
2017-03-20 20:29   ` [PATCH 003/100] drm/amdgpu: move atom scratch setup into amdgpu_atombios.c Alex Deucher
2017-03-20 20:29   ` [PATCH 004/100] drm/amdgpu: add basic support for atomfirmware.h (v3) Alex Deucher
2017-03-20 20:29   ` [PATCH 005/100] drm/amdgpu: add soc15ip.h Alex Deucher
2017-03-20 20:29   ` [PATCH 021/100] drm/amd: Add MQD structs for GFX V9 Alex Deucher
2017-03-20 20:29   ` [PATCH 022/100] drm/amdgpu: add gfx9 clearstate header Alex Deucher
2017-03-20 20:29   ` [PATCH 023/100] drm/amdgpu: add SDMA 4.0 packet header Alex Deucher
2017-03-20 20:29   ` [PATCH 024/100] drm/amdgpu: add common soc15 headers Alex Deucher
2017-03-20 20:29   ` [PATCH 025/100] drm/amdgpu: add vega10 chip name Alex Deucher
2017-03-20 20:29   ` [PATCH 026/100] drm/amdgpu: add clinetid definition for vega10 Alex Deucher
2017-03-20 20:29   ` [PATCH 027/100] drm/amdgpu: use new flag to handle different firmware loading method Alex Deucher
2017-03-20 20:29   ` [PATCH 028/100] drm/amdgpu: gb_addr_config struct Alex Deucher
2017-03-20 20:29   ` [PATCH 029/100] drm/amdgpu: add 64bit doorbell assignments Alex Deucher
2017-03-20 20:29   ` [PATCH 030/100] drm/amdgpu: Add MTYPE flags to GPU VM IOCTL interface Alex Deucher
2017-03-20 20:29   ` [PATCH 031/100] drm/amdgpu: use atomfirmware interfaces for scratch reg save/restore Alex Deucher
2017-03-20 20:29   ` [PATCH 032/100] drm/amdgpu: update IH IV ring entry for soc-15 Alex Deucher
2017-03-20 20:29   ` [PATCH 033/100] drm/amdgpu: add IV trace point Alex Deucher
2017-03-20 20:29   ` [PATCH 034/100] drm/amdgpu: add PTE defines for MTYPE Alex Deucher
2017-03-20 20:29   ` [PATCH 035/100] drm/amdgpu: add NGG parameters Alex Deucher
2017-03-20 20:29   ` [PATCH 036/100] drm/amdgpu: Add asic family for vega10 Alex Deucher
2017-03-20 20:29   ` [PATCH 037/100] drm/amdgpu: add tiling flags for GFX9 Alex Deucher
2017-03-20 20:29   ` [PATCH 038/100] drm/amdgpu: don't validate TILE_SPLIT on GFX9 Alex Deucher
2017-03-20 20:29   ` [PATCH 039/100] drm/amdgpu: rework common ucode handling for vega10 Alex Deucher
2017-03-20 20:29   ` [PATCH 040/100] drm/amdgpu: add psp firmware header info Alex Deucher
2017-03-20 20:29   ` [PATCH 041/100] drm/amdgpu: get display info from DC when DC enabled Alex Deucher
2017-03-20 20:29   ` [PATCH 042/100] drm/amdgpu: gart fixes for vega10 Alex Deucher
2017-03-20 20:29   ` [PATCH 043/100] drm/amdgpu: handle PTE EXEC in amdgpu_vm_bo_split_mapping Alex Deucher
2017-03-20 20:29   ` [PATCH 044/100] drm/amdgpu: handle PTE MTYPE " Alex Deucher
2017-03-20 20:29   ` [PATCH 045/100] drm/amdgpu: add NBIO 6.1 driver Alex Deucher
2017-03-20 20:29   ` [PATCH 046/100] drm/amdgpu: Add GMC 9.0 support Alex Deucher
     [not found]     ` <1490041835-11255-32-git-send-email-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
2017-03-21  8:49       ` Christian König
     [not found]         ` <003f0fba-4792-a32a-c982-73457dfbd1aa-ANTagKRnAhcb1SvskN2V4Q@public.gmane.org>
2017-03-21 15:09           ` Deucher, Alexander
     [not found]             ` <BN6PR12MB1652E0D9C22360FF77A4360AF73D0-/b2+HYfkarQqUD6E6FAiowdYzm3356FpvxpqHgZTriW3zl9H0oFU5g@public.gmane.org>
2017-03-22 16:51               ` Christian König
2017-03-22 19:41           ` Alex Deucher
2017-03-22 19:48       ` Dave Airlie
     [not found]         ` <CAPM=9tyv3RT0Q8i5zan_iaM7XxoTdb6nXuX=aT4C9vPPKjYfww-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2017-03-22 19:53           ` Deucher, Alexander
2017-03-23  2:42       ` Zhang, Jerry (Junwei)
     [not found]         ` <58D33609.3060304-5C7GfCeVMHo@public.gmane.org>
2017-03-23  2:53           ` Alex Deucher
     [not found]             ` <CADnq5_N_8r-pesBn1NwDrHJcOx8jnryhtcWow01Cbndj8B9N6w-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2017-03-23  2:54               ` Zhang, Jerry (Junwei)
2017-03-20 20:29   ` [PATCH 047/100] drm/amdgpu: add SDMA v4.0 implementation Alex Deucher
2017-03-20 20:29   ` [PATCH 048/100] drm/amdgpu: implement GFX 9.0 support Alex Deucher
2017-03-20 20:29   ` [PATCH 049/100] drm/amdgpu: add vega10 interrupt handler Alex Deucher
2017-03-20 20:29   ` [PATCH 050/100] drm/amdgpu: add initial uvd 7.0 support for vega10 Alex Deucher
2017-03-20 20:29   ` [PATCH 051/100] drm/amdgpu: add initial vce 4.0 " Alex Deucher
2017-03-20 20:29   ` [PATCH 052/100] drm/amdgpu: add PSP driver " Alex Deucher
2017-03-20 20:29   ` [PATCH 053/100] drm/amdgpu: add psp firmware info into info query and debugfs Alex Deucher
2017-03-20 20:29   ` [PATCH 054/100] drm/amdgpu: add SMC firmware into global ucode list for psp loading Alex Deucher
2017-03-20 20:29   ` [PATCH 055/100] drm/amd/powerplay: add smu9 header files for Vega10 Alex Deucher
2017-03-20 20:29   ` [PATCH 056/100] drm/amd/powerplay: add new Vega10's ppsmc header file Alex Deucher
2017-03-20 20:29   ` [PATCH 057/100] drm/amdgpu: add new atomfirmware based helpers for powerplay Alex Deucher
2017-03-20 20:29   ` [PATCH 058/100] drm/amd/powerplay: add global PowerPlay mutex Alex Deucher
2017-03-20 20:29   ` [PATCH 059/100] drm/amd/powerplay: add some new structures for Vega10 Alex Deucher
2017-03-20 20:29   ` [PATCH 060/100] drm/amd: add structures for display/powerplay interface Alex Deucher
2017-03-20 20:29   ` [PATCH 061/100] drm/amd/powerplay: add some display/powerplay interfaces Alex Deucher
2017-03-20 20:29   ` [PATCH 062/100] drm/amd/powerplay: add Vega10 powerplay support Alex Deucher
2017-03-20 20:29   ` [PATCH 063/100] drm/amd/display: Add DCE12 bios parser support Alex Deucher
     [not found]     ` <1490041835-11255-49-git-send-email-alexander.deucher-5C7GfCeVMHo@public.gmane.org>
2017-08-03 21:04       ` Mike Lothian
2017-03-20 20:29   ` [PATCH 064/100] drm/amd/display: Add DCE12 gpio support Alex Deucher
2017-03-20 20:30   ` [PATCH 065/100] drm/amd/display: Add DCE12 i2c/aux support Alex Deucher
2017-03-20 20:30   ` [PATCH 066/100] drm/amd/display: Add DCE12 irq support Alex Deucher
2017-03-20 20:30   ` [PATCH 067/100] drm/amd/display: Add DCE12 core support Alex Deucher
2017-03-20 20:30   ` [PATCH 068/100] drm/amd/display: Enable DCE12 support Alex Deucher
2017-03-20 20:30   ` [PATCH 069/100] drm/amd/display: need to handle DCE_Info table ver4.2 Alex Deucher
2017-03-20 20:30   ` [PATCH 070/100] drm/amd/display: Less log spam Alex Deucher
2017-03-20 20:30   ` [PATCH 071/100] drm/amdgpu: soc15 enable (v2) Alex Deucher
2017-03-20 20:30   ` [PATCH 072/100] drm/amdgpu: Set the IP blocks for vega10 Alex Deucher
2017-03-20 20:30   ` [PATCH 073/100] drm/amdgpu: add Vega10 Device IDs Alex Deucher
2017-03-20 20:30   ` [PATCH 074/100] drm/amdgpu/gfx9: programing wptr_poll_addr register Alex Deucher
2017-03-20 20:30   ` [PATCH 075/100] drm/amdgpu: impl sriov detection for vega10 Alex Deucher
2017-03-20 20:30   ` [PATCH 076/100] drm/amdgpu: add kiq ring for gfx9 Alex Deucher
2017-03-20 20:30   ` [PATCH 077/100] drm/amdgpu/gfx9: fullfill kiq funcs Alex Deucher
2017-03-20 20:30   ` [PATCH 078/100] drm/amdgpu/gfx9: fullfill kiq irq funcs Alex Deucher
2017-03-20 20:30   ` [PATCH 079/100] drm/amdgpu: init kiq and kcq for vega10 Alex Deucher
2017-03-20 20:30   ` [PATCH 080/100] drm/amdgpu:impl gfx9 cond_exec Alex Deucher
2017-03-20 20:30   ` [PATCH 081/100] drm/amdgpu/gfx9: impl gfx9 meta data emit Alex Deucher
2017-03-20 20:30   ` [PATCH 082/100] drm/amdgpu:bypass RLC init for SRIOV Alex Deucher
2017-03-20 20:30   ` [PATCH 083/100] drm/amdgpu/sdma4:re-org SDMA initial steps for sriov Alex Deucher
2017-03-20 20:30   ` [PATCH 084/100] drm/amdgpu/soc15: bypass PSP for VF Alex Deucher
2017-03-20 20:30   ` [PATCH 085/100] drm/amdgpu/gmc9: no need use kiq in vega10 tlb flush Alex Deucher
2017-03-20 20:30   ` [PATCH 086/100] drm/amdgpu/dce_virtual: bypass DPM for vf Alex Deucher
2017-03-20 20:30   ` [PATCH 087/100] drm/amdgpu/virt: impl mailbox for ai Alex Deucher
2017-03-20 20:30   ` [PATCH 088/100] drm/amdgpu/soc15: init virt ops for vf Alex Deucher
2017-03-20 20:30   ` [PATCH 089/100] drm/amdgpu/soc15: enable virtual dce " Alex Deucher
2017-03-20 20:30   ` [PATCH 090/100] drm/amdgpu/vega10:fix DOORBELL64 scheme Alex Deucher
2017-03-20 20:30   ` [PATCH 091/100] drm/amdgpu: Don't touch PG&CG for SRIOV MM Alex Deucher
2017-03-20 20:30   ` [PATCH 092/100] drm/amdgpu/vce4: enable doorbell for SRIOV Alex Deucher
2017-03-20 20:30   ` [PATCH 093/100] drm/amdgpu: disable uvd for sriov Alex Deucher
2017-03-20 20:30   ` [PATCH 094/100] drm/amdgpu/soc15: bypass pp block for vf Alex Deucher
2017-03-20 20:30   ` [PATCH 095/100] drm/amdgpu/virt: add structure for MM table Alex Deucher
2017-03-20 20:30   ` [PATCH 096/100] drm/amdgpu/vce4: alloc mm table for MM sriov Alex Deucher
2017-03-20 20:30   ` [PATCH 097/100] drm/amdgpu/vce4: Ignore vce ring/ib test temporarily Alex Deucher
2017-03-20 20:30   ` [PATCH 098/100] drm/amdgpu: add mmsch structures Alex Deucher
2017-03-20 20:30   ` [PATCH 099/100] drm/amdgpu/vce4: impl vce & mmsch sriov start Alex Deucher
2017-03-20 20:30   ` [PATCH 100/100] drm/amdgpu/gfx9: correct wptr pointer value Alex Deucher
2017-03-21  7:42   ` [PATCH 000/100] Add Vega10 Support Christian König
     [not found]     ` <50d03274-5a6e-fb77-9741-b6700a9949bd-ANTagKRnAhcb1SvskN2V4Q@public.gmane.org>
2017-03-21 11:51       ` Christian König
     [not found]         ` <b717602f-7573-6c20-ca68-491e3fe847c0-ANTagKRnAhcb1SvskN2V4Q@public.gmane.org>
2017-03-21 12:18           ` Christian König
     [not found]             ` <15b7d1b4-8ac7-d14b-40f6-aba529b301ea-ANTagKRnAhcb1SvskN2V4Q@public.gmane.org>
2017-03-21 15:54               ` Alex Deucher
2017-03-21 22:00       ` Alex Deucher
