* [PATCH 00/29] Add support for GC 11.0
@ 2022-04-29 18:01 Alex Deucher
  2022-04-29 18:01 ` [PATCH 01/29] drm/amdgpu: support RLCP firmware front door load Alex Deucher
                   ` (28 more replies)
  0 siblings, 29 replies; 30+ messages in thread
From: Alex Deucher @ 2022-04-29 18:01 UTC (permalink / raw)
  To: amd-gfx; +Cc: Alex Deucher

This patch set adds GFX and KFD support for GC 11.0.  GC11 uses the MES
(MicroEngine Scheduler) for engine management and has a new
microcontroller, the IMU, which handles power management for the block.
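
For context, new GC versions are wired into the driver through the IP
discovery table rather than PCI ID lists.  A minimal sketch of that
pattern, assuming the block symbols added later in this series (e.g.
gfx_v11_0_ip_block from the GFX 11.0 patch); the actual hunks live in
the amdgpu_discovery patches:

    /* sketch: select per-IP blocks based on the discovered GC version */
    switch (adev->ip_versions[GC_HWIP][0]) {
    case IP_VERSION(11, 0, 0):
            amdgpu_device_ip_block_add(adev, &gfx_v11_0_ip_block);
            break;
    default:
            return -EINVAL;
    }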

Alex Deucher (2):
  drm/amdgpu/discovery: handle AMDGPU_FW_LOAD_RLC_BACKDOOR_AUTO in SMU
  drm/amdgpu/discovery: add MES11 support

Chengming Gui (2):
  drm/amd/amdgpu: adjust the fw load type list
  drm/amd/amdgpu: add more fw load type to fit new ASICs

Evan Quan (2):
  drm/amdgpu: enable GFX CGCG/CGLS for GC11.0.0
  drm/amdgpu: enable fgcg for soc21

Hawking Zhang (2):
  drm/amdgpu: add init support for GFX11 (v2)
  drm/amdgpu: enable GENERIC0_INT for gfx/compute pipes

Jack Xiao (6):
  drm/amdgpu: add new CP_MES ucode ids
  drm/amdgpu: correct cp doorbell range
  drm/amdgpu: add mes unmap legacy queue routine
  drm/amdgpu/mes11: initiate mes v11 support
  drm/amdgpu/gfx10: enable kiq to map mes ring
  drm/amdgpu/gfx11: enable kiq to map mes ring

Likun Gao (14):
  drm/amdgpu: support RLCP firmware front door load
  drm/amdgpu: support RLCV firmware front door load
  drm/amdgpu: support for new SDMA front door load
  drm/amdgpu: support IMU front door load
  drm/amdgpu: add convert for new gfx type
  drm/amdgpu: init SDMA v6 microcode with PSP load type
  drm/amdgpu: extend the show ucode name function
  drm/amdgpu/gfx: refine fw hdr check function
  drm/amdgpu: skip amdgpu_ucode_create_bo for backdoor autoload
  drm/amdgpu: fix the fw size for sdma
  drm/amdgpu: renovate sdma fw struct
  drm/amdgpu: support RS64 CP fw front door load
  drm/amdgpu: support imu for gfx11
  drm/amdgpu/discovery: add GFX 11.0 Support

Mukul Joshi (1):
  drm/amdkfd: Add KFD support for soc21 v3

 drivers/gpu/drm/amd/amdgpu/Makefile           |   10 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c    |   15 +-
 .../drm/amd/amdgpu/amdgpu_amdkfd_gfx_v11.c    |  625 ++
 drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c |   30 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_doorbell.h  |   11 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c       |    2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c       |    8 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.h       |   17 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_imu.h       |   51 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_mes.c       |  335 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_mes.h       |   85 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c       |   51 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c     |  140 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h     |   35 +-
 drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c        |   22 +-
 drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c        | 6342 +++++++++++++++++
 drivers/gpu/drm/amd/amdgpu/gfx_v11_0.h        |   29 +
 drivers/gpu/drm/amd/amdgpu/gmc_v11_0.c        |    7 +-
 drivers/gpu/drm/amd/amdgpu/imu_v11_0.c        |  286 +
 drivers/gpu/drm/amd/amdgpu/imu_v11_0.h        |   30 +
 drivers/gpu/drm/amd/amdgpu/mes_v10_1.c        |  175 +-
 drivers/gpu/drm/amd/amdgpu/mes_v11_0.c        | 1181 +++
 drivers/gpu/drm/amd/amdgpu/mes_v11_0.h        |   29 +
 drivers/gpu/drm/amd/amdgpu/nv.c               |    4 +
 drivers/gpu/drm/amd/amdgpu/soc21.c            |    8 +-
 drivers/gpu/drm/amd/amdkfd/Makefile           |    3 +
 drivers/gpu/drm/amd/amdkfd/kfd_crat.c         |    8 +
 drivers/gpu/drm/amd/amdkfd/kfd_device.c       |   24 +-
 .../drm/amd/amdkfd/kfd_device_queue_manager.c |  299 +-
 .../drm/amd/amdkfd/kfd_device_queue_manager.h |    5 +
 .../amd/amdkfd/kfd_device_queue_manager_v11.c |   81 +
 drivers/gpu/drm/amd/amdkfd/kfd_doorbell.c     |   56 +-
 .../gpu/drm/amd/amdkfd/kfd_int_process_v11.c  |  383 +
 .../gpu/drm/amd/amdkfd/kfd_int_process_v9.c   |    8 +-
 drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager.c  |   10 +-
 .../gpu/drm/amd/amdkfd/kfd_mqd_manager_v11.c  |  508 ++
 drivers/gpu/drm/amd/amdkfd/kfd_priv.h         |   13 +
 drivers/gpu/drm/amd/amdkfd/kfd_process.c      |   19 +
 .../amd/amdkfd/kfd_process_queue_manager.c    |   21 +
 drivers/gpu/drm/amd/amdkfd/kfd_topology.c     |    3 +-
 drivers/gpu/drm/amd/amdkfd/soc15_int.h        |    3 +-
 .../gpu/drm/amd/include/kgd_kfd_interface.h   |    1 +
 .../drm/amd/{amdgpu => include}/mes_api_def.h |  167 +-
 drivers/gpu/drm/amd/include/mes_v11_api_def.h |  579 ++
 44 files changed, 11417 insertions(+), 302 deletions(-)
 create mode 100644 drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v11.c
 create mode 100644 drivers/gpu/drm/amd/amdgpu/amdgpu_imu.h
 create mode 100644 drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
 create mode 100644 drivers/gpu/drm/amd/amdgpu/gfx_v11_0.h
 create mode 100644 drivers/gpu/drm/amd/amdgpu/imu_v11_0.c
 create mode 100644 drivers/gpu/drm/amd/amdgpu/imu_v11_0.h
 create mode 100644 drivers/gpu/drm/amd/amdgpu/mes_v11_0.c
 create mode 100644 drivers/gpu/drm/amd/amdgpu/mes_v11_0.h
 create mode 100644 drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager_v11.c
 create mode 100644 drivers/gpu/drm/amd/amdkfd/kfd_int_process_v11.c
 create mode 100644 drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v11.c
 rename drivers/gpu/drm/amd/{amdgpu => include}/mes_api_def.h (68%)
 create mode 100644 drivers/gpu/drm/amd/include/mes_v11_api_def.h

-- 
2.35.1



* [PATCH 01/29] drm/amdgpu: support RLCP firmware front door load
  2022-04-29 18:01 [PATCH 00/29] Add support for GC 11.0 Alex Deucher
@ 2022-04-29 18:01 ` Alex Deucher
  2022-04-29 18:01 ` [PATCH 02/29] drm/amdgpu: support RLCV " Alex Deucher
                   ` (27 subsequent siblings)
  28 siblings, 0 replies; 30+ messages in thread
From: Alex Deucher @ 2022-04-29 18:01 UTC (permalink / raw)
  To: amd-gfx; +Cc: Alex Deucher, Likun Gao, Hawking Zhang

From: Likun Gao <Likun.Gao@amd.com>

Support RLCP firmware front door load.

Signed-off-by: Likun Gao <Likun.Gao@amd.com>
Reviewed-by: Hawking Zhang <Hawking.Zhang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c   | 3 +++
 drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c | 4 ++++
 drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h | 1 +
 3 files changed, 8 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
index d0fb14ef645c..9b3c23a22059 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
@@ -2217,6 +2217,9 @@ static int psp_get_fw_type(struct amdgpu_firmware_info *ucode,
 	case AMDGPU_UCODE_ID_CP_MEC2_JT:
 		*type = GFX_FW_TYPE_CP_MEC_ME2;
 		break;
+	case AMDGPU_UCODE_ID_RLC_P:
+		*type = GFX_FW_TYPE_RLC_P;
+		break;
 	case AMDGPU_UCODE_ID_RLC_G:
 		*type = GFX_FW_TYPE_RLC_G;
 		break;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
index a67f41465337..7af9ca069570 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
@@ -701,6 +701,10 @@ static int amdgpu_ucode_init_single_fw(struct amdgpu_device *adev,
 			ucode->ucode_size = adev->gfx.rlc.rlc_dram_ucode_size_bytes;
 			ucode_addr = adev->gfx.rlc.rlc_dram_ucode;
 			break;
+		case AMDGPU_UCODE_ID_RLC_P:
+			ucode->ucode_size = adev->gfx.rlc.rlcp_ucode_size_bytes;
+			ucode_addr = adev->gfx.rlc.rlcp_ucode;
+			break;
 		case AMDGPU_UCODE_ID_CP_MES:
 			ucode->ucode_size = le32_to_cpu(mes_hdr->mes_ucode_size_bytes);
 			ucode_addr = (u8 *)ucode->fw->data +
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h
index fb88f951fb3a..554a4a0521bc 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h
@@ -400,6 +400,7 @@ enum AMDGPU_UCODE_ID {
 	AMDGPU_UCODE_ID_RLC_RESTORE_LIST_SRM_MEM,
 	AMDGPU_UCODE_ID_RLC_IRAM,
 	AMDGPU_UCODE_ID_RLC_DRAM,
+	AMDGPU_UCODE_ID_RLC_P,
 	AMDGPU_UCODE_ID_RLC_G,
 	AMDGPU_UCODE_ID_STORAGE,
 	AMDGPU_UCODE_ID_SMC,
-- 
2.35.1



* [PATCH 02/29] drm/amdgpu: support RLCV firmware front door load
  2022-04-29 18:01 [PATCH 00/29] Add support for GC 11.0 Alex Deucher
  2022-04-29 18:01 ` [PATCH 01/29] drm/amdgpu: support RLCP firmware front door load Alex Deucher
@ 2022-04-29 18:01 ` Alex Deucher
  2022-04-29 18:02 ` [PATCH 03/29] drm/amdgpu: support for new SDMA " Alex Deucher
                   ` (26 subsequent siblings)
  28 siblings, 0 replies; 30+ messages in thread
From: Alex Deucher @ 2022-04-29 18:01 UTC (permalink / raw)
  To: amd-gfx; +Cc: Alex Deucher, Likun Gao, Hawking Zhang

From: Likun Gao <Likun.Gao@amd.com>

Support RLCV firmware front door load.

Signed-off-by: Likun Gao <Likun.Gao@amd.com>
Reviewed-by: Hawking Zhang <Hawking.Zhang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c   | 3 +++
 drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c | 4 ++++
 drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h | 1 +
 3 files changed, 8 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
index 9b3c23a22059..4c91efb695c4 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
@@ -2220,6 +2220,9 @@ static int psp_get_fw_type(struct amdgpu_firmware_info *ucode,
 	case AMDGPU_UCODE_ID_RLC_P:
 		*type = GFX_FW_TYPE_RLC_P;
 		break;
+	case AMDGPU_UCODE_ID_RLC_V:
+		*type = GFX_FW_TYPE_RLC_V;
+		break;
 	case AMDGPU_UCODE_ID_RLC_G:
 		*type = GFX_FW_TYPE_RLC_G;
 		break;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
index 7af9ca069570..8cc059482007 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
@@ -705,6 +705,10 @@ static int amdgpu_ucode_init_single_fw(struct amdgpu_device *adev,
 			ucode->ucode_size = adev->gfx.rlc.rlcp_ucode_size_bytes;
 			ucode_addr = adev->gfx.rlc.rlcp_ucode;
 			break;
+		case AMDGPU_UCODE_ID_RLC_V:
+			ucode->ucode_size = adev->gfx.rlc.rlcv_ucode_size_bytes;
+			ucode_addr = adev->gfx.rlc.rlcv_ucode;
+			break;
 		case AMDGPU_UCODE_ID_CP_MES:
 			ucode->ucode_size = le32_to_cpu(mes_hdr->mes_ucode_size_bytes);
 			ucode_addr = (u8 *)ucode->fw->data +
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h
index 554a4a0521bc..c3018eea4ae3 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h
@@ -401,6 +401,7 @@ enum AMDGPU_UCODE_ID {
 	AMDGPU_UCODE_ID_RLC_IRAM,
 	AMDGPU_UCODE_ID_RLC_DRAM,
 	AMDGPU_UCODE_ID_RLC_P,
+	AMDGPU_UCODE_ID_RLC_V,
 	AMDGPU_UCODE_ID_RLC_G,
 	AMDGPU_UCODE_ID_STORAGE,
 	AMDGPU_UCODE_ID_SMC,
-- 
2.35.1



* [PATCH 03/29] drm/amdgpu: support for new SDMA front door load
  2022-04-29 18:01 [PATCH 00/29] Add support for GC 11.0 Alex Deucher
  2022-04-29 18:01 ` [PATCH 01/29] drm/amdgpu: support RLCP firmware front door load Alex Deucher
  2022-04-29 18:01 ` [PATCH 02/29] drm/amdgpu: support RLCV " Alex Deucher
@ 2022-04-29 18:02 ` Alex Deucher
  2022-04-29 18:02 ` [PATCH 04/29] drm/amdgpu: add new CP_MES ucode ids Alex Deucher
                   ` (25 subsequent siblings)
  28 siblings, 0 replies; 30+ messages in thread
From: Alex Deucher @ 2022-04-29 18:02 UTC (permalink / raw)
  To: amd-gfx; +Cc: Alex Deucher, Likun Gao, Hawking Zhang

From: Likun Gao <Likun.Gao@amd.com>

Support for SDMA v6_0 ucode front door load.

Signed-off-by: Likun Gao <Likun.Gao@amd.com>
Reviewed-by: Hawking Zhang <Hawking.Zhang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c   |  6 ++++++
 drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c | 12 ++++++++++++
 2 files changed, 18 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
index 4c91efb695c4..ee852bf8887b 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
@@ -2277,6 +2277,12 @@ static int psp_get_fw_type(struct amdgpu_firmware_info *ucode,
 	case AMDGPU_UCODE_ID_DMCUB:
 		*type = GFX_FW_TYPE_DMUB;
 		break;
+	case AMDGPU_UCODE_ID_SDMA_UCODE_TH0:
+		*type = GFX_FW_TYPE_SDMA_UCODE_TH0;
+		break;
+	case AMDGPU_UCODE_ID_SDMA_UCODE_TH1:
+		*type = GFX_FW_TYPE_SDMA_UCODE_TH1;
+		break;
 	case AMDGPU_UCODE_ID_MAXIMUM:
 	default:
 		return -EINVAL;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
index 8cc059482007..1c69e182242c 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
@@ -648,6 +648,7 @@ static int amdgpu_ucode_init_single_fw(struct amdgpu_device *adev,
 	const struct dmcu_firmware_header_v1_0 *dmcu_hdr = NULL;
 	const struct dmcub_firmware_header_v1_0 *dmcub_hdr = NULL;
 	const struct mes_firmware_header_v1_0 *mes_hdr = NULL;
+	const struct sdma_firmware_header_v2_0 *sdma_hdr = NULL;
 	u8 *ucode_addr;
 
 	if (NULL == ucode->fw)
@@ -664,9 +665,20 @@ static int amdgpu_ucode_init_single_fw(struct amdgpu_device *adev,
 	dmcu_hdr = (const struct dmcu_firmware_header_v1_0 *)ucode->fw->data;
 	dmcub_hdr = (const struct dmcub_firmware_header_v1_0 *)ucode->fw->data;
 	mes_hdr = (const struct mes_firmware_header_v1_0 *)ucode->fw->data;
+	sdma_hdr = (const struct sdma_firmware_header_v2_0 *)ucode->fw->data;
 
 	if (adev->firmware.load_type == AMDGPU_FW_LOAD_PSP) {
 		switch (ucode->ucode_id) {
+		case AMDGPU_UCODE_ID_SDMA_UCODE_TH0:
+			ucode->ucode_size = le32_to_cpu(sdma_hdr->ctx_jt_offset + sdma_hdr->ctx_jt_size);
+			ucode_addr = (u8 *)ucode->fw->data +
+				le32_to_cpu(sdma_hdr->header.ucode_array_offset_bytes);
+			break;
+		case AMDGPU_UCODE_ID_SDMA_UCODE_TH1:
+			ucode->ucode_size = le32_to_cpu(sdma_hdr->ctl_jt_offset + sdma_hdr->ctl_jt_size);
+			ucode_addr = (u8 *)ucode->fw->data +
+				le32_to_cpu(sdma_hdr->ctl_ucode_offset);
+			break;
 		case AMDGPU_UCODE_ID_CP_MEC1:
 		case AMDGPU_UCODE_ID_CP_MEC2:
 			ucode->ucode_size = le32_to_cpu(header->ucode_size_bytes) -
-- 
2.35.1



* [PATCH 04/29] drm/amdgpu: add new CP_MES ucode ids
  2022-04-29 18:01 [PATCH 00/29] Add support for GC 11.0 Alex Deucher
                   ` (2 preceding siblings ...)
  2022-04-29 18:02 ` [PATCH 03/29] drm/amdgpu: support for new SDMA " Alex Deucher
@ 2022-04-29 18:02 ` Alex Deucher
  2022-04-29 18:02 ` [PATCH 05/29] drm/amdgpu: support IMU front door load Alex Deucher
                   ` (24 subsequent siblings)
  28 siblings, 0 replies; 30+ messages in thread
From: Alex Deucher @ 2022-04-29 18:02 UTC (permalink / raw)
  To: amd-gfx; +Cc: Alex Deucher, Jack Xiao, Hawking Zhang

From: Jack Xiao <Jack.Xiao@amd.com>

Needed for MES KIQ firmware loading.

Signed-off-by: Jack Xiao <Jack.Xiao@amd.com>
Reviewed-by: Hawking Zhang <Hawking.Zhang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h
index c3018eea4ae3..c6417778510c 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h
@@ -395,6 +395,8 @@ enum AMDGPU_UCODE_ID {
 	AMDGPU_UCODE_ID_CP_MEC2_JT,
 	AMDGPU_UCODE_ID_CP_MES,
 	AMDGPU_UCODE_ID_CP_MES_DATA,
+	AMDGPU_UCODE_ID_CP_MES1,
+	AMDGPU_UCODE_ID_CP_MES1_DATA,
 	AMDGPU_UCODE_ID_RLC_RESTORE_LIST_CNTL,
 	AMDGPU_UCODE_ID_RLC_RESTORE_LIST_GPM_MEM,
 	AMDGPU_UCODE_ID_RLC_RESTORE_LIST_SRM_MEM,
-- 
2.35.1



* [PATCH 05/29] drm/amdgpu: support IMU front door load
  2022-04-29 18:01 [PATCH 00/29] Add support for GC 11.0 Alex Deucher
                   ` (3 preceding siblings ...)
  2022-04-29 18:02 ` [PATCH 04/29] drm/amdgpu: add new CP_MES ucode ids Alex Deucher
@ 2022-04-29 18:02 ` Alex Deucher
  2022-04-29 18:02 ` [PATCH 06/29] drm/amdgpu: add convert for new gfx type Alex Deucher
                   ` (23 subsequent siblings)
  28 siblings, 0 replies; 30+ messages in thread
From: Alex Deucher @ 2022-04-29 18:02 UTC (permalink / raw)
  To: amd-gfx; +Cc: Alex Deucher, Likun Gao, Hawking Zhang

From: Likun Gao <Likun.Gao@amd.com>

Support front door load of the IMU firmware.

Signed-off-by: Likun Gao <Likun.Gao@amd.com>
Reviewed-by: Hawking Zhang <Hawking.Zhang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c   |  6 ++++++
 drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c | 13 +++++++++++++
 drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h |  2 ++
 3 files changed, 21 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
index ee852bf8887b..8b42d87f55ce 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
@@ -2283,6 +2283,12 @@ static int psp_get_fw_type(struct amdgpu_firmware_info *ucode,
 	case AMDGPU_UCODE_ID_SDMA_UCODE_TH1:
 		*type = GFX_FW_TYPE_SDMA_UCODE_TH1;
 		break;
+	case AMDGPU_UCODE_ID_IMU_I:
+		*type = GFX_FW_TYPE_IMU_I;
+		break;
+	case AMDGPU_UCODE_ID_IMU_D:
+		*type = GFX_FW_TYPE_IMU_D;
+		break;
 	case AMDGPU_UCODE_ID_MAXIMUM:
 	default:
 		return -EINVAL;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
index 1c69e182242c..67619090a548 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
@@ -649,6 +649,7 @@ static int amdgpu_ucode_init_single_fw(struct amdgpu_device *adev,
 	const struct dmcub_firmware_header_v1_0 *dmcub_hdr = NULL;
 	const struct mes_firmware_header_v1_0 *mes_hdr = NULL;
 	const struct sdma_firmware_header_v2_0 *sdma_hdr = NULL;
+	const struct imu_firmware_header_v1_0 *imu_hdr = NULL;
 	u8 *ucode_addr;
 
 	if (NULL == ucode->fw)
@@ -666,6 +667,7 @@ static int amdgpu_ucode_init_single_fw(struct amdgpu_device *adev,
 	dmcub_hdr = (const struct dmcub_firmware_header_v1_0 *)ucode->fw->data;
 	mes_hdr = (const struct mes_firmware_header_v1_0 *)ucode->fw->data;
 	sdma_hdr = (const struct sdma_firmware_header_v2_0 *)ucode->fw->data;
+	imu_hdr = (const struct imu_firmware_header_v1_0 *)ucode->fw->data;
 
 	if (adev->firmware.load_type == AMDGPU_FW_LOAD_PSP) {
 		switch (ucode->ucode_id) {
@@ -762,6 +764,17 @@ static int amdgpu_ucode_init_single_fw(struct amdgpu_device *adev,
 			ucode->ucode_size = ucode->fw->size;
 			ucode_addr = (u8 *)ucode->fw->data;
 			break;
+		case AMDGPU_UCODE_ID_IMU_I:
+			ucode->ucode_size = le32_to_cpu(imu_hdr->imu_iram_ucode_size_bytes);
+			ucode_addr = (u8 *)ucode->fw->data +
+				le32_to_cpu(imu_hdr->header.ucode_array_offset_bytes);
+			break;
+		case AMDGPU_UCODE_ID_IMU_D:
+			ucode->ucode_size = le32_to_cpu(imu_hdr->imu_dram_ucode_size_bytes);
+			ucode_addr = (u8 *)ucode->fw->data +
+				le32_to_cpu(imu_hdr->header.ucode_array_offset_bytes) +
+				le32_to_cpu(imu_hdr->imu_iram_ucode_size_bytes);
+			break;
 		default:
 			ucode->ucode_size = le32_to_cpu(header->ucode_size_bytes);
 			ucode_addr = (u8 *)ucode->fw->data +
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h
index c6417778510c..127c034202a9 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h
@@ -397,6 +397,8 @@ enum AMDGPU_UCODE_ID {
 	AMDGPU_UCODE_ID_CP_MES_DATA,
 	AMDGPU_UCODE_ID_CP_MES1,
 	AMDGPU_UCODE_ID_CP_MES1_DATA,
+	AMDGPU_UCODE_ID_IMU_I,
+	AMDGPU_UCODE_ID_IMU_D,
 	AMDGPU_UCODE_ID_RLC_RESTORE_LIST_CNTL,
 	AMDGPU_UCODE_ID_RLC_RESTORE_LIST_GPM_MEM,
 	AMDGPU_UCODE_ID_RLC_RESTORE_LIST_SRM_MEM,
-- 
2.35.1



* [PATCH 06/29] drm/amdgpu: add convert for new gfx type
  2022-04-29 18:01 [PATCH 00/29] Add support for GC 11.0 Alex Deucher
                   ` (4 preceding siblings ...)
  2022-04-29 18:02 ` [PATCH 05/29] drm/amdgpu: support IMU front door load Alex Deucher
@ 2022-04-29 18:02 ` Alex Deucher
  2022-04-29 18:02 ` [PATCH 07/29] drm/amdgpu: init SDMA v6 microcode with PSP load type Alex Deucher
                   ` (22 subsequent siblings)
  28 siblings, 0 replies; 30+ messages in thread
From: Alex Deucher @ 2022-04-29 18:02 UTC (permalink / raw)
  To: amd-gfx; +Cc: Alex Deucher, Likun Gao, Hawking Zhang

From: Likun Gao <Likun.Gao@amd.com>

Add the firmware type conversion for the new CP RS64 related GFX ucode IDs.

Signed-off-by: Likun Gao <Likun.Gao@amd.com>
Reviewed-by: Hawking Zhang <Hawking.Zhang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c   | 33 +++++++++++++++++++++++
 drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h | 11 ++++++++
 2 files changed, 44 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
index 8b42d87f55ce..c615fe095bd4 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
@@ -2289,6 +2289,39 @@ static int psp_get_fw_type(struct amdgpu_firmware_info *ucode,
 	case AMDGPU_UCODE_ID_IMU_D:
 		*type = GFX_FW_TYPE_IMU_D;
 		break;
+	case AMDGPU_UCODE_ID_CP_RS64_PFP:
+		*type = GFX_FW_TYPE_RS64_PFP;
+		break;
+	case AMDGPU_UCODE_ID_CP_RS64_ME:
+		*type = GFX_FW_TYPE_RS64_ME;
+		break;
+	case AMDGPU_UCODE_ID_CP_RS64_MEC:
+		*type = GFX_FW_TYPE_RS64_MEC;
+		break;
+	case AMDGPU_UCODE_ID_CP_RS64_PFP_P0_STACK:
+		*type = GFX_FW_TYPE_RS64_PFP_P0_STACK;
+		break;
+	case AMDGPU_UCODE_ID_CP_RS64_PFP_P1_STACK:
+		*type = GFX_FW_TYPE_RS64_PFP_P1_STACK;
+		break;
+	case AMDGPU_UCODE_ID_CP_RS64_ME_P0_STACK:
+		*type = GFX_FW_TYPE_RS64_ME_P0_STACK;
+		break;
+	case AMDGPU_UCODE_ID_CP_RS64_ME_P1_STACK:
+		*type = GFX_FW_TYPE_RS64_ME_P1_STACK;
+		break;
+	case AMDGPU_UCODE_ID_CP_RS64_MEC_P0_STACK:
+		*type = GFX_FW_TYPE_RS64_MEC_P0_STACK;
+		break;
+	case AMDGPU_UCODE_ID_CP_RS64_MEC_P1_STACK:
+		*type = GFX_FW_TYPE_RS64_MEC_P1_STACK;
+		break;
+	case AMDGPU_UCODE_ID_CP_RS64_MEC_P2_STACK:
+		*type = GFX_FW_TYPE_RS64_MEC_P2_STACK;
+		break;
+	case AMDGPU_UCODE_ID_CP_RS64_MEC_P3_STACK:
+		*type = GFX_FW_TYPE_RS64_MEC_P3_STACK;
+		break;
 	case AMDGPU_UCODE_ID_MAXIMUM:
 	default:
 		return -EINVAL;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h
index 127c034202a9..273432b75fda 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h
@@ -389,6 +389,17 @@ enum AMDGPU_UCODE_ID {
 	AMDGPU_UCODE_ID_CP_CE,
 	AMDGPU_UCODE_ID_CP_PFP,
 	AMDGPU_UCODE_ID_CP_ME,
+	AMDGPU_UCODE_ID_CP_RS64_PFP,
+	AMDGPU_UCODE_ID_CP_RS64_ME,
+	AMDGPU_UCODE_ID_CP_RS64_MEC,
+	AMDGPU_UCODE_ID_CP_RS64_PFP_P0_STACK,
+	AMDGPU_UCODE_ID_CP_RS64_PFP_P1_STACK,
+	AMDGPU_UCODE_ID_CP_RS64_ME_P0_STACK,
+	AMDGPU_UCODE_ID_CP_RS64_ME_P1_STACK,
+	AMDGPU_UCODE_ID_CP_RS64_MEC_P0_STACK,
+	AMDGPU_UCODE_ID_CP_RS64_MEC_P1_STACK,
+	AMDGPU_UCODE_ID_CP_RS64_MEC_P2_STACK,
+	AMDGPU_UCODE_ID_CP_RS64_MEC_P3_STACK,
 	AMDGPU_UCODE_ID_CP_MEC1,
 	AMDGPU_UCODE_ID_CP_MEC1_JT,
 	AMDGPU_UCODE_ID_CP_MEC2,
-- 
2.35.1



* [PATCH 07/29] drm/amdgpu: init SDMA v6 microcode with PSP load type
  2022-04-29 18:01 [PATCH 00/29] Add support for GC 11.0 Alex Deucher
                   ` (5 preceding siblings ...)
  2022-04-29 18:02 ` [PATCH 06/29] drm/amdgpu: add convert for new gfx type Alex Deucher
@ 2022-04-29 18:02 ` Alex Deucher
  2022-04-29 18:02 ` [PATCH 08/29] drm/amdgpu: extend the show ucode name function Alex Deucher
                   ` (21 subsequent siblings)
  28 siblings, 0 replies; 30+ messages in thread
From: Alex Deucher @ 2022-04-29 18:02 UTC (permalink / raw)
  To: amd-gfx; +Cc: Alex Deucher, Likun Gao, Hawking Zhang

From: Likun Gao <Likun.Gao@amd.com>

Use the new SDMA ucode IDs when initializing the SDMA v6 microcode with
the PSP (front door) firmware load type.
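
As a rough sketch of how the SDMA v6.0 init path is expected to use the
new IDs (the instance/header variable names here are assumptions based
on the existing SDMA init_microcode code, not the literal hunk):

    struct amdgpu_firmware_info *info;

    if (adev->firmware.load_type == AMDGPU_FW_LOAD_PSP) {
            /* context thread (TH0) */
            info = &adev->firmware.ucode[AMDGPU_UCODE_ID_SDMA_UCODE_TH0];
            info->ucode_id = AMDGPU_UCODE_ID_SDMA_UCODE_TH0;
            info->fw = adev->sdma.instance[0].fw;
            adev->firmware.fw_size +=
                    ALIGN(le32_to_cpu(sdma_hdr->ctx_ucode_size_bytes), PAGE_SIZE);

            /* control thread (TH1) */
            info = &adev->firmware.ucode[AMDGPU_UCODE_ID_SDMA_UCODE_TH1];
            info->ucode_id = AMDGPU_UCODE_ID_SDMA_UCODE_TH1;
            info->fw = adev->sdma.instance[0].fw;
            adev->firmware.fw_size +=
                    ALIGN(le32_to_cpu(sdma_hdr->ctl_ucode_size_bytes), PAGE_SIZE);
    }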

Signed-off-by: Likun Gao <Likun.Gao@amd.com>
Reviewed-by: Hawking Zhang <Hawking.Zhang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h
index 273432b75fda..4042aa268dd6 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h
@@ -386,6 +386,8 @@ enum AMDGPU_UCODE_ID {
 	AMDGPU_UCODE_ID_SDMA5,
 	AMDGPU_UCODE_ID_SDMA6,
 	AMDGPU_UCODE_ID_SDMA7,
+	AMDGPU_UCODE_ID_SDMA_UCODE_TH0,
+	AMDGPU_UCODE_ID_SDMA_UCODE_TH1,
 	AMDGPU_UCODE_ID_CP_CE,
 	AMDGPU_UCODE_ID_CP_PFP,
 	AMDGPU_UCODE_ID_CP_ME,
-- 
2.35.1



* [PATCH 08/29] drm/amdgpu: extend the show ucode name function
  2022-04-29 18:01 [PATCH 00/29] Add support for GC 11.0 Alex Deucher
                   ` (6 preceding siblings ...)
  2022-04-29 18:02 ` [PATCH 07/29] drm/amdgpu: init SDMA v6 microcode with PSP load type Alex Deucher
@ 2022-04-29 18:02 ` Alex Deucher
  2022-04-29 18:02 ` [PATCH 09/29] drm/amdgpu/gfx: refine fw hdr check function Alex Deucher
                   ` (20 subsequent siblings)
  28 siblings, 0 replies; 30+ messages in thread
From: Alex Deucher @ 2022-04-29 18:02 UTC (permalink / raw)
  To: amd-gfx; +Cc: Alex Deucher, Likun Gao, Hawking Zhang

From: Likun Gao <Likun.Gao@amd.com>

Extend the amdgpu_ucode_name() function to return the names of the SDMA
TH0/TH1, IMU, RLCP, RLCV and MES related ucodes from their ucode IDs.
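
The helper is used for log messages; a minimal usage sketch
(illustrative, not part of the patch):

    /* e.g. while walking adev->firmware.ucode[] during init */
    DRM_DEBUG("loading firmware %s\n",
              amdgpu_ucode_name(ucode->ucode_id));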

Signed-off-by: Likun Gao <Likun.Gao@amd.com>
Reviewed-by: Hawking Zhang <Hawking.Zhang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
index 67619090a548..89bf746fe96a 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
@@ -515,6 +515,10 @@ const char *amdgpu_ucode_name(enum AMDGPU_UCODE_ID ucode_id)
 		return "SDMA6";
 	case AMDGPU_UCODE_ID_SDMA7:
 		return "SDMA7";
+	case AMDGPU_UCODE_ID_SDMA_UCODE_TH0:
+		return "SDMA_CTX";
+	case AMDGPU_UCODE_ID_SDMA_UCODE_TH1:
+		return "SDMA_CTL";
 	case AMDGPU_UCODE_ID_CP_CE:
 		return "CP_CE";
 	case AMDGPU_UCODE_ID_CP_PFP:
@@ -533,6 +537,10 @@ const char *amdgpu_ucode_name(enum AMDGPU_UCODE_ID ucode_id)
 		return "CP_MES";
 	case AMDGPU_UCODE_ID_CP_MES_DATA:
 		return "CP_MES_DATA";
+	case AMDGPU_UCODE_ID_CP_MES1:
+		return "CP_MES_KIQ";
+	case AMDGPU_UCODE_ID_CP_MES1_DATA:
+		return "CP_MES_KIQ_DATA";
 	case AMDGPU_UCODE_ID_RLC_RESTORE_LIST_CNTL:
 		return "RLC_RESTORE_LIST_CNTL";
 	case AMDGPU_UCODE_ID_RLC_RESTORE_LIST_GPM_MEM:
@@ -545,6 +553,14 @@ const char *amdgpu_ucode_name(enum AMDGPU_UCODE_ID ucode_id)
 		return "RLC_DRAM";
 	case AMDGPU_UCODE_ID_RLC_G:
 		return "RLC_G";
+	case AMDGPU_UCODE_ID_RLC_P:
+		return "RLC_P";
+	case AMDGPU_UCODE_ID_RLC_V:
+		return "RLC_V";
+	case AMDGPU_UCODE_ID_IMU_I:
+		return "IMU_I";
+	case AMDGPU_UCODE_ID_IMU_D:
+		return "IMU_D";
 	case AMDGPU_UCODE_ID_STORAGE:
 		return "STORAGE";
 	case AMDGPU_UCODE_ID_SMC:
-- 
2.35.1



* [PATCH 09/29] drm/amdgpu/gfx: refine fw hdr check function
  2022-04-29 18:01 [PATCH 00/29] Add support for GC 11.0 Alex Deucher
                   ` (7 preceding siblings ...)
  2022-04-29 18:02 ` [PATCH 08/29] drm/amdgpu: extend the show ucode name function Alex Deucher
@ 2022-04-29 18:02 ` Alex Deucher
  2022-04-29 18:02 ` [PATCH 10/29] drm/amd/amdgpu: adjust the fw load type list Alex Deucher
                   ` (19 subsequent siblings)
  28 siblings, 0 replies; 30+ messages in thread
From: Alex Deucher @ 2022-04-29 18:02 UTC (permalink / raw)
  To: amd-gfx; +Cc: Alex Deucher, Likun Gao, Wenhui Sheng, Hawking Zhang

From: Likun Gao <Likun.Gao@amd.com>

The return value of amdgpu_ucode_hdr_version() was inverted and didn't
make sense, so change it to return true when the firmware header version
matches the passed-in parameters.
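
With the corrected semantics the helper reads naturally as a predicate.
A minimal usage sketch (illustrative only; the major/minor parameter
order is taken from the function shown in the hunk below):

    union amdgpu_firmware_header *hdr =
            (union amdgpu_firmware_header *)fw->data;

    if (amdgpu_ucode_hdr_version(hdr, 2, 0)) {
            /* header is v2.0, so the v2-only fields can be parsed */
    }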

Signed-off-by: Wenhui Sheng <Wenhui.Sheng@amd.com>
Signed-off-by: Likun Gao <Likun.Gao@amd.com>
Reviewed-by: Hawking Zhang <Hawking.Zhang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
index 89bf746fe96a..86fe9da72e6c 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
@@ -428,8 +428,8 @@ bool amdgpu_ucode_hdr_version(union amdgpu_firmware_header *hdr,
 {
 	if ((hdr->common.header_version_major == hdr_major) &&
 		(hdr->common.header_version_minor == hdr_minor))
-		return false;
-	return true;
+		return true;
+	return false;
 }
 
 enum amdgpu_firmware_load_type
-- 
2.35.1



* [PATCH 10/29] drm/amd/amdgpu: adjust the fw load type list
  2022-04-29 18:01 [PATCH 00/29] Add support for GC 11.0 Alex Deucher
                   ` (8 preceding siblings ...)
  2022-04-29 18:02 ` [PATCH 09/29] drm/amdgpu/gfx: refine fw hdr check function Alex Deucher
@ 2022-04-29 18:02 ` Alex Deucher
  2022-04-29 18:02 ` [PATCH 11/29] drm/amdgpu: correct cp doorbell range Alex Deucher
                   ` (18 subsequent siblings)
  28 siblings, 0 replies; 30+ messages in thread
From: Alex Deucher @ 2022-04-29 18:02 UTC (permalink / raw)
  To: amd-gfx; +Cc: Alex Deucher, Likun Gao, Chengming Gui

From: Chengming Gui <Jack.Gui@amd.com>

Use 0 for legacy backdoor (direct) loading and 1 for frontdoor (PSP)
loading.

Signed-off-by: Chengming Gui <Jack.Gui@amd.com>
Reviewed-by: Likun Gao <Likun.Gao@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h
index 4042aa268dd6..4439e0119f19 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h
@@ -445,8 +445,8 @@ enum AMDGPU_UCODE_STATUS {
 
 enum amdgpu_firmware_load_type {
 	AMDGPU_FW_LOAD_DIRECT = 0,
-	AMDGPU_FW_LOAD_SMU,
 	AMDGPU_FW_LOAD_PSP,
+	AMDGPU_FW_LOAD_SMU,
 	AMDGPU_FW_LOAD_RLC_BACKDOOR_AUTO,
 };
 
-- 
2.35.1



* [PATCH 11/29] drm/amdgpu: correct cp doorbell range
  2022-04-29 18:01 [PATCH 00/29] Add support for GC 11.0 Alex Deucher
                   ` (9 preceding siblings ...)
  2022-04-29 18:02 ` [PATCH 10/29] drm/amd/amdgpu: adjust the fw load type list Alex Deucher
@ 2022-04-29 18:02 ` Alex Deucher
  2022-04-29 18:02 ` [PATCH 12/29] drm/amd/amdgpu: add more fw load type to fit new ASICs Alex Deucher
                   ` (17 subsequent siblings)
  28 siblings, 0 replies; 30+ messages in thread
From: Alex Deucher @ 2022-04-29 18:02 UTC (permalink / raw)
  To: amd-gfx; +Cc: Alex Deucher, Jack Xiao, Evan Quan, Hawking Zhang

From: Jack Xiao <Jack.Xiao@amd.com>

1. Move the MES doorbells inside the MEC doorbell range, since MES
   belongs to the MEC block.
2. Set the correct gfx/mec doorbell ranges so that the firmware can
   correctly detect gfx/compute workloads and enter/exit the power
   saving state.
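
For reference, the resulting Navi1x/SOC21 doorbell layout (values are
the doorbell indices defined in amdgpu_doorbell.h, summarized from the
hunk below):

    0x000 - 0x00A  KIQ + MEC compute rings
    0x00B - 0x00C  MES ring0 / ring1 (now inside the MEC range)
    0x00D - 0x08A  compute user queues
    0x08B - 0x08C  GFX ring0 / ring1
    0x08D - 0x0FF  GFX user queues (new)
    0x100 onwards  SDMA engines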

Signed-off-by: Jack Xiao <Jack.Xiao@amd.com>
Reviewed-by: Hawking Zhang <Hawking.Zhang@amd.com>
Tested-and-acked-by: Evan Quan <evan.quan@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_doorbell.h | 11 ++++++++---
 drivers/gpu/drm/amd/amdgpu/nv.c              |  4 ++++
 drivers/gpu/drm/amd/amdgpu/soc21.c           |  4 ++++
 3 files changed, 16 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_doorbell.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_doorbell.h
index 2d9485e67125..7199b6b0be81 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_doorbell.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_doorbell.h
@@ -52,6 +52,8 @@ struct amdgpu_doorbell_index {
 	uint32_t userqueue_end;
 	uint32_t gfx_ring0;
 	uint32_t gfx_ring1;
+	uint32_t gfx_userqueue_start;
+	uint32_t gfx_userqueue_end;
 	uint32_t sdma_engine[8];
 	uint32_t mes_ring0;
 	uint32_t mes_ring1;
@@ -175,12 +177,15 @@ typedef enum _AMDGPU_NAVI10_DOORBELL_ASSIGNMENT
 	AMDGPU_NAVI10_DOORBELL_MEC_RING5		= 0x008,
 	AMDGPU_NAVI10_DOORBELL_MEC_RING6		= 0x009,
 	AMDGPU_NAVI10_DOORBELL_MEC_RING7		= 0x00A,
-	AMDGPU_NAVI10_DOORBELL_USERQUEUE_START		= 0x00B,
+	AMDGPU_NAVI10_DOORBELL_MES_RING0	        = 0x00B,
+	AMDGPU_NAVI10_DOORBELL_MES_RING1		= 0x00C,
+	AMDGPU_NAVI10_DOORBELL_USERQUEUE_START		= 0x00D,
 	AMDGPU_NAVI10_DOORBELL_USERQUEUE_END		= 0x08A,
 	AMDGPU_NAVI10_DOORBELL_GFX_RING0		= 0x08B,
 	AMDGPU_NAVI10_DOORBELL_GFX_RING1		= 0x08C,
-	AMDGPU_NAVI10_DOORBELL_MES_RING0	        = 0x090,
-	AMDGPU_NAVI10_DOORBELL_MES_RING1		= 0x091,
+	AMDGPU_NAVI10_DOORBELL_GFX_USERQUEUE_START	= 0x08D,
+	AMDGPU_NAVI10_DOORBELL_GFX_USERQUEUE_END	= 0x0FF,
+
 	/* SDMA:256~335*/
 	AMDGPU_NAVI10_DOORBELL_sDMA_ENGINE0		= 0x100,
 	AMDGPU_NAVI10_DOORBELL_sDMA_ENGINE1		= 0x10A,
diff --git a/drivers/gpu/drm/amd/amdgpu/nv.c b/drivers/gpu/drm/amd/amdgpu/nv.c
index 8cf1a7f8a632..8ecfd66c4cee 100644
--- a/drivers/gpu/drm/amd/amdgpu/nv.c
+++ b/drivers/gpu/drm/amd/amdgpu/nv.c
@@ -607,6 +607,10 @@ static void nv_init_doorbell_index(struct amdgpu_device *adev)
 	adev->doorbell_index.userqueue_end = AMDGPU_NAVI10_DOORBELL_USERQUEUE_END;
 	adev->doorbell_index.gfx_ring0 = AMDGPU_NAVI10_DOORBELL_GFX_RING0;
 	adev->doorbell_index.gfx_ring1 = AMDGPU_NAVI10_DOORBELL_GFX_RING1;
+	adev->doorbell_index.gfx_userqueue_start =
+		AMDGPU_NAVI10_DOORBELL_GFX_USERQUEUE_START;
+	adev->doorbell_index.gfx_userqueue_end =
+		AMDGPU_NAVI10_DOORBELL_GFX_USERQUEUE_END;
 	adev->doorbell_index.mes_ring0 = AMDGPU_NAVI10_DOORBELL_MES_RING0;
 	adev->doorbell_index.mes_ring1 = AMDGPU_NAVI10_DOORBELL_MES_RING1;
 	adev->doorbell_index.sdma_engine[0] = AMDGPU_NAVI10_DOORBELL_sDMA_ENGINE0;
diff --git a/drivers/gpu/drm/amd/amdgpu/soc21.c b/drivers/gpu/drm/amd/amdgpu/soc21.c
index 68985a59a6a6..80b558f649e0 100644
--- a/drivers/gpu/drm/amd/amdgpu/soc21.c
+++ b/drivers/gpu/drm/amd/amdgpu/soc21.c
@@ -409,6 +409,10 @@ static void soc21_init_doorbell_index(struct amdgpu_device *adev)
 	adev->doorbell_index.userqueue_end = AMDGPU_NAVI10_DOORBELL_USERQUEUE_END;
 	adev->doorbell_index.gfx_ring0 = AMDGPU_NAVI10_DOORBELL_GFX_RING0;
 	adev->doorbell_index.gfx_ring1 = AMDGPU_NAVI10_DOORBELL_GFX_RING1;
+	adev->doorbell_index.gfx_userqueue_start =
+		AMDGPU_NAVI10_DOORBELL_GFX_USERQUEUE_START;
+	adev->doorbell_index.gfx_userqueue_end =
+		AMDGPU_NAVI10_DOORBELL_GFX_USERQUEUE_END;
 	adev->doorbell_index.mes_ring0 = AMDGPU_NAVI10_DOORBELL_MES_RING0;
 	adev->doorbell_index.mes_ring1 = AMDGPU_NAVI10_DOORBELL_MES_RING1;
 	adev->doorbell_index.sdma_engine[0] = AMDGPU_NAVI10_DOORBELL_sDMA_ENGINE0;
-- 
2.35.1



* [PATCH 12/29] drm/amd/amdgpu: add more fw load type to fit new ASICs
  2022-04-29 18:01 [PATCH 00/29] Add support for GC 11.0 Alex Deucher
                   ` (10 preceding siblings ...)
  2022-04-29 18:02 ` [PATCH 11/29] drm/amdgpu: correct cp doorbell range Alex Deucher
@ 2022-04-29 18:02 ` Alex Deucher
  2022-04-29 18:02 ` [PATCH 13/29] drm/amdgpu: skip amdgpu_ucode_create_bo for backdoor autoload Alex Deucher
                   ` (16 subsequent siblings)
  28 siblings, 0 replies; 30+ messages in thread
From: Alex Deucher @ 2022-04-29 18:02 UTC (permalink / raw)
  To: amd-gfx; +Cc: Alex Deucher, Chengming Gui, Hawking Zhang

From: Chengming Gui <Jack.Gui@amd.com>

Align the exported fw load types with the ones used internally.

Signed-off-by: Chengming Gui <Jack.Gui@amd.com>
Reviewed-by: Hawking Zhang <Hawking.Zhang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
index 9e72b3ec5d4d..377322cb255a 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
@@ -307,7 +307,7 @@ module_param_named(dpm, amdgpu_dpm, int, 0444);
  * to -1 to select the default loading mode for the ASIC, as defined
  * by the driver.  The default is -1 (auto).
  */
-MODULE_PARM_DESC(fw_load_type, "firmware loading type (0 = force direct if supported, -1 = auto)");
+MODULE_PARM_DESC(fw_load_type, "firmware loading type (3 = rlc backdoor autoload if supported, 2 = smu load if supported, 1 = psp load, 0 = force direct if supported, -1 = auto)");
 module_param_named(fw_load_type, amdgpu_fw_load_type, int, 0444);
 
 /**
-- 
2.35.1



* [PATCH 13/29] drm/amdgpu: skip amdgpu_ucode_create_bo for backdoor autoload
  2022-04-29 18:01 [PATCH 00/29] Add support for GC 11.0 Alex Deucher
                   ` (11 preceding siblings ...)
  2022-04-29 18:02 ` [PATCH 12/29] drm/amd/amdgpu: add more fw load type to fit new ASICs Alex Deucher
@ 2022-04-29 18:02 ` Alex Deucher
  2022-04-29 18:02 ` [PATCH 14/29] drm/amdgpu: fix the fw size for sdma Alex Deucher
                   ` (15 subsequent siblings)
  28 siblings, 0 replies; 30+ messages in thread
From: Alex Deucher @ 2022-04-29 18:02 UTC (permalink / raw)
  To: amd-gfx; +Cc: Alex Deucher, Likun Gao, Hawking Zhang

From: Likun Gao <Likun.Gao@amd.com>

Skip amdgpu_ucode_create_bo for backdoor RLC autoload.

Signed-off-by: Likun Gao <Likun.Gao@amd.com>
Reviewed-by: Hawking Zhang <Hawking.Zhang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
index 86fe9da72e6c..5e731db12e17 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
@@ -834,7 +834,8 @@ static int amdgpu_ucode_patch_jt(struct amdgpu_firmware_info *ucode,
 
 int amdgpu_ucode_create_bo(struct amdgpu_device *adev)
 {
-	if (adev->firmware.load_type != AMDGPU_FW_LOAD_DIRECT) {
+	if ((adev->firmware.load_type != AMDGPU_FW_LOAD_DIRECT) &&
+	    (adev->firmware.load_type != AMDGPU_FW_LOAD_RLC_BACKDOOR_AUTO)) {
 		amdgpu_bo_create_kernel(adev, adev->firmware.fw_size, PAGE_SIZE,
 			amdgpu_sriov_vf(adev) ? AMDGPU_GEM_DOMAIN_VRAM : AMDGPU_GEM_DOMAIN_GTT,
 			&adev->firmware.fw_buf,
@@ -852,7 +853,8 @@ int amdgpu_ucode_create_bo(struct amdgpu_device *adev)
 
 void amdgpu_ucode_free_bo(struct amdgpu_device *adev)
 {
-	if (adev->firmware.load_type != AMDGPU_FW_LOAD_DIRECT)
+	if ((adev->firmware.load_type != AMDGPU_FW_LOAD_DIRECT) &&
+	    (adev->firmware.load_type != AMDGPU_FW_LOAD_RLC_BACKDOOR_AUTO))
 		amdgpu_bo_free_kernel(&adev->firmware.fw_buf,
 		&adev->firmware.fw_buf_mc,
 		&adev->firmware.fw_buf_ptr);
-- 
2.35.1



* [PATCH 14/29] drm/amdgpu: fix the fw size for sdma
  2022-04-29 18:01 [PATCH 00/29] Add support for GC 11.0 Alex Deucher
                   ` (12 preceding siblings ...)
  2022-04-29 18:02 ` [PATCH 13/29] drm/amdgpu: skip amdgpu_ucode_create_bo for backdoor autoload Alex Deucher
@ 2022-04-29 18:02 ` Alex Deucher
  2022-04-29 18:02 ` [PATCH 15/29] drm/amdgpu/discovery: handle AMDGPU_FW_LOAD_RLC_BACKDOOR_AUTO in SMU Alex Deucher
                   ` (14 subsequent siblings)
  28 siblings, 0 replies; 30+ messages in thread
From: Alex Deucher @ 2022-04-29 18:02 UTC (permalink / raw)
  To: amd-gfx; +Cc: Alex Deucher, Likun Gao, Hawking Zhang

From: Likun Gao <Likun.Gao@amd.com>

For SDMA, using the total size of SDMA TH0 and TH1 to allocate the
firmware BO may cause the ucode data to overflow when the ucode is
copied into the BO, because of PAGE alignment.
IMU has the same issue.
Fix this by aligning the firmware size per firmware ID.

Signed-off-by: Likun Gao <Likun.Gao@amd.com>
Reviewed-by: Hawking Zhang <Hawking.Zhang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
index 5e731db12e17..f8ff9128c15b 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
@@ -688,12 +688,12 @@ static int amdgpu_ucode_init_single_fw(struct amdgpu_device *adev,
 	if (adev->firmware.load_type == AMDGPU_FW_LOAD_PSP) {
 		switch (ucode->ucode_id) {
 		case AMDGPU_UCODE_ID_SDMA_UCODE_TH0:
-			ucode->ucode_size = le32_to_cpu(sdma_hdr->ctx_jt_offset + sdma_hdr->ctx_jt_size);
+			ucode->ucode_size = le32_to_cpu(sdma_hdr->ctx_ucode_size_bytes);
 			ucode_addr = (u8 *)ucode->fw->data +
 				le32_to_cpu(sdma_hdr->header.ucode_array_offset_bytes);
 			break;
 		case AMDGPU_UCODE_ID_SDMA_UCODE_TH1:
-			ucode->ucode_size = le32_to_cpu(sdma_hdr->ctl_jt_offset + sdma_hdr->ctl_jt_size);
+			ucode->ucode_size = le32_to_cpu(sdma_hdr->ctl_ucode_size_bytes);
 			ucode_addr = (u8 *)ucode->fw->data +
 				le32_to_cpu(sdma_hdr->ctl_ucode_offset);
 			break;
-- 
2.35.1



* [PATCH 15/29] drm/amdgpu/discovery: handle AMDGPU_FW_LOAD_RLC_BACKDOOR_AUTO in SMU
  2022-04-29 18:01 [PATCH 00/29] Add support for GC 11.0 Alex Deucher
                   ` (13 preceding siblings ...)
  2022-04-29 18:02 ` [PATCH 14/29] drm/amdgpu: fix the fw size for sdma Alex Deucher
@ 2022-04-29 18:02 ` Alex Deucher
  2022-04-29 18:02 ` [PATCH 16/29] drm/amdgpu: renovate sdma fw struct Alex Deucher
                   ` (13 subsequent siblings)
  28 siblings, 0 replies; 30+ messages in thread
From: Alex Deucher @ 2022-04-29 18:02 UTC (permalink / raw)
  To: amd-gfx; +Cc: Alex Deucher

Handle SMU load ordering when firmware load type is
AMDGPU_FW_LOAD_RLC_BACKDOOR_AUTO.  This works similarly
to AMDGPU_FW_LOAD_DIRECT where the SMU load order is
different from the standard ordering when front door
loading is enabled.

Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
index 9c177ccc7ed9..0adcbd38cf61 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
@@ -2292,8 +2292,9 @@ int amdgpu_discovery_set_ip_blocks(struct amdgpu_device *adev)
 	if (r)
 		return r;
 
-	if (adev->firmware.load_type == AMDGPU_FW_LOAD_DIRECT &&
-	    !amdgpu_sriov_vf(adev)) {
+	if ((adev->firmware.load_type == AMDGPU_FW_LOAD_DIRECT &&
+	     !amdgpu_sriov_vf(adev)) ||
+	    (adev->firmware.load_type == AMDGPU_FW_LOAD_RLC_BACKDOOR_AUTO && amdgpu_dpm == 1)) {
 		r = amdgpu_discovery_set_smu_ip_blocks(adev);
 		if (r)
 			return r;
-- 
2.35.1



* [PATCH 16/29] drm/amdgpu: renovate sdma fw struct
  2022-04-29 18:01 [PATCH 00/29] Add support for GC 11.0 Alex Deucher
                   ` (14 preceding siblings ...)
  2022-04-29 18:02 ` [PATCH 15/29] drm/amdgpu/discovery: handle AMDGPU_FW_LOAD_RLC_BACKDOOR_AUTO in SMU Alex Deucher
@ 2022-04-29 18:02 ` Alex Deucher
  2022-04-29 18:02 ` [PATCH 17/29] drm/amdgpu: support RS64 CP fw front door load Alex Deucher
                   ` (12 subsequent siblings)
  28 siblings, 0 replies; 30+ messages in thread
From: Alex Deucher @ 2022-04-29 18:02 UTC (permalink / raw)
  To: amd-gfx; +Cc: Alex Deucher, Likun Gao, Hawking Zhang

From: Likun Gao <Likun.Gao@amd.com>

Add version 2 of the SDMA firmware header struct to support the SDMA v6
and newer firmware layout.

v2: squash in fix
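
The layout implied by the new header, and relied on by the earlier SDMA
front door load patch, is roughly: the context thread (TH0) image at
header.ucode_array_offset_bytes and the control thread (TH1) image at
ctl_ucode_offset, with per-thread jump table offsets/sizes alongside.
A sketch of locating both images (illustrative only):

    const struct sdma_firmware_header_v2_0 *hdr =
            (const struct sdma_firmware_header_v2_0 *)fw->data;

    const u8 *th0 = (const u8 *)fw->data +
            le32_to_cpu(hdr->header.ucode_array_offset_bytes);
    u32 th0_size = le32_to_cpu(hdr->ctx_ucode_size_bytes);

    const u8 *th1 = (const u8 *)fw->data +
            le32_to_cpu(hdr->ctl_ucode_offset);
    u32 th1_size = le32_to_cpu(hdr->ctl_ucode_size_bytes);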

Signed-off-by: Likun Gao <Likun.Gao@amd.com>
Reviewed-by: Hawking Zhang <Hawking.Zhang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c | 11 +++++++++++
 drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h | 14 ++++++++++++++
 2 files changed, 25 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
index f8ff9128c15b..3d1258fec48e 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
@@ -244,6 +244,17 @@ void amdgpu_ucode_print_sdma_hdr(const struct common_firmware_header *hdr)
 				container_of(sdma_hdr, struct sdma_firmware_header_v1_1, v1_0);
 			DRM_DEBUG("digest_size: %u\n", le32_to_cpu(sdma_v1_1_hdr->digest_size));
 		}
+	} else if (version_major == 2) {
+		const struct sdma_firmware_header_v2_0 *sdma_hdr =
+			container_of(hdr, struct sdma_firmware_header_v2_0, header);
+
+		DRM_DEBUG("ucode_feature_version: %u\n",
+			  le32_to_cpu(sdma_hdr->ucode_feature_version));
+		DRM_DEBUG("ctx_jt_offset: %u\n", le32_to_cpu(sdma_hdr->ctx_jt_offset));
+		DRM_DEBUG("ctx_jt_size: %u\n", le32_to_cpu(sdma_hdr->ctx_jt_size));
+		DRM_DEBUG("ctl_ucode_offset: %u\n", le32_to_cpu(sdma_hdr->ctl_ucode_offset));
+		DRM_DEBUG("ctl_jt_offset: %u\n", le32_to_cpu(sdma_hdr->ctl_jt_offset));
+		DRM_DEBUG("ctl_jt_size: %u\n", le32_to_cpu(sdma_hdr->ctl_jt_size));
 	} else {
 		DRM_ERROR("Unknown SDMA ucode version: %u.%u\n",
 			  version_major, version_minor);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h
index 4439e0119f19..f510b6aa82ab 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.h
@@ -281,6 +281,19 @@ struct sdma_firmware_header_v1_1 {
 	uint32_t digest_size;
 };
 
+/* version_major=2, version_minor=0 */
+struct sdma_firmware_header_v2_0 {
+	struct common_firmware_header header;
+	uint32_t ucode_feature_version;
+	uint32_t ctx_ucode_size_bytes; /* context thread ucode size */
+	uint32_t ctx_jt_offset; /* context thread jt location */
+	uint32_t ctx_jt_size; /* context thread size of jt */
+	uint32_t ctl_ucode_offset;
+	uint32_t ctl_ucode_size_bytes; /* control thread ucode size */
+	uint32_t ctl_jt_offset; /* control thread jt location */
+	uint32_t ctl_jt_size; /* control thread size of jt */
+};
+
 /* gpu info payload */
 struct gpu_info_firmware_v1_0 {
 	uint32_t gc_num_se;
@@ -364,6 +377,7 @@ union amdgpu_firmware_header {
 	struct rlc_firmware_header_v2_3 rlc_v2_3;
 	struct sdma_firmware_header_v1_0 sdma;
 	struct sdma_firmware_header_v1_1 sdma_v1_1;
+	struct sdma_firmware_header_v2_0 sdma_v2_0;
 	struct gpu_info_firmware_header_v1_0 gpu_info;
 	struct dmcu_firmware_header_v1_0 dmcu;
 	struct dmcub_firmware_header_v1_0 dmcub;
-- 
2.35.1



* [PATCH 17/29] drm/amdgpu: support RS64 CP fw front door load
  2022-04-29 18:01 [PATCH 00/29] Add support for GC 11.0 Alex Deucher
                   ` (15 preceding siblings ...)
  2022-04-29 18:02 ` [PATCH 16/29] drm/amdgpu: renovate sdma fw struct Alex Deucher
@ 2022-04-29 18:02 ` Alex Deucher
  2022-04-29 18:02 ` [PATCH 18/29] drm/amdgpu: add mes unmap legacy queue routine Alex Deucher
                   ` (11 subsequent siblings)
  28 siblings, 0 replies; 30+ messages in thread
From: Alex Deucher @ 2022-04-29 18:02 UTC (permalink / raw)
  To: amd-gfx; +Cc: Alex Deucher, Likun Gao, Hawking Zhang

From: Likun Gao <Likun.Gao@amd.com>

Support front door load of the RS64 CP firmware.

Signed-off-by: Likun Gao <Likun.Gao@amd.com>
Reviewed-by: Hawking Zhang <Hawking.Zhang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c | 57 +++++++++++++++++++++++
 1 file changed, 57 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
index 3d1258fec48e..00fa7822458b 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
@@ -672,6 +672,7 @@ static int amdgpu_ucode_init_single_fw(struct amdgpu_device *adev,
 {
 	const struct common_firmware_header *header = NULL;
 	const struct gfx_firmware_header_v1_0 *cp_hdr = NULL;
+	const struct gfx_firmware_header_v2_0 *cpv2_hdr = NULL;
 	const struct dmcu_firmware_header_v1_0 *dmcu_hdr = NULL;
 	const struct dmcub_firmware_header_v1_0 *dmcub_hdr = NULL;
 	const struct mes_firmware_header_v1_0 *mes_hdr = NULL;
@@ -690,6 +691,7 @@ static int amdgpu_ucode_init_single_fw(struct amdgpu_device *adev,
 
 	header = (const struct common_firmware_header *)ucode->fw->data;
 	cp_hdr = (const struct gfx_firmware_header_v1_0 *)ucode->fw->data;
+	cpv2_hdr = (const struct gfx_firmware_header_v2_0 *)ucode->fw->data;
 	dmcu_hdr = (const struct dmcu_firmware_header_v1_0 *)ucode->fw->data;
 	dmcub_hdr = (const struct dmcub_firmware_header_v1_0 *)ucode->fw->data;
 	mes_hdr = (const struct mes_firmware_header_v1_0 *)ucode->fw->data;
@@ -802,6 +804,61 @@ static int amdgpu_ucode_init_single_fw(struct amdgpu_device *adev,
 				le32_to_cpu(imu_hdr->header.ucode_array_offset_bytes) +
 				le32_to_cpu(imu_hdr->imu_iram_ucode_size_bytes);
 			break;
+		case AMDGPU_UCODE_ID_CP_RS64_PFP:
+			ucode->ucode_size = le32_to_cpu(cpv2_hdr->ucode_size_bytes);
+			ucode_addr = (u8 *)ucode->fw->data +
+				le32_to_cpu(header->ucode_array_offset_bytes);
+			break;
+		case AMDGPU_UCODE_ID_CP_RS64_PFP_P0_STACK:
+			ucode->ucode_size = le32_to_cpu(cpv2_hdr->data_size_bytes);
+			ucode_addr = (u8 *)ucode->fw->data +
+				le32_to_cpu(cpv2_hdr->data_offset_bytes);
+			break;
+		case AMDGPU_UCODE_ID_CP_RS64_PFP_P1_STACK:
+			ucode->ucode_size = le32_to_cpu(cpv2_hdr->data_size_bytes);
+			ucode_addr = (u8 *)ucode->fw->data +
+				le32_to_cpu(cpv2_hdr->data_offset_bytes);
+			break;
+		case AMDGPU_UCODE_ID_CP_RS64_ME:
+			ucode->ucode_size = le32_to_cpu(cpv2_hdr->ucode_size_bytes);
+			ucode_addr = (u8 *)ucode->fw->data +
+				le32_to_cpu(header->ucode_array_offset_bytes);
+			break;
+		case AMDGPU_UCODE_ID_CP_RS64_ME_P0_STACK:
+			ucode->ucode_size = le32_to_cpu(cpv2_hdr->data_size_bytes);
+			ucode_addr = (u8 *)ucode->fw->data +
+				le32_to_cpu(cpv2_hdr->data_offset_bytes);
+			break;
+		case AMDGPU_UCODE_ID_CP_RS64_ME_P1_STACK:
+			ucode->ucode_size = le32_to_cpu(cpv2_hdr->data_size_bytes);
+			ucode_addr = (u8 *)ucode->fw->data +
+				le32_to_cpu(cpv2_hdr->data_offset_bytes);
+			break;
+		case AMDGPU_UCODE_ID_CP_RS64_MEC:
+			ucode->ucode_size = le32_to_cpu(cpv2_hdr->ucode_size_bytes);
+			ucode_addr = (u8 *)ucode->fw->data +
+				le32_to_cpu(header->ucode_array_offset_bytes);
+			break;
+		case AMDGPU_UCODE_ID_CP_RS64_MEC_P0_STACK:
+			ucode->ucode_size = le32_to_cpu(cpv2_hdr->data_size_bytes);
+			ucode_addr = (u8 *)ucode->fw->data +
+				le32_to_cpu(cpv2_hdr->data_offset_bytes);
+			break;
+		case AMDGPU_UCODE_ID_CP_RS64_MEC_P1_STACK:
+			ucode->ucode_size = le32_to_cpu(cpv2_hdr->data_size_bytes);
+			ucode_addr = (u8 *)ucode->fw->data +
+				le32_to_cpu(cpv2_hdr->data_offset_bytes);
+			break;
+		case AMDGPU_UCODE_ID_CP_RS64_MEC_P2_STACK:
+			ucode->ucode_size = le32_to_cpu(cpv2_hdr->data_size_bytes);
+			ucode_addr = (u8 *)ucode->fw->data +
+				le32_to_cpu(cpv2_hdr->data_offset_bytes);
+			break;
+		case AMDGPU_UCODE_ID_CP_RS64_MEC_P3_STACK:
+			ucode->ucode_size = le32_to_cpu(cpv2_hdr->data_size_bytes);
+			ucode_addr = (u8 *)ucode->fw->data +
+				le32_to_cpu(cpv2_hdr->data_offset_bytes);
+			break;
 		default:
 			ucode->ucode_size = le32_to_cpu(header->ucode_size_bytes);
 			ucode_addr = (u8 *)ucode->fw->data +
-- 
2.35.1



* [PATCH 18/29] drm/amdgpu: add mes unmap legacy queue routine
  2022-04-29 18:01 [PATCH 00/29] Add support for GC 11.0 Alex Deucher
                   ` (16 preceding siblings ...)
  2022-04-29 18:02 ` [PATCH 17/29] drm/amdgpu: support RS64 CP fw front door load Alex Deucher
@ 2022-04-29 18:02 ` Alex Deucher
  2022-04-29 18:02 ` [PATCH 19/29] drm/amdgpu: support imu for gfx11 Alex Deucher
                   ` (10 subsequent siblings)
  28 siblings, 0 replies; 30+ messages in thread
From: Alex Deucher @ 2022-04-29 18:02 UTC (permalink / raw)
  To: amd-gfx; +Cc: Alex Deucher, Jack Xiao, Hawking Zhang

From: Jack Xiao <Jack.Xiao@amd.com>

Since the MES KIQ has been taken over by the MES scheduler, the driver
can't directly use the MES KIQ to unmap queues.  The driver has to use
the MES scheduler API to unmap legacy queues.
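
A rough sketch of the resulting unmap flow when the MES scheduler owns
the queues (the helper name comes from this patch; the surrounding
variables are assumed from the caller's context and the exact parameter
list may differ):

    /* instead of submitting UNMAP_QUEUES on the KIQ ring ... */
    if (adev->enable_mes)
            amdgpu_mes_unmap_legacy_queue(adev, ring, RESET_QUEUES, 0, 0);
    else
            kiq->pmf->kiq_unmap_queues(kiq_ring, ring, RESET_QUEUES, 0, 0);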

Signed-off-by: Jack Xiao <Jack.Xiao@amd.com>
Reviewed-by: Hawking Zhang <Hawking.Zhang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c       |   8 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_mes.c       | 335 +++++++++++-------
 drivers/gpu/drm/amd/amdgpu/amdgpu_mes.h       |  85 ++++-
 drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c        |   6 +
 drivers/gpu/drm/amd/amdgpu/gmc_v11_0.c        |   7 +-
 drivers/gpu/drm/amd/amdgpu/mes_v10_1.c        |  69 +++-
 .../drm/amd/{amdgpu => include}/mes_api_def.h | 167 +++++++--
 7 files changed, 526 insertions(+), 151 deletions(-)
 rename drivers/gpu/drm/amd/{amdgpu => include}/mes_api_def.h (68%)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
index 40df1e04d682..5d6b04fc6206 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
@@ -367,7 +367,7 @@ int amdgpu_gfx_mqd_sw_init(struct amdgpu_device *adev,
 
 	/* create MQD for KIQ */
 	ring = &adev->gfx.kiq.ring;
-	if (!ring->mqd_obj) {
+	if (!adev->enable_mes_kiq && !ring->mqd_obj) {
 		/* originaly the KIQ MQD is put in GTT domain, but for SRIOV VRAM domain is a must
 		 * otherwise hypervisor trigger SAVE_VF fail after driver unloaded which mean MQD
 		 * deallocated and gart_unbind, to strict diverage we decide to use VRAM domain for
@@ -464,7 +464,7 @@ int amdgpu_gfx_disable_kcq(struct amdgpu_device *adev)
 {
 	struct amdgpu_kiq *kiq = &adev->gfx.kiq;
 	struct amdgpu_ring *kiq_ring = &kiq->ring;
-	int i, r;
+	int i, r = 0;
 
 	if (!kiq->pmf || !kiq->pmf->kiq_unmap_queues)
 		return -EINVAL;
@@ -479,7 +479,9 @@ int amdgpu_gfx_disable_kcq(struct amdgpu_device *adev)
 	for (i = 0; i < adev->gfx.num_compute_rings; i++)
 		kiq->pmf->kiq_unmap_queues(kiq_ring, &adev->gfx.compute_ring[i],
 					   RESET_QUEUES, 0, 0);
-	r = amdgpu_ring_test_helper(kiq_ring);
+
+	if (adev->gfx.kiq.ring.sched.ready)
+		r = amdgpu_ring_test_helper(kiq_ring);
 	spin_unlock(&adev->gfx.kiq.ring_lock);
 
 	return r;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.c
index 5be30bf68b0c..72bafba1c470 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.c
@@ -150,7 +150,7 @@ int amdgpu_mes_init(struct amdgpu_device *adev)
 	idr_init(&adev->mes.queue_id_idr);
 	ida_init(&adev->mes.doorbell_ida);
 	spin_lock_init(&adev->mes.queue_id_lock);
-	mutex_init(&adev->mes.mutex);
+	mutex_init(&adev->mes.mutex_hidden);
 
 	adev->mes.total_max_queue = AMDGPU_FENCE_MES_QUEUE_ID_MASK;
 	adev->mes.vmid_mask_mmhub = 0xffffff00;
@@ -166,8 +166,12 @@ int amdgpu_mes_init(struct amdgpu_device *adev)
 	for (i = 0; i < AMDGPU_MES_MAX_GFX_PIPES; i++)
 		adev->mes.gfx_hqd_mask[i] = i ? 0 : 0xfffffffe;
 
-	for (i = 0; i < AMDGPU_MES_MAX_SDMA_PIPES; i++)
-		adev->mes.sdma_hqd_mask[i] = i ? 0 : 0x3fc;
+	for (i = 0; i < AMDGPU_MES_MAX_SDMA_PIPES; i++) {
+		if (adev->ip_versions[SDMA0_HWIP][0] < IP_VERSION(6, 0, 0))
+			adev->mes.sdma_hqd_mask[i] = i ? 0 : 0x3fc;
+		else
+			adev->mes.sdma_hqd_mask[i] = 0xfc;
+	}
 
 	for (i = 0; i < AMDGPU_MES_PRIORITY_NUM_LEVELS; i++)
 		adev->mes.agreegated_doorbells[i] = 0xffffffff;
@@ -207,7 +211,7 @@ int amdgpu_mes_init(struct amdgpu_device *adev)
 	idr_destroy(&adev->mes.gang_id_idr);
 	idr_destroy(&adev->mes.queue_id_idr);
 	ida_destroy(&adev->mes.doorbell_ida);
-	mutex_destroy(&adev->mes.mutex);
+	mutex_destroy(&adev->mes.mutex_hidden);
 	return r;
 }
 
@@ -219,7 +223,14 @@ void amdgpu_mes_fini(struct amdgpu_device *adev)
 	idr_destroy(&adev->mes.gang_id_idr);
 	idr_destroy(&adev->mes.queue_id_idr);
 	ida_destroy(&adev->mes.doorbell_ida);
-	mutex_destroy(&adev->mes.mutex);
+	mutex_destroy(&adev->mes.mutex_hidden);
+}
+
+static void amdgpu_mes_queue_free_mqd(struct amdgpu_mes_queue *q)
+{
+	amdgpu_bo_free_kernel(&q->mqd_obj,
+			      &q->mqd_gpu_addr,
+			      &q->mqd_cpu_ptr);
 }
 
 int amdgpu_mes_create_process(struct amdgpu_device *adev, int pasid,
@@ -228,13 +239,10 @@ int amdgpu_mes_create_process(struct amdgpu_device *adev, int pasid,
 	struct amdgpu_mes_process *process;
 	int r;
 
-	mutex_lock(&adev->mes.mutex);
-
 	/* allocate the mes process buffer */
 	process = kzalloc(sizeof(struct amdgpu_mes_process), GFP_KERNEL);
 	if (!process) {
 		DRM_ERROR("no more memory to create mes process\n");
-		mutex_unlock(&adev->mes.mutex);
 		return -ENOMEM;
 	}
 
@@ -244,18 +252,9 @@ int amdgpu_mes_create_process(struct amdgpu_device *adev, int pasid,
 	if (!process->doorbell_bitmap) {
 		DRM_ERROR("failed to allocate doorbell bitmap\n");
 		kfree(process);
-		mutex_unlock(&adev->mes.mutex);
 		return -ENOMEM;
 	}
 
-	/* add the mes process to idr list */
-	r = idr_alloc(&adev->mes.pasid_idr, process, pasid, pasid + 1,
-		      GFP_KERNEL);
-	if (r < 0) {
-		DRM_ERROR("failed to lock pasid=%d\n", pasid);
-		goto clean_up_memory;
-	}
-
 	/* allocate the process context bo and map it */
 	r = amdgpu_bo_create_kernel(adev, AMDGPU_MES_PROC_CTX_SIZE, PAGE_SIZE,
 				    AMDGPU_GEM_DOMAIN_GTT,
@@ -264,15 +263,29 @@ int amdgpu_mes_create_process(struct amdgpu_device *adev, int pasid,
 				    &process->proc_ctx_cpu_ptr);
 	if (r) {
 		DRM_ERROR("failed to allocate process context bo\n");
-		goto clean_up_pasid;
+		goto clean_up_memory;
 	}
 	memset(process->proc_ctx_cpu_ptr, 0, AMDGPU_MES_PROC_CTX_SIZE);
 
+	/*
+	 * Avoid taking any other locks under MES lock to avoid circular
+	 * lock dependencies.
+	 */
+	amdgpu_mes_lock(&adev->mes);
+
+	/* add the mes process to idr list */
+	r = idr_alloc(&adev->mes.pasid_idr, process, pasid, pasid + 1,
+		      GFP_KERNEL);
+	if (r < 0) {
+		DRM_ERROR("failed to lock pasid=%d\n", pasid);
+		goto clean_up_ctx;
+	}
+
 	/* allocate the starting doorbell index of the process */
 	r = amdgpu_mes_alloc_process_doorbells(adev, &process->doorbell_index);
 	if (r < 0) {
 		DRM_ERROR("failed to allocate doorbell for process\n");
-		goto clean_up_ctx;
+		goto clean_up_pasid;
 	}
 
 	DRM_DEBUG("process doorbell index = %d\n", process->doorbell_index);
@@ -283,19 +296,19 @@ int amdgpu_mes_create_process(struct amdgpu_device *adev, int pasid,
 	process->process_quantum = adev->mes.default_process_quantum;
 	process->pd_gpu_addr = amdgpu_bo_gpu_offset(vm->root.bo);
 
-	mutex_unlock(&adev->mes.mutex);
+	amdgpu_mes_unlock(&adev->mes);
 	return 0;
 
+clean_up_pasid:
+	idr_remove(&adev->mes.pasid_idr, pasid);
+	amdgpu_mes_unlock(&adev->mes);
 clean_up_ctx:
 	amdgpu_bo_free_kernel(&process->proc_ctx_bo,
 			      &process->proc_ctx_gpu_addr,
 			      &process->proc_ctx_cpu_ptr);
-clean_up_pasid:
-	idr_remove(&adev->mes.pasid_idr, pasid);
 clean_up_memory:
 	kfree(process->doorbell_bitmap);
 	kfree(process);
-	mutex_unlock(&adev->mes.mutex);
 	return r;
 }
 
@@ -308,18 +321,21 @@ void amdgpu_mes_destroy_process(struct amdgpu_device *adev, int pasid)
 	unsigned long flags;
 	int r;
 
-	mutex_lock(&adev->mes.mutex);
+	/*
+	 * Avoid taking any other locks under MES lock to avoid circular
+	 * lock dependencies.
+	 */
+	amdgpu_mes_lock(&adev->mes);
 
 	process = idr_find(&adev->mes.pasid_idr, pasid);
 	if (!process) {
 		DRM_WARN("pasid %d doesn't exist\n", pasid);
-		mutex_unlock(&adev->mes.mutex);
+		amdgpu_mes_unlock(&adev->mes);
 		return;
 	}
 
-	/* free all gangs in the process */
+	/* Remove all queues from hardware */
 	list_for_each_entry_safe(gang, tmp1, &process->gang_list, list) {
-		/* free all queues in the gang */
 		list_for_each_entry_safe(queue, tmp2, &gang->queue_list, list) {
 			spin_lock_irqsave(&adev->mes.queue_id_lock, flags);
 			idr_remove(&adev->mes.queue_id_idr, queue->queue_id);
@@ -332,29 +348,35 @@ void amdgpu_mes_destroy_process(struct amdgpu_device *adev, int pasid)
 							     &queue_input);
 			if (r)
 				DRM_WARN("failed to remove hardware queue\n");
+		}
+
+		idr_remove(&adev->mes.gang_id_idr, gang->gang_id);
+	}
 
+	amdgpu_mes_free_process_doorbells(adev, process->doorbell_index);
+	idr_remove(&adev->mes.pasid_idr, pasid);
+	amdgpu_mes_unlock(&adev->mes);
+
+	/* free all memory allocated by the process */
+	list_for_each_entry_safe(gang, tmp1, &process->gang_list, list) {
+		/* free all queues in the gang */
+		list_for_each_entry_safe(queue, tmp2, &gang->queue_list, list) {
+			amdgpu_mes_queue_free_mqd(queue);
 			list_del(&queue->list);
 			kfree(queue);
 		}
-
-		idr_remove(&adev->mes.gang_id_idr, gang->gang_id);
 		amdgpu_bo_free_kernel(&gang->gang_ctx_bo,
 				      &gang->gang_ctx_gpu_addr,
 				      &gang->gang_ctx_cpu_ptr);
 		list_del(&gang->list);
 		kfree(gang);
-	}
 
-	amdgpu_mes_free_process_doorbells(adev, process->doorbell_index);
-
-	idr_remove(&adev->mes.pasid_idr, pasid);
+	}
 	amdgpu_bo_free_kernel(&process->proc_ctx_bo,
 			      &process->proc_ctx_gpu_addr,
 			      &process->proc_ctx_cpu_ptr);
 	kfree(process->doorbell_bitmap);
 	kfree(process);
-
-	mutex_unlock(&adev->mes.mutex);
 }
 
 int amdgpu_mes_add_gang(struct amdgpu_device *adev, int pasid,
@@ -365,34 +387,12 @@ int amdgpu_mes_add_gang(struct amdgpu_device *adev, int pasid,
 	struct amdgpu_mes_gang *gang;
 	int r;
 
-	mutex_lock(&adev->mes.mutex);
-
-	process = idr_find(&adev->mes.pasid_idr, pasid);
-	if (!process) {
-		DRM_ERROR("pasid %d doesn't exist\n", pasid);
-		mutex_unlock(&adev->mes.mutex);
-		return -EINVAL;
-	}
-
 	/* allocate the mes gang buffer */
 	gang = kzalloc(sizeof(struct amdgpu_mes_gang), GFP_KERNEL);
 	if (!gang) {
-		mutex_unlock(&adev->mes.mutex);
 		return -ENOMEM;
 	}
 
-	/* add the mes gang to idr list */
-	r = idr_alloc(&adev->mes.gang_id_idr, gang, 1, 0,
-		      GFP_KERNEL);
-	if (r < 0) {
-		kfree(gang);
-		mutex_unlock(&adev->mes.mutex);
-		return r;
-	}
-
-	gang->gang_id = r;
-	*gang_id = r;
-
 	/* allocate the gang context bo and map it to cpu space */
 	r = amdgpu_bo_create_kernel(adev, AMDGPU_MES_GANG_CTX_SIZE, PAGE_SIZE,
 				    AMDGPU_GEM_DOMAIN_GTT,
@@ -401,10 +401,34 @@ int amdgpu_mes_add_gang(struct amdgpu_device *adev, int pasid,
 				    &gang->gang_ctx_cpu_ptr);
 	if (r) {
 		DRM_ERROR("failed to allocate process context bo\n");
-		goto clean_up;
+		goto clean_up_mem;
 	}
 	memset(gang->gang_ctx_cpu_ptr, 0, AMDGPU_MES_GANG_CTX_SIZE);
 
+	/*
+	 * Avoid taking any other locks under MES lock to avoid circular
+	 * lock dependencies.
+	 */
+	amdgpu_mes_lock(&adev->mes);
+
+	process = idr_find(&adev->mes.pasid_idr, pasid);
+	if (!process) {
+		DRM_ERROR("pasid %d doesn't exist\n", pasid);
+		r = -EINVAL;
+		goto clean_up_ctx;
+	}
+
+	/* add the mes gang to idr list */
+	r = idr_alloc(&adev->mes.gang_id_idr, gang, 1, 0,
+		      GFP_KERNEL);
+	if (r < 0) {
+		DRM_ERROR("failed to allocate idr for gang\n");
+		goto clean_up_ctx;
+	}
+
+	gang->gang_id = r;
+	*gang_id = r;
+
 	INIT_LIST_HEAD(&gang->queue_list);
 	gang->process = process;
 	gang->priority = gprops->priority;
@@ -414,13 +438,16 @@ int amdgpu_mes_add_gang(struct amdgpu_device *adev, int pasid,
 	gang->inprocess_gang_priority = gprops->inprocess_gang_priority;
 	list_add_tail(&gang->list, &process->gang_list);
 
-	mutex_unlock(&adev->mes.mutex);
+	amdgpu_mes_unlock(&adev->mes);
 	return 0;
 
-clean_up:
-	idr_remove(&adev->mes.gang_id_idr, gang->gang_id);
+clean_up_ctx:
+	amdgpu_mes_unlock(&adev->mes);
+	amdgpu_bo_free_kernel(&gang->gang_ctx_bo,
+			      &gang->gang_ctx_gpu_addr,
+			      &gang->gang_ctx_cpu_ptr);
+clean_up_mem:
 	kfree(gang);
-	mutex_unlock(&adev->mes.mutex);
 	return r;
 }
 
@@ -428,29 +455,35 @@ int amdgpu_mes_remove_gang(struct amdgpu_device *adev, int gang_id)
 {
 	struct amdgpu_mes_gang *gang;
 
-	mutex_lock(&adev->mes.mutex);
+	/*
+	 * Avoid taking any other locks under MES lock to avoid circular
+	 * lock dependencies.
+	 */
+	amdgpu_mes_lock(&adev->mes);
 
 	gang = idr_find(&adev->mes.gang_id_idr, gang_id);
 	if (!gang) {
 		DRM_ERROR("gang id %d doesn't exist\n", gang_id);
-		mutex_unlock(&adev->mes.mutex);
+		amdgpu_mes_unlock(&adev->mes);
 		return -EINVAL;
 	}
 
 	if (!list_empty(&gang->queue_list)) {
 		DRM_ERROR("queue list is not empty\n");
-		mutex_unlock(&adev->mes.mutex);
+		amdgpu_mes_unlock(&adev->mes);
 		return -EBUSY;
 	}
 
 	idr_remove(&adev->mes.gang_id_idr, gang->gang_id);
+	list_del(&gang->list);
+	amdgpu_mes_unlock(&adev->mes);
+
 	amdgpu_bo_free_kernel(&gang->gang_ctx_bo,
 			      &gang->gang_ctx_gpu_addr,
 			      &gang->gang_ctx_cpu_ptr);
-	list_del(&gang->list);
+
 	kfree(gang);
 
-	mutex_unlock(&adev->mes.mutex);
 	return 0;
 }
 
@@ -462,7 +495,11 @@ int amdgpu_mes_suspend(struct amdgpu_device *adev)
 	struct mes_suspend_gang_input input;
 	int r, pasid;
 
-	mutex_lock(&adev->mes.mutex);
+	/*
+	 * Avoid taking any other locks under MES lock to avoid circular
+	 * lock dependencies.
+	 */
+	amdgpu_mes_lock(&adev->mes);
 
 	idp = &adev->mes.pasid_idr;
 
@@ -475,7 +512,7 @@ int amdgpu_mes_suspend(struct amdgpu_device *adev)
 		}
 	}
 
-	mutex_unlock(&adev->mes.mutex);
+	amdgpu_mes_unlock(&adev->mes);
 	return 0;
 }
 
@@ -487,7 +524,11 @@ int amdgpu_mes_resume(struct amdgpu_device *adev)
 	struct mes_resume_gang_input input;
 	int r, pasid;
 
-	mutex_lock(&adev->mes.mutex);
+	/*
+	 * Avoid taking any other locks under MES lock to avoid circular
+	 * lock dependencies.
+	 */
+	amdgpu_mes_lock(&adev->mes);
 
 	idp = &adev->mes.pasid_idr;
 
@@ -500,17 +541,16 @@ int amdgpu_mes_resume(struct amdgpu_device *adev)
 		}
 	}
 
-	mutex_unlock(&adev->mes.mutex);
+	amdgpu_mes_unlock(&adev->mes);
 	return 0;
 }
 
-static int amdgpu_mes_queue_init_mqd(struct amdgpu_device *adev,
+static int amdgpu_mes_queue_alloc_mqd(struct amdgpu_device *adev,
 				     struct amdgpu_mes_queue *q,
 				     struct amdgpu_mes_queue_properties *p)
 {
 	struct amdgpu_mqd *mqd_mgr = &adev->mqds[p->queue_type];
 	u32 mqd_size = mqd_mgr->mqd_size;
-	struct amdgpu_mqd_prop mqd_prop = {0};
 	int r;
 
 	r = amdgpu_bo_create_kernel(adev, mqd_size, PAGE_SIZE,
@@ -523,6 +563,26 @@ static int amdgpu_mes_queue_init_mqd(struct amdgpu_device *adev,
 	}
 	memset(q->mqd_cpu_ptr, 0, mqd_size);
 
+	r = amdgpu_bo_reserve(q->mqd_obj, false);
+	if (unlikely(r != 0))
+		goto clean_up;
+
+	return 0;
+
+clean_up:
+	amdgpu_bo_free_kernel(&q->mqd_obj,
+			      &q->mqd_gpu_addr,
+			      &q->mqd_cpu_ptr);
+	return r;
+}
+
+static void amdgpu_mes_queue_init_mqd(struct amdgpu_device *adev,
+				     struct amdgpu_mes_queue *q,
+				     struct amdgpu_mes_queue_properties *p)
+{
+	struct amdgpu_mqd *mqd_mgr = &adev->mqds[p->queue_type];
+	struct amdgpu_mqd_prop mqd_prop = {0};
+
 	mqd_prop.mqd_gpu_addr = q->mqd_gpu_addr;
 	mqd_prop.hqd_base_gpu_addr = p->hqd_base_gpu_addr;
 	mqd_prop.rptr_gpu_addr = p->rptr_gpu_addr;
@@ -535,27 +595,9 @@ static int amdgpu_mes_queue_init_mqd(struct amdgpu_device *adev,
 	mqd_prop.hqd_queue_priority = p->hqd_queue_priority;
 	mqd_prop.hqd_active = false;
 
-	r = amdgpu_bo_reserve(q->mqd_obj, false);
-	if (unlikely(r != 0))
-		goto clean_up;
-
 	mqd_mgr->init_mqd(adev, q->mqd_cpu_ptr, &mqd_prop);
 
 	amdgpu_bo_unreserve(q->mqd_obj);
-	return 0;
-
-clean_up:
-	amdgpu_bo_free_kernel(&q->mqd_obj,
-			      &q->mqd_gpu_addr,
-			      &q->mqd_cpu_ptr);
-	return r;
-}
-
-static void amdgpu_mes_queue_free_mqd(struct amdgpu_mes_queue *q)
-{
-	amdgpu_bo_free_kernel(&q->mqd_obj,
-			      &q->mqd_gpu_addr,
-			      &q->mqd_cpu_ptr);
 }
 
 int amdgpu_mes_add_hw_queue(struct amdgpu_device *adev, int gang_id,
@@ -568,29 +610,38 @@ int amdgpu_mes_add_hw_queue(struct amdgpu_device *adev, int gang_id,
 	unsigned long flags;
 	int r;
 
-	mutex_lock(&adev->mes.mutex);
-
-	gang = idr_find(&adev->mes.gang_id_idr, gang_id);
-	if (!gang) {
-		DRM_ERROR("gang id %d doesn't exist\n", gang_id);
-		mutex_unlock(&adev->mes.mutex);
-		return -EINVAL;
-	}
-
 	/* allocate the mes queue buffer */
 	queue = kzalloc(sizeof(struct amdgpu_mes_queue), GFP_KERNEL);
 	if (!queue) {
-		mutex_unlock(&adev->mes.mutex);
+		DRM_ERROR("Failed to allocate memory for queue\n");
 		return -ENOMEM;
 	}
 
+	/* Allocate the queue mqd */
+	r = amdgpu_mes_queue_alloc_mqd(adev, queue, qprops);
+	if (r)
+		goto clean_up_memory;
+
+	/*
+	 * Avoid taking any other locks under MES lock to avoid circular
+	 * lock dependencies.
+	 */
+	amdgpu_mes_lock(&adev->mes);
+
+	gang = idr_find(&adev->mes.gang_id_idr, gang_id);
+	if (!gang) {
+		DRM_ERROR("gang id %d doesn't exist\n", gang_id);
+		r = -EINVAL;
+		goto clean_up_mqd;
+	}
+
 	/* add the mes gang to idr list */
 	spin_lock_irqsave(&adev->mes.queue_id_lock, flags);
 	r = idr_alloc(&adev->mes.queue_id_idr, queue, 1, 0,
 		      GFP_ATOMIC);
 	if (r < 0) {
 		spin_unlock_irqrestore(&adev->mes.queue_id_lock, flags);
-		goto clean_up_memory;
+		goto clean_up_mqd;
 	}
 	spin_unlock_irqrestore(&adev->mes.queue_id_lock, flags);
 	*queue_id = queue->queue_id = r;
@@ -603,13 +654,15 @@ int amdgpu_mes_add_hw_queue(struct amdgpu_device *adev, int gang_id,
 		goto clean_up_queue_id;
 
 	/* initialize the queue mqd */
-	r = amdgpu_mes_queue_init_mqd(adev, queue, qprops);
-	if (r)
-		goto clean_up_doorbell;
+	amdgpu_mes_queue_init_mqd(adev, queue, qprops);
 
 	/* add hw queue to mes */
 	queue_input.process_id = gang->process->pasid;
-	queue_input.page_table_base_addr = gang->process->pd_gpu_addr;
+
+	queue_input.page_table_base_addr =
+		adev->vm_manager.vram_base_offset + gang->process->pd_gpu_addr -
+		adev->gmc.vram_start;
+
 	queue_input.process_va_start = 0;
 	queue_input.process_va_end =
 		(adev->vm_manager.max_pfn - 1) << AMDGPU_GPU_PAGE_SHIFT;
@@ -629,7 +682,7 @@ int amdgpu_mes_add_hw_queue(struct amdgpu_device *adev, int gang_id,
 	if (r) {
 		DRM_ERROR("failed to add hardware queue to MES, doorbell=0x%llx\n",
 			  qprops->doorbell_off);
-		goto clean_up_mqd;
+		goto clean_up_doorbell;
 	}
 
 	DRM_DEBUG("MES hw queue was added, pasid=%d, gang id=%d, "
@@ -645,11 +698,9 @@ int amdgpu_mes_add_hw_queue(struct amdgpu_device *adev, int gang_id,
 	queue->gang = gang;
 	list_add_tail(&queue->list, &gang->queue_list);
 
-	mutex_unlock(&adev->mes.mutex);
+	amdgpu_mes_unlock(&adev->mes);
 	return 0;
 
-clean_up_mqd:
-	amdgpu_mes_queue_free_mqd(queue);
 clean_up_doorbell:
 	amdgpu_mes_queue_doorbell_free(adev, gang->process,
 				       qprops->doorbell_off);
@@ -657,9 +708,11 @@ int amdgpu_mes_add_hw_queue(struct amdgpu_device *adev, int gang_id,
 	spin_lock_irqsave(&adev->mes.queue_id_lock, flags);
 	idr_remove(&adev->mes.queue_id_idr, queue->queue_id);
 	spin_unlock_irqrestore(&adev->mes.queue_id_lock, flags);
+clean_up_mqd:
+	amdgpu_mes_unlock(&adev->mes);
+	amdgpu_mes_queue_free_mqd(queue);
 clean_up_memory:
 	kfree(queue);
-	mutex_unlock(&adev->mes.mutex);
 	return r;
 }
 
@@ -671,7 +724,11 @@ int amdgpu_mes_remove_hw_queue(struct amdgpu_device *adev, int queue_id)
 	struct mes_remove_queue_input queue_input;
 	int r;
 
-	mutex_lock(&adev->mes.mutex);
+	/*
+	 * Avoid taking any other locks under MES lock to avoid circular
+	 * lock dependencies.
+	 */
+	amdgpu_mes_lock(&adev->mes);
 
 	/* remove the mes gang from idr list */
 	spin_lock_irqsave(&adev->mes.queue_id_lock, flags);
@@ -679,7 +736,7 @@ int amdgpu_mes_remove_hw_queue(struct amdgpu_device *adev, int queue_id)
 	queue = idr_find(&adev->mes.queue_id_idr, queue_id);
 	if (!queue) {
 		spin_unlock_irqrestore(&adev->mes.queue_id_lock, flags);
-		mutex_unlock(&adev->mes.mutex);
+		amdgpu_mes_unlock(&adev->mes);
 		DRM_ERROR("queue id %d doesn't exist\n", queue_id);
 		return -EINVAL;
 	}
@@ -699,15 +756,42 @@ int amdgpu_mes_remove_hw_queue(struct amdgpu_device *adev, int queue_id)
 		DRM_ERROR("failed to remove hardware queue, queue id = %d\n",
 			  queue_id);
 
-	amdgpu_mes_queue_free_mqd(queue);
 	list_del(&queue->list);
 	amdgpu_mes_queue_doorbell_free(adev, gang->process,
 				       queue->doorbell_off);
+	amdgpu_mes_unlock(&adev->mes);
+
+	amdgpu_mes_queue_free_mqd(queue);
 	kfree(queue);
-	mutex_unlock(&adev->mes.mutex);
 	return 0;
 }
 
+int amdgpu_mes_unmap_legacy_queue(struct amdgpu_device *adev,
+				  struct amdgpu_ring *ring,
+				  enum amdgpu_unmap_queues_action action,
+				  u64 gpu_addr, u64 seq)
+{
+	struct mes_unmap_legacy_queue_input queue_input;
+	int r;
+
+	amdgpu_mes_lock(&adev->mes);
+
+	queue_input.action = action;
+	queue_input.queue_type = ring->funcs->type;
+	queue_input.doorbell_offset = ring->doorbell_index;
+	queue_input.pipe_id = ring->pipe;
+	queue_input.queue_id = ring->queue;
+	queue_input.trail_fence_addr = gpu_addr;
+	queue_input.trail_fence_data = seq;
+
+	r = adev->mes.funcs->unmap_legacy_queue(&adev->mes, &queue_input);
+	if (r)
+		DRM_ERROR("failed to unmap legacy queue\n");
+
+	amdgpu_mes_unlock(&adev->mes);
+	return r;
+}
+
 static void
 amdgpu_mes_ring_to_queue_props(struct amdgpu_device *adev,
 			       struct amdgpu_ring *ring,
@@ -771,18 +855,22 @@ int amdgpu_mes_add_ring(struct amdgpu_device *adev, int gang_id,
 	struct amdgpu_mes_queue_properties qprops = {0};
 	int r, queue_id, pasid;
 
-	mutex_lock(&adev->mes.mutex);
+	/*
+	 * Avoid taking any other locks under MES lock to avoid circular
+	 * lock dependencies.
+	 */
+	amdgpu_mes_lock(&adev->mes);
 	gang = idr_find(&adev->mes.gang_id_idr, gang_id);
 	if (!gang) {
 		DRM_ERROR("gang id %d doesn't exist\n", gang_id);
-		mutex_unlock(&adev->mes.mutex);
+		amdgpu_mes_unlock(&adev->mes);
 		return -EINVAL;
 	}
 	pasid = gang->process->pasid;
 
 	ring = kzalloc(sizeof(struct amdgpu_ring), GFP_KERNEL);
 	if (!ring) {
-		mutex_unlock(&adev->mes.mutex);
+		amdgpu_mes_unlock(&adev->mes);
 		return -ENOMEM;
 	}
 
@@ -823,7 +911,7 @@ int amdgpu_mes_add_ring(struct amdgpu_device *adev, int gang_id,
 
 	dma_fence_wait(gang->process->vm->last_update, false);
 	dma_fence_wait(ctx_data->meta_data_va->last_pt_update, false);
-	mutex_unlock(&adev->mes.mutex);
+	amdgpu_mes_unlock(&adev->mes);
 
 	r = amdgpu_mes_add_hw_queue(adev, gang_id, &qprops, &queue_id);
 	if (r)
@@ -850,7 +938,7 @@ int amdgpu_mes_add_ring(struct amdgpu_device *adev, int gang_id,
 	amdgpu_ring_fini(ring);
 clean_up_memory:
 	kfree(ring);
-	mutex_unlock(&adev->mes.mutex);
+	amdgpu_mes_unlock(&adev->mes);
 	return r;
 }
 
@@ -1086,9 +1174,10 @@ int amdgpu_mes_self_test(struct amdgpu_device *adev)
 	}
 
 	for (i = 0; i < ARRAY_SIZE(queue_types); i++) {
-		/* On sienna cichlid+, fw hasn't supported to map sdma queue. */
-		if (adev->asic_type >= CHIP_SIENNA_CICHLID &&
-		    i == AMDGPU_RING_TYPE_SDMA)
+		/* On GFX v10.3, fw doesn't support mapping SDMA queues yet. */
+		if (adev->ip_versions[GC_HWIP][0] >= IP_VERSION(10, 3, 0) &&
+		    adev->ip_versions[GC_HWIP][0] < IP_VERSION(11, 0, 0) &&
+		    queue_types[i][0] == AMDGPU_RING_TYPE_SDMA)
 			continue;
 
 		r = amdgpu_mes_test_create_gang_and_queues(adev, pasid,
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.h
index ba039984e431..19963be9e61a 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.h
@@ -55,7 +55,7 @@ enum admgpu_mes_pipe {
 struct amdgpu_mes {
 	struct amdgpu_device            *adev;
 
-	struct mutex                    mutex;
+	struct mutex                    mutex_hidden;
 
 	struct idr                      pasid_idr;
 	struct idr                      gang_id_idr;
@@ -108,9 +108,11 @@ struct amdgpu_mes {
 	uint32_t			query_status_fence_offs;
 	uint64_t			query_status_fence_gpu_addr;
 	uint64_t			*query_status_fence_ptr;
+	uint32_t			saved_flags;
 
 	/* initialize kiq pipe */
 	int                             (*kiq_hw_init)(struct amdgpu_device *adev);
+	int                             (*kiq_hw_fini)(struct amdgpu_device *adev);
 
 	/* ip specific functions */
 	const struct amdgpu_mes_funcs   *funcs;
@@ -197,6 +199,10 @@ struct mes_add_queue_input {
 	uint64_t	wptr_addr;
 	uint32_t	queue_type;
 	uint32_t	paging;
+	uint32_t        gws_base;
+	uint32_t        gws_size;
+	uint64_t	tba_addr;
+	uint64_t	tma_addr;
 };
 
 struct mes_remove_queue_input {
@@ -204,6 +210,16 @@ struct mes_remove_queue_input {
 	uint64_t	gang_context_addr;
 };
 
+struct mes_unmap_legacy_queue_input {
+	enum amdgpu_unmap_queues_action    action;
+	uint32_t                           queue_type;
+	uint32_t                           doorbell_offset;
+	uint32_t                           pipe_id;
+	uint32_t                           queue_id;
+	uint64_t                           trail_fence_addr;
+	uint64_t                           trail_fence_data;
+};
+
 struct mes_suspend_gang_input {
 	bool		suspend_all_gangs;
 	uint64_t	gang_context_addr;
@@ -223,6 +239,9 @@ struct amdgpu_mes_funcs {
 	int (*remove_hw_queue)(struct amdgpu_mes *mes,
 			       struct mes_remove_queue_input *input);
 
+	int (*unmap_legacy_queue)(struct amdgpu_mes *mes,
+				  struct mes_unmap_legacy_queue_input *input);
+
 	int (*suspend_gang)(struct amdgpu_mes *mes,
 			    struct mes_suspend_gang_input *input);
 
@@ -231,6 +250,7 @@ struct amdgpu_mes_funcs {
 };
 
 #define amdgpu_mes_kiq_hw_init(adev) (adev)->mes.kiq_hw_init((adev))
+#define amdgpu_mes_kiq_hw_fini(adev) (adev)->mes.kiq_hw_fini((adev))
 
 int amdgpu_mes_ctx_get_offs(struct amdgpu_ring *ring, unsigned int id_offs);
 
@@ -254,6 +274,11 @@ int amdgpu_mes_add_hw_queue(struct amdgpu_device *adev, int gang_id,
 			    int *queue_id);
 int amdgpu_mes_remove_hw_queue(struct amdgpu_device *adev, int queue_id);
 
+int amdgpu_mes_unmap_legacy_queue(struct amdgpu_device *adev,
+				  struct amdgpu_ring *ring,
+				  enum amdgpu_unmap_queues_action action,
+				  u64 gpu_addr, u64 seq);
+
 int amdgpu_mes_add_ring(struct amdgpu_device *adev, int gang_id,
 			int queue_type, int idx,
 			struct amdgpu_mes_ctx_data *ctx_data,
@@ -279,4 +304,62 @@ unsigned int amdgpu_mes_get_doorbell_dw_offset_in_bar(
 					uint32_t doorbell_index,
 					unsigned int doorbell_id);
 int amdgpu_mes_doorbell_process_slice(struct amdgpu_device *adev);
+
+/*
+ * MES lock can be taken in MMU notifiers.
+ *
+ * A bit more detail about why no-FS reclaim must be set while holding the MES lock:
+ *
+ * The purpose of the MMU notifier is to stop GPU access to memory so
+ * that the Linux VM subsystem can move pages around safely. This is
+ * done by preempting user mode queues for the affected process. When
+ * MES is used, MES lock needs to be taken to preempt the queues.
+ *
+ * The MMU notifier callback entry point in the driver is
+ * amdgpu_mn_invalidate_range_start_hsa. The relevant call chain from
+ * there is:
+ * amdgpu_amdkfd_evict_userptr -> kgd2kfd_quiesce_mm ->
+ * kfd_process_evict_queues -> pdd->dev->dqm->ops.evict_process_queues
+ *
+ * The last part of the chain is a function pointer where we take the
+ * MES lock.
+ *
+ * The problem with taking locks in the MMU notifier is that MMU
+ * notifiers can be called in reclaim-FS context. That's where the
+ * kernel frees up pages to make room for new page allocations under
+ * memory pressure. While we are running in reclaim-FS context, we must
+ * not trigger another memory reclaim operation because that would
+ * recursively reenter the reclaim code and cause a deadlock. The
+ * memalloc_nofs_save/restore calls guarantee that.
+ *
+ * In addition we also need to avoid lock dependencies on other locks taken
+ * under the MES lock, for example reservation locks. Here is a possible
+ * scenario of a deadlock:
+ * Thread A: takes and holds reservation lock | triggers reclaim-FS |
+ * MMU notifier | blocks trying to take MES lock
+ * Thread B: takes and holds MES lock | blocks trying to take reservation lock
+ *
+ * In this scenario Thread B gets involved in a deadlock even without
+ * triggering a reclaim-FS operation itself.
+ * To fix this and break the lock dependency chain you'd need to either:
+ * 1. protect reservation locks with memalloc_nofs_save/restore, or
+ * 2. avoid taking reservation locks under the MES lock.
+ *
+ * Reservation locks are taken all over the kernel in different subsystems;
+ * we have no control over them or their lock dependencies. So the only
+ * workable solution is to avoid taking other locks under the MES lock.
+ * As a result, make sure no reclaim-FS happens while holding this lock anywhere
+ * to prevent deadlocks when an MMU notifier runs in reclaim-FS context.
+ */
+static inline void amdgpu_mes_lock(struct amdgpu_mes *mes)
+{
+	mutex_lock(&mes->mutex_hidden);
+	mes->saved_flags = memalloc_noreclaim_save();
+}
+
+static inline void amdgpu_mes_unlock(struct amdgpu_mes *mes)
+{
+	memalloc_noreclaim_restore(mes->saved_flags);
+	mutex_unlock(&mes->mutex_hidden);
+}
 #endif /* __AMDGPU_MES_H__ */
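
For reference, the locking discipline described in the comment above boils
down to: allocate (or do anything else that may enter reclaim or take
reservation locks) before taking the MES lock, and keep only the idr/list
bookkeeping inside the critical section. A minimal, hypothetical sketch
(illustrative only, not part of this patch):

	/* Hypothetical caller showing the amdgpu_mes_lock() discipline. */
	static int example_track_object(struct amdgpu_mes *mes, struct idr *idr)
	{
		void *obj;
		int id;

		/* May enter reclaim, so do it before taking the MES lock. */
		obj = kzalloc(64, GFP_KERNEL);
		if (!obj)
			return -ENOMEM;

		amdgpu_mes_lock(mes);	/* also enters no-reclaim context */
		/*
		 * GFP_KERNEL is safe here only because amdgpu_mes_lock()
		 * called memalloc_noreclaim_save(), so this allocation
		 * cannot recurse into reclaim.
		 */
		id = idr_alloc(idr, obj, 1, 0, GFP_KERNEL);
		amdgpu_mes_unlock(mes);

		if (id < 0)
			kfree(obj);
		return id;
	}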
diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
index 9042e0b480ce..3c4f2a94ad9f 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
@@ -3551,8 +3551,14 @@ static void gfx10_kiq_unmap_queues(struct amdgpu_ring *kiq_ring,
 				   enum amdgpu_unmap_queues_action action,
 				   u64 gpu_addr, u64 seq)
 {
+	struct amdgpu_device *adev = kiq_ring->adev;
 	uint32_t eng_sel = ring->funcs->type == AMDGPU_RING_TYPE_GFX ? 4 : 0;
 
+	if (!adev->gfx.kiq.ring.sched.ready) {
+		amdgpu_mes_unmap_legacy_queue(adev, ring, action, gpu_addr, seq);
+		return;
+	}
+
 	amdgpu_ring_write(kiq_ring, PACKET3(PACKET3_UNMAP_QUEUES, 4));
 	amdgpu_ring_write(kiq_ring, /* Q_sel: 0, vmid: 0, engine: 0, num_Q: 1 */
 			  PACKET3_UNMAP_QUEUES_ACTION(action) |
diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v11_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v11_0.c
index b80b5f70ecf1..61db2a378008 100644
--- a/drivers/gpu/drm/amd/amdgpu/gmc_v11_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gmc_v11_0.c
@@ -274,7 +274,7 @@ static void gmc_v11_0_flush_gpu_tlb(struct amdgpu_device *adev, uint32_t vmid,
 	/* For SRIOV run time, driver shouldn't access the register through MMIO
 	 * Directly use kiq to do the vm invalidation instead
 	 */
-	if (adev->gfx.kiq.ring.sched.ready &&
+	if (adev->gfx.kiq.ring.sched.ready && !adev->enable_mes &&
 	    (amdgpu_sriov_runtime(adev) || !amdgpu_sriov_vf(adev))) {
 		struct amdgpu_vmhub *hub = &adev->vmhub[vmhub];
 		const unsigned eng = 17;
@@ -411,6 +411,10 @@ static void gmc_v11_0_emit_pasid_mapping(struct amdgpu_ring *ring, unsigned vmid
 	struct amdgpu_device *adev = ring->adev;
 	uint32_t reg;
 
+	/* MES fw manages IH_VMID_x_LUT updating */
+	if (ring->is_mes_queue)
+		return;
+
 	if (ring->funcs->vmhub == AMDGPU_GFXHUB_0)
 		reg = SOC15_REG_OFFSET(OSSSYS, 0, regIH_VMID_0_LUT) + vmid;
 	else
@@ -803,6 +807,7 @@ static int gmc_v11_0_gart_enable(struct amdgpu_device *adev)
 	}
 
 	amdgpu_gtt_mgr_recover(&adev->mman.gtt_mgr);
+
 	r = adev->mmhub.funcs->gart_enable(adev);
 	if (r)
 		return r;
diff --git a/drivers/gpu/drm/amd/amdgpu/mes_v10_1.c b/drivers/gpu/drm/amd/amdgpu/mes_v10_1.c
index 622aa17b18e7..030a92b3a0da 100644
--- a/drivers/gpu/drm/amd/amdgpu/mes_v10_1.c
+++ b/drivers/gpu/drm/amd/amdgpu/mes_v10_1.c
@@ -133,6 +133,8 @@ static int mes_v10_1_add_hw_queue(struct amdgpu_mes *mes,
 {
 	struct amdgpu_device *adev = mes->adev;
 	union MESAPI__ADD_QUEUE mes_add_queue_pkt;
+	struct amdgpu_vmhub *hub = &adev->vmhub[AMDGPU_GFXHUB_0];
+	uint32_t vm_cntx_cntl = hub->vm_cntx_cntl;
 
 	memset(&mes_add_queue_pkt, 0, sizeof(mes_add_queue_pkt));
 
@@ -141,8 +143,7 @@ static int mes_v10_1_add_hw_queue(struct amdgpu_mes *mes,
 	mes_add_queue_pkt.header.dwsize = API_FRAME_SIZE_IN_DWORDS;
 
 	mes_add_queue_pkt.process_id = input->process_id;
-	mes_add_queue_pkt.page_table_base_addr =
-		input->page_table_base_addr - adev->gmc.vram_start;
+	mes_add_queue_pkt.page_table_base_addr = input->page_table_base_addr;
 	mes_add_queue_pkt.process_va_start = input->process_va_start;
 	mes_add_queue_pkt.process_va_end = input->process_va_end;
 	mes_add_queue_pkt.process_quantum = input->process_quantum;
@@ -159,6 +160,10 @@ static int mes_v10_1_add_hw_queue(struct amdgpu_mes *mes,
 	mes_add_queue_pkt.queue_type =
 		convert_to_mes_queue_type(input->queue_type);
 	mes_add_queue_pkt.paging = input->paging;
+	mes_add_queue_pkt.vm_context_cntl = vm_cntx_cntl;
+	mes_add_queue_pkt.gws_base = input->gws_base;
+	mes_add_queue_pkt.gws_size = input->gws_size;
+	mes_add_queue_pkt.trap_handler_addr = input->tba_addr;
 
 	mes_add_queue_pkt.api_status.api_completion_fence_addr =
 		mes->ring.fence_drv.gpu_addr;
@@ -192,6 +197,44 @@ static int mes_v10_1_remove_hw_queue(struct amdgpu_mes *mes,
 			&mes_remove_queue_pkt, sizeof(mes_remove_queue_pkt));
 }
 
+static int mes_v10_1_unmap_legacy_queue(struct amdgpu_mes *mes,
+				 struct mes_unmap_legacy_queue_input *input)
+{
+	union MESAPI__REMOVE_QUEUE mes_remove_queue_pkt;
+
+	memset(&mes_remove_queue_pkt, 0, sizeof(mes_remove_queue_pkt));
+
+	mes_remove_queue_pkt.header.type = MES_API_TYPE_SCHEDULER;
+	mes_remove_queue_pkt.header.opcode = MES_SCH_API_REMOVE_QUEUE;
+	mes_remove_queue_pkt.header.dwsize = API_FRAME_SIZE_IN_DWORDS;
+
+	mes_remove_queue_pkt.doorbell_offset = input->doorbell_offset;
+	mes_remove_queue_pkt.gang_context_addr = 0;
+
+	mes_remove_queue_pkt.pipe_id = input->pipe_id;
+	mes_remove_queue_pkt.queue_id = input->queue_id;
+
+	if (input->action == PREEMPT_QUEUES_NO_UNMAP) {
+		mes_remove_queue_pkt.preempt_legacy_gfx_queue = 1;
+		mes_remove_queue_pkt.tf_addr = input->trail_fence_addr;
+		mes_remove_queue_pkt.tf_data =
+			lower_32_bits(input->trail_fence_data);
+	} else {
+		if (input->queue_type == AMDGPU_RING_TYPE_GFX)
+			mes_remove_queue_pkt.unmap_legacy_gfx_queue = 1;
+		else
+			mes_remove_queue_pkt.unmap_kiq_utility_queue = 1;
+	}
+
+	mes_remove_queue_pkt.api_status.api_completion_fence_addr =
+		mes->ring.fence_drv.gpu_addr;
+	mes_remove_queue_pkt.api_status.api_completion_fence_value =
+		++mes->ring.fence_drv.sync_seq;
+
+	return mes_v10_1_submit_pkt_and_poll_completion(mes,
+			&mes_remove_queue_pkt, sizeof(mes_remove_queue_pkt));
+}
+
 static int mes_v10_1_suspend_gang(struct amdgpu_mes *mes,
 				  struct mes_suspend_gang_input *input)
 {
@@ -254,9 +297,21 @@ static int mes_v10_1_set_hw_resources(struct amdgpu_mes *mes)
 		mes_set_hw_res_pkt.sdma_hqd_mask[i] = mes->sdma_hqd_mask[i];
 
 	for (i = 0; i < AMD_PRIORITY_NUM_LEVELS; i++)
-		mes_set_hw_res_pkt.agreegated_doorbells[i] =
+		mes_set_hw_res_pkt.aggregated_doorbells[i] =
 			mes->agreegated_doorbells[i];
 
+	for (i = 0; i < 5; i++) {
+		mes_set_hw_res_pkt.gc_base[i] = adev->reg_offset[GC_HWIP][0][i];
+		mes_set_hw_res_pkt.mmhub_base[i] =
+			adev->reg_offset[MMHUB_HWIP][0][i];
+		mes_set_hw_res_pkt.osssys_base[i] =
+			adev->reg_offset[OSSSYS_HWIP][0][i];
+	}
+
+	mes_set_hw_res_pkt.disable_reset = 1;
+	mes_set_hw_res_pkt.disable_mes_log = 1;
+	mes_set_hw_res_pkt.use_different_vmid_compute = 1;
+
 	mes_set_hw_res_pkt.api_status.api_completion_fence_addr =
 		mes->ring.fence_drv.gpu_addr;
 	mes_set_hw_res_pkt.api_status.api_completion_fence_value =
@@ -269,6 +324,7 @@ static int mes_v10_1_set_hw_resources(struct amdgpu_mes *mes)
 static const struct amdgpu_mes_funcs mes_v10_1_funcs = {
 	.add_hw_queue = mes_v10_1_add_hw_queue,
 	.remove_hw_queue = mes_v10_1_remove_hw_queue,
+	.unmap_legacy_queue = mes_v10_1_unmap_legacy_queue,
 	.suspend_gang = mes_v10_1_suspend_gang,
 	.resume_gang = mes_v10_1_resume_gang,
 };
@@ -1097,6 +1153,13 @@ static int mes_v10_1_hw_init(void *handle)
 		goto failure;
 	}
 
+	/*
+	 * Disable KIQ ring usage from the driver once MES is enabled.
+	 * MES owns the KIQ ring exclusively, so the driver must not
+	 * submit to it while MES is running.
+	 */
+	adev->gfx.kiq.ring.sched.ready = false;
+
 	return 0;
 
 failure:
diff --git a/drivers/gpu/drm/amd/amdgpu/mes_api_def.h b/drivers/gpu/drm/amd/include/mes_api_def.h
similarity index 68%
rename from drivers/gpu/drm/amd/amdgpu/mes_api_def.h
rename to drivers/gpu/drm/amd/include/mes_api_def.h
index 3f4fca5fd1da..b2a8503feec0 100644
--- a/drivers/gpu/drm/amd/amdgpu/mes_api_def.h
+++ b/drivers/gpu/drm/amd/include/mes_api_def.h
@@ -59,6 +59,8 @@ enum MES_SCH_API_OPCODE {
 	MES_SCH_API_PROGRAM_GDS			= 12,
 	MES_SCH_API_SET_DEBUG_VMID		= 13,
 	MES_SCH_API_MISC			= 14,
+	MES_SCH_API_UPDATE_ROOT_PAGE_TABLE      = 15,
+	MES_SCH_API_AMD_LOG                     = 16,
 	MES_SCH_API_MAX				= 0xFF
 };
 
@@ -116,7 +118,12 @@ enum { MAX_VMID_GCHUB = 16 };
 enum { MAX_VMID_MMHUB = 16 };
 
 enum MES_LOG_OPERATION {
-	MES_LOG_OPERATION_CONTEXT_STATE_CHANGE = 0
+	MES_LOG_OPERATION_CONTEXT_STATE_CHANGE = 0,
+	MES_LOG_OPERATION_QUEUE_NEW_WORK = 1,
+	MES_LOG_OPERATION_QUEUE_UNWAIT_SYNC_OBJECT = 2,
+	MES_LOG_OPERATION_QUEUE_NO_MORE_WORK = 3,
+	MES_LOG_OPERATION_QUEUE_WAIT_SYNC_OBJECT = 4,
+	MES_LOG_OPERATION_QUEUE_INVALID = 0xF,
 };
 
 enum MES_LOG_CONTEXT_STATE {
@@ -124,6 +131,7 @@ enum MES_LOG_CONTEXT_STATE {
 	MES_LOG_CONTEXT_STATE_RUNNING		= 1,
 	MES_LOG_CONTEXT_STATE_READY		= 2,
 	MES_LOG_CONTEXT_STATE_READY_STANDBY	= 3,
+	MES_LOG_CONTEXT_STATE_INVALID           = 0xF,
 };
 
 struct MES_LOG_CONTEXT_STATE_CHANGE {
@@ -131,6 +139,26 @@ struct MES_LOG_CONTEXT_STATE_CHANGE {
 	enum MES_LOG_CONTEXT_STATE	new_context_state;
 };
 
+struct MES_LOG_QUEUE_NEW_WORK {
+	uint64_t                   h_queue;
+	uint64_t                   reserved;
+};
+
+struct MES_LOG_QUEUE_UNWAIT_SYNC_OBJECT {
+	uint64_t                   h_queue;
+	uint64_t                   h_sync_object;
+};
+
+struct MES_LOG_QUEUE_NO_MORE_WORK {
+	uint64_t                   h_queue;
+	uint64_t                   reserved;
+};
+
+struct MES_LOG_QUEUE_WAIT_SYNC_OBJECT {
+	uint64_t                   h_queue;
+	uint64_t                   h_sync_object;
+};
+
 struct MES_LOG_ENTRY_HEADER {
 	uint32_t	first_free_entry_index;
 	uint32_t	wraparound_count;
@@ -143,8 +171,12 @@ struct MES_LOG_ENTRY_DATA {
 	uint32_t	operation_type; /* operation_type is of MES_LOG_OPERATION type */
 	uint32_t	reserved_operation_type_bits;
 	union {
-		struct MES_LOG_CONTEXT_STATE_CHANGE	context_state_change;
-		uint64_t				reserved_operation_data[2];
+		struct MES_LOG_CONTEXT_STATE_CHANGE     context_state_change;
+		struct MES_LOG_QUEUE_NEW_WORK           queue_new_work;
+		struct MES_LOG_QUEUE_UNWAIT_SYNC_OBJECT queue_unwait_sync_object;
+		struct MES_LOG_QUEUE_NO_MORE_WORK       queue_no_more_work;
+		struct MES_LOG_QUEUE_WAIT_SYNC_OBJECT   queue_wait_sync_object;
+		uint64_t                                all[2];
 	};
 };
 
@@ -153,6 +185,10 @@ struct MES_LOG_BUFFER {
 	struct MES_LOG_ENTRY_DATA	entries[1];
 };
 
+enum MES_SWIP_TO_HWIP_DEF {
+	MES_MAX_HWIP_SEGMENT = 6,
+};
+
 union MESAPI_SET_HW_RESOURCES {
 	struct {
 		union MES_API_HEADER	header;
@@ -163,14 +199,26 @@ union MESAPI_SET_HW_RESOURCES {
 		uint32_t		compute_hqd_mask[MAX_COMPUTE_PIPES];
 		uint32_t		gfx_hqd_mask[MAX_GFX_PIPES];
 		uint32_t		sdma_hqd_mask[MAX_SDMA_PIPES];
-		uint32_t		agreegated_doorbells[AMD_PRIORITY_NUM_LEVELS];
+		uint32_t		aggregated_doorbells[AMD_PRIORITY_NUM_LEVELS];
 		uint64_t		g_sch_ctx_gpu_mc_ptr;
 		uint64_t		query_status_fence_gpu_mc_ptr;
+		uint32_t		gc_base[MES_MAX_HWIP_SEGMENT];
+		uint32_t		mmhub_base[MES_MAX_HWIP_SEGMENT];
+		uint32_t		osssys_base[MES_MAX_HWIP_SEGMENT];
 		struct MES_API_STATUS	api_status;
 		union {
 			struct {
 				uint32_t disable_reset	: 1;
-				uint32_t reserved	: 31;
+				uint32_t use_different_vmid_compute : 1;
+				uint32_t disable_mes_log   : 1;
+				uint32_t apply_mmhub_pgvm_invalidate_ack_loss_wa : 1;
+				uint32_t apply_grbm_remote_register_dummy_read_wa : 1;
+				uint32_t second_gfx_pipe_enabled : 1;
+				uint32_t enable_level_process_quantum_check : 1;
+				uint32_t apply_cwsr_program_all_vmid_sq_shader_tba_registers_wa : 1;
+				uint32_t enable_mqd_active_poll : 1;
+				uint32_t disable_timer_int : 1;
+				uint32_t reserved	: 22;
 			};
 			uint32_t	uint32_t_all;
 		};
@@ -195,12 +243,16 @@ union MESAPI__ADD_QUEUE {
 		uint32_t			doorbell_offset;
 		uint64_t			mqd_addr;
 		uint64_t			wptr_addr;
+		uint64_t                        h_context;
+		uint64_t                        h_queue;
 		enum MES_QUEUE_TYPE		queue_type;
 		uint32_t			gds_base;
 		uint32_t			gds_size;
 		uint32_t			gws_base;
 		uint32_t			gws_size;
 		uint32_t			oa_mask;
+		uint64_t                        trap_handler_addr;
+		uint32_t                        vm_context_cntl;
 
 		struct {
 			uint32_t paging			: 1;
@@ -208,7 +260,8 @@ union MESAPI__ADD_QUEUE {
 			uint32_t program_gds		: 1;
 			uint32_t is_gang_suspended	: 1;
 			uint32_t is_tmz_queue		: 1;
-			uint32_t reserved		: 24;
+			uint32_t map_kiq_utility_queue  : 1;
+			uint32_t reserved		: 23;
 		};
 		struct MES_API_STATUS		api_status;
 	};
@@ -223,10 +276,18 @@ union MESAPI__REMOVE_QUEUE {
 		uint64_t		gang_context_addr;
 
 		struct {
-			uint32_t unmap_legacy_gfx_queue	: 1;
-			uint32_t reserved		: 31;
+			uint32_t unmap_legacy_gfx_queue   : 1;
+			uint32_t unmap_kiq_utility_queue  : 1;
+			uint32_t preempt_legacy_gfx_queue : 1;
+			uint32_t reserved                 : 29;
 		};
-		struct MES_API_STATUS	api_status;
+		struct MES_API_STATUS	    api_status;
+
+		uint32_t                    pipe_id;
+		uint32_t                    queue_id;
+
+		uint64_t                    tf_addr;
+		uint32_t                    tf_data;
 	};
 
 	uint32_t	max_dwords_in_api[API_FRAME_SIZE_IN_DWORDS];
@@ -321,16 +382,45 @@ union MESAPI__RESUME {
 
 union MESAPI__RESET {
 	struct {
-		union MES_API_HEADER	header;
+		union MES_API_HEADER		header;
 
 		struct {
-			uint32_t reset_queue	: 1;
-			uint32_t reserved	: 31;
+			/* Only reset the queue given by doorbell_offset (not entire gang) */
+			uint32_t                reset_queue_only : 1;
+			/* Hang detection first, then reset any queues that are hung */
+			uint32_t                hang_detect_then_reset : 1;
+			/* Only do hang detection (no reset) */
+			uint32_t                hang_detect_only : 1;
+			/* Reset HP and LP kernel queues not managed by MES */
+			uint32_t                reset_legacy_gfx : 1;
+			uint32_t                reserved : 28;
 		};
 
-		uint64_t		gang_context_addr;
-		uint32_t		doorbell_offset; /* valid only if reset_queue = true */
-		struct MES_API_STATUS	api_status;
+		uint64_t			gang_context_addr;
+
+		/* valid only if reset_queue_only = true */
+		uint32_t			doorbell_offset;
+
+		/* valid only if hang_detect_then_reset = true */
+		uint64_t			doorbell_offset_addr;
+		enum MES_QUEUE_TYPE		queue_type;
+
+		/* valid only if reset_legacy_gfx = true */
+		uint32_t			pipe_id_lp;
+		uint32_t			queue_id_lp;
+		uint32_t			vmid_id_lp;
+		uint64_t			mqd_mc_addr_lp;
+		uint32_t			doorbell_offset_lp;
+		uint64_t			wptr_addr_lp;
+
+		uint32_t			pipe_id_hp;
+		uint32_t			queue_id_hp;
+		uint32_t			vmid_id_hp;
+		uint64_t			mqd_mc_addr_hp;
+		uint32_t			doorbell_offset_hp;
+		uint64_t			wptr_addr_hp;
+
+		struct MES_API_STATUS		api_status;
 	};
 
 	uint32_t	max_dwords_in_api[API_FRAME_SIZE_IN_DWORDS];
@@ -408,6 +498,8 @@ union MESAPI__SET_DEBUG_VMID {
 
 enum MESAPI_MISC_OPCODE {
 	MESAPI_MISC__MODIFY_REG,
+	MESAPI_MISC__INV_GART,
+	MESAPI_MISC__QUERY_STATUS,
 	MESAPI_MISC__MAX,
 };
 
@@ -420,6 +512,21 @@ enum MODIFY_REG_SUBCODE {
 
 enum { MISC_DATA_MAX_SIZE_IN_DWORDS = 20 };
 
+struct MODIFY_REG {
+	enum MODIFY_REG_SUBCODE   subcode;
+	uint32_t                  reg_offset;
+	uint32_t                  reg_value;
+};
+
+struct INV_GART {
+	uint64_t                  inv_range_va_start;
+	uint64_t                  inv_range_size;
+};
+
+struct QUERY_STATUS {
+	uint32_t context_id;
+};
+
 union MESAPI__MISC {
 	struct {
 		union MES_API_HEADER	header;
@@ -427,11 +534,9 @@ union MESAPI__MISC {
 		struct MES_API_STATUS	api_status;
 
 		union {
-			struct {
-				enum MODIFY_REG_SUBCODE	subcode;
-				uint32_t		reg_offset;
-				uint32_t		reg_value;
-			} modify_reg;
+			struct		MODIFY_REG modify_reg;
+			struct		INV_GART inv_gart;
+			struct		QUERY_STATUS query_status;
 			uint32_t	data[MISC_DATA_MAX_SIZE_IN_DWORDS];
 		};
 	};
@@ -439,5 +544,27 @@ union MESAPI__MISC {
 	uint32_t	max_dwords_in_api[API_FRAME_SIZE_IN_DWORDS];
 };
 
+union MESAPI__UPDATE_ROOT_PAGE_TABLE {
+	struct {
+		union MES_API_HEADER        header;
+		uint64_t                    page_table_base_addr;
+		uint64_t                    process_context_addr;
+		struct MES_API_STATUS       api_status;
+	};
+
+	uint32_t max_dwords_in_api[API_FRAME_SIZE_IN_DWORDS];
+};
+
+union MESAPI_AMD_LOG {
+	struct {
+		union MES_API_HEADER        header;
+		uint64_t                    p_buffer_memory;
+		uint64_t                    p_buffer_size_used;
+		struct MES_API_STATUS       api_status;
+	};
+
+	uint32_t max_dwords_in_api[API_FRAME_SIZE_IN_DWORDS];
+};
+
 #pragma pack(pop)
 #endif
-- 
2.35.1


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH 19/29] drm/amdgpu: support imu for gfx11
  2022-04-29 18:01 [PATCH 00/29] Add support for GC 11.0 Alex Deucher
                   ` (17 preceding siblings ...)
  2022-04-29 18:02 ` [PATCH 18/29] drm/amdgpu: add mes unmap legacy queue routine Alex Deucher
@ 2022-04-29 18:02 ` Alex Deucher
  2022-04-29 18:02 ` [PATCH 20/29] drm/amdgpu/mes11: initiate mes v11 support Alex Deucher
                   ` (9 subsequent siblings)
  28 siblings, 0 replies; 30+ messages in thread
From: Alex Deucher @ 2022-04-29 18:02 UTC (permalink / raw)
  To: amd-gfx; +Cc: Alex Deucher, Likun Gao, Hawking Zhang

From: Likun Gao <Likun.Gao@amd.com>

Add support to initialize the IMU for GFX v11.
The IMU is a new block that handles power
management for GFX.
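
A rough sketch of how a GFX v11 init path is expected to drive these
callbacks (the caller name and exact ordering below are illustrative,
not lifted from gfx_v11_0.c):

	/* Illustrative only: bringing up the IMU during gfx hw init. */
	static int gfx_v11_imu_bringup_example(struct amdgpu_device *adev)
	{
		const struct amdgpu_imu_funcs *imu = adev->gfx.imu.funcs;
		int r;

		if (adev->firmware.load_type != AMDGPU_FW_LOAD_PSP) {
			/* front door load: write IRAM/DRAM via the RAM_ADDR/DATA regs */
			r = imu->load_microcode(adev);
			if (r)
				return r;
		}

		imu->setup_imu(adev);		/* debug mode + scratch setup */
		imu->program_rlc_ram(adev);	/* golden register list */

		return imu->start_imu(adev);	/* clear CRESET and poll for ready */
	}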

Signed-off-by: Likun Gao <Likun.Gao@amd.com>
Reviewed-by: Hawking Zhang <Hawking.Zhang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/Makefile       |   3 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.h   |   4 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_imu.h   |  51 ++++
 drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c |  13 +
 drivers/gpu/drm/amd/amdgpu/imu_v11_0.c    | 286 ++++++++++++++++++++++
 drivers/gpu/drm/amd/amdgpu/imu_v11_0.h    |  30 +++
 6 files changed, 386 insertions(+), 1 deletion(-)
 create mode 100644 drivers/gpu/drm/amd/amdgpu/amdgpu_imu.h
 create mode 100644 drivers/gpu/drm/amd/amdgpu/imu_v11_0.c
 create mode 100644 drivers/gpu/drm/amd/amdgpu/imu_v11_0.h

diff --git a/drivers/gpu/drm/amd/amdgpu/Makefile b/drivers/gpu/drm/amd/amdgpu/Makefile
index 803e7f5dc458..e74bf1bde8b9 100644
--- a/drivers/gpu/drm/amd/amdgpu/Makefile
+++ b/drivers/gpu/drm/amd/amdgpu/Makefile
@@ -130,7 +130,8 @@ amdgpu-y += \
 	gfx_v9_0.o \
 	gfx_v9_4.o \
 	gfx_v9_4_2.o \
-	gfx_v10_0.o
+	gfx_v10_0.o \
+	imu_v11_0.o
 
 # add async DMA block
 amdgpu-y += \
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.h
index 8e18d3fc4fab..15749016d8cf 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.h
@@ -30,6 +30,7 @@
 #include "clearstate_defs.h"
 #include "amdgpu_ring.h"
 #include "amdgpu_rlc.h"
+#include "amdgpu_imu.h"
 #include "soc15.h"
 #include "amdgpu_ras.h"
 
@@ -274,6 +275,7 @@ struct amdgpu_gfx {
 	struct amdgpu_me		me;
 	struct amdgpu_mec		mec;
 	struct amdgpu_kiq		kiq;
+	struct amdgpu_imu		imu;
 	struct amdgpu_scratch		scratch;
 	const struct firmware		*me_fw;	/* ME firmware */
 	uint32_t			me_fw_version;
@@ -287,6 +289,8 @@ struct amdgpu_gfx {
 	uint32_t			mec_fw_version;
 	const struct firmware		*mec2_fw; /* MEC2 firmware */
 	uint32_t			mec2_fw_version;
+	const struct firmware		*imu_fw; /* IMU firmware */
+	uint32_t			imu_fw_version;
 	uint32_t			me_feature_version;
 	uint32_t			ce_feature_version;
 	uint32_t			pfp_feature_version;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_imu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_imu.h
new file mode 100644
index 000000000000..56cf127cdf93
--- /dev/null
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_imu.h
@@ -0,0 +1,51 @@
+/*
+ * Copyright 2021 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __AMDGPU_IMU_H__
+#define __AMDGPU_IMU_H__
+
+struct amdgpu_imu_funcs {
+    int (*init_microcode)(struct amdgpu_device *adev);
+    int (*load_microcode)(struct amdgpu_device *adev);
+    void (*setup_imu)(struct amdgpu_device *adev);
+    int (*start_imu)(struct amdgpu_device *adev);
+    void (*program_rlc_ram)(struct amdgpu_device *adev);
+};
+
+struct imu_rlc_ram_golden {
+    u32 hwip;
+    u32 instance;
+    u32 segment;
+    u32 reg;
+    u32 data;
+    u32 addr_mask;
+};
+
+#define IMU_RLC_RAM_GOLDEN_VALUE(ip, inst, reg, data, addr_mask) \
+    { ip##_HWIP, inst, reg##_BASE_IDX, reg, data, addr_mask }
+
+struct amdgpu_imu {
+    const struct amdgpu_imu_funcs *funcs;
+};
+
+#endif
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
index 00fa7822458b..d5b52bf9e5c3 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ucode.c
@@ -126,6 +126,19 @@ void amdgpu_ucode_print_gfx_hdr(const struct common_firmware_header *hdr)
 	}
 }
 
+void amdgpu_ucode_print_imu_hdr(const struct common_firmware_header *hdr)
+{
+	uint16_t version_major = le16_to_cpu(hdr->header_version_major);
+	uint16_t version_minor = le16_to_cpu(hdr->header_version_minor);
+
+	DRM_DEBUG("IMU\n");
+	amdgpu_ucode_print_common_hdr(hdr);
+
+	if (version_major != 1) {
+		DRM_ERROR("Unknown GFX ucode version: %u.%u\n", version_major, version_minor);
+	}
+}
+
 void amdgpu_ucode_print_rlc_hdr(const struct common_firmware_header *hdr)
 {
 	uint16_t version_major = le16_to_cpu(hdr->header_version_major);
diff --git a/drivers/gpu/drm/amd/amdgpu/imu_v11_0.c b/drivers/gpu/drm/amd/amdgpu/imu_v11_0.c
new file mode 100644
index 000000000000..81952a6767d0
--- /dev/null
+++ b/drivers/gpu/drm/amd/amdgpu/imu_v11_0.c
@@ -0,0 +1,286 @@
+/*
+ * Copyright 2021 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+
+#include <linux/firmware.h>
+#include "amdgpu.h"
+#include "amdgpu_imu.h"
+
+#include "gc/gc_11_0_0_offset.h"
+#include "gc/gc_11_0_0_sh_mask.h"
+
+MODULE_FIRMWARE("amdgpu/gc_11_0_0_imu.bin");
+
+static int imu_v11_0_init_microcode(struct amdgpu_device *adev)
+{
+	char fw_name[40];
+	char ucode_prefix[30];
+	int err;
+	const struct imu_firmware_header_v1_0 *imu_hdr;
+	struct amdgpu_firmware_info *info = NULL;
+
+	DRM_DEBUG("\n");
+
+	amdgpu_ucode_ip_version_decode(adev, GC_HWIP, ucode_prefix, sizeof(ucode_prefix));
+
+	snprintf(fw_name, sizeof(fw_name), "amdgpu/%s_imu.bin", ucode_prefix);
+	err = request_firmware(&adev->gfx.imu_fw, fw_name, adev->dev);
+	if (err)
+		goto out;
+	err = amdgpu_ucode_validate(adev->gfx.imu_fw);
+	if (err)
+		goto out;
+	imu_hdr = (const struct imu_firmware_header_v1_0 *)adev->gfx.imu_fw->data;
+	adev->gfx.imu_fw_version = le32_to_cpu(imu_hdr->header.ucode_version);
+	//adev->gfx.imu_feature_version = le32_to_cpu(imu_hdr->ucode_feature_version);
+	
+	if (adev->firmware.load_type == AMDGPU_FW_LOAD_PSP) {
+		info = &adev->firmware.ucode[AMDGPU_UCODE_ID_IMU_I];
+		info->ucode_id = AMDGPU_UCODE_ID_IMU_I;
+		info->fw = adev->gfx.imu_fw;
+		adev->firmware.fw_size +=
+			ALIGN(le32_to_cpu(imu_hdr->imu_iram_ucode_size_bytes), PAGE_SIZE);
+		info = &adev->firmware.ucode[AMDGPU_UCODE_ID_IMU_D];
+		info->ucode_id = AMDGPU_UCODE_ID_IMU_D;
+		info->fw = adev->gfx.imu_fw;
+		adev->firmware.fw_size +=
+			ALIGN(le32_to_cpu(imu_hdr->imu_dram_ucode_size_bytes), PAGE_SIZE);
+	}
+
+out:
+	if (err) {
+		dev_err(adev->dev,
+			"gfx11: Failed to load firmware \"%s\"\n",
+			fw_name);
+		release_firmware(adev->gfx.imu_fw);
+	}
+
+	return err;
+}
+
+static int imu_v11_0_load_microcode(struct amdgpu_device *adev)
+{
+	const struct imu_firmware_header_v1_0 *hdr;
+	const __le32 *fw_data;
+	unsigned i, fw_size;
+
+	if (!adev->gfx.imu_fw)
+		return -EINVAL;
+
+	hdr = (const struct imu_firmware_header_v1_0 *)adev->gfx.imu_fw->data;
+	//amdgpu_ucode_print_rlc_hdr(&hdr->header);
+
+	fw_data = (const __le32 *)(adev->gfx.imu_fw->data +
+			le32_to_cpu(hdr->header.ucode_array_offset_bytes));
+	fw_size = le32_to_cpu(hdr->imu_iram_ucode_size_bytes) / 4;
+
+	WREG32_SOC15(GC, 0, regGFX_IMU_I_RAM_ADDR, 0);
+
+	for (i = 0; i < fw_size; i++)
+		WREG32_SOC15(GC, 0, regGFX_IMU_I_RAM_DATA, le32_to_cpup(fw_data++));
+
+	WREG32_SOC15(GC, 0, regGFX_IMU_I_RAM_ADDR, adev->gfx.imu_fw_version);
+
+	fw_data = (const __le32 *)(adev->gfx.imu_fw->data +
+			le32_to_cpu(hdr->header.ucode_array_offset_bytes) +
+			le32_to_cpu(hdr->imu_iram_ucode_size_bytes));
+	fw_size = le32_to_cpu(hdr->imu_dram_ucode_size_bytes) / 4;
+
+	WREG32_SOC15(GC, 0, regGFX_IMU_D_RAM_ADDR, 0);
+
+	for (i = 0; i < fw_size; i++)
+		WREG32_SOC15(GC, 0, regGFX_IMU_D_RAM_DATA, le32_to_cpup(fw_data++));
+
+	WREG32_SOC15(GC, 0, regGFX_IMU_D_RAM_ADDR, adev->gfx.imu_fw_version);
+
+	return 0;
+}
+
+static void imu_v11_0_setup(struct amdgpu_device *adev)
+{
+	int imu_reg_val;
+
+	//enable IMU debug mode
+	WREG32_SOC15(GC, 0, regGFX_IMU_C2PMSG_ACCESS_CTRL0, 0xffffff);
+	WREG32_SOC15(GC, 0, regGFX_IMU_C2PMSG_ACCESS_CTRL1, 0xffff);
+
+	imu_reg_val = RREG32_SOC15(GC, 0, regGFX_IMU_C2PMSG_16);
+	imu_reg_val |= 0x1;
+	WREG32_SOC15(GC, 0, regGFX_IMU_C2PMSG_16, imu_reg_val);
+
+	//disable IMU Rtavfs, SmsRepair, DfllBTC, and ClkB
+	imu_reg_val = RREG32_SOC15(GC, 0, regGFX_IMU_SCRATCH_10);
+	imu_reg_val |= 0x10007;
+	WREG32_SOC15(GC, 0, regGFX_IMU_SCRATCH_10, imu_reg_val);
+}
+
+static int imu_v11_0_start(struct amdgpu_device *adev)
+{
+	int imu_reg_val, i;
+
+	//Start IMU by setting GFX_IMU_CORE_CTRL.CRESET = 0
+	imu_reg_val = RREG32_SOC15(GC, 0, regGFX_IMU_CORE_CTRL);
+	imu_reg_val &= 0xfffffffe;
+	WREG32_SOC15(GC, 0, regGFX_IMU_CORE_CTRL, imu_reg_val);
+
+	for (i = 0; i < adev->usec_timeout; i++) {
+		imu_reg_val = RREG32_SOC15(GC, 0, regGFX_IMU_GFX_RESET_CTRL);
+		if ((imu_reg_val & 0x1f) == 0x1f)
+			break;
+		udelay(1);
+	}
+
+	if (i >= adev->usec_timeout) {
+		dev_err(adev->dev, "init imu: IMU start timeout\n");
+		return -ETIMEDOUT;
+	}
+
+	return 0;
+}
+
+static const struct imu_rlc_ram_golden imu_rlc_ram_golden_11[] =
+{
+        IMU_RLC_RAM_GOLDEN_VALUE(GC, 0, regGUS_IO_RD_COMBINE_FLUSH, 0x00055555, 0xe0000000),
+        IMU_RLC_RAM_GOLDEN_VALUE(GC, 0, regGUS_IO_WR_COMBINE_FLUSH, 0x00055555, 0xe0000000),
+        IMU_RLC_RAM_GOLDEN_VALUE(GC, 0, regGUS_DRAM_COMBINE_FLUSH, 0x00555555, 0xe0000000),
+        IMU_RLC_RAM_GOLDEN_VALUE(GC, 0, regGUS_MISC2, 0x00001ffe, 0xe0000000),
+        IMU_RLC_RAM_GOLDEN_VALUE(GC, 0, regGUS_SDP_CREDITS , 0x003f3fff, 0xe0000000),
+        IMU_RLC_RAM_GOLDEN_VALUE(GC, 0, regGUS_SDP_TAG_RESERVE1, 0x00000000, 0xe0000000),
+        IMU_RLC_RAM_GOLDEN_VALUE(GC, 0, regGUS_SDP_VCC_RESERVE0, 0x00041000, 0xe0000000),
+        IMU_RLC_RAM_GOLDEN_VALUE(GC, 0, regGUS_SDP_VCC_RESERVE1, 0x00000000, 0xe0000000),
+        IMU_RLC_RAM_GOLDEN_VALUE(GC, 0, regGUS_SDP_VCD_RESERVE0, 0x00040000, 0xe0000000),
+        IMU_RLC_RAM_GOLDEN_VALUE(GC, 0, regGUS_SDP_VCD_RESERVE1, 0x00000000, 0xe0000000),
+        IMU_RLC_RAM_GOLDEN_VALUE(GC, 0, regGUS_MISC, 0x00000017, 0xe0000000),
+        IMU_RLC_RAM_GOLDEN_VALUE(GC, 0, regGUS_SDP_ENABLE, 0x00000001, 0xe0000000),
+        IMU_RLC_RAM_GOLDEN_VALUE(GC, 0, regGCEA_SDP_CREDITS , 0x003f3fbf, 0xe0000000),
+        IMU_RLC_RAM_GOLDEN_VALUE(GC, 0, regGCEA_SDP_TAG_RESERVE0, 0x10201000, 0xe0000000),
+        IMU_RLC_RAM_GOLDEN_VALUE(GC, 0, regGCEA_SDP_TAG_RESERVE1, 0x00000080, 0xe0000000),
+        IMU_RLC_RAM_GOLDEN_VALUE(GC, 0, regGCEA_SDP_VCC_RESERVE0, 0x1d041040, 0xe0000000),
+        IMU_RLC_RAM_GOLDEN_VALUE(GC, 0, regGCEA_SDP_VCC_RESERVE1, 0x80000000, 0xe0000000),
+        IMU_RLC_RAM_GOLDEN_VALUE(GC, 0, regGCEA_SDP_IO_PRIORITY, 0x88888888, 0xe0000000),
+        IMU_RLC_RAM_GOLDEN_VALUE(GC, 0, regGCEA_MAM_CTRL, 0x0000d800, 0xe0000000),
+        IMU_RLC_RAM_GOLDEN_VALUE(GC, 0, regGCEA_SDP_ARB_FINAL, 0x000003f7, 0xe0000000),
+        IMU_RLC_RAM_GOLDEN_VALUE(GC, 0, regGCEA_SDP_ENABLE, 0x00000001, 0xe0000000),
+        IMU_RLC_RAM_GOLDEN_VALUE(GC, 0, regGCVM_L2_PROTECTION_FAULT_CNTL2, 0x00020000, 0xe0000000),
+        IMU_RLC_RAM_GOLDEN_VALUE(GC, 0, regGCMC_VM_APT_CNTL, 0x0000000c, 0xe0000000),
+        IMU_RLC_RAM_GOLDEN_VALUE(GC, 0, regGCMC_VM_CACHEABLE_DRAM_ADDRESS_END, 0x000fffff, 0xe0000000),
+        IMU_RLC_RAM_GOLDEN_VALUE(GC, 0, regGCEA_MISC, 0x0c48bff0, 0xe0000000),
+        IMU_RLC_RAM_GOLDEN_VALUE(GC, 0, regCC_GC_SA_UNIT_DISABLE, 0x00fffc01, 0xe0000000),
+        IMU_RLC_RAM_GOLDEN_VALUE(GC, 0, regCC_GC_PRIM_CONFIG, 0x000fffe1, 0xe0000000),
+        IMU_RLC_RAM_GOLDEN_VALUE(GC, 0, regCC_RB_BACKEND_DISABLE, 0x0fffff01, 0xe0000000),
+        IMU_RLC_RAM_GOLDEN_VALUE(GC, 0, regCC_GC_SHADER_ARRAY_CONFIG, 0xfffe0001, 0xe0000000),
+        IMU_RLC_RAM_GOLDEN_VALUE(GC, 0, regGCMC_VM_MX_L1_TLB_CNTL, 0x00000500, 0xe0000000),
+        IMU_RLC_RAM_GOLDEN_VALUE(GC, 0, regGCMC_VM_SYSTEM_APERTURE_LOW_ADDR, 0x00000001, 0xe0000000),
+        IMU_RLC_RAM_GOLDEN_VALUE(GC, 0, regGCMC_VM_SYSTEM_APERTURE_HIGH_ADDR, 0x00000000, 0xe0000000),
+        IMU_RLC_RAM_GOLDEN_VALUE(GC, 0, regGCMC_VM_LOCAL_FB_ADDRESS_START, 0x00000000, 0xe0000000),
+        IMU_RLC_RAM_GOLDEN_VALUE(GC, 0, regGCMC_VM_LOCAL_FB_ADDRESS_END, 0x000fffff, 0xe0000000),
+        IMU_RLC_RAM_GOLDEN_VALUE(GC, 0, regGCVM_CONTEXT0_CNTL, 0x00000000, 0xe0000000),
+        IMU_RLC_RAM_GOLDEN_VALUE(GC, 0, regGCVM_CONTEXT1_CNTL, 0x00000000, 0xe0000000),
+        IMU_RLC_RAM_GOLDEN_VALUE(GC, 0, regGCMC_VM_NB_TOP_OF_DRAM_SLOT1, 0xff800000, 0xe0000000),
+        IMU_RLC_RAM_GOLDEN_VALUE(GC, 0, regGCMC_VM_NB_LOWER_TOP_OF_DRAM2, 0x00000001, 0xe0000000),
+        IMU_RLC_RAM_GOLDEN_VALUE(GC, 0, regGCMC_VM_NB_UPPER_TOP_OF_DRAM2, 0x00000fff, 0xe0000000),
+        IMU_RLC_RAM_GOLDEN_VALUE(GC, 0, regGCVM_L2_PROTECTION_FAULT_CNTL, 0x00001ffc, 0xe0000000),
+        IMU_RLC_RAM_GOLDEN_VALUE(GC, 0, regGCMC_VM_MX_L1_TLB_CNTL, 0x00000501, 0xe0000000),
+        IMU_RLC_RAM_GOLDEN_VALUE(GC, 0, regGCVM_L2_CNTL, 0x00080603, 0xe0000000),
+        IMU_RLC_RAM_GOLDEN_VALUE(GC, 0, regGCVM_L2_CNTL2, 0x00000003, 0xe0000000),
+        IMU_RLC_RAM_GOLDEN_VALUE(GC, 0, regGCVM_L2_CNTL3, 0x00100003, 0xe0000000),
+        IMU_RLC_RAM_GOLDEN_VALUE(GC, 0, regGCVM_L2_CNTL5, 0x00003fe0, 0xe0000000),
+        IMU_RLC_RAM_GOLDEN_VALUE(GC, 0, regGCVM_CONTEXT0_CNTL, 0x00000001, 0xe0000000),
+        IMU_RLC_RAM_GOLDEN_VALUE(GC, 0, regGCVM_L2_CONTEXT0_PER_PFVF_PTE_CACHE_FRAGMENT_SIZES, 0x00000c00, 0xe0000000),
+        IMU_RLC_RAM_GOLDEN_VALUE(GC, 0, regGCVM_CONTEXT1_CNTL, 0x00000001, 0xe0000000),
+        IMU_RLC_RAM_GOLDEN_VALUE(GC, 0, regGCVM_L2_CONTEXT1_PER_PFVF_PTE_CACHE_FRAGMENT_SIZES, 0x00000c00, 0xe0000000),
+        IMU_RLC_RAM_GOLDEN_VALUE(GC, 0, regGB_ADDR_CONFIG, 0x00000545, 0xe0000000),
+        IMU_RLC_RAM_GOLDEN_VALUE(GC, 0, regGL2_PIPE_STEER_0, 0x13455431, 0xe0000000),
+        IMU_RLC_RAM_GOLDEN_VALUE(GC, 0, regGL2_PIPE_STEER_1, 0x13455431, 0xe0000000),
+        IMU_RLC_RAM_GOLDEN_VALUE(GC, 0, regGL2_PIPE_STEER_2, 0x76027602, 0xe0000000),
+        IMU_RLC_RAM_GOLDEN_VALUE(GC, 0, regGL2_PIPE_STEER_3, 0x76207620, 0xe0000000),
+        IMU_RLC_RAM_GOLDEN_VALUE(GC, 0, regGB_ADDR_CONFIG, 0x00000345, 0xe0000000),
+        IMU_RLC_RAM_GOLDEN_VALUE(GC, 0, regGCUTCL2_HARVEST_BYPASS_GROUPS, 0x0000003e, 0xe0000000),
+        IMU_RLC_RAM_GOLDEN_VALUE(GC, 0, regGCMC_VM_FB_LOCATION_BASE, 0x00006000, 0xe0000000),
+        IMU_RLC_RAM_GOLDEN_VALUE(GC, 0, regGCMC_VM_FB_LOCATION_TOP, 0x000061ff, 0xe0000000),
+        IMU_RLC_RAM_GOLDEN_VALUE(GC, 0, regGCMC_VM_APT_CNTL, 0x0000000c, 0xe0000000),
+        IMU_RLC_RAM_GOLDEN_VALUE(GC, 0, regGCMC_VM_AGP_BASE, 0x00000000, 0xe0000000),
+        IMU_RLC_RAM_GOLDEN_VALUE(GC, 0, regGCMC_VM_AGP_BOT, 0x00000002, 0xe0000000),
+        IMU_RLC_RAM_GOLDEN_VALUE(GC, 0, regGCMC_VM_AGP_TOP, 0x00000000, 0xe0000000),
+        IMU_RLC_RAM_GOLDEN_VALUE(GC, 0, regGCVM_L2_PROTECTION_FAULT_CNTL2, 0x00020000, 0xe0000000),
+        IMU_RLC_RAM_GOLDEN_VALUE(GC, 0, regSDMA0_UCODE_SELFLOAD_CONTROL, 0x00000210, 0),
+        IMU_RLC_RAM_GOLDEN_VALUE(GC, 0, regSDMA1_UCODE_SELFLOAD_CONTROL, 0x00000210, 0),
+        IMU_RLC_RAM_GOLDEN_VALUE(GC, 0, regCPC_PSP_DEBUG, CPC_PSP_DEBUG__GPA_OVERRIDE_MASK, 0),
+        IMU_RLC_RAM_GOLDEN_VALUE(GC, 0, regCPG_PSP_DEBUG, CPG_PSP_DEBUG__GPA_OVERRIDE_MASK, 0)
+};
+
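+/*
+ * Walk the golden register list and mirror each entry into the IMU RLC
+ * RAM through the ADDR_HIGH/ADDR_LOW/DATA register triplet; a final
+ * all-zero write marks the last entry.
+ */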
+void program_imu_rlc_ram(struct amdgpu_device *adev,
+				const struct imu_rlc_ram_golden *regs,
+				const u32 array_size)
+{
+	const struct imu_rlc_ram_golden *entry;
+	u32 reg, data;
+	int i;
+
+	for (i = 0; i < array_size; ++i) {
+		entry = &regs[i];
+		reg =  adev->reg_offset[entry->hwip][entry->instance][entry->segment] + entry->reg;
+		reg |= entry->addr_mask;
+
+		data = entry->data;
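+		/*
+		 * The FB location registers depend on the VRAM range probed
+		 * at init, and the AGP base/top are forced to fixed values,
+		 * so these entries override the golden defaults.
+		 */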
+		if (entry->reg == regGCMC_VM_AGP_BASE)
+			data = 0x00ffffff;
+		else if (entry->reg == regGCMC_VM_AGP_TOP)
+			data = 0x0;
+		else if (entry->reg == regGCMC_VM_FB_LOCATION_BASE)
+			data = adev->gmc.vram_start >> 24;
+		else if (entry->reg == regGCMC_VM_FB_LOCATION_TOP)
+			data = adev->gmc.vram_end >> 24;
+
+		WREG32_SOC15(GC, 0, regGFX_IMU_RLC_RAM_ADDR_HIGH, 0);
+		WREG32_SOC15(GC, 0, regGFX_IMU_RLC_RAM_ADDR_LOW, reg);
+		WREG32_SOC15(GC, 0, regGFX_IMU_RLC_RAM_DATA, data);
+	}
+	/* Indicate the last entry */
+	WREG32_SOC15(GC, 0, regGFX_IMU_RLC_RAM_ADDR_HIGH, 0);
+	WREG32_SOC15(GC, 0, regGFX_IMU_RLC_RAM_ADDR_LOW, 0);
+	WREG32_SOC15(GC, 0, regGFX_IMU_RLC_RAM_DATA, 0);
+}
+
+static void imu_v11_0_program_rlc_ram(struct amdgpu_device *adev)
+{
+	u32 reg_data;
+
+	WREG32_SOC15(GC, 0, regGFX_IMU_RLC_RAM_INDEX, 0x2);
+
+	program_imu_rlc_ram(adev,
+				  imu_rlc_ram_golden_11,
+				  (const u32)ARRAY_SIZE(imu_rlc_ram_golden_11));
+
+	/* Indicate the contents of the RAM are valid */
+	reg_data = RREG32_SOC15(GC, 0, regGFX_IMU_RLC_RAM_INDEX);
+	reg_data |= GFX_IMU_RLC_RAM_INDEX__RAM_VALID_MASK;
+	WREG32_SOC15(GC, 0, regGFX_IMU_RLC_RAM_INDEX, reg_data);
+}
+
+const struct amdgpu_imu_funcs gfx_v11_0_imu_funcs = {
+	.init_microcode = imu_v11_0_init_microcode,
+	.load_microcode = imu_v11_0_load_microcode,
+	.setup_imu = imu_v11_0_setup,
+	.start_imu = imu_v11_0_start,
+	.program_rlc_ram = imu_v11_0_program_rlc_ram,
+};
diff --git a/drivers/gpu/drm/amd/amdgpu/imu_v11_0.h b/drivers/gpu/drm/amd/amdgpu/imu_v11_0.h
new file mode 100644
index 000000000000..e71f96fc2f06
--- /dev/null
+++ b/drivers/gpu/drm/amd/amdgpu/imu_v11_0.h
@@ -0,0 +1,30 @@
+/*
+ * Copyright 2021 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __IMU_V11_0_H__
+#define __IMU_V11_0_H__
+
+extern const struct amdgpu_imu_funcs gfx_v11_0_imu_funcs;
+
+#endif
+
-- 
2.35.1


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH 20/29] drm/amdgpu/mes11: initiate mes v11 support
  2022-04-29 18:01 [PATCH 00/29] Add support for GC 11.0 Alex Deucher
                   ` (18 preceding siblings ...)
  2022-04-29 18:02 ` [PATCH 19/29] drm/amdgpu: support imu for gfx11 Alex Deucher
@ 2022-04-29 18:02 ` Alex Deucher
  2022-04-29 18:02 ` [PATCH 21/29] drm/amdgpu: add init support for GFX11 (v2) Alex Deucher
                   ` (8 subsequent siblings)
  28 siblings, 0 replies; 30+ messages in thread
From: Alex Deucher @ 2022-04-29 18:02 UTC (permalink / raw)
  To: amd-gfx; +Cc: Alex Deucher, Jack Xiao, Hawking Zhang

From: Jack Xiao <Jack.Xiao@amd.com>

Initiate the MES v11 code base from MES v10, renaming functions and
register names for the new IP version.

Signed-off-by: Jack Xiao <Jack.Xiao@amd.com>
Reviewed-by: Hawking Zhang <Hawking.Zhang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/Makefile           |    3 +-
 drivers/gpu/drm/amd/amdgpu/mes_v11_0.c        | 1204 +++++++++++++++++
 drivers/gpu/drm/amd/amdgpu/mes_v11_0.h        |   29 +
 drivers/gpu/drm/amd/include/mes_v11_api_def.h |  579 ++++++++
 4 files changed, 1814 insertions(+), 1 deletion(-)
 create mode 100644 drivers/gpu/drm/amd/amdgpu/mes_v11_0.c
 create mode 100644 drivers/gpu/drm/amd/amdgpu/mes_v11_0.h
 create mode 100644 drivers/gpu/drm/amd/include/mes_v11_api_def.h

diff --git a/drivers/gpu/drm/amd/amdgpu/Makefile b/drivers/gpu/drm/amd/amdgpu/Makefile
index e74bf1bde8b9..2c490a5c2a87 100644
--- a/drivers/gpu/drm/amd/amdgpu/Makefile
+++ b/drivers/gpu/drm/amd/amdgpu/Makefile
@@ -146,7 +146,8 @@ amdgpu-y += \
 # add MES block
 amdgpu-y += \
 	amdgpu_mes.o \
-	mes_v10_1.o
+	mes_v10_1.o \
+	mes_v11_0.o
 
 # add UVD block
 amdgpu-y += \
diff --git a/drivers/gpu/drm/amd/amdgpu/mes_v11_0.c b/drivers/gpu/drm/amd/amdgpu/mes_v11_0.c
new file mode 100644
index 000000000000..af944a43fb03
--- /dev/null
+++ b/drivers/gpu/drm/amd/amdgpu/mes_v11_0.c
@@ -0,0 +1,1204 @@
+/*
+ * Copyright 2019 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+
+#include <linux/firmware.h>
+#include <linux/module.h>
+#include "amdgpu.h"
+#include "soc15_common.h"
+#include "soc21.h"
+#include "gc/gc_11_0_0_offset.h"
+#include "gc/gc_11_0_0_sh_mask.h"
+#include "v10_structs.h"
+#include "mes_v11_api_def.h"
+
+MODULE_FIRMWARE("amdgpu/gc_11_0_0_mes.bin");
+MODULE_FIRMWARE("amdgpu/gc_11_0_0_mes1.bin");
+
+static int mes_v11_0_hw_fini(void *handle);
+static int mes_v11_0_kiq_hw_init(struct amdgpu_device *adev);
+static int mes_v11_0_kiq_hw_fini(struct amdgpu_device *adev);
+
+#define MES_EOP_SIZE   2048
+
+static void mes_v11_0_ring_set_wptr(struct amdgpu_ring *ring)
+{
+	struct amdgpu_device *adev = ring->adev;
+
+	if (ring->use_doorbell) {
+		atomic64_set((atomic64_t *)ring->wptr_cpu_addr,
+			     ring->wptr);
+		WDOORBELL64(ring->doorbell_index, ring->wptr);
+	} else {
+		BUG();
+	}
+}
+
+static u64 mes_v11_0_ring_get_rptr(struct amdgpu_ring *ring)
+{
+	return *ring->rptr_cpu_addr;
+}
+
+static u64 mes_v11_0_ring_get_wptr(struct amdgpu_ring *ring)
+{
+	u64 wptr;
+
+	if (ring->use_doorbell)
+		wptr = atomic64_read((atomic64_t *)ring->wptr_cpu_addr);
+	else
+		BUG();
+	return wptr;
+}
+
+static const struct amdgpu_ring_funcs mes_v11_0_ring_funcs = {
+	.type = AMDGPU_RING_TYPE_MES,
+	.align_mask = 1,
+	.nop = 0,
+	.support_64bit_ptrs = true,
+	.get_rptr = mes_v11_0_ring_get_rptr,
+	.get_wptr = mes_v11_0_ring_get_wptr,
+	.set_wptr = mes_v11_0_ring_set_wptr,
+	.insert_nop = amdgpu_ring_insert_nop,
+};
+
+static int mes_v11_0_submit_pkt_and_poll_completion(struct amdgpu_mes *mes,
+						    void *pkt, int size)
+{
+	int ndw = size / 4;
+	signed long r;
+	union MESAPI__ADD_QUEUE *x_pkt = pkt;
+	struct amdgpu_device *adev = mes->adev;
+	struct amdgpu_ring *ring = &mes->ring;
+
+	BUG_ON(size % 4 != 0);
+
+	if (amdgpu_ring_alloc(ring, ndw))
+		return -ENOMEM;
+
+	amdgpu_ring_write_multiple(ring, pkt, ndw);
+	amdgpu_ring_commit(ring);
+
+	DRM_DEBUG("MES msg=%d was emitted\n", x_pkt->header.opcode);
+
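+	/*
+	 * Every MES packet carries an API completion fence; poll the ring
+	 * fence until the firmware signals it (emulation gets a longer
+	 * timeout).
+	 */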
+	r = amdgpu_fence_wait_polling(ring, ring->fence_drv.sync_seq,
+		      adev->usec_timeout * (amdgpu_emu_mode ? 100 : 1));
+	if (r < 1) {
+		DRM_ERROR("MES failed to response msg=%d\n",
+			  x_pkt->header.opcode);
+		return -ETIMEDOUT;
+	}
+
+	return 0;
+}
+
+static int convert_to_mes_queue_type(int queue_type)
+{
+	if (queue_type == AMDGPU_RING_TYPE_GFX)
+		return MES_QUEUE_TYPE_GFX;
+	else if (queue_type == AMDGPU_RING_TYPE_COMPUTE)
+		return MES_QUEUE_TYPE_COMPUTE;
+	else if (queue_type == AMDGPU_RING_TYPE_SDMA)
+		return MES_QUEUE_TYPE_SDMA;
+	else
+		BUG();
+	return -1;
+}
+
+static int mes_v11_0_add_hw_queue(struct amdgpu_mes *mes,
+				  struct mes_add_queue_input *input)
+{
+	struct amdgpu_device *adev = mes->adev;
+	union MESAPI__ADD_QUEUE mes_add_queue_pkt;
+	struct amdgpu_vmhub *hub = &adev->vmhub[AMDGPU_GFXHUB_0];
+	uint32_t vm_cntx_cntl = hub->vm_cntx_cntl;
+
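+	/*
+	 * Build an ADD_QUEUE packet from the caller-supplied queue
+	 * description and hand it to the MES scheduler pipe.
+	 */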
+	memset(&mes_add_queue_pkt, 0, sizeof(mes_add_queue_pkt));
+
+	mes_add_queue_pkt.header.type = MES_API_TYPE_SCHEDULER;
+	mes_add_queue_pkt.header.opcode = MES_SCH_API_ADD_QUEUE;
+	mes_add_queue_pkt.header.dwsize = API_FRAME_SIZE_IN_DWORDS;
+
+	mes_add_queue_pkt.process_id = input->process_id;
+	mes_add_queue_pkt.page_table_base_addr = input->page_table_base_addr;
+	mes_add_queue_pkt.process_va_start = input->process_va_start;
+	mes_add_queue_pkt.process_va_end = input->process_va_end;
+	mes_add_queue_pkt.process_quantum = input->process_quantum;
+	mes_add_queue_pkt.process_context_addr = input->process_context_addr;
+	mes_add_queue_pkt.gang_quantum = input->gang_quantum;
+	mes_add_queue_pkt.gang_context_addr = input->gang_context_addr;
+	mes_add_queue_pkt.inprocess_gang_priority =
+		input->inprocess_gang_priority;
+	mes_add_queue_pkt.gang_global_priority_level =
+		input->gang_global_priority_level;
+	mes_add_queue_pkt.doorbell_offset = input->doorbell_offset;
+	mes_add_queue_pkt.mqd_addr = input->mqd_addr;
+	mes_add_queue_pkt.wptr_addr = input->wptr_addr;
+	mes_add_queue_pkt.queue_type =
+		convert_to_mes_queue_type(input->queue_type);
+	mes_add_queue_pkt.paging = input->paging;
+	mes_add_queue_pkt.vm_context_cntl = vm_cntx_cntl;
+	mes_add_queue_pkt.gws_base = input->gws_base;
+	mes_add_queue_pkt.gws_size = input->gws_size;
+	mes_add_queue_pkt.trap_handler_addr = input->tba_addr;
+	mes_add_queue_pkt.tma_addr = input->tma_addr;
+
+	mes_add_queue_pkt.api_status.api_completion_fence_addr =
+		mes->ring.fence_drv.gpu_addr;
+	mes_add_queue_pkt.api_status.api_completion_fence_value =
+		++mes->ring.fence_drv.sync_seq;
+
+	return mes_v11_0_submit_pkt_and_poll_completion(mes,
+			&mes_add_queue_pkt, sizeof(mes_add_queue_pkt));
+}
+
+static int mes_v11_0_remove_hw_queue(struct amdgpu_mes *mes,
+				     struct mes_remove_queue_input *input)
+{
+	union MESAPI__REMOVE_QUEUE mes_remove_queue_pkt;
+
+	memset(&mes_remove_queue_pkt, 0, sizeof(mes_remove_queue_pkt));
+
+	mes_remove_queue_pkt.header.type = MES_API_TYPE_SCHEDULER;
+	mes_remove_queue_pkt.header.opcode = MES_SCH_API_REMOVE_QUEUE;
+	mes_remove_queue_pkt.header.dwsize = API_FRAME_SIZE_IN_DWORDS;
+
+	mes_remove_queue_pkt.doorbell_offset = input->doorbell_offset;
+	mes_remove_queue_pkt.gang_context_addr = input->gang_context_addr;
+
+	mes_remove_queue_pkt.api_status.api_completion_fence_addr =
+		mes->ring.fence_drv.gpu_addr;
+	mes_remove_queue_pkt.api_status.api_completion_fence_value =
+		++mes->ring.fence_drv.sync_seq;
+
+	return mes_v11_0_submit_pkt_and_poll_completion(mes,
+			&mes_remove_queue_pkt, sizeof(mes_remove_queue_pkt));
+}
+
+static int mes_v11_0_unmap_legacy_queue(struct amdgpu_mes *mes,
+			struct mes_unmap_legacy_queue_input *input)
+{
+	union MESAPI__REMOVE_QUEUE mes_remove_queue_pkt;
+
+	memset(&mes_remove_queue_pkt, 0, sizeof(mes_remove_queue_pkt));
+
+	mes_remove_queue_pkt.header.type = MES_API_TYPE_SCHEDULER;
+	mes_remove_queue_pkt.header.opcode = MES_SCH_API_REMOVE_QUEUE;
+	mes_remove_queue_pkt.header.dwsize = API_FRAME_SIZE_IN_DWORDS;
+
+	mes_remove_queue_pkt.doorbell_offset = input->doorbell_offset << 2;
+	mes_remove_queue_pkt.gang_context_addr = 0;
+
+	mes_remove_queue_pkt.pipe_id = input->pipe_id;
+	mes_remove_queue_pkt.queue_id = input->queue_id;
+
+	if (input->action == PREEMPT_QUEUES_NO_UNMAP) {
+		mes_remove_queue_pkt.preempt_legacy_gfx_queue = 1;
+		mes_remove_queue_pkt.tf_addr = input->trail_fence_addr;
+		mes_remove_queue_pkt.tf_data =
+			lower_32_bits(input->trail_fence_data);
+	} else {
+		if (input->queue_type == AMDGPU_RING_TYPE_GFX)
+			mes_remove_queue_pkt.unmap_legacy_gfx_queue = 1;
+		else
+			mes_remove_queue_pkt.unmap_kiq_utility_queue = 1;
+	}
+
+	mes_remove_queue_pkt.api_status.api_completion_fence_addr =
+		mes->ring.fence_drv.gpu_addr;
+	mes_remove_queue_pkt.api_status.api_completion_fence_value =
+		++mes->ring.fence_drv.sync_seq;
+
+	return mes_v11_0_submit_pkt_and_poll_completion(mes,
+			&mes_remove_queue_pkt, sizeof(mes_remove_queue_pkt));
+}
+
+static int mes_v11_0_suspend_gang(struct amdgpu_mes *mes,
+				  struct mes_suspend_gang_input *input)
+{
+	return 0;
+}
+
+static int mes_v11_0_resume_gang(struct amdgpu_mes *mes,
+				 struct mes_resume_gang_input *input)
+{
+	return 0;
+}
+
+static int mes_v11_0_query_sched_status(struct amdgpu_mes *mes)
+{
+	union MESAPI__QUERY_MES_STATUS mes_status_pkt;
+
+	memset(&mes_status_pkt, 0, sizeof(mes_status_pkt));
+
+	mes_status_pkt.header.type = MES_API_TYPE_SCHEDULER;
+	mes_status_pkt.header.opcode = MES_SCH_API_QUERY_SCHEDULER_STATUS;
+	mes_status_pkt.header.dwsize = API_FRAME_SIZE_IN_DWORDS;
+
+	mes_status_pkt.api_status.api_completion_fence_addr =
+		mes->ring.fence_drv.gpu_addr;
+	mes_status_pkt.api_status.api_completion_fence_value =
+		++mes->ring.fence_drv.sync_seq;
+
+	return mes_v11_0_submit_pkt_and_poll_completion(mes,
+			&mes_status_pkt, sizeof(mes_status_pkt));
+}
+
+static int mes_v11_0_set_hw_resources(struct amdgpu_mes *mes)
+{
+	int i;
+	struct amdgpu_device *adev = mes->adev;
+	union MESAPI_SET_HW_RESOURCES mes_set_hw_res_pkt;
+
+	memset(&mes_set_hw_res_pkt, 0, sizeof(mes_set_hw_res_pkt));
+
+	mes_set_hw_res_pkt.header.type = MES_API_TYPE_SCHEDULER;
+	mes_set_hw_res_pkt.header.opcode = MES_SCH_API_SET_HW_RSRC;
+	mes_set_hw_res_pkt.header.dwsize = API_FRAME_SIZE_IN_DWORDS;
+
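+	/*
+	 * Describe the resources MES is allowed to manage: VMID masks for
+	 * both hubs, per-pipe HQD masks, aggregated doorbells and the
+	 * register segment bases for GC/MMHUB/OSSSYS.
+	 */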
+	mes_set_hw_res_pkt.vmid_mask_mmhub = mes->vmid_mask_mmhub;
+	mes_set_hw_res_pkt.vmid_mask_gfxhub = mes->vmid_mask_gfxhub;
+	mes_set_hw_res_pkt.gds_size = adev->gds.gds_size;
+	mes_set_hw_res_pkt.paging_vmid = 0;
+	mes_set_hw_res_pkt.g_sch_ctx_gpu_mc_ptr = mes->sch_ctx_gpu_addr;
+	mes_set_hw_res_pkt.query_status_fence_gpu_mc_ptr =
+		mes->query_status_fence_gpu_addr;
+
+	for (i = 0; i < MAX_COMPUTE_PIPES; i++)
+		mes_set_hw_res_pkt.compute_hqd_mask[i] =
+			mes->compute_hqd_mask[i];
+
+	for (i = 0; i < MAX_GFX_PIPES; i++)
+		mes_set_hw_res_pkt.gfx_hqd_mask[i] = mes->gfx_hqd_mask[i];
+
+	for (i = 0; i < MAX_SDMA_PIPES; i++)
+		mes_set_hw_res_pkt.sdma_hqd_mask[i] = mes->sdma_hqd_mask[i];
+
+	for (i = 0; i < AMD_PRIORITY_NUM_LEVELS; i++)
+		mes_set_hw_res_pkt.aggregated_doorbells[i] =
+			mes->agreegated_doorbells[i];
+
+	for (i = 0; i < 5; i++) {
+		mes_set_hw_res_pkt.gc_base[i] = adev->reg_offset[GC_HWIP][0][i];
+		mes_set_hw_res_pkt.mmhub_base[i] =
+				adev->reg_offset[MMHUB_HWIP][0][i];
+		mes_set_hw_res_pkt.osssys_base[i] =
+		adev->reg_offset[OSSSYS_HWIP][0][i];
+	}
+
+	mes_set_hw_res_pkt.disable_reset = 1;
+	mes_set_hw_res_pkt.disable_mes_log = 1;
+	mes_set_hw_res_pkt.use_different_vmid_compute = 1;
+
+	mes_set_hw_res_pkt.api_status.api_completion_fence_addr =
+		mes->ring.fence_drv.gpu_addr;
+	mes_set_hw_res_pkt.api_status.api_completion_fence_value =
+		++mes->ring.fence_drv.sync_seq;
+
+	return mes_v11_0_submit_pkt_and_poll_completion(mes,
+			&mes_set_hw_res_pkt, sizeof(mes_set_hw_res_pkt));
+}
+
+static const struct amdgpu_mes_funcs mes_v11_0_funcs = {
+	.add_hw_queue = mes_v11_0_add_hw_queue,
+	.remove_hw_queue = mes_v11_0_remove_hw_queue,
+	.unmap_legacy_queue = mes_v11_0_unmap_legacy_queue,
+	.suspend_gang = mes_v11_0_suspend_gang,
+	.resume_gang = mes_v11_0_resume_gang,
+};
+
+static int mes_v11_0_init_microcode(struct amdgpu_device *adev,
+				    enum admgpu_mes_pipe pipe)
+{
+	char fw_name[30];
+	char ucode_prefix[30];
+	int err;
+	const struct mes_firmware_header_v1_0 *mes_hdr;
+	struct amdgpu_firmware_info *info;
+
+	amdgpu_ucode_ip_version_decode(adev, GC_HWIP, ucode_prefix, sizeof(ucode_prefix));
+
+	if (pipe == AMDGPU_MES_SCHED_PIPE)
+		snprintf(fw_name, sizeof(fw_name), "amdgpu/%s_mes.bin",
+			 ucode_prefix);
+	else
+		snprintf(fw_name, sizeof(fw_name), "amdgpu/%s_mes1.bin",
+			 ucode_prefix);
+
+	err = request_firmware(&adev->mes.fw[pipe], fw_name, adev->dev);
+	if (err)
+		return err;
+
+	err = amdgpu_ucode_validate(adev->mes.fw[pipe]);
+	if (err) {
+		release_firmware(adev->mes.fw[pipe]);
+		adev->mes.fw[pipe] = NULL;
+		return err;
+	}
+
+	mes_hdr = (const struct mes_firmware_header_v1_0 *)
+		adev->mes.fw[pipe]->data;
+	adev->mes.ucode_fw_version[pipe] =
+		le32_to_cpu(mes_hdr->mes_ucode_version);
+	adev->mes.data_fw_version[pipe] =
+		le32_to_cpu(mes_hdr->mes_ucode_data_version);
+	adev->mes.uc_start_addr[pipe] =
+		le32_to_cpu(mes_hdr->mes_uc_start_addr_lo) |
+		((uint64_t)(le32_to_cpu(mes_hdr->mes_uc_start_addr_hi)) << 32);
+	adev->mes.data_start_addr[pipe] =
+		le32_to_cpu(mes_hdr->mes_data_start_addr_lo) |
+		((uint64_t)(le32_to_cpu(mes_hdr->mes_data_start_addr_hi)) << 32);
+
+	if (adev->firmware.load_type == AMDGPU_FW_LOAD_PSP) {
+		int ucode, ucode_data;
+
+		if (pipe == AMDGPU_MES_SCHED_PIPE) {
+			ucode = AMDGPU_UCODE_ID_CP_MES;
+			ucode_data = AMDGPU_UCODE_ID_CP_MES_DATA;
+		} else {
+			ucode = AMDGPU_UCODE_ID_CP_MES1;
+			ucode_data = AMDGPU_UCODE_ID_CP_MES1_DATA;
+		}
+
+		info = &adev->firmware.ucode[ucode];
+		info->ucode_id = ucode;
+		info->fw = adev->mes.fw[pipe];
+		adev->firmware.fw_size +=
+			ALIGN(le32_to_cpu(mes_hdr->mes_ucode_size_bytes),
+			      PAGE_SIZE);
+
+		info = &adev->firmware.ucode[ucode_data];
+		info->ucode_id = ucode_data;
+		info->fw = adev->mes.fw[pipe];
+		adev->firmware.fw_size +=
+			ALIGN(le32_to_cpu(mes_hdr->mes_ucode_data_size_bytes),
+			      PAGE_SIZE);
+	}
+
+	return 0;
+}
+
+static void mes_v11_0_free_microcode(struct amdgpu_device *adev,
+				     enum admgpu_mes_pipe pipe)
+{
+	release_firmware(adev->mes.fw[pipe]);
+	adev->mes.fw[pipe] = NULL;
+}
+
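+/* Copy the MES ucode segment out of the firmware image into a reserved
+ * VRAM BO for backdoor loading.
+ */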
+static int mes_v11_0_allocate_ucode_buffer(struct amdgpu_device *adev,
+					   enum admgpu_mes_pipe pipe)
+{
+	int r;
+	const struct mes_firmware_header_v1_0 *mes_hdr;
+	const __le32 *fw_data;
+	unsigned fw_size;
+
+	mes_hdr = (const struct mes_firmware_header_v1_0 *)
+		adev->mes.fw[pipe]->data;
+
+	fw_data = (const __le32 *)(adev->mes.fw[pipe]->data +
+		   le32_to_cpu(mes_hdr->mes_ucode_offset_bytes));
+	fw_size = le32_to_cpu(mes_hdr->mes_ucode_size_bytes);
+
+	r = amdgpu_bo_create_reserved(adev, fw_size,
+				      PAGE_SIZE, AMDGPU_GEM_DOMAIN_VRAM,
+				      &adev->mes.ucode_fw_obj[pipe],
+				      &adev->mes.ucode_fw_gpu_addr[pipe],
+				      (void **)&adev->mes.ucode_fw_ptr[pipe]);
+	if (r) {
+		dev_err(adev->dev, "(%d) failed to create mes fw bo\n", r);
+		return r;
+	}
+
+	memcpy(adev->mes.ucode_fw_ptr[pipe], fw_data, fw_size);
+
+	amdgpu_bo_kunmap(adev->mes.ucode_fw_obj[pipe]);
+	amdgpu_bo_unreserve(adev->mes.ucode_fw_obj[pipe]);
+
+	return 0;
+}
+
+static int mes_v11_0_allocate_ucode_data_buffer(struct amdgpu_device *adev,
+						enum admgpu_mes_pipe pipe)
+{
+	int r;
+	const struct mes_firmware_header_v1_0 *mes_hdr;
+	const __le32 *fw_data;
+	unsigned fw_size;
+
+	mes_hdr = (const struct mes_firmware_header_v1_0 *)
+		adev->mes.fw[pipe]->data;
+
+	fw_data = (const __le32 *)(adev->mes.fw[pipe]->data +
+		   le32_to_cpu(mes_hdr->mes_ucode_data_offset_bytes));
+	fw_size = le32_to_cpu(mes_hdr->mes_ucode_data_size_bytes);
+
+	r = amdgpu_bo_create_reserved(adev, fw_size,
+				      64 * 1024, AMDGPU_GEM_DOMAIN_VRAM,
+				      &adev->mes.data_fw_obj[pipe],
+				      &adev->mes.data_fw_gpu_addr[pipe],
+				      (void **)&adev->mes.data_fw_ptr[pipe]);
+	if (r) {
+		dev_err(adev->dev, "(%d) failed to create mes data fw bo\n", r);
+		return r;
+	}
+
+	memcpy(adev->mes.data_fw_ptr[pipe], fw_data, fw_size);
+
+	amdgpu_bo_kunmap(adev->mes.data_fw_obj[pipe]);
+	amdgpu_bo_unreserve(adev->mes.data_fw_obj[pipe]);
+
+	return 0;
+}
+
+static void mes_v11_0_free_ucode_buffers(struct amdgpu_device *adev,
+					 enum admgpu_mes_pipe pipe)
+{
+	amdgpu_bo_free_kernel(&adev->mes.data_fw_obj[pipe],
+			      &adev->mes.data_fw_gpu_addr[pipe],
+			      (void **)&adev->mes.data_fw_ptr[pipe]);
+
+	amdgpu_bo_free_kernel(&adev->mes.ucode_fw_obj[pipe],
+			      &adev->mes.ucode_fw_gpu_addr[pipe],
+			      (void **)&adev->mes.ucode_fw_ptr[pipe]);
+}
+
+static void mes_v11_0_enable(struct amdgpu_device *adev, bool enable)
+{
+	uint64_t ucode_addr;
+	uint32_t pipe, data = 0;
+
+	if (enable) {
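+		/*
+		 * Put the MES pipes into reset, program each pipe's ucode
+		 * start address under GRBM select, then release reset and
+		 * activate the pipes below.
+		 */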
+		data = RREG32_SOC15(GC, 0, regCP_MES_CNTL);
+		data = REG_SET_FIELD(data, CP_MES_CNTL, MES_PIPE0_RESET, 1);
+		data = REG_SET_FIELD(data, CP_MES_CNTL,
+			     MES_PIPE1_RESET, adev->enable_mes_kiq ? 1 : 0);
+		WREG32_SOC15(GC, 0, regCP_MES_CNTL, data);
+
+		mutex_lock(&adev->srbm_mutex);
+		for (pipe = 0; pipe < AMDGPU_MAX_MES_PIPES; pipe++) {
+			if (!adev->enable_mes_kiq &&
+			    pipe == AMDGPU_MES_KIQ_PIPE)
+				continue;
+
+			soc21_grbm_select(adev, 3, pipe, 0, 0);
+
+			ucode_addr = adev->mes.uc_start_addr[pipe] >> 2;
+			WREG32_SOC15(GC, 0, regCP_MES_PRGRM_CNTR_START,
+				     lower_32_bits(ucode_addr));
+			WREG32_SOC15(GC, 0, regCP_MES_PRGRM_CNTR_START_HI,
+				     upper_32_bits(ucode_addr));
+		}
+		soc21_grbm_select(adev, 0, 0, 0, 0);
+		mutex_unlock(&adev->srbm_mutex);
+
+		/* unhalt MES and activate pipe0 */
+		data = REG_SET_FIELD(0, CP_MES_CNTL, MES_PIPE0_ACTIVE, 1);
+		data = REG_SET_FIELD(data, CP_MES_CNTL, MES_PIPE1_ACTIVE,
+				     adev->enable_mes_kiq ? 1 : 0);
+		WREG32_SOC15(GC, 0, regCP_MES_CNTL, data);
+
+		if (amdgpu_emu_mode)
+			msleep(100);
+		else
+			udelay(50);
+	} else {
+		data = RREG32_SOC15(GC, 0, regCP_MES_CNTL);
+		data = REG_SET_FIELD(data, CP_MES_CNTL, MES_PIPE0_ACTIVE, 0);
+		data = REG_SET_FIELD(data, CP_MES_CNTL, MES_PIPE1_ACTIVE, 0);
+		data = REG_SET_FIELD(data, CP_MES_CNTL,
+				     MES_INVALIDATE_ICACHE, 1);
+		data = REG_SET_FIELD(data, CP_MES_CNTL, MES_PIPE0_RESET, 1);
+		data = REG_SET_FIELD(data, CP_MES_CNTL, MES_PIPE1_RESET,
+				     adev->enable_mes_kiq ? 1 : 0);
+		data = REG_SET_FIELD(data, CP_MES_CNTL, MES_HALT, 1);
+		WREG32_SOC15(GC, 0, regCP_MES_CNTL, data);
+	}
+}
+
+/* This function handles backdoor (direct) loading of the MES firmware */
+static int mes_v11_0_load_microcode(struct amdgpu_device *adev,
+				    enum admgpu_mes_pipe pipe)
+{
+	int r;
+	uint32_t data;
+	uint64_t ucode_addr;
+
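+	/*
+	 * Backdoor load: copy the MES ucode and data images into VRAM BOs
+	 * and point the CP instruction/data caches at them directly,
+	 * bypassing the PSP.
+	 */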
+	mes_v11_0_enable(adev, false);
+
+	if (!adev->mes.fw[pipe])
+		return -EINVAL;
+
+	r = mes_v11_0_allocate_ucode_buffer(adev, pipe);
+	if (r)
+		return r;
+
+	r = mes_v11_0_allocate_ucode_data_buffer(adev, pipe);
+	if (r) {
+		mes_v11_0_free_ucode_buffers(adev, pipe);
+		return r;
+	}
+
+	mutex_lock(&adev->srbm_mutex);
+	/* me=3, pipe=0, queue=0 */
+	soc21_grbm_select(adev, 3, pipe, 0, 0);
+
+	WREG32_SOC15(GC, 0, regCP_MES_IC_BASE_CNTL, 0);
+
+	/* set ucode start address */
+	ucode_addr = adev->mes.uc_start_addr[pipe] >> 2;
+	WREG32_SOC15(GC, 0, regCP_MES_PRGRM_CNTR_START,
+		     lower_32_bits(ucode_addr));
+	WREG32_SOC15(GC, 0, regCP_MES_PRGRM_CNTR_START_HI,
+		     upper_32_bits(ucode_addr));
+
+	/* set ucode firmware address */
+	WREG32_SOC15(GC, 0, regCP_MES_IC_BASE_LO,
+		     lower_32_bits(adev->mes.ucode_fw_gpu_addr[pipe]));
+	WREG32_SOC15(GC, 0, regCP_MES_IC_BASE_HI,
+		     upper_32_bits(adev->mes.ucode_fw_gpu_addr[pipe]));
+
+	/* set ucode instruction cache boundary to 2M-1 */
+	WREG32_SOC15(GC, 0, regCP_MES_MIBOUND_LO, 0x1FFFFF);
+
+	/* set ucode data firmware address */
+	WREG32_SOC15(GC, 0, regCP_MES_MDBASE_LO,
+		     lower_32_bits(adev->mes.data_fw_gpu_addr[pipe]));
+	WREG32_SOC15(GC, 0, regCP_MES_MDBASE_HI,
+		     upper_32_bits(adev->mes.data_fw_gpu_addr[pipe]));
+
+	/* set CP_MES_MDBOUND_LO to 0x3FFFF (256K-1) */
+	WREG32_SOC15(GC, 0, regCP_MES_MDBOUND_LO, 0x3FFFF);
+
+	/* invalidate ICACHE */
+	data = RREG32_SOC15(GC, 0, regCP_MES_IC_OP_CNTL);
+	data = REG_SET_FIELD(data, CP_MES_IC_OP_CNTL, PRIME_ICACHE, 0);
+	data = REG_SET_FIELD(data, CP_MES_IC_OP_CNTL, INVALIDATE_CACHE, 1);
+	WREG32_SOC15(GC, 0, regCP_MES_IC_OP_CNTL, data);
+
+	/* prime the ICACHE. */
+	data = RREG32_SOC15(GC, 0, regCP_MES_IC_OP_CNTL);
+	data = REG_SET_FIELD(data, CP_MES_IC_OP_CNTL, PRIME_ICACHE, 1);
+	WREG32_SOC15(GC, 0, regCP_MES_IC_OP_CNTL, data);
+
+	soc21_grbm_select(adev, 0, 0, 0, 0);
+	mutex_unlock(&adev->srbm_mutex);
+
+	return 0;
+}
+
+static int mes_v11_0_allocate_eop_buf(struct amdgpu_device *adev,
+				      enum admgpu_mes_pipe pipe)
+{
+	int r;
+	u32 *eop;
+
+	r = amdgpu_bo_create_reserved(adev, MES_EOP_SIZE, PAGE_SIZE,
+			      AMDGPU_GEM_DOMAIN_GTT,
+			      &adev->mes.eop_gpu_obj[pipe],
+			      &adev->mes.eop_gpu_addr[pipe],
+			      (void **)&eop);
+	if (r) {
+		dev_warn(adev->dev, "(%d) create EOP bo failed\n", r);
+		return r;
+	}
+
+	memset(eop, 0,
+	       adev->mes.eop_gpu_obj[pipe]->tbo.base.size);
+
+	amdgpu_bo_kunmap(adev->mes.eop_gpu_obj[pipe]);
+	amdgpu_bo_unreserve(adev->mes.eop_gpu_obj[pipe]);
+
+	return 0;
+}
+
+static int mes_v11_0_mqd_init(struct amdgpu_ring *ring)
+{
+	struct amdgpu_device *adev = ring->adev;
+	struct v10_compute_mqd *mqd = ring->mqd_ptr;
+	uint64_t hqd_gpu_addr, wb_gpu_addr, eop_base_addr;
+	uint32_t tmp;
+
+	mqd->header = 0xC0310800;
+	mqd->compute_pipelinestat_enable = 0x00000001;
+	mqd->compute_static_thread_mgmt_se0 = 0xffffffff;
+	mqd->compute_static_thread_mgmt_se1 = 0xffffffff;
+	mqd->compute_static_thread_mgmt_se2 = 0xffffffff;
+	mqd->compute_static_thread_mgmt_se3 = 0xffffffff;
+	mqd->compute_misc_reserved = 0x00000007;
+
+	eop_base_addr = ring->eop_gpu_addr >> 8;
+	mqd->cp_hqd_eop_base_addr_lo = eop_base_addr;
+	mqd->cp_hqd_eop_base_addr_hi = upper_32_bits(eop_base_addr);
+
+	/* set the EOP size, register value is 2^(EOP_SIZE+1) dwords */
+	tmp = RREG32_SOC15(GC, 0, regCP_HQD_EOP_CONTROL);
+	tmp = REG_SET_FIELD(tmp, CP_HQD_EOP_CONTROL, EOP_SIZE,
+			(order_base_2(MES_EOP_SIZE / 4) - 1));
+
+	mqd->cp_hqd_eop_control = tmp;
+
+	/* enable doorbell? */
+	tmp = RREG32_SOC15(GC, 0, regCP_HQD_PQ_DOORBELL_CONTROL);
+
+	if (ring->use_doorbell) {
+		tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_DOORBELL_CONTROL,
+				    DOORBELL_OFFSET, ring->doorbell_index);
+		tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_DOORBELL_CONTROL,
+				    DOORBELL_EN, 1);
+		tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_DOORBELL_CONTROL,
+				    DOORBELL_SOURCE, 0);
+		tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_DOORBELL_CONTROL,
+				    DOORBELL_HIT, 0);
+	} else {
+		tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_DOORBELL_CONTROL,
+				    DOORBELL_EN, 0);
+	}
+
+	mqd->cp_hqd_pq_doorbell_control = tmp;
+
+	/* disable the queue if it's active */
+	ring->wptr = 0;
+	mqd->cp_hqd_dequeue_request = 0;
+	mqd->cp_hqd_pq_rptr = 0;
+	mqd->cp_hqd_pq_wptr_lo = 0;
+	mqd->cp_hqd_pq_wptr_hi = 0;
+
+	/* set the pointer to the MQD */
+	mqd->cp_mqd_base_addr_lo = ring->mqd_gpu_addr & 0xfffffffc;
+	mqd->cp_mqd_base_addr_hi = upper_32_bits(ring->mqd_gpu_addr);
+
+	/* set MQD vmid to 0 */
+	tmp = RREG32_SOC15(GC, 0, regCP_MQD_CONTROL);
+	tmp = REG_SET_FIELD(tmp, CP_MQD_CONTROL, VMID, 0);
+	mqd->cp_mqd_control = tmp;
+
+	/* set the pointer to the HQD, this is similar to CP_RB0_BASE/_HI */
+	hqd_gpu_addr = ring->gpu_addr >> 8;
+	mqd->cp_hqd_pq_base_lo = hqd_gpu_addr;
+	mqd->cp_hqd_pq_base_hi = upper_32_bits(hqd_gpu_addr);
+
+	/* set up the HQD, this is similar to CP_RB0_CNTL */
+	tmp = RREG32_SOC15(GC, 0, regCP_HQD_PQ_CONTROL);
+	tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, QUEUE_SIZE,
+			    (order_base_2(ring->ring_size / 4) - 1));
+	tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, RPTR_BLOCK_SIZE,
+			    ((order_base_2(AMDGPU_GPU_PAGE_SIZE / 4) - 1) << 8));
+#ifdef __BIG_ENDIAN
+	tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, ENDIAN_SWAP, 1);
+#endif
+	tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, UNORD_DISPATCH, 0);
+	tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, TUNNEL_DISPATCH, 0);
+	tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, PRIV_STATE, 1);
+	tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, KMD_QUEUE, 1);
+	mqd->cp_hqd_pq_control = tmp;
+
+	/* set the wb address whether it's enabled or not */
+	wb_gpu_addr = ring->rptr_gpu_addr;
+	mqd->cp_hqd_pq_rptr_report_addr_lo = wb_gpu_addr & 0xfffffffc;
+	mqd->cp_hqd_pq_rptr_report_addr_hi =
+		upper_32_bits(wb_gpu_addr) & 0xffff;
+
+	/* only used if CP_PQ_WPTR_POLL_CNTL.CP_PQ_WPTR_POLL_CNTL__EN_MASK=1 */
+	wb_gpu_addr = ring->wptr_gpu_addr;
+	mqd->cp_hqd_pq_wptr_poll_addr_lo = wb_gpu_addr & 0xfffffff8;
+	mqd->cp_hqd_pq_wptr_poll_addr_hi = upper_32_bits(wb_gpu_addr) & 0xffff;
+
+	tmp = 0;
+	/* enable the doorbell if requested */
+	if (ring->use_doorbell) {
+		tmp = RREG32_SOC15(GC, 0, regCP_HQD_PQ_DOORBELL_CONTROL);
+		tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_DOORBELL_CONTROL,
+				DOORBELL_OFFSET, ring->doorbell_index);
+
+		tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_DOORBELL_CONTROL,
+				    DOORBELL_EN, 1);
+		tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_DOORBELL_CONTROL,
+				    DOORBELL_SOURCE, 0);
+		tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_DOORBELL_CONTROL,
+				    DOORBELL_HIT, 0);
+	}
+
+	mqd->cp_hqd_pq_doorbell_control = tmp;
+
+	/* reset read and write pointers, similar to CP_RB0_WPTR/_RPTR */
+	ring->wptr = 0;
+	mqd->cp_hqd_pq_rptr = RREG32_SOC15(GC, 0, regCP_HQD_PQ_RPTR);
+
+	/* set the vmid for the queue */
+	mqd->cp_hqd_vmid = 0;
+
+	tmp = RREG32_SOC15(GC, 0, regCP_HQD_PERSISTENT_STATE);
+	tmp = REG_SET_FIELD(tmp, CP_HQD_PERSISTENT_STATE, PRELOAD_SIZE, 0x55);
+	mqd->cp_hqd_persistent_state = tmp;
+
+	/* set MIN_IB_AVAIL_SIZE */
+	tmp = RREG32_SOC15(GC, 0, regCP_HQD_IB_CONTROL);
+	tmp = REG_SET_FIELD(tmp, CP_HQD_IB_CONTROL, MIN_IB_AVAIL_SIZE, 3);
+	mqd->cp_hqd_ib_control = tmp;
+
+	/* activate the queue */
+	mqd->cp_hqd_active = 1;
+	return 0;
+}
+
+static void mes_v11_0_queue_init_register(struct amdgpu_ring *ring)
+{
+	struct v10_compute_mqd *mqd = ring->mqd_ptr;
+	struct amdgpu_device *adev = ring->adev;
+	uint32_t data = 0;
+
+	mutex_lock(&adev->srbm_mutex);
+	soc21_grbm_select(adev, 3, ring->pipe, 0, 0);
+
+	/* set CP_HQD_VMID.VMID = 0. */
+	data = RREG32_SOC15(GC, 0, regCP_HQD_VMID);
+	data = REG_SET_FIELD(data, CP_HQD_VMID, VMID, 0);
+	WREG32_SOC15(GC, 0, regCP_HQD_VMID, data);
+
+	/* set CP_HQD_PQ_DOORBELL_CONTROL.DOORBELL_EN=0 */
+	data = RREG32_SOC15(GC, 0, regCP_HQD_PQ_DOORBELL_CONTROL);
+	data = REG_SET_FIELD(data, CP_HQD_PQ_DOORBELL_CONTROL,
+			     DOORBELL_EN, 0);
+	WREG32_SOC15(GC, 0, regCP_HQD_PQ_DOORBELL_CONTROL, data);
+
+	/* set CP_MQD_BASE_ADDR/HI with the MQD base address */
+	WREG32_SOC15(GC, 0, regCP_MQD_BASE_ADDR, mqd->cp_mqd_base_addr_lo);
+	WREG32_SOC15(GC, 0, regCP_MQD_BASE_ADDR_HI, mqd->cp_mqd_base_addr_hi);
+
+	/* set CP_MQD_CONTROL.VMID=0 */
+	data = RREG32_SOC15(GC, 0, regCP_MQD_CONTROL);
+	data = REG_SET_FIELD(data, CP_MQD_CONTROL, VMID, 0);
+	WREG32_SOC15(GC, 0, regCP_MQD_CONTROL, 0);
+
+	/* set CP_HQD_PQ_BASE/HI with the ring buffer base address */
+	WREG32_SOC15(GC, 0, regCP_HQD_PQ_BASE, mqd->cp_hqd_pq_base_lo);
+	WREG32_SOC15(GC, 0, regCP_HQD_PQ_BASE_HI, mqd->cp_hqd_pq_base_hi);
+
+	/* set CP_HQD_PQ_RPTR_REPORT_ADDR/HI */
+	WREG32_SOC15(GC, 0, regCP_HQD_PQ_RPTR_REPORT_ADDR,
+		     mqd->cp_hqd_pq_rptr_report_addr_lo);
+	WREG32_SOC15(GC, 0, regCP_HQD_PQ_RPTR_REPORT_ADDR_HI,
+		     mqd->cp_hqd_pq_rptr_report_addr_hi);
+
+	/* set CP_HQD_PQ_CONTROL */
+	WREG32_SOC15(GC, 0, regCP_HQD_PQ_CONTROL, mqd->cp_hqd_pq_control);
+
+	/* set CP_HQD_PQ_WPTR_POLL_ADDR/HI */
+	WREG32_SOC15(GC, 0, regCP_HQD_PQ_WPTR_POLL_ADDR,
+		     mqd->cp_hqd_pq_wptr_poll_addr_lo);
+	WREG32_SOC15(GC, 0, regCP_HQD_PQ_WPTR_POLL_ADDR_HI,
+		     mqd->cp_hqd_pq_wptr_poll_addr_hi);
+
+	/* set CP_HQD_PQ_DOORBELL_CONTROL */
+	WREG32_SOC15(GC, 0, regCP_HQD_PQ_DOORBELL_CONTROL,
+		     mqd->cp_hqd_pq_doorbell_control);
+
+	/* set CP_HQD_PERSISTENT_STATE.PRELOAD_SIZE=0x55 */
+	WREG32_SOC15(GC, 0, regCP_HQD_PERSISTENT_STATE, mqd->cp_hqd_persistent_state);
+
+	/* set CP_HQD_ACTIVE.ACTIVE=1 */
+	WREG32_SOC15(GC, 0, regCP_HQD_ACTIVE, mqd->cp_hqd_active);
+
+	soc21_grbm_select(adev, 0, 0, 0, 0);
+	mutex_unlock(&adev->srbm_mutex);
+}
+
+#if 0
+static int mes_v11_0_kiq_enable_queue(struct amdgpu_device *adev)
+{
+	struct amdgpu_kiq *kiq = &adev->gfx.kiq;
+	struct amdgpu_ring *kiq_ring = &adev->gfx.kiq.ring;
+	int r;
+
+	if (!kiq->pmf || !kiq->pmf->kiq_map_queues)
+		return -EINVAL;
+
+	r = amdgpu_ring_alloc(kiq_ring, kiq->pmf->map_queues_size);
+	if (r) {
+		DRM_ERROR("Failed to lock KIQ (%d).\n", r);
+		return r;
+	}
+
+	kiq->pmf->kiq_map_queues(kiq_ring, &adev->mes.ring);
+
+	r = amdgpu_ring_test_ring(kiq_ring);
+	if (r) {
+		DRM_ERROR("kfq enable failed\n");
+		kiq_ring->sched.ready = false;
+	}
+	return r;
+}
+#endif
+
+static int mes_v11_0_queue_init(struct amdgpu_device *adev,
+				enum admgpu_mes_pipe pipe)
+{
+	struct amdgpu_ring *ring;
+	int r;
+
+	if (pipe == AMDGPU_MES_KIQ_PIPE)
+		ring = &adev->gfx.kiq.ring;
+	else if (pipe == AMDGPU_MES_SCHED_PIPE)
+		ring = &adev->mes.ring;
+	else
+		BUG();
+
+	r = mes_v11_0_mqd_init(ring);
+	if (r)
+		return r;
+
+#if 0
+	if (pipe == AMDGPU_MES_SCHED_PIPE) {
+		r = mes_v11_0_kiq_enable_queue(adev);
+		if (r)
+			return r;
+	} else {
+		mes_v11_0_queue_init_register(ring);
+	}
+#else
+	mes_v11_0_queue_init_register(ring);
+#endif
+
+	return 0;
+}
+
+static int mes_v11_0_ring_init(struct amdgpu_device *adev)
+{
+	struct amdgpu_ring *ring;
+
+	ring = &adev->mes.ring;
+
+	ring->funcs = &mes_v11_0_ring_funcs;
+
+	ring->me = 3;
+	ring->pipe = 0;
+	ring->queue = 0;
+
+	ring->ring_obj = NULL;
+	ring->use_doorbell = true;
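+	/* doorbell slots are 64-bit; shift to get the ring's dword-based index */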
+	ring->doorbell_index = adev->doorbell_index.mes_ring0 << 1;
+	ring->eop_gpu_addr = adev->mes.eop_gpu_addr[AMDGPU_MES_SCHED_PIPE];
+	ring->no_scheduler = true;
+	sprintf(ring->name, "mes_%d.%d.%d", ring->me, ring->pipe, ring->queue);
+
+	return amdgpu_ring_init(adev, ring, 1024, NULL, 0,
+				AMDGPU_RING_PRIO_DEFAULT, NULL);
+}
+
+static int mes_v11_0_kiq_ring_init(struct amdgpu_device *adev)
+{
+	struct amdgpu_ring *ring;
+
+	spin_lock_init(&adev->gfx.kiq.ring_lock);
+
+	ring = &adev->gfx.kiq.ring;
+
+	ring->me = 3;
+	ring->pipe = 1;
+	ring->queue = 0;
+
+	ring->adev = NULL;
+	ring->ring_obj = NULL;
+	ring->use_doorbell = true;
+	ring->doorbell_index = adev->doorbell_index.mes_ring1 << 1;
+	ring->eop_gpu_addr = adev->mes.eop_gpu_addr[AMDGPU_MES_KIQ_PIPE];
+	ring->no_scheduler = true;
+	sprintf(ring->name, "mes_kiq_%d.%d.%d",
+		ring->me, ring->pipe, ring->queue);
+
+	return amdgpu_ring_init(adev, ring, 1024, NULL, 0,
+				AMDGPU_RING_PRIO_DEFAULT, NULL);
+}
+
+static int mes_v11_0_mqd_sw_init(struct amdgpu_device *adev,
+				 enum admgpu_mes_pipe pipe)
+{
+	int r, mqd_size = sizeof(struct v10_compute_mqd);
+	struct amdgpu_ring *ring;
+
+	if (pipe == AMDGPU_MES_KIQ_PIPE)
+		ring = &adev->gfx.kiq.ring;
+	else if (pipe == AMDGPU_MES_SCHED_PIPE)
+		ring = &adev->mes.ring;
+	else
+		BUG();
+
+	if (ring->mqd_obj)
+		return 0;
+
+	r = amdgpu_bo_create_kernel(adev, mqd_size, PAGE_SIZE,
+				    AMDGPU_GEM_DOMAIN_GTT, &ring->mqd_obj,
+				    &ring->mqd_gpu_addr, &ring->mqd_ptr);
+	if (r) {
+		dev_warn(adev->dev, "failed to create ring mqd bo (%d)", r);
+		return r;
+	}
+
+	memset(ring->mqd_ptr, 0, mqd_size);
+
+	/* prepare MQD backup */
+	adev->mes.mqd_backup[pipe] = kmalloc(mqd_size, GFP_KERNEL);
+	if (!adev->mes.mqd_backup[pipe])
+		dev_warn(adev->dev,
+			 "no memory to create MQD backup for ring %s\n",
+			 ring->name);
+
+	return 0;
+}
+
+static int mes_v11_0_sw_init(void *handle)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+	int pipe, r;
+
+	adev->mes.adev = adev;
+	adev->mes.funcs = &mes_v11_0_funcs;
+	adev->mes.kiq_hw_init = &mes_v11_0_kiq_hw_init;
+	adev->mes.kiq_hw_fini = &mes_v11_0_kiq_hw_fini;
+
+	r = amdgpu_mes_init(adev);
+	if (r)
+		return r;
+
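+	/*
+	 * Per-pipe setup: firmware, EOP buffer and MQD for the scheduler
+	 * pipe and, when enabled, the KIQ pipe.
+	 */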
+	for (pipe = 0; pipe < AMDGPU_MAX_MES_PIPES; pipe++) {
+		if (!adev->enable_mes_kiq && pipe == AMDGPU_MES_KIQ_PIPE)
+			continue;
+
+		r = mes_v11_0_init_microcode(adev, pipe);
+		if (r)
+			return r;
+
+		r = mes_v11_0_allocate_eop_buf(adev, pipe);
+		if (r)
+			return r;
+
+		r = mes_v11_0_mqd_sw_init(adev, pipe);
+		if (r)
+			return r;
+	}
+
+	if (adev->enable_mes_kiq) {
+		r = mes_v11_0_kiq_ring_init(adev);
+		if (r)
+			return r;
+	}
+
+	r = mes_v11_0_ring_init(adev);
+	if (r)
+		return r;
+
+	return 0;
+}
+
+static int mes_v11_0_sw_fini(void *handle)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+	int pipe;
+
+	amdgpu_device_wb_free(adev, adev->mes.sch_ctx_offs);
+	amdgpu_device_wb_free(adev, adev->mes.query_status_fence_offs);
+
+	for (pipe = 0; pipe < AMDGPU_MAX_MES_PIPES; pipe++) {
+		kfree(adev->mes.mqd_backup[pipe]);
+
+		amdgpu_bo_free_kernel(&adev->mes.eop_gpu_obj[pipe],
+				      &adev->mes.eop_gpu_addr[pipe],
+				      NULL);
+
+		mes_v11_0_free_microcode(adev, pipe);
+	}
+
+	amdgpu_bo_free_kernel(&adev->gfx.kiq.ring.mqd_obj,
+			      &adev->gfx.kiq.ring.mqd_gpu_addr,
+			      &adev->gfx.kiq.ring.mqd_ptr);
+
+	amdgpu_bo_free_kernel(&adev->mes.ring.mqd_obj,
+			      &adev->mes.ring.mqd_gpu_addr,
+			      &adev->mes.ring.mqd_ptr);
+
+	amdgpu_ring_fini(&adev->gfx.kiq.ring);
+	amdgpu_ring_fini(&adev->mes.ring);
+
+	if (adev->firmware.load_type == AMDGPU_FW_LOAD_DIRECT) {
+		mes_v11_0_free_ucode_buffers(adev, AMDGPU_MES_KIQ_PIPE);
+		mes_v11_0_free_ucode_buffers(adev, AMDGPU_MES_SCHED_PIPE);
+	}
+
+	amdgpu_mes_fini(adev);
+	return 0;
+}
+
+static void mes_v11_0_kiq_setting(struct amdgpu_ring *ring)
+{
+	uint32_t tmp;
+	struct amdgpu_device *adev = ring->adev;
+
+	/* tell RLC which is KIQ queue */
+	tmp = RREG32_SOC15(GC, 0, regRLC_CP_SCHEDULERS);
+	tmp &= 0xffffff00;
+	tmp |= (ring->me << 5) | (ring->pipe << 3) | (ring->queue);
+	WREG32_SOC15(GC, 0, regRLC_CP_SCHEDULERS, tmp);
+	tmp |= 0x80;
+	WREG32_SOC15(GC, 0, regRLC_CP_SCHEDULERS, tmp);
+}
+
+static int mes_v11_0_kiq_hw_init(struct amdgpu_device *adev)
+{
+	int r = 0;
+
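+	/*
+	 * With direct (backdoor) firmware load, both MES pipes are loaded
+	 * here before enabling MES and bringing up the KIQ pipe queue.
+	 */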
+	if (adev->firmware.load_type == AMDGPU_FW_LOAD_DIRECT) {
+		r = mes_v11_0_load_microcode(adev, AMDGPU_MES_KIQ_PIPE);
+		if (r) {
+			DRM_ERROR("failed to load MES kiq fw, r=%d\n", r);
+			return r;
+		}
+
+		r = mes_v11_0_load_microcode(adev, AMDGPU_MES_SCHED_PIPE);
+		if (r) {
+			DRM_ERROR("failed to load MES fw, r=%d\n", r);
+			return r;
+		}
+	}
+
+	mes_v11_0_enable(adev, true);
+
+	mes_v11_0_kiq_setting(&adev->gfx.kiq.ring);
+
+	r = mes_v11_0_queue_init(adev, AMDGPU_MES_KIQ_PIPE);
+	if (r)
+		goto failure;
+
+	return r;
+
+failure:
+	mes_v11_0_hw_fini(adev);
+	return r;
+}
+
+static int mes_v11_0_kiq_hw_fini(struct amdgpu_device *adev)
+{
+	mes_v11_0_enable(adev, false);
+	return 0;
+}
+
+static int mes_v11_0_hw_init(void *handle)
+{
+	int r;
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	if (!adev->enable_mes_kiq) {
+		if (adev->firmware.load_type == AMDGPU_FW_LOAD_DIRECT) {
+			r = mes_v11_0_load_microcode(adev,
+					     AMDGPU_MES_SCHED_PIPE);
+			if (r) {
+				DRM_ERROR("failed to MES fw, r=%d\n", r);
+				return r;
+			}
+		}
+
+		mes_v11_0_enable(adev, true);
+	}
+
+	r = mes_v11_0_queue_init(adev, AMDGPU_MES_SCHED_PIPE);
+	if (r)
+		goto failure;
+
+	r = mes_v11_0_set_hw_resources(&adev->mes);
+	if (r)
+		goto failure;
+
+	r = mes_v11_0_query_sched_status(&adev->mes);
+	if (r) {
+		DRM_ERROR("MES is busy\n");
+		goto failure;
+	}
+
+	/*
+	 * Disable KIQ ring usage from the driver once MES is enabled.
+	 * MES uses KIQ ring exclusively so driver cannot access KIQ ring
+	 * with MES enabled.
+	 */
+	adev->gfx.kiq.ring.sched.ready = false;
+
+	return 0;
+
+failure:
+	mes_v11_0_hw_fini(adev);
+	return r;
+}
+
+static int mes_v11_0_hw_fini(void *handle)
+{
+	return 0;
+}
+
+static int mes_v11_0_suspend(void *handle)
+{
+	int r;
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	r = amdgpu_mes_suspend(adev);
+	if (r)
+		return r;
+
+	return mes_v11_0_hw_fini(adev);
+}
+
+static int mes_v11_0_resume(void *handle)
+{
+	int r;
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	r = mes_v11_0_hw_init(adev);
+	if (r)
+		return r;
+
+	return amdgpu_mes_resume(adev);
+}
+
+static int mes_v11_0_late_init(void *handle)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	amdgpu_mes_self_test(adev);
+
+	return 0;
+}
+
+static const struct amd_ip_funcs mes_v11_0_ip_funcs = {
+	.name = "mes_v11_0",
+	.late_init = mes_v11_0_late_init,
+	.sw_init = mes_v11_0_sw_init,
+	.sw_fini = mes_v11_0_sw_fini,
+	.hw_init = mes_v11_0_hw_init,
+	.hw_fini = mes_v11_0_hw_fini,
+	.suspend = mes_v11_0_suspend,
+	.resume = mes_v11_0_resume,
+};
+
+const struct amdgpu_ip_block_version mes_v11_0_ip_block = {
+	.type = AMD_IP_BLOCK_TYPE_MES,
+	.major = 11,
+	.minor = 0,
+	.rev = 0,
+	.funcs = &mes_v11_0_ip_funcs,
+};
diff --git a/drivers/gpu/drm/amd/amdgpu/mes_v11_0.h b/drivers/gpu/drm/amd/amdgpu/mes_v11_0.h
new file mode 100644
index 000000000000..b3519e1df2b2
--- /dev/null
+++ b/drivers/gpu/drm/amd/amdgpu/mes_v11_0.h
@@ -0,0 +1,29 @@
+/*
+ * Copyright 2021 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __MES_V11_0_H__
+#define __MES_V11_0_H__
+
+extern const struct amdgpu_ip_block_version mes_v11_0_ip_block;
+
+#endif
diff --git a/drivers/gpu/drm/amd/include/mes_v11_api_def.h b/drivers/gpu/drm/amd/include/mes_v11_api_def.h
new file mode 100644
index 000000000000..e30064477d82
--- /dev/null
+++ b/drivers/gpu/drm/amd/include/mes_v11_api_def.h
@@ -0,0 +1,579 @@
+/*
+ * Copyright 2022 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __MES_API_DEF_H__
+#define __MES_API_DEF_H__
+
+#pragma pack(push, 4)
+
+#define MES_API_VERSION 1
+
+/* The driver submits each API command as a single frame; the frame size is
+ * the same for every API to ease debugging and parsing of the ring buffer.
+ */
+enum { API_FRAME_SIZE_IN_DWORDS = 64 };
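+/* Every packet union below pads itself to this frame size via its
+ * max_dwords_in_api[API_FRAME_SIZE_IN_DWORDS] member.
+ */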
+
+/* To avoid a command in the scheduler context being overwritten whenever
+ * multiple interrupts come in, another queue is created.
+ */
+enum { API_NUMBER_OF_COMMAND_MAX = 32 };
+
+enum MES_API_TYPE {
+	MES_API_TYPE_SCHEDULER = 1,
+	MES_API_TYPE_MAX
+};
+
+enum MES_SCH_API_OPCODE {
+	MES_SCH_API_SET_HW_RSRC			= 0,
+	MES_SCH_API_SET_SCHEDULING_CONFIG	= 1, /* aggregated db, quantums, etc */
+	MES_SCH_API_ADD_QUEUE			= 2,
+	MES_SCH_API_REMOVE_QUEUE		= 3,
+	MES_SCH_API_PERFORM_YIELD		= 4,
+	MES_SCH_API_SET_GANG_PRIORITY_LEVEL	= 5,
+	MES_SCH_API_SUSPEND			= 6,
+	MES_SCH_API_RESUME			= 7,
+	MES_SCH_API_RESET			= 8,
+	MES_SCH_API_SET_LOG_BUFFER		= 9,
+	MES_SCH_API_CHANGE_GANG_PRORITY		= 10,
+	MES_SCH_API_QUERY_SCHEDULER_STATUS	= 11,
+	MES_SCH_API_PROGRAM_GDS			= 12,
+	MES_SCH_API_SET_DEBUG_VMID		= 13,
+	MES_SCH_API_MISC			= 14,
+	MES_SCH_API_UPDATE_ROOT_PAGE_TABLE      = 15,
+	MES_SCH_API_AMD_LOG                     = 16,
+	MES_SCH_API_MAX				= 0xFF
+};
+
+union MES_API_HEADER {
+	struct {
+		uint32_t type		: 4; /* 0 - Invalid; 1 - Scheduling; 2 - TBD */
+		uint32_t opcode		: 8;
+		uint32_t dwsize		: 8; /* including header */
+		uint32_t reserved	: 12;
+	};
+
+	uint32_t	u32All;
+};
+
+enum MES_AMD_PRIORITY_LEVEL {
+	AMD_PRIORITY_LEVEL_LOW		= 0,
+	AMD_PRIORITY_LEVEL_NORMAL	= 1,
+	AMD_PRIORITY_LEVEL_MEDIUM	= 2,
+	AMD_PRIORITY_LEVEL_HIGH		= 3,
+	AMD_PRIORITY_LEVEL_REALTIME	= 4,
+	AMD_PRIORITY_NUM_LEVELS
+};
+
+enum MES_QUEUE_TYPE {
+	MES_QUEUE_TYPE_GFX,
+	MES_QUEUE_TYPE_COMPUTE,
+	MES_QUEUE_TYPE_SDMA,
+	MES_QUEUE_TYPE_MAX,
+};
+
+struct MES_API_STATUS {
+	uint64_t	api_completion_fence_addr;
+	uint64_t	api_completion_fence_value;
+};
+
+enum { MAX_COMPUTE_PIPES = 8 };
+enum { MAX_GFX_PIPES = 2 };
+enum { MAX_SDMA_PIPES = 2 };
+
+enum { MAX_COMPUTE_HQD_PER_PIPE = 8 };
+enum { MAX_GFX_HQD_PER_PIPE = 8 };
+enum { MAX_SDMA_HQD_PER_PIPE = 10 };
+enum { MAX_SDMA_HQD_PER_PIPE_11_0   = 8 };
+
+enum { MAX_QUEUES_IN_A_GANG = 8 };
+
+enum VM_HUB_TYPE {
+	VM_HUB_TYPE_GC = 0,
+	VM_HUB_TYPE_MM = 1,
+	VM_HUB_TYPE_MAX,
+};
+
+enum { VMID_INVALID = 0xffff };
+
+enum { MAX_VMID_GCHUB = 16 };
+enum { MAX_VMID_MMHUB = 16 };
+
+enum SET_DEBUG_VMID_OPERATIONS {
+	DEBUG_VMID_OP_PROGRAM = 0,
+	DEBUG_VMID_OP_ALLOCATE = 1,
+	DEBUG_VMID_OP_RELEASE = 2
+};
+
+enum MES_LOG_OPERATION {
+	MES_LOG_OPERATION_CONTEXT_STATE_CHANGE = 0,
+	MES_LOG_OPERATION_QUEUE_NEW_WORK = 1,
+	MES_LOG_OPERATION_QUEUE_UNWAIT_SYNC_OBJECT = 2,
+	MES_LOG_OPERATION_QUEUE_NO_MORE_WORK = 3,
+	MES_LOG_OPERATION_QUEUE_WAIT_SYNC_OBJECT = 4,
+	MES_LOG_OPERATION_QUEUE_INVALID = 0xF,
+};
+
+enum MES_LOG_CONTEXT_STATE {
+	MES_LOG_CONTEXT_STATE_IDLE		= 0,
+	MES_LOG_CONTEXT_STATE_RUNNING		= 1,
+	MES_LOG_CONTEXT_STATE_READY		= 2,
+	MES_LOG_CONTEXT_STATE_READY_STANDBY	= 3,
+	MES_LOG_CONTEXT_STATE_INVALID           = 0xF,
+};
+
+struct MES_LOG_CONTEXT_STATE_CHANGE {
+	void				*h_context;
+	enum MES_LOG_CONTEXT_STATE	new_context_state;
+};
+
+struct MES_LOG_QUEUE_NEW_WORK {
+	uint64_t                   h_queue;
+	uint64_t                   reserved;
+};
+
+struct MES_LOG_QUEUE_UNWAIT_SYNC_OBJECT {
+	uint64_t                   h_queue;
+	uint64_t                   h_sync_object;
+};
+
+struct MES_LOG_QUEUE_NO_MORE_WORK {
+	uint64_t                   h_queue;
+	uint64_t                   reserved;
+};
+
+struct MES_LOG_QUEUE_WAIT_SYNC_OBJECT {
+	uint64_t                   h_queue;
+	uint64_t                   h_sync_object;
+};
+
+struct MES_LOG_ENTRY_HEADER {
+	uint32_t	first_free_entry_index;
+	uint32_t	wraparound_count;
+	uint64_t	number_of_entries;
+	uint64_t	reserved[2];
+};
+
+struct MES_LOG_ENTRY_DATA {
+	uint64_t	gpu_time_stamp;
+	uint32_t	operation_type; /* operation_type is of MES_LOG_OPERATION type */
+	uint32_t	reserved_operation_type_bits;
+	union {
+		struct MES_LOG_CONTEXT_STATE_CHANGE     context_state_change;
+		struct MES_LOG_QUEUE_NEW_WORK           queue_new_work;
+		struct MES_LOG_QUEUE_UNWAIT_SYNC_OBJECT queue_unwait_sync_object;
+		struct MES_LOG_QUEUE_NO_MORE_WORK       queue_no_more_work;
+		struct MES_LOG_QUEUE_WAIT_SYNC_OBJECT   queue_wait_sync_object;
+		uint64_t                                all[2];
+	};
+};
+
+struct MES_LOG_BUFFER {
+	struct MES_LOG_ENTRY_HEADER	header;
+	struct MES_LOG_ENTRY_DATA	entries[1];
+};
+
+enum MES_SWIP_TO_HWIP_DEF {
+	MES_MAX_HWIP_SEGMENT = 6,
+};
+
+union MESAPI_SET_HW_RESOURCES {
+	struct {
+		union MES_API_HEADER	header;
+		uint32_t		vmid_mask_mmhub;
+		uint32_t		vmid_mask_gfxhub;
+		uint32_t		gds_size;
+		uint32_t		paging_vmid;
+		uint32_t		compute_hqd_mask[MAX_COMPUTE_PIPES];
+		uint32_t		gfx_hqd_mask[MAX_GFX_PIPES];
+		uint32_t		sdma_hqd_mask[MAX_SDMA_PIPES];
+		uint32_t		aggregated_doorbells[AMD_PRIORITY_NUM_LEVELS];
+		uint64_t		g_sch_ctx_gpu_mc_ptr;
+		uint64_t		query_status_fence_gpu_mc_ptr;
+		uint32_t		gc_base[MES_MAX_HWIP_SEGMENT];
+		uint32_t		mmhub_base[MES_MAX_HWIP_SEGMENT];
+		uint32_t		osssys_base[MES_MAX_HWIP_SEGMENT];
+		struct MES_API_STATUS	api_status;
+		union {
+			struct {
+				uint32_t disable_reset	: 1;
+				uint32_t use_different_vmid_compute : 1;
+				uint32_t disable_mes_log   : 1;
+				uint32_t apply_mmhub_pgvm_invalidate_ack_loss_wa : 1;
+				uint32_t apply_grbm_remote_register_dummy_read_wa : 1;
+				uint32_t second_gfx_pipe_enabled : 1;
+				uint32_t enable_level_process_quantum_check : 1;
+				uint32_t reserved	: 25;
+			};
+			uint32_t	uint32_t_all;
+		};
+	};
+
+	uint32_t	max_dwords_in_api[API_FRAME_SIZE_IN_DWORDS];
+};
+
+union MESAPI__ADD_QUEUE {
+	struct {
+		union MES_API_HEADER		header;
+		uint32_t			process_id;
+		uint64_t			page_table_base_addr;
+		uint64_t			process_va_start;
+		uint64_t			process_va_end;
+		uint64_t			process_quantum;
+		uint64_t			process_context_addr;
+		uint64_t			gang_quantum;
+		uint64_t			gang_context_addr;
+		uint32_t			inprocess_gang_priority;
+		enum MES_AMD_PRIORITY_LEVEL	gang_global_priority_level;
+		uint32_t			doorbell_offset;
+		uint64_t			mqd_addr;
+		uint64_t			wptr_addr;
+		uint64_t                        h_context;
+		uint64_t                        h_queue;
+		enum MES_QUEUE_TYPE		queue_type;
+		uint32_t			gds_base;
+		uint32_t			gds_size;
+		uint32_t			gws_base;
+		uint32_t			gws_size;
+		uint32_t			oa_mask;
+		uint64_t                        trap_handler_addr;
+		uint32_t                        vm_context_cntl;
+
+		struct {
+			uint32_t paging			: 1;
+			uint32_t debug_vmid		: 4;
+			uint32_t program_gds		: 1;
+			uint32_t is_gang_suspended	: 1;
+			uint32_t is_tmz_queue		: 1;
+			uint32_t map_kiq_utility_queue  : 1;
+			uint32_t reserved		: 23;
+		};
+		struct MES_API_STATUS		api_status;
+		uint64_t                        tma_addr;
+	};
+
+	uint32_t	max_dwords_in_api[API_FRAME_SIZE_IN_DWORDS];
+};
+
+union MESAPI__REMOVE_QUEUE {
+	struct {
+		union MES_API_HEADER	header;
+		uint32_t		doorbell_offset;
+		uint64_t		gang_context_addr;
+
+		struct {
+			uint32_t unmap_legacy_gfx_queue   : 1;
+			uint32_t unmap_kiq_utility_queue  : 1;
+			uint32_t preempt_legacy_gfx_queue : 1;
+			uint32_t reserved                 : 29;
+		};
+		struct MES_API_STATUS	    api_status;
+
+		uint32_t                    pipe_id;
+		uint32_t                    queue_id;
+
+		uint64_t                    tf_addr;
+		uint32_t                    tf_data;
+	};
+
+	uint32_t	max_dwords_in_api[API_FRAME_SIZE_IN_DWORDS];
+};
+
+union MESAPI__SET_SCHEDULING_CONFIG {
+	struct {
+		union MES_API_HEADER	header;
+		/* Grace period when preempting another priority band for this
+		 * priority band. The value for idle priority band is ignored,
+		 * as it never preempts other bands.
+		 */
+		uint64_t		grace_period_other_levels[AMD_PRIORITY_NUM_LEVELS];
+		/* Default quantum for scheduling across processes within
+		 * a priority band.
+		 */
+		uint64_t		process_quantum_for_level[AMD_PRIORITY_NUM_LEVELS];
+		/* Default grace period for processes that preempt each other
+		 * within a priority band.
+		 */
+		uint64_t		process_grace_period_same_level[AMD_PRIORITY_NUM_LEVELS];
+		/* For normal level this field specifies the target GPU
+		 * percentage in situations when it's starved by the high level.
+		 * Valid values are between 0 and 50, with the default being 10.
+		 */
+		uint32_t		normal_yield_percent;
+		struct MES_API_STATUS	api_status;
+	};
+
+	uint32_t	max_dwords_in_api[API_FRAME_SIZE_IN_DWORDS];
+};
+
+union MESAPI__PERFORM_YIELD {
+	struct {
+		union MES_API_HEADER	header;
+		uint32_t		dummy;
+		struct MES_API_STATUS	api_status;
+	};
+
+	uint32_t	max_dwords_in_api[API_FRAME_SIZE_IN_DWORDS];
+};
+
+union MESAPI__CHANGE_GANG_PRIORITY_LEVEL {
+	struct {
+		union MES_API_HEADER		header;
+		uint32_t			inprocess_gang_priority;
+		enum MES_AMD_PRIORITY_LEVEL	gang_global_priority_level;
+		uint64_t			gang_quantum;
+		uint64_t			gang_context_addr;
+		struct MES_API_STATUS		api_status;
+	};
+
+	uint32_t	max_dwords_in_api[API_FRAME_SIZE_IN_DWORDS];
+};
+
+union MESAPI__SUSPEND {
+	struct {
+		union MES_API_HEADER	header;
+		/* true - suspend all gangs; false - suspend a specific gang */
+		struct {
+			uint32_t suspend_all_gangs	: 1;
+			uint32_t reserved		: 31;
+		};
+		/* gang_context_addr is valid only if suspend_all_gangs = false */
+		uint64_t		gang_context_addr;
+
+		uint64_t		suspend_fence_addr;
+		uint32_t		suspend_fence_value;
+
+		struct MES_API_STATUS	api_status;
+	};
+
+	uint32_t	max_dwords_in_api[API_FRAME_SIZE_IN_DWORDS];
+};
+
+union MESAPI__RESUME {
+	struct {
+		union MES_API_HEADER	header;
+		/* true - resume all gangs; false - resume the specified gang */
+		struct {
+			uint32_t resume_all_gangs	: 1;
+			uint32_t reserved		: 31;
+		};
+		/* valid only if resume_all_gangs = false */
+		uint64_t		gang_context_addr;
+
+		struct MES_API_STATUS	api_status;
+	};
+
+	uint32_t	max_dwords_in_api[API_FRAME_SIZE_IN_DWORDS];
+};
+
+union MESAPI__RESET {
+	struct {
+		union MES_API_HEADER		header;
+
+		struct {
+			/* Only reset the queue given by doorbell_offset (not entire gang) */
+			uint32_t                reset_queue_only : 1;
+			/* Hang detection first then reset any queues that are hung */
+			uint32_t                hang_detect_then_reset : 1;
+			/* Only do hang detection (no reset) */
+			uint32_t                hang_detect_only : 1;
+			/* Reset HP and LP kernel queues not managed by MES */
+			uint32_t                reset_legacy_gfx : 1;
+			uint32_t                reserved : 28;
+		};
+
+		uint64_t			gang_context_addr;
+
+		/* valid only if reset_queue_only = true */
+		uint32_t			doorbell_offset;
+
+		/* valid only if hang_detect_then_reset = true */
+		uint64_t			doorbell_offset_addr;
+		enum MES_QUEUE_TYPE		queue_type;
+
+		/* valid only if reset_legacy_gfx = true */
+		uint32_t			pipe_id_lp;
+		uint32_t			queue_id_lp;
+		uint32_t			vmid_id_lp;
+		uint64_t			mqd_mc_addr_lp;
+		uint32_t			doorbell_offset_lp;
+		uint64_t			wptr_addr_lp;
+
+		uint32_t			pipe_id_hp;
+		uint32_t			queue_id_hp;
+		uint32_t			vmid_id_hp;
+		uint64_t			mqd_mc_addr_hp;
+		uint32_t			doorbell_offset_hp;
+		uint64_t			wptr_addr_hp;
+
+		struct MES_API_STATUS		api_status;
+	};
+
+	uint32_t	max_dwords_in_api[API_FRAME_SIZE_IN_DWORDS];
+};
+
+union MESAPI__SET_LOGGING_BUFFER {
+	struct {
+		union MES_API_HEADER	header;
+		/* There are separate log buffers for each queue type */
+		enum MES_QUEUE_TYPE	log_type;
+		/* Log buffer GPU Address */
+		uint64_t		logging_buffer_addr;
+		/* number of entries in the log buffer */
+		uint32_t		number_of_entries;
+		/* Entry index at which CPU interrupt needs to be signalled */
+		uint32_t		interrupt_entry;
+
+		struct MES_API_STATUS	api_status;
+	};
+
+	uint32_t	max_dwords_in_api[API_FRAME_SIZE_IN_DWORDS];
+};
+
+union MESAPI__QUERY_MES_STATUS {
+	struct {
+		union MES_API_HEADER	header;
+		bool			mes_healthy; /* 0 - not healthy, 1 - healthy */
+		struct MES_API_STATUS	api_status;
+	};
+
+	uint32_t	max_dwords_in_api[API_FRAME_SIZE_IN_DWORDS];
+};
+
+union MESAPI__PROGRAM_GDS {
+	struct {
+		union MES_API_HEADER	header;
+		uint64_t		process_context_addr;
+		uint32_t		gds_base;
+		uint32_t		gds_size;
+		uint32_t		gws_base;
+		uint32_t		gws_size;
+		uint32_t		oa_mask;
+		struct MES_API_STATUS	api_status;
+	};
+
+	uint32_t	max_dwords_in_api[API_FRAME_SIZE_IN_DWORDS];
+};
+
+union MESAPI__SET_DEBUG_VMID {
+	struct {
+		union MES_API_HEADER	header;
+		struct MES_API_STATUS	api_status;
+		union {
+			struct {
+				uint32_t use_gds	: 1;
+				uint32_t operation      : 2;
+				uint32_t reserved       : 29;
+			} flags;
+			uint32_t	u32All;
+		};
+		uint32_t		reserved;
+		uint32_t		debug_vmid;
+		uint64_t		process_context_addr;
+		uint64_t		page_table_base_addr;
+		uint64_t		process_va_start;
+		uint64_t		process_va_end;
+		uint32_t		gds_base;
+		uint32_t		gds_size;
+		uint32_t		gws_base;
+		uint32_t		gws_size;
+		uint32_t		oa_mask;
+
+		/* output addr of the acquired vmid value */
+		uint64_t                output_addr;
+	};
+
+	uint32_t	max_dwords_in_api[API_FRAME_SIZE_IN_DWORDS];
+};
+
+enum MESAPI_MISC_OPCODE {
+	MESAPI_MISC__MODIFY_REG,
+	MESAPI_MISC__INV_GART,
+	MESAPI_MISC__QUERY_STATUS,
+	MESAPI_MISC__MAX,
+};
+
+enum MODIFY_REG_SUBCODE {
+	MODIFY_REG__OVERWRITE,
+	MODIFY_REG__RMW_OR,
+	MODIFY_REG__RMW_AND,
+	MODIFY_REG__MAX,
+};
+
+enum { MISC_DATA_MAX_SIZE_IN_DWORDS = 20 };
+
+struct MODIFY_REG {
+	enum MODIFY_REG_SUBCODE   subcode;
+	uint32_t                  reg_offset;
+	uint32_t                  reg_value;
+};
+
+struct INV_GART {
+	uint64_t                  inv_range_va_start;
+	uint64_t                  inv_range_size;
+};
+
+struct QUERY_STATUS {
+	uint32_t context_id;
+};
+
+union MESAPI__MISC {
+	struct {
+		union MES_API_HEADER	header;
+		enum MESAPI_MISC_OPCODE	opcode;
+		struct MES_API_STATUS	api_status;
+
+		union {
+			struct		MODIFY_REG modify_reg;
+			struct		INV_GART inv_gart;
+			struct		QUERY_STATUS query_status;
+			uint32_t	data[MISC_DATA_MAX_SIZE_IN_DWORDS];
+		};
+	};
+
+	uint32_t	max_dwords_in_api[API_FRAME_SIZE_IN_DWORDS];
+};
+
+union MESAPI__UPDATE_ROOT_PAGE_TABLE {
+	struct {
+		union MES_API_HEADER        header;
+		uint64_t                    page_table_base_addr;
+		uint64_t                    process_context_addr;
+		struct MES_API_STATUS       api_status;
+	};
+
+	uint32_t max_dwords_in_api[API_FRAME_SIZE_IN_DWORDS];
+};
+
+union MESAPI_AMD_LOG {
+	struct {
+		union MES_API_HEADER        header;
+		uint64_t                    p_buffer_memory;
+		uint64_t                    p_buffer_size_used;
+		struct MES_API_STATUS       api_status;
+	};
+
+	uint32_t max_dwords_in_api[API_FRAME_SIZE_IN_DWORDS];
+};
+
+#pragma pack(pop)
+#endif
-- 
2.35.1


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH 21/29] drm/amdgpu: add init support for GFX11 (v2)
  2022-04-29 18:01 [PATCH 00/29] Add support for GC 11.0 Alex Deucher
                   ` (19 preceding siblings ...)
  2022-04-29 18:02 ` [PATCH 20/29] drm/amdgpu/mes11: initiate mes v11 support Alex Deucher
@ 2022-04-29 18:02 ` Alex Deucher
  2022-04-29 18:02 ` [PATCH 22/29] drm/amdkfd: Add KFD support for soc21 v3 Alex Deucher
                   ` (7 subsequent siblings)
  28 siblings, 0 replies; 30+ messages in thread
From: Alex Deucher @ 2022-04-29 18:02 UTC (permalink / raw)
  To: amd-gfx; +Cc: Alex Deucher, Wenhui Sheng, Hawking Zhang

From: Hawking Zhang <Hawking.Zhang@amd.com>

Add initial support for GC version 11.  GC is
the graphics and compute block on the GPU.

v1: add initial gfx11 support (Wenhui)
v2: switch to new amdgpu_gfx_is_high_priority_compute_queue
    interface (Hawking)
v3: fix num_mec (Alex)

Signed-off-by: Wenhui Sheng <Wenhui.Sheng@amd.com>
Reviewed-by: Hawking Zhang <Hawking.Zhang@amd.com>
Signed-off-by: Hawking Zhang <Hawking.Zhang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/Makefile     |    3 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.h |   13 +
 drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c  | 6320 +++++++++++++++++++++++
 drivers/gpu/drm/amd/amdgpu/gfx_v11_0.h  |   29 +
 4 files changed, 6364 insertions(+), 1 deletion(-)
 create mode 100644 drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
 create mode 100644 drivers/gpu/drm/amd/amdgpu/gfx_v11_0.h

diff --git a/drivers/gpu/drm/amd/amdgpu/Makefile b/drivers/gpu/drm/amd/amdgpu/Makefile
index 2c490a5c2a87..6caf2390a889 100644
--- a/drivers/gpu/drm/amd/amdgpu/Makefile
+++ b/drivers/gpu/drm/amd/amdgpu/Makefile
@@ -131,7 +131,8 @@ amdgpu-y += \
 	gfx_v9_4.o \
 	gfx_v9_4_2.o \
 	gfx_v10_0.o \
-	imu_v11_0.o
+	imu_v11_0.o \
+	gfx_v11_0.o
 
 # add async DMA block
 amdgpu-y += \
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.h
index 15749016d8cf..45522609d4b4 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.h
@@ -57,6 +57,9 @@ struct amdgpu_mec {
 	u64			hpd_eop_gpu_addr;
 	struct amdgpu_bo	*mec_fw_obj;
 	u64			mec_fw_gpu_addr;
+	struct amdgpu_bo	*mec_fw_data_obj;
+	u64			mec_fw_data_gpu_addr;
+
 	u32 num_mec;
 	u32 num_pipe_per_mec;
 	u32 num_queue_per_pipe;
@@ -245,6 +248,10 @@ struct amdgpu_pfp {
 	struct amdgpu_bo		*pfp_fw_obj;
 	uint64_t			pfp_fw_gpu_addr;
 	uint32_t			*pfp_fw_ptr;
+
+	struct amdgpu_bo		*pfp_fw_data_obj;
+	uint64_t			pfp_fw_data_gpu_addr;
+	uint32_t			*pfp_fw_data_ptr;
 };
 
 struct amdgpu_ce {
@@ -257,6 +264,11 @@ struct amdgpu_me {
 	struct amdgpu_bo		*me_fw_obj;
 	uint64_t			me_fw_gpu_addr;
 	uint32_t			*me_fw_ptr;
+
+	struct amdgpu_bo		*me_fw_data_obj;
+	uint64_t			me_fw_data_gpu_addr;
+	uint32_t			*me_fw_data_ptr;
+
 	uint32_t			num_me;
 	uint32_t			num_pipe_per_me;
 	uint32_t			num_queue_per_pipe;
@@ -277,6 +289,7 @@ struct amdgpu_gfx {
 	struct amdgpu_kiq		kiq;
 	struct amdgpu_imu		imu;
 	struct amdgpu_scratch		scratch;
+	bool				rs64_enable; /* firmware format */
 	const struct firmware		*me_fw;	/* ME firmware */
 	uint32_t			me_fw_version;
 	const struct firmware		*pfp_fw; /* PFP firmware */
diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
new file mode 100644
index 000000000000..7a34802544c0
--- /dev/null
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
@@ -0,0 +1,6320 @@
+/*
+ * Copyright 2021 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+#include <linux/delay.h>
+#include <linux/kernel.h>
+#include <linux/firmware.h>
+#include <linux/module.h>
+#include <linux/pci.h>
+#include "amdgpu.h"
+#include "amdgpu_gfx.h"
+#include "amdgpu_psp.h"
+#include "amdgpu_smu.h"
+#include "amdgpu_atomfirmware.h"
+#include "imu_v11_0.h"
+#include "soc21.h"
+#include "nvd.h"
+
+#include "gc/gc_11_0_0_offset.h"
+#include "gc/gc_11_0_0_sh_mask.h"
+#include "smuio/smuio_13_0_6_offset.h"
+#include "smuio/smuio_13_0_6_sh_mask.h"
+#include "navi10_enum.h"
+#include "ivsrcid/gfx/irqsrcs_gfx_11_0_0.h"
+
+#include "soc15.h"
+#include "soc15d.h"
+#include "clearstate_gfx11.h"
+#include "v11_structs.h"
+#include "gfx_v11_0.h"
+#include "nbio_v4_3.h"
+#include "mes_v11_0.h"
+
+#define GFX11_NUM_GFX_RINGS		1
+#define GFX11_MEC_HPD_SIZE	2048
+
+#define RLCG_UCODE_LOADING_START_ADDRESS	0x00002000L
+
+MODULE_FIRMWARE("amdgpu/gc_11_0_0_pfp.bin");
+MODULE_FIRMWARE("amdgpu/gc_11_0_0_me.bin");
+MODULE_FIRMWARE("amdgpu/gc_11_0_0_mec.bin");
+MODULE_FIRMWARE("amdgpu/gc_11_0_0_rlc.bin");
+MODULE_FIRMWARE("amdgpu/gc_11_0_0_toc.bin");
+
+static const struct soc15_reg_golden golden_settings_gc_11_0[] =
+{
+	/* Pending on emulation bring up */
+};
+
+static const struct soc15_reg_golden golden_settings_gc_11_0_0[] =
+{
+	/* Pending on emulation bring up */
+};
+
+static const struct soc15_reg_golden golden_settings_gc_rlc_spm_11_0[] =
+{
+	/* Pending on emulation bring up */
+};
+
+#define DEFAULT_SH_MEM_CONFIG \
+	((SH_MEM_ADDRESS_MODE_64 << SH_MEM_CONFIG__ADDRESS_MODE__SHIFT) | \
+	 (SH_MEM_ALIGNMENT_MODE_UNALIGNED << SH_MEM_CONFIG__ALIGNMENT_MODE__SHIFT) | \
+	 (3 << SH_MEM_CONFIG__INITIAL_INST_PREFETCH__SHIFT))
+
+static void gfx_v11_0_disable_gpa_mode(struct amdgpu_device *adev);
+static void gfx_v11_0_set_ring_funcs(struct amdgpu_device *adev);
+static void gfx_v11_0_set_irq_funcs(struct amdgpu_device *adev);
+static void gfx_v11_0_set_gds_init(struct amdgpu_device *adev);
+static void gfx_v11_0_set_rlc_funcs(struct amdgpu_device *adev);
+static void gfx_v11_0_set_mqd_funcs(struct amdgpu_device *adev);
+static void gfx_v11_0_set_imu_funcs(struct amdgpu_device *adev);
+static int gfx_v11_0_get_cu_info(struct amdgpu_device *adev,
+                                 struct amdgpu_cu_info *cu_info);
+static uint64_t gfx_v11_0_get_gpu_clock_counter(struct amdgpu_device *adev);
+static void gfx_v11_0_select_se_sh(struct amdgpu_device *adev, u32 se_num,
+				   u32 sh_num, u32 instance);
+static u32 gfx_v11_0_get_wgp_active_bitmap_per_sh(struct amdgpu_device *adev);
+
+static void gfx_v11_0_ring_emit_de_meta(struct amdgpu_ring *ring, bool resume);
+static void gfx_v11_0_ring_emit_frame_cntl(struct amdgpu_ring *ring, bool start, bool secure);
+static void gfx_v11_0_ring_emit_wreg(struct amdgpu_ring *ring, uint32_t reg,
+				     uint32_t val);
+static int gfx_v11_0_wait_for_rlc_autoload_complete(struct amdgpu_device *adev);
+static void gfx_v11_0_ring_invalidate_tlbs(struct amdgpu_ring *ring,
+					   uint16_t pasid, uint32_t flush_type,
+					   bool all_hub, uint8_t dst_sel);
+
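+/* Issue a SET_RESOURCES packet on the KIQ ring to hand it the bitmap of
+ * compute queues it is allowed to manage; the GWS, OAC and GDS resources
+ * are left at zero here.
+ */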
+static void gfx11_kiq_set_resources(struct amdgpu_ring *kiq_ring, uint64_t queue_mask)
+{
+	amdgpu_ring_write(kiq_ring, PACKET3(PACKET3_SET_RESOURCES, 6));
+	amdgpu_ring_write(kiq_ring, PACKET3_SET_RESOURCES_VMID_MASK(0) |
+			  PACKET3_SET_RESOURCES_QUEUE_TYPE(0));	/* vmid_mask:0 queue_type:0 (KIQ) */
+	amdgpu_ring_write(kiq_ring, lower_32_bits(queue_mask));	/* queue mask lo */
+	amdgpu_ring_write(kiq_ring, upper_32_bits(queue_mask));	/* queue mask hi */
+	amdgpu_ring_write(kiq_ring, 0);	/* gws mask lo */
+	amdgpu_ring_write(kiq_ring, 0);	/* gws mask hi */
+	amdgpu_ring_write(kiq_ring, 0);	/* oac mask */
+	amdgpu_ring_write(kiq_ring, 0);	/* gds heap base:0, gds heap size:0 */
+}
+
+static void gfx11_kiq_map_queues(struct amdgpu_ring *kiq_ring,
+				 struct amdgpu_ring *ring)
+{
+	uint64_t mqd_addr = amdgpu_bo_gpu_offset(ring->mqd_obj);
+	uint64_t wptr_addr = ring->wptr_gpu_addr;
+	uint32_t eng_sel = ring->funcs->type == AMDGPU_RING_TYPE_GFX ? 4 : 0;
+
+	amdgpu_ring_write(kiq_ring, PACKET3(PACKET3_MAP_QUEUES, 5));
+	/* Q_sel:0, vmid:0, vidmem: 1, engine:0, num_Q:1*/
+	amdgpu_ring_write(kiq_ring, /* Q_sel: 0, vmid: 0, engine: 0, num_Q: 1 */
+			  PACKET3_MAP_QUEUES_QUEUE_SEL(0) | /* Queue_Sel */
+			  PACKET3_MAP_QUEUES_VMID(0) | /* VMID */
+			  PACKET3_MAP_QUEUES_QUEUE(ring->queue) |
+			  PACKET3_MAP_QUEUES_PIPE(ring->pipe) |
+			  PACKET3_MAP_QUEUES_ME((ring->me == 1 ? 0 : 1)) |
+			  PACKET3_MAP_QUEUES_QUEUE_TYPE(0) | /*queue_type: normal compute queue */
+			  PACKET3_MAP_QUEUES_ALLOC_FORMAT(0) | /* alloc format: all_on_one_pipe */
+			  PACKET3_MAP_QUEUES_ENGINE_SEL(eng_sel) |
+			  PACKET3_MAP_QUEUES_NUM_QUEUES(1)); /* num_queues: must be 1 */
+	amdgpu_ring_write(kiq_ring, PACKET3_MAP_QUEUES_DOORBELL_OFFSET(ring->doorbell_index));
+	amdgpu_ring_write(kiq_ring, lower_32_bits(mqd_addr));
+	amdgpu_ring_write(kiq_ring, upper_32_bits(mqd_addr));
+	amdgpu_ring_write(kiq_ring, lower_32_bits(wptr_addr));
+	amdgpu_ring_write(kiq_ring, upper_32_bits(wptr_addr));
+}
+
+static void gfx11_kiq_unmap_queues(struct amdgpu_ring *kiq_ring,
+				   struct amdgpu_ring *ring,
+				   enum amdgpu_unmap_queues_action action,
+				   u64 gpu_addr, u64 seq)
+{
+	struct amdgpu_device *adev = kiq_ring->adev;
+	uint32_t eng_sel = ring->funcs->type == AMDGPU_RING_TYPE_GFX ? 4 : 0;
+
+	if (!adev->gfx.kiq.ring.sched.ready) {
+		amdgpu_mes_unmap_legacy_queue(adev, ring, action, gpu_addr, seq);
+		return;
+	}
+
+	amdgpu_ring_write(kiq_ring, PACKET3(PACKET3_UNMAP_QUEUES, 4));
+	amdgpu_ring_write(kiq_ring, /* Q_sel: 0, vmid: 0, engine: 0, num_Q: 1 */
+			  PACKET3_UNMAP_QUEUES_ACTION(action) |
+			  PACKET3_UNMAP_QUEUES_QUEUE_SEL(0) |
+			  PACKET3_UNMAP_QUEUES_ENGINE_SEL(eng_sel) |
+			  PACKET3_UNMAP_QUEUES_NUM_QUEUES(1));
+	amdgpu_ring_write(kiq_ring,
+		  PACKET3_UNMAP_QUEUES_DOORBELL_OFFSET0(ring->doorbell_index));
+
+	if (action == PREEMPT_QUEUES_NO_UNMAP) {
+		amdgpu_ring_write(kiq_ring, lower_32_bits(gpu_addr));
+		amdgpu_ring_write(kiq_ring, upper_32_bits(gpu_addr));
+		amdgpu_ring_write(kiq_ring, seq);
+	} else {
+		amdgpu_ring_write(kiq_ring, 0);
+		amdgpu_ring_write(kiq_ring, 0);
+		amdgpu_ring_write(kiq_ring, 0);
+	}
+}
+
+static void gfx11_kiq_query_status(struct amdgpu_ring *kiq_ring,
+				   struct amdgpu_ring *ring,
+				   u64 addr,
+				   u64 seq)
+{
+	uint32_t eng_sel = ring->funcs->type == AMDGPU_RING_TYPE_GFX ? 4 : 0;
+
+	amdgpu_ring_write(kiq_ring, PACKET3(PACKET3_QUERY_STATUS, 5));
+	amdgpu_ring_write(kiq_ring,
+			  PACKET3_QUERY_STATUS_CONTEXT_ID(0) |
+			  PACKET3_QUERY_STATUS_INTERRUPT_SEL(0) |
+			  PACKET3_QUERY_STATUS_COMMAND(2));
+	amdgpu_ring_write(kiq_ring, /* Q_sel: 0, vmid: 0, engine: 0, num_Q: 1 */
+			  PACKET3_QUERY_STATUS_DOORBELL_OFFSET(ring->doorbell_index) |
+			  PACKET3_QUERY_STATUS_ENG_SEL(eng_sel));
+	amdgpu_ring_write(kiq_ring, lower_32_bits(addr));
+	amdgpu_ring_write(kiq_ring, upper_32_bits(addr));
+	amdgpu_ring_write(kiq_ring, lower_32_bits(seq));
+	amdgpu_ring_write(kiq_ring, upper_32_bits(seq));
+}
+
+static void gfx11_kiq_invalidate_tlbs(struct amdgpu_ring *kiq_ring,
+				uint16_t pasid, uint32_t flush_type,
+				bool all_hub)
+{
+	gfx_v11_0_ring_invalidate_tlbs(kiq_ring, pasid, flush_type, all_hub, 1);
+}
+
+static const struct kiq_pm4_funcs gfx_v11_0_kiq_pm4_funcs = {
+	.kiq_set_resources = gfx11_kiq_set_resources,
+	.kiq_map_queues = gfx11_kiq_map_queues,
+	.kiq_unmap_queues = gfx11_kiq_unmap_queues,
+	.kiq_query_status = gfx11_kiq_query_status,
+	.kiq_invalidate_tlbs = gfx11_kiq_invalidate_tlbs,
+	.set_resources_size = 8,
+	.map_queues_size = 7,
+	.unmap_queues_size = 6,
+	.query_status_size = 7,
+	.invalidate_tlbs_size = 2,
+};
+
+static void gfx_v11_0_set_kiq_pm4_funcs(struct amdgpu_device *adev)
+{
+	adev->gfx.kiq.pmf = &gfx_v11_0_kiq_pm4_funcs;
+}
+
+static void gfx_v11_0_init_spm_golden_registers(struct amdgpu_device *adev)
+{
+	switch (adev->ip_versions[GC_HWIP][0]) {
+	case IP_VERSION(11, 0, 0):
+		soc15_program_register_sequence(adev,
+						golden_settings_gc_rlc_spm_11_0,
+						(const u32)ARRAY_SIZE(golden_settings_gc_rlc_spm_11_0));
+		break;
+	default:
+		break;
+	}
+}
+
+static void gfx_v11_0_init_golden_registers(struct amdgpu_device *adev)
+{
+	switch (adev->ip_versions[GC_HWIP][0]) {
+	case IP_VERSION(11, 0, 0):
+		soc15_program_register_sequence(adev,
+						golden_settings_gc_11_0,
+						(const u32)ARRAY_SIZE(golden_settings_gc_11_0));
+		soc15_program_register_sequence(adev,
+						golden_settings_gc_11_0_0,
+						(const u32)ARRAY_SIZE(golden_settings_gc_11_0_0));
+		break;
+	default:
+		break;
+	}
+	gfx_v11_0_init_spm_golden_registers(adev);
+}
+
+static void gfx_v11_0_scratch_init(struct amdgpu_device *adev)
+{
+	adev->gfx.scratch.num_reg = 8;
+	adev->gfx.scratch.reg_base = SOC15_REG_OFFSET(GC, 0, regSCRATCH_REG0);
+	adev->gfx.scratch.free_mask = (1u << adev->gfx.scratch.num_reg) - 1;
+}
+
+static void gfx_v11_0_write_data_to_reg(struct amdgpu_ring *ring, int eng_sel,
+				       bool wc, uint32_t reg, uint32_t val)
+{
+	amdgpu_ring_write(ring, PACKET3(PACKET3_WRITE_DATA, 3));
+	amdgpu_ring_write(ring, WRITE_DATA_ENGINE_SEL(eng_sel) |
+			  WRITE_DATA_DST_SEL(0) | (wc ? WR_CONFIRM : 0));
+	amdgpu_ring_write(ring, reg);
+	amdgpu_ring_write(ring, 0);
+	amdgpu_ring_write(ring, val);
+}
+
+static void gfx_v11_0_wait_reg_mem(struct amdgpu_ring *ring, int eng_sel,
+				  int mem_space, int opt, uint32_t addr0,
+				  uint32_t addr1, uint32_t ref, uint32_t mask,
+				  uint32_t inv)
+{
+	amdgpu_ring_write(ring, PACKET3(PACKET3_WAIT_REG_MEM, 5));
+	amdgpu_ring_write(ring,
+			  /* memory (1) or register (0) */
+			  (WAIT_REG_MEM_MEM_SPACE(mem_space) |
+			   WAIT_REG_MEM_OPERATION(opt) | /* wait */
+			   WAIT_REG_MEM_FUNCTION(3) |  /* equal */
+			   WAIT_REG_MEM_ENGINE(eng_sel)));
+
+	if (mem_space)
+		BUG_ON(addr0 & 0x3); /* Dword align */
+	amdgpu_ring_write(ring, addr0);
+	amdgpu_ring_write(ring, addr1);
+	amdgpu_ring_write(ring, ref);
+	amdgpu_ring_write(ring, mask);
+	amdgpu_ring_write(ring, inv); /* poll interval */
+}
+
+static int gfx_v11_0_ring_test_ring(struct amdgpu_ring *ring)
+{
+	struct amdgpu_device *adev = ring->adev;
+	uint32_t scratch;
+	uint32_t tmp = 0;
+	unsigned i;
+	int r;
+
+	r = amdgpu_gfx_scratch_get(adev, &scratch);
+	if (r) {
+		DRM_ERROR("amdgpu: cp failed to get scratch reg (%d).\n", r);
+		return r;
+	}
+
+	WREG32(scratch, 0xCAFEDEAD);
+
+	r = amdgpu_ring_alloc(ring, 5);
+	if (r) {
+		DRM_ERROR("amdgpu: cp failed to lock ring %d (%d).\n",
+			  ring->idx, r);
+		amdgpu_gfx_scratch_free(adev, scratch);
+		return r;
+	}
+
+	if (ring->funcs->type == AMDGPU_RING_TYPE_KIQ) {
+		gfx_v11_0_ring_emit_wreg(ring, scratch, 0xDEADBEEF);
+	} else {
+		amdgpu_ring_write(ring, PACKET3(PACKET3_SET_UCONFIG_REG, 1));
+		amdgpu_ring_write(ring, (scratch - PACKET3_SET_UCONFIG_REG_START));
+		amdgpu_ring_write(ring, 0xDEADBEEF);
+	}
+	amdgpu_ring_commit(ring);
+
+	for (i = 0; i < adev->usec_timeout; i++) {
+		tmp = RREG32(scratch);
+		if (tmp == 0xDEADBEEF)
+			break;
+		if (amdgpu_emu_mode == 1)
+			msleep(1);
+		else
+			udelay(1);
+	}
+
+	if (i >= adev->usec_timeout)
+		r = -ETIMEDOUT;
+
+	amdgpu_gfx_scratch_free(adev, scratch);
+
+	return r;
+}
+
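+/* Basic IB test: submit a small IB that writes 0xDEADBEEF to a writeback
+ * slot (or to the MES context padding area for MES queues) and poll for
+ * the value.  Skipped on the KIQ while the MES KIQ firmware lacks
+ * indirect buffer support.
+ */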
+static int gfx_v11_0_ring_test_ib(struct amdgpu_ring *ring, long timeout)
+{
+	struct amdgpu_device *adev = ring->adev;
+	struct amdgpu_ib ib;
+	struct dma_fence *f = NULL;
+	unsigned index;
+	uint64_t gpu_addr;
+	volatile uint32_t *cpu_ptr;
+	long r;
+
+	/* MES KIQ firmware doesn't support indirect buffers for now */
+	if (adev->enable_mes_kiq &&
+	    ring->funcs->type == AMDGPU_RING_TYPE_KIQ)
+		return 0;
+
+	memset(&ib, 0, sizeof(ib));
+
+	if (ring->is_mes_queue) {
+		uint32_t padding, offset;
+
+		offset = amdgpu_mes_ctx_get_offs(ring, AMDGPU_MES_CTX_IB_OFFS);
+		padding = amdgpu_mes_ctx_get_offs(ring,
+						  AMDGPU_MES_CTX_PADDING_OFFS);
+
+		ib.gpu_addr = amdgpu_mes_ctx_get_offs_gpu_addr(ring, offset);
+		ib.ptr = amdgpu_mes_ctx_get_offs_cpu_addr(ring, offset);
+
+		gpu_addr = amdgpu_mes_ctx_get_offs_gpu_addr(ring, padding);
+		cpu_ptr = amdgpu_mes_ctx_get_offs_cpu_addr(ring, padding);
+		*cpu_ptr = cpu_to_le32(0xCAFEDEAD);
+	} else {
+		r = amdgpu_device_wb_get(adev, &index);
+		if (r)
+			return r;
+
+		gpu_addr = adev->wb.gpu_addr + (index * 4);
+		adev->wb.wb[index] = cpu_to_le32(0xCAFEDEAD);
+		cpu_ptr = &adev->wb.wb[index];
+
+		r = amdgpu_ib_get(adev, NULL, 16, AMDGPU_IB_POOL_DIRECT, &ib);
+		if (r) {
+			DRM_ERROR("amdgpu: failed to get ib (%ld).\n", r);
+			goto err1;
+		}
+	}
+
+	ib.ptr[0] = PACKET3(PACKET3_WRITE_DATA, 3);
+	ib.ptr[1] = WRITE_DATA_DST_SEL(5) | WR_CONFIRM;
+	ib.ptr[2] = lower_32_bits(gpu_addr);
+	ib.ptr[3] = upper_32_bits(gpu_addr);
+	ib.ptr[4] = 0xDEADBEEF;
+	ib.length_dw = 5;
+
+	r = amdgpu_ib_schedule(ring, 1, &ib, NULL, &f);
+	if (r)
+		goto err2;
+
+	r = dma_fence_wait_timeout(f, false, timeout);
+	if (r == 0) {
+		r = -ETIMEDOUT;
+		goto err2;
+	} else if (r < 0) {
+		goto err2;
+	}
+
+	if (le32_to_cpu(*cpu_ptr) == 0xDEADBEEF)
+		r = 0;
+	else
+		r = -EINVAL;
+err2:
+	if (!ring->is_mes_queue)
+		amdgpu_ib_free(adev, &ib, NULL);
+	dma_fence_put(f);
+err1:
+	amdgpu_device_wb_free(adev, index);
+	return r;
+}
+
+static void gfx_v11_0_free_microcode(struct amdgpu_device *adev)
+{
+	release_firmware(adev->gfx.pfp_fw);
+	adev->gfx.pfp_fw = NULL;
+	release_firmware(adev->gfx.me_fw);
+	adev->gfx.me_fw = NULL;
+	release_firmware(adev->gfx.rlc_fw);
+	adev->gfx.rlc_fw = NULL;
+	release_firmware(adev->gfx.mec_fw);
+	adev->gfx.mec_fw = NULL;
+
+	kfree(adev->gfx.rlc.register_list_format);
+}
+
+static void gfx_v11_0_init_rlc_ext_microcode(struct amdgpu_device *adev)
+{
+	const struct rlc_firmware_header_v2_1 *rlc_hdr;
+
+	rlc_hdr = (const struct rlc_firmware_header_v2_1 *)adev->gfx.rlc_fw->data;
+	adev->gfx.rlc_srlc_fw_version = le32_to_cpu(rlc_hdr->save_restore_list_cntl_ucode_ver);
+	adev->gfx.rlc_srlc_feature_version = le32_to_cpu(rlc_hdr->save_restore_list_cntl_feature_ver);
+	adev->gfx.rlc.save_restore_list_cntl_size_bytes = le32_to_cpu(rlc_hdr->save_restore_list_cntl_size_bytes);
+	adev->gfx.rlc.save_restore_list_cntl = (u8 *)rlc_hdr + le32_to_cpu(rlc_hdr->save_restore_list_cntl_offset_bytes);
+	adev->gfx.rlc_srlg_fw_version = le32_to_cpu(rlc_hdr->save_restore_list_gpm_ucode_ver);
+	adev->gfx.rlc_srlg_feature_version = le32_to_cpu(rlc_hdr->save_restore_list_gpm_feature_ver);
+	adev->gfx.rlc.save_restore_list_gpm_size_bytes = le32_to_cpu(rlc_hdr->save_restore_list_gpm_size_bytes);
+	adev->gfx.rlc.save_restore_list_gpm = (u8 *)rlc_hdr + le32_to_cpu(rlc_hdr->save_restore_list_gpm_offset_bytes);
+	adev->gfx.rlc_srls_fw_version = le32_to_cpu(rlc_hdr->save_restore_list_srm_ucode_ver);
+	adev->gfx.rlc_srls_feature_version = le32_to_cpu(rlc_hdr->save_restore_list_srm_feature_ver);
+	adev->gfx.rlc.save_restore_list_srm_size_bytes = le32_to_cpu(rlc_hdr->save_restore_list_srm_size_bytes);
+	adev->gfx.rlc.save_restore_list_srm = (u8 *)rlc_hdr + le32_to_cpu(rlc_hdr->save_restore_list_srm_offset_bytes);
+	adev->gfx.rlc.reg_list_format_direct_reg_list_length =
+			le32_to_cpu(rlc_hdr->reg_list_format_direct_reg_list_length);
+}
+
+static void gfx_v11_0_init_rlc_iram_dram_microcode(struct amdgpu_device *adev)
+{
+	const struct rlc_firmware_header_v2_2 *rlc_hdr;
+
+	rlc_hdr = (const struct rlc_firmware_header_v2_2 *)adev->gfx.rlc_fw->data;
+	adev->gfx.rlc.rlc_iram_ucode_size_bytes = le32_to_cpu(rlc_hdr->rlc_iram_ucode_size_bytes);
+	adev->gfx.rlc.rlc_iram_ucode = (u8 *)rlc_hdr + le32_to_cpu(rlc_hdr->rlc_iram_ucode_offset_bytes);
+	adev->gfx.rlc.rlc_dram_ucode_size_bytes = le32_to_cpu(rlc_hdr->rlc_dram_ucode_size_bytes);
+	adev->gfx.rlc.rlc_dram_ucode = (u8 *)rlc_hdr + le32_to_cpu(rlc_hdr->rlc_dram_ucode_offset_bytes);
+}
+
+static void gfx_v11_0_init_rlcp_rlcv_microcode(struct amdgpu_device *adev)
+{
+	const struct rlc_firmware_header_v2_3 *rlc_hdr;
+
+	rlc_hdr = (const struct rlc_firmware_header_v2_3 *)adev->gfx.rlc_fw->data;
+	adev->gfx.rlc.rlcp_ucode_size_bytes = le32_to_cpu(rlc_hdr->rlcp_ucode_size_bytes);
+	adev->gfx.rlc.rlcp_ucode = (u8 *)rlc_hdr + le32_to_cpu(rlc_hdr->rlcp_ucode_offset_bytes);
+	adev->gfx.rlc.rlcv_ucode_size_bytes = le32_to_cpu(rlc_hdr->rlcv_ucode_size_bytes);
+	adev->gfx.rlc.rlcv_ucode = (u8 *)rlc_hdr + le32_to_cpu(rlc_hdr->rlcv_ucode_offset_bytes);
+}
+
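+/* Request and validate the PFP, ME, RLC and MEC images.  The PFP header
+ * version (2_0 vs 1_0) decides whether the RS64 microcode layout is used;
+ * for PSP front door loading the per-pipe instruction and stack sections
+ * are registered as separate ucode entries.
+ */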
+static int gfx_v11_0_init_microcode(struct amdgpu_device *adev)
+{
+	char fw_name[40];
+	char ucode_prefix[30];
+	int err;
+	struct amdgpu_firmware_info *info = NULL;
+	const struct common_firmware_header *header = NULL;
+	const struct gfx_firmware_header_v1_0 *cp_hdr;
+	const struct gfx_firmware_header_v2_0 *cp_hdr_v2_0;
+	const struct rlc_firmware_header_v2_0 *rlc_hdr;
+	unsigned int *tmp = NULL;
+	unsigned int i = 0;
+	uint16_t version_major;
+	uint16_t version_minor;
+
+	DRM_DEBUG("\n");
+
+	amdgpu_ucode_ip_version_decode(adev, GC_HWIP, ucode_prefix, sizeof(ucode_prefix));
+
+	snprintf(fw_name, sizeof(fw_name), "amdgpu/%s_pfp.bin", ucode_prefix);
+	err = request_firmware(&adev->gfx.pfp_fw, fw_name, adev->dev);
+	if (err)
+		goto out;
+	err = amdgpu_ucode_validate(adev->gfx.pfp_fw);
+	if (err)
+		goto out;
+	/* check the pfp fw hdr version to decide whether to enable rs64 for gfx11 */
+	adev->gfx.rs64_enable = amdgpu_ucode_hdr_version(
+				(union amdgpu_firmware_header *)
+				adev->gfx.pfp_fw->data, 2, 0);
+	if (adev->gfx.rs64_enable) {
+		dev_info(adev->dev, "CP RS64 enable\n");
+		cp_hdr_v2_0 = (const struct gfx_firmware_header_v2_0 *)adev->gfx.pfp_fw->data;
+		adev->gfx.pfp_fw_version = le32_to_cpu(cp_hdr_v2_0->header.ucode_version);
+		adev->gfx.pfp_feature_version = le32_to_cpu(cp_hdr_v2_0->ucode_feature_version);
+	} else {
+		cp_hdr = (const struct gfx_firmware_header_v1_0 *)adev->gfx.pfp_fw->data;
+		adev->gfx.pfp_fw_version = le32_to_cpu(cp_hdr->header.ucode_version);
+		adev->gfx.pfp_feature_version = le32_to_cpu(cp_hdr->ucode_feature_version);
+	}
+
+	snprintf(fw_name, sizeof(fw_name), "amdgpu/%s_me.bin", ucode_prefix);
+	err = request_firmware(&adev->gfx.me_fw, fw_name, adev->dev);
+	if (err)
+		goto out;
+	err = amdgpu_ucode_validate(adev->gfx.me_fw);
+	if (err)
+		goto out;
+	if (adev->gfx.rs64_enable) {
+		cp_hdr_v2_0 = (const struct gfx_firmware_header_v2_0 *)adev->gfx.me_fw->data;
+		adev->gfx.me_fw_version = le32_to_cpu(cp_hdr_v2_0->header.ucode_version);
+		adev->gfx.me_feature_version = le32_to_cpu(cp_hdr_v2_0->ucode_feature_version);
+	} else {
+		cp_hdr = (const struct gfx_firmware_header_v1_0 *)adev->gfx.me_fw->data;
+		adev->gfx.me_fw_version = le32_to_cpu(cp_hdr->header.ucode_version);
+		adev->gfx.me_feature_version = le32_to_cpu(cp_hdr->ucode_feature_version);
+	}
+
+	if (!amdgpu_sriov_vf(adev)) {
+		snprintf(fw_name, sizeof(fw_name), "amdgpu/%s_rlc.bin", ucode_prefix);
+		err = request_firmware(&adev->gfx.rlc_fw, fw_name, adev->dev);
+		if (err)
+			goto out;
+		err = amdgpu_ucode_validate(adev->gfx.rlc_fw);
+		rlc_hdr = (const struct rlc_firmware_header_v2_0 *)adev->gfx.rlc_fw->data;
+		version_major = le16_to_cpu(rlc_hdr->header.header_version_major);
+		version_minor = le16_to_cpu(rlc_hdr->header.header_version_minor);
+
+		adev->gfx.rlc_fw_version = le32_to_cpu(rlc_hdr->header.ucode_version);
+		adev->gfx.rlc_feature_version = le32_to_cpu(rlc_hdr->ucode_feature_version);
+		adev->gfx.rlc.save_and_restore_offset =
+			le32_to_cpu(rlc_hdr->save_and_restore_offset);
+		adev->gfx.rlc.clear_state_descriptor_offset =
+			le32_to_cpu(rlc_hdr->clear_state_descriptor_offset);
+		adev->gfx.rlc.avail_scratch_ram_locations =
+			le32_to_cpu(rlc_hdr->avail_scratch_ram_locations);
+		adev->gfx.rlc.reg_restore_list_size =
+			le32_to_cpu(rlc_hdr->reg_restore_list_size);
+		adev->gfx.rlc.reg_list_format_start =
+			le32_to_cpu(rlc_hdr->reg_list_format_start);
+		adev->gfx.rlc.reg_list_format_separate_start =
+			le32_to_cpu(rlc_hdr->reg_list_format_separate_start);
+		adev->gfx.rlc.starting_offsets_start =
+			le32_to_cpu(rlc_hdr->starting_offsets_start);
+		adev->gfx.rlc.reg_list_format_size_bytes =
+			le32_to_cpu(rlc_hdr->reg_list_format_size_bytes);
+		adev->gfx.rlc.reg_list_size_bytes =
+			le32_to_cpu(rlc_hdr->reg_list_size_bytes);
+		adev->gfx.rlc.register_list_format =
+			kmalloc(adev->gfx.rlc.reg_list_format_size_bytes +
+					adev->gfx.rlc.reg_list_size_bytes, GFP_KERNEL);
+		if (!adev->gfx.rlc.register_list_format) {
+			err = -ENOMEM;
+			goto out;
+		}
+
+		tmp = (unsigned int *)((uintptr_t)rlc_hdr +
+							   le32_to_cpu(rlc_hdr->reg_list_format_array_offset_bytes));
+		for (i = 0 ; i < (rlc_hdr->reg_list_format_size_bytes >> 2); i++)
+			adev->gfx.rlc.register_list_format[i] =	le32_to_cpu(tmp[i]);
+
+		adev->gfx.rlc.register_restore = adev->gfx.rlc.register_list_format + i;
+
+		tmp = (unsigned int *)((uintptr_t)rlc_hdr +
+							   le32_to_cpu(rlc_hdr->reg_list_array_offset_bytes));
+		for (i = 0 ; i < (rlc_hdr->reg_list_size_bytes >> 2); i++)
+			adev->gfx.rlc.register_restore[i] = le32_to_cpu(tmp[i]);
+
+		if (version_major == 2) {
+			if (version_minor >= 1)
+				gfx_v11_0_init_rlc_ext_microcode(adev);
+			if (version_minor >= 2)
+				gfx_v11_0_init_rlc_iram_dram_microcode(adev);
+			if (version_minor == 3)
+				gfx_v11_0_init_rlcp_rlcv_microcode(adev);
+		}
+	}
+
+	snprintf(fw_name, sizeof(fw_name), "amdgpu/%s_mec.bin", ucode_prefix);
+	err = request_firmware(&adev->gfx.mec_fw, fw_name, adev->dev);
+	if (err)
+		goto out;
+	err = amdgpu_ucode_validate(adev->gfx.mec_fw);
+	if (err)
+		goto out;
+	if (adev->gfx.rs64_enable) {
+		cp_hdr_v2_0 = (const struct gfx_firmware_header_v2_0 *)adev->gfx.mec_fw->data;
+		adev->gfx.mec_fw_version = le32_to_cpu(cp_hdr_v2_0->header.ucode_version);
+		adev->gfx.mec_feature_version = le32_to_cpu(cp_hdr_v2_0->ucode_feature_version);
+	} else {
+		cp_hdr = (const struct gfx_firmware_header_v1_0 *)adev->gfx.mec_fw->data;
+		adev->gfx.mec_fw_version = le32_to_cpu(cp_hdr->header.ucode_version);
+		adev->gfx.mec_feature_version = le32_to_cpu(cp_hdr->ucode_feature_version);
+	}
+
+	/* only one MEC for gfx 11.0.0 */
+	adev->gfx.mec2_fw = NULL;
+
+	if (adev->firmware.load_type == AMDGPU_FW_LOAD_PSP) {
+		if (adev->gfx.rs64_enable) {
+			cp_hdr_v2_0 = (const struct gfx_firmware_header_v2_0 *)adev->gfx.pfp_fw->data;
+			info = &adev->firmware.ucode[AMDGPU_UCODE_ID_CP_RS64_PFP];
+			info->ucode_id = AMDGPU_UCODE_ID_CP_RS64_PFP;
+			info->fw = adev->gfx.pfp_fw;
+			header = (const struct common_firmware_header *)info->fw->data;
+			adev->firmware.fw_size +=
+				ALIGN(le32_to_cpu(cp_hdr_v2_0->ucode_size_bytes), PAGE_SIZE);
+
+			info = &adev->firmware.ucode[AMDGPU_UCODE_ID_CP_RS64_PFP_P0_STACK];
+			info->ucode_id = AMDGPU_UCODE_ID_CP_RS64_PFP_P0_STACK;
+			info->fw = adev->gfx.pfp_fw;
+			header = (const struct common_firmware_header *)info->fw->data;
+			adev->firmware.fw_size +=
+				ALIGN(le32_to_cpu(cp_hdr_v2_0->data_size_bytes), PAGE_SIZE);
+
+			info = &adev->firmware.ucode[AMDGPU_UCODE_ID_CP_RS64_PFP_P1_STACK];
+			info->ucode_id = AMDGPU_UCODE_ID_CP_RS64_PFP_P1_STACK;
+			info->fw = adev->gfx.pfp_fw;
+			header = (const struct common_firmware_header *)info->fw->data;
+			adev->firmware.fw_size +=
+				ALIGN(le32_to_cpu(cp_hdr_v2_0->data_size_bytes), PAGE_SIZE);
+
+			cp_hdr_v2_0 = (const struct gfx_firmware_header_v2_0 *)adev->gfx.me_fw->data;
+			info = &adev->firmware.ucode[AMDGPU_UCODE_ID_CP_RS64_ME];
+			info->ucode_id = AMDGPU_UCODE_ID_CP_RS64_ME;
+			info->fw = adev->gfx.me_fw;
+			header = (const struct common_firmware_header *)info->fw->data;
+			adev->firmware.fw_size +=
+				ALIGN(le32_to_cpu(cp_hdr_v2_0->ucode_size_bytes), PAGE_SIZE);
+
+			info = &adev->firmware.ucode[AMDGPU_UCODE_ID_CP_RS64_ME_P0_STACK];
+			info->ucode_id = AMDGPU_UCODE_ID_CP_RS64_ME_P0_STACK;
+			info->fw = adev->gfx.me_fw;
+			header = (const struct common_firmware_header *)info->fw->data;
+			adev->firmware.fw_size +=
+				ALIGN(le32_to_cpu(cp_hdr_v2_0->data_size_bytes), PAGE_SIZE);
+
+			info = &adev->firmware.ucode[AMDGPU_UCODE_ID_CP_RS64_ME_P1_STACK];
+			info->ucode_id = AMDGPU_UCODE_ID_CP_RS64_ME_P1_STACK;
+			info->fw = adev->gfx.me_fw;
+			header = (const struct common_firmware_header *)info->fw->data;
+			adev->firmware.fw_size +=
+				ALIGN(le32_to_cpu(cp_hdr_v2_0->data_size_bytes), PAGE_SIZE);
+
+			cp_hdr_v2_0 = (const struct gfx_firmware_header_v2_0 *)adev->gfx.mec_fw->data;
+			info = &adev->firmware.ucode[AMDGPU_UCODE_ID_CP_RS64_MEC];
+			info->ucode_id = AMDGPU_UCODE_ID_CP_RS64_MEC;
+			info->fw = adev->gfx.mec_fw;
+			header = (const struct common_firmware_header *)info->fw->data;
+			adev->firmware.fw_size +=
+				ALIGN(le32_to_cpu(cp_hdr_v2_0->ucode_size_bytes), PAGE_SIZE);
+
+			info = &adev->firmware.ucode[AMDGPU_UCODE_ID_CP_RS64_MEC_P0_STACK];
+			info->ucode_id = AMDGPU_UCODE_ID_CP_RS64_MEC_P0_STACK;
+			info->fw = adev->gfx.mec_fw;
+			header = (const struct common_firmware_header *)info->fw->data;
+			adev->firmware.fw_size +=
+				ALIGN(le32_to_cpu(cp_hdr_v2_0->data_size_bytes), PAGE_SIZE);
+
+			info = &adev->firmware.ucode[AMDGPU_UCODE_ID_CP_RS64_MEC_P1_STACK];
+			info->ucode_id = AMDGPU_UCODE_ID_CP_RS64_MEC_P1_STACK;
+			info->fw = adev->gfx.mec_fw;
+			header = (const struct common_firmware_header *)info->fw->data;
+			adev->firmware.fw_size +=
+				ALIGN(le32_to_cpu(cp_hdr_v2_0->data_size_bytes), PAGE_SIZE);
+
+			info = &adev->firmware.ucode[AMDGPU_UCODE_ID_CP_RS64_MEC_P2_STACK];
+			info->ucode_id = AMDGPU_UCODE_ID_CP_RS64_MEC_P2_STACK;
+			info->fw = adev->gfx.mec_fw;
+			header = (const struct common_firmware_header *)info->fw->data;
+			adev->firmware.fw_size +=
+				ALIGN(le32_to_cpu(cp_hdr_v2_0->data_size_bytes), PAGE_SIZE);
+
+			info = &adev->firmware.ucode[AMDGPU_UCODE_ID_CP_RS64_MEC_P3_STACK];
+			info->ucode_id = AMDGPU_UCODE_ID_CP_RS64_MEC_P3_STACK;
+			info->fw = adev->gfx.mec_fw;
+			header = (const struct common_firmware_header *)info->fw->data;
+			adev->firmware.fw_size +=
+				ALIGN(le32_to_cpu(cp_hdr_v2_0->data_size_bytes), PAGE_SIZE);
+		} else {
+			info = &adev->firmware.ucode[AMDGPU_UCODE_ID_CP_PFP];
+			info->ucode_id = AMDGPU_UCODE_ID_CP_PFP;
+			info->fw = adev->gfx.pfp_fw;
+			header = (const struct common_firmware_header *)info->fw->data;
+			adev->firmware.fw_size +=
+				ALIGN(le32_to_cpu(header->ucode_size_bytes), PAGE_SIZE);
+
+			info = &adev->firmware.ucode[AMDGPU_UCODE_ID_CP_ME];
+			info->ucode_id = AMDGPU_UCODE_ID_CP_ME;
+			info->fw = adev->gfx.me_fw;
+			header = (const struct common_firmware_header *)info->fw->data;
+			adev->firmware.fw_size +=
+				ALIGN(le32_to_cpu(header->ucode_size_bytes), PAGE_SIZE);
+
+			info = &adev->firmware.ucode[AMDGPU_UCODE_ID_CP_MEC1];
+			info->ucode_id = AMDGPU_UCODE_ID_CP_MEC1;
+			info->fw = adev->gfx.mec_fw;
+			header = (const struct common_firmware_header *)info->fw->data;
+			cp_hdr = (const struct gfx_firmware_header_v1_0 *)info->fw->data;
+			adev->firmware.fw_size +=
+				ALIGN(le32_to_cpu(header->ucode_size_bytes) -
+				      le32_to_cpu(cp_hdr->jt_size) * 4, PAGE_SIZE);
+
+			info = &adev->firmware.ucode[AMDGPU_UCODE_ID_CP_MEC1_JT];
+			info->ucode_id = AMDGPU_UCODE_ID_CP_MEC1_JT;
+			info->fw = adev->gfx.mec_fw;
+			adev->firmware.fw_size +=
+				ALIGN(le32_to_cpu(cp_hdr->jt_size) * 4, PAGE_SIZE);
+		}
+
+		info = &adev->firmware.ucode[AMDGPU_UCODE_ID_RLC_G];
+		info->ucode_id = AMDGPU_UCODE_ID_RLC_G;
+		info->fw = adev->gfx.rlc_fw;
+		if (info->fw) {
+			header = (const struct common_firmware_header *)info->fw->data;
+			adev->firmware.fw_size +=
+				ALIGN(le32_to_cpu(header->ucode_size_bytes), PAGE_SIZE);
+		}
+		if (adev->gfx.rlc.save_restore_list_gpm_size_bytes &&
+		    adev->gfx.rlc.save_restore_list_srm_size_bytes) {
+			info = &adev->firmware.ucode[AMDGPU_UCODE_ID_RLC_RESTORE_LIST_GPM_MEM];
+			info->ucode_id = AMDGPU_UCODE_ID_RLC_RESTORE_LIST_GPM_MEM;
+			info->fw = adev->gfx.rlc_fw;
+			adev->firmware.fw_size +=
+				ALIGN(adev->gfx.rlc.save_restore_list_gpm_size_bytes, PAGE_SIZE);
+
+			info = &adev->firmware.ucode[AMDGPU_UCODE_ID_RLC_RESTORE_LIST_SRM_MEM];
+			info->ucode_id = AMDGPU_UCODE_ID_RLC_RESTORE_LIST_SRM_MEM;
+			info->fw = adev->gfx.rlc_fw;
+			adev->firmware.fw_size +=
+				ALIGN(adev->gfx.rlc.save_restore_list_srm_size_bytes, PAGE_SIZE);
+		}
+
+		if (adev->gfx.rlc.rlc_iram_ucode_size_bytes &&
+		    adev->gfx.rlc.rlc_dram_ucode_size_bytes) {
+			info = &adev->firmware.ucode[AMDGPU_UCODE_ID_RLC_IRAM];
+			info->ucode_id = AMDGPU_UCODE_ID_RLC_IRAM;
+			info->fw = adev->gfx.rlc_fw;
+			adev->firmware.fw_size +=
+				ALIGN(adev->gfx.rlc.rlc_iram_ucode_size_bytes, PAGE_SIZE);
+
+			info = &adev->firmware.ucode[AMDGPU_UCODE_ID_RLC_DRAM];
+			info->ucode_id = AMDGPU_UCODE_ID_RLC_DRAM;
+			info->fw = adev->gfx.rlc_fw;
+			adev->firmware.fw_size +=
+				ALIGN(adev->gfx.rlc.rlc_dram_ucode_size_bytes, PAGE_SIZE);
+		}
+
+		if (adev->gfx.rlc.rlcp_ucode_size_bytes) {
+			info = &adev->firmware.ucode[AMDGPU_UCODE_ID_RLC_P];
+			info->ucode_id = AMDGPU_UCODE_ID_RLC_P;
+			info->fw = adev->gfx.rlc_fw;
+			adev->firmware.fw_size +=
+				ALIGN(adev->gfx.rlc.rlcp_ucode_size_bytes, PAGE_SIZE);
+		}
+
+		if (adev->gfx.rlc.rlcv_ucode_size_bytes) {
+			info = &adev->firmware.ucode[AMDGPU_UCODE_ID_RLC_V];
+			info->ucode_id = AMDGPU_UCODE_ID_RLC_V;
+			info->fw = adev->gfx.rlc_fw;
+			adev->firmware.fw_size +=
+				ALIGN(adev->gfx.rlc.rlcv_ucode_size_bytes, PAGE_SIZE);
+		}
+	}
+
+out:
+	if (err) {
+		dev_err(adev->dev,
+			"gfx11: Failed to load firmware \"%s\"\n",
+			fw_name);
+		release_firmware(adev->gfx.pfp_fw);
+		adev->gfx.pfp_fw = NULL;
+		release_firmware(adev->gfx.me_fw);
+		adev->gfx.me_fw = NULL;
+		release_firmware(adev->gfx.rlc_fw);
+		adev->gfx.rlc_fw = NULL;
+		release_firmware(adev->gfx.mec_fw);
+		adev->gfx.mec_fw = NULL;
+	}
+
+	return err;
+}
+
+static int gfx_v11_0_init_toc_microcode(struct amdgpu_device *adev)
+{
+	const struct psp_firmware_header_v1_0 *toc_hdr;
+	int err = 0;
+	char fw_name[40];
+	char ucode_prefix[30];
+
+	amdgpu_ucode_ip_version_decode(adev, GC_HWIP, ucode_prefix, sizeof(ucode_prefix));
+
+	snprintf(fw_name, sizeof(fw_name), "amdgpu/%s_toc.bin", ucode_prefix);
+	err = request_firmware(&adev->psp.toc_fw, fw_name, adev->dev);
+	if (err)
+		goto out;
+
+	err = amdgpu_ucode_validate(adev->psp.toc_fw);
+	if (err)
+		goto out;
+
+	toc_hdr = (const struct psp_firmware_header_v1_0 *)adev->psp.toc_fw->data;
+	adev->psp.toc.fw_version = le32_to_cpu(toc_hdr->header.ucode_version);
+	adev->psp.toc.feature_version = le32_to_cpu(toc_hdr->sos.fw_version);
+	adev->psp.toc.size_bytes = le32_to_cpu(toc_hdr->header.ucode_size_bytes);
+	adev->psp.toc.start_addr = (uint8_t *)toc_hdr +
+				le32_to_cpu(toc_hdr->header.ucode_array_offset_bytes);
+	return 0;
+out:
+	dev_err(adev->dev, "Failed to load TOC microcode\n");
+	release_firmware(adev->psp.toc_fw);
+	adev->psp.toc_fw = NULL;
+	return err;
+}
+
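+/* Count the dwords needed for the clear-state buffer: preamble begin/end,
+ * CONTEXT_CONTROL, one SET_CONTEXT_REG header plus payload per context
+ * register extent, the PA_SC_TILE_STEERING_OVERRIDE write and the final
+ * CLEAR_STATE packet.
+ */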
+static u32 gfx_v11_0_get_csb_size(struct amdgpu_device *adev)
+{
+	u32 count = 0;
+	const struct cs_section_def *sect = NULL;
+	const struct cs_extent_def *ext = NULL;
+
+	/* begin clear state */
+	count += 2;
+	/* context control state */
+	count += 3;
+
+	for (sect = gfx11_cs_data; sect->section != NULL; ++sect) {
+		for (ext = sect->section; ext->extent != NULL; ++ext) {
+			if (sect->id == SECT_CONTEXT)
+				count += 2 + ext->reg_count;
+			else
+				return 0;
+		}
+	}
+
+	/* set PA_SC_TILE_STEERING_OVERRIDE */
+	count += 3;
+	/* end clear state */
+	count += 2;
+	/* clear state */
+	count += 2;
+
+	return count;
+}
+
+static void gfx_v11_0_get_csb_buffer(struct amdgpu_device *adev,
+				    volatile u32 *buffer)
+{
+	u32 count = 0, i;
+	const struct cs_section_def *sect = NULL;
+	const struct cs_extent_def *ext = NULL;
+	int ctx_reg_offset;
+
+	if (adev->gfx.rlc.cs_data == NULL)
+		return;
+	if (buffer == NULL)
+		return;
+
+	buffer[count++] = cpu_to_le32(PACKET3(PACKET3_PREAMBLE_CNTL, 0));
+	buffer[count++] = cpu_to_le32(PACKET3_PREAMBLE_BEGIN_CLEAR_STATE);
+
+	buffer[count++] = cpu_to_le32(PACKET3(PACKET3_CONTEXT_CONTROL, 1));
+	buffer[count++] = cpu_to_le32(0x80000000);
+	buffer[count++] = cpu_to_le32(0x80000000);
+
+	for (sect = adev->gfx.rlc.cs_data; sect->section != NULL; ++sect) {
+		for (ext = sect->section; ext->extent != NULL; ++ext) {
+			if (sect->id == SECT_CONTEXT) {
+				buffer[count++] =
+					cpu_to_le32(PACKET3(PACKET3_SET_CONTEXT_REG, ext->reg_count));
+				buffer[count++] = cpu_to_le32(ext->reg_index -
+						PACKET3_SET_CONTEXT_REG_START);
+				for (i = 0; i < ext->reg_count; i++)
+					buffer[count++] = cpu_to_le32(ext->extent[i]);
+			} else {
+				return;
+			}
+		}
+	}
+
+	ctx_reg_offset =
+		SOC15_REG_OFFSET(GC, 0, regPA_SC_TILE_STEERING_OVERRIDE) - PACKET3_SET_CONTEXT_REG_START;
+	buffer[count++] = cpu_to_le32(PACKET3(PACKET3_SET_CONTEXT_REG, 1));
+	buffer[count++] = cpu_to_le32(ctx_reg_offset);
+	buffer[count++] = cpu_to_le32(adev->gfx.config.pa_sc_tile_steering_override);
+
+	buffer[count++] = cpu_to_le32(PACKET3(PACKET3_PREAMBLE_CNTL, 0));
+	buffer[count++] = cpu_to_le32(PACKET3_PREAMBLE_END_CLEAR_STATE);
+
+	buffer[count++] = cpu_to_le32(PACKET3(PACKET3_CLEAR_STATE, 0));
+	buffer[count++] = cpu_to_le32(0);
+}
+
+static void gfx_v11_0_rlc_fini(struct amdgpu_device *adev)
+{
+	/* clear state block */
+	amdgpu_bo_free_kernel(&adev->gfx.rlc.clear_state_obj,
+			&adev->gfx.rlc.clear_state_gpu_addr,
+			(void **)&adev->gfx.rlc.cs_ptr);
+
+	/* jump table block */
+	amdgpu_bo_free_kernel(&adev->gfx.rlc.cp_table_obj,
+			&adev->gfx.rlc.cp_table_gpu_addr,
+			(void **)&adev->gfx.rlc.cp_table_ptr);
+}
+
+static void gfx_v11_0_init_rlcg_reg_access_ctrl(struct amdgpu_device *adev)
+{
+	struct amdgpu_rlcg_reg_access_ctrl *reg_access_ctrl;
+
+	reg_access_ctrl = &adev->gfx.rlc.reg_access_ctrl;
+	reg_access_ctrl->scratch_reg0 = SOC15_REG_OFFSET(GC, 0, regSCRATCH_REG0);
+	reg_access_ctrl->scratch_reg1 = SOC15_REG_OFFSET(GC, 0, regSCRATCH_REG1);
+	reg_access_ctrl->scratch_reg2 = SOC15_REG_OFFSET(GC, 0, regSCRATCH_REG2);
+	reg_access_ctrl->scratch_reg3 = SOC15_REG_OFFSET(GC, 0, regSCRATCH_REG3);
+	reg_access_ctrl->grbm_cntl = SOC15_REG_OFFSET(GC, 0, regGRBM_GFX_CNTL);
+	reg_access_ctrl->grbm_idx = SOC15_REG_OFFSET(GC, 0, regGRBM_GFX_INDEX);
+	reg_access_ctrl->spare_int = SOC15_REG_OFFSET(GC, 0, regRLC_SPARE_INT_0);
+	adev->gfx.rlc.rlcg_reg_access_supported = true;
+}
+
+static int gfx_v11_0_rlc_init(struct amdgpu_device *adev)
+{
+	const struct cs_section_def *cs_data;
+	int r;
+
+	adev->gfx.rlc.cs_data = gfx11_cs_data;
+
+	cs_data = adev->gfx.rlc.cs_data;
+
+	if (cs_data) {
+		/* init clear state block */
+		r = amdgpu_gfx_rlc_init_csb(adev);
+		if (r)
+			return r;
+	}
+
+	/* init spm vmid with 0xf */
+	if (adev->gfx.rlc.funcs->update_spm_vmid)
+		adev->gfx.rlc.funcs->update_spm_vmid(adev, 0xf);
+
+	return 0;
+}
+
+static void gfx_v11_0_mec_fini(struct amdgpu_device *adev)
+{
+	amdgpu_bo_free_kernel(&adev->gfx.mec.hpd_eop_obj, NULL, NULL);
+	amdgpu_bo_free_kernel(&adev->gfx.mec.mec_fw_obj, NULL, NULL);
+	amdgpu_bo_free_kernel(&adev->gfx.mec.mec_fw_data_obj, NULL, NULL);
+}
+
+static int gfx_v11_0_me_init(struct amdgpu_device *adev)
+{
+	int r;
+
+	bitmap_zero(adev->gfx.me.queue_bitmap, AMDGPU_MAX_GFX_QUEUES);
+
+	amdgpu_gfx_graphics_queue_acquire(adev);
+
+	r = gfx_v11_0_init_microcode(adev);
+	if (r)
+		DRM_ERROR("Failed to load gfx firmware!\n");
+
+	return r;
+}
+
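+/* Take ownership of the compute queues and allocate one HPD EOP slot per
+ * enabled compute ring in a GTT buffer, cleared to zero.
+ */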
+static int gfx_v11_0_mec_init(struct amdgpu_device *adev)
+{
+	int r;
+	u32 *hpd;
+	size_t mec_hpd_size;
+
+	bitmap_zero(adev->gfx.mec.queue_bitmap, AMDGPU_MAX_COMPUTE_QUEUES);
+
+	/* take ownership of the relevant compute queues */
+	amdgpu_gfx_compute_queue_acquire(adev);
+	mec_hpd_size = adev->gfx.num_compute_rings * GFX11_MEC_HPD_SIZE;
+
+	if (mec_hpd_size) {
+		r = amdgpu_bo_create_reserved(adev, mec_hpd_size, PAGE_SIZE,
+					      AMDGPU_GEM_DOMAIN_GTT,
+					      &adev->gfx.mec.hpd_eop_obj,
+					      &adev->gfx.mec.hpd_eop_gpu_addr,
+					      (void **)&hpd);
+		if (r) {
+			dev_warn(adev->dev, "(%d) create HPD EOP bo failed\n", r);
+			gfx_v11_0_mec_fini(adev);
+			return r;
+		}
+
+		memset(hpd, 0, mec_hpd_size);
+
+		amdgpu_bo_kunmap(adev->gfx.mec.hpd_eop_obj);
+		amdgpu_bo_unreserve(adev->gfx.mec.hpd_eop_obj);
+	}
+
+	return 0;
+}
+
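+/* The helpers below read per-wave state through the SQ indirect register
+ * interface: SQ_IND_INDEX selects the wave and register (with optional
+ * auto-increment), SQ_IND_DATA returns the value.
+ */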
+static uint32_t wave_read_ind(struct amdgpu_device *adev, uint32_t wave, uint32_t address)
+{
+	WREG32_SOC15(GC, 0, regSQ_IND_INDEX,
+		(wave << SQ_IND_INDEX__WAVE_ID__SHIFT) |
+		(address << SQ_IND_INDEX__INDEX__SHIFT));
+	return RREG32_SOC15(GC, 0, regSQ_IND_DATA);
+}
+
+static void wave_read_regs(struct amdgpu_device *adev, uint32_t wave,
+			   uint32_t thread, uint32_t regno,
+			   uint32_t num, uint32_t *out)
+{
+	WREG32_SOC15(GC, 0, regSQ_IND_INDEX,
+		(wave << SQ_IND_INDEX__WAVE_ID__SHIFT) |
+		(regno << SQ_IND_INDEX__INDEX__SHIFT) |
+		(thread << SQ_IND_INDEX__WORKITEM_ID__SHIFT) |
+		(SQ_IND_INDEX__AUTO_INCR_MASK));
+	while (num--)
+		*(out++) = RREG32_SOC15(GC, 0, regSQ_IND_DATA);
+}
+
+static void gfx_v11_0_read_wave_data(struct amdgpu_device *adev, uint32_t simd, uint32_t wave, uint32_t *dst, int *no_fields)
+{
+	/* in gfx11 the SIMD_ID is specified as part of the INSTANCE
+	 * field when performing a select_se_sh so it should be
+	 * zero here */
+	WARN_ON(simd != 0);
+
+	/* type 2 wave data */
+	dst[(*no_fields)++] = 2;
+	dst[(*no_fields)++] = wave_read_ind(adev, wave, ixSQ_WAVE_STATUS);
+	dst[(*no_fields)++] = wave_read_ind(adev, wave, ixSQ_WAVE_PC_LO);
+	dst[(*no_fields)++] = wave_read_ind(adev, wave, ixSQ_WAVE_PC_HI);
+	dst[(*no_fields)++] = wave_read_ind(adev, wave, ixSQ_WAVE_EXEC_LO);
+	dst[(*no_fields)++] = wave_read_ind(adev, wave, ixSQ_WAVE_EXEC_HI);
+	dst[(*no_fields)++] = wave_read_ind(adev, wave, ixSQ_WAVE_HW_ID1);
+	dst[(*no_fields)++] = wave_read_ind(adev, wave, ixSQ_WAVE_HW_ID2);
+	dst[(*no_fields)++] = wave_read_ind(adev, wave, ixSQ_WAVE_GPR_ALLOC);
+	dst[(*no_fields)++] = wave_read_ind(adev, wave, ixSQ_WAVE_LDS_ALLOC);
+	dst[(*no_fields)++] = wave_read_ind(adev, wave, ixSQ_WAVE_TRAPSTS);
+	dst[(*no_fields)++] = wave_read_ind(adev, wave, ixSQ_WAVE_IB_STS);
+	dst[(*no_fields)++] = wave_read_ind(adev, wave, ixSQ_WAVE_IB_STS2);
+	dst[(*no_fields)++] = wave_read_ind(adev, wave, ixSQ_WAVE_IB_DBG1);
+	dst[(*no_fields)++] = wave_read_ind(adev, wave, ixSQ_WAVE_M0);
+}
+
+static void gfx_v11_0_read_wave_sgprs(struct amdgpu_device *adev, uint32_t simd,
+				     uint32_t wave, uint32_t start,
+				     uint32_t size, uint32_t *dst)
+{
+	WARN_ON(simd != 0);
+
+	wave_read_regs(
+		adev, wave, 0, start + SQIND_WAVE_SGPRS_OFFSET, size,
+		dst);
+}
+
+static void gfx_v11_0_read_wave_vgprs(struct amdgpu_device *adev, uint32_t simd,
+				      uint32_t wave, uint32_t thread,
+				      uint32_t start, uint32_t size,
+				      uint32_t *dst)
+{
+	wave_read_regs(
+		adev, wave, thread,
+		start + SQIND_WAVE_VGPRS_OFFSET, size, dst);
+}
+
+static void gfx_v11_0_select_me_pipe_q(struct amdgpu_device *adev,
+									  u32 me, u32 pipe, u32 q, u32 vm)
+{
+	soc21_grbm_select(adev, me, pipe, q, vm);
+}
+
+static const struct amdgpu_gfx_funcs gfx_v11_0_gfx_funcs = {
+	.get_gpu_clock_counter = &gfx_v11_0_get_gpu_clock_counter,
+	.select_se_sh = &gfx_v11_0_select_se_sh,
+	.read_wave_data = &gfx_v11_0_read_wave_data,
+	.read_wave_sgprs = &gfx_v11_0_read_wave_sgprs,
+	.read_wave_vgprs = &gfx_v11_0_read_wave_vgprs,
+	.select_me_pipe_q = &gfx_v11_0_select_me_pipe_q,
+	.init_spm_golden = &gfx_v11_0_init_spm_golden_registers,
+};
+
+static int gfx_v11_0_gpu_early_init(struct amdgpu_device *adev)
+{
+	adev->gfx.funcs = &gfx_v11_0_gfx_funcs;
+
+	switch (adev->ip_versions[GC_HWIP][0]) {
+	case IP_VERSION(11, 0, 0):
+		adev->gfx.config.max_hw_contexts = 8;
+		adev->gfx.config.sc_prim_fifo_size_frontend = 0x20;
+		adev->gfx.config.sc_prim_fifo_size_backend = 0x100;
+		adev->gfx.config.sc_hiz_tile_fifo_size = 0;
+		adev->gfx.config.sc_earlyz_tile_fifo_size = 0x4C0;
+		break;
+	default:
+		BUG();
+		break;
+	}
+
+	return 0;
+}
+
+static int gfx_v11_0_gfx_ring_init(struct amdgpu_device *adev, int ring_id,
+				   int me, int pipe, int queue)
+{
+	int r;
+	struct amdgpu_ring *ring;
+	unsigned int irq_type;
+
+	ring = &adev->gfx.gfx_ring[ring_id];
+
+	ring->me = me;
+	ring->pipe = pipe;
+	ring->queue = queue;
+
+	ring->ring_obj = NULL;
+	ring->use_doorbell = true;
+
+	if (!ring_id)
+		ring->doorbell_index = adev->doorbell_index.gfx_ring0 << 1;
+	else
+		ring->doorbell_index = adev->doorbell_index.gfx_ring1 << 1;
+	sprintf(ring->name, "gfx_%d.%d.%d", ring->me, ring->pipe, ring->queue);
+
+	irq_type = AMDGPU_CP_IRQ_GFX_ME0_PIPE0_EOP + ring->pipe;
+	r = amdgpu_ring_init(adev, ring, 1024, &adev->gfx.eop_irq, irq_type,
+			     AMDGPU_RING_PRIO_DEFAULT, NULL);
+	if (r)
+		return r;
+	return 0;
+}
+
+static int gfx_v11_0_compute_ring_init(struct amdgpu_device *adev, int ring_id,
+				       int mec, int pipe, int queue)
+{
+	int r;
+	unsigned irq_type;
+	struct amdgpu_ring *ring;
+	unsigned int hw_prio;
+
+	ring = &adev->gfx.compute_ring[ring_id];
+
+	/* mec0 is me1 */
+	ring->me = mec + 1;
+	ring->pipe = pipe;
+	ring->queue = queue;
+
+	ring->ring_obj = NULL;
+	ring->use_doorbell = true;
+	ring->doorbell_index = (adev->doorbell_index.mec_ring0 + ring_id) << 1;
+	ring->eop_gpu_addr = adev->gfx.mec.hpd_eop_gpu_addr
+				+ (ring_id * GFX11_MEC_HPD_SIZE);
+	sprintf(ring->name, "comp_%d.%d.%d", ring->me, ring->pipe, ring->queue);
+
+	irq_type = AMDGPU_CP_IRQ_COMPUTE_MEC1_PIPE0_EOP
+		+ ((ring->me - 1) * adev->gfx.mec.num_pipe_per_mec)
+		+ ring->pipe;
+	hw_prio = amdgpu_gfx_is_high_priority_compute_queue(adev, ring) ?
+			AMDGPU_GFX_PIPE_PRIO_HIGH : AMDGPU_GFX_PIPE_PRIO_NORMAL;
+	/* type-2 packets are deprecated on MEC, use type-3 instead */
+	r = amdgpu_ring_init(adev, ring, 1024, &adev->gfx.eop_irq, irq_type,
+			     hw_prio, NULL);
+	if (r)
+		return r;
+
+	return 0;
+}
+
+static struct {
+	SOC21_FIRMWARE_ID	id;
+	unsigned int		offset;
+	unsigned int		size;
+} rlc_autoload_info[SOC21_FIRMWARE_ID_MAX];
+
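+/* Walk the RLC table-of-contents blob and record the byte offset and size
+ * of every firmware image the backdoor autoload path may have to stage
+ * into the autoload buffer.
+ */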
+static void gfx_v11_0_parse_rlc_toc(struct amdgpu_device *adev, void *rlc_toc)
+{
+	RLC_TABLE_OF_CONTENT *ucode = rlc_toc;
+
+	while (ucode && (ucode->id > SOC21_FIRMWARE_ID_INVALID) &&
+			(ucode->id < SOC21_FIRMWARE_ID_MAX)) {
+		rlc_autoload_info[ucode->id].id = ucode->id;
+		rlc_autoload_info[ucode->id].offset = ucode->offset * 4;
+		rlc_autoload_info[ucode->id].size = ucode->size * 4;
+
+		ucode++;
+	}
+}
+
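+/* The autoload buffer must hold every image listed in the ToC: sum the
+ * per-image sizes and, in case the ToC padded the offsets, make sure the
+ * last entry (offset + size) still fits.
+ */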
+static uint32_t gfx_v11_0_calc_toc_total_size(struct amdgpu_device *adev)
+{
+	uint32_t total_size = 0;
+	SOC21_FIRMWARE_ID id;
+
+	gfx_v11_0_parse_rlc_toc(adev, adev->psp.toc.start_addr);
+
+	for (id = SOC21_FIRMWARE_ID_RLC_G_UCODE; id < SOC21_FIRMWARE_ID_MAX; id++)
+		total_size += rlc_autoload_info[id].size;
+
+	/* In case the offset in rlc toc ucode is aligned */
+	if (total_size < rlc_autoload_info[SOC21_FIRMWARE_ID_MAX-1].offset)
+		total_size = rlc_autoload_info[SOC21_FIRMWARE_ID_MAX-1].offset +
+			rlc_autoload_info[SOC21_FIRMWARE_ID_MAX-1].size;
+
+	return total_size;
+}
+
+static int gfx_v11_0_rlc_autoload_buffer_init(struct amdgpu_device *adev)
+{
+	int r;
+	uint32_t total_size;
+
+	total_size = gfx_v11_0_calc_toc_total_size(adev);
+
+	r = amdgpu_bo_create_reserved(adev, total_size, 64 * 1024,
+			AMDGPU_GEM_DOMAIN_VRAM,
+			&adev->gfx.rlc.rlc_autoload_bo,
+			&adev->gfx.rlc.rlc_autoload_gpu_addr,
+			(void **)&adev->gfx.rlc.rlc_autoload_ptr);
+
+	if (r) {
+		dev_err(adev->dev, "(%d) failed to create fw autoload bo\n", r);
+		return r;
+	}
+
+	return 0;
+}
+
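+/* Copy one firmware image into the autoload buffer at the offset the ToC
+ * assigned to it, zero-pad up to the ToC size, and flag the image in the
+ * 64-bit autoload mask (the RS64 PFP/ME images are deliberately not
+ * flagged).
+ */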
+static void gfx_v11_0_rlc_backdoor_autoload_copy_ucode(struct amdgpu_device *adev,
+					      SOC21_FIRMWARE_ID id,
+					      const void *fw_data,
+					      uint32_t fw_size,
+					      uint32_t *fw_autoload_mask)
+{
+	uint32_t toc_offset;
+	uint32_t toc_fw_size;
+	char *ptr = adev->gfx.rlc.rlc_autoload_ptr;
+
+	if (id <= SOC21_FIRMWARE_ID_INVALID || id >= SOC21_FIRMWARE_ID_MAX)
+		return;
+
+	toc_offset = rlc_autoload_info[id].offset;
+	toc_fw_size = rlc_autoload_info[id].size;
+
+	if (fw_size == 0)
+		fw_size = toc_fw_size;
+
+	if (fw_size > toc_fw_size)
+		fw_size = toc_fw_size;
+
+	memcpy(ptr + toc_offset, fw_data, fw_size);
+
+	if (fw_size < toc_fw_size)
+		memset(ptr + toc_offset + fw_size, 0, toc_fw_size - fw_size);
+
+	if ((id != SOC21_FIRMWARE_ID_RS64_PFP) && (id != SOC21_FIRMWARE_ID_RS64_ME))
+		*(uint64_t *)fw_autoload_mask |= 1 << id;
+}
+
+static void gfx_v11_0_rlc_backdoor_autoload_copy_toc_ucode(struct amdgpu_device *adev,
+							uint32_t *fw_autoload_mask)
+{
+	void *data;
+	uint32_t size;
+	uint64_t *toc_ptr;
+
+	*(uint64_t *)fw_autoload_mask |= 0x1;
+
+	DRM_DEBUG("rlc autoload enabled fw: 0x%llx\n", *(uint64_t *)fw_autoload_mask);
+
+	data = adev->psp.toc.start_addr;
+	size = rlc_autoload_info[SOC21_FIRMWARE_ID_RLC_TOC].size;
+
+	toc_ptr = (uint64_t *)data + size / 8 - 1;
+	*toc_ptr = *(uint64_t *)fw_autoload_mask;
+
+	gfx_v11_0_rlc_backdoor_autoload_copy_ucode(adev, SOC21_FIRMWARE_ID_RLC_TOC,
+					data, size, fw_autoload_mask);
+}
+
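+/* Stage the CP (PFP/ME/MEC) and RLC images.  RS64 firmware carries
+ * separate instruction and data (stack) sections per pipe, while the
+ * legacy v1_0 headers describe a single image (minus the MEC jump table).
+ */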
+static void gfx_v11_0_rlc_backdoor_autoload_copy_gfx_ucode(struct amdgpu_device *adev,
+							uint32_t *fw_autoload_mask)
+{
+	const __le32 *fw_data;
+	uint32_t fw_size;
+	const struct gfx_firmware_header_v1_0 *cp_hdr;
+	const struct gfx_firmware_header_v2_0 *cpv2_hdr;
+	const struct rlc_firmware_header_v2_0 *rlc_hdr;
+	const struct rlc_firmware_header_v2_2 *rlcv22_hdr;
+	uint16_t version_major, version_minor;
+
+	if (adev->gfx.rs64_enable) {
+		/* pfp ucode */
+		cpv2_hdr = (const struct gfx_firmware_header_v2_0 *)
+			adev->gfx.pfp_fw->data;
+		/* instruction */
+		fw_data = (const __le32 *)(adev->gfx.pfp_fw->data +
+			le32_to_cpu(cpv2_hdr->ucode_offset_bytes));
+		fw_size = le32_to_cpu(cpv2_hdr->ucode_size_bytes);
+		gfx_v11_0_rlc_backdoor_autoload_copy_ucode(adev, SOC21_FIRMWARE_ID_RS64_PFP,
+						fw_data, fw_size, fw_autoload_mask);
+		/* data */
+		fw_data = (const __le32 *)(adev->gfx.pfp_fw->data +
+			le32_to_cpu(cpv2_hdr->data_offset_bytes));
+		fw_size = le32_to_cpu(cpv2_hdr->data_size_bytes);
+		gfx_v11_0_rlc_backdoor_autoload_copy_ucode(adev, SOC21_FIRMWARE_ID_RS64_PFP_P0_STACK,
+						fw_data, fw_size, fw_autoload_mask);
+		gfx_v11_0_rlc_backdoor_autoload_copy_ucode(adev, SOC21_FIRMWARE_ID_RS64_PFP_P1_STACK,
+						fw_data, fw_size, fw_autoload_mask);
+		/* me ucode */
+		cpv2_hdr = (const struct gfx_firmware_header_v2_0 *)
+			adev->gfx.me_fw->data;
+		/* instruction */
+		fw_data = (const __le32 *)(adev->gfx.me_fw->data +
+			le32_to_cpu(cpv2_hdr->ucode_offset_bytes));
+		fw_size = le32_to_cpu(cpv2_hdr->ucode_size_bytes);
+		gfx_v11_0_rlc_backdoor_autoload_copy_ucode(adev, SOC21_FIRMWARE_ID_RS64_ME,
+						fw_data, fw_size, fw_autoload_mask);
+		/* data */
+		fw_data = (const __le32 *)(adev->gfx.me_fw->data +
+			le32_to_cpu(cpv2_hdr->data_offset_bytes));
+		fw_size = le32_to_cpu(cpv2_hdr->data_size_bytes);
+		gfx_v11_0_rlc_backdoor_autoload_copy_ucode(adev, SOC21_FIRMWARE_ID_RS64_ME_P0_STACK,
+						fw_data, fw_size, fw_autoload_mask);
+		gfx_v11_0_rlc_backdoor_autoload_copy_ucode(adev, SOC21_FIRMWARE_ID_RS64_ME_P1_STACK,
+						fw_data, fw_size, fw_autoload_mask);
+		/* mec ucode */
+		cpv2_hdr = (const struct gfx_firmware_header_v2_0 *)
+			adev->gfx.mec_fw->data;
+		/* instruction */
+		fw_data = (const __le32 *) (adev->gfx.mec_fw->data +
+			le32_to_cpu(cpv2_hdr->ucode_offset_bytes));
+		fw_size = le32_to_cpu(cpv2_hdr->ucode_size_bytes);
+		gfx_v11_0_rlc_backdoor_autoload_copy_ucode(adev, SOC21_FIRMWARE_ID_RS64_MEC,
+						fw_data, fw_size, fw_autoload_mask);
+		/* data */
+		fw_data = (const __le32 *) (adev->gfx.mec_fw->data +
+			le32_to_cpu(cpv2_hdr->data_offset_bytes));
+		fw_size = le32_to_cpu(cpv2_hdr->data_size_bytes);
+		gfx_v11_0_rlc_backdoor_autoload_copy_ucode(adev, SOC21_FIRMWARE_ID_RS64_MEC_P0_STACK,
+						fw_data, fw_size, fw_autoload_mask);
+		gfx_v11_0_rlc_backdoor_autoload_copy_ucode(adev, SOC21_FIRMWARE_ID_RS64_MEC_P1_STACK,
+						fw_data, fw_size, fw_autoload_mask);
+		gfx_v11_0_rlc_backdoor_autoload_copy_ucode(adev, SOC21_FIRMWARE_ID_RS64_MEC_P2_STACK,
+						fw_data, fw_size, fw_autoload_mask);
+		gfx_v11_0_rlc_backdoor_autoload_copy_ucode(adev, SOC21_FIRMWARE_ID_RS64_MEC_P3_STACK,
+						fw_data, fw_size, fw_autoload_mask);
+	} else {
+		/* pfp ucode */
+		cp_hdr = (const struct gfx_firmware_header_v1_0 *)
+			adev->gfx.pfp_fw->data;
+		fw_data = (const __le32 *)(adev->gfx.pfp_fw->data +
+				le32_to_cpu(cp_hdr->header.ucode_array_offset_bytes));
+		fw_size = le32_to_cpu(cp_hdr->header.ucode_size_bytes);
+		gfx_v11_0_rlc_backdoor_autoload_copy_ucode(adev, SOC21_FIRMWARE_ID_CP_PFP,
+						fw_data, fw_size, fw_autoload_mask);
+
+		/* me ucode */
+		cp_hdr = (const struct gfx_firmware_header_v1_0 *)
+			adev->gfx.me_fw->data;
+		fw_data = (const __le32 *)(adev->gfx.me_fw->data +
+				le32_to_cpu(cp_hdr->header.ucode_array_offset_bytes));
+		fw_size = le32_to_cpu(cp_hdr->header.ucode_size_bytes);
+		gfx_v11_0_rlc_backdoor_autoload_copy_ucode(adev, SOC21_FIRMWARE_ID_CP_ME,
+						fw_data, fw_size, fw_autoload_mask);
+
+		/* mec ucode */
+		cp_hdr = (const struct gfx_firmware_header_v1_0 *)
+			adev->gfx.mec_fw->data;
+		fw_data = (const __le32 *) (adev->gfx.mec_fw->data +
+				le32_to_cpu(cp_hdr->header.ucode_array_offset_bytes));
+		fw_size = le32_to_cpu(cp_hdr->header.ucode_size_bytes) -
+			cp_hdr->jt_size * 4;
+		gfx_v11_0_rlc_backdoor_autoload_copy_ucode(adev, SOC21_FIRMWARE_ID_CP_MEC,
+						fw_data, fw_size, fw_autoload_mask);
+	}
+
+	/* rlc ucode */
+	rlc_hdr = (const struct rlc_firmware_header_v2_0 *)
+		adev->gfx.rlc_fw->data;
+	fw_data = (const __le32 *)(adev->gfx.rlc_fw->data +
+			le32_to_cpu(rlc_hdr->header.ucode_array_offset_bytes));
+	fw_size = le32_to_cpu(rlc_hdr->header.ucode_size_bytes);
+	gfx_v11_0_rlc_backdoor_autoload_copy_ucode(adev, SOC21_FIRMWARE_ID_RLC_G_UCODE,
+					fw_data, fw_size, fw_autoload_mask);
+
+	version_major = le16_to_cpu(rlc_hdr->header.header_version_major);
+	version_minor = le16_to_cpu(rlc_hdr->header.header_version_minor);
+	if (version_major == 2) {
+		if (version_minor >= 2) {
+			rlcv22_hdr = (const struct rlc_firmware_header_v2_2 *)adev->gfx.rlc_fw->data;
+
+			fw_data = (const __le32 *)(adev->gfx.rlc_fw->data +
+					le32_to_cpu(rlcv22_hdr->rlc_iram_ucode_offset_bytes));
+			fw_size = le32_to_cpu(rlcv22_hdr->rlc_iram_ucode_size_bytes);
+			gfx_v11_0_rlc_backdoor_autoload_copy_ucode(adev, SOC21_FIRMWARE_ID_RLX6_UCODE,
+					fw_data, fw_size, fw_autoload_mask);
+
+			fw_data = (const __le32 *)(adev->gfx.rlc_fw->data +
+					le32_to_cpu(rlcv22_hdr->rlc_dram_ucode_offset_bytes));
+			fw_size = le32_to_cpu(rlcv22_hdr->rlc_dram_ucode_size_bytes);
+			gfx_v11_0_rlc_backdoor_autoload_copy_ucode(adev, SOC21_FIRMWARE_ID_RLX6_DRAM_BOOT,
+					fw_data, fw_size, fw_autoload_mask);
+		}
+	}
+}
+
+static void gfx_v11_0_rlc_backdoor_autoload_copy_sdma_ucode(struct amdgpu_device *adev,
+							uint32_t *fw_autoload_mask)
+{
+	const __le32 *fw_data;
+	uint32_t fw_size;
+	const struct sdma_firmware_header_v2_0 *sdma_hdr;
+
+	sdma_hdr = (const struct sdma_firmware_header_v2_0 *)
+		adev->sdma.instance[0].fw->data;
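+	/* copy the SDMA context-thread (TH0) ucode into its autoload slot */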
+	fw_data = (const __le32 *) (adev->sdma.instance[0].fw->data +
+			le32_to_cpu(sdma_hdr->header.ucode_array_offset_bytes));
+	fw_size = le32_to_cpu(sdma_hdr->ctx_ucode_size_bytes);
+
+	gfx_v11_0_rlc_backdoor_autoload_copy_ucode(adev,
+			SOC21_FIRMWARE_ID_SDMA_UCODE_TH0, fw_data, fw_size, fw_autoload_mask);
+
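+	/* copy the SDMA control-thread (TH1) ucode into its autoload slot */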
+	fw_data = (const __le32 *) (adev->sdma.instance[0].fw->data +
+			le32_to_cpu(sdma_hdr->ctl_ucode_offset));
+	fw_size = le32_to_cpu(sdma_hdr->ctl_ucode_size_bytes);
+
+	gfx_v11_0_rlc_backdoor_autoload_copy_ucode(adev,
+			SOC21_FIRMWARE_ID_SDMA_UCODE_TH1, fw_data, fw_size, fw_autoload_mask);
+}
+
+static void gfx_v11_0_rlc_backdoor_autoload_copy_mes_ucode(struct amdgpu_device *adev,
+							uint32_t *fw_autoload_mask)
+{
+	const __le32 *fw_data;
+	unsigned fw_size;
+	const struct mes_firmware_header_v1_0 *mes_hdr;
+	int pipe, ucode_id, data_id;
+
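+	/* MES has two pipes; copy each pipe's instruction and data image */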
+	for (pipe = 0; pipe < 2; pipe++) {
+		if (pipe == 0) {
+			ucode_id = SOC21_FIRMWARE_ID_RS64_MES_P0;
+			data_id  = SOC21_FIRMWARE_ID_RS64_MES_P0_STACK;
+		} else {
+			ucode_id = SOC21_FIRMWARE_ID_RS64_MES_P1;
+			data_id  = SOC21_FIRMWARE_ID_RS64_MES_P1_STACK;
+		}
+
+		mes_hdr = (const struct mes_firmware_header_v1_0 *)
+			adev->mes.fw[pipe]->data;
+
+		fw_data = (const __le32 *)(adev->mes.fw[pipe]->data +
+				le32_to_cpu(mes_hdr->mes_ucode_offset_bytes));
+		fw_size = le32_to_cpu(mes_hdr->mes_ucode_size_bytes);
+
+		gfx_v11_0_rlc_backdoor_autoload_copy_ucode(adev,
+				ucode_id, fw_data, fw_size, fw_autoload_mask);
+
+		fw_data = (const __le32 *)(adev->mes.fw[pipe]->data +
+				le32_to_cpu(mes_hdr->mes_ucode_data_offset_bytes));
+		fw_size = le32_to_cpu(mes_hdr->mes_ucode_data_size_bytes);
+
+		gfx_v11_0_rlc_backdoor_autoload_copy_ucode(adev,
+				data_id, fw_data, fw_size, fw_autoload_mask);
+	}
+}
+
+static int gfx_v11_0_rlc_backdoor_autoload_enable(struct amdgpu_device *adev)
+{
+	uint32_t rlc_g_offset, rlc_g_size;
+	uint64_t gpu_addr;
+	uint32_t autoload_fw_id[2];
+
+	memset(autoload_fw_id, 0, sizeof(autoload_fw_id));
+
+	/* RLC autoload sequence 2: copy ucode */
+	gfx_v11_0_rlc_backdoor_autoload_copy_sdma_ucode(adev, autoload_fw_id);
+	gfx_v11_0_rlc_backdoor_autoload_copy_gfx_ucode(adev, autoload_fw_id);
+	gfx_v11_0_rlc_backdoor_autoload_copy_mes_ucode(adev, autoload_fw_id);
+	gfx_v11_0_rlc_backdoor_autoload_copy_toc_ucode(adev, autoload_fw_id);
+
+	rlc_g_offset = rlc_autoload_info[SOC21_FIRMWARE_ID_RLC_G_UCODE].offset;
+	rlc_g_size = rlc_autoload_info[SOC21_FIRMWARE_ID_RLC_G_UCODE].size;
+	gpu_addr = adev->gfx.rlc.rlc_autoload_gpu_addr + rlc_g_offset;
+
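+	/* point the IMU RLC bootloader at the RLC_G image inside the autoload buffer */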
+	WREG32_SOC15(GC, 0, regGFX_IMU_RLC_BOOTLOADER_ADDR_HI, upper_32_bits(gpu_addr));
+	WREG32_SOC15(GC, 0, regGFX_IMU_RLC_BOOTLOADER_ADDR_LO, lower_32_bits(gpu_addr));
+
+	WREG32_SOC15(GC, 0, regGFX_IMU_RLC_BOOTLOADER_SIZE, rlc_g_size);
+
+	/* RLC autoload sequence 3: load IMU fw */
+	if (adev->gfx.imu.funcs->load_microcode)
+		adev->gfx.imu.funcs->load_microcode(adev);
+	/* RLC autoload sequence 4: init IMU fw */
+	if (adev->gfx.imu.funcs->setup_imu)
+		adev->gfx.imu.funcs->setup_imu(adev);
+	if (adev->gfx.imu.funcs->start_imu)
+		adev->gfx.imu.funcs->start_imu(adev);
+
+	/* RLC autoload sequence 5: disable gpa mode */
+	gfx_v11_0_disable_gpa_mode(adev);
+
+	return 0;
+}
+
+static int gfx_v11_0_sw_init(void *handle)
+{
+	int i, j, k, r, ring_id = 0;
+	struct amdgpu_kiq *kiq;
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	adev->gfxhub.funcs->init(adev);
+
+	switch (adev->ip_versions[GC_HWIP][0]) {
+	case IP_VERSION(11, 0, 0):
+		adev->gfx.me.num_me = 1;
+		adev->gfx.me.num_pipe_per_me = 1;
+		adev->gfx.me.num_queue_per_pipe = 1;
+		adev->gfx.mec.num_mec = 2;
+		adev->gfx.mec.num_pipe_per_mec = 4;
+		adev->gfx.mec.num_queue_per_pipe = 4;
+		break;
+	default:
+		adev->gfx.me.num_me = 1;
+		adev->gfx.me.num_pipe_per_me = 1;
+		adev->gfx.me.num_queue_per_pipe = 1;
+		adev->gfx.mec.num_mec = 1;
+		adev->gfx.mec.num_pipe_per_mec = 4;
+		adev->gfx.mec.num_queue_per_pipe = 8;
+		break;
+	}
+
+	/* EOP Event */
+	r = amdgpu_irq_add_id(adev, SOC21_IH_CLIENTID_GRBM_CP,
+			      GFX_11_0_0__SRCID__CP_EOP_INTERRUPT,
+			      &adev->gfx.eop_irq);
+	if (r)
+		return r;
+
+	/* Privileged reg */
+	r = amdgpu_irq_add_id(adev, SOC21_IH_CLIENTID_GRBM_CP,
+			      GFX_11_0_0__SRCID__CP_PRIV_REG_FAULT,
+			      &adev->gfx.priv_reg_irq);
+	if (r)
+		return r;
+
+	/* Privileged inst */
+	r = amdgpu_irq_add_id(adev, SOC21_IH_CLIENTID_GRBM_CP,
+			      GFX_11_0_0__SRCID__CP_PRIV_INSTR_FAULT,
+			      &adev->gfx.priv_inst_irq);
+	if (r)
+		return r;
+
+	adev->gfx.gfx_current_status = AMDGPU_GFX_NORMAL_MODE;
+
+	gfx_v11_0_scratch_init(adev);
+
+	if (adev->gfx.imu.funcs) {
+		if (adev->gfx.imu.funcs->init_microcode) {
+			r = adev->gfx.imu.funcs->init_microcode(adev);
+			if (r)
+				DRM_ERROR("Failed to load imu firmware!\n");
+		}
+	}
+
+	r = gfx_v11_0_me_init(adev);
+	if (r)
+		return r;
+
+	r = gfx_v11_0_rlc_init(adev);
+	if (r) {
+		DRM_ERROR("Failed to init rlc BOs!\n");
+		return r;
+	}
+
+	r = gfx_v11_0_mec_init(adev);
+	if (r) {
+		DRM_ERROR("Failed to init MEC BOs!\n");
+		return r;
+	}
+
+	/* set up the gfx ring */
+	for (i = 0; i < adev->gfx.me.num_me; i++) {
+		for (j = 0; j < adev->gfx.me.num_queue_per_pipe; j++) {
+			for (k = 0; k < adev->gfx.me.num_pipe_per_me; k++) {
+				if (!amdgpu_gfx_is_me_queue_enabled(adev, i, k, j))
+					continue;
+
+				r = gfx_v11_0_gfx_ring_init(adev, ring_id,
+							    i, k, j);
+				if (r)
+					return r;
+				ring_id++;
+			}
+		}
+	}
+
+	ring_id = 0;
+	/* set up the compute queues - allocate horizontally across pipes */
+	for (i = 0; i < adev->gfx.mec.num_mec; ++i) {
+		for (j = 0; j < adev->gfx.mec.num_queue_per_pipe; j++) {
+			for (k = 0; k < adev->gfx.mec.num_pipe_per_mec; k++) {
+				if (!amdgpu_gfx_is_mec_queue_enabled(adev, i, k,
+								     j))
+					continue;
+
+				r = gfx_v11_0_compute_ring_init(adev, ring_id,
+								i, k, j);
+				if (r)
+					return r;
+
+				ring_id++;
+			}
+		}
+	}
+
+	if (!adev->enable_mes_kiq) {
+		r = amdgpu_gfx_kiq_init(adev, GFX11_MEC_HPD_SIZE);
+		if (r) {
+			DRM_ERROR("Failed to init KIQ BOs!\n");
+			return r;
+		}
+
+		kiq = &adev->gfx.kiq;
+		r = amdgpu_gfx_kiq_init_ring(adev, &kiq->ring, &kiq->irq);
+		if (r)
+			return r;
+	}
+
+	r = amdgpu_gfx_mqd_sw_init(adev, sizeof(struct v11_compute_mqd));
+	if (r)
+		return r;
+
+	/* allocate visible FB for rlc auto-loading fw */
+	if (adev->firmware.load_type == AMDGPU_FW_LOAD_RLC_BACKDOOR_AUTO) {
+		r = gfx_v11_0_init_toc_microcode(adev);
+		if (r)
+			dev_err(adev->dev, "Failed to load toc firmware!\n");
+		r = gfx_v11_0_rlc_autoload_buffer_init(adev);
+		if (r)
+			return r;
+	}
+
+	r = gfx_v11_0_gpu_early_init(adev);
+	if (r)
+		return r;
+
+	return 0;
+}
+
+static void gfx_v11_0_pfp_fini(struct amdgpu_device *adev)
+{
+	amdgpu_bo_free_kernel(&adev->gfx.pfp.pfp_fw_obj,
+			      &adev->gfx.pfp.pfp_fw_gpu_addr,
+			      (void **)&adev->gfx.pfp.pfp_fw_ptr);
+
+	amdgpu_bo_free_kernel(&adev->gfx.pfp.pfp_fw_data_obj,
+			      &adev->gfx.pfp.pfp_fw_data_gpu_addr,
+			      (void **)&adev->gfx.pfp.pfp_fw_data_ptr);
+}
+
+static void gfx_v11_0_me_fini(struct amdgpu_device *adev)
+{
+	amdgpu_bo_free_kernel(&adev->gfx.me.me_fw_obj,
+			      &adev->gfx.me.me_fw_gpu_addr,
+			      (void **)&adev->gfx.me.me_fw_ptr);
+
+	amdgpu_bo_free_kernel(&adev->gfx.me.me_fw_data_obj,
+			       &adev->gfx.me.me_fw_data_gpu_addr,
+			       (void **)&adev->gfx.me.me_fw_data_ptr);
+}
+
+static void gfx_v11_0_rlc_autoload_buffer_fini(struct amdgpu_device *adev)
+{
+	amdgpu_bo_free_kernel(&adev->gfx.rlc.rlc_autoload_bo,
+			&adev->gfx.rlc.rlc_autoload_gpu_addr,
+			(void **)&adev->gfx.rlc.rlc_autoload_ptr);
+}
+
+static int gfx_v11_0_sw_fini(void *handle)
+{
+	int i;
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	for (i = 0; i < adev->gfx.num_gfx_rings; i++)
+		amdgpu_ring_fini(&adev->gfx.gfx_ring[i]);
+	for (i = 0; i < adev->gfx.num_compute_rings; i++)
+		amdgpu_ring_fini(&adev->gfx.compute_ring[i]);
+
+	amdgpu_gfx_mqd_sw_fini(adev);
+
+	if (!adev->enable_mes_kiq) {
+		amdgpu_gfx_kiq_free_ring(&adev->gfx.kiq.ring);
+		amdgpu_gfx_kiq_fini(adev);
+	}
+
+	gfx_v11_0_pfp_fini(adev);
+	gfx_v11_0_me_fini(adev);
+	gfx_v11_0_rlc_fini(adev);
+	gfx_v11_0_mec_fini(adev);
+
+	if (adev->firmware.load_type == AMDGPU_FW_LOAD_RLC_BACKDOOR_AUTO)
+		gfx_v11_0_rlc_autoload_buffer_fini(adev);
+
+	gfx_v11_0_free_microcode(adev);
+
+	return 0;
+}
+
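+/* Select which SE/SA/instance subsequent GRBM accesses target (0xffffffff = broadcast) */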
+static void gfx_v11_0_select_se_sh(struct amdgpu_device *adev, u32 se_num,
+				   u32 sh_num, u32 instance)
+{
+	u32 data;
+
+	if (instance == 0xffffffff)
+		data = REG_SET_FIELD(0, GRBM_GFX_INDEX,
+				     INSTANCE_BROADCAST_WRITES, 1);
+	else
+		data = REG_SET_FIELD(0, GRBM_GFX_INDEX, INSTANCE_INDEX,
+				     instance);
+
+	if (se_num == 0xffffffff)
+		data = REG_SET_FIELD(data, GRBM_GFX_INDEX, SE_BROADCAST_WRITES,
+				     1);
+	else
+		data = REG_SET_FIELD(data, GRBM_GFX_INDEX, SE_INDEX, se_num);
+
+	if (sh_num == 0xffffffff)
+		data = REG_SET_FIELD(data, GRBM_GFX_INDEX, SA_BROADCAST_WRITES,
+				     1);
+	else
+		data = REG_SET_FIELD(data, GRBM_GFX_INDEX, SA_INDEX, sh_num);
+
+	WREG32_SOC15(GC, 0, regGRBM_GFX_INDEX, data);
+}
+
+static u32 gfx_v11_0_get_rb_active_bitmap(struct amdgpu_device *adev)
+{
+	u32 data, mask;
+
+	data = RREG32_SOC15(GC, 0, regCC_RB_BACKEND_DISABLE);
+	data |= RREG32_SOC15(GC, 0, regGC_USER_RB_BACKEND_DISABLE);
+
+	data &= CC_RB_BACKEND_DISABLE__BACKEND_DISABLE_MASK;
+	data >>= GC_USER_RB_BACKEND_DISABLE__BACKEND_DISABLE__SHIFT;
+
+	mask = amdgpu_gfx_create_bitmask(adev->gfx.config.max_backends_per_se /
+					 adev->gfx.config.max_sh_per_se);
+
+	return (~data) & mask;
+}
+
+static void gfx_v11_0_setup_rb(struct amdgpu_device *adev)
+{
+	int i, j;
+	u32 data;
+	u32 active_rbs = 0;
+	u32 rb_bitmap_width_per_sh = adev->gfx.config.max_backends_per_se /
+					adev->gfx.config.max_sh_per_se;
+
+	mutex_lock(&adev->grbm_idx_mutex);
+	for (i = 0; i < adev->gfx.config.max_shader_engines; i++) {
+		for (j = 0; j < adev->gfx.config.max_sh_per_se; j++) {
+			gfx_v11_0_select_se_sh(adev, i, j, 0xffffffff);
+			data = gfx_v11_0_get_rb_active_bitmap(adev);
+			active_rbs |= data << ((i * adev->gfx.config.max_sh_per_se + j) *
+					       rb_bitmap_width_per_sh);
+		}
+	}
+	gfx_v11_0_select_se_sh(adev, 0xffffffff, 0xffffffff, 0xffffffff);
+	mutex_unlock(&adev->grbm_idx_mutex);
+
+	adev->gfx.config.backend_enable_mask = active_rbs;
+	adev->gfx.config.num_rbs = hweight32(active_rbs);
+}
+
+#define DEFAULT_SH_MEM_BASES	(0x6000)
+#define LDS_APP_BASE           0x1
+#define SCRATCH_APP_BASE       0x2
+
+static void gfx_v11_0_init_compute_vmid(struct amdgpu_device *adev)
+{
+	int i;
+	uint32_t sh_mem_bases;
+	uint32_t data;
+
+	/*
+	 * Configure apertures:
+	 * LDS:         0x60000000'00000000 - 0x60000001'00000000 (4GB)
+	 * Scratch:     0x60000001'00000000 - 0x60000002'00000000 (4GB)
+	 * GPUVM:       0x60010000'00000000 - 0x60020000'00000000 (1TB)
+	 */
+	sh_mem_bases = (LDS_APP_BASE << SH_MEM_BASES__SHARED_BASE__SHIFT) |
+			SCRATCH_APP_BASE;
+
+	mutex_lock(&adev->srbm_mutex);
+	for (i = adev->vm_manager.first_kfd_vmid; i < AMDGPU_NUM_VMID; i++) {
+		soc21_grbm_select(adev, 0, 0, 0, i);
+		/* CP and shaders */
+		WREG32_SOC15(GC, 0, regSH_MEM_CONFIG, DEFAULT_SH_MEM_CONFIG);
+		WREG32_SOC15(GC, 0, regSH_MEM_BASES, sh_mem_bases);
+
+		/* Enable trap for each kfd vmid. */
+		data = RREG32(SOC15_REG_OFFSET(GC, 0, regSPI_GDBG_PER_VMID_CNTL));
+		data = REG_SET_FIELD(data, SPI_GDBG_PER_VMID_CNTL, TRAP_EN, 1);
+		WREG32_SOC15(GC, 0, regSPI_GDBG_PER_VMID_CNTL, data);
+	}
+	soc21_grbm_select(adev, 0, 0, 0, 0);
+	mutex_unlock(&adev->srbm_mutex);
+
+	/*
+	 * Initialize all compute VMIDs to have no GDS, GWS, or OA
+	 * access. These should be enabled by FW for target VMIDs.
+	 */
+	for (i = adev->vm_manager.first_kfd_vmid; i < AMDGPU_NUM_VMID; i++) {
+		WREG32_SOC15_OFFSET(GC, 0, regGDS_VMID0_BASE, 2 * i, 0);
+		WREG32_SOC15_OFFSET(GC, 0, regGDS_VMID0_SIZE, 2 * i, 0);
+		WREG32_SOC15_OFFSET(GC, 0, regGDS_GWS_VMID0, i, 0);
+		WREG32_SOC15_OFFSET(GC, 0, regGDS_OA_VMID0, i, 0);
+	}
+}
+
+static void gfx_v11_0_init_gds_vmid(struct amdgpu_device *adev)
+{
+	int vmid;
+
+	/*
+	 * Initialize all compute and user-gfx VMIDs to have no GDS, GWS, or OA
+	 * access. Compute VMIDs should be enabled by FW for target VMIDs,
+	 * the driver can enable them for graphics. VMID0 should maintain
+	 * access so that HWS firmware can save/restore entries.
+	 */
+	for (vmid = 1; vmid < 16; vmid++) {
+		WREG32_SOC15_OFFSET(GC, 0, regGDS_VMID0_BASE, 2 * vmid, 0);
+		WREG32_SOC15_OFFSET(GC, 0, regGDS_VMID0_SIZE, 2 * vmid, 0);
+		WREG32_SOC15_OFFSET(GC, 0, regGDS_GWS_VMID0, vmid, 0);
+		WREG32_SOC15_OFFSET(GC, 0, regGDS_OA_VMID0, vmid, 0);
+	}
+}
+
+static void gfx_v11_0_tcp_harvest(struct amdgpu_device *adev)
+{
+	/* TODO: harvest feature to be added later. */
+}
+
+static void gfx_v11_0_get_tcc_info(struct amdgpu_device *adev)
+{
+	/* TCCs are global (not instanced). */
+	uint32_t tcc_disable = RREG32_SOC15(GC, 0, regCGTS_TCC_DISABLE) |
+			       RREG32_SOC15(GC, 0, regCGTS_USER_TCC_DISABLE);
+
+	adev->gfx.config.tcc_disabled_mask =
+		REG_GET_FIELD(tcc_disable, CGTS_TCC_DISABLE, TCC_DISABLE) |
+		(REG_GET_FIELD(tcc_disable, CGTS_TCC_DISABLE, HI_TCC_DISABLE) << 16);
+}
+
+static void gfx_v11_0_constants_init(struct amdgpu_device *adev)
+{
+	u32 tmp;
+	int i;
+
+	WREG32_FIELD15_PREREG(GC, 0, GRBM_CNTL, READ_TIMEOUT, 0xff);
+
+	gfx_v11_0_setup_rb(adev);
+	gfx_v11_0_get_cu_info(adev, &adev->gfx.cu_info);
+	gfx_v11_0_get_tcc_info(adev);
+	adev->gfx.config.pa_sc_tile_steering_override = 0;
+
+	/* XXX SH_MEM regs */
+	/* where to put LDS, scratch, GPUVM in FSA64 space */
+	mutex_lock(&adev->srbm_mutex);
+	for (i = 0; i < adev->vm_manager.id_mgr[AMDGPU_GFXHUB_0].num_ids; i++) {
+		soc21_grbm_select(adev, 0, 0, 0, i);
+		/* CP and shaders */
+		WREG32_SOC15(GC, 0, regSH_MEM_CONFIG, DEFAULT_SH_MEM_CONFIG);
+		if (i != 0) {
+			tmp = REG_SET_FIELD(0, SH_MEM_BASES, PRIVATE_BASE,
+				(adev->gmc.private_aperture_start >> 48));
+			tmp = REG_SET_FIELD(tmp, SH_MEM_BASES, SHARED_BASE,
+				(adev->gmc.shared_aperture_start >> 48));
+			WREG32_SOC15(GC, 0, regSH_MEM_BASES, tmp);
+		}
+	}
+	soc21_grbm_select(adev, 0, 0, 0, 0);
+
+	mutex_unlock(&adev->srbm_mutex);
+
+	gfx_v11_0_init_compute_vmid(adev);
+	gfx_v11_0_init_gds_vmid(adev);
+}
+
+static void gfx_v11_0_enable_gui_idle_interrupt(struct amdgpu_device *adev,
+					       bool enable)
+{
+	u32 tmp;
+
+	if (amdgpu_sriov_vf(adev))
+		return;
+
+	tmp = RREG32_SOC15(GC, 0, regCP_INT_CNTL_RING0);
+
+	tmp = REG_SET_FIELD(tmp, CP_INT_CNTL_RING0, CNTX_BUSY_INT_ENABLE,
+			    enable ? 1 : 0);
+	tmp = REG_SET_FIELD(tmp, CP_INT_CNTL_RING0, CNTX_EMPTY_INT_ENABLE,
+			    enable ? 1 : 0);
+	tmp = REG_SET_FIELD(tmp, CP_INT_CNTL_RING0, CMP_BUSY_INT_ENABLE,
+			    enable ? 1 : 0);
+	tmp = REG_SET_FIELD(tmp, CP_INT_CNTL_RING0, GFX_IDLE_INT_ENABLE,
+			    enable ? 1 : 0);
+
+	WREG32_SOC15(GC, 0, regCP_INT_CNTL_RING0, tmp);
+}
+
+static int gfx_v11_0_init_csb(struct amdgpu_device *adev)
+{
+	adev->gfx.rlc.funcs->get_csb_buffer(adev, adev->gfx.rlc.cs_ptr);
+
+	WREG32_SOC15(GC, 0, regRLC_CSIB_ADDR_HI,
+			adev->gfx.rlc.clear_state_gpu_addr >> 32);
+	WREG32_SOC15(GC, 0, regRLC_CSIB_ADDR_LO,
+			adev->gfx.rlc.clear_state_gpu_addr & 0xfffffffc);
+	WREG32_SOC15(GC, 0, regRLC_CSIB_LENGTH, adev->gfx.rlc.clear_state_size);
+
+	return 0;
+}
+
+void gfx_v11_0_rlc_stop(struct amdgpu_device *adev)
+{
+	u32 tmp = RREG32_SOC15(GC, 0, regRLC_CNTL);
+
+	tmp = REG_SET_FIELD(tmp, RLC_CNTL, RLC_ENABLE_F32, 0);
+	WREG32_SOC15(GC, 0, regRLC_CNTL, tmp);
+}
+
+static void gfx_v11_0_rlc_reset(struct amdgpu_device *adev)
+{
+	WREG32_FIELD15_PREREG(GC, 0, GRBM_SOFT_RESET, SOFT_RESET_RLC, 1);
+	udelay(50);
+	WREG32_FIELD15_PREREG(GC, 0, GRBM_SOFT_RESET, SOFT_RESET_RLC, 0);
+	udelay(50);
+}
+
+static void gfx_v11_0_rlc_smu_handshake_cntl(struct amdgpu_device *adev,
+					     bool enable)
+{
+	uint32_t rlc_pg_cntl;
+
+	rlc_pg_cntl = RREG32_SOC15(GC, 0, regRLC_PG_CNTL);
+
+	if (!enable) {
+		/* RLC_PG_CNTL[23] = 0 (default)
+		 * RLC will wait for handshake acks with SMU
+		 * GFXOFF will be enabled
+		 * RLC_PG_CNTL[23] = 1
+		 * RLC will not issue any message to SMU
+		 * hence no handshake between SMU & RLC
+		 * GFXOFF will be disabled
+		 */
+		rlc_pg_cntl |= RLC_PG_CNTL__SMU_HANDSHAKE_DISABLE_MASK;
+	} else {
+		rlc_pg_cntl &= ~RLC_PG_CNTL__SMU_HANDSHAKE_DISABLE_MASK;
+	}
+	WREG32_SOC15(GC, 0, regRLC_PG_CNTL, rlc_pg_cntl);
+}
+
+static void gfx_v11_0_rlc_start(struct amdgpu_device *adev)
+{
+	/* TODO: enable the rlc & smu handshake once the smu
+	 * and gfxoff features work as expected */
+	if (!(amdgpu_pp_feature_mask & PP_GFXOFF_MASK))
+		gfx_v11_0_rlc_smu_handshake_cntl(adev, false);
+
+	WREG32_FIELD15_PREREG(GC, 0, RLC_CNTL, RLC_ENABLE_F32, 1);
+	udelay(50);
+}
+
+static void gfx_v11_0_rlc_enable_srm(struct amdgpu_device *adev)
+{
+	uint32_t tmp;
+
+	/* enable Save Restore Machine */
+	tmp = RREG32(SOC15_REG_OFFSET(GC, 0, regRLC_SRM_CNTL));
+	tmp |= RLC_SRM_CNTL__AUTO_INCR_ADDR_MASK;
+	tmp |= RLC_SRM_CNTL__SRM_ENABLE_MASK;
+	WREG32(SOC15_REG_OFFSET(GC, 0, regRLC_SRM_CNTL), tmp);
+}
+
+static void gfx_v11_0_load_rlcg_microcode(struct amdgpu_device *adev)
+{
+	const struct rlc_firmware_header_v2_0 *hdr;
+	const __le32 *fw_data;
+	unsigned i, fw_size;
+
+	hdr = (const struct rlc_firmware_header_v2_0 *)adev->gfx.rlc_fw->data;
+	fw_data = (const __le32 *)(adev->gfx.rlc_fw->data +
+			   le32_to_cpu(hdr->header.ucode_array_offset_bytes));
+	fw_size = le32_to_cpu(hdr->header.ucode_size_bytes) / 4;
+
+	WREG32_SOC15(GC, 0, regRLC_GPM_UCODE_ADDR,
+		     RLCG_UCODE_LOADING_START_ADDRESS);
+
+	for (i = 0; i < fw_size; i++)
+		WREG32_SOC15(GC, 0, regRLC_GPM_UCODE_DATA,
+			     le32_to_cpup(fw_data++));
+
+	WREG32_SOC15(GC, 0, regRLC_GPM_UCODE_ADDR, adev->gfx.rlc_fw_version);
+}
+
+static void gfx_v11_0_load_rlc_iram_dram_microcode(struct amdgpu_device *adev)
+{
+	const struct rlc_firmware_header_v2_2 *hdr;
+	const __le32 *fw_data;
+	unsigned i, fw_size;
+	u32 tmp;
+
+	hdr = (const struct rlc_firmware_header_v2_2 *)adev->gfx.rlc_fw->data;
+
+	fw_data = (const __le32 *)(adev->gfx.rlc_fw->data +
+			le32_to_cpu(hdr->rlc_iram_ucode_offset_bytes));
+	fw_size = le32_to_cpu(hdr->rlc_iram_ucode_size_bytes) / 4;
+
+	WREG32_SOC15(GC, 0, regRLC_LX6_IRAM_ADDR, 0);
+
+	for (i = 0; i < fw_size; i++) {
+		if ((amdgpu_emu_mode == 1) && (i % 100 == 99))
+			msleep(1);
+		WREG32_SOC15(GC, 0, regRLC_LX6_IRAM_DATA,
+				le32_to_cpup(fw_data++));
+	}
+
+	WREG32_SOC15(GC, 0, regRLC_LX6_IRAM_ADDR, adev->gfx.rlc_fw_version);
+
+	fw_data = (const __le32 *)(adev->gfx.rlc_fw->data +
+			le32_to_cpu(hdr->rlc_dram_ucode_offset_bytes));
+	fw_size = le32_to_cpu(hdr->rlc_dram_ucode_size_bytes) / 4;
+
+	WREG32_SOC15(GC, 0, regRLC_LX6_DRAM_ADDR, 0);
+	for (i = 0; i < fw_size; i++) {
+		if ((amdgpu_emu_mode == 1) && (i % 100 == 99))
+			msleep(1);
+		WREG32_SOC15(GC, 0, regRLC_LX6_DRAM_DATA,
+				le32_to_cpup(fw_data++));
+	}
+
+	WREG32_SOC15(GC, 0, regRLC_LX6_IRAM_ADDR, adev->gfx.rlc_fw_version);
+
+	tmp = RREG32_SOC15(GC, 0, regRLC_LX6_CNTL);
+	tmp = REG_SET_FIELD(tmp, RLC_LX6_CNTL, PDEBUG_ENABLE, 1);
+	tmp = REG_SET_FIELD(tmp, RLC_LX6_CNTL, BRESET, 0);
+	WREG32_SOC15(GC, 0, regRLC_LX6_CNTL, tmp);
+}
+
+static void gfx_v11_0_load_rlcp_rlcv_microcode(struct amdgpu_device *adev)
+{
+	const struct rlc_firmware_header_v2_3 *hdr;
+	const __le32 *fw_data;
+	unsigned i, fw_size;
+	u32 tmp;
+
+	hdr = (const struct rlc_firmware_header_v2_3 *)adev->gfx.rlc_fw->data;
+
+	fw_data = (const __le32 *)(adev->gfx.rlc_fw->data +
+			le32_to_cpu(hdr->rlcp_ucode_offset_bytes));
+	fw_size = le32_to_cpu(hdr->rlcp_ucode_size_bytes) / 4;
+
+	WREG32_SOC15(GC, 0, regRLC_PACE_UCODE_ADDR, 0);
+
+	for (i = 0; i < fw_size; i++) {
+		if ((amdgpu_emu_mode == 1) && (i % 100 == 99))
+			msleep(1);
+		WREG32_SOC15(GC, 0, regRLC_PACE_UCODE_DATA,
+				le32_to_cpup(fw_data++));
+	}
+
+	WREG32_SOC15(GC, 0, regRLC_PACE_UCODE_ADDR, adev->gfx.rlc_fw_version);
+
+	tmp = RREG32_SOC15(GC, 0, regRLC_GPM_THREAD_ENABLE);
+	tmp = REG_SET_FIELD(tmp, RLC_GPM_THREAD_ENABLE, THREAD1_ENABLE, 1);
+	WREG32_SOC15(GC, 0, regRLC_GPM_THREAD_ENABLE, tmp);
+
+	fw_data = (const __le32 *)(adev->gfx.rlc_fw->data +
+			le32_to_cpu(hdr->rlcv_ucode_offset_bytes));
+	fw_size = le32_to_cpu(hdr->rlcv_ucode_size_bytes) / 4;
+
+	WREG32_SOC15(GC, 0, regRLC_GPU_IOV_UCODE_ADDR, 0);
+
+	for (i = 0; i < fw_size; i++) {
+		if ((amdgpu_emu_mode == 1) && (i % 100 == 99))
+			msleep(1);
+		WREG32_SOC15(GC, 0, regRLC_GPU_IOV_UCODE_DATA,
+				le32_to_cpup(fw_data++));
+	}
+
+	WREG32_SOC15(GC, 0, regRLC_GPU_IOV_UCODE_ADDR, adev->gfx.rlc_fw_version);
+
+	tmp = RREG32_SOC15(GC, 0, regRLC_GPU_IOV_F32_CNTL);
+	tmp = REG_SET_FIELD(tmp, RLC_GPU_IOV_F32_CNTL, ENABLE, 1);
+	WREG32_SOC15(GC, 0, regRLC_GPU_IOV_F32_CNTL, tmp);
+}
+
+static int gfx_v11_0_rlc_load_microcode(struct amdgpu_device *adev)
+{
+	const struct rlc_firmware_header_v2_0 *hdr;
+	uint16_t version_major;
+	uint16_t version_minor;
+
+	if (!adev->gfx.rlc_fw)
+		return -EINVAL;
+
+	hdr = (const struct rlc_firmware_header_v2_0 *)adev->gfx.rlc_fw->data;
+	amdgpu_ucode_print_rlc_hdr(&hdr->header);
+
+	version_major = le16_to_cpu(hdr->header.header_version_major);
+	version_minor = le16_to_cpu(hdr->header.header_version_minor);
+
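+	/* RLCG ucode is always loaded; the LX6 IRAM/DRAM (v2.2) and
+	 * RLCP/RLCV (v2.3) images are only loaded when dpm is enabled */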
+	if (version_major == 2) {
+		gfx_v11_0_load_rlcg_microcode(adev);
+		if (amdgpu_dpm == 1) {
+			if (version_minor >= 2)
+				gfx_v11_0_load_rlc_iram_dram_microcode(adev);
+			if (version_minor == 3)
+				gfx_v11_0_load_rlcp_rlcv_microcode(adev);
+		}
+
+		return 0;
+	}
+
+	return -EINVAL;
+}
+
+static int gfx_v11_0_rlc_resume(struct amdgpu_device *adev)
+{
+	int r;
+
+	if (adev->firmware.load_type == AMDGPU_FW_LOAD_PSP) {
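+		/* front door load: PSP has already loaded the RLC ucode,
+		 * only the CSB and SRM need to be set up here */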
+		gfx_v11_0_init_csb(adev);
+
+		if (!amdgpu_sriov_vf(adev)) /* enable RLC SRM */
+			gfx_v11_0_rlc_enable_srm(adev);
+	} else {
+		if (amdgpu_sriov_vf(adev)) {
+			gfx_v11_0_init_csb(adev);
+			return 0;
+		}
+
+		adev->gfx.rlc.funcs->stop(adev);
+
+		/* disable CG */
+		WREG32_SOC15(GC, 0, regRLC_CGCG_CGLS_CTRL, 0);
+
+		/* disable PG */
+		WREG32_SOC15(GC, 0, regRLC_PG_CNTL, 0);
+
+		if (adev->firmware.load_type == AMDGPU_FW_LOAD_DIRECT) {
+			/* legacy rlc firmware loading */
+			r = gfx_v11_0_rlc_load_microcode(adev);
+			if (r)
+				return r;
+		}
+
+		gfx_v11_0_init_csb(adev);
+
+		adev->gfx.rlc.funcs->start(adev);
+	}
+	return 0;
+}
+
+static int gfx_v11_0_config_me_cache(struct amdgpu_device *adev, uint64_t addr)
+{
+	uint32_t usec_timeout = 50000;  /* wait for 50ms */
+	uint32_t tmp;
+	int i;
+
+	/* Trigger an invalidation of the L1 instruction caches */
+	tmp = RREG32_SOC15(GC, 0, regCP_ME_IC_OP_CNTL);
+	tmp = REG_SET_FIELD(tmp, CP_ME_IC_OP_CNTL, INVALIDATE_CACHE, 1);
+	WREG32_SOC15(GC, 0, regCP_ME_IC_OP_CNTL, tmp);
+
+	/* Wait for invalidation complete */
+	for (i = 0; i < usec_timeout; i++) {
+		tmp = RREG32_SOC15(GC, 0, regCP_ME_IC_OP_CNTL);
+		if (1 == REG_GET_FIELD(tmp, CP_ME_IC_OP_CNTL,
+					INVALIDATE_CACHE_COMPLETE))
+			break;
+		udelay(1);
+	}
+
+	if (i >= usec_timeout) {
+		dev_err(adev->dev, "failed to invalidate instruction cache\n");
+		return -EINVAL;
+	}
+
+	if (amdgpu_emu_mode == 1)
+		adev->hdp.funcs->flush_hdp(adev, NULL);
+
+	tmp = RREG32_SOC15(GC, 0, regCP_ME_IC_BASE_CNTL);
+	tmp = REG_SET_FIELD(tmp, CP_ME_IC_BASE_CNTL, VMID, 0);
+	tmp = REG_SET_FIELD(tmp, CP_ME_IC_BASE_CNTL, CACHE_POLICY, 0);
+	tmp = REG_SET_FIELD(tmp, CP_ME_IC_BASE_CNTL, EXE_DISABLE, 0);
+	tmp = REG_SET_FIELD(tmp, CP_ME_IC_BASE_CNTL, ADDRESS_CLAMP, 1);
+	WREG32_SOC15(GC, 0, regCP_ME_IC_BASE_CNTL, tmp);
+
+	/* Program me ucode address into instruction cache address register */
+	WREG32_SOC15(GC, 0, regCP_ME_IC_BASE_LO,
+			lower_32_bits(addr) & 0xFFFFF000);
+	WREG32_SOC15(GC, 0, regCP_ME_IC_BASE_HI,
+			upper_32_bits(addr));
+
+	return 0;
+}
+
+static int gfx_v11_0_config_pfp_cache(struct amdgpu_device *adev, uint64_t addr)
+{
+	uint32_t usec_timeout = 50000;  /* wait for 50ms */
+	uint32_t tmp;
+	int i;
+
+	/* Trigger an invalidation of the L1 instruction caches */
+	tmp = RREG32_SOC15(GC, 0, regCP_PFP_IC_OP_CNTL);
+	tmp = REG_SET_FIELD(tmp, CP_PFP_IC_OP_CNTL, INVALIDATE_CACHE, 1);
+	WREG32_SOC15(GC, 0, regCP_PFP_IC_OP_CNTL, tmp);
+
+	/* Wait for invalidation complete */
+	for (i = 0; i < usec_timeout; i++) {
+		tmp = RREG32_SOC15(GC, 0, regCP_PFP_IC_OP_CNTL);
+		if (1 == REG_GET_FIELD(tmp, CP_PFP_IC_OP_CNTL,
+					INVALIDATE_CACHE_COMPLETE))
+			break;
+		udelay(1);
+	}
+
+	if (i >= usec_timeout) {
+		dev_err(adev->dev, "failed to invalidate instruction cache\n");
+		return -EINVAL;
+	}
+
+	if (amdgpu_emu_mode == 1)
+		adev->hdp.funcs->flush_hdp(adev, NULL);
+
+	tmp = RREG32_SOC15(GC, 0, regCP_PFP_IC_BASE_CNTL);
+	tmp = REG_SET_FIELD(tmp, CP_PFP_IC_BASE_CNTL, VMID, 0);
+	tmp = REG_SET_FIELD(tmp, CP_PFP_IC_BASE_CNTL, CACHE_POLICY, 0);
+	tmp = REG_SET_FIELD(tmp, CP_PFP_IC_BASE_CNTL, EXE_DISABLE, 0);
+	tmp = REG_SET_FIELD(tmp, CP_PFP_IC_BASE_CNTL, ADDRESS_CLAMP, 1);
+	WREG32_SOC15(GC, 0, regCP_PFP_IC_BASE_CNTL, tmp);
+
+	/* Program pfp ucode address into instruction cache address register */
+	WREG32_SOC15(GC, 0, regCP_PFP_IC_BASE_LO,
+			lower_32_bits(addr) & 0xFFFFF000);
+	WREG32_SOC15(GC, 0, regCP_PFP_IC_BASE_HI,
+			upper_32_bits(addr));
+
+	return 0;
+}
+
+static int gfx_v11_0_config_mec_cache(struct amdgpu_device *adev, uint64_t addr)
+{
+	uint32_t usec_timeout = 50000;  /* wait for 50ms */
+	uint32_t tmp;
+	int i;
+
+	/* Trigger an invalidation of the L1 instruction caches */
+	tmp = RREG32_SOC15(GC, 0, regCP_CPC_IC_OP_CNTL);
+	tmp = REG_SET_FIELD(tmp, CP_CPC_IC_OP_CNTL, INVALIDATE_CACHE, 1);
+
+	WREG32_SOC15(GC, 0, regCP_CPC_IC_OP_CNTL, tmp);
+
+	/* Wait for invalidation complete */
+	for (i = 0; i < usec_timeout; i++) {
+		tmp = RREG32_SOC15(GC, 0, regCP_CPC_IC_OP_CNTL);
+		if (1 == REG_GET_FIELD(tmp, CP_CPC_IC_OP_CNTL,
+					INVALIDATE_CACHE_COMPLETE))
+			break;
+		udelay(1);
+	}
+
+	if (i >= usec_timeout) {
+		dev_err(adev->dev, "failed to invalidate instruction cache\n");
+		return -EINVAL;
+	}
+
+	if (amdgpu_emu_mode == 1)
+		adev->hdp.funcs->flush_hdp(adev, NULL);
+
+	tmp = RREG32_SOC15(GC, 0, regCP_CPC_IC_BASE_CNTL);
+	tmp = REG_SET_FIELD(tmp, CP_CPC_IC_BASE_CNTL, CACHE_POLICY, 0);
+	tmp = REG_SET_FIELD(tmp, CP_CPC_IC_BASE_CNTL, EXE_DISABLE, 0);
+	tmp = REG_SET_FIELD(tmp, CP_CPC_IC_BASE_CNTL, ADDRESS_CLAMP, 1);
+	WREG32_SOC15(GC, 0, regCP_CPC_IC_BASE_CNTL, tmp);
+
+	/* Program mec1 ucode address into instruction cache address register */
+	WREG32_SOC15(GC, 0, regCP_CPC_IC_BASE_LO,
+			lower_32_bits(addr) & 0xFFFFF000);
+	WREG32_SOC15(GC, 0, regCP_CPC_IC_BASE_HI,
+			upper_32_bits(addr));
+
+	return 0;
+}
+
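+/* RS64 PFP: program the instruction cache base, prime it, then set each
+ * pipe's program counter start address and data cache base */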
+static int gfx_v11_0_config_pfp_cache_rs64(struct amdgpu_device *adev, uint64_t addr, uint64_t addr2)
+{
+	uint32_t usec_timeout = 50000;  /* wait for 50ms */
+	uint32_t tmp;
+	unsigned i, pipe_id;
+	const struct gfx_firmware_header_v2_0 *pfp_hdr;
+
+	pfp_hdr = (const struct gfx_firmware_header_v2_0 *)
+		adev->gfx.pfp_fw->data;
+
+	WREG32_SOC15(GC, 0, regCP_PFP_IC_BASE_LO,
+		lower_32_bits(addr));
+	WREG32_SOC15(GC, 0, regCP_PFP_IC_BASE_HI,
+		upper_32_bits(addr));
+
+	tmp = RREG32_SOC15(GC, 0, regCP_PFP_IC_BASE_CNTL);
+	tmp = REG_SET_FIELD(tmp, CP_PFP_IC_BASE_CNTL, VMID, 0);
+	tmp = REG_SET_FIELD(tmp, CP_PFP_IC_BASE_CNTL, CACHE_POLICY, 0);
+	tmp = REG_SET_FIELD(tmp, CP_PFP_IC_BASE_CNTL, EXE_DISABLE, 0);
+	WREG32_SOC15(GC, 0, regCP_PFP_IC_BASE_CNTL, tmp);
+
+	/*
+	 * Programming any of the CP_PFP_IC_BASE registers
+	 * forces invalidation of the PFP L1 I$. Wait for the
+	 * invalidation complete
+	 */
+	for (i = 0; i < usec_timeout; i++) {
+		tmp = RREG32_SOC15(GC, 0, regCP_PFP_IC_OP_CNTL);
+		if (1 == REG_GET_FIELD(tmp, CP_PFP_IC_OP_CNTL,
+			INVALIDATE_CACHE_COMPLETE))
+			break;
+		udelay(1);
+	}
+
+	if (i >= usec_timeout) {
+		dev_err(adev->dev, "failed to invalidate instruction cache\n");
+		return -EINVAL;
+	}
+
+	/* Prime the L1 instruction caches */
+	tmp = RREG32_SOC15(GC, 0, regCP_PFP_IC_OP_CNTL);
+	tmp = REG_SET_FIELD(tmp, CP_PFP_IC_OP_CNTL, PRIME_ICACHE, 1);
+	WREG32_SOC15(GC, 0, regCP_PFP_IC_OP_CNTL, tmp);
+	/* Wait for the cache to be primed */
+	for (i = 0; i < usec_timeout; i++) {
+		tmp = RREG32_SOC15(GC, 0, regCP_PFP_IC_OP_CNTL);
+		if (1 == REG_GET_FIELD(tmp, CP_PFP_IC_OP_CNTL,
+			ICACHE_PRIMED))
+			break;
+		udelay(1);
+	}
+
+	if (i >= usec_timeout) {
+		dev_err(adev->dev, "failed to prime instruction cache\n");
+		return -EINVAL;
+	}
+
+	mutex_lock(&adev->srbm_mutex);
+	for (pipe_id = 0; pipe_id < adev->gfx.me.num_pipe_per_me; pipe_id++) {
+		soc21_grbm_select(adev, 0, pipe_id, 0, 0);
+		WREG32_SOC15(GC, 0, regCP_PFP_PRGRM_CNTR_START,
+			(pfp_hdr->ucode_start_addr_hi << 30) |
+			(pfp_hdr->ucode_start_addr_lo >> 2));
+		WREG32_SOC15(GC, 0, regCP_PFP_PRGRM_CNTR_START_HI,
+			pfp_hdr->ucode_start_addr_hi >> 2);
+
+		/*
+		 * Program CP_ME_CNTL to reset the given pipe so that
+		 * CP_PFP_PRGRM_CNTR_START takes effect.
+		 */
+		tmp = RREG32_SOC15(GC, 0, regCP_ME_CNTL);
+		if (pipe_id == 0)
+			tmp = REG_SET_FIELD(tmp, CP_ME_CNTL,
+					PFP_PIPE0_RESET, 1);
+		else
+			tmp = REG_SET_FIELD(tmp, CP_ME_CNTL,
+					PFP_PIPE1_RESET, 1);
+		WREG32_SOC15(GC, 0, regCP_ME_CNTL, tmp);
+
+		/* Clear the pfp pipe reset bit. */
+		if (pipe_id == 0)
+			tmp = REG_SET_FIELD(tmp, CP_ME_CNTL,
+					PFP_PIPE0_RESET, 0);
+		else
+			tmp = REG_SET_FIELD(tmp, CP_ME_CNTL,
+					PFP_PIPE1_RESET, 0);
+		WREG32_SOC15(GC, 0, regCP_ME_CNTL, tmp);
+
+		WREG32_SOC15(GC, 0, regCP_GFX_RS64_DC_BASE0_LO,
+			lower_32_bits(addr2));
+		WREG32_SOC15(GC, 0, regCP_GFX_RS64_DC_BASE0_HI,
+			upper_32_bits(addr2));
+	}
+	soc21_grbm_select(adev, 0, 0, 0, 0);
+	mutex_unlock(&adev->srbm_mutex);
+
+	tmp = RREG32_SOC15(GC, 0, regCP_GFX_RS64_DC_BASE_CNTL);
+	tmp = REG_SET_FIELD(tmp, CP_GFX_RS64_DC_BASE_CNTL, VMID, 0);
+	tmp = REG_SET_FIELD(tmp, CP_GFX_RS64_DC_BASE_CNTL, CACHE_POLICY, 0);
+	WREG32_SOC15(GC, 0, regCP_GFX_RS64_DC_BASE_CNTL, tmp);
+
+	/* Invalidate the data caches */
+	tmp = RREG32_SOC15(GC, 0, regCP_GFX_RS64_DC_OP_CNTL);
+	tmp = REG_SET_FIELD(tmp, CP_GFX_RS64_DC_OP_CNTL, INVALIDATE_DCACHE, 1);
+	WREG32_SOC15(GC, 0, regCP_GFX_RS64_DC_OP_CNTL, tmp);
+
+	for (i = 0; i < usec_timeout; i++) {
+		tmp = RREG32_SOC15(GC, 0, regCP_GFX_RS64_DC_OP_CNTL);
+		if (1 == REG_GET_FIELD(tmp, CP_GFX_RS64_DC_OP_CNTL,
+			INVALIDATE_DCACHE_COMPLETE))
+			break;
+		udelay(1);
+	}
+
+	if (i >= usec_timeout) {
+		dev_err(adev->dev, "failed to invalidate RS64 data cache\n");
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int gfx_v11_0_config_me_cache_rs64(struct amdgpu_device *adev, uint64_t addr, uint64_t addr2)
+{
+	uint32_t usec_timeout = 50000;  /* wait for 50ms */
+	uint32_t tmp;
+	unsigned i, pipe_id;
+	const struct gfx_firmware_header_v2_0 *me_hdr;
+
+	me_hdr = (const struct gfx_firmware_header_v2_0 *)
+		adev->gfx.me_fw->data;
+
+	WREG32_SOC15(GC, 0, regCP_ME_IC_BASE_LO,
+		lower_32_bits(addr));
+	WREG32_SOC15(GC, 0, regCP_ME_IC_BASE_HI,
+		upper_32_bits(addr));
+
+	tmp = RREG32_SOC15(GC, 0, regCP_ME_IC_BASE_CNTL);
+	tmp = REG_SET_FIELD(tmp, CP_ME_IC_BASE_CNTL, VMID, 0);
+	tmp = REG_SET_FIELD(tmp, CP_ME_IC_BASE_CNTL, CACHE_POLICY, 0);
+	tmp = REG_SET_FIELD(tmp, CP_ME_IC_BASE_CNTL, EXE_DISABLE, 0);
+	WREG32_SOC15(GC, 0, regCP_ME_IC_BASE_CNTL, tmp);
+
+	/*
+	 * Programming any of the CP_ME_IC_BASE registers
+	 * forces invalidation of the ME L1 I$. Wait for the
+	 * invalidation complete
+	 */
+	for (i = 0; i < usec_timeout; i++) {
+		tmp = RREG32_SOC15(GC, 0, regCP_ME_IC_OP_CNTL);
+		if (1 == REG_GET_FIELD(tmp, CP_ME_IC_OP_CNTL,
+			INVALIDATE_CACHE_COMPLETE))
+			break;
+		udelay(1);
+	}
+
+	if (i >= usec_timeout) {
+		dev_err(adev->dev, "failed to invalidate instruction cache\n");
+		return -EINVAL;
+	}
+
+	/* Prime the instruction caches */
+	tmp = RREG32_SOC15(GC, 0, regCP_ME_IC_OP_CNTL);
+	tmp = REG_SET_FIELD(tmp, CP_ME_IC_OP_CNTL, PRIME_ICACHE, 1);
+	WREG32_SOC15(GC, 0, regCP_ME_IC_OP_CNTL, tmp);
+
+	/* Wait for the instruction cache to be primed */
+	for (i = 0; i < usec_timeout; i++) {
+		tmp = RREG32_SOC15(GC, 0, regCP_ME_IC_OP_CNTL);
+		if (1 == REG_GET_FIELD(tmp, CP_ME_IC_OP_CNTL,
+			ICACHE_PRIMED))
+			break;
+		udelay(1);
+	}
+
+	if (i >= usec_timeout) {
+		dev_err(adev->dev, "failed to prime instruction cache\n");
+		return -EINVAL;
+	}
+
+	mutex_lock(&adev->srbm_mutex);
+	for (pipe_id = 0; pipe_id < adev->gfx.me.num_pipe_per_me; pipe_id++) {
+		soc21_grbm_select(adev, 0, pipe_id, 0, 0);
+		WREG32_SOC15(GC, 0, regCP_ME_PRGRM_CNTR_START,
+			(me_hdr->ucode_start_addr_hi << 30) |
+			(me_hdr->ucode_start_addr_lo >> 2));
+		WREG32_SOC15(GC, 0, regCP_ME_PRGRM_CNTR_START_HI,
+			me_hdr->ucode_start_addr_hi >> 2);
+
+		/*
+		 * Program CP_ME_CNTL to reset the given pipe so that
+		 * CP_ME_PRGRM_CNTR_START takes effect.
+		 */
+		tmp = RREG32_SOC15(GC, 0, regCP_ME_CNTL);
+		if (pipe_id == 0)
+			tmp = REG_SET_FIELD(tmp, CP_ME_CNTL,
+					ME_PIPE0_RESET, 1);
+		else
+			tmp = REG_SET_FIELD(tmp, CP_ME_CNTL,
+					ME_PIPE1_RESET, 1);
+		WREG32_SOC15(GC, 0, regCP_ME_CNTL, tmp);
+
+		/* Clear the me pipe reset bit. */
+		if (pipe_id == 0)
+			tmp = REG_SET_FIELD(tmp, CP_ME_CNTL,
+					ME_PIPE0_RESET, 0);
+		else
+			tmp = REG_SET_FIELD(tmp, CP_ME_CNTL,
+					ME_PIPE1_RESET, 0);
+		WREG32_SOC15(GC, 0, regCP_ME_CNTL, tmp);
+
+		WREG32_SOC15(GC, 0, regCP_GFX_RS64_DC_BASE1_LO,
+			lower_32_bits(addr2));
+		WREG32_SOC15(GC, 0, regCP_GFX_RS64_DC_BASE1_HI,
+			upper_32_bits(addr2));
+	}
+	soc21_grbm_select(adev, 0, 0, 0, 0);
+	mutex_unlock(&adev->srbm_mutex);
+
+	tmp = RREG32_SOC15(GC, 0, regCP_GFX_RS64_DC_BASE_CNTL);
+	tmp = REG_SET_FIELD(tmp, CP_GFX_RS64_DC_BASE_CNTL, VMID, 0);
+	tmp = REG_SET_FIELD(tmp, CP_GFX_RS64_DC_BASE_CNTL, CACHE_POLICY, 0);
+	WREG32_SOC15(GC, 0, regCP_GFX_RS64_DC_BASE_CNTL, tmp);
+
+	/* Invalidate the data caches */
+	tmp = RREG32_SOC15(GC, 0, regCP_GFX_RS64_DC_OP_CNTL);
+	tmp = REG_SET_FIELD(tmp, CP_GFX_RS64_DC_OP_CNTL, INVALIDATE_DCACHE, 1);
+	WREG32_SOC15(GC, 0, regCP_GFX_RS64_DC_OP_CNTL, tmp);
+
+	for (i = 0; i < usec_timeout; i++) {
+		tmp = RREG32_SOC15(GC, 0, regCP_GFX_RS64_DC_OP_CNTL);
+		if (1 == REG_GET_FIELD(tmp, CP_GFX_RS64_DC_OP_CNTL,
+			INVALIDATE_DCACHE_COMPLETE))
+			break;
+		udelay(1);
+	}
+
+	if (i >= usec_timeout) {
+		dev_err(adev->dev, "failed to invalidate RS64 data cache\n");
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int gfx_v11_0_config_mec_cache_rs64(struct amdgpu_device *adev, uint64_t addr, uint64_t addr2)
+{
+	uint32_t usec_timeout = 50000;  /* wait for 50ms */
+	uint32_t tmp;
+	unsigned i;
+	const struct gfx_firmware_header_v2_0 *mec_hdr;
+
+	mec_hdr = (const struct gfx_firmware_header_v2_0 *)
+		adev->gfx.mec_fw->data;
+
+	tmp = RREG32_SOC15(GC, 0, regCP_CPC_IC_BASE_CNTL);
+	tmp = REG_SET_FIELD(tmp, CP_CPC_IC_BASE_CNTL, VMID, 0);
+	tmp = REG_SET_FIELD(tmp, CP_CPC_IC_BASE_CNTL, EXE_DISABLE, 0);
+	tmp = REG_SET_FIELD(tmp, CP_CPC_IC_BASE_CNTL, CACHE_POLICY, 0);
+	WREG32_SOC15(GC, 0, regCP_CPC_IC_BASE_CNTL, tmp);
+
+	tmp = RREG32_SOC15(GC, 0, regCP_MEC_DC_BASE_CNTL);
+	tmp = REG_SET_FIELD(tmp, CP_MEC_DC_BASE_CNTL, VMID, 0);
+	tmp = REG_SET_FIELD(tmp, CP_MEC_DC_BASE_CNTL, CACHE_POLICY, 0);
+	WREG32_SOC15(GC, 0, regCP_MEC_DC_BASE_CNTL, tmp);
+
+	mutex_lock(&adev->srbm_mutex);
+	for (i = 0; i < adev->gfx.mec.num_pipe_per_mec; i++) {
+		soc21_grbm_select(adev, 1, i, 0, 0);
+
+		WREG32_SOC15(GC, 0, regCP_MEC_MDBASE_LO, addr2);
+		WREG32_SOC15(GC, 0, regCP_MEC_MDBASE_HI,
+		     upper_32_bits(addr2));
+
+		WREG32_SOC15(GC, 0, regCP_MEC_RS64_PRGRM_CNTR_START,
+					mec_hdr->ucode_start_addr_lo >> 2 |
+					mec_hdr->ucode_start_addr_hi << 30);
+		WREG32_SOC15(GC, 0, regCP_MEC_RS64_PRGRM_CNTR_START_HI,
+					mec_hdr->ucode_start_addr_hi >> 2);
+
+		WREG32_SOC15(GC, 0, regCP_CPC_IC_BASE_LO, addr);
+		WREG32_SOC15(GC, 0, regCP_CPC_IC_BASE_HI,
+		     upper_32_bits(addr));
+	}
+	soc21_grbm_select(adev, 0, 0, 0, 0);
+	mutex_unlock(&adev->srbm_mutex);
+
+	/* Invalidate the MEC data cache */
+	tmp = RREG32_SOC15(GC, 0, regCP_MEC_DC_OP_CNTL);
+	tmp = REG_SET_FIELD(tmp, CP_MEC_DC_OP_CNTL, INVALIDATE_DCACHE, 1);
+	WREG32_SOC15(GC, 0, regCP_MEC_DC_OP_CNTL, tmp);
+
+	/* Wait for invalidation complete */
+	for (i = 0; i < usec_timeout; i++) {
+		tmp = RREG32_SOC15(GC, 0, regCP_MEC_DC_OP_CNTL);
+		if (1 == REG_GET_FIELD(tmp, CP_MEC_DC_OP_CNTL,
+				       INVALIDATE_DCACHE_COMPLETE))
+			break;
+		udelay(1);
+	}
+
+	if (i >= usec_timeout) {
+		dev_err(adev->dev, "failed to invalidate RS64 data cache\n");
+		return -EINVAL;
+	}
+
+	/* Trigger an invalidation of the L1 instruction caches */
+	tmp = RREG32_SOC15(GC, 0, regCP_CPC_IC_OP_CNTL);
+	tmp = REG_SET_FIELD(tmp, CP_CPC_IC_OP_CNTL, INVALIDATE_CACHE, 1);
+	WREG32_SOC15(GC, 0, regCP_CPC_IC_OP_CNTL, tmp);
+
+	/* Wait for invalidation complete */
+	for (i = 0; i < usec_timeout; i++) {
+		tmp = RREG32_SOC15(GC, 0, regCP_CPC_IC_OP_CNTL);
+		if (1 == REG_GET_FIELD(tmp, CP_CPC_IC_OP_CNTL,
+				       INVALIDATE_CACHE_COMPLETE))
+			break;
+		udelay(1);
+	}
+
+	if (i >= usec_timeout) {
+		dev_err(adev->dev, "failed to invalidate instruction cache\n");
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
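+/* Program the RS64 PFP/ME/MEC program counter start addresses and pulse the
+ * PFP/ME pipe resets so the new start addresses take effect */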
+static void gfx_v11_0_config_gfx_rs64(struct amdgpu_device *adev)
+{
+	const struct gfx_firmware_header_v2_0 *pfp_hdr;
+	const struct gfx_firmware_header_v2_0 *me_hdr;
+	const struct gfx_firmware_header_v2_0 *mec_hdr;
+	uint32_t pipe_id, tmp;
+
+	mec_hdr = (const struct gfx_firmware_header_v2_0 *)
+		adev->gfx.mec_fw->data;
+	me_hdr = (const struct gfx_firmware_header_v2_0 *)
+		adev->gfx.me_fw->data;
+	pfp_hdr = (const struct gfx_firmware_header_v2_0 *)
+		adev->gfx.pfp_fw->data;
+
+	/* config pfp program start addr */
+	for (pipe_id = 0; pipe_id < 2; pipe_id++) {
+		soc21_grbm_select(adev, 0, pipe_id, 0, 0);
+		WREG32_SOC15(GC, 0, regCP_PFP_PRGRM_CNTR_START,
+			(pfp_hdr->ucode_start_addr_hi << 30) |
+			(pfp_hdr->ucode_start_addr_lo >> 2));
+		WREG32_SOC15(GC, 0, regCP_PFP_PRGRM_CNTR_START_HI,
+			pfp_hdr->ucode_start_addr_hi >> 2);
+	}
+	soc21_grbm_select(adev, 0, 0, 0, 0);
+
+	/* reset pfp pipe */
+	tmp = RREG32_SOC15(GC, 0, regCP_ME_CNTL);
+	tmp = REG_SET_FIELD(tmp, CP_ME_CNTL, PFP_PIPE0_RESET, 1);
+	tmp = REG_SET_FIELD(tmp, CP_ME_CNTL, PFP_PIPE1_RESET, 1);
+	WREG32_SOC15(GC, 0, regCP_ME_CNTL, tmp);
+
+	/* clear pfp pipe reset */
+	tmp = REG_SET_FIELD(tmp, CP_ME_CNTL, PFP_PIPE0_RESET, 0);
+	tmp = REG_SET_FIELD(tmp, CP_ME_CNTL, PFP_PIPE1_RESET, 0);
+	WREG32_SOC15(GC, 0, regCP_ME_CNTL, tmp);
+
+	/* config me program start addr */
+	for (pipe_id = 0; pipe_id < 2; pipe_id++) {
+		soc21_grbm_select(adev, 0, pipe_id, 0, 0);
+		WREG32_SOC15(GC, 0, regCP_ME_PRGRM_CNTR_START,
+			(me_hdr->ucode_start_addr_hi << 30) |
+			(me_hdr->ucode_start_addr_lo >> 2));
+		WREG32_SOC15(GC, 0, regCP_ME_PRGRM_CNTR_START_HI,
+			me_hdr->ucode_start_addr_hi >> 2);
+	}
+	soc21_grbm_select(adev, 0, 0, 0, 0);
+
+	/* reset me pipe */
+	tmp = RREG32_SOC15(GC, 0, regCP_ME_CNTL);
+	tmp = REG_SET_FIELD(tmp, CP_ME_CNTL, ME_PIPE0_RESET, 1);
+	tmp = REG_SET_FIELD(tmp, CP_ME_CNTL, ME_PIPE1_RESET, 1);
+	WREG32_SOC15(GC, 0, regCP_ME_CNTL, tmp);
+
+	/* clear me pipe reset */
+	tmp = REG_SET_FIELD(tmp, CP_ME_CNTL, ME_PIPE0_RESET, 0);
+	tmp = REG_SET_FIELD(tmp, CP_ME_CNTL, ME_PIPE1_RESET, 0);
+	WREG32_SOC15(GC, 0, regCP_ME_CNTL, tmp);
+
+	/* config mec program start addr */
+	for (pipe_id = 0; pipe_id < 4; pipe_id++) {
+		soc21_grbm_select(adev, 1, pipe_id, 0, 0);
+		WREG32_SOC15(GC, 0, regCP_MEC_RS64_PRGRM_CNTR_START,
+					mec_hdr->ucode_start_addr_lo >> 2 |
+					mec_hdr->ucode_start_addr_hi << 30);
+		WREG32_SOC15(GC, 0, regCP_MEC_RS64_PRGRM_CNTR_START_HI,
+					mec_hdr->ucode_start_addr_hi >> 2);
+	}
+	soc21_grbm_select(adev, 0, 0, 0, 0);
+}
+
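+/* Poll until the RLC reports bootload complete, then, for backdoor autoload,
+ * point the CP caches at the images placed in the autoload buffer */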
+static int gfx_v11_0_wait_for_rlc_autoload_complete(struct amdgpu_device *adev)
+{
+	uint32_t cp_status;
+	uint32_t bootload_status;
+	int i, r;
+	uint64_t addr, addr2;
+
+	for (i = 0; i < adev->usec_timeout; i++) {
+		cp_status = RREG32_SOC15(GC, 0, regCP_STAT);
+		bootload_status = RREG32_SOC15(GC, 0, regRLC_RLCS_BOOTLOAD_STATUS);
+		if ((cp_status == 0) &&
+		    (REG_GET_FIELD(bootload_status,
+			RLC_RLCS_BOOTLOAD_STATUS, BOOTLOAD_COMPLETE) == 1)) {
+			break;
+		}
+		udelay(1);
+	}
+
+	if (i >= adev->usec_timeout) {
+		dev_err(adev->dev, "rlc autoload: gc ucode autoload timeout\n");
+		return -ETIMEDOUT;
+	}
+
+	if (adev->firmware.load_type == AMDGPU_FW_LOAD_RLC_BACKDOOR_AUTO) {
+		if (adev->gfx.rs64_enable) {
+			addr = adev->gfx.rlc.rlc_autoload_gpu_addr +
+				rlc_autoload_info[SOC21_FIRMWARE_ID_RS64_ME].offset;
+			addr2 = adev->gfx.rlc.rlc_autoload_gpu_addr +
+				rlc_autoload_info[SOC21_FIRMWARE_ID_RS64_ME_P0_STACK].offset;
+			r = gfx_v11_0_config_me_cache_rs64(adev, addr, addr2);
+			if (r)
+				return r;
+			addr = adev->gfx.rlc.rlc_autoload_gpu_addr +
+				rlc_autoload_info[SOC21_FIRMWARE_ID_RS64_PFP].offset;
+			addr2 = adev->gfx.rlc.rlc_autoload_gpu_addr +
+				rlc_autoload_info[SOC21_FIRMWARE_ID_RS64_PFP_P0_STACK].offset;
+			r = gfx_v11_0_config_pfp_cache_rs64(adev, addr, addr2);
+			if (r)
+				return r;
+			addr = adev->gfx.rlc.rlc_autoload_gpu_addr +
+				rlc_autoload_info[SOC21_FIRMWARE_ID_RS64_MEC].offset;
+			addr2 = adev->gfx.rlc.rlc_autoload_gpu_addr +
+				rlc_autoload_info[SOC21_FIRMWARE_ID_RS64_MEC_P0_STACK].offset;
+			r = gfx_v11_0_config_mec_cache_rs64(adev, addr, addr2);
+			if (r)
+				return r;
+		} else {
+			addr = adev->gfx.rlc.rlc_autoload_gpu_addr +
+				rlc_autoload_info[SOC21_FIRMWARE_ID_CP_ME].offset;
+			r = gfx_v11_0_config_me_cache(adev, addr);
+			if (r)
+				return r;
+			addr = adev->gfx.rlc.rlc_autoload_gpu_addr +
+				rlc_autoload_info[SOC21_FIRMWARE_ID_CP_PFP].offset;
+			r = gfx_v11_0_config_pfp_cache(adev, addr);
+			if (r)
+				return r;
+			addr = adev->gfx.rlc.rlc_autoload_gpu_addr +
+				rlc_autoload_info[SOC21_FIRMWARE_ID_CP_MEC].offset;
+			r = gfx_v11_0_config_mec_cache(adev, addr);
+			if (r)
+				return r;
+		}
+	}
+
+	return 0;
+}
+
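+/* Halt or unhalt the gfx ME/PFP and wait for the CP to go idle */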
+static int gfx_v11_0_cp_gfx_enable(struct amdgpu_device *adev, bool enable)
+{
+	int i;
+	u32 tmp = RREG32_SOC15(GC, 0, regCP_ME_CNTL);
+
+	tmp = REG_SET_FIELD(tmp, CP_ME_CNTL, ME_HALT, enable ? 0 : 1);
+	tmp = REG_SET_FIELD(tmp, CP_ME_CNTL, PFP_HALT, enable ? 0 : 1);
+	WREG32_SOC15(GC, 0, regCP_ME_CNTL, tmp);
+
+	for (i = 0; i < adev->usec_timeout; i++) {
+		if (RREG32_SOC15(GC, 0, regCP_STAT) == 0)
+			break;
+		udelay(1);
+	}
+
+	if (i >= adev->usec_timeout)
+		DRM_ERROR("failed to %s cp gfx\n", enable ? "unhalt" : "halt");
+
+	return 0;
+}
+
+static int gfx_v11_0_cp_gfx_load_pfp_microcode(struct amdgpu_device *adev)
+{
+	int r;
+	const struct gfx_firmware_header_v1_0 *pfp_hdr;
+	const __le32 *fw_data;
+	unsigned i, fw_size;
+
+	pfp_hdr = (const struct gfx_firmware_header_v1_0 *)
+		adev->gfx.pfp_fw->data;
+
+	amdgpu_ucode_print_gfx_hdr(&pfp_hdr->header);
+
+	fw_data = (const __le32 *)(adev->gfx.pfp_fw->data +
+		le32_to_cpu(pfp_hdr->header.ucode_array_offset_bytes));
+	fw_size = le32_to_cpu(pfp_hdr->header.ucode_size_bytes);
+
+	r = amdgpu_bo_create_reserved(adev, pfp_hdr->header.ucode_size_bytes,
+				      PAGE_SIZE, AMDGPU_GEM_DOMAIN_GTT,
+				      &adev->gfx.pfp.pfp_fw_obj,
+				      &adev->gfx.pfp.pfp_fw_gpu_addr,
+				      (void **)&adev->gfx.pfp.pfp_fw_ptr);
+	if (r) {
+		dev_err(adev->dev, "(%d) failed to create pfp fw bo\n", r);
+		gfx_v11_0_pfp_fini(adev);
+		return r;
+	}
+
+	memcpy(adev->gfx.pfp.pfp_fw_ptr, fw_data, fw_size);
+
+	amdgpu_bo_kunmap(adev->gfx.pfp.pfp_fw_obj);
+	amdgpu_bo_unreserve(adev->gfx.pfp.pfp_fw_obj);
+
+	gfx_v11_0_config_pfp_cache(adev, adev->gfx.pfp.pfp_fw_gpu_addr);
+
+	WREG32_SOC15(GC, 0, regCP_HYP_PFP_UCODE_ADDR, 0);
+
+	for (i = 0; i < pfp_hdr->jt_size; i++)
+		WREG32_SOC15(GC, 0, regCP_HYP_PFP_UCODE_DATA,
+			     le32_to_cpup(fw_data + pfp_hdr->jt_offset + i));
+
+	WREG32_SOC15(GC, 0, regCP_HYP_PFP_UCODE_ADDR, adev->gfx.pfp_fw_version);
+
+	return 0;
+}
+
+static int gfx_v11_0_cp_gfx_load_pfp_microcode_rs64(struct amdgpu_device *adev)
+{
+	int r;
+	const struct gfx_firmware_header_v2_0 *pfp_hdr;
+	const __le32 *fw_ucode, *fw_data;
+	unsigned i, pipe_id, fw_ucode_size, fw_data_size;
+	uint32_t tmp;
+	uint32_t usec_timeout = 50000;  /* wait for 50ms */
+
+	pfp_hdr = (const struct gfx_firmware_header_v2_0 *)
+		adev->gfx.pfp_fw->data;
+
+	amdgpu_ucode_print_gfx_hdr(&pfp_hdr->header);
+
+	/* instruction */
+	fw_ucode = (const __le32 *)(adev->gfx.pfp_fw->data +
+		le32_to_cpu(pfp_hdr->ucode_offset_bytes));
+	fw_ucode_size = le32_to_cpu(pfp_hdr->ucode_size_bytes);
+	/* data */
+	fw_data = (const __le32 *)(adev->gfx.pfp_fw->data +
+		le32_to_cpu(pfp_hdr->data_offset_bytes));
+	fw_data_size = le32_to_cpu(pfp_hdr->data_size_bytes);
+
+	/* 64kb align */
+	r = amdgpu_bo_create_reserved(adev, fw_ucode_size,
+				      64 * 1024, AMDGPU_GEM_DOMAIN_VRAM,
+				      &adev->gfx.pfp.pfp_fw_obj,
+				      &adev->gfx.pfp.pfp_fw_gpu_addr,
+				      (void **)&adev->gfx.pfp.pfp_fw_ptr);
+	if (r) {
+		dev_err(adev->dev, "(%d) failed to create pfp ucode fw bo\n", r);
+		gfx_v11_0_pfp_fini(adev);
+		return r;
+	}
+
+	r = amdgpu_bo_create_reserved(adev, fw_data_size,
+				      64 * 1024, AMDGPU_GEM_DOMAIN_VRAM,
+				      &adev->gfx.pfp.pfp_fw_data_obj,
+				      &adev->gfx.pfp.pfp_fw_data_gpu_addr,
+				      (void **)&adev->gfx.pfp.pfp_fw_data_ptr);
+	if (r) {
+		dev_err(adev->dev, "(%d) failed to create pfp data fw bo\n", r);
+		gfx_v11_0_pfp_fini(adev);
+		return r;
+	}
+
+	memcpy(adev->gfx.pfp.pfp_fw_ptr, fw_ucode, fw_ucode_size);
+	memcpy(adev->gfx.pfp.pfp_fw_data_ptr, fw_data, fw_data_size);
+
+	amdgpu_bo_kunmap(adev->gfx.pfp.pfp_fw_obj);
+	amdgpu_bo_kunmap(adev->gfx.pfp.pfp_fw_data_obj);
+	amdgpu_bo_unreserve(adev->gfx.pfp.pfp_fw_obj);
+	amdgpu_bo_unreserve(adev->gfx.pfp.pfp_fw_data_obj);
+
+	if (amdgpu_emu_mode == 1)
+		adev->hdp.funcs->flush_hdp(adev, NULL);
+
+	WREG32_SOC15(GC, 0, regCP_PFP_IC_BASE_LO,
+		lower_32_bits(adev->gfx.pfp.pfp_fw_gpu_addr));
+	WREG32_SOC15(GC, 0, regCP_PFP_IC_BASE_HI,
+		upper_32_bits(adev->gfx.pfp.pfp_fw_gpu_addr));
+
+	tmp = RREG32_SOC15(GC, 0, regCP_PFP_IC_BASE_CNTL);
+	tmp = REG_SET_FIELD(tmp, CP_PFP_IC_BASE_CNTL, VMID, 0);
+	tmp = REG_SET_FIELD(tmp, CP_PFP_IC_BASE_CNTL, CACHE_POLICY, 0);
+	tmp = REG_SET_FIELD(tmp, CP_PFP_IC_BASE_CNTL, EXE_DISABLE, 0);
+	WREG32_SOC15(GC, 0, regCP_PFP_IC_BASE_CNTL, tmp);
+
+	/*
+	 * Programming any of the CP_PFP_IC_BASE registers
+	 * forces invalidation of the PFP L1 I$. Wait for the
+	 * invalidation complete
+	 */
+	for (i = 0; i < usec_timeout; i++) {
+		tmp = RREG32_SOC15(GC, 0, regCP_PFP_IC_OP_CNTL);
+		if (1 == REG_GET_FIELD(tmp, CP_PFP_IC_OP_CNTL,
+			INVALIDATE_CACHE_COMPLETE))
+			break;
+		udelay(1);
+	}
+
+	if (i >= usec_timeout) {
+		dev_err(adev->dev, "failed to invalidate instruction cache\n");
+		return -EINVAL;
+	}
+
+	/* Prime the L1 instruction caches */
+	tmp = RREG32_SOC15(GC, 0, regCP_PFP_IC_OP_CNTL);
+	tmp = REG_SET_FIELD(tmp, CP_PFP_IC_OP_CNTL, PRIME_ICACHE, 1);
+	WREG32_SOC15(GC, 0, regCP_PFP_IC_OP_CNTL, tmp);
+	/* Wait for the cache to be primed */
+	for (i = 0; i < usec_timeout; i++) {
+		tmp = RREG32_SOC15(GC, 0, regCP_PFP_IC_OP_CNTL);
+		if (1 == REG_GET_FIELD(tmp, CP_PFP_IC_OP_CNTL,
+			ICACHE_PRIMED))
+			break;
+		udelay(1);
+	}
+
+	if (i >= usec_timeout) {
+		dev_err(adev->dev, "failed to prime instruction cache\n");
+		return -EINVAL;
+	}
+
+	mutex_lock(&adev->srbm_mutex);
+	for (pipe_id = 0; pipe_id < adev->gfx.me.num_pipe_per_me; pipe_id++) {
+		soc21_grbm_select(adev, 0, pipe_id, 0, 0);
+		WREG32_SOC15(GC, 0, regCP_PFP_PRGRM_CNTR_START,
+			(pfp_hdr->ucode_start_addr_hi << 30) |
+			(pfp_hdr->ucode_start_addr_lo >> 2));
+		WREG32_SOC15(GC, 0, regCP_PFP_PRGRM_CNTR_START_HI,
+			pfp_hdr->ucode_start_addr_hi >> 2);
+
+		/*
+		 * Program CP_ME_CNTL to reset the given pipe so that
+		 * CP_PFP_PRGRM_CNTR_START takes effect.
+		 */
+		tmp = RREG32_SOC15(GC, 0, regCP_ME_CNTL);
+		if (pipe_id == 0)
+			tmp = REG_SET_FIELD(tmp, CP_ME_CNTL,
+					PFP_PIPE0_RESET, 1);
+		else
+			tmp = REG_SET_FIELD(tmp, CP_ME_CNTL,
+					PFP_PIPE1_RESET, 1);
+		WREG32_SOC15(GC, 0, regCP_ME_CNTL, tmp);
+
+		/* Clear the pfp pipe reset bit. */
+		if (pipe_id == 0)
+			tmp = REG_SET_FIELD(tmp, CP_ME_CNTL,
+					PFP_PIPE0_RESET, 0);
+		else
+			tmp = REG_SET_FIELD(tmp, CP_ME_CNTL,
+					PFP_PIPE1_RESET, 0);
+		WREG32_SOC15(GC, 0, regCP_ME_CNTL, tmp);
+
+		WREG32_SOC15(GC, 0, regCP_GFX_RS64_DC_BASE0_LO,
+			lower_32_bits(adev->gfx.pfp.pfp_fw_data_gpu_addr));
+		WREG32_SOC15(GC, 0, regCP_GFX_RS64_DC_BASE0_HI,
+			upper_32_bits(adev->gfx.pfp.pfp_fw_data_gpu_addr));
+	}
+	soc21_grbm_select(adev, 0, 0, 0, 0);
+	mutex_unlock(&adev->srbm_mutex);
+
+	tmp = RREG32_SOC15(GC, 0, regCP_GFX_RS64_DC_BASE_CNTL);
+	tmp = REG_SET_FIELD(tmp, CP_GFX_RS64_DC_BASE_CNTL, VMID, 0);
+	tmp = REG_SET_FIELD(tmp, CP_GFX_RS64_DC_BASE_CNTL, CACHE_POLICY, 0);
+	WREG32_SOC15(GC, 0, regCP_GFX_RS64_DC_BASE_CNTL, tmp);
+
+	/* Invalidate the data caches */
+	tmp = RREG32_SOC15(GC, 0, regCP_GFX_RS64_DC_OP_CNTL);
+	tmp = REG_SET_FIELD(tmp, CP_GFX_RS64_DC_OP_CNTL, INVALIDATE_DCACHE, 1);
+	WREG32_SOC15(GC, 0, regCP_GFX_RS64_DC_OP_CNTL, tmp);
+
+	for (i = 0; i < usec_timeout; i++) {
+		tmp = RREG32_SOC15(GC, 0, regCP_GFX_RS64_DC_OP_CNTL);
+		if (1 == REG_GET_FIELD(tmp, CP_GFX_RS64_DC_OP_CNTL,
+			INVALIDATE_DCACHE_COMPLETE))
+			break;
+		udelay(1);
+	}
+
+	if (i >= usec_timeout) {
+		dev_err(adev->dev, "failed to invalidate RS64 data cache\n");
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int gfx_v11_0_cp_gfx_load_me_microcode(struct amdgpu_device *adev)
+{
+	int r;
+	const struct gfx_firmware_header_v1_0 *me_hdr;
+	const __le32 *fw_data;
+	unsigned i, fw_size;
+
+	me_hdr = (const struct gfx_firmware_header_v1_0 *)
+		adev->gfx.me_fw->data;
+
+	amdgpu_ucode_print_gfx_hdr(&me_hdr->header);
+
+	fw_data = (const __le32 *)(adev->gfx.me_fw->data +
+		le32_to_cpu(me_hdr->header.ucode_array_offset_bytes));
+	fw_size = le32_to_cpu(me_hdr->header.ucode_size_bytes);
+
+	r = amdgpu_bo_create_reserved(adev, me_hdr->header.ucode_size_bytes,
+				      PAGE_SIZE, AMDGPU_GEM_DOMAIN_GTT,
+				      &adev->gfx.me.me_fw_obj,
+				      &adev->gfx.me.me_fw_gpu_addr,
+				      (void **)&adev->gfx.me.me_fw_ptr);
+	if (r) {
+		dev_err(adev->dev, "(%d) failed to create me fw bo\n", r);
+		gfx_v11_0_me_fini(adev);
+		return r;
+	}
+
+	memcpy(adev->gfx.me.me_fw_ptr, fw_data, fw_size);
+
+	amdgpu_bo_kunmap(adev->gfx.me.me_fw_obj);
+	amdgpu_bo_unreserve(adev->gfx.me.me_fw_obj);
+
+	gfx_v11_0_config_me_cache(adev, adev->gfx.me.me_fw_gpu_addr);
+
+	WREG32_SOC15(GC, 0, regCP_HYP_ME_UCODE_ADDR, 0);
+
+	for (i = 0; i < me_hdr->jt_size; i++)
+		WREG32_SOC15(GC, 0, regCP_HYP_ME_UCODE_DATA,
+			     le32_to_cpup(fw_data + me_hdr->jt_offset + i));
+
+	WREG32_SOC15(GC, 0, regCP_HYP_ME_UCODE_ADDR, adev->gfx.me_fw_version);
+
+	return 0;
+}
+
+static int gfx_v11_0_cp_gfx_load_me_microcode_rs64(struct amdgpu_device *adev)
+{
+	int r;
+	const struct gfx_firmware_header_v2_0 *me_hdr;
+	const __le32 *fw_ucode, *fw_data;
+	unsigned i, pipe_id, fw_ucode_size, fw_data_size;
+	uint32_t tmp;
+	uint32_t usec_timeout = 50000;  /* wait for 50ms */
+
+	me_hdr = (const struct gfx_firmware_header_v2_0 *)
+		adev->gfx.me_fw->data;
+
+	amdgpu_ucode_print_gfx_hdr(&me_hdr->header);
+
+	/* instruction */
+	fw_ucode = (const __le32 *)(adev->gfx.me_fw->data +
+		le32_to_cpu(me_hdr->ucode_offset_bytes));
+	fw_ucode_size = le32_to_cpu(me_hdr->ucode_size_bytes);
+	/* data */
+	fw_data = (const __le32 *)(adev->gfx.me_fw->data +
+		le32_to_cpu(me_hdr->data_offset_bytes));
+	fw_data_size = le32_to_cpu(me_hdr->data_size_bytes);
+
+	/* 64kb align*/
+	r = amdgpu_bo_create_reserved(adev, fw_ucode_size,
+				      64 * 1024, AMDGPU_GEM_DOMAIN_VRAM,
+				      &adev->gfx.me.me_fw_obj,
+				      &adev->gfx.me.me_fw_gpu_addr,
+				      (void **)&adev->gfx.me.me_fw_ptr);
+	if (r) {
+		dev_err(adev->dev, "(%d) failed to create me ucode bo\n", r);
+		gfx_v11_0_me_fini(adev);
+		return r;
+	}
+
+	r = amdgpu_bo_create_reserved(adev, fw_data_size,
+				      64 * 1024, AMDGPU_GEM_DOMAIN_VRAM,
+				      &adev->gfx.me.me_fw_data_obj,
+				      &adev->gfx.me.me_fw_data_gpu_addr,
+				      (void **)&adev->gfx.me.me_fw_data_ptr);
+	if (r) {
+		dev_err(adev->dev, "(%d) failed to create me data bo\n", r);
+		gfx_v11_0_me_fini(adev);
+		return r;
+	}
+
+	memcpy(adev->gfx.me.me_fw_ptr, fw_ucode, fw_ucode_size);
+	memcpy(adev->gfx.me.me_fw_data_ptr, fw_data, fw_data_size);
+
+	amdgpu_bo_kunmap(adev->gfx.me.me_fw_obj);
+	amdgpu_bo_kunmap(adev->gfx.me.me_fw_data_obj);
+	amdgpu_bo_unreserve(adev->gfx.me.me_fw_obj);
+	amdgpu_bo_unreserve(adev->gfx.me.me_fw_data_obj);
+
+	if (amdgpu_emu_mode == 1)
+		adev->hdp.funcs->flush_hdp(adev, NULL);
+
+	WREG32_SOC15(GC, 0, regCP_ME_IC_BASE_LO,
+		lower_32_bits(adev->gfx.me.me_fw_gpu_addr));
+	WREG32_SOC15(GC, 0, regCP_ME_IC_BASE_HI,
+		upper_32_bits(adev->gfx.me.me_fw_gpu_addr));
+
+	tmp = RREG32_SOC15(GC, 0, regCP_ME_IC_BASE_CNTL);
+	tmp = REG_SET_FIELD(tmp, CP_ME_IC_BASE_CNTL, VMID, 0);
+	tmp = REG_SET_FIELD(tmp, CP_ME_IC_BASE_CNTL, CACHE_POLICY, 0);
+	tmp = REG_SET_FIELD(tmp, CP_ME_IC_BASE_CNTL, EXE_DISABLE, 0);
+	WREG32_SOC15(GC, 0, regCP_ME_IC_BASE_CNTL, tmp);
+
+	/*
+	 * Programming any of the CP_ME_IC_BASE registers
+	 * forces invalidation of the ME L1 I$. Wait for the
+	 * invalidation complete
+	 */
+	for (i = 0; i < usec_timeout; i++) {
+		tmp = RREG32_SOC15(GC, 0, regCP_ME_IC_OP_CNTL);
+		if (1 == REG_GET_FIELD(tmp, CP_ME_IC_OP_CNTL,
+			INVALIDATE_CACHE_COMPLETE))
+			break;
+		udelay(1);
+	}
+
+	if (i >= usec_timeout) {
+		dev_err(adev->dev, "failed to invalidate instruction cache\n");
+		return -EINVAL;
+	}
+
+	/* Prime the instruction caches */
+	tmp = RREG32_SOC15(GC, 0, regCP_ME_IC_OP_CNTL);
+	tmp = REG_SET_FIELD(tmp, CP_ME_IC_OP_CNTL, PRIME_ICACHE, 1);
+	WREG32_SOC15(GC, 0, regCP_ME_IC_OP_CNTL, tmp);
+
+	/* Wait for the instruction cache to be primed */
+	for (i = 0; i < usec_timeout; i++) {
+		tmp = RREG32_SOC15(GC, 0, regCP_ME_IC_OP_CNTL);
+		if (1 == REG_GET_FIELD(tmp, CP_ME_IC_OP_CNTL,
+			ICACHE_PRIMED))
+			break;
+		udelay(1);
+	}
+
+	if (i >= usec_timeout) {
+		dev_err(adev->dev, "failed to prime instruction cache\n");
+		return -EINVAL;
+	}
+
+	mutex_lock(&adev->srbm_mutex);
+	for (pipe_id = 0; pipe_id < adev->gfx.me.num_pipe_per_me; pipe_id++) {
+		soc21_grbm_select(adev, 0, pipe_id, 0, 0);
+		WREG32_SOC15(GC, 0, regCP_ME_PRGRM_CNTR_START,
+			(me_hdr->ucode_start_addr_hi << 30) |
+			(me_hdr->ucode_start_addr_lo >> 2));
+		WREG32_SOC15(GC, 0, regCP_ME_PRGRM_CNTR_START_HI,
+			me_hdr->ucode_start_addr_hi >> 2);
+
+		/*
+		 * Program CP_ME_CNTL to reset the given pipe so that
+		 * CP_ME_PRGRM_CNTR_START takes effect.
+		 */
+		tmp = RREG32_SOC15(GC, 0, regCP_ME_CNTL);
+		if (pipe_id == 0)
+			tmp = REG_SET_FIELD(tmp, CP_ME_CNTL,
+					ME_PIPE0_RESET, 1);
+		else
+			tmp = REG_SET_FIELD(tmp, CP_ME_CNTL,
+					ME_PIPE1_RESET, 1);
+		WREG32_SOC15(GC, 0, regCP_ME_CNTL, tmp);
+
+		/* Clear the me pipe reset bit. */
+		if (pipe_id == 0)
+			tmp = REG_SET_FIELD(tmp, CP_ME_CNTL,
+					ME_PIPE0_RESET, 0);
+		else
+			tmp = REG_SET_FIELD(tmp, CP_ME_CNTL,
+					ME_PIPE1_RESET, 0);
+		WREG32_SOC15(GC, 0, regCP_ME_CNTL, tmp);
+
+		WREG32_SOC15(GC, 0, regCP_GFX_RS64_DC_BASE1_LO,
+			lower_32_bits(adev->gfx.me.me_fw_data_gpu_addr));
+		WREG32_SOC15(GC, 0, regCP_GFX_RS64_DC_BASE1_HI,
+			upper_32_bits(adev->gfx.me.me_fw_data_gpu_addr));
+	}
+	soc21_grbm_select(adev, 0, 0, 0, 0);
+	mutex_unlock(&adev->srbm_mutex);
+
+	tmp = RREG32_SOC15(GC, 0, regCP_GFX_RS64_DC_BASE_CNTL);
+	tmp = REG_SET_FIELD(tmp, CP_GFX_RS64_DC_BASE_CNTL, VMID, 0);
+	tmp = REG_SET_FIELD(tmp, CP_GFX_RS64_DC_BASE_CNTL, CACHE_POLICY, 0);
+	WREG32_SOC15(GC, 0, regCP_GFX_RS64_DC_BASE_CNTL, tmp);
+
+	/* Invalidate the data caches */
+	tmp = RREG32_SOC15(GC, 0, regCP_GFX_RS64_DC_OP_CNTL);
+	tmp = REG_SET_FIELD(tmp, CP_GFX_RS64_DC_OP_CNTL, INVALIDATE_DCACHE, 1);
+	WREG32_SOC15(GC, 0, regCP_GFX_RS64_DC_OP_CNTL, tmp);
+
+	for (i = 0; i < usec_timeout; i++) {
+		tmp = RREG32_SOC15(GC, 0, regCP_GFX_RS64_DC_OP_CNTL);
+		if (1 == REG_GET_FIELD(tmp, CP_GFX_RS64_DC_OP_CNTL,
+			INVALIDATE_DCACHE_COMPLETE))
+			break;
+		udelay(1);
+	}
+
+	if (i >= usec_timeout) {
+		dev_err(adev->dev, "failed to invalidate RS64 data cache\n");
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int gfx_v11_0_cp_gfx_load_microcode(struct amdgpu_device *adev)
+{
+	int r;
+
+	if (!adev->gfx.me_fw || !adev->gfx.pfp_fw)
+		return -EINVAL;
+
+	gfx_v11_0_cp_gfx_enable(adev, false);
+
+	if (adev->gfx.rs64_enable)
+		r = gfx_v11_0_cp_gfx_load_pfp_microcode_rs64(adev);
+	else
+		r = gfx_v11_0_cp_gfx_load_pfp_microcode(adev);
+	if (r) {
+		dev_err(adev->dev, "(%d) failed to load pfp fw\n", r);
+		return r;
+	}
+
+	if (adev->gfx.rs64_enable)
+		r = gfx_v11_0_cp_gfx_load_me_microcode_rs64(adev);
+	else
+		r = gfx_v11_0_cp_gfx_load_me_microcode(adev);
+	if (r) {
+		dev_err(adev->dev, "(%d) failed to load me fw\n", r);
+		return r;
+	}
+
+	return 0;
+}
+
+static int gfx_v11_0_cp_gfx_start(struct amdgpu_device *adev)
+{
+	struct amdgpu_ring *ring;
+	const struct cs_section_def *sect = NULL;
+	const struct cs_extent_def *ext = NULL;
+	int r, i;
+	int ctx_reg_offset;
+
+	/* init the CP */
+	WREG32_SOC15(GC, 0, regCP_MAX_CONTEXT,
+		     adev->gfx.config.max_hw_contexts - 1);
+	WREG32_SOC15(GC, 0, regCP_DEVICE_ID, 1);
+
+	if (!amdgpu_async_gfx_ring)
+		gfx_v11_0_cp_gfx_enable(adev, true);
+
+	ring = &adev->gfx.gfx_ring[0];
+	r = amdgpu_ring_alloc(ring, gfx_v11_0_get_csb_size(adev));
+	if (r) {
+		DRM_ERROR("amdgpu: cp failed to lock ring (%d).\n", r);
+		return r;
+	}
+
+	amdgpu_ring_write(ring, PACKET3(PACKET3_PREAMBLE_CNTL, 0));
+	amdgpu_ring_write(ring, PACKET3_PREAMBLE_BEGIN_CLEAR_STATE);
+
+	amdgpu_ring_write(ring, PACKET3(PACKET3_CONTEXT_CONTROL, 1));
+	amdgpu_ring_write(ring, 0x80000000);
+	amdgpu_ring_write(ring, 0x80000000);
+
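+	/* emit the context register defaults from the clear state data */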
+	for (sect = gfx11_cs_data; sect->section != NULL; ++sect) {
+		for (ext = sect->section; ext->extent != NULL; ++ext) {
+			if (sect->id == SECT_CONTEXT) {
+				amdgpu_ring_write(ring,
+						  PACKET3(PACKET3_SET_CONTEXT_REG,
+							  ext->reg_count));
+				amdgpu_ring_write(ring, ext->reg_index -
+						  PACKET3_SET_CONTEXT_REG_START);
+				for (i = 0; i < ext->reg_count; i++)
+					amdgpu_ring_write(ring, ext->extent[i]);
+			}
+		}
+	}
+
+	ctx_reg_offset =
+		SOC15_REG_OFFSET(GC, 0, regPA_SC_TILE_STEERING_OVERRIDE) - PACKET3_SET_CONTEXT_REG_START;
+	amdgpu_ring_write(ring, PACKET3(PACKET3_SET_CONTEXT_REG, 1));
+	amdgpu_ring_write(ring, ctx_reg_offset);
+	amdgpu_ring_write(ring, adev->gfx.config.pa_sc_tile_steering_override);
+
+	amdgpu_ring_write(ring, PACKET3(PACKET3_PREAMBLE_CNTL, 0));
+	amdgpu_ring_write(ring, PACKET3_PREAMBLE_END_CLEAR_STATE);
+
+	amdgpu_ring_write(ring, PACKET3(PACKET3_CLEAR_STATE, 0));
+	amdgpu_ring_write(ring, 0);
+
+	amdgpu_ring_commit(ring);
+
+	/* submit cs packet to copy state 0 to next available state */
+	if (adev->gfx.num_gfx_rings > 1) {
+		/* maximum supported gfx ring is 2 */
+		ring = &adev->gfx.gfx_ring[1];
+		r = amdgpu_ring_alloc(ring, 2);
+		if (r) {
+			DRM_ERROR("amdgpu: cp failed to lock ring (%d).\n", r);
+			return r;
+		}
+
+		amdgpu_ring_write(ring, PACKET3(PACKET3_CLEAR_STATE, 0));
+		amdgpu_ring_write(ring, 0);
+
+		amdgpu_ring_commit(ring);
+	}
+	return 0;
+}
+
+static void gfx_v11_0_cp_gfx_switch_pipe(struct amdgpu_device *adev,
+					 CP_PIPE_ID pipe)
+{
+	u32 tmp;
+
+	tmp = RREG32_SOC15(GC, 0, regGRBM_GFX_CNTL);
+	tmp = REG_SET_FIELD(tmp, GRBM_GFX_CNTL, PIPEID, pipe);
+
+	WREG32_SOC15(GC, 0, regGRBM_GFX_CNTL, tmp);
+}
+
+static void gfx_v11_0_cp_gfx_set_doorbell(struct amdgpu_device *adev,
+					  struct amdgpu_ring *ring)
+{
+	u32 tmp;
+
+	tmp = RREG32_SOC15(GC, 0, regCP_RB_DOORBELL_CONTROL);
+	if (ring->use_doorbell) {
+		tmp = REG_SET_FIELD(tmp, CP_RB_DOORBELL_CONTROL,
+				    DOORBELL_OFFSET, ring->doorbell_index);
+		tmp = REG_SET_FIELD(tmp, CP_RB_DOORBELL_CONTROL,
+				    DOORBELL_EN, 1);
+	} else {
+		tmp = REG_SET_FIELD(tmp, CP_RB_DOORBELL_CONTROL,
+				    DOORBELL_EN, 0);
+	}
+	WREG32_SOC15(GC, 0, regCP_RB_DOORBELL_CONTROL, tmp);
+
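+	/* open the CP_RB doorbell range from this ring's doorbell up to the maximum */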
+	tmp = REG_SET_FIELD(0, CP_RB_DOORBELL_RANGE_LOWER,
+			    DOORBELL_RANGE_LOWER, ring->doorbell_index);
+	WREG32_SOC15(GC, 0, regCP_RB_DOORBELL_RANGE_LOWER, tmp);
+
+	WREG32_SOC15(GC, 0, regCP_RB_DOORBELL_RANGE_UPPER,
+		     CP_RB_DOORBELL_RANGE_UPPER__DOORBELL_RANGE_UPPER_MASK);
+}
+
+static int gfx_v11_0_cp_gfx_resume(struct amdgpu_device *adev)
+{
+	struct amdgpu_ring *ring;
+	u32 tmp;
+	u32 rb_bufsz;
+	u64 rb_addr, rptr_addr, wptr_gpu_addr;
+	u32 i;
+
+	/* Set the write pointer delay */
+	WREG32_SOC15(GC, 0, regCP_RB_WPTR_DELAY, 0);
+
+	/* set the RB to use vmid 0 */
+	WREG32_SOC15(GC, 0, regCP_RB_VMID, 0);
+
+	/* Init gfx ring 0 for pipe 0 */
+	mutex_lock(&adev->srbm_mutex);
+	gfx_v11_0_cp_gfx_switch_pipe(adev, PIPE_ID0);
+
+	/* Set ring buffer size */
+	ring = &adev->gfx.gfx_ring[0];
+	rb_bufsz = order_base_2(ring->ring_size / 8);
+	tmp = REG_SET_FIELD(0, CP_RB0_CNTL, RB_BUFSZ, rb_bufsz);
+	tmp = REG_SET_FIELD(tmp, CP_RB0_CNTL, RB_BLKSZ, rb_bufsz - 2);
+#ifdef __BIG_ENDIAN
+	tmp = REG_SET_FIELD(tmp, CP_RB0_CNTL, BUF_SWAP, 1);
+#endif
+	WREG32_SOC15(GC, 0, regCP_RB0_CNTL, tmp);
+
+	/* Initialize the ring buffer's write pointers */
+	ring->wptr = 0;
+	WREG32_SOC15(GC, 0, regCP_RB0_WPTR, lower_32_bits(ring->wptr));
+	WREG32_SOC15(GC, 0, regCP_RB0_WPTR_HI, upper_32_bits(ring->wptr));
+
+	/* set the wb address whether it's enabled or not */
+	rptr_addr = ring->rptr_gpu_addr;
+	WREG32_SOC15(GC, 0, regCP_RB0_RPTR_ADDR, lower_32_bits(rptr_addr));
+	WREG32_SOC15(GC, 0, regCP_RB0_RPTR_ADDR_HI, upper_32_bits(rptr_addr) &
+		     CP_RB_RPTR_ADDR_HI__RB_RPTR_ADDR_HI_MASK);
+
+	wptr_gpu_addr = ring->wptr_gpu_addr;
+	WREG32_SOC15(GC, 0, regCP_RB_WPTR_POLL_ADDR_LO,
+		     lower_32_bits(wptr_gpu_addr));
+	WREG32_SOC15(GC, 0, regCP_RB_WPTR_POLL_ADDR_HI,
+		     upper_32_bits(wptr_gpu_addr));
+
+	mdelay(1);
+	WREG32_SOC15(GC, 0, regCP_RB0_CNTL, tmp);
+
+	rb_addr = ring->gpu_addr >> 8;
+	WREG32_SOC15(GC, 0, regCP_RB0_BASE, rb_addr);
+	WREG32_SOC15(GC, 0, regCP_RB0_BASE_HI, upper_32_bits(rb_addr));
+
+	WREG32_SOC15(GC, 0, regCP_RB_ACTIVE, 1);
+
+	gfx_v11_0_cp_gfx_set_doorbell(adev, ring);
+	mutex_unlock(&adev->srbm_mutex);
+
+	/* Init gfx ring 1 for pipe 1 */
+	if (adev->gfx.num_gfx_rings > 1) {
+		mutex_lock(&adev->srbm_mutex);
+		gfx_v11_0_cp_gfx_switch_pipe(adev, PIPE_ID1);
+		/* maximum supported gfx ring is 2 */
+		ring = &adev->gfx.gfx_ring[1];
+		rb_bufsz = order_base_2(ring->ring_size / 8);
+		tmp = REG_SET_FIELD(0, CP_RB1_CNTL, RB_BUFSZ, rb_bufsz);
+		tmp = REG_SET_FIELD(tmp, CP_RB1_CNTL, RB_BLKSZ, rb_bufsz - 2);
+		WREG32_SOC15(GC, 0, regCP_RB1_CNTL, tmp);
+		/* Initialize the ring buffer's write pointers */
+		ring->wptr = 0;
+		WREG32_SOC15(GC, 0, regCP_RB1_WPTR, lower_32_bits(ring->wptr));
+		WREG32_SOC15(GC, 0, regCP_RB1_WPTR_HI, upper_32_bits(ring->wptr));
+		/* Set the wb address whether it's enabled or not */
+		rptr_addr = ring->rptr_gpu_addr;
+		WREG32_SOC15(GC, 0, regCP_RB1_RPTR_ADDR, lower_32_bits(rptr_addr));
+		WREG32_SOC15(GC, 0, regCP_RB1_RPTR_ADDR_HI, upper_32_bits(rptr_addr) &
+			     CP_RB1_RPTR_ADDR_HI__RB_RPTR_ADDR_HI_MASK);
+		wptr_gpu_addr = ring->wptr_gpu_addr;
+		WREG32_SOC15(GC, 0, regCP_RB_WPTR_POLL_ADDR_LO,
+			     lower_32_bits(wptr_gpu_addr));
+		WREG32_SOC15(GC, 0, regCP_RB_WPTR_POLL_ADDR_HI,
+			     upper_32_bits(wptr_gpu_addr));
+
+		mdelay(1);
+		WREG32_SOC15(GC, 0, regCP_RB1_CNTL, tmp);
+
+		rb_addr = ring->gpu_addr >> 8;
+		WREG32_SOC15(GC, 0, regCP_RB1_BASE, rb_addr);
+		WREG32_SOC15(GC, 0, regCP_RB1_BASE_HI, upper_32_bits(rb_addr));
+		WREG32_SOC15(GC, 0, regCP_RB1_ACTIVE, 1);
+
+		gfx_v11_0_cp_gfx_set_doorbell(adev, ring);
+		mutex_unlock(&adev->srbm_mutex);
+	}
+	/* Switch to pipe 0 */
+	mutex_lock(&adev->srbm_mutex);
+	gfx_v11_0_cp_gfx_switch_pipe(adev, PIPE_ID0);
+	mutex_unlock(&adev->srbm_mutex);
+
+	/* start the ring */
+	gfx_v11_0_cp_gfx_start(adev);
+
+	for (i = 0; i < adev->gfx.num_gfx_rings; i++) {
+		ring = &adev->gfx.gfx_ring[i];
+		ring->sched.ready = true;
+	}
+
+	return 0;
+}
+
+static void gfx_v11_0_cp_compute_enable(struct amdgpu_device *adev, bool enable)
+{
+	u32 data;
+
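+	/* RS64 MEC: clear/set the per-pipe reset and active bits; legacy MEC: toggle the halt bits */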
+	if (adev->gfx.rs64_enable) {
+		data = RREG32_SOC15(GC, 0, regCP_MEC_RS64_CNTL);
+		data = REG_SET_FIELD(data, CP_MEC_RS64_CNTL, MEC_INVALIDATE_ICACHE,
+							 enable ? 0 : 1);
+		data = REG_SET_FIELD(data, CP_MEC_RS64_CNTL, MEC_PIPE0_RESET,
+							 enable ? 0 : 1);
+		data = REG_SET_FIELD(data, CP_MEC_RS64_CNTL, MEC_PIPE1_RESET,
+							 enable ? 0 : 1);
+		data = REG_SET_FIELD(data, CP_MEC_RS64_CNTL, MEC_PIPE2_RESET,
+							 enable ? 0 : 1);
+		data = REG_SET_FIELD(data, CP_MEC_RS64_CNTL, MEC_PIPE3_RESET,
+							 enable ? 0 : 1);
+		data = REG_SET_FIELD(data, CP_MEC_RS64_CNTL, MEC_PIPE0_ACTIVE,
+							 enable ? 1 : 0);
+		data = REG_SET_FIELD(data, CP_MEC_RS64_CNTL, MEC_PIPE1_ACTIVE,
+							 enable ? 1 : 0);
+		data = REG_SET_FIELD(data, CP_MEC_RS64_CNTL, MEC_PIPE2_ACTIVE,
+							 enable ? 1 : 0);
+		data = REG_SET_FIELD(data, CP_MEC_RS64_CNTL, MEC_PIPE3_ACTIVE,
+							 enable ? 1 : 0);
+		data = REG_SET_FIELD(data, CP_MEC_RS64_CNTL, MEC_HALT,
+							 enable ? 0 : 1);
+		WREG32_SOC15(GC, 0, regCP_MEC_RS64_CNTL, data);
+	} else {
+		data = RREG32_SOC15(GC, 0, regCP_MEC_CNTL);
+
+		if (enable) {
+			data = REG_SET_FIELD(data, CP_MEC_CNTL, MEC_ME1_HALT, 0);
+			if (!adev->enable_mes_kiq)
+				data = REG_SET_FIELD(data, CP_MEC_CNTL,
+						     MEC_ME2_HALT, 0);
+		} else {
+			data = REG_SET_FIELD(data, CP_MEC_CNTL, MEC_ME1_HALT, 1);
+			data = REG_SET_FIELD(data, CP_MEC_CNTL, MEC_ME2_HALT, 1);
+		}
+		WREG32_SOC15(GC, 0, regCP_MEC_CNTL, data);
+	}
+
+	adev->gfx.kiq.ring.sched.ready = enable;
+
+	udelay(50);
+}
+
+static int gfx_v11_0_cp_compute_load_microcode(struct amdgpu_device *adev)
+{
+	const struct gfx_firmware_header_v1_0 *mec_hdr;
+	const __le32 *fw_data;
+	unsigned i, fw_size;
+	u32 *fw = NULL;
+	int r;
+
+	if (!adev->gfx.mec_fw)
+		return -EINVAL;
+
+	gfx_v11_0_cp_compute_enable(adev, false);
+
+	mec_hdr = (const struct gfx_firmware_header_v1_0 *)adev->gfx.mec_fw->data;
+	amdgpu_ucode_print_gfx_hdr(&mec_hdr->header);
+
+	fw_data = (const __le32 *)
+		(adev->gfx.mec_fw->data +
+		 le32_to_cpu(mec_hdr->header.ucode_array_offset_bytes));
+	fw_size = le32_to_cpu(mec_hdr->header.ucode_size_bytes);
+
+	r = amdgpu_bo_create_reserved(adev, mec_hdr->header.ucode_size_bytes,
+					  PAGE_SIZE, AMDGPU_GEM_DOMAIN_GTT,
+					  &adev->gfx.mec.mec_fw_obj,
+					  &adev->gfx.mec.mec_fw_gpu_addr,
+					  (void **)&fw);
+	if (r) {
+		dev_err(adev->dev, "(%d) failed to create mec fw bo\n", r);
+		gfx_v11_0_mec_fini(adev);
+		return r;
+	}
+
+	memcpy(fw, fw_data, fw_size);
+
+	amdgpu_bo_kunmap(adev->gfx.mec.mec_fw_obj);
+	amdgpu_bo_unreserve(adev->gfx.mec.mec_fw_obj);
+
+	gfx_v11_0_config_mec_cache(adev, adev->gfx.mec.mec_fw_gpu_addr);
+
+	/* MEC1 */
+	WREG32_SOC15(GC, 0, regCP_MEC_ME1_UCODE_ADDR, 0);
+
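+	/* load the MEC jump table */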
+	for (i = 0; i < mec_hdr->jt_size; i++)
+		WREG32_SOC15(GC, 0, regCP_MEC_ME1_UCODE_DATA,
+			     le32_to_cpup(fw_data + mec_hdr->jt_offset + i));
+
+	WREG32_SOC15(GC, 0, regCP_MEC_ME1_UCODE_ADDR, adev->gfx.mec_fw_version);
+
+	return 0;
+}
+
+static int gfx_v11_0_cp_compute_load_microcode_rs64(struct amdgpu_device *adev)
+{
+	const struct gfx_firmware_header_v2_0 *mec_hdr;
+	const __le32 *fw_ucode, *fw_data;
+	u32 tmp, fw_ucode_size, fw_data_size;
+	u32 i, usec_timeout = 50000; /* Wait for 50 ms */
+	u32 *fw_ucode_ptr, *fw_data_ptr;
+	int r;
+
+	if (!adev->gfx.mec_fw)
+		return -EINVAL;
+
+	gfx_v11_0_cp_compute_enable(adev, false);
+
+	mec_hdr = (const struct gfx_firmware_header_v2_0 *)adev->gfx.mec_fw->data;
+	amdgpu_ucode_print_gfx_hdr(&mec_hdr->header);
+
+	fw_ucode = (const __le32 *) (adev->gfx.mec_fw->data +
+				le32_to_cpu(mec_hdr->ucode_offset_bytes));
+	fw_ucode_size = le32_to_cpu(mec_hdr->ucode_size_bytes);
+
+	fw_data = (const __le32 *) (adev->gfx.mec_fw->data +
+				le32_to_cpu(mec_hdr->data_offset_bytes));
+	fw_data_size = le32_to_cpu(mec_hdr->data_size_bytes);
+
+	r = amdgpu_bo_create_reserved(adev, fw_ucode_size,
+				      64 * 1024, AMDGPU_GEM_DOMAIN_VRAM,
+				      &adev->gfx.mec.mec_fw_obj,
+				      &adev->gfx.mec.mec_fw_gpu_addr,
+				      (void **)&fw_ucode_ptr);
+	if (r) {
+		dev_err(adev->dev, "(%d) failed to create mec fw ucode bo\n", r);
+		gfx_v11_0_mec_fini(adev);
+		return r;
+	}
+
+	r = amdgpu_bo_create_reserved(adev, fw_data_size,
+				      64 * 1024, AMDGPU_GEM_DOMAIN_VRAM,
+				      &adev->gfx.mec.mec_fw_data_obj,
+				      &adev->gfx.mec.mec_fw_data_gpu_addr,
+				      (void **)&fw_data_ptr);
+	if (r) {
+		dev_err(adev->dev, "(%d) failed to create mec fw data bo\n", r);
+		gfx_v11_0_mec_fini(adev);
+		return r;
+	}
+
+	memcpy(fw_ucode_ptr, fw_ucode, fw_ucode_size);
+	memcpy(fw_data_ptr, fw_data, fw_data_size);
+
+	amdgpu_bo_kunmap(adev->gfx.mec.mec_fw_obj);
+	amdgpu_bo_kunmap(adev->gfx.mec.mec_fw_data_obj);
+	amdgpu_bo_unreserve(adev->gfx.mec.mec_fw_obj);
+	amdgpu_bo_unreserve(adev->gfx.mec.mec_fw_data_obj);
+
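+	/* configure the MEC instruction and data cache controls for VMID 0 */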
+	tmp = RREG32_SOC15(GC, 0, regCP_CPC_IC_BASE_CNTL);
+	tmp = REG_SET_FIELD(tmp, CP_CPC_IC_BASE_CNTL, VMID, 0);
+	tmp = REG_SET_FIELD(tmp, CP_CPC_IC_BASE_CNTL, EXE_DISABLE, 0);
+	tmp = REG_SET_FIELD(tmp, CP_CPC_IC_BASE_CNTL, CACHE_POLICY, 0);
+	WREG32_SOC15(GC, 0, regCP_CPC_IC_BASE_CNTL, tmp);
+
+	tmp = RREG32_SOC15(GC, 0, regCP_MEC_DC_BASE_CNTL);
+	tmp = REG_SET_FIELD(tmp, CP_MEC_DC_BASE_CNTL, VMID, 0);
+	tmp = REG_SET_FIELD(tmp, CP_MEC_DC_BASE_CNTL, CACHE_POLICY, 0);
+	WREG32_SOC15(GC, 0, regCP_MEC_DC_BASE_CNTL, tmp);
+
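+	/* per compute pipe: program the data/instruction cache bases and the RS64 start address */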
+	mutex_lock(&adev->srbm_mutex);
+	for (i = 0; i < adev->gfx.mec.num_pipe_per_mec; i++) {
+		soc21_grbm_select(adev, 1, i, 0, 0);
+
+		WREG32_SOC15(GC, 0, regCP_MEC_MDBASE_LO, adev->gfx.mec.mec_fw_data_gpu_addr);
+		WREG32_SOC15(GC, 0, regCP_MEC_MDBASE_HI,
+		     upper_32_bits(adev->gfx.mec.mec_fw_data_gpu_addr));
+
+		WREG32_SOC15(GC, 0, regCP_MEC_RS64_PRGRM_CNTR_START,
+					mec_hdr->ucode_start_addr_lo >> 2 |
+					mec_hdr->ucode_start_addr_hi << 30);
+		WREG32_SOC15(GC, 0, regCP_MEC_RS64_PRGRM_CNTR_START_HI,
+					mec_hdr->ucode_start_addr_hi >> 2);
+
+		WREG32_SOC15(GC, 0, regCP_CPC_IC_BASE_LO, adev->gfx.mec.mec_fw_gpu_addr);
+		WREG32_SOC15(GC, 0, regCP_CPC_IC_BASE_HI,
+		     upper_32_bits(adev->gfx.mec.mec_fw_gpu_addr));
+	}
+	mutex_unlock(&adev->srbm_mutex);
+	soc21_grbm_select(adev, 0, 0, 0, 0);
+
+	/* Trigger an invalidation of the L1 data caches */
+	tmp = RREG32_SOC15(GC, 0, regCP_MEC_DC_OP_CNTL);
+	tmp = REG_SET_FIELD(tmp, CP_MEC_DC_OP_CNTL, INVALIDATE_DCACHE, 1);
+	WREG32_SOC15(GC, 0, regCP_MEC_DC_OP_CNTL, tmp);
+
+	/* Wait for invalidation complete */
+	for (i = 0; i < usec_timeout; i++) {
+		tmp = RREG32_SOC15(GC, 0, regCP_MEC_DC_OP_CNTL);
+		if (1 == REG_GET_FIELD(tmp, CP_MEC_DC_OP_CNTL,
+				       INVALIDATE_DCACHE_COMPLETE))
+			break;
+		udelay(1);
+	}
+
+	if (i >= usec_timeout) {
+		dev_err(adev->dev, "failed to invalidate instruction cache\n");
+		return -EINVAL;
+	}
+
+	/* Trigger an invalidation of the L1 instruction caches */
+	tmp = RREG32_SOC15(GC, 0, regCP_CPC_IC_OP_CNTL);
+	tmp = REG_SET_FIELD(tmp, CP_CPC_IC_OP_CNTL, INVALIDATE_CACHE, 1);
+	WREG32_SOC15(GC, 0, regCP_CPC_IC_OP_CNTL, tmp);
+
+	/* Wait for invalidation complete */
+	for (i = 0; i < usec_timeout; i++) {
+		tmp = RREG32_SOC15(GC, 0, regCP_CPC_IC_OP_CNTL);
+		if (1 == REG_GET_FIELD(tmp, CP_CPC_IC_OP_CNTL,
+				       INVALIDATE_CACHE_COMPLETE))
+			break;
+		udelay(1);
+	}
+
+	if (i >= usec_timeout) {
+		dev_err(adev->dev, "failed to invalidate instruction cache\n");
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static void gfx_v11_0_kiq_setting(struct amdgpu_ring *ring)
+{
+	uint32_t tmp;
+	struct amdgpu_device *adev = ring->adev;
+
+	/* tell RLC which is KIQ queue */
+	tmp = RREG32_SOC15(GC, 0, regRLC_CP_SCHEDULERS);
+	tmp &= 0xffffff00;
+	tmp |= (ring->me << 5) | (ring->pipe << 3) | (ring->queue);
+	WREG32_SOC15(GC, 0, regRLC_CP_SCHEDULERS, tmp);
+	tmp |= 0x80;
+	WREG32_SOC15(GC, 0, regRLC_CP_SCHEDULERS, tmp);
+}
+
+static void gfx_v11_0_cp_set_doorbell_range(struct amdgpu_device *adev)
+{
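+	/* doorbell indices are in 64-bit doorbell units; (index * 2) << 2 gives the byte offset */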
+	/* set graphics engine doorbell range */
+	WREG32_SOC15(GC, 0, regCP_RB_DOORBELL_RANGE_LOWER,
+		     (adev->doorbell_index.gfx_ring0 * 2) << 2);
+	WREG32_SOC15(GC, 0, regCP_RB_DOORBELL_RANGE_UPPER,
+		     (adev->doorbell_index.gfx_userqueue_end * 2) << 2);
+
+	/* set compute engine doorbell range */
+	WREG32_SOC15(GC, 0, regCP_MEC_DOORBELL_RANGE_LOWER,
+		     (adev->doorbell_index.kiq * 2) << 2);
+	WREG32_SOC15(GC, 0, regCP_MEC_DOORBELL_RANGE_UPPER,
+		     (adev->doorbell_index.userqueue_end * 2) << 2);
+}
+
+static int gfx_v11_0_gfx_mqd_init(struct amdgpu_device *adev, void *m,
+				  struct amdgpu_mqd_prop *prop)
+{
+	struct v11_gfx_mqd *mqd = m;
+	uint64_t hqd_gpu_addr, wb_gpu_addr;
+	uint32_t tmp;
+	uint32_t rb_bufsz;
+
+	/* set up gfx hqd wptr */
+	mqd->cp_gfx_hqd_wptr = 0;
+	mqd->cp_gfx_hqd_wptr_hi = 0;
+
+	/* set the pointer to the MQD */
+	mqd->cp_mqd_base_addr = prop->mqd_gpu_addr & 0xfffffffc;
+	mqd->cp_mqd_base_addr_hi = upper_32_bits(prop->mqd_gpu_addr);
+
+	/* set up mqd control */
+	tmp = RREG32_SOC15(GC, 0, regCP_GFX_MQD_CONTROL);
+	tmp = REG_SET_FIELD(tmp, CP_GFX_MQD_CONTROL, VMID, 0);
+	tmp = REG_SET_FIELD(tmp, CP_GFX_MQD_CONTROL, PRIV_STATE, 1);
+	tmp = REG_SET_FIELD(tmp, CP_GFX_MQD_CONTROL, CACHE_POLICY, 0);
+	mqd->cp_gfx_mqd_control = tmp;
+
+	/* set up gfx_hqd_vmid with 0x0 to indicate the ring buffer's vmid */
+	tmp = RREG32_SOC15(GC, 0, regCP_GFX_HQD_VMID);
+	tmp = REG_SET_FIELD(tmp, CP_GFX_HQD_VMID, VMID, 0);
+	mqd->cp_gfx_hqd_vmid = 0;
+
+	/* set up default queue priority level
+	 * 0x0 = low priority, 0x1 = high priority */
+	tmp = RREG32_SOC15(GC, 0, regCP_GFX_HQD_QUEUE_PRIORITY);
+	tmp = REG_SET_FIELD(tmp, CP_GFX_HQD_QUEUE_PRIORITY, PRIORITY_LEVEL, 0);
+	mqd->cp_gfx_hqd_queue_priority = tmp;
+
+	/* set up time quantum */
+	tmp = RREG32_SOC15(GC, 0, regCP_GFX_HQD_QUANTUM);
+	tmp = REG_SET_FIELD(tmp, CP_GFX_HQD_QUANTUM, QUANTUM_EN, 1);
+	mqd->cp_gfx_hqd_quantum = tmp;
+
+	/* set up gfx hqd base. this is similar to CP_RB_BASE */
+	hqd_gpu_addr = prop->hqd_base_gpu_addr >> 8;
+	mqd->cp_gfx_hqd_base = hqd_gpu_addr;
+	mqd->cp_gfx_hqd_base_hi = upper_32_bits(hqd_gpu_addr);
+
+	/* set up hqd_rptr_addr/_hi, similar as CP_RB_RPTR */
+	wb_gpu_addr = prop->rptr_gpu_addr;
+	mqd->cp_gfx_hqd_rptr_addr = wb_gpu_addr & 0xfffffffc;
+	mqd->cp_gfx_hqd_rptr_addr_hi =
+		upper_32_bits(wb_gpu_addr) & 0xffff;
+
+	/* set up rb_wptr_poll addr */
+	wb_gpu_addr = prop->wptr_gpu_addr;
+	mqd->cp_rb_wptr_poll_addr_lo = wb_gpu_addr & 0xfffffffc;
+	mqd->cp_rb_wptr_poll_addr_hi = upper_32_bits(wb_gpu_addr) & 0xffff;
+
+	/* set up the gfx_hqd_control, similar to CP_RB0_CNTL */
+	rb_bufsz = order_base_2(prop->queue_size / 4) - 1;
+	tmp = RREG32_SOC15(GC, 0, regCP_GFX_HQD_CNTL);
+	tmp = REG_SET_FIELD(tmp, CP_GFX_HQD_CNTL, RB_BUFSZ, rb_bufsz);
+	tmp = REG_SET_FIELD(tmp, CP_GFX_HQD_CNTL, RB_BLKSZ, rb_bufsz - 2);
+#ifdef __BIG_ENDIAN
+	tmp = REG_SET_FIELD(tmp, CP_GFX_HQD_CNTL, BUF_SWAP, 1);
+#endif
+	mqd->cp_gfx_hqd_cntl = tmp;
+
+	/* set up cp_doorbell_control */
+	tmp = RREG32_SOC15(GC, 0, regCP_RB_DOORBELL_CONTROL);
+	if (prop->use_doorbell) {
+		tmp = REG_SET_FIELD(tmp, CP_RB_DOORBELL_CONTROL,
+				    DOORBELL_OFFSET, prop->doorbell_index);
+		tmp = REG_SET_FIELD(tmp, CP_RB_DOORBELL_CONTROL,
+				    DOORBELL_EN, 1);
+	} else
+		tmp = REG_SET_FIELD(tmp, CP_RB_DOORBELL_CONTROL,
+				    DOORBELL_EN, 0);
+	mqd->cp_rb_doorbell_control = tmp;
+
+	/* reset read and write pointers, similar to CP_RB0_WPTR/_RPTR */
+	mqd->cp_gfx_hqd_rptr = RREG32_SOC15(GC, 0, regCP_GFX_HQD_RPTR);
+
+	/* activate the queue */
+	mqd->cp_gfx_hqd_active = 1;
+
+	return 0;
+}
+
+#ifdef BRING_UP_DEBUG
+static int gfx_v11_0_gfx_queue_init_register(struct amdgpu_ring *ring)
+{
+	struct amdgpu_device *adev = ring->adev;
+	struct v11_gfx_mqd *mqd = ring->mqd_ptr;
+
+	/* set mmCP_GFX_HQD_WPTR/_HI to 0 */
+	WREG32_SOC15(GC, 0, regCP_GFX_HQD_WPTR, mqd->cp_gfx_hqd_wptr);
+	WREG32_SOC15(GC, 0, regCP_GFX_HQD_WPTR_HI, mqd->cp_gfx_hqd_wptr_hi);
+
+	/* set GFX_MQD_BASE */
+	WREG32_SOC15(GC, 0, regCP_MQD_BASE_ADDR, mqd->cp_mqd_base_addr);
+	WREG32_SOC15(GC, 0, regCP_MQD_BASE_ADDR_HI, mqd->cp_mqd_base_addr_hi);
+
+	/* set GFX_MQD_CONTROL */
+	WREG32_SOC15(GC, 0, regCP_GFX_MQD_CONTROL, mqd->cp_gfx_mqd_control);
+
+	/* set GFX_HQD_VMID to 0 */
+	WREG32_SOC15(GC, 0, regCP_GFX_HQD_VMID, mqd->cp_gfx_hqd_vmid);
+
+	WREG32_SOC15(GC, 0, regCP_GFX_HQD_QUEUE_PRIORITY,
+			mqd->cp_gfx_hqd_queue_priority);
+	WREG32_SOC15(GC, 0, regCP_GFX_HQD_QUANTUM, mqd->cp_gfx_hqd_quantum);
+
+	/* set GFX_HQD_BASE, similar as CP_RB_BASE */
+	WREG32_SOC15(GC, 0, regCP_GFX_HQD_BASE, mqd->cp_gfx_hqd_base);
+	WREG32_SOC15(GC, 0, regCP_GFX_HQD_BASE_HI, mqd->cp_gfx_hqd_base_hi);
+
+	/* set GFX_HQD_RPTR_ADDR, similar as CP_RB_RPTR */
+	WREG32_SOC15(GC, 0, regCP_GFX_HQD_RPTR_ADDR, mqd->cp_gfx_hqd_rptr_addr);
+	WREG32_SOC15(GC, 0, regCP_GFX_HQD_RPTR_ADDR_HI, mqd->cp_gfx_hqd_rptr_addr_hi);
+
+	/* set GFX_HQD_CNTL, similar as CP_RB_CNTL */
+	WREG32_SOC15(GC, 0, regCP_GFX_HQD_CNTL, mqd->cp_gfx_hqd_cntl);
+
+	/* set RB_WPTR_POLL_ADDR */
+	WREG32_SOC15(GC, 0, regCP_RB_WPTR_POLL_ADDR_LO, mqd->cp_rb_wptr_poll_addr_lo);
+	WREG32_SOC15(GC, 0, regCP_RB_WPTR_POLL_ADDR_HI, mqd->cp_rb_wptr_poll_addr_hi);
+
+	/* set RB_DOORBELL_CONTROL */
+	WREG32_SOC15(GC, 0, regCP_RB_DOORBELL_CONTROL, mqd->cp_rb_doorbell_control);
+
+	/* activate the queue */
+	WREG32_SOC15(GC, 0, regCP_GFX_HQD_ACTIVE, mqd->cp_gfx_hqd_active);
+
+	return 0;
+}
+#endif
+
+static int gfx_v11_0_gfx_init_queue(struct amdgpu_ring *ring)
+{
+	struct amdgpu_device *adev = ring->adev;
+	struct v11_gfx_mqd *mqd = ring->mqd_ptr;
+	int mqd_idx = ring - &adev->gfx.gfx_ring[0];
+
+	if (!amdgpu_in_reset(adev) && !adev->in_suspend) {
+		memset((void *)mqd, 0, sizeof(*mqd));
+		mutex_lock(&adev->srbm_mutex);
+		soc21_grbm_select(adev, ring->me, ring->pipe, ring->queue, 0);
+		amdgpu_ring_init_mqd(ring);
+#ifdef BRING_UP_DEBUG
+		gfx_v11_0_gfx_queue_init_register(ring);
+#endif
+		soc21_grbm_select(adev, 0, 0, 0, 0);
+		mutex_unlock(&adev->srbm_mutex);
+		if (adev->gfx.me.mqd_backup[mqd_idx])
+			memcpy(adev->gfx.me.mqd_backup[mqd_idx], mqd, sizeof(*mqd));
+	} else if (amdgpu_in_reset(adev)) {
+		/* reset mqd with the backup copy */
+		if (adev->gfx.me.mqd_backup[mqd_idx])
+			memcpy(mqd, adev->gfx.me.mqd_backup[mqd_idx], sizeof(*mqd));
+		/* reset the ring */
+		ring->wptr = 0;
+		*ring->wptr_cpu_addr = 0;
+		amdgpu_ring_clear_ring(ring);
+#ifdef BRING_UP_DEBUG
+		mutex_lock(&adev->srbm_mutex);
+		soc21_grbm_select(adev, ring->me, ring->pipe, ring->queue, 0);
+		gfx_v11_0_gfx_queue_init_register(ring);
+		soc21_grbm_select(adev, 0, 0, 0, 0);
+		mutex_unlock(&adev->srbm_mutex);
+#endif
+	} else {
+		amdgpu_ring_clear_ring(ring);
+	}
+
+	return 0;
+}
+
+#ifndef BRING_UP_DEBUG
+static int gfx_v11_0_kiq_enable_kgq(struct amdgpu_device *adev)
+{
+	struct amdgpu_kiq *kiq = &adev->gfx.kiq;
+	struct amdgpu_ring *kiq_ring = &adev->gfx.kiq.ring;
+	int r, i;
+
+	if (!kiq->pmf || !kiq->pmf->kiq_map_queues)
+		return -EINVAL;
+
+	r = amdgpu_ring_alloc(kiq_ring, kiq->pmf->map_queues_size *
+					adev->gfx.num_gfx_rings);
+	if (r) {
+		DRM_ERROR("Failed to lock KIQ (%d).\n", r);
+		return r;
+	}
+
+	for (i = 0; i < adev->gfx.num_gfx_rings; i++)
+		kiq->pmf->kiq_map_queues(kiq_ring, &adev->gfx.gfx_ring[i]);
+
+	return amdgpu_ring_test_helper(kiq_ring);
+}
+#endif
+
+static int gfx_v11_0_cp_async_gfx_ring_resume(struct amdgpu_device *adev)
+{
+	int r, i;
+	struct amdgpu_ring *ring;
+
+	for (i = 0; i < adev->gfx.num_gfx_rings; i++) {
+		ring = &adev->gfx.gfx_ring[i];
+
+		r = amdgpu_bo_reserve(ring->mqd_obj, false);
+		if (unlikely(r != 0))
+			goto done;
+
+		r = amdgpu_bo_kmap(ring->mqd_obj, (void **)&ring->mqd_ptr);
+		if (!r) {
+			r = gfx_v11_0_gfx_init_queue(ring);
+			amdgpu_bo_kunmap(ring->mqd_obj);
+			ring->mqd_ptr = NULL;
+		}
+		amdgpu_bo_unreserve(ring->mqd_obj);
+		if (r)
+			goto done;
+	}
+#ifndef BRING_UP_DEBUG
+	r = gfx_v11_0_kiq_enable_kgq(adev);
+	if (r)
+		goto done;
+#endif
+	r = gfx_v11_0_cp_gfx_start(adev);
+	if (r)
+		goto done;
+
+	for (i = 0; i < adev->gfx.num_gfx_rings; i++) {
+		ring = &adev->gfx.gfx_ring[i];
+		ring->sched.ready = true;
+	}
+done:
+	return r;
+}
+
+static int gfx_v11_0_compute_mqd_init(struct amdgpu_device *adev, void *m,
+				      struct amdgpu_mqd_prop *prop)
+{
+	struct v11_compute_mqd *mqd = m;
+	uint64_t hqd_gpu_addr, wb_gpu_addr, eop_base_addr;
+	uint32_t tmp;
+
+	mqd->header = 0xC0310800;
+	mqd->compute_pipelinestat_enable = 0x00000001;
+	mqd->compute_static_thread_mgmt_se0 = 0xffffffff;
+	mqd->compute_static_thread_mgmt_se1 = 0xffffffff;
+	mqd->compute_static_thread_mgmt_se2 = 0xffffffff;
+	mqd->compute_static_thread_mgmt_se3 = 0xffffffff;
+	mqd->compute_misc_reserved = 0x00000007;
+
+	eop_base_addr = prop->eop_gpu_addr >> 8;
+	mqd->cp_hqd_eop_base_addr_lo = eop_base_addr;
+	mqd->cp_hqd_eop_base_addr_hi = upper_32_bits(eop_base_addr);
+
+	/* set the EOP size, register value is 2^(EOP_SIZE+1) dwords */
+	tmp = RREG32_SOC15(GC, 0, regCP_HQD_EOP_CONTROL);
+	tmp = REG_SET_FIELD(tmp, CP_HQD_EOP_CONTROL, EOP_SIZE,
+			(order_base_2(GFX11_MEC_HPD_SIZE / 4) - 1));
+
+	mqd->cp_hqd_eop_control = tmp;
+
+	/* enable doorbell? */
+	tmp = RREG32_SOC15(GC, 0, regCP_HQD_PQ_DOORBELL_CONTROL);
+
+	if (prop->use_doorbell) {
+		tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_DOORBELL_CONTROL,
+				    DOORBELL_OFFSET, prop->doorbell_index);
+		tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_DOORBELL_CONTROL,
+				    DOORBELL_EN, 1);
+		tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_DOORBELL_CONTROL,
+				    DOORBELL_SOURCE, 0);
+		tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_DOORBELL_CONTROL,
+				    DOORBELL_HIT, 0);
+	} else {
+		tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_DOORBELL_CONTROL,
+				    DOORBELL_EN, 0);
+	}
+
+	mqd->cp_hqd_pq_doorbell_control = tmp;
+
+	/* disable the queue if it's active */
+	mqd->cp_hqd_dequeue_request = 0;
+	mqd->cp_hqd_pq_rptr = 0;
+	mqd->cp_hqd_pq_wptr_lo = 0;
+	mqd->cp_hqd_pq_wptr_hi = 0;
+
+	/* set the pointer to the MQD */
+	mqd->cp_mqd_base_addr_lo = prop->mqd_gpu_addr & 0xfffffffc;
+	mqd->cp_mqd_base_addr_hi = upper_32_bits(prop->mqd_gpu_addr);
+
+	/* set MQD vmid to 0 */
+	tmp = RREG32_SOC15(GC, 0, regCP_MQD_CONTROL);
+	tmp = REG_SET_FIELD(tmp, CP_MQD_CONTROL, VMID, 0);
+	mqd->cp_mqd_control = tmp;
+
+	/* set the pointer to the HQD, this is similar to CP_RB0_BASE/_HI */
+	hqd_gpu_addr = prop->hqd_base_gpu_addr >> 8;
+	mqd->cp_hqd_pq_base_lo = hqd_gpu_addr;
+	mqd->cp_hqd_pq_base_hi = upper_32_bits(hqd_gpu_addr);
+
+	/* set up the HQD, this is similar to CP_RB0_CNTL */
+	tmp = RREG32_SOC15(GC, 0, regCP_HQD_PQ_CONTROL);
+	tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, QUEUE_SIZE,
+			    (order_base_2(prop->queue_size / 4) - 1));
+	tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, RPTR_BLOCK_SIZE,
+			    ((order_base_2(AMDGPU_GPU_PAGE_SIZE / 4) - 1) << 8));
+#ifdef __BIG_ENDIAN
+	tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, ENDIAN_SWAP, 1);
+#endif
+	tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, UNORD_DISPATCH, 0);
+	tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, TUNNEL_DISPATCH, 0);
+	tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, PRIV_STATE, 1);
+	tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, KMD_QUEUE, 1);
+	mqd->cp_hqd_pq_control = tmp;
+
+	/* set the wb address whether it's enabled or not */
+	wb_gpu_addr = prop->rptr_gpu_addr;
+	mqd->cp_hqd_pq_rptr_report_addr_lo = wb_gpu_addr & 0xfffffffc;
+	mqd->cp_hqd_pq_rptr_report_addr_hi =
+		upper_32_bits(wb_gpu_addr) & 0xffff;
+
+	/* only used if CP_PQ_WPTR_POLL_CNTL.CP_PQ_WPTR_POLL_CNTL__EN_MASK=1 */
+	wb_gpu_addr = prop->wptr_gpu_addr;
+	mqd->cp_hqd_pq_wptr_poll_addr_lo = wb_gpu_addr & 0xfffffffc;
+	mqd->cp_hqd_pq_wptr_poll_addr_hi = upper_32_bits(wb_gpu_addr) & 0xffff;
+
+	tmp = 0;
+	/* enable the doorbell if requested */
+	if (prop->use_doorbell) {
+		tmp = RREG32_SOC15(GC, 0, regCP_HQD_PQ_DOORBELL_CONTROL);
+		tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_DOORBELL_CONTROL,
+				DOORBELL_OFFSET, prop->doorbell_index);
+
+		tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_DOORBELL_CONTROL,
+				    DOORBELL_EN, 1);
+		tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_DOORBELL_CONTROL,
+				    DOORBELL_SOURCE, 0);
+		tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_DOORBELL_CONTROL,
+				    DOORBELL_HIT, 0);
+	}
+
+	mqd->cp_hqd_pq_doorbell_control = tmp;
+
+	/* reset read and write pointers, similar to CP_RB0_WPTR/_RPTR */
+	mqd->cp_hqd_pq_rptr = RREG32_SOC15(GC, 0, regCP_HQD_PQ_RPTR);
+
+	/* set the vmid for the queue */
+	mqd->cp_hqd_vmid = 0;
+
+	tmp = RREG32_SOC15(GC, 0, regCP_HQD_PERSISTENT_STATE);
+	tmp = REG_SET_FIELD(tmp, CP_HQD_PERSISTENT_STATE, PRELOAD_SIZE, 0x55);
+	mqd->cp_hqd_persistent_state = tmp;
+
+	/* set MIN_IB_AVAIL_SIZE */
+	tmp = RREG32_SOC15(GC, 0, regCP_HQD_IB_CONTROL);
+	tmp = REG_SET_FIELD(tmp, CP_HQD_IB_CONTROL, MIN_IB_AVAIL_SIZE, 3);
+	mqd->cp_hqd_ib_control = tmp;
+
+	/* set static priority for a compute queue/ring */
+	mqd->cp_hqd_pipe_priority = prop->hqd_pipe_priority;
+	mqd->cp_hqd_queue_priority = prop->hqd_queue_priority;
+
+	mqd->cp_hqd_active = prop->hqd_active;
+
+	return 0;
+}
+
+static int gfx_v11_0_kiq_init_register(struct amdgpu_ring *ring)
+{
+	struct amdgpu_device *adev = ring->adev;
+	struct v11_compute_mqd *mqd = ring->mqd_ptr;
+	int j;
+
+	/* deactivate the queue */
+	if (amdgpu_sriov_vf(adev))
+		WREG32_SOC15(GC, 0, regCP_HQD_ACTIVE, 0);
+
+	/* disable wptr polling */
+	WREG32_FIELD15_PREREG(GC, 0, CP_PQ_WPTR_POLL_CNTL, EN, 0);
+
+	/* write the EOP addr */
+	WREG32_SOC15(GC, 0, regCP_HQD_EOP_BASE_ADDR,
+	       mqd->cp_hqd_eop_base_addr_lo);
+	WREG32_SOC15(GC, 0, regCP_HQD_EOP_BASE_ADDR_HI,
+	       mqd->cp_hqd_eop_base_addr_hi);
+
+	/* set the EOP size, register value is 2^(EOP_SIZE+1) dwords */
+	WREG32_SOC15(GC, 0, regCP_HQD_EOP_CONTROL,
+	       mqd->cp_hqd_eop_control);
+
+	/* enable doorbell? */
+	WREG32_SOC15(GC, 0, regCP_HQD_PQ_DOORBELL_CONTROL,
+	       mqd->cp_hqd_pq_doorbell_control);
+
+	/* disable the queue if it's active */
+	if (RREG32_SOC15(GC, 0, regCP_HQD_ACTIVE) & 1) {
+		WREG32_SOC15(GC, 0, regCP_HQD_DEQUEUE_REQUEST, 1);
+		for (j = 0; j < adev->usec_timeout; j++) {
+			if (!(RREG32_SOC15(GC, 0, regCP_HQD_ACTIVE) & 1))
+				break;
+			udelay(1);
+		}
+		WREG32_SOC15(GC, 0, regCP_HQD_DEQUEUE_REQUEST,
+		       mqd->cp_hqd_dequeue_request);
+		WREG32_SOC15(GC, 0, regCP_HQD_PQ_RPTR,
+		       mqd->cp_hqd_pq_rptr);
+		WREG32_SOC15(GC, 0, regCP_HQD_PQ_WPTR_LO,
+		       mqd->cp_hqd_pq_wptr_lo);
+		WREG32_SOC15(GC, 0, regCP_HQD_PQ_WPTR_HI,
+		       mqd->cp_hqd_pq_wptr_hi);
+	}
+
+	/* set the pointer to the MQD */
+	WREG32_SOC15(GC, 0, regCP_MQD_BASE_ADDR,
+	       mqd->cp_mqd_base_addr_lo);
+	WREG32_SOC15(GC, 0, regCP_MQD_BASE_ADDR_HI,
+	       mqd->cp_mqd_base_addr_hi);
+
+	/* set MQD vmid to 0 */
+	WREG32_SOC15(GC, 0, regCP_MQD_CONTROL,
+	       mqd->cp_mqd_control);
+
+	/* set the pointer to the HQD, this is similar to CP_RB0_BASE/_HI */
+	WREG32_SOC15(GC, 0, regCP_HQD_PQ_BASE,
+	       mqd->cp_hqd_pq_base_lo);
+	WREG32_SOC15(GC, 0, regCP_HQD_PQ_BASE_HI,
+	       mqd->cp_hqd_pq_base_hi);
+
+	/* set up the HQD, this is similar to CP_RB0_CNTL */
+	WREG32_SOC15(GC, 0, regCP_HQD_PQ_CONTROL,
+	       mqd->cp_hqd_pq_control);
+
+	/* set the wb address whether it's enabled or not */
+	WREG32_SOC15(GC, 0, regCP_HQD_PQ_RPTR_REPORT_ADDR,
+		mqd->cp_hqd_pq_rptr_report_addr_lo);
+	WREG32_SOC15(GC, 0, regCP_HQD_PQ_RPTR_REPORT_ADDR_HI,
+		mqd->cp_hqd_pq_rptr_report_addr_hi);
+
+	/* only used if CP_PQ_WPTR_POLL_CNTL.CP_PQ_WPTR_POLL_CNTL__EN_MASK=1 */
+	WREG32_SOC15(GC, 0, regCP_HQD_PQ_WPTR_POLL_ADDR,
+	       mqd->cp_hqd_pq_wptr_poll_addr_lo);
+	WREG32_SOC15(GC, 0, regCP_HQD_PQ_WPTR_POLL_ADDR_HI,
+	       mqd->cp_hqd_pq_wptr_poll_addr_hi);
+
+	/* enable the doorbell if requested */
+	if (ring->use_doorbell) {
+		WREG32_SOC15(GC, 0, regCP_MEC_DOORBELL_RANGE_LOWER,
+			(adev->doorbell_index.kiq * 2) << 2);
+		WREG32_SOC15(GC, 0, regCP_MEC_DOORBELL_RANGE_UPPER,
+			(adev->doorbell_index.userqueue_end * 2) << 2);
+	}
+
+	WREG32_SOC15(GC, 0, regCP_HQD_PQ_DOORBELL_CONTROL,
+	       mqd->cp_hqd_pq_doorbell_control);
+
+	/* reset read and write pointers, similar to CP_RB0_WPTR/_RPTR */
+	WREG32_SOC15(GC, 0, regCP_HQD_PQ_WPTR_LO,
+	       mqd->cp_hqd_pq_wptr_lo);
+	WREG32_SOC15(GC, 0, regCP_HQD_PQ_WPTR_HI,
+	       mqd->cp_hqd_pq_wptr_hi);
+
+	/* set the vmid for the queue */
+	WREG32_SOC15(GC, 0, regCP_HQD_VMID, mqd->cp_hqd_vmid);
+
+	WREG32_SOC15(GC, 0, regCP_HQD_PERSISTENT_STATE,
+	       mqd->cp_hqd_persistent_state);
+
+	/* activate the queue */
+	WREG32_SOC15(GC, 0, regCP_HQD_ACTIVE,
+	       mqd->cp_hqd_active);
+
+	if (ring->use_doorbell)
+		WREG32_FIELD15_PREREG(GC, 0, CP_PQ_STATUS, DOORBELL_ENABLE, 1);
+
+	return 0;
+}
+
+static int gfx_v11_0_kiq_init_queue(struct amdgpu_ring *ring)
+{
+	struct amdgpu_device *adev = ring->adev;
+	struct v11_compute_mqd *mqd = ring->mqd_ptr;
+	int mqd_idx = AMDGPU_MAX_COMPUTE_RINGS;
+
+	gfx_v11_0_kiq_setting(ring);
+
+	if (amdgpu_in_reset(adev)) { /* for GPU_RESET case */
+		/* reset MQD to a clean status */
+		if (adev->gfx.mec.mqd_backup[mqd_idx])
+			memcpy(mqd, adev->gfx.mec.mqd_backup[mqd_idx], sizeof(*mqd));
+
+		/* reset ring buffer */
+		ring->wptr = 0;
+		amdgpu_ring_clear_ring(ring);
+
+		mutex_lock(&adev->srbm_mutex);
+		soc21_grbm_select(adev, ring->me, ring->pipe, ring->queue, 0);
+		gfx_v11_0_kiq_init_register(ring);
+		soc21_grbm_select(adev, 0, 0, 0, 0);
+		mutex_unlock(&adev->srbm_mutex);
+	} else {
+		memset((void *)mqd, 0, sizeof(*mqd));
+		mutex_lock(&adev->srbm_mutex);
+		soc21_grbm_select(adev, ring->me, ring->pipe, ring->queue, 0);
+		amdgpu_ring_init_mqd(ring);
+		gfx_v11_0_kiq_init_register(ring);
+		soc21_grbm_select(adev, 0, 0, 0, 0);
+		mutex_unlock(&adev->srbm_mutex);
+
+		if (adev->gfx.mec.mqd_backup[mqd_idx])
+			memcpy(adev->gfx.mec.mqd_backup[mqd_idx], mqd, sizeof(*mqd));
+	}
+
+	return 0;
+}
+
+static int gfx_v11_0_kcq_init_queue(struct amdgpu_ring *ring)
+{
+	struct amdgpu_device *adev = ring->adev;
+	struct v11_compute_mqd *mqd = ring->mqd_ptr;
+	int mqd_idx = ring - &adev->gfx.compute_ring[0];
+
+	if (!amdgpu_in_reset(adev) && !adev->in_suspend) {
+		memset((void *)mqd, 0, sizeof(*mqd));
+		mutex_lock(&adev->srbm_mutex);
+		soc21_grbm_select(adev, ring->me, ring->pipe, ring->queue, 0);
+		amdgpu_ring_init_mqd(ring);
+		soc21_grbm_select(adev, 0, 0, 0, 0);
+		mutex_unlock(&adev->srbm_mutex);
+
+		if (adev->gfx.mec.mqd_backup[mqd_idx])
+			memcpy(adev->gfx.mec.mqd_backup[mqd_idx], mqd, sizeof(*mqd));
+	} else if (amdgpu_in_reset(adev)) { /* for GPU_RESET case */
+		/* reset MQD to a clean status */
+		if (adev->gfx.mec.mqd_backup[mqd_idx])
+			memcpy(mqd, adev->gfx.mec.mqd_backup[mqd_idx], sizeof(*mqd));
+
+		/* reset ring buffer */
+		ring->wptr = 0;
+		atomic64_set((atomic64_t *)ring->wptr_cpu_addr, 0);
+		amdgpu_ring_clear_ring(ring);
+	} else {
+		amdgpu_ring_clear_ring(ring);
+	}
+
+	return 0;
+}
+
+static int gfx_v11_0_kiq_resume(struct amdgpu_device *adev)
+{
+	struct amdgpu_ring *ring;
+	int r;
+
+	ring = &adev->gfx.kiq.ring;
+
+	r = amdgpu_bo_reserve(ring->mqd_obj, false);
+	if (unlikely(r != 0))
+		return r;
+
+	r = amdgpu_bo_kmap(ring->mqd_obj, (void **)&ring->mqd_ptr);
+	if (unlikely(r != 0))
+		return r;
+
+	gfx_v11_0_kiq_init_queue(ring);
+	amdgpu_bo_kunmap(ring->mqd_obj);
+	ring->mqd_ptr = NULL;
+	amdgpu_bo_unreserve(ring->mqd_obj);
+	ring->sched.ready = true;
+	return 0;
+}
+
+static int gfx_v11_0_kcq_resume(struct amdgpu_device *adev)
+{
+	struct amdgpu_ring *ring = NULL;
+	int r = 0, i;
+
+	if (!amdgpu_async_gfx_ring)
+		gfx_v11_0_cp_compute_enable(adev, true);
+
+	for (i = 0; i < adev->gfx.num_compute_rings; i++) {
+		ring = &adev->gfx.compute_ring[i];
+
+		r = amdgpu_bo_reserve(ring->mqd_obj, false);
+		if (unlikely(r != 0))
+			goto done;
+		r = amdgpu_bo_kmap(ring->mqd_obj, (void **)&ring->mqd_ptr);
+		if (!r) {
+			r = gfx_v11_0_kcq_init_queue(ring);
+			amdgpu_bo_kunmap(ring->mqd_obj);
+			ring->mqd_ptr = NULL;
+		}
+		amdgpu_bo_unreserve(ring->mqd_obj);
+		if (r)
+			goto done;
+	}
+
+	r = amdgpu_gfx_enable_kcq(adev);
+done:
+	return r;
+}
+
+static int gfx_v11_0_cp_resume(struct amdgpu_device *adev)
+{
+	int r, i;
+	struct amdgpu_ring *ring;
+
+	if (!(adev->flags & AMD_IS_APU))
+		gfx_v11_0_enable_gui_idle_interrupt(adev, false);
+
+	if (adev->firmware.load_type == AMDGPU_FW_LOAD_DIRECT) {
+		/* legacy firmware loading */
+		r = gfx_v11_0_cp_gfx_load_microcode(adev);
+		if (r)
+			return r;
+
+		if (adev->gfx.rs64_enable)
+			r = gfx_v11_0_cp_compute_load_microcode_rs64(adev);
+		else
+			r = gfx_v11_0_cp_compute_load_microcode(adev);
+		if (r)
+			return r;
+	}
+
+	gfx_v11_0_cp_set_doorbell_range(adev);
+
+	if (amdgpu_async_gfx_ring) {
+		gfx_v11_0_cp_compute_enable(adev, true);
+		gfx_v11_0_cp_gfx_enable(adev, true);
+	}
+
+	if (adev->enable_mes_kiq && adev->mes.kiq_hw_init)
+		r = amdgpu_mes_kiq_hw_init(adev);
+	else
+		r = gfx_v11_0_kiq_resume(adev);
+	if (r)
+		return r;
+
+	r = gfx_v11_0_kcq_resume(adev);
+	if (r)
+		return r;
+
+	if (!amdgpu_async_gfx_ring) {
+		r = gfx_v11_0_cp_gfx_resume(adev);
+		if (r)
+			return r;
+	} else {
+		r = gfx_v11_0_cp_async_gfx_ring_resume(adev);
+		if (r)
+			return r;
+	}
+
+	for (i = 0; i < adev->gfx.num_gfx_rings; i++) {
+		ring = &adev->gfx.gfx_ring[i];
+		r = amdgpu_ring_test_helper(ring);
+		if (r)
+			return r;
+	}
+
+	for (i = 0; i < adev->gfx.num_compute_rings; i++) {
+		ring = &adev->gfx.compute_ring[i];
+		r = amdgpu_ring_test_helper(ring);
+		if (r)
+			return r;
+	}
+
+	return 0;
+}
+
+static void gfx_v11_0_cp_enable(struct amdgpu_device *adev, bool enable)
+{
+	gfx_v11_0_cp_gfx_enable(adev, enable);
+	gfx_v11_0_cp_compute_enable(adev, enable);
+}
+
+static int gfx_v11_0_gfxhub_enable(struct amdgpu_device *adev)
+{
+	int r;
+	bool value;
+
+	r = adev->gfxhub.funcs->gart_enable(adev);
+	if (r)
+		return r;
+
+	adev->hdp.funcs->flush_hdp(adev, NULL);
+
+	value = (amdgpu_vm_fault_stop == AMDGPU_VM_FAULT_STOP_ALWAYS) ?
+		false : true;
+
+	adev->gfxhub.funcs->set_fault_enable_default(adev, value);
+	amdgpu_gmc_flush_gpu_tlb(adev, 0, AMDGPU_GFXHUB_0, 0);
+
+	return 0;
+}
+
+static void gfx_v11_0_select_cp_fw_arch(struct amdgpu_device *adev)
+{
+	u32 tmp;
+
+	/* select RS64 */
+	if (adev->gfx.rs64_enable) {
+		tmp = RREG32_SOC15(GC, 0, regCP_GFX_CNTL);
+		tmp = REG_SET_FIELD(tmp, CP_GFX_CNTL, ENGINE_SEL, 1);
+		WREG32_SOC15(GC, 0, regCP_GFX_CNTL, tmp);
+
+		tmp = RREG32_SOC15(GC, 0, regCP_MEC_ISA_CNTL);
+		tmp = REG_SET_FIELD(tmp, CP_MEC_ISA_CNTL, ISA_MODE, 1);
+		WREG32_SOC15(GC, 0, regCP_MEC_ISA_CNTL, tmp);
+	}
+
+	if (amdgpu_emu_mode == 1)
+		msleep(100);
+}
+
+static int get_gb_addr_config(struct amdgpu_device *adev)
+{
+	u32 gb_addr_config;
+
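+	/* decode GB_ADDR_CONFIG into the cached gfx config fields */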
+	gb_addr_config = RREG32_SOC15(GC, 0, regGB_ADDR_CONFIG);
+	if (gb_addr_config == 0)
+		return -EINVAL;
+
+	adev->gfx.config.gb_addr_config_fields.num_pkrs =
+		1 << REG_GET_FIELD(gb_addr_config, GB_ADDR_CONFIG, NUM_PKRS);
+
+	adev->gfx.config.gb_addr_config = gb_addr_config;
+
+	adev->gfx.config.gb_addr_config_fields.num_pipes = 1 <<
+			REG_GET_FIELD(adev->gfx.config.gb_addr_config,
+				      GB_ADDR_CONFIG, NUM_PIPES);
+
+	adev->gfx.config.max_tile_pipes =
+		adev->gfx.config.gb_addr_config_fields.num_pipes;
+
+	adev->gfx.config.gb_addr_config_fields.max_compress_frags = 1 <<
+			REG_GET_FIELD(adev->gfx.config.gb_addr_config,
+				      GB_ADDR_CONFIG, MAX_COMPRESSED_FRAGS);
+	adev->gfx.config.gb_addr_config_fields.num_rb_per_se = 1 <<
+			REG_GET_FIELD(adev->gfx.config.gb_addr_config,
+				      GB_ADDR_CONFIG, NUM_RB_PER_SE);
+	adev->gfx.config.gb_addr_config_fields.num_se = 1 <<
+			REG_GET_FIELD(adev->gfx.config.gb_addr_config,
+				      GB_ADDR_CONFIG, NUM_SHADER_ENGINES);
+	adev->gfx.config.gb_addr_config_fields.pipe_interleave_size = 1 << (8 +
+			REG_GET_FIELD(adev->gfx.config.gb_addr_config,
+				      GB_ADDR_CONFIG, PIPE_INTERLEAVE_SIZE));
+
+	return 0;
+}
+
+static void gfx_v11_0_disable_gpa_mode(struct amdgpu_device *adev)
+{
+	uint32_t data;
+
+	data = RREG32_SOC15(GC, 0, regCPC_PSP_DEBUG);
+	data |= CPC_PSP_DEBUG__GPA_OVERRIDE_MASK;
+	WREG32_SOC15(GC, 0, regCPC_PSP_DEBUG, data);
+
+	data = RREG32_SOC15(GC, 0, regCPG_PSP_DEBUG);
+	data |= CPG_PSP_DEBUG__GPA_OVERRIDE_MASK;
+	WREG32_SOC15(GC, 0, regCPG_PSP_DEBUG, data);
+}
+
+static int gfx_v11_0_hw_init(void *handle)
+{
+	int r;
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	if (adev->firmware.load_type == AMDGPU_FW_LOAD_RLC_BACKDOOR_AUTO) {
+		if (adev->gfx.imu.funcs) {
+			/* RLC autoload sequence 1: Program rlc ram */
+			if (adev->gfx.imu.funcs->program_rlc_ram)
+				adev->gfx.imu.funcs->program_rlc_ram(adev);
+		}
+		/* rlc autoload firmware */
+		r = gfx_v11_0_rlc_backdoor_autoload_enable(adev);
+		if (r)
+			return r;
+	} else {
+		if (adev->firmware.load_type == AMDGPU_FW_LOAD_DIRECT) {
+			if (adev->gfx.imu.funcs && (amdgpu_dpm > 0)) {
+				if (adev->gfx.imu.funcs->load_microcode)
+					adev->gfx.imu.funcs->load_microcode(adev);
+				if (adev->gfx.imu.funcs->setup_imu)
+					adev->gfx.imu.funcs->setup_imu(adev);
+				if (adev->gfx.imu.funcs->start_imu)
+					adev->gfx.imu.funcs->start_imu(adev);
+			}
+		}
+	}
+
+	if ((adev->firmware.load_type == AMDGPU_FW_LOAD_RLC_BACKDOOR_AUTO) ||
+	    (adev->firmware.load_type == AMDGPU_FW_LOAD_PSP)) {
+		r = gfx_v11_0_wait_for_rlc_autoload_complete(adev);
+		if (r) {
+			dev_err(adev->dev, "(%d) failed to wait rlc autoload complete\n", r);
+			return r;
+		}
+	}
+
+	adev->gfx.is_poweron = true;
+
+	if (get_gb_addr_config(adev))
+		DRM_WARN("Invalid gb_addr_config!\n");
+
+	if (adev->firmware.load_type == AMDGPU_FW_LOAD_PSP &&
+	    adev->gfx.rs64_enable)
+		gfx_v11_0_config_gfx_rs64(adev);
+
+	r = gfx_v11_0_gfxhub_enable(adev);
+	if (r)
+		return r;
+
+	if (!amdgpu_emu_mode)
+		gfx_v11_0_init_golden_registers(adev);
+
+	if ((adev->firmware.load_type == AMDGPU_FW_LOAD_DIRECT) ||
+	    (adev->firmware.load_type == AMDGPU_FW_LOAD_RLC_BACKDOOR_AUTO && amdgpu_dpm == 1)) {
+		/*
+		 * For gfx 11, RLC firmware loading relies on the SMU firmware
+		 * being loaded first, so for the direct load type the SMC
+		 * ucode has to be loaded here before the RLC.
+		 */
+		if (!(adev->flags & AMD_IS_APU)) {
+			r = amdgpu_pm_load_smu_firmware(adev, NULL);
+			if (r)
+				return r;
+		}
+	}
+
+	gfx_v11_0_constants_init(adev);
+
+	if (adev->firmware.load_type != AMDGPU_FW_LOAD_PSP)
+		gfx_v11_0_select_cp_fw_arch(adev);
+
+	adev->nbio.funcs->gc_doorbell_init(adev);
+
+	r = gfx_v11_0_rlc_resume(adev);
+	if (r)
+		return r;
+
+	/*
+	 * golden register init and rlc resume may override some registers,
+	 * so reconfigure them here
+	 */
+	gfx_v11_0_tcp_harvest(adev);
+
+	r = gfx_v11_0_cp_resume(adev);
+	if (r)
+		return r;
+
+	return r;
+}
+
+#ifndef BRING_UP_DEBUG
+static int gfx_v11_0_kiq_disable_kgq(struct amdgpu_device *adev)
+{
+	struct amdgpu_kiq *kiq = &adev->gfx.kiq;
+	struct amdgpu_ring *kiq_ring = &kiq->ring;
+	int i, r = 0;
+
+	if (!kiq->pmf || !kiq->pmf->kiq_unmap_queues)
+		return -EINVAL;
+
+	if (amdgpu_ring_alloc(kiq_ring, kiq->pmf->unmap_queues_size *
+					adev->gfx.num_gfx_rings))
+		return -ENOMEM;
+
+	for (i = 0; i < adev->gfx.num_gfx_rings; i++)
+		kiq->pmf->kiq_unmap_queues(kiq_ring, &adev->gfx.gfx_ring[i],
+					   PREEMPT_QUEUES, 0, 0);
+
+	if (adev->gfx.kiq.ring.sched.ready)
+		r = amdgpu_ring_test_helper(kiq_ring);
+
+	return r;
+}
+#endif
+
+static int gfx_v11_0_hw_fini(void *handle)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+	int r;
+	uint32_t tmp;
+
+	amdgpu_irq_put(adev, &adev->gfx.priv_reg_irq, 0);
+	amdgpu_irq_put(adev, &adev->gfx.priv_inst_irq, 0);
+
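+	/* unmap the kernel gfx and compute queues via the KIQ before disabling the CP */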
+	if (!adev->no_hw_access) {
+#ifndef BRING_UP_DEBUG
+		if (amdgpu_async_gfx_ring) {
+			r = gfx_v11_0_kiq_disable_kgq(adev);
+			if (r)
+				DRM_ERROR("KGQ disable failed\n");
+		}
+#endif
+		if (amdgpu_gfx_disable_kcq(adev))
+			DRM_ERROR("KCQ disable failed\n");
+
+		amdgpu_mes_kiq_hw_fini(adev);
+	}
+
+	if (amdgpu_sriov_vf(adev)) {
+		gfx_v11_0_cp_gfx_enable(adev, false);
+		/* Program KIQ position of RLC_CP_SCHEDULERS during destroy */
+		tmp = RREG32_SOC15(GC, 0, regRLC_CP_SCHEDULERS);
+		tmp &= 0xffffff00;
+		WREG32_SOC15(GC, 0, regRLC_CP_SCHEDULERS, tmp);
+
+		return 0;
+	}
+	gfx_v11_0_cp_enable(adev, false);
+	gfx_v11_0_enable_gui_idle_interrupt(adev, false);
+
+	adev->gfxhub.funcs->gart_disable(adev);
+
+	adev->gfx.is_poweron = false;
+
+	return 0;
+}
+
+static int gfx_v11_0_suspend(void *handle)
+{
+	return gfx_v11_0_hw_fini(handle);
+}
+
+static int gfx_v11_0_resume(void *handle)
+{
+	return gfx_v11_0_hw_init(handle);
+}
+
+static bool gfx_v11_0_is_idle(void *handle)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	if (REG_GET_FIELD(RREG32_SOC15(GC, 0, regGRBM_STATUS),
+				GRBM_STATUS, GUI_ACTIVE))
+		return false;
+	else
+		return true;
+}
+
+static int gfx_v11_0_wait_for_idle(void *handle)
+{
+	unsigned i;
+	u32 tmp;
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	for (i = 0; i < adev->usec_timeout; i++) {
+		/* read GRBM_STATUS */
+		tmp = RREG32_SOC15(GC, 0, regGRBM_STATUS) &
+			GRBM_STATUS__GUI_ACTIVE_MASK;
+
+		if (!REG_GET_FIELD(tmp, GRBM_STATUS, GUI_ACTIVE))
+			return 0;
+		udelay(1);
+	}
+	return -ETIMEDOUT;
+}
+
+static int gfx_v11_0_soft_reset(void *handle)
+{
+	u32 grbm_soft_reset = 0;
+	u32 tmp;
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	/* GRBM_STATUS */
+	tmp = RREG32_SOC15(GC, 0, regGRBM_STATUS);
+	if (tmp & (GRBM_STATUS__PA_BUSY_MASK | GRBM_STATUS__SC_BUSY_MASK |
+		   GRBM_STATUS__BCI_BUSY_MASK | GRBM_STATUS__SX_BUSY_MASK |
+		   GRBM_STATUS__TA_BUSY_MASK | GRBM_STATUS__DB_BUSY_MASK |
+		   GRBM_STATUS__CB_BUSY_MASK | GRBM_STATUS__GDS_BUSY_MASK |
+		   GRBM_STATUS__SPI_BUSY_MASK | GRBM_STATUS__GE_BUSY_NO_DMA_MASK)) {
+		grbm_soft_reset = REG_SET_FIELD(grbm_soft_reset,
+						GRBM_SOFT_RESET, SOFT_RESET_CP,
+						1);
+		grbm_soft_reset = REG_SET_FIELD(grbm_soft_reset,
+						GRBM_SOFT_RESET, SOFT_RESET_GFX,
+						1);
+	}
+
+	if (tmp & (GRBM_STATUS__CP_BUSY_MASK | GRBM_STATUS__CP_COHERENCY_BUSY_MASK)) {
+		grbm_soft_reset = REG_SET_FIELD(grbm_soft_reset,
+						GRBM_SOFT_RESET, SOFT_RESET_CP,
+						1);
+	}
+
+	/* GRBM_STATUS2 */
+	tmp = RREG32_SOC15(GC, 0, regGRBM_STATUS2);
+	if (REG_GET_FIELD(tmp, GRBM_STATUS2, RLC_BUSY))
+		grbm_soft_reset = REG_SET_FIELD(grbm_soft_reset,
+						GRBM_SOFT_RESET,
+						SOFT_RESET_RLC,
+						1);
+
+	if (grbm_soft_reset) {
+		/* stop the rlc */
+		gfx_v11_0_rlc_stop(adev);
+
+		/* Disable GFX parsing/prefetching */
+		gfx_v11_0_cp_gfx_enable(adev, false);
+
+		/* Disable MEC parsing/prefetching */
+		gfx_v11_0_cp_compute_enable(adev, false);
+
+		if (grbm_soft_reset) {
+			tmp = RREG32_SOC15(GC, 0, regGRBM_SOFT_RESET);
+			tmp |= grbm_soft_reset;
+			dev_info(adev->dev, "GRBM_SOFT_RESET=0x%08X\n", tmp);
+			WREG32_SOC15(GC, 0, regGRBM_SOFT_RESET, tmp);
+			tmp = RREG32_SOC15(GC, 0, regGRBM_SOFT_RESET);
+
+			udelay(50);
+
+			tmp &= ~grbm_soft_reset;
+			WREG32_SOC15(GC, 0, regGRBM_SOFT_RESET, tmp);
+			tmp = RREG32_SOC15(GC, 0, regGRBM_SOFT_RESET);
+		}
+
+		/* Wait a little for things to settle down */
+		udelay(50);
+	}
+	return 0;
+}
+
+static uint64_t gfx_v11_0_get_gpu_clock_counter(struct amdgpu_device *adev)
+{
+	uint64_t clock;
+
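+	/* disable gfxoff and hold the clock mutex so the two 32-bit counter reads are consistent */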
+	amdgpu_gfx_off_ctrl(adev, false);
+	mutex_lock(&adev->gfx.gpu_clock_mutex);
+	clock = (uint64_t)RREG32_SOC15(SMUIO, 0, regGOLDEN_TSC_COUNT_LOWER) |
+		((uint64_t)RREG32_SOC15(SMUIO, 0, regGOLDEN_TSC_COUNT_UPPER) << 32ULL);
+	mutex_unlock(&adev->gfx.gpu_clock_mutex);
+	amdgpu_gfx_off_ctrl(adev, true);
+	return clock;
+}
+
+static void gfx_v11_0_ring_emit_gds_switch(struct amdgpu_ring *ring,
+					   uint32_t vmid,
+					   uint32_t gds_base, uint32_t gds_size,
+					   uint32_t gws_base, uint32_t gws_size,
+					   uint32_t oa_base, uint32_t oa_size)
+{
+	struct amdgpu_device *adev = ring->adev;
+
+	/* GDS Base */
+	gfx_v11_0_write_data_to_reg(ring, 0, false,
+				    SOC15_REG_OFFSET(GC, 0, regGDS_VMID0_BASE) + 2 * vmid,
+				    gds_base);
+
+	/* GDS Size */
+	gfx_v11_0_write_data_to_reg(ring, 0, false,
+				    SOC15_REG_OFFSET(GC, 0, regGDS_VMID0_SIZE) + 2 * vmid,
+				    gds_size);
+
+	/* GWS */
+	gfx_v11_0_write_data_to_reg(ring, 0, false,
+				    SOC15_REG_OFFSET(GC, 0, regGDS_GWS_VMID0) + vmid,
+				    gws_size << GDS_GWS_VMID0__SIZE__SHIFT | gws_base);
+
+	/* OA */
+	gfx_v11_0_write_data_to_reg(ring, 0, false,
+				    SOC15_REG_OFFSET(GC, 0, regGDS_OA_VMID0) + vmid,
+				    (1 << (oa_size + oa_base)) - (1 << oa_base));
+}
+
+static int gfx_v11_0_early_init(void *handle)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	adev->gfx.num_gfx_rings = GFX11_NUM_GFX_RINGS;
+	adev->gfx.num_compute_rings = min(amdgpu_gfx_get_num_kcq(adev),
+					  AMDGPU_MAX_COMPUTE_RINGS);
+
+	gfx_v11_0_set_kiq_pm4_funcs(adev);
+	gfx_v11_0_set_ring_funcs(adev);
+	gfx_v11_0_set_irq_funcs(adev);
+	gfx_v11_0_set_gds_init(adev);
+	gfx_v11_0_set_rlc_funcs(adev);
+	gfx_v11_0_set_mqd_funcs(adev);
+	gfx_v11_0_set_imu_funcs(adev);
+
+	gfx_v11_0_init_rlcg_reg_access_ctrl(adev);
+
+	return 0;
+}
+
+static int gfx_v11_0_late_init(void *handle)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+	int r;
+
+	r = amdgpu_irq_get(adev, &adev->gfx.priv_reg_irq, 0);
+	if (r)
+		return r;
+
+	r = amdgpu_irq_get(adev, &adev->gfx.priv_inst_irq, 0);
+	if (r)
+		return r;
+
+	return 0;
+}
+
+static bool gfx_v11_0_is_rlc_enabled(struct amdgpu_device *adev)
+{
+	uint32_t rlc_cntl;
+
+	/* if RLC is not enabled, do nothing */
+	rlc_cntl = RREG32_SOC15(GC, 0, regRLC_CNTL);
+	return (REG_GET_FIELD(rlc_cntl, RLC_CNTL, RLC_ENABLE_F32)) ? true : false;
+}
+
+static void gfx_v11_0_set_safe_mode(struct amdgpu_device *adev)
+{
+	uint32_t data;
+	unsigned i;
+
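+	/* request safe mode and wait for the RLC to acknowledge by clearing the CMD field */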
+	data = RLC_SAFE_MODE__CMD_MASK;
+	data |= (1 << RLC_SAFE_MODE__MESSAGE__SHIFT);
+
+	WREG32_SOC15(GC, 0, regRLC_SAFE_MODE, data);
+
+	/* wait for RLC_SAFE_MODE */
+	for (i = 0; i < adev->usec_timeout; i++) {
+		if (!REG_GET_FIELD(RREG32_SOC15(GC, 0, regRLC_SAFE_MODE),
+				   RLC_SAFE_MODE, CMD))
+			break;
+		udelay(1);
+	}
+}
+
+static void gfx_v11_0_unset_safe_mode(struct amdgpu_device *adev)
+{
+	WREG32_SOC15(GC, 0, regRLC_SAFE_MODE, RLC_SAFE_MODE__CMD_MASK);
+}
+
+static void gfx_v11_0_update_repeater_fgcg(struct amdgpu_device *adev,
+					   bool enable)
+{
+	uint32_t def, data;
+
+	if (!(adev->cg_flags & AMD_CG_SUPPORT_REPEATER_FGCG))
+		return;
+
+	def = data = RREG32_SOC15(GC, 0, regRLC_CGTT_MGCG_OVERRIDE);
+
+	if (enable)
+		data &= ~RLC_CGTT_MGCG_OVERRIDE__GFXIP_REPEATER_FGCG_OVERRIDE_MASK;
+	else
+		data |= RLC_CGTT_MGCG_OVERRIDE__GFXIP_REPEATER_FGCG_OVERRIDE_MASK;
+
+	if (def != data)
+		WREG32_SOC15(GC, 0, regRLC_CGTT_MGCG_OVERRIDE, data);
+}
+
+#if 0
+static void gfx_v11_0_update_medium_grain_clock_gating(struct amdgpu_device *adev,
+						      bool enable)
+{
+	/* TODO: add power related feature later. */
+}
+
+static void gfx_v11_0_update_3d_clock_gating(struct amdgpu_device *adev,
+					   bool enable)
+{
+	/* TODO: add power related feature later. */
+}
+#endif
+
+static void gfx_v11_0_update_coarse_grain_clock_gating(struct amdgpu_device *adev,
+						       bool enable)
+{
+	uint32_t def, data;
+
+	if (!(adev->cg_flags &
+	      (AMD_CG_SUPPORT_GFX_CGCG |
+	      AMD_CG_SUPPORT_GFX_CGLS |
+	      AMD_CG_SUPPORT_GFX_3D_CGCG |
+	      AMD_CG_SUPPORT_GFX_3D_CGLS)))
+		return;
+
+	if (enable) {
+		def = data = RREG32_SOC15(GC, 0, regRLC_CGTT_MGCG_OVERRIDE);
+
+		/* unset CGCG override */
+		if (adev->cg_flags & AMD_CG_SUPPORT_GFX_CGCG)
+			data &= ~RLC_CGTT_MGCG_OVERRIDE__GFXIP_CGCG_OVERRIDE_MASK;
+		if (adev->cg_flags & AMD_CG_SUPPORT_GFX_CGLS)
+			data &= ~RLC_CGTT_MGCG_OVERRIDE__GFXIP_CGLS_OVERRIDE_MASK;
+		if (adev->cg_flags & AMD_CG_SUPPORT_GFX_3D_CGCG ||
+		    adev->cg_flags & AMD_CG_SUPPORT_GFX_3D_CGLS)
+			data &= ~RLC_CGTT_MGCG_OVERRIDE__GFXIP_GFX3D_CG_OVERRIDE_MASK;
+
+		/* update CGCG override bits */
+		if (def != data)
+			WREG32_SOC15(GC, 0, regRLC_CGTT_MGCG_OVERRIDE, data);
+
+		/* enable cgcg FSM(0x0000363F) */
+		def = data = RREG32_SOC15(GC, 0, regRLC_CGCG_CGLS_CTRL);
+
+		if (adev->cg_flags & AMD_CG_SUPPORT_GFX_CGCG) {
+			data &= ~RLC_CGCG_CGLS_CTRL__CGCG_GFX_IDLE_THRESHOLD_MASK;
+			data |= (0x36 << RLC_CGCG_CGLS_CTRL__CGCG_GFX_IDLE_THRESHOLD__SHIFT) |
+				 RLC_CGCG_CGLS_CTRL__CGCG_EN_MASK;
+		}
+
+		if (adev->cg_flags & AMD_CG_SUPPORT_GFX_CGLS) {
+			data &= ~RLC_CGCG_CGLS_CTRL__CGLS_REP_COMPANSAT_DELAY_MASK;
+			data |= (0x000F << RLC_CGCG_CGLS_CTRL__CGLS_REP_COMPANSAT_DELAY__SHIFT) |
+				 RLC_CGCG_CGLS_CTRL__CGLS_EN_MASK;
+		}
+
+		if (def != data)
+			WREG32_SOC15(GC, 0, regRLC_CGCG_CGLS_CTRL, data);
+
+		/* Program RLC_CGCG_CGLS_CTRL_3D */
+		def = data = RREG32_SOC15(GC, 0, regRLC_CGCG_CGLS_CTRL_3D);
+
+		if (adev->cg_flags & AMD_CG_SUPPORT_GFX_3D_CGCG) {
+			data &= ~RLC_CGCG_CGLS_CTRL_3D__CGCG_GFX_IDLE_THRESHOLD_MASK;
+			data |= (0x36 << RLC_CGCG_CGLS_CTRL_3D__CGCG_GFX_IDLE_THRESHOLD__SHIFT) |
+				 RLC_CGCG_CGLS_CTRL_3D__CGCG_EN_MASK;
+		}
+
+		if (adev->cg_flags & AMD_CG_SUPPORT_GFX_3D_CGLS) {
+			data &= ~RLC_CGCG_CGLS_CTRL_3D__CGLS_REP_COMPANSAT_DELAY_MASK;
+			data |= (0xf << RLC_CGCG_CGLS_CTRL_3D__CGLS_REP_COMPANSAT_DELAY__SHIFT) |
+				 RLC_CGCG_CGLS_CTRL_3D__CGLS_EN_MASK;
+		}
+
+		if (def != data)
+			WREG32_SOC15(GC, 0, regRLC_CGCG_CGLS_CTRL_3D, data);
+
+		/* set IDLE_POLL_COUNT(0x00900100) */
+		def = data = RREG32_SOC15(GC, 0, regCP_RB_WPTR_POLL_CNTL);
+
+		data &= ~(CP_RB_WPTR_POLL_CNTL__POLL_FREQUENCY_MASK | CP_RB_WPTR_POLL_CNTL__IDLE_POLL_COUNT_MASK);
+		data |= (0x0100 << CP_RB_WPTR_POLL_CNTL__POLL_FREQUENCY__SHIFT) |
+			(0x0090 << CP_RB_WPTR_POLL_CNTL__IDLE_POLL_COUNT__SHIFT);
+
+		if (def != data)
+			WREG32_SOC15(GC, 0, regCP_RB_WPTR_POLL_CNTL, data);
+
+		data = RREG32_SOC15(GC, 0, regCP_INT_CNTL);
+		data = REG_SET_FIELD(data, CP_INT_CNTL, CNTX_BUSY_INT_ENABLE, 1);
+		data = REG_SET_FIELD(data, CP_INT_CNTL, CNTX_EMPTY_INT_ENABLE, 1);
+		data = REG_SET_FIELD(data, CP_INT_CNTL, CMP_BUSY_INT_ENABLE, 1);
+		data = REG_SET_FIELD(data, CP_INT_CNTL, GFX_IDLE_INT_ENABLE, 1);
+		WREG32_SOC15(GC, 0, regCP_INT_CNTL, data);
+
+		data = RREG32_SOC15(GC, 0, regSDMA0_RLC_CGCG_CTRL);
+		data = REG_SET_FIELD(data, SDMA0_RLC_CGCG_CTRL, CGCG_INT_ENABLE, 1);
+		WREG32_SOC15(GC, 0, regSDMA0_RLC_CGCG_CTRL, data);
+
+		data = RREG32_SOC15(GC, 0, regSDMA1_RLC_CGCG_CTRL);
+		data = REG_SET_FIELD(data, SDMA1_RLC_CGCG_CTRL, CGCG_INT_ENABLE, 1);
+		WREG32_SOC15(GC, 0, regSDMA1_RLC_CGCG_CTRL, data);
+	} else {
+		/* Program RLC_CGCG_CGLS_CTRL */
+		def = data = RREG32_SOC15(GC, 0, regRLC_CGCG_CGLS_CTRL);
+
+		if (adev->cg_flags & AMD_CG_SUPPORT_GFX_CGCG)
+			data &= ~RLC_CGCG_CGLS_CTRL__CGCG_EN_MASK;
+
+		if (adev->cg_flags & AMD_CG_SUPPORT_GFX_CGLS)
+			data &= ~RLC_CGCG_CGLS_CTRL__CGLS_EN_MASK;
+
+		if (def != data)
+			WREG32_SOC15(GC, 0, regRLC_CGCG_CGLS_CTRL, data);
+
+		/* Program RLC_CGCG_CGLS_CTRL_3D */
+		def = data = RREG32_SOC15(GC, 0, regRLC_CGCG_CGLS_CTRL_3D);
+
+		if (adev->cg_flags & AMD_CG_SUPPORT_GFX_3D_CGCG)
+			data &= ~RLC_CGCG_CGLS_CTRL_3D__CGCG_EN_MASK;
+		if (adev->cg_flags & AMD_CG_SUPPORT_GFX_3D_CGLS)
+			data &= ~RLC_CGCG_CGLS_CTRL_3D__CGLS_EN_MASK;
+
+		if (def != data)
+			WREG32_SOC15(GC, 0, regRLC_CGCG_CGLS_CTRL_3D, data);
+
+		data = RREG32_SOC15(GC, 0, regSDMA0_RLC_CGCG_CTRL);
+		data &= ~SDMA0_RLC_CGCG_CTRL__CGCG_INT_ENABLE_MASK;
+		WREG32_SOC15(GC, 0, regSDMA0_RLC_CGCG_CTRL, data);
+
+		data = RREG32_SOC15(GC, 0, regSDMA1_RLC_CGCG_CTRL);
+		data &= ~SDMA1_RLC_CGCG_CTRL__CGCG_INT_ENABLE_MASK;
+		WREG32_SOC15(GC, 0, regSDMA1_RLC_CGCG_CTRL, data);
+	}
+}
+
+static int gfx_v11_0_update_gfx_clock_gating(struct amdgpu_device *adev,
+					    bool enable)
+{
+	amdgpu_gfx_rlc_enter_safe_mode(adev);
+
+	gfx_v11_0_update_coarse_grain_clock_gating(adev, enable);
+
+	gfx_v11_0_update_repeater_fgcg(adev, enable);
+
+	if (adev->cg_flags &
+	    (AMD_CG_SUPPORT_GFX_MGCG |
+	     AMD_CG_SUPPORT_GFX_CGLS |
+	     AMD_CG_SUPPORT_GFX_CGCG |
+	     AMD_CG_SUPPORT_GFX_3D_CGCG |
+	     AMD_CG_SUPPORT_GFX_3D_CGLS))
+		gfx_v11_0_enable_gui_idle_interrupt(adev, enable);
+
+	amdgpu_gfx_rlc_exit_safe_mode(adev);
+
+	return 0;
+}
+
+static void gfx_v11_0_update_spm_vmid(struct amdgpu_device *adev, unsigned vmid)
+{
+	u32 reg, data;
+
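+	/* under SR-IOV one-VF mode, access the register directly instead of going through the KIQ */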
+	reg = SOC15_REG_OFFSET(GC, 0, regRLC_SPM_MC_CNTL);
+	if (amdgpu_sriov_is_pp_one_vf(adev))
+		data = RREG32_NO_KIQ(reg);
+	else
+		data = RREG32(reg);
+
+	data &= ~RLC_SPM_MC_CNTL__RLC_SPM_VMID_MASK;
+	data |= (vmid & RLC_SPM_MC_CNTL__RLC_SPM_VMID_MASK) << RLC_SPM_MC_CNTL__RLC_SPM_VMID__SHIFT;
+
+	if (amdgpu_sriov_is_pp_one_vf(adev))
+		WREG32_SOC15_NO_KIQ(GC, 0, regRLC_SPM_MC_CNTL, data);
+	else
+		WREG32_SOC15(GC, 0, regRLC_SPM_MC_CNTL, data);
+}
+
+static const struct amdgpu_rlc_funcs gfx_v11_0_rlc_funcs = {
+	.is_rlc_enabled = gfx_v11_0_is_rlc_enabled,
+	.set_safe_mode = gfx_v11_0_set_safe_mode,
+	.unset_safe_mode = gfx_v11_0_unset_safe_mode,
+	.init = gfx_v11_0_rlc_init,
+	.get_csb_size = gfx_v11_0_get_csb_size,
+	.get_csb_buffer = gfx_v11_0_get_csb_buffer,
+	.resume = gfx_v11_0_rlc_resume,
+	.stop = gfx_v11_0_rlc_stop,
+	.reset = gfx_v11_0_rlc_reset,
+	.start = gfx_v11_0_rlc_start,
+	.update_spm_vmid = gfx_v11_0_update_spm_vmid,
+};
+
+static int gfx_v11_0_set_powergating_state(void *handle,
+					   enum amd_powergating_state state)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+	bool enable = (state == AMD_PG_STATE_GATE);
+
+	if (amdgpu_sriov_vf(adev))
+		return 0;
+
+	switch (adev->ip_versions[GC_HWIP][0]) {
+	case IP_VERSION(11, 0, 0):
+		amdgpu_gfx_off_ctrl(adev, enable);
+		break;
+	default:
+		break;
+	}
+
+	return 0;
+}
+
+static int gfx_v11_0_set_clockgating_state(void *handle,
+					  enum amd_clockgating_state state)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+	if (amdgpu_sriov_vf(adev))
+		return 0;
+
+	switch (adev->ip_versions[GC_HWIP][0]) {
+	case IP_VERSION(11, 0, 0):
+	case IP_VERSION(11, 0, 2):
+		gfx_v11_0_update_gfx_clock_gating(adev,
+				state == AMD_CG_STATE_GATE);
+		break;
+	default:
+		break;
+	}
+
+	return 0;
+}
+
+static void gfx_v11_0_get_clockgating_state(void *handle, u64 *flags)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+	int data;
+
+	/* AMD_CG_SUPPORT_GFX_FGCG */
+	data = RREG32_SOC15(GC, 0, regRLC_CGTT_MGCG_OVERRIDE);
+	if (!(data & RLC_CGTT_MGCG_OVERRIDE__GFXIP_FGCG_OVERRIDE_MASK))
+		*flags |= AMD_CG_SUPPORT_GFX_FGCG;
+
+	/* AMD_CG_SUPPORT_GFX_MGCG */
+	data = RREG32_SOC15(GC, 0, regRLC_CGTT_MGCG_OVERRIDE);
+	if (!(data & RLC_CGTT_MGCG_OVERRIDE__GFXIP_MGCG_OVERRIDE_MASK))
+		*flags |= AMD_CG_SUPPORT_GFX_MGCG;
+
+	/* AMD_CG_SUPPORT_GFX_CGCG */
+	data = RREG32_SOC15(GC, 0, regRLC_CGCG_CGLS_CTRL);
+	if (data & RLC_CGCG_CGLS_CTRL__CGCG_EN_MASK)
+		*flags |= AMD_CG_SUPPORT_GFX_CGCG;
+
+	/* AMD_CG_SUPPORT_GFX_CGLS */
+	if (data & RLC_CGCG_CGLS_CTRL__CGLS_EN_MASK)
+		*flags |= AMD_CG_SUPPORT_GFX_CGLS;
+
+	/* AMD_CG_SUPPORT_GFX_3D_CGCG */
+	data = RREG32_SOC15(GC, 0, regRLC_CGCG_CGLS_CTRL_3D);
+	if (data & RLC_CGCG_CGLS_CTRL_3D__CGCG_EN_MASK)
+		*flags |= AMD_CG_SUPPORT_GFX_3D_CGCG;
+
+	/* AMD_CG_SUPPORT_GFX_3D_CGLS */
+	if (data & RLC_CGCG_CGLS_CTRL_3D__CGLS_EN_MASK)
+		*flags |= AMD_CG_SUPPORT_GFX_3D_CGLS;
+}
+
+static u64 gfx_v11_0_ring_get_rptr_gfx(struct amdgpu_ring *ring)
+{
+	/* gfx11 is 32bit rptr */
+	return *(uint32_t *)ring->rptr_cpu_addr;
+}
+
+static u64 gfx_v11_0_ring_get_wptr_gfx(struct amdgpu_ring *ring)
+{
+	struct amdgpu_device *adev = ring->adev;
+	u64 wptr;
+
+	/* XXX check if swapping is necessary on BE */
+	if (ring->use_doorbell) {
+		wptr = atomic64_read((atomic64_t *)ring->wptr_cpu_addr);
+	} else {
+		wptr = RREG32_SOC15(GC, 0, regCP_RB0_WPTR);
+		wptr += (u64)RREG32_SOC15(GC, 0, regCP_RB0_WPTR_HI) << 32;
+	}
+
+	return wptr;
+}
+
+static void gfx_v11_0_ring_set_wptr_gfx(struct amdgpu_ring *ring)
+{
+	struct amdgpu_device *adev = ring->adev;
+
+	if (ring->use_doorbell) {
+		/* XXX check if swapping is necessary on BE */
+		atomic64_set((atomic64_t *)ring->wptr_cpu_addr, ring->wptr);
+		WDOORBELL64(ring->doorbell_index, ring->wptr);
+	} else {
+		WREG32_SOC15(GC, 0, regCP_RB0_WPTR, lower_32_bits(ring->wptr));
+		WREG32_SOC15(GC, 0, regCP_RB0_WPTR_HI, upper_32_bits(ring->wptr));
+	}
+}
+
+static u64 gfx_v11_0_ring_get_rptr_compute(struct amdgpu_ring *ring)
+{
+	/* gfx11 hardware is 32bit rptr */
+	return *(uint32_t *)ring->rptr_cpu_addr;
+}
+
+static u64 gfx_v11_0_ring_get_wptr_compute(struct amdgpu_ring *ring)
+{
+	u64 wptr;
+
+	/* XXX check if swapping is necessary on BE */
+	if (ring->use_doorbell)
+		wptr = atomic64_read((atomic64_t *)ring->wptr_cpu_addr);
+	else
+		BUG();
+	return wptr;
+}
+
+static void gfx_v11_0_ring_set_wptr_compute(struct amdgpu_ring *ring)
+{
+	struct amdgpu_device *adev = ring->adev;
+
+	/* XXX check if swapping is necessary on BE */
+	if (ring->use_doorbell) {
+		atomic64_set((atomic64_t *)ring->wptr_cpu_addr, ring->wptr);
+		WDOORBELL64(ring->doorbell_index, ring->wptr);
+	} else {
+		BUG(); /* only DOORBELL method supported on gfx11 now */
+	}
+}
+
+static void gfx_v11_0_ring_emit_hdp_flush(struct amdgpu_ring *ring)
+{
+	struct amdgpu_device *adev = ring->adev;
+	u32 ref_and_mask, reg_mem_engine;
+	const struct nbio_hdp_flush_reg *nbio_hf_reg = adev->nbio.hdp_flush_reg;
+
+	if (ring->funcs->type == AMDGPU_RING_TYPE_COMPUTE) {
+		switch (ring->me) {
+		case 1:
+			ref_and_mask = nbio_hf_reg->ref_and_mask_cp2 << ring->pipe;
+			break;
+		case 2:
+			ref_and_mask = nbio_hf_reg->ref_and_mask_cp6 << ring->pipe;
+			break;
+		default:
+			return;
+		}
+		reg_mem_engine = 0;
+	} else {
+		ref_and_mask = nbio_hf_reg->ref_and_mask_cp0;
+		reg_mem_engine = 1; /* pfp */
+	}
+
+	gfx_v11_0_wait_reg_mem(ring, reg_mem_engine, 0, 1,
+			       adev->nbio.funcs->get_hdp_flush_req_offset(adev),
+			       adev->nbio.funcs->get_hdp_flush_done_offset(adev),
+			       ref_and_mask, ref_and_mask, 0x20);
+}
+
+static void gfx_v11_0_ring_emit_ib_gfx(struct amdgpu_ring *ring,
+				       struct amdgpu_job *job,
+				       struct amdgpu_ib *ib,
+				       uint32_t flags)
+{
+	unsigned vmid = AMDGPU_JOB_GET_VMID(job);
+	u32 header, control = 0;
+
+	BUG_ON(ib->flags & AMDGPU_IB_FLAG_CE);
+
+	header = PACKET3(PACKET3_INDIRECT_BUFFER, 2);
+
+	control |= ib->length_dw | (vmid << 24);
+
+	if ((amdgpu_sriov_vf(ring->adev) || amdgpu_mcbp) && (ib->flags & AMDGPU_IB_FLAG_PREEMPT)) {
+		control |= INDIRECT_BUFFER_PRE_ENB(1);
+
+		if (flags & AMDGPU_IB_PREEMPTED)
+			control |= INDIRECT_BUFFER_PRE_RESUME(1);
+
+		if (vmid)
+			gfx_v11_0_ring_emit_de_meta(ring,
+				    (!amdgpu_sriov_vf(ring->adev) && flags & AMDGPU_IB_PREEMPTED) ? true : false);
+	}
+
+	if (ring->is_mes_queue)
+		/* inherit vmid from mqd */
+		control |= 0x400000;
+
+	amdgpu_ring_write(ring, header);
+	BUG_ON(ib->gpu_addr & 0x3); /* Dword align */
+	amdgpu_ring_write(ring,
+#ifdef __BIG_ENDIAN
+		(2 << 0) |
+#endif
+		lower_32_bits(ib->gpu_addr));
+	amdgpu_ring_write(ring, upper_32_bits(ib->gpu_addr));
+	amdgpu_ring_write(ring, control);
+}
+
+static void gfx_v11_0_ring_emit_ib_compute(struct amdgpu_ring *ring,
+					   struct amdgpu_job *job,
+					   struct amdgpu_ib *ib,
+					   uint32_t flags)
+{
+	unsigned vmid = AMDGPU_JOB_GET_VMID(job);
+	u32 control = INDIRECT_BUFFER_VALID | ib->length_dw | (vmid << 24);
+
+	if (ring->is_mes_queue)
+		/* inherit vmid from mqd */
+		control |= 0x40000000;
+
+	/* Currently, there is a high likelihood of getting a wave ID mismatch
+	 * between ME and GDS, leading to a hw deadlock, because ME generates
+	 * different wave IDs than the GDS expects. This situation happens
+	 * randomly when at least 5 compute pipes use GDS ordered append.
+	 * The wave IDs generated by ME are also wrong after suspend/resume.
+	 * Those are probably bugs somewhere else in the kernel driver.
+	 *
+	 * Writing GDS_COMPUTE_MAX_WAVE_ID resets wave ID counters in ME and
+	 * GDS to 0 for this ring (me/pipe).
+	 */
+	if (ib->flags & AMDGPU_IB_FLAG_RESET_GDS_MAX_WAVE_ID) {
+		amdgpu_ring_write(ring, PACKET3(PACKET3_SET_CONFIG_REG, 1));
+		amdgpu_ring_write(ring, regGDS_COMPUTE_MAX_WAVE_ID);
+		amdgpu_ring_write(ring, ring->adev->gds.gds_compute_max_wave_id);
+	}
+
+	amdgpu_ring_write(ring, PACKET3(PACKET3_INDIRECT_BUFFER, 2));
+	BUG_ON(ib->gpu_addr & 0x3); /* Dword align */
+	amdgpu_ring_write(ring,
+#ifdef __BIG_ENDIAN
+				(2 << 0) |
+#endif
+				lower_32_bits(ib->gpu_addr));
+	amdgpu_ring_write(ring, upper_32_bits(ib->gpu_addr));
+	amdgpu_ring_write(ring, control);
+}
+
+static void gfx_v11_0_ring_emit_fence(struct amdgpu_ring *ring, u64 addr,
+				     u64 seq, unsigned flags)
+{
+	bool write64bit = flags & AMDGPU_FENCE_FLAG_64BIT;
+	bool int_sel = flags & AMDGPU_FENCE_FLAG_INT;
+
+	/* RELEASE_MEM - flush caches, send int */
+	amdgpu_ring_write(ring, PACKET3(PACKET3_RELEASE_MEM, 6));
+	amdgpu_ring_write(ring, (PACKET3_RELEASE_MEM_GCR_SEQ |
+				 PACKET3_RELEASE_MEM_GCR_GL2_WB |
+				 PACKET3_RELEASE_MEM_GCR_GL2_INV |
+				 PACKET3_RELEASE_MEM_GCR_GL2_US |
+				 PACKET3_RELEASE_MEM_GCR_GL1_INV |
+				 PACKET3_RELEASE_MEM_GCR_GLV_INV |
+				 PACKET3_RELEASE_MEM_GCR_GLM_INV |
+				 PACKET3_RELEASE_MEM_GCR_GLM_WB |
+				 PACKET3_RELEASE_MEM_CACHE_POLICY(3) |
+				 PACKET3_RELEASE_MEM_EVENT_TYPE(CACHE_FLUSH_AND_INV_TS_EVENT) |
+				 PACKET3_RELEASE_MEM_EVENT_INDEX(5)));
+	amdgpu_ring_write(ring, (PACKET3_RELEASE_MEM_DATA_SEL(write64bit ? 2 : 1) |
+				 PACKET3_RELEASE_MEM_INT_SEL(int_sel ? 2 : 0)));
+
+	/*
+	 * The address should be Qword aligned for a 64bit write, and Dword
+	 * aligned if we only send the low 32bit data (data high is discarded).
+	 */
+	if (write64bit)
+		BUG_ON(addr & 0x7);
+	else
+		BUG_ON(addr & 0x3);
+	amdgpu_ring_write(ring, lower_32_bits(addr));
+	amdgpu_ring_write(ring, upper_32_bits(addr));
+	amdgpu_ring_write(ring, lower_32_bits(seq));
+	amdgpu_ring_write(ring, upper_32_bits(seq));
+	amdgpu_ring_write(ring, ring->is_mes_queue ?
+			 (ring->hw_queue_id | AMDGPU_FENCE_MES_QUEUE_FLAG) : 0);
+}
+
+static void gfx_v11_0_ring_emit_pipeline_sync(struct amdgpu_ring *ring)
+{
+	int usepfp = (ring->funcs->type == AMDGPU_RING_TYPE_GFX);
+	uint32_t seq = ring->fence_drv.sync_seq;
+	uint64_t addr = ring->fence_drv.gpu_addr;
+
+	gfx_v11_0_wait_reg_mem(ring, usepfp, 1, 0, lower_32_bits(addr),
+			       upper_32_bits(addr), seq, 0xffffffff, 4);
+}
+
+static void gfx_v11_0_ring_invalidate_tlbs(struct amdgpu_ring *ring,
+				   uint16_t pasid, uint32_t flush_type,
+				   bool all_hub, uint8_t dst_sel)
+{
+	amdgpu_ring_write(ring, PACKET3(PACKET3_INVALIDATE_TLBS, 0));
+	amdgpu_ring_write(ring,
+			  PACKET3_INVALIDATE_TLBS_DST_SEL(dst_sel) |
+			  PACKET3_INVALIDATE_TLBS_ALL_HUB(all_hub) |
+			  PACKET3_INVALIDATE_TLBS_PASID(pasid) |
+			  PACKET3_INVALIDATE_TLBS_FLUSH_TYPE(flush_type));
+}
+
+static void gfx_v11_0_ring_emit_vm_flush(struct amdgpu_ring *ring,
+					 unsigned vmid, uint64_t pd_addr)
+{
+	if (ring->is_mes_queue)
+		gfx_v11_0_ring_invalidate_tlbs(ring, 0, 0, false, 0);
+	else
+		amdgpu_gmc_emit_flush_gpu_tlb(ring, vmid, pd_addr);
+
+	/* compute doesn't have PFP */
+	if (ring->funcs->type == AMDGPU_RING_TYPE_GFX) {
+		/* sync PFP to ME, otherwise we might get invalid PFP reads */
+		amdgpu_ring_write(ring, PACKET3(PACKET3_PFP_SYNC_ME, 0));
+		amdgpu_ring_write(ring, 0x0);
+	}
+}
+
+static void gfx_v11_0_ring_emit_fence_kiq(struct amdgpu_ring *ring, u64 addr,
+					  u64 seq, unsigned int flags)
+{
+	struct amdgpu_device *adev = ring->adev;
+
+	/* we only allocate 32bit for each seq wb address */
+	BUG_ON(flags & AMDGPU_FENCE_FLAG_64BIT);
+
+	/* write fence seq to the "addr" */
+	amdgpu_ring_write(ring, PACKET3(PACKET3_WRITE_DATA, 3));
+	amdgpu_ring_write(ring, (WRITE_DATA_ENGINE_SEL(0) |
+				 WRITE_DATA_DST_SEL(5) | WR_CONFIRM));
+	amdgpu_ring_write(ring, lower_32_bits(addr));
+	amdgpu_ring_write(ring, upper_32_bits(addr));
+	amdgpu_ring_write(ring, lower_32_bits(seq));
+
+	if (flags & AMDGPU_FENCE_FLAG_INT) {
+		/* set register to trigger INT */
+		amdgpu_ring_write(ring, PACKET3(PACKET3_WRITE_DATA, 3));
+		amdgpu_ring_write(ring, (WRITE_DATA_ENGINE_SEL(0) |
+					 WRITE_DATA_DST_SEL(0) | WR_CONFIRM));
+		amdgpu_ring_write(ring, SOC15_REG_OFFSET(GC, 0, regCPC_INT_STATUS));
+		amdgpu_ring_write(ring, 0);
+		amdgpu_ring_write(ring, 0x20000000); /* src_id is 178 */
+	}
+}
+
+static void gfx_v11_0_ring_emit_cntxcntl(struct amdgpu_ring *ring,
+					 uint32_t flags)
+{
+	uint32_t dw2 = 0;
+
+	dw2 |= 0x80000000; /* set load_enable otherwise this packet is just NOPs */
+	if (flags & AMDGPU_HAVE_CTX_SWITCH) {
+		/* set load_global_config & load_global_uconfig */
+		dw2 |= 0x8001;
+		/* set load_cs_sh_regs */
+		dw2 |= 0x01000000;
+		/* set load_per_context_state & load_gfx_sh_regs for GFX */
+		dw2 |= 0x10002;
+	}
+
+	amdgpu_ring_write(ring, PACKET3(PACKET3_CONTEXT_CONTROL, 1));
+	amdgpu_ring_write(ring, dw2);
+	amdgpu_ring_write(ring, 0);
+}
+
+static unsigned gfx_v11_0_ring_emit_init_cond_exec(struct amdgpu_ring *ring)
+{
+	unsigned ret;
+
+	amdgpu_ring_write(ring, PACKET3(PACKET3_COND_EXEC, 3));
+	amdgpu_ring_write(ring, lower_32_bits(ring->cond_exe_gpu_addr));
+	amdgpu_ring_write(ring, upper_32_bits(ring->cond_exe_gpu_addr));
+	amdgpu_ring_write(ring, 0); /* discard following DWs if *cond_exec_gpu_addr==0 */
+	ret = ring->wptr & ring->buf_mask;
+	amdgpu_ring_write(ring, 0x55aa55aa); /* patch dummy value later */
+
+	return ret;
+}
+
+static void gfx_v11_0_ring_emit_patch_cond_exec(struct amdgpu_ring *ring, unsigned offset)
+{
+	unsigned cur;
+	BUG_ON(offset > ring->buf_mask);
+	BUG_ON(ring->ring[offset] != 0x55aa55aa);
+
+	cur = (ring->wptr - 1) & ring->buf_mask;
+	if (likely(cur > offset))
+		ring->ring[offset] = cur - offset;
+	else
+		ring->ring[offset] = (ring->buf_mask + 1) - offset + cur;
+}
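
As a worked illustration of the wrap-around branch above (the ring size and offsets
are hypothetical):

	/* Illustrative only: 4096-dword ring, placeholder near the end,
	 * write pointer already wrapped past the start of the ring.
	 */
	unsigned buf_mask = 0xfff, offset = 0xffa, cur = 0x004;
	unsigned skip = (cur > offset) ? cur - offset
				       : (buf_mask + 1) - offset + cur;	/* 10 dwords */
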
+
+static int gfx_v11_0_ring_preempt_ib(struct amdgpu_ring *ring)
+{
+	int i, r = 0;
+	struct amdgpu_device *adev = ring->adev;
+	struct amdgpu_kiq *kiq = &adev->gfx.kiq;
+	struct amdgpu_ring *kiq_ring = &kiq->ring;
+	unsigned long flags;
+
+	if (!kiq->pmf || !kiq->pmf->kiq_unmap_queues)
+		return -EINVAL;
+
+	spin_lock_irqsave(&kiq->ring_lock, flags);
+
+	if (amdgpu_ring_alloc(kiq_ring, kiq->pmf->unmap_queues_size)) {
+		spin_unlock_irqrestore(&kiq->ring_lock, flags);
+		return -ENOMEM;
+	}
+
+	/* assert preemption condition */
+	amdgpu_ring_set_preempt_cond_exec(ring, false);
+
+	/* assert IB preemption, emit the trailing fence */
+	kiq->pmf->kiq_unmap_queues(kiq_ring, ring, PREEMPT_QUEUES_NO_UNMAP,
+				   ring->trail_fence_gpu_addr,
+				   ++ring->trail_seq);
+	amdgpu_ring_commit(kiq_ring);
+
+	spin_unlock_irqrestore(&kiq->ring_lock, flags);
+
+	/* poll the trailing fence */
+	for (i = 0; i < adev->usec_timeout; i++) {
+		if (ring->trail_seq ==
+		    le32_to_cpu(*(ring->trail_fence_cpu_addr)))
+			break;
+		udelay(1);
+	}
+
+	if (i >= adev->usec_timeout) {
+		r = -EINVAL;
+		DRM_ERROR("ring %d failed to preempt ib\n", ring->idx);
+	}
+
+	/* deassert preemption condition */
+	amdgpu_ring_set_preempt_cond_exec(ring, true);
+	return r;
+}
+
+static void gfx_v11_0_ring_emit_de_meta(struct amdgpu_ring *ring, bool resume)
+{
+	struct amdgpu_device *adev = ring->adev;
+	struct v10_de_ib_state de_payload = {0};
+	uint64_t offset, gds_addr, de_payload_gpu_addr;
+	void *de_payload_cpu_addr;
+	int cnt;
+
+	if (ring->is_mes_queue) {
+		offset = offsetof(struct amdgpu_mes_ctx_meta_data,
+				  gfx[0].gfx_meta_data) +
+			offsetof(struct v10_gfx_meta_data, de_payload);
+		de_payload_gpu_addr =
+			amdgpu_mes_ctx_get_offs_gpu_addr(ring, offset);
+		de_payload_cpu_addr =
+			amdgpu_mes_ctx_get_offs_cpu_addr(ring, offset);
+
+		offset = offsetof(struct amdgpu_mes_ctx_meta_data,
+				  gfx[0].gds_backup) +
+			offsetof(struct v10_gfx_meta_data, de_payload);
+		gds_addr = amdgpu_mes_ctx_get_offs_gpu_addr(ring, offset);
+	} else {
+		offset = offsetof(struct v10_gfx_meta_data, de_payload);
+		de_payload_gpu_addr = amdgpu_csa_vaddr(ring->adev) + offset;
+		de_payload_cpu_addr = adev->virt.csa_cpu_addr + offset;
+
+		gds_addr = ALIGN(amdgpu_csa_vaddr(ring->adev) +
+				 AMDGPU_CSA_SIZE - adev->gds.gds_size,
+				 PAGE_SIZE);
+	}
+
+	de_payload.gds_backup_addrlo = lower_32_bits(gds_addr);
+	de_payload.gds_backup_addrhi = upper_32_bits(gds_addr);
+
+	cnt = (sizeof(de_payload) >> 2) + 4 - 2;
+	amdgpu_ring_write(ring, PACKET3(PACKET3_WRITE_DATA, cnt));
+	amdgpu_ring_write(ring, (WRITE_DATA_ENGINE_SEL(1) |
+				 WRITE_DATA_DST_SEL(8) |
+				 WR_CONFIRM) |
+				 WRITE_DATA_CACHE_POLICY(0));
+	amdgpu_ring_write(ring, lower_32_bits(de_payload_gpu_addr));
+	amdgpu_ring_write(ring, upper_32_bits(de_payload_gpu_addr));
+
+	if (resume)
+		amdgpu_ring_write_multiple(ring, de_payload_cpu_addr,
+					   sizeof(de_payload) >> 2);
+	else
+		amdgpu_ring_write_multiple(ring, (void *)&de_payload,
+					   sizeof(de_payload) >> 2);
+}
+
+static void gfx_v11_0_ring_emit_frame_cntl(struct amdgpu_ring *ring, bool start,
+				    bool secure)
+{
+	uint32_t v = secure ? FRAME_TMZ : 0;
+
+	amdgpu_ring_write(ring, PACKET3(PACKET3_FRAME_CONTROL, 0));
+	amdgpu_ring_write(ring, v | FRAME_CMD(start ? 0 : 1));
+}
+
+static void gfx_v11_0_ring_emit_rreg(struct amdgpu_ring *ring, uint32_t reg,
+				     uint32_t reg_val_offs)
+{
+	struct amdgpu_device *adev = ring->adev;
+
+	amdgpu_ring_write(ring, PACKET3(PACKET3_COPY_DATA, 4));
+	amdgpu_ring_write(ring, 0 |	/* src: register*/
+				(5 << 8) |	/* dst: memory */
+				(1 << 20));	/* write confirm */
+	amdgpu_ring_write(ring, reg);
+	amdgpu_ring_write(ring, 0);
+	amdgpu_ring_write(ring, lower_32_bits(adev->wb.gpu_addr +
+				reg_val_offs * 4));
+	amdgpu_ring_write(ring, upper_32_bits(adev->wb.gpu_addr +
+				reg_val_offs * 4));
+}
+
+static void gfx_v11_0_ring_emit_wreg(struct amdgpu_ring *ring, uint32_t reg,
+				   uint32_t val)
+{
+	uint32_t cmd = 0;
+
+	switch (ring->funcs->type) {
+	case AMDGPU_RING_TYPE_GFX:
+		cmd = WRITE_DATA_ENGINE_SEL(1) | WR_CONFIRM;
+		break;
+	case AMDGPU_RING_TYPE_KIQ:
+		cmd = (1 << 16); /* no inc addr */
+		break;
+	default:
+		cmd = WR_CONFIRM;
+		break;
+	}
+	amdgpu_ring_write(ring, PACKET3(PACKET3_WRITE_DATA, 3));
+	amdgpu_ring_write(ring, cmd);
+	amdgpu_ring_write(ring, reg);
+	amdgpu_ring_write(ring, 0);
+	amdgpu_ring_write(ring, val);
+}
+
+static void gfx_v11_0_ring_emit_reg_wait(struct amdgpu_ring *ring, uint32_t reg,
+					uint32_t val, uint32_t mask)
+{
+	gfx_v11_0_wait_reg_mem(ring, 0, 0, 0, reg, 0, val, mask, 0x20);
+}
+
+static void gfx_v11_0_ring_emit_reg_write_reg_wait(struct amdgpu_ring *ring,
+						   uint32_t reg0, uint32_t reg1,
+						   uint32_t ref, uint32_t mask)
+{
+	int usepfp = (ring->funcs->type == AMDGPU_RING_TYPE_GFX);
+
+	gfx_v11_0_wait_reg_mem(ring, usepfp, 0, 1, reg0, reg1,
+			       ref, mask, 0x20);
+}
+
+static void gfx_v11_0_ring_soft_recovery(struct amdgpu_ring *ring,
+					 unsigned vmid)
+{
+	struct amdgpu_device *adev = ring->adev;
+	uint32_t value = 0;
+
+	value = REG_SET_FIELD(value, SQ_CMD, CMD, 0x03);
+	value = REG_SET_FIELD(value, SQ_CMD, MODE, 0x01);
+	value = REG_SET_FIELD(value, SQ_CMD, CHECK_VMID, 1);
+	value = REG_SET_FIELD(value, SQ_CMD, VM_ID, vmid);
+	WREG32_SOC15(GC, 0, regSQ_CMD, value);
+}
+
+static void
+gfx_v11_0_set_gfx_eop_interrupt_state(struct amdgpu_device *adev,
+				      uint32_t me, uint32_t pipe,
+				      enum amdgpu_interrupt_state state)
+{
+	uint32_t cp_int_cntl, cp_int_cntl_reg;
+
+	if (!me) {
+		switch (pipe) {
+		case 0:
+			cp_int_cntl_reg = SOC15_REG_OFFSET(GC, 0, regCP_INT_CNTL_RING0);
+			break;
+		case 1:
+			cp_int_cntl_reg = SOC15_REG_OFFSET(GC, 0, regCP_INT_CNTL_RING1);
+			break;
+		default:
+			DRM_DEBUG("invalid pipe %d\n", pipe);
+			return;
+		}
+	} else {
+		DRM_DEBUG("invalid me %d\n", me);
+		return;
+	}
+
+	switch (state) {
+	case AMDGPU_IRQ_STATE_DISABLE:
+		cp_int_cntl = RREG32_SOC15_IP(GC, cp_int_cntl_reg);
+		cp_int_cntl = REG_SET_FIELD(cp_int_cntl, CP_INT_CNTL_RING0,
+					    TIME_STAMP_INT_ENABLE, 0);
+		WREG32_SOC15_IP(GC, cp_int_cntl_reg, cp_int_cntl);
+		break;
+	case AMDGPU_IRQ_STATE_ENABLE:
+		cp_int_cntl = RREG32_SOC15_IP(GC, cp_int_cntl_reg);
+		cp_int_cntl = REG_SET_FIELD(cp_int_cntl, CP_INT_CNTL_RING0,
+					    TIME_STAMP_INT_ENABLE, 1);
+		WREG32_SOC15_IP(GC, cp_int_cntl_reg, cp_int_cntl);
+		break;
+	default:
+		break;
+	}
+}
+
+static void gfx_v11_0_set_compute_eop_interrupt_state(struct amdgpu_device *adev,
+						     int me, int pipe,
+						     enum amdgpu_interrupt_state state)
+{
+	u32 mec_int_cntl, mec_int_cntl_reg;
+
+	/*
+	 * amdgpu controls only the first MEC. That's why this function only
+	 * handles the setting of interrupts for this specific MEC. All other
+	 * pipes' interrupts are set by amdkfd.
+	 */
+
+	if (me == 1) {
+		switch (pipe) {
+		case 0:
+			mec_int_cntl_reg = SOC15_REG_OFFSET(GC, 0, regCP_ME1_PIPE0_INT_CNTL);
+			break;
+		case 1:
+			mec_int_cntl_reg = SOC15_REG_OFFSET(GC, 0, regCP_ME1_PIPE1_INT_CNTL);
+			break;
+		case 2:
+			mec_int_cntl_reg = SOC15_REG_OFFSET(GC, 0, regCP_ME1_PIPE2_INT_CNTL);
+			break;
+		case 3:
+			mec_int_cntl_reg = SOC15_REG_OFFSET(GC, 0, regCP_ME1_PIPE3_INT_CNTL);
+			break;
+		default:
+			DRM_DEBUG("invalid pipe %d\n", pipe);
+			return;
+		}
+	} else {
+		DRM_DEBUG("invalid me %d\n", me);
+		return;
+	}
+
+	switch (state) {
+	case AMDGPU_IRQ_STATE_DISABLE:
+		mec_int_cntl = RREG32_SOC15_IP(GC, mec_int_cntl_reg);
+		mec_int_cntl = REG_SET_FIELD(mec_int_cntl, CP_ME1_PIPE0_INT_CNTL,
+					     TIME_STAMP_INT_ENABLE, 0);
+		WREG32_SOC15_IP(GC, mec_int_cntl_reg, mec_int_cntl);
+		break;
+	case AMDGPU_IRQ_STATE_ENABLE:
+		mec_int_cntl = RREG32_SOC15_IP(GC, mec_int_cntl_reg);
+		mec_int_cntl = REG_SET_FIELD(mec_int_cntl, CP_ME1_PIPE0_INT_CNTL,
+					     TIME_STAMP_INT_ENABLE, 1);
+		WREG32_SOC15_IP(GC, mec_int_cntl_reg, mec_int_cntl);
+		break;
+	default:
+		break;
+	}
+}
+
+static int gfx_v11_0_set_eop_interrupt_state(struct amdgpu_device *adev,
+					    struct amdgpu_irq_src *src,
+					    unsigned type,
+					    enum amdgpu_interrupt_state state)
+{
+	switch (type) {
+	case AMDGPU_CP_IRQ_GFX_ME0_PIPE0_EOP:
+		gfx_v11_0_set_gfx_eop_interrupt_state(adev, 0, 0, state);
+		break;
+	case AMDGPU_CP_IRQ_GFX_ME0_PIPE1_EOP:
+		gfx_v11_0_set_gfx_eop_interrupt_state(adev, 0, 1, state);
+		break;
+	case AMDGPU_CP_IRQ_COMPUTE_MEC1_PIPE0_EOP:
+		gfx_v11_0_set_compute_eop_interrupt_state(adev, 1, 0, state);
+		break;
+	case AMDGPU_CP_IRQ_COMPUTE_MEC1_PIPE1_EOP:
+		gfx_v11_0_set_compute_eop_interrupt_state(adev, 1, 1, state);
+		break;
+	case AMDGPU_CP_IRQ_COMPUTE_MEC1_PIPE2_EOP:
+		gfx_v11_0_set_compute_eop_interrupt_state(adev, 1, 2, state);
+		break;
+	case AMDGPU_CP_IRQ_COMPUTE_MEC1_PIPE3_EOP:
+		gfx_v11_0_set_compute_eop_interrupt_state(adev, 1, 3, state);
+		break;
+	default:
+		break;
+	}
+	return 0;
+}
+
+static int gfx_v11_0_eop_irq(struct amdgpu_device *adev,
+			     struct amdgpu_irq_src *source,
+			     struct amdgpu_iv_entry *entry)
+{
+	int i;
+	u8 me_id, pipe_id, queue_id;
+	struct amdgpu_ring *ring;
+	uint32_t mes_queue_id = entry->src_data[0];
+
+	DRM_DEBUG("IH: CP EOP\n");
+
+	if (adev->enable_mes && (mes_queue_id & AMDGPU_FENCE_MES_QUEUE_FLAG)) {
+		struct amdgpu_mes_queue *queue;
+
+		mes_queue_id &= AMDGPU_FENCE_MES_QUEUE_ID_MASK;
+
+		spin_lock(&adev->mes.queue_id_lock);
+		queue = idr_find(&adev->mes.queue_id_idr, mes_queue_id);
+		if (queue) {
+			DRM_DEBUG("process mes queue id = %d\n", mes_queue_id);
+			amdgpu_fence_process(queue->ring);
+		}
+		spin_unlock(&adev->mes.queue_id_lock);
+	} else {
+		me_id = (entry->ring_id & 0x0c) >> 2;
+		pipe_id = (entry->ring_id & 0x03) >> 0;
+		queue_id = (entry->ring_id & 0x70) >> 4;
+
+		switch (me_id) {
+		case 0:
+			if (pipe_id == 0)
+				amdgpu_fence_process(&adev->gfx.gfx_ring[0]);
+			else
+				amdgpu_fence_process(&adev->gfx.gfx_ring[1]);
+			break;
+		case 1:
+		case 2:
+			for (i = 0; i < adev->gfx.num_compute_rings; i++) {
+				ring = &adev->gfx.compute_ring[i];
+				/* Per-queue interrupt is supported for MEC starting from VI.
+				 * The interrupt can only be enabled/disabled per pipe instead
+				 * of per queue.
+				 */
+				if ((ring->me == me_id) &&
+				    (ring->pipe == pipe_id) &&
+				    (ring->queue == queue_id))
+					amdgpu_fence_process(ring);
+			}
+			break;
+		}
+	}
+
+	return 0;
+}
+
+static int gfx_v11_0_set_priv_reg_fault_state(struct amdgpu_device *adev,
+					      struct amdgpu_irq_src *source,
+					      unsigned type,
+					      enum amdgpu_interrupt_state state)
+{
+	switch (state) {
+	case AMDGPU_IRQ_STATE_DISABLE:
+	case AMDGPU_IRQ_STATE_ENABLE:
+		WREG32_FIELD15_PREREG(GC, 0, CP_INT_CNTL_RING0,
+			       PRIV_REG_INT_ENABLE,
+			       state == AMDGPU_IRQ_STATE_ENABLE ? 1 : 0);
+		break;
+	default:
+		break;
+	}
+
+	return 0;
+}
+
+static int gfx_v11_0_set_priv_inst_fault_state(struct amdgpu_device *adev,
+					       struct amdgpu_irq_src *source,
+					       unsigned type,
+					       enum amdgpu_interrupt_state state)
+{
+	switch (state) {
+	case AMDGPU_IRQ_STATE_DISABLE:
+	case AMDGPU_IRQ_STATE_ENABLE:
+		WREG32_FIELD15_PREREG(GC, 0, CP_INT_CNTL_RING0,
+			       PRIV_INSTR_INT_ENABLE,
+			       state == AMDGPU_IRQ_STATE_ENABLE ? 1 : 0);
+		break;
+	default:
+		break;
+	}
+
+	return 0;
+}
+
+static void gfx_v11_0_handle_priv_fault(struct amdgpu_device *adev,
+					struct amdgpu_iv_entry *entry)
+{
+	u8 me_id, pipe_id, queue_id;
+	struct amdgpu_ring *ring;
+	int i;
+
+	me_id = (entry->ring_id & 0x0c) >> 2;
+	pipe_id = (entry->ring_id & 0x03) >> 0;
+	queue_id = (entry->ring_id & 0x70) >> 4;
+
+	switch (me_id) {
+	case 0:
+		for (i = 0; i < adev->gfx.num_gfx_rings; i++) {
+			ring = &adev->gfx.gfx_ring[i];
+			/* we only enabled 1 gfx queue per pipe for now */
+			if (ring->me == me_id && ring->pipe == pipe_id)
+				drm_sched_fault(&ring->sched);
+		}
+		break;
+	case 1:
+	case 2:
+		for (i = 0; i < adev->gfx.num_compute_rings; i++) {
+			ring = &adev->gfx.compute_ring[i];
+			if (ring->me == me_id && ring->pipe == pipe_id &&
+			    ring->queue == queue_id)
+				drm_sched_fault(&ring->sched);
+		}
+		break;
+	default:
+		BUG();
+	}
+}
+
+static int gfx_v11_0_priv_reg_irq(struct amdgpu_device *adev,
+				  struct amdgpu_irq_src *source,
+				  struct amdgpu_iv_entry *entry)
+{
+	DRM_ERROR("Illegal register access in command stream\n");
+	gfx_v11_0_handle_priv_fault(adev, entry);
+	return 0;
+}
+
+static int gfx_v11_0_priv_inst_irq(struct amdgpu_device *adev,
+				   struct amdgpu_irq_src *source,
+				   struct amdgpu_iv_entry *entry)
+{
+	DRM_ERROR("Illegal instruction in command stream\n");
+	gfx_v11_0_handle_priv_fault(adev, entry);
+	return 0;
+}
+
+#if 0
+static int gfx_v11_0_kiq_set_interrupt_state(struct amdgpu_device *adev,
+					     struct amdgpu_irq_src *src,
+					     unsigned int type,
+					     enum amdgpu_interrupt_state state)
+{
+	uint32_t tmp, target;
+	struct amdgpu_ring *ring = &(adev->gfx.kiq.ring);
+
+	target = SOC15_REG_OFFSET(GC, 0, regCP_ME1_PIPE0_INT_CNTL);
+	target += ring->pipe;
+
+	switch (type) {
+	case AMDGPU_CP_KIQ_IRQ_DRIVER0:
+		if (state == AMDGPU_IRQ_STATE_DISABLE) {
+			tmp = RREG32_SOC15(GC, 0, regCPC_INT_CNTL);
+			tmp = REG_SET_FIELD(tmp, CPC_INT_CNTL,
+					    GENERIC2_INT_ENABLE, 0);
+			WREG32_SOC15(GC, 0, regCPC_INT_CNTL, tmp);
+
+			tmp = RREG32_SOC15_IP(GC, target);
+			tmp = REG_SET_FIELD(tmp, CP_ME1_PIPE0_INT_CNTL,
+					    GENERIC2_INT_ENABLE, 0);
+			WREG32_SOC15_IP(GC, target, tmp);
+		} else {
+			tmp = RREG32_SOC15(GC, 0, regCPC_INT_CNTL);
+			tmp = REG_SET_FIELD(tmp, CPC_INT_CNTL,
+					    GENERIC2_INT_ENABLE, 1);
+			WREG32_SOC15(GC, 0, regCPC_INT_CNTL, tmp);
+
+			tmp = RREG32_SOC15_IP(GC, target);
+			tmp = REG_SET_FIELD(tmp, CP_ME1_PIPE0_INT_CNTL,
+					    GENERIC2_INT_ENABLE, 1);
+			WREG32_SOC15_IP(GC, target, tmp);
+		}
+		break;
+	default:
+		BUG(); /* kiq only support GENERIC2_INT now */
+		break;
+	}
+	return 0;
+}
+#endif
+
+static void gfx_v11_0_emit_mem_sync(struct amdgpu_ring *ring)
+{
+	const unsigned int gcr_cntl =
+			PACKET3_ACQUIRE_MEM_GCR_CNTL_GL2_INV(1) |
+			PACKET3_ACQUIRE_MEM_GCR_CNTL_GL2_WB(1) |
+			PACKET3_ACQUIRE_MEM_GCR_CNTL_GLM_INV(1) |
+			PACKET3_ACQUIRE_MEM_GCR_CNTL_GLM_WB(1) |
+			PACKET3_ACQUIRE_MEM_GCR_CNTL_GL1_INV(1) |
+			PACKET3_ACQUIRE_MEM_GCR_CNTL_GLV_INV(1) |
+			PACKET3_ACQUIRE_MEM_GCR_CNTL_GLK_INV(1) |
+			PACKET3_ACQUIRE_MEM_GCR_CNTL_GLI_INV(1);
+
+	/* ACQUIRE_MEM - make one or more surfaces valid for use by the subsequent operations */
+	amdgpu_ring_write(ring, PACKET3(PACKET3_ACQUIRE_MEM, 6));
+	amdgpu_ring_write(ring, 0); /* CP_COHER_CNTL */
+	amdgpu_ring_write(ring, 0xffffffff);  /* CP_COHER_SIZE */
+	amdgpu_ring_write(ring, 0xffffff);  /* CP_COHER_SIZE_HI */
+	amdgpu_ring_write(ring, 0); /* CP_COHER_BASE */
+	amdgpu_ring_write(ring, 0);  /* CP_COHER_BASE_HI */
+	amdgpu_ring_write(ring, 0x0000000A); /* POLL_INTERVAL */
+	amdgpu_ring_write(ring, gcr_cntl); /* GCR_CNTL */
+}
+
+static const struct amd_ip_funcs gfx_v11_0_ip_funcs = {
+	.name = "gfx_v11_0",
+	.early_init = gfx_v11_0_early_init,
+	.late_init = gfx_v11_0_late_init,
+	.sw_init = gfx_v11_0_sw_init,
+	.sw_fini = gfx_v11_0_sw_fini,
+	.hw_init = gfx_v11_0_hw_init,
+	.hw_fini = gfx_v11_0_hw_fini,
+	.suspend = gfx_v11_0_suspend,
+	.resume = gfx_v11_0_resume,
+	.is_idle = gfx_v11_0_is_idle,
+	.wait_for_idle = gfx_v11_0_wait_for_idle,
+	.soft_reset = gfx_v11_0_soft_reset,
+	.set_clockgating_state = gfx_v11_0_set_clockgating_state,
+	.set_powergating_state = gfx_v11_0_set_powergating_state,
+	.get_clockgating_state = gfx_v11_0_get_clockgating_state,
+};
+
+static const struct amdgpu_ring_funcs gfx_v11_0_ring_funcs_gfx = {
+	.type = AMDGPU_RING_TYPE_GFX,
+	.align_mask = 0xff,
+	.nop = PACKET3(PACKET3_NOP, 0x3FFF),
+	.support_64bit_ptrs = true,
+	.vmhub = AMDGPU_GFXHUB_0,
+	.get_rptr = gfx_v11_0_ring_get_rptr_gfx,
+	.get_wptr = gfx_v11_0_ring_get_wptr_gfx,
+	.set_wptr = gfx_v11_0_ring_set_wptr_gfx,
+	.emit_frame_size = /* 242 dwords maximum in total if 16 IBs */
+		5 + /* COND_EXEC */
+		7 + /* PIPELINE_SYNC */
+		SOC15_FLUSH_GPU_TLB_NUM_WREG * 5 +
+		SOC15_FLUSH_GPU_TLB_NUM_REG_WAIT * 7 +
+		2 + /* VM_FLUSH */
+		8 + /* FENCE for VM_FLUSH */
+		20 + /* GDS switch */
+		5 + /* COND_EXEC */
+		7 + /* HDP_flush */
+		4 + /* VGT_flush */
+		31 + /*	DE_META */
+		3 + /* CNTX_CTRL */
+		5 + /* HDP_INVL */
+		8 + 8 + /* FENCE x2 */
+		8, /* gfx_v11_0_emit_mem_sync */
+	.emit_ib_size =	4, /* gfx_v11_0_ring_emit_ib_gfx */
+	.emit_ib = gfx_v11_0_ring_emit_ib_gfx,
+	.emit_fence = gfx_v11_0_ring_emit_fence,
+	.emit_pipeline_sync = gfx_v11_0_ring_emit_pipeline_sync,
+	.emit_vm_flush = gfx_v11_0_ring_emit_vm_flush,
+	.emit_gds_switch = gfx_v11_0_ring_emit_gds_switch,
+	.emit_hdp_flush = gfx_v11_0_ring_emit_hdp_flush,
+	.test_ring = gfx_v11_0_ring_test_ring,
+	.test_ib = gfx_v11_0_ring_test_ib,
+	.insert_nop = amdgpu_ring_insert_nop,
+	.pad_ib = amdgpu_ring_generic_pad_ib,
+	.emit_cntxcntl = gfx_v11_0_ring_emit_cntxcntl,
+	.init_cond_exec = gfx_v11_0_ring_emit_init_cond_exec,
+	.patch_cond_exec = gfx_v11_0_ring_emit_patch_cond_exec,
+	.preempt_ib = gfx_v11_0_ring_preempt_ib,
+	.emit_frame_cntl = gfx_v11_0_ring_emit_frame_cntl,
+	.emit_wreg = gfx_v11_0_ring_emit_wreg,
+	.emit_reg_wait = gfx_v11_0_ring_emit_reg_wait,
+	.emit_reg_write_reg_wait = gfx_v11_0_ring_emit_reg_write_reg_wait,
+	.soft_recovery = gfx_v11_0_ring_soft_recovery,
+	.emit_mem_sync = gfx_v11_0_emit_mem_sync,
+};
+
+static const struct amdgpu_ring_funcs gfx_v11_0_ring_funcs_compute = {
+	.type = AMDGPU_RING_TYPE_COMPUTE,
+	.align_mask = 0xff,
+	.nop = PACKET3(PACKET3_NOP, 0x3FFF),
+	.support_64bit_ptrs = true,
+	.vmhub = AMDGPU_GFXHUB_0,
+	.get_rptr = gfx_v11_0_ring_get_rptr_compute,
+	.get_wptr = gfx_v11_0_ring_get_wptr_compute,
+	.set_wptr = gfx_v11_0_ring_set_wptr_compute,
+	.emit_frame_size =
+		20 + /* gfx_v11_0_ring_emit_gds_switch */
+		7 + /* gfx_v11_0_ring_emit_hdp_flush */
+		5 + /* hdp invalidate */
+		7 + /* gfx_v11_0_ring_emit_pipeline_sync */
+		SOC15_FLUSH_GPU_TLB_NUM_WREG * 5 +
+		SOC15_FLUSH_GPU_TLB_NUM_REG_WAIT * 7 +
+		2 + /* gfx_v11_0_ring_emit_vm_flush */
+		8 + 8 + 8 + /* gfx_v11_0_ring_emit_fence x3 for user fence, vm fence */
+		8, /* gfx_v11_0_emit_mem_sync */
+	.emit_ib_size =	7, /* gfx_v11_0_ring_emit_ib_compute */
+	.emit_ib = gfx_v11_0_ring_emit_ib_compute,
+	.emit_fence = gfx_v11_0_ring_emit_fence,
+	.emit_pipeline_sync = gfx_v11_0_ring_emit_pipeline_sync,
+	.emit_vm_flush = gfx_v11_0_ring_emit_vm_flush,
+	.emit_gds_switch = gfx_v11_0_ring_emit_gds_switch,
+	.emit_hdp_flush = gfx_v11_0_ring_emit_hdp_flush,
+	.test_ring = gfx_v11_0_ring_test_ring,
+	.test_ib = gfx_v11_0_ring_test_ib,
+	.insert_nop = amdgpu_ring_insert_nop,
+	.pad_ib = amdgpu_ring_generic_pad_ib,
+	.emit_wreg = gfx_v11_0_ring_emit_wreg,
+	.emit_reg_wait = gfx_v11_0_ring_emit_reg_wait,
+	.emit_reg_write_reg_wait = gfx_v11_0_ring_emit_reg_write_reg_wait,
+	.emit_mem_sync = gfx_v11_0_emit_mem_sync,
+};
+
+static const struct amdgpu_ring_funcs gfx_v11_0_ring_funcs_kiq = {
+	.type = AMDGPU_RING_TYPE_KIQ,
+	.align_mask = 0xff,
+	.nop = PACKET3(PACKET3_NOP, 0x3FFF),
+	.support_64bit_ptrs = true,
+	.vmhub = AMDGPU_GFXHUB_0,
+	.get_rptr = gfx_v11_0_ring_get_rptr_compute,
+	.get_wptr = gfx_v11_0_ring_get_wptr_compute,
+	.set_wptr = gfx_v11_0_ring_set_wptr_compute,
+	.emit_frame_size =
+		20 + /* gfx_v11_0_ring_emit_gds_switch */
+		7 + /* gfx_v11_0_ring_emit_hdp_flush */
+		5 + /* hdp invalidate */
+		7 + /* gfx_v11_0_ring_emit_pipeline_sync */
+		SOC15_FLUSH_GPU_TLB_NUM_WREG * 5 +
+		SOC15_FLUSH_GPU_TLB_NUM_REG_WAIT * 7 +
+		2 + /* gfx_v11_0_ring_emit_vm_flush */
+		8 + 8 + 8, /* gfx_v11_0_ring_emit_fence_kiq x3 for user fence, vm fence */
+	.emit_ib_size =	7, /* gfx_v11_0_ring_emit_ib_compute */
+	.emit_ib = gfx_v11_0_ring_emit_ib_compute,
+	.emit_fence = gfx_v11_0_ring_emit_fence_kiq,
+	.test_ring = gfx_v11_0_ring_test_ring,
+	.test_ib = gfx_v11_0_ring_test_ib,
+	.insert_nop = amdgpu_ring_insert_nop,
+	.pad_ib = amdgpu_ring_generic_pad_ib,
+	.emit_rreg = gfx_v11_0_ring_emit_rreg,
+	.emit_wreg = gfx_v11_0_ring_emit_wreg,
+	.emit_reg_wait = gfx_v11_0_ring_emit_reg_wait,
+	.emit_reg_write_reg_wait = gfx_v11_0_ring_emit_reg_write_reg_wait,
+};
+
+static void gfx_v11_0_set_ring_funcs(struct amdgpu_device *adev)
+{
+	int i;
+
+	adev->gfx.kiq.ring.funcs = &gfx_v11_0_ring_funcs_kiq;
+
+	for (i = 0; i < adev->gfx.num_gfx_rings; i++)
+		adev->gfx.gfx_ring[i].funcs = &gfx_v11_0_ring_funcs_gfx;
+
+	for (i = 0; i < adev->gfx.num_compute_rings; i++)
+		adev->gfx.compute_ring[i].funcs = &gfx_v11_0_ring_funcs_compute;
+}
+
+static const struct amdgpu_irq_src_funcs gfx_v11_0_eop_irq_funcs = {
+	.set = gfx_v11_0_set_eop_interrupt_state,
+	.process = gfx_v11_0_eop_irq,
+};
+
+static const struct amdgpu_irq_src_funcs gfx_v11_0_priv_reg_irq_funcs = {
+	.set = gfx_v11_0_set_priv_reg_fault_state,
+	.process = gfx_v11_0_priv_reg_irq,
+};
+
+static const struct amdgpu_irq_src_funcs gfx_v11_0_priv_inst_irq_funcs = {
+	.set = gfx_v11_0_set_priv_inst_fault_state,
+	.process = gfx_v11_0_priv_inst_irq,
+};
+
+static void gfx_v11_0_set_irq_funcs(struct amdgpu_device *adev)
+{
+	adev->gfx.eop_irq.num_types = AMDGPU_CP_IRQ_LAST;
+	adev->gfx.eop_irq.funcs = &gfx_v11_0_eop_irq_funcs;
+
+	adev->gfx.priv_reg_irq.num_types = 1;
+	adev->gfx.priv_reg_irq.funcs = &gfx_v11_0_priv_reg_irq_funcs;
+
+	adev->gfx.priv_inst_irq.num_types = 1;
+	adev->gfx.priv_inst_irq.funcs = &gfx_v11_0_priv_inst_irq_funcs;
+}
+
+static void gfx_v11_0_set_imu_funcs(struct amdgpu_device *adev)
+{
+	adev->gfx.imu.funcs = &gfx_v11_0_imu_funcs;
+}
+
+static void gfx_v11_0_set_rlc_funcs(struct amdgpu_device *adev)
+{
+	adev->gfx.rlc.funcs = &gfx_v11_0_rlc_funcs;
+}
+
+static void gfx_v11_0_set_gds_init(struct amdgpu_device *adev)
+{
+	unsigned total_cu = adev->gfx.config.max_cu_per_sh *
+			    adev->gfx.config.max_sh_per_se *
+			    adev->gfx.config.max_shader_engines;
+
+	adev->gds.gds_size = 0x1000;
+	adev->gds.gds_compute_max_wave_id = total_cu * 32 - 1;
+	adev->gds.gws_size = 64;
+	adev->gds.oa_size = 16;
+}
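
To illustrate the arithmetic above with hypothetical CU counts (not taken from any
particular GC 11.0 part):

	/* Illustrative only: 16 CUs per SH, 2 SHs per SE, 6 SEs */
	unsigned total_cu = 16 * 2 * 6;			/* 192 CUs */
	unsigned max_wave_id = total_cu * 32 - 1;	/* 6143 */
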
+
+static void gfx_v11_0_set_mqd_funcs(struct amdgpu_device *adev)
+{
+	/* set gfx eng mqd */
+	adev->mqds[AMDGPU_HW_IP_GFX].mqd_size =
+		sizeof(struct v11_gfx_mqd);
+	adev->mqds[AMDGPU_HW_IP_GFX].init_mqd =
+		gfx_v11_0_gfx_mqd_init;
+	/* set compute eng mqd */
+	adev->mqds[AMDGPU_HW_IP_COMPUTE].mqd_size =
+		sizeof(struct v11_compute_mqd);
+	adev->mqds[AMDGPU_HW_IP_COMPUTE].init_mqd =
+		gfx_v11_0_compute_mqd_init;
+}
+
+static void gfx_v11_0_set_user_wgp_inactive_bitmap_per_sh(struct amdgpu_device *adev,
+							  u32 bitmap)
+{
+	u32 data;
+
+	if (!bitmap)
+		return;
+
+	data = bitmap << GC_USER_SHADER_ARRAY_CONFIG__INACTIVE_WGPS__SHIFT;
+	data &= GC_USER_SHADER_ARRAY_CONFIG__INACTIVE_WGPS_MASK;
+
+	WREG32_SOC15(GC, 0, regGC_USER_SHADER_ARRAY_CONFIG, data);
+}
+
+static u32 gfx_v11_0_get_wgp_active_bitmap_per_sh(struct amdgpu_device *adev)
+{
+	u32 data, wgp_bitmask;
+	data = RREG32_SOC15(GC, 0, regCC_GC_SHADER_ARRAY_CONFIG);
+	data |= RREG32_SOC15(GC, 0, regGC_USER_SHADER_ARRAY_CONFIG);
+
+	data &= CC_GC_SHADER_ARRAY_CONFIG__INACTIVE_WGPS_MASK;
+	data >>= CC_GC_SHADER_ARRAY_CONFIG__INACTIVE_WGPS__SHIFT;
+
+	wgp_bitmask =
+		amdgpu_gfx_create_bitmask(adev->gfx.config.max_cu_per_sh >> 1);
+
+	return (~data) & wgp_bitmask;
+}
+
+static u32 gfx_v11_0_get_cu_active_bitmap_per_sh(struct amdgpu_device *adev)
+{
+	u32 wgp_idx, wgp_active_bitmap;
+	u32 cu_bitmap_per_wgp, cu_active_bitmap;
+
+	wgp_active_bitmap = gfx_v11_0_get_wgp_active_bitmap_per_sh(adev);
+	cu_active_bitmap = 0;
+
+	for (wgp_idx = 0; wgp_idx < 16; wgp_idx++) {
+		/* if there is one WGP enabled, it means 2 CUs will be enabled */
+		cu_bitmap_per_wgp = 3 << (2 * wgp_idx);
+		if (wgp_active_bitmap & (1 << wgp_idx))
+			cu_active_bitmap |= cu_bitmap_per_wgp;
+	}
+
+	return cu_active_bitmap;
+}
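
A quick sketch of the WGP-to-CU expansion above: each active WGP bit contributes a
pair of adjacent CU bits (the values below are hypothetical):

	/* Illustrative only: WGP0 and WGP2 active in this SH */
	u32 wgp_active_bitmap = 0x5;
	u32 cu_active_bitmap = 0;
	for (int wgp_idx = 0; wgp_idx < 16; wgp_idx++)
		if (wgp_active_bitmap & (1 << wgp_idx))
			cu_active_bitmap |= 3 << (2 * wgp_idx);	/* -> 0x33 */
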
+
+static int gfx_v11_0_get_cu_info(struct amdgpu_device *adev,
+				 struct amdgpu_cu_info *cu_info)
+{
+	int i, j, k, counter, active_cu_number = 0;
+	u32 mask, bitmap;
+	unsigned disable_masks[8 * 2];
+
+	if (!adev || !cu_info)
+		return -EINVAL;
+
+	amdgpu_gfx_parse_disable_cu(disable_masks, 8, 2);
+
+	mutex_lock(&adev->grbm_idx_mutex);
+	for (i = 0; i < adev->gfx.config.max_shader_engines; i++) {
+		for (j = 0; j < adev->gfx.config.max_sh_per_se; j++) {
+			mask = 1;
+			counter = 0;
+			gfx_v11_0_select_se_sh(adev, i, j, 0xffffffff);
+			if (i < 8 && j < 2)
+				gfx_v11_0_set_user_wgp_inactive_bitmap_per_sh(
+					adev, disable_masks[i * 2 + j]);
+			bitmap = gfx_v11_0_get_cu_active_bitmap_per_sh(adev);
+
+			/*
+			 * GFX11 could support more than 4 SEs, while the bitmap
+			 * in cu_info struct is 4x4 and ioctl interface struct
+			 * drm_amdgpu_info_device should keep stable.
+			 * So we use last two columns of bitmap to store cu mask for
+			 * SEs 4 to 7, the layout of the bitmap is as below:
+			 *    SE0: {SH0,SH1} --> {bitmap[0][0], bitmap[0][1]}
+			 *    SE1: {SH0,SH1} --> {bitmap[1][0], bitmap[1][1]}
+			 *    SE2: {SH0,SH1} --> {bitmap[2][0], bitmap[2][1]}
+			 *    SE3: {SH0,SH1} --> {bitmap[3][0], bitmap[3][1]}
+			 *    SE4: {SH0,SH1} --> {bitmap[0][2], bitmap[0][3]}
+			 *    SE5: {SH0,SH1} --> {bitmap[1][2], bitmap[1][3]}
+			 *    SE6: {SH0,SH1} --> {bitmap[2][2], bitmap[2][3]}
+			 *    SE7: {SH0,SH1} --> {bitmap[3][2], bitmap[3][3]}
+			 */
+			cu_info->bitmap[i % 4][j + (i / 4) * 2] = bitmap;
+
+			for (k = 0; k < adev->gfx.config.max_cu_per_sh; k++) {
+				if (bitmap & mask)
+					counter++;
+
+				mask <<= 1;
+			}
+			active_cu_number += counter;
+		}
+	}
+	gfx_v11_0_select_se_sh(adev, 0xffffffff, 0xffffffff, 0xffffffff);
+	mutex_unlock(&adev->grbm_idx_mutex);
+
+	cu_info->number = active_cu_number;
+	cu_info->simd_per_cu = NUM_SIMD_PER_CU;
+
+	return 0;
+}
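
A worked example of the index mapping described in the comment above: for SE5/SH1,
i = 5 and j = 1, so the mask lands in bitmap[5 % 4][1 + (5 / 4) * 2] = bitmap[1][3],
which matches the SE5 row of the layout table.
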
+
+const struct amdgpu_ip_block_version gfx_v11_0_ip_block =
+{
+	.type = AMD_IP_BLOCK_TYPE_GFX,
+	.major = 11,
+	.minor = 0,
+	.rev = 0,
+	.funcs = &gfx_v11_0_ip_funcs,
+};
diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.h b/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.h
new file mode 100644
index 000000000000..10cfc29c27c9
--- /dev/null
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.h
@@ -0,0 +1,29 @@
+/*
+ * Copyright 2021 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __GFX_V11_0_H__
+#define __GFX_V11_0_H__
+
+extern const struct amdgpu_ip_block_version gfx_v11_0_ip_block;
+
+#endif
-- 
2.35.1


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH 22/29] drm/amdkfd: Add KFD support for soc21 v3
  2022-04-29 18:01 [PATCH 00/29] Add support for GC 11.0 Alex Deucher
                   ` (20 preceding siblings ...)
  2022-04-29 18:02 ` [PATCH 21/29] drm/amdgpu: add init support for GFX11 (v2) Alex Deucher
@ 2022-04-29 18:02 ` Alex Deucher
  2022-04-29 18:02 ` [PATCH 23/29] drm/amdgpu: enable GFX CGCG/CGLS for GC11.0.0 Alex Deucher
                   ` (6 subsequent siblings)
  28 siblings, 0 replies; 30+ messages in thread
From: Alex Deucher @ 2022-04-29 18:02 UTC (permalink / raw)
  To: amd-gfx; +Cc: Alex Deucher, Mukul Joshi, Oak Zeng, Hawking Zhang

From: Mukul Joshi <mukul.joshi@amd.com>

Add initial support for soc21 in the KFD compute
driver (Mukul):
- Add a new definition for the soc21 device.
- Add a new file for the amdgpu-kfd interface for the GFX11 family.
- Add new files for queue management, interrupt handling and
  mqd management for the GFX11 family in the KFD driver.
- Related changes/updates for the soc21 device in
  the KFD driver.
- Repurpose the last 2 entries of the SDMA MQD for driver use.

v2: Add an optional argument into update queue operation (Mukul)

v3: Switch to ip version check, replace kgd_dev with
    amdgpu_device (Hawking)

Signed-off-by: Mukul Joshi <mukul.joshi@amd.com>
Signed-off-by: Hawking Zhang <Hawking.Zhang@amd.com>
Reviewed-by: Oak Zeng <Oak.Zeng@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/Makefile           |   3 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c    |  15 +-
 .../drm/amd/amdgpu/amdgpu_amdkfd_gfx_v11.c    | 625 ++++++++++++++++++
 drivers/gpu/drm/amd/amdkfd/Makefile           |   3 +
 drivers/gpu/drm/amd/amdkfd/kfd_crat.c         |   8 +
 drivers/gpu/drm/amd/amdkfd/kfd_device.c       |  24 +-
 .../drm/amd/amdkfd/kfd_device_queue_manager.c | 299 +++++++--
 .../drm/amd/amdkfd/kfd_device_queue_manager.h |   5 +
 .../amd/amdkfd/kfd_device_queue_manager_v11.c |  81 +++
 drivers/gpu/drm/amd/amdkfd/kfd_doorbell.c     |  56 +-
 .../gpu/drm/amd/amdkfd/kfd_int_process_v11.c  | 383 +++++++++++
 .../gpu/drm/amd/amdkfd/kfd_int_process_v9.c   |   8 +-
 drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager.c  |  10 +-
 .../gpu/drm/amd/amdkfd/kfd_mqd_manager_v11.c  | 508 ++++++++++++++
 drivers/gpu/drm/amd/amdkfd/kfd_priv.h         |  13 +
 drivers/gpu/drm/amd/amdkfd/kfd_process.c      |  19 +
 .../amd/amdkfd/kfd_process_queue_manager.c    |  21 +
 drivers/gpu/drm/amd/amdkfd/kfd_topology.c     |   3 +-
 drivers/gpu/drm/amd/amdkfd/soc15_int.h        |   3 +-
 .../gpu/drm/amd/include/kgd_kfd_interface.h   |   1 +
 20 files changed, 2024 insertions(+), 64 deletions(-)
 create mode 100644 drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v11.c
 create mode 100644 drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager_v11.c
 create mode 100644 drivers/gpu/drm/amd/amdkfd/kfd_int_process_v11.c
 create mode 100644 drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v11.c

diff --git a/drivers/gpu/drm/amd/amdgpu/Makefile b/drivers/gpu/drm/amd/amdgpu/Makefile
index 6caf2390a889..6a67210f282f 100644
--- a/drivers/gpu/drm/amd/amdgpu/Makefile
+++ b/drivers/gpu/drm/amd/amdgpu/Makefile
@@ -215,7 +215,8 @@ amdgpu-y += \
 	amdgpu_amdkfd_arcturus.o \
 	amdgpu_amdkfd_aldebaran.o \
 	amdgpu_amdkfd_gfx_v10.o \
-	amdgpu_amdkfd_gfx_v10_3.o
+	amdgpu_amdkfd_gfx_v10_3.o \
+	amdgpu_amdkfd_gfx_v11.o
 
 ifneq ($(CONFIG_DRM_AMDGPU_CIK),)
 amdgpu-y += amdgpu_amdkfd_gfx_v7.o
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
index 64c6664b34e8..1f8161cd507f 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
@@ -100,7 +100,18 @@ static void amdgpu_doorbell_get_kfd_info(struct amdgpu_device *adev,
 	 * The first num_doorbells are used by amdgpu.
 	 * amdkfd takes whatever's left in the aperture.
 	 */
-	if (adev->doorbell.size > adev->doorbell.num_doorbells * sizeof(u32)) {
+	if (adev->enable_mes) {
+		/*
+		 * With MES enabled, we only need to initialize
+		 * the base address. The size and offset are
+		 * not initialized as AMDGPU manages the whole
+		 * doorbell space.
+		 */
+		*aperture_base = adev->doorbell.base;
+		*aperture_size = 0;
+		*start_offset = 0;
+	} else if (adev->doorbell.size > adev->doorbell.num_doorbells *
+						sizeof(u32)) {
 		*aperture_base = adev->doorbell.base;
 		*aperture_size = adev->doorbell.size;
 		*start_offset = adev->doorbell.num_doorbells * sizeof(u32);
@@ -128,7 +139,7 @@ void amdgpu_amdkfd_device_init(struct amdgpu_device *adev)
 					  AMDGPU_GMC_HOLE_START),
 			.drm_render_minor = adev_to_drm(adev)->render->index,
 			.sdma_doorbell_idx = adev->doorbell_index.sdma_engine,
-
+			.enable_mes = adev->enable_mes,
 		};
 
 		/* this is going to have a few of the MSBs set that we need to
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v11.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v11.c
new file mode 100644
index 000000000000..0b0a72ca5695
--- /dev/null
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v11.c
@@ -0,0 +1,625 @@
+/*
+ * Copyright 2021 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+#include <linux/mmu_context.h>
+#include "amdgpu.h"
+#include "amdgpu_amdkfd.h"
+#include "gc/gc_11_0_0_offset.h"
+#include "gc/gc_11_0_0_sh_mask.h"
+#include "oss/osssys_6_0_0_offset.h"
+#include "oss/osssys_6_0_0_sh_mask.h"
+#include "soc15_common.h"
+#include "soc15d.h"
+#include "v11_structs.h"
+#include "soc21.h"
+
+enum hqd_dequeue_request_type {
+	NO_ACTION = 0,
+	DRAIN_PIPE,
+	RESET_WAVES,
+	SAVE_WAVES
+};
+
+static void lock_srbm(struct amdgpu_device *adev, uint32_t mec, uint32_t pipe,
+			uint32_t queue, uint32_t vmid)
+{
+	mutex_lock(&adev->srbm_mutex);
+	soc21_grbm_select(adev, mec, pipe, queue, vmid);
+}
+
+static void unlock_srbm(struct amdgpu_device *adev)
+{
+	soc21_grbm_select(adev, 0, 0, 0, 0);
+	mutex_unlock(&adev->srbm_mutex);
+}
+
+static void acquire_queue(struct amdgpu_device *adev, uint32_t pipe_id,
+				uint32_t queue_id)
+{
+	uint32_t mec = (pipe_id / adev->gfx.mec.num_pipe_per_mec) + 1;
+	uint32_t pipe = (pipe_id % adev->gfx.mec.num_pipe_per_mec);
+
+	lock_srbm(adev, mec, pipe, queue_id, 0);
+}
+
+static uint64_t get_queue_mask(struct amdgpu_device *adev,
+			       uint32_t pipe_id, uint32_t queue_id)
+{
+	unsigned int bit = pipe_id * adev->gfx.mec.num_queue_per_pipe +
+			queue_id;
+
+	return 1ull << bit;
+}
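
For example (the per-pipe queue count is hypothetical):

	/* Illustrative only: 8 queues per pipe, pipe 2, queue 1 */
	unsigned int bit = 2 * 8 + 1;		/* bit 17 */
	uint64_t mask = 1ull << bit;		/* 0x20000 */
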
+
+static void release_queue(struct amdgpu_device *adev)
+{
+	unlock_srbm(adev);
+}
+
+static void program_sh_mem_settings_v11(struct amdgpu_device *adev, uint32_t vmid,
+					uint32_t sh_mem_config,
+					uint32_t sh_mem_ape1_base,
+					uint32_t sh_mem_ape1_limit,
+					uint32_t sh_mem_bases)
+{
+	lock_srbm(adev, 0, 0, 0, vmid);
+
+	WREG32(SOC15_REG_OFFSET(GC, 0, regSH_MEM_CONFIG), sh_mem_config);
+	WREG32(SOC15_REG_OFFSET(GC, 0, regSH_MEM_BASES), sh_mem_bases);
+
+	unlock_srbm(adev);
+}
+
+static int set_pasid_vmid_mapping_v11(struct amdgpu_device *adev, unsigned int pasid,
+					unsigned int vmid)
+{
+	uint32_t value = pasid << IH_VMID_0_LUT__PASID__SHIFT;
+
+	/* Mapping vmid to pasid also for IH block */
+	pr_debug("mapping vmid %d -> pasid %d in IH block for GFX client\n",
+			vmid, pasid);
+	WREG32(SOC15_REG_OFFSET(OSSSYS, 0, regIH_VMID_0_LUT) + vmid, value);
+
+	return 0;
+}
+
+static int init_interrupts_v11(struct amdgpu_device *adev, uint32_t pipe_id)
+{
+	uint32_t mec;
+	uint32_t pipe;
+
+	mec = (pipe_id / adev->gfx.mec.num_pipe_per_mec) + 1;
+	pipe = (pipe_id % adev->gfx.mec.num_pipe_per_mec);
+
+	lock_srbm(adev, mec, pipe, 0, 0);
+
+	WREG32(SOC15_REG_OFFSET(GC, 0, regCPC_INT_CNTL),
+		CP_INT_CNTL_RING0__TIME_STAMP_INT_ENABLE_MASK |
+		CP_INT_CNTL_RING0__OPCODE_ERROR_INT_ENABLE_MASK);
+
+	unlock_srbm(adev);
+
+	return 0;
+}
+
+static uint32_t get_sdma_rlc_reg_offset(struct amdgpu_device *adev,
+				unsigned int engine_id,
+				unsigned int queue_id)
+{
+	uint32_t sdma_engine_reg_base = 0;
+	uint32_t sdma_rlc_reg_offset;
+
+	switch (engine_id) {
+	case 0:
+		sdma_engine_reg_base = SOC15_REG_OFFSET(SDMA0, 0,
+				regSDMA0_QUEUE0_RB_CNTL) - regSDMA0_QUEUE0_RB_CNTL;
+		break;
+	case 1:
+		sdma_engine_reg_base = SOC15_REG_OFFSET(SDMA1, 0,
+				regSDMA1_QUEUE0_RB_CNTL) - regSDMA0_QUEUE0_RB_CNTL;
+		break;
+	default:
+		BUG();
+	}
+
+	sdma_rlc_reg_offset = sdma_engine_reg_base
+		+ queue_id * (regSDMA0_QUEUE1_RB_CNTL - regSDMA0_QUEUE0_RB_CNTL);
+
+	pr_debug("RLC register offset for SDMA%d RLC%d: 0x%x\n", engine_id,
+			queue_id, sdma_rlc_reg_offset);
+
+	return sdma_rlc_reg_offset;
+}
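
The same arithmetic for a hypothetical SDMA1, queue 2 case looks like this (purely
illustrative; the per-queue stride is the distance between the QUEUE1 and QUEUE0
register blocks):

	/* Illustrative only */
	uint32_t base = SOC15_REG_OFFSET(SDMA1, 0, regSDMA1_QUEUE0_RB_CNTL) -
			regSDMA0_QUEUE0_RB_CNTL;
	uint32_t off  = base +
			2 * (regSDMA0_QUEUE1_RB_CNTL - regSDMA0_QUEUE0_RB_CNTL);
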
+
+static inline struct v11_compute_mqd *get_mqd(void *mqd)
+{
+	return (struct v11_compute_mqd *)mqd;
+}
+
+static inline struct v11_sdma_mqd *get_sdma_mqd(void *mqd)
+{
+	return (struct v11_sdma_mqd *)mqd;
+}
+
+static int hqd_load_v11(struct amdgpu_device *adev, void *mqd, uint32_t pipe_id,
+			uint32_t queue_id, uint32_t __user *wptr,
+			uint32_t wptr_shift, uint32_t wptr_mask,
+			struct mm_struct *mm)
+{
+	struct v11_compute_mqd *m;
+	uint32_t *mqd_hqd;
+	uint32_t reg, hqd_base, data;
+
+	m = get_mqd(mqd);
+
+	pr_debug("Load hqd of pipe %d queue %d\n", pipe_id, queue_id);
+	acquire_queue(adev, pipe_id, queue_id);
+
+	/* HIQ is set during driver init period with vmid set to 0 */
+	if (m->cp_hqd_vmid == 0) {
+		uint32_t value, mec, pipe;
+
+		mec = (pipe_id / adev->gfx.mec.num_pipe_per_mec) + 1;
+		pipe = (pipe_id % adev->gfx.mec.num_pipe_per_mec);
+
+		pr_debug("kfd: set HIQ, mec:%d, pipe:%d, queue:%d.\n",
+			mec, pipe, queue_id);
+		value = RREG32(SOC15_REG_OFFSET(GC, 0, regRLC_CP_SCHEDULERS));
+		value = REG_SET_FIELD(value, RLC_CP_SCHEDULERS, scheduler1,
+			((mec << 5) | (pipe << 3) | queue_id | 0x80));
+		WREG32(SOC15_REG_OFFSET(GC, 0, regRLC_CP_SCHEDULERS), value);
+	}
+
+	/* HQD registers extend from CP_MQD_BASE_ADDR to CP_HQD_EOP_WPTR_MEM. */
+	mqd_hqd = &m->cp_mqd_base_addr_lo;
+	hqd_base = SOC15_REG_OFFSET(GC, 0, regCP_MQD_BASE_ADDR);
+
+	for (reg = hqd_base;
+	     reg <= SOC15_REG_OFFSET(GC, 0, regCP_HQD_PQ_WPTR_HI); reg++)
+		WREG32(reg, mqd_hqd[reg - hqd_base]);
+
+
+	/* Activate doorbell logic before triggering WPTR poll. */
+	data = REG_SET_FIELD(m->cp_hqd_pq_doorbell_control,
+			     CP_HQD_PQ_DOORBELL_CONTROL, DOORBELL_EN, 1);
+	WREG32(SOC15_REG_OFFSET(GC, 0, regCP_HQD_PQ_DOORBELL_CONTROL), data);
+
+	if (wptr) {
+		/* Don't read wptr with get_user because the user
+		 * context may not be accessible (if this function
+		 * runs in a work queue). Instead trigger a one-shot
+		 * polling read from memory in the CP. This assumes
+		 * that wptr is GPU-accessible in the queue's VMID via
+		 * ATC or SVM. WPTR==RPTR before starting the poll so
+		 * the CP starts fetching new commands from the right
+		 * place.
+		 *
+		 * Guessing a 64-bit WPTR from a 32-bit RPTR is a bit
+		 * tricky. Assume that the queue didn't overflow. The
+		 * number of valid bits in the 32-bit RPTR depends on
+		 * the queue size. The remaining bits are taken from
+		 * the saved 64-bit WPTR. If the WPTR wrapped, add the
+		 * queue size.
+		 */
+		uint32_t queue_size =
+			2 << REG_GET_FIELD(m->cp_hqd_pq_control,
+					   CP_HQD_PQ_CONTROL, QUEUE_SIZE);
+		uint64_t guessed_wptr = m->cp_hqd_pq_rptr & (queue_size - 1);
+
+		if ((m->cp_hqd_pq_wptr_lo & (queue_size - 1)) < guessed_wptr)
+			guessed_wptr += queue_size;
+		guessed_wptr += m->cp_hqd_pq_wptr_lo & ~(queue_size - 1);
+		guessed_wptr += (uint64_t)m->cp_hqd_pq_wptr_hi << 32;
+
+		WREG32(SOC15_REG_OFFSET(GC, 0, regCP_HQD_PQ_WPTR_LO),
+		       lower_32_bits(guessed_wptr));
+		WREG32(SOC15_REG_OFFSET(GC, 0, regCP_HQD_PQ_WPTR_HI),
+		       upper_32_bits(guessed_wptr));
+		WREG32(SOC15_REG_OFFSET(GC, 0, regCP_HQD_PQ_WPTR_POLL_ADDR),
+		       lower_32_bits((uint64_t)wptr));
+		WREG32(SOC15_REG_OFFSET(GC, 0, regCP_HQD_PQ_WPTR_POLL_ADDR_HI),
+		       upper_32_bits((uint64_t)wptr));
+		pr_debug("%s setting CP_PQ_WPTR_POLL_CNTL1 to %x\n", __func__,
+			 (uint32_t)get_queue_mask(adev, pipe_id, queue_id));
+		WREG32(SOC15_REG_OFFSET(GC, 0, regCP_PQ_WPTR_POLL_CNTL1),
+		       (uint32_t)get_queue_mask(adev, pipe_id, queue_id));
+	}
+
+	/* Start the EOP fetcher */
+	WREG32(SOC15_REG_OFFSET(GC, 0, regCP_HQD_EOP_RPTR),
+	       REG_SET_FIELD(m->cp_hqd_eop_rptr,
+			     CP_HQD_EOP_RPTR, INIT_FETCHER, 1));
+
+	data = REG_SET_FIELD(m->cp_hqd_active, CP_HQD_ACTIVE, ACTIVE, 1);
+	WREG32(SOC15_REG_OFFSET(GC, 0, regCP_HQD_ACTIVE), data);
+
+	release_queue(adev);
+
+	return 0;
+}
+
+static int hiq_mqd_load_v11(struct amdgpu_device *adev, void *mqd,
+			      uint32_t pipe_id, uint32_t queue_id,
+			      uint32_t doorbell_off)
+{
+	struct amdgpu_ring *kiq_ring = &adev->gfx.kiq.ring;
+	struct v11_compute_mqd *m;
+	uint32_t mec, pipe;
+	int r;
+
+	m = get_mqd(mqd);
+
+	acquire_queue(adev, pipe_id, queue_id);
+
+	mec = (pipe_id / adev->gfx.mec.num_pipe_per_mec) + 1;
+	pipe = (pipe_id % adev->gfx.mec.num_pipe_per_mec);
+
+	pr_debug("kfd: set HIQ, mec:%d, pipe:%d, queue:%d.\n",
+		 mec, pipe, queue_id);
+
+	spin_lock(&adev->gfx.kiq.ring_lock);
+	r = amdgpu_ring_alloc(kiq_ring, 7);
+	if (r) {
+		pr_err("Failed to alloc KIQ (%d).\n", r);
+		goto out_unlock;
+	}
+
+	amdgpu_ring_write(kiq_ring, PACKET3(PACKET3_MAP_QUEUES, 5));
+	amdgpu_ring_write(kiq_ring,
+			  PACKET3_MAP_QUEUES_QUEUE_SEL(0) | /* Queue_Sel */
+			  PACKET3_MAP_QUEUES_VMID(m->cp_hqd_vmid) | /* VMID */
+			  PACKET3_MAP_QUEUES_QUEUE(queue_id) |
+			  PACKET3_MAP_QUEUES_PIPE(pipe) |
+			  PACKET3_MAP_QUEUES_ME((mec - 1)) |
+			  PACKET3_MAP_QUEUES_QUEUE_TYPE(0) | /*queue_type: normal compute queue */
+			  PACKET3_MAP_QUEUES_ALLOC_FORMAT(0) | /* alloc format: all_on_one_pipe */
+			  PACKET3_MAP_QUEUES_ENGINE_SEL(1) | /* engine_sel: hiq */
+			  PACKET3_MAP_QUEUES_NUM_QUEUES(1)); /* num_queues: must be 1 */
+	amdgpu_ring_write(kiq_ring,
+			PACKET3_MAP_QUEUES_DOORBELL_OFFSET(doorbell_off));
+	amdgpu_ring_write(kiq_ring, m->cp_mqd_base_addr_lo);
+	amdgpu_ring_write(kiq_ring, m->cp_mqd_base_addr_hi);
+	amdgpu_ring_write(kiq_ring, m->cp_hqd_pq_wptr_poll_addr_lo);
+	amdgpu_ring_write(kiq_ring, m->cp_hqd_pq_wptr_poll_addr_hi);
+	amdgpu_ring_commit(kiq_ring);
+
+out_unlock:
+	spin_unlock(&adev->gfx.kiq.ring_lock);
+	release_queue(adev);
+
+	return r;
+}
+
+static int hqd_dump_v11(struct amdgpu_device *adev,
+			uint32_t pipe_id, uint32_t queue_id,
+			uint32_t (**dump)[2], uint32_t *n_regs)
+{
+	uint32_t i = 0, reg;
+#define HQD_N_REGS 56
+#define DUMP_REG(addr) do {				\
+		if (WARN_ON_ONCE(i >= HQD_N_REGS))	\
+			break;				\
+		(*dump)[i][0] = (addr) << 2;		\
+		(*dump)[i++][1] = RREG32(addr);		\
+	} while (0)
+
+	*dump = kmalloc(HQD_N_REGS*2*sizeof(uint32_t), GFP_KERNEL);
+	if (*dump == NULL)
+		return -ENOMEM;
+
+	acquire_queue(adev, pipe_id, queue_id);
+
+	for (reg = SOC15_REG_OFFSET(GC, 0, regCP_MQD_BASE_ADDR);
+	     reg <= SOC15_REG_OFFSET(GC, 0, regCP_HQD_PQ_WPTR_HI); reg++)
+		DUMP_REG(reg);
+
+	release_queue(adev);
+
+	WARN_ON_ONCE(i != HQD_N_REGS);
+	*n_regs = i;
+
+	return 0;
+}
+
+static int hqd_sdma_load_v11(struct amdgpu_device *adev, void *mqd,
+			     uint32_t __user *wptr, struct mm_struct *mm)
+{
+	struct v11_sdma_mqd *m;
+	uint32_t sdma_rlc_reg_offset;
+	unsigned long end_jiffies;
+	uint32_t data;
+	uint64_t data64;
+	uint64_t __user *wptr64 = (uint64_t __user *)wptr;
+
+	m = get_sdma_mqd(mqd);
+	sdma_rlc_reg_offset = get_sdma_rlc_reg_offset(adev, m->sdma_engine_id,
+					    m->sdma_queue_id);
+
+	WREG32(sdma_rlc_reg_offset + regSDMA0_QUEUE0_RB_CNTL,
+		m->sdmax_rlcx_rb_cntl & (~SDMA0_QUEUE0_RB_CNTL__RB_ENABLE_MASK));
+
+	end_jiffies = msecs_to_jiffies(2000) + jiffies;
+	while (true) {
+		data = RREG32(sdma_rlc_reg_offset + regSDMA0_QUEUE0_CONTEXT_STATUS);
+		if (data & SDMA0_QUEUE0_CONTEXT_STATUS__IDLE_MASK)
+			break;
+		if (time_after(jiffies, end_jiffies)) {
+			pr_err("SDMA RLC not idle in %s\n", __func__);
+			return -ETIME;
+		}
+		usleep_range(500, 1000);
+	}
+
+	WREG32(sdma_rlc_reg_offset + regSDMA0_QUEUE0_DOORBELL_OFFSET,
+	       m->sdmax_rlcx_doorbell_offset);
+
+	data = REG_SET_FIELD(m->sdmax_rlcx_doorbell, SDMA0_QUEUE0_DOORBELL,
+			     ENABLE, 1);
+	WREG32(sdma_rlc_reg_offset + regSDMA0_QUEUE0_DOORBELL, data);
+	WREG32(sdma_rlc_reg_offset + regSDMA0_QUEUE0_RB_RPTR,
+				m->sdmax_rlcx_rb_rptr);
+	WREG32(sdma_rlc_reg_offset + regSDMA0_QUEUE0_RB_RPTR_HI,
+				m->sdmax_rlcx_rb_rptr_hi);
+
+	WREG32(sdma_rlc_reg_offset + regSDMA0_QUEUE0_MINOR_PTR_UPDATE, 1);
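+	/* Use the saved user-mode wptr if it can be read; otherwise fall
+	 * back to the read pointer so the ring starts out empty.
+	 */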
+	if (read_user_wptr(mm, wptr64, data64)) {
+		WREG32(sdma_rlc_reg_offset + regSDMA0_QUEUE0_RB_WPTR,
+		       lower_32_bits(data64));
+		WREG32(sdma_rlc_reg_offset + regSDMA0_QUEUE0_RB_WPTR_HI,
+		       upper_32_bits(data64));
+	} else {
+		WREG32(sdma_rlc_reg_offset + regSDMA0_QUEUE0_RB_WPTR,
+		       m->sdmax_rlcx_rb_rptr);
+		WREG32(sdma_rlc_reg_offset + regSDMA0_QUEUE0_RB_WPTR_HI,
+		       m->sdmax_rlcx_rb_rptr_hi);
+	}
+	WREG32(sdma_rlc_reg_offset + regSDMA0_QUEUE0_MINOR_PTR_UPDATE, 0);
+
+	WREG32(sdma_rlc_reg_offset + regSDMA0_QUEUE0_RB_BASE, m->sdmax_rlcx_rb_base);
+	WREG32(sdma_rlc_reg_offset + regSDMA0_QUEUE0_RB_BASE_HI,
+			m->sdmax_rlcx_rb_base_hi);
+	WREG32(sdma_rlc_reg_offset + regSDMA0_QUEUE0_RB_RPTR_ADDR_LO,
+			m->sdmax_rlcx_rb_rptr_addr_lo);
+	WREG32(sdma_rlc_reg_offset + regSDMA0_QUEUE0_RB_RPTR_ADDR_HI,
+			m->sdmax_rlcx_rb_rptr_addr_hi);
+
+	data = REG_SET_FIELD(m->sdmax_rlcx_rb_cntl, SDMA0_QUEUE0_RB_CNTL,
+			     RB_ENABLE, 1);
+	WREG32(sdma_rlc_reg_offset + regSDMA0_QUEUE0_RB_CNTL, data);
+
+	return 0;
+}
+
+static int hqd_sdma_dump_v11(struct amdgpu_device *adev,
+			     uint32_t engine_id, uint32_t queue_id,
+			     uint32_t (**dump)[2], uint32_t *n_regs)
+{
+	uint32_t sdma_rlc_reg_offset = get_sdma_rlc_reg_offset(adev,
+			engine_id, queue_id);
+	uint32_t i = 0, reg;
+#undef HQD_N_REGS
+#define HQD_N_REGS (7+11+1+12+12)
+
+	*dump = kmalloc(HQD_N_REGS*2*sizeof(uint32_t), GFP_KERNEL);
+	if (*dump == NULL)
+		return -ENOMEM;
+
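+	/* HQD_N_REGS is the sum of the five SDMA queue register ranges
+	 * dumped below.
+	 */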
+	for (reg = regSDMA0_QUEUE0_RB_CNTL;
+	     reg <= regSDMA0_QUEUE0_RB_WPTR_HI; reg++)
+		DUMP_REG(sdma_rlc_reg_offset + reg);
+	for (reg = regSDMA0_QUEUE0_RB_RPTR_ADDR_HI;
+	     reg <= regSDMA0_QUEUE0_DOORBELL; reg++)
+		DUMP_REG(sdma_rlc_reg_offset + reg);
+	for (reg = regSDMA0_QUEUE0_DOORBELL_LOG;
+	     reg <= regSDMA0_QUEUE0_DOORBELL_LOG; reg++)
+		DUMP_REG(sdma_rlc_reg_offset + reg);
+	for (reg = regSDMA0_QUEUE0_DOORBELL_OFFSET;
+	     reg <= regSDMA0_QUEUE0_RB_PREEMPT; reg++)
+		DUMP_REG(sdma_rlc_reg_offset + reg);
+	for (reg = regSDMA0_QUEUE0_MIDCMD_DATA0;
+	     reg <= regSDMA0_QUEUE0_MIDCMD_CNTL; reg++)
+		DUMP_REG(sdma_rlc_reg_offset + reg);
+
+	WARN_ON_ONCE(i != HQD_N_REGS);
+	*n_regs = i;
+
+	return 0;
+}
+
+static bool hqd_is_occupied_v11(struct amdgpu_device *adev, uint64_t queue_address,
+				uint32_t pipe_id, uint32_t queue_id)
+{
+	uint32_t act;
+	bool retval = false;
+	uint32_t low, high;
+
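+	/* The slot is occupied when the HQD is active and its PQ base
+	 * matches the queue address (compared as address >> 8).
+	 */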
+	acquire_queue(adev, pipe_id, queue_id);
+	act = RREG32(SOC15_REG_OFFSET(GC, 0, regCP_HQD_ACTIVE));
+	if (act) {
+		low = lower_32_bits(queue_address >> 8);
+		high = upper_32_bits(queue_address >> 8);
+
+		if (low == RREG32(SOC15_REG_OFFSET(GC, 0, regCP_HQD_PQ_BASE)) &&
+		   high == RREG32(SOC15_REG_OFFSET(GC, 0, regCP_HQD_PQ_BASE_HI)))
+			retval = true;
+	}
+	release_queue(adev);
+	return retval;
+}
+
+static bool hqd_sdma_is_occupied_v11(struct amdgpu_device *adev, void *mqd)
+{
+	struct v11_sdma_mqd *m;
+	uint32_t sdma_rlc_reg_offset;
+	uint32_t sdma_rlc_rb_cntl;
+
+	m = get_sdma_mqd(mqd);
+	sdma_rlc_reg_offset = get_sdma_rlc_reg_offset(adev, m->sdma_engine_id,
+					    m->sdma_queue_id);
+
+	sdma_rlc_rb_cntl = RREG32(sdma_rlc_reg_offset + regSDMA0_QUEUE0_RB_CNTL);
+
+	if (sdma_rlc_rb_cntl & SDMA0_QUEUE0_RB_CNTL__RB_ENABLE_MASK)
+		return true;
+
+	return false;
+}
+
+static int hqd_destroy_v11(struct amdgpu_device *adev, void *mqd,
+				enum kfd_preempt_type reset_type,
+				unsigned int utimeout, uint32_t pipe_id,
+				uint32_t queue_id)
+{
+	enum hqd_dequeue_request_type type;
+	unsigned long end_jiffies;
+	uint32_t temp;
+	struct v11_compute_mqd *m = get_mqd(mqd);
+
+	acquire_queue(adev, pipe_id, queue_id);
+
+	if (m->cp_hqd_vmid == 0)
+		WREG32_FIELD15_PREREG(GC, 0, RLC_CP_SCHEDULERS, scheduler1, 0);
+
+	switch (reset_type) {
+	case KFD_PREEMPT_TYPE_WAVEFRONT_DRAIN:
+		type = DRAIN_PIPE;
+		break;
+	case KFD_PREEMPT_TYPE_WAVEFRONT_RESET:
+		type = RESET_WAVES;
+		break;
+	default:
+		type = DRAIN_PIPE;
+		break;
+	}
+
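+	/* Issue the dequeue request and poll CP_HQD_ACTIVE until the queue
+	 * goes inactive or the caller-supplied timeout expires.
+	 */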
+	WREG32(SOC15_REG_OFFSET(GC, 0, regCP_HQD_DEQUEUE_REQUEST), type);
+
+	end_jiffies = (utimeout * HZ / 1000) + jiffies;
+	while (true) {
+		temp = RREG32(SOC15_REG_OFFSET(GC, 0, regCP_HQD_ACTIVE));
+		if (!(temp & CP_HQD_ACTIVE__ACTIVE_MASK))
+			break;
+		if (time_after(jiffies, end_jiffies)) {
+			pr_err("cp queue pipe %d queue %d preemption failed\n",
+					pipe_id, queue_id);
+			release_queue(adev);
+			return -ETIME;
+		}
+		usleep_range(500, 1000);
+	}
+
+	release_queue(adev);
+	return 0;
+}
+
+static int hqd_sdma_destroy_v11(struct amdgpu_device *adev, void *mqd,
+				unsigned int utimeout)
+{
+	struct v11_sdma_mqd *m;
+	uint32_t sdma_rlc_reg_offset;
+	uint32_t temp;
+	unsigned long end_jiffies = (utimeout * HZ / 1000) + jiffies;
+
+	m = get_sdma_mqd(mqd);
+	sdma_rlc_reg_offset = get_sdma_rlc_reg_offset(adev, m->sdma_engine_id,
+					    m->sdma_queue_id);
+
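+	/* Disable the ring buffer, wait for the engine to go idle, then
+	 * save the final read pointer back into the MQD.
+	 */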
+	temp = RREG32(sdma_rlc_reg_offset + regSDMA0_QUEUE0_RB_CNTL);
+	temp = temp & ~SDMA0_QUEUE0_RB_CNTL__RB_ENABLE_MASK;
+	WREG32(sdma_rlc_reg_offset + regSDMA0_QUEUE0_RB_CNTL, temp);
+
+	while (true) {
+		temp = RREG32(sdma_rlc_reg_offset + regSDMA0_QUEUE0_CONTEXT_STATUS);
+		if (temp & SDMA0_QUEUE0_CONTEXT_STATUS__IDLE_MASK)
+			break;
+		if (time_after(jiffies, end_jiffies)) {
+			pr_err("SDMA RLC not idle in %s\n", __func__);
+			return -ETIME;
+		}
+		usleep_range(500, 1000);
+	}
+
+	WREG32(sdma_rlc_reg_offset + regSDMA0_QUEUE0_DOORBELL, 0);
+	WREG32(sdma_rlc_reg_offset + regSDMA0_QUEUE0_RB_CNTL,
+		RREG32(sdma_rlc_reg_offset + regSDMA0_QUEUE0_RB_CNTL) |
+		SDMA0_QUEUE0_RB_CNTL__RB_ENABLE_MASK);
+
+	m->sdmax_rlcx_rb_rptr = RREG32(sdma_rlc_reg_offset + regSDMA0_QUEUE0_RB_RPTR);
+	m->sdmax_rlcx_rb_rptr_hi =
+		RREG32(sdma_rlc_reg_offset + regSDMA0_QUEUE0_RB_RPTR_HI);
+
+	return 0;
+}
+
+static int wave_control_execute_v11(struct amdgpu_device *adev,
+					uint32_t gfx_index_val,
+					uint32_t sq_cmd)
+{
+	uint32_t data = 0;
+
+	mutex_lock(&adev->grbm_idx_mutex);
+
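+	/* Program GRBM_GFX_INDEX to target the requested waves, issue the
+	 * SQ command, then restore broadcast mode.
+	 */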
+	WREG32(SOC15_REG_OFFSET(GC, 0, regGRBM_GFX_INDEX), gfx_index_val);
+	WREG32(SOC15_REG_OFFSET(GC, 0, regSQ_CMD), sq_cmd);
+
+	data = REG_SET_FIELD(data, GRBM_GFX_INDEX,
+		INSTANCE_BROADCAST_WRITES, 1);
+	data = REG_SET_FIELD(data, GRBM_GFX_INDEX,
+		SA_BROADCAST_WRITES, 1);
+	data = REG_SET_FIELD(data, GRBM_GFX_INDEX,
+		SE_BROADCAST_WRITES, 1);
+
+	WREG32(SOC15_REG_OFFSET(GC, 0, regGRBM_GFX_INDEX), data);
+	mutex_unlock(&adev->grbm_idx_mutex);
+
+	return 0;
+}
+
+static void set_vm_context_page_table_base_v11(struct amdgpu_device *adev,
+		uint32_t vmid, uint64_t page_table_base)
+{
+	if (!amdgpu_amdkfd_is_kfd_vmid(adev, vmid)) {
+		pr_err("trying to set page table base for wrong VMID %u\n",
+		       vmid);
+		return;
+	}
+
+	/* SDMA is on gfxhub as well for gfx11 adapters */
+	adev->gfxhub.funcs->setup_vm_pt_regs(adev, vmid, page_table_base);
+}
+
+const struct kfd2kgd_calls gfx_v11_kfd2kgd = {
+	.program_sh_mem_settings = program_sh_mem_settings_v11,
+	.set_pasid_vmid_mapping = set_pasid_vmid_mapping_v11,
+	.init_interrupts = init_interrupts_v11,
+	.hqd_load = hqd_load_v11,
+	.hiq_mqd_load = hiq_mqd_load_v11,
+	.hqd_sdma_load = hqd_sdma_load_v11,
+	.hqd_dump = hqd_dump_v11,
+	.hqd_sdma_dump = hqd_sdma_dump_v11,
+	.hqd_is_occupied = hqd_is_occupied_v11,
+	.hqd_sdma_is_occupied = hqd_sdma_is_occupied_v11,
+	.hqd_destroy = hqd_destroy_v11,
+	.hqd_sdma_destroy = hqd_sdma_destroy_v11,
+	.wave_control_execute = wave_control_execute_v11,
+	.get_atc_vmid_pasid_mapping_info = NULL,
+	.set_vm_context_page_table_base = set_vm_context_page_table_base_v11,
+};
diff --git a/drivers/gpu/drm/amd/amdkfd/Makefile b/drivers/gpu/drm/amd/amdkfd/Makefile
index 19cfbf9577b4..e758c2a24cd0 100644
--- a/drivers/gpu/drm/amd/amdkfd/Makefile
+++ b/drivers/gpu/drm/amd/amdkfd/Makefile
@@ -37,6 +37,7 @@ AMDKFD_FILES	:= $(AMDKFD_PATH)/kfd_module.o \
 		$(AMDKFD_PATH)/kfd_mqd_manager_vi.o \
 		$(AMDKFD_PATH)/kfd_mqd_manager_v9.o \
 		$(AMDKFD_PATH)/kfd_mqd_manager_v10.o \
+		$(AMDKFD_PATH)/kfd_mqd_manager_v11.o \
 		$(AMDKFD_PATH)/kfd_kernel_queue.o \
 		$(AMDKFD_PATH)/kfd_packet_manager.o \
 		$(AMDKFD_PATH)/kfd_packet_manager_vi.o \
@@ -47,10 +48,12 @@ AMDKFD_FILES	:= $(AMDKFD_PATH)/kfd_module.o \
 		$(AMDKFD_PATH)/kfd_device_queue_manager_vi.o \
 		$(AMDKFD_PATH)/kfd_device_queue_manager_v9.o \
 		$(AMDKFD_PATH)/kfd_device_queue_manager_v10.o \
+		$(AMDKFD_PATH)/kfd_device_queue_manager_v11.o \
 		$(AMDKFD_PATH)/kfd_interrupt.o \
 		$(AMDKFD_PATH)/kfd_events.o \
 		$(AMDKFD_PATH)/cik_event_interrupt.o \
 		$(AMDKFD_PATH)/kfd_int_process_v9.o \
+		$(AMDKFD_PATH)/kfd_int_process_v11.o \
 		$(AMDKFD_PATH)/kfd_smi_events.o \
 		$(AMDKFD_PATH)/kfd_crat.o
 
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_crat.c b/drivers/gpu/drm/amd/amdkfd/kfd_crat.c
index 51e1c982ad2a..e9d79facb83a 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_crat.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_crat.c
@@ -1315,6 +1315,8 @@ static int fill_in_l2_l3_pcache(struct crat_subtype_cache *pcache,
 	return 1;
 }
 
+#define KFD_MAX_CACHE_TYPES 6
+
 static int kfd_fill_gpu_cache_info_from_gfx_config(struct kfd_dev *kdev,
 						   struct kfd_gpu_cache_info *pcache_info)
 {
@@ -1408,6 +1410,7 @@ static int kfd_fill_gpu_cache_info(struct kfd_dev *kdev,
 			int *num_of_entries)
 {
 	struct kfd_gpu_cache_info *pcache_info;
+	struct kfd_gpu_cache_info cache_info[KFD_MAX_CACHE_TYPES];
 	int num_of_cache_types = 0;
 	int i, j, k;
 	int ct = 0;
@@ -1516,6 +1519,11 @@ static int kfd_fill_gpu_cache_info(struct kfd_dev *kdev,
 			pcache_info = yellow_carp_cache_info;
 			num_of_cache_types = ARRAY_SIZE(yellow_carp_cache_info);
 			break;
+		case IP_VERSION(11, 0, 0):
+			pcache_info = cache_info;
+			num_of_cache_types =
+				kfd_fill_gpu_cache_info_from_gfx_config(kdev, pcache_info);
+			break;
 		default:
 			return -EINVAL;
 		}
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device.c b/drivers/gpu/drm/amd/amdkfd/kfd_device.c
index ed33e95c03e6..3d4faa66d4b0 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_device.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_device.c
@@ -53,6 +53,7 @@ extern const struct kfd2kgd_calls arcturus_kfd2kgd;
 extern const struct kfd2kgd_calls aldebaran_kfd2kgd;
 extern const struct kfd2kgd_calls gfx_v10_kfd2kgd;
 extern const struct kfd2kgd_calls gfx_v10_3_kfd2kgd;
+extern const struct kfd2kgd_calls gfx_v11_kfd2kgd;
 
 static int kfd_gtt_sa_init(struct kfd_dev *kfd, unsigned int buf_size,
 				unsigned int chunk_size);
@@ -60,7 +61,7 @@ static void kfd_gtt_sa_fini(struct kfd_dev *kfd);
 
 static int kfd_resume(struct kfd_dev *kfd);
 
-static void kfd_device_info_set_sdma_queue_num(struct kfd_dev *kfd)
+static void kfd_device_info_set_sdma_info(struct kfd_dev *kfd)
 {
 	uint32_t sdma_version = kfd->adev->ip_versions[SDMA0_HWIP][0];
 
@@ -85,6 +86,7 @@ static void kfd_device_info_set_sdma_queue_num(struct kfd_dev *kfd)
 	case IP_VERSION(5, 2, 2):/* NAVY_FLOUNDER */
 	case IP_VERSION(5, 2, 4):/* DIMGREY_CAVEFISH */
 	case IP_VERSION(5, 2, 5):/* BEIGE_GOBY */
+	case IP_VERSION(6, 0, 0):
 		kfd->device_info.num_sdma_queues_per_engine = 8;
 		break;
 	default:
@@ -93,6 +95,17 @@ static void kfd_device_info_set_sdma_queue_num(struct kfd_dev *kfd)
 			sdma_version);
 		kfd->device_info.num_sdma_queues_per_engine = 8;
 	}
+
+	switch (sdma_version) {
+	case IP_VERSION(6, 0, 0):
+		/* Reserve 1 for paging and 1 for gfx */
+		kfd->device_info.num_reserved_sdma_queues_per_engine = 2;
+		/* BIT(0)=engine-0 queue-0; BIT(1)=engine-1 queue-0; BIT(2)=engine-0 queue-1; ... */
+		kfd->device_info.reserved_sdma_queues_bitmap = 0xFULL;
+		break;
+	default:
+		break;
+	}
 }
 
 static void kfd_device_info_set_event_interrupt_class(struct kfd_dev *kfd)
@@ -121,6 +134,9 @@ static void kfd_device_info_set_event_interrupt_class(struct kfd_dev *kfd)
 	case IP_VERSION(10, 3, 5): /* BEIGE_GOBY */
 		kfd->device_info.event_interrupt_class = &event_interrupt_class_v9;
 		break;
+	case IP_VERSION(11, 0, 0):
+		kfd->device_info.event_interrupt_class = &event_interrupt_class_v11;
+		break;
 	default:
 		dev_warn(kfd_device, "v9 event interrupt handler is set due to "
 			"mismatch of gc ip block(GC_HWIP:0x%x).\n", gc_version);
@@ -145,7 +161,7 @@ static void kfd_device_info_init(struct kfd_dev *kfd,
 		kfd->device_info.ih_ring_entry_size = 8 * sizeof(uint32_t);
 		kfd->device_info.supports_cwsr = true;
 
-		kfd_device_info_set_sdma_queue_num(kfd);
+		kfd_device_info_set_sdma_info(kfd);
 
 		kfd_device_info_set_event_interrupt_class(kfd);
 
@@ -346,6 +362,10 @@ struct kfd_dev *kgd2kfd_probe(struct amdgpu_device *adev, bool vf)
 			if (!vf)
 				f2g = &gfx_v10_3_kfd2kgd;
 			break;
+		case IP_VERSION(11, 0, 0):
+			gfx_target_version = 110000;
+			f2g = &gfx_v11_kfd2kgd;
+			break;
 		default:
 			break;
 		}
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
index 198672264492..e9c9a3a67ab0 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
@@ -35,6 +35,7 @@
 #include "cik_regs.h"
 #include "kfd_kernel_queue.h"
 #include "amdgpu_amdkfd.h"
+#include "mes_api_def.h"
 
 /* Size of the per-pipe EOP queue */
 #define CIK_HPD_EOP_BYTES_LOG2 11
@@ -118,6 +119,11 @@ unsigned int get_num_xgmi_sdma_queues(struct device_queue_manager *dqm)
 		dqm->dev->device_info.num_sdma_queues_per_engine;
 }
 
+static inline uint64_t get_reserved_sdma_queues_bitmap(struct device_queue_manager *dqm)
+{
+	return dqm->dev->device_info.reserved_sdma_queues_bitmap;
+}
+
 void program_sh_mem_settings(struct device_queue_manager *dqm,
 					struct qcm_process_device *qpd)
 {
@@ -129,6 +135,151 @@ void program_sh_mem_settings(struct device_queue_manager *dqm,
 						qpd->sh_mem_bases);
 }
 
+static void kfd_hws_hang(struct device_queue_manager *dqm)
+{
+	/*
+	 * Issue a GPU reset if HWS is unresponsive
+	 */
+	dqm->is_hws_hang = true;
+
+	/* It's possible we're detecting a HWS hang in the
+	 * middle of a GPU reset. No need to schedule another
+	 * reset in this case.
+	 */
+	if (!dqm->is_resetting)
+		schedule_work(&dqm->hw_exception_work);
+}
+
+static int convert_to_mes_queue_type(int queue_type)
+{
+	int mes_queue_type;
+
+	switch (queue_type) {
+	case KFD_QUEUE_TYPE_COMPUTE:
+		mes_queue_type = MES_QUEUE_TYPE_COMPUTE;
+		break;
+	case KFD_QUEUE_TYPE_SDMA:
+		mes_queue_type = MES_QUEUE_TYPE_SDMA;
+		break;
+	default:
+		WARN(1, "Invalid queue type %d", queue_type);
+		mes_queue_type = -EINVAL;
+		break;
+	}
+
+	return mes_queue_type;
+}
+
+static int add_queue_mes(struct device_queue_manager *dqm, struct queue *q,
+			 struct qcm_process_device *qpd)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)dqm->dev->adev;
+	struct kfd_process_device *pdd = qpd_to_pdd(qpd);
+	struct mes_add_queue_input queue_input;
+	int r;
+
+	if (dqm->is_hws_hang)
+		return -EIO;
+
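+	/* Describe the queue to the MES firmware: process and gang context
+	 * buffers, MQD location, doorbell offset and wptr address.
+	 */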
+	memset(&queue_input, 0x0, sizeof(struct mes_add_queue_input));
+	queue_input.process_id = qpd->pqm->process->pasid;
+	queue_input.page_table_base_addr = qpd->page_table_base;
+	queue_input.process_va_start = 0;
+	queue_input.process_va_end = adev->vm_manager.max_pfn - 1;
+	/* MES unit for quantum is 100ns */
+	queue_input.process_quantum = KFD_MES_PROCESS_QUANTUM;  /* Equivalent to 10ms. */
+	queue_input.process_context_addr = pdd->proc_ctx_gpu_addr;
+	queue_input.gang_quantum = KFD_MES_GANG_QUANTUM; /* Equivalent to 1ms */
+	queue_input.gang_context_addr = q->gang_ctx_gpu_addr;
+	queue_input.inprocess_gang_priority = q->properties.priority;
+	queue_input.gang_global_priority_level =
+					AMDGPU_MES_PRIORITY_LEVEL_NORMAL;
+	queue_input.doorbell_offset = q->properties.doorbell_off;
+	queue_input.mqd_addr = q->gart_mqd_addr;
+	queue_input.wptr_addr = (uint64_t)q->properties.write_ptr;
+	queue_input.paging = false;
+	queue_input.tba_addr = qpd->tba_addr;
+	queue_input.tma_addr = qpd->tma_addr;
+
+	queue_input.queue_type = convert_to_mes_queue_type(q->properties.type);
+	if (queue_input.queue_type < 0) {
+		pr_err("Queue type not supported with MES, queue:%d\n",
+				q->properties.type);
+		return -EINVAL;
+	}
+
+	if (q->gws) {
+		queue_input.gws_base = 0;
+		queue_input.gws_size = qpd->num_gws;
+	}
+
+	amdgpu_mes_lock(&adev->mes);
+	r = adev->mes.funcs->add_hw_queue(&adev->mes, &queue_input);
+	amdgpu_mes_unlock(&adev->mes);
+	if (r) {
+		pr_err("failed to add hardware queue to MES, doorbell=0x%x\n",
+			q->properties.doorbell_off);
+		pr_err("MES might be in unrecoverable state, issue a GPU reset\n");
+		kfd_hws_hang(dqm);
+	}
+
+	return r;
+}
+
+static int remove_queue_mes(struct device_queue_manager *dqm, struct queue *q,
+			struct qcm_process_device *qpd)
+{
+	struct amdgpu_device *adev = (struct amdgpu_device *)dqm->dev->adev;
+	int r;
+	struct mes_remove_queue_input queue_input;
+
+	if (dqm->is_hws_hang)
+		return -EIO;
+
+	memset(&queue_input, 0x0, sizeof(struct mes_remove_queue_input));
+	queue_input.doorbell_offset = q->properties.doorbell_off;
+	queue_input.gang_context_addr = q->gang_ctx_gpu_addr;
+
+	amdgpu_mes_lock(&adev->mes);
+	r = adev->mes.funcs->remove_hw_queue(&adev->mes, &queue_input);
+	amdgpu_mes_unlock(&adev->mes);
+
+	if (r) {
+		pr_err("failed to remove hardware queue from MES, doorbell=0x%x\n",
+			q->properties.doorbell_off);
+		pr_err("MES might be in unrecoverable state, issue a GPU reset\n");
+		kfd_hws_hang(dqm);
+	}
+
+	return r;
+}
+
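+/* Remove every active queue of every process from the MES scheduler. */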
+static int remove_all_queues_mes(struct device_queue_manager *dqm)
+{
+	struct device_process_node *cur;
+	struct qcm_process_device *qpd;
+	struct queue *q;
+	int retval = 0;
+
+	list_for_each_entry(cur, &dqm->queues, list) {
+		qpd = cur->qpd;
+		list_for_each_entry(q, &qpd->queues_list, list) {
+			if (q->properties.is_active) {
+				retval = remove_queue_mes(dqm, q, qpd);
+				if (retval) {
+					pr_err("%s: Failed to remove queue %d for dev %d\n",
+						__func__,
+						q->properties.queue_id,
+						dqm->dev->id);
+					return retval;
+				}
+			}
+		}
+	}
+
+	return retval;
+}
+
 static void increment_queue_count(struct device_queue_manager *dqm,
 				  struct qcm_process_device *qpd,
 				  struct queue *q)
@@ -659,6 +810,7 @@ static int update_queue(struct device_queue_manager *dqm, struct queue *q,
 	struct mqd_manager *mqd_mgr;
 	struct kfd_process_device *pdd;
 	bool prev_active = false;
+	bool add_queue = false;
 
 	dqm_lock(dqm);
 	pdd = kfd_get_process_device_data(q->device, q->process);
@@ -674,8 +826,12 @@ static int update_queue(struct device_queue_manager *dqm, struct queue *q,
 
 	/* Make sure the queue is unmapped before updating the MQD */
 	if (dqm->sched_policy != KFD_SCHED_POLICY_NO_HWS) {
-		retval = unmap_queues_cpsch(dqm,
-				KFD_UNMAP_QUEUES_FILTER_DYNAMIC_QUEUES, 0, false);
+		if (!dqm->dev->shared_resources.enable_mes)
+			retval = unmap_queues_cpsch(dqm,
+						    KFD_UNMAP_QUEUES_FILTER_DYNAMIC_QUEUES, 0, false);
+		else if (prev_active)
+			retval = remove_queue_mes(dqm, q, &pdd->qpd);
+
 		if (retval) {
 			pr_err("unmap queue failed\n");
 			goto out_unlock;
@@ -727,9 +883,12 @@ static int update_queue(struct device_queue_manager *dqm, struct queue *q,
 		q->properties.is_gws = false;
 	}
 
-	if (dqm->sched_policy != KFD_SCHED_POLICY_NO_HWS)
-		retval = map_queues_cpsch(dqm);
-	else if (q->properties.is_active &&
+	if (dqm->sched_policy != KFD_SCHED_POLICY_NO_HWS) {
+		if (!dqm->dev->shared_resources.enable_mes)
+			retval = map_queues_cpsch(dqm);
+		else if (add_queue)
+			retval = add_queue_mes(dqm, q, &pdd->qpd);
+	} else if (q->properties.is_active &&
 		 (q->properties.type == KFD_QUEUE_TYPE_COMPUTE ||
 		  q->properties.type == KFD_QUEUE_TYPE_SDMA ||
 		  q->properties.type == KFD_QUEUE_TYPE_SDMA_XGMI)) {
@@ -822,12 +981,22 @@ static int evict_process_queues_cpsch(struct device_queue_manager *dqm,
 
 		q->properties.is_active = false;
 		decrement_queue_count(dqm, qpd, q);
+
+		if (dqm->dev->shared_resources.enable_mes) {
+			retval = remove_queue_mes(dqm, q, qpd);
+			if (retval) {
+				pr_err("Failed to evict queue %d\n",
+					q->properties.queue_id);
+				goto out;
+			}
+		}
 	}
 	pdd->last_evict_timestamp = get_jiffies_64();
-	retval = execute_queues_cpsch(dqm,
-				qpd->is_debug ?
-				KFD_UNMAP_QUEUES_FILTER_ALL_QUEUES :
-				KFD_UNMAP_QUEUES_FILTER_DYNAMIC_QUEUES, 0);
+	if (!dqm->dev->shared_resources.enable_mes)
+		retval = execute_queues_cpsch(dqm,
+					      qpd->is_debug ?
+					      KFD_UNMAP_QUEUES_FILTER_ALL_QUEUES :
+					      KFD_UNMAP_QUEUES_FILTER_DYNAMIC_QUEUES, 0);
 
 out:
 	dqm_unlock(dqm);
@@ -951,9 +1120,19 @@ static int restore_process_queues_cpsch(struct device_queue_manager *dqm,
 
 		q->properties.is_active = true;
 		increment_queue_count(dqm, &pdd->qpd, q);
+
+		if (dqm->dev->shared_resources.enable_mes) {
+			retval = add_queue_mes(dqm, q, qpd);
+			if (retval) {
+				pr_err("Failed to restore queue %d\n",
+					q->properties.queue_id);
+				goto out;
+			}
+		}
 	}
-	retval = execute_queues_cpsch(dqm,
-				KFD_UNMAP_QUEUES_FILTER_DYNAMIC_QUEUES, 0);
+	if (!dqm->dev->shared_resources.enable_mes)
+		retval = execute_queues_cpsch(dqm,
+					      KFD_UNMAP_QUEUES_FILTER_DYNAMIC_QUEUES, 0);
 	qpd->evicted = 0;
 	eviction_duration = get_jiffies_64() - pdd->last_evict_timestamp;
 	atomic64_add(eviction_duration, &pdd->evict_duration_counter);
@@ -1081,6 +1260,9 @@ static int initialize_nocpsch(struct device_queue_manager *dqm)
 	memset(dqm->vmid_pasid, 0, sizeof(dqm->vmid_pasid));
 
 	dqm->sdma_bitmap = ~0ULL >> (64 - get_num_sdma_queues(dqm));
+	dqm->sdma_bitmap &= ~(get_reserved_sdma_queues_bitmap(dqm));
+	pr_info("sdma_bitmap: %llx\n", dqm->sdma_bitmap);
+
 	dqm->xgmi_sdma_bitmap = ~0ULL >> (64 - get_num_xgmi_sdma_queues(dqm));
 
 	return 0;
@@ -1277,6 +1459,9 @@ static int initialize_cpsch(struct device_queue_manager *dqm)
 	else
 		dqm->sdma_bitmap = (BIT_ULL(num_sdma_queues) - 1);
 
+	dqm->sdma_bitmap &= ~(get_reserved_sdma_queues_bitmap(dqm));
+	pr_info("sdma_bitmap: %llx\n", dqm->sdma_bitmap);
+
 	num_xgmi_sdma_queues = get_num_xgmi_sdma_queues(dqm);
 	if (num_xgmi_sdma_queues >= BITS_PER_TYPE(dqm->xgmi_sdma_bitmap))
 		dqm->xgmi_sdma_bitmap = ULLONG_MAX;
@@ -1295,14 +1480,16 @@ static int start_cpsch(struct device_queue_manager *dqm)
 	retval = 0;
 
 	dqm_lock(dqm);
-	retval = pm_init(&dqm->packet_mgr, dqm);
-	if (retval)
-		goto fail_packet_manager_init;
 
-	retval = set_sched_resources(dqm);
-	if (retval)
-		goto fail_set_sched_resources;
+	if (!dqm->dev->shared_resources.enable_mes) {
+		retval = pm_init(&dqm->packet_mgr, dqm);
+		if (retval)
+			goto fail_packet_manager_init;
 
+		retval = set_sched_resources(dqm);
+		if (retval)
+			goto fail_set_sched_resources;
+	}
 	pr_debug("Allocating fence memory\n");
 
 	/* allocate fence memory on the gart */
@@ -1321,13 +1508,15 @@ static int start_cpsch(struct device_queue_manager *dqm)
 	dqm->is_hws_hang = false;
 	dqm->is_resetting = false;
 	dqm->sched_running = true;
-	execute_queues_cpsch(dqm, KFD_UNMAP_QUEUES_FILTER_DYNAMIC_QUEUES, 0);
+	if (!dqm->dev->shared_resources.enable_mes)
+		execute_queues_cpsch(dqm, KFD_UNMAP_QUEUES_FILTER_DYNAMIC_QUEUES, 0);
 	dqm_unlock(dqm);
 
 	return 0;
 fail_allocate_vidmem:
 fail_set_sched_resources:
-	pm_uninit(&dqm->packet_mgr, false);
+	if (!dqm->dev->shared_resources.enable_mes)
+		pm_uninit(&dqm->packet_mgr, false);
 fail_packet_manager_init:
 	dqm_unlock(dqm);
 	return retval;
@@ -1343,15 +1532,22 @@ static int stop_cpsch(struct device_queue_manager *dqm)
 		return 0;
 	}
 
-	if (!dqm->is_hws_hang)
-		unmap_queues_cpsch(dqm, KFD_UNMAP_QUEUES_FILTER_ALL_QUEUES, 0, false);
+	if (!dqm->is_hws_hang) {
+		if (!dqm->dev->shared_resources.enable_mes)
+			unmap_queues_cpsch(dqm, KFD_UNMAP_QUEUES_FILTER_ALL_QUEUES, 0, false);
+		else
+			remove_all_queues_mes(dqm);
+	}
+
 	hanging = dqm->is_hws_hang || dqm->is_resetting;
 	dqm->sched_running = false;
 
-	pm_release_ib(&dqm->packet_mgr);
+	if (!dqm->dev->shared_resources.enable_mes)
+		pm_release_ib(&dqm->packet_mgr);
 
 	kfd_gtt_sa_free(dqm->dev, dqm->fence_mem);
-	pm_uninit(&dqm->packet_mgr, hanging);
+	if (!dqm->dev->shared_resources.enable_mes)
+		pm_uninit(&dqm->packet_mgr, hanging);
 	dqm_unlock(dqm);
 
 	return 0;
@@ -1469,8 +1665,14 @@ static int create_queue_cpsch(struct device_queue_manager *dqm, struct queue *q,
 	if (q->properties.is_active) {
 		increment_queue_count(dqm, qpd, q);
 
-		execute_queues_cpsch(dqm,
-				KFD_UNMAP_QUEUES_FILTER_DYNAMIC_QUEUES, 0);
+		if (!dqm->dev->shared_resources.enable_mes) {
+			retval = execute_queues_cpsch(dqm,
+					     KFD_UNMAP_QUEUES_FILTER_DYNAMIC_QUEUES, 0);
+		} else {
+			retval = add_queue_mes(dqm, q, qpd);
+			if (retval)
+				goto cleanup_queue;
+		}
 	}
 
 	/*
@@ -1485,6 +1687,13 @@ static int create_queue_cpsch(struct device_queue_manager *dqm, struct queue *q,
 	dqm_unlock(dqm);
 	return retval;
 
+cleanup_queue:
+	qpd->queue_count--;
+	list_del(&q->list);
+	if (q->properties.is_active)
+		decrement_queue_count(dqm, qpd, q);
+	mqd_mgr->free_mqd(mqd_mgr, q->mqd, q->mqd_mem_obj);
+	dqm_unlock(dqm);
 out_deallocate_doorbell:
 	deallocate_doorbell(qpd, q);
 out_deallocate_sdma_queue:
@@ -1572,13 +1781,7 @@ static int unmap_queues_cpsch(struct device_queue_manager *dqm,
 				queue_preemption_timeout_ms);
 	if (retval) {
 		pr_err("The cp might be in an unrecoverable state due to an unsuccessful queues preemption\n");
-		dqm->is_hws_hang = true;
-		/* It's possible we're detecting a HWS hang in the
-		 * middle of a GPU reset. No need to schedule another
-		 * reset in this case.
-		 */
-		if (!dqm->is_resetting)
-			schedule_work(&dqm->hw_exception_work);
+		kfd_hws_hang(dqm);
 		return retval;
 	}
 
@@ -1683,11 +1886,15 @@ static int destroy_queue_cpsch(struct device_queue_manager *dqm,
 	list_del(&q->list);
 	qpd->queue_count--;
 	if (q->properties.is_active) {
-		decrement_queue_count(dqm, qpd, q);
-		retval = execute_queues_cpsch(dqm,
-				KFD_UNMAP_QUEUES_FILTER_DYNAMIC_QUEUES, 0);
-		if (retval == -ETIME)
-			qpd->reset_wavefronts = true;
+		if (!dqm->dev->shared_resources.enable_mes) {
+			decrement_queue_count(dqm, qpd, q);
+			retval = execute_queues_cpsch(dqm,
+						      KFD_UNMAP_QUEUES_FILTER_DYNAMIC_QUEUES, 0);
+			if (retval == -ETIME)
+				qpd->reset_wavefronts = true;
+		} else {
+			retval = remove_queue_mes(dqm, q, qpd);
+		}
 	}
 
 	/*
@@ -1941,9 +2148,17 @@ static int process_termination_cpsch(struct device_queue_manager *dqm,
 		else if (q->properties.type == KFD_QUEUE_TYPE_SDMA_XGMI)
 			deallocate_sdma_queue(dqm, q);
 
-		if (q->properties.is_active)
+		if (q->properties.is_active) {
 			decrement_queue_count(dqm, qpd, q);
 
+			if (dqm->dev->shared_resources.enable_mes) {
+				retval = remove_queue_mes(dqm, q, qpd);
+				if (retval)
+					pr_err("Failed to remove queue %d\n",
+						q->properties.queue_id);
+			}
+		}
+
 		dqm->total_queue_count--;
 	}
 
@@ -1958,7 +2173,9 @@ static int process_termination_cpsch(struct device_queue_manager *dqm,
 		}
 	}
 
-	retval = execute_queues_cpsch(dqm, filter, 0);
+	if (!dqm->dev->shared_resources.enable_mes)
+		retval = execute_queues_cpsch(dqm, filter, 0);
+
 	if ((!dqm->is_hws_hang) && (retval || qpd->reset_wavefronts)) {
 		pr_warn("Resetting wave fronts (cpsch) on dev %p\n", dqm->dev);
 		dbgdev_wave_reset_wavefronts(dqm->dev, qpd->pqm->process);
@@ -2133,7 +2350,9 @@ struct device_queue_manager *device_queue_manager_init(struct kfd_dev *dev)
 		break;
 
 	default:
-		if (KFD_GC_VERSION(dev) >= IP_VERSION(10, 1, 1))
+		if (KFD_GC_VERSION(dev) >= IP_VERSION(11, 0, 0))
+			device_queue_manager_init_v11(&dqm->asic_ops);
+		else if (KFD_GC_VERSION(dev) >= IP_VERSION(10, 1, 1))
 			device_queue_manager_init_v10_navi10(&dqm->asic_ops);
 		else if (KFD_GC_VERSION(dev) >= IP_VERSION(9, 0, 1))
 			device_queue_manager_init_v9(&dqm->asic_ops);
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.h b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.h
index 3d539d6483e0..a537b9ef3e16 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.h
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.h
@@ -35,6 +35,9 @@
 
 #define VMID_NUM 16
 
+#define KFD_MES_PROCESS_QUANTUM		100000
+#define KFD_MES_GANG_QUANTUM		10000
+
 struct device_process_node {
 	struct qcm_process_device *qpd;
 	struct list_head list;
@@ -267,6 +270,8 @@ void device_queue_manager_init_v9(
 		struct device_queue_manager_asic_ops *asic_ops);
 void device_queue_manager_init_v10_navi10(
 		struct device_queue_manager_asic_ops *asic_ops);
+void device_queue_manager_init_v11(
+		struct device_queue_manager_asic_ops *asic_ops);
 void program_sh_mem_settings(struct device_queue_manager *dqm,
 					struct qcm_process_device *qpd);
 unsigned int get_cp_queues_num(struct device_queue_manager *dqm);
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager_v11.c b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager_v11.c
new file mode 100644
index 000000000000..2e129da7acb4
--- /dev/null
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager_v11.c
@@ -0,0 +1,81 @@
+/*
+ * Copyright 2021 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+
+#include "kfd_device_queue_manager.h"
+#include "gc/gc_11_0_0_offset.h"
+#include "gc/gc_11_0_0_sh_mask.h"
+#include "soc21_enum.h"
+
+static int update_qpd_v11(struct device_queue_manager *dqm,
+			 struct qcm_process_device *qpd);
+static void init_sdma_vm_v11(struct device_queue_manager *dqm, struct queue *q,
+			    struct qcm_process_device *qpd);
+
+void device_queue_manager_init_v11(
+	struct device_queue_manager_asic_ops *asic_ops)
+{
+	asic_ops->update_qpd = update_qpd_v11;
+	asic_ops->init_sdma_vm = init_sdma_vm_v11;
+	asic_ops->mqd_manager_init = mqd_manager_init_v11;
+}
+
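+/*
+ * SH_MEM_BASES packs the upper 16 bits of the 64-bit LDS (shared) and
+ * scratch (private) aperture base addresses for the process.
+ */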
+static uint32_t compute_sh_mem_bases_64bit(struct kfd_process_device *pdd)
+{
+	uint32_t shared_base = pdd->lds_base >> 48;
+	uint32_t private_base = pdd->scratch_base >> 48;
+
+	return (shared_base << SH_MEM_BASES__SHARED_BASE__SHIFT) |
+		private_base;
+}
+
+static int update_qpd_v11(struct device_queue_manager *dqm,
+			 struct qcm_process_device *qpd)
+{
+	struct kfd_process_device *pdd;
+
+	pdd = qpd_to_pdd(qpd);
+
+	/* check if the sh_mem_config register is already configured */
+	if (qpd->sh_mem_config == 0) {
+		qpd->sh_mem_config =
+			(SH_MEM_ALIGNMENT_MODE_UNALIGNED <<
+				SH_MEM_CONFIG__ALIGNMENT_MODE__SHIFT) |
+			(3 << SH_MEM_CONFIG__INITIAL_INST_PREFETCH__SHIFT);
+
+		qpd->sh_mem_ape1_limit = 0;
+		qpd->sh_mem_ape1_base = 0;
+	}
+
+	qpd->sh_mem_bases = compute_sh_mem_bases_64bit(pdd);
+
+	pr_debug("sh_mem_bases 0x%X\n", qpd->sh_mem_bases);
+
+	return 0;
+}
+
+static void init_sdma_vm_v11(struct device_queue_manager *dqm, struct queue *q,
+			    struct qcm_process_device *qpd)
+{
+	/* No longer needed on SDMA v4 and later */
+	q->properties.sdma_vm_addr = 0;
+}
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_doorbell.c b/drivers/gpu/drm/amd/amdkfd/kfd_doorbell.c
index 5401b6317f25..cb3d2ccc5100 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_doorbell.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_doorbell.c
@@ -49,9 +49,13 @@
 /* # of doorbell bytes allocated for each process. */
 size_t kfd_doorbell_process_slice(struct kfd_dev *kfd)
 {
-	return roundup(kfd->device_info.doorbell_size *
-			KFD_MAX_NUM_OF_QUEUES_PER_PROCESS,
-			PAGE_SIZE);
+	if (!kfd->shared_resources.enable_mes)
+		return roundup(kfd->device_info.doorbell_size *
+				KFD_MAX_NUM_OF_QUEUES_PER_PROCESS,
+				PAGE_SIZE);
+	else
+		return amdgpu_mes_doorbell_process_slice(
+					(struct amdgpu_device *)kfd->adev);
 }
 
 /* Doorbell calculations for device init. */
@@ -61,6 +65,16 @@ int kfd_doorbell_init(struct kfd_dev *kfd)
 	size_t doorbell_aperture_size;
 	size_t doorbell_process_limit;
 
+	/*
+	 * With MES enabled, just set the doorbell base as it is needed
+	 * to calculate doorbell physical address.
+	 */
+	if (kfd->shared_resources.enable_mes) {
+		kfd->doorbell_base =
+			kfd->shared_resources.doorbell_physical_address;
+		return 0;
+	}
+
 	/*
 	 * We start with calculations in bytes because the input data might
 	 * only be byte-aligned.
@@ -237,10 +251,16 @@ unsigned int kfd_get_doorbell_dw_offset_in_bar(struct kfd_dev *kfd,
 	 * the process's doorbells. The offset returned is in dword
 	 * units regardless of the ASIC-dependent doorbell size.
 	 */
-	return kfd->doorbell_base_dw_offset +
-		pdd->doorbell_index
-		* kfd_doorbell_process_slice(kfd) / sizeof(u32) +
-		doorbell_id * kfd->device_info.doorbell_size / sizeof(u32);
+	if (!kfd->shared_resources.enable_mes)
+		return kfd->doorbell_base_dw_offset +
+			pdd->doorbell_index
+			* kfd_doorbell_process_slice(kfd) / sizeof(u32) +
+			doorbell_id *
+			kfd->device_info.doorbell_size / sizeof(u32);
+	else
+		return amdgpu_mes_get_doorbell_dw_offset_in_bar(
+				(struct amdgpu_device *)kfd->adev,
+				pdd->doorbell_index, doorbell_id);
 }
 
 uint64_t kfd_get_number_elems(struct kfd_dev *kfd)
@@ -261,8 +281,16 @@ phys_addr_t kfd_get_process_doorbells(struct kfd_process_device *pdd)
 
 int kfd_alloc_process_doorbells(struct kfd_dev *kfd, unsigned int *doorbell_index)
 {
-	int r = ida_simple_get(&kfd->doorbell_ida, 1, kfd->max_doorbell_slices,
-				GFP_KERNEL);
+	int r = 0;
+
+	if (!kfd->shared_resources.enable_mes)
+		r = ida_simple_get(&kfd->doorbell_ida, 1,
+				   kfd->max_doorbell_slices, GFP_KERNEL);
+	else
+		r = amdgpu_mes_alloc_process_doorbells(
+				(struct amdgpu_device *)kfd->adev,
+				doorbell_index);
+
 	if (r > 0)
 		*doorbell_index = r;
 
@@ -271,6 +299,12 @@ int kfd_alloc_process_doorbells(struct kfd_dev *kfd, unsigned int *doorbell_inde
 
 void kfd_free_process_doorbells(struct kfd_dev *kfd, unsigned int doorbell_index)
 {
-	if (doorbell_index)
-		ida_simple_remove(&kfd->doorbell_ida, doorbell_index);
+	if (doorbell_index) {
+		if (!kfd->shared_resources.enable_mes)
+			ida_simple_remove(&kfd->doorbell_ida, doorbell_index);
+		else
+			amdgpu_mes_free_process_doorbells(
+					(struct amdgpu_device *)kfd->adev,
+					doorbell_index);
+	}
 }
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_int_process_v11.c b/drivers/gpu/drm/amd/amdkfd/kfd_int_process_v11.c
new file mode 100644
index 000000000000..c3919aaa76e6
--- /dev/null
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_int_process_v11.c
@@ -0,0 +1,383 @@
+/*
+ * Copyright 2021 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+
+#include "kfd_priv.h"
+#include "kfd_events.h"
+#include "soc15_int.h"
+#include "kfd_device_queue_manager.h"
+#include "ivsrcid/vmc/irqsrcs_vmc_1_0.h"
+#include "kfd_smi_events.h"
+
+/*
+ * GFX11 SQ Interrupts
+ *
+ * There are 3 encoding types of interrupts sourced from SQ sent as a 44-bit
+ * packet to the Interrupt Handler:
+ * Auto - Generated by the SQG (various cmd overflows, timestamps etc)
+ * Wave - Generated by S_SENDMSG through a shader program
+ * Error - HW generated errors (Illegal instructions, Memviols, EDC etc)
+ *
+ * The 44-bit packet is mapped as {context_id1[7:0],context_id0[31:0]} plus
+ * 4-bits for VMID (SOC15_VMID_FROM_IH_ENTRY) as such:
+ *
+ * - context_id1[7:6]
+ * Encoding type (0 = Auto, 1 = Wave, 2 = Error)
+ *
+ * - context_id0[26]
+ * PRIV bit indicates that Wave S_SEND or error occurred within trap
+ *
+ * - context_id0[24:0]
+ * 25-bit data with the following layout per encoding type:
+ * Auto - only context_id0[8:0] is used, which reports various interrupts
+ * generated by SQG.  The rest is 0.
+ * Wave - user data sent from m0 via S_SENDMSG (context_id0[23:0])
+ * Error - Error Type (context_id0[24:21]), Error Details (context_id0[20:0])
+ *
+ * The other context_id bits show coordinates (SE/SH/CU/SIMD/WGP) for wave
+ * S_SENDMSG and Errors.  These are 0 for Auto.
+ */
+
+enum SQ_INTERRUPT_WORD_ENCODING {
+	SQ_INTERRUPT_WORD_ENCODING_AUTO = 0x0,
+	SQ_INTERRUPT_WORD_ENCODING_INST,
+	SQ_INTERRUPT_WORD_ENCODING_ERROR,
+};
+
+enum SQ_INTERRUPT_ERROR_TYPE {
+	SQ_INTERRUPT_ERROR_TYPE_EDC_FUE = 0x0,
+	SQ_INTERRUPT_ERROR_TYPE_ILLEGAL_INST,
+	SQ_INTERRUPT_ERROR_TYPE_MEMVIOL,
+	SQ_INTERRUPT_ERROR_TYPE_EDC_FED,
+};
+
+/* SQ_INTERRUPT_WORD_AUTO_CTXID */
+#define SQ_INTERRUPT_WORD_AUTO_CTXID0__THREAD_TRACE__SHIFT		0
+#define SQ_INTERRUPT_WORD_AUTO_CTXID0__WLT__SHIFT			1
+#define SQ_INTERRUPT_WORD_AUTO_CTXID0__THREAD_TRACE_BUF_FULL__SHIFT	2
+#define SQ_INTERRUPT_WORD_AUTO_CTXID0__REG_TIMESTAMP__SHIFT		3
+#define SQ_INTERRUPT_WORD_AUTO_CTXID0__CMD_TIMESTAMP__SHIFT		4
+#define SQ_INTERRUPT_WORD_AUTO_CTXID0__HOST_CMD_OVERFLOW__SHIFT		5
+#define SQ_INTERRUPT_WORD_AUTO_CTXID0__HOST_REG_OVERFLOW__SHIFT		6
+#define SQ_INTERRUPT_WORD_AUTO_CTXID0__IMMED_OVERFLOW__SHIFT		7
+#define SQ_INTERRUPT_WORD_AUTO_CTXID0__THREAD_TRACE_UTC_ERROR__SHIFT	8
+#define SQ_INTERRUPT_WORD_AUTO_CTXID1__ENCODING__SHIFT			6
+
+#define SQ_INTERRUPT_WORD_AUTO_CTXID0__THREAD_TRACE_MASK		0x00000001
+#define SQ_INTERRUPT_WORD_AUTO_CTXID0__WLT_MASK				0x00000002
+#define SQ_INTERRUPT_WORD_AUTO_CTXID0__THREAD_TRACE_BUF_FULL_MASK	0x00000004
+#define SQ_INTERRUPT_WORD_AUTO_CTXID0__REG_TIMESTAMP_MASK		0x00000008
+#define SQ_INTERRUPT_WORD_AUTO_CTXID0__CMD_TIMESTAMP_MASK		0x00000010
+#define SQ_INTERRUPT_WORD_AUTO_CTXID0__HOST_CMD_OVERFLOW_MASK		0x00000020
+#define SQ_INTERRUPT_WORD_AUTO_CTXID0__HOST_REG_OVERFLOW_MASK		0x00000040
+#define SQ_INTERRUPT_WORD_AUTO_CTXID0__IMMED_OVERFLOW_MASK		0x00000080
+#define SQ_INTERRUPT_WORD_AUTO_CTXID0__THREAD_TRACE_UTC_ERROR_MASK	0x00000100
+#define SQ_INTERRUPT_WORD_AUTO_CTXID1__ENCODING_MASK			0x000000c0
+
+/* SQ_INTERRUPT_WORD_WAVE_CTXID */
+#define SQ_INTERRUPT_WORD_WAVE_CTXID0__DATA__SHIFT	0
+#define SQ_INTERRUPT_WORD_WAVE_CTXID0__SH_ID__SHIFT	25
+#define SQ_INTERRUPT_WORD_WAVE_CTXID0__PRIV__SHIFT	26
+#define SQ_INTERRUPT_WORD_WAVE_CTXID0__WAVE_ID__SHIFT	27
+#define SQ_INTERRUPT_WORD_WAVE_CTXID1__SIMD_ID__SHIFT	0
+#define SQ_INTERRUPT_WORD_WAVE_CTXID1__WGP_ID__SHIFT	2
+#define SQ_INTERRUPT_WORD_WAVE_CTXID1__ENCODING__SHIFT	6
+
+#define SQ_INTERRUPT_WORD_WAVE_CTXID0__DATA_MASK	0x00ffffff /* [23:0] */
+#define SQ_INTERRUPT_WORD_WAVE_CTXID0__SH_ID_MASK	0x02000000 /* [25] */
+#define SQ_INTERRUPT_WORD_WAVE_CTXID0__PRIV_MASK	0x04000000 /* [26] */
+#define SQ_INTERRUPT_WORD_WAVE_CTXID0__WAVE_ID_MASK	0xf8000000 /* [31:27] */
+#define SQ_INTERRUPT_WORD_WAVE_CTXID1__SIMD_ID_MASK	0x00000003 /* [33:32] */
+#define SQ_INTERRUPT_WORD_WAVE_CTXID1__WGP_ID_MASK	0x0000003c /* [37:34] */
+#define SQ_INTERRUPT_WORD_WAVE_CTXID1__ENCODING_MASK	0x000000c0 /* [39:38] */
+
+/* SQ_INTERRUPT_WORD_ERROR_CTXID */
+#define SQ_INTERRUPT_WORD_ERROR_CTXID0__DETAIL__SHIFT	0
+#define SQ_INTERRUPT_WORD_ERROR_CTXID0__TYPE__SHIFT	21
+#define SQ_INTERRUPT_WORD_ERROR_CTXID0__SH_ID__SHIFT	25
+#define SQ_INTERRUPT_WORD_ERROR_CTXID0__PRIV__SHIFT	26
+#define SQ_INTERRUPT_WORD_ERROR_CTXID0__WAVE_ID__SHIFT	27
+#define SQ_INTERRUPT_WORD_ERROR_CTXID1__SIMD_ID__SHIFT	0
+#define SQ_INTERRUPT_WORD_ERROR_CTXID1__WGP_ID__SHIFT	2
+#define SQ_INTERRUPT_WORD_ERROR_CTXID1__ENCODING__SHIFT	6
+
+#define SQ_INTERRUPT_WORD_ERROR_CTXID0__DETAIL_MASK	0x001fffff /* [20:0] */
+#define SQ_INTERRUPT_WORD_ERROR_CTXID0__TYPE_MASK	0x01e00000 /* [24:21] */
+#define SQ_INTERRUPT_WORD_ERROR_CTXID0__SH_ID_MASK	0x02000000 /* [25] */
+#define SQ_INTERRUPT_WORD_ERROR_CTXID0__PRIV_MASK	0x04000000 /* [26] */
+#define SQ_INTERRUPT_WORD_ERROR_CTXID0__WAVE_ID_MASK	0xf8000000 /* [31:27] */
+#define SQ_INTERRUPT_WORD_ERROR_CTXID1__SIMD_ID_MASK	0x00000003 /* [33:32] */
+#define SQ_INTERRUPT_WORD_ERROR_CTXID1__WGP_ID_MASK	0x0000003c /* [37:34] */
+#define SQ_INTERRUPT_WORD_ERROR_CTXID1__ENCODING_MASK	0x000000c0 /* [39:38] */
+
+/*
+ * The debugger will send user data (m0) with PRIV=1 to indicate it requires
+ * notification from the KFD with the following queue id (DOORBELL_ID) and
+ * trap code (TRAP_CODE).
+ */
+#define KFD_CTXID0_TRAP_CODE_SHIFT	10
+#define KFD_CTXID0_TRAP_CODE_MASK	0xfffc00
+#define KFD_CTXID0_CP_BAD_OP_ECODE_MASK	0x3ffffff
+#define KFD_CTXID0_DOORBELL_ID_MASK	0x0003ff
+
+#define KFD_CTXID0_TRAP_CODE(ctxid0)		(((ctxid0) &  \
+				KFD_CTXID0_TRAP_CODE_MASK) >> \
+				KFD_CTXID0_TRAP_CODE_SHIFT)
+#define KFD_CTXID0_CP_BAD_OP_ECODE(ctxid0)	(((ctxid0) &        \
+				KFD_CTXID0_CP_BAD_OP_ECODE_MASK) >> \
+				KFD_CTXID0_TRAP_CODE_SHIFT)
+#define KFD_CTXID0_DOORBELL_ID(ctxid0)		((ctxid0) & \
+				KFD_CTXID0_DOORBELL_ID_MASK)
+
+static void print_sq_intr_info_auto(uint32_t context_id0, uint32_t context_id1)
+{
+	pr_debug(
+		"sq_intr: auto, ttrace %d, wlt %d, ttrace_buf_full %d, reg_tms %d, cmd_tms %d, host_cmd_ovf %d, host_reg_ovf %d, immed_ovf %d, ttrace_utc_err %d\n",
+		REG_GET_FIELD(context_id0, SQ_INTERRUPT_WORD_AUTO_CTXID0, THREAD_TRACE),
+		REG_GET_FIELD(context_id0, SQ_INTERRUPT_WORD_AUTO_CTXID0, WLT),
+		REG_GET_FIELD(context_id0, SQ_INTERRUPT_WORD_AUTO_CTXID0, THREAD_TRACE_BUF_FULL),
+		REG_GET_FIELD(context_id0, SQ_INTERRUPT_WORD_AUTO_CTXID0, REG_TIMESTAMP),
+		REG_GET_FIELD(context_id0, SQ_INTERRUPT_WORD_AUTO_CTXID0, CMD_TIMESTAMP),
+		REG_GET_FIELD(context_id0, SQ_INTERRUPT_WORD_AUTO_CTXID0, HOST_CMD_OVERFLOW),
+		REG_GET_FIELD(context_id0, SQ_INTERRUPT_WORD_AUTO_CTXID0, HOST_REG_OVERFLOW),
+		REG_GET_FIELD(context_id0, SQ_INTERRUPT_WORD_AUTO_CTXID0, IMMED_OVERFLOW),
+		REG_GET_FIELD(context_id0, SQ_INTERRUPT_WORD_AUTO_CTXID0, THREAD_TRACE_UTC_ERROR));
+}
+
+static void print_sq_intr_info_inst(uint32_t context_id0, uint32_t context_id1)
+{
+	pr_debug(
+		"sq_intr: inst, data 0x%08x, sh %d, priv %d, wave_id %d, simd_id %d, wgp_id %d\n",
+		REG_GET_FIELD(context_id0, SQ_INTERRUPT_WORD_WAVE_CTXID0, DATA),
+		REG_GET_FIELD(context_id0, SQ_INTERRUPT_WORD_WAVE_CTXID0, SH_ID),
+		REG_GET_FIELD(context_id0, SQ_INTERRUPT_WORD_WAVE_CTXID0, PRIV),
+		REG_GET_FIELD(context_id0, SQ_INTERRUPT_WORD_WAVE_CTXID0, WAVE_ID),
+		REG_GET_FIELD(context_id1, SQ_INTERRUPT_WORD_WAVE_CTXID1, SIMD_ID),
+		REG_GET_FIELD(context_id1, SQ_INTERRUPT_WORD_WAVE_CTXID1, WGP_ID));
+}
+
+static void print_sq_intr_info_error(uint32_t context_id0, uint32_t context_id1)
+{
+	pr_warn(
+		"sq_intr: error, detail 0x%08x, type %d, sh %d, priv %d, wave_id %d, simd_id %d, wgp_id %d\n",
+		REG_GET_FIELD(context_id0, SQ_INTERRUPT_WORD_ERROR_CTXID0, DETAIL),
+		REG_GET_FIELD(context_id0, SQ_INTERRUPT_WORD_ERROR_CTXID0, TYPE),
+		REG_GET_FIELD(context_id0, SQ_INTERRUPT_WORD_ERROR_CTXID0, SH_ID),
+		REG_GET_FIELD(context_id0, SQ_INTERRUPT_WORD_ERROR_CTXID0, PRIV),
+		REG_GET_FIELD(context_id0, SQ_INTERRUPT_WORD_ERROR_CTXID0, WAVE_ID),
+		REG_GET_FIELD(context_id1, SQ_INTERRUPT_WORD_ERROR_CTXID1, SIMD_ID),
+		REG_GET_FIELD(context_id1, SQ_INTERRUPT_WORD_ERROR_CTXID1, WGP_ID));
+}
+
+static void event_interrupt_poison_consumption_v11(struct kfd_dev *dev,
+				uint16_t pasid, uint16_t source_id)
+{
+	int ret = -EINVAL;
+	struct kfd_process *p = kfd_lookup_process_by_pasid(pasid);
+
+	if (!p)
+		return;
+
+	/* All queues of a process are unmapped at once */
+	if (atomic_read(&p->poison)) {
+		kfd_unref_process(p);
+		return;
+	}
+
+	atomic_set(&p->poison, 1);
+	kfd_unref_process(p);
+
+	switch (source_id) {
+	case SOC15_INTSRC_SQ_INTERRUPT_MSG:
+		if (dev->dqm->ops.reset_queues)
+			ret = dev->dqm->ops.reset_queues(dev->dqm, pasid);
+		break;
+	case SOC21_INTSRC_SDMA_ECC:
+	default:
+		break;
+	}
+
+	kfd_signal_poison_consumed_event(dev, pasid);
+
+	/* If resetting the queue succeeded, do page retirement without a
+	 * GPU reset; otherwise fall back to a full GPU reset.
+	 */
+	if (!ret)
+		amdgpu_amdkfd_ras_poison_consumption_handler(dev->adev, false);
+	else
+		amdgpu_amdkfd_ras_poison_consumption_handler(dev->adev, true);
+}
+
+static bool event_interrupt_isr_v11(struct kfd_dev *dev,
+					const uint32_t *ih_ring_entry,
+					uint32_t *patched_ihre,
+					bool *patched_flag)
+{
+	uint16_t source_id, client_id, pasid, vmid;
+	const uint32_t *data = ih_ring_entry;
+	uint32_t context_id0;
+
+	source_id = SOC15_SOURCE_ID_FROM_IH_ENTRY(ih_ring_entry);
+	client_id = SOC15_CLIENT_ID_FROM_IH_ENTRY(ih_ring_entry);
+	/* Only handle interrupts from KFD VMIDs */
+	vmid = SOC15_VMID_FROM_IH_ENTRY(ih_ring_entry);
+	if (/*!KFD_IRQ_IS_FENCE(client_id, source_id) &&*/
+	    (vmid < dev->vm_info.first_vmid_kfd ||
+	    vmid > dev->vm_info.last_vmid_kfd))
+		return 0;
+
+	pasid = SOC15_PASID_FROM_IH_ENTRY(ih_ring_entry);
+	context_id0 = SOC15_CONTEXT_ID0_FROM_IH_ENTRY(ih_ring_entry);
+
+	if ((source_id == SOC15_INTSRC_CP_END_OF_PIPE) &&
+	    (context_id0 & AMDGPU_FENCE_MES_QUEUE_FLAG))
+		return 0;
+
+	pr_debug("client id 0x%x, source id %d, vmid %d, pasid 0x%x. raw data:\n",
+		 client_id, source_id, vmid, pasid);
+	pr_debug("%8X, %8X, %8X, %8X, %8X, %8X, %8X, %8X.\n",
+		 data[0], data[1], data[2], data[3],
+		 data[4], data[5], data[6], data[7]);
+
+	/* If there is no valid PASID, it's likely a bug */
+	if (WARN_ONCE(pasid == 0, "Bug: No PASID in KFD interrupt"))
+		return 0;
+
+	/* Interrupt types we care about: various signals and faults.
+	 * They will be forwarded to a work queue (see below).
+	 */
+	return source_id == SOC15_INTSRC_CP_END_OF_PIPE ||
+		source_id == SOC15_INTSRC_SQ_INTERRUPT_MSG ||
+		source_id == SOC15_INTSRC_CP_BAD_OPCODE ||
+		source_id == SOC21_INTSRC_SDMA_TRAP ||
+		client_id == SOC21_IH_CLIENTID_VMC ||
+		((client_id == SOC21_IH_CLIENTID_GFX) &&
+		 (source_id == UTCL2_1_0__SRCID__FAULT)) /*||
+		   KFD_IRQ_IS_FENCE(client_id, source_id)*/;
+}
+
+static void event_interrupt_wq_v11(struct kfd_dev *dev,
+					const uint32_t *ih_ring_entry)
+{
+	uint16_t source_id, client_id, ring_id, pasid, vmid;
+	uint32_t context_id0, context_id1;
+	uint8_t sq_int_enc, sq_int_errtype, sq_int_priv;
+	struct kfd_vm_fault_info info = {0};
+	struct kfd_hsa_memory_exception_data exception_data;
+
+	source_id = SOC15_SOURCE_ID_FROM_IH_ENTRY(ih_ring_entry);
+	client_id = SOC15_CLIENT_ID_FROM_IH_ENTRY(ih_ring_entry);
+	ring_id = SOC15_RING_ID_FROM_IH_ENTRY(ih_ring_entry);
+	pasid = SOC15_PASID_FROM_IH_ENTRY(ih_ring_entry);
+	vmid = SOC15_VMID_FROM_IH_ENTRY(ih_ring_entry);
+	context_id0 = SOC15_CONTEXT_ID0_FROM_IH_ENTRY(ih_ring_entry);
+	context_id1 = SOC15_CONTEXT_ID1_FROM_IH_ENTRY(ih_ring_entry);
+
+	/* VMC, UTCL2 */
+	if (client_id == SOC21_IH_CLIENTID_VMC ||
+	     ((client_id == SOC21_IH_CLIENTID_GFX) &&
+	     (source_id == UTCL2_1_0__SRCID__FAULT))) {
+
+		info.vmid = vmid;
+		info.mc_id = client_id;
+		info.page_addr = ih_ring_entry[4] |
+			(uint64_t)(ih_ring_entry[5] & 0xf) << 32;
+		info.prot_valid = ring_id & 0x08;
+		info.prot_read  = ring_id & 0x10;
+		info.prot_write = ring_id & 0x20;
+
+		memset(&exception_data, 0, sizeof(exception_data));
+		exception_data.gpu_id = dev->id;
+		exception_data.va = (info.page_addr) << PAGE_SHIFT;
+		exception_data.failure.NotPresent = info.prot_valid ? 1 : 0;
+		exception_data.failure.NoExecute = info.prot_exec ? 1 : 0;
+		exception_data.failure.ReadOnly = info.prot_write ? 1 : 0;
+		exception_data.failure.imprecise = 0;
+
+		/*kfd_set_dbg_ev_from_interrupt(dev, pasid, -1,
+					      KFD_EC_MASK(EC_DEVICE_MEMORY_VIOLATION),
+					      &exception_data, sizeof(exception_data));*/
+		kfd_smi_event_update_vmfault(dev, pasid);
+
+	/* GRBM, SDMA, SE, PMM */
+	} else if (client_id == SOC21_IH_CLIENTID_GRBM_CP ||
+		   client_id == SOC21_IH_CLIENTID_GFX) {
+
+		/* CP */
+		if (source_id == SOC15_INTSRC_CP_END_OF_PIPE)
+			kfd_signal_event_interrupt(pasid, context_id0, 32);
+		/*else if (source_id == SOC15_INTSRC_CP_BAD_OPCODE)
+			kfd_set_dbg_ev_from_interrupt(dev, pasid,
+				KFD_CTXID0_DOORBELL_ID(context_id0),
+				KFD_EC_MASK(KFD_CTXID0_CP_BAD_OP_ECODE(context_id0)),
+				NULL, 0);*/
+
+		/* SDMA */
+		else if (source_id == SOC21_INTSRC_SDMA_TRAP)
+			kfd_signal_event_interrupt(pasid, context_id0 & 0xfffffff, 28);
+		else if (source_id == SOC21_INTSRC_SDMA_ECC) {
+			event_interrupt_poison_consumption_v11(dev, pasid, source_id);
+			return;
+		}
+
+		/* SQ */
+		else if (source_id == SOC15_INTSRC_SQ_INTERRUPT_MSG) {
+			sq_int_enc = REG_GET_FIELD(context_id1,
+					SQ_INTERRUPT_WORD_WAVE_CTXID1, ENCODING);
+			switch (sq_int_enc) {
+			case SQ_INTERRUPT_WORD_ENCODING_AUTO:
+				print_sq_intr_info_auto(context_id0, context_id1);
+				break;
+			case SQ_INTERRUPT_WORD_ENCODING_INST:
+				print_sq_intr_info_inst(context_id0, context_id1);
+				sq_int_priv = REG_GET_FIELD(context_id0,
+						SQ_INTERRUPT_WORD_WAVE_CTXID0, PRIV);
+				if (sq_int_priv /*&& (kfd_set_dbg_ev_from_interrupt(dev, pasid,
+						KFD_CTXID0_DOORBELL_ID(context_id0),
+						KFD_CTXID0_TRAP_CODE(context_id0),
+						NULL, 0))*/)
+					return;
+				break;
+			case SQ_INTERRUPT_WORD_ENCODING_ERROR:
+				print_sq_intr_info_error(context_id0, context_id1);
+				sq_int_errtype = REG_GET_FIELD(context_id0,
+						SQ_INTERRUPT_WORD_ERROR_CTXID0, TYPE);
+				if (sq_int_errtype != SQ_INTERRUPT_ERROR_TYPE_ILLEGAL_INST &&
+				    sq_int_errtype != SQ_INTERRUPT_ERROR_TYPE_MEMVIOL) {
+					event_interrupt_poison_consumption_v11(
+							dev, pasid, source_id);
+					return;
+				}
+				break;
+			default:
+				break;
+			}
+			kfd_signal_event_interrupt(pasid, context_id0 & 0xffffff, 24);
+		}
+
+	/*} else if (KFD_IRQ_IS_FENCE(client_id, source_id)) {
+		kfd_process_close_interrupt_drain(pasid);*/
+	}
+}
+
+const struct kfd_event_interrupt_class event_interrupt_class_v11 = {
+	.interrupt_isr = event_interrupt_isr_v11,
+	.interrupt_wq = event_interrupt_wq_v11,
+};
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_int_process_v9.c b/drivers/gpu/drm/amd/amdkfd/kfd_int_process_v9.c
index f27fe022ef6f..0b75a37b689b 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_int_process_v9.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_int_process_v9.c
@@ -90,7 +90,7 @@ enum SQ_INTERRUPT_ERROR_TYPE {
 #define KFD_SQ_INT_DATA__ERR_TYPE_MASK 0xF00000
 #define KFD_SQ_INT_DATA__ERR_TYPE__SHIFT 20
 
-static void event_interrupt_poison_consumption(struct kfd_dev *dev,
+static void event_interrupt_poison_consumption_v9(struct kfd_dev *dev,
 				uint16_t pasid, uint16_t client_id)
 {
 	int old_poison, ret = -EINVAL;
@@ -316,7 +316,7 @@ static void event_interrupt_wq_v9(struct kfd_dev *dev,
 					sq_intr_err);
 				if (sq_intr_err != SQ_INTERRUPT_ERROR_TYPE_ILLEGAL_INST &&
 					sq_intr_err != SQ_INTERRUPT_ERROR_TYPE_MEMVIOL) {
-					event_interrupt_poison_consumption(dev, pasid, client_id);
+					event_interrupt_poison_consumption_v9(dev, pasid, client_id);
 					return;
 				}
 				break;
@@ -337,7 +337,7 @@ static void event_interrupt_wq_v9(struct kfd_dev *dev,
 		if (source_id == SOC15_INTSRC_SDMA_TRAP) {
 			kfd_signal_event_interrupt(pasid, context_id0 & 0xfffffff, 28);
 		} else if (source_id == SOC15_INTSRC_SDMA_ECC) {
-			event_interrupt_poison_consumption(dev, pasid, client_id);
+			event_interrupt_poison_consumption_v9(dev, pasid, client_id);
 			return;
 		}
 	} else if (client_id == SOC15_IH_CLIENTID_VMC ||
@@ -348,7 +348,7 @@ static void event_interrupt_wq_v9(struct kfd_dev *dev,
 
 		if (client_id == SOC15_IH_CLIENTID_UTCL2 &&
 		    amdgpu_amdkfd_ras_query_utcl2_poison_status(dev->adev)) {
-			event_interrupt_poison_consumption(dev, pasid, client_id);
+			event_interrupt_poison_consumption_v9(dev, pasid, client_id);
 			return;
 		}
 
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager.c b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager.c
index 5ac209209613..49a283be6b57 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager.c
@@ -100,7 +100,7 @@ void mqd_symmetrically_map_cu_mask(struct mqd_manager *mm,
 {
 	struct kfd_cu_info cu_info;
 	uint32_t cu_per_sh[KFD_MAX_NUM_SE][KFD_MAX_NUM_SH_PER_SE] = {0};
-	int i, se, sh, cu;
+	int i, se, sh, cu, cu_bitmap_sh_mul;
 
 	amdgpu_amdkfd_get_cu_info(mm->dev->adev, &cu_info);
 
@@ -120,6 +120,10 @@ void mqd_symmetrically_map_cu_mask(struct mqd_manager *mm,
 			cu_info.num_shader_arrays_per_engine * cu_info.num_shader_engines);
 		return;
 	}
+
+	cu_bitmap_sh_mul = (KFD_GC_VERSION(mm->dev) >= IP_VERSION(11, 0, 0) &&
+			    KFD_GC_VERSION(mm->dev) < IP_VERSION(12, 0, 0)) ? 2 : 1;
+
 	/* Count active CUs per SH.
 	 *
 	 * Some CUs in an SH may be disabled.	HW expects disabled CUs to be
@@ -129,10 +133,12 @@ void mqd_symmetrically_map_cu_mask(struct mqd_manager *mm,
 	 * Each half of se_mask must be filled only on bits 0-cu_per_sh[se][sh]-1.
 	 *
 	 * See note on Arcturus cu_bitmap layout in gfx_v9_0_get_cu_info.
+	 * See note on GFX11 cu_bitmap layout in gfx_v11_0_get_cu_info.
 	 */
 	for (se = 0; se < cu_info.num_shader_engines; se++)
 		for (sh = 0; sh < cu_info.num_shader_arrays_per_engine; sh++)
-			cu_per_sh[se][sh] = hweight32(cu_info.cu_bitmap[se % 4][sh + (se / 4)]);
+			cu_per_sh[se][sh] = hweight32(
+				cu_info.cu_bitmap[se % 4][sh + (se / 4) * cu_bitmap_sh_mul]);
 
 	/* Symmetrically map cu_mask to all SEs & SHs:
 	 * se_mask programs up to 2 SH in the upper and lower 16 bits.
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v11.c b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v11.c
new file mode 100644
index 000000000000..4e0387f591be
--- /dev/null
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v11.c
@@ -0,0 +1,508 @@
+/*
+ * Copyright 2021 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */
+
+#include <linux/printk.h>
+#include <linux/slab.h>
+#include <linux/uaccess.h>
+#include "kfd_priv.h"
+#include "kfd_mqd_manager.h"
+#include "v11_structs.h"
+#include "gc/gc_11_0_0_offset.h"
+#include "gc/gc_11_0_0_sh_mask.h"
+#include "amdgpu_amdkfd.h"
+
+static inline struct v11_compute_mqd *get_mqd(void *mqd)
+{
+	return (struct v11_compute_mqd *)mqd;
+}
+
+static inline struct v11_sdma_mqd *get_sdma_mqd(void *mqd)
+{
+	return (struct v11_sdma_mqd *)mqd;
+}
+
+static void update_cu_mask(struct mqd_manager *mm, void *mqd,
+			   struct mqd_update_info *minfo)
+{
+	struct v11_compute_mqd *m;
+	uint32_t se_mask[KFD_MAX_NUM_SE] = {0};
+
+	if (!minfo || (minfo->update_flag != UPDATE_FLAG_CU_MASK) ||
+	    !minfo->cu_mask.ptr)
+		return;
+
+	mqd_symmetrically_map_cu_mask(mm,
+		minfo->cu_mask.ptr, minfo->cu_mask.count, se_mask);
+
+	m = get_mqd(mqd);
+	m->compute_static_thread_mgmt_se0 = se_mask[0];
+	m->compute_static_thread_mgmt_se1 = se_mask[1];
+	m->compute_static_thread_mgmt_se2 = se_mask[2];
+	m->compute_static_thread_mgmt_se3 = se_mask[3];
+	m->compute_static_thread_mgmt_se4 = se_mask[4];
+	m->compute_static_thread_mgmt_se5 = se_mask[5];
+	m->compute_static_thread_mgmt_se6 = se_mask[6];
+	m->compute_static_thread_mgmt_se7 = se_mask[7];
+
+	pr_debug("update cu mask to %#x %#x %#x %#x %#x %#x %#x %#x\n",
+		m->compute_static_thread_mgmt_se0,
+		m->compute_static_thread_mgmt_se1,
+		m->compute_static_thread_mgmt_se2,
+		m->compute_static_thread_mgmt_se3,
+		m->compute_static_thread_mgmt_se4,
+		m->compute_static_thread_mgmt_se5,
+		m->compute_static_thread_mgmt_se6,
+		m->compute_static_thread_mgmt_se7);
+}
+
+static void set_priority(struct v11_compute_mqd *m, struct queue_properties *q)
+{
+	m->cp_hqd_pipe_priority = pipe_priority_map[q->priority];
+	m->cp_hqd_queue_priority = q->priority;
+}
+
+static struct kfd_mem_obj *allocate_mqd(struct kfd_dev *kfd,
+		struct queue_properties *q)
+{
+	struct kfd_mem_obj *mqd_mem_obj;
+	int size;
+
+	/*
+	 * MES writes to areas beyond the MQD size, so allocate
+	 * one PAGE_SIZE of memory for the MQD if MES is enabled.
+	 */
+	if (kfd->shared_resources.enable_mes)
+		size = PAGE_SIZE;
+	else
+		size = sizeof(struct v11_compute_mqd);
+
+	if (kfd_gtt_sa_allocate(kfd, size, &mqd_mem_obj))
+		return NULL;
+
+	return mqd_mem_obj;
+}
+
+static void init_mqd(struct mqd_manager *mm, void **mqd,
+			struct kfd_mem_obj *mqd_mem_obj, uint64_t *gart_addr,
+			struct queue_properties *q)
+{
+	uint64_t addr;
+	struct v11_compute_mqd *m;
+	int size;
+
+	m = (struct v11_compute_mqd *) mqd_mem_obj->cpu_ptr;
+	addr = mqd_mem_obj->gpu_addr;
+
+	if (mm->dev->shared_resources.enable_mes)
+		size = PAGE_SIZE;
+	else
+		size = sizeof(struct v11_compute_mqd);
+
+	memset(m, 0, size);
+
+	m->header = 0xC0310800;
+	m->compute_pipelinestat_enable = 1;
+	m->compute_static_thread_mgmt_se0 = 0xFFFFFFFF;
+	m->compute_static_thread_mgmt_se1 = 0xFFFFFFFF;
+	m->compute_static_thread_mgmt_se2 = 0xFFFFFFFF;
+	m->compute_static_thread_mgmt_se3 = 0xFFFFFFFF;
+
+	m->cp_hqd_persistent_state = CP_HQD_PERSISTENT_STATE__PRELOAD_REQ_MASK |
+			0x55 << CP_HQD_PERSISTENT_STATE__PRELOAD_SIZE__SHIFT;
+
+	m->cp_mqd_control = 1 << CP_MQD_CONTROL__PRIV_STATE__SHIFT;
+
+	m->cp_mqd_base_addr_lo        = lower_32_bits(addr);
+	m->cp_mqd_base_addr_hi        = upper_32_bits(addr);
+
+	m->cp_hqd_quantum = 1 << CP_HQD_QUANTUM__QUANTUM_EN__SHIFT |
+			1 << CP_HQD_QUANTUM__QUANTUM_SCALE__SHIFT |
+			1 << CP_HQD_QUANTUM__QUANTUM_DURATION__SHIFT;
+
+	if (q->format == KFD_QUEUE_FORMAT_AQL) {
+		m->cp_hqd_aql_control =
+			1 << CP_HQD_AQL_CONTROL__CONTROL0__SHIFT;
+	}
+
+	if (mm->dev->cwsr_enabled) {
+		m->cp_hqd_persistent_state |=
+			(1 << CP_HQD_PERSISTENT_STATE__QSWITCH_MODE__SHIFT);
+		m->cp_hqd_ctx_save_base_addr_lo =
+			lower_32_bits(q->ctx_save_restore_area_address);
+		m->cp_hqd_ctx_save_base_addr_hi =
+			upper_32_bits(q->ctx_save_restore_area_address);
+		m->cp_hqd_ctx_save_size = q->ctx_save_restore_area_size;
+		m->cp_hqd_cntl_stack_size = q->ctl_stack_size;
+		m->cp_hqd_cntl_stack_offset = q->ctl_stack_size;
+		m->cp_hqd_wg_state_offset = q->ctl_stack_size;
+	}
+
+	*mqd = m;
+	if (gart_addr)
+		*gart_addr = addr;
+	mm->update_mqd(mm, m, q, NULL);
+}
+
+static int load_mqd(struct mqd_manager *mm, void *mqd,
+			uint32_t pipe_id, uint32_t queue_id,
+			struct queue_properties *p, struct mm_struct *mms)
+{
+	int r = 0;
+	/* AQL write pointer counts in 64B packets, PM4/CP counts in dwords. */
+	uint32_t wptr_shift = (p->format == KFD_QUEUE_FORMAT_AQL ? 4 : 0);
+
+	r = mm->dev->kfd2kgd->hqd_load(mm->dev->adev, mqd, pipe_id, queue_id,
+					  (uint32_t __user *)p->write_ptr,
+					  wptr_shift, 0, mms);
+	return r;
+}
+
+static int hiq_load_mqd_kiq(struct mqd_manager *mm, void *mqd,
+			    uint32_t pipe_id, uint32_t queue_id,
+			    struct queue_properties *p, struct mm_struct *mms)
+{
+	return mm->dev->kfd2kgd->hiq_mqd_load(mm->dev->adev, mqd, pipe_id,
+					      queue_id, p->doorbell_off);
+}
+
+static void update_mqd(struct mqd_manager *mm, void *mqd,
+		       struct queue_properties *q,
+		       struct mqd_update_info *minfo)
+{
+	struct v11_compute_mqd *m;
+
+	m = get_mqd(mqd);
+
+	m->cp_hqd_pq_control = 5 << CP_HQD_PQ_CONTROL__RPTR_BLOCK_SIZE__SHIFT;
+	m->cp_hqd_pq_control |=
+			ffs(q->queue_size / sizeof(unsigned int)) - 1 - 1;
+	pr_debug("cp_hqd_pq_control 0x%x\n", m->cp_hqd_pq_control);
+
+	m->cp_hqd_pq_base_lo = lower_32_bits((uint64_t)q->queue_address >> 8);
+	m->cp_hqd_pq_base_hi = upper_32_bits((uint64_t)q->queue_address >> 8);
+
+	m->cp_hqd_pq_rptr_report_addr_lo = lower_32_bits((uint64_t)q->read_ptr);
+	m->cp_hqd_pq_rptr_report_addr_hi = upper_32_bits((uint64_t)q->read_ptr);
+	m->cp_hqd_pq_wptr_poll_addr_lo = lower_32_bits((uint64_t)q->write_ptr);
+	m->cp_hqd_pq_wptr_poll_addr_hi = upper_32_bits((uint64_t)q->write_ptr);
+
+	m->cp_hqd_pq_doorbell_control =
+		q->doorbell_off <<
+			CP_HQD_PQ_DOORBELL_CONTROL__DOORBELL_OFFSET__SHIFT;
+	pr_debug("cp_hqd_pq_doorbell_control 0x%x\n",
+			m->cp_hqd_pq_doorbell_control);
+
+	m->cp_hqd_ib_control = 3 << CP_HQD_IB_CONTROL__MIN_IB_AVAIL_SIZE__SHIFT;
+
+	/*
+	 * HW does not clamp this field correctly. Maximum EOP queue size
+	 * is constrained by per-SE EOP done signal count, which is 8-bit.
+	 * Limit is 0xFF EOP entries (= 0x7F8 dwords). CP will not submit
+	 * more than (EOP entry count - 1) so a queue size of 0x800 dwords
+	 * is safe, giving a maximum field value of 0xA.
+	 */
+	m->cp_hqd_eop_control = min(0xA,
+		ffs(q->eop_ring_buffer_size / sizeof(unsigned int)) - 1 - 1);
+	m->cp_hqd_eop_base_addr_lo =
+			lower_32_bits(q->eop_ring_buffer_address >> 8);
+	m->cp_hqd_eop_base_addr_hi =
+			upper_32_bits(q->eop_ring_buffer_address >> 8);
+
+	m->cp_hqd_iq_timer = 0;
+
+	m->cp_hqd_vmid = q->vmid;
+
+	if (q->format == KFD_QUEUE_FORMAT_AQL) {
+		/* GC 10 removed WPP_CLAMP from PQ Control */
+		m->cp_hqd_pq_control |= CP_HQD_PQ_CONTROL__NO_UPDATE_RPTR_MASK |
+				2 << CP_HQD_PQ_CONTROL__SLOT_BASED_WPTR__SHIFT |
+				1 << CP_HQD_PQ_CONTROL__QUEUE_FULL_EN__SHIFT ;
+		m->cp_hqd_pq_doorbell_control |=
+			1 << CP_HQD_PQ_DOORBELL_CONTROL__DOORBELL_BIF_DROP__SHIFT;
+	}
+	if (mm->dev->cwsr_enabled)
+		m->cp_hqd_ctx_save_control = 0;
+
+	update_cu_mask(mm, mqd, minfo);
+	set_priority(m, q);
+
+	q->is_active = QUEUE_IS_ACTIVE(*q);
+}
+
+static uint32_t read_doorbell_id(void *mqd)
+{
+	struct v11_compute_mqd *m = (struct v11_compute_mqd *)mqd;
+
+	return m->queue_doorbell_id0;
+}
+
+static int destroy_mqd(struct mqd_manager *mm, void *mqd,
+		       enum kfd_preempt_type type,
+		       unsigned int timeout, uint32_t pipe_id,
+		       uint32_t queue_id)
+{
+	return mm->dev->kfd2kgd->hqd_destroy
+		(mm->dev->adev, mqd, type, timeout,
+		 pipe_id, queue_id);
+}
+
+static void free_mqd(struct mqd_manager *mm, void *mqd,
+			struct kfd_mem_obj *mqd_mem_obj)
+{
+	kfd_gtt_sa_free(mm->dev, mqd_mem_obj);
+}
+
+static bool is_occupied(struct mqd_manager *mm, void *mqd,
+			uint64_t queue_address,	uint32_t pipe_id,
+			uint32_t queue_id)
+{
+	return mm->dev->kfd2kgd->hqd_is_occupied(
+		mm->dev->adev, queue_address,
+		pipe_id, queue_id);
+}
+
+static int get_wave_state(struct mqd_manager *mm, void *mqd,
+			  void __user *ctl_stack,
+			  u32 *ctl_stack_used_size,
+			  u32 *save_area_used_size)
+{
+	struct v11_compute_mqd *m;
+	/*struct mqd_user_context_save_area_header header;*/
+
+	m = get_mqd(mqd);
+
+	/* Control stack is written backwards, while workgroup context data
+	 * is written forwards. Both start from m->cp_hqd_cntl_stack_size.
+	 * Current position is at m->cp_hqd_cntl_stack_offset and
+	 * m->cp_hqd_wg_state_offset, respectively.
+	 */
+	*ctl_stack_used_size = m->cp_hqd_cntl_stack_size -
+		m->cp_hqd_cntl_stack_offset;
+	*save_area_used_size = m->cp_hqd_wg_state_offset -
+		m->cp_hqd_cntl_stack_size;
+
+	/* Control stack is not copied to user mode for GFXv11 because
+	 * it's part of the context save area that is already
+	 * accessible to user mode
+	 */
+/*
+	header.control_stack_size = *ctl_stack_used_size;
+	header.wave_state_size = *save_area_used_size;
+
+	header.wave_state_offset = m->cp_hqd_wg_state_offset;
+	header.control_stack_offset = m->cp_hqd_cntl_stack_offset;
+
+	if (copy_to_user(ctl_stack, &header, sizeof(header)))
+		return -EFAULT;
+*/
+	return 0;
+}
+
+static void init_mqd_hiq(struct mqd_manager *mm, void **mqd,
+			struct kfd_mem_obj *mqd_mem_obj, uint64_t *gart_addr,
+			struct queue_properties *q)
+{
+	struct v11_compute_mqd *m;
+
+	init_mqd(mm, mqd, mqd_mem_obj, gart_addr, q);
+
+	m = get_mqd(*mqd);
+
+	m->cp_hqd_pq_control |= 1 << CP_HQD_PQ_CONTROL__PRIV_STATE__SHIFT |
+			1 << CP_HQD_PQ_CONTROL__KMD_QUEUE__SHIFT;
+}
+
+static void init_mqd_sdma(struct mqd_manager *mm, void **mqd,
+		struct kfd_mem_obj *mqd_mem_obj, uint64_t *gart_addr,
+		struct queue_properties *q)
+{
+	struct v11_sdma_mqd *m;
+
+	m = (struct v11_sdma_mqd *) mqd_mem_obj->cpu_ptr;
+
+	memset(m, 0, sizeof(struct v11_sdma_mqd));
+
+	*mqd = m;
+	if (gart_addr)
+		*gart_addr = mqd_mem_obj->gpu_addr;
+
+	mm->update_mqd(mm, m, q, NULL);
+}
+
+static int load_mqd_sdma(struct mqd_manager *mm, void *mqd,
+		uint32_t pipe_id, uint32_t queue_id,
+		struct queue_properties *p, struct mm_struct *mms)
+{
+	return mm->dev->kfd2kgd->hqd_sdma_load(mm->dev->adev, mqd,
+					       (uint32_t __user *)p->write_ptr,
+					       mms);
+}
+
+#define SDMA_RLC_DUMMY_DEFAULT 0xf
+
+static void update_mqd_sdma(struct mqd_manager *mm, void *mqd,
+		struct queue_properties *q,
+		struct mqd_update_info *minfo)
+{
+	struct v11_sdma_mqd *m;
+
+	m = get_sdma_mqd(mqd);
+	m->sdmax_rlcx_rb_cntl = (ffs(q->queue_size / sizeof(unsigned int)) - 1)
+		<< SDMA0_QUEUE0_RB_CNTL__RB_SIZE__SHIFT |
+		q->vmid << SDMA0_QUEUE0_RB_CNTL__RB_VMID__SHIFT |
+		1 << SDMA0_QUEUE0_RB_CNTL__RPTR_WRITEBACK_ENABLE__SHIFT |
+		6 << SDMA0_QUEUE0_RB_CNTL__RPTR_WRITEBACK_TIMER__SHIFT;
+
+	m->sdmax_rlcx_rb_base = lower_32_bits(q->queue_address >> 8);
+	m->sdmax_rlcx_rb_base_hi = upper_32_bits(q->queue_address >> 8);
+	m->sdmax_rlcx_rb_rptr_addr_lo = lower_32_bits((uint64_t)q->read_ptr);
+	m->sdmax_rlcx_rb_rptr_addr_hi = upper_32_bits((uint64_t)q->read_ptr);
+	m->sdmax_rlcx_doorbell_offset =
+		q->doorbell_off << SDMA0_QUEUE0_DOORBELL_OFFSET__OFFSET__SHIFT;
+
+	m->sdma_engine_id = q->sdma_engine_id;
+	m->sdma_queue_id = q->sdma_queue_id;
+	m->sdmax_rlcx_dummy_reg = SDMA_RLC_DUMMY_DEFAULT;
+
+	q->is_active = QUEUE_IS_ACTIVE(*q);
+}
+
+/*
+ * preempt type here is ignored because there is only one way
+ * to preempt sdma queue
+ */
+static int destroy_mqd_sdma(struct mqd_manager *mm, void *mqd,
+		enum kfd_preempt_type type,
+		unsigned int timeout, uint32_t pipe_id,
+		uint32_t queue_id)
+{
+	return mm->dev->kfd2kgd->hqd_sdma_destroy(mm->dev->adev, mqd, timeout);
+}
+
+static bool is_occupied_sdma(struct mqd_manager *mm, void *mqd,
+		uint64_t queue_address, uint32_t pipe_id,
+		uint32_t queue_id)
+{
+	return mm->dev->kfd2kgd->hqd_sdma_is_occupied(mm->dev->adev, mqd);
+}
+
+#if defined(CONFIG_DEBUG_FS)
+
+static int debugfs_show_mqd(struct seq_file *m, void *data)
+{
+	seq_hex_dump(m, "    ", DUMP_PREFIX_OFFSET, 32, 4,
+		     data, sizeof(struct v11_compute_mqd), false);
+	return 0;
+}
+
+static int debugfs_show_mqd_sdma(struct seq_file *m, void *data)
+{
+	seq_hex_dump(m, "    ", DUMP_PREFIX_OFFSET, 32, 4,
+		     data, sizeof(struct v11_sdma_mqd), false);
+	return 0;
+}
+
+#endif
+
+struct mqd_manager *mqd_manager_init_v11(enum KFD_MQD_TYPE type,
+		struct kfd_dev *dev)
+{
+	struct mqd_manager *mqd;
+
+	if (WARN_ON(type >= KFD_MQD_TYPE_MAX))
+		return NULL;
+
+	mqd = kzalloc(sizeof(*mqd), GFP_KERNEL);
+	if (!mqd)
+		return NULL;
+
+	mqd->dev = dev;
+
+	switch (type) {
+	case KFD_MQD_TYPE_CP:
+		pr_debug("%s@%i\n", __func__, __LINE__);
+		mqd->allocate_mqd = allocate_mqd;
+		mqd->init_mqd = init_mqd;
+		mqd->free_mqd = free_mqd;
+		mqd->load_mqd = load_mqd;
+		mqd->update_mqd = update_mqd;
+		mqd->destroy_mqd = destroy_mqd;
+		mqd->is_occupied = is_occupied;
+		mqd->mqd_size = sizeof(struct v11_compute_mqd);
+		mqd->get_wave_state = get_wave_state;
+#if defined(CONFIG_DEBUG_FS)
+		mqd->debugfs_show_mqd = debugfs_show_mqd;
+#endif
+		pr_debug("%s@%i\n", __func__, __LINE__);
+		break;
+	case KFD_MQD_TYPE_HIQ:
+		pr_debug("%s@%i\n", __func__, __LINE__);
+		mqd->allocate_mqd = allocate_hiq_mqd;
+		mqd->init_mqd = init_mqd_hiq;
+		mqd->free_mqd = free_mqd_hiq_sdma;
+		mqd->load_mqd = hiq_load_mqd_kiq;
+		mqd->update_mqd = update_mqd;
+		mqd->destroy_mqd = destroy_mqd;
+		mqd->is_occupied = is_occupied;
+		mqd->mqd_size = sizeof(struct v11_compute_mqd);
+#if defined(CONFIG_DEBUG_FS)
+		mqd->debugfs_show_mqd = debugfs_show_mqd;
+#endif
+		mqd->read_doorbell_id = read_doorbell_id;
+		pr_debug("%s@%i\n", __func__, __LINE__);
+		break;
+	case KFD_MQD_TYPE_DIQ:
+		mqd->allocate_mqd = allocate_mqd;
+		mqd->init_mqd = init_mqd_hiq;
+		mqd->free_mqd = free_mqd;
+		mqd->load_mqd = load_mqd;
+		mqd->update_mqd = update_mqd;
+		mqd->destroy_mqd = destroy_mqd;
+		mqd->is_occupied = is_occupied;
+		mqd->mqd_size = sizeof(struct v11_compute_mqd);
+#if defined(CONFIG_DEBUG_FS)
+		mqd->debugfs_show_mqd = debugfs_show_mqd;
+#endif
+		break;
+	case KFD_MQD_TYPE_SDMA:
+		pr_debug("%s@%i\n", __func__, __LINE__);
+		mqd->allocate_mqd = allocate_sdma_mqd;
+		mqd->init_mqd = init_mqd_sdma;
+		mqd->free_mqd = free_mqd_hiq_sdma;
+		mqd->load_mqd = load_mqd_sdma;
+		mqd->update_mqd = update_mqd_sdma;
+		mqd->destroy_mqd = destroy_mqd_sdma;
+		mqd->is_occupied = is_occupied_sdma;
+		mqd->mqd_size = sizeof(struct v11_sdma_mqd);
+#if defined(CONFIG_DEBUG_FS)
+		mqd->debugfs_show_mqd = debugfs_show_mqd_sdma;
+#endif
+		pr_debug("%s@%i\n", __func__, __LINE__);
+		break;
+	default:
+		kfree(mqd);
+		return NULL;
+	}
+
+	return mqd;
+}
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_priv.h b/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
index b9ca957246dc..91e5fa56f0a2 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
@@ -228,6 +228,8 @@ struct kfd_device_info {
 	bool needs_pci_atomics;
 	uint32_t no_atomic_fw_version;
 	unsigned int num_sdma_queues_per_engine;
+	unsigned int num_reserved_sdma_queues_per_engine;
+	uint64_t reserved_sdma_queues_bitmap;
 };
 
 unsigned int kfd_get_num_sdma_engines(struct kfd_dev *kdev);
@@ -564,6 +566,10 @@ struct queue {
 
 	/* procfs */
 	struct kobject kobj;
+
+	void *gang_ctx_bo;
+	uint64_t gang_ctx_gpu_addr;
+	void *gang_ctx_cpu_ptr;
 };
 
 enum KFD_MQD_TYPE {
@@ -779,6 +785,10 @@ struct kfd_process_device {
 	 * checkpointed node to refer to this device.
 	 */
 	uint32_t user_gpu_id;
+
+	void *proc_ctx_bo;
+	uint64_t proc_ctx_gpu_addr;
+	void *proc_ctx_cpu_ptr;
 };
 
 #define qpd_to_pdd(x) container_of(x, struct kfd_process_device, qpd)
@@ -1170,6 +1180,8 @@ struct mqd_manager *mqd_manager_init_v9(enum KFD_MQD_TYPE type,
 		struct kfd_dev *dev);
 struct mqd_manager *mqd_manager_init_v10(enum KFD_MQD_TYPE type,
 		struct kfd_dev *dev);
+struct mqd_manager *mqd_manager_init_v11(enum KFD_MQD_TYPE type,
+		struct kfd_dev *dev);
 struct device_queue_manager *device_queue_manager_init(struct kfd_dev *dev);
 void device_queue_manager_uninit(struct device_queue_manager *dqm);
 struct kernel_queue *kernel_queue_init(struct kfd_dev *dev,
@@ -1292,6 +1304,7 @@ uint64_t kfd_get_number_elems(struct kfd_dev *kfd);
 /* Events */
 extern const struct kfd_event_interrupt_class event_interrupt_class_cik;
 extern const struct kfd_event_interrupt_class event_interrupt_class_v9;
+extern const struct kfd_event_interrupt_class event_interrupt_class_v11;
 
 extern const struct kfd_device_global_init_class device_global_init_class_cik;
 
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_process.c b/drivers/gpu/drm/amd/amdkfd/kfd_process.c
index cb8f4a459add..e3d64ec8c353 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_process.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_process.c
@@ -1041,6 +1041,9 @@ static void kfd_process_destroy_pdds(struct kfd_process *p)
 
 		kfd_free_process_doorbells(pdd->dev, pdd->doorbell_index);
 
+		if (pdd->dev->shared_resources.enable_mes)
+			amdgpu_amdkfd_free_gtt_mem(pdd->dev->adev,
+						   pdd->proc_ctx_bo);
 		/*
 		 * before destroying pdd, make sure to report availability
 		 * for auto suspend
@@ -1484,6 +1487,7 @@ struct kfd_process_device *kfd_create_process_device_data(struct kfd_dev *dev,
 							struct kfd_process *p)
 {
 	struct kfd_process_device *pdd = NULL;
+	int retval = 0;
 
 	if (WARN_ON_ONCE(p->n_pdds >= MAX_GPU_INSTANCE))
 		return NULL;
@@ -1516,6 +1520,21 @@ struct kfd_process_device *kfd_create_process_device_data(struct kfd_dev *dev,
 	pdd->sdma_past_activity_counter = 0;
 	pdd->user_gpu_id = dev->id;
 	atomic64_set(&pdd->evict_duration_counter, 0);
+
+	if (dev->shared_resources.enable_mes) {
+		retval = amdgpu_amdkfd_alloc_gtt_mem(dev->adev,
+						AMDGPU_MES_PROC_CTX_SIZE,
+						&pdd->proc_ctx_bo,
+						&pdd->proc_ctx_gpu_addr,
+						&pdd->proc_ctx_cpu_ptr,
+						false);
+		if (retval) {
+			pr_err("failed to allocate process context bo\n");
+			goto err_free_pdd;
+		}
+		memset(pdd->proc_ctx_cpu_ptr, 0, AMDGPU_MES_PROC_CTX_SIZE);
+	}
+
 	p->pdds[p->n_pdds++] = pdd;
 
 	/* Init idr used for memory handle translation */
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c b/drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c
index 4f58e671d39b..dc00484ff484 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c
@@ -198,8 +198,26 @@ static int init_user_queue(struct process_queue_manager *pqm,
 	(*q)->device = dev;
 	(*q)->process = pqm->process;
 
+	if (dev->shared_resources.enable_mes) {
+		retval = amdgpu_amdkfd_alloc_gtt_mem(dev->adev,
+						AMDGPU_MES_GANG_CTX_SIZE,
+						&(*q)->gang_ctx_bo,
+						&(*q)->gang_ctx_gpu_addr,
+						&(*q)->gang_ctx_cpu_ptr,
+						false);
+		if (retval) {
+			pr_err("failed to allocate gang context bo\n");
+			goto cleanup;
+		}
+		memset((*q)->gang_ctx_cpu_ptr, 0, AMDGPU_MES_GANG_CTX_SIZE);
+	}
+
 	pr_debug("PQM After init queue");
+	return 0;
 
+cleanup:
+	if (dev->shared_resources.enable_mes)
+		uninit_queue(*q);
 	return retval;
 }
 
@@ -418,6 +436,9 @@ int pqm_destroy_queue(struct process_queue_manager *pqm, unsigned int qid)
 			pdd->qpd.num_gws = 0;
 		}
 
+		if (dev->shared_resources.enable_mes)
+			amdgpu_amdkfd_free_gtt_mem(dev->adev,
+						   pqn->q->gang_ctx_bo);
 		uninit_queue(pqn->q);
 	}
 
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_topology.c b/drivers/gpu/drm/amd/amdkfd/kfd_topology.c
index 05089f1de4e9..2e20f54bb147 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_topology.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_topology.c
@@ -1412,7 +1412,8 @@ int kfd_topology_add_device(struct kfd_dev *gpu)
 	dev->node_props.num_sdma_xgmi_engines =
 					kfd_get_num_xgmi_sdma_engines(gpu);
 	dev->node_props.num_sdma_queues_per_engine =
-				gpu->device_info.num_sdma_queues_per_engine;
+				gpu->device_info.num_sdma_queues_per_engine -
+				gpu->device_info.num_reserved_sdma_queues_per_engine;
 	dev->node_props.num_gws = (dev->gpu->gws &&
 		dev->gpu->dqm->sched_policy != KFD_SCHED_POLICY_NO_HWS) ?
 		dev->gpu->adev->gds.gws_size : 0;
diff --git a/drivers/gpu/drm/amd/amdkfd/soc15_int.h b/drivers/gpu/drm/amd/amdkfd/soc15_int.h
index daf3c44547d3..e3f3b0b93a59 100644
--- a/drivers/gpu/drm/amd/amdkfd/soc15_int.h
+++ b/drivers/gpu/drm/amd/amdkfd/soc15_int.h
@@ -31,7 +31,8 @@
 #define SOC15_INTSRC_VMC_FAULT		0
 #define SOC15_INTSRC_SDMA_TRAP		224
 #define SOC15_INTSRC_SDMA_ECC		220
-
+#define SOC21_INTSRC_SDMA_TRAP		49
+#define SOC21_INTSRC_SDMA_ECC		62
 
 #define SOC15_CLIENT_ID_FROM_IH_ENTRY(entry) (le32_to_cpu(entry[0]) & 0xff)
 #define SOC15_SOURCE_ID_FROM_IH_ENTRY(entry) (le32_to_cpu(entry[0]) >> 8 & 0xff)
diff --git a/drivers/gpu/drm/amd/include/kgd_kfd_interface.h b/drivers/gpu/drm/amd/include/kgd_kfd_interface.h
index 2f60cf35a444..e85364dff4e0 100644
--- a/drivers/gpu/drm/amd/include/kgd_kfd_interface.h
+++ b/drivers/gpu/drm/amd/include/kgd_kfd_interface.h
@@ -152,6 +152,7 @@ struct kgd2kfd_shared_resources {
 	/* Minor device number of the render node */
 	int drm_render_minor;
 
+	bool enable_mes;
 };
 
 struct tile_config {
-- 
2.35.1



* [PATCH 23/29] drm/amdgpu: enable GFX CGCG/CGLS for GC11.0.0
  2022-04-29 18:01 [PATCH 00/29] Add support for GC 11.0 Alex Deucher
                   ` (21 preceding siblings ...)
  2022-04-29 18:02 ` [PATCH 22/29] drm/amdkfd: Add KFD support for soc21 v3 Alex Deucher
@ 2022-04-29 18:02 ` Alex Deucher
  2022-04-29 18:02 ` [PATCH 24/29] drm/amdgpu: enable fgcg for soc21 Alex Deucher
                   ` (5 subsequent siblings)
  28 siblings, 0 replies; 30+ messages in thread
From: Alex Deucher @ 2022-04-29 18:02 UTC (permalink / raw)
  To: amd-gfx; +Cc: Alex Deucher, Likun Gao, Evan Quan, Hawking Zhang

From: Evan Quan <evan.quan@amd.com>

Enable GFX CGCG (coarse grained clock gating) and
CGLS (coarse grained light sleep) for GC11.0.0.
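
For illustration only (not part of this patch): the flags set below are
consumed by the GFX clock gating update path, which checks cg_flags
before touching the CGCG/CGLS registers. A minimal sketch of that
pattern, with a placeholder helper name and the actual register
programming elided:

  /* Sketch only: act on CGCG/CGLS only when announced in cg_flags. */
  static void gfx_v11_0_update_cgcg_sketch(struct amdgpu_device *adev,
  					   bool enable)
  {
  	if (!(adev->cg_flags & (AMD_CG_SUPPORT_GFX_CGCG |
  				AMD_CG_SUPPORT_GFX_CGLS)))
  		return;
  
  	/* program RLC_CGCG_CGLS_CTRL and related registers per 'enable' */
  }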

Signed-off-by: Evan Quan <evan.quan@amd.com>
Reviewed-by: Hawking Zhang <Hawking.Zhang@amd.com>
Reviewed-by: Likun Gao <Likun.Gao@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/soc21.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/soc21.c b/drivers/gpu/drm/amd/amdgpu/soc21.c
index 80b558f649e0..dc200d11fcca 100644
--- a/drivers/gpu/drm/amd/amdgpu/soc21.c
+++ b/drivers/gpu/drm/amd/amdgpu/soc21.c
@@ -480,7 +480,8 @@ static int soc21_common_early_init(void *handle)
 	adev->external_rev_id = 0xff;
 	switch (adev->ip_versions[GC_HWIP][0]) {
 	case IP_VERSION(11, 0, 0):
-		adev->cg_flags = 0;
+		adev->cg_flags = AMD_CG_SUPPORT_GFX_CGCG |
+			AMD_CG_SUPPORT_GFX_CGLS;
 		adev->pg_flags = AMD_PG_SUPPORT_ATHUB |
 			AMD_PG_SUPPORT_MMHUB;
 		adev->external_rev_id = adev->rev_id + 0x1; // TODO: need update
-- 
2.35.1



* [PATCH 24/29] drm/amdgpu: enable fgcg for soc21
  2022-04-29 18:01 [PATCH 00/29] Add support for GC 11.0 Alex Deucher
                   ` (22 preceding siblings ...)
  2022-04-29 18:02 ` [PATCH 23/29] drm/amdgpu: enable GFX CGCG/CGLS for GC11.0.0 Alex Deucher
@ 2022-04-29 18:02 ` Alex Deucher
  2022-04-29 18:02 ` [PATCH 25/29] drm/amdgpu: enable GENERIC0_INT for gfx/compute pipes Alex Deucher
                   ` (4 subsequent siblings)
  28 siblings, 0 replies; 30+ messages in thread
From: Alex Deucher @ 2022-04-29 18:02 UTC (permalink / raw)
  To: amd-gfx; +Cc: Alex Deucher, Evan Quan

From: Evan Quan <evan.quan@amd.com>

Enable Fine Grained Clock Gating on soc21 asics.

Signed-off-by: Evan Quan <evan.quan@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/soc21.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/soc21.c b/drivers/gpu/drm/amd/amdgpu/soc21.c
index dc200d11fcca..d738635ecf1d 100644
--- a/drivers/gpu/drm/amd/amdgpu/soc21.c
+++ b/drivers/gpu/drm/amd/amdgpu/soc21.c
@@ -481,7 +481,8 @@ static int soc21_common_early_init(void *handle)
 	switch (adev->ip_versions[GC_HWIP][0]) {
 	case IP_VERSION(11, 0, 0):
 		adev->cg_flags = AMD_CG_SUPPORT_GFX_CGCG |
-			AMD_CG_SUPPORT_GFX_CGLS;
+			AMD_CG_SUPPORT_GFX_CGLS |
+			AMD_CG_SUPPORT_REPEATER_FGCG;
 		adev->pg_flags = AMD_PG_SUPPORT_ATHUB |
 			AMD_PG_SUPPORT_MMHUB;
 		adev->external_rev_id = adev->rev_id + 0x1; // TODO: need update
-- 
2.35.1



* [PATCH 25/29] drm/amdgpu: enable GENERIC0_INT for gfx/compute pipes
  2022-04-29 18:01 [PATCH 00/29] Add support for GC 11.0 Alex Deucher
                   ` (23 preceding siblings ...)
  2022-04-29 18:02 ` [PATCH 24/29] drm/amdgpu: enable fgcg for soc21 Alex Deucher
@ 2022-04-29 18:02 ` Alex Deucher
  2022-04-29 18:02 ` [PATCH 26/29] drm/amdgpu/gfx10: enable kiq to map mes ring Alex Deucher
                   ` (3 subsequent siblings)
  28 siblings, 0 replies; 30+ messages in thread
From: Alex Deucher @ 2022-04-29 18:02 UTC (permalink / raw)
  To: amd-gfx; +Cc: Alex Deucher, Hawking Zhang

From: Hawking Zhang <Hawking.Zhang@amd.com>

To generate an interrupt to the RLC for accessing indirect
registers that the CP cannot access directly.

Signed-off-by: Hawking Zhang <Hawking.Zhang@amd.com>
Acked-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
index 7a34802544c0..2f8fbe2651ff 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
@@ -5689,12 +5689,16 @@ gfx_v11_0_set_gfx_eop_interrupt_state(struct amdgpu_device *adev,
 		cp_int_cntl = RREG32_SOC15_IP(GC, cp_int_cntl_reg);
 		cp_int_cntl = REG_SET_FIELD(cp_int_cntl, CP_INT_CNTL_RING0,
 					    TIME_STAMP_INT_ENABLE, 0);
+		cp_int_cntl = REG_SET_FIELD(cp_int_cntl, CP_INT_CNTL_RING0,
+					    GENERIC0_INT_ENABLE, 0);
 		WREG32_SOC15_IP(GC, cp_int_cntl_reg, cp_int_cntl);
 		break;
 	case AMDGPU_IRQ_STATE_ENABLE:
 		cp_int_cntl = RREG32_SOC15_IP(GC, cp_int_cntl_reg);
 		cp_int_cntl = REG_SET_FIELD(cp_int_cntl, CP_INT_CNTL_RING0,
 					    TIME_STAMP_INT_ENABLE, 1);
+		cp_int_cntl = REG_SET_FIELD(cp_int_cntl, CP_INT_CNTL_RING0,
+					    GENERIC0_INT_ENABLE, 1);
 		WREG32_SOC15_IP(GC, cp_int_cntl_reg, cp_int_cntl);
 		break;
 	default:
@@ -5742,12 +5746,16 @@ static void gfx_v11_0_set_compute_eop_interrupt_state(struct amdgpu_device *adev
 		mec_int_cntl = RREG32_SOC15_IP(GC, mec_int_cntl_reg);
 		mec_int_cntl = REG_SET_FIELD(mec_int_cntl, CP_ME1_PIPE0_INT_CNTL,
 					     TIME_STAMP_INT_ENABLE, 0);
+		mec_int_cntl = REG_SET_FIELD(mec_int_cntl, CP_ME1_PIPE0_INT_CNTL,
+					     GENERIC0_INT_ENABLE, 0);
 		WREG32_SOC15_IP(GC, mec_int_cntl_reg, mec_int_cntl);
 		break;
 	case AMDGPU_IRQ_STATE_ENABLE:
 		mec_int_cntl = RREG32_SOC15_IP(GC, mec_int_cntl_reg);
 		mec_int_cntl = REG_SET_FIELD(mec_int_cntl, CP_ME1_PIPE0_INT_CNTL,
 					     TIME_STAMP_INT_ENABLE, 1);
+		mec_int_cntl = REG_SET_FIELD(mec_int_cntl, CP_ME1_PIPE0_INT_CNTL,
+					     GENERIC0_INT_ENABLE, 1);
 		WREG32_SOC15_IP(GC, mec_int_cntl_reg, mec_int_cntl);
 		break;
 	default:
-- 
2.35.1



* [PATCH 26/29] drm/amdgpu/gfx10: enable kiq to map mes ring
  2022-04-29 18:01 [PATCH 00/29] Add support for GC 11.0 Alex Deucher
                   ` (24 preceding siblings ...)
  2022-04-29 18:02 ` [PATCH 25/29] drm/amdgpu: enable GENERIC0_INT for gfx/compute pipes Alex Deucher
@ 2022-04-29 18:02 ` Alex Deucher
  2022-04-29 18:02 ` [PATCH 27/29] drm/amdgpu/gfx11: " Alex Deucher
                   ` (2 subsequent siblings)
  28 siblings, 0 replies; 30+ messages in thread
From: Alex Deucher @ 2022-04-29 18:02 UTC (permalink / raw)
  To: amd-gfx; +Cc: Alex Deucher, Jack Xiao, Hawking Zhang

From: Jack Xiao <Jack.Xiao@amd.com>

Enable KIQ to map MES ring:
1). add MES queue mapping support in MAP_QUEUES packet.
2). use correct MQD settings for MES queue.
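
For context, an illustrative sketch (not part of this patch) of how the
MES ring ends up being mapped through the KIQ once MAP_QUEUES knows the
MES engine select. Error reporting is trimmed and the function name is
a placeholder; the real logic is in mes_v10_1_kiq_enable_queue(), which
the diff below touches:

  static int mes_kiq_map_sketch(struct amdgpu_device *adev)
  {
  	struct amdgpu_kiq *kiq = &adev->gfx.kiq;
  	struct amdgpu_ring *kiq_ring = &kiq->ring;
  	int r;
  
  	if (!kiq->pmf || !kiq->pmf->kiq_map_queues)
  		return -EINVAL;
  
  	r = amdgpu_ring_alloc(kiq_ring, kiq->pmf->map_queues_size);
  	if (r)
  		return r;
  
  	/* emits PACKET3_MAP_QUEUES with eng_sel = 5 for the MES ring */
  	kiq->pmf->kiq_map_queues(kiq_ring, &adev->mes.ring);
  
  	/* the KIQ ring test kicks the doorbell and verifies progress */
  	return amdgpu_ring_test_ring(kiq_ring);
  }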

Signed-off-by: Jack Xiao <Jack.Xiao@amd.com>
Reviewed-by: Hawking Zhang <Hawking.Zhang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c |  16 +++-
 drivers/gpu/drm/amd/amdgpu/mes_v10_1.c | 106 ++++++++++---------------
 2 files changed, 55 insertions(+), 67 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
index 3c4f2a94ad9f..fc289ee54a47 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
@@ -3525,7 +3525,21 @@ static void gfx10_kiq_map_queues(struct amdgpu_ring *kiq_ring,
 {
 	uint64_t mqd_addr = amdgpu_bo_gpu_offset(ring->mqd_obj);
 	uint64_t wptr_addr = ring->wptr_gpu_addr;
-	uint32_t eng_sel = ring->funcs->type == AMDGPU_RING_TYPE_GFX ? 4 : 0;
+	uint32_t eng_sel = 0;
+
+	switch (ring->funcs->type) {
+	case AMDGPU_RING_TYPE_COMPUTE:
+		eng_sel = 0;
+		break;
+	case AMDGPU_RING_TYPE_GFX:
+		eng_sel = 4;
+		break;
+	case AMDGPU_RING_TYPE_MES:
+		eng_sel = 5;
+		break;
+	default:
+		WARN_ON(1);
+	}
 
 	amdgpu_ring_write(kiq_ring, PACKET3(PACKET3_MAP_QUEUES, 5));
 	/* Q_sel:0, vmid:0, vidmem: 1, engine:0, num_Q:1*/
diff --git a/drivers/gpu/drm/amd/amdgpu/mes_v10_1.c b/drivers/gpu/drm/amd/amdgpu/mes_v10_1.c
index 030a92b3a0da..18a129f36215 100644
--- a/drivers/gpu/drm/amd/amdgpu/mes_v10_1.c
+++ b/drivers/gpu/drm/amd/amdgpu/mes_v10_1.c
@@ -28,6 +28,7 @@
 #include "nv.h"
 #include "gc/gc_10_1_0_offset.h"
 #include "gc/gc_10_1_0_sh_mask.h"
+#include "gc/gc_10_1_0_default.h"
 #include "v10_structs.h"
 #include "mes_api_def.h"
 
@@ -529,7 +530,7 @@ static void mes_v10_1_enable(struct amdgpu_device *adev, bool enable)
 		data = REG_SET_FIELD(data, CP_MES_CNTL, MES_PIPE1_ACTIVE,
 				     adev->enable_mes_kiq ? 1 : 0);
 		WREG32_SOC15(GC, 0, mmCP_MES_CNTL, data);
-		udelay(50);
+		udelay(100);
 	} else {
 		data = RREG32_SOC15(GC, 0, mmCP_MES_CNTL);
 		data = REG_SET_FIELD(data, CP_MES_CNTL, MES_PIPE0_ACTIVE, 0);
@@ -665,7 +666,6 @@ static int mes_v10_1_allocate_eop_buf(struct amdgpu_device *adev,
 
 static int mes_v10_1_mqd_init(struct amdgpu_ring *ring)
 {
-	struct amdgpu_device *adev = ring->adev;
 	struct v10_compute_mqd *mqd = ring->mqd_ptr;
 	uint64_t hqd_gpu_addr, wb_gpu_addr, eop_base_addr;
 	uint32_t tmp;
@@ -679,38 +679,18 @@ static int mes_v10_1_mqd_init(struct amdgpu_ring *ring)
 	mqd->compute_misc_reserved = 0x00000003;
 
 	eop_base_addr = ring->eop_gpu_addr >> 8;
-	mqd->cp_hqd_eop_base_addr_lo = eop_base_addr;
-	mqd->cp_hqd_eop_base_addr_hi = upper_32_bits(eop_base_addr);
 
 	/* set the EOP size, register value is 2^(EOP_SIZE+1) dwords */
-	tmp = RREG32_SOC15(GC, 0, mmCP_HQD_EOP_CONTROL);
+	tmp = mmCP_HQD_EOP_CONTROL_DEFAULT;
 	tmp = REG_SET_FIELD(tmp, CP_HQD_EOP_CONTROL, EOP_SIZE,
 			(order_base_2(MES_EOP_SIZE / 4) - 1));
 
+	mqd->cp_hqd_eop_base_addr_lo = lower_32_bits(eop_base_addr);
+	mqd->cp_hqd_eop_base_addr_hi = upper_32_bits(eop_base_addr);
 	mqd->cp_hqd_eop_control = tmp;
 
-	/* enable doorbell? */
-	tmp = RREG32_SOC15(GC, 0, mmCP_HQD_PQ_DOORBELL_CONTROL);
-
-	if (ring->use_doorbell) {
-		tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_DOORBELL_CONTROL,
-				    DOORBELL_OFFSET, ring->doorbell_index);
-		tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_DOORBELL_CONTROL,
-				    DOORBELL_EN, 1);
-		tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_DOORBELL_CONTROL,
-				    DOORBELL_SOURCE, 0);
-		tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_DOORBELL_CONTROL,
-				    DOORBELL_HIT, 0);
-	}
-	else
-		tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_DOORBELL_CONTROL,
-				    DOORBELL_EN, 0);
-
-	mqd->cp_hqd_pq_doorbell_control = tmp;
-
 	/* disable the queue if it's active */
 	ring->wptr = 0;
-	mqd->cp_hqd_dequeue_request = 0;
 	mqd->cp_hqd_pq_rptr = 0;
 	mqd->cp_hqd_pq_wptr_lo = 0;
 	mqd->cp_hqd_pq_wptr_hi = 0;
@@ -720,17 +700,28 @@ static int mes_v10_1_mqd_init(struct amdgpu_ring *ring)
 	mqd->cp_mqd_base_addr_hi = upper_32_bits(ring->mqd_gpu_addr);
 
 	/* set MQD vmid to 0 */
-	tmp = RREG32_SOC15(GC, 0, mmCP_MQD_CONTROL);
+	tmp = mmCP_MQD_CONTROL_DEFAULT;
 	tmp = REG_SET_FIELD(tmp, CP_MQD_CONTROL, VMID, 0);
 	mqd->cp_mqd_control = tmp;
 
 	/* set the pointer to the HQD, this is similar CP_RB0_BASE/_HI */
 	hqd_gpu_addr = ring->gpu_addr >> 8;
-	mqd->cp_hqd_pq_base_lo = hqd_gpu_addr;
+	mqd->cp_hqd_pq_base_lo = lower_32_bits(hqd_gpu_addr);
 	mqd->cp_hqd_pq_base_hi = upper_32_bits(hqd_gpu_addr);
 
+	/* set the wb address whether it's enabled or not */
+	wb_gpu_addr = ring->rptr_gpu_addr;
+	mqd->cp_hqd_pq_rptr_report_addr_lo = wb_gpu_addr & 0xfffffffc;
+	mqd->cp_hqd_pq_rptr_report_addr_hi =
+		upper_32_bits(wb_gpu_addr) & 0xffff;
+
+	/* only used if CP_PQ_WPTR_POLL_CNTL.CP_PQ_WPTR_POLL_CNTL__EN_MASK=1 */
+	wb_gpu_addr = ring->wptr_gpu_addr;
+	mqd->cp_hqd_pq_wptr_poll_addr_lo = wb_gpu_addr & 0xfffffff8;
+	mqd->cp_hqd_pq_wptr_poll_addr_hi = upper_32_bits(wb_gpu_addr) & 0xffff;
+
 	/* set up the HQD, this is similar to CP_RB0_CNTL */
-	tmp = RREG32_SOC15(GC, 0, mmCP_HQD_PQ_CONTROL);
+	tmp = mmCP_HQD_PQ_CONTROL_DEFAULT;
 	tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, QUEUE_SIZE,
 			    (order_base_2(ring->ring_size / 4) - 1));
 	tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, RPTR_BLOCK_SIZE,
@@ -738,30 +729,18 @@ static int mes_v10_1_mqd_init(struct amdgpu_ring *ring)
 #ifdef __BIG_ENDIAN
 	tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, ENDIAN_SWAP, 1);
 #endif
-	tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, UNORD_DISPATCH, 0);
+	tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, UNORD_DISPATCH, 1);
 	tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, TUNNEL_DISPATCH, 0);
 	tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, PRIV_STATE, 1);
 	tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, KMD_QUEUE, 1);
+	tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, NO_UPDATE_RPTR, 1);
 	mqd->cp_hqd_pq_control = tmp;
 
-	/* set the wb address whether it's enabled or not */
-	wb_gpu_addr = ring->rptr_gpu_addr;
-	mqd->cp_hqd_pq_rptr_report_addr_lo = wb_gpu_addr & 0xfffffffc;
-	mqd->cp_hqd_pq_rptr_report_addr_hi =
-		upper_32_bits(wb_gpu_addr) & 0xffff;
-
-	/* only used if CP_PQ_WPTR_POLL_CNTL.CP_PQ_WPTR_POLL_CNTL__EN_MASK=1 */
-	wb_gpu_addr = ring->wptr_gpu_addr;
-	mqd->cp_hqd_pq_wptr_poll_addr_lo = wb_gpu_addr & 0xfffffff8;
-	mqd->cp_hqd_pq_wptr_poll_addr_hi = upper_32_bits(wb_gpu_addr) & 0xffff;
-
+	/* enable doorbell? */
 	tmp = 0;
-	/* enable the doorbell if requested */
 	if (ring->use_doorbell) {
-		tmp = RREG32_SOC15(GC, 0, mmCP_HQD_PQ_DOORBELL_CONTROL);
 		tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_DOORBELL_CONTROL,
-				DOORBELL_OFFSET, ring->doorbell_index);
-
+				    DOORBELL_OFFSET, ring->doorbell_index);
 		tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_DOORBELL_CONTROL,
 				    DOORBELL_EN, 1);
 		tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_DOORBELL_CONTROL,
@@ -769,30 +748,28 @@ static int mes_v10_1_mqd_init(struct amdgpu_ring *ring)
 		tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_DOORBELL_CONTROL,
 				    DOORBELL_HIT, 0);
 	}
-
+	else
+		tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_DOORBELL_CONTROL,
+				    DOORBELL_EN, 0);
 	mqd->cp_hqd_pq_doorbell_control = tmp;
 
-	/* reset read and write pointers, similar to CP_RB0_WPTR/_RPTR */
-	ring->wptr = 0;
-	mqd->cp_hqd_pq_rptr = RREG32_SOC15(GC, 0, mmCP_HQD_PQ_RPTR);
-
-	/* set the vmid for the queue */
 	mqd->cp_hqd_vmid = 0;
-
-	tmp = RREG32_SOC15(GC, 0, mmCP_HQD_PERSISTENT_STATE);
-	tmp = REG_SET_FIELD(tmp, CP_HQD_PERSISTENT_STATE, PRELOAD_SIZE, 0x53);
-	mqd->cp_hqd_persistent_state = tmp;
-
-	/* set MIN_IB_AVAIL_SIZE */
-	tmp = RREG32_SOC15(GC, 0, mmCP_HQD_IB_CONTROL);
-	tmp = REG_SET_FIELD(tmp, CP_HQD_IB_CONTROL, MIN_IB_AVAIL_SIZE, 3);
-	mqd->cp_hqd_ib_control = tmp;
-
 	/* activate the queue */
 	mqd->cp_hqd_active = 1;
+	mqd->cp_hqd_persistent_state = mmCP_HQD_PERSISTENT_STATE_DEFAULT;
+	mqd->cp_hqd_ib_control = mmCP_HQD_IB_CONTROL_DEFAULT;
+	mqd->cp_hqd_iq_timer = mmCP_HQD_IQ_TIMER_DEFAULT;
+	mqd->cp_hqd_quantum = mmCP_HQD_QUANTUM_DEFAULT;
+
+	tmp = mmCP_HQD_GFX_CONTROL_DEFAULT;
+	tmp = REG_SET_FIELD(tmp, CP_HQD_GFX_CONTROL, DB_UPDATED_MSG_EN, 1);
+	/* offset: 184 - this is used for CP_HQD_GFX_CONTROL */
+	mqd->cp_hqd_suspend_cntl_stack_offset = tmp;
+
 	return 0;
 }
 
+#if 0
 static void mes_v10_1_queue_init_register(struct amdgpu_ring *ring)
 {
 	struct v10_compute_mqd *mqd = ring->mqd_ptr;
@@ -854,8 +831,8 @@ static void mes_v10_1_queue_init_register(struct amdgpu_ring *ring)
 	nv_grbm_select(adev, 0, 0, 0, 0);
 	mutex_unlock(&adev->srbm_mutex);
 }
+#endif
 
-#if 0
 static int mes_v10_1_kiq_enable_queue(struct amdgpu_device *adev)
 {
 	struct amdgpu_kiq *kiq = &adev->gfx.kiq;
@@ -878,9 +855,9 @@ static int mes_v10_1_kiq_enable_queue(struct amdgpu_device *adev)
 		DRM_ERROR("kfq enable failed\n");
 		kiq_ring->sched.ready = false;
 	}
+
 	return r;
 }
-#endif
 
 static int mes_v10_1_queue_init(struct amdgpu_device *adev)
 {
@@ -890,13 +867,9 @@ static int mes_v10_1_queue_init(struct amdgpu_device *adev)
 	if (r)
 		return r;
 
-#if 0
 	r = mes_v10_1_kiq_enable_queue(adev);
 	if (r)
 		return r;
-#else
-	mes_v10_1_queue_init_register(&adev->mes.ring);
-#endif
 
 	return 0;
 }
@@ -972,6 +945,7 @@ static int mes_v10_1_mqd_sw_init(struct amdgpu_device *adev,
 		dev_warn(adev->dev, "failed to create ring mqd bo (%d)", r);
 		return r;
 	}
+	memset(ring->mqd_ptr, 0, mqd_size);
 
 	/* prepare MQD backup */
 	adev->mes.mqd_backup[pipe] = kmalloc(mqd_size, GFP_KERNEL);
-- 
2.35.1



* [PATCH 27/29] drm/amdgpu/gfx11: enable kiq to map mes ring
  2022-04-29 18:01 [PATCH 00/29] Add support for GC 11.0 Alex Deucher
                   ` (25 preceding siblings ...)
  2022-04-29 18:02 ` [PATCH 26/29] drm/amdgpu/gfx10: enable kiq to map mes ring Alex Deucher
@ 2022-04-29 18:02 ` Alex Deucher
  2022-04-29 18:02 ` [PATCH 28/29] drm/amdgpu/discovery: add GFX 11.0 Support Alex Deucher
  2022-04-29 18:02 ` [PATCH 29/29] drm/amdgpu/discovery: add MES11 support Alex Deucher
  28 siblings, 0 replies; 30+ messages in thread
From: Alex Deucher @ 2022-04-29 18:02 UTC (permalink / raw)
  To: amd-gfx; +Cc: Alex Deucher, Jack Xiao, Hawking Zhang

From: Jack Xiao <Jack.Xiao@amd.com>

Enable KIQ to map MES ring:
1). add MES queue mapping support in MAP_QUEUES packet.
2). use correct MQD settings for MES queue.

Signed-off-by: Jack Xiao <Jack.Xiao@amd.com>
Reviewed-by: Hawking Zhang <Hawking.Zhang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c |  16 +++-
 drivers/gpu/drm/amd/amdgpu/mes_v11_0.c | 107 ++++++++++---------------
 2 files changed, 57 insertions(+), 66 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
index 2f8fbe2651ff..184bf554acca 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
@@ -121,7 +121,21 @@ static void gfx11_kiq_map_queues(struct amdgpu_ring *kiq_ring,
 {
 	uint64_t mqd_addr = amdgpu_bo_gpu_offset(ring->mqd_obj);
 	uint64_t wptr_addr = ring->wptr_gpu_addr;
-	uint32_t eng_sel = ring->funcs->type == AMDGPU_RING_TYPE_GFX ? 4 : 0;
+	uint32_t eng_sel = 0;
+
+	switch (ring->funcs->type) {
+	case AMDGPU_RING_TYPE_COMPUTE:
+		eng_sel = 0;
+		break;
+	case AMDGPU_RING_TYPE_GFX:
+		eng_sel = 4;
+		break;
+	case AMDGPU_RING_TYPE_MES:
+		eng_sel = 5;
+		break;
+	default:
+		WARN_ON(1);
+	}
 
 	amdgpu_ring_write(kiq_ring, PACKET3(PACKET3_MAP_QUEUES, 5));
 	/* Q_sel:0, vmid:0, vidmem: 1, engine:0, num_Q:1*/
diff --git a/drivers/gpu/drm/amd/amdgpu/mes_v11_0.c b/drivers/gpu/drm/amd/amdgpu/mes_v11_0.c
index af944a43fb03..12e048c83b0c 100644
--- a/drivers/gpu/drm/amd/amdgpu/mes_v11_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/mes_v11_0.c
@@ -28,6 +28,7 @@
 #include "soc21.h"
 #include "gc/gc_11_0_0_offset.h"
 #include "gc/gc_11_0_0_sh_mask.h"
+#include "gc/gc_11_0_0_default.h"
 #include "v10_structs.h"
 #include "mes_v11_api_def.h"
 
@@ -632,7 +633,6 @@ static int mes_v11_0_allocate_eop_buf(struct amdgpu_device *adev,
 
 static int mes_v11_0_mqd_init(struct amdgpu_ring *ring)
 {
-	struct amdgpu_device *adev = ring->adev;
 	struct v10_compute_mqd *mqd = ring->mqd_ptr;
 	uint64_t hqd_gpu_addr, wb_gpu_addr, eop_base_addr;
 	uint32_t tmp;
@@ -646,38 +646,18 @@ static int mes_v11_0_mqd_init(struct amdgpu_ring *ring)
 	mqd->compute_misc_reserved = 0x00000007;
 
 	eop_base_addr = ring->eop_gpu_addr >> 8;
-	mqd->cp_hqd_eop_base_addr_lo = eop_base_addr;
-	mqd->cp_hqd_eop_base_addr_hi = upper_32_bits(eop_base_addr);
 
 	/* set the EOP size, register value is 2^(EOP_SIZE+1) dwords */
-	tmp = RREG32_SOC15(GC, 0, regCP_HQD_EOP_CONTROL);
+	tmp = regCP_HQD_EOP_CONTROL_DEFAULT;
 	tmp = REG_SET_FIELD(tmp, CP_HQD_EOP_CONTROL, EOP_SIZE,
 			(order_base_2(MES_EOP_SIZE / 4) - 1));
 
+	mqd->cp_hqd_eop_base_addr_lo = lower_32_bits(eop_base_addr);
+	mqd->cp_hqd_eop_base_addr_hi = upper_32_bits(eop_base_addr);
 	mqd->cp_hqd_eop_control = tmp;
 
-	/* enable doorbell? */
-	tmp = RREG32_SOC15(GC, 0, regCP_HQD_PQ_DOORBELL_CONTROL);
-
-	if (ring->use_doorbell) {
-		tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_DOORBELL_CONTROL,
-				    DOORBELL_OFFSET, ring->doorbell_index);
-		tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_DOORBELL_CONTROL,
-				    DOORBELL_EN, 1);
-		tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_DOORBELL_CONTROL,
-				    DOORBELL_SOURCE, 0);
-		tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_DOORBELL_CONTROL,
-				    DOORBELL_HIT, 0);
-	}
-	else
-		tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_DOORBELL_CONTROL,
-				    DOORBELL_EN, 0);
-
-	mqd->cp_hqd_pq_doorbell_control = tmp;
-
 	/* disable the queue if it's active */
 	ring->wptr = 0;
-	mqd->cp_hqd_dequeue_request = 0;
 	mqd->cp_hqd_pq_rptr = 0;
 	mqd->cp_hqd_pq_wptr_lo = 0;
 	mqd->cp_hqd_pq_wptr_hi = 0;
@@ -687,17 +667,28 @@ static int mes_v11_0_mqd_init(struct amdgpu_ring *ring)
 	mqd->cp_mqd_base_addr_hi = upper_32_bits(ring->mqd_gpu_addr);
 
 	/* set MQD vmid to 0 */
-	tmp = RREG32_SOC15(GC, 0, regCP_MQD_CONTROL);
+	tmp = regCP_MQD_CONTROL_DEFAULT;
 	tmp = REG_SET_FIELD(tmp, CP_MQD_CONTROL, VMID, 0);
 	mqd->cp_mqd_control = tmp;
 
 	/* set the pointer to the HQD, this is similar CP_RB0_BASE/_HI */
 	hqd_gpu_addr = ring->gpu_addr >> 8;
-	mqd->cp_hqd_pq_base_lo = hqd_gpu_addr;
+	mqd->cp_hqd_pq_base_lo = lower_32_bits(hqd_gpu_addr);
 	mqd->cp_hqd_pq_base_hi = upper_32_bits(hqd_gpu_addr);
 
+	/* set the wb address whether it's enabled or not */
+	wb_gpu_addr = ring->rptr_gpu_addr;
+	mqd->cp_hqd_pq_rptr_report_addr_lo = wb_gpu_addr & 0xfffffffc;
+	mqd->cp_hqd_pq_rptr_report_addr_hi =
+		upper_32_bits(wb_gpu_addr) & 0xffff;
+
+	/* only used if CP_PQ_WPTR_POLL_CNTL.CP_PQ_WPTR_POLL_CNTL__EN_MASK=1 */
+	wb_gpu_addr = ring->wptr_gpu_addr;
+	mqd->cp_hqd_pq_wptr_poll_addr_lo = wb_gpu_addr & 0xfffffff8;
+	mqd->cp_hqd_pq_wptr_poll_addr_hi = upper_32_bits(wb_gpu_addr) & 0xffff;
+
 	/* set up the HQD, this is similar to CP_RB0_CNTL */
-	tmp = RREG32_SOC15(GC, 0, regCP_HQD_PQ_CONTROL);
+	tmp = regCP_HQD_PQ_CONTROL_DEFAULT;
 	tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, QUEUE_SIZE,
 			    (order_base_2(ring->ring_size / 4) - 1));
 	tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, RPTR_BLOCK_SIZE,
@@ -705,30 +696,18 @@ static int mes_v11_0_mqd_init(struct amdgpu_ring *ring)
 #ifdef __BIG_ENDIAN
 	tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, ENDIAN_SWAP, 1);
 #endif
-	tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, UNORD_DISPATCH, 0);
+	tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, UNORD_DISPATCH, 1);
 	tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, TUNNEL_DISPATCH, 0);
 	tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, PRIV_STATE, 1);
 	tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, KMD_QUEUE, 1);
+	tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, NO_UPDATE_RPTR, 1);
 	mqd->cp_hqd_pq_control = tmp;
 
-	/* set the wb address whether it's enabled or not */
-	wb_gpu_addr = ring->rptr_gpu_addr;;
-	mqd->cp_hqd_pq_rptr_report_addr_lo = wb_gpu_addr & 0xfffffffc;
-	mqd->cp_hqd_pq_rptr_report_addr_hi =
-		upper_32_bits(wb_gpu_addr) & 0xffff;
-
-	/* only used if CP_PQ_WPTR_POLL_CNTL.CP_PQ_WPTR_POLL_CNTL__EN_MASK=1 */
-	wb_gpu_addr = ring->wptr_gpu_addr;
-	mqd->cp_hqd_pq_wptr_poll_addr_lo = wb_gpu_addr & 0xfffffff8;
-	mqd->cp_hqd_pq_wptr_poll_addr_hi = upper_32_bits(wb_gpu_addr) & 0xffff;
-
+	/* enable doorbell */
 	tmp = 0;
-	/* enable the doorbell if requested */
 	if (ring->use_doorbell) {
-		tmp = RREG32_SOC15(GC, 0, regCP_HQD_PQ_DOORBELL_CONTROL);
 		tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_DOORBELL_CONTROL,
-				DOORBELL_OFFSET, ring->doorbell_index);
-
+				    DOORBELL_OFFSET, ring->doorbell_index);
 		tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_DOORBELL_CONTROL,
 				    DOORBELL_EN, 1);
 		tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_DOORBELL_CONTROL,
@@ -736,27 +715,24 @@ static int mes_v11_0_mqd_init(struct amdgpu_ring *ring)
 		tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_DOORBELL_CONTROL,
 				    DOORBELL_HIT, 0);
 	}
-
+	else
+		tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_DOORBELL_CONTROL,
+				    DOORBELL_EN, 0);
 	mqd->cp_hqd_pq_doorbell_control = tmp;
 
-	/* reset read and write pointers, similar to CP_RB0_WPTR/_RPTR */
-	ring->wptr = 0;
-	mqd->cp_hqd_pq_rptr = RREG32_SOC15(GC, 0, regCP_HQD_PQ_RPTR);
-
-	/* set the vmid for the queue */
 	mqd->cp_hqd_vmid = 0;
-
-	tmp = RREG32_SOC15(GC, 0, regCP_HQD_PERSISTENT_STATE);
-	tmp = REG_SET_FIELD(tmp, CP_HQD_PERSISTENT_STATE, PRELOAD_SIZE, 0x55);
-	mqd->cp_hqd_persistent_state = tmp;
-
-	/* set MIN_IB_AVAIL_SIZE */
-	tmp = RREG32_SOC15(GC, 0, regCP_HQD_IB_CONTROL);
-	tmp = REG_SET_FIELD(tmp, CP_HQD_IB_CONTROL, MIN_IB_AVAIL_SIZE, 3);
-	mqd->cp_hqd_ib_control = tmp;
-
 	/* activate the queue */
 	mqd->cp_hqd_active = 1;
+	mqd->cp_hqd_persistent_state = regCP_HQD_PERSISTENT_STATE_DEFAULT;
+	mqd->cp_hqd_ib_control = regCP_HQD_IB_CONTROL_DEFAULT;
+	mqd->cp_hqd_iq_timer = regCP_HQD_IQ_TIMER_DEFAULT;
+	mqd->cp_hqd_quantum = regCP_HQD_QUANTUM_DEFAULT;
+
+	tmp = regCP_HQD_GFX_CONTROL_DEFAULT;
+	tmp = REG_SET_FIELD(tmp, CP_HQD_GFX_CONTROL, DB_UPDATED_MSG_EN, 1);
+	/* offset: 184 - this is used for CP_HQD_GFX_CONTROL */
+	mqd->cp_hqd_suspend_cntl_stack_offset = tmp;
+
 	return 0;
 }
 
@@ -822,7 +798,6 @@ static void mes_v11_0_queue_init_register(struct amdgpu_ring *ring)
 	mutex_unlock(&adev->srbm_mutex);
 }
 
-#if 0
 static int mes_v11_0_kiq_enable_queue(struct amdgpu_device *adev)
 {
 	struct amdgpu_kiq *kiq = &adev->gfx.kiq;
@@ -847,7 +822,6 @@ static int mes_v11_0_kiq_enable_queue(struct amdgpu_device *adev)
 	}
 	return r;
 }
-#endif
 
 static int mes_v11_0_queue_init(struct amdgpu_device *adev,
 				enum admgpu_mes_pipe pipe)
@@ -862,11 +836,17 @@ static int mes_v11_0_queue_init(struct amdgpu_device *adev,
 	else
 		BUG();
 
+	if ((pipe == AMDGPU_MES_SCHED_PIPE) &&
+	    (amdgpu_in_reset(adev) || adev->in_suspend)) {
+		*(ring->wptr_cpu_addr) = 0;
+		*(ring->rptr_cpu_addr) = 0;
+		amdgpu_ring_clear_ring(ring);
+	}
+
 	r = mes_v11_0_mqd_init(ring);
 	if (r)
 		return r;
 
-#if 0
 	if (pipe == AMDGPU_MES_SCHED_PIPE) {
 		r = mes_v11_0_kiq_enable_queue(adev);
 		if (r)
@@ -874,9 +854,6 @@ static int mes_v11_0_queue_init(struct amdgpu_device *adev,
 	} else {
 		mes_v11_0_queue_init_register(ring);
 	}
-#else
-	mes_v11_0_queue_init_register(ring);
-#endif
 
 	return 0;
 }
-- 
2.35.1



* [PATCH 28/29] drm/amdgpu/discovery: add GFX 11.0 Support
  2022-04-29 18:01 [PATCH 00/29] Add support for GC 11.0 Alex Deucher
                   ` (26 preceding siblings ...)
  2022-04-29 18:02 ` [PATCH 27/29] drm/amdgpu/gfx11: " Alex Deucher
@ 2022-04-29 18:02 ` Alex Deucher
  2022-04-29 18:02 ` [PATCH 29/29] drm/amdgpu/discovery: add MES11 support Alex Deucher
  28 siblings, 0 replies; 30+ messages in thread
From: Alex Deucher @ 2022-04-29 18:02 UTC (permalink / raw)
  To: amd-gfx; +Cc: Alex Deucher, Likun Gao

From: Likun Gao <Likun.Gao@amd.com>

Enable GFX 11.0 on asics where it is present.

Signed-off-by: Likun Gao <Likun.Gao@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
index 0adcbd38cf61..d3c9f07a43af 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
@@ -60,6 +60,7 @@
 #include "navi10_ih.h"
 #include "ih_v6_0.h"
 #include "gfx_v10_0.h"
+#include "gfx_v11_0.h"
 #include "sdma_v5_0.h"
 #include "sdma_v5_2.h"
 #include "vcn_v2_0.h"
@@ -1732,6 +1733,9 @@ static int amdgpu_discovery_set_gc_ip_blocks(struct amdgpu_device *adev)
 	case IP_VERSION(10, 3, 7):
 		amdgpu_device_ip_block_add(adev, &gfx_v10_0_ip_block);
 		break;
+	case IP_VERSION(11, 0, 0):
+		amdgpu_device_ip_block_add(adev, &gfx_v11_0_ip_block);
+		break;
 	default:
 		dev_err(adev->dev,
 			"Failed to add gfx ip block(GC_HWIP:0x%x)\n",
-- 
2.35.1



* [PATCH 29/29] drm/amdgpu/discovery: add MES11 support
  2022-04-29 18:01 [PATCH 00/29] Add support for GC 11.0 Alex Deucher
                   ` (27 preceding siblings ...)
  2022-04-29 18:02 ` [PATCH 28/29] drm/amdgpu/discovery: add GFX 11.0 Support Alex Deucher
@ 2022-04-29 18:02 ` Alex Deucher
  28 siblings, 0 replies; 30+ messages in thread
From: Alex Deucher @ 2022-04-29 18:02 UTC (permalink / raw)
  To: amd-gfx; +Cc: Alex Deucher

Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c | 21 +++++++++++++------
 1 file changed, 15 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
index d3c9f07a43af..0edb7c8f3b05 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
@@ -69,6 +69,7 @@
 #include "jpeg_v3_0.h"
 #include "amdgpu_vkms.h"
 #include "mes_v10_1.h"
+#include "mes_v11_0.h"
 #include "smuio_v11_0.h"
 #include "smuio_v11_0_6.h"
 #include "smuio_v13_0.h"
@@ -1873,7 +1874,17 @@ static int amdgpu_discovery_set_mes_ip_blocks(struct amdgpu_device *adev)
 	case IP_VERSION(10, 3, 4):
 	case IP_VERSION(10, 3, 5):
 	case IP_VERSION(10, 3, 6):
-		amdgpu_device_ip_block_add(adev, &mes_v10_1_ip_block);
+		if (amdgpu_mes) {
+			amdgpu_device_ip_block_add(adev, &mes_v10_1_ip_block);
+			adev->enable_mes = true;
+			if (amdgpu_mes_kiq)
+				adev->enable_mes_kiq = true;
+		}
+		break;
+	case IP_VERSION(11, 0, 0):
+		amdgpu_device_ip_block_add(adev, &mes_v11_0_ip_block);
+		adev->enable_mes = true;
+		adev->enable_mes_kiq = true;
 		break;
 	default:
 		break;
@@ -2308,11 +2319,9 @@ int amdgpu_discovery_set_ip_blocks(struct amdgpu_device *adev)
 	if (r)
 		return r;
 
-	if (adev->enable_mes) {
-		r = amdgpu_discovery_set_mes_ip_blocks(adev);
-		if (r)
-			return r;
-	}
+	r = amdgpu_discovery_set_mes_ip_blocks(adev);
+	if (r)
+		return r;
 
 	return 0;
 }
-- 
2.35.1

