* [PATCH 00/11] Enable UVD and PSP loading for SRIOV
@ 2017-04-24 6:57 Xiangliang Yu
From: Xiangliang Yu @ 2017-04-24 6:57 UTC (permalink / raw)
To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Xiangliang Yu
This series enables UVD and PSP firmware loading under SRIOV.
Daniel Wang (2):
drm/amdgpu/psp: skip loading SDMA/RLCG under SRIOV VF
drm/amdgpu/vce4: fix a PSP loading VCE issue
Frank Min (5):
drm/amdgpu/vce4: move mm table construction functions into mmsch
header file
drm/amdgpu/soc15: enable UVD code path for sriov
drm/amdgpu/uvd7: add sriov uvd initialization sequences
drm/amdgpu/uvd7: add uvd doorbell initialization for sriov
drm/amdgpu/uvd7: add UVD hw init sequences for sriov
Xiangliang Yu (4):
drm/amdgpu/virt: bypass cg and pg setting for SRIOV
drm/amdgpu/virt: change the place of virt_init_setting
drm/amdgpu/virt: add two functions for MM table
drm/amdgpu/vce4: replaced with virt_alloc_mm_table
drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c | 6 +
drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c | 48 +++++
drivers/gpu/drm/amd/amdgpu/amdgpu_virt.h | 2 +
drivers/gpu/drm/amd/amdgpu/mmsch_v1_0.h | 57 +++++
drivers/gpu/drm/amd/amdgpu/soc15.c | 13 +-
drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c | 357 +++++++++++++++++++++++++++----
drivers/gpu/drm/amd/amdgpu/vce_v4_0.c | 95 ++------
drivers/gpu/drm/amd/amdgpu/vi.c | 10 +-
8 files changed, 461 insertions(+), 127 deletions(-)
--
2.7.4
_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx
* [PATCH 01/11] drm/amdgpu/virt: bypass cg and pg setting for SRIOV
From: Xiangliang Yu @ 2017-04-24 6:57 UTC (permalink / raw)
To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Frank Min, Xiangliang Yu
The GPU hypervisor covers all CG and PG settings, so the guest doesn't
need to do anything. Bypass them by clearing the flags.
Signed-off-by: Frank Min <Frank.Min@amd.com>
Signed-off-by: Xiangliang Yu <Xiangliang.Yu@amd.com>
---
drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c
index be43823..7fce7b5 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c
@@ -105,6 +105,8 @@ void amdgpu_virt_init_setting(struct amdgpu_device *adev)
/* enable virtual display */
adev->mode_info.num_crtc = 1;
adev->enable_virtual_display = true;
+ adev->cg_flags = 0;
+ adev->pg_flags = 0;
mutex_init(&adev->virt.lock_kiq);
mutex_init(&adev->virt.lock_reset);
--
2.7.4
* [PATCH 02/11] drm/amdgpu/virt: change the place of virt_init_setting
From: Xiangliang Yu @ 2017-04-24 6:58 UTC (permalink / raw)
To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Xiangliang Yu
Move the virt_init_setting call after the per-ASIC switch so that it
can override the CG and PG flags configuration.
Signed-off-by: Xiangliang Yu <Xiangliang.Yu@amd.com>
---
drivers/gpu/drm/amd/amdgpu/soc15.c | 10 +++++-----
drivers/gpu/drm/amd/amdgpu/vi.c | 10 +++++-----
2 files changed, 10 insertions(+), 10 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/soc15.c b/drivers/gpu/drm/amd/amdgpu/soc15.c
index 2c05dab..6999ac3 100644
--- a/drivers/gpu/drm/amd/amdgpu/soc15.c
+++ b/drivers/gpu/drm/amd/amdgpu/soc15.c
@@ -538,11 +538,6 @@ static int soc15_common_early_init(void *handle)
(amdgpu_ip_block_mask & (1 << AMD_IP_BLOCK_TYPE_PSP)))
psp_enabled = true;
- if (amdgpu_sriov_vf(adev)) {
- amdgpu_virt_init_setting(adev);
- xgpu_ai_mailbox_set_irq_funcs(adev);
- }
-
/*
* nbio need be used for both sdma and gfx9, but only
* initializes once
@@ -586,6 +581,11 @@ static int soc15_common_early_init(void *handle)
return -EINVAL;
}
+ if (amdgpu_sriov_vf(adev)) {
+ amdgpu_virt_init_setting(adev);
+ xgpu_ai_mailbox_set_irq_funcs(adev);
+ }
+
adev->firmware.load_type = amdgpu_ucode_get_load_type(adev, amdgpu_fw_load_type);
amdgpu_get_pcie_info(adev);
diff --git a/drivers/gpu/drm/amd/amdgpu/vi.c b/drivers/gpu/drm/amd/amdgpu/vi.c
index 505c17a..48fb373 100644
--- a/drivers/gpu/drm/amd/amdgpu/vi.c
+++ b/drivers/gpu/drm/amd/amdgpu/vi.c
@@ -895,11 +895,6 @@ static int vi_common_early_init(void *handle)
(amdgpu_ip_block_mask & (1 << AMD_IP_BLOCK_TYPE_SMC)))
smc_enabled = true;
- if (amdgpu_sriov_vf(adev)) {
- amdgpu_virt_init_setting(adev);
- xgpu_vi_mailbox_set_irq_funcs(adev);
- }
-
adev->rev_id = vi_get_rev_id(adev);
adev->external_rev_id = 0xFF;
switch (adev->asic_type) {
@@ -1072,6 +1067,11 @@ static int vi_common_early_init(void *handle)
return -EINVAL;
}
+ if (amdgpu_sriov_vf(adev)) {
+ amdgpu_virt_init_setting(adev);
+ xgpu_vi_mailbox_set_irq_funcs(adev);
+ }
+
/* vi use smc load by default */
adev->firmware.load_type = amdgpu_ucode_get_load_type(adev, amdgpu_fw_load_type);
--
2.7.4
* [PATCH 03/11] drm/amdgpu/psp: skip loading SDMA/RLCG under SRIOV VF
From: Xiangliang Yu @ 2017-04-24 6:58 UTC (permalink / raw)
To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Daniel Wang, Xiangliang Yu
From: Daniel Wang <Daniel.Wang2@amd.com>
The GPU hypervisor now loads the SDMA and RLCG ucode, so skip
loading them in the guest.
Signed-off-by: Daniel Wang <Daniel.Wang2@amd.com>
Signed-off-by: Xiangliang Yu <Xiangliang.Yu@amd.com>
---
drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
index 1e380fe..ac5e92e 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
@@ -289,6 +289,12 @@ static int psp_np_fw_load(struct psp_context *psp)
if (ucode->ucode_id == AMDGPU_UCODE_ID_SMC &&
psp_smu_reload_quirk(psp))
continue;
+ if (amdgpu_sriov_vf(adev) &&
+ (ucode->ucode_id == AMDGPU_UCODE_ID_SDMA0
+ || ucode->ucode_id == AMDGPU_UCODE_ID_SDMA1
+ || ucode->ucode_id == AMDGPU_UCODE_ID_RLC_G))
+ /* skip ucode loading in SRIOV VF */
+ continue;
ret = psp_prep_cmd_buf(ucode, psp->cmd);
if (ret)
--
2.7.4
* [PATCH 04/11] drm/amdgpu/vce4: fix a PSP loading VCE issue
From: Xiangliang Yu @ 2017-04-24 6:58 UTC (permalink / raw)
To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Daniel Wang, Xiangliang Yu
From: Daniel Wang <Daniel.Wang2@amd.com>
Fix a PSP firmware loading issue under SRIOV: when PSP loading is used,
program the VCE VCPU cache BARs with the PSP-managed ucode address
instead of adev->vce.gpu_addr.
Signed-off-by: Daniel Wang <Daniel.Wang2@amd.com>
Signed-off-by: Xiangliang Yu <Xiangliang.Yu@amd.com>
---
drivers/gpu/drm/amd/amdgpu/vce_v4_0.c | 18 +++++++++++++++---
1 file changed, 15 insertions(+), 3 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/vce_v4_0.c b/drivers/gpu/drm/amd/amdgpu/vce_v4_0.c
index 76fc8ed..1deb546 100644
--- a/drivers/gpu/drm/amd/amdgpu/vce_v4_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/vce_v4_0.c
@@ -291,9 +291,21 @@ static int vce_v4_0_sriov_start(struct amdgpu_device *adev)
INSERT_DIRECT_WT(SOC15_REG_OFFSET(VCE, 0, mmVCE_LMI_SWAP_CNTL1), 0);
INSERT_DIRECT_WT(SOC15_REG_OFFSET(VCE, 0, mmVCE_LMI_VM_CTRL), 0);
- INSERT_DIRECT_WT(SOC15_REG_OFFSET(VCE, 0, mmVCE_LMI_VCPU_CACHE_40BIT_BAR0), adev->vce.gpu_addr >> 8);
- INSERT_DIRECT_WT(SOC15_REG_OFFSET(VCE, 0, mmVCE_LMI_VCPU_CACHE_40BIT_BAR1), adev->vce.gpu_addr >> 8);
- INSERT_DIRECT_WT(SOC15_REG_OFFSET(VCE, 0, mmVCE_LMI_VCPU_CACHE_40BIT_BAR2), adev->vce.gpu_addr >> 8);
+ if (adev->firmware.load_type == AMDGPU_FW_LOAD_PSP) {
+ INSERT_DIRECT_WT(SOC15_REG_OFFSET(VCE, 0, mmVCE_LMI_VCPU_CACHE_40BIT_BAR0),
+ adev->firmware.ucode[AMDGPU_UCODE_ID_VCE].mc_addr >> 8);
+ INSERT_DIRECT_WT(SOC15_REG_OFFSET(VCE, 0, mmVCE_LMI_VCPU_CACHE_40BIT_BAR1),
+ adev->firmware.ucode[AMDGPU_UCODE_ID_VCE].mc_addr >> 8);
+ INSERT_DIRECT_WT(SOC15_REG_OFFSET(VCE, 0, mmVCE_LMI_VCPU_CACHE_40BIT_BAR2),
+ adev->firmware.ucode[AMDGPU_UCODE_ID_VCE].mc_addr >> 8);
+ } else {
+ INSERT_DIRECT_WT(SOC15_REG_OFFSET(VCE, 0, mmVCE_LMI_VCPU_CACHE_40BIT_BAR0),
+ adev->vce.gpu_addr >> 8);
+ INSERT_DIRECT_WT(SOC15_REG_OFFSET(VCE, 0, mmVCE_LMI_VCPU_CACHE_40BIT_BAR1),
+ adev->vce.gpu_addr >> 8);
+ INSERT_DIRECT_WT(SOC15_REG_OFFSET(VCE, 0, mmVCE_LMI_VCPU_CACHE_40BIT_BAR2),
+ adev->vce.gpu_addr >> 8);
+ }
offset = AMDGPU_VCE_FIRMWARE_OFFSET;
size = VCE_V4_0_FW_SIZE;
--
2.7.4
* [PATCH 05/11] drm/amdgpu/vce4: move mm table construction functions into mmsch header file
From: Xiangliang Yu @ 2017-04-24 6:58 UTC (permalink / raw)
To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Xiangliang Yu, Frank Min
From: Frank Min <Frank.Min@amd.com>
Move the mm table construction functions into the mmsch header file so
that UVD can reuse them.
Signed-off-by: Frank Min <Frank.Min@amd.com>
Signed-off-by: Xiangliang Yu <Xiangliang.Yu@amd.com>
---
drivers/gpu/drm/amd/amdgpu/mmsch_v1_0.h | 57 +++++++++++++++++++++++++++++++++
drivers/gpu/drm/amd/amdgpu/vce_v4_0.c | 57 ---------------------------------
2 files changed, 57 insertions(+), 57 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/mmsch_v1_0.h b/drivers/gpu/drm/amd/amdgpu/mmsch_v1_0.h
index 5f0fc8b..f048f91 100644
--- a/drivers/gpu/drm/amd/amdgpu/mmsch_v1_0.h
+++ b/drivers/gpu/drm/amd/amdgpu/mmsch_v1_0.h
@@ -84,4 +84,61 @@ struct mmsch_v1_0_cmd_indirect_write {
uint32_t reg_value;
};
+static inline void mmsch_insert_direct_wt(struct mmsch_v1_0_cmd_direct_write *direct_wt,
+ uint32_t *init_table,
+ uint32_t reg_offset,
+ uint32_t value)
+{
+ direct_wt->cmd_header.reg_offset = reg_offset;
+ direct_wt->reg_value = value;
+ memcpy((void *)init_table, direct_wt, sizeof(struct mmsch_v1_0_cmd_direct_write));
+}
+
+static inline void mmsch_insert_direct_rd_mod_wt(struct mmsch_v1_0_cmd_direct_read_modify_write *direct_rd_mod_wt,
+ uint32_t *init_table,
+ uint32_t reg_offset,
+ uint32_t mask, uint32_t data)
+{
+ direct_rd_mod_wt->cmd_header.reg_offset = reg_offset;
+ direct_rd_mod_wt->mask_value = mask;
+ direct_rd_mod_wt->write_data = data;
+ memcpy((void *)init_table, direct_rd_mod_wt,
+ sizeof(struct mmsch_v1_0_cmd_direct_read_modify_write));
+}
+
+static inline void mmsch_insert_direct_poll(struct mmsch_v1_0_cmd_direct_polling *direct_poll,
+ uint32_t *init_table,
+ uint32_t reg_offset,
+ uint32_t mask, uint32_t wait)
+{
+ direct_poll->cmd_header.reg_offset = reg_offset;
+ direct_poll->mask_value = mask;
+ direct_poll->wait_value = wait;
+ memcpy((void *)init_table, direct_poll, sizeof(struct mmsch_v1_0_cmd_direct_polling));
+}
+
+#define INSERT_DIRECT_RD_MOD_WT(reg, mask, data) { \
+ mmsch_insert_direct_rd_mod_wt(&direct_rd_mod_wt, \
+ init_table, (reg), \
+ (mask), (data)); \
+ init_table += sizeof(struct mmsch_v1_0_cmd_direct_read_modify_write)/4; \
+ table_size += sizeof(struct mmsch_v1_0_cmd_direct_read_modify_write)/4; \
+}
+
+#define INSERT_DIRECT_WT(reg, value) { \
+ mmsch_insert_direct_wt(&direct_wt, \
+ init_table, (reg), \
+ (value)); \
+ init_table += sizeof(struct mmsch_v1_0_cmd_direct_write)/4; \
+ table_size += sizeof(struct mmsch_v1_0_cmd_direct_write)/4; \
+}
+
+#define INSERT_DIRECT_POLL(reg, mask, wait) { \
+ mmsch_insert_direct_poll(&direct_poll, \
+ init_table, (reg), \
+ (mask), (wait)); \
+ init_table += sizeof(struct mmsch_v1_0_cmd_direct_polling)/4; \
+ table_size += sizeof(struct mmsch_v1_0_cmd_direct_polling)/4; \
+}
+
#endif
diff --git a/drivers/gpu/drm/amd/amdgpu/vce_v4_0.c b/drivers/gpu/drm/amd/amdgpu/vce_v4_0.c
index 1deb546..a3d9d4d 100644
--- a/drivers/gpu/drm/amd/amdgpu/vce_v4_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/vce_v4_0.c
@@ -49,63 +49,6 @@ static void vce_v4_0_mc_resume(struct amdgpu_device *adev);
static void vce_v4_0_set_ring_funcs(struct amdgpu_device *adev);
static void vce_v4_0_set_irq_funcs(struct amdgpu_device *adev);
-static inline void mmsch_insert_direct_wt(struct mmsch_v1_0_cmd_direct_write *direct_wt,
- uint32_t *init_table,
- uint32_t reg_offset,
- uint32_t value)
-{
- direct_wt->cmd_header.reg_offset = reg_offset;
- direct_wt->reg_value = value;
- memcpy((void *)init_table, direct_wt, sizeof(struct mmsch_v1_0_cmd_direct_write));
-}
-
-static inline void mmsch_insert_direct_rd_mod_wt(struct mmsch_v1_0_cmd_direct_read_modify_write *direct_rd_mod_wt,
- uint32_t *init_table,
- uint32_t reg_offset,
- uint32_t mask, uint32_t data)
-{
- direct_rd_mod_wt->cmd_header.reg_offset = reg_offset;
- direct_rd_mod_wt->mask_value = mask;
- direct_rd_mod_wt->write_data = data;
- memcpy((void *)init_table, direct_rd_mod_wt,
- sizeof(struct mmsch_v1_0_cmd_direct_read_modify_write));
-}
-
-static inline void mmsch_insert_direct_poll(struct mmsch_v1_0_cmd_direct_polling *direct_poll,
- uint32_t *init_table,
- uint32_t reg_offset,
- uint32_t mask, uint32_t wait)
-{
- direct_poll->cmd_header.reg_offset = reg_offset;
- direct_poll->mask_value = mask;
- direct_poll->wait_value = wait;
- memcpy((void *)init_table, direct_poll, sizeof(struct mmsch_v1_0_cmd_direct_polling));
-}
-
-#define INSERT_DIRECT_RD_MOD_WT(reg, mask, data) { \
- mmsch_insert_direct_rd_mod_wt(&direct_rd_mod_wt, \
- init_table, (reg), \
- (mask), (data)); \
- init_table += sizeof(struct mmsch_v1_0_cmd_direct_read_modify_write)/4; \
- table_size += sizeof(struct mmsch_v1_0_cmd_direct_read_modify_write)/4; \
-}
-
-#define INSERT_DIRECT_WT(reg, value) { \
- mmsch_insert_direct_wt(&direct_wt, \
- init_table, (reg), \
- (value)); \
- init_table += sizeof(struct mmsch_v1_0_cmd_direct_write)/4; \
- table_size += sizeof(struct mmsch_v1_0_cmd_direct_write)/4; \
-}
-
-#define INSERT_DIRECT_POLL(reg, mask, wait) { \
- mmsch_insert_direct_poll(&direct_poll, \
- init_table, (reg), \
- (mask), (wait)); \
- init_table += sizeof(struct mmsch_v1_0_cmd_direct_polling)/4; \
- table_size += sizeof(struct mmsch_v1_0_cmd_direct_polling)/4; \
-}
-
/**
* vce_v4_0_ring_get_rptr - get read pointer
*
--
2.7.4
* [PATCH 06/11] drm/amdgpu/soc15: enable UVD code path for sriov
From: Xiangliang Yu @ 2017-04-24 6:58 UTC (permalink / raw)
To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Xiangliang Yu, Frank Min
From: Frank Min <Frank.Min@amd.com>
Enable UVD block for SRIOV.
Signed-off-by: Frank Min <Frank.Min@amd.com>
Signed-off-by: Xiangliang Yu <Xiangliang.Yu@amd.com>
---
drivers/gpu/drm/amd/amdgpu/soc15.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/soc15.c b/drivers/gpu/drm/amd/amdgpu/soc15.c
index 6999ac3..4e514b2 100644
--- a/drivers/gpu/drm/amd/amdgpu/soc15.c
+++ b/drivers/gpu/drm/amd/amdgpu/soc15.c
@@ -482,8 +482,7 @@ int soc15_set_ip_blocks(struct amdgpu_device *adev)
#endif
amdgpu_ip_block_add(adev, &gfx_v9_0_ip_block);
amdgpu_ip_block_add(adev, &sdma_v4_0_ip_block);
- if (!amdgpu_sriov_vf(adev))
- amdgpu_ip_block_add(adev, &uvd_v7_0_ip_block);
+ amdgpu_ip_block_add(adev, &uvd_v7_0_ip_block);
amdgpu_ip_block_add(adev, &vce_v4_0_ip_block);
break;
default:
--
2.7.4
* [PATCH 07/11] drm/amdgpu/virt: add two functions for MM table
From: Xiangliang Yu @ 2017-04-24 6:58 UTC (permalink / raw)
To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Xiangliang Yu
Add two functions to allocate and free MM table memory.
Signed-off-by: Xiangliang Yu <Xiangliang.Yu@amd.com>
---
drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c | 46 ++++++++++++++++++++++++++++++++
drivers/gpu/drm/amd/amdgpu/amdgpu_virt.h | 2 ++
2 files changed, 48 insertions(+)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c
index 7fce7b5..1363239 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c
@@ -227,3 +227,49 @@ int amdgpu_virt_reset_gpu(struct amdgpu_device *adev)
return 0;
}
+
+/**
+ * amdgpu_virt_alloc_mm_table() - alloc memory for mm table
+ * @adev: amdgpu device.
+ * MM table is used by UVD and VCE for their initialization.
+ * Return: Zero on success.
+ */
+int amdgpu_virt_alloc_mm_table(struct amdgpu_device *adev)
+{
+ int r;
+
+ if (!amdgpu_sriov_vf(adev) || adev->virt.mm_table.gpu_addr)
+ return 0;
+
+ r = amdgpu_bo_create_kernel(adev, PAGE_SIZE, PAGE_SIZE,
+ AMDGPU_GEM_DOMAIN_VRAM,
+ &adev->virt.mm_table.bo,
+ &adev->virt.mm_table.gpu_addr,
+ (void *)&adev->virt.mm_table.cpu_addr);
+ if (r) {
+ DRM_ERROR("failed to alloc mm table and error = %d.\n", r);
+ return r;
+ }
+
+ memset((void *)adev->virt.mm_table.cpu_addr, 0, PAGE_SIZE);
+ DRM_INFO("MM table gpu addr = 0x%llx, cpu addr = %p.\n",
+ adev->virt.mm_table.gpu_addr,
+ adev->virt.mm_table.cpu_addr);
+ return 0;
+}
+
+/**
+ * amdgpu_virt_free_mm_table() - free mm table memory
+ * @adev: amdgpu device.
+ * Free MM table memory
+ */
+void amdgpu_virt_free_mm_table(struct amdgpu_device *adev)
+{
+ if (!amdgpu_sriov_vf(adev) || !adev->virt.mm_table.gpu_addr)
+ return;
+
+ amdgpu_bo_free_kernel(&adev->virt.mm_table.bo,
+ &adev->virt.mm_table.gpu_addr,
+ (void *)&adev->virt.mm_table.cpu_addr);
+ adev->virt.mm_table.gpu_addr = 0;
+}
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.h
index 1ee0a19..a8ed162 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.h
@@ -98,5 +98,7 @@ int amdgpu_virt_request_full_gpu(struct amdgpu_device *adev, bool init);
int amdgpu_virt_release_full_gpu(struct amdgpu_device *adev, bool init);
int amdgpu_virt_reset_gpu(struct amdgpu_device *adev);
int amdgpu_sriov_gpu_reset(struct amdgpu_device *adev, bool voluntary);
+int amdgpu_virt_alloc_mm_table(struct amdgpu_device *adev);
+void amdgpu_virt_free_mm_table(struct amdgpu_device *adev);
#endif
--
2.7.4
* [PATCH 08/11] drm/amdgpu/vce4: replaced with virt_alloc_mm_table
From: Xiangliang Yu @ 2017-04-24 6:58 UTC (permalink / raw)
To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Xiangliang Yu
Use the virt_alloc_mm_table function to allocate the MM table memory.
Signed-off-by: Xiangliang Yu <Xiangliang.Yu@amd.com>
---
drivers/gpu/drm/amd/amdgpu/vce_v4_0.c | 20 +++-----------------
1 file changed, 3 insertions(+), 17 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/vce_v4_0.c b/drivers/gpu/drm/amd/amdgpu/vce_v4_0.c
index a3d9d4d..a34cdbd 100644
--- a/drivers/gpu/drm/amd/amdgpu/vce_v4_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/vce_v4_0.c
@@ -444,20 +444,9 @@ static int vce_v4_0_sw_init(void *handle)
return r;
}
- if (amdgpu_sriov_vf(adev)) {
- r = amdgpu_bo_create_kernel(adev, PAGE_SIZE, PAGE_SIZE,
- AMDGPU_GEM_DOMAIN_VRAM,
- &adev->virt.mm_table.bo,
- &adev->virt.mm_table.gpu_addr,
- (void *)&adev->virt.mm_table.cpu_addr);
- if (!r) {
- memset((void *)adev->virt.mm_table.cpu_addr, 0, PAGE_SIZE);
- printk("mm table gpu addr = 0x%llx, cpu addr = %p. \n",
- adev->virt.mm_table.gpu_addr,
- adev->virt.mm_table.cpu_addr);
- }
+ r = amdgpu_virt_alloc_mm_table(adev);
+ if (r)
return r;
- }
return r;
}
@@ -468,10 +457,7 @@ static int vce_v4_0_sw_fini(void *handle)
struct amdgpu_device *adev = (struct amdgpu_device *)handle;
/* free MM table */
- if (amdgpu_sriov_vf(adev))
- amdgpu_bo_free_kernel(&adev->virt.mm_table.bo,
- &adev->virt.mm_table.gpu_addr,
- (void *)&adev->virt.mm_table.cpu_addr);
+ amdgpu_virt_free_mm_table(adev);
r = amdgpu_vce_suspend(adev);
if (r)
--
2.7.4
* [PATCH 09/11] drm/amdgpu/uvd7: add sriov uvd initialization sequences
From: Xiangliang Yu @ 2017-04-24 6:58 UTC (permalink / raw)
To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Xiangliang Yu, Frank Min
From: Frank Min <Frank.Min@amd.com>
Add UVD initialization for SRIOV.
Signed-off-by: Frank Min <Frank.Min@amd.com>
Signed-off-by: Xiangliang Yu <Xiangliang.Yu@amd.com>
---
drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c | 246 ++++++++++++++++++++++++++++++++++
1 file changed, 246 insertions(+)
diff --git a/drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c b/drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c
index bf35d56..fb3da07 100644
--- a/drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c
@@ -27,10 +27,14 @@
#include "amdgpu_uvd.h"
#include "soc15d.h"
#include "soc15_common.h"
+#include "mmsch_v1_0.h"
#include "vega10/soc15ip.h"
#include "vega10/UVD/uvd_7_0_offset.h"
#include "vega10/UVD/uvd_7_0_sh_mask.h"
+#include "vega10/VCE/vce_4_0_offset.h"
+#include "vega10/VCE/vce_4_0_default.h"
+#include "vega10/VCE/vce_4_0_sh_mask.h"
#include "vega10/NBIF/nbif_6_1_offset.h"
#include "vega10/HDP/hdp_4_0_offset.h"
#include "vega10/MMHUB/mmhub_1_0_offset.h"
@@ -41,6 +45,7 @@ static void uvd_v7_0_set_enc_ring_funcs(struct amdgpu_device *adev);
static void uvd_v7_0_set_irq_funcs(struct amdgpu_device *adev);
static int uvd_v7_0_start(struct amdgpu_device *adev);
static void uvd_v7_0_stop(struct amdgpu_device *adev);
+static int uvd_v7_0_sriov_start(struct amdgpu_device *adev);
/**
* uvd_v7_0_ring_get_rptr - get read pointer
@@ -618,6 +623,247 @@ static void uvd_v7_0_mc_resume(struct amdgpu_device *adev)
WREG32(SOC15_REG_OFFSET(UVD, 0, mmUVD_GP_SCRATCH4), adev->uvd.max_handles);
}
+static int uvd_v7_0_mmsch_start(struct amdgpu_device *adev,
+ struct amdgpu_mm_table *table)
+{
+ uint32_t data = 0, loop;
+ uint64_t addr = table->gpu_addr;
+ struct mmsch_v1_0_init_header *header = (struct mmsch_v1_0_init_header *)table->cpu_addr;
+ uint32_t size;
+
+ size = header->header_size + header->vce_table_size + header->uvd_table_size;
+
+ /* 1, write to vce_mmsch_vf_ctx_addr_lo/hi register with GPU mc addr of memory descriptor location */
+ WREG32(SOC15_REG_OFFSET(VCE, 0, mmVCE_MMSCH_VF_CTX_ADDR_LO), lower_32_bits(addr));
+ WREG32(SOC15_REG_OFFSET(VCE, 0, mmVCE_MMSCH_VF_CTX_ADDR_HI), upper_32_bits(addr));
+
+ /* 2, update vmid of descriptor */
+ data = RREG32(SOC15_REG_OFFSET(VCE, 0, mmVCE_MMSCH_VF_VMID));
+ data &= ~VCE_MMSCH_VF_VMID__VF_CTX_VMID_MASK;
+ data |= (0 << VCE_MMSCH_VF_VMID__VF_CTX_VMID__SHIFT); /* use domain0 for MM scheduler */
+ WREG32(SOC15_REG_OFFSET(VCE, 0, mmVCE_MMSCH_VF_VMID), data);
+
+ /* 3, notify mmsch about the size of this descriptor */
+ WREG32(SOC15_REG_OFFSET(VCE, 0, mmVCE_MMSCH_VF_CTX_SIZE), size);
+
+ /* 4, set resp to zero */
+ WREG32(SOC15_REG_OFFSET(VCE, 0, mmVCE_MMSCH_VF_MAILBOX_RESP), 0);
+
+ /* 5, kick off the initialization and wait until VCE_MMSCH_VF_MAILBOX_RESP becomes non-zero */
+ WREG32(SOC15_REG_OFFSET(VCE, 0, mmVCE_MMSCH_VF_MAILBOX_HOST), 0x10000001);
+
+ data = RREG32(SOC15_REG_OFFSET(VCE, 0, mmVCE_MMSCH_VF_MAILBOX_RESP));
+ loop = 1000;
+ while ((data & 0x10000002) != 0x10000002) {
+ udelay(10);
+ data = RREG32(SOC15_REG_OFFSET(VCE, 0, mmVCE_MMSCH_VF_MAILBOX_RESP));
+ loop--;
+ if (!loop)
+ break;
+ }
+
+ if (!loop) {
+ dev_err(adev->dev, "failed to init MMSCH, mmVCE_MMSCH_VF_MAILBOX_RESP = %x\n", data);
+ return -EBUSY;
+ }
+
+ return 0;
+}
+
+static int uvd_v7_0_sriov_start(struct amdgpu_device *adev)
+{
+ struct amdgpu_ring *ring;
+ uint32_t offset, size, tmp;
+ uint32_t table_size = 0;
+ struct mmsch_v1_0_cmd_direct_write direct_wt = { {0} };
+ struct mmsch_v1_0_cmd_direct_read_modify_write direct_rd_mod_wt = { {0} };
+ struct mmsch_v1_0_cmd_direct_polling direct_poll = { {0} };
+ //struct mmsch_v1_0_cmd_indirect_write indirect_wt = {{0}};
+ struct mmsch_v1_0_cmd_end end = { {0} };
+ uint32_t *init_table = adev->virt.mm_table.cpu_addr;
+ struct mmsch_v1_0_init_header *header = (struct mmsch_v1_0_init_header *)init_table;
+
+ direct_wt.cmd_header.command_type = MMSCH_COMMAND__DIRECT_REG_WRITE;
+ direct_rd_mod_wt.cmd_header.command_type = MMSCH_COMMAND__DIRECT_REG_READ_MODIFY_WRITE;
+ direct_poll.cmd_header.command_type = MMSCH_COMMAND__DIRECT_REG_POLLING;
+ end.cmd_header.command_type = MMSCH_COMMAND__END;
+
+ if (header->uvd_table_offset == 0 && header->uvd_table_size == 0) {
+ header->version = MMSCH_VERSION;
+ header->header_size = sizeof(struct mmsch_v1_0_init_header) >> 2;
+
+ if (header->vce_table_offset == 0 && header->vce_table_size == 0)
+ header->uvd_table_offset = header->header_size;
+ else
+ header->uvd_table_offset = header->vce_table_size + header->vce_table_offset;
+
+ init_table += header->uvd_table_offset;
+
+ ring = &adev->uvd.ring;
+ size = AMDGPU_GPU_PAGE_ALIGN(adev->uvd.fw->size + 4);
+
+ /* disable clock gating */
+ INSERT_DIRECT_RD_MOD_WT(SOC15_REG_OFFSET(UVD, 0, mmUVD_POWER_STATUS),
+ ~UVD_POWER_STATUS__UVD_PG_MODE_MASK, 0);
+ INSERT_DIRECT_RD_MOD_WT(SOC15_REG_OFFSET(UVD, 0, mmUVD_STATUS),
+ 0xFFFFFFFF, 0x00000004);
+ /* mc resume*/
+ if (adev->firmware.load_type == AMDGPU_FW_LOAD_PSP) {
+ INSERT_DIRECT_WT(SOC15_REG_OFFSET(UVD, 0, mmUVD_LMI_VCPU_CACHE_64BIT_BAR_LOW),
+ lower_32_bits(adev->firmware.ucode[AMDGPU_UCODE_ID_UVD].mc_addr));
+ INSERT_DIRECT_WT(SOC15_REG_OFFSET(UVD, 0, mmUVD_LMI_VCPU_CACHE_64BIT_BAR_HIGH),
+ upper_32_bits(adev->firmware.ucode[AMDGPU_UCODE_ID_UVD].mc_addr));
+ offset = 0;
+ } else {
+ INSERT_DIRECT_WT(SOC15_REG_OFFSET(UVD, 0, mmUVD_LMI_VCPU_CACHE_64BIT_BAR_LOW),
+ lower_32_bits(adev->uvd.gpu_addr));
+ INSERT_DIRECT_WT(SOC15_REG_OFFSET(UVD, 0, mmUVD_LMI_VCPU_CACHE_64BIT_BAR_HIGH),
+ upper_32_bits(adev->uvd.gpu_addr));
+ offset = size;
+ }
+
+ INSERT_DIRECT_WT(SOC15_REG_OFFSET(UVD, 0, mmUVD_VCPU_CACHE_OFFSET0),
+ AMDGPU_UVD_FIRMWARE_OFFSET >> 3);
+ INSERT_DIRECT_WT(SOC15_REG_OFFSET(UVD, 0, mmUVD_VCPU_CACHE_SIZE0), size);
+
+ INSERT_DIRECT_WT(SOC15_REG_OFFSET(UVD, 0, mmUVD_LMI_VCPU_CACHE1_64BIT_BAR_LOW),
+ lower_32_bits(adev->uvd.gpu_addr + offset));
+ INSERT_DIRECT_WT(SOC15_REG_OFFSET(UVD, 0, mmUVD_LMI_VCPU_CACHE1_64BIT_BAR_HIGH),
+ upper_32_bits(adev->uvd.gpu_addr + offset));
+ INSERT_DIRECT_WT(SOC15_REG_OFFSET(UVD, 0, mmUVD_VCPU_CACHE_OFFSET1), (1 << 21));
+ INSERT_DIRECT_WT(SOC15_REG_OFFSET(UVD, 0, mmUVD_VCPU_CACHE_SIZE1), AMDGPU_UVD_HEAP_SIZE);
+
+ INSERT_DIRECT_WT(SOC15_REG_OFFSET(UVD, 0, mmUVD_LMI_VCPU_CACHE2_64BIT_BAR_LOW),
+ lower_32_bits(adev->uvd.gpu_addr + offset + AMDGPU_UVD_HEAP_SIZE));
+ INSERT_DIRECT_WT(SOC15_REG_OFFSET(UVD, 0, mmUVD_LMI_VCPU_CACHE2_64BIT_BAR_HIGH),
+ upper_32_bits(adev->uvd.gpu_addr + offset + AMDGPU_UVD_HEAP_SIZE));
+ INSERT_DIRECT_WT(SOC15_REG_OFFSET(UVD, 0, mmUVD_VCPU_CACHE_OFFSET2), (2 << 21));
+ INSERT_DIRECT_WT(SOC15_REG_OFFSET(UVD, 0, mmUVD_VCPU_CACHE_SIZE2),
+ AMDGPU_UVD_STACK_SIZE + (AMDGPU_UVD_SESSION_SIZE * 40));
+
+ INSERT_DIRECT_WT(SOC15_REG_OFFSET(UVD, 0, mmUVD_UDEC_ADDR_CONFIG),
+ adev->gfx.config.gb_addr_config);
+ INSERT_DIRECT_WT(SOC15_REG_OFFSET(UVD, 0, mmUVD_UDEC_DB_ADDR_CONFIG),
+ adev->gfx.config.gb_addr_config);
+ INSERT_DIRECT_WT(SOC15_REG_OFFSET(UVD, 0, mmUVD_UDEC_DBW_ADDR_CONFIG),
+ adev->gfx.config.gb_addr_config);
+ INSERT_DIRECT_WT(SOC15_REG_OFFSET(UVD, 0, mmUVD_GP_SCRATCH4), adev->uvd.max_handles);
+ /* mc resume end */
+
+ /* disable clock gating */
+ INSERT_DIRECT_RD_MOD_WT(SOC15_REG_OFFSET(UVD, 0, mmUVD_CGC_CTRL),
+ ~UVD_CGC_CTRL__DYN_CLOCK_MODE_MASK, 0);
+
+ /* disable interrupt */
+ INSERT_DIRECT_RD_MOD_WT(SOC15_REG_OFFSET(UVD, 0, mmUVD_MASTINT_EN),
+ ~UVD_MASTINT_EN__VCPU_EN_MASK, 0);
+
+ /* stall UMC and register bus before resetting VCPU */
+ INSERT_DIRECT_RD_MOD_WT(SOC15_REG_OFFSET(UVD, 0, mmUVD_LMI_CTRL2),
+ ~UVD_LMI_CTRL2__STALL_ARB_UMC_MASK,
+ UVD_LMI_CTRL2__STALL_ARB_UMC_MASK);
+
+ /* put LMI, VCPU, RBC etc... into reset */
+ INSERT_DIRECT_WT(SOC15_REG_OFFSET(UVD, 0, mmUVD_SOFT_RESET),
+ (uint32_t)(UVD_SOFT_RESET__LMI_SOFT_RESET_MASK |
+ UVD_SOFT_RESET__VCPU_SOFT_RESET_MASK |
+ UVD_SOFT_RESET__LBSI_SOFT_RESET_MASK |
+ UVD_SOFT_RESET__RBC_SOFT_RESET_MASK |
+ UVD_SOFT_RESET__CSM_SOFT_RESET_MASK |
+ UVD_SOFT_RESET__CXW_SOFT_RESET_MASK |
+ UVD_SOFT_RESET__TAP_SOFT_RESET_MASK |
+ UVD_SOFT_RESET__LMI_UMC_SOFT_RESET_MASK));
+
+ /* initialize UVD memory controller */
+ INSERT_DIRECT_WT(SOC15_REG_OFFSET(UVD, 0, mmUVD_LMI_CTRL),
+ (uint32_t)((0x40 << UVD_LMI_CTRL__WRITE_CLEAN_TIMER__SHIFT) |
+ UVD_LMI_CTRL__WRITE_CLEAN_TIMER_EN_MASK |
+ UVD_LMI_CTRL__DATA_COHERENCY_EN_MASK |
+ UVD_LMI_CTRL__VCPU_DATA_COHERENCY_EN_MASK |
+ UVD_LMI_CTRL__REQ_MODE_MASK |
+ 0x00100000L));
+
+ /* disable byte swapping */
+ INSERT_DIRECT_WT(SOC15_REG_OFFSET(UVD, 0, mmUVD_LMI_SWAP_CNTL), 0);
+ INSERT_DIRECT_WT(SOC15_REG_OFFSET(UVD, 0, mmUVD_MP_SWAP_CNTL), 0);
+
+ INSERT_DIRECT_WT(SOC15_REG_OFFSET(UVD, 0, mmUVD_MPC_SET_MUXA0), 0x40c2040);
+ INSERT_DIRECT_WT(SOC15_REG_OFFSET(UVD, 0, mmUVD_MPC_SET_MUXA1), 0x0);
+ INSERT_DIRECT_WT(SOC15_REG_OFFSET(UVD, 0, mmUVD_MPC_SET_MUXB0), 0x40c2040);
+ INSERT_DIRECT_WT(SOC15_REG_OFFSET(UVD, 0, mmUVD_MPC_SET_MUXB1), 0x0);
+ INSERT_DIRECT_WT(SOC15_REG_OFFSET(UVD, 0, mmUVD_MPC_SET_ALU), 0);
+ INSERT_DIRECT_WT(SOC15_REG_OFFSET(UVD, 0, mmUVD_MPC_SET_MUX), 0x88);
+
+ /* take all subblocks out of reset, except VCPU */
+ INSERT_DIRECT_WT(SOC15_REG_OFFSET(UVD, 0, mmUVD_SOFT_RESET),
+ UVD_SOFT_RESET__VCPU_SOFT_RESET_MASK);
+
+ /* enable VCPU clock */
+ INSERT_DIRECT_WT(SOC15_REG_OFFSET(UVD, 0, mmUVD_VCPU_CNTL),
+ UVD_VCPU_CNTL__CLK_EN_MASK);
+
+ /* enable UMC */
+ INSERT_DIRECT_RD_MOD_WT(SOC15_REG_OFFSET(UVD, 0, mmUVD_LMI_CTRL2),
+ ~UVD_LMI_CTRL2__STALL_ARB_UMC_MASK, 0);
+
+ /* boot up the VCPU */
+ INSERT_DIRECT_WT(SOC15_REG_OFFSET(UVD, 0, mmUVD_SOFT_RESET), 0);
+
+ INSERT_DIRECT_POLL(SOC15_REG_OFFSET(UVD, 0, mmUVD_STATUS), 0x02, 0x02);
+
+ /* enable master interrupt */
+ INSERT_DIRECT_RD_MOD_WT(SOC15_REG_OFFSET(UVD, 0, mmUVD_MASTINT_EN),
+ ~(UVD_MASTINT_EN__VCPU_EN_MASK|UVD_MASTINT_EN__SYS_EN_MASK),
+ (UVD_MASTINT_EN__VCPU_EN_MASK|UVD_MASTINT_EN__SYS_EN_MASK));
+
+ /* clear bit 4 of UVD_STATUS */
+ INSERT_DIRECT_RD_MOD_WT(SOC15_REG_OFFSET(UVD, 0, mmUVD_STATUS),
+ ~(2 << UVD_STATUS__VCPU_REPORT__SHIFT), 0);
+
+ /* force RBC into idle state */
+ size = order_base_2(ring->ring_size);
+ tmp = REG_SET_FIELD(0, UVD_RBC_RB_CNTL, RB_BUFSZ, size);
+ tmp = REG_SET_FIELD(tmp, UVD_RBC_RB_CNTL, RB_BLKSZ, 1);
+ tmp = REG_SET_FIELD(tmp, UVD_RBC_RB_CNTL, RB_NO_FETCH, 1);
+ tmp = REG_SET_FIELD(tmp, UVD_RBC_RB_CNTL, RB_WPTR_POLL_EN, 0);
+ tmp = REG_SET_FIELD(tmp, UVD_RBC_RB_CNTL, RB_NO_UPDATE, 1);
+ tmp = REG_SET_FIELD(tmp, UVD_RBC_RB_CNTL, RB_RPTR_WR_EN, 1);
+ INSERT_DIRECT_WT(SOC15_REG_OFFSET(UVD, 0, mmUVD_RBC_RB_CNTL), tmp);
+
+ /* set the write pointer delay */
+ INSERT_DIRECT_WT(SOC15_REG_OFFSET(UVD, 0, mmUVD_RBC_RB_WPTR_CNTL), 0);
+
+ /* set the wb address */
+ INSERT_DIRECT_WT(SOC15_REG_OFFSET(UVD, 0, mmUVD_RBC_RB_RPTR_ADDR),
+ (upper_32_bits(ring->gpu_addr) >> 2));
+
+ /* program the RB_BASE for the ring buffer */
+ INSERT_DIRECT_WT(SOC15_REG_OFFSET(UVD, 0, mmUVD_LMI_RBC_RB_64BIT_BAR_LOW),
+ lower_32_bits(ring->gpu_addr));
+ INSERT_DIRECT_WT(SOC15_REG_OFFSET(UVD, 0, mmUVD_LMI_RBC_RB_64BIT_BAR_HIGH),
+ upper_32_bits(ring->gpu_addr));
+
+ ring->wptr = 0;
+ ring = &adev->uvd.ring_enc[0];
+ INSERT_DIRECT_WT(SOC15_REG_OFFSET(UVD, 0, mmUVD_RB_BASE_LO), ring->gpu_addr);
+ INSERT_DIRECT_WT(SOC15_REG_OFFSET(UVD, 0, mmUVD_RB_BASE_HI), upper_32_bits(ring->gpu_addr));
+ INSERT_DIRECT_WT(SOC15_REG_OFFSET(UVD, 0, mmUVD_RB_SIZE), ring->ring_size / 4);
+
+ ring = &adev->uvd.ring_enc[1];
+ INSERT_DIRECT_WT(SOC15_REG_OFFSET(UVD, 0, mmUVD_RB_BASE_LO2), ring->gpu_addr);
+ INSERT_DIRECT_WT(SOC15_REG_OFFSET(UVD, 0, mmUVD_RB_BASE_HI2), upper_32_bits(ring->gpu_addr));
+ INSERT_DIRECT_WT(SOC15_REG_OFFSET(UVD, 0, mmUVD_RB_SIZE2), ring->ring_size / 4);
+
+ /* add end packet */
+ memcpy((void *)init_table, &end, sizeof(struct mmsch_v1_0_cmd_end));
+ table_size += sizeof(struct mmsch_v1_0_cmd_end) / 4;
+ header->uvd_table_size = table_size;
+
+ return uvd_v7_0_mmsch_start(adev, &adev->virt.mm_table);
+ }
+ return -EINVAL; /* already initialized? */
+}
+
/**
* uvd_v7_0_start - start UVD block
*
--
2.7.4
_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx
* [PATCH 10/11] drm/amdgpu/uvd7: add uvd doorbell initialization for sriov
[not found] ` <1493017089-23101-1-git-send-email-Xiangliang.Yu-5C7GfCeVMHo@public.gmane.org>
` (8 preceding siblings ...)
2017-04-24 6:58 ` [PATCH 09/11] drm/amdgpu/uvd7: add sriov uvd initialization sequences Xiangliang Yu
@ 2017-04-24 6:58 ` Xiangliang Yu
2017-04-24 6:58 ` [PATCH 11/11] drm/amdgpu/uvd7: add UVD hw init sequences " Xiangliang Yu
10 siblings, 0 replies; 22+ messages in thread
From: Xiangliang Yu @ 2017-04-24 6:58 UTC (permalink / raw)
To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Xiangliang Yu, Frank Min
From: Frank Min <Frank.Min@amd.com>
Add UVD doorbell initialization for SRIOV.
Signed-off-by: Frank Min <Frank.Min@amd.com>
Signed-off-by: Xiangliang Yu <Xiangliang.Yu@amd.com>
---
drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c | 19 +++++++++++++++++++
1 file changed, 19 insertions(+)
diff --git a/drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c b/drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c
index fb3da07..a294f05 100644
--- a/drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c
@@ -103,6 +103,9 @@ static uint64_t uvd_v7_0_enc_ring_get_wptr(struct amdgpu_ring *ring)
{
struct amdgpu_device *adev = ring->adev;
+ if (ring->use_doorbell)
+ return adev->wb.wb[ring->wptr_offs];
+
if (ring == &adev->uvd.ring_enc[0])
return RREG32(SOC15_REG_OFFSET(UVD, 0, mmUVD_RB_WPTR));
else
@@ -134,6 +137,13 @@ static void uvd_v7_0_enc_ring_set_wptr(struct amdgpu_ring *ring)
{
struct amdgpu_device *adev = ring->adev;
+ if (ring->use_doorbell) {
+ /* XXX check if swapping is necessary on BE */
+ adev->wb.wb[ring->wptr_offs] = lower_32_bits(ring->wptr);
+ WDOORBELL32(ring->doorbell_index, lower_32_bits(ring->wptr));
+ return;
+ }
+
if (ring == &adev->uvd.ring_enc[0])
WREG32(SOC15_REG_OFFSET(UVD, 0, mmUVD_RB_WPTR),
lower_32_bits(ring->wptr));
@@ -421,6 +431,15 @@ static int uvd_v7_0_sw_init(void *handle)
for (i = 0; i < adev->uvd.num_enc_rings; ++i) {
ring = &adev->uvd.ring_enc[i];
sprintf(ring->name, "uvd_enc%d", i);
+ if (amdgpu_sriov_vf(adev)) {
+ ring->use_doorbell = true;
+ if (i == 0)
+ ring->doorbell_index = AMDGPU_DOORBELL64_UVD_RING0_1 * 2;
+ else if (i == 1)
+ ring->doorbell_index = AMDGPU_DOORBELL64_UVD_RING2_3 * 2;
+ else
+ ring->doorbell_index = AMDGPU_DOORBELL64_UVD_RING4_5 * 2;
+ }
r = amdgpu_ring_init(adev, ring, 512, &adev->uvd.irq, 0);
if (r)
return r;
--
2.7.4
* [PATCH 11/11] drm/amdgpu/uvd7: add UVD hw init sequences for sriov
[not found] ` <1493017089-23101-1-git-send-email-Xiangliang.Yu-5C7GfCeVMHo@public.gmane.org>
` (9 preceding siblings ...)
2017-04-24 6:58 ` [PATCH 10/11] drm/amdgpu/uvd7: add uvd doorbell initialization for sriov Xiangliang Yu
@ 2017-04-24 6:58 ` Xiangliang Yu
[not found] ` <1493017089-23101-12-git-send-email-Xiangliang.Yu-5C7GfCeVMHo@public.gmane.org>
10 siblings, 1 reply; 22+ messages in thread
From: Xiangliang Yu @ 2017-04-24 6:58 UTC (permalink / raw)
To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Xiangliang Yu, Frank Min
From: Frank Min <Frank.Min@amd.com>
Add UVD hw init sequences for SRIOV.
Signed-off-by: Frank Min <Frank.Min@amd.com>
Signed-off-by: Xiangliang Yu <Xiangliang.Yu@amd.com>
---
drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c | 92 ++++++++++++++++++++---------------
1 file changed, 54 insertions(+), 38 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c b/drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c
index a294f05..e0b7ded 100644
--- a/drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c
@@ -368,7 +368,10 @@ static int uvd_v7_0_early_init(void *handle)
{
struct amdgpu_device *adev = (struct amdgpu_device *)handle;
- adev->uvd.num_enc_rings = 2;
+ if (amdgpu_sriov_vf(adev))
+ adev->uvd.num_enc_rings = 1;
+ else
+ adev->uvd.num_enc_rings = 2;
uvd_v7_0_set_ring_funcs(adev);
uvd_v7_0_set_enc_ring_funcs(adev);
uvd_v7_0_set_irq_funcs(adev);
@@ -421,12 +424,14 @@ static int uvd_v7_0_sw_init(void *handle)
r = amdgpu_uvd_resume(adev);
if (r)
return r;
+ if (!amdgpu_sriov_vf(adev)) {
+ ring = &adev->uvd.ring;
+ sprintf(ring->name, "uvd");
+ r = amdgpu_ring_init(adev, ring, 512, &adev->uvd.irq, 0);
+ if (r)
+ return r;
+ }
- ring = &adev->uvd.ring;
- sprintf(ring->name, "uvd");
- r = amdgpu_ring_init(adev, ring, 512, &adev->uvd.irq, 0);
- if (r)
- return r;
for (i = 0; i < adev->uvd.num_enc_rings; ++i) {
ring = &adev->uvd.ring_enc[i];
@@ -445,6 +450,10 @@ static int uvd_v7_0_sw_init(void *handle)
return r;
}
+ r = amdgpu_virt_alloc_mm_table(adev);
+ if (r)
+ return r;
+
return r;
}
@@ -453,6 +462,8 @@ static int uvd_v7_0_sw_fini(void *handle)
int i, r;
struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+ amdgpu_virt_free_mm_table(adev);
+
r = amdgpu_uvd_suspend(adev);
if (r)
return r;
@@ -479,48 +490,53 @@ static int uvd_v7_0_hw_init(void *handle)
uint32_t tmp;
int i, r;
- r = uvd_v7_0_start(adev);
+ if (amdgpu_sriov_vf(adev))
+ r = uvd_v7_0_sriov_start(adev);
+ else
+ r = uvd_v7_0_start(adev);
if (r)
goto done;
- ring->ready = true;
- r = amdgpu_ring_test_ring(ring);
- if (r) {
- ring->ready = false;
- goto done;
- }
+ if (!amdgpu_sriov_vf(adev)) {
+ ring->ready = true;
+ r = amdgpu_ring_test_ring(ring);
+ if (r) {
+ ring->ready = false;
+ goto done;
+ }
- r = amdgpu_ring_alloc(ring, 10);
- if (r) {
- DRM_ERROR("amdgpu: ring failed to lock UVD ring (%d).\n", r);
- goto done;
- }
+ r = amdgpu_ring_alloc(ring, 10);
+ if (r) {
+ DRM_ERROR("amdgpu: ring failed to lock UVD ring (%d).\n", r);
+ goto done;
+ }
- tmp = PACKET0(SOC15_REG_OFFSET(UVD, 0,
- mmUVD_SEMA_WAIT_FAULT_TIMEOUT_CNTL), 0);
- amdgpu_ring_write(ring, tmp);
- amdgpu_ring_write(ring, 0xFFFFF);
+ tmp = PACKET0(SOC15_REG_OFFSET(UVD, 0,
+ mmUVD_SEMA_WAIT_FAULT_TIMEOUT_CNTL), 0);
+ amdgpu_ring_write(ring, tmp);
+ amdgpu_ring_write(ring, 0xFFFFF);
- tmp = PACKET0(SOC15_REG_OFFSET(UVD, 0,
- mmUVD_SEMA_WAIT_INCOMPLETE_TIMEOUT_CNTL), 0);
- amdgpu_ring_write(ring, tmp);
- amdgpu_ring_write(ring, 0xFFFFF);
+ tmp = PACKET0(SOC15_REG_OFFSET(UVD, 0,
+ mmUVD_SEMA_WAIT_INCOMPLETE_TIMEOUT_CNTL), 0);
+ amdgpu_ring_write(ring, tmp);
+ amdgpu_ring_write(ring, 0xFFFFF);
- tmp = PACKET0(SOC15_REG_OFFSET(UVD, 0,
- mmUVD_SEMA_SIGNAL_INCOMPLETE_TIMEOUT_CNTL), 0);
- amdgpu_ring_write(ring, tmp);
- amdgpu_ring_write(ring, 0xFFFFF);
+ tmp = PACKET0(SOC15_REG_OFFSET(UVD, 0,
+ mmUVD_SEMA_SIGNAL_INCOMPLETE_TIMEOUT_CNTL), 0);
+ amdgpu_ring_write(ring, tmp);
+ amdgpu_ring_write(ring, 0xFFFFF);
- /* Clear timeout status bits */
- amdgpu_ring_write(ring, PACKET0(SOC15_REG_OFFSET(UVD, 0,
- mmUVD_SEMA_TIMEOUT_STATUS), 0));
- amdgpu_ring_write(ring, 0x8);
+ /* Clear timeout status bits */
+ amdgpu_ring_write(ring, PACKET0(SOC15_REG_OFFSET(UVD, 0,
+ mmUVD_SEMA_TIMEOUT_STATUS), 0));
+ amdgpu_ring_write(ring, 0x8);
- amdgpu_ring_write(ring, PACKET0(SOC15_REG_OFFSET(UVD, 0,
- mmUVD_SEMA_CNTL), 0));
- amdgpu_ring_write(ring, 3);
+ amdgpu_ring_write(ring, PACKET0(SOC15_REG_OFFSET(UVD, 0,
+ mmUVD_SEMA_CNTL), 0));
+ amdgpu_ring_write(ring, 3);
- amdgpu_ring_commit(ring);
+ amdgpu_ring_commit(ring);
+ }
for (i = 0; i < adev->uvd.num_enc_rings; ++i) {
ring = &adev->uvd.ring_enc[i];
--
2.7.4
* RE: [PATCH 01/11] drm/amdgpu/virt: bypass cg and pg setting for SRIOV
[not found] ` <1493017089-23101-2-git-send-email-Xiangliang.Yu-5C7GfCeVMHo@public.gmane.org>
@ 2017-04-24 15:45 ` Deucher, Alexander
0 siblings, 0 replies; 22+ messages in thread
From: Deucher, Alexander @ 2017-04-24 15:45 UTC (permalink / raw)
To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Yu, Xiangliang, Min, Frank
> -----Original Message-----
> From: amd-gfx [mailto:amd-gfx-bounces@lists.freedesktop.org] On Behalf
> Of Xiangliang Yu
> Sent: Monday, April 24, 2017 2:58 AM
> To: amd-gfx@lists.freedesktop.org
> Cc: Min, Frank; Yu, Xiangliang
> Subject: [PATCH 01/11] drm/amdgpu/virt: bypass cg and pg setting for SRIOV
>
> The GPU hypervisor covers all CG and PG settings, so the guest
> doesn't need to do anything. Bypass them.
>
> Signed-off-by: Frank Min <Frank.Min@amd.com>
> Signed-off-by: Xiangliang Yu <Xiangliang.Yu@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
> ---
> drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c | 2 ++
> 1 file changed, 2 insertions(+)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c
> index be43823..7fce7b5 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c
> @@ -105,6 +105,8 @@ void amdgpu_virt_init_setting(struct amdgpu_device
> *adev)
> /* enable virtual display */
> adev->mode_info.num_crtc = 1;
> adev->enable_virtual_display = true;
> + adev->cg_flags = 0;
> + adev->pg_flags = 0;
>
> mutex_init(&adev->virt.lock_kiq);
> mutex_init(&adev->virt.lock_reset);
> --
> 2.7.4
>
* RE: [PATCH 02/11] drm/amdgpu/virt: change the place of virt_init_setting
[not found] ` <1493017089-23101-3-git-send-email-Xiangliang.Yu-5C7GfCeVMHo@public.gmane.org>
@ 2017-04-24 15:45 ` Deucher, Alexander
0 siblings, 0 replies; 22+ messages in thread
From: Deucher, Alexander @ 2017-04-24 15:45 UTC (permalink / raw)
To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Yu, Xiangliang
> -----Original Message-----
> From: amd-gfx [mailto:amd-gfx-bounces@lists.freedesktop.org] On Behalf
> Of Xiangliang Yu
> Sent: Monday, April 24, 2017 2:58 AM
> To: amd-gfx@lists.freedesktop.org
> Cc: Yu, Xiangliang
> Subject: [PATCH 02/11] drm/amdgpu/virt: change the place of
> virt_init_setting
>
> Move the virt_init_setting function so that it can cover the cg
> and pg flags configuration.
>
> Signed-off-by: Xiangliang Yu <Xiangliang.Yu@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
> ---
> drivers/gpu/drm/amd/amdgpu/soc15.c | 10 +++++-----
> drivers/gpu/drm/amd/amdgpu/vi.c | 10 +++++-----
> 2 files changed, 10 insertions(+), 10 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/soc15.c
> b/drivers/gpu/drm/amd/amdgpu/soc15.c
> index 2c05dab..6999ac3 100644
> --- a/drivers/gpu/drm/amd/amdgpu/soc15.c
> +++ b/drivers/gpu/drm/amd/amdgpu/soc15.c
> @@ -538,11 +538,6 @@ static int soc15_common_early_init(void *handle)
> (amdgpu_ip_block_mask & (1 <<
> AMD_IP_BLOCK_TYPE_PSP)))
> psp_enabled = true;
>
> - if (amdgpu_sriov_vf(adev)) {
> - amdgpu_virt_init_setting(adev);
> - xgpu_ai_mailbox_set_irq_funcs(adev);
> - }
> -
> /*
> * nbio need be used for both sdma and gfx9, but only
> * initializes once
> @@ -586,6 +581,11 @@ static int soc15_common_early_init(void *handle)
> return -EINVAL;
> }
>
> + if (amdgpu_sriov_vf(adev)) {
> + amdgpu_virt_init_setting(adev);
> + xgpu_ai_mailbox_set_irq_funcs(adev);
> + }
> +
> adev->firmware.load_type = amdgpu_ucode_get_load_type(adev,
> amdgpu_fw_load_type);
>
> amdgpu_get_pcie_info(adev);
> diff --git a/drivers/gpu/drm/amd/amdgpu/vi.c
> b/drivers/gpu/drm/amd/amdgpu/vi.c
> index 505c17a..48fb373 100644
> --- a/drivers/gpu/drm/amd/amdgpu/vi.c
> +++ b/drivers/gpu/drm/amd/amdgpu/vi.c
> @@ -895,11 +895,6 @@ static int vi_common_early_init(void *handle)
> (amdgpu_ip_block_mask & (1 <<
> AMD_IP_BLOCK_TYPE_SMC)))
> smc_enabled = true;
>
> - if (amdgpu_sriov_vf(adev)) {
> - amdgpu_virt_init_setting(adev);
> - xgpu_vi_mailbox_set_irq_funcs(adev);
> - }
> -
> adev->rev_id = vi_get_rev_id(adev);
> adev->external_rev_id = 0xFF;
> switch (adev->asic_type) {
> @@ -1072,6 +1067,11 @@ static int vi_common_early_init(void *handle)
> return -EINVAL;
> }
>
> + if (amdgpu_sriov_vf(adev)) {
> + amdgpu_virt_init_setting(adev);
> + xgpu_vi_mailbox_set_irq_funcs(adev);
> + }
> +
> /* vi use smc load by default */
> adev->firmware.load_type = amdgpu_ucode_get_load_type(adev,
> amdgpu_fw_load_type);
>
> --
> 2.7.4
>
* RE: [PATCH 03/11] drm/amdgpu/psp: skip loading SDMA/RLCG under SRIOV VF
[not found] ` <1493017089-23101-4-git-send-email-Xiangliang.Yu-5C7GfCeVMHo@public.gmane.org>
@ 2017-04-24 15:46 ` Deucher, Alexander
0 siblings, 0 replies; 22+ messages in thread
From: Deucher, Alexander @ 2017-04-24 15:46 UTC (permalink / raw)
To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
Cc: Wang, Daniel(Xiaowei), Yu, Xiangliang
> -----Original Message-----
> From: amd-gfx [mailto:amd-gfx-bounces@lists.freedesktop.org] On Behalf
> Of Xiangliang Yu
> Sent: Monday, April 24, 2017 2:58 AM
> To: amd-gfx@lists.freedesktop.org
> Cc: Wang, Daniel(Xiaowei); Yu, Xiangliang
> Subject: [PATCH 03/11] drm/amdgpu/psp: skip loading SDMA/RLCG under
> SRIOV VF
>
> From: Daniel Wang <Daniel.Wang2@amd.com>
>
> The GPU hypervisor now loads the SDMA and RLCG ucode, so skip
> loading it in the guest.
>
> Signed-off-by: Daniel Wang <Daniel.Wang2@amd.com>
> Signed-off-by: Xiangliang Yu <Xiangliang.Yu@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
> ---
> drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c | 6 ++++++
> 1 file changed, 6 insertions(+)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
> index 1e380fe..ac5e92e 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
> @@ -289,6 +289,12 @@ static int psp_np_fw_load(struct psp_context *psp)
> if (ucode->ucode_id == AMDGPU_UCODE_ID_SMC &&
> psp_smu_reload_quirk(psp))
> continue;
> + if (amdgpu_sriov_vf(adev) &&
> + (ucode->ucode_id == AMDGPU_UCODE_ID_SDMA0
> + || ucode->ucode_id == AMDGPU_UCODE_ID_SDMA1
> + || ucode->ucode_id == AMDGPU_UCODE_ID_RLC_G))
> + /* skip ucode loading in SRIOV VF */
> + continue;
>
> ret = psp_prep_cmd_buf(ucode, psp->cmd);
> if (ret)
> --
> 2.7.4
>
* RE: [PATCH 04/11] drm/amdgpu/vce4: fix a PSP loading VCE issue
[not found] ` <1493017089-23101-5-git-send-email-Xiangliang.Yu-5C7GfCeVMHo@public.gmane.org>
@ 2017-04-24 15:49 ` Deucher, Alexander
0 siblings, 0 replies; 22+ messages in thread
From: Deucher, Alexander @ 2017-04-24 15:49 UTC (permalink / raw)
To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW
Cc: Wang, Daniel(Xiaowei), Yu, Xiangliang
> -----Original Message-----
> From: amd-gfx [mailto:amd-gfx-bounces@lists.freedesktop.org] On Behalf
> Of Xiangliang Yu
> Sent: Monday, April 24, 2017 2:58 AM
> To: amd-gfx@lists.freedesktop.org
> Cc: Wang, Daniel(Xiaowei); Yu, Xiangliang
> Subject: [PATCH 04/11] drm/amdgpu/vce4: fix a PSP loading VCE issue
>
> From: Daniel Wang <Daniel.Wang2@amd.com>
>
> Fix the PSP VCE firmware loading issue for SRIOV.
>
> Signed-off-by: Daniel Wang <Daniel.Wang2@amd.com>
> Signed-off-by: Xiangliang Yu <Xiangliang.Yu@amd.com>
Acked-by: Alex Deucher <alexander.deucher@amd.com>
> ---
> drivers/gpu/drm/amd/amdgpu/vce_v4_0.c | 18 +++++++++++++++---
> 1 file changed, 15 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/vce_v4_0.c
> b/drivers/gpu/drm/amd/amdgpu/vce_v4_0.c
> index 76fc8ed..1deb546 100644
> --- a/drivers/gpu/drm/amd/amdgpu/vce_v4_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/vce_v4_0.c
> @@ -291,9 +291,21 @@ static int vce_v4_0_sriov_start(struct
> amdgpu_device *adev)
> INSERT_DIRECT_WT(SOC15_REG_OFFSET(VCE, 0,
> mmVCE_LMI_SWAP_CNTL1), 0);
> INSERT_DIRECT_WT(SOC15_REG_OFFSET(VCE, 0,
> mmVCE_LMI_VM_CTRL), 0);
>
> - INSERT_DIRECT_WT(SOC15_REG_OFFSET(VCE, 0,
> mmVCE_LMI_VCPU_CACHE_40BIT_BAR0), adev->vce.gpu_addr >> 8);
> - INSERT_DIRECT_WT(SOC15_REG_OFFSET(VCE, 0,
> mmVCE_LMI_VCPU_CACHE_40BIT_BAR1), adev->vce.gpu_addr >> 8);
> - INSERT_DIRECT_WT(SOC15_REG_OFFSET(VCE, 0,
> mmVCE_LMI_VCPU_CACHE_40BIT_BAR2), adev->vce.gpu_addr >> 8);
> + if (adev->firmware.load_type == AMDGPU_FW_LOAD_PSP)
> {
> + INSERT_DIRECT_WT(SOC15_REG_OFFSET(VCE, 0,
> mmVCE_LMI_VCPU_CACHE_40BIT_BAR0),
> + adev-
> >firmware.ucode[AMDGPU_UCODE_ID_VCE].mc_addr >> 8);
> + INSERT_DIRECT_WT(SOC15_REG_OFFSET(VCE, 0,
> mmVCE_LMI_VCPU_CACHE_40BIT_BAR1),
> + adev-
> >firmware.ucode[AMDGPU_UCODE_ID_VCE].mc_addr >> 8);
> + INSERT_DIRECT_WT(SOC15_REG_OFFSET(VCE, 0,
> mmVCE_LMI_VCPU_CACHE_40BIT_BAR2),
> + adev-
> >firmware.ucode[AMDGPU_UCODE_ID_VCE].mc_addr >> 8);
> + } else {
> + INSERT_DIRECT_WT(SOC15_REG_OFFSET(VCE, 0,
> mmVCE_LMI_VCPU_CACHE_40BIT_BAR0),
> + adev->vce.gpu_addr >> 8);
> + INSERT_DIRECT_WT(SOC15_REG_OFFSET(VCE, 0,
> mmVCE_LMI_VCPU_CACHE_40BIT_BAR1),
> + adev->vce.gpu_addr >> 8);
> + INSERT_DIRECT_WT(SOC15_REG_OFFSET(VCE, 0,
> mmVCE_LMI_VCPU_CACHE_40BIT_BAR2),
> + adev->vce.gpu_addr >> 8);
> + }
>
> offset = AMDGPU_VCE_FIRMWARE_OFFSET;
> size = VCE_V4_0_FW_SIZE;
> --
> 2.7.4
>
* RE: [PATCH 05/11] drm/amdgpu/vce4: move mm table construction functions into mmsch header file
[not found] ` <1493017089-23101-6-git-send-email-Xiangliang.Yu-5C7GfCeVMHo@public.gmane.org>
@ 2017-04-24 15:53 ` Deucher, Alexander
0 siblings, 0 replies; 22+ messages in thread
From: Deucher, Alexander @ 2017-04-24 15:53 UTC (permalink / raw)
To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Min, Frank, Yu, Xiangliang
> -----Original Message-----
> From: amd-gfx [mailto:amd-gfx-bounces@lists.freedesktop.org] On Behalf
> Of Xiangliang Yu
> Sent: Monday, April 24, 2017 2:58 AM
> To: amd-gfx@lists.freedesktop.org
> Cc: Yu, Xiangliang; Min, Frank
> Subject: [PATCH 05/11] drm/amdgpu/vce4: move mm table construction
> functions into mmsch header file
>
> From: Frank Min <Frank.Min@amd.com>
>
> Move mm table construction functions into mmsch header file so that
> UVD can reuse them.
>
> Signed-off-by: Frank Min <Frank.Min@amd.com>
> Signed-off-by: Xiangliang Yu <Xiangliang.Yu@amd.com>
> ---
> drivers/gpu/drm/amd/amdgpu/mmsch_v1_0.h | 57
> +++++++++++++++++++++++++++++++++
> drivers/gpu/drm/amd/amdgpu/vce_v4_0.c | 57 ---------------------------------
> 2 files changed, 57 insertions(+), 57 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/mmsch_v1_0.h
> b/drivers/gpu/drm/amd/amdgpu/mmsch_v1_0.h
> index 5f0fc8b..f048f91 100644
> --- a/drivers/gpu/drm/amd/amdgpu/mmsch_v1_0.h
> +++ b/drivers/gpu/drm/amd/amdgpu/mmsch_v1_0.h
> @@ -84,4 +84,61 @@ struct mmsch_v1_0_cmd_indirect_write {
> uint32_t reg_value;
> };
>
> +static inline void mmsch_insert_direct_wt(struct
Please change the names of the exported functions to have the v1_0 in the name. E.g.,
mmsch_v1_0_insert_direct_wt()
With that fixed:
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
> mmsch_v1_0_cmd_direct_write *direct_wt,
> + uint32_t *init_table,
> + uint32_t reg_offset,
> + uint32_t value)
> +{
> + direct_wt->cmd_header.reg_offset = reg_offset;
> + direct_wt->reg_value = value;
> + memcpy((void *)init_table, direct_wt, sizeof(struct
> mmsch_v1_0_cmd_direct_write));
> +}
> +
> +static inline void mmsch_insert_direct_rd_mod_wt(struct
> mmsch_v1_0_cmd_direct_read_modify_write *direct_rd_mod_wt,
> + uint32_t *init_table,
> + uint32_t reg_offset,
> + uint32_t mask, uint32_t data)
> +{
> + direct_rd_mod_wt->cmd_header.reg_offset = reg_offset;
> + direct_rd_mod_wt->mask_value = mask;
> + direct_rd_mod_wt->write_data = data;
> + memcpy((void *)init_table, direct_rd_mod_wt,
> + sizeof(struct mmsch_v1_0_cmd_direct_read_modify_write));
> +}
> +
> +static inline void mmsch_insert_direct_poll(struct
> mmsch_v1_0_cmd_direct_polling *direct_poll,
> + uint32_t *init_table,
> + uint32_t reg_offset,
> + uint32_t mask, uint32_t wait)
> +{
> + direct_poll->cmd_header.reg_offset = reg_offset;
> + direct_poll->mask_value = mask;
> + direct_poll->wait_value = wait;
> + memcpy((void *)init_table, direct_poll, sizeof(struct
> mmsch_v1_0_cmd_direct_polling));
> +}
> +
> +#define INSERT_DIRECT_RD_MOD_WT(reg, mask, data) { \
> + mmsch_insert_direct_rd_mod_wt(&direct_rd_mod_wt, \
> + init_table, (reg), \
> + (mask), (data)); \
> + init_table += sizeof(struct
> mmsch_v1_0_cmd_direct_read_modify_write)/4; \
> + table_size += sizeof(struct
> mmsch_v1_0_cmd_direct_read_modify_write)/4; \
> +}
> +
> +#define INSERT_DIRECT_WT(reg, value) { \
> + mmsch_insert_direct_wt(&direct_wt, \
> + init_table, (reg), \
> + (value)); \
> + init_table += sizeof(struct mmsch_v1_0_cmd_direct_write)/4; \
> + table_size += sizeof(struct mmsch_v1_0_cmd_direct_write)/4; \
> +}
> +
> +#define INSERT_DIRECT_POLL(reg, mask, wait) { \
> + mmsch_insert_direct_poll(&direct_poll, \
> + init_table, (reg), \
> + (mask), (wait)); \
> + init_table += sizeof(struct mmsch_v1_0_cmd_direct_polling)/4; \
> + table_size += sizeof(struct mmsch_v1_0_cmd_direct_polling)/4; \
> +}
> +
> #endif
> diff --git a/drivers/gpu/drm/amd/amdgpu/vce_v4_0.c
> b/drivers/gpu/drm/amd/amdgpu/vce_v4_0.c
> index 1deb546..a3d9d4d 100644
> --- a/drivers/gpu/drm/amd/amdgpu/vce_v4_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/vce_v4_0.c
> @@ -49,63 +49,6 @@ static void vce_v4_0_mc_resume(struct
> amdgpu_device *adev);
> static void vce_v4_0_set_ring_funcs(struct amdgpu_device *adev);
> static void vce_v4_0_set_irq_funcs(struct amdgpu_device *adev);
>
> -static inline void mmsch_insert_direct_wt(struct
> mmsch_v1_0_cmd_direct_write *direct_wt,
> - uint32_t *init_table,
> - uint32_t reg_offset,
> - uint32_t value)
> -{
> - direct_wt->cmd_header.reg_offset = reg_offset;
> - direct_wt->reg_value = value;
> - memcpy((void *)init_table, direct_wt, sizeof(struct
> mmsch_v1_0_cmd_direct_write));
> -}
> -
> -static inline void mmsch_insert_direct_rd_mod_wt(struct
> mmsch_v1_0_cmd_direct_read_modify_write *direct_rd_mod_wt,
> - uint32_t *init_table,
> - uint32_t reg_offset,
> - uint32_t mask, uint32_t data)
> -{
> - direct_rd_mod_wt->cmd_header.reg_offset = reg_offset;
> - direct_rd_mod_wt->mask_value = mask;
> - direct_rd_mod_wt->write_data = data;
> - memcpy((void *)init_table, direct_rd_mod_wt,
> - sizeof(struct mmsch_v1_0_cmd_direct_read_modify_write));
> -}
> -
> -static inline void mmsch_insert_direct_poll(struct
> mmsch_v1_0_cmd_direct_polling *direct_poll,
> - uint32_t *init_table,
> - uint32_t reg_offset,
> - uint32_t mask, uint32_t wait)
> -{
> - direct_poll->cmd_header.reg_offset = reg_offset;
> - direct_poll->mask_value = mask;
> - direct_poll->wait_value = wait;
> - memcpy((void *)init_table, direct_poll, sizeof(struct
> mmsch_v1_0_cmd_direct_polling));
> -}
> -
> -#define INSERT_DIRECT_RD_MOD_WT(reg, mask, data) { \
> - mmsch_insert_direct_rd_mod_wt(&direct_rd_mod_wt, \
> - init_table, (reg), \
> - (mask), (data)); \
> - init_table += sizeof(struct
> mmsch_v1_0_cmd_direct_read_modify_write)/4; \
> - table_size += sizeof(struct
> mmsch_v1_0_cmd_direct_read_modify_write)/4; \
> -}
> -
> -#define INSERT_DIRECT_WT(reg, value) { \
> - mmsch_insert_direct_wt(&direct_wt, \
> - init_table, (reg), \
> - (value)); \
> - init_table += sizeof(struct mmsch_v1_0_cmd_direct_write)/4; \
> - table_size += sizeof(struct mmsch_v1_0_cmd_direct_write)/4; \
> -}
> -
> -#define INSERT_DIRECT_POLL(reg, mask, wait) { \
> - mmsch_insert_direct_poll(&direct_poll, \
> - init_table, (reg), \
> - (mask), (wait)); \
> - init_table += sizeof(struct mmsch_v1_0_cmd_direct_polling)/4; \
> - table_size += sizeof(struct mmsch_v1_0_cmd_direct_polling)/4; \
> -}
> -
> /**
> * vce_v4_0_ring_get_rptr - get read pointer
> *
> --
> 2.7.4
>
* RE: [PATCH 06/11] drm/amdgpu/soc15: enable UVD code path for sriov
[not found] ` <1493017089-23101-7-git-send-email-Xiangliang.Yu-5C7GfCeVMHo@public.gmane.org>
@ 2017-04-24 15:54 ` Deucher, Alexander
0 siblings, 0 replies; 22+ messages in thread
From: Deucher, Alexander @ 2017-04-24 15:54 UTC (permalink / raw)
To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Min, Frank, Yu, Xiangliang
> -----Original Message-----
> From: amd-gfx [mailto:amd-gfx-bounces@lists.freedesktop.org] On Behalf
> Of Xiangliang Yu
> Sent: Monday, April 24, 2017 2:58 AM
> To: amd-gfx@lists.freedesktop.org
> Cc: Yu, Xiangliang; Min, Frank
> Subject: [PATCH 06/11] drm/amdgpu/soc15: enable UVD code path for sriov
>
> From: Frank Min <Frank.Min@amd.com>
>
> Enable UVD block for SRIOV.
>
> Signed-off-by: Frank Min <Frank.Min@amd.com>
> Signed-off-by: Xiangliang Yu <Xiangliang.Yu@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
> ---
> drivers/gpu/drm/amd/amdgpu/soc15.c | 3 +--
> 1 file changed, 1 insertion(+), 2 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/soc15.c
> b/drivers/gpu/drm/amd/amdgpu/soc15.c
> index 6999ac3..4e514b2 100644
> --- a/drivers/gpu/drm/amd/amdgpu/soc15.c
> +++ b/drivers/gpu/drm/amd/amdgpu/soc15.c
> @@ -482,8 +482,7 @@ int soc15_set_ip_blocks(struct amdgpu_device
> *adev)
> #endif
> amdgpu_ip_block_add(adev, &gfx_v9_0_ip_block);
> amdgpu_ip_block_add(adev, &sdma_v4_0_ip_block);
> - if (!amdgpu_sriov_vf(adev))
> - amdgpu_ip_block_add(adev, &uvd_v7_0_ip_block);
> + amdgpu_ip_block_add(adev, &uvd_v7_0_ip_block);
> amdgpu_ip_block_add(adev, &vce_v4_0_ip_block);
> break;
> default:
> --
> 2.7.4
>
* RE: [PATCH 09/11] drm/amdgpu/uvd7: add sriov uvd initialization sequences
[not found] ` <1493017089-23101-10-git-send-email-Xiangliang.Yu-5C7GfCeVMHo@public.gmane.org>
@ 2017-04-24 15:56 ` Deucher, Alexander
0 siblings, 0 replies; 22+ messages in thread
From: Deucher, Alexander @ 2017-04-24 15:56 UTC (permalink / raw)
To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Min, Frank, Yu, Xiangliang
> -----Original Message-----
> From: amd-gfx [mailto:amd-gfx-bounces@lists.freedesktop.org] On Behalf
> Of Xiangliang Yu
> Sent: Monday, April 24, 2017 2:58 AM
> To: amd-gfx@lists.freedesktop.org
> Cc: Yu, Xiangliang; Min, Frank
> Subject: [PATCH 09/11] drm/amdgpu/uvd7: add sriov uvd initialization
> sequences
>
> From: Frank Min <Frank.Min@amd.com>
>
> Add UVD initialization for SRIOV.
>
> Signed-off-by: Frank Min <Frank.Min@amd.com>
> Signed-off-by: Xiangliang Yu <Xiangliang.Yu@amd.com>
This patch should come before patch 6, since patch 6 enables the functionality on SR-IOV and this patch makes it work. With that fixed:
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
> ---
> drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c | 246 ++++++++++++++++++++++++++++++++++
> 1 file changed, 246 insertions(+)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c b/drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c
> index bf35d56..fb3da07 100644
> --- a/drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c
> @@ -27,10 +27,14 @@
> #include "amdgpu_uvd.h"
> #include "soc15d.h"
> #include "soc15_common.h"
> +#include "mmsch_v1_0.h"
>
> #include "vega10/soc15ip.h"
> #include "vega10/UVD/uvd_7_0_offset.h"
> #include "vega10/UVD/uvd_7_0_sh_mask.h"
> +#include "vega10/VCE/vce_4_0_offset.h"
> +#include "vega10/VCE/vce_4_0_default.h"
> +#include "vega10/VCE/vce_4_0_sh_mask.h"
> #include "vega10/NBIF/nbif_6_1_offset.h"
> #include "vega10/HDP/hdp_4_0_offset.h"
> #include "vega10/MMHUB/mmhub_1_0_offset.h"
> @@ -41,6 +45,7 @@ static void uvd_v7_0_set_enc_ring_funcs(struct amdgpu_device *adev);
> static void uvd_v7_0_set_irq_funcs(struct amdgpu_device *adev);
> static int uvd_v7_0_start(struct amdgpu_device *adev);
> static void uvd_v7_0_stop(struct amdgpu_device *adev);
> +static int uvd_v7_0_sriov_start(struct amdgpu_device *adev);
>
> /**
> * uvd_v7_0_ring_get_rptr - get read pointer
> @@ -618,6 +623,247 @@ static void uvd_v7_0_mc_resume(struct amdgpu_device *adev)
> WREG32(SOC15_REG_OFFSET(UVD, 0, mmUVD_GP_SCRATCH4),
> adev->uvd.max_handles);
> }
>
> +static int uvd_v7_0_mmsch_start(struct amdgpu_device *adev,
> + struct amdgpu_mm_table *table)
> +{
> + uint32_t data = 0, loop;
> + uint64_t addr = table->gpu_addr;
> + struct mmsch_v1_0_init_header *header = (struct mmsch_v1_0_init_header *)table->cpu_addr;
> + uint32_t size;
> +
> + size = header->header_size + header->vce_table_size + header->uvd_table_size;
> +
> + /* 1, write to vce_mmsch_vf_ctx_addr_lo/hi register with GPU mc addr of memory descriptor location */
> + WREG32(SOC15_REG_OFFSET(VCE, 0, mmVCE_MMSCH_VF_CTX_ADDR_LO), lower_32_bits(addr));
> + WREG32(SOC15_REG_OFFSET(VCE, 0, mmVCE_MMSCH_VF_CTX_ADDR_HI), upper_32_bits(addr));
> +
> + /* 2, update vmid of descriptor */
> + data = RREG32(SOC15_REG_OFFSET(VCE, 0, mmVCE_MMSCH_VF_VMID));
> + data &= ~VCE_MMSCH_VF_VMID__VF_CTX_VMID_MASK;
> + data |= (0 << VCE_MMSCH_VF_VMID__VF_CTX_VMID__SHIFT); /* use domain0 for MM scheduler */
> + WREG32(SOC15_REG_OFFSET(VCE, 0, mmVCE_MMSCH_VF_VMID), data);
> +
> + /* 3, notify mmsch about the size of this descriptor */
> + WREG32(SOC15_REG_OFFSET(VCE, 0, mmVCE_MMSCH_VF_CTX_SIZE), size);
> +
> + /* 4, set resp to zero */
> + WREG32(SOC15_REG_OFFSET(VCE, 0, mmVCE_MMSCH_VF_MAILBOX_RESP), 0);
> +
> + /* 5, kick off the initialization and wait until VCE_MMSCH_VF_MAILBOX_RESP becomes non-zero */
> + WREG32(SOC15_REG_OFFSET(VCE, 0, mmVCE_MMSCH_VF_MAILBOX_HOST), 0x10000001);
> +
> + data = RREG32(SOC15_REG_OFFSET(VCE, 0, mmVCE_MMSCH_VF_MAILBOX_RESP));
> + loop = 1000;
> + while ((data & 0x10000002) != 0x10000002) {
> + udelay(10);
> + data = RREG32(SOC15_REG_OFFSET(VCE, 0, mmVCE_MMSCH_VF_MAILBOX_RESP));
> + loop--;
> + if (!loop)
> + break;
> + }
> +
> + if (!loop) {
> + dev_err(adev->dev, "failed to init MMSCH, mmVCE_MMSCH_VF_MAILBOX_RESP = %x\n", data);
> + return -EBUSY;
> + }
> +
> + return 0;
> +}
> +
> +static int uvd_v7_0_sriov_start(struct amdgpu_device *adev)
> +{
> + struct amdgpu_ring *ring;
> + uint32_t offset, size, tmp;
> + uint32_t table_size = 0;
> + struct mmsch_v1_0_cmd_direct_write direct_wt = { {0} };
> + struct mmsch_v1_0_cmd_direct_read_modify_write direct_rd_mod_wt = { {0} };
> + struct mmsch_v1_0_cmd_direct_polling direct_poll = { {0} };
> + //struct mmsch_v1_0_cmd_indirect_write indirect_wt = {{0}};
> + struct mmsch_v1_0_cmd_end end = { {0} };
> + uint32_t *init_table = adev->virt.mm_table.cpu_addr;
> + struct mmsch_v1_0_init_header *header = (struct mmsch_v1_0_init_header *)init_table;
> +
> + direct_wt.cmd_header.command_type = MMSCH_COMMAND__DIRECT_REG_WRITE;
> + direct_rd_mod_wt.cmd_header.command_type = MMSCH_COMMAND__DIRECT_REG_READ_MODIFY_WRITE;
> + direct_poll.cmd_header.command_type = MMSCH_COMMAND__DIRECT_REG_POLLING;
> + end.cmd_header.command_type = MMSCH_COMMAND__END;
> +
> + if (header->uvd_table_offset == 0 && header->uvd_table_size == 0) {
> + header->version = MMSCH_VERSION;
> + header->header_size = sizeof(struct mmsch_v1_0_init_header) >> 2;
> +
> + if (header->vce_table_offset == 0 && header->vce_table_size == 0)
> + header->uvd_table_offset = header->header_size;
> + else
> + header->uvd_table_offset = header->vce_table_size + header->vce_table_offset;
> +
> + init_table += header->uvd_table_offset;
> +
> + ring = &adev->uvd.ring;
> + size = AMDGPU_GPU_PAGE_ALIGN(adev->uvd.fw->size + 4);
> +
> + /* disable clock gating */
> + INSERT_DIRECT_RD_MOD_WT(SOC15_REG_OFFSET(UVD, 0, mmUVD_POWER_STATUS),
> + ~UVD_POWER_STATUS__UVD_PG_MODE_MASK, 0);
> + INSERT_DIRECT_RD_MOD_WT(SOC15_REG_OFFSET(UVD, 0, mmUVD_STATUS),
> + 0xFFFFFFFF, 0x00000004);
> + /* mc resume */
> + if (adev->firmware.load_type == AMDGPU_FW_LOAD_PSP) {
> + INSERT_DIRECT_WT(SOC15_REG_OFFSET(UVD, 0, mmUVD_LMI_VCPU_CACHE_64BIT_BAR_LOW),
> + lower_32_bits(adev->firmware.ucode[AMDGPU_UCODE_ID_UVD].mc_addr));
> + INSERT_DIRECT_WT(SOC15_REG_OFFSET(UVD, 0, mmUVD_LMI_VCPU_CACHE_64BIT_BAR_HIGH),
> + upper_32_bits(adev->firmware.ucode[AMDGPU_UCODE_ID_UVD].mc_addr));
> + offset = 0;
> + } else {
> + INSERT_DIRECT_WT(SOC15_REG_OFFSET(UVD, 0, mmUVD_LMI_VCPU_CACHE_64BIT_BAR_LOW),
> + lower_32_bits(adev->uvd.gpu_addr));
> + INSERT_DIRECT_WT(SOC15_REG_OFFSET(UVD, 0, mmUVD_LMI_VCPU_CACHE_64BIT_BAR_HIGH),
> + upper_32_bits(adev->uvd.gpu_addr));
> + offset = size;
> + }
> +
> + INSERT_DIRECT_WT(SOC15_REG_OFFSET(UVD, 0, mmUVD_VCPU_CACHE_OFFSET0), AMDGPU_UVD_FIRMWARE_OFFSET >> 3);
> + INSERT_DIRECT_WT(SOC15_REG_OFFSET(UVD, 0, mmUVD_VCPU_CACHE_SIZE0), size);
> +
> + INSERT_DIRECT_WT(SOC15_REG_OFFSET(UVD, 0, mmUVD_LMI_VCPU_CACHE1_64BIT_BAR_LOW),
> + lower_32_bits(adev->uvd.gpu_addr + offset));
> + INSERT_DIRECT_WT(SOC15_REG_OFFSET(UVD, 0, mmUVD_LMI_VCPU_CACHE1_64BIT_BAR_HIGH),
> + upper_32_bits(adev->uvd.gpu_addr + offset));
> + INSERT_DIRECT_WT(SOC15_REG_OFFSET(UVD, 0, mmUVD_VCPU_CACHE_OFFSET1), (1 << 21));
> + INSERT_DIRECT_WT(SOC15_REG_OFFSET(UVD, 0, mmUVD_VCPU_CACHE_SIZE1), AMDGPU_UVD_HEAP_SIZE);
> +
> + INSERT_DIRECT_WT(SOC15_REG_OFFSET(UVD, 0, mmUVD_LMI_VCPU_CACHE2_64BIT_BAR_LOW),
> + lower_32_bits(adev->uvd.gpu_addr + offset + AMDGPU_UVD_HEAP_SIZE));
> + INSERT_DIRECT_WT(SOC15_REG_OFFSET(UVD, 0, mmUVD_LMI_VCPU_CACHE2_64BIT_BAR_HIGH),
> + upper_32_bits(adev->uvd.gpu_addr + offset + AMDGPU_UVD_HEAP_SIZE));
> + INSERT_DIRECT_WT(SOC15_REG_OFFSET(UVD, 0, mmUVD_VCPU_CACHE_OFFSET2), (2 << 21));
> + INSERT_DIRECT_WT(SOC15_REG_OFFSET(UVD, 0, mmUVD_VCPU_CACHE_SIZE2),
> + AMDGPU_UVD_STACK_SIZE + (AMDGPU_UVD_SESSION_SIZE * 40));
> +
> + INSERT_DIRECT_WT(SOC15_REG_OFFSET(UVD, 0, mmUVD_UDEC_ADDR_CONFIG), adev->gfx.config.gb_addr_config);
> + INSERT_DIRECT_WT(SOC15_REG_OFFSET(UVD, 0, mmUVD_UDEC_DB_ADDR_CONFIG), adev->gfx.config.gb_addr_config);
> + INSERT_DIRECT_WT(SOC15_REG_OFFSET(UVD, 0, mmUVD_UDEC_DBW_ADDR_CONFIG), adev->gfx.config.gb_addr_config);
> + INSERT_DIRECT_WT(SOC15_REG_OFFSET(UVD, 0, mmUVD_GP_SCRATCH4), adev->uvd.max_handles);
> + /* mc resume end */
> +
> + /* disable clock gating */
> + INSERT_DIRECT_RD_MOD_WT(SOC15_REG_OFFSET(UVD, 0, mmUVD_CGC_CTRL),
> + ~UVD_CGC_CTRL__DYN_CLOCK_MODE_MASK, 0);
> +
> + /* disable interrupt */
> + INSERT_DIRECT_RD_MOD_WT(SOC15_REG_OFFSET(UVD, 0, mmUVD_MASTINT_EN),
> + ~UVD_MASTINT_EN__VCPU_EN_MASK, 0);
> +
> + /* stall UMC and register bus before resetting VCPU */
> + INSERT_DIRECT_RD_MOD_WT(SOC15_REG_OFFSET(UVD, 0, mmUVD_LMI_CTRL2),
> + ~UVD_LMI_CTRL2__STALL_ARB_UMC_MASK,
> + UVD_LMI_CTRL2__STALL_ARB_UMC_MASK);
> +
> + /* put LMI, VCPU, RBC etc... into reset */
> + INSERT_DIRECT_WT(SOC15_REG_OFFSET(UVD, 0, mmUVD_SOFT_RESET),
> + (uint32_t)(UVD_SOFT_RESET__LMI_SOFT_RESET_MASK |
> + UVD_SOFT_RESET__VCPU_SOFT_RESET_MASK |
> + UVD_SOFT_RESET__LBSI_SOFT_RESET_MASK |
> + UVD_SOFT_RESET__RBC_SOFT_RESET_MASK |
> + UVD_SOFT_RESET__CSM_SOFT_RESET_MASK |
> + UVD_SOFT_RESET__CXW_SOFT_RESET_MASK |
> + UVD_SOFT_RESET__TAP_SOFT_RESET_MASK |
> + UVD_SOFT_RESET__LMI_UMC_SOFT_RESET_MASK));
> +
> + /* initialize UVD memory controller */
> + INSERT_DIRECT_WT(SOC15_REG_OFFSET(UVD, 0, mmUVD_LMI_CTRL),
> + (uint32_t)((0x40 << UVD_LMI_CTRL__WRITE_CLEAN_TIMER__SHIFT) |
> + UVD_LMI_CTRL__WRITE_CLEAN_TIMER_EN_MASK |
> + UVD_LMI_CTRL__DATA_COHERENCY_EN_MASK |
> + UVD_LMI_CTRL__VCPU_DATA_COHERENCY_EN_MASK |
> + UVD_LMI_CTRL__REQ_MODE_MASK |
> + 0x00100000L));
> +
> + /* disable byte swapping */
> + INSERT_DIRECT_WT(SOC15_REG_OFFSET(UVD, 0, mmUVD_LMI_SWAP_CNTL), 0);
> + INSERT_DIRECT_WT(SOC15_REG_OFFSET(UVD, 0, mmUVD_MP_SWAP_CNTL), 0);
> +
> + INSERT_DIRECT_WT(SOC15_REG_OFFSET(UVD, 0, mmUVD_MPC_SET_MUXA0), 0x40c2040);
> + INSERT_DIRECT_WT(SOC15_REG_OFFSET(UVD, 0, mmUVD_MPC_SET_MUXA1), 0x0);
> + INSERT_DIRECT_WT(SOC15_REG_OFFSET(UVD, 0, mmUVD_MPC_SET_MUXB0), 0x40c2040);
> + INSERT_DIRECT_WT(SOC15_REG_OFFSET(UVD, 0, mmUVD_MPC_SET_MUXB1), 0x0);
> + INSERT_DIRECT_WT(SOC15_REG_OFFSET(UVD, 0, mmUVD_MPC_SET_ALU), 0);
> + INSERT_DIRECT_WT(SOC15_REG_OFFSET(UVD, 0, mmUVD_MPC_SET_MUX), 0x88);
> +
> + /* take all subblocks out of reset, except VCPU */
> + INSERT_DIRECT_WT(SOC15_REG_OFFSET(UVD, 0, mmUVD_SOFT_RESET),
> + UVD_SOFT_RESET__VCPU_SOFT_RESET_MASK);
> +
> + /* enable VCPU clock */
> + INSERT_DIRECT_WT(SOC15_REG_OFFSET(UVD, 0, mmUVD_VCPU_CNTL),
> + UVD_VCPU_CNTL__CLK_EN_MASK);
> +
> + /* enable UMC */
> + INSERT_DIRECT_RD_MOD_WT(SOC15_REG_OFFSET(UVD, 0, mmUVD_LMI_CTRL2),
> + ~UVD_LMI_CTRL2__STALL_ARB_UMC_MASK, 0);
> +
> + /* boot up the VCPU */
> + INSERT_DIRECT_WT(SOC15_REG_OFFSET(UVD, 0, mmUVD_SOFT_RESET), 0);
> +
> + INSERT_DIRECT_POLL(SOC15_REG_OFFSET(UVD, 0, mmUVD_STATUS), 0x02, 0x02);
> +
> + /* enable master interrupt */
> + INSERT_DIRECT_RD_MOD_WT(SOC15_REG_OFFSET(UVD, 0, mmUVD_MASTINT_EN),
> + ~(UVD_MASTINT_EN__VCPU_EN_MASK|UVD_MASTINT_EN__SYS_EN_MASK),
> + (UVD_MASTINT_EN__VCPU_EN_MASK|UVD_MASTINT_EN__SYS_EN_MASK));
> +
> + /* clear the bit 4 of UVD_STATUS */
> + INSERT_DIRECT_RD_MOD_WT(SOC15_REG_OFFSET(UVD, 0, mmUVD_STATUS),
> + ~(2 << UVD_STATUS__VCPU_REPORT__SHIFT), 0);
> +
> + /* force RBC into idle state */
> + size = order_base_2(ring->ring_size);
> + tmp = REG_SET_FIELD(0, UVD_RBC_RB_CNTL, RB_BUFSZ, size);
> + tmp = REG_SET_FIELD(tmp, UVD_RBC_RB_CNTL, RB_BLKSZ, 1);
> + tmp = REG_SET_FIELD(tmp, UVD_RBC_RB_CNTL, RB_NO_FETCH, 1);
> + tmp = REG_SET_FIELD(tmp, UVD_RBC_RB_CNTL, RB_WPTR_POLL_EN, 0);
> + tmp = REG_SET_FIELD(tmp, UVD_RBC_RB_CNTL, RB_NO_UPDATE, 1);
> + tmp = REG_SET_FIELD(tmp, UVD_RBC_RB_CNTL, RB_RPTR_WR_EN, 1);
> + INSERT_DIRECT_WT(SOC15_REG_OFFSET(UVD, 0, mmUVD_RBC_RB_CNTL), tmp);
> +
> + /* set the write pointer delay */
> + INSERT_DIRECT_WT(SOC15_REG_OFFSET(UVD, 0, mmUVD_RBC_RB_WPTR_CNTL), 0);
> +
> + /* set the wb address */
> + INSERT_DIRECT_WT(SOC15_REG_OFFSET(UVD, 0, mmUVD_RBC_RB_RPTR_ADDR),
> + (upper_32_bits(ring->gpu_addr) >> 2));
> +
> + /* program the RB_BASE for ring buffer */
> + INSERT_DIRECT_WT(SOC15_REG_OFFSET(UVD, 0, mmUVD_LMI_RBC_RB_64BIT_BAR_LOW),
> + lower_32_bits(ring->gpu_addr));
> + INSERT_DIRECT_WT(SOC15_REG_OFFSET(UVD, 0, mmUVD_LMI_RBC_RB_64BIT_BAR_HIGH),
> + upper_32_bits(ring->gpu_addr));
> +
> + ring->wptr = 0;
> + ring = &adev->uvd.ring_enc[0];
> + INSERT_DIRECT_WT(SOC15_REG_OFFSET(UVD, 0, mmUVD_RB_BASE_LO), ring->gpu_addr);
> + INSERT_DIRECT_WT(SOC15_REG_OFFSET(UVD, 0, mmUVD_RB_BASE_HI), upper_32_bits(ring->gpu_addr));
> + INSERT_DIRECT_WT(SOC15_REG_OFFSET(UVD, 0, mmUVD_RB_SIZE), ring->ring_size / 4);
> +
> + ring = &adev->uvd.ring_enc[1];
> + INSERT_DIRECT_WT(SOC15_REG_OFFSET(UVD, 0, mmUVD_RB_BASE_LO2), ring->gpu_addr);
> + INSERT_DIRECT_WT(SOC15_REG_OFFSET(UVD, 0, mmUVD_RB_BASE_HI2), upper_32_bits(ring->gpu_addr));
> + INSERT_DIRECT_WT(SOC15_REG_OFFSET(UVD, 0, mmUVD_RB_SIZE2), ring->ring_size / 4);
> +
> + /* add end packet */
> + memcpy((void *)init_table, &end, sizeof(struct mmsch_v1_0_cmd_end));
> + table_size += sizeof(struct mmsch_v1_0_cmd_end) / 4;
> + header->uvd_table_size = table_size;
> +
> + return uvd_v7_0_mmsch_start(adev, &adev->virt.mm_table);
> + }
> + return -EINVAL; /* already initialized? */
> +}
> +
> /**
> * uvd_v7_0_start - start UVD block
> *
> --
> 2.7.4
>
* RE: [PATCH 11/11] drm/amdgpu/uvd7: add UVD hw init sequences for sriov
[not found] ` <1493017089-23101-12-git-send-email-Xiangliang.Yu-5C7GfCeVMHo@public.gmane.org>
@ 2017-04-24 15:59 ` Deucher, Alexander
0 siblings, 0 replies; 22+ messages in thread
From: Deucher, Alexander @ 2017-04-24 15:59 UTC (permalink / raw)
To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Min, Frank, Yu, Xiangliang
> -----Original Message-----
> From: amd-gfx [mailto:amd-gfx-bounces@lists.freedesktop.org] On Behalf
> Of Xiangliang Yu
> Sent: Monday, April 24, 2017 2:58 AM
> To: amd-gfx@lists.freedesktop.org
> Cc: Yu, Xiangliang; Min, Frank
> Subject: [PATCH 11/11] drm/amdgpu/uvd7: add UVD hw init sequences for
> sriov
>
> From: Frank Min <Frank.Min@amd.com>
>
> Add UVD hw init.
>
> Signed-off-by: Frank Min <Frank.Min@amd.com>
> Signed-off-by: Xiangliang Yu <Xiangliang.Yu@amd.com>
This needs to land before patch 6 as well. With that fixed:
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
> ---
> drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c | 92 ++++++++++++++++++++---------------
> 1 file changed, 54 insertions(+), 38 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c b/drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c
> index a294f05..e0b7ded 100644
> --- a/drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c
> @@ -368,7 +368,10 @@ static int uvd_v7_0_early_init(void *handle)
> {
> struct amdgpu_device *adev = (struct amdgpu_device *)handle;
>
> - adev->uvd.num_enc_rings = 2;
> + if (amdgpu_sriov_vf(adev))
> + adev->uvd.num_enc_rings = 1;
> + else
> + adev->uvd.num_enc_rings = 2;
> uvd_v7_0_set_ring_funcs(adev);
> uvd_v7_0_set_enc_ring_funcs(adev);
> uvd_v7_0_set_irq_funcs(adev);
> @@ -421,12 +424,14 @@ static int uvd_v7_0_sw_init(void *handle)
> r = amdgpu_uvd_resume(adev);
> if (r)
> return r;
> + if (!amdgpu_sriov_vf(adev)) {
> + ring = &adev->uvd.ring;
> + sprintf(ring->name, "uvd");
> + r = amdgpu_ring_init(adev, ring, 512, &adev->uvd.irq, 0);
> + if (r)
> + return r;
> + }
>
> - ring = &adev->uvd.ring;
> - sprintf(ring->name, "uvd");
> - r = amdgpu_ring_init(adev, ring, 512, &adev->uvd.irq, 0);
> - if (r)
> - return r;
>
> for (i = 0; i < adev->uvd.num_enc_rings; ++i) {
> ring = &adev->uvd.ring_enc[i];
> @@ -445,6 +450,10 @@ static int uvd_v7_0_sw_init(void *handle)
> return r;
> }
>
> + r = amdgpu_virt_alloc_mm_table(adev);
> + if (r)
> + return r;
> +
> return r;
> }
>
> @@ -453,6 +462,8 @@ static int uvd_v7_0_sw_fini(void *handle)
> int i, r;
> struct amdgpu_device *adev = (struct amdgpu_device *)handle;
>
> + amdgpu_virt_free_mm_table(adev);
> +
> r = amdgpu_uvd_suspend(adev);
> if (r)
> return r;
> @@ -479,48 +490,53 @@ static int uvd_v7_0_hw_init(void *handle)
> uint32_t tmp;
> int i, r;
>
> - r = uvd_v7_0_start(adev);
> + if (amdgpu_sriov_vf(adev))
> + r = uvd_v7_0_sriov_start(adev);
> + else
> + r = uvd_v7_0_start(adev);
> if (r)
> goto done;
>
> - ring->ready = true;
> - r = amdgpu_ring_test_ring(ring);
> - if (r) {
> - ring->ready = false;
> - goto done;
> - }
> + if (!amdgpu_sriov_vf(adev)) {
> + ring->ready = true;
> + r = amdgpu_ring_test_ring(ring);
> + if (r) {
> + ring->ready = false;
> + goto done;
> + }
>
> - r = amdgpu_ring_alloc(ring, 10);
> - if (r) {
> - DRM_ERROR("amdgpu: ring failed to lock UVD ring (%d).\n", r);
> - goto done;
> - }
> + r = amdgpu_ring_alloc(ring, 10);
> + if (r) {
> + DRM_ERROR("amdgpu: ring failed to lock UVD ring (%d).\n", r);
> + goto done;
> + }
>
> - tmp = PACKET0(SOC15_REG_OFFSET(UVD, 0,
> - mmUVD_SEMA_WAIT_FAULT_TIMEOUT_CNTL), 0);
> - amdgpu_ring_write(ring, tmp);
> - amdgpu_ring_write(ring, 0xFFFFF);
> + tmp = PACKET0(SOC15_REG_OFFSET(UVD, 0,
> + mmUVD_SEMA_WAIT_FAULT_TIMEOUT_CNTL), 0);
> + amdgpu_ring_write(ring, tmp);
> + amdgpu_ring_write(ring, 0xFFFFF);
>
> - tmp = PACKET0(SOC15_REG_OFFSET(UVD, 0,
> - mmUVD_SEMA_WAIT_INCOMPLETE_TIMEOUT_CNTL), 0);
> - amdgpu_ring_write(ring, tmp);
> - amdgpu_ring_write(ring, 0xFFFFF);
> + tmp = PACKET0(SOC15_REG_OFFSET(UVD, 0,
> + mmUVD_SEMA_WAIT_INCOMPLETE_TIMEOUT_CNTL), 0);
> + amdgpu_ring_write(ring, tmp);
> + amdgpu_ring_write(ring, 0xFFFFF);
>
> - tmp = PACKET0(SOC15_REG_OFFSET(UVD, 0,
> - mmUVD_SEMA_SIGNAL_INCOMPLETE_TIMEOUT_CNTL), 0);
> - amdgpu_ring_write(ring, tmp);
> - amdgpu_ring_write(ring, 0xFFFFF);
> + tmp = PACKET0(SOC15_REG_OFFSET(UVD, 0,
> + mmUVD_SEMA_SIGNAL_INCOMPLETE_TIMEOUT_CNTL), 0);
> + amdgpu_ring_write(ring, tmp);
> + amdgpu_ring_write(ring, 0xFFFFF);
>
> - /* Clear timeout status bits */
> - amdgpu_ring_write(ring, PACKET0(SOC15_REG_OFFSET(UVD, 0,
> - mmUVD_SEMA_TIMEOUT_STATUS), 0));
> - amdgpu_ring_write(ring, 0x8);
> + /* Clear timeout status bits */
> + amdgpu_ring_write(ring, PACKET0(SOC15_REG_OFFSET(UVD, 0,
> + mmUVD_SEMA_TIMEOUT_STATUS), 0));
> + amdgpu_ring_write(ring, 0x8);
>
> - amdgpu_ring_write(ring, PACKET0(SOC15_REG_OFFSET(UVD, 0,
> - mmUVD_SEMA_CNTL), 0));
> - amdgpu_ring_write(ring, 3);
> + amdgpu_ring_write(ring, PACKET0(SOC15_REG_OFFSET(UVD, 0,
> + mmUVD_SEMA_CNTL), 0));
> + amdgpu_ring_write(ring, 3);
>
> - amdgpu_ring_commit(ring);
> + amdgpu_ring_commit(ring);
> + }
>
> for (i = 0; i < adev->uvd.num_enc_rings; ++i) {
> ring = &adev->uvd.ring_enc[i];
> --
> 2.7.4
>
* RE: [PATCH 07/11] drm/amdgpu/virt: add two functions for MM table
[not found] ` <1493017089-23101-8-git-send-email-Xiangliang.Yu-5C7GfCeVMHo@public.gmane.org>
@ 2017-04-24 16:02 ` Deucher, Alexander
0 siblings, 0 replies; 22+ messages in thread
From: Deucher, Alexander @ 2017-04-24 16:02 UTC (permalink / raw)
To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Yu, Xiangliang
> -----Original Message-----
> From: amd-gfx [mailto:amd-gfx-bounces@lists.freedesktop.org] On Behalf
> Of Xiangliang Yu
> Sent: Monday, April 24, 2017 2:58 AM
> To: amd-gfx@lists.freedesktop.org
> Cc: Yu, Xiangliang
> Subject: [PATCH 07/11] drm/amdgpu/virt: add two functions for MM table
>
> Add two functions to allocate & free MM table memory.
>
> Signed-off-by: Xiangliang Yu <Xiangliang.Yu@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
> ---
> drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c | 46 ++++++++++++++++++++++++++++++++
> drivers/gpu/drm/amd/amdgpu/amdgpu_virt.h | 2 ++
> 2 files changed, 48 insertions(+)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c
> index 7fce7b5..1363239 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c
> @@ -227,3 +227,49 @@ int amdgpu_virt_reset_gpu(struct amdgpu_device *adev)
>
> return 0;
> }
> +
> +/**
> + * amdgpu_virt_alloc_mm_table() - alloc memory for mm table
> + * @amdgpu: amdgpu device.
> + * MM table is used by UVD and VCE for its initialization
> + * Return: Zero if allocate success.
> + */
> +int amdgpu_virt_alloc_mm_table(struct amdgpu_device *adev)
> +{
> + int r;
> +
> + if (!amdgpu_sriov_vf(adev) || adev->virt.mm_table.gpu_addr)
> + return 0;
> +
> + r = amdgpu_bo_create_kernel(adev, PAGE_SIZE, PAGE_SIZE,
> + AMDGPU_GEM_DOMAIN_VRAM,
> + &adev->virt.mm_table.bo,
> + &adev->virt.mm_table.gpu_addr,
> + (void *)&adev->virt.mm_table.cpu_addr);
> + if (r) {
> + DRM_ERROR("failed to alloc mm table and error = %d.\n", r);
> + return r;
> + }
> +
> + memset((void *)adev->virt.mm_table.cpu_addr, 0, PAGE_SIZE);
> + DRM_INFO("MM table gpu addr = 0x%llx, cpu addr = %p.\n",
> + adev->virt.mm_table.gpu_addr,
> + adev->virt.mm_table.cpu_addr);
> + return 0;
> +}
> +
> +/**
> + * amdgpu_virt_free_mm_table() - free mm table memory
> + * @amdgpu: amdgpu device.
> + * Free MM table memory
> + */
> +void amdgpu_virt_free_mm_table(struct amdgpu_device *adev)
> +{
> + if (!amdgpu_sriov_vf(adev) || !adev->virt.mm_table.gpu_addr)
> + return;
> +
> + amdgpu_bo_free_kernel(&adev->virt.mm_table.bo,
> + &adev->virt.mm_table.gpu_addr,
> + (void *)&adev->virt.mm_table.cpu_addr);
> + adev->virt.mm_table.gpu_addr = 0;
> +}
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.h
> index 1ee0a19..a8ed162 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.h
> @@ -98,5 +98,7 @@ int amdgpu_virt_request_full_gpu(struct amdgpu_device *adev, bool init);
> int amdgpu_virt_release_full_gpu(struct amdgpu_device *adev, bool init);
> int amdgpu_virt_reset_gpu(struct amdgpu_device *adev);
> int amdgpu_sriov_gpu_reset(struct amdgpu_device *adev, bool voluntary);
> +int amdgpu_virt_alloc_mm_table(struct amdgpu_device *adev);
> +void amdgpu_virt_free_mm_table(struct amdgpu_device *adev);
>
> #endif
> --
> 2.7.4
>
* RE: [PATCH 08/11] drm/amdgpu/vce4: replaced with virt_alloc_mm_table
[not found] ` <1493017089-23101-9-git-send-email-Xiangliang.Yu-5C7GfCeVMHo@public.gmane.org>
@ 2017-04-24 16:02 ` Deucher, Alexander
0 siblings, 0 replies; 22+ messages in thread
From: Deucher, Alexander @ 2017-04-24 16:02 UTC (permalink / raw)
To: amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW; +Cc: Yu, Xiangliang
> -----Original Message-----
> From: amd-gfx [mailto:amd-gfx-bounces@lists.freedesktop.org] On Behalf
> Of Xiangliang Yu
> Sent: Monday, April 24, 2017 2:58 AM
> To: amd-gfx@lists.freedesktop.org
> Cc: Yu, Xiangliang
> Subject: [PATCH 08/11] drm/amdgpu/vce4: replaced with
> virt_alloc_mm_table
>
> Used virt_alloc_mm_table function to allocate MM table memory.
>
> Signed-off-by: Xiangliang Yu <Xiangliang.Yu@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
> ---
> drivers/gpu/drm/amd/amdgpu/vce_v4_0.c | 20 +++-----------------
> 1 file changed, 3 insertions(+), 17 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/vce_v4_0.c b/drivers/gpu/drm/amd/amdgpu/vce_v4_0.c
> index a3d9d4d..a34cdbd 100644
> --- a/drivers/gpu/drm/amd/amdgpu/vce_v4_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/vce_v4_0.c
> @@ -444,20 +444,9 @@ static int vce_v4_0_sw_init(void *handle)
> return r;
> }
>
> - if (amdgpu_sriov_vf(adev)) {
> - r = amdgpu_bo_create_kernel(adev, PAGE_SIZE, PAGE_SIZE,
> - AMDGPU_GEM_DOMAIN_VRAM,
> - &adev->virt.mm_table.bo,
> - &adev->virt.mm_table.gpu_addr,
> - (void *)&adev->virt.mm_table.cpu_addr);
> - if (!r) {
> - memset((void *)adev->virt.mm_table.cpu_addr, 0, PAGE_SIZE);
> - printk("mm table gpu addr = 0x%llx, cpu addr = %p.\n",
> - adev->virt.mm_table.gpu_addr,
> - adev->virt.mm_table.cpu_addr);
> - }
> + r = amdgpu_virt_alloc_mm_table(adev);
> + if (r)
> return r;
> - }
>
> return r;
> }
> @@ -468,10 +457,7 @@ static int vce_v4_0_sw_fini(void *handle)
> struct amdgpu_device *adev = (struct amdgpu_device *)handle;
>
> /* free MM table */
> - if (amdgpu_sriov_vf(adev))
> - amdgpu_bo_free_kernel(&adev->virt.mm_table.bo,
> - &adev->virt.mm_table.gpu_addr,
> - (void *)&adev->virt.mm_table.cpu_addr);
> + amdgpu_virt_free_mm_table(adev);
>
> r = amdgpu_vce_suspend(adev);
> if (r)
> --
> 2.7.4
>
2017-04-24 6:57 [PATCH 00/11] Enable UVD and PSP loading for SRIOV Xiangliang Yu
[not found] ` <1493017089-23101-1-git-send-email-Xiangliang.Yu-5C7GfCeVMHo@public.gmane.org>
2017-04-24 6:57 ` [PATCH 01/11] drm/amdgpu/virt: bypass cg and pg setting " Xiangliang Yu
[not found] ` <1493017089-23101-2-git-send-email-Xiangliang.Yu-5C7GfCeVMHo@public.gmane.org>
2017-04-24 15:45 ` Deucher, Alexander
2017-04-24 6:58 ` [PATCH 02/11] drm/amdgpu/virt: change the place of virt_init_setting Xiangliang Yu
[not found] ` <1493017089-23101-3-git-send-email-Xiangliang.Yu-5C7GfCeVMHo@public.gmane.org>
2017-04-24 15:45 ` Deucher, Alexander
2017-04-24 6:58 ` [PATCH 03/11] drm/amdgpu/psp: skip loading SDMA/RLCG under SRIOV VF Xiangliang Yu
[not found] ` <1493017089-23101-4-git-send-email-Xiangliang.Yu-5C7GfCeVMHo@public.gmane.org>
2017-04-24 15:46 ` Deucher, Alexander
2017-04-24 6:58 ` [PATCH 04/11] drm/amdgpu/vce4: fix a PSP loading VCE issue Xiangliang Yu
[not found] ` <1493017089-23101-5-git-send-email-Xiangliang.Yu-5C7GfCeVMHo@public.gmane.org>
2017-04-24 15:49 ` Deucher, Alexander
2017-04-24 6:58 ` [PATCH 05/11] drm/amdgpu/vce4: move mm table constructions functions into mmsch header file Xiangliang Yu
[not found] ` <1493017089-23101-6-git-send-email-Xiangliang.Yu-5C7GfCeVMHo@public.gmane.org>
2017-04-24 15:53 ` Deucher, Alexander
2017-04-24 6:58 ` [PATCH 06/11] drm/amdgpu/soc15: enable UVD code path for sriov Xiangliang Yu
[not found] ` <1493017089-23101-7-git-send-email-Xiangliang.Yu-5C7GfCeVMHo@public.gmane.org>
2017-04-24 15:54 ` Deucher, Alexander
2017-04-24 6:58 ` [PATCH 07/11] drm/amdgpu/virt: add two functions for MM table Xiangliang Yu
[not found] ` <1493017089-23101-8-git-send-email-Xiangliang.Yu-5C7GfCeVMHo@public.gmane.org>
2017-04-24 16:02 ` Deucher, Alexander
2017-04-24 6:58 ` [PATCH 08/11] drm/amdgpu/vce4: replaced with virt_alloc_mm_table Xiangliang Yu
[not found] ` <1493017089-23101-9-git-send-email-Xiangliang.Yu-5C7GfCeVMHo@public.gmane.org>
2017-04-24 16:02 ` Deucher, Alexander
2017-04-24 6:58 ` [PATCH 09/11] drm/amdgpu/uvd7: add sriov uvd initialization sequences Xiangliang Yu
[not found] ` <1493017089-23101-10-git-send-email-Xiangliang.Yu-5C7GfCeVMHo@public.gmane.org>
2017-04-24 15:56 ` Deucher, Alexander
2017-04-24 6:58 ` [PATCH 10/11] drm/amdgpu/uvd7: add uvd doorbell initialization for sriov Xiangliang Yu
2017-04-24 6:58 ` [PATCH 11/11] drm/amdgpu/uvd7: add UVD hw init sequences " Xiangliang Yu
[not found] ` <1493017089-23101-12-git-send-email-Xiangliang.Yu-5C7GfCeVMHo@public.gmane.org>
2017-04-24 15:59 ` Deucher, Alexander