* [PATCH v3] drm/amdgpu: dequeue mes scheduler during fini
@ 2022-10-18 5:37 YuBiao Wang
2022-10-18 7:43 ` Xiao, Jack
0 siblings, 1 reply; 6+ messages in thread
From: YuBiao Wang @ 2022-10-18 5:37 UTC (permalink / raw)
To: amd-gfx
Cc: YuBiao Wang, Andrey Grodzovsky, Jack Xiao, Feifei Xu,
horace.chen, Kevin Wang, Tuikov Luben, Deucher Alexander,
Evan Quan, Christian König, Monk Liu, Hawking Zhang
[Why]
If MES is not dequeued during fini, it is left in an uncleaned state
across reload; MES then cannot receive some commands, which leads to
reload failure.
[How]
Perform the MES dequeue via MMIO after all unmap jobs have been
completed by MES and before KIQ fini.
v3: Move the dequeue operation inside kiq_hw_fini.
Signed-off-by: YuBiao Wang <YuBiao.Wang@amd.com>
---
drivers/gpu/drm/amd/amdgpu/mes_v11_0.c | 42 ++++++++++++++++++++++++--
1 file changed, 39 insertions(+), 3 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/mes_v11_0.c b/drivers/gpu/drm/amd/amdgpu/mes_v11_0.c
index 1174dcc88db5..b477bed40d61 100644
--- a/drivers/gpu/drm/amd/amdgpu/mes_v11_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/mes_v11_0.c
@@ -1151,6 +1151,42 @@ static int mes_v11_0_sw_fini(void *handle)
return 0;
}
+static void mes_v11_0_kiq_dequeue_sched(struct amdgpu_device *adev)
+{
+ uint32_t data;
+ int i;
+
+ mutex_lock(&adev->srbm_mutex);
+ soc21_grbm_select(adev, 3, AMDGPU_MES_SCHED_PIPE, 0, 0);
+
+ /* disable the queue if it's active */
+ if (RREG32_SOC15(GC, 0, regCP_HQD_ACTIVE) & 1) {
+ WREG32_SOC15(GC, 0, regCP_HQD_DEQUEUE_REQUEST, 1);
+ for (i = 0; i < adev->usec_timeout; i++) {
+ if (!(RREG32_SOC15(GC, 0, regCP_HQD_ACTIVE) & 1))
+ break;
+ udelay(1);
+ }
+ }
+ data = RREG32_SOC15(GC, 0, regCP_HQD_PQ_DOORBELL_CONTROL);
+ data = REG_SET_FIELD(data, CP_HQD_PQ_DOORBELL_CONTROL,
+ DOORBELL_EN, 0);
+ data = REG_SET_FIELD(data, CP_HQD_PQ_DOORBELL_CONTROL,
+ DOORBELL_HIT, 1);
+ WREG32_SOC15(GC, 0, regCP_HQD_PQ_DOORBELL_CONTROL, data);
+
+ WREG32_SOC15(GC, 0, regCP_HQD_PQ_DOORBELL_CONTROL, 0);
+
+ WREG32_SOC15(GC, 0, regCP_HQD_PQ_WPTR_LO, 0);
+ WREG32_SOC15(GC, 0, regCP_HQD_PQ_WPTR_HI, 0);
+ WREG32_SOC15(GC, 0, regCP_HQD_PQ_RPTR, 0);
+
+ soc21_grbm_select(adev, 0, 0, 0, 0);
+ mutex_unlock(&adev->srbm_mutex);
+
+ adev->mes.ring.sched.ready = false;
+}
+
static void mes_v11_0_kiq_setting(struct amdgpu_ring *ring)
{
uint32_t tmp;
@@ -1202,6 +1238,9 @@ static int mes_v11_0_kiq_hw_init(struct amdgpu_device *adev)
static int mes_v11_0_kiq_hw_fini(struct amdgpu_device *adev)
{
+ if (adev->mes.ring.sched.ready)
+ mes_v11_0_kiq_dequeue_sched(adev);
+
mes_v11_0_enable(adev, false);
return 0;
}
@@ -1257,9 +1296,6 @@ static int mes_v11_0_hw_init(void *handle)
static int mes_v11_0_hw_fini(void *handle)
{
- struct amdgpu_device *adev = (struct amdgpu_device *)handle;
-
- adev->mes.ring.sched.ready = false;
return 0;
}
--
2.25.1
* Re: [PATCH v3] drm/amdgpu: dequeue mes scheduler during fini
2022-10-18 5:37 [PATCH v3] drm/amdgpu: dequeue mes scheduler during fini YuBiao Wang
@ 2022-10-18 7:43 ` Xiao, Jack
0 siblings, 0 replies; 6+ messages in thread
From: Xiao, Jack @ 2022-10-18 7:43 UTC (permalink / raw)
To: Wang, YuBiao, amd-gfx
Cc: Grodzovsky, Andrey, Wang, Yang(Kevin),
Xu, Feifei, Chen, Horace, Tuikov, Luben, Deucher, Alexander,
Quan, Evan, Koenig, Christian, Liu, Monk, Zhang, Hawking
[AMD Official Use Only - General]
Reviewed-by: Jack Xiao <Jack.Xiao@amd.com>
Regards,
Jack
________________________________
From: amd-gfx <amd-gfx-bounces@lists.freedesktop.org> on behalf of YuBiao Wang <YuBiao.Wang@amd.com>
Sent: Tuesday, 18 October 2022 13:37
To: amd-gfx@lists.freedesktop.org <amd-gfx@lists.freedesktop.org>
Cc: Wang, YuBiao <YuBiao.Wang@amd.com>; Grodzovsky, Andrey <Andrey.Grodzovsky@amd.com>; Xiao, Jack <Jack.Xiao@amd.com>; Xu, Feifei <Feifei.Xu@amd.com>; Chen, Horace <Horace.Chen@amd.com>; Wang, Yang(Kevin) <KevinYang.Wang@amd.com>; Tuikov, Luben <Luben.Tuikov@amd.com>; Deucher, Alexander <Alexander.Deucher@amd.com>; Quan, Evan <Evan.Quan@amd.com>; Koenig, Christian <Christian.Koenig@amd.com>; Liu, Monk <Monk.Liu@amd.com>; Zhang, Hawking <Hawking.Zhang@amd.com>
Subject: [PATCH v3] drm/amdgpu: dequeue mes scheduler during fini
[Why]
If MES is not dequeued during fini, it is left in an uncleaned state
across reload; MES then cannot receive some commands, which leads to
reload failure.
[How]
Perform the MES dequeue via MMIO after all unmap jobs have been
completed by MES and before KIQ fini.
v3: Move the dequeue operation inside kiq_hw_fini.
Signed-off-by: YuBiao Wang <YuBiao.Wang@amd.com>
---
drivers/gpu/drm/amd/amdgpu/mes_v11_0.c | 42 ++++++++++++++++++++++++--
1 file changed, 39 insertions(+), 3 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/mes_v11_0.c b/drivers/gpu/drm/amd/amdgpu/mes_v11_0.c
index 1174dcc88db5..b477bed40d61 100644
--- a/drivers/gpu/drm/amd/amdgpu/mes_v11_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/mes_v11_0.c
@@ -1151,6 +1151,42 @@ static int mes_v11_0_sw_fini(void *handle)
return 0;
}
+static void mes_v11_0_kiq_dequeue_sched(struct amdgpu_device *adev)
+{
+ uint32_t data;
+ int i;
+
+ mutex_lock(&adev->srbm_mutex);
+ soc21_grbm_select(adev, 3, AMDGPU_MES_SCHED_PIPE, 0, 0);
+
+ /* disable the queue if it's active */
+ if (RREG32_SOC15(GC, 0, regCP_HQD_ACTIVE) & 1) {
+ WREG32_SOC15(GC, 0, regCP_HQD_DEQUEUE_REQUEST, 1);
+ for (i = 0; i < adev->usec_timeout; i++) {
+ if (!(RREG32_SOC15(GC, 0, regCP_HQD_ACTIVE) & 1))
+ break;
+ udelay(1);
+ }
+ }
+ data = RREG32_SOC15(GC, 0, regCP_HQD_PQ_DOORBELL_CONTROL);
+ data = REG_SET_FIELD(data, CP_HQD_PQ_DOORBELL_CONTROL,
+ DOORBELL_EN, 0);
+ data = REG_SET_FIELD(data, CP_HQD_PQ_DOORBELL_CONTROL,
+ DOORBELL_HIT, 1);
+ WREG32_SOC15(GC, 0, regCP_HQD_PQ_DOORBELL_CONTROL, data);
+
+ WREG32_SOC15(GC, 0, regCP_HQD_PQ_DOORBELL_CONTROL, 0);
+
+ WREG32_SOC15(GC, 0, regCP_HQD_PQ_WPTR_LO, 0);
+ WREG32_SOC15(GC, 0, regCP_HQD_PQ_WPTR_HI, 0);
+ WREG32_SOC15(GC, 0, regCP_HQD_PQ_RPTR, 0);
+
+ soc21_grbm_select(adev, 0, 0, 0, 0);
+ mutex_unlock(&adev->srbm_mutex);
+
+ adev->mes.ring.sched.ready = false;
+}
+
static void mes_v11_0_kiq_setting(struct amdgpu_ring *ring)
{
uint32_t tmp;
@@ -1202,6 +1238,9 @@ static int mes_v11_0_kiq_hw_init(struct amdgpu_device *adev)
static int mes_v11_0_kiq_hw_fini(struct amdgpu_device *adev)
{
+ if (adev->mes.ring.sched.ready)
+ mes_v11_0_kiq_dequeue_sched(adev);
+
mes_v11_0_enable(adev, false);
return 0;
}
@@ -1257,9 +1296,6 @@ static int mes_v11_0_hw_init(void *handle)
static int mes_v11_0_hw_fini(void *handle)
{
- struct amdgpu_device *adev = (struct amdgpu_device *)handle;
-
- adev->mes.ring.sched.ready = false;
return 0;
}
--
2.25.1
* Re: [PATCH v3] drm/amdgpu: dequeue mes scheduler during fini
2022-10-14 3:34 YuBiao Wang
2022-10-14 7:08 ` Christian König
@ 2022-10-18 3:37 ` Xiao, Jack
1 sibling, 0 replies; 6+ messages in thread
From: Xiao, Jack @ 2022-10-18 3:37 UTC (permalink / raw)
To: Wang, YuBiao, amd-gfx
Cc: Grodzovsky, Andrey, Wang, Yang(Kevin),
Xu, Feifei, Chen, Horace, Tuikov, Luben, Deucher, Alexander,
Quan, Evan, Koenig, Christian, Liu, Monk, Zhang, Hawking
[AMD Official Use Only - General]
>> + /* dequeue sched pipe via kiq */
>> + void (*kiq_dequeue_sched)(struct amdgpu_device *adev);
>> +
It's unnecessary to expose a new callback to dequeue the scheduler queue;
the kiq_fini or mes_hw_fini callbacks already provide the appropriate place
in the fini sequence to do this.
In addition, the issue seems to be that stale CP_HQD_PQ_WPTR/CONTROL registers make the scheduler read from an unexpected address once CP_MES_CNTL is enabled:
a. since the dequeue operation uses MMIO rather than the KIQ, can it be moved into mes_v11_hw_fini?
b. if something goes wrong, it's better to move the dequeue operation inside mes_v11_0_kiq_hw_fini, before disabling the MES controller.
Regards,
Jack
________________________________
From: YuBiao Wang <YuBiao.Wang@amd.com>
Sent: Friday, 14 October 2022 11:34
To: amd-gfx@lists.freedesktop.org <amd-gfx@lists.freedesktop.org>
Cc: Grodzovsky, Andrey <Andrey.Grodzovsky@amd.com>; Quan, Evan <Evan.Quan@amd.com>; Chen, Horace <Horace.Chen@amd.com>; Tuikov, Luben <Luben.Tuikov@amd.com>; Koenig, Christian <Christian.Koenig@amd.com>; Deucher, Alexander <Alexander.Deucher@amd.com>; Xiao, Jack <Jack.Xiao@amd.com>; Zhang, Hawking <Hawking.Zhang@amd.com>; Liu, Monk <Monk.Liu@amd.com>; Xu, Feifei <Feifei.Xu@amd.com>; Wang, Yang(Kevin) <KevinYang.Wang@amd.com>; Wang, YuBiao <YuBiao.Wang@amd.com>
Subject: [PATCH v3] drm/amdgpu: dequeue mes scheduler during fini
Update: Remove redundant comments as Christian suggests.
[Why]
If MES is not dequeued during fini, it is left in an uncleaned state
across reload; MES then cannot receive some commands, which leads to
reload failure.
[How]
Perform the MES dequeue via MMIO after all unmap jobs have been
completed by MES and before KIQ fini.
Signed-off-by: YuBiao Wang <YuBiao.Wang@amd.com>
---
drivers/gpu/drm/amd/amdgpu/amdgpu_mes.h | 3 ++
drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c | 3 ++
drivers/gpu/drm/amd/amdgpu/mes_v11_0.c | 41 +++++++++++++++++++++++--
3 files changed, 44 insertions(+), 3 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.h
index ad980f4b66e1..ea8efa52503b 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.h
@@ -130,6 +130,9 @@ struct amdgpu_mes {
int (*kiq_hw_init)(struct amdgpu_device *adev);
int (*kiq_hw_fini)(struct amdgpu_device *adev);
+ /* dequeue sched pipe via kiq */
+ void (*kiq_dequeue_sched)(struct amdgpu_device *adev);
+
/* ip specific functions */
const struct amdgpu_mes_funcs *funcs;
};
diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
index 257b2e4de600..7c75758f58e1 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
@@ -4406,6 +4406,9 @@ static int gfx_v11_0_hw_fini(void *handle)
if (amdgpu_gfx_disable_kcq(adev))
DRM_ERROR("KCQ disable failed\n");
+ if (adev->mes.ring.sched.ready && adev->mes.kiq_dequeue_sched)
+ adev->mes.kiq_dequeue_sched(adev);
+
amdgpu_mes_kiq_hw_fini(adev);
}
diff --git a/drivers/gpu/drm/amd/amdgpu/mes_v11_0.c b/drivers/gpu/drm/amd/amdgpu/mes_v11_0.c
index b48684db2832..eef29646b074 100644
--- a/drivers/gpu/drm/amd/amdgpu/mes_v11_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/mes_v11_0.c
@@ -44,6 +44,7 @@ MODULE_FIRMWARE("amdgpu/gc_11_0_3_mes1.bin");
static int mes_v11_0_hw_fini(void *handle);
static int mes_v11_0_kiq_hw_init(struct amdgpu_device *adev);
static int mes_v11_0_kiq_hw_fini(struct amdgpu_device *adev);
+static void mes_v11_0_kiq_dequeue_sched(struct amdgpu_device *adev);
#define MES_EOP_SIZE 2048
@@ -1078,6 +1079,7 @@ static int mes_v11_0_sw_init(void *handle)
adev->mes.funcs = &mes_v11_0_funcs;
adev->mes.kiq_hw_init = &mes_v11_0_kiq_hw_init;
adev->mes.kiq_hw_fini = &mes_v11_0_kiq_hw_fini;
+ adev->mes.kiq_dequeue_sched = &mes_v11_0_kiq_dequeue_sched;
r = amdgpu_mes_init(adev);
if (r)
@@ -1151,6 +1153,42 @@ static int mes_v11_0_sw_fini(void *handle)
return 0;
}
+static void mes_v11_0_kiq_dequeue_sched(struct amdgpu_device *adev)
+{
+ uint32_t data;
+ int i;
+
+ mutex_lock(&adev->srbm_mutex);
+ soc21_grbm_select(adev, 3, AMDGPU_MES_SCHED_PIPE, 0, 0);
+
+ /* disable the queue if it's active */
+ if (RREG32_SOC15(GC, 0, regCP_HQD_ACTIVE) & 1) {
+ WREG32_SOC15(GC, 0, regCP_HQD_DEQUEUE_REQUEST, 1);
+ for (i = 0; i < adev->usec_timeout; i++) {
+ if (!(RREG32_SOC15(GC, 0, regCP_HQD_ACTIVE) & 1))
+ break;
+ udelay(1);
+ }
+ }
+ data = RREG32_SOC15(GC, 0, regCP_HQD_PQ_DOORBELL_CONTROL);
+ data = REG_SET_FIELD(data, CP_HQD_PQ_DOORBELL_CONTROL,
+ DOORBELL_EN, 0);
+ data = REG_SET_FIELD(data, CP_HQD_PQ_DOORBELL_CONTROL,
+ DOORBELL_HIT, 1);
+ WREG32_SOC15(GC, 0, regCP_HQD_PQ_DOORBELL_CONTROL, data);
+
+ WREG32_SOC15(GC, 0, regCP_HQD_PQ_DOORBELL_CONTROL, 0);
+
+ WREG32_SOC15(GC, 0, regCP_HQD_PQ_WPTR_LO, 0);
+ WREG32_SOC15(GC, 0, regCP_HQD_PQ_WPTR_HI, 0);
+ WREG32_SOC15(GC, 0, regCP_HQD_PQ_RPTR, 0);
+
+ soc21_grbm_select(adev, 0, 0, 0, 0);
+ mutex_unlock(&adev->srbm_mutex);
+
+ adev->mes.ring.sched.ready = false;
+}
+
static void mes_v11_0_kiq_setting(struct amdgpu_ring *ring)
{
uint32_t tmp;
@@ -1257,9 +1295,6 @@ static int mes_v11_0_hw_init(void *handle)
static int mes_v11_0_hw_fini(void *handle)
{
- struct amdgpu_device *adev = (struct amdgpu_device *)handle;
-
- adev->mes.ring.sched.ready = false;
return 0;
}
--
2.25.1
* RE: [PATCH v3] drm/amdgpu: dequeue mes scheduler during fini
2022-10-14 7:08 ` Christian König
@ 2022-10-18 2:13 ` Wang, YuBiao
0 siblings, 0 replies; 6+ messages in thread
From: Wang, YuBiao @ 2022-10-18 2:13 UTC (permalink / raw)
To: Deucher, Alexander, Koenig, Christian, amd-gfx
Cc: Grodzovsky, Andrey, Xiao, Jack, Wang, Yang(Kevin),
Xu, Feifei, Chen, Horace, Tuikov, Luben, Quan, Evan, Liu, Monk,
Zhang, Hawking
Hi Alex,
Could you help review this patch? Thanks.
Best Regards,
Yubiao Wang
-----Original Message-----
From: Koenig, Christian <Christian.Koenig@amd.com>
Sent: Friday, October 14, 2022 3:08 PM
To: Wang, YuBiao <YuBiao.Wang@amd.com>; amd-gfx@lists.freedesktop.org
Cc: Grodzovsky, Andrey <Andrey.Grodzovsky@amd.com>; Quan, Evan <Evan.Quan@amd.com>; Chen, Horace <Horace.Chen@amd.com>; Tuikov, Luben <Luben.Tuikov@amd.com>; Deucher, Alexander <Alexander.Deucher@amd.com>; Xiao, Jack <Jack.Xiao@amd.com>; Zhang, Hawking <Hawking.Zhang@amd.com>; Liu, Monk <Monk.Liu@amd.com>; Xu, Feifei <Feifei.Xu@amd.com>; Wang, Yang(Kevin) <KevinYang.Wang@amd.com>
Subject: Re: [PATCH v3] drm/amdgpu: dequeue mes scheduler during fini
Am 14.10.22 um 05:34 schrieb YuBiao Wang:
> Update: Remove redundant comments as Christian suggests.
>
> [Why]
> If MES is not dequeued during fini, it is left in an uncleaned state
> across reload; MES then cannot receive some commands, which leads to
> reload failure.
>
> [How]
> Perform the MES dequeue via MMIO after all unmap jobs have been
> completed by MES and before KIQ fini.
>
> Signed-off-by: YuBiao Wang <YuBiao.Wang@amd.com>
Acked-by: Christian König <christian.koenig@amd.com>
> ---
> drivers/gpu/drm/amd/amdgpu/amdgpu_mes.h | 3 ++
> drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c | 3 ++
> drivers/gpu/drm/amd/amdgpu/mes_v11_0.c | 41 +++++++++++++++++++++++--
> 3 files changed, 44 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.h
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.h
> index ad980f4b66e1..ea8efa52503b 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.h
> @@ -130,6 +130,9 @@ struct amdgpu_mes {
> int (*kiq_hw_init)(struct amdgpu_device *adev);
> int (*kiq_hw_fini)(struct amdgpu_device *adev);
>
> + /* dequeue sched pipe via kiq */
> + void (*kiq_dequeue_sched)(struct amdgpu_device *adev);
> +
> /* ip specific functions */
> const struct amdgpu_mes_funcs *funcs;
> };
> diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
> b/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
> index 257b2e4de600..7c75758f58e1 100644
> --- a/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
> @@ -4406,6 +4406,9 @@ static int gfx_v11_0_hw_fini(void *handle)
> if (amdgpu_gfx_disable_kcq(adev))
> DRM_ERROR("KCQ disable failed\n");
>
> + if (adev->mes.ring.sched.ready && adev->mes.kiq_dequeue_sched)
> + adev->mes.kiq_dequeue_sched(adev);
> +
> amdgpu_mes_kiq_hw_fini(adev);
> }
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/mes_v11_0.c
> b/drivers/gpu/drm/amd/amdgpu/mes_v11_0.c
> index b48684db2832..eef29646b074 100644
> --- a/drivers/gpu/drm/amd/amdgpu/mes_v11_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/mes_v11_0.c
> @@ -44,6 +44,7 @@ MODULE_FIRMWARE("amdgpu/gc_11_0_3_mes1.bin");
> static int mes_v11_0_hw_fini(void *handle);
> static int mes_v11_0_kiq_hw_init(struct amdgpu_device *adev);
> static int mes_v11_0_kiq_hw_fini(struct amdgpu_device *adev);
> +static void mes_v11_0_kiq_dequeue_sched(struct amdgpu_device *adev);
>
> #define MES_EOP_SIZE 2048
>
> @@ -1078,6 +1079,7 @@ static int mes_v11_0_sw_init(void *handle)
> adev->mes.funcs = &mes_v11_0_funcs;
> adev->mes.kiq_hw_init = &mes_v11_0_kiq_hw_init;
> adev->mes.kiq_hw_fini = &mes_v11_0_kiq_hw_fini;
> + adev->mes.kiq_dequeue_sched = &mes_v11_0_kiq_dequeue_sched;
>
> r = amdgpu_mes_init(adev);
> if (r)
> @@ -1151,6 +1153,42 @@ static int mes_v11_0_sw_fini(void *handle)
> return 0;
> }
>
> +static void mes_v11_0_kiq_dequeue_sched(struct amdgpu_device *adev)
> +{
> + uint32_t data;
> + int i;
> +
> + mutex_lock(&adev->srbm_mutex);
> + soc21_grbm_select(adev, 3, AMDGPU_MES_SCHED_PIPE, 0, 0);
> +
> + /* disable the queue if it's active */
> + if (RREG32_SOC15(GC, 0, regCP_HQD_ACTIVE) & 1) {
> + WREG32_SOC15(GC, 0, regCP_HQD_DEQUEUE_REQUEST, 1);
> + for (i = 0; i < adev->usec_timeout; i++) {
> + if (!(RREG32_SOC15(GC, 0, regCP_HQD_ACTIVE) & 1))
> + break;
> + udelay(1);
> + }
> + }
> + data = RREG32_SOC15(GC, 0, regCP_HQD_PQ_DOORBELL_CONTROL);
> + data = REG_SET_FIELD(data, CP_HQD_PQ_DOORBELL_CONTROL,
> + DOORBELL_EN, 0);
> + data = REG_SET_FIELD(data, CP_HQD_PQ_DOORBELL_CONTROL,
> + DOORBELL_HIT, 1);
> + WREG32_SOC15(GC, 0, regCP_HQD_PQ_DOORBELL_CONTROL, data);
> +
> + WREG32_SOC15(GC, 0, regCP_HQD_PQ_DOORBELL_CONTROL, 0);
> +
> + WREG32_SOC15(GC, 0, regCP_HQD_PQ_WPTR_LO, 0);
> + WREG32_SOC15(GC, 0, regCP_HQD_PQ_WPTR_HI, 0);
> + WREG32_SOC15(GC, 0, regCP_HQD_PQ_RPTR, 0);
> +
> + soc21_grbm_select(adev, 0, 0, 0, 0);
> + mutex_unlock(&adev->srbm_mutex);
> +
> + adev->mes.ring.sched.ready = false;
> +}
> +
> static void mes_v11_0_kiq_setting(struct amdgpu_ring *ring)
> {
> uint32_t tmp;
> @@ -1257,9 +1295,6 @@ static int mes_v11_0_hw_init(void *handle)
>
> static int mes_v11_0_hw_fini(void *handle)
> {
> - struct amdgpu_device *adev = (struct amdgpu_device *)handle;
> -
> - adev->mes.ring.sched.ready = false;
> return 0;
> }
>
* Re: [PATCH v3] drm/amdgpu: dequeue mes scheduler during fini
2022-10-14 3:34 YuBiao Wang
@ 2022-10-14 7:08 ` Christian König
2022-10-18 2:13 ` Wang, YuBiao
2022-10-18 3:37 ` Xiao, Jack
1 sibling, 1 reply; 6+ messages in thread
From: Christian König @ 2022-10-14 7:08 UTC (permalink / raw)
To: YuBiao Wang, amd-gfx
Cc: Andrey Grodzovsky, Jack Xiao, Feifei Xu, horace.chen, Kevin Wang,
Tuikov Luben, Deucher Alexander, Evan Quan, Monk Liu,
Hawking Zhang
Am 14.10.22 um 05:34 schrieb YuBiao Wang:
> Update: Remove redundant comments as Christian suggests.
>
> [Why]
> If MES is not dequeued during fini, it is left in an uncleaned state
> across reload; MES then cannot receive some commands, which leads to
> reload failure.
>
> [How]
> Perform the MES dequeue via MMIO after all unmap jobs have been
> completed by MES and before KIQ fini.
>
> Signed-off-by: YuBiao Wang <YuBiao.Wang@amd.com>
Acked-by: Christian König <christian.koenig@amd.com>
> ---
> drivers/gpu/drm/amd/amdgpu/amdgpu_mes.h | 3 ++
> drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c | 3 ++
> drivers/gpu/drm/amd/amdgpu/mes_v11_0.c | 41 +++++++++++++++++++++++--
> 3 files changed, 44 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.h
> index ad980f4b66e1..ea8efa52503b 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.h
> @@ -130,6 +130,9 @@ struct amdgpu_mes {
> int (*kiq_hw_init)(struct amdgpu_device *adev);
> int (*kiq_hw_fini)(struct amdgpu_device *adev);
>
> + /* dequeue sched pipe via kiq */
> + void (*kiq_dequeue_sched)(struct amdgpu_device *adev);
> +
> /* ip specific functions */
> const struct amdgpu_mes_funcs *funcs;
> };
> diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
> index 257b2e4de600..7c75758f58e1 100644
> --- a/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
> @@ -4406,6 +4406,9 @@ static int gfx_v11_0_hw_fini(void *handle)
> if (amdgpu_gfx_disable_kcq(adev))
> DRM_ERROR("KCQ disable failed\n");
>
> + if (adev->mes.ring.sched.ready && adev->mes.kiq_dequeue_sched)
> + adev->mes.kiq_dequeue_sched(adev);
> +
> amdgpu_mes_kiq_hw_fini(adev);
> }
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/mes_v11_0.c b/drivers/gpu/drm/amd/amdgpu/mes_v11_0.c
> index b48684db2832..eef29646b074 100644
> --- a/drivers/gpu/drm/amd/amdgpu/mes_v11_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/mes_v11_0.c
> @@ -44,6 +44,7 @@ MODULE_FIRMWARE("amdgpu/gc_11_0_3_mes1.bin");
> static int mes_v11_0_hw_fini(void *handle);
> static int mes_v11_0_kiq_hw_init(struct amdgpu_device *adev);
> static int mes_v11_0_kiq_hw_fini(struct amdgpu_device *adev);
> +static void mes_v11_0_kiq_dequeue_sched(struct amdgpu_device *adev);
>
> #define MES_EOP_SIZE 2048
>
> @@ -1078,6 +1079,7 @@ static int mes_v11_0_sw_init(void *handle)
> adev->mes.funcs = &mes_v11_0_funcs;
> adev->mes.kiq_hw_init = &mes_v11_0_kiq_hw_init;
> adev->mes.kiq_hw_fini = &mes_v11_0_kiq_hw_fini;
> + adev->mes.kiq_dequeue_sched = &mes_v11_0_kiq_dequeue_sched;
>
> r = amdgpu_mes_init(adev);
> if (r)
> @@ -1151,6 +1153,42 @@ static int mes_v11_0_sw_fini(void *handle)
> return 0;
> }
>
> +static void mes_v11_0_kiq_dequeue_sched(struct amdgpu_device *adev)
> +{
> + uint32_t data;
> + int i;
> +
> + mutex_lock(&adev->srbm_mutex);
> + soc21_grbm_select(adev, 3, AMDGPU_MES_SCHED_PIPE, 0, 0);
> +
> + /* disable the queue if it's active */
> + if (RREG32_SOC15(GC, 0, regCP_HQD_ACTIVE) & 1) {
> + WREG32_SOC15(GC, 0, regCP_HQD_DEQUEUE_REQUEST, 1);
> + for (i = 0; i < adev->usec_timeout; i++) {
> + if (!(RREG32_SOC15(GC, 0, regCP_HQD_ACTIVE) & 1))
> + break;
> + udelay(1);
> + }
> + }
> + data = RREG32_SOC15(GC, 0, regCP_HQD_PQ_DOORBELL_CONTROL);
> + data = REG_SET_FIELD(data, CP_HQD_PQ_DOORBELL_CONTROL,
> + DOORBELL_EN, 0);
> + data = REG_SET_FIELD(data, CP_HQD_PQ_DOORBELL_CONTROL,
> + DOORBELL_HIT, 1);
> + WREG32_SOC15(GC, 0, regCP_HQD_PQ_DOORBELL_CONTROL, data);
> +
> + WREG32_SOC15(GC, 0, regCP_HQD_PQ_DOORBELL_CONTROL, 0);
> +
> + WREG32_SOC15(GC, 0, regCP_HQD_PQ_WPTR_LO, 0);
> + WREG32_SOC15(GC, 0, regCP_HQD_PQ_WPTR_HI, 0);
> + WREG32_SOC15(GC, 0, regCP_HQD_PQ_RPTR, 0);
> +
> + soc21_grbm_select(adev, 0, 0, 0, 0);
> + mutex_unlock(&adev->srbm_mutex);
> +
> + adev->mes.ring.sched.ready = false;
> +}
> +
> static void mes_v11_0_kiq_setting(struct amdgpu_ring *ring)
> {
> uint32_t tmp;
> @@ -1257,9 +1295,6 @@ static int mes_v11_0_hw_init(void *handle)
>
> static int mes_v11_0_hw_fini(void *handle)
> {
> - struct amdgpu_device *adev = (struct amdgpu_device *)handle;
> -
> - adev->mes.ring.sched.ready = false;
> return 0;
> }
>
* [PATCH v3] drm/amdgpu: dequeue mes scheduler during fini
@ 2022-10-14 3:34 YuBiao Wang
2022-10-14 7:08 ` Christian König
2022-10-18 3:37 ` Xiao, Jack
0 siblings, 2 replies; 6+ messages in thread
From: YuBiao Wang @ 2022-10-14 3:34 UTC (permalink / raw)
To: amd-gfx
Cc: YuBiao Wang, Andrey Grodzovsky, Jack Xiao, Feifei Xu,
horace.chen, Kevin Wang, Tuikov Luben, Deucher Alexander,
Evan Quan, Christian König, Monk Liu, Hawking Zhang
Update: Remove redundant comments as Christian suggests.
[Why]
If MES is not dequeued during fini, it is left in an uncleaned state
across reload; MES then cannot receive some commands, which leads to
reload failure.
[How]
Perform the MES dequeue via MMIO after all unmap jobs have been
completed by MES and before KIQ fini.
Signed-off-by: YuBiao Wang <YuBiao.Wang@amd.com>
---
drivers/gpu/drm/amd/amdgpu/amdgpu_mes.h | 3 ++
drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c | 3 ++
drivers/gpu/drm/amd/amdgpu/mes_v11_0.c | 41 +++++++++++++++++++++++--
3 files changed, 44 insertions(+), 3 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.h
index ad980f4b66e1..ea8efa52503b 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.h
@@ -130,6 +130,9 @@ struct amdgpu_mes {
int (*kiq_hw_init)(struct amdgpu_device *adev);
int (*kiq_hw_fini)(struct amdgpu_device *adev);
+ /* dequeue sched pipe via kiq */
+ void (*kiq_dequeue_sched)(struct amdgpu_device *adev);
+
/* ip specific functions */
const struct amdgpu_mes_funcs *funcs;
};
diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
index 257b2e4de600..7c75758f58e1 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
@@ -4406,6 +4406,9 @@ static int gfx_v11_0_hw_fini(void *handle)
if (amdgpu_gfx_disable_kcq(adev))
DRM_ERROR("KCQ disable failed\n");
+ if (adev->mes.ring.sched.ready && adev->mes.kiq_dequeue_sched)
+ adev->mes.kiq_dequeue_sched(adev);
+
amdgpu_mes_kiq_hw_fini(adev);
}
diff --git a/drivers/gpu/drm/amd/amdgpu/mes_v11_0.c b/drivers/gpu/drm/amd/amdgpu/mes_v11_0.c
index b48684db2832..eef29646b074 100644
--- a/drivers/gpu/drm/amd/amdgpu/mes_v11_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/mes_v11_0.c
@@ -44,6 +44,7 @@ MODULE_FIRMWARE("amdgpu/gc_11_0_3_mes1.bin");
static int mes_v11_0_hw_fini(void *handle);
static int mes_v11_0_kiq_hw_init(struct amdgpu_device *adev);
static int mes_v11_0_kiq_hw_fini(struct amdgpu_device *adev);
+static void mes_v11_0_kiq_dequeue_sched(struct amdgpu_device *adev);
#define MES_EOP_SIZE 2048
@@ -1078,6 +1079,7 @@ static int mes_v11_0_sw_init(void *handle)
adev->mes.funcs = &mes_v11_0_funcs;
adev->mes.kiq_hw_init = &mes_v11_0_kiq_hw_init;
adev->mes.kiq_hw_fini = &mes_v11_0_kiq_hw_fini;
+ adev->mes.kiq_dequeue_sched = &mes_v11_0_kiq_dequeue_sched;
r = amdgpu_mes_init(adev);
if (r)
@@ -1151,6 +1153,42 @@ static int mes_v11_0_sw_fini(void *handle)
return 0;
}
+static void mes_v11_0_kiq_dequeue_sched(struct amdgpu_device *adev)
+{
+ uint32_t data;
+ int i;
+
+ mutex_lock(&adev->srbm_mutex);
+ soc21_grbm_select(adev, 3, AMDGPU_MES_SCHED_PIPE, 0, 0);
+
+ /* disable the queue if it's active */
+ if (RREG32_SOC15(GC, 0, regCP_HQD_ACTIVE) & 1) {
+ WREG32_SOC15(GC, 0, regCP_HQD_DEQUEUE_REQUEST, 1);
+ for (i = 0; i < adev->usec_timeout; i++) {
+ if (!(RREG32_SOC15(GC, 0, regCP_HQD_ACTIVE) & 1))
+ break;
+ udelay(1);
+ }
+ }
+ data = RREG32_SOC15(GC, 0, regCP_HQD_PQ_DOORBELL_CONTROL);
+ data = REG_SET_FIELD(data, CP_HQD_PQ_DOORBELL_CONTROL,
+ DOORBELL_EN, 0);
+ data = REG_SET_FIELD(data, CP_HQD_PQ_DOORBELL_CONTROL,
+ DOORBELL_HIT, 1);
+ WREG32_SOC15(GC, 0, regCP_HQD_PQ_DOORBELL_CONTROL, data);
+
+ WREG32_SOC15(GC, 0, regCP_HQD_PQ_DOORBELL_CONTROL, 0);
+
+ WREG32_SOC15(GC, 0, regCP_HQD_PQ_WPTR_LO, 0);
+ WREG32_SOC15(GC, 0, regCP_HQD_PQ_WPTR_HI, 0);
+ WREG32_SOC15(GC, 0, regCP_HQD_PQ_RPTR, 0);
+
+ soc21_grbm_select(adev, 0, 0, 0, 0);
+ mutex_unlock(&adev->srbm_mutex);
+
+ adev->mes.ring.sched.ready = false;
+}
+
static void mes_v11_0_kiq_setting(struct amdgpu_ring *ring)
{
uint32_t tmp;
@@ -1257,9 +1295,6 @@ static int mes_v11_0_hw_init(void *handle)
static int mes_v11_0_hw_fini(void *handle)
{
- struct amdgpu_device *adev = (struct amdgpu_device *)handle;
-
- adev->mes.ring.sched.ready = false;
return 0;
}
--
2.25.1