* [PATCH 1/2] drm/amdgpu: enhance amdgpu_vcn_suspend
@ 2021-05-17 14:57 James Zhu
  2021-05-17 14:57 ` [PATCH 2/2] drm/amdgpu: remove unused vcn_v3_0_hw_fini James Zhu
                   ` (2 more replies)
  0 siblings, 3 replies; 13+ messages in thread
From: James Zhu @ 2021-05-17 14:57 UTC (permalink / raw)
  To: amd-gfx; +Cc: jamesz

During VCN suspend, stop the rings from receiving new requests,
and try to wait for all VCN jobs to finish gracefully.

Signed-off-by: James Zhu <James.Zhu@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c | 20 +++++++++++++++++++-
 1 file changed, 19 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
index 2016459..7e9f5cb 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
@@ -275,9 +275,27 @@ int amdgpu_vcn_suspend(struct amdgpu_device *adev)
 {
 	unsigned size;
 	void *ptr;
+	int retry_max = 6;
 	int i;
 
-	cancel_delayed_work_sync(&adev->vcn.idle_work);
+	for (i = 0; i < adev->vcn.num_vcn_inst; ++i) {
+		if (adev->vcn.harvest_config & (1 << i))
+			continue;
+		ring = &adev->vcn.inst[i].ring_dec;
+		ring->sched.ready = false;
+
+		for (j = 0; j < adev->vcn.num_enc_rings; ++j) {
+			ring = &adev->vcn.inst[i].ring_enc[j];
+			ring->sched.ready = false;
+		}
+	}
+
+	while (retry_max--) {
+		if (cancel_delayed_work_sync(&adev->vcn.idle_work)) {
+			dev_warn(adev->dev, "Waiting for left VCN job(s) to finish gracefully ...");
+			mdelay(5);
+		}
+	}
 
 	for (i = 0; i < adev->vcn.num_vcn_inst; ++i) {
 		if (adev->vcn.harvest_config & (1 << i))
-- 
2.7.4


* [PATCH 2/2] drm/amdgpu: remove unused vcn_v3_0_hw_fini
  2021-05-17 14:57 [PATCH 1/2] drm/amdgpu: enhance amdgpu_vcn_suspend James Zhu
@ 2021-05-17 14:57 ` James Zhu
  2021-05-17 15:52 ` [PATCH v2 1/2] drm/amdgpu: enhance amdgpu_vcn_suspend James Zhu
  2021-05-17 19:43 ` [PATCH " Christian König
  2 siblings, 0 replies; 13+ messages in thread
From: James Zhu @ 2021-05-17 14:57 UTC (permalink / raw)
  To: amd-gfx; +Cc: jamesz

Remove the now-unused vcn_v3_0_hw_fini once the enhanced
common amdgpu_vcn_suspend is applied.

Signed-off-by: James Zhu <James.Zhu@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
index cf165ab..e7505ec 100644
--- a/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c
@@ -411,10 +411,6 @@ static int vcn_v3_0_suspend(void *handle)
 	int r;
 	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
 
-	r = vcn_v3_0_hw_fini(adev);
-	if (r)
-		return r;
-
 	r = amdgpu_vcn_suspend(adev);
 
 	return r;
-- 
2.7.4


* [PATCH v2 1/2] drm/amdgpu: enhance amdgpu_vcn_suspend
  2021-05-17 14:57 [PATCH 1/2] drm/amdgpu: enhance amdgpu_vcn_suspend James Zhu
  2021-05-17 14:57 ` [PATCH 2/2] drm/amdgpu: remove unused vcn_v3_0_hw_fini James Zhu
@ 2021-05-17 15:52 ` James Zhu
  2021-05-17 16:34   ` Leo Liu
  2021-05-17 19:43 ` [PATCH " Christian König
  2 siblings, 1 reply; 13+ messages in thread
From: James Zhu @ 2021-05-17 15:52 UTC (permalink / raw)
  To: amd-gfx; +Cc: jamesz

During VCN suspend, stop the rings from receiving new requests,
and try to wait for all VCN jobs to finish gracefully.

v2: Force power gating of the VCN hardware after a few waiting retries.

Signed-off-by: James Zhu <James.Zhu@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c | 22 +++++++++++++++++++++-
 1 file changed, 21 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
index 2016459..9f3a6e7 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
@@ -275,9 +275,29 @@ int amdgpu_vcn_suspend(struct amdgpu_device *adev)
 {
 	unsigned size;
 	void *ptr;
+	int retry_max = 6;
 	int i;
 
-	cancel_delayed_work_sync(&adev->vcn.idle_work);
+	for (i = 0; i < adev->vcn.num_vcn_inst; ++i) {
+		if (adev->vcn.harvest_config & (1 << i))
+			continue;
+		ring = &adev->vcn.inst[i].ring_dec;
+		ring->sched.ready = false;
+
+		for (j = 0; j < adev->vcn.num_enc_rings; ++j) {
+			ring = &adev->vcn.inst[i].ring_enc[j];
+			ring->sched.ready = false;
+		}
+	}
+
+	while (retry_max-- && cancel_delayed_work_sync(&adev->vcn.idle_work))
+		mdelay(5);
+	if (!retry_max && !amdgpu_sriov_vf(adev)) {
+		if (RREG32_SOC15(VCN, i, mmUVD_STATUS)) {
+			dev_warn(adev->dev, "Forced powering gate vcn hardware!");
+			vcn_v3_0_set_powergating_state(adev, AMD_PG_STATE_GATE);
+		}
+	}
 
 	for (i = 0; i < adev->vcn.num_vcn_inst; ++i) {
 		if (adev->vcn.harvest_config & (1 << i))
-- 
2.7.4


* Re: [PATCH v2 1/2] drm/amdgpu: enhance amdgpu_vcn_suspend
  2021-05-17 15:52 ` [PATCH v2 1/2] drm/amdgpu: enhance amdgpu_vcn_suspend James Zhu
@ 2021-05-17 16:34   ` Leo Liu
  2021-05-17 16:54     ` James Zhu
  0 siblings, 1 reply; 13+ messages in thread
From: Leo Liu @ 2021-05-17 16:34 UTC (permalink / raw)
  To: James Zhu, amd-gfx; +Cc: jamesz


On 2021-05-17 11:52 a.m., James Zhu wrote:
> During VCN suspend, stop the rings from receiving new requests,
> and try to wait for all VCN jobs to finish gracefully.
>
> v2: Force power gating of the VCN hardware after a few waiting retries.
>
> Signed-off-by: James Zhu <James.Zhu@amd.com>
> ---
>   drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c | 22 +++++++++++++++++++++-
>   1 file changed, 21 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
> index 2016459..9f3a6e7 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
> @@ -275,9 +275,29 @@ int amdgpu_vcn_suspend(struct amdgpu_device *adev)
>   {
>   	unsigned size;
>   	void *ptr;
> +	int retry_max = 6;
>   	int i;
>   
> -	cancel_delayed_work_sync(&adev->vcn.idle_work);
> +	for (i = 0; i < adev->vcn.num_vcn_inst; ++i) {
> +		if (adev->vcn.harvest_config & (1 << i))
> +			continue;
> +		ring = &adev->vcn.inst[i].ring_dec;
> +		ring->sched.ready = false;
> +
> +		for (j = 0; j < adev->vcn.num_enc_rings; ++j) {
> +			ring = &adev->vcn.inst[i].ring_enc[j];
> +			ring->sched.ready = false;
> +		}
> +	}
> +
> +	while (retry_max-- && cancel_delayed_work_sync(&adev->vcn.idle_work))
> +		mdelay(5);

I think it's possible to have one pending job unprocessed by the VCN when
the suspend sequence gets here, but it shouldn't be more than one;
cancel_delayed_work_sync will probably return false after the first time,
so calling cancel_delayed_work_sync once should be enough here. We
probably need to wait longer with:

SOC15_WAIT_ON_RREG(VCN, inst_idx, mmUVD_POWER_STATUS, 1,
        UVD_POWER_STATUS__UVD_POWER_STATUS_MASK);

to make sure the unprocessed job gets done.
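
(A minimal sketch of the flow suggested here, for illustration only:
cancel the idle worker once, then poll UVD_POWER_STATUS per instance so
any in-flight job can drain. Helper names follow the existing soc15/vcn
code; error handling is omitted.)

	int i;

	/* One synchronous cancel is enough; when it returns, the idle
	 * handler is neither queued nor running. */
	cancel_delayed_work_sync(&adev->vcn.idle_work);

	/* Give any unprocessed job time to finish before power gating. */
	for (i = 0; i < adev->vcn.num_vcn_inst; ++i) {
		if (adev->vcn.harvest_config & (1 << i))
			continue;
		SOC15_WAIT_ON_RREG(VCN, i, mmUVD_POWER_STATUS, 1,
				   UVD_POWER_STATUS__UVD_POWER_STATUS_MASK);
	}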


Regards,

Leo


> +	if (!retry_max && !amdgpu_sriov_vf(adev)) {
> +		if (RREG32_SOC15(VCN, i, mmUVD_STATUS)) {
> +			dev_warn(adev->dev, "Forced powering gate vcn hardware!");
> +			vcn_v3_0_set_powergating_state(adev, AMD_PG_STATE_GATE);
> +		}
> +	}
>   
>   	for (i = 0; i < adev->vcn.num_vcn_inst; ++i) {
>   		if (adev->vcn.harvest_config & (1 << i))

* Re: [PATCH v2 1/2] drm/amdgpu: enhance amdgpu_vcn_suspend
  2021-05-17 16:34   ` Leo Liu
@ 2021-05-17 16:54     ` James Zhu
  2021-05-17 17:43       ` Leo Liu
  0 siblings, 1 reply; 13+ messages in thread
From: James Zhu @ 2021-05-17 16:54 UTC (permalink / raw)
  To: Leo Liu, James Zhu, amd-gfx

I am wondering, if there are still some jobs kept in the queue, whether it
is just lucky to see UVD_POWER_STATUS done, but afterwards the fw starts a
new job listed in the queue.

To handle this situation perfectly, we need to add a mechanism to suspend
the fw first.

In another case, if we are unlucky and the vcn fw hung at that time,
UVD_POWER_STATUS always stays busy; then we need to force power gating of
the vcn hw after waiting a certain time.

Best Regards!

James

On 2021-05-17 12:34 p.m., Leo Liu wrote:
>
> On 2021-05-17 11:52 a.m., James Zhu wrote:
>> During VCN suspend, stop the rings from receiving new requests,
>> and try to wait for all VCN jobs to finish gracefully.
>>
>> v2: Force power gating of the VCN hardware after a few waiting retries.
>>
>> Signed-off-by: James Zhu <James.Zhu@amd.com>
>> ---
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c | 22 +++++++++++++++++++++-
>>   1 file changed, 21 insertions(+), 1 deletion(-)
>>
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c 
>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
>> index 2016459..9f3a6e7 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
>> @@ -275,9 +275,29 @@ int amdgpu_vcn_suspend(struct amdgpu_device *adev)
>>   {
>>       unsigned size;
>>       void *ptr;
>> +    int retry_max = 6;
>>       int i;
>>   -    cancel_delayed_work_sync(&adev->vcn.idle_work);
>> +    for (i = 0; i < adev->vcn.num_vcn_inst; ++i) {
>> +        if (adev->vcn.harvest_config & (1 << i))
>> +            continue;
>> +        ring = &adev->vcn.inst[i].ring_dec;
>> +        ring->sched.ready = false;
>> +
>> +        for (j = 0; j < adev->vcn.num_enc_rings; ++j) {
>> +            ring = &adev->vcn.inst[i].ring_enc[j];
>> +            ring->sched.ready = false;
>> +        }
>> +    }
>> +
>> +    while (retry_max-- && 
>> cancel_delayed_work_sync(&adev->vcn.idle_work))
>> +        mdelay(5);
>
> I think it's possible to have one pending job unprocessed by the VCN
> when the suspend sequence gets here, but it shouldn't be more than one;
> cancel_delayed_work_sync will probably return false after the first
> time, so calling cancel_delayed_work_sync once should be enough here. We
> probably need to wait longer with:
>
> SOC15_WAIT_ON_RREG(VCN, inst_idx, mmUVD_POWER_STATUS, 1,
>         UVD_POWER_STATUS__UVD_POWER_STATUS_MASK);
>
> to make sure the unprocessed job gets done.
>
>
> Regards,
>
> Leo
>
>
>> +    if (!retry_max && !amdgpu_sriov_vf(adev)) {
>> +        if (RREG32_SOC15(VCN, i, mmUVD_STATUS)) {
>> +            dev_warn(adev->dev, "Forced powering gate vcn hardware!");
>> +            vcn_v3_0_set_powergating_state(adev, AMD_PG_STATE_GATE);
>> +        }
>> +    }
>>         for (i = 0; i < adev->vcn.num_vcn_inst; ++i) {
>>           if (adev->vcn.harvest_config & (1 << i))

* Re: [PATCH v2 1/2] drm/amdgpu: enhance amdgpu_vcn_suspend
  2021-05-17 16:54     ` James Zhu
@ 2021-05-17 17:43       ` Leo Liu
  2021-05-17 17:59         ` James Zhu
  0 siblings, 1 reply; 13+ messages in thread
From: Leo Liu @ 2021-05-17 17:43 UTC (permalink / raw)
  To: James Zhu, James Zhu, amd-gfx


On 2021-05-17 12:54 p.m., James Zhu wrote:
> I am wondering, if there are still some jobs kept in the queue, whether
> it is just lucky to see

Yes, it's possible; in this case the delayed handler is set, so cancelling
once is enough.


>
> UVD_POWER_STATUS done, but afterwards the fw starts a new job listed in
> the queue.
>
> To handle this situation perfectly, we need to add a mechanism to
> suspend the fw first.

I think that should be handled by the sequence from 
vcn_v3_0_stop_dpg_mode().


>
> In another case, if we are unlucky and the vcn fw hung at that time,
> UVD_POWER_STATUS always stays busy; then we need to force power gating
> of the vcn hw after waiting a certain time.

Yep, we still need to gate VCN power after a certain timeout.
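
(Sketched together, the wait plus a forced-gate fallback might look like
this; it assumes SOC15_WAIT_ON_RREG returns an error on timeout, and the
warning string is illustrative only.)

	/* Wait for the fw to report power-off; on timeout (e.g. a fw
	 * hang), force the power gate through the generic IP helper. */
	if (SOC15_WAIT_ON_RREG(VCN, inst_idx, mmUVD_POWER_STATUS, 1,
			       UVD_POWER_STATUS__UVD_POWER_STATUS_MASK)) {
		dev_warn(adev->dev, "VCN still busy, forcing power gate\n");
		amdgpu_device_ip_set_powergating_state(adev,
				AMD_IP_BLOCK_TYPE_VCN, AMD_PG_STATE_GATE);
	}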


Regards,

Leo



>
> Best Regards!
>
> James
>
> On 2021-05-17 12:34 p.m., Leo Liu wrote:
>>
>> On 2021-05-17 11:52 a.m., James Zhu wrote:
>>> During VCN suspend, stop the rings from receiving new requests,
>>> and try to wait for all VCN jobs to finish gracefully.
>>>
>>> v2: Force power gating of the VCN hardware after a few waiting retries.
>>>
>>> Signed-off-by: James Zhu <James.Zhu@amd.com>
>>> ---
>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c | 22 +++++++++++++++++++++-
>>>   1 file changed, 21 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c 
>>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
>>> index 2016459..9f3a6e7 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
>>> @@ -275,9 +275,29 @@ int amdgpu_vcn_suspend(struct amdgpu_device *adev)
>>>   {
>>>       unsigned size;
>>>       void *ptr;
>>> +    int retry_max = 6;
>>>       int i;
>>>   -    cancel_delayed_work_sync(&adev->vcn.idle_work);
>>> +    for (i = 0; i < adev->vcn.num_vcn_inst; ++i) {
>>> +        if (adev->vcn.harvest_config & (1 << i))
>>> +            continue;
>>> +        ring = &adev->vcn.inst[i].ring_dec;
>>> +        ring->sched.ready = false;
>>> +
>>> +        for (j = 0; j < adev->vcn.num_enc_rings; ++j) {
>>> +            ring = &adev->vcn.inst[i].ring_enc[j];
>>> +            ring->sched.ready = false;
>>> +        }
>>> +    }
>>> +
>>> +    while (retry_max-- && 
>>> cancel_delayed_work_sync(&adev->vcn.idle_work))
>>> +        mdelay(5);
>>
>> I think it's possible to have one pending job unprocessed by the VCN
>> when the suspend sequence gets here, but it shouldn't be more than one;
>> cancel_delayed_work_sync will probably return false after the first
>> time, so calling cancel_delayed_work_sync once should be enough here.
>> We probably need to wait longer with:
>>
>> SOC15_WAIT_ON_RREG(VCN, inst_idx, mmUVD_POWER_STATUS, 1,
>>         UVD_POWER_STATUS__UVD_POWER_STATUS_MASK);
>>
>> to make sure the unprocessed job gets done.
>>
>>
>> Regards,
>>
>> Leo
>>
>>
>>> +    if (!retry_max && !amdgpu_sriov_vf(adev)) {
>>> +        if (RREG32_SOC15(VCN, i, mmUVD_STATUS)) {
>>> +            dev_warn(adev->dev, "Forced powering gate vcn hardware!");
>>> +            vcn_v3_0_set_powergating_state(adev, AMD_PG_STATE_GATE);
>>> +        }
>>> +    }
>>>         for (i = 0; i < adev->vcn.num_vcn_inst; ++i) {
>>>           if (adev->vcn.harvest_config & (1 << i))

* Re: [PATCH v2 1/2] drm/amdgpu: enhance amdgpu_vcn_suspend
  2021-05-17 17:43       ` Leo Liu
@ 2021-05-17 17:59         ` James Zhu
  2021-05-17 18:07           ` Leo Liu
  0 siblings, 1 reply; 13+ messages in thread
From: James Zhu @ 2021-05-17 17:59 UTC (permalink / raw)
  To: Leo Liu, James Zhu, amd-gfx


Then let's forget the proposal I provided before.

I think the sequence below may fix the race condition issue that we are facing.

1. stop scheduling new jobs

     for (i = 0; i < adev->vcn.num_vcn_inst; ++i) {
         if (adev->vcn.harvest_config & (1 << i))
             continue;

         ring = &adev->vcn.inst[i].ring_dec;
         ring->sched.ready = false;

         for (j = 0; j < adev->vcn.num_enc_rings; ++j) {
             ring = &adev->vcn.inst[i].ring_enc[j];
             ring->sched.ready = false;
         }
     }

2.    cancel_delayed_work_sync(&adev->vcn.idle_work);

3. SOC15_WAIT_ON_RREG(VCN, inst_idx, mmUVD_POWER_STATUS, 1,
          UVD_POWER_STATUS__UVD_POWER_STATUS_MASK);

4. amdgpu_device_ip_set_powergating_state(adev, AMD_IP_BLOCK_TYPE_VCN,   
AMD_PG_STATE_GATE);

5.  saved_bo
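
(Put together, the five steps might read roughly as below; this is only a
sketch, with error handling omitted and the step-5 save left as a comment,
since that is the existing save logic in amdgpu_vcn_suspend().)

	struct amdgpu_ring *ring;
	int i, j;

	/* 1. Stop the schedulers so no new jobs reach the rings. */
	for (i = 0; i < adev->vcn.num_vcn_inst; ++i) {
		if (adev->vcn.harvest_config & (1 << i))
			continue;
		ring = &adev->vcn.inst[i].ring_dec;
		ring->sched.ready = false;
		for (j = 0; j < adev->vcn.num_enc_rings; ++j)
			adev->vcn.inst[i].ring_enc[j].sched.ready = false;
	}

	/* 2. Flush the idle worker once. */
	cancel_delayed_work_sync(&adev->vcn.idle_work);

	/* 3. Let any in-flight job drain, per instance. */
	for (i = 0; i < adev->vcn.num_vcn_inst; ++i) {
		if (adev->vcn.harvest_config & (1 << i))
			continue;
		SOC15_WAIT_ON_RREG(VCN, i, mmUVD_POWER_STATUS, 1,
				   UVD_POWER_STATUS__UVD_POWER_STATUS_MASK);
	}

	/* 4. Gate VCN power through the generic IP helper. */
	amdgpu_device_ip_set_powergating_state(adev, AMD_IP_BLOCK_TYPE_VCN,
					       AMD_PG_STATE_GATE);

	/* 5. saved_bo: copy the fw BO contents out (existing save path). */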

Best Regards!

James

On 2021-05-17 1:43 p.m., Leo Liu wrote:
>
> On 2021-05-17 12:54 p.m., James Zhu wrote:
>> I am wondering, if there are still some jobs kept in the queue, whether
>> it is just lucky to see
>
> Yes, it's possible; in this case the delayed handler is set, so
> cancelling once is enough.
>
>
>>
>> UVD_POWER_STATUS done, but afterwards the fw starts a new job listed in
>> the queue.
>>
>> To handle this situation perfectly, we need to add a mechanism to
>> suspend the fw first.
>
> I think that should be handled by the sequence from 
> vcn_v3_0_stop_dpg_mode().
>
>
>>
>> In another case, if we are unlucky and the vcn fw hung at that time,
>> UVD_POWER_STATUS always stays busy; then we need to force power gating
>> of the vcn hw after waiting a certain time.
>
> Yep, we still need to gate VCN power after a certain timeout.
>
>
> Regards,
>
> Leo
>
>
>
>>
>> Best Regards!
>>
>> James
>>
>> On 2021-05-17 12:34 p.m., Leo Liu wrote:
>>>
>>> On 2021-05-17 11:52 a.m., James Zhu wrote:
>>>> During VCN suspend, stop the rings from receiving new requests,
>>>> and try to wait for all VCN jobs to finish gracefully.
>>>>
>>>> v2: Force power gating of the VCN hardware after a few waiting retries.
>>>>
>>>> Signed-off-by: James Zhu <James.Zhu@amd.com>
>>>> ---
>>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c | 22 +++++++++++++++++++++-
>>>>   1 file changed, 21 insertions(+), 1 deletion(-)
>>>>
>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c 
>>>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
>>>> index 2016459..9f3a6e7 100644
>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
>>>> @@ -275,9 +275,29 @@ int amdgpu_vcn_suspend(struct amdgpu_device 
>>>> *adev)
>>>>   {
>>>>       unsigned size;
>>>>       void *ptr;
>>>> +    int retry_max = 6;
>>>>       int i;
>>>>   - cancel_delayed_work_sync(&adev->vcn.idle_work);
>>>> +    for (i = 0; i < adev->vcn.num_vcn_inst; ++i) {
>>>> +        if (adev->vcn.harvest_config & (1 << i))
>>>> +            continue;
>>>> +        ring = &adev->vcn.inst[i].ring_dec;
>>>> +        ring->sched.ready = false;
>>>> +
>>>> +        for (j = 0; j < adev->vcn.num_enc_rings; ++j) {
>>>> +            ring = &adev->vcn.inst[i].ring_enc[j];
>>>> +            ring->sched.ready = false;
>>>> +        }
>>>> +    }
>>>> +
>>>> +    while (retry_max-- && 
>>>> cancel_delayed_work_sync(&adev->vcn.idle_work))
>>>> +        mdelay(5);
>>>
>>> I think it's possible to have one pending job unprocessed by the VCN
>>> when the suspend sequence gets here, but it shouldn't be more than
>>> one; cancel_delayed_work_sync will probably return false after the
>>> first time, so calling cancel_delayed_work_sync once should be enough
>>> here. We probably need to wait longer with:
>>>
>>> SOC15_WAIT_ON_RREG(VCN, inst_idx, mmUVD_POWER_STATUS, 1,
>>>         UVD_POWER_STATUS__UVD_POWER_STATUS_MASK);
>>>
>>> to make sure the unprocessed job gets done.
>>>
>>>
>>> Regards,
>>>
>>> Leo
>>>
>>>
>>>> +    if (!retry_max && !amdgpu_sriov_vf(adev)) {
>>>> +        if (RREG32_SOC15(VCN, i, mmUVD_STATUS)) {
>>>> +            dev_warn(adev->dev, "Forced powering gate vcn 
>>>> hardware!");
>>>> +            vcn_v3_0_set_powergating_state(adev, AMD_PG_STATE_GATE);
>>>> +        }
>>>> +    }
>>>>         for (i = 0; i < adev->vcn.num_vcn_inst; ++i) {
>>>>           if (adev->vcn.harvest_config & (1 << i))


* Re: [PATCH v2 1/2] drm/amdgpu: enhance amdgpu_vcn_suspend
  2021-05-17 17:59         ` James Zhu
@ 2021-05-17 18:07           ` Leo Liu
  2021-05-17 18:11             ` Zhu, James
  0 siblings, 1 reply; 13+ messages in thread
From: Leo Liu @ 2021-05-17 18:07 UTC (permalink / raw)
  To: James Zhu, James Zhu, amd-gfx


Definitely, we need to move cancel_delayed_work_sync to before the
power gate.

Should "save_bo" be step 4, before the power gate?

Regards,

Leo


On 2021-05-17 1:59 p.m., James Zhu wrote:
>
> Then let's forget the proposal I provided before.
>
> I think the sequence below may fix the race condition issue that we are
> facing.
>
> 1. stop scheduling new jobs
>
>     for (i = 0; i < adev->vcn.num_vcn_inst; ++i) {
>         if (adev->vcn.harvest_config & (1 << i))
>             continue;
>
>         ring = &adev->vcn.inst[i].ring_dec;
>         ring->sched.ready = false;
>
>         for (j = 0; j < adev->vcn.num_enc_rings; ++j) {
>             ring = &adev->vcn.inst[i].ring_enc[j];
>             ring->sched.ready = false;
>         }
>     }
>
> 2.    cancel_delayed_work_sync(&adev->vcn.idle_work);
>
> 3. SOC15_WAIT_ON_RREG(VCN, inst_idx, mmUVD_POWER_STATUS, 1,
>          UVD_POWER_STATUS__UVD_POWER_STATUS_MASK);
>
> 4. amdgpu_device_ip_set_powergating_state(adev, 
> AMD_IP_BLOCK_TYPE_VCN,   AMD_PG_STATE_GATE);
>
> 5.  saved_bo
>
> Best Regards!
>
> James
>
> On 2021-05-17 1:43 p.m., Leo Liu wrote:
>>
>> On 2021-05-17 12:54 p.m., James Zhu wrote:
>>> I am wondering, if there are still some jobs kept in the queue,
>>> whether it is just lucky to see
>>
>> Yes, it's possible; in this case the delayed handler is set, so
>> cancelling once is enough.
>>
>>
>>>
>>> UVD_POWER_STATUS done, but afterwards the fw starts a new job listed
>>> in the queue.
>>>
>>> To handle this situation perfectly, we need to add a mechanism to
>>> suspend the fw first.
>>
>> I think that should be handled by the sequence from 
>> vcn_v3_0_stop_dpg_mode().
>>
>>
>>>
>>> In another case, if we are unlucky and the vcn fw hung at that time,
>>> UVD_POWER_STATUS always stays busy; then we need to force power gating
>>> of the vcn hw after waiting a certain time.
>>
>> Yep, we still need to gate VCN power after a certain timeout.
>>
>>
>> Regards,
>>
>> Leo
>>
>>
>>
>>>
>>> Best Regards!
>>>
>>> James
>>>
>>> On 2021-05-17 12:34 p.m., Leo Liu wrote:
>>>>
>>>> On 2021-05-17 11:52 a.m., James Zhu wrote:
>>>>> During VCN suspend, stop the rings from receiving new requests,
>>>>> and try to wait for all VCN jobs to finish gracefully.
>>>>>
>>>>> v2: Force power gating of the VCN hardware after a few waiting retries.
>>>>>
>>>>> Signed-off-by: James Zhu <James.Zhu@amd.com>
>>>>> ---
>>>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c | 22 +++++++++++++++++++++-
>>>>>   1 file changed, 21 insertions(+), 1 deletion(-)
>>>>>
>>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c 
>>>>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
>>>>> index 2016459..9f3a6e7 100644
>>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
>>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
>>>>> @@ -275,9 +275,29 @@ int amdgpu_vcn_suspend(struct amdgpu_device 
>>>>> *adev)
>>>>>   {
>>>>>       unsigned size;
>>>>>       void *ptr;
>>>>> +    int retry_max = 6;
>>>>>       int i;
>>>>>   - cancel_delayed_work_sync(&adev->vcn.idle_work);
>>>>> +    for (i = 0; i < adev->vcn.num_vcn_inst; ++i) {
>>>>> +        if (adev->vcn.harvest_config & (1 << i))
>>>>> +            continue;
>>>>> +        ring = &adev->vcn.inst[i].ring_dec;
>>>>> +        ring->sched.ready = false;
>>>>> +
>>>>> +        for (j = 0; j < adev->vcn.num_enc_rings; ++j) {
>>>>> +            ring = &adev->vcn.inst[i].ring_enc[j];
>>>>> +            ring->sched.ready = false;
>>>>> +        }
>>>>> +    }
>>>>> +
>>>>> +    while (retry_max-- && 
>>>>> cancel_delayed_work_sync(&adev->vcn.idle_work))
>>>>> +        mdelay(5);
>>>>
>>>> I think it's possible to have one pending job unprocessed by the VCN
>>>> when the suspend sequence gets here, but it shouldn't be more than
>>>> one; cancel_delayed_work_sync will probably return false after the
>>>> first time, so calling cancel_delayed_work_sync once should be enough
>>>> here. We probably need to wait longer with:
>>>>
>>>> SOC15_WAIT_ON_RREG(VCN, inst_idx, mmUVD_POWER_STATUS, 1,
>>>>         UVD_POWER_STATUS__UVD_POWER_STATUS_MASK);
>>>>
>>>> to make sure the unprocessed job gets done.
>>>>
>>>>
>>>> Regards,
>>>>
>>>> Leo
>>>>
>>>>
>>>>> +    if (!retry_max && !amdgpu_sriov_vf(adev)) {
>>>>> +        if (RREG32_SOC15(VCN, i, mmUVD_STATUS)) {
>>>>> +            dev_warn(adev->dev, "Forced powering gate vcn 
>>>>> hardware!");
>>>>> +            vcn_v3_0_set_powergating_state(adev, AMD_PG_STATE_GATE);
>>>>> +        }
>>>>> +    }
>>>>>         for (i = 0; i < adev->vcn.num_vcn_inst; ++i) {
>>>>>           if (adev->vcn.harvest_config & (1 << i))


* Re: [PATCH v2 1/2] drm/amdgpu: enhance amdgpu_vcn_suspend
  2021-05-17 18:07           ` Leo Liu
@ 2021-05-17 18:11             ` Zhu, James
  2021-05-17 18:15               ` Leo Liu
  0 siblings, 1 reply; 13+ messages in thread
From: Zhu, James @ 2021-05-17 18:11 UTC (permalink / raw)
  To: Liu, Leo, amd-gfx


[AMD Official Use Only - Internal Distribution Only]

save_bo needn't ungate the VCN; it just keeps the data in memory.


Thanks & Best Regards!


James Zhu

________________________________
From: Liu, Leo <Leo.Liu@amd.com>
Sent: Monday, May 17, 2021 2:07 PM
To: Zhu, James <James.Zhu@amd.com>; Zhu, James <James.Zhu@amd.com>; amd-gfx@lists.freedesktop.org <amd-gfx@lists.freedesktop.org>
Subject: Re: [PATCH v2 1/2] drm/amdgpu: enhance amdgpu_vcn_suspend


Definitely, we need to move cancel_delayed_work_sync to before the power gate.

Should "save_bo" be step 4, before the power gate?

Regards,

Leo


On 2021-05-17 1:59 p.m., James Zhu wrote:

Then let's forget the proposal I provided before.

I think the sequence below may fix the race condition issue that we are facing.

1. stop scheduling new jobs

    for (i = 0; i < adev->vcn.num_vcn_inst; ++i) {
        if (adev->vcn.harvest_config & (1 << i))
            continue;

        ring = &adev->vcn.inst[i].ring_dec;
        ring->sched.ready = false;

        for (j = 0; j < adev->vcn.num_enc_rings; ++j) {
            ring = &adev->vcn.inst[i].ring_enc[j];
            ring->sched.ready = false;
        }
    }

2.    cancel_delayed_work_sync(&adev->vcn.idle_work);

3.    SOC15_WAIT_ON_RREG(VCN, inst_idx, mmUVD_POWER_STATUS, 1,
         UVD_POWER_STATUS__UVD_POWER_STATUS_MASK);

4.    amdgpu_device_ip_set_powergating_state(adev, AMD_IP_BLOCK_TYPE_VCN,   AMD_PG_STATE_GATE);

5.  saved_bo

Best Regards!

James

On 2021-05-17 1:43 p.m., Leo Liu wrote:

On 2021-05-17 12:54 p.m., James Zhu wrote:
I am wondering, if there are still some jobs kept in the queue, whether it is just lucky to see

Yes, it's possible; in this case the delayed handler is set, so cancelling once is enough.



UVD_POWER_STATUS done, but afterwards the fw starts a new job listed in the queue.

To handle this situation perfectly, we need to add a mechanism to suspend the fw first.

I think that should be handled by the sequence from vcn_v3_0_stop_dpg_mode().



In another case, if we are unlucky and the vcn fw hung at that time, UVD_POWER_STATUS always stays busy; then we need to force power gating of the vcn hw after waiting a certain time.

Yep, we still need to gate VCN power after a certain timeout.


Regards,

Leo




Best Regards!

James

On 2021-05-17 12:34 p.m., Leo Liu wrote:

On 2021-05-17 11:52 a.m., James Zhu wrote:
During VCN suspend, stop the rings from receiving new requests,
and try to wait for all VCN jobs to finish gracefully.

v2: Force power gating of the VCN hardware after a few waiting retries.

Signed-off-by: James Zhu <James.Zhu@amd.com><mailto:James.Zhu@amd.com>
---
  drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c | 22 +++++++++++++++++++++-
  1 file changed, 21 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
index 2016459..9f3a6e7 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
@@ -275,9 +275,29 @@ int amdgpu_vcn_suspend(struct amdgpu_device *adev)
  {
      unsigned size;
      void *ptr;
+    int retry_max = 6;
      int i;
  -    cancel_delayed_work_sync(&adev->vcn.idle_work);
+    for (i = 0; i < adev->vcn.num_vcn_inst; ++i) {
+        if (adev->vcn.harvest_config & (1 << i))
+            continue;
+        ring = &adev->vcn.inst[i].ring_dec;
+        ring->sched.ready = false;
+
+        for (j = 0; j < adev->vcn.num_enc_rings; ++j) {
+            ring = &adev->vcn.inst[i].ring_enc[j];
+            ring->sched.ready = false;
+        }
+    }
+
+    while (retry_max-- && cancel_delayed_work_sync(&adev->vcn.idle_work))
+        mdelay(5);

I think it's possible to have one pending job unprocessed by the VCN when the suspend sequence gets here, but it shouldn't be more than one; cancel_delayed_work_sync will probably return false after the first time, so calling cancel_delayed_work_sync once should be enough here. We probably need to wait longer with:

SOC15_WAIT_ON_RREG(VCN, inst_idx, mmUVD_POWER_STATUS, 1,
        UVD_POWER_STATUS__UVD_POWER_STATUS_MASK);

to make sure the unprocessed job gets done.


Regards,

Leo


+    if (!retry_max && !amdgpu_sriov_vf(adev)) {
+        if (RREG32_SOC15(VCN, i, mmUVD_STATUS)) {
+            dev_warn(adev->dev, "Forced powering gate vcn hardware!");
+            vcn_v3_0_set_powergating_state(adev, AMD_PG_STATE_GATE);
+        }
+    }
        for (i = 0; i < adev->vcn.num_vcn_inst; ++i) {
          if (adev->vcn.harvest_config & (1 << i))


* Re: [PATCH v2 1/2] drm/amdgpu: enhance amdgpu_vcn_suspend
  2021-05-17 18:11             ` Zhu, James
@ 2021-05-17 18:15               ` Leo Liu
  2021-05-17 18:19                 ` Leo Liu
  0 siblings, 1 reply; 13+ messages in thread
From: Leo Liu @ 2021-05-17 18:15 UTC (permalink / raw)
  To: Zhu, James, amd-gfx


The saved data are from the engine cache; they are the runtime state of the
engine before suspend, and might be different after you have the engine
powered off.


Regards,

Leo



On 2021-05-17 2:11 p.m., Zhu, James wrote:
>
> [AMD Official Use Only - Internal Distribution Only]
>
>
> save_bo needn't ungate the VCN; it just keeps the data in memory.
>
> Thanks & Best Regards!
>
>
> James Zhu
>
> ------------------------------------------------------------------------
> *From:* Liu, Leo <Leo.Liu@amd.com>
> *Sent:* Monday, May 17, 2021 2:07 PM
> *To:* Zhu, James <James.Zhu@amd.com>; Zhu, James <James.Zhu@amd.com>; 
> amd-gfx@lists.freedesktop.org <amd-gfx@lists.freedesktop.org>
> *Subject:* Re: [PATCH v2 1/2] drm/amdgpu: enhance amdgpu_vcn_suspend
>
> Definitely, we need to move cancel_delayed_work_sync to before the
> power gate.
>
> Should "save_bo" be step 4, before the power gate?
>
> Regards,
>
> Leo
>
>
> On 2021-05-17 1:59 p.m., James Zhu wrote:
>>
>> Then let's forget the proposal I provided before.
>>
>> I think the sequence below may fix the race condition issue that we are
>> facing.
>>
>> 1. stop scheduling new jobs
>>
>>     for (i = 0; i < adev->vcn.num_vcn_inst; ++i) {
>>         if (adev->vcn.harvest_config & (1 << i))
>>             continue;
>>
>>         ring = &adev->vcn.inst[i].ring_dec;
>>         ring->sched.ready = false;
>>
>>         for (j = 0; j < adev->vcn.num_enc_rings; ++j) {
>>             ring = &adev->vcn.inst[i].ring_enc[j];
>>             ring->sched.ready = false;
>>         }
>>     }
>>
>> 2. cancel_delayed_work_sync(&adev->vcn.idle_work);
>>
>> 3. SOC15_WAIT_ON_RREG(VCN, inst_idx, mmUVD_POWER_STATUS, 1,
>>          UVD_POWER_STATUS__UVD_POWER_STATUS_MASK);
>>
>> 4. amdgpu_device_ip_set_powergating_state(adev, 
>> AMD_IP_BLOCK_TYPE_VCN,   AMD_PG_STATE_GATE);
>>
>> 5.  saved_bo
>>
>> Best Regards!
>>
>> James
>>
>> On 2021-05-17 1:43 p.m., Leo Liu wrote:
>>>
>>> On 2021-05-17 12:54 p.m., James Zhu wrote:
>>>> I am wondering, if there are still some jobs kept in the queue,
>>>> whether it is just lucky to see
>>>
>>> Yes, it's possible; in this case the delayed handler is set, so
>>> cancelling once is enough.
>>>
>>>
>>>>
>>>> UVD_POWER_STATUS done, but afterwards the fw starts a new job listed
>>>> in the queue.
>>>>
>>>> To handle this situation perfectly, we need to add a mechanism to
>>>> suspend the fw first.
>>>
>>> I think that should be handled by the sequence from 
>>> vcn_v3_0_stop_dpg_mode().
>>>
>>>
>>>>
>>>> In another case, if we are unlucky and the vcn fw hung at that time,
>>>> UVD_POWER_STATUS always stays busy; then we need to force power
>>>> gating of the vcn hw after waiting a certain time.
>>>
>>> Yep, we still need to gate VCN power after a certain timeout.
>>>
>>>
>>> Regards,
>>>
>>> Leo
>>>
>>>
>>>
>>>>
>>>> Best Regards!
>>>>
>>>> James
>>>>
>>>> On 2021-05-17 12:34 p.m., Leo Liu wrote:
>>>>>
>>>>> On 2021-05-17 11:52 a.m., James Zhu wrote:
>>>>>> During VCN suspend, stop the rings from receiving new requests,
>>>>>> and try to wait for all VCN jobs to finish gracefully.
>>>>>>
>>>>>> v2: Force power gating of the VCN hardware after a few waiting retries.
>>>>>>
>>>>>> Signed-off-by: James Zhu <James.Zhu@amd.com> 
>>>>>> <mailto:James.Zhu@amd.com>
>>>>>> ---
>>>>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c | 22 
>>>>>> +++++++++++++++++++++-
>>>>>>   1 file changed, 21 insertions(+), 1 deletion(-)
>>>>>>
>>>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c 
>>>>>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
>>>>>> index 2016459..9f3a6e7 100644
>>>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
>>>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
>>>>>> @@ -275,9 +275,29 @@ int amdgpu_vcn_suspend(struct amdgpu_device 
>>>>>> *adev)
>>>>>>   {
>>>>>>       unsigned size;
>>>>>>       void *ptr;
>>>>>> +    int retry_max = 6;
>>>>>>       int i;
>>>>>>   - cancel_delayed_work_sync(&adev->vcn.idle_work);
>>>>>> +    for (i = 0; i < adev->vcn.num_vcn_inst; ++i) {
>>>>>> +        if (adev->vcn.harvest_config & (1 << i))
>>>>>> +            continue;
>>>>>> +        ring = &adev->vcn.inst[i].ring_dec;
>>>>>> +        ring->sched.ready = false;
>>>>>> +
>>>>>> +        for (j = 0; j < adev->vcn.num_enc_rings; ++j) {
>>>>>> +            ring = &adev->vcn.inst[i].ring_enc[j];
>>>>>> +            ring->sched.ready = false;
>>>>>> +        }
>>>>>> +    }
>>>>>> +
>>>>>> +    while (retry_max-- && 
>>>>>> cancel_delayed_work_sync(&adev->vcn.idle_work))
>>>>>> +        mdelay(5);
>>>>>
>>>>> I think it's possible to have one pending job unprocessed by the VCN
>>>>> when the suspend sequence gets here, but it shouldn't be more than
>>>>> one; cancel_delayed_work_sync will probably return false after the
>>>>> first time, so calling cancel_delayed_work_sync once should be
>>>>> enough here. We probably need to wait longer with:
>>>>>
>>>>> SOC15_WAIT_ON_RREG(VCN, inst_idx, mmUVD_POWER_STATUS, 1,
>>>>>         UVD_POWER_STATUS__UVD_POWER_STATUS_MASK);
>>>>>
>>>>> to make sure the unprocessed job gets done.
>>>>>
>>>>>
>>>>> Regards,
>>>>>
>>>>> Leo
>>>>>
>>>>>
>>>>>> +    if (!retry_max && !amdgpu_sriov_vf(adev)) {
>>>>>> +        if (RREG32_SOC15(VCN, i, mmUVD_STATUS)) {
>>>>>> +            dev_warn(adev->dev, "Forced powering gate vcn 
>>>>>> hardware!");
>>>>>> +            vcn_v3_0_set_powergating_state(adev, 
>>>>>> AMD_PG_STATE_GATE);
>>>>>> +        }
>>>>>> +    }
>>>>>>         for (i = 0; i < adev->vcn.num_vcn_inst; ++i) {
>>>>>>           if (adev->vcn.harvest_config & (1 << i))


* Re: [PATCH v2 1/2] drm/amdgpu: enhance amdgpu_vcn_suspend
  2021-05-17 18:15               ` Leo Liu
@ 2021-05-17 18:19                 ` Leo Liu
  0 siblings, 0 replies; 13+ messages in thread
From: Leo Liu @ 2021-05-17 18:19 UTC (permalink / raw)
  To: Zhu, James, amd-gfx


To be accurate, the BO is mapped to the engine cache window and holds the
engine's runtime stacks, so we should save it before the poweroff.
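
(With that constraint, the tail of the proposed sequence would reorder to
save before gating; a rough sketch, where save_vcn_bo() is a hypothetical
stand-in for the existing save logic:)

	/* Drain outstanding work, save the BO while the engine state is
	 * still intact, and only then gate power. */
	cancel_delayed_work_sync(&adev->vcn.idle_work);
	SOC15_WAIT_ON_RREG(VCN, inst_idx, mmUVD_POWER_STATUS, 1,
			   UVD_POWER_STATUS__UVD_POWER_STATUS_MASK);
	save_vcn_bo(adev);	/* hypothetical helper for the save step */
	amdgpu_device_ip_set_powergating_state(adev, AMD_IP_BLOCK_TYPE_VCN,
					       AMD_PG_STATE_GATE);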


On 2021-05-17 2:15 p.m., Leo Liu wrote:
>
> The saved data are from the engine cache; they are the runtime state of
> the engine before suspend, and might be different after you have the
> engine powered off.
>
>
> Regards,
>
> Leo
>
>
>
> On 2021-05-17 2:11 p.m., Zhu, James wrote:
>>
>> [AMD Official Use Only - Internal Distribution Only]
>>
>>
>> save_bo needn't ungate the VCN; it just keeps the data in memory.
>>
>> Thanks & Best Regards!
>>
>>
>> James Zhu
>>
>> ------------------------------------------------------------------------
>> *From:* Liu, Leo <Leo.Liu@amd.com>
>> *Sent:* Monday, May 17, 2021 2:07 PM
>> *To:* Zhu, James <James.Zhu@amd.com>; Zhu, James <James.Zhu@amd.com>; 
>> amd-gfx@lists.freedesktop.org <amd-gfx@lists.freedesktop.org>
>> *Subject:* Re: [PATCH v2 1/2] drm/amdgpu: enhance amdgpu_vcn_suspend
>>
>> Definitely, we need to move cancel_delayed_work_sync to before the
>> power gate.
>>
>> Should "save_bo" be step 4, before the power gate?
>>
>> Regards,
>>
>> Leo
>>
>>
>> On 2021-05-17 1:59 p.m., James Zhu wrote:
>>>
>>> Then let's forget the proposal I provided before.
>>>
>>> I think the sequence below may fix the race condition issue that we
>>> are facing.
>>>
>>> 1. stop scheduling new jobs
>>>
>>>     for (i = 0; i < adev->vcn.num_vcn_inst; ++i) {
>>>         if (adev->vcn.harvest_config & (1 << i))
>>>             continue;
>>>
>>>         ring = &adev->vcn.inst[i].ring_dec;
>>>         ring->sched.ready = false;
>>>
>>>         for (j = 0; j < adev->vcn.num_enc_rings; ++j) {
>>>             ring = &adev->vcn.inst[i].ring_enc[j];
>>>             ring->sched.ready = false;
>>>         }
>>>     }
>>>
>>> 2. cancel_delayed_work_sync(&adev->vcn.idle_work);
>>>
>>> 3. SOC15_WAIT_ON_RREG(VCN, inst_idx, mmUVD_POWER_STATUS, 1,
>>>          UVD_POWER_STATUS__UVD_POWER_STATUS_MASK);
>>>
>>> 4. amdgpu_device_ip_set_powergating_state(adev, 
>>> AMD_IP_BLOCK_TYPE_VCN,   AMD_PG_STATE_GATE);
>>>
>>> 5.  saved_bo
>>>
>>> Best Regards!
>>>
>>> James
>>>
>>> On 2021-05-17 1:43 p.m., Leo Liu wrote:
>>>>
>>>> On 2021-05-17 12:54 p.m., James Zhu wrote:
>>>>> I am wondering, if there are still some jobs kept in the queue,
>>>>> whether it is just lucky to see
>>>>
>>>> Yes, it's possible; in this case the delayed handler is set, so
>>>> cancelling once is enough.
>>>>
>>>>
>>>>>
>>>>> UVD_POWER_STATUS done, but afterwards the fw starts a new job
>>>>> listed in the queue.
>>>>>
>>>>> To handle this situation perfectly, we need to add a mechanism to
>>>>> suspend the fw first.
>>>>
>>>> I think that should be handled by the sequence from 
>>>> vcn_v3_0_stop_dpg_mode().
>>>>
>>>>
>>>>>
>>>>> In another case, if we are unlucky and the vcn fw hung at that
>>>>> time, UVD_POWER_STATUS always stays busy; then we need to force
>>>>> power gating of the vcn hw after waiting a certain time.
>>>>
>>>> Yep, we still need to gate VCN power after a certain timeout.
>>>>
>>>>
>>>> Regards,
>>>>
>>>> Leo
>>>>
>>>>
>>>>
>>>>>
>>>>> Best Regards!
>>>>>
>>>>> James
>>>>>
>>>>> On 2021-05-17 12:34 p.m., Leo Liu wrote:
>>>>>>
>>>>>> On 2021-05-17 11:52 a.m., James Zhu wrote:
>>>>>>> During VCN suspend, stop the rings from receiving new requests,
>>>>>>> and try to wait for all VCN jobs to finish gracefully.
>>>>>>>
>>>>>>> v2: Force power gating of the VCN hardware after a few waiting retries.
>>>>>>>
>>>>>>> Signed-off-by: James Zhu <James.Zhu@amd.com> 
>>>>>>> <mailto:James.Zhu@amd.com>
>>>>>>> ---
>>>>>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c | 22 
>>>>>>> +++++++++++++++++++++-
>>>>>>>   1 file changed, 21 insertions(+), 1 deletion(-)
>>>>>>>
>>>>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c 
>>>>>>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
>>>>>>> index 2016459..9f3a6e7 100644
>>>>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
>>>>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
>>>>>>> @@ -275,9 +275,29 @@ int amdgpu_vcn_suspend(struct amdgpu_device 
>>>>>>> *adev)
>>>>>>>   {
>>>>>>>       unsigned size;
>>>>>>>       void *ptr;
>>>>>>> +    int retry_max = 6;
>>>>>>>       int i;
>>>>>>>   - cancel_delayed_work_sync(&adev->vcn.idle_work);
>>>>>>> +    for (i = 0; i < adev->vcn.num_vcn_inst; ++i) {
>>>>>>> +        if (adev->vcn.harvest_config & (1 << i))
>>>>>>> +            continue;
>>>>>>> +        ring = &adev->vcn.inst[i].ring_dec;
>>>>>>> +        ring->sched.ready = false;
>>>>>>> +
>>>>>>> +        for (j = 0; j < adev->vcn.num_enc_rings; ++j) {
>>>>>>> +            ring = &adev->vcn.inst[i].ring_enc[j];
>>>>>>> +            ring->sched.ready = false;
>>>>>>> +        }
>>>>>>> +    }
>>>>>>> +
>>>>>>> +    while (retry_max-- && 
>>>>>>> cancel_delayed_work_sync(&adev->vcn.idle_work))
>>>>>>> +        mdelay(5);
>>>>>>
>>>>>> I think it's possible to have one pending job unprocessed by the
>>>>>> VCN when the suspend sequence gets here, but it shouldn't be more
>>>>>> than one; cancel_delayed_work_sync will probably return false after
>>>>>> the first time, so calling cancel_delayed_work_sync once should be
>>>>>> enough here. We probably need to wait longer with:
>>>>>>
>>>>>> SOC15_WAIT_ON_RREG(VCN, inst_idx, mmUVD_POWER_STATUS, 1,
>>>>>>         UVD_POWER_STATUS__UVD_POWER_STATUS_MASK);
>>>>>>
>>>>>> to make sure the unprocessed job gets done.
>>>>>>
>>>>>>
>>>>>> Regards,
>>>>>>
>>>>>> Leo
>>>>>>
>>>>>>
>>>>>>> +    if (!retry_max && !amdgpu_sriov_vf(adev)) {
>>>>>>> +        if (RREG32_SOC15(VCN, i, mmUVD_STATUS)) {
>>>>>>> +            dev_warn(adev->dev, "Forced powering gate vcn 
>>>>>>> hardware!");
>>>>>>> +            vcn_v3_0_set_powergating_state(adev, 
>>>>>>> AMD_PG_STATE_GATE);
>>>>>>> +        }
>>>>>>> +    }
>>>>>>>         for (i = 0; i < adev->vcn.num_vcn_inst; ++i) {
>>>>>>>           if (adev->vcn.harvest_config & (1 << i))
>


* Re: [PATCH 1/2] drm/amdgpu: enhance amdgpu_vcn_suspend
  2021-05-17 14:57 [PATCH 1/2] drm/amdgpu: enhance amdgpu_vcn_suspend James Zhu
  2021-05-17 14:57 ` [PATCH 2/2] drm/amdgpu: remove unused vcn_v3_0_hw_fini James Zhu
  2021-05-17 15:52 ` [PATCH v2 1/2] drm/amdgpu: enhance amdgpu_vcn_suspend James Zhu
@ 2021-05-17 19:43 ` Christian König
  2021-05-17 20:08   ` James Zhu
  2 siblings, 1 reply; 13+ messages in thread
From: Christian König @ 2021-05-17 19:43 UTC (permalink / raw)
  To: James Zhu, amd-gfx; +Cc: jamesz

Am 17.05.21 um 16:57 schrieb James Zhu:
> During VCN suspend, stop the rings from receiving new requests,
> and try to wait for all VCN jobs to finish gracefully.
>
> Signed-off-by: James Zhu <James.Zhu@amd.com>
> ---
>   drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c | 20 +++++++++++++++++++-
>   1 file changed, 19 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
> index 2016459..7e9f5cb 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
> @@ -275,9 +275,27 @@ int amdgpu_vcn_suspend(struct amdgpu_device *adev)
>   {
>   	unsigned size;
>   	void *ptr;
> +	int retry_max = 6;
>   	int i;
>   
> -	cancel_delayed_work_sync(&adev->vcn.idle_work);
> +	for (i = 0; i < adev->vcn.num_vcn_inst; ++i) {
> +		if (adev->vcn.harvest_config & (1 << i))
> +			continue;
> +		ring = &adev->vcn.inst[i].ring_dec;
> +		ring->sched.ready = false;
> +
> +		for (j = 0; j < adev->vcn.num_enc_rings; ++j) {
> +			ring = &adev->vcn.inst[i].ring_enc[j];
> +			ring->sched.ready = false;
> +		}
> +	}
> +
> +	while (retry_max--) {
> +		if (cancel_delayed_work_sync(&adev->vcn.idle_work)) {
> +			dev_warn(adev->dev, "Waiting for left VCN job(s) to finish gracefully ...");
> +			mdelay(5);
> +		}
> +	}

Ok that just makes no sense at all.

A cancel_delayed_work_sync() call is final, you never need to call it 
more than once.
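
(The retry loop therefore collapses to a single call; a minimal sketch:)

	/* cancel_delayed_work_sync() waits for a running handler to
	 * finish and removes any queued instance, so once it returns
	 * there is nothing left to cancel. */
	cancel_delayed_work_sync(&adev->vcn.idle_work);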

Christian.

>   
>   	for (i = 0; i < adev->vcn.num_vcn_inst; ++i) {
>   		if (adev->vcn.harvest_config & (1 << i))


* Re: [PATCH 1/2] drm/amdgpu: enhance amdgpu_vcn_suspend
  2021-05-17 19:43 ` [PATCH " Christian König
@ 2021-05-17 20:08   ` James Zhu
  0 siblings, 0 replies; 13+ messages in thread
From: James Zhu @ 2021-05-17 20:08 UTC (permalink / raw)
  To: Christian König, James Zhu, amd-gfx


On 2021-05-17 3:43 p.m., Christian König wrote:
> Am 17.05.21 um 16:57 schrieb James Zhu:
>> During VCN suspend, stop the rings from receiving new requests,
>> and try to wait for all VCN jobs to finish gracefully.
>>
>> Signed-off-by: James Zhu <James.Zhu@amd.com>
>> ---
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c | 20 +++++++++++++++++++-
>>   1 file changed, 19 insertions(+), 1 deletion(-)
>>
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c 
>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
>> index 2016459..7e9f5cb 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
>> @@ -275,9 +275,27 @@ int amdgpu_vcn_suspend(struct amdgpu_device *adev)
>>   {
>>       unsigned size;
>>       void *ptr;
>> +    int retry_max = 6;
>>       int i;
>>   -    cancel_delayed_work_sync(&adev->vcn.idle_work);
>> +    for (i = 0; i < adev->vcn.num_vcn_inst; ++i) {
>> +        if (adev->vcn.harvest_config & (1 << i))
>> +            continue;
>> +        ring = &adev->vcn.inst[i].ring_dec;
>> +        ring->sched.ready = false;
>> +
>> +        for (j = 0; j < adev->vcn.num_enc_rings; ++j) {
>> +            ring = &adev->vcn.inst[i].ring_enc[j];
>> +            ring->sched.ready = false;
>> +        }
>> +    }
>> +
>> +    while (retry_max--) {
>> +        if (cancel_delayed_work_sync(&adev->vcn.idle_work)) {
>> +            dev_warn(adev->dev, "Waiting for left VCN job(s) to 
>> finish gracefully ...");
>> +            mdelay(5);
>> +        }
>> +    }
>
> Ok that just makes no sense at all.
>
> A cancel_delayed_work_sync() call is final, you never need to call it 
> more than once.
>
Yeah, I am preparing a new patch. Thanks! James
> Christian.
>
>>         for (i = 0; i < adev->vcn.num_vcn_inst; ++i) {
>>           if (adev->vcn.harvest_config & (1 << i))
>

Thread overview: 13+ messages
2021-05-17 14:57 [PATCH 1/2] drm/amdgpu: enhance amdgpu_vcn_suspend James Zhu
2021-05-17 14:57 ` [PATCH 2/2] drm/amdgpu: remove unused vcn_v3_0_hw_fini James Zhu
2021-05-17 15:52 ` [PATCH v2 1/2] drm/amdgpu: enhance amdgpu_vcn_suspend James Zhu
2021-05-17 16:34   ` Leo Liu
2021-05-17 16:54     ` James Zhu
2021-05-17 17:43       ` Leo Liu
2021-05-17 17:59         ` James Zhu
2021-05-17 18:07           ` Leo Liu
2021-05-17 18:11             ` Zhu, James
2021-05-17 18:15               ` Leo Liu
2021-05-17 18:19                 ` Leo Liu
2021-05-17 19:43 ` [PATCH " Christian König
2021-05-17 20:08   ` James Zhu
