* [PATCH 1/2] drm/msm: Move hangcheck timer restart
From: Rob Clark @ 2022-08-03 17:23 UTC
To: dri-devel
Cc: linux-arm-msm, freedreno, Rob Clark, Rob Clark, Abhinav Kumar,
Dmitry Baryshkov, Sean Paul, David Airlie, Daniel Vetter,
open list
From: Rob Clark <robdclark@chromium.org>
Don't directly restart the hangcheck timer from the timer handler, but
instead start it after the recover_worker replays remaining jobs.
If the kthread is blocked for other reasons, there is no point in
immediately restarting the timer. This fixes a random symptom of the
problem fixed in the next patch.
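The decision the patch moves into recover_worker() can be modeled outside the driver like this (a minimal sketch with hypothetical names, not the actual msm_gpu types): the hangcheck timer is rearmed only if at least one remaining job was actually replayed.

```c
/* Toy model of the recover-path decision in this patch.
 * recover_replay() stands in for the loop over ring->submits;
 * the increment stands in for gpu->funcs->submit(). */
#include <assert.h>
#include <stdbool.h>

static bool recover_replay(int njobs_remaining, int *replayed)
{
	bool restart_hangcheck = false;
	int i;

	for (i = 0; i < njobs_remaining; i++) {
		(*replayed)++;           /* replay one pending job */
		restart_hangcheck = true;
	}

	/* caller rearms the hangcheck timer iff this returns true */
	return restart_hangcheck;
}
```

If no jobs remain after retiring the hung submit, the timer stays off until the next submission arms it again.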
Signed-off-by: Rob Clark <robdclark@chromium.org>
---
drivers/gpu/drm/msm/msm_gpu.c | 14 +++++++++-----
1 file changed, 9 insertions(+), 5 deletions(-)
diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c
index fba85f894314..8f9c48eabf7d 100644
--- a/drivers/gpu/drm/msm/msm_gpu.c
+++ b/drivers/gpu/drm/msm/msm_gpu.c
@@ -328,6 +328,7 @@ find_submit(struct msm_ringbuffer *ring, uint32_t fence)
}
static void retire_submits(struct msm_gpu *gpu);
+static void hangcheck_timer_reset(struct msm_gpu *gpu);
static void get_comm_cmdline(struct msm_gem_submit *submit, char **comm, char **cmd)
{
@@ -420,6 +421,8 @@ static void recover_worker(struct kthread_work *work)
}
if (msm_gpu_active(gpu)) {
+ bool restart_hangcheck = false;
+
/* retire completed submits, plus the one that hung: */
retire_submits(gpu);
@@ -436,10 +439,15 @@ static void recover_worker(struct kthread_work *work)
unsigned long flags;
spin_lock_irqsave(&ring->submit_lock, flags);
- list_for_each_entry(submit, &ring->submits, node)
+ list_for_each_entry(submit, &ring->submits, node) {
gpu->funcs->submit(gpu, submit);
+ restart_hangcheck = true;
+ }
spin_unlock_irqrestore(&ring->submit_lock, flags);
}
+
+ if (restart_hangcheck)
+ hangcheck_timer_reset(gpu);
}
mutex_unlock(&gpu->lock);
@@ -515,10 +523,6 @@ static void hangcheck_handler(struct timer_list *t)
kthread_queue_work(gpu->worker, &gpu->recover_work);
}
- /* if still more pending work, reset the hangcheck timer: */
- if (fence_after(ring->fctx->last_fence, ring->hangcheck_fence))
- hangcheck_timer_reset(gpu);
-
/* workaround for missing irq: */
msm_gpu_retire(gpu);
}
--
2.36.1
* [PATCH 2/2] drm/msm/rd: Fix FIFO-full deadlock
From: Rob Clark @ 2022-08-03 17:23 UTC
To: dri-devel
Cc: linux-arm-msm, freedreno, Rob Clark, Rob Clark, Abhinav Kumar,
Dmitry Baryshkov, Sean Paul, David Airlie, Daniel Vetter,
open list
From: Rob Clark <robdclark@chromium.org>
If the previous process cat'ing $debugfs/rd left the FIFO full, then a
subsequent open could deadlock in rd_write() (because open is blocked,
read() never gets a chance to consume data from the FIFO). It is also
generally a good idea to clear out stale data from the FIFO.
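Why resetting head and tail to 0 is sufficient can be seen with a minimal ring-FIFO model (hypothetical `toy_fifo` type, not the actual kfifo used by msm_rd): once the indices are equal, the fifo reads as empty, so a writer computing free space no longer blocks.

```c
/* Minimal power-of-two ring FIFO model. head is the write index,
 * tail the read index; setting head = tail = 0 discards any unread
 * bytes, which is exactly what the patch does in rd_open(). */
#include <assert.h>

struct toy_fifo {
	unsigned head;  /* write index */
	unsigned tail;  /* read index */
	unsigned size;  /* must be a power of two */
};

/* bytes available to read */
static unsigned toy_fifo_used(const struct toy_fifo *f)
{
	return (f->head - f->tail) % f->size;
}

/* bytes a writer may still push (one slot kept open) */
static unsigned toy_fifo_space(const struct toy_fifo *f)
{
	return f->size - 1 - toy_fifo_used(f);
}
```

With a full fifo, space is 0 and rd_write() would sleep forever waiting for a reader; after the reset, the writer sees the whole buffer free again.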
Signed-off-by: Rob Clark <robdclark@chromium.org>
---
drivers/gpu/drm/msm/msm_rd.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/drivers/gpu/drm/msm/msm_rd.c b/drivers/gpu/drm/msm/msm_rd.c
index a92ffde53f0b..db2f847c8535 100644
--- a/drivers/gpu/drm/msm/msm_rd.c
+++ b/drivers/gpu/drm/msm/msm_rd.c
@@ -196,6 +196,9 @@ static int rd_open(struct inode *inode, struct file *file)
file->private_data = rd;
rd->open = true;
+ /* Reset fifo to clear any previously unread data: */
+ rd->fifo.head = rd->fifo.tail = 0;
+
/* the parsing tools need to know gpu-id to know which
* register database to load.
*
--
2.36.1
* Re: [PATCH 1/2] drm/msm: Move hangcheck timer restart
From: Akhil P Oommen @ 2022-08-03 19:52 UTC
To: Rob Clark, dri-devel
Cc: linux-arm-msm, freedreno, Rob Clark, Abhinav Kumar,
Dmitry Baryshkov, Sean Paul, David Airlie, Daniel Vetter,
open list
On 8/3/2022 10:53 PM, Rob Clark wrote:
> From: Rob Clark <robdclark@chromium.org>
>
> Don't directly restart the hangcheck timer from the timer handler, but
> instead start it after the recover_worker replays remaining jobs.
>
> If the kthread is blocked for other reasons, there is no point to
> immediately restart the timer. Fixes a random symptom of the problem
> fixed in the next patch.
>
> Signed-off-by: Rob Clark <robdclark@chromium.org>
> ---
> drivers/gpu/drm/msm/msm_gpu.c | 14 +++++++++-----
> 1 file changed, 9 insertions(+), 5 deletions(-)
>
> diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c
> index fba85f894314..8f9c48eabf7d 100644
> --- a/drivers/gpu/drm/msm/msm_gpu.c
> +++ b/drivers/gpu/drm/msm/msm_gpu.c
> @@ -328,6 +328,7 @@ find_submit(struct msm_ringbuffer *ring, uint32_t fence)
> }
>
> static void retire_submits(struct msm_gpu *gpu);
> +static void hangcheck_timer_reset(struct msm_gpu *gpu);
>
> static void get_comm_cmdline(struct msm_gem_submit *submit, char **comm, char **cmd)
> {
> @@ -420,6 +421,8 @@ static void recover_worker(struct kthread_work *work)
> }
>
> if (msm_gpu_active(gpu)) {
> + bool restart_hangcheck = false;
> +
> /* retire completed submits, plus the one that hung: */
> retire_submits(gpu);
>
> @@ -436,10 +439,15 @@ static void recover_worker(struct kthread_work *work)
> unsigned long flags;
>
> spin_lock_irqsave(&ring->submit_lock, flags);
> - list_for_each_entry(submit, &ring->submits, node)
> + list_for_each_entry(submit, &ring->submits, node) {
> gpu->funcs->submit(gpu, submit);
> + restart_hangcheck = true;
> + }
> spin_unlock_irqrestore(&ring->submit_lock, flags);
> }
> +
> + if (restart_hangcheck)
> + hangcheck_timer_reset(gpu);
> }
>
> mutex_unlock(&gpu->lock);
> @@ -515,10 +523,6 @@ static void hangcheck_handler(struct timer_list *t)
> kthread_queue_work(gpu->worker, &gpu->recover_work);
> }
>
> - /* if still more pending work, reset the hangcheck timer: */
In the scenario mentioned here, shouldn't we restart the timer?
-Akhil.
> - if (fence_after(ring->fctx->last_fence, ring->hangcheck_fence))
> - hangcheck_timer_reset(gpu);
> -
> /* workaround for missing irq: */
> msm_gpu_retire(gpu);
> }
>
* Re: [PATCH 1/2] drm/msm: Move hangcheck timer restart
From: Rob Clark @ 2022-08-03 20:29 UTC
To: Akhil P Oommen
Cc: dri-devel, linux-arm-msm, freedreno, Rob Clark, Abhinav Kumar,
Dmitry Baryshkov, Sean Paul, David Airlie, Daniel Vetter,
open list
On Wed, Aug 3, 2022 at 12:52 PM Akhil P Oommen <quic_akhilpo@quicinc.com> wrote:
>
> On 8/3/2022 10:53 PM, Rob Clark wrote:
> > From: Rob Clark <robdclark@chromium.org>
> >
> > Don't directly restart the hangcheck timer from the timer handler, but
> > instead start it after the recover_worker replays remaining jobs.
> >
> > If the kthread is blocked for other reasons, there is no point to
> > immediately restart the timer. Fixes a random symptom of the problem
> > fixed in the next patch.
> >
> > Signed-off-by: Rob Clark <robdclark@chromium.org>
> > ---
> > drivers/gpu/drm/msm/msm_gpu.c | 14 +++++++++-----
> > 1 file changed, 9 insertions(+), 5 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c
> > index fba85f894314..8f9c48eabf7d 100644
> > --- a/drivers/gpu/drm/msm/msm_gpu.c
> > +++ b/drivers/gpu/drm/msm/msm_gpu.c
> > @@ -328,6 +328,7 @@ find_submit(struct msm_ringbuffer *ring, uint32_t fence)
> > }
> >
> > static void retire_submits(struct msm_gpu *gpu);
> > +static void hangcheck_timer_reset(struct msm_gpu *gpu);
> >
> > static void get_comm_cmdline(struct msm_gem_submit *submit, char **comm, char **cmd)
> > {
> > @@ -420,6 +421,8 @@ static void recover_worker(struct kthread_work *work)
> > }
> >
> > if (msm_gpu_active(gpu)) {
> > + bool restart_hangcheck = false;
> > +
> > /* retire completed submits, plus the one that hung: */
> > retire_submits(gpu);
> >
> > @@ -436,10 +439,15 @@ static void recover_worker(struct kthread_work *work)
> > unsigned long flags;
> >
> > spin_lock_irqsave(&ring->submit_lock, flags);
> > - list_for_each_entry(submit, &ring->submits, node)
> > + list_for_each_entry(submit, &ring->submits, node) {
> > gpu->funcs->submit(gpu, submit);
> > + restart_hangcheck = true;
> > + }
> > spin_unlock_irqrestore(&ring->submit_lock, flags);
> > }
> > +
> > + if (restart_hangcheck)
> > + hangcheck_timer_reset(gpu);
> > }
> >
> > mutex_unlock(&gpu->lock);
> > @@ -515,10 +523,6 @@ static void hangcheck_handler(struct timer_list *t)
> > kthread_queue_work(gpu->worker, &gpu->recover_work);
> > }
> >
> > - /* if still more pending work, reset the hangcheck timer: */
> In the scenario mentioned here, shouldn't we restart the timer?
yeah, actually the case where we don't want to restart the timer is
*only* when we schedule recover_work..
BR,
-R
>
> -Akhil.
> > - if (fence_after(ring->fctx->last_fence, ring->hangcheck_fence))
> > - hangcheck_timer_reset(gpu);
> > -
> > /* workaround for missing irq: */
> > msm_gpu_retire(gpu);
> > }
> >
>
* Re: [PATCH 1/2] drm/msm: Move hangcheck timer restart
From: Akhil P Oommen @ 2022-08-04 7:53 UTC
To: Rob Clark
Cc: dri-devel, linux-arm-msm, freedreno, Rob Clark, Abhinav Kumar,
Dmitry Baryshkov, Sean Paul, David Airlie, Daniel Vetter,
open list
On 8/4/2022 1:59 AM, Rob Clark wrote:
> On Wed, Aug 3, 2022 at 12:52 PM Akhil P Oommen <quic_akhilpo@quicinc.com> wrote:
>> On 8/3/2022 10:53 PM, Rob Clark wrote:
>>> From: Rob Clark <robdclark@chromium.org>
>>>
>>> Don't directly restart the hangcheck timer from the timer handler, but
>>> instead start it after the recover_worker replays remaining jobs.
>>>
>>> If the kthread is blocked for other reasons, there is no point to
>>> immediately restart the timer. Fixes a random symptom of the problem
>>> fixed in the next patch.
>>>
>>> Signed-off-by: Rob Clark <robdclark@chromium.org>
>>> ---
>>> drivers/gpu/drm/msm/msm_gpu.c | 14 +++++++++-----
>>> 1 file changed, 9 insertions(+), 5 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c
>>> index fba85f894314..8f9c48eabf7d 100644
>>> --- a/drivers/gpu/drm/msm/msm_gpu.c
>>> +++ b/drivers/gpu/drm/msm/msm_gpu.c
>>> @@ -328,6 +328,7 @@ find_submit(struct msm_ringbuffer *ring, uint32_t fence)
>>> }
>>>
>>> static void retire_submits(struct msm_gpu *gpu);
>>> +static void hangcheck_timer_reset(struct msm_gpu *gpu);
>>>
>>> static void get_comm_cmdline(struct msm_gem_submit *submit, char **comm, char **cmd)
>>> {
>>> @@ -420,6 +421,8 @@ static void recover_worker(struct kthread_work *work)
>>> }
>>>
>>> if (msm_gpu_active(gpu)) {
>>> + bool restart_hangcheck = false;
>>> +
>>> /* retire completed submits, plus the one that hung: */
>>> retire_submits(gpu);
>>>
>>> @@ -436,10 +439,15 @@ static void recover_worker(struct kthread_work *work)
>>> unsigned long flags;
>>>
>>> spin_lock_irqsave(&ring->submit_lock, flags);
>>> - list_for_each_entry(submit, &ring->submits, node)
>>> + list_for_each_entry(submit, &ring->submits, node) {
>>> gpu->funcs->submit(gpu, submit);
>>> + restart_hangcheck = true;
>>> + }
>>> spin_unlock_irqrestore(&ring->submit_lock, flags);
>>> }
>>> +
>>> + if (restart_hangcheck)
>>> + hangcheck_timer_reset(gpu);
>>> }
>>>
>>> mutex_unlock(&gpu->lock);
>>> @@ -515,10 +523,6 @@ static void hangcheck_handler(struct timer_list *t)
>>> kthread_queue_work(gpu->worker, &gpu->recover_work);
>>> }
>>>
>>> - /* if still more pending work, reset the hangcheck timer: */
>> In the scenario mentioned here, shouldn't we restart the timer?
> yeah, actually the case where we don't want to restart the timer is
> *only* when we schedule recover_work..
>
> BR,
> -R
Not sure if your codebase is different but based on msm-next branch,
when "if (fence != ring->hangcheck_fence)" is true, we now skip
rescheduling the timer. I don't think that is what we want. There should
be a hangcheck timer running as long as there is an active submit,
unless we have scheduled a recover_work here.
-Akhil.
>
>> -Akhil.
>>> - if (fence_after(ring->fctx->last_fence, ring->hangcheck_fence))
>>> - hangcheck_timer_reset(gpu);
>>> -
>>> /* workaround for missing irq: */
>>> msm_gpu_retire(gpu);
>>> }
>>>
* Re: [PATCH 1/2] drm/msm: Move hangcheck timer restart
From: Rob Clark @ 2022-08-04 17:33 UTC
To: Akhil P Oommen
Cc: dri-devel, linux-arm-msm, freedreno, Rob Clark, Abhinav Kumar,
Dmitry Baryshkov, Sean Paul, David Airlie, Daniel Vetter,
open list
On Thu, Aug 4, 2022 at 12:53 AM Akhil P Oommen <quic_akhilpo@quicinc.com> wrote:
>
> On 8/4/2022 1:59 AM, Rob Clark wrote:
> > On Wed, Aug 3, 2022 at 12:52 PM Akhil P Oommen <quic_akhilpo@quicinc.com> wrote:
> >> On 8/3/2022 10:53 PM, Rob Clark wrote:
> >>> From: Rob Clark <robdclark@chromium.org>
> >>>
> >>> Don't directly restart the hangcheck timer from the timer handler, but
> >>> instead start it after the recover_worker replays remaining jobs.
> >>>
> >>> If the kthread is blocked for other reasons, there is no point to
> >>> immediately restart the timer. Fixes a random symptom of the problem
> >>> fixed in the next patch.
> >>>
> >>> Signed-off-by: Rob Clark <robdclark@chromium.org>
> >>> ---
> >>> drivers/gpu/drm/msm/msm_gpu.c | 14 +++++++++-----
> >>> 1 file changed, 9 insertions(+), 5 deletions(-)
> >>>
> >>> diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c
> >>> index fba85f894314..8f9c48eabf7d 100644
> >>> --- a/drivers/gpu/drm/msm/msm_gpu.c
> >>> +++ b/drivers/gpu/drm/msm/msm_gpu.c
> >>> @@ -328,6 +328,7 @@ find_submit(struct msm_ringbuffer *ring, uint32_t fence)
> >>> }
> >>>
> >>> static void retire_submits(struct msm_gpu *gpu);
> >>> +static void hangcheck_timer_reset(struct msm_gpu *gpu);
> >>>
> >>> static void get_comm_cmdline(struct msm_gem_submit *submit, char **comm, char **cmd)
> >>> {
> >>> @@ -420,6 +421,8 @@ static void recover_worker(struct kthread_work *work)
> >>> }
> >>>
> >>> if (msm_gpu_active(gpu)) {
> >>> + bool restart_hangcheck = false;
> >>> +
> >>> /* retire completed submits, plus the one that hung: */
> >>> retire_submits(gpu);
> >>>
> >>> @@ -436,10 +439,15 @@ static void recover_worker(struct kthread_work *work)
> >>> unsigned long flags;
> >>>
> >>> spin_lock_irqsave(&ring->submit_lock, flags);
> >>> - list_for_each_entry(submit, &ring->submits, node)
> >>> + list_for_each_entry(submit, &ring->submits, node) {
> >>> gpu->funcs->submit(gpu, submit);
> >>> + restart_hangcheck = true;
> >>> + }
> >>> spin_unlock_irqrestore(&ring->submit_lock, flags);
> >>> }
> >>> +
> >>> + if (restart_hangcheck)
> >>> + hangcheck_timer_reset(gpu);
> >>> }
> >>>
> >>> mutex_unlock(&gpu->lock);
> >>> @@ -515,10 +523,6 @@ static void hangcheck_handler(struct timer_list *t)
> >>> kthread_queue_work(gpu->worker, &gpu->recover_work);
> >>> }
> >>>
> >>> - /* if still more pending work, reset the hangcheck timer: */
> >> In the scenario mentioned here, shouldn't we restart the timer?
> > yeah, actually the case where we don't want to restart the timer is
> > *only* when we schedule recover_work..
> >
> > BR,
> > -R
> Not sure if your codebase is different but based on msm-next branch,
> when "if (fence != ring->hangcheck_fence)" is true, we now skip
> rescheduling the timer. I don't think that is what we want. There should
> be a hangcheck timer running as long as there is an active submit,
> unless we have scheduled a recover_work here.
>
right, v2 will change that to only skip rescheduling the timer in the
recover path
BR,
-R
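The v2 behavior agreed here — reschedule the hangcheck timer whenever work is still pending, *except* when recover_work was just queued — can be sketched with a hypothetical helper (not actual driver code; recover_worker itself rearms the timer after replaying jobs):

```c
/* Sketch of the v2 hangcheck_handler() decision from this thread.
 * more_pending corresponds to
 *   fence_after(ring->fctx->last_fence, ring->hangcheck_fence)
 * and queued_recover to having just queued gpu->recover_work. */
#include <assert.h>
#include <stdbool.h>

static bool should_rearm_timer(bool more_pending, bool queued_recover)
{
	if (queued_recover)
		return false;   /* recover_worker will rearm after replay */
	return more_pending;
}
```

This addresses Akhil's concern: a progressing ring with remaining work keeps its timer, and the timer only goes quiet across a recovery.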
> -Akhil.
> >
> >> -Akhil.
> >>> - if (fence_after(ring->fctx->last_fence, ring->hangcheck_fence))
> >>> - hangcheck_timer_reset(gpu);
> >>> -
> >>> /* workaround for missing irq: */
> >>> msm_gpu_retire(gpu);
> >>> }
> >>>
>