From: Luben Tuikov <luben.tuikov@amd.com>
To: "Christian König" <christian.koenig@amd.com>,
	"Andrey Grodzovsky" <andrey.grodzovsky@amd.com>,
	amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org,
	ckoenig.leichtzumerken@gmail.com, daniel.vetter@ffwll.ch,
	robh@kernel.org, l.stach@pengutronix.de, yuq825@gmail.com,
	eric@anholt.net
Cc: Tomeu Vizoso <tomeu.vizoso@collabora.com>,
	gregkh@linuxfoundation.org, Steven Price <steven.price@arm.com>,
	Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>,
	Russell King <linux+etnaviv@armlinux.org.uk>,
	Alexander.Deucher@amd.com
Subject: Re: [PATCH v4 12/14] drm/scheduler: Job timeout handler returns status
Date: Tue, 19 Jan 2021 12:47:56 -0500	[thread overview]
Message-ID: <b84f39e1-9bd8-ad53-6156-7af02c5604b7@amd.com> (raw)
In-Reply-To: <337f5f7d-d728-41d1-724b-6a62500f1f06@amd.com>

On 2021-01-19 2:53 a.m., Christian König wrote:
> Am 18.01.21 um 22:01 schrieb Andrey Grodzovsky:
>> From: Luben Tuikov <luben.tuikov@amd.com>
>>
>> This patch does not change current behaviour.
>>
>> The driver's job timeout handler now returns
>> a status indicating back to the DRM layer
>> whether the task (job) was successfully aborted
>> or whether more time should be given to the task
>> to complete.
>>
>> Default behaviour as of this patch is preserved,
>> except in the obvious-by-comment case in the
>> Panfrost driver, as documented below.
>>
>> All drivers which make use of the
>> drm_sched_backend_ops' .timedout_job() callback
>> have been updated accordingly and return the
>> would've-been default value of
>> DRM_TASK_STATUS_ALIVE to restart the task's
>> timeout timer--this is the old behaviour, and
>> is preserved by this patch.
>>
>> In the case of the Panfrost driver, its timedout
>> callback correctly first checks if the job had
>> completed in due time and if so, it now returns
>> DRM_TASK_STATUS_COMPLETE to notify the DRM layer
>> that the task can be moved to the done list, to be
>> freed later. In the other two subsequent checks,
>> the value of DRM_TASK_STATUS_ALIVE is returned, as
>> per the default behaviour.
>>
>> More involved driver solutions can be had
>> in subsequent patches.
>>
>> v2: Use an enum as the status of a driver's job
>>      timeout callback method.
>>
>> v4: (By Andrey Grodzovsky)
>> Replace DRM_TASK_STATUS_COMPLETE with DRM_TASK_STATUS_ENODEV
>> to enable a hint to the scheduler for when NOT to rearm the
>> timeout timer.
> As Lukas pointed out, returning the job (or task) status doesn't make
> much sense.
>
> What we return here is the status of the scheduler.
>
> I would either rename the enum or completely drop it and return a 
> negative error status.

Yes, that could be had.

Although, dropping the enum and returning [-1, 0] might make
the meaning of the return status vague. Using an enum with an
appropriate name makes the intention clear to the next
programmer; see the sketch below.

Now, Andrey did rename one of the enumerated values to
DRM_TASK_STATUS_ENODEV; perhaps doing the same, but with:

enum drm_sched_status {
    DRM_SCHED_STAT_NONE, /* Reserve 0 */
    DRM_SCHED_STAT_NOMINAL,
    DRM_SCHED_STAT_ENODEV,
};

and also renaming the enum itself as shown above, would that be acceptable?
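
To make the intent concrete, here is a minimal sketch of a
driver's timeout callback under that naming. The "my_*" helper
names are hypothetical; only the enum values and the callback
signature follow the proposal above:

static enum drm_sched_status my_job_timedout(struct drm_sched_job *sched_job)
{
	struct my_device *mdev = to_my_device(sched_job->sched);

	/* The job signalled just before the timer fired: the task
	 * is healthy, so ask the scheduler to rearm the timeout
	 * timer.
	 */
	if (dma_fence_is_signaled(sched_job->s_fence->parent))
		return DRM_SCHED_STAT_NOMINAL;

	/* The device is gone (e.g. hot-unplugged): hint to the
	 * scheduler NOT to rearm the timeout timer.
	 */
	if (my_device_is_unplugged(mdev))
		return DRM_SCHED_STAT_ENODEV;

	/* Otherwise attempt recovery and keep the task alive. */
	my_device_reset(mdev);
	return DRM_SCHED_STAT_NOMINAL;
}

With a bare [-1, 0] return, none of these return sites would say
which case they are handling.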

Regards,
Luben

> Apart from that looks fine to me,
> Christian.
>
>
>> Cc: Alexander Deucher <Alexander.Deucher@amd.com>
>> Cc: Andrey Grodzovsky <Andrey.Grodzovsky@amd.com>
>> Cc: Christian König <christian.koenig@amd.com>
>> Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
>> Cc: Lucas Stach <l.stach@pengutronix.de>
>> Cc: Russell King <linux+etnaviv@armlinux.org.uk>
>> Cc: Christian Gmeiner <christian.gmeiner@gmail.com>
>> Cc: Qiang Yu <yuq825@gmail.com>
>> Cc: Rob Herring <robh@kernel.org>
>> Cc: Tomeu Vizoso <tomeu.vizoso@collabora.com>
>> Cc: Steven Price <steven.price@arm.com>
>> Cc: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
>> Cc: Eric Anholt <eric@anholt.net>
>> Reported-by: kernel test robot <lkp@intel.com>
>> Signed-off-by: Luben Tuikov <luben.tuikov@amd.com>
>> Signed-off-by: Andrey Grodzovsky <andrey.grodzovsky@amd.com>
>> ---
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_job.c |  6 ++++--
>>   drivers/gpu/drm/etnaviv/etnaviv_sched.c | 10 +++++++++-
>>   drivers/gpu/drm/lima/lima_sched.c       |  4 +++-
>>   drivers/gpu/drm/panfrost/panfrost_job.c |  9 ++++++---
>>   drivers/gpu/drm/scheduler/sched_main.c  |  4 +---
>>   drivers/gpu/drm/v3d/v3d_sched.c         | 32 +++++++++++++++++---------------
>>   include/drm/gpu_scheduler.h             | 17 ++++++++++++++---
>>   7 files changed, 54 insertions(+), 28 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
>> index ff48101..a111326 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
>> @@ -28,7 +28,7 @@
>>   #include "amdgpu.h"
>>   #include "amdgpu_trace.h"
>>   
>> -static void amdgpu_job_timedout(struct drm_sched_job *s_job)
>> +static enum drm_task_status amdgpu_job_timedout(struct drm_sched_job *s_job)
>>   {
>>   	struct amdgpu_ring *ring = to_amdgpu_ring(s_job->sched);
>>   	struct amdgpu_job *job = to_amdgpu_job(s_job);
>> @@ -41,7 +41,7 @@ static void amdgpu_job_timedout(struct drm_sched_job *s_job)
>>   	    amdgpu_ring_soft_recovery(ring, job->vmid, s_job->s_fence->parent)) {
>>   		DRM_ERROR("ring %s timeout, but soft recovered\n",
>>   			  s_job->sched->name);
>> -		return;
>> +		return DRM_TASK_STATUS_ALIVE;
>>   	}
>>   
>>   	amdgpu_vm_get_task_info(ring->adev, job->pasid, &ti);
>> @@ -53,10 +53,12 @@ static void amdgpu_job_timedout(struct drm_sched_job *s_job)
>>   
>>   	if (amdgpu_device_should_recover_gpu(ring->adev)) {
>>   		amdgpu_device_gpu_recover(ring->adev, job);
>> +		return DRM_TASK_STATUS_ALIVE;
>>   	} else {
>>   		drm_sched_suspend_timeout(&ring->sched);
>>   		if (amdgpu_sriov_vf(adev))
>>   			adev->virt.tdr_debug = true;
>> +		return DRM_TASK_STATUS_ALIVE;
>>   	}
>>   }
>>   
>> diff --git a/drivers/gpu/drm/etnaviv/etnaviv_sched.c b/drivers/gpu/drm/etnaviv/etnaviv_sched.c
>> index cd46c88..c495169 100644
>> --- a/drivers/gpu/drm/etnaviv/etnaviv_sched.c
>> +++ b/drivers/gpu/drm/etnaviv/etnaviv_sched.c
>> @@ -82,7 +82,8 @@ static struct dma_fence *etnaviv_sched_run_job(struct drm_sched_job *sched_job)
>>   	return fence;
>>   }
>>   
>> -static void etnaviv_sched_timedout_job(struct drm_sched_job *sched_job)
>> +static enum drm_task_status etnaviv_sched_timedout_job(struct drm_sched_job
>> +						       *sched_job)
>>   {
>>   	struct etnaviv_gem_submit *submit = to_etnaviv_submit(sched_job);
>>   	struct etnaviv_gpu *gpu = submit->gpu;
>> @@ -120,9 +121,16 @@ static void etnaviv_sched_timedout_job(struct drm_sched_job *sched_job)
>>   
>>   	drm_sched_resubmit_jobs(&gpu->sched);
>>   
>> +	/* Tell the DRM scheduler that this task needs
>> +	 * more time.
>> +	 */
>> +	drm_sched_start(&gpu->sched, true);
>> +	return DRM_TASK_STATUS_ALIVE;
>> +
>>   out_no_timeout:
>>   	/* restart scheduler after GPU is usable again */
>>   	drm_sched_start(&gpu->sched, true);
>> +	return DRM_TASK_STATUS_ALIVE;
>>   }
>>   
>>   static void etnaviv_sched_free_job(struct drm_sched_job *sched_job)
>> diff --git a/drivers/gpu/drm/lima/lima_sched.c b/drivers/gpu/drm/lima/lima_sched.c
>> index 63b4c56..66d9236 100644
>> --- a/drivers/gpu/drm/lima/lima_sched.c
>> +++ b/drivers/gpu/drm/lima/lima_sched.c
>> @@ -415,7 +415,7 @@ static void lima_sched_build_error_task_list(struct lima_sched_task *task)
>>   	mutex_unlock(&dev->error_task_list_lock);
>>   }
>>   
>> -static void lima_sched_timedout_job(struct drm_sched_job *job)
>> +static enum drm_task_status lima_sched_timedout_job(struct drm_sched_job *job)
>>   {
>>   	struct lima_sched_pipe *pipe = to_lima_pipe(job->sched);
>>   	struct lima_sched_task *task = to_lima_task(job);
>> @@ -449,6 +449,8 @@ static void lima_sched_timedout_job(struct drm_sched_job *job)
>>   
>>   	drm_sched_resubmit_jobs(&pipe->base);
>>   	drm_sched_start(&pipe->base, true);
>> +
>> +	return DRM_TASK_STATUS_ALIVE;
>>   }
>>   
>>   static void lima_sched_free_job(struct drm_sched_job *job)
>> diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c b/drivers/gpu/drm/panfrost/panfrost_job.c
>> index 04e6f6f..10d41ac 100644
>> --- a/drivers/gpu/drm/panfrost/panfrost_job.c
>> +++ b/drivers/gpu/drm/panfrost/panfrost_job.c
>> @@ -432,7 +432,8 @@ static void panfrost_scheduler_start(struct panfrost_queue_state *queue)
>>   	mutex_unlock(&queue->lock);
>>   }
>>   
>> -static void panfrost_job_timedout(struct drm_sched_job *sched_job)
>> +static enum drm_task_status panfrost_job_timedout(struct drm_sched_job
>> +						  *sched_job)
>>   {
>>   	struct panfrost_job *job = to_panfrost_job(sched_job);
>>   	struct panfrost_device *pfdev = job->pfdev;
>> @@ -443,7 +444,7 @@ static void panfrost_job_timedout(struct drm_sched_job *sched_job)
>>   	 * spurious. Bail out.
>>   	 */
>>   	if (dma_fence_is_signaled(job->done_fence))
>> -		return;
>> +		return DRM_TASK_STATUS_ALIVE;
>>   
>>   	dev_err(pfdev->dev, "gpu sched timeout, js=%d, config=0x%x, status=0x%x, head=0x%x, tail=0x%x, sched_job=%p",
>>   		js,
>> @@ -455,11 +456,13 @@ static void panfrost_job_timedout(struct drm_sched_job *sched_job)
>>   
>>   	/* Scheduler is already stopped, nothing to do. */
>>   	if (!panfrost_scheduler_stop(&pfdev->js->queue[js], sched_job))
>> -		return;
>> +		return DRM_TASK_STATUS_ALIVE;
>>   
>>   	/* Schedule a reset if there's no reset in progress. */
>>   	if (!atomic_xchg(&pfdev->reset.pending, 1))
>>   		schedule_work(&pfdev->reset.work);
>> +
>> +	return DRM_TASK_STATUS_ALIVE;
>>   }
>>   
>>   static const struct drm_sched_backend_ops panfrost_sched_ops = {
>> diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
>> index 92637b7..73fccc5 100644
>> --- a/drivers/gpu/drm/scheduler/sched_main.c
>> +++ b/drivers/gpu/drm/scheduler/sched_main.c
>> @@ -527,7 +527,7 @@ void drm_sched_start(struct drm_gpu_scheduler *sched, bool full_recovery)
>>   EXPORT_SYMBOL(drm_sched_start);
>>   
>>   /**
>> - * drm_sched_resubmit_jobs - helper to relunch job from pending ring list
>> + * drm_sched_resubmit_jobs - helper to relaunch jobs from the pending list
>>    *
>>    * @sched: scheduler instance
>>    *
>> @@ -561,8 +561,6 @@ void drm_sched_resubmit_jobs(struct drm_gpu_scheduler *sched)
>>   		} else {
>>   			s_job->s_fence->parent = fence;
>>   		}
>> -
>> -
>>   	}
>>   }
>>   EXPORT_SYMBOL(drm_sched_resubmit_jobs);
>> diff --git a/drivers/gpu/drm/v3d/v3d_sched.c b/drivers/gpu/drm/v3d/v3d_sched.c
>> index 452682e..3740665e 100644
>> --- a/drivers/gpu/drm/v3d/v3d_sched.c
>> +++ b/drivers/gpu/drm/v3d/v3d_sched.c
>> @@ -259,7 +259,7 @@ v3d_cache_clean_job_run(struct drm_sched_job *sched_job)
>>   	return NULL;
>>   }
>>   
>> -static void
>> +static enum drm_task_status
>>   v3d_gpu_reset_for_timeout(struct v3d_dev *v3d, struct drm_sched_job *sched_job)
>>   {
>>   	enum v3d_queue q;
>> @@ -285,6 +285,8 @@ v3d_gpu_reset_for_timeout(struct v3d_dev *v3d, struct drm_sched_job *sched_job)
>>   	}
>>   
>>   	mutex_unlock(&v3d->reset_lock);
>> +
>> +	return DRM_TASK_STATUS_ALIVE;
>>   }
>>   
>>   /* If the current address or return address have changed, then the GPU
>> @@ -292,7 +294,7 @@ v3d_gpu_reset_for_timeout(struct v3d_dev *v3d, struct drm_sched_job *sched_job)
>>    * could fail if the GPU got in an infinite loop in the CL, but that
>>    * is pretty unlikely outside of an i-g-t testcase.
>>    */
>> -static void
>> +static enum drm_task_status
>>   v3d_cl_job_timedout(struct drm_sched_job *sched_job, enum v3d_queue q,
>>   		    u32 *timedout_ctca, u32 *timedout_ctra)
>>   {
>> @@ -304,39 +306,39 @@ v3d_cl_job_timedout(struct drm_sched_job *sched_job, enum v3d_queue q,
>>   	if (*timedout_ctca != ctca || *timedout_ctra != ctra) {
>>   		*timedout_ctca = ctca;
>>   		*timedout_ctra = ctra;
>> -		return;
>> +		return DRM_TASK_STATUS_ALIVE;
>>   	}
>>   
>> -	v3d_gpu_reset_for_timeout(v3d, sched_job);
>> +	return v3d_gpu_reset_for_timeout(v3d, sched_job);
>>   }
>>   
>> -static void
>> +static enum drm_task_status
>>   v3d_bin_job_timedout(struct drm_sched_job *sched_job)
>>   {
>>   	struct v3d_bin_job *job = to_bin_job(sched_job);
>>   
>> -	v3d_cl_job_timedout(sched_job, V3D_BIN,
>> -			    &job->timedout_ctca, &job->timedout_ctra);
>> +	return v3d_cl_job_timedout(sched_job, V3D_BIN,
>> +				   &job->timedout_ctca, &job->timedout_ctra);
>>   }
>>   
>> -static void
>> +static enum drm_task_status
>>   v3d_render_job_timedout(struct drm_sched_job *sched_job)
>>   {
>>   	struct v3d_render_job *job = to_render_job(sched_job);
>>   
>> -	v3d_cl_job_timedout(sched_job, V3D_RENDER,
>> -			    &job->timedout_ctca, &job->timedout_ctra);
>> +	return v3d_cl_job_timedout(sched_job, V3D_RENDER,
>> +				   &job->timedout_ctca, &job->timedout_ctra);
>>   }
>>   
>> -static void
>> +static enum drm_task_status
>>   v3d_generic_job_timedout(struct drm_sched_job *sched_job)
>>   {
>>   	struct v3d_job *job = to_v3d_job(sched_job);
>>   
>> -	v3d_gpu_reset_for_timeout(job->v3d, sched_job);
>> +	return v3d_gpu_reset_for_timeout(job->v3d, sched_job);
>>   }
>>   
>> -static void
>> +static enum drm_task_status
>>   v3d_csd_job_timedout(struct drm_sched_job *sched_job)
>>   {
>>   	struct v3d_csd_job *job = to_csd_job(sched_job);
>> @@ -348,10 +350,10 @@ v3d_csd_job_timedout(struct drm_sched_job *sched_job)
>>   	 */
>>   	if (job->timedout_batches != batches) {
>>   		job->timedout_batches = batches;
>> -		return;
>> +		return DRM_TASK_STATUS_ALIVE;
>>   	}
>>   
>> -	v3d_gpu_reset_for_timeout(v3d, sched_job);
>> +	return v3d_gpu_reset_for_timeout(v3d, sched_job);
>>   }
>>   
>>   static const struct drm_sched_backend_ops v3d_bin_sched_ops = {
>> diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
>> index 975e8a6..3ba36bc 100644
>> --- a/include/drm/gpu_scheduler.h
>> +++ b/include/drm/gpu_scheduler.h
>> @@ -206,6 +206,11 @@ static inline bool drm_sched_invalidate_job(struct drm_sched_job *s_job,
>>   	return s_job && atomic_inc_return(&s_job->karma) > threshold;
>>   }
>>   
>> +enum drm_task_status {
>> +	DRM_TASK_STATUS_ENODEV,
>> +	DRM_TASK_STATUS_ALIVE
>> +};
>> +
>>   /**
>>    * struct drm_sched_backend_ops
>>    *
>> @@ -230,10 +235,16 @@ struct drm_sched_backend_ops {
>>   	struct dma_fence *(*run_job)(struct drm_sched_job *sched_job);
>>   
>>   	/**
>> -         * @timedout_job: Called when a job has taken too long to execute,
>> -         * to trigger GPU recovery.
>> +	 * @timedout_job: Called when a job has taken too long to execute,
>> +	 * to trigger GPU recovery.
>> +	 *
>> +	 * Return DRM_TASK_STATUS_ALIVE, if the task (job) is healthy
>> +	 * and executing in the hardware, i.e. it needs more time.
>> +	 *
>> +	 * Return DRM_TASK_STATUS_ENODEV, if the task (job) has
>> +	 * been aborted.
>>   	 */
>> -	void (*timedout_job)(struct drm_sched_job *sched_job);
>> +	enum drm_task_status (*timedout_job)(struct drm_sched_job *sched_job);
>>   
>>   	/**
>>            * @free_job: Called once the job's finished fence has been signaled

Thread overview: 196+ messages
2021-01-18 21:01 [PATCH v4 00/14] RFC Support hot device unplug in amdgpu Andrey Grodzovsky
2021-01-18 21:01 ` [PATCH v4 01/14] drm/ttm: Remap all page faults to per process dummy page Andrey Grodzovsky
2021-01-18 21:48   ` Alex Deucher
2021-01-19  8:41   ` Christian König
2021-01-19 13:56   ` Daniel Vetter
2021-01-25 15:28     ` Andrey Grodzovsky
2021-01-27 14:29       ` Andrey Grodzovsky
2021-02-02 14:21         ` Daniel Vetter
2021-01-18 21:01 ` [PATCH v4 02/14] drm: Unamp the entire device address space on device unplug Andrey Grodzovsky
2021-01-18 21:01 ` [PATCH v4 03/14] drm/ttm: Expose ttm_tt_unpopulate for driver use Andrey Grodzovsky
2021-01-18 21:01 ` [PATCH v4 04/14] drm/sched: Cancel and flush all oustatdning jobs before finish Andrey Grodzovsky
2021-01-18 21:49   ` Alex Deucher
2021-01-19  8:42   ` Christian König
2021-01-19  9:50     ` Christian König
2021-01-18 21:01 ` [PATCH v4 05/14] drm/amdgpu: Split amdgpu_device_fini into early and late Andrey Grodzovsky
2021-01-19  8:45   ` Christian König
2021-01-18 21:01 ` [PATCH v4 06/14] drm/amdgpu: Add early fini callback Andrey Grodzovsky
2021-01-18 21:01 ` [PATCH v4 07/14] drm/amdgpu: Register IOMMU topology notifier per device Andrey Grodzovsky
2021-01-18 21:52   ` Alex Deucher
2021-01-19  8:48   ` Christian König
2021-01-19 13:45     ` Daniel Vetter
2021-01-19 21:21       ` Andrey Grodzovsky
2021-01-19 22:01         ` Daniel Vetter
2021-01-20  4:21           ` Andrey Grodzovsky
2021-01-20  8:38             ` Daniel Vetter
     [not found]               ` <1a5f7ccb-1f91-91be-1cb1-e7cb43ac2c13@amd.com>
2021-01-21 10:48                 ` Daniel Vetter
2021-01-20  5:01     ` Andrey Grodzovsky
2021-01-20 19:38       ` Andrey Grodzovsky
2021-01-21 10:42         ` Christian König
2021-01-18 21:01 ` [PATCH v4 08/14] drm/amdgpu: Fix a bunch of sdma code crash post device unplug Andrey Grodzovsky
2021-01-19  8:51   ` Christian König
2021-01-18 21:01 ` [PATCH v4 09/14] drm/amdgpu: Remap all page faults to per process dummy page Andrey Grodzovsky
2021-01-19  8:52   ` Christian König
2021-01-18 21:01 ` [PATCH v4 10/14] dmr/amdgpu: Move some sysfs attrs creation to default_attr Andrey Grodzovsky
2021-01-19  7:34   ` Greg KH
2021-01-19 16:36     ` Andrey Grodzovsky
2021-01-19 17:47       ` Greg KH
2021-01-19 19:04         ` Alex Deucher
2021-01-19 19:16           ` Andrey Grodzovsky
2021-01-19 19:41           ` Greg KH
2021-01-19  8:53   ` Christian König
2021-01-18 21:01 ` [PATCH v4 11/14] drm/amdgpu: Guard against write accesses after device removal Andrey Grodzovsky
2021-01-19  8:55   ` Christian König
2021-01-19 15:35     ` Andrey Grodzovsky
2021-01-19 15:39       ` Christian König
2021-01-19 18:05       ` Daniel Vetter
2021-01-19 18:22         ` Andrey Grodzovsky
2021-01-19 18:59           ` Christian König
2021-01-19 19:16             ` Andrey Grodzovsky
2021-01-20 19:34               ` Andrey Grodzovsky
2021-01-28 17:23             ` Andrey Grodzovsky
2021-01-29 15:16               ` Christian König
2021-01-29 17:35                 ` Andrey Grodzovsky
2021-01-29 19:25                   ` Christian König
2021-02-05 16:22                     ` Andrey Grodzovsky
2021-02-05 22:10                       ` Daniel Vetter
2021-02-05 23:09                         ` Andrey Grodzovsky
2021-02-06 14:18                           ` Daniel Vetter
2021-02-07 21:28                         ` Andrey Grodzovsky
2021-02-07 21:50                           ` Daniel Vetter
2021-02-08  9:37                             ` Christian König
2021-02-08  9:48                               ` Daniel Vetter
2021-02-08 10:03                                 ` Christian König
2021-02-08 10:11                                   ` Daniel Vetter
2021-02-08 13:59                                     ` Christian König
2021-02-08 16:23                                       ` Daniel Vetter
2021-02-08 22:15                                         ` Andrey Grodzovsky
2021-02-09  7:58                                           ` Christian König
2021-02-09 14:30                                             ` Andrey Grodzovsky
2021-02-09 15:40                                               ` Christian König
2021-02-10 22:01                                                 ` Andrey Grodzovsky
2021-02-12 15:00                                                   ` Andrey Grodzovsky
2021-02-08 22:09                               ` Andrey Grodzovsky
2021-02-09  8:27                                 ` Christian König
2021-02-09  9:46                                   ` Daniel Vetter
2021-01-18 21:01 ` [PATCH v4 12/14] drm/scheduler: Job timeout handler returns status Andrey Grodzovsky
2021-01-19  7:53   ` Christian König
2021-01-19 17:47     ` Luben Tuikov [this message]
2021-01-19 18:53       ` Christian König
2021-01-18 21:01 ` [PATCH v4 13/14] drm/sched: Make timeout timer rearm conditional Andrey Grodzovsky
2021-01-18 21:01 ` [PATCH v4 14/14] drm/amdgpu: Prevent any job recoveries after device is unplugged Andrey Grodzovsky
2021-01-19 14:16 ` [PATCH v4 00/14] RFC Support hot device unplug in amdgpu Daniel Vetter
2021-01-19 17:31   ` Andrey Grodzovsky
2021-01-19 18:08     ` Daniel Vetter
2021-01-19 18:18       ` Andrey Grodzovsky
2021-01-20  9:05         ` Daniel Vetter
2021-01-20 14:19           ` Andrey Grodzovsky
2021-01-20 15:59             ` Daniel Vetter
2021-02-08  5:59               ` Andrey Grodzovsky
2021-02-08  7:27                 ` Daniel Vetter
2021-02-09  4:01                   ` Andrey Grodzovsky
2021-02-09  9:50                     ` Daniel Vetter
2021-02-09 15:34                       ` Andrey Grodzovsky
2021-02-18 20:03                       ` Andrey Grodzovsky
2021-02-19 10:24                         ` Daniel Vetter
2021-02-24 16:30                           ` Andrey Grodzovsky
2021-02-25 10:25                             ` Daniel Vetter
2021-02-25 16:12                               ` Andrey Grodzovsky
