From: Andrey Grodzovsky <andrey.grodzovsky@amd.com>
To: "Christian König" <ckoenig.leichtzumerken@gmail.com>,
	"Zhang, Jack (Jian)" <Jack.Zhang1@amd.com>,
	"dri-devel@lists.freedesktop.org"
	<dri-devel@lists.freedesktop.org>,
	"amd-gfx@lists.freedesktop.org" <amd-gfx@lists.freedesktop.org>,
	"Koenig, Christian" <Christian.Koenig@amd.com>,
	"Liu, Monk" <Monk.Liu@amd.com>,
	"Deng, Emily" <Emily.Deng@amd.com>,
	"Rob Herring" <robh@kernel.org>,
	"Tomeu Vizoso" <tomeu.vizoso@collabora.com>,
	"Steven Price" <steven.price@arm.com>
Subject: Re: [PATCH v3] drm/scheduler re-insert Bailing job to avoid memleak
Date: Wed, 17 Mar 2021 10:50:59 -0400	[thread overview]
Message-ID: <64583ef0-d19c-1906-91f2-70fd649fa46e@amd.com> (raw)
In-Reply-To: <9b48b715-52dd-e435-2873-2472427dffda@gmail.com>

I actually have a race condition concern here - see below -

On 2021-03-17 3:43 a.m., Christian König wrote:
> I was hoping Andrey would take a look since I'm really busy with other 
> work right now.
>
> Regards,
> Christian.
>
> On 17.03.21 at 07:46, Zhang, Jack (Jian) wrote:
>> Hi Andrey, Christian and team,
>>
>> I haven't received any review feedback from the panfrost driver
>> maintainers for several days, and this patch is urgent for my
>> current project.
>> Would you please help review it?
>>
>> Many Thanks,
>> Jack
>> -----Original Message-----
>> From: Zhang, Jack (Jian)
>> Sent: Tuesday, March 16, 2021 3:20 PM
>> To: dri-devel@lists.freedesktop.org; amd-gfx@lists.freedesktop.org; 
>> Koenig, Christian <Christian.Koenig@amd.com>; Grodzovsky, Andrey 
>> <Andrey.Grodzovsky@amd.com>; Liu, Monk <Monk.Liu@amd.com>; Deng, 
>> Emily <Emily.Deng@amd.com>; Rob Herring <robh@kernel.org>; Tomeu 
>> Vizoso <tomeu.vizoso@collabora.com>; Steven Price <steven.price@arm.com>
>> Subject: RE: [PATCH v3] drm/scheduler re-insert Bailing job to avoid 
>> memleak
>>
>> [AMD Public Use]
>>
>> Ping
>>
>> -----Original Message-----
>> From: Zhang, Jack (Jian)
>> Sent: Monday, March 15, 2021 1:24 PM
>> To: Jack Zhang <Jack.Zhang1@amd.com>; 
>> dri-devel@lists.freedesktop.org; amd-gfx@lists.freedesktop.org; 
>> Koenig, Christian <Christian.Koenig@amd.com>; Grodzovsky, Andrey 
>> <Andrey.Grodzovsky@amd.com>; Liu, Monk <Monk.Liu@amd.com>; Deng, 
>> Emily <Emily.Deng@amd.com>; Rob Herring <robh@kernel.org>; Tomeu 
>> Vizoso <tomeu.vizoso@collabora.com>; Steven Price <steven.price@arm.com>
>> Subject: RE: [PATCH v3] drm/scheduler re-insert Bailing job to avoid 
>> memleak
>>
>> [AMD Public Use]
>>
>> Hi, Rob/Tomeu/Steven,
>>
>> Would you please help to review this patch for panfrost driver?
>>
>> Thanks,
>> Jack Zhang
>>
>> -----Original Message-----
>> From: Jack Zhang <Jack.Zhang1@amd.com>
>> Sent: Monday, March 15, 2021 1:21 PM
>> To: dri-devel@lists.freedesktop.org; amd-gfx@lists.freedesktop.org; 
>> Koenig, Christian <Christian.Koenig@amd.com>; Grodzovsky, Andrey 
>> <Andrey.Grodzovsky@amd.com>; Liu, Monk <Monk.Liu@amd.com>; Deng, 
>> Emily <Emily.Deng@amd.com>
>> Cc: Zhang, Jack (Jian) <Jack.Zhang1@amd.com>
>> Subject: [PATCH v3] drm/scheduler re-insert Bailing job to avoid memleak
>>
>> Re-insert bailing jobs to avoid a memory leak.
>>
>> V2: move re-insert step to drm/scheduler logic
>> V3: add panfrost's return value for bailing jobs in case it hits the 
>> memleak issue.
>>
>> Signed-off-by: Jack Zhang <Jack.Zhang1@amd.com>
>> ---
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 4 +++-
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_job.c    | 8 ++++++--
>>   drivers/gpu/drm/panfrost/panfrost_job.c    | 4 ++--
>>   drivers/gpu/drm/scheduler/sched_main.c     | 8 +++++++-
>>   include/drm/gpu_scheduler.h                | 1 +
>>   5 files changed, 19 insertions(+), 6 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c 
>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
>> index 79b9cc73763f..86463b0f936e 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
>> @@ -4815,8 +4815,10 @@ int amdgpu_device_gpu_recover(struct 
>> amdgpu_device *adev,
>>                       job ? job->base.id : -1);
>>             /* even we skipped this reset, still need to set the job 
>> to guilty */
>> -        if (job)
>> +        if (job) {
>>               drm_sched_increase_karma(&job->base);
>> +            r = DRM_GPU_SCHED_STAT_BAILING;
>> +        }
>>           goto skip_recovery;
>>       }
>>
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
>> index 759b34799221..41390bdacd9e 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
>> @@ -34,6 +34,7 @@ static enum drm_gpu_sched_stat 
>> amdgpu_job_timedout(struct drm_sched_job *s_job)
>>       struct amdgpu_job *job = to_amdgpu_job(s_job);
>>       struct amdgpu_task_info ti;
>>       struct amdgpu_device *adev = ring->adev;
>> +    int ret;
>>         memset(&ti, 0, sizeof(struct amdgpu_task_info));
>>
>> @@ -52,8 +53,11 @@ static enum drm_gpu_sched_stat
>> amdgpu_job_timedout(struct drm_sched_job *s_job)
>>             ti.process_name, ti.tgid, ti.task_name, ti.pid);
>>         if (amdgpu_device_should_recover_gpu(ring->adev)) {
>> -        amdgpu_device_gpu_recover(ring->adev, job);
>> -        return DRM_GPU_SCHED_STAT_NOMINAL;
>> +        ret = amdgpu_device_gpu_recover(ring->adev, job);
>> +        if (ret == DRM_GPU_SCHED_STAT_BAILING)
>> +            return DRM_GPU_SCHED_STAT_BAILING;
>> +        else
>> +            return DRM_GPU_SCHED_STAT_NOMINAL;
>>       } else {
>>           drm_sched_suspend_timeout(&ring->sched);
>>           if (amdgpu_sriov_vf(adev))
>> diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c 
>> b/drivers/gpu/drm/panfrost/panfrost_job.c
>> index 6003cfeb1322..e2cb4f32dae1 100644
>> --- a/drivers/gpu/drm/panfrost/panfrost_job.c
>> +++ b/drivers/gpu/drm/panfrost/panfrost_job.c
>> @@ -444,7 +444,7 @@ static enum drm_gpu_sched_stat 
>> panfrost_job_timedout(struct drm_sched_job
>>        * spurious. Bail out.
>>        */
>>       if (dma_fence_is_signaled(job->done_fence))
>> -        return DRM_GPU_SCHED_STAT_NOMINAL;
>> +        return DRM_GPU_SCHED_STAT_BAILING;
>>         dev_err(pfdev->dev, "gpu sched timeout, js=%d, config=0x%x, 
>> status=0x%x, head=0x%x, tail=0x%x, sched_job=%p",
>>           js,
>> @@ -456,7 +456,7 @@ static enum drm_gpu_sched_stat 
>> panfrost_job_timedout(struct drm_sched_job
>>         /* Scheduler is already stopped, nothing to do. */
>>       if (!panfrost_scheduler_stop(&pfdev->js->queue[js], sched_job))
>> -        return DRM_GPU_SCHED_STAT_NOMINAL;
>> +        return DRM_GPU_SCHED_STAT_BAILING;
>>         /* Schedule a reset if there's no reset in progress. */
>>       if (!atomic_xchg(&pfdev->reset.pending, 1))
>>
>> diff --git a/drivers/gpu/drm/scheduler/sched_main.c
>> b/drivers/gpu/drm/scheduler/sched_main.c
>> index 92d8de24d0a1..a44f621fb5c4 100644
>> --- a/drivers/gpu/drm/scheduler/sched_main.c
>> +++ b/drivers/gpu/drm/scheduler/sched_main.c
>> @@ -314,6 +314,7 @@ static void drm_sched_job_timedout(struct 
>> work_struct *work)
>>  {
>>       struct drm_gpu_scheduler *sched;
>>       struct drm_sched_job *job;
>> +    int ret;
>>         sched = container_of(work, struct drm_gpu_scheduler, 
>> work_tdr.work);
>>
>> @@ -331,8 +332,13 @@ static void drm_sched_job_timedout(struct
>> work_struct *work)
>>           list_del_init(&job->list);
>>           spin_unlock(&sched->job_list_lock);
>>
>> -        job->sched->ops->timedout_job(job);
>> +        ret = job->sched->ops->timedout_job(job);
>>
>> +        if (ret == DRM_GPU_SCHED_STAT_BAILING) {
>> +            spin_lock(&sched->job_list_lock);
>> +            list_add(&job->node, &sched->ring_mirror_list);
>> +            spin_unlock(&sched->job_list_lock);
>> +        }


At this point we don't hold the GPU reset locks anymore, so we could
be racing against another TDR thread from another scheduler ring of
the same device, or from another XGMI hive member. The other thread
might be in the middle of a lockless iteration of the mirror list
(drm_sched_stop, drm_sched_start and drm_sched_resubmit), so taking
job_list_lock will not help. It looks like we need to take all the
GPU reset locks here.
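
To make the concern concrete, roughly this ordering would be needed
around the re-insert. This is only a sketch, not a tested
implementation: lock_reset_domain()/unlock_reset_domain() are
hypothetical hooks standing in for whatever per-device reset locking
the driver uses (every device of the XGMI hive in the amdgpu case):

static void drm_sched_reinsert_bailed_job(struct drm_gpu_scheduler *sched,
					  struct drm_sched_job *job)
{
	/* Hypothetical hook: block every concurrent TDR in the reset
	 * domain so none of them is mid-way through a lockless walk of
	 * the mirror list (drm_sched_stop/start/resubmit). */
	sched->ops->lock_reset_domain(sched);

	spin_lock(&sched->job_list_lock);
	list_add(&job->node, &sched->ring_mirror_list);
	spin_unlock(&sched->job_list_lock);

	sched->ops->unlock_reset_domain(sched);
}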

Andrey


>>           /*
>>            * Guilty job did complete and hence needs to be manually 
>> removed
>>            * See drm_sched_stop doc.
>> diff --git a/include/drm/gpu_scheduler.h 
>> b/include/drm/gpu_scheduler.h
>> index 4ea8606d91fe..8093ac2427ef 100644
>> --- a/include/drm/gpu_scheduler.h
>> +++ b/include/drm/gpu_scheduler.h
>> @@ -210,6 +210,7 @@ enum drm_gpu_sched_stat {
>>       DRM_GPU_SCHED_STAT_NONE, /* Reserve 0 */
>>       DRM_GPU_SCHED_STAT_NOMINAL,
>>       DRM_GPU_SCHED_STAT_ENODEV,
>> +    DRM_GPU_SCHED_STAT_BAILING,
>>   };
>>     /**
>> -- 
>> 2.25.1
>> _______________________________________________
>> amd-gfx mailing list
>> amd-gfx@lists.freedesktop.org
>> https://lists.freedesktop.org/mailman/listinfo/amd-gfx
>>
>
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

Thread overview: 20 messages
2021-03-15  5:20 [PATCH v3] drm/scheduler re-insert Bailing job to avoid memleak Jack Zhang
2021-03-15  5:23 ` Zhang, Jack (Jian)
2021-03-16  7:19   ` Zhang, Jack (Jian)
2021-03-17  6:46     ` Zhang, Jack (Jian)
2021-03-17  7:43       ` Christian König
2021-03-17 14:50         ` Andrey Grodzovsky [this message]
2021-03-17 15:11           ` Zhang, Jack (Jian)
2021-03-18 10:41             ` Zhang, Jack (Jian)
2021-03-18 16:16               ` Andrey Grodzovsky
2021-03-25  9:51                 ` Zhang, Jack (Jian)
2021-03-25 16:32                   ` Andrey Grodzovsky
2021-03-26  2:23                     ` Zhang, Jack (Jian)
2021-03-26  9:05                       ` Christian König
2021-03-26 11:21                         ` Re: " Liu, Monk
2021-03-26 14:51                           ` Christian König
2021-03-30  3:10                             ` Liu, Monk
2021-03-30  6:59                               ` Christian König
2021-03-22 15:29   ` Steven Price
2021-03-26  2:04     ` Zhang, Jack (Jian)
2021-03-26  9:07       ` Steven Price
