From: "Koenig, Christian" <Christian.Koenig-5C7GfCeVMHo@public.gmane.org> To: "Deng, Emily" <Emily.Deng-5C7GfCeVMHo@public.gmane.org>, "amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW@public.gmane.org" <amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW@public.gmane.org> Subject: Re: [PATCH] drm/amdgpu: Fix the null pointer issue for tdr Date: Fri, 8 Nov 2019 09:42:32 +0000 [thread overview] Message-ID: <70c2c1cc-40b8-30da-7aee-f59fbc4d0d42@amd.com> (raw) In-Reply-To: <MN2PR12MB29755CFCE09CEC0D9EB999D18F7B0-rweVpJHSKToFlvJWC7EAqwdYzm3356FpvxpqHgZTriW3zl9H0oFU5g@public.gmane.org> Am 08.11.19 um 10:39 schrieb Deng, Emily: > Sorry, please take your time. Have you seen my other response a bit below? I can't follow how it would be possible for job->s_fence to be NULL without the job also being freed. So it looks like this patch is just papering over some bigger issues. Regards, Christian. > > Best wishes > Emily Deng > > > >> -----Original Message----- >> From: Koenig, Christian <Christian.Koenig@amd.com> >> Sent: Friday, November 8, 2019 5:08 PM >> To: Deng, Emily <Emily.Deng@amd.com>; amd-gfx@lists.freedesktop.org >> Subject: Re: [PATCH] drm/amdgpu: Fix the null pointer issue for tdr >> >> Am 08.11.19 um 09:52 schrieb Deng, Emily: >>> Ping..... 
>> You need to give me at least enough time to wake up :) >> >>> >>> Best wishes >>> Emily Deng >>> >>> >>> >>>> -----Original Message----- >>>> From: amd-gfx <amd-gfx-bounces@lists.freedesktop.org> On Behalf Of >>>> Deng, Emily >>>> Sent: Friday, November 8, 2019 10:56 AM >>>> To: Koenig, Christian <Christian.Koenig@amd.com>; amd- >>>> gfx@lists.freedesktop.org >>>> Subject: RE: [PATCH] drm/amdgpu: Fix the null pointer issue for tdr >>>> >>>>> -----Original Message----- >>>>> From: Christian König <ckoenig.leichtzumerken@gmail.com> >>>>> Sent: Thursday, November 7, 2019 7:28 PM >>>>> To: Deng, Emily <Emily.Deng@amd.com>; amd-gfx@lists.freedesktop.org >>>>> Subject: Re: [PATCH] drm/amdgpu: Fix the null pointer issue for tdr >>>>> >>>>> Am 07.11.19 um 11:25 schrieb Emily Deng: >>>>>> When the job is already signaled, the s_fence is freed. Then it >>>>>> will has null pointer in amdgpu_device_gpu_recover. >>>>> NAK, the s_fence is only set to NULL when the job is destroyed. See >>>>> drm_sched_job_cleanup(). >>>> I know it is set to NULL in drm_sched_job_cleanup. But in one case, >>>> when it enter into the amdgpu_device_gpu_recover, it already in >>>> drm_sched_job_cleanup, and at this time, it will go to free job. But >>>> the amdgpu_device_gpu_recover sometimes is faster. At that time, job >>>> is not freed, but s_fence is already NULL. >> No, that case can't happen. See here: >> >>> drm_sched_job_cleanup(s_job); >>> >>> amdgpu_ring_priority_put(ring, s_job->s_priority); >>> dma_fence_put(job->fence); >>> amdgpu_sync_free(&job->sync); >>> amdgpu_sync_free(&job->sched_sync); >>> kfree(job); >> The job itself is freed up directly after freeing the reference to the s_fence. >> >> So you are just papering over a much bigger problem here. This patch is a >> clear NAK. >> >> Regards, >> Christian. >> >>>>> When you see a job without an s_fence then that means the problem is >>>>> somewhere else. >>>>> >>>>> Regards, >>>>> Christian. 
>>>>> >>>>>> Signed-off-by: Emily Deng <Emily.Deng@amd.com> >>>>>> --- >>>>>> drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 2 +- >>>>>> drivers/gpu/drm/scheduler/sched_main.c | 11 ++++++----- >>>>>> 2 files changed, 7 insertions(+), 6 deletions(-) >>>>>> >>>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c >>>>>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c >>>>>> index e6ce949..5a8f08e 100644 >>>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c >>>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c >>>>>> @@ -4075,7 +4075,7 @@ int amdgpu_device_gpu_recover(struct >>>>> amdgpu_device *adev, >>>>>> * >>>>>> * job->base holds a reference to parent fence >>>>>> */ >>>>>> - if (job && job->base.s_fence->parent && >>>>>> + if (job && job->base.s_fence && job->base.s_fence->parent && >>>>>> dma_fence_is_signaled(job->base.s_fence->parent)) >>>>>> job_signaled = true; >>>>>> >>>>>> diff --git a/drivers/gpu/drm/scheduler/sched_main.c >>>>>> b/drivers/gpu/drm/scheduler/sched_main.c >>>>>> index 31809ca..56cc10e 100644 >>>>>> --- a/drivers/gpu/drm/scheduler/sched_main.c >>>>>> +++ b/drivers/gpu/drm/scheduler/sched_main.c >>>>>> @@ -334,8 +334,8 @@ void drm_sched_increase_karma(struct >>>>> drm_sched_job >>>>>> *bad) >>>>>> >>>>>> spin_lock(&rq->lock); >>>>>> list_for_each_entry_safe(entity, tmp, &rq->entities, >>>>> list) { >>>>>> - if (bad->s_fence->scheduled.context == >>>>>> - entity->fence_context) { >>>>>> + if (bad->s_fence && (bad->s_fence- >>>>>> scheduled.context == >>>>>> + entity->fence_context)) { >>>>>> if (atomic_read(&bad->karma) > >>>>>> bad->sched->hang_limit) >>>>>> if (entity->guilty) >>>>>> @@ -376,7 +376,7 @@ void drm_sched_stop(struct drm_gpu_scheduler >>>>> *sched, struct drm_sched_job *bad) >>>>>> * This iteration is thread safe as sched thread is stopped. 
>>>>>> */ >>>>>> list_for_each_entry_safe_reverse(s_job, tmp, &sched- >>>>>> ring_mirror_list, node) { >>>>>> - if (s_job->s_fence->parent && >>>>>> + if (s_job->s_fence && s_job->s_fence->parent && >>>>>> dma_fence_remove_callback(s_job->s_fence->parent, >>>>>> &s_job->cb)) { >>>>>> atomic_dec(&sched->hw_rq_count); @@ -395,7 >>>> +395,8 @@ void >>>>>> drm_sched_stop(struct drm_gpu_scheduler >>>>> *sched, struct drm_sched_job *bad) >>>>>> * >>>>>> * Job is still alive so fence refcount at least 1 >>>>>> */ >>>>>> - dma_fence_wait(&s_job->s_fence->finished, false); >>>>>> + if (s_job->s_fence) >>>>>> + dma_fence_wait(&s_job->s_fence->finished, >>>>> false); >>>>>> /* >>>>>> * We must keep bad job alive for later use during @@ >>>>> -438,7 >>>>>> +439,7 @@ void drm_sched_start(struct drm_gpu_scheduler *sched, >>>>>> +bool >>>>> full_recovery) >>>>>> * GPU recovers can't run in parallel. >>>>>> */ >>>>>> list_for_each_entry_safe(s_job, tmp, &sched->ring_mirror_list, >>>>>> node) >>>>> { >>>>>> - struct dma_fence *fence = s_job->s_fence->parent; >>>>>> + struct dma_fence *fence = s_job->s_fence ? s_job->s_fence- >>>>>> parent : >>>>>> +NULL; >>>>>> >>>>>> atomic_inc(&sched->hw_rq_count); >>>>>> >>>> _______________________________________________ >>>> amd-gfx mailing list >>>> amd-gfx@lists.freedesktop.org >>>> https://lists.freedesktop.org/mailman/listinfo/amd-gfx _______________________________________________ amd-gfx mailing list amd-gfx@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/amd-gfx
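The lifetime ordering being debated above can be illustrated with a minimal, single-threaded userspace model. The struct and function names below are hypothetical simplifications of drm_sched_job / drm_sched_fence and the amdgpu free-job path; this is a sketch of the argument, not kernel code:

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical stand-ins for struct drm_sched_fence and the job that
 * embeds a pointer to it. */
struct s_fence_model { int finished; };

struct job_model { struct s_fence_model *s_fence; };

/* Models drm_sched_job_cleanup(): drops the job's s_fence reference
 * and sets the pointer to NULL. */
static void job_cleanup(struct job_model *job)
{
	free(job->s_fence);
	job->s_fence = NULL;
}

/* Models the tail of the free-job path quoted in the thread: cleanup,
 * then immediately free the job itself. The only window in which
 * job != NULL while job->s_fence == NULL lies between these two calls,
 * so any other thread still dereferencing the job in that window is
 * already racing with the free() below, i.e. a use-after-free. */
static void job_free(struct job_model *job)
{
	job_cleanup(job);
	free(job);
}
```

Under this model, adding NULL checks on s_fence in the recover path only turns a NULL dereference inside that window into a use-after-free shortly after, which is the substance of Christian's objection that the patch papers over a bigger lifetime problem.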