From: Andrey Grodzovsky <Andrey.Grodzovsky@amd.com>
To: Daniel Vetter <daniel@ffwll.ch>
Cc: "amd-gfx list" <amd-gfx@lists.freedesktop.org>,
	"Greg KH" <gregkh@linuxfoundation.org>,
	dri-devel <dri-devel@lists.freedesktop.org>,
	"Qiang Yu" <yuq825@gmail.com>,
	"Alex Deucher" <Alexander.Deucher@amd.com>,
	"Christian König" <christian.koenig@amd.com>
Subject: Re: [PATCH v3 05/12] drm/ttm: Expose ttm_tt_unpopulate for driver use
Date: Thu, 17 Dec 2020 16:06:38 -0500	[thread overview]
Message-ID: <19ec7cb6-e1aa-e4ac-9cb1-a08d60c07af4@amd.com> (raw)
In-Reply-To: <CAKMK7uF5NRARdA1BrsYSBgYw-ioTc_P54LXLCi4LQ21S3NZc1Q@mail.gmail.com>


On 12/17/20 3:48 PM, Daniel Vetter wrote:
> On Thu, Dec 17, 2020 at 9:38 PM Andrey Grodzovsky
> <Andrey.Grodzovsky@amd.com> wrote:
>>
>> On 12/17/20 3:10 PM, Christian König wrote:
>>> [SNIP]
>>>>>> By eliminating such users, and replacing them with local maps which
>>>>>>> are strictly bound in how long they can exist (and hence we can
>>>>>>> serialize against them finishing in our hotunplug code).
>>>>>> Not sure I see how serializing against BO map/unmap helps - our problem as
>>>>>> you described is that once
>>>>>> device is extracted and then something else quickly takes its place in the
>>>>>> PCI topology
>>>>>> and gets assigned same physical IO ranges, then our driver will start
>>>>>> accessing this
>>>>>> new device because our 'zombie' BOs are still pointing to those ranges.
>>>>> Until your driver's remove callback is finished the ranges stay reserved.
>>>>
>>>> The ranges stay reserved until unmapped which happens in bo->destroy
>>> I'm not sure of that. Why do you think that?
>>
>> Because of this sequence
>> ttm_bo_release->destroy->amdgpu_bo_destroy->amdgpu_bo_kunmap->...->iounmap
>> Is there another place I am missing ?
> iounmap is just the mapping, it doesn't reserve anything in the resource tree.
>
> And I don't think we should keep resources reserved past the pci
> remove callback, because that would upset the pci subsystem trying to
> assign resources to a newly hotplugged pci device.


I assumed we were talking about VA ranges still mapped in the page table, and I
assumed that part of ioremap is also reservation of the mapped physical ranges.
In fact, if we can explicitly reserve those ranges (as you mention here) then,
together with postponing the freeing/releasing of system memory pages back to
the page pool until after the BO is unmapped from the kernel address space, I
believe this could solve the issue of quick HW reinsertion and make all the
drm_dev_enter/exit guarding obsolete.
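To illustrate the distinction Daniel is pointing at - mapping vs. reservation - here is a rough sketch of the usual PCI probe pattern (driver name and structure are placeholders, not the actual amdgpu flow):

```c
/* Sketch only: ioremap()/pci_iomap() creates the kernel VA mapping,
 * while pci_request_regions() is what marks the BARs busy in the
 * iomem resource tree. iounmap() undoes only the former; without the
 * latter nothing stops the ranges from being handed to a newly
 * hotplugged device. "mydrv" is a hypothetical driver. */
static int mydrv_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
	void __iomem *mmio;
	int ret;

	ret = pci_enable_device(pdev);
	if (ret)
		return ret;

	/* Reserve all BARs in the resource tree; fails if someone
	 * else already claimed them. */
	ret = pci_request_regions(pdev, "mydrv");
	if (ret)
		goto err_disable;

	/* Map BAR 0 into kernel address space. */
	mmio = pci_iomap(pdev, 0, 0);
	if (!mmio) {
		ret = -ENOMEM;
		goto err_release;
	}

	pci_set_drvdata(pdev, mmio);
	return 0;

err_release:
	pci_release_regions(pdev);
err_disable:
	pci_disable_device(pdev);
	return ret;
}
```

The remove path would mirror this in reverse (pci_iounmap, then pci_release_regions), which is why the reservation is gone once the remove callback returns.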

Andrey


> Also from a quick check amdgpu does not reserve the pci bars it's
> using. Somehow most drm drivers don't do that, not exactly sure why,
> maybe auto-enumeration of resources just works too good and we don't
> need the safety net of kernel/resource.c anymore.
> -Daniel
>
>
>>>> which for most internally allocated buffers is during sw_fini when last drm_put
>>>> is called.
>>>>
>>>>
>>>>> If that's not the case, then hotunplug would be fundamentally impossible
>>>>> to handle correctly.
>>>>>
>>>>> Of course all the mmio actions will time out, so it might take some time
>>>>> to get through it all.
>>>>
>>>> I found that PCI code provides pci_device_is_present function
>>>> we can use to avoid timeouts - it reads device vendor and checks if all 1s is
>>>> returned
>>>> or not. We can call it from within register accessors before trying read/write
>>> That's way too much overhead! We need to keep that much lower or it will result
>>> in quite a performance drop.
>>>
>>> I suggest to rather think about adding drm_dev_enter/exit guards.
>>
>> Sure, this one is just a bit upstream to the disconnect event. Eventually none
>> of them is watertight.
>>
>> Andrey
>>
>>
>>> Christian.
>>>
>>>>>> Another point regarding serializing - the problem is that some of those BOs are
>>>>>> very long lived; take for example the HW command
>>>>>> ring buffer Christian mentioned before -
>>>>>> (amdgpu_ring_init->amdgpu_bo_create_kernel), its life span
>>>>>> is basically the entire time the device exists, and it's destroyed only in
>>>>>> the SW fini stage (when the last drm_dev
>>>>>> reference is dropped). So should I grab its dma_resv_lock from
>>>>>> the amdgpu_pci_remove code and wait
>>>>>> for it to be unmapped before proceeding with the PCI remove code? This can
>>>>>> take unbounded time, and that's why I don't understand
>>>>>> how serializing will help.
>>>>> Uh you need to untangle that. After hw cleanup is done no one is allowed
>>>>> to touch that ringbuffer bo anymore from the kernel.
>>>>
>>>> I would assume we are not allowed to touch it once we identified the device is
>>>> gone in order to minimize the chance of accidental writes to some other
>>>> device which might now
>>>> occupy those IO ranges ?
>>>>
>>>>
>>>>>    That's what
>>>>> drm_dev_enter/exit guards are for. Like you say we cant wait for all sw
>>>>> references to disappear.
>>>>
>>>> Yes, it didn't make sense to me why we would use vmap_local for internally
>>>> allocated buffers. I think we should also guard register reads/writes for the
>>>> same reason as above.
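A guarded register accessor along the lines being discussed might look roughly like this (a sketch with made-up names, not actual amdgpu code):

```c
/* Sketch of an MMIO read accessor guarded by drm_dev_enter/exit.
 * "mydrv_device" and its fields are hypothetical. */
static u32 mydrv_rreg(struct mydrv_device *mdev, u32 offset)
{
	u32 val = ~0;	/* what a removed PCI device would return anyway */
	int idx;

	/* drm_dev_enter() returns false once drm_dev_unplug() has been
	 * called, so after hotunplug the MMIO access is skipped entirely
	 * instead of touching whatever now owns those IO ranges. */
	if (drm_dev_enter(&mdev->ddev, &idx)) {
		val = readl(mdev->mmio + offset);
		drm_dev_exit(idx);
	}
	return val;
}
```

Unlike a pci_device_is_present() check, this costs only an SRCU read-side section per access, which is why it is the cheaper guard.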
>>>>
>>>>
>>>>> The vmap_local is for mappings done by other drivers, through the dma-buf
>>>>> interface (where "other drivers" can include fbdev/fbcon, if you use the
>>>>> generic helpers).
>>>>> -Daniel
>>>>
>>>> Ok, so I assumed that with vmap_local you were trying to solve the problem of
>>>> quick reinsertion
>>>> of another device into the same MMIO range that my driver still points to, but
>>>> actually are you trying to solve
>>>> the issue of exported dma-bufs outliving the device? For this we have the
>>>> drm_device refcount in the GEM layer,
>>>> I think.
>>>>
>>>> Andrey
>>>>
>>>>
>>>>>> Andrey
>>>>>>
>>>>>>
>>>>>>> It doesn't
>>>>>>> solve all your problems, but it's a tool to get there.
>>>>>>> -Daniel
>>>>>>>
>>>>>>>> Andrey
>>>>>>>>
>>>>>>>>
>>>>>>>>> - handle fbcon somehow. I think shutting it all down should work out.
>>>>>>>>> - worst case keep the system backing storage around for shared dma-buf
>>>>>>>>> until the other non-dynamic driver releases it. for vram we require
>>>>>>>>> dynamic importers (and maybe it wasn't such a bright idea to allow
>>>>>>>>> pinning of importer buffers, might need to revisit that).
>>>>>>>>>
>>>>>>>>> Cheers, Daniel
>>>>>>>>>
>>>>>>>>>> Christian.
>>>>>>>>>>
>>>>>>>>>>> Andrey
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>> -Daniel
>>>>>>>>>>>>
>>>>>>>>>>>>> Christian.
>>>>>>>>>>>>>
>>>>>>>>>>>>>> I loaded the driver with vm_update_mode=3,
>>>>>>>>>>>>>> meaning all VM updates are done using the CPU, and I haven't seen any oops after
>>>>>>>>>>>>>> removing the device. I guess I can test it more by allocating GTT and
>>>>>>>>>>>>>> VRAM BOs
>>>>>>>>>>>>>> and trying to read/write to them after the device is removed.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Andrey
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Regards,
>>>>>>>>>>>>>>> Christian.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Andrey
>>>>>>>>>>>>>> _______________________________________________
>>>>>>>>>>>>>> amd-gfx mailing list
>>>>>>>>>>>>>> amd-gfx@lists.freedesktop.org
>>>>>>>>>>>>>> https://lists.freedesktop.org/mailman/listinfo/amd-gfx
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>
>
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel


Thread overview: 212+ messages
2020-11-21  5:21 [PATCH v3 00/12] RFC Support hot device unplug in amdgpu Andrey Grodzovsky
2020-11-21  5:21 ` [PATCH v3 01/12] drm: Add dummy page per device or GEM object Andrey Grodzovsky
2020-11-21 14:15   ` Christian König
2020-11-23  4:54     ` Andrey Grodzovsky
2020-11-23  8:01       ` Christian König
2021-01-05 21:04         ` Andrey Grodzovsky
2021-01-07 16:21           ` Daniel Vetter
2021-01-07 16:26             ` Andrey Grodzovsky
2021-01-07 16:28               ` Andrey Grodzovsky
2021-01-07 16:30               ` Daniel Vetter
2021-01-07 16:37                 ` Andrey Grodzovsky
2021-01-08 14:26                   ` Andrey Grodzovsky
2021-01-08 14:33                     ` Christian König
2021-01-08 14:46                       ` Andrey Grodzovsky
2021-01-08 14:52                         ` Christian König
2021-01-08 16:49                           ` Grodzovsky, Andrey
2021-01-11 16:13                             ` Daniel Vetter
2021-01-11 16:15                               ` Daniel Vetter
2021-01-11 17:41                                 ` Andrey Grodzovsky
2021-01-11 18:31                                   ` Andrey Grodzovsky
2021-01-12  9:07                                     ` Daniel Vetter
2021-01-11 20:45                                 ` Andrey Grodzovsky
2021-01-12  9:10                                   ` Daniel Vetter
2021-01-12 12:32                                     ` Christian König
2021-01-12 15:59                                       ` Andrey Grodzovsky
2021-01-13  9:14                                         ` Christian König
2021-01-13 14:40                                           ` Andrey Grodzovsky
2021-01-12 15:54                                     ` Andrey Grodzovsky
2021-01-12  8:12                               ` Christian König
2021-01-12  9:13                                 ` Daniel Vetter
2020-11-21  5:21 ` [PATCH v3 02/12] drm: Unamp the entire device address space on device unplug Andrey Grodzovsky
2020-11-21 14:16   ` Christian König
2020-11-24 14:44     ` Daniel Vetter
2020-11-21  5:21 ` [PATCH v3 03/12] drm/ttm: Remap all page faults to per process dummy page Andrey Grodzovsky
2020-11-21  5:21 ` [PATCH v3 04/12] drm/ttm: Set dma addr to null after freee Andrey Grodzovsky
2020-11-21 14:13   ` Christian König
2020-11-23  5:15     ` Andrey Grodzovsky
2020-11-23  8:04       ` Christian König
2020-11-21  5:21 ` [PATCH v3 05/12] drm/ttm: Expose ttm_tt_unpopulate for driver use Andrey Grodzovsky
2020-11-25 10:42   ` Christian König
2020-11-23 20:05     ` Andrey Grodzovsky
2020-11-23 20:20       ` Christian König
2020-11-23 20:38         ` Andrey Grodzovsky
2020-11-23 20:41           ` Christian König
2020-11-23 21:08             ` Andrey Grodzovsky
2020-11-24  7:41               ` Christian König
2020-11-24 16:22                 ` Andrey Grodzovsky
2020-11-24 16:44                   ` Christian König
2020-11-25 10:40                     ` Daniel Vetter
2020-11-25 12:57                       ` Christian König
2020-11-25 16:36                         ` Daniel Vetter
2020-11-25 19:34                           ` Andrey Grodzovsky
2020-11-27 13:10                             ` Grodzovsky, Andrey
2020-11-27 14:59                             ` Daniel Vetter
2020-11-27 16:04                               ` Andrey Grodzovsky
2020-11-30 14:15                                 ` Daniel Vetter
2020-11-25 16:56                         ` Michel Dänzer
2020-11-25 17:02                           ` Daniel Vetter
2020-12-15 20:18                     ` Andrey Grodzovsky
2020-12-16  8:04                       ` Christian König
2020-12-16 14:21                         ` Daniel Vetter
2020-12-16 16:13                           ` Andrey Grodzovsky
2020-12-16 16:18                             ` Christian König
2020-12-16 17:12                               ` Daniel Vetter
2020-12-16 17:20                                 ` Daniel Vetter
2020-12-16 18:26                                 ` Andrey Grodzovsky
2020-12-16 23:15                                   ` Daniel Vetter
2020-12-17  0:20                                     ` Andrey Grodzovsky
2020-12-17 12:01                                       ` Daniel Vetter
2020-12-17 19:19                                         ` Andrey Grodzovsky
2020-12-17 20:10                                           ` Christian König
2020-12-17 20:38                                             ` Andrey Grodzovsky
2020-12-17 20:48                                               ` Daniel Vetter
2020-12-17 21:06                                                 ` Andrey Grodzovsky [this message]
2020-12-18 14:30                                                   ` Daniel Vetter
2020-12-17 20:42                                           ` Daniel Vetter
2020-12-17 21:13                                             ` Andrey Grodzovsky
2021-01-04 16:33                                               ` Andrey Grodzovsky
2020-11-21  5:21 ` [PATCH v3 06/12] drm/sched: Cancel and flush all oustatdning jobs before finish Andrey Grodzovsky
2020-11-22 11:56   ` Christian König
2020-11-21  5:21 ` [PATCH v3 07/12] drm/sched: Prevent any job recoveries after device is unplugged Andrey Grodzovsky
2020-11-22 11:57   ` Christian König
2020-11-23  5:37     ` Andrey Grodzovsky
2020-11-23  8:06       ` Christian König
2020-11-24  1:12         ` Luben Tuikov
2020-11-24  7:50           ` Christian König
2020-11-24 17:11             ` Luben Tuikov
2020-11-24 17:17               ` Andrey Grodzovsky
2020-11-24 17:41                 ` Luben Tuikov
2020-11-24 17:40               ` Christian König
2020-11-24 17:44                 ` Luben Tuikov
2020-11-21  5:21 ` [PATCH v3 08/12] drm/amdgpu: Split amdgpu_device_fini into early and late Andrey Grodzovsky
2020-11-24 14:53   ` Daniel Vetter
2020-11-24 15:51     ` Andrey Grodzovsky
2020-11-25 10:41       ` Daniel Vetter
2020-11-25 17:41         ` Andrey Grodzovsky
2020-11-21  5:21 ` [PATCH v3 09/12] drm/amdgpu: Add early fini callback Andrey Grodzovsky
2020-11-21  5:21 ` [PATCH v3 10/12] drm/amdgpu: Avoid sysfs dirs removal post device unplug Andrey Grodzovsky
2020-11-24 14:49   ` Daniel Vetter
2020-11-24 22:27     ` Andrey Grodzovsky
2020-11-25  9:04       ` Daniel Vetter
2020-11-25 17:39         ` Andrey Grodzovsky
2020-11-27 13:12           ` Grodzovsky, Andrey
2020-11-27 15:04           ` Daniel Vetter
2020-11-27 15:34             ` Andrey Grodzovsky
2020-11-21  5:21 ` [PATCH v3 11/12] drm/amdgpu: Register IOMMU topology notifier per device Andrey Grodzovsky
2020-11-21  5:21 ` [PATCH v3 12/12] drm/amdgpu: Fix a bunch of sdma code crash post device unplug Andrey Grodzovsky
