From: Andrey Grodzovsky <Andrey.Grodzovsky@amd.com>
To: Daniel Vetter <daniel@ffwll.ch>
Cc: "daniel.vetter@ffwll.ch" <daniel.vetter@ffwll.ch>,
	"amd-gfx@lists.freedesktop.org" <amd-gfx@lists.freedesktop.org>,
	"dri-devel@lists.freedesktop.org"
	<dri-devel@lists.freedesktop.org>,
	"gregkh@linuxfoundation.org" <gregkh@linuxfoundation.org>,
	"Deucher, Alexander" <Alexander.Deucher@amd.com>,
	"Koenig, Christian" <Christian.Koenig@amd.com>,
	"yuq825@gmail.com" <yuq825@gmail.com>
Subject: Re: [PATCH v3 01/12] drm: Add dummy page per device or GEM object
Date: Mon, 11 Jan 2021 12:41:29 -0500	[thread overview]
Message-ID: <da936412-c40a-7e29-f1a4-f155c85d3836@amd.com> (raw)
In-Reply-To: <X/x5nXM7bZDl+MWR@phenom.ffwll.local>


On 1/11/21 11:15 AM, Daniel Vetter wrote:
> On Mon, Jan 11, 2021 at 05:13:56PM +0100, Daniel Vetter wrote:
>> On Fri, Jan 08, 2021 at 04:49:55PM +0000, Grodzovsky, Andrey wrote:
>>> Ok then, I guess I will proceed with the dummy pages list implementation then.
>>>
>>> Andrey
>>>
>>> ________________________________
>>> From: Koenig, Christian <Christian.Koenig@amd.com>
>>> Sent: 08 January 2021 09:52
>>> To: Grodzovsky, Andrey <Andrey.Grodzovsky@amd.com>; Daniel Vetter <daniel@ffwll.ch>
>>> Cc: amd-gfx@lists.freedesktop.org <amd-gfx@lists.freedesktop.org>; dri-devel@lists.freedesktop.org <dri-devel@lists.freedesktop.org>; daniel.vetter@ffwll.ch <daniel.vetter@ffwll.ch>; robh@kernel.org <robh@kernel.org>; l.stach@pengutronix.de <l.stach@pengutronix.de>; yuq825@gmail.com <yuq825@gmail.com>; eric@anholt.net <eric@anholt.net>; Deucher, Alexander <Alexander.Deucher@amd.com>; gregkh@linuxfoundation.org <gregkh@linuxfoundation.org>; ppaalanen@gmail.com <ppaalanen@gmail.com>; Wentland, Harry <Harry.Wentland@amd.com>
>>> Subject: Re: [PATCH v3 01/12] drm: Add dummy page per device or GEM object
>>>
>>> Mhm, I'm not aware of any left-over pointer between TTM and GEM, and we
>>> worked quite hard on reducing the size of amdgpu_bo, so another
>>> extra pointer just for that corner case would suck quite a bit.
>> We have a ton of other pointers in struct amdgpu_bo (or any of its lower
>> levels) which are fairly single-use, so I'm really not seeing the
>> point in making this a special case. It also means the lifetime management
>> becomes a bit iffy, since we can't throw away the dummy page when the last
>> reference to the bo is released (since we don't track it there), but only
>> when the last reference to the device is released. Potentially this means a
>> pile of dangling pages hanging around for too long.
> Also if you really, really, really want to have this list, please don't
> reinvent it since we have it already. drmm_ is exactly meant for resources
> that should be freed when the final drm_device reference disappears.
> -Daniel


Can you elaborate? We still need to actually implement the list, but you want me
to use drmm_add_action for its destruction instead of doing it explicitly
(like I'm already doing from ttm_bo_device_release)?

Andrey


>   
>> If you need some ideas for redundant pointers:
>> - destroy callback (kinda not cool to not have this const anyway), we
>>    could refcount it all with the overall gem bo. Quite a bit of work.
>> - bdev pointer, if we move the device ttm stuff into struct drm_device, or
>>    create a common struct ttm_device, we can ditch that
>> - We could probably merge a few of the fields and find 8 bytes somewhere
>> - we still have 2 krefs, would probably need to fix that before we can
>>    merge the destroy callbacks
>>
>> So there's plenty of room still, if the size of a bo struct is really that
>> critical. Imo it's not.
>>
>>
>>> Christian.
>>>
>>> On 08.01.21 at 15:46, Andrey Grodzovsky wrote:
>>>> Daniel had some objections to this (see below) and so I guess I need
>>>> you both to agree on the approach before I proceed.
>>>>
>>>> Andrey
>>>>
>>>> On 1/8/21 9:33 AM, Christian König wrote:
>>>>> On 08.01.21 at 15:26, Andrey Grodzovsky wrote:
>>>>>> Hey Christian, just a ping.
>>>>> Was there any question for me here?
>>>>>
>>>>> As far as I can see the best approach would still be to fill the VMA
>>>>> with a single dummy page and avoid pointers in the GEM object.
>>>>>
>>>>> Christian.
>>>>>
>>>>>> Andrey
>>>>>>
>>>>>> On 1/7/21 11:37 AM, Andrey Grodzovsky wrote:
>>>>>>> On 1/7/21 11:30 AM, Daniel Vetter wrote:
>>>>>>>> On Thu, Jan 07, 2021 at 11:26:52AM -0500, Andrey Grodzovsky wrote:
>>>>>>>>> On 1/7/21 11:21 AM, Daniel Vetter wrote:
>>>>>>>>>> On Tue, Jan 05, 2021 at 04:04:16PM -0500, Andrey Grodzovsky wrote:
>>>>>>>>>>> On 11/23/20 3:01 AM, Christian König wrote:
>>>>>>>>>>>> On 23.11.20 at 05:54, Andrey Grodzovsky wrote:
>>>>>>>>>>>>> On 11/21/20 9:15 AM, Christian König wrote:
>>>>>>>>>>>>>> On 21.11.20 at 06:21, Andrey Grodzovsky wrote:
>>>>>>>>>>>>>>> Will be used to reroute CPU-mapped BOs' page faults once the
>>>>>>>>>>>>>>> device is removed.
>>>>>>>>>>>>>> Uff, one page for each exported DMA-buf? That's not
>>>>>>>>>>>>>> something we can do.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> We need to find a different approach here.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Can't we call alloc_page() on each fault and link them together
>>>>>>>>>>>>>> so they are freed when the device is finally reaped?
>>>>>>>>>>>>> For sure it's better to optimize and allocate on demand when we
>>>>>>>>>>>>> reach this corner case, but why the linking?
>>>>>>>>>>>>> Shouldn't drm_prime_gem_destroy be a good enough place to free?
>>>>>>>>>>>> I want to avoid keeping the page in the GEM object.
>>>>>>>>>>>>
>>>>>>>>>>>> What we can do is to allocate a page on demand for each fault
>>>>>>>>>>>> and link them together in the bdev instead.
>>>>>>>>>>>>
>>>>>>>>>>>> And when the bdev is then finally destroyed, after the last
>>>>>>>>>>>> application has closed, we can finally release all of them.
>>>>>>>>>>>>
>>>>>>>>>>>> Christian.
>>>>>>>>>>> Hey, I started to implement this and then realized that by
>>>>>>>>>>> allocating a page for each fault indiscriminately we will be
>>>>>>>>>>> allocating a new page for each faulting virtual address within a
>>>>>>>>>>> VA range belonging to the same BO, and this is obviously too much
>>>>>>>>>>> and not the intention. Should I instead use, let's say, a
>>>>>>>>>>> hashtable with the hash key being the faulting BO address, to
>>>>>>>>>>> keep allocating and reusing the same dummy zero page per GEM BO
>>>>>>>>>>> (or, for that matter, the DRM file object address for
>>>>>>>>>>> non-imported BOs)?
>>>>>>>>>> Why do we need a hashtable? All the sw structures to track this
>>>>>>>>>> should
>>>>>>>>>> still be around:
>>>>>>>>>> - if gem_bo->dma_buf is set the buffer is currently exported as
>>>>>>>>>> a dma-buf,
>>>>>>>>>>      so defensively allocate a per-bo page
>>>>>>>>>> - otherwise allocate a per-file page
>>>>>>>>> That's exactly what we have in the current implementation.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>> Or is the idea to save the struct page * pointer? That feels a
>>>>>>>>>> bit like
>>>>>>>>>> over-optimizing stuff. Better to have a simple implementation
>>>>>>>>>> first and
>>>>>>>>>> then tune it if (and only if) any part of it becomes a problem
>>>>>>>>>> for normal
>>>>>>>>>> usage.
>>>>>>>>> Exactly, the idea is to avoid adding an extra pointer to
>>>>>>>>> drm_gem_object. Christian suggested to instead keep a linked list
>>>>>>>>> of dummy pages to be allocated on demand once we hit a vm_fault.
>>>>>>>>> I will then also prefault the entire VA range from vma->vm_start
>>>>>>>>> to vma->vm_end and map it to that single dummy page.
>>>>>>>> This strongly feels like premature optimization. If you're worried
>>>>>>>> about
>>>>>>>> the overhead on amdgpu, pay down the debt by removing one of the
>>>>>>>> redundant
>>>>>>>> pointers between gem and ttm bo structs (I think we still have
>>>>>>>> some) :-)
>>>>>>>>
>>>>>>>> Until we've nuked these easy&obvious ones we shouldn't play "avoid 1
>>>>>>>> pointer just because" games with hashtables.
>>>>>>>> -Daniel
>>>>>>>
>>>>>>> Well, if you and Christian can agree on this approach and suggest
>>>>>>> maybe what pointer is
>>>>>>> redundant and can be removed from GEM struct so we can use the
>>>>>>> 'credit' to add the dummy page
>>>>>>> to GEM I will be happy to follow through.
>>>>>>>
>>>>>>> P.S. The hash table is off the table anyway and we are talking
>>>>>>> only about a linked list here, since by prefaulting the entire VA
>>>>>>> range for a vmf->vma I will be avoiding redundant page faults to
>>>>>>> the same VMA VA range, and so I don't need to search for and reuse
>>>>>>> an existing dummy page but can simply create a new one for each
>>>>>>> next fault.
>>>>>>>
>>>>>>> Andrey
>> -- 
>> Daniel Vetter
>> Software Engineer, Intel Corporation
>> http://blog.ffwll.ch
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel


  reply	other threads:[~2021-01-11 17:41 UTC|newest]

Thread overview: 212+ messages
2020-11-21  5:21 [PATCH v3 00/12] RFC Support hot device unplug in amdgpu Andrey Grodzovsky
2020-11-21  5:21 ` Andrey Grodzovsky
2020-11-21  5:21 ` [PATCH v3 01/12] drm: Add dummy page per device or GEM object Andrey Grodzovsky
2020-11-21  5:21   ` Andrey Grodzovsky
2020-11-21 14:15   ` Christian König
2020-11-21 14:15     ` Christian König
2020-11-23  4:54     ` Andrey Grodzovsky
2020-11-23  4:54       ` Andrey Grodzovsky
2020-11-23  8:01       ` Christian König
2020-11-23  8:01         ` Christian König
2021-01-05 21:04         ` Andrey Grodzovsky
2021-01-05 21:04           ` Andrey Grodzovsky
2021-01-07 16:21           ` Daniel Vetter
2021-01-07 16:21             ` Daniel Vetter
2021-01-07 16:26             ` Andrey Grodzovsky
2021-01-07 16:26               ` Andrey Grodzovsky
2021-01-07 16:28               ` Andrey Grodzovsky
2021-01-07 16:28                 ` Andrey Grodzovsky
2021-01-07 16:30               ` Daniel Vetter
2021-01-07 16:30                 ` Daniel Vetter
2021-01-07 16:37                 ` Andrey Grodzovsky
2021-01-07 16:37                   ` Andrey Grodzovsky
2021-01-08 14:26                   ` Andrey Grodzovsky
2021-01-08 14:26                     ` Andrey Grodzovsky
2021-01-08 14:33                     ` Christian König
2021-01-08 14:33                       ` Christian König
2021-01-08 14:46                       ` Andrey Grodzovsky
2021-01-08 14:46                         ` Andrey Grodzovsky
2021-01-08 14:52                         ` Christian König
2021-01-08 14:52                           ` Christian König
2021-01-08 16:49                           ` Grodzovsky, Andrey
2021-01-08 16:49                             ` Grodzovsky, Andrey
2021-01-11 16:13                             ` Daniel Vetter
2021-01-11 16:13                               ` Daniel Vetter
2021-01-11 16:15                               ` Daniel Vetter
2021-01-11 16:15                                 ` Daniel Vetter
2021-01-11 17:41                                 ` Andrey Grodzovsky [this message]
2021-01-11 17:41                                   ` Andrey Grodzovsky
2021-01-11 18:31                                   ` Andrey Grodzovsky
2021-01-12  9:07                                     ` Daniel Vetter
2021-01-11 20:45                                 ` Andrey Grodzovsky
2021-01-11 20:45                                   ` Andrey Grodzovsky
2021-01-12  9:10                                   ` Daniel Vetter
2021-01-12  9:10                                     ` Daniel Vetter
2021-01-12 12:32                                     ` Christian König
2021-01-12 12:32                                       ` Christian König
2021-01-12 15:59                                       ` Andrey Grodzovsky
2021-01-12 15:59                                         ` Andrey Grodzovsky
2021-01-13  9:14                                         ` Christian König
2021-01-13  9:14                                           ` Christian König
2021-01-13 14:40                                           ` Andrey Grodzovsky
2021-01-13 14:40                                             ` Andrey Grodzovsky
2021-01-12 15:54                                     ` Andrey Grodzovsky
2021-01-12 15:54                                       ` Andrey Grodzovsky
2021-01-12  8:12                               ` Christian König
2021-01-12  8:12                                 ` Christian König
2021-01-12  9:13                                 ` Daniel Vetter
2021-01-12  9:13                                   ` Daniel Vetter
2020-11-21  5:21 ` [PATCH v3 02/12] drm: Unmap the entire device address space on device unplug Andrey Grodzovsky
2020-11-21  5:21   ` Andrey Grodzovsky
2020-11-21 14:16   ` Christian König
2020-11-21 14:16     ` Christian König
2020-11-24 14:44     ` Daniel Vetter
2020-11-24 14:44       ` Daniel Vetter
2020-11-21  5:21 ` [PATCH v3 03/12] drm/ttm: Remap all page faults to per process dummy page Andrey Grodzovsky
2020-11-21  5:21   ` Andrey Grodzovsky
2020-11-21  5:21 ` [PATCH v3 04/12] drm/ttm: Set dma addr to null after free Andrey Grodzovsky
2020-11-21  5:21   ` Andrey Grodzovsky
2020-11-21 14:13   ` Christian König
2020-11-21 14:13     ` Christian König
2020-11-23  5:15     ` Andrey Grodzovsky
2020-11-23  5:15       ` Andrey Grodzovsky
2020-11-23  8:04       ` Christian König
2020-11-23  8:04         ` Christian König
2020-11-21  5:21 ` [PATCH v3 05/12] drm/ttm: Expose ttm_tt_unpopulate for driver use Andrey Grodzovsky
2020-11-21  5:21   ` Andrey Grodzovsky
2020-11-25 10:42   ` Christian König
2020-11-25 10:42     ` Christian König
2020-11-23 20:05     ` Andrey Grodzovsky
2020-11-23 20:05       ` Andrey Grodzovsky
2020-11-23 20:20       ` Christian König
2020-11-23 20:20         ` Christian König
2020-11-23 20:38         ` Andrey Grodzovsky
2020-11-23 20:38           ` Andrey Grodzovsky
2020-11-23 20:41           ` Christian König
2020-11-23 20:41             ` Christian König
2020-11-23 21:08             ` Andrey Grodzovsky
2020-11-23 21:08               ` Andrey Grodzovsky
2020-11-24  7:41               ` Christian König
2020-11-24  7:41                 ` Christian König
2020-11-24 16:22                 ` Andrey Grodzovsky
2020-11-24 16:22                   ` Andrey Grodzovsky
2020-11-24 16:44                   ` Christian König
2020-11-24 16:44                     ` Christian König
2020-11-25 10:40                     ` Daniel Vetter
2020-11-25 10:40                       ` Daniel Vetter
2020-11-25 12:57                       ` Christian König
2020-11-25 12:57                         ` Christian König
2020-11-25 16:36                         ` Daniel Vetter
2020-11-25 16:36                           ` Daniel Vetter
2020-11-25 19:34                           ` Andrey Grodzovsky
2020-11-25 19:34                             ` Andrey Grodzovsky
2020-11-27 13:10                             ` Grodzovsky, Andrey
2020-11-27 13:10                               ` Grodzovsky, Andrey
2020-11-27 14:59                             ` Daniel Vetter
2020-11-27 14:59                               ` Daniel Vetter
2020-11-27 16:04                               ` Andrey Grodzovsky
2020-11-27 16:04                                 ` Andrey Grodzovsky
2020-11-30 14:15                                 ` Daniel Vetter
2020-11-30 14:15                                   ` Daniel Vetter
2020-11-25 16:56                         ` Michel Dänzer
2020-11-25 16:56                           ` Michel Dänzer
2020-11-25 17:02                           ` Daniel Vetter
2020-11-25 17:02                             ` Daniel Vetter
2020-12-15 20:18                     ` Andrey Grodzovsky
2020-12-15 20:18                       ` Andrey Grodzovsky
2020-12-16  8:04                       ` Christian König
2020-12-16  8:04                         ` Christian König
2020-12-16 14:21                         ` Daniel Vetter
2020-12-16 14:21                           ` Daniel Vetter
2020-12-16 16:13                           ` Andrey Grodzovsky
2020-12-16 16:13                             ` Andrey Grodzovsky
2020-12-16 16:18                             ` Christian König
2020-12-16 16:18                               ` Christian König
2020-12-16 17:12                               ` Daniel Vetter
2020-12-16 17:12                                 ` Daniel Vetter
2020-12-16 17:20                                 ` Daniel Vetter
2020-12-16 17:20                                   ` Daniel Vetter
2020-12-16 18:26                                 ` Andrey Grodzovsky
2020-12-16 18:26                                   ` Andrey Grodzovsky
2020-12-16 23:15                                   ` Daniel Vetter
2020-12-16 23:15                                     ` Daniel Vetter
2020-12-17  0:20                                     ` Andrey Grodzovsky
2020-12-17  0:20                                       ` Andrey Grodzovsky
2020-12-17 12:01                                       ` Daniel Vetter
2020-12-17 12:01                                         ` Daniel Vetter
2020-12-17 19:19                                         ` Andrey Grodzovsky
2020-12-17 19:19                                           ` Andrey Grodzovsky
2020-12-17 20:10                                           ` Christian König
2020-12-17 20:10                                             ` Christian König
2020-12-17 20:38                                             ` Andrey Grodzovsky
2020-12-17 20:38                                               ` Andrey Grodzovsky
2020-12-17 20:48                                               ` Daniel Vetter
2020-12-17 20:48                                                 ` Daniel Vetter
2020-12-17 21:06                                                 ` Andrey Grodzovsky
2020-12-17 21:06                                                   ` Andrey Grodzovsky
2020-12-18 14:30                                                   ` Daniel Vetter
2020-12-18 14:30                                                     ` Daniel Vetter
2020-12-17 20:42                                           ` Daniel Vetter
2020-12-17 20:42                                             ` Daniel Vetter
2020-12-17 21:13                                             ` Andrey Grodzovsky
2020-12-17 21:13                                               ` Andrey Grodzovsky
2021-01-04 16:33                                               ` Andrey Grodzovsky
2021-01-04 16:33                                                 ` Andrey Grodzovsky
2020-11-21  5:21 ` [PATCH v3 06/12] drm/sched: Cancel and flush all outstanding jobs before finish Andrey Grodzovsky
2020-11-21  5:21   ` Andrey Grodzovsky
2020-11-22 11:56   ` Christian König
2020-11-22 11:56     ` Christian König
2020-11-21  5:21 ` [PATCH v3 07/12] drm/sched: Prevent any job recoveries after device is unplugged Andrey Grodzovsky
2020-11-21  5:21   ` Andrey Grodzovsky
2020-11-22 11:57   ` Christian König
2020-11-22 11:57     ` Christian König
2020-11-23  5:37     ` Andrey Grodzovsky
2020-11-23  5:37       ` Andrey Grodzovsky
2020-11-23  8:06       ` Christian König
2020-11-23  8:06         ` Christian König
2020-11-24  1:12         ` Luben Tuikov
2020-11-24  1:12           ` Luben Tuikov
2020-11-24  7:50           ` Christian König
2020-11-24  7:50             ` Christian König
2020-11-24 17:11             ` Luben Tuikov
2020-11-24 17:11               ` Luben Tuikov
2020-11-24 17:17               ` Andrey Grodzovsky
2020-11-24 17:17                 ` Andrey Grodzovsky
2020-11-24 17:41                 ` Luben Tuikov
2020-11-24 17:41                   ` Luben Tuikov
2020-11-24 17:40               ` Christian König
2020-11-24 17:40                 ` Christian König
2020-11-24 17:44                 ` Luben Tuikov
2020-11-24 17:44                   ` Luben Tuikov
2020-11-21  5:21 ` [PATCH v3 08/12] drm/amdgpu: Split amdgpu_device_fini into early and late Andrey Grodzovsky
2020-11-21  5:21   ` Andrey Grodzovsky
2020-11-24 14:53   ` Daniel Vetter
2020-11-24 14:53     ` Daniel Vetter
2020-11-24 15:51     ` Andrey Grodzovsky
2020-11-24 15:51       ` Andrey Grodzovsky
2020-11-25 10:41       ` Daniel Vetter
2020-11-25 10:41         ` Daniel Vetter
2020-11-25 17:41         ` Andrey Grodzovsky
2020-11-25 17:41           ` Andrey Grodzovsky
2020-11-21  5:21 ` [PATCH v3 09/12] drm/amdgpu: Add early fini callback Andrey Grodzovsky
2020-11-21  5:21   ` Andrey Grodzovsky
2020-11-21  5:21 ` [PATCH v3 10/12] drm/amdgpu: Avoid sysfs dirs removal post device unplug Andrey Grodzovsky
2020-11-21  5:21   ` Andrey Grodzovsky
2020-11-24 14:49   ` Daniel Vetter
2020-11-24 14:49     ` Daniel Vetter
2020-11-24 22:27     ` Andrey Grodzovsky
2020-11-24 22:27       ` Andrey Grodzovsky
2020-11-25  9:04       ` Daniel Vetter
2020-11-25  9:04         ` Daniel Vetter
2020-11-25 17:39         ` Andrey Grodzovsky
2020-11-25 17:39           ` Andrey Grodzovsky
2020-11-27 13:12           ` Grodzovsky, Andrey
2020-11-27 13:12             ` Grodzovsky, Andrey
2020-11-27 15:04           ` Daniel Vetter
2020-11-27 15:04             ` Daniel Vetter
2020-11-27 15:34             ` Andrey Grodzovsky
2020-11-27 15:34               ` Andrey Grodzovsky
2020-11-21  5:21 ` [PATCH v3 11/12] drm/amdgpu: Register IOMMU topology notifier per device Andrey Grodzovsky
2020-11-21  5:21   ` Andrey Grodzovsky
2020-11-21  5:21 ` [PATCH v3 12/12] drm/amdgpu: Fix a bunch of sdma code crash post device unplug Andrey Grodzovsky
2020-11-21  5:21   ` Andrey Grodzovsky

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=da936412-c40a-7e29-f1a4-f155c85d3836@amd.com \
    --to=andrey.grodzovsky@amd.com \
    --cc=Alexander.Deucher@amd.com \
    --cc=Christian.Koenig@amd.com \
    --cc=amd-gfx@lists.freedesktop.org \
    --cc=daniel.vetter@ffwll.ch \
    --cc=daniel@ffwll.ch \
    --cc=dri-devel@lists.freedesktop.org \
    --cc=gregkh@linuxfoundation.org \
    --cc=yuq825@gmail.com \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html
