From: Daniel Vetter <daniel@ffwll.ch>
To: christian.koenig@amd.com
Cc: daniel.vetter@ffwll.ch, dri-devel@lists.freedesktop.org,
	amd-gfx@lists.freedesktop.org, gregkh@linuxfoundation.org,
	Alexander.Deucher@amd.com, yuq825@gmail.com
Subject: Re: [PATCH v3 05/12] drm/ttm: Expose ttm_tt_unpopulate for driver use
Date: Wed, 25 Nov 2020 17:36:21 +0100	[thread overview]
Message-ID: <20201125163621.GZ401619@phenom.ffwll.local> (raw)
In-Reply-To: <71683ae7-f443-c15a-7003-6ba4ad3d4b15@gmail.com>

On Wed, Nov 25, 2020 at 01:57:40PM +0100, Christian König wrote:
> Am 25.11.20 um 11:40 schrieb Daniel Vetter:
> > On Tue, Nov 24, 2020 at 05:44:07PM +0100, Christian König wrote:
> > > Am 24.11.20 um 17:22 schrieb Andrey Grodzovsky:
> > > > On 11/24/20 2:41 AM, Christian König wrote:
> > > > > Am 23.11.20 um 22:08 schrieb Andrey Grodzovsky:
> > > > > > On 11/23/20 3:41 PM, Christian König wrote:
> > > > > > > Am 23.11.20 um 21:38 schrieb Andrey Grodzovsky:
> > > > > > > > On 11/23/20 3:20 PM, Christian König wrote:
> > > > > > > > > Am 23.11.20 um 21:05 schrieb Andrey Grodzovsky:
> > > > > > > > > > On 11/25/20 5:42 AM, Christian König wrote:
> > > > > > > > > > > Am 21.11.20 um 06:21 schrieb Andrey Grodzovsky:
> > > > > > > > > > > > It's needed to drop IOMMU-backed pages on device unplug
> > > > > > > > > > > > before the device's IOMMU group is released.
> > > > > > > > > > > It would be cleaner if we could do the whole
> > > > > > > > > > > handling in TTM. I also need to double check
> > > > > > > > > > > what you are doing with this function.
> > > > > > > > > > > 
> > > > > > > > > > > Christian.
> > > > > > > > > > 
> > > > > > > > > > Check patch "drm/amdgpu: Register IOMMU topology
> > > > > > > > > > notifier per device." to see
> > > > > > > > > > how I use it. I don't see why this should go
> > > > > > > > > > into the TTM mid-layer - the stuff I do inside
> > > > > > > > > > is vendor specific, and also I don't think TTM is
> > > > > > > > > > explicitly aware of the IOMMU?
> > > > > > > > > > Do you mean you prefer the IOMMU notifier to be
> > > > > > > > > > registered from within TTM
> > > > > > > > > > and then use a hook to call into a vendor specific handler?
> > > > > > > > > No, that is really vendor specific.
> > > > > > > > > 
> > > > > > > > > What I meant is to have a function like
> > > > > > > > > ttm_resource_manager_evict_all() which you only need
> > > > > > > > > to call and all tt objects are unpopulated.
> > > > > > > > 
> > > > > > > > So instead of this BO list I create and later iterate in
> > > > > > > > amdgpu from the IOMMU patch, you just want to do it within
> > > > > > > > TTM with a single function? Makes much more sense.
> > > > > > > Yes, exactly.
> > > > > > > 
> > > > > > > The list_empty() checks we have in TTM for the LRU are
> > > > > > > actually not the best idea; we should now check the
> > > > > > > pin_count instead. This way we could also have a list of the
> > > > > > > pinned BOs in TTM.
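[Editor's sketch: the single-function approach discussed above could look roughly like this. Everything below is a self-contained userspace model -- the `struct bo` type, field names, and `device_unpopulate_all()` are stand-ins invented for illustration, not the real TTM API -- showing one pass over both the LRU (unpinned) and a pinned list, with pin_count rather than list_empty() distinguishing the two.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Stub of a TTM buffer object -- illustrative only, not the real struct. */
struct bo {
	int pin_count;  /* > 0 means pinned, so the BO never sits on the LRU */
	bool populated; /* backing pages currently allocated */
};

/*
 * Model of the proposed "unpopulate everything" helper: instead of the
 * driver building and walking its own BO list, one TTM-side function
 * visits every BO on the device -- LRU and pinned alike -- and drops the
 * backing pages before the device's IOMMU group goes away.
 */
static void device_unpopulate_all(struct bo *lru, size_t nlru,
				  struct bo *pinned, size_t npinned)
{
	for (size_t i = 0; i < nlru; i++) {
		assert(lru[i].pin_count == 0); /* LRU only holds unpinned BOs */
		lru[i].populated = false;
	}
	for (size_t i = 0; i < npinned; i++) {
		assert(pinned[i].pin_count > 0); /* tracked via pin_count */
		pinned[i].populated = false;
	}
}
```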
> > > > > > 
> > > > > > So from my IOMMU topology handler I will iterate the TTM LRU for
> > > > > > the unpinned BOs and this new function for the pinned ones?
> > > > > > It's probably a good idea to combine both iterations into this
> > > > > > new function to cover all the BOs allocated on the device.
> > > > > Yes, that's what I had in my mind as well.
> > > > > 
> > > > > > 
> > > > > > > BTW: Have you thought about what happens when we unpopulate
> > > > > > > a BO while we still try to use a kernel mapping for it? That
> > > > > > > could have unforeseen consequences.
> > > > > > 
> > > > > > Are you asking what happens to kmap or vmap style mapped CPU
> > > > > > accesses once we drop all the DMA backing pages for a particular
> > > > > > BO? Because for user mappings
> > > > > > (mmap) we took care of this with the dummy page reroute, but indeed
> > > > > > nothing was done for in-kernel CPU mappings.
> > > > > Yes exactly that.
> > > > > 
> > > > > In other words what happens if we free the ring buffer while the
> > > > > kernel still writes to it?
> > > > > 
> > > > > Christian.
> > > > 
> > > > While we can't explicitly control user application accesses to the
> > > > mapped buffers, and hence we use page fault rerouting,
> > > > I am thinking that in this case we may be able to sprinkle
> > > > drm_dev_enter/exit in any such sensitive place where we might
> > > > CPU-access a DMA buffer from the kernel?
> > > Yes, I fear we are going to need that.
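[Editor's sketch: the drm_dev_enter/exit sprinkling would look roughly like this. The real kernel API is bool drm_dev_enter(struct drm_device *dev, int *idx) / drm_dev_exit(idx), backed by SRCU; the code below is a self-contained userspace model of the pattern -- the `struct dev` type, the plain flag in place of SRCU, and `ring_write()` are stand-ins for illustration.]

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-in for struct drm_device; the real drm_dev_enter() is SRCU-based. */
struct dev {
	bool unplugged;
};

static bool dev_enter(struct dev *d)
{
	return !d->unplugged; /* real API also takes an SRCU read lock */
}

static void dev_exit(struct dev *d)
{
	(void)d; /* real API drops the SRCU read lock here */
}

/*
 * Any kernel CPU access to DMA-backed memory gets wrapped like this, so
 * that after hotunplug the access is simply skipped instead of touching
 * freed pages.
 */
static int ring_write(struct dev *d, int *ring, int val)
{
	if (!dev_enter(d))
		return -19; /* -ENODEV: device is gone, don't touch the buffer */
	*ring = val;
	dev_exit(d);
	return 0;
}
```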
> > Uh ... the problem is that dma_buf_vmap mappings are usually permanent
> > things. Maybe we could stuff this into begin/end_cpu_access (but only
> > for the kernel, so a bit tricky)?
> 
> Oh very very good point! I haven't thought about DMA-buf mmaps in this
> context yet.
> 
> 
> > btw the other issue with dma-buf (and even worse with dma_fence) is
> > refcounting of the underlying drm_device. I'd expect that all your
> > callbacks go boom if the dma_buf outlives your drm_device. That part isn't
> > yet solved in your series here.
> 
> Well, thinking more about this, it seems to be another really good argument
> why mapping pages from DMA-bufs into application address space directly is a
> very bad idea :)
> 
> But yes, we essentially can't remove the device as long as there is a
> DMA-buf with mappings. No idea how to clean that one up.

drm_dev_get/put in drm_prime helpers should get us like 90% there I think.
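[Editor's sketch: the drm_dev_get/put idea amounts to the exported dma-buf holding a reference on the drm_device, so the device *structure* (though not the hardware) outlives unplug until the last importer is done. Below is a self-contained userspace refcount model -- the `struct device` type, `freed_flag`, and the export/release hooks are stand-ins, not the real prime helper API.]

```c
#include <assert.h>

/*
 * Minimal model of drm_dev_get()/drm_dev_put(): a plain refcount on the
 * device structure, with a flag standing in for the final kfree().
 */
struct device {
	int refcount;
	int *freed_flag; /* set when the struct would be freed */
};

static void dev_get(struct device *d)
{
	d->refcount++;
}

static void dev_put(struct device *d)
{
	if (--d->refcount == 0)
		*d->freed_flag = 1;
}

/* In the prime export path: the dma-buf pins the device structure. */
static void dmabuf_export(struct device *d)
{
	dev_get(d);
}

/* In the dma-buf ->release callback: last importer drops the reference. */
static void dmabuf_release(struct device *d)
{
	dev_put(d);
}
```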

The even more worrying thing is random dma_fences attached to the dma_resv
object. We could try to clean all of ours up, but they could have escaped
already into some other driver. And since we're talking about eGPU
hotunplug, a dma_fence escaping to the iGPU is a pretty reasonable use-case.

I have no idea how to fix that one :-/
-Daniel

> 
> Christian.
> 
> > -Daniel
> > 
> > > > Things like CPU page table updates, ring buffer accesses and FW memcpy?
> > > > Are there other places?
> > > Puh, good question. I have no idea.
> > > 
> > > > Another point is that at this point the driver shouldn't access any such
> > > > buffers as we are in the process of finishing the device.
> > > > AFAIK there is no page fault mechanism for kernel mappings, so I don't
> > > > think there is anything else to do?
> > > Well there is a page fault handler for kernel mappings, but that one just
> > > prints the stack trace into the system log and calls BUG(); :)
> > > 
> > > Long story short we need to avoid any access to released pages after unplug.
> > > No matter if it's from the kernel or userspace.
> > > 
> > > Regards,
> > > Christian.
> > > 
> > > > Andrey
> 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel
