From: Daniel Vetter <daniel.vetter@ffwll.ch>
To: Andrey Grodzovsky <Andrey.Grodzovsky@amd.com>
Cc: "amd-gfx list" <amd-gfx@lists.freedesktop.org>,
	"Greg KH" <gregkh@linuxfoundation.org>,
	dri-devel <dri-devel@lists.freedesktop.org>,
	"Qiang Yu" <yuq825@gmail.com>,
	"Alex Deucher" <Alexander.Deucher@amd.com>,
	"Christian König" <christian.koenig@amd.com>
Subject: Re: [PATCH v3 05/12] drm/ttm: Expose ttm_tt_unpopulate for driver use
Date: Thu, 17 Dec 2020 00:15:35 +0100	[thread overview]
Message-ID: <CAKMK7uGQtOgHxXQ_AK7f0unrwOnLQm3nb-VUJ_pW6vonRazu0Q@mail.gmail.com> (raw)
In-Reply-To: <8083b9f8-ee43-504f-0690-7add68472ca9@amd.com>

On Wed, Dec 16, 2020 at 7:26 PM Andrey Grodzovsky
<Andrey.Grodzovsky@amd.com> wrote:
>
>
> On 12/16/20 12:12 PM, Daniel Vetter wrote:
> > On Wed, Dec 16, 2020 at 5:18 PM Christian König
> > <christian.koenig@amd.com> wrote:
> >> Am 16.12.20 um 17:13 schrieb Andrey Grodzovsky:
> >>> On 12/16/20 9:21 AM, Daniel Vetter wrote:
> >>>> On Wed, Dec 16, 2020 at 9:04 AM Christian König
> >>>> <ckoenig.leichtzumerken@gmail.com> wrote:
> >>>>> Am 15.12.20 um 21:18 schrieb Andrey Grodzovsky:
> >>>>>> [SNIP]
> >>>>>>>> While we can't control user application accesses to the mapped
> >>>>>>>> buffers explicitly (hence the page fault rerouting), I am thinking
> >>>>>>>> that in this case we may be able to sprinkle drm_dev_enter/exit
> >>>>>>>> in any such sensitive place where we might CPU access a DMA
> >>>>>>>> buffer from the kernel?
> >>>>>>> Yes, I fear we are going to need that.
> >>>>>>>
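For concreteness, a minimal sketch of what such a guard could look like
(drm_dev_enter()/drm_dev_exit() are the real interface; the function, its
parameters and the memcpy here are made up for illustration):

#include <drm/drm_drv.h>
#include <linux/string.h>

/* Illustrative only: skip a kernel-side CPU write to a BO's kernel
 * mapping once the device has been unplugged.
 */
static void example_bo_cpu_write(struct drm_device *dev, void *kptr,
                                 const void *src, size_t size)
{
        int idx;

        /* Fails once drm_dev_unplug() has been called for this device. */
        if (!drm_dev_enter(dev, &idx))
                return;

        memcpy(kptr, src, size);

        drm_dev_exit(idx);
}

Every sensitive spot would need an equivalent guard, which is why the list
of places below matters.
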
> >>>>>>>> Things like CPU page table updates, ring buffer accesses and FW
> >>>>>>>> memcpy? Are there other places?
> >>>>>>> Puh, good question. I have no idea.
> >>>>>>>
> >>>>>>>> Another point is that at this stage the driver shouldn't access
> >>>>>>>> any such buffers, as we are in the process of finishing the device.
> >>>>>>>> AFAIK there is no page fault mechanism for kernel mappings, so I
> >>>>>>>> don't think there is anything else to do?
> >>>>>>> Well there is a page fault handler for kernel mappings, but that one
> >>>>>>> just prints the stack trace into the system log and calls BUG(); :)
> >>>>>>>
> >>>>>>> Long story short we need to avoid any access to released pages after
> >>>>>>> unplug. No matter if it's from the kernel or userspace.
> >>>>>> I was just about to start guarding CPU accesses from the kernel to
> >>>>>> GTT or VRAM buffers with drm_dev_enter/exit, but then I looked more
> >>>>>> into the code and it seems like ttm_tt_unpopulate just deletes DMA
> >>>>>> mappings (for the sake of device to main memory access). The kernel
> >>>>>> page table is not touched until the last bo refcount is dropped and
> >>>>>> the bo is released
> >>>>>> (ttm_bo_release->destroy->amdgpu_bo_destroy->amdgpu_bo_kunmap). This
> >>>>>> is true both for GTT BOs mapped to the kernel by kmap (or vmap) and
> >>>>>> for VRAM BOs mapped by ioremap. So as I see it, nothing bad will
> >>>>>> happen after we unpopulate a BO while we still use a kernel mapping
> >>>>>> for it: the system memory pages backing GTT BOs are still mapped and
> >>>>>> not freed, and the same holds for the IO physical ranges of VRAM BOs
> >>>>>> mapped into the kernel page table, since iounmap wasn't called yet.
> >>>>> The problem is that the system pages would be freed, and if the
> >>>>> kernel driver still happily writes to them we are pretty much busted
> >>>>> because we are writing to freed memory.
> >>>
> >>> OK, I see I missed ttm_tt_unpopulate->..->ttm_pool_free, which will
> >>> release the GTT BO pages. But then isn't there a problem in
> >>> ttm_bo_release, since ttm_bo_cleanup_memtype_use (which also leads to
> >>> page release) comes before bo->destroy, which unmaps the pages from the
> >>> kernel page table? Won't we end up writing to freed memory in this time
> >>> interval? Don't we need to postpone freeing the pages until after the
> >>> kernel page table unmapping?
> >> BOs are only destroyed when there is a guarantee that nobody is
> >> accessing them any more.
> >>
> >> The problem here is that the pages as well as the VRAM can be
> >> immediately reused after the hotplug event.
> >>
> >>>
> >>>> Similar for vram: if this is an actual hotunplug and then replug,
> >>>> there's most likely going to be a different device behind the same
> >>>> mmio bar range (the bridges higher up above all this have the same
> >>>> windows assigned),
> >>>
> >>> No idea how this actually works, but if we haven't called iounmap yet,
> >>> doesn't that mean those physical ranges that are still mapped into the
> >>> page table should be reserved and cannot be reused for another device?
> >>> As a guess, maybe another subrange from the higher bridge's total range
> >>> will be allocated.
> >> Nope, the PCIe subsystem doesn't care about any ioremap still active for
> >> a range when it is hotplugged.
> >>
> >>>> and that's bad news if we keep using it for current drivers. So we
> >>>> really have to point all these cpu ptes to some other place.
> >>>
> >>> We can't just unmap it without syncing against any in-kernel accesses
> >>> to those buffers, and since the page faulting technique we use for
> >>> user-mapped buffers doesn't seem possible for kernel-mapped buffers,
> >>> I am not sure how to do it gracefully...
> >> We could try to replace the kmap with a dummy page under the hood, but
> >> that is extremely tricky.
> >>
> >> Especially since BOs which are just 1 page in size could point to the
> >> linear mapping directly.
> > I think it's just more work. Essentially
> > - convert as much as possible of the kernel mappings to vmap_local,
> > which Thomas Zimmermann is rolling out. That way a dma_resv_lock will
> > serve as a barrier, and ofc any new vmap needs to fail or hand out a
> > dummy mapping.
>
> I read those patches. I am not sure how this helps with protecting
> against accesses to the released backing pages or IO physical ranges of a
> BO which is already mapped during the unplug event?

By eliminating such users, and replacing them with local maps which
are strictly bound in how long they can exist (and hence we can
serialize against them finishing in our hotunplug code). It doesn't
solve all your problems, but it's a tool to get there.
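
To make that concrete, a rough sketch of the resulting pattern
(dma_resv_lock()/dma_resv_unlock() are the real API; example_vmap_local()
and example_vunmap_local() are placeholder names for whatever the
short-lived mapping helpers end up being called):

#include <drm/drm_gem.h>
#include <linux/dma-resv.h>
#include <linux/err.h>
#include <linux/string.h>

/* Sketch only: the mapping exists solely inside the dma_resv critical
 * section, so hotunplug code can take the same lock as a barrier and
 * then fail or dummy out any further mapping attempts.
 */
static int example_bo_access(struct drm_gem_object *obj,
                             const void *src, size_t size)
{
        void *vaddr;
        int ret;

        ret = dma_resv_lock(obj->resv, NULL);
        if (ret)
                return ret;

        vaddr = example_vmap_local(obj);        /* placeholder helper */
        if (IS_ERR(vaddr)) {
                ret = PTR_ERR(vaddr);
                goto unlock;
        }

        memcpy(vaddr, src, size);               /* strictly bounded access */

        example_vunmap_local(obj, vaddr);       /* placeholder helper */
unlock:
        dma_resv_unlock(obj->resv);
        return ret;
}
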
-Daniel

> Andrey
>
>
> > - handle fbcon somehow. I think shutting it all down should work out.
> > - worst case keep the system backing storage around for shared dma-buf
> > until the other non-dynamic driver releases it. for vram we require
> > dynamic importers (and maybe it wasn't such a bright idea to allow
> > pinning of importer buffers, might need to revisit that).
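
(As an aside on what "dynamic importer" means here, a sketch:
dma_buf_dynamic_attach() and the dma_buf_attach_ops callbacks are the real
interface, everything prefixed example_ is illustrative.)

#include <linux/dma-buf.h>

/* The exporter calls this back when it wants to move or tear down the
 * backing storage; the importer must drop its cached mappings and
 * re-map on next use, under the shared dma_resv lock.
 */
static void example_move_notify(struct dma_buf_attachment *attach)
{
        /* invalidate cached mappings here */
}

static const struct dma_buf_attach_ops example_importer_ops = {
        .allow_peer2peer = true,
        .move_notify = example_move_notify,
};

static struct dma_buf_attachment *
example_dynamic_attach(struct dma_buf *dmabuf, struct device *dev)
{
        /* No pinning for the attachment's lifetime: the exporter keeps
         * the right to move or reclaim the backing storage.
         */
        return dma_buf_dynamic_attach(dmabuf, dev,
                                      &example_importer_ops, NULL);
}
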
> >
> > Cheers, Daniel
> >
> >> Christian.
> >>
> >>> Andrey
> >>>
> >>>
> >>>> -Daniel
> >>>>
> >>>>> Christian.
> >>>>>
> >>>>>> I loaded the driver with vm_update_mode=3, meaning all VM updates
> >>>>>> are done using the CPU, and haven't seen any OOPSes after removing
> >>>>>> the device. I guess I can test it more by allocating GTT and VRAM
> >>>>>> BOs and trying to read/write to them after the device is removed.
> >>>>>>
> >>>>>> Andrey
> >>>>>>
> >>>>>>
> >>>>>>> Regards,
> >>>>>>> Christian.
> >>>>>>>
> >>>>>>>> Andrey
> >>>>>>
> >



-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
