From: Daniel Vetter <daniel@ffwll.ch>
To: Jerome Glisse <jglisse@redhat.com>
Cc: Alex Sierra <alex.sierra@amd.com>,
	"Yang, Philip" <philip.yang@amd.com>,
	Felix Kuehling <felix.kuehling@amd.com>,
	amd-gfx list <amd-gfx@lists.freedesktop.org>,
	dri-devel <dri-devel@lists.freedesktop.org>
Subject: Re: [PATCH 00/35] Add HMM-based SVM memory manager to KFD
Date: Thu, 14 Jan 2021 11:39:15 +0100	[thread overview]
Message-ID: <CAKMK7uF4d0pX1RsE2wzwm2aOM_kx5aTEHw3qjsyeOO7Efh6hcA@mail.gmail.com> (raw)
In-Reply-To: <CAKMK7uF=5HA1kuM6phVe6n74MZSrCB4pb9Dn6Ax02tj6xQ0dWQ@mail.gmail.com>

On Thu, Jan 14, 2021 at 10:26 AM Daniel Vetter <daniel@ffwll.ch> wrote:
>
> On Thu, Jan 14, 2021 at 4:27 AM Jerome Glisse <jglisse@redhat.com> wrote:
> >
> > On Wed, Jan 13, 2021 at 09:31:11PM +0100, Daniel Vetter wrote:
> > > On Wed, Jan 13, 2021 at 5:56 PM Jerome Glisse <jglisse@redhat.com> wrote:
> > > > On Fri, Jan 08, 2021 at 03:40:07PM +0100, Daniel Vetter wrote:
> > > > > On Thu, Jan 07, 2021 at 11:25:41AM -0500, Felix Kuehling wrote:
> > > > > > Am 2021-01-07 um 4:23 a.m. schrieb Daniel Vetter:
> > > > > > > On Wed, Jan 06, 2021 at 10:00:52PM -0500, Felix Kuehling wrote:
> > > > > > >> This is the first version of our HMM based shared virtual memory manager
> > > > > > >> for KFD. There are still a number of known issues that we're working through
> > > > > > >> (see below). This will likely lead to some pretty significant changes in
> > > > > > >> MMU notifier handling and locking on the migration code paths. So don't
> > > > > > >> get hung up on those details yet.
> > > > > > >>
> > > > > > >> But I think this is a good time to start getting feedback. We're pretty
> > > > > > >> confident about the ioctl API, which is both simple and extensible for the
> > > > > > >> future. (see patches 4,16) The user mode side of the API can be found here:
> > > > > > >> https://github.com/RadeonOpenCompute/ROCT-Thunk-Interface/blob/fxkamd/hmm-wip/src/svm.c
> > > > > > >>
> > > > > > >> I'd also like another pair of eyes on how we're interfacing with the GPU VM
> > > > > > >> code in amdgpu_vm.c (see patches 12,13), retry page fault handling (24,25),
> > > > > > >> and some retry IRQ handling changes (32).
> > > > > > >>
> > > > > > >>
> > > > > > >> Known issues:
> > > > > > >> * won't work with IOMMU enabled, we need to dma_map all pages properly
> > > > > > >> * still working on some race conditions and random bugs
> > > > > > >> * performance is not great yet
> > > > > > > Still catching up, but I think there's another one for your list:
> > > > > > >
> > > > > > >  * hmm gpu context preempt vs page fault handling. I've had a short
> > > > > > >    discussion about this one with Christian before the holidays, and also
> > > > > > >    some private chats with Jerome. It's nasty, since there's no easy fix,
> > > > > > >    much less a good idea of what the best approach is.
> > > > > >
> > > > > > Do you have a pointer to that discussion or any more details?
> > > > >
> > > > > Essentially, if you're handling an hmm page fault from the gpu, you can
> > > > > deadlock by calling dma_fence_wait on (possibly a chain of) other command
> > > > > submissions or compute contexts. That wait deadlocks if you can't preempt
> > > > > while you have that page fault pending. Two solutions:
> > > > >
> > > > > - your hw can (at least for compute ctx) preempt even when a page fault is
> > > > >   pending
> > > > >
> > > > > - lots of screaming in trying to come up with an alternate solution. They
> > > > >   all suck.
> > > > >
> > > > > Note that the dma_fence_wait is a hard requirement, because we need it for
> > > > > mmu notifiers and shrinkers; disallowing it would disable dynamic memory
> > > > > management, which is exactly the current "ttm is self-limited to 50% of
> > > > > system memory" limitation Christian is trying to lift. So that's really not
> > > > > a requirement we can drop, at least not upstream, where we also need to
> > > > > support old-style hardware which doesn't have page fault support and
> > > > > really has no other option for handling memory management than
> > > > > dma_fence_wait.
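
A minimal sketch of the pattern being referred to, with illustrative names
(my_userptr_bo and last_gpu_fence are assumptions, not real driver code): the
mmu notifier invalidate callback has to make the GPU stop using the pages
before the core mm may reuse them, and that is the dma_fence_wait that cannot
be dropped without giving up dynamic memory management.

	#include <linux/dma-fence.h>
	#include <linux/mmu_notifier.h>

	struct my_userptr_bo {
		struct mmu_interval_notifier notifier;
		struct dma_fence *last_gpu_fence;	/* last GPU job using the pages */
	};

	static bool my_userptr_invalidate(struct mmu_interval_notifier *mni,
					  const struct mmu_notifier_range *range,
					  unsigned long cur_seq)
	{
		struct my_userptr_bo *bo =
			container_of(mni, struct my_userptr_bo, notifier);

		if (!mmu_notifier_range_blockable(range))
			return false;

		/* Mark our cached mapping stale ... */
		mmu_interval_set_seq(mni, cur_seq);

		/*
		 * ... and wait for the GPU to finish with the pages.  The core
		 * mm cannot reuse them before this callback returns.
		 */
		if (bo->last_gpu_fence)
			dma_fence_wait(bo->last_gpu_fence, false);

		return true;
	}

	static const struct mmu_interval_notifier_ops my_userptr_notifier_ops = {
		.invalidate = my_userptr_invalidate,
	};
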
> > > > >
> > > > > Thread was here:
> > > > >
> > > > > https://lore.kernel.org/dri-devel/CAKMK7uGgoeF8LmFBwWh5mW1k4xWjuUh3hdSFpVH1NBM7K0=edA@mail.gmail.com/
> > > > >
> > > > > There are a few ways to resolve this (without having preempt-capable
> > > > > hardware), but they're all supremely nasty.
> > > > > -Daniel
> > > > >
> > > >
> > > > I had a new idea; I wanted to think more about it but have not yet,
> > > > anyway here it is. Add a new callback to dma fence which asks the
> > > > question: can waiting on it deadlock? Any time a GPU driver has a
> > > > pending page fault (ie something calling into the mm) it answers yes,
> > > > otherwise no. The GPU shrinker would ask the question before waiting
> > > > on any dma-fence and back off if it gets a yes. The shrinker can still
> > > > try many dma-buf objects for which it does not get a yes on the
> > > > associated fence.
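
A rough sketch of what such a callback could look like; may_deadlock and
dma_fence_deadlock_ops are purely hypothetical (upstream dma_fence_ops has no
such callback), this only illustrates the proposed rule of backing off unless
the driver is certain:

	#include <linux/dma-fence.h>

	/*
	 * Hypothetical only: a driver would answer true whenever the context
	 * backing this fence has a page fault pending, i.e. whenever
	 * completing it may call back into the mm.
	 */
	struct dma_fence_deadlock_ops {
		bool (*may_deadlock)(struct dma_fence *fence);
	};

	/*
	 * Shrinker-side rule from the proposal: never wait on a fence unless
	 * the driver is 100% sure it cannot deadlock; otherwise skip this
	 * object and try the next one.
	 */
	static bool shrinker_may_wait_on(struct dma_fence *fence,
					 const struct dma_fence_deadlock_ops *ops)
	{
		/* First implementation: without positive knowledge, assume deadlock. */
		if (!ops || !ops->may_deadlock)
			return false;

		return !ops->may_deadlock(fence);
	}
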
> > >
> > > Having that answer on a given fence isn't enough; you still need to
> > > forward that information through the entire dependency graph, across
> > > drivers. That's the hard part, since that dependency graph is very
> > > implicit in the code, and we'd need to first roll it out across all
> > > drivers.
> >
> > Here I am saying: do not wait on a fence you are not sure about.
> > Only wait on fences for which you are 100% certain you cannot
> > deadlock. So if you can never be sure about a dma fence, then never
> > wait on dma-fence in the shrinker. However, most drivers should have
> > enough information in their shrinker to know if it is safe to wait on
> > fences internal to their device driver (and also to know if any of
> > those fences has an implicit outside dependency). So the first
> > implementation would be to say "always deadlocks" and then have each
> > driver build confidence in what it can ascertain.
>
> I just don't think that actually works in practice:
>
> - on a single gpu you can't wait for vk/gl due to shared CUs, so only
> sdma and uvd are left (or whatever other pure fixed-function engines)
>
> - for multi-gpu you get the guessing game of what leaks across gpus
> and what doesn't. With p2p dma-buf we're now leaking dma_fence across
> gpus even when there's no implicit syncing by userspace (although for
> amdgpu this is tricky since iirc it still lacks the flag to let
> userspace decide this, so this is more for other drivers).
>
> - you don't just need to guarantee that there's no dma_fence
> dependency going back to you; you also need to make sure there's no
> other dependency chain through locks or whatever that closes the loop.
> And since your proposal here goes against the dma_fence lockdep
> annotations we have now (sketched below, after this list), lockdep
> won't help you (and let's be honest, review doesn't catch this stuff
> either, so it's left to hangs in production)
>
> - you still need the full dependency graph within the driver, and only
> i915 scheduler has that afaik. And I'm not sure implementing that was
> a bright idea
>
> - assuming it's a deadlock by default means all gl/vk memory is
> pinned. That's not nice, and in addition you need hacks like ttm's
> "max 50% of system memory" to paper over the worst fallout, which
> Christian is trying to lift. I really do think we need to be able to
> move towards more dynamic memory management, not less.
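
The dma_fence lockdep annotations mentioned in the list above do exist as
dma_fence_begin_signalling()/dma_fence_end_signalling(); a minimal sketch of
how a driver brackets its fence-signalling path with them, where
my_driver_complete_job is an illustrative name only:

	#include <linux/dma-fence.h>

	static void my_driver_complete_job(struct dma_fence *job_fence)
	{
		/*
		 * Everything between begin/end_signalling is treated by
		 * lockdep as code that must run before fences can signal, so
		 * taking memory-reclaim locks in here (or waiting on other
		 * fences that do) gets flagged.
		 */
		bool cookie = dma_fence_begin_signalling();

		/* ... hardware cleanup that must happen before signalling ... */

		dma_fence_signal(job_fence);
		dma_fence_end_signalling(cookie);
	}
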

Forgot one issue:

- somehow you need to transport the knowledge that you're in the gpu
fault repair path of a specific engine down to shrinkers/mmu notifiers
and all that. And it needs to be fairly specific, otherwise it just
amounts again to "no more dma_fence_wait allowed".

-Daniel
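
One existing precedent for transporting that kind of context down an
arbitrary call chain is the per-task memalloc_nofs_save()/
memalloc_nofs_restore() pair used by filesystem reclaim; a GPU-engine-specific
analogue would have to be invented, nothing like it exists today. A minimal
illustration of the existing pattern only:

	#include <linux/sched/mm.h>
	#include <linux/slab.h>

	static void nofs_context_example(void)
	{
		/* Mark the task: allocations in all callees implicitly drop __GFP_FS. */
		unsigned int flags = memalloc_nofs_save();

		kfree(kmalloc(64, GFP_KERNEL));

		memalloc_nofs_restore(flags);
	}
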

> So in the end you're essentially disabling shrinking/eviction of other
> gpu tasks, and I don't think that works. I really think the only two
> realistic options are
> - guarantee forward progress of other dma_fence (hw preemption,
> reserved CUs, or whatever else you have)
> - guarantee there's not a single offending dma_fence active in the
> system that could cause problems
>
> Hand-waving that in theory we could track the dependencies and that in
> theory we could do some sort of deadlock avoidance around that just
> doesn't look like a pragmatic & practical solution to me here. It feels
> about as realistic as just creating a completely new memory management
> model that sidesteps the entire set of dma_fence issues we have due to
> mixing up kernel memory management and userspace sync fences in one
> thing.
>
> Cheers, Daniel
>
> > > > This does not solve the mmu notifier case. For that you would just
> > > > invalidate the gem userptr object (with a flag, but not releasing the
> > > > page refcount), but you would not wait for the GPU (ie no dma fence
> > > > wait in that code path anymore). The userptr API never really made
> > > > the contract that it will always be in sync with the mm's view of the
> > > > world, so if different pages get remapped to the same virtual address
> > > > while the GPU is still working with the old pages it should not be an
> > > > issue (it would not be a problem in our usage of userptr for
> > > > compositors and whatnot).
> > > >
> > > > Maybe I'm overlooking something there.
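
A sketch of the variant Jerome describes, again with illustrative names: the
invalidate callback only marks the mapping stale instead of waiting on a
fence, trading strict coherency with the mm for never blocking on the GPU.

	#include <linux/mmu_notifier.h>

	struct my_userptr_bo_noblock {
		struct mmu_interval_notifier notifier;
		bool stale;	/* pages no longer match the mm's view */
	};

	static bool my_userptr_invalidate_noblock(struct mmu_interval_notifier *mni,
						  const struct mmu_notifier_range *range,
						  unsigned long cur_seq)
	{
		struct my_userptr_bo_noblock *bo =
			container_of(mni, struct my_userptr_bo_noblock, notifier);

		/*
		 * Only record that the mapping is stale; the page refcounts
		 * are kept and there is no dma_fence_wait, so the GPU may keep
		 * using the old pages until the next command submission
		 * revalidates the range.  Since nothing blocks here,
		 * non-blockable invalidations are fine as well.
		 */
		mmu_interval_set_seq(mni, cur_seq);
		bo->stale = true;

		return true;
	}
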
> > >
> > > tbh I'm never really clear on how much exactly we need, and whether
> > > maybe the new pin/unpin api should fix it all.
> >
> > pin/unpin is not a solution; it is there to fix something with GUP
> > (where we need to know if a page is GUPed or not). GUP should die long
> > term, so anything using GUP (pin/unpin falls into that) should die long
> > term. Pinning memory is bad, period (it just breaks too much of the mm
> > and it is unsolvable for things like mremap, splice, ...).
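
For reference, the pin/unpin API discussed here is the FOLL_PIN interface
(pin_user_pages_fast()/unpin_user_page()), as opposed to plain
get_user_pages(); a minimal usage sketch, with error handling trimmed and
pin_one_user_page as an illustrative name:

	#include <linux/errno.h>
	#include <linux/mm.h>

	static int pin_one_user_page(unsigned long addr, struct page **page)
	{
		/* pin_user_pages* applies FOLL_PIN, marking the page as DMA-pinned. */
		int ret = pin_user_pages_fast(addr & PAGE_MASK, 1, FOLL_WRITE, page);

		if (ret != 1)
			return ret < 0 ? ret : -EFAULT;

		/* ... use the page for DMA ... */

		/* The pin must be dropped with unpin_user_page(), not put_page(). */
		unpin_user_page(*page);
		return 0;
	}
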
> >
> > Cheers,
> > Jérôme
> >
>
>
> --
> Daniel Vetter
> Software Engineer, Intel Corporation
> http://blog.ffwll.ch



-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

Thread overview: 168+ messages

2021-01-07  3:00 [PATCH 00/35] Add HMM-based SVM memory manager to KFD Felix Kuehling
2021-01-07  3:00 ` [PATCH 01/35] drm/amdkfd: select kernel DEVICE_PRIVATE option Felix Kuehling
2021-01-07  3:00 ` [PATCH 02/35] drm/amdgpu: replace per_device_list by array Felix Kuehling
2021-01-07  3:00 ` [PATCH 03/35] drm/amdkfd: helper to convert gpu id and idx Felix Kuehling
2021-01-07  3:00 ` [PATCH 04/35] drm/amdkfd: add svm ioctl API Felix Kuehling
2021-01-07  3:00 ` [PATCH 05/35] drm/amdkfd: Add SVM API support capability bits Felix Kuehling
2021-01-07  3:00 ` [PATCH 06/35] drm/amdkfd: register svm range Felix Kuehling
2021-01-07  3:00 ` [PATCH 07/35] drm/amdkfd: add svm ioctl GET_ATTR op Felix Kuehling
2021-01-07  3:01 ` [PATCH 08/35] drm/amdgpu: add common HMM get pages function Felix Kuehling
2021-01-07 10:53   ` Christian König
2021-01-07  3:01 ` [PATCH 09/35] drm/amdkfd: validate svm range system memory Felix Kuehling
2021-01-07  3:01 ` [PATCH 10/35] drm/amdkfd: register overlap system memory range Felix Kuehling
2021-01-07  3:01 ` [PATCH 11/35] drm/amdkfd: deregister svm range Felix Kuehling
2021-01-07  3:01 ` [PATCH 12/35] drm/amdgpu: export vm update mapping interface Felix Kuehling
2021-01-07 10:54   ` Christian König
2021-01-07  3:01 ` [PATCH 13/35] drm/amdkfd: map svm range to GPUs Felix Kuehling
2021-01-07  3:01 ` [PATCH 14/35] drm/amdkfd: svm range eviction and restore Felix Kuehling
2021-01-07  3:01 ` [PATCH 15/35] drm/amdkfd: add xnack enabled flag to kfd_process Felix Kuehling
2021-01-07  3:01 ` [PATCH 16/35] drm/amdkfd: add ioctl to configure and query xnack retries Felix Kuehling
2021-01-07  3:01 ` [PATCH 17/35] drm/amdkfd: register HMM device private zone Felix Kuehling
2021-03-01  8:32   ` Daniel Vetter
2021-03-01  8:46     ` Thomas Hellström (Intel)
2021-03-01  8:58       ` Daniel Vetter
2021-03-01  9:30         ` Thomas Hellström (Intel)
2021-03-04 17:58       ` Felix Kuehling
2021-03-11 12:24         ` Thomas Hellström (Intel)
2021-01-07  3:01 ` [PATCH 18/35] drm/amdkfd: validate vram svm range from TTM Felix Kuehling
2021-01-07  3:01 ` [PATCH 19/35] drm/amdkfd: support xgmi same hive mapping Felix Kuehling
2021-01-07  3:01 ` [PATCH 20/35] drm/amdkfd: copy memory through gart table Felix Kuehling
2021-01-07  3:01 ` [PATCH 21/35] drm/amdkfd: HMM migrate ram to vram Felix Kuehling
2021-01-07  3:01 ` [PATCH 22/35] drm/amdkfd: HMM migrate vram to ram Felix Kuehling
2021-01-07  3:01 ` [PATCH 23/35] drm/amdkfd: invalidate tables on page retry fault Felix Kuehling
2021-01-07  3:01 ` [PATCH 24/35] drm/amdkfd: page table restore through svm API Felix Kuehling
2021-01-07  3:01 ` [PATCH 25/35] drm/amdkfd: SVM API call to restore page tables Felix Kuehling
2021-01-07  3:01 ` [PATCH 26/35] drm/amdkfd: add svm_bo reference for eviction fence Felix Kuehling
2021-01-07  3:01 ` [PATCH 27/35] drm/amdgpu: add param bit flag to create SVM BOs Felix Kuehling
2021-01-07  3:01 ` [PATCH 28/35] drm/amdkfd: add svm_bo eviction mechanism support Felix Kuehling
2021-01-07  3:01 ` [PATCH 29/35] drm/amdgpu: svm bo enable_signal call condition Felix Kuehling
2021-01-07 10:56   ` Christian König
2021-01-07 16:16     ` Felix Kuehling
2021-01-07 16:28       ` Christian König
2021-01-07 16:53         ` Felix Kuehling
2021-01-07  3:01 ` [PATCH 30/35] drm/amdgpu: add svm_bo eviction to enable_signal cb Felix Kuehling
2021-01-07  3:01 ` [PATCH 31/35] drm/amdgpu: reserve fence slot to update page table Felix Kuehling
2021-01-07 10:57   ` Christian König
2021-01-07  3:01 ` [PATCH 32/35] drm/amdgpu: enable retry fault wptr overflow Felix Kuehling
2021-01-07 11:01   ` Christian König
2021-01-07  3:01 ` [PATCH 33/35] drm/amdkfd: refine migration policy with xnack on Felix Kuehling
2021-01-07  3:01 ` [PATCH 34/35] drm/amdkfd: add svm range validate timestamp Felix Kuehling
2021-01-07  3:01 ` [PATCH 35/35] drm/amdkfd: multiple gpu migrate vram to vram Felix Kuehling
2021-01-07  9:23 ` [PATCH 00/35] Add HMM-based SVM memory manager to KFD Daniel Vetter
2021-01-07 16:25   ` Felix Kuehling
2021-01-08 14:40     ` Daniel Vetter
2021-01-08 14:45       ` Christian König
2021-01-08 15:58       ` Felix Kuehling
2021-01-08 16:06         ` Daniel Vetter
2021-01-08 16:36           ` Felix Kuehling
2021-01-08 16:53             ` Daniel Vetter
2021-01-08 17:56               ` Felix Kuehling
2021-01-11 16:29                 ` Daniel Vetter
2021-01-14  5:34                   ` Felix Kuehling
2021-01-14 12:19                     ` Christian König
2021-01-13 16:56       ` Jerome Glisse
2021-01-13 20:31         ` Daniel Vetter
2021-01-14  3:27           ` Jerome Glisse
2021-01-14  9:26             ` Daniel Vetter
2021-01-14 10:39               ` Daniel Vetter [this message]
2021-01-14 10:49         ` Christian König
2021-01-14 11:52           ` Daniel Vetter
2021-01-14 13:37             ` HMM fence (was Re: [PATCH 00/35] Add HMM-based SVM memory manager to KFD) Christian König
2021-01-14 13:57               ` Daniel Vetter
2021-01-14 14:13                 ` Christian König
2021-01-14 14:23                   ` Daniel Vetter
2021-01-14 15:08                     ` Christian König
2021-01-14 15:40                       ` Daniel Vetter
2021-01-14 16:01                         ` Christian König
2021-01-14 16:36                           ` Daniel Vetter
2021-01-14 19:08                             ` Christian König
2021-01-14 20:09                               ` Daniel Vetter
2021-01-14 16:51               ` Jerome Glisse
2021-01-14 21:13                 ` Felix Kuehling
2021-01-15  7:47                   ` Christian König
2021-01-13 16:47 ` [PATCH 00/35] Add HMM-based SVM memory manager to KFD Jerome Glisse
2021-01-14  0:06   ` Felix Kuehling
