From: Daniel Vetter <daniel.vetter@ffwll.ch>
To: "Thomas Hellström (Intel)" <thomas_os@shipmail.org>
Cc: "Dave Airlie" <airlied@gmail.com>,
	"Christian König" <christian.koenig@amd.com>,
	"Daniel Stone" <daniels@collabora.com>,
	linux-rdma <linux-rdma@vger.kernel.org>,
	"Intel Graphics Development" <intel-gfx@lists.freedesktop.org>,
	"Maarten Lankhorst" <maarten.lankhorst@linux.intel.com>,
	"DRI Development" <dri-devel@lists.freedesktop.org>,
	"moderated list:DMA BUFFER SHARING FRAMEWORK"
	<linaro-mm-sig@lists.linaro.org>,
	"Steve Pronovost" <spronovo@microsoft.com>,
	"amd-gfx mailing list" <amd-gfx@lists.freedesktop.org>,
	"Jason Ekstrand" <jason@jlekstrand.net>,
	"Jesse Natalie" <jenatali@microsoft.com>,
	"Daniel Vetter" <daniel.vetter@intel.com>,
	"Thomas Hellstrom" <thomas.hellstrom@intel.com>,
	"Mika Kuoppala" <mika.kuoppala@intel.com>,
	"Felix Kuehling" <Felix.Kuehling@amd.com>,
	"Linux Media Mailing List" <linux-media@vger.kernel.org>
Subject: Re: [Linaro-mm-sig] [PATCH 1/2] dma-buf.rst: Document why indefinite fences are a bad idea
Date: Wed, 22 Jul 2020 13:39:21 +0200	[thread overview]
Message-ID: <CAKMK7uH0rcyepP2hDpNB-yuvNyjee1tPmxWUyefS5j7i-N6Pfw@mail.gmail.com> (raw)
In-Reply-To: <8fd999f2-cbf6-813c-6ad4-131948fb5cc5@shipmail.org>

On Wed, Jul 22, 2020 at 12:31 PM Thomas Hellström (Intel)
<thomas_os@shipmail.org> wrote:
>
>
> On 2020-07-22 11:45, Daniel Vetter wrote:
> > On Wed, Jul 22, 2020 at 10:05 AM Thomas Hellström (Intel)
> > <thomas_os@shipmail.org> wrote:
> >>
> >> On 2020-07-22 09:11, Daniel Vetter wrote:
> >>> On Wed, Jul 22, 2020 at 8:45 AM Thomas Hellström (Intel)
> >>> <thomas_os@shipmail.org> wrote:
> >>>> On 2020-07-22 00:45, Dave Airlie wrote:
> >>>>> On Tue, 21 Jul 2020 at 18:47, Thomas Hellström (Intel)
> >>>>> <thomas_os@shipmail.org> wrote:
> >>>>>> On 7/21/20 9:45 AM, Christian König wrote:
> >>>>>>> Am 21.07.20 um 09:41 schrieb Daniel Vetter:
> >>>>>>>> On Mon, Jul 20, 2020 at 01:15:17PM +0200, Thomas Hellström (Intel)
> >>>>>>>> wrote:
> >>>>>>>>> Hi,
> >>>>>>>>>
> >>>>>>>>> On 7/9/20 2:33 PM, Daniel Vetter wrote:
> >>>>>>>>>> Comes up every few years, gets somewhat tedious to discuss, let's
> >>>>>>>>>> write this down once and for all.
> >>>>>>>>>>
> >>>>>>>>>> What I'm not sure about is whether the text should be more explicit in
> >>>>>>>>>> flat out mandating the amdkfd eviction fences for long running compute
> >>>>>>>>>> workloads or workloads where userspace fencing is allowed.
> >>>>>>>>> Although (in my humble opinion) it might be possible to completely
> >>>>>>>>> untangle kernel-introduced fences for resource management from
> >>>>>>>>> dma-fences used for completion- and dependency tracking, and to lift
> >>>>>>>>> a lot of restrictions for the dma-fences, including prohibiting
> >>>>>>>>> infinite ones, I think it makes sense to describe the current state.
> >>>>>>>> Yeah I think a future patch needs to type up how we want to make that
> >>>>>>>> happen (for some cross driver consistency) and what needs to be
> >>>>>>>> considered. Some of the necessary parts are already there (e.g. the
> >>>>>>>> preemption fences amdkfd has), but I think some clear docs
> >>>>>>>> on what's required from both hw, drivers and userspace would be really
> >>>>>>>> good.
> >>>>>>> I'm currently writing that up, but probably still need a few days for
> >>>>>>> this.
> >>>>>> Great! I put down some (very) initial thoughts a couple of weeks ago
> >>>>>> building on eviction fences for various hardware complexity levels here:
> >>>>>>
> >>>>>> https://gitlab.freedesktop.org/thomash/docs/-/blob/master/Untangling%20dma-fence%20and%20memory%20allocation.odt
> >>>>> We are seeing HW that has recoverable GPU page faults but only for
> >>>>> compute tasks, and scheduler hw without semaphores for graphics.
> >>>>>
> >>>>> So a single driver may have to expose both models to userspace, which
> >>>>> also introduces the problem of how to interoperate between the two
> >>>>> models on one card.
> >>>>>
> >>>>> Dave.
> >>>> Hmm, yes. To begin with it's important to note that this is not a
> >>>> replacement for new programming models or APIs. This is something that
> >>>> takes place internally in drivers to mitigate many of the restrictions
> >>>> that are currently imposed on dma-fence and documented in this and
> >>>> previous series. It's basically the driver-private narrow completions
> >>>> Jason suggested in the lockdep patch discussions, implemented the same
> >>>> way as eviction-fences.
> >>>>
> >>>> The memory fence API would be local to helpers and middle-layers like
> >>>> TTM, and the corresponding drivers.  The only cross-driver-like
> >>>> visibility would be that the dma-buf move_notify() callback would not be
> >>>> allowed to wait on dma-fences or something that depends on a dma-fence.
> >>> Because we can't preempt (on some engines at least) we already have
> >>> the requirement that cross driver buffer management can get stuck on a
> >>> dma-fence. Not even taking into account the horrors we do with
> >>> userptr, which are cross driver no matter what. Limiting move_notify
> >>> to memory fences only doesn't work, since the pte clearing might need
> >>> to wait for a dma_fence first. Hence this becomes a full end-of-batch
> >>> fence, not just a limited kernel-internal memory fence.
> >> For non-preemptible hardware the memory fence typically *is* the
> >> end-of-batch fence. (Unless, as documented, there is a scheduler
> >> consuming sync-file dependencies, in which case the memory fence wait
> >> needs to be able to break out of that.) The key thing is not that we can
> >> break out of execution, but that we can break out of dependencies, since
> >> when we're executing all dependencies (modulo semaphores) are already
> >> fulfilled. That's what's eliminating the deadlocks.
> >>
> >>> That's kinda why I think the only reasonable option is to toss in the
> >>> towel and declare dma-fence to be the memory fence (and suck up all
> >>> the consequences of that decision as uapi, which is kinda where we
> >>> are), and construct something new & entirely free-wheeling for userspace
> >>> fencing. But only for engines that allow enough preempt/gpu page
> >>> faulting to make that possible. Free wheeling userspace fences/gpu
> >>> semaphores or whatever you want to call them (on windows I think it's
> >>> monitored fence) only work if you can preempt to decouple the memory
> >>> fences from your gpu command execution.
> >>>
> >>> There's the in-between step of just decoupling the batchbuffer
> >>> submission prep for hw without any preempt (but a scheduler), but that
> >>> seems kinda pointless. Modern execbuf should be O(1) fastpath, with
> >>> all the allocation/mapping work pulled out ahead. vk exposes that
> >>> model directly to clients, GL drivers could use it internally too, so
> >>> I see zero value in spending lots of time engineering very tricky
> >>> kernel code just for old userspace. Much more reasonable to do that in
> >>> userspace, where we have real debuggers and no panics about security
> >>> bugs (or well, a lot less, webgl is still a thing, but at least
> >>> browsers realized you need to contain that completely).
> >> Sure, it's definitely a big chunk of work. I think the big win would be
> >> allowing memory allocation in dma-fence critical sections. But I
> >> completely buy the above argument. I just wanted to point out that many
> >> of the dma-fence restrictions are IMHO fixable, should we need to do
> >> that for whatever reason.
> > I'm still not sure that's possible, without preemption at least. We
> > have 4 edges:
> > - Kernel has internal dependencies among memory fences. We want that to
> > allow (mild) amounts of overcommit, since that simplifies life so
> > much.
> > - Memory fences can block gpu ctx execution (by nature of the memory
> > simply not being there yet due to our overcommit)
> > - gpu ctx have (if we allow this) userspace controlled semaphore
> > dependencies. Of course userspace is expected to not create deadlocks,
> > but that's only assuming the kernel doesn't inject additional
> > dependencies. Compute folks really want that.
> > - gpu ctx can hold up memory allocations if all we have is
> > end-of-batch fences. And end-of-batch fences are all we have without
> > preempt, plus if we want backwards compat with the entire current
> > winsys/compositor ecosystem we need them, which allows us to inject
> > stuff dependent upon them pretty much anywhere.
> >
> > Fundamentally that's not fixable without throwing one of the edges
> > (and the corresponding feature it enables) out, since no entity has
> > full visibility into what's going on. E.g. forcing userspace to tell
> > the kernel about all semaphores just brings us back to the
> > drm_timeline_syncobj design we have merged right now. And that's imo
> > no better.
>
> Indeed, HW waiting for semaphores without being able to preempt that
> wait is a no-go. The doc (perhaps naively) assumes nobody is doing that.

Preempt is a necessary but not sufficient condition; you also must not
have end-of-batch memory fences. And i915 has semaphore support and
end-of-batch memory fences, e.g. one piece is:

commit c4e8ba7390346a77ffe33ec3f210bc62e0b6c8c6
Author: Chris Wilson <chris@chris-wilson.co.uk>
Date:   Tue Apr 7 14:08:11 2020 +0100

    drm/i915/gt: Yield the timeslice if caught waiting on a user semaphore

Sure it preempts, but that's not enough.
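
To make that concrete, here's a minimal sketch of how the four edges
quoted above close into a cycle once end-of-batch memory fences and
userspace semaphores are combined. This is not taken from any real
driver: struct buffer, struct gpu_ctx, evict_buffer(), run_ctx(),
wait_user_semaphore() and execute_batch() are all hypothetical, only
dma_fence_wait() is the actual kernel helper.

#include <linux/dma-fence.h>

/* Edges 1+4: eviction (memory management) waits on the end-of-batch
 * fence of whoever still uses the buffer.  (hypothetical helper)
 */
static long evict_buffer(struct buffer *buf)
{
	return dma_fence_wait(buf->end_of_batch_fence, true);
}

/* Edges 2+3: the batch behind that fence is itself gated on a
 * userspace-controlled semaphore, which only signals once another
 * context makes progress - and that other context is stuck behind the
 * allocation waiting in evict_buffer() above.  Preempting the wait
 * doesn't help, because the end-of-batch fence still only signals
 * when this batch actually completes.  (hypothetical helpers)
 */
static void run_ctx(struct gpu_ctx *ctx)
{
	wait_user_semaphore(ctx);
	execute_batch(ctx);	/* signals buf->end_of_batch_fence */
}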

> > That's kinda why I'm not seeing much benefit in a half-way state:
> > Tons of work, and still not what userspace wants. And for the full
> > deal that userspace wants we might as well not change anything with
> > dma-fences. For that we need a) ctx preempt and b) new entirely
> > decoupled fences that never feed back into memory fences and c) are
> > controlled entirely by userspace. And c) is the really important thing
> > people want us to provide.
> >
> > And once we're ok with dma_fence == memory fences, then enforcing the
> > strict and painful memory allocation limitations is actually what we
> > want.
>
> Let's hope you're right. My fear is that that might be pretty painful as
> well.

Oh it's very painful too:
- We need a separate uapi flavour for gpu ctx with preempt instead of
end-of-batch dma-fence.
- Which needs to be implemented without breaking stuff badly - e.g. we
need to make sure we don't probe-wait on fences unnecessarily, since
that forces random unwanted preempts (see the sketch below).
- If we want this with winsys integration we need full userspace
revisions since all the dma_fence based sync sharing is out (implicit
sync on dma-buf, sync_file, drm_syncobj are all defunct since we can
only go the other way round).
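
As a tiny illustration of the probe-wait point above (a sketch only:
job_ready() is a hypothetical helper, dma_fence_is_signaled() is the
real kernel primitive): when code merely needs to know whether a
dependency has completed, it should test the fence instead of waiting
on it, so the producing context is never forced into a preempt just to
answer the question.

#include <linux/dma-fence.h>

/* Hypothetical readiness check for a job with one dependency. */
static bool job_ready(struct dma_fence *dep)
{
	/* Non-blocking probe: never stalls, and never forces a preempt
	 * of whatever context will eventually signal @dep.
	 */
	return !dep || dma_fence_is_signaled(dep);
}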

Utter pain, but I think it's better since it can be done
driver-by-driver, and even userspace usecase by usecase. Which means
we can experiment in areas where the 10+ years of uapi guarantee isn't
so painful, learn, until we do the big jump where the new
zero-interaction-with-memory-management fences get baked in forever
into compositor/winsys/modeset protocols. With the other approach of
splitting dma-fence we need to do all the splitting first, make sure
we get it right, and only then can we enable the use-case for real.

That's just not going to happen, at least not in upstream across all
drivers. Within a single driver in some vendor tree, hacking stuff up
is totally fine ofc.
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

2020-07-22 14:30                                     ` Thomas Hellström (Intel)
2020-07-22 14:30                                       ` Thomas Hellström (Intel)
2020-07-22 14:30                                       ` [Intel-gfx] " Thomas Hellström (Intel)
2020-07-22 14:30                                       ` Thomas Hellström (Intel)
2020-07-22 14:35                                       ` Christian König
2020-07-22 14:35                                         ` Christian König
2020-07-22 14:35                                         ` [Intel-gfx] " Christian König
2020-07-22 14:35                                         ` Christian König
2020-07-07 20:12 ` [PATCH 04/25] drm/vkms: Annotate vblank timer Daniel Vetter
2020-07-07 20:12   ` Daniel Vetter
2020-07-07 20:12   ` [Intel-gfx] " Daniel Vetter
2020-07-07 20:12   ` Daniel Vetter
2020-07-12 22:27   ` Rodrigo Siqueira
2020-07-12 22:27     ` Rodrigo Siqueira
2020-07-12 22:27     ` [Intel-gfx] " Rodrigo Siqueira
2020-07-12 22:27     ` Rodrigo Siqueira
2020-07-14  9:57     ` Melissa Wen
2020-07-14  9:57       ` Melissa Wen
2020-07-14  9:57       ` [Intel-gfx] " Melissa Wen
2020-07-14  9:57       ` Melissa Wen
2020-07-14  9:59       ` Daniel Vetter
2020-07-14  9:59         ` Daniel Vetter
2020-07-14  9:59         ` [Intel-gfx] " Daniel Vetter
2020-07-14  9:59         ` Daniel Vetter
2020-07-14 14:55         ` Melissa Wen
2020-07-14 14:55           ` Melissa Wen
2020-07-14 14:55           ` [Intel-gfx] " Melissa Wen
2020-07-14 14:55           ` Melissa Wen
2020-07-14 15:23           ` Daniel Vetter
2020-07-14 15:23             ` Daniel Vetter
2020-07-14 15:23             ` [Intel-gfx] " Daniel Vetter
2020-07-14 15:23             ` Daniel Vetter
2020-07-07 20:12 ` [PATCH 05/25] drm/vblank: Annotate with dma-fence signalling section Daniel Vetter
2020-07-07 20:12   ` Daniel Vetter
2020-07-07 20:12   ` [Intel-gfx] " Daniel Vetter
2020-07-07 20:12   ` Daniel Vetter
2020-07-07 20:12 ` [PATCH 06/25] drm/amdgpu: add dma-fence annotations to atomic commit path Daniel Vetter
2020-07-07 20:12   ` Daniel Vetter
2020-07-07 20:12   ` [Intel-gfx] " Daniel Vetter
2020-07-07 20:12   ` Daniel Vetter
2020-07-07 20:12 ` [PATCH 07/25] drm/komdea: Annotate dma-fence critical section in " Daniel Vetter
2020-07-07 20:12   ` [Intel-gfx] " Daniel Vetter
2020-07-07 20:12   ` Daniel Vetter
2020-07-08  5:17   ` james qian wang (Arm Technology China)
2020-07-08  5:17     ` [Intel-gfx] " james qian wang (Arm Technology China)
2020-07-08  5:17     ` james qian wang (Arm Technology China)
2020-07-14  8:34     ` Daniel Vetter
2020-07-14  8:34       ` [Intel-gfx] " Daniel Vetter
2020-07-14  8:34       ` Daniel Vetter
2020-07-07 20:12 ` [PATCH 08/25] drm/malidp: " Daniel Vetter
2020-07-07 20:12   ` [Intel-gfx] " Daniel Vetter
2020-07-07 20:12   ` Daniel Vetter
2020-07-15 12:53   ` Liviu Dudau
2020-07-15 12:53     ` [Intel-gfx] " Liviu Dudau
2020-07-15 12:53     ` Liviu Dudau
2020-07-15 13:51     ` Daniel Vetter
2020-07-15 13:51       ` [Intel-gfx] " Daniel Vetter
2020-07-15 13:51       ` Daniel Vetter
2020-07-07 20:12 ` [PATCH 09/25] drm/atmel: Use drm_atomic_helper_commit Daniel Vetter
2020-07-07 20:12   ` [Intel-gfx] " Daniel Vetter
2020-07-07 20:12   ` Daniel Vetter
2020-07-07 20:12   ` Daniel Vetter
2020-07-07 20:37   ` Sam Ravnborg
2020-07-07 20:37     ` [Intel-gfx] " Sam Ravnborg
2020-07-07 20:37     ` Sam Ravnborg
2020-07-07 20:37     ` Sam Ravnborg
2020-07-07 21:31   ` [PATCH] " Daniel Vetter
2020-07-07 21:31     ` [Intel-gfx] " Daniel Vetter
2020-07-07 21:31     ` Daniel Vetter
2020-07-07 21:31     ` Daniel Vetter
2020-07-14  9:55     ` Sam Ravnborg
2020-07-14  9:55       ` [Intel-gfx] " Sam Ravnborg
2020-07-14  9:55       ` Sam Ravnborg
2020-07-14  9:55       ` Sam Ravnborg
2020-07-07 20:12 ` [PATCH 10/25] drm/imx: Annotate dma-fence critical section in commit path Daniel Vetter
2020-07-07 20:12   ` [Intel-gfx] " Daniel Vetter
2020-07-07 20:12   ` Daniel Vetter
2020-07-07 20:12   ` Daniel Vetter
2020-07-07 20:12 ` [PATCH 11/25] drm/omapdrm: " Daniel Vetter
2020-07-07 20:12   ` [Intel-gfx] " Daniel Vetter
2020-07-07 20:12   ` Daniel Vetter
2020-07-07 20:12 ` [PATCH 12/25] drm/rcar-du: " Daniel Vetter
2020-07-07 20:12   ` [Intel-gfx] " Daniel Vetter
2020-07-07 20:12   ` Daniel Vetter
2020-07-07 23:32   ` Laurent Pinchart
2020-07-07 23:32     ` [Intel-gfx] " Laurent Pinchart
2020-07-07 23:32     ` Laurent Pinchart
2020-07-14  8:39     ` Daniel Vetter
2020-07-14  8:39       ` [Intel-gfx] " Daniel Vetter
2020-07-14  8:39       ` Daniel Vetter
     [not found] ` <20200707201229.472834-1-daniel.vetter-/w4YWyX8dFk@public.gmane.org>
2020-07-07 20:12   ` [PATCH 13/25] drm/tegra: " Daniel Vetter
2020-07-07 20:12     ` [Intel-gfx] " Daniel Vetter
2020-07-07 20:12     ` Daniel Vetter
2020-07-07 20:12     ` Daniel Vetter
2020-07-07 20:12 ` [PATCH 14/25] drm/tidss: " Daniel Vetter
2020-07-07 20:12   ` [Intel-gfx] " Daniel Vetter
2020-07-07 20:12   ` Daniel Vetter
2020-07-08  9:01   ` Jyri Sarha
2020-07-08  9:01     ` [Intel-gfx] " Jyri Sarha
2020-07-08  9:01     ` Jyri Sarha
2020-07-07 20:12 ` [PATCH 15/25] drm/tilcdc: Use standard drm_atomic_helper_commit Daniel Vetter
2020-07-07 20:12   ` [Intel-gfx] " Daniel Vetter
2020-07-07 20:12   ` Daniel Vetter
2020-07-08  9:17   ` Jyri Sarha
2020-07-08  9:17     ` [Intel-gfx] " Jyri Sarha
2020-07-08  9:17     ` Jyri Sarha
2020-07-08  9:27     ` Daniel Vetter
2020-07-08  9:27       ` [Intel-gfx] " Daniel Vetter
2020-07-08  9:27       ` Daniel Vetter
2020-07-08  9:44   ` [PATCH] " Daniel Vetter
2020-07-08  9:44     ` [Intel-gfx] " Daniel Vetter
2020-07-08  9:44     ` Daniel Vetter
2020-07-08 10:21     ` Jyri Sarha
2020-07-08 10:21       ` [Intel-gfx] " Jyri Sarha
2020-07-08 10:21       ` Jyri Sarha
2020-07-08 14:20   ` Daniel Vetter
2020-07-08 14:20     ` [Intel-gfx] " Daniel Vetter
2020-07-08 14:20     ` Daniel Vetter
2020-07-10 11:16     ` Jyri Sarha
2020-07-10 11:16       ` [Intel-gfx] " Jyri Sarha
2020-07-10 11:16       ` Jyri Sarha
2020-07-14  8:32       ` Daniel Vetter
2020-07-14  8:32         ` [Intel-gfx] " Daniel Vetter
2020-07-14  8:32         ` Daniel Vetter
2020-07-07 20:12 ` [PATCH 16/25] drm/atomic-helper: Add dma-fence annotations Daniel Vetter
2020-07-07 20:12   ` Daniel Vetter
2020-07-07 20:12   ` [Intel-gfx] " Daniel Vetter
2020-07-07 20:12   ` Daniel Vetter
2020-07-07 20:12 ` [PATCH 17/25] drm/scheduler: use dma-fence annotations in main thread Daniel Vetter
2020-07-07 20:12   ` Daniel Vetter
2020-07-07 20:12   ` [Intel-gfx] " Daniel Vetter
2020-07-07 20:12   ` Daniel Vetter
2020-07-07 20:12 ` [PATCH 18/25] drm/amdgpu: use dma-fence annotations in cs_submit() Daniel Vetter
2020-07-07 20:12   ` Daniel Vetter
2020-07-07 20:12   ` [Intel-gfx] " Daniel Vetter
2020-07-07 20:12   ` Daniel Vetter
2020-07-07 20:12 ` [PATCH 19/25] drm/amdgpu: s/GFP_KERNEL/GFP_ATOMIC in scheduler code Daniel Vetter
2020-07-07 20:12   ` Daniel Vetter
2020-07-07 20:12   ` [Intel-gfx] " Daniel Vetter
2020-07-07 20:12   ` Daniel Vetter
2020-07-14 10:49   ` Daniel Vetter
2020-07-14 10:49     ` Daniel Vetter
2020-07-14 10:49     ` [Intel-gfx] " Daniel Vetter
2020-07-14 10:49     ` Daniel Vetter
2020-07-14 11:40     ` Christian König
2020-07-14 11:40       ` Christian König
2020-07-14 11:40       ` [Intel-gfx] " Christian König
2020-07-14 11:40       ` Christian König
2020-07-14 14:31       ` Daniel Vetter
2020-07-14 14:31         ` Daniel Vetter
2020-07-14 14:31         ` [Intel-gfx] " Daniel Vetter
2020-07-14 14:31         ` Daniel Vetter
2020-07-15  9:17         ` Christian König
2020-07-15  9:17           ` Christian König
2020-07-15  9:17           ` [Intel-gfx] " Christian König
2020-07-15  9:17           ` Christian König
2020-07-15 11:53           ` Daniel Vetter
2020-07-15 11:53             ` Daniel Vetter
2020-07-15 11:53             ` [Intel-gfx] " Daniel Vetter
2020-07-15 11:53             ` Daniel Vetter
2020-07-07 20:12 ` [PATCH 20/25] drm/amdgpu: DC also loves to allocate stuff where it shouldn't Daniel Vetter
2020-07-07 20:12   ` Daniel Vetter
2020-07-07 20:12   ` [Intel-gfx] " Daniel Vetter
2020-07-07 20:12   ` Daniel Vetter
2020-07-14 11:12   ` Daniel Vetter
2020-07-14 11:12     ` Daniel Vetter
2020-07-14 11:12     ` [Intel-gfx] " Daniel Vetter
2020-07-14 11:12     ` Daniel Vetter
2020-07-07 20:12 ` [PATCH 21/25] drm/amdgpu/dc: Stop dma_resv_lock inversion in commit_tail Daniel Vetter
2020-07-07 20:12   ` Daniel Vetter
2020-07-07 20:12   ` [Intel-gfx] " Daniel Vetter
2020-07-07 20:12   ` Daniel Vetter
2020-07-07 20:12 ` [PATCH 22/25] drm/scheduler: use dma-fence annotations in tdr work Daniel Vetter
2020-07-07 20:12   ` Daniel Vetter
2020-07-07 20:12   ` [Intel-gfx] " Daniel Vetter
2020-07-07 20:12   ` Daniel Vetter
2020-07-07 20:12 ` [PATCH 23/25] drm/amdgpu: use dma-fence annotations for gpu reset code Daniel Vetter
2020-07-07 20:12   ` Daniel Vetter
2020-07-07 20:12   ` [Intel-gfx] " Daniel Vetter
2020-07-07 20:12   ` Daniel Vetter
2020-07-07 20:12 ` [PATCH 24/25] Revert "drm/amdgpu: add fbdev suspend/resume on gpu reset" Daniel Vetter
2020-07-07 20:12   ` Daniel Vetter
2020-07-07 20:12   ` [Intel-gfx] " Daniel Vetter
2020-07-07 20:12   ` Daniel Vetter
2020-07-07 20:12 ` [PATCH 25/25] drm/amdgpu: gpu recovery does full modesets Daniel Vetter
2020-07-07 20:12   ` Daniel Vetter
2020-07-07 20:12   ` [Intel-gfx] " Daniel Vetter
2020-07-07 20:12   ` Daniel Vetter
2020-07-07 22:10 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for dma-fence annotations, round 3 (rev2) Patchwork
2020-07-07 22:12 ` [Intel-gfx] ✗ Fi.CI.SPARSE: " Patchwork
2020-07-07 22:32 ` [Intel-gfx] ✓ Fi.CI.BAT: success " Patchwork
2020-07-08  0:59 ` [Intel-gfx] ✗ Fi.CI.IGT: failure " Patchwork
2020-07-08 11:13 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for dma-fence annotations, round 3 (rev3) Patchwork
2020-07-08 11:14 ` [Intel-gfx] ✗ Fi.CI.SPARSE: " Patchwork
2020-07-08 11:37 ` [Intel-gfx] ✗ Fi.CI.BAT: failure " Patchwork
2020-07-08 14:53 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for dma-fence annotations, round 3 (rev4) Patchwork
2020-07-08 14:55 ` [Intel-gfx] ✗ Fi.CI.SPARSE: " Patchwork
2020-07-08 15:15 ` [Intel-gfx] ✓ Fi.CI.BAT: success " Patchwork
2020-07-08 18:46 ` [Intel-gfx] ✗ Fi.CI.IGT: failure " Patchwork
2020-07-09 13:05 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for dma-fence annotations, round 3 (rev6) Patchwork
2020-07-09 13:06 ` [Intel-gfx] ✗ Fi.CI.SPARSE: " Patchwork
2020-07-09 13:26 ` [Intel-gfx] ✓ Fi.CI.BAT: success " Patchwork
2020-07-09 15:38 ` [Intel-gfx] ✓ Fi.CI.IGT: " Patchwork

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save this message as an mbox file (via the raw/mbox download link on the
  archive page), import it into your mail client, and reply-to-all from
  there; a command-line sketch of this method follows after these
  instructions.

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=CAKMK7uH0rcyepP2hDpNB-yuvNyjee1tPmxWUyefS5j7i-N6Pfw@mail.gmail.com \
    --to=daniel.vetter@ffwll.ch \
    --cc=Felix.Kuehling@amd.com \
    --cc=airlied@gmail.com \
    --cc=amd-gfx@lists.freedesktop.org \
    --cc=christian.koenig@amd.com \
    --cc=daniel.vetter@intel.com \
    --cc=daniels@collabora.com \
    --cc=dri-devel@lists.freedesktop.org \
    --cc=intel-gfx@lists.freedesktop.org \
    --cc=jason@jlekstrand.net \
    --cc=jenatali@microsoft.com \
    --cc=linaro-mm-sig@lists.linaro.org \
    --cc=linux-media@vger.kernel.org \
    --cc=linux-rdma@vger.kernel.org \
    --cc=maarten.lankhorst@linux.intel.com \
    --cc=mika.kuoppala@intel.com \
    --cc=spronovo@microsoft.com \
    --cc=thomas.hellstrom@intel.com \
    --cc=thomas_os@shipmail.org \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link on the archive page.

Be sure your reply has a Subject: header at the top and a blank line
before the message body.
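
As a concrete sketch of the first (mbox) method above, assuming the usual
lore.kernel.org URL layout and curl/mutt as the local tooling (both are
assumptions, adjust to your own setup):

  # fetch the raw message for this reply from the public-inbox archive
  curl -fLo reply.mbox \
    'https://lore.kernel.org/all/CAKMK7uH0rcyepP2hDpNB-yuvNyjee1tPmxWUyefS5j7i-N6Pfw@mail.gmail.com/raw'

  # open the mbox in a mail client and reply-to-all from there,
  # keeping interleaved quoting
  mutt -f reply.mbox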