From: Jerome Glisse <jglisse@redhat.com>
To: Felix Kuehling <felix.kuehling@amd.com>
Cc: "Alex Deucher" <alexdeucher@gmail.com>,
	linux-rdma <linux-rdma@vger.kernel.org>,
	"Thomas Hellström (Intel)" <thomas_os@shipmail.org>,
	"Maarten Lankhorst" <maarten.lankhorst@linux.intel.com>,
	LKML <linux-kernel@vger.kernel.org>,
	"DRI Development" <dri-devel@lists.freedesktop.org>,
	"moderated list:DMA BUFFER SHARING FRAMEWORK"
	<linaro-mm-sig@lists.linaro.org>,
	"Jason Gunthorpe" <jgg@ziepe.ca>,
	"Thomas Hellstrom" <thomas.hellstrom@intel.com>,
	"amd-gfx list" <amd-gfx@lists.freedesktop.org>,
	"Daniel Vetter" <daniel@ffwll.ch>,
	"Daniel Vetter" <daniel.vetter@intel.com>,
	"open list:DMA BUFFER SHARING FRAMEWORK"
	<linux-media@vger.kernel.org>,
	"Intel Graphics Development" <intel-gfx@lists.freedesktop.org>,
	"Christian König" <christian.koenig@amd.com>,
	"Mika Kuoppala" <mika.kuoppala@intel.com>
Subject: Re: [Linaro-mm-sig] [PATCH 04/18] dma-fence: prime lockdep annotations
Date: Fri, 19 Jun 2020 15:40:56 -0400	[thread overview]
Message-ID: <20200619194056.GA13117@redhat.com> (raw)
In-Reply-To: <86f7f5e5-81a0-5429-5a6e-0d3b0860cfae@amd.com>

On Fri, Jun 19, 2020 at 03:30:32PM -0400, Felix Kuehling wrote:
> 
> On 2020-06-19 at 3:11 p.m., Alex Deucher wrote:
> > On Fri, Jun 19, 2020 at 2:09 PM Jerome Glisse <jglisse@redhat.com> wrote:
> >> On Fri, Jun 19, 2020 at 02:23:08PM -0300, Jason Gunthorpe wrote:
> >>> On Fri, Jun 19, 2020 at 06:19:41PM +0200, Daniel Vetter wrote:
> >>>
> >>>> The madness is only that device B's mmu notifier might need to wait
> >>>> for fence_B so that the dma operation finishes, which in turn has to
> >>>> wait for device A to finish first.
> >>> So it sounds like, fundamentally, you've got this graph of operations
> >>> across an unknown set of drivers, and the kernel cannot insert itself
> >>> into dma_fence hand-offs to re-validate any of the buffers involved?
> >>> Buffers which by definition cannot be touched by the hardware yet.
> >>>
> >>> That really is a pretty horrible place to end up..
> >>>
> >>> Pinning really is the right answer for this kind of workflow. I think
> >>> converting pinning to notifiers should not be done unless notifier
> >>> invalidation is relatively bounded.
> >>>
> >>> I know people like notifiers because they give a bit nicer performance
> >>> in some happy cases, but this cripples all the bad cases..
> >>>
> >>> If pinning doesn't work for some reason maybe we should address that?
> >> Note that the dma fence issue is only true for user ptr buffers, which
> >> predate any HMM work and thus were using mmu notifiers already. You
> >> need the mmu notifier there because of fork and other corner cases.
> >>
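For reference, the synchronous userptr pattern looks roughly like the
sketch below. The mmu_interval_notifier callback, mmu_interval_set_seq()
and the dma_resv wait are the real kernel APIs; struct userptr_bo, its
lock and its resv pointer are made up for illustration:

static bool userptr_invalidate(struct mmu_interval_notifier *mni,
                               const struct mmu_notifier_range *range,
                               unsigned long cur_seq)
{
        struct userptr_bo *bo = container_of(mni, struct userptr_bo,
                                             notifier);

        /* fork(), munmap() and other corner cases all funnel in here */
        if (!mmu_notifier_range_blockable(range))
                return false;

        mutex_lock(&bo->lock);
        /* mark the cached page list stale for the next job submission */
        mmu_interval_set_seq(mni, cur_seq);
        /* synchronous case: block until the device is done with the pages */
        dma_resv_wait_timeout_rcu(bo->resv, true, false,
                                  MAX_SCHEDULE_TIMEOUT);
        mutex_unlock(&bo->lock);
        return true;
}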
> >> For nouveau the notifier does not need to wait for anything; it can
> >> update the GPU page table right away. Modulo needing to write to GPU
> >> memory using a dma engine if the GPU page table is in GPU memory that
> >> is not accessible from the CPU, but that's never been the case for
> >> nouveau so far (though I expect it will be at some point).
> >>
> >>
> >> So I see this as 2 different cases: the user ptr case (which does pin
> >> pages, by the way), where things are synchronous, versus the HMM case,
> >> where everything is asynchronous.
> >>
> >>
> >> I probably need to warn AMD folks again that using HMM means that you
> >> must be able to update the GPU page table asynchronously, without a
> >> fence wait. The issue for AMD is that they already update their GPU
> >> page table using the DMA engine. I believe this is still doable if they
> >> use a kernel-only DMA engine context, where only the kernel can queue
> >> up jobs, so that you do not need to wait for unrelated things and you
> >> can prioritize GPU page table updates, which should translate into fast
> >> GPU page table updates without DMA fences.
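A minimal sketch of such a kernel-only context on top of the DRM
scheduler; pt_sched below stands in for the device's SDMA scheduler
instance, the rest is the stock drm_sched API. Entities created with
DRM_SCHED_PRIORITY_KERNEL are picked ahead of user entities, so page
table jobs never queue behind user work:

struct drm_sched_entity pt_entity;
struct drm_gpu_scheduler *sched_list[] = { pt_sched };
int r;

/* kernel-private entity: userspace has no way to queue jobs on it */
r = drm_sched_entity_init(&pt_entity, DRM_SCHED_PRIORITY_KERNEL,
                          sched_list, 1, NULL);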
> > All devices which support recoverable page faults also have a
> > dedicated paging engine for the kernel driver which the driver already
> > makes use of.  We can also update the GPU page tables with the CPU.
> 
> We have a potential problem with the CPU updating page tables while the
> GPU is retrying on page table entries, because 64-bit CPU transactions
> don't arrive in device memory atomically.
> 
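The hazard, and one conventional mitigation, sketched under the
assumption of a 64-bit PTE whose valid bit lives in the low 32-bit word:
make the entry invalid first and set the valid bit with the last 32-bit
store, so a retrying GPU only ever observes either an invalid entry
(fault and retry again) or a fully written one:

static void cpu_write_pte(u64 __iomem *pte, u64 addr_and_flags)
{
        u32 __iomem *lo = (u32 __iomem *)pte;

        /*
         * A single writeq() may be split into two 32-bit writes on the
         * way to device memory, so the GPU could observe the valid bit
         * together with a stale upper half; order the halves manually.
         */
        writel(0, lo);                                  /* clear valid bit */
        wmb();
        writel(upper_32_bits(addr_and_flags), lo + 1);  /* new upper half */
        wmb();
        writel(lower_32_bits(addr_and_flags), lo);      /* valid bit last */
}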
> We are using SDMA for page table updates. This currently goes through
> the DRM GPU scheduler to a special SDMA queue that's used by kernel mode
> only. But since it's based on the DRM GPU scheduler, we do use dma-fence
> to wait for completion.

Yeah, my worry is mostly that some cross dma-fence dependency leaks into
it, but that should never really happen; maybe there is a way to catch it
if it does and print a warning.
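One way to get exactly that warning is the annotation machinery this
series adds: wrap the code that must not depend on other fences in a
fence-signalling section, and lockdep then reports any dma_fence wait
that sneaks inside. The begin/end calls are from patch 03/18;
pt_update_submit() and struct pt_update are hypothetical stand-ins for
the driver's submit path:

static long pt_update_sync(struct amdgpu_vm *vm, struct pt_update *upd)
{
        struct dma_fence *fence;
        bool cookie;

        cookie = dma_fence_begin_signalling();
        /* anything blocking on a foreign fence in here is caught */
        fence = pt_update_submit(vm, upd);
        dma_fence_end_signalling(cookie);

        if (IS_ERR(fence))
                return PTR_ERR(fence);

        /* waiting for completion after the section is fine */
        dma_fence_wait(fence, false);
        dma_fence_put(fence);
        return 0;
}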

So yes, you can use dma fences, as long as they do not have
cross-dependencies. Another expectation is that they complete quickly,
and page table updates usually do.

Cheers,
Jérôme

