From: Kenny Ho <y2kenny@gmail.com>
To: Daniel Vetter <daniel@ffwll.ch>
Cc: "Kenny Ho" <Kenny.Ho@amd.com>,
	"Kuehling, Felix" <felix.kuehling@amd.com>,
	jsparks@cray.com, dri-devel <dri-devel@lists.freedesktop.org>,
	lkaplan@cray.com, "Alex Deucher" <alexander.deucher@amd.com>,
	"amd-gfx list" <amd-gfx@lists.freedesktop.org>,
	"Greathouse, Joseph" <joseph.greathouse@amd.com>,
	"Tejun Heo" <tj@kernel.org>,
	cgroups@vger.kernel.org,
	"Christian König" <christian.koenig@amd.com>
Subject: Re: [PATCH v2 00/11] new cgroup controller for gpu/drm subsystem
Date: Tue, 14 Apr 2020 09:50:44 -0400	[thread overview]
Message-ID: <CAOWid-dJckd8kV57MKNA_W83SN4OHnOGPURL7oOm-SqoYRLX=w@mail.gmail.com> (raw)
In-Reply-To: <CAKMK7uHwX9NbGb1ptnP=CAwxDayfM_z9kvFMMb=YiH+ynjNqKQ@mail.gmail.com>

Hi Daniel,

I appreciate much of your review so far, and I much prefer keeping
things technical, but that is very difficult to do when Intel
developers call my implementation the "most AMD-specific solution
possible" and object to it because their hardware cannot support it.
Can you help me with a more charitable interpretation of what has been
happening?

Perhaps the following questions can help keep the discussion technical:
1)  Is it possible to implement non-work-conserving distribution of
GPU resources without spatial sharing?  (If yes, I'd love to hear a
suggestion; if not, see question 2.  A rough sketch of what I mean by
non-work-conserving is below.)
2)  If spatial sharing is required to support GPU HPC use cases, what
would you implement if you had the hardware support today?
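To make that distinction concrete, here is a minimal userspace sketch.
The file names gpu.weight and gpu.max are purely hypothetical (they
just mirror the cpu controller's weight/max split and are not what my
series proposes); the point is only the difference between a share
that is redistributed while the GPU is idle and a hard cap that is
not:

	#include <fcntl.h>
	#include <string.h>
	#include <unistd.h>

	/* Write a value into a cgroup control file. */
	static int cg_write(const char *path, const char *val)
	{
		int fd = open(path, O_WRONLY);
		ssize_t ret;

		if (fd < 0)
			return -1;
		ret = write(fd, val, strlen(val));
		close(fd);
		return ret < 0 ? -1 : 0;
	}

	int main(void)
	{
		/* Work-conserving: the weight only matters under contention;
		 * idle GPU time is handed to whoever has work queued. */
		cg_write("/sys/fs/cgroup/job-a/gpu.weight", "100");

		/* Non-work-conserving: a hard cap; job-b never gets more
		 * than 20% of the device even when the rest sits idle. */
		cg_write("/sys/fs/cgroup/job-b/gpu.max", "20");

		return 0;
	}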

Regards,
Kenny

On Tue, Apr 14, 2020 at 9:26 AM Daniel Vetter <daniel@ffwll.ch> wrote:
>
> On Tue, Apr 14, 2020 at 3:14 PM Kenny Ho <y2kenny@gmail.com> wrote:
> >
> > Ok.  I was hoping you could clarify the contradiction between the
> > existence of the spec below and your "not something any other gpu can
> > reasonably support" statement.  I mean, OneAPI is Intel's spec, and
> > doesn't that at least make SubDevice support "reasonable" for one more
> > vendor?
> >
> > Partisanship aside, as a drm co-maintainer, do you really not see the
> > need for a non-work-conserving way of distributing the GPU as a
> > resource?  You recognized the latencies involved (although that's
> > really just part of the story... time sharing is never going to be
> > good enough even if your switching cost is zero).  As a drm
> > co-maintainer, are you suggesting GPUs have no place in the HPC use
> > case?
>
> So I did chat with people, and my understanding of how this subdevice
> stuff works is roughly, from least to most fine-grained support:
> - Not possible at all, the hw doesn't have any such support.
> - The hw is actually not a single gpu but a bunch of chips behind a
> magic bridge/interconnect, with a scheduler load-balancing work across
> them, and you can't actually run on all "cores" in parallel with one
> compute/3d job. So subdevices just give you some of these cores, but
> from the client api pov they're exactly as powerful as the full
> device. This kinda works like assigning an entire NUMA node, including
> all the cpu cores and memory bandwidth and everything.
> - The hw has multiple "engines" which share resources (like compute
> cores or whatever) behind the scenes. There's no real control over how
> this sharing works, and no guarantee about minimal execution
> resources. This kinda works like hyperthreading.
> - Then finally there's the CU mask thing amdgpu has, which works like
> what you're proposing, but only on amd hardware.
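> For reference, and purely from memory (so treat the exact uapi names
> as approximate): on the amdgpu/KFD side this is a per-queue ioctl that
> takes a bitmask of compute units, roughly like so:
>
> 	#include <stdint.h>
> 	#include <string.h>
> 	#include <sys/ioctl.h>
> 	#include <linux/kfd_ioctl.h>
>
> 	/* Restrict an already-created KFD queue to the CUs set in mask[].
> 	 * queue_id comes from a prior AMDKFD_IOC_CREATE_QUEUE and kfd_fd
> 	 * is an open fd on /dev/kfd. */
> 	static int set_cu_mask(int kfd_fd, uint32_t queue_id,
> 			       const uint32_t *mask, uint32_t num_bits)
> 	{
> 		struct kfd_ioctl_set_cu_mask_args args;
>
> 		memset(&args, 0, sizeof(args));
> 		args.queue_id = queue_id;
> 		args.num_cu_mask = num_bits;
> 		args.cu_mask_ptr = (uintptr_t)mask;
>
> 		return ioctl(kfd_fd, AMDKFD_IOC_SET_CU_MASK, &args);
> 	}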
>
> So this isn't something I think we should standardize in a resource
> management framework like cgroups, because it's a complete mess. Note
> that _all_ of the above (including the "no subdevices" one) are valid
> implementations of "subdevices" in the various specs.
>
> Now, on your question of "why was this added to various standards?"
> (opencl has it too, and the rocm thing, and seemingly everything
> else): what I heard is that a few people pushed really hard and no one
> objected hard enough (because not having subdevices is a
> standards-compliant implementation), so that's why it happened. Just
> because it's in various standards doesn't mean that a) it's actually
> standardized in a useful fashion or b) it's something we should just
> blindly adopt.
>
> Also, where exactly did you get the idea that I'm against gpus in HPC
> use cases?  Approaching this in a slightly less tribal way would
> really, really help to get something landed (which I'd like to see
> happen, personally). Always spinning this as an Intel vs AMD thing,
> like you do here with every reply, really doesn't help move this
> forward.
>
> So yeah, stricter isolation is something customers want; it's just
> not something we can really provide right now at any level below the
> whole device.
> -Daniel
>
> >
> > Regards,
> > Kenny
> >
> > On Tue, Apr 14, 2020 at 8:52 AM Daniel Vetter <daniel@ffwll.ch> wrote:
> > >
> > > On Tue, Apr 14, 2020 at 2:47 PM Kenny Ho <y2kenny@gmail.com> wrote:
> > > > On Tue, Apr 14, 2020 at 8:20 AM Daniel Vetter <daniel@ffwll.ch> wrote:
> > > > > My understanding from talking with a few other folks is that
> > > > > the cpumask-style CU-weight thing is not something any other gpu can
> > > > > reasonably support (and we have about 6+ of those in-tree)
> > > >
> > > > How does Intel plan to support the SubDevice API as described in your
> > > > own spec here:
> > > > https://spec.oneapi.com/versions/0.7/oneL0/core/INTRO.html#subdevice-support
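> > > > For concreteness, subdevice enumeration in that spec is a plain
> > > > API call.  Assuming the names in the current Level Zero headers
> > > > (which may differ slightly from the 0.7 draft), it looks roughly
> > > > like this:
> > > >
> > > > 	#include <stdint.h>
> > > > 	#include <level_zero/ze_api.h>
> > > >
> > > > 	/* Query how many subdevices an already-obtained device handle
> > > > 	 * exposes; a second call with a buffer returns the subdevice
> > > > 	 * handles themselves. */
> > > > 	static uint32_t count_subdevices(ze_device_handle_t dev)
> > > > 	{
> > > > 		uint32_t n = 0;
> > > >
> > > > 		zeDeviceGetSubDevices(dev, &n, NULL);
> > > > 		return n;
> > > > 	}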
> > >
> > > I can't talk about whether future products might or might not support
> > > stuff and in what form exactly they might support stuff or not support
> > > stuff. Or why exactly that's even in the spec there or not.
> > >
> > > Geez
> > > -Daniel
> > > --
> > > Daniel Vetter
> > > Software Engineer, Intel Corporation
> > > +41 (0) 79 365 57 48 - http://blog.ffwll.ch
>
>
>
> --
> Daniel Vetter
> Software Engineer, Intel Corporation
> +41 (0) 79 365 57 48 - http://blog.ffwll.ch
