From: Tejun Heo <tj@kernel.org>
To: "Michal Koutný" <mkoutny@suse.com>
Cc: "Tvrtko Ursulin" <tvrtko.ursulin@linux.intel.com>,
	Intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org,
	cgroups@vger.kernel.org, linux-kernel@vger.kernel.org,
	"Johannes Weiner" <hannes@cmpxchg.org>,
	"Zefan Li" <lizefan.x@bytedance.com>,
	"Dave Airlie" <airlied@redhat.com>,
	"Daniel Vetter" <daniel.vetter@ffwll.ch>,
	"Rob Clark" <robdclark@chromium.org>,
	"Stéphane Marchesin" <marcheu@chromium.org>,
	"T . J . Mercier" <tjmercier@google.com>,
	Kenny.Ho@amd.com, "Christian König" <christian.koenig@amd.com>,
	"Brian Welty" <brian.welty@intel.com>,
	"Tvrtko Ursulin" <tvrtko.ursulin@intel.com>
Subject: Re: [RFC v3 00/12] DRM scheduling cgroup controller
Date: Thu, 26 Jan 2023 07:04:08 -1000	[thread overview]
Message-ID: <Y9KyiCPYj2Mzym3Z@slm.duckdns.org> (raw)
In-Reply-To: <20230126130050.GA22442@blackbody.suse.cz>

Hello,

On Thu, Jan 26, 2023 at 02:00:50PM +0100, Michal Koutný wrote:
> On Wed, Jan 25, 2023 at 06:11:35PM +0000, Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> wrote:
> > I don't immediately see how you envisage the half-userspace implementation
> > would look like in terms of what functionality/new APIs would be provided by
> > the kernel?
> 
> Output:
> 	drm.stat (with consumed time(s))
> 
> Input:
> 	drm.throttle (alternatives)
> 	- a) writing 0,1 (in rough analogy to your proposed
> 	     notifications)
> 	- b) writing duration (in loose analogy to memory.reclaim)
> 	     - for how long GPU work should be backed off
> 
> A userspace agent would sit between these two; it would do the
> measurement and calculation according to the given policies
> (weighting, throttling) and apply the respective controls.
> 
> (Resembling e.g. https://denji.github.io/cpulimit/)

Yeah, things like this can be done from userspace but if we're gonna build
the infrastructure to allow that in gpu drivers and so on, I don't see why
we wouldn't add a generic in-kernel control layer if we can implement
proper weight-based control. We can of course also expose a .max style
interface to allow userspace to do whatever they wanna do with it.
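
Just to make the userspace variant concrete, a rough sketch of such an
agent could look something like the below. drm.stat and drm.throttle are
the hypothetical interfaces from Michal's proposal above (nothing like
them exists today), and the cgroup path, the "usage_usec" stat format and
the fixed ~50% threshold policy are all made up for illustration:

/*
 * Toy "drm limiter" agent, purely illustrative.  Assumes the
 * hypothetical drm.stat / drm.throttle files proposed above, with
 * drm.stat exposing a single "usage_usec <n>" line and drm.throttle
 * accepting a back-off duration (variant b).
 */
#include <stdio.h>
#include <unistd.h>

#define CGROUP_DIR	"/sys/fs/cgroup/gpujob"	/* example cgroup */
#define PERIOD_US	100000ULL		/* 100ms sampling period */
#define BUDGET_US	50000ULL		/* allow ~50% GPU time */

static unsigned long long read_usage_us(void)
{
	unsigned long long usage = 0;
	FILE *f = fopen(CGROUP_DIR "/drm.stat", "r");

	if (f) {
		if (fscanf(f, "usage_usec %llu", &usage) != 1)
			usage = 0;
		fclose(f);
	}
	return usage;
}

static void throttle_for_us(unsigned long long us)
{
	FILE *f = fopen(CGROUP_DIR "/drm.throttle", "w");

	if (f) {
		fprintf(f, "%llu\n", us);
		fclose(f);
	}
}

int main(void)
{
	unsigned long long prev = read_usage_us();

	for (;;) {
		unsigned long long cur, delta;

		usleep(PERIOD_US);
		cur = read_usage_us();
		delta = cur - prev;
		prev = cur;

		/* over budget in this period -> ask the kernel to back off */
		if (delta > BUDGET_US)
			throttle_for_us(delta - BUDGET_US);
	}
	return 0;
}

A real cpulimit-style agent would obviously need per-device handling,
policy configuration and so on, but the shape would be the same.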

> > The problem there is to find a suitable point to charge at. If for a moment
> > we limit the discussion to i915, out of the box we could have charging
> > happening anywhere from several thousand times per second to effectively
> > never. This is to illustrate the GPU context execution dynamics, which range
> > from many small packets of work to multi-minute, or longer. For the latter to
> > be accounted for we'd still need some periodic scanning, which would then
> > perhaps go per driver. For the former we'd have thousands of needless updates
> > per second.
> > 
> > Hence my thinking was to pay the cost of both accounting and collecting the
> > usage data once per actionable event, where the latter is controlled by some
> > reasonable scanning period/frequency.
> > 
> > In addition to that, a few DRM drivers already support GPU usage querying
> > via fdinfo, so with that being externally triggered, it is next to trivial
> > to wire all those DRM drivers into such a common DRM cgroup controller
> > framework. All that each driver needs to implement on top is the "over
> > budget" callback.
> 
> I'd also like to show a comparison with the CPU accounting and controller.
> There is tick-based (~sampling) measurement of the various components of CPU
> time (task_group_account_field()), but the actual scheduling (weights)
> or throttling is based on precise accounting (update_curr()).
> 
> So, if the goal is to have precise and guaranteed limits, it shouldn't
> (cannot) be based on sampling. OTOH, if it must be sampling based due to
> the variability of the device landscape, it could be an advisory mechanism
> with a userspace component.

As for the specific control mechanism, yeah, a charge-based interface would
be more conventional and my suspicion is that transposing the current
implementation that way likely isn't too difficult. It just pushes "am I
over the limit?" decisions to the specific drivers with the core layer
telling them how much under/over budget they are. I'm curious what other gpu
driver folks think about the current RFC tho. Is at least AMD on board with
the approach?
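
To spell out what I mean by the core layer doing the arithmetic and only
handing the result to the drivers, here is a rough sketch. The names and
the exact split are invented for illustration and don't match the RFC's
actual drm_cgroup_ops:

/*
 * Illustrative only -- made-up names and userspace-style types.  The
 * point is the division of labor: the core layer turns weights into a
 * time budget once per scanning period and the driver just gets told
 * the resulting delta.
 */
#include <stdint.h>

/* driver-provided hook: delta_us > 0 means over budget, < 0 under */
typedef void (*budget_delta_fn)(void *driver_data, int64_t delta_us);

/* weight -> time budget for one cgroup within one scanning period */
int64_t budget_delta_us(uint64_t period_us, uint64_t cg_weight,
			uint64_t total_weight, uint64_t used_us)
{
	uint64_t budget_us = period_us * cg_weight / total_weight;

	return (int64_t)used_us - (int64_t)budget_us;
}

/* core-side scan step for one cgroup: compute the delta and hand it off */
void scan_one(budget_delta_fn cb, void *driver_data, uint64_t period_us,
	      uint64_t cg_weight, uint64_t total_weight, uint64_t used_us)
{
	cb(driver_data, budget_delta_us(period_us, cg_weight,
					total_weight, used_us));
}

For example, a 100ms period with weight 100 out of a total of 400 gives a
25ms budget; a cgroup which consumed 40ms gets told it is 15ms over and
the driver decides whether and how to de-prioritize its contexts.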

Thanks.

-- 
tejun

Thread overview:

2023-01-12 16:55 [Intel-gfx] [RFC v3 00/12] DRM scheduling cgroup controller Tvrtko Ursulin
2023-01-12 16:55 ` [RFC 01/12] drm: Track clients by tgid and not tid Tvrtko Ursulin
2023-01-12 16:55 ` [RFC 02/12] drm: Update file owner during use Tvrtko Ursulin
2023-01-12 16:56 ` [RFC 03/12] cgroup: Add the DRM cgroup controller Tvrtko Ursulin
2023-01-12 16:56 ` [RFC 04/12] drm/cgroup: Track clients per owning process Tvrtko Ursulin
2023-01-17 16:03   ` Stanislaw Gruszka
2023-01-17 16:25     ` Tvrtko Ursulin
2023-01-12 16:56 ` [RFC 05/12] drm/cgroup: Allow safe external access to file_priv Tvrtko Ursulin
2023-01-12 16:56 ` [RFC 06/12] drm/cgroup: Add ability to query drm cgroup GPU time Tvrtko Ursulin
2023-01-12 16:56 ` [RFC 07/12] drm/cgroup: Add over budget signalling callback Tvrtko Ursulin
2023-01-12 16:56 ` [RFC 08/12] drm/cgroup: Only track clients which are providing drm_cgroup_ops Tvrtko Ursulin
2023-01-12 16:56 ` [RFC 09/12] cgroup/drm: Client exit hook Tvrtko Ursulin
2023-01-12 16:56 ` [RFC 10/12] cgroup/drm: Introduce weight based drm cgroup control Tvrtko Ursulin
2023-01-14 21:20   ` kernel test robot
2023-01-27 13:01   ` Michal Koutný
2023-01-27 13:31     ` Tvrtko Ursulin
2023-01-27 14:11       ` Michal Koutný
2023-01-27 15:21         ` Tvrtko Ursulin
2023-01-28  1:11   ` Tejun Heo
2023-02-02 14:26     ` Tvrtko Ursulin
2023-02-02 20:00       ` Tejun Heo
2023-01-12 16:56 ` [RFC 11/12] drm/i915: Wire up with drm controller GPU time query Tvrtko Ursulin
2023-01-12 16:56 ` [RFC 12/12] drm/i915: Implement cgroup controller over budget throttling Tvrtko Ursulin
2023-01-12 17:40 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for DRM scheduling cgroup controller (rev3) Patchwork
2023-01-12 18:09 ` [Intel-gfx] ✓ Fi.CI.BAT: success " Patchwork
2023-01-13  6:52 ` [Intel-gfx] ✓ Fi.CI.IGT: " Patchwork
2023-01-23 15:42 ` [RFC v3 00/12] DRM scheduling cgroup controller Michal Koutný
2023-01-25 18:11   ` Tvrtko Ursulin
2023-01-26 13:00     ` Michal Koutný
2023-01-26 17:04       ` Tejun Heo [this message]
2023-01-26 17:57         ` Tvrtko Ursulin
2023-01-26 18:14           ` Tvrtko Ursulin
2023-01-27 10:04           ` Michal Koutný
2023-01-27 11:40             ` Tvrtko Ursulin
2023-01-27 13:00               ` Michal Koutný