From: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
To: "Michal Koutný" <mkoutny@suse.com>
Cc: "Rob Clark" <robdclark@chromium.org>,
	Kenny.Ho@amd.com, "Dave Airlie" <airlied@redhat.com>,
	"Stéphane Marchesin" <marcheu@chromium.org>,
	"Daniel Vetter" <daniel.vetter@ffwll.ch>,
	Intel-gfx@lists.freedesktop.org,
	"Brian Welty" <brian.welty@intel.com>,
	linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org,
	"Christian König" <christian.koenig@amd.com>,
	"Tvrtko Ursulin" <tvrtko.ursulin@intel.com>,
	"Zefan Li" <lizefan.x@bytedance.com>,
	"Johannes Weiner" <hannes@cmpxchg.org>,
	"Tejun Heo" <tj@kernel.org>,
	cgroups@vger.kernel.org, "T . J . Mercier" <tjmercier@google.com>
Subject: Re: [RFC v3 00/12] DRM scheduling cgroup controller
Date: Fri, 27 Jan 2023 11:40:58 +0000	[thread overview]
Message-ID: <246fefac-6c1d-e4f1-c0df-721ca23ab56a@linux.intel.com> (raw)
In-Reply-To: <20230127100428.GA3527@blackbody.suse.cz>


On 27/01/2023 10:04, Michal Koutný wrote:
> On Thu, Jan 26, 2023 at 05:57:24PM +0000, Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> wrote:
>> So even if the RFC shows just a simple i915 implementation, the controller
>> itself shouldn't prevent a smarter approach (via exposed ABI).
> 
> scan/query + over budget notification is IMO limited in guarantees.

It is, yes. I tried to stress that it is not a hard guarantee in any 
shape or form, and that the "quality" of adhering to the allocated 
budget will depend on individual hw and sw capabilities.
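
To make concrete what scan/query plus notification means on the 
driver side, the interface is roughly the below. (A simplified sketch 
only; the member names are from memory and may not exactly match the 
patches.)

struct drm_cgroup_ops {
	/* Scan/query: report this client's accumulated GPU time. */
	u64 (*active_time_us)(struct drm_file *file);

	/* Over-budget notification: the client's group has been
	 * observed over its allocated budget; the driver throttles
	 * as well as its hw and sw capabilities allow. */
	void (*signal_budget)(struct drm_file *file, u64 usage,
			      u64 budget);
};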

But I believe it is the best approach given a) how much GPU drivers 
differ in scheduling capability and b) the simple fact that, unlike on 
the CPU side, there is no central scheduling entity.

It is just not possible to build a hard guarantee system. GPUs do not 
preempt as nicely and easily as CPUs do, and the frequency of context 
switches varies widely from very fast to "never", so, again, charging 
would have several problems to overcome which would IMHO make the 
whole setup pointless.

And not only do some GPUs not preempt nicely, some cannot do any of 
this at all. Even if we stay within the lineage of hardware supported 
by i915 alone, we have three distinct categories: 1) can't do any of 
this, 2a) can do fine-grained priority-based scheduling with 
reasonable preemption capability, 2b) ditto but without reasonable 
preemption capability, and 3) like 2a) and 2b) but with the scheduler 
in the firmware, currently supporting only coarse priority-based 
scheduling.
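
(If it helps, as a hypothetical capability level a driver might 
report - purely illustrative, no such enum exists in the patches:)

enum drm_sched_capability {
	DRM_SCHED_CAP_NONE,         /* 1) no scheduling control */
	DRM_SCHED_CAP_PRIO_PREEMPT, /* 2a) fine-grained priority with
	                             *     reasonable preemption */
	DRM_SCHED_CAP_PRIO_ONLY,    /* 2b) priority without reasonable
	                             *     preemption */
	DRM_SCHED_CAP_FW_COARSE,    /* 3) firmware scheduler, coarse
	                             *    priorities */
};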

Shall I also mention that a single cgroup can contain multiple GPU 
clients, all using different GPUs, each with a different mix of the 
challenges listed above?

The main point is, should someone prove me wrong and come up with a 
smarter way at some point in the future, "drm.weight" as an ABI 
remains compatible and the improvement can happen completely under 
the hood. In the meantime users get external control, and _some_ 
ability to improve the user experience in scenarios such as the ones 
I described yesterday.
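
(For illustration, userspace would drive the ABI like the other 
weight based controllers. Everything below is an assumption modelled 
on cpu.weight conventions - default of 100, made-up group name - and 
not something the patches promise.)

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	/* Assumed path; "clients" is a made-up group name. */
	const char *path = "/sys/fs/cgroup/clients/drm.weight";
	const char *val = "200"; /* twice the assumed default of 100 */
	int fd = open(path, O_WRONLY);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	if (write(fd, val, strlen(val)) < 0)
		perror("write");
	close(fd);
	return 0;
}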

>> [...]
>> Yes agreed, and to re-stress, the ABI as proposed does not preclude
>> changing from scanning to charging or whatever. The idea was for it to be
>> compatible in concept with the CPU controller and also avoid baking in the
>> controlling method to individual drivers.
>> [...]
> 
> But I submit to your point of rather not exposing this via cgroup API
> for possible future refinements.

Ack.

>> Secondly, doing this in userspace would require the ability to get some sort
>> of an atomic snapshot of the whole tree hierarchy to account for changes in
>> layout of the tree and task migrations. Or some retry logic with some added
>> ABI fields to enable it.
> 
> Note that the proposed implementation is susceptible to miscounting due to
> concurrent tree modifications and task migrations too (scanning may not
> converge under frequent cgroup layout modifications, and migrating tasks
> may be summed 0 or >1 times). While in-kernel implementation may assure
> the snapshot view, it'd come at cost. (Read: since the mechanism isn't
> precise anyway, I don't suggest a fully synchronized scanning.)

It is true that scanning may not converge in my _current 
implementation_, for instance if clients were constantly coming and 
going. There I took the shortcut of not bothering to accumulate usage 
on process/client exit, and instead just wait for two stable periods 
before looking at the current state. I reckon this is probably okay 
for the real world.
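
(A hypothetical sketch of that shortcut - the actual condition in the 
patches may differ - is to only trust scan results once the observed 
client population has been unchanged for two consecutive periods:)

struct drmcs_scan {
	unsigned int nr_clients;
	unsigned int prev[2]; /* counts from the last two periods */
};

static bool drmcs_scan_stable(struct drmcs_scan *s)
{
	bool stable = s->nr_clients == s->prev[0] &&
		      s->prev[0] == s->prev[1];

	/* Age the history for the next scanning period. */
	s->prev[1] = s->prev[0];
	s->prev[0] = s->nr_clients;

	return stable;
}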

Cgroup tree hierarchy modifications can also cause the scan not to 
converge, but I thought I could hand-wave that away as not a 
realistic scenario. Perhaps I am not imaginative enough?

I don't think under- or over-accounting of migrating tasks can 
happen, since I am handling that explicitly.
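
(Roughly along these lines - a sketch only, drmcs_transfer_usage() is 
a made-up helper and not the actual patch code:)

static void drmcs_attach(struct cgroup_taskset *tset)
{
	struct task_struct *task;
	struct cgroup_subsys_state *css;

	/* Credit each migrating task's accumulated GPU time to its
	 * destination group, so that a scan sums it exactly once. */
	cgroup_taskset_for_each(task, css, tset)
		drmcs_transfer_usage(task, css);
}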

Regards,

Tvrtko

Thread overview: 126+ messages
2023-01-12 16:55 [RFC v3 00/12] DRM scheduling cgroup controller Tvrtko Ursulin
2023-01-12 16:55 ` [RFC 01/12] drm: Track clients by tgid and not tid Tvrtko Ursulin
2023-01-12 16:55 ` [RFC 02/12] drm: Update file owner during use Tvrtko Ursulin
2023-01-12 16:56 ` [RFC 03/12] cgroup: Add the DRM cgroup controller Tvrtko Ursulin
2023-01-12 16:56 ` [RFC 04/12] drm/cgroup: Track clients per owning process Tvrtko Ursulin
2023-01-17 16:03   ` Stanislaw Gruszka
2023-01-17 16:25     ` Tvrtko Ursulin
2023-01-12 16:56 ` [RFC 05/12] drm/cgroup: Allow safe external access to file_priv Tvrtko Ursulin
2023-01-12 16:56 ` [RFC 06/12] drm/cgroup: Add ability to query drm cgroup GPU time Tvrtko Ursulin
2023-01-12 16:56 ` [RFC 07/12] drm/cgroup: Add over budget signalling callback Tvrtko Ursulin
2023-01-12 16:56 ` [RFC 08/12] drm/cgroup: Only track clients which are providing drm_cgroup_ops Tvrtko Ursulin
2023-01-12 16:56 ` [RFC 09/12] cgroup/drm: Client exit hook Tvrtko Ursulin
2023-01-12 16:56 ` [RFC 10/12] cgroup/drm: Introduce weight based drm cgroup control Tvrtko Ursulin
2023-01-14 21:20   ` kernel test robot
2023-01-27 13:01   ` Michal Koutný
2023-01-27 13:31     ` Tvrtko Ursulin
2023-01-27 14:11       ` Michal Koutný
2023-01-27 15:21         ` Tvrtko Ursulin
2023-01-28  1:11   ` Tejun Heo
2023-02-02 14:26     ` Tvrtko Ursulin
2023-02-02 20:00       ` Tejun Heo
2023-01-12 16:56 ` [RFC 11/12] drm/i915: Wire up with drm controller GPU time query Tvrtko Ursulin
2023-01-12 16:56 ` [RFC 12/12] drm/i915: Implement cgroup controller over budget throttling Tvrtko Ursulin
2023-01-12 17:40 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for DRM scheduling cgroup controller (rev3) Patchwork
2023-01-12 18:09 ` [Intel-gfx] ✓ Fi.CI.BAT: success " Patchwork
2023-01-13  6:52 ` [Intel-gfx] ✓ Fi.CI.IGT: " Patchwork
2023-01-23 15:42 ` [RFC v3 00/12] DRM scheduling cgroup controller Michal Koutný
2023-01-25 18:11   ` Tvrtko Ursulin
2023-01-26 13:00     ` Michal Koutný
2023-01-26 17:04       ` Tejun Heo
2023-01-26 17:57         ` Tvrtko Ursulin
2023-01-26 18:14           ` Tvrtko Ursulin
2023-01-27 10:04           ` Michal Koutný
2023-01-27 11:40             ` Tvrtko Ursulin [this message]
2023-01-27 13:00               ` Michal Koutný
