From: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
To: Intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org
Cc: "Rob Clark" <robdclark@chromium.org>,
	"Brian Welty" <brian.welty@intel.com>,
	Kenny.Ho@amd.com, "Tvrtko Ursulin" <tvrtko.ursulin@intel.com>,
	"Daniel Vetter" <daniel.vetter@ffwll.ch>,
	"Eero Tamminen" <eero.t.tamminen@intel.com>,
	"Johannes Weiner" <hannes@cmpxchg.org>,
	linux-kernel@vger.kernel.org,
	"Stéphane Marchesin" <marcheu@chromium.org>,
	"Christian König" <christian.koenig@amd.com>,
	"Zefan Li" <lizefan.x@bytedance.com>,
	"Dave Airlie" <airlied@redhat.com>, "Tejun Heo" <tj@kernel.org>,
	cgroups@vger.kernel.org, "T . J . Mercier" <tjmercier@google.com>
Subject: [PATCH 15/17] cgroup/drm: Expose GPU utilisation
Date: Wed, 12 Jul 2023 12:46:03 +0100
Message-ID: <20230712114605.519432-16-tvrtko.ursulin@linux.intel.com>
In-Reply-To: <20230712114605.519432-1-tvrtko.ursulin@linux.intel.com>

From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>

To support container use cases where external orchestrators want to make
deployment and migration decisions based on GPU load and capacity, we can
expose the GPU load as seen by the controller in a new drm.active_us
field. This field contains the monotonic cumulative time the cgroup has
spent executing GPU workloads, as reported by the DRM drivers used by
group members.

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: Eero Tamminen <eero.t.tamminen@intel.com>
---
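A note for users: since drm.active_us is monotonic, utilisation over a
sampling window is simply the delta between two reads of the file divided
by the window length. A minimal userspace sketch follows; the mount point
and group name are illustrative assumptions, not part of this patch:

#include <stdio.h>
#include <unistd.h>

static unsigned long long read_active_us(const char *path)
{
	unsigned long long val = 0;
	FILE *f = fopen(path, "r");

	if (f) {
		if (fscanf(f, "%llu", &val) != 1)
			val = 0; /* keep the sketch simple on parse failure */
		fclose(f);
	}

	return val;
}

int main(void)
{
	/* Hypothetical group path; the file name matches this patch. */
	const char *path = "/sys/fs/cgroup/mygroup/drm.active_us";
	const double window_us = 1e6; /* matches the sleep(1) below */
	unsigned long long before, after;

	before = read_active_us(path);
	sleep(1); /* one second sampling window */
	after = read_active_us(path);

	/*
	 * Delta of the monotonic counter over the window is the GPU busy
	 * time. Depending on how a driver aggregates time from multiple
	 * engines, the result can exceed 100%.
	 */
	printf("GPU utilisation: %.1f%%\n",
	       100.0 * (after - before) / window_us);

	return 0;
}
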
 Documentation/admin-guide/cgroup-v2.rst |  3 +++
 kernel/cgroup/drm.c                     | 26 ++++++++++++++++++++++++-
 2 files changed, 28 insertions(+), 1 deletion(-)

diff --git a/Documentation/admin-guide/cgroup-v2.rst b/Documentation/admin-guide/cgroup-v2.rst
index da350858c59f..bbe986366f4a 100644
--- a/Documentation/admin-guide/cgroup-v2.rst
+++ b/Documentation/admin-guide/cgroup-v2.rst
@@ -2445,6 +2445,9 @@ will be respected.
 DRM scheduling soft limits interface files
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
+  drm.active_us
+	GPU time used by the group, recursively including all child groups.
+
   drm.weight
 	Standard cgroup weight based control [1, 10000] used to configure the
 	relative distribution of GPU time between the sibling groups.
diff --git a/kernel/cgroup/drm.c b/kernel/cgroup/drm.c
index b244e3d828cc..7c20d4ebc634 100644
--- a/kernel/cgroup/drm.c
+++ b/kernel/cgroup/drm.c
@@ -25,6 +25,8 @@ struct drm_cgroup_state {
 	bool over;
 	bool over_budget;
 
+	u64 total_us;
+
 	u64 per_s_budget_us;
 	u64 prev_active_us;
 	u64 active_us;
@@ -117,6 +119,20 @@ drmcs_write_weight(struct cgroup_subsys_state *css, struct cftype *cftype,
 	return 0;
 }
 
+static u64
+drmcs_read_total_us(struct cgroup_subsys_state *css, struct cftype *cft)
+{
+	struct drm_cgroup_state *drmcs = css_to_drmcs(css);
+	u64 val;
+
+	/* The mutex is overkill unless the arch cannot atomically read a u64. */
+	mutex_lock(&drmcg_mutex);
+	val = drmcs->total_us;
+	mutex_unlock(&drmcg_mutex);
+
+	return val;
+}
+
 static bool __start_scanning(unsigned int period_us)
 {
 	struct drm_cgroup_state *root = &root_drmcs.drmcs;
@@ -169,11 +185,14 @@ static bool __start_scanning(unsigned int period_us)
 		parent = css_to_drmcs(node->parent);
 
 		active = drmcs_get_active_time_us(drmcs);
-		if (period_us && active > drmcs->prev_active_us)
+		if (period_us && active > drmcs->prev_active_us) {
 			drmcs->active_us += active - drmcs->prev_active_us;
+			drmcs->total_us += drmcs->active_us;
+		}
 		drmcs->prev_active_us = active;
 
 		parent->active_us += drmcs->active_us;
+		parent->total_us += drmcs->active_us;
 		parent->sum_children_weights += drmcs->weight;
 
 		css_put(node);
@@ -551,6 +570,11 @@ struct cftype files[] = {
 		.read_u64 = drmcs_read_weight,
 		.write_u64 = drmcs_write_weight,
 	},
+	{
+		.name = "active_us",
+		.flags = CFTYPE_NOT_ON_ROOT,
+		.read_u64 = drmcs_read_total_us,
+	},
 	{ } /* Zero entry terminates. */
 };
 
-- 
2.39.2
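
For reference, below is a compact userspace model of the accounting the
__start_scanning() hunk above adds. It is a sketch which assumes that
active_us is zeroed across the hierarchy at the start of each scan period
(that reset is outside the quoted hunk) and that groups are visited in
post-order, children before parents, as css_for_each_descendant_post()
guarantees:

#include <stdio.h>

struct group {
	struct group *parent;
	unsigned long long prev_active_us; /* last raw driver reading */
	unsigned long long active_us;      /* subtree busy time this period */
	unsigned long long total_us;       /* monotonic; what drm.active_us shows */
};

/* One post-order visit; "active" is the raw cumulative time from the driver. */
static void scan_one(struct group *g, unsigned long long active)
{
	if (active > g->prev_active_us) {
		g->active_us += active - g->prev_active_us;
		g->total_us += g->active_us;
	}
	g->prev_active_us = active;

	if (g->parent) {
		/* Children are visited first, so roll their time up the tree. */
		g->parent->active_us += g->active_us;
		g->parent->total_us += g->active_us;
	}
}

int main(void)
{
	struct group root = { 0 };
	struct group child = { .parent = &root };

	scan_one(&child, 500);	/* child's driver reports 500us of GPU time */
	scan_one(&root, 0);	/* the root group itself ran nothing */

	printf("child total_us=%llu root total_us=%llu\n",
	       child.total_us, root.total_us); /* both print 500 */

	return 0;
}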


Thread overview: 156+ messages

2023-07-12 11:45 [RFC v5 00/17] DRM cgroup controller with scheduling control and memory stats Tvrtko Ursulin
2023-07-12 11:45 ` [PATCH 01/17] drm/i915: Add ability for tracking buffer objects per client Tvrtko Ursulin
2023-07-12 11:45 ` [PATCH 02/17] drm/i915: Record which client owns a VM Tvrtko Ursulin
2023-07-12 11:45 ` [PATCH 03/17] drm/i915: Track page table backing store usage Tvrtko Ursulin
2023-07-12 11:45 ` [PATCH 04/17] drm/i915: Account ring buffer and context state storage Tvrtko Ursulin
2023-07-12 11:45 ` [PATCH 05/17] drm/i915: Implement fdinfo memory stats printing Tvrtko Ursulin
2023-07-12 11:45 ` [PATCH 06/17] drm: Update file owner during use Tvrtko Ursulin
2023-07-12 11:45 ` [PATCH 07/17] cgroup: Add the DRM cgroup controller Tvrtko Ursulin
2023-07-12 11:45 ` [PATCH 08/17] drm/cgroup: Track DRM clients per cgroup Tvrtko Ursulin
2023-07-21 22:14   ` Tejun Heo
2023-07-12 11:45 ` [PATCH 09/17] drm/cgroup: Add ability to query drm cgroup GPU time Tvrtko Ursulin
2023-07-12 11:45 ` [PATCH 10/17] drm/cgroup: Add over budget signalling callback Tvrtko Ursulin
2023-07-12 11:45 ` [PATCH 11/17] drm/cgroup: Only track clients which are providing drm_cgroup_ops Tvrtko Ursulin
2023-07-12 11:46 ` [PATCH 12/17] cgroup/drm: Introduce weight based drm cgroup control Tvrtko Ursulin
2023-07-21 22:17   ` Tejun Heo
2023-07-25 13:46     ` Tvrtko Ursulin
2023-07-12 11:46 ` [PATCH 13/17] drm/i915: Wire up with drm controller GPU time query Tvrtko Ursulin
2023-07-12 11:46 ` [PATCH 14/17] drm/i915: Implement cgroup controller over budget throttling Tvrtko Ursulin
2023-07-12 11:46 ` [PATCH 15/17] cgroup/drm: Expose GPU utilisation Tvrtko Ursulin [this message]
2023-07-21 22:19   ` Tejun Heo
2023-07-21 22:20     ` Tejun Heo
2023-07-25 14:08       ` Tvrtko Ursulin
2023-07-25 21:44         ` Tejun Heo
2023-07-12 11:46 ` [PATCH 16/17] cgroup/drm: Expose memory stats Tvrtko Ursulin
2023-07-21 22:21   ` Tejun Heo
2023-07-26 10:14     ` Maarten Lankhorst
2023-07-26 11:41       ` Tvrtko Ursulin
2023-07-27 11:54         ` Maarten Lankhorst
2023-07-27 17:08           ` Tvrtko Ursulin
2023-07-28 14:15             ` Tvrtko Ursulin
2023-07-26 19:44       ` Tejun Heo
2023-07-27 13:42         ` Maarten Lankhorst
2023-07-27 16:43           ` Tvrtko Ursulin
2023-07-26 16:44     ` Tvrtko Ursulin
2023-07-26 19:49       ` Tejun Heo
2023-07-12 11:46 ` [PATCH 17/17] drm/i915: Wire up to the drm cgroup memory stats Tvrtko Ursulin
2023-07-12 14:46 ` ✗ Fi.CI.BUILD: failure for DRM cgroup controller with scheduling control and memory stats Patchwork
2023-07-19 20:31 ` [RFC v5 00/17] DRM cgroup controller with scheduling control and memory stats T.J. Mercier
2023-07-20 10:55   ` Tvrtko Ursulin
2023-07-20 17:22     ` T.J. Mercier
