All of lore.kernel.org
 help / color / mirror / Atom feed
* [RFC PATCH] drm/ttm: Add a private member to the struct ttm_resource
@ 2021-09-10 13:15 ` Thomas Hellström
  0 siblings, 0 replies; 35+ messages in thread
From: Thomas Hellström @ 2021-09-10 13:15 UTC (permalink / raw)
  To: intel-gfx, dri-devel
  Cc: maarten.lankhorst, matthew.auld, Thomas Hellström,
	Matthew Auld, König Christian

Both the provider (resource manager) and the consumer (the TTM driver)
want to subclass struct ttm_resource. Since this is left for the resource
manager, we need to provide a private pointer for the TTM driver.

Provide a struct ttm_resource_private for the driver to subclass for
data with the same lifetime as the struct ttm_resource: In the i915 case
it will, for example, be an sg-table and radix tree into the LMEM
/VRAM pages that currently are awkwardly attached to the GEM object.

Provide an ops structure for associated ops (Which is only destroy() ATM)
It might seem pointless to provide a separate ops structure, but Linus
has previously made it clear that that's the norm.

After careful audit one could perhaps also on a per-driver basis
replace the delete_mem_notify() TTM driver callback with the above
destroy function.

Cc: Matthew Auld <matthew.william.auld@gmail.com>
Cc: König Christian <Christian.Koenig@amd.com>
Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/ttm/ttm_resource.c | 10 +++++++---
 include/drm/ttm/ttm_resource.h     | 28 ++++++++++++++++++++++++++++
 2 files changed, 35 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/ttm/ttm_resource.c b/drivers/gpu/drm/ttm/ttm_resource.c
index 2431717376e7..973e7c50bfed 100644
--- a/drivers/gpu/drm/ttm/ttm_resource.c
+++ b/drivers/gpu/drm/ttm/ttm_resource.c
@@ -57,13 +57,17 @@ int ttm_resource_alloc(struct ttm_buffer_object *bo,
 void ttm_resource_free(struct ttm_buffer_object *bo, struct ttm_resource **res)
 {
 	struct ttm_resource_manager *man;
+	struct ttm_resource *resource = *res;
 
-	if (!*res)
+	if (!resource)
 		return;
 
-	man = ttm_manager_type(bo->bdev, (*res)->mem_type);
-	man->func->free(man, *res);
 	*res = NULL;
+	if (resource->priv)
+		resource->priv->ops.destroy(resource->priv);
+
+	man = ttm_manager_type(bo->bdev, resource->mem_type);
+	man->func->free(man, resource);
 }
 EXPORT_SYMBOL(ttm_resource_free);
 
diff --git a/include/drm/ttm/ttm_resource.h b/include/drm/ttm/ttm_resource.h
index 140b6b9a8bbe..5a22c9a29c05 100644
--- a/include/drm/ttm/ttm_resource.h
+++ b/include/drm/ttm/ttm_resource.h
@@ -44,6 +44,7 @@ struct dma_buf_map;
 struct io_mapping;
 struct sg_table;
 struct scatterlist;
+struct ttm_resource_private;
 
 struct ttm_resource_manager_func {
 	/**
@@ -153,6 +154,32 @@ struct ttm_bus_placement {
 	enum ttm_caching	caching;
 };
 
+/**
+ * struct ttm_resource_private_ops - Operations for a struct
+ * ttm_resource_private
+ *
+ * Not much benefit to keep this as a separate struct with only a single member,
+ * but keeping a separate ops struct is the norm.
+ */
+struct ttm_resource_private_ops {
+	/**
+	 * destroy() - Callback to destroy the private data
+	 * @priv - The private data to destroy
+	 */
+	void (*destroy) (struct ttm_resource_private *priv);
+};
+
+/**
+ * struct ttm_resource_private - TTM driver private data
+ * @ops: Pointer to struct ttm_resource_private_ops with associated operations
+ *
+ * Intended to be subclassed to hold, for example cached data sharing the
+ * lifetime with a struct ttm_resource.
+ */
+struct ttm_resource_private {
+	const struct ttm_resource_private_ops ops;
+};
+
 /**
  * struct ttm_resource
  *
@@ -171,6 +198,7 @@ struct ttm_resource {
 	uint32_t mem_type;
 	uint32_t placement;
 	struct ttm_bus_placement bus;
+	struct ttm_resource_private *priv;
 };
 
 /**
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [Intel-gfx] [RFC PATCH] drm/ttm: Add a private member to the struct ttm_resource
@ 2021-09-10 13:15 ` Thomas Hellström
  0 siblings, 0 replies; 35+ messages in thread
From: Thomas Hellström @ 2021-09-10 13:15 UTC (permalink / raw)
  To: intel-gfx, dri-devel
  Cc: maarten.lankhorst, matthew.auld, Thomas Hellström,
	Matthew Auld, König Christian

Both the provider (resource manager) and the consumer (the TTM driver)
want to subclass struct ttm_resource. Since this is left for the resource
manager, we need to provide a private pointer for the TTM driver.

Provide a struct ttm_resource_private for the driver to subclass for
data with the same lifetime as the struct ttm_resource: In the i915 case
it will, for example, be an sg-table and radix tree into the LMEM
/VRAM pages that currently are awkwardly attached to the GEM object.

Provide an ops structure for associated ops (Which is only destroy() ATM)
It might seem pointless to provide a separate ops structure, but Linus
has previously made it clear that that's the norm.

After careful audit one could perhaps also on a per-driver basis
replace the delete_mem_notify() TTM driver callback with the above
destroy function.

Cc: Matthew Auld <matthew.william.auld@gmail.com>
Cc: König Christian <Christian.Koenig@amd.com>
Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 drivers/gpu/drm/ttm/ttm_resource.c | 10 +++++++---
 include/drm/ttm/ttm_resource.h     | 28 ++++++++++++++++++++++++++++
 2 files changed, 35 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/ttm/ttm_resource.c b/drivers/gpu/drm/ttm/ttm_resource.c
index 2431717376e7..973e7c50bfed 100644
--- a/drivers/gpu/drm/ttm/ttm_resource.c
+++ b/drivers/gpu/drm/ttm/ttm_resource.c
@@ -57,13 +57,17 @@ int ttm_resource_alloc(struct ttm_buffer_object *bo,
 void ttm_resource_free(struct ttm_buffer_object *bo, struct ttm_resource **res)
 {
 	struct ttm_resource_manager *man;
+	struct ttm_resource *resource = *res;
 
-	if (!*res)
+	if (!resource)
 		return;
 
-	man = ttm_manager_type(bo->bdev, (*res)->mem_type);
-	man->func->free(man, *res);
 	*res = NULL;
+	if (resource->priv)
+		resource->priv->ops.destroy(resource->priv);
+
+	man = ttm_manager_type(bo->bdev, resource->mem_type);
+	man->func->free(man, resource);
 }
 EXPORT_SYMBOL(ttm_resource_free);
 
diff --git a/include/drm/ttm/ttm_resource.h b/include/drm/ttm/ttm_resource.h
index 140b6b9a8bbe..5a22c9a29c05 100644
--- a/include/drm/ttm/ttm_resource.h
+++ b/include/drm/ttm/ttm_resource.h
@@ -44,6 +44,7 @@ struct dma_buf_map;
 struct io_mapping;
 struct sg_table;
 struct scatterlist;
+struct ttm_resource_private;
 
 struct ttm_resource_manager_func {
 	/**
@@ -153,6 +154,32 @@ struct ttm_bus_placement {
 	enum ttm_caching	caching;
 };
 
+/**
+ * struct ttm_resource_private_ops - Operations for a struct
+ * ttm_resource_private
+ *
+ * Not much benefit to keep this as a separate struct with only a single member,
+ * but keeping a separate ops struct is the norm.
+ */
+struct ttm_resource_private_ops {
+	/**
+	 * destroy() - Callback to destroy the private data
+	 * @priv - The private data to destroy
+	 */
+	void (*destroy) (struct ttm_resource_private *priv);
+};
+
+/**
+ * struct ttm_resource_private - TTM driver private data
+ * @ops: Pointer to struct ttm_resource_private_ops with associated operations
+ *
+ * Intended to be subclassed to hold, for example cached data sharing the
+ * lifetime with a struct ttm_resource.
+ */
+struct ttm_resource_private {
+	const struct ttm_resource_private_ops ops;
+};
+
 /**
  * struct ttm_resource
  *
@@ -171,6 +198,7 @@ struct ttm_resource {
 	uint32_t mem_type;
 	uint32_t placement;
 	struct ttm_bus_placement bus;
+	struct ttm_resource_private *priv;
 };
 
 /**
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for drm/ttm: Add a private member to the struct ttm_resource
  2021-09-10 13:15 ` [Intel-gfx] " Thomas Hellström
  (?)
@ 2021-09-10 13:25 ` Patchwork
  -1 siblings, 0 replies; 35+ messages in thread
From: Patchwork @ 2021-09-10 13:25 UTC (permalink / raw)
  To: Thomas Hellström; +Cc: intel-gfx

== Series Details ==

Series: drm/ttm: Add a private member to the struct ttm_resource
URL   : https://patchwork.freedesktop.org/series/94550/
State : warning

== Summary ==

$ dim checkpatch origin/drm-tip
6fac6006f050 drm/ttm: Add a private member to the struct ttm_resource
-:83: WARNING:SPACING: Unnecessary space before function pointer arguments
#83: FILE: include/drm/ttm/ttm_resource.h:168:
+	void (*destroy) (struct ttm_resource_private *priv);

total: 0 errors, 1 warnings, 0 checks, 66 lines checked



^ permalink raw reply	[flat|nested] 35+ messages in thread

* [Intel-gfx] ✓ Fi.CI.BAT: success for drm/ttm: Add a private member to the struct ttm_resource
  2021-09-10 13:15 ` [Intel-gfx] " Thomas Hellström
  (?)
  (?)
@ 2021-09-10 13:54 ` Patchwork
  -1 siblings, 0 replies; 35+ messages in thread
From: Patchwork @ 2021-09-10 13:54 UTC (permalink / raw)
  To: Thomas Hellström; +Cc: intel-gfx

[-- Attachment #1: Type: text/plain, Size: 5507 bytes --]

== Series Details ==

Series: drm/ttm: Add a private member to the struct ttm_resource
URL   : https://patchwork.freedesktop.org/series/94550/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_10569 -> Patchwork_21012
====================================================

Summary
-------

  **SUCCESS**

  No regressions found.

  External URL: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/index.html

Known issues
------------

  Here are the changes found in Patchwork_21012 that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@amdgpu/amd_cs_nop@sync-fork-compute0:
    - fi-snb-2600:        NOTRUN -> [SKIP][1] ([fdo#109271]) +17 similar issues
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/fi-snb-2600/igt@amdgpu/amd_cs_nop@sync-fork-compute0.html

  * igt@gem_exec_fence@basic-busy@bcs0:
    - fi-kbl-soraka:      NOTRUN -> [SKIP][2] ([fdo#109271]) +8 similar issues
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/fi-kbl-soraka/igt@gem_exec_fence@basic-busy@bcs0.html

  * igt@gem_exec_suspend@basic-s3:
    - fi-skl-6600u:       [PASS][3] -> [INCOMPLETE][4] ([i915#198])
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10569/fi-skl-6600u/igt@gem_exec_suspend@basic-s3.html
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/fi-skl-6600u/igt@gem_exec_suspend@basic-s3.html

  * igt@gem_huc_copy@huc-copy:
    - fi-kbl-soraka:      NOTRUN -> [SKIP][5] ([fdo#109271] / [i915#2190])
   [5]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/fi-kbl-soraka/igt@gem_huc_copy@huc-copy.html

  * igt@i915_selftest@live@gt_lrc:
    - fi-rkl-guc:         [PASS][6] -> [DMESG-WARN][7] ([i915#3958])
   [6]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10569/fi-rkl-guc/igt@i915_selftest@live@gt_lrc.html
   [7]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/fi-rkl-guc/igt@i915_selftest@live@gt_lrc.html

  * igt@i915_selftest@live@gt_pm:
    - fi-kbl-soraka:      NOTRUN -> [DMESG-FAIL][8] ([i915#1886] / [i915#2291])
   [8]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/fi-kbl-soraka/igt@i915_selftest@live@gt_pm.html

  * igt@kms_chamelium@common-hpd-after-suspend:
    - fi-kbl-soraka:      NOTRUN -> [SKIP][9] ([fdo#109271] / [fdo#111827]) +8 similar issues
   [9]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/fi-kbl-soraka/igt@kms_chamelium@common-hpd-after-suspend.html

  * igt@kms_pipe_crc_basic@compare-crc-sanitycheck-pipe-d:
    - fi-kbl-soraka:      NOTRUN -> [SKIP][10] ([fdo#109271] / [i915#533])
   [10]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/fi-kbl-soraka/igt@kms_pipe_crc_basic@compare-crc-sanitycheck-pipe-d.html

  
#### Possible fixes ####

  * igt@i915_selftest@live@hangcheck:
    - fi-snb-2600:        [INCOMPLETE][11] ([i915#3921]) -> [PASS][12]
   [11]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10569/fi-snb-2600/igt@i915_selftest@live@hangcheck.html
   [12]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/fi-snb-2600/igt@i915_selftest@live@hangcheck.html

  * igt@i915_selftest@live@perf:
    - {fi-tgl-dsi}:       [DMESG-WARN][13] ([i915#2867]) -> [PASS][14]
   [13]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10569/fi-tgl-dsi/igt@i915_selftest@live@perf.html
   [14]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/fi-tgl-dsi/igt@i915_selftest@live@perf.html

  * igt@kms_cursor_legacy@basic-flip-after-cursor-varying-size:
    - fi-rkl-11600:       [SKIP][15] ([fdo#111825]) -> [PASS][16] +1 similar issue
   [15]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10569/fi-rkl-11600/igt@kms_cursor_legacy@basic-flip-after-cursor-varying-size.html
   [16]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/fi-rkl-11600/igt@kms_cursor_legacy@basic-flip-after-cursor-varying-size.html

  
  {name}: This element is suppressed. This means it is ignored when computing
          the status of the difference (SUCCESS, WARNING, or FAILURE).

  [fdo#109271]: https://bugs.freedesktop.org/show_bug.cgi?id=109271
  [fdo#111825]: https://bugs.freedesktop.org/show_bug.cgi?id=111825
  [fdo#111827]: https://bugs.freedesktop.org/show_bug.cgi?id=111827
  [i915#1886]: https://gitlab.freedesktop.org/drm/intel/issues/1886
  [i915#198]: https://gitlab.freedesktop.org/drm/intel/issues/198
  [i915#2190]: https://gitlab.freedesktop.org/drm/intel/issues/2190
  [i915#2291]: https://gitlab.freedesktop.org/drm/intel/issues/2291
  [i915#2867]: https://gitlab.freedesktop.org/drm/intel/issues/2867
  [i915#3921]: https://gitlab.freedesktop.org/drm/intel/issues/3921
  [i915#3958]: https://gitlab.freedesktop.org/drm/intel/issues/3958
  [i915#533]: https://gitlab.freedesktop.org/drm/intel/issues/533


Participating hosts (43 -> 39)
------------------------------

  Additional (1): fi-kbl-soraka 
  Missing    (5): bat-dg1-6 bat-dg1-5 fi-bsw-cyan bat-adlp-4 fi-bdw-samus 


Build changes
-------------

  * Linux: CI_DRM_10569 -> Patchwork_21012

  CI-20190529: 20190529
  CI_DRM_10569: 5ffefab3f90a812fc6ee169f4c8aa1d8b2ceaa34 @ git://anongit.freedesktop.org/gfx-ci/linux
  IGT_6203: 64452a46b57ca4ef35eb65a547df8ed1b131e8f0 @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git
  Patchwork_21012: 6fac6006f0509859d1204ae46c04109d290af359 @ git://anongit.freedesktop.org/gfx-ci/linux


== Linux commits ==

6fac6006f050 drm/ttm: Add a private member to the struct ttm_resource

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/index.html

[-- Attachment #2: Type: text/html, Size: 6655 bytes --]

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [RFC PATCH] drm/ttm: Add a private member to the struct ttm_resource
  2021-09-10 13:15 ` [Intel-gfx] " Thomas Hellström
@ 2021-09-10 14:40   ` Christian König
  -1 siblings, 0 replies; 35+ messages in thread
From: Christian König @ 2021-09-10 14:40 UTC (permalink / raw)
  To: Thomas Hellström, intel-gfx, dri-devel
  Cc: maarten.lankhorst, matthew.auld, Matthew Auld



Am 10.09.21 um 15:15 schrieb Thomas Hellström:
> Both the provider (resource manager) and the consumer (the TTM driver)
> want to subclass struct ttm_resource. Since this is left for the resource
> manager, we need to provide a private pointer for the TTM driver.
>
> Provide a struct ttm_resource_private for the driver to subclass for
> data with the same lifetime as the struct ttm_resource: In the i915 case
> it will, for example, be an sg-table and radix tree into the LMEM
> /VRAM pages that currently are awkwardly attached to the GEM object.
>
> Provide an ops structure for associated ops (Which is only destroy() ATM)
> It might seem pointless to provide a separate ops structure, but Linus
> has previously made it clear that that's the norm.
>
> After careful audit one could perhaps also on a per-driver basis
> replace the delete_mem_notify() TTM driver callback with the above
> destroy function.

Well this is a really big NAK to this approach.

If you need to attach some additional information to the resource then 
implement your own resource manager like everybody else does.

Regards,
Christian.

>
> Cc: Matthew Auld <matthew.william.auld@gmail.com>
> Cc: König Christian <Christian.Koenig@amd.com>
> Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> ---
>   drivers/gpu/drm/ttm/ttm_resource.c | 10 +++++++---
>   include/drm/ttm/ttm_resource.h     | 28 ++++++++++++++++++++++++++++
>   2 files changed, 35 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/gpu/drm/ttm/ttm_resource.c b/drivers/gpu/drm/ttm/ttm_resource.c
> index 2431717376e7..973e7c50bfed 100644
> --- a/drivers/gpu/drm/ttm/ttm_resource.c
> +++ b/drivers/gpu/drm/ttm/ttm_resource.c
> @@ -57,13 +57,17 @@ int ttm_resource_alloc(struct ttm_buffer_object *bo,
>   void ttm_resource_free(struct ttm_buffer_object *bo, struct ttm_resource **res)
>   {
>   	struct ttm_resource_manager *man;
> +	struct ttm_resource *resource = *res;
>   
> -	if (!*res)
> +	if (!resource)
>   		return;
>   
> -	man = ttm_manager_type(bo->bdev, (*res)->mem_type);
> -	man->func->free(man, *res);
>   	*res = NULL;
> +	if (resource->priv)
> +		resource->priv->ops.destroy(resource->priv);
> +
> +	man = ttm_manager_type(bo->bdev, resource->mem_type);
> +	man->func->free(man, resource);
>   }
>   EXPORT_SYMBOL(ttm_resource_free);
>   
> diff --git a/include/drm/ttm/ttm_resource.h b/include/drm/ttm/ttm_resource.h
> index 140b6b9a8bbe..5a22c9a29c05 100644
> --- a/include/drm/ttm/ttm_resource.h
> +++ b/include/drm/ttm/ttm_resource.h
> @@ -44,6 +44,7 @@ struct dma_buf_map;
>   struct io_mapping;
>   struct sg_table;
>   struct scatterlist;
> +struct ttm_resource_private;
>   
>   struct ttm_resource_manager_func {
>   	/**
> @@ -153,6 +154,32 @@ struct ttm_bus_placement {
>   	enum ttm_caching	caching;
>   };
>   
> +/**
> + * struct ttm_resource_private_ops - Operations for a struct
> + * ttm_resource_private
> + *
> + * Not much benefit to keep this as a separate struct with only a single member,
> + * but keeping a separate ops struct is the norm.
> + */
> +struct ttm_resource_private_ops {
> +	/**
> +	 * destroy() - Callback to destroy the private data
> +	 * @priv - The private data to destroy
> +	 */
> +	void (*destroy) (struct ttm_resource_private *priv);
> +};
> +
> +/**
> + * struct ttm_resource_private - TTM driver private data
> + * @ops: Pointer to struct ttm_resource_private_ops with associated operations
> + *
> + * Intended to be subclassed to hold, for example cached data sharing the
> + * lifetime with a struct ttm_resource.
> + */
> +struct ttm_resource_private {
> +	const struct ttm_resource_private_ops ops;
> +};
> +
>   /**
>    * struct ttm_resource
>    *
> @@ -171,6 +198,7 @@ struct ttm_resource {
>   	uint32_t mem_type;
>   	uint32_t placement;
>   	struct ttm_bus_placement bus;
> +	struct ttm_resource_private *priv;
>   };
>   
>   /**


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [Intel-gfx] [RFC PATCH] drm/ttm: Add a private member to the struct ttm_resource
@ 2021-09-10 14:40   ` Christian König
  0 siblings, 0 replies; 35+ messages in thread
From: Christian König @ 2021-09-10 14:40 UTC (permalink / raw)
  To: Thomas Hellström, intel-gfx, dri-devel
  Cc: maarten.lankhorst, matthew.auld, Matthew Auld



Am 10.09.21 um 15:15 schrieb Thomas Hellström:
> Both the provider (resource manager) and the consumer (the TTM driver)
> want to subclass struct ttm_resource. Since this is left for the resource
> manager, we need to provide a private pointer for the TTM driver.
>
> Provide a struct ttm_resource_private for the driver to subclass for
> data with the same lifetime as the struct ttm_resource: In the i915 case
> it will, for example, be an sg-table and radix tree into the LMEM
> /VRAM pages that currently are awkwardly attached to the GEM object.
>
> Provide an ops structure for associated ops (Which is only destroy() ATM)
> It might seem pointless to provide a separate ops structure, but Linus
> has previously made it clear that that's the norm.
>
> After careful audit one could perhaps also on a per-driver basis
> replace the delete_mem_notify() TTM driver callback with the above
> destroy function.

Well this is a really big NAK to this approach.

If you need to attach some additional information to the resource then 
implement your own resource manager like everybody else does.

Regards,
Christian.

>
> Cc: Matthew Auld <matthew.william.auld@gmail.com>
> Cc: König Christian <Christian.Koenig@amd.com>
> Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> ---
>   drivers/gpu/drm/ttm/ttm_resource.c | 10 +++++++---
>   include/drm/ttm/ttm_resource.h     | 28 ++++++++++++++++++++++++++++
>   2 files changed, 35 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/gpu/drm/ttm/ttm_resource.c b/drivers/gpu/drm/ttm/ttm_resource.c
> index 2431717376e7..973e7c50bfed 100644
> --- a/drivers/gpu/drm/ttm/ttm_resource.c
> +++ b/drivers/gpu/drm/ttm/ttm_resource.c
> @@ -57,13 +57,17 @@ int ttm_resource_alloc(struct ttm_buffer_object *bo,
>   void ttm_resource_free(struct ttm_buffer_object *bo, struct ttm_resource **res)
>   {
>   	struct ttm_resource_manager *man;
> +	struct ttm_resource *resource = *res;
>   
> -	if (!*res)
> +	if (!resource)
>   		return;
>   
> -	man = ttm_manager_type(bo->bdev, (*res)->mem_type);
> -	man->func->free(man, *res);
>   	*res = NULL;
> +	if (resource->priv)
> +		resource->priv->ops.destroy(resource->priv);
> +
> +	man = ttm_manager_type(bo->bdev, resource->mem_type);
> +	man->func->free(man, resource);
>   }
>   EXPORT_SYMBOL(ttm_resource_free);
>   
> diff --git a/include/drm/ttm/ttm_resource.h b/include/drm/ttm/ttm_resource.h
> index 140b6b9a8bbe..5a22c9a29c05 100644
> --- a/include/drm/ttm/ttm_resource.h
> +++ b/include/drm/ttm/ttm_resource.h
> @@ -44,6 +44,7 @@ struct dma_buf_map;
>   struct io_mapping;
>   struct sg_table;
>   struct scatterlist;
> +struct ttm_resource_private;
>   
>   struct ttm_resource_manager_func {
>   	/**
> @@ -153,6 +154,32 @@ struct ttm_bus_placement {
>   	enum ttm_caching	caching;
>   };
>   
> +/**
> + * struct ttm_resource_private_ops - Operations for a struct
> + * ttm_resource_private
> + *
> + * Not much benefit to keep this as a separate struct with only a single member,
> + * but keeping a separate ops struct is the norm.
> + */
> +struct ttm_resource_private_ops {
> +	/**
> +	 * destroy() - Callback to destroy the private data
> +	 * @priv - The private data to destroy
> +	 */
> +	void (*destroy) (struct ttm_resource_private *priv);
> +};
> +
> +/**
> + * struct ttm_resource_private - TTM driver private data
> + * @ops: Pointer to struct ttm_resource_private_ops with associated operations
> + *
> + * Intended to be subclassed to hold, for example cached data sharing the
> + * lifetime with a struct ttm_resource.
> + */
> +struct ttm_resource_private {
> +	const struct ttm_resource_private_ops ops;
> +};
> +
>   /**
>    * struct ttm_resource
>    *
> @@ -171,6 +198,7 @@ struct ttm_resource {
>   	uint32_t mem_type;
>   	uint32_t placement;
>   	struct ttm_bus_placement bus;
> +	struct ttm_resource_private *priv;
>   };
>   
>   /**


^ permalink raw reply	[flat|nested] 35+ messages in thread

* [Intel-gfx] ✓ Fi.CI.IGT: success for drm/ttm: Add a private member to the struct ttm_resource
  2021-09-10 13:15 ` [Intel-gfx] " Thomas Hellström
                   ` (3 preceding siblings ...)
  (?)
@ 2021-09-10 15:12 ` Patchwork
  -1 siblings, 0 replies; 35+ messages in thread
From: Patchwork @ 2021-09-10 15:12 UTC (permalink / raw)
  To: Thomas Hellström; +Cc: intel-gfx

[-- Attachment #1: Type: text/plain, Size: 30279 bytes --]

== Series Details ==

Series: drm/ttm: Add a private member to the struct ttm_resource
URL   : https://patchwork.freedesktop.org/series/94550/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_10569_full -> Patchwork_21012_full
====================================================

Summary
-------

  **SUCCESS**

  No regressions found.

  

Known issues
------------

  Here are the changes found in Patchwork_21012_full that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@gem_ctx_isolation@preservation-s3@rcs0:
    - shard-skl:          [PASS][1] -> [INCOMPLETE][2] ([i915#198])
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10569/shard-skl9/igt@gem_ctx_isolation@preservation-s3@rcs0.html
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/shard-skl1/igt@gem_ctx_isolation@preservation-s3@rcs0.html

  * igt@gem_ctx_persistence@process:
    - shard-snb:          NOTRUN -> [SKIP][3] ([fdo#109271] / [i915#1099]) +1 similar issue
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/shard-snb5/igt@gem_ctx_persistence@process.html

  * igt@gem_eio@unwedge-stress:
    - shard-tglb:         [PASS][4] -> [TIMEOUT][5] ([i915#2369] / [i915#3063] / [i915#3648])
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10569/shard-tglb2/igt@gem_eio@unwedge-stress.html
   [5]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/shard-tglb5/igt@gem_eio@unwedge-stress.html

  * igt@gem_exec_fair@basic-deadline:
    - shard-kbl:          [PASS][6] -> [FAIL][7] ([i915#2846])
   [6]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10569/shard-kbl1/igt@gem_exec_fair@basic-deadline.html
   [7]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/shard-kbl3/igt@gem_exec_fair@basic-deadline.html

  * igt@gem_exec_params@rsvd2-dirt:
    - shard-tglb:         NOTRUN -> [SKIP][8] ([fdo#109283])
   [8]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/shard-tglb2/igt@gem_exec_params@rsvd2-dirt.html

  * igt@gem_exec_suspend@basic-s0:
    - shard-tglb:         [PASS][9] -> [INCOMPLETE][10] ([i915#456])
   [9]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10569/shard-tglb2/igt@gem_exec_suspend@basic-s0.html
   [10]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/shard-tglb7/igt@gem_exec_suspend@basic-s0.html

  * igt@gem_huc_copy@huc-copy:
    - shard-apl:          NOTRUN -> [SKIP][11] ([fdo#109271] / [i915#2190])
   [11]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/shard-apl3/igt@gem_huc_copy@huc-copy.html

  * igt@gem_pread@exhaustion:
    - shard-snb:          NOTRUN -> [WARN][12] ([i915#2658])
   [12]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/shard-snb5/igt@gem_pread@exhaustion.html

  * igt@gem_softpin@evict-snoop:
    - shard-tglb:         NOTRUN -> [SKIP][13] ([fdo#109312])
   [13]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/shard-tglb1/igt@gem_softpin@evict-snoop.html

  * igt@gem_userptr_blits@input-checking:
    - shard-tglb:         NOTRUN -> [DMESG-WARN][14] ([i915#3002]) +1 similar issue
   [14]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/shard-tglb2/igt@gem_userptr_blits@input-checking.html
    - shard-snb:          NOTRUN -> [DMESG-WARN][15] ([i915#3002])
   [15]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/shard-snb5/igt@gem_userptr_blits@input-checking.html

  * igt@gem_userptr_blits@readonly-unsync:
    - shard-tglb:         NOTRUN -> [SKIP][16] ([i915#3297])
   [16]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/shard-tglb6/igt@gem_userptr_blits@readonly-unsync.html

  * igt@gen3_render_mixed_blits:
    - shard-skl:          NOTRUN -> [SKIP][17] ([fdo#109271]) +34 similar issues
   [17]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/shard-skl9/igt@gen3_render_mixed_blits.html

  * igt@gen7_exec_parse@basic-offset:
    - shard-tglb:         NOTRUN -> [SKIP][18] ([fdo#109289])
   [18]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/shard-tglb1/igt@gen7_exec_parse@basic-offset.html

  * igt@gen9_exec_parse@allowed-single:
    - shard-skl:          [PASS][19] -> [DMESG-WARN][20] ([i915#1436] / [i915#716])
   [19]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10569/shard-skl3/igt@gen9_exec_parse@allowed-single.html
   [20]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/shard-skl1/igt@gen9_exec_parse@allowed-single.html

  * igt@gen9_exec_parse@bb-start-cmd:
    - shard-tglb:         NOTRUN -> [SKIP][21] ([i915#2856]) +1 similar issue
   [21]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/shard-tglb1/igt@gen9_exec_parse@bb-start-cmd.html

  * igt@i915_selftest@live@gt_lrc:
    - shard-tglb:         NOTRUN -> [DMESG-FAIL][22] ([i915#2373])
   [22]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/shard-tglb1/igt@i915_selftest@live@gt_lrc.html

  * igt@i915_selftest@live@gt_pm:
    - shard-tglb:         NOTRUN -> [DMESG-FAIL][23] ([i915#1759] / [i915#2291])
   [23]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/shard-tglb1/igt@i915_selftest@live@gt_pm.html

  * igt@kms_async_flips@alternate-sync-async-flip:
    - shard-snb:          [PASS][24] -> [FAIL][25] ([i915#2521])
   [24]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10569/shard-snb5/igt@kms_async_flips@alternate-sync-async-flip.html
   [25]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/shard-snb2/igt@kms_async_flips@alternate-sync-async-flip.html

  * igt@kms_big_fb@y-tiled-max-hw-stride-32bpp-rotate-0-hflip:
    - shard-apl:          NOTRUN -> [SKIP][26] ([fdo#109271] / [i915#3777]) +1 similar issue
   [26]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/shard-apl6/igt@kms_big_fb@y-tiled-max-hw-stride-32bpp-rotate-0-hflip.html

  * igt@kms_big_fb@yf-tiled-addfb:
    - shard-tglb:         NOTRUN -> [SKIP][27] ([fdo#111615]) +3 similar issues
   [27]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/shard-tglb6/igt@kms_big_fb@yf-tiled-addfb.html

  * igt@kms_big_joiner@invalid-modeset:
    - shard-tglb:         NOTRUN -> [SKIP][28] ([i915#2705])
   [28]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/shard-tglb6/igt@kms_big_joiner@invalid-modeset.html

  * igt@kms_ccs@pipe-a-bad-aux-stride-y_tiled_gen12_rc_ccs_cc:
    - shard-apl:          NOTRUN -> [SKIP][29] ([fdo#109271] / [i915#3886]) +7 similar issues
   [29]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/shard-apl2/igt@kms_ccs@pipe-a-bad-aux-stride-y_tiled_gen12_rc_ccs_cc.html

  * igt@kms_ccs@pipe-a-bad-pixel-format-y_tiled_gen12_mc_ccs:
    - shard-tglb:         NOTRUN -> [SKIP][30] ([i915#3689] / [i915#3886]) +1 similar issue
   [30]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/shard-tglb1/igt@kms_ccs@pipe-a-bad-pixel-format-y_tiled_gen12_mc_ccs.html

  * igt@kms_ccs@pipe-a-crc-sprite-planes-basic-y_tiled_ccs:
    - shard-tglb:         NOTRUN -> [SKIP][31] ([i915#3689]) +7 similar issues
   [31]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/shard-tglb2/igt@kms_ccs@pipe-a-crc-sprite-planes-basic-y_tiled_ccs.html

  * igt@kms_cdclk@mode-transition:
    - shard-apl:          NOTRUN -> [SKIP][32] ([fdo#109271]) +170 similar issues
   [32]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/shard-apl2/igt@kms_cdclk@mode-transition.html

  * igt@kms_cdclk@plane-scaling:
    - shard-tglb:         NOTRUN -> [SKIP][33] ([i915#3742])
   [33]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/shard-tglb6/igt@kms_cdclk@plane-scaling.html

  * igt@kms_chamelium@dp-crc-fast:
    - shard-snb:          NOTRUN -> [SKIP][34] ([fdo#109271] / [fdo#111827]) +5 similar issues
   [34]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/shard-snb6/igt@kms_chamelium@dp-crc-fast.html

  * igt@kms_color_chamelium@pipe-a-gamma:
    - shard-iclb:         NOTRUN -> [SKIP][35] ([fdo#109284] / [fdo#111827]) +1 similar issue
   [35]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/shard-iclb7/igt@kms_color_chamelium@pipe-a-gamma.html

  * igt@kms_color_chamelium@pipe-b-ctm-0-75:
    - shard-tglb:         NOTRUN -> [SKIP][36] ([fdo#109284] / [fdo#111827]) +9 similar issues
   [36]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/shard-tglb2/igt@kms_color_chamelium@pipe-b-ctm-0-75.html

  * igt@kms_color_chamelium@pipe-invalid-degamma-lut-sizes:
    - shard-apl:          NOTRUN -> [SKIP][37] ([fdo#109271] / [fdo#111827]) +13 similar issues
   [37]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/shard-apl3/igt@kms_color_chamelium@pipe-invalid-degamma-lut-sizes.html
    - shard-skl:          NOTRUN -> [SKIP][38] ([fdo#109271] / [fdo#111827]) +2 similar issues
   [38]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/shard-skl9/igt@kms_color_chamelium@pipe-invalid-degamma-lut-sizes.html

  * igt@kms_content_protection@atomic:
    - shard-apl:          NOTRUN -> [TIMEOUT][39] ([i915#1319]) +1 similar issue
   [39]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/shard-apl2/igt@kms_content_protection@atomic.html

  * igt@kms_content_protection@dp-mst-type-1:
    - shard-kbl:          NOTRUN -> [SKIP][40] ([fdo#109271]) +4 similar issues
   [40]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/shard-kbl1/igt@kms_content_protection@dp-mst-type-1.html

  * igt@kms_content_protection@lic:
    - shard-tglb:         NOTRUN -> [SKIP][41] ([fdo#111828])
   [41]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/shard-tglb6/igt@kms_content_protection@lic.html

  * igt@kms_cursor_crc@pipe-b-cursor-32x10-sliding:
    - shard-tglb:         NOTRUN -> [SKIP][42] ([i915#3359]) +4 similar issues
   [42]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/shard-tglb2/igt@kms_cursor_crc@pipe-b-cursor-32x10-sliding.html

  * igt@kms_cursor_crc@pipe-c-cursor-512x512-rapid-movement:
    - shard-tglb:         NOTRUN -> [SKIP][43] ([fdo#109279] / [i915#3359]) +1 similar issue
   [43]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/shard-tglb2/igt@kms_cursor_crc@pipe-c-cursor-512x512-rapid-movement.html

  * igt@kms_cursor_crc@pipe-d-cursor-32x32-onscreen:
    - shard-tglb:         NOTRUN -> [SKIP][44] ([i915#3319]) +2 similar issues
   [44]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/shard-tglb2/igt@kms_cursor_crc@pipe-d-cursor-32x32-onscreen.html

  * igt@kms_cursor_edge_walk@pipe-d-128x128-right-edge:
    - shard-snb:          NOTRUN -> [SKIP][45] ([fdo#109271]) +171 similar issues
   [45]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/shard-snb5/igt@kms_cursor_edge_walk@pipe-d-128x128-right-edge.html

  * igt@kms_dp_tiled_display@basic-test-pattern:
    - shard-tglb:         NOTRUN -> [SKIP][46] ([i915#426])
   [46]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/shard-tglb6/igt@kms_dp_tiled_display@basic-test-pattern.html

  * igt@kms_flip@2x-absolute-wf_vblank:
    - shard-tglb:         NOTRUN -> [SKIP][47] ([fdo#111825] / [i915#3966])
   [47]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/shard-tglb6/igt@kms_flip@2x-absolute-wf_vblank.html

  * igt@kms_flip@2x-plain-flip-fb-recreate@ab-hdmi-a1-hdmi-a2:
    - shard-glk:          [PASS][48] -> [FAIL][49] ([i915#2122])
   [48]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10569/shard-glk4/igt@kms_flip@2x-plain-flip-fb-recreate@ab-hdmi-a1-hdmi-a2.html
   [49]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/shard-glk1/igt@kms_flip@2x-plain-flip-fb-recreate@ab-hdmi-a1-hdmi-a2.html

  * igt@kms_flip@flip-vs-suspend-interruptible@c-dp1:
    - shard-apl:          [PASS][50] -> [DMESG-WARN][51] ([i915#180]) +2 similar issues
   [50]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10569/shard-apl6/igt@kms_flip@flip-vs-suspend-interruptible@c-dp1.html
   [51]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/shard-apl7/igt@kms_flip@flip-vs-suspend-interruptible@c-dp1.html

  * igt@kms_flip_scaled_crc@flip-32bpp-ytile-to-32bpp-ytilegen12rcccs:
    - shard-apl:          NOTRUN -> [SKIP][52] ([fdo#109271] / [i915#2672])
   [52]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/shard-apl7/igt@kms_flip_scaled_crc@flip-32bpp-ytile-to-32bpp-ytilegen12rcccs.html

  * igt@kms_frontbuffer_tracking@fbc-suspend:
    - shard-kbl:          [PASS][53] -> [DMESG-WARN][54] ([i915#180]) +1 similar issue
   [53]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10569/shard-kbl1/igt@kms_frontbuffer_tracking@fbc-suspend.html
   [54]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/shard-kbl4/igt@kms_frontbuffer_tracking@fbc-suspend.html

  * igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-pri-indfb-draw-mmap-cpu:
    - shard-tglb:         NOTRUN -> [SKIP][55] ([fdo#111825]) +29 similar issues
   [55]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/shard-tglb1/igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-pri-indfb-draw-mmap-cpu.html

  * igt@kms_hdr@bpc-switch-suspend:
    - shard-skl:          [PASS][56] -> [FAIL][57] ([i915#1188])
   [56]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10569/shard-skl7/igt@kms_hdr@bpc-switch-suspend.html
   [57]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/shard-skl6/igt@kms_hdr@bpc-switch-suspend.html

  * igt@kms_invalid_dotclock:
    - shard-tglb:         NOTRUN -> [SKIP][58] ([fdo#110577])
   [58]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/shard-tglb1/igt@kms_invalid_dotclock.html

  * igt@kms_pipe_crc_basic@read-crc-pipe-d:
    - shard-apl:          NOTRUN -> [SKIP][59] ([fdo#109271] / [i915#533])
   [59]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/shard-apl2/igt@kms_pipe_crc_basic@read-crc-pipe-d.html

  * igt@kms_plane_alpha_blend@pipe-b-alpha-basic:
    - shard-apl:          NOTRUN -> [FAIL][60] ([fdo#108145] / [i915#265])
   [60]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/shard-apl3/igt@kms_plane_alpha_blend@pipe-b-alpha-basic.html

  * igt@kms_plane_alpha_blend@pipe-c-coverage-7efc:
    - shard-skl:          [PASS][61] -> [FAIL][62] ([fdo#108145] / [i915#265]) +1 similar issue
   [61]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10569/shard-skl7/igt@kms_plane_alpha_blend@pipe-c-coverage-7efc.html
   [62]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/shard-skl2/igt@kms_plane_alpha_blend@pipe-c-coverage-7efc.html

  * igt@kms_plane_lowres@pipe-a-tiling-none:
    - shard-tglb:         NOTRUN -> [SKIP][63] ([i915#3536]) +1 similar issue
   [63]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/shard-tglb1/igt@kms_plane_lowres@pipe-a-tiling-none.html

  * igt@kms_plane_lowres@pipe-b-tiling-yf:
    - shard-tglb:         NOTRUN -> [SKIP][64] ([fdo#112054]) +1 similar issue
   [64]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/shard-tglb2/igt@kms_plane_lowres@pipe-b-tiling-yf.html

  * igt@kms_plane_lowres@pipe-c-tiling-none:
    - shard-iclb:         NOTRUN -> [SKIP][65] ([i915#3536])
   [65]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/shard-iclb7/igt@kms_plane_lowres@pipe-c-tiling-none.html

  * igt@kms_psr2_sf@cursor-plane-update-sf:
    - shard-skl:          NOTRUN -> [SKIP][66] ([fdo#109271] / [i915#658])
   [66]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/shard-skl9/igt@kms_psr2_sf@cursor-plane-update-sf.html

  * igt@kms_psr2_sf@overlay-plane-update-sf-dmg-area-1:
    - shard-apl:          NOTRUN -> [SKIP][67] ([fdo#109271] / [i915#658]) +3 similar issues
   [67]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/shard-apl7/igt@kms_psr2_sf@overlay-plane-update-sf-dmg-area-1.html

  * igt@kms_psr2_sf@plane-move-sf-dmg-area-2:
    - shard-tglb:         NOTRUN -> [SKIP][68] ([i915#2920]) +1 similar issue
   [68]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/shard-tglb1/igt@kms_psr2_sf@plane-move-sf-dmg-area-2.html

  * igt@kms_psr2_sf@primary-plane-update-sf-dmg-area-3:
    - shard-kbl:          NOTRUN -> [SKIP][69] ([fdo#109271] / [i915#658])
   [69]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/shard-kbl1/igt@kms_psr2_sf@primary-plane-update-sf-dmg-area-3.html

  * igt@kms_psr2_su@frontbuffer:
    - shard-iclb:         [PASS][70] -> [SKIP][71] ([fdo#109642] / [fdo#111068] / [i915#658])
   [70]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10569/shard-iclb2/igt@kms_psr2_su@frontbuffer.html
   [71]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/shard-iclb8/igt@kms_psr2_su@frontbuffer.html

  * igt@kms_psr@psr2_cursor_plane_onoff:
    - shard-tglb:         NOTRUN -> [FAIL][72] ([i915#132] / [i915#3467]) +3 similar issues
   [72]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/shard-tglb6/igt@kms_psr@psr2_cursor_plane_onoff.html
    - shard-iclb:         NOTRUN -> [SKIP][73] ([fdo#109441])
   [73]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/shard-iclb7/igt@kms_psr@psr2_cursor_plane_onoff.html

  * igt@kms_psr@psr2_no_drrs:
    - shard-iclb:         [PASS][74] -> [SKIP][75] ([fdo#109441])
   [74]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10569/shard-iclb2/igt@kms_psr@psr2_no_drrs.html
   [75]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/shard-iclb3/igt@kms_psr@psr2_no_drrs.html

  * igt@kms_rotation_crc@cursor-rotation-180:
    - shard-glk:          [PASS][76] -> [FAIL][77] ([i915#65])
   [76]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10569/shard-glk7/igt@kms_rotation_crc@cursor-rotation-180.html
   [77]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/shard-glk2/igt@kms_rotation_crc@cursor-rotation-180.html

  * igt@nouveau_crc@pipe-b-source-outp-complete:
    - shard-tglb:         NOTRUN -> [SKIP][78] ([i915#2530]) +1 similar issue
   [78]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/shard-tglb6/igt@nouveau_crc@pipe-b-source-outp-complete.html

  * igt@prime_nv_pcopy@test3_2:
    - shard-tglb:         NOTRUN -> [SKIP][79] ([fdo#109291]) +3 similar issues
   [79]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/shard-tglb1/igt@prime_nv_pcopy@test3_2.html

  * igt@prime_vgem@fence-flip-hang:
    - shard-tglb:         NOTRUN -> [SKIP][80] ([fdo#109295])
   [80]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/shard-tglb6/igt@prime_vgem@fence-flip-hang.html

  * igt@runner@aborted:
    - shard-tglb:         NOTRUN -> ([FAIL][81], [FAIL][82]) ([i915#3002] / [i915#3728])
   [81]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/shard-tglb2/igt@runner@aborted.html
   [82]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/shard-tglb8/igt@runner@aborted.html

  * igt@syncobj_timeline@single-wait-available-signaled:
    - shard-glk:          [PASS][83] -> [DMESG-WARN][84] ([i915#118] / [i915#95]) +1 similar issue
   [83]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10569/shard-glk1/igt@syncobj_timeline@single-wait-available-signaled.html
   [84]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/shard-glk8/igt@syncobj_timeline@single-wait-available-signaled.html

  * igt@sysfs_clients@fair-7:
    - shard-apl:          NOTRUN -> [SKIP][85] ([fdo#109271] / [i915#2994])
   [85]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/shard-apl6/igt@sysfs_clients@fair-7.html

  * igt@sysfs_clients@split-10:
    - shard-tglb:         NOTRUN -> [SKIP][86] ([i915#2994])
   [86]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/shard-tglb2/igt@sysfs_clients@split-10.html

  
#### Possible fixes ####

  * igt@gem_ctx_persistence@engines-hang@vcs0:
    - {shard-rkl}:        [FAIL][87] ([i915#2410]) -> [PASS][88]
   [87]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10569/shard-rkl-2/igt@gem_ctx_persistence@engines-hang@vcs0.html
   [88]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/shard-rkl-1/igt@gem_ctx_persistence@engines-hang@vcs0.html

  * igt@gem_eio@unwedge-stress:
    - shard-iclb:         [TIMEOUT][89] ([i915#2369] / [i915#2481] / [i915#3070]) -> [PASS][90]
   [89]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10569/shard-iclb6/igt@gem_eio@unwedge-stress.html
   [90]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/shard-iclb1/igt@gem_eio@unwedge-stress.html

  * igt@gem_exec_fair@basic-none-rrul@rcs0:
    - shard-glk:          [FAIL][91] ([i915#2842]) -> [PASS][92] +1 similar issue
   [91]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10569/shard-glk3/igt@gem_exec_fair@basic-none-rrul@rcs0.html
   [92]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/shard-glk5/igt@gem_exec_fair@basic-none-rrul@rcs0.html

  * igt@gem_exec_fair@basic-none@rcs0:
    - shard-kbl:          [FAIL][93] ([i915#2842]) -> [PASS][94]
   [93]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10569/shard-kbl2/igt@gem_exec_fair@basic-none@rcs0.html
   [94]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/shard-kbl6/igt@gem_exec_fair@basic-none@rcs0.html

  * igt@gem_exec_fair@basic-sync@rcs0:
    - shard-kbl:          [SKIP][95] ([fdo#109271]) -> [PASS][96]
   [95]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10569/shard-kbl6/igt@gem_exec_fair@basic-sync@rcs0.html
   [96]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/shard-kbl7/igt@gem_exec_fair@basic-sync@rcs0.html

  * igt@gem_exec_suspend@basic-s3:
    - shard-skl:          [INCOMPLETE][97] ([i915#198]) -> [PASS][98]
   [97]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10569/shard-skl1/igt@gem_exec_suspend@basic-s3.html
   [98]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/shard-skl9/igt@gem_exec_suspend@basic-s3.html

  * igt@gem_mmap_gtt@cpuset-big-copy-odd:
    - {shard-rkl}:        [FAIL][99] ([i915#307]) -> [PASS][100]
   [99]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10569/shard-rkl-2/igt@gem_mmap_gtt@cpuset-big-copy-odd.html
   [100]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/shard-rkl-5/igt@gem_mmap_gtt@cpuset-big-copy-odd.html

  * igt@gem_mmap_gtt@cpuset-big-copy-xy:
    - shard-iclb:         [FAIL][101] ([i915#307]) -> [PASS][102]
   [101]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10569/shard-iclb7/igt@gem_mmap_gtt@cpuset-big-copy-xy.html
   [102]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/shard-iclb2/igt@gem_mmap_gtt@cpuset-big-copy-xy.html

  * igt@i915_pm_dc@dc6-psr:
    - shard-iclb:         [FAIL][103] ([i915#454]) -> [PASS][104]
   [103]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10569/shard-iclb3/igt@i915_pm_dc@dc6-psr.html
   [104]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/shard-iclb5/igt@i915_pm_dc@dc6-psr.html

  * igt@i915_pm_rpm@cursor:
    - {shard-rkl}:        [SKIP][105] ([i915#1849]) -> [PASS][106] +15 similar issues
   [105]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10569/shard-rkl-5/igt@i915_pm_rpm@cursor.html
   [106]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/shard-rkl-6/igt@i915_pm_rpm@cursor.html

  * igt@i915_pm_rps@waitboost:
    - {shard-rkl}:        [FAIL][107] -> [PASS][108]
   [107]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10569/shard-rkl-2/igt@i915_pm_rps@waitboost.html
   [108]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/shard-rkl-1/igt@i915_pm_rps@waitboost.html

  * igt@i915_selftest@live@hangcheck:
    - shard-snb:          [INCOMPLETE][109] ([i915#3921]) -> [PASS][110]
   [109]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10569/shard-snb6/igt@i915_selftest@live@hangcheck.html
   [110]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/shard-snb5/igt@i915_selftest@live@hangcheck.html

  * igt@kms_big_fb@linear-8bpp-rotate-180:
    - {shard-rkl}:        [SKIP][111] ([i915#3638]) -> [PASS][112]
   [111]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10569/shard-rkl-5/igt@kms_big_fb@linear-8bpp-rotate-180.html
   [112]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/shard-rkl-6/igt@kms_big_fb@linear-8bpp-rotate-180.html

  * igt@kms_big_fb@x-tiled-max-hw-stride-32bpp-rotate-180-hflip-async-flip:
    - {shard-rkl}:        [SKIP][113] ([i915#3721]) -> [PASS][114] +2 similar issues
   [113]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10569/shard-rkl-5/igt@kms_big_fb@x-tiled-max-hw-stride-32bpp-rotate-180-hflip-async-flip.html
   [114]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/shard-rkl-6/igt@kms_big_fb@x-tiled-max-hw-stride-32bpp-rotate-180-hflip-async-flip.html

  * igt@kms_ccs@pipe-a-bad-rotation-90-y_tiled_gen12_rc_ccs:
    - {shard-rkl}:        [SKIP][115] ([i915#1845]) -> [PASS][116] +1 similar issue
   [115]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10569/shard-rkl-5/igt@kms_ccs@pipe-a-bad-rotation-90-y_tiled_gen12_rc_ccs.html
   [116]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/shard-rkl-6/igt@kms_ccs@pipe-a-bad-rotation-90-y_tiled_gen12_rc_ccs.html

  * igt@kms_color@pipe-a-ctm-green-to-red:
    - shard-skl:          [DMESG-WARN][117] ([i915#1982]) -> [PASS][118] +1 similar issue
   [117]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10569/shard-skl9/igt@kms_color@pipe-a-ctm-green-to-red.html
   [118]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/shard-skl7/igt@kms_color@pipe-a-ctm-green-to-red.html

  * igt@kms_color@pipe-b-ctm-red-to-blue:
    - {shard-rkl}:        [SKIP][119] ([i915#1149] / [i915#1849] / [i915#4070]) -> [PASS][120]
   [119]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10569/shard-rkl-5/igt@kms_color@pipe-b-ctm-red-to-blue.html
   [120]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/shard-rkl-6/igt@kms_color@pipe-b-ctm-red-to-blue.html

  * igt@kms_cursor_crc@pipe-b-cursor-256x85-rapid-movement:
    - {shard-rkl}:        [SKIP][121] ([fdo#112022] / [i915#4070]) -> [PASS][122] +3 similar issues
   [121]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10569/shard-rkl-5/igt@kms_cursor_crc@pipe-b-cursor-256x85-rapid-movement.html
   [122]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/shard-rkl-6/igt@kms_cursor_crc@pipe-b-cursor-256x85-rapid-movement.html

  * igt@kms_cursor_edge_walk@pipe-a-128x128-top-edge:
    - {shard-rkl}:        [SKIP][123] ([i915#1849] / [i915#4070]) -> [PASS][124] +1 similar issue
   [123]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10569/shard-rkl-5/igt@kms_cursor_edge_walk@pipe-a-128x128-top-edge.html
   [124]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/shard-rkl-6/igt@kms_cursor_edge_walk@pipe-a-128x128-top-edge.html

  * igt@kms_cursor_legacy@flip-vs-cursor-atomic-transitions:
    - shard-skl:          [FAIL][125] ([i915#2346]) -> [PASS][126]
   [125]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10569/shard-skl1/igt@kms_cursor_legacy@flip-vs-cursor-atomic-transitions.html
   [126]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/shard-skl6/igt@kms_cursor_legacy@flip-vs-cursor-atomic-transitions.html

  * igt@kms_cursor_legacy@flip-vs-cursor-crc-legacy:
    - {shard-rkl}:        [SKIP][127] ([fdo#111825] / [i915#4070]) -> [PASS][128]
   [127]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10569/shard-rkl-5/igt@kms_cursor_legacy@flip-vs-cursor-crc-legacy.html
   [128]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/shard-rkl-6/igt@kms_cursor_legacy@flip-vs-cursor-crc-legacy.html

  * igt@kms_draw_crc@draw-method-rgb565-mmap-gtt-xtiled:
    - {shard-rkl}:        [SKIP][129] ([fdo#111314]) -> [PASS][130] +2 similar issues
   [129]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10569/shard-rkl-5/igt@kms_draw_crc@draw-method-rgb565-mmap-gtt-xtiled.html
   [130]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/shard-rkl-6/igt@kms_draw_crc@draw-method-rgb565-mmap-gtt-xtiled.html

  * igt@kms_flip@2x-flip-vs-expired-vblank@ab-hdmi-a1-hdmi-a2:
    - shard-glk:          [FAIL][131] ([i915#2122]) -> [PASS][132]
   [131]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10569/shard-glk2/igt@kms_flip@2x-flip-vs-expired-vblank@ab-hdmi-a1-hdmi-a2.html
   [132]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/shard-glk8/igt@kms_flip@2x-flip-vs-expired-vblank@ab-hdmi-a1-hdmi-a2.html

  * igt@kms_flip@flip-vs-expired-vblank-interruptible@a-edp1:
    - shard-skl:          [FAIL][133] ([i915#79]) -> [PASS][134]
   [133]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10569/shard-skl7/igt@kms_flip@flip-vs-expired-vblank-interruptible@a-edp1.html
   [134]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/shard-skl9/igt@kms_flip@flip-vs-expired-vblank-interruptible@a-edp1.html

  * igt@kms_flip@flip-vs-expired-vblank@a-dp1:
    - shard-apl:          [FAIL][135] ([i915#79]) -> [PASS][136]
   [135]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10569/shard-apl3/igt@kms_flip@flip-vs-expired-vblank@a-dp1.html
   [136]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/shard-apl2/igt@kms_flip@flip-vs-expired-vblank@a-dp1.html

  * igt@kms_flip@flip-vs-suspend-interruptible@a-dp1:
    - shard-kbl:          [DMESG-WARN][137] ([i915#180]) -> [PASS][138] +2 similar issues
   [137]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10569/shard-kbl4/igt@kms_flip@flip-vs-suspend-interruptible@a-dp1.html
   [138]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/shard-kbl1/igt@kms_flip@flip-vs-suspend-interruptible@a-dp1.html

  * igt@kms_frontbuffer_tracking@fbc-2p-pri-indfb-multidraw:
    - shard-glk:          [DMESG-FAIL][139] ([i915#118] / [i915#95]) -> [PASS][140]
   [139]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10569/shard-glk4/igt@kms_frontbuffer_tracking@fbc-2p-pri-indfb-multidraw.html
   [140]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/shard-glk9/igt@kms_frontbuffer_tracking@fbc-2p-pri-indfb-multidraw.html

  * igt@kms_frontbuffer_tracking@psr-suspend:
    - shard-tglb:         [INCOMPLETE][141] ([i915#2411] / [i915#456]) -> [PASS][142] +1 similar issue
   [141]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10569/shard-tglb7/igt@kms_frontbuffer_tracking@psr-suspend.html
   [142]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/shard-tglb8/igt@kms_frontbuffer_tracking@psr-suspend.html

  * igt@kms_pipe_crc_basic@suspend-read-crc-pipe-d:
    - shard-tglb:         [INCOMPLETE][143] -> [PASS][144]
   [143]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_10569/shard-tglb7/igt@kms_pipe_crc_basic@suspend-read-crc-pipe-d.html
   [144]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/shard-tglb1/igt@kms_

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_21012/index.html

[-- Attachment #2: Type: text/html, Size: 33562 bytes --]

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [Intel-gfx] [RFC PATCH] drm/ttm: Add a private member to the struct ttm_resource
  2021-09-10 14:40   ` [Intel-gfx] " Christian König
@ 2021-09-10 15:30     ` Thomas Hellström
  -1 siblings, 0 replies; 35+ messages in thread
From: Thomas Hellström @ 2021-09-10 15:30 UTC (permalink / raw)
  To: Christian König, intel-gfx, dri-devel
  Cc: maarten.lankhorst, matthew.auld, Matthew Auld

On Fri, 2021-09-10 at 16:40 +0200, Christian König wrote:
> 
> 
> Am 10.09.21 um 15:15 schrieb Thomas Hellström:
> > Both the provider (resource manager) and the consumer (the TTM
> > driver)
> > want to subclass struct ttm_resource. Since this is left for the
> > resource
> > manager, we need to provide a private pointer for the TTM driver.
> > 
> > Provide a struct ttm_resource_private for the driver to subclass
> > for
> > data with the same lifetime as the struct ttm_resource: In the i915
> > case
> > it will, for example, be an sg-table and radix tree into the LMEM
> > /VRAM pages that currently are awkwardly attached to the GEM
> > object.
> > 
> > Provide an ops structure for associated ops (Which is only
> > destroy() ATM)
> > It might seem pointless to provide a separate ops structure, but
> > Linus
> > has previously made it clear that that's the norm.
> > 
> > After careful audit one could perhaps also on a per-driver basis
> > replace the delete_mem_notify() TTM driver callback with the above
> > destroy function.
> 
> Well this is a really big NAK to this approach.
> 
> If you need to attach some additional information to the resource
> then 
> implement your own resource manager like everybody else does.

Well this was the long discussion we had back then when the resource
mangagers started to derive from struct resource and I was under the
impression that we had come to an agreement about the different use-
cases here, and this was my main concern.

I mean, it's a pretty big layer violation to do that for this use-case.
The TTM resource manager doesn't want to know about this data at all,
it's private to the ttm resource user layer and the resource manager
works perfectly well without it. (I assume the other drivers that
implement their own resource managers need the data that the
subclassing provides?)

The fundamental problem here is that there are two layers wanting to
subclass struct ttm_resource. That means one layer gets to do that, the
second gets to use a private pointer, (which in turn can provide yet
another private pointer to a potential third layer). With your
suggestion, the second layer instead is forced to subclass each
subclassed instance it uses from  the first layer provides?

Ofc we can do that, but it does indeed feel pretty awkward.

In any case, if you still think that's the approach we should go for,
I'd need to add init() and fini() members to the ttm_range_manager_func
struct to allow subclassing without having to unnecessarily copy the
full code? 

Thanks,
Thomas










> 
> Regards,
> Christian.
> 
> > 
> > Cc: Matthew Auld <matthew.william.auld@gmail.com>
> > Cc: König Christian <Christian.Koenig@amd.com>
> > Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> > ---
> >   drivers/gpu/drm/ttm/ttm_resource.c | 10 +++++++---
> >   include/drm/ttm/ttm_resource.h     | 28
> > ++++++++++++++++++++++++++++
> >   2 files changed, 35 insertions(+), 3 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/ttm/ttm_resource.c
> > b/drivers/gpu/drm/ttm/ttm_resource.c
> > index 2431717376e7..973e7c50bfed 100644
> > --- a/drivers/gpu/drm/ttm/ttm_resource.c
> > +++ b/drivers/gpu/drm/ttm/ttm_resource.c
> > @@ -57,13 +57,17 @@ int ttm_resource_alloc(struct ttm_buffer_object
> > *bo,
> >   void ttm_resource_free(struct ttm_buffer_object *bo, struct
> > ttm_resource **res)
> >   {
> >         struct ttm_resource_manager *man;
> > +       struct ttm_resource *resource = *res;
> >   
> > -       if (!*res)
> > +       if (!resource)
> >                 return;
> >   
> > -       man = ttm_manager_type(bo->bdev, (*res)->mem_type);
> > -       man->func->free(man, *res);
> >         *res = NULL;
> > +       if (resource->priv)
> > +               resource->priv->ops.destroy(resource->priv);
> > +
> > +       man = ttm_manager_type(bo->bdev, resource->mem_type);
> > +       man->func->free(man, resource);
> >   }
> >   EXPORT_SYMBOL(ttm_resource_free);
> >   
> > diff --git a/include/drm/ttm/ttm_resource.h
> > b/include/drm/ttm/ttm_resource.h
> > index 140b6b9a8bbe..5a22c9a29c05 100644
> > --- a/include/drm/ttm/ttm_resource.h
> > +++ b/include/drm/ttm/ttm_resource.h
> > @@ -44,6 +44,7 @@ struct dma_buf_map;
> >   struct io_mapping;
> >   struct sg_table;
> >   struct scatterlist;
> > +struct ttm_resource_private;
> >   
> >   struct ttm_resource_manager_func {
> >         /**
> > @@ -153,6 +154,32 @@ struct ttm_bus_placement {
> >         enum ttm_caching        caching;
> >   };
> >   
> > +/**
> > + * struct ttm_resource_private_ops - Operations for a struct
> > + * ttm_resource_private
> > + *
> > + * Not much benefit to keep this as a separate struct with only a
> > single member,
> > + * but keeping a separate ops struct is the norm.
> > + */
> > +struct ttm_resource_private_ops {
> > +       /**
> > +        * destroy() - Callback to destroy the private data
> > +        * @priv - The private data to destroy
> > +        */
> > +       void (*destroy) (struct ttm_resource_private *priv);
> > +};
> > +
> > +/**
> > + * struct ttm_resource_private - TTM driver private data
> > + * @ops: Pointer to struct ttm_resource_private_ops with
> > associated operations
> > + *
> > + * Intended to be subclassed to hold, for example cached data
> > sharing the
> > + * lifetime with a struct ttm_resource.
> > + */
> > +struct ttm_resource_private {
> > +       const struct ttm_resource_private_ops ops;
> > +};
> > +
> >   /**
> >    * struct ttm_resource
> >    *
> > @@ -171,6 +198,7 @@ struct ttm_resource {
> >         uint32_t mem_type;
> >         uint32_t placement;
> >         struct ttm_bus_placement bus;
> > +       struct ttm_resource_private *priv;
> >   };
> >   
> >   /**
> 



^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [RFC PATCH] drm/ttm: Add a private member to the struct ttm_resource
@ 2021-09-10 15:30     ` Thomas Hellström
  0 siblings, 0 replies; 35+ messages in thread
From: Thomas Hellström @ 2021-09-10 15:30 UTC (permalink / raw)
  To: Christian König, intel-gfx, dri-devel
  Cc: maarten.lankhorst, matthew.auld, Matthew Auld

On Fri, 2021-09-10 at 16:40 +0200, Christian König wrote:
> 
> 
> Am 10.09.21 um 15:15 schrieb Thomas Hellström:
> > Both the provider (resource manager) and the consumer (the TTM
> > driver)
> > want to subclass struct ttm_resource. Since this is left for the
> > resource
> > manager, we need to provide a private pointer for the TTM driver.
> > 
> > Provide a struct ttm_resource_private for the driver to subclass
> > for
> > data with the same lifetime as the struct ttm_resource: In the i915
> > case
> > it will, for example, be an sg-table and radix tree into the LMEM
> > /VRAM pages that currently are awkwardly attached to the GEM
> > object.
> > 
> > Provide an ops structure for associated ops (Which is only
> > destroy() ATM)
> > It might seem pointless to provide a separate ops structure, but
> > Linus
> > has previously made it clear that that's the norm.
> > 
> > After careful audit one could perhaps also on a per-driver basis
> > replace the delete_mem_notify() TTM driver callback with the above
> > destroy function.
> 
> Well this is a really big NAK to this approach.
> 
> If you need to attach some additional information to the resource
> then 
> implement your own resource manager like everybody else does.

Well this was the long discussion we had back then when the resource
mangagers started to derive from struct resource and I was under the
impression that we had come to an agreement about the different use-
cases here, and this was my main concern.

I mean, it's a pretty big layer violation to do that for this use-case.
The TTM resource manager doesn't want to know about this data at all,
it's private to the ttm resource user layer and the resource manager
works perfectly well without it. (I assume the other drivers that
implement their own resource managers need the data that the
subclassing provides?)

The fundamental problem here is that there are two layers wanting to
subclass struct ttm_resource. That means one layer gets to do that, the
second gets to use a private pointer, (which in turn can provide yet
another private pointer to a potential third layer). With your
suggestion, the second layer instead is forced to subclass each
subclassed instance it uses from  the first layer provides?

Ofc we can do that, but it does indeed feel pretty awkward.

In any case, if you still think that's the approach we should go for,
I'd need to add init() and fini() members to the ttm_range_manager_func
struct to allow subclassing without having to unnecessarily copy the
full code? 

Thanks,
Thomas










> 
> Regards,
> Christian.
> 
> > 
> > Cc: Matthew Auld <matthew.william.auld@gmail.com>
> > Cc: König Christian <Christian.Koenig@amd.com>
> > Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> > ---
> >   drivers/gpu/drm/ttm/ttm_resource.c | 10 +++++++---
> >   include/drm/ttm/ttm_resource.h     | 28
> > ++++++++++++++++++++++++++++
> >   2 files changed, 35 insertions(+), 3 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/ttm/ttm_resource.c
> > b/drivers/gpu/drm/ttm/ttm_resource.c
> > index 2431717376e7..973e7c50bfed 100644
> > --- a/drivers/gpu/drm/ttm/ttm_resource.c
> > +++ b/drivers/gpu/drm/ttm/ttm_resource.c
> > @@ -57,13 +57,17 @@ int ttm_resource_alloc(struct ttm_buffer_object
> > *bo,
> >   void ttm_resource_free(struct ttm_buffer_object *bo, struct
> > ttm_resource **res)
> >   {
> >         struct ttm_resource_manager *man;
> > +       struct ttm_resource *resource = *res;
> >   
> > -       if (!*res)
> > +       if (!resource)
> >                 return;
> >   
> > -       man = ttm_manager_type(bo->bdev, (*res)->mem_type);
> > -       man->func->free(man, *res);
> >         *res = NULL;
> > +       if (resource->priv)
> > +               resource->priv->ops.destroy(resource->priv);
> > +
> > +       man = ttm_manager_type(bo->bdev, resource->mem_type);
> > +       man->func->free(man, resource);
> >   }
> >   EXPORT_SYMBOL(ttm_resource_free);
> >   
> > diff --git a/include/drm/ttm/ttm_resource.h
> > b/include/drm/ttm/ttm_resource.h
> > index 140b6b9a8bbe..5a22c9a29c05 100644
> > --- a/include/drm/ttm/ttm_resource.h
> > +++ b/include/drm/ttm/ttm_resource.h
> > @@ -44,6 +44,7 @@ struct dma_buf_map;
> >   struct io_mapping;
> >   struct sg_table;
> >   struct scatterlist;
> > +struct ttm_resource_private;
> >   
> >   struct ttm_resource_manager_func {
> >         /**
> > @@ -153,6 +154,32 @@ struct ttm_bus_placement {
> >         enum ttm_caching        caching;
> >   };
> >   
> > +/**
> > + * struct ttm_resource_private_ops - Operations for a struct
> > + * ttm_resource_private
> > + *
> > + * Not much benefit to keep this as a separate struct with only a
> > single member,
> > + * but keeping a separate ops struct is the norm.
> > + */
> > +struct ttm_resource_private_ops {
> > +       /**
> > +        * destroy() - Callback to destroy the private data
> > +        * @priv - The private data to destroy
> > +        */
> > +       void (*destroy) (struct ttm_resource_private *priv);
> > +};
> > +
> > +/**
> > + * struct ttm_resource_private - TTM driver private data
> > + * @ops: Pointer to struct ttm_resource_private_ops with
> > associated operations
> > + *
> > + * Intended to be subclassed to hold, for example cached data
> > sharing the
> > + * lifetime with a struct ttm_resource.
> > + */
> > +struct ttm_resource_private {
> > +       const struct ttm_resource_private_ops ops;
> > +};
> > +
> >   /**
> >    * struct ttm_resource
> >    *
> > @@ -171,6 +198,7 @@ struct ttm_resource {
> >         uint32_t mem_type;
> >         uint32_t placement;
> >         struct ttm_bus_placement bus;
> > +       struct ttm_resource_private *priv;
> >   };
> >   
> >   /**
> 



^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [RFC PATCH] drm/ttm: Add a private member to the struct ttm_resource
  2021-09-10 15:30     ` Thomas Hellström
@ 2021-09-10 17:03       ` Christian König
  -1 siblings, 0 replies; 35+ messages in thread
From: Christian König @ 2021-09-10 17:03 UTC (permalink / raw)
  To: Thomas Hellström, intel-gfx, dri-devel
  Cc: maarten.lankhorst, matthew.auld, Matthew Auld

Am 10.09.21 um 17:30 schrieb Thomas Hellström:
> On Fri, 2021-09-10 at 16:40 +0200, Christian König wrote:
>>
>> Am 10.09.21 um 15:15 schrieb Thomas Hellström:
>>> Both the provider (resource manager) and the consumer (the TTM
>>> driver)
>>> want to subclass struct ttm_resource. Since this is left for the
>>> resource
>>> manager, we need to provide a private pointer for the TTM driver.
>>>
>>> Provide a struct ttm_resource_private for the driver to subclass
>>> for
>>> data with the same lifetime as the struct ttm_resource: In the i915
>>> case
>>> it will, for example, be an sg-table and radix tree into the LMEM
>>> /VRAM pages that currently are awkwardly attached to the GEM
>>> object.
>>>
>>> Provide an ops structure for associated ops (Which is only
>>> destroy() ATM)
>>> It might seem pointless to provide a separate ops structure, but
>>> Linus
>>> has previously made it clear that that's the norm.
>>>
>>> After careful audit one could perhaps also on a per-driver basis
>>> replace the delete_mem_notify() TTM driver callback with the above
>>> destroy function.
>> Well this is a really big NAK to this approach.
>>
>> If you need to attach some additional information to the resource
>> then
>> implement your own resource manager like everybody else does.
> Well this was the long discussion we had back then when the resource
> mangagers started to derive from struct resource and I was under the
> impression that we had come to an agreement about the different use-
> cases here, and this was my main concern.

Ok, then we somehow didn't understood each other.

> I mean, it's a pretty big layer violation to do that for this use-case.

Well exactly that's the point. TTM should not have a layer design in the 
first place.

Devices, BOs, resources etc.. are base classes which should implement a 
base functionality which is then extended by the drivers to implement 
the driver specific functionality.

That is a component based approach, and not layered at all.

> The TTM resource manager doesn't want to know about this data at all,
> it's private to the ttm resource user layer and the resource manager
> works perfectly well without it. (I assume the other drivers that
> implement their own resource managers need the data that the
> subclassing provides?)

Yes, that's exactly why we have the subclassing.

> The fundamental problem here is that there are two layers wanting to
> subclass struct ttm_resource. That means one layer gets to do that, the
> second gets to use a private pointer, (which in turn can provide yet
> another private pointer to a potential third layer). With your
> suggestion, the second layer instead is forced to subclass each
> subclassed instance it uses from  the first layer provides?

Well completely drop the layer approach/thinking here.

The resource is an object with a base class. The base class implements 
the interface TTM needs to handle the object, e.g. create/destroy/debug 
etc...

Then we need to subclass this object because without any additional 
information the object is pretty pointless.

One possibility for this is to use the range manager to implement 
something drm_mm based. BTW: We should probably rename that to something 
like ttm_res_drm_mm or similar.

What we should avoid is to abuse TTM resource interfaces in the driver, 
e.g. what i915 is currently doing. This is a TTM->resource mgr interface 
and should not be used by drivers at all.

> Ofc we can do that, but it does indeed feel pretty awkward.
>
> In any case, if you still think that's the approach we should go for,
> I'd need to add init() and fini() members to the ttm_range_manager_func
> struct to allow subclassing without having to unnecessarily copy the
> full code?

Yes, exporting the ttm_range_manager functions as needed is one thing I 
wanted to do for the amdgpu_gtt_mgr.c code as well.

Just don't extend the function table but rather directly export the 
necessary functions.

Regards,
Christian.

>
> Thanks,
> Thomas
>
>
>
>
>
>
>
>
>
>
>> Regards,
>> Christian.
>>
>>> Cc: Matthew Auld <matthew.william.auld@gmail.com>
>>> Cc: König Christian <Christian.Koenig@amd.com>
>>> Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
>>> ---
>>>    drivers/gpu/drm/ttm/ttm_resource.c | 10 +++++++---
>>>    include/drm/ttm/ttm_resource.h     | 28
>>> ++++++++++++++++++++++++++++
>>>    2 files changed, 35 insertions(+), 3 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/ttm/ttm_resource.c
>>> b/drivers/gpu/drm/ttm/ttm_resource.c
>>> index 2431717376e7..973e7c50bfed 100644
>>> --- a/drivers/gpu/drm/ttm/ttm_resource.c
>>> +++ b/drivers/gpu/drm/ttm/ttm_resource.c
>>> @@ -57,13 +57,17 @@ int ttm_resource_alloc(struct ttm_buffer_object
>>> *bo,
>>>    void ttm_resource_free(struct ttm_buffer_object *bo, struct
>>> ttm_resource **res)
>>>    {
>>>          struct ttm_resource_manager *man;
>>> +       struct ttm_resource *resource = *res;
>>>    
>>> -       if (!*res)
>>> +       if (!resource)
>>>                  return;
>>>    
>>> -       man = ttm_manager_type(bo->bdev, (*res)->mem_type);
>>> -       man->func->free(man, *res);
>>>          *res = NULL;
>>> +       if (resource->priv)
>>> +               resource->priv->ops.destroy(resource->priv);
>>> +
>>> +       man = ttm_manager_type(bo->bdev, resource->mem_type);
>>> +       man->func->free(man, resource);
>>>    }
>>>    EXPORT_SYMBOL(ttm_resource_free);
>>>    
>>> diff --git a/include/drm/ttm/ttm_resource.h
>>> b/include/drm/ttm/ttm_resource.h
>>> index 140b6b9a8bbe..5a22c9a29c05 100644
>>> --- a/include/drm/ttm/ttm_resource.h
>>> +++ b/include/drm/ttm/ttm_resource.h
>>> @@ -44,6 +44,7 @@ struct dma_buf_map;
>>>    struct io_mapping;
>>>    struct sg_table;
>>>    struct scatterlist;
>>> +struct ttm_resource_private;
>>>    
>>>    struct ttm_resource_manager_func {
>>>          /**
>>> @@ -153,6 +154,32 @@ struct ttm_bus_placement {
>>>          enum ttm_caching        caching;
>>>    };
>>>    
>>> +/**
>>> + * struct ttm_resource_private_ops - Operations for a struct
>>> + * ttm_resource_private
>>> + *
>>> + * Not much benefit to keep this as a separate struct with only a
>>> single member,
>>> + * but keeping a separate ops struct is the norm.
>>> + */
>>> +struct ttm_resource_private_ops {
>>> +       /**
>>> +        * destroy() - Callback to destroy the private data
>>> +        * @priv - The private data to destroy
>>> +        */
>>> +       void (*destroy) (struct ttm_resource_private *priv);
>>> +};
>>> +
>>> +/**
>>> + * struct ttm_resource_private - TTM driver private data
>>> + * @ops: Pointer to struct ttm_resource_private_ops with
>>> associated operations
>>> + *
>>> + * Intended to be subclassed to hold, for example cached data
>>> sharing the
>>> + * lifetime with a struct ttm_resource.
>>> + */
>>> +struct ttm_resource_private {
>>> +       const struct ttm_resource_private_ops ops;
>>> +};
>>> +
>>>    /**
>>>     * struct ttm_resource
>>>     *
>>> @@ -171,6 +198,7 @@ struct ttm_resource {
>>>          uint32_t mem_type;
>>>          uint32_t placement;
>>>          struct ttm_bus_placement bus;
>>> +       struct ttm_resource_private *priv;
>>>    };
>>>    
>>>    /**
>


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [Intel-gfx] [RFC PATCH] drm/ttm: Add a private member to the struct ttm_resource
@ 2021-09-10 17:03       ` Christian König
  0 siblings, 0 replies; 35+ messages in thread
From: Christian König @ 2021-09-10 17:03 UTC (permalink / raw)
  To: Thomas Hellström, intel-gfx, dri-devel
  Cc: maarten.lankhorst, matthew.auld, Matthew Auld

Am 10.09.21 um 17:30 schrieb Thomas Hellström:
> On Fri, 2021-09-10 at 16:40 +0200, Christian König wrote:
>>
>> Am 10.09.21 um 15:15 schrieb Thomas Hellström:
>>> Both the provider (resource manager) and the consumer (the TTM
>>> driver)
>>> want to subclass struct ttm_resource. Since this is left for the
>>> resource
>>> manager, we need to provide a private pointer for the TTM driver.
>>>
>>> Provide a struct ttm_resource_private for the driver to subclass
>>> for
>>> data with the same lifetime as the struct ttm_resource: In the i915
>>> case
>>> it will, for example, be an sg-table and radix tree into the LMEM
>>> /VRAM pages that currently are awkwardly attached to the GEM
>>> object.
>>>
>>> Provide an ops structure for associated ops (Which is only
>>> destroy() ATM)
>>> It might seem pointless to provide a separate ops structure, but
>>> Linus
>>> has previously made it clear that that's the norm.
>>>
>>> After careful audit one could perhaps also on a per-driver basis
>>> replace the delete_mem_notify() TTM driver callback with the above
>>> destroy function.
>> Well this is a really big NAK to this approach.
>>
>> If you need to attach some additional information to the resource
>> then
>> implement your own resource manager like everybody else does.
> Well this was the long discussion we had back then when the resource
> mangagers started to derive from struct resource and I was under the
> impression that we had come to an agreement about the different use-
> cases here, and this was my main concern.

Ok, then we somehow didn't understood each other.

> I mean, it's a pretty big layer violation to do that for this use-case.

Well exactly that's the point. TTM should not have a layer design in the 
first place.

Devices, BOs, resources etc.. are base classes which should implement a 
base functionality which is then extended by the drivers to implement 
the driver specific functionality.

That is a component based approach, and not layered at all.

> The TTM resource manager doesn't want to know about this data at all,
> it's private to the ttm resource user layer and the resource manager
> works perfectly well without it. (I assume the other drivers that
> implement their own resource managers need the data that the
> subclassing provides?)

Yes, that's exactly why we have the subclassing.

> The fundamental problem here is that there are two layers wanting to
> subclass struct ttm_resource. That means one layer gets to do that, the
> second gets to use a private pointer, (which in turn can provide yet
> another private pointer to a potential third layer). With your
> suggestion, the second layer instead is forced to subclass each
> subclassed instance it uses from  the first layer provides?

Well completely drop the layer approach/thinking here.

The resource is an object with a base class. The base class implements 
the interface TTM needs to handle the object, e.g. create/destroy/debug 
etc...

Then we need to subclass this object because without any additional 
information the object is pretty pointless.

One possibility for this is to use the range manager to implement 
something drm_mm based. BTW: We should probably rename that to something 
like ttm_res_drm_mm or similar.

What we should avoid is to abuse TTM resource interfaces in the driver, 
e.g. what i915 is currently doing. This is a TTM->resource mgr interface 
and should not be used by drivers at all.

> Ofc we can do that, but it does indeed feel pretty awkward.
>
> In any case, if you still think that's the approach we should go for,
> I'd need to add init() and fini() members to the ttm_range_manager_func
> struct to allow subclassing without having to unnecessarily copy the
> full code?

Yes, exporting the ttm_range_manager functions as needed is one thing I 
wanted to do for the amdgpu_gtt_mgr.c code as well.

Just don't extend the function table but rather directly export the 
necessary functions.

Regards,
Christian.

>
> Thanks,
> Thomas
>
>
>
>
>
>
>
>
>
>
>> Regards,
>> Christian.
>>
>>> Cc: Matthew Auld <matthew.william.auld@gmail.com>
>>> Cc: König Christian <Christian.Koenig@amd.com>
>>> Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
>>> ---
>>>    drivers/gpu/drm/ttm/ttm_resource.c | 10 +++++++---
>>>    include/drm/ttm/ttm_resource.h     | 28
>>> ++++++++++++++++++++++++++++
>>>    2 files changed, 35 insertions(+), 3 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/ttm/ttm_resource.c
>>> b/drivers/gpu/drm/ttm/ttm_resource.c
>>> index 2431717376e7..973e7c50bfed 100644
>>> --- a/drivers/gpu/drm/ttm/ttm_resource.c
>>> +++ b/drivers/gpu/drm/ttm/ttm_resource.c
>>> @@ -57,13 +57,17 @@ int ttm_resource_alloc(struct ttm_buffer_object
>>> *bo,
>>>    void ttm_resource_free(struct ttm_buffer_object *bo, struct
>>> ttm_resource **res)
>>>    {
>>>          struct ttm_resource_manager *man;
>>> +       struct ttm_resource *resource = *res;
>>>    
>>> -       if (!*res)
>>> +       if (!resource)
>>>                  return;
>>>    
>>> -       man = ttm_manager_type(bo->bdev, (*res)->mem_type);
>>> -       man->func->free(man, *res);
>>>          *res = NULL;
>>> +       if (resource->priv)
>>> +               resource->priv->ops.destroy(resource->priv);
>>> +
>>> +       man = ttm_manager_type(bo->bdev, resource->mem_type);
>>> +       man->func->free(man, resource);
>>>    }
>>>    EXPORT_SYMBOL(ttm_resource_free);
>>>    
>>> diff --git a/include/drm/ttm/ttm_resource.h
>>> b/include/drm/ttm/ttm_resource.h
>>> index 140b6b9a8bbe..5a22c9a29c05 100644
>>> --- a/include/drm/ttm/ttm_resource.h
>>> +++ b/include/drm/ttm/ttm_resource.h
>>> @@ -44,6 +44,7 @@ struct dma_buf_map;
>>>    struct io_mapping;
>>>    struct sg_table;
>>>    struct scatterlist;
>>> +struct ttm_resource_private;
>>>    
>>>    struct ttm_resource_manager_func {
>>>          /**
>>> @@ -153,6 +154,32 @@ struct ttm_bus_placement {
>>>          enum ttm_caching        caching;
>>>    };
>>>    
>>> +/**
>>> + * struct ttm_resource_private_ops - Operations for a struct
>>> + * ttm_resource_private
>>> + *
>>> + * Not much benefit to keep this as a separate struct with only a
>>> single member,
>>> + * but keeping a separate ops struct is the norm.
>>> + */
>>> +struct ttm_resource_private_ops {
>>> +       /**
>>> +        * destroy() - Callback to destroy the private data
>>> +        * @priv - The private data to destroy
>>> +        */
>>> +       void (*destroy) (struct ttm_resource_private *priv);
>>> +};
>>> +
>>> +/**
>>> + * struct ttm_resource_private - TTM driver private data
>>> + * @ops: Pointer to struct ttm_resource_private_ops with
>>> associated operations
>>> + *
>>> + * Intended to be subclassed to hold, for example cached data
>>> sharing the
>>> + * lifetime with a struct ttm_resource.
>>> + */
>>> +struct ttm_resource_private {
>>> +       const struct ttm_resource_private_ops ops;
>>> +};
>>> +
>>>    /**
>>>     * struct ttm_resource
>>>     *
>>> @@ -171,6 +198,7 @@ struct ttm_resource {
>>>          uint32_t mem_type;
>>>          uint32_t placement;
>>>          struct ttm_bus_placement bus;
>>> +       struct ttm_resource_private *priv;
>>>    };
>>>    
>>>    /**
>


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [RFC PATCH] drm/ttm: Add a private member to the struct ttm_resource
  2021-09-10 17:03       ` [Intel-gfx] " Christian König
@ 2021-09-11  6:07         ` Thomas Hellström
  -1 siblings, 0 replies; 35+ messages in thread
From: Thomas Hellström @ 2021-09-11  6:07 UTC (permalink / raw)
  To: Christian König, intel-gfx, dri-devel
  Cc: maarten.lankhorst, matthew.auld, Matthew Auld

On Fri, 2021-09-10 at 19:03 +0200, Christian König wrote:
> Am 10.09.21 um 17:30 schrieb Thomas Hellström:
> > On Fri, 2021-09-10 at 16:40 +0200, Christian König wrote:
> > > 
> > > Am 10.09.21 um 15:15 schrieb Thomas Hellström:
> > > > Both the provider (resource manager) and the consumer (the TTM
> > > > driver)
> > > > want to subclass struct ttm_resource. Since this is left for
> > > > the
> > > > resource
> > > > manager, we need to provide a private pointer for the TTM
> > > > driver.
> > > > 
> > > > Provide a struct ttm_resource_private for the driver to
> > > > subclass
> > > > for
> > > > data with the same lifetime as the struct ttm_resource: In the
> > > > i915
> > > > case
> > > > it will, for example, be an sg-table and radix tree into the
> > > > LMEM
> > > > /VRAM pages that currently are awkwardly attached to the GEM
> > > > object.
> > > > 
> > > > Provide an ops structure for associated ops (Which is only
> > > > destroy() ATM)
> > > > It might seem pointless to provide a separate ops structure,
> > > > but
> > > > Linus
> > > > has previously made it clear that that's the norm.
> > > > 
> > > > After careful audit one could perhaps also on a per-driver
> > > > basis
> > > > replace the delete_mem_notify() TTM driver callback with the
> > > > above
> > > > destroy function.
> > > Well this is a really big NAK to this approach.
> > > 
> > > If you need to attach some additional information to the resource
> > > then
> > > implement your own resource manager like everybody else does.
> > Well this was the long discussion we had back then when the
> > resource
> > mangagers started to derive from struct resource and I was under
> > the
> > impression that we had come to an agreement about the different
> > use-
> > cases here, and this was my main concern.
> 
> Ok, then we somehow didn't understood each other.
> 
> > I mean, it's a pretty big layer violation to do that for this use-
> > case.
> 
> Well exactly that's the point. TTM should not have a layer design in
> the 
> first place.
> 
> Devices, BOs, resources etc.. are base classes which should implement
> a 
> base functionality which is then extended by the drivers to implement
> the driver specific functionality.
> 
> That is a component based approach, and not layered at all.
> 
> > The TTM resource manager doesn't want to know about this data at
> > all,
> > it's private to the ttm resource user layer and the resource
> > manager
> > works perfectly well without it. (I assume the other drivers that
> > implement their own resource managers need the data that the
> > subclassing provides?)
> 
> Yes, that's exactly why we have the subclassing.
> 
> > The fundamental problem here is that there are two layers wanting
> > to
> > subclass struct ttm_resource. That means one layer gets to do that,
> > the
> > second gets to use a private pointer, (which in turn can provide
> > yet
> > another private pointer to a potential third layer). With your
> > suggestion, the second layer instead is forced to subclass each
> > subclassed instance it uses from  the first layer provides?
> 
> Well completely drop the layer approach/thinking here.
> 
> The resource is an object with a base class. The base class
> implements 
> the interface TTM needs to handle the object, e.g.
> create/destroy/debug 
> etc...
> 
> Then we need to subclass this object because without any additional 
> information the object is pretty pointless.
> 
> One possibility for this is to use the range manager to implement 
> something drm_mm based. BTW: We should probably rename that to
> something 
> like ttm_res_drm_mm or similar.

Sure I'm all in on that, but my point is this becomes pretty awkward
because the reusable code already subclasses struct ttm_resource. Let
me give you an example:

Prereqs:
1) We want to be able to re-use resource manager implementations among
drivers.
2) A driver might want to re-use multiple implementations and have
identical data "struct i915_data" attached to both

With your suggestion that combination of prereqs would look like:

struct i915_resource {
	/* Reason why we subclass */
	struct i915_data my_data;

	/* 
         * Uh this is awkward. We need to do this because these       
         * already subclassed struct ttm_resource.
         */
	struct ttm_resource *resource;
	union {
		struct ttm_range_mgr_node range;
		struct i915_ttm_buddy_resource buddy;
        };
};

And I can't make it look like

struct i915_resource {
	struct i915_data my_data;
	struct ttm_resource *resource;
}

Without that private back pointer.
 
But what I'd *really* would want is.

struct i915_resource {
	struct i915_data my_data;
	struct ttm_resource resource;
};

This would be identical to how we subclass a struct ttm_buffer_object
or a struct ttm_tt. But It can't look like this because then we can't
reuse exising implementations that *already subclass* struct
ttm_resource.

What we have currently ttm_resource-wise is like having a struct
tt_bo_vram, a struct ttm_bo_system, a struct ttm_bo_gtt and trying to
subclass them all combined into a struct i915_bo. It would become
awkward without a dynamic backend that facilitates subclassing a single
struct ttm_buffer_object? 

So basically the question boils down to: Why do we do struct
ttm_resources differently?


> 
> What we should avoid is to abuse TTM resource interfaces in the
> driver, 
> e.g. what i915 is currently doing. This is a TTM->resource mgr
> interface 
> and should not be used by drivers at all.

Yes I guess that can be easily fixed when whatever we end up with above
lands.

> 
> > Ofc we can do that, but it does indeed feel pretty awkward.
> > 
> > In any case, if you still think that's the approach we should go
> > for,
> > I'd need to add init() and fini() members to the
> > ttm_range_manager_func
> > struct to allow subclassing without having to unnecessarily copy
> > the
> > full code?
> 
> Yes, exporting the ttm_range_manager functions as needed is one thing
> I 
> wanted to do for the amdgpu_gtt_mgr.c code as well.
> 
> Just don't extend the function table but rather directly export the 
> necessary functions.

Sure.
/Thomas



^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [Intel-gfx] [RFC PATCH] drm/ttm: Add a private member to the struct ttm_resource
@ 2021-09-11  6:07         ` Thomas Hellström
  0 siblings, 0 replies; 35+ messages in thread
From: Thomas Hellström @ 2021-09-11  6:07 UTC (permalink / raw)
  To: Christian König, intel-gfx, dri-devel
  Cc: maarten.lankhorst, matthew.auld, Matthew Auld

On Fri, 2021-09-10 at 19:03 +0200, Christian König wrote:
> Am 10.09.21 um 17:30 schrieb Thomas Hellström:
> > On Fri, 2021-09-10 at 16:40 +0200, Christian König wrote:
> > > 
> > > Am 10.09.21 um 15:15 schrieb Thomas Hellström:
> > > > Both the provider (resource manager) and the consumer (the TTM
> > > > driver)
> > > > want to subclass struct ttm_resource. Since this is left for
> > > > the
> > > > resource
> > > > manager, we need to provide a private pointer for the TTM
> > > > driver.
> > > > 
> > > > Provide a struct ttm_resource_private for the driver to
> > > > subclass
> > > > for
> > > > data with the same lifetime as the struct ttm_resource: In the
> > > > i915
> > > > case
> > > > it will, for example, be an sg-table and radix tree into the
> > > > LMEM
> > > > /VRAM pages that currently are awkwardly attached to the GEM
> > > > object.
> > > > 
> > > > Provide an ops structure for associated ops (Which is only
> > > > destroy() ATM)
> > > > It might seem pointless to provide a separate ops structure,
> > > > but
> > > > Linus
> > > > has previously made it clear that that's the norm.
> > > > 
> > > > After careful audit one could perhaps also on a per-driver
> > > > basis
> > > > replace the delete_mem_notify() TTM driver callback with the
> > > > above
> > > > destroy function.
> > > Well this is a really big NAK to this approach.
> > > 
> > > If you need to attach some additional information to the resource
> > > then
> > > implement your own resource manager like everybody else does.
> > Well this was the long discussion we had back then when the
> > resource
> > mangagers started to derive from struct resource and I was under
> > the
> > impression that we had come to an agreement about the different
> > use-
> > cases here, and this was my main concern.
> 
> Ok, then we somehow didn't understood each other.
> 
> > I mean, it's a pretty big layer violation to do that for this use-
> > case.
> 
> Well exactly that's the point. TTM should not have a layer design in
> the 
> first place.
> 
> Devices, BOs, resources etc.. are base classes which should implement
> a 
> base functionality which is then extended by the drivers to implement
> the driver specific functionality.
> 
> That is a component based approach, and not layered at all.
> 
> > The TTM resource manager doesn't want to know about this data at
> > all,
> > it's private to the ttm resource user layer and the resource
> > manager
> > works perfectly well without it. (I assume the other drivers that
> > implement their own resource managers need the data that the
> > subclassing provides?)
> 
> Yes, that's exactly why we have the subclassing.
> 
> > The fundamental problem here is that there are two layers wanting
> > to
> > subclass struct ttm_resource. That means one layer gets to do that,
> > the
> > second gets to use a private pointer, (which in turn can provide
> > yet
> > another private pointer to a potential third layer). With your
> > suggestion, the second layer instead is forced to subclass each
> > subclassed instance it uses from  the first layer provides?
> 
> Well completely drop the layer approach/thinking here.
> 
> The resource is an object with a base class. The base class
> implements 
> the interface TTM needs to handle the object, e.g.
> create/destroy/debug 
> etc...
> 
> Then we need to subclass this object because without any additional 
> information the object is pretty pointless.
> 
> One possibility for this is to use the range manager to implement 
> something drm_mm based. BTW: We should probably rename that to
> something 
> like ttm_res_drm_mm or similar.

Sure I'm all in on that, but my point is this becomes pretty awkward
because the reusable code already subclasses struct ttm_resource. Let
me give you an example:

Prereqs:
1) We want to be able to re-use resource manager implementations among
drivers.
2) A driver might want to re-use multiple implementations and have
identical data "struct i915_data" attached to both

With your suggestion that combination of prereqs would look like:

struct i915_resource {
	/* Reason why we subclass */
	struct i915_data my_data;

	/* 
         * Uh this is awkward. We need to do this because these       
         * already subclassed struct ttm_resource.
         */
	struct ttm_resource *resource;
	union {
		struct ttm_range_mgr_node range;
		struct i915_ttm_buddy_resource buddy;
        };
};

And I can't make it look like

struct i915_resource {
	struct i915_data my_data;
	struct ttm_resource *resource;
}

Without that private back pointer.
 
But what I'd *really* would want is.

struct i915_resource {
	struct i915_data my_data;
	struct ttm_resource resource;
};

This would be identical to how we subclass a struct ttm_buffer_object
or a struct ttm_tt. But It can't look like this because then we can't
reuse exising implementations that *already subclass* struct
ttm_resource.

What we have currently ttm_resource-wise is like having a struct
tt_bo_vram, a struct ttm_bo_system, a struct ttm_bo_gtt and trying to
subclass them all combined into a struct i915_bo. It would become
awkward without a dynamic backend that facilitates subclassing a single
struct ttm_buffer_object? 

So basically the question boils down to: Why do we do struct
ttm_resources differently?


> 
> What we should avoid is to abuse TTM resource interfaces in the
> driver, 
> e.g. what i915 is currently doing. This is a TTM->resource mgr
> interface 
> and should not be used by drivers at all.

Yes I guess that can be easily fixed when whatever we end up with above
lands.

> 
> > Ofc we can do that, but it does indeed feel pretty awkward.
> > 
> > In any case, if you still think that's the approach we should go
> > for,
> > I'd need to add init() and fini() members to the
> > ttm_range_manager_func
> > struct to allow subclassing without having to unnecessarily copy
> > the
> > full code?
> 
> Yes, exporting the ttm_range_manager functions as needed is one thing
> I 
> wanted to do for the amdgpu_gtt_mgr.c code as well.
> 
> Just don't extend the function table but rather directly export the 
> necessary functions.

Sure.
/Thomas



^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [RFC PATCH] drm/ttm: Add a private member to the struct ttm_resource
  2021-09-11  6:07         ` [Intel-gfx] " Thomas Hellström
@ 2021-09-13  6:17           ` Christian König
  -1 siblings, 0 replies; 35+ messages in thread
From: Christian König @ 2021-09-13  6:17 UTC (permalink / raw)
  To: Thomas Hellström, intel-gfx, dri-devel
  Cc: maarten.lankhorst, matthew.auld, Matthew Auld

Am 11.09.21 um 08:07 schrieb Thomas Hellström:
> On Fri, 2021-09-10 at 19:03 +0200, Christian König wrote:
>> Am 10.09.21 um 17:30 schrieb Thomas Hellström:
>>> On Fri, 2021-09-10 at 16:40 +0200, Christian König wrote:
>>>> Am 10.09.21 um 15:15 schrieb Thomas Hellström:
>>>>> Both the provider (resource manager) and the consumer (the TTM
>>>>> driver)
>>>>> want to subclass struct ttm_resource. Since this is left for
>>>>> the
>>>>> resource
>>>>> manager, we need to provide a private pointer for the TTM
>>>>> driver.
>>>>>
>>>>> Provide a struct ttm_resource_private for the driver to
>>>>> subclass
>>>>> for
>>>>> data with the same lifetime as the struct ttm_resource: In the
>>>>> i915
>>>>> case
>>>>> it will, for example, be an sg-table and radix tree into the
>>>>> LMEM
>>>>> /VRAM pages that currently are awkwardly attached to the GEM
>>>>> object.
>>>>>
>>>>> Provide an ops structure for associated ops (Which is only
>>>>> destroy() ATM)
>>>>> It might seem pointless to provide a separate ops structure,
>>>>> but
>>>>> Linus
>>>>> has previously made it clear that that's the norm.
>>>>>
>>>>> After careful audit one could perhaps also on a per-driver
>>>>> basis
>>>>> replace the delete_mem_notify() TTM driver callback with the
>>>>> above
>>>>> destroy function.
>>>> Well this is a really big NAK to this approach.
>>>>
>>>> If you need to attach some additional information to the resource
>>>> then
>>>> implement your own resource manager like everybody else does.
>>> Well this was the long discussion we had back then when the
>>> resource
>>> mangagers started to derive from struct resource and I was under
>>> the
>>> impression that we had come to an agreement about the different
>>> use-
>>> cases here, and this was my main concern.
>> Ok, then we somehow didn't understood each other.
>>
>>> I mean, it's a pretty big layer violation to do that for this use-
>>> case.
>> Well exactly that's the point. TTM should not have a layer design in
>> the
>> first place.
>>
>> Devices, BOs, resources etc.. are base classes which should implement
>> a
>> base functionality which is then extended by the drivers to implement
>> the driver specific functionality.
>>
>> That is a component based approach, and not layered at all.
>>
>>> The TTM resource manager doesn't want to know about this data at
>>> all,
>>> it's private to the ttm resource user layer and the resource
>>> manager
>>> works perfectly well without it. (I assume the other drivers that
>>> implement their own resource managers need the data that the
>>> subclassing provides?)
>> Yes, that's exactly why we have the subclassing.
>>
>>> The fundamental problem here is that there are two layers wanting
>>> to
>>> subclass struct ttm_resource. That means one layer gets to do that,
>>> the
>>> second gets to use a private pointer, (which in turn can provide
>>> yet
>>> another private pointer to a potential third layer). With your
>>> suggestion, the second layer instead is forced to subclass each
>>> subclassed instance it uses from  the first layer provides?
>> Well completely drop the layer approach/thinking here.
>>
>> The resource is an object with a base class. The base class
>> implements
>> the interface TTM needs to handle the object, e.g.
>> create/destroy/debug
>> etc...
>>
>> Then we need to subclass this object because without any additional
>> information the object is pretty pointless.
>>
>> One possibility for this is to use the range manager to implement
>> something drm_mm based. BTW: We should probably rename that to
>> something
>> like ttm_res_drm_mm or similar.
> Sure I'm all in on that, but my point is this becomes pretty awkward
> because the reusable code already subclasses struct ttm_resource. Let
> me give you an example:
>
> Prereqs:
> 1) We want to be able to re-use resource manager implementations among
> drivers.
> 2) A driver might want to re-use multiple implementations and have
> identical data "struct i915_data" attached to both

Well that's the point I don't really understand. Why would a driver want 
to do this?

It's perfectly possible that you have ttm_range_manager extended and a 
potential ttm_page_manager, but that are two different objects then 
which also need different handling.

> ....
> This would be identical to how we subclass a struct ttm_buffer_object
> or a struct ttm_tt. But It can't look like this because then we can't
> reuse exising implementations that *already subclass* struct
> ttm_resource.
>
> What we have currently ttm_resource-wise is like having a struct
> tt_bo_vram, a struct ttm_bo_system, a struct ttm_bo_gtt and trying to
> subclass them all combined into a struct i915_bo. It would become
> awkward without a dynamic backend that facilitates subclassing a single
> struct ttm_buffer_object?

Why? They all implement different handling.

When you add a private point to ttm_resource you allow common handling 
which doesn't take into account that this ttm_resource object is 
subclassed.

> So basically the question boils down to: Why do we do struct
> ttm_resources differently?

ttm_buffer_object is a subclass of drm_gem_object and I hope to make 
ttm_device a subclass of drm_device in the near term.

I really try to understand what you mean hear, but I even after reading 
that multiple times I absolutely don't get it.

Regards,
Christian.

>> What we should avoid is to abuse TTM resource interfaces in the
>> driver,
>> e.g. what i915 is currently doing. This is a TTM->resource mgr
>> interface
>> and should not be used by drivers at all.
> Yes I guess that can be easily fixed when whatever we end up with above
> lands.
>
>>> Ofc we can do that, but it does indeed feel pretty awkward.
>>>
>>> In any case, if you still think that's the approach we should go
>>> for,
>>> I'd need to add init() and fini() members to the
>>> ttm_range_manager_func
>>> struct to allow subclassing without having to unnecessarily copy
>>> the
>>> full code?
>> Yes, exporting the ttm_range_manager functions as needed is one thing
>> I
>> wanted to do for the amdgpu_gtt_mgr.c code as well.
>>
>> Just don't extend the function table but rather directly export the
>> necessary functions.
> Sure.
> /Thomas
>
>


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [Intel-gfx] [RFC PATCH] drm/ttm: Add a private member to the struct ttm_resource
@ 2021-09-13  6:17           ` Christian König
  0 siblings, 0 replies; 35+ messages in thread
From: Christian König @ 2021-09-13  6:17 UTC (permalink / raw)
  To: Thomas Hellström, intel-gfx, dri-devel
  Cc: maarten.lankhorst, matthew.auld, Matthew Auld

Am 11.09.21 um 08:07 schrieb Thomas Hellström:
> On Fri, 2021-09-10 at 19:03 +0200, Christian König wrote:
>> Am 10.09.21 um 17:30 schrieb Thomas Hellström:
>>> On Fri, 2021-09-10 at 16:40 +0200, Christian König wrote:
>>>> Am 10.09.21 um 15:15 schrieb Thomas Hellström:
>>>>> Both the provider (resource manager) and the consumer (the TTM
>>>>> driver)
>>>>> want to subclass struct ttm_resource. Since this is left for
>>>>> the
>>>>> resource
>>>>> manager, we need to provide a private pointer for the TTM
>>>>> driver.
>>>>>
>>>>> Provide a struct ttm_resource_private for the driver to
>>>>> subclass
>>>>> for
>>>>> data with the same lifetime as the struct ttm_resource: In the
>>>>> i915
>>>>> case
>>>>> it will, for example, be an sg-table and radix tree into the
>>>>> LMEM
>>>>> /VRAM pages that currently are awkwardly attached to the GEM
>>>>> object.
>>>>>
>>>>> Provide an ops structure for associated ops (Which is only
>>>>> destroy() ATM)
>>>>> It might seem pointless to provide a separate ops structure,
>>>>> but
>>>>> Linus
>>>>> has previously made it clear that that's the norm.
>>>>>
>>>>> After careful audit one could perhaps also on a per-driver
>>>>> basis
>>>>> replace the delete_mem_notify() TTM driver callback with the
>>>>> above
>>>>> destroy function.
>>>> Well this is a really big NAK to this approach.
>>>>
>>>> If you need to attach some additional information to the resource
>>>> then
>>>> implement your own resource manager like everybody else does.
>>> Well this was the long discussion we had back then when the
>>> resource
>>> mangagers started to derive from struct resource and I was under
>>> the
>>> impression that we had come to an agreement about the different
>>> use-
>>> cases here, and this was my main concern.
>> Ok, then we somehow didn't understood each other.
>>
>>> I mean, it's a pretty big layer violation to do that for this use-
>>> case.
>> Well exactly that's the point. TTM should not have a layer design in
>> the
>> first place.
>>
>> Devices, BOs, resources etc.. are base classes which should implement
>> a
>> base functionality which is then extended by the drivers to implement
>> the driver specific functionality.
>>
>> That is a component based approach, and not layered at all.
>>
>>> The TTM resource manager doesn't want to know about this data at
>>> all,
>>> it's private to the ttm resource user layer and the resource
>>> manager
>>> works perfectly well without it. (I assume the other drivers that
>>> implement their own resource managers need the data that the
>>> subclassing provides?)
>> Yes, that's exactly why we have the subclassing.
>>
>>> The fundamental problem here is that there are two layers wanting
>>> to
>>> subclass struct ttm_resource. That means one layer gets to do that,
>>> the
>>> second gets to use a private pointer, (which in turn can provide
>>> yet
>>> another private pointer to a potential third layer). With your
>>> suggestion, the second layer instead is forced to subclass each
>>> subclassed instance it uses from  the first layer provides?
>> Well completely drop the layer approach/thinking here.
>>
>> The resource is an object with a base class. The base class
>> implements
>> the interface TTM needs to handle the object, e.g.
>> create/destroy/debug
>> etc...
>>
>> Then we need to subclass this object because without any additional
>> information the object is pretty pointless.
>>
>> One possibility for this is to use the range manager to implement
>> something drm_mm based. BTW: We should probably rename that to
>> something
>> like ttm_res_drm_mm or similar.
> Sure I'm all in on that, but my point is this becomes pretty awkward
> because the reusable code already subclasses struct ttm_resource. Let
> me give you an example:
>
> Prereqs:
> 1) We want to be able to re-use resource manager implementations among
> drivers.
> 2) A driver might want to re-use multiple implementations and have
> identical data "struct i915_data" attached to both

Well that's the point I don't really understand. Why would a driver want 
to do this?

It's perfectly possible that you have ttm_range_manager extended and a 
potential ttm_page_manager, but that are two different objects then 
which also need different handling.

> ....
> This would be identical to how we subclass a struct ttm_buffer_object
> or a struct ttm_tt. But It can't look like this because then we can't
> reuse exising implementations that *already subclass* struct
> ttm_resource.
>
> What we have currently ttm_resource-wise is like having a struct
> tt_bo_vram, a struct ttm_bo_system, a struct ttm_bo_gtt and trying to
> subclass them all combined into a struct i915_bo. It would become
> awkward without a dynamic backend that facilitates subclassing a single
> struct ttm_buffer_object?

Why? They all implement different handling.

When you add a private point to ttm_resource you allow common handling 
which doesn't take into account that this ttm_resource object is 
subclassed.

> So basically the question boils down to: Why do we do struct
> ttm_resources differently?

ttm_buffer_object is a subclass of drm_gem_object and I hope to make 
ttm_device a subclass of drm_device in the near term.

I really try to understand what you mean hear, but I even after reading 
that multiple times I absolutely don't get it.

Regards,
Christian.

>> What we should avoid is to abuse TTM resource interfaces in the
>> driver,
>> e.g. what i915 is currently doing. This is a TTM->resource mgr
>> interface
>> and should not be used by drivers at all.
> Yes I guess that can be easily fixed when whatever we end up with above
> lands.
>
>>> Ofc we can do that, but it does indeed feel pretty awkward.
>>>
>>> In any case, if you still think that's the approach we should go
>>> for,
>>> I'd need to add init() and fini() members to the
>>> ttm_range_manager_func
>>> struct to allow subclassing without having to unnecessarily copy
>>> the
>>> full code?
>> Yes, exporting the ttm_range_manager functions as needed is one thing
>> I
>> wanted to do for the amdgpu_gtt_mgr.c code as well.
>>
>> Just don't extend the function table but rather directly export the
>> necessary functions.
> Sure.
> /Thomas
>
>


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [RFC PATCH] drm/ttm: Add a private member to the struct ttm_resource
  2021-09-13  6:17           ` [Intel-gfx] " Christian König
@ 2021-09-13  9:36             ` Thomas Hellström
  -1 siblings, 0 replies; 35+ messages in thread
From: Thomas Hellström @ 2021-09-13  9:36 UTC (permalink / raw)
  To: Christian König, intel-gfx, dri-devel
  Cc: maarten.lankhorst, matthew.auld, Matthew Auld


On 9/13/21 8:17 AM, Christian König wrote:
> Am 11.09.21 um 08:07 schrieb Thomas Hellström:
>> On Fri, 2021-09-10 at 19:03 +0200, Christian König wrote:
>>> Am 10.09.21 um 17:30 schrieb Thomas Hellström:
>>>> On Fri, 2021-09-10 at 16:40 +0200, Christian König wrote:
>>>>> Am 10.09.21 um 15:15 schrieb Thomas Hellström:
>>>>>> Both the provider (resource manager) and the consumer (the TTM
>>>>>> driver)
>>>>>> want to subclass struct ttm_resource. Since this is left for
>>>>>> the
>>>>>> resource
>>>>>> manager, we need to provide a private pointer for the TTM
>>>>>> driver.
>>>>>>
>>>>>> Provide a struct ttm_resource_private for the driver to
>>>>>> subclass
>>>>>> for
>>>>>> data with the same lifetime as the struct ttm_resource: In the
>>>>>> i915
>>>>>> case
>>>>>> it will, for example, be an sg-table and radix tree into the
>>>>>> LMEM
>>>>>> /VRAM pages that currently are awkwardly attached to the GEM
>>>>>> object.
>>>>>>
>>>>>> Provide an ops structure for associated ops (Which is only
>>>>>> destroy() ATM)
>>>>>> It might seem pointless to provide a separate ops structure,
>>>>>> but
>>>>>> Linus
>>>>>> has previously made it clear that that's the norm.
>>>>>>
>>>>>> After careful audit one could perhaps also on a per-driver
>>>>>> basis
>>>>>> replace the delete_mem_notify() TTM driver callback with the
>>>>>> above
>>>>>> destroy function.
>>>>> Well this is a really big NAK to this approach.
>>>>>
>>>>> If you need to attach some additional information to the resource
>>>>> then
>>>>> implement your own resource manager like everybody else does.
>>>> Well this was the long discussion we had back then when the
>>>> resource
>>>> mangagers started to derive from struct resource and I was under
>>>> the
>>>> impression that we had come to an agreement about the different
>>>> use-
>>>> cases here, and this was my main concern.
>>> Ok, then we somehow didn't understood each other.
>>>
>>>> I mean, it's a pretty big layer violation to do that for this use-
>>>> case.
>>> Well exactly that's the point. TTM should not have a layer design in
>>> the
>>> first place.
>>>
>>> Devices, BOs, resources etc.. are base classes which should implement
>>> a
>>> base functionality which is then extended by the drivers to implement
>>> the driver specific functionality.
>>>
>>> That is a component based approach, and not layered at all.
>>>
>>>> The TTM resource manager doesn't want to know about this data at
>>>> all,
>>>> it's private to the ttm resource user layer and the resource
>>>> manager
>>>> works perfectly well without it. (I assume the other drivers that
>>>> implement their own resource managers need the data that the
>>>> subclassing provides?)
>>> Yes, that's exactly why we have the subclassing.
>>>
>>>> The fundamental problem here is that there are two layers wanting
>>>> to
>>>> subclass struct ttm_resource. That means one layer gets to do that,
>>>> the
>>>> second gets to use a private pointer, (which in turn can provide
>>>> yet
>>>> another private pointer to a potential third layer). With your
>>>> suggestion, the second layer instead is forced to subclass each
>>>> subclassed instance it uses from  the first layer provides?
>>> Well completely drop the layer approach/thinking here.
>>>
>>> The resource is an object with a base class. The base class
>>> implements
>>> the interface TTM needs to handle the object, e.g.
>>> create/destroy/debug
>>> etc...
>>>
>>> Then we need to subclass this object because without any additional
>>> information the object is pretty pointless.
>>>
>>> One possibility for this is to use the range manager to implement
>>> something drm_mm based. BTW: We should probably rename that to
>>> something
>>> like ttm_res_drm_mm or similar.
>> Sure I'm all in on that, but my point is this becomes pretty awkward
>> because the reusable code already subclasses struct ttm_resource. Let
>> me give you an example:
>>
>> Prereqs:
>> 1) We want to be able to re-use resource manager implementations among
>> drivers.
>> 2) A driver might want to re-use multiple implementations and have
>> identical data "struct i915_data" attached to both
>
> Well that's the point I don't really understand. Why would a driver 
> want to do this?

Let's say you have a struct ttm_object_vram and a struct ttm_object_gtt, 
both subclassing drm_gem_object. Then I'd say a driver would want to 
subclass those to attach identical data, extend functionality and 
provide a single i915_gem_object to the rest of the driver, which 
couldn't care less whether it's vram or gtt? Wouldn't you say having 
separate struct ttm_object_vram and a struct ttm_object_gtt in this case 
would be awkward?. We *want* to allow common handling.

It's the exact same situation here. With struct ttm_resource you let 
*different* implementation flavours subclass it, which makes it awkward 
for the driver to extend the functionality in a common way by 
subclassing, unless the driver only uses a single implementation.

OT:

Having a variable size array as the last member of the range manager 
resource makes embedding that extremely fragile IMO. Perhaps hide that 
variable size functionality in the driver rather than in the common code?

/Thomas




^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [Intel-gfx] [RFC PATCH] drm/ttm: Add a private member to the struct ttm_resource
@ 2021-09-13  9:36             ` Thomas Hellström
  0 siblings, 0 replies; 35+ messages in thread
From: Thomas Hellström @ 2021-09-13  9:36 UTC (permalink / raw)
  To: Christian König, intel-gfx, dri-devel
  Cc: maarten.lankhorst, matthew.auld, Matthew Auld


On 9/13/21 8:17 AM, Christian König wrote:
> Am 11.09.21 um 08:07 schrieb Thomas Hellström:
>> On Fri, 2021-09-10 at 19:03 +0200, Christian König wrote:
>>> Am 10.09.21 um 17:30 schrieb Thomas Hellström:
>>>> On Fri, 2021-09-10 at 16:40 +0200, Christian König wrote:
>>>>> Am 10.09.21 um 15:15 schrieb Thomas Hellström:
>>>>>> Both the provider (resource manager) and the consumer (the TTM
>>>>>> driver)
>>>>>> want to subclass struct ttm_resource. Since this is left for
>>>>>> the
>>>>>> resource
>>>>>> manager, we need to provide a private pointer for the TTM
>>>>>> driver.
>>>>>>
>>>>>> Provide a struct ttm_resource_private for the driver to
>>>>>> subclass
>>>>>> for
>>>>>> data with the same lifetime as the struct ttm_resource: In the
>>>>>> i915
>>>>>> case
>>>>>> it will, for example, be an sg-table and radix tree into the
>>>>>> LMEM
>>>>>> /VRAM pages that currently are awkwardly attached to the GEM
>>>>>> object.
>>>>>>
>>>>>> Provide an ops structure for associated ops (Which is only
>>>>>> destroy() ATM)
>>>>>> It might seem pointless to provide a separate ops structure,
>>>>>> but
>>>>>> Linus
>>>>>> has previously made it clear that that's the norm.
>>>>>>
>>>>>> After careful audit one could perhaps also on a per-driver
>>>>>> basis
>>>>>> replace the delete_mem_notify() TTM driver callback with the
>>>>>> above
>>>>>> destroy function.
>>>>> Well this is a really big NAK to this approach.
>>>>>
>>>>> If you need to attach some additional information to the resource
>>>>> then
>>>>> implement your own resource manager like everybody else does.
>>>> Well this was the long discussion we had back then when the
>>>> resource
>>>> mangagers started to derive from struct resource and I was under
>>>> the
>>>> impression that we had come to an agreement about the different
>>>> use-
>>>> cases here, and this was my main concern.
>>> Ok, then we somehow didn't understood each other.
>>>
>>>> I mean, it's a pretty big layer violation to do that for this use-
>>>> case.
>>> Well exactly that's the point. TTM should not have a layer design in
>>> the
>>> first place.
>>>
>>> Devices, BOs, resources etc.. are base classes which should implement
>>> a
>>> base functionality which is then extended by the drivers to implement
>>> the driver specific functionality.
>>>
>>> That is a component based approach, and not layered at all.
>>>
>>>> The TTM resource manager doesn't want to know about this data at
>>>> all,
>>>> it's private to the ttm resource user layer and the resource
>>>> manager
>>>> works perfectly well without it. (I assume the other drivers that
>>>> implement their own resource managers need the data that the
>>>> subclassing provides?)
>>> Yes, that's exactly why we have the subclassing.
>>>
>>>> The fundamental problem here is that there are two layers wanting
>>>> to
>>>> subclass struct ttm_resource. That means one layer gets to do that,
>>>> the
>>>> second gets to use a private pointer, (which in turn can provide
>>>> yet
>>>> another private pointer to a potential third layer). With your
>>>> suggestion, the second layer instead is forced to subclass each
>>>> subclassed instance it uses from  the first layer provides?
>>> Well completely drop the layer approach/thinking here.
>>>
>>> The resource is an object with a base class. The base class
>>> implements
>>> the interface TTM needs to handle the object, e.g.
>>> create/destroy/debug
>>> etc...
>>>
>>> Then we need to subclass this object because without any additional
>>> information the object is pretty pointless.
>>>
>>> One possibility for this is to use the range manager to implement
>>> something drm_mm based. BTW: We should probably rename that to
>>> something
>>> like ttm_res_drm_mm or similar.
>> Sure I'm all in on that, but my point is this becomes pretty awkward
>> because the reusable code already subclasses struct ttm_resource. Let
>> me give you an example:
>>
>> Prereqs:
>> 1) We want to be able to re-use resource manager implementations among
>> drivers.
>> 2) A driver might want to re-use multiple implementations and have
>> identical data "struct i915_data" attached to both
>
> Well that's the point I don't really understand. Why would a driver 
> want to do this?

Let's say you have a struct ttm_object_vram and a struct ttm_object_gtt, 
both subclassing drm_gem_object. Then I'd say a driver would want to 
subclass those to attach identical data, extend functionality and 
provide a single i915_gem_object to the rest of the driver, which 
couldn't care less whether it's vram or gtt? Wouldn't you say having 
separate struct ttm_object_vram and a struct ttm_object_gtt in this case 
would be awkward?. We *want* to allow common handling.

It's the exact same situation here. With struct ttm_resource you let 
*different* implementation flavours subclass it, which makes it awkward 
for the driver to extend the functionality in a common way by 
subclassing, unless the driver only uses a single implementation.

OT:

Having a variable size array as the last member of the range manager 
resource makes embedding that extremely fragile IMO. Perhaps hide that 
variable size functionality in the driver rather than in the common code?

/Thomas




^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [RFC PATCH] drm/ttm: Add a private member to the struct ttm_resource
  2021-09-13  9:36             ` [Intel-gfx] " Thomas Hellström
@ 2021-09-13  9:41               ` Christian König
  -1 siblings, 0 replies; 35+ messages in thread
From: Christian König @ 2021-09-13  9:41 UTC (permalink / raw)
  To: Thomas Hellström, intel-gfx, dri-devel
  Cc: maarten.lankhorst, matthew.auld, Matthew Auld

Am 13.09.21 um 11:36 schrieb Thomas Hellström:
> On 9/13/21 8:17 AM, Christian König wrote:
>> Am 11.09.21 um 08:07 schrieb Thomas Hellström:
>>> On Fri, 2021-09-10 at 19:03 +0200, Christian König wrote:
>>>> Am 10.09.21 um 17:30 schrieb Thomas Hellström:
>>>>> On Fri, 2021-09-10 at 16:40 +0200, Christian König wrote:
>>>>>> Am 10.09.21 um 15:15 schrieb Thomas Hellström:
>>>>>>> Both the provider (resource manager) and the consumer (the TTM
>>>>>>> driver)
>>>>>>> want to subclass struct ttm_resource. Since this is left for
>>>>>>> the
>>>>>>> resource
>>>>>>> manager, we need to provide a private pointer for the TTM
>>>>>>> driver.
>>>>>>>
>>>>>>> Provide a struct ttm_resource_private for the driver to
>>>>>>> subclass
>>>>>>> for
>>>>>>> data with the same lifetime as the struct ttm_resource: In the
>>>>>>> i915
>>>>>>> case
>>>>>>> it will, for example, be an sg-table and radix tree into the
>>>>>>> LMEM
>>>>>>> /VRAM pages that currently are awkwardly attached to the GEM
>>>>>>> object.
>>>>>>>
>>>>>>> Provide an ops structure for associated ops (Which is only
>>>>>>> destroy() ATM)
>>>>>>> It might seem pointless to provide a separate ops structure,
>>>>>>> but
>>>>>>> Linus
>>>>>>> has previously made it clear that that's the norm.
>>>>>>>
>>>>>>> After careful audit one could perhaps also on a per-driver
>>>>>>> basis
>>>>>>> replace the delete_mem_notify() TTM driver callback with the
>>>>>>> above
>>>>>>> destroy function.
>>>>>> Well this is a really big NAK to this approach.
>>>>>>
>>>>>> If you need to attach some additional information to the resource
>>>>>> then
>>>>>> implement your own resource manager like everybody else does.
>>>>> Well this was the long discussion we had back then when the
>>>>> resource
>>>>> mangagers started to derive from struct resource and I was under
>>>>> the
>>>>> impression that we had come to an agreement about the different
>>>>> use-
>>>>> cases here, and this was my main concern.
>>>> Ok, then we somehow didn't understood each other.
>>>>
>>>>> I mean, it's a pretty big layer violation to do that for this use-
>>>>> case.
>>>> Well exactly that's the point. TTM should not have a layer design in
>>>> the
>>>> first place.
>>>>
>>>> Devices, BOs, resources etc.. are base classes which should implement
>>>> a
>>>> base functionality which is then extended by the drivers to implement
>>>> the driver specific functionality.
>>>>
>>>> That is a component based approach, and not layered at all.
>>>>
>>>>> The TTM resource manager doesn't want to know about this data at
>>>>> all,
>>>>> it's private to the ttm resource user layer and the resource
>>>>> manager
>>>>> works perfectly well without it. (I assume the other drivers that
>>>>> implement their own resource managers need the data that the
>>>>> subclassing provides?)
>>>> Yes, that's exactly why we have the subclassing.
>>>>
>>>>> The fundamental problem here is that there are two layers wanting
>>>>> to
>>>>> subclass struct ttm_resource. That means one layer gets to do that,
>>>>> the
>>>>> second gets to use a private pointer, (which in turn can provide
>>>>> yet
>>>>> another private pointer to a potential third layer). With your
>>>>> suggestion, the second layer instead is forced to subclass each
>>>>> subclassed instance it uses from  the first layer provides?
>>>> Well completely drop the layer approach/thinking here.
>>>>
>>>> The resource is an object with a base class. The base class
>>>> implements
>>>> the interface TTM needs to handle the object, e.g.
>>>> create/destroy/debug
>>>> etc...
>>>>
>>>> Then we need to subclass this object because without any additional
>>>> information the object is pretty pointless.
>>>>
>>>> One possibility for this is to use the range manager to implement
>>>> something drm_mm based. BTW: We should probably rename that to
>>>> something
>>>> like ttm_res_drm_mm or similar.
>>> Sure I'm all in on that, but my point is this becomes pretty awkward
>>> because the reusable code already subclasses struct ttm_resource. Let
>>> me give you an example:
>>>
>>> Prereqs:
>>> 1) We want to be able to re-use resource manager implementations among
>>> drivers.
>>> 2) A driver might want to re-use multiple implementations and have
>>> identical data "struct i915_data" attached to both
>>
>> Well that's the point I don't really understand. Why would a driver 
>> want to do this?
>
> Let's say you have a struct ttm_object_vram and a struct 
> ttm_object_gtt, both subclassing drm_gem_object. Then I'd say a driver 
> would want to subclass those to attach identical data, extend 
> functionality and provide a single i915_gem_object to the rest of the 
> driver, which couldn't care less whether it's vram or gtt? Wouldn't 
> you say having separate struct ttm_object_vram and a struct 
> ttm_object_gtt in this case would be awkward?. We *want* to allow 
> common handling.

Yeah, but that's a bad idea. This is like diamond inheritance in C++.

When you need the same functionality in different backends you implement 
that as separate object and then add a parent class.

>
> It's the exact same situation here. With struct ttm_resource you let 
> *different* implementation flavours subclass it, which makes it 
> awkward for the driver to extend the functionality in a common way by 
> subclassing, unless the driver only uses a single implementation.

Well the driver should use separate implementations for their different 
domains as much as possible.

> OT:
>
> Having a variable size array as the last member of the range manager 
> resource makes embedding that extremely fragile IMO. Perhaps hide that 
> variable size functionality in the driver rather than in the common code?

Yeah, Arun is already working on that. It's just not finished yet.

Regards,
Christian.

>
>
> /Thomas
>
>
>


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [Intel-gfx] [RFC PATCH] drm/ttm: Add a private member to the struct ttm_resource
@ 2021-09-13  9:41               ` Christian König
  0 siblings, 0 replies; 35+ messages in thread
From: Christian König @ 2021-09-13  9:41 UTC (permalink / raw)
  To: Thomas Hellström, intel-gfx, dri-devel
  Cc: maarten.lankhorst, matthew.auld, Matthew Auld

Am 13.09.21 um 11:36 schrieb Thomas Hellström:
> On 9/13/21 8:17 AM, Christian König wrote:
>> Am 11.09.21 um 08:07 schrieb Thomas Hellström:
>>> On Fri, 2021-09-10 at 19:03 +0200, Christian König wrote:
>>>> Am 10.09.21 um 17:30 schrieb Thomas Hellström:
>>>>> On Fri, 2021-09-10 at 16:40 +0200, Christian König wrote:
>>>>>> Am 10.09.21 um 15:15 schrieb Thomas Hellström:
>>>>>>> Both the provider (resource manager) and the consumer (the TTM
>>>>>>> driver)
>>>>>>> want to subclass struct ttm_resource. Since this is left for
>>>>>>> the
>>>>>>> resource
>>>>>>> manager, we need to provide a private pointer for the TTM
>>>>>>> driver.
>>>>>>>
>>>>>>> Provide a struct ttm_resource_private for the driver to
>>>>>>> subclass
>>>>>>> for
>>>>>>> data with the same lifetime as the struct ttm_resource: In the
>>>>>>> i915
>>>>>>> case
>>>>>>> it will, for example, be an sg-table and radix tree into the
>>>>>>> LMEM
>>>>>>> /VRAM pages that currently are awkwardly attached to the GEM
>>>>>>> object.
>>>>>>>
>>>>>>> Provide an ops structure for associated ops (Which is only
>>>>>>> destroy() ATM)
>>>>>>> It might seem pointless to provide a separate ops structure,
>>>>>>> but
>>>>>>> Linus
>>>>>>> has previously made it clear that that's the norm.
>>>>>>>
>>>>>>> After careful audit one could perhaps also on a per-driver
>>>>>>> basis
>>>>>>> replace the delete_mem_notify() TTM driver callback with the
>>>>>>> above
>>>>>>> destroy function.
>>>>>> Well this is a really big NAK to this approach.
>>>>>>
>>>>>> If you need to attach some additional information to the resource
>>>>>> then
>>>>>> implement your own resource manager like everybody else does.
>>>>> Well this was the long discussion we had back then when the
>>>>> resource
>>>>> mangagers started to derive from struct resource and I was under
>>>>> the
>>>>> impression that we had come to an agreement about the different
>>>>> use-
>>>>> cases here, and this was my main concern.
>>>> Ok, then we somehow didn't understood each other.
>>>>
>>>>> I mean, it's a pretty big layer violation to do that for this use-
>>>>> case.
>>>> Well exactly that's the point. TTM should not have a layer design in
>>>> the
>>>> first place.
>>>>
>>>> Devices, BOs, resources etc.. are base classes which should implement
>>>> a
>>>> base functionality which is then extended by the drivers to implement
>>>> the driver specific functionality.
>>>>
>>>> That is a component based approach, and not layered at all.
>>>>
>>>>> The TTM resource manager doesn't want to know about this data at
>>>>> all,
>>>>> it's private to the ttm resource user layer and the resource
>>>>> manager
>>>>> works perfectly well without it. (I assume the other drivers that
>>>>> implement their own resource managers need the data that the
>>>>> subclassing provides?)
>>>> Yes, that's exactly why we have the subclassing.
>>>>
>>>>> The fundamental problem here is that there are two layers wanting
>>>>> to
>>>>> subclass struct ttm_resource. That means one layer gets to do that,
>>>>> the
>>>>> second gets to use a private pointer, (which in turn can provide
>>>>> yet
>>>>> another private pointer to a potential third layer). With your
>>>>> suggestion, the second layer instead is forced to subclass each
>>>>> subclassed instance it uses from  the first layer provides?
>>>> Well completely drop the layer approach/thinking here.
>>>>
>>>> The resource is an object with a base class. The base class
>>>> implements
>>>> the interface TTM needs to handle the object, e.g.
>>>> create/destroy/debug
>>>> etc...
>>>>
>>>> Then we need to subclass this object because without any additional
>>>> information the object is pretty pointless.
>>>>
>>>> One possibility for this is to use the range manager to implement
>>>> something drm_mm based. BTW: We should probably rename that to
>>>> something
>>>> like ttm_res_drm_mm or similar.
>>> Sure I'm all in on that, but my point is this becomes pretty awkward
>>> because the reusable code already subclasses struct ttm_resource. Let
>>> me give you an example:
>>>
>>> Prereqs:
>>> 1) We want to be able to re-use resource manager implementations among
>>> drivers.
>>> 2) A driver might want to re-use multiple implementations and have
>>> identical data "struct i915_data" attached to both
>>
>> Well that's the point I don't really understand. Why would a driver 
>> want to do this?
>
> Let's say you have a struct ttm_object_vram and a struct 
> ttm_object_gtt, both subclassing drm_gem_object. Then I'd say a driver 
> would want to subclass those to attach identical data, extend 
> functionality and provide a single i915_gem_object to the rest of the 
> driver, which couldn't care less whether it's vram or gtt? Wouldn't 
> you say having separate struct ttm_object_vram and a struct 
> ttm_object_gtt in this case would be awkward?. We *want* to allow 
> common handling.

Yeah, but that's a bad idea. This is like diamond inheritance in C++.

When you need the same functionality in different backends you implement 
that as separate object and then add a parent class.

>
> It's the exact same situation here. With struct ttm_resource you let 
> *different* implementation flavours subclass it, which makes it 
> awkward for the driver to extend the functionality in a common way by 
> subclassing, unless the driver only uses a single implementation.

Well the driver should use separate implementations for their different 
domains as much as possible.

> OT:
>
> Having a variable size array as the last member of the range manager 
> resource makes embedding that extremely fragile IMO. Perhaps hide that 
> variable size functionality in the driver rather than in the common code?

Yeah, Arun is already working on that. It's just not finished yet.

Regards,
Christian.

>
>
> /Thomas
>
>
>


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [RFC PATCH] drm/ttm: Add a private member to the struct ttm_resource
  2021-09-13  9:41               ` [Intel-gfx] " Christian König
@ 2021-09-13 10:16                 ` Thomas Hellström
  -1 siblings, 0 replies; 35+ messages in thread
From: Thomas Hellström @ 2021-09-13 10:16 UTC (permalink / raw)
  To: Christian König, intel-gfx, dri-devel
  Cc: maarten.lankhorst, matthew.auld, Matthew Auld


On 9/13/21 11:41 AM, Christian König wrote:
> Am 13.09.21 um 11:36 schrieb Thomas Hellström:
>> On 9/13/21 8:17 AM, Christian König wrote:
>>> Am 11.09.21 um 08:07 schrieb Thomas Hellström:
>>>> On Fri, 2021-09-10 at 19:03 +0200, Christian König wrote:
>>>>> Am 10.09.21 um 17:30 schrieb Thomas Hellström:
>>>>>> On Fri, 2021-09-10 at 16:40 +0200, Christian König wrote:
>>>>>>> Am 10.09.21 um 15:15 schrieb Thomas Hellström:
>>>>>>>> Both the provider (resource manager) and the consumer (the TTM
>>>>>>>> driver)
>>>>>>>> want to subclass struct ttm_resource. Since this is left for
>>>>>>>> the
>>>>>>>> resource
>>>>>>>> manager, we need to provide a private pointer for the TTM
>>>>>>>> driver.
>>>>>>>>
>>>>>>>> Provide a struct ttm_resource_private for the driver to
>>>>>>>> subclass
>>>>>>>> for
>>>>>>>> data with the same lifetime as the struct ttm_resource: In the
>>>>>>>> i915
>>>>>>>> case
>>>>>>>> it will, for example, be an sg-table and radix tree into the
>>>>>>>> LMEM
>>>>>>>> /VRAM pages that currently are awkwardly attached to the GEM
>>>>>>>> object.
>>>>>>>>
>>>>>>>> Provide an ops structure for associated ops (Which is only
>>>>>>>> destroy() ATM)
>>>>>>>> It might seem pointless to provide a separate ops structure,
>>>>>>>> but
>>>>>>>> Linus
>>>>>>>> has previously made it clear that that's the norm.
>>>>>>>>
>>>>>>>> After careful audit one could perhaps also on a per-driver
>>>>>>>> basis
>>>>>>>> replace the delete_mem_notify() TTM driver callback with the
>>>>>>>> above
>>>>>>>> destroy function.
>>>>>>> Well this is a really big NAK to this approach.
>>>>>>>
>>>>>>> If you need to attach some additional information to the resource
>>>>>>> then
>>>>>>> implement your own resource manager like everybody else does.
>>>>>> Well this was the long discussion we had back then when the
>>>>>> resource
>>>>>> mangagers started to derive from struct resource and I was under
>>>>>> the
>>>>>> impression that we had come to an agreement about the different
>>>>>> use-
>>>>>> cases here, and this was my main concern.
>>>>> Ok, then we somehow didn't understood each other.
>>>>>
>>>>>> I mean, it's a pretty big layer violation to do that for this use-
>>>>>> case.
>>>>> Well exactly that's the point. TTM should not have a layer design in
>>>>> the
>>>>> first place.
>>>>>
>>>>> Devices, BOs, resources etc.. are base classes which should implement
>>>>> a
>>>>> base functionality which is then extended by the drivers to implement
>>>>> the driver specific functionality.
>>>>>
>>>>> That is a component based approach, and not layered at all.
>>>>>
>>>>>> The TTM resource manager doesn't want to know about this data at
>>>>>> all,
>>>>>> it's private to the ttm resource user layer and the resource
>>>>>> manager
>>>>>> works perfectly well without it. (I assume the other drivers that
>>>>>> implement their own resource managers need the data that the
>>>>>> subclassing provides?)
>>>>> Yes, that's exactly why we have the subclassing.
>>>>>
>>>>>> The fundamental problem here is that there are two layers wanting
>>>>>> to
>>>>>> subclass struct ttm_resource. That means one layer gets to do that,
>>>>>> the
>>>>>> second gets to use a private pointer, (which in turn can provide
>>>>>> yet
>>>>>> another private pointer to a potential third layer). With your
>>>>>> suggestion, the second layer instead is forced to subclass each
>>>>>> subclassed instance it uses from  the first layer provides?
>>>>> Well completely drop the layer approach/thinking here.
>>>>>
>>>>> The resource is an object with a base class. The base class
>>>>> implements
>>>>> the interface TTM needs to handle the object, e.g.
>>>>> create/destroy/debug
>>>>> etc...
>>>>>
>>>>> Then we need to subclass this object because without any additional
>>>>> information the object is pretty pointless.
>>>>>
>>>>> One possibility for this is to use the range manager to implement
>>>>> something drm_mm based. BTW: We should probably rename that to
>>>>> something
>>>>> like ttm_res_drm_mm or similar.
>>>> Sure I'm all in on that, but my point is this becomes pretty awkward
>>>> because the reusable code already subclasses struct ttm_resource. Let
>>>> me give you an example:
>>>>
>>>> Prereqs:
>>>> 1) We want to be able to re-use resource manager implementations among
>>>> drivers.
>>>> 2) A driver might want to re-use multiple implementations and have
>>>> identical data "struct i915_data" attached to both
>>>
>>> Well that's the point I don't really understand. Why would a driver 
>>> want to do this?
>>
>> Let's say you have a struct ttm_object_vram and a struct 
>> ttm_object_gtt, both subclassing drm_gem_object. Then I'd say a 
>> driver would want to subclass those to attach identical data, extend 
>> functionality and provide a single i915_gem_object to the rest of the 
>> driver, which couldn't care less whether it's vram or gtt? Wouldn't 
>> you say having separate struct ttm_object_vram and a struct 
>> ttm_object_gtt in this case would be awkward?. We *want* to allow 
>> common handling.
>
> Yeah, but that's a bad idea. This is like diamond inheritance in C++.
>
> When you need the same functionality in different backends you 
> implement that as separate object and then add a parent class.
>
>>
>> It's the exact same situation here. With struct ttm_resource you let 
>> *different* implementation flavours subclass it, which makes it 
>> awkward for the driver to extend the functionality in a common way by 
>> subclassing, unless the driver only uses a single implementation.
>
> Well the driver should use separate implementations for their 
> different domains as much as possible.
>
Hmm, Now you lost me a bit. Are you saying that the way we do dynamic 
backends in the struct ttm_buffer_object to facilitate driver 
subclassing is a bad idea or that the RFC with backpointer is a bad idea?

If the latter, I can agree with that, but could we perhaps then work to 
find a way to turn the common manager (or in the future perhaps 
managers) into helpers that doesn't embed struct ttm_resource rather 
than a full-fledged resource manager. Then the driver will always be 
responsible for embedding the struct ttm_resource and combines helpers 
as it sees fit?

Thanks,
/Thomas




^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [Intel-gfx] [RFC PATCH] drm/ttm: Add a private member to the struct ttm_resource
@ 2021-09-13 10:16                 ` Thomas Hellström
  0 siblings, 0 replies; 35+ messages in thread
From: Thomas Hellström @ 2021-09-13 10:16 UTC (permalink / raw)
  To: Christian König, intel-gfx, dri-devel
  Cc: maarten.lankhorst, matthew.auld, Matthew Auld


On 9/13/21 11:41 AM, Christian König wrote:
> Am 13.09.21 um 11:36 schrieb Thomas Hellström:
>> On 9/13/21 8:17 AM, Christian König wrote:
>>> Am 11.09.21 um 08:07 schrieb Thomas Hellström:
>>>> On Fri, 2021-09-10 at 19:03 +0200, Christian König wrote:
>>>>> Am 10.09.21 um 17:30 schrieb Thomas Hellström:
>>>>>> On Fri, 2021-09-10 at 16:40 +0200, Christian König wrote:
>>>>>>> Am 10.09.21 um 15:15 schrieb Thomas Hellström:
>>>>>>>> Both the provider (resource manager) and the consumer (the TTM
>>>>>>>> driver)
>>>>>>>> want to subclass struct ttm_resource. Since this is left for
>>>>>>>> the
>>>>>>>> resource
>>>>>>>> manager, we need to provide a private pointer for the TTM
>>>>>>>> driver.
>>>>>>>>
>>>>>>>> Provide a struct ttm_resource_private for the driver to
>>>>>>>> subclass
>>>>>>>> for
>>>>>>>> data with the same lifetime as the struct ttm_resource: In the
>>>>>>>> i915
>>>>>>>> case
>>>>>>>> it will, for example, be an sg-table and radix tree into the
>>>>>>>> LMEM
>>>>>>>> /VRAM pages that currently are awkwardly attached to the GEM
>>>>>>>> object.
>>>>>>>>
>>>>>>>> Provide an ops structure for associated ops (Which is only
>>>>>>>> destroy() ATM)
>>>>>>>> It might seem pointless to provide a separate ops structure,
>>>>>>>> but
>>>>>>>> Linus
>>>>>>>> has previously made it clear that that's the norm.
>>>>>>>>
>>>>>>>> After careful audit one could perhaps also on a per-driver
>>>>>>>> basis
>>>>>>>> replace the delete_mem_notify() TTM driver callback with the
>>>>>>>> above
>>>>>>>> destroy function.
>>>>>>> Well this is a really big NAK to this approach.
>>>>>>>
>>>>>>> If you need to attach some additional information to the resource
>>>>>>> then
>>>>>>> implement your own resource manager like everybody else does.
>>>>>> Well this was the long discussion we had back then when the
>>>>>> resource
>>>>>> mangagers started to derive from struct resource and I was under
>>>>>> the
>>>>>> impression that we had come to an agreement about the different
>>>>>> use-
>>>>>> cases here, and this was my main concern.
>>>>> Ok, then we somehow didn't understood each other.
>>>>>
>>>>>> I mean, it's a pretty big layer violation to do that for this use-
>>>>>> case.
>>>>> Well exactly that's the point. TTM should not have a layer design in
>>>>> the
>>>>> first place.
>>>>>
>>>>> Devices, BOs, resources etc.. are base classes which should implement
>>>>> a
>>>>> base functionality which is then extended by the drivers to implement
>>>>> the driver specific functionality.
>>>>>
>>>>> That is a component based approach, and not layered at all.
>>>>>
>>>>>> The TTM resource manager doesn't want to know about this data at
>>>>>> all,
>>>>>> it's private to the ttm resource user layer and the resource
>>>>>> manager
>>>>>> works perfectly well without it. (I assume the other drivers that
>>>>>> implement their own resource managers need the data that the
>>>>>> subclassing provides?)
>>>>> Yes, that's exactly why we have the subclassing.
>>>>>
>>>>>> The fundamental problem here is that there are two layers wanting
>>>>>> to
>>>>>> subclass struct ttm_resource. That means one layer gets to do that,
>>>>>> the
>>>>>> second gets to use a private pointer, (which in turn can provide
>>>>>> yet
>>>>>> another private pointer to a potential third layer). With your
>>>>>> suggestion, the second layer instead is forced to subclass each
>>>>>> subclassed instance it uses from  the first layer provides?
>>>>> Well completely drop the layer approach/thinking here.
>>>>>
>>>>> The resource is an object with a base class. The base class
>>>>> implements
>>>>> the interface TTM needs to handle the object, e.g.
>>>>> create/destroy/debug
>>>>> etc...
>>>>>
>>>>> Then we need to subclass this object because without any additional
>>>>> information the object is pretty pointless.
>>>>>
>>>>> One possibility for this is to use the range manager to implement
>>>>> something drm_mm based. BTW: We should probably rename that to
>>>>> something
>>>>> like ttm_res_drm_mm or similar.
>>>> Sure I'm all in on that, but my point is this becomes pretty awkward
>>>> because the reusable code already subclasses struct ttm_resource. Let
>>>> me give you an example:
>>>>
>>>> Prereqs:
>>>> 1) We want to be able to re-use resource manager implementations among
>>>> drivers.
>>>> 2) A driver might want to re-use multiple implementations and have
>>>> identical data "struct i915_data" attached to both
>>>
>>> Well that's the point I don't really understand. Why would a driver 
>>> want to do this?
>>
>> Let's say you have a struct ttm_object_vram and a struct 
>> ttm_object_gtt, both subclassing drm_gem_object. Then I'd say a 
>> driver would want to subclass those to attach identical data, extend 
>> functionality and provide a single i915_gem_object to the rest of the 
>> driver, which couldn't care less whether it's vram or gtt? Wouldn't 
>> you say having separate struct ttm_object_vram and a struct 
>> ttm_object_gtt in this case would be awkward?. We *want* to allow 
>> common handling.
>
> Yeah, but that's a bad idea. This is like diamond inheritance in C++.
>
> When you need the same functionality in different backends you 
> implement that as separate object and then add a parent class.
>
>>
>> It's the exact same situation here. With struct ttm_resource you let 
>> *different* implementation flavours subclass it, which makes it 
>> awkward for the driver to extend the functionality in a common way by 
>> subclassing, unless the driver only uses a single implementation.
>
> Well the driver should use separate implementations for their 
> different domains as much as possible.
>
Hmm, Now you lost me a bit. Are you saying that the way we do dynamic 
backends in the struct ttm_buffer_object to facilitate driver 
subclassing is a bad idea or that the RFC with backpointer is a bad idea?

If the latter, I can agree with that, but could we perhaps then work to 
find a way to turn the common manager (or in the future perhaps 
managers) into helpers that doesn't embed struct ttm_resource rather 
than a full-fledged resource manager. Then the driver will always be 
responsible for embedding the struct ttm_resource and combines helpers 
as it sees fit?

Thanks,
/Thomas




^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [RFC PATCH] drm/ttm: Add a private member to the struct ttm_resource
  2021-09-13 10:16                 ` [Intel-gfx] " Thomas Hellström
@ 2021-09-13 12:41                   ` Thomas Hellström
  -1 siblings, 0 replies; 35+ messages in thread
From: Thomas Hellström @ 2021-09-13 12:41 UTC (permalink / raw)
  To: Christian König, intel-gfx, dri-devel
  Cc: maarten.lankhorst, matthew.auld, Matthew Auld


On 9/13/21 12:16 PM, Thomas Hellström wrote:
>
> On 9/13/21 11:41 AM, Christian König wrote:
>> Am 13.09.21 um 11:36 schrieb Thomas Hellström:
>>> On 9/13/21 8:17 AM, Christian König wrote:
>>>> Am 11.09.21 um 08:07 schrieb Thomas Hellström:
>>>>> On Fri, 2021-09-10 at 19:03 +0200, Christian König wrote:
>>>>>> Am 10.09.21 um 17:30 schrieb Thomas Hellström:
>>>>>>> On Fri, 2021-09-10 at 16:40 +0200, Christian König wrote:
>>>>>>>> Am 10.09.21 um 15:15 schrieb Thomas Hellström:
>>>>>>>>> Both the provider (resource manager) and the consumer (the TTM
>>>>>>>>> driver)
>>>>>>>>> want to subclass struct ttm_resource. Since this is left for
>>>>>>>>> the
>>>>>>>>> resource
>>>>>>>>> manager, we need to provide a private pointer for the TTM
>>>>>>>>> driver.
>>>>>>>>>
>>>>>>>>> Provide a struct ttm_resource_private for the driver to
>>>>>>>>> subclass
>>>>>>>>> for
>>>>>>>>> data with the same lifetime as the struct ttm_resource: In the
>>>>>>>>> i915
>>>>>>>>> case
>>>>>>>>> it will, for example, be an sg-table and radix tree into the
>>>>>>>>> LMEM
>>>>>>>>> /VRAM pages that currently are awkwardly attached to the GEM
>>>>>>>>> object.
>>>>>>>>>
>>>>>>>>> Provide an ops structure for associated ops (Which is only
>>>>>>>>> destroy() ATM)
>>>>>>>>> It might seem pointless to provide a separate ops structure,
>>>>>>>>> but
>>>>>>>>> Linus
>>>>>>>>> has previously made it clear that that's the norm.
>>>>>>>>>
>>>>>>>>> After careful audit one could perhaps also on a per-driver
>>>>>>>>> basis
>>>>>>>>> replace the delete_mem_notify() TTM driver callback with the
>>>>>>>>> above
>>>>>>>>> destroy function.
>>>>>>>> Well this is a really big NAK to this approach.
>>>>>>>>
>>>>>>>> If you need to attach some additional information to the resource
>>>>>>>> then
>>>>>>>> implement your own resource manager like everybody else does.
>>>>>>> Well this was the long discussion we had back then when the
>>>>>>> resource
>>>>>>> mangagers started to derive from struct resource and I was under
>>>>>>> the
>>>>>>> impression that we had come to an agreement about the different
>>>>>>> use-
>>>>>>> cases here, and this was my main concern.
>>>>>> Ok, then we somehow didn't understood each other.
>>>>>>
>>>>>>> I mean, it's a pretty big layer violation to do that for this use-
>>>>>>> case.
>>>>>> Well exactly that's the point. TTM should not have a layer design in
>>>>>> the
>>>>>> first place.
>>>>>>
>>>>>> Devices, BOs, resources etc.. are base classes which should 
>>>>>> implement
>>>>>> a
>>>>>> base functionality which is then extended by the drivers to 
>>>>>> implement
>>>>>> the driver specific functionality.
>>>>>>
>>>>>> That is a component based approach, and not layered at all.
>>>>>>
>>>>>>> The TTM resource manager doesn't want to know about this data at
>>>>>>> all,
>>>>>>> it's private to the ttm resource user layer and the resource
>>>>>>> manager
>>>>>>> works perfectly well without it. (I assume the other drivers that
>>>>>>> implement their own resource managers need the data that the
>>>>>>> subclassing provides?)
>>>>>> Yes, that's exactly why we have the subclassing.
>>>>>>
>>>>>>> The fundamental problem here is that there are two layers wanting
>>>>>>> to
>>>>>>> subclass struct ttm_resource. That means one layer gets to do that,
>>>>>>> the
>>>>>>> second gets to use a private pointer, (which in turn can provide
>>>>>>> yet
>>>>>>> another private pointer to a potential third layer). With your
>>>>>>> suggestion, the second layer instead is forced to subclass each
>>>>>>> subclassed instance it uses from  the first layer provides?
>>>>>> Well completely drop the layer approach/thinking here.
>>>>>>
>>>>>> The resource is an object with a base class. The base class
>>>>>> implements
>>>>>> the interface TTM needs to handle the object, e.g.
>>>>>> create/destroy/debug
>>>>>> etc...
>>>>>>
>>>>>> Then we need to subclass this object because without any additional
>>>>>> information the object is pretty pointless.
>>>>>>
>>>>>> One possibility for this is to use the range manager to implement
>>>>>> something drm_mm based. BTW: We should probably rename that to
>>>>>> something
>>>>>> like ttm_res_drm_mm or similar.
>>>>> Sure I'm all in on that, but my point is this becomes pretty awkward
>>>>> because the reusable code already subclasses struct ttm_resource. Let
>>>>> me give you an example:
>>>>>
>>>>> Prereqs:
>>>>> 1) We want to be able to re-use resource manager implementations 
>>>>> among
>>>>> drivers.
>>>>> 2) A driver might want to re-use multiple implementations and have
>>>>> identical data "struct i915_data" attached to both
>>>>
>>>> Well that's the point I don't really understand. Why would a driver 
>>>> want to do this?
>>>
>>> Let's say you have a struct ttm_object_vram and a struct 
>>> ttm_object_gtt, both subclassing drm_gem_object. Then I'd say a 
>>> driver would want to subclass those to attach identical data, extend 
>>> functionality and provide a single i915_gem_object to the rest of 
>>> the driver, which couldn't care less whether it's vram or gtt? 
>>> Wouldn't you say having separate struct ttm_object_vram and a struct 
>>> ttm_object_gtt in this case would be awkward?. We *want* to allow 
>>> common handling.
>>
>> Yeah, but that's a bad idea. This is like diamond inheritance in C++.
>>
>> When you need the same functionality in different backends you 
>> implement that as separate object and then add a parent class.
>>
>>>
>>> It's the exact same situation here. With struct ttm_resource you let 
>>> *different* implementation flavours subclass it, which makes it 
>>> awkward for the driver to extend the functionality in a common way 
>>> by subclassing, unless the driver only uses a single implementation.
>>
>> Well the driver should use separate implementations for their 
>> different domains as much as possible.
>>
> Hmm, Now you lost me a bit. Are you saying that the way we do dynamic 
> backends in the struct ttm_buffer_object to facilitate driver 
> subclassing is a bad idea or that the RFC with backpointer is a bad idea?
>
>
Or if you mean diamond inheritance is bad, yes that's basically my point.

Looking at
https://en.wikipedia.org/wiki/Multiple_inheritance#/media/File:Diamond_inheritance.svg

1)

A would be the struct ttm_resource itself,
D would be struct i915_resource,
B would be struct ttm_range_mgr_node,
C would be struct i915_ttm_buddy_resource

And we need to resolve the ambiguity using the awkward union construct, 
iff we need to derive from both B and C.

Struct ttm_buffer_object and struct ttm_tt instead have B) and C) being 
dynamic backends of A) or a single type derived from A) Hence the 
problem doesn't exist for these types.

So the question from last email remains, if ditching this RFC, can we 
have B) and C) implemented by helpers that can be used from D) and that 
don't derive from A?

Thanks,

Thomas




^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [Intel-gfx] [RFC PATCH] drm/ttm: Add a private member to the struct ttm_resource
@ 2021-09-13 12:41                   ` Thomas Hellström
  0 siblings, 0 replies; 35+ messages in thread
From: Thomas Hellström @ 2021-09-13 12:41 UTC (permalink / raw)
  To: Christian König, intel-gfx, dri-devel
  Cc: maarten.lankhorst, matthew.auld, Matthew Auld


On 9/13/21 12:16 PM, Thomas Hellström wrote:
>
> On 9/13/21 11:41 AM, Christian König wrote:
>> Am 13.09.21 um 11:36 schrieb Thomas Hellström:
>>> On 9/13/21 8:17 AM, Christian König wrote:
>>>> Am 11.09.21 um 08:07 schrieb Thomas Hellström:
>>>>> On Fri, 2021-09-10 at 19:03 +0200, Christian König wrote:
>>>>>> Am 10.09.21 um 17:30 schrieb Thomas Hellström:
>>>>>>> On Fri, 2021-09-10 at 16:40 +0200, Christian König wrote:
>>>>>>>> Am 10.09.21 um 15:15 schrieb Thomas Hellström:
>>>>>>>>> Both the provider (resource manager) and the consumer (the TTM
>>>>>>>>> driver)
>>>>>>>>> want to subclass struct ttm_resource. Since this is left for
>>>>>>>>> the
>>>>>>>>> resource
>>>>>>>>> manager, we need to provide a private pointer for the TTM
>>>>>>>>> driver.
>>>>>>>>>
>>>>>>>>> Provide a struct ttm_resource_private for the driver to
>>>>>>>>> subclass
>>>>>>>>> for
>>>>>>>>> data with the same lifetime as the struct ttm_resource: In the
>>>>>>>>> i915
>>>>>>>>> case
>>>>>>>>> it will, for example, be an sg-table and radix tree into the
>>>>>>>>> LMEM
>>>>>>>>> /VRAM pages that currently are awkwardly attached to the GEM
>>>>>>>>> object.
>>>>>>>>>
>>>>>>>>> Provide an ops structure for associated ops (Which is only
>>>>>>>>> destroy() ATM)
>>>>>>>>> It might seem pointless to provide a separate ops structure,
>>>>>>>>> but
>>>>>>>>> Linus
>>>>>>>>> has previously made it clear that that's the norm.
>>>>>>>>>
>>>>>>>>> After careful audit one could perhaps also on a per-driver
>>>>>>>>> basis
>>>>>>>>> replace the delete_mem_notify() TTM driver callback with the
>>>>>>>>> above
>>>>>>>>> destroy function.
>>>>>>>> Well this is a really big NAK to this approach.
>>>>>>>>
>>>>>>>> If you need to attach some additional information to the resource
>>>>>>>> then
>>>>>>>> implement your own resource manager like everybody else does.
>>>>>>> Well this was the long discussion we had back then when the
>>>>>>> resource
>>>>>>> mangagers started to derive from struct resource and I was under
>>>>>>> the
>>>>>>> impression that we had come to an agreement about the different
>>>>>>> use-
>>>>>>> cases here, and this was my main concern.
>>>>>> Ok, then we somehow didn't understood each other.
>>>>>>
>>>>>>> I mean, it's a pretty big layer violation to do that for this use-
>>>>>>> case.
>>>>>> Well exactly that's the point. TTM should not have a layer design in
>>>>>> the
>>>>>> first place.
>>>>>>
>>>>>> Devices, BOs, resources etc.. are base classes which should 
>>>>>> implement
>>>>>> a
>>>>>> base functionality which is then extended by the drivers to 
>>>>>> implement
>>>>>> the driver specific functionality.
>>>>>>
>>>>>> That is a component based approach, and not layered at all.
>>>>>>
>>>>>>> The TTM resource manager doesn't want to know about this data at
>>>>>>> all,
>>>>>>> it's private to the ttm resource user layer and the resource
>>>>>>> manager
>>>>>>> works perfectly well without it. (I assume the other drivers that
>>>>>>> implement their own resource managers need the data that the
>>>>>>> subclassing provides?)
>>>>>> Yes, that's exactly why we have the subclassing.
>>>>>>
>>>>>>> The fundamental problem here is that there are two layers wanting
>>>>>>> to
>>>>>>> subclass struct ttm_resource. That means one layer gets to do that,
>>>>>>> the
>>>>>>> second gets to use a private pointer, (which in turn can provide
>>>>>>> yet
>>>>>>> another private pointer to a potential third layer). With your
>>>>>>> suggestion, the second layer instead is forced to subclass each
>>>>>>> subclassed instance it uses from  the first layer provides?
>>>>>> Well completely drop the layer approach/thinking here.
>>>>>>
>>>>>> The resource is an object with a base class. The base class
>>>>>> implements
>>>>>> the interface TTM needs to handle the object, e.g.
>>>>>> create/destroy/debug
>>>>>> etc...
>>>>>>
>>>>>> Then we need to subclass this object because without any additional
>>>>>> information the object is pretty pointless.
>>>>>>
>>>>>> One possibility for this is to use the range manager to implement
>>>>>> something drm_mm based. BTW: We should probably rename that to
>>>>>> something
>>>>>> like ttm_res_drm_mm or similar.
>>>>> Sure I'm all in on that, but my point is this becomes pretty awkward
>>>>> because the reusable code already subclasses struct ttm_resource. Let
>>>>> me give you an example:
>>>>>
>>>>> Prereqs:
>>>>> 1) We want to be able to re-use resource manager implementations 
>>>>> among
>>>>> drivers.
>>>>> 2) A driver might want to re-use multiple implementations and have
>>>>> identical data "struct i915_data" attached to both
>>>>
>>>> Well that's the point I don't really understand. Why would a driver 
>>>> want to do this?
>>>
>>> Let's say you have a struct ttm_object_vram and a struct 
>>> ttm_object_gtt, both subclassing drm_gem_object. Then I'd say a 
>>> driver would want to subclass those to attach identical data, extend 
>>> functionality and provide a single i915_gem_object to the rest of 
>>> the driver, which couldn't care less whether it's vram or gtt? 
>>> Wouldn't you say having separate struct ttm_object_vram and a struct 
>>> ttm_object_gtt in this case would be awkward?. We *want* to allow 
>>> common handling.
>>
>> Yeah, but that's a bad idea. This is like diamond inheritance in C++.
>>
>> When you need the same functionality in different backends you 
>> implement that as separate object and then add a parent class.
>>
>>>
>>> It's the exact same situation here. With struct ttm_resource you let 
>>> *different* implementation flavours subclass it, which makes it 
>>> awkward for the driver to extend the functionality in a common way 
>>> by subclassing, unless the driver only uses a single implementation.
>>
>> Well the driver should use separate implementations for their 
>> different domains as much as possible.
>>
> Hmm, Now you lost me a bit. Are you saying that the way we do dynamic 
> backends in the struct ttm_buffer_object to facilitate driver 
> subclassing is a bad idea or that the RFC with backpointer is a bad idea?
>
>
Or if you mean diamond inheritance is bad, yes that's basically my point.

Looking at
https://en.wikipedia.org/wiki/Multiple_inheritance#/media/File:Diamond_inheritance.svg

1)

A would be the struct ttm_resource itself,
D would be struct i915_resource,
B would be struct ttm_range_mgr_node,
C would be struct i915_ttm_buddy_resource

And we need to resolve the ambiguity using the awkward union construct, 
iff we need to derive from both B and C.

Struct ttm_buffer_object and struct ttm_tt instead have B) and C) being 
dynamic backends of A) or a single type derived from A) Hence the 
problem doesn't exist for these types.

So the question from last email remains, if ditching this RFC, can we 
have B) and C) implemented by helpers that can be used from D) and that 
don't derive from A?

Thanks,

Thomas




^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [RFC PATCH] drm/ttm: Add a private member to the struct ttm_resource
  2021-09-13 12:41                   ` [Intel-gfx] " Thomas Hellström
@ 2021-09-14  7:40                     ` Christian König
  -1 siblings, 0 replies; 35+ messages in thread
From: Christian König @ 2021-09-14  7:40 UTC (permalink / raw)
  To: Thomas Hellström, intel-gfx, dri-devel
  Cc: maarten.lankhorst, matthew.auld, Matthew Auld

Am 13.09.21 um 14:41 schrieb Thomas Hellström:
> [SNIP]
>>>> Let's say you have a struct ttm_object_vram and a struct 
>>>> ttm_object_gtt, both subclassing drm_gem_object. Then I'd say a 
>>>> driver would want to subclass those to attach identical data, 
>>>> extend functionality and provide a single i915_gem_object to the 
>>>> rest of the driver, which couldn't care less whether it's vram or 
>>>> gtt? Wouldn't you say having separate struct ttm_object_vram and a 
>>>> struct ttm_object_gtt in this case would be awkward?. We *want* to 
>>>> allow common handling.
>>>
>>> Yeah, but that's a bad idea. This is like diamond inheritance in C++.
>>>
>>> When you need the same functionality in different backends you 
>>> implement that as separate object and then add a parent class.
>>>
>>>>
>>>> It's the exact same situation here. With struct ttm_resource you 
>>>> let *different* implementation flavours subclass it, which makes it 
>>>> awkward for the driver to extend the functionality in a common way 
>>>> by subclassing, unless the driver only uses a single implementation.
>>>
>>> Well the driver should use separate implementations for their 
>>> different domains as much as possible.
>>>
>> Hmm, Now you lost me a bit. Are you saying that the way we do dynamic 
>> backends in the struct ttm_buffer_object to facilitate driver 
>> subclassing is a bad idea or that the RFC with backpointer is a bad 
>> idea?
>>
>>
> Or if you mean diamond inheritance is bad, yes that's basically my point.

That diamond inheritance is a bad idea. What I don't understand is why 
you need that in the first place?

Information that you attach to a resource are specific to the domain 
where the resource is allocated from. So why do you want to attach the 
same information to a resources from different domains?

>
> Looking at
> https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Fen.wikipedia.org%2Fwiki%2FMultiple_inheritance%23%2Fmedia%2FFile%3ADiamond_inheritance.svg&amp;data=04%7C01%7Cchristian.koenig%40amd.com%7Cece4bd8aab644feacc1808d976b3ca56%7C3dd8961fe4884e608e11a82d994e183d%7C0%7C0%7C637671336950757656%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C3000&amp;sdata=LPMnfvC1px0bW8o420vP72oBbkm1v76A%2B0PDUw7urQY%3D&amp;reserved=0 
>
>
> 1)
>
> A would be the struct ttm_resource itself,
> D would be struct i915_resource,
> B would be struct ttm_range_mgr_node,
> C would be struct i915_ttm_buddy_resource
>
> And we need to resolve the ambiguity using the awkward union 
> construct, iff we need to derive from both B and C.
>
> Struct ttm_buffer_object and struct ttm_tt instead have B) and C) 
> being dynamic backends of A) or a single type derived from A) Hence 
> the problem doesn't exist for these types.
>
> So the question from last email remains, if ditching this RFC, can we 
> have B) and C) implemented by helpers that can be used from D) and 
> that don't derive from A?

Well we already have that in the form of drm_mm. I mean the 
ttm_range_manager is just a relatively small glue code which implements 
the TTMs resource interface using the drm_mm object and a spinlock. IIRC 
that less than 200 lines of code.

So you should already have the necessary helpers and just need to 
implement the resource manager as far as I can see.

I mean I reused the ttm_range_manager_node in for amdgpu_gtt_mgr and 
could potentially reuse a bit more of the ttm_range_manager code. But I 
don't see that as much of an issue, the extra functionality there is 
just minimal.

Regards,
Christian.

>
> Thanks,
>
> Thomas
>
>
>


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [Intel-gfx] [RFC PATCH] drm/ttm: Add a private member to the struct ttm_resource
@ 2021-09-14  7:40                     ` Christian König
  0 siblings, 0 replies; 35+ messages in thread
From: Christian König @ 2021-09-14  7:40 UTC (permalink / raw)
  To: Thomas Hellström, intel-gfx, dri-devel
  Cc: maarten.lankhorst, matthew.auld, Matthew Auld

Am 13.09.21 um 14:41 schrieb Thomas Hellström:
> [SNIP]
>>>> Let's say you have a struct ttm_object_vram and a struct 
>>>> ttm_object_gtt, both subclassing drm_gem_object. Then I'd say a 
>>>> driver would want to subclass those to attach identical data, 
>>>> extend functionality and provide a single i915_gem_object to the 
>>>> rest of the driver, which couldn't care less whether it's vram or 
>>>> gtt? Wouldn't you say having separate struct ttm_object_vram and a 
>>>> struct ttm_object_gtt in this case would be awkward?. We *want* to 
>>>> allow common handling.
>>>
>>> Yeah, but that's a bad idea. This is like diamond inheritance in C++.
>>>
>>> When you need the same functionality in different backends you 
>>> implement that as separate object and then add a parent class.
>>>
>>>>
>>>> It's the exact same situation here. With struct ttm_resource you 
>>>> let *different* implementation flavours subclass it, which makes it 
>>>> awkward for the driver to extend the functionality in a common way 
>>>> by subclassing, unless the driver only uses a single implementation.
>>>
>>> Well the driver should use separate implementations for their 
>>> different domains as much as possible.
>>>
>> Hmm, Now you lost me a bit. Are you saying that the way we do dynamic 
>> backends in the struct ttm_buffer_object to facilitate driver 
>> subclassing is a bad idea or that the RFC with backpointer is a bad 
>> idea?
>>
>>
> Or if you mean diamond inheritance is bad, yes that's basically my point.

That diamond inheritance is a bad idea. What I don't understand is why 
you need that in the first place?

Information that you attach to a resource are specific to the domain 
where the resource is allocated from. So why do you want to attach the 
same information to a resources from different domains?

>
> Looking at
> https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Fen.wikipedia.org%2Fwiki%2FMultiple_inheritance%23%2Fmedia%2FFile%3ADiamond_inheritance.svg&amp;data=04%7C01%7Cchristian.koenig%40amd.com%7Cece4bd8aab644feacc1808d976b3ca56%7C3dd8961fe4884e608e11a82d994e183d%7C0%7C0%7C637671336950757656%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C3000&amp;sdata=LPMnfvC1px0bW8o420vP72oBbkm1v76A%2B0PDUw7urQY%3D&amp;reserved=0 
>
>
> 1)
>
> A would be the struct ttm_resource itself,
> D would be struct i915_resource,
> B would be struct ttm_range_mgr_node,
> C would be struct i915_ttm_buddy_resource
>
> And we need to resolve the ambiguity using the awkward union 
> construct, iff we need to derive from both B and C.
>
> Struct ttm_buffer_object and struct ttm_tt instead have B) and C) 
> being dynamic backends of A) or a single type derived from A) Hence 
> the problem doesn't exist for these types.
>
> So the question from last email remains, if ditching this RFC, can we 
> have B) and C) implemented by helpers that can be used from D) and 
> that don't derive from A?

Well we already have that in the form of drm_mm. I mean the 
ttm_range_manager is just a relatively small glue code which implements 
the TTMs resource interface using the drm_mm object and a spinlock. IIRC 
that less than 200 lines of code.

So you should already have the necessary helpers and just need to 
implement the resource manager as far as I can see.

I mean I reused the ttm_range_manager_node in for amdgpu_gtt_mgr and 
could potentially reuse a bit more of the ttm_range_manager code. But I 
don't see that as much of an issue, the extra functionality there is 
just minimal.

Regards,
Christian.

>
> Thanks,
>
> Thomas
>
>
>


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [Intel-gfx] [RFC PATCH] drm/ttm: Add a private member to the struct ttm_resource
  2021-09-14  7:40                     ` [Intel-gfx] " Christian König
@ 2021-09-14  8:27                       ` Thomas Hellström
  -1 siblings, 0 replies; 35+ messages in thread
From: Thomas Hellström @ 2021-09-14  8:27 UTC (permalink / raw)
  To: Christian König, intel-gfx, dri-devel
  Cc: maarten.lankhorst, matthew.auld, Matthew Auld

On Tue, 2021-09-14 at 09:40 +0200, Christian König wrote:
> Am 13.09.21 um 14:41 schrieb Thomas Hellström:
> > [SNIP]
> > > > > Let's say you have a struct ttm_object_vram and a struct 
> > > > > ttm_object_gtt, both subclassing drm_gem_object. Then I'd say
> > > > > a 
> > > > > driver would want to subclass those to attach identical data,
> > > > > extend functionality and provide a single i915_gem_object to
> > > > > the 
> > > > > rest of the driver, which couldn't care less whether it's
> > > > > vram or 
> > > > > gtt? Wouldn't you say having separate struct ttm_object_vram
> > > > > and a 
> > > > > struct ttm_object_gtt in this case would be awkward?. We
> > > > > *want* to 
> > > > > allow common handling.
> > > > 
> > > > Yeah, but that's a bad idea. This is like diamond inheritance
> > > > in C++.
> > > > 
> > > > When you need the same functionality in different backends you 
> > > > implement that as separate object and then add a parent class.
> > > > 
> > > > > 
> > > > > It's the exact same situation here. With struct ttm_resource
> > > > > you 
> > > > > let *different* implementation flavours subclass it, which
> > > > > makes it 
> > > > > awkward for the driver to extend the functionality in a
> > > > > common way 
> > > > > by subclassing, unless the driver only uses a single
> > > > > implementation.
> > > > 
> > > > Well the driver should use separate implementations for their 
> > > > different domains as much as possible.
> > > > 
> > > Hmm, Now you lost me a bit. Are you saying that the way we do
> > > dynamic 
> > > backends in the struct ttm_buffer_object to facilitate driver 
> > > subclassing is a bad idea or that the RFC with backpointer is a
> > > bad 
> > > idea?
> > > 
> > > 
> > Or if you mean diamond inheritance is bad, yes that's basically my
> > point.
> 
> That diamond inheritance is a bad idea. What I don't understand is
> why 
> you need that in the first place?
> 
> Information that you attach to a resource are specific to the domain 
> where the resource is allocated from. So why do you want to attach
> the 
> same information to a resources from different domains?

Again, for the same reason that we do that with struct i915_gem_objects
and struct ttm_tts, to extend the functionality. I mean information
that we attach when we subclass a struct ttm_buffer_object doesn't
necessarily care about whether it's a VRAM or a GTT object. In exactly
the same way, information that we want to attach to a struct
ttm_resource doesn't necessarily care whether it's a system or a VRAM
resource, and need not be specific to any of those.

In this particular case, as memory management becomes asynchronous, you
can't attach things like sg-tables and gpu binding information to the
gem object anymore, because the object may have a number of migrations
in the pipeline. Such things need to be attached to the structure that
abstracts the memory allocation, and which may have a completely
different lifetime than the object itself.

In our particular case we want to attach information for cached page
lookup and and sg-table, and moving forward probably the gpu binding
(vma) information, and that is the same information for any
ttm_resource regardless where it's allocated from.

Typical example: A pipelined GPU operation happening before an async
eviction goes wrong. We need to error capture and reset. But if we look
at the object for error capturing, it's already updated pointing to an
after-eviction resource, and the resource sits on a ghost object (or in
the future when ghost objects go away perhaps in limbo somewhere).

We need to capture the memory pointed to by the struct ttm_resource the
GPU was referencing, and to be able to do that we need to cache driver-
specific info on the resource. Typically an sg-list and GPU binding
information. 

Anyway, that cached information needs to be destroyed together with the
resource and thus we need to be able to access that information from
the resource in some way, regardless whether it's a pointer or whether
we embed the struct resource.

I think it's pretty important here that we (using the inheritance
diagram below) recognize the need for D to inherit from A, just like we
do for objects or ttm_tts.


> 
> > 
> > Looking at
> > https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Fen.wikipedia.org%2Fwiki%2FMultiple_inheritance%23%2Fmedia%2FFile%3ADiamond_inheritance.svg&amp;data=04%7C01%7Cchristian.koenig%40amd.com%7Cece4bd8aab644feacc1808d976b3ca56%7C3dd8961fe4884e608e11a82d994e183d%7C0%7C0%7C637671336950757656%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C3000&amp;sdata=LPMnfvC1px0bW8o420vP72oBbkm1v76A%2B0PDUw7urQY%3D&amp;reserved=0
> >  
> > 
> > 
> > 1)
> > 
> > A would be the struct ttm_resource itself,
> > D would be struct i915_resource,
> > B would be struct ttm_range_mgr_node,
> > C would be struct i915_ttm_buddy_resource
> > 
> > And we need to resolve the ambiguity using the awkward union 
> > construct, iff we need to derive from both B and C.
> > 
> > Struct ttm_buffer_object and struct ttm_tt instead have B) and C) 
> > being dynamic backends of A) or a single type derived from A) Hence
> > the problem doesn't exist for these types.
> > 
> > So the question from last email remains, if ditching this RFC, can
> > we 
> > have B) and C) implemented by helpers that can be used from D) and 
> > that don't derive from A?
> 
> Well we already have that in the form of drm_mm. I mean the 
> ttm_range_manager is just a relatively small glue code which
> implements 
> the TTMs resource interface using the drm_mm object and a spinlock.
> IIRC 
> that less than 200 lines of code.
> 
> So you should already have the necessary helpers and just need to 
> implement the resource manager as far as I can see.
> 
> I mean I reused the ttm_range_manager_node in for amdgpu_gtt_mgr and 
> could potentially reuse a bit more of the ttm_range_manager code. But
> I 
> don't see that as much of an issue, the extra functionality there is 
> just minimal.

Sure but that would give up the prereq of having reusable resource
manager implementations. What happens if someone would like to reuse
the buddy manager? And to complicate things even more, the information
we attach to VRAM resources also needs to be attached to system
resources. Sure we could probably re-implement a combined system-buddy-
range manager, but that seems like something overly complex.

The other object examples resolve the diamond inheritance with a
pointer to the specialization (BC) and let D derive from A.

TTM resources do it backwards. If we can just recognize that and ponder
what's the easiest way to resolve this given the current design, I
actually think we'd arrive at a backpointer to allow downcasting from A
to D.

Thanks,
Thomas



> 
> Regards,
> Christian.
> 
> > 
> > Thanks,
> > 
> > Thomas
> > 
> > 
> > 
> 



^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [RFC PATCH] drm/ttm: Add a private member to the struct ttm_resource
@ 2021-09-14  8:27                       ` Thomas Hellström
  0 siblings, 0 replies; 35+ messages in thread
From: Thomas Hellström @ 2021-09-14  8:27 UTC (permalink / raw)
  To: Christian König, intel-gfx, dri-devel
  Cc: maarten.lankhorst, matthew.auld, Matthew Auld

On Tue, 2021-09-14 at 09:40 +0200, Christian König wrote:
> Am 13.09.21 um 14:41 schrieb Thomas Hellström:
> > [SNIP]
> > > > > Let's say you have a struct ttm_object_vram and a struct 
> > > > > ttm_object_gtt, both subclassing drm_gem_object. Then I'd say
> > > > > a 
> > > > > driver would want to subclass those to attach identical data,
> > > > > extend functionality and provide a single i915_gem_object to
> > > > > the 
> > > > > rest of the driver, which couldn't care less whether it's
> > > > > vram or 
> > > > > gtt? Wouldn't you say having separate struct ttm_object_vram
> > > > > and a 
> > > > > struct ttm_object_gtt in this case would be awkward?. We
> > > > > *want* to 
> > > > > allow common handling.
> > > > 
> > > > Yeah, but that's a bad idea. This is like diamond inheritance
> > > > in C++.
> > > > 
> > > > When you need the same functionality in different backends you 
> > > > implement that as separate object and then add a parent class.
> > > > 
> > > > > 
> > > > > It's the exact same situation here. With struct ttm_resource
> > > > > you 
> > > > > let *different* implementation flavours subclass it, which
> > > > > makes it 
> > > > > awkward for the driver to extend the functionality in a
> > > > > common way 
> > > > > by subclassing, unless the driver only uses a single
> > > > > implementation.
> > > > 
> > > > Well the driver should use separate implementations for their 
> > > > different domains as much as possible.
> > > > 
> > > Hmm, Now you lost me a bit. Are you saying that the way we do
> > > dynamic 
> > > backends in the struct ttm_buffer_object to facilitate driver 
> > > subclassing is a bad idea or that the RFC with backpointer is a
> > > bad 
> > > idea?
> > > 
> > > 
> > Or if you mean diamond inheritance is bad, yes that's basically my
> > point.
> 
> That diamond inheritance is a bad idea. What I don't understand is
> why 
> you need that in the first place?
> 
> Information that you attach to a resource are specific to the domain 
> where the resource is allocated from. So why do you want to attach
> the 
> same information to a resources from different domains?

Again, for the same reason that we do that with struct i915_gem_objects
and struct ttm_tts, to extend the functionality. I mean information
that we attach when we subclass a struct ttm_buffer_object doesn't
necessarily care about whether it's a VRAM or a GTT object. In exactly
the same way, information that we want to attach to a struct
ttm_resource doesn't necessarily care whether it's a system or a VRAM
resource, and need not be specific to any of those.

In this particular case, as memory management becomes asynchronous, you
can't attach things like sg-tables and gpu binding information to the
gem object anymore, because the object may have a number of migrations
in the pipeline. Such things need to be attached to the structure that
abstracts the memory allocation, and which may have a completely
different lifetime than the object itself.

In our particular case we want to attach information for cached page
lookup and and sg-table, and moving forward probably the gpu binding
(vma) information, and that is the same information for any
ttm_resource regardless where it's allocated from.

Typical example: A pipelined GPU operation happening before an async
eviction goes wrong. We need to error capture and reset. But if we look
at the object for error capturing, it's already updated pointing to an
after-eviction resource, and the resource sits on a ghost object (or in
the future when ghost objects go away perhaps in limbo somewhere).

We need to capture the memory pointed to by the struct ttm_resource the
GPU was referencing, and to be able to do that we need to cache driver-
specific info on the resource. Typically an sg-list and GPU binding
information. 

Anyway, that cached information needs to be destroyed together with the
resource and thus we need to be able to access that information from
the resource in some way, regardless whether it's a pointer or whether
we embed the struct resource.

I think it's pretty important here that we (using the inheritance
diagram below) recognize the need for D to inherit from A, just like we
do for objects or ttm_tts.


> 
> > 
> > Looking at
> > https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Fen.wikipedia.org%2Fwiki%2FMultiple_inheritance%23%2Fmedia%2FFile%3ADiamond_inheritance.svg&amp;data=04%7C01%7Cchristian.koenig%40amd.com%7Cece4bd8aab644feacc1808d976b3ca56%7C3dd8961fe4884e608e11a82d994e183d%7C0%7C0%7C637671336950757656%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C3000&amp;sdata=LPMnfvC1px0bW8o420vP72oBbkm1v76A%2B0PDUw7urQY%3D&amp;reserved=0
> >  
> > 
> > 
> > 1)
> > 
> > A would be the struct ttm_resource itself,
> > D would be struct i915_resource,
> > B would be struct ttm_range_mgr_node,
> > C would be struct i915_ttm_buddy_resource
> > 
> > And we need to resolve the ambiguity using the awkward union 
> > construct, iff we need to derive from both B and C.
> > 
> > Struct ttm_buffer_object and struct ttm_tt instead have B) and C) 
> > being dynamic backends of A) or a single type derived from A) Hence
> > the problem doesn't exist for these types.
> > 
> > So the question from last email remains, if ditching this RFC, can
> > we 
> > have B) and C) implemented by helpers that can be used from D) and 
> > that don't derive from A?
> 
> Well we already have that in the form of drm_mm. I mean the 
> ttm_range_manager is just a relatively small glue code which
> implements 
> the TTMs resource interface using the drm_mm object and a spinlock.
> IIRC 
> that less than 200 lines of code.
> 
> So you should already have the necessary helpers and just need to 
> implement the resource manager as far as I can see.
> 
> I mean I reused the ttm_range_manager_node in for amdgpu_gtt_mgr and 
> could potentially reuse a bit more of the ttm_range_manager code. But
> I 
> don't see that as much of an issue, the extra functionality there is 
> just minimal.

Sure but that would give up the prereq of having reusable resource
manager implementations. What happens if someone would like to reuse
the buddy manager? And to complicate things even more, the information
we attach to VRAM resources also needs to be attached to system
resources. Sure we could probably re-implement a combined system-buddy-
range manager, but that seems like something overly complex.

The other object examples resolve the diamond inheritance with a
pointer to the specialization (BC) and let D derive from A.

TTM resources do it backwards. If we can just recognize that and ponder
what's the easiest way to resolve this given the current design, I
actually think we'd arrive at a backpointer to allow downcasting from A
to D.

Thanks,
Thomas



> 
> Regards,
> Christian.
> 
> > 
> > Thanks,
> > 
> > Thomas
> > 
> > 
> > 
> 



^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [RFC PATCH] drm/ttm: Add a private member to the struct ttm_resource
  2021-09-14  8:27                       ` Thomas Hellström
@ 2021-09-14  8:53                         ` Christian König
  -1 siblings, 0 replies; 35+ messages in thread
From: Christian König @ 2021-09-14  8:53 UTC (permalink / raw)
  To: Thomas Hellström, intel-gfx, dri-devel
  Cc: maarten.lankhorst, matthew.auld, Matthew Auld

Am 14.09.21 um 10:27 schrieb Thomas Hellström:
> On Tue, 2021-09-14 at 09:40 +0200, Christian König wrote:
>> Am 13.09.21 um 14:41 schrieb Thomas Hellström:
>>> [SNIP]
>>>>>> Let's say you have a struct ttm_object_vram and a struct
>>>>>> ttm_object_gtt, both subclassing drm_gem_object. Then I'd say
>>>>>> a
>>>>>> driver would want to subclass those to attach identical data,
>>>>>> extend functionality and provide a single i915_gem_object to
>>>>>> the
>>>>>> rest of the driver, which couldn't care less whether it's
>>>>>> vram or
>>>>>> gtt? Wouldn't you say having separate struct ttm_object_vram
>>>>>> and a
>>>>>> struct ttm_object_gtt in this case would be awkward?. We
>>>>>> *want* to
>>>>>> allow common handling.
>>>>> Yeah, but that's a bad idea. This is like diamond inheritance
>>>>> in C++.
>>>>>
>>>>> When you need the same functionality in different backends you
>>>>> implement that as separate object and then add a parent class.
>>>>>
>>>>>> It's the exact same situation here. With struct ttm_resource
>>>>>> you
>>>>>> let *different* implementation flavours subclass it, which
>>>>>> makes it
>>>>>> awkward for the driver to extend the functionality in a
>>>>>> common way
>>>>>> by subclassing, unless the driver only uses a single
>>>>>> implementation.
>>>>> Well the driver should use separate implementations for their
>>>>> different domains as much as possible.
>>>>>
>>>> Hmm, Now you lost me a bit. Are you saying that the way we do
>>>> dynamic
>>>> backends in the struct ttm_buffer_object to facilitate driver
>>>> subclassing is a bad idea or that the RFC with backpointer is a
>>>> bad
>>>> idea?
>>>>
>>>>
>>> Or if you mean diamond inheritance is bad, yes that's basically my
>>> point.
>> That diamond inheritance is a bad idea. What I don't understand is
>> why
>> you need that in the first place?
>>
>> Information that you attach to a resource are specific to the domain
>> where the resource is allocated from. So why do you want to attach
>> the
>> same information to a resources from different domains?
> Again, for the same reason that we do that with struct i915_gem_objects
> and struct ttm_tts, to extend the functionality. I mean information
> that we attach when we subclass a struct ttm_buffer_object doesn't
> necessarily care about whether it's a VRAM or a GTT object. In exactly
> the same way, information that we want to attach to a struct
> ttm_resource doesn't necessarily care whether it's a system or a VRAM
> resource, and need not be specific to any of those.
>
> In this particular case, as memory management becomes asynchronous, you
> can't attach things like sg-tables and gpu binding information to the
> gem object anymore, because the object may have a number of migrations
> in the pipeline. Such things need to be attached to the structure that
> abstracts the memory allocation, and which may have a completely
> different lifetime than the object itself.
>
> In our particular case we want to attach information for cached page
> lookup and and sg-table, and moving forward probably the gpu binding
> (vma) information, and that is the same information for any
> ttm_resource regardless where it's allocated from.
>
> Typical example: A pipelined GPU operation happening before an async
> eviction goes wrong. We need to error capture and reset. But if we look
> at the object for error capturing, it's already updated pointing to an
> after-eviction resource, and the resource sits on a ghost object (or in
> the future when ghost objects go away perhaps in limbo somewhere).
>
> We need to capture the memory pointed to by the struct ttm_resource the
> GPU was referencing, and to be able to do that we need to cache driver-
> specific info on the resource. Typically an sg-list and GPU binding
> information.
>
> Anyway, that cached information needs to be destroyed together with the
> resource and thus we need to be able to access that information from
> the resource in some way, regardless whether it's a pointer or whether
> we embed the struct resource.
>
> I think it's pretty important here that we (using the inheritance
> diagram below) recognize the need for D to inherit from A, just like we
> do for objects or ttm_tts.
>
>
>>> Looking at
>>> https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Fen.wikipedia.org%2Fwiki%2FMultiple_inheritance%23%2Fmedia%2FFile%3ADiamond_inheritance.svg&amp;data=04%7C01%7Cchristian.koenig%40amd.com%7C268bb562db8548b285b408d977598b2c%7C3dd8961fe4884e608e11a82d994e183d%7C0%7C0%7C637672048739103176%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C1000&amp;sdata=bPyDqiSF%2FHFZbl74ux0vfwh3uma5hZIUf2xbzb9yZz8%3D&amp;reserved=0
>>>   
>>>
>>>
>>> 1)
>>>
>>> A would be the struct ttm_resource itself,
>>> D would be struct i915_resource,
>>> B would be struct ttm_range_mgr_node,
>>> C would be struct i915_ttm_buddy_resource
>>>
>>> And we need to resolve the ambiguity using the awkward union
>>> construct, iff we need to derive from both B and C.
>>>
>>> Struct ttm_buffer_object and struct ttm_tt instead have B) and C)
>>> being dynamic backends of A) or a single type derived from A) Hence
>>> the problem doesn't exist for these types.
>>>
>>> So the question from last email remains, if ditching this RFC, can
>>> we
>>> have B) and C) implemented by helpers that can be used from D) and
>>> that don't derive from A?
>> Well we already have that in the form of drm_mm. I mean the
>> ttm_range_manager is just a relatively small glue code which
>> implements
>> the TTMs resource interface using the drm_mm object and a spinlock.
>> IIRC
>> that less than 200 lines of code.
>>
>> So you should already have the necessary helpers and just need to
>> implement the resource manager as far as I can see.
>>
>> I mean I reused the ttm_range_manager_node in for amdgpu_gtt_mgr and
>> could potentially reuse a bit more of the ttm_range_manager code. But
>> I
>> don't see that as much of an issue, the extra functionality there is
>> just minimal.
> Sure but that would give up the prereq of having reusable resource
> manager implementations. What happens if someone would like to reuse
> the buddy manager? And to complicate things even more, the information
> we attach to VRAM resources also needs to be attached to system
> resources. Sure we could probably re-implement a combined system-buddy-
> range manager, but that seems like something overly complex.
>
> The other object examples resolve the diamond inheritance with a
> pointer to the specialization (BC) and let D derive from A.
>
> TTM resources do it backwards. If we can just recognize that and ponder
> what's the easiest way to resolve this given the current design, I
> actually think we'd arrive at a backpointer to allow downcasting from A
> to D.

Yeah, but I think you are approaching that from the wrong side.

For use cases like this I think you should probably have the following 
objects and inheritances:

1. Driver specific objects like i915_sg, i915_vma which don't inherit 
anything from TTM.
2. i915_vram_node which inherits from ttm_resource or a potential 
ttm_buddy_allocator.
3. i915_gtt_node which inherits from ttm_range_manger_node.
4. Maybe i915_sys_node which inherits from ttm_resource as well.

The managers for the individual domains then provide the glue code to 
implement both the TTM resource interface as well as a driver specific 
interface to access the driver objects.

Amdgpu just uses a switch/case for now, but you could as well extend the 
ttm_resource_manager_func table and upcast that inside the driver.

Regards,
Christian.

>
> Thanks,
> Thomas
>
>
>
>> Regards,
>> Christian.
>>
>>> Thanks,
>>>
>>> Thomas
>>>
>>>
>>>
>


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [Intel-gfx] [RFC PATCH] drm/ttm: Add a private member to the struct ttm_resource
@ 2021-09-14  8:53                         ` Christian König
  0 siblings, 0 replies; 35+ messages in thread
From: Christian König @ 2021-09-14  8:53 UTC (permalink / raw)
  To: Thomas Hellström, intel-gfx, dri-devel
  Cc: maarten.lankhorst, matthew.auld, Matthew Auld

Am 14.09.21 um 10:27 schrieb Thomas Hellström:
> On Tue, 2021-09-14 at 09:40 +0200, Christian König wrote:
>> Am 13.09.21 um 14:41 schrieb Thomas Hellström:
>>> [SNIP]
>>>>>> Let's say you have a struct ttm_object_vram and a struct
>>>>>> ttm_object_gtt, both subclassing drm_gem_object. Then I'd say
>>>>>> a
>>>>>> driver would want to subclass those to attach identical data,
>>>>>> extend functionality and provide a single i915_gem_object to
>>>>>> the
>>>>>> rest of the driver, which couldn't care less whether it's
>>>>>> vram or
>>>>>> gtt? Wouldn't you say having separate struct ttm_object_vram
>>>>>> and a
>>>>>> struct ttm_object_gtt in this case would be awkward?. We
>>>>>> *want* to
>>>>>> allow common handling.
>>>>> Yeah, but that's a bad idea. This is like diamond inheritance
>>>>> in C++.
>>>>>
>>>>> When you need the same functionality in different backends you
>>>>> implement that as separate object and then add a parent class.
>>>>>
>>>>>> It's the exact same situation here. With struct ttm_resource
>>>>>> you
>>>>>> let *different* implementation flavours subclass it, which
>>>>>> makes it
>>>>>> awkward for the driver to extend the functionality in a
>>>>>> common way
>>>>>> by subclassing, unless the driver only uses a single
>>>>>> implementation.
>>>>> Well the driver should use separate implementations for their
>>>>> different domains as much as possible.
>>>>>
>>>> Hmm, Now you lost me a bit. Are you saying that the way we do
>>>> dynamic
>>>> backends in the struct ttm_buffer_object to facilitate driver
>>>> subclassing is a bad idea or that the RFC with backpointer is a
>>>> bad
>>>> idea?
>>>>
>>>>
>>> Or if you mean diamond inheritance is bad, yes that's basically my
>>> point.
>> That diamond inheritance is a bad idea. What I don't understand is
>> why
>> you need that in the first place?
>>
>> Information that you attach to a resource are specific to the domain
>> where the resource is allocated from. So why do you want to attach
>> the
>> same information to a resources from different domains?
> Again, for the same reason that we do that with struct i915_gem_objects
> and struct ttm_tts, to extend the functionality. I mean information
> that we attach when we subclass a struct ttm_buffer_object doesn't
> necessarily care about whether it's a VRAM or a GTT object. In exactly
> the same way, information that we want to attach to a struct
> ttm_resource doesn't necessarily care whether it's a system or a VRAM
> resource, and need not be specific to any of those.
>
> In this particular case, as memory management becomes asynchronous, you
> can't attach things like sg-tables and gpu binding information to the
> gem object anymore, because the object may have a number of migrations
> in the pipeline. Such things need to be attached to the structure that
> abstracts the memory allocation, and which may have a completely
> different lifetime than the object itself.
>
> In our particular case we want to attach information for cached page
> lookup and and sg-table, and moving forward probably the gpu binding
> (vma) information, and that is the same information for any
> ttm_resource regardless where it's allocated from.
>
> Typical example: A pipelined GPU operation happening before an async
> eviction goes wrong. We need to error capture and reset. But if we look
> at the object for error capturing, it's already updated pointing to an
> after-eviction resource, and the resource sits on a ghost object (or in
> the future when ghost objects go away perhaps in limbo somewhere).
>
> We need to capture the memory pointed to by the struct ttm_resource the
> GPU was referencing, and to be able to do that we need to cache driver-
> specific info on the resource. Typically an sg-list and GPU binding
> information.
>
> Anyway, that cached information needs to be destroyed together with the
> resource and thus we need to be able to access that information from
> the resource in some way, regardless whether it's a pointer or whether
> we embed the struct resource.
>
> I think it's pretty important here that we (using the inheritance
> diagram below) recognize the need for D to inherit from A, just like we
> do for objects or ttm_tts.
>
>
>>> Looking at
>>> https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Fen.wikipedia.org%2Fwiki%2FMultiple_inheritance%23%2Fmedia%2FFile%3ADiamond_inheritance.svg&amp;data=04%7C01%7Cchristian.koenig%40amd.com%7C268bb562db8548b285b408d977598b2c%7C3dd8961fe4884e608e11a82d994e183d%7C0%7C0%7C637672048739103176%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C1000&amp;sdata=bPyDqiSF%2FHFZbl74ux0vfwh3uma5hZIUf2xbzb9yZz8%3D&amp;reserved=0
>>>   
>>>
>>>
>>> 1)
>>>
>>> A would be the struct ttm_resource itself,
>>> D would be struct i915_resource,
>>> B would be struct ttm_range_mgr_node,
>>> C would be struct i915_ttm_buddy_resource
>>>
>>> And we need to resolve the ambiguity using the awkward union
>>> construct, iff we need to derive from both B and C.
>>>
>>> Struct ttm_buffer_object and struct ttm_tt instead have B) and C)
>>> being dynamic backends of A) or a single type derived from A) Hence
>>> the problem doesn't exist for these types.
>>>
>>> So the question from last email remains, if ditching this RFC, can
>>> we
>>> have B) and C) implemented by helpers that can be used from D) and
>>> that don't derive from A?
>> Well we already have that in the form of drm_mm. I mean the
>> ttm_range_manager is just a relatively small glue code which
>> implements
>> the TTMs resource interface using the drm_mm object and a spinlock.
>> IIRC
>> that less than 200 lines of code.
>>
>> So you should already have the necessary helpers and just need to
>> implement the resource manager as far as I can see.
>>
>> I mean I reused the ttm_range_manager_node in for amdgpu_gtt_mgr and
>> could potentially reuse a bit more of the ttm_range_manager code. But
>> I
>> don't see that as much of an issue, the extra functionality there is
>> just minimal.
> Sure but that would give up the prereq of having reusable resource
> manager implementations. What happens if someone would like to reuse
> the buddy manager? And to complicate things even more, the information
> we attach to VRAM resources also needs to be attached to system
> resources. Sure we could probably re-implement a combined system-buddy-
> range manager, but that seems like something overly complex.
>
> The other object examples resolve the diamond inheritance with a
> pointer to the specialization (BC) and let D derive from A.
>
> TTM resources do it backwards. If we can just recognize that and ponder
> what's the easiest way to resolve this given the current design, I
> actually think we'd arrive at a backpointer to allow downcasting from A
> to D.

Yeah, but I think you are approaching that from the wrong side.

For use cases like this I think you should probably have the following 
objects and inheritances:

1. Driver specific objects like i915_sg, i915_vma which don't inherit 
anything from TTM.
2. i915_vram_node which inherits from ttm_resource or a potential 
ttm_buddy_allocator.
3. i915_gtt_node which inherits from ttm_range_manger_node.
4. Maybe i915_sys_node which inherits from ttm_resource as well.

The managers for the individual domains then provide the glue code to 
implement both the TTM resource interface as well as a driver specific 
interface to access the driver objects.

Amdgpu just uses a switch/case for now, but you could as well extend the 
ttm_resource_manager_func table and upcast that inside the driver.

Regards,
Christian.

>
> Thanks,
> Thomas
>
>
>
>> Regards,
>> Christian.
>>
>>> Thanks,
>>>
>>> Thomas
>>>
>>>
>>>
>


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [RFC PATCH] drm/ttm: Add a private member to the struct ttm_resource
  2021-09-14  8:53                         ` [Intel-gfx] " Christian König
@ 2021-09-14 10:38                           ` Thomas Hellström
  -1 siblings, 0 replies; 35+ messages in thread
From: Thomas Hellström @ 2021-09-14 10:38 UTC (permalink / raw)
  To: Christian König, intel-gfx, dri-devel
  Cc: maarten.lankhorst, matthew.auld, Matthew Auld

On Tue, 2021-09-14 at 10:53 +0200, Christian König wrote:
> Am 14.09.21 um 10:27 schrieb Thomas Hellström:
> > On Tue, 2021-09-14 at 09:40 +0200, Christian König wrote:
> > > Am 13.09.21 um 14:41 schrieb Thomas Hellström:
> > > > [SNIP]
> > > > > > > Let's say you have a struct ttm_object_vram and a struct
> > > > > > > ttm_object_gtt, both subclassing drm_gem_object. Then I'd
> > > > > > > say
> > > > > > > a
> > > > > > > driver would want to subclass those to attach identical
> > > > > > > data,
> > > > > > > extend functionality and provide a single i915_gem_object
> > > > > > > to
> > > > > > > the
> > > > > > > rest of the driver, which couldn't care less whether it's
> > > > > > > vram or
> > > > > > > gtt? Wouldn't you say having separate struct
> > > > > > > ttm_object_vram
> > > > > > > and a
> > > > > > > struct ttm_object_gtt in this case would be awkward?. We
> > > > > > > *want* to
> > > > > > > allow common handling.
> > > > > > Yeah, but that's a bad idea. This is like diamond
> > > > > > inheritance
> > > > > > in C++.
> > > > > > 
> > > > > > When you need the same functionality in different backends
> > > > > > you
> > > > > > implement that as separate object and then add a parent
> > > > > > class.
> > > > > > 
> > > > > > > It's the exact same situation here. With struct
> > > > > > > ttm_resource
> > > > > > > you
> > > > > > > let *different* implementation flavours subclass it,
> > > > > > > which
> > > > > > > makes it
> > > > > > > awkward for the driver to extend the functionality in a
> > > > > > > common way
> > > > > > > by subclassing, unless the driver only uses a single
> > > > > > > implementation.
> > > > > > Well the driver should use separate implementations for
> > > > > > their
> > > > > > different domains as much as possible.
> > > > > > 
> > > > > Hmm, Now you lost me a bit. Are you saying that the way we do
> > > > > dynamic
> > > > > backends in the struct ttm_buffer_object to facilitate driver
> > > > > subclassing is a bad idea or that the RFC with backpointer is
> > > > > a
> > > > > bad
> > > > > idea?
> > > > > 
> > > > > 
> > > > Or if you mean diamond inheritance is bad, yes that's basically
> > > > my
> > > > point.
> > > That diamond inheritance is a bad idea. What I don't understand
> > > is
> > > why
> > > you need that in the first place?
> > > 
> > > Information that you attach to a resource are specific to the
> > > domain
> > > where the resource is allocated from. So why do you want to
> > > attach
> > > the
> > > same information to a resources from different domains?
> > Again, for the same reason that we do that with struct
> > i915_gem_objects
> > and struct ttm_tts, to extend the functionality. I mean information
> > that we attach when we subclass a struct ttm_buffer_object doesn't
> > necessarily care about whether it's a VRAM or a GTT object. In
> > exactly
> > the same way, information that we want to attach to a struct
> > ttm_resource doesn't necessarily care whether it's a system or a
> > VRAM
> > resource, and need not be specific to any of those.
> > 
> > In this particular case, as memory management becomes asynchronous,
> > you
> > can't attach things like sg-tables and gpu binding information to
> > the
> > gem object anymore, because the object may have a number of
> > migrations
> > in the pipeline. Such things need to be attached to the structure
> > that
> > abstracts the memory allocation, and which may have a completely
> > different lifetime than the object itself.
> > 
> > In our particular case we want to attach information for cached
> > page
> > lookup and and sg-table, and moving forward probably the gpu
> > binding
> > (vma) information, and that is the same information for any
> > ttm_resource regardless where it's allocated from.
> > 
> > Typical example: A pipelined GPU operation happening before an
> > async
> > eviction goes wrong. We need to error capture and reset. But if we
> > look
> > at the object for error capturing, it's already updated pointing to
> > an
> > after-eviction resource, and the resource sits on a ghost object
> > (or in
> > the future when ghost objects go away perhaps in limbo somewhere).
> > 
> > We need to capture the memory pointed to by the struct ttm_resource
> > the
> > GPU was referencing, and to be able to do that we need to cache
> > driver-
> > specific info on the resource. Typically an sg-list and GPU binding
> > information.
> > 
> > Anyway, that cached information needs to be destroyed together with
> > the
> > resource and thus we need to be able to access that information
> > from
> > the resource in some way, regardless whether it's a pointer or
> > whether
> > we embed the struct resource.
> > 
> > I think it's pretty important here that we (using the inheritance
> > diagram below) recognize the need for D to inherit from A, just
> > like we
> > do for objects or ttm_tts.
> > 
> > 
> > > > Looking at
> > > > https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Fen.wikipedia.org%2Fwiki%2FMultiple_inheritance%23%2Fmedia%2FFile%3ADiamond_inheritance.svg&amp;data=04%7C01%7Cchristian.koenig%40amd.com%7C268bb562db8548b285b408d977598b2c%7C3dd8961fe4884e608e11a82d994e183d%7C0%7C0%7C637672048739103176%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C1000&amp;sdata=bPyDqiSF%2FHFZbl74ux0vfwh3uma5hZIUf2xbzb9yZz8%3D&amp;reserved=0
> > > >   
> > > > 
> > > > 
> > > > 1)
> > > > 
> > > > A would be the struct ttm_resource itself,
> > > > D would be struct i915_resource,
> > > > B would be struct ttm_range_mgr_node,
> > > > C would be struct i915_ttm_buddy_resource
> > > > 
> > > > And we need to resolve the ambiguity using the awkward union
> > > > construct, iff we need to derive from both B and C.
> > > > 
> > > > Struct ttm_buffer_object and struct ttm_tt instead have B) and
> > > > C)
> > > > being dynamic backends of A) or a single type derived from A)
> > > > Hence
> > > > the problem doesn't exist for these types.
> > > > 
> > > > So the question from last email remains, if ditching this RFC,
> > > > can
> > > > we
> > > > have B) and C) implemented by helpers that can be used from D)
> > > > and
> > > > that don't derive from A?
> > > Well we already have that in the form of drm_mm. I mean the
> > > ttm_range_manager is just a relatively small glue code which
> > > implements
> > > the TTMs resource interface using the drm_mm object and a
> > > spinlock.
> > > IIRC
> > > that less than 200 lines of code.
> > > 
> > > So you should already have the necessary helpers and just need to
> > > implement the resource manager as far as I can see.
> > > 
> > > I mean I reused the ttm_range_manager_node in for amdgpu_gtt_mgr
> > > and
> > > could potentially reuse a bit more of the ttm_range_manager code.
> > > But
> > > I
> > > don't see that as much of an issue, the extra functionality there
> > > is
> > > just minimal.
> > Sure but that would give up the prereq of having reusable resource
> > manager implementations. What happens if someone would like to
> > reuse
> > the buddy manager? And to complicate things even more, the
> > information
> > we attach to VRAM resources also needs to be attached to system
> > resources. Sure we could probably re-implement a combined system-
> > buddy-
> > range manager, but that seems like something overly complex.
> > 
> > The other object examples resolve the diamond inheritance with a
> > pointer to the specialization (BC) and let D derive from A.
> > 
> > TTM resources do it backwards. If we can just recognize that and
> > ponder
> > what's the easiest way to resolve this given the current design, I
> > actually think we'd arrive at a backpointer to allow downcasting
> > from A
> > to D.
> 
> Yeah, but I think you are approaching that from the wrong side.
> 
> For use cases like this I think you should probably have the
> following 
> objects and inheritances:
> 
> 1. Driver specific objects like i915_sg, i915_vma which don't inherit
> anything from TTM.
> 2. i915_vram_node which inherits from ttm_resource or a potential 
> ttm_buddy_allocator.
> 3. i915_gtt_node which inherits from ttm_range_manger_node.
> 4. Maybe i915_sys_node which inherits from ttm_resource as well.
> 
> The managers for the individual domains then provide the glue code to
> implement both the TTM resource interface as well as a driver
> specific 
> interface to access the driver objects.

Well yes, but this is not really much better than the union thing. More
memory efficient but also more duplicated type definitions and manager
definitions and in addition overriding the default system resource
manager, not counting the kerneldoc needed to explain why all this is
necessary.

It was this complexity I was trying to get away from in the first
place.

/Thomas




> Amdgpu just uses a switch/case for now, but you could as well extend
> the 
> ttm_resource_manager_func table and upcast that inside the driver.
> 
> Regards,
> Christian.
> 
> > 
> > Thanks,
> > Thomas
> > 
> > 
> > 
> > > Regards,
> > > Christian.
> > > 
> > > > Thanks,
> > > > 
> > > > Thomas
> > > > 
> > > > 
> > > > 
> > 
> 



^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [Intel-gfx] [RFC PATCH] drm/ttm: Add a private member to the struct ttm_resource
@ 2021-09-14 10:38                           ` Thomas Hellström
  0 siblings, 0 replies; 35+ messages in thread
From: Thomas Hellström @ 2021-09-14 10:38 UTC (permalink / raw)
  To: Christian König, intel-gfx, dri-devel
  Cc: maarten.lankhorst, matthew.auld, Matthew Auld

On Tue, 2021-09-14 at 10:53 +0200, Christian König wrote:
> Am 14.09.21 um 10:27 schrieb Thomas Hellström:
> > On Tue, 2021-09-14 at 09:40 +0200, Christian König wrote:
> > > Am 13.09.21 um 14:41 schrieb Thomas Hellström:
> > > > [SNIP]
> > > > > > > Let's say you have a struct ttm_object_vram and a struct
> > > > > > > ttm_object_gtt, both subclassing drm_gem_object. Then I'd
> > > > > > > say
> > > > > > > a
> > > > > > > driver would want to subclass those to attach identical
> > > > > > > data,
> > > > > > > extend functionality and provide a single i915_gem_object
> > > > > > > to
> > > > > > > the
> > > > > > > rest of the driver, which couldn't care less whether it's
> > > > > > > vram or
> > > > > > > gtt? Wouldn't you say having separate struct
> > > > > > > ttm_object_vram
> > > > > > > and a
> > > > > > > struct ttm_object_gtt in this case would be awkward?. We
> > > > > > > *want* to
> > > > > > > allow common handling.
> > > > > > Yeah, but that's a bad idea. This is like diamond
> > > > > > inheritance
> > > > > > in C++.
> > > > > > 
> > > > > > When you need the same functionality in different backends
> > > > > > you
> > > > > > implement that as separate object and then add a parent
> > > > > > class.
> > > > > > 
> > > > > > > It's the exact same situation here. With struct
> > > > > > > ttm_resource
> > > > > > > you
> > > > > > > let *different* implementation flavours subclass it,
> > > > > > > which
> > > > > > > makes it
> > > > > > > awkward for the driver to extend the functionality in a
> > > > > > > common way
> > > > > > > by subclassing, unless the driver only uses a single
> > > > > > > implementation.
> > > > > > Well the driver should use separate implementations for
> > > > > > their
> > > > > > different domains as much as possible.
> > > > > > 
> > > > > Hmm, Now you lost me a bit. Are you saying that the way we do
> > > > > dynamic
> > > > > backends in the struct ttm_buffer_object to facilitate driver
> > > > > subclassing is a bad idea or that the RFC with backpointer is
> > > > > a
> > > > > bad
> > > > > idea?
> > > > > 
> > > > > 
> > > > Or if you mean diamond inheritance is bad, yes that's basically
> > > > my
> > > > point.
> > > That diamond inheritance is a bad idea. What I don't understand
> > > is
> > > why
> > > you need that in the first place?
> > > 
> > > Information that you attach to a resource are specific to the
> > > domain
> > > where the resource is allocated from. So why do you want to
> > > attach
> > > the
> > > same information to a resources from different domains?
> > Again, for the same reason that we do that with struct
> > i915_gem_objects
> > and struct ttm_tts, to extend the functionality. I mean information
> > that we attach when we subclass a struct ttm_buffer_object doesn't
> > necessarily care about whether it's a VRAM or a GTT object. In
> > exactly
> > the same way, information that we want to attach to a struct
> > ttm_resource doesn't necessarily care whether it's a system or a
> > VRAM
> > resource, and need not be specific to any of those.
> > 
> > In this particular case, as memory management becomes asynchronous,
> > you
> > can't attach things like sg-tables and gpu binding information to
> > the
> > gem object anymore, because the object may have a number of
> > migrations
> > in the pipeline. Such things need to be attached to the structure
> > that
> > abstracts the memory allocation, and which may have a completely
> > different lifetime than the object itself.
> > 
> > In our particular case we want to attach information for cached
> > page
> > lookup and and sg-table, and moving forward probably the gpu
> > binding
> > (vma) information, and that is the same information for any
> > ttm_resource regardless where it's allocated from.
> > 
> > Typical example: A pipelined GPU operation happening before an
> > async
> > eviction goes wrong. We need to error capture and reset. But if we
> > look
> > at the object for error capturing, it's already updated pointing to
> > an
> > after-eviction resource, and the resource sits on a ghost object
> > (or in
> > the future when ghost objects go away perhaps in limbo somewhere).
> > 
> > We need to capture the memory pointed to by the struct ttm_resource
> > the
> > GPU was referencing, and to be able to do that we need to cache
> > driver-
> > specific info on the resource. Typically an sg-list and GPU binding
> > information.
> > 
> > Anyway, that cached information needs to be destroyed together with
> > the
> > resource and thus we need to be able to access that information
> > from
> > the resource in some way, regardless whether it's a pointer or
> > whether
> > we embed the struct resource.
> > 
> > I think it's pretty important here that we (using the inheritance
> > diagram below) recognize the need for D to inherit from A, just
> > like we
> > do for objects or ttm_tts.
> > 
> > 
> > > > Looking at
> > > > https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Fen.wikipedia.org%2Fwiki%2FMultiple_inheritance%23%2Fmedia%2FFile%3ADiamond_inheritance.svg&amp;data=04%7C01%7Cchristian.koenig%40amd.com%7C268bb562db8548b285b408d977598b2c%7C3dd8961fe4884e608e11a82d994e183d%7C0%7C0%7C637672048739103176%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C1000&amp;sdata=bPyDqiSF%2FHFZbl74ux0vfwh3uma5hZIUf2xbzb9yZz8%3D&amp;reserved=0
> > > >   
> > > > 
> > > > 
> > > > 1)
> > > > 
> > > > A would be the struct ttm_resource itself,
> > > > D would be struct i915_resource,
> > > > B would be struct ttm_range_mgr_node,
> > > > C would be struct i915_ttm_buddy_resource
> > > > 
> > > > And we need to resolve the ambiguity using the awkward union
> > > > construct, iff we need to derive from both B and C.
> > > > 
> > > > Struct ttm_buffer_object and struct ttm_tt instead have B) and
> > > > C)
> > > > being dynamic backends of A) or a single type derived from A)
> > > > Hence
> > > > the problem doesn't exist for these types.
> > > > 
> > > > So the question from last email remains, if ditching this RFC,
> > > > can
> > > > we
> > > > have B) and C) implemented by helpers that can be used from D)
> > > > and
> > > > that don't derive from A?
> > > Well we already have that in the form of drm_mm. I mean the
> > > ttm_range_manager is just a relatively small glue code which
> > > implements
> > > the TTMs resource interface using the drm_mm object and a
> > > spinlock.
> > > IIRC
> > > that less than 200 lines of code.
> > > 
> > > So you should already have the necessary helpers and just need to
> > > implement the resource manager as far as I can see.
> > > 
> > > I mean I reused the ttm_range_manager_node in for amdgpu_gtt_mgr
> > > and
> > > could potentially reuse a bit more of the ttm_range_manager code.
> > > But
> > > I
> > > don't see that as much of an issue, the extra functionality there
> > > is
> > > just minimal.
> > Sure but that would give up the prereq of having reusable resource
> > manager implementations. What happens if someone would like to
> > reuse
> > the buddy manager? And to complicate things even more, the
> > information
> > we attach to VRAM resources also needs to be attached to system
> > resources. Sure we could probably re-implement a combined system-
> > buddy-
> > range manager, but that seems like something overly complex.
> > 
> > The other object examples resolve the diamond inheritance with a
> > pointer to the specialization (BC) and let D derive from A.
> > 
> > TTM resources do it backwards. If we can just recognize that and
> > ponder
> > what's the easiest way to resolve this given the current design, I
> > actually think we'd arrive at a backpointer to allow downcasting
> > from A
> > to D.
> 
> Yeah, but I think you are approaching that from the wrong side.
> 
> For use cases like this I think you should probably have the
> following 
> objects and inheritances:
> 
> 1. Driver specific objects like i915_sg, i915_vma which don't inherit
> anything from TTM.
> 2. i915_vram_node which inherits from ttm_resource or a potential 
> ttm_buddy_allocator.
> 3. i915_gtt_node which inherits from ttm_range_manger_node.
> 4. Maybe i915_sys_node which inherits from ttm_resource as well.
> 
> The managers for the individual domains then provide the glue code to
> implement both the TTM resource interface as well as a driver
> specific 
> interface to access the driver objects.

Well yes, but this is not really much better than the union thing. More
memory efficient but also more duplicated type definitions and manager
definitions and in addition overriding the default system resource
manager, not counting the kerneldoc needed to explain why all this is
necessary.

It was this complexity I was trying to get away from in the first
place.

/Thomas




> Amdgpu just uses a switch/case for now, but you could as well extend
> the 
> ttm_resource_manager_func table and upcast that inside the driver.
> 
> Regards,
> Christian.
> 
> > 
> > Thanks,
> > Thomas
> > 
> > 
> > 
> > > Regards,
> > > Christian.
> > > 
> > > > Thanks,
> > > > 
> > > > Thomas
> > > > 
> > > > 
> > > > 
> > 
> 



^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [RFC PATCH] drm/ttm: Add a private member to the struct ttm_resource
  2021-09-14 10:38                           ` [Intel-gfx] " Thomas Hellström
@ 2021-09-14 14:07                             ` Daniel Vetter
  -1 siblings, 0 replies; 35+ messages in thread
From: Daniel Vetter @ 2021-09-14 14:07 UTC (permalink / raw)
  To: Thomas Hellström
  Cc: Christian König, intel-gfx, dri-devel, maarten.lankhorst,
	matthew.auld, Matthew Auld

On Tue, Sep 14, 2021 at 12:38:00PM +0200, Thomas Hellström wrote:
> On Tue, 2021-09-14 at 10:53 +0200, Christian König wrote:
> > Am 14.09.21 um 10:27 schrieb Thomas Hellström:
> > > On Tue, 2021-09-14 at 09:40 +0200, Christian König wrote:
> > > > Am 13.09.21 um 14:41 schrieb Thomas Hellström:
> > > > > [SNIP]
> > > > > > > > Let's say you have a struct ttm_object_vram and a struct
> > > > > > > > ttm_object_gtt, both subclassing drm_gem_object. Then I'd
> > > > > > > > say
> > > > > > > > a
> > > > > > > > driver would want to subclass those to attach identical
> > > > > > > > data,
> > > > > > > > extend functionality and provide a single i915_gem_object
> > > > > > > > to
> > > > > > > > the
> > > > > > > > rest of the driver, which couldn't care less whether it's
> > > > > > > > vram or
> > > > > > > > gtt? Wouldn't you say having separate struct
> > > > > > > > ttm_object_vram
> > > > > > > > and a
> > > > > > > > struct ttm_object_gtt in this case would be awkward?. We
> > > > > > > > *want* to
> > > > > > > > allow common handling.
> > > > > > > Yeah, but that's a bad idea. This is like diamond
> > > > > > > inheritance
> > > > > > > in C++.
> > > > > > > 
> > > > > > > When you need the same functionality in different backends
> > > > > > > you
> > > > > > > implement that as separate object and then add a parent
> > > > > > > class.
> > > > > > > 
> > > > > > > > It's the exact same situation here. With struct
> > > > > > > > ttm_resource
> > > > > > > > you
> > > > > > > > let *different* implementation flavours subclass it,
> > > > > > > > which
> > > > > > > > makes it
> > > > > > > > awkward for the driver to extend the functionality in a
> > > > > > > > common way
> > > > > > > > by subclassing, unless the driver only uses a single
> > > > > > > > implementation.
> > > > > > > Well the driver should use separate implementations for
> > > > > > > their
> > > > > > > different domains as much as possible.
> > > > > > > 
> > > > > > Hmm, Now you lost me a bit. Are you saying that the way we do
> > > > > > dynamic
> > > > > > backends in the struct ttm_buffer_object to facilitate driver
> > > > > > subclassing is a bad idea or that the RFC with backpointer is
> > > > > > a
> > > > > > bad
> > > > > > idea?
> > > > > > 
> > > > > > 
> > > > > Or if you mean diamond inheritance is bad, yes that's basically
> > > > > my
> > > > > point.
> > > > That diamond inheritance is a bad idea. What I don't understand
> > > > is
> > > > why
> > > > you need that in the first place?
> > > > 
> > > > Information that you attach to a resource are specific to the
> > > > domain
> > > > where the resource is allocated from. So why do you want to
> > > > attach
> > > > the
> > > > same information to a resources from different domains?
> > > Again, for the same reason that we do that with struct
> > > i915_gem_objects
> > > and struct ttm_tts, to extend the functionality. I mean information
> > > that we attach when we subclass a struct ttm_buffer_object doesn't
> > > necessarily care about whether it's a VRAM or a GTT object. In
> > > exactly
> > > the same way, information that we want to attach to a struct
> > > ttm_resource doesn't necessarily care whether it's a system or a
> > > VRAM
> > > resource, and need not be specific to any of those.
> > > 
> > > In this particular case, as memory management becomes asynchronous,
> > > you
> > > can't attach things like sg-tables and gpu binding information to
> > > the
> > > gem object anymore, because the object may have a number of
> > > migrations
> > > in the pipeline. Such things need to be attached to the structure
> > > that
> > > abstracts the memory allocation, and which may have a completely
> > > different lifetime than the object itself.
> > > 
> > > In our particular case we want to attach information for cached
> > > page
> > > lookup and and sg-table, and moving forward probably the gpu
> > > binding
> > > (vma) information, and that is the same information for any
> > > ttm_resource regardless where it's allocated from.
> > > 
> > > Typical example: A pipelined GPU operation happening before an
> > > async
> > > eviction goes wrong. We need to error capture and reset. But if we
> > > look
> > > at the object for error capturing, it's already updated pointing to
> > > an
> > > after-eviction resource, and the resource sits on a ghost object
> > > (or in
> > > the future when ghost objects go away perhaps in limbo somewhere).
> > > 
> > > We need to capture the memory pointed to by the struct ttm_resource
> > > the
> > > GPU was referencing, and to be able to do that we need to cache
> > > driver-
> > > specific info on the resource. Typically an sg-list and GPU binding
> > > information.
> > > 
> > > Anyway, that cached information needs to be destroyed together with
> > > the
> > > resource and thus we need to be able to access that information
> > > from
> > > the resource in some way, regardless whether it's a pointer or
> > > whether
> > > we embed the struct resource.
> > > 
> > > I think it's pretty important here that we (using the inheritance
> > > diagram below) recognize the need for D to inherit from A, just
> > > like we
> > > do for objects or ttm_tts.
> > > 
> > > 
> > > > > Looking at
> > > > > https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Fen.wikipedia.org%2Fwiki%2FMultiple_inheritance%23%2Fmedia%2FFile%3ADiamond_inheritance.svg&amp;data=04%7C01%7Cchristian.koenig%40amd.com%7C268bb562db8548b285b408d977598b2c%7C3dd8961fe4884e608e11a82d994e183d%7C0%7C0%7C637672048739103176%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C1000&amp;sdata=bPyDqiSF%2FHFZbl74ux0vfwh3uma5hZIUf2xbzb9yZz8%3D&amp;reserved=0
> > > > >   
> > > > > 
> > > > > 
> > > > > 1)
> > > > > 
> > > > > A would be the struct ttm_resource itself,
> > > > > D would be struct i915_resource,
> > > > > B would be struct ttm_range_mgr_node,
> > > > > C would be struct i915_ttm_buddy_resource
> > > > > 
> > > > > And we need to resolve the ambiguity using the awkward union
> > > > > construct, iff we need to derive from both B and C.
> > > > > 
> > > > > Struct ttm_buffer_object and struct ttm_tt instead have B) and
> > > > > C)
> > > > > being dynamic backends of A) or a single type derived from A)
> > > > > Hence
> > > > > the problem doesn't exist for these types.
> > > > > 
> > > > > So the question from last email remains, if ditching this RFC,
> > > > > can
> > > > > we
> > > > > have B) and C) implemented by helpers that can be used from D)
> > > > > and
> > > > > that don't derive from A?
> > > > Well we already have that in the form of drm_mm. I mean the
> > > > ttm_range_manager is just a relatively small glue code which
> > > > implements
> > > > the TTMs resource interface using the drm_mm object and a
> > > > spinlock.
> > > > IIRC
> > > > that less than 200 lines of code.
> > > > 
> > > > So you should already have the necessary helpers and just need to
> > > > implement the resource manager as far as I can see.
> > > > 
> > > > I mean I reused the ttm_range_manager_node in for amdgpu_gtt_mgr
> > > > and
> > > > could potentially reuse a bit more of the ttm_range_manager code.
> > > > But
> > > > I
> > > > don't see that as much of an issue, the extra functionality there
> > > > is
> > > > just minimal.
> > > Sure but that would give up the prereq of having reusable resource
> > > manager implementations. What happens if someone would like to
> > > reuse
> > > the buddy manager? And to complicate things even more, the
> > > information
> > > we attach to VRAM resources also needs to be attached to system
> > > resources. Sure we could probably re-implement a combined system-
> > > buddy-
> > > range manager, but that seems like something overly complex.
> > > 
> > > The other object examples resolve the diamond inheritance with a
> > > pointer to the specialization (BC) and let D derive from A.
> > > 
> > > TTM resources do it backwards. If we can just recognize that and
> > > ponder
> > > what's the easiest way to resolve this given the current design, I
> > > actually think we'd arrive at a backpointer to allow downcasting
> > > from A
> > > to D.
> > 
> > Yeah, but I think you are approaching that from the wrong side.
> > 
> > For use cases like this I think you should probably have the
> > following 
> > objects and inheritances:
> > 
> > 1. Driver specific objects like i915_sg, i915_vma which don't inherit
> > anything from TTM.
> > 2. i915_vram_node which inherits from ttm_resource or a potential 
> > ttm_buddy_allocator.
> > 3. i915_gtt_node which inherits from ttm_range_manger_node.
> > 4. Maybe i915_sys_node which inherits from ttm_resource as well.
> > 
> > The managers for the individual domains then provide the glue code to
> > implement both the TTM resource interface as well as a driver
> > specific 
> > interface to access the driver objects.
> 
> Well yes, but this is not really much better than the union thing. More
> memory efficient but also more duplicated type definitions and manager
> definitions and in addition overriding the default system resource
> manager, not counting the kerneldoc needed to explain why all this is
> necessary.
> 
> It was this complexity I was trying to get away from in the first
> place.

I honestly don't think the union thing is the worst. At least as long as
we're reworking i915 at a fairly invasive pace it's probably the lest
worst approach.

For the specific case of sg list I'm also not sure how great our current
i915 design of "everything is an sg" really is. In the wider community
there's clear rejection of sg for p2p addresses, so having this as a
per-ttm_res_manager kind of situation is probably not the worst.

In that world every ttm_res_manager would have it's own implementation of
binding into ptes, which then iterate over the pagetables with some common
abstraction. So in a way more of a helper approach for the i915
implementations of the various hooks, at the cost of a bit of code
duplication.

I do agree with Christian that the various backpointers to sort out the
diamond inheritence issue isn't not great. The other options aren't pretty
either, but at least it's more contained to i915.
-Daniel


> /Thomas
> 
> 
> 
> 
> > Amdgpu just uses a switch/case for now, but you could as well extend
> > the 
> > ttm_resource_manager_func table and upcast that inside the driver.
> > 
> > Regards,
> > Christian.
> > 
> > > 
> > > Thanks,
> > > Thomas
> > > 
> > > 
> > > 
> > > > Regards,
> > > > Christian.
> > > > 
> > > > > Thanks,
> > > > > 
> > > > > Thomas
> > > > > 
> > > > > 
> > > > > 
> > > 
> > 
> 
> 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [Intel-gfx] [RFC PATCH] drm/ttm: Add a private member to the struct ttm_resource
@ 2021-09-14 14:07                             ` Daniel Vetter
  0 siblings, 0 replies; 35+ messages in thread
From: Daniel Vetter @ 2021-09-14 14:07 UTC (permalink / raw)
  To: Thomas Hellström
  Cc: Christian König, intel-gfx, dri-devel, maarten.lankhorst,
	matthew.auld, Matthew Auld

On Tue, Sep 14, 2021 at 12:38:00PM +0200, Thomas Hellström wrote:
> On Tue, 2021-09-14 at 10:53 +0200, Christian König wrote:
> > Am 14.09.21 um 10:27 schrieb Thomas Hellström:
> > > On Tue, 2021-09-14 at 09:40 +0200, Christian König wrote:
> > > > Am 13.09.21 um 14:41 schrieb Thomas Hellström:
> > > > > [SNIP]
> > > > > > > > Let's say you have a struct ttm_object_vram and a struct
> > > > > > > > ttm_object_gtt, both subclassing drm_gem_object. Then I'd
> > > > > > > > say
> > > > > > > > a
> > > > > > > > driver would want to subclass those to attach identical
> > > > > > > > data,
> > > > > > > > extend functionality and provide a single i915_gem_object
> > > > > > > > to
> > > > > > > > the
> > > > > > > > rest of the driver, which couldn't care less whether it's
> > > > > > > > vram or
> > > > > > > > gtt? Wouldn't you say having separate struct
> > > > > > > > ttm_object_vram
> > > > > > > > and a
> > > > > > > > struct ttm_object_gtt in this case would be awkward?. We
> > > > > > > > *want* to
> > > > > > > > allow common handling.
> > > > > > > Yeah, but that's a bad idea. This is like diamond
> > > > > > > inheritance
> > > > > > > in C++.
> > > > > > > 
> > > > > > > When you need the same functionality in different backends
> > > > > > > you
> > > > > > > implement that as separate object and then add a parent
> > > > > > > class.
> > > > > > > 
> > > > > > > > It's the exact same situation here. With struct
> > > > > > > > ttm_resource
> > > > > > > > you
> > > > > > > > let *different* implementation flavours subclass it,
> > > > > > > > which
> > > > > > > > makes it
> > > > > > > > awkward for the driver to extend the functionality in a
> > > > > > > > common way
> > > > > > > > by subclassing, unless the driver only uses a single
> > > > > > > > implementation.
> > > > > > > Well the driver should use separate implementations for
> > > > > > > their
> > > > > > > different domains as much as possible.
> > > > > > > 
> > > > > > Hmm, Now you lost me a bit. Are you saying that the way we do
> > > > > > dynamic
> > > > > > backends in the struct ttm_buffer_object to facilitate driver
> > > > > > subclassing is a bad idea or that the RFC with backpointer is
> > > > > > a
> > > > > > bad
> > > > > > idea?
> > > > > > 
> > > > > > 
> > > > > Or if you mean diamond inheritance is bad, yes that's basically
> > > > > my
> > > > > point.
> > > > That diamond inheritance is a bad idea. What I don't understand
> > > > is
> > > > why
> > > > you need that in the first place?
> > > > 
> > > > Information that you attach to a resource are specific to the
> > > > domain
> > > > where the resource is allocated from. So why do you want to
> > > > attach
> > > > the
> > > > same information to a resources from different domains?
> > > Again, for the same reason that we do that with struct
> > > i915_gem_objects
> > > and struct ttm_tts, to extend the functionality. I mean information
> > > that we attach when we subclass a struct ttm_buffer_object doesn't
> > > necessarily care about whether it's a VRAM or a GTT object. In
> > > exactly
> > > the same way, information that we want to attach to a struct
> > > ttm_resource doesn't necessarily care whether it's a system or a
> > > VRAM
> > > resource, and need not be specific to any of those.
> > > 
> > > In this particular case, as memory management becomes asynchronous,
> > > you
> > > can't attach things like sg-tables and gpu binding information to
> > > the
> > > gem object anymore, because the object may have a number of
> > > migrations
> > > in the pipeline. Such things need to be attached to the structure
> > > that
> > > abstracts the memory allocation, and which may have a completely
> > > different lifetime than the object itself.
> > > 
> > > In our particular case we want to attach information for cached
> > > page
> > > lookup and and sg-table, and moving forward probably the gpu
> > > binding
> > > (vma) information, and that is the same information for any
> > > ttm_resource regardless where it's allocated from.
> > > 
> > > Typical example: A pipelined GPU operation happening before an
> > > async
> > > eviction goes wrong. We need to error capture and reset. But if we
> > > look
> > > at the object for error capturing, it's already updated pointing to
> > > an
> > > after-eviction resource, and the resource sits on a ghost object
> > > (or in
> > > the future when ghost objects go away perhaps in limbo somewhere).
> > > 
> > > We need to capture the memory pointed to by the struct ttm_resource
> > > the
> > > GPU was referencing, and to be able to do that we need to cache
> > > driver-
> > > specific info on the resource. Typically an sg-list and GPU binding
> > > information.
> > > 
> > > Anyway, that cached information needs to be destroyed together with
> > > the
> > > resource and thus we need to be able to access that information
> > > from
> > > the resource in some way, regardless whether it's a pointer or
> > > whether
> > > we embed the struct resource.
> > > 
> > > I think it's pretty important here that we (using the inheritance
> > > diagram below) recognize the need for D to inherit from A, just
> > > like we
> > > do for objects or ttm_tts.
> > > 
> > > 
> > > > > Looking at
> > > > > https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Fen.wikipedia.org%2Fwiki%2FMultiple_inheritance%23%2Fmedia%2FFile%3ADiamond_inheritance.svg&amp;data=04%7C01%7Cchristian.koenig%40amd.com%7C268bb562db8548b285b408d977598b2c%7C3dd8961fe4884e608e11a82d994e183d%7C0%7C0%7C637672048739103176%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C1000&amp;sdata=bPyDqiSF%2FHFZbl74ux0vfwh3uma5hZIUf2xbzb9yZz8%3D&amp;reserved=0
> > > > >   
> > > > > 
> > > > > 
> > > > > 1)
> > > > > 
> > > > > A would be the struct ttm_resource itself,
> > > > > D would be struct i915_resource,
> > > > > B would be struct ttm_range_mgr_node,
> > > > > C would be struct i915_ttm_buddy_resource
> > > > > 
> > > > > And we need to resolve the ambiguity using the awkward union
> > > > > construct, iff we need to derive from both B and C.
> > > > > 
> > > > > Struct ttm_buffer_object and struct ttm_tt instead have B) and
> > > > > C)
> > > > > being dynamic backends of A) or a single type derived from A)
> > > > > Hence
> > > > > the problem doesn't exist for these types.
> > > > > 
> > > > > So the question from last email remains, if ditching this RFC,
> > > > > can
> > > > > we
> > > > > have B) and C) implemented by helpers that can be used from D)
> > > > > and
> > > > > that don't derive from A?
> > > > Well we already have that in the form of drm_mm. I mean the
> > > > ttm_range_manager is just a relatively small glue code which
> > > > implements
> > > > the TTMs resource interface using the drm_mm object and a
> > > > spinlock.
> > > > IIRC
> > > > that less than 200 lines of code.
> > > > 
> > > > So you should already have the necessary helpers and just need to
> > > > implement the resource manager as far as I can see.
> > > > 
> > > > I mean I reused the ttm_range_manager_node in for amdgpu_gtt_mgr
> > > > and
> > > > could potentially reuse a bit more of the ttm_range_manager code.
> > > > But
> > > > I
> > > > don't see that as much of an issue, the extra functionality there
> > > > is
> > > > just minimal.
> > > Sure but that would give up the prereq of having reusable resource
> > > manager implementations. What happens if someone would like to
> > > reuse
> > > the buddy manager? And to complicate things even more, the
> > > information
> > > we attach to VRAM resources also needs to be attached to system
> > > resources. Sure we could probably re-implement a combined system-
> > > buddy-
> > > range manager, but that seems like something overly complex.
> > > 
> > > The other object examples resolve the diamond inheritance with a
> > > pointer to the specialization (BC) and let D derive from A.
> > > 
> > > TTM resources do it backwards. If we can just recognize that and
> > > ponder
> > > what's the easiest way to resolve this given the current design, I
> > > actually think we'd arrive at a backpointer to allow downcasting
> > > from A
> > > to D.
> > 
> > Yeah, but I think you are approaching that from the wrong side.
> > 
> > For use cases like this I think you should probably have the
> > following 
> > objects and inheritances:
> > 
> > 1. Driver specific objects like i915_sg, i915_vma which don't inherit
> > anything from TTM.
> > 2. i915_vram_node which inherits from ttm_resource or a potential 
> > ttm_buddy_allocator.
> > 3. i915_gtt_node which inherits from ttm_range_manger_node.
> > 4. Maybe i915_sys_node which inherits from ttm_resource as well.
> > 
> > The managers for the individual domains then provide the glue code to
> > implement both the TTM resource interface as well as a driver
> > specific 
> > interface to access the driver objects.
> 
> Well yes, but this is not really much better than the union thing. More
> memory efficient but also more duplicated type definitions and manager
> definitions and in addition overriding the default system resource
> manager, not counting the kerneldoc needed to explain why all this is
> necessary.
> 
> It was this complexity I was trying to get away from in the first
> place.

I honestly don't think the union thing is the worst. At least as long as
we're reworking i915 at a fairly invasive pace it's probably the lest
worst approach.

For the specific case of sg list I'm also not sure how great our current
i915 design of "everything is an sg" really is. In the wider community
there's clear rejection of sg for p2p addresses, so having this as a
per-ttm_res_manager kind of situation is probably not the worst.

In that world every ttm_res_manager would have it's own implementation of
binding into ptes, which then iterate over the pagetables with some common
abstraction. So in a way more of a helper approach for the i915
implementations of the various hooks, at the cost of a bit of code
duplication.

I do agree with Christian that the various backpointers to sort out the
diamond inheritence issue isn't not great. The other options aren't pretty
either, but at least it's more contained to i915.
-Daniel


> /Thomas
> 
> 
> 
> 
> > Amdgpu just uses a switch/case for now, but you could as well extend
> > the 
> > ttm_resource_manager_func table and upcast that inside the driver.
> > 
> > Regards,
> > Christian.
> > 
> > > 
> > > Thanks,
> > > Thomas
> > > 
> > > 
> > > 
> > > > Regards,
> > > > Christian.
> > > > 
> > > > > Thanks,
> > > > > 
> > > > > Thomas
> > > > > 
> > > > > 
> > > > > 
> > > 
> > 
> 
> 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [RFC PATCH] drm/ttm: Add a private member to the struct ttm_resource
  2021-09-14 14:07                             ` [Intel-gfx] " Daniel Vetter
@ 2021-09-14 15:30                               ` Thomas Hellström
  -1 siblings, 0 replies; 35+ messages in thread
From: Thomas Hellström @ 2021-09-14 15:30 UTC (permalink / raw)
  To: Daniel Vetter
  Cc: Christian König, intel-gfx, dri-devel, maarten.lankhorst,
	matthew.auld, Matthew Auld


On 9/14/21 4:07 PM, Daniel Vetter wrote:
> On Tue, Sep 14, 2021 at 12:38:00PM +0200, Thomas Hellström wrote:
>> On Tue, 2021-09-14 at 10:53 +0200, Christian König wrote:
>>> Am 14.09.21 um 10:27 schrieb Thomas Hellström:
>>>> On Tue, 2021-09-14 at 09:40 +0200, Christian König wrote:
>>>>> Am 13.09.21 um 14:41 schrieb Thomas Hellström:
>>>>>> [SNIP]
>>>>>>>>> Let's say you have a struct ttm_object_vram and a struct
>>>>>>>>> ttm_object_gtt, both subclassing drm_gem_object. Then I'd
>>>>>>>>> say
>>>>>>>>> a
>>>>>>>>> driver would want to subclass those to attach identical
>>>>>>>>> data,
>>>>>>>>> extend functionality and provide a single i915_gem_object
>>>>>>>>> to
>>>>>>>>> the
>>>>>>>>> rest of the driver, which couldn't care less whether it's
>>>>>>>>> vram or
>>>>>>>>> gtt? Wouldn't you say having separate struct
>>>>>>>>> ttm_object_vram
>>>>>>>>> and a
>>>>>>>>> struct ttm_object_gtt in this case would be awkward?. We
>>>>>>>>> *want* to
>>>>>>>>> allow common handling.
>>>>>>>> Yeah, but that's a bad idea. This is like diamond
>>>>>>>> inheritance
>>>>>>>> in C++.
>>>>>>>>
>>>>>>>> When you need the same functionality in different backends
>>>>>>>> you
>>>>>>>> implement that as separate object and then add a parent
>>>>>>>> class.
>>>>>>>>
>>>>>>>>> It's the exact same situation here. With struct
>>>>>>>>> ttm_resource
>>>>>>>>> you
>>>>>>>>> let *different* implementation flavours subclass it,
>>>>>>>>> which
>>>>>>>>> makes it
>>>>>>>>> awkward for the driver to extend the functionality in a
>>>>>>>>> common way
>>>>>>>>> by subclassing, unless the driver only uses a single
>>>>>>>>> implementation.
>>>>>>>> Well the driver should use separate implementations for
>>>>>>>> their
>>>>>>>> different domains as much as possible.
>>>>>>>>
>>>>>>> Hmm, Now you lost me a bit. Are you saying that the way we do
>>>>>>> dynamic
>>>>>>> backends in the struct ttm_buffer_object to facilitate driver
>>>>>>> subclassing is a bad idea or that the RFC with backpointer is
>>>>>>> a
>>>>>>> bad
>>>>>>> idea?
>>>>>>>
>>>>>>>
>>>>>> Or if you mean diamond inheritance is bad, yes that's basically
>>>>>> my
>>>>>> point.
>>>>> That diamond inheritance is a bad idea. What I don't understand
>>>>> is
>>>>> why
>>>>> you need that in the first place?
>>>>>
>>>>> Information that you attach to a resource are specific to the
>>>>> domain
>>>>> where the resource is allocated from. So why do you want to
>>>>> attach
>>>>> the
>>>>> same information to a resources from different domains?
>>>> Again, for the same reason that we do that with struct
>>>> i915_gem_objects
>>>> and struct ttm_tts, to extend the functionality. I mean information
>>>> that we attach when we subclass a struct ttm_buffer_object doesn't
>>>> necessarily care about whether it's a VRAM or a GTT object. In
>>>> exactly
>>>> the same way, information that we want to attach to a struct
>>>> ttm_resource doesn't necessarily care whether it's a system or a
>>>> VRAM
>>>> resource, and need not be specific to any of those.
>>>>
>>>> In this particular case, as memory management becomes asynchronous,
>>>> you
>>>> can't attach things like sg-tables and gpu binding information to
>>>> the
>>>> gem object anymore, because the object may have a number of
>>>> migrations
>>>> in the pipeline. Such things need to be attached to the structure
>>>> that
>>>> abstracts the memory allocation, and which may have a completely
>>>> different lifetime than the object itself.
>>>>
>>>> In our particular case we want to attach information for cached
>>>> page
>>>> lookup and and sg-table, and moving forward probably the gpu
>>>> binding
>>>> (vma) information, and that is the same information for any
>>>> ttm_resource regardless where it's allocated from.
>>>>
>>>> Typical example: A pipelined GPU operation happening before an
>>>> async
>>>> eviction goes wrong. We need to error capture and reset. But if we
>>>> look
>>>> at the object for error capturing, it's already updated pointing to
>>>> an
>>>> after-eviction resource, and the resource sits on a ghost object
>>>> (or in
>>>> the future when ghost objects go away perhaps in limbo somewhere).
>>>>
>>>> We need to capture the memory pointed to by the struct ttm_resource
>>>> the
>>>> GPU was referencing, and to be able to do that we need to cache
>>>> driver-
>>>> specific info on the resource. Typically an sg-list and GPU binding
>>>> information.
>>>>
>>>> Anyway, that cached information needs to be destroyed together with
>>>> the
>>>> resource and thus we need to be able to access that information
>>>> from
>>>> the resource in some way, regardless whether it's a pointer or
>>>> whether
>>>> we embed the struct resource.
>>>>
>>>> I think it's pretty important here that we (using the inheritance
>>>> diagram below) recognize the need for D to inherit from A, just
>>>> like we
>>>> do for objects or ttm_tts.
>>>>
>>>>
>>>>>> Looking at
>>>>>> https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Fen.wikipedia.org%2Fwiki%2FMultiple_inheritance%23%2Fmedia%2FFile%3ADiamond_inheritance.svg&amp;data=04%7C01%7Cchristian.koenig%40amd.com%7C268bb562db8548b285b408d977598b2c%7C3dd8961fe4884e608e11a82d994e183d%7C0%7C0%7C637672048739103176%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C1000&amp;sdata=bPyDqiSF%2FHFZbl74ux0vfwh3uma5hZIUf2xbzb9yZz8%3D&amp;reserved=0
>>>>>>    
>>>>>>
>>>>>>
>>>>>> 1)
>>>>>>
>>>>>> A would be the struct ttm_resource itself,
>>>>>> D would be struct i915_resource,
>>>>>> B would be struct ttm_range_mgr_node,
>>>>>> C would be struct i915_ttm_buddy_resource
>>>>>>
>>>>>> And we need to resolve the ambiguity using the awkward union
>>>>>> construct, iff we need to derive from both B and C.
>>>>>>
>>>>>> Struct ttm_buffer_object and struct ttm_tt instead have B) and
>>>>>> C)
>>>>>> being dynamic backends of A) or a single type derived from A)
>>>>>> Hence
>>>>>> the problem doesn't exist for these types.
>>>>>>
>>>>>> So the question from last email remains, if ditching this RFC,
>>>>>> can
>>>>>> we
>>>>>> have B) and C) implemented by helpers that can be used from D)
>>>>>> and
>>>>>> that don't derive from A?
>>>>> Well we already have that in the form of drm_mm. I mean the
>>>>> ttm_range_manager is just a relatively small glue code which
>>>>> implements
>>>>> the TTMs resource interface using the drm_mm object and a
>>>>> spinlock.
>>>>> IIRC
>>>>> that less than 200 lines of code.
>>>>>
>>>>> So you should already have the necessary helpers and just need to
>>>>> implement the resource manager as far as I can see.
>>>>>
>>>>> I mean I reused the ttm_range_manager_node in for amdgpu_gtt_mgr
>>>>> and
>>>>> could potentially reuse a bit more of the ttm_range_manager code.
>>>>> But
>>>>> I
>>>>> don't see that as much of an issue, the extra functionality there
>>>>> is
>>>>> just minimal.
>>>> Sure but that would give up the prereq of having reusable resource
>>>> manager implementations. What happens if someone would like to
>>>> reuse
>>>> the buddy manager? And to complicate things even more, the
>>>> information
>>>> we attach to VRAM resources also needs to be attached to system
>>>> resources. Sure we could probably re-implement a combined system-
>>>> buddy-
>>>> range manager, but that seems like something overly complex.
>>>>
>>>> The other object examples resolve the diamond inheritance with a
>>>> pointer to the specialization (BC) and let D derive from A.
>>>>
>>>> TTM resources do it backwards. If we can just recognize that and
>>>> ponder
>>>> what's the easiest way to resolve this given the current design, I
>>>> actually think we'd arrive at a backpointer to allow downcasting
>>>> from A
>>>> to D.
>>> Yeah, but I think you are approaching that from the wrong side.
>>>
>>> For use cases like this I think you should probably have the
>>> following
>>> objects and inheritances:
>>>
>>> 1. Driver specific objects like i915_sg, i915_vma which don't inherit
>>> anything from TTM.
>>> 2. i915_vram_node which inherits from ttm_resource or a potential
>>> ttm_buddy_allocator.
>>> 3. i915_gtt_node which inherits from ttm_range_manger_node.
>>> 4. Maybe i915_sys_node which inherits from ttm_resource as well.
>>>
>>> The managers for the individual domains then provide the glue code to
>>> implement both the TTM resource interface as well as a driver
>>> specific
>>> interface to access the driver objects.
>> Well yes, but this is not really much better than the union thing. More
>> memory efficient but also more duplicated type definitions and manager
>> definitions and in addition overriding the default system resource
>> manager, not counting the kerneldoc needed to explain why all this is
>> necessary.
>>
>> It was this complexity I was trying to get away from in the first
>> place.
> I honestly don't think the union thing is the worst. At least as long as
> we're reworking i915 at a fairly invasive pace it's probably the lest
> worst approach.
>
> For the specific case of sg list I'm also not sure how great our current
> i915 design of "everything is an sg" really is. In the wider community
> there's clear rejection of sg for p2p addresses, so having this as a
> per-ttm_res_manager kind of situation is probably not the worst.

OK well, I'm no defender of the usage of sg list itself, but I was under 
the impression that as long as it was either only visible to the driver 
code itself or constructed using dma_map_resource() returned addresses 
for p2p it would be OK?

> In that world every ttm_res_manager would have it's own implementation of
> binding into ptes, which then iterate over the pagetables with some common
> abstraction. So in a way more of a helper approach for the i915
> implementations of the various hooks, at the cost of a bit of code
> duplication.
>
> I do agree with Christian that the various backpointers to sort out the
> diamond inheritence issue isn't not great. The other options aren't pretty
> either, but at least it's more contained to i915.

OK, I guess I will have to implement whatever ends up prettiest without 
the back pointer then. I wonder whether there is something we can think 
of in the future to avoid these diamond- or diamond like inheritances.

/Thomas


> -Daniel
>
>
>> /Thomas
>>
>>
>>
>>
>>> Amdgpu just uses a switch/case for now, but you could as well extend
>>> the
>>> ttm_resource_manager_func table and upcast that inside the driver.
>>>
>>> Regards,
>>> Christian.
>>>
>>>> Thanks,
>>>> Thomas
>>>>
>>>>
>>>>
>>>>> Regards,
>>>>> Christian.
>>>>>
>>>>>> Thanks,
>>>>>>
>>>>>> Thomas
>>>>>>
>>>>>>
>>>>>>
>>

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [Intel-gfx] [RFC PATCH] drm/ttm: Add a private member to the struct ttm_resource
@ 2021-09-14 15:30                               ` Thomas Hellström
  0 siblings, 0 replies; 35+ messages in thread
From: Thomas Hellström @ 2021-09-14 15:30 UTC (permalink / raw)
  To: Daniel Vetter
  Cc: Christian König, intel-gfx, dri-devel, maarten.lankhorst,
	matthew.auld, Matthew Auld


On 9/14/21 4:07 PM, Daniel Vetter wrote:
> On Tue, Sep 14, 2021 at 12:38:00PM +0200, Thomas Hellström wrote:
>> On Tue, 2021-09-14 at 10:53 +0200, Christian König wrote:
>>> Am 14.09.21 um 10:27 schrieb Thomas Hellström:
>>>> On Tue, 2021-09-14 at 09:40 +0200, Christian König wrote:
>>>>> Am 13.09.21 um 14:41 schrieb Thomas Hellström:
>>>>>> [SNIP]
>>>>>>>>> Let's say you have a struct ttm_object_vram and a struct
>>>>>>>>> ttm_object_gtt, both subclassing drm_gem_object. Then I'd
>>>>>>>>> say
>>>>>>>>> a
>>>>>>>>> driver would want to subclass those to attach identical
>>>>>>>>> data,
>>>>>>>>> extend functionality and provide a single i915_gem_object
>>>>>>>>> to
>>>>>>>>> the
>>>>>>>>> rest of the driver, which couldn't care less whether it's
>>>>>>>>> vram or
>>>>>>>>> gtt? Wouldn't you say having separate struct
>>>>>>>>> ttm_object_vram
>>>>>>>>> and a
>>>>>>>>> struct ttm_object_gtt in this case would be awkward?. We
>>>>>>>>> *want* to
>>>>>>>>> allow common handling.
>>>>>>>> Yeah, but that's a bad idea. This is like diamond
>>>>>>>> inheritance
>>>>>>>> in C++.
>>>>>>>>
>>>>>>>> When you need the same functionality in different backends
>>>>>>>> you
>>>>>>>> implement that as separate object and then add a parent
>>>>>>>> class.
>>>>>>>>
>>>>>>>>> It's the exact same situation here. With struct
>>>>>>>>> ttm_resource
>>>>>>>>> you
>>>>>>>>> let *different* implementation flavours subclass it,
>>>>>>>>> which
>>>>>>>>> makes it
>>>>>>>>> awkward for the driver to extend the functionality in a
>>>>>>>>> common way
>>>>>>>>> by subclassing, unless the driver only uses a single
>>>>>>>>> implementation.
>>>>>>>> Well the driver should use separate implementations for
>>>>>>>> their
>>>>>>>> different domains as much as possible.
>>>>>>>>
>>>>>>> Hmm, Now you lost me a bit. Are you saying that the way we do
>>>>>>> dynamic
>>>>>>> backends in the struct ttm_buffer_object to facilitate driver
>>>>>>> subclassing is a bad idea or that the RFC with backpointer is
>>>>>>> a
>>>>>>> bad
>>>>>>> idea?
>>>>>>>
>>>>>>>
>>>>>> Or if you mean diamond inheritance is bad, yes that's basically
>>>>>> my
>>>>>> point.
>>>>> That diamond inheritance is a bad idea. What I don't understand
>>>>> is
>>>>> why
>>>>> you need that in the first place?
>>>>>
>>>>> Information that you attach to a resource are specific to the
>>>>> domain
>>>>> where the resource is allocated from. So why do you want to
>>>>> attach
>>>>> the
>>>>> same information to a resources from different domains?
>>>> Again, for the same reason that we do that with struct
>>>> i915_gem_objects
>>>> and struct ttm_tts, to extend the functionality. I mean information
>>>> that we attach when we subclass a struct ttm_buffer_object doesn't
>>>> necessarily care about whether it's a VRAM or a GTT object. In
>>>> exactly
>>>> the same way, information that we want to attach to a struct
>>>> ttm_resource doesn't necessarily care whether it's a system or a
>>>> VRAM
>>>> resource, and need not be specific to any of those.
>>>>
>>>> In this particular case, as memory management becomes asynchronous,
>>>> you
>>>> can't attach things like sg-tables and gpu binding information to
>>>> the
>>>> gem object anymore, because the object may have a number of
>>>> migrations
>>>> in the pipeline. Such things need to be attached to the structure
>>>> that
>>>> abstracts the memory allocation, and which may have a completely
>>>> different lifetime than the object itself.
>>>>
>>>> In our particular case we want to attach information for cached
>>>> page
>>>> lookup and and sg-table, and moving forward probably the gpu
>>>> binding
>>>> (vma) information, and that is the same information for any
>>>> ttm_resource regardless where it's allocated from.
>>>>
>>>> Typical example: A pipelined GPU operation happening before an
>>>> async
>>>> eviction goes wrong. We need to error capture and reset. But if we
>>>> look
>>>> at the object for error capturing, it's already updated pointing to
>>>> an
>>>> after-eviction resource, and the resource sits on a ghost object
>>>> (or in
>>>> the future when ghost objects go away perhaps in limbo somewhere).
>>>>
>>>> We need to capture the memory pointed to by the struct ttm_resource
>>>> the
>>>> GPU was referencing, and to be able to do that we need to cache
>>>> driver-
>>>> specific info on the resource. Typically an sg-list and GPU binding
>>>> information.
>>>>
>>>> Anyway, that cached information needs to be destroyed together with
>>>> the
>>>> resource and thus we need to be able to access that information
>>>> from
>>>> the resource in some way, regardless whether it's a pointer or
>>>> whether
>>>> we embed the struct resource.
>>>>
>>>> I think it's pretty important here that we (using the inheritance
>>>> diagram below) recognize the need for D to inherit from A, just
>>>> like we
>>>> do for objects or ttm_tts.
>>>>
>>>>
>>>>>> Looking at
>>>>>> https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Fen.wikipedia.org%2Fwiki%2FMultiple_inheritance%23%2Fmedia%2FFile%3ADiamond_inheritance.svg&amp;data=04%7C01%7Cchristian.koenig%40amd.com%7C268bb562db8548b285b408d977598b2c%7C3dd8961fe4884e608e11a82d994e183d%7C0%7C0%7C637672048739103176%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C1000&amp;sdata=bPyDqiSF%2FHFZbl74ux0vfwh3uma5hZIUf2xbzb9yZz8%3D&amp;reserved=0
>>>>>>    
>>>>>>
>>>>>>
>>>>>> 1)
>>>>>>
>>>>>> A would be the struct ttm_resource itself,
>>>>>> D would be struct i915_resource,
>>>>>> B would be struct ttm_range_mgr_node,
>>>>>> C would be struct i915_ttm_buddy_resource
>>>>>>
>>>>>> And we need to resolve the ambiguity using the awkward union
>>>>>> construct, iff we need to derive from both B and C.
>>>>>>
>>>>>> Struct ttm_buffer_object and struct ttm_tt instead have B) and
>>>>>> C)
>>>>>> being dynamic backends of A) or a single type derived from A)
>>>>>> Hence
>>>>>> the problem doesn't exist for these types.
>>>>>>
>>>>>> So the question from last email remains, if ditching this RFC,
>>>>>> can
>>>>>> we
>>>>>> have B) and C) implemented by helpers that can be used from D)
>>>>>> and
>>>>>> that don't derive from A?
>>>>> Well we already have that in the form of drm_mm. I mean the
>>>>> ttm_range_manager is just a relatively small glue code which
>>>>> implements
>>>>> the TTMs resource interface using the drm_mm object and a
>>>>> spinlock.
>>>>> IIRC
>>>>> that less than 200 lines of code.
>>>>>
>>>>> So you should already have the necessary helpers and just need to
>>>>> implement the resource manager as far as I can see.
>>>>>
>>>>> I mean I reused the ttm_range_manager_node in for amdgpu_gtt_mgr
>>>>> and
>>>>> could potentially reuse a bit more of the ttm_range_manager code.
>>>>> But
>>>>> I
>>>>> don't see that as much of an issue, the extra functionality there
>>>>> is
>>>>> just minimal.
>>>> Sure but that would give up the prereq of having reusable resource
>>>> manager implementations. What happens if someone would like to
>>>> reuse
>>>> the buddy manager? And to complicate things even more, the
>>>> information
>>>> we attach to VRAM resources also needs to be attached to system
>>>> resources. Sure we could probably re-implement a combined system-
>>>> buddy-
>>>> range manager, but that seems like something overly complex.
>>>>
>>>> The other object examples resolve the diamond inheritance with a
>>>> pointer to the specialization (BC) and let D derive from A.
>>>>
>>>> TTM resources do it backwards. If we can just recognize that and
>>>> ponder
>>>> what's the easiest way to resolve this given the current design, I
>>>> actually think we'd arrive at a backpointer to allow downcasting
>>>> from A
>>>> to D.
>>> Yeah, but I think you are approaching that from the wrong side.
>>>
>>> For use cases like this I think you should probably have the
>>> following
>>> objects and inheritances:
>>>
>>> 1. Driver specific objects like i915_sg, i915_vma which don't inherit
>>> anything from TTM.
>>> 2. i915_vram_node which inherits from ttm_resource or a potential
>>> ttm_buddy_allocator.
>>> 3. i915_gtt_node which inherits from ttm_range_manger_node.
>>> 4. Maybe i915_sys_node which inherits from ttm_resource as well.
>>>
>>> The managers for the individual domains then provide the glue code to
>>> implement both the TTM resource interface as well as a driver
>>> specific
>>> interface to access the driver objects.
>> Well yes, but this is not really much better than the union thing. More
>> memory efficient but also more duplicated type definitions and manager
>> definitions and in addition overriding the default system resource
>> manager, not counting the kerneldoc needed to explain why all this is
>> necessary.
>>
>> It was this complexity I was trying to get away from in the first
>> place.
> I honestly don't think the union thing is the worst. At least as long as
> we're reworking i915 at a fairly invasive pace it's probably the lest
> worst approach.
>
> For the specific case of sg list I'm also not sure how great our current
> i915 design of "everything is an sg" really is. In the wider community
> there's clear rejection of sg for p2p addresses, so having this as a
> per-ttm_res_manager kind of situation is probably not the worst.

OK well, I'm no defender of the usage of sg list itself, but I was under 
the impression that as long as it was either only visible to the driver 
code itself or constructed using dma_map_resource() returned addresses 
for p2p it would be OK?

> In that world every ttm_res_manager would have it's own implementation of
> binding into ptes, which then iterate over the pagetables with some common
> abstraction. So in a way more of a helper approach for the i915
> implementations of the various hooks, at the cost of a bit of code
> duplication.
>
> I do agree with Christian that the various backpointers to sort out the
> diamond inheritence issue isn't not great. The other options aren't pretty
> either, but at least it's more contained to i915.

OK, I guess I will have to implement whatever ends up prettiest without 
the back pointer then. I wonder whether there is something we can think 
of in the future to avoid these diamond- or diamond like inheritances.

/Thomas


> -Daniel
>
>
>> /Thomas
>>
>>
>>
>>
>>> Amdgpu just uses a switch/case for now, but you could as well extend
>>> the
>>> ttm_resource_manager_func table and upcast that inside the driver.
>>>
>>> Regards,
>>> Christian.
>>>
>>>> Thanks,
>>>> Thomas
>>>>
>>>>
>>>>
>>>>> Regards,
>>>>> Christian.
>>>>>
>>>>>> Thanks,
>>>>>>
>>>>>> Thomas
>>>>>>
>>>>>>
>>>>>>
>>

^ permalink raw reply	[flat|nested] 35+ messages in thread

end of thread, other threads:[~2021-09-14 15:41 UTC | newest]

Thread overview: 35+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-09-10 13:15 [RFC PATCH] drm/ttm: Add a private member to the struct ttm_resource Thomas Hellström
2021-09-10 13:15 ` [Intel-gfx] " Thomas Hellström
2021-09-10 13:25 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for " Patchwork
2021-09-10 13:54 ` [Intel-gfx] ✓ Fi.CI.BAT: success " Patchwork
2021-09-10 14:40 ` [RFC PATCH] " Christian König
2021-09-10 14:40   ` [Intel-gfx] " Christian König
2021-09-10 15:30   ` Thomas Hellström
2021-09-10 15:30     ` Thomas Hellström
2021-09-10 17:03     ` Christian König
2021-09-10 17:03       ` [Intel-gfx] " Christian König
2021-09-11  6:07       ` Thomas Hellström
2021-09-11  6:07         ` [Intel-gfx] " Thomas Hellström
2021-09-13  6:17         ` Christian König
2021-09-13  6:17           ` [Intel-gfx] " Christian König
2021-09-13  9:36           ` Thomas Hellström
2021-09-13  9:36             ` [Intel-gfx] " Thomas Hellström
2021-09-13  9:41             ` Christian König
2021-09-13  9:41               ` [Intel-gfx] " Christian König
2021-09-13 10:16               ` Thomas Hellström
2021-09-13 10:16                 ` [Intel-gfx] " Thomas Hellström
2021-09-13 12:41                 ` Thomas Hellström
2021-09-13 12:41                   ` [Intel-gfx] " Thomas Hellström
2021-09-14  7:40                   ` Christian König
2021-09-14  7:40                     ` [Intel-gfx] " Christian König
2021-09-14  8:27                     ` Thomas Hellström
2021-09-14  8:27                       ` Thomas Hellström
2021-09-14  8:53                       ` Christian König
2021-09-14  8:53                         ` [Intel-gfx] " Christian König
2021-09-14 10:38                         ` Thomas Hellström
2021-09-14 10:38                           ` [Intel-gfx] " Thomas Hellström
2021-09-14 14:07                           ` Daniel Vetter
2021-09-14 14:07                             ` [Intel-gfx] " Daniel Vetter
2021-09-14 15:30                             ` Thomas Hellström
2021-09-14 15:30                               ` [Intel-gfx] " Thomas Hellström
2021-09-10 15:12 ` [Intel-gfx] ✓ Fi.CI.IGT: success for " Patchwork

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.