* [PATCH 0/4] ttm for internal
@ 2022-05-03 19:13 ` Robert Beckett
From: Robert Beckett @ 2022-05-03 19:13 UTC (permalink / raw)
  To: dri-devel, intel-gfx; +Cc: Robert Beckett, Thomas Hellström, Matthew Auld

This series refactors i915's internal buffer backend to use ttm.
It uses ttm's pool allocator to allocate volatile pages in place of the
old code, which rolled its own allocation via alloc_pages().
This continues the effort to align all of i915's buffer backends on ttm.
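
For users of the internal backend nothing changes: objects are still
created via i915_gem_object_create_internal() and pinned/unpinned as
before; only the page provider behind them becomes ttm. A minimal usage
sketch for reference (illustrative only, not part of this series; it
just exercises the existing gem helpers):

static int example_use_internal(struct drm_i915_private *i915)
{
	struct drm_i915_gem_object *obj;
	int err;

	obj = i915_gem_object_create_internal(i915, SZ_64K);
	if (IS_ERR(obj))
		return PTR_ERR(obj);

	/* contents are volatile: only valid while the pages stay pinned */
	err = i915_gem_object_pin_pages_unlocked(obj);
	if (err)
		goto out_put;

	/* ... use the object while its pages are pinned ... */

	i915_gem_object_unpin_pages(obj);
out_put:
	i915_gem_object_put(obj);
	return err;
}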

Robert Beckett (4):
  drm/i915: add gen6 ppgtt dummy creation function
  drm/i915: setup ggtt scratch page after memory regions
  drm/i915: allow volatile buffers to use ttm pool allocator
  drm/i915: internal buffers use ttm backend

 drivers/gpu/drm/i915/gem/i915_gem_internal.c | 264 ++++++++-----------
 drivers/gpu/drm/i915/gem/i915_gem_internal.h |   5 -
 drivers/gpu/drm/i915/gem/i915_gem_ttm.c      |  15 +-
 drivers/gpu/drm/i915/gem/i915_gem_ttm.h      |  12 +-
 drivers/gpu/drm/i915/gt/gen6_ppgtt.c         |  43 ++-
 drivers/gpu/drm/i915/gt/intel_gt_gmch.c      |  20 +-
 drivers/gpu/drm/i915/gt/intel_gt_gmch.h      |   6 +
 drivers/gpu/drm/i915/i915_driver.c           |  16 +-
 8 files changed, 201 insertions(+), 180 deletions(-)

-- 
2.25.1


* [PATCH 1/4] drm/i915: add gen6 ppgtt dummy creation function
@ 2022-05-03 19:13   ` Robert Beckett
From: Robert Beckett @ 2022-05-03 19:13 UTC (permalink / raw)
  To: dri-devel, intel-gfx, Jani Nikula, Joonas Lahtinen, Rodrigo Vivi,
	Tvrtko Ursulin, David Airlie, Daniel Vetter
  Cc: Robert Beckett, Matthew Auld, Thomas Hellström, linux-kernel

Internal gem objects will soon just be volatile system memory region
objects.
To enable this, create a separate dummy object creation function for
gen6 ppgtt, so that it no longer relies on the internal backend's
__i915_gem_object_create_internal() helper.

Signed-off-by: Robert Beckett <bob.beckett@collabora.com>
---
 drivers/gpu/drm/i915/gt/gen6_ppgtt.c | 43 ++++++++++++++++++++++++++--
 1 file changed, 40 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/gen6_ppgtt.c b/drivers/gpu/drm/i915/gt/gen6_ppgtt.c
index 1bb766c79dcb..f3b660cfeb7f 100644
--- a/drivers/gpu/drm/i915/gt/gen6_ppgtt.c
+++ b/drivers/gpu/drm/i915/gt/gen6_ppgtt.c
@@ -372,6 +372,45 @@ static const struct drm_i915_gem_object_ops pd_dummy_obj_ops = {
 	.put_pages = pd_dummy_obj_put_pages,
 };
 
+static struct drm_i915_gem_object *
+i915_gem_object_create_dummy(struct drm_i915_private *i915, phys_addr_t size)
+{
+	static struct lock_class_key lock_class;
+	struct drm_i915_gem_object *obj;
+	unsigned int cache_level;
+
+	GEM_BUG_ON(!size);
+	GEM_BUG_ON(!IS_ALIGNED(size, PAGE_SIZE));
+
+	if (overflows_type(size, obj->base.size))
+		return ERR_PTR(-E2BIG);
+
+	obj = i915_gem_object_alloc();
+	if (!obj)
+		return ERR_PTR(-ENOMEM);
+
+	drm_gem_private_object_init(&i915->drm, &obj->base, size);
+	i915_gem_object_init(obj, &pd_dummy_obj_ops, &lock_class, 0);
+	obj->mem_flags |= I915_BO_FLAG_STRUCT_PAGE;
+
+	/*
+	 * Mark the object as volatile, such that the pages are marked as
+	 * dontneed whilst they are still pinned. As soon as they are unpinned
+	 * they are allowed to be reaped by the shrinker, and the caller is
+	 * expected to repopulate - the contents of this object are only valid
+	 * whilst active and pinned.
+	 */
+	i915_gem_object_set_volatile(obj);
+
+	obj->read_domains = I915_GEM_DOMAIN_CPU;
+	obj->write_domain = I915_GEM_DOMAIN_CPU;
+
+	cache_level = HAS_LLC(i915) ? I915_CACHE_LLC : I915_CACHE_NONE;
+	i915_gem_object_set_cache_coherency(obj, cache_level);
+
+	return obj;
+}
+
 static struct i915_page_directory *
 gen6_alloc_top_pd(struct gen6_ppgtt *ppgtt)
 {
@@ -383,9 +422,7 @@ gen6_alloc_top_pd(struct gen6_ppgtt *ppgtt)
 	if (unlikely(!pd))
 		return ERR_PTR(-ENOMEM);
 
-	pd->pt.base = __i915_gem_object_create_internal(ppgtt->base.vm.gt->i915,
-							&pd_dummy_obj_ops,
-							I915_PDES * SZ_4K);
+	pd->pt.base = i915_gem_object_create_dummy(ppgtt->base.vm.gt->i915, I915_PDES * SZ_4K);
 	if (IS_ERR(pd->pt.base)) {
 		err = PTR_ERR(pd->pt.base);
 		pd->pt.base = NULL;
-- 
2.25.1


* [PATCH 2/4] drm/i915: setup ggtt scratch page after memory regions
@ 2022-05-03 19:13   ` Robert Beckett
From: Robert Beckett @ 2022-05-03 19:13 UTC (permalink / raw)
  To: dri-devel, intel-gfx, Jani Nikula, Joonas Lahtinen, Rodrigo Vivi,
	Tvrtko Ursulin, David Airlie, Daniel Vetter
  Cc: Robert Beckett, Matthew Auld, Thomas Hellström, linux-kernel

Reorder the ggtt scratch page allocation so that the memory regions are
available to allocate the backing buffers from: the scratch page is now
set up via i915_ggtt_setup_scratch_page(), called from
i915_driver_hw_probe() once the regions have been probed.

Signed-off-by: Robert Beckett <bob.beckett@collabora.com>
---
 drivers/gpu/drm/i915/gt/intel_gt_gmch.c | 20 ++++++++++++++++++--
 drivers/gpu/drm/i915/gt/intel_gt_gmch.h |  6 ++++++
 drivers/gpu/drm/i915/i915_driver.c      | 16 ++++++++++------
 3 files changed, 34 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_gt_gmch.c b/drivers/gpu/drm/i915/gt/intel_gt_gmch.c
index 18e488672d1b..5411df1734ac 100644
--- a/drivers/gpu/drm/i915/gt/intel_gt_gmch.c
+++ b/drivers/gpu/drm/i915/gt/intel_gt_gmch.c
@@ -440,8 +440,6 @@ static int ggtt_probe_common(struct i915_ggtt *ggtt, u64 size)
 	struct drm_i915_private *i915 = ggtt->vm.i915;
 	struct pci_dev *pdev = to_pci_dev(i915->drm.dev);
 	phys_addr_t phys_addr;
-	u32 pte_flags;
-	int ret;
 
 	GEM_WARN_ON(pci_resource_len(pdev, 0) != gen6_gttmmadr_size(i915));
 	phys_addr = pci_resource_start(pdev, 0) + gen6_gttadr_offset(i915);
@@ -463,6 +461,24 @@ static int ggtt_probe_common(struct i915_ggtt *ggtt, u64 size)
 	}
 
 	kref_init(&ggtt->vm.resv_ref);
+
+	return 0;
+}
+
+/**
+ * i915_ggtt_setup_scratch_page - setup ggtt scratch page
+ * @i915: i915 device
+ */
+int i915_ggtt_setup_scratch_page(struct drm_i915_private *i915)
+{
+	struct i915_ggtt *ggtt = to_gt(i915)->ggtt;
+	u32 pte_flags;
+	int ret;
+
+	/* gen5- scratch setup currently happens in @intel_gtt_init */
+	if (GRAPHICS_VER(i915) <= 5)
+		return 0;
+
 	ret = setup_scratch_page(&ggtt->vm);
 	if (ret) {
 		drm_err(&i915->drm, "Scratch setup failed\n");
diff --git a/drivers/gpu/drm/i915/gt/intel_gt_gmch.h b/drivers/gpu/drm/i915/gt/intel_gt_gmch.h
index 75ed55c1f30a..c6b79cb78637 100644
--- a/drivers/gpu/drm/i915/gt/intel_gt_gmch.h
+++ b/drivers/gpu/drm/i915/gt/intel_gt_gmch.h
@@ -15,6 +15,7 @@ int intel_gt_gmch_gen6_probe(struct i915_ggtt *ggtt);
 int intel_gt_gmch_gen8_probe(struct i915_ggtt *ggtt);
 int intel_gt_gmch_gen5_probe(struct i915_ggtt *ggtt);
 int intel_gt_gmch_gen5_enable_hw(struct drm_i915_private *i915);
+int i915_ggtt_setup_scratch_page(struct drm_i915_private *i915);
 
 /* Stubs for non-x86 platforms */
 #else
@@ -41,6 +42,11 @@ static inline int intel_gt_gmch_gen5_enable_hw(struct drm_i915_private *i915)
 	/* No HW should be enabled for this case yet, return fail */
 	return -ENODEV;
 }
+
+static inline int i915_ggtt_setup_scratch_page(struct drm_i915_private *i915)
+{
+	return 0;
+}
 #endif
 
 #endif /* __INTEL_GT_GMCH_H__ */
diff --git a/drivers/gpu/drm/i915/i915_driver.c b/drivers/gpu/drm/i915/i915_driver.c
index 90b0ce5051af..f67476b2f349 100644
--- a/drivers/gpu/drm/i915/i915_driver.c
+++ b/drivers/gpu/drm/i915/i915_driver.c
@@ -69,6 +69,7 @@
 #include "gem/i915_gem_mman.h"
 #include "gem/i915_gem_pm.h"
 #include "gt/intel_gt.h"
+#include "gt/intel_gt_gmch.h"
 #include "gt/intel_gt_pm.h"
 #include "gt/intel_rc6.h"
 
@@ -589,12 +590,16 @@ static int i915_driver_hw_probe(struct drm_i915_private *dev_priv)
 
 	ret = intel_gt_tiles_init(dev_priv);
 	if (ret)
-		goto err_mem_regions;
+		goto err_ggtt;
+
+	ret = i915_ggtt_setup_scratch_page(dev_priv);
+	if (ret)
+		goto err_ggtt;
 
 	ret = i915_ggtt_enable_hw(dev_priv);
 	if (ret) {
 		drm_err(&dev_priv->drm, "failed to enable GGTT\n");
-		goto err_mem_regions;
+		goto err_ggtt;
 	}
 
 	pci_set_master(pdev);
@@ -646,11 +651,10 @@ static int i915_driver_hw_probe(struct drm_i915_private *dev_priv)
 err_msi:
 	if (pdev->msi_enabled)
 		pci_disable_msi(pdev);
-err_mem_regions:
-	intel_memory_regions_driver_release(dev_priv);
 err_ggtt:
 	i915_ggtt_driver_release(dev_priv);
 	i915_gem_drain_freed_objects(dev_priv);
+	intel_memory_regions_driver_release(dev_priv);
 	i915_ggtt_driver_late_release(dev_priv);
 err_perf:
 	i915_perf_fini(dev_priv);
@@ -896,9 +900,9 @@ int i915_driver_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 	intel_modeset_driver_remove_nogem(i915);
 out_cleanup_hw:
 	i915_driver_hw_remove(i915);
-	intel_memory_regions_driver_release(i915);
 	i915_ggtt_driver_release(i915);
 	i915_gem_drain_freed_objects(i915);
+	intel_memory_regions_driver_release(i915);
 	i915_ggtt_driver_late_release(i915);
 out_cleanup_mmio:
 	i915_driver_mmio_release(i915);
@@ -955,9 +959,9 @@ static void i915_driver_release(struct drm_device *dev)
 
 	i915_gem_driver_release(dev_priv);
 
-	intel_memory_regions_driver_release(dev_priv);
 	i915_ggtt_driver_release(dev_priv);
 	i915_gem_drain_freed_objects(dev_priv);
+	intel_memory_regions_driver_release(dev_priv);
 	i915_ggtt_driver_late_release(dev_priv);
 
 	i915_driver_mmio_release(dev_priv);
-- 
2.25.1


* [PATCH 3/4] drm/i915: allow volatile buffers to use ttm pool allocator
@ 2022-05-03 19:13   ` Robert Beckett
From: Robert Beckett @ 2022-05-03 19:13 UTC (permalink / raw)
  To: dri-devel, intel-gfx, Jani Nikula, Joonas Lahtinen, Rodrigo Vivi,
	Tvrtko Ursulin, David Airlie, Daniel Vetter
  Cc: Robert Beckett, Matthew Auld, Thomas Hellström, linux-kernel

Internal buffers should be shmem backed. If a volatile buffer is
requested, allow ttm to use the pool allocator to provide volatile
pages as backing.
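
The first user is the internal buffer backend converted in the next
patch, which requests volatile backing at init time and is then
populated from the ttm pool rather than shmem. Sketch of that call site
(quoted from the next patch, shown here only for context):

	i915_gem_object_init(obj, &i915_gem_object_internal_ops, &lock_class,
			     I915_BO_ALLOC_VOLATILE);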

Signed-off-by: Robert Beckett <bob.beckett@collabora.com>
---
 drivers/gpu/drm/i915/gem/i915_gem_ttm.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
index 4c25d9b2f138..fdb3a1c18cb6 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
@@ -309,7 +309,8 @@ static struct ttm_tt *i915_ttm_tt_create(struct ttm_buffer_object *bo,
 		page_flags |= TTM_TT_FLAG_ZERO_ALLOC;
 
 	caching = i915_ttm_select_tt_caching(obj);
-	if (i915_gem_object_is_shrinkable(obj) && caching == ttm_cached) {
+	if (i915_gem_object_is_shrinkable(obj) && caching == ttm_cached &&
+	    !i915_gem_object_is_volatile(obj)) {
 		page_flags |= TTM_TT_FLAG_EXTERNAL |
 			      TTM_TT_FLAG_EXTERNAL_MAPPABLE;
 		i915_tt->is_shmem = true;
-- 
2.25.1


* [PATCH 4/4] drm/i915: internal buffers use ttm backend
@ 2022-05-03 19:13   ` Robert Beckett
From: Robert Beckett @ 2022-05-03 19:13 UTC (permalink / raw)
  To: dri-devel, intel-gfx, Jani Nikula, Joonas Lahtinen, Rodrigo Vivi,
	Tvrtko Ursulin, David Airlie, Daniel Vetter
  Cc: Robert Beckett, Matthew Auld, Thomas Hellström, linux-kernel

Refactor the internal buffer backend to allocate volatile pages via the
ttm pool allocator.

Signed-off-by: Robert Beckett <bob.beckett@collabora.com>
---
 drivers/gpu/drm/i915/gem/i915_gem_internal.c | 264 ++++++++-----------
 drivers/gpu/drm/i915/gem/i915_gem_internal.h |   5 -
 drivers/gpu/drm/i915/gem/i915_gem_ttm.c      |  12 +-
 drivers/gpu/drm/i915/gem/i915_gem_ttm.h      |  12 +-
 4 files changed, 125 insertions(+), 168 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_internal.c b/drivers/gpu/drm/i915/gem/i915_gem_internal.c
index c698f95af15f..815ec9466cc0 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_internal.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_internal.c
@@ -4,156 +4,119 @@
  * Copyright © 2014-2016 Intel Corporation
  */
 
-#include <linux/scatterlist.h>
-#include <linux/slab.h>
-#include <linux/swiotlb.h>
-
+#include <drm/ttm/ttm_bo_driver.h>
+#include <drm/ttm/ttm_placement.h>
+#include "drm/ttm/ttm_bo_api.h"
+#include "gem/i915_gem_internal.h"
+#include "gem/i915_gem_region.h"
+#include "gem/i915_gem_ttm.h"
 #include "i915_drv.h"
-#include "i915_gem.h"
-#include "i915_gem_internal.h"
-#include "i915_gem_object.h"
-#include "i915_scatterlist.h"
-#include "i915_utils.h"
-
-#define QUIET (__GFP_NORETRY | __GFP_NOWARN)
-#define MAYFAIL (__GFP_RETRY_MAYFAIL | __GFP_NOWARN)
-
-static void internal_free_pages(struct sg_table *st)
-{
-	struct scatterlist *sg;
-
-	for (sg = st->sgl; sg; sg = __sg_next(sg)) {
-		if (sg_page(sg))
-			__free_pages(sg_page(sg), get_order(sg->length));
-	}
-
-	sg_free_table(st);
-	kfree(st);
-}
 
-static int i915_gem_object_get_pages_internal(struct drm_i915_gem_object *obj)
+static int i915_internal_get_pages(struct drm_i915_gem_object *obj)
 {
-	struct drm_i915_private *i915 = to_i915(obj->base.dev);
-	struct sg_table *st;
-	struct scatterlist *sg;
-	unsigned int sg_page_sizes;
-	unsigned int npages;
-	int max_order;
-	gfp_t gfp;
-
-	max_order = MAX_ORDER;
-#ifdef CONFIG_SWIOTLB
-	if (is_swiotlb_active(obj->base.dev->dev)) {
-		unsigned int max_segment;
-
-		max_segment = swiotlb_max_segment();
-		if (max_segment) {
-			max_segment = max_t(unsigned int, max_segment,
-					    PAGE_SIZE) >> PAGE_SHIFT;
-			max_order = min(max_order, ilog2(max_segment));
-		}
+	struct ttm_buffer_object *bo = i915_gem_to_ttm(obj);
+	struct ttm_operation_ctx ctx = {
+		.interruptible = true,
+		.no_wait_gpu = false,
+	};
+	struct ttm_place place = {
+		.fpfn = 0,
+		.lpfn = 0,
+		.mem_type = I915_PL_SYSTEM,
+		.flags = 0,
+	};
+	struct ttm_placement placement = {
+		.num_placement = 1,
+		.placement = &place,
+		.num_busy_placement = 0,
+		.busy_placement = NULL,
+	};
+	int ret;
+
+	ret = ttm_bo_validate(bo, &placement, &ctx);
+	if (ret) {
+		ret = i915_ttm_err_to_gem(ret);
+		return ret;
 	}
-#endif
 
-	gfp = GFP_KERNEL | __GFP_HIGHMEM | __GFP_RECLAIMABLE;
-	if (IS_I965GM(i915) || IS_I965G(i915)) {
-		/* 965gm cannot relocate objects above 4GiB. */
-		gfp &= ~__GFP_HIGHMEM;
-		gfp |= __GFP_DMA32;
+	if (bo->ttm && !ttm_tt_is_populated(bo->ttm)) {
+		ret = ttm_tt_populate(bo->bdev, bo->ttm, &ctx);
+		if (ret)
+			return ret;
 	}
 
-create_st:
-	st = kmalloc(sizeof(*st), GFP_KERNEL);
-	if (!st)
-		return -ENOMEM;
+	if (!i915_gem_object_has_pages(obj)) {
+		struct i915_refct_sgt *rsgt =
+			i915_ttm_resource_get_st(obj, bo->resource);
 
-	npages = obj->base.size / PAGE_SIZE;
-	if (sg_alloc_table(st, npages, GFP_KERNEL)) {
-		kfree(st);
-		return -ENOMEM;
-	}
+		if (IS_ERR(rsgt))
+			return PTR_ERR(rsgt);
 
-	sg = st->sgl;
-	st->nents = 0;
-	sg_page_sizes = 0;
-
-	do {
-		int order = min(fls(npages) - 1, max_order);
-		struct page *page;
-
-		do {
-			page = alloc_pages(gfp | (order ? QUIET : MAYFAIL),
-					   order);
-			if (page)
-				break;
-			if (!order--)
-				goto err;
-
-			/* Limit subsequent allocations as well */
-			max_order = order;
-		} while (1);
-
-		sg_set_page(sg, page, PAGE_SIZE << order, 0);
-		sg_page_sizes |= PAGE_SIZE << order;
-		st->nents++;
-
-		npages -= 1 << order;
-		if (!npages) {
-			sg_mark_end(sg);
-			break;
-		}
-
-		sg = __sg_next(sg);
-	} while (1);
-
-	if (i915_gem_gtt_prepare_pages(obj, st)) {
-		/* Failed to dma-map try again with single page sg segments */
-		if (get_order(st->sgl->length)) {
-			internal_free_pages(st);
-			max_order = 0;
-			goto create_st;
-		}
-		goto err;
+		GEM_BUG_ON(obj->mm.rsgt);
+		obj->mm.rsgt = rsgt;
+		__i915_gem_object_set_pages(obj, &rsgt->table,
+					    i915_sg_dma_sizes(rsgt->table.sgl));
 	}
 
-	__i915_gem_object_set_pages(obj, st, sg_page_sizes);
+	GEM_BUG_ON(bo->ttm && ((obj->base.size >> PAGE_SHIFT) < bo->ttm->num_pages));
+	i915_ttm_adjust_lru(obj);
 
 	return 0;
+}
 
-err:
-	sg_set_page(sg, NULL, 0, 0);
-	sg_mark_end(sg);
-	internal_free_pages(st);
+static const struct drm_i915_gem_object_ops i915_gem_object_internal_ops = {
+	.name = "i915_gem_object_ttm",
+	.flags = I915_GEM_OBJECT_IS_SHRINKABLE,
 
-	return -ENOMEM;
-}
+	.get_pages = i915_internal_get_pages,
+	.put_pages = i915_ttm_put_pages,
+	.adjust_lru = i915_ttm_adjust_lru,
+	.delayed_free = i915_ttm_delayed_free,
+};
 
-static void i915_gem_object_put_pages_internal(struct drm_i915_gem_object *obj,
-					       struct sg_table *pages)
+void i915_ttm_internal_bo_destroy(struct ttm_buffer_object *bo)
 {
-	i915_gem_gtt_finish_pages(obj, pages);
-	internal_free_pages(pages);
+	struct drm_i915_gem_object *obj = i915_ttm_to_gem(bo);
 
-	obj->mm.dirty = false;
+	mutex_destroy(&obj->ttm.get_io_page.lock);
 
-	__start_cpu_write(obj);
-}
+	if (obj->ttm.created) {
+		/* This releases all gem object bindings to the backend. */
+		__i915_gem_free_object(obj);
 
-static const struct drm_i915_gem_object_ops i915_gem_object_internal_ops = {
-	.name = "i915_gem_object_internal",
-	.flags = I915_GEM_OBJECT_IS_SHRINKABLE,
-	.get_pages = i915_gem_object_get_pages_internal,
-	.put_pages = i915_gem_object_put_pages_internal,
-};
+		call_rcu(&obj->rcu, __i915_gem_free_object_rcu);
+	} else {
+		__i915_gem_object_fini(obj);
+	}
+}
 
+/**
+ * i915_gem_object_create_internal: create an object with volatile pages
+ * @i915: the i915 device
+ * @size: the size in bytes of backing storage to allocate for the object
+ *
+ * Creates a new object that wraps some internal memory for private use.
+ * This object is not backed by swappable storage, and as such its contents
+ * are volatile and only valid whilst pinned. If the object is reaped by the
+ * shrinker, its pages and data will be discarded. Equally, it is not a full
+ * GEM object and so not valid for access from userspace. This makes it useful
+ * for hardware interfaces like ringbuffers (which are pinned from the time
+ * the request is written to the time the hardware stops accessing it), but
+ * not for contexts (which need to be preserved when not active for later
+ * reuse). Note that it is not cleared upon allocation.
+ */
 struct drm_i915_gem_object *
-__i915_gem_object_create_internal(struct drm_i915_private *i915,
-				  const struct drm_i915_gem_object_ops *ops,
-				  phys_addr_t size)
+i915_gem_object_create_internal(struct drm_i915_private *i915,
+				phys_addr_t size)
 {
 	static struct lock_class_key lock_class;
 	struct drm_i915_gem_object *obj;
 	unsigned int cache_level;
+	struct ttm_operation_ctx ctx = {
+		.interruptible = true,
+		.no_wait_gpu = false,
+	};
+	int ret;
 
 	GEM_BUG_ON(!size);
 	GEM_BUG_ON(!IS_ALIGNED(size, PAGE_SIZE));
@@ -166,45 +129,34 @@ __i915_gem_object_create_internal(struct drm_i915_private *i915,
 		return ERR_PTR(-ENOMEM);
 
 	drm_gem_private_object_init(&i915->drm, &obj->base, size);
-	i915_gem_object_init(obj, ops, &lock_class, 0);
-	obj->mem_flags |= I915_BO_FLAG_STRUCT_PAGE;
+	i915_gem_object_init(obj, &i915_gem_object_internal_ops, &lock_class,
+			     I915_BO_ALLOC_VOLATILE);
+
+	INIT_LIST_HEAD(&obj->mm.region_link);
+
+	INIT_RADIX_TREE(&obj->ttm.get_io_page.radix, GFP_KERNEL | __GFP_NOWARN);
+	mutex_init(&obj->ttm.get_io_page.lock);
 
-	/*
-	 * Mark the object as volatile, such that the pages are marked as
-	 * dontneed whilst they are still pinned. As soon as they are unpinned
-	 * they are allowed to be reaped by the shrinker, and the caller is
-	 * expected to repopulate - the contents of this object are only valid
-	 * whilst active and pinned.
-	 */
-	i915_gem_object_set_volatile(obj);
+	obj->base.vma_node.driver_private = i915_gem_to_ttm(obj);
 
+	ret = ttm_bo_init_reserved(&i915->bdev, i915_gem_to_ttm(obj), size,
+				   ttm_bo_type_kernel, i915_ttm_sys_placement(),
+				   0, &ctx, NULL, NULL, i915_ttm_internal_bo_destroy);
+	if (ret) {
+		ret = i915_ttm_err_to_gem(ret);
+		i915_gem_object_free(obj);
+		return ERR_PTR(ret);
+	}
+
+	obj->ttm.created = true;
 	obj->read_domains = I915_GEM_DOMAIN_CPU;
 	obj->write_domain = I915_GEM_DOMAIN_CPU;
-
+	obj->mem_flags &= ~I915_BO_FLAG_IOMEM;
+	obj->mem_flags |= I915_BO_FLAG_STRUCT_PAGE;
 	cache_level = HAS_LLC(i915) ? I915_CACHE_LLC : I915_CACHE_NONE;
 	i915_gem_object_set_cache_coherency(obj, cache_level);
+	i915_gem_object_unlock(obj);
 
 	return obj;
 }
 
-/**
- * i915_gem_object_create_internal: create an object with volatile pages
- * @i915: the i915 device
- * @size: the size in bytes of backing storage to allocate for the object
- *
- * Creates a new object that wraps some internal memory for private use.
- * This object is not backed by swappable storage, and as such its contents
- * are volatile and only valid whilst pinned. If the object is reaped by the
- * shrinker, its pages and data will be discarded. Equally, it is not a full
- * GEM object and so not valid for access from userspace. This makes it useful
- * for hardware interfaces like ringbuffers (which are pinned from the time
- * the request is written to the time the hardware stops accessing it), but
- * not for contexts (which need to be preserved when not active for later
- * reuse). Note that it is not cleared upon allocation.
- */
-struct drm_i915_gem_object *
-i915_gem_object_create_internal(struct drm_i915_private *i915,
-				phys_addr_t size)
-{
-	return __i915_gem_object_create_internal(i915, &i915_gem_object_internal_ops, size);
-}
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_internal.h b/drivers/gpu/drm/i915/gem/i915_gem_internal.h
index 6664e06112fc..524e1042b20f 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_internal.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_internal.h
@@ -15,9 +15,4 @@ struct drm_i915_private;
 struct drm_i915_gem_object *
 i915_gem_object_create_internal(struct drm_i915_private *i915,
 				phys_addr_t size);
-struct drm_i915_gem_object *
-__i915_gem_object_create_internal(struct drm_i915_private *i915,
-				  const struct drm_i915_gem_object_ops *ops,
-				  phys_addr_t size);
-
 #endif /* __I915_GEM_INTERNAL_H__ */
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
index fdb3a1c18cb6..92195ead8c11 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
@@ -83,7 +83,7 @@ struct ttm_placement *i915_ttm_sys_placement(void)
 	return &i915_sys_placement;
 }
 
-static int i915_ttm_err_to_gem(int err)
+int i915_ttm_err_to_gem(int err)
 {
 	/* Fastpath */
 	if (likely(!err))
@@ -745,8 +745,8 @@ struct ttm_device_funcs *i915_ttm_driver(void)
 	return &i915_ttm_bo_driver;
 }
 
-static int __i915_ttm_get_pages(struct drm_i915_gem_object *obj,
-				struct ttm_placement *placement)
+int __i915_ttm_get_pages(struct drm_i915_gem_object *obj,
+			 struct ttm_placement *placement)
 {
 	struct ttm_buffer_object *bo = i915_gem_to_ttm(obj);
 	struct ttm_operation_ctx ctx = {
@@ -871,8 +871,8 @@ static int i915_ttm_migrate(struct drm_i915_gem_object *obj,
 	return __i915_ttm_migrate(obj, mr, obj->flags);
 }
 
-static void i915_ttm_put_pages(struct drm_i915_gem_object *obj,
-			       struct sg_table *st)
+void i915_ttm_put_pages(struct drm_i915_gem_object *obj,
+			struct sg_table *st)
 {
 	/*
 	 * We're currently not called from a shrinker, so put_pages()
@@ -995,7 +995,7 @@ void i915_ttm_adjust_lru(struct drm_i915_gem_object *obj)
  * it's not idle, and using the TTM destroyed list handling could help us
  * benefit from that.
  */
-static void i915_ttm_delayed_free(struct drm_i915_gem_object *obj)
+void i915_ttm_delayed_free(struct drm_i915_gem_object *obj)
 {
 	GEM_BUG_ON(!obj->ttm.created);
 
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm.h b/drivers/gpu/drm/i915/gem/i915_gem_ttm.h
index 73e371aa3850..06701c46d8e2 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_ttm.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm.h
@@ -26,6 +26,7 @@ i915_gem_to_ttm(struct drm_i915_gem_object *obj)
  * i915 ttm gem object destructor. Internal use only.
  */
 void i915_ttm_bo_destroy(struct ttm_buffer_object *bo);
+void i915_ttm_internal_bo_destroy(struct ttm_buffer_object *bo);
 
 /**
  * i915_ttm_to_gem - Convert a struct ttm_buffer_object to an embedding
@@ -37,8 +38,10 @@ void i915_ttm_bo_destroy(struct ttm_buffer_object *bo);
 static inline struct drm_i915_gem_object *
 i915_ttm_to_gem(struct ttm_buffer_object *bo)
 {
-	if (bo->destroy != i915_ttm_bo_destroy)
+	if (bo->destroy != i915_ttm_bo_destroy &&
+	    bo->destroy != i915_ttm_internal_bo_destroy) {
 		return NULL;
+	}
 
 	return container_of(bo, struct drm_i915_gem_object, __do_not_access);
 }
@@ -66,6 +69,7 @@ i915_ttm_resource_get_st(struct drm_i915_gem_object *obj,
 			 struct ttm_resource *res);
 
 void i915_ttm_adjust_lru(struct drm_i915_gem_object *obj);
+void i915_ttm_delayed_free(struct drm_i915_gem_object *obj);
 
 int i915_ttm_purge(struct drm_i915_gem_object *obj);
 
@@ -92,4 +96,10 @@ static inline bool i915_ttm_cpu_maps_iomem(struct ttm_resource *mem)
 	/* Once / if we support GGTT, this is also false for cached ttm_tts */
 	return mem->mem_type != I915_PL_SYSTEM;
 }
+
+int __i915_ttm_get_pages(struct drm_i915_gem_object *obj,
+			 struct ttm_placement *placement);
+void i915_ttm_put_pages(struct drm_i915_gem_object *obj, struct sg_table *st);
+int i915_ttm_err_to_gem(int err);
+
 #endif
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [PATCH 4/4] drm/i915: internal buffers use ttm backend
@ 2022-05-03 19:13   ` Robert Beckett
  0 siblings, 0 replies; 32+ messages in thread
From: Robert Beckett @ 2022-05-03 19:13 UTC (permalink / raw)
  To: dri-devel, intel-gfx, Jani Nikula, Joonas Lahtinen, Rodrigo Vivi,
	Tvrtko Ursulin, David Airlie, Daniel Vetter
  Cc: Robert Beckett, Thomas Hellström, Matthew Auld, linux-kernel

refactor internal buffer backend to allocate volatile pages via
ttm pool allocator

Signed-off-by: Robert Beckett <bob.beckett@collabora.com>
---
 drivers/gpu/drm/i915/gem/i915_gem_internal.c | 264 ++++++++-----------
 drivers/gpu/drm/i915/gem/i915_gem_internal.h |   5 -
 drivers/gpu/drm/i915/gem/i915_gem_ttm.c      |  12 +-
 drivers/gpu/drm/i915/gem/i915_gem_ttm.h      |  12 +-
 4 files changed, 125 insertions(+), 168 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_internal.c b/drivers/gpu/drm/i915/gem/i915_gem_internal.c
index c698f95af15f..815ec9466cc0 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_internal.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_internal.c
@@ -4,156 +4,119 @@
  * Copyright © 2014-2016 Intel Corporation
  */
 
-#include <linux/scatterlist.h>
-#include <linux/slab.h>
-#include <linux/swiotlb.h>
-
+#include <drm/ttm/ttm_bo_driver.h>
+#include <drm/ttm/ttm_placement.h>
+#include "drm/ttm/ttm_bo_api.h"
+#include "gem/i915_gem_internal.h"
+#include "gem/i915_gem_region.h"
+#include "gem/i915_gem_ttm.h"
 #include "i915_drv.h"
-#include "i915_gem.h"
-#include "i915_gem_internal.h"
-#include "i915_gem_object.h"
-#include "i915_scatterlist.h"
-#include "i915_utils.h"
-
-#define QUIET (__GFP_NORETRY | __GFP_NOWARN)
-#define MAYFAIL (__GFP_RETRY_MAYFAIL | __GFP_NOWARN)
-
-static void internal_free_pages(struct sg_table *st)
-{
-	struct scatterlist *sg;
-
-	for (sg = st->sgl; sg; sg = __sg_next(sg)) {
-		if (sg_page(sg))
-			__free_pages(sg_page(sg), get_order(sg->length));
-	}
-
-	sg_free_table(st);
-	kfree(st);
-}
 
-static int i915_gem_object_get_pages_internal(struct drm_i915_gem_object *obj)
+static int i915_internal_get_pages(struct drm_i915_gem_object *obj)
 {
-	struct drm_i915_private *i915 = to_i915(obj->base.dev);
-	struct sg_table *st;
-	struct scatterlist *sg;
-	unsigned int sg_page_sizes;
-	unsigned int npages;
-	int max_order;
-	gfp_t gfp;
-
-	max_order = MAX_ORDER;
-#ifdef CONFIG_SWIOTLB
-	if (is_swiotlb_active(obj->base.dev->dev)) {
-		unsigned int max_segment;
-
-		max_segment = swiotlb_max_segment();
-		if (max_segment) {
-			max_segment = max_t(unsigned int, max_segment,
-					    PAGE_SIZE) >> PAGE_SHIFT;
-			max_order = min(max_order, ilog2(max_segment));
-		}
+	struct ttm_buffer_object *bo = i915_gem_to_ttm(obj);
+	struct ttm_operation_ctx ctx = {
+		.interruptible = true,
+		.no_wait_gpu = false,
+	};
+	struct ttm_place place = {
+		.fpfn = 0,
+		.lpfn = 0,
+		.mem_type = I915_PL_SYSTEM,
+		.flags = 0,
+	};
+	struct ttm_placement placement = {
+		.num_placement = 1,
+		.placement = &place,
+		.num_busy_placement = 0,
+		.busy_placement = NULL,
+	};
+	int ret;
+
+	ret = ttm_bo_validate(bo, &placement, &ctx);
+	if (ret) {
+		ret = i915_ttm_err_to_gem(ret);
+		return ret;
 	}
-#endif
 
-	gfp = GFP_KERNEL | __GFP_HIGHMEM | __GFP_RECLAIMABLE;
-	if (IS_I965GM(i915) || IS_I965G(i915)) {
-		/* 965gm cannot relocate objects above 4GiB. */
-		gfp &= ~__GFP_HIGHMEM;
-		gfp |= __GFP_DMA32;
+	if (bo->ttm && !ttm_tt_is_populated(bo->ttm)) {
+		ret = ttm_tt_populate(bo->bdev, bo->ttm, &ctx);
+		if (ret)
+			return ret;
 	}
 
-create_st:
-	st = kmalloc(sizeof(*st), GFP_KERNEL);
-	if (!st)
-		return -ENOMEM;
+	if (!i915_gem_object_has_pages(obj)) {
+		struct i915_refct_sgt *rsgt =
+			i915_ttm_resource_get_st(obj, bo->resource);
 
-	npages = obj->base.size / PAGE_SIZE;
-	if (sg_alloc_table(st, npages, GFP_KERNEL)) {
-		kfree(st);
-		return -ENOMEM;
-	}
+		if (IS_ERR(rsgt))
+			return PTR_ERR(rsgt);
 
-	sg = st->sgl;
-	st->nents = 0;
-	sg_page_sizes = 0;
-
-	do {
-		int order = min(fls(npages) - 1, max_order);
-		struct page *page;
-
-		do {
-			page = alloc_pages(gfp | (order ? QUIET : MAYFAIL),
-					   order);
-			if (page)
-				break;
-			if (!order--)
-				goto err;
-
-			/* Limit subsequent allocations as well */
-			max_order = order;
-		} while (1);
-
-		sg_set_page(sg, page, PAGE_SIZE << order, 0);
-		sg_page_sizes |= PAGE_SIZE << order;
-		st->nents++;
-
-		npages -= 1 << order;
-		if (!npages) {
-			sg_mark_end(sg);
-			break;
-		}
-
-		sg = __sg_next(sg);
-	} while (1);
-
-	if (i915_gem_gtt_prepare_pages(obj, st)) {
-		/* Failed to dma-map try again with single page sg segments */
-		if (get_order(st->sgl->length)) {
-			internal_free_pages(st);
-			max_order = 0;
-			goto create_st;
-		}
-		goto err;
+		GEM_BUG_ON(obj->mm.rsgt);
+		obj->mm.rsgt = rsgt;
+		__i915_gem_object_set_pages(obj, &rsgt->table,
+					    i915_sg_dma_sizes(rsgt->table.sgl));
 	}
 
-	__i915_gem_object_set_pages(obj, st, sg_page_sizes);
+	GEM_BUG_ON(bo->ttm && ((obj->base.size >> PAGE_SHIFT) < bo->ttm->num_pages));
+	i915_ttm_adjust_lru(obj);
 
 	return 0;
+}
 
-err:
-	sg_set_page(sg, NULL, 0, 0);
-	sg_mark_end(sg);
-	internal_free_pages(st);
+static const struct drm_i915_gem_object_ops i915_gem_object_internal_ops = {
+	.name = "i915_gem_object_ttm",
+	.flags = I915_GEM_OBJECT_IS_SHRINKABLE,
 
-	return -ENOMEM;
-}
+	.get_pages = i915_internal_get_pages,
+	.put_pages = i915_ttm_put_pages,
+	.adjust_lru = i915_ttm_adjust_lru,
+	.delayed_free = i915_ttm_delayed_free,
+};
 
-static void i915_gem_object_put_pages_internal(struct drm_i915_gem_object *obj,
-					       struct sg_table *pages)
+void i915_ttm_internal_bo_destroy(struct ttm_buffer_object *bo)
 {
-	i915_gem_gtt_finish_pages(obj, pages);
-	internal_free_pages(pages);
+	struct drm_i915_gem_object *obj = i915_ttm_to_gem(bo);
 
-	obj->mm.dirty = false;
+	mutex_destroy(&obj->ttm.get_io_page.lock);
 
-	__start_cpu_write(obj);
-}
+	if (obj->ttm.created) {
+		/* This releases all gem object bindings to the backend. */
+		__i915_gem_free_object(obj);
 
-static const struct drm_i915_gem_object_ops i915_gem_object_internal_ops = {
-	.name = "i915_gem_object_internal",
-	.flags = I915_GEM_OBJECT_IS_SHRINKABLE,
-	.get_pages = i915_gem_object_get_pages_internal,
-	.put_pages = i915_gem_object_put_pages_internal,
-};
+		call_rcu(&obj->rcu, __i915_gem_free_object_rcu);
+	} else {
+		__i915_gem_object_fini(obj);
+	}
+}
 
+/**
+ * i915_gem_object_create_internal: create an object with volatile pages
+ * @i915: the i915 device
+ * @size: the size in bytes of backing storage to allocate for the object
+ *
+ * Creates a new object that wraps some internal memory for private use.
+ * This object is not backed by swappable storage, and as such its contents
+ * are volatile and only valid whilst pinned. If the object is reaped by the
+ * shrinker, its pages and data will be discarded. Equally, it is not a full
+ * GEM object and so not valid for access from userspace. This makes it useful
+ * for hardware interfaces like ringbuffers (which are pinned from the time
+ * the request is written to the time the hardware stops accessing it), but
+ * not for contexts (which need to be preserved when not active for later
+ * reuse). Note that it is not cleared upon allocation.
+ */
 struct drm_i915_gem_object *
-__i915_gem_object_create_internal(struct drm_i915_private *i915,
-				  const struct drm_i915_gem_object_ops *ops,
-				  phys_addr_t size)
+i915_gem_object_create_internal(struct drm_i915_private *i915,
+				phys_addr_t size)
 {
 	static struct lock_class_key lock_class;
 	struct drm_i915_gem_object *obj;
 	unsigned int cache_level;
+	struct ttm_operation_ctx ctx = {
+		.interruptible = true,
+		.no_wait_gpu = false,
+	};
+	int ret;
 
 	GEM_BUG_ON(!size);
 	GEM_BUG_ON(!IS_ALIGNED(size, PAGE_SIZE));
@@ -166,45 +129,34 @@ __i915_gem_object_create_internal(struct drm_i915_private *i915,
 		return ERR_PTR(-ENOMEM);
 
 	drm_gem_private_object_init(&i915->drm, &obj->base, size);
-	i915_gem_object_init(obj, ops, &lock_class, 0);
-	obj->mem_flags |= I915_BO_FLAG_STRUCT_PAGE;
+	i915_gem_object_init(obj, &i915_gem_object_internal_ops, &lock_class,
+			     I915_BO_ALLOC_VOLATILE);
+
+	INIT_LIST_HEAD(&obj->mm.region_link);
+
+	INIT_RADIX_TREE(&obj->ttm.get_io_page.radix, GFP_KERNEL | __GFP_NOWARN);
+	mutex_init(&obj->ttm.get_io_page.lock);
 
-	/*
-	 * Mark the object as volatile, such that the pages are marked as
-	 * dontneed whilst they are still pinned. As soon as they are unpinned
-	 * they are allowed to be reaped by the shrinker, and the caller is
-	 * expected to repopulate - the contents of this object are only valid
-	 * whilst active and pinned.
-	 */
-	i915_gem_object_set_volatile(obj);
+	obj->base.vma_node.driver_private = i915_gem_to_ttm(obj);
 
+	ret = ttm_bo_init_reserved(&i915->bdev, i915_gem_to_ttm(obj), size,
+				   ttm_bo_type_kernel, i915_ttm_sys_placement(),
+				   0, &ctx, NULL, NULL, i915_ttm_internal_bo_destroy);
+	if (ret) {
+		ret = i915_ttm_err_to_gem(ret);
+		i915_gem_object_free(obj);
+		return ERR_PTR(ret);
+	}
+
+	obj->ttm.created = true;
 	obj->read_domains = I915_GEM_DOMAIN_CPU;
 	obj->write_domain = I915_GEM_DOMAIN_CPU;
-
+	obj->mem_flags &= ~I915_BO_FLAG_IOMEM;
+	obj->mem_flags |= I915_BO_FLAG_STRUCT_PAGE;
 	cache_level = HAS_LLC(i915) ? I915_CACHE_LLC : I915_CACHE_NONE;
 	i915_gem_object_set_cache_coherency(obj, cache_level);
+	i915_gem_object_unlock(obj);
 
 	return obj;
 }
 
-/**
- * i915_gem_object_create_internal: create an object with volatile pages
- * @i915: the i915 device
- * @size: the size in bytes of backing storage to allocate for the object
- *
- * Creates a new object that wraps some internal memory for private use.
- * This object is not backed by swappable storage, and as such its contents
- * are volatile and only valid whilst pinned. If the object is reaped by the
- * shrinker, its pages and data will be discarded. Equally, it is not a full
- * GEM object and so not valid for access from userspace. This makes it useful
- * for hardware interfaces like ringbuffers (which are pinned from the time
- * the request is written to the time the hardware stops accessing it), but
- * not for contexts (which need to be preserved when not active for later
- * reuse). Note that it is not cleared upon allocation.
- */
-struct drm_i915_gem_object *
-i915_gem_object_create_internal(struct drm_i915_private *i915,
-				phys_addr_t size)
-{
-	return __i915_gem_object_create_internal(i915, &i915_gem_object_internal_ops, size);
-}
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_internal.h b/drivers/gpu/drm/i915/gem/i915_gem_internal.h
index 6664e06112fc..524e1042b20f 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_internal.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_internal.h
@@ -15,9 +15,4 @@ struct drm_i915_private;
 struct drm_i915_gem_object *
 i915_gem_object_create_internal(struct drm_i915_private *i915,
 				phys_addr_t size);
-struct drm_i915_gem_object *
-__i915_gem_object_create_internal(struct drm_i915_private *i915,
-				  const struct drm_i915_gem_object_ops *ops,
-				  phys_addr_t size);
-
 #endif /* __I915_GEM_INTERNAL_H__ */
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
index fdb3a1c18cb6..92195ead8c11 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
@@ -83,7 +83,7 @@ struct ttm_placement *i915_ttm_sys_placement(void)
 	return &i915_sys_placement;
 }
 
-static int i915_ttm_err_to_gem(int err)
+int i915_ttm_err_to_gem(int err)
 {
 	/* Fastpath */
 	if (likely(!err))
@@ -745,8 +745,8 @@ struct ttm_device_funcs *i915_ttm_driver(void)
 	return &i915_ttm_bo_driver;
 }
 
-static int __i915_ttm_get_pages(struct drm_i915_gem_object *obj,
-				struct ttm_placement *placement)
+int __i915_ttm_get_pages(struct drm_i915_gem_object *obj,
+			 struct ttm_placement *placement)
 {
 	struct ttm_buffer_object *bo = i915_gem_to_ttm(obj);
 	struct ttm_operation_ctx ctx = {
@@ -871,8 +871,8 @@ static int i915_ttm_migrate(struct drm_i915_gem_object *obj,
 	return __i915_ttm_migrate(obj, mr, obj->flags);
 }
 
-static void i915_ttm_put_pages(struct drm_i915_gem_object *obj,
-			       struct sg_table *st)
+void i915_ttm_put_pages(struct drm_i915_gem_object *obj,
+			struct sg_table *st)
 {
 	/*
 	 * We're currently not called from a shrinker, so put_pages()
@@ -995,7 +995,7 @@ void i915_ttm_adjust_lru(struct drm_i915_gem_object *obj)
  * it's not idle, and using the TTM destroyed list handling could help us
  * benefit from that.
  */
-static void i915_ttm_delayed_free(struct drm_i915_gem_object *obj)
+void i915_ttm_delayed_free(struct drm_i915_gem_object *obj)
 {
 	GEM_BUG_ON(!obj->ttm.created);
 
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm.h b/drivers/gpu/drm/i915/gem/i915_gem_ttm.h
index 73e371aa3850..06701c46d8e2 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_ttm.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm.h
@@ -26,6 +26,7 @@ i915_gem_to_ttm(struct drm_i915_gem_object *obj)
  * i915 ttm gem object destructor. Internal use only.
  */
 void i915_ttm_bo_destroy(struct ttm_buffer_object *bo);
+void i915_ttm_internal_bo_destroy(struct ttm_buffer_object *bo);
 
 /**
  * i915_ttm_to_gem - Convert a struct ttm_buffer_object to an embedding
@@ -37,8 +38,10 @@ void i915_ttm_bo_destroy(struct ttm_buffer_object *bo);
 static inline struct drm_i915_gem_object *
 i915_ttm_to_gem(struct ttm_buffer_object *bo)
 {
-	if (bo->destroy != i915_ttm_bo_destroy)
+	if (bo->destroy != i915_ttm_bo_destroy &&
+	    bo->destroy != i915_ttm_internal_bo_destroy) {
 		return NULL;
+	}
 
 	return container_of(bo, struct drm_i915_gem_object, __do_not_access);
 }
@@ -66,6 +69,7 @@ i915_ttm_resource_get_st(struct drm_i915_gem_object *obj,
 			 struct ttm_resource *res);
 
 void i915_ttm_adjust_lru(struct drm_i915_gem_object *obj);
+void i915_ttm_delayed_free(struct drm_i915_gem_object *obj);
 
 int i915_ttm_purge(struct drm_i915_gem_object *obj);
 
@@ -92,4 +96,10 @@ static inline bool i915_ttm_cpu_maps_iomem(struct ttm_resource *mem)
 	/* Once / if we support GGTT, this is also false for cached ttm_tts */
 	return mem->mem_type != I915_PL_SYSTEM;
 }
+
+int __i915_ttm_get_pages(struct drm_i915_gem_object *obj,
+			 struct ttm_placement *placement);
+void i915_ttm_put_pages(struct drm_i915_gem_object *obj, struct sg_table *st);
+int i915_ttm_err_to_gem(int err);
+
 #endif
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 32+ messages in thread
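
The kerneldoc for i915_gem_object_create_internal(), relocated by the patch above, keeps the existing contract: an internal object's contents are only valid while its pages are pinned, and the shrinker may discard them once unpinned. A minimal usage sketch of that contract follows; the wrapper function name is purely illustrative, and it assumes the existing pin helpers i915_gem_object_pin_pages_unlocked() and i915_gem_object_unpin_pages(), which are not touched by this series:

static int example_internal_object_use(struct drm_i915_private *i915)
{
	struct drm_i915_gem_object *obj;
	int err;

	obj = i915_gem_object_create_internal(i915, SZ_64K);
	if (IS_ERR(obj))
		return PTR_ERR(obj);

	/*
	 * Pages only exist (and keep their contents) while pinned; with
	 * this series they are supplied by the TTM pool allocator.
	 */
	err = i915_gem_object_pin_pages_unlocked(obj);
	if (err)
		goto out_put;

	/* ... fill and use the buffer, e.g. as a transient HW payload ... */

	i915_gem_object_unpin_pages(obj); /* contents may be reaped from here on */
out_put:
	i915_gem_object_put(obj);
	return err;
}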

* [Intel-gfx] [PATCH 4/4] drm/i915: internal buffers use ttm backend
@ 2022-05-03 19:13   ` Robert Beckett
  0 siblings, 0 replies; 32+ messages in thread
From: Robert Beckett @ 2022-05-03 19:13 UTC (permalink / raw)
  To: dri-devel, intel-gfx, Jani Nikula, Joonas Lahtinen, Rodrigo Vivi,
	Tvrtko Ursulin, David Airlie, Daniel Vetter
  Cc: Thomas Hellström, Matthew Auld, linux-kernel

Refactor the internal buffer backend to allocate volatile pages via the
TTM pool allocator.

Signed-off-by: Robert Beckett <bob.beckett@collabora.com>
---
 drivers/gpu/drm/i915/gem/i915_gem_internal.c | 264 ++++++++-----------
 drivers/gpu/drm/i915/gem/i915_gem_internal.h |   5 -
 drivers/gpu/drm/i915/gem/i915_gem_ttm.c      |  12 +-
 drivers/gpu/drm/i915/gem/i915_gem_ttm.h      |  12 +-
 4 files changed, 125 insertions(+), 168 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_internal.c b/drivers/gpu/drm/i915/gem/i915_gem_internal.c
index c698f95af15f..815ec9466cc0 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_internal.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_internal.c
@@ -4,156 +4,119 @@
  * Copyright © 2014-2016 Intel Corporation
  */
 
-#include <linux/scatterlist.h>
-#include <linux/slab.h>
-#include <linux/swiotlb.h>
-
+#include <drm/ttm/ttm_bo_driver.h>
+#include <drm/ttm/ttm_placement.h>
+#include "drm/ttm/ttm_bo_api.h"
+#include "gem/i915_gem_internal.h"
+#include "gem/i915_gem_region.h"
+#include "gem/i915_gem_ttm.h"
 #include "i915_drv.h"
-#include "i915_gem.h"
-#include "i915_gem_internal.h"
-#include "i915_gem_object.h"
-#include "i915_scatterlist.h"
-#include "i915_utils.h"
-
-#define QUIET (__GFP_NORETRY | __GFP_NOWARN)
-#define MAYFAIL (__GFP_RETRY_MAYFAIL | __GFP_NOWARN)
-
-static void internal_free_pages(struct sg_table *st)
-{
-	struct scatterlist *sg;
-
-	for (sg = st->sgl; sg; sg = __sg_next(sg)) {
-		if (sg_page(sg))
-			__free_pages(sg_page(sg), get_order(sg->length));
-	}
-
-	sg_free_table(st);
-	kfree(st);
-}
 
-static int i915_gem_object_get_pages_internal(struct drm_i915_gem_object *obj)
+static int i915_internal_get_pages(struct drm_i915_gem_object *obj)
 {
-	struct drm_i915_private *i915 = to_i915(obj->base.dev);
-	struct sg_table *st;
-	struct scatterlist *sg;
-	unsigned int sg_page_sizes;
-	unsigned int npages;
-	int max_order;
-	gfp_t gfp;
-
-	max_order = MAX_ORDER;
-#ifdef CONFIG_SWIOTLB
-	if (is_swiotlb_active(obj->base.dev->dev)) {
-		unsigned int max_segment;
-
-		max_segment = swiotlb_max_segment();
-		if (max_segment) {
-			max_segment = max_t(unsigned int, max_segment,
-					    PAGE_SIZE) >> PAGE_SHIFT;
-			max_order = min(max_order, ilog2(max_segment));
-		}
+	struct ttm_buffer_object *bo = i915_gem_to_ttm(obj);
+	struct ttm_operation_ctx ctx = {
+		.interruptible = true,
+		.no_wait_gpu = false,
+	};
+	struct ttm_place place = {
+		.fpfn = 0,
+		.lpfn = 0,
+		.mem_type = I915_PL_SYSTEM,
+		.flags = 0,
+	};
+	struct ttm_placement placement = {
+		.num_placement = 1,
+		.placement = &place,
+		.num_busy_placement = 0,
+		.busy_placement = NULL,
+	};
+	int ret;
+
+	ret = ttm_bo_validate(bo, &placement, &ctx);
+	if (ret) {
+		ret = i915_ttm_err_to_gem(ret);
+		return ret;
 	}
-#endif
 
-	gfp = GFP_KERNEL | __GFP_HIGHMEM | __GFP_RECLAIMABLE;
-	if (IS_I965GM(i915) || IS_I965G(i915)) {
-		/* 965gm cannot relocate objects above 4GiB. */
-		gfp &= ~__GFP_HIGHMEM;
-		gfp |= __GFP_DMA32;
+	if (bo->ttm && !ttm_tt_is_populated(bo->ttm)) {
+		ret = ttm_tt_populate(bo->bdev, bo->ttm, &ctx);
+		if (ret)
+			return ret;
 	}
 
-create_st:
-	st = kmalloc(sizeof(*st), GFP_KERNEL);
-	if (!st)
-		return -ENOMEM;
+	if (!i915_gem_object_has_pages(obj)) {
+		struct i915_refct_sgt *rsgt =
+			i915_ttm_resource_get_st(obj, bo->resource);
 
-	npages = obj->base.size / PAGE_SIZE;
-	if (sg_alloc_table(st, npages, GFP_KERNEL)) {
-		kfree(st);
-		return -ENOMEM;
-	}
+		if (IS_ERR(rsgt))
+			return PTR_ERR(rsgt);
 
-	sg = st->sgl;
-	st->nents = 0;
-	sg_page_sizes = 0;
-
-	do {
-		int order = min(fls(npages) - 1, max_order);
-		struct page *page;
-
-		do {
-			page = alloc_pages(gfp | (order ? QUIET : MAYFAIL),
-					   order);
-			if (page)
-				break;
-			if (!order--)
-				goto err;
-
-			/* Limit subsequent allocations as well */
-			max_order = order;
-		} while (1);
-
-		sg_set_page(sg, page, PAGE_SIZE << order, 0);
-		sg_page_sizes |= PAGE_SIZE << order;
-		st->nents++;
-
-		npages -= 1 << order;
-		if (!npages) {
-			sg_mark_end(sg);
-			break;
-		}
-
-		sg = __sg_next(sg);
-	} while (1);
-
-	if (i915_gem_gtt_prepare_pages(obj, st)) {
-		/* Failed to dma-map try again with single page sg segments */
-		if (get_order(st->sgl->length)) {
-			internal_free_pages(st);
-			max_order = 0;
-			goto create_st;
-		}
-		goto err;
+		GEM_BUG_ON(obj->mm.rsgt);
+		obj->mm.rsgt = rsgt;
+		__i915_gem_object_set_pages(obj, &rsgt->table,
+					    i915_sg_dma_sizes(rsgt->table.sgl));
 	}
 
-	__i915_gem_object_set_pages(obj, st, sg_page_sizes);
+	GEM_BUG_ON(bo->ttm && ((obj->base.size >> PAGE_SHIFT) < bo->ttm->num_pages));
+	i915_ttm_adjust_lru(obj);
 
 	return 0;
+}
 
-err:
-	sg_set_page(sg, NULL, 0, 0);
-	sg_mark_end(sg);
-	internal_free_pages(st);
+static const struct drm_i915_gem_object_ops i915_gem_object_internal_ops = {
+	.name = "i915_gem_object_ttm",
+	.flags = I915_GEM_OBJECT_IS_SHRINKABLE,
 
-	return -ENOMEM;
-}
+	.get_pages = i915_internal_get_pages,
+	.put_pages = i915_ttm_put_pages,
+	.adjust_lru = i915_ttm_adjust_lru,
+	.delayed_free = i915_ttm_delayed_free,
+};
 
-static void i915_gem_object_put_pages_internal(struct drm_i915_gem_object *obj,
-					       struct sg_table *pages)
+void i915_ttm_internal_bo_destroy(struct ttm_buffer_object *bo)
 {
-	i915_gem_gtt_finish_pages(obj, pages);
-	internal_free_pages(pages);
+	struct drm_i915_gem_object *obj = i915_ttm_to_gem(bo);
 
-	obj->mm.dirty = false;
+	mutex_destroy(&obj->ttm.get_io_page.lock);
 
-	__start_cpu_write(obj);
-}
+	if (obj->ttm.created) {
+		/* This releases all gem object bindings to the backend. */
+		__i915_gem_free_object(obj);
 
-static const struct drm_i915_gem_object_ops i915_gem_object_internal_ops = {
-	.name = "i915_gem_object_internal",
-	.flags = I915_GEM_OBJECT_IS_SHRINKABLE,
-	.get_pages = i915_gem_object_get_pages_internal,
-	.put_pages = i915_gem_object_put_pages_internal,
-};
+		call_rcu(&obj->rcu, __i915_gem_free_object_rcu);
+	} else {
+		__i915_gem_object_fini(obj);
+	}
+}
 
+/**
+ * i915_gem_object_create_internal: create an object with volatile pages
+ * @i915: the i915 device
+ * @size: the size in bytes of backing storage to allocate for the object
+ *
+ * Creates a new object that wraps some internal memory for private use.
+ * This object is not backed by swappable storage, and as such its contents
+ * are volatile and only valid whilst pinned. If the object is reaped by the
+ * shrinker, its pages and data will be discarded. Equally, it is not a full
+ * GEM object and so not valid for access from userspace. This makes it useful
+ * for hardware interfaces like ringbuffers (which are pinned from the time
+ * the request is written to the time the hardware stops accessing it), but
+ * not for contexts (which need to be preserved when not active for later
+ * reuse). Note that it is not cleared upon allocation.
+ */
 struct drm_i915_gem_object *
-__i915_gem_object_create_internal(struct drm_i915_private *i915,
-				  const struct drm_i915_gem_object_ops *ops,
-				  phys_addr_t size)
+i915_gem_object_create_internal(struct drm_i915_private *i915,
+				phys_addr_t size)
 {
 	static struct lock_class_key lock_class;
 	struct drm_i915_gem_object *obj;
 	unsigned int cache_level;
+	struct ttm_operation_ctx ctx = {
+		.interruptible = true,
+		.no_wait_gpu = false,
+	};
+	int ret;
 
 	GEM_BUG_ON(!size);
 	GEM_BUG_ON(!IS_ALIGNED(size, PAGE_SIZE));
@@ -166,45 +129,34 @@ __i915_gem_object_create_internal(struct drm_i915_private *i915,
 		return ERR_PTR(-ENOMEM);
 
 	drm_gem_private_object_init(&i915->drm, &obj->base, size);
-	i915_gem_object_init(obj, ops, &lock_class, 0);
-	obj->mem_flags |= I915_BO_FLAG_STRUCT_PAGE;
+	i915_gem_object_init(obj, &i915_gem_object_internal_ops, &lock_class,
+			     I915_BO_ALLOC_VOLATILE);
+
+	INIT_LIST_HEAD(&obj->mm.region_link);
+
+	INIT_RADIX_TREE(&obj->ttm.get_io_page.radix, GFP_KERNEL | __GFP_NOWARN);
+	mutex_init(&obj->ttm.get_io_page.lock);
 
-	/*
-	 * Mark the object as volatile, such that the pages are marked as
-	 * dontneed whilst they are still pinned. As soon as they are unpinned
-	 * they are allowed to be reaped by the shrinker, and the caller is
-	 * expected to repopulate - the contents of this object are only valid
-	 * whilst active and pinned.
-	 */
-	i915_gem_object_set_volatile(obj);
+	obj->base.vma_node.driver_private = i915_gem_to_ttm(obj);
 
+	ret = ttm_bo_init_reserved(&i915->bdev, i915_gem_to_ttm(obj), size,
+				   ttm_bo_type_kernel, i915_ttm_sys_placement(),
+				   0, &ctx, NULL, NULL, i915_ttm_internal_bo_destroy);
+	if (ret) {
+		ret = i915_ttm_err_to_gem(ret);
+		i915_gem_object_free(obj);
+		return ERR_PTR(ret);
+	}
+
+	obj->ttm.created = true;
 	obj->read_domains = I915_GEM_DOMAIN_CPU;
 	obj->write_domain = I915_GEM_DOMAIN_CPU;
-
+	obj->mem_flags &= ~I915_BO_FLAG_IOMEM;
+	obj->mem_flags |= I915_BO_FLAG_STRUCT_PAGE;
 	cache_level = HAS_LLC(i915) ? I915_CACHE_LLC : I915_CACHE_NONE;
 	i915_gem_object_set_cache_coherency(obj, cache_level);
+	i915_gem_object_unlock(obj);
 
 	return obj;
 }
 
-/**
- * i915_gem_object_create_internal: create an object with volatile pages
- * @i915: the i915 device
- * @size: the size in bytes of backing storage to allocate for the object
- *
- * Creates a new object that wraps some internal memory for private use.
- * This object is not backed by swappable storage, and as such its contents
- * are volatile and only valid whilst pinned. If the object is reaped by the
- * shrinker, its pages and data will be discarded. Equally, it is not a full
- * GEM object and so not valid for access from userspace. This makes it useful
- * for hardware interfaces like ringbuffers (which are pinned from the time
- * the request is written to the time the hardware stops accessing it), but
- * not for contexts (which need to be preserved when not active for later
- * reuse). Note that it is not cleared upon allocation.
- */
-struct drm_i915_gem_object *
-i915_gem_object_create_internal(struct drm_i915_private *i915,
-				phys_addr_t size)
-{
-	return __i915_gem_object_create_internal(i915, &i915_gem_object_internal_ops, size);
-}
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_internal.h b/drivers/gpu/drm/i915/gem/i915_gem_internal.h
index 6664e06112fc..524e1042b20f 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_internal.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_internal.h
@@ -15,9 +15,4 @@ struct drm_i915_private;
 struct drm_i915_gem_object *
 i915_gem_object_create_internal(struct drm_i915_private *i915,
 				phys_addr_t size);
-struct drm_i915_gem_object *
-__i915_gem_object_create_internal(struct drm_i915_private *i915,
-				  const struct drm_i915_gem_object_ops *ops,
-				  phys_addr_t size);
-
 #endif /* __I915_GEM_INTERNAL_H__ */
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
index fdb3a1c18cb6..92195ead8c11 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
@@ -83,7 +83,7 @@ struct ttm_placement *i915_ttm_sys_placement(void)
 	return &i915_sys_placement;
 }
 
-static int i915_ttm_err_to_gem(int err)
+int i915_ttm_err_to_gem(int err)
 {
 	/* Fastpath */
 	if (likely(!err))
@@ -745,8 +745,8 @@ struct ttm_device_funcs *i915_ttm_driver(void)
 	return &i915_ttm_bo_driver;
 }
 
-static int __i915_ttm_get_pages(struct drm_i915_gem_object *obj,
-				struct ttm_placement *placement)
+int __i915_ttm_get_pages(struct drm_i915_gem_object *obj,
+			 struct ttm_placement *placement)
 {
 	struct ttm_buffer_object *bo = i915_gem_to_ttm(obj);
 	struct ttm_operation_ctx ctx = {
@@ -871,8 +871,8 @@ static int i915_ttm_migrate(struct drm_i915_gem_object *obj,
 	return __i915_ttm_migrate(obj, mr, obj->flags);
 }
 
-static void i915_ttm_put_pages(struct drm_i915_gem_object *obj,
-			       struct sg_table *st)
+void i915_ttm_put_pages(struct drm_i915_gem_object *obj,
+			struct sg_table *st)
 {
 	/*
 	 * We're currently not called from a shrinker, so put_pages()
@@ -995,7 +995,7 @@ void i915_ttm_adjust_lru(struct drm_i915_gem_object *obj)
  * it's not idle, and using the TTM destroyed list handling could help us
  * benefit from that.
  */
-static void i915_ttm_delayed_free(struct drm_i915_gem_object *obj)
+void i915_ttm_delayed_free(struct drm_i915_gem_object *obj)
 {
 	GEM_BUG_ON(!obj->ttm.created);
 
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm.h b/drivers/gpu/drm/i915/gem/i915_gem_ttm.h
index 73e371aa3850..06701c46d8e2 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_ttm.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm.h
@@ -26,6 +26,7 @@ i915_gem_to_ttm(struct drm_i915_gem_object *obj)
  * i915 ttm gem object destructor. Internal use only.
  */
 void i915_ttm_bo_destroy(struct ttm_buffer_object *bo);
+void i915_ttm_internal_bo_destroy(struct ttm_buffer_object *bo);
 
 /**
  * i915_ttm_to_gem - Convert a struct ttm_buffer_object to an embedding
@@ -37,8 +38,10 @@ void i915_ttm_bo_destroy(struct ttm_buffer_object *bo);
 static inline struct drm_i915_gem_object *
 i915_ttm_to_gem(struct ttm_buffer_object *bo)
 {
-	if (bo->destroy != i915_ttm_bo_destroy)
+	if (bo->destroy != i915_ttm_bo_destroy &&
+	    bo->destroy != i915_ttm_internal_bo_destroy) {
 		return NULL;
+	}
 
 	return container_of(bo, struct drm_i915_gem_object, __do_not_access);
 }
@@ -66,6 +69,7 @@ i915_ttm_resource_get_st(struct drm_i915_gem_object *obj,
 			 struct ttm_resource *res);
 
 void i915_ttm_adjust_lru(struct drm_i915_gem_object *obj);
+void i915_ttm_delayed_free(struct drm_i915_gem_object *obj);
 
 int i915_ttm_purge(struct drm_i915_gem_object *obj);
 
@@ -92,4 +96,10 @@ static inline bool i915_ttm_cpu_maps_iomem(struct ttm_resource *mem)
 	/* Once / if we support GGTT, this is also false for cached ttm_tts */
 	return mem->mem_type != I915_PL_SYSTEM;
 }
+
+int __i915_ttm_get_pages(struct drm_i915_gem_object *obj,
+			 struct ttm_placement *placement);
+void i915_ttm_put_pages(struct drm_i915_gem_object *obj, struct sg_table *st);
+int i915_ttm_err_to_gem(int err);
+
 #endif
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 32+ messages in thread
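
For orientation, the new i915_internal_get_pages() above reduces to a validate-then-populate sequence against TTM's system-memory manager, followed by publishing the resulting sg_table to the GEM side of the object. A condensed sketch of that path, with the LRU bookkeeping, assertions and has-pages guard trimmed (the function name here is illustrative only):

static int internal_get_pages_sketch(struct drm_i915_gem_object *obj)
{
	struct ttm_buffer_object *bo = i915_gem_to_ttm(obj);
	struct ttm_operation_ctx ctx = { .interruptible = true };
	struct ttm_place place = { .mem_type = I915_PL_SYSTEM };
	struct ttm_placement placement = {
		.num_placement = 1,
		.placement = &place,
	};
	struct i915_refct_sgt *rsgt;
	int ret;

	/* Allocate/move the backing store in TTM's system-memory manager. */
	ret = ttm_bo_validate(bo, &placement, &ctx);
	if (ret)
		return i915_ttm_err_to_gem(ret);

	/* Back the ttm_tt with pages from the TTM pool allocator. */
	if (bo->ttm && !ttm_tt_is_populated(bo->ttm)) {
		ret = ttm_tt_populate(bo->bdev, bo->ttm, &ctx);
		if (ret)
			return ret;
	}

	/* Publish the resulting sg_table to the GEM side of the object. */
	rsgt = i915_ttm_resource_get_st(obj, bo->resource);
	if (IS_ERR(rsgt))
		return PTR_ERR(rsgt);
	obj->mm.rsgt = rsgt;
	__i915_gem_object_set_pages(obj, &rsgt->table,
				    i915_sg_dma_sizes(rsgt->table.sgl));

	return 0;
}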

* [Intel-gfx] ✗ Fi.CI.BAT: failure for ttm for internal
  2022-05-03 19:13 ` [Intel-gfx] " Robert Beckett
                   ` (4 preceding siblings ...)
  (?)
@ 2022-05-03 19:52 ` Patchwork
  -1 siblings, 0 replies; 32+ messages in thread
From: Patchwork @ 2022-05-03 19:52 UTC (permalink / raw)
  To: Robert Beckett; +Cc: intel-gfx

[-- Attachment #1: Type: text/plain, Size: 9901 bytes --]

== Series Details ==

Series: ttm for internal
URL   : https://patchwork.freedesktop.org/series/103492/
State : failure

== Summary ==

CI Bug Log - changes from CI_DRM_11599 -> Patchwork_103492v1
====================================================

Summary
-------

  **FAILURE**

  Serious unknown changes coming with Patchwork_103492v1 absolutely need to be
  verified manually.
  
  If you think the reported changes have nothing to do with the changes
  introduced in Patchwork_103492v1, please notify your bug team to allow them
  to document this new failure mode, which will reduce false positives in CI.

  External URL: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v1/index.html

Participating hosts (39 -> 41)
------------------------------

  Additional (4): fi-kbl-soraka fi-hsw-4770 fi-icl-u2 bat-adls-5 
  Missing    (2): bat-dg2-8 fi-bsw-cyan 

Possible new issues
-------------------

  Here are the unknown changes that may have been introduced in Patchwork_103492v1:

### IGT changes ###

#### Possible regressions ####

  * igt@i915_selftest@live@execlists:
    - fi-icl-u2:          NOTRUN -> [INCOMPLETE][1]
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v1/fi-icl-u2/igt@i915_selftest@live@execlists.html

  
#### Suppressed ####

  The following results come from untrusted machines, tests, or statuses.
  They do not affect the overall result.

  * igt@i915_selftest@live@hangcheck:
    - {bat-adls-5}:       NOTRUN -> [DMESG-WARN][2]
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v1/bat-adls-5/igt@i915_selftest@live@hangcheck.html

  
Known issues
------------

  Here are the changes found in Patchwork_103492v1 that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@gem_exec_fence@basic-busy@bcs0:
    - fi-kbl-soraka:      NOTRUN -> [SKIP][3] ([fdo#109271]) +9 similar issues
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v1/fi-kbl-soraka/igt@gem_exec_fence@basic-busy@bcs0.html

  * igt@gem_huc_copy@huc-copy:
    - fi-hsw-4770:        NOTRUN -> [SKIP][4] ([fdo#109271]) +9 similar issues
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v1/fi-hsw-4770/igt@gem_huc_copy@huc-copy.html
    - fi-kbl-soraka:      NOTRUN -> [SKIP][5] ([fdo#109271] / [i915#2190])
   [5]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v1/fi-kbl-soraka/igt@gem_huc_copy@huc-copy.html
    - fi-icl-u2:          NOTRUN -> [SKIP][6] ([i915#2190])
   [6]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v1/fi-icl-u2/igt@gem_huc_copy@huc-copy.html

  * igt@gem_lmem_swapping@basic:
    - fi-kbl-soraka:      NOTRUN -> [SKIP][7] ([fdo#109271] / [i915#4613]) +3 similar issues
   [7]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v1/fi-kbl-soraka/igt@gem_lmem_swapping@basic.html

  * igt@gem_lmem_swapping@parallel-random-engines:
    - fi-icl-u2:          NOTRUN -> [SKIP][8] ([i915#4613]) +3 similar issues
   [8]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v1/fi-icl-u2/igt@gem_lmem_swapping@parallel-random-engines.html

  * igt@i915_pm_backlight@basic-brightness:
    - fi-hsw-4770:        NOTRUN -> [SKIP][9] ([fdo#109271] / [i915#3012])
   [9]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v1/fi-hsw-4770/igt@i915_pm_backlight@basic-brightness.html

  * igt@i915_selftest@live@gt_pm:
    - fi-kbl-soraka:      NOTRUN -> [DMESG-FAIL][10] ([i915#1886])
   [10]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v1/fi-kbl-soraka/igt@i915_selftest@live@gt_pm.html

  * igt@i915_selftest@live@hangcheck:
    - fi-hsw-4770:        NOTRUN -> [INCOMPLETE][11] ([i915#4785])
   [11]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v1/fi-hsw-4770/igt@i915_selftest@live@hangcheck.html
    - fi-snb-2600:        [PASS][12] -> [INCOMPLETE][13] ([i915#3921])
   [12]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11599/fi-snb-2600/igt@i915_selftest@live@hangcheck.html
   [13]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v1/fi-snb-2600/igt@i915_selftest@live@hangcheck.html

  * igt@kms_chamelium@dp-crc-fast:
    - fi-hsw-4770:        NOTRUN -> [SKIP][14] ([fdo#109271] / [fdo#111827]) +7 similar issues
   [14]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v1/fi-hsw-4770/igt@kms_chamelium@dp-crc-fast.html

  * igt@kms_chamelium@dp-edid-read:
    - fi-kbl-soraka:      NOTRUN -> [SKIP][15] ([fdo#109271] / [fdo#111827]) +7 similar issues
   [15]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v1/fi-kbl-soraka/igt@kms_chamelium@dp-edid-read.html

  * igt@kms_chamelium@hdmi-hpd-fast:
    - fi-icl-u2:          NOTRUN -> [SKIP][16] ([fdo#111827]) +7 similar issues
   [16]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v1/fi-icl-u2/igt@kms_chamelium@hdmi-hpd-fast.html

  * igt@kms_cursor_legacy@basic-busy-flip-before-cursor-legacy:
    - fi-icl-u2:          NOTRUN -> [SKIP][17] ([fdo#109278]) +2 similar issues
   [17]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v1/fi-icl-u2/igt@kms_cursor_legacy@basic-busy-flip-before-cursor-legacy.html

  * igt@kms_force_connector_basic@force-load-detect:
    - fi-icl-u2:          NOTRUN -> [SKIP][18] ([fdo#109285])
   [18]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v1/fi-icl-u2/igt@kms_force_connector_basic@force-load-detect.html

  * igt@kms_pipe_crc_basic@compare-crc-sanitycheck-pipe-d:
    - fi-hsw-4770:        NOTRUN -> [SKIP][19] ([fdo#109271] / [i915#533])
   [19]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v1/fi-hsw-4770/igt@kms_pipe_crc_basic@compare-crc-sanitycheck-pipe-d.html
    - fi-kbl-soraka:      NOTRUN -> [SKIP][20] ([fdo#109271] / [i915#533])
   [20]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v1/fi-kbl-soraka/igt@kms_pipe_crc_basic@compare-crc-sanitycheck-pipe-d.html

  * igt@kms_psr@primary_mmap_gtt:
    - fi-hsw-4770:        NOTRUN -> [SKIP][21] ([fdo#109271] / [i915#1072]) +3 similar issues
   [21]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v1/fi-hsw-4770/igt@kms_psr@primary_mmap_gtt.html

  * igt@kms_setmode@basic-clone-single-crtc:
    - fi-icl-u2:          NOTRUN -> [SKIP][22] ([i915#3555])
   [22]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v1/fi-icl-u2/igt@kms_setmode@basic-clone-single-crtc.html

  * igt@prime_vgem@basic-userptr:
    - fi-icl-u2:          NOTRUN -> [SKIP][23] ([i915#3301])
   [23]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v1/fi-icl-u2/igt@prime_vgem@basic-userptr.html

  * igt@runner@aborted:
    - fi-icl-u2:          NOTRUN -> [FAIL][24] ([i915#4312])
   [24]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v1/fi-icl-u2/igt@runner@aborted.html
    - fi-hsw-4770:        NOTRUN -> [FAIL][25] ([fdo#109271] / [i915#4312] / [i915#5594])
   [25]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v1/fi-hsw-4770/igt@runner@aborted.html

  
#### Possible fixes ####

  * igt@i915_selftest@live@hangcheck:
    - {fi-jsl-1}:         [INCOMPLETE][26] -> [PASS][27]
   [26]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11599/fi-jsl-1/igt@i915_selftest@live@hangcheck.html
   [27]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v1/fi-jsl-1/igt@i915_selftest@live@hangcheck.html

  * igt@kms_busy@basic@flip:
    - {bat-adlp-6}:       [DMESG-WARN][28] ([i915#3576]) -> [PASS][29] +1 similar issue
   [28]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11599/bat-adlp-6/igt@kms_busy@basic@flip.html
   [29]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v1/bat-adlp-6/igt@kms_busy@basic@flip.html

  
  {name}: This element is suppressed. This means it is ignored when computing
          the status of the difference (SUCCESS, WARNING, or FAILURE).

  [fdo#109271]: https://bugs.freedesktop.org/show_bug.cgi?id=109271
  [fdo#109278]: https://bugs.freedesktop.org/show_bug.cgi?id=109278
  [fdo#109285]: https://bugs.freedesktop.org/show_bug.cgi?id=109285
  [fdo#111827]: https://bugs.freedesktop.org/show_bug.cgi?id=111827
  [i915#1072]: https://gitlab.freedesktop.org/drm/intel/issues/1072
  [i915#1886]: https://gitlab.freedesktop.org/drm/intel/issues/1886
  [i915#2190]: https://gitlab.freedesktop.org/drm/intel/issues/2190
  [i915#3012]: https://gitlab.freedesktop.org/drm/intel/issues/3012
  [i915#3282]: https://gitlab.freedesktop.org/drm/intel/issues/3282
  [i915#3291]: https://gitlab.freedesktop.org/drm/intel/issues/3291
  [i915#3301]: https://gitlab.freedesktop.org/drm/intel/issues/3301
  [i915#3555]: https://gitlab.freedesktop.org/drm/intel/issues/3555
  [i915#3576]: https://gitlab.freedesktop.org/drm/intel/issues/3576
  [i915#3921]: https://gitlab.freedesktop.org/drm/intel/issues/3921
  [i915#4103]: https://gitlab.freedesktop.org/drm/intel/issues/4103
  [i915#4312]: https://gitlab.freedesktop.org/drm/intel/issues/4312
  [i915#4613]: https://gitlab.freedesktop.org/drm/intel/issues/4613
  [i915#4785]: https://gitlab.freedesktop.org/drm/intel/issues/4785
  [i915#533]: https://gitlab.freedesktop.org/drm/intel/issues/533
  [i915#5594]: https://gitlab.freedesktop.org/drm/intel/issues/5594


Build changes
-------------

  * Linux: CI_DRM_11599 -> Patchwork_103492v1

  CI-20190529: 20190529
  CI_DRM_11599: 3117a90bbbdd0cc8da3713e2a43964c09f7bf8de @ git://anongit.freedesktop.org/gfx-ci/linux
  IGT_6464: eddc67c5c85b8ee6eb4d13752ca43da5073dc985 @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git
  Patchwork_103492v1: 3117a90bbbdd0cc8da3713e2a43964c09f7bf8de @ git://anongit.freedesktop.org/gfx-ci/linux


### Linux commits

0c2918f49204 drm/i915: internal buffers use ttm backend
76886810ed52 drm/i915: allow volatile buffers to use ttm pool allocator
f91c9968caab drm/i915: setup ggtt scratch page after memory regions
9c5b257fda97 drm/i915: add gen6 ppgtt dummy creation function

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v1/index.html

[-- Attachment #2: Type: text/html, Size: 12031 bytes --]

^ permalink raw reply	[flat|nested] 32+ messages in thread

* [Intel-gfx] ✗ Fi.CI.BAT: failure for ttm for internal (rev2)
  2022-05-03 19:13 ` [Intel-gfx] " Robert Beckett
                   ` (5 preceding siblings ...)
  (?)
@ 2022-05-06  3:44 ` Patchwork
  -1 siblings, 0 replies; 32+ messages in thread
From: Patchwork @ 2022-05-06  3:44 UTC (permalink / raw)
  To: Robert Beckett; +Cc: intel-gfx

[-- Attachment #1: Type: text/plain, Size: 5857 bytes --]

== Series Details ==

Series: ttm for internal (rev2)
URL   : https://patchwork.freedesktop.org/series/103492/
State : failure

== Summary ==

CI Bug Log - changes from CI_DRM_11614 -> Patchwork_103492v2
====================================================

Summary
-------

  **FAILURE**

  Serious unknown changes coming with Patchwork_103492v2 absolutely need to be
  verified manually.
  
  If you think the reported changes have nothing to do with the changes
  introduced in Patchwork_103492v2, please notify your bug team to allow them
  to document this new failure mode, which will reduce false positives in CI.

  External URL: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v2/index.html

Participating hosts (41 -> 37)
------------------------------

  Additional (1): bat-adlm-1 
  Missing    (5): fi-bsw-cyan fi-icl-u2 fi-apl-guc bat-rpls-1 bat-jsl-2 

Possible new issues
-------------------

  Here are the unknown changes that may have been introduced in Patchwork_103492v2:

### IGT changes ###

#### Possible regressions ####

  * igt@i915_selftest@live@gem:
    - fi-elk-e7500:       [PASS][1] -> [DMESG-FAIL][2]
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11614/fi-elk-e7500/igt@i915_selftest@live@gem.html
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v2/fi-elk-e7500/igt@i915_selftest@live@gem.html

  
#### Suppressed ####

  The following results come from untrusted machines, tests, or statuses.
  They do not affect the overall result.

  * igt@i915_selftest@live@gt_mocs:
    - {bat-adlm-1}:       NOTRUN -> [INCOMPLETE][3]
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v2/bat-adlm-1/igt@i915_selftest@live@gt_mocs.html

  
Known issues
------------

  Here are the changes found in Patchwork_103492v2 that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@kms_chamelium@common-hpd-after-suspend:
    - fi-bsw-n3050:       NOTRUN -> [SKIP][4] ([fdo#109271] / [fdo#111827])
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v2/fi-bsw-n3050/igt@kms_chamelium@common-hpd-after-suspend.html

  * igt@kms_flip@basic-flip-vs-modeset@b-edp1:
    - bat-adlp-4:         [PASS][5] -> [DMESG-WARN][6] ([i915#3576]) +3 similar issues
   [5]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11614/bat-adlp-4/igt@kms_flip@basic-flip-vs-modeset@b-edp1.html
   [6]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v2/bat-adlp-4/igt@kms_flip@basic-flip-vs-modeset@b-edp1.html

  * igt@kms_pipe_crc_basic@suspend-read-crc-pipe-a:
    - fi-bsw-n3050:       NOTRUN -> [SKIP][7] ([fdo#109271])
   [7]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v2/fi-bsw-n3050/igt@kms_pipe_crc_basic@suspend-read-crc-pipe-a.html

  * igt@runner@aborted:
    - fi-elk-e7500:       NOTRUN -> [FAIL][8] ([fdo#109271] / [i915#4312])
   [8]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v2/fi-elk-e7500/igt@runner@aborted.html

  
#### Possible fixes ####

  * igt@core_hotunplug@unbind-rebind:
    - {bat-rpls-2}:       [DMESG-WARN][9] ([i915#4391]) -> [PASS][10]
   [9]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11614/bat-rpls-2/igt@core_hotunplug@unbind-rebind.html
   [10]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v2/bat-rpls-2/igt@core_hotunplug@unbind-rebind.html

  * igt@i915_selftest@live@execlists:
    - fi-bsw-n3050:       [INCOMPLETE][11] ([i915#2940] / [i915#5801]) -> [PASS][12]
   [11]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11614/fi-bsw-n3050/igt@i915_selftest@live@execlists.html
   [12]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v2/fi-bsw-n3050/igt@i915_selftest@live@execlists.html

  
  {name}: This element is suppressed. This means it is ignored when computing
          the status of the difference (SUCCESS, WARNING, or FAILURE).

  [fdo#109271]: https://bugs.freedesktop.org/show_bug.cgi?id=109271
  [fdo#109285]: https://bugs.freedesktop.org/show_bug.cgi?id=109285
  [fdo#111827]: https://bugs.freedesktop.org/show_bug.cgi?id=111827
  [i915#1072]: https://gitlab.freedesktop.org/drm/intel/issues/1072
  [i915#1155]: https://gitlab.freedesktop.org/drm/intel/issues/1155
  [i915#1982]: https://gitlab.freedesktop.org/drm/intel/issues/1982
  [i915#2411]: https://gitlab.freedesktop.org/drm/intel/issues/2411
  [i915#2867]: https://gitlab.freedesktop.org/drm/intel/issues/2867
  [i915#2940]: https://gitlab.freedesktop.org/drm/intel/issues/2940
  [i915#3282]: https://gitlab.freedesktop.org/drm/intel/issues/3282
  [i915#3555]: https://gitlab.freedesktop.org/drm/intel/issues/3555
  [i915#3576]: https://gitlab.freedesktop.org/drm/intel/issues/3576
  [i915#3708]: https://gitlab.freedesktop.org/drm/intel/issues/3708
  [i915#4103]: https://gitlab.freedesktop.org/drm/intel/issues/4103
  [i915#4312]: https://gitlab.freedesktop.org/drm/intel/issues/4312
  [i915#4391]: https://gitlab.freedesktop.org/drm/intel/issues/4391
  [i915#4613]: https://gitlab.freedesktop.org/drm/intel/issues/4613
  [i915#5801]: https://gitlab.freedesktop.org/drm/intel/issues/5801


Build changes
-------------

  * Linux: CI_DRM_11614 -> Patchwork_103492v2

  CI-20190529: 20190529
  CI_DRM_11614: b34f19b38e76292c5ac846fb9a8d4d0c4036dd78 @ git://anongit.freedesktop.org/gfx-ci/linux
  IGT_6467: 929abc51cdd48d673efa03e025b1f31b557972ed @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git
  Patchwork_103492v2: b34f19b38e76292c5ac846fb9a8d4d0c4036dd78 @ git://anongit.freedesktop.org/gfx-ci/linux


### Linux commits

498cd8f739f1 drm/i915: internal buffers use ttm backend
fd126670b1eb drm/i915: allow volatile buffers to use ttm pool allocator
9575dd81e1c4 drm/i915: setup ggtt scratch page after memory regions
b9c902e318c6 drm/i915: add gen6 ppgtt dummy creation function

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v2/index.html

[-- Attachment #2: Type: text/html, Size: 6091 bytes --]

^ permalink raw reply	[flat|nested] 32+ messages in thread

* [Intel-gfx] ✓ Fi.CI.BAT: success for ttm for internal (rev3)
  2022-05-03 19:13 ` [Intel-gfx] " Robert Beckett
                   ` (6 preceding siblings ...)
  (?)
@ 2022-05-10 21:26 ` Patchwork
  -1 siblings, 0 replies; 32+ messages in thread
From: Patchwork @ 2022-05-10 21:26 UTC (permalink / raw)
  To: Robert Beckett; +Cc: intel-gfx

[-- Attachment #1: Type: text/plain, Size: 6299 bytes --]

== Series Details ==

Series: ttm for internal (rev3)
URL   : https://patchwork.freedesktop.org/series/103492/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_11630 -> Patchwork_103492v3
====================================================

Summary
-------

  **SUCCESS**

  No regressions found.

  External URL: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/index.html

Participating hosts (44 -> 41)
------------------------------

  Additional (2): fi-bdw-5557u bat-dg2-9 
  Missing    (5): bat-dg1-5 fi-bsw-cyan fi-icl-u2 fi-ctg-p8600 fi-bdw-samus 

Known issues
------------

  Here are the changes found in Patchwork_103492v3 that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@i915_selftest@live@gem:
    - fi-pnv-d510:        NOTRUN -> [DMESG-FAIL][1] ([i915#4528])
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/fi-pnv-d510/igt@i915_selftest@live@gem.html

  * igt@i915_selftest@live@hangcheck:
    - fi-hsw-4770:        [PASS][2] -> [INCOMPLETE][3] ([i915#4785])
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11630/fi-hsw-4770/igt@i915_selftest@live@hangcheck.html
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/fi-hsw-4770/igt@i915_selftest@live@hangcheck.html
    - fi-bdw-5557u:       NOTRUN -> [INCOMPLETE][4] ([i915#3921])
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/fi-bdw-5557u/igt@i915_selftest@live@hangcheck.html

  * igt@kms_chamelium@dp-crc-fast:
    - fi-bdw-5557u:       NOTRUN -> [SKIP][5] ([fdo#109271] / [fdo#111827]) +7 similar issues
   [5]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/fi-bdw-5557u/igt@kms_chamelium@dp-crc-fast.html

  * igt@kms_setmode@basic-clone-single-crtc:
    - fi-bdw-5557u:       NOTRUN -> [SKIP][6] ([fdo#109271]) +14 similar issues
   [6]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/fi-bdw-5557u/igt@kms_setmode@basic-clone-single-crtc.html

  * igt@runner@aborted:
    - fi-hsw-4770:        NOTRUN -> [FAIL][7] ([fdo#109271] / [i915#4312] / [i915#5594])
   [7]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/fi-hsw-4770/igt@runner@aborted.html

  
#### Possible fixes ####

  * igt@core_hotunplug@unbind-rebind:
    - {bat-rpls-2}:       [DMESG-WARN][8] ([i915#4391]) -> [PASS][9]
   [8]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11630/bat-rpls-2/igt@core_hotunplug@unbind-rebind.html
   [9]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/bat-rpls-2/igt@core_hotunplug@unbind-rebind.html

  * igt@gem_exec_suspend@basic-s0@smem:
    - {fi-ehl-2}:         [DMESG-WARN][10] ([i915#5122]) -> [PASS][11]
   [10]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11630/fi-ehl-2/igt@gem_exec_suspend@basic-s0@smem.html
   [11]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/fi-ehl-2/igt@gem_exec_suspend@basic-s0@smem.html

  * igt@i915_selftest@live@requests:
    - fi-pnv-d510:        [DMESG-FAIL][12] ([i915#4528]) -> [PASS][13]
   [12]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11630/fi-pnv-d510/igt@i915_selftest@live@requests.html
   [13]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/fi-pnv-d510/igt@i915_selftest@live@requests.html

  
  {name}: This element is suppressed. This means it is ignored when computing
          the status of the difference (SUCCESS, WARNING, or FAILURE).

  [fdo#109271]: https://bugs.freedesktop.org/show_bug.cgi?id=109271
  [fdo#109285]: https://bugs.freedesktop.org/show_bug.cgi?id=109285
  [fdo#111827]: https://bugs.freedesktop.org/show_bug.cgi?id=111827
  [i915#1072]: https://gitlab.freedesktop.org/drm/intel/issues/1072
  [i915#1155]: https://gitlab.freedesktop.org/drm/intel/issues/1155
  [i915#3291]: https://gitlab.freedesktop.org/drm/intel/issues/3291
  [i915#3555]: https://gitlab.freedesktop.org/drm/intel/issues/3555
  [i915#3595]: https://gitlab.freedesktop.org/drm/intel/issues/3595
  [i915#3708]: https://gitlab.freedesktop.org/drm/intel/issues/3708
  [i915#3921]: https://gitlab.freedesktop.org/drm/intel/issues/3921
  [i915#4077]: https://gitlab.freedesktop.org/drm/intel/issues/4077
  [i915#4079]: https://gitlab.freedesktop.org/drm/intel/issues/4079
  [i915#4083]: https://gitlab.freedesktop.org/drm/intel/issues/4083
  [i915#4103]: https://gitlab.freedesktop.org/drm/intel/issues/4103
  [i915#4212]: https://gitlab.freedesktop.org/drm/intel/issues/4212
  [i915#4213]: https://gitlab.freedesktop.org/drm/intel/issues/4213
  [i915#4215]: https://gitlab.freedesktop.org/drm/intel/issues/4215
  [i915#4312]: https://gitlab.freedesktop.org/drm/intel/issues/4312
  [i915#4391]: https://gitlab.freedesktop.org/drm/intel/issues/4391
  [i915#4528]: https://gitlab.freedesktop.org/drm/intel/issues/4528
  [i915#4785]: https://gitlab.freedesktop.org/drm/intel/issues/4785
  [i915#4873]: https://gitlab.freedesktop.org/drm/intel/issues/4873
  [i915#5122]: https://gitlab.freedesktop.org/drm/intel/issues/5122
  [i915#5190]: https://gitlab.freedesktop.org/drm/intel/issues/5190
  [i915#5274]: https://gitlab.freedesktop.org/drm/intel/issues/5274
  [i915#5275]: https://gitlab.freedesktop.org/drm/intel/issues/5275
  [i915#5356]: https://gitlab.freedesktop.org/drm/intel/issues/5356
  [i915#5594]: https://gitlab.freedesktop.org/drm/intel/issues/5594
  [i915#5763]: https://gitlab.freedesktop.org/drm/intel/issues/5763
  [i915#5879]: https://gitlab.freedesktop.org/drm/intel/issues/5879
  [i915#5885]: https://gitlab.freedesktop.org/drm/intel/issues/5885


Build changes
-------------

  * Linux: CI_DRM_11630 -> Patchwork_103492v3

  CI-20190529: 20190529
  CI_DRM_11630: fa53ea1e866d739663dbcfab3afa4d0f5e3a12e1 @ git://anongit.freedesktop.org/gfx-ci/linux
  IGT_6471: 1d6816f1200520f936a799b7b0ef2e6f396abb16 @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git
  Patchwork_103492v3: fa53ea1e866d739663dbcfab3afa4d0f5e3a12e1 @ git://anongit.freedesktop.org/gfx-ci/linux


### Linux commits

64ce45b9180b drm/i915: internal buffers use ttm backend
26315526a6a5 drm/i915: allow volatile buffers to use ttm pool allocator
34cbe86f6b31 drm/i915: setup ggtt scratch page after memory regions
d94baaebad88 drm/i915: add gen6 ppgtt dummy creation function

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/index.html

[-- Attachment #2: Type: text/html, Size: 5851 bytes --]

^ permalink raw reply	[flat|nested] 32+ messages in thread

* [Intel-gfx] ✓ Fi.CI.IGT: success for ttm for internal (rev3)
  2022-05-03 19:13 ` [Intel-gfx] " Robert Beckett
                   ` (7 preceding siblings ...)
  (?)
@ 2022-05-11  2:28 ` Patchwork
  -1 siblings, 0 replies; 32+ messages in thread
From: Patchwork @ 2022-05-11  2:28 UTC (permalink / raw)
  To: Robert Beckett; +Cc: intel-gfx

[-- Attachment #1: Type: text/plain, Size: 39371 bytes --]

== Series Details ==

Series: ttm for internal (rev3)
URL   : https://patchwork.freedesktop.org/series/103492/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_11630_full -> Patchwork_103492v3_full
====================================================

Summary
-------

  **SUCCESS**

  No regressions found.

  

Participating hosts (13 -> 12)
------------------------------

  Missing    (1): shard-dg1 

Known issues
------------

  Here are the changes found in Patchwork_103492v3_full that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@gem_eio@unwedge-stress:
    - shard-tglb:         [PASS][1] -> [TIMEOUT][2] ([i915#3063])
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11630/shard-tglb6/igt@gem_eio@unwedge-stress.html
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/shard-tglb6/igt@gem_eio@unwedge-stress.html

  * igt@gem_exec_fair@basic-deadline:
    - shard-glk:          [PASS][3] -> [FAIL][4] ([i915#2846])
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11630/shard-glk3/igt@gem_exec_fair@basic-deadline.html
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/shard-glk9/igt@gem_exec_fair@basic-deadline.html

  * igt@gem_exec_fair@basic-none-rrul@rcs0:
    - shard-kbl:          [PASS][5] -> [FAIL][6] ([i915#2842])
   [5]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11630/shard-kbl4/igt@gem_exec_fair@basic-none-rrul@rcs0.html
   [6]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/shard-kbl7/igt@gem_exec_fair@basic-none-rrul@rcs0.html

  * igt@gem_exec_fair@basic-none-share@rcs0:
    - shard-tglb:         [PASS][7] -> [FAIL][8] ([i915#2842]) +1 similar issue
   [7]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11630/shard-tglb2/igt@gem_exec_fair@basic-none-share@rcs0.html
   [8]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/shard-tglb7/igt@gem_exec_fair@basic-none-share@rcs0.html

  * igt@gem_exec_fair@basic-none@vcs0:
    - shard-apl:          NOTRUN -> [FAIL][9] ([i915#2842])
   [9]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/shard-apl3/igt@gem_exec_fair@basic-none@vcs0.html

  * igt@gem_exec_fair@basic-pace-share@rcs0:
    - shard-glk:          [PASS][10] -> [FAIL][11] ([i915#2842]) +1 similar issue
   [10]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11630/shard-glk6/igt@gem_exec_fair@basic-pace-share@rcs0.html
   [11]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/shard-glk4/igt@gem_exec_fair@basic-pace-share@rcs0.html

  * igt@gem_exec_fair@basic-pace@vcs1:
    - shard-iclb:         NOTRUN -> [FAIL][12] ([i915#2842])
   [12]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/shard-iclb1/igt@gem_exec_fair@basic-pace@vcs1.html

  * igt@gem_exec_flush@basic-uc-set-default:
    - shard-snb:          [PASS][13] -> [SKIP][14] ([fdo#109271]) +2 similar issues
   [13]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11630/shard-snb2/igt@gem_exec_flush@basic-uc-set-default.html
   [14]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/shard-snb6/igt@gem_exec_flush@basic-uc-set-default.html

  * igt@gem_exec_params@no-vebox:
    - shard-iclb:         NOTRUN -> [SKIP][15] ([fdo#109283])
   [15]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/shard-iclb5/igt@gem_exec_params@no-vebox.html

  * igt@gem_exec_whisper@basic-fds-priority:
    - shard-skl:          [PASS][16] -> [DMESG-WARN][17] ([i915#1982]) +1 similar issue
   [16]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11630/shard-skl9/igt@gem_exec_whisper@basic-fds-priority.html
   [17]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/shard-skl8/igt@gem_exec_whisper@basic-fds-priority.html

  * igt@gem_huc_copy@huc-copy:
    - shard-iclb:         NOTRUN -> [SKIP][18] ([i915#2190])
   [18]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/shard-iclb5/igt@gem_huc_copy@huc-copy.html

  * igt@gem_lmem_swapping@heavy-verify-random:
    - shard-kbl:          NOTRUN -> [SKIP][19] ([fdo#109271] / [i915#4613])
   [19]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/shard-kbl4/igt@gem_lmem_swapping@heavy-verify-random.html

  * igt@gem_lmem_swapping@parallel-random-engines:
    - shard-apl:          NOTRUN -> [SKIP][20] ([fdo#109271] / [i915#4613])
   [20]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/shard-apl3/igt@gem_lmem_swapping@parallel-random-engines.html

  * igt@gem_pxp@reject-modify-context-protection-off-3:
    - shard-iclb:         NOTRUN -> [SKIP][21] ([i915#4270])
   [21]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/shard-iclb5/igt@gem_pxp@reject-modify-context-protection-off-3.html

  * igt@gem_render_copy@y-tiled-mc-ccs-to-yf-tiled-ccs:
    - shard-iclb:         NOTRUN -> [SKIP][22] ([i915#768])
   [22]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/shard-iclb5/igt@gem_render_copy@y-tiled-mc-ccs-to-yf-tiled-ccs.html

  * igt@gen3_render_linear_blits:
    - shard-iclb:         NOTRUN -> [SKIP][23] ([fdo#109289]) +1 similar issue
   [23]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/shard-iclb5/igt@gen3_render_linear_blits.html

  * igt@gen9_exec_parse@cmd-crossing-page:
    - shard-iclb:         NOTRUN -> [SKIP][24] ([i915#2856])
   [24]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/shard-iclb5/igt@gen9_exec_parse@cmd-crossing-page.html

  * igt@i915_pm_dc@dc3co-vpb-simulation:
    - shard-iclb:         NOTRUN -> [SKIP][25] ([i915#658])
   [25]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/shard-iclb3/igt@i915_pm_dc@dc3co-vpb-simulation.html

  * igt@i915_pm_rpm@modeset-non-lpsp:
    - shard-iclb:         NOTRUN -> [SKIP][26] ([fdo#110892]) +1 similar issue
   [26]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/shard-iclb3/igt@i915_pm_rpm@modeset-non-lpsp.html

  * igt@i915_pm_sseu@full-enable:
    - shard-iclb:         NOTRUN -> [SKIP][27] ([i915#4387])
   [27]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/shard-iclb3/igt@i915_pm_sseu@full-enable.html

  * igt@kms_atomic_transition@plane-all-modeset-transition-fencing:
    - shard-iclb:         NOTRUN -> [SKIP][28] ([i915#1769]) +1 similar issue
   [28]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/shard-iclb5/igt@kms_atomic_transition@plane-all-modeset-transition-fencing.html

  * igt@kms_big_fb@4-tiled-8bpp-rotate-90:
    - shard-iclb:         NOTRUN -> [SKIP][29] ([i915#5286]) +1 similar issue
   [29]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/shard-iclb5/igt@kms_big_fb@4-tiled-8bpp-rotate-90.html

  * igt@kms_big_fb@y-tiled-64bpp-rotate-90:
    - shard-iclb:         NOTRUN -> [SKIP][30] ([fdo#110725] / [fdo#111614]) +1 similar issue
   [30]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/shard-iclb5/igt@kms_big_fb@y-tiled-64bpp-rotate-90.html

  * igt@kms_big_fb@yf-tiled-max-hw-stride-64bpp-rotate-0-async-flip:
    - shard-iclb:         NOTRUN -> [SKIP][31] ([fdo#110723]) +2 similar issues
   [31]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/shard-iclb3/igt@kms_big_fb@yf-tiled-max-hw-stride-64bpp-rotate-0-async-flip.html

  * igt@kms_big_joiner@invalid-modeset:
    - shard-iclb:         NOTRUN -> [SKIP][32] ([i915#2705])
   [32]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/shard-iclb5/igt@kms_big_joiner@invalid-modeset.html

  * igt@kms_ccs@pipe-a-bad-pixel-format-y_tiled_gen12_mc_ccs:
    - shard-apl:          NOTRUN -> [SKIP][33] ([fdo#109271] / [i915#3886])
   [33]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/shard-apl8/igt@kms_ccs@pipe-a-bad-pixel-format-y_tiled_gen12_mc_ccs.html

  * igt@kms_ccs@pipe-b-missing-ccs-buffer-y_tiled_gen12_mc_ccs:
    - shard-iclb:         NOTRUN -> [SKIP][34] ([fdo#109278] / [i915#3886])
   [34]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/shard-iclb5/igt@kms_ccs@pipe-b-missing-ccs-buffer-y_tiled_gen12_mc_ccs.html

  * igt@kms_ccs@pipe-c-bad-rotation-90-y_tiled_gen12_mc_ccs:
    - shard-kbl:          NOTRUN -> [SKIP][35] ([fdo#109271] / [i915#3886]) +3 similar issues
   [35]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/shard-kbl4/igt@kms_ccs@pipe-c-bad-rotation-90-y_tiled_gen12_mc_ccs.html

  * igt@kms_chamelium@dp-hpd-storm-disable:
    - shard-apl:          NOTRUN -> [SKIP][36] ([fdo#109271] / [fdo#111827]) +4 similar issues
   [36]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/shard-apl3/igt@kms_chamelium@dp-hpd-storm-disable.html

  * igt@kms_chamelium@vga-hpd-after-suspend:
    - shard-iclb:         NOTRUN -> [SKIP][37] ([fdo#109284] / [fdo#111827]) +3 similar issues
   [37]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/shard-iclb3/igt@kms_chamelium@vga-hpd-after-suspend.html

  * igt@kms_color_chamelium@pipe-c-ctm-max:
    - shard-kbl:          NOTRUN -> [SKIP][38] ([fdo#109271] / [fdo#111827]) +5 similar issues
   [38]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/shard-kbl4/igt@kms_color_chamelium@pipe-c-ctm-max.html

  * igt@kms_content_protection@legacy:
    - shard-kbl:          NOTRUN -> [TIMEOUT][39] ([i915#1319]) +1 similar issue
   [39]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/shard-kbl7/igt@kms_content_protection@legacy.html

  * igt@kms_content_protection@uevent:
    - shard-iclb:         NOTRUN -> [SKIP][40] ([fdo#109300] / [fdo#111066])
   [40]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/shard-iclb5/igt@kms_content_protection@uevent.html

  * igt@kms_cursor_crc@pipe-a-cursor-suspend:
    - shard-kbl:          [PASS][41] -> [DMESG-WARN][42] ([i915#180]) +1 similar issue
   [41]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11630/shard-kbl3/igt@kms_cursor_crc@pipe-a-cursor-suspend.html
   [42]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/shard-kbl1/igt@kms_cursor_crc@pipe-a-cursor-suspend.html

  * igt@kms_cursor_crc@pipe-b-cursor-32x10-random:
    - shard-kbl:          NOTRUN -> [SKIP][43] ([fdo#109271]) +57 similar issues
   [43]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/shard-kbl4/igt@kms_cursor_crc@pipe-b-cursor-32x10-random.html

  * igt@kms_cursor_crc@pipe-b-cursor-512x512-sliding:
    - shard-iclb:         NOTRUN -> [SKIP][44] ([fdo#109278] / [fdo#109279])
   [44]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/shard-iclb5/igt@kms_cursor_crc@pipe-b-cursor-512x512-sliding.html

  * igt@kms_cursor_legacy@flip-vs-cursor-legacy:
    - shard-skl:          [PASS][45] -> [FAIL][46] ([i915#2346])
   [45]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11630/shard-skl1/igt@kms_cursor_legacy@flip-vs-cursor-legacy.html
   [46]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/shard-skl2/igt@kms_cursor_legacy@flip-vs-cursor-legacy.html

  * igt@kms_cursor_legacy@flip-vs-cursor-varying-size:
    - shard-iclb:         [PASS][47] -> [FAIL][48] ([i915#2346])
   [47]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11630/shard-iclb2/igt@kms_cursor_legacy@flip-vs-cursor-varying-size.html
   [48]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/shard-iclb7/igt@kms_cursor_legacy@flip-vs-cursor-varying-size.html

  * igt@kms_cursor_legacy@pipe-d-single-move:
    - shard-iclb:         NOTRUN -> [SKIP][49] ([fdo#109278]) +19 similar issues
   [49]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/shard-iclb5/igt@kms_cursor_legacy@pipe-d-single-move.html

  * igt@kms_draw_crc@draw-method-xrgb2101010-mmap-cpu-4tiled:
    - shard-iclb:         NOTRUN -> [SKIP][50] ([i915#5287])
   [50]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/shard-iclb3/igt@kms_draw_crc@draw-method-xrgb2101010-mmap-cpu-4tiled.html

  * igt@kms_flip@2x-plain-flip-fb-recreate-interruptible:
    - shard-iclb:         NOTRUN -> [SKIP][51] ([fdo#109274]) +1 similar issue
   [51]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/shard-iclb5/igt@kms_flip@2x-plain-flip-fb-recreate-interruptible.html

  * igt@kms_flip@flip-vs-suspend-interruptible@c-dp1:
    - shard-apl:          [PASS][52] -> [DMESG-WARN][53] ([i915#180]) +3 similar issues
   [52]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11630/shard-apl8/igt@kms_flip@flip-vs-suspend-interruptible@c-dp1.html
   [53]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/shard-apl6/igt@kms_flip@flip-vs-suspend-interruptible@c-dp1.html

  * igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytilercccs-downscaling:
    - shard-iclb:         NOTRUN -> [SKIP][54] ([i915#2587])
   [54]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/shard-iclb3/igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytilercccs-downscaling.html

  * igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-cur-indfb-draw-blt:
    - shard-iclb:         NOTRUN -> [SKIP][55] ([fdo#109280]) +9 similar issues
   [55]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/shard-iclb5/igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-cur-indfb-draw-blt.html

  * igt@kms_hdr@bpc-switch-suspend@bpc-switch-suspend-edp-1-pipe-a:
    - shard-skl:          [PASS][56] -> [FAIL][57] ([i915#1188])
   [56]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11630/shard-skl1/igt@kms_hdr@bpc-switch-suspend@bpc-switch-suspend-edp-1-pipe-a.html
   [57]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/shard-skl2/igt@kms_hdr@bpc-switch-suspend@bpc-switch-suspend-edp-1-pipe-a.html

  * igt@kms_pipe_crc_basic@compare-crc-sanitycheck-pipe-d:
    - shard-apl:          NOTRUN -> [SKIP][58] ([fdo#109271] / [i915#533])
   [58]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/shard-apl8/igt@kms_pipe_crc_basic@compare-crc-sanitycheck-pipe-d.html

  * igt@kms_plane_alpha_blend@pipe-a-constant-alpha-min:
    - shard-skl:          [PASS][59] -> [FAIL][60] ([fdo#108145] / [i915#265]) +1 similar issue
   [59]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11630/shard-skl7/igt@kms_plane_alpha_blend@pipe-a-constant-alpha-min.html
   [60]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/shard-skl7/igt@kms_plane_alpha_blend@pipe-a-constant-alpha-min.html

  * igt@kms_plane_alpha_blend@pipe-c-alpha-transparent-fb:
    - shard-apl:          NOTRUN -> [FAIL][61] ([i915#265])
   [61]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/shard-apl8/igt@kms_plane_alpha_blend@pipe-c-alpha-transparent-fb.html

  * igt@kms_plane_alpha_blend@pipe-c-constant-alpha-max:
    - shard-kbl:          NOTRUN -> [FAIL][62] ([fdo#108145] / [i915#265])
   [62]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/shard-kbl7/igt@kms_plane_alpha_blend@pipe-c-constant-alpha-max.html

  * igt@kms_plane_cursor@pipe-d-overlay-size-64:
    - shard-apl:          NOTRUN -> [SKIP][63] ([fdo#109271]) +49 similar issues
   [63]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/shard-apl3/igt@kms_plane_cursor@pipe-d-overlay-size-64.html

  * igt@kms_plane_lowres@pipe-b-tiling-4:
    - shard-iclb:         NOTRUN -> [SKIP][64] ([i915#5288])
   [64]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/shard-iclb5/igt@kms_plane_lowres@pipe-b-tiling-4.html

  * igt@kms_plane_lowres@pipe-b-tiling-none:
    - shard-iclb:         NOTRUN -> [SKIP][65] ([i915#3536])
   [65]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/shard-iclb3/igt@kms_plane_lowres@pipe-b-tiling-none.html

  * igt@kms_plane_scaling@downscale-with-pixel-format-factor-0-5@pipe-c-edp-1-downscale-with-pixel-format:
    - shard-iclb:         [PASS][66] -> [SKIP][67] ([i915#5176]) +2 similar issues
   [66]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11630/shard-iclb7/igt@kms_plane_scaling@downscale-with-pixel-format-factor-0-5@pipe-c-edp-1-downscale-with-pixel-format.html
   [67]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/shard-iclb2/igt@kms_plane_scaling@downscale-with-pixel-format-factor-0-5@pipe-c-edp-1-downscale-with-pixel-format.html

  * igt@kms_plane_scaling@planes-upscale-factor-0-25-downscale-factor-0-5@pipe-c-edp-1-planes-upscale-downscale:
    - shard-iclb:         [PASS][68] -> [SKIP][69] ([i915#5235]) +2 similar issues
   [68]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11630/shard-iclb6/igt@kms_plane_scaling@planes-upscale-factor-0-25-downscale-factor-0-5@pipe-c-edp-1-planes-upscale-downscale.html
   [69]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/shard-iclb2/igt@kms_plane_scaling@planes-upscale-factor-0-25-downscale-factor-0-5@pipe-c-edp-1-planes-upscale-downscale.html

  * igt@kms_psr2_sf@overlay-plane-move-continuous-sf:
    - shard-kbl:          NOTRUN -> [SKIP][70] ([fdo#109271] / [i915#658])
   [70]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/shard-kbl4/igt@kms_psr2_sf@overlay-plane-move-continuous-sf.html

  * igt@kms_psr2_sf@overlay-plane-update-sf-dmg-area:
    - shard-iclb:         NOTRUN -> [SKIP][71] ([fdo#111068] / [i915#658])
   [71]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/shard-iclb5/igt@kms_psr2_sf@overlay-plane-update-sf-dmg-area.html

  * igt@kms_psr2_su@frontbuffer-xrgb8888:
    - shard-iclb:         [PASS][72] -> [SKIP][73] ([fdo#109642] / [fdo#111068] / [i915#658])
   [72]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11630/shard-iclb2/igt@kms_psr2_su@frontbuffer-xrgb8888.html
   [73]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/shard-iclb1/igt@kms_psr2_su@frontbuffer-xrgb8888.html

  * igt@kms_psr@psr2_cursor_plane_move:
    - shard-iclb:         [PASS][74] -> [SKIP][75] ([fdo#109441]) +3 similar issues
   [74]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11630/shard-iclb2/igt@kms_psr@psr2_cursor_plane_move.html
   [75]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/shard-iclb7/igt@kms_psr@psr2_cursor_plane_move.html

  * igt@kms_psr@psr2_primary_mmap_gtt:
    - shard-iclb:         NOTRUN -> [SKIP][76] ([fdo#109441])
   [76]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/shard-iclb5/igt@kms_psr@psr2_primary_mmap_gtt.html

  * igt@kms_psr_stress_test@flip-primary-invalidate-overlay:
    - shard-tglb:         [PASS][77] -> [SKIP][78] ([i915#5519]) +1 similar issue
   [77]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11630/shard-tglb3/igt@kms_psr_stress_test@flip-primary-invalidate-overlay.html
   [78]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/shard-tglb8/igt@kms_psr_stress_test@flip-primary-invalidate-overlay.html

  * igt@kms_setmode@basic-clone-single-crtc:
    - shard-iclb:         NOTRUN -> [SKIP][79] ([i915#3555])
   [79]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/shard-iclb3/igt@kms_setmode@basic-clone-single-crtc.html

  * igt@kms_vblank@pipe-c-ts-continuation-suspend:
    - shard-skl:          [PASS][80] -> [INCOMPLETE][81] ([i915#4939])
   [80]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11630/shard-skl1/igt@kms_vblank@pipe-c-ts-continuation-suspend.html
   [81]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/shard-skl2/igt@kms_vblank@pipe-c-ts-continuation-suspend.html

  * igt@nouveau_crc@pipe-b-source-outp-inactive:
    - shard-iclb:         NOTRUN -> [SKIP][82] ([i915#2530])
   [82]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/shard-iclb5/igt@nouveau_crc@pipe-b-source-outp-inactive.html

  * igt@perf@polling-small-buf:
    - shard-skl:          [PASS][83] -> [FAIL][84] ([i915#1722])
   [83]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11630/shard-skl2/igt@perf@polling-small-buf.html
   [84]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/shard-skl9/igt@perf@polling-small-buf.html

  * igt@prime_nv_pcopy@test3_1:
    - shard-iclb:         NOTRUN -> [SKIP][85] ([fdo#109291]) +1 similar issue
   [85]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/shard-iclb3/igt@prime_nv_pcopy@test3_1.html

  * igt@prime_vgem@fence-write-hang:
    - shard-iclb:         NOTRUN -> [SKIP][86] ([fdo#109295])
   [86]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/shard-iclb3/igt@prime_vgem@fence-write-hang.html

  * igt@sysfs_clients@recycle:
    - shard-iclb:         NOTRUN -> [SKIP][87] ([i915#2994])
   [87]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/shard-iclb5/igt@sysfs_clients@recycle.html

  * igt@sysfs_heartbeat_interval@mixed@bcs0:
    - shard-skl:          [PASS][88] -> [WARN][89] ([i915#4055])
   [88]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11630/shard-skl4/igt@sysfs_heartbeat_interval@mixed@bcs0.html
   [89]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/shard-skl2/igt@sysfs_heartbeat_interval@mixed@bcs0.html

  * igt@sysfs_heartbeat_interval@mixed@vcs0:
    - shard-skl:          [PASS][90] -> [FAIL][91] ([i915#1731])
   [90]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11630/shard-skl4/igt@sysfs_heartbeat_interval@mixed@vcs0.html
   [91]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/shard-skl2/igt@sysfs_heartbeat_interval@mixed@vcs0.html

  * igt@tools_test@sysfs_l3_parity:
    - shard-iclb:         NOTRUN -> [SKIP][92] ([fdo#109307])
   [92]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/shard-iclb5/igt@tools_test@sysfs_l3_parity.html

  
#### Possible fixes ####

  * igt@gem_eio@in-flight-contexts-10ms:
    - shard-tglb:         [TIMEOUT][93] ([i915#3063]) -> [PASS][94]
   [93]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11630/shard-tglb3/igt@gem_eio@in-flight-contexts-10ms.html
   [94]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/shard-tglb8/igt@gem_eio@in-flight-contexts-10ms.html

  * igt@gem_exec_fair@basic-pace-solo@rcs0:
    - shard-kbl:          [FAIL][95] ([i915#2842]) -> [PASS][96]
   [95]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11630/shard-kbl3/igt@gem_exec_fair@basic-pace-solo@rcs0.html
   [96]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/shard-kbl1/igt@gem_exec_fair@basic-pace-solo@rcs0.html

  * igt@gem_exec_flush@basic-uc-rw-default:
    - shard-snb:          [SKIP][97] ([fdo#109271]) -> [PASS][98] +1 similar issue
   [97]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11630/shard-snb6/igt@gem_exec_flush@basic-uc-rw-default.html
   [98]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/shard-snb2/igt@gem_exec_flush@basic-uc-rw-default.html

  * igt@gem_exec_whisper@basic-contexts-forked-all:
    - shard-glk:          [DMESG-WARN][99] ([i915#118]) -> [PASS][100]
   [99]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11630/shard-glk9/igt@gem_exec_whisper@basic-contexts-forked-all.html
   [100]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/shard-glk6/igt@gem_exec_whisper@basic-contexts-forked-all.html

  * igt@i915_suspend@debugfs-reader:
    - shard-apl:          [DMESG-WARN][101] ([i915#180]) -> [PASS][102] +1 similar issue
   [101]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11630/shard-apl4/igt@i915_suspend@debugfs-reader.html
   [102]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/shard-apl3/igt@i915_suspend@debugfs-reader.html

  * igt@kms_cursor_legacy@flip-vs-cursor-atomic-transitions:
    - shard-iclb:         [FAIL][103] ([i915#2346]) -> [PASS][104]
   [103]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11630/shard-iclb7/igt@kms_cursor_legacy@flip-vs-cursor-atomic-transitions.html
   [104]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/shard-iclb2/igt@kms_cursor_legacy@flip-vs-cursor-atomic-transitions.html

  * igt@kms_cursor_legacy@pipe-c-torture-move:
    - {shard-rkl}:        [SKIP][105] ([i915#4070]) -> [PASS][106]
   [105]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11630/shard-rkl-2/igt@kms_cursor_legacy@pipe-c-torture-move.html
   [106]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/shard-rkl-5/igt@kms_cursor_legacy@pipe-c-torture-move.html

  * igt@kms_flip@2x-flip-vs-expired-vblank-interruptible@ab-hdmi-a1-hdmi-a2:
    - shard-glk:          [FAIL][107] ([i915#79]) -> [PASS][108]
   [107]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11630/shard-glk8/igt@kms_flip@2x-flip-vs-expired-vblank-interruptible@ab-hdmi-a1-hdmi-a2.html
   [108]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/shard-glk1/igt@kms_flip@2x-flip-vs-expired-vblank-interruptible@ab-hdmi-a1-hdmi-a2.html

  * igt@kms_flip@2x-plain-flip-ts-check-interruptible@ac-hdmi-a1-hdmi-a2:
    - shard-glk:          [FAIL][109] ([i915#2122]) -> [PASS][110]
   [109]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11630/shard-glk9/igt@kms_flip@2x-plain-flip-ts-check-interruptible@ac-hdmi-a1-hdmi-a2.html
   [110]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/shard-glk6/igt@kms_flip@2x-plain-flip-ts-check-interruptible@ac-hdmi-a1-hdmi-a2.html

  * igt@kms_flip@flip-vs-suspend-interruptible@a-dp1:
    - shard-kbl:          [DMESG-WARN][111] ([i915#180]) -> [PASS][112] +5 similar issues
   [111]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11630/shard-kbl6/igt@kms_flip@flip-vs-suspend-interruptible@a-dp1.html
   [112]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/shard-kbl4/igt@kms_flip@flip-vs-suspend-interruptible@a-dp1.html

  * igt@kms_flip@plain-flip-ts-check@b-edp1:
    - shard-skl:          [FAIL][113] ([i915#2122]) -> [PASS][114]
   [113]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11630/shard-skl6/igt@kms_flip@plain-flip-ts-check@b-edp1.html
   [114]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/shard-skl4/igt@kms_flip@plain-flip-ts-check@b-edp1.html

  * igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-cur-indfb-draw-mmap-wc:
    - {shard-rkl}:        [SKIP][115] ([i915#1849] / [i915#4098]) -> [PASS][116]
   [115]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11630/shard-rkl-5/igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-cur-indfb-draw-mmap-wc.html
   [116]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/shard-rkl-6/igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-cur-indfb-draw-mmap-wc.html

  * igt@kms_psr2_su@page_flip-xrgb8888:
    - shard-iclb:         [SKIP][117] ([fdo#109642] / [fdo#111068] / [i915#658]) -> [PASS][118]
   [117]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11630/shard-iclb7/igt@kms_psr2_su@page_flip-xrgb8888.html
   [118]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/shard-iclb2/igt@kms_psr2_su@page_flip-xrgb8888.html

  * igt@kms_psr@psr2_primary_mmap_cpu:
    - shard-iclb:         [SKIP][119] ([fdo#109441]) -> [PASS][120]
   [119]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11630/shard-iclb3/igt@kms_psr@psr2_primary_mmap_cpu.html
   [120]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/shard-iclb2/igt@kms_psr@psr2_primary_mmap_cpu.html

  * igt@kms_psr_stress_test@invalidate-primary-flip-overlay:
    - shard-iclb:         [SKIP][121] ([i915#5519]) -> [PASS][122]
   [121]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11630/shard-iclb6/igt@kms_psr_stress_test@invalidate-primary-flip-overlay.html
   [122]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/shard-iclb1/igt@kms_psr_stress_test@invalidate-primary-flip-overlay.html

  * igt@perf@polling-parameterized:
    - shard-skl:          [FAIL][123] ([i915#5639]) -> [PASS][124]
   [123]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11630/shard-skl8/igt@perf@polling-parameterized.html
   [124]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/shard-skl9/igt@perf@polling-parameterized.html

  * igt@perf@short-reads:
    - shard-skl:          [FAIL][125] ([i915#51]) -> [PASS][126]
   [125]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11630/shard-skl9/igt@perf@short-reads.html
   [126]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/shard-skl6/igt@perf@short-reads.html

  
#### Warnings ####

  * igt@gem_exec_balancer@parallel-keep-submit-fence:
    - shard-iclb:         [DMESG-WARN][127] ([i915#5614]) -> [SKIP][128] ([i915#4525])
   [127]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11630/shard-iclb1/igt@gem_exec_balancer@parallel-keep-submit-fence.html
   [128]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/shard-iclb3/igt@gem_exec_balancer@parallel-keep-submit-fence.html

  * igt@gem_exec_balancer@parallel-ordering:
    - shard-iclb:         [DMESG-FAIL][129] ([i915#5614]) -> [SKIP][130] ([i915#4525])
   [129]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11630/shard-iclb4/igt@gem_exec_balancer@parallel-ordering.html
   [130]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/shard-iclb5/igt@gem_exec_balancer@parallel-ordering.html

  * igt@gem_exec_balancer@parallel-out-fence:
    - shard-iclb:         [SKIP][131] ([i915#4525]) -> [DMESG-WARN][132] ([i915#5614])
   [131]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11630/shard-iclb6/igt@gem_exec_balancer@parallel-out-fence.html
   [132]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/shard-iclb1/igt@gem_exec_balancer@parallel-out-fence.html

  * igt@gem_exec_fair@basic-throttle@rcs0:
    - shard-iclb:         [FAIL][133] ([i915#2842]) -> [FAIL][134] ([i915#2849])
   [133]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11630/shard-iclb2/igt@gem_exec_fair@basic-throttle@rcs0.html
   [134]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/shard-iclb7/igt@gem_exec_fair@basic-throttle@rcs0.html

  * igt@kms_psr2_sf@overlay-plane-move-continuous-sf:
    - shard-iclb:         [SKIP][135] ([i915#658]) -> [SKIP][136] ([i915#2920])
   [135]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11630/shard-iclb3/igt@kms_psr2_sf@overlay-plane-move-continuous-sf.html
   [136]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/shard-iclb2/igt@kms_psr2_sf@overlay-plane-move-continuous-sf.html

  * igt@runner@aborted:
    - shard-apl:          ([FAIL][137], [FAIL][138], [FAIL][139], [FAIL][140], [FAIL][141], [FAIL][142], [FAIL][143], [FAIL][144]) ([fdo#109271] / [i915#180] / [i915#3002] / [i915#4312] / [i915#5257]) -> ([FAIL][145], [FAIL][146], [FAIL][147], [FAIL][148], [FAIL][149], [FAIL][150], [FAIL][151]) ([i915#180] / [i915#3002] / [i915#4312] / [i915#5257])
   [137]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11630/shard-apl4/igt@runner@aborted.html
   [138]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11630/shard-apl8/igt@runner@aborted.html
   [139]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11630/shard-apl8/igt@runner@aborted.html
   [140]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11630/shard-apl8/igt@runner@aborted.html
   [141]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11630/shard-apl1/igt@runner@aborted.html
   [142]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11630/shard-apl3/igt@runner@aborted.html
   [143]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11630/shard-apl8/igt@runner@aborted.html
   [144]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_11630/shard-apl6/igt@runner@aborted.html
   [145]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/shard-apl4/igt@runner@aborted.html
   [146]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/shard-apl8/igt@runner@aborted.html
   [147]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/shard-apl4/igt@runner@aborted.html
   [148]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/shard-apl8/igt@runner@aborted.html
   [149]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/shard-apl2/igt@runner@aborted.html
   [150]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/shard-apl6/igt@runner@aborted.html
   [151]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/shard-apl6/igt@runner@aborted.html

  
  {name}: This element is suppressed. This means it is ignored when computing
          the status of the difference (SUCCESS, WARNING, or FAILURE).

  [fdo#103375]: https://bugs.freedesktop.org/show_bug.cgi?id=103375
  [fdo#108145]: https://bugs.freedesktop.org/show_bug.cgi?id=108145
  [fdo#109271]: https://bugs.freedesktop.org/show_bug.cgi?id=109271
  [fdo#109274]: https://bugs.freedesktop.org/show_bug.cgi?id=109274
  [fdo#109278]: https://bugs.freedesktop.org/show_bug.cgi?id=109278
  [fdo#109279]: https://bugs.freedesktop.org/show_bug.cgi?id=109279
  [fdo#109280]: https://bugs.freedesktop.org/show_bug.cgi?id=109280
  [fdo#109283]: https://bugs.freedesktop.org/show_bug.cgi?id=109283
  [fdo#109284]: https://bugs.freedesktop.org/show_bug.cgi?id=109284
  [fdo#109285]: https://bugs.freedesktop.org/show_bug.cgi?id=109285
  [fdo#109289]: https://bugs.freedesktop.org/show_bug.cgi?id=109289
  [fdo#109291]: https://bugs.freedesktop.org/show_bug.cgi?id=109291
  [fdo#109295]: https://bugs.freedesktop.org/show_bug.cgi?id=109295
  [fdo#109300]: https://bugs.freedesktop.org/show_bug.cgi?id=109300
  [fdo#109307]: https://bugs.freedesktop.org/show_bug.cgi?id=109307
  [fdo#109308]: https://bugs.freedesktop.org/show_bug.cgi?id=109308
  [fdo#109441]: https://bugs.freedesktop.org/show_bug.cgi?id=109441
  [fdo#109642]: https://bugs.freedesktop.org/show_bug.cgi?id=109642
  [fdo#110189]: https://bugs.freedesktop.org/show_bug.cgi?id=110189
  [fdo#110723]: https://bugs.freedesktop.org/show_bug.cgi?id=110723
  [fdo#110725]: https://bugs.freedesktop.org/show_bug.cgi?id=110725
  [fdo#110892]: https://bugs.freedesktop.org/show_bug.cgi?id=110892
  [fdo#111066]: https://bugs.freedesktop.org/show_bug.cgi?id=111066
  [fdo#111068]: https://bugs.freedesktop.org/show_bug.cgi?id=111068
  [fdo#111314]: https://bugs.freedesktop.org/show_bug.cgi?id=111314
  [fdo#111614]: https://bugs.freedesktop.org/show_bug.cgi?id=111614
  [fdo#111615]: https://bugs.freedesktop.org/show_bug.cgi?id=111615
  [fdo#111825]: https://bugs.freedesktop.org/show_bug.cgi?id=111825
  [fdo#111827]: https://bugs.freedesktop.org/show_bug.cgi?id=111827
  [fdo#112022]: https://bugs.freedesktop.org/show_bug.cgi?id=112022
  [i915#1072]: https://gitlab.freedesktop.org/drm/intel/issues/1072
  [i915#1149]: https://gitlab.freedesktop.org/drm/intel/issues/1149
  [i915#118]: https://gitlab.freedesktop.org/drm/intel/issues/118
  [i915#1188]: https://gitlab.freedesktop.org/drm/intel/issues/1188
  [i915#1319]: https://gitlab.freedesktop.org/drm/intel/issues/1319
  [i915#132]: https://gitlab.freedesktop.org/drm/intel/issues/132
  [i915#1722]: https://gitlab.freedesktop.org/drm/intel/issues/1722
  [i915#1731]: https://gitlab.freedesktop.org/drm/intel/issues/1731
  [i915#1769]: https://gitlab.freedesktop.org/drm/intel/issues/1769
  [i915#180]: https://gitlab.freedesktop.org/drm/intel/issues/180
  [i915#1825]: https://gitlab.freedesktop.org/drm/intel/issues/1825
  [i915#1845]: https://gitlab.freedesktop.org/drm/intel/issues/1845
  [i915#1849]: https://gitlab.freedesktop.org/drm/intel/issues/1849
  [i915#1911]: https://gitlab.freedesktop.org/drm/intel/issues/1911
  [i915#1982]: https://gitlab.freedesktop.org/drm/intel/issues/1982
  [i915#2122]: https://gitlab.freedesktop.org/drm/intel/issues/2122
  [i915#2190]: https://gitlab.freedesktop.org/drm/intel/issues/2190
  [i915#2232]: https://gitlab.freedesktop.org/drm/intel/issues/2232
  [i915#2346]: https://gitlab.freedesktop.org/drm/intel/issues/2346
  [i915#2410]: https://gitlab.freedesktop.org/drm/intel/issues/2410
  [i915#2530]: https://gitlab.freedesktop.org/drm/intel/issues/2530
  [i915#2582]: https://gitlab.freedesktop.org/drm/intel/issues/2582
  [i915#2587]: https://gitlab.freedesktop.org/drm/intel/issues/2587
  [i915#265]: https://gitlab.freedesktop.org/drm/intel/issues/265
  [i915#2705]: https://gitlab.freedesktop.org/drm/intel/issues/2705
  [i915#2842]: https://gitlab.freedesktop.org/drm/intel/issues/2842
  [i915#2846]: https://gitlab.freedesktop.org/drm/intel/issues/2846
  [i915#2849]: https://gitlab.freedesktop.org/drm/intel/issues/2849
  [i915#2856]: https://gitlab.freedesktop.org/drm/intel/issues/2856
  [i915#2920]: https://gitlab.freedesktop.org/drm/intel/issues/2920
  [i915#2994]: https://gitlab.freedesktop.org/drm/intel/issues/2994
  [i915#3002]: https://gitlab.freedesktop.org/drm/intel/issues/3002
  [i915#3012]: https://gitlab.freedesktop.org/drm/intel/issues/3012
  [i915#3063]: https://gitlab.freedesktop.org/drm/intel/issues/3063
  [i915#3319]: https://gitlab.freedesktop.org/drm/intel/issues/3319
  [i915#3359]: https://gitlab.freedesktop.org/drm/intel/issues/3359
  [i915#3536]: https://gitlab.freedesktop.org/drm/intel/issues/3536
  [i915#3555]: https://gitlab.freedesktop.org/drm/intel/issues/3555
  [i915#3558]: https://gitlab.freedesktop.org/drm/intel/issues/3558
  [i915#3637]: https://gitlab.freedesktop.org/drm/intel/issues/3637
  [i915#3638]: https://gitlab.freedesktop.org/drm/intel/issues/3638
  [i915#3639]: https://gitlab.freedesktop.org/drm/intel/issues/3639
  [i915#3701]: https://gitlab.freedesktop.org/drm/intel/issues/3701
  [i915#3734]: https://gitlab.freedesktop.org/drm/intel/issues/3734
  [i915#3886]: https://gitlab.freedesktop.org/drm/intel/issues/3886
  [i915#3955]: https://gitlab.freedesktop.org/drm/intel/issues/3955
  [i915#4055]: https://gitlab.freedesktop.org/drm/intel/issues/4055
  [i915#4070]: https://gitlab.freedesktop.org/drm/intel/issues/4070
  [i915#4098]: https://gitlab.freedesktop.org/drm/intel/issues/4098
  [i915#4270]: https://gitlab.freedesktop.org/drm/intel/issues/4270
  [i915#4278]: https://gitlab.freedesktop.org/drm/intel/issues/4278
  [i915#4312]: https://gitlab.freedesktop.org/drm/intel/issues/4312
  [i915#4369]: https://gitlab.freedesktop.org/drm/intel/issues/4369
  [i915#4387]: https://gitlab.freedesktop.org/drm/intel/issues/4387
  [i915#4525]: https://gitlab.freedesktop.org/drm/intel/issues/4525
  [i915#4613]: https://gitlab.freedesktop.org/drm/intel/issues/4613
  [i915#4939]: https://gitlab.freedesktop.org/drm/intel/issues/4939
  [i915#5080]: https://gitlab.freedesktop.org/drm/intel/issues/5080
  [i915#51]: https://gitlab.freedesktop.org/drm/intel/issues/51
  [i915#5176]: https://gitlab.freedesktop.org/drm/intel/issues/5176
  [i915#5235]: https://gitlab.freedesktop.org/drm/intel/issues/5235
  [i915#5257]: https://gitlab.freedesktop.org/drm/intel/issues/5257
  [i915#5286]: https://gitlab.freedesktop.org/drm/intel/issues/5286
  [i915#5287]: https://gitlab.freedesktop.org/drm/intel/issues/5287
  [i915#5288]: https://gitlab.freedesktop.org/drm/intel/issues/5288
  [i915#5289]: https://gitlab.freedesktop.org/drm/intel/issues/5289
  [i915#533]: https://gitlab.freedesktop.org/drm/intel/issues/533
  [i915#5519]: https://gitlab.freedesktop.org/drm/intel/issues/5519
  [i915#5614]: https://gitlab.freedesktop.org/drm/intel/issues/5614
  [i915#5639]: https://gitlab.freedesktop.org/drm/intel/issues/5639
  [i915#5691]: https://gitlab.freedesktop.org/drm/intel/issues/5691
  [i915#658]: https://gitlab.freedesktop.org/drm/intel/issues/658
  [i915#768]: https://gitlab.freedesktop.org/drm/intel/issues/768
  [i915#79]: https://gitlab.freedesktop.org/drm/intel/issues/79


Build changes
-------------

  * Linux: CI_DRM_11630 -> Patchwork_103492v3

  CI-20190529: 20190529
  CI_DRM_11630: fa53ea1e866d739663dbcfab3afa4d0f5e3a12e1 @ git://anongit.freedesktop.org/gfx-ci/linux
  IGT_6471: 1d6816f1200520f936a799b7b0ef2e6f396abb16 @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git
  Patchwork_103492v3: fa53ea1e866d739663dbcfab3afa4d0f5e3a12e1 @ git://anongit.freedesktop.org/gfx-ci/linux
  piglit_4509: fdc5a4ca11124ab8413c7988896eec4c97336694 @ git://anongit.freedesktop.org/piglit

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_103492v3/index.html

[-- Attachment #2: Type: text/html, Size: 44263 bytes --]

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH 1/4] drm/i915: add gen6 ppgtt dummy creation function
  2022-05-03 19:13   ` Robert Beckett
@ 2022-05-11 10:13     ` Thomas Hellström
  -1 siblings, 0 replies; 32+ messages in thread
From: Thomas Hellström @ 2022-05-11 10:13 UTC (permalink / raw)
  To: Robert Beckett, dri-devel, intel-gfx, Jani Nikula,
	Joonas Lahtinen, Rodrigo Vivi, Tvrtko Ursulin, David Airlie,
	Daniel Vetter
  Cc: Matthew Auld, linux-kernel

Hi,

On Tue, 2022-05-03 at 19:13 +0000, Robert Beckett wrote:
> Internal gem objects will soon just be volatile system memory region
> objects.
> To enable this, create a separate dummy object creation function
> for gen6 ppgtt


It's not clear from the commit message why we need a special case for
this. Could you describe it in more detail?

Thanks,
Thomas


> 
> Signed-off-by: Robert Beckett <bob.beckett@collabora.com>
> ---
>  drivers/gpu/drm/i915/gt/gen6_ppgtt.c | 43
> ++++++++++++++++++++++++++--
>  1 file changed, 40 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/gt/gen6_ppgtt.c
> b/drivers/gpu/drm/i915/gt/gen6_ppgtt.c
> index 1bb766c79dcb..f3b660cfeb7f 100644
> --- a/drivers/gpu/drm/i915/gt/gen6_ppgtt.c
> +++ b/drivers/gpu/drm/i915/gt/gen6_ppgtt.c
> @@ -372,6 +372,45 @@ static const struct drm_i915_gem_object_ops
> pd_dummy_obj_ops = {
>         .put_pages = pd_dummy_obj_put_pages,
>  };
>  
> +static struct drm_i915_gem_object *
> +i915_gem_object_create_dummy(struct drm_i915_private *i915,
> phys_addr_t size)
> +{
> +       static struct lock_class_key lock_class;
> +       struct drm_i915_gem_object *obj;
> +       unsigned int cache_level;
> +
> +       GEM_BUG_ON(!size);
> +       GEM_BUG_ON(!IS_ALIGNED(size, PAGE_SIZE));
> +
> +       if (overflows_type(size, obj->base.size))
> +               return ERR_PTR(-E2BIG);
> +
> +       obj = i915_gem_object_alloc();
> +       if (!obj)
> +               return ERR_PTR(-ENOMEM);
> +
> +       drm_gem_private_object_init(&i915->drm, &obj->base, size);
> +       i915_gem_object_init(obj, &pd_dummy_obj_ops, &lock_class, 0);
> +       obj->mem_flags |= I915_BO_FLAG_STRUCT_PAGE;
> +
> +       /*
> +        * Mark the object as volatile, such that the pages are
> marked as
> +        * dontneed whilst they are still pinned. As soon as they are
> unpinned
> +        * they are allowed to be reaped by the shrinker, and the
> caller is
> +        * expected to repopulate - the contents of this object are
> only valid
> +        * whilst active and pinned.
> +        */
> +       i915_gem_object_set_volatile(obj);
> +
> +       obj->read_domains = I915_GEM_DOMAIN_CPU;
> +       obj->write_domain = I915_GEM_DOMAIN_CPU;
> +
> +       cache_level = HAS_LLC(i915) ? I915_CACHE_LLC :
> I915_CACHE_NONE;
> +       i915_gem_object_set_cache_coherency(obj, cache_level);
> +
> +       return obj;
> +}
> +
>  static struct i915_page_directory *
>  gen6_alloc_top_pd(struct gen6_ppgtt *ppgtt)
>  {
> @@ -383,9 +422,7 @@ gen6_alloc_top_pd(struct gen6_ppgtt *ppgtt)
>         if (unlikely(!pd))
>                 return ERR_PTR(-ENOMEM);
>  
> -       pd->pt.base = __i915_gem_object_create_internal(ppgtt-
> >base.vm.gt->i915,
> -
>                                                        &pd_dummy_obj_o
> ps,
> -                                                       I915_PDES *
> SZ_4K);
> +       pd->pt.base = i915_gem_object_create_dummy(ppgtt->base.vm.gt-
> >i915, I915_PDES * SZ_4K);
>         if (IS_ERR(pd->pt.base)) {
>                 err = PTR_ERR(pd->pt.base);
>                 pd->pt.base = NULL;



^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [Intel-gfx] [PATCH 1/4] drm/i915: add gen6 ppgtt dummy creation function
@ 2022-05-11 10:13     ` Thomas Hellström
  0 siblings, 0 replies; 32+ messages in thread
From: Thomas Hellström @ 2022-05-11 10:13 UTC (permalink / raw)
  To: Robert Beckett, dri-devel, intel-gfx, Jani Nikula,
	Joonas Lahtinen, Rodrigo Vivi, Tvrtko Ursulin, David Airlie,
	Daniel Vetter
  Cc: Matthew Auld, linux-kernel

Hi,

On Tue, 2022-05-03 at 19:13 +0000, Robert Beckett wrote:
> Internal gem objects will soon just be volatile system memory region
> objects.
> To enable this, create a separate dummy object creation function
> for gen6 ppgtt


It's not clear from the commit message why we need a special case for
this. Could you describe it in more detail?

Thanks,
Thomas


> 
> Signed-off-by: Robert Beckett <bob.beckett@collabora.com>
> ---
>  drivers/gpu/drm/i915/gt/gen6_ppgtt.c | 43
> ++++++++++++++++++++++++++--
>  1 file changed, 40 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/gt/gen6_ppgtt.c
> b/drivers/gpu/drm/i915/gt/gen6_ppgtt.c
> index 1bb766c79dcb..f3b660cfeb7f 100644
> --- a/drivers/gpu/drm/i915/gt/gen6_ppgtt.c
> +++ b/drivers/gpu/drm/i915/gt/gen6_ppgtt.c
> @@ -372,6 +372,45 @@ static const struct drm_i915_gem_object_ops
> pd_dummy_obj_ops = {
>         .put_pages = pd_dummy_obj_put_pages,
>  };
>  
> +static struct drm_i915_gem_object *
> +i915_gem_object_create_dummy(struct drm_i915_private *i915,
> phys_addr_t size)
> +{
> +       static struct lock_class_key lock_class;
> +       struct drm_i915_gem_object *obj;
> +       unsigned int cache_level;
> +
> +       GEM_BUG_ON(!size);
> +       GEM_BUG_ON(!IS_ALIGNED(size, PAGE_SIZE));
> +
> +       if (overflows_type(size, obj->base.size))
> +               return ERR_PTR(-E2BIG);
> +
> +       obj = i915_gem_object_alloc();
> +       if (!obj)
> +               return ERR_PTR(-ENOMEM);
> +
> +       drm_gem_private_object_init(&i915->drm, &obj->base, size);
> +       i915_gem_object_init(obj, &pd_dummy_obj_ops, &lock_class, 0);
> +       obj->mem_flags |= I915_BO_FLAG_STRUCT_PAGE;
> +
> +       /*
> +        * Mark the object as volatile, such that the pages are
> marked as
> +        * dontneed whilst they are still pinned. As soon as they are
> unpinned
> +        * they are allowed to be reaped by the shrinker, and the
> caller is
> +        * expected to repopulate - the contents of this object are
> only valid
> +        * whilst active and pinned.
> +        */
> +       i915_gem_object_set_volatile(obj);
> +
> +       obj->read_domains = I915_GEM_DOMAIN_CPU;
> +       obj->write_domain = I915_GEM_DOMAIN_CPU;
> +
> +       cache_level = HAS_LLC(i915) ? I915_CACHE_LLC :
> I915_CACHE_NONE;
> +       i915_gem_object_set_cache_coherency(obj, cache_level);
> +
> +       return obj;
> +}
> +
>  static struct i915_page_directory *
>  gen6_alloc_top_pd(struct gen6_ppgtt *ppgtt)
>  {
> @@ -383,9 +422,7 @@ gen6_alloc_top_pd(struct gen6_ppgtt *ppgtt)
>         if (unlikely(!pd))
>                 return ERR_PTR(-ENOMEM);
>  
> -       pd->pt.base = __i915_gem_object_create_internal(ppgtt-
> >base.vm.gt->i915,
> -
>                                                        &pd_dummy_obj_o
> ps,
> -                                                       I915_PDES *
> SZ_4K);
> +       pd->pt.base = i915_gem_object_create_dummy(ppgtt->base.vm.gt-
> >i915, I915_PDES * SZ_4K);
>         if (IS_ERR(pd->pt.base)) {
>                 err = PTR_ERR(pd->pt.base);
>                 pd->pt.base = NULL;



^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH 2/4] drm/i915: setup ggtt scratch page after memory regions
  2022-05-03 19:13   ` Robert Beckett
@ 2022-05-11 11:24     ` Thomas Hellström
  -1 siblings, 0 replies; 32+ messages in thread
From: Thomas Hellström @ 2022-05-11 11:24 UTC (permalink / raw)
  To: Robert Beckett, dri-devel, intel-gfx, Jani Nikula,
	Joonas Lahtinen, Rodrigo Vivi, Tvrtko Ursulin, David Airlie,
	Daniel Vetter
  Cc: Matthew Auld, linux-kernel

On Tue, 2022-05-03 at 19:13 +0000, Robert Beckett wrote:
> reorder scratch page allocation so that memory regions are available

Nit: s/reorder/Reorder/

Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>

> to allocate the buffers
> 
> Signed-off-by: Robert Beckett <bob.beckett@collabora.com>
> ---
>  drivers/gpu/drm/i915/gt/intel_gt_gmch.c | 20 ++++++++++++++++++--
>  drivers/gpu/drm/i915/gt/intel_gt_gmch.h |  6 ++++++
>  drivers/gpu/drm/i915/i915_driver.c      | 16 ++++++++++------
>  3 files changed, 34 insertions(+), 8 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/gt/intel_gt_gmch.c
> b/drivers/gpu/drm/i915/gt/intel_gt_gmch.c
> index 18e488672d1b..5411df1734ac 100644
> --- a/drivers/gpu/drm/i915/gt/intel_gt_gmch.c
> +++ b/drivers/gpu/drm/i915/gt/intel_gt_gmch.c
> @@ -440,8 +440,6 @@ static int ggtt_probe_common(struct i915_ggtt
> *ggtt, u64 size)
>         struct drm_i915_private *i915 = ggtt->vm.i915;
>         struct pci_dev *pdev = to_pci_dev(i915->drm.dev);
>         phys_addr_t phys_addr;
> -       u32 pte_flags;
> -       int ret;
>  
>         GEM_WARN_ON(pci_resource_len(pdev, 0) !=
> gen6_gttmmadr_size(i915));
>         phys_addr = pci_resource_start(pdev, 0) +
> gen6_gttadr_offset(i915);
> @@ -463,6 +461,24 @@ static int ggtt_probe_common(struct i915_ggtt
> *ggtt, u64 size)
>         }
>  
>         kref_init(&ggtt->vm.resv_ref);
> +
> +       return 0;
> +}
> +
> +/**
> + * i915_ggtt_setup_scratch_page - setup ggtt scratch page
> + * @i915: i915 device
> + */
> +int i915_ggtt_setup_scratch_page(struct drm_i915_private *i915)
> +{
> +       struct i915_ggtt *ggtt = to_gt(i915)->ggtt;
> +       u32 pte_flags;
> +       int ret;
> +
> +       /* gen5- scratch setup currently happens in @intel_gtt_init
> */
> +       if (GRAPHICS_VER(i915) <= 5)
> +               return 0;
> +
>         ret = setup_scratch_page(&ggtt->vm);
>         if (ret) {
>                 drm_err(&i915->drm, "Scratch setup failed\n");
> diff --git a/drivers/gpu/drm/i915/gt/intel_gt_gmch.h
> b/drivers/gpu/drm/i915/gt/intel_gt_gmch.h
> index 75ed55c1f30a..c6b79cb78637 100644
> --- a/drivers/gpu/drm/i915/gt/intel_gt_gmch.h
> +++ b/drivers/gpu/drm/i915/gt/intel_gt_gmch.h
> @@ -15,6 +15,7 @@ int intel_gt_gmch_gen6_probe(struct i915_ggtt
> *ggtt);
>  int intel_gt_gmch_gen8_probe(struct i915_ggtt *ggtt);
>  int intel_gt_gmch_gen5_probe(struct i915_ggtt *ggtt);
>  int intel_gt_gmch_gen5_enable_hw(struct drm_i915_private *i915);
> +int i915_ggtt_setup_scratch_page(struct drm_i915_private *i915);
>  
>  /* Stubs for non-x86 platforms */
>  #else
> @@ -41,6 +42,11 @@ static inline int
> intel_gt_gmch_gen5_enable_hw(struct drm_i915_private *i915)
>         /* No HW should be enabled for this case yet, return fail */
>         return -ENODEV;
>  }
> +
> +static inline int i915_ggtt_setup_scratch_page(struct
> drm_i915_private *i915)
> +{
> +       return 0;
> +}
>  #endif
>  
>  #endif /* __INTEL_GT_GMCH_H__ */
> diff --git a/drivers/gpu/drm/i915/i915_driver.c
> b/drivers/gpu/drm/i915/i915_driver.c
> index 90b0ce5051af..f67476b2f349 100644
> --- a/drivers/gpu/drm/i915/i915_driver.c
> +++ b/drivers/gpu/drm/i915/i915_driver.c
> @@ -69,6 +69,7 @@
>  #include "gem/i915_gem_mman.h"
>  #include "gem/i915_gem_pm.h"
>  #include "gt/intel_gt.h"
> +#include "gt/intel_gt_gmch.h"
>  #include "gt/intel_gt_pm.h"
>  #include "gt/intel_rc6.h"
>  
> @@ -589,12 +590,16 @@ static int i915_driver_hw_probe(struct
> drm_i915_private *dev_priv)
>  
>         ret = intel_gt_tiles_init(dev_priv);
>         if (ret)
> -               goto err_mem_regions;
> +               goto err_ggtt;
> +
> +       ret = i915_ggtt_setup_scratch_page(dev_priv);
> +       if (ret)
> +               goto err_ggtt;
>  
>         ret = i915_ggtt_enable_hw(dev_priv);
>         if (ret) {
>                 drm_err(&dev_priv->drm, "failed to enable GGTT\n");
> -               goto err_mem_regions;
> +               goto err_ggtt;
>         }
>  
>         pci_set_master(pdev);
> @@ -646,11 +651,10 @@ static int i915_driver_hw_probe(struct
> drm_i915_private *dev_priv)
>  err_msi:
>         if (pdev->msi_enabled)
>                 pci_disable_msi(pdev);
> -err_mem_regions:
> -       intel_memory_regions_driver_release(dev_priv);
>  err_ggtt:
>         i915_ggtt_driver_release(dev_priv);
>         i915_gem_drain_freed_objects(dev_priv);
> +       intel_memory_regions_driver_release(dev_priv);
>         i915_ggtt_driver_late_release(dev_priv);
>  err_perf:
>         i915_perf_fini(dev_priv);
> @@ -896,9 +900,9 @@ int i915_driver_probe(struct pci_dev *pdev, const
> struct pci_device_id *ent)
>         intel_modeset_driver_remove_nogem(i915);
>  out_cleanup_hw:
>         i915_driver_hw_remove(i915);
> -       intel_memory_regions_driver_release(i915);
>         i915_ggtt_driver_release(i915);
>         i915_gem_drain_freed_objects(i915);
> +       intel_memory_regions_driver_release(i915);
>         i915_ggtt_driver_late_release(i915);
>  out_cleanup_mmio:
>         i915_driver_mmio_release(i915);
> @@ -955,9 +959,9 @@ static void i915_driver_release(struct drm_device
> *dev)
>  
>         i915_gem_driver_release(dev_priv);
>  
> -       intel_memory_regions_driver_release(dev_priv);
>         i915_ggtt_driver_release(dev_priv);
>         i915_gem_drain_freed_objects(dev_priv);
> +       intel_memory_regions_driver_release(dev_priv);
>         i915_ggtt_driver_late_release(dev_priv);
>  
>         i915_driver_mmio_release(dev_priv);



^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [Intel-gfx] [PATCH 2/4] drm/i915: setup ggtt scratch page after memory regions
@ 2022-05-11 11:24     ` Thomas Hellström
  0 siblings, 0 replies; 32+ messages in thread
From: Thomas Hellström @ 2022-05-11 11:24 UTC (permalink / raw)
  To: Robert Beckett, dri-devel, intel-gfx, Jani Nikula,
	Joonas Lahtinen, Rodrigo Vivi, Tvrtko Ursulin, David Airlie,
	Daniel Vetter
  Cc: Matthew Auld, linux-kernel

On Tue, 2022-05-03 at 19:13 +0000, Robert Beckett wrote:
> reorder scratch page allocation so that memory regions are available

Nit: s/reorder/Reorder/

Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>

> to allocate the buffers
> 
> Signed-off-by: Robert Beckett <bob.beckett@collabora.com>
> ---
>  drivers/gpu/drm/i915/gt/intel_gt_gmch.c | 20 ++++++++++++++++++--
>  drivers/gpu/drm/i915/gt/intel_gt_gmch.h |  6 ++++++
>  drivers/gpu/drm/i915/i915_driver.c      | 16 ++++++++++------
>  3 files changed, 34 insertions(+), 8 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/gt/intel_gt_gmch.c
> b/drivers/gpu/drm/i915/gt/intel_gt_gmch.c
> index 18e488672d1b..5411df1734ac 100644
> --- a/drivers/gpu/drm/i915/gt/intel_gt_gmch.c
> +++ b/drivers/gpu/drm/i915/gt/intel_gt_gmch.c
> @@ -440,8 +440,6 @@ static int ggtt_probe_common(struct i915_ggtt
> *ggtt, u64 size)
>         struct drm_i915_private *i915 = ggtt->vm.i915;
>         struct pci_dev *pdev = to_pci_dev(i915->drm.dev);
>         phys_addr_t phys_addr;
> -       u32 pte_flags;
> -       int ret;
>  
>         GEM_WARN_ON(pci_resource_len(pdev, 0) !=
> gen6_gttmmadr_size(i915));
>         phys_addr = pci_resource_start(pdev, 0) +
> gen6_gttadr_offset(i915);
> @@ -463,6 +461,24 @@ static int ggtt_probe_common(struct i915_ggtt
> *ggtt, u64 size)
>         }
>  
>         kref_init(&ggtt->vm.resv_ref);
> +
> +       return 0;
> +}
> +
> +/**
> + * i915_ggtt_setup_scratch_page - setup ggtt scratch page
> + * @i915: i915 device
> + */
> +int i915_ggtt_setup_scratch_page(struct drm_i915_private *i915)
> +{
> +       struct i915_ggtt *ggtt = to_gt(i915)->ggtt;
> +       u32 pte_flags;
> +       int ret;
> +
> +       /* gen5- scratch setup currently happens in @intel_gtt_init
> */
> +       if (GRAPHICS_VER(i915) <= 5)
> +               return 0;
> +
>         ret = setup_scratch_page(&ggtt->vm);
>         if (ret) {
>                 drm_err(&i915->drm, "Scratch setup failed\n");
> diff --git a/drivers/gpu/drm/i915/gt/intel_gt_gmch.h
> b/drivers/gpu/drm/i915/gt/intel_gt_gmch.h
> index 75ed55c1f30a..c6b79cb78637 100644
> --- a/drivers/gpu/drm/i915/gt/intel_gt_gmch.h
> +++ b/drivers/gpu/drm/i915/gt/intel_gt_gmch.h
> @@ -15,6 +15,7 @@ int intel_gt_gmch_gen6_probe(struct i915_ggtt
> *ggtt);
>  int intel_gt_gmch_gen8_probe(struct i915_ggtt *ggtt);
>  int intel_gt_gmch_gen5_probe(struct i915_ggtt *ggtt);
>  int intel_gt_gmch_gen5_enable_hw(struct drm_i915_private *i915);
> +int i915_ggtt_setup_scratch_page(struct drm_i915_private *i915);
>  
>  /* Stubs for non-x86 platforms */
>  #else
> @@ -41,6 +42,11 @@ static inline int
> intel_gt_gmch_gen5_enable_hw(struct drm_i915_private *i915)
>         /* No HW should be enabled for this case yet, return fail */
>         return -ENODEV;
>  }
> +
> +static inline int i915_ggtt_setup_scratch_page(struct
> drm_i915_private *i915)
> +{
> +       return 0;
> +}
>  #endif
>  
>  #endif /* __INTEL_GT_GMCH_H__ */
> diff --git a/drivers/gpu/drm/i915/i915_driver.c
> b/drivers/gpu/drm/i915/i915_driver.c
> index 90b0ce5051af..f67476b2f349 100644
> --- a/drivers/gpu/drm/i915/i915_driver.c
> +++ b/drivers/gpu/drm/i915/i915_driver.c
> @@ -69,6 +69,7 @@
>  #include "gem/i915_gem_mman.h"
>  #include "gem/i915_gem_pm.h"
>  #include "gt/intel_gt.h"
> +#include "gt/intel_gt_gmch.h"
>  #include "gt/intel_gt_pm.h"
>  #include "gt/intel_rc6.h"
>  
> @@ -589,12 +590,16 @@ static int i915_driver_hw_probe(struct
> drm_i915_private *dev_priv)
>  
>         ret = intel_gt_tiles_init(dev_priv);
>         if (ret)
> -               goto err_mem_regions;
> +               goto err_ggtt;
> +
> +       ret = i915_ggtt_setup_scratch_page(dev_priv);
> +       if (ret)
> +               goto err_ggtt;
>  
>         ret = i915_ggtt_enable_hw(dev_priv);
>         if (ret) {
>                 drm_err(&dev_priv->drm, "failed to enable GGTT\n");
> -               goto err_mem_regions;
> +               goto err_ggtt;
>         }
>  
>         pci_set_master(pdev);
> @@ -646,11 +651,10 @@ static int i915_driver_hw_probe(struct
> drm_i915_private *dev_priv)
>  err_msi:
>         if (pdev->msi_enabled)
>                 pci_disable_msi(pdev);
> -err_mem_regions:
> -       intel_memory_regions_driver_release(dev_priv);
>  err_ggtt:
>         i915_ggtt_driver_release(dev_priv);
>         i915_gem_drain_freed_objects(dev_priv);
> +       intel_memory_regions_driver_release(dev_priv);
>         i915_ggtt_driver_late_release(dev_priv);
>  err_perf:
>         i915_perf_fini(dev_priv);
> @@ -896,9 +900,9 @@ int i915_driver_probe(struct pci_dev *pdev, const
> struct pci_device_id *ent)
>         intel_modeset_driver_remove_nogem(i915);
>  out_cleanup_hw:
>         i915_driver_hw_remove(i915);
> -       intel_memory_regions_driver_release(i915);
>         i915_ggtt_driver_release(i915);
>         i915_gem_drain_freed_objects(i915);
> +       intel_memory_regions_driver_release(i915);
>         i915_ggtt_driver_late_release(i915);
>  out_cleanup_mmio:
>         i915_driver_mmio_release(i915);
> @@ -955,9 +959,9 @@ static void i915_driver_release(struct drm_device
> *dev)
>  
>         i915_gem_driver_release(dev_priv);
>  
> -       intel_memory_regions_driver_release(dev_priv);
>         i915_ggtt_driver_release(dev_priv);
>         i915_gem_drain_freed_objects(dev_priv);
> +       intel_memory_regions_driver_release(dev_priv);
>         i915_ggtt_driver_late_release(dev_priv);
>  
>         i915_driver_mmio_release(dev_priv);



^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH 3/4] drm/i915: allow volatile buffers to use ttm pool allocator
  2022-05-03 19:13   ` Robert Beckett
@ 2022-05-11 12:42     ` Thomas Hellström
  -1 siblings, 0 replies; 32+ messages in thread
From: Thomas Hellström @ 2022-05-11 12:42 UTC (permalink / raw)
  To: Robert Beckett, dri-devel, intel-gfx, Jani Nikula,
	Joonas Lahtinen, Rodrigo Vivi, Tvrtko Ursulin, David Airlie,
	Daniel Vetter
  Cc: Matthew Auld, linux-kernel

Hi, Bob,

On Tue, 2022-05-03 at 19:13 +0000, Robert Beckett wrote:
> internal buffers should be shmem backed.
> if a volatile buffer is requested, allow ttm to use the pool
> allocator
> to provide volatile pages as backing
> 
> Signed-off-by: Robert Beckett <bob.beckett@collabora.com>
> ---
>  drivers/gpu/drm/i915/gem/i915_gem_ttm.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
> b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
> index 4c25d9b2f138..fdb3a1c18cb6 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
> @@ -309,7 +309,8 @@ static struct ttm_tt *i915_ttm_tt_create(struct
> ttm_buffer_object *bo,
>                 page_flags |= TTM_TT_FLAG_ZERO_ALLOC;
>  
>         caching = i915_ttm_select_tt_caching(obj);
> -       if (i915_gem_object_is_shrinkable(obj) && caching ==
> ttm_cached) {
> +       if (i915_gem_object_is_shrinkable(obj) && caching ==
> ttm_cached &&
> +           !i915_gem_object_is_volatile(obj)) {
>                 page_flags |= TTM_TT_FLAG_EXTERNAL |
>                               TTM_TT_FLAG_EXTERNAL_MAPPABLE;
>                 i915_tt->is_shmem = true;

While this is ok, I think it also needs adjustment in the i915_ttm
shrink callback. If someone creates a volatile smem object which then
hits the shrinker, I think we might hit asserts that it's an is_shmem
ttm?

In this case, the shrink callback should just i915_ttm_purge().
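
To illustrate, a rough sketch of the kind of adjustment I mean (the
i915_ttm_shrink()/i915_ttm_purge() helpers and the is_shmem field are
the existing ones in i915_gem_ttm.c as far as I recall; treat this as a
sketch of the idea, not a tested change):

static int i915_ttm_shrink(struct drm_i915_gem_object *obj, unsigned int flags)
{
	struct ttm_buffer_object *bo = i915_gem_to_ttm(obj);
	struct i915_ttm_tt *i915_tt =
		container_of(bo->ttm, typeof(*i915_tt), ttm);

	if (!bo->ttm || bo->resource->mem_type != TTM_PL_SYSTEM)
		return 0;

	/*
	 * Pool-allocated volatile buffers have no shmem backing to write
	 * out, so rather than asserting is_shmem, just drop their pages.
	 */
	if (!i915_tt->is_shmem) {
		GEM_BUG_ON(!i915_gem_object_is_volatile(obj));
		i915_ttm_purge(obj);
		return 0;
	}

	/* ... the existing shmem writeback / swap-out path stays as-is ... */
	return 0;
}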

/Thomas



^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [Intel-gfx] [PATCH 3/4] drm/i915: allow volatile buffers to use ttm pool allocator
@ 2022-05-11 12:42     ` Thomas Hellström
  0 siblings, 0 replies; 32+ messages in thread
From: Thomas Hellström @ 2022-05-11 12:42 UTC (permalink / raw)
  To: Robert Beckett, dri-devel, intel-gfx, Jani Nikula,
	Joonas Lahtinen, Rodrigo Vivi, Tvrtko Ursulin, David Airlie,
	Daniel Vetter
  Cc: Matthew Auld, linux-kernel

Hi, Bob,

On Tue, 2022-05-03 at 19:13 +0000, Robert Beckett wrote:
> internal buffers should be shmem backed.
> if a volatile buffer is requested, allow ttm to use the pool
> allocator
> to provide volatile pages as backing
> 
> Signed-off-by: Robert Beckett <bob.beckett@collabora.com>
> ---
>  drivers/gpu/drm/i915/gem/i915_gem_ttm.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
> b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
> index 4c25d9b2f138..fdb3a1c18cb6 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
> @@ -309,7 +309,8 @@ static struct ttm_tt *i915_ttm_tt_create(struct
> ttm_buffer_object *bo,
>                 page_flags |= TTM_TT_FLAG_ZERO_ALLOC;
>  
>         caching = i915_ttm_select_tt_caching(obj);
> -       if (i915_gem_object_is_shrinkable(obj) && caching ==
> ttm_cached) {
> +       if (i915_gem_object_is_shrinkable(obj) && caching ==
> ttm_cached &&
> +           !i915_gem_object_is_volatile(obj)) {
>                 page_flags |= TTM_TT_FLAG_EXTERNAL |
>                               TTM_TT_FLAG_EXTERNAL_MAPPABLE;
>                 i915_tt->is_shmem = true;

While this is ok, I think it also needs adjustment in the i915_ttm
shrink callback. If someone creates a volatile smem object which then
hits the shrinker, I think we might hit asserts that it's an is_shmem
ttm?

In this case, the shrink callback should just i915_ttm_purge().

/Thomas



^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH 4/4] drm/i915: internal buffers use ttm backend
  2022-05-03 19:13   ` Robert Beckett
@ 2022-05-11 14:14     ` Thomas Hellström
  -1 siblings, 0 replies; 32+ messages in thread
From: Thomas Hellström @ 2022-05-11 14:14 UTC (permalink / raw)
  To: Robert Beckett, dri-devel, intel-gfx, Jani Nikula,
	Joonas Lahtinen, Rodrigo Vivi, Tvrtko Ursulin, David Airlie,
	Daniel Vetter
  Cc: Matthew Auld, linux-kernel

On Tue, 2022-05-03 at 19:13 +0000, Robert Beckett wrote:
> refactor internal buffer backend to allocate volatile pages via
> ttm pool allocator
> 
> Signed-off-by: Robert Beckett <bob.beckett@collabora.com>
> ---
>  drivers/gpu/drm/i915/gem/i915_gem_internal.c | 264 ++++++++---------
> --
>  drivers/gpu/drm/i915/gem/i915_gem_internal.h |   5 -
>  drivers/gpu/drm/i915/gem/i915_gem_ttm.c      |  12 +-
>  drivers/gpu/drm/i915/gem/i915_gem_ttm.h      |  12 +-
>  4 files changed, 125 insertions(+), 168 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_internal.c
> b/drivers/gpu/drm/i915/gem/i915_gem_internal.c
> index c698f95af15f..815ec9466cc0 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_internal.c
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_internal.c
> @@ -4,156 +4,119 @@
>   * Copyright © 2014-2016 Intel Corporation
>   */
>  
> -#include <linux/scatterlist.h>
> -#include <linux/slab.h>
> -#include <linux/swiotlb.h>
> -
> +#include <drm/ttm/ttm_bo_driver.h>
> +#include <drm/ttm/ttm_placement.h>
> +#include "drm/ttm/ttm_bo_api.h"
> +#include "gem/i915_gem_internal.h"
> +#include "gem/i915_gem_region.h"
> +#include "gem/i915_gem_ttm.h"
>  #include "i915_drv.h"
> -#include "i915_gem.h"
> -#include "i915_gem_internal.h"
> -#include "i915_gem_object.h"
> -#include "i915_scatterlist.h"
> -#include "i915_utils.h"
> -
> -#define QUIET (__GFP_NORETRY | __GFP_NOWARN)
> -#define MAYFAIL (__GFP_RETRY_MAYFAIL | __GFP_NOWARN)
> -
> -static void internal_free_pages(struct sg_table *st)
> -{
> -       struct scatterlist *sg;
> -
> -       for (sg = st->sgl; sg; sg = __sg_next(sg)) {
> -               if (sg_page(sg))
> -                       __free_pages(sg_page(sg), get_order(sg-
> >length));
> -       }
> -
> -       sg_free_table(st);
> -       kfree(st);
> -}
>  
> -static int i915_gem_object_get_pages_internal(struct
> drm_i915_gem_object *obj)
> +static int i915_internal_get_pages(struct drm_i915_gem_object *obj)
>  {
> -       struct drm_i915_private *i915 = to_i915(obj->base.dev);
> -       struct sg_table *st;
> -       struct scatterlist *sg;
> -       unsigned int sg_page_sizes;
> -       unsigned int npages;
> -       int max_order;
> -       gfp_t gfp;
> -
> -       max_order = MAX_ORDER;
> -#ifdef CONFIG_SWIOTLB
> -       if (is_swiotlb_active(obj->base.dev->dev)) {
> -               unsigned int max_segment;
> -
> -               max_segment = swiotlb_max_segment();
> -               if (max_segment) {
> -                       max_segment = max_t(unsigned int,
> max_segment,
> -                                           PAGE_SIZE) >> PAGE_SHIFT;
> -                       max_order = min(max_order,
> ilog2(max_segment));
> -               }
> +       struct ttm_buffer_object *bo = i915_gem_to_ttm(obj);
> +       struct ttm_operation_ctx ctx = {
> +               .interruptible = true,
> +               .no_wait_gpu = false,
> +       };
> +       struct ttm_place place = {
> +               .fpfn = 0,
> +               .lpfn = 0,
> +               .mem_type = I915_PL_SYSTEM,
> +               .flags = 0,
> +       };
> +       struct ttm_placement placement = {
> +               .num_placement = 1,
> +               .placement = &place,
> +               .num_busy_placement = 0,
> +               .busy_placement = NULL,
> +       };
> +       int ret;
> +
> +       ret = ttm_bo_validate(bo, &placement, &ctx);
> +       if (ret) {
> +               ret = i915_ttm_err_to_gem(ret);
> +               return ret;
>         }
> -#endif
>  
> -       gfp = GFP_KERNEL | __GFP_HIGHMEM | __GFP_RECLAIMABLE;
> -       if (IS_I965GM(i915) || IS_I965G(i915)) {
> -               /* 965gm cannot relocate objects above 4GiB. */
> -               gfp &= ~__GFP_HIGHMEM;
> -               gfp |= __GFP_DMA32;


It looks like we're losing this restriction?

There is a flag to ttm_device_init() to make TTM only do __GFP_DMA32
allocations.
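
As a rough sketch (the call site below follows
intel_region_ttm_device_init() from memory, so take the exact arguments
as illustrative only), the last parameter of ttm_device_init() is that
use_dma32 flag:

int intel_region_ttm_device_init(struct drm_i915_private *dev_priv)
{
	struct drm_device *drm = &dev_priv->drm;

	/*
	 * The final use_dma32 argument restricts TTM pool allocations to
	 * GFP_DMA32, which would preserve the 965g/965gm "no pages above
	 * 4 GiB" rule for pool-backed objects on those platforms.
	 */
	return ttm_device_init(&dev_priv->bdev, i915_ttm_driver(),
			       drm->dev, drm->anon_inode->i_mapping,
			       drm->vma_offset_manager,
			       false, /* use_dma_alloc */
			       IS_I965GM(dev_priv) || IS_I965G(dev_priv));
}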

> +       if (bo->ttm && !ttm_tt_is_populated(bo->ttm)) {
> +               ret = ttm_tt_populate(bo->bdev, bo->ttm, &ctx);
> +               if (ret)
> +                       return ret;
>         }
>  
> -create_st:
> -       st = kmalloc(sizeof(*st), GFP_KERNEL);
> -       if (!st)
> -               return -ENOMEM;
> +       if (!i915_gem_object_has_pages(obj)) {
> +               struct i915_refct_sgt *rsgt =
> +                       i915_ttm_resource_get_st(obj, bo->resource);
>  
> -       npages = obj->base.size / PAGE_SIZE;
> -       if (sg_alloc_table(st, npages, GFP_KERNEL)) {
> -               kfree(st);
> -               return -ENOMEM;
> -       }
> +               if (IS_ERR(rsgt))
> +                       return PTR_ERR(rsgt);
>  
> -       sg = st->sgl;
> -       st->nents = 0;
> -       sg_page_sizes = 0;
> -
> -       do {
> -               int order = min(fls(npages) - 1, max_order);
> -               struct page *page;
> -
> -               do {
> -                       page = alloc_pages(gfp | (order ? QUIET :
> MAYFAIL),
> -                                          order);
> -                       if (page)
> -                               break;
> -                       if (!order--)
> -                               goto err;
> -
> -                       /* Limit subsequent allocations as well */
> -                       max_order = order;
> -               } while (1);
> -
> -               sg_set_page(sg, page, PAGE_SIZE << order, 0);
> -               sg_page_sizes |= PAGE_SIZE << order;
> -               st->nents++;
> -
> -               npages -= 1 << order;
> -               if (!npages) {
> -                       sg_mark_end(sg);
> -                       break;
> -               }
> -
> -               sg = __sg_next(sg);
> -       } while (1);
> -
> -       if (i915_gem_gtt_prepare_pages(obj, st)) {
> -               /* Failed to dma-map try again with single page sg
> segments */
> -               if (get_order(st->sgl->length)) {
> -                       internal_free_pages(st);
> -                       max_order = 0;
> -                       goto create_st;
> -               }
> -               goto err;
> +               GEM_BUG_ON(obj->mm.rsgt);
> +               obj->mm.rsgt = rsgt;
> +               __i915_gem_object_set_pages(obj, &rsgt->table,
> +                                           i915_sg_dma_sizes(rsgt-
> >table.sgl));
>         }
>  
> -       __i915_gem_object_set_pages(obj, st, sg_page_sizes);
> +       GEM_BUG_ON(bo->ttm && ((obj->base.size >> PAGE_SHIFT) < bo-
> >ttm->num_pages));
> +       i915_ttm_adjust_lru(obj);
>  
>         return 0;
> +}
>  
> -err:
> -       sg_set_page(sg, NULL, 0, 0);
> -       sg_mark_end(sg);
> -       internal_free_pages(st);
> +static const struct drm_i915_gem_object_ops
> i915_gem_object_internal_ops = {
> +       .name = "i915_gem_object_ttm",
> +       .flags = I915_GEM_OBJECT_IS_SHRINKABLE,
>  
> -       return -ENOMEM;
> -}
> +       .get_pages = i915_internal_get_pages,
> +       .put_pages = i915_ttm_put_pages,
> +       .adjust_lru = i915_ttm_adjust_lru,
> +       .delayed_free = i915_ttm_delayed_free,
> +};
>  
> -static void i915_gem_object_put_pages_internal(struct
> drm_i915_gem_object *obj,
> -                                              struct sg_table
> *pages)
> +void i915_ttm_internal_bo_destroy(struct ttm_buffer_object *bo)
>  {
> -       i915_gem_gtt_finish_pages(obj, pages);
> -       internal_free_pages(pages);
> +       struct drm_i915_gem_object *obj = i915_ttm_to_gem(bo);
>  
> -       obj->mm.dirty = false;
> +       mutex_destroy(&obj->ttm.get_io_page.lock);
>  
> -       __start_cpu_write(obj);
> -}
> +       if (obj->ttm.created) {
> +               /* This releases all gem object bindings to the
> backend. */
> +               __i915_gem_free_object(obj);
>  
> -static const struct drm_i915_gem_object_ops
> i915_gem_object_internal_ops = {
> -       .name = "i915_gem_object_internal",
> -       .flags = I915_GEM_OBJECT_IS_SHRINKABLE,
> -       .get_pages = i915_gem_object_get_pages_internal,
> -       .put_pages = i915_gem_object_put_pages_internal,
> -};
> +               call_rcu(&obj->rcu, __i915_gem_free_object_rcu);
> +       } else {
> +               __i915_gem_object_fini(obj);
> +       }
> +}
>  
> +/**
> + * i915_gem_object_create_internal: create an object with volatile
> pages
> + * @i915: the i915 device
> + * @size: the size in bytes of backing storage to allocate for the
> object
> + *
> + * Creates a new object that wraps some internal memory for private
> use.
> + * This object is not backed by swappable storage, and as such its
> contents
> + * are volatile and only valid whilst pinned. If the object is
> reaped by the
> + * shrinker, its pages and data will be discarded. Equally, it is
> not a full
> + * GEM object and so not valid for access from userspace. This makes
> it useful
> + * for hardware interfaces like ringbuffers (which are pinned from
> the time
> + * the request is written to the time the hardware stops accessing
> it), but
> + * not for contexts (which need to be preserved when not active for
> later
> + * reuse). Note that it is not cleared upon allocation.
> + */
>  struct drm_i915_gem_object *
> -__i915_gem_object_create_internal(struct drm_i915_private *i915,
> -                                 const struct
> drm_i915_gem_object_ops *ops,
> -                                 phys_addr_t size)
> +i915_gem_object_create_internal(struct drm_i915_private *i915,
> +                               phys_addr_t size)
>  {
>         static struct lock_class_key lock_class;
>         struct drm_i915_gem_object *obj;
>         unsigned int cache_level;
> +       struct ttm_operation_ctx ctx = {
> +               .interruptible = true,
> +               .no_wait_gpu = false,
> +       };
> +       int ret;
>  
>         GEM_BUG_ON(!size);
>         GEM_BUG_ON(!IS_ALIGNED(size, PAGE_SIZE));
> @@ -166,45 +129,34 @@ __i915_gem_object_create_internal(struct
> drm_i915_private *i915,
>                 return ERR_PTR(-ENOMEM);
>  
>         drm_gem_private_object_init(&i915->drm, &obj->base, size);
> -       i915_gem_object_init(obj, ops, &lock_class, 0);
> -       obj->mem_flags |= I915_BO_FLAG_STRUCT_PAGE;
> +       i915_gem_object_init(obj, &i915_gem_object_internal_ops,
> &lock_class,
> +                            I915_BO_ALLOC_VOLATILE);
> +
> +       INIT_LIST_HEAD(&obj->mm.region_link);
> +
> +       INIT_RADIX_TREE(&obj->ttm.get_io_page.radix, GFP_KERNEL |
> __GFP_NOWARN);
> +       mutex_init(&obj->ttm.get_io_page.lock);
>  
> -       /*
> -        * Mark the object as volatile, such that the pages are
> marked as
> -        * dontneed whilst they are still pinned. As soon as they are
> unpinned
> -        * they are allowed to be reaped by the shrinker, and the
> caller is
> -        * expected to repopulate - the contents of this object are
> only valid
> -        * whilst active and pinned.
> -        */
> -       i915_gem_object_set_volatile(obj);
> +       obj->base.vma_node.driver_private = i915_gem_to_ttm(obj);
>  
> +       ret = ttm_bo_init_reserved(&i915->bdev, i915_gem_to_ttm(obj),
> size,
> +                                  ttm_bo_type_kernel,
> i915_ttm_sys_placement(),
> +                                  0, &ctx, NULL, NULL,
> i915_ttm_internal_bo_destroy);
> +       if (ret) {
> +               ret = i915_ttm_err_to_gem(ret);
> +               i915_gem_object_free(obj);
> +               return ERR_PTR(ret);
> +       }
> +
> +       obj->ttm.created = true;
>         obj->read_domains = I915_GEM_DOMAIN_CPU;
>         obj->write_domain = I915_GEM_DOMAIN_CPU;
> -
> +       obj->mem_flags &= ~I915_BO_FLAG_IOMEM;
> +       obj->mem_flags |= I915_BO_FLAG_STRUCT_PAGE;
>         cache_level = HAS_LLC(i915) ? I915_CACHE_LLC :
> I915_CACHE_NONE;
>         i915_gem_object_set_cache_coherency(obj, cache_level);
> +       i915_gem_object_unlock(obj);
>  
>         return obj;
>  }
>  
> -/**
> - * i915_gem_object_create_internal: create an object with volatile
> pages
> - * @i915: the i915 device
> - * @size: the size in bytes of backing storage to allocate for the
> object
> - *
> - * Creates a new object that wraps some internal memory for private
> use.
> - * This object is not backed by swappable storage, and as such its
> contents
> - * are volatile and only valid whilst pinned. If the object is
> reaped by the
> - * shrinker, its pages and data will be discarded. Equally, it is
> not a full
> - * GEM object and so not valid for access from userspace. This makes
> it useful
> - * for hardware interfaces like ringbuffers (which are pinned from
> the time
> - * the request is written to the time the hardware stops accessing
> it), but
> - * not for contexts (which need to be preserved when not active for
> later
> - * reuse). Note that it is not cleared upon allocation.
> - */
> -struct drm_i915_gem_object *
> -i915_gem_object_create_internal(struct drm_i915_private *i915,
> -                               phys_addr_t size)
> -{
> -       return __i915_gem_object_create_internal(i915,
> &i915_gem_object_internal_ops, size);

While we don't have a TTM shmem backend ready yet for internal,

Did you consider setting up just yet another region,
INTEL_REGION_INTERNAL,
.class = INTEL_MEMORY_SYSTEM and
.instance = 1,

And make it create a TTM system region on integrated, and use the
same region as INTEL_REGION_SMEM on dgfx.

I think ttm should automatically map that to I915_PL_SYSTEM and the
backwards mapping in i915_ttm_region() should never get called since
the object is never moved.

Then I figure it should suffice to just call
__i915_gem_ttm_object_init() and we could drop a lot of code.
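
As a rough sketch of such a region table entry (the name and exact
placement in the table are illustrative assumptions, not something
already in the series):

	/*
	 * A second SYSTEM-class instance that TTM resolves to
	 * I915_PL_SYSTEM, so internal objects never need the
	 * backwards mapping in i915_ttm_region().
	 */
	[INTEL_REGION_INTERNAL] = {
		.class = INTEL_MEMORY_SYSTEM,
		.instance = 1,
	},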

/Thomas




> -}
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_internal.h
> b/drivers/gpu/drm/i915/gem/i915_gem_internal.h
> index 6664e06112fc..524e1042b20f 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_internal.h
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_internal.h
> @@ -15,9 +15,4 @@ struct drm_i915_private;
>  struct drm_i915_gem_object *
>  i915_gem_object_create_internal(struct drm_i915_private *i915,
>                                 phys_addr_t size);
> -struct drm_i915_gem_object *
> -__i915_gem_object_create_internal(struct drm_i915_private *i915,
> -                                 const struct
> drm_i915_gem_object_ops *ops,
> -                                 phys_addr_t size);
> -
>  #endif /* __I915_GEM_INTERNAL_H__ */
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
> b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
> index fdb3a1c18cb6..92195ead8c11 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
> @@ -83,7 +83,7 @@ struct ttm_placement *i915_ttm_sys_placement(void)
>         return &i915_sys_placement;
>  }
>  
> -static int i915_ttm_err_to_gem(int err)
> +int i915_ttm_err_to_gem(int err)
>  {
>         /* Fastpath */
>         if (likely(!err))
> @@ -745,8 +745,8 @@ struct ttm_device_funcs *i915_ttm_driver(void)
>         return &i915_ttm_bo_driver;
>  }
>  
> -static int __i915_ttm_get_pages(struct drm_i915_gem_object *obj,
> -                               struct ttm_placement *placement)
> +int __i915_ttm_get_pages(struct drm_i915_gem_object *obj,
> +                        struct ttm_placement *placement)
>  {
>         struct ttm_buffer_object *bo = i915_gem_to_ttm(obj);
>         struct ttm_operation_ctx ctx = {
> @@ -871,8 +871,8 @@ static int i915_ttm_migrate(struct
> drm_i915_gem_object *obj,
>         return __i915_ttm_migrate(obj, mr, obj->flags);
>  }
>  
> -static void i915_ttm_put_pages(struct drm_i915_gem_object *obj,
> -                              struct sg_table *st)
> +void i915_ttm_put_pages(struct drm_i915_gem_object *obj,
> +                       struct sg_table *st)
>  {
>         /*
>          * We're currently not called from a shrinker, so put_pages()
> @@ -995,7 +995,7 @@ void i915_ttm_adjust_lru(struct
> drm_i915_gem_object *obj)
>   * it's not idle, and using the TTM destroyed list handling could
> help us
>   * benefit from that.
>   */
> -static void i915_ttm_delayed_free(struct drm_i915_gem_object *obj)
> +void i915_ttm_delayed_free(struct drm_i915_gem_object *obj)
>  {
>         GEM_BUG_ON(!obj->ttm.created);
>  
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm.h
> b/drivers/gpu/drm/i915/gem/i915_gem_ttm.h
> index 73e371aa3850..06701c46d8e2 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_ttm.h
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm.h
> @@ -26,6 +26,7 @@ i915_gem_to_ttm(struct drm_i915_gem_object *obj)
>   * i915 ttm gem object destructor. Internal use only.
>   */
>  void i915_ttm_bo_destroy(struct ttm_buffer_object *bo);
> +void i915_ttm_internal_bo_destroy(struct ttm_buffer_object *bo);
>  
>  /**
>   * i915_ttm_to_gem - Convert a struct ttm_buffer_object to an
> embedding
> @@ -37,8 +38,10 @@ void i915_ttm_bo_destroy(struct ttm_buffer_object
> *bo);
>  static inline struct drm_i915_gem_object *
>  i915_ttm_to_gem(struct ttm_buffer_object *bo)
>  {
> -       if (bo->destroy != i915_ttm_bo_destroy)
> +       if (bo->destroy != i915_ttm_bo_destroy &&
> +           bo->destroy != i915_ttm_internal_bo_destroy) {
>                 return NULL;
> +       }
>  
>         return container_of(bo, struct drm_i915_gem_object,
> __do_not_access);
>  }
> @@ -66,6 +69,7 @@ i915_ttm_resource_get_st(struct drm_i915_gem_object
> *obj,
>                          struct ttm_resource *res);
>  
>  void i915_ttm_adjust_lru(struct drm_i915_gem_object *obj);
> +void i915_ttm_delayed_free(struct drm_i915_gem_object *obj);
>  
>  int i915_ttm_purge(struct drm_i915_gem_object *obj);
>  
> @@ -92,4 +96,10 @@ static inline bool i915_ttm_cpu_maps_iomem(struct
> ttm_resource *mem)
>         /* Once / if we support GGTT, this is also false for cached
> ttm_tts */
>         return mem->mem_type != I915_PL_SYSTEM;
>  }
> +
> +int __i915_ttm_get_pages(struct drm_i915_gem_object *obj,
> +                        struct ttm_placement *placement);
> +void i915_ttm_put_pages(struct drm_i915_gem_object *obj, struct
> sg_table *st);
> +int i915_ttm_err_to_gem(int err);
> +
>  #endif



^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [Intel-gfx] [PATCH 4/4] drm/i915: internal buffers use ttm backend
@ 2022-05-11 14:14     ` Thomas Hellström
  0 siblings, 0 replies; 32+ messages in thread
From: Thomas Hellström @ 2022-05-11 14:14 UTC (permalink / raw)
  To: Robert Beckett, dri-devel, intel-gfx, Jani Nikula,
	Joonas Lahtinen, Rodrigo Vivi, Tvrtko Ursulin, David Airlie,
	Daniel Vetter
  Cc: Matthew Auld, linux-kernel

On Tue, 2022-05-03 at 19:13 +0000, Robert Beckett wrote:
> refactor internal buffer backend to allocate volatile pages via
> ttm pool allocator
> 
> Signed-off-by: Robert Beckett <bob.beckett@collabora.com>
> ---
>  drivers/gpu/drm/i915/gem/i915_gem_internal.c | 264 ++++++++---------
> --
>  drivers/gpu/drm/i915/gem/i915_gem_internal.h |   5 -
>  drivers/gpu/drm/i915/gem/i915_gem_ttm.c      |  12 +-
>  drivers/gpu/drm/i915/gem/i915_gem_ttm.h      |  12 +-
>  4 files changed, 125 insertions(+), 168 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_internal.c
> b/drivers/gpu/drm/i915/gem/i915_gem_internal.c
> index c698f95af15f..815ec9466cc0 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_internal.c
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_internal.c
> @@ -4,156 +4,119 @@
>   * Copyright © 2014-2016 Intel Corporation
>   */
>  
> -#include <linux/scatterlist.h>
> -#include <linux/slab.h>
> -#include <linux/swiotlb.h>
> -
> +#include <drm/ttm/ttm_bo_driver.h>
> +#include <drm/ttm/ttm_placement.h>
> +#include "drm/ttm/ttm_bo_api.h"
> +#include "gem/i915_gem_internal.h"
> +#include "gem/i915_gem_region.h"
> +#include "gem/i915_gem_ttm.h"
>  #include "i915_drv.h"
> -#include "i915_gem.h"
> -#include "i915_gem_internal.h"
> -#include "i915_gem_object.h"
> -#include "i915_scatterlist.h"
> -#include "i915_utils.h"
> -
> -#define QUIET (__GFP_NORETRY | __GFP_NOWARN)
> -#define MAYFAIL (__GFP_RETRY_MAYFAIL | __GFP_NOWARN)
> -
> -static void internal_free_pages(struct sg_table *st)
> -{
> -       struct scatterlist *sg;
> -
> -       for (sg = st->sgl; sg; sg = __sg_next(sg)) {
> -               if (sg_page(sg))
> -                       __free_pages(sg_page(sg), get_order(sg-
> >length));
> -       }
> -
> -       sg_free_table(st);
> -       kfree(st);
> -}
>  
> -static int i915_gem_object_get_pages_internal(struct
> drm_i915_gem_object *obj)
> +static int i915_internal_get_pages(struct drm_i915_gem_object *obj)
>  {
> -       struct drm_i915_private *i915 = to_i915(obj->base.dev);
> -       struct sg_table *st;
> -       struct scatterlist *sg;
> -       unsigned int sg_page_sizes;
> -       unsigned int npages;
> -       int max_order;
> -       gfp_t gfp;
> -
> -       max_order = MAX_ORDER;
> -#ifdef CONFIG_SWIOTLB
> -       if (is_swiotlb_active(obj->base.dev->dev)) {
> -               unsigned int max_segment;
> -
> -               max_segment = swiotlb_max_segment();
> -               if (max_segment) {
> -                       max_segment = max_t(unsigned int,
> max_segment,
> -                                           PAGE_SIZE) >> PAGE_SHIFT;
> -                       max_order = min(max_order,
> ilog2(max_segment));
> -               }
> +       struct ttm_buffer_object *bo = i915_gem_to_ttm(obj);
> +       struct ttm_operation_ctx ctx = {
> +               .interruptible = true,
> +               .no_wait_gpu = false,
> +       };
> +       struct ttm_place place = {
> +               .fpfn = 0,
> +               .lpfn = 0,
> +               .mem_type = I915_PL_SYSTEM,
> +               .flags = 0,
> +       };
> +       struct ttm_placement placement = {
> +               .num_placement = 1,
> +               .placement = &place,
> +               .num_busy_placement = 0,
> +               .busy_placement = NULL,
> +       };
> +       int ret;
> +
> +       ret = ttm_bo_validate(bo, &placement, &ctx);
> +       if (ret) {
> +               ret = i915_ttm_err_to_gem(ret);
> +               return ret;
>         }
> -#endif
>  
> -       gfp = GFP_KERNEL | __GFP_HIGHMEM | __GFP_RECLAIMABLE;
> -       if (IS_I965GM(i915) || IS_I965G(i915)) {
> -               /* 965gm cannot relocate objects above 4GiB. */
> -               gfp &= ~__GFP_HIGHMEM;
> -               gfp |= __GFP_DMA32;


It looks like we're losing this restriction?

There is a flag to ttm_device_init() to make TTM only do __GFP_DMA32
allocations.
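
As a rough sketch (the call site and surrounding arguments are
assumptions rather than code from the series; the point is only the
use_dma32 flag at the end of ttm_device_init()):

	ret = ttm_device_init(&i915->bdev, i915_ttm_driver(),
			      i915->drm.dev,
			      i915->drm.anon_inode->i_mapping,
			      i915->drm.vma_offset_manager,
			      false, /* use_dma_alloc */
			      IS_I965G(i915) || IS_I965GM(i915)); /* use_dma32 */

That would preserve the "965g/965gm cannot relocate objects above 4GiB"
behaviour without any per-allocation gfp tweaking.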

> +       if (bo->ttm && !ttm_tt_is_populated(bo->ttm)) {
> +               ret = ttm_tt_populate(bo->bdev, bo->ttm, &ctx);
> +               if (ret)
> +                       return ret;
>         }
>  
> -create_st:
> -       st = kmalloc(sizeof(*st), GFP_KERNEL);
> -       if (!st)
> -               return -ENOMEM;
> +       if (!i915_gem_object_has_pages(obj)) {
> +               struct i915_refct_sgt *rsgt =
> +                       i915_ttm_resource_get_st(obj, bo->resource);
>  
> -       npages = obj->base.size / PAGE_SIZE;
> -       if (sg_alloc_table(st, npages, GFP_KERNEL)) {
> -               kfree(st);
> -               return -ENOMEM;
> -       }
> +               if (IS_ERR(rsgt))
> +                       return PTR_ERR(rsgt);
>  
> -       sg = st->sgl;
> -       st->nents = 0;
> -       sg_page_sizes = 0;
> -
> -       do {
> -               int order = min(fls(npages) - 1, max_order);
> -               struct page *page;
> -
> -               do {
> -                       page = alloc_pages(gfp | (order ? QUIET :
> MAYFAIL),
> -                                          order);
> -                       if (page)
> -                               break;
> -                       if (!order--)
> -                               goto err;
> -
> -                       /* Limit subsequent allocations as well */
> -                       max_order = order;
> -               } while (1);
> -
> -               sg_set_page(sg, page, PAGE_SIZE << order, 0);
> -               sg_page_sizes |= PAGE_SIZE << order;
> -               st->nents++;
> -
> -               npages -= 1 << order;
> -               if (!npages) {
> -                       sg_mark_end(sg);
> -                       break;
> -               }
> -
> -               sg = __sg_next(sg);
> -       } while (1);
> -
> -       if (i915_gem_gtt_prepare_pages(obj, st)) {
> -               /* Failed to dma-map try again with single page sg
> segments */
> -               if (get_order(st->sgl->length)) {
> -                       internal_free_pages(st);
> -                       max_order = 0;
> -                       goto create_st;
> -               }
> -               goto err;
> +               GEM_BUG_ON(obj->mm.rsgt);
> +               obj->mm.rsgt = rsgt;
> +               __i915_gem_object_set_pages(obj, &rsgt->table,
> +                                           i915_sg_dma_sizes(rsgt-
> >table.sgl));
>         }
>  
> -       __i915_gem_object_set_pages(obj, st, sg_page_sizes);
> +       GEM_BUG_ON(bo->ttm && ((obj->base.size >> PAGE_SHIFT) < bo-
> >ttm->num_pages));
> +       i915_ttm_adjust_lru(obj);
>  
>         return 0;
> +}
>  
> -err:
> -       sg_set_page(sg, NULL, 0, 0);
> -       sg_mark_end(sg);
> -       internal_free_pages(st);
> +static const struct drm_i915_gem_object_ops
> i915_gem_object_internal_ops = {
> +       .name = "i915_gem_object_ttm",
> +       .flags = I915_GEM_OBJECT_IS_SHRINKABLE,
>  
> -       return -ENOMEM;
> -}
> +       .get_pages = i915_internal_get_pages,
> +       .put_pages = i915_ttm_put_pages,
> +       .adjust_lru = i915_ttm_adjust_lru,
> +       .delayed_free = i915_ttm_delayed_free,
> +};
>  
> -static void i915_gem_object_put_pages_internal(struct
> drm_i915_gem_object *obj,
> -                                              struct sg_table
> *pages)
> +void i915_ttm_internal_bo_destroy(struct ttm_buffer_object *bo)
>  {
> -       i915_gem_gtt_finish_pages(obj, pages);
> -       internal_free_pages(pages);
> +       struct drm_i915_gem_object *obj = i915_ttm_to_gem(bo);
>  
> -       obj->mm.dirty = false;
> +       mutex_destroy(&obj->ttm.get_io_page.lock);
>  
> -       __start_cpu_write(obj);
> -}
> +       if (obj->ttm.created) {
> +               /* This releases all gem object bindings to the
> backend. */
> +               __i915_gem_free_object(obj);
>  
> -static const struct drm_i915_gem_object_ops
> i915_gem_object_internal_ops = {
> -       .name = "i915_gem_object_internal",
> -       .flags = I915_GEM_OBJECT_IS_SHRINKABLE,
> -       .get_pages = i915_gem_object_get_pages_internal,
> -       .put_pages = i915_gem_object_put_pages_internal,
> -};
> +               call_rcu(&obj->rcu, __i915_gem_free_object_rcu);
> +       } else {
> +               __i915_gem_object_fini(obj);
> +       }
> +}
>  
> +/**
> + * i915_gem_object_create_internal: create an object with volatile
> pages
> + * @i915: the i915 device
> + * @size: the size in bytes of backing storage to allocate for the
> object
> + *
> + * Creates a new object that wraps some internal memory for private
> use.
> + * This object is not backed by swappable storage, and as such its
> contents
> + * are volatile and only valid whilst pinned. If the object is
> reaped by the
> + * shrinker, its pages and data will be discarded. Equally, it is
> not a full
> + * GEM object and so not valid for access from userspace. This makes
> it useful
> + * for hardware interfaces like ringbuffers (which are pinned from
> the time
> + * the request is written to the time the hardware stops accessing
> it), but
> + * not for contexts (which need to be preserved when not active for
> later
> + * reuse). Note that it is not cleared upon allocation.
> + */
>  struct drm_i915_gem_object *
> -__i915_gem_object_create_internal(struct drm_i915_private *i915,
> -                                 const struct
> drm_i915_gem_object_ops *ops,
> -                                 phys_addr_t size)
> +i915_gem_object_create_internal(struct drm_i915_private *i915,
> +                               phys_addr_t size)
>  {
>         static struct lock_class_key lock_class;
>         struct drm_i915_gem_object *obj;
>         unsigned int cache_level;
> +       struct ttm_operation_ctx ctx = {
> +               .interruptible = true,
> +               .no_wait_gpu = false,
> +       };
> +       int ret;
>  
>         GEM_BUG_ON(!size);
>         GEM_BUG_ON(!IS_ALIGNED(size, PAGE_SIZE));
> @@ -166,45 +129,34 @@ __i915_gem_object_create_internal(struct
> drm_i915_private *i915,
>                 return ERR_PTR(-ENOMEM);
>  
>         drm_gem_private_object_init(&i915->drm, &obj->base, size);
> -       i915_gem_object_init(obj, ops, &lock_class, 0);
> -       obj->mem_flags |= I915_BO_FLAG_STRUCT_PAGE;
> +       i915_gem_object_init(obj, &i915_gem_object_internal_ops,
> &lock_class,
> +                            I915_BO_ALLOC_VOLATILE);
> +
> +       INIT_LIST_HEAD(&obj->mm.region_link);
> +
> +       INIT_RADIX_TREE(&obj->ttm.get_io_page.radix, GFP_KERNEL |
> __GFP_NOWARN);
> +       mutex_init(&obj->ttm.get_io_page.lock);
>  
> -       /*
> -        * Mark the object as volatile, such that the pages are
> marked as
> -        * dontneed whilst they are still pinned. As soon as they are
> unpinned
> -        * they are allowed to be reaped by the shrinker, and the
> caller is
> -        * expected to repopulate - the contents of this object are
> only valid
> -        * whilst active and pinned.
> -        */
> -       i915_gem_object_set_volatile(obj);
> +       obj->base.vma_node.driver_private = i915_gem_to_ttm(obj);
>  
> +       ret = ttm_bo_init_reserved(&i915->bdev, i915_gem_to_ttm(obj),
> size,
> +                                  ttm_bo_type_kernel,
> i915_ttm_sys_placement(),
> +                                  0, &ctx, NULL, NULL,
> i915_ttm_internal_bo_destroy);
> +       if (ret) {
> +               ret = i915_ttm_err_to_gem(ret);
> +               i915_gem_object_free(obj);
> +               return ERR_PTR(ret);
> +       }
> +
> +       obj->ttm.created = true;
>         obj->read_domains = I915_GEM_DOMAIN_CPU;
>         obj->write_domain = I915_GEM_DOMAIN_CPU;
> -
> +       obj->mem_flags &= ~I915_BO_FLAG_IOMEM;
> +       obj->mem_flags |= I915_BO_FLAG_STRUCT_PAGE;
>         cache_level = HAS_LLC(i915) ? I915_CACHE_LLC :
> I915_CACHE_NONE;
>         i915_gem_object_set_cache_coherency(obj, cache_level);
> +       i915_gem_object_unlock(obj);
>  
>         return obj;
>  }
>  
> -/**
> - * i915_gem_object_create_internal: create an object with volatile
> pages
> - * @i915: the i915 device
> - * @size: the size in bytes of backing storage to allocate for the
> object
> - *
> - * Creates a new object that wraps some internal memory for private
> use.
> - * This object is not backed by swappable storage, and as such its
> contents
> - * are volatile and only valid whilst pinned. If the object is
> reaped by the
> - * shrinker, its pages and data will be discarded. Equally, it is
> not a full
> - * GEM object and so not valid for access from userspace. This makes
> it useful
> - * for hardware interfaces like ringbuffers (which are pinned from
> the time
> - * the request is written to the time the hardware stops accessing
> it), but
> - * not for contexts (which need to be preserved when not active for
> later
> - * reuse). Note that it is not cleared upon allocation.
> - */
> -struct drm_i915_gem_object *
> -i915_gem_object_create_internal(struct drm_i915_private *i915,
> -                               phys_addr_t size)
> -{
> -       return __i915_gem_object_create_internal(i915,
> &i915_gem_object_internal_ops, size);

While we don't have a TTM shmem backend ready yet for internal,

Did you consider setting up just yet another region,
INTEL_REGION_INTERNAL,
.class = INTEL_MEMORY_SYSTEM and
.instance = 1,

And make it create a TTM system region on integrated, and use
same region as INTEL_REGION_SMEM on dgfx.

I think ttm should automatically map that to I915_PL_SYSTEM and the
backwards mapping in i915_ttm_region() should never get called since
the object is never moved.

Then I figure it should suffice to just call
__i915_gem_ttm_object_init() and we could drop a lot of code.

/Thomas




> -}
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_internal.h
> b/drivers/gpu/drm/i915/gem/i915_gem_internal.h
> index 6664e06112fc..524e1042b20f 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_internal.h
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_internal.h
> @@ -15,9 +15,4 @@ struct drm_i915_private;
>  struct drm_i915_gem_object *
>  i915_gem_object_create_internal(struct drm_i915_private *i915,
>                                 phys_addr_t size);
> -struct drm_i915_gem_object *
> -__i915_gem_object_create_internal(struct drm_i915_private *i915,
> -                                 const struct
> drm_i915_gem_object_ops *ops,
> -                                 phys_addr_t size);
> -
>  #endif /* __I915_GEM_INTERNAL_H__ */
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
> b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
> index fdb3a1c18cb6..92195ead8c11 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
> @@ -83,7 +83,7 @@ struct ttm_placement *i915_ttm_sys_placement(void)
>         return &i915_sys_placement;
>  }
>  
> -static int i915_ttm_err_to_gem(int err)
> +int i915_ttm_err_to_gem(int err)
>  {
>         /* Fastpath */
>         if (likely(!err))
> @@ -745,8 +745,8 @@ struct ttm_device_funcs *i915_ttm_driver(void)
>         return &i915_ttm_bo_driver;
>  }
>  
> -static int __i915_ttm_get_pages(struct drm_i915_gem_object *obj,
> -                               struct ttm_placement *placement)
> +int __i915_ttm_get_pages(struct drm_i915_gem_object *obj,
> +                        struct ttm_placement *placement)
>  {
>         struct ttm_buffer_object *bo = i915_gem_to_ttm(obj);
>         struct ttm_operation_ctx ctx = {
> @@ -871,8 +871,8 @@ static int i915_ttm_migrate(struct
> drm_i915_gem_object *obj,
>         return __i915_ttm_migrate(obj, mr, obj->flags);
>  }
>  
> -static void i915_ttm_put_pages(struct drm_i915_gem_object *obj,
> -                              struct sg_table *st)
> +void i915_ttm_put_pages(struct drm_i915_gem_object *obj,
> +                       struct sg_table *st)
>  {
>         /*
>          * We're currently not called from a shrinker, so put_pages()
> @@ -995,7 +995,7 @@ void i915_ttm_adjust_lru(struct
> drm_i915_gem_object *obj)
>   * it's not idle, and using the TTM destroyed list handling could
> help us
>   * benefit from that.
>   */
> -static void i915_ttm_delayed_free(struct drm_i915_gem_object *obj)
> +void i915_ttm_delayed_free(struct drm_i915_gem_object *obj)
>  {
>         GEM_BUG_ON(!obj->ttm.created);
>  
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm.h
> b/drivers/gpu/drm/i915/gem/i915_gem_ttm.h
> index 73e371aa3850..06701c46d8e2 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_ttm.h
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm.h
> @@ -26,6 +26,7 @@ i915_gem_to_ttm(struct drm_i915_gem_object *obj)
>   * i915 ttm gem object destructor. Internal use only.
>   */
>  void i915_ttm_bo_destroy(struct ttm_buffer_object *bo);
> +void i915_ttm_internal_bo_destroy(struct ttm_buffer_object *bo);
>  
>  /**
>   * i915_ttm_to_gem - Convert a struct ttm_buffer_object to an
> embedding
> @@ -37,8 +38,10 @@ void i915_ttm_bo_destroy(struct ttm_buffer_object
> *bo);
>  static inline struct drm_i915_gem_object *
>  i915_ttm_to_gem(struct ttm_buffer_object *bo)
>  {
> -       if (bo->destroy != i915_ttm_bo_destroy)
> +       if (bo->destroy != i915_ttm_bo_destroy &&
> +           bo->destroy != i915_ttm_internal_bo_destroy) {
>                 return NULL;
> +       }
>  
>         return container_of(bo, struct drm_i915_gem_object,
> __do_not_access);
>  }
> @@ -66,6 +69,7 @@ i915_ttm_resource_get_st(struct drm_i915_gem_object
> *obj,
>                          struct ttm_resource *res);
>  
>  void i915_ttm_adjust_lru(struct drm_i915_gem_object *obj);
> +void i915_ttm_delayed_free(struct drm_i915_gem_object *obj);
>  
>  int i915_ttm_purge(struct drm_i915_gem_object *obj);
>  
> @@ -92,4 +96,10 @@ static inline bool i915_ttm_cpu_maps_iomem(struct
> ttm_resource *mem)
>         /* Once / if we support GGTT, this is also false for cached
> ttm_tts */
>         return mem->mem_type != I915_PL_SYSTEM;
>  }
> +
> +int __i915_ttm_get_pages(struct drm_i915_gem_object *obj,
> +                        struct ttm_placement *placement);
> +void i915_ttm_put_pages(struct drm_i915_gem_object *obj, struct
> sg_table *st);
> +int i915_ttm_err_to_gem(int err);
> +
>  #endif



^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH 4/4] drm/i915: internal buffers use ttm backend
  2022-05-11 14:14     ` [Intel-gfx] " Thomas Hellström
@ 2022-05-23 15:52       ` Robert Beckett
  -1 siblings, 0 replies; 32+ messages in thread
From: Robert Beckett @ 2022-05-23 15:52 UTC (permalink / raw)
  To: Thomas Hellström, dri-devel, intel-gfx, Jani Nikula,
	Joonas Lahtinen, Rodrigo Vivi, Tvrtko Ursulin, David Airlie,
	Daniel Vetter
  Cc: Matthew Auld, linux-kernel



On 11/05/2022 15:14, Thomas Hellström wrote:
> On Tue, 2022-05-03 at 19:13 +0000, Robert Beckett wrote:
>> refactor internal buffer backend to allocate volatile pages via
>> ttm pool allocator
>>
>> Signed-off-by: Robert Beckett <bob.beckett@collabora.com>
>> ---
>>   drivers/gpu/drm/i915/gem/i915_gem_internal.c | 264 ++++++++---------
>> --
>>   drivers/gpu/drm/i915/gem/i915_gem_internal.h |   5 -
>>   drivers/gpu/drm/i915/gem/i915_gem_ttm.c      |  12 +-
>>   drivers/gpu/drm/i915/gem/i915_gem_ttm.h      |  12 +-
>>   4 files changed, 125 insertions(+), 168 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_internal.c
>> b/drivers/gpu/drm/i915/gem/i915_gem_internal.c
>> index c698f95af15f..815ec9466cc0 100644
>> --- a/drivers/gpu/drm/i915/gem/i915_gem_internal.c
>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_internal.c
>> @@ -4,156 +4,119 @@
>>    * Copyright © 2014-2016 Intel Corporation
>>    */
>>   
>> -#include <linux/scatterlist.h>
>> -#include <linux/slab.h>
>> -#include <linux/swiotlb.h>
>> -
>> +#include <drm/ttm/ttm_bo_driver.h>
>> +#include <drm/ttm/ttm_placement.h>
>> +#include "drm/ttm/ttm_bo_api.h"
>> +#include "gem/i915_gem_internal.h"
>> +#include "gem/i915_gem_region.h"
>> +#include "gem/i915_gem_ttm.h"
>>   #include "i915_drv.h"
>> -#include "i915_gem.h"
>> -#include "i915_gem_internal.h"
>> -#include "i915_gem_object.h"
>> -#include "i915_scatterlist.h"
>> -#include "i915_utils.h"
>> -
>> -#define QUIET (__GFP_NORETRY | __GFP_NOWARN)
>> -#define MAYFAIL (__GFP_RETRY_MAYFAIL | __GFP_NOWARN)
>> -
>> -static void internal_free_pages(struct sg_table *st)
>> -{
>> -       struct scatterlist *sg;
>> -
>> -       for (sg = st->sgl; sg; sg = __sg_next(sg)) {
>> -               if (sg_page(sg))
>> -                       __free_pages(sg_page(sg), get_order(sg-
>>> length));
>> -       }
>> -
>> -       sg_free_table(st);
>> -       kfree(st);
>> -}
>>   
>> -static int i915_gem_object_get_pages_internal(struct
>> drm_i915_gem_object *obj)
>> +static int i915_internal_get_pages(struct drm_i915_gem_object *obj)
>>   {
>> -       struct drm_i915_private *i915 = to_i915(obj->base.dev);
>> -       struct sg_table *st;
>> -       struct scatterlist *sg;
>> -       unsigned int sg_page_sizes;
>> -       unsigned int npages;
>> -       int max_order;
>> -       gfp_t gfp;
>> -
>> -       max_order = MAX_ORDER;
>> -#ifdef CONFIG_SWIOTLB
>> -       if (is_swiotlb_active(obj->base.dev->dev)) {
>> -               unsigned int max_segment;
>> -
>> -               max_segment = swiotlb_max_segment();
>> -               if (max_segment) {
>> -                       max_segment = max_t(unsigned int,
>> max_segment,
>> -                                           PAGE_SIZE) >> PAGE_SHIFT;
>> -                       max_order = min(max_order,
>> ilog2(max_segment));
>> -               }
>> +       struct ttm_buffer_object *bo = i915_gem_to_ttm(obj);
>> +       struct ttm_operation_ctx ctx = {
>> +               .interruptible = true,
>> +               .no_wait_gpu = false,
>> +       };
>> +       struct ttm_place place = {
>> +               .fpfn = 0,
>> +               .lpfn = 0,
>> +               .mem_type = I915_PL_SYSTEM,
>> +               .flags = 0,
>> +       };
>> +       struct ttm_placement placement = {
>> +               .num_placement = 1,
>> +               .placement = &place,
>> +               .num_busy_placement = 0,
>> +               .busy_placement = NULL,
>> +       };
>> +       int ret;
>> +
>> +       ret = ttm_bo_validate(bo, &placement, &ctx);
>> +       if (ret) {
>> +               ret = i915_ttm_err_to_gem(ret);
>> +               return ret;
>>          }
>> -#endif
>>   
>> -       gfp = GFP_KERNEL | __GFP_HIGHMEM | __GFP_RECLAIMABLE;
>> -       if (IS_I965GM(i915) || IS_I965G(i915)) {
>> -               /* 965gm cannot relocate objects above 4GiB. */
>> -               gfp &= ~__GFP_HIGHMEM;
>> -               gfp |= __GFP_DMA32;
> 
> 
> It looks like we're losing this restriction?
> 
> There is a flag to ttm_device_init() to make TTM only do __GFP_DMA32
> allocations.

agreed. will fix for v2

> 
>> +       if (bo->ttm && !ttm_tt_is_populated(bo->ttm)) {
>> +               ret = ttm_tt_populate(bo->bdev, bo->ttm, &ctx);
>> +               if (ret)
>> +                       return ret;
>>          }
>>   
>> -create_st:
>> -       st = kmalloc(sizeof(*st), GFP_KERNEL);
>> -       if (!st)
>> -               return -ENOMEM;
>> +       if (!i915_gem_object_has_pages(obj)) {
>> +               struct i915_refct_sgt *rsgt =
>> +                       i915_ttm_resource_get_st(obj, bo->resource);
>>   
>> -       npages = obj->base.size / PAGE_SIZE;
>> -       if (sg_alloc_table(st, npages, GFP_KERNEL)) {
>> -               kfree(st);
>> -               return -ENOMEM;
>> -       }
>> +               if (IS_ERR(rsgt))
>> +                       return PTR_ERR(rsgt);
>>   
>> -       sg = st->sgl;
>> -       st->nents = 0;
>> -       sg_page_sizes = 0;
>> -
>> -       do {
>> -               int order = min(fls(npages) - 1, max_order);
>> -               struct page *page;
>> -
>> -               do {
>> -                       page = alloc_pages(gfp | (order ? QUIET :
>> MAYFAIL),
>> -                                          order);
>> -                       if (page)
>> -                               break;
>> -                       if (!order--)
>> -                               goto err;
>> -
>> -                       /* Limit subsequent allocations as well */
>> -                       max_order = order;
>> -               } while (1);
>> -
>> -               sg_set_page(sg, page, PAGE_SIZE << order, 0);
>> -               sg_page_sizes |= PAGE_SIZE << order;
>> -               st->nents++;
>> -
>> -               npages -= 1 << order;
>> -               if (!npages) {
>> -                       sg_mark_end(sg);
>> -                       break;
>> -               }
>> -
>> -               sg = __sg_next(sg);
>> -       } while (1);
>> -
>> -       if (i915_gem_gtt_prepare_pages(obj, st)) {
>> -               /* Failed to dma-map try again with single page sg
>> segments */
>> -               if (get_order(st->sgl->length)) {
>> -                       internal_free_pages(st);
>> -                       max_order = 0;
>> -                       goto create_st;
>> -               }
>> -               goto err;
>> +               GEM_BUG_ON(obj->mm.rsgt);
>> +               obj->mm.rsgt = rsgt;
>> +               __i915_gem_object_set_pages(obj, &rsgt->table,
>> +                                           i915_sg_dma_sizes(rsgt-
>>> table.sgl));
>>          }
>>   
>> -       __i915_gem_object_set_pages(obj, st, sg_page_sizes);
>> +       GEM_BUG_ON(bo->ttm && ((obj->base.size >> PAGE_SHIFT) < bo-
>>> ttm->num_pages));
>> +       i915_ttm_adjust_lru(obj);
>>   
>>          return 0;
>> +}
>>   
>> -err:
>> -       sg_set_page(sg, NULL, 0, 0);
>> -       sg_mark_end(sg);
>> -       internal_free_pages(st);
>> +static const struct drm_i915_gem_object_ops
>> i915_gem_object_internal_ops = {
>> +       .name = "i915_gem_object_ttm",
>> +       .flags = I915_GEM_OBJECT_IS_SHRINKABLE,
>>   
>> -       return -ENOMEM;
>> -}
>> +       .get_pages = i915_internal_get_pages,
>> +       .put_pages = i915_ttm_put_pages,
>> +       .adjust_lru = i915_ttm_adjust_lru,
>> +       .delayed_free = i915_ttm_delayed_free,
>> +};
>>   
>> -static void i915_gem_object_put_pages_internal(struct
>> drm_i915_gem_object *obj,
>> -                                              struct sg_table
>> *pages)
>> +void i915_ttm_internal_bo_destroy(struct ttm_buffer_object *bo)
>>   {
>> -       i915_gem_gtt_finish_pages(obj, pages);
>> -       internal_free_pages(pages);
>> +       struct drm_i915_gem_object *obj = i915_ttm_to_gem(bo);
>>   
>> -       obj->mm.dirty = false;
>> +       mutex_destroy(&obj->ttm.get_io_page.lock);
>>   
>> -       __start_cpu_write(obj);
>> -}
>> +       if (obj->ttm.created) {
>> +               /* This releases all gem object bindings to the
>> backend. */
>> +               __i915_gem_free_object(obj);
>>   
>> -static const struct drm_i915_gem_object_ops
>> i915_gem_object_internal_ops = {
>> -       .name = "i915_gem_object_internal",
>> -       .flags = I915_GEM_OBJECT_IS_SHRINKABLE,
>> -       .get_pages = i915_gem_object_get_pages_internal,
>> -       .put_pages = i915_gem_object_put_pages_internal,
>> -};
>> +               call_rcu(&obj->rcu, __i915_gem_free_object_rcu);
>> +       } else {
>> +               __i915_gem_object_fini(obj);
>> +       }
>> +}
>>   
>> +/**
>> + * i915_gem_object_create_internal: create an object with volatile
>> pages
>> + * @i915: the i915 device
>> + * @size: the size in bytes of backing storage to allocate for the
>> object
>> + *
>> + * Creates a new object that wraps some internal memory for private
>> use.
>> + * This object is not backed by swappable storage, and as such its
>> contents
>> + * are volatile and only valid whilst pinned. If the object is
>> reaped by the
>> + * shrinker, its pages and data will be discarded. Equally, it is
>> not a full
>> + * GEM object and so not valid for access from userspace. This makes
>> it useful
>> + * for hardware interfaces like ringbuffers (which are pinned from
>> the time
>> + * the request is written to the time the hardware stops accessing
>> it), but
>> + * not for contexts (which need to be preserved when not active for
>> later
>> + * reuse). Note that it is not cleared upon allocation.
>> + */
>>   struct drm_i915_gem_object *
>> -__i915_gem_object_create_internal(struct drm_i915_private *i915,
>> -                                 const struct
>> drm_i915_gem_object_ops *ops,
>> -                                 phys_addr_t size)
>> +i915_gem_object_create_internal(struct drm_i915_private *i915,
>> +                               phys_addr_t size)
>>   {
>>          static struct lock_class_key lock_class;
>>          struct drm_i915_gem_object *obj;
>>          unsigned int cache_level;
>> +       struct ttm_operation_ctx ctx = {
>> +               .interruptible = true,
>> +               .no_wait_gpu = false,
>> +       };
>> +       int ret;
>>   
>>          GEM_BUG_ON(!size);
>>          GEM_BUG_ON(!IS_ALIGNED(size, PAGE_SIZE));
>> @@ -166,45 +129,34 @@ __i915_gem_object_create_internal(struct
>> drm_i915_private *i915,
>>                  return ERR_PTR(-ENOMEM);
>>   
>>          drm_gem_private_object_init(&i915->drm, &obj->base, size);
>> -       i915_gem_object_init(obj, ops, &lock_class, 0);
>> -       obj->mem_flags |= I915_BO_FLAG_STRUCT_PAGE;
>> +       i915_gem_object_init(obj, &i915_gem_object_internal_ops,
>> &lock_class,
>> +                            I915_BO_ALLOC_VOLATILE);
>> +
>> +       INIT_LIST_HEAD(&obj->mm.region_link);
>> +
>> +       INIT_RADIX_TREE(&obj->ttm.get_io_page.radix, GFP_KERNEL |
>> __GFP_NOWARN);
>> +       mutex_init(&obj->ttm.get_io_page.lock);
>>   
>> -       /*
>> -        * Mark the object as volatile, such that the pages are
>> marked as
>> -        * dontneed whilst they are still pinned. As soon as they are
>> unpinned
>> -        * they are allowed to be reaped by the shrinker, and the
>> caller is
>> -        * expected to repopulate - the contents of this object are
>> only valid
>> -        * whilst active and pinned.
>> -        */
>> -       i915_gem_object_set_volatile(obj);
>> +       obj->base.vma_node.driver_private = i915_gem_to_ttm(obj);
>>   
>> +       ret = ttm_bo_init_reserved(&i915->bdev, i915_gem_to_ttm(obj),
>> size,
>> +                                  ttm_bo_type_kernel,
>> i915_ttm_sys_placement(),
>> +                                  0, &ctx, NULL, NULL,
>> i915_ttm_internal_bo_destroy);
>> +       if (ret) {
>> +               ret = i915_ttm_err_to_gem(ret);
>> +               i915_gem_object_free(obj);
>> +               return ERR_PTR(ret);
>> +       }
>> +
>> +       obj->ttm.created = true;
>>          obj->read_domains = I915_GEM_DOMAIN_CPU;
>>          obj->write_domain = I915_GEM_DOMAIN_CPU;
>> -
>> +       obj->mem_flags &= ~I915_BO_FLAG_IOMEM;
>> +       obj->mem_flags |= I915_BO_FLAG_STRUCT_PAGE;
>>          cache_level = HAS_LLC(i915) ? I915_CACHE_LLC :
>> I915_CACHE_NONE;
>>          i915_gem_object_set_cache_coherency(obj, cache_level);
>> +       i915_gem_object_unlock(obj);
>>   
>>          return obj;
>>   }
>>   
>> -/**
>> - * i915_gem_object_create_internal: create an object with volatile
>> pages
>> - * @i915: the i915 device
>> - * @size: the size in bytes of backing storage to allocate for the
>> object
>> - *
>> - * Creates a new object that wraps some internal memory for private
>> use.
>> - * This object is not backed by swappable storage, and as such its
>> contents
>> - * are volatile and only valid whilst pinned. If the object is
>> reaped by the
>> - * shrinker, its pages and data will be discarded. Equally, it is
>> not a full
>> - * GEM object and so not valid for access from userspace. This makes
>> it useful
>> - * for hardware interfaces like ringbuffers (which are pinned from
>> the time
>> - * the request is written to the time the hardware stops accessing
>> it), but
>> - * not for contexts (which need to be preserved when not active for
>> later
>> - * reuse). Note that it is not cleared upon allocation.
>> - */
>> -struct drm_i915_gem_object *
>> -i915_gem_object_create_internal(struct drm_i915_private *i915,
>> -                               phys_addr_t size)
>> -{
>> -       return __i915_gem_object_create_internal(i915,
>> &i915_gem_object_internal_ops, size);
> 
> While we don't have a TTM shmem backend ready yet for internal,
> 
> Did you consider setting up just yet another region,
> INTEL_REGION_INTERNAL,
> .class = INTEL_MEMORY_SYSTEM and
> .instance = 1,
> 
> And make it create a TTM system region on integrated, and use the
> same region as INTEL_REGION_SMEM on dgfx.
> 
> I think ttm should automatically map that to I915_PL_SYSTEM and the
> backwards mapping in i915_ttm_region() should never get called since
> the object is never moved.
> 
> Then I figure it should suffice to just call
> __i915_gem_ttm_object_init() and we could drop a lot of code.
> 

I briefly considered using a new fake region, but with the current
precedent of mapping memory regions to real, segmented memory areas, I
considered it an abuse of the semantics of memory regions.

If we are happy to have fake regions, I can revert to a previous design
of using the system region for discrete and add a fake region setup
for integrated.

Would this be preferred over the current design?
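
For illustration, the integrated/discrete split would look roughly like
this (the helper name is made up for the example; only IS_DGFX(),
INTEL_REGION_SMEM and the hypothetical INTEL_REGION_INTERNAL from your
suggestion are assumed):

	/* Sketch only: pick the region backing internal objects. */
	static struct intel_memory_region *
	internal_mem_region(struct drm_i915_private *i915)
	{
		if (IS_DGFX(i915))
			return i915->mm.regions[INTEL_REGION_SMEM];

		/* integrated: separate SYSTEM-class instance ("fake" region) */
		return i915->mm.regions[INTEL_REGION_INTERNAL];
	}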

> /Thomas
> 
> 
> 
> 
>> -}
>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_internal.h
>> b/drivers/gpu/drm/i915/gem/i915_gem_internal.h
>> index 6664e06112fc..524e1042b20f 100644
>> --- a/drivers/gpu/drm/i915/gem/i915_gem_internal.h
>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_internal.h
>> @@ -15,9 +15,4 @@ struct drm_i915_private;
>>   struct drm_i915_gem_object *
>>   i915_gem_object_create_internal(struct drm_i915_private *i915,
>>                                  phys_addr_t size);
>> -struct drm_i915_gem_object *
>> -__i915_gem_object_create_internal(struct drm_i915_private *i915,
>> -                                 const struct
>> drm_i915_gem_object_ops *ops,
>> -                                 phys_addr_t size);
>> -
>>   #endif /* __I915_GEM_INTERNAL_H__ */
>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
>> b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
>> index fdb3a1c18cb6..92195ead8c11 100644
>> --- a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
>> @@ -83,7 +83,7 @@ struct ttm_placement *i915_ttm_sys_placement(void)
>>          return &i915_sys_placement;
>>   }
>>   
>> -static int i915_ttm_err_to_gem(int err)
>> +int i915_ttm_err_to_gem(int err)
>>   {
>>          /* Fastpath */
>>          if (likely(!err))
>> @@ -745,8 +745,8 @@ struct ttm_device_funcs *i915_ttm_driver(void)
>>          return &i915_ttm_bo_driver;
>>   }
>>   
>> -static int __i915_ttm_get_pages(struct drm_i915_gem_object *obj,
>> -                               struct ttm_placement *placement)
>> +int __i915_ttm_get_pages(struct drm_i915_gem_object *obj,
>> +                        struct ttm_placement *placement)
>>   {
>>          struct ttm_buffer_object *bo = i915_gem_to_ttm(obj);
>>          struct ttm_operation_ctx ctx = {
>> @@ -871,8 +871,8 @@ static int i915_ttm_migrate(struct
>> drm_i915_gem_object *obj,
>>          return __i915_ttm_migrate(obj, mr, obj->flags);
>>   }
>>   
>> -static void i915_ttm_put_pages(struct drm_i915_gem_object *obj,
>> -                              struct sg_table *st)
>> +void i915_ttm_put_pages(struct drm_i915_gem_object *obj,
>> +                       struct sg_table *st)
>>   {
>>          /*
>>           * We're currently not called from a shrinker, so put_pages()
>> @@ -995,7 +995,7 @@ void i915_ttm_adjust_lru(struct
>> drm_i915_gem_object *obj)
>>    * it's not idle, and using the TTM destroyed list handling could
>> help us
>>    * benefit from that.
>>    */
>> -static void i915_ttm_delayed_free(struct drm_i915_gem_object *obj)
>> +void i915_ttm_delayed_free(struct drm_i915_gem_object *obj)
>>   {
>>          GEM_BUG_ON(!obj->ttm.created);
>>   
>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm.h
>> b/drivers/gpu/drm/i915/gem/i915_gem_ttm.h
>> index 73e371aa3850..06701c46d8e2 100644
>> --- a/drivers/gpu/drm/i915/gem/i915_gem_ttm.h
>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm.h
>> @@ -26,6 +26,7 @@ i915_gem_to_ttm(struct drm_i915_gem_object *obj)
>>    * i915 ttm gem object destructor. Internal use only.
>>    */
>>   void i915_ttm_bo_destroy(struct ttm_buffer_object *bo);
>> +void i915_ttm_internal_bo_destroy(struct ttm_buffer_object *bo);
>>   
>>   /**
>>    * i915_ttm_to_gem - Convert a struct ttm_buffer_object to an
>> embedding
>> @@ -37,8 +38,10 @@ void i915_ttm_bo_destroy(struct ttm_buffer_object
>> *bo);
>>   static inline struct drm_i915_gem_object *
>>   i915_ttm_to_gem(struct ttm_buffer_object *bo)
>>   {
>> -       if (bo->destroy != i915_ttm_bo_destroy)
>> +       if (bo->destroy != i915_ttm_bo_destroy &&
>> +           bo->destroy != i915_ttm_internal_bo_destroy) {
>>                  return NULL;
>> +       }
>>   
>>          return container_of(bo, struct drm_i915_gem_object,
>> __do_not_access);
>>   }
>> @@ -66,6 +69,7 @@ i915_ttm_resource_get_st(struct drm_i915_gem_object
>> *obj,
>>                           struct ttm_resource *res);
>>   
>>   void i915_ttm_adjust_lru(struct drm_i915_gem_object *obj);
>> +void i915_ttm_delayed_free(struct drm_i915_gem_object *obj);
>>   
>>   int i915_ttm_purge(struct drm_i915_gem_object *obj);
>>   
>> @@ -92,4 +96,10 @@ static inline bool i915_ttm_cpu_maps_iomem(struct
>> ttm_resource *mem)
>>          /* Once / if we support GGTT, this is also false for cached
>> ttm_tts */
>>          return mem->mem_type != I915_PL_SYSTEM;
>>   }
>> +
>> +int __i915_ttm_get_pages(struct drm_i915_gem_object *obj,
>> +                        struct ttm_placement *placement);
>> +void i915_ttm_put_pages(struct drm_i915_gem_object *obj, struct
>> sg_table *st);
>> +int i915_ttm_err_to_gem(int err);
>> +
>>   #endif
> 
> 

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [Intel-gfx] [PATCH 4/4] drm/i915: internal buffers use ttm backend
@ 2022-05-23 15:52       ` Robert Beckett
  0 siblings, 0 replies; 32+ messages in thread
From: Robert Beckett @ 2022-05-23 15:52 UTC (permalink / raw)
  To: Thomas Hellström, dri-devel, intel-gfx, Jani Nikula,
	Joonas Lahtinen, Rodrigo Vivi, Tvrtko Ursulin, David Airlie,
	Daniel Vetter
  Cc: Matthew Auld, linux-kernel



On 11/05/2022 15:14, Thomas Hellström wrote:
> On Tue, 2022-05-03 at 19:13 +0000, Robert Beckett wrote:
>> refactor internal buffer backend to allocate volatile pages via
>> ttm pool allocator
>>
>> Signed-off-by: Robert Beckett <bob.beckett@collabora.com>
>> ---
>>   drivers/gpu/drm/i915/gem/i915_gem_internal.c | 264 ++++++++---------
>> --
>>   drivers/gpu/drm/i915/gem/i915_gem_internal.h |   5 -
>>   drivers/gpu/drm/i915/gem/i915_gem_ttm.c      |  12 +-
>>   drivers/gpu/drm/i915/gem/i915_gem_ttm.h      |  12 +-
>>   4 files changed, 125 insertions(+), 168 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_internal.c
>> b/drivers/gpu/drm/i915/gem/i915_gem_internal.c
>> index c698f95af15f..815ec9466cc0 100644
>> --- a/drivers/gpu/drm/i915/gem/i915_gem_internal.c
>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_internal.c
>> @@ -4,156 +4,119 @@
>>    * Copyright © 2014-2016 Intel Corporation
>>    */
>>   
>> -#include <linux/scatterlist.h>
>> -#include <linux/slab.h>
>> -#include <linux/swiotlb.h>
>> -
>> +#include <drm/ttm/ttm_bo_driver.h>
>> +#include <drm/ttm/ttm_placement.h>
>> +#include "drm/ttm/ttm_bo_api.h"
>> +#include "gem/i915_gem_internal.h"
>> +#include "gem/i915_gem_region.h"
>> +#include "gem/i915_gem_ttm.h"
>>   #include "i915_drv.h"
>> -#include "i915_gem.h"
>> -#include "i915_gem_internal.h"
>> -#include "i915_gem_object.h"
>> -#include "i915_scatterlist.h"
>> -#include "i915_utils.h"
>> -
>> -#define QUIET (__GFP_NORETRY | __GFP_NOWARN)
>> -#define MAYFAIL (__GFP_RETRY_MAYFAIL | __GFP_NOWARN)
>> -
>> -static void internal_free_pages(struct sg_table *st)
>> -{
>> -       struct scatterlist *sg;
>> -
>> -       for (sg = st->sgl; sg; sg = __sg_next(sg)) {
>> -               if (sg_page(sg))
>> -                       __free_pages(sg_page(sg), get_order(sg-
>>> length));
>> -       }
>> -
>> -       sg_free_table(st);
>> -       kfree(st);
>> -}
>>   
>> -static int i915_gem_object_get_pages_internal(struct
>> drm_i915_gem_object *obj)
>> +static int i915_internal_get_pages(struct drm_i915_gem_object *obj)
>>   {
>> -       struct drm_i915_private *i915 = to_i915(obj->base.dev);
>> -       struct sg_table *st;
>> -       struct scatterlist *sg;
>> -       unsigned int sg_page_sizes;
>> -       unsigned int npages;
>> -       int max_order;
>> -       gfp_t gfp;
>> -
>> -       max_order = MAX_ORDER;
>> -#ifdef CONFIG_SWIOTLB
>> -       if (is_swiotlb_active(obj->base.dev->dev)) {
>> -               unsigned int max_segment;
>> -
>> -               max_segment = swiotlb_max_segment();
>> -               if (max_segment) {
>> -                       max_segment = max_t(unsigned int,
>> max_segment,
>> -                                           PAGE_SIZE) >> PAGE_SHIFT;
>> -                       max_order = min(max_order,
>> ilog2(max_segment));
>> -               }
>> +       struct ttm_buffer_object *bo = i915_gem_to_ttm(obj);
>> +       struct ttm_operation_ctx ctx = {
>> +               .interruptible = true,
>> +               .no_wait_gpu = false,
>> +       };
>> +       struct ttm_place place = {
>> +               .fpfn = 0,
>> +               .lpfn = 0,
>> +               .mem_type = I915_PL_SYSTEM,
>> +               .flags = 0,
>> +       };
>> +       struct ttm_placement placement = {
>> +               .num_placement = 1,
>> +               .placement = &place,
>> +               .num_busy_placement = 0,
>> +               .busy_placement = NULL,
>> +       };
>> +       int ret;
>> +
>> +       ret = ttm_bo_validate(bo, &placement, &ctx);
>> +       if (ret) {
>> +               ret = i915_ttm_err_to_gem(ret);
>> +               return ret;
>>          }
>> -#endif
>>   
>> -       gfp = GFP_KERNEL | __GFP_HIGHMEM | __GFP_RECLAIMABLE;
>> -       if (IS_I965GM(i915) || IS_I965G(i915)) {
>> -               /* 965gm cannot relocate objects above 4GiB. */
>> -               gfp &= ~__GFP_HIGHMEM;
>> -               gfp |= __GFP_DMA32;
> 
> 
> It looks like we're losing this restriction?
> 
> There is a flag to ttm_device_init() to make TTM only do __GFP_DMA32
> allocations.

agreed. will fix for v2

> 
>> +       if (bo->ttm && !ttm_tt_is_populated(bo->ttm)) {
>> +               ret = ttm_tt_populate(bo->bdev, bo->ttm, &ctx);
>> +               if (ret)
>> +                       return ret;
>>          }
>>   
>> -create_st:
>> -       st = kmalloc(sizeof(*st), GFP_KERNEL);
>> -       if (!st)
>> -               return -ENOMEM;
>> +       if (!i915_gem_object_has_pages(obj)) {
>> +               struct i915_refct_sgt *rsgt =
>> +                       i915_ttm_resource_get_st(obj, bo->resource);
>>   
>> -       npages = obj->base.size / PAGE_SIZE;
>> -       if (sg_alloc_table(st, npages, GFP_KERNEL)) {
>> -               kfree(st);
>> -               return -ENOMEM;
>> -       }
>> +               if (IS_ERR(rsgt))
>> +                       return PTR_ERR(rsgt);
>>   
>> -       sg = st->sgl;
>> -       st->nents = 0;
>> -       sg_page_sizes = 0;
>> -
>> -       do {
>> -               int order = min(fls(npages) - 1, max_order);
>> -               struct page *page;
>> -
>> -               do {
>> -                       page = alloc_pages(gfp | (order ? QUIET :
>> MAYFAIL),
>> -                                          order);
>> -                       if (page)
>> -                               break;
>> -                       if (!order--)
>> -                               goto err;
>> -
>> -                       /* Limit subsequent allocations as well */
>> -                       max_order = order;
>> -               } while (1);
>> -
>> -               sg_set_page(sg, page, PAGE_SIZE << order, 0);
>> -               sg_page_sizes |= PAGE_SIZE << order;
>> -               st->nents++;
>> -
>> -               npages -= 1 << order;
>> -               if (!npages) {
>> -                       sg_mark_end(sg);
>> -                       break;
>> -               }
>> -
>> -               sg = __sg_next(sg);
>> -       } while (1);
>> -
>> -       if (i915_gem_gtt_prepare_pages(obj, st)) {
>> -               /* Failed to dma-map try again with single page sg
>> segments */
>> -               if (get_order(st->sgl->length)) {
>> -                       internal_free_pages(st);
>> -                       max_order = 0;
>> -                       goto create_st;
>> -               }
>> -               goto err;
>> +               GEM_BUG_ON(obj->mm.rsgt);
>> +               obj->mm.rsgt = rsgt;
>> +               __i915_gem_object_set_pages(obj, &rsgt->table,
>> +                                           i915_sg_dma_sizes(rsgt-
>>> table.sgl));
>>          }
>>   
>> -       __i915_gem_object_set_pages(obj, st, sg_page_sizes);
>> +       GEM_BUG_ON(bo->ttm && ((obj->base.size >> PAGE_SHIFT) < bo-
>>> ttm->num_pages));
>> +       i915_ttm_adjust_lru(obj);
>>   
>>          return 0;
>> +}
>>   
>> -err:
>> -       sg_set_page(sg, NULL, 0, 0);
>> -       sg_mark_end(sg);
>> -       internal_free_pages(st);
>> +static const struct drm_i915_gem_object_ops
>> i915_gem_object_internal_ops = {
>> +       .name = "i915_gem_object_ttm",
>> +       .flags = I915_GEM_OBJECT_IS_SHRINKABLE,
>>   
>> -       return -ENOMEM;
>> -}
>> +       .get_pages = i915_internal_get_pages,
>> +       .put_pages = i915_ttm_put_pages,
>> +       .adjust_lru = i915_ttm_adjust_lru,
>> +       .delayed_free = i915_ttm_delayed_free,
>> +};
>>   
>> -static void i915_gem_object_put_pages_internal(struct
>> drm_i915_gem_object *obj,
>> -                                              struct sg_table
>> *pages)
>> +void i915_ttm_internal_bo_destroy(struct ttm_buffer_object *bo)
>>   {
>> -       i915_gem_gtt_finish_pages(obj, pages);
>> -       internal_free_pages(pages);
>> +       struct drm_i915_gem_object *obj = i915_ttm_to_gem(bo);
>>   
>> -       obj->mm.dirty = false;
>> +       mutex_destroy(&obj->ttm.get_io_page.lock);
>>   
>> -       __start_cpu_write(obj);
>> -}
>> +       if (obj->ttm.created) {
>> +               /* This releases all gem object bindings to the
>> backend. */
>> +               __i915_gem_free_object(obj);
>>   
>> -static const struct drm_i915_gem_object_ops
>> i915_gem_object_internal_ops = {
>> -       .name = "i915_gem_object_internal",
>> -       .flags = I915_GEM_OBJECT_IS_SHRINKABLE,
>> -       .get_pages = i915_gem_object_get_pages_internal,
>> -       .put_pages = i915_gem_object_put_pages_internal,
>> -};
>> +               call_rcu(&obj->rcu, __i915_gem_free_object_rcu);
>> +       } else {
>> +               __i915_gem_object_fini(obj);
>> +       }
>> +}
>>   
>> +/**
>> + * i915_gem_object_create_internal: create an object with volatile
>> pages
>> + * @i915: the i915 device
>> + * @size: the size in bytes of backing storage to allocate for the
>> object
>> + *
>> + * Creates a new object that wraps some internal memory for private
>> use.
>> + * This object is not backed by swappable storage, and as such its
>> contents
>> + * are volatile and only valid whilst pinned. If the object is
>> reaped by the
>> + * shrinker, its pages and data will be discarded. Equally, it is
>> not a full
>> + * GEM object and so not valid for access from userspace. This makes
>> it useful
>> + * for hardware interfaces like ringbuffers (which are pinned from
>> the time
>> + * the request is written to the time the hardware stops accessing
>> it), but
>> + * not for contexts (which need to be preserved when not active for
>> later
>> + * reuse). Note that it is not cleared upon allocation.
>> + */
>>   struct drm_i915_gem_object *
>> -__i915_gem_object_create_internal(struct drm_i915_private *i915,
>> -                                 const struct
>> drm_i915_gem_object_ops *ops,
>> -                                 phys_addr_t size)
>> +i915_gem_object_create_internal(struct drm_i915_private *i915,
>> +                               phys_addr_t size)
>>   {
>>          static struct lock_class_key lock_class;
>>          struct drm_i915_gem_object *obj;
>>          unsigned int cache_level;
>> +       struct ttm_operation_ctx ctx = {
>> +               .interruptible = true,
>> +               .no_wait_gpu = false,
>> +       };
>> +       int ret;
>>   
>>          GEM_BUG_ON(!size);
>>          GEM_BUG_ON(!IS_ALIGNED(size, PAGE_SIZE));
>> @@ -166,45 +129,34 @@ __i915_gem_object_create_internal(struct
>> drm_i915_private *i915,
>>                  return ERR_PTR(-ENOMEM);
>>   
>>          drm_gem_private_object_init(&i915->drm, &obj->base, size);
>> -       i915_gem_object_init(obj, ops, &lock_class, 0);
>> -       obj->mem_flags |= I915_BO_FLAG_STRUCT_PAGE;
>> +       i915_gem_object_init(obj, &i915_gem_object_internal_ops,
>> &lock_class,
>> +                            I915_BO_ALLOC_VOLATILE);
>> +
>> +       INIT_LIST_HEAD(&obj->mm.region_link);
>> +
>> +       INIT_RADIX_TREE(&obj->ttm.get_io_page.radix, GFP_KERNEL |
>> __GFP_NOWARN);
>> +       mutex_init(&obj->ttm.get_io_page.lock);
>>   
>> -       /*
>> -        * Mark the object as volatile, such that the pages are
>> marked as
>> -        * dontneed whilst they are still pinned. As soon as they are
>> unpinned
>> -        * they are allowed to be reaped by the shrinker, and the
>> caller is
>> -        * expected to repopulate - the contents of this object are
>> only valid
>> -        * whilst active and pinned.
>> -        */
>> -       i915_gem_object_set_volatile(obj);
>> +       obj->base.vma_node.driver_private = i915_gem_to_ttm(obj);
>>   
>> +       ret = ttm_bo_init_reserved(&i915->bdev, i915_gem_to_ttm(obj),
>> size,
>> +                                  ttm_bo_type_kernel,
>> i915_ttm_sys_placement(),
>> +                                  0, &ctx, NULL, NULL,
>> i915_ttm_internal_bo_destroy);
>> +       if (ret) {
>> +               ret = i915_ttm_err_to_gem(ret);
>> +               i915_gem_object_free(obj);
>> +               return ERR_PTR(ret);
>> +       }
>> +
>> +       obj->ttm.created = true;
>>          obj->read_domains = I915_GEM_DOMAIN_CPU;
>>          obj->write_domain = I915_GEM_DOMAIN_CPU;
>> -
>> +       obj->mem_flags &= ~I915_BO_FLAG_IOMEM;
>> +       obj->mem_flags |= I915_BO_FLAG_STRUCT_PAGE;
>>          cache_level = HAS_LLC(i915) ? I915_CACHE_LLC :
>> I915_CACHE_NONE;
>>          i915_gem_object_set_cache_coherency(obj, cache_level);
>> +       i915_gem_object_unlock(obj);
>>   
>>          return obj;
>>   }
>>   
>> -/**
>> - * i915_gem_object_create_internal: create an object with volatile
>> pages
>> - * @i915: the i915 device
>> - * @size: the size in bytes of backing storage to allocate for the
>> object
>> - *
>> - * Creates a new object that wraps some internal memory for private
>> use.
>> - * This object is not backed by swappable storage, and as such its
>> contents
>> - * are volatile and only valid whilst pinned. If the object is
>> reaped by the
>> - * shrinker, its pages and data will be discarded. Equally, it is
>> not a full
>> - * GEM object and so not valid for access from userspace. This makes
>> it useful
>> - * for hardware interfaces like ringbuffers (which are pinned from
>> the time
>> - * the request is written to the time the hardware stops accessing
>> it), but
>> - * not for contexts (which need to be preserved when not active for
>> later
>> - * reuse). Note that it is not cleared upon allocation.
>> - */
>> -struct drm_i915_gem_object *
>> -i915_gem_object_create_internal(struct drm_i915_private *i915,
>> -                               phys_addr_t size)
>> -{
>> -       return __i915_gem_object_create_internal(i915,
>> &i915_gem_object_internal_ops, size);
> 
> While we don't have a TTM shmem backend ready yet for internal,
> 
> Did you consider setting up just yet another region,
> INTEL_REGION_INTERNAL,
> .class = INTEL_MEMORY_SYSTEM and
> .instance = 1,
> 
> And make it create a TTM system region on integrated, and use
> same region as INTEL_REGION_SMEM on dgfx.
> 
> I think ttm should automatically map that to I915_PL_SYSTEM and the
> backwards mapping in i915_ttm_region() should never get called since
> the object is never moved.
> 
> Then I figure it should suffice to just call
> __i915_gem_ttm_object_init() and we could drop a lot of code.
> 

I briefly considered using a new fake region, but given the current 
precedent of mapping memory regions to real, segmented memory areas, I 
considered it an abuse of the semantics of memory regions.

If we are happy to have fake regions, I can revert to a previous 
design that uses the system region for discrete and adds a fake region 
setup for integrated.

Would this be preferred over the current design?
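
For concreteness, the fake-region variant would look roughly like the 
sketch below. INTEL_REGION_INTERNAL is the name you suggested above; 
the region-table layout and the exact init path are assumptions on my 
part, not code from this series:

/*
 * Rough sketch: a second system-class region for volatile internal
 * buffers. On integrated this would become its own TTM system region;
 * on dgfx we would simply keep using INTEL_REGION_SMEM and not
 * register this one. INTEL_REGION_INTERNAL would also need adding to
 * enum intel_region_id.
 */
static const struct {
	u16 class;
	u16 instance;
} intel_region_map[] = {
	[INTEL_REGION_SMEM] = {
		.class = INTEL_MEMORY_SYSTEM,
		.instance = 0,
	},
	[INTEL_REGION_INTERNAL] = {	/* hypothetical entry */
		.class = INTEL_MEMORY_SYSTEM,
		.instance = 1,
	},
	/* ... existing LMEM / stolen entries unchanged ... */
};

Internal object creation would then mostly reduce to calling 
__i915_gem_ttm_object_init() against whichever region the platform 
maps there, which is where the code saving would come from.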

> /Thomas
> 
> 
> 
> 
>> -}
>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_internal.h
>> b/drivers/gpu/drm/i915/gem/i915_gem_internal.h
>> index 6664e06112fc..524e1042b20f 100644
>> --- a/drivers/gpu/drm/i915/gem/i915_gem_internal.h
>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_internal.h
>> @@ -15,9 +15,4 @@ struct drm_i915_private;
>>   struct drm_i915_gem_object *
>>   i915_gem_object_create_internal(struct drm_i915_private *i915,
>>                                  phys_addr_t size);
>> -struct drm_i915_gem_object *
>> -__i915_gem_object_create_internal(struct drm_i915_private *i915,
>> -                                 const struct
>> drm_i915_gem_object_ops *ops,
>> -                                 phys_addr_t size);
>> -
>>   #endif /* __I915_GEM_INTERNAL_H__ */
>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
>> b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
>> index fdb3a1c18cb6..92195ead8c11 100644
>> --- a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
>> @@ -83,7 +83,7 @@ struct ttm_placement *i915_ttm_sys_placement(void)
>>          return &i915_sys_placement;
>>   }
>>   
>> -static int i915_ttm_err_to_gem(int err)
>> +int i915_ttm_err_to_gem(int err)
>>   {
>>          /* Fastpath */
>>          if (likely(!err))
>> @@ -745,8 +745,8 @@ struct ttm_device_funcs *i915_ttm_driver(void)
>>          return &i915_ttm_bo_driver;
>>   }
>>   
>> -static int __i915_ttm_get_pages(struct drm_i915_gem_object *obj,
>> -                               struct ttm_placement *placement)
>> +int __i915_ttm_get_pages(struct drm_i915_gem_object *obj,
>> +                        struct ttm_placement *placement)
>>   {
>>          struct ttm_buffer_object *bo = i915_gem_to_ttm(obj);
>>          struct ttm_operation_ctx ctx = {
>> @@ -871,8 +871,8 @@ static int i915_ttm_migrate(struct
>> drm_i915_gem_object *obj,
>>          return __i915_ttm_migrate(obj, mr, obj->flags);
>>   }
>>   
>> -static void i915_ttm_put_pages(struct drm_i915_gem_object *obj,
>> -                              struct sg_table *st)
>> +void i915_ttm_put_pages(struct drm_i915_gem_object *obj,
>> +                       struct sg_table *st)
>>   {
>>          /*
>>           * We're currently not called from a shrinker, so put_pages()
>> @@ -995,7 +995,7 @@ void i915_ttm_adjust_lru(struct
>> drm_i915_gem_object *obj)
>>    * it's not idle, and using the TTM destroyed list handling could
>> help us
>>    * benefit from that.
>>    */
>> -static void i915_ttm_delayed_free(struct drm_i915_gem_object *obj)
>> +void i915_ttm_delayed_free(struct drm_i915_gem_object *obj)
>>   {
>>          GEM_BUG_ON(!obj->ttm.created);
>>   
>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm.h
>> b/drivers/gpu/drm/i915/gem/i915_gem_ttm.h
>> index 73e371aa3850..06701c46d8e2 100644
>> --- a/drivers/gpu/drm/i915/gem/i915_gem_ttm.h
>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm.h
>> @@ -26,6 +26,7 @@ i915_gem_to_ttm(struct drm_i915_gem_object *obj)
>>    * i915 ttm gem object destructor. Internal use only.
>>    */
>>   void i915_ttm_bo_destroy(struct ttm_buffer_object *bo);
>> +void i915_ttm_internal_bo_destroy(struct ttm_buffer_object *bo);
>>   
>>   /**
>>    * i915_ttm_to_gem - Convert a struct ttm_buffer_object to an
>> embedding
>> @@ -37,8 +38,10 @@ void i915_ttm_bo_destroy(struct ttm_buffer_object
>> *bo);
>>   static inline struct drm_i915_gem_object *
>>   i915_ttm_to_gem(struct ttm_buffer_object *bo)
>>   {
>> -       if (bo->destroy != i915_ttm_bo_destroy)
>> +       if (bo->destroy != i915_ttm_bo_destroy &&
>> +           bo->destroy != i915_ttm_internal_bo_destroy) {
>>                  return NULL;
>> +       }
>>   
>>          return container_of(bo, struct drm_i915_gem_object,
>> __do_not_access);
>>   }
>> @@ -66,6 +69,7 @@ i915_ttm_resource_get_st(struct drm_i915_gem_object
>> *obj,
>>                           struct ttm_resource *res);
>>   
>>   void i915_ttm_adjust_lru(struct drm_i915_gem_object *obj);
>> +void i915_ttm_delayed_free(struct drm_i915_gem_object *obj);
>>   
>>   int i915_ttm_purge(struct drm_i915_gem_object *obj);
>>   
>> @@ -92,4 +96,10 @@ static inline bool i915_ttm_cpu_maps_iomem(struct
>> ttm_resource *mem)
>>          /* Once / if we support GGTT, this is also false for cached
>> ttm_tts */
>>          return mem->mem_type != I915_PL_SYSTEM;
>>   }
>> +
>> +int __i915_ttm_get_pages(struct drm_i915_gem_object *obj,
>> +                        struct ttm_placement *placement);
>> +void i915_ttm_put_pages(struct drm_i915_gem_object *obj, struct
>> sg_table *st);
>> +int i915_ttm_err_to_gem(int err);
>> +
>>   #endif
> 
> 

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH 1/4] drm/i915: add gen6 ppgtt dummy creation function
  2022-05-11 10:13     ` [Intel-gfx] " Thomas Hellström
@ 2022-05-23 15:52       ` Robert Beckett
  -1 siblings, 0 replies; 32+ messages in thread
From: Robert Beckett @ 2022-05-23 15:52 UTC (permalink / raw)
  To: Thomas Hellström, dri-devel, intel-gfx, Jani Nikula,
	Joonas Lahtinen, Rodrigo Vivi, Tvrtko Ursulin, David Airlie,
	Daniel Vetter
  Cc: Matthew Auld, linux-kernel



On 11/05/2022 11:13, Thomas Hellström wrote:
> Hi,
> 
> On Tue, 2022-05-03 at 19:13 +0000, Robert Beckett wrote:
>> Internal gem objects will soon just be volatile system memory region
>> objects.
>> To enable this, create a separate dummy object creation function
>> for gen6 ppgtt
> 
> 
> It's not clear from the commit message why we need a special case for
> this. Could you describe more in detail?

It was always a special case: it used the internal backend but 
provided its own ops, so it gained no real benefit from using the 
internal backend. See b0b0f2d225da6fe58417fae37e3f797e2db27b62.

I'll add some further explanation in the commit message for v2.

> 
> Thanks,
> Thomas
> 
> 
>>
>> Signed-off-by: Robert Beckett <bob.beckett@collabora.com>
>> ---
>>   drivers/gpu/drm/i915/gt/gen6_ppgtt.c | 43
>> ++++++++++++++++++++++++++--
>>   1 file changed, 40 insertions(+), 3 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/i915/gt/gen6_ppgtt.c
>> b/drivers/gpu/drm/i915/gt/gen6_ppgtt.c
>> index 1bb766c79dcb..f3b660cfeb7f 100644
>> --- a/drivers/gpu/drm/i915/gt/gen6_ppgtt.c
>> +++ b/drivers/gpu/drm/i915/gt/gen6_ppgtt.c
>> @@ -372,6 +372,45 @@ static const struct drm_i915_gem_object_ops
>> pd_dummy_obj_ops = {
>>          .put_pages = pd_dummy_obj_put_pages,
>>   };
>>   
>> +static struct drm_i915_gem_object *
>> +i915_gem_object_create_dummy(struct drm_i915_private *i915,
>> phys_addr_t size)
>> +{
>> +       static struct lock_class_key lock_class;
>> +       struct drm_i915_gem_object *obj;
>> +       unsigned int cache_level;
>> +
>> +       GEM_BUG_ON(!size);
>> +       GEM_BUG_ON(!IS_ALIGNED(size, PAGE_SIZE));
>> +
>> +       if (overflows_type(size, obj->base.size))
>> +               return ERR_PTR(-E2BIG);
>> +
>> +       obj = i915_gem_object_alloc();
>> +       if (!obj)
>> +               return ERR_PTR(-ENOMEM);
>> +
>> +       drm_gem_private_object_init(&i915->drm, &obj->base, size);
>> +       i915_gem_object_init(obj, &pd_dummy_obj_ops, &lock_class, 0);
>> +       obj->mem_flags |= I915_BO_FLAG_STRUCT_PAGE;
>> +
>> +       /*
>> +        * Mark the object as volatile, such that the pages are
>> marked as
>> +        * dontneed whilst they are still pinned. As soon as they are
>> unpinned
>> +        * they are allowed to be reaped by the shrinker, and the
>> caller is
>> +        * expected to repopulate - the contents of this object are
>> only valid
>> +        * whilst active and pinned.
>> +        */
>> +       i915_gem_object_set_volatile(obj);
>> +
>> +       obj->read_domains = I915_GEM_DOMAIN_CPU;
>> +       obj->write_domain = I915_GEM_DOMAIN_CPU;
>> +
>> +       cache_level = HAS_LLC(i915) ? I915_CACHE_LLC :
>> I915_CACHE_NONE;
>> +       i915_gem_object_set_cache_coherency(obj, cache_level);
>> +
>> +       return obj;
>> +}
>> +
>>   static struct i915_page_directory *
>>   gen6_alloc_top_pd(struct gen6_ppgtt *ppgtt)
>>   {
>> @@ -383,9 +422,7 @@ gen6_alloc_top_pd(struct gen6_ppgtt *ppgtt)
>>          if (unlikely(!pd))
>>                  return ERR_PTR(-ENOMEM);
>>   
>> -       pd->pt.base = __i915_gem_object_create_internal(ppgtt-
>>> base.vm.gt->i915,
>> -
>>                                                         &pd_dummy_obj_o
>> ps,
>> -                                                       I915_PDES *
>> SZ_4K);
>> +       pd->pt.base = i915_gem_object_create_dummy(ppgtt->base.vm.gt-
>>> i915, I915_PDES * SZ_4K);
>>          if (IS_ERR(pd->pt.base)) {
>>                  err = PTR_ERR(pd->pt.base);
>>                  pd->pt.base = NULL;
> 
> 

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH 3/4] drm/i915: allow volatile buffers to use ttm pool allocator
  2022-05-11 12:42     ` [Intel-gfx] " Thomas Hellström
@ 2022-05-23 15:53       ` Robert Beckett
  -1 siblings, 0 replies; 32+ messages in thread
From: Robert Beckett @ 2022-05-23 15:53 UTC (permalink / raw)
  To: Thomas Hellström, dri-devel, intel-gfx, Jani Nikula,
	Joonas Lahtinen, Rodrigo Vivi, Tvrtko Ursulin, David Airlie,
	Daniel Vetter
  Cc: Matthew Auld, linux-kernel



On 11/05/2022 13:42, Thomas Hellström wrote:
> Hi, Bob,
> 
> On Tue, 2022-05-03 at 19:13 +0000, Robert Beckett wrote:
>> internal buffers should be shmem backed.
>> if a volatile buffer is requested, allow ttm to use the pool
>> allocator
>> to provide volatile pages as backing
>>
>> Signed-off-by: Robert Beckett <bob.beckett@collabora.com>
>> ---
>>   drivers/gpu/drm/i915/gem/i915_gem_ttm.c | 3 ++-
>>   1 file changed, 2 insertions(+), 1 deletion(-)
>>
>> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
>> b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
>> index 4c25d9b2f138..fdb3a1c18cb6 100644
>> --- a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
>> +++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
>> @@ -309,7 +309,8 @@ static struct ttm_tt *i915_ttm_tt_create(struct
>> ttm_buffer_object *bo,
>>                  page_flags |= TTM_TT_FLAG_ZERO_ALLOC;
>>   
>>          caching = i915_ttm_select_tt_caching(obj);
>> -       if (i915_gem_object_is_shrinkable(obj) && caching ==
>> ttm_cached) {
>> +       if (i915_gem_object_is_shrinkable(obj) && caching ==
>> ttm_cached &&
>> +           !i915_gem_object_is_volatile(obj)) {
>>                  page_flags |= TTM_TT_FLAG_EXTERNAL |
>>                                TTM_TT_FLAG_EXTERNAL_MAPPABLE;
>>                  i915_tt->is_shmem = true;
> 
> While this is ok, I think it also needs adjustment in the i915_ttm
> shrink callback. If someone creates a volatile smem object which then
> hits the shrinker, I think we might hit asserts that it's a is_shem
> ttm?
> 
> In this case, the shrink callback should just i915_ttm_purge().

Agreed, nice catch. I'll fix it for v2.

It also looks like we could do with some extra shrinker testing; 
nothing caught this during CI testing.
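
For concreteness, the adjustment would be along these lines (the 
shrink callback's exact name and signature here are assumptions; 
i915_ttm_purge() and i915_gem_object_is_volatile() are the existing 
helpers):

/*
 * Sketch only: volatile objects take the TTM pool allocator path
 * rather than shmem, so there is no is_shmem tt and nothing to write
 * back; just discard the pages when the shrinker asks.
 */
static int i915_ttm_shrink(struct drm_i915_gem_object *obj,
			   unsigned int flags)
{
	if (i915_gem_object_is_volatile(obj))
		return i915_ttm_purge(obj);

	/* ... existing shmem-backed writeback/release path ... */
	return 0;
}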

> 
> /Thomas
> 
> 

^ permalink raw reply	[flat|nested] 32+ messages in thread

end of thread, other threads:[~2022-05-23 15:54 UTC | newest]

Thread overview: 32+ messages
2022-05-03 19:13 [PATCH 0/4] ttm for internal Robert Beckett
2022-05-03 19:13 ` [Intel-gfx] " Robert Beckett
2022-05-03 19:13 ` [PATCH 1/4] drm/i915: add gen6 ppgtt dummy creation function Robert Beckett
2022-05-03 19:13   ` [Intel-gfx] " Robert Beckett
2022-05-03 19:13   ` Robert Beckett
2022-05-11 10:13   ` Thomas Hellström
2022-05-11 10:13     ` [Intel-gfx] " Thomas Hellström
2022-05-23 15:52     ` Robert Beckett
2022-05-23 15:52       ` [Intel-gfx] " Robert Beckett
2022-05-03 19:13 ` [PATCH 2/4] drm/i915: setup ggtt scratch page after memory regions Robert Beckett
2022-05-03 19:13   ` [Intel-gfx] " Robert Beckett
2022-05-03 19:13   ` Robert Beckett
2022-05-11 11:24   ` Thomas Hellström
2022-05-11 11:24     ` [Intel-gfx] " Thomas Hellström
2022-05-03 19:13 ` [PATCH 3/4] drm/i915: allow volatile buffers to use ttm pool allocator Robert Beckett
2022-05-03 19:13   ` [Intel-gfx] " Robert Beckett
2022-05-03 19:13   ` Robert Beckett
2022-05-11 12:42   ` Thomas Hellström
2022-05-11 12:42     ` [Intel-gfx] " Thomas Hellström
2022-05-23 15:53     ` Robert Beckett
2022-05-23 15:53       ` [Intel-gfx] " Robert Beckett
2022-05-03 19:13 ` [PATCH 4/4] drm/i915: internal buffers use ttm backend Robert Beckett
2022-05-03 19:13   ` [Intel-gfx] " Robert Beckett
2022-05-03 19:13   ` Robert Beckett
2022-05-11 14:14   ` Thomas Hellström
2022-05-11 14:14     ` [Intel-gfx] " Thomas Hellström
2022-05-23 15:52     ` Robert Beckett
2022-05-23 15:52       ` [Intel-gfx] " Robert Beckett
2022-05-03 19:52 ` [Intel-gfx] ✗ Fi.CI.BAT: failure for ttm for internal Patchwork
2022-05-06  3:44 ` [Intel-gfx] ✗ Fi.CI.BAT: failure for ttm for internal (rev2) Patchwork
2022-05-10 21:26 ` [Intel-gfx] ✓ Fi.CI.BAT: success for ttm for internal (rev3) Patchwork
2022-05-11  2:28 ` [Intel-gfx] ✓ Fi.CI.IGT: " Patchwork
